Dobb·E

Deploying a Policy on the Robot


Last updated 1 year ago

Getting Started

  1. Follow “Getting Started” in the robot-server documentation.

    • Skip the dataset-related parts if you're not running VINN.

  2. Install ROS1 within your Conda environment:

    # Only if you are not using mamba already
    conda install mamba -c conda-forge
    # this adds the conda-forge channel to the newly created environment configuration
    conda config --env --add channels conda-forge
    # and the robostack channel
    conda config --env --add channels robostack-staging
    # remove the defaults channel just in case; this may error if the channel is not in the list, which is fine
    conda config --env --remove channels defaults
    
    mamba install ros-noetic-desktop

    Reference: https://robostack.github.io/GettingStarted.html

Deploying

  1. Perform joint calibration by running stretch_robot_home.py.

  2. .

  3. Follow the documentation in Running the Robot Controller for running roscore and start_server.py on the robot.

  4. Ensure that both of the previous commands are running in the background, each in its own separate window.

We like using tmux for running multiple commands and keeping track of them at the same time.
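By default, roscore's ROS master listens on TCP port 11311. As a quick sanity check that the robot's ROS master is reachable from your workstation, a minimal sketch (the hostname in the example is a placeholder):

```python
import socket

def ros_master_reachable(host: str, port: int = 11311, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds
    (e.g., the ROS master started by roscore)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder hostname):
# ros_master_reachable("stretch-robot.local")
```

This only confirms the port is open; it does not validate the ROS master itself.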

Behavior Cloning

  1. Transfer over the weights of a trained BC policy to the robot.

    1. Take the last checkpoint (saved after the 50th epoch):

      rsync -av --include='*/' --include='checkpoint.pt' --exclude='*' checkpoints/2023-11-22 hello-robot@{ip-address}:/home/hello-robot/code/imitation-in-homes/checkpoints
  2. In configs/run.yaml set model_weight_pth to the path containing the trained BC policy weights.

  3. Run in terminal:

    python run.py --config-name=run
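For reference, the relevant part of configs/run.yaml might look like the sketch below; only model_weight_pth is confirmed by the steps above, and the checkpoint path shown is an example, so check the config shipped with the repository:

```yaml
# configs/run.yaml (sketch; path is illustrative)
model_weight_pth: /home/hello-robot/code/imitation-in-homes/checkpoints/2023-11-22/checkpoint.pt
```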

VINN

  1. Transfer over the encoder weights and finetune_task_data onto the robot.

    1. We recommend doing so using rsync.

    2. To speed up the transfer of data and save space on the robot, only transfer the necessary files:

      rsync -avm --include='*/' --include='*.json' --include='*.bin' --include='*.txt' --include='*.mp4' --exclude='*' /home/shared/data/finetune_task_data hello-robot@{ip-address}:/home/hello-robot/data
  2. In configs/run_vinn.yaml set checkpoint_path to the path of the encoder weights.

  3. In configs/dataset/vinn_deploy_dataset.yaml, set include_tasks and include_envs to a specific task (e.g., Drawer_Closing) and environment (e.g., Env2) from the finetune_task_data folder.

  4. Run in terminal:

    python run.py --config-name=run_vinn
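The rsync invocations above rely on ordered filter rules: the first rule that matches a file decides its fate, so --include='*/' keeps directory recursion alive, the file-pattern includes whitelist the wanted extensions, and the trailing --exclude='*' drops everything else. A simplified sketch of that first-match semantics (matching basenames only, unlike rsync's full-path matching):

```python
from fnmatch import fnmatch

# Ordered rules mirroring:
#   --include='*/' --include='*.json' --include='*.bin'
#   --include='*.txt' --include='*.mp4' --exclude='*'
RULES = [
    ("include", "*/"),      # always descend into directories
    ("include", "*.json"),
    ("include", "*.bin"),
    ("include", "*.txt"),
    ("include", "*.mp4"),
    ("exclude", "*"),       # everything left unmatched is skipped
]

def transferred(name: str, is_dir: bool = False) -> bool:
    """First matching rule wins, as in rsync's filter chain (simplified)."""
    for action, pattern in RULES:
        if pattern == "*/":
            if is_dir:
                return action == "include"
            continue
        if fnmatch(name, pattern):
            return action == "include"
    return True  # rsync's default when no rule matches
```

The same first-match logic explains the behavior-cloning transfer earlier, where only checkpoint.pt is whitelisted.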

Command Line Instructions

  • h

    • Bring the robot to its "home" position

  • r

    • Reset the "home" position height

      • Height values range from roughly 0.2 to 1.1

      • Wait a second or two, then home the robot ("h") to move to this height

  • s

    • Enter a value from 1 to 10

    • Then home the robot with "h" to move to the corresponding fixed starting position

  • ↵ (Enter)

    • Take one "step" of the policy

    • Alternative: enter a number followed by ↵ to "step" n times (e.g., 5 + ↵ for 5 "steps")
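The keyboard interface above can be summarized as a small dispatch table. A hedged sketch of the input parsing (the command names returned here are hypothetical, not the repository's actual API):

```python
def parse_command(line: str):
    """Map one line of keyboard input to a (command, argument) pair.

    Mirrors the interface above: 'h' homes the robot, 'r' resets the
    home-position height, 's' selects a fixed starting position, and
    Enter (optionally preceded by a count) steps the policy.
    """
    line = line.strip()
    if line == "h":
        return ("home", None)
    if line == "r":
        return ("reset_height", None)
    if line == "s":
        return ("start_position", None)
    if line == "":
        return ("step", 1)          # bare Enter: one policy step
    if line.isdigit():
        return ("step", int(line))  # e.g., '5' then Enter: five steps
    return ("unknown", line)
```

A real controller loop would read lines from stdin and dispatch each pair to the robot server.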
