# Deploying a Policy on the Robot

## Getting Started

1. Follow “Getting Started” in the `robot-server` documentation.
   * Skip the dataset related parts if you're not running VINN
2. Install ROS1 within your Conda environment:

   ```bash
   # Only if you are not using mamba already
   conda install mamba -c conda-forge
   # add the conda-forge channel to the newly created environment's configuration
   conda config --env --add channels conda-forge
   # and the robostack channel
   conda config --env --add channels robostack-staging
   # remove the defaults channel just in case; this may error if it is not in the list, which is fine
   conda config --env --remove channels defaults

   mamba install ros-noetic-desktop
   ```

   Reference: <https://robostack.github.io/GettingStarted.html>
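A quick sanity check that the RoboStack install succeeded, run inside the activated Conda environment (the exact environment name is whatever you created above):

   ```bash
   # Should print the installed ROS distribution, i.e. "noetic"
   rosversion -d
   # roscore should resolve to a binary inside the conda environment
   which roscore
   ```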

## Deploying

1. Perform joint calibration by running `stretch_robot_home.py`.
2. [Attach the iPhone to the Stretch's wrist and start recording](https://docs.dobb-e.com/hardware/attach-camera-to-robot).
3. Follow documentation in [Running the Robot Controller](https://docs.dobb-e.com/software/running-the-robot-server) for running `roscore` and `start_server.py` on the robot.
4. Ensure that both of the previous commands are running in the background in their own separate terminal windows.

{% hint style="info" %}
We like using `tmux` for running multiple commands and keeping track of them at the same time.
{% endhint %}
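As a sketch of that `tmux` workflow on the robot (the session and window names are our own choice, and the `start_server.py` invocation assumes you are in the directory described in the robot-server docs):

```bash
# Start a detached session with roscore in the first window,
# then give the server its own window.
tmux new-session -d -s robot -n roscore 'roscore'
tmux new-window -t robot -n server 'python start_server.py'
# Attach to watch both; Ctrl-b then a window number switches, Ctrl-b d detaches.
tmux attach-session -t robot
```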

## Behavior Cloning

1. Transfer over the weights of a trained BC policy to the robot.
   1. Take the last checkpoint (saved after the 50th epoch):

      ```bash
      rsync -av --include='*/' --include='checkpoint.pt' --exclude='*' checkpoints/2023-11-22 hello-robot@{ip-address}:/home/hello-robot/code/imitation-in-homes/checkpoints
      ```
2. In `configs/run.yaml` set `model_weight_pth` to the path containing the trained BC policy weights.
3. Run in terminal:

   ```bash
   python run.py --config-name=run
   ```
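If `run.py` uses Hydra-style configuration (which the `--config-name` flag suggests, though we have not confirmed it here), the weight path can likely also be overridden on the command line instead of editing `configs/run.yaml`:

```bash
# Hypothetical Hydra-style override; assumes the config key is
# `model_weight_pth` and the checkpoint was synced to the destination
# used in the rsync step above.
python run.py --config-name=run \
    model_weight_pth=/home/hello-robot/code/imitation-in-homes/checkpoints/2023-11-22/checkpoint.pt
```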

## VINN

1. Transfer over the encoder weights and `finetune_task_data` onto the robot.
   1. We recommend using `rsync` for this.
   2. To speed up the transfer of data and save space on the robot, only transfer the necessary files:

      ```bash
      rsync -avm --include='*/' --include='*.json' --include='*.bin' --include='*.txt' --include='*.mp4' --exclude='*' /home/shared/data/finetune_task_data hello-robot@{ip-address}:/home/hello-robot/data
      ```
2. In `configs/run_vinn.yaml`, set `checkpoint_path` to the path of the encoder weights.
3. In `configs/dataset/vinn_deploy_dataset.yaml`, set `include_tasks` and `include_envs` to a specific task (e.g. Drawer\_Closing) and environment (e.g. Env2) from the `finetune_task_data` folder.
4. Run in terminal:

   ```bash
   python run.py --config-name=run_vinn
   ```
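The `--include`/`--exclude` chains in the transfer commands above work because rsync evaluates filter rules in order: `--include='*/'` lets rsync descend into every directory, the extension includes whitelist the wanted files, and the final `--exclude='*'` drops everything else (`-m` prunes directories that end up empty). A minimal local demonstration of the pattern, using throwaway paths of our own choosing:

```bash
# Build a tiny source tree with one wanted and one unwanted file.
mkdir -p /tmp/filter_demo/src/task/env
touch /tmp/filter_demo/src/task/env/labels.json   # matches --include='*.json'
touch /tmp/filter_demo/src/task/env/scratch.tmp   # caught by --exclude='*'

# Same filter structure as the transfer commands above, run locally.
rsync -avm --include='*/' --include='*.json' --exclude='*' \
      /tmp/filter_demo/src/ /tmp/filter_demo/dst/

ls /tmp/filter_demo/dst/task/env/   # only labels.json is copied
```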

### Command Line Instructions

* h
  * Bring the robot to its "home" position
* r
  * Reset the "home" position height
    * Height values are in the range \~(0.2 to 1.1)
    * Wait a second or two, then home the robot ("h") to move to this height
* s
  * Enter a value 1-10
  * Then home the robot ("h") to move to that fixed starting position
* ↵ (Enter)
  * Take one "step" of the policy
  * Alternative: enter a number followed by ↵ to "step" n times (e.g. 5 + ↵ for 5 "steps")
