Fine-tuning Policies

Fine-tuning policies with fresh demonstrations that you have collected.

Training Policies

The following assumes that the current working directory is this repository’s root folder.

Training a Behavior Cloning Policy

  1. Modify include_task and include_env in finetune.yaml depending on the task and environment you intend to fine-tune on (a config sketch follows this list).

  2. [Optional, non-default] Only if you're using the torch encoder, set enc_weight_pth (the path to the pretrained encoder weights) in image_bc_depth.yaml. You can download the weights from https://dl.dobb-e.com/models/hpr_model.pt if you don't have them (a download sketch follows this list).

  3. Run in a terminal:

    python train.py --config-name=finetune
  4. [Optional, experimental] If you want to take advantage of multi-GPU training using 🤗 accelerate, you can use the following commands:

    accelerate config # Only the first time, to configure accelerate
    accelerate launch train.py --config-name=finetune
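
For reference, the config edits in steps 1 and 2 might look like the sketch below. Treat it as a rough guide only: include_task, include_env, and enc_weight_pth are the keys named above, but the surrounding file structure and the example values (Door_Opening, Env1, and the weights path) are illustrative placeholders, not the repository's exact layout.

    # finetune.yaml -- choose which demonstrations to fine-tune on
    include_task: Door_Opening  # placeholder; use a task name from your dataset
    include_env: Env1           # placeholder; use an env name from your dataset

    # image_bc_depth.yaml -- only when using the torch encoder (step 2)
    enc_weight_pth: /path/to/hpr_model.pt  # wherever you saved the pretrained weights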
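
If you don't have the pretrained encoder weights yet, fetching them with wget (or curl) is enough; the destination path below is arbitrary, as long as enc_weight_pth points to the same location:

    wget https://dl.dobb-e.com/models/hpr_model.pt -O /path/to/hpr_model.pt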