Getting started with Dobb·E code

Getting familiar with running Dobb·E, including pretraining (or downloading) the Home Pretrained Representation model, and fine-tuning your own behavior policy.

In this section, we will help you get started with running all the software parts of Dobb·E. There are three major parts to the Dobb·E software:

  1. Processing data collected with the Stick,

  2. Training a model on the collected data,

  3. Deploying the model.

Optionally, you can pre-train your own model similar to how we trained the Home Pretrained Representations (HPR).

We will frequently refer to the "robot", which for this part would be the Intel NUC installed in the Hello Robot Stretch, and the "machine", which should be a beefier machine with GPU(s) where you can preprocess your data and train new models.

Also, ensure that you have mamba installed. Mamba is a faster, conda-compatible alternative that we prefer.
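
If you do not already have mamba, one option is the conda-forge Miniforge installer, which bundles mamba with conda. This is just a sketch; check the Miniforge documentation for the current recommended steps:

    # Download the Miniforge installer for your OS and architecture (bundles mamba)
    curl -L -O "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh"
    # Run the installer and follow its prompts, then restart your shell
    bash Miniforge3-$(uname)-$(uname -m).sh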

Getting Started

  • Clone the repository:

    git clone https://github.com/notmahi/dobb-e.git
    cd dobb-e/imitation-in-homes
  • Set up the project environment:

    mamba env create -f conda_env.yaml
  • Activate the environment:

    mamba activate home_robot
  • Logging:

    • To enable logging, log in with a Weights and Biases (wandb) account:

      wandb login
    • Alternatively, disable logging altogether:

      export WANDB_MODE=disabled
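
As a quick sanity check after setup, you can confirm that the environment activates and sees your GPU. This assumes PyTorch is part of the home_robot environment, which the training code relies on:

    # Verify the activated environment can import PyTorch and detect a GPU (assumes torch is in the env)
    python -c "import torch; print(torch.__version__, torch.cuda.is_available())"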
