# Processing Collected Data
Once you collect some new data on the Stick, you need to process it into a dataset before you can train policies on it. This step will help you get started on that.
## Clone the Repo
## Installation
On your machine, in a new conda or virtual environment:
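The exact commands depend on your setup; a minimal sketch using Python's built-in `venv` (a conda environment works just as well, and the environment name here is arbitrary):

```shell
# Create and activate a fresh virtual environment for the processing tools.
# (Equivalently: conda create -n stick-data python=3.10 && conda activate stick-data)
python3 -m venv stick-env
. stick-env/bin/activate
python -c 'import sys; print(sys.prefix)'   # prints the env path, confirming activation
```

With the environment active, install the repository's dependencies from the repo root as described in its installation instructions.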
## Usage
For extracting a single environment:
1. Compress video taken from the Record3D app:
2. Get the files on your machine.
   * Using Google Drive:
     1. [Only once] Generate a Google Service Account API key to download from private folders on Google Drive. There are instructions on how to do so in this Stack Overflow answer: https://stackoverflow.com/a/72076913
     2. [Only once] Rename the .json file to `client_secret.json` and put it in the same directory as `gdrive_downloader.py`.
     3. Upload the `.zip` file into its own folder on Google Drive, and copy the folder ID from the URL into `GDRIVE_FOLDER_ID` in the `./do-all.sh` file.
   * Manually: comment out the `GDRIVE_FOLDER_ID` line from `./do-all.sh` and create the following hierarchy locally. The .zip files should contain the .r3d files exported from the Record3D app in the previous step.
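If you go the Google Drive route, the folder ID is the path segment after `/folders/` in the folder's URL. A small sketch of pulling it out with shell parameter expansion (the URL below is hypothetical):

```shell
# Extract the folder ID from a Google Drive folder URL.
url="https://drive.google.com/drive/folders/1AbCdEfGhIjKlMnOpQrS?usp=sharing"
folder_id="${url##*/folders/}"   # drop everything through "/folders/"
folder_id="${folder_id%%\?*}"    # drop any trailing query string
echo "GDRIVE_FOLDER_ID=$folder_id"
```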
3. Modify the required variables in `do-all.sh`:
   * `TASK_NO`: task ID; see `gdrive_downloader.py` for more information.
   * `HOME`: name or ID of the home.
   * `ROOT_FOLDER`: folder where the data is stored after downloading.
   * `EXPORT_FOLDER`: folder where the dataset is stored after processing. Should be different from `ROOT_FOLDER`.
   * `ENV_NO`: current environment number within the same home and task set.
   * `GRIPPER_MODEL_PATH`: path to the gripper model. It should be in the GitHub repo already, and can be downloaded from http://dl.dobb-e.com/models/gripper_model.pth.
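For reference, the variable block in `do-all.sh` might look like this once filled in (all values below are illustrative, not defaults):

```shell
TASK_NO=1                                  # task id; see gdrive_downloader.py
HOME=kitchen_1                             # name or ID of the home (shadows the shell's $HOME inside the script)
ROOT_FOLDER="$PWD/raw_data"                # where downloaded data lands
EXPORT_FOLDER="$PWD/processed_data"        # must differ from ROOT_FOLDER
ENV_NO=1                                   # environment number in this home/task set
GRIPPER_MODEL_PATH="./gripper_model.pth"   # from the repo, or dl.dobb-e.com
```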
4. Change the current working directory to the local repository root folder and run:
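The invocation itself is a plain shell call from the repository root; in this sketch a stub `do-all.sh` stands in for the real script so the pattern is runnable anywhere:

```shell
# In practice: cd into your local clone and run its do-all.sh.
# A stub script is created here only to demonstrate the invocation.
mkdir -p repo && cd repo
printf '#!/bin/sh\necho "processing complete"\n' > do-all.sh
bash ./do-all.sh
```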
5. Split the extracted data to include a validation set for each environment. The data should follow the hierarchy below. (Be sure to change the corresponding paths in `r3d_files.txt` to include "_val".)
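As a concrete example of the `_val` renaming (file contents and folder names below are hypothetical), suppose `Env2` is held out for validation; its entries in `r3d_files.txt` get the `_val` suffix:

```shell
# Hypothetical r3d_files.txt contents; only the "_val" renaming is the point.
printf '%s\n' 'kitchen_1/Env1/scan.r3d' 'kitchen_1/Env2/scan.r3d' > r3d_files.txt
# Mark Env2 as the validation environment:
sed -i 's#/Env2/#/Env2_val/#' r3d_files.txt
cat r3d_files.txt
```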