System overview#

The system has 4 main steps:

Step 1. Make fragments: build local geometric surfaces (referred to as fragments) from short subsequences of the input RGBD sequence. This part uses RGBD Odometry, Multiway registration, and RGBD integration.

Step 2. Register fragments: the fragments are aligned in a global space to detect loop closure. This part uses Global registration, ICP registration, and Multiway registration.

Step 3. Refine registration: the rough alignments are aligned more tightly. This part uses ICP registration, and Multiway registration.

Step 4. Integrate scene: integrate RGB-D images to generate a mesh model for the scene. This part uses RGBD integration.
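The four steps form a linear pipeline: each stage consumes the output of the previous one. The following sketch shows only the data flow between stages; the function names and return values here are hypothetical placeholders, not the actual entry points of the Open3D example scripts.

```python
# Hypothetical sketch of the four-stage pipeline. The step functions are
# placeholders illustrating data flow, not the real Open3D example API.

def make_fragments(rgbd_sequence):
    # Step 1: split the sequence into short subsequences and build one
    # local surface (fragment) per subsequence via odometry + integration.
    return [f"fragment_{i}" for i in range(0, len(rgbd_sequence), 100)]

def register_fragments(fragments):
    # Step 2: coarsely align all fragments in a global space to detect
    # loop closure (global registration + ICP + multiway registration).
    return {f: "coarse_pose" for f in fragments}

def refine_registration(coarse_poses):
    # Step 3: tighten the coarse alignments (ICP + multiway registration).
    return {f: "refined_pose" for f in coarse_poses}

def integrate_scene(rgbd_sequence, refined_poses):
    # Step 4: fuse all RGB-D frames into a TSDF volume and extract a mesh.
    return "mesh"

rgbd_sequence = list(range(300))                 # stand-in for 300 RGB-D frames
fragments = make_fragments(rgbd_sequence)        # step 1
coarse = register_fragments(fragments)           # step 2
refined = refine_registration(coarse)            # step 3
mesh = integrate_scene(rgbd_sequence, refined)   # step 4
```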

Example dataset#

We provide default datasets, such as the Lounge RGB-D dataset from Stanford, the Bedroom RGB-D dataset from Redwood, and the Jack Jack RealSense L515 bag-file dataset, to demonstrate the system in this tutorial. Alternatively, one may use any RGB-D data. There are many excellent RGBD datasets, such as Redwood data, TUM RGBD data, ICL-NUIM data, the SceneNN dataset, and SUN3D data.

Quick start#

Getting the example code

# Activate the conda environment where you have installed the open3d pip package.
# Clone the Open3D GitHub repository and go to the example.
git clone https://github.com/isl-org/Open3D.git
cd Open3D/examples/python/reconstruction_system/

# Show CLI help for `run_system.py`
python run_system.py --help

Running the example with the default dataset.

# The following command will download and use the default dataset,
# which is the ``lounge`` dataset from Stanford.
# --make will make fragments from RGBD sequence.
# --register will register all fragments to detect loop closure.
# --refine flag will refine rough registrations.
# --integrate flag will integrate the whole RGBD sequence to make final mesh.
# [Optional] Use --slac and --slac_integrate flags to perform SLAC optimization.
python run_system.py --make --register --refine --integrate

Changing the default dataset. One may change the default dataset to other available datasets. Currently the following datasets are available:

  1. Lounge (keyword: lounge) (Default)

  2. Bedroom (keyword: bedroom)

  3. Jack Jack (keyword: jack_jack)

# Using bedroom as the default dataset.
python run_system.py --default_dataset 'bedroom' --make --register --refine --integrate
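Since only the three keywords above are recognized, a small guard like the following can validate the chosen dataset keyword before launching. This helper is a sketch for illustration; it is not part of `run_system.py`.

```python
# Sketch: validate a dataset keyword against the datasets listed above.
# Illustrative only; not part of run_system.py.
AVAILABLE_DATASETS = {"lounge", "bedroom", "jack_jack"}

def resolve_dataset(keyword="lounge"):
    """Return the keyword if it names an available dataset, else raise."""
    if keyword not in AVAILABLE_DATASETS:
        raise ValueError(
            f"Unknown dataset '{keyword}'; "
            f"choose one of {sorted(AVAILABLE_DATASETS)}")
    return keyword

print(resolve_dataset("bedroom"))  # → bedroom
```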

Running the example with a custom dataset using a config file. Manually download or store the data in a folder, placing all the color images in the image sub-folder and all the depth images in the depth sub-folder. Create a config.json file and set path_dataset to the data directory. Override only the parameters whose default values you want to change.
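Before running the system on a custom dataset, it can help to verify that the folder follows the expected layout, with matching numbers of color and depth frames. The following stdlib sketch is illustrative; the sub-folder names follow the convention described above.

```python
# Sketch: sanity-check a custom dataset folder before reconstruction.
# Expects color frames in <root>/image and depth frames in <root>/depth.
from pathlib import Path

def check_dataset_layout(root):
    """Return the frame count if the layout looks valid, else raise."""
    root = Path(root)
    images = sorted((root / "image").glob("*"))
    depths = sorted((root / "depth").glob("*"))
    if not images or not depths:
        raise FileNotFoundError("image/ or depth/ sub-folder is missing or empty")
    if len(images) != len(depths):
        raise ValueError(
            f"color/depth frame counts differ: {len(images)} vs {len(depths)}")
    return len(images)
```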

An example config file for the offline reconstruction system is provided in examples/python/reconstruction_system/config/tutorial.json, which looks like the following:

{
    "name": "Open3D reconstruction tutorial http://open3d.org/docs/release/tutorial/reconstruction_system/system_overview.html",
    "path_dataset": "dataset/tutorial/",
    "path_intrinsic": "",
    "depth_max": 3.0,
    "voxel_size": 0.05,
    "depth_diff_max": 0.07,
    "preference_loop_closure_odometry": 0.1,
    "preference_loop_closure_registration": 5.0,
    "tsdf_cubic_size": 3.0,
    "icp_method": "color",
    "global_registration": "ransac",
    "python_multi_threading": true
}

We assume that the color images and the depth images are synchronized and registered. "path_intrinsic" specifies the path to a json file that stores the camera intrinsic matrix (see Read camera intrinsic for details). If it is not given, the PrimeSense factory setting is used. For your own dataset, use an appropriate camera intrinsic matrix and visualize a depth image (as well as the RGBD images) before running the system.
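Because a config file only needs to override the values you want to change, a wrapper can merge user overrides onto the defaults. The default values in this sketch mirror the tutorial.json shown above, but the merging helper itself is illustrative, not the mechanism used by `run_system.py`.

```python
# Sketch: merge a user config.json over tutorial-style defaults.
# The defaults mirror tutorial.json above; the helper is illustrative.
import json

DEFAULTS = {
    "path_dataset": "dataset/tutorial/",
    "path_intrinsic": "",          # empty -> PrimeSense factory intrinsics
    "depth_max": 3.0,
    "voxel_size": 0.05,
    "depth_diff_max": 0.07,
    "tsdf_cubic_size": 3.0,
    "icp_method": "color",
    "global_registration": "ransac",
    "python_multi_threading": True,
}

def load_config(path=None):
    """Return the defaults, updated with any keys set in the user config."""
    config = dict(DEFAULTS)
    if path is not None:
        with open(path) as f:
            config.update(json.load(f))   # user values override defaults
    return config
```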

Note

"python_multi_threading": true utilizes joblib to parallelize the system using every CPU cores. With this option, Mac users may encounter an unexpected program termination. To avoid this issue, set this flag to false.

Capture your own dataset#

This tutorial provides an example that can record synchronized and aligned RGBD images using the Intel RealSense camera. For more details, please see Capture your own dataset.