# System overview

The system has four main steps:

Step 1. Make fragments: build local geometric surfaces (referred to as fragments) from short subsequences of the input RGBD sequence. This part uses RGBD odometry, Multiway registration, and RGBD integration.

Step 2. Register fragments: the fragments are aligned in a global space to detect loop closure. This part uses Global registration, ICP registration, and Multiway registration.

Step 3. Refine registration: the rough alignments are aligned more tightly. This part uses ICP registration, and Multiway registration.

Step 4. Integrate scene: integrate RGB-D images to generate a mesh model for the scene. This part uses RGBD integration.
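The data flow between the four steps can be sketched as follows. This is an illustrative outline only; the function names are hypothetical stand-ins, not the actual Open3D script entry points.

```python
# Hypothetical sketch of the four-stage pipeline. Each stage consumes the
# previous stage's output; the bodies are placeholders for the real work.

def make_fragments(config):
    # Step 1: build fragments from short RGBD subsequences
    # (RGBD odometry, multiway registration, RGBD integration).
    return ["fragment_000", "fragment_001"]

def register_fragments(config, fragments):
    # Step 2: align fragments in a global space to detect loop closure
    # (global registration, ICP registration, multiway registration).
    return {f: "rough_pose" for f in fragments}

def refine_registration(config, poses):
    # Step 3: tighten the rough alignments
    # (ICP registration, multiway registration).
    return {f: "refined_pose" for f in poses}

def integrate_scene(config, poses):
    # Step 4: integrate RGB-D images into a mesh model (RGBD integration).
    return "scene_mesh"

config = {"voxel_size": 0.05}  # placeholder config
fragments = make_fragments(config)
poses = refine_registration(config, register_fragments(config, fragments))
mesh = integrate_scene(config, poses)
```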

## Example dataset

We use the SceneNN dataset to demonstrate the system in this tutorial. Alternatively, there are many other excellent RGBD datasets, such as the Redwood, TUM RGBD, ICL-NUIM, and SUN3D datasets.

The tutorial uses the 016 sequence from the SceneNN dataset, which comes from the SceneNN oni file archive. The oni file can be extracted into color and depth image sequences using OniParser from the Redwood reconstruction system. Alternatively, any tool that can convert an .oni file into a set of synchronized RGBD images will work. This is a quick link to download the RGBD sequence used in this tutorial. Some helper scripts can be found in ReconstructionSystem/scripts.

## Quick start

Put all color images in the `image` folder, and all depth images in the `depth` folder. Run the following commands from the root folder:

```shell
cd examples/Python/ReconstructionSystem/
python run_system.py [config_file] [--make] [--register] [--refine] [--integrate]
```


`config_file` is a JSON file that stores parameters and file paths. For example, ReconstructionSystem/config/redwood.json has the following content:

```json
{
    "name": "Open3D reconstruction tutorial http://open3d.org/docs/tutorial/ReconstructionSystem/system_overview.html",
    "path_dataset": "dataset/tutorial/",
    "path_intrinsic": "",
    "max_depth": 3.0,
    "voxel_size": 0.05,
    "max_depth_diff": 0.07,
    "preference_loop_closure_odometry": 0.1,
    "preference_loop_closure_registration": 5.0,
    "tsdf_cubic_size": 3.0,
    "icp_method": "color",
    "global_registration": "ransac",
    "python_multi_threading": true
}
```
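A driver script can read such a file with the standard `json` module and fall back to defaults for any missing keys. The sketch below is a minimal illustration of that pattern, not the actual `run_system.py` logic; the default values are copied from the redwood.json example above.

```python
import json

# Illustrative defaults, taken from the redwood.json example above.
DEFAULTS = {
    "max_depth": 3.0,
    "voxel_size": 0.05,
    "max_depth_diff": 0.07,
    "icp_method": "color",
    "global_registration": "ransac",
    "python_multi_threading": True,
}

def load_config(path):
    """Load a config JSON file, filling in defaults for missing keys."""
    with open(path) as f:
        config = json.load(f)
    for key, value in DEFAULTS.items():
        config.setdefault(key, value)
    return config
```

Values present in the file (such as a custom `voxel_size`) take precedence; anything omitted falls back to the defaults.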

We assume that the color images and the depth images are synchronized and registered. "path_intrinsic" specifies the path to a JSON file that stores the camera intrinsic matrix (see Read camera intrinsic for details). If it is not given, the PrimeSense factory setting is used. For your own dataset, use an appropriate camera intrinsic matrix and visualize a depth image (as with the Redwood dataset) before using the system.
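For reference, the PrimeSense factory setting corresponds to a 640x480 image with focal lengths fx = fy = 525.0 and principal point (cx, cy) = (319.5, 239.5). The sketch below writes such an intrinsic as JSON; the field names and column-major matrix layout follow Open3D's pinhole camera intrinsic serialization as an assumption, so verify against a file written by your Open3D version before relying on it.

```python
import json

# PrimeSense factory intrinsics: 640x480, fx = fy = 525.0, cx = 319.5, cy = 239.5.
width, height = 640, 480
fx = fy = 525.0
cx, cy = 319.5, 239.5

intrinsic = {
    "width": width,
    "height": height,
    # 3x3 matrix stored column-major (assumed layout):
    # [[fx, 0, cx], [0, fy, cy], [0, 0, 1]] -> [fx, 0, 0, 0, fy, 0, cx, cy, 1]
    "intrinsic_matrix": [fx, 0.0, 0.0, 0.0, fy, 0.0, cx, cy, 1.0],
}

with open("camera_intrinsic.json", "w") as f:
    json.dump(intrinsic, f, indent=4)
```

The resulting file path can then be passed via the "path_intrinsic" entry of the config file.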

Note

"python_multi_threading": true utilizes joblib to parallelize the system using every CPU cores. With this option, Mac users may encounter an unexpected program termination. To avoid this issue, set this flag as false.