It is here: Open3D 0.7.0!

Open3D Version 0.7.0 (released 2019-06-05)

The Open3D team and the Open Source Vision Foundation are proud to present the 0.7.0 release of the Open3D library.

This release is focused on extending the functionality of Open3D data types such as Octree, VoxelGrid, and Mesh. Among the main novelties we would like to highlight:

  • New generic Octree data type with arbitrary leaves
  • Upgraded VoxelGrid data type
  • Conversion functionalities from Point Cloud to VoxelGrid and Octree
  • Upgraded Mesh data type
  • Visualization support for Octree and VoxelGrid
  • New sampling methods for Triangle mesh
  • New Mesh filter functionalities: Sharpen, Smooth, Taubin
  • Upgraded TSDFVolume generation

We also tackled a set of issues brought up by the community, including finer control over the geometries handled by the Visualizer. It is now possible to add and remove geometries dynamically!
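
For illustration, here is a minimal sketch of the new dynamic workflow, assuming the 0.7-era flat Python namespace (open3d.Visualizer, open3d.read_point_cloud); the file paths are placeholders and exact signatures may differ.

import open3d

# Placeholder input files; any two point clouds will do.
pcd_a = open3d.read_point_cloud("fragment_a.pcd")
pcd_b = open3d.read_point_cloud("fragment_b.pcd")

vis = open3d.Visualizer()
vis.create_window()
vis.add_geometry(pcd_a)

for i in range(300):
    if i == 100:
        vis.add_geometry(pcd_b)     # add a geometry while the window is live
    if i == 200:
        vis.remove_geometry(pcd_a)  # remove one without recreating the window
    vis.poll_events()
    vis.update_renderer()

vis.destroy_window()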

Please have a look at our documentation (Open3D docs) to see all the details, and send us feedback at info@open3d.org. You can also join our Discord network to participate in the development discussions.

Full list of changes below.

Enjoy!

The Open3D team


Legend:

  • [Added]: Used to indicate the addition of new features
  • [Changed]: Updates of existing functionalities
  • [Deprecated]: Functionalities/features that will be removed in future releases
  • [Removed]: Functionalities/features removed in this release
  • [Fixed]: For any bug fixes
  • [Breaking]: This functionality breaks the previous API and you need to check your code

Installation and project structure

  • [Added] Googletest as a submodule #981
  • [Changed] Standardize headers #902
  • [Changed] Remove dummy semicolon after empty functions #908
  • [Changed] AppVeyor build badge to master branch #925
  • [Changed] Avoid import star (from open3d import *) #982
  • [Changed] Merged Open3D-3rdparty repository into the main repository #967
  • [Removed] Duplicated sort includes #930
  • [Fixed] Eigen::floor compiler error issue #935 #952
  • [Fixed] Build docs on make -j #960

CORE features and applications

  • [Added] Support for Octree, OctreeNode, OctreeLeafNode, and OctreeNodeInfo #903 #946 #959
  • [Added] Octree python bindings #947
  • [Added] Visualizer remove geometry #904
  • [Added] Uniform point sampling from triangle mesh #907
  • [Added] Octree I/O support for json #911
  • [Added] Octree to VoxelGrid conversion #914 (see the sketch after this list)
  • [Added] VoxelGrid to Octree conversion #983
  • [Added] Remove non-manifold edges from triangle mesh #985
  • [Added] CuboidShader for rendering VoxelGrid and Octree in OpenGL3.1+ #918
  • [Added] Octree visualization support and refactor cuboid shader #924
  • [Added] Mesh filtering functionalities (Sharpen, Smooth, Laplacian, Taubin) #926
  • [Added] Method to check if TriangleMesh is watertight #929
  • [Added] Mesh Simplification method #945
  • [Added] Loop Subdivision method #954
  • [Added] Poisson Disk Sampling of TriangleMesh #955
  • [Added] Basic Transformation Methods (Translate, Scale, Rotate) to Geometry3D #956
  • [Added] Functionality to compute convex hulls from TriangleMesh and PointCloud #965
  • [Added] Extension of mesh properties to check if a triangle mesh is orientable #977
  • [Added] TUM trajectory format #953 [Thanks martinruenz]
  • [Added] Center parameter to rotate method #994
  • [Changed] Refactor UniformTSDFVolume to use VoxelGrid #971
  • [Changed] Small AttributeError problem in pointcloud.py #987 [Thanks janfelixklein]
  • [Breaking] Update of VoxelGrid data type #933
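
As a taste of the new conversions, here is a minimal sketch; the method names (convert_from_point_cloud, to_voxel_grid, to_octree) are assumptions based on the pull requests above and may differ slightly in the released binding.

import open3d

pcd = open3d.read_point_cloud("fragment.pcd")  # placeholder path

# PointCloud -> Octree (names assumed; see #903, #946, #959)
octree = open3d.Octree(max_depth=6)
octree.convert_from_point_cloud(pcd)

# Octree -> VoxelGrid and back (#914, #983)
voxel_grid = octree.to_voxel_grid()
octree_again = voxel_grid.to_octree(max_depth=6)

# Both types can now be rendered directly (#918, #924)
open3d.draw_geometries([voxel_grid])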

Documentation, tutorials, and examples

  • [Fixed] Point cloud dist function docs #899
  • [Fixed] doc string of PinholeCameraTrajectory #923
  • [Fixed] misc fixes for reconstruction docs #997
  • [Changed] Improve doc for TriangleMesh, LineSet and PointCloud #980
  • [Changed] Docs update on unit test and python style #989

 

Open3D is Joining 2019 Google Season of Docs

News: We're proud to announce that Open3D has been selected as one of the participating open-source projects in the 2019 Google Season of Docs (GSoD) program. We're recruiting 1-2 technical writers for this project via the program. By participating in GSoD with Open3D, you'll spend three months working closely with the Open3D team, learn about this state-of-the-art 3D data processing library, build up your open-source contributions, and receive a stipend. Star Open3D on GitHub and apply today! Please read this blog post for project ideas. To apply, please (1) fill out this application survey form and (2) visit the GSoD official site for application instructions. Contact us if you have any questions!

 

Links:
(1) Google Season of Docs (GSoD): https://developers.google.com/season-of-docs
(2) Timelines: https://developers.google.com/season-of-docs/docs/timeline
(3) Stipends ($6,000 USD equivalent): https://developers.google.com/season-of-docs/docs/tech-writer-stipends
(4) Open3D blog post for project ideas: http://www.open3d.org/index.php/2019/04/11/season-of-docs-for-open3d
(5) Fill out this survey form: https://docs.google.com/forms/d/e/1FAIpQLSdYZJuYTsBV8C0Ta6cXPJgWqTsnd0l19rIAR9Xr7dBPt0Ftqw/viewform?usp=sf_link . You'll also need to apply through the GSoD official site.

 

 

1. Project description

Open3D is an open-source library that supports rapid development of software that deals with 3D data. The Open3D frontend exposes a set of carefully selected data structures and algorithms in both C++ and Python. The backend is highly optimized and is set up for parallelization. Open3D was developed from a clean slate with a small and carefully considered set of dependencies. It can be set up on different platforms and compiled from source with minimal effort. The code is clean, consistently styled, and maintained via a clear code review mechanism. Open3D has been used in a number of published research projects and is actively deployed in the cloud. We welcome contributions from the open-source community.

You can see the progress of Open3D through the video releases we have published on our YouTube channel.

 

2. Organization

Open3D lives under the umbrella of the non-profit Open Source Vision Foundation (OSVF.org). OSVF provides support with the administration and coordination of the resources needed for the development of Open3D. The Open3D project was originally created within the Intelligent Systems Lab (ISL) at Intel, and the development of Open3D is still done in collaboration with ISL.

The main contributors of Open3D are listed on the project's GitHub page.

The management of the project is done by German Ros and Vladlen Koltun.

3. Relevant links

  • Github repository [link]
  • Documentation page [link]
  • Contact e-mail [link]
  • Chat with us via Discord [link]
  • YouTube channel [link]

4. Project ideas

Below you can find our proposed ideas for improving the Open3D documentation. Please feel free to contact us to propose alternative ideas or changes to the current ones. During the 3-month period, you're expected to complete 3 or more of the following projects.

  • Project 1 name: Upgrade and extend Open3D contributor’s guide
  • Description: Contributor’s guides are the main entry point for collaborating on an open-source project. This project requires the creation of a new contributor’s guide that explains, step by step, how contributors can send their contributions to Open3D. These steps include:
    • How to find the current road map of Open3D and which features may be interesting
    • How to prepare a pull request to Open3D
    • How to interact with reviewers
    • What are the standards used by Open3D
  • Related material:
    • The current version of the contributor’s guide can be found here.
  • Project 2 name: Complete Python API documentation
  • Description: Open3D is a multi-language library with support for both C++ and Python. There is an initial version of the Python documentation, generated using Sphinx, which contains basic docstrings for each method and class of Open3D. In this project, we would like to extend the docstrings in a way that makes the functionality clearer, by rephrasing and extending the existing text.
  • Related material:
    • The current version of the Python API documentation can be found here.
  • Project 3 name: Complete C++ API documentation
  • Description: Open3D is a multi-language library with support for both C++ and Python. There is an initial version of the Python documentation generated using Sphinx (http://www.open3d.org/docs/index.html#python-api-index). However, the C++ documentation remains incomplete for most of the library. The C++ documentation is generated by Doxygen from docstrings in the source code. In this project, we would like to fill in the missing C++ docstrings, using the Python docstrings as a reference.
  • Related material:
  • Project 4 name: High-level docs of core data structures
  • Description: Open3D implements several core data structures, such as PointCloud, TriangleMesh, VoxelGrid, LineSet, Octree, camera parameters, etc. Making sure that users understand the main concepts and usage of these data structures is central to the Open3D library. The current documentation is organized as tutorials, which is not well-structured for reference. The scope of this project includes:
    • Review current tutorial documentation at http://www.open3d.org/docs/tutorial/Basic/index.html and get familiar with the basic data structures.
    • Refactor the tutorials so that they are structured around each data structure.
    • For each data structure, provide a high-level overview, data structure API and example usage details.
  • Related material:
    • The current version of the contributor’s guide can be found here.
  • Project 5 name: Documentation of high-level use cases, aka “How to do X using Open3D”
  • Description: We would like to provide a section in our official documentation explaining how to perform basic functionalities using the Open3D library. Most of this information already exists within the current documentation, but it needs to be refactored and adapted to follow a common format. Examples of these functionalities would be:
    • How to load and visualize a point cloud using Open3D?
    • How to create an Octree from a Point cloud?
    • How to use your own camera poses for point cloud visualization?
    • Etc.
  • Related material:
    • Examples of these use cases can be seen here.

5. Mentors and point of contact

  • Technical writers will be mentored by two key contributors to Open3D: Yixing Lao and Qian-Yi Zhou. Both Yixing and Qian-Yi have a deep understanding of the Open3D project as well as the process of automatically generating documentation. They are both recognized industry leaders, and their mentoring will definitely be a positive influence for technical writers.
  • If you would like to help us improve our documentation, please contact German Ros [mail].

Open3D 0.6.0 release

Open3D Version 0.6.0 (released 2019-04-01)

The Open3D team and the Open Source Vision Foundation are excited to announce the 0.6.0 release of the Open3D library.

In this release, we focused our efforts on improving the quality of Open3D documentation and paving the way for upcoming GPU support.

Documentation is a critical aspect of any software project, but it becomes especially critical in open-source projects. This is one of the main ways we engage with the community. For this reason, we improved the internal infrastructure to automatically generate documentation in a new format, which makes the Python API more readable and easy to understand. Please take a look at Open3D docs.

The team has also been working on bringing multi-GPU support to Open3D. We will start rolling this out in upcoming releases. In the meantime, feedback and suggestions are welcome. Please check our GPU integration branch here. This release also includes new data types that serve as the foundation for new meshing algorithms that will be rolled out in our next release.

The full list of changes can be seen below. Please send us feedback at info@open3d.org and join our Discord network to participate in the discussions.

 

Enjoy!

The Open3D team


 

Legend:

  • [Added]: Used to indicate addition of new features
  • [Changed]: Updates of existing functionalities
  • [Deprecated]: Functionalities/features that will be removed in future releases
  • [Removed]: Functionalities/features removed in this release
  • [Fixed]: For any bug fixes
  • [Breaking]: This functionality breaks the previous API and you need to check your code

Installation and project structure

  • [Changed] Simplified CMake include directory structure #839
  • [Changed] New installation default behavior: don’t install 3rd party headers except Eigen and GL #840
  • [Breaking] New project directory structure #842 #850 #855

CORE features and applications

  • [Added] Travis build docs, use 16.04, and other fixes #885
  • [Added] update adjacency list after mesh operations #843
  • [Added] HalfEdgeTriangleMesh data type support #851 #868
  • [Added] STL file support #786
  • [Added] Compute vertex adjacency map #830
  • [Changed] Standardize API of SolveLinearSystemPSD #821
  • [Changed] upgraded pybind11 #837
  • [Changed] Upgrade OpenGL GLSL version #854
  • [Fixed] path in the comments of python_binding.py #878
  • [Fixed] clang format discrepancy and links #793 #795 #816
  • [Fixed] autocomplete for python modules #799
  • [Fixed] intrinsic parameters for Kinect2 #801
  • [Fixed] initializers for FastGlobalRegistration class #807 #808
  • [Fixed] Travis fails when unit tests fail in a docker container #810
  • [Fixed] ColorMap divergence and other issues #819 #860
  • [Fixed] add minus sign in SolveJacobianSystemAndObtainExtrinsicMatrixArray #822
  • [Fixed] STL mesh write vertex index #829

Documentation, tutorials, and examples

  • [Added] pybind docs parser and Google-style docstring generator #864
  • [Added] Expandable docs sidebar #832
  • [Changed] extended Python documentation #859 #861 #862 #869 #877 #881
  • [Changed] C++ project documentation for Windows #809
  • [Fixed] issues in the documentation building process #845
  • [Fixed] namespace typo in examples #856
  • [Fixed] small issues in RegistrationRansac example #800 #802

 

Testing and benchmarking

  • [Removed] visualization unit tests #857

 

Introducing Open3D 0.5.0

Open3D Version 0.5.0 (released 2019-01-17)

The Open3D team and the Open Source Vision Foundation (http://www.osvf.org) are excited to announce the 0.5.0 release of the Open3D library.

In this release we show the power of Open3D as a core tool to create machine learning solutions for 3D data. We introduce a re-implementation of the PointNet++ architecture to perform point cloud semantic segmentation using Open3D and TensorFlow. Our Open3D-PointNet++ produces highly accurate results on the Semantic3D benchmark, surpassing the results of the original PointNet++ implementation. Even more exciting is the fact that our re-designed Open3D-PointNet++ is able to perform real-time inference (10+ FPS) on the KITTI dataset. We show how to perform training and inference of Open3D-PointNet++ on both Semantic3D and KITTI. Check out this blog post for more information!

We have also added a new VoxelGrid representation and tooling to convert from point clouds to a VoxelGrid structure. This functionality is extremely useful to produce representations that are easier to digest by neural networks.

We have also done significant improvements to our internal infrastructure, including a simplified CI testing mechanism via docker images, enhanced testing coverage, and easier installation of the library.

Full list of changes below. Please send us feedback at info@open3d.org and join our Discord network [link] to participate in the discussions.

Enjoy!

The Open3D team


 

Legend:

  • [Added]: Used to indicate addition of new features
  • [Changed]: Updates of existing functionalities
  • [Deprecated]: Functionalities/features that will be removed in future releases
  • [Removed]: Functionalities/features removed in this release
  • [Fixed]: For any bug fixes
  • [Breaking]: This functionality breaks the previous API and you need to check your code

Installation and project structure

  • [Added] docker images for Open3D in dockerhub
  • [Added] option to disable jupyter build
  • [Added] new way of detecting conda active environment
  • [Added] option to link to static Windows runtime
  • [Changed] 3rdparty folder moved to Open3D-3rdparty repository
  • [Changed] bug_report.md to improve communication with users when issues are reported
  • [Fixed] Conda and Pip packaging issues to build platform-specific targets
  • [Fixed] conda dependency conflicts resulting in forced downgrade
  • [Fixed] python 2.7 import JVisualizer
  • [Fixed] Disabled conda executable check (conda command could be a bash function instead of an executable, CMake may complain)
  • [Fixed] Windows compilation warning with py::ssize_t
  • [Removed] mac flag -Wno-expansion-to-defined in CI

CORE features and applications

  • [Added] New Open3D Point cloud semantic segmentation architecture based on PointNet++
  • [Added] New training code for Point cloud semantic segmentation
  • [Added] New real-time inference code for Point cloud semantic segmentation
  • [Added] New compatibility with TensorFlow operators
  • [Added] New function for building Jacobian matrices that follows RGBDOdometry structure
  • [Added] Non-rigid optimization for more than 6 variables (6D camera pose + anchor points)
  • [Added] A new general purpose image processing function: CreateDepthBoundaryMask
  • [Added] "shift + +/-" key event that can change width of LineSet for the visualization
  • [Added] line_width in RenderOption and corresponding Python binding (Applies to C++/Python API)
  • [Added] I/O functions for LineSet
  • [Added] "lineset" option into ViewGeometry application
  • [Added] New box primitive
  • [Added] VoxelGrid structure
  • [Added] I/O functions for VoxelGrid
  • [Added] Utility function to transform point clouds to voxels
  • [Added] new shader to render voxel clouds
  • [Added] warning output for "degenerated" TriangleMeshes
  • [Added] Promote compiled extension for pycharm autocomplete
  • [Changed] Image class to use namespace directive in order to reduce code line length
  • [Changed] Image class to remove global variables
  • [Changed] Image class to shorten local variable names
  • [Changed] Image class to simplify comparisons using unit_test::ExpectEQ(...)
  • [Changed] KDTreeFlann class to use namespace directive in order to reduce code line length
  • [Changed] KDTreeFlann class to simplify comparisons using unit_test::ExpectEQ(...)
  • [Changed] TriangleMesh class to use namespace directive in order to reduce code line length
  • [Changed] TriangleMesh class to simplify comparisons using unit_test::ExpectEQ(...)
  • [Changed] Relative paths in CMake package config
  • [Changed] Factorization of internal functions in ColormapOptimization module as public functions
  • [Changed] RGBDImage class to use namespace directive in order to reduce code line length
  • [Changed] RGBDImage class to simplify comparisons using unit_test::ExpectEQ(...)
  • [Changed] RGBDImage class to fix Rand float/double to return unscaled values between 0.0 and 1.0
  • [Changed] PointCloud class to use namespace directive in order to reduce code line length
  • [Changed] PointCloud class to simplify comparisons using unit_test::ExpectEQ(...)
  • [Changed] LineSet class to use namespace directive in order to reduce code line length
  • [Changed] LineSet class to simplify comparisons using unit_test::ExpectEQ(...)
  • [Changed] Vector3dVector and other Eigen vector bindings to improve performance (speedup of 40-200x)
  • [Fixed] Bug due to PinholeCameraIntrinsic constructor not initializing member data
  • [Fixed]  Bug in PinholeCameraTrajectory
  • [Fixed] Bug in ConvertToJsonValue
  • [Fixed] Bug in ConvertFromJsonValue
  • [Fixed] Bug in TransformationEstimationPointToPlane::ComputeRMSE
  • [Fixed] typos in FilePLY.cpp: from ply_poincloud_reader to ply_pointcloud_reader
  • [Fixed] parameter name of create_window
  • [Removed] Unneeded std::move calls

Documentation and tutorials

  • [Added] New tutorial on how to perform real-time PointCloud semantic segmentation using Open3D
  • [Added] Documentation on supported point cloud formats

 

Testing and benchmarking

  • [Added] Test case for IJsonConvertible
  • [Added] Test case for Core/Utility/Eigen
  • [Added] Test case for Core/Utility/FileSystem
  • [Added] Test case for PinholeCameraTrajectory
  • [Added] Test case for PinholeCameraIntrinsic
  • [Added] Test case for RGBDOdometryJacobianFromHybridTerm
  • [Added] Test case for RGBDOdometryJacobianFromColorTerm
  • [Added] New reference data for RGBDImage based on fixes to Rand float/double
  • [Added] New utilities for generating input data for the unit tests
  • [Changed] UnitTest/Utility moved to its own folder
  • [Changed] unit_test::ExpectEQ to remove unused code

On point clouds Semantic Segmentation

In this post, we will walk you through how Open3D can be used to perform real-time semantic segmentation of point clouds for autonomous driving purposes. We demonstrate our results on the KITTI and Semantic3D benchmarks. Please use the following link to access our demo project. See Figure 1 for an example of semantic segmentation of point clouds in the Semantic3D dataset.

Figure 1. Example of PointCloud semantic segmentation. Left: input dense point cloud with RGB information. Right: semantic segmentation prediction map using Open3D-PointNet++.

The main purpose of this project is to showcase how to build a state-of-the-art machine learning pipeline for 3D inference by leveraging the building blocks available in Open3D. For this purpose we have to deal with several stages: 1) pre-processing, 2) custom TensorFlow op integration, 3) post-processing, and 4) visualization. Furthermore, we want to demonstrate how critical the correct design of these modules is in order to achieve maximum accuracy and run-time performance, and how Open3D can help to simplify this process.

Segmenting PointClouds

We based our development on the well-known PointNet++ architecture, following Mathieu Orhan and Guillaume Dekeyser's repo and the original PointNet++ implementation as references. We thank the authors for sharing their methods. Our implementation was re-built using Open3D, and we deviated from the reference design when needed in order to improve performance, as described in the following section.

Figure 2. Diagram depicting the PointNet++ architecture.

For our experiments we made use of the state-of-the-art Semantic3D and KITTI datasets. In Semantic3D, there are ground-truth labels for 8 semantic classes: 1) man-made terrain, 2) natural terrain, 3) high vegetation, 4) low vegetation, 5) buildings, 6) remaining hardscape, 7) scanning artifacts, and 8) cars and trucks. The goal of the point cloud classification task is to output per-point class labels given the point cloud.

Figure 3. Semantic3D snapshot.

Figure 4. KITTI snapshot.

Since the Semantic3D dataset contains a huge number of points per point cloud (up to 5e8, see dataset stats), we first run voxel downsampling with Open3D to reduce the dataset size. During both training and inference, PointNet++ is fed with fixed-size point clouds cropped within boxes; we set the box size to be 60m x 20m x Inf, with the Z-axis allowing all values. During inference with KITTI, we set the region of interest to be 30m in front of and behind the car, and 10m to the left and right of the car center, to fit the box size. This allows the PointNet++ model to predict only one sample per frame.
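
As a rough sketch of this preprocessing step, assuming the era's flat Python namespace (open3d.voxel_down_sample and open3d.crop_point_cloud are assumed names; paths and parameter values are placeholders):

import numpy as np
import open3d

pcd = open3d.read_point_cloud("bildstein_station1_xyz_intensity_rgb.pcd")

# Voxel downsampling reduces the raw scene to a manageable size.
pcd_down = open3d.voxel_down_sample(pcd, voxel_size=0.05)

# Crop a 60m x 20m box around the region of interest; very large Z bounds
# emulate the unbounded Z-axis described above.
min_bound = np.array([-30.0, -10.0, -1.0e6])
max_bound = np.array([30.0, 10.0, 1.0e6])
pcd_box = open3d.crop_point_cloud(pcd_down, min_bound, max_bound)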

Our semantic segmentation model is trained on the Semantic3D dataset, and it is used to perform inference on both Semantic3D and KITTI datasets. In this document, we focus on the techniques which enable real-time inference on KITTI.

Accelerating PointNet++ with Open3D-enabled TensorFlow op

In PointNet++’s set abstraction layer, the original points are subsampled, and features of the subsampled points must be propagated to all of the original points by interpolation (see Section 3.4 of PointNet++). This is achieved by a 3-nearest-neighbor search, for which the authors provided a simple C++ implementation via a custom TensorFlow op called ThreeNN. However, this op turns out to be the bottleneck of the PointNet++ prediction model.

The following benchmark is obtained by running the benchmark script on a batch of 64 samples from the colored Semantic3D dataset. As we can see, the ThreeNN op accounts for 87% of the graph execution time.

// Batch time
Batch size: 64, batch_time: 1.8208365440368652

// Per-op time
node name |           total execution time |  accelerator execution time |        cpu execution time |
ThreeNN            1.73sec (100.00%, 87.61%),        0us (100.00%, 0.00%),   1.73sec (100.00%, 95.87%)
ThreeInterpolate     60.68ms (12.39%, 3.07%),        0us (100.00%, 0.00%),      60.68ms (4.13%, 3.36%)
GroupPoint            27.31ms (9.32%, 1.38%),   27.03ms (100.00%, 15.85%),        275us (0.77%, 0.02%)
Conv2D                26.91ms (7.94%, 1.36%),    23.99ms (84.15%, 14.07%),       2.91ms (0.76%, 0.16%)

Open3D uses FLANN to build KDTrees for fast retrieval of nearest neighbors, which can be used to accelerate the ThreeNN op. This custom TensorFlow op implementation must be linked with Open3D and the TensorFlow library. To conveniently link the various dependencies, we provide a CMake file that automatically downloads, builds and links Open3D. When Open3D is properly installed (in this case automatically), one can simply use Open3D's CMake finder to include headers and link Open3D like the following.

target_include_directories(tf_interpolate PUBLIC ${Open3D_INCLUDE_DIRS})
target_link_libraries(tf_interpolate tensorflow_framework ${Open3D_LIBRARIES})

For more details on how to link a C++ project to Open3D, please see this documentation.

Next, we refactor ThreeNN to make use of Open3D. In summary, first, a KDTree of the reference points is created by

open3d::KDTreeFlann reference_kd_tree(reference_pcd);

Then, for each target point, we search for the 3 nearest neighbors in the KDTree:

// for each j:
reference_kd_tree.SearchKNN(target_pcd.points_[j], 3, three_indices, three_dists);

After refactoring ThreeNN with Open3D, we see a ~2X speed up in both the ThreeNN and the full model run time with batch size 64.

// Batch time
Batch size: 64, batch_time: 0.7777869701385498

// Per-op time
node name |             total execution time |  accelerator execution time |         cpu execution time |
ThreeNN            694.14ms (100.00%, 73.72%),         0us (100.00%, 0.00%),   694.14ms (100.00%, 90.20%)
ThreeInterpolate      62.94ms (26.28%, 6.68%),         0us (100.00%, 0.00%),       62.94ms (9.80%, 8.18%)
GroupPoint            27.18ms (19.60%, 2.89%),    26.90ms (100.00%, 15.63%),         287us (1.62%, 0.04%)
Conv2D                26.39ms (16.71%, 2.80%),     23.83ms (84.37%, 13.85%),        2.56ms (1.58%, 0.33%)

Post processing: accelerating label interpolation

Since we subsampled the original dataset before feeding points to PointNet++, the network outputs only correspond to a sparse subset of the original point cloud.

Figure 5. Inference on a sparse point cloud (KITTI).

Figure 6. Inference results after interpolation.

The sparse labels need to be interpolated to generate labels for all input points. This interpolation can be achieved with nearest neighbor search using open3d.KDTreeFlann and majority voting, similar to what we did above in the ThreeNN op.

import numpy as np
import open3d

def interpolate_dense_labels(sparse_points, sparse_labels, dense_points, k=3):
    # Build a KDTree over the sparse (downsampled) points.
    sparse_pcd = open3d.PointCloud()
    sparse_pcd.points = open3d.Vector3dVector(sparse_points)
    sparse_pcd_tree = open3d.KDTreeFlann(sparse_pcd)

    dense_labels = []
    for dense_point in dense_points:
        _, sparse_indexes, _ = sparse_pcd_tree.search_knn_vector_3d(
            dense_point, k
        )
        knn_sparse_labels = sparse_labels[sparse_indexes]
        # Majority vote among the k nearest sparse labels
        dense_label = np.bincount(knn_sparse_labels).argmax()
        dense_labels.append(dense_label)
    return dense_labels
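
For context, a hypothetical call with stand-in data would look like this:

sparse_points = np.random.rand(1000, 3)             # downsampled points
sparse_labels = np.random.randint(0, 8, size=1000)  # per-point predictions
dense_points = np.random.rand(100000, 3)            # original point cloud
dense_labels = interpolate_dense_labels(
    sparse_points, sparse_labels, dense_points, k=3
)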

However, doing so in Python can be a major performance hit. We ran the full kitti_predict.py inference on the KITTI dataset as a benchmark. The interpolation step takes about 90% of the total run time and slows the full pipeline down to about 1 FPS.

$ python kitti_predict.py --ckpt path/to/checkpoint.ckpt
...
[ 1.05 FPS] load_data: 0.0028, predict: 0.0375, interpolate: 0.9076, visualize: 0.0031, total: 0.9545
[ 1.06 FPS] load_data: 0.0028, predict: 0.0355, interpolate: 0.8952, visualize: 0.0025, total: 0.9396
[ 1.04 FPS] load_data: 0.0028, predict: 0.0348, interpolate: 0.9214, visualize: 0.0024, total: 0.9653
...

To address the performance issue, another custom TensorFlow C++ op, InterpolateLabel, is added. The op takes sparse_points, sparse_labels, and dense_points, and outputs dense_labels. OpenMP is used to parallelize the KNN tree search. A dense_colors output is also added to the op to directly output label-colored dense points. Please refer to the source code for details.

Another benefit of this approach is that the full pipeline of prediction and interpolation is now implemented in one TensorFlow op graph. That is, the TensorFlow session takes in the original dense points and directly returns dense labels and label-colored dense points. This approach is more modular and efficient than doing the interpolation outside of the TensorFlow graph. After optimization, the end-to-end pipeline achieves an average of 10+ FPS on the KITTI dataset, which is faster than KITTI's capture rate of 10 FPS.

$ python kitti_predict.py --ckpt path/to/checkpoint.ckpt
...                      
[10.73 FPS] load_data: 0.0046, predict_interpolate: 0.0840, visualize: 0.0046, total: 0.0932                         
[12.89 FPS] load_data: 0.0047, predict_interpolate: 0.0693, visualize: 0.0035, total: 0.0776                         
[11.44 FPS] load_data: 0.0047, predict_interpolate: 0.0791, visualize: 0.0035, total: 0.0874                         
...

Video example on KITTI dataset

 

How-to train Open3D-PointNet++ on Semantic3D dataset

1. Data download

Download the Semantic3D dataset and extract it:

cd dataset/semantic_raw
bash download_semantic3d.sh

The outcome of these commands should look like this:

Open3D-PointNet2-Semantic3D/dataset/semantic_raw
├── bildstein_station1_xyz_intensity_rgb.labels
├── bildstein_station1_xyz_intensity_rgb.txt
├── bildstein_station3_xyz_intensity_rgb.labels
├── bildstein_station3_xyz_intensity_rgb.txt
├── ...

2. Convert txt to pcd file

Run

python preprocess.py

Open3D is able to read .pcd files much more efficiently. The script converts each .txt scene into a .pcd file, after which the directory should look like this:

Open3D-PointNet2-Semantic3D/dataset/semantic_raw
├── bildstein_station1_xyz_intensity_rgb.labels
├── bildstein_station1_xyz_intensity_rgb.pcd (new)
├── bildstein_station1_xyz_intensity_rgb.txt
├── bildstein_station3_xyz_intensity_rgb.labels
├── bildstein_station3_xyz_intensity_rgb.pcd (new)
├── bildstein_station3_xyz_intensity_rgb.txt
├── ...

3. Downsample

Run

python downsample.py

The downsampled dataset will be written to dataset/semantic_downsampled. Points with label 0 (unlabeled) are excluded during downsampling (a sketch of this step follows the listing below).

Open3D-PointNet2-Semantic3D/dataset/semantic_downsampled
├── bildstein_station1_xyz_intensity_rgb.labels
├── bildstein_station1_xyz_intensity_rgb.pcd
├── bildstein_station3_xyz_intensity_rgb.labels
├── bildstein_station3_xyz_intensity_rgb.pcd
├── ...
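
A minimal sketch of what this step does (the real downsample.py also carries the label array through; file names and the voxel size here are illustrative):

import numpy as np
import open3d

points = np.loadtxt("scene.txt")[:, :3]          # x, y, z columns
labels = np.loadtxt("scene.labels", dtype=int)

# Exclude unlabeled points (label 0) before downsampling.
mask = labels != 0
pcd = open3d.PointCloud()
pcd.points = open3d.Vector3dVector(points[mask])
pcd_down = open3d.voxel_down_sample(pcd, voxel_size=0.05)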

4. Compile TF Ops

We need to build the TF kernels in tf_ops. First, activate the virtualenv and make sure TF can be found with the current Python. The following line shall run without error.

python -c "import tensorflow as tf"

Then build TF ops. You'll need CUDA and CMake 3.8+.

cd tf_ops
mkdir build
cd build
cmake ..
make

After compilation, the following .so files shall be in the build directory.

Open3D-PointNet2-Semantic3D/tf_ops/build
├── libtf_grouping.so
├── libtf_interpolate.so
├── libtf_sampling.so
├── ...

Verify that the TF kernels are working by running

cd .. # Now we're at Open3D-PointNet2-Semantic3D/tf_ops
python test_tf_ops.py

5. Train

Run

python train.py

By default, the training set will be used for training and the validation set will be used for validation. To train with both the training and validation sets, use the --train_set=train_full flag. Checkpoints will be output to log/semantic.

6. Predict

Pick a checkpoint and run the predict.py script. The prediction dataset is configured with --set. Since PointNet2 only takes a few thousand points per forward pass, we need to sample from the prediction dataset multiple times to get good coverage of the points. Each sample contains the few thousand points required by PointNet2. To specify the number of such samples per scene, use the --num_samples flag.

python predict.py --ckpt log/semantic/best_model_epoch_040.ckpt \
                  --set=validation \
                  --num_samples=500

The prediction results will be written to result/sparse.

Open3D-PointNet2-Semantic3D/result/sparse
├── sg27_station4_intensity_rgb.labels
├── sg27_station4_intensity_rgb.pcd
├── sg27_station5_intensity_rgb.labels
├── sg27_station5_intensity_rgb.pcd
├── ...

7. Interpolate

The last step is to interpolate the sparse predictions to the full point cloud. We use Open3D's K-NN hybrid search with a specified radius (a sketch of the hybrid search follows the listing below).

python interpolate.py

The prediction results will be written to result/dense.

Open3D-PointNet2-Semantic3D/result/dense
├── sg27_station4_intensity_rgb.labels
├── sg27_station5_intensity_rgb.labels
├── ...
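
Under the hood, the hybrid search bounds a K-NN query by a radius; here is a minimal sketch (the radius and max_nn values are illustrative, not the script's defaults):

import open3d

sparse_pcd = open3d.read_point_cloud("result/sparse/sg27_station4_intensity_rgb.pcd")
tree = open3d.KDTreeFlann(sparse_pcd)

# Return up to max_nn neighbors, but only those within the given radius.
query_point = [0.0, 0.0, 0.0]  # placeholder dense point
[k, idx, dist2] = tree.search_hybrid_vector_3d(query_point, 0.2, 3)  # radius=0.2, max_nn=3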

8. Submission

Finally, if you're submitting to the Semantic3D benchmark, we've included a handy tool to rename the submission files.

python renamer.py

9. Summary of directories

  • dataset/semantic_raw: Raw Semantic3D data, .txt and .labels files. Also contains the .pcd files generated by preprocess.py.
  • dataset/semantic_downsampled: Generated from downsample.py. Downsampled data, contains .pcd and .labels files.
  • result/sparse: Generated from predict.py. Sparse predictions, contains .pcd and .labels files.
  • result/dense: Dense predictions, contains .labels files.
  • result/dense_label_colorized: Dense predictions with points colored by label type.

 

How-to train and validate Open3D-PointNet++ on the KITTI dataset

This section provides additional information on how to train a model to work with the KITTI dataset.

1. Data download and preparation

  • First, make sure you followed steps 1 to 4 in "How-to train Open3D-PointNet++ on Semantic3D dataset".
  • Download the KITTI dataset using the existing KITTI download script:

cd Open3D-PointNet2-Semantic3D/dataset/kitti_raw
./raw_data_downloader.sh

2. Install auxiliary dependencies

pip install pykitti

3. Training with adapted Semantic3D data

python train.py --train_set train --config_file semantic_no_color.json

4. Perform real-time inference

python kitti_predict.py --ckpt logs/semantic_backup_dec_20_for_kitti/semantic/best_model_epoch_060.ckpt --kitti_root=Open3D-PointNet2-Semantic3D/dataset/kitti_raw

 

Open3D 0.4.0 is out!

Open3D Version 0.4.0 (released 2018-10-25)

The Open3D team and the Open Source Vision Foundation (http://www.osvf.org) are proud to announce the release of the 0.4 version of the Open3D library.

This release brings support for RealSense RGB-D sensors to Open3D, enabling functionalities such as real-time RGB-D capturing and a new point cloud viewer. We have also added new documentation and examples using RealSense sensors to create 3D reconstructions.

We are also excited to introduce support for Jupyter notebooks with a brand new WebGL widget to perform advanced 3D visualization from the comfort of your browser.

One of our main goals is to leverage Open3D as a tool to simplify the use of state-of-the-art 3D pipelines like those used in computer vision and machine learning. With this goal in mind, we are proud to introduce the Open3D Ecosystem, a set of repositories that make use of Open3D to create powerful applications. The first member of this ecosystem is Open3D-PointNet [link], a version of the famous machine learning architecture [link] for point cloud classification and semantic segmentation, which is now fully usable through familiar Open3D routines.

From a project infrastructure perspective, we have just finalized the integration with TravisCI to perform automatic unit testing. This is a large step forward in terms of quality control for Open3D, and it will help make our software more reliable while expediting its development. The team has made an enormous effort to define unit tests for all the functionalities present in the library, a task that will be concluded in future releases.

Check out our release video [link] to see these functionalities in action!

For a detailed description of all the features of Open3D 0.4, please keep reading. We hope you enjoy this release and hope to hear from you. Please send us feedback at info@open3d.org and join our Discord network [link] to participate in the discussions.

 

Thanks!

The Open3D team


Legend:

  • [Added]: Used to indicate addition of new features
  • [Changed]: Updates of existing functionalities
  • [Deprecated]: Functionalities/features that will be removed in future releases
  • [Removed]: Functionalities/features removed in this release
  • [Fixed]: For any bug fixes
  • [Breaking]: This functionality breaks the previous API and you need to check your code

 

Installation and project structure

  • [Added] Automated wheel creation with cmake
  • [Added] Automated make conda-package generation
  • [Added] Support for the ARM (Jetson TX2) platform (Thanks kuonangzhe!)
  • [Fixed] CMake install to solve problems with 3rd party header path
  • [Fixed] glfw linking issue when built from source
  • [Fixed] Bug in Ubuntu for the default install path

 

CORE features and applications

  • [Added] RealSense support for live synchronized RGB-D capture (Thanks baptiste-mnh and LLDavid)
  • [Added] Integration with Jupyter visualization WebGL widgets
  • [Added] New methods for outliers removal: statistical outlier removal and radius outlier removal (Thanks Nicolas Chaulet!)
  • [Added] Open3D-PointNet repository to Open3D Ecosystem
  • [Fixed] Problem with the ENABLE_HEADLESS_RENDERING flag
  • [Added] “Visible” parameter to Visualizer::CreateVisualizerWindow to enable off-screen windows
  • [Added] Stanford dataset for 3D reconstruction system
  • [Changed] Major update of the 3D reconstruction system to improve usability and robustness

Documentation and tutorials

  • [Added] RealSense support documentation
  • [Added] Jupyter examples to run PointNet using Open3D for point cloud classification

Testing and benchmarking

  • [Added] Consistent Random number generation initialization (compiler independent)
  • [Added] TravisCI support to automatic UnitTest evaluation
  • [Added] PointCloud test cases
  • [Added] TriangleMesh test cases
  • [Added] LineSet test cases
  • [Added] RGBDImage test cases
  • [Added] KDTreeFlann test cases

Open3D 0.3.0 is ready to go!

Open3D Version 0.3.0 (released 2018-09-13)

Open3D is being developed under the auspices of the Open Source Vision Foundation (http://www.osvf.org). The team has been working hard to make Open3D accessible and easy to use.

In this regard, version 0.3.0 brings major features related to library installation, including improved CMake installation for Linux, Mac and Windows on off-the-shelf systems; new installation options using PIP and CONDA for Linux, Mac and Windows; and an overall easier and cleaner installation experience.

We are also continuing to extend the 3D processing and visualization functionality. Among other features, version 0.3 brings support for enhanced 3D reconstruction; extension of TSDF volume integration to floating-point intensity images; and an improved non-blocking visualization tool.

This version also comes with extended and improved documentation. We have enhanced the tutorials on multiway registration, marching cubes, global registration, and headless rendering, among others.

Open3D 0.3.0 also includes our first set of tests to verify the integrity and correctness of the library. You can expect to see much more of this in future releases.

For a detailed description of all the features of Open3D 0.3, please keep reading. We hope you enjoy this release and hope to hear from you. Please send us feedback at info@open3d.org and join our Discord network (https://discord.gg/D35BGvn) to participate in the discussions.

Thanks!

The Open3D team


Legend:

  • [Added]: Used to indicate addition of new features
  • [Changed]: Updates of existing functionalities
  • [Deprecated]: Functionalities/features that will be removed in future releases
  • [Removed]: Functionalities/features removed in this release
  • [Fixed]: For any bug fixes
  • [Breaking]: This functionality breaks the previous API and you need to check your code

 

Installation and project structure

  • [Added] PIP installation under Conda environment for Linux, Mac and Windows for Python 2.7, 3.5 and 3.6 [Doc]
  • [Added] CONDA installation for Linux, Mac and Windows for Python 2.7, 3.5 and 3.6 [Doc]
  • [Changed] CMake installation to simplify installation in Linux, Mac and Windows
  • [Changed] Project folder structure
  • [Changed] Subgroups in CMakeLists.txt
  • [Breaking] Namespace three is now open3d, which breaks the previous API
  • [Removed] Namespace three removed
  • [Removed] OpenCV dependency
  • [Fixed] Excessive warnings during compilation
  • [Fixed] Library not found problem for -lGLEW on OSX
  • [Fixed] 'jpeglib.h' file not found problem on OSX
  • [Fixed] MacOS libpng linking problem
  • [Fixed] XCode build failure
  • [Fixed] Installation problems on brand-new Ubuntu 16.04 systems
  • [Fixed] Visual Studio 2017 15.8 build problem
  • [Fixed] Problem reinstalling open3d
  • [Fixed] Xcode IDE not producing libOpen3D.a

CORE features and applications

  • [Added] Version file
  • [Added] Feature to change view during non-blocking visualization
  • [Added] Support of rendering 3D line segments by generalization of LineSet and adding a Python binding
  • [Changed] Enhance features for 3D reconstruction system
  • [Changed] Extend TSDF volume integration algorithm to use additional inputs: float intensity Image + depth map
  • [Fixed] Name conflict in CreateWindow
  • [Fixed] Removes warnings in color map optimization
  • [Fixed] Visualization error on Visualizer::CaptureScreenImage (VisualizerRender.cpp)

Documentation and tutorials

  • [Added] Tutorial on how to transform a NumPy array into an open3d Image
  • [Changed] Tutorial on how to perform multiway registration, showing how to combine multiple point clouds into a single point cloud
  • [Changed] New version of the marching cubes mesh tutorial
  • [Changed] New documentation for color_map_optimization
  • [Changed] Documentation for headless rendering and fast global registration
  • [Changed] Code rendering style in documents. The documents are also directly linked to the tutorial code, to automatically reflect any future changes in the tutorials
  • [Fixed] Clarify how to retrieve estimated point cloud normal

Testing and benchmarking

  • [Added] New test for TestRGBDOdometry
  • [Changed] Updated rgbd_odometry test to include camera_primesense.json
  • [Fixed] Bug inTestRealSense.cpp

Misc

  • [Changed] Sort the file list in alphanumeric order when creating a global mesh from files

Open3D release 0.2 is here!

Version 0.2 (Release date July 1st, 2018)

The first major update of Open3D, with various additional features and bug fixes.

Visualization

  1. Headless rendering: GLFW 3.3 dev + OSMesa combination for supporting headless rendering. This feature is especially useful for users who want to obtain depth/normal/color renderings from a remote server without physical monitors.
  2. Non-blocking visualization: draw_geometries() is a useful function for a quick overview of static geometries. However, this function holds the process until the visualization window is closed. Non-blocking visualization shows a live update of the geometry while the window is open (see the sketch after this list).
  3. Mesh cropping: In v0.1 VisualizerWithEditing only supports point cloud cropping. The new version supports mesh cropping as well.
  4. Point cloud picker with an application of manual point cloud registration
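
A minimal sketch of the non-blocking pattern (assuming the post-rename flat namespace, open3d.Visualizer; the file path is a placeholder):

import open3d

pcd = open3d.read_point_cloud("cloud.pcd")
vis = open3d.Visualizer()
vis.create_window()
vis.add_geometry(pcd)

for _ in range(500):
    # ... update pcd.points here, e.g. with a newly captured frame ...
    vis.update_geometry()
    if not vis.poll_events():  # returns False once the window is closed
        break
    vis.update_renderer()

vis.destroy_window()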

Docker for Open3D: a new Docker CE based solution for utilizing Open3D. With this update, you can:

  1. sandbox Open3D from other applications on a machine.
  2. operate Open3D on a headless machine using VNC or the terminal.
  3. edit the Open3D code on the host side but run it inside an Open3D container.

Additional features:

  1. Fast global registration: Open3D’s implementation of the ‘fast global registration’ paper [Zhou et al 2016]. For the task of global registration, the single-threaded fast global registration is about 20 times faster than the RANSAC-based implementation.
  2. Color map optimization: another interesting application, which maps a seamless texture onto reconstructed geometry. This is an implementation of the ‘Color Map Optimization for 3D Reconstruction with Consumer Depth Cameras’ paper [Zhou and Koltun 2014]. The optimization pipeline creates sharp texture mapping on geometry captured with color cameras.
  3. Basic operations for color and depth images: a new function that generates a depth discontinuity mask from a depth image, and new dilation operators for making a thicker discontinuity mask.

Enhancement on build system:

  1. Refined CMake build system: polished the CMake build system so that it fully supports make install and make uninstall. Once installed, the Open3D library can be found using the find_package module in CMake. The CMakeLists.txt file for a new application is simplified.
  2. PyPI support: the new version provides pip install, which is a more convenient way to begin using Open3D. Try ‘pip install open3d-python’ and ‘import open3d’ in Python.
  3. Basic test framework: Adding initial support for ‘Gtest’ examples to test Open3D’s functions and classes.

Miscellaneous

  1. Changing Python package name: the Python package name is changed from py3d to open3d.
  2. Many bug fixes:
    1. Bug fix in the reconstruction system regarding defining and utilizing the information matrix.
    2. Additional materials in the Open3D documentation.

Open3D release 0.1

Version 0.1 (Release date Feb 22nd, 2018)

First official release. Open3D is an open-source library that supports rapid development of software that deals with 3D data. Open3D has the following core features:

Core principle

  1. Started from scratch.
  2. Focus on implementation of basic widely-used 3D processing algorithms.
  3. Does not depend on heavyweight libraries.
  4. Minimum lines of code
  5. The library can be built and run from source using Ubuntu/MacOSX/Windows systems.

Basic 3D data structures

  1. Open3D provides point cloud, triangle mesh, image, and pose graph data structures.
  2. Each data structure has its own I/O interface for various file formats (see the sketch after this list).
  3. Supported file formats:
    1. Image: JPEG, PNG.
    2. 3D geometry: Bin, PCD, PLY, PTS, XYZ, XYZN, XYZRGB.
    3. Camera trajectory & pose graph: JSON, LOG.
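
As a quick illustration of the I/O interface (a sketch using the current package name open3d; the 0.1-era module was named py3d, and file names are placeholders):

import open3d

pcd = open3d.read_point_cloud("scene.ply")    # 3D geometry I/O
open3d.write_point_cloud("scene.pcd", pcd)

img = open3d.read_image("color.png")          # image I/O
open3d.write_image("color_copy.png", img)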

Basic data processing algorithms

  1. Point cloud downsampling, normal estimation, and vertex coloring.
  2. Gaussian and Sobel filter for image processing.
  3. Scalable TSDF volume integration.

Scene reconstruction

  1. Provides basic scene reconstruction system specialized for RGBD sequence.
  2. The system consists of RGBD Odometry, pose graph optimization, and TSDF volume integration.
  3. The integration volume can be made scalable.

Surface alignment

  1. Implementation of a local geometric feature: the ‘Fast Point Feature Histograms (FPFH)’ paper [Rusu et al 2009].
  2. Point-to-point/point-to-plane/colored ICP implementations with OpenMP acceleration.
  3. Provides a basic, RANSAC-based global registration pipeline.
  4. Fast pose graph optimization: implementation of the ‘Robust reconstruction of indoor scenes’ paper [Choi et al 2015].
  5. In-house convex optimization: Gauss-Newton and Levenberg-Marquardt methods.

3D visualization

  1. Quick overview of point cloud, mesh, image.
  2. Various options to customize camera path or camera intrinsics.
  3. Color/depth/normal rendering supported.
  4. Providing rendering buffer access to save rendered images.
  5. Can configure custom key callback functions.
  6. Can select a region and crop the point cloud.

Python binding

  1. The C++ functions/classes/definitions are exposed to the Python API.
  2. The Python API provides a quick debug cycle for development of novel algorithms.