open3d.t.geometry.PointCloud

class open3d.t.geometry.PointCloud

A point cloud contains a list of 3D points. The point cloud class stores the attribute data in key-value maps, where the key is a string representing the attribute name and the value is a Tensor containing the attribute data.

The attributes of the point cloud have different levels:

import open3d as o3d

device = o3d.core.Device("CPU:0")
dtype = o3d.core.float32

# Create an empty point cloud
# Use pcd.point to access the points' attributes
pcd = o3d.t.geometry.PointCloud(device)

# Default attribute: "positions".
# This attribute is created by default and is required by all point clouds.
# The shape must be (N, 3). The device of "positions" determines the device
# of the point cloud.
pcd.point.positions = o3d.core.Tensor([[0, 0, 0],
                                       [1, 1, 1],
                                       [2, 2, 2]], dtype, device)

# Common attributes: "normals", "colors".
# Common attributes are used in built-in point cloud operations. The
# spellings must be correct. For example, if "normal" is used instead of
# "normals", some internal operations that expect "normals" will not work.
# "normals" and "colors" must have shape (N, 3) and must be on the same
# device as the point cloud.
pcd.point.normals = o3d.core.Tensor([[0, 0, 1],
                                     [0, 1, 0],
                                     [1, 0, 0]], dtype, device)
pcd.point.colors = o3d.core.Tensor([[0.0, 0.0, 0.0],
                                    [0.1, 0.1, 0.1],
                                    [0.2, 0.2, 0.2]], dtype, device)

# User-defined attributes.
# You can also attach custom attributes. The value tensor must be on the
# same device as the point cloud. There are no restrictions on the shape
# or dtype, e.g.,
pcd.point.intensities = o3d.core.Tensor([0.3, 0.1, 0.4], dtype, device)
pcd.point.labels = o3d.core.Tensor([3, 1, 4], o3d.core.int32, device)
__init__(*args, **kwargs)

Overloaded function.

  1. __init__(self: open3d.cpu.pybind.t.geometry.PointCloud, device: open3d.cpu.pybind.core.Device = CPU:0) -> None

Construct an empty point cloud on the provided device (default: ‘CPU:0’).

  2. __init__(self: open3d.cpu.pybind.t.geometry.PointCloud, positions: open3d.cpu.pybind.core.Tensor) -> None

  3. __init__(self: open3d.cpu.pybind.t.geometry.PointCloud, map_keys_to_tensors: Dict[str, open3d.cpu.pybind.core.Tensor]) -> None

  4. __init__(self: open3d.cpu.pybind.t.geometry.PointCloud, arg0: open3d.cpu.pybind.t.geometry.PointCloud) -> None

Copy constructor

append(self: open3d.cpu.pybind.t.geometry.PointCloud, arg0: open3d.cpu.pybind.t.geometry.PointCloud) → open3d.cpu.pybind.t.geometry.PointCloud
clear(self)

Clear all elements in the geometry.

Returns

open3d.t.geometry.Geometry

clone(self: open3d.cpu.pybind.t.geometry.PointCloud) → open3d.cpu.pybind.t.geometry.PointCloud

Returns a copy of the point cloud on the same device.

cluster_dbscan(self: open3d.cpu.pybind.t.geometry.PointCloud, eps: float, min_points: int, print_progress: bool = False) → open3d.cpu.pybind.core.Tensor

Cluster the point cloud using the DBSCAN algorithm (Ester et al., ‘A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise’, 1996). This is a wrapper for a CPU implementation; a copy of the point cloud data is made and the resulting labels are copied back.

Parameters
  • eps (float) – Density parameter that is used to find neighbouring points.

  • min_points (int) – Minimum number of points to form a cluster.

  • print_progress (bool, optional, default=False) – If True, progress is printed to the console.

Returns

An integer tensor of per-point cluster labels on the same device as the point cloud; the label -1 indicates noise.

Example

We use the Redwood dataset for demonstration:

import open3d as o3d
import matplotlib.pyplot as plt

sample_ply_data = o3d.data.PLYPointCloud()
pcd = o3d.t.io.read_point_cloud(sample_ply_data.path)
labels = pcd.cluster_dbscan(eps=0.02, min_points=10, print_progress=True)

max_label = labels.max().item()
colors = plt.get_cmap("tab20")(
        labels.numpy() / (max_label if max_label > 0 else 1))
colors = o3d.core.Tensor(colors[:, :3], o3d.core.float32)
colors[labels < 0] = 0
pcd.point.colors = colors
o3d.visualization.draw([pcd])
compute_boundary_points(self: open3d.cpu.pybind.t.geometry.PointCloud, radius: float, max_nn: int = 30, angle_threshold: float = 90.0) → Tuple[open3d.cpu.pybind.t.geometry.PointCloud, open3d.cpu.pybind.core.Tensor]

Compute the boundary points of a point cloud. The implementation is inspired by the PCL implementation. Reference: https://pointclouds.org/documentation/classpcl_1_1_boundary_estimation.html

Parameters
  • radius (float) – Neighbor search radius parameter.

  • max_nn (int, optional, default=30) – Maximum number of neighbors to search.

  • angle_threshold (float, optional, default=90.0) – Angle threshold (in degrees) used to decide whether a point lies on the boundary.

Returns

Tuple of the boundary point cloud and a boolean mask tensor marking the boundary points of the input.

Example

We will load the DemoCropPointCloud dataset and compute its boundary points:

ply_point_cloud = o3d.data.DemoCropPointCloud()
pcd = o3d.t.io.read_point_cloud(ply_point_cloud.point_cloud_path)
radius, max_nn = 0.02, 30  # example search parameters
boundaries, mask = pcd.compute_boundary_points(radius, max_nn)
boundaries.paint_uniform_color([1.0, 0.0, 0.0])
o3d.visualization.draw([pcd, boundaries])
compute_convex_hull(self: open3d.cpu.pybind.t.geometry.PointCloud, joggle_inputs: bool = False) → open3d::t::geometry::TriangleMesh

Compute the convex hull of a triangle mesh using qhull. This runs on the CPU.

Parameters
  • joggle_inputs (bool, optional, default=False) – Handle precision problems by randomly perturbing the input data. Set to True if perturbing the input is acceptable but you need convex simplicial output. If False, neighbouring facets may be merged in case of precision problems. See the QHull docs for more details.

Returns

TriangleMesh representing the convex hull. It contains an extra vertex property “point_indices” that holds the index of the corresponding point in the original point cloud.

Example

We will load the Eagle dataset, then compute and display its convex hull:

eagle = o3d.data.EaglePointCloud()
pcd = o3d.t.io.read_point_cloud(eagle.path)
hull = pcd.compute_convex_hull()
o3d.visualization.draw([{'name': 'eagle', 'geometry': pcd}, {'name': 'convex hull', 'geometry': hull}])
cpu(self: open3d.cpu.pybind.t.geometry.PointCloud) → open3d.cpu.pybind.t.geometry.PointCloud

Transfer the point cloud to CPU. If the point cloud is already on CPU, no copy will be performed.

static create_from_depth_image(depth, intrinsics, extrinsics=(with default value), depth_scale=1000.0, depth_max=3.0, stride=1, with_normals=False)

Factory function to create a point cloud (with only the ‘positions’ attribute) from a depth image and a camera model.

Given a depth value d at image coordinate (u, v), the corresponding 3D point is:

z = d / depth_scale
x = (u - cx) * z / fx
y = (v - cy) * z / fy

Parameters
  • depth (open3d.t.geometry.Image) – The input depth image should be a uint16_t image.

  • intrinsics (open3d.core.Tensor) – Intrinsic parameters of the camera.

  • extrinsics (open3d.core.Tensor, optional) – Extrinsic parameters of the camera. Default value: the 4x4 identity matrix (Tensor[shape={4, 4}, stride={4, 1}, Float32]).

  • depth_scale (float, optional, default=1000.0) – The depth is scaled by 1 / depth_scale.

  • depth_max (float, optional, default=3.0) – Truncated at depth_max distance.

  • stride (int, optional, default=1) – Sampling factor to support coarse point cloud extraction. Unless normals are requested, there is no low pass filtering, so aliasing is possible for stride>1.

  • with_normals (bool, optional, default=False) – Also compute normals for the point cloud. If True, the point cloud will only contain points with valid normals. If normals are requested, the depth map is first filtered to ensure smooth normals.

Returns

open3d.t.geometry.PointCloud

static create_from_rgbd_image(rgbd_image, intrinsics, extrinsics=(with default value), depth_scale=1000.0, depth_max=3.0, stride=1, with_normals=False)

Factory function to create a point cloud (with the attributes ‘positions’ and ‘colors’) from an RGBD image and a camera model.

Given a depth value d at image coordinate (u, v), the corresponding 3D point is:

z = d / depth_scale
x = (u - cx) * z / fx
y = (v - cy) * z / fy

Parameters
  • rgbd_image (open3d.t.geometry.RGBDImage) – The input RGBD image should have a uint16_t depth image and RGB image with any DType and the same size.

  • intrinsics (open3d.core.Tensor) – Intrinsic parameters of the camera.

  • extrinsics (open3d.core.Tensor, optional) – Extrinsic parameters of the camera. Default value: the 4x4 identity matrix (Tensor[shape={4, 4}, stride={4, 1}, Float32]).

  • depth_scale (float, optional, default=1000.0) – The depth is scaled by 1 / depth_scale.

  • depth_max (float, optional, default=3.0) – Truncated at depth_max distance.

  • stride (int, optional, default=1) – Sampling factor to support coarse point cloud extraction. Unless normals are requested, there is no low pass filtering, so aliasing is possible for stride>1.

  • with_normals (bool, optional, default=False) – Also compute normals for the point cloud. If True, the point cloud will only contain points with valid normals. If normals are requested, the depth map is first filtered to ensure smooth normals.

Returns

open3d.t.geometry.PointCloud

crop(*args, **kwargs)

Overloaded function.

cuda(self: open3d.cpu.pybind.t.geometry.PointCloud, device_id: int = 0) → open3d.cpu.pybind.t.geometry.PointCloud

Transfer the point cloud to a CUDA device. If the point cloud is already on the specified CUDA device, no copy will be performed.

estimate_color_gradients(self: open3d.cpu.pybind.t.geometry.PointCloud, max_nn: Optional[int] = 30, radius: Optional[float] = None) → None

Function to estimate point color gradients. It uses KNN search (not recommended on GPU) if only the max_nn parameter is provided, radius search (not recommended on GPU) if only radius is provided, and hybrid search (recommended) if both max_nn and radius are provided.

estimate_normals(self, max_nn=30, radius=None)

Function to estimate point normals. If the point cloud already has normals, the estimated normals are oriented consistently with the existing ones. It uses KNN search (not recommended on GPU) if only the max_nn parameter is provided, radius search (not recommended on GPU) if only radius is provided, and hybrid search (recommended) if both max_nn and radius are provided.

Parameters
  • max_nn (Optional[int], optional, default=30) – Neighbor search max neighbors parameter [default = 30].

  • radius (Optional[float], optional, default=None) – Neighbor search radius parameter; when provided together with max_nn, hybrid search is used. [Recommended ~1.4x voxel size].

Returns

None

extrude_linear(self: open3d.cpu.pybind.t.geometry.PointCloud, vector: open3d.cpu.pybind.core.Tensor, scale: float = 1.0, capping: bool = True) → open3d::t::geometry::LineSet

Sweeps the point cloud along a direction vector.

Parameters
  • vector (open3d.core.Tensor) – The direction vector.

  • scale (float) – Scalar factor that scales the direction vector.

Returns

A line set with the result of the sweep operation.

Example

This code generates a set of straight lines from a point cloud:

import open3d as o3d
import numpy as np
pcd = o3d.t.geometry.PointCloud(np.random.rand(10, 3))
lines = pcd.extrude_linear([0, 1, 0])
o3d.visualization.draw([{'name': 'lines', 'geometry': lines}])

extrude_rotation(self: open3d.cpu.pybind.t.geometry.PointCloud, angle: float, axis: open3d.cpu.pybind.core.Tensor, resolution: int = 16, translation: float = 0.0, capping: bool = True) → open3d::t::geometry::LineSet

Sweeps the point set rotationally about an axis.

Parameters
  • angle (float) – The rotation angle in degrees.

  • axis (open3d.core.Tensor) – The rotation axis.

  • resolution (int) – The resolution defines the number of intermediate sweeps about the rotation axis.

  • translation (float) – The translation along the rotation axis.

Returns

A line set with the result of the sweep operation.

Example

This code generates a number of helices from a point cloud:

import open3d as o3d
import numpy as np
pcd = o3d.t.geometry.PointCloud(np.random.rand(10,3))
helices = pcd.extrude_rotation(3*360, [0,1,0], resolution=3*16, translation=2)
o3d.visualization.draw([{'name': 'helices', 'geometry': helices}])
farthest_point_down_sample(self, num_samples)

Downsample a point cloud into an output point cloud containing the points that are farthest apart. The sampling iteratively selects the point farthest from the previously selected points.

Parameters

num_samples (int) – Number of points to be sampled.

Returns

open3d.t.geometry.PointCloud

static from_legacy(pcd_legacy: open3d.cpu.pybind.geometry.PointCloud, dtype: open3d.cpu.pybind.core.Dtype = Float32, device: open3d.cpu.pybind.core.Device = CPU:0) → open3d.cpu.pybind.t.geometry.PointCloud

Create a PointCloud from a legacy Open3D PointCloud.

get_axis_aligned_bounding_box(self: open3d.cpu.pybind.t.geometry.PointCloud) → open3d::t::geometry::AxisAlignedBoundingBox

Create an axis-aligned bounding box from attribute ‘positions’.

get_center(self: open3d.cpu.pybind.t.geometry.PointCloud) → open3d.cpu.pybind.core.Tensor

Returns the center for point coordinates.

get_max_bound(self: open3d.cpu.pybind.t.geometry.PointCloud) → open3d.cpu.pybind.core.Tensor

Returns the max bound for point coordinates.

get_min_bound(self: open3d.cpu.pybind.t.geometry.PointCloud) → open3d.cpu.pybind.core.Tensor

Returns the min bound for point coordinates.

get_oriented_bounding_box(self: open3d.cpu.pybind.t.geometry.PointCloud) → open3d::t::geometry::OrientedBoundingBox

Create an oriented bounding box from attribute ‘positions’.

has_valid_material(self: open3d.cpu.pybind.t.geometry.DrawableGeometry) → bool

Returns true if the geometry’s material is valid.

hidden_point_removal(self: open3d.cpu.pybind.t.geometry.PointCloud, camera_location: open3d.cpu.pybind.core.Tensor, radius: float) → Tuple[open3d::t::geometry::TriangleMesh, open3d.cpu.pybind.core.Tensor]

Removes hidden points from a point cloud and returns a mesh of the remaining points. Based on Katz et al. ‘Direct Visibility of Point Sets’, 2007. Additional information about the choice of radius for noisy point clouds can be found in Mehra et al. ‘Visibility of Noisy Point Cloud Data’, 2010. This is a wrapper for a CPU implementation; a copy of the point cloud data is made, and the resulting visible triangle mesh and indices are returned.

Parameters
  • camera_location (open3d.core.Tensor) – All points not visible from that location will be removed.

  • radius (float) – The radius of the spherical projection.

Returns

Tuple of visible triangle mesh and indices of visible points on the same device as the point cloud.

Example

We use the Armadillo mesh to compute the points visible from a given camera:

import open3d as o3d
import numpy as np

# Convert mesh to a point cloud and estimate dimensions.
armadillo_data = o3d.data.ArmadilloMesh()
pcd = o3d.io.read_triangle_mesh(
        armadillo_data.path).sample_points_poisson_disk(5000)

diameter = np.linalg.norm(
        np.asarray(pcd.get_max_bound()) - np.asarray(pcd.get_min_bound()))

# Define parameters used for hidden_point_removal.
camera = o3d.core.Tensor([0, 0, diameter], o3d.core.float32)
radius = diameter * 100

# Get all points that are visible from given view point.
pcd = o3d.t.geometry.PointCloud.from_legacy(pcd)
_, pt_map = pcd.hidden_point_removal(camera, radius)
pcd = pcd.select_by_index(pt_map)
o3d.visualization.draw([pcd], point_size=5)
is_empty(self)

Returns True iff the geometry is empty.

Returns

bool

normalize_normals(self: open3d.cpu.pybind.t.geometry.PointCloud) → open3d.cpu.pybind.t.geometry.PointCloud

Normalize point normals to length 1.

orient_normals_consistent_tangent_plane(self, k)

Function to orient the normals with respect to consistent tangent planes.

Parameters

k (int) – Number of k nearest neighbors used in constructing the Riemannian graph used to propagate normal orientation.

Returns

None

orient_normals_to_align_with_direction(self, orientation_reference=(with default value))

Function to orient the normals of a point cloud.

Parameters

orientation_reference (open3d.core.Tensor, optional) –

Normals are oriented with respect to orientation_reference. Default value: [0.0 0.0 1.0] (Tensor[shape={3}, stride={1}, Float32]).

Returns

None

orient_normals_towards_camera_location(self, camera_location=(with default value))

Function to orient the normals of a point cloud.

Parameters

camera_location (open3d.core.Tensor, optional) –

Normals are oriented towards the camera_location. Default value: [0.0 0.0 0.0] (Tensor[shape={3}, stride={1}, Float32]).

Returns

None

paint_uniform_color(self, color)

Assigns uniform color to the point cloud.

Parameters

color (open3d.core.Tensor) – Color of the pointcloud. Floating color values are clipped between 0.0 and 1.0.

Returns

open3d.t.geometry.PointCloud

project_to_depth_image(self: open3d.cpu.pybind.t.geometry.PointCloud, width: int, height: int, intrinsics: open3d.cpu.pybind.core.Tensor, extrinsics: open3d.cpu.pybind.core.Tensor = <4x4 identity, Tensor[shape={4, 4}, stride={4, 1}, Float32]>, depth_scale: float = 1000.0, depth_max: float = 3.0) -> open3d::t::geometry::Image

Project a point cloud to a depth image.

project_to_rgbd_image(self: open3d.cpu.pybind.t.geometry.PointCloud, width: int, height: int, intrinsics: open3d.cpu.pybind.core.Tensor, extrinsics: open3d.cpu.pybind.core.Tensor = <4x4 identity, Tensor[shape={4, 4}, stride={4, 1}, Float32]>, depth_scale: float = 1000.0, depth_max: float = 3.0) -> open3d::t::geometry::RGBDImage

Project a colored point cloud to an RGBD image.

random_down_sample(self, sampling_ratio)

Downsample a point cloud by randomly selecting points and their associated attributes.

Parameters

sampling_ratio (float) – Sampling ratio, the ratio of the number of sampled points to the total number of points in the point cloud.

Returns

open3d.t.geometry.PointCloud

remove_duplicated_points(self: open3d.cpu.pybind.t.geometry.PointCloud) → Tuple[open3d.cpu.pybind.t.geometry.PointCloud, open3d.cpu.pybind.core.Tensor]

Remove duplicated points and their associated attributes.

remove_non_finite_points(self: open3d.cpu.pybind.t.geometry.PointCloud, remove_nan: bool = True, remove_infinite: bool = True) → Tuple[open3d.cpu.pybind.t.geometry.PointCloud, open3d.cpu.pybind.core.Tensor]

Remove all points from the point cloud that have a NaN or infinite entry. The corresponding attributes are also removed.

Parameters
  • remove_nan (bool, optional, default=True) – Remove NaN values from the point cloud.

  • remove_infinite (bool, optional, default=True) – Remove infinite values from the point cloud.

Returns

Tuple of filtered point cloud and boolean mask tensor for selected values w.r.t. input point cloud.

remove_radius_outliers(self, nb_points, search_radius)

Remove points that have less than nb_points neighbors in a sphere of a given search radius.

Parameters
  • nb_points (int) – Number of neighbor points required within the radius.

  • search_radius (float) – Radius of the sphere.

Returns

Tuple of the filtered point cloud and a boolean mask tensor for selected values w.r.t. the input point cloud (Tuple[open3d.t.geometry.PointCloud, open3d.core.Tensor]).

remove_statistical_outliers(self: open3d.cpu.pybind.t.geometry.PointCloud, nb_neighbors: int, std_ratio: float) → Tuple[open3d.cpu.pybind.t.geometry.PointCloud, open3d.cpu.pybind.core.Tensor]

Remove points that are, on average, farther from their nb_neighbors nearest neighbors than the rest of the point cloud. This function is not recommended for use on GPU.

Parameters
  • nb_neighbors (int) – Number of neighbors around the target point.

  • std_ratio (float) – Standard deviation ratio.

Returns

Tuple of filtered point cloud and boolean mask tensor for selected values w.r.t. input point cloud.

rotate(self: open3d.cpu.pybind.t.geometry.PointCloud, R: open3d.cpu.pybind.core.Tensor, center: open3d.cpu.pybind.core.Tensor) → open3d.cpu.pybind.t.geometry.PointCloud

Rotate points and normals (if exist).

scale(self: open3d.cpu.pybind.t.geometry.PointCloud, scale: float, center: open3d.cpu.pybind.core.Tensor) → open3d.cpu.pybind.t.geometry.PointCloud

Scale points.

segment_plane(self: open3d.cpu.pybind.t.geometry.PointCloud, distance_threshold: float = 0.01, ransac_n: int = 3, num_iterations: int = 100, probability: float = 0.99999999) → Tuple[open3d.cpu.pybind.core.Tensor, open3d.cpu.pybind.core.Tensor]

Segments a plane in the point cloud using the RANSAC algorithm. This is a wrapper for a CPU implementation; a copy of the point cloud data is made, and the resulting plane model and inlier indices are returned.

Parameters
  • distance_threshold (float, optional, default=0.01) – Max distance a point can be from the plane model and still be considered an inlier.

  • ransac_n (int, optional, default=3) – Number of initial points to be considered inliers in each iteration.

  • num_iterations (int, optional, default=100) – Maximum number of iterations.

  • probability (float, optional, default=0.99999999) – Expected probability of finding the optimal plane.

Returns

Tuple of the plane model ax + by + cz + d = 0 and the indices of the plane inliers on the same device as the point cloud.

Example

We use the Redwood dataset to compute its plane model and inliers:

import open3d as o3d

sample_pcd_data = o3d.data.PCDPointCloud()
pcd = o3d.t.io.read_point_cloud(sample_pcd_data.path)
plane_model, inliers = pcd.segment_plane(distance_threshold=0.01,
                                         ransac_n=3,
                                         num_iterations=1000)
inlier_cloud = pcd.select_by_index(inliers)
inlier_cloud = inlier_cloud.paint_uniform_color([1.0, 0, 0])
outlier_cloud = pcd.select_by_index(inliers, invert=True)
o3d.visualization.draw([inlier_cloud, outlier_cloud])
select_by_index(self, indices, invert=False, remove_duplicates=False)

Select points from the input point cloud using the given indices, returning a new point cloud.

Parameters
  • indices (open3d.core.Tensor) – Int64 indexing tensor of shape {n,} containing the index values of the points to select.

  • invert (bool, optional, default=False) – Set to True to invert the selection of indices, and also ignore the duplicated indices.

  • remove_duplicates (bool, optional, default=False) – Set to True to remove the duplicated indices.

Returns

open3d.t.geometry.PointCloud

select_by_mask(self, boolean_mask, invert=False)

Select points from the input point cloud using a boolean mask, returning a new point cloud.

Parameters
  • boolean_mask (open3d.core.Tensor) – Boolean indexing tensor of shape {n,} containing True for the indices that are to be selected.

  • invert (bool, optional, default=False) – Set to True to invert the selection of indices.

Returns

open3d.t.geometry.PointCloud

to(self: open3d.cpu.pybind.t.geometry.PointCloud, device: open3d.cpu.pybind.core.Device, copy: bool = False) → open3d.cpu.pybind.t.geometry.PointCloud

Transfer the point cloud to a specified device.

to_legacy(self: open3d.cpu.pybind.t.geometry.PointCloud) → open3d.cpu.pybind.geometry.PointCloud

Convert to a legacy Open3D PointCloud.

transform(self: open3d.cpu.pybind.t.geometry.PointCloud, transformation: open3d.cpu.pybind.core.Tensor) → open3d.cpu.pybind.t.geometry.PointCloud

Transforms the points and normals (if exist).

translate(self: open3d.cpu.pybind.t.geometry.PointCloud, translation: open3d.cpu.pybind.core.Tensor, relative: bool = True) → open3d.cpu.pybind.t.geometry.PointCloud

Translates points.

uniform_down_sample(self, every_k_points)

Downsamples a point cloud by selecting every kth index point and its attributes.

Parameters

every_k_points (int) – Sample rate, the selected point indices are [0, k, 2k, …].

Returns

open3d.t.geometry.PointCloud

voxel_down_sample(self, voxel_size)

Downsamples a point cloud with a specified voxel size.

Parameters

voxel_size (float) – Voxel size. A positive number.

Returns

open3d.t.geometry.PointCloud

property device

Returns the device of the geometry.

property is_cpu

Returns true if the geometry is on CPU.

property is_cuda

Returns true if the geometry is on CUDA.

property material
property point

Point’s attributes: positions, colors, normals, etc.