open3d.ml.torch.ops.continuous_conv#

open3d.ml.torch.ops.continuous_conv(filters, out_positions, extents, offset, inp_positions, inp_features, inp_importance, neighbors_index, neighbors_importance, neighbors_row_splits, align_corners=False, coordinate_mapping='ball_to_cube_radial', normalize=False, interpolation='linear', max_temp_mem_MB=64)#

Continuous convolution of two pointclouds.

This op computes the features for the forward pass. This example shows how to use this op:

import tensorflow as tf
import open3d.ml.tf as ml3d

# filter with spatial resolution 3x3x3, 8 input channels and 16 output channels
filters = tf.random.normal([3,3,3,8,16])

inp_positions = tf.random.normal([20,3])
inp_features = tf.random.normal([20,8])
out_positions = tf.random.normal([10,3])

# find for each output point the input points within the given radius
nsearch = ml3d.layers.FixedRadiusSearch()
radius = 1.2
neighbors = nsearch(inp_positions, out_positions, radius)

ml3d.ops.continuous_conv(filters,
                         out_positions,
                         extents=[[2*radius]],
                         offset=[0,0,0],
                         inp_positions=inp_positions,
                         inp_features=inp_features,
                         inp_importance=[],
                         neighbors_index=neighbors.neighbors_index,
                         neighbors_row_splits=neighbors.neighbors_row_splits,
                         neighbors_importance=[]
                        )

# or with pytorch
import torch
import open3d.ml.torch as ml3d

filters = torch.randn([3,3,3,8,16])

inp_positions = torch.randn([20,3])
inp_features = torch.randn([20,8])
out_positions = torch.randn([10,3])

nsearch = ml3d.nn.FixedRadiusSearch()
radius = 1.2
neighbors = nsearch(inp_positions, out_positions, radius)

ml3d.ops.continuous_conv(filters,
                         out_positions,
                         extents=torch.FloatTensor([[2*radius]]),
                         offset=torch.FloatTensor([0,0,0]),
                         inp_positions=inp_positions,
                         inp_features=inp_features,
                         inp_importance=torch.FloatTensor([]),
                         neighbors_index=neighbors.neighbors_index,
                         neighbors_row_splits=neighbors.neighbors_row_splits,
                         neighbors_importance=torch.FloatTensor([]),
                        )
align_corners: If True the outer voxel centers of the filter grid are aligned with the boundary of the spatial shape.

coordinate_mapping: Defines how the relative positions of the neighbors are mapped before computing filter indices. For all mappings, the relative coordinates will be scaled with the inverse extent, i.e. the extent becomes a unit cube. After that one of the following mappings will be applied:

“ball_to_cube_radial”: maps a unit ball to a unit cube by radial stretching.
“ball_to_cube_volume_preserving”: maps a unit ball to a unit cube preserving the volume.
“identity”: the identity mapping.

Use “ball_to_cube_radial” for a spherical or ellipsoidal filter window and “identity” for a rectangular filter window.
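For example, a rectangular (box-shaped) filter window is configured with the “identity” mapping and the box size as extent. A minimal sketch continuing the PyTorch example above (not part of the original docstring; the box is chosen small enough that the radius-1.2 neighbor search above already covers it):

# box-shaped filter window: pass the box size as extent and use 'identity'
box_extents = torch.FloatTensor([[1.2, 1.2, 1.2]])  # x_size, y_size, z_size of the box
ml3d.ops.continuous_conv(filters,
                         out_positions,
                         extents=box_extents,
                         offset=torch.FloatTensor([0,0,0]),
                         inp_positions=inp_positions,
                         inp_features=inp_features,
                         inp_importance=torch.FloatTensor([]),
                         neighbors_index=neighbors.neighbors_index,
                         neighbors_row_splits=neighbors.neighbors_row_splits,
                         neighbors_importance=torch.FloatTensor([]),
                         coordinate_mapping='identity')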

normalize: If True the output feature values will be normalized using the sum of neighbors_importance for each output point.

interpolation: If interpolation is “linear” then each filter value lookup is a trilinear interpolation. If interpolation is “nearest_neighbor” only the spatially closest value is considered. This makes the filter and therefore the convolution discontinuous.

max_temp_mem_MB: Defines the maximum temporary memory in megabytes to be used for the GPU implementation. More memory means fewer kernel invocations. Note that a minimum amount of temp memory will always be allocated even if this variable is set to 0.

filters: The filter parameters. The shape of the filter is [depth, height, width, in_ch, out_ch]. The dimensions ‘depth’, ‘height’, ‘width’ define the spatial resolution of the filter. The spatial size of the filter is defined by the parameter ‘extents’.

out_positions: A 2D tensor with the 3D point positions of each output point. The coordinates of each point form a vector with format [x,y,z].

extents: The extent defines the spatial size of the filter for each output point. It is a 2D vector of the form [[x_size, y_size, z_size], ..]. For ‘ball to cube’ coordinate mappings the extent defines the bounding box of the ball. Broadcasting is supported for all axes: e.g. providing only the extent for a single point or providing only ‘x_size’ is valid.
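For instance, continuing the PyTorch example with its 10 output points, all of the following extent tensors are valid thanks to broadcasting (a hedged sketch, not part of the original docstring):

extents_per_point = torch.ones([10, 3]) * 2.4           # an x,y,z size for every output point
extents_shared = torch.FloatTensor([[2.4, 2.4, 1.2]])   # one extent shared by all output points
extents_isotropic = torch.FloatTensor([[2.4]])          # a single size broadcast to x, y and z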

offset: A 1D tensor which defines the offset in voxel units to shift the input points. Offsets will be ignored if align_corners is True.

inp_positions: A 2D tensor with the 3D point positions of each input point. The coordinates of each point form a vector with format [x,y,z].

inp_features: A 2D tensor which stores a feature vector for each input point.

inp_importance: An optional scalar importance for each input point. The features of each point will be multiplied by the corresponding value. The shape is [num input points]. Use a zero length Tensor to disable.
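For the 20 input points of the PyTorch example, a per-point weighting could be passed like this (a hedged sketch, not part of the original docstring):

inp_importance = torch.rand([20])      # one scalar weight per input point
# or disable the per-point weighting with a zero length tensor:
inp_importance = torch.FloatTensor([])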

neighbors_index: The neighbors_index stores a list of indices of neighbors for each output point as nested lists. The start and end of each list can be computed using ‘neighbors_row_splits’.

neighbors_importance: Tensor of the same shape as ‘neighbors_index’ with a scalar value that is used to scale the features of each neighbor. Use a zero length Tensor to weight each neighbor with 1.

neighbors_row_splits: The exclusive prefix sum of the neighbor count for the output points including the total neighbor count as the last element. The size of this array is the number of output points + 1.
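Together, neighbors_index and neighbors_row_splits form a CSR-like layout. A small sketch of how the neighbor list of a single output point can be recovered, continuing the PyTorch example above (an illustration, not part of the original docstring):

i = 0  # index of an output point
start = int(neighbors.neighbors_row_splits[i])
end = int(neighbors.neighbors_row_splits[i + 1])
neighbors_of_i = neighbors.neighbors_index[start:end]  # indices into inp_positions / inp_features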

output_type: The type for the output.

out_features: A Tensor with the output feature vectors for each output point.
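Continuing the PyTorch example above, the returned tensor has one feature vector per output point, so its shape is [num output points, out_ch]. A minimal usage sketch (not part of the original docstring) that re-runs the example call, stores the result and inspects it:

out_features = ml3d.ops.continuous_conv(filters,
                                        out_positions,
                                        extents=torch.FloatTensor([[2*radius]]),
                                        offset=torch.FloatTensor([0,0,0]),
                                        inp_positions=inp_positions,
                                        inp_features=inp_features,
                                        inp_importance=torch.FloatTensor([]),
                                        neighbors_index=neighbors.neighbors_index,
                                        neighbors_row_splits=neighbors.neighbors_row_splits,
                                        neighbors_importance=torch.FloatTensor([]))
print(out_features.shape)  # torch.Size([10, 16]): 10 output points, 16 output channels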