open3d.ml.tf.pipelines.SemanticSegmentation

class open3d.ml.tf.pipelines.SemanticSegmentation(model, dataset=None, name='SemanticSegmentation', batch_size=4, val_batch_size=4, test_batch_size=3, max_epoch=100, learning_rate=0.01, lr_decays=0.95, save_ckpt_freq=20, adam_lr=0.01, scheduler_gamma=0.95, momentum=0.98, main_log_dir='./logs/', device='gpu', split='train', train_sum_dir='train_log', **kwargs)

This class allows you to perform semantic segmentation for both training and inference using the TensorFlow framework. The pipeline has several stages: pre-processing, dataset loading, testing, and inference or training.

Example:

This example builds a SemanticSegmentation pipeline and runs training on the SemanticKITTI dataset.

import open3d.ml.tf as ml3d

# Load the dataset and a model, then assemble the training pipeline.
# RandLANet is one example of a compatible model.
dataset = ml3d.datasets.SemanticKITTI(dataset_path='/path/to/SemanticKITTI/')
model = ml3d.models.RandLANet()
pipeline = ml3d.pipelines.SemanticSegmentation(
    model,
    dataset=dataset,
    name='SemanticSegmentation',
    batch_size=4,
    val_batch_size=4,
    test_batch_size=3,
    max_epoch=100,
    learning_rate=1e-2,
    lr_decays=0.95,
    save_ckpt_freq=20,
    adam_lr=1e-2,
    scheduler_gamma=0.95,
    momentum=0.98,
    main_log_dir='./logs/',
    device='gpu',
    split='train',
    train_sum_dir='train_log')

# Start training.
pipeline.run_train()
Args:
    dataset: The 3D ML dataset class. You can use the base dataset, sample datasets, or a custom dataset.
    model: The model to be used for building the pipeline.
    name: The name of the current training.
    batch_size: The batch size to be used for training.
    val_batch_size: The batch size to be used for validation.
    test_batch_size: The batch size to be used for testing.
    max_epoch: The maximum number of epochs to train for.
    learning_rate: The hyperparameter that controls the size of the weight updates during training. Also known as step size.
    lr_decays: The learning rate decay for the training.
    save_ckpt_freq: The frequency (in epochs) at which checkpoints are saved.
    adam_lr: The learning rate to be applied for Adam optimization.
    scheduler_gamma: The decay factor associated with the scheduler.
    momentum: The momentum that accelerates the training rate schedule.
    main_log_dir: The directory where logs are stored.
    device: The device to be used for training.
    split: The dataset split to be used. In this example, we have used 'train'.
    train_sum_dir: The directory where the training summary is stored.

Returns:

    class: The corresponding class.

__init__(model, dataset=None, name='SemanticSegmentation', batch_size=4, val_batch_size=4, test_batch_size=3, max_epoch=100, learning_rate=0.01, lr_decays=0.95, save_ckpt_freq=20, adam_lr=0.01, scheduler_gamma=0.95, momentum=0.98, main_log_dir='./logs/', device='gpu', split='train', train_sum_dir='train_log', **kwargs)

Initialize.

Parameters
  • model – The network for the pipeline.

  • dataset – The dataset, or None for an inference-only pipeline.

  • device – ‘gpu’ or ‘cpu’

  • kwargs – Additional keyword arguments.

Returns

The corresponding class.

Return type

class
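
Since dataset may be None, the same constructor covers inference-only use. A minimal sketch, assuming a RandLANet model (any compatible model works; trained weights would then be restored with load_ckpt, described below):

import open3d.ml.tf as ml3d

# Inference-only pipeline: no dataset attached.
model = ml3d.models.RandLANet()
pipeline = ml3d.pipelines.SemanticSegmentation(model, dataset=None, device='gpu')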

get_3d_summary(results, input_data, epoch, save_gt=True)

Create visualization for network inputs and outputs.

Parameters
  • results – Model output (see below).

  • input_data – Model input (see below).

  • epoch (int) – step

  • save_gt (bool) – Save ground truth (for ‘train’ or ‘valid’ stages).

RandLaNet:

    results (Tensor(B, N, C)): Prediction scores for all classes.
    input_data (Tuple): Batch of pointclouds and labels.
        input_data[0] (Tensor(B, N, 3), float): points
        input_data[-1] (Tensor(B, N), int): labels

SparseConvUNet:

    results (Tensor(SN, C)): Prediction scores for all classes. SN is the total number of points in the batch.
    input_data (Dict): Batch of pointclouds and labels. Keys should be:
        ‘point’ [Tensor(SN, 3), float]: Concatenated points.
        ‘batch_lengths’ [Tensor(B,), int]: Number of points in each point cloud of the batch.
        ‘label’ [Tensor(SN,), int (optional)]: Concatenated labels.

Returns

[Dict] Visualizations of inputs and outputs suitable to save as an Open3D for TensorBoard summary.
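
For reference, a sketch of a RandLaNet-style call with dummy tensors; the shapes follow the description above, but the values and the pipeline instance are placeholders:

import tensorflow as tf

# Illustrative shapes for the RandLaNet case: B point clouds of N points,
# C classes. Random values stand in for real model output.
B, N, C = 2, 1024, 19
results = tf.random.uniform((B, N, C))       # prediction scores
points = tf.random.uniform((B, N, 3))        # xyz coordinates
labels = tf.zeros((B, N), dtype=tf.int32)    # per-point class ids
input_data = (points, labels)

summary = pipeline.get_3d_summary(results, input_data, epoch=0, save_gt=True)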

load_ckpt(ckpt_path=None, is_resume=True)

Load a checkpoint. Pass the checkpoint path via ckpt_path and set is_resume to indicate whether training should resume from it.
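
For example, to resume from a specific checkpoint (the path is a placeholder):

# Resume training from an explicit checkpoint file.
pipeline.load_ckpt(ckpt_path='./logs/checkpoint/ckpt-20', is_resume=True)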

run_inference(data)

Run the inference using the data passed.
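
A sketch of single-cloud inference; the dict layout (‘point’, ‘feat’, ‘label’) follows the usual Open3D-ML data convention but is an assumption here, as is what run_inference returns:

import numpy as np

# One hypothetical point cloud: xyz coordinates, optional features, and
# dummy labels (zeros) when ground truth is unknown.
data = {
    'point': np.random.rand(10000, 3).astype(np.float32),
    'feat': None,
    'label': np.zeros((10000,), dtype=np.int32),
}

results = pipeline.run_inference(data)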

run_test()

Run testing on the test split of the dataset.

run_train()

Run model training.

save_ckpt(epoch)

Save a checkpoint at the passed epoch.
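
Putting the run methods together, a typical flow after constructing a training pipeline looks like this (a sketch; the epoch value is illustrative):

pipeline.run_train()            # train for up to max_epoch epochs
pipeline.run_test()             # evaluate on the test split
pipeline.save_ckpt(epoch=100)   # save an extra checkpoint manually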

save_config(writer)

Save the experiment configuration to the TensorBoard summary.

save_logs(writer, epoch)

Save logs from the training and send results to TensorBoard.