open3d.ml.torch.pipelines.SemanticSegmentation

class open3d.ml.torch.pipelines.SemanticSegmentation(model, dataset=None, name='SemanticSegmentation', batch_size=4, val_batch_size=4, test_batch_size=3, max_epoch=100, learning_rate=0.01, lr_decays=0.95, save_ckpt_freq=20, adam_lr=0.01, scheduler_gamma=0.95, momentum=0.98, main_log_dir='./logs/', device='gpu', split='train', train_sum_dir='train_log', **kwargs)

This class allows you to perform semantic segmentation for both training and inference using PyTorch. The pipeline has multiple stages: pre-processing, dataset loading, testing, and inference or training.

Example:

This example creates a SemanticSegmentation pipeline and runs training on the SemanticKITTI dataset.

    import torch
    import torch.nn as nn
    from torch.utils.tensorboard import SummaryWriter

    from .base_pipeline import BasePipeline
    from ..dataloaders import get_sampler, TorchDataloader, DefaultBatcher, ConcatBatcher

    # Wrap the training split of the dataset for PyTorch.
    Mydataset = TorchDataloader(dataset=dataset.get_split('training'))

    # Build the pipeline from a model and the wrapped dataset.
    MyModel = SemanticSegmentation(model,
                                   dataset=Mydataset,
                                   name='MySemanticSegmentation',
                                   batch_size=4,
                                   val_batch_size=4,
                                   test_batch_size=3,
                                   max_epoch=100,
                                   learning_rate=1e-2,
                                   lr_decays=0.95,
                                   save_ckpt_freq=20,
                                   adam_lr=1e-2,
                                   scheduler_gamma=0.95,
                                   momentum=0.98,
                                   main_log_dir='./logs/',
                                   device='gpu',
                                   split='train',
                                   train_sum_dir='train_log')
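In practice, the pipeline is usually built through the public open3d.ml.torch namespace and trained with run_train(). A minimal sketch, assuming a SemanticKITTI dataset on disk and the RandLANet model (the dataset path is a placeholder; adjust to your setup):

    import open3d.ml.torch as ml3d

    # Build the dataset, model, and pipeline.
    dataset = ml3d.datasets.SemanticKITTI(dataset_path='/path/to/SemanticKITTI')
    model = ml3d.models.RandLANet()
    pipeline = ml3d.pipelines.SemanticSegmentation(model=model, dataset=dataset,
                                                   max_epoch=100)

    # Run training on the 'train' split.
    pipeline.run_train()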

Args:

  • dataset: The 3D ML dataset class. You can use the base dataset, sample datasets, or a custom dataset.

  • model: The model to be used for building the pipeline.

  • name: The name of the current training.

  • batch_size: The batch size to be used for training.

  • val_batch_size: The batch size to be used for validation.

  • test_batch_size: The batch size to be used for testing.

  • max_epoch: The maximum number of epochs to be used for training.

  • learning_rate: The hyperparameter that controls the weight updates during training. Also known as the step size.

  • lr_decays: The learning rate decay for the training.

  • save_ckpt_freq: The frequency at which the checkpoint should be saved.

  • adam_lr: The learning rate to be applied for the Adam optimizer.

  • scheduler_gamma: The decay factor associated with the learning rate scheduler.

  • momentum: The momentum that accelerates the training rate schedule.

  • main_log_dir: The directory where logs are stored.

  • device: The device to be used for training.

  • split: The dataset split to be used. In this example, we have used 'train'.

  • train_sum_dir: The directory where the training summary is stored.

Returns:

class: The corresponding class.

__init__(model, dataset=None, name='SemanticSegmentation', batch_size=4, val_batch_size=4, test_batch_size=3, max_epoch=100, learning_rate=0.01, lr_decays=0.95, save_ckpt_freq=20, adam_lr=0.01, scheduler_gamma=0.95, momentum=0.98, main_log_dir='./logs/', device='gpu', split='train', train_sum_dir='train_log', **kwargs)

Initialize.

Parameters
  • model – A network model.

  • dataset – A dataset, or None for inference model.

  • device – 'gpu' or 'cpu'.

  • kwargs

Returns

The corresponding class.

Return type

class

get_batcher(device, split='training')
load_ckpt(ckpt_path=None, is_resume=True)
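load_ckpt restores a previously saved checkpoint into the pipeline. A minimal sketch, assuming a checkpoint written by an earlier training run (the path and file name are placeholders; with is_resume=True training can continue from the restored state):

    # Restore an earlier checkpoint, then continue training from it.
    pipeline.load_ckpt(ckpt_path='./logs/checkpoint/ckpt_00020.pth', is_resume=True)
    pipeline.run_train()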
run_inference(data)

Run inference on a given data.

Parameters

data – A raw data sample to run inference on.

Returns

Returns the inference results.
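A minimal inference sketch, assuming a trained checkpoint and a dataset to draw a sample from (the checkpoint path is a placeholder; the exact layout of the data dict depends on the dataset):

    # Restore trained weights, then run the model on one raw sample.
    pipeline.load_ckpt(ckpt_path='/path/to/ckpt.pth')
    test_split = dataset.get_split('test')
    data = test_split.get_data(0)  # a dict of points/features/labels, per the dataset
    result = pipeline.run_inference(data)  # result holds the predicted labels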

run_test()

Run testing on test sets.
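run_test evaluates the model on the dataset's test split and writes its logs under main_log_dir. A short sketch, assuming weights were already restored with load_ckpt:

    # Evaluate on the 'test' split; logs go to main_log_dir.
    pipeline.run_test()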

run_train()

Run training on train sets.

save_ckpt(epoch)
save_config(writer)
save_logs(writer, epoch)
update_tests(sampler, inputs, results)