open3d.pipelines.registration.RobustKernel

class open3d.pipelines.registration.RobustKernel

Base class that models a robust kernel for outlier rejection. The virtual function weight() must be implemented in derived classes.

The main idea of a robust loss is to downweight large residuals that are assumed to be caused by outliers, so that their influence on the solution is reduced. This is achieved by optimizing:

(1)\[\def\argmin{\mathop{\rm argmin}} \begin{equation} x^{*} = \argmin_{x} \sum_{i=1}^{N} \rho\left(r_i(x)\right) \end{equation}\]

where \(\rho(r)\) is also called the robust loss or kernel and \(r_i(x)\) is the residual.

Several robust kernels, such as Huber, Cauchy, and others, have been proposed to deal with different kinds of outliers.
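
As a concrete illustration, a robust kernel can be plugged into the point-to-plane ICP estimator. The following is a minimal, self-contained sketch; the synthetic point clouds, the kernel parameter k=0.1, and the correspondence distance are illustrative choices only:

import numpy as np
import open3d as o3d

# Synthetic data: a planar patch and a copy shifted along z; 5% of the
# source points are corrupted to act as outliers.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(500, 3))
pts[:, 2] = 0.0
src = pts.copy()
src[:25] += rng.uniform(-1.0, 1.0, size=(25, 3))
source = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(src))
target = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts + [0.0, 0.0, 0.05]))
target.estimate_normals()  # point-to-plane ICP needs target normals

# Tukey kernel: residuals with magnitude above k receive zero weight.
loss = o3d.pipelines.registration.TukeyLoss(k=0.1)
p2l = o3d.pipelines.registration.TransformationEstimationPointToPlane(loss)

reg = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence_distance=0.3,
    estimation_method=p2l)
print(reg.transformation)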

The optimization problem in (1) can be solved using the iteratively reweighted least squares (IRLS) approach, which solves a sequence of weighted least squares problems. The relation between standard non-linear least squares and robust loss optimization can be seen by comparing the respective gradients, which go to zero at the optimum (illustrated only for the \(i^\mathrm{th}\) residual):

\[\begin{split}\begin{eqnarray} \frac{1}{2}\frac{\partial (w_i r^2_i(x))}{\partial{x}} &=& w_i r_i(x) \frac{\partial r_i(x)}{\partial{x}} \\ \frac{\partial(\rho(r_i(x)))}{\partial{x}} &=& \rho'(r_i(x)) \frac{\partial r_i(x)}{\partial{x}}. \end{eqnarray}\end{split}\]

By setting the weight \(w_i = \frac{1}{r_i(x)}\rho'(r_i(x))\), we can solve the robust loss optimization problem using the existing techniques for weighted least squares. This scheme allows standard solvers based on the Gauss-Newton and Levenberg-Marquardt algorithms to optimize robust losses, and it is the one implemented in Open3D.
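
To make the weighting scheme concrete, the sketch below (plain NumPy, not the library implementation) computes the IRLS weight \(w(r) = \rho'(r)/r\) for the Huber kernel, which equals 1 for inlier residuals and decays as \(k/|r|\) beyond the threshold \(k\):

import numpy as np

def huber_weight(r, k=1.0):
    # Huber kernel: rho(r) = r^2 / 2 for |r| <= k and k * (|r| - k / 2) otherwise,
    # so rho'(r) = r for inliers and k * sign(r) for outliers, giving
    # w(r) = rho'(r) / r = 1 for inliers and k / |r| for outliers.
    r = np.asarray(r, dtype=float)
    return np.where(np.abs(r) <= k, 1.0, k / np.maximum(np.abs(r), 1e-12))

print(huber_weight([0.5, 1.0, 4.0]))  # [1.   1.   0.25]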

Then we minimize the objective function using Gauss-Newton and determine the increments by iteratively solving:

\[\newcommand{\mat}[1]{\mathbf{#1}} \newcommand{\veca}[1]{\vec{#1}} \renewcommand{\vec}[1]{\mathbf{#1}} \begin{align} \Delta \vec{x} = -\left(\mat{J}^\top \mat{W} \mat{J}\right)^{-1}\mat{J}^\top\mat{W}\vec{r}, \end{align}\]

where \(\mat{W} \in \mathbb{R}^{n\times n}\) is a diagonal matrix containing the weights \(w_i\) for each residual \(r_i\), and \(\mat{J}\) is the Jacobian of the residual vector \(\vec{r}\).
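
The full loop then alternates between recomputing the weights and solving the weighted normal equations. The following is a schematic NumPy sketch of IRLS with Gauss-Newton on a toy robust line-fitting problem (all names and values are illustrative; this is not Open3D code):

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + 0.01 * rng.standard_normal(50)
y[::10] += 5.0  # inject gross outliers

def huber_weight(r, k=0.5):
    r = np.abs(r)
    return np.where(r <= k, 1.0, k / np.maximum(r, 1e-12))

params = np.zeros(2)                        # model y = a * x + b
J = np.column_stack([x, np.ones_like(x)])   # Jacobian of r(params) = J @ params - y
for _ in range(10):
    r = J @ params - y
    W = np.diag(huber_weight(r))            # diagonal weight matrix
    # Weighted normal equations: (J^T W J) dx = -J^T W r
    dx = np.linalg.solve(J.T @ W @ J, -J.T @ W @ r)
    params += dx

print(params)  # close to (2.0, 1.0) despite the outliers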

The different loss functions impact only the weight assigned to each residual during the optimization step. Therefore, the choice of kernel affects the solution only through its first-order derivative \(\rho'(r)\).

The kernels implemented so far, as well as the notation, are inspired by the publication: “Analysis of Robust Functions for Registration Algorithms”, by Philippe Babin et al.

For more information, please also see: “Adaptive Robust Kernels for Non-Linear Least Squares Problems”, by Nived Chebrolu et al.

__init__(*args, **kwargs)
weight(self, residual)

Obtain the weight for the given residual according to the robust kernel model.

Parameters:

residual (float) – residual value obtained during the optimization

Returns:

float
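
Since RobustKernel itself is abstract, weight() is called on one of the derived kernels, e.g. HuberLoss or CauchyLoss. A short usage sketch (the scale parameter k=1.0 and the residual values are arbitrary):

import open3d as o3d

huber = o3d.pipelines.registration.HuberLoss(k=1.0)
cauchy = o3d.pipelines.registration.CauchyLoss(k=1.0)

for r in (0.5, 1.0, 2.0, 10.0):
    # Both weights shrink toward zero as the residual grows, so large
    # (presumably outlier) residuals contribute less to the weighted solve.
    print(r, huber.weight(r), cauchy.weight(r))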