Refine registration

Input arguments

This script runs with python run_system.py [config] --refine. In [config], ["path_dataset"] should point to a dataset folder that contains a subfolder fragments, which stores the fragments as .ply files together with a pose graph in a .json file.
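
A minimal configuration sketch is shown below. The keys are the ones referenced by the code later in this section; the concrete values (paths, file names, voxel size) are only illustrative and should be taken from the configuration files shipped with the examples.

# illustrative config sketch; key names follow the ones used in this section,
# the values and template file names are placeholders
config = {
    "path_dataset": "dataset/realsense/",   # must contain fragments/*.ply and a pose graph .json
    "voxel_size": 0.05,                      # base voxel size in meters
    "icp_method": "color",                   # "point_to_point", "point_to_plane", or "color"
    "python_multi_threading": True,
    "debug_mode": False,
    "template_global_posegraph_optimized":
            "fragments/global_registration_optimized.json",
    "template_refined_posegraph": "fragments/refined_registration.json",
    "template_refined_posegraph_optimized":
            "fragments/refined_registration_optimized.json",
    "preference_loop_closure_registration": 2.0,
}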

The main function runs local_refinement and optimize_posegraph_for_refined_scene. The first function performs pairwise registration on the fragment pairs detected by Register fragments. The second function performs multiway registration.
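
As a rough orientation, the refinement stage can be sketched as follows. The wrapper run_refinement is hypothetical; the pairwise refinement is driven by make_posegraph_for_refined_scene (introduced below), which refines every matching pair, and the multiway registration by optimize_posegraph_for_refined_scene.

# hypothetical top-level sketch of the --refine stage
def run_refinement(ply_file_names, config):
    # pairwise local refinement of every matching fragment pair,
    # writing a pose graph with refined transformations
    make_posegraph_for_refined_scene(ply_file_names, config)
    # multiway registration: global optimization of the refined pose graph
    optimize_posegraph_for_refined_scene(config["path_dataset"], config)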

Fine-grained registration

# examples/Python/ReconstructionSystem/refine_registration.py

def multiscale_icp(source, target, voxel_size, max_iter,
        config, init_transformation = np.identity(4)):
    current_transformation = init_transformation
    for i, scale in enumerate(range(len(max_iter))): # multi-scale approach
        iter = max_iter[scale]
        distance_threshold = config["voxel_size"] * 1.4
        print("voxel_size %f" % voxel_size[scale])
        source_down = voxel_down_sample(source, voxel_size[scale])
        target_down = voxel_down_sample(target, voxel_size[scale])
        if config["icp_method"] == "point_to_point":
            result_icp = registration_icp(source_down, target_down,
                    distance_threshold, current_transformation,
                    TransformationEstimationPointToPoint(),
                    ICPConvergenceCriteria(max_iteration = iter))
        else:
            # point-to-plane and colored ICP require per-point normals
            estimate_normals(source_down, KDTreeSearchParamHybrid(
                    radius = voxel_size[scale] * 2.0, max_nn = 30))
            estimate_normals(target_down, KDTreeSearchParamHybrid(
                    radius = voxel_size[scale] * 2.0, max_nn = 30))
            if config["icp_method"] == "point_to_plane":
                result_icp = registration_icp(source_down, target_down,
                        distance_threshold, current_transformation,
                        TransformationEstimationPointToPlane(),
                        ICPConvergenceCriteria(max_iteration = iter))
            if config["icp_method"] == "color":
                result_icp = registration_colored_icp(source_down, target_down,
                        voxel_size[scale], current_transformation,
                        ICPConvergenceCriteria(relative_fitness = 1e-6,
                        relative_rmse = 1e-6, max_iteration = iter))
        current_transformation = result_icp.transformation
        if i == len(max_iter)-1:
            # the information matrix of the finest-scale alignment is used
            # as the edge information in the pose graph
            information_matrix = get_information_matrix_from_point_clouds(
                    source_down, target_down, voxel_size[scale] * 1.4,
                    result_icp.transformation)

    if config["debug_mode"]:
        draw_registration_result_original_color(source, target,
                result_icp.transformation)
    return (result_icp.transformation, information_matrix)

Three options are given for the fine-grained registration: point-to-point ICP, point-to-plane ICP, and colored ICP. The color option is recommended since it additionally uses color information to prevent drift along the tangent plane. See [Park2017] for details.
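
The coarse-to-fine schedule is determined by the voxel_size and max_iter lists passed to multiscale_icp. Below is a hedged sketch of how the function might be invoked for one fragment pair; the helper name local_refinement is taken from the prose above, but the concrete schedule shown here is only an example, not necessarily the exact values used by the script.

# illustrative coarse-to-fine invocation of multiscale_icp;
# the three-scale schedule below is an example, not the exact one in the script
def local_refinement(source, target, transformation_init, config):
    voxel_size = config["voxel_size"]
    (transformation, information) = multiscale_icp(
            source, target,
            [voxel_size, voxel_size / 2.0, voxel_size / 4.0],  # coarse to fine
            [50, 30, 14],                                      # iterations per scale
            config, transformation_init)
    return (transformation, information)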

Multiway registration

# examples/Python/ReconstructionSystem/refine_registration.py

def update_posegraph_for_refined_scene(s, t, transformation, information,
        odometry, pose_graph):
    if t == s + 1: # odometry case
        odometry = np.dot(transformation, odometry)
        odometry_inv = np.linalg.inv(odometry)
        pose_graph.nodes.append(PoseGraphNode(odometry_inv))
        pose_graph.edges.append(
                PoseGraphEdge(s, t, transformation,
                information, uncertain = False))
    else: # loop closure case
        pose_graph.edges.append(
                PoseGraphEdge(s, t, transformation,
                information, uncertain = True))
    return (odometry, pose_graph)

This script uses the technique demonstrated in Multiway registration. The function update_posegraph_for_refined_scene builds a pose graph for multiway registration of all fragments. Each graph node represents a fragment and its pose, which transforms the fragment geometry into the global space; since the accumulated odometry maps points from the global (first fragment) frame into the current fragment's frame, its inverse is stored as the node pose. Consecutive fragment pairs are added as odometry edges, and non-consecutive pairs as uncertain loop closure edges.

Once the pose graph is built, the function optimize_posegraph_for_refined_scene is called for multiway registration.

# examples/Python/ReconstructionSystem/optimize_posegraph.py

def optimize_posegraph_for_refined_scene(path_dataset, config):
    pose_graph_name = join(path_dataset, config["template_refined_posegraph"])
    pose_graph_optimized_name = join(path_dataset,
            config["template_refined_posegraph_optimized"])
    run_posegraph_optimization(pose_graph_name, pose_graph_optimized_name,
            max_correspondence_distance = config["voxel_size"] * 1.4,
            preference_loop_closure = \
            config["preference_loop_closure_registration"])

Main registration loop

The function make_posegraph_for_refined_scene below calls all the functions introduced above.

# examples/Python/ReconstructionSystem/refine_registration.py

def make_posegraph_for_refined_scene(ply_file_names, config):
    pose_graph = read_pose_graph(join(config["path_dataset"],
            config["template_global_posegraph_optimized"]))

    n_files = len(ply_file_names)
    matching_results = {}
    for edge in pose_graph.edges:
        s = edge.source_node_id
        t = edge.target_node_id
        matching_results[s * n_files + t] = \
                matching_result(s, t, edge.transformation)

    if config["python_multi_threading"]:
        from joblib import Parallel, delayed
        import multiprocessing
        MAX_THREAD = min(multiprocessing.cpu_count(),
                max(len(pose_graph.edges), 1))
        results = Parallel(n_jobs=MAX_THREAD)(
                delayed(register_point_cloud_pair)(ply_file_names,
                matching_results[r].s, matching_results[r].t,
                matching_results[r].transformation, config)
                for r in matching_results)
        for i, r in enumerate(matching_results):
            matching_results[r].transformation = results[i][0]
            matching_results[r].information = results[i][1]
    else:
        for r in matching_results:
            (matching_results[r].transformation,
                    matching_results[r].information) = \
                    register_point_cloud_pair(ply_file_names,
                    matching_results[r].s, matching_results[r].t,
                    matching_results[r].transformation, config)

    pose_graph_new = PoseGraph()
    odometry = np.identity(4)
    pose_graph_new.nodes.append(PoseGraphNode(odometry))
    for r in matching_results:
        (odometry, pose_graph_new) = update_posegraph_for_refined_scene(
                matching_results[r].s, matching_results[r].t,
                matching_results[r].transformation,
                matching_results[r].information,
                odometry, pose_graph_new)
    print(pose_graph_new)
    write_pose_graph(join(config["path_dataset"],
            config["template_refined_posegraph"]), pose_graph_new)

The main workflow is: pairwise local refinement -> multiway registration.
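
The helpers matching_result and register_point_cloud_pair are not listed above. The sketch below shows what they could look like; both bodies are assumptions that only illustrate how the pieces fit together (reading the two fragment point clouds and running the multi-scale ICP refinement), and the ICP schedule is an example.

# illustrative sketches only; the real implementations live in
# refine_registration.py and may differ in detail
class matching_result:
    def __init__(self, s, t, transformation):
        self.s = s                          # source fragment id
        self.t = t                          # target fragment id
        self.transformation = transformation
        self.information = np.identity(6)

def register_point_cloud_pair(ply_file_names, s, t, transformation_init,
        config):
    source = read_point_cloud(ply_file_names[s])
    target = read_point_cloud(ply_file_names[t])
    # refine the initial alignment obtained in the global registration stage
    (transformation, information) = multiscale_icp(
            source, target,
            [config["voxel_size"], config["voxel_size"] / 2.0],  # example schedule
            [50, 30], config, transformation_init)
    return (transformation, information)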

Results

The following are example messages from the pose graph optimization.

[GlobalOptimizationLM] Optimizing PoseGraph having 14 nodes and 35 edges.
Line process weight : 789.730200
[Initial     ] residual : 1.208286e+04, lambda : 1.706306e+01
[Iteration 00] residual : 2.410383e+03, valid edges : 22, time : 0.000 sec.
[Iteration 01] residual : 8.127856e+01, valid edges : 22, time : 0.000 sec.
[Iteration 02] residual : 8.031355e+01, valid edges : 22, time : 0.000 sec.
Delta.norm() < 1.000000e-06 * (x.norm() + 1.000000e-06)
[GlobalOptimizationLM] total time : 0.001 sec.
[GlobalOptimizationLM] Optimizing PoseGraph having 14 nodes and 35 edges.
Line process weight : 789.730200
[Initial     ] residual : 8.031355e+01, lambda : 1.716826e+01
Delta.norm() < 1.000000e-06 * (x.norm() + 1.000000e-06)
[GlobalOptimizationLM] total time : 0.000 sec.
CompensateReferencePoseGraphNode : reference : 0

The pose graph in this example has 14 nodes (fragments) and 35 edges (matching pairs between fragments). In the first optimization pass, the line process down-weights loop closure edges that are detected to be false positives; after those edges are pruned, pose graph optimization runs a second time to achieve tight alignment.