Realizing, Reversing, Recovering

Description

RRR divides loop closures into clusters based on topological similarity and then tries to find the largest subset of clusters that are consistent both among themselves and with the underlying odometry. Consistency is measured in the chi-squared ($\chi^2$) sense. The algorithm first carries out an intra-cluster consistency check on each cluster in order to weed out incorrect links within it, followed by an inter-cluster consistency check. RRR differs from the previous algorithms in that it explicitly requires convergence of the graph in order to verify the validity of loop closures. In contrast with SC and DCS, in RRR and MM loop closure decisions are not modeled as continuous variables but as discrete yes/no decisions that need to be made.
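The cluster test above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names are invented, the search is a simple greedy pass rather than RRR's exact procedure, and the $\chi^2$ quantile uses the Wilson-Hilferty approximation to avoid a scipy dependency. A cluster is kept only if the re-optimized graph that includes it still passes the $\chi^2$ test.

```python
from statistics import NormalDist

def chi2_threshold(dof, confidence=0.95):
    # Wilson-Hilferty approximation to the chi-squared quantile
    # (assumption: close enough for a sketch; use scipy.stats.chi2.ppf
    # in real code).
    z = NormalDist().inv_cdf(confidence)
    return dof * (1.0 - 2.0 / (9 * dof) + z * (2.0 / (9 * dof)) ** 0.5) ** 3

def consistent_clusters(clusters, graph_chi2):
    """Greedily grow the set of mutually consistent clusters.

    graph_chi2(accepted) stands in for re-optimizing the graph with the
    accepted clusters' links enabled; it returns (total error, dof).
    """
    accepted = []
    for cluster in sorted(clusters, key=len, reverse=True):
        err, dof = graph_chi2(accepted + [cluster])
        if err < chi2_threshold(dof):
            accepted.append(cluster)  # cluster agrees with the rest
    return accepted
```

With a mock `graph_chi2` in which one cluster carries a huge residual, the greedy pass keeps the two good clusters and rejects the bad one.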

REF $\rightarrow$ this paper

Getting RRR to run on Ubuntu 16.04


$ cd ~/git
$ git clone https://github.com/ylatif/rrr.git
$ cd rrr
$ mkdir build; cd build
$ export G2O_INCLUDE_DIR="$HOME/Documents/git/g2o/g2o/core"
$ export G2O_LIBRARIES="$HOME/Documents/git/g2o/lib"
$ $EDITOR CMakeLists.txt

Copy the following line into CMakeLists.txt so that GCC compiles the library with C++11:


set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")

$ cmake ../; make

Initial Test

To perform an initial test of RRR, the Manhattan 3500 dataset with 10 false constraints was used. The image below shows the pose graph optimized using RRR. These results are not great ( $\chi^2$ = 1539.7 ); however, they do show an improvement over $L^2$, which has a total graph error of $\chi^2$ = 3494.7.
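For reference, the total graph error quoted throughout is the standard sum of squared, information-weighted residuals over all edges. A minimal sketch (the edge representation here is an assumption, not g2o's API):

```python
import numpy as np

def total_chi2(edges):
    """Total graph error: sum over edges of e^T * Omega * e,
    where e is an edge's residual vector and Omega its
    information matrix (inverse covariance)."""
    return sum(float(e @ omega @ e) for e, omega in edges)
```

An edge with a unit residual and identity information contributes exactly 1.0, so the $\chi^2$ values above can be read as information-weighted squared error accumulated over the whole graph.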

[Image: rrrInitialTest]



Switch Factors vs. DCS

This section provides a performance evaluation of switchable constraints and DCS when faced with a faulty initial graph. To do this, four commonly used datasets were selected and false loop closure constraints were added. Using these faulty datasets, we quantify performance by looking at total graph error ( $\chi^2$ ) and the number of iterations required to reach the minimum.
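Injecting a false loop closure amounts to appending an edge between two unrelated poses to the dataset's g2o file. A hedged sketch for the 2D case (the helper name and the identity relative pose are illustrative choices, not how the datasets here were necessarily generated):

```python
def false_constraint_line(i, j, info=100.0):
    """Build a g2o EDGE_SE2 line connecting two unrelated poses i and j
    with an (incorrect) identity relative transform.

    EDGE_SE2 format: i j dx dy dtheta followed by the six
    upper-triangular entries of the 3x3 information matrix
    (I11 I12 I13 I22 I23 I33)."""
    measurement = [0.0, 0.0, 0.0]                    # bogus relative pose
    info_upper = [info, 0.0, 0.0, info, 0.0, info]   # diagonal information
    return " ".join(str(v) for v in ["EDGE_SE2", i, j] + measurement + info_upper)
```

Because the two poses are far apart in reality, this edge contradicts the odometry and pulls a non-robust $L^2$ optimizer toward a badly distorted solution.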

Manhattan 3500

The first dataset that we will look at is the commonly used Manhattan 3500. This is a simulated dataset, which was originally developed by Olson for this paper. A picture of the initial pose graph is shown below.

[Image: man3500]

Pose Graph for Manhattan


Using the initial error-free pose graph shown above, multiple false constraints were added to evaluate the performance of the robust optimization schemes. As a reference, the results for traditional $L^2$ optimization are shown below. (Note: to reduce the time needed to conduct this study, I limited the maximum number of iterations to 1000.)

Num. False Constraints   Num. Iterations   $\chi^2$
10                       110               3494.8
20                       1000              6519.5
30                       1000              9776.3
40                       1000              12736.5
50                       1000              13998.3


The image below shows the optimization results for DCS and switch factors. Both yield a dramatic improvement over $L^2$ optimization.

[Image: m3500Comp]

Optimization results for Manhattan 3500
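One way to see why DCS tolerates outliers so much better than plain $L^2$: DCS down-weights each constraint with the closed-form scale factor $s = \min(1,\; 2\Phi/(\Phi + \chi^2))$, where $\Phi$ is a free parameter and $\chi^2$ is that constraint's current error. A minimal sketch of the weighting rule (the function name is mine; per-edge application inside the optimizer is not shown):

```python
def dcs_scale(chi2, phi=1.0):
    """Dynamic Covariance Scaling weight for a single constraint:
    s = min(1, 2*phi / (phi + chi2)).

    Constraints whose error is small relative to phi keep full weight
    (s = 1), while gross outliers are scaled toward zero, so a false
    loop closure contributes almost nothing to the objective."""
    return min(1.0, 2.0 * phi / (phi + chi2))
```

An inlier with `chi2 <= phi` gets `s = 1.0` (it behaves exactly as in $L^2$), whereas a false constraint with a huge residual is effectively switched off, which is what the plots above reflect.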



Intel

The next dataset we will look at was collected at the Intel Research Lab in Seattle. This pose graph was obtained by processing the raw measurements from wheel odometry and a laser range finder. The raw data can be found here.

[Image: intel]

Initial Pose Graph for Intel


Again, using the initial error-free pose graph shown above, multiple false constraints were added to evaluate the performance of the robust optimization schemes. The results for DCS and switch factors are shown below. To provide scale, the total graph error for $L^2$ optimization with 10 false constraints is $\chi^2$ = 8076.2.

[Image: intelComp]

Optimization results for Intel