Initial Testing Of Topological Pose Graph Optimization
Conducting inference over a factor graph when all measurements accurately constrain their states is well studied. Under the assumption that the measurement and motion models can be approximated by a uni-modal Gaussian, the optimization simplifies to a traditional least-squares optimization, as depicted in Eq. 1.
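As a concrete, illustrative instance of this reduction (not the report's actual implementation), the sketch below solves a toy one-dimensional "pose graph" with three poses, a prior, and odometry constraints; because each constraint is linear in the states, the least-squares solution comes directly from `numpy.linalg.lstsq`. The pose and measurement values are invented for illustration.

```python
import numpy as np

# Toy 1-D pose graph: states x0, x1, x2 with Gaussian constraints.
# Under the uni-modal Gaussian assumption, MAP inference reduces to
#   x* = argmin_x sum_i || h_i(x) - y_i ||^2 ,
# and since each h_i here is linear, lstsq solves it exactly.
A = np.array([
    [ 1.0,  0.0, 0.0],   # prior on x0
    [-1.0,  1.0, 0.0],   # odometry x0 -> x1
    [ 0.0, -1.0, 1.0],   # odometry x1 -> x2
])
y = np.array([0.0, 1.0, 1.1])

x_star, *_ = np.linalg.lstsq(A, y, rcond=None)
print(x_star)  # -> [0.0, 1.0, 2.1]
```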
However, collected data sets for navigation applications are rarely well approximated by the simplified uni-modal Gaussian model. For example, consider the data set depicted in Fig. 1, the commonly used Manhattan 3500 data set, which is composed of 3500 pose estimates and 5,598 range measurements. The right-hand side of Fig. 1 depicts the data set with no false constraints present, while the left-hand side depicts the data set when 100 false constraints are included.
When presented with a data set that contains erroneous constraints, such as the one depicted in Fig. 1, the traditional least-squares optimization breaks down without accurate models. This can be shown by taking an arbitrary measurement $y_i$ and letting its magnitude approach infinity, $\lVert y_i \rVert \rightarrow \infty$. Examining Eq. 1, we see that as $\lVert y_i \rVert \rightarrow \infty$, $\lVert x^{*} \rVert \rightarrow \infty$. Thus a least-squares estimator using the $L^2$ cost function cannot handle even a single arbitrarily large measurement, which shows that it has a breakdown point of 0.
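The breakdown argument can be demonstrated numerically with a deliberately simple stand-in model (a scalar state observed directly, so the $L^2$ solution is the sample mean); the data values are invented for illustration.

```python
import numpy as np

# Breakdown point of 0: a single corrupted measurement drags the L^2
# estimate arbitrarily far. For a scalar state with direct measurements,
# the least-squares solution is the sample mean.
inliers = np.array([1.0, 1.1, 0.9, 1.0])
for outlier in [10.0, 1e3, 1e6]:
    y = np.append(inliers, outlier)
    x_star = y.mean()  # closed-form L^2 solution for this model
    print(outlier, x_star)
# As ||y_i|| -> inf, the estimate tracks the outlier rather than the inliers.
```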
Fig 1 :: Manhattan 3500 data set
Several methodologies have been proposed to handle erroneous constraints within the $L^2$ optimization framework. The most commonly used is the M-estimator, which attempts to mitigate the contribution of false constraints by replacing the quadratic cost function with a modified cost function beyond a user-defined threshold. One obvious drawback to this method, and to most other proposed methods, is the need to define a model a priori. Ideally, the model should be adaptive (i.e., not dependent upon user-defined inputs, and able to vary with respect to the current data set).
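A minimal sketch of the M-estimator idea is given below, using a Huber cost solved by iteratively reweighted least squares (IRLS) on the same scalar toy model; the threshold `k` is exactly the a-priori user-defined input the text identifies as a drawback. This is an illustrative implementation, not the report's code.

```python
import numpy as np

# M-estimator sketch: robust scalar estimation via IRLS with a Huber cost.
# Residuals below the threshold k get full (quadratic) weight; residuals
# beyond it get a down-weighted (linear) contribution.
def huber_irls(y, k=1.345, iters=50):
    x = np.median(y)  # robust initial guess
    for _ in range(iters):
        r = y - x
        w = np.where(np.abs(r) <= k, 1.0, k / np.abs(r))  # Huber weights
        x = np.sum(w * y) / np.sum(w)                     # weighted LS update
    return x

y = np.array([1.0, 1.1, 0.9, 1.0, 1e6])  # one gross outlier
x_rob = huber_irls(y)
print(x_rob)  # stays near the inliers (~1.3) instead of ~2e5 for the mean
```

Note the estimate is biased slightly above 1.0: the Huber cost bounds the outlier's influence at `k` rather than removing it entirely, which is part of the motivation for an adaptive classification scheme.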
To create a framework that is both adaptive and robust, we plan to utilize recent advances within the field of topological data analysis (TDA). TDA allows for the efficient clustering of high-dimensional data, which provides an ideal framework for outlier rejection. To depict this, the residuals from the $L^2$ optimization of the Manhattan data set with 100 false constraints were classified with the topological mapper. A depiction of the clustering is provided in Fig. 2, where the mapper found 13 clusters with 2 main subsets. Figure 3 shows only the residuals contained within the largest cluster (i.e., the inlier distribution).
Fig 2 :: Topological Mapper Clustering on Manhattan 3500 Data Set
Fig 3 :: Estimated Inlier Distribution From Topological Mapper
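To make the mapper-based classification concrete, the following is a hypothetical, heavily simplified one-dimensional sketch (not the report's implementation): the filter function is the residual itself, its range is covered by overlapping intervals, each preimage is clustered, and the largest cluster is taken as the inlier set. The interval count, overlap fraction, and gap threshold are invented illustration parameters.

```python
import numpy as np

# Simplified mapper-style inlier selection on scalar residuals:
# cover the filter range with overlapping intervals, cluster each
# preimage (1-D: split sorted points at large gaps), keep the largest
# cluster as the inlier core.
def mapper_inliers(res, n_intervals=10, overlap=0.3, gap=0.5):
    lo, hi = res.min(), res.max()
    width = (hi - lo) / n_intervals
    clusters = []
    for i in range(n_intervals):
        a = lo + i * width - overlap * width
        b = lo + (i + 1) * width + overlap * width
        idx = np.where((res >= a) & (res <= b))[0]
        if idx.size == 0:
            continue
        order = idx[np.argsort(res[idx])]
        vals = res[order]
        start = 0
        for j in range(1, order.size):
            if vals[j] - vals[j - 1] > gap:   # gap -> new cluster
                clusters.append(order[start:j])
                start = j
        clusters.append(order[start:])
    # Merging clusters that share points would yield the full mapper graph;
    # here we simply take the largest single cluster as the inlier set.
    return set(max(clusters, key=len).tolist())

rng = np.random.default_rng(0)
res = np.concatenate([rng.normal(0, 0.1, 200),   # inlier residuals
                      rng.normal(30, 1.0, 10)])  # outlier residuals
inl = mapper_inliers(res)
print(len(inl))  # recovers the inlier population, excludes the outliers
```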
We can take an initial look at the robustness granted by the topological mapper. As a comparison, we first look at the optimization using $L^2$. The final optimized pose graph using $L^2$ optimization is depicted on the left-hand side of Fig. 4. As shown, the optimization breaks down and does not resemble the true optimized graph depicted in Fig. 1. Next, we test the topological mapper optimization by including only the constraints that are classified as inliers. The topological mapper optimization is depicted on the right-hand side of Fig. 4. Through visual comparison, it can be seen to be more robust to erroneous constraints than $L^2$ optimization; however, more research is required to make it comparable to state-of-the-art techniques.
Fig 4 :: Traditional $L^2$ Optimization Compared To Optimization With Topological Mapper
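The two-stage procedure above can be sketched end-to-end on an illustrative one-dimensional stand-in for the pose graph (not the report's code): optimize once with all constraints, classify the residuals (here a crude largest-gap split stands in for the mapper clustering), then re-optimize using only the inlier constraints. All measurement values are invented for illustration.

```python
import numpy as np

# 1-D stand-in for the Fig. 4 comparison: a false loop closure corrupts
# the plain L^2 solution; dropping the constraint whose residual falls
# outside the main cluster recovers the correct graph.
A = np.array([
    [ 1.0,  0.0, 0.0],   # prior on x0
    [-1.0,  1.0, 0.0],   # odometry x0 -> x1
    [ 0.0, -1.0, 1.0],   # odometry x1 -> x2
    [-1.0,  0.0, 1.0],   # loop closure x0 -> x2 (consistent)
    [-1.0,  0.0, 1.0],   # false loop closure x0 -> x2
])
y = np.array([0.0, 1.0, 1.0, 2.0, 50.0])

x_all, *_ = np.linalg.lstsq(A, y, rcond=None)    # contaminated L^2 solution
r = A @ x_all - y
order = np.argsort(r)
split = np.argmax(np.diff(r[order])) + 1         # largest gap in sorted residuals
groups = [order[:split], order[split:]]
keep = max(groups, key=len)                      # larger group = inliers
x_rob, *_ = np.linalg.lstsq(A[keep], y[keep], rcond=None)
print(x_all, x_rob)  # x_all is dragged far from truth; x_rob -> [0, 1, 2]
```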
Next Steps
1) Test on simulated GNSS data