Continued Testing of EM on Faulty Pose Graphs
Now that the EM-estimated covariance appears to provide a reasonable estimate, we can begin testing its ability to robustly optimize the pose graph and classify outliers. To visually represent the classification accuracy of the estimator, we will use the confusion matrix. A representation of the confusion matrix is provided in Figure 1.
Figure 1 :: Confusion Matrix Definition
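For concreteness, the confusion-matrix cells can be tallied directly from the classifier output. The sketch below assumes boolean arrays marking the predicted and true inlier labels (the names are hypothetical), with "inlier" treated as the positive class, matching how false positives are described later in this post.

```python
import numpy as np

def confusion_matrix(pred_inlier, true_inlier):
    """Tally the confusion-matrix cells from boolean inlier labels,
    treating 'inlier' as the positive class (an assumption here).
    Rows: actual inlier/outlier; columns: predicted inlier/outlier."""
    tp = np.sum(pred_inlier & true_inlier)    # inliers kept as inliers
    fp = np.sum(pred_inlier & ~true_inlier)   # outliers accepted as inliers
    fn = np.sum(~pred_inlier & true_inlier)   # inliers rejected as outliers
    tn = np.sum(~pred_inlier & ~true_inlier)  # outliers correctly rejected
    return np.array([[tp, fn], [fp, tn]])
```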
10 False Constraints
First, we will look at a graph that contains only 10 false constraints. The optimization of the pose graph can be seen in the video below.
Additionally, the confusion matrix for the estimator is provided in Figure 2. As can be seen from the figure, the estimator performed very well, with only one false negative.
Figure 2 :: Confusion Matrix When 10 Faults are Present
50 False Constraints
Next, we will move on to a graph that contains 50 false constraints. Again, a video of the optimization process is provided below.
Again, the estimator provided a very good classification of the inlier and outlier distributions. For this example there were no false negatives and only one false positive; however, a false positive is the more detrimental of the two misclassifications because it allows an erroneous measurement to be weighted the same as an inlier.
Figure 3 :: Confusion Matrix When 50 Faults are Present
100 False Constraints
Finally, we move to a graph that contains 100 false constraints. The optimization of the graph can be seen in the video below.
As can be seen above, the optimizer does not perform well when 100 false constraints are present. This is because the classifier inaccurately classified 10 erroneous measurements as inliers.
Figure 4 :: Confusion Matrix When 100 Faults are Present
Next Steps
As can be seen from the discussion above, the classifier does not perform as well as would be expected when sufficiently many outliers are added to the graph. To combat this, the optimization routine will be re-written to iterate between the EM covariance estimation step and the Max-Mixtures optimization step. The new optimization routine is depicted in Figure 5.
Figure 5 :: E.M. Optimization Flowchart
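As a rough sketch of this loop (not the final implementation), the EM step can be run on the constraint residuals produced by each Max-Mixtures pass, with the resulting inlier variance fed back into the next pass. Only the EM classification of scalar residuals is fleshed out below, assuming a two-component zero-mean Gaussian mixture (a tight inlier mode and a broad outlier mode); the optimizer calls in the outer loop are hypothetical placeholders.

```python
import numpy as np

def em_classify(residuals, n_iters=20):
    """EM on squared residuals under a two-component, zero-mean Gaussian
    mixture: a tight inlier mode and a broad outlier mode."""
    r2 = residuals ** 2
    var_in = np.percentile(r2, 50)          # initial inlier variance guess
    var_out = np.percentile(r2, 95) + 1e-9  # initial outlier variance guess
    w_in = 0.9                              # prior inlier probability
    for _ in range(n_iters):
        # E-step: responsibility of the inlier mode for each residual
        p_in = w_in * np.exp(-0.5 * r2 / var_in) / np.sqrt(var_in)
        p_out = (1.0 - w_in) * np.exp(-0.5 * r2 / var_out) / np.sqrt(var_out)
        gamma = p_in / (p_in + p_out + 1e-300)
        # M-step: re-estimate the variances and the mixing weight
        var_in = np.sum(gamma * r2) / (np.sum(gamma) + 1e-12)
        var_out = np.sum((1.0 - gamma) * r2) / (np.sum(1.0 - gamma) + 1e-12)
        w_in = np.mean(gamma)
    return gamma > 0.5, var_in  # inlier mask, estimated inlier variance

# Hypothetical outer loop matching Figure 5: alternate the two steps until
# the inlier classification stops changing.
# prev_mask = None
# while True:
#     residuals = run_max_mixtures_pass(graph)  # placeholder optimizer call
#     mask, var_in = em_classify(residuals)
#     if prev_mask is not None and np.array_equal(mask, prev_mask):
#         break
#     graph.set_inlier_covariance(var_in)       # placeholder feedback step
#     prev_mask = mask
```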
Additionally, we will scale the final estimated covariance using the Neyman–Pearson lemma, which will provide information about the likelihood that a given observation is a false alarm.
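While the exact scaling is still to be worked out, one common way to operationalize the Neyman–Pearson criterion for Gaussian residuals is a likelihood-ratio (Mahalanobis distance) test against a chi-squared threshold at a chosen false-alarm probability. The sketch below is only an illustration of that idea; the function name and its use are assumptions, not the final design.

```python
import numpy as np
from scipy.stats import chi2

def false_alarm_gate(residual, cov, p_false_alarm=0.01):
    """Likelihood-ratio test for a zero-mean Gaussian residual: flag the
    constraint if its squared Mahalanobis distance exceeds the chi-squared
    quantile for the requested false-alarm probability. By the
    Neyman-Pearson lemma, this test maximizes detection power at that rate."""
    d2 = residual @ np.linalg.solve(cov, residual)  # squared Mahalanobis distance
    threshold = chi2.ppf(1.0 - p_false_alarm, df=residual.size)
    return d2 > threshold
```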