Data Generation & Initial Testing
Data
Simulated Data
We use a PPP/INS simulator to generate GPS observables for two GNSS receivers rigidly mounted to a common platform. One receiver is high fidelity (i.e., final products with good models), but its observables may contain faults $f \sim \mathcal{N}(0,25)$. The second receiver is lower fidelity (positioning error $\sim 15$ m); however, its observables contain no faults.
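As a concrete illustration, the sketch below injects such faults into an array of simulated pseudoranges. The array layout, the fault probability, and the reading of 25 as a variance (i.e., $\sigma = 5$ m) are assumptions made for illustration, not the simulator's actual interface.

```python
import numpy as np

def inject_faults(pseudoranges, variance=25.0, fault_prob=0.1, rng=None):
    """Return a faulted copy of a (num_epochs, num_sats) pseudorange array.

    Faults are drawn as f ~ N(0, variance) and applied only to a random
    subset of observables (fault_prob), since only some of the high-fidelity
    receiver's observables are expected to be faulty.
    """
    rng = np.random.default_rng() if rng is None else rng
    faults = rng.normal(0.0, np.sqrt(variance), size=pseudoranges.shape)
    mask = rng.random(pseudoranges.shape) < fault_prob
    return pseudoranges + faults * mask
```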
Generated 10 Hz GNSS observables for the flight profile depicted below.
Generated ENU flight profile
Generated ENU velocity profile
Generated attitude profile
Collected Data in Degraded Environment
In addition to the simulated data, TU-Chemnitz has made available 4 high-quality data sets. We are currently talking with Tim to see if he can provide us with the raw INS measurements. A brief description of the data sets is provided in the table below, and a more detailed description is provided in this paper.
Sensors | Info |
---|---|
Low Cost GPS | U-Blox EVK-M8T ($\sim$ meter-level error) |
High Quality GPS | Novatel SPAN Differential ($\sim$ decimeter-level error) |
Odometry | |
Camera | 5 Cameras on-board |
Initial Testing
For this initial test, we assume that we have one high-quality receiver with erroneous data and one lower-quality receiver with fault-free data. With this data, we can construct the pose graph using the low-quality receiver (i.e., take the low-quality receiver's position as truth) and optimize the position using the high-fidelity receiver's raw observables.
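A minimal sketch of this initial test is given below, posed as a batch non-linear least-squares problem over per-epoch position and receiver-clock states: the low-quality receiver's positions enter as loose priors, and the high-fidelity receiver's pseudoranges enter as measurement residuals. The data layout, noise sigmas, and function names are hypothetical placeholders, not the project code.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(x, epochs, prior_pos, prior_sigma, pr_sigma):
    """Stacked, whitened residuals over all epochs.

    x holds [x, y, z, clock bias] per epoch (a common frame, e.g. ECEF, is
    assumed); `epochs` is a list of (sat_pos, pseudoranges) tuples from the
    high-fidelity receiver; `prior_pos` is an (N, 3) array of the low-quality
    receiver's positions used as loose priors.
    """
    states = x.reshape(len(epochs), 4)
    res = []
    for k, (sat_pos, pseudoranges) in enumerate(epochs):
        p, b = states[k, :3], states[k, 3]
        # Prior factor: treat the (fault-free) low-quality position as weak truth.
        res.append((p - prior_pos[k]) / prior_sigma)
        # Pseudorange factors from the high-fidelity receiver's raw observables.
        predicted = np.linalg.norm(sat_pos - p, axis=1) + b
        res.append((predicted - pseudoranges) / pr_sigma)
    return np.concatenate(res)

def solve(epochs, prior_pos, prior_sigma=15.0, pr_sigma=5.0):
    # Initialize each epoch at the low-quality position with zero clock bias,
    # then refine with Levenberg-Marquardt.
    x0 = np.hstack([prior_pos, np.zeros((len(epochs), 1))]).ravel()
    sol = least_squares(residuals, x0, method="lm",
                        args=(epochs, prior_pos, prior_sigma, pr_sigma))
    return sol.x.reshape(len(epochs), 4)
```

Each appended residual block plays the role of one factor in the graph; a full implementation would use a factor-graph library such as GTSAM or Ceres, but the batch least-squares form above captures the same optimization.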
Initial factor graph
In the graph, $e_i$ represents an error function, or probabilistic constraint, applied to the state at the specified time step. When Gaussian noise is assumed, the error function is equivalent to the innovation residual of the traditional Kalman filter. Using this information, it is easy to see that the optimal state estimate can be calculated through a traditional non-linear least squares formulation (e.g., Levenberg-Marquardt) that minimizes the total error over the graph.
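Written out, with $\Sigma_i$ denoting the covariance assumed for the $i$-th constraint (the covariance symbol is introduced here only for illustration), the estimate is the minimizer of the summed squared, whitened errors:

$$
\hat{X} = \arg\min_{X} \sum_{i} \left\lVert e_i(X) \right\rVert^{2}_{\Sigma_i}
$$

where $\lVert \cdot \rVert^{2}_{\Sigma}$ is the squared Mahalanobis distance; Levenberg-Marquardt repeatedly linearizes this objective and solves the resulting normal equations.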
Whenever errors are present in the observations, methods must be incorporated to make the optimization more robust. One such method is the switchable constraint, which can be thought of as an observation weight that is optimized concurrently with the state estimates. Adding switchable constraints to the optimization process modifies the cost function. The modified cost function is provided below, where $s$ is the switch variable, which is confined to the interval from zero to one. A graphical depiction is shown in the figure below, where observable $m$ at epoch $n$ exceeded the pre-defined residual threshold.
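One common form of this modified objective, following the switchable-constraints formulation of Sünderhauf and Protzel (the grouping of terms and the covariance symbols below are assumed and may differ from the exact weighting used here), is:

$$
\hat{X}, \hat{S} = \arg\min_{X,S} \sum_{i} \lVert e_i(X) \rVert^{2}_{\Sigma_i} + \sum_{n,m} \lVert \Psi(s_{n,m})\, e_{n,m}(X) \rVert^{2}_{\Lambda_{n,m}} + \sum_{n,m} \lVert 1 - s_{n,m} \rVert^{2}_{\Xi_{n,m}}
$$

where the first sum collects the unswitched constraints, $e_{n,m}$ is the error for observable $m$ at epoch $n$, $\Psi(\cdot)$ clamps the switch variable to $[0,1]$, and the final sum is a prior that penalizes switching observations off, so only observables with large residuals are effectively down-weighted.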
Graph with switch factors incorporated