006 gtsam/examples/elaboratePoint2KalmanFilter.cpp


    A simple linear Kalman filter on a moving 2D point, implemented with factor graphs. This example manually creates all of the needed data structures.

    1. Source code

    // NOTE: the header names were lost when this listing was copied; the
    // includes below are reconstructed from the classes used in the example
    // and may differ from the original file.
    #include <gtsam/slam/PriorFactor.h>
    #include <gtsam/slam/BetweenFactor.h>
    //#include <gtsam/nonlinear/Ordering.h>
    #include <gtsam/nonlinear/Values.h>
    #include <gtsam/nonlinear/Symbol.h>
    #include <gtsam/linear/GaussianSequentialSolver.h>
    #include <gtsam/linear/GaussianBayesNet.h>
    #include <gtsam/geometry/Point2.h>
    
    using namespace std;
    using namespace gtsam;
    
    int main() {
    
      // [code below basically does SRIF with Cholesky]
    
      // Create a factor graph to perform the inference
      GaussianFactorGraph::shared_ptr linearFactorGraph(new GaussianFactorGraph);
    
      // Create the desired ordering
      Ordering::shared_ptr ordering(new Ordering);
    
      // Create a structure to hold the linearization points
      Values linearizationPoints;
    
      // Ground truth example
      // Start at origin, move to the right along the x-axis: (0,0)  (1,0)  (2,0)
      // Motion model is just moving to the right: (x'-x)^2
      // Measurements are GPS-like: (x-z)^2, where z is a 2D measurement
      // i.e., we should recover (0,0)  (1,0)  (2,0) exactly if there is no noise
    
      // Create new state variable
      Symbol x0('x',0);
      ordering->insert(x0, 0);
    
      // Initialize state x0 (2D point) at origin by adding a prior factor, i.e., Bayes net P(x0)
      // This is equivalent to x_0 and P_0
      Point2 x_initial(0,0);
      SharedDiagonal P_initial = noiseModel::Diagonal::Sigmas((Vec(2) << 0.1, 0.1));
      PriorFactor<Point2> factor1(x0, x_initial, P_initial);
      // Linearize the factor and add it to the linear factor graph
      linearizationPoints.insert(x0, x_initial);
      linearFactorGraph->push_back(factor1.linearize(linearizationPoints, *ordering));
    
    
      // Now predict the state at t=1, i.e. argmax_{x1} P(x1) = P(x1|x0) P(x0)
      // In Kalman Filter notation, this is x_{t+1|t} and P_{t+1|t}
      // For the Kalman Filter, this requires a motion model, f(x_{t}) = x_{t+1|t}
      // Assuming the system is linear, this will be of the form f(x_{t}) = F*x_{t} + B*u_{t} + w
      // where F is the state transition model/matrix, B is the control input model,
      // and w is zero-mean, Gaussian white noise with covariance Q
      // Note, in some models, Q is actually derived as G*w*G^T where w models uncertainty of some
      // physical property, such as velocity or acceleration, and G is derived from physics
      //
      // For the purposes of this example, let us assume we are using a constant-position model and
      // the controls are driving the point to the right at 1 m/s. Then, F = [1 0 ; 0 1], B = [1 0 ; 0 1]
      // and u = [1 ; 0]. Let us also assume that the process noise Q = [0.1 0 ; 0 0.1];
      //
      // In the case of factor graphs, the factor related to the motion model would be defined as
      // f2 = (f(x_{t}) - x_{t+1})^T * Q^-1 * (f(x_{t}) - x_{t+1})
      // Conveniently, there is a factor type, called a BetweenFactor, that can generate this factor
      // given the expected difference, f(x_{t}) - x_{t+1}, and Q.
      // so, difference = x_{t+1} - x_{t} = F*x_{t} + B*u_{t} - I*x_{t}
      //                                  = (F - I)*x_{t} + B*u_{t}
      //                                  = B*u_{t} (for our example)
      Symbol x1('x',1);
      ordering->insert(x1, 1);
    
      Point2 difference(1,0);
      SharedDiagonal Q = noiseModel::Diagonal::Sigmas((Vec(2) << 0.1, 0.1));
      BetweenFactor<Point2> factor2(x0, x1, difference, Q);
      // Linearize the factor and add it to the linear factor graph
      linearizationPoints.insert(x1, x_initial);
      linearFactorGraph->push_back(factor2.linearize(linearizationPoints, *ordering));
    
      // We have now made the small factor graph f1-(x0)-f2-(x1)
      // where factor f1 is just the prior from time t0, P(x0)
      // and   factor f2 is from the motion model
      // Eliminate this in order x0, x1, to get Bayes net P(x0|x1)P(x1)
      // As this is a filter, all we need is the posterior P(x1), so we just keep the root of the Bayes net
      //
      // Because of the way GTSAM works internally, we have used nonlinear classes even though this example
      // system is linear. We first convert the nonlinear factor graph into a linear one using the specified
      // ordering. Linear factors are simply numbered and are not accessible via named keys like the nonlinear
      // variables. Also, the nonlinear factors are linearized around an initial estimate. For a truly linear
      // system, the initial estimate does not matter.
    
      // Solve the linear factor graph, converting it into a linear Bayes Network ( P(x0,x1) = P(x0|x1)*P(x1) )
      GaussianSequentialSolver solver0(*linearFactorGraph);
      GaussianBayesNet::shared_ptr linearBayesNet = solver0.eliminate();
    
      // Extract the current estimate of x1,P1 from the Bayes Network
      VectorValues result = optimize(*linearBayesNet);
      Point2 x1_predict = linearizationPoints.at<Point2>(x1).retract(result[ordering->at(x1)]);
      x1_predict.print("X1 Predict");
    
      // Update the new linearization point to the new estimate
      linearizationPoints.update(x1, x1_predict);
    
    
    
      // Create a new, empty graph and add the prior from the previous step
      linearFactorGraph = GaussianFactorGraph::shared_ptr(new GaussianFactorGraph);
    
      // Convert the root conditional, P(x1) in this case, into a Prior for the next step
      // Some care must be done here, as the linearization point in future steps will be different
      // than what was used when the factor was created.
      // f = || F*dx1' - (F*x0 - x1) ||^2, originally linearized at x1 = x0
      // After this step, the factor needs to be linearized around x1 = x1_predict
      // This changes the factor to f = || F*dx1'' - b'' ||^2
      //                              = || F*(dx1' - (dx1' - dx1'')) - b'' ||^2
      //                              = || F*dx1' - F*(dx1' - dx1'') - b'' ||^2
      //                              = || F*dx1' - (b'' + F(dx1' - dx1'')) ||^2
      //                              -> b' = b'' + F(dx1' - dx1'')
      //                              -> b'' = b' - F(dx1' - dx1'')
      //                              = || F*dx1'' - (b'  - F(dx1' - dx1'')) ||^2
      //                              = || F*dx1'' - (b'  - F(x_predict - x_initial)) ||^2
      const GaussianConditional::shared_ptr& cg0 = linearBayesNet->back();
      assert(cg0->nrFrontals() == 1);
      assert(cg0->nrParents() == 0);
      linearFactorGraph->add(0, cg0->R(), cg0->d() - cg0->R()*result[ordering->at(x1)], noiseModel::Diagonal::Sigmas(cg0->get_sigmas(), true));
    
      // Create the desired ordering
      ordering = Ordering::shared_ptr(new Ordering);
      ordering->insert(x1, 0);
    
      // Now, a measurement, z1, has been received, and the Kalman Filter should be "Updated"/"Corrected"
      // This is equivalent to saying P(x1|z1) ~ P(z1|x1)*P(x1) ~ f3(x1)*f4(x1;z1)
      // where f3 is the prior from the previous step, and
      // where f4 is a measurement factor
      //
      // So, now we need to create the measurement factor, f4
      // For the Kalman Filter, this is the measurement function, h(x_{t}) = z_{t}
      // Assuming the system is linear, this will be of the form h(x_{t}) = H*x_{t} + v
      // where H is the observation model/matrix, and v is zero-mean, Gaussian white noise with covariance R
      //
      // For the purposes of this example, let us assume we have something like a GPS that returns
      // the current position of the robot. For this simple example, we can use a PriorFactor to model the
      // observation as it depends on only a single state variable, x1. To model real sensor observations
      // generally requires the creation of a new factor type. For example, factors for range sensors, bearing
      // sensors, and camera projections have already been added to GTSAM.
      //
      // In the case of factor graphs, the factor related to the measurements would be defined as
      // f4 = (h(x_{t}) - z_{t})^T * R^-1 * (h(x_{t}) - z_{t})
      //    = (x_{t} - z_{t})^T * R^-1 * (x_{t} - z_{t})
      // This can be modeled using the PriorFactor, where the mean is z_{t} and the covariance is R.
      Point2 z1(1.0, 0.0);
      SharedDiagonal R1 = noiseModel::Diagonal::Sigmas((Vec(2) << 0.25, 0.25));
      PriorFactor<Point2> factor4(x1, z1, R1);
      // Linearize the factor and add it to the linear factor graph
      linearFactorGraph->push_back(factor4.linearize(linearizationPoints, *ordering));
    
      // We have now made the small factor graph f3-(x1)-f4
      // where factor f3 is the prior from previous time ( P(x1) )
      // and   factor f4 is from the measurement, z1 ( P(x1|z1) )
      // Eliminate this in order x1, to get Bayes net P(x1)
      // As this is a filter, all we need is the posterior P(x1), so we just keep the root of the Bayes net
      // We solve as before...
    
      // Solve the linear factor graph, converting it into a linear Bayes Network ( the posterior P(x1|z1) )
      GaussianSequentialSolver solver1(*linearFactorGraph);
      linearBayesNet = solver1.eliminate();
    
      // Extract the current estimate of x1 from the Bayes Network
      result = optimize(*linearBayesNet);
      Point2 x1_update = linearizationPoints.at<Point2>(x1).retract(result[ordering->at(x1)]);
      x1_update.print("X1 Update");
    
      // Update the linearization point to the new estimate
      linearizationPoints.update(x1, x1_update);
    
    
    
    
    
    
      // Wash, rinse, repeat for another time step
      // Create a new, empty graph and add the prior from the previous step
      linearFactorGraph = GaussianFactorGraph::shared_ptr(new GaussianFactorGraph);
    
      // Convert the root conditional, P(x1) in this case, into a Prior for the next step
      // The linearization point of this prior must be moved to the new estimate of x, and the key/index needs to be reset to 0,
      // the first key in the next iteration
      const GaussianConditional::shared_ptr& cg1 = linearBayesNet->back();
      assert(cg1->nrFrontals() == 1);
      assert(cg1->nrParents() == 0);
      JacobianFactor tmpPrior1 = JacobianFactor(*cg1);
      linearFactorGraph->add(0, tmpPrior1.getA(tmpPrior1.begin()), tmpPrior1.getb() - tmpPrior1.getA(tmpPrior1.begin()) * result[ordering->at(x1)], tmpPrior1.get_model());
    
      // Create a key for the new state
      Symbol x2('x',2);
    
      // Create the desired ordering
      ordering = Ordering::shared_ptr(new Ordering);
      ordering->insert(x1, 0);
      ordering->insert(x2, 1);
    
      // Create a nonlinear factor describing the motion model
      difference = Point2(1,0);
      Q = noiseModel::Diagonal::Sigmas((Vec(2) << 0.1, 0.1));
      BetweenFactor<Point2> factor6(x1, x2, difference, Q);
    
      // Linearize the factor and add it to the linear factor graph
      linearizationPoints.insert(x2, x1_update);
      linearFactorGraph->push_back(factor6.linearize(linearizationPoints, *ordering));
    
      // Solve the linear factor graph, converting it into a linear Bayes Network ( P(x1,x2) = P(x1|x2)*P(x2) )
      GaussianSequentialSolver solver2(*linearFactorGraph);
      linearBayesNet = solver2.eliminate();
    
      // Extract the current estimate of x2 from the Bayes Network
      result = optimize(*linearBayesNet);
      Point2 x2_predict = linearizationPoints.at<Point2>(x2).retract(result[ordering->at(x2)]);
      x2_predict.print("X2 Predict");
    
      // Update the linearization point to the new estimate
      linearizationPoints.update(x2, x2_predict);
    
    
    
      // Now add the next measurement
      // Create a new, empty graph and add the prior from the previous step
      linearFactorGraph = GaussianFactorGraph::shared_ptr(new GaussianFactorGraph);
    
      // Convert the root conditional, P(x2) in this case, into a Prior for the next step
      const GaussianConditional::shared_ptr& cg2 = linearBayesNet->back();
      assert(cg2->nrFrontals() == 1);
      assert(cg2->nrParents() == 0);
      JacobianFactor tmpPrior2 = JacobianFactor(*cg2);
      linearFactorGraph->add(0, tmpPrior2.getA(tmpPrior2.begin()), tmpPrior2.getb() - tmpPrior2.getA(tmpPrior2.begin()) * result[ordering->at(x2)], tmpPrior2.get_model());
    
      // Create the desired ordering
      ordering = Ordering::shared_ptr(new Ordering);
      ordering->insert(x2, 0);
    
      // And update using z2 ...
      Point2 z2(2.0, 0.0);
      SharedDiagonal R2 = noiseModel::Diagonal::Sigmas((Vec(2) << 0.25, 0.25));
      PriorFactor<Point2> factor8(x2, z2, R2);
    
      // Linearize the factor and add it to the linear factor graph
      linearFactorGraph->push_back(factor8.linearize(linearizationPoints, *ordering));
    
      // We have now made the small factor graph f7-(x2)-f8
      // where factor f7 is the prior from previous time ( P(x2) )
      // and   factor f8 is from the measurement, z2 ( P(x2|z2) )
      // Eliminate this in order x2, to get Bayes net P(x2)
      // As this is a filter, all we need is the posterior P(x2), so we just keep the root of the Bayes net
      // We solve as before...
    
      // Solve the linear factor graph, converting it into a linear Bayes Network ( the posterior P(x2|z2) )
      GaussianSequentialSolver solver3(*linearFactorGraph);
      linearBayesNet = solver3.eliminate();
    
      // Extract the current estimate of x2 from the Bayes Network
      result = optimize(*linearBayesNet);
      Point2 x2_update = linearizationPoints.at<Point2>(x2).retract(result[ordering->at(x2)]);
      x2_update.print("X2 Update");
    
      // Update the linearization point to the new estimate
      linearizationPoints.update(x2, x2_update);
    
    
    
    
    
    
      // Wash, rinse, repeat for a third time step
      // Create a new, empty graph and add the prior from the previous step
      linearFactorGraph = GaussianFactorGraph::shared_ptr(new GaussianFactorGraph);
    
      // Convert the root conditional, P(x2) in this case, into a Prior for the next step
      const GaussianConditional::shared_ptr& cg3 = linearBayesNet->back();
      assert(cg3->nrFrontals() == 1);
      assert(cg3->nrParents() == 0);
      JacobianFactor tmpPrior3 = JacobianFactor(*cg3);
      linearFactorGraph->add(0, tmpPrior3.getA(tmpPrior3.begin()), tmpPrior3.getb() - tmpPrior3.getA(tmpPrior3.begin()) * result[ordering->at(x2)], tmpPrior3.get_model());
    
      // Create a key for the new state
      Symbol x3('x',3);
    
      // Create the desired ordering
      ordering = Ordering::shared_ptr(new Ordering);
      ordering->insert(x2, 0);
      ordering->insert(x3, 1);
    
      // Create a nonlinear factor describing the motion model
      difference = Point2(1,0);
      Q = noiseModel::Diagonal::Sigmas((Vec(2) << 0.1, 0.1));
      BetweenFactor<Point2> factor10(x2, x3, difference, Q);
    
      // Linearize the factor and add it to the linear factor graph
      linearizationPoints.insert(x3, x2_update);
      linearFactorGraph->push_back(factor10.linearize(linearizationPoints, *ordering));
    
      // Solve the linear factor graph, converting it into a linear Bayes Network ( P(x2,x3) = P(x2|x3)*P(x3) )
      GaussianSequentialSolver solver4(*linearFactorGraph);
      linearBayesNet = solver4.eliminate();
    
      // Extract the current estimate of x3 from the Bayes Network
      result = optimize(*linearBayesNet);
      Point2 x3_predict = linearizationPoints.at<Point2>(x3).retract(result[ordering->at(x3)]);
      x3_predict.print("X3 Predict");
    
      // Update the linearization point to the new estimate
      linearizationPoints.update(x3, x3_predict);
    
    
    
      // Now add the next measurement
      // Create a new, empty graph and add the prior from the previous step
      linearFactorGraph = GaussianFactorGraph::shared_ptr(new GaussianFactorGraph);
    
      // Convert the root conditional, P(x3) in this case, into a Prior for the next step
      const GaussianConditional::shared_ptr& cg4 = linearBayesNet->back();
      assert(cg4->nrFrontals() == 1);
      assert(cg4->nrParents() == 0);
      JacobianFactor tmpPrior4 = JacobianFactor(*cg4);
      linearFactorGraph->add(0, tmpPrior4.getA(tmpPrior4.begin()), tmpPrior4.getb() - tmpPrior4.getA(tmpPrior4.begin()) * result[ordering->at(x3)], tmpPrior4.get_model());
    
      // Create the desired ordering
      ordering = Ordering::shared_ptr(new Ordering);
      ordering->insert(x3, 0);
    
      // And update using z3 ...
      Point2 z3(3.0, 0.0);
      SharedDiagonal R3 = noiseModel::Diagonal::Sigmas((Vec(2) << 0.25, 0.25));
      PriorFactor<Point2> factor12(x3, z3, R3);
    
      // Linearize the factor and add it to the linear factor graph
      linearFactorGraph->push_back(factor12.linearize(linearizationPoints, *ordering));
    
      // We have now made the small factor graph f11-(x3)-f12
      // where factor f11 is the prior from previous time ( P(x3) )
      // and   factor f12 is from the measurement, z3 ( P(x3|z3) )
      // Eliminate this in order x3, to get Bayes net P(x3)
      // As this is a filter, all we need is the posterior P(x3), so we just keep the root of the Bayes net
      // We solve as before...
    
      // Solve the linear factor graph, converting it into a linear Bayes Network ( the posterior P(x3|z3) )
      GaussianSequentialSolver solver5(*linearFactorGraph);
      linearBayesNet = solver5.eliminate();
    
      // Extract the current estimate of x3 from the Bayes Network
      result = optimize(*linearBayesNet);
      Point2 x3_update = linearizationPoints.at<Point2>(x3).retract(result[ordering->at(x3)]);
      x3_update.print("X3 Update");
    
      // Update the linearization point to the new estimate
      linearizationPoints.update(x3, x3_update);
    
    
    
      return 0;
    }
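
    For reference: because the simulated measurements agree exactly with the noise-free motion, every update should coincide with its prediction. Built against a GTSAM version that still ships GaussianSequentialSolver (the 2.x-era API used here), the program is expected to print estimates of approximately (1,0), (2,0), (3,0) for x1, x2, x3, for both the Predict and Update lines (the exact formatting depends on Point2::print).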
    
    

    2. The main function

    The code below essentially performs square-root information filtering (SRIF) with Cholesky factorization.
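
    To make concrete what the factor-graph machinery below should reproduce, here is a minimal closed-form Kalman filter for the same system, written in plain Eigen. This is a sketch added for illustration, not part of the original example; F, B, u, Q, H, and R are taken from the comments in the source above.

    #include <Eigen/Dense>
    #include <iostream>

    int main() {
      using Eigen::Matrix2d;
      using Eigen::Vector2d;

      // System from the example: constant-position model, driven right at 1 m/s.
      const Matrix2d F = Matrix2d::Identity();           // state transition
      const Matrix2d B = Matrix2d::Identity();           // control input model
      const Vector2d u(1.0, 0.0);                        // control
      const Matrix2d Q = 0.01 * Matrix2d::Identity();    // process noise (sigma 0.1)
      const Matrix2d H = Matrix2d::Identity();           // GPS-like observation
      const Matrix2d R = 0.0625 * Matrix2d::Identity();  // measurement noise (sigma 0.25)

      Vector2d x(0.0, 0.0);                              // x_0
      Matrix2d P = 0.01 * Matrix2d::Identity();          // P_0 (sigma 0.1)

      const Vector2d zs[3] = {Vector2d(1, 0), Vector2d(2, 0), Vector2d(3, 0)};
      for (const Vector2d& z : zs) {
        x = F * x + B * u;                               // predict mean x_{t+1|t}
        P = F * P * F.transpose() + Q;                   // predict covariance P_{t+1|t}
        Matrix2d K = P * H.transpose() * (H * P * H.transpose() + R).inverse();
        x = x + K * (z - H * x);                         // correct mean
        P = (Matrix2d::Identity() - K * H) * P;          // correct covariance
        std::cout << "update: " << x.transpose() << std::endl;
      }
      return 0;
    }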

    2.1 Creating the data structures

      // Create a factor graph to perform the inference
      GaussianFactorGraph::shared_ptr linearFactorGraph(new GaussianFactorGraph);
    
      // Create the desired ordering
      Ordering::shared_ptr ordering(new Ordering);
    
      // Create a structure to hold the linearization points
      Values linearizationPoints;
    

    This creates the three pieces of state the filter maintains: the factor graph, the elimination ordering, and a Values container holding the linearization points.

    2.2 Ground truth example

      // Ground truth example
      // Start at origin, move to the right along the x-axis: (0,0)  (1,0)  (2,0)
      // Motion model is just moving to the right: (x'-x)^2
      // Measurements are GPS-like: (x-z)^2, where z is a 2D measurement
      // i.e., we should recover (0,0)  (1,0)  (2,0) exactly if there is no noise
    

    2.3 Creating the state variable

      // Create new state variable
      Symbol x0('x',0);
      ordering->insert(x0, 0);
    
      // Initialize state x0 (2D point) at origin by adding a prior factor, i.e., Bayes net P(x0)
      // This is equivalent to x_0 and P_0
      Point2 x_initial(0,0);
      SharedDiagonal P_initial = noiseModel::Diagonal::Sigmas((Vec(2) << 0.1, 0.1));
      PriorFactor<Point2> factor1(x0, x_initial, P_initial);
      // Linearize the factor and add it to the linear factor graph
      linearizationPoints.insert(x0, x_initial);
      linearFactorGraph->push_back(factor1.linearize(linearizationPoints, *ordering));
    

    Create the key and its slot in the ordering, then linearize the prior factor around the initial point.

    2.4 Prediction

    The prediction step computes $\mathop{\arg\max}\limits_{x_1} P(x_1) = P(x_1|x_0)P(x_0)$, which in Kalman filter notation is $x_{t+1|t}$ and $P_{t+1|t}$.
    The motion model is $f(x_t) = x_{t+1|t}$. Assuming the system is linear, it takes the form
    $$f(x_t) = F x_t + B u_t + w$$
    where $F$ is the state transition model/matrix, $B$ is the control input model, and $w$ is zero-mean Gaussian white noise with covariance $Q$. In some models $Q$ is derived as $G w G^T$, where $w$ models the uncertainty of some physical quantity such as velocity or acceleration and $G$ comes from the physics.
    In a factor graph, the factor encoding the motion model is defined as
    $$f_2 = (f(x_t) - x_{t+1})^T\, Q^{-1}\, (f(x_t) - x_{t+1})$$
    A BetweenFactor generates this factor, given the expected difference $f(x_t) - x_{t+1}$ and $Q$:
    $$\text{difference} = x_{t+1} - x_t = F x_t + B u_t - I x_t = (F - I)x_t + B u_t = B u_t \quad \text{(for this example)}$$

    Define x1 and the BetweenFactor:

      Symbol x1('x',1);
      ordering->insert(x1, 1);
    
      Point2 difference(1,0);
      SharedDiagonal Q = noiseModel::Diagonal::Sigmas((Vec(2) << 0.1, 0.1));
      BetweenFactor<Point2> factor2(x0, x1, difference, Q);
      // Linearize the factor and add it to the linear factor graph
      linearizationPoints.insert(x1, x_initial);
      linearFactorGraph->push_back(factor2.linearize(linearizationPoints, *ordering));
    

    This builds the small factor graph f1-(x0)-f2-(x1), where factor f1 is the prior from time t0, $P(x_0)$, and factor f2 comes from the motion model. Eliminating in the order x0, x1 yields the Bayes net $P(x_0|x_1)P(x_1)$.

    Solving the linear factor graph converts it into a linear Bayes network
    $$P(x_0, x_1) = P(x_0|x_1)P(x_1)$$

    The estimate is then extracted from the Bayes network and the linearization point is updated:

      GaussianSequentialSolver solver0(*linearFactorGraph);
      GaussianBayesNet::shared_ptr linearBayesNet = solver0.eliminate();
    
      // Extract the current estimate of x1,P1 from the Bayes Network
      VectorValues result = optimize(*linearBayesNet);
      Point2 x1_predict = linearizationPoints.at<Point2>(x1).retract(result[ordering->at(x1)]);
      x1_predict.print("X1 Predict");
    
      // Update the new linearization point to the new estimate
      linearizationPoints.update(x1, x1_predict);
    
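
    For intuition, the elimination just performed is equivalent to one small whitened least-squares solve. The sketch below (plain Eigen, added for illustration and not part of the example) stacks the whitened prior and motion factors into $A\,dx = b$ and solves the normal equations with a Cholesky factorization, the SRIF step the comment at the top of the file alludes to:

    #include <Eigen/Dense>
    #include <iostream>

    int main() {
      // Unknowns dx = [dx0; dx1], both linearized at (0,0). Sigmas are 0.1,
      // so whitening multiplies each factor by 1/0.1 = 10.
      // Prior on x0:       10 * I * dx0       = 0
      // Between x0 -> x1:  10 * (dx1 - dx0)   = 10 * (1, 0)
      Eigen::MatrixXd A = Eigen::MatrixXd::Zero(4, 4);
      Eigen::VectorXd b = Eigen::VectorXd::Zero(4);
      A.block<2, 2>(0, 0) =  10.0 * Eigen::Matrix2d::Identity();
      A.block<2, 2>(2, 0) = -10.0 * Eigen::Matrix2d::Identity();
      A.block<2, 2>(2, 2) =  10.0 * Eigen::Matrix2d::Identity();
      b.segment<2>(2) = 10.0 * Eigen::Vector2d(1.0, 0.0);

      // Normal equations solved via Cholesky: expect dx0 = (0,0), dx1 = (1,0).
      Eigen::VectorXd dx = (A.transpose() * A).llt().solve(A.transpose() * b);
      std::cout << "dx1 = " << dx.segment<2>(2).transpose() << std::endl;
      return 0;
    }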

    2.5 Rebuilding the graph with the prior from the previous step

      linearFactorGraph = GaussianFactorGraph::shared_ptr(new GaussianFactorGraph);
    

    The root conditional $P(x_1)$ is converted into a prior for the next step. Some care is needed here: the linearization point in future steps differs from the one used when the factor was created.
    Originally, $f = \| F\,dx_1' - (F x_0 - x_1) \|^2$, linearized at $x_1 = x_0$. After this step the factor must be linearized around $x_1 = x_{1\_predict}$, which changes it to

    $$
    \begin{aligned}
    f &= \| F\,dx_1'' - b'' \|^2 \\
      &= \| F(dx_1' - (dx_1' - dx_1'')) - b'' \|^2 \\
      &= \| F\,dx_1' - F(dx_1' - dx_1'') - b'' \|^2 \\
      &= \| F\,dx_1' - (b'' + F(dx_1' - dx_1'')) \|^2 \\
      &\Rightarrow b' = b'' + F(dx_1' - dx_1'') \\
      &\Rightarrow b'' = b' - F(dx_1' - dx_1'') \\
    f &= \| F\,dx_1'' - (b' - F(dx_1' - dx_1'')) \|^2 \\
      &= \| F\,dx_1'' - (b' - F(x_{predict} - x_{initial})) \|^2
    \end{aligned}
    $$

      const GaussianConditional::shared_ptr& cg0 = linearBayesNet->back();
      assert(cg0->nrFrontals() == 1);
      assert(cg0->nrParents() == 0);
      linearFactorGraph->add(0, cg0->R(), cg0->d() - cg0->R()*result[ordering->at(x1)], noiseModel::Diagonal::Sigmas(cg0->get_sigmas(), true));
    
      // Create the desired ordering
      ordering = Ordering::shared_ptr(new Ordering);
      ordering->insert(x1, 0);
    
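
    A minimal numeric illustration of this right-hand-side shift (again a sketch, not from the example): if elimination produced the square-root information system $R\,dx_1 = d$ around the old linearization point, then rebasing to the new point (old point plus the solved increment) keeps $R$ and only shifts the rhs, which is exactly what `cg0->d() - cg0->R()*result[ordering->at(x1)]` computes above.

    #include <Eigen/Dense>
    #include <iostream>

    int main() {
      // Illustrative values: sqrt(50) ~ 7.07 is the square-root information
      // of the predicted covariance P = 0.02 * I from this example.
      Eigen::Matrix2d R;
      R << 7.07, 0.0,
           0.0, 7.07;
      Eigen::Vector2d d(7.07, 0.0);            // rhs around the old point

      // Increment implied by the old system: delta = R^-1 * d = (1, 0).
      Eigen::Vector2d delta = R.triangularView<Eigen::Upper>().solve(d);

      // Rebase the prior to (old point + delta): same R, shifted rhs.
      Eigen::Vector2d d_new = d - R * delta;
      std::cout << "shifted rhs: " << d_new.transpose() << std::endl;  // ~ (0, 0)
      return 0;
    }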

    2.6 Update (correction)

    Once a measurement arrives,
    $$P(x_1|z_1) \sim P(z_1|x_1)P(x_1) \sim f_3(x_1)\,f_4(x_1; z_1)$$
    where $f_3$ is the prior from the previous step and $f_4$ is a measurement factor.

    2.6.1 Creating $f_4$

    The measurement function is $h(x_t) = z_t$. Assuming the system is linear, it takes the form $h(x_t) = H x_t + v$, where $H$ is the observation model/matrix and $v$ is zero-mean Gaussian white noise with covariance $R$. Assume a GPS-like observation that returns the current position. In the factor graph, the measurement factor is defined as
    $$f_4 = (h(x_t) - z_t)^T R^{-1} (h(x_t) - z_t) = (x_t - z_t)^T R^{-1} (x_t - z_t)$$
    Because it depends on a single state variable, this can be modeled with a PriorFactor whose mean is $z_t$ and whose covariance is $R$.
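
    Before the code, a look at what this linearizes to (another illustrative Eigen sketch, not from the example): stacking the rebased prior from section 2.5 with the whitened GPS factor and solving once gives the corrected increment. The prior's sigma of about 0.141 (covariance $0.02\,I$) is an assumption carried over from the closed-form check above.

    #include <Eigen/Dense>
    #include <iostream>

    int main() {
      // Both factors act on dx1, linearized at x1_predict.
      const double w_prior = 1.0 / 0.1414;  // sqrt information of P = 0.02 * I
      const double w_meas  = 1.0 / 0.25;    // GPS sigma 0.25
      Eigen::Vector2d z1(1.0, 0.0), x1_predict(1.0, 0.0);

      Eigen::MatrixXd A(4, 2);
      A << w_prior * Eigen::Matrix2d::Identity(),  // rebased prior: rhs 0
           w_meas  * Eigen::Matrix2d::Identity();  // measurement factor
      Eigen::VectorXd b(4);
      b << Eigen::Vector2d::Zero(),
           w_meas * (z1 - x1_predict);

      // One Cholesky solve of the normal equations; expect x1_update = (1, 0).
      Eigen::Vector2d dx1 = (A.transpose() * A).llt().solve(A.transpose() * b);
      std::cout << "x1_update = " << (x1_predict + dx1).transpose() << std::endl;
      return 0;
    }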

      Point2 z1(1.0, 0.0);
      SharedDiagonal R1 = noiseModel::Diagonal::Sigmas((Vec(2) << 0.25, 0.25));
      PriorFactor<Point2> factor4(x1, z1, R1);
      // Linearize the factor and add it to the linear factor graph
      linearFactorGraph->push_back(factor4.linearize(linearizationPoints, *ordering));
    

    We have now made the small factor graph f3-(x1)-f4, where factor $f_3$ is the prior from the previous time ($P(x_1)$) and factor $f_4$ comes from the measurement $z_1$ ($P(x_1|z_1)$). Eliminating in order $x_1$ yields a Bayes net whose root is the posterior $P(x_1|z_1)$. As before, solve the linear factor graph by converting it into a linear Bayes network:

      GaussianSequentialSolver solver1(*linearFactorGraph);
      linearBayesNet = solver1.eliminate();
    
      // Extract the current estimate of x1 from the Bayes Network
      result = optimize(*linearBayesNet);
      Point2 x1_update = linearizationPoints.at<Point2>(x1).retract(result[ordering->at(x1)]);
      x1_update.print("X1 Update");
    
      // Update the linearization point to the new estimate
      linearizationPoints.update(x1, x1_update);
    
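
    As a hand-worked check (not printed by the example): the predicted covariance is $P_{1|0} = P_0 + Q = 0.02\,I$, so the Kalman gain is $K = P_{1|0}(P_{1|0} + R)^{-1} = \frac{0.02}{0.02 + 0.0625}\,I \approx 0.242\,I$; since $z_1 = (1,0)$ equals the prediction, $x_{1\_update} = (1,0)$ and $P_{1|1} = (I - K)P_{1|0} \approx 0.0152\,I$.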

    2.7 Repeating the process

      // Wash, rinse, repeat for another time step
      // Create a new, empty graph and add the prior from the previous step
      linearFactorGraph = GaussianFactorGraph::shared_ptr(new GaussianFactorGraph);
    
      // Convert the root conditional, P(x1) in this case, into a Prior for the next step
      // The linearization point of this prior must be moved to the new estimate of x, and the key/index needs to be reset to 0,
      // the first key in the next iteration
      const GaussianConditional::shared_ptr& cg1 = linearBayesNet->back();
      assert(cg1->nrFrontals() == 1);
      assert(cg1->nrParents() == 0);
      JacobianFactor tmpPrior1 = JacobianFactor(*cg1);
      linearFactorGraph->add(0, tmpPrior1.getA(tmpPrior1.begin()), tmpPrior1.getb() - tmpPrior1.getA(tmpPrior1.begin()) * result[ordering->at(x1)], tmpPrior1.get_model());
    
      // Create a key for the new state
      Symbol x2('x',2);
    
      // Create the desired ordering
      ordering = Ordering::shared_ptr(new Ordering);
      ordering->insert(x1, 0);
      ordering->insert(x2, 1);
    
      // Create a nonlinear factor describing the motion model
      difference = Point2(1,0);
      Q = noiseModel::Diagonal::Sigmas((Vec(2) << 0.1, 0.1));
      BetweenFactor<Point2> factor6(x1, x2, difference, Q);
    
      // Linearize the factor and add it to the linear factor graph
      linearizationPoints.insert(x2, x1_update);
      linearFactorGraph->push_back(factor6.linearize(linearizationPoints, *ordering));
    
      // Solve the linear factor graph, converting it into a linear Bayes Network ( P(x1,x2) = P(x1|x2)*P(x2) )
      GaussianSequentialSolver solver2(*linearFactorGraph);
      linearBayesNet = solver2.eliminate();
    
      // Extract the current estimate of x2 from the Bayes Network
      result = optimize(*linearBayesNet);
      Point2 x2_predict = linearizationPoints.at<Point2>(x2).retract(result[ordering->at(x2)]);
      x2_predict.print("X2 Predict");
    
      // Update the linearization point to the new estimate
      linearizationPoints.update(x2, x2_predict);
    
    
    
      // Now add the next measurement
      // Create a new, empty graph and add the prior from the previous step
      linearFactorGraph = GaussianFactorGraph::shared_ptr(new GaussianFactorGraph);
    
      // Convert the root conditional, P(x2) in this case, into a Prior for the next step
      const GaussianConditional::shared_ptr& cg2 = linearBayesNet->back();
      assert(cg2->nrFrontals() == 1);
      assert(cg2->nrParents() == 0);
      JacobianFactor tmpPrior2 = JacobianFactor(*cg2);
      linearFactorGraph->add(0, tmpPrior2.getA(tmpPrior2.begin()), tmpPrior2.getb() - tmpPrior2.getA(tmpPrior2.begin()) * result[ordering->at(x2)], tmpPrior2.get_model());
    
      // Create the desired ordering
      ordering = Ordering::shared_ptr(new Ordering);
      ordering->insert(x2, 0);
    
      // And update using z2 ...
      Point2 z2(2.0, 0.0);
      SharedDiagonal R2 = noiseModel::Diagonal::Sigmas((Vec(2) << 0.25, 0.25));
      PriorFactor<Point2> factor8(x2, z2, R2);
    
      // Linearize the factor and add it to the linear factor graph
      linearFactorGraph->push_back(factor8.linearize(linearizationPoints, *ordering));
    
      // We have now made the small factor graph f7-(x2)-f8
      // where factor f7 is the prior from previous time ( P(x2) )
      // and   factor f8 is from the measurement, z2 ( P(x2|z2) )
      // Eliminate this in order x2, to get Bayes net P(x2)
      // As this is a filter, all we need is the posterior P(x2), so we just keep the root of the Bayes net
      // We solve as before...
    
      // Solve the linear factor graph, converting it into a linear Bayes Network ( the posterior P(x2|z2) )
      GaussianSequentialSolver solver3(*linearFactorGraph);
      linearBayesNet = solver3.eliminate();
    
      // Extract the current estimate of x2 from the Bayes Network
      result = optimize(*linearBayesNet);
      Point2 x2_update = linearizationPoints.at<Point2>(x2).retract(result[ordering->at(x2)]);
      x2_update.print("X2 Update");
    
      // Update the linearization point to the new estimate
      linearizationPoints.update(x2, x2_update);
    
    
    
    
    
    
      // Wash, rinse, repeat for a third time step
      // Create a new, empty graph and add the prior from the previous step
      linearFactorGraph = GaussianFactorGraph::shared_ptr(new GaussianFactorGraph);
    
      // Convert the root conditional, P(x2) in this case, into a Prior for the next step
      const GaussianConditional::shared_ptr& cg3 = linearBayesNet->back();
      assert(cg3->nrFrontals() == 1);
      assert(cg3->nrParents() == 0);
      JacobianFactor tmpPrior3 = JacobianFactor(*cg3);
      linearFactorGraph->add(0, tmpPrior3.getA(tmpPrior3.begin()), tmpPrior3.getb() - tmpPrior3.getA(tmpPrior3.begin()) * result[ordering->at(x2)], tmpPrior3.get_model());
    
      // Create a key for the new state
      Symbol x3('x',3);
    
      // Create the desired ordering
      ordering = Ordering::shared_ptr(new Ordering);
      ordering->insert(x2, 0);
      ordering->insert(x3, 1);
    
      // Create a nonlinear factor describing the motion model
      difference = Point2(1,0);
      Q = noiseModel::Diagonal::Sigmas((Vec(2) << 0.1, 0.1));
      BetweenFactor<Point2> factor10(x2, x3, difference, Q);
    
      // Linearize the factor and add it to the linear factor graph
      linearizationPoints.insert(x3, x2_update);
      linearFactorGraph->push_back(factor10.linearize(linearizationPoints, *ordering));
    
      // Solve the linear factor graph, converting it into a linear Bayes Network ( P(x2,x3) = P(x2|x3)*P(x3) )
      GaussianSequentialSolver solver4(*linearFactorGraph);
      linearBayesNet = solver4.eliminate();
    
      // Extract the current estimate of x3 from the Bayes Network
      result = optimize(*linearBayesNet);
      Point2 x3_predict = linearizationPoints.at<Point2>(x3).retract(result[ordering->at(x3)]);
      x3_predict.print("X3 Predict");
    
      // Update the linearization point to the new estimate
      linearizationPoints.update(x3, x3_predict);
    
    
    
      // Now add the next measurement
      // Create a new, empty graph and add the prior from the previous step
      linearFactorGraph = GaussianFactorGraph::shared_ptr(new GaussianFactorGraph);
    
      // Convert the root conditional, P(x3) in this case, into a Prior for the next step
      const GaussianConditional::shared_ptr& cg4 = linearBayesNet->back();
      assert(cg4->nrFrontals() == 1);
      assert(cg4->nrParents() == 0);
      JacobianFactor tmpPrior4 = JacobianFactor(*cg4);
      linearFactorGraph->add(0, tmpPrior4.getA(tmpPrior4.begin()), tmpPrior4.getb() - tmpPrior4.getA(tmpPrior4.begin()) * result[ordering->at(x3)], tmpPrior4.get_model());
    
      // Create the desired ordering
      ordering = Ordering::shared_ptr(new Ordering);
      ordering->insert(x3, 0);
    
      // And update using z3 ...
      Point2 z3(3.0, 0.0);
      SharedDiagonal R3 = noiseModel::Diagonal::Sigmas((Vec(2) << 0.25, 0.25));
      PriorFactor<Point2> factor12(x3, z3, R3);
    
      // Linearize the factor and add it to the linear factor graph
      linearFactorGraph->push_back(factor12.linearize(linearizationPoints, *ordering));
    
      // We have now made the small factor graph f11-(x3)-f12
      // where factor f11 is the prior from previous time ( P(x3) )
      // and   factor f12 is from the measurement, z3 ( P(x3|z3) )
      // Eliminate this in order x3, to get Bayes net P(x3)
      // As this is a filter, all we need is the posterior P(x3), so we just keep the root of the Bayes net
      // We solve as before...
    
      // Solve the linear factor graph, converting it into a linear Bayes Network ( the posterior P(x3|z3) )
      GaussianSequentialSolver solver5(*linearFactorGraph);
      linearBayesNet = solver5.eliminate();
    
      // Extract the current estimate of x3 from the Bayes Network
      result = optimize(*linearBayesNet);
      Point2 x3_update = linearizationPoints.at<Point2>(x3).retract(result[ordering->at(x3)]);
      x3_update.print("X3 Update");
    
      // Update the linearization point to the new estimate
      linearizationPoints.update(x3, x3_update);
    
  • Original article: https://blog.csdn.net/weixin_43848456/article/details/127588861