
Track a Car-Like Robot Using Particle Filter

A particle filter is a sampling-based recursive Bayesian estimation algorithm, implemented in the stateEstimatorPF object. In this example, a remote-controlled car-like robot is tracked in an outdoor environment. The robot pose measurement is provided by an on-board GPS, which is noisy. There are known motion commands sent to the robot, but the robot will not execute the commanded motion exactly, due to mechanical slack or model inaccuracy. This example shows how to use stateEstimatorPF to reduce the effects of noise in the measurement data and obtain a more accurate estimate of the robot pose. The kinematic model of a car-like robot is described by the following nonlinear system. The particle filter is ideally suited for estimating the state of such systems, as it can deal with the inherent nonlinearities.

$$\dot{x} = v\cos(\theta), \qquad \dot{y} = v\sin(\theta), \qquad \dot{\theta} = \frac{v}{L}\tan(\phi), \qquad \dot{\phi} = \omega$$
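The car itself is simulated by the ExampleHelperCarBot helper class, whose source is not listed here. Purely as an illustration of the model above, a minimal Euler-integration sketch might look like the following; the function name, the wheelbase value L, and the step size are assumptions for illustration, not part of the shipped helper:

function pose = exampleCarKinematicsStep(pose, v, omega, dt)
% One Euler integration step of the car-like kinematics above.
% pose = [x; y; theta; phi]; v = linear velocity; omega = steering rate.
L = 0.5;                                    % assumed wheelbase (m), illustration only
pose(1) = pose(1) + v*cos(pose(3))*dt;      % x
pose(2) = pose(2) + v*sin(pose(3))*dt;      % y
pose(3) = pose(3) + (v/L)*tan(pose(4))*dt;  % theta
pose(4) = pose(4) + omega*dt;               % phi (front wheel orientation)
end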

Scenario: The car-like robot drives and changes its velocity and steering angle continuously. The pose of the robot is measured by some noisy external system, e.g. a GPS or a Vicon system. Along the path it will drive through a roofed area where no measurement can be made.

Input:

  • The noisy measurement of the robot's partial pose (x, y, θ). Note that this is not a full state measurement. No measurement is available for the front wheel orientation (ϕ) or for any of the velocities ($\dot{x}$, $\dot{y}$, $\dot{\theta}$, $\dot{\phi}$).

  • The linear and angular velocity commands sent to the robot ($v_c$, $\omega_c$). Note that there will be some difference between the commanded motion and the actual motion of the robot.

Goal: Estimate the partial pose (x, y, θ) of the car-like robot. Note again that the wheel orientation (ϕ) is not included in the estimation. From the observer's perspective, the full state of the car is only [x, y, θ, $\dot{x}$, $\dot{y}$, $\dot{\theta}$].

Approach: Use stateEstimatorPF to process the two noisy inputs (neither of the inputs is reliable by itself) and make the best estimate of the current (partial) pose.

  • At the predict stage, we update the states of the particles with a simplified, unicycle-like robot model, as shown below. Note that the system model used for state estimation is not an exact representation of the actual system. This is acceptable, as long as the model difference is well captured in the system noise (as represented by the particle swarm). For more details, see predict.

$$\dot{x} = v\cos(\theta), \qquad \dot{y} = v\sin(\theta), \qquad \dot{\theta} = \omega$$

  • At the correct stage, the importance weight (likelihood) of a particle is determined by its error norm from the current measurement, $\sqrt{(\Delta x)^2 + (\Delta y)^2 + (\Delta \theta)^2}$, as we only have measurements of these three components. For more details, see correct.

Initialize a Car-like Robot

rng('default');   % for repeatable result
dt = 0.05;        % time step
initialPose = [0 0 0 0]';
carbot = ExampleHelperCarBot(initialPose, dt);

Figure CarBot: actual and estimated pose at t = 0, plotted over x (m) and y (m).

Set up the Particle Filter

This section configures the particle filter using 5000 particles. Initially, all particles are randomly picked from a normal distribution with mean at the initial state and unit covariance. Each particle contains 6 state variables (x, y, θ, $\dot{x}$, $\dot{y}$, $\dot{\theta}$). Note that the third variable is marked as circular, since it is the car orientation. It is also very important to specify the two callback functions StateTransitionFcn and MeasurementLikelihoodFcn. These two functions directly determine the performance of the particle filter. The details of these two functions can be found in the last two sections of this example.

pf = stateEstimatorPF;
initialize(pf, 5000, [initialPose(1:3)', 0, 0, 0], eye(6), 'CircularVariables', [0 0 1 0 0 0]);
pf.StateEstimationMethod = 'mean';
pf.ResamplingMethod = 'systematic';

% StateTransitionFcn defines how particles evolve without measurement
pf.StateTransitionFcn = @exampleHelperCarBotStateTransition;

% MeasurementLikelihoodFcn defines how measurements affect our estimate
pf.MeasurementLikelihoodFcn = @exampleHelperCarBotMeasurementLikelihood;

% Last best estimate for x, y and theta
lastBestGuess = [0 0 0];
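As an optional sanity check that is not part of the original example, you could inspect the initialized particle set and query an initial estimate through the Particles and Weights properties and the getStateEstimate method of stateEstimatorPF:

% Optional sanity check (not part of the original example): inspect the
% initialized particle set and query an initial state estimate.
size(pf.Particles)                        % 5000-by-6 matrix of particles
sum(pf.Weights)                           % weights are normalized, so this is 1
[stateEst0, covEst0] = getStateEstimate(pf);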

Main Loop

Note that in this example, the commanded linear and angular velocities to the robot are arbitrarily picked time-dependent functions. Also note that the fixed-rate timing of the loop is realized through rateControl.

Run loop at 20 Hz for 20 seconds using fixed-rate support.

r = rateControl(1/dt);

Reset the fixed-rate object to restart the timer. Reset the timer right before running the time-dependent code.

reset(r);
simulationTime = 0;

while simulationTime < 20 % if time is not up

    % Generate motion command that is to be sent to the robot
    % NOTE there will be some discrepancy between the commanded motion and the
    % motion actually executed by the robot.
    uCmd(1) = 0.7*abs(sin(simulationTime)) + 0.1;  % linear velocity
    uCmd(2) = 0.08*cos(simulationTime);            % angular velocity

    drive(carbot, uCmd);

    % Predict the carbot pose based on the motion model
    [statePred, covPred] = predict(pf, dt, uCmd);

    % Get GPS reading
    measurement = exampleHelperCarBotGetGPSReading(carbot);

    % If measurement is available, then call correct, otherwise just use
    % predicted result
    if ~isempty(measurement)
        [stateCorrected, covCorrected] = correct(pf, measurement');
    else
        stateCorrected = statePred;
        covCorrected = covPred;
    end

    lastBestGuess = stateCorrected(1:3);

    % Update plot
    if ~isempty(get(groot,'CurrentFigure')) % if figure is not prematurely killed
        updatePlot(carbot, pf, lastBestGuess, simulationTime);
    else
        break
    end

    waitfor(r);

    % Update simulation time
    simulationTime = simulationTime + dt;
end
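The measurement source exampleHelperCarBotGetGPSReading is a helper whose source is not listed in this example. A minimal sketch of what such a measurement source might look like is shown below; the noise level and the extent of the roofed region are assumptions made for illustration, not the values used by the shipped helper:

function measurement = exampleGetNoisyPoseReading(actualPose)
% Return a noisy [x; y; theta] reading, or empty when the robot is inside
% a roofed region where no measurement can be made.
% NOTE: the noise level and the roofed region below are assumptions made
% for illustration only.
roofedXRange = [2 4];                                % assumed roofed region (m)
if actualPose(1) > roofedXRange(1) && actualPose(1) < roofedXRange(2)
    measurement = [];                                % no reading under the roof
else
    measurement = actualPose(1:3) + 0.1*randn(3,1);  % assumed GPS noise
end
end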

Figure CarBot: actual and estimated pose at t = 19.95, plotted over x (m) and y (m).

Details of the Result Figures

The three figures show the tracking performance of the particle filter.

  • In the first figure, the particle filter is tracking the car well as it drives away from the initial pose.

  • In the second figure, the robot drives into the roofed area, where no measurement can be made, and the particles evolve based only on the prediction model (marked in orange). You can see the particles gradually form a horseshoe-like front, and the estimated pose gradually deviates from the actual one.

  • In the third figure, the robot has driven out of the roofed area. With new measurements, the estimated pose gradually converges back to the actual pose.

State Transition Function

The sampling-based state transition function evolves the particles based on a prescribed motion model, so that the particles form a representation of the proposal distribution. Below is an example of a state transition function based on the velocity motion model of a unicycle-like robot. For more details about this motion model, see Chapter 5 in [1]. Decrease sd1, sd2 and sd3 to see how the tracking performance deteriorates. Here sd1 represents the uncertainty in the linear velocity, and sd2 represents the uncertainty in the angular velocity. sd3 is an additional perturbation on the orientation.
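For reference, the update implemented by the function below, written out from its code, is

$$\hat{v} = v + \varepsilon_1, \qquad \hat{\omega} = \omega + \varepsilon_2, \qquad \hat{\gamma} = \varepsilon_3$$
$$x' = x - \frac{\hat{v}}{\hat{\omega}}\sin(\theta) + \frac{\hat{v}}{\hat{\omega}}\sin(\theta + \hat{\omega}\,\Delta t)$$
$$y' = y + \frac{\hat{v}}{\hat{\omega}}\cos(\theta) - \frac{\hat{v}}{\hat{\omega}}\cos(\theta + \hat{\omega}\,\Delta t)$$
$$\theta' = \theta + \hat{\omega}\,\Delta t + \hat{\gamma}\,\Delta t$$

where $\varepsilon_1$, $\varepsilon_2$, $\varepsilon_3$ are zero-mean Gaussian perturbations whose magnitudes are controlled by sd1, sd2 and sd3. The last three state variables of each particle are set to the finite-difference velocities implied by this motion.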

function predictParticles = exampleHelperCarBotStateTransition(pf, prevParticles, dT, u)
thetas = prevParticles(:,3);
w = u(2); v = u(1);
l = length(prevParticles);
% Generate velocity samples
sd1 = 0.3;
sd2 = 1.5;
sd3 = 0.02;
vh = v + (sd1)^2*randn(l,1);
wh = w + (sd2)^2*randn(l,1);
gamma = (sd3)^2*randn(l,1);

% Add a small number to prevent div/0 error
wh(abs(wh)<1e-19) = 1e-19;

% Convert velocity samples to pose samples
predictParticles(:,1) = prevParticles(:,1) - (vh./wh).*sin(thetas) + (vh./wh).*sin(thetas + wh*dT);
predictParticles(:,2) = prevParticles(:,2) + (vh./wh).*cos(thetas) - (vh./wh).*cos(thetas + wh*dT);
predictParticles(:,3) = prevParticles(:,3) + wh*dT + gamma*dT;
predictParticles(:,4) = (- (vh./wh).*sin(thetas) + (vh./wh).*sin(thetas + wh*dT))/dT;
predictParticles(:,5) = (  (vh./wh).*cos(thetas) - (vh./wh).*cos(thetas + wh*dT))/dT;
predictParticles(:,6) = wh + gamma;
end

Measurement Likelihood Function

The measurement likelihood function computes the likelihood for each predicted particle based on the error norm between the particle and the measurement. The importance weight for each particle is assigned based on the computed likelihood. In this particular example, predictParticles is an N-by-6 matrix (N is the number of particles), and measurement is a 1-by-3 vector.
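Written out, the weight that the function below assigns to particle $i$ is, with the measurement noise covariance fixed to the 3-by-3 identity,

$$\|e_i\| = \sqrt{(\Delta x_i)^2 + (\Delta y_i)^2 + (\Delta \theta_i)^2}, \qquad w_i = \frac{1}{\sqrt{(2\pi)^3 \det(R)}}\, \exp\!\left(-\tfrac{1}{2}\,\|e_i\|\right), \qquad R = I_3$$

Note that the exponent uses the error norm itself rather than its square, so the weights follow the exponential form shown above rather than an exact Gaussian probability density.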

function likelihood = exampleHelperCarBotMeasurementLikelihood(pf, predictParticles, measurement)
% The measurement contains all state variables
predictMeasurement = predictParticles;

% Calculate observed error between predicted and actual measurement
% NOTE in this example, we don't have full state observation, but only
% the measurement of current pose, therefore the measurementErrorNorm
% is only based on the pose error.
measurementError = bsxfun(@minus, predictMeasurement(:,1:3), measurement);
measurementErrorNorm = sqrt(sum(measurementError.^2, 2));

% Normal-distributed measurement noise
% Assuming measurements on all three pose components have the same error distribution
measurementNoise = eye(3);

% Convert error norms into likelihood measure
% Evaluate the PDF of the multivariate normal distribution
likelihood = 1/sqrt((2*pi).^3 * det(measurementNoise)) * exp(-0.5 * measurementErrorNorm);
end

Reference

[1] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. MIT Press, 2006.