Extended Object Tracking with Lidar for Airport Ground Surveillance

An apron is a defined area at the airport intended to accommodate aircraft for purposes of loading or unloading passengers, mail or cargo, fueling, parking or maintenance [1]. Airport aprons are usually highly dynamic and heterogeneous environments where apron personnel and vehicles operate in close proximity to each other. Because of this, aprons present a higher risk of ground handling accidents involving aircraft as well as ground personnel. Lidar-based surveillance systems at aprons have been proposed as an effective method to improve the situation picture and to serve as a measure to mitigate the high risk at aprons [2].

This example shows you how to simulate Lidar data for an apron traffic scene and track ground traffic using a GGIW-PHD (Gamma Gaussian Inverse Wishart PHD) extended object tracker.

Setup Scenario

In this example, you simulate a scenario where an aircraft is entering the gate area. The aircraft is guided into its parking spot by three marshallers, one on each side of the aircraft and one in front of it. You simulate ground traffic near the parking spot of the aircraft. After the aircraft is parked, the ground traffic as well as the marshallers start moving towards the aircraft. In addition to the aircraft entering the gate area, you also simulate two aircraft already parked at the gate. The scenario used in this example was created by using the Tracking Scenario Designer and exported to a MATLAB® function to connect it with downstream functionalities. The exported function was modified in MATLAB to specify the Mesh property for each platform and to complete trajectories to the scenario end time. The Mesh property of the Platform allows you to define the geometry of a platform for lidar simulation. In this example, you specify the geometry for aircraft as tracking.scenario.airplaneMesh. The other objects in the scene are represented using cuboids. For more details on how to create the scenario, refer to the createScenario function at the end of this example.

scenario = createScenario;

To perceive the scene, you use a 360-degree field of view lidar sensor using the monostaticLidarSensor System object™. The sensor has 64 elevation channels and is mounted at the terminal at a height of 8 meters from the ground. The azimuth resolution of the sensor is defined as 0.32 degrees. Under this configuration, the sensor produces a total of 72,000 points per scan. This high-resolution set of data points is commonly termed a point cloud. The data from the sensor is simulated at 5 Hz by specifying the UpdateRate of the sensor.

% Create a lidar sensor
lidar = monostaticLidarSensor(...
    'SensorIndex',1,...
    'UpdateRate',5,...
    'HasINS',true,...
    'AzimuthLimits',[-180 180],...
    'ElevationLimits',[-10 27],...
    'AzimuthResolution',0.32,...
    'ElevationResolution',0.5781,...
    'DetectionCoordinates','scenario',...
    'MountingLocation',[50 -15 -8],...
    'MountingAngles',[-90 0 0],...
    'MaxRange',150,...
    'RangeAccuracy',0.002);

% Get access to terminal platform
terminal = scenario.Platforms{1};

% Mount the lidar on the platform
terminal.Sensors = lidar;
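As a quick sanity check on the point count stated above, you can derive the approximate number of points per scan from the sensor's limits and resolutions. This short calculation is only illustrative and is not part of the example workflow.

% Approximate points per scan = azimuth beams x elevation channels
numAzimuthBeams = round(diff(lidar.AzimuthLimits)/lidar.AzimuthResolution);          % 360/0.32 = 1125
numElevationChannels = round(diff(lidar.ElevationLimits)/lidar.ElevationResolution); % approximately 64
pointsPerScan = numAzimuthBeams*numElevationChannels                                 % approximately 72,000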

You can observe the scene and the simulated lidar data from a bird's-eye view as well as from the roof of the terminal in the animation below. The point cloud is denoted by colored points, where the color changes from blue to yellow with height. Notice that the lidar returns contain reflections from the environment, such as the ground.

You can also observe in the animation that the lidar sensor returns multiple measurements per object. Conventional multi-object trackers such as the trackerGNN (GNN) tracker and trackerJPDA (JPDA) tracker assume that each sensor reports one measurement per object. To use conventional trackers for tracking objects using lidar, the point cloud from potential objects is typically preprocessed and clustered into a new type of measurement. These new measurements typically define positional as well as dimensional aspects of the object, for example, a bounding box measurement. For a complete workflow on using conventional trackers with lidar data, refer to the Track Vehicles Using Lidar: From Point Cloud to Track List example. This workflow of segmenting the lidar data and clustering it into bounding boxes is prone to clustering imperfections when objects get too close, which is especially true in the case of apron environments. An alternative way to track objects using lidar data is to use extended object trackers. In contrast to conventional trackers, extended object trackers are designed to track objects which produce more than one measurement per sensor.
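The sketch below illustrates, on synthetic data, the kind of clustering step a conventional tracker would rely on: nearby points are grouped and each cluster is summarized by a single measurement. It assumes the pcsegdist function (Computer Vision Toolbox or Lidar Toolbox) is available, and the point data and the 1.5 m clustering distance are made up purely for illustration.

% Two well-separated synthetic clusters of points
xyzPoints = [randn(100,3); randn(100,3) + 10];
pc = pointCloud(xyzPoints);
% Group points that are closer than 1.5 meters to each other
[labels, numClusters] = pcsegdist(pc, 1.5);
% Summarize each cluster by its centroid. A full workflow would instead fit
% a bounding box and report position plus dimensions as one detection.
centroids = zeros(numClusters,3);
for k = 1:numClusters
    centroids(k,:) = mean(xyzPoints(labels == k,:),1);
end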

Setup Extended Object Tracker and Performance Metrics

Extended Object Tracker

In this example, you use a GGIW (Gamma Gaussian Inverse Wishart) implementation of a probability hypothesis density (PHD) tracker. The GGIW model uses three distributions to specify the target model: a Gaussian distribution to describe the kinematics of the target's motion center, such as its position and velocity; an inverse Wishart (IW) distribution to describe the ellipsoidal extent of the target; and a gamma distribution to describe the expected number of measurements from the target. For more details on the GGIW-PHD filter, refer to [3].
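The following sketch, which is not part of the example workflow, initializes a single GGIW component from one synthetic detection using initcvggiwphd and inspects the quantity governed by each of the three distributions. The detection value is arbitrary and chosen only for illustration.

% Initialize one GGIW component from a synthetic position measurement
det = objectDetection(0,[10;5;0]);
phd = initcvggiwphd({det});
phd.States                                     % Gaussian: kinematic state [x;vx;y;vy;z;vz]
phd.ScaleMatrices/(phd.DegreesOfFreedom - 4)   % Inverse Wishart: expected extent matrix
phd.Shapes/phd.Rates                           % Gamma: expected number of detections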

An extended object PHD tracker uses a partitioning algorithm to process the input measurement set. A partitioning algorithm is responsible for specifying multiple possible segmentation hypotheses for the sensor measurements. As the total number of segmentation hypotheses is typically very large, approximation techniques like distance partitioning (see partitionDetections), prediction partitioning, and expectation maximization (EM) are used [3]. The distance-partitioning algorithm works similarly to distance-based clustering techniques, with the difference that it produces multiple possible partitions. The prediction-partitioning and EM algorithms use predictions from the tracker about the objects to assist the partitioning of the measurements into multiple possible clusters. This technique of using predictions from the tracker is essential when objects are spatially close to each other, such as in an apron environment. In this example, you use a helper class helperPartitioningAlgorithm to partition the measurement set using predictions from the tracker. You obtain these predictions by utilizing the predictTracksToTime function of trackerPHD.

% A handle object to pass predicted tracks into partitioning algorithm
partitioner = helperPartitioningAlgorithm;

% A function which takes detections as inputs and returns partitions
partitioningFcn = @(detections)partitionDetections(partitioner,detections);
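For intuition about what a partitioning algorithm returns, the short sketch below calls the toolbox function partitionDetections, mentioned above, directly on three synthetic detections; this is separate from the helper class used in this example. Each column of the output assigns a cluster index to every detection, so each column is one partition hypothesis. The detection values and the threshold range of 0.5 to 5 meters are made up for illustration.

% Three synthetic detections: two close together and one far away
dets = {objectDetection(0,[1;0;0]); objectDetection(0,[1.5;0;0]); objectDetection(0,[10;0;0])};
% Generate partitions for clustering thresholds between 0.5 and 5 meters
partitions = partitionDetections(dets,0.5,5)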

To set up a PHD tracker, you first define the configuration of the sensor using a trackingSensorConfiguration object. You use the properties of the simulated lidar sensor to specify the properties of the configuration. You also define a function initFilter, which initializes a constant-velocity ggiwphd filter. This function wraps around initcvggiwphd and increases the certainty in the gamma distribution and inverse Wishart distribution of the target. It also configures the filter to work with a large number of detections by specifying the MaxNumDetections property of the ggiwphd filter.

% Sensor's field of view defined in same order as returned
% by sensor's transform function
sensorLimits = [lidar.AzimuthLimits;lidar.ElevationLimits;0 lidar.MaxRange];

% Sensor's resolution defined in same order as limits
sensorResolution = [lidar.AzimuthResolution;lidar.ElevationResolution;lidar.MaxRange];

% Parameters to transform state of the track by transform function.
trackToSensorTransform = struct('Frame','spherical',...
    'OriginPosition',lidar.MountingLocation(:),...
    'Orientation',rotmat(quaternion(lidar.MountingAngles,'eulerd','ZYX','frame'),'frame'),...
    'IsParentToChild',true,...
    'HasVelocity',false);

% A function to initiate a PHD filter by this sensor
filterInitFcn = @initFilter;

config = trackingSensorConfiguration(1,...
    'IsValidTime',true,...                               % update with sensor on each step call
    'FilterInitializationFcn',filterInitFcn,...          % Function to initialize a PHD filter
    'SensorTransformFcn',@cvmeas,...                     % Transformation function from state to az,el,r
    'SensorTransformParameters',trackToSensorTransform,...% Parameters for transform function
    'SensorLimits',sensorLimits,...
    'SensorResolution',sensorResolution,...
    'DetectionProbability',0.95...                       % Probability of detecting the target
    );
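To see what the SensorTransformFcn does with these parameters, the hedged sketch below passes a hypothetical constant-velocity state through cvmeas using the transform parameters defined above. Assuming cvmeas fills any unspecified parameters, such as the origin velocity, with defaults, the output is the state expressed as azimuth, elevation, and range in the sensor frame. The sample state is made up for illustration.

% Hypothetical target state [x;vx;y;vy;z;vz] in scenario coordinates
sampleState = [60;0;-40;0;0;0];
% Expected measurement [azimuth;elevation;range] as seen by the lidar
expectedMeas = cvmeas(sampleState,trackToSensorTransform)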

Next, you assemble this information and create an extended object PHD tracker using the trackerPHD System object™.

tracker = trackerPHD('SensorConfigurations',config,...
    'PartitioningFcn',partitioningFcn,...
    'AssignmentThreshold',30,...           % -log-likelihood beyond which a measurement cell initializes new components
    'ConfirmationThreshold',0.9,...        % Threshold to call a component as confirmed track
    'ExtractionThreshold',0.75,...         % Threshold to call a component as track
    'MergingThreshold',25,...              % Threshold to merge components with same Label
    'LabelingThresholds',[1.1 0.1 0.05]);  % Prevents track-splitting

Metrics

Next, you set up a GOSPA metric calculator using the trackGOSPAMetric class to evaluate the performance of the tracker. The GOSPA metric aims to analyze the performance of a tracker by providing a single cost value. A lower value of the cost represents better tracking performance. To use the GOSPA metric, a distance function is defined between a track and a truth. This distance function calculates the cost of assigning a track and truth to each other. It is also used to represent the localization or track-level accuracy of the estimate. The distance between a track and truth in this example is defined using a 'Custom' distance function trackTruthDistance, included in the Supporting Functions below. As the GGIW-PHD tracker estimates the geometric center, the custom distance function uses the position of the platform's geometric center to define the positional error.

gospaObj = trackGOSPAMetric('Distance','custom','DistanceFcn',@trackTruthDistance);

You use a helper class helperAirportTrackingDisplay to visualize the ground truth, simulated data, and estimated tracks.

display = helperAirportTrackingDisplay;

Run Scenario and Tracker

Next, you run the scenario, simulate returns from the lidar sensor, and process them using the tracker. You also calculate the GOSPA metric for the tracks using the available ground truth. This example assumes that preprocessing of the lidar data to remove returns from the environment, such as the ground and the terminal, has already been performed. Therefore, returns from the environment are removed using the ground truth information available from the simulation about the lidar returns.

% Initialize tracks
confTracks = struct.empty(0,1);

% Ground truth for metric
platforms = scenario.Platforms;
trackablePlatforms = platforms(cellfun(@(x)x.ClassID,platforms) > 0);

% Initialize metric values
gospaMetrics = zeros(0,4);

% Number of tracks recording for MATLAB vs MEX
numTrackML = zeros(0,1);

% Advance scenario
while advance(scenario)
    time = scenario.SimulationTime;

    % Call lidarDetect on terminal to simulate point cloud returns.
    [ptCloud, ~, groundTruthIdx] = lidarDetect(terminal,time);

    % Pack returns as objectDetection. Included in Supporting Functions
    detections = packAsObjectDetection(ptCloud, groundTruthIdx, time);

    % Provide predicted tracks for prediction partitioning
    if isLocked(tracker)
        predictedTracks = predictTracksToTime(tracker,'all',time);
        partitioner.PredictedTrackList = predictedTracks;
    end

    % Call tracker step using detections and time
    confTracks = tracker(detections, time);

    % Calculate GOSPA metric
    truthStruct = cellfun(@(x)x.pose,trackablePlatforms);
    [~,gospa,~,loc,missT,falseT] = gospaObj(confTracks, trackablePlatforms);
    gospaMetrics(end+1,:) = [gospa loc missT falseT]; %#ok

    % Update the display
    display(scenario, ptCloud, confTracks);
end

Results

Next, you evaluate the performance of the tracker using visualization as well as quantitative metrics. In the animation below, the tracks are represented by blue squares annotated by their identities (TrackID). The ellipsoid around each object describes the extent estimated by the GGIW-PHD filter.

Track Maintenance

Until the aircraft is parked, the tracker is able to track all objects except the marshaller on the right of the aircraft, because that marshaller remains undetected by the sensor at its initial position. After the aircraft is parked, the tracker is able to react to the motion of the objects to estimate their position as well as velocity. The tracker is able to maintain a track on all objects up to about 34 seconds. After that, the marshaller on the left gets occluded behind the aircraft jet engines and its track is dropped by the tracker due to multiple misses. As the ground traffic approaches the aircraft, the tracker is able to track them as separate objects and does not confuse their measurements with one another. This is possible because the predictions of the tracker about the objects assist in partitioning the measurement set properly.

You can also quantitatively analyze the performance of the tracker using the GOSPA metric and its components. Notice that the false track component of the metric remains zero, which indicates that no false tracks were confirmed. The non-zero missed target component represents the undetected marshaller. Notice that as soon as the tracker establishes a track on that object, the component drops to zero. The missed target component rises to a non-zero value again, which represents the track drop due to the object occlusion.

figure;
order = ["Total GOSPA","Localization","Missed Target","False Tracks"];
for i = 1:4
    ax = subplot(2,2,i);
    plot(gospaMetrics(:,i),'LineWidth',2);
    title(order(i));
    xlabel('Time');
    ylabel('Metric');
    grid on;
end

Figure: The GOSPA metric and its components plotted against time in four subplots titled Total GOSPA, Localization, Missed Target, and False Tracks.
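As a further, purely illustrative check, the total GOSPA value should approximately reconstruct from its components, assuming the default Order of 2 and Alpha of 2 used by trackGOSPAMetric, as the p-norm of the localization, missed-target, and false-track terms. The snippet below compares the recorded totals against this reconstruction; a small difference supports the interpretation of the components given above.

% Reconstruct the total GOSPA from its components (assumes Order p = 2)
p = 2;
reconstructed = (gospaMetrics(:,2).^p + gospaMetrics(:,3).^p + gospaMetrics(:,4).^p).^(1/p);
maxDifference = max(abs(reconstructed - gospaMetrics(:,1)))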

Track-level accuracy

Track-level accuracy refers to the accuracy in estimating the state of each object, such as its position, velocity, and dimensions. You can observe that the tracks lie on the ground truth of the objects. When an object starts moving, a velocity vector is plotted, representing the direction and magnitude of the velocity. Notice that the estimated velocity is in the same direction as the object motion. The GGIW-PHD filter assumes that the measurements from the target are distributed in the target's estimated ellipsoidal extent. This results in the filter estimating the size of the object using the observable or non-occluded regions of the object. Notice that the tracks on the parked ground vehicles have an offset in their position towards the observable sides.

The track-level inaccuracies can also be quantitatively estimated using the localization component of the GOSPA metric. The total GOSPA metric is a combined metric of all its components and hence captures the effect of localization accuracies, missed targets, and false tracks.

Summary

In this example, you learned how to simulate lidar data for an apron scenario and process it with an extended object GGIW-PHD tracker. You also learned how to evaluate the performance of the tracker using the GOSPA metric and its associated components.

Supporting Functions

packAsObjectDetection

function detections = packAsObjectDetection(ptCloud, groundTruthIdx, time)
% Get x,y,z locations from the cell array
loc = ptCloud{1}';

% Points which are nan reflect those rays which did not intersect with any
% object
nanPoints = any(isnan(loc),1);

% groundTruthIdx{1} represents the ground truth of the points. The first
% column represents PlatformID and second column represents ClassID.
% ClassID for environment is specified as 0.
envPoints = groundTruthIdx{1}(:,2) == 0;

% nanPoints or envPoints are removed
toRemove = nanPoints | envPoints';

% Keep valid returns
loc = loc(:,~toRemove);

% Pack each return as objectDetection.
n = sum(~toRemove);

% Allocate memory
detections = repmat({objectDetection(time,[0;0;0],'MeasurementNoise',1e-2*eye(3))},n,1);

% Fill measurements
for i = 1:n
    detections{i}.Measurement = loc(:,i);
end
end

createScenario

function scenario = createScenario
% Create Scenario
scenario = trackingScenario;
scenario.UpdateRate = 0;

planeMesh = tracking.scenario.airplaneMesh;

% Create platforms
Terminal = platform(scenario,'ClassID',0);
Terminal.Dimensions = struct('Length',100,'Width',29,'Height',20,'OriginOffset',[0 0 10]);

Aircraft = platform(scenario,'ClassID',1,'Mesh',planeMesh);
Aircraft.Dimensions = struct('Length',39,'Width',34,'Height',13.4,'OriginOffset',[39/2 0 13.4/2]);
Aircraft.Signatures{1} = rcsSignature('Pattern',[20 20;20 20],'Azimuth',[-180 180],...
    'Elevation',[-90;90],'Frequency',[0 1e+20]);
Aircraft.Trajectory = waypointTrajectory(...
    [73.2829 -80.5669 0;60.7973 -37.3337 0],...
    [0;11.25],...
    'GroundSpeed',[8;0],...
    'Course',[106.1085;106.1085],...
    'Orientation',quaternion([106.1085 0 0;106.1085 0 0],'eulerd','ZYX','frame'));
delayTrajectory(Aircraft,0.1,70);

Ground = platform(scenario,'ClassID',0);
Ground.Trajectory.Position = [0 0 0];
Ground.Dimensions = struct('Length',500,'Width',500,'Height',1,'OriginOffset',[0 0 -0.5]);

Plane = platform(scenario,'ClassID',1,'Mesh',planeMesh);
Plane.Dimensions = struct('Length',40,'Width',30,'Height',10,'OriginOffset',[0 0 5]);
Plane.Signatures{1} = rcsSignature('Pattern',[20 20;20 20],'Azimuth',[-180 180],...
    'Elevation',[-90;90],'Frequency',[0 1e+20]);
Plane.Trajectory.Position = [-11 -50.3 0];
Plane.Trajectory.Orientation = quaternion([90 0 0],'eulerd','zyx','frame');

Marshall = platform(scenario,'ClassID',6);
Marshall.Dimensions = struct('Length',0.24,'Width',0.45,'Height',1.7,'OriginOffset',[0 0 0.85]);
s = [57.3868 -25.5244 0;59.41 -32.53 0];
s2 = [57.3868 -25.5244 0;60.5429 -36.4531 0];
Marshall.Trajectory = waypointTrajectory(...
    s2,...
    [0;8],...
    'Course',[-75;-75],...
    'GroundSpeed',[0;0],...
    'ClimbRate',[0;0],...
    'Orientation',quaternion([0.793353340291235 0 0 -0.608761429008721;0.793353340291235 0 0 -0.608761429008721]));
delayTrajectory(Marshall, 12, 70);

Marshall1 = platform(scenario,'ClassID',6);
Marshall1.Dimensions = struct('Length',0.24,'Width',0.45,'Height',1.7,'OriginOffset',[0 0 0.85]);
Marshall1.Trajectory = waypointTrajectory(...
    [50 -100 0;56.5 -89.3 0;56.8 -73.6 0;60.9 -59.5 0],...
    [0;6.28880678506828;14.2336975651715;38.5523666710762],...
    'Course',[52.5692412453125;71.0447132029417;101.247131543705;107.567935050594],...
    'GroundSpeed',[2;2;2;0],...
    'ClimbRate',[0;0;0;0],...
    'Orientation',quaternion([0.896605327597113 0 0 0.442830539286162;0.813888868238491 0 0 0.581020576363237;0.634412633965184 0 0 0.772994572985708;0.590831447702589 0 0 0.806795017588522]));
delayTrajectory(Marshall1, 12, 70);

Marshall2 = platform(scenario,'ClassID',6);
Marshall2.Dimensions = struct('Length',0.24,'Width',0.45,'Height',1.7,'OriginOffset',[0 0 0.85]);
Marshall2.Trajectory = waypointTrajectory(...
    [135.1 -87.7 0;118.3 -87.7 0;112.2 -73.3 0;80.8 -47.6 0],...
    [0;10.3490919743252;18.1872070129823;42],...
    'Course',[155.957629971433;124.12848387907;114.734795167842;153.606988602699],...
    'GroundSpeed',[2;2;2;0],...
    'ClimbRate',[0;0;0;0],...
    'Orientation',quaternion([0.896605327597113 0 0 0.442830539286162;0.813888868238491 0 0 0.581020576363237;0.634412633965184 0 0 0.772994572985708;0.590831447702589 0 0 0.806795017588522]));
delayTrajectory(Marshall2, 12, 70);

Loader = platform(scenario,'ClassID',2);
Loader.Dimensions = struct('Length',10,'Width',4,'Height',4,'OriginOffset',[0 0 2]);
Loader.Trajectory = waypointTrajectory(...
    [25.2 -42.2 0;39.4500 -42.3000 0;53.7 -42.4 0],...
    [0;5;10],...
    'Course',[-0.4021;-0.4021;-0.4021],...
    'Orientation',[quaternion([-0.4021 0 0],'eulerd','ZYX','frame');quaternion([-0.4021 0 0],'eulerd','ZYX','frame');quaternion([-0.4021 0 0],'eulerd','ZYX','frame')],...
    'GroundSpeed',[0;5.8;0],...
    'ClimbRate',[0;0;0],...
    'AutoPitch', false,...
    'AutoBank', false);
delayTrajectory(Loader, 12, 70);

Loader2 = platform(scenario,'ClassID',2);
Loader2.Dimensions = struct('Length',10,'Width',4,'Height',4,'OriginOffset',[0 0 2]);
Loader2.Trajectory = waypointTrajectory(...
    [27.142823040363762 -75 0;42.5614 -71.9000 0;57.98 -68.8 0],...
    [0;5;10],...
    'Course',[11.368107148451321;11.368107148451321;11.368107148451321],...
    'Orientation',quaternion([11.368107148451321 0 0;11.368107148451321 0 0;11.368107148451321 0 0],'eulerd','ZYX','frame'),...
    'GroundSpeed',[0;5.824096001626275;0],...
    'ClimbRate',[0;0;0],...
    'AutoPitch', false,...
    'AutoBank', false);
delayTrajectory(Loader2, 50, 70);

Power = platform(scenario,'ClassID',2);
Power.Dimensions = struct('Length',5,'Width',2,'Height',1,'OriginOffset',[0 0 0.5]);
Power.Trajectory = waypointTrajectory(...
    [27.2312963703295 -20.7687036296705 0;40.1656 -27.6344 0;53.1 -34.5 0],...
    [0;5;10],...
    'Course',[-27.9596882700088;-27.9596882700088;-27.9596882700088],...
    'Orientation',[quaternion([-27.9596882700088 0 0],'eulerd','ZYX','frame');quaternion([-27.9596882700088 0 0],'eulerd','ZYX','frame');quaternion([-27.9596882700088 0 0],'eulerd','ZYX','frame')],...
    'GroundSpeed',[0;5.8574;0],...
    'ClimbRate',[0;0;0],...
    'AutoPitch', false,...
    'AutoBank', false);
delayTrajectory(Power, 20, 70);

Refueller = platform(scenario,'ClassID',2);
Refueller.Dimensions = struct('Length',7,'Width',3,'Height',2,'OriginOffset',[0 0 1]);
Refueller.Trajectory = waypointTrajectory(...
    [92.3 -31.6 0;83.3 -36.7 0;74.3 -41.8 0],...
    [0;5;10],...
    'Course',[-150.4612;-150.4612;-150.4612],...
    'Orientation',[quaternion([-150.4612 0 0],'eulerd','ZYX','frame');quaternion([-150.4612 0 0],'eulerd','ZYX','frame');quaternion([-150.4612 0 0],'eulerd','ZYX','frame')],...
    'GroundSpeed',[0;4.137825515896000;0],...
    'ClimbRate',[0;0;0],...
    'AutoPitch', false,...
    'AutoBank', false);
delayTrajectory(Refueller, 20, 70);

Car = platform(scenario,'ClassID',2);
Car.Dimensions = struct('Length',4.7,'Width',1.8,'Height',1.4,'OriginOffset',[0 0 0.7]);
Car.Trajectory = waypointTrajectory(...
    [111.1 -44.8 0;93.8100 -48.1725 0;76.5200 -51.5450 0],...
    [0;7.046336423986581;14.092672847973162],...
    'Course',[-169.250904745059;-169.250904745059;-169.250904745059],...
    'Orientation',[quaternion([-169.250904745059 0 0],'eulerd','ZYX','frame');quaternion([-169.250904745059 0 0],'eulerd','ZYX','frame');quaternion([-169.250904745059 0 0],'eulerd','ZYX','frame')],...
    'GroundSpeed',[0;5;0],...
    'ClimbRate',[0;0;0],...
    'AutoPitch', false,...
    'AutoBank', false);
delayTrajectory(Car, 20, 70);

Plane1 = platform(scenario,'ClassID',1,'Mesh',planeMesh);
Plane1.Dimensions = struct('Length',40,'Width',30,'Height',10,'OriginOffset',[0 0 5]);
Plane1.Signatures{1} = rcsSignature('Pattern',[20 20;20 20],'Azimuth',[-180 180],...
    'Elevation',[-90;90],'Frequency',[0 1e+20]);
Plane1.Trajectory.Position = [110 0 0];
Plane1.Trajectory.Orientation = quaternion([180 0 0],'eulerd','zyx','frame');

    function delayTrajectory(plat, tOffset, tEnd)
        traj = plat.Trajectory;
        wp = traj.Waypoints;
        toa = traj.TimeOfArrival;
        g = traj.GroundSpeed;
        c = traj.Course;
        cr = traj.ClimbRate;
        o = traj.Orientation;
        tEnd = max(toa(end),tEnd);
        wp = [repmat(wp(1,:),1,1);wp;repmat(wp(end,:),1,1)];
        toa = [0;toa + tOffset;tEnd];
        g = [0;g;0];
        c = [c(1);c;c(end)];
        cr = [0;cr;0];
        o = [repmat(o(1),1,1);o;repmat(o(end),1,1)];
        newTraj = waypointTrajectory(wp,toa,...
            'Course',c,...
            'GroundSpeed',g,...
            'ClimbRate',cr,...
            'Orientation',o);
        plat.Trajectory = newTraj;
    end
end

initFilter

function filter = initFilter(varargin)
% This function interacts with the tracker in two signatures
% filter = initFilter() is called during prediction stages to add
% components to PHD.
%
% filter = initFilter(detections) is called during correction stages to add
% components to the PHD from detection sets which have a low likelihood
% against existing tracks. All detections in the input are assumed to
% belong to the same target.

% Adds zero components during prediction stages
filter = initcvggiwphd(varargin{:});

% If called with a detection
if nargin == 1
    % Get expected size
    expSize = filter.ScaleMatrices/(filter.DegreesOfFreedom - 4);

    % Higher the dof, higher the certainty in dimensions
    dof = 50;
    filter.DegreesOfFreedom = dof;
    filter.ScaleMatrices = (dof-4)*expSize;

    % Get expected number of detections
    expNumDets = filter.Shapes/filter.Rates;

    % shape/rate^2 = uncertainty
    uncertainty = (expNumDets/4)^2;
    filter.Shapes = expNumDets^2/uncertainty;
    filter.Rates = expNumDets/uncertainty;
end

% GammaForgettingFactors acts like process noise
% for measurement rate
filter.GammaForgettingFactors(:) = 1.03;

% Temporal Decay acts like process noise for
% dimensions. Lower the decay, higher the variance increase
filter.TemporalDecay = 500;

% Specify MaxNumDetections
filter.MaxNumDetections = 10000;
end

trackTruthDistance

function dist = trackTruthDistance(track, truth)
% The tracks estimate the center of the object. Compute the origin position
rOriginToCenter = -truth{1}.Dimensions.OriginOffset(:);
rot = quaternion([truth{1}.Orientation],'eulerd','ZYX','frame');
actPos = truth{1}.Position(:) + rotatepoint(rot,rOriginToCenter')';

% Estimated track position
estPos = track.State(1:2:end);

% Error distance is between positions
dist = norm(actPos(:) - estPos);
end

References

[1] Weber, Ludwig. International Civil Aviation Organization. Kluwer Law International BV, 2017.

[2] Mund, Johannes, Lothar Meyer, and Hartmut Fricke. "LiDAR Performance Requirements and Optimized Sensor Positioning for Point Cloud-based Risk Mitigation at Airport Aprons." Proceedings of the 6th International Conference on Research in Air Transportation. 2014.

[3] Granström, Karl, et al. "On extended target tracking using PHD filters." (2012).