
Lidar Processing Overview

Introduction

Lidar is an acronym for light detection and ranging. It is an active sensing system that can be used for perception, navigation, and mapping in advanced driver assistance systems (ADAS), robots, and unmanned aerial vehicles (UAVs).

Lidar is an active remote sensing system. In an active system, the sensor generates energy by itself. Lidar sensors emit laser pulses that reflect off objects, allowing them to perceive the structure of their surroundings. The sensors record the reflected light energy to determine the distances to objects. The distance computation is based on the time of flight (TOF) principle. Lidar sensors are comparable to radar sensors, which emit radio waves instead of light.
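As a minimal numeric sketch of the TOF principle, the following MATLAB lines convert an assumed round-trip pulse time into a range. The 1 microsecond value is an illustrative example, not sensor output.

    % A pulse travels to the object and back, so the one-way distance
    % is half the round-trip travel time multiplied by the speed of light.
    c = 299792458;      % speed of light in meters per second
    t = 1.0e-6;         % assumed round-trip time of 1 microsecond
    range = c * t / 2   % approximately 149.9 meters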

Most modern autonomous or semi-autonomous vehicles are equipped with sensor suites that contain multiple sensors, such as cameras, an IMU, and radar. Lidar sensors can resolve the drawbacks of some of these other sensors. Radar sensors can provide constant distance and velocity measurements, but the results lack resolution, and radar struggles with reflected energy and precision at longer ranges. Camera sensors can be significantly affected by environmental and lighting conditions. Lidar sensors address these issues by providing depth perception over long ranges, even in challenging weather and lighting conditions.

There are a wide variety of lidar sensors available in the industry, from companies such as Velodyne, Ouster, Quanergy, and Ibeo. These sensors generate lidar data in various formats. Lidar Toolbox™ currently supports reading data in the PLY, PCAP, PCD, LAS, LAZ, and Ibeo sensor formats. For more information, see I/O. For more information about streaming data from Velodyne® sensors, see Lidar Toolbox Supported Hardware.
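As a sketch of how these formats can be read, the snippet below uses pcread for PLY and PCD files, velodyneFileReader for PCAP recordings, and lasFileReader for LAS and LAZ surveys. The file names are placeholders for your own data, and the device model passed to velodyneFileReader must match the sensor that produced the recording.

    % Read a single point cloud from a PCD file (pcread also reads PLY).
    ptCloud = pcread('sample.pcd');

    % Stream point cloud frames from a Velodyne PCAP recording.
    veloReader = velodyneFileReader('drive.pcap', 'HDL32E');
    firstFrame = readFrame(veloReader, 1);   % read the first frame

    % Read an aerial survey stored in the LAS format.
    lasReader = lasFileReader('survey.las');
    lasCloud = readPointCloud(lasReader);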

Point Cloud

A point cloud is the representation of output data from a lidar sensor, similar to how an image is the representation of output data from a camera. It is a large collection of points that describe a 3-D map of the environment around the sensor. You can use a pointCloud object to store point cloud data. Lidar Toolbox provides basic processing for point clouds such as downsampling, median filtering, aligning, transforming, and extracting features from point clouds. For more information, see Preprocessing.
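As an example of this preprocessing, the sketch below builds a pointCloud object from synthetic coordinates, downsamples it, and applies a rigid transformation. The data and parameter values are illustrative assumptions, and the rigidtform3d object assumes MATLAB R2022b or later (earlier releases use rigid3d).

    % Create a pointCloud object from synthetic M-by-3 xyz coordinates.
    xyz = rand(1000, 3) * 10;
    ptCloud = pointCloud(xyz);

    % Downsample onto a 0.5 m grid, averaging the points in each grid box.
    ptCloudDown = pcdownsample(ptCloud, 'gridAverage', 0.5);

    % Rotate 45 degrees about the z-axis and translate by [1 2 0] meters.
    theta = 45;
    R = [cosd(theta) -sind(theta) 0; sind(theta) cosd(theta) 0; 0 0 1];
    tform = rigidtform3d(R, [1 2 0]);
    ptCloudOut = pctransform(ptCloudDown, tform);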

There are two types of point clouds: organized and unorganized. These terms describe whether the point cloud data is stored in an arbitrary fashion or in a structured manner. An organized point cloud resembles a 2-D matrix, where the data is divided into rows and columns. The data is divided according to the spatial relationship between the points. As a result, the memory layout of an organized point cloud relates to the spatial layout represented by the xyz-coordinates of its points. In contrast, unorganized point clouds are stored as a single stream of 3-D coordinates, each representing a single point. You can convert unorganized point clouds to organized point clouds by using the Unorganized to Organized Conversion of Point Clouds Using Spherical Projection workflow.

You can also differentiate these point clouds based on the shape of their data. Organized point clouds are specified as M-by-N-by-3 arrays. The three channels represent the x, y, and z coordinates of the points. Unorganized point clouds are specified as M-by-3 matrices, where M is the total number of points in the point cloud.
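The difference in shape is visible in the Location property of a pointCloud object, as in this sketch; the 32-by-1024 scan size is an assumed example of a 32-channel sensor with 1024 azimuth positions.

    % Unorganized: an M-by-3 list of points.
    unorgCloud = pointCloud(rand(100, 3));
    size(unorgCloud.Location)    % returns [100 3]

    % Organized: an M-by-N-by-3 array that mirrors the scan pattern,
    % here 32 laser channels by 1024 azimuth positions.
    orgCloud = pointCloud(rand(32, 1024, 3));
    size(orgCloud.Location)      % returns [32 1024 3]

In recent Lidar Toolbox releases, the spherical-projection conversion described above is exposed through the pcorganize function together with a lidarParameters sensor description.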

These are some of the major lidar processing applications:

  • Labeling point cloud data — Labeling objects in point clouds helps with organizing and analyzing the data. Labeled point clouds can be used to train object segmentation and detection models. To learn more about labeling, see Get Started with the Lidar Labeler.

  • Semantic segmentation — Semantic segmentation is the process of labeling specific regions of a point cloud as belonging to an object. The goal of the process is to associate each point in a point cloud with its corresponding class or label, such as car, truck, or vegetation in a driving scenario. It does not differentiate between multiple instances of objects of the same class. Semantic segmentation models can be used in autonomous driving applications to parse the environment of the vehicle. To learn more about the semantic segmentation workflow, see Lidar Point Cloud Semantic Segmentation Using SqueezeSegV2 Deep Learning Network.

  • Object detection and tracking — Object detection and tracking usually follows point cloud segmentation. Objects in a point cloud can be detected and represented using cuboid bounding boxes, as shown in the sketch after this list. Tracking is the process of identifying the detected objects in one frame of a point cloud sequence throughout the sequence of point clouds. For detailed information on the complete workflow of segmentation, detection, and tracking, see Detect, Classify, and Track Vehicles Using Lidar.

  • Lidar camera calibration — Due to the positional differences of the sensors in a sensor suite, the recorded data from each sensor is in a different coordinate system. Rotational and translational transformations are required to calibrate the sensors and fuse their data. For more information, see What Is Lidar Camera Calibration?.
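As a sketch of the detection step referenced in the list above, the following fits a cuboid bounding box to a cluster of points with pcfitcuboid; the synthetic cluster stands in for a real segmented vehicle.

    % Synthetic cluster of points roughly filling a 4 x 1.8 x 1.5 m box.
    pts = [rand(200,1)*4 - 2, rand(200,1)*1.8 - 0.9, rand(200,1)*1.5];
    ptCloud = pointCloud(pts);

    % Fit a cuboid model to all points in the cloud.
    model = pcfitcuboid(ptCloud);
    disp(model.Parameters)   % [xctr yctr zctr xlen ylen zlen xrot yrot zrot]

    % Overlay the fitted bounding box on the points.
    pcshow(ptCloud.Location)
    hold on
    plot(model)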

Related Topics