Differential Wheeled Robot with LIDAR Sensor

The vrcollisions_lidar example shows how a LinePickSensor can be used to model LIDAR sensor behavior in Simulink® 3D Animation™.

In a simple virtual world, a wheeled robot with a LIDAR sensor mounted on top is defined. The LIDAR sensor is implemented using a LinePickSensor that detects collisions of several rays (modeled as an IndexedLineSet) with surrounding scene objects. The sensor pickedRange and pickedPoint fields are used in this model for visualization purposes only, but together with robot pose information they can also be used for Simultaneous Localization and Mapping (SLAM) and similar applications.
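
As an illustration of that second use, the following MATLAB sketch projects the sensor-frame collision points into the world frame using the robot pose. It is not part of the shipped example; the function name and the assumption that the heading is a rotation about the vertical (Y) axis are hypothetical.

    % Hypothetical helper: transform sensor-frame hits (rows of pickedPoint)
    % into the world frame, given robot position pos (1-by-3, meters) and
    % heading theta (radians) about the vertical Y axis.
    function worldPts = hitsToWorld(pickedPoint, pos, theta)
        R = [ cos(theta) 0 sin(theta)       % rotation matrix about Y
              0          1 0
             -sin(theta) 0 cos(theta)];
        worldPts = pickedPoint * R.' + pos;  % rotate each point, then translate
    end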

The sensing rays are visible in the scene as transparent green lines. There are 51 rays evenly spaced in the horizontal plane between -90 and 90 degrees, and the LIDAR range is 10 meters.
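
These numbers are enough to reconstruct the sensing geometry. The following MATLAB sketch (variable names are illustrative, not taken from the example) computes the ray endpoints and the coordIndex array that an IndexedLineSet with this fan shape would use:

    numRays  = 51;                          % sensing rays
    maxRange = 10;                          % LIDAR range in meters
    angles   = linspace(-90, 90, numRays);  % degrees, evenly spaced

    origin    = [0 0 0];                    % ray fan origin (sensor frame)
    endpoints = [maxRange*sind(angles)', zeros(numRays,1), maxRange*cosd(angles)'];

    % IndexedLineSet data: point 0 is the origin, points 1..numRays are the
    % endpoints; each line is "0 i -1" (-1 terminates a polyline in VRML).
    coords     = [origin; endpoints];
    coordIndex = reshape([zeros(1,numRays); 1:numRays; -ones(1,numRays)], 1, []);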

To visualize the LIDAR sensor output, a proxy LineSet is defined with lines identical to those used as the LinePickSensor sensing geometry. The visualization lines are blue. A combination of the pickedPoint and pickedRange LinePickSensor outputs is used to visualize the points of collision. The pickedPoint output contains the coordinates of the points where rays collided with surrounding objects; its size varies with the number of rays that collided. The pickedRange output has a fixed size equal to the number of sensing rays and returns, for each sensing line, the distance from the LIDAR sensor origin to the collision point, or -1 if the ray does not collide. The pickedRange output is therefore used to determine the indices of the lines for which collision points are returned in the pickedPoint output. In effect, each blue line is shortened so that only the segment between the ray fan origin and the point of collision is displayed.
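
In MATLAB terms, the shortening logic could look like the following sketch (a hypothetical helper, not code from the model): pickedRange identifies the rays that hit, and pickedPoint supplies the new endpoints for exactly those rays, in ray-index order.

    % Hypothetical helper: rebuild per-ray endpoints for the blue proxy lines.
    % pickedRange is numRays-by-1 (-1 where a ray missed); pickedPoint is
    % k-by-3, listing collision points only for the k rays that hit.
    function endpointsOut = lidarProxyEndpoints(pickedRange, pickedPoint, endpoints)
        endpointsOut = endpoints;           % default: full-length rays
        hitIdx = find(pickedRange > 0);     % indices of rays that collided
        for k = 1:numel(hitIdx)
            endpointsOut(hitIdx(k), :) = pickedPoint(k, :);  % clip at hit point
        end
    end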

The robot trajectory is modeled in a simple way using the Signal Editor and Ramp blocks. In the Signal Editor, a 1x1 meter square trajectory is defined for the first 40 seconds of the simulation. After returning to its starting position, the robot rotates in place indefinitely.
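
A minimal MATLAB sketch of an equivalent trajectory signal, assuming 10 seconds per side (the exact timing in the Signal Editor dataset is an assumption):

    t  = [0 10 20 30 40]';              % seconds; 10 s per side is an assumption
    xy = [0 0; 1 0; 1 1; 0 1; 0 0];     % corners of the 1x1 m square, back to start
    posX = timeseries(xy(:,1), t, 'Name', 'x');  % linear interpolation between corners
    posZ = timeseries(xy(:,2), t, 'Name', 'z');
    % After t = 40 s the position holds at the start while a Ramp block keeps
    % increasing the rotation angle, so the robot spins in place indefinitely.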

The model contains both VR Sink and VR Source blocks associated with the same virtual world. The VR Source block is used to read the sensor signals. The VR Sink block is used to set the robot position and rotation and the coordinates of the endpoints of the sensor visualization proxy lines.
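
For reference, the sensor outputs can also be read from MATLAB through the same virtual world. This is a minimal sketch assuming the world file is named vrcollisions_lidar.wrl and the sensor node has the DEF name Lidar; both names are assumptions, not taken from the example.

    w = vrworld('vrcollisions_lidar.wrl');  % file name is an assumption
    open(w);
    lidar = vrnode(w, 'Lidar');             % DEF name is an assumption
    sync(lidar, 'pickedRange', 'on');       % mirror the eventOut into MATLAB
    ranges = lidar.pickedRange;             % -1 marks rays with no collision
    sync(lidar, 'pickedRange', 'off');
    close(w);
    delete(w);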

In the virtual world, several viewpoints are defined, both static and attached to the robot, allowing you to observe the LIDAR visualization from different perspectives.