Visual Simultaneous Localization and Mapping (vSLAM)
Visual simultaneous localization and mapping (vSLAM) refers to the process of calculating the position and orientation of a camera with respect to its surroundings while simultaneously mapping the environment. The process uses only visual inputs from the camera. Applications of visual SLAM include augmented reality, robotics, and autonomous driving. For more details, see Implement Visual SLAM in MATLAB.
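The monocular workflow described above can be sketched with the `monovslam` object from Computer Vision Toolbox (available in R2023b and later). The camera intrinsics values and the image folder name below are placeholders for illustration, not values from this page:

```matlab
% Minimal monocular vSLAM sketch using the monovslam object.
% Replace the intrinsics with your own camera calibration and point
% the datastore at your own image sequence.
intrinsics = cameraIntrinsics([535.4 539.2], [320.1 247.6], [480 640]);
vslam = monovslam(intrinsics);

imds = imageDatastore("imageFolder");   % hypothetical folder of frames
while hasdata(imds)
    I = read(imds);
    addFrame(vslam, I);                 % track the frame and extend the map
    if hasNewKeyFrame(vslam)
        plot(vslam);                    % visualize map points and trajectory
    end
end

% Query the estimated map and camera trajectory after processing.
xyzPoints = mapPoints(vslam);           % 3-D world points in the map
camPoses  = poses(vslam);               % estimated key frame camera poses
```

For stereo or RGB-D input, the analogous `stereovslam` and `rgbdvslam` objects follow the same add-frame loop.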
Functions
Topics
- Stereo Visual Simultaneous Localization and Mapping
Process image data from a stereo camera to build a map of an outdoor environment and estimate the trajectory of the camera.
- Visual Localization in a Parking Lot
Develop a visual localization system using synthetic image data from the Unreal Engine® simulation environment.
- Stereo Visual SLAM for UAV Navigation in 3-D Simulation
Develop a visual SLAM algorithm for a UAV equipped with a stereo camera.
- Develop Visual SLAM Algorithm Using Unreal Engine Simulation (Automated Driving Toolbox)
Develop a visual simultaneous localization and mapping (SLAM) algorithm using image data from the Unreal Engine® simulation environment.
- Implement Visual SLAM in MATLAB
Understand the visual simultaneous localization and mapping (vSLAM) workflow and how to implement it using MATLAB.
- Choose SLAM Workflow Based on Sensor Data
Choose the right simultaneous localization and mapping (SLAM) workflow and find topics, examples, and supported features.