Writing Visual SLAM from Scratch (0)

SLAM (Simultaneous Localization and Mapping) is a core technology in robotic navigation. Visual SLAM uses information from a camera, such as a monocular camera (e.g., a webcam) or a stereo camera (e.g., an Intel RealSense), to estimate the camera's poses in the environment and to build a map of the surroundings. In this series of posts, I would like to write visual SLAM code from scratch. The work follows Xiang Gao, Tao Zhang, Yi Liu, and Qinrui Yan, *14 Lectures on Visual SLAM: From Theory to Practice*, Publishing House of Electronics Industry, 2017.
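
To make the "localization plus mapping" idea concrete, below is a minimal structural sketch of the kind of frame-by-frame loop this series will gradually build out. Every type and function in it (`Frame`, `Pose`, `Map`, `trackFrame`, `updateMap`) is a hypothetical placeholder for illustration, not code from the book or from any library.

```cpp
// Minimal structural sketch of a visual SLAM loop (placeholders only).
#include <iostream>
#include <vector>

struct Pose {                        // camera pose: rotation + translation (placeholder)
    double rotation[9] = {1, 0, 0, 0, 1, 0, 0, 0, 1};
    double translation[3] = {0, 0, 0};
};

struct Frame {                       // one image grabbed from the camera (placeholder)
    int id = 0;
};

struct Map {                         // the growing map of the environment (placeholder)
    std::vector<Pose> trajectory;    // estimated camera poses so far
};

// Localization: estimate the current frame's pose against the map (stub).
// A real system would do feature matching or direct alignment plus optimization here.
Pose trackFrame(const Frame& frame, const Map& map) {
    Pose pose;
    return pose;
}

// Mapping: insert the new observation into the map (stub).
void updateMap(Map& map, const Frame& frame, const Pose& pose) {
    map.trajectory.push_back(pose);
}

int main() {
    Map map;
    for (int i = 0; i < 5; ++i) {            // stand-in for a live camera stream
        Frame frame{i};
        Pose pose = trackFrame(frame, map);  // localization
        updateMap(map, frame, pose);         // mapping
    }
    std::cout << "Estimated " << map.trajectory.size() << " poses\n";
    return 0;
}
```

Later posts in the series will replace these stubs with real components (feature extraction, pose estimation, and map management) following the structure of the book.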

The code will be maintained in my GitHub repository Visual-SLAM. You can also find the reference code for *14 Lectures on Visual SLAM: From Theory to Practice*.