SPRADE2 October 2023 AM69A
Figure 2-3 shows the localization process in a previously mapped environment. This process is similar to the SLAM front end in Figure 2-2; the only difference is that, once features are extracted from a frame, the corresponding features are searched for in the map rather than in other frames. After the matched features are found, the pose of the mobile robot can be calculated with methods such as Perspective-n-Point (PnP) or Iterative Closest Point (ICP).
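As a concrete illustration of the pose-estimation step, the closed-form least-squares alignment used inside point-to-point ICP can be written in a few lines. This is a generic sketch (the Kabsch/Umeyama solution), not code from the document; the function name and array shapes are assumptions for illustration.

```python
import numpy as np

def estimate_rigid_pose(map_pts, frame_pts):
    """One least-squares step of point-to-point ICP: find the rotation R and
    translation t minimizing ||R @ frame_pts + t - map_pts|| for
    already-associated 3D feature pairs (both arrays are N x 3)."""
    mu_m = map_pts.mean(axis=0)
    mu_f = frame_pts.mean(axis=0)
    # 3x3 cross-covariance of the centered point sets
    H = (frame_pts - mu_f).T @ (map_pts - mu_m)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_m - R @ mu_f
    return R, t
```

In a full ICP loop this solve alternates with re-associating nearest neighbors until the pose converges; PnP plays the same role when the map features are 3D but the frame observations are 2D image points.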
Table 2-1 summarizes the widely used techniques in each step of graph SLAM and localization.
| | Visual SLAM | LiDAR SLAM |
|---|---|---|
| Feature extraction | | |
| Feature association | | |
| Pose estimation | | |
| Loop closure detection | | |
| Graph optimization (bundle adjustment) | | |
The AM69A embedded processor is an excellent choice for SLAM and localization. Its eight Arm Cortex-A72 cores provide more than enough computing power for complex SLAM and localization algorithms, and many open-source algorithms can be quickly implemented and benchmarked on the AM69A. Moreover, functional blocks such as feature extraction, feature matching, and pose estimation in Figure 2-2 and Figure 2-3 can be offloaded to hardware accelerators (HWAs) and the C7x DSP to improve performance. Internal studies show that the throughput of ORB SLAM with a stereo camera improves by 2 to 3 times when stereo rectification and feature extraction are offloaded to the LDC, MSC, and DSP.
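To make the feature-matching block concrete: ORB-style features use binary descriptors compared by Hamming distance, and the matching stage is a brute-force nearest-neighbor search that is a natural candidate for offload. Below is a minimal NumPy sketch of mutual-nearest-neighbor Hamming matching; the function name, the (N, 32) descriptor layout, and the distance threshold are assumptions for illustration, not the document's implementation.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, max_dist=64):
    """Brute-force Hamming matching of binary descriptors (e.g. 256-bit
    ORB descriptors stored as (N, 32) uint8 arrays). Returns (i, j) pairs
    that are mutual nearest neighbors within max_dist bits."""
    # XOR all pairs, then count differing bits (popcount) per pair.
    xor = desc_a[:, None, :] ^ desc_b[None, :, :]   # (Na, Nb, 32)
    dist = np.unpackbits(xor, axis=2).sum(axis=2)   # (Na, Nb) Hamming distances
    best_b = dist.argmin(axis=1)                    # nearest b for each a
    best_a = dist.argmin(axis=0)                    # nearest a for each b
    return [(i, j) for i, j in enumerate(best_b)
            if best_a[j] == i and dist[i, j] <= max_dist]
```

The all-pairs XOR/popcount structure is exactly the kind of regular, data-parallel workload that maps well to a DSP, which is why offloading this stage yields the throughput gains described above.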