Autonomous navigation of a mobile robot requires high-performance computing resources for both Artificial Intelligence (AI) and traditional computer vision to perform various tasks with multiple sensors. This paper introduces the main tasks and related technologies for the autonomous navigation of mobile robots. This paper also demonstrates that the AM69A processor, the high-end device in the AM6xA scalable embedded processor family, is well suited for autonomous navigation in terms of both performance and ease of development.
Autonomous Mobile Robots (AMRs) help improve productivity and operational efficiency in manufacturing, warehousing, logistics, and other industries. For example, AMRs can carry packages in warehouses and logistics centers, vacuum-clean floors, and serve food and drinks in restaurants. Early AMRs operated in workspaces restricted from humans and navigated along predetermined paths guided by lanes and AprilTags on the ground. Therefore, early AMRs did not require many sensors or stringent functional safety features. Robots that follow a predefined path are also called Automated Guided Vehicles (AGVs). In contrast, recent AMRs are equipped with advanced sensors to operate in workspaces shared with humans and to navigate freely but safely through the environment, performing assigned tasks at designated locations with as little human intervention as possible.
As shown in Figure 1-1, there are three main tasks for AMRs to navigate safely on their own: localization, perception, and planning. First, the mobile robot must know its own location in the workspace; accurate localization is the minimum requirement for autonomous navigation. Once localized, the mobile robot must perceive the dynamic environment, including moving objects such as humans and other robots in operation. Finally, the robot must plan a path to the destination and control itself accordingly to avoid situations that raise safety concerns. This paper discusses how these tasks work and their inherent challenges, focusing mainly on localization and mapping.
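For illustration, the following Python sketch shows one way the three tasks can be composed into a single control loop that runs repeatedly while the robot is in motion. The robot interface and function names (localize, perceive, plan_path, follow_path, at) are hypothetical placeholders, not part of any specific SDK.

```python
import time

def navigation_loop(robot, goal, rate_hz=10.0):
    """Run one localize -> perceive -> plan -> control cycle per iteration.

    The 'robot' object is a hypothetical interface that wraps sensor input
    and motor output; each method below stands in for one of the three tasks.
    """
    period = 1.0 / rate_hz
    while not robot.at(goal):
        pose = robot.localize()                        # estimate position and orientation in the map
        obstacles = robot.perceive()                   # detect humans, other robots, and static objects
        path = robot.plan_path(pose, goal, obstacles)  # replan a safe path to the destination
        robot.follow_path(path)                        # issue velocity commands toward the next waypoint
        time.sleep(period)                             # hold the loop to the desired control rate
```

In practice, each of these steps runs on a different compute element of the processor, which is why a heterogeneous architecture helps keep the whole loop within its timing budget.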
The AM69A processor is a heterogeneous microprocessor built for high-performance computing applications using both traditional analytics and AI algorithms. Key components include eight Arm® Cortex®-A72 cores, a Vision Processing Accelerator (VPAC), four C7x Digital Signal Processors (DSPs) with Matrix Multiplication Accelerators (MMAs), a Graphics Processing Unit (GPU), a video codec, an isolated MCU island, and more. The VPAC contains multiple accelerators, including the Vision Imaging Subsystem (VISS), that is, the Image Signal Processor (ISP), Lens Distortion Correction (LDC), and Multi-Scaler (MSC). Figure 1-2 illustrates the simplified AM69A block diagram. More details can be found in the AM69x Processors, Silicon Revision 1.0 data sheet. Multi-camera AI use cases on AM69A are introduced in the Advanced AI Vision Processing Using AM69A for Smart Camera Applications technical white paper. This paper explains why the AM69A is the best processor to run all three tasks simultaneously for autonomous navigation.
Making a robot navigate along a fixed path is easily accomplished with lanes and AprilTags painted on the ground. For example, the robot can be programmed to travel from one tag to another by defining what the robot must do upon detecting and recognizing each tag: move forward, turn right and move, lift a package and move forward, and so forth. In such scenarios, the robot does not need to localize as long as the robot can be controlled precisely as directed.
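A minimal sketch of this tag-to-action scheme is shown below. The tag IDs and action names are illustrative assumptions, not taken from any particular AGV implementation; in a real system the actions map to motion primitives executed by the drive controller.

```python
# Hypothetical mapping from AprilTag IDs to predefined motion primitives.
TAG_ACTIONS = {
    0: "move_forward",
    1: "turn_right_then_move",
    2: "lift_package_then_move_forward",
    3: "stop",
}

def action_for_tag(tag_id):
    """Return the action assigned to a detected tag, or None to keep lane-following."""
    return TAG_ACTIONS.get(tag_id)

# Example: the robot detects tag 1 and executes the corresponding motion primitive.
print(action_for_tag(1))  # prints "turn_right_then_move"
```

Because every decision is a fixed lookup triggered by a recognized tag, the robot never needs an estimate of its absolute position in the workspace.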
Localization is the process by which the mobile robot calculates its position and orientation in the mapped area while navigating dynamically. In most cases, the map is also built in advance by the mobile robot and saved in a format that the robot can reuse for localization. The mobile robot explores an unknown environment and updates its location while building the map simultaneously. For this reason, this mapping process is called Simultaneous Localization and Mapping (SLAM).
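The sketch below shows the simplest building block of this pose estimation: dead-reckoning integration of wheel odometry for a 2D pose (x, y, theta). This is a generic kinematic update, not a full SLAM implementation; a SLAM back end corrects the drift that accumulates here by matching sensor observations against the map being built.

```python
import math

def integrate_odometry(pose, v, omega, dt):
    """Propagate a 2D pose (x, y, theta) using linear velocity v and angular velocity omega.

    Dead reckoning alone accumulates drift over time; SLAM bounds that drift
    by correcting the pose with observations of the environment.
    """
    x, y, theta = pose
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta = (theta + omega * dt + math.pi) % (2.0 * math.pi) - math.pi  # wrap to [-pi, pi)
    return (x, y, theta)

# Example: drive at 0.5 m/s while turning at 0.1 rad/s for one 0.1 s control step.
pose = integrate_odometry((0.0, 0.0, 0.0), v=0.5, omega=0.1, dt=0.1)
print(pose)
```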