SSZT087 March 2022 TDA4AEN-Q1 , TDA4AH-Q1 , TDA4AL-Q1 , TDA4AP-Q1 , TDA4VE-Q1 , TDA4VEN-Q1 , TDA4VH-Q1 , TDA4VL-Q1 , TDA4VM , TDA4VM-Q1 , TDA4VP-Q1
Originally published in Electronic Products.
Autonomous robots are intelligent machines that can understand and navigate through their environment without human control or intervention. Although this technology is relatively young, autonomous robots already serve many different use cases in factories, warehouses, cities, and homes. For example, some robots transport goods around warehouses, as shown in Figure 1, or perform last-mile delivery, while others vacuum homes or mow lawns.
Autonomy requires that robots be able to sense and orient themselves within a mapped environment, dynamically detect the obstacles around them, track those obstacles, plan a route to a specified destination, and control the vehicle to follow that plan. In addition, the robot must perform these tasks only when it is safe to do so, avoiding situations that pose risks to humans, property, or the system itself. With robots working in closer proximity to humans than ever before, they must not only be autonomous, mobile and energy-efficient, but also meet functional safety requirements. Sensors, processors and control devices can help designers meet the rigorous requirements of functional safety standards such as International Electrotechnical Commission (IEC) 61508.
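Conceptually, these tasks form a sense-plan-act loop. The sketch below illustrates that loop in Python; every method on the `robot` object is a hypothetical placeholder chosen for illustration, not an actual robot framework or TI API.

```python
# Hypothetical sense-plan-act loop for the pipeline described above.
# All methods on `robot` are illustrative placeholders, not a real API.

def control_loop(robot, goal):
    while not robot.at(goal):
        pose = robot.localize()                # orient within the mapped environment
        obstacles = robot.detect_obstacles()   # dynamically detect surroundings
        tracks = robot.track(obstacles)        # track obstacles over time
        path = robot.plan(pose, goal, tracks)  # plan a route to the destination
        if robot.is_safe(path, tracks):        # act only when it is safe to do so
            robot.follow(path)                 # control the vehicle along the plan
        else:
            robot.stop()                       # otherwise hold until the risk clears
```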
Several different types of sensors can help solve the challenges of autonomous robot design. Let's take a closer look at two of them:
Vision sensors. Vision sensors closely emulate human vision and perception. Vision systems can address localization, obstacle detection, and collision avoidance because they offer high-resolution spatial coverage and the ability not only to detect objects but also to classify them. Vision sensors are also more cost-effective than sensors such as lidar. They are, however, very computationally intensive.
Power-hungry central processing units (CPUs) and graphics processing units (GPUs) can pose a challenge in power-constrained robot systems. When designing an energy-efficient robotic system, CPU- or GPU-based processing should be kept to a minimum. The system-on-chip (SoC) in an efficient vision system should process the vision signal chain at high speed and low power, with optimized system cost. SoCs used for vision processing must be smart, safe, and energy-efficient. The TDA4 processor family is a highly integrated family of processors designed with a heterogeneous architecture to deliver computer vision performance, deep learning processing, stereo vision capabilities, and video analytics, all while consuming minimal power.
TI millimeter-wave (mmWave) radar. Using TI mmWave radar in robotics applications is a relatively new concept, but the idea of using mmWave sensing for autonomy has been around for a while. In automotive applications, TI mmWave radar is a key component of advanced driver assistance systems (ADAS) and has long been used to monitor a vehicle's surroundings. You can take some of those same ADAS concepts, such as surround-view monitoring and collision avoidance, and apply them to robotics.
TI mmWave radar is unique from a sensing-technology perspective because these sensors provide the range, velocity and angle of arrival of objects, information that tells the robot how to navigate to avoid collisions. Using radar sensor data, the robot can decide whether to safely continue on its path, slow down, or even stop, depending on the position, speed and trajectory of an approaching person or object, as shown in Figure 2.
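As a minimal illustration of that decision logic, the sketch below maps a single radar detection (range, radial velocity and angle of arrival) to a continue, slow or stop action. The thresholds and the time-to-collision heuristic are illustrative assumptions, not TI-recommended values.

```python
import math

def time_to_collision(range_m, radial_velocity_mps):
    """Seconds until impact; valid only while the object is closing.

    By convention here, a negative radial velocity means approaching.
    """
    if radial_velocity_mps >= 0:
        return math.inf                      # moving away or stationary
    return range_m / -radial_velocity_mps

def choose_action(range_m, radial_velocity_mps, angle_deg,
                  stop_range_m=0.5, slow_ttc_s=3.0, path_half_angle_deg=30.0):
    """Map one radar detection to an action. All thresholds are illustrative."""
    if abs(angle_deg) > path_half_angle_deg:
        return "continue"                    # object is outside the robot's path
    if range_m <= stop_range_m:
        return "stop"                        # too close, regardless of motion
    if time_to_collision(range_m, radial_velocity_mps) <= slow_ttc_s:
        return "slow"                        # on a closing trajectory
    return "continue"
```

For example, with these placeholder thresholds, a person detected 1.5 m ahead and closing at 1 m/s (a time to collision of 1.5 s) would trigger the "slow" action.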
For more complicated applications, a single sensor alone might not be sufficient to enable autonomy, regardless of the sensor type. Ultimately, sensors such as cameras and radar should complement each other in a system. Fusing the data of different sensor modalities on a processor can help solve some of the more complex autonomous robot challenges.
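One common fusion pattern, sketched below under simplified assumptions, is late fusion: the camera supplies the class label while the radar supplies range and velocity, with the two associated by azimuth angle. The data structures and the 5-degree association gate here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CameraDetection:
    label: str           # e.g. "person" or "box", from the vision classifier
    azimuth_deg: float   # bearing of the bounding-box center

@dataclass
class RadarDetection:
    range_m: float
    radial_velocity_mps: float
    azimuth_deg: float

def fuse(camera_dets, radar_dets, max_gap_deg=5.0):
    """Pair each camera detection with the nearest radar return in angle."""
    fused = []
    for cam in camera_dets:
        nearest = min(radar_dets,
                      key=lambda r: abs(r.azimuth_deg - cam.azimuth_deg),
                      default=None)
        if nearest and abs(nearest.azimuth_deg - cam.azimuth_deg) <= max_gap_deg:
            # Camera tells us *what* the object is; radar tells us *where*
            # it is and *how fast* it is closing.
            fused.append((cam.label, nearest.range_m, nearest.radial_velocity_mps))
    return fused
```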
While sensor fusion helps robots be more accurate, artificial intelligence (AI) at the edge can help make them intelligent. Incorporating edge AI into robot systems enables robots to intelligently perceive, make decisions and act. A robot with edge AI can detect an object and its position, classify the object, and take action accordingly. For example, when a robot is navigating a busy warehouse, edge AI can help it infer what kinds of objects, including humans, boxes, machinery or even other robots, are in its path and decide what actions are appropriate to navigate around them.
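A thin decision layer often sits on top of the detector's output. The sketch below shows one hypothetical policy for the warehouse example: the labels would come from a deep learning detector, and the label-to-action table and confidence threshold are assumptions chosen for illustration.

```python
# Hypothetical policy mapping detected object classes to navigation actions.
ACTION_POLICY = {
    "person": "stop_and_wait",     # give humans the widest berth
    "robot": "yield",              # coordinate with other robots
    "machinery": "replan_around",  # route around static obstacles
    "box": "replan_around",
}

# Most conservative actions first; the robot takes the highest-priority
# action demanded by anything in its path.
PRIORITY = ["stop_and_wait", "yield", "replan_around", "continue"]

def decide(detections, min_confidence=0.5):
    """`detections` is a list of (label, confidence) pairs from the detector."""
    actions = {ACTION_POLICY.get(label, "continue")
               for label, confidence in detections
               if confidence >= min_confidence}
    for action in PRIORITY:
        if action in actions:
            return action
    return "continue"

# Example: a person and a box ahead -> the person's action wins.
print(decide([("person", 0.9), ("box", 0.8)]))  # prints "stop_and_wait"
```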
When designing a robot system that incorporates AI, there are design considerations for both hardware and software. The TDA4 processor family has hardware accelerators for edge AI functions that help perform computationally intensive tasks in real time. Access to an easy-to-use edge AI software development environment can simplify and speed up application development and hardware deployment. You can learn more about TI's free tools, software and services designed to help with the development process in our article, "How to simplify your embedded edge AI application development."
Designing more intelligent and autonomous robots is a necessity for continued improvements in automation. Robots in warehouses and delivery fleets can keep up with and enhance e-commerce growth, while robots in the home can perform mundane tasks like vacuuming and mowing. Autonomous robots unlock productivity and efficiency that improve and add value to our lives.