Rachel Scheller
Lidar technology continues to improve how robots and autonomous systems sense, react and operate safely in a variety of environments. The technology has been around for decades, but recent developments have led to lidar’s adoption in robots such as the autonomous mobile robot (AMR) shown in Figure 1.
There’s a popular movie franchise in which robots transform from automobiles into fully functional robots with humanlike personalities. In reality, today’s robots don’t have that level of sentience; they need cameras to act as their eyes so they can move through the world, with all of its unpredictable obstacles.
In the automotive sector, it’s easy to understand why it’s beneficial to have all possible navigation methods available – not just cameras – to ensure the safety of vehicles, passengers and pedestrians. With an approach called sensor fusion, sensing modalities such as lidar, radar and cameras work in parallel to measure distance and velocity, giving a vehicle the best possible view of the obstacles in its surrounding environment.
Similarly, in the robotics sector, advancements in lidar technology will help enable wider deployment of AMRs in diverse environments, providing greater environmental awareness, obstacle detection and real-time reaction while overcoming the limitations of traditional camera-based systems.
On commercial vehicles, you’ll find mechanically scanning lidar on top of the vehicle or on the side of the chassis (see Figure 2). About the size of a hockey puck, these lidar modules often contain anywhere from 32 to 128 channels and rotate at extremely high speeds, completing a full 360-degree rotation in about 0.2s. An optical time-of-flight (ToF) architecture with an analog-to-digital converter (ADC)-based system enables each lidar module to gain additional information per channel, but with the trade-off of higher power consumption and a larger size.
Lidar modules with an ADC are often referred to as 3D or 4D designs, based on their ability to create a three- or four-dimensional point cloud of information.
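As a simple illustration of the optical ToF principle behind these point clouds, the sketch below converts a measured round-trip time into a one-way distance; the 200ns example value is hypothetical and not tied to any particular module.

```python
# Minimal sketch of the optical time-of-flight principle: the distance to a
# target is half the round-trip time of the laser pulse multiplied by the
# speed of light. The timing value below is a hypothetical example.

C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_time_s: float) -> float:
    """Convert a measured round-trip time into a one-way distance in meters."""
    return C * round_trip_time_s / 2.0

# Example: a return pulse arriving 200 ns after emission corresponds to ~30 m.
print(f"{tof_distance_m(200e-9):.2f} m")  # -> 29.98 m
```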
Industrial lidar has the same technological fundamentals, but often fewer channels per module – in some cases only one. Because of their reduced complexity and size, industrial modules are often lower in cost and power consumption, which makes them easier to integrate into a robotic design. Industrial lidar systems are categorized as two-dimensional or one-dimensional based on whether they create a two-dimensional point cloud or a single distance measurement.
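To make the two-dimensional case concrete, here is a minimal sketch that turns one revolution of single-channel range readings into a 2D point cloud; the one-degree angular step and the range values are assumed example numbers.

```python
import math

def scan_to_points(ranges_m, start_angle_rad=0.0, angle_step_rad=math.radians(1.0)):
    """Convert a single-channel scan (one range reading per angular step)
    into a list of 2D (x, y) points around the sensor."""
    points = []
    for i, r in enumerate(ranges_m):
        theta = start_angle_rad + i * angle_step_rad
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Example: four readings at 0, 1, 2 and 3 degrees, all 5 m away.
print(scan_to_points([5.0, 5.0, 5.0, 5.0]))
```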
Industrial lidar applications include traffic monitoring, port and terminal monitoring, distribution warehouse navigation and monitoring, AMRs, autonomous industrial vehicles, and personal electronics such as smartphones and tablets.
With AMRs going into new places and moving more independently than ever before, cameras may not be enough. Imagine an AMR delivery unit moving down a sidewalk in a neighborhood. Even along the sidewalk, there could be obstacles such as cars, garbage bins, pedestrians, bikes or children’s toys, all of which could affect the robot’s ability to navigate.
It’s important for the AMR to detect those obstacles, assess the potential impact, and react accordingly in real time. Adding a lidar module provides the necessary resolution and response time for AMRs to sense an environmental change (such as a ball rolling into its path), react quickly, and avoid a collision. Figure 3 demonstrates an AMR with lidar moving along a crowded sidewalk.
While cameras can provide a high-resolution image, they struggle to measure distance accurately, which is important when deciding whether an AMR can keep moving or needs to redirect.
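As a rough illustration of that decision, the sketch below flags an obstacle that falls inside the robot’s stopping distance; the speed, deceleration and safety margin are hypothetical values, and a real AMR would feed lidar data into a full navigation stack rather than a single threshold.

```python
def must_redirect(obstacle_distance_m: float,
                  speed_m_s: float,
                  max_decel_m_s2: float = 1.5,   # hypothetical braking capability
                  safety_margin_m: float = 0.5) -> bool:
    """Return True if the measured obstacle lies inside the stopping distance
    (v^2 / 2a) plus a safety margin, meaning the robot must stop or redirect."""
    stopping_distance = speed_m_s ** 2 / (2.0 * max_decel_m_s2)
    return obstacle_distance_m < stopping_distance + safety_margin_m

# Example: at 1.5 m/s, an obstacle reported 1.0 m ahead requires a redirect.
print(must_redirect(1.0, 1.5))  # -> True
```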
Additionally, AMRs need to be able to work in all lighting and weather conditions. Lidar technology is not limited by these conditions, nor does it require external illumination (a limitation of camera-based systems).
As the promise of lidar becomes more evident, design engineers must choose the best path for integrating this advanced sensing technology into their systems. To start, you must design a laser driver circuit for the transmit path and a transimpedance amplifier (TIA) to manage the receive path of the lidar optical design, as shown in Figure 4. Alternative options include implementing the receive signal chain between the photodiode and the time-to-digital converter with a single-chip TIA such as the LMH34400, or pairing your design with a high-speed comparator such as the TLV3801.
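As a back-of-the-envelope aid when sizing that receive path, the following sketch estimates the transimpedance gain needed to lift a given photodiode return current above a comparator threshold; the 2µA photocurrent and 100mV threshold are assumed values, not figures from any datasheet.

```python
def required_transimpedance_ohms(photocurrent_a: float, threshold_v: float) -> float:
    """Transimpedance gain (V/A, i.e. ohms) needed so the TIA output
    crosses the comparator threshold for a given photodiode current."""
    return threshold_v / photocurrent_a

# Example: a 2 uA return pulse and a 100 mV comparator threshold
# call for at least 50 kilo-ohms of transimpedance gain.
print(f"{required_transimpedance_ohms(2e-6, 0.1):.0f} ohms")  # -> 50000 ohms
```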
When designing the transmit path, the LMH13000 integrated laser driver offers high output-current drive in two operational modes – continuous and pulsed – minimizing the need for additional discrete components. This low-voltage differential signaling (LVDS)-controlled current source achieves 2% pulse variation over temperature, with rise and fall times of 800ps and pulse frequencies of up to 250MHz. Operating as a pulsed current source, the LMH13000 can support output currents from 50mA to 5A.
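The sketch below is a simple timing sanity check built around the figures quoted above (800ps rise and fall times, pulse frequencies up to 250MHz); the 2ns pulse width and 1MHz repetition rate are example values rather than recommendations.

```python
RISE_TIME_S = 800e-12       # rise time quoted above
FALL_TIME_S = 800e-12       # fall time quoted above
MAX_PULSE_FREQ_HZ = 250e6   # maximum pulse frequency quoted above

def pulse_timing_ok(pulse_width_s: float, repetition_rate_hz: float) -> bool:
    """Check that the pulse is wider than its rise and fall edges, the
    repetition rate stays within the quoted maximum, and the pulse fits
    inside one repetition period."""
    period = 1.0 / repetition_rate_hz
    return (pulse_width_s >= RISE_TIME_S + FALL_TIME_S
            and repetition_rate_hz <= MAX_PULSE_FREQ_HZ
            and pulse_width_s <= period)

# Example: a 2 ns pulse at a 1 MHz repetition rate easily fits these limits.
print(pulse_timing_ok(2e-9, 1e6))  # -> True
```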
Combined, narrow pulses and high output-current drive enable higher-power pulses that can extend distance measurements by as much as 30%, all while maintaining eye safety standards. With these improved capabilities, robots can detect obstacles faster and more accurately, improving real-time decision-making and enabling safer navigation in complex environments.
Lidar is an inherent part of the path toward mobile autonomy in both automotive and industrial vehicles. Enabling object detection and collision avoidance in real time enhances safety for vehicles and people alike. Those lifelike mobile robots in the movies, navigating everyday surroundings, may not be as far in the future as we think.
All trademarks are the property of their respective owners.