Edge AI, in which AI algorithms are processed on local devices instead of in the cloud, is changing what is possible in industrial and automotive applications where deep neural networks (DNNs) are the main algorithm component. To operate efficiently in environments constrained by size, power, heat dissipation and cost, edge AI applications require high-speed, low-power processing along with advanced integrations unique to the application and its tasks.
Figure 1 shows some of the applications where edge AI processing can improve performance and efficiency. For example, edge AI systems that use vision input can implement a single camera for quality control on a production line, or multiple cameras to help support functional safety in a car or mobile robot.
Edge AI systems can help improve efficiency
in warehouses and factories; make cities, construction and agriculture safer and more
efficient; and make homes and retail settings smart. Let’s take a look at a few systems that
require efficient edge AI processing:
- Advanced driver assistance systems (ADAS). ADAS technology provides insight into the environment around a vehicle to make driving more convenient, less stressful and safer. Most ADAS features are vision-based systems, taking high-resolution inputs from multiple camera sensors and using deep learning and computer vision algorithms to interpret those images.
- Autonomous mobile robots and
drones. For commercially viable robots, the system on chip (SoC) must process
complex perception and navigation stacks at high speeds and low power, with optimized
system costs. The SoC must also offload computationally intensive tasks such as image dewarping, stereo depth estimation, scaling, image pyramid generation and deep learning to dedicated hardware accelerators for maximum system efficiency, as shown in the sketch after this list.
- Smart shopping carts. Smart shopping carts can calculate order totals when items are placed in the cart, recommend shopping list items and allow consumers to pay for groceries on the cart, enabling customers to have a more customized shopping experience and skip checkout lines. Most smart shopping carts have multiple vision sensors that automatically detect items using cameras and computer vision.
- Edge AI boxes. Edge AI boxes are an intelligent extension of the camera systems used in retail automation, factory monitoring and building surveillance. Despite size, power and heat dissipation constraints, high-throughput AI enables a single box to perform intelligent processing for a greater number of cameras.
- Machine vision cameras. Machine
vision cameras for optical character recognition, object identification, defect detection
and robotic arm guidance leverage embedded AI technologies to further simplify product
development and improve system accuracy.
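To make the robotics pipeline above more concrete, here is a minimal Python sketch, assuming OpenCV and NumPy, that strings together the stages named in that bullet: image dewarping, image pyramid generation, stereo depth estimation and deep learning inference. The calibration values and the `run_detector()` hook are hypothetical placeholders, and on an edge SoC each stage would typically be dispatched to a dedicated hardware accelerator rather than executed on the CPU as it is here.

```python
import cv2
import numpy as np

# Hypothetical calibration values; real numbers come from camera calibration.
K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])            # camera intrinsic matrix
DIST = np.array([-0.3, 0.1, 0.0, 0.0])     # lens distortion coefficients
SIZE = (1280, 720)                         # frame width, height

# Precompute the dewarp maps once; per-frame remapping is then a cheap lookup.
MAP1, MAP2 = cv2.initUndistortRectifyMap(K, DIST, np.eye(3), K, SIZE, cv2.CV_32FC1)

# Semi-global block matching for depth from a left/right camera pair.
STEREO = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)

def run_detector(frame):
    """Placeholder for a DNN object detector; on an edge SoC this call would
    be dispatched to the deep learning accelerator."""
    return []

def perception_step(left_raw, right_raw):
    # 1. Image dewarping: undo lens distortion on both camera inputs.
    left = cv2.remap(left_raw, MAP1, MAP2, cv2.INTER_LINEAR)
    right = cv2.remap(right_raw, MAP1, MAP2, cv2.INTER_LINEAR)

    # 2. Image pyramid generation: progressively downscaled copies
    #    for multi-scale feature extraction and tracking.
    pyramid = [left]
    for _ in range(2):
        pyramid.append(cv2.pyrDown(pyramid[-1]))

    # 3. Stereo depth estimation from the dewarped left/right pair.
    disparity = STEREO.compute(cv2.cvtColor(left, cv2.COLOR_BGR2GRAY),
                               cv2.cvtColor(right, cv2.COLOR_BGR2GRAY))

    # 4. Deep learning inference on the full-resolution frame.
    detections = run_detector(left)
    return disparity, pyramid, detections
```

The same structure applies per camera pair; a real robot runs several such pipelines concurrently, which is why Table 1 calls out multicamera ISP, vision accelerators and depth and motion accelerators for robotics.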
Table 1 lists system requirements from various applications.
Table 1: Key processing and component requirements of edge AI systems.

| | ADAS | Robotics | Smart Retail | Machine Vision | Edge AI Box |
| --- | --- | --- | --- | --- | --- |
| Deep learning accelerator | x | x | x | x | x |
| Multicamera image signal processing (ISP) | x | x | x | x | x |
| Vision accelerators | x | x | x | x | x |
| Depth and motion accelerators | x | x | x | x | x |
| Ethernet switch | x | x | | | x |
| Peripheral Component Interconnect Express (PCIe) switch | x | x | | | |
| Functional safety | x | x | | | |