SPRADB4 June 2023
AM69A, TDA4VH-Q1

 

Table of Contents

  Abstract
  Trademarks
  1 Introduction
  2 AM69 Processor
  3 Edge AI Use Cases on AM69A
    3.1 AI Box
    3.2 Machine Vision
    3.3 Multi-Camera AI
    3.4 Other Use Cases
  4 Software Tools and Support
  5 Conclusion
  6 References

Introduction

A camera is a primary sensor modality for robots and machines to perceive and comprehend the environments around them because of the rich visual data that cameras provide. Advances in deep-learning-based AI and in embedded processors with AI capability, that is, edge AI processors, have made it possible to analyze enormous and complex visual data with higher accuracy and at lower power than before. Consequently, cameras are the most widely used sensors to analyze scenes, detect obstacles, recognize tags and 2D and 3D barcodes, localize the positions of objects as well as the position of the ego robot, map environments, and so forth.

Depending on the proximity of compute resources to data sources, there are two approaches to executing AI for video analytics: cloud AI and edge AI. Cloud AI processes visual data on central computing infrastructure for the training and inference of deep neural network (DNN) models. Cloud AI can analyze enormous amounts of data using substantial computing resources, and therefore cloud AI has been dominant, especially in model training. However, because data needs to be transferred to the cloud, cloud AI presents latency and security concerns for real-time applications. In contrast, edge AI runs DNN model inference on devices directly connected to camera sensors. Since camera data is processed locally, edge AI enables real-time processing with reduced latency and fewer security concerns.

Edge AI requires a low-power edge AI processor that can process multiple cameras and execute multiple DNN inferences on them simultaneously. As edge AI processors become more powerful, edge AI technology is becoming widely used in many applications, which in turn poses challenges to the edge AI processor in terms of size, power consumption, and heat dissipation. The processor needs to fit in a small form factor and operate efficiently in the harsh environments of factories and construction sites, as well as inside vehicles or in cameras installed on roads. Moreover, certain equipment, such as mobile machines and robots, requires the edge AI processor to be certified for applications that follow strict functional safety standards.

This paper introduces the highly integrated AM69A processor. Several edge AI use cases running on the AM69A are presented with estimates of resource utilization and power consumption. The use cases include AI box, machine vision, multi-camera AI, and others. Developing edge AI systems using the heterogeneous architecture of the AM69A with optimized AI models and an easy-to-use software architecture is also discussed.