SPRADA8 - May 2023 - AM68A, TDA4VL-Q1


  Abstract
  Trademarks
  1 Introduction
  2 AM68A Processor
  3 Edge AI Use Cases on AM68A
    3.1 AI Box
    3.2 Machine Vision
    3.3 Multi-Camera AI
  4 Software Tools and Support
    4.1 Edge AI Software Development Kit (SDK)
    4.2 Edge AI SDK Demonstrations
    4.3 Edge AI Model Zoo
    4.4 Edge AI Studio
  5 Conclusion
  6 Reference

1 Introduction

Just as vision is a primary sense for human beings, machines use vision to perceive and comprehend the environments around them. Camera sensors provide rich information about their surroundings, and advances in deep-learning-based AI make it possible to analyze enormous amounts of complex visual data with high accuracy. As a result, in applications such as machine vision, robotics, surveillance, and home and factory automation, camera-based analytics has become an increasingly powerful and important tool.

Embedded processors (EPs) with AI capability, that is, edge AI processors, are accelerating this trend. An EP can turn visual data from multiple cameras into actionable insight, mimicking the eyes and brain of a human. In contrast to cloud-based AI, where deep neural network (DNN) inference runs on central computing devices, edge AI processes and analyzes visual data on systems directly connected to the sensors, such as edge AI processors. Edge AI technology not only makes existing applications smarter but also opens up new applications that require intelligent processing of large amounts of visual data for 2D and 3D perception.

Edge AI is specifically suited to time-sensitive applications. However, edge AI requires a low-power processor that can handle multiple vision sensors and execute multiple DNN inferences simultaneously at the edge, which presents challenges in size, power consumption, and heat dissipation. The sensors and processor must fit in a small form factor and operate efficiently in the harsh environments of factories, farms, and construction sites, as well as inside vehicles or in cameras installed along roads. Moreover, certain equipment, such as mobile machines and robots, requires functionally safe 3D perception. The global market for such edge AI processors was valued at $2.1 billion in 2021 and is expected to reach $5.5 billion by 2028(1).

This paper focuses on the highly integrated AM68A processor and several edge AI use cases, including AI Box, machine vision, and multi-camera AI. It also discusses how to optimize edge AI systems by combining the heterogeneous architecture of the AM68A with optimized AI models and an easy-to-use software architecture.