SPRY344A January 2022 – March 2023 AM67 , AM67A , AM68 , AM68A , AM69 , AM69A , TDA4AEN-Q1 , TDA4AH-Q1 , TDA4AL-Q1 , TDA4AP-Q1 , TDA4APE-Q1 , TDA4VE-Q1 , TDA4VEN-Q1 , TDA4VH-Q1 , TDA4VL-Q1 , TDA4VM , TDA4VM-Q1 , TDA4VP-Q1 , TDA4VPE-Q1
Vision-based edge AI systems often include single- or multi-camera image processing and traditional computer vision tasks. On a CPU or GPU, these tasks consume a lot of power and face throughput limitations.
This class of edge AI processor SoC accelerates computationally intensive, low-level, brute-force pixel-processing vision tasks in a vision processing accelerator core, including image signal processing (ISP), lens distortion correction, multiscaling, and bilateral noise filtering. A depth and motion perception accelerator core accelerates stereo depth estimation and dense optical flow to help enhance perception of the environment, as seen in Figure 3.
Accelerating these tasks in hardware results in low power consumption and small size. Although the tasks are fixed in hardware, the accelerators remain configurable, letting you tune their functions to best meet your system needs.
Such integration and acceleration removes the need for a custom ISP or FPGAs, and frees up CPU cycles by handling computationally intensive imaging and vision tasks in hardware. For example, a single vision processing accelerator core can process up to eight 2-megapixel or two 8-megapixel cameras at 30 fps. A depth and motion processing accelerator core can perform stereo depth estimation at 80 megapixels per second and compute motion vectors at 150 megapixels per second.
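The two camera configurations quoted for the vision processing accelerator work out to the same aggregate pixel rate, which is why they are interchangeable. A minimal Python sketch makes the arithmetic explicit; the function name and the simplifying assumption that throughput is limited purely by aggregate pixels per second (with no per-stream overhead) are ours, not from the datasheet:

```python
def aggregate_mpix_per_s(num_cameras: int, megapixels: float, fps: int) -> float:
    """Aggregate pixel rate in megapixels per second.

    Assumes the accelerator budget is a pure pixels/second limit,
    with no per-stream overhead (a simplification for illustration).
    """
    return float(num_cameras * megapixels * fps)

# Vision processing accelerator: both quoted configurations reach
# the same aggregate rate of 480 MP/s.
print(aggregate_mpix_per_s(8, 2, 30))  # eight 2 MP cameras @ 30 fps -> 480.0
print(aggregate_mpix_per_s(2, 8, 30))  # two 8 MP cameras @ 30 fps  -> 480.0

# Depth and motion accelerator budgets quoted in the text:
STEREO_BUDGET_MPS = 80    # MP/s for stereo depth estimation
FLOW_BUDGET_MPS = 150     # MP/s for dense optical flow

# Example: one 2 MP stereo pair at 30 fps needs 60 MP/s, within budget.
print(aggregate_mpix_per_s(1, 2, 30) <= STEREO_BUDGET_MPS)  # True
```

The same helper can be used to check whether a proposed camera setup fits within each accelerator's pixel-rate budget before committing to a system design.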