SPRAD74 March 2023 AM62A3, AM62A3-Q1, AM62A7, AM62A7-Q1
TI has put significant effort into simplifying Edge AI development and evaluation on processors, such as the AM62A, that contain dedicated hardware accelerators for Edge AI [2].
As described in the referenced E2E blog post, TI provides tools to help developers select a model, train or refine it, evaluate it, and deploy it to the processor with minimal added code complexity. Developers can invoke the deep learning accelerator by adding only a few extra lines around standard TensorFlow Lite (TFLite), ONNX, or TVM-DLR API calls.
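The "few extra lines" pattern can be sketched as follows for the TFLite runtime. This is a sketch, not a verbatim TI example: it assumes TI's TIDL delegate library (`libtidl_tfl_delegate.so`) and an `artifacts_folder` option pointing at the output of TI's offline model-compilation step; the helper function names are illustrative.

```python
def tidl_delegate_options(artifacts_dir):
    # Options passed to the TIDL delegate; the artifacts folder holds the
    # compiled model artifacts produced by TI's offline compilation step.
    return {"artifacts_folder": artifacts_dir}

def make_interpreter(model_path, artifacts_dir, accelerate=True):
    # Imported inside the function so this module also loads on hosts
    # without tflite_runtime installed.
    import tflite_runtime.interpreter as tflite

    delegates = []
    if accelerate:
        # The only change versus plain TFLite inference: load the TIDL
        # delegate so supported layers run on the C7xMMA accelerator
        # instead of the Arm cores.
        delegates.append(
            tflite.load_delegate("libtidl_tfl_delegate.so",
                                 tidl_delegate_options(artifacts_dir)))
    return tflite.Interpreter(model_path=model_path,
                              experimental_delegates=delegates)
```

With `accelerate=False` the same code falls back to CPU-only inference, which illustrates how small the delta between accelerated and unaccelerated code paths is.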
TI's Edge AI Out-of-Box demos in Linux, the dominant operating system for Edge AI applications, further accelerate development of C++ and Python-based applications. These demos take a trained neural network model and an input-output description, then run the model with full acceleration from both the C7xMMA and the ISP in a sample end-to-end application. For example, a developer can choose a MobileNetV2SSD trained on the COCO dataset from the Texas Instruments Model Zoo as the model, a stored video file as the input, and the HDMI display as the output medium.
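Such an input-output description might look like the following configuration fragment. The keys follow the general shape of TI's edgeai-gst-apps demo configs, but the exact key names and paths shown here are illustrative placeholders, not verbatim values:

```yaml
inputs:
  input0:
    source: /opt/edgeai-test-data/videos/sample.mp4   # stored video file as input
models:
  model0:
    model_path: /opt/model_zoo/mobilenetv2-ssd-coco   # placeholder model directory
outputs:
  output0:
    sink: kmssink                                     # HDMI display as output
flows:
  flow0: [input0, model0, output0]                    # wire input -> model -> output
```

Swapping the model, input, or output then amounts to editing one section of the configuration rather than changing application code.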
These demos are built using GStreamer to efficiently pipeline image capture, preprocessing, deep learning inference, postprocessing, and further application-specific software, including H.264/H.265 encode. TI's custom GStreamer plugins reduce overhead through zero-copy buffering, saving RAM/DDR bandwidth. In addition to GStreamer and the open-source runtimes (TFLite, ONNX, TVM), OpenCV is enabled in TI's default Linux builds to help developers perform computer vision operations not directly supported by the hardware accelerators.
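The staged dataflow described above can be sketched as a GStreamer launch string. The TI-specific element names below (`tiovxmultiscaler`, `tidlinferer`, `tidlpostproc`) and their properties are assumptions about TI's custom plugins; verify them with `gst-inspect-1.0` on the target before use.

```python
def build_pipeline(video_file, model_dir):
    # Stages mirror the demo dataflow: decode -> scale/preprocess ->
    # inference -> postprocess/overlay -> display. The TI-specific
    # element names are assumed, not verified.
    stages = [
        f"filesrc location={video_file} ! decodebin",
        "tiovxmultiscaler",                # hardware-assisted scaling (assumed name)
        f"tidlinferer model={model_dir}",  # C7xMMA inference (assumed name/property)
        "tidlpostproc",                    # draw results on frames (assumed name)
        "kmssink",                         # HDMI display
    ]
    return " ! ".join(stages)

pipeline = build_pipeline("sample.mp4", "/opt/model_zoo/model0")
```

On the target, a string like this could be passed to `gst-launch-1.0` or `Gst.parse_launch()`; the demos assemble an equivalent pipeline programmatically so that buffers flow between stages without copies.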
For users outside a Linux environment, the hardware accelerators are exposed via TI's implementation of the OpenVX standard, TIOVX.