To run deep neural networks on embedded hardware, the networks must be optimized and converted into embedded-friendly formats. TI has converted or exported 100+ models from their original training frameworks (PyTorch, TensorFlow, and MXNet) into these embedded-friendly formats and hosts them in a public GitHub repository(3). In this process, TI also verifies that these models provide optimized inference speed on TI's embedded processors. These models give customers a good starting point for exploring high-performance deep learning on TI's embedded processors.
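As an illustration of this export step, the following minimal sketch shows one common way a trained PyTorch model can be converted to ONNX, one of the embedded-friendly formats referred to above. The model choice, input shape, file name, and opset version are assumptions made for the example and are not the exact flow used to build the model zoo.

# Minimal sketch: exporting a PyTorch model to ONNX with torch.onnx.export.
# The model (mobilenet_v2), input shape, and output file name are illustrative
# assumptions, not the specific settings used by TI's model zoo.
import torch
import torchvision

# In practice a trained model would be loaded; weights are omitted here for brevity.
model = torchvision.models.mobilenet_v2(weights=None)
model.eval()

# A dummy input fixes the input shape recorded in the exported ONNX graph.
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy_input,
    "mobilenet_v2.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)

The resulting .onnx file can then be taken through compilation and quantization tooling for deployment on the embedded target.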