SPRUJ59A April 2024 – September 2024 TMS320F28P550SJ , TMS320F28P559SJ-Q1
The Neural-network Processing Unit (NPU) accelerates inferencing of pre-trained models. Capable of 600–1200 MOPS (mega operations per second), the NPU provides up to a 10x neural network (NN) inferencing cycle improvement over a software-only implementation. Using TI-supplied tools, users can train and evaluate models, as well as acquire and visualize the data stream from the MCU. The trained model is then compiled into a standalone library that is added to the main project to utilize the NPU in a system.