SPRUJ53B April 2024 – September 2024 TMS320F28P550SJ, TMS320F28P559SJ-Q1
The Neural-network Processing Unit (NPU) supports intelligent inferencing with pre-trained models. Capable of 600–1200 MOPS (Mega Operations Per Second), with example model support for arc fault detection and motor fault detection, the NPU provides up to a 10x Neural Network (NN) inferencing cycle improvement versus a software-only implementation. Models can be loaded and trained with tools from TI: the Model Composer GUI, or TI's command-line ModelMaker tool for an advanced set of capabilities. Both options automatically generate source code for the C28x, eliminating the need to write that code manually.
Figure 8-1 shows the toolchain and steps to add NPU support to a project: importing models or using existing models from TI, training the models, generating the associated software libraries, and integrating them into an existing Code Composer Studio™ IDE project.