SPRACZ2 August 2022 TDA4VM, TDA4VM-Q1
Any neural network or deep learning model involves two aspects: training and inference. Training uses a set of training data to fit the model's parameters. Once training is complete, the model can be used for inference on new input data. Typically, training is done once or a few times for a given product; inference, on the other hand, runs continuously in a given edge AI system.
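The train-once, infer-many pattern can be sketched with a toy model (an illustrative example, not TI's tooling or workflow; the linear model and gradient-descent settings are assumptions for demonstration only):

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Training: done once (or a few times), typically offline ---
X = rng.normal(size=(100, 3))          # training data (synthetic)
true_w = np.array([2.0, -1.0, 0.5])    # ground-truth weights for this toy
y = X @ true_w

w = np.zeros(3)
for _ in range(500):                   # gradient descent on mean squared error
    grad = 2 * X.T @ (X @ w - y) / len(X)
    w -= 0.1 * grad

# --- Inference: runs continuously on new data at the edge ---
def infer(x):
    """Apply the frozen, trained model to a new input."""
    return x @ w

for _ in range(3):                     # stands in for an endless sensor loop
    sample = rng.normal(size=3)
    print(infer(sample))
```

The training loop is expensive but runs once; the inference path is a single cheap function applied to every new input, which is why it is the part worth optimizing on an embedded device.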
Figure 2-1 shows the difference between the two steps. This is why the inference process must be optimized for high performance and high energy efficiency in any embedded edge AI device.
Training a deep learning model typically requires a very high-TOPS processing engine, but for most low- to mid-range edge AI inference applications, the required compute is in the range of 1 to 32 TOPS. This is the segment the TDA4x processor family targets.
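A back-of-envelope estimate shows how an inference workload maps onto a TOPS budget (the model size and frame rate below are assumed, illustrative figures, not values from this document):

```python
# Required compute = operations per inference x inferences per second.
# One MAC (multiply-accumulate) counts as 2 operations.
macs_per_inference = 4.1e9          # assumed: a ResNet-50-class model, ~4.1 GMACs
ops_per_inference = 2 * macs_per_inference
fps = 30                            # assumed: one 30 frames-per-second camera stream

required_tops = ops_per_inference * fps / 1e12
print(f"{required_tops:.3f} TOPS")  # ~0.25 TOPS for a single stream
```

A single stream of this size needs only a fraction of a TOPS; running several camera streams or larger models concurrently is what pushes a system into the 1 to 32 TOPS range cited above.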