SPRACV1B February 2022 – January 2024 AM2434 , AM6411 , AM6412 , AM6421 , AM6441 , AM6442
The AM64x SDK integrates open source TensorFlow Lite for deep learning inference at the edge. AM64x does not specifically target real-time image processing, but it can execute machine learning inference for some edge applications. The examples below are runs of TensorFlow Lite image classification models (224 x 224 pixels, 3 bytes per pixel for color) based on the ImageNet database and its 1000 object classes. The example image of Rear Admiral Grace Hopper is installed in the file system. The example label_image program crops and resizes the BMP image to 224 x 224 pixels before calling TensorFlow Lite. The inference time benchmark for the quantization-aware trained Mobilenetv1 network (mobilenet_v1_1.0_224_quant.tflite) is 280 milliseconds. The example run shown below can also be found in the AM64x Linux SDK (in folder /usr/share/tensorflow-lite/examples):
root@am6x-evm:/usr/share/tensorflow-lite-1.15/examples# ./label_image -i grace_hopper.bmp -l labels.txt -m mobilenet_v1_1.0_224_quant.tflite
Loaded model mobilenet_v1_1.0_224_quant.tflite
resolved reporter
invoked
average time: 280.587 ms
0.780392: 653 military uniform
0.105882: 907 Windsor tie
0.0156863: 458 bow tie
0.0117647: 466 bulletproof vest
0.00784314: 835 suit
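The label_image run above can be approximated in Python with the TFLite interpreter API. This is a hedged sketch, not the label_image source: it assumes the tflite_runtime package (or the interpreter from a full TensorFlow install) is available on the target, and that the image has already been preprocessed to the model's 224 x 224 x 3 input shape.

```python
import time
import numpy as np

def top_k(scores, k=5):
    """Return (class_index, score) pairs for the k highest scores, descending."""
    idx = np.argsort(scores)[::-1][:k]
    return [(int(i), float(scores[i])) for i in idx]

def classify(model_path, image, runs=10):
    """Run a .tflite classifier on one preprocessed image and time the invoke.

    `image` must already match the model input shape, e.g. (1, 224, 224, 3).
    The import is deferred so top_k() works without tflite_runtime installed.
    """
    from tflite_runtime.interpreter import Interpreter
    interp = Interpreter(model_path=model_path)
    interp.allocate_tensors()
    inp = interp.get_input_details()[0]
    out = interp.get_output_details()[0]
    interp.set_tensor(inp["index"], image.astype(inp["dtype"]))
    start = time.time()
    for _ in range(runs):
        interp.invoke()          # average over several runs, as label_image does
    avg_ms = (time.time() - start) * 1000.0 / runs
    scores = interp.get_tensor(out["index"])[0]
    return avg_ms, top_k(scores)
```

On the target this would be called as, for example, `classify("mobilenet_v1_1.0_224_quant.tflite", image)`; the returned average time corresponds to the "average time" line in the console output.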
The inference time for the floating-point Mobilenetv2 model on an image of the exact same resolution (224 x 224 x 3) is 362 milliseconds. The console command and printout are shown below:
root@am6x-evm:/usr/share/tensorflow-lite-1.15/examples# ./label_image -i grace_hopper.bmp -l labels.txt -m test/mobilenet_v2_1.0_224.tflite
Loaded model test/mobilenet_v2_1.0_224.tflite
resolved reporter
invoked
average time: 362.22 ms
0.911345: 653 military uniform
0.014466: 835 suit
0.0062473: 440 bearskin
0.00296661: 907 Windsor tie
0.00269019: 753 racket
root@am6x-evm:/usr/share/tensorflow-lite-1.15/examples#
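As noted above, label_image resizes the input BMP to the model's 224 x 224 input before inference. As an illustration of that preprocessing step, here is a minimal nearest-neighbor resize in NumPy; the actual label_image example uses its own bilinear resampling, so this is an assumption-laden stand-in, not the SDK code.

```python
import numpy as np

def resize_nearest(img, out_h=224, out_w=224):
    """Nearest-neighbor resize of an HxWx3 uint8 image.

    Illustrative only: label_image itself resamples bilinearly.
    """
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return img[rows][:, cols]
```

The resized array is then batched to (1, 224, 224, 3) and written to the interpreter's input tensor.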
The numbers printed in the console below the inference time are the top-5 classification results: a confidence score between 0 and 1 and the class ID out of the 1000 classes in the ImageNet labels.txt. The accuracy result is a benchmark of the model and input image, not of the device running the inference.
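For the quantized model, the raw output tensor is uint8, and the printed confidences are those raw scores normalized to [0, 1] by dividing by 255; this is consistent with the values above (199/255 ≈ 0.780392 for "military uniform"). A sketch of that post-processing step, assuming this common uint8 output scheme:

```python
import numpy as np

def top5_from_uint8(scores_u8, labels):
    """Normalize raw uint8 scores to [0, 1] and return the top-5 results
    as (confidence, class_id, label) tuples, highest confidence first."""
    probs = scores_u8.astype(np.float32) / 255.0
    idx = np.argsort(probs)[::-1][:5]
    return [(float(probs[i]), int(i), labels[i]) for i in idx]
```

For the floating-point Mobilenetv2 model the output tensor already contains float scores, so no normalization is needed before ranking.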
All .tflite models will run on AM64x; a quantized Mobilenetv1 and a floating-point Mobilenetv2 were chosen as common benchmarks that can be used to estimate the performance of an inference application. The quantized Mobilenetv1 is included in the file system; the Mobilenetv2 was downloaded from the hosted models page at tensorflow.org.