How does Model Analyzer work
Evaluate embedded deep learning inference at no cost. Our easy-to-use software development environment lets you:
- Log in and connect to an 8 TOPS Jacinto™ TDA4VM processor evaluation module using only your web browser
- Compile and deploy a deep learning model easily using industry standard runtime engines
- Accelerate deep learning inference on an embedded processor with no hand-tooling
Resources
Hello. In this video, we will first briefly describe how the TI Edge AI Cloud service works, and then walk you through how to get started with evaluating deep-learning models on TI processors via this service. We developed this service to simplify evaluating deep-learning inference on TI Edge AI processors.
In a nutshell, the service facilitates a connection between an EVM and a user via the cloud for some period of time, so that multiple users can use the service simultaneously and no one needs to buy an EVM. Once the connection is established, the user can evaluate the deep-learning inference capabilities of TI SoCs in a matter of minutes.
To get started, you need a trained deep-learning model. One option is to use one of the many pre-compiled models from our Model Zoo. These include popular models such as MobileNet, Inception, ResNet, et cetera. The second option is to start with your own pre-trained model.
Now, once you've selected a model, the next step is compiling and optimizing the model for a TI embedded processor. If you chose a model from the Model Zoo, you can skip this step and instead load the pre-compiled model and directly move to the deployment step. However, if you chose to go with a custom model, you will first need to compile your model for TI hardware.
With this service, models can be compiled using one of three industry-standard popular open-source runtime engines: TensorFlow Lite, TVM with Neo-AI-DLR, or ONNX Runtime. The reason we enable three options is to facilitate flexibility in the choice of training frameworks, and also to ensure ease of use.
The TI tools underneath the runtime engines take care of compiling the model for a TI processor. It is important to note here that the compile step includes accelerating the model via the neural network accelerator on the SoC. If you're interested in learning more on the compilation step, please visit the link below.
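To make the compile step concrete, here is a minimal sketch using the ONNX Runtime option. The provider names and option keys below are assumptions modeled on TI's public edgeai-tidl-tools examples, not an exact API, and actually running the compilation requires the TI SDK runtime on the cloud EVM:

```python
# Illustrative sketch of the compile step via ONNX Runtime.
# Provider names and option keys are assumptions, not a spec.

def make_compile_options(artifacts_dir, num_calib_frames=20):
    """Build provider options that steer offload to the TI accelerator."""
    return {
        "artifacts_folder": artifacts_dir,  # where compiled artifacts land
        "tensor_bits": 8,                   # quantize to 8-bit for the NPU
        "advanced_options:calibration_frames": num_calib_frames,
    }

def compile_model(model_path, options):
    """Create a session with the (assumed) TI compilation provider.
    Feeding calibration images through this session produces the
    deployable artifacts. Only runs inside the TI cloud environment."""
    import onnxruntime as ort
    return ort.InferenceSession(
        model_path,
        providers=["TIDLCompilationProvider", "CPUExecutionProvider"],
        provider_options=[options, {}],
    )

opts = make_compile_options("./model-artifacts/mobilenet")
```

The same pattern applies to the TFLite and TVM paths: you hand the runtime engine a set of TI-specific options, and the TI tools underneath produce accelerator-ready artifacts.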
Now, once the compile step is complete, using the same runtime engine as before, you can deploy the model on a TI Edge AI processor to run hardware-accelerated inference. It is important to note here that inference is executed on an actual farm of TI processor EVMs, not an emulation environment. In addition, with the TI Edge AI Cloud service, both the compile and deploy steps are performed using a Jupyter Notebook running in your browser.
Now, to show you some examples, I will take you to the page you land on after you log in. As you can see here, we provide a few different methods for you to evaluate our Edge AI processors. The quickest way is to use the model selection tool. This is an interactive tool that allows you to learn the performance of models in the TI Model Zoo when executed on a TI processor. It will help you find a model that meets your performance and accuracy needs.
Your next option is to run models from the TI Model Zoo on TI hardware to get benchmarks. You can find examples of how to do that by visiting the section titled "Model Performance." In this section, you get access to several teaching example scripts that walk you through how to benchmark model performance. As you can see here, you first select the task and then the runtime engine. Afterwards, if you click Open Notebook, you will be taken to a notebook based on your selections.
In the example that loads, you first get to select a model from the Model Zoo. The notebook then walks you through loading pre-compiled model artifacts, running inference on the TI SoC, and finally, collecting benchmarks. This entire process takes less than five minutes. Also, since we are working with pre-compiled models, these notebooks skip the model compilation step.
Then we also provide an example script to run inference and collect accuracy benchmarks for models from the TI Model Zoo. You can access this notebook by visiting the "Model Accuracy" section. These examples will allow you to experience how easy it is to use our software programming environment, and will also allow you to evaluate the hardware capabilities of our processors.
Then you can move on to the next set of examples, where we walk you through the workflow for performing inference with your own models. The example notebooks for this task are located in the section "Custom Models." Of course, in these examples, in addition to the inference steps you saw in the previous set of examples, you also get to go through the model compilation step.
All these different scripts are meant to serve as a starting point, so you can create your own notebooks to evaluate different models and different task types. When you are ready for that, click on My Workspace to access your workspace. Once you are there, you can upload data and models and create new notebooks. Thank you for your attention.
This video is part of a series
TI Edge AI Cloud - embedded deep learning evaluation
video-playlist (6 videos)