Edge AI use cases
Explore use cases for your edge AI application in perception, real-time monitoring and control, or audio
Use cases
Add embedded intelligence to your design using TI edge AI software demos and examples for perception, real-time monitoring and control, and audio applications.
Skip to a specific use case on this page:
Perception
Defect detection
Defect detection uses sensors or AI to find flaws in materials, improving quality and efficiency
Defect detection is a crucial part of quality assurance in the manufacturing process. Vision AI makes defect detection even easier by quickly identifying incorrectly produced parts and materials as they move down a conveyor belt so that they can be removed automatically. An object tracker provides the precise coordinates of each unit for sorting and filtering, and a live video feed is displayed on screen for easy monitoring. Units are marked on screen with boxes identifying which parts are acceptable or defective. A dashboard shows live statistics: total products, defect percentage, production rate and a histogram of defect types.
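The dashboard statistics described above can be sketched as a simple aggregator over per-unit inspection results. This is an illustrative sketch only; the class and field names are hypothetical and not part of the TI SDK:

```python
from collections import Counter

class DefectDashboard:
    """Aggregates per-unit inspection results into live line statistics."""

    def __init__(self):
        self.total = 0
        self.defects = Counter()  # histogram of defect types seen so far

    def record(self, defect_type=None):
        """Record one inspected unit; defect_type is None for a good part."""
        self.total += 1
        if defect_type is not None:
            self.defects[defect_type] += 1

    def stats(self):
        """Return the live dashboard numbers: totals, defect %, histogram."""
        defective = sum(self.defects.values())
        pct = 100.0 * defective / self.total if self.total else 0.0
        return {"total": self.total,
                "defect_pct": pct,
                "histogram": dict(self.defects)}
```

In a real pipeline, each call to `record` would be driven by the object tracker as a unit leaves the camera's field of view.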
PROCESSOR-SDK-AM62A
People tracking
Real-time vision-based people tracking with statistical insights such as total visitors, current occupancy and visit duration. Also offers a heatmap highlighting frequently visited areas.
License plate recognition
Reads license plates from images, providing the license plate information automatically without human intervention.
People detection and face identification
Plumerai provides AI software for people detection, face identification and more. Target applications include security cameras, video doorbells, video conferencing, elder care and building automation.
Smart retail scanner
Vision processing for codeless food scanning, enabling automatic recognition of products and checkout of the user’s order.
Barcode reader
Vision-based barcode reader application for 1-D and 2-D codes. Uses deep learning and open source decoder software to serve as the main processor for high performance barcode readers and imagers.
Conferencing systems
Showcases the use of vision AI and audio keyword spotting for audio-visual conferencing systems. Audio commands control the area that the camera focuses on.
Gesture controlled HMI
Shows integration of a camera sensor and an mmWave radar sensor to control a building access HMI with face detection and gestures to unlock a PIN controlled entry.
Personal protective equipment detector
Vision-based object detection solution for recognizing specific types of personal protective equipment (PPE). Also supports a field trainable mode, allowing new types of PPE to be added.
Monocular depth estimation
Depth estimation using a single camera via a deep learning model. The MiDaS deep learning CNN gives relative depth information to distinguish people and objects from each other and backgrounds.
Human pose estimation
Uses vision AI to detect humans in an image and to localize their body joints. This can be used for health monitoring applications as well as for robotics training via demonstration.
6D pose estimation
Uses vision AI to estimate the 3D orientation and translation of objects in a given environment. Accurate estimation is key for robotics applications where manipulation of objects is needed.
Gesture recognition
Uses an mmWave radar sensor to detect and classify 9 different hand gestures for touchless human-machine interfaces.
Human vs non-human
Uses an mmWave radar sensor to detect, track and classify dynamic objects as human or non-human. This can be used to filter false detections caused by fans or pets in home appliance applications.
Surface classification
Uses an mmWave radar sensor to detect and classify a change in surface type in front of the sensor. This can be used for applications such as home robots or AGVs and AMRs.
Kick-to-open
Uses an mmWave radar sensor to detect kick motions for hands-free access to the trunk of a vehicle. Leverages a low-power mode to detect presence before attempting to classify the motion.
Real-time monitoring and control
Arc fault detection
Arc fault detection identifies dangerous electrical arcs to prevent fires and ensure safety
With edge AI-based arc fault detection, TI provides an optimized AI model and a complete reference solution that allow fast, reliable detection of arc faults with >99% accuracy per the UL 1699B test requirement.
The on-chip neural-network processing unit (NPU) provides 5 to 10 times faster AI model processing than a software implementation, allowing multi-channel arc fault detection and power conversion control to be handled by the same MCU.
TI provides a full development toolchain and SDKs that allow customers to rapidly complete all the steps of Edge AI solution development.
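As an illustration of the kind of signal heuristic arc fault detectors build on, the sketch below flags a series arc when enough consecutive AC half-cycles show elevated high-frequency energy. This is a generic textbook-style heuristic under assumed inputs, not TI's model; the function and parameter names are hypothetical, and a trained network would replace the fixed threshold:

```python
def detect_series_arc(hf_energy, threshold, trip_count):
    """Flag a series arc fault from per-half-cycle high-frequency energy.

    hf_energy:  high-frequency noise energy measured in each AC half-cycle
    threshold:  energy level considered anomalous for this load
    trip_count: consecutive anomalous half-cycles required before tripping
    """
    run = 0
    for energy in hf_energy:
        run = run + 1 if energy > threshold else 0  # reset on a clean half-cycle
        if run >= trip_count:
            return True  # sustained arc signature detected
    return False
```

Requiring several consecutive anomalous half-cycles is what keeps transient load noise (motor starts, dimmer switching) from tripping the detector.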
Arc fault detection analog front end (AFE) reference design
C2000WARE-DIGITALPOWER-SDK
Motor bearing fault detection
Motor bearing fault detection analyzes vibrations to identify faults early
Our edge AI-based motor bearing fault detection solution provides a quick path for customers to get started. It includes optimized AI models and preprocessing algorithms delivered as complete application software projects. The solution can achieve >99% fault detection accuracy with minimal or no false alarms.
In addition, the on-chip neural-network processing unit (NPU) can be leveraged to offload model execution, delivering low latency and allowing motor control to run on the same MCU.
TI provides a full development toolchain and SDKs that allow customers to rapidly complete all the steps of edge AI solution development.
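Preprocessing for bearing fault detection typically condenses each vibration frame into a few statistical features before the model runs. The sketch below computes three classic time-domain features; it is illustrative only, assuming raw accelerometer frames, and is not the SDK's actual preprocessing:

```python
import math

def vibration_features(window):
    """Classic time-domain features used to flag early bearing wear.

    window: one frame of accelerometer samples.
    Returns (rms, crest_factor, kurtosis); rising crest factor and
    kurtosis are common early indicators of localized bearing defects.
    """
    n = len(window)
    mean = sum(window) / n
    centered = [x - mean for x in window]          # remove DC offset
    rms = math.sqrt(sum(x * x for x in centered) / n)
    peak = max(abs(x) for x in centered)
    crest = peak / rms if rms else 0.0             # impulsiveness of the signal
    m2 = sum(x ** 2 for x in centered) / n
    m4 = sum(x ** 4 for x in centered) / n
    kurtosis = m4 / (m2 ** 2) if m2 else 0.0       # "peakedness" of the distribution
    return rms, crest, kurtosis
```

A healthy bearing produces a roughly sinusoidal vibration (kurtosis near 1.5, crest factor near 1.4); periodic impacts from a spalled race drive both numbers up well before the fault is audible.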
C2000WARE-MOTORCONTROL-SDK
Channel Sounding
High-accuracy, low-cost and secure ranging with Bluetooth® Channel Sounding
Our CC27xx 2.4 GHz wireless MCUs integrate a unique algorithm processing unit that performs Bluetooth Channel Sounding computations for use cases such as smart access and asset tracking, with more than 100 times greater computation performance and energy efficiency.
Bluetooth Channel Sounding technology uses phase-based ranging to enhance accuracy and the round-trip time of randomly modulated data packets to enhance security. Machine learning further improves both line-of-sight and non-line-of-sight performance.
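The core idea behind phase-based ranging can be illustrated with a short sketch: the phase accumulated over the round trip at tone frequency f is phi(f) = 2*pi*f*(2d/c), so the phase slope across two tones yields the distance. This is a simplified two-tone model for illustration, not the Channel Sounding procedure itself (which sweeps many tones and resolves phase ambiguity); the function name is hypothetical:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def phase_ranging_distance(delta_phase_rad, delta_freq_hz):
    """Estimate distance from the round-trip phase difference of two tones.

    Phase over the round trip at frequency f is phi(f) = 2*pi*f*(2d/c),
    so two tones spaced delta_f apart differ in phase by
    delta_phi = 4*pi*delta_f*d/c, giving d = c*delta_phi / (4*pi*delta_f).
    """
    return C * delta_phase_rad / (4 * math.pi * delta_freq_hz)
```

The wider the tone spacing, the more phase accumulates per meter, which is why sweeping tones across the full 2.4 GHz band improves ranging resolution.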
Audio
Wake and command
Activate and operate devices by recognizing specific wake words and commands
Leverage third-party solutions that enable quick and easy audio-based edge AI applications such as wake word and command detection. These can be used in applications such as electronic door locks, appliances and even automotive. Sensory's TrulyHandsfree-Micro library integrates seamlessly on CC27xx/CC35xx Cortex-M33 wireless MCUs and can be paired with Bluetooth LE, Matter or Zigbee. The application uses only a portion of the device's processing bandwidth: 40% for wake word detection and 27% for command detection. With Sensory's VoiceHub, customers can enable custom wake words and commands.
TI provides a full development toolchain and SDKs that allow customers to rapidly complete all the steps of edge AI solution development.