Live classification

Live classification lets you validate your model with data captured directly from any device or supported development board. This gives you a picture of how your model will perform with real-world data. To get started, go to Live classification and connect the device or development board you want to capture data from.
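
Live classification runs in Studio, but if you prefer to script the same kind of check against real-world data, the sketch below classifies a single image locally with the Edge Impulse Linux Python SDK. It is a minimal sketch, assuming you have installed the edge_impulse_linux package and exported a .eim model from your project's Deployment page; the file names are placeholders.

```python
# Minimal sketch: classify one image against a locally exported model.
# Assumes the Edge Impulse Linux Python SDK (pip install edge_impulse_linux)
# and a .eim model file. "modelfile.eim" and "sample.jpg" are placeholders.
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

runner = ImageImpulseRunner("modelfile.eim")  # placeholder model path
try:
    model_info = runner.init()  # loads the model and returns project metadata
    print("Loaded model for project:", model_info["project"]["name"])

    img = cv2.imread("sample.jpg")              # placeholder image
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # the SDK expects RGB input

    # Featurize the frame the same way the impulse does, then classify.
    features, cropped = runner.get_features_from_image(img)
    result = runner.classify(features)

    # Object detection models report bounding boxes; each has a label,
    # a confidence value, and pixel coordinates.
    for bb in result["result"].get("bounding_boxes", []):
        print(f'{bb["label"]} ({bb["value"]:.2f}) at ({bb["x"]}, {bb["y"]})')
finally:
    runner.stop()
```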

Using a fully supported development board

All of your connected devices and sensors will appear under Devices, as shown below. Devices can be connected through the Edge Impulse CLI or WebUSB.

Using your mobile phone

To perform live classification using your phone, go to Devices, click Connect a new device, and select "Use your mobile phone". Scan the QR code with your phone, then click Switch to classification mode and start sampling.

Using your computer

To perform live classification using your computer, go to Devices, click Connect a new device, and select "Use your computer". Grant the requested permissions in your browser, then click Switch to classification mode and start sampling.

Working with object detection model architectures

Object detection models identify and locate objects within an image, providing class, quantity, position, and size information. Edge Impulse supports various object detection model architectures, each optimized for specific hardware targets and use cases. The following sections detail the key features and performance metrics of the supported object detection models.
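
To make those four pieces of information concrete, here is a small illustrative sketch of how a single detection might be represented in code. The field names are hypothetical, chosen for this example rather than taken from Edge Impulse's API.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Detection:
    """One detected object: its class, position, size, and confidence.
    Field names are illustrative, not a specific Edge Impulse API."""
    label: str         # class, e.g. "coffee"
    confidence: float  # 0.0 to 1.0
    x: int             # top-left corner, pixels
    y: int
    width: int         # bounding-box size, pixels
    height: int

detections = [
    Detection("coffee", 0.97, 24, 40, 60, 58),
    Detection("lamp",   0.91, 130, 12, 48, 96),
]

# "Quantity" is simply the number of detections per class.
print(Counter(d.label for d in detections))  # Counter({'coffee': 1, 'lamp': 1})
```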

Read on to learn more about the object detection models available in Edge Impulse:

MobileNetV2 SSD FPN: employs a Single Shot MultiBox Detector (SSD) with a MobileNetV2 backbone for object detection. This model is optimized for running on MCUs and CPUs.
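
As a rough sketch of the SSD idea, the snippet below decodes one predicted box from an anchor (prior) box using the standard SSD box coding. The anchor, offsets, and scale factors are made-up example values; a real model also predicts class scores and applies non-maximum suppression.

```python
import numpy as np

# Standard SSD box decoding: the network predicts offsets (ty, tx, th, tw)
# relative to a fixed anchor box; scale factors of 10 and 5 are the
# conventional SSD "variances". All values below are made-up examples.
def decode_ssd_box(anchor, offsets, scale_xy=10.0, scale_wh=5.0):
    acy, acx, ah, aw = anchor      # anchor center and size (normalized)
    ty, tx, th, tw = offsets       # raw network outputs for this anchor
    cy = ty / scale_xy * ah + acy  # shift the anchor's center...
    cx = tx / scale_xy * aw + acx
    h = ah * np.exp(th / scale_wh) # ...and rescale its height and width
    w = aw * np.exp(tw / scale_wh)
    # Return corner coordinates (ymin, xmin, ymax, xmax).
    return (cy - h / 2, cx - w / 2, cy + h / 2, cx + w / 2)

anchor = (0.5, 0.5, 0.2, 0.2)    # example anchor in the middle of the image
offsets = (0.4, -0.3, 0.8, 0.1)  # example predicted offsets
print(decode_ssd_box(anchor, offsets))
```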

Live classification result

This view is particularly useful for a direct comparison between the raw image and the model's interpretation. Each object detected in the image is highlighted with a bounding box. Alongside these boxes, you'll find labels and confidence scores, indicating what the model thinks each object is and how sure it is about its prediction. This mode is ideal for understanding the model's performance in terms of object localization and classification accuracy.

Overlay mode for the live classification result

In this view, bounding boxes are drawn around the detected objects, with labels and confidence scores displayed within the image context. This approach offers a clearer view of how the bounding boxes align with the objects in the image, making it easier to assess the precision of object localization. The overlay view is particularly useful for examining the model's ability to accurately detect and outline objects within a complex visual scene.
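
To reproduce a similar overlay outside Studio, a few lines of Pillow are enough. This is only a sketch: the image path and the detections below are placeholders, not output from a real model.

```python
from PIL import Image, ImageDraw

# Draw bounding boxes, labels, and confidence scores onto an image,
# mimicking the overlay view. "sample.jpg" and the detections are placeholders.
detections = [
    {"label": "coffee", "confidence": 0.97, "x": 24, "y": 40, "w": 60, "h": 58},
    {"label": "lamp",   "confidence": 0.91, "x": 130, "y": 12, "w": 48, "h": 96},
]

img = Image.open("sample.jpg").convert("RGB")
draw = ImageDraw.Draw(img)
for d in detections:
    x0, y0 = d["x"], d["y"]
    x1, y1 = x0 + d["w"], y0 + d["h"]
    draw.rectangle([x0, y0, x1, y1], outline="lime", width=2)
    draw.text((x0, max(0, y0 - 12)), f'{d["label"]} {d["confidence"]:.0%}', fill="lime")
img.save("overlay.jpg")
```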

Summary table

NAME: This field displays the name of the sample file analyzed by the model. For instance, 'sample.jpg.22l74u4f' is the file name in this case.

CATEGORY: Lists the types of objects that the model has been trained to detect. In this example, two categories are shown: 'coffee' and 'lamp'.

COUNT: Indicates the number of times each category was detected in the sample file. In this case, both 'coffee' and 'lamp' have a count of 1, meaning each object was detected once in the sample.

INFO: This column provides additional information about the model's performance. It displays the precision score, 95.00% in this example. This score is the mean Average Precision (mAP): the model's precision at making correct detections, averaged over a range of Intersection over Union (IoU) thresholds.
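
For intuition about how such a score is computed, the simplified sketch below measures IoU for matched prediction/ground-truth pairs and averages precision over the IoU thresholds 0.50 to 0.95. A real mAP computation also ranks predictions by confidence and averages over classes; the boxes here are made-up values.

```python
def iou(box_a, box_b):
    """Intersection over Union of two (xmin, ymin, xmax, ymax) boxes."""
    ix0, iy0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix1, iy1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Made-up (prediction, ground truth) pairs for one image.
pairs = [((24, 40, 84, 98), (20, 38, 82, 100)),      # "coffee"
         ((130, 12, 178, 108), (128, 10, 180, 110))] # "lamp"

# A prediction counts as correct at threshold t if IoU >= t; averaging
# precision across thresholds gives an mAP-style score.
thresholds = [0.50 + 0.05 * i for i in range(10)]
precisions = [sum(iou(p, g) >= t for p, g in pairs) / len(pairs) for t in thresholds]
print(f"mean precision over IoU 0.50:0.95 = {sum(precisions) / len(precisions):.2%}")
```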
