Now that we've collected data, let's label it with the labeling queue.
Labeling multiple objects with the labeling queue. Note the dark borders on both sides of the image; these will be cut off during training, so you don't need to label objects located there.
Set the image width and height to 320, set the ‘resize mode’ to ‘Fit shortest axis’, and add the ‘Images’ and ‘Object Detection (Images)’ blocks. Then click Save impulse.
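To get a feel for what the ‘Fit shortest axis’ resize mode does to your images, here is a minimal sketch (using Pillow; the file name and the center-crop detail are illustrative assumptions, not part of the Edge Impulse tooling): the image is scaled so its shortest side becomes 320 pixels, and the excess on the longer side is cropped away, which is why the dark borders at the edges of the frame are cut off.

```python
# Minimal sketch of a "fit shortest axis" resize: scale the shortest side
# to 320 px, then center-crop the longer side to 320 x 320.
# File name is a placeholder.
from PIL import Image

TARGET = 320  # matches the impulse's image width/height

def fit_shortest_axis(path: str) -> Image.Image:
    img = Image.open(path).convert("RGB")
    w, h = img.size
    scale = TARGET / min(w, h)                       # shortest side -> 320 px
    img = img.resize((round(w * scale), round(h * scale)))
    w, h = img.size
    left, top = (w - TARGET) // 2, (h - TARGET) // 2
    return img.crop((left, top, left + TARGET, top + TARGET))  # center crop

preview = fit_shortest_axis("example.jpg")
print(preview.size)  # (320, 320)
```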
Designing an impulse
Configuring the processing block.
The feature explorer visualizing the data in the dataset. Clusters that separate well in the feature explorer will be easier for the machine learning model to learn.
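If you want to do a similar sanity check outside the Studio, the sketch below projects a feature matrix to two dimensions with PCA and plots it per class. The file names and array shapes are assumptions for illustration; Edge Impulse computes its own projection in the feature explorer.

```python
# Illustrative sketch: check how well classes separate by projecting
# exported features to 2D with PCA. Input files are assumed placeholders.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

features = np.load("features.npy")   # assumed shape: (n_samples, n_features)
labels = np.load("labels.npy")       # assumed shape: (n_samples,)

coords = PCA(n_components=2).fit_transform(features)
for label in np.unique(labels):
    mask = labels == label
    plt.scatter(coords[mask, 0], coords[mask, 1], label=str(label), s=8)
plt.legend()
plt.title("Well-separated clusters are easier to learn")
plt.show()
```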
A trained model showing the precision score. This is the COCO mean average precision score, which evaluates how well the predicted labels match your earlier labels.
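The core idea behind that score is intersection-over-union (IoU): a predicted box only counts as a correct detection when its overlap with a labeled box exceeds a threshold, and the average precision is computed across thresholds and classes. The snippet below is a small illustration of the IoU part only; the box format (x, y, width, height) and the example values are assumptions.

```python
# Sketch of the matching criterion used by mAP-style metrics: a prediction
# counts as a hit only if its IoU with a ground-truth box passes a threshold.
def iou(box_a, box_b):
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))   # overlap width
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))   # overlap height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

predicted = (40, 32, 100, 80)   # example predicted box
labeled = (48, 40, 96, 76)      # example ground-truth box
print(f"IoU = {iou(predicted, labeled):.2f}")  # above 0.5 counts as a hit at that threshold
```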
Live classification helps you determine how well your model works, showing the objects detected and the confidence score side by side.
Changing to overlay mode provides a more integrated view by superimposing the model's detections directly onto the original image.
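If you later want to reproduce this kind of overlay in your own scripts, the sketch below draws bounding boxes and confidence scores on top of an image with Pillow. The detection structure and file names are made up for illustration and do not come from the Edge Impulse output format.

```python
# Minimal sketch of an overlay: draw detected boxes and confidence scores
# onto the original image. Detection list and paths are placeholders.
from PIL import Image, ImageDraw

detections = [  # assumed structure: label, confidence, x, y, width, height
    {"label": "cup", "value": 0.92, "x": 60, "y": 48, "width": 90, "height": 110},
]

img = Image.open("example.jpg").convert("RGB")
draw = ImageDraw.Draw(img)
for d in detections:
    x, y, w, h = d["x"], d["y"], d["width"], d["height"]
    draw.rectangle([x, y, x + w, y + h], outline="lime", width=2)
    draw.text((x, max(0, y - 12)), f'{d["label"]} {d["value"]:.2f}', fill="lime")
img.save("overlay.jpg")
```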
This table summarizes the object detection model's performance metrics on a single sample file. Its layout and contents are as follows.
Run `edge-impulse-linux-runner`. This will build and download your model, and then run it on your development board. If you're on the same network, you can get a view of the camera and the classification results directly from your dev board. You'll see a line like:
Object detection model running on a Raspberry Pi 4
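If you would rather drive the model from your own code instead of the runner's built-in view, the downloaded .eim model file can also be used from the Edge Impulse Linux Python SDK. The sketch below follows the pattern of the SDK's image example; the model and image paths are placeholders, and method names may differ slightly between SDK versions.

```python
# Hedged sketch based on the Edge Impulse Linux Python SDK image example.
# Paths are placeholders; the .eim file is produced by edge-impulse-linux-runner.
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = "modelfile.eim"   # placeholder path to the downloaded model
IMAGE_PATH = "example.jpg"     # placeholder test image

with ImageImpulseRunner(MODEL_PATH) as runner:
    runner.init()                                             # load model metadata
    img = cv2.imread(IMAGE_PATH)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)                # SDK expects RGB input
    features, cropped = runner.get_features_from_image(img)   # resize/crop like the impulse
    result = runner.classify(features)
    for bb in result["result"].get("bounding_boxes", []):
        print(f'{bb["label"]} ({bb["value"]:.2f}) at '
              f'x={bb["x"]}, y={bb["y"]}, w={bb["width"]}, h={bb["height"]}')
```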