Object detection takes an image and outputs the class, count, and position of objects in the image (and, with some architectures, their size). Edge Impulse provides several object detection model architectures built into the platform, and you can also bring your own architecture using a custom learning block. The built-in options are:
| Specification | YOLO-Pro | FOMO | MobileNetV2 SSD FPN |
|---|---|---|---|
| Labelling method | Bounding boxes | Bounding boxes | Bounding boxes |
| Input image size | Multiples of 32 (square) | Any (square) | 320x320 |
| Input image colour | RGB | Greyscale or RGB | RGB |
| Output format | Bounding boxes | Centroids | Bounding boxes |
| MCU inference | ✅ | ✅ | ❌ |
| CPU/GPU inference | ✅ | ✅ | ✅ |
| Limitations | Stronger performance at int8 precision than float32. | Objects should have similar sizes and shapes.<br>Objects should not be too close to each other.<br>Object size not available. | Objects should be large relative to the image.<br>Models use high compute resources (in the edge computing world).<br>Input image size is fixed. |
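To make two of the table's constraints concrete, here is a small illustrative sketch (plain Python, not the Edge Impulse SDK; the function names are hypothetical). It shows why FOMO's centroid output discards object size, and how to snap a requested side length to the nearest multiple of 32 as YOLO-Pro's square-input rule requires.

```python
# Hypothetical helpers illustrating the table's constraints; not Edge Impulse API calls.

def bbox_to_centroid(x, y, w, h):
    """Reduce a bounding box (top-left x/y, width, height) to its centre point.
    This is effectively what a centroid output like FOMO's gives you: position
    only, with the width and height information gone."""
    return (x + w / 2, y + h / 2)

def nearest_valid_yolo_size(requested):
    """YOLO-Pro expects square inputs whose side length is a multiple of 32;
    round the requested side length to the nearest valid value (minimum 32)."""
    return max(32, round(requested / 32) * 32)

print(bbox_to_centroid(10, 20, 40, 60))  # (30.0, 50.0) -- size (40x60) is lost
print(nearest_valid_yolo_size(300))      # 288, the closest multiple of 32
```

In contrast, MobileNetV2 SSD FPN accepts only a fixed 320x320 input, so there is nothing to round: images are simply resized to that resolution before inference.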