Project Demo
About this Project
This embedded machine learning project focuses on visual object detection that can detect and differentiate between cars and motorcycles. A camera positioned under the bicycle saddle captures the rear view, so it can replace the need for a rearview mirror.

Hardware Components:
- Raspberry Pi 4 Model B
- USB webcam
- Sense HAT for Raspberry Pi
- Powerbank / battery
Software/Apps & Online Services:
- Edge Impulse Studio
- Raspberry Pi OS
- Terminal
Others:
- 3D printed case for Pi4 with Sense HAT on bike’s top-tube
Steps
Preparation:
Prepare the Raspberry Pi, connect to it via SSH, then install the dependencies and the Edge Impulse for Linux CLI. Follow this guide for extra details.
Data Collection:
To simulate the rear view, I took photos of other vehicles and their surroundings from the back seat of a car. Be sure to take plenty of pictures so the dataset covers enough variety in vehicle types, colors, positions, and ambient conditions; a simple capture sketch is shown below.
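As a rough illustration of how such photos could be collected with the USB webcam on the Pi, the snippet below uses OpenCV to save a frame on a key press. The dataset folder and filenames are assumptions; photos from a phone camera or Edge Impulse Studio's data acquisition page work just as well.

```python
# Minimal capture sketch (assumes opencv-python is installed: pip3 install opencv-python).
# Saves a numbered JPEG to ./dataset/ each time "c" is pressed; "q" quits.
# Note: cv2.imshow needs a display; over plain SSH, save frames on a timer instead.
import os
import cv2

os.makedirs("dataset", exist_ok=True)      # hypothetical output folder
cap = cv2.VideoCapture(0)                  # 0 = first USB webcam

count = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("rearview", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("c"):                    # capture the current frame
        path = os.path.join("dataset", f"vehicle_{count:04d}.jpg")
        cv2.imwrite(path, frame)
        count += 1
    elif key == ord("q"):                  # quit
        break

cap.release()
cv2.destroyAllWindows()
```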

Data Labeling:
In Edge Impulse Studio, select "Bounding boxes" as the labeling method and choose Raspberry Pi 4 as the target device for latency calculations. Upload your images, then drag a box around each object and label it (car or motorcycle). Split the data between Training and Test sets, manually or with the auto-split, at roughly an 80/20 ratio.
Train and Build Model:
Create an impulse with a 320x320 pixel image size and RGB color depth, and choose the Image and Object Detection blocks. The MobileNetV2 SSD FPN-lite 320x320 model outputs the object type (car or motorcycle) with fairly accurate results, and, as with YOLO-style detectors, a bounding box is also obtained for each detection, so we can estimate how near or far the vehicles are. Once the model performs as desired during testing, it is ready to be deployed to the Raspberry Pi 4.
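For deployment, the Edge Impulse Linux Python SDK can run the downloaded .eim model directly on the Pi (e.g. fetched with `edge-impulse-linux-runner --download modelfile.eim`). The sketch below is a minimal example under that assumption; the model path and the use of bounding-box height as a rough "closeness" cue are illustrative, not the project's exact code.

```python
# Minimal inference sketch (assumes the Edge Impulse Linux Python SDK is installed:
# pip3 install edge_impulse_linux, and the model was downloaded as modelfile.eim).
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = "modelfile.eim"   # hypothetical path to the downloaded model

with ImageImpulseRunner(MODEL_PATH) as runner:
    runner.init()
    cap = cv2.VideoCapture(0)              # USB webcam mounted under the saddle
    ok, frame = cap.read()
    cap.release()
    if ok:
        img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        # Crop/resize to the impulse's 320x320 input and extract features.
        features, cropped = runner.get_features_from_image(img)
        res = runner.classify(features)
        for bb in res["result"].get("bounding_boxes", []):
            # Each box carries label, value (confidence), x, y, width, height.
            # A taller box roughly means the vehicle is closer (illustrative heuristic).
            closeness = bb["height"] / cropped.shape[0]
            print(f'{bb["label"]} {bb["value"]:.2f} closeness={closeness:.2f}')
```

Running this in a loop over webcam frames gives a continuous rear-view feed of detected cars and motorcycles with a coarse distance estimate per detection.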



