Connect a device from the Data acquisition menu. When using e.g. a mobile phone, choose Connect using WebUSB.
Depending on the board, you can choose different sensors or combinations of sensors. In this case I chose 96x96 as the image size when capturing images with the xG24 board, to avoid the need for resampling later. Choose Squash as Resize mode so that no data is lost through cropping. It might not have mattered in the end here, but since I used two completely different cameras (an Arducam and a mobile phone) with different aspect ratios, I wanted to avoid images from one camera being cropped while images from the other were not.
Choose Image as Processing block and Transfer Learning (Images) as Learning block. Transfer learning means that you'll use a pre-trained image classification model and only fine-tune it on your own data. This generally leads to good performance even with relatively small image datasets. Click Start training when you are ready to train the model.
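The idea behind transfer learning can be sketched in a few lines of plain Python (a toy illustration with made-up stand-ins, not the actual model Edge Impulse trains): a frozen "backbone" turns raw inputs into features learned elsewhere, and only a small classification head is trained on the new, small dataset.

```python
from math import exp

def frozen_backbone(x):
    """Stand-in for a pre-trained feature extractor: its 'weights' are
    frozen, so it just maps raw inputs to fixed, previously learned features."""
    return [x[0] + x[1], x[0] - x[1]]

# The only trainable part: a tiny logistic-regression head on top.
w, b = [0.0, 0.0], 0.0

def predict(x):
    f = frozen_backbone(x)
    z = w[0] * f[0] + w[1] * f[1] + b
    return 1.0 / (1.0 + exp(-z))

# A small labelled dataset -- transfer learning shines exactly when
# you have too little data to train a full model from scratch.
data = [([1.0, 0.5], 1), ([0.5, 1.0], 1),
        ([-1.0, -0.5], 0), ([-0.5, -1.0], 0)]

lr = 0.5
for _ in range(200):
    for x, y in data:
        f = frozen_backbone(x)
        g = predict(x) - y  # log-loss gradient at the head's output
        # Only the head parameters are updated; the backbone never changes.
        w[0] -= lr * g * f[0]
        w[1] -= lr * g * f[1]
        b -= lr * g

# After fine-tuning, the head separates the classes on the frozen features.
assert all((predict(x) > 0.5) == bool(y) for x, y in data)
```

In the real setting the backbone is a convolutional network pre-trained on a large image dataset, but the division of labour is the same: the fixed features do most of the work, so only a small head needs to learn from your data.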
Click Build to create the files to be deployed. Run edge-impulse-run-impulse --debug so that you can see a live picture and the inferencing result in a web browser. Note that this is the same picture that is used for inferencing, in this case 96x96 pixels, which explains the pixelation and unsharpness.