Note: Be sure to replace the username in Step 8 with your own username. If successful, the `akida devices` command should return:
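A representative listing, assuming the PCIe-attached Akida NSoC is detected (the exact wording can vary with the Akida SDK version), looks like:

```
Available devices:
PCIe/NSoC_v2/0
```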
Next, we need to copy some of our test images to the device (using the `scp` command, FTP, etc.) in order to evaluate how our model performs on the hardware.
In a terminal, we’ll continue where we left off above. With the images copied over to a USB stick and inserted into the Akida Developer Kit, the following series of commands will copy the images onto the device and use the example Python from the Linux SDK to run inference:
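The exact commands depend on your setup; a minimal sketch, assuming the USB stick is mounted at `/media/usb`, the Edge Impulse Linux Python SDK's `classify-image.py` example is on the device, and the model was downloaded by the runner to its default location (the usernames, paths, and project/version numbers below are all placeholders):

```bash
# Copy the validation images from the USB stick into the home directory
# (the /media/usb mount point is an assumption -- check with lsblk or df -h)
mkdir -p /home/<username>/validation
cp -r /media/usb/validation/* /home/<username>/validation/

# Alternatively, copy them over the network from a PC using scp:
#   scp -r ./validation <username>@<device-ip>:/home/<username>/

# Classify a single image with the Linux SDK's example script, pointing it
# at the model file the runner downloaded
python3 classify-image.py \
    /home/<username>/.ei-linux-runner/models/<project-number>/<version>/model.eim \
    /home/<username>/validation/NORMAL-01.jpeg
```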
Note: Once again, the username needs to be substituted with your own. The project number and version number can be obtained by simply `ls`'ing the `models` directory.
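For example, assuming the runner's default download location under the home directory:

```bash
# List downloaded projects, then the versions within a project
ls ~/.ei-linux-runner/models
ls ~/.ei-linux-runner/models/<project-number>
```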
The `edge-impulse-linux-runner` command in Step 4 above is used to connect the Akida Developer Kit to Edge Impulse, log in with your credentials, select a project, and download your model to the device. Once that is complete, inference will attempt to begin, but you can cancel the running process with Control+C; the model has been downloaded, which is what we are interested in. Continuing on with Step 5 and Step 6 will run the inference and display the results, the time it took to process, and the power consumption. You can iterate through all of the images in the `validation` folder you created (which should contain some Normal and some Pneumonia images), as sketched below.
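A hypothetical loop for that iteration (the model path and the `.jpeg` extension are assumptions; adjust them to match your files):

```bash
# Classify every image in the validation folder, one at a time
for img in /home/<username>/validation/*.jpeg; do
    python3 classify-image.py \
        /home/<username>/.ei-linux-runner/models/<project-number>/<version>/model.eim \
        "$img"
done
```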
To try live inference with a camera instead, run `edge-impulse-linux-runner` again on the command line.
Inference will start running continuously, printing results to the console. An extra feature of the Linux Runner is that it also starts an HTTP service, which can be accessed at `http://<IP-address-of-the-device>:4912` (the IP will be displayed in the text printed out as the application begins, or just run `ip a` to find it).
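For example (the IP address shown is illustrative):

```bash
# Start live inference with the attached camera
edge-impulse-linux-runner

# The startup output includes the URL of the live view, similar to:
#   Want to see a feed of the camera and live classification in your browser?
#   Go to http://192.168.1.42:4912
# If you missed it, look up the device's IP address directly:
ip a
```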
Then, in a browser on the PC or laptop, open that URL and you will see the view from the camera along with its inference results. You might need to arrange your windows or move the camera so that it sees only the x-ray; otherwise, classification will not work.
However, as identified earlier, this method may not be as reliable for the x-ray classification use case, due to the lighting conditions of the room, the brightness and contrast of the monitor, the quality of the USB webcam, the resolution and size of the monitor, and so on. It is worth exploring though, as many vision projects are excellent candidates for live inferencing with a camera and the Akida NPU.