Hardware required for the project
Device IP Address
Built-in demo running
Verifying packages
As we are working with computer vision, we will need "opencv-python>=4.5.1.48", "PyAudio", "psutil", and "Flask".
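A minimal sketch of installing these dependencies with pip (assuming Python 3 and pip are already available on the device; on Ubuntu, PyAudio typically needs the PortAudio headers installed first):

```
# PyAudio usually requires the PortAudio development headers to build
sudo apt-get install portaudio19-dev

# Install the packages listed above (only opencv-python is pinned here)
pip3 install "opencv-python>=4.5.1.48" PyAudio psutil Flask
```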
New project creation
Dataset creation source
Raw image & PoseNet output
Taking at least 50 pictures of each class will let you create a sufficiently robust model.
Adding a Custom Block
Confusion matrix results
The trained model can be downloaded from the Edge Impulse project as an `.fbz` file.
Downloading the project model
Transfer it to the Akida Dev Kit using the `scp` command as follows:
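A sketch of the transfer, assuming the model was saved as `akida_model.fbz` and the dev kit uses the default `ubuntu` user; replace the IP placeholder with your device's address:

```
# Copy the model from your computer to the dev kit's home directory
scp akida_model.fbz ubuntu@<Akida Dev Kit IP>:/home/ubuntu
```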
You will be asked for your Linux machine login password. The model is now on the Akida Dev Kit local storage (`/home/ubuntu`), and you can verify it by listing the directory contents using `ls`.
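For example, on the dev kit:

```
ls /home/ubuntu    # akida_model.fbz should appear in the listing
```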
Move the model to the project directory with the following command:
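A sketch of the move, with the project directory left as a placeholder you should replace with your actual path:

```
# Replace <project-directory> with the actual project folder
mv /home/ubuntu/akida_model.fbz /home/ubuntu/<project-directory>/
```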
Project directory
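The exact invocation is not shown here, but based on the argument descriptions below it is presumably along these lines (a sketch, not the verified command):

```
python3 class-pose.py akida_model.fbz 0
```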
Here, `class-pose.py` is the project's main script to be run, `akida_model.fbz` is the Meta TF model name we downloaded from our Edge Impulse project, and `0` forces the script to use the first camera available.
Project running and printing the results
The inference results can also be monitored on a preview web page by opening a new `ssh` session and running the `make-page.py` script from the project directory:
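A sketch of that step, assuming a second `ssh` session into the dev kit and the same placeholder project path as before:

```
ssh ubuntu@<Akida Dev Kit IP>
cd <project-directory>    # replace with your actual project path
python3 make-page.py
```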
Preview Web Page script command
The model classifies among four classes: `AC`, `Light`, `Other`, and `TV`.
Project running | Inference results
Home Assistant is running on a separate Raspberry Pi. Once the integration is set up, we can send HTTP requests to it with the following format:
- URL: `http://<Raspberry Pi IP>:8123/api/services/google_assistant_sdk/send_text_command`
- Authorization header: `"Bearer"` followed by your access token
- Content-Type: `"application/json"`
- Body: `{"command":"turn on the light"}`
Remember to update the `url` and `auth` variables in the code with the respective values of your setup.
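A minimal sketch of such a request in Python; the `url` and `auth` names mirror the variables mentioned above, and the IP and token values are placeholders you must replace with your own:

```python
import requests

# Placeholders: replace with your Raspberry Pi's IP address and your
# Home Assistant long-lived access token
url = "http://<Raspberry Pi IP>:8123/api/services/google_assistant_sdk/send_text_command"
auth = "Bearer <long-lived access token>"

headers = {
    "Authorization": auth,
    "Content-Type": "application/json",
}
payload = {"command": "turn on the light"}

# POST the text command to the Google Assistant SDK integration
response = requests.post(url, headers=headers, json=payload)
print(response.status_code, response.text)
```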
Final project deployment