
Intro
Certain machines used by leather craftsmen involve continuous, repetitive tasks like hard pressing. Using incorrect posture while operating this equipment may cause musculoskeletal disorders affecting the muscles, tendons, ligaments, joints, peripheral nerves, and supporting blood vessels. During a training phase a supervisor can teach the correct posture, but once training is complete, constantly verifying and enforcing it could be expensive and impractical. Is it possible to develop an automatic system to constantly monitor a craftsman's posture? And is this problem a good fit for Machine Learning?

While heuristic programming works well for limited, known data and states, it won't work here: the posture at inference time will never exactly match the samples, and the clothing, background, and even the worker may differ from our training data. An algorithm that learns subtle patterns in pictures could be the solution, and that is exactly what Machine Learning does.

For this project I will use the SK-TDA4VM board from Texas Instruments, since it was designed for Edge AI vision systems and it has impressive features like multi-camera support, onboard machine learning accelerators, and a powerful dual-core Arm Cortex-A72 processor. What else will be required? A Logitech C270, C920 or C922 USB webcam, a 5V/3A USB-C power supply, a microSD card, an Ethernet cable, and an Edge Impulse account (free for developers).

TDA4VM Board Setup
- Download the latest TDA4VM image at https://www.ti.com/tool/download/PROCESSOR-SDK-LINUX-SK-TDA4VM
- Flash the image to the microSD card and place the card into the board.
- Connect the Logitech camera to a USB port and power on the board.
Rather than connecting a monitor, keyboard, etc., I have decided to use an Ethernet cable connected to the router, obtain the board's IP by checking the DHCP leases in the router admin page, and access it through SSH and SFTP.


TDA4VM Circuits and Connections
I connected the Logitech USB camera to the first USB port and the power supply to the rear USB-C connector (the other USB-C port is for UART over USB). I still had to connect the 1-channel relay that will be used to switch power to the theoretical machine being operated.

From the boot partition mounted at /run/media/mmcblk0p1, I downloaded uenv.txt over SFTP and added this line at the end:

Data Acquisition
For Machine Learning I will use Edge Impulse. The platform is free for developers and you can sign up for a new account at https://studio.edgeimpulse.com/signup. Note: for testing purposes you can skip the data acquisition phase, since you can clone my entire Edge Impulse project. Data is the foundation of any Machine Learning project, and in this case data means pictures. In the EI Dashboard I selected the correct target board for the performance calculations and Bounding Boxes as the labeling method.
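Besides the Studio uploader, pictures can also be pushed programmatically through the Edge Impulse ingestion API (the bounding boxes are then drawn in the Studio labeling queue). Here is a minimal sketch, assuming a placeholder API key and file name:

import requests

API_KEY = "ei_xxxxxxxx"   # placeholder: your project API key (Dashboard, Keys tab)
FILENAME = "press01.jpg"  # placeholder picture of the working posture

# upload one picture to the training set; boxes are labeled later in Studio
with open(FILENAME, "rb") as f:
    res = requests.post(
        "https://ingestion.edgeimpulse.com/api/training/files",
        headers={"x-api-key": API_KEY},
        files={"data": (FILENAME, f, "image/jpeg")},
    )
print(res.status_code, res.text)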



Model Training
With all the pictures uploaded and labeled, the next phase in the process is the Impulse Design. I clicked on Create Impulse and used the following parameters:
- Input block: Image Data, 96x96 pixels, resize mode Fit shortest axis (see the sketch after this list).
- Processing block: Image.
- Learning block: Object Detection.
You can read more about these blocks in the Edge Impulse documentation.

- I went to Image, Generate Features. Since raw data usually contains redundant information, this step extracts the relevant features used to detect patterns. It is also useful for visualizing how hard the feature classification will be.

- I went to Object Detection and configured 60 training cycles, a 0.001 learning rate, data augmentation, and the FOMO (Faster Objects, More Objects) model.
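As a side note, the Fit shortest axis resize mode scales the picture so its shortest side matches the target size and crops the rest. Here is a minimal Pillow sketch of that behavior, my own approximation rather than Edge Impulse's actual implementation ("sample.jpg" is a placeholder):

from PIL import Image

def resize_fit_shortest_axis(img, size=96):
    # scale so the shortest side equals `size`
    w, h = img.size
    scale = size / min(w, h)
    img = img.resize((round(w * scale), round(h * scale)))
    # center-crop the longer side down to `size`
    left = (img.width - size) // 2
    top = (img.height - size) // 2
    return img.crop((left, top, left + size, top + size))

print(resize_fit_shortest_axis(Image.open("sample.jpg")).size)  # (96, 96)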

Deployment
I went back to the TDA4VM board SSH connection and installed the Edge Impulse Linux package using this command:

npm install edge-impulse-linux -g --unsafe-perm

Then I downloaded and ran the model with:

edge-impulse-linux-runner

While the runner is working, open http://<deviceIP>:4912 in a browser. This will give you a preview of the camera stream.
After confirming that postures were detected, I stopped the running process.
Output Examples
The Edge Impulse Linux runner prints, for every frame, the inference time and the detected bounding boxes with their labels and confidence scores.

Parsing Script
At this point I have acquired the data, trained the model, and deployed it to the TDA4VM board, but the output is simply probabilistic data displayed on screen. Machine Learning projects usually have a "precondition" and an "action". The precondition is complete and working. But what can be done on the action side whenever an incorrect posture could harm the worker? I could parse the edge-impulse-linux-runner output, detect the "ok" and "notok" labels, and produce some action. What do I mean by "some action"? In my previous TDA4VM project I used Telegram to send notifications. In this case, I will try to turn off the machine whenever an incorrect posture is detected.
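In a simplified sketch (the complete logic lives in posture1.py, linked below), the idea is to spawn the runner as a subprocess and scan its per-frame output for the label names. The exact output format of edge-impulse-linux-runner can vary between versions, so this only looks for the label substrings:

import subprocess

# start the Edge Impulse runner and read its output line by line
proc = subprocess.Popen(
    ["edge-impulse-linux-runner"],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    text=True,
)
for line in proc.stdout:
    # check "notok" first, since it contains "ok" as a substring
    if "notok" in line:
        print("Incorrect posture detected, cutting machine power")
        # set_relay(False)  # see the GPIO sketch below
    elif "ok" in line:
        print("Posture ok")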
With a simple HIGH or LOW signal on the appropriate GPIO pin, the relay will enable or disable power to the machine.
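For example, the pin can be driven through the Linux sysfs GPIO interface. This is a sketch assuming sysfs GPIO is enabled in the SDK kernel and an active-high relay module; the GPIO number 36 is a placeholder, so check the SK-TDA4VM header pinout for the line your relay is actually wired to:

import os

GPIO_ROOT = "/sys/class/gpio"
PIN = "36"  # placeholder GPIO number, match it to your wiring

def setup_pin():
    # export the pin if the kernel has not exposed it yet
    if not os.path.isdir(f"{GPIO_ROOT}/gpio{PIN}"):
        with open(f"{GPIO_ROOT}/export", "w") as f:
            f.write(PIN)
    with open(f"{GPIO_ROOT}/gpio{PIN}/direction", "w") as f:
        f.write("out")

def set_relay(on):
    # HIGH closes the relay (machine powered), LOW opens it
    with open(f"{GPIO_ROOT}/gpio{PIN}/value", "w") as f:
        f.write("1" if on else "0")

setup_pin()
set_relay(True)  # power the machine while the posture is correct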
Download the posture1.py Python script from https://github.com/ronibandini/tda4vmPostureEnforcer, and then upload it with SFTP to /opt/edge_ai_apps on the TDA4VM.
Now execute the script using:

python3 posture1.py
