Fig 1: Generating Synthetic Data
crying or sleeping, to categorize the generated audio samples.

Fig 1.2: Prepared Dataset
Fig 2: Designing an Impulse in Edge Impulse
- Set the window size to 1000 ms (1 second), which defines the duration of audio analyzed per segment.
- Set the window increase to 300 ms to determine the overlap between consecutive segments, ensuring sufficient coverage of the audio signal.
- Set the frequency to 16000 Hz to match the sampling rate of your audio data.
- Add a processing block (MFCC) and link the Input Axes to audio.
- Add a learning block (Classifier) whose output features are the two labels, crying and sleeping.
.Fig 3: Configuration of MFCC Block
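To see what these windowing parameters imply in practice, the sketch below computes where each analysis window starts for a given clip. This is an illustrative helper, not part of the Edge Impulse SDK; it assumes the values above (1000 ms window, 300 ms window increase, 16000 Hz sample rate).

```python
# Illustrative sketch of how audio is sliced into overlapping windows.
# These constants mirror the impulse settings described above.
SAMPLE_RATE = 16000   # Hz
WINDOW_MS = 1000      # window size: 1 second per segment
INCREASE_MS = 300     # window increase: hop between segment starts

def window_starts(num_samples):
    """Return the sample offsets at which each analysis window begins."""
    win = SAMPLE_RATE * WINDOW_MS // 1000      # 16000 samples per window
    step = SAMPLE_RATE * INCREASE_MS // 1000   # 4800-sample hop
    return list(range(0, num_samples - win + 1, step))

# A 2-second clip yields four overlapping 1-second segments,
# starting at samples 0, 4800, 9600, and 14400.
print(window_starts(2 * SAMPLE_RATE))
```

Because the 300 ms increase is smaller than the 1000 ms window, consecutive segments overlap by 700 ms, which is what gives the model dense coverage of the audio signal.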
Fig 4: Generating Features for Our Data
Fig 5: Feature Visualization of the Dataset
Fig 6: Enabling Data Augmentation
Fig 7: Accuracy, Loss, and Confusion Matrix for the Model
Fig 8: Metrics for Classifier and Confusion Matrix
Fig 9: Deploying to Device using Edge Impulse
main branch. Below are the steps to set up the GitHub repository, configure the action, and define the necessary workflow to achieve this.

1. Create a .github/workflows directory in your repository. Inside this directory, create a YAML file for the workflow configuration.
2. Trigger the workflow on the push event to the main branch.
3. Run the job in an ubuntu-22.04 environment.
4. Use the edgeimpulse/build-deploy@v1 action to automate the build and deployment of your Edge Impulse model. Make sure to use your GitHub secrets for the project_id and api_key.
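The workflow described above can be sketched as a single YAML file in .github/workflows. This is a minimal sketch; the secret names EI_PROJECT_ID and EI_API_KEY are placeholders, so substitute whatever secret names you configured in your repository settings.

```yaml
# Sketch of a workflow file, e.g. .github/workflows/edge-impulse.yml
name: Build and deploy Edge Impulse model

on:
  push:
    branches: [main]   # run on every push to the main branch

jobs:
  build-deploy:
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v4
      - name: Build and deploy
        uses: edgeimpulse/build-deploy@v1
        with:
          # Secret names below are placeholders; use your own.
          project_id: ${{ secrets.EI_PROJECT_ID }}
          api_key: ${{ secrets.EI_API_KEY }}
```

Storing the project ID and API key as repository secrets keeps them out of the committed workflow file while still making them available to the action at run time.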
Fig 10: The Success Workflow Run From Actions