[Image: Devices page showing a mobile phone as a connected device]
Set the label to unknown and the sample length to 2 seconds. This indicates that you want to record 2 seconds of audio and label the recorded data as unknown. You can later edit these labels if needed.
After you click Start recording, the device will capture two seconds of audio and transmit it to Edge Impulse.
When the data has been uploaded, you will see a new line appear under ‘Collected data’ in the Data acquisition tab of your Edge Impulse project. You will also see the waveform of the audio in the ‘RAW DATA’ box. You can use the controls underneath to listen to the audio that was captured.
[Image: Audio waveform]
Set the window size to 1000 ms (you can click on the 1000 ms. text to enter an exact value), the window increase to 500 ms, and add the ‘Audio MFCC’ and ‘Classification (Keras)’ blocks. Then click Save impulse.
[Image: Impulse with processing and learning blocks]
[Image: The MFCC page]
[Image: Spectrogram of background noise]
[Image: Audio waveform and sample dropdown box]
[Image: The MFCC parameters box]
[Image: Running the feature generation process]
[Image: The NN Classifier page]
To tell the model testing tool what this sample should be classified as, click the ⋮ icon and select Edit expected outcome, then enter noise. Now, select the sample using the checkbox to the left of the table and click Classify selected.
You’ll see that the model’s accuracy has been rated based on the test data. Right now, this doesn’t give us much more information than just classifying the same sample in the Live classification tab. But if you build up a big, comprehensive set of test samples, you can use the Model testing tab to measure how your model is performing on real data.
Ideally, you’ll want to collect a test set that contains at least 25% of the amount of data in your training set. So, if you’ve collected 10 minutes of training data, you should collect at least 2.5 minutes of test data. You should make sure this test data represents a wide range of possible conditions, so that it evaluates how the model performs with many different types of inputs. For example, it’s a good idea to collect test audio for several different doorbells, perhaps recording the audio in a different room.
You can use the Data acquisition tab to manage your test data. Open the tab, and then click Test data at the top. Then, use the Record new data panel to capture a few minutes of test data, including audio for both background noise and doorbells. Make sure the samples are labelled correctly. Once you’re done, head back to the Model testing tab, select all the samples, and click Classify selected.
The screenshot shows classification results from a large number of test samples (there are more on the page than would fit in the screenshot). It’s normal for a model to perform less well on entirely fresh data.
For each test sample, the panel shows a breakdown of its individual performance. Samples that contain a lot of misclassifications are valuable, since they contain examples of the types of audio that our model does not yet handle well. It’s often worth adding these to your training data, which you can do by clicking the ⋮ icon and selecting Move to training set. If you do this, you should add some new test data to make up for the loss!
Testing your model helps confirm that it works in real life, and it’s something you should do after every change. However, if you often make tweaks to your model to try to improve its performance on the test dataset, your model may gradually start to overfit to the test dataset, and it will lose its value as a metric. To avoid this, continually add fresh data to your test dataset.
You can move a sample to the training set by clicking its ⋮ icon, then selecting Move to training set.

Try setting it to 200 and see if performance increases (the classified file is stored, and you can load it through ‘Classify existing validation sample’).

Open the project.properties file in the directory that you just downloaded and extracted in the section above, and make sure it specifies deviceOS@5.3.2 (or a later version). Then open main.cpp. In this minimal code example, inference is run from a static buffer of input feature data. To verify that our embedded model achieves the exact same results as the model trained in Studio, we want to copy the same input features from Studio into the static buffer in main.cpp.
To do this, first head back to Edge Impulse Studio and click on the Live classification tab. Follow this video for instructions.
In main.cpp, paste the raw features inside the static const float features[] definition, for example:
Once you have confirmed that the output matches the results from Studio, you can modify main.cpp to run classification on live data.