Devices tab with the device connected to the remote management interface.
Recording your keyword from the Studio.
Note: Data collection from a development board can be slow; you can use your mobile phone as a sensor to make this much faster. Afterwards you have a file like this, clearly showing your keywords, separated by some noise.
10 seconds of 'Hello world' data
Click the ⋮ icon next to your sample, and select Split sample.
'Split sample' automatically cuts out the interesting parts of an audio file.
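Splitting happens inside the Studio, but the underlying idea is a simple energy-threshold segmentation: frames that are much louder than the background are "interesting", and adjacent loud frames are merged into one segment. A minimal sketch of that idea (not Edge Impulse's actual implementation; frame length and threshold are illustrative), assuming a 1-D numpy signal:

```python
import numpy as np

def split_sample(signal, frame_len=400, threshold=0.1, min_frames=5):
    """Return (start, end) sample indices of high-energy segments.

    Frames whose RMS energy exceeds `threshold` times the loudest
    frame's energy count as active; runs of at least `min_frames`
    active frames become one segment.
    """
    n = len(signal) // frame_len
    frames = signal[: n * frame_len].reshape(n, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    active = rms > threshold * rms.max()

    segments, start = [], None
    for i, is_active in enumerate(active):
        if is_active and start is None:
            start = i                      # segment begins
        elif not is_active and start is not None:
            if i - start >= min_frames:    # long enough to keep
                segments.append((start * frame_len, i * frame_len))
            start = None
    if start is not None and n - start >= min_frames:
        segments.append((start * frame_len, n * frame_len))
    return segments

# A second of silence with a 440 Hz tone burst in the middle:
sig = np.zeros(16000)
sig[4000:8000] = np.sin(2 * np.pi * 440 * np.arange(4000) / 16000)
print(split_sample(sig))  # one segment around samples 4000..8000
```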
Importing the noise and unknown data into your project
Training data, showing an even split between the three classes
Testing data, also showing an even split between the three classes
An impulse to classify human speech
MFCC block looking at an audio file
MFCC Spectrogram for 'Hello world'
MFCC Spectrogram for 'On'
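The MFCC block turns raw audio into the compact spectrograms shown above. The standard pipeline is: frame the signal, take the power spectrum, apply a triangular mel filterbank, take the log, and decorrelate with a DCT. A rough numpy sketch of that pipeline — the parameters here are illustrative, not the Studio's defaults:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_mels=40, n_ceps=13):
    # 1. Frame the signal and apply a Hamming window
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(frame_len)

    # 2. Power spectrum via the FFT
    n_fft = 512
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2

    # 3. Triangular mel filterbank, spaced evenly on the mel scale
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        left, center, right = bins[i], bins[i + 1], bins[i + 2]
        if center > left:
            fbank[i, left:center] = (np.arange(left, center) - left) / (center - left)
        if right > center:
            fbank[i, center:right] = (right - np.arange(center, right)) / (right - center)

    # 4. Log mel energies, then DCT-II, keeping the first n_ceps coefficients
    log_mel = np.log(power @ fbank.T + 1e-10)
    k = np.arange(n_mels)
    basis = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * k + 1) / (2 * n_mels)))
    return log_mel @ basis.T  # shape: (n_frames, n_ceps)

coeffs = mfcc(np.sin(2 * np.pi * 440 * np.arange(16000) / 16000))
print(coeffs.shape)  # one row of 13 coefficients per 10 ms frame
```

Each row of the result is one column of the spectrogram images above: time runs along one axis, cepstral coefficients along the other.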
On-device performance is updated automatically when you change parameters
The feature explorer showing 'Hello world' (in blue), vs. 'unknown' (in green) data. This separates well, so the dataset looks to be in good condition.
Neural network configuration
A trained Machine Learning model that can distinguish keywords!
Model testing showing 88.62% accuracy on our test set.
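The Studio computes this accuracy for you, but it is just the fraction of test windows whose predicted class matches their label, usually presented alongside a confusion matrix. A small sketch of that bookkeeping, with hypothetical class indices standing in for the three classes in this project:

```python
import numpy as np

def confusion_and_accuracy(y_true, y_pred, n_classes):
    """Confusion matrix (rows = true class, cols = predicted) plus accuracy."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    accuracy = np.trace(cm) / cm.sum()  # correct predictions / all predictions
    return cm, accuracy

# Toy example: 0 = 'helloworld', 1 = 'noise', 2 = 'unknown'
y_true = [0, 0, 1, 2]
y_pred = [0, 1, 1, 2]  # one 'helloworld' window misread as 'noise'
cm, acc = confusion_and_accuracy(y_true, y_pred, 3)
print(acc)  # 0.75
```

The off-diagonal cells of the matrix point you straight at the misclassified samples, which is what the next step inspects.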
Click the three-dot icon (⋮) next to a sample and select Show classification. You're then taken to the classification view, which lets you inspect the sample and compare it to your training data. This way you can check whether this was actually a classification failure, or whether the data was incorrectly labeled. From here you can either update the label (if it was wrong), or move the item to the training set to refine your model.
Inspecting a misclassified sample. Here the audio actually only says 'Hello', so this sample was mislabeled.