The EON Tuner helps you find and select the best embedded machine learning model for your application within the constraints of your target device. It analyzes your input data, potential signal processing blocks, and neural network architectures, then gives you an overview of possible model architectures that fit your chosen device's latency and memory requirements. You can read our EON Tuner introduction here.
First, make sure you have an audio project in your Edge Impulse account. No audio projects yet? Follow one of our getting started guides to create a dataset for recognizing sounds or responding to your voice. Or, clone one of our existing public audio projects to your Edge Impulse account: Tutorial: Recognize sounds from audio or Tutorial: Responding to your voice.
- Log in to the Edge Impulse Studio and open any of your audio projects
- After uploading your audio data samples, select the EON Tuner tab
- Click the Configure target button to select your model’s dataset category, target device, and time per inference (in ms)
- Click on the Dataset category dropdown and select the use case for your audio dataset (keyword spotting, audible events, or continuous audio)
- Click Save and then select Start EON Tuner
- Wait for the EON Tuner to finish running, then click the Select button next to your preferred DSP/neural network model architecture to save it as your project’s primary blocks
- Click on the DSP and Neural Network tabs within your Edge Impulse project to see the parameters the EON Tuner has generated and selected for your dataset, use case, and target device hardware
- Now you’re ready to deploy your automatically configured Edge Impulse model to your target edge device!
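The steps above can also be driven programmatically through the Edge Impulse API, which authenticates with a project API key sent in an `x-api-key` header. The sketch below only builds the HTTP request that would start an EON Tuner job; the exact endpoint path and payload are assumptions for illustration, so check the Edge Impulse API reference before relying on them:

```python
# Hedged sketch: constructing (not sending) a request to start an EON Tuner
# job via the Edge Impulse API. The endpoint path below is an assumption.
import json
from urllib.request import Request

API_BASE = "https://studio.edgeimpulse.com/v1/api"

def build_start_tuner_request(project_id: int, api_key: str) -> Request:
    """Build the HTTP request that would start the EON Tuner for a project."""
    # Assumed job endpoint; verify against the official API reference.
    url = f"{API_BASE}/{project_id}/jobs/start-eon-tuner"
    req = Request(url, data=json.dumps({}).encode(), method="POST")
    req.add_header("x-api-key", api_key)  # project API key from the dashboard
    req.add_header("Content-Type", "application/json")
    return req

# Usage with placeholder values (no network call is made here):
req = build_start_tuner_request(12345, "YOUR_API_KEY")
print(req.full_url)
```

Sending the request (for example with `urllib.request.urlopen`) would then kick off the tuner run, which you can monitor from the EON Tuner tab in the Studio as described above.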