Getting Started: Next Steps

Congratulations, you've trained your first embedded machine learning model! This page lists next steps you can take to make your devices smarter.

Run your model on a real device

You've run your model in the browser, but you can also run it on a wide variety of devices. Head to the development boards section for a full overview. If you have a device that is not supported, no problem: you can export your model as a C++ library that runs on any embedded device. See Running your impulse locally for more information, and the sketch below for what calling the exported library looks like.
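
To give a sense of what that export looks like in code, here is a minimal sketch that feeds one window of raw feature values into the classifier and prints the prediction for each class. It assumes the standard run_classifier() entry point and header layout of the Edge Impulse C++ SDK; double-check the names against the library you export from your own project.

```cpp
// Minimal sketch: classify one window of raw features with the exported C++ library.
// Header path and macro names follow the Edge Impulse SDK conventions; verify them
// against your own export.
#include <cstdio>
#include <cstring>

#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// Paste the raw feature values of one sample here (you can copy them from the
// Live classification page in the Studio). The single zero is a placeholder.
static float features[] = { 0.0f };

// Callback the SDK uses to pull feature data into its own buffers.
static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

int main() {
    signal_t signal;
    signal.total_length = sizeof(features) / sizeof(features[0]);
    signal.get_data = &get_feature_data;

    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR err = run_classifier(&signal, &result, false /* debug */);
    if (err != EI_IMPULSE_OK) {
        printf("run_classifier failed (%d)\n", err);
        return 1;
    }

    // Print the confidence score for every class the model knows about.
    for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        printf("%s: %.5f\n", result.classification[i].label,
               result.classification[i].value);
    }
    return 0;
}
```

On a real device you would fill the features buffer from your microphone (or other sensor) instead of pasting values in by hand.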

More than audio

Making a machine learning model that responds to your voice is cool, but you can do a lot more with Edge Impulse. A number of other tutorials are available to get you started.

Make your model more robust by adding more data

Your model was trained on only around 20 seconds of data, which is very little. To make your model more robust, you can add more data.

  • If your model does not respond well enough to your keyword (for example, when someone says the word in a different tone or pitch), record more data of the keyword.

  • If the model is too sensitive (it triggers when you say something else), record some different words and label them with the 'unknown' class.

You can record new data from your computer, your phone, or a development board. Go to Data acquisition and click Show options for instructions. Then, to split your data into individual samples, click the three dots next to a sample, and select Split sample (more info).

Share your project with the world

Think your model is awesome, and want to share it with the world? Go to Dashboard and click Make this project public. This makes your whole project - including all data, machine learning models, and visualizations - available to anyone with the URL, who can view and clone it.

More questions? Ask us on the forums!

Do you have any other questions or want to share your awesome ideas? Head to the forum!
