Impulse runner
The impulse runner runs your impulse on your development board and shows the results. This applies only to the ready-to-go binaries built in the studio.
You start the impulse via:
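For example (assuming the Edge Impulse Linux CLI is installed and your device is connected to your project):

```shell
edge-impulse-linux-runner
```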
This will sample data from your sensors, classify the data, and print the results.
Other options
--debug - run the impulse in debug mode. This prints the intermediate DSP results. For image models, a live feed of the camera and the inference results is also hosted locally and available in your browser (more on this below).
--continuous - run the impulse in continuous mode (not available on all platforms).
Embedded API Server
The Linux CLI Runner has an embedded API server that allows you to interact with the model easily from any application, environment, or framework that implements an HTTP client. This feature is started with the runner using the --run-http-server option.
To start the API server:
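For example, to serve the API on port 3000 (the port number is your choice):

```shell
edge-impulse-linux-runner --run-http-server 3000
```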
This will start the API server on port 3000. If your impulse uses an image model, the runner also prints a link to a web page showing the live feed of the camera and the inference results; if you don't have an image model, this web page is not available, but the HTTP API still is.
API Endpoints
Once the server is running, you can send HTTP requests to interact with the model. Here is a simple example using Python:
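A minimal sketch of such a client, using only the Python standard library. The endpoint path (`/api/features`) and the payload shape (`{"features": [...]}`) are assumptions for illustration; check the runner's startup output for the exact routes your model exposes.

```python
import json
import urllib.request

# NOTE: the endpoint path and payload shape are assumptions --
# verify them against the runner's startup output.
API_URL = "http://localhost:3000/api/features"

def build_payload(features):
    """Serialize a list of raw feature values into a JSON request body."""
    return json.dumps({"features": features}).encode("utf-8")

def classify(features):
    """POST the features to the runner and return the decoded JSON result."""
    req = urllib.request.Request(
        API_URL,
        data=build_payload(features),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Example call (requires the runner to be listening on port 3000):
# result = classify([0.1, 0.2, 0.3])
# print(result)
```

Any language with an HTTP client can do the same; the runner doesn't care what the caller is written in, which is what makes the patterns below possible.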
How would you use this?
Here are a few examples of how you could use this embedded API server:
Custom Applications: A custom app running on the same Linux device can interact with the model using an HTTP client, simplifying the integration process.
IoT Devices: Small IoT devices with an HTTP client in the firmware can send data to the inference server (the runner) in the local network, get results, and leverage powerful ML models without the need for local model storage and inference.
Web Applications: Web applications can interact with the model running on the Linux device using the HTTP client, enabling powerful ML models in web applications without the need for cloud services.
Mobile Applications: Mobile applications can interact with the model running on the Linux device using the HTTP client, enabling powerful ML models in mobile applications without the need for cloud services.
Summary
The impulse runner is a powerful tool that allows you to run your impulse on your development board and interact with it using an embedded API server. This feature is useful for custom applications, IoT devices, web applications, and mobile applications that need to interact with the model running on the Linux device.
For more information on the impulse runner, or to discuss how you might use it, please reach out to us on the forum.