Linux Python SDK
- 2. Install the SDK:

  **Raspberry Pi**

  ```
  $ sudo apt-get install libatlas-base-dev libportaudio0 libportaudio2 libportaudiocpp0 portaudio19-dev
  $ pip3 install edge_impulse_linux -i https://pypi.python.org/simple
  ```

  **Jetson Nano**

  ```
  $ sudo apt-get install libatlas-base-dev libportaudio2 libportaudiocpp0 portaudio19-dev
  $ pip3 install edge_impulse_linux
  ```

  **Other platforms**

  ```
  $ pip3 install edge_impulse_linux
  ```
- 3. Clone this repository to get the examples:

  ```
  $ git clone https://github.com/edgeimpulse/linux-sdk-python
  ```
Before you can classify data you'll first need to collect it. If you want to collect data from the camera or microphone on your system you can use the Edge Impulse CLI, and if you want to collect data from different sensors (like accelerometers or proprietary control systems) you can do so in a few lines of code.
To collect data from other sensors you'll need to write some code to collect the data from an external sensor, wrap it in the Edge Impulse Data Acquisition format, and upload the data to the Ingestion service. Here's an end-to-end example.
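As a rough sketch of that flow, the snippet below wraps a few accelerometer readings in the Data Acquisition JSON format and signs them with the project's HMAC key. The key, device name, and sensor layout are placeholder assumptions (substitute your own project's values), and the upload to the Ingestion service is shown commented out:

```python
import hashlib
import hmac
import json
import time

# Placeholder values -- substitute your project's HMAC key and device ID.
HMAC_KEY = "ed..."
DEVICE_NAME = "aa:bb:cc:dd:ee:ff"

def build_payload(values, interval_ms=16):
    """Wrap raw accelerometer samples in the Data Acquisition format."""
    data = {
        "protected": {"ver": "v1", "alg": "HS256", "iat": int(time.time())},
        "signature": "0" * 64,  # filled in below
        "payload": {
            "device_name": DEVICE_NAME,
            "device_type": "CUSTOM_SENSOR",  # placeholder device type
            "interval_ms": interval_ms,
            "sensors": [
                {"name": "accX", "units": "m/s2"},
                {"name": "accY", "units": "m/s2"},
                {"name": "accZ", "units": "m/s2"},
            ],
            "values": values,  # one [x, y, z] reading per sample
        },
    }
    # Sign the encoded message with the project HMAC key.
    encoded = json.dumps(data).encode("utf-8")
    data["signature"] = hmac.new(HMAC_KEY.encode("utf-8"), encoded,
                                 hashlib.sha256).hexdigest()
    return data

payload = build_payload([[-9.81, 0.03, 0.21], [-9.79, 0.05, 0.20]])
# To upload, POST the JSON to the Ingestion service (needs `requests` and
# your project API key):
# requests.post("https://ingestion.edgeimpulse.com/api/training/data",
#               headers={"x-api-key": "ei_...",
#                        "x-file-name": "sample.json",
#                        "x-label": "idle"},
#               data=json.dumps(payload))
```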
To classify data (whether this is from the camera, the microphone, or a custom sensor) you'll need a model file. This model file contains all signal processing code, classical ML algorithms, and neural networks, and typically includes hardware optimizations to run as fast as possible. To grab a model file:
- 1. Train your model in Edge Impulse.
- 3. Download the model file via:

  ```
  $ edge-impulse-linux-runner --download modelfile.eim
  ```

  This downloads the file into `modelfile.eim`. (Want to switch projects? Add
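With a model file in hand, a minimal classification call might look like the sketch below. It uses the `ImpulseRunner` class from this SDK; the model path and feature values are placeholders, and the SDK import sits inside the function so the small helper above it can be exercised on its own:

```python
def top_label(result):
    """Pick the highest-scoring label out of a runner result dict."""
    scores = result["result"]["classification"]
    return max(scores, key=scores.get)

def classify_features(model_path, features):
    """Load an .eim model file and classify one window of raw features."""
    # Imported here so top_label() stays usable without the SDK installed.
    from edge_impulse_linux.runner import ImpulseRunner

    runner = ImpulseRunner(model_path)
    try:
        model_info = runner.init()  # loads the model, returns project metadata
        print("Loaded", model_info["project"]["name"])
        return runner.classify(features)
    finally:
        runner.stop()

# Hypothetical usage -- requires a downloaded modelfile.eim:
# result = classify_features("modelfile.eim", [0.1, 0.2, 0.3])
# print(top_label(result))
```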
Then you can start classifying real-time sensor data. We have examples for:
If you see this error, you can re-install portaudio via:

```
$ brew uninstall --ignore-dependencies portaudio
$ brew install portaudio --HEAD
```
This error appears when you try to access the camera or the microphone on macOS from a virtual shell (such as the integrated terminal in Visual Studio Code). Run the command from the regular Terminal.app instead.