
Espressif ESP-EYE
ESP-DSP Acceleration
We’ve added ESP-DSP acceleration to ESP32 deployments. On supported devices, this significantly speeds up DSP feature extraction (e.g. MFCC for audio) without requiring any changes to your impulse configuration.

Example: MFCC Keyword Spotting on a regular ESP32 (standard configuration):

Configuration | DSP Time | Inference Time | Anomaly Time | Speed-up
---|---|---|---|---
Without ESP-DSP | 297 ms | 4 ms | 0 ms | —
With ESP-DSP | 54 ms | 4 ms | 0 ms | ~5–6×
Installing dependencies
To set this device up in Edge Impulse, you will need to install the following software (example commands for a Linux machine follow this list):

- Edge Impulse CLI.
- Python 3.
- ESP Tool (esptool). The ESP documentation website has installation instructions for macOS and Linux.
- On Linux:
  - GNU Screen: install for example via sudo apt install screen.
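As an illustration, on a typical Ubuntu/Debian machine the installation could look roughly like this. Exact package names depend on your system, and the Edge Impulse CLI is installed through npm, so Node.js needs to be present first:

```
# Edge Impulse CLI (requires Node.js/npm)
npm install -g edge-impulse-cli

# Python 3 and esptool (esptool is distributed as a Python package)
sudo apt install python3 python3-pip
pip3 install esptool

# GNU Screen, used to view serial output
sudo apt install screen
```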
Problems installing the CLI? See the Installation and troubleshooting guide.
Connecting to Edge Impulse
With all the software in place, it’s time to connect the development board to Edge Impulse.

1. Connect the development board to your computer

Use a micro-USB cable to connect the development board to your computer.

2. Update the firmware
The development board does not come with the right firmware yet. To update the firmware:

- Download the latest Edge Impulse firmware, and unzip the file.
- Open the flash script for your operating system (flash_windows.bat, flash_mac.command or flash_linux.sh) to flash the firmware. An example for Linux is shown below.
- Wait until flashing is complete.
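For example, on Linux the flashing step could look like this (the archive name below is only a placeholder for whatever the downloaded file is called):

```
# Unzip the downloaded firmware package (archive name is a placeholder)
unzip esp32-firmware.zip -d esp32-firmware
cd esp32-firmware

# Run the flash script for your operating system; on Linux:
chmod +x flash_linux.sh
./flash_linux.sh

# On macOS, run flash_mac.command instead.
```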
3. Setting keys
From a command prompt or terminal, run edge-impulse-daemon. This starts a wizard that asks you to log in and choose an Edge Impulse project. If you want to switch projects later, run the command again with the --clean flag. A short example is shown below.
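For reference, a typical session looks like this (assuming the standard Edge Impulse CLI daemon workflow):

```
# Start the daemon; it will prompt for your Edge Impulse credentials
# and let you pick the project this device should connect to.
edge-impulse-daemon

# Later, to connect the same board to a different project,
# clear the stored configuration and run the wizard again:
edge-impulse-daemon --clean
```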
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
4. Verifying that the device is connected
That’s all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Device connected to Edge Impulse.
Next steps: building a machine learning model
With everything set up you can now build your first machine learning model with these tutorials:

- Building a continuous motion recognition system.
- Recognizing sounds from audio.
- Keyword spotting.
- Image classification.
- Object detection with centroids (FOMO).
Sensors available
The standard firmware supports the following sensors:

- Camera: OV2640, OV3660 and OV5640 modules from OmniVision
- Microphone: I2S microphone on the ESP-EYE (MIC8-4X3-1P0)
- Accelerometer: LIS3DHTR module connected over I2C (SCL pin 22, SDA pin 21)
- Any analog sensor connected to A0

Using with other ESP32 boards
The ESP32 is a very popular chip, both in community projects and in industry, due to its high performance, low price, and the large amount of documentation and support available. There are other camera-enabled development boards based on the ESP32 which can use the Edge Impulse firmware after applying certain changes (a sketch of that workflow follows this list), e.g.:

- AI-Thinker ESP-CAM
- M5STACK ESP32 PSRAM Timer Camera X (OV3660)
- M5STACK ESP32 Camera Module Development Board (OV2640)
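As a rough sketch of that workflow, assuming the open-source Edge Impulse ESP32 firmware repository (the repository name below is an assumption; check the Edge Impulse GitHub organization for the exact project) and a working ESP-IDF installation, the steps are to clone the firmware, adjust the camera pin definitions for your board, and rebuild:

```
# Clone the Edge Impulse ESP32 firmware (repository name is an assumption)
git clone https://github.com/edgeimpulse/firmware-espressif-esp32.git
cd firmware-espressif-esp32

# Edit the camera pin definitions / sensor selection for your board here,
# e.g. the AI-Thinker ESP-CAM pinout, before building.

# Standard ESP-IDF build and flash workflow
idf.py set-target esp32
idf.py build
idf.py -p /dev/ttyUSB0 flash monitor   # serial port is an example; adjust for your machine
```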
Deploying back to device
To deploy your impulse on your ESP32 board, please see one of the following options (a sketch of the C++ library route follows this list):

- Generate an Edge Impulse firmware (ESP-EYE only)
- Download a C++ library (using ESP-IDF)
- Download an Arduino library
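As an illustration of the C++ library route, one possible ESP-IDF workflow looks like the following. The archive and project names are placeholders, and the exact integration steps are described in the C++ library deployment documentation:

```
# Unzip the C++ library exported from the Deployment page of your project
unzip ei-my-project-cpp.zip -d my-impulse        # archive name is a placeholder

# Copy the exported sources into an ESP-IDF project of your own
# (the target directory layout is illustrative, not prescribed)
cp -r my-impulse/edge-impulse-sdk \
      my-impulse/model-parameters \
      my-impulse/tflite-model \
      my-esp-idf-project/components/

# Then build and flash that project with the usual idf.py workflow.
```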