## What You’ll Build

An Android app that:

- Captures real-time audio from the microphone
- Recognizes spoken keywords continuously
- Displays classification results with confidence scores
- Runs entirely on-device with low latency
Difficulty: Intermediate
## Prerequisites
- A trained audio keyword spotting model
- Android Studio with the NDK and CMake installed
- An Android device with a microphone (a USB camera with a built-in mic also works)
- Basic familiarity with Android development
## Step 1: Clone the Repository
## Step 2: Download TensorFlow Lite Libraries
## Step 3: Export Your Audio Model
- In Edge Impulse Studio, go to Deployment
- Select Android (C++ library)
- Enable EON Compiler (recommended for audio)
- Click Build and download the `.zip` archive
## Step 4: Integrate the Model
- Extract the downloaded `.zip` file
- Copy all files except `CMakeLists.txt` to:
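The exported C++ library is typically called from Kotlin through a small JNI bridge. The sketch below is illustrative only: the native library name (`keyword_spotter`), the `KeywordClassifier` object, and the `classify` signature are assumptions, not the repository's actual API.

```kotlin
// Hypothetical JNI bridge to the exported C++ library. The library name
// and function signature are illustrative; match them to the actual
// native code and CMake target in the repository.
object KeywordClassifier {
    init {
        // Loads libkeyword_spotter.so, built by the project's CMake setup
        // (hypothetical library name).
        System.loadLibrary("keyword_spotter")
    }

    // Runs the classifier on one window of raw 16-bit PCM samples and
    // returns one confidence score per keyword label.
    external fun classify(samples: ShortArray): FloatArray
}
```

On the C++ side, the matching JNI function would forward the sample buffer to the Edge Impulse SDK's classifier entry point.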
## Step 5: Configure Audio Permissions
The microphone permission (`android.permission.RECORD_AUDIO`) is already declared in `AndroidManifest.xml`, so no manifest changes are needed.
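Declaring the permission in the manifest is not sufficient on Android 6.0 and later; the app must also request it at runtime. A minimal sketch using the AndroidX Activity Result API (`startListening()` is a hypothetical entry point for the capture code):

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity
import androidx.core.content.ContextCompat

class MainActivity : AppCompatActivity() {

    // Launcher that receives the user's answer to the permission dialog.
    private val micPermission =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) startListening()
        }

    private fun ensureMicPermission() {
        val state = ContextCompat.checkSelfPermission(
            this, Manifest.permission.RECORD_AUDIO
        )
        if (state == PackageManager.PERMISSION_GRANTED) {
            startListening()
        } else {
            micPermission.launch(Manifest.permission.RECORD_AUDIO)
        }
    }

    private fun startListening() {
        // Begin audio capture and classification (see Step 6).
    }
}
```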
## Step 6: Build and Run
- Open the project in Android Studio
- Select Build > Make Project
- Connect your Android device
- Run the app
- Grant microphone permission when prompted
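Once the permission is granted, the app can stream microphone audio into the classifier. Below is a minimal sketch of a continuous capture loop with `AudioRecord`, assuming a 16 kHz mono, one-second model window and the hypothetical `KeywordClassifier` bridge from Step 4; adjust the constants to match your model's actual input.

```kotlin
import android.annotation.SuppressLint
import android.media.AudioFormat
import android.media.AudioRecord
import android.media.MediaRecorder

// Assumed model input: one second of 16 kHz mono PCM. Adjust to match
// the window size your keyword spotting model was trained with.
const val SAMPLE_RATE = 16_000
const val WINDOW_SAMPLES = SAMPLE_RATE

@SuppressLint("MissingPermission") // RECORD_AUDIO was requested in Step 5
fun captureLoop(onScores: (FloatArray) -> Unit) {
    val minBuf = AudioRecord.getMinBufferSize(
        SAMPLE_RATE,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT
    )
    val recorder = AudioRecord(
        MediaRecorder.AudioSource.MIC,
        SAMPLE_RATE,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        maxOf(minBuf, WINDOW_SAMPLES * 2) // bytes: 2 per 16-bit sample
    )
    val window = ShortArray(WINDOW_SAMPLES)
    recorder.startRecording()
    try {
        while (!Thread.currentThread().isInterrupted) {
            // Block until a full window is read, then classify it.
            var read = 0
            while (read < WINDOW_SAMPLES) {
                val n = recorder.read(window, read, WINDOW_SAMPLES - read)
                if (n < 0) return // capture error; bail out
                read += n
            }
            onScores(KeywordClassifier.classify(window))
        }
    } finally {
        recorder.stop()
        recorder.release()
    }
}
```

Running this loop on a background thread keeps the UI responsive; results can then be posted back to the main thread for display with their confidence scores.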