Bring your own model (BYOM)
Upload your own model directly into your Edge Impulse project (TensorFlow SavedModel, ONNX, or TensorFlow Lite)
Bring your own model (BYOM) allows you to optimize and deploy your own pretrained model (TensorFlow SavedModel, ONNX, or TensorFlow Lite) to any edge device, directly from your Edge Impulse project.
Also make sure you have your own pretrained model available locally on your computer, in one of the following formats: TensorFlow SavedModel (saved_model.zip), ONNX model (.onnx), or TensorFlow Lite model (.tflite).

For this guide, we will be uploading a pretrained image classification TFLite model for plant disease classification, downloaded from the TensorFlow Dev Hub.
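If your pretrained model is still a SavedModel directory on disk, you can package it into the expected saved_model.zip layout with a few lines of Python. This is a minimal sketch: the tiny Keras network and file names are hypothetical stand-ins for your own model.

```python
import shutil

import tensorflow as tf

# Hypothetical stand-in for your own pretrained network.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(300, 300, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(16, activation="softmax"),
])

# Export a SavedModel directory (on newer Keras versions you can use
# model.export("saved_model") instead).
tf.saved_model.save(model, "saved_model")

# Zip it so the archive contains the saved_model/ directory itself.
shutil.make_archive("saved_model", "zip", root_dir=".", base_dir="saved_model")
```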
Then, from the Dashboard of your Edge Impulse project under "Getting started", select Upload your model:

Edge Impulse project dashboard.
1. Upload your trained model: Upload a TensorFlow SavedModel (saved_model.zip), ONNX model (.onnx), or TensorFlow Lite model (.tflite) to get started.
2. Model performance: Do you want performance characteristics (latency, RAM and ROM) for a specific device? Select "No" to show the performance for a range of device types, or "Yes" to run performance profiling for any of our officially supported Edge Impulse development platforms. A scripted alternative using the Python SDK is sketched below.
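If you prefer to script the profiling step, the Edge Impulse Python SDK exposes the same functionality. A minimal sketch, assuming the edgeimpulse package is installed and substituting a placeholder API key and model filename:

```python
import edgeimpulse as ei

# Placeholder: paste the API key from your project's dashboard.
ei.API_KEY = "ei_your_api_key_here"

# Estimate latency, RAM and ROM for a specific device target.
profile = ei.model.profile(
    model="plant_disease_model.tflite",
    device="cortex-m4f-80mhz",
)

# Print the profiling results.
profile.summary()
```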

Upload pretrained model - Step 1: Upload a model
After configuring the settings for uploading your model, select Upload your model and wait for your model to upload. You can check the upload status via the "Upload progress" section.
When selecting an ONNX model, you can also upload a .npy file to Upload representative features (Optional). If you upload a set of representative features - for example, your validation set - as an .npy file, we can automatically quantize this model for better on-device performance.
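For illustration, a representative features file is just a NumPy array saved with np.save, shaped like a batch of your model's inputs. The random data below is a stand-in for your real validation set:

```python
import numpy as np

# Stand-in for your real validation samples; the shape should match a
# batch of model inputs, here N samples of 300x300 RGB images.
x_val = np.random.rand(100, 300, 300, 3).astype(np.float32)

# Save as .npy so it can be uploaded as representative features.
np.save("representative_features.npy", x_val)
```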
Uploading a pretrained .onnx model
Depending on the model you have uploaded in Step 1, the configuration settings available for Step 2 will change.
For this guide, we have selected the following model configuration settings for optimal processing of an image classification model with input shape (300, 300, 3) in RGB format, Classification model output, and 16 output labels: Tomato Healthy, Tomato Septoria Leaf Spot, Tomato Bacterial Spot, Tomato Blight, Cabbage Healthy, Tomato Spider Mite, Tomato Leaf Mold, Tomato_Yellow Leaf Curl Virus, Soy_Frogeye_Leaf_Spot, Soy_Downy_Mildew, Maize_Ravi_Corn_Rust, Maize_Healthy, Maize_Grey_Leaf_Spot, Maize_Lethal_Necrosis, Soy_Healthy, Cabbage Black Rot.
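If you are unsure which values to enter, you can read them straight from the model file. A quick sketch using the TensorFlow Lite interpreter, with a placeholder filename:

```python
import tensorflow as tf

# Load the TFLite model and inspect its input/output tensor details.
interpreter = tf.lite.Interpreter(model_path="plant_disease_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

print("Input shape:", input_details["shape"])    # e.g. [1, 300, 300, 3]
print("Input dtype:", input_details["dtype"])
print("Output shape:", output_details["shape"])  # e.g. [1, 16] for 16 labels
```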
After configuring your model settings, select Save model to view your model's on-device performance information for both MCUs and microprocessors (if applicable, depending on your model's arena size).
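The Python SDK can also run this processing step and download a deployable package. A hedged sketch, assuming a classification model and placeholder names:

```python
import edgeimpulse as ei

# Placeholder: paste the API key from your project's dashboard.
ei.API_KEY = "ei_your_api_key_here"

# Process the model and download a generic .zip library deployment.
deploy_bytes = ei.model.deploy(
    model="plant_disease_model.tflite",
    model_output_type=ei.model.output_type.Classification(),
    deploy_target="zip",
)
with open("deployment.zip", "wb") as f:
    f.write(deploy_bytes.getvalue())
```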

Step 2: Process your model
Optionally upload test data to ensure correct model settings and proper model processing:

Step 2: Check model behavior

Step 2: Check model behavior results
There are a couple of restrictions to converting models with our tooling:
- The model must have 1 input tensor
- The batch size must be equal to 1
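For example, when exporting from PyTorch, tracing with a batch-size-1 dummy input and no dynamic axes produces a single fixed input tensor that satisfies both restrictions. The small network below is a hypothetical stand-in for your own model:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for your own pretrained network.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 16),
)
model.eval()

# One input tensor with a fixed batch size of 1 (no dynamic_axes).
dummy_input = torch.randn(1, 3, 300, 300)
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=13)
```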
--saved-model /tmp/saved_model does not exist
If you encounter the following error:
Scheduling job in cluster...
Job started
INFO: No representative features passed in, won't quantize this model
Extracting saved model...
Extracting saved model OK
Converting SavedModel...
--saved-model /tmp/saved_model does not exist
Converting SavedModel failed, see above
Application exited with code 1
Job failed (see above)
Make sure to upload a .zip archive containing at minimum a saved_model directory that contains your saved_model.pb.
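A quick way to verify the layout before uploading is to list the archive's contents. A minimal sketch, assuming the archive is named saved_model.zip:

```python
import zipfile

# The archive must contain a top-level saved_model/ directory holding
# saved_model.pb (typically alongside variables/ and assets/).
names = zipfile.ZipFile("saved_model.zip").namelist()
print(names)
assert "saved_model/saved_model.pb" in names, "saved_model.pb not found"
```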
Could not profile: No uploaded model yet
If you encounter the following error:
Could not profile: No uploaded model yet
This often means that the model you are attempting to upload is unsupported. Only the following model formats are supported at this time:
- TensorFlow SavedModel (in .zip archive)
- ONNX (.onnx)
- TensorFlow Lite (.tflite or .lite)
This model won’t run on MCUs.
If you encounter an error like the following:
This model won’t run on MCUs. Expecting value: line 1 column 1 (char 0).
It means that you are attempting to upload a model with an unsupported data type (dtype) or operation. To export models for microcontrollers, please make sure your model only contains operations listed in the TensorFlow Lite Micro ops list.
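To see which operations your model actually contains, TensorFlow ships a model analyzer (available in TensorFlow 2.9 and later). A short sketch with a placeholder filename:

```python
import tensorflow as tf

# Print the model structure, including every operator in the .tflite
# file, to compare against the TensorFlow Lite Micro ops list.
tf.lite.experimental.Analyzer.analyze(model_path="plant_disease_model.tflite")
```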