Deploy pretrained model
Takes in a TFLite file and builds the model and SDK. Updates are streamed over the websocket API (or can be retrieved through the /stdout endpoint). Use getProfileTfliteJobResult to get the results when the job is completed.
POST /api/{projectId}/jobs/deploy-pretrained-model
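Below is a minimal sketch of this flow in Python. The x-api-key header and Studio base URL are standard, but the request-body field names (modelFileBase64, modelFileType, deploymentType, engine, deployModelType), the job-response shape, and the status polling path are assumptions for illustration; check the parameter list below and the full API reference for the exact names.

```python
import base64
import time

import requests

PROJECT_ID = 1                  # hypothetical project ID
API_KEY = "ei_..."              # your Edge Impulse API key
BASE = "https://studio.edgeimpulse.com/v1"
HEADERS = {"x-api-key": API_KEY}

# Base64-encode the pretrained TFLite model.
with open("model.tflite", "rb") as f:
    model_b64 = base64.b64encode(f.read()).decode("ascii")

# Field names below are assumed, not confirmed -- see the parameters below.
body = {
    "modelFileBase64": model_b64,
    "modelFileType": "tflite",
    "deploymentType": "zip",    # a target name from listDeploymentTargetsForProject
    "engine": "tflite-eon",
    "deployModelType": "int8",
}

resp = requests.post(f"{BASE}/api/{PROJECT_ID}/jobs/deploy-pretrained-model",
                     headers=HEADERS, json=body)
resp.raise_for_status()
job_id = resp.json()["id"]      # assumes the job ID is returned as "id"

# Poll until the job finishes; live logs can instead be streamed over the
# websocket API or read from the /stdout endpoint mentioned above.
while True:
    status = requests.get(f"{BASE}/api/{PROJECT_ID}/jobs/{job_id}/status",
                          headers=HEADERS).json()
    if status.get("job", {}).get("finished"):
        break
    time.sleep(5)

# When finished, fetch the build output via getProfileTfliteJobResult.
```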
Parameters:

- Project ID.
- Impulse ID. If this is unset then the default impulse is used.
- A base64 encoded pretrained model.
- The name of the built target. You can find this by listing all deployment targets through listDeploymentTargetsForProject (via GET /v1/api/{projectId}/deployment/targets) and checking the format type.
- The inference engine, one of: tflite, tflite-eon, tflite-eon-ram-optimized, tensorrt, tensaiflow, drp-ai, tidl, akida, syntiant, memryx, neox, ethos-linux, st-aton.
- A base64 encoded .npy file containing the features from your validation set (optional for onnx and saved_model), used to quantize your model; a sketch for producing this value follows the list.
- The type of the uploaded model file, one of: tflite, onnx, saved_model, lgbm, xgboost, pickle.
- The model precision to deploy, one of: int8, float32.
- Optional: use a specific converter (only for ONNX models), one of: onnx-tf, onnx2tf.
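When requesting an int8 deployment, the representative-features parameter above takes a base64 encoded .npy file. Here is a small sketch of producing that value with NumPy; the array shape and dtype are placeholders, and your features should match your model's input:

```python
import base64
import io

import numpy as np

# Hypothetical validation features: one row per sample, flattened to the
# model's input size (shape and dtype here are placeholders).
features = np.random.rand(100, 320).astype(np.float32)

# Serialize to an in-memory .npy file, then base64-encode it for the
# representative-features parameter described above.
buf = io.BytesIO()
np.save(buf, features)
features_b64 = base64.b64encode(buf.getvalue()).decode("ascii")
```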