Deploy pretrained model
Takes in a pretrained model file (TFLite, ONNX, or SavedModel) and builds the model and SDK. Updates are streamed over the WebSocket API (or can be retrieved through the /stdout endpoint). Use getProfileTfliteJobResult to get the results when the job has completed.
Project ID
Impulse ID. If this is unset then the default impulse is used.
A base64-encoded pretrained model file
The name of the build target. You can find this by listing all deployment targets through listDeploymentTargetsForProject (via GET /v1/api/{projectId}/deployment/targets) and checking the format type.
A base64-encoded .npy file containing the features from your validation set (optional for onnx and saved_model); used to quantize your model.
Optional, use a specific converter (only for ONNX models).
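The representative-features parameter expects a standard .npy file, base64-encoded. A minimal sketch of producing that value, assuming NumPy is installed and using random data as a stand-in for your real validation-set features:

```python
import base64
import io

import numpy as np  # assumption: NumPy is available

# Stand-in for your validation-set features; use the same shape
# your model expects as input (here, 100 samples of 33 features).
X_val = np.random.rand(100, 33).astype(np.float32)

# np.save writes the standard .npy format into an in-memory buffer,
# which is then base64-encoded for the representativeFeaturesBase64 field.
buf = io.BytesIO()
np.save(buf, X_val)
representative_features_b64 = base64.b64encode(buf.getvalue()).decode("ascii")
```

The resulting string is passed as `representativeFeaturesBase64` in the request body.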
POST /v1/api/{projectId}/jobs/deploy-pretrained-model HTTP/1.1
Host: studio.edgeimpulse.com
x-api-key: YOUR_API_KEY
Content-Type: application/json
Accept: */*
Content-Length: 323
{
  "modelFileBase64": "text",
  "modelFileType": "tflite",
  "deploymentType": "text",
  "engine": "tflite",
  "modelInfo": {
    "input": {
      "inputType": "time-series",
      "frequencyHz": 1,
      "windowLengthMs": 1
    },
    "model": {
      "modelType": "classification",
      "labels": [
        "text"
      ]
    }
  },
  "representativeFeaturesBase64": "text",
  "deployModelType": "int8",
  "useConverter": "onnx-tf"
}
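The request body above can be assembled programmatically. A minimal Python sketch using only the standard library; the project ID, API key, and the "zip" deployment type are hypothetical placeholders (look up real deployment types via listDeploymentTargetsForProject), and the final POST is left to your HTTP client of choice:

```python
import base64
import json

# Hypothetical placeholders -- substitute your own values.
PROJECT_ID = 1
API_KEY = "YOUR_API_KEY"
URL = f"https://studio.edgeimpulse.com/v1/api/{PROJECT_ID}/jobs/deploy-pretrained-model"

def build_deploy_body(model_bytes: bytes, deployment_type: str) -> dict:
    """Assemble the JSON body for a deploy-pretrained-model request."""
    return {
        "modelFileBase64": base64.b64encode(model_bytes).decode("ascii"),
        "modelFileType": "tflite",
        "deploymentType": deployment_type,  # from listDeploymentTargetsForProject
        "engine": "tflite",
        "deployModelType": "int8",
    }

headers = {
    "x-api-key": API_KEY,
    "Content-Type": "application/json",
}
# b"<tflite bytes>" stands in for the contents of your .tflite file.
payload = json.dumps(build_deploy_body(b"<tflite bytes>", "zip"))
# POST `payload` with `headers` to URL using your HTTP client of choice.
```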
OK
{
  "success": true,
  "error": "text",
  "id": 12873488112
}
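A client should check the `success` flag before using the returned job ID. A small sketch of handling the response shape shown above (the parsing helper is illustrative, not part of any SDK):

```python
import json

def parse_deploy_response(raw: str) -> int:
    """Return the job ID from a deploy-pretrained-model response,
    raising if the API reported a failure."""
    resp = json.loads(raw)
    if not resp.get("success"):
        raise RuntimeError(resp.get("error", "unknown error"))
    return resp["id"]

job_id = parse_deploy_response('{"success": true, "id": 12873488112}')
```

The job ID can then be used to follow progress (e.g. via the /stdout endpoint mentioned above) and to fetch results with getProfileTfliteJobResult once the job has completed.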