model package
Submodules
deploy
Transform a machine learning model into a library for an edge device.
Transforms a trained model into a library, package, or firmware ready to deploy on an embedded device. Can optionally apply post-training quantization if a representative data sample is uploaded.
Supported model formats:
- Keras Model instance <https://www.tensorflow.org/api_docs/python/tf/keras/Model>
- TensorFlow SavedModel <https://www.tensorflow.org/guide/saved_model> (as path to directory or .zip file)
- ONNX model file <https://learn.microsoft.com/en-us/windows/ai/windows-ml/get-onnx-model> (as path to .onnx file)
- TensorFlow Lite file <https://www.tensorflow.org/lite/guide> (as bytes, or path to any file that is not .zip or .onnx)
Representative data for quantization:
- Must be a numpy array or .npy file.
- Each element must have the same shape as your model's input.
- Must be representative of the range (maximum and minimum) of values in your training data.
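For illustration, a representative sample might be prepared as in the sketch below. The (96, 96, 1) input shape and the random values are placeholders; in practice you would slice your real training data so the sample covers the same value range the model was trained on.

```python
import numpy as np

# Placeholder: in practice, take a slice of your real training data so the
# values span the same minimum/maximum range seen during training.
# The (96, 96, 1) per-element shape here is a stand-in for your model's input shape.
representative_data = np.random.uniform(0.0, 1.0, size=(100, 96, 96, 1)).astype(np.float32)

# Pass the array directly as representative_data_for_quantization, or save it
# as a .npy file and pass the path instead.
np.save("representative.npy", representative_data)
```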
Note: the available deployment options will change depending on the values given for model, model_output_type, and model_input_type. For example, the openmv deployment option is only available if model_input_type is set to ImageInput. If you attempt to deploy to an unavailable target, you will receive the error Could not deploy: deploy_target: ...
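As an illustration of that constraint, a deployment to the openmv target needs an image input type. The sketch below assumes Classification and ImageInput can be instantiated without arguments; the model path and API key are placeholders.

```python
import edgeimpulse as ei
from edgeimpulse.model.output_type import Classification
from edgeimpulse.model.input_type import ImageInput

# openmv is only offered when model_input_type is ImageInput; with any other
# input type the call fails with "Could not deploy: deploy_target: ...".
ei.model.deploy(
    model="trained_image_model.onnx",   # placeholder path to an ONNX model file
    model_output_type=Classification(),
    model_input_type=ImageInput(),
    deploy_target="openmv",
    api_key="ei_...",                   # placeholder project API key
)
```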
Parameters
model: Union[pathlib.Path, str, bytes, Any]
model_output_type: Union[edgeimpulse.model.output_type.Classification, edgeimpulse.model.output_type.Regression, edgeimpulse.model.output_type.ObjectDetection]
model_input_type: Optional[Union[edgeimpulse.model.input_type.ImageInput, edgeimpulse.model.input_type.AudioInput, edgeimpulse.model.input_type.TimeSeriesInput, edgeimpulse.model.input_type.OtherInput]] = None
representative_data_for_quantization: Optional[Union[pathlib.Path, str, bytes, Any]] = None
deploy_model_type: Optional[str] = None
engine: str = 'tflite'
deploy_target: str = 'zip'
output_directory: Optional[str] = None
api_key: Optional[str] = None
timeout_sec: Optional[float] = None
Return
_io.BytesIO
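Putting the above together, a minimal sketch of a deploy() call might look like the following. The tiny Keras model, output paths, and API key are placeholders, and the no-argument Classification() constructor is an assumption rather than a verified recipe.

```python
import edgeimpulse as ei
import numpy as np
import tensorflow as tf
from edgeimpulse.model.output_type import Classification

# Tiny stand-in Keras model; substitute your own trained model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Placeholder representative data with the same per-element shape as the input.
representative_data = np.random.uniform(0.0, 1.0, size=(100, 3)).astype(np.float32)

# Build a deployable .zip library; post-training quantization is applied
# because representative data is supplied.
deployed = ei.model.deploy(
    model=model,
    model_output_type=Classification(),
    representative_data_for_quantization=representative_data,
    deploy_target="zip",          # see list_deployment_targets()
    engine="tflite",              # see list_engines()
    output_directory="./build",
    api_key="ei_...",             # placeholder project API key
)

# deploy() returns an io.BytesIO; write it out if you want the raw bytes.
with open("./model_library.zip", "wb") as f:
    f.write(deployed.getvalue())
```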
list_deployment_targets
List suitable deployment targets for the project associated with the configured or provided API key.
Parameters
api_key: Optional[str] = None
Return
List[str]
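A quick sketch of listing the targets for a project (the API key is a placeholder):

```python
import edgeimpulse as ei

# Print every deployment target available to the project behind this API key.
for target in ei.model.list_deployment_targets(api_key="ei_..."):
    print(target)
```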
list_engines
List all the engines that can be passed to deploy()'s engine parameter.
Return
List[str]
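For example, a minimal sketch:

```python
import edgeimpulse as ei

# Values accepted by deploy()'s engine parameter.
print(ei.model.list_engines())
```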
list_model_types
List all the model types that can be passed to deploy()'s deploy_model_type parameter.
Return
List[str]
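For example, a minimal sketch:

```python
import edgeimpulse as ei

# Values accepted by deploy()'s deploy_model_type parameter.
print(ei.model.list_model_types())
```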
list_profile_devices
List possible values for the device field when calling edgeimpulse.model.profile().
Parameters
api_key: Optional[str] = None
Return
List[str]
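For example (the API key is a placeholder):

```python
import edgeimpulse as ei

# Devices that can be passed to profile()'s device parameter.
print(ei.model.list_profile_devices(api_key="ei_..."))
```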
profile
Profile the performance of a trained model on a range of embedded targets, or a specific device.
The response includes estimates of memory usage and latency for the model across a range of targets, including low-end MCU, high-end MCU, high-end MCU with accelerator, microprocessor unit (MPU), and a GPU or neural network accelerator. It will also include details of any conditions that preclude operation on a given type of device.
If you request a specific device, the results will also include estimates for that device. A list of devices can be obtained from edgeimpulse.model.list_profile_devices().
You can call .summary() on the response to obtain a more readable version of the most relevant information.
Parameters
model: Union[pathlib.Path, str, bytes, Any]
device: Optional[str] = None
api_key: Optional[str] = None
timeout_sec: Optional[float] = None
Return
edgeimpulse.model._functions.profile.ProfileResponse
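A minimal sketch of profiling a model file; the model path, device name, and API key below are placeholders rather than known-good values.

```python
import edgeimpulse as ei

# Estimate memory usage and latency across the standard range of targets,
# plus one specific device taken from list_profile_devices().
response = ei.model.profile(
    model="trained_model.tflite",   # placeholder path to a TensorFlow Lite file
    device="cortex-m4f-80mhz",      # hypothetical device identifier
    api_key="ei_...",               # placeholder project API key
)

# Print a readable summary of the most relevant estimates.
response.summary()
```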