edgeimpulse v1.0.1
edgeimpulse.model package
Module contents
edgeimpulse.model.deploy(model: Union[Path, str, bytes, Any], model_output_type: Union[Classification, Regression, ObjectDetection], model_input_type: Optional[Union[AudioInput, TimeSeriesInput, OtherInput]] = None, representative_data_for_quantization: Optional[Union[Path, str, bytes, Any]] = None, deploy_model_type: Optional[str] = None, engine: str = 'tflite', deploy_target: str = 'zip', output_directory: Optional[str] = None, api_key: Optional[str] = None)

Transforms a machine learning model into a library for an edge device.
Transforms a trained model into a library, package, or firmware ready to deploy on an embedded device. Can optionally apply post-training quantization if a representative data sample is uploaded.
Supported model formats:
* Keras Model instance
* TensorFlow SavedModel (as path to directory or .zip file)
* ONNX model file (as path to .onnx file)
* TensorFlow Lite file (as bytes, or path to any file that is not .zip or .onnx)
Representative data for quantization:
* Must be a numpy array or .npy file.
* Each element must have the same shape as your model’s input.
* Must be representative of the range (maximum and minimum) of values in your training data.
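As a concrete illustration of these requirements, the sketch below prepares a representative dataset for a hypothetical model whose input shape is (50, 3). The array contents, shapes, and filename are illustrative, not taken from the SDK:

```python
import numpy as np

# Suppose the model takes windows of 50 time steps with 3 axes each.
# Use a slice of real training data so the value range is representative;
# random data here only stands in for it.
x_train = np.random.uniform(-2.0, 2.0, size=(1000, 50, 3)).astype(np.float32)
representative = x_train[:100]  # each element matches the model's input shape

# Either pass the array directly to deploy(), or save it with np.save
# and pass the path to the resulting .npy file instead.
np.save("representative.npy", representative)

print(representative.shape)  # (100, 50, 3)
```

In practice you would slice from your actual training set rather than generate random values, since quantization ranges are calibrated from this data.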
Parameters
model (Union[Path, str, bytes, Any]) – A machine learning model, or a similarly represented computational graph. Can be a Path or str denoting a file path, Python bytes containing a model, or a Keras model instance.
model_output_type (Union[Classification, Regression, ObjectDetection]) – Describes your model’s type: Classification, Regression, or ObjectDetection. The types are available in the module edgeimpulse.model.output_type.
model_input_type (Union[AudioInput, TimeSeriesInput, OtherInput], optional) – Determines any input preprocessing (windowing, downsampling) that should be performed by the resulting library. The types are available in edgeimpulse.model.input_type. The default is no preprocessing.
representative_data_for_quantization (Union[Path, str, bytes, Any], optional) – A representative input dataset. Accepts either an in-memory numpy array or the Path/str filename of an .npy file created with np.save.
deploy_model_type (str, optional) – Use int8 to receive an 8-bit quantized model, float32 for non-quantized. Defaults to None, in which case it becomes int8 if representative_data_for_quantization is provided and float32 otherwise. For other values, see edgeimpulse.model.list_model_types().
engine (str, optional) – Inference engine. Either tflite (for TensorFlow Lite for Microcontrollers) or tflite-eon (for EON Compiler) to output a portable C++ library. For all engines, call edgeimpulse.deploy.list_engines(). Defaults to tflite.
deploy_target (str, optional) – Target to deploy to, defaulting to a portable C++ library suitable for most devices. See edgeimpulse.model.list_deployment_targets() for a list.
output_directory (str, optional) – Directory to write the deployment artifact to. The file name may vary depending on the deployment type. Defaults to None, in which case the model will not be written to a file.
api_key (str, optional) – The API key for an Edge Impulse project. This can also be set via the module-level variable edgeimpulse.API_KEY, or the env var EI_API_KEY.
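The three ways of supplying the API key suggest a lookup order of explicit argument, then module variable, then environment variable. The helper below is a hypothetical illustration of that precedence, not part of the SDK; `resolve_api_key` and `MODULE_API_KEY` are names invented for this sketch:

```python
import os
from typing import Optional

MODULE_API_KEY: Optional[str] = None  # stands in for edgeimpulse.API_KEY


def resolve_api_key(api_key: Optional[str] = None) -> Optional[str]:
    """Illustrative lookup order: explicit argument, module variable, env var."""
    if api_key is not None:
        return api_key
    if MODULE_API_KEY is not None:
        return MODULE_API_KEY
    return os.environ.get("EI_API_KEY")


# An explicit argument wins over the environment variable
os.environ["EI_API_KEY"] = "ei_from_env"
print(resolve_api_key("ei_explicit"))  # ei_explicit
print(resolve_api_key())               # ei_from_env
```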
Returns
Binary representation of deployment output.
Return type
bytes
Raises
InvalidAuthTypeException – Incorrect authentication type was provided.
InvalidDeployParameterException – Unacceptable parameter given to deploy function.
InvalidEngineException – Unacceptable engine for this target.
InvalidTargetException – Unacceptable deploy_target for this project.
FileNotFoundError – Model file could not be loaded.
Exception – Any other error encountered during deployment.
Examples

# Turn a Keras model into a C++ library and write to disk
ei.model.deploy(model=keras_model,
                model_output_type=ei.model.output_type.Classification(),
                output_directory=".")

# Convert various types of serialized models:
ei.model.deploy(model="heart_rate.onnx",  # ONNX
                model_output_type=ei.model.output_type.Regression())

ei.model.deploy(model="heart_rate",  # TensorFlow SavedModel (can also be a zip)
                model_output_type=ei.model.output_type.Regression())

ei.model.deploy(model="heart_rate.lite",  # TensorFlow Lite
                model_output_type=ei.model.output_type.Regression())

# Quantize a model to int8 during deployment by passing a numpy array of data
ei.model.deploy(model=keras_model,
                representative_data_for_quantization=x_test,
                model_output_type=ei.model.output_type.Classification(),
                output_directory=".")
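Since deploy() returns the deployment artifact as bytes, you can also write it to disk yourself instead of using output_directory. In this sketch, `deployment` holds placeholder bytes standing in for a real deploy() result, so the snippet runs standalone:

```python
from pathlib import Path

# Placeholder for: deployment = ei.model.deploy(model=keras_model, ...)
# These 22 bytes form an empty zip archive's end-of-central-directory record.
deployment = b"PK\x05\x06" + b"\x00" * 18

out_path = Path("deployment.zip")
out_path.write_bytes(deployment)
print(out_path.stat().st_size)  # 22
```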
edgeimpulse.model.list_deployment_targets(api_key: Optional[str] = None)

Lists suitable deployment targets for the project associated with the configured or provided API key.
Parameters
api_key (str, optional) – The API key for an Edge Impulse project. This can also be set via the module-level variable edgeimpulse.API_KEY, or the env var EI_API_KEY.
Returns
List of deploy targets for project
Return type
List[str]
edgeimpulse.model.list_engines()

Lists all the engines that can be passed to deploy()’s engine parameter.
Returns
List of engines
Return type
List[str]
edgeimpulse.model.list_model_types()

Lists all the model types that can be passed to deploy()’s deploy_model_type parameter.
Returns
List of model types
Return type
List[str]
edgeimpulse.model.list_profile_devices(api_key: Optional[str] = None)

Lists possible values for the device field when calling edgeimpulse.model.profile().
Parameters
api_key (str, optional) – The API key for an Edge Impulse project. This can also be set via the module-level variable edgeimpulse.API_KEY, or the env var EI_API_KEY.
Returns
List of profile targets for project
Return type
List[str]
edgeimpulse.model.profile(model: Union[Path, str, bytes, Any], device: Optional[str] = None, api_key: Optional[str] = None)

Profiles the performance of a trained model on a range of embedded targets, or a specific device.
The response includes estimates of memory usage and latency for the model across a range of targets, including low-end MCU, high-end MCU, high-end MCU with accelerator, microprocessor unit (MPU), and a GPU or neural network accelerator. It will also include details of any conditions that preclude operation on a given type of device.
If you request a specific device, the results will also include estimates for that specific device. A list of devices can be obtained from edgeimpulse.model.list_profile_devices().
You can call .summary() on the response to obtain a more readable version of the most relevant information.
Parameters
model (Union[Path, str, bytes, Any]) – A machine learning model, or a similarly represented computational graph. Can be a Path or str denoting a file path, Python bytes containing a model, or a Keras model instance.
device (Optional[str], optional) – An embedded processor for which to profile the model. A comprehensive list can be obtained via edgeimpulse.model.list_profile_devices().
api_key (Optional[str], optional) – The API key for an Edge Impulse project. This can also be set via the module-level variable edgeimpulse.API_KEY, or the env var EI_API_KEY.
Returns
Structure containing profile information. A subclass of edgeimpulse_api.models.get_pretrained_model_response. You can call its .summary() method for a more readable version of the most relevant information.
Return type
ProfileResponse
Raises
InvalidAuthTypeException – Incorrect authentication type was provided.
InvalidDeviceException – Device is not valid.
Examples

# Profile a Keras model across a range of devices
result = ei.model.profile(model=keras_model)
result.summary()

# Profile different types of models on specific devices
result = ei.model.profile(model="heart_rate.onnx",  # ONNX
                          device="cortex-m4f-80mhz")

result = ei.model.profile(model="heart_rate",  # TensorFlow SavedModel (can also be a zip)
                          device="nordic-nrf9160-dk")

result = ei.model.profile(model="heart_rate.lite",  # TensorFlow Lite (float32 or int8)
                          device="synaptics-ka10000")