Custom learning block structure
The parameters defined in your `parameters.json` file will be passed as command line arguments to the script you defined in your Dockerfile as the `ENTRYPOINT` for the Docker image. Please refer to the parameters.json documentation for further details about creating this file, the parameter options available, and examples.

In addition to the items defined by you, the following arguments will be automatically passed to your custom learning block.
Argument | Passed | Description |
---|---|---|
--info-file <file> | Always | Provides the file path for train_input.json as a string. The train_input.json file contains configuration details for model training options. See train_input.json. |
--data-directory <dir> | Always | Provides the directory path for training/validation datasets as a string. |
--out-directory <dir> | Always | Provides the directory path to the output directory as a string. This is where block output needs to be written. |
--epochs <value> | Conditional | Passed if no custom parameters are provided. Provides the number of epochs for model training as an integer. |
--learning-rate <value> | Conditional | Passed if no custom parameters are provided. Provides the learning rate for model training as a float. |
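As a minimal sketch, the automatically passed arguments above could be consumed in a Python training script using `argparse`. The function name and the default values for `--epochs` and `--learning-rate` are illustrative placeholders, not part of any official block template:

```python
import argparse

def parse_block_args(argv=None):
    """Parse the arguments Edge Impulse passes to a custom learning block."""
    parser = argparse.ArgumentParser(description="Custom learning block")
    parser.add_argument("--info-file", type=str, required=True,
                        help="Path to train_input.json")
    parser.add_argument("--data-directory", type=str, required=True,
                        help="Directory containing the training/validation datasets")
    parser.add_argument("--out-directory", type=str, required=True,
                        help="Directory where block output must be written")
    # Only passed when no custom parameters are defined in parameters.json;
    # the defaults below are placeholders for local experimentation.
    parser.add_argument("--epochs", type=int, default=30)
    parser.add_argument("--learning-rate", type=float, default=0.001)
    return parser.parse_args(argv)

# Simulate the command line the block would receive
args = parse_block_args([
    "--info-file", "/home/train_input.json",
    "--data-directory", "/home/data",
    "--out-directory", "/home/out",
    "--epochs", "10",
    "--learning-rate", "0.01",
])
print(args.data_directory, args.epochs, args.learning_rate)
```

Note that `--epochs` and `--learning-rate` use defaults here so the script still runs when they are omitted, mirroring the conditional behavior described above.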
The training and validation datasets can be loaded (e.g. as a `tf.data.Dataset`) for your model and batched as desired within your custom learning block training script.
In addition to the datasets, a `sample_id_details.json` file (see sample_id_details.json) is located within the data directory. The location of this directory is specified by the `--data-directory <dir>` argument, and its structure is shown below.
The `X_*.npy` files are float32 arrays in the appropriate shape. You can typically load these into your training pipeline without any modification.
The `Y_*.npy` files are int32 arrays with four columns: `label_index`, `sample_id`, `sample_slice_start_ms`, and `sample_slice_end_ms`, unless the labels are bounding boxes. See below.
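As a sketch, the dataset files can be loaded with NumPy and the four label columns unpacked as described above. The `X_split_train.npy`/`Y_split_train.npy` file names are illustrative assumptions; check your data directory for the actual names. Synthetic data is written first so the example is self-contained:

```python
import os
import tempfile
import numpy as np

# Create a synthetic example of the on-disk format described above
data_dir = tempfile.mkdtemp()
np.save(os.path.join(data_dir, "X_split_train.npy"),
        np.random.rand(8, 13).astype(np.float32))   # features, float32
Y = np.zeros((8, 4), dtype=np.int32)                # four int32 columns
Y[:, 0] = np.arange(8) % 3                          # label_index column
np.save(os.path.join(data_dir, "Y_split_train.npy"), Y)

# Load the arrays as a training script would
X_train = np.load(os.path.join(data_dir, "X_split_train.npy"))
Y_train = np.load(os.path.join(data_dir, "Y_split_train.npy"))

# Unpack the four label columns
label_index = Y_train[:, 0]
sample_id = Y_train[:, 1]
slice_start_ms = Y_train[:, 2]
slice_end_ms = Y_train[:, 3]
print(X_train.dtype, Y_train.dtype, label_index.shape)
```

From here, `X_train` and `label_index` can be batched into whatever input pipeline your framework expects.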
The `X_*.npy` files follow the NHWC (batch_size, height, width, channels) format for image data.
The `Y_*.npy` files are a JSON array in the form of:
If your model expects BGR channel order, select the `RGB->BGR` option when configuring pixel scaling for your block. See below.
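The NHWC layout and the RGB-to-BGR channel order can be illustrated with a short NumPy sketch. The channel flip is shown only to clarify what the conversion means; in practice you would select the `RGB->BGR` option in the block configuration rather than flipping channels yourself:

```python
import numpy as np

# A batch of 4 RGB images, 32x32 pixels, in NHWC order:
# (batch_size, height, width, channels)
X = np.random.rand(4, 32, 32, 3).astype(np.float32)
assert X.shape[-1] == 3  # channels last

# Reversing the last axis converts RGB -> BGR
X_bgr = X[..., ::-1]

# The red channel (index 0) ends up at index 2, and vice versa
print(np.array_equal(X[..., 0], X_bgr[..., 2]))
```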
TFLite file(s):

- `model.tflite` - a TFLite file with float32 inputs and outputs
- `model_quantized_int8_io.tflite` - a quantized TFLite file with int8 inputs and outputs
- optionally, a `saved_model.zip` file

TensorFlow SavedModel:

- `saved_model.zip` - a TensorFlow SavedModel file

ONNX file:

- `model.onnx` - an ONNX file with int8, float16, or float32 inputs and outputs

Pickled scikit-learn model:

- `model.pkl` - a pickled instance of the scikit-learn model

For object detection models, the last layer is specified by `ObjectDetectionLastLayer`.
When using the `edge-impulse-blocks runner` tool as described below, you can adjust the train/validation split using the `--validation-set-size <size>` argument, but you are unable to split using a metadata key. To profile your model after training locally, see Getting profiling metrics.
Option to edit a built-in block locally
The `train_input.json` file is not available when training locally. If your script needs information that is contained within `train_input.json`, you will not be able to train locally. You will either need to push your block to Edge Impulse to train and test in Studio, or alter your training script such that you can pass in that information (or eliminate it altogether).

You can train your block locally using the `edge-impulse-blocks runner` tool. See Block runner for additional details. The runner expects the following arguments for learning blocks.
Argument | Description |
---|---|
--epochs <number> | If not provided, you will be prompted to enter a value. |
--learning-rate <learningRate> | If not provided, you will be prompted to enter a value. |
--validation-set-size <size> | Defaults to 0.2 but can be overwritten. |
--input-shape <shape> | Automatically identified but can be overwritten. |
--extra-args <args> | Additional arguments for your script. |
When running your block, you will need to provide a data directory (e.g. `/home`), an output directory (e.g. `/home/out`), and any other parameters required for your script.
The runner will create an `ei-block-data` directory within your custom block directory. It will contain a subdirectory with the associated project ID as the name - this is the directory that gets mounted into the container as `/home`.
The first time you enter the above command, you will be asked some questions to configure the runner. Follow the prompts to complete this. If you would like to change the configuration in the future, you can execute the runner command with the `--clean` flag.
The example block repositories follow the naming convention `example-custom-ml-<description>`. As such, they can be found by going to the Edge Impulse account and searching the repositories for `ml-block`.
Below are direct links to some examples:
Block parameters do not update
If changes made to your `parameters.json` file are not being reflected in your block after it is pushed to Studio, or there are no parameters at all, you may need to update the Edge Impulse CLI: