Multi-label (time-series)
The multi-label feature brings considerable value by preserving the context of longer data samples, simplifying data preparation, and enabling more efficient and effective data analysis.
The first improvement is in the way you can analyze and process complex datasets, especially for applications where context and continuity are crucial. With this feature, you can maintain the integrity of longer-duration samples, such as hour-long exercise sessions or night-long sleep studies, without the need to segment these into smaller fragments every time there is a change in activity. This holistic view not only preserves the context but also provides a richer data set for analysis.
Then, the ability to select window sizes directly in Edge Impulse addresses a common pain point: data duplication. Without the multi-label feature, you need to pre-process data, either externally or using transformation jobs, creating multiple copies of the same data with different window sizes to determine the optimal configuration. This process is not only time-consuming but also prone to errors and inefficiencies. With multi-label samples, adjusting the window size becomes a simple parameter change in the "Impulse design", streamlining the process significantly. This flexibility saves time, reduces the risk of errors, and allows for more dynamic experimentation with data, leading to potentially more accurate and insightful models.

Detecting key events in multi-label samples
When working with time-series data labeled using the multi-label format, you can detect short-duration or critical events (e.g. tamper detection, fall detection, or short noises, among others). Edge Impulse provides flexible strategies for assigning labels to windows of data during training and inference to ensure these events are represented.
During the data acquisition process, it's important to understand the available labeling strategies, as choosing the right approach for handling multi-label events ensures accurate detection. The labeling strategy is selected when designing your impulse in the Create impulse screen; define the strategy that is appropriate for detecting the multi-label events in your data.

Use label at the end of the window
This strategy assigns the label that is active at the end of each window as the label for the entire window. It works well for scenarios where the primary interest lies in the resulting state or activity of the window such as recognizing sustained motions or transitions.
If a sample transitions from idle to running within a window, and the last timestamp in the window corresponds to running, the window will be labeled as running.
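As an illustration only (this is not Edge Impulse source code), the strategy can be thought of as the following Python sketch, where segments is a hypothetical list of (label, start_ms, end_ms) tuples covering the whole sample:

def label_at_end_of_window(segments, window_start_ms, window_end_ms):
    # Return the label that is active at the last timestamp of the window.
    for label, start_ms, end_ms in segments:
        if start_ms <= window_end_ms <= end_ms:
            return label
    return None

segments = [("idle", 0, 20000), ("running", 20001, 60000)]
print(label_at_end_of_window(segments, 15000, 25000))  # -> "running"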
Use label X if anywhere present in the window
This strategy assigns a label to the window if a specific event label is present anywhere within the window's duration (e.g. tamper, fall, etc). It is particularly useful for detecting short or sparse events that may not occupy the full window but are critical to capture when they occur.
With this option, you can configure which label(s) to prioritize. If the selected label is found within any part of the window, the window will be assigned that label, even if the short event occurs alongside other labels.

This approach ensures better coverage for rare or time-sensitive events and improves the model's sensitivity to important transitions or anomalies.
If multiple selected labels appear within the same window, the label with the highest number of occurrences will be assigned to that window. If none of the selected labels are found, the system defaults to using the label at the end of the window.
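A minimal sketch of how this strategy could resolve a window label, including the tie-break and fallback rules described above (illustrative only; segments and priority_labels are hypothetical names, and the actual Studio implementation may differ):

def label_if_anywhere_present(segments, window_start_ms, window_end_ms, priority_labels):
    # Count occurrences of the prioritized labels that overlap the window.
    counts = {}
    for label, start_ms, end_ms in segments:
        overlaps = start_ms <= window_end_ms and end_ms >= window_start_ms
        if overlaps and label in priority_labels:
            counts[label] = counts.get(label, 0) + 1
    if counts:
        # Several selected labels in the same window: pick the most frequent one.
        return max(counts, key=counts.get)
    # No selected label found: default to the label at the end of the window.
    for label, start_ms, end_ms in segments:
        if start_ms <= window_end_ms <= end_ms:
            return label
    return None

segments = [("nominal_mode", 0, 5000), ("tamper", 5001, 5200), ("nominal_mode", 5201, 10000)]
print(label_if_anywhere_present(segments, 4000, 6000, {"tamper"}))  # -> "tamper"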
Upload multi-label samples
1. Using the CSV Wizard
If your dataset is in the CSV format and contains a label column, the CSV Wizard is probably the easiest method to import your multi-label data.
For example:
seconds_elapsed, accX, accY, accZ, label
0.00, 0.14642,-0.01645,-0.00858,idle
0.16, 0.15051,-0.01149,-0.00345,idle
0.32, 0.15546,-0.02141,-0.00342,idle
...
20.48, 0.14347,-0.03758,-0.00369,running
20.56, 0.13447,-0.01657,-0.01520,running
20.72, 0.11453,-0.00961,-0.01021,running
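If you are assembling such a file yourself, a short Python sketch like the following can write sensor readings in this layout (the file name and values here are made up for illustration):

import csv

rows = [
    (0.00, 0.14642, -0.01645, -0.00858, "idle"),
    (0.16, 0.15051, -0.01149, -0.00345, "idle"),
    (20.48, 0.14347, -0.03758, -0.00369, "running"),
]

with open("exercise_session.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["seconds_elapsed", "accX", "accY", "accZ", "label"])
    writer.writerows(rows)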

Once your CSV Wizard is configured, you can use the Studio Uploader, the CLI Uploader, or the Ingestion API:
2. Using Edge Impulse info.labels description file
The other way is to create an info.labels file in your dataset. Edge Impulse will automatically detect it when you upload your dataset and will use this file to set the labels.
The info.labels file looks like the following:
{
    "version": 1,
    "files": [{
        "path": "audio1.wav",
        "category": "split",
        "label": {
            "type": "multi-label",
            "labels": [
                {
                    "label": "noise",
                    "startIndex": 0,
                    "endIndex": 5000
                },
                {
                    "label": "nominal_mode",
                    "startIndex": 5001,
                    "endIndex": 60000
                },
                {
                    "label": "defect",
                    "startIndex": 60001,
                    "endIndex": 60200
                }
            ],
            "metadata": {
                "site_collected": "Factory_01"
            }
        }
    },
    {
        "path": "audio2.wav",
        "category": "split",
        "label": {
            "type": "multi-label",
            "labels": [
                {
                    "label": "noise",
                    "startIndex": 0,
                    "endIndex": 2000
                },
                {
                    "label": "nominal_mode",
                    "startIndex": 2001,
                    "endIndex": 40000
                }
            ],
            "metadata": {
                "site_collected": "Factory_02"
            }
        }
    }]
}
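If your label ranges live in another tool, you can also generate this file programmatically. A minimal Python sketch (file names and ranges are hypothetical, and metadata is omitted for brevity):

import json

info = {
    "version": 1,
    "files": [{
        "path": "audio1.wav",
        "category": "split",
        "label": {
            "type": "multi-label",
            "labels": [
                {"label": "noise", "startIndex": 0, "endIndex": 5000},
                {"label": "nominal_mode", "startIndex": 5001, "endIndex": 60000}
            ]
        }
    }]
}

with open("info.labels", "w") as f:
    json.dump(info, f, indent=4)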
Once you have your info.labels file available, you can upload it using:
The Studio Uploader:
The Studio Uploader will automatically detect the info.labels file:

The CLI Uploader:
> edge-impulse-uploader * --info-file info.labels
Edge Impulse uploader v1.23.0
Endpoints:
API: https://studio.edgeimpulse.com
Ingestion: https://ingestion.edgeimpulse.com
Upload configuration:
Label: Not set, will be inferred from file name
Category: training
Project: Example Multi-label upload (ID: XXXXX)
[ 1/11] Uploading training/machine_multilabel_8.json OK (1589 ms)
[ 2/11] Uploading testing/machine_multilabel_3.json OK (2024 ms)
[ 3/11] Uploading training/machine_multilabel_6.json OK (2176 ms)
[ 4/11] Uploading training/machine_multilabel_2.json OK (2224 ms)
[ 5/11] Uploading testing/machine_multilabel_1.json OK (2394 ms)
[ 6/11] Uploading training/machine_multilabel_8.json OK (2395 ms)
[ 7/11] Uploading training/machine_multilabel_9.json OK (2485 ms)
[ 8/11] Uploading training/machine_multilabel_7.json OK (2603 ms)
[ 9/11] Uploading testing/machine_multilabel_4.json OK (2617 ms)
[10/11] Uploading training/machine_multilabel_11.json OK (3426 ms)
[11/11] Uploading training/machine_multilabel_10.json OK (3488 ms)
Done. Files uploaded successful: 11. Files that failed to upload: 0.
3. Using Edge Impulse structured_labels.labels description file
If you want to use the Ingestion API, you need to use the structured_labels.labels format.
The structured_labels.labels format looks like the following:
{
    "version": 1,
    "type": "structured-labels",
    "structuredLabels": {
        "updown.3.json": [{
            "startIndex": 0,
            "endIndex": 300,
            "label": "first_label"
        }, {
            "startIndex": 301,
            "endIndex": 621,
            "label": "second_label"
        }]
    }
}
Then you can run the following command:
curl -X POST \
    -H "x-api-key: $EI_PROJECT_API_KEY" \
    -H "Content-Type: multipart/form-data" \
    -F "data=@updown.3.json" \
    -F "data=@structured_labels.labels" \
    https://ingestion.edgeimpulse.com/api/training/files
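For reference, a rough Python equivalent of this curl call might look as follows (a sketch using the requests library; it assumes the same updown.3.json data file and the EI_PROJECT_API_KEY environment variable):

import os
import requests

response = requests.post(
    "https://ingestion.edgeimpulse.com/api/training/files",
    headers={"x-api-key": os.environ["EI_PROJECT_API_KEY"]},
    files=[
        ("data", open("updown.3.json", "rb")),
        ("data", open("structured_labels.labels", "rb")),
    ],
)
print(response.status_code, response.text)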
You can have a look at this tutorial for a better understanding: Ingest multi-label data with Edge Impulse API.
Visualizing multi-label samples

Please note that you can also hide the sensors in the graph:

Edit multi-label samples
To edit the labels using the UI, click ⋮ -> Edit labels. The following modal will appear:

Please note that you will need to provide continuous and non-overlapping labels for the full length of your data sample.
The format is like the following:
[
    {
        "label": "label 1",
        "startMs": 0,
        "endMs": 2000
    },
    {
        "label": "label 2",
        "startMs": 2001,
        "endMs": 4000
    },
    {
        "label": "label 3",
        "startMs": 4001,
        "endMs": 4500
    }
]
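Before saving, it can help to sanity-check that the edited labels are continuous, non-overlapping, and cover the full sample length. The helper below is hypothetical (not part of Edge Impulse), assuming a list of label dictionaries as shown above and the sample length in milliseconds:

def validate_labels(labels, sample_length_ms):
    # Labels must start at 0, end at the sample length, and be contiguous.
    labels = sorted(labels, key=lambda l: l["startMs"])
    if labels[0]["startMs"] != 0 or labels[-1]["endMs"] != sample_length_ms:
        return False
    for prev, curr in zip(labels, labels[1:]):
        if curr["startMs"] != prev["endMs"] + 1:
            return False
    return True

labels = [
    {"label": "label 1", "startMs": 0, "endMs": 2000},
    {"label": "label 2", "startMs": 2001, "endMs": 4000},
    {"label": "label 3", "startMs": 4001, "endMs": 4500},
]
print(validate_labels(labels, 4500))  # -> True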
Classify multi-label data
In the Live classification tab, you can classify your multi-label test samples:

Limitations
Labeling UI is available but is only text-based.
Overlapping labels are not supported.
The entire data sample needs to have a label; you cannot leave parts unlabeled.
Please leave us a note on the forum or give feedback using the "?" widget (bottom-right corner) if you see a need or an issue. This helps us prioritize the development or improvement of features.
Resources
Public projects