Functions
delete_all_samples
Delete all samples in a given category. If category is set to None, all samples in the project are deleted.
| Parameters | |
|---|---|
| category | str \| None = None |
| api_key | str \| None = None |
| timeout_sec | float \| None = None |

| Returns |
|---|
| edgeimpulse_api.models.generic_api_response.GenericApiResponse \| None |
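
A minimal sketch of clearing out only the testing set, assuming these helpers are exposed under edgeimpulse.data and that a valid project API key is available:

```python
import edgeimpulse as ei

# Remove every sample in the "testing" category; pass category=None
# (the default) to delete every sample in the project instead.
response = ei.data.delete_all_samples(
    category="testing",
    api_key="ei_...",   # or omit if the key is configured elsewhere
    timeout_sec=30.0,
)
if response is None:
    # The return type allows None, so check before inspecting the response.
    print("No response returned from the deletion request")
```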
delete_sample_by_id
| Parameters | |
|---|---|
| sample_id | int |
| api_key | str \| None = None |
| timeout_sec | float \| None = None |

| Returns |
|---|
| edgeimpulse_api.models.generic_api_response.GenericApiResponse \| None |
delete_samples_by_filename
Delete any samples in your project that match the given filename. Note that the filename argument must not include the original extension. For example, if you uploaded a file named my-image.01.png, you must provide the filename as my-image.01.
| Parameters | |
|---|---|
| filename | str |
| category | str \| None = None |
| api_key | str \| None = None |
| timeout_sec | float \| None = None |

| Returns |
|---|
| Tuple[Any \| None, ...] |
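
For instance, to remove every sample that was uploaded as my-image.01.png (a sketch, assuming the edgeimpulse.data module path):

```python
import edgeimpulse as ei

# Pass the filename without its original extension.
results = ei.data.delete_samples_by_filename(
    filename="my-image.01",   # not "my-image.01.png"
    category="training",      # or None to search all categories
)
print(results)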
download_samples_by_ids
Download samples from your project by their IDs. Each downloaded sample is returned as a Sample object, which contains the raw data in a BytesIO object along with associated metadata.
Important! All time series data is returned as a JSON file (in BytesIO format)
with a timestamp column. This includes files originally uploaded as CSV, JSON, and
CBOR. Edge Impulse Studio removes the timestamp column from any uploaded CSV
files and computes an estimated sample rate. The timestamps are computed based on
the sample rate, will always start at 0, and will be in milliseconds. These
timestamps may not be the same as the original timestamps in the uploaded file.
| Parameters | |
|---|---|
| sample_ids | int \| List[int] |
| api_key | str \| None = None |
| timeout_sec | float \| None = None |
| max_workers | int \| None = None |
| show_progress | bool \| None = False |
| pool_maxsize | int \| None = 20 |
| pool_connections | int \| None = 20 |

| Returns |
|---|
| List[edgeimpulse.data.sample_type.Sample] |
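
A sketch of downloading two samples and inspecting their payloads (the IDs are hypothetical, and the module path is assumed to be edgeimpulse.data):

```python
import edgeimpulse as ei

samples = ei.data.download_samples_by_ids(
    sample_ids=[12345, 12346],   # hypothetical sample IDs
    show_progress=True,
)
for sample in samples:
    # sample.data holds the raw payload; time series come back as JSON with a
    # reconstructed, zero-based timestamp column in milliseconds.
    raw = sample.data.read()
    print(sample.filename, sample.label, len(raw), "bytes")
```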
get_filename_by_id
| Parameters | |
|---|---|
| sample_id | int |
| api_key | str \| None = None |
| timeout_sec | float \| None = None |

| Returns |
|---|
| str \| None |
get_sample_ids
Get the IDs of samples in your project, optionally filtered by filename, category, and labels. Note that when you upload a sample, for example my-image.01.png, it will be stored in your project with a hash such as my-image.01.png.4f262n1b.json. To find the ID(s) that match this sample, you must provide the argument filename=my-image.01. Notice the lack of extension and hash.
Because of the potential for multiple samples (i.e., different sample IDs) with the
same filename, we recommend providing unique filenames for your samples when
uploading.
| Parameters | |
|---|---|
| filename | str \| None = None |
| category | str \| None = None |
| labels | str \| None = None |
| api_key | str \| None = None |
| num_workers | int \| None = 4 |
| timeout_sec | float \| None = None |

| Returns |
|---|
| List[edgeimpulse.data.sample_type.SampleInfo] |
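
A sketch of looking up the IDs for a known filename. It assumes the edgeimpulse.data module path and that SampleInfo exposes a sample_id field:

```python
import edgeimpulse as ei

# Query by the base filename: no extension, no hash suffix.
infos = ei.data.get_sample_ids(filename="my-image.01", category="training")
ids = [info.sample_id for info in infos]   # sample_id field assumed
print("Matching sample IDs:", ids)
```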
infer_category_and_label_from_filename
Infer a sample's category and label from its file path, for example my-dataset/training/wave.1.cbor, where wave is the label and training is the category. It checks whether training, testing, or anomaly appears in the filename to determine the sample category.
| Parameters | |
|---|---|
| sample | edgeimpulse.data.sample_type.Sample |
| file | str |

| Returns |
|---|
| None |
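
For example (a sketch; given the None return type, the function is assumed to set the fields on the Sample in place):

```python
import io
import edgeimpulse as ei
from edgeimpulse.data.sample_type import Sample

sample = Sample(data=io.BytesIO(b"..."), filename="wave.1.cbor")
ei.data.infer_category_and_label_from_filename(
    sample=sample,
    file="my-dataset/training/wave.1.cbor",
)
print(sample.category, sample.label)  # expected: "training", "wave"
```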
numpy_timeseries_to_sample
| Parameters | |
|---|---|
| values |  |
| sensors | List[edgeimpulse.data.sample_type.Sensor] |
| sample_rate_ms | int |

| Returns |
|---|
| edgeimpulse.data.sample_type.Sample |
pandas_dataframe_to_sample
Convert a pandas DataFrame to a single Sample. The DataFrame must have:
- More than one row
- A sample rate, or an index from which the sample rate can be inferred:
  - the index must therefore be monotonically increasing
  - and be an int or a date
| Parameters | |
|---|---|
| df |  |
| sample_rate_ms | int \| None = None |
| label | str \| None = None |
| filename | str \| None = None |
| axis_columns | List[str] \| None = None |
| metadata | dict \| None = None |
| label_col | str \| None = None |
| category | Literal['training', 'testing', 'split'] = 'split' |

| Returns |
|---|
| edgeimpulse.data.sample_type.Sample |
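
A sketch of building a Sample from a small accelerometer DataFrame (the column names are illustrative):

```python
import pandas as pd
import edgeimpulse as ei

# Three-axis accelerometer data sampled every 10 ms.
df = pd.DataFrame(
    {
        "accX": [0.1, 0.2, 0.3, 0.4],
        "accY": [0.0, 0.1, 0.0, -0.1],
        "accZ": [9.8, 9.7, 9.8, 9.9],
    }
)

sample = ei.data.pandas_dataframe_to_sample(
    df,
    sample_rate_ms=10,   # given explicitly, since the index carries no timing
    label="idle",
    filename="idle.01",
    category="training",
)
```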
stream_samples_by_ids
| Parameters | |
|---|---|
| sample_ids | int \| Sequence[int] |
| api_key | str \| None = None |
| timeout_sec | float \| None = None |
| max_workers | int \| None = None |
| show_progress | bool \| None = False |
| pool_maxsize | int \| None = 20 |
| pool_connections | int \| None = 20 |

| Returns |
|---|
| Generator[edgeimpulse.data.sample_type.Sample, None, None] |
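
Because this returns a generator rather than a list, samples can be processed one at a time without holding them all in memory. A sketch (IDs hypothetical, module path assumed):

```python
import edgeimpulse as ei

for sample in ei.data.stream_samples_by_ids(
    sample_ids=[12345, 12346],   # hypothetical sample IDs
    show_progress=True,
):
    print(sample.filename, sample.category)
```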
upload_directory
| Parameters | |
|---|---|
| directory | str |
| category | str \| None = None |
| label | str \| None = None |
| metadata | dict \| None = None |
| transform | callable \| None = None |
| allow_duplicates | bool \| None = False |
| show_progress | bool \| None = False |
| batch_size | int \| None = 1024 |

| Returns |
|---|
| edgeimpulse.data.sample_type.UploadSamplesResponse |
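
For example, uploading a folder of recordings with a shared label (a sketch; the directory layout and metadata are illustrative):

```python
import edgeimpulse as ei

response = ei.data.upload_directory(
    directory="datasets/coughs",
    label="cough",
    category="training",
    metadata={"source": "field-recordings"},
    show_progress=True,
)
print(response)
```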
upload_exported_dataset
Upload a dataset that was previously exported from Edge Impulse Studio, using the directory's info.labels information. Use this when you've exported your data in the Studio via the export functionality.
| Parameters | |
|---|---|
| directory | str |
| transform | callable \| None = None |
| allow_duplicates | bool \| None = False |
| show_progress | bool \| None = False |
| batch_size | int \| None = 1024 |

| Returns |
|---|
| edgeimpulse.data.sample_type.UploadSamplesResponse |
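
A sketch of re-uploading a Studio export (the path is illustrative):

```python
import edgeimpulse as ei

response = ei.data.upload_exported_dataset(
    directory="exports/my-project-export",  # must contain info.labels
    show_progress=True,
)
print(response)
```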
upload_numpy
| Parameters | |
|---|---|
| data |  |
| labels | List[str] |
| sensors | List[edgeimpulse.data.sample_type.Sensor] |
| sample_rate_ms | int |
| metadata | dict \| None = None |
| category | Literal['training', 'testing', 'split', 'anomaly'] = 'split' |

| Returns |
|---|
| edgeimpulse.data.sample_type.UploadSamplesResponse |
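
A sketch of uploading NumPy time series. The (samples, time steps, axes) array layout and the Sensor(name=..., units=...) fields are assumptions here, not confirmed by this reference:

```python
import numpy as np
import edgeimpulse as ei
from edgeimpulse.data.sample_type import Sensor

# Two samples, 100 time steps each, 3 accelerometer axes
# (the (samples, time steps, axes) layout is assumed).
data = np.random.randn(2, 100, 3)

response = ei.data.upload_numpy(
    data=data,
    labels=["wave", "idle"],                 # one label per sample
    sensors=[
        Sensor(name="accX", units="m/s2"),   # Sensor fields assumed
        Sensor(name="accY", units="m/s2"),
        Sensor(name="accZ", units="m/s2"),
    ],
    sample_rate_ms=10,
    category="training",
)
print(response)
```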
upload_pandas_dataframe
| Parameters | |
|---|---|
| df |  |
| feature_cols | List[str] |
| label_col | str \| None = None |
| category_col | str \| None = None |
| metadata_cols | List[str] \| None = None |

| Returns |
|---|
| edgeimpulse.data.sample_type.UploadSamplesResponse |
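
A sketch, assuming each DataFrame row is one sample whose features live in feature_cols:

```python
import pandas as pd
import edgeimpulse as ei

df = pd.DataFrame(
    {
        "feature1": [0.1, 0.4, 0.2],
        "feature2": [1.2, 0.9, 1.1],
        "label":    ["idle", "wave", "idle"],
    }
)

response = ei.data.upload_pandas_dataframe(
    df,
    feature_cols=["feature1", "feature2"],
    label_col="label",
)
print(response)
```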
upload_pandas_dataframe_wide
| Parameters | |
|---|---|
| df |  |
| sample_rate_ms | int |
| data_col_start | int \| None = None |
| label_col | str \| None = None |
| category_col | str \| None = None |
| metadata_cols | List[str] \| None = None |
| data_col_length | int \| None = None |
| data_axis_cols | List[str] \| None = None |

| Returns |
|---|
| edgeimpulse.data.sample_type.UploadSamplesResponse |
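
A sketch of a wide layout, under the assumption that each row is one sample and its time series values occupy the columns from data_col_start onward:

```python
import numpy as np
import pandas as pd
import edgeimpulse as ei

# One sample per row; column 0 holds the label, columns 1..100 hold the
# time series values (this layout is assumed for illustration).
values = np.random.randn(3, 100)
df = pd.DataFrame(values, columns=[f"t{i}" for i in range(100)])
df.insert(0, "label", ["wave", "idle", "wave"])

response = ei.data.upload_pandas_dataframe_wide(
    df,
    sample_rate_ms=10,
    label_col="label",
    data_col_start=1,   # time series values begin at column index 1
)
print(response)
```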
upload_pandas_dataframe_with_group
Upload a DataFrame that contains multiple time series samples. Rows are grouped by the column given in group_by in order to detect which time series values belong to which sample.
| Parameters | |
|---|---|
| df |  |
| timestamp_col | str |
| group_by | str |
| feature_cols | List[str] |
| label_col | str \| None = None |
| category_col | str \| None = None |
| metadata_cols | List[str] \| None = None |

| Returns |
|---|
| edgeimpulse.data.sample_type.UploadSamplesResponse |
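
A sketch of the long, grouped layout (column names illustrative):

```python
import pandas as pd
import edgeimpulse as ei

# Two samples ("s1" and "s2"), each a short two-axis time series.
df = pd.DataFrame(
    {
        "sample_name": ["s1", "s1", "s1", "s2", "s2", "s2"],
        "timestamp":   [0, 10, 20, 0, 10, 20],
        "accX":        [0.1, 0.2, 0.3, 1.1, 1.0, 0.9],
        "accY":        [0.0, 0.1, 0.0, 0.2, 0.3, 0.2],
        "label":       ["idle"] * 3 + ["wave"] * 3,
    }
)

response = ei.data.upload_pandas_dataframe_with_group(
    df,
    timestamp_col="timestamp",
    group_by="sample_name",
    feature_cols=["accX", "accY"],
    label_col="label",
)
print(response)
```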
upload_pandas_sample
| Parameters | |
|---|---|
| df |  |
| label | str \| None = None |
| sample_rate_ms | int \| None = None |
| filename | str \| None = None |
| axis_columns | List[str] \| None = None |
| metadata | dict \| None = None |
| label_col | str \| None = None |
| category | Literal['training', 'testing', 'split'] = 'split' |

| Returns |
|---|
| edgeimpulse.data.sample_type.UploadSamplesResponse |
upload_plain_directory
| Parameters | |
|---|---|
| directory | str |
| category | str \| None = None |
| label | str \| None = None |
| metadata | dict \| None = None |
| transform | callable \| None = None |
| allow_duplicates | bool \| None = False |
| show_progress | bool \| None = False |
| batch_size | int \| None = 1024 |

| Returns |
|---|
| edgeimpulse.data.sample_type.UploadSamplesResponse |
upload_samples
Upload one or more samples to your project. Each sample must be wrapped in a Sample object, which contains metadata about that sample. Give this function a single Sample or a list of Sample objects to upload. The data field of each Sample must be a raw binary stream, such as a BufferedIOBase object (which you can create with open(..., "rb")).
| Parameters | |
|---|---|
| samples | edgeimpulse.data.sample_type.Sample \| List[edgeimpulse.data.sample_type.Sample] |
| allow_duplicates | bool \| None = False |
| api_key | str \| None = None |
| timeout_sec | float \| None = None |
| max_workers | int \| None = None |
| show_progress | bool \| None = False |
| pool_maxsize | int \| None = 20 |
| pool_connections | int \| None = 20 |

| Returns |
|---|
| edgeimpulse.data.sample_type.UploadSamplesResponse |
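
A sketch of wrapping a WAV file in a Sample and uploading it (the file path and metadata are illustrative):

```python
import edgeimpulse as ei
from edgeimpulse.data.sample_type import Sample

with open("recordings/cough.01.wav", "rb") as f:
    sample = Sample(
        data=f,                       # raw binary stream
        filename="cough.01.wav",
        label="cough",
        category="training",
        metadata={"device": "handheld-mic"},
    )
    response = ei.data.upload_samples(sample, allow_duplicates=False)

print(response)
```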
Classes
Sample
open(..., "rb"). The
upload_samples() function expects Sample objects as input.
| Parameters | |
|---|---|
| data | io.BufferedIOBase \| _io.StringIO \| str |
| filename | str \| None = None |
| category | Literal['training', 'testing', 'anomaly', 'split'] \| None = 'split' |
| label | str \| None = None |
| bounding_boxes | List[dict] \| None = None |
| metadata | dict \| None = None |
| sample_id | int \| None = None |
| structured_labels | List[dict] \| None = None |

| Instance variables | |
|---|---|
| bounding_boxes | List[dict] \| None |
| category | Literal['training', 'testing', 'anomaly', 'split'] \| None |
| data | io.BufferedIOBase \| _io.StringIO \| str |
| filename | str \| None |
| label | str \| None |
| metadata | dict \| None |
| sample_id | int \| None |
| structured_labels | List[dict] \| None |
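
Since data also accepts a StringIO or str, a Sample can be built directly from in-memory CSV text, as in this sketch (the CSV content is illustrative):

```python
import io
from edgeimpulse.data.sample_type import Sample

csv_text = "timestamp,accX,accY,accZ\n0,0.1,0.0,9.8\n10,0.2,0.1,9.7\n"

sample = Sample(
    data=io.StringIO(csv_text),   # text stream instead of a binary file
    filename="idle.01.csv",
    label="idle",
    category="training",
    metadata={"collected_by": "bench-rig"},
)
```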