Add a new data item. You can add a maximum of 10,000 files directly through this API. Use addOrganizationDataFile to add additional files.
Organization ID
Name of the bucket (as an Edge Impulse name)
Optional path in the bucket to create this data item (files are created under this path).
Key-value pair of metadata (in JSON format)
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
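A minimal Python sketch of calling this add-data-item endpoint with the requests library. The base URL, path, header name and form-field names below are assumptions for illustration only and may differ from the actual schema.

```python
import requests

API_KEY = "ei_..."  # organization API key (placeholder)
ORG_ID = 1          # Organization ID

# Assumed path and field names; the endpoint expects multipart/form-data.
with open("reading-001.csv", "rb") as f:
    resp = requests.post(
        f"https://studio.edgeimpulse.com/v1/api/organizations/{ORG_ID}/data",
        headers={"x-api-key": API_KEY},
        data={
            "name": "patient-001",                   # data item name
            "bucketName": "my-edge-impulse-bucket",  # bucket name (Edge Impulse name)
            "bucketPath": "clinical/patient-001",    # optional path inside the bucket
            "metadata": '{"site": "amsterdam"}',     # key-value metadata as JSON
        },
        files={"files": ("reading-001.csv", f, "text/csv")},
    )
body = resp.json()
print(body["success"], body.get("error"))
```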
Updates storage bucket details. This only updates fields that were set in the request body.
Organization ID
Bucket ID
S3 access key
S3 secret key
S3 endpoint
S3 bucket
S3 region
Set this if you don't have access to the root of this bucket. Only used to verify connectivity to this bucket.
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
Add a storage bucket.
Organization ID
S3 access key
S3 secret key
S3 endpoint
S3 bucket
S3 region
Set this if you don't have access to the root of this bucket. Only used to verify connectivity to this bucket.
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
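A hedged sketch of adding a storage bucket with the parameters listed above; the path and JSON field names are assumptions and the credentials are placeholders.

```python
import requests

API_KEY = "ei_..."  # organization API key (placeholder)
ORG_ID = 1

# Assumed path and body; field names mirror the parameters listed above.
resp = requests.post(
    f"https://studio.edgeimpulse.com/v1/api/organizations/{ORG_ID}/buckets",
    headers={"x-api-key": API_KEY},
    json={
        "accessKey": "AKIA...",                            # S3 access key
        "secretKey": "...",                                # S3 secret key
        "endpoint": "https://s3.eu-west-1.amazonaws.com",  # S3 endpoint
        "bucket": "my-sensor-data",                        # S3 bucket
        "region": "eu-west-1",                             # S3 region
        # "prefix": "datasets/",  # set if you can't access the bucket root (field name assumed)
    },
)
print(resp.json())  # {'success': True} or {'success': False, 'error': '...'}
```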
Retrieve all configured storage buckets. This does not list the secret key.
Organization ID
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
S3 access key
S3 endpoint
S3 bucket
S3 region
Whether we can reach the bucket
Set this if you don't have access to the root of this bucket. Only used to verify connectivity to this bucket.
Get storage bucket details.
Organization ID
Bucket ID
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
S3 access key
S3 endpoint
S3 bucket
S3 region
Whether we can reach the bucket
Set this if you don't have access to the root of this bucket. Only used to verify connectivity to this bucket.
Download all files in the given folder in a dataset, ignoring any subdirectories.
Organization ID
Dataset name
Path, relative to dataset
ZIP file
Preview a single file from a data item (same as downloadOrganizationDataFile, but served with Content-Disposition: inline; the content may be truncated).
Organization ID
Data ID
File name
File
Download a single file from a data item.
Organization ID
Data ID
File name
File
Download all data for this data item.
Organization ID
Data ID
Data filter in SQL WHERE format, where you can reference 'dataset', 'bucket', 'name', 'total_file_count', 'total_file_size', 'created' and any metadata label through 'metadata->' (dots are replaced by underscore).
"dataset = 'activity data' AND (label = 'running' OR metadata->user = 'Jan Jongboom')"
ZIP file
View a file that's located in a dataset (requires JWT auth). File might be converted (e.g. Parquet) or truncated (e.g. CSV).
Organization ID
Dataset name
Path to file in portal
OK
Hide a dataset (does not remove underlying data)
Organization ID
Dataset name
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
Update all data items. HEADs all underlying buckets to retrieve the last file information. Use this API after uploading data directly to S3. If your dataset has bucketId and bucketPath set then this will also remove items that have been removed from S3.
Organization ID
Selected dataset
"activity data"
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
Job identifier. Status updates will include this identifier.
12873488112
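A sketch of triggering this refresh after uploading files straight to S3. The path and the name of the job-identifier field are assumptions.

```python
import requests

API_KEY = "ei_..."  # placeholder
ORG_ID = 1

# Assumed path; the dataset name selects which data items to refresh.
resp = requests.post(
    f"https://studio.edgeimpulse.com/v1/api/organizations/{ORG_ID}/data/refresh",
    headers={"x-api-key": API_KEY},
    json={"dataset": "activity data"},
)
body = resp.json()
if body.get("success"):
    print("refresh running as job", body.get("id"))  # job identifier (field name assumed)
else:
    print("failed:", body.get("error"))
```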
Remove a storage bucket. This will render any data in this storage bucket unreachable.
Organization ID
Bucket ID
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
Delete a data item. This will also remove the items from the underlying storage if your dataset has "bucketPath" set.
Organization ID
Data ID
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
Delete a single file from a data item.
Organization ID
Data ID
File name
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
Bulk update the metadata of many data items in one go. This requires you to submit a CSV file with headers, one of which must be named 'name'. The other columns are used as metadata keys.
Organization ID
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
Job identifier. Status updates will include this identifier.
12873488112
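The expected CSV shape and one possible upload call, sketched under an assumed path and multipart field name.

```python
import io
import requests

API_KEY = "ei_..."  # placeholder
ORG_ID = 1

# One column must be 'name'; every other column becomes a metadata key.
csv_content = (
    "name,user,site\n"
    "patient-001,Jan Jongboom,amsterdam\n"
    "patient-002,Jan Jongboom,rotterdam\n"
)

# Assumed path and multipart field name.
resp = requests.post(
    f"https://studio.edgeimpulse.com/v1/api/organizations/{ORG_ID}/data/bulk-metadata",
    headers={"x-api-key": API_KEY},
    files={"csv": ("metadata.csv", io.BytesIO(csv_content.encode()), "text/csv")},
)
print(resp.json())  # returns a job identifier while the update runs
```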
Change the dataset for selected data items.
Organization ID
Selected dataset
"activity data"
Data IDs as an Array
Data filter in SQL WHERE format, where you can reference 'dataset', 'bucket', 'name', 'total_file_count', 'total_file_size', 'created' and any metadata label through 'metadata->' (dots are replaced by underscore).
"dataset = 'activity data' AND (label = 'running' OR metadata->user = 'Jan Jongboom')"
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
Job identifier. Status updates will include this identifier.
12873488112
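A sketch of moving items either by explicit IDs or by the SQL-style filter; the path and body field names are assumed.

```python
import requests

API_KEY = "ei_..."  # placeholder
ORG_ID = 1

# Assumed path; select items with "dataIds" or with the SQL-style "filter".
resp = requests.post(
    f"https://studio.edgeimpulse.com/v1/api/organizations/{ORG_ID}/data/change-dataset",
    headers={"x-api-key": API_KEY},
    json={
        "dataset": "archived activity data",  # target dataset
        "dataIds": [101, 102, 103],
        # "filter": "metadata->user = 'Jan Jongboom'",  # alternative to dataIds
    },
)
print(resp.json())  # includes a job identifier for the move
```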
Clear all checklist flags for selected data items.
Organization ID
Selected dataset
"activity data"
Data IDs as an Array
Data filter in SQL WHERE format, where you can reference 'dataset', 'bucket', 'name', 'total_file_count', 'total_file_size', 'created' and any metadata label through 'metadata->' (dots are replaced by underscore).
"dataset = 'activity data' AND (label = 'running' OR metadata->user = 'Jan Jongboom')"
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
Job identifier. Status updates will include this identifier.
12873488112
Add a new file to an existing data item.
Organization ID
Data ID
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
Update the data item metadata.
Organization ID
Data ID
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
Rename a file in a dataset
Organization ID
Dataset name
S3 path (within the portal)
S3 path (within the portal)
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
Download a file from a dataset. Will return a signed URL to the bucket.
Organization ID
Dataset name
S3 path (within the portal)
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
Signed URL to download the file
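Because this endpoint returns a signed URL rather than the file itself, downloading is a two-step flow. The path, query parameter and the name of the URL field are assumptions.

```python
import requests

API_KEY = "ei_..."  # placeholder
ORG_ID = 1
DATASET = "activity data"

# 1. Ask the API for a signed URL (assumed path and query parameter).
resp = requests.get(
    f"https://studio.edgeimpulse.com/v1/api/organizations/{ORG_ID}/datasets/{DATASET}/files/download",
    headers={"x-api-key": API_KEY},
    params={"path": "clinical/patient-001/reading.parquet"},
)
body = resp.json()

# 2. Fetch the file straight from the bucket; the signed URL needs no API key.
if body.get("success"):
    data = requests.get(body["url"]).content  # field name assumed
    with open("reading.parquet", "wb") as f:
        f.write(data)
```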
Bulk adds data items that already exist in a storage bucket. The bucket path specified should contain folders. Each folder is added as a data item in Edge Impulse.
Organization ID
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
Job identifier. Status updates will include this identifier.
12873488112
Verify whether we can reach a dataset (and return some random files, used for data sources)
Organization ID
Dataset name
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
20 random files from the bucket.
Indicates whether there are any info.labels files in this bucket. If so, those are used for category/labels.
A signed URL that allows you to PUT an item, to check whether CORS headers are set up correctly for this bucket.
Get all transformation jobs that ran for a data item. If limit/offset is not provided, a maximum of 20 results is returned.
Organization ID
Data ID
Maximum number of results
Offset in results, can be used in conjunction with LimitResultsParameter to implement paging.
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
Creates a signed link to securely upload data to an S3 bucket directly from the client.
Organization ID
Dataset name
File name
File size in bytes
Hash to identify file changes
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
S3 Upload Link
S3 File Tag
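Uploading through a signed link is also two steps: request the link, then PUT the bytes to it. The path and field names below are assumptions; the hash is whatever fingerprint you use to detect file changes.

```python
import hashlib
import requests

API_KEY = "ei_..."  # placeholder
ORG_ID = 1
DATASET = "activity data"

file_name = "reading-003.csv"
file_bytes = open(file_name, "rb").read()

# 1. Request a signed upload link (assumed path and field names).
resp = requests.post(
    f"https://studio.edgeimpulse.com/v1/api/organizations/{ORG_ID}/datasets/{DATASET}/files/upload-link",
    headers={"x-api-key": API_KEY},
    json={
        "fileName": file_name,
        "fileSize": len(file_bytes),
        "fileHash": hashlib.md5(file_bytes).hexdigest(),  # used to detect file changes
    },
)
body = resp.json()

# 2. PUT the file directly to the bucket; no API key needed, the URL itself is signed.
requests.put(body["url"], data=file_bytes).raise_for_status()  # field name assumed
```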
Verify whether we can reach a bucket before adding it.
Organization ID
S3 access key
S3 secret key
S3 bucket
S3 endpoint
S3 region
Optional prefix in the bucket. Set this if you don't have access to the full bucket for example.
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
20 random files from the bucket.
Indicates whether there are any info.labels files in this bucket. If so, those are used for category/labels.
A signed URL that allows you to PUT an item, to check whether CORS headers are set up correctly for this bucket.
Lists all data items. This can be filtered by the ?filter parameter.
Organization ID
Selected dataset
"activity data"
Data filter in SQL WHERE format, where you can reference 'dataset', 'bucket', 'name', 'total_file_count', 'total_file_size', 'created' and any metadata label through 'metadata->' (dots are replaced by underscore).
"dataset = 'activity data' AND (label = 'running' OR metadata->user = 'Jan Jongboom')"
Maximum number of results
Offset in results, can be used in conjunction with LimitResultsParameter to implement paging.
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
String that's passed in to a transformation block in --metadata (the metadata + a dataItemInfo object)
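A paging sketch over the list endpoint using limit/offset and the SQL-style filter described above; the path and response field names are assumptions.

```python
import requests

API_KEY = "ei_..."  # placeholder
ORG_ID = 1

params = {
    "dataset": "activity data",
    "filter": "metadata->user = 'Jan Jongboom' AND total_file_size > 1000000",
    "limit": 50,
    "offset": 0,
}
while True:
    resp = requests.get(
        f"https://studio.edgeimpulse.com/v1/api/organizations/{ORG_ID}/data",  # assumed path
        headers={"x-api-key": API_KEY},
        params=params,
    )
    items = resp.json().get("data", [])  # field name assumed
    for item in items:
        print(item["name"])
    if len(items) < params["limit"]:     # last page reached
        break
    params["offset"] += params["limit"]
```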
Verify whether we can reach a bucket that has already been added to the organization.
Organization ID
Bucket ID
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
20 random files from the bucket.
Indicates whether there are any info.labels files in this bucket. If so, those are used for category/labels.
A signed URL that allows you to PUT an item, to check whether CORS headers are set up correctly for this bucket.
Set information about a dataset
Organization ID
Dataset name
Bucket ID
Path in the bucket
Number of levels deep for data items, e.g. if you have folder "test/abc", with value 1 "test" will be a data item, with value 2 "test/abc" will be a data item. Only used for "clinical" datasets.
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
Add a new research dataset
Organization ID
Bucket ID
Path in the bucket
Number of levels deep for data items, e.g. if you have folder "test/abc", with value 1 "test" will be a data item, with value 2 "test/abc" will be a data item. Only used for "clinical" type.
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
Job identifier. Status updates will include this identifier.
12873488112
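A sketch of creating a bucket-backed dataset with the parameters listed above; the path and field names are assumptions.

```python
import requests

API_KEY = "ei_..."  # placeholder
ORG_ID = 1

# bucketPathLevels controls how deep folders become data items (see the description above).
resp = requests.post(
    f"https://studio.edgeimpulse.com/v1/api/organizations/{ORG_ID}/datasets",  # assumed path
    headers={"x-api-key": API_KEY},
    json={
        "dataset": "clinical study 2024",
        "bucketId": 42,
        "bucketPath": "clinical/",
        "bucketPathLevels": 1,
    },
)
print(resp.json())  # includes a job identifier while the dataset is indexed
```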
Get a data item. This will HEAD the underlying bucket to retrieve the last file information.
Organization ID
Data ID
Data filter in SQL WHERE format, where you can reference 'dataset', 'bucket', 'name', 'total_file_count', 'total_file_size', 'created' and any metadata label through 'metadata->' (dots are replaced by underscore).
"dataset = 'activity data' AND (label = 'running' OR metadata->user = 'Jan Jongboom')"
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
Lists all files included by the filter.
Organization ID
Selected dataset
"activity data"
Data filter in SQL WHERE format, where you can reference 'dataset', 'bucket', 'name', 'total_file_count', 'total_file_size', 'created' and any metadata label through 'metadata->' (dots are replaced by underscore).
"dataset = 'activity data' AND (label = 'running' OR metadata->user = 'Jan Jongboom')"
Maximum number of results
Offset in results, can be used in conjunction with LimitResultsParameter to implement paging.
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
List all files and directories under the specified prefix.
Organization ID
Dataset name
S3 prefix
Only one S3 page (1000 items typically) is returned. Pass in the continuationToken on the next request to receive the next page.
If set, then no files will be returned
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
"2019-07-21T17:32:28Z"
Preview files and directories in a default dataset for the given prefix, with support for wildcards. This is an internal API used when starting a transformation job.
Organization ID
Dataset name
S3 prefix
Return either files or folders matching the specified prefix
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
"2019-07-21T17:32:28Z"
True if results are truncated.
Explains why results are truncated; only present in the response if isTruncated is true. Results can be truncated if there are too many results (more than 500 matches), or if searching for more results is too expensive (for example, the dataset contains many items but very few match the given wildcard).
Get information about a dataset
Organization ID
Dataset name
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)
Bucket ID
Path in the bucket
Full bucket path, including the protocol (e.g. s3://bucket/path), to be used in the UI.
Number of levels deep for data items, e.g. if you have folder "test/abc", with value 1 "test" will be a data item, with value 2 "test/abc" will be a data item. Only used for "clinical" type.
Location of the dataset within the bucket
Delete a file from a dataset
Organization ID
Dataset name
S3 path (within the portal)
OK
Whether the operation succeeded
Optional error description (set if 'success' was false)