Uploader

You can upload your existing data samples and datasets to your project directly through the Edge Impulse Studio Uploader.

The uploader signs local files and uploads them to the ingestion service. This is useful to upload existing data samples and entire datasets, or to migrate data between Edge Impulse instances.

The uploader currently handles these types of files:

  • .cbor - Files in the Edge Impulse Data Acquisition format. The uploader will not re-sign these files, only upload them.

  • .json - Files in the Edge Impulse Data Acquisition format. The uploader will not re-sign these files, only upload them.

  • .csv - Files in the Edge Impulse Comma Separated Values (CSV) format. If you have configured the "CSV wizard", the settings will be used to parse your CSV files.

  • .wav - Lossless audio files. It's recommended to use the same sampling frequency for all files in your data set, as the signal processing output may depend on the frequency.

  • .jpg and .png - Image files. It's recommended to use the same aspect ratio for all files in your data set.

  • .mp4 and .avi - Video files. From the Studio, you can then split a video file into individual images at a configurable number of frames per second.

  • info.labels - JSON-formatted file (without the .json extension). You can use it to add metadata and apply custom labeling strategies (single-label vs. multi-label, float-valued labels, etc.). See the Edge Impulse Exporter format below.

The uploader also handles several types of image dataset annotation formats; these are described in the Image dataset annotation format section below.

Need more?

If none of the above options suits your project, you can also have a look at the Transformation blocks to parse your data samples and create a dataset supported by Edge Impulse. See Building your Transformation Blocks.

To upload data using the uploader, go to the Data acquisition page and click on the uploader button as shown in the image below:

Bounding boxes?

If you have existing bounding boxes for your images dataset, make sure your project's labeling method is set to Bounding boxes (object detection). You can change this setting in your project's dashboard.

Then you need to upload any label files with your images. You can upload object detection datasets in any supported annotation format. Select both your images and the labels file when uploading to apply the labels. The uploader will try to automatically detect the right format.

Upload data

Upload mode

Select individual files: This option lets you select multiple individual files within a single folder. If you want to upload images with bounding boxes, make sure to also select the label files.

Select a folder: This option lets you select one folder, including all of its subfolders.

Upload into a category

Select which category you want to upload your dataset into. The options are training, testing, or an automatic 80/20 split between your data samples.

If needed, you can always perform a split later from your project's dashboard.
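Conceptually, the 80/20 option is just a shuffled split of your sample list. A minimal sketch (the shuffle and seed here are illustrative, not Edge Impulse's actual implementation):

```python
import random

def split_80_20(files, seed=42):
    """Shuffle a list of sample files and split it 80/20 into
    training and testing sets. Illustrative only."""
    files = list(files)
    random.Random(seed).shuffle(files)
    cut = int(len(files) * 0.8)
    return files[:cut], files[cut:]

train, test = split_80_20([f"sample{i}.wav" for i in range(10)])
```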

Label

When a labeling method is not provided, the labels are automatically inferred from the filename through the following regex: ^[a-zA-Z0-9\s-_]+. For example: idle.01 will yield the label idle.

Thus, if you want to use labels (string values) that contain float values (e.g. "0.01", "5.02"), automatic labeling won't work.
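You can reproduce the label inference in a few lines to check which label a given filename would get (note that the hyphen must be escaped inside a Python character class; the pattern is otherwise the one quoted above):

```python
import re

# Same pattern as above; the hyphen is escaped for Python's re module.
LABEL_RE = re.compile(r"^[a-zA-Z0-9\s\-_]+")

def infer_label(filename: str) -> str:
    """Sketch of the Uploader's automatic label inference."""
    match = LABEL_RE.match(filename)
    return match.group(0) if match else ""

print(infer_label("idle.01"))   # -> idle
print(infer_label("0.01.wav"))  # -> 0 (why float-valued labels don't work)
```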

To bypass this limitation, you can create an info.labels JSON file describing your dataset files. We also support adding metadata to your samples. See below to understand the Edge Impulse Exporter format.

Edge Impulse Exporter format (info.labels files)

The Edge Impulse Exporter acquisition format provides a simple and intuitive way to store files and associated labels. Folders containing data in this format will take the following structure:

.
├── info.labels
├── training
│   ├── info.labels
│   ├── file1.wav
│   ├── file2.wav
│   ├── file3.wav
│   ...
│   └── file100.jpg
└── testing
    ├── info.labels
    ├── file101.wav
    ├── file102.wav
    ...
    └── file120.wav

2 directories, 123 files

The subdirectories contain files in any Edge Impulse-supported format (see above). Each file represents a sample and is associated with its respective labels in the info.labels file.

The info.labels file (which can be located in each subdirectory or at the folder root) provides detailed information about the labels. The file follows a JSON format, with the following structure:

  • version: Indicates the version of the label format.

  • files: A list of objects, where each object represents a supported file format and its associated labels.

    • path: The path or file name.

    • category: Indicates whether the sample belongs to the training or testing set.

    • label (optional): Provides information about the labeled objects.

      • type: Specifies the type of label - unlabeled, label, multi-label

      • label (optional): The actual label or class name of the sample.

      • labels (optional): The labels in the multi-label format:

        • label: Label for the given period.

        • startIndex: Timestamp in milliseconds.

        • endIndex: Timestamp in milliseconds.

    • metadata (optional): Additional metadata associated with the sample, such as the site where it was collected, the timestamp, or any other useful information.

    • boundingBoxes (Optional): A list of objects, where each object represents a bounding box for an object within the image.

      • label: The label or class name of the object within the bounding box.

      • x, y: The coordinates of the top-left corner of the bounding box.

      • width, height: The width and height of the bounding box.
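The structure above can be generated with a few lines of Python. A minimal sketch; the file names, labels, and metadata below are made up for illustration:

```python
import json

# Illustrative samples; paths, labels and metadata are made up.
info = {
    "version": 1,
    "files": [
        {
            "path": "file1.wav",
            "category": "training",
            "label": {"type": "label", "label": "idle"},
            "metadata": {"site": "lab-1"},
        },
        {
            "path": "file2.wav",
            "category": "training",
            "label": {
                "type": "multi-label",
                "labels": [
                    {"label": "idle", "startIndex": 0, "endIndex": 5000},
                    {"label": "wave", "startIndex": 5000, "endIndex": 10000},
                ],
            },
        },
    ],
}

# Write the info.labels file next to the samples it describes.
with open("info.labels", "w") as f:
    json.dump(info, f, indent=4)
```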

The Studio Uploader will automatically detect the info.labels file:

Want to try it yourself? You can export any dataset from an Edge Impulse public project once you have cloned it.

Image dataset annotation format

Image datasets can be found in a range of different formats. Different formats have different directory structures, and require annotations (or labels) to follow a particular structure. We support uploading data in many different formats in the Edge Impulse Studio.

Image datasets usually consist of a set of image files and one (or more) annotation files, which provide labels for the images. Image datasets may have annotations that consist of:

  • A single label: each image has a single label

  • Bounding boxes: used for object detection; images contain 'objects' to be detected, given as a list of labeled 'bounding boxes'

When you upload an image dataset, we try to automatically detect the format of that data (in some cases, we cannot detect it and you will need to manually select it).

Once the format of your dataset has been selected, click on Upload Data and let the Uploader parse your dataset:

Understanding image dataset annotation formats

Unlabeled

Leave the data unlabeled; you can manually label your data samples in the Studio.

Edge Impulse object detection format

The Edge Impulse object detection acquisition format provides a simple and intuitive way to store images and associated bounding box labels. Folders containing data in this format will take the following structure:

.
├── testing
│   ├── bounding_boxes.labels
│   ├── cubes.23im33f2.jpg
│   ├── cubes.23j3rclu.jpg
│   ├── cubes.23j4jeee.jpg
│   ...
│   └── cubes.23j4k0rk.jpg
└── training
    ├── bounding_boxes.labels
    ├── blue.23ijdngd.jpg
    ├── combo.23ijkgsd.jpg
    ├── cubes.23il4pon.jpg
    ├── cubes.23im28tb..jpg
    ...
    └── yellow.23ijdp4o.jpg

2 directories, 73 files

The subdirectories contain image files in JPEG or PNG format. Each image file represents a sample and is associated with its respective bounding box labels in the bounding_boxes.labels file.

The bounding_boxes.labels file in each subdirectory provides detailed information about the labeled objects and their corresponding bounding boxes. The file follows a JSON format, with the following structure:

  • version: Indicates the version of the label format.

  • files: A list of objects, where each object represents an image and its associated labels.

    • path: The path or file name of the image.

    • category: Indicates whether the image belongs to the training or testing set.

    • (optional) label: Provides information about the labeled objects.

      • type: Specifies the type of label (e.g., a single label).

      • label: The actual label or class name of the object.

    • (Optional) metadata: Additional metadata associated with the image, such as the site where it was collected, the timestamp or any useful information.

    • boundingBoxes: A list of objects, where each object represents a bounding box for an object within the image.

      • label: The label or class name of the object within the bounding box.

      • x, y: The coordinates of the top-left corner of the bounding box.

      • width, height: The width and height of the bounding box.

bounding_boxes.labels example:

{
    "version": 1,
    "files": [
        {
            "path": "cubes.23im33f2.jpg",
            "category": "testing",
            "label": {
                "type": "label",
                "label": "cubes"
            },
            "metadata": {
                "version": "2023-1234-LAB"
            },
            "boundingBoxes": [
                {
                    "label": "green",
                    "x": 105,
                    "y": 201,
                    "width": 91,
                    "height": 90
                },
                {
                    "label": "blue",
                    "x": 283,
                    "y": 233,
                    "width": 86,
                    "height": 87
                }
            ]
        },
        {
            "path": "cubes.23j3rclu.jpg",
            "category": "testing",
            "label": {
                "type": "label",
                "label": "cubes"
            },
            "metadata": {
                "version": "2023-4567-PROD"
            },
            "boundingBoxes": [
                {
                    "label": "red",
                    "x": 200,
                    "y": 206,
                    "width": 74,
                    "height": 75
                },
                {
                    "label": "yellow",
                    "x": 370,
                    "y": 245,
                    "width": 79,
                    "height": 73
                }
            ]
        }
    ] 
}

Want to try it yourself? Check this cubes on a conveyor belt dataset in Edge Impulse Object Detection format. You can also retrieve this dataset from this Edge Impulse public project. Data exported from an object detection project in the Edge Impulse Studio is exported in this format.

COCO JSON

The COCO JSON (Common Objects in Context JSON) format is a widely used standard for representing object detection datasets. It provides a structured way to store information about labeled objects, their bounding boxes, and additional metadata.

A COCO JSON dataset can follow this directory structure:

.
├── testing
│   ├── _annotations.coco.json
│   ├── cubes.23im33f2.jpg
│   ├── cubes.23j3rclu.jpg
│   ├── cubes.23j4jeee.jpg
│   ...
│   └── cubes.23j4k0rk.jpg
└── training
    ├── _annotations.coco.json
    ├── blue.23ijdngd.jpg
    ├── combo.23ijkgsd.jpg
    ├── cubes.23il4pon.jpg
    ├── cubes.23im28tb..jpg
    ...
    └── yellow.23ijdp4o.jpg

2 directories, 73 files

The _annotations.coco.json file in each subdirectory provides detailed information about the labeled objects and their corresponding bounding boxes. The file follows a JSON format, with the following structure:

Categories

The "categories" component defines the labels or classes of objects present in the dataset. Each category is represented by a dictionary containing the following fields:

  • id: A unique integer identifier for the category.

  • name: The name or label of the category.

  • (Optional) supercategory: A higher-level category that the current category belongs to, if applicable. This supercategory is not used or imported by the Uploader.

Images

The "images" component stores information about the images in the dataset. Each image is represented by a dictionary with the following fields:

  • id: A unique integer identifier for the image.

  • width: The width of the image in pixels.

  • height: The height of the image in pixels.

  • file_name: The file name or path of the image file.

Annotations

The "annotations" component contains the object annotations for each image. An annotation refers to a labeled object and its corresponding bounding box. Each annotation is represented by a dictionary with the following fields:

  • id: A unique integer identifier for the annotation.

  • image_id: The identifier of the image to which the annotation belongs.

  • category_id: The identifier of the category that the annotation represents.

  • bbox: A list representing the bounding box coordinates in the format [x, y, width, height].

  • (Optional) area: The area (in pixels) occupied by the annotated object.

  • (Optional) segmentation: The segmentation mask of the object, represented as a list of polygons.

  • (Optional) iscrowd: A flag indicating whether the annotated object is a crowd or group of objects.

The Edge Impulse uploader currently doesn't import the area, segmentation, and iscrowd fields.

_annotations.coco.json example:

{
  "info": {
    "description": "Cubes on conveyor belt",
    "version": "1.0",
    "year": 2023,
    "contributor": "Edge Impulse",
    "date_created": "2023-07-04"
  },
  "categories": [
    {
      "id": 0,
      "name": "cubes"
    },
    {
      "id": 1,
      "name": "green",
      "supercategory": "cubes"
    },
    {
      "id": 2,
      "name": "blue",
      "supercategory": "cubes"
    },
    {
      "id": 3,
      "name": "red",
      "supercategory": "cubes"
    },
    {
      "id": 4,
      "name": "yellow",
      "supercategory": "cubes"
    }
  ],
  "images": [
    {
      "id": 0,
      "height": 960,
      "width": 1280,
      "file_name": "cubes.23im33f2.jpg",
      "date_captured": "2023-06-29T15:09:34+00:00"
    },
    {
      "id": 1,
      "height": 960,
      "width": 1280,
      "file_name": "cubes.23j3rclu.jpg",
      "date_captured": "2023-06-29T15:09:34+00:00"
    },
    ...
  ],
   "annotations": [
    {
        "id": 1,
        "image_id": 0,
        "category_id": 2,
        "bbox": [321,397,117,113],
        "area": 13221,
        "segmentation": [],
        "iscrowd": 0
    },
    {
        "id": 2,
        "image_id": 0,
        "category_id": 3,
        "bbox": [887,447,132,122],
        "area": 16104,
        "segmentation": [],
        "iscrowd": 0
    },
    {
        "id": 3,
        "image_id": 1,
        "category_id": 3,
        "bbox": [470,529,129,126],
        "area": 16254,
        "segmentation": [],
        "iscrowd": 0
    },
    ...
   ]
}
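To migrate an existing COCO JSON dataset by hand, you could map its components onto the Edge Impulse object detection format described earlier. A sketch, keeping only the fields the Uploader imports:

```python
def coco_to_edge_impulse(coco: dict, category: str) -> dict:
    """Convert COCO-style dicts into a bounding_boxes.labels structure.

    Only the fields the Uploader imports are kept (area, segmentation
    and iscrowd are dropped). `category` is "training" or "testing".
    """
    id_to_name = {c["id"]: c["name"] for c in coco["categories"]}
    id_to_file = {img["id"]: img["file_name"] for img in coco["images"]}

    files = {img["file_name"]: {"path": img["file_name"],
                                "category": category,
                                "boundingBoxes": []}
             for img in coco["images"]}

    for ann in coco["annotations"]:
        # COCO bbox is already [x, y, width, height] with a top-left
        # origin, so the values carry over directly.
        x, y, w, h = ann["bbox"]
        files[id_to_file[ann["image_id"]]]["boundingBoxes"].append({
            "label": id_to_name[ann["category_id"]],
            "x": x, "y": y, "width": w, "height": h,
        })

    return {"version": 1, "files": list(files.values())}
```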

Want to try it yourself? Check this cubes on a conveyor belt dataset in the COCO JSON format.

Open Images CSV

The OpenImage dataset provides object detection annotations in CSV format. The _annotations.csv file is located in the same directory as the images it references. A class-descriptions.csv mapping file can be used to map the machine-generated MID LabelName values to short, human-readable class names.

An OpenImage CSV dataset usually has this directory structure:

.
├── class-descriptions.csv
├── testing
│   ├── _annotations.csv
│   ├── cubes.23im33f2.jpg
│   ├── cubes.23j3rclu.jpg
│   ├── cubes.23j4jeee.jpg
│   ...
│   └── cubes.23j4k0rk.jpg
└── training
    ├── _annotations.csv
    ├── blue.23ijdngd.jpg
    ├── combo.23ijkgsd.jpg
    ├── cubes.23il4pon.jpg
    ├── cubes.23im28tb..jpg
    ...
    └── yellow.23ijdp4o.jpg

2 directories, 73 files

Annotation Format:

  • Each line in the CSV file represents an object annotation.

  • The values in each line are separated by commas.

CSV Columns:

  • The CSV file typically includes several columns, each representing different attributes of the object annotations.

  • The common columns found in the OpenImage CSV dataset include:

    • ImageID: An identifier or filename for the image to which the annotation belongs.

    • Source: The source or origin of the annotation, indicating whether it was manually annotated or obtained from other sources.

    • LabelName: The class label of the object.

    • Confidence: The confidence score or probability associated with the annotation.

    • XMin, YMin, XMax, YMax: The coordinates of the bounding box that encloses the object, usually represented as the top-left (XMin, YMin) and bottom-right (XMax, YMax) corners.

    • IsOccluded, IsTruncated, IsGroupOf, IsDepiction, IsInside: Binary flags indicating whether the object is occluded, truncated, a group of objects, a depiction, or inside another object.

Currently, Edge Impulse only imports these fields:

ImageID, LabelName, XMin, XMax, YMin, YMax

Class Labels:

  • Each object in the dataset is associated with a class label.

  • The class labels in the OpenImage dataset are represented as LabelName in the CSV file.

  • The LabelName values correspond to specific object categories defined in the OpenImage dataset's ontology (MID).

Note that Edge Impulse does not enforce this ontology; if you have an existing dataset using MID LabelName values, simply provide a class-descriptions.csv mapping file to see your classes in Edge Impulse Studio.

Bounding Box Coordinates:

  • The bounding box coordinates define the normalized location and size of the object within the image.

  • The coordinates are represented as normalized X and Y values (relative to the image width and height) for the top-left corner (XMin, YMin) and the bottom-right corner (XMax, YMax) of the bounding box.


_annotations.csv example:

ImageID,LabelName,Confidence,XMin,XMax,YMin,YMax
cubes_testing_0,yellow,1,0.440625,0.5359375,0.5197916666666667,0.6489583333333333
cubes_testing_0,green,1,0.25078125,0.3421875,0.41354166666666664,0.53125
cubes_testing_0,red,1,0.69296875,0.79609375,0.465625,0.5927083333333333
cubes_testing_1,red,1,0.3671875,0.46796875,0.5510416666666667,0.6822916666666666
...
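Since the coordinates are normalized, converting a row to a pixel-space (x, y, width, height) box requires the image dimensions. A sketch, using the first data row above and assuming a 1280x960 image (the image size is an assumption; it is not stored in the CSV itself):

```python
def openimages_to_pixels(xmin, xmax, ymin, ymax, img_width, img_height):
    """Convert normalized Open Images box edges to a pixel-space
    (x, y, width, height) box, rounding to whole pixels."""
    x = round(xmin * img_width)
    y = round(ymin * img_height)
    width = round((xmax - xmin) * img_width)
    height = round((ymax - ymin) * img_height)
    return x, y, width, height

# First data row of the _annotations.csv example above.
print(openimages_to_pixels(0.440625, 0.5359375, 0.5197916666666667,
                           0.6489583333333333, 1280, 960))
# -> (564, 499, 122, 124)
```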

Want to try it yourself? Check this cubes on a conveyor belt dataset in the OpenImage CSV format.

Pascal VOC XML

The Pascal VOC (Visual Object Classes) format is another widely used standard for object detection datasets. It provides a structured format for storing images and their associated annotations, including bounding box labels.

A Pascal VOC dataset can follow this directory structure:

.
├── testing
│   ├── cubes.23im33f2.jpg
│   ├── cubes.23im33f2.xml
│   ├── cubes.23j3rclu.jpg
│   ├── cubes.23j3rclu.xml
│   ...
└── training
    ├── blue.23ijdngd.jpg
    ├── blue.23ijdngd.xml    
    ├── combo.23ijkgsd.jpg
    ├── combo.23ijkgsd.xml
    ├── cubes.23il4pon.jpg
    ├── cubes.23il4pon.xml
    ...
    ├── yellow.23ijdp4o.jpg
    └── yellow.23ijdp4o.xml

2 directories, 140 files

The Pascal VOC dataset XML format typically consists of the following components:

  1. Image files: The dataset includes a collection of image files, usually in JPEG or PNG format. Each image represents a sample in the dataset.

  2. Annotation files: The annotations for the images are stored in XML files. Each XML file corresponds to an image and contains the annotations for that image, including bounding box labels and class labels.

  3. Class labels: A predefined set of class labels is defined for the dataset. Each object in the image is assigned a class label, indicating the category or type of the object.

  4. Bounding box annotations: For each object instance in an image, a bounding box is defined. The bounding box represents the rectangular region enclosing the object. It is specified by the coordinates of the top-left corner, width, and height of the box.

  5. Additional metadata: Pascal VOC format allows the inclusion of additional metadata for each image or annotation. This can include information like the source of the image, the author, or any other relevant details. The Edge Impulse uploader currently doesn't import these metadata.

The structure of an annotation file in Pascal VOC format typically follows this pattern:

cubes.23im33f2.xml:

<?xml version="1.0" ?>
<annotation>
  <folder>cubes_pascal-voc-format/testing</folder>
  <filename>cubes.23im33f2.jpg</filename>
  <size>
    <width>640</width>
    <height>480</height>
    <depth>3</depth>
  </size>
  <object>
    <name>green</name>
    <bndbox>
      <xmin>105</xmin>
      <ymin>201</ymin>
      <xmax>196</xmax>
      <ymax>291</ymax>
    </bndbox>
  </object>
  <object>
    <name>blue</name>
    <bndbox>
      <xmin>283</xmin>
      <ymin>233</ymin>
      <xmax>369</xmax>
      <ymax>320</ymax>
    </bndbox>
  </object>
</annotation>
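Pascal VOC stores corner coordinates, while the formats above use x/y/width/height. A sketch using Python's standard library to parse an annotation like the one above:

```python
import xml.etree.ElementTree as ET

def parse_voc(xml_text: str):
    """Extract (label, x, y, width, height) boxes from a Pascal VOC
    annotation, converting corner coordinates to width/height form."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        name = obj.findtext("name")
        box = obj.find("bndbox")
        xmin = int(box.findtext("xmin"))
        ymin = int(box.findtext("ymin"))
        xmax = int(box.findtext("xmax"))
        ymax = int(box.findtext("ymax"))
        boxes.append((name, xmin, ymin, xmax - xmin, ymax - ymin))
    return boxes
```

For the green object above this yields (green, 105, 201, 91, 90), which matches the entry in the bounding_boxes.labels example earlier on this page.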

Want to try it yourself? Check this cubes on a conveyor belt dataset in the Pascal VOC format.

Plain CSV

The Plain CSV format is very simple: a CSV annotation file is stored in the same directory as the images. We support both "Single Label" and "Object Detection" labeling methods for this format.

A Plain CSV dataset can follow this directory structure:

.
├── testing
│   ├── _annotations.csv
│   ├── cubes_testing_0.jpg
│   ├── cubes_testing_1.jpg
│   ...
│   └── cubes_testing_9.jpg
└── training
    ├── _annotations.csv
    ├── cubes_training_0.jpg
    ├── cubes_training_1.jpg
    ├── cubes_training_10.jpg
    ...
    └── cubes_training_9.jpg

2 directories, 72 files

Annotation Format:

  • Each line in the CSV file represents an object annotation.

  • The values in each line are separated by commas.

CSV Columns (Single Label):

The Plain CSV format (single label) contains just the file_name and the class:

  • file_name: The filename of the image.

  • class_name: The class label or category of the image.

_annotations_single_label.csv example:

file_name,class_name
cubes_1.jpg,cubes
cubes_2.jpg,cubes
unknown_1.jpg,no cubes
unknown_2.jpg,no cubes

CSV Columns (Object Detection):

This Plain CSV format is similar to the TensorFlow Object Detection Dataset format. In this format, the CSV file contains the following columns:

  • file_name: The filename of the image.

  • classes: The class label or category of the object.

  • xmin: The x-coordinate of the top-left corner of the bounding box.

  • ymin: The y-coordinate of the top-left corner of the bounding box.

  • xmax: The x-coordinate of the bottom-right corner of the bounding box.

  • ymax: The y-coordinate of the bottom-right corner of the bounding box.

Each row represents an annotated object in an image. In the following example, there are three objects in cubes_training_0.jpg (a blue, a green, and a red cube), two objects in cubes_training_1.jpg, and so on. The bounding box coordinates are specified as the top-left corner (xmin, ymin) and the bottom-right corner (xmax, ymax).

_annotations_bounding_boxes.csv example:

file_name,classes,xmin,xmax,ymin,ymax
cubes_training_0.jpg,blue,305,395,244,334
cubes_training_0.jpg,green,389,473,145,225
cubes_training_0.jpg,red,449,544,256,348
cubes_training_1.jpg,red,556,692,453,582
cubes_training_1.jpg,green,777,919,346,481
cubes_training_2.jpg,blue,194,345,529,670
cubes_training_2.jpg,red,508,648,330,476
cubes_training_2.jpg,green,896,1025,553,666
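Rows that share a file_name describe boxes in the same image. A sketch that groups the annotations per image and converts the corner coordinates to width/height form:

```python
import csv
import io
from collections import defaultdict

def boxes_per_image(csv_text: str):
    """Group Plain CSV object detection rows by image, converting the
    corner coordinates to (label, x, y, width, height) boxes."""
    boxes = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        x, y = int(row["xmin"]), int(row["ymin"])
        w = int(row["xmax"]) - x
        h = int(row["ymax"]) - y
        boxes[row["file_name"]].append((row["classes"], x, y, w, h))
    return dict(boxes)
```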

Want to try it yourself? Check this cubes on a conveyor belt dataset in the Plain CSV (object detection) format.

YOLO TXT

The YOLO TXT format is a specific text-based annotation format mostly used in conjunction with the YOLO object detection algorithm. This format represents object annotations for an image in a plain text file.

  1. File Structure:

    • The annotations for each image are stored in a separate text file.

    • The text file has the same base name as the corresponding image file.

    • The file extension is .txt.

Example:

.
├── classes.txt
├── data.yaml
├── test
│   ├── images
│   │   ├── cubes-23im33f2.jpg
│   │   ├── cubes-23im858s.jpg
│   │   ...
│   │   └── cubes-23j4k0rk.jpg
│   └── labels
│       ├── cubes-23im33f2.txt
│       ├── cubes-23im858s.txt
│       ...
│       └── cubes-23j4k0rk.txt
└── train
    ├── images
    │   ├── blue-23ijdngd.jpg
    │   ...
    │   └── yellow-23ijdp4o.jpg
    └── labels
        ├── blue-23ijdngd.txt
        ...
        └── yellow-23ijdp4o.txt

6 directories, 142 files

  2. Annotation Format:

    • Each line in the TXT file represents an object annotation.

    • Each annotation line contains space-separated values representing different attributes.

    • The attributes in each line are ordered as follows: class_label, normalized bounding box coordinates (center_x, center_y, width, height).

  3. Class Label:

    • The class label represents the object category or class.

    • The class labels are usually represented as integers, starting from 0 or 1.

    • Each class label corresponds to a specific object class defined in the dataset.

  4. Normalized Bounding Box Coordinates:

    • The bounding box coordinates represent the location and size of the object in the image.

    • The coordinates are normalized to the range [0, 1], where (0, 0) represents the top-left corner of the image, and (1, 1) represents the bottom-right corner.

    • The normalized bounding box coordinates include the center coordinates (center_x, center_y) of the bounding box and its width and height.

    • The center coordinates (center_x, center_y) are relative to the width and height of the image, where (0, 0) represents the top-left corner, and (1, 1) represents the bottom-right corner.

    • The width and height are also relative to the image size.

Here's an example of a YOLO TXT annotation file format for a single object:

<class_id> <center_x> <center_y> <width> <height>

For instance: cubes-23im33f2.txt

3 0.24296875 0.5041666666666667 0.1109375 0.12708333333333333
2 0.487890625 0.5385416666666667 0.10390625 0.13958333333333334
1 0.663671875 0.4328125 0.11171875 0.13854166666666667

Each line represents one normalized bounding box in the corresponding cubes-23im33f2.jpg image.

  5. Mapping the Class Label:

    • The classes.txt, classes.names or data.yaml (used by Roboflow YOLOv5 PyTorch export format) files contain configuration values used by the model to locate images and map class names to class_ids.

For example, the cubes on a conveyor belt dataset includes the following classes.txt file:

blue
green
red
yellow
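Putting these pieces together, one annotation line can be converted to a labeled pixel-space box using the class list above. A sketch; the 1280x960 image size here is an assumption for illustration:

```python
def yolo_to_pixels(line, class_names, img_width, img_height):
    """Convert one YOLO TXT line to (label, x, y, width, height) in
    pixels; the top-left corner is the center minus half the size."""
    class_id, cx, cy, w, h = line.split()
    width = float(w) * img_width
    height = float(h) * img_height
    x = float(cx) * img_width - width / 2
    y = float(cy) * img_height - height / 2
    return (class_names[int(class_id)],
            round(x), round(y), round(width), round(height))

classes = ["blue", "green", "red", "yellow"]  # from classes.txt, ids start at 0
line = "3 0.24296875 0.5041666666666667 0.1109375 0.12708333333333333"
print(yolo_to_pixels(line, classes, 1280, 960))  # -> ('yellow', 240, 423, 142, 122)
```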

Want to try it yourself? Check this cubes on a conveyor belt dataset in the YOLOv5 format.
