Data sources

The data sources page is about much more than adding data from external sources. It lets you create complete, automated data pipelines so you can work on your active learning strategies.

From there, you can import datasets from existing cloud storage buckets, automate and schedule the imports, and trigger actions such as exploring and labeling your new data, retraining your model, automatically building a new deployment, and more.


Add a data source

Click on + Add new data source and select where your data lives. You can choose from:

  • AWS S3 buckets

  • Google Cloud Storage

  • Any S3-compatible bucket

  • Upload portals (enterprise feature)

  • Transformation blocks (enterprise feature)

  • Don't import data (if you just need to create a pipeline)


Click on Next, provide credentials and enter the credentials for your storage bucket.

Then click on Verify credentials.

Automatically label your data

Here, you have several options to automatically label your data:

Infer from folder name

In the following example, the folder structure is:

.
├── cars
│   ├── cars.01741.jpg
│   ├── cars.01743.jpg
│   ├── cars.01745.jpg
│   └── ... (400 items)
├── unknown
│   ├── unknown.test_2547.jpg
│   ├── unknown.test_2548.jpg
│   ├── unknown.test_2549.jpg
│   └── ... (400 items)
└── unlabeled
    ├── cars.02066.jpg
    ├── cars.02067.jpg
    ├── cars.02068.jpg
    └── ... (14 items)

3 directories, 814 files

The labels will be inferred from the folder names, and the samples will be split between your training and testing sets using an 80/20 ratio.

The samples present in an unlabeled/ folder will be kept unlabeled in Edge Impulse Studio.

Alternatively, you can organize your folders using the following structure to automatically split your dataset between the training and testing sets:

.
├── testing
│   ├── cars
│   │   ├── cars.00012.jpg
│   │   ├── cars.00031.jpg
│   │   ├── cars.00035.jpg
│   │   └── ... (~150 items)
│   └── unknown
│       ├── unknown.test_1012.jpg
│       ├── unknown.test_1026.jpg
│       ├── unknown.test_1027.jpg
│       └── ... (~150 items)
├── training
│   ├── cars
│   │   ├── cars.00006.jpg
│   │   ├── cars.00025.jpg
│   │   ├── cars.00065.jpg
│   │   └── ... (~600 items)
│   └── unknown
│       ├── unknown.test_1002.jpg
│       ├── unknown.test_1005.jpg
│       ├── unknown.test_46.jpg
│       └── ... (~600 items)
└── unlabeled
    ├── cars.02066.jpg
    ├── cars.02067.jpg
    ├── cars.02068.jpg
    └── ... (14 items)

7 directories, 1512 files
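If your data starts out as a flat folder of files named <label>.<id>.jpg, a short script can lay it out in this training/testing structure before you upload it to your bucket. This is a minimal sketch, not an official Edge Impulse tool; the dataset/ and organized/ paths and the 80/20 split are assumptions for illustration.

import random
import shutil
from pathlib import Path

SRC = Path("dataset")      # flat folder of <label>.<id>.jpg files (assumed layout)
DST = Path("organized")    # output root, matching the structure shown above
SPLIT = 0.8                # 80/20 training/testing split

random.seed(42)            # reproducible split
for f in SRC.glob("*.jpg"):
    label = f.name.split(".")[0]  # the label is the part before the first dot
    subset = "training" if random.random() < SPLIT else "testing"
    out_dir = DST / subset / label
    out_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy(f, out_dir / f.name)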

Infer from file name

When using this option, only the file name is taken into account. The part before the first . is used as the label. For example, cars.01741.jpg will set the label to cars.
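In code terms, the rule is equivalent to the small helper below (a reference sketch, not Edge Impulse's actual implementation):

def label_from_filename(filename: str) -> str:
    # Everything before the first '.' is the label: "cars.01741.jpg" -> "cars"
    return filename.split(".", 1)[0]

assert label_from_filename("cars.01741.jpg") == "cars"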

Keep the data unlabeled

All the data samples will be unlabeled; you will need to label them manually before using them.

Finally, click on Next, post-sync actions.

Trigger actions

From this view, you can automate several actions:

  • Recreate data explorer

    The data explorer gives you an at-a-glance view of your dataset, letting you quickly label unknown data. If you enable this, you'll also get an email with a screenshot of the data explorer whenever there's new data.

  • Retrain model

    If needed, this will retrain your model with the same impulse. If you enable this, you'll also get an email with the new validation and test set accuracy.

    Note: You will need to have trained your project at least once.

  • Create new version

    Stores all data, configuration, intermediate results, and final models.

  • Create new deployment

    Builds a new library or binary with your updated model. Requires 'Retrain model' to also be enabled.

Run the pipeline

Once your pipeline is set up, you can run it directly from the UI, from external sources, or by scheduling the task.


Run the pipeline from the UI

To run your pipeline from Edge Impulse Studio, click on the ⋮ button and select Run pipeline now.

Run the pipeline from code

To run your pipeline from code, click on the ⋮ button in Edge Impulse Studio and select Run pipeline from code. This will display an overlay with curl, Node.js, and Python code samples.

You will need to create an API key to run the pipeline from code.
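For reference, a Python version of those samples might look like the sketch below. The URL is a placeholder: copy the exact endpoint and IDs from the Run pipeline from code overlay, and keep the API key out of your source code.

import os
import requests

# Placeholder URL -- copy the real endpoint from the "Run pipeline from code" overlay.
URL = "https://studio.edgeimpulse.com/v1/api/<projectId>/pipelines/<pipelineId>/run"

response = requests.post(
    URL,
    headers={"x-api-key": os.environ["EI_API_KEY"]},  # API key created in your project
)
response.raise_for_status()
print(response.json())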


Schedule your pipeline jobs

By default, your pipeline will run every day. To schedule your pipeline jobs, click on the ⋮ button and select Edit pipeline.

Free users can run the pipeline at most every 4 hours. If you are an enterprise customer, you can run the pipeline as often as every minute.


Once the pipeline has successfully finished, you will receive an email containing the full results.


You can also define who receives the email. These users have to be part of your project (see Dashboard -> Collaboration).

Webhooks

Another useful feature is creating a webhook that calls a URL when the pipeline has finished running. It will send a POST request containing the following information:

{
    "organizationId": XX,
    "pipelineId": XX,
    "pipelineName": "Import data from portal \"Data sources demo\"",
    "projectId": XXXXX,
    "success": true,
    "newItems": 0,
    "newChecklistOK": 0,
    "newChecklistFail": 0
}
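On the receiving side, any small HTTP endpoint that accepts this POST will do. Below is a minimal sketch using Python's standard library; the port and the way the result is handled are assumptions for illustration.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class PipelineWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        # React to the pipeline result, e.g. alert when a run fails.
        if payload.get("success"):
            print(f"Pipeline {payload.get('pipelineId')} OK, {payload.get('newItems')} new items")
        else:
            print(f"Pipeline {payload.get('pipelineId')} failed")
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), PipelineWebhook).serve_forever()  # assumed port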

Edit your pipeline

As of today, if you want to update your pipeline, you need to edit the configuration JSON available under ⋮ -> Run pipeline from code.

Here is an example of the configuration you get when all the actions have been selected:

[
    {
        "name": "Fetch data from s3://data-pipeline/data-pipeline-example/infer-from-folder/",
        "builtinTransformationBlock": {
            "type": "s3-to-project",
            "endpoint": "https://s3.your-endpoint.com",
            "path": "s3://data-pipeline/data-pipeline-example/infer-from-folder/",
            "region": "fr-par",
            "accessKey": "XXXXX",
            "category": "split",
            "labelStrategy": "infer-from-folder-name",
            "secretKeyEncrypted": "xxxxxx"
        }
    },
    {
        "name": "Refresh data explorer",
        "builtinTransformationBlock": {
            "type": "project-action",
            "refreshDataExplorer": true
        }
    },
    {
        "name": "Retrain model",
        "builtinTransformationBlock": {
            "type": "project-action",
            "retrainModel": true
        }
    },
    {
        "name": "Create new version",
        "builtinTransformationBlock": {
            "type": "project-action",
            "createVersion": true
        }
    },
    {
        "name": "Create on-device deployment (C++ library)",
        "builtinTransformationBlock": {
            "type": "project-action",
            "buildBinary": "zip",
            "buildBinaryModelType": "int8"
        }
    }
]

Free projects only have access to the builtinTransformationBlock steps shown above.

If you are part of an organization, you can use your custom transformation jobs in the pipeline. In your organization workspace, go to Custom blocks -> Transformation and select Run job on the job you want to add.


Select Copy as pipeline step and paste it into the configuration JSON.
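The copied step is pasted as one more object in the top-level array of the configuration JSON. Purely as a hypothetical illustration (the exact fields come from the Copy as pipeline step overlay and depend on your block), a custom step might look like:

{
    "name": "My custom transformation job",
    "transformationBlockId": 12345
}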

