Optimize compute time

Why lower compute time?

Developer accounts on Edge Impulse are granted 20 minutes of compute time per job. Below are some tips to help you stay within this 20-minute compute limit.

Need more?

For professionals who want additional compute time, more private projects, and more flexibility in usage, we also offer a Professional tier of our platform.

Try our Professional Plan!

Reduce dataset size

The dataset size has a direct impact on training time. If you're reaching the limit, you can decrease your dataset size.

To easily reduce your dataset, go to Data acquisition and click on the Filter and Select icons. You can then either delete your samples or disable them:

Note: Reducing your dataset size will have an impact on your model's accuracy. Start with a small dataset and increase it over time until you reach the limit.

Reduce your number of epochs

An epoch (or training cycle) is one complete pass of the training dataset through the algorithm. Reducing this hyperparameter reduces the number of complete passes over your dataset and thus lowers your training time.

To reduce the number of epochs, just lower the Number of training cycles value:
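If you use Expert Mode, the Number of training cycles corresponds to the `epochs` argument of Keras' `model.fit()`. Here is a minimal, self-contained sketch; the data shapes and model below are hypothetical placeholders, not your project's actual training code:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in data: 100 samples with 33 features and 3 classes,
# loosely mirroring the shapes an Edge Impulse NN block might produce.
X = np.random.rand(100, 33).astype(np.float32)
y = np.random.randint(0, 3, size=(100,))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(33,)),
    tf.keras.layers.Dense(20, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Fewer epochs = fewer complete passes over the dataset = less compute time.
model.fit(X, y, epochs=10)  # e.g. lowered from 30
```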

Apply Early Stopping

Early Stopping is a technique that helps prevent overfitting by halting training at the right time. Training stops as soon as the model starts overfitting, or when further training no longer improves performance, which saves compute time and can even yield a better model.

See how to apply Early Stopping in Expert Mode.
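For reference, here is a minimal sketch of how Early Stopping could look in Keras-based Expert Mode code, using the standard `tf.keras.callbacks.EarlyStopping` callback. The data and model are the same hypothetical placeholders as in the epochs sketch above:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in data and model, as in the epochs sketch above.
X = np.random.rand(100, 33).astype(np.float32)
y = np.random.randint(0, 3, size=(100,))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(33,)),
    tf.keras.layers.Dense(20, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Stop training when validation loss has not improved for 5 consecutive
# epochs, and keep the best weights seen so far.
early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',
    patience=5,
    restore_best_weights=True,
)

# A high epoch ceiling is fine here: the callback ends training early.
model.fit(X, y, validation_split=0.2, epochs=100,
          callbacks=[early_stopping])
```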

Increase your batch size

The batch size is a hyperparameter that defines the number of samples to work through before updating the internal model parameters. A training dataset can be divided into one or more batches. The bigger your batch, the fewer iterations are performed per epoch.

To increase the batch size, on the NN Classifier view, switch to expert mode and change the BATCH_SIZE hyperparameter:
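For reference, a sketch of how a larger batch size could be wired into Keras-based Expert Mode code; the data and model are the same hypothetical placeholders as in the sketches above:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in data and model, as in the sketches above.
X = np.random.rand(100, 33).astype(np.float32)
y = np.random.randint(0, 3, size=(100,))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(33,)),
    tf.keras.layers.Dense(20, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# A larger batch means fewer weight updates per epoch (but more memory use).
BATCH_SIZE = 64  # e.g. doubled from the common default of 32

model.fit(X, y, epochs=30, batch_size=BATCH_SIZE)
```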

Note: This also has an impact on memory: the bigger the batch size, the more memory your training will use.

Reduce the complexity of your neural network architecture

A simple neural network architecture will train faster than a very complex one. To reduce the complexity of your NN architecture, remove some of the layers, reduce the number of neurons, and shrink the kernel size:
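As a reference point, here is a sketch of what a slimmed-down architecture could look like in Keras; all layer sizes and kernel sizes here are illustrative assumptions, not recommended values:

```python
import tensorflow as tf

# A deliberately lighter 1D-conv architecture: fewer layers, fewer filters,
# and a smaller kernel. All sizes here are illustrative only.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(33, 1)),
    tf.keras.layers.Conv1D(8, kernel_size=3, activation='relu'),  # e.g. down from 16 filters, kernel_size=5
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(3, activation='softmax'),  # single dense head instead of several layers
])
model.summary()  # fewer trainable parameters = faster training
```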

Still need more compute time?

If you still need more compute time for your project, check our pricing page and contact us.
