Developer accounts on Edge Impulse are granted 20 minutes of compute time per job. Below are some tips to stay within this limit:
Reduce dataset size
The dataset size has a direct impact on the training time. If you're reaching the limit, you can decrease your dataset size.
To easily reduce your dataset, go to Data acquisition and click the Filter and Select icons. You can then either delete your samples or disable them:
Reduce your dataset size in data acquisition view
Note: Reducing your dataset size will have an impact on your accuracy. Start with a small dataset and increase it over time until you reach the limit.
Reduce your number of epochs
An epoch (or training cycle) is one complete pass of the training dataset through the algorithm. Reducing this hyperparameter reduces the number of complete passes through your dataset, thus lowering your training time.
To reduce the number of epochs, just lower the Number of training cycles value:
Reduce the number of training cycles
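Because each epoch is a full pass over the data, total training time scales roughly linearly with the number of training cycles. A minimal back-of-the-envelope sketch (the per-epoch duration below is a hypothetical figure, not an Edge Impulse measurement):

```python
# Rough estimate: training time grows roughly linearly with the
# number of epochs (training cycles). All figures are hypothetical.

def estimated_training_time(epochs, seconds_per_epoch):
    """Return an approximate total training time in seconds."""
    return epochs * seconds_per_epoch

# With a hypothetical 12 s per epoch, 110 epochs would exceed a
# 20-minute (1200 s) budget, while 90 epochs would fit within it.
print(estimated_training_time(110, 12))  # 1320
print(estimated_training_time(90, 12))   # 1080
```

In practice, check the per-epoch time reported in the job log of a short run, then pick a cycle count that keeps the total under the limit.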
Increase your batch size
The batch size is a hyperparameter that defines the number of samples to work through before updating the internal model parameters. A training dataset can be divided into one or more batches. The bigger your batch is, the fewer iterations are performed per epoch.
To increase the batch size, on the NN Classifier view, switch to expert mode and change the BATCH_SIZE hyperparameter:
Increase batch size
Note: This also has an impact on memory: the bigger the batch size, the more memory your training will use.
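The relationship between batch size and the number of weight updates per epoch can be sketched as follows (the dataset and batch sizes here are made-up examples):

```python
import math

def updates_per_epoch(num_samples, batch_size):
    """Number of parameter updates in one complete pass over the dataset."""
    return math.ceil(num_samples / batch_size)

# Hypothetical dataset of 1000 samples: doubling the batch size
# halves the number of updates (and roughly the per-epoch overhead),
# at the cost of more memory per batch.
print(updates_per_epoch(1000, 32))  # 32
print(updates_per_epoch(1000, 64))  # 16
```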
Reduce the complexity of your neural network architecture
A simple neural network architecture will train faster than a very complex one. To reduce the complexity of your NN architecture, remove some of the layers, reduce the number of neurons, and reduce the kernel size:
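One way to see why fewer neurons train faster is to count trainable parameters. A sketch for fully connected (dense) layers, using hypothetical layer sizes rather than any specific Edge Impulse default:

```python
def dense_params(inputs, neurons):
    """Trainable parameters of a dense layer: weights plus biases."""
    return inputs * neurons + neurons

# Hypothetical architecture on a 650-value input window with a
# 3-class output: shrinking the hidden layers roughly halves the
# parameter count, which reduces both training time and memory.
complex_net = dense_params(650, 40) + dense_params(40, 20) + dense_params(20, 3)
simple_net = dense_params(650, 20) + dense_params(20, 10) + dense_params(10, 3)
print(complex_net)  # 26923
print(simple_net)   # 13263
```

Fewer parameters mean less work per weight update, so each training cycle completes faster; the trade-off, as with a smaller dataset, is potentially lower accuracy.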