Transfer learning (Images)

When creating an impulse to solve an image classification problem, you will most likely want to use transfer learning. This is particularly true when working with a relatively small dataset.

Transfer learning is the process of taking features learned on one problem and leveraging them on a new but related problem. These features are typically learned on large-scale datasets of common objects, which makes the resulting network faster and more accurate to tune and adapt to new tasks.

To choose transfer learning as your learning block, go to Create impulse, click Add a learning block, and select Transfer Learning.

Impulse setup for image classification.

To choose your preferred pre-trained network, go to Transfer learning on the left side of your screen and click Choose a different model. A pop-up will appear with a list of models to choose from, as shown in the image below.

Choose a different transfer learning model.

Edge Impulse uses state-of-the-art MobileNetV1 and V2 architectures trained on the ImageNet dataset as its pre-trained networks for you to fine-tune for your specific application. The pre-trained networks come with varying input sizes, ranging from 96x96 to 320x320, and support both RGB and grayscale images, so you can choose depending on your application and target deployment hardware.

Available transfer learning models.
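As a rough illustration of what a transfer learning block does, the sketch below (a minimal Keras example, not the exact model Edge Impulse generates) loads a MobileNetV2 base pre-trained on ImageNet, freezes it, and attaches a small classification head that is trained on your own data. The input size (96x96 RGB), width multiplier (alpha=0.35), and number of classes are illustrative placeholders, not defaults taken from the platform.

```python
import tensorflow as tf

NUM_CLASSES = 3            # illustrative: number of labels in your dataset
INPUT_SHAPE = (96, 96, 3)  # illustrative: one of the available input sizes, RGB

# Pre-trained MobileNetV2 feature extractor (ImageNet weights), without the top classifier
base_model = tf.keras.applications.MobileNetV2(
    input_shape=INPUT_SHAPE,
    alpha=0.35,            # smaller width multiplier for constrained targets
    include_top=False,
    weights="imagenet",
)
base_model.trainable = False  # keep the learned features fixed

# New classification head trained on your data
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
```

Because only the small head is trained while the pre-trained base stays frozen, the model can reach good accuracy with far less data and training time than training from scratch.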

Note: For Enterprise projects, Edge Impulse integrates with the NVIDIA TAO Toolkit to provide transfer learning with state-of-the-art pre-trained models from NVIDIA.

Neural Network settings

See Neural Network Settings on the Learning Block page.

Expert mode

See Expert mode on the Learning Block page.

The preset configurations just don't work for your model? No worries, expert mode is for you! Expert mode gives you full control of your model so that you can configure it however you want. To enable it, click the "⋮" button and toggle expert mode.

Expert mode.

You can use expert mode to change your loss function or optimizer, print your model architecture, and even set an early stopping callback to prevent overfitting your model.
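For example, such edits might look like the hedged sketch below. It assumes that `model`, `train_dataset`, and `validation_dataset` are already defined by the generated training code; the names, learning rate, patience, and epoch count are placeholders, not values taken from the default configuration.

```python
import tensorflow as tf

# Swap in a different optimizer and loss function than the preset configuration
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# Print the model architecture to the training log
model.summary()

# Stop training when validation loss stops improving, to limit overfitting
early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=5,
    restore_best_weights=True,
)

model.fit(
    train_dataset,                        # assumed: training data pipeline
    validation_data=validation_dataset,   # assumed: validation data pipeline
    epochs=50,
    callbacks=[early_stopping],
)
```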
