Detecting visual defects in products, such as scratches, missing parts, or surface damage, is often a challenging task for traditional machine learning: training a model to recognize every possible defect can be very complex. That’s where Edge Impulse with Visual Anomaly Detection can help. Instead of training a model to detect every type of issue, Visual Anomaly Detection learns what normal looks like from clean, undamaged examples. Once the model is deployed, it will detect anything that deviates from that normal baseline, such as missing parts or unusual patterns in images showing damage, without ever having seen these defects before. By the end of this tutorial, you will have a working Visual Anomaly Detection model running on your device, trained using only images of items in good condition.

What You’ll Need

Building the project pipeline

Create a new project

Log in to Edge Impulse Studio and click Create a new Project.
Once the project is created, you can click Collect new data or go to Devices in the menu and click Create a new connection.

Capture data

From here, you can use your mobile phone, scan the QR code, and it will automatically connect to your Edge Impulse Studio project.
At this stage, start collecting images that represent normal conditions, without anomalies or defects. Use the Training dataset to collect these images. In this tutorial, we want to detect bricks with anomalies, scratches, or defects. The dataset is publicly available here, and you can clone the project into your own account.
If you collect images with anomalies, place them into the Test dataset.

Create the impulse

Next, go to the Create Impulse section in the menu and add an Image Processing Block. This block will resize your images and convert them into a format that the model can understand. The default resolution is 96x96; in this tutorial we use 224x224, which offers a good balance between inference speed and performance.
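To build intuition for what the processing block does, here is a minimal sketch of nearest-neighbour resizing in plain Python. This is an illustration only, not Edge Impulse's actual implementation (the Studio block also handles cropping and color-space conversion):

```python
def resize_nearest(img, out_w, out_h):
    """Nearest-neighbour resize of an image given as a list of pixel rows.
    Each output pixel copies the closest source pixel."""
    in_h, in_w = len(img), len(img[0])
    return [[img[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
            for y in range(out_h)]

# A 2x2 "image" upscaled to 4x4: each source pixel becomes a 2x2 block.
small = [[1, 2],
         [3, 4]]
print(resize_nearest(small, 4, 4))
# → [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

The same index arithmetic works for downscaling to a fixed input size such as 224x224.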
Afterwards, add the Learning Block and select Visual Anomaly Detection - FOMO-AD. This tells Edge Impulse to train a model that learns what “normal” looks like, rather than requiring labeled classes. This is why the Training dataset must contain only good data.
Once the blocks from your Impulse are configured, click Save Impulse. This finalizes your pipeline setup and prepares the project for training.

Training the Model

Now that the Impulse is defined, it’s time to train the Visual Anomaly Detection model. This process teaches the neural network what “normal” looks like, based on the Training images you have provided.

Image parameters and features

Before training, Edge Impulse will automatically extract visual features from the Training dataset images, such as color distribution, edges, and shapes.
Once the parameters are saved, move to the next tab, Generate features. Click the Generate features button and Edge Impulse will extract numerical representations (the features) from the images. These features capture the essence of your image data and are what the model uses to learn patterns. This step is where consistency during Data acquisition is key to producing reliable features.
When the feature extraction is complete, you will see the Feature explorer, a visual representation of the features of your data. Each dot represents an image, and its position is based on visual similarity, so images that look alike cluster together. Ideally, the whole training dataset should form one cluster, since it contains only normal samples. If any image sits far apart, it might indicate differences in lighting, angle, or background noise that could confuse the model later. Additionally, below the Feature explorer you can see the on-device performance metrics, an estimate of how much RAM and processing time the model may need on your target hardware.
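The “far apart” intuition can be made concrete with a small sketch: given one feature vector per image, flag any vector unusually far from the centroid of the cluster. All names here are illustrative, not Studio internals, and the `factor` cutoff is an arbitrary assumption:

```python
import math

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def outliers(vectors, factor=1.5):
    """Indices of vectors more than `factor` x the mean distance
    from the centroid -- candidates to re-check for lighting or
    background differences before training."""
    c = centroid(vectors)
    dists = [distance(v, c) for v in vectors]
    mean_d = sum(dists) / len(dists)
    return [i for i, d in enumerate(dists) if d > factor * mean_d]

# A tight cluster of "normal" samples plus one stray sample:
feats = [[0.10, 0.20], [0.12, 0.19], [0.11, 0.21], [2.0, 2.0]]
print(outliers(feats))  # → [3]
```

In practice the Feature explorer does this visually; samples you would flag here are the ones worth recapturing or removing.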

Start Training

After the features have been generated, go to the Visual Anomaly Detection section to start training the machine learning model. To understand the selected Anomaly Detection settings: Edge Impulse uses a technique called PatchCore, an unsupervised learning method that detects anomalies by focusing on small regions of the images, called patches, and it works great for visual anomaly detection.
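The patch-based idea can be illustrated with a toy sketch (this is a simplification for intuition, not Edge Impulse's actual PatchCore implementation): patch features from normal images form a memory bank, and at inference time each test patch is scored by its distance to the nearest normal patch.

```python
import math

def dist(a, b):
    """Euclidean distance between two patch feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def patch_scores(test_patches, memory_bank):
    """Score each test patch by its distance to the nearest patch
    seen during training -- a patch far from everything 'normal'
    looks anomalous."""
    return [min(dist(p, m) for m in memory_bank) for p in test_patches]

def image_score(test_patches, memory_bank):
    """Image-level anomaly score: the worst (largest) patch score."""
    return max(patch_scores(test_patches, memory_bank))

# Memory bank built from patches of normal images (toy 2-D features):
bank = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]]
normal_img = [[0.05, 0.05], [0.10, 0.10]]
defect_img = [[0.05, 0.05], [0.90, 0.90]]  # one patch deviates strongly

print(image_score(normal_img, bank) < image_score(defect_img, bank))  # → True
```

Because scoring is per patch, this approach can also localize where in the image the anomaly is, which is why it suits visual defect detection.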
Once the job is completed, you will see the training output and the model’s on-device performance.

Testing the Model

Now that your model is trained, it’s time to evaluate how well it can detect anomalies. This step helps you understand if the model can correctly detect images that are different from the “normal” ones.

Live Classification

Go to Live classification in Edge Impulse Studio.
If you haven’t uploaded any defective images, you can connect a device (e.g. your mobile phone again) and start taking samples. If you captured images with defects and placed them in the Test dataset during data capture, they will appear in the Classify existing test sample section. Click any sample to classify it, and the model will return an anomaly score (between 0 and 1).
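In an application you typically turn that score into a pass/fail decision with a threshold. The helper below is a hypothetical sketch (the function name and default threshold are assumptions, not part of the Edge Impulse API); tune the threshold against your Test dataset so known-good samples pass and known-defective samples fail:

```python
def classify_anomaly(score, threshold=0.5):
    """Map an anomaly score in [0, 1] to a pass/fail decision.
    The threshold is application-specific: lower it to catch more
    defects (more false alarms), raise it to reduce false alarms."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("anomaly score expected in [0, 1]")
    return "anomaly" if score >= threshold else "no anomaly"

print(classify_anomaly(0.82))  # → anomaly
print(classify_anomaly(0.12))  # → no anomaly
```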

Model Testing

You can also go to Model Testing to classify all the captured Test dataset images at once and see how the Visual Anomaly Detection model performs across the data.

Using the Model in the Real World

Once your model is trained and tested, you are ready to bring it into production on a real device. Edge Impulse Studio reduces the friction of deploying ML models to all kinds of embedded hardware. You can find the list of supported hardware for your edge AI projects here.
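For a Linux target, one common path is the Edge Impulse Linux CLI, which downloads your trained model and runs live inference on the device. This assumes Node.js and the CLI are installed on the target and you are logged in to your Edge Impulse account:

```shell
# Install the Edge Impulse Linux CLI on the target device
npm install -g edge-impulse-linux

# Download the model from your project and run live inference;
# --clean lets you pick a different account/project if needed
edge-impulse-linux-runner
edge-impulse-linux-runner --clean
```

Other deployment options (C++ library, WebAssembly, ready-to-flash binaries) are available from the Deployment page in Studio.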

Understanding Visual Anomaly Detection

In this tutorial, you have learned how to build and deploy a Visual Anomaly Detection machine learning model using Edge Impulse Studio. Here is what we have done:
  • Created an Edge Impulse project and captured image data of normal (non-defective) objects using a mobile phone camera.
  • Built an ML pipeline using a Visual Anomaly Detection (FOMO-AD) Learning block, with PatchCore as the underlying training technique.
  • Trained the model to learn what “normal” looks like; no labeled defective data is required, since this is an unsupervised learning technique.
  • Tested the model using existing test samples and real-time image capture.
  • Deployed the model to a Linux device (e.g. Arduino UNO Q) to enable real-time anomaly detection at the edge.

Next Steps

Want to try this with your own data? Create a new Edge Impulse project, or join our Discord channel to share your experience and get help from the Edge Impulse community.