Generate audio datasets using ElevenLabs

Generate audio data using the ElevenLabs Sound Effects models. This integration allows you to generate realistic sound effects for your projects, such as glass breaking, a car engine revving, or other custom sounds. You can customize the sound prompts and generate high-quality audio samples for your datasets.

It also lets you expand your datasets with sounds that would be difficult or expensive to record naturally. This saves time and money while improving the accuracy and reliability of the models you deploy on edge devices.

Example: Glass Breaking Sound

In this tutorial, we focus on a practical application: detecting the sound of glass breaking, as might be used in a smart security system or in a factory to detect incidents.

There is also a video version of this guide.

Getting Started

Only available with Edge Impulse Pro Plan and Enterprise Plan

Try our FREE Enterprise Trial today.

You will also need an Eleven Labs account and API Key.

Navigate to Data Acquisition: Once you're in your project, navigate to the Data Acquisition section, go to Synthetic data and select the ElevenLabs Synthetic Audio Generator data source.

Step 1: Get your ElevenLabs API Key

First, get your ElevenLabs API key. Navigate to the ElevenLabs web interface to create your key and, optionally, test your prompt.

Step 2: Parameters

Here we will be trying to collect a glass-breaking sound or impact.

  • Prompt: "glass breaking"

    Simple prompts describe a single sound effect, for example "person walking on grass" or "glass breaking." These prompts generate one type of sound with a few variations, either within the same generation or across subsequent generations.

    You can improve a simple prompt by adding a little more detail. For example, "high-quality, professionally recorded footsteps on grass, sound effects foley" often works better than "footsteps on grass." It can take some experimentation to find a good balance between being descriptive and keeping the prompt brief enough for the model to understand, e.g., "high-quality audio of window glass breaking."

  • Label: The label of the generated audio sample.

  • Prompt influence: Between 0 and 1. Lower values give the AI more creative freedom in interpreting the prompt; higher values make it follow the exact prompt more strictly, with 1 being the strictest.

  • Number of samples: The number of audio samples to generate.

  • Minimum length (seconds): The minimum length of the generated audio samples; shorter generations are padded with silence to this length. It also guides how long generations should be, and different values can give quite different results. For example, the prompt "kick drum" with a length of 11 seconds might produce a full drum loop containing a kick drum, which may not be what you want, while a length of 1 second will likely produce a one-shot with a single kick drum hit.

  • Frequency (Hz): The output sample rate. ElevenLabs generates audio at 44,100 Hz, so any other value is produced by resampling.

  • Upload to category: Data will be uploaded to this category in your project.
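To make the minimum-length and frequency parameters above concrete, here is a minimal sketch of the two post-processing steps they imply: padding a short generation with silence, and resampling 44.1 kHz output to a different rate via linear interpolation. The function names are hypothetical, for illustration only; the actual block handles this for you.

```python
def pad_to_minimum(samples, sample_rate, min_seconds):
    """Pad a mono sample list with trailing silence up to min_seconds."""
    min_len = int(sample_rate * min_seconds)
    if len(samples) < min_len:
        samples = samples + [0.0] * (min_len - len(samples))
    return samples

def resample(samples, src_hz, dst_hz):
    """Naive linear-interpolation resampler from src_hz to dst_hz."""
    if src_hz == dst_hz:
        return list(samples)
    n_out = int(len(samples) * dst_hz / src_hz)
    out = []
    for i in range(n_out):
        pos = i * src_hz / dst_hz          # fractional index in the source
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[min(j + 1, len(samples) - 1)]
        out.append(a + (b - a) * frac)     # interpolate between neighbors
    return out

# A 0.5 s clip at 16 kHz padded to a 1 s minimum length:
padded = pad_to_minimum([0.1] * 8000, 16000, 1.0)
# 10 ms of 44.1 kHz audio resampled to 16 kHz:
downsampled = resample([0.0] * 441, 44100, 16000)
```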

See the ElevenLabs API documentation for more information.
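If you want to test a prompt outside Edge Impulse, a request to the ElevenLabs sound-generation endpoint can be sketched as below. The endpoint and field names follow the public ElevenLabs API documentation, but verify them there before relying on this; the `build_request` helper is hypothetical, and `YOUR_API_KEY` is a placeholder.

```python
import json

# Endpoint per the ElevenLabs API docs (verify before use)
API_URL = "https://api.elevenlabs.io/v1/sound-generation"

def build_request(prompt, prompt_influence=0.3, duration_seconds=2.0):
    """Build headers and JSON body for a sound-generation request."""
    headers = {
        "xi-api-key": "YOUR_API_KEY",  # placeholder: use your own key
        "Content-Type": "application/json",
    }
    body = {
        "text": prompt,                       # the sound prompt
        "duration_seconds": duration_seconds, # requested clip length
        "prompt_influence": prompt_influence, # 0 = creative, 1 = strict
    }
    return headers, json.dumps(body)

headers, body = build_request("high-quality audio of window glass breaking")
# Sending it, e.g. with requests.post(API_URL, headers=headers, data=body),
# returns audio bytes on success.
```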

Step 3: Generate samples

Once you've set up your prompt and API key, run the pipeline to generate the sound samples. You can then view the output in the Data Acquisition section.

Benefits of Using Generative AI for Sound Generation

  • Enhance Data Quality: Generative AI can create high-quality sound samples that are difficult to record naturally.

  • Increase Dataset Diversity: Access a wide range of sounds to enrich your training dataset and improve model performance.

  • Save Time and Resources: Quickly generate the sound samples you need without the hassle of manual recording.

  • Improve Model Accuracy: High-quality, diverse sound samples can help fill gaps in your dataset and enhance model performance.

Conclusion

By leveraging generative AI for sound generation, you can enhance the quality and diversity of your training datasets, leading to more accurate and reliable edge AI models. This innovative approach saves time and resources while improving the performance of your models in real-world applications. Try out the Eleven Labs block in Edge Impulse today and start creating high-quality sound datasets for your projects.