Watch the tutorial
In this walkthrough tutorial, we get to know the main features of the Concatenator plugin, so you can get started using the most powerful audio mosaicing algorithm in the world!
Concatenator
The Concatenator is the world’s most powerful AI-powered audio mosaicing plug-in, enabling seamless concatenative synthesis in music production.
Think of it as granular synthesis, except the grain selection, amount, and size parameters are guided by features of an input audio signal instead of randomness. There is no upper limit to how many samples can be added.
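The feature-guided grain selection described above can be illustrated with a toy sketch. This is not the Concatenator's actual algorithm: it matches grains on a single assumed feature (spectral centroid) with a nearest-neighbor search, where a real mosaicing engine uses a much richer feature set and smarter selection.

```python
import numpy as np

def frame_features(frame, sr=44100):
    """Spectral centroid of one grain -- a simple stand-in for the
    richer feature set a real mosaicing engine would analyze."""
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    return np.sum(freqs * mag) / (np.sum(mag) + 1e-9)

def mosaic(input_audio, corpus_grains, grain_size=2048, sr=44100):
    """For each grain of the input signal, pick the corpus grain whose
    feature value is closest, and concatenate the picks."""
    corpus_feats = np.array([frame_features(g, sr) for g in corpus_grains])
    out = []
    for start in range(0, len(input_audio) - grain_size + 1, grain_size):
        f = frame_features(input_audio[start:start + grain_size], sr)
        best = int(np.argmin(np.abs(corpus_feats - f)))
        out.append(corpus_grains[best])
    return np.concatenate(out) if out else np.zeros(0)
```

The key difference from ordinary granular synthesis is visible in the loop: the choice of grain is driven by the input signal's features, not by a random number generator.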
Ideal for sound design in game, TV, film, and EDM production.
FAQ
Q: Will Combobulator work on my computer?
A: Combobulator runs on Windows 10 and above, and on macOS 12 and above.
This version is a beta. Please check the documentation for known bugs. We will send updates as they are released; we recommend updating as soon as an update announcement appears in the plugin interface.
Combobulator can be installed as a VST or Audio Unit. Every major DAW is supported (except Avid Pro Tools).
Q: Can I use Combobulator with a live input?
A: In its current version, Combobulator introduces a LOT of latency. This means that if you send a live input into it, there will be a long delay (about 900 ms) before you hear the Combobulated version, because the synthesis algorithm requires a large buffer to compute.
While it *is* possible to make neural audio faster to process, this currently comes at the cost of lower audio quality. We chose to build the highest-fidelity neural networks possible, which has come at the cost of long latency. Sorry beatboxers, gearheads, and live performers! This technology is still evolving, and we will work to improve this part of the user experience in future iterations.
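As a rough illustration of where a figure like 900 ms comes from: the delay contributed by buffering alone is the buffer length divided by the sample rate. The buffer size below is an assumption back-calculated from the quoted 900 ms at a 44.1 kHz sample rate, not a published spec of the plug-in.

```python
def buffer_latency_ms(buffer_samples: int, sample_rate: int = 44100) -> float:
    """Delay (in milliseconds) from filling one processing buffer
    before any output can be produced."""
    return 1000.0 * buffer_samples / sample_rate

# At 44.1 kHz, a ~900 ms delay corresponds to a buffer of roughly
# 39,690 samples (illustrative only -- the plug-in's real internal
# buffering may be structured differently).
```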
Q: Is DataMind Audio’s AI ethical?
A: DataMind Audio is at the forefront of integrating AI into the music industry, developing innovative tools that augment human creativity, and establishing a marketplace that ensures fair compensation for artists.
We are creating tools for music producers who want to use breakthrough AI-powered technology but have understandable ethical concerns about the unpaid use of artists’ output in training generative neural networks. We aim to be among the first providers of ethical neural networks, meeting the newly identified demand for guilt-free use of generative AI in creative audio products.
Generative neural audio synthesis relies on “training” neural networks to imitate data. Our project gives artists direct control over what goes into the training data, and the final say about what comes out, providing a market opportunity for artists who are trying to find their way into the AI space.
Much as with a record label or a sample library website, musicians are paid royalties for sales of AI models trained on their sounds. Through this ethical standard, we aim to unlock the incredible creative power of AI neural audio synthesis while creating a new income stream for artists, rather than exploiting their originality.
Q: How does DataMind Audio train Artist Brains?
A: All of our Artist Brains are trained by our small team of Model Reliability Engineers based in Edinburgh, Scotland, who have designed batch-processing scripts that analyze and optimize our artists’ datasets for machine learning. We train many models on our in-house custom-built computer, which ranks in the top 0.01% of the fastest PCs in the world. This computer sits on an electrical grid that uses solar power as its primary energy source, which, combined with our custom liquid-cooling system and other hardware optimizations, results in a low energy footprint. All of the cloud GPU services we use for machine learning produce zero emissions. DataMind Audio is proud to be a green company.
Q: Can I make my own models for the Combobulator?
A: At this time we do not offer a system that allows users to train or use their own models in the Combobulator. Training high-fidelity neural networks is currently an expensive process that takes time and expertise. Being a small company, we currently produce only boutique Artist Brains, ethically trained with the select artists we partner with.
That said, we recognize the immediate demand for users to train and use their own models, and we are taking this into consideration as we continue into our next phase of product development.
Q: What is an Artist Brain?
A: Our generative AI models create new sound by imitating the timbres they have been trained on. As a machine learning algorithm repeatedly analyzes a database of sound, it gradually learns to produce new audio that shares timbral similarity with patterns found in the original data set.
At DataMind Audio, instead of producing large models that can make any sound, we create small models that only imitate one artist at a time. We call these small, customized, fine-tuned neural networks (aka models) “Artist Brains”.
While popular large AI models produce text and images based on text prompts, our AI-powered VST audio effect, The Combobulator, uses neural audio synthesis to produce a timbral “style-transfer” from a real-time audio input signal. As a live audio signal passes into our VST’s input, the AI re-synthesizes the input signal based on the Artist Brain, and a new output sound is synthesized! The Combobulator also features familiar synthesis modulators, which can alter the way the audio is interpreted and synthesized, allowing for explorations of the Artist Brain far beyond direct interpretations of the input audio signal.
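The analyze-and-resynthesize flow described above can be sketched abstractly. The linear "encoder" and "decoder" below are toy stand-ins with random weights; a real Artist Brain (a trained RAVE-style network) learns these mappings from the artist's audio, and its layers are nonlinear neural networks rather than single matrices.

```python
import numpy as np

class ToyArtistBrain:
    """Toy stand-in for a RAVE-style encoder/decoder model.

    The encoder compresses each audio frame into a small latent vector;
    the decoder resynthesizes a frame from that latent. Here the weights
    are random; in a real Artist Brain they come from training on the
    artist's audio, which is what imprints the timbre on the output."""

    def __init__(self, frame=1024, latent=16, seed=0):
        rng = np.random.default_rng(seed)
        self.frame = frame
        self.encoder = rng.standard_normal((latent, frame)) / np.sqrt(frame)
        self.decoder = rng.standard_normal((frame, latent)) / np.sqrt(latent)

    def process(self, audio):
        """Frame-by-frame analysis and resynthesis of a mono signal."""
        f = self.frame
        out = []
        for i in range(0, len(audio) - f + 1, f):
            z = self.encoder @ audio[i:i + f]   # analyze: frame -> latent
            out.append(self.decoder @ z)        # resynthesize: latent -> frame
        return np.concatenate(out) if out else np.zeros(0)
```

The synthesis modulators mentioned above would act on the latent vector `z` before decoding, which is why they can push the output far beyond a direct interpretation of the input signal.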
Artist Brains are not capable of plagiarizing artists. They do not make music for you, and their output only vaguely resembles the artists the models are trained on. You cannot slap the Combobulator on the master channel and hope to pass yourself off as the artist the model was trained on.
Artist Brains produce a “hallucinated” range of timbres based on the combined features of the original audio database supplied by the artists we work with.
While the output may not be an exact replica of your sound, our team of Model Reliability Engineers strives to capture an aspect of the artist’s essence in each Artist Brain.
About DataMind Audio
DataMind Audio was co-founded by Rob Clouth, Ben Cantil, Catherine Stewart, Zack Zukowski and CJ Carr (Dadabots). We are a small company created by musicians for musicians, making AI-powered music production and sound design software that empowers artists and inspires new ideas.
DataMind Audio has received generous support and funding from the University of Edinburgh’s Creative Informatics Resident Entrepreneur Grant and Creative AI Music & Audio Pilot Project Grant, and from the UK government’s Feasibility Studies for Artificial Intelligence Solutions Grant by Innovate UK. We have also received additional support from Edinburgh Innovations, the EPCC, and the University of Edinburgh.
WORKS WITH ALL MAJOR DAWs
Based on the RAVE technology developed at IRCAM in the STMS Lab. Authors: Antoine Caillon, Philippe Esling.
The AI Style-Transfer Audio Plugin
Sounds are 100% generated by neural networks: real-time neural audio synthesis in your DAW.