The AI tool comprises three models, AudioGen, EnCodec and MusicGen, covering music and sound generation as well as audio compression, Meta said.
MusicGen was trained on company-owned and specifically licensed music, Meta added.
Artists and industry experts have raised concerns over copyright violations, as machine learning software works by recognizing and replicating patterns in data scraped from the web.
According to the company’s blog post, MusicGen generates music from text prompts, while AudioGen generates audio from text prompts. Meta has also released an improved version of its EnCodec decoder, which produces higher-quality music with fewer artifacts. The pre-trained AudioGen models announced by the company let users generate environmental sounds and sound effects, such as a dog barking or vehicle sirens.
The models will be available to researchers and practitioners, who can train them on their own datasets. The company claims these models can produce high-quality audio with long-term consistency, and says they were developed internally at Meta over the past several years.
AudioCraft models will act as tools for musicians and sound designers in the future, the company said. It is also working to improve the current models and add refinements based on user feedback.
Earlier this year, Alphabet introduced its own experimental audio-generating AI tool, MusicLM.