Suno v5, the latest version of Suno's AI music generator, shows notable technical improvements in audio clarity and instrument separation over previous versions, but it still produces music that feels soulless and lacks emotional depth, with a tendency to layer effects and harmonies regardless of user input.
Stability AI has launched Stable Audio 2.0, an update to its AI music generator that can now produce fully structured tracks up to three minutes long, process and transform audio uploaded by the user, and generate sound effects. The company aims to enhance existing production workflows and expand the creative toolkit for artists, but it faces criticism over copyright concerns and the potential impact on working musicians.
Suno is an AI music generator that turns text prompts into original songs with lyrics and vocals, leveraging ChatGPT. It stands out from other music generators by producing original content through an intuitive interface. Free users get 50 credits per day, while paid subscribers can use the generated music commercially. Suno sidesteps some copyright issues by refusing to generate music in the style of real artists' voices, and it grants ownership of songs to paying subscribers while retaining ownership of songs created by free users. Copyright protection for AI-generated content remains a complex and evolving area of law, and as AI becomes more prevalent in the music industry, policy developments will continue to delineate what is and is not permitted for AI-generated work.
Meta has released AudioCraft, an open-source generative AI toolkit for creating music and sounds. It comprises three AI models, including MusicGen, which generates music from text prompts, and AudioGen, which creates audio from written descriptions. Meta believes AudioCraft could usher in a new wave of music, much as synthesizers once revolutionized the industry. However, concerns about copyright infringement and bias in training data persist, and it remains to be seen whether machine-made songs can find an audience beyond elevator music and stock soundtracks.
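For readers who want a sense of what using AudioCraft looks like in practice, the snippet below is a minimal sketch of text-to-music generation with the open-source audiocraft Python package and a pretrained MusicGen checkpoint; the model size, prompt, eight-second duration, and output filenames are illustrative assumptions rather than values prescribed by Meta's release.

```python
# Minimal sketch of text-to-music generation with Meta's open-source
# audiocraft package (assumes `pip install audiocraft` and enough
# GPU/CPU memory for the 'facebook/musicgen-small' checkpoint).
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=8)  # seconds of audio per prompt

# Illustrative prompt; any free-form text description works here.
descriptions = ["lo-fi hip hop beat with warm Rhodes chords and vinyl crackle"]
wav = model.generate(descriptions)  # tensor of shape [batch, channels, samples]

# Save each generated clip to disk with loudness normalization.
for idx, one_wav in enumerate(wav):
    audio_write(f"clip_{idx}", one_wav.cpu(), model.sample_rate, strategy="loudness")
```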
Meta has released an open-source AI-powered music generator called MusicGen, which can turn a text description into roughly 12 seconds of audio. MusicGen was trained on 20,000 hours of music, including 10,000 licensed music tracks and 390,000 instrument-only tracks from Shutterstock and Pond5. It can also be "steered" with reference audio, conditioning the output on a supplied melody. While generative music is improving, major ethical and legal issues, including copyright concerns, still need to be ironed out.
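That steering corresponds to MusicGen's melody-conditioned variant. The sketch below, again assuming the audiocraft package plus torchaudio and a hypothetical local file named reference_melody.wav, shows how a reference recording and a text description might be combined; the checkpoint name, prompt, and duration are illustrative, not taken from Meta's documentation.

```python
# Hedged sketch of melody conditioning with MusicGen's melody checkpoint.
# Assumes `pip install audiocraft torchaudio` and a local reference file
# "reference_melody.wav"; file name, prompt, and duration are illustrative.
import torchaudio
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-melody")
model.set_generation_params(duration=12)

melody, sample_rate = torchaudio.load("reference_melody.wav")
descriptions = ["upbeat synth-pop arrangement following the reference melody"]

# generate_with_chroma conditions generation on both the text description
# and a chromagram extracted from the reference audio.
wav = model.generate_with_chroma(descriptions, melody[None], sample_rate)
audio_write("steered_clip", wav[0].cpu(), model.sample_rate, strategy="loudness")
```

Roughly speaking, the reference audio acts as a loose harmonic guide rather than a strict constraint, which is why the feature is described as steering rather than remixing.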
Google has released MusicLM, an AI-powered music creation tool that generates new music in a range of styles from a text description. MusicLM was trained on hundreds of thousands of hours of audio and is available in preview via Google’s AI Test Kitchen app. However, the system's songs sound passable at best and, at worst, like a four-year-old let loose on a DAW. MusicLM's usefulness is further constrained by artificial restrictions on prompting: it won't generate music featuring specific artists or vocals.