Artificial intelligence has taken the world by storm. Last year, Google launched Magenta, a research project aimed at understanding what artificial intelligence can do in the field of the arts.
Google's newest machine learning project has already released a first piece of AI-generated art: a 90-second piano melody created from just four seed notes. The drums and orchestration were added later for emphasis.
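The idea of growing a whole melody from a handful of seed notes can be illustrated with a toy sketch. This is not Magenta's actual model (which is a neural network); here a simple random-walk sampler stands in, and the four-note seed is a made-up C major arpeggio.

```python
# Toy sketch (NOT Magenta's real model): continue a melody from a
# four-note seed by repeatedly sampling a small step up or down.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical four-note seed, as MIDI note numbers (C4 E4 G4 C5).
seed = [60, 64, 67, 72]

def sample_next(prev):
    """Pick the next note as a small random step from the previous one."""
    step = rng.choice([-2, -1, 0, 1, 2])
    return int(prev + step)

def continue_melody(seed, length=32):
    """Extend the seed until the melody reaches the requested length."""
    melody = list(seed)
    while len(melody) < length:
        melody.append(sample_next(melody[-1]))
    return melody

melody = continue_melody(seed)
print(len(melody))  # 32 notes, the first four being the seed
```

A real generative model replaces the random walk with learned probabilities, but the autoregressive loop — feed in what you have, sample the next note, repeat — is the same shape.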
THE AIM OF MAGENTA
Magenta has two major aims:
- To advance machine intelligence for the production of music and art. So far, machine learning has mostly been applied to tasks like translation and speech recognition; Magenta aims to understand, and produce, music and art.
- To build a community of coders, artists and machine learning researchers around making music and art.
MORE ABOUT MAGENTA
Douglas Eck, who leads the team in California, says the project is not simply an exercise in studying artificial intelligence: the team has experimented with a variety of techniques in search of new ways of generating material.
For Magenta, the team trained NSynth, an algorithm that uses neural networks to synthesise new sounds from notes played on a range of instruments. Performance RNN is a newer music algorithm, trained on classical piano performances captured on a modern player piano. The broader aim is to help musicians make their own musical creations by training models and then altering the results.
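One common knob for "altering the results" of a generative music model is sampling temperature. The sketch below is a hedged illustration of that general technique, not code from Magenta; the logits (raw model scores for four candidate notes) are invented.

```python
# Temperature sampling: low temperature picks safe, likely notes;
# high temperature takes more risks. The logits here are made up.
import numpy as np

rng = np.random.default_rng(1)

def sample_with_temperature(logits, temperature=1.0):
    """Turn raw model scores into a probability distribution and sample."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                          # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.2, -1.0]   # hypothetical scores for four notes
cold = sample_with_temperature(logits, temperature=0.1)  # near-argmax
hot = sample_with_temperature(logits, temperature=5.0)   # near-uniform
print(cold, hot)
```

Tools built on models like Performance RNN typically expose a control of this kind so the musician, not just the model, shapes the output.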
NSynth is built on a large database of sounds. The team fed a wide range of notes from various instruments into a neural network, which analysed them and learned the audible characteristics of each instrument. The network then created a mathematical vector for each instrument, which lets the machine both copy the sound of an individual instrument and combine the sounds of two different instruments.
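The idea of "a mathematical vector for each instrument" can be sketched in a few lines. In NSynth the embeddings come from a trained neural autoencoder; below, random numbers stand in for the note features, and the embedding is just their average — a deliberately crude stand-in for the real learned encoding.

```python
# Toy illustration: collapse many per-note feature vectors into one
# embedding vector per instrument. The features are random stand-ins,
# not real NSynth data.
import numpy as np

rng = np.random.default_rng(2)

# Pretend each instrument contributed 100 notes, each described by a
# 16-dimensional feature vector (hypothetical numbers).
flute_notes = rng.normal(loc=0.5, scale=0.1, size=(100, 16))
organ_notes = rng.normal(loc=-0.5, scale=0.1, size=(100, 16))

def instrument_embedding(note_features):
    """Average note features into a single instrument vector."""
    return note_features.mean(axis=0)

flute_vec = instrument_embedding(flute_notes)
organ_vec = instrument_embedding(organ_notes)
print(flute_vec.shape)  # (16,)
```

The key point is that each instrument ends up as a single point in a shared vector space, which is what makes the blending described next possible.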
The team aims to push back artistic boundaries by exploring the audible space that lies between multiple instruments. A second neural network can then combine the sounds of different instruments, working alongside the primary model.
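Exploring the space "between" instruments amounts to interpolating between their embedding vectors, the same idea NSynth uses to blend sounds. The sketch below uses invented four-dimensional vectors for illustration; real embeddings are much larger and would be decoded back into audio by the network.

```python
# Linear interpolation between two instrument embeddings: alpha=0
# gives the first instrument, alpha=1 the second, values in between
# a hybrid. The vectors are invented for illustration.
import numpy as np

flute_vec = np.array([1.0, 0.0, 0.5, 0.2])
organ_vec = np.array([0.0, 1.0, 0.5, 0.8])

def blend(a, b, alpha):
    """Mix two embedding vectors; alpha sets how far to slide from a to b."""
    return (1.0 - alpha) * a + alpha * b

halfway = blend(flute_vec, organ_vec, 0.5)  # a sound between the two
print(halfway)  # [0.5 0.5 0.5 0.5]
```

Sweeping alpha from 0 to 1 traces a path through the embedding space, which is exactly the "audible space between instruments" the team describes.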
The team has also described the NSynth algorithm in a paper, for anyone who wants to dig into how it works or access the database of sounds.