In 1968, Philip K. Dick posed a question through the title of one of his many famed works: “Do Androids Dream of Electric Sheep?” Close to 50 years later, Google somewhat answered that question when it asked its artificial neural networks what they saw in an image of clouds, or to produce a pattern they found in white noise. The results were often psychedelic.
Now, Google has announced it will continue to explore whether artificial intelligence can indeed be creative. According to Popular Science, the company plans to launch the project, dubbed Magenta, on June 1.
The project will see whether machine intelligence can produce original music, videos, images, and text.
“The question Magenta asks is, ‘Can machines make music and art? If so, how? If not, why not?’” wrote Google research scientist Douglas Eck. The goal “is to produce open-source tools and models that help creative people be even more creative.”
According to Eck, the project comes out of the Google Brain team and will leverage TensorFlow, an open-source library for machine learning.
Quartz reported the company will first launch a program that allows users to import music data from MIDI files into TensorFlow. More information will be available to the public on the project’s GitHub page.
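Google has not published details of that import tool, but the general idea, turning MIDI note events into a numeric sequence that a model built in TensorFlow could train on, can be sketched in plain Python. The note list and encoding scheme below are invented for illustration and are not Magenta's actual format:

```python
# Hypothetical sketch: encode MIDI-style note events as integer tokens
# so a sequence model (e.g., one built in TensorFlow) could train on them.
# The events below are made up; real data would come from a MIDI parser.

# Each event: (MIDI pitch number 0-127, duration in sixteenth notes)
note_events = [(60, 4), (62, 2), (64, 2), (65, 4)]  # C, D, E, F

MAX_DURATION = 16  # cap durations at one whole note

def encode(events):
    """Map each (pitch, duration) pair to a single integer token."""
    tokens = []
    for pitch, dur in events:
        dur = min(dur, MAX_DURATION)
        tokens.append(pitch * MAX_DURATION + (dur - 1))
    return tokens

def decode(tokens):
    """Invert the encoding back to (pitch, duration) pairs."""
    return [(t // MAX_DURATION, t % MAX_DURATION + 1) for t in tokens]

tokens = encode(note_events)
assert decode(tokens) == note_events  # the encoding round-trips
```

A tokenization like this is just one possible design; the point is that music must become numbers before a machine-learning library can work with it.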
“Additionally, I’m working on how to bring other aspects of the creative process into play,” wrote Eck. “For example, art and music is not just about generating new pieces. It’s also about drawing one’s attention, being surprising, telling an interesting story, knowing what’s interesting in a scene, and so on.”
At this year’s Moogfest, a member of the Magenta team showed off the project in action. After a few notes were entered, the artificial intelligence built a simple melody from them.
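The demo's internals weren't described, but the basic idea of continuing a melody from a few seed notes can be illustrated with a toy model, here a first-order Markov chain trained on a tiny made-up corpus, far simpler than anything Magenta would actually use:

```python
import random

# Toy illustration only: extend seed notes by sampling each next note
# from transitions observed in a tiny invented corpus of MIDI pitches.
corpus = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60]

# Count which notes follow which in the corpus.
transitions = {}
for a, b in zip(corpus, corpus[1:]):
    transitions.setdefault(a, []).append(b)

def continue_melody(seed, length, rng=None):
    """Extend the seed by repeatedly sampling a plausible next note."""
    rng = rng or random.Random(0)
    melody = list(seed)
    for _ in range(length):
        choices = transitions.get(melody[-1], corpus)
        melody.append(rng.choice(choices))
    return melody

melody = continue_melody([60, 62], 6)  # seed two notes, generate six more
```

A Markov chain only captures note-to-note statistics; the deep networks Magenta builds on can, in principle, learn longer-range structure like phrasing and repetition.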
Eck, according to Popular Science, said a Magenta app may be created to gauge whether people enjoy the art the artificial intelligence produces.