Exploring the Intersection of Art and Machine Intelligence

In June of last year, we published a story about visualization techniques that helped us understand how neural networks carry out difficult visual classification tasks. In addition to deepening our understanding of how these networks work, the techniques also produced strange, wonderful and oddly compelling images.
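At its core, the technique works by running a trained network forward, picking a layer, and then nudging the input image via gradient ascent so that the layer's activations grow, amplifying whatever features the network already sees. The following is a minimal numpy sketch of that idea; the single linear-plus-ReLU "layer" is a toy stand-in for a layer of a real trained convolutional network, and all shapes and step sizes here are illustrative assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def layer(img, W):
    # Toy "layer": a linear map plus ReLU, standing in for one
    # layer of a trained convolutional network.
    return relu(W @ img)

def deepdream_step(img, W, lr=0.01):
    # One gradient-ascent step on the objective 0.5 * ||layer(img)||^2,
    # which boosts whatever features the layer already detects.
    a = layer(img, W)
    # Gradient of 0.5 * ||relu(W x)||^2 w.r.t. x is W^T relu(W x),
    # since the ReLU zeroes the gradient wherever its input is negative.
    grad = W.T @ a
    return img + lr * grad  # ascend: amplify the activations

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))   # toy "weights" (a real net's are trained)
img = rng.standard_normal(16)      # toy "image"

before = 0.5 * np.sum(layer(img, W) ** 2)
for _ in range(100):
    img = deepdream_step(img, W)
after = 0.5 * np.sum(layer(img, W) ** 2)
```

Iterating this step on a real image with a real network is what produces the hallucinatory textures DeepDream is known for: features the layer weakly detects get progressively exaggerated.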

Following that blog post, and especially after we released the source code, dubbed DeepDream, we witnessed tremendous interest not only from the machine learning community but also from the creative coding community. Several artists, such as Amanda Peterson (aka Gucky), Memo Akten, Samim Winiger, Kyle McDonald and many others, immediately began experimenting with the technique as a new way to create art.
“GCHQ”, 2015, Memo Akten, used with permission.
Soon after, the paper A Neural Algorithm of Artistic Style by Leon Gatys et al. in Tübingen was released. Their technique uses a convolutional neural network to factor images into separate style and content components. By using a neural network as a generic image parser, it can then create new images that combine the style of one image with the content of another. Once again the creative coding community was taken by storm, and many artists and coders immediately began experimenting with the new algorithm, resulting in Twitter bots and other explorations and experiments.
The style transfer algorithm combines a photograph with a painting style; for example, Neil deGrasse Tyson in the style of Kandinsky’s Jaune Rouge Bleu. Photo by Guillaume Piolle, used with permission.
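The separation into style and content rests on two loss terms: content is matched directly on a layer's activations, while style is matched on Gram matrices, the correlations between feature maps, which discard spatial arrangement but keep texture statistics. Here is an illustrative numpy sketch of those two terms; the feature shapes and normalization are simplified assumptions rather than the paper's exact formulation.

```python
import numpy as np

def gram_matrix(features):
    # Style representation: correlations between feature maps.
    # `features` has shape (channels, positions) — a toy stand-in
    # for one convolutional layer's flattened activations.
    c, n = features.shape
    return features @ features.T / n

def style_loss(gen, style):
    # Squared Frobenius distance between the Gram matrices.
    return np.sum((gram_matrix(gen) - gram_matrix(style)) ** 2)

def content_loss(gen, content):
    # Content is matched directly on the activations themselves.
    return 0.5 * np.sum((gen - content) ** 2)

rng = np.random.default_rng(1)
content_feats = rng.standard_normal((4, 10))  # toy content-image features
style_feats = rng.standard_normal((4, 10))    # toy style-image features
gen_feats = content_feats.copy()              # generated image init

# Initialized from the content image, the generated image has zero
# content loss but a nonzero style loss; optimizing a weighted sum of
# the two losses over the image pixels is what blends the two.
c_loss = content_loss(gen_feats, content_feats)
s_loss = style_loss(gen_feats, style_feats)
```

In the full algorithm these losses are summed over several network layers and minimized by gradient descent on the pixels of the generated image, trading off content fidelity against style fidelity via the loss weights.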
The open-source deep-learning community, especially projects such as GitXiv, contributed hugely to the spread, accessibility and development of these algorithms. Both DeepDream and style transfer were rapidly implemented in a plethora of languages and deep learning packages, and others immediately took the techniques and developed them further.
“Saxophone dreams” - Mike Tyka.
With machine learning as a field moving forward at a breakneck pace and rapidly becoming part of many -- if not most -- online products, the opportunities for artistic uses are as wide as they are unexplored, and perhaps overlooked. However, interest is growing rapidly: the University of London is now offering a course on machine learning and art, NYU ITP offers a similar program this year, and the topic of the Tate Modern’s IK Prize 2016 is Artificial Intelligence.

These are exciting early days, and we want to continue to stimulate artistic interest in these emerging technologies. To that end, we are announcing a two-day DeepDream event in San Francisco at the Gray Area Foundation for the Arts, aimed at showcasing some of the latest explorations at the intersection of machine intelligence and art, and at spurring discussion of future directions:
  • Friday, Feb 26th: DeepDream: The Art of Neural Networks, an exhibit of 29 neural-network-generated artworks created by artists at Google and from around the world. The works will be auctioned, with all proceeds going to the Gray Area Foundation, which has supported the intersection of arts and technology for over 10 years.
  • Saturday, Feb 27th: Art and Machine Learning Symposium, an open one-day symposium on machine learning and art, aiming to bring together the neural network and creative coding communities to exchange ideas, learn and discuss. Videos of all the talks will be posted online after the event.
We look forward to sharing some of the interesting works of art generated by the art and machine learning community, and being part of the discussion of how art and technology can be combined.