
The Building Blocks of Interpretability

Cross-posted on the Google Research Blog.

In 2015, our early attempts to visualize how neural networks understand images led to psychedelic images. Soon after, we open-sourced our code as DeepDream, and it grew into a small art movement producing all sorts of amazing things. But we also continued the original line of research behind DeepDream, trying to address one of the most exciting questions in deep learning: how do neural networks do what they do?

Last year in the online journal Distill, we demonstrated how those same techniques could show what individual neurons in a network do, rather than just what is “interesting to the network” as in DeepDream. This allowed us to see how neurons in the middle of the network are detectors for all sorts of things — buttons, patches of cloth, buildings — and how those detectors build up into more and more sophisticated representations over the network’s layers.
Visualizations of neurons in GoogLeNet. Neurons in higher layers represent higher level ideas.
While visualizing neurons is exciting, our work last year was missing something important: how do these neurons actually connect to what the network does in practice?

Today, we’re excited to publish “The Building Blocks of Interpretability,” a new Distill article exploring how feature visualization can be combined with other interpretability techniques to understand aspects of how networks make decisions. We show that these combinations let us “stand in the middle of a neural network” and see some of the decisions being made at that point, and how they influence the final output. For example, we can see how a network detects a floppy ear, and how that detection increases the probability it assigns to the image being a “Labrador retriever” or “beagle.”

We explore techniques for understanding which neurons fire in the network. Normally, if we ask which neurons fire, we get something meaningless like “neuron 538 fired a little bit,” which isn’t very helpful even to experts. Our techniques make things more meaningful to humans by attaching visualizations to each neuron, so we can see things like “the floppy ear detector fired”. It’s almost a kind of MRI for neural networks.
We can also zoom out and show how the entire image was “perceived” at different layers. This allows us to really see the transition from the network detecting very simple combinations of edges, to rich textures and 3D structure, to high-level structures like ears, snouts, heads and legs.
These insights are exciting by themselves, but they become even more exciting when we can relate them to the final decision the network makes. So not only can we see that the network detected a floppy ear, but we can also see how that increases the probability of the image being a Labrador retriever.
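One simple way to estimate this kind of channel-to-class attribution (a common gradient-based approximation, not necessarily the article’s exact method) is to multiply a hidden layer’s activations by the gradient of the class score with respect to them. A rough sketch, with torchvision’s GoogLeNet standing in for the model and an illustrative layer and class index:

```python
# Rough sketch: gradient-times-activation attribution from a hidden layer to a class score.
# torchvision's GoogLeNet stands in for the article's model; the layer, the random input,
# and the class index are illustrative assumptions.
import torch
import torchvision.models as tvm

model = tvm.googlenet(weights=tvm.GoogLeNet_Weights.DEFAULT).eval()

acts = {}
model.inception4d.register_forward_hook(lambda m, i, o: acts.update(out=o))

img = torch.rand(1, 3, 224, 224)            # stand-in for a preprocessed photo
logits = model(img)
class_idx = 208                             # assumed ImageNet index for "Labrador retriever"
grad = torch.autograd.grad(logits[0, class_idx], acts["out"])[0]

# Summing over spatial positions gives a per-channel estimate of how much
# each detector pushed the class score up or down.
attribution = (acts["out"] * grad).sum(dim=(2, 3)).squeeze(0)
print(attribution.topk(5))                  # the channels contributing most to the class
```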
In addition to our paper, we’re also releasing Lucid, a neural network visualization library building on our work on DeepDream. It allows you to make the sort of lucid feature visualizations we see above, in addition to more artistic DeepDream images.
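As a taste of the library, a minimal session (close to Lucid’s quickstart; the layer name and channel index below are just examples) renders a feature visualization for a single GoogLeNet channel:

```python
# Minimal Lucid sketch: visualize what one GoogLeNet channel responds to.
# The layer/channel choice is illustrative; any channel in the model works.
import lucid.modelzoo.vision_models as models
from lucid.optvis import render

model = models.InceptionV1()   # GoogLeNet, the network visualized in the article
model.load_graphdef()

# Optimize an input image to maximally activate channel 476 of layer mixed4a.
images = render.render_vis(model, "mixed4a_pre_relu:476")
```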

We’re also releasing colab notebooks. These notebooks make it extremely easy to use Lucid to reproduce visualizations in our article! Just open the notebook, click a button to run code — no setup required!
In colab notebooks you can click a button to run code, and see the result below.
This work only scratches the surface of the kinds of interfaces we think it’s possible to build for understanding neural networks. We’re excited to see what the community will do — and we’re excited to work together towards deeper human understanding of neural networks.

By Chris Olah, Research Scientist and Arvind Satyanarayan, Visiting Researcher, Google Brain Team

Feature Visualization



Have you ever wondered what goes on inside neural networks? Feature visualization is a powerful tool for digging into neural networks and seeing how they work.

Our new article, published in Distill, does a deep exploration of feature visualization, introducing a few new tricks along the way!

Building on our work in DeepDream, and lots of work by others since, we are able to visualize what every neuron in a strong vision model (GoogLeNet [1]) detects. Over the course of multiple layers, it gradually builds up abstractions: first it detects edges, then it uses those edges to detect textures, the textures to detect patterns, and the patterns to detect parts of objects…
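At its core, this kind of feature visualization is an optimization: start from noise and follow the gradient of a chosen activation back to the pixels. A stripped-down sketch of that loop, omitting the regularization tricks the article discusses (torchvision’s GoogLeNet and the layer/channel choice are illustrative stand-ins):

```python
# Bare-bones feature visualization by gradient ascent on the input pixels.
# Omits the regularization needed for clean images; torchvision's GoogLeNet
# and the chosen layer/channel are illustrative stand-ins.
import torch
import torchvision.models as tvm

model = tvm.googlenet(weights=tvm.GoogLeNet_Weights.DEFAULT).eval()

acts = {}
model.inception4a.register_forward_hook(lambda m, i, o: acts.update(out=o))

img = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from noise
opt = torch.optim.Adam([img], lr=0.05)

for _ in range(256):
    opt.zero_grad()
    model(img)
    loss = -acts["out"][0, 17].mean()    # maximize the mean activation of channel 17
    loss.backward()
    opt.step()
    with torch.no_grad():
        img.clamp_(0, 1)                 # keep pixel values in a valid range
```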
But neurons don’t understand the world by themselves — they work together. So we also need to understand how they interact with each other. One approach is to explore interpolations between pairs of neurons: what images can make both fire, and to what extent?

Here we interpolate from a neuron that seems to detect artistic patterns to a neuron that seems to detect lizard eyes:
We can also let you try adding different pairs of neurons together, to explore the possibilities for yourself:
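With a feature-visualization library such as Lucid (mentioned above), this kind of joint objective can be sketched by simply adding two channel objectives together (the layer names and channel indices here are illustrative):

```python
# Sketch: render an image that excites two channels at once by adding their objectives.
# The layer names and channel indices are illustrative choices.
import lucid.modelzoo.vision_models as models
from lucid.optvis import objectives, render

model = models.InceptionV1()
model.load_graphdef()

obj = objectives.channel("mixed4a_pre_relu", 476) + objectives.channel("mixed4b_pre_relu", 360)
images = render.render_vis(model, obj)
```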
In addition to allowing you to play around with visualizations, we explore a variety of techniques for getting feature visualization to work, and let you experiment with using them.
Techniques for visualizing and understanding neural networks are becoming more powerful. We hope our article will help other researchers apply these techniques, and give people a sense of their potential. Check it out on Distill.

Acknowledgement
We're extremely grateful to our co-author, Ludwig Schubert, who made incredible contributions to our paper and especially to the interactive visualizations.







Exploring the Intersection of Art and Machine Intelligence



In June of last year, we published a story about a visualization technique that helped us understand how neural networks carry out difficult visual classification tasks. In addition to helping us gain a deeper understanding of how neural networks work, these techniques also produced strange, wonderful and oddly compelling images.

Following that blog post, and especially after we released the source code, dubbed DeepDream, we witnessed tremendous interest not only from the machine learning community but also from the creative coding community. Artists such as Amanda Peterson (aka Gucky), Memo Akten, Samim Winiger, Kyle McDonald and many others immediately started experimenting with the technique as a new way to create art.
“GCHQ”, 2015, Memo Akten, used with permission.
Soon after, the paper A Neural Algorithm of Artistic Style by Leon Gatys and colleagues in Tübingen was released. Their technique used a convolutional neural network to factor images into their separate style and content components. By using the network as a generic image parser, this in turn allowed the creation of new images combining the style of one image with the content of another. Once again it took the creative coding community by storm, and many artists and coders immediately began experimenting with the new algorithm, resulting in Twitter bots and other explorations and experiments.
The style transfer algorithm crosses a photo with a painting style; for example, Neil deGrasse Tyson in the style of Kandinsky’s Jane Rouge Bleu. Photo by Guillaume Piolle, used with permission.
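Concretely, the algorithm defines both components as losses over a pretrained CNN’s feature maps: content is matched directly, while style is matched through correlations between channels (Gram matrices), and the output image is optimized to satisfy both. A minimal sketch of those losses (layer choices, weights and the VGG backbone are the usual, but not the only, options):

```python
# Minimal sketch of the content and style losses behind neural style transfer.
# The feature tensors F are assumed to come from a pretrained CNN (e.g. VGG),
# reshaped to (channels, height * width); normalization details vary by implementation.
import torch

def gram(F):
    return F @ F.t() / F.numel()              # channel-to-channel correlations

def content_loss(F_generated, F_content):
    return torch.mean((F_generated - F_content) ** 2)

def style_loss(F_generated, F_style):
    return torch.mean((gram(F_generated) - gram(F_style)) ** 2)

# The generated image's pixels are optimized to minimize
#   content_weight * content_loss + style_weight * (style_loss summed over several layers).
```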
The open-source deep-learning community, especially projects such as GitXiv, hugely contributed to the spread, accessibility and development of these algorithms. Both DeepDream and style transfer were rapidly implemented in a plethora of different languages and deep learning packages. Immediately others took the techniques and developed them further.
“Saxophone dreams” - Mike Tyka.
With machine learning as a field moving forward at a breakneck pace and rapidly becoming part of many, if not most, online products, the opportunities for artistic uses are as wide as they are unexplored, and perhaps overlooked. However, interest is growing rapidly: the University of London is now offering a course on machine learning and art, NYU ITP offers a similar program this year, and the Tate Modern’s IK Prize 2016 topic is Artificial Intelligence.

These are exciting early days, and we want to continue to stimulate artistic interest in these emerging technologies. To that end, we are announcing a two-day DeepDream event in San Francisco at the Gray Area Foundation for the Arts, aimed at showcasing some of the latest explorations of the intersection of Machine Intelligence and Art, and spurring discussion focused on future directions:
  • Friday, Feb 26th: DeepDream: The Art of Neural Networks, an exhibit consisting of 29 neural-network-generated artworks, created by artists at Google and from around the world. The works will be auctioned, with all proceeds going to the Gray Area Foundation, which has been active in supporting the intersection between arts and technology for over 10 years.
  • Saturday, Feb 27th: Art and Machine Learning Symposium, an open one-day symposium on machine learning and art, aiming to bring together the neural network and creative coding communities to exchange ideas, learn and discuss. Videos of all the talks will be posted online after the event.
We look forward to sharing some of the interesting works of art generated by the art and machine learning community, and being part of the discussion of how art and technology can be combined.