Deep Dream VR Experiments

Garden Street: ATP Tech Gallery

DeepDream VR Experiments are our first attempts at bringing the computer vision program known as “DeepDream” into virtual reality. A series of VR documentary-style scenes was fed through a convolutional neural net. The system found and enhanced particular patterns in the footage. The result was a set of newly imagined virtual landscapes and worlds.
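The core mechanic can be sketched in a few lines: instead of adjusting a network's weights to recognize an image, DeepDream-style systems adjust the *image* so that a chosen layer's activations get stronger, amplifying whatever patterns the filters respond to. The toy below is a minimal 1-D NumPy illustration of that gradient-ascent loop; the signal, filter values, and step size are illustrative assumptions, not the actual pipeline used for these experiments.

```python
import numpy as np

def activations(signal, kernel):
    """Valid-mode 1-D convolution: a stand-in for one layer's activations."""
    m = len(kernel)
    return np.array([signal[j:j + m] @ kernel for j in range(len(signal) - m + 1)])

def loss_and_grad(signal, kernel):
    """L = sum(a^2); analytic gradient of L with respect to the input signal."""
    a = activations(signal, kernel)
    g = np.zeros_like(signal)
    m = len(kernel)
    for j, aj in enumerate(a):
        g[j:j + m] += 2.0 * aj * kernel  # back-propagate through each dot product
    return float(np.sum(a ** 2)), g

rng = np.random.default_rng(0)
signal = rng.normal(size=64)          # stand-in for one scanline of VR footage
kernel = np.array([-1.0, 2.0, -1.0])  # a toy "edge detector" filter

losses = []
for _ in range(50):
    loss, g = loss_and_grad(signal, kernel)
    losses.append(loss)
    # Gradient ASCENT on the input, with the mean-magnitude normalization
    # commonly used in DeepDream implementations:
    signal += 0.05 * g / (np.abs(g).mean() + 1e-8)
```

After the loop, the signal is increasingly dominated by the structures the filter "wants to see" (the activation loss grows with every step), which is the same dynamic that turns documentary footage into hallucinated landscapes at full scale.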

We did these experiments for several reasons. First, we wanted to see if it would actually work from a technical standpoint. Second, we wanted to throw VR footage into this system and see what would stick. What about this was interesting? What was forgettable? Third, we wanted to explore the conversations that naturally arise. How did our creative roles shift? Were we now mere data feeders, parameter pullers, button pushers? Were we more inclined to let the system do its thing, or did we feel an immediate need to interfere?

The stereoscopic live-action VR footage used for these experiments was intentionally non-precious in nature. It spans an array of experiences – from standing on a fisherman’s wharf in Iceland, to witnessing a futbol game in the Champ de Mars, to hanging out in a family’s living room in Pennsylvania. These scenes were filmed as tests in and of themselves in order to explore the medium of VR. As such, they were purposely kept simple so that the medium’s capabilities would be more identifiable.

Using this kind of VR footage is also historically appropriate, as it calls to mind Eadweard Muybridge’s motion experiments in the late 19th century as well as Dziga Vertov’s cinematic experiments with editing and montage. In each case “normalcy” helped to establish a far more suitable platform on which to understand a new kind of technology and artistic process.

While some experiments resonate more than others, we feel that DeepDream VR Experiments as a whole provokes a necessary critical conversation around the role of AI in the creative process and its potential for the VR space.

Installation / Activity