Disney's Magic Bench

Magic Bench is a platform that supports multi-sensory immersive experiences in which people can interact directly with animated characters.  Surroundings are instrumented rather than the individual, allowing people to share the magical experience as a group without the use of a head-mounted display or handheld device.

Role

Asset Creation

Concept Development

Narrative Development

Usability Testing 


Where

SIGGRAPH 2017 

Grace Hopper 2017

Google SPAN 2017

Disney's Polynesian Resort

Disney Fairy Tale Weddings

Press

"Hear a character coming, see them enter the space, and feel them sit next to you.”

We demonstrate this technology in a series of vignettes featuring playful animals. Participants not only see and hear these characters but can also feel them, via haptic feedback, as the characters interact with the bench. Many of the characters also interact with users directly, either through speech or touch. Sit on Disney Research's Magic Bench and you might get rained on or have a tiny elephant hand you a glowing orb. This demonstrates HCI in its simplest form: a person walks up to a computer, and the computer hands the person an object.

"Disney’s ‘Magic Bench’ Fixes AR’s Biggest Blind Spot" 

-wired.com

The Bench

The bench itself is a critical element. Not only does it contain haptic actuators, but it elegantly constrains the experience in several important ways. We know the location and the number of participants and can infer their gaze. The bench creates a stage with a foreground and a background, with the seated participants in the middle ground. It even serves as a controller; the mixed reality experience doesn't begin until someone sits down, and different formations of seated people prompt different types of experiences.

Under the Hood

People seated on the Magic Bench can see themselves in a mirrored image on a large screen in front of them, creating a third person point of view. The scene is reconstructed using a depth sensor, allowing the participants to actually occupy the same 3D space as a computer-generated character or object, rather than superimposing one video feed onto another.


We used a color camera and depth sensor to create a real-time, HD-video-textured 3D reconstruction of the bench, surroundings, and participants. The algorithm reconstructs the scene, aligning the RGB camera information with the depth sensor information.
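The alignment step described above can be sketched roughly as follows. This is a minimal illustration rather than the production pipeline: the actual sensors, intrinsics, and calibration used for Magic Bench are not public, so every parameter and function name here is hypothetical. The sketch back-projects a depth map into camera-space 3D points, then reprojects those points into the color camera to sample an RGB texture for each point.

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into camera-space 3D points.

    fx, fy, cx, cy are depth-camera intrinsics (hypothetical values;
    a real system would load these from sensor calibration).
    Returns an (h, w, 3) array of [x, y, z] points.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)

def sample_color(points, rgb, fx, fy, cx, cy, depth_to_color):
    """Project 3D points into the color camera and sample RGB texture.

    depth_to_color is a 4x4 rigid transform between the two sensors,
    normally obtained from extrinsic calibration.
    Returns the sampled colors plus a validity mask (points that land
    inside the color image with positive depth).
    """
    h, w, _ = points.shape
    p = points.reshape(-1, 3)
    p = (depth_to_color[:3, :3] @ p.T).T + depth_to_color[:3, 3]
    # Guard against division by zero for invalid (zero-depth) points.
    z = np.where(p[:, 2] > 0, p[:, 2], np.inf)
    u = np.round(fx * p[:, 0] / z + cx).astype(int)
    v = np.round(fy * p[:, 1] / z + cy).astype(int)
    valid = (p[:, 2] > 0) & (u >= 0) & (u < rgb.shape[1]) \
                          & (v >= 0) & (v < rgb.shape[0])
    colors = np.zeros((h * w, 3), dtype=rgb.dtype)
    colors[valid] = rgb[v[valid], u[valid]]
    return colors.reshape(h, w, 3), valid.reshape(h, w)
```

In the trivial case of co-located sensors with identical intrinsics (identity extrinsic), every depth pixel samples its own color pixel; real rigs have an offset between the cameras, which is exactly what produces the depth shadows discussed next.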

To eliminate depth shadows that occur in areas where the depth sensor has no corresponding line of sight with the color camera, a modified algorithm creates a 2D backdrop. The 3D and 2D reconstructions are positioned in virtual space and populated with 3D characters and effects in such a way that the resulting real-time rendering is a seamless composite, fully capable of interacting with virtual physics, light, and shadows.
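At its core, the depth-shadow fix described above is a hole-filling composite: pixels with no valid depth/color correspondence fall back to a flat 2D backdrop instead of showing a gap. A minimal sketch of that idea (function name and inputs are hypothetical, not from the Magic Bench codebase):

```python
import numpy as np

def composite_with_backdrop(reconstructed_rgb, valid_mask, backdrop_rgb):
    """Fill depth-shadow pixels with a pre-rendered 2D backdrop.

    reconstructed_rgb: (h, w, 3) colors from the 3D reconstruction
    valid_mask:        (h, w) bool, True where reconstruction succeeded
    backdrop_rgb:      (h, w, 3) fallback 2D backdrop image
    """
    # Broadcast the mask across the color channels and pick per pixel.
    return np.where(valid_mask[..., None], reconstructed_rgb, backdrop_rgb)
```

In the real system the backdrop sits behind the 3D geometry in virtual space, so CG characters can still pass in front of it and cast light and shadows onto the scene.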

Team

Moshe Mahler

Kyna McIntosh

John Mars

Jimmy Krahe

Jim McCann

Alexander Rivera

Jake Marsico

Michelle Ma

Ali Israr

Shawn Lawson
