
Touchless Gesture-Based Exhibits, Part One: High-Fidelity Interaction

A 2015 proof-of-concept with Intel shows the potential power of touchless interaction in the age of COVID-19.
April 10, 2020
Authored by Jim Spadaccini, Founder & Creative Director

As museums and public spaces start to reopen later this year, some visitors and staff will be wary of interacting with touch screens and other hands-on exhibits. We saw this before during the 2009 H1N1 swine flu outbreak, although that pandemic was far less severe than COVID-19, and touch screens were far less common then than they are now.

This likely means that touchless gesture interfaces and even voice recognition will find traction in public environments as venues begin to open up again. While we don’t believe this situation will be permanent, the current crisis provides opportunities to move touchless technology and design approaches forward as we continue to develop new types of interactions.

Our own history with gesture and voice recognition goes back more than five years. Most notably, we worked with Intel on a proof-of-concept interface that employed the first version of the RealSense motion recognition system and SDK. For this project, we used the RealSense SR300, a short-range depth camera.
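The original proof-of-concept was built against Intel's first-generation RealSense SDK, which has since been retired. For readers who want to experiment today, a minimal sketch using the current cross-platform librealsense2 Python bindings (pyrealsense2), which also support the SR300, might look like the following. The stream resolution, frame rate, and sample point are illustrative choices, not values from our project.

```python
import numpy as np
import pyrealsense2 as rs

# Configure a depth stream from a short-range camera such as the SR300.
# 640x480 at 30 fps are illustrative settings, not values from the project.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    for _ in range(300):  # read roughly ten seconds of frames at 30 fps
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth:
            continue
        # Convert the frame to a NumPy array of raw 16-bit depth values.
        depth_image = np.asanyarray(depth.get_data())
        # A crude proximity check: depth at the image center, in meters.
        nearest_m = depth.get_distance(320, 240)
        print(f"depth at center: {nearest_m:.2f} m")
finally:
    pipeline.stop()
```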

While this application was developed for the desktop, we learned a great deal about interface design for motion-based applications in general. For example, immediate, active feedback is essential: users need to be shown how to interact and told when their interactions have been recognized. Visual helpers for specific gestures are also important, since most gestures are not intuitive on their own. (We’ve seen this with multitouch screen-based interfaces, where most visitors know how to pinch and zoom, but more elaborate gestures require specific instructions.)
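One way to make that feedback unambiguous is to collapse the tracker's raw output into a handful of explicit UI states, each with its own on-screen treatment. The sketch below is hypothetical; the state names and the update_feedback helper are ours for illustration, not part of any SDK.

```python
from enum import Enum, auto
from typing import Optional

class FeedbackState(Enum):
    IDLE = auto()        # no hand in view: show a "raise your hand" prompt
    TRACKING = auto()    # hand detected: render a live cursor or hand outline
    RECOGNIZED = auto()  # gesture matched: flash a confirmation animation

def update_feedback(hand_visible: bool, gesture: Optional[str]) -> FeedbackState:
    """Collapse raw tracking data into one unambiguous feedback state."""
    if not hand_visible:
        return FeedbackState.IDLE
    if gesture is not None:
        return FeedbackState.RECOGNIZED
    return FeedbackState.TRACKING
```

Driving all on-screen feedback from a single state like this keeps the visitor's view of the system consistent: at any moment, the exhibit is visibly either waiting, tracking, or confirming.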

Because the application ran on the desktop, we had the added advantage of being able to expect that users would learn a few gestures in a short period of time, which made more elaborate and nuanced interactions possible. In public spaces like museums, which often present a wide range of opportunities and stimuli, designers need to be careful not to overload visitors with too many instructions and ways to interact. As we know from the basics of interface design, visitors also become frustrated when elements don’t work as expected. That’s why keeping the set of possible input gestures simple is key: rather than creating a large library of possible gestures, it’s often most effective to focus on a small set of common or easy-to-remember movements.
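In code, that discipline can be enforced by keeping the gesture vocabulary to a small, explicit table and quietly ignoring anything else. A minimal sketch follows; the gesture and action names are hypothetical.

```python
from typing import Optional

# A deliberately small vocabulary: movements most visitors already know.
GESTURE_ACTIONS = {
    "swipe_left": "next_item",
    "swipe_right": "previous_item",
    "open_palm": "pause",
}

def handle_gesture(name: str) -> Optional[str]:
    """Return the exhibit action for a recognized gesture, or None.

    Unrecognized gestures are ignored rather than guessed at, which
    avoids the frustration of an exhibit that misfires unexpectedly.
    """
    return GESTURE_ACTIONS.get(name)
```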

One could easily imagine a slimmed-down version of this proof-of-concept working well in a public space. In addition, if the space is not too loud, voice recognition could be an option for navigation. Voice recognition for interaction has the added benefit that visitors can often say exactly what they want without having to learn a new interface: “play”, “stop”, “next”, “back”, etc. This also presents an avenue for increasing accessibility, with voice commands providing access to individuals who cannot interact with gestures. (For example, our audio accessibility layer could be retrofitted to allow voice commands to replace touch gestures.)
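As an illustration of how little vocabulary such an interface needs, here is a minimal sketch using the third-party SpeechRecognition package for Python (an assumption for demonstration, not the system used in our proof-of-concept). It listens for a short phrase and matches it against the four navigation commands above; note that the Google recognizer it calls requires a network connection.

```python
import speech_recognition as sr

# The entire command vocabulary: visitors can say what they already know.
COMMANDS = {"play", "stop", "next", "back"}

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    # Calibrate for background noise, which matters in a busy gallery.
    recognizer.adjust_for_ambient_noise(source)
    print("Listening for a command...")
    audio = recognizer.listen(source, phrase_time_limit=3)

try:
    text = recognizer.recognize_google(audio).lower()
    matched = [word for word in COMMANDS if word in text]
    if matched:
        print(f"command recognized: {matched[0]}")
    else:
        print("no known command in:", text)
except sr.UnknownValueError:
    print("speech was unintelligible")
except sr.RequestError:
    print("recognition service unavailable")
```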

All of this poses an intriguing question: Why weren’t gesture- or motion-based interfaces pursued more aggressively years ago? At the time, there were some technical limitations in recognizing and differentiating inputs, but those have largely been overcome. A more likely explanation is that mouse, keyboard, and touch interaction are simply very efficient, both on the desktop and in public settings on kiosks, touch walls, and touch tables. With high-fidelity touch-based interaction already in wide use, many designers may not have seen the value of devoting time and effort to new systems that visitors would need to learn. But we are living in a new world, and although touch-based interfaces are part of that landscape, the time may be right to re-explore high-fidelity motion and voice recognition as well.

When it comes to motion-based exhibits for public spaces, the emphasis has largely been on full-body motion. These types of kinetic exhibits also have the benefit of allowing groups of people to interact (even with physical distancing). In the next post in this series, we will look at motion-based exhibits and share several examples of motion-based kiosks and immersive environments we’ve developed over the years.

(Update: Touchless Gesture-Based Exhibits, Part Two: Full-Body Interaction is now available as of April 22, 2020.)