With the rise of gestural interfaces and ubiquitous computing, users increasingly encounter systems that offer few physical affordances for interaction. Designers have lately tried to overcome these barriers by presenting multi-page instruction screens at application startup, introductory tutorials for first-time device users, and prominent feedback distinguishing supported from unsupported interactions.
How do you introduce users to new gestures and ways of interacting without extensive help modules or person-to-person assistance? How do people discover that a four-finger swipe is a purposeful interaction, not an accident? Where is the sweet spot between an overly assistive interface and one that leaves the user grasping for a lifeline?
This talk reviews some of the latest assistance methods in touch, gesture-based, and mediated interaction, with examples drawn from the introduction and refinement of gestures and voice in Google Now, the trials of photo-taking and editing on phones, Apple’s hidden gestural language, early Xbox discoveries, challenges faced by the Google Glass team, and almost-ready-for-the-public devices like the Leap Motion controller.