Spatial Gestures using a Tactile-Proprioceptive Display
Spatial interaction is a key component of natural user interfaces (NUI), where a touch or a gesture activates or alters the position of on-screen objects. Spatial interaction relies on the ability to visually acquire the position of an object, which is challenging for users who are visually impaired, or in mobile contexts where using a display may be dangerous or inappropriate.
Several non-visual NUIs have been proposed, yet these only facilitate non-spatial gestures, such as navigating lists and selecting an item. Others do allow for spatial gestures but rely on the user's visuospatial memory: users must keep track of the objects with which they interact, which may be difficult when multiple objects are present.
This project addresses the current limitations of non-visual NUIs by presenting an eye- and ear-free display technique that can point out the location of an object in a 2D or 3D display space defined in front of the user. Once the location is acquired, users can interact with the object using a spatial gesture.
How it works
Our display uses proprioception, the human ability to sense the position and orientation of one's limbs, to appropriate the human body into a display device. Haptic feedback can be augmented with proprioceptive information to create a significantly larger information space that can be accessed in an ear- and eye-free manner. For example, tactile-proprioceptive displays have been explored to point out a target in a mobile navigation system: users scan their environment with a mobile device, and a vibrotactile cue guides them to point the device at the target. Target direction is then conveyed to the user through their own arm posture, effectively appropriating the human body into a display. Where prior research has only explored one-handed 1D target acquisition, our project investigates manual and bimanual 2D and 3D target acquisition.
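The scan-and-vibrate guidance loop described above can be sketched in a few lines. This is a minimal illustration, not the project's actual implementation: the linear intensity mapping, the 10-degree "capture" cone, and all function names are assumptions made for the example.

```python
import math

def angular_error(device_heading_deg, target_bearing_deg):
    """Smallest signed angle (degrees) from the device's pointing
    direction to the target's bearing, wrapped into [-180, 180)."""
    return (target_bearing_deg - device_heading_deg + 180.0) % 360.0 - 180.0

def vibration_intensity(device_heading_deg, target_bearing_deg,
                        capture_angle_deg=10.0):
    """Map angular error to a vibrotactile intensity in [0, 1].

    The cue grows stronger as the device sweeps toward the target and
    saturates at full strength inside the capture cone, signalling that
    the arm now 'displays' the target direction. The linear ramp and
    the capture angle are illustrative choices, not measured values.
    """
    err = abs(angular_error(device_heading_deg, target_bearing_deg))
    if err <= capture_angle_deg:
        return 1.0  # on target: maximum vibration
    # fade linearly from full strength at the cone edge to zero at 180 deg
    return max(0.0, 1.0 - (err - capture_angle_deg) / (180.0 - capture_angle_deg))
```

Extending this 1D sweep to the 2D/3D case investigated here would add a second (elevation) error term and, for bimanual acquisition, one guidance loop per hand.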