M3: Multi-Modal Interface in Multi-Display Environment for Multi-Users
A sophisticated and intuitive interface for multi-display environments in which the displays are stitched together seamlessly and dynamically according to each user's viewpoint.
Enhanced Life
M3 is a multi-modal interface in a multi-display environment for multiple users. It combines multi-modal interaction techniques such as gaze, body movement, and hand gestures. Perspective-aware interfaces also allow users to observe and control information across the multiple displays as if they were in front of an ordinary desktop GUI environment.
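The core geometric idea behind a perspective-aware display is viewpoint-dependent projection: content that conceptually lives on a single virtual plane is rendered on each physical display by casting rays from the user's eye. The sketch below illustrates that principle under stated assumptions; the function and variable names are hypothetical and do not come from M3's actual implementation, which is not described at this level of detail here.

```python
import numpy as np

def project_to_display(eye, point, origin, normal, u, v):
    """Intersect the ray from the viewer's eye through a virtual-content
    point with a display plane, returning (s, t) coordinates in the
    display's local basis (u, v). Returns None when the ray is parallel
    to the display or the display lies behind the eye."""
    d = point - eye                               # viewing-ray direction
    denom = np.dot(normal, d)
    if abs(denom) < 1e-9:
        return None                               # ray parallel to display
    t = np.dot(normal, origin - eye) / denom
    if t <= 0:
        return None                               # display behind the eye
    hit = eye + t * d                             # intersection with display plane
    rel = hit - origin
    return float(np.dot(rel, u)), float(np.dot(rel, v))

# Illustrative setup: viewer 1 m in front of a display at the world origin,
# looking at virtual content placed 0.5 m behind the display plane.
eye = np.array([0.0, 0.0, 1.0])
point = np.array([0.25, 0.1, -0.5])
display_origin = np.array([0.0, 0.0, 0.0])
display_normal = np.array([0.0, 0.0, 1.0])
u_axis = np.array([1.0, 0.0, 0.0])
v_axis = np.array([0.0, 1.0, 0.0])

print(project_to_display(eye, point, display_origin, display_normal,
                         u_axis, v_axis))
```

Running the same computation for each display's origin, normal, and basis vectors yields a consistent image stitched across the displays from that user's point of view; as the tracked viewpoint moves, the projected coordinates update accordingly.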
Goal
To build intelligent environments that provide appropriate types of information and input methods for specific interaction requirements.
Innovations
This project explores two important domains of interface technology: multi-modal interaction and perspective-aware rendering.
Vision
In the future, people will use multi-modal interfaces to interact naturally and intuitively with displays located everywhere.
Contributors
Satoshi Sakurai
Tokuo Yamaguchi
Yoshifumi Kitamura
Yuichi Itoh
Ryo Fukazawa
Fumio Kishino
Osaka University
Miguel A. Nacenta
University of Saskatchewan
Sriram Subramanian
University of Bristol