Back to the Future: Revisiting Mouse and Keyboard Interaction for HMD-based Immersive Analytics
Abstract
With the rise of natural user interfaces, immersive analytics applications often focus on novel interaction modalities such as mid-air gestures, gaze, or tangible interaction, utilizing input devices such as depth sensors, touch screens, and eye trackers. At the same time, traditional input devices such as the physical keyboard and mouse are used to a lesser extent. We argue that for certain work scenarios, such as conducting analytic tasks in stationary desktop settings, it can be valuable to combine the benefits of novel and established input devices and modalities to create productive immersive analytics environments.
keywords:
virtual reality; keyboard; mouse; immersive analytics; head-mounted displays
1 Introduction
The field of Immersive Analytics seeks to remove barriers between data, the people who analyze it, and the tools they use to do so [19]. Researchers combine knowledge from fields such as data visualization, human-computer interaction, and mixed reality to create and study new tools and approaches for engaging with data. The rise of natural user interfaces as well as the introduction of affordable immersive head-mounted displays (HMDs) [10] has led to a wide variety of interaction techniques for data and view specification and manipulation [5, 13], including touch, spatial gestures, tangible, and gaze interaction, and to a number of archetypal setups such as large-screen collaborative spaces (with or without personal displays such as tablets) or immersive setups (projection- or head-mounted display-based); for an overview we refer to Büschel et al. [8].
Specifically, HMD-based systems make heavy use of spatial gestures using bare hands or controllers but are typically designed to support free-space interaction, assuming no interfering objects or humans nearby. While this allows for expressive and potentially co-located interaction, free-space interaction comes at the cost of increased fatigue [14] or inaccurate input (e.g., when using hand- or gaze-based ray-casting techniques [7, 21]). While a number of techniques have been proposed to facilitate object selection in the presence of clutter (e.g., [23]), to increase spatial pointing accuracy [2, 15], or to mitigate the fatigue of spatial gestures [12], they do not eliminate those challenges.
We argue that combining desktop-based input devices such as the physical keyboard and mouse with immersive head-mounted displays can benefit single users in immersive analytics tasks, similar to office-based knowledge work [11, 9] or the use of hybrid 2D/3D interaction in medicine [17].
[Figure 1: Interaction with head-mounted display, keyboard and mouse.]
[Figure: Top: 3D pointing with one hand, selection confirmation via mouse press. Bottom: transition from mid-air pointing to key press.]
2 Keyboard and Mouse for HMD-based Immersive Analytics
The physical keyboard and mouse are optimized for symbolic and precise 2D input and have a long tradition as standard input devices in desktop environments. While not free from challenges, they have been optimized to support long hours of work [6, 28].
The keyboard was designed for rapid entry of symbolic information, and although it may not be the best mechanism devised for the task, its familiarity, which enables good performance without considerable learning effort, has kept it almost unchanged for many years. However, when interacting with spatial data, keyboard and mouse are perceived as falling short of providing efficient input capabilities [4], even though they are successfully used in many 3D environments (such as CAD or gaming [24]), can be modified to allow 3D interaction [27, 20], and can outperform 3D input devices in specific tasks such as 3D object placement [3, 25].
With the advent of self-contained immersive head-mounted displays, which allow for spatial tracking of the environment and the user's hands, as well as eye tracking, there is potential to efficiently utilize keyboard and mouse interaction in single-user, desktop-based environments (see Figure 1) for immersive analytics tasks. For example, Wang et al. [26] explored an Augmented Reality extension to a desktop-based analytics environment: they added a stereoscopic data view using a HoloLens to a traditional 2D desktop environment and interacted with keyboard and mouse across both the HoloLens and the desktop. Furthermore, the ability of immersive near-eye displays to modify the visual representations of keyboard and mouse enhances their flexibility and allows for application-specific adaptations [22].
Along this research trajectory, we see the following aspects applicable to immersive analytics using virtual reality or video see-through-based augmented reality.
2.1 Complementary and Multi-modal Input
So far, problems in switching between spatial interaction (e.g., using controllers) and keyboard and mouse interaction have limited the applicability of desktop-based input devices for immersive analytics. Even in stationary, desktop-based scenarios it can be challenging to switch from motion-tracked controllers to keyboard and mouse. However, given the possibility to spatially track the user's hands as well as the keyboard and mouse through model-based tracking [16, 18], applicable to today's HMDs with camera-based inside-out tracking, we see the potential to seamlessly switch between mid-air interaction and mouse or keyboard input (see Figure 1). This could enable efficient switching between tasks (e.g., selecting 3D data around the user through spatial gestures and changing data properties through symbolic input on the keyboard) or subsequent fine-grained selection on a 2D subspace of the data using the mouse. Further, the input devices can be combined for multi-modal interaction, as sketched below. For example, one hand could be used for (uncertain) mid-air data selection, while the other hand provides certain action confirmation, e.g., through a key press on the physical keyboard, or alternatively moves the data views around the user instead of having the user navigate through the virtual scene.
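To make this combination concrete, the following minimal sketch shows one frame of such a multi-modal interaction loop: an uncertain mid-air ray highlights a candidate data item, and a certain mouse press confirms the selection. All types and methods here (the hand tracker's position and pointing queries, the scene's raycast, highlight, and select calls) are hypothetical placeholders for illustration, not the API of any specific framework.

```python
# Hedged sketch of multi-modal selection: an uncertain mid-air ray previews
# candidates, a certain mouse press commits. All interfaces are hypothetical.
from dataclasses import dataclass

@dataclass
class Ray:
    origin: tuple     # (x, y, z) position of the tracked hand
    direction: tuple  # normalized pointing direction

def update(hand_tracker, mouse, scene):
    """One frame of the interaction loop."""
    ray = Ray(hand_tracker.position(), hand_tracker.pointing_direction())
    candidate = scene.raycast(ray)   # uncertain: jitter, occlusion, drift
    scene.highlight(candidate)       # visual preview, no commitment yet
    if candidate and mouse.button_pressed("left"):
        scene.select(candidate)      # certain: a discrete physical event
```

The division of labor is the point of the design: the continuous, noisy spatial channel only ever proposes, while the discrete, reliable physical button disposes.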
[Figure: Color scale mapped to keyboard keys. Color selection could be interpolated by pressing two keys at once.]
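As an illustration of the color-scale mapping in the figure, the sketch below maps one keyboard row onto a color scale and interpolates between the colors of simultaneously pressed keys. The chosen row, the key names, and the blue-to-red scale are assumptions for illustration only.

```python
# Illustrative sketch of two-key color interpolation: each key on one
# keyboard row is mapped to a position on a color scale; pressing two
# keys at once yields the midpoint between their colors.
KEY_ROW = "qwertyuiop"  # ten keys spanning the color scale (assumed)

def color_of(key, scale):
    """Map a key to its color via the key's position on the row."""
    t = KEY_ROW.index(key) / (len(KEY_ROW) - 1)
    return scale(t)

def selected_color(pressed_keys, scale):
    """Average the colors of all simultaneously pressed row keys."""
    colors = [color_of(k, scale) for k in pressed_keys if k in KEY_ROW]
    if not colors:
        return None
    n = len(colors)
    return tuple(sum(c[i] for c in colors) / n for i in range(3))

# Example: a simple blue-to-red scale; pressing 'q' and 'p' together
# yields the purple midpoint.
blue_to_red = lambda t: (t, 0.0, 1.0 - t)
print(selected_color({"q", "p"}, blue_to_red))  # (0.5, 0.0, 0.5)
```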
2.2 Augmenting Peripherals
Virtual data entities can also be augmented on or around the keyboard and mouse to allow for direct interaction with those virtual data items [22]. For example, in a node-link diagram, individual nodes could be associated with individual keys to allow quick selection (one key mapped to one data entity), a single node could be mapped to multiple keys, e.g., when only few nodes are present, or a single key could represent multiple nodes (e.g., in a dense node-link diagram with many nodes); a minimal sketch of such a mapping follows below. Similarly, user interface elements for manipulating object properties, such as sliders, could be mapped to multiple keys on the keyboard, to the mouse wheel, or to the area around the mouse. Also, different areas on a physical mouse with a touch-sensitive surface could have different semantics. Again, the advantage of mapping these graphical elements to the physical input devices lies in the increased certainty of the input (e.g., a key press, moving the mouse over a physical surface) in contrast to uncertain mid-air or gaze-based input. In addition, a spatially tracked mouse could be utilized to enable constrained 3D object manipulations such as rotations or scaling.
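The sketch below illustrates one way such a node-to-key mapping could work, assuming a hypothetical select callback: nodes are assigned one-to-one to keys while enough keys are available and bucketed many-to-one otherwise. The round-robin bucketing is just one possible grouping; a real system might cluster spatially adjacent nodes instead.

```python
# Hedged sketch: associating nodes of a node-link diagram with physical
# keys so that a key press selects the node(s) rendered above that key.
from collections import defaultdict

KEYS = list("qwertyuiopasdfghjklzxcvbnm")  # 26 assignable letter keys

def assign_nodes_to_keys(node_ids):
    """Return key -> list of node ids: one-to-one while there are
    enough keys, many-to-one (bucketed round-robin) otherwise."""
    mapping = defaultdict(list)
    for i, node in enumerate(node_ids):
        mapping[KEYS[i % len(KEYS)]].append(node)
    return dict(mapping)

def on_key_press(key, mapping, select):
    """Select all nodes mapped to the pressed key (often exactly one)."""
    for node in mapping.get(key, []):
        select(node)

# Example: 30 nodes over 26 keys -> the first four keys hold two nodes each.
mapping = assign_nodes_to_keys(range(30))
on_key_press("q", mapping, select=print)  # prints 0 and 26
```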
3 Conclusion and Future Work
With this position paper, we aim to increase awareness of the potential that traditional desktop-based input devices such as the physical keyboard and mouse can bring to immersive analytics tasks. This potential lies in combining the certain but (in terms of degrees of freedom) spatially limited input of those devices with expressive but uncertain and fatiguing spatial input, as well as in the ability to virtually augment keyboard and mouse for enhanced interaction. In future work, we aim to investigate specific immersive analytics tasks and to study the opportunities of multi-modal interaction between spatial and keyboard-and-mouse-based input in more detail. Finally, we will also explore the opportunities of integrating stationary touch screens (e.g., integrated into laptops) for immersive analytics tasks.
References
- [2] Ferran Argelaguet and Carlos Andujar. 2013. A survey of 3D object selection techniques for virtual environments. Computers & Graphics 37, 3 (2013), 121–136.
- [3] François Bérard, Jessica Ip, Mitchel Benovoy, Dalia El-Shimy, Jeffrey R Blum, and Jeremy R Cooperstock. 2009. Did “Minority Report” get it wrong? Superiority of the mouse over 3D input devices in a 3D placement task. In IFIP Conference on Human-Computer Interaction. Springer, 400–414.
- [4] Lonni Besançon, Paul Issartel, Mehdi Ammi, and Tobias Isenberg. 2017. Mouse, tactile, and tangible input for 3D manipulation. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 4727–4740.
- [5] Doug Bowman, Ernst Kruijff, Joseph J LaViola Jr, and Ivan P Poupyrev. 2004. 3D User Interfaces: Theory and Practice. Addison-Wesley.
- [6] Jay L Brand. 2008. Office ergonomics: A review of pertinent research and recent developments. Reviews of human factors and ergonomics 4, 1 (2008), 245–282.
- [7] Michelle A Brown, Wolfgang Stuerzlinger, and EJ Mendonça Filho. 2014. The performance of un-instrumented in-air pointing. In Proceedings of Graphics Interface 2014. Citeseer, 59–66.
- [8] Wolfgang Büschel, Jian Chen, Raimund Dachselt, Steven Drucker, Tim Dwyer, Carsten Görg, Tobias Isenberg, Andreas Kerren, Chris North, and Wolfgang Stuerzlinger. 2018. Interaction for immersive analytics. In Immersive Analytics. Springer, 95–138.
- [9] Citigroup. 2016 (accessed March 31, 2020). Citi HoloLens Holographic Workstation. https://www.youtube.com/watch?v=0NogltmewmQ
- [10] Grégoire Cliquet, Matthieu Perreira, Fabien Picarougne, Yannick Prié, and Toinon Vigier. 2017. Towards HMD-based immersive analytics.
- [11] Jens Grubert, Eyal Ofek, Michel Pahud, Per Ola Kristensson, Frank Steinicke, and Christian Sandor. 2018. The office of the future: Virtual, portable, and global. IEEE computer graphics and applications 38, 6 (2018), 125–133.
- [12] Jeffrey T Hansberger, Chao Peng, Shannon L Mathis, Vaidyanath Areyur Shanthakumar, Sarah C Meacham, Lizhou Cao, and Victoria R Blakely. 2017. Dispelling the gorilla arm syndrome: the viability of prolonged gesture interactions. In International Conference on Virtual, Augmented and Mixed Reality. Springer, 505–520.
- [13] Jeffrey Heer and Ben Shneiderman. 2012. Interactive dynamics for visual analysis. Queue 10, 2 (2012), 30–55.
- [14] Juan David Hincapié-Ramos, Xiang Guo, Paymahn Moghadasian, and Pourang Irani. 2014. Consumed Endurance: A Metric to Quantify Arm Fatigue of Mid-Air Interactions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’14). Association for Computing Machinery, New York, NY, USA, 1063–1072. DOI:http://dx.doi.org/10.1145/2556288.2557130
- [15] Mikko Kytö, Barrett Ens, Thammathip Piumsomboon, Gun A Lee, and Mark Billinghurst. 2018. Pinpointing: Precise head-and eye-based target selection for augmented reality. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–14.
- [16] Vincent Lepetit, Pascal Fua, and others. 2005. Monocular model-based 3d tracking of rigid objects: A survey. Foundations and Trends® in Computer Graphics and Vision 1, 1 (2005), 1–89.
- [17] Veera Bhadra Harish Mandalika, Alexander I Chernoglazov, Mark Billinghurst, Christoph Bartneck, Michael A Hurrell, Niels De Ruiter, Anthony PH Butler, and Philip H Butler. 2018. A hybrid 2D/3D user interface for radiological diagnosis. Journal of digital imaging 31, 1 (2018), 56–73.
- [18] Eric Marchand, Hideaki Uchiyama, and Fabien Spindler. 2015. Pose estimation for augmented reality: a hands-on survey. IEEE transactions on visualization and computer graphics 22, 12 (2015), 2633–2651.
- [19] Kim Marriott, Falk Schreiber, Tim Dwyer, Karsten Klein, Nathalie Henry Riche, Takayuki Itoh, Wolfgang Stuerzlinger, and Bruce H Thomas. 2018. Immersive Analytics. Vol. 11190. Springer.
- [20] Gary Perelman, Marcos Serrano, Mathieu Raynal, Celia Picard, Mustapha Derras, and Emmanuel Dubois. 2015. The roly-poly mouse: Designing a rolling input device unifying 2d and 3d interaction. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. 327–336.
- [21] Yuan Yuan Qian and Robert J Teather. 2017. The eyes don’t have it: an empirical comparison of head-based and eye-based selection in virtual reality. In Proceedings of the 5th Symposium on Spatial User Interaction. 91–98.
- [22] Daniel Schneider, Alexander Otte, Travis Gesslein, Philipp Gagel, Bastian Kuth, Mohamad Shahm Damlakhi, Oliver Dietz, Eyal Ofek, Michel Pahud, Per Ola Kristensson, Jörg Müller, and Jens Grubert. 2019. Reconviguration: Reconfiguring physical keyboards in virtual reality. IEEE transactions on visualization and computer graphics 25, 11 (2019), 3190–3201.
- [23] Ludwig Sidenmark, Christopher Clarke, Xuesong Zhang, Jenny Phu, and Hans Gellersen. 2020. Outline Pursuits: Gaze-assisted Selection of Occluded Objects in Virtual Reality. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems.
- [24] Wolfgang Stuerzlinger and Chadwick A Wingrave. 2011. The value of constraints for 3D user interfaces. In Virtual Realities. Springer, 203–223.
- [25] Junwei Sun, Wolfgang Stuerzlinger, and Bernhard E Riecke. 2018. Comparing input methods and cursors for 3D positioning with head-mounted displays. In Proceedings of the 15th ACM Symposium on Applied Perception. 1–8.
- [26] Xiyao Wang, Lonni Besançon, David Rousseau, Mickael Sereno, Mehdi Ammi, and Tobias Isenberg. 2020. Towards an Understanding of Augmented Reality Extensions for Existing 3D Data Analysis Tools. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems.
- [27] Colin Ware and Kathy Lowther. 1997. Selection using a one-eyed cursor in a fish tank VR environment. ACM Transactions on Computer-Human Interaction (TOCHI) 4, 4 (1997), 309–322.
- [28] EHC Woo, Peter White, and CWK Lai. 2016. Ergonomics standards and guidelines for computer workstation design and the impact on users’ health–a review. Ergonomics 59, 3 (2016), 464–475.