David Lindlbauer
I am an Assistant Professor at the Human-Computer Interaction Institute at Carnegie Mellon University, where I lead the Augmented Perception Lab and co-direct the CMU Extended Reality Technology Center.
My research focuses on understanding how humans perceive and interact with digital information, and on building technology that goes beyond the flat displays of PCs and smartphones to advance our capabilities when interacting with the digital world. To achieve this, I create and study enabling technologies and computational approaches that control when, where, and how virtual content is presented to increase the usability of Augmented Reality and Virtual Reality interfaces. I hold a PhD from TU Berlin and was a postdoctoral researcher at ETH Zurich before joining CMU. I have published more than 35 scientific papers at premier venues in Human-Computer Interaction such as ACM CHI and ACM UIST. My work has attracted media attention in outlets such as MIT Technology Review, Fast Company Design, and Shiropen Japan.
You can also find me on LinkedIn and Google Scholar, or contact me via davidlindlbauer[at]cmu.edu.
Download my CV here: cv_davidlindlbauer.pdf.
Check out the Augmented Perception Lab at CMU HCII.
Selected Publications
For a full list of publications, please visit the Augmented Perception Lab website or my Google Scholar profile.
MineXR: Mining Personalized Extended Reality Interfaces
Extended Reality (XR) interfaces offer engaging user experiences, but their effective design requires a nuanced understanding of user behavior and preferences. This knowledge is challenging to obtain without the widespread adoption of XR devices. We introduce MineXR, a design mining workflow and data analysis platform for collecting and analyzing personalized XR user interaction and experience data. MineXR enables elicitation of personalized interfaces directly from users: for any particular context, users create interface elements using application snapshots from their own smartphone, place them in the environment, and simultaneously preview the resulting XR layout on a headset. Using MineXR, we contribute a dataset of personalized XR interfaces collected from 31 participants, consisting of 695 XR widgets created from 178 unique applications. We provide insights for XR widget functionalities, categories, clusters, UI element types, and placement. Our open-source tools and data support researchers and designers in developing future XR interfaces.
H. Cho, Y. Yan, K. Todi, M. Parent, M. Smith, T. Jonker, H. Benko, D. Lindlbauer. 2024.
MineXR: Mining Personalized Extended Reality Interfaces.
CHI '24, Honolulu, HI, USA.
Project page
RealityReplay: Detecting and Replaying Temporal Changes In Situ using Mixed Reality
Humans easily miss events in their surroundings due to limited short-term memory and field of view. This happens, for example, while watching an instructor's machine repair demonstration or conversing during a sports game. We present RealityReplay, a novel Mixed Reality (MR) approach that tracks and visualizes these significant events using in-situ MR visualizations without modifying the physical space. It requires only a head-mounted MR display and a 360-degree camera. We contribute a method for egocentric tracking of important motion events in users’ surroundings based on a combination of semantic segmentation and saliency prediction, and generating in-situ MR visual summaries of temporal changes. These summary visualizations are overlaid onto the physical world to reveal which objects moved, in what order, and their trajectory, enabling users to observe previously hidden events. The visualizations are informed by a formative study comparing different styles on their effects on users' perception of temporal changes. Our evaluation shows that RealityReplay significantly enhances sensemaking of temporal motion events compared to memory-based recall. We demonstrate application scenarios in guidance, education, and observation, and discuss implications for extending human spatiotemporal capabilities through technological augmentation.
H. Cho, M. Komar, D. Lindlbauer. 2023.
RealityReplay: Detecting and Replaying Temporal Changes In Situ using Mixed Reality.
IMWUT '23, Cancun, Mexico.
Project page
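To make the tracking idea above more concrete, here is a minimal sketch, not RealityReplay's actual implementation, of how a per-frame segmentation mask and a saliency map could be combined to pick out salient objects and record their centroid trajectories for later replay. The function names, the background-ID convention, and the saliency threshold are illustrative assumptions.

```python
# Minimal sketch (not RealityReplay's implementation): score segmented objects
# by how much visual saliency they attract, and record centroid trajectories
# of the salient ones so their motion can be summarized later.
import numpy as np

def salient_objects(seg_mask, saliency, threshold=0.5):
    """Return {object_id: mean saliency} for objects above the threshold."""
    scores = {}
    for obj_id in np.unique(seg_mask):
        if obj_id == 0:                       # 0 = background, by assumption
            continue
        mean_sal = float(saliency[seg_mask == obj_id].mean())
        if mean_sal >= threshold:
            scores[int(obj_id)] = mean_sal
    return scores

def update_trajectories(trajectories, seg_mask, saliency):
    """Append the current centroid of each salient object to its trajectory."""
    for obj_id in salient_objects(seg_mask, saliency):
        ys, xs = np.nonzero(seg_mask == obj_id)
        trajectories.setdefault(obj_id, []).append((float(xs.mean()), float(ys.mean())))
    return trajectories

# Toy frame: one object (id 1) covering a salient region.
seg = np.zeros((4, 4), dtype=int); seg[1:3, 1:3] = 1
sal = np.zeros((4, 4));            sal[1:3, 1:3] = 0.8
print(update_trajectories({}, seg, sal))   # {1: [(1.5, 1.5)]}
```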
SemanticAdapt: Optimization-based Adaptation of Mixed Reality Layouts Leveraging Virtual-Physical Semantic Connections
We present an optimization-based approach that automatically adapts Mixed Reality (MR) interfaces to different physical environments. Current MR layouts, including the position and scale of virtual interface elements, need to be manually adapted by users whenever they move between environments, and whenever they switch tasks. This process is tedious and time-consuming, and arguably needs to be automated by MR systems for them to be beneficial for end users. We contribute an approach that formulates this challenge as a combinatorial optimization problem and automatically decides the placement of virtual interface elements in new environments. In contrast to prior work, we exploit the semantic association between the virtual interface elements and physical objects in an environment. Our optimization furthermore considers the utility of elements for users' current task, layout factors, and spatio-temporal consistency with previous environments. All those factors are combined in a single linear program, which is used to adapt the layout of MR interfaces in real time. We demonstrate a set of application scenarios, showcasing the versatility and applicability of our approach. Finally, we show that compared to a naive adaptive baseline approach that does not take semantic association into account, our approach decreased the number of manual interface adaptations by 37%.
Y. Cheng, Y. Yan, X. Yi, Y. Shi, D. Lindlbauer. 2021.
SemanticAdapt: Optimization-based Adaptation of Mixed Reality Layouts Leveraging Virtual-Physical Semantic Connections.
UIST '21, Virtual.
Project page
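To illustrate what a single linear program for layout adaptation can look like, here is a minimal sketch in the spirit of the formulation described above. It is not the paper's implementation: it assumes the open-source PuLP solver library, and the utility, semantic-association, and consistency scores as well as the weights are placeholder values.

```python
# Minimal sketch: assign virtual UI elements to candidate physical anchors
# with a 0/1 integer linear program, combining utility, semantic association,
# and consistency with the previous layout in a single objective.
import pulp

elements = ["music_player", "messages", "recipe"]
anchors = ["speaker", "desk", "fridge", "wall"]

# Hypothetical per-term scores in [0, 1]; unlisted pairs default to 0.
utility = {"music_player": 0.6, "messages": 0.4, "recipe": 0.9}
semantic = {("music_player", "speaker"): 1.0, ("recipe", "fridge"): 0.9,
            ("messages", "desk"): 0.5}
consistency = {("messages", "wall"): 0.7}        # matched the previous layout
w_u, w_s, w_c = 1.0, 1.0, 0.5                    # illustrative weights

prob = pulp.LpProblem("mr_layout", pulp.LpMaximize)
x = pulp.LpVariable.dicts("assign", [(e, a) for e in elements for a in anchors],
                          cat="Binary")

# Objective: weighted sum of utility, semantic association, and consistency.
prob += pulp.lpSum(
    x[(e, a)] * (w_u * utility[e]
                 + w_s * semantic.get((e, a), 0.0)
                 + w_c * consistency.get((e, a), 0.0))
    for e in elements for a in anchors)

# Each element is placed exactly once; each anchor holds at most one element.
for e in elements:
    prob += pulp.lpSum(x[(e, a)] for a in anchors) == 1
for a in anchors:
    prob += pulp.lpSum(x[(e, a)] for e in elements) <= 1

prob.solve()
layout = {e: a for e in elements for a in anchors if x[(e, a)].value() == 1}
print(layout)   # e.g. {'music_player': 'speaker', 'recipe': 'fridge', ...}
```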
Context-Aware Online Adaptation of Mixed Reality Interfaces
We present an optimization-based approach for Mixed Reality (MR) systems to automatically control when and where applications are shown, and how much information they display. Currently, content creators design applications, and users then manually adjust which applications are visible and how much information they show. This choice has to be adjusted every time users switch context, i.e., whenever they switch their task or environment. Since context switches happen many times a day, we believe that MR interfaces require automation to alleviate this problem. We propose a real-time approach to automate this process based on users' current cognitive load and knowledge about their task and environment. Our system adapts which applications are displayed, how much information they show, and where they are placed. We formulate this problem as a mix of rule-based decision making and combinatorial optimization, which can be solved efficiently in real time. We present a set of proof-of-concept applications showing that our approach is applicable in a wide range of scenarios. Finally, we show in a dual-task evaluation that our approach decreased secondary task interactions by 36%.
D. Lindlbauer, A. Feit, O. Hilliges. 2019.
Context-Aware Online Adaptation of Mixed Reality Interfaces.
UIST '19, New Orleans, LA, USA.
Project page / Full video (5 min) / talk recording from UIST '19
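The sketch below illustrates, in a deliberately simplified and assumed form rather than the paper's actual method, how a rule-based layer and a small combinatorial search can interact: an estimated cognitive load sets a display budget, and an exhaustive search over per-application levels of detail selects the most useful combination that fits within it. The application names, utilities, costs, and the budget rule are placeholders.

```python
# Minimal sketch: pick which apps to show, and at which level of detail (LOD),
# under a budget derived from the user's estimated cognitive load.
from itertools import product

# Each app offers LODs 0 (hidden), 1 (minimal), 2 (full), with a per-LOD
# utility for the current task and a cognitive cost of displaying it.
apps = {
    "navigation": {"utility": [0.0, 0.5, 0.9], "cost": [0.0, 0.2, 0.5]},
    "messages":   {"utility": [0.0, 0.3, 0.6], "cost": [0.0, 0.2, 0.4]},
    "music":      {"utility": [0.0, 0.2, 0.3], "cost": [0.0, 0.1, 0.3]},
}

def budget_from_load(cognitive_load):
    # Rule-based layer: higher load leaves less room for virtual content.
    return 1.0 - 0.7 * cognitive_load          # load assumed in [0, 1]

def adapt(cognitive_load):
    budget = budget_from_load(cognitive_load)
    best, best_utility = None, -1.0
    names = list(apps)
    for lods in product(range(3), repeat=len(names)):   # exhaustive for 3 apps
        cost = sum(apps[n]["cost"][l] for n, l in zip(names, lods))
        util = sum(apps[n]["utility"][l] for n, l in zip(names, lods))
        if cost <= budget and util > best_utility:
            best, best_utility = dict(zip(names, lods)), util
    return best

print(adapt(cognitive_load=0.2))   # relaxed: most apps shown in detail
print(adapt(cognitive_load=0.9))   # busy: most apps hidden or minimal
```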
Remixed Reality: Manipulating Space and Time in Augmented Reality
We present Remixed Reality, a novel form of mixed reality. In contrast to classical mixed reality approaches where users see a direct view or video feed of their environment, with Remixed Reality they see a live 3D reconstruction, gathered from multiple external depth cameras. This approach enables changing the environment as easily as geometry can be changed in virtual reality, while allowing users to view and interact with the actual physical world as they would in augmented reality. We characterize a taxonomy of manipulations that are possible with Remixed Reality: spatial changes such as erasing objects; appearance changes such as changing textures; temporal changes such as pausing time; and viewpoint changes that allow users to see the world from different points without changing their physical location. We contribute a method that uses an underlying voxel grid holding information like visibility and transformations, which is applied to live geometry in real time.
D. Lindlbauer, A. Wilson. 2018.
Remixed Reality: Manipulating Space and Time in Augmented Reality.
CHI '18, Montreal, Canada.
Microsoft Research Blog / Full video (5 min)
Featured on: Shiropen (Seamless), VR Room, MSPowerUser, It's about VR.
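As a rough illustration of the voxel-grid idea mentioned in the abstract, the following sketch stores a visibility flag and a rigid transform per voxel and applies them to live 3D points, so that regions of the reconstruction can be erased or moved. The grid dimensions, voxel size, and class interface are assumptions for illustration, not the system's implementation.

```python
# Minimal sketch: a coarse voxel grid with per-voxel visibility and transform,
# applied to live 3D points from a reconstruction.
import numpy as np

class VoxelGrid:
    def __init__(self, dims=(32, 32, 32), voxel_size=0.1):
        self.voxel_size = voxel_size
        self.visible = np.ones(dims, dtype=bool)             # per-voxel flag
        self.transform = np.tile(np.eye(4), (*dims, 1, 1))   # per-voxel 4x4

    def _index(self, points):
        # Map metric points (N, 3) to voxel indices, clamped to the grid.
        return np.clip((points / self.voxel_size).astype(int), 0,
                       np.array(self.visible.shape) - 1)

    def erase(self, lo, hi):
        """Mark a box of voxels (index ranges) as invisible."""
        self.visible[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = False

    def apply(self, points):
        """Filter and transform live points according to their voxel."""
        idx = self._index(points)
        keep = self.visible[idx[:, 0], idx[:, 1], idx[:, 2]]
        pts = points[keep]
        T = self.transform[idx[keep, 0], idx[keep, 1], idx[keep, 2]]
        homo = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)
        return np.einsum("nij,nj->ni", T, homo)[:, :3]

grid = VoxelGrid()
grid.erase((0, 0, 0), (8, 8, 8))                 # spatial change: erase a region
pts = np.array([[0.05, 0.05, 0.05], [2.0, 2.0, 2.0]])
print(grid.apply(pts))                           # first point removed
```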
Professional activity, awards & talks
Program committee and editorial boards
Subcommittee Chair for CHI 2025
Subcommittee Chair for CHI 2024
Program Committee member for CHI 2023
Program Committee member for UIST 2022
Program Committee member for CHI 2022
Program Committee member for UIST 2021
Program Committee member for CHI 2021
Associate Editor for ISS 2021 (ACM PACM HCI journal)
Guest Editor for Frontiers in VR - Training in XR
Program Committee member for UIST 2020
Associate Editor for ISS 2020 (ACM PACM HCI journal)
Program Committee member for CHI 2020
Program Committee member for CHI 2019
Program Committee member for UIST 2018
Program Committee member for ISS 2017
Organizing committee
UIST 2023 Workshops co-chair
ISS 2023 Doctoral Symposium chair
CHI 2023 Interactivity co-chair
CHI 2022 Interactivity co-chair
SIGCHI operations committee (2016 - 2021)
UIST 2020 Virtual Experience and Operations co-chair
CHI 2016 - 2020 Video capture chair
UIST 2019 Student Innovation Contest co-chair
UIST 2018 Student Innovation Contest co-chair
UIST 2018 Best Paper Committee member
UIST 2016 Student Volunteer co-chair
UIST 2015 Documentation chair
Pervasive Displays 2016 Poster chair
Reviewing & other activity
I routinely review for premier venues in HCI and graphics such as CHI, UIST, TOCHI, SIGGRAPH, Computer Graphics Forum, ISMAR, IEEE VR, Frontiers in VR, TEI, GI, ISS, SUI, ICMU, IMWUT, and others.
Poster committee for ISS 2016 & 2017, MUM 2016
Student volunteer for ITS 2014, UIST 2014, CHI 2015
Grants & fellowships
Sightful Inc. (2023)
Meta Reality Labs Research (2023, 2024)
Accenture (with Nikolas Martelaro, Alexandra Ion. 2022)
Honda Research (with Nikolas Martelaro. 2023)
CMU MFI (with Jean Oh, Ji Zhang. 2022)
CMU Center for Machine Learning and Health (with Jean Oh. 2021)
NSF Grant - Student Innovation Challenge at UIST ($15,900, Co-writer, 2019)
Increasing diversity & inclusiveness at UIST. The grant provides funding for 5 teams from underrepresented minorities to participate in the contest and attend the conference.
SIGCHI Grant - Student Innovation Challenge at UIST ($18,330, Co-writer, 2019)
Increasing diversity & inclusiveness at UIST. The grant provides funding for 2 non-US teams from underrepresented minorities to participate in the contest and attend the conference, and pays registration for 5 US-based teams.
ETH Zurich Postdoctoral Fellowships (CHF 229,600 / $229,068, Principal Investigator, 2018)
A Computational Framework for Increasing the Usability of Augmented Reality and Virtual Reality
Shapeways Educational Grant ($1,000, Contributor, 2015)
Exploring Visual Saliency of 3D Objects
Performance scholarship of FH Hagenberg (€750 / $850, Awardee, 2011)
One of twelve awardees of the FH Hagenberg merit scholarship (Leistungsstipendium).
Awards
CHI 2024 Best Paper Honorable Mention Award for
MARingBA: Music-Adaptive Ringtones for Blended Audio Notification Delivery.
ISS 2023 Best Paper Award for
BlendMR: A Computational Method To Create Ambient Mixed Reality Interfaces.
CHI 2016 Best Paper Honorable Mention Award for
Influence of Display Transparency on Background Awareness and Task Performance.
UIST 2015 Best Paper Honorable Mention Award for
GelTouch: Localized Tactile Feedback Through Thin, Programmable Gel.
Invited talks
2023/07/24 University of Konstanz.
2023/06/26 Columbia University.
2023/04/23 CHI 2023 Workshop on Computational Approaches for Adapting User Interfaces.
2022/10/24 PNC Innovation Speaker Series.
2022/10/03 Distinguished Lecture at McGill CIRMMT.
2022/06/21 Austrian Computer Science Days.
2022/06/16 Summer School on Computational Interaction, Saarbruecken.
2021/07/14 Global Innovation Exchange - 2021 Access Computing Summer School.
2020/03/25 Carnegie Mellon University.
2020/03/12 Aalto University.
2020/03/02 University of Chicago.
2020/02/27 University of Illinois at Chicago.
2020/02/24 Boston University.
2020/02/05 Facebook Reality Labs.
2019/12/17 Aalto University.
2019/10/28 University of Chicago.
2019/08/09 Google Interaction Lab.
2019/08/08 UC Berkeley.
2019/08/07 Stanford University.
2019/08/02 UCLA.
2019/07/10 MIT Media Lab - Tangible Media Group.
2019/07/10 MIT CSAIL.
2019/07/08 Columbia University.
2019/06/15 Swiss Society of Virtual and Augmented Reality, Meetup #HOMIXR.
2018/05/22 Interact Lab - University of Sussex.
2018/03/02 IST Austria.
2018/02/21 DGP – University of Toronto.
2017/12/15 ETH Zurich.
2017/12/14 Disney Research Zurich.
2017/12/12 INRIA Bordeaux.
2017/10/05 Aarhus University.