David Lindlbauer


I am an Assistant Professor at the Human-Computer Interaction Institute at Carnegie Mellon University, where I lead the Augmented Perception Lab and co-direct the CMU Extended Reality Technology Center.


My research focuses on understanding how humans perceive and interact with digital information, and on building technology that goes beyond the flat displays of PCs and smartphones to advance our capabilities when interacting with the digital world. To achieve this, I create and study enabling technologies and computational approaches that control when, where, and how virtual content is presented to increase the usability of Augmented Reality and Virtual Reality interfaces. I hold a PhD from TU Berlin and was a postdoctoral researcher at ETH Zurich before joining CMU. I have published more than 50 scientific papers at premier venues in Human-Computer Interaction such as ACM CHI and ACM UIST. My work has attracted media attention in outlets such as MIT Technology Review, Fast Company Design, and Shiropen Japan.


You can also find me on LinkedIn and Google Scholar, or contact me via davidlindlbauer[at]cmu.edu.


Download my CV here: cv_davidlindlbauer.pdf.


Check out the Augmented Perception Lab at CMU HCII.


Selected Publications
For a full list of publications, please visit the Augmented Perception Lab website or my Google Scholar profile.


Auptimize: Optimal Placement of Spatial Audio Cues for Extended Reality

Spatial audio in Extended Reality (XR) provides users with better awareness of where virtual elements are placed, and efficiently guides them to events such as notifications, system alerts from different windows, or approaching avatars. Humans, however, are inaccurate in localizing sound cues, especially with multiple sources due to limitations in human auditory perception such as angular discrimination error and front-back confusion. This decreases the efficiency of XR interfaces because users misidentify from which XR element a sound is coming. To address this, we propose Auptimize, a novel computational approach for placing XR sound sources, which mitigates such localization errors by utilizing the ventriloquist effect. Auptimize disentangles the sound source locations from the visual elements and relocates the sound sources to optimal positions for unambiguous identification of sound cues, avoiding errors due to inter-source proximity and front-back confusion. Our evaluation shows that Auptimize decreases spatial audio-based source identification errors compared to playing sound cues at the paired visual-sound locations. We demonstrate the applicability of Auptimize for diverse spatial audio-based interactive XR scenarios.

H. Cho, A. Wang, D. Kartik, E. Xie, Y. Yan, D. Lindlbauer. 2025. Auptimize: Optimal Placement of Spatial Audio Cues for Extended Reality. UIST '25. Pittsburgh, PA, USA.
Project page
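The core idea behind Auptimize, placing sound cues so they avoid both inter-source proximity and front-back confusion, can be illustrated with a toy greedy sketch. This is an illustration only, not the paper's actual optimization; the cost function, the 30° separation threshold, and the greedy strategy are all assumptions made for the example.

```python
def front_back_mirror(azimuth_deg):
    # Mirror across the interaural axis: a source at 30 degrees is
    # easily confused with one at 150 degrees (front-back confusion).
    return (180.0 - azimuth_deg) % 360.0

def placement_cost(candidate, chosen, min_separation=30.0):
    """Penalize a candidate azimuth (degrees) that lies too close to an
    already-chosen cue position or to its front-back mirror image."""
    cost = 0.0
    for c in chosen:
        for ref in (c, front_back_mirror(c)):
            # Smallest signed angular difference, folded into [0, 180].
            diff = abs((candidate - ref + 180.0) % 360.0 - 180.0)
            if diff < min_separation:
                cost += min_separation - diff
    return cost

def place_cues(n_cues, candidates):
    """Greedily pick n_cues azimuths that stay far apart and avoid
    each other's front-back confusion zones."""
    chosen = []
    for _ in range(n_cues):
        best = min(candidates, key=lambda a: placement_cost(a, chosen))
        chosen.append(best)
        candidates = [a for a in candidates if a != best]
    return chosen
```

For example, with candidates at 0°, 10°, 150°, 170°, and 90°, the sketch picks 0° first and then skips 170° (the near-mirror of 0° is 180°) and nearby 10° in favor of a well-separated position.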

MiniMates: Miniature Avatars for AR Remote Meetings within Limited Physical Spaces

Remote meetings using 3D avatars in Augmented Reality (AR) allow effective communication and enable users to retain awareness of their surroundings. However, positioning 3D avatars effectively and consistently for all users in AR is challenging since most spaces, such as offices or living rooms, are not large enough to accommodate multiple life-sized avatars without interference. To address this issue, we contribute MiniMates---a novel approach leveraging miniature avatars, which make it possible to place multiple remote users in a limited physical space. We see MiniMates as complementary to traditional 2D video conferencing and immersive telepresence. Our approach automatically adjusts the formation of avatars and redirects users' head and body orientation to facilitate communication. Results from our user study (n = 24) show that participants experience a higher sense of co-presence compared to video conferencing, and that MiniMates enabled them to communicate the direction of their interactions non-verbally as well as manage multiple simultaneous conversations.

A. Kiuchi, J. Wieland, T. Igarashi, D. Lindlbauer. 2025. MiniMates: Miniature Avatars for AR Remote Meetings within Limited Physical Spaces. CHI '25. Yokohama, Japan.
Project page
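The two mechanisms named in the abstract, automatic avatar formation and orientation redirection, can be sketched in a few lines. This is a hypothetical illustration, not the MiniMates implementation: the circular formation and the face-the-center redirection rule are assumptions chosen for simplicity.

```python
import math

def yaw_toward(avatar_xy, target_xy):
    """Yaw (degrees) that makes an avatar at avatar_xy face target_xy,
    measured counter-clockwise from the +x axis."""
    dx = target_xy[0] - avatar_xy[0]
    dy = target_xy[1] - avatar_xy[1]
    return math.degrees(math.atan2(dy, dx))

def circular_formation(center_xy, radius, n_avatars):
    """Place n miniature avatars evenly on a circle (so the whole group
    fits in a limited physical space, e.g. on a desk), then redirect
    each avatar's body to face the formation center."""
    avatars = []
    for i in range(n_avatars):
        angle = 2.0 * math.pi * i / n_avatars
        pos = (center_xy[0] + radius * math.cos(angle),
               center_xy[1] + radius * math.sin(angle))
        avatars.append({"pos": pos, "yaw": yaw_toward(pos, center_xy)})
    return avatars
```

In a real system the redirection target would be the remote user's current conversation partner rather than a fixed center point.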

MineXR: Mining Personalized Extended Reality Interfaces

Extended Reality (XR) interfaces offer engaging user experiences, but their effective design requires a nuanced understanding of user behavior and preferences. This knowledge is challenging to obtain without the widespread adoption of XR devices. We introduce MineXR, a design mining workflow and data analysis platform for collecting and analyzing personalized XR user interaction and experience data. MineXR enables elicitation of personalized interfaces directly from users: for any particular context, users create interface elements using application snapshots from their own smartphone, place them in the environment, and simultaneously preview the resulting XR layout on a headset. Using MineXR, we contribute a dataset of personalized XR interfaces collected from 31 participants, consisting of 695 XR widgets created from 178 unique applications. We provide insights for XR widget functionalities, categories, clusters, UI element types, and placement. Our open-source tools and data support researchers and designers in developing future XR interfaces.

H. Cho, Y. Yan, K. Todi, M. Parent, M. Smith, T. Jonker, H. Benko, D. Lindlbauer. 2024. MineXR: Mining Personalized Extended Reality Interfaces. CHI '24. Honolulu, HI, USA.
Project page

Context-Aware Online Adaptation of Mixed Reality Interfaces

We present an optimization-based approach for Mixed Reality (MR) systems to automatically control when and where applications are shown, and how much information they display. Currently, content creators design applications, and users then manually adjust which applications are visible and how much information they show. This choice has to be adjusted every time users switch context, i.e., whenever they switch their task or environment. Since context switches happen many times a day, we believe that MR interfaces require automation to alleviate this problem. We propose a real-time approach to automate this process based on users' current cognitive load and knowledge about their task and environment. Our system adapts which applications are displayed, how much information they show, and where they are placed. We formulate this problem as a mix of rule-based decision making and combinatorial optimization which can be solved efficiently in real-time. We present a set of proof-of-concept applications showing that our approach is applicable in a wide range of scenarios. Finally, we show in a dual-task evaluation that our approach decreased secondary tasks interactions by 36%.

D. Lindlbauer, A. Feit, O. Hilliges. 2019. Context-Aware Online Adaptation of Mixed Reality Interfaces. UIST '19. New Orleans, LA, USA.
Project page / Full video (5 min) / Talk recording from UIST '19
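The paper frames adaptation as combinatorial optimization over which applications to show and at what level of detail. As a rough illustration only (not the paper's formulation), the following brute-force sketch picks one level of detail per app to maximize utility under a cognitive-load budget; the app names, utility scores, and load costs are all made up for the example.

```python
from itertools import product

# Hypothetical apps, each with a utility score per level of detail
# (LOD 0 = hidden, higher LOD = more information shown) and a
# cognitive-load cost per LOD.
APPS = {
    "messages": {"utility": [0, 3, 5], "load": [0, 1, 3]},
    "music":    {"utility": [0, 2, 3], "load": [0, 1, 2]},
    "map":      {"utility": [0, 4, 6], "load": [0, 2, 4]},
}

def adapt_interface(apps, load_budget):
    """Exhaustively choose one LOD per app so that total cognitive load
    stays within budget and total utility is maximized. A real system
    would use an efficient combinatorial solver, not brute force."""
    names = list(apps)
    best_util, best_choice = -1, None
    for lods in product(*(range(len(apps[n]["utility"])) for n in names)):
        load = sum(apps[n]["load"][l] for n, l in zip(names, lods))
        util = sum(apps[n]["utility"][l] for n, l in zip(names, lods))
        if load <= load_budget and util > best_util:
            best_util, best_choice = util, dict(zip(names, lods))
    return best_choice, best_util
```

With a load budget of 4, the sketch shows all three apps at a reduced level of detail rather than one app fully expanded, which is the kind of trade-off the optimization makes as the user's context and cognitive load change.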

Professional activity, awards & talks


Program committee and editorial boards
Program Committee member for UIST 2026
Subcommittee Chair for CHI 2025
Subcommittee Chair for CHI 2024
Program Committee member for CHI 2023
Program Committee member for UIST 2022
Program Committee member for CHI 2022
Program Committee member for UIST 2021
Program Committee member for CHI 2021
Associate Editor for ISS 2021 (ACM PACM HCI journal)
Guest Editor for Frontiers in VR - Training in XR
Program Committee member for UIST 2020
Associate Editor for ISS 2020 (ACM PACM HCI journal)
Program Committee member for CHI 2020
Program Committee member for CHI 2019
Program Committee member for UIST 2018
Program Committee member for ISS 2017

Organizing committee
UIST 2026 Sponsorship co-chair
UIST 2023 Workshops co-chair
ISS 2023 Doctoral Symposium chair
CHI 2023 Interactivity co-chair
CHI 2022 Interactivity co-chair
SIGCHI operations committee (2016 - 2021)
UIST 2020 Virtual Experience and Operations co-chair
CHI 2016 - 2020 Video capture chair
UIST 2019 Student Innovation Contest co-chair
UIST 2018 Student Innovation Contest co-chair
UIST 2018 Best Paper Committee member
UIST 2016 Student Volunteer co-chair
UIST 2015 Documentation chair
Pervasive Displays 2016 Poster chair

Reviewing & other activity
I routinely review for premier venues in HCI and graphics such as CHI, UIST, TOCHI, SIGGRAPH, Computer Graphics Forum, ISMAR, IEEE VR, Frontiers in VR, TEI, GI, ISS, SUI, ICMU, IMWUT, and others.

Poster committee for ISS 2016 & 2017, MUM 2016
Student volunteer for ITS 2014, UIST 2014, CHI 2015

Grants & fellowships
Toyota Research Institute (with Kris Kitani, 2026)
BMW (with Alexandra Ion, Sossena Wood, 2026)
Honda Research (with Nikolas Martelaro, Kris Kitani, 2026)
Fujitsu Research (2025)
Sightful Inc. (2023)
Meta Reality Labs Research (2023, 2024)
Accenture (with Nikolas Martelaro, Alexandra Ion, 2022)
Honda Research (with Nikolas Martelaro, 2023)
CMU MFI (with Jean Oh, Ji Zhang, 2022)
CMU Center for Machine Learning and Health (with Jean Oh, 2021)
NSF Grant - Student Innovation Challenge at UIST ($15,900, Co-writer, 2019)
Increasing diversity & inclusiveness at UIST. Grant provides funding for 5 teams
from underrepresented minorities to participate in the contest and attend the conference.
SIGCHI Grant - Student Innovation Challenge at UIST ($18,330, Co-writer, 2019)
Increasing diversity & inclusiveness at UIST. Grant provides funding for 2 non-US teams
from underrepresented minorities to participate in the contest and attend the conference
and pay registration for 5 US-based teams.
ETH Zurich Postdoctoral Fellowships (CHF 229,600 / $229,068, Principal Investigator, 2018)
A Computational Framework for Increasing the Usability of Augmented Reality and Virtual Reality
Shapeways Educational Grant ($1,000, Contributor, 2015)
Exploring Visual Saliency of 3D Objects
Performance scholarship of FH Hagenberg (€750 / $850, Awardee, 2011)
One of twelve awardees for scholarship by FH Hagenberg (Leistungsstipendium)

Awards
CHI 2024 Best Paper Honorable Mention Award for
MARingBA: Music-Adaptive Ringtones for Blended Audio Notification Delivery.

ISS 2023 Best Paper Award for
BlendMR: A Computational Method To Create Ambient Mixed Reality Interfaces.

CHI 2016 Best Paper Honorable Mention Award for
Influence of Display Transparency on Background Awareness and Task Performance.

UIST 2015 Best Paper Honorable Mention Award for
GelTouch: Localized Tactile Feedback Through Thin, Programmable Gel.
