David Lindlbauer
I am an Assistant Professor at the Human-Computer Interaction Institute at Carnegie Mellon University, leading the Augmented Perception Lab.
I am hiring. I am looking for students to join my new lab at CMU. We will work at the intersection of perception, interaction, computation and Mixed Reality. Please reach out if you are interested.
My research focuses on understanding how humans perceive and interact with digital information, and on building technology that goes beyond the flat displays of PCs and smartphones to advance our capabilities when interacting with the digital world. To achieve this, I create and study enabling technologies and computational approaches that control when, where, and how virtual content is displayed to increase the usability of AR and VR interfaces.
Before CMU, I was a postdoc at ETH Zurich in the AIT Lab of Otmar Hilliges. I completed my PhD at TU Berlin in the Computer Graphics group, advised by Marc Alexa. I have worked with Jörg Müller at TU Berlin, the Media Interaction Lab in Hagenberg, Austria, and Stacey Scott and Mark Hancock at the University of Waterloo, and interned at Microsoft Research (Redmond) in the Perception & Interaction Group.
You can also find me on Twitter, Linkedin, and Google Scholar, or contact me via davidlindlbauer[at]cmu.edu.
Download my CV here: cv_davidlindlbauer.pdf.
Check out the Augmented Perception Lab at CMU HCII.
Selected Publications
For a full list of publications, please visit the Augmented Perception Lab website or my Google Scholar profile.
RealityReplay: Detecting and Replaying Temporal Changes In Situ using Mixed Reality
Humans easily miss events in their surroundings due to limited short-term memory and field of view. This happens, for example, while watching an instructor's machine repair demonstration or conversing during a sports game. We present RealityReplay, a novel Mixed Reality (MR) approach that tracks significant events in users' surroundings and visualizes them in situ using MR, without modifying the physical space. It requires only a head-mounted MR display and a 360-degree camera. We contribute a method for egocentric tracking of important motion events in users' surroundings, based on a combination of semantic segmentation and saliency prediction, and for generating in-situ MR visual summaries of temporal changes. These summary visualizations are overlaid onto the physical world to reveal which objects moved, in what order, and along what trajectory, enabling users to observe previously hidden events. The visualizations are informed by a formative study comparing the effects of different visualization styles on users' perception of temporal changes. Our evaluation shows that RealityReplay significantly enhances sensemaking of temporal motion events compared to memory-based recall. We demonstrate application scenarios in guidance, education, and observation, and discuss implications for extending human spatiotemporal capabilities through technological augmentation.
H. Cho, M. Komar, D. Lindlbauer, 2023.
RealityReplay: Detecting and Replaying Temporal Changes In Situ using Mixed Reality.
IMWUT '23, Cancun, Mexico.
Project page
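The sketch below illustrates the kind of motion summarization RealityReplay builds on: given per-frame object positions (which the paper obtains from semantic segmentation and saliency prediction), collect trajectories, keep objects that moved noticeably, and order them by when their motion ended. It is a minimal illustration, not the paper's implementation; the object IDs, positions, and threshold are hypothetical.

```python
# Minimal sketch (not the paper's code): summarize which tracked objects moved,
# in what order, and along what trajectory, given per-frame object centroids.
# Assumes detection/segmentation already ran; object IDs and positions are hypothetical.
from collections import defaultdict

def summarize_motion(frames, min_displacement=0.15):
    """frames: list of dicts {object_id: (x, y, z)} in temporal order."""
    trajectories = defaultdict(list)
    for t, detections in enumerate(frames):
        for obj_id, pos in detections.items():
            trajectories[obj_id].append((t, pos))

    events = []
    for obj_id, track in trajectories.items():
        (_, start), (t_end, end) = track[0], track[-1]
        displacement = sum((a - b) ** 2 for a, b in zip(start, end)) ** 0.5
        if displacement >= min_displacement:  # object moved noticeably
            events.append((t_end, obj_id, [p for _, p in track]))

    # Order events by when the motion finished, so a replay can show them in sequence.
    return sorted(events)

# Example: a cup moves while the user looks away; the book stays put.
frames = [{"cup": (0.0, 0.0, 0.0), "book": (1.0, 0.0, 0.0)},
          {"cup": (0.3, 0.0, 0.0), "book": (1.0, 0.0, 0.0)},
          {"cup": (0.6, 0.1, 0.0), "book": (1.0, 0.0, 0.0)}]
print(summarize_motion(frames))
```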
SemanticAdapt: Optimization-based Adaptation of Mixed Reality Layouts Leveraging Virtual-Physical Semantic Connections
We present an optimization-based approach that automatically adapts Mixed Reality (MR) interfaces to different physical environments. Current MR layouts, including the position and scale of virtual interface elements, need to be manually adapted by users whenever they move between environments and whenever they switch tasks. This process is tedious and time-consuming, and arguably needs to be automated by MR systems for them to be beneficial for end users. We contribute an approach that formulates this challenge as a combinatorial optimization problem and automatically decides the placement of virtual interface elements in new environments. In contrast to prior work, we exploit the semantic association between virtual interface elements and physical objects in an environment. Our optimization furthermore considers the utility of elements for users' current task, layout factors, and spatio-temporal consistency with previous environments. All these factors are combined in a single linear program, which is used to adapt the layout of MR interfaces in real time. We demonstrate a set of application scenarios, showcasing the versatility and applicability of our approach. Finally, we show that compared to a naive adaptive baseline that does not take semantic association into account, our approach decreased the number of manual interface adaptations by 37%.
Y. Cheng, Y. Yan, X. Yi, Y. Shi, D. Lindlbauer, 2021.
SemanticAdapt: Optimization-based Adaptation of Mixed Reality Layouts Leveraging Virtual-Physical Semantic Connections.
UIST '21, Virtual.
Project page
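As a rough illustration of the idea behind SemanticAdapt (not the paper's actual formulation, which is a richer linear program), the sketch below assigns virtual elements to physical anchors by maximizing a weighted sum of task utility and virtual-physical semantic association. The element names, anchors, scores, and weights are all hypothetical.

```python
# Minimal sketch (not the paper's formulation): assign virtual UI elements to physical
# anchors by maximizing a combined score of task utility and semantic association.
import numpy as np
from scipy.optimize import linear_sum_assignment

elements = ["recipe", "music player", "messages"]
anchors = ["kitchen counter", "speaker", "desk"]

utility = np.array([[0.9, 0.1, 0.3],    # utility[i][j]: value of element i at anchor j
                    [0.2, 0.8, 0.4],
                    [0.3, 0.2, 0.7]])
semantic = np.array([[0.95, 0.05, 0.2],  # semantic[i][j]: association of element i with anchor j
                     [0.10, 0.90, 0.3],
                     [0.20, 0.10, 0.6]])

score = 0.5 * utility + 0.5 * semantic      # weighted combination of both factors
rows, cols = linear_sum_assignment(-score)  # negate to maximize the total score
for i, j in zip(rows, cols):
    print(f"{elements[i]} -> {anchors[j]} (score {score[i, j]:.2f})")
```

The paper's optimizer additionally accounts for layout factors and spatio-temporal consistency across environments, which a plain assignment solver like this does not capture.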
Context-Aware Online Adaptation of Mixed Reality Interfaces
We present an optimization-based approach for Mixed Reality (MR) systems to automatically control when and where applications are shown, and how much information they display. Currently, content creators design applications, and users then manually adjust which applications are visible and how much information they show. This choice has to be adjusted every time users switch context, i.e., whenever they switch their task or environment. Since context switches happen many times a day, we believe that MR interfaces require automation to alleviate this problem. We propose a real-time approach to automate this process based on users' current cognitive load and knowledge about their task and environment. Our system adapts which applications are displayed, how much information they show, and where they are placed. We formulate this problem as a mix of rule-based decision making and combinatorial optimization, which can be solved efficiently in real time. We present a set of proof-of-concept applications showing that our approach is applicable in a wide range of scenarios. Finally, we show in a dual-task evaluation that our approach decreased secondary-task interactions by 36%.
D. Lindlbauer, A. Feit, O. Hilliges, 2019.
Context-Aware Online Adaptation of Mixed Reality Interfaces.
UIST '19, New Orleans, LA, USA.
Project page / Full video (5 min) / talk recording from UIST '19
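A minimal sketch of the selection step such a system performs (not the paper's implementation): pick a level of detail for each application, including hiding it, to maximize utility under a budget on total cognitive cost. The applications, utilities, costs, and budget below are hypothetical, and the exhaustive search stands in for the paper's more efficient formulation.

```python
# Minimal sketch (not the system's implementation): choose a level of detail (LOD) for
# each application -- LOD 0 hides it -- to maximize utility for the current task while
# keeping the summed "cognitive cost" under a budget derived from the user's current load.
from itertools import product

apps = {
    # app: [(lod, utility, cost), ...], where LOD 0 means hidden
    "navigation": [(0, 0.0, 0.0), (1, 0.5, 0.2), (2, 0.9, 0.5)],
    "messages":   [(0, 0.0, 0.0), (1, 0.3, 0.2), (2, 0.6, 0.6)],
    "music":      [(0, 0.0, 0.0), (1, 0.2, 0.1)],
}

def adapt(apps, budget):
    best, best_utility = None, -1.0
    for choice in product(*apps.values()):  # enumerate all LOD combinations
        utility = sum(u for _, u, _ in choice)
        cost = sum(c for _, _, c in choice)
        if cost <= budget and utility > best_utility:
            best = dict(zip(apps, (lod for lod, _, _ in choice)))
            best_utility = utility
    return best

print(adapt(apps, budget=0.7))  # a tighter budget models higher cognitive load
```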
Remixed Reality: Manipulating Space and Time in Augmented Reality
We present Remixed Reality, a novel form of mixed reality. In contrast to classical mixed reality approaches where users see a direct view or video feed of their environment, with Remixed Reality they see a live 3D reconstruction, gathered from multiple external depth cameras. This approach enables changing the environment as easily as geometry can be changed in virtual reality, while allowing users to view and interact with the actual physical world as they would in augmented reality. We characterize a taxonomy of manipulations that are possible with Remixed Reality: spatial changes such as erasing objects; appearance changes such as changing textures; temporal changes such as pausing time; and viewpoint changes that allow users to see the world from different points without changing their physical location. We contribute a method that uses an underlying voxel grid holding information like visibility and transformations, which is applied to live geometry in real time.
D. Lindlbauer, A. Wilson, 2018.
Remixed Reality: Manipulating Space and Time in Augmented Reality.
CHI '18, Montreal, Canada.
Microsoft Research Blog / Full video (5 min)
Featured on: Shiropen (Seamless), VR Room, MSPowerUser, It's about VR.
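The sketch below illustrates the voxel-grid idea described in the abstract: a sparse grid stores a visibility flag and a transform per voxel, and live reconstructed geometry is filtered and transformed through it before rendering. This is a minimal illustration under assumed data structures, not the paper's real-time pipeline; the voxel size and edits are hypothetical.

```python
# Minimal sketch (not the paper's pipeline): a sparse voxel grid stores a visibility flag
# and a rigid transform per voxel; live reconstructed points are looked up in the grid and
# either hidden (erased) or moved before rendering.
import numpy as np

VOXEL_SIZE = 0.05  # meters (hypothetical)

class VoxelEdits:
    def __init__(self):
        self.edits = {}  # voxel index (ix, iy, iz) -> (visible, 4x4 transform)

    def _index(self, p):
        return tuple(np.floor(p / VOXEL_SIZE).astype(int))

    def erase(self, p):
        self.edits[self._index(p)] = (False, np.eye(4))

    def move(self, p, transform):
        self.edits[self._index(p)] = (True, transform)

    def apply(self, points):
        """points: (N, 3) live geometry; returns the edited point set."""
        out = []
        for p in points:
            visible, T = self.edits.get(self._index(p), (True, np.eye(4)))
            if visible:
                out.append((T @ np.append(p, 1.0))[:3])
        return np.array(out)

edits = VoxelEdits()
edits.erase(np.array([0.0, 0.0, 0.0]))  # erase the object at the origin
points = np.array([[0.0, 0.0, 0.0], [1.0, 0.2, 0.3]])
print(edits.apply(points))  # only the second point survives
```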
Changing the Appearance of Real-World Objects by Modifying Their Surroundings
We present an approach to alter the perceived appearance of physical objects by controlling their surrounding space. Many real-world objects cannot easily be equipped with displays or actuators in order to change their shape. While common approaches such as projection mapping enable changing the appearance of objects without modifying them, certain surface properties (e.g., highly reflective or transparent surfaces) can make employing these techniques difficult. In this work, we present a conceptual design exploration of how the appearance of an object can be changed by solely altering the space around it, rather than the object itself. In a proof-of-concept implementation, we place objects onto a tabletop display and track them together with users to display perspective-corrected 3D graphics for augmentation. This enables controlling properties such as the perceived size, color, or shape of objects. We characterize the design space of our approach and demonstrate potential applications. For example, we change the contour of a wallet to notify users when their bank account is debited. We envision our approach gaining importance as display surfaces become increasingly ubiquitous.
D. Lindlbauer, J. Müller, M. Alexa, 2017.
Changing the Appearance of Real-World Objects by Modifying Their Surroundings.
CHI '17, Denver, CO, USA.
Full video (5 min)
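A minimal sketch of the perspective correction underlying this approach (not the paper's renderer): to make a virtual point appear at a desired 3D position from the user's tracked viewpoint, intersect the eye-to-point ray with the tabletop plane and draw there. The eye and target positions below are hypothetical, in meters.

```python
# Minimal sketch (not the paper's renderer): project a desired 3D point onto the
# tabletop plane (z = 0) as seen from the user's tracked eye position, i.e.,
# intersect the eye-to-point ray with the display surface.
import numpy as np

def project_to_table(eye, target):
    """Return the (x, y) location on the z=0 tabletop where `target` should be drawn
    so that, seen from `eye`, it appears at the desired 3D position."""
    direction = target - eye
    if abs(direction[2]) < 1e-9:
        raise ValueError("Ray is parallel to the tabletop")
    t = -eye[2] / direction[2]  # ray parameter where the ray hits z = 0
    hit = eye + t * direction
    return hit[:2]

eye = np.array([0.0, -0.4, 0.5])     # user's eye above and in front of the table
target = np.array([0.1, 0.1, 0.08])  # virtual point floating 8 cm above the table
print(project_to_table(eye, target))
```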
Influence of Display Transparency on Background Awareness and Task Performance
It has been argued that transparent displays are beneficial for certain tasks by allowing users to simultaneously see on-screen content as well as the environment behind the display. However, it is yet unclear how much background awareness users gain and whether performance suffers for tasks performed on the transparent display, since users are no longer shielded from distractions. Therefore, we investigate the influence of display transparency on task performance and background awareness in a dual-task scenario. We conducted an experiment comparing transparent displays with conventional displays in different horizontal and vertical configurations. Participants performed an attention-demanding primary task on the display while simultaneously observing the background for target stimuli. Our results show that transparent and horizontal displays increase the ability of participants to observe the background while keeping primary task performance constant.
D. Lindlbauer, K. Lilija, R. Walter, J. Müller, 2016.
Influence of Display Transparency on Background Awareness and Task Performance.
CHI '16, San Jose, CA, USA.
Full video (3 min)
ACM CHI 2016 Best Paper Honorable Mention Award
Tracs: Transparency Control for See-through Displays
Tracs is a dual-sided see-through display system with controllable transparency. Traditional displays are a constant visual and communication barrier, hindering fast and efficient collaboration of spatially close or facing co-workers. Transparent displays could potentially remove these barriers, but introduce new issues of personal privacy, screen content privacy, and visual interference. We therefore propose a solution with controllable transparency to overcome these problems. Tracs consists of two see-through displays, with a transparency-control layer, a backlight layer, and a polarization adjustment layer in between. The transparency-control layer is built as a grid of individually addressable transparency-controlled patches, allowing users to control the transparency overall or just locally. Additionally, the locally switchable backlight layer improves the contrast of LCD screen content. Tracs allows users to switch between personal and collaborative work quickly and easily, and gives them full control of transparent regions on their display.
D. Lindlbauer, T. Aoki, R. Walter, Y. Uema, A. Höchtl, M. Haller, M. Inami, J. Müller, 2014.
Tracs: Transparency Control for See-through Displays.
UIST '14, Honolulu, Hawaii, USA.
Full video (3 min)
also presented as demo at UIST'14
D. Lindlbauer, T. Aoki, Y. Uema, A. Höchtl, M. Haller, M. Inami, J. Müller, 2014.
A Collaborative See-through Display Supporting On-demand Privacy.
SIGGRAPH Emerging Technologies '14, Vancouver, Canada. video
Featured on: Gizmodo
Professional activity, awards & talks
Program committee and editorial boards
Subcommittee Chair for CHI 2025
Subcommittee Chair for CHI 2024
Program Committee member for CHI 2023
Program Committee member for UIST 2022
Program Committee member for CHI 2022
Program Committee member for UIST 2021
Program Committee member for CHI 2021
Associate Editor for ISS 2021 (ACM PACM HCI journal)
Guest Editor for Frontiers in VR - Training in XR
Program Committee member for UIST 2020
Associate Editor for ISS 2020 (ACM PACM HCI journal)
Program Committee member for CHI 2020
Program Committee member for CHI 2019
Program Committee member for UIST 2018
Program Committee member for ISS 2017
Organizing committee
UIST 2023 Workshops co-chair
ISS 2023 Doctoral Symposium chair
CHI 2023 Interactivity co-chair
CHI 2022 Interactivity co-chair
SIGCHI operations committee (2016 - 2021)
UIST 2020 Virtual Experience and Operations co-chair
CHI 2016 - 2020 Video capture chair
UIST 2019 Student Innovation Contest co-chair
UIST 2018 Student Innovation Contest co-chair
UIST 2018 Best Paper Committee member
UIST 2016 Student Volunteer co-chair
UIST 2015 Documentation chair
Pervasive Displays 2016 Poster chair
Reviewing & other activity
I routinely review for premier venues in HCI and graphics such as CHI, UIST, TOCHI, SIGGRAPH, Computer Graphics Forum, ISMAR, IEEE VR, Frontiers in VR, TEI, GI, ISS, SUI, ICMU, IMWUT, and others.
Poster committee for ISS 2016 & 2017, MUM 2016
Student volunteer for ITS 2014, UIST 2014, CHI 2015
Grants & fellowships
Sightful Inc. (2023)
Meta Reality Labs Research (2023)
Accenture (with Nikolas Martelaro, Alexandra Ion. 2022)
Honda Research (with Nikolas Martelaro. 2023)
CMU MFI (with Jean Oh, Ji Zhang. 2022)
CMU Center for Machine Learning and Health (with Jean Oh. 2021)
NSF Grant - Student Innovation Challenge at UIST ($15,900, Co-writer, 2019)
Increasing diversity & inclusiveness at UIST. Grant provides funding for 5 teams
from underrepresented minorities to participate in the contest and attend the conference.
SIGCHI Grant - Student Innovation Challenge at UIST ($18,330, Co-writer, 2019)
Increasing diversity & inclusiveness at UIST. Grant provides funding for 2 non-US teams
from underrepresented minorities to participate in the contest and attend the conference,
and covers registration for 5 US-based teams.
ETH Zurich Postdoctoral Fellowships (CHF 229,600 / $229,068, Principal Investigator, 2018)
A Computational Framework for Increasing the Usability of Augmented Reality and Virtual Reality
Shapeways Educational Grant ($1,000, Contributor, 2015)
Exploring Visual Saliency of 3D Objects
Performance scholarship of FH Hagenberg (€750 / $850, Awardee, 2011)
One of twelve recipients of the FH Hagenberg merit scholarship (Leistungsstipendium).
Awards
CHI 2016 Best Paper Honorable Mention Award for
Influence of Display Transparency on Background Awareness and Task Performance.
UIST 2015 Best Paper Honorable Mention Award for
GelTouch: Localized Tactile Feedback Through Thin, Programmable Gel
Invited talks
2023/07/24 University of Konstanz.
2023/06/26 Columbia University.
2023/04/23 CHI 2023 Workshop on Computational Approaches for Adapting User Interfaces.
2022/10/24 PNC Innovation Speaker Series.
2022/10/03 Distinguished Lecture at McGill CIRMMT.
2022/06/21 Austrian Computer Science Days.
2022/06/16 Summer School on Computational Interaction, Saarbruecken.
2021/07/14 Global Innovation Exchange - 2021 Access Computing Summer School.
2020/03/25 Carnegie Mellon University.
2020/03/12 Aalto University.
2020/03/02 University of Chicago.
2020/02/27 University of Illinois at Chicago.
2020/02/24 Boston University.
2020/02/05 Facebook Reality Labs.
2019/12/17 Aalto University.
2019/10/28 University of Chicago.
2019/08/09 Google Interaction Lab.
2019/08/08 UC Berkeley.
2019/08/07 Stanford University.
2019/08/02 UCLA.
2019/07/10 MIT Media Lab - Tangible Media Group.
2019/07/10 MIT CSAIL.
2019/07/08 Columbia University.
2019/06/15 Swiss Society of Virtual and Augmented Reality, Meetup #HOMIXR.
2018/05/22 Interact Lab - University of Sussex.
2018/03/02 IST Austria.
2018/02/21 DGP – University of Toronto.
2017/12/15 ETH Zurich.
2017/12/14 Disney Research Zurich.
2017/12/12 INRIA Bordeaux.
2017/10/05 Aarhus University.
You can download my CV here: cv_davidlindlbauer.pdf.