David Lindlbauer

I am a postdoctoral researcher in the field of Human–Computer Interaction, working at ETH Zurich in the Advanced Interaction Technologies Lab, led by Prof. Otmar Hilliges.

I completed my PhD at TU Berlin in the Computer Graphics group, advised by Prof. Marc Alexa. Before that, I worked with Prof. Jörg Müller at TU Berlin and at the Media Interaction Lab in Hagenberg, Austria, where I also completed my bachelor's and master's degrees. During my master's, I spent an exchange semester at the University of Waterloo, Canada, working with Prof. Stacey Scott and Prof. Mark Hancock. I have also worked as a software developer at Interactive Pioneers and interned at Microsoft Research (Redmond) in the Perception & Interaction Group.

My research focuses on exploring how modifying the optical appearance of interactive devices changes the way we use them (e.g., making displays and objects transparent). This includes mixed reality (VR & AR), new input technologies, and gaze tracking.

You can also find me on Twitter, LinkedIn, and Google Scholar, or contact me via info[at]davidlindlbauer.com.

Download my cv here: cv_davidlindlbauer.pdf.

Publications :: Dynamic Appearance
In the virtual world, changing properties of objects such as their color, size, or shape is one of the main means of communication. I am interested in how these features can be brought into the real world by modifying the optical properties of objects and devices, and how this dynamic appearance influences interaction and behavior. The interplay of creating functional prototypes of interactive artifacts and devices and studying them in controlled experiments forms the basis of my research.

Remixed Reality: Manipulating Space and Time in Augmented Reality

We present Remixed Reality, a novel form of mixed reality. In contrast to classical mixed reality approaches where users see a direct view or video feed of their environment, with Remixed Reality they see a live 3D reconstruction, gathered from multiple external depth cameras. This approach enables changing the environment as easily as geometry can be changed in virtual reality, while allowing users to view and interact with the actual physical world as they would in augmented reality. We characterize a taxonomy of manipulations that are possible with Remixed Reality: spatial changes such as erasing objects; appearance changes such as changing textures; temporal changes such as pausing time; and viewpoint changes that allow users to see the world from different points without changing their physical location. We contribute a method that uses an underlying voxel grid holding information like visibility and transformations, which is applied to live geometry in real time.

D. Lindlbauer, A. Wilson, 2018. Remixed Reality: Manipulating Space and Time in Augmented Reality. CHI '18, Montreal, Canada.
Microsoft Research Blog / Full video (5 min)
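As a rough illustration of the underlying idea, the voxel grid can be thought of as storing per-voxel edits (visibility, a transform) that are applied to the live reconstruction each frame. The sketch below is hypothetical and greatly simplified; the function names, the use of plain translations instead of full transforms, and the point-cloud representation are my own illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def remix(points, voxel_size, visible, offsets):
    """Apply per-voxel edits to a live point cloud (illustrative sketch).

    points:  (N, 3) array of reconstructed 3D points
    visible: dict mapping voxel index -> bool (erased voxels are False)
    offsets: dict mapping voxel index -> (3,) translation applied to
             points in that voxel (a simple stand-in for a transform)
    """
    out = []
    for p in points:
        # find the voxel this point falls into
        idx = tuple(np.floor(p / voxel_size).astype(int))
        if not visible.get(idx, True):
            continue  # voxel marked erased: drop its geometry
        out.append(p + offsets.get(idx, np.zeros(3)))
    return np.array(out)
```

Erasing an object then amounts to marking its voxels invisible, while moving it corresponds to assigning those voxels a transform.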

Changing the Appearance of Real-World Objects by Modifying Their Surroundings

We present an approach to alter the perceived appearance of physical objects by controlling their surrounding space. Many real-world objects cannot easily be equipped with displays or actuators in order to change their shape. While common approaches such as projection mapping enable changing the appearance of objects without modifying them, certain surface properties (e.g. highly reflective or transparent surfaces) can make employing these techniques difficult. In this work, we present a conceptual design exploration on how the appearance of an object can be changed by solely altering the space around it, rather than the object itself. In a proof-of-concept implementation, we place objects onto a tabletop display and track them together with users to display perspective-corrected 3D graphics for augmentation. This enables controlling properties such as the perceived size, color, or shape of objects. We characterize the design space of our approach and demonstrate potential applications. For example, we change the contour of a wallet to notify users when their bank account is debited. We envision our approach to gain in importance with increasing ubiquity of display surfaces.

D. Lindlbauer, J. Müller, M. Alexa, 2017. Changing the Appearance of Real-World Objects by Modifying Their Surroundings. CHI '17, Denver, CO, USA.
Full video (5 min)

Changing the Appearance of Physical Interfaces Through Controlled Transparency

We present physical interfaces that change their appearance through controlled transparency. These transparency-controlled physical interfaces are well suited for applications where communication through optical appearance is sufficient, such as ambient display scenarios. They transition between perceived shapes within milliseconds, require no mechanically moving parts and consume little energy. We build 3D physical interfaces with individually controllable parts by laser cutting and folding a single sheet of transparency-controlled material. Electrical connections are engraved in the surface, eliminating the need for wiring individual parts. We consider our work as complementary to current shape-changing interfaces. While our proposed interfaces do not exhibit dynamic tangible qualities, they have unique benefits such as the ability to create apparent holes or nesting of objects. We explore the benefits of transparency-controlled physical interfaces by characterizing their design space and showcase four physical prototypes: two activity indicators, a playful avatar, and a lamp shade with dynamic appearance.

D. Lindlbauer, J. Müller, M. Alexa, 2016. Changing the Appearance of Physical Interfaces Through Controlled Transparency. UIST '16, Tokyo, Japan. Project website / long video (5 min) / talk at UIST'16

Featured on: Fast Company Co.Design, Vice Motherboard, Futurism, prosthetic knowledge.

Combining Shape-Changing Interfaces and Spatial Augmented Reality Enables Extended Object Appearance

We propose combining shape-changing interfaces and spatial augmented reality for extending the space of appearances and interactions of actuated interfaces. While shape-changing interfaces can dynamically alter the physical appearance of objects, the integration of spatial augmented reality additionally allows for dynamically changing objects' optical appearance with high detail. This way, devices can render currently challenging features such as high frequency texture or fast motion. We frame this combination in the context of computer graphics with analogies to established techniques for increasing the realism of 3D objects such as bump mapping. This extensible framework helps us identify challenges of the two techniques and benefits of their combination. We utilize our prototype shape-changing device enriched with spatial augmented reality through projection mapping to demonstrate the concept. We present a novel mechanical distance-fields algorithm for real-time fitting of mechanically constrained shape-changing devices to arbitrary 3D graphics. Furthermore, we present a technique for increasing effective screen real estate for spatial augmented reality through view-dependent shape change.

D. Lindlbauer, J.E. Grønbæk, M. Birk, K. Halskov, M. Alexa, J. Müller, 2016. Combining Shape-Changing Interfaces and Spatial Augmented Reality Enables Extended Object Appearance. CHI '16, San Jose, CA, USA.
Project website / long video (5 min) / talk at CHI '16

Influence of Display Transparency on Background Awareness and Task Performance

It has been argued that transparent displays are beneficial for certain tasks by allowing users to simultaneously see on-screen content as well as the environment behind the display. However, it remains unclear how much background awareness users gain, and whether performance suffers for tasks performed on the transparent display, since users are no longer shielded from distractions. We therefore investigate the influence of display transparency on task performance and background awareness in a dual-task scenario. We conducted an experiment comparing transparent displays with conventional displays in different horizontal and vertical configurations. Participants performed an attention-demanding primary task on the display while simultaneously observing the background for target stimuli. Our results show that transparent and horizontal displays increase participants' ability to observe the background while keeping primary task performance constant.

D. Lindlbauer, K. Lilija, R. Walter, J. Müller, 2016. Influence of Display Transparency on Background Awareness and Task Performance. CHI '16, San Jose, CA, USA.
Full video (3 min)
ACM CHI 2016 Best Paper Honorable Mention Award

Tracs: Transparency Control for See-through Displays

Tracs is a dual-sided see-through display system with controllable transparency. Traditional displays are a constant visual and communication barrier, hindering fast and efficient collaboration of spatially close or facing co-workers. Transparent displays could potentially remove these barriers, but introduce new issues of personal privacy, screen content privacy, and visual interference. We therefore propose a solution with controllable transparency to overcome these problems. Tracs consists of two see-through displays, with a transparency-control layer, a backlight layer, and a polarization adjustment layer in between. The transparency-control layer is built as a grid of individually addressable transparency-controlled patches, allowing users to control the transparency overall or just locally. Additionally, the locally switchable backlight layer improves the contrast of LCD screen content. Tracs allows users to switch between personal and collaborative work quickly and easily and gives them full control over the transparent regions of their display.

D. Lindlbauer, T. Aoki, R. Walter, Y. Uema, A. Höchtl, M. Haller, M. Inami, J. Müller, 2014. Tracs: Transparency Control for See-through Displays.
UIST '14, Honolulu, Hawaii, USA. long video (3 min)
also presented as demo at UIST'14

D. Lindlbauer, T. Aoki, Y. Uema, A. Höchtl, M. Haller, M. Inami, J. Müller, 2014. A Collaborative See-through Display Supporting On-demand Privacy,
SIGGRAPH Emerging Technologies '14, Vancouver, Canada. video

Featured on: Gizmodo

Publications :: Devices, Interactions & Perception
I also work on a wide range of other topics, mostly in collaboration with colleagues and students, including projects on tactile feedback, human perception, and novel interaction techniques.

HeatSpace: Automatic Placement of Displays by Empirical Analysis of User Behavior

We present HeatSpace, a system that records and empirically analyzes user behavior in a space and automatically suggests positions and sizes for new displays. The system uses depth cameras to capture 3D geometry and users’ perspectives over time. To derive possible display placements, it calculates volumetric heatmaps describing geometric persistence and planarity of structures inside the space. It evaluates visibility of display poses by calculating a volumetric heatmap describing occlusions, position within users’ field of view, and viewing angle. Optimal display size is calculated through a heatmap of average viewing distance. Based on the heatmaps and user constraints we sample the space of valid display placements and jointly optimize their positions. This can be useful when installing displays in multi-display environments such as meeting rooms, offices, and train stations.

A. Fender, D. Lindlbauer, P. Herholz, M. Alexa, J. Müller, 2017. HeatSpace: Automatic Placement of Displays by Empirical Analysis of User Behavior. UIST '17, Quebec City, Canada.
Full video (5 min)
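Conceptually, the placement search combines the per-candidate heatmap values into a single score and picks the best placement. The following sketch is a simplified, hypothetical rendering of that idea; the function, its inputs, and the distance weighting are illustrative assumptions, not HeatSpace's actual optimization, which jointly optimizes the positions of multiple displays.

```python
import numpy as np

def score_placements(persistence, planarity, visibility, distance,
                     target_distance=2.0):
    """Pick the best display placement from heatmap samples (sketch).

    persistence, planarity, visibility: arrays in [0, 1], one value
    per candidate placement, sampled from the volumetric heatmaps.
    distance: average viewing distance per candidate, in meters.
    Returns the index of the best-scoring candidate.
    """
    # penalize deviation from a comfortable viewing distance
    dist_score = np.exp(-np.abs(distance - target_distance))
    score = persistence * planarity * visibility * dist_score
    return int(np.argmax(score))
```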

GelTouch: Localized Tactile Feedback Through Thin, Programmable Gel

GelTouch is a gel-based layer that can selectively transition between soft and stiff to provide tactile multi-touch feedback. It is flexible, transparent when not activated, and contains no mechanical, electromagnetic, or hydraulic components, resulting in a compact form factor (a 2mm thin touchscreen layer for our prototype). The activated areas can be morphed freely and continuously, without being limited to fixed, predefined shapes. GelTouch consists of a poly(N-isopropylacrylamide) gel layer which alters its viscoelasticity when activated by applying heat (>32°C). We present three different activation techniques: 1) Indium Tin Oxide (ITO) as a heating element that enables tactile feedback through individually addressable taxels; 2) predefined tactile areas of engraved ITO that can be layered and combined; 3) complex arrangements of resistance wire that create thin tactile edges. We present a tablet with 6x4 tactile areas, enabling a tactile numpad, slider, and thumbstick. We show that the gel is up to 25 times stiffer when activated and that users detect tactile features reliably (94.8%).

V. Miruchna, R. Walter, D. Lindlbauer, M. Lehmann, R. von Klitzing, J. Müller, 2015. GelTouch: Localized Tactile Feedback Through Thin, Programmable Gel. UIST '15, Charlotte, NC, USA.
ACM UIST 2015 Best Paper Award Honorable Mention
Watch Viktor's talk on Youtube

Featured on: MIT Technology Review, Engadget, Wired DE, El País.

Measuring Visual Salience of 3D Printed Objects

We investigate human viewing behavior when participants are presented with physical realizations of 3D objects by gathering fixations on the surface of the presented stimuli. This data is used to validate assumptions regarding visual saliency so far only experimentally analyzed using flat stimuli. We provide a way to compare fixation sequences from different subjects as well as a model for generating test sequences of fixations unrelated to the stimuli. This way we can show that human observers agree in their fixations for the same object under similar viewing conditions, as expected based on similar results for flat stimuli. We also develop a simple procedure to validate computational models of visual saliency for 3D objects and use it to show that popular models of mesh saliency based on center-surround patterns fail to predict fixations.

X. Wang, D. Lindlbauer, C. Lessig, M. Maertens, M. Alexa, 2016. Measuring Visual Salience of 3D Printed Objects. IEEE Computer Graphics and Applications, Special Issue on Quality Assessment and Perception. Vol. 36 / 4, 2016.
Project website, IEEE Xplore

Accuracy of Monocular Gaze Tracking on 3D Geometry

Many applications in visualization benefit from accurate knowledge of where a person is looking. We present a system for accurately tracking gaze positions on a three-dimensional object using a monocular head-mounted eye tracker. We accomplish this by 1) using digital manufacturing to create stimuli with accurately known geometry, 2) embedding fiducial markers directly into the manufactured objects to reliably estimate the rigid transformation of the object, and 3) using a perspective model to relate pupil positions to 3D locations. This combination enables the efficient and accurate computation of gaze position on an object from measured pupil positions. We validate the accuracy of our system experimentally, achieving an angular resolution of 0.8° and a 1.5% depth error using a simple calibration procedure with 11 points.

X. Wang, D. Lindlbauer, C. Lessig, M. Alexa, 2015.
Accuracy of Monocular Gaze Tracking on 3D Geometry.
ETVIS Workshop '15 (in conj. with IEEE VIS '15), Chicago, IL, USA.

X. Wang, D. Lindlbauer, C. Lessig, M. Alexa, 2015. Accuracy of Monocular Gaze Tracking on 3D Geometry. In Book: Eye Tracking and Visualization. Foundations, Techniques, and Applications. ETVIS 2015. Springer Int. Pub. 2017. M. Burch, L. Chuang, B. Fisher, A. Schmidt and D. Weiskopf (Eds.).
Project website
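The perspective model in step 3 can be pictured as mapping a 2D pupil position to a gaze ray that is then intersected with the known object geometry. The sketch below uses a plane as a stand-in for the printed stimulus; the matrix K and all names are illustrative assumptions, not the calibrated model from the paper.

```python
import numpy as np

def gaze_on_plane(pupil_uv, K, eye_pos, plane_point, plane_normal):
    """Map a measured pupil position to a 3D gaze point (sketch).

    pupil_uv:     2D pupil position from the eye camera
    K:            3x3 matrix of a (pre-calibrated) perspective model
                  mapping pupil coordinates to a gaze direction
    The gaze ray from eye_pos is intersected with a plane standing in
    for the known geometry of the manufactured stimulus.
    """
    # homogeneous pupil coordinate -> gaze direction
    d = K @ np.array([pupil_uv[0], pupil_uv[1], 1.0])
    d = d / np.linalg.norm(d)
    # ray-plane intersection: eye_pos + t * d lies on the plane
    t = np.dot(plane_point - eye_pos, plane_normal) / np.dot(d, plane_normal)
    return eye_pos + t * d
```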

Analyzing Visual Attention During Whole Body Interaction with Public Displays

While whole body interaction can enrich user experience on public displays, it remains unclear how common visualizations of user representations impact users’ ability to perceive content on the display. In this work we use a head-mounted eye tracker to record visual behavior of 25 users interacting with a public display game that uses a silhouette user representation, mirroring the users’ movements. Results from visual attention analysis as well as post-hoc recall and recognition tasks on display contents reveal that visual attention is mostly on users’ silhouette while peripheral screen elements remain largely unattended. In our experiment, content attached to the user representation attracted significantly more attention than other screen contents, while content placed at the top and bottom of the screen attracted significantly less. Screen contents attached to the user representation were also significantly better remembered than those at the top and bottom of the screen.

R. Walter, A. Bulling, D. Lindlbauer, M. Schüssler, J. Müller, 2015.
Analyzing Visual Attention During Whole Body Interaction with Public Displays. UBICOMP '15, Osaka, Japan.

Creature Teacher: A Performance-Based Animation System for Creating Cyclic Movements

Creature Teacher is a performance-based animation system for creating cyclic movements. Users directly manipulate body parts of a virtual character by using their hands. Creature Teacher’s generic approach makes it possible to animate rigged 3D models with nearly arbitrary topology (e.g., non-humanoid) without requiring specialized user-to-character mappings or predefined movements. We use a bimanual interaction paradigm, allowing users to select parts of the model with one hand and manipulate them with the other hand. Cyclic movements of body parts during manipulation are detected and repeatedly played back - also while animating other body parts. Our approach of taking cyclic movements as an input makes mode switching between recording and playback obsolete and allows for fast and seamless creation of animations. We show that novice users with no animation background were able to create expressive cyclic animations for initially static virtual 3D creatures.

A. Fender, J. Müller, D. Lindlbauer, 2015.
Creature Teacher: A Performance-Based Animation System for Creating Cyclic Movements. SUI '15, Los Angeles, CA, USA.

A Chair as Ubiquitous Input Device: Exploring Semaphoric Chair Gestures for Focused and Peripheral Interaction

During everyday office work we are used to controlling our computers with keyboard and mouse, while the majority of our body remains unchallenged and the physical workspace around us stays largely unattended. Addressing this untapped potential, we explore the concept of turning a flexible office chair into a ubiquitous input device. To facilitate daily desktop work, we propose the utilization of semaphoric chair gestures that can be assigned to specific application functionalities. The exploration of two usage scenarios in the context of focused and peripheral interaction demonstrates the high potential of chair gestures as an additional input modality for opportunistic, hands-free interaction.

K. Probst, D. Lindlbauer, M. Haller, B. Schwartz, A. Schrempf, 2014. A Chair as Ubiquitous Input Device: Exploring Semaphoric Chair Gestures for Focused and Peripheral Interaction. CHI '14, Toronto, Canada.

K. Probst, D. Lindlbauer, M. Haller, B. Schwartz, A. Schrempf, 2014.
Exploring the Potential of Peripheral Interaction through Smart Furniture.
Workshop on Peripheral Interaction: Shaping the Research and Design Space at CHI '14, Toronto, Canada.

K. Probst, D. Lindlbauer, P. Greindl, M. Trapp, M. Haller, B. Schwartz, and A. Schrempf, 2013. Rotating, Tilting, Bouncing: Using an Interactive Chair to Promote Activity in Office Environments. CHI EA '13, Paris, France.

Perceptual Grouping: Selection Assistance for Digital Sketching

Modifying a digital sketch may require multiple selections before a particular editing tool can be applied. Especially on large interactive surfaces, such interactions can be fatiguing. Accordingly, we propose a method, called Suggero, to facilitate the selection process of digital ink. Suggero identifies groups of perceptually related drawing objects. These “perceptual groups” are used to suggest possible extensions in response to a person’s initial selection. Two studies were conducted. First, a background study investigated participants’ expectations of such a selection assistance tool. Then, an empirical study compared the effectiveness of Suggero with an existing manual technique. The results revealed that Suggero required fewer pen interactions and less pen movement, suggesting that Suggero minimizes fatigue during digital sketching.

D. Lindlbauer, M. Haller, M. Hancock, S. D. Scott, and W. Stuerzlinger, 2013. Perceptual Grouping: Selection Assistance for Digital Sketching.
ITS ’13, St. Andrews, Scotland.

D. Lindlbauer, 2012
Perceptual Grouping of Digital Sketches.
Master’s thesis (supervised by Prof. Michael Haller)
University of Applied Sciences Upper Austria, Hagenberg

Understanding Mid-Air Hand Gestures: A Study of Human Preferences in Usage of Gesture Types for HCI

In this paper we present the results of a study of human preferences in using mid-air gestures for directing other humans. Rather than contributing a specific set of gestures, we contribute a set of gesture types, which together form the core actions needed to complete any of our six chosen tasks in the domain of human-to-human gestural communication without the speech channel. We observed 12 participants cooperating to accomplish different tasks using only hand gestures to communicate. We analyzed 5,500 gestures in terms of hand usage and gesture type, using a novel classification scheme which combines three existing taxonomies in order to better capture this interaction space. Our findings indicate that, depending on the meaning of the gesture, there are preferences in the usage of gesture types, such as pointing, pantomimic acting, direct manipulation, semaphoric, or iconic gestures. These results can be used as guidelines to design purely gesture-driven interfaces for interactive environments and surfaces.

R. Aigner, D. Wigdor, H. Benko, M. Haller, D. Lindlbauer, A. Ion, S. Zhao, and J.T.K.V. Koh, 2012. Understanding Mid-Air Hand Gestures: A Study of Human Preferences in Usage of Gesture Types for HCI.
Microsoft Tech Report, Redmond, WA, USA. MSR-TR-2012-11.

Exploring the Use of Distributed Multiple Monitors Within an Activity-Promoting Sit-and-Stand Office Workspace

Nowadays, sedentary behaviors such as prolonged sitting have become a predominant element of our lives. Particularly in the office environment, many people spend the majority of their working day seated in front of a computer. In this paper, we investigate the adoption of a physically active work process within an activity-promoting office workspace design that is composed of a sitting and a standing workstation. Making use of multiple distributed monitors, this environment introduces diversity into the office workflow through the facilitation of transitions between different work-related tasks, workstations, and work postures. We conducted a background study to better understand how people perform their daily work within this novel workspace. Our findings identify different work patterns and basic approaches for integrating physical activity, which indicate a number of challenges for software design. Based on the results of the study, we provide design implications and highlight new directions in the field of HCI design to support seamless alternation between different postures while working in such an environment.

K. Probst, D. Lindlbauer, F. Perteneder, M. Haller, B. Schwartz, and A. Schrempf, 2013. Exploring the Use of Distributed Multiple Monitors Within an Activity-Promoting Sit-and-Stand Office Workspace.
Interact ’13, Cape Town, South Africa.

Professional activity, awards & talks

Program committee
Program Committee member for CHI 2019
Program Committee member for UIST 2018
Program Committee member for ISS 2017

Organizing committee
SIGCHI operations committee (since 02/2016)
CHI 2016 - 2020 Video capture chair
UIST 2019 Student Innovation Contest co-chair
UIST 2018 Student Innovation Contest co-chair
UIST 2018 Best Paper Committee member
UIST 2016 Student Volunteer co-chair
UIST 2015 Documentation chair
Pervasive Displays 2016 Poster chair

Reviewing & other activity
2015 CHI, ITS, ICMI, SUI, PerDis, PERCOMP Journal
2014 CHI, UIST*, ICMI, SUI, NordiCHI
(*) received special recognition for reviewing

Poster committee for ISS 2017 & 2016, MUM 2016
Student volunteer for ITS 2014, UIST 2014, CHI 2015

Grants & fellowships
ETH Zurich Postdoctoral Fellowships (CHF 229,600 / $229,068, Principal Investigator, 2018)
A Computational Framework for Increasing the Usability of Augmented Reality and Virtual Reality
Shapeways Educational Grant ($1,000, Contributor, 2015)
Exploring Visual Saliency of 3D Objects
Performance scholarship of FH Hagenberg (€750 / $850, Awardee, 2011)
One of twelve awardees for scholarship by FH Hagenberg (Leistungsstipendium)

CHI 2016 Best Paper Honorable Mention Award
UIST 2015 Best Paper Honorable Mention Award

Invited talks
2018/05/22 Interact Lab - University of Sussex. Hosted by Diego Martinez.
2018/03/02 IST Austria. Hosted by Bernd Bickel.
2018/02/21 DGP – University of Toronto. Hosted by Seongkook Heo.
2017/12/15 ETH Zurich. Hosted by Otmar Hilliges.
2017/12/14 Disney Research Zurich. Hosted by Anselm Grundhöfer.
2017/12/12 INRIA Bordeaux. Hosted by Martin Hachet.
2017/10/05 Aarhus University.

You can download my cv here: cv_davidlindlbauer.pdf.

Older Projects
Some projects I did during my studies or while working as a software developer.

matreco [2011]

matreco is an eco-feedback visualisation. The software analyses energy data from the home automation system and displays it to users. The energy used by each consumer, and its status, is presented in a 2D visualisation. Additionally, users can replay the last 12/24/48 hours of energy consumption and listen to a musical interpretation of the data. This way, users can keep track of their energy consumption and, if necessary, change their behavior.

AEC Facade Visualisation [2011]

This project is a visualization for the interactive facade of the Ars Electronica Center, Linz. Users can play the game Breakout (or "bricks"), with the paddle controlled by their body movement: the system tracks the player in front of the building with a camera and positions the paddle accordingly, by sending network commands to the AEC facade interface. The project was realized within 3 days together with Alexandra Ion as a course project for the class "Generative Arts" during my master's degree.

Kontrollwerk [2009]

Kontrollwerk is a multitouch MIDI controller application for surface platforms, built as a bachelor project with Alexandra Ion and Stefan Wasserbauer. KontrollWerk lets users create their own user interface from different types of MIDI controls. The output can be directed to any software or to any kind of internal or external MIDI device. With gesture recognition and a blob menu, the application offers intuitive handling. The software is well suited to DJs and VJs controlling several devices during a live performance.

The Witness [2009]

The iPhone app “The Witness” is an interactive real-life game with multiple components, realized during my time at Interactive Pioneers. Depending on the level of the game and the player's location, the player has to complete different tasks to uncover more information and finish the game. Set in Berlin, the game guided players to multiple locations through the software. After reaching a location, players used the app to watch videos of the story, fulfill tasks such as finding QR codes, and communicate with actors who were part of the game. The project was realized with Jung von Matt Spree (advertising agency, concept) and 13th Street (client). I was responsible for the complete development of the iPhone app.