Project: Human Harmonization: Learning motion models of human scene interaction

Basic data

Duration: 04/01/2022 to 04/01/2025
Abstract / short description:
Human intelligence evolved to enable us to move, interact, and act upon our environment, yet most digital humans lack such skills. The goal of this project is to build the next generation of digital humans, which can move and execute daily activities within a 3D environment. The research in this project will open up many new applications and research directions. For example, a digital assistant, taking the form of a human we can relate to, could show us the way in a new building or demonstrate a new skill. Robots need to execute tasks in the real world and could learn from the environment-aware digital humans we will develop. Digital content creation (3D movies, phone apps, video games, etc.) requires synthesizing human motion coherent with the 3D digital world; this process currently requires manual intervention, is very costly, and could be automated with our models. With the rise of Virtual, Mixed, and Augmented Reality (VR/AR), there is an increasing need for computational tools that let digital humans blend in seamlessly in mixed environments where the boundaries between real and virtual are blurred. Beyond applications, many studies support the view that general intelligence requires an active body. The models developed here can provide such an active body, in the form of a human, to future intelligent systems.

Involved staff


Department of Informatics
Faculty of Science
Tübingen AI Center

Local organizational units

Department of Informatics
Faculty of Science
University of Tübingen
