The HAI group works at the intersection of Computer Vision, Sensor Fusion, Machine Learning, and Psychology. We aim to build models of human state (intentions, engagement, comfort), and to use the insights gained from these models to build technology that improves human well-being, comfort and convenience.

Other keywords: IoT, Intelligent Environments, Ambient Assisted Living.

Current focus: Human Intention Estimation using Computer Vision and Machine Learning

Human Activities of Daily Living are driven by our underlying intentions. For example, the intention of "making pasta" spawns a sequence of activities such as fetching pasta, boiling it, fetching and chopping vegetables for the sauce, and cleaning up after cooking. Correct and timely estimation of human intentions is critical for non-intrusive and anticipatory assistance by robots.
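To make the idea concrete, the minimal sketch below shows one simple way an intention estimate could be maintained: a Bayesian belief over candidate intentions that is updated each time an activity is recognized. The intentions, activities, and probabilities are purely illustrative assumptions, not our actual models.

```python
# Hypothetical illustration: two candidate intentions and the activities each
# tends to produce. All probabilities are made up for this sketch.
ACTIVITY_LIKELIHOOD = {
    "make pasta": {"fetch pasta": 0.30, "boil water": 0.25, "chop vegetables": 0.20,
                   "clean up": 0.15, "fetch plate": 0.10},
    "make salad": {"fetch plate": 0.20, "chop vegetables": 0.45, "clean up": 0.20,
                   "fetch pasta": 0.05, "boil water": 0.10},
}

def update_beliefs(prior, observed_activity):
    """One Bayesian update of the intention belief given an observed activity."""
    posterior = {}
    for intention, p in prior.items():
        likelihood = ACTIVITY_LIKELIHOOD[intention].get(observed_activity, 1e-6)
        posterior[intention] = p * likelihood
    total = sum(posterior.values())
    return {k: v / total for k, v in posterior.items()}

# Start from a uniform prior and refine it as activities are recognized over time.
beliefs = {"make pasta": 0.5, "make salad": 0.5}
for activity in ["fetch pasta", "boil water", "chop vegetables"]:
    beliefs = update_beliefs(beliefs, activity)
    print(activity, beliefs)
```

After only a few observed activities, the belief concentrates on the intention most consistent with them, which is what enables anticipatory assistance before the task is finished.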

Under this topic, we develop methods and models for human intention estimation based on sensory observation of human activity and the environment. We primarily use RGBD data recorded in indoor environments. Depending on availability, sensors worn by people (e.g., IMUs) and sensors installed in the environment (RFID readers, motion detectors, etc.) may be fused with the RGBD data.
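As a rough sketch of what such fusion can look like at the feature level (the function, dimensions, and missing-modality handling here are assumptions for illustration, not a description of our pipeline):

```python
from typing import Optional
import numpy as np

def fuse_frame(rgbd_feat: np.ndarray,
               imu_feat: Optional[np.ndarray] = None,
               imu_dim: int = 6) -> np.ndarray:
    """Concatenate an RGBD feature vector with optional IMU features for one frame.

    When the wearable sensor is unavailable, the IMU slot is zero-filled and a
    presence flag is appended so a downstream model can learn to ignore it.
    """
    if imu_feat is None:
        imu_part = np.zeros(imu_dim)
        present = np.array([0.0])  # IMU missing for this frame
    else:
        imu_part = imu_feat
        present = np.array([1.0])  # IMU present for this frame
    return np.concatenate([rgbd_feat, imu_part, present])

# Example: a 512-d RGBD embedding fused with 6-d accelerometer/gyroscope readings.
rgbd = np.random.rand(512)
imu = np.random.rand(6)
print(fuse_frame(rgbd, imu).shape)  # (519,)
print(fuse_frame(rgbd).shape)       # (519,) with zero-filled IMU slot
```

The same pattern extends to environment sensors such as RFID or motion detectors by appending further per-frame feature slots with their own presence flags.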