PostDoc


Picture of Rahul Chaudhari

Rahul Chaudhari, Dr.-Ing.

Chair of Media Technology (Prof. Steinbach)

Postal address:
Arcisstr. 21
80333 München

  • Phone: +49 (89) 289 - 23542
  • Room: 0509.03.938
  • Email: rahul.chaudhari(at)tum.de

Biography

Rahul Chaudhari has been working as a Senior Researcher at the Chair of Media Technology since August 2019.

He earned his doctoral degree (summa cum laude) in Communications and Signal Processing from TUM in early 2015. His PhD thesis was titled "Data Compression and Quality Evaluation for Haptic Communications". During his PhD, he was a research visitor at Prof. Katherine Kuchenbecker's Haptics Lab at the University of Pennsylvania and Prof. Seungmoon Choi's Haptics & VR Lab at POSTECH, South Korea.

He earned his Master's degree in Communications Engineering from TUM in 2009 and his Bachelor's degree in Electronics and Telecommunications from the University of Pune, India. His Master's studies were sponsored by Siemens in recognition of his performance during his Bachelor's studies, where he ranked top of his class throughout.

After his doctorate, he spent several years in industry in various capacities, as a Software Engineer and as a Technical Project Manager. He worked for several Munich-based start-ups (Artisense, NavVis, and Advanced Navigation Solutions) in the area of sensor fusion (camera, IMU, LIDAR) for 3D mapping, positioning, and navigation. He is happy to share his practical knowledge and experience in both technology and business, especially with students interested in an industrial career or in founding their own start-up.

Drawing on his academic research and his experience in the start-up world, he dreams of one day building a product that improves human lives in some way. Outside of work, he enjoys reading, biking, cooking, and playing table tennis and badminton.

Research

Rahul's research interests lie at the intersection of Computer Vision, Sensor Fusion, Machine Learning, and Psychology. He aims to build models of human state (intentions, engagement, comfort) and to use the insights gained from these models to develop technology that improves human well-being, comfort, and convenience.

Other keywords: IoT, Intelligent Environments, Ambient Assisted Living.

Current focus: Human Intention Estimation using Computer Vision and Machine Learning

Human Activities of Daily Living are driven by our underlying intentions. For example, the intention of making pasta spawns a sequence of activities such as fetching the pasta, boiling it, fetching and chopping vegetables for the sauce, and cleaning up after cooking. Correct and timely estimation of human intentions is critical for non-intrusive and anticipatory assistance by robots.

Under this topic, methods and models for human intention estimation will be developed based on sensory observations of human activity and the environment. Primarily, RGBD data recorded in indoor environments will be used. Depending on availability, other sensors worn by people (e.g., IMU) or installed in the environment (RFID, motion detectors, etc.) may be fused with the RGBD data.
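To illustrate the kind of fusion pipeline this describes, the following is a minimal sketch (not project code) of a late-fusion intention classifier in PyTorch: pre-extracted RGBD activity features and optional wearable IMU features are concatenated and mapped to scores over a hypothetical set of intentions. All feature dimensions, module names, and intention labels are illustrative assumptions.

# Minimal illustrative sketch (hypothetical, not project code): late fusion of
# RGBD-derived activity features with wearable IMU features for intention
# classification. Feature dimensions and intention classes are assumptions.
import torch
import torch.nn as nn

INTENTIONS = ["make_pasta", "clean_kitchen", "set_table"]  # hypothetical label set

class IntentionClassifier(nn.Module):
    def __init__(self, rgbd_dim=512, imu_dim=64, hidden_dim=128,
                 num_intentions=len(INTENTIONS)):
        super().__init__()
        # Separate encoders per modality, fused by concatenation (late fusion).
        self.rgbd_enc = nn.Sequential(nn.Linear(rgbd_dim, hidden_dim), nn.ReLU())
        self.imu_enc = nn.Sequential(nn.Linear(imu_dim, hidden_dim), nn.ReLU())
        self.head = nn.Linear(2 * hidden_dim, num_intentions)

    def forward(self, rgbd_feat, imu_feat=None):
        h_rgbd = self.rgbd_enc(rgbd_feat)
        # If no wearable sensor is available, fall back to a zero IMU embedding.
        h_imu = self.imu_enc(imu_feat) if imu_feat is not None else torch.zeros_like(h_rgbd)
        return self.head(torch.cat([h_rgbd, h_imu], dim=-1))  # logits over intentions

# Toy usage with random features standing in for real sensor observations.
model = IntentionClassifier()
logits = model(torch.randn(1, 512), torch.randn(1, 64))
print(INTENTIONS[logits.argmax(dim=-1).item()])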