Nicolas Alt, Dr.-Ing.
Chair of Media Technology (Prof. Steinbach)
- Room: 2410.02.220
Nicolas Alt studied electrical engineering at the Technische Universität München (Germany) and the Georgia Institute of Technology (USA, France). He graduated from the master's program "Systems of Information and Multimedia Technology" in 2008 and also holds an M.Sc. degree from the Georgia Institute of Technology.
In July 2009 he joined the Media Technology Group at the Technische Universität München where he is a member of the research and teaching staff.
His current research interests are in the field of haptic-visual modeling for cognitive robots.
Intelligent or cognitive robots are systems that operate in complex, unstructured environments, such as private households, offices, commercial buildings, museums or even outdoors. As these environments are built for and used by humans, cognitive robots require a high level of semantic scene knowledge. They must deal with uncertain information, interact with objects or persons, and constantly adapt to changes, such as people walking by. Cognitive robots are an active field of research; yet first systems are already commercially available for some constrained problems – such as automatic vacuum cleaners, lawn mowers, or the intelligent production robot Baxter.
Our research focuses on visual and haptic perception for cognitive robots. Multimodal perception is crucial for manipulation tasks like grasping or pushing objects: Computer vision techniques provide the pose, geometry and identity of objects in the vicinity of the robot. Haptic information, like deformability, weight, adhesion or surface roughness, is essential once the robot comes into contact with an object. For instance, consider cups made of plastic, paper or ceramic: While visually similar, these objects require considerably different grasping forces and sensitivity. Furthermore, the haptic and visual modalities complement each other and thus increase certainty about the scene: Transparent objects are hard to recognize with visual sensors, but touching them yields reliable information. Similarly, very soft and light objects, such as curtains, may disturb haptic sensors, but they are clearly recognized by visual methods.
Visuo-haptic Models for Navigation
Navigation and path planning are important skills for intelligent robots. Existing approaches rely on laser scanners or depth sensors to build 2D maps or 3D models of the environment. From these, a path towards a destination point can be planned that avoids collisions with obstacles in the room.
We complement these purely visual models with haptic information about obstacles. Since the acquisition of haptic data is costly, only simple information – such as weight, friction, deformability and stability – is stored for each obstacle, just enough to plan simple manipulation tasks. Based on these so-called haptic tags, obstacles can be moved out of the way, allowing for a better navigation path through a cluttered environment. The approach is feasible for small platforms, which may push away objects like toys or paper bins in household environments, as well as for humanoid robots, which may need to move chairs or other furniture.
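To illustrate the idea, the following sketch shows what a haptic tag and a push-aside decision might look like in code. All field names, the quasi-static friction model and the thresholds are illustrative assumptions for this example, not part of the published system.

```python
from dataclasses import dataclass

@dataclass
class HapticTag:
    """Hypothetical per-obstacle haptic record stored in the map."""
    weight: float        # estimated mass in kg
    friction: float      # friction coefficient against the floor
    deformable: bool     # True if the object yields under contact
    stable: bool         # True if pushing will not topple the object

def can_push_aside(tag: HapticTag, max_push_force: float) -> bool:
    """Decide whether the robot can safely push this obstacle aside.

    Assumes a simple quasi-static model: the force needed to slide
    the object is its weight times gravity times its friction
    coefficient against the floor.
    """
    g = 9.81  # gravitational acceleration in m/s^2
    required_force = tag.weight * g * tag.friction
    return tag.stable and required_force <= max_push_force

# A light paper bin can be pushed aside; a heavy cabinet cannot.
paper_bin = HapticTag(weight=0.5, friction=0.3, deformable=True, stable=True)
cabinet = HapticTag(weight=40.0, friction=0.5, deformable=False, stable=True)
print(can_push_aside(paper_bin, max_push_force=10.0))  # True
print(can_push_aside(cabinet, max_push_force=10.0))    # False
```

A path planner could consult such a predicate to decide whether to route around an obstacle or to include a push action in the plan.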
We propose visuo-haptic sensors which acquire haptic and visual measurements simultaneously, solely based on a standard camera that already observes the scene. For this purpose, a passive material is mounted on the robot, which deforms when it comes into contact with an obstacle. The deformation is measured visually and provides information about the haptic modality, such as friction forces. This approach yields measurements that are naturally coherent in the haptic and visual modalities, and furthermore reduces the cost of acquiring haptic tags.
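The principle of inferring force from visually observed deformation can be sketched as follows, assuming a linear (Hooke's-law) material model. The stiffness value, the pixel-to-metre scale and the function name are illustrative assumptions; a real sensor would use calibrated material and camera parameters.

```python
def force_from_deformation(pixel_displacement: float,
                           metres_per_pixel: float,
                           stiffness: float) -> float:
    """Convert a visually measured deformation to a contact force.

    pixel_displacement: displacement of a tracked point on the passive
        elastic material, measured in the camera image (pixels).
    metres_per_pixel: camera scale factor at the contact depth.
    stiffness: assumed spring constant of the material in N/m.
    Returns the estimated force in newtons (F = k * x).
    """
    deformation_m = pixel_displacement * metres_per_pixel
    return stiffness * deformation_m

# Example: a 12 px displacement at 0.5 mm/px with a 200 N/m material
# corresponds to 0.006 m of deformation, i.e. 1.2 N of contact force.
print(force_from_deformation(12.0, 0.0005, 200.0))  # 1.2
```

In practice the deformation would be tracked over many image points, so richer quantities such as contact area or friction direction could be estimated from the same camera frames used for visual scene modeling.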
iRobot, Roomba: http://www.irobot.com/global/en/roomba_range.aspx
Rethink Robotics, Baxter: http://www.rethinkrobotics.com/index.php/products/baxter/