We continuously offer topics for student projects (engineering practice, research practice, working student positions, IDPs) and final-year projects (Bachelor's or Master's theses).
Graphical User Interface for Interactive Object Segmentation with Minimal User Input
yle="margin-bottom: 0in; line-height: 100%;">In this work, a web-based GUI for interactive 2D object segmentation scheme needs to be designed and implemented. With this interface, users will be able to provide inputs to the seed-based segmentation algorithm to segment the interested objects in unstructured scenes robustly. The user should be able to provide different kinds of seeds to indicate the regions within the requested object and the regions outside the object easily. Interaction types will range from a single click on the object to multiple brush seeds to provide positive and negative samples for segmentation. Different tools for cutting, bounding or adding areas will also be included for correcting the segmentation errors in real-time application.
This work includes:
- Literature search and selection of robust seed-based 2D object segmentation algorithms based on computational cost and success analysis
- Design and implementation of a web-based GUI for providing easy and efficient interaction with the seeded segmentation algorithm
- Conducting and evaluating experiments for various unstructured scenes with different interaction types
Simulation Environment for Human Activity Analysis
3D Simulation, Unity3D, Unreal Engine, RGBD, Computer Vision, Machine Learning
This topic is about 3D simulation for human activity analysis in indoor environments. The student(s) will investigate simulation platforms such as MORSE, Unity3D, and Unreal Engine, and map human activity flows from daily life into the simulation. The student(s) will then extend the simulator's capabilities to cover a larger and more complex spectrum of activity flows.
If time permits, the 3D data generated from the simulation will be processed using Machine Learning Techniques for Human intention recognition/anticipation.
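For illustration, an activity flow could be represented as a simple ordered structure that the simulator replays; the class and field names below are hypothetical and not part of any of the listed platforms:

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str          # e.g. "fetch ingredients"
    location: str      # room or object the simulated human moves to
    duration_s: float  # nominal duration in seconds

@dataclass
class ActivityFlow:
    intention: str                 # high-level goal driving the flow
    steps: list = field(default_factory=list)

flow = ActivityFlow(intention="prepare dinner", steps=[
    Activity("fetch ingredients", "kitchen shelf", 30.0),
    Activity("cook", "stove", 900.0),
    Activity("clean up", "sink", 180.0),
])
```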
This is a great opportunity to contribute to open-source software.
Interest and first experience in 3D graphics, Python, C++, ML/DL
Deep-learning-based 3D Object Segmentation using RGB-D Image and Human Assistance Input
In this thesis, a deep neural network for 3D object segmentation will be designed that takes RGB-D data and human assistance data as inputs and outputs an accurate 3D segmentation of the object of interest in the scene. The human assistance will be in the form of seeds in the RGB image, ranging from a single point within the object region to multiple point or brush samples indicating the region of interest and the regions to exclude.
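For intuition about the input encoding, the seeds could be rendered as extra image channels and stacked onto the RGB-D frame, as in the deliberately minimal PyTorch sketch below; the actual architecture is part of the thesis work and will differ:

```python
import torch
import torch.nn as nn

class SeededSegNet(nn.Module):
    """Minimal placeholder: 6-channel input -> per-pixel object logits.

    Channels: R, G, B, depth, positive-seed map, negative-seed map.
    """
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.Conv2d(32, 1, 1),  # logits: object vs. not-object
        )

    def forward(self, rgbd, pos_seeds, neg_seeds):
        # rgbd: (B, 4, H, W); each seed map: (B, 1, H, W)
        x = torch.cat([rgbd, pos_seeds, neg_seeds], dim=1)
        return self.decoder(self.encoder(x))
```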
- Basic knowledge of digital signal processing / image processing
- Hands-on experience with Artificial Neural Network libraries / motivation to learn them
- Motivation to deliver successful work
Simulation of Autonomous Airplane Inspection using Drones
Autonomous drones, airplane inspection, Gazebo simulation
The goal of this project, part of the aviation research programme V (LuFo V) of the Federal Ministry for Economic Affairs and Energy, is to develop an autonomous drone for airplane inspection inside a hangar. The drone is expected to be equipped with a variety of sensors, such as LiDAR, stereo cameras, IMUs, a compass, and optionally active RGB-D sensors. The challenge in this environment is the large metal structures of both the airplane and the hangar itself, which strongly affect GPS signals and the compass. In this GPS-denied environment, the drone will collect data from the sensors and send it to a ground station, where SLAM algorithms map the environment and localize the drone precisely. The drone will follow preset inspection points at which it captures inspection images of the airplane's surface. The images are then sent to the ground station, which applies machine learning techniques using IBM Watson to obtain a damage classification result. The results are collected and an overall report is generated, documenting the current condition of the airplane.
The LMT will develop the software for module communication (ROS-based), sensor data acquisition and encoding, as well as data processing in the ground station, such as SLAM and drone control. To test the algorithms, two platforms are used at the LMT: a small platform based on the "DJI Flame Wheel 550" for autonomous control and stability tests, and a bigger drone based on the "DJI Spreading Wings S1000+" that mounts all available sensors for data acquisition tests. During the development phase, this data is analyzed offline to adjust parameters and algorithms, leading to improved localization precision and better real-time control.
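To give a flavor of the ROS-based module communication, a relay node on the drone side might forward sensor data to the ground station roughly as sketched below; the topic names are invented for illustration:

```python
import rospy
from sensor_msgs.msg import Image, Imu

class SensorForwarder:
    """Illustrative relay: republishes onboard sensor data so the
    ground station can run SLAM and damage classification."""
    def __init__(self):
        self.image_pub = rospy.Publisher(
            "/ground_station/inspection_image", Image, queue_size=1)
        rospy.Subscriber("/drone/camera/image_raw", Image, self.on_image)
        rospy.Subscriber("/drone/imu/data", Imu, self.on_imu)

    def on_image(self, msg):
        # Encoding/compression of the image would happen here.
        self.image_pub.publish(msg)

    def on_imu(self, msg):
        pass  # e.g. feed into onboard state estimation

if __name__ == "__main__":
    rospy.init_node("sensor_forwarder")
    SensorForwarder()
    rospy.spin()
```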
More information on the project can be found on the project's website:
Working tasks can vary from week to week. We are mainly looking for a working student to improve our drone simulation in Gazebo, so students with knowledge of Gazebo simulation will be preferred.
Interested students should send their current grade sheet and CV to the contact below.
The student is required to have excellent knowledge of C++/Python and experience with the ROS framework. We are looking to employ a student assistant for at least 6 months with at least 10 hours of working time per week.
3D-Vision Dataset for Human Activity and Intention Analysis in Indoor Environments
Machine Learning, Conditional Random Fields, Computer Vision, RGBD, IMU, Human Intentions
The overarching goal of this thesis is to define and record a novel human activity dataset that addresses the limitations of current datasets. A complementary goal is to evaluate the performance of state-of-the-art ML algorithms for Human Intention Recognition on this dataset and to propose improvements based on the dataset's peculiarities.
Human Activities of Daily Living are driven by our underlying intentions. For example, the intention of "making pasta" spawns a sequence of activities: fetch pasta, boil it, fetch and chop vegetables for the sauce, and clean up after cooking.
Correct estimation of human intentions is critical for non-intrusive and anticipatory robotic assistance in these tasks. Several datasets have been made publicly available by researchers for benchmarking Machine Learning algorithms for Human Intention/Activity recognition.
However, these datasets have several limitations. For example, most of them: 1) focus on a single sensor modality (2D camera, RGBD camera, LiDAR, IMU, RFID tags on objects, motion sensors, etc.) and ignore the others, 2) capture staged rather than natural activities, 3) do not include sequences of regularly occurring, long, and complex daily activities.
This thesis shall construct a novel dataset that addresses the above limitations and is comprehensive in the most important respects -- sensor modalities, human and environment variability, activity/intention duration and complexity, etc. If time permits, the performance of state-of-the-art algorithms on this dataset shall be evaluated.
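To make the scope concrete, one synchronized recording in such a dataset could be organized along the following lines; this schema is hypothetical and only illustrates the combination of modalities and labels:

```python
# Hypothetical per-sequence annotation schema for the new dataset.
sequence = {
    "subject_id": 3,
    "environment": "kitchen_A",
    "modalities": {                 # paths to synchronized streams
        "rgb": "seq_017/rgb/",
        "depth": "seq_017/depth/",
        "imu": "seq_017/imu.csv",
    },
    "intention": "making pasta",    # high-level label for the sequence
    "activities": [                 # fine-grained, time-stamped labels
        {"label": "fetch pasta", "start_s": 0.0, "end_s": 18.4},
        {"label": "boil pasta", "start_s": 18.4, "end_s": 610.0},
    ],
}
```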
Ambition, motivation, and first experience in Computer Vision, ML/DL, Python, C++
Deep-learning-based robot grasp planning
For a given grasp, use deep learning to estimate the possible friction between the gripper and the object in order to predict grasp success
The robustness of a grasp depends strongly on the friction between the gripper and the grasped object. The existing algorithm computes the possible frictional force/torque pairs at the contact and fits them to a 6D ellipsoid using conventional algorithms such as least-squares fitting and convex optimization. While such algorithms provide accuracy guarantees, their computational time is substantial.
This work aims to train a deep neural network to estimate grasp success in real time. Given the geometry and pressure distribution of a contact, the network outputs a fitted 6D ellipsoid describing the possible frictional forces and torques, and predicts grasp success based on this ellipsoid. The existing conventional methods provide the ground truth.
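To illustrate the ground-truth side, fitting an ellipsoid w^T A w = 1 to sampled friction wrenches w in R^6 reduces to a linear least-squares problem in the entries of the symmetric matrix A. The sketch below shows this plain formulation only; it is not the existing implementation, and it omits the positive-definiteness constraint that the convex-optimization variant would enforce:

```python
import numpy as np

def fit_wrench_ellipsoid(wrenches):
    """Least-squares fit of a 6D ellipsoid w^T A w = 1 to sampled
    friction wrenches (N x 6 array). Returns the symmetric 6x6 A."""
    W = np.asarray(wrenches)                       # shape (N, 6)
    idx = [(i, j) for i in range(6) for j in range(i, 6)]
    # Each column is the monomial w_i * w_j (off-diagonals doubled,
    # since a_ij appears twice in the quadratic form).
    M = np.column_stack([
        (1.0 if i == j else 2.0) * W[:, i] * W[:, j] for i, j in idx
    ])
    coeffs, *_ = np.linalg.lstsq(M, np.ones(len(W)), rcond=None)
    A = np.zeros((6, 6))
    for (i, j), v in zip(idx, coeffs):
        A[i, j] = A[j, i] = v
    return A
```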
Algorithm evaluation for robot grasping with compliant jaws
python, ROS, robot grasping
Evaluate a state-of-the-art algorithm for robot grasp planning on a customized physical setup consisting of a KUKA robot arm and a parallel-jaw gripper with compliant materials.
Model-based grasp planning algorithms depend on friction analysis, since the friction between objects and gripper jaws strongly affects grasp robustness. A state-of-the-art friction analysis algorithm for grasp planning was evaluated with plastic robot fingers and achieved promising results, but will it still work when the jaws are covered with compliant materials such as rubber and silicone?
The task of this work is to collect data with the physical robot setup and to evaluate the existing algorithm on the collected data.
Design and implementation of a fault-tolerant haptic controller for Jaco2 manipulator
ROS, Haptics, Teleoperation, Jaco2
Fault-tolerant haptic teleoperation
With the advancement of robotics and of communication networks such as 5G, telemedicine has become a critical application for remote diagnosis and treatment.
In this project, we want to perform robotic teleoperation using a Sigma 7 haptic master and a Jaco 2 robotic manipulator.
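For intuition, the fault-tolerance requirement can be pictured as a watchdog around the command stream: if no fresh master command arrives within a deadline, the slave holds a safe pose. The generic pattern is sketched below in Python; the actual controller will be designed in the thesis and implemented in C++/ROS Control, and the timeout value is an arbitrary placeholder:

```python
import time

COMMAND_TIMEOUT_S = 0.05  # placeholder deadline; must be tuned for the link

class TeleopWatchdog:
    """Freezes the slave arm when the haptic command stream stalls."""
    def __init__(self):
        self.last_cmd = None
        self.last_stamp = 0.0

    def on_command(self, cmd):
        # Called whenever a command packet arrives from the Sigma 7 master.
        self.last_cmd = cmd
        self.last_stamp = time.monotonic()

    def control_step(self, send_to_robot, hold_pose):
        age = time.monotonic() - self.last_stamp
        if self.last_cmd is None or age > COMMAND_TIMEOUT_S:
            hold_pose()                   # fault: delay or disconnect
        else:
            send_to_robot(self.last_cmd)  # nominal operation
```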
- State of the art review and mathematical modeling
- Jaco2 haptic controller implementation
- Fault-tolerant (delay, network disconnect) controller design
- System evaluation with external force-torque sensor
- Strong background in C++ programming
- Solid background in control theory
- Be familiar with robot dynamics and kinematics
- Be familiar with the robot operating system (ROS) and ROS Control (Optional)
Development of the Cloud-based Kinova Movo Virtual Twin
Create a virtual twin of the Kinova Movo platform using the NVidia ISAAC SIM SDK and the Unity3D game engine. Facilitate robot training in a photo-realistic environment using the NVidia RTX real-time ray tracing engine.
IsaacSim allows us to use the Unity3D game engine as the simulation environment for Isaac robotics. It provides an expandable test environment to evaluate the performance of complex robotic interactions and dynamic navigation, as well as an infinite stream of procedurally generated, fully annotated training data for machine learning. Features include emulation of sensor hardware and robot base models, scene randomization, and scenario management. In this project, the student shall develop a realistic robot model of the Kinova Movo as well as a reconstructed version of the current demo lab in Unity3D using the NVidia ISAAC Sim SDK. This virtual twin of the environment and robot will later be used for robot training, in particular for haptically sensitive tasks in medical and assistive applications.
- Port the current Kinova Movo Gazebo simulation to Unity3D
- Create a virtual twin of the lab environment
- Host multiple simulation environments for both students and robot training
- Simulate object friction and kinesthetic interactions
- Evaluate with different real-world scenarios
- Strong Unity3D, C#, and C++ background
- Be familiar with the robot operating system (ROS)
Designing the most intuitive UX/UI for Medical Telepresence Robots
Unity3D, C#, GUI, UI/UX, Robot Interface Design
We want to perform a human study investigating human-robot interaction in order to design the most intuitive interface for a general-purpose telepresence robot from a psychological perspective.
The “UI” in UI design stands for “user interface”: the graphical layout of an application. “UX” stands for “user experience”: how a user interacts with the app, and whether that experience is smooth and intuitive or clunky and confusing. With the advancement of robotics and of communication networks such as 5G, telemedicine has become an essential application for remote diagnosis and treatment. In addition to the human study described above, the student should create a poll to study user expectations when interacting with a sophisticated robotic platform, and design a general theme for both the robot and the application to maximize the overall intuitiveness of the interaction.
- Human Study on the most intuitive way of robot interaction
- Design and implement the overall project theme
- Design and implement the 5G testbed application interface
- Strong background in C# programming language
- UI/UX Design Fundamentals
- Solid background in Unity3D engine
- Be familiar with the robot operating system (ROS) (Optional)
- Solid background in Mobile Device App development (Optional)
Evaluation of Point Cloud Compression for Teleoperated Driving
Point Cloud Compression, Autonomous Driving
This work can be done in German or English
LiDAR is an important sensor type for the perception of autonomous vehicles. Human perception is mainly based on RGB data, which in the case of teleoperation is captured by RGB cameras and transmitted to the remote operator through a communication network. In some situations, a 3D representation of the scene might be helpful for the operator, which could be achieved using LiDAR data. To avoid high transmission rates, the LiDAR data needs to be compressed. Existing methods for point cloud compression [1, 2, 3], however, do not focus on automotive LiDAR data.
The objective of this project is to set up existing point cloud compression implementations and compare them with a focus on automotive point clouds; a minimal sketch of possible evaluation metrics follows the task list below.
- Set up available point cloud compression implementations
* ROS PCL 
* MPEG L-PCC  (available at MPEG Repo)
* MPEG G-PCC  (available at MPEG Repo)
* MPEG Anchor Implementation 
- Evaluate implementations in terms of
* Encoding time/complexity
* Compression rate
* Compression quality
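The quality comparison could, for instance, use a symmetric point-to-point error between the original and the decoded cloud. The sketch below computes such a generic metric with a k-d tree; it is not one of the official MPEG reference metrics:

```python
import numpy as np
from scipy.spatial import cKDTree

def p2p_rmse(original, decoded):
    """Symmetric point-to-point RMSE between two (N x 3) point clouds."""
    def one_way(src, dst):
        dists, _ = cKDTree(dst).query(src)  # nearest-neighbor distances
        return np.sqrt(np.mean(dists ** 2))
    return max(one_way(original, decoded), one_way(decoded, original))

def compression_ratio(raw_bytes, compressed_bytes):
    """Higher is better: raw size divided by encoded size."""
    return raw_bytes / compressed_bytes
```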
- Experience with ROS and Point Clouds
- Basic knowledge of C++ and Linux
Student Assistant for distributed haptic training system
server-client, UDP, GUI programming
1. Build a server-client telehaptic training system based on the current code; a minimal sketch of the underlying UDP pattern is shown below.
2. Develop a GUI for the client side.
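At its core, the link is plain UDP. The Python sketch below only illustrates the socket flow (the existing code base targets Windows/Visual Studio, and the host/port values are placeholders):

```python
import socket

HOST, PORT = "127.0.0.1", 9000  # placeholder endpoint

def server():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((HOST, PORT))
    while True:
        data, addr = sock.recvfrom(1024)  # haptic sample from a client
        sock.sendto(data, addr)           # placeholder: echo it back

def client(payload: bytes) -> bytes:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload, (HOST, PORT))
    reply, _ = sock.recvfrom(1024)
    return reply
```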
- Knowledge of socket programming, e.g. UDP
- GUI programming, e.g. Qt
- Working environment: Windows + Visual Studio
This work is closely connected to the project Teleoperation over 5G networks and the IEEE standardization activity P1918.1.1.
Source Separation in Vibrotactile Signals
Haptics, Vibrotactile, Signal Processing, Source Separation, Denoising
Microscopic roughness sensations can be captured via the vibrations induced by sliding over a textured surface. However, the captured signals can be affected by various factors, such as vibrations from external systems or sensor inaccuracies. An important goal is to filter the signals and to remove noise components and imperfections. Source separation is a promising approach for this: the idea is to separate the components of a signal according to the different sources they originate from. After separation, noise components can be removed so that only the meaningful parts of the original signal remain. The student shall investigate different forms of distortion in vibrotactile signals and methods to separate and remove them.
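One candidate among the methods to investigate is independent component analysis on multi-channel recordings, sketched below with scikit-learn's FastICA; this is only one option, not a prescribed solution:

```python
import numpy as np
from sklearn.decomposition import FastICA

def separate_sources(recordings, n_sources=2):
    """recordings: (n_samples, n_channels) array of synchronized
    vibrotactile signals. Returns estimated sources and mixing matrix."""
    ica = FastICA(n_components=n_sources, random_state=0)
    sources = ica.fit_transform(recordings)  # (n_samples, n_sources)
    return sources, ica.mixing_

# After separation, noise-like components can be zeroed out and the
# remaining sources projected back through the mixing matrix.
```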
MATLAB, basics in signal processing, basics in audio processing