Student research projects

At MMK, almost all project work can be carried out remotely, and in most cases work on a project is not restricted.

There are always topics for student research projects here at MMK (Bachelor's and Master's theses, Research Internship, IDP).

Once you have found a topic, please contact the responsible research assistant. If no suitable topic is listed, please contact an assistant to arrange one.

Ingenieurpraxis: The aim of the Ingenieurpraxis is to gain insight into industrial processes. For this reason we do not offer Ingenieurpraxis positions here at MMK, but it is possible for us to supervise you if you find a position in a company.

Additionally, we do not offer internships to students from outside TUM. Because of the volume of requests we receive, it is not possible for us to answer every internship request by email.

Current dates of the MMK student research project presentations

Topics for Student Projects

Area: Virtual Reality

Empirical Research in Virtual Reality

Topic Empirical Research in Virtual Reality
Type Research Internship (FP), Interdisciplinary Project (IDP)
Supervisor Maximilian Rettinger, M.Sc.
E-Mail: maximilian.rettinger@tum.de
Area Virtual Reality
Description At the Chair of Human-Machine Communication you now have the opportunity to apply for an Interdisciplinary Project (IDP) or a Research Internship (FP) in the area of Virtual Reality. Several topics are available, which you can discuss with the supervisor.
Certain topics can also be investigated as a team together with fellow students.
Tasks
  • Topic-related literature research
  • Implementation of a scenario
  • Planning, conducting, and evaluating a user study
Requirements
  • Interest in new technologies and empirical research
  • Structured and reliable working style
  • Basic knowledge of object-oriented programming
Application If you are interested in a topic from this area, please send an email to the address above. It should include: your motivation, previous experience, preferred starting date, CV, and grade report or Transcript of Records.

Area: Speech Recognition

Joint training of speech enhancement and speech recognition

Topic Joint training of speech enhancement and speech recognition
Type Research Internship (FP), Master's Thesis (MA)
Supervisor Lujun Li, M.Eng.
E-Mail: lujun.li@tum.de
Description Motivation:
Recently, end-to-end neural networks have made significant breakthroughs in the field of speech recognition, challenging the dominance of DNN-HMM hybrid architectures. However, speech inputs to ASR systems are generally corrupted by various background noises and reverberation in realistic environments, leading to dramatic performance degradation. To alleviate this issue, the mainstream approach is to use a well-designed speech enhancement module as the front-end of the ASR system. However, enhancement modules can introduce speech distortions and a mismatch with the training conditions, which sometimes degrades ASR performance. Therefore, integrating the speech enhancement and end-to-end recognition networks via joint training is a promising research direction.
Task:
The main task is to improve an already working joint training pipeline with state-of-the-art feature extraction methods, speech enhancement algorithms, and speech recognition algorithms. Details of the architecture can be found in [1]. As further reading, [2] also provides a detailed explanation of the integration of speech enhancement and speech recognition.
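To make the idea of joint training concrete, the following minimal PyTorch sketch optimizes an enhancement front-end and a CTC-based recognizer together in one training step. The module definitions, the MSE enhancement loss, and the 0.5 loss weighting are illustrative assumptions only and do not reproduce the chair's existing pipeline.

  # Toy sketch: one joint training step for enhancement + CTC-based ASR.
  # All module choices and the loss weighting (0.5) are assumptions.
  import torch
  import torch.nn as nn

  class Enhancer(nn.Module):
      """Toy mask-based enhancement front-end operating on feature frames."""
      def __init__(self, feat_dim=80):
          super().__init__()
          self.net = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                   nn.Linear(256, feat_dim), nn.Sigmoid())
      def forward(self, noisy):                 # noisy: (batch, time, feat)
          return noisy * self.net(noisy)        # masked (enhanced) features

  class Recognizer(nn.Module):
      """Toy CTC-based back-end."""
      def __init__(self, feat_dim=80, vocab=30):
          super().__init__()
          self.rnn = nn.LSTM(feat_dim, 256, batch_first=True, bidirectional=True)
          self.out = nn.Linear(512, vocab)
      def forward(self, feats):
          h, _ = self.rnn(feats)
          return self.out(h).log_softmax(-1)    # (batch, time, vocab)

  enhancer, recognizer = Enhancer(), Recognizer()
  optim = torch.optim.Adam(list(enhancer.parameters()) + list(recognizer.parameters()), lr=1e-4)
  ctc = nn.CTCLoss(blank=0)

  # Dummy batch: noisy/clean features and integer transcripts.
  noisy  = torch.randn(4, 100, 80)
  clean  = torch.randn(4, 100, 80)
  labels = torch.randint(1, 30, (4, 20))
  in_len  = torch.full((4,), 100, dtype=torch.long)
  lab_len = torch.full((4,), 20, dtype=torch.long)

  optim.zero_grad()
  enhanced  = enhancer(noisy)
  log_probs = recognizer(enhanced).transpose(0, 1)   # CTC expects (time, batch, vocab)
  loss = ctc(log_probs, labels, in_len, lab_len) + 0.5 * nn.functional.mse_loss(enhanced, clean)
  loss.backward()                                    # gradients flow through both modules
  optim.step()

Because the recognizer's loss is backpropagated through the enhancer, the front-end is trained to produce features that help recognition rather than only minimizing the enhancement objective.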

References:
  1. Liu, Bin, et al. "Jointly Adversarial Enhancement Training for Robust End-to-End Speech Recognition." Proc. Interspeech 2019, pp. 491-495, doi: 10.21437/Interspeech.2019-1242.
  2. Ravanelli, M., P. Brakel, M. Omologo, and Y. Bengio. "Batch-Normalized Joint Training for DNN-Based Distant Speech Recognition." 2016 IEEE Spoken Language Technology Workshop (SLT), San Diego, CA, 2016, pp. 28-34, doi: 10.1109/SLT.2016.7846241.

Requirements
  • Excellent coding skills, preferably in Python.
  • Experience in deep learning frameworks, preferably in Torch/PyTorch & Tensorflow.
  • Background knowledge in speech signal processing or natural language processing is a bonus.
  • Motivation to work on deep learning.
Application If you are interested in working in the promising field of artificial intelligence and, more specifically, speech signal processing, we welcome applications via the email address above. Please specify the topic in the email subject, e.g. "Master's thesis/Research internship application for topic 'XYZ'", and describe your previous project experience and preferred starting date. Please also attach your recent CV and transcript.

A New Method to Generate Hidden Markov Model Topology

Topic A New Method to Generate Hidden Markov Model Topology
Type Master's Thesis, Research Internship
Supervisor Lujun Li, M.Sc.
Tel.: +49 (0)89 289-28543
E-Mail: lujun.li@tum.de
Description Motivation: For decades, acoustic models in speech recognition systems have pivoted on Hidden Markov Models (HMMs), e.g., the Gaussian Mixture Model-HMM and Deep Neural Network-HMM systems, and have achieved a series of impressive results. At present, the most widely employed HMM topology is the 3-state left-to-right architecture. However, there is no firm evidence for its suitability and superiority, which reflects how little research has gone into HMM topology. We propose an innovative technique to customize an individual HMM topology for each phoneme and achieve strong results in a monophone system. The topic of this thesis is to apply it in a triphone system.
Task: The main task is to transfer an already working deep architecture from the monophone system to the triphone system; a toy illustration of what an HMM topology looks like is given after the list below. Overall, the thesis will proceed in the following steps:
1. State-of-the-art research
2. Understanding the already working deep architecture
3. Implementing the algorithm in triphone system
4. Evaluation of the architecture on the Tedlium v2 corpus
5. Demonstration of the working system
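For illustration only, the sketch below writes a left-to-right HMM topology with a per-phoneme state count as a transition matrix. It is meant to make the notion of "HMM topology" concrete; the function name, the self-loop probability, and the choice of state counts are assumptions, and the chair's proposed method is not reproduced here.

  # Illustration only: left-to-right HMM topologies with a configurable state count.
  import numpy as np

  def left_to_right_topology(num_states, self_loop=0.6):
      """Transition matrix where each state may stay put or move to the next state."""
      A = np.zeros((num_states, num_states))
      for s in range(num_states):
          if s == num_states - 1:
              A[s, s] = 1.0                  # final state absorbs
          else:
              A[s, s] = self_loop            # self-loop probability
              A[s, s + 1] = 1.0 - self_loop  # forward transition
      return A

  # The standard 3-state topology vs. a hypothetical phoneme-specific choice.
  print(left_to_right_topology(3))
  print(left_to_right_topology(5, self_loop=0.7))

Customizing the topology per phoneme amounts to choosing a different number of states (and transition structure) for each phoneme instead of fixing 3 states everywhere.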
Requirements 1. Background knowledge in speech signal processing or natural language processing.
2. Excellent coding skills, preferably in C++ & Python.
3. Experience in deep learning frameworks, preferably in Torch/PyTorch & Tensorflow.
4. Experience in Kaldi toolkits is a big bonus.
5. Motivation to work on deep learning.
Application If you are interested in this topic, we welcome applications via the email address above. Please set the email subject to "application for topic 'XYZ'", e.g. "Master's thesis application for topic 'XYZ'", and clearly explain in the message why you are interested in the topic. Also make sure to attach your most recent CV (if you have one) and your grade report.

Area: Computer Vision

Statistical Analysis of Deep Learning Object Detectors

Topic Statistical Analysis of Deep Learning Object Detectors
Type Research Internship
Supervisor Johannes Gilg, M.Sc.
E-Mail: johannes.gilg@tum.de
Area Computer Vision
Description Motivation: Deep Learning has revolutionized the Computer Vision task of Object Detection. This has led to an explosion in the number of Object Detection architectures (e.g. SSD, R-CNN, YOLO, FPN [1], CenterNet [2], and DETR [3]) and training "tricks". Their merit is usually judged on the single metric of mean average precision (mAP) on a popular dataset (COCO) [4], weighed against model size in number of parameters and compute cost in FLOPs. This raises the question of whether this reduction to a single metric hides model-specific performance differences and biases.
Task: The task is to gather or generate the outputs of different publicly available object detectors and compute distinct, expressive metrics on their predictions. These metrics should then be rigorously analyzed to gain insights into model/architecture biases and other notable distinctions beyond the mAP; a sketch of how such metrics can be computed from stored detections is shown below.
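As a starting point, the snippet below evaluates stored detections with the standard pycocotools COCOeval interface and then accesses the underlying precision tensor, whose per-class and per-IoU slices expose differences that the single mAP number hides. The file names are placeholders, and the exact metrics to compute in the project are of course open.

  # Sketch: evaluate stored detector outputs with the COCO API (pycocotools)
  # and inspect the raw precision tensor beyond the summary mAP.
  from pycocotools.coco import COCO
  from pycocotools.cocoeval import COCOeval

  coco_gt = COCO("instances_val2017.json")      # ground-truth annotations (placeholder path)
  coco_dt = coco_gt.loadRes("detections.json")  # detector outputs in COCO results format
  ev = COCOeval(coco_gt, coco_dt, iouType="bbox")
  ev.evaluate()
  ev.accumulate()
  ev.summarize()                                # prints the usual AP/AR summary table

  # After accumulate(), ev.eval["precision"] has shape
  # (IoU thresholds, recall thresholds, classes, area ranges, max detections);
  # slicing it per class or per IoU threshold reveals model-specific biases.
  print(ev.eval["precision"].shape)

Analyzing this tensor (or similarly fine-grained statistics) per class, object size, and IoU threshold is one way to go "beyond the mAP" as described in the task.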
Opportunity: Get hands-on experience with different Deep Learning frameworks. Go on a deep dive into current state-of-the-art deep learning Object Detector architectures, methods and tricks.
References:
[1] Zhao, Zhong-Qiu, et al. "Object Detection with Deep Learning: A Review." IEEE Transactions on Neural Networks and Learning Systems. 2019.
[2] Zhou, Xingyi, Dequan Wang, and Philipp Krähenbühl. "Objects as Points." arXiv preprint arXiv:1904.07850. 2019.
[3] Carion, Nicolas, et al. "End-to-End Object Detection with Transformers." arXiv preprint arXiv:2005.12872. 2020.
[4] Lin, Tsung-Yi, et al. "Microsoft COCO: Common Objects in Context." European Conference on Computer Vision. 2014.
Requirements
  • Experience in Computer Vision & Deep Learning
  • Good programming skills, ideally in Python
  • Solid understanding of statistics
Application If you are interested in this topic, we welcome applications via the email address above. Please set the email subject to "application for topic 'XYZ'", e.g. "Master's thesis application for topic 'XYZ'", and clearly explain in the message why you are interested in the topic. Also make sure to attach your most recent CV (if you have one) and your grade report.

End-to-End Pose-Based Gait Recognition with Graph Neural Networks

Topic End-to-End Pose-Based Gait Recognition with Graph Neural Networks
Type Master's Thesis
Supervisor Torben Teepe, M.Sc.
E-Mail: t.teepe@tum.de
Area Computer Vision
Description Motivation: Skeleton-based approaches have shown excellent results in understanding human action [1] and recognizing human gait. Current methods require a two-stage architecture: 1. Human Pose Estimation, 2. Graph Convolutional Neural Network. In this work, we want to explore the possibilities of a single-stage, graph-based approach. A single-stage architecture can be achieved using a keypoint detector that feeds the pose data directly into a Graph Convolutional Network.

Task: The task is to extend the CenterNet-based [2] architecture with a temporal Graph Convolutional Network for Gait Recognition. The architecture will then be evaluated and refined using standard Gait Recognition datasets. Once a decent performance is achieved, the architecture can also be used for Action Recognition, Tracking, or Re-Identification.
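As a rough illustration (not the proposed architecture), the sketch below shows a minimal spatial-temporal graph convolution block in PyTorch that operates on 2D pose sequences, in the spirit of ST-GCN-style models. The adjacency matrix, tensor shapes, and kernel sizes are illustrative assumptions.

  # Minimal spatial-temporal graph convolution over pose sequences (illustrative only).
  import torch
  import torch.nn as nn

  class STGraphConvBlock(nn.Module):
      def __init__(self, in_ch, out_ch, A):
          super().__init__()
          self.register_buffer("A", A)                    # (joints, joints) adjacency matrix
          self.spatial = nn.Conv2d(in_ch, out_ch, kernel_size=1)
          self.temporal = nn.Conv2d(out_ch, out_ch, kernel_size=(9, 1), padding=(4, 0))
          self.relu = nn.ReLU()

      def forward(self, x):                               # x: (batch, channels, time, joints)
          x = self.spatial(x)
          x = torch.einsum("nctv,vw->nctw", x, self.A)    # aggregate features over joint neighbours
          return self.relu(self.temporal(x))              # convolve along the time axis

  num_joints = 17                                         # e.g. COCO keypoints
  A = torch.eye(num_joints)                               # placeholder adjacency (identity = no skeleton edges)
  block = STGraphConvBlock(in_ch=2, out_ch=64, A=A)
  poses = torch.randn(8, 2, 30, num_joints)               # 8 sequences, (x, y) coordinates, 30 frames
  print(block(poses).shape)                               # -> torch.Size([8, 64, 30, 17])

In a single-stage design, the pose tensor fed into such a block would come directly from the keypoint detector head rather than from a separate, pre-computed pose estimation stage.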

References:
[1] Liu, Ziyu, et al. "Disentangling and Unifying Graph Convolutions for Skeleton-Based Action Recognition." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
[2] Duan, Kaiwen, et al. "CenterNet: Keypoint triplets for object detection." Proceedings of the IEEE International Conference on Computer Vision. 2019.
Requirements
  • Experience in Computer Vision & Deep Learning
  • Good programming skills, ideally in Python
  • Experience in deep learning frameworks, preferably PyTorch
Application If you are interested in this topic, we welcome applications via the email address above. Please set the email subject to "application for topic 'XYZ'", e.g. "Master's thesis application for topic 'XYZ'", and clearly explain in the message why you are interested in the topic. Also make sure to attach your most recent CV (if you have one) and your grade report.