Student Projects

Topics for student projects (Bachelor's and Master's theses, research internship, IDP) are continuously available at the Lehrstuhl MMK.

If you have found a suitable topic for your student project, please contact the responsible research assistant. If no suitable topic is advertised, you can also contact a research assistant directly to arrange one.

Ingenieurpraxis: The goal of the Ingenieurpraxis (engineering internship) is to gain insight into industrial processes. We therefore do not offer Ingenieurpraxis positions at the chair, but we are happy to supervise you if you find a position at a company.

We likewise do not offer internship positions at the chair! Due to their volume, incoming inquiries will not be answered.

Dates of the current final presentations of student projects at the Lehrstuhl MMK.

Topics for Student Projects

Subject area: Virtual Reality

Development of a virtual reality environment for teaching medical skills
(cooperation project with the Klinikum rechts der Isar)

Topic: Development of a virtual reality environment for teaching medical skills
(cooperation project with the Klinikum rechts der Isar)
Type: Research Internship (FP), Interdisciplinary Project (IDP), Bachelor's Thesis (BA), Master's Thesis (MA)
Supervisors: Maximilian Rettinger, M.Sc.
Tel.: +49 (0)89 289-28547
E-Mail: maximilian.rettinger@tum.de

PD Dr. Christoph Schmaderer
E-Mail: christoph.schmaderer@mri.tum.de
Subject area: Virtual Reality
Description: Motivation:
Medical learning and teaching is still rather old-fashioned and takes place either in lectures and seminars or, even better, at the bedside. Bedside teaching has the huge advantage that medical students, residents, and nursing staff learn from real cases (case-based learning), which are much easier to remember and more interactive than lectures. The downside is that bedside teaching requires a considerable number of staff, who are not always available when needed, and is highly dependent on the motivation of the teacher. Virtual reality teaching is a way to create an immersive environment in which teaching can take place repeatedly at no additional cost once the VR teaching application has been developed. Furthermore, the teaching content is standardized and can be continuously improved by the VR developers together with the teaching staff. The immersive environment might make learning more intense and memorable compared to textbooks and lectures.
[Image: https://upload.wikimedia.org/wikipedia/commons/b/b1/Hemo_11-16-17.jpg]

Dialysis is needed by patients with renal failure; in these patients, the machine takes over the function of the native kidneys. The dialysis machine, which cleans the blood, has to be set up before each use (link). As there is high fluctuation among residents and nursing staff, it would be a great relief for caregivers if teaching the dialysis machine setup could be supported by a virtual reality application. Furthermore, the knowledge could be deepened by repeated training. Development for the standalone device Oculus Quest should be the main goal, as it is the most realistic device to be used in a hospital environment.

Task:
  • Research in the area of education in VR (deepening the learning experience and making it more enjoyable)
  • Development of a 3D model of a dialysis machine (e.g. with Blender or 3ds Max)
  • Implementation of a 3D application with Unity for the Oculus Quest
  • Evaluation of the system at the Klinikum rechts der Isar (with medical students and nursing staff)
References:
  1. Multimodal Learning in Health Sciences and Medicine: Merging Technologies to Enhance Student Learning and Communication. Moro C, Smith J, Stromberga Z. Adv Exp Med Biol. 2019;1205:71-78.
  2. Emerging Applications of Virtual Reality in Cardiovascular Medicine. Silva JNA, Southworth M, Raptis C, Silva J. JACC Basic Transl Sci. 2018 Jun 25;3(3):420-430.
  3. Effect of virtual reality training to decrease rates of needle stick/sharp injuries in new-coming medical and nursing interns in Taiwan. Wu SH, Huang CC, Huang SS, Yang YY, Liu CW, Shulruf B, Chen CH. J Educ Eval Health Prof. 2020 Jan;17:1.

Prerequisites:
  • Strong object-oriented programming skills
  • Experience with 3D modelling
  • High motivation and interest in biomedical applications as well as in translational medical research
Application: If you are interested in the rapidly expanding field of biomedical VR applications and want to participate in translational research between engineering and medicine, please send your CV together with your grade report, emphasizing your previous experience in this area, and your desired starting date to maximilian.rettinger@tum.de.

Subject area: Computer Vision

Camera-based Gait Recognition and Re-Identification with Pose Tracking

Topic: Camera-based Gait Recognition and Re-Identification with Pose Tracking
Type: Master's Thesis
Supervisor: Torben Teepe, M.Sc.
E-Mail: t.teepe@tum.de
Subject area: Computer Vision
Description: Motivation: Recognizing people by their gait has become increasingly popular in surveillance scenarios [1] since it works remotely, does not require the cooperation of the user, and can be applied to low-resolution images. Vision-based gait recognition can be categorized into model-based and model-free approaches. Traditionally, most camera-based gait recognition applications used model-free Gait Energy Images (GEI) to represent a gait cycle. With recent advances in pose estimation and pose tracking, a new approach that exploits this temporal, model-based information for gait recognition becomes possible.
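For context, the model-free GEI mentioned above is simply the pixel-wise average of aligned binary silhouettes over one gait cycle. A minimal sketch in Python, assuming silhouette extraction and alignment have already been done (the function name is illustrative only):

import numpy as np

def gait_energy_image(silhouettes):
    # silhouettes: array of shape (T, H, W) holding 0/1 masks for one gait cycle.
    # The GEI is the temporal mean, giving a single (H, W) image with values in [0, 1].
    return np.asarray(silhouettes, dtype=np.float32).mean(axis=0)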


[Figure: The siamese graph convolution network for pose matching from [3]]


Task: The first task is to implement our approach based on the LightTrack architecture [3] and evaluate it on the CASIA Gait Database. Secondly, we want to bring our approach into the wild: using a top-down re-identification architecture, we want to evaluate the performance in a surveillance scenario.
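For orientation, the model-based pipeline can be thought of as a siamese comparison of pose-sequence embeddings: the same network embeds two keypoint sequences, and a distance between the embeddings decides whether they belong to the same person. The sketch below is illustrative only and is not the LightTrack implementation; it assumes 2D keypoint sequences have already been extracted by a pose tracker, and all names and layer choices (GaitEmbedder, a GRU over per-frame pose encodings) are assumptions:

import torch
import torch.nn as nn

class GaitEmbedder(nn.Module):
    # Embeds a sequence of 2D pose keypoints into a fixed-size gait descriptor.
    def __init__(self, num_joints=17, hidden=128, embed_dim=64):
        super().__init__()
        # Each frame: (num_joints, 2) keypoints, flattened before encoding.
        self.frame_encoder = nn.Linear(num_joints * 2, hidden)
        # Temporal model over the gait cycle.
        self.temporal = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, embed_dim)

    def forward(self, poses):                      # poses: (B, T, J, 2)
        b, t, j, c = poses.shape
        x = torch.relu(self.frame_encoder(poses.view(b, t, j * c)))
        _, h = self.temporal(x)                    # h: (1, B, hidden)
        return nn.functional.normalize(self.head(h[-1]), dim=-1)

# Siamese-style comparison: one shared network, two sequences, cosine similarity.
model = GaitEmbedder()
seq_a = torch.randn(1, 60, 17, 2)  # 60 frames, 17 COCO-style joints
seq_b = torch.randn(1, 60, 17, 2)
emb_a, emb_b = model(seq_a), model(seq_b)
similarity = torch.cosine_similarity(emb_a, emb_b)  # high value -> likely the same person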

References:
[1] Wan, Changsheng, Li Wang, and Vir V. Phoha, eds. "A survey on gait recognition." ACM Computing Surveys (CSUR) 51.5 (2018): 1-35.
[2] Xiao, Bin, Haiping Wu, and Yichen Wei. "Simple baselines for human pose estimation and tracking." Proceedings of the European Conference on Computer Vision (ECCV). 2018.
[3] Ning, Guanghan, and Heng Huang. "LightTrack: A generic framework for online top-down human pose tracking." arXiv preprint arXiv:1905.02822 (2019).
Prerequisites:
  • Experience in computer vision and deep learning
  • Good programming skills, ideally in Python
  • Experience with deep learning frameworks, preferably TensorFlow
Application: If you are interested in this topic, we welcome applications via the email address above. Please set the email subject to "… application for topic 'XYZ'", e.g. "Master's thesis application for topic 'XYZ'", and clearly state in the message why you are interested in the topic. Also make sure to attach your most recent CV (if you have one) and your grade report.

Distracted Driver Dataset

Topic: Distracted Driver Dataset
Type: Master's Thesis
Supervisor: Okan Köpüklü, M.Sc.
Tel.: +49 (0)89 289-28554
E-Mail: okan.kopuklu@tum.de
Subject area: Computer Vision
Description: Motivation: According to the latest National Highway Traffic Safety Administration (NHTSA) report, one in ten fatal crashes and two in ten injury crashes in the United States in 2014 were reported as distracted-driver crashes. Detecting the driver's distraction state is therefore of utmost importance to reduce driver-related accidents. For this task, a properly annotated dataset of driver actions is necessary. With such a dataset, state-of-the-art deep learning architectures can be used to recognize the distraction state of drivers.

Task: The main task is to collect a "Distracted Driver Dataset" and use a lightweight Convolutional Neural Network (CNN) architecture to detect the driver's distracting actions. The dataset should contain the following annotations (an illustrative record format is sketched after this list):
1. Predefined distracting actions that the drivers perform
2. Drivers' hand states (whether they are on the wheel or not)
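Purely as an illustration of what a single per-frame label could look like (all field names and the example action class are assumptions; the actual schema is to be defined during the thesis):

# Hypothetical per-frame annotation record; the real schema is part of the thesis work.
annotation = {
    "frame": "driver03/clip12/000187.jpg",   # path to the annotated frame
    "action": "texting_right_hand",          # one of the predefined distracting actions
    "left_hand_on_wheel": True,              # hand-state annotations
    "right_hand_on_wheel": False,
}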

During the thesis, the following steps will be followed in general:
1. State-of-the-art research
2. Dataset collection and preparation (i.e. labeling and formatting)
3. Lightweight CNN architecture design (a minimal sketch follows below)
4. Evaluation of the CNN architecture on the prepared dataset
5. Demonstration of the working system
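As a rough illustration of step 3, a lightweight CNN can be built from depthwise-separable convolutions in the spirit of MobileNet. The sketch below uses PyTorch; all layer sizes and the class count are assumptions, not a prescribed design:

import torch
import torch.nn as nn

def depthwise_separable(in_ch, out_ch, stride=1):
    # Depthwise + pointwise convolution: far fewer parameters than a full convolution.
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1, groups=in_ch, bias=False),
        nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, out_ch, 1, bias=False),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class DistractionNet(nn.Module):
    def __init__(self, num_classes=10):           # e.g. 10 predefined distracting actions
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            depthwise_separable(32, 64, stride=2),
            depthwise_separable(64, 128, stride=2),
            depthwise_separable(128, 256, stride=2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(256, num_classes)

    def forward(self, x):                          # x: (B, 3, H, W) driver-camera frames
        return self.classifier(self.features(x).flatten(1))

model = DistractionNet()
logits = model(torch.randn(2, 3, 224, 224))        # -> (2, 10) class scores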

References:
[1] Baheti, B., Gajre, S., & Talbar, S. (2018). Detection of Distracted Driver using Convolutional Neural Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (pp. 1032-1038).
[2] Hssayeni, M. D., Saxena, S., Ptucha, R., & Savakis, A. (2017). Distracted driver detection: Deep learning vs handcrafted features. Electronic Imaging, 2017(10), 20-26.
[3] G. Borghi, E. Frigieri, R. Vezzani and R. Cucchiara, "Hands on the wheel: A Dataset for Driver Hand Detection and Tracking," 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), Xi'an, 2018, pp. 564-570.
Prerequisites: 1. Excellent coding skills, preferably in Python
2. Experience in deep learning frameworks, preferably in Torch/PyTorch
3. Motivation to work on deep learning.
Application: If you are interested in this topic, we welcome applications via the email address above. Please set the email subject to "… application for topic 'XYZ'", e.g. "Master's thesis application for topic 'XYZ'", and clearly state in the message why you are interested in the topic. Also make sure to attach your most recent CV (if you have one) and your grade report.

Real-time Detection and Classification of Dynamic Hand Gestures

Topic: Real-time Detection and Classification of Dynamic Hand Gestures
Type: Research Internship (Forschungspraxis), Master's Thesis
Supervisor: Okan Köpüklü, M.Sc.
Tel.: +49 (0)89 289-28554
E-Mail: okan.kopuklu@tum.de
Subject area: Computer Vision
Description: Motivation: Detection and classification of dynamic hand gestures is a challenging task since there is no indication of when a gesture starts in a video stream. However, most deep learning architectures that work offline can also function online with proper adjustments. The topic of this thesis is to convert an offline-working architecture into an online-working one.
Task: The main task is to bring an already working deep architecture to online functionality; details of the architecture can be found in [1]. As further reading, [2] provides a detailed online detection architecture.
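To make the offline-to-online conversion concrete, one common strategy is to keep a sliding buffer of the most recent frames and re-run the clip classifier at a fixed stride, reporting a gesture only when the prediction is confident. A minimal sketch (buffer length, stride, threshold, and clip_model are all illustrative assumptions, not the architecture from [1]):

from collections import deque
import torch

def run_online(frame_stream, clip_model, clip_len=16, stride=4, threshold=0.8):
    # Classify gestures on a live stream by re-running the model on a sliding buffer.
    buffer = deque(maxlen=clip_len)                     # most recent frames only
    for i, frame in enumerate(frame_stream):            # frame: (3, H, W) tensor
        buffer.append(frame)
        # Wait until the buffer is full, then classify every `stride` frames.
        if len(buffer) == clip_len and i % stride == 0:
            clip = torch.stack(list(buffer)).unsqueeze(0)   # (1, T, 3, H, W)
            probs = torch.softmax(clip_model(clip), dim=-1)
            conf, label = probs.max(dim=-1)
            # Only report a detection when the model is sufficiently confident,
            # which suppresses the "no gesture" frames between gestures.
            if conf.item() > threshold:
                yield i, label.item(), conf.item()

In the thesis, clip_model would be replaced by the trained architecture from [1], and the stride and activation threshold would be tuned to balance latency against false detections.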


References:
[1] O. Köpüklü, N. Köse, and G. Rigoll. Motion fused frames: Data level fusion strategy for hand gesture recognition. arXiv preprint, arXiv:1804.07187, 2018.
[2] P. Molchanov, X. Yang, S. Gupta, K. Kim, S. Tyree, and J. Kautz. Online detection and classification of dynamic hand gestures with recurrent 3d convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4207–4215, 2016.
Prerequisites: 1. Excellent coding skills in Python
2. Experience in deep learning frameworks, preferably in Torch/PyTorch
3. Motivation to work on deep learning.
Application: If you are interested in this topic, we welcome applications via the email address above. Please set the email subject to "… application for topic 'XYZ'", e.g. "Master's thesis application for topic 'XYZ'", and clearly state in the message why you are interested in the topic. Also make sure to attach your most recent CV (if you have one) and your grade report.