Bachelor and Master theses

The following list contains topics for both Master's and Bachelor's theses. Please get in touch with the listed contact person to find out more (last update: October 2021).

Master Theses

Topics already taken

Topic: Using Affordance-based Attention to Optimize Depth Data Processing in Mobile Robot Teleoperation Scenarios (already taken)

Supervisors: Constantin Uhde, Nicolas Berberich

Abstract: Bandwidth is a major factor when controlling robotic systems remotely. Using high-density pointcloud data to visualize the system's surroundings makes optimization necessary if one wants to maintain responsive control. The aim of this project is to utilize affordance information about the environment [1] in order to apply foveation [2] as a form of lossy compression to the pointcloud data stream from the robot to the operator. The methods will be implemented on an Nvidia Jetson Nano mounted on a PR2 robot platform. Two implementations for region-of-interest selection will be compared in an experimental setup.
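As a rough illustration of the principle, here is a minimal Python/NumPy sketch of foveated downsampling: full point density is kept inside a region of interest (e.g. around a detected affordance), while the periphery is randomly thinned. All names and parameter values are illustrative; the actual thesis work would implement this idea in C++ on the Jetson Nano.

```python
import numpy as np

def foveate_pointcloud(points, roi_center, roi_radius=0.3, keep_outside=0.1, rng=None):
    """Lossy 'foveation' of a point cloud: keep full density inside the
    region of interest, randomly subsample everything else.

    points       : (N, 3) array of XYZ coordinates
    roi_center   : (3,) array, e.g. the centroid of a detected affordance
    roi_radius   : radius (in meters) of the full-density fovea
    keep_outside : fraction of points kept in the periphery
    """
    if rng is None:
        rng = np.random.default_rng()
    dist = np.linalg.norm(points - roi_center, axis=1)
    inside = dist <= roi_radius
    # Keep all foveal points; thin out the periphery.
    keep_peripheral = (~inside) & (rng.random(len(points)) < keep_outside)
    return points[inside | keep_peripheral]

# Example: a synthetic 100k-point cloud compressed around a region of interest.
cloud = np.random.uniform(-2.0, 2.0, size=(100_000, 3))
roi = np.array([0.5, 0.0, 1.0])
compressed = foveate_pointcloud(cloud, roi)
print(f"{len(cloud)} -> {len(compressed)} points")
```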

Requirements:

- Solid knowledge of C++

- Some experience with pointclouds, robotics, or virtual reality

References:

[1] Lueddecke, T., Kulvicius, T., & Woergoetter, F. (2019). Context-based affordance segmentation from 2D images for robot actions. Robotics and Autonomous Systems, 119, 92-107.

[2] Ude, A., Atkeson, C. G., & Cheng, G. (2003, October). Combining peripheral and foveal humanoid vision to detect, pursue, recognize and act. In Proceedings 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003) (Cat. No. 03CH37453) (Vol. 3, pp. 2173-2178). IEEE.


Topic: Neuromorphic object recognition and scene representation for robotic applications (already taken)

Supervisor: Elvin Hajizada

Abstract: In collaboration with the Neuromorphic Computing Lab of Intel Labs, we offer Bachelor and Master thesis projects, as well as semester projects on the topic of neuromorphic computing in robotics.

The object recognition task in robotic applications differs significantly from that in image classification scenarios. The difference in the nature of the available visual data makes deep learning methods a poor fit for robotic object recognition scenarios [1]. The aim of this project is to build spiking neural network (SNN) algorithms and systems for continual learning of objects and for building a scene representation [2,3]. In contrast to backpropagation-based methods, the developed architecture should learn new objects incrementally, without requiring the network to be retrained each time a new object is presented [4]. The developed SNN will be implemented on Intel's neuromorphic research chip Loihi to leverage its event-based processing, on-chip learning, fine-grained parallelism, and energy efficiency. Subsequently, the network will be tested in experiments with the humanoid robot iCub.
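To illustrate the incremental-learning requirement independently of the SNN machinery, here is a minimal Python sketch of a prototype-based (nearest-class-mean) classifier that adds new objects without retraining previously learned ones. All names are illustrative; in the thesis this principle would be realized with spiking neurons and on-chip plasticity on Loihi.

```python
import numpy as np

class PrototypeMemory:
    """Nearest-class-mean classifier that learns objects incrementally:
    adding a new object only creates/updates that object's prototype,
    so previously learned objects never need retraining."""

    def __init__(self):
        self.prototypes = {}   # label -> running mean feature vector
        self.counts = {}       # label -> number of samples seen

    def learn(self, label, feature):
        """Fold one feature vector (e.g. an embedding of an object view)
        into the running mean for this label."""
        feature = np.asarray(feature, dtype=float)
        if label not in self.prototypes:
            self.prototypes[label] = feature.copy()
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            self.prototypes[label] += (feature - self.prototypes[label]) / self.counts[label]

    def predict(self, feature):
        """Return the label of the closest prototype."""
        return min(self.prototypes,
                   key=lambda l: np.linalg.norm(self.prototypes[l] - feature))

# Example: learn two objects, then add a third without touching the first two.
memory = PrototypeMemory()
rng = np.random.default_rng(0)
for label, center in [("cup", 0.0), ("ball", 5.0)]:
    for _ in range(20):
        memory.learn(label, rng.normal(center, 0.5, size=8))
memory.learn("box", rng.normal(10.0, 0.5, size=8))  # incremental addition
print(memory.predict(rng.normal(5.0, 0.5, size=8)))  # -> "ball"
```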

Requirements:

  • Solid knowledge of Python
  • Some experience with robotics and/or computer vision (C++/ROS/YARP/OpenCV)
  • Knowledge of the basics of machine learning and neural networks is advantageous.

References:

[1] Pasquale, G., Ciliberto, C., Odone, F., Rosasco, L., & Natale, L. (2017). Are we done with object recognition? The iCub robot's perspective. Robotics and Autonomous Systems, 112. doi:10.1016/j.robot.2018.11.001.

[2] Mozafari, M., Kheradpisheh, S. R., Masquelier, T., Nowzari-Dalini, A., & Ganjtabesh, M. (2018). First-spike-based visual categorization using reward-modulated STDP. IEEE Transactions on Neural Networks and Learning Systems, 29(12), 6178-6190. doi:10.1109/TNNLS.2018.2826721.

[3] Gerstner, W., Lehmann, M., Liakoni, V., Corneil, D., & Brea, J. (2018). Eligibility traces and plasticity on behavioral time scales: Experimental support of neoHebbian three-factor learning rules. Frontiers in Neural Circuits, 12. doi:10.3389/fncir.2018.00053.

[4] Parisi, G., Kemker, R., Part, J., Kanan, C., & Wermter, S. (2019). Continual lifelong learning with neural networks: A review. Neural Networks, 113, 54-71. doi:10.1016/j.neunet.2019.01.012.


Topic: A Comparison of Signal Characteristics and Classification Methods for Image Recognition and EEG Event-Related-Potentials to Contribute towards a More Generalized Representation of EEG Signals (already taken)

Supervisor: Stefan Ehrlich

Abstract: In recent years, researchers have successfully applied machine learning (ML) and neural networks (NN) to some key EEG paradigms such as the event-related P300 [1] and motor imagery [2,3]. However, most of these applications suffer from the EEG's non-stationarity and poor signal-to-noise ratio and are therefore adapted to specific tasks and subjects. To address this problem, some researchers have developed more generalized networks such as the convolutional EEGNet architecture, which yields results comparable to reference algorithms across subjects and tasks [3]. Nevertheless, we still observe a lack of generic, pre-trained algorithms that are transferable between scenarios. In image recognition, by contrast, transfer learning across datasets is common practice [4]. This raises the question: why do NNs and other ML algorithms produce excellent results for some data types but unsatisfactory results for others, even though these data types seem to share certain key characteristics (e.g. spatiotemporal features)? To answer this question, we aim to compare the signal characteristics and classification algorithms of image and EEG data as examples of best- and worst-case data scenarios.

The main objective of this project is to understand why machine learning algorithms (in particular convolutional NNs) produce excellent results for image data but unsatisfactory outcomes for EEG data (in particular event-related EEG). We want to identify the data's crucial signal characteristics and analyze how they are, or could be, encoded in classification algorithms. Furthermore, we want to know whether knowledge from image data and its classification can be transferred to EEG data and its classification algorithms. In this way, we aim to provide general guidelines for the design of ML algorithms on the basis of data characteristics and to improve the architectures of algorithms deployed on EEG data.
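For orientation, here is a minimal PyTorch sketch of the kind of compact convolutional network discussed above, loosely following the EEGNet idea [1] of a temporal convolution (learned frequency filters) followed by a depthwise convolution over the electrode dimension (learned spatial filters). The hyperparameters are illustrative and do not reproduce the published architecture.

```python
import torch
import torch.nn as nn

class TinyEEGNet(nn.Module):
    """Compact CNN for event-related EEG, loosely inspired by EEGNet [1]:
    a temporal convolution learns frequency filters, then a depthwise
    convolution over the electrode dimension learns spatial filters.
    Input shape: (batch, 1, n_channels, n_samples)."""

    def __init__(self, n_channels=64, n_samples=128, n_classes=2, f1=8, depth=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, f1, kernel_size=(1, 33), padding=(0, 16), bias=False),
            nn.BatchNorm2d(f1),
            # Depthwise conv spanning all electrodes = learned spatial filters.
            nn.Conv2d(f1, f1 * depth, kernel_size=(n_channels, 1),
                      groups=f1, bias=False),
            nn.BatchNorm2d(f1 * depth),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(0.5),
        )
        self.classifier = nn.Linear(f1 * depth * (n_samples // 4), n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# Example: one batch of 16 synthetic 64-channel, 128-sample epochs.
model = TinyEEGNet()
logits = model(torch.randn(16, 1, 64, 128))
print(logits.shape)  # torch.Size([16, 2])
```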

References:

[1] Lawhern, V. J., Solon, A. J., Waytowich, N. R., Gordon, S. M., Hung, C. P., & Lance, B. J. (2018). EEGNet: a compact convolutional neural network for EEG-based brain–computer interfaces. Journal of neural engineering, 15(5), 056013.

[2] Tayeb, Z., Fedjaev, J., Ghaboosi, N., Richter, C., Everding, L., Qu, X., ... & Conradt, J. (2019). Validating deep neural networks for online decoding of motor imagery movements from EEG signals. Sensors, 19(1), 210.

[3] Schirrmeister RT, Springenberg JT, Fiederer LDJ, et al. Deep learning with convolutional neural networks for EEG decoding and visualization. Hum Brain Mapp. 2017;38(11):5391‐5420. doi:10.1002/hbm.23730

[4] Shin, H. C., Roth, H. R., Gao, M., Lu, L., Xu, Z., Nogues, I., ... & Summers, R. M. (2016). Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE transactions on medical imaging, 35(5), 1285-1298.

------------

Topic: Multi-limb low-impact locomotion for humanoid robots (already taken)

Supervisor: Julio Rogelio Guadarrama Olvera

------------

Topic: Whole body control for competitive mobile service robots (already taken)

Supervisor: Julio Rogelio Guadarrama Olvera


Bachelor Theses

Topic: Encoding different temperature levels using the ICS Robot Skin (already taken)

Advisor: Zied Tayeb / Dr. Emmanuel Carlos Dean Leon

Abstract: Skin is a very important sensor for human beings. Up to 5 million discrete receptors of different modalities (e.g. temperature, force, and vibration) are distributed close to the surface of our bodies. The skin helps us to learn about our environment and how we can safely interact with it. The aim of this project is to distinguish and encode different measured temperature ranges using the ICS Robot Skin. These encoded levels can thereafter be translated into stimulation of an amputee's arm and/or used in the context of real-time human-robot interaction.
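As a simple illustration of the encoding step, here is a minimal Python sketch that quantizes raw temperature readings into discrete levels. The bin edges and labels are assumptions for illustration only; the real ranges would be calibrated against the skin cells' temperature sensors.

```python
import numpy as np

# Discrete temperature levels to be encoded, e.g. for stimulation patterns.
# These bin edges are illustrative, not calibrated values.
EDGES = [15.0, 25.0, 35.0]          # degrees Celsius
LABELS = ["cold", "neutral", "warm", "hot"]

def encode_temperature(celsius):
    """Map a raw temperature reading to one of the discrete levels."""
    return LABELS[int(np.digitize(celsius, EDGES))]

# Example with hypothetical readings from a skin cell:
for reading in [12.3, 22.0, 31.5, 41.0]:
    print(f"{reading:5.1f} degC -> {encode_temperature(reading)}")
```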

References:

  • Cheng, G., Dean-Leon, E., Bergner, F., Guadarrama Olvera, J. R., Leboutet, Q., & Mittendorfer, P. (2019). A comprehensive realization of robot skin: Sensors, sensing, control, and applications. Proceedings of the IEEE, 107(10).
  • Tayeb, Z., Waniek, N., Fedjaev, J., Ghaboosi, N., Rychly, L., Widderich, C., Richter, C., Braun, J., Saveriano, M., Cheng, G., & Conradt, J. (2018). Gumpy: A Python toolbox suitable for hybrid brain-computer interfaces. Journal of Neural Engineering, 15(6).