Bachelor and Master theses

The following list contains topics for both Master's and Bachelor's theses. Please get in touch with the contact person listed to find out more (last update: May 2020).

Master Theses

Topic: ErrP-based assessment of human-agent collaboration in shared workspace and shared responsibility scenarios (already taken)

Advisor: Stefan Ehrlich

Previous works have proposed that error-related potentials (ErrPs) can be used to improve human-machine interaction (HMI) [1]. Ehrlich and Cheng, 2018 [2] have shown that ErrPs can be used as feedback to adapt the behavior of agents to human needs in a co-adaptive scenario. Furthermore, Wirth et al., 2019 [3] have shown that event-related potentials (ERPs) elicited by two similar error-evoking stimuli have different characteristics.

This thesis aims to investigate the characteristics of ErrPs elicited by mistakes made by the agent versus mistakes made by the computer interface, under two different scenarios. The goal is to better understand these ErrPs and to evaluate whether the source of the error can be identified from the signal, which could inform the future development of brain-computer interfaces for HMI.
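
As an illustration of the kind of single-trial analysis involved, below is a minimal sketch of ErrP classification on synthetic data, assuming a standard pipeline of windowed-mean amplitude features and shrinkage LDA; the channel count, window length, and simulated ErrP waveform are placeholders rather than values taken from the studies cited below.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs, n_ch, n_trials = 256, 8, 200                      # sampling rate, channels, trials
t = np.arange(int(0.8 * fs)) / fs                     # 0-800 ms after feedback onset

# Synthetic epochs: "error" trials carry a deflection around 300 ms on frontal
# channels, "correct" trials contain only noise. Real data would be epoched EEG.
labels = rng.integers(0, 2, n_trials)                 # 1 = error, 0 = correct
errp = np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))    # Gaussian bump at ~300 ms
epochs = rng.normal(0.0, 1.0, (n_trials, n_ch, t.size))
epochs[labels == 1, :3, :] += 2.0 * errp              # add the "ErrP" to 3 channels

# Typical single-trial ERP features: mean amplitude in consecutive 50 ms windows
win = int(0.05 * fs)
n_win = t.size // win
feats = epochs[:, :, :n_win * win].reshape(n_trials, n_ch, n_win, win).mean(axis=-1)
feats = feats.reshape(n_trials, -1)

# Shrinkage LDA is a common choice for single-trial ERP classification
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, feats, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```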

References:

  • Chavarriaga, R., Sobolewski, A., & Millán, J. D. R. (2014). Errare machinale est: the use of error-related potentials in brain-machine interfaces. Frontiers in Neuroscience, 8, 208.
  • Ehrlich, S. K., & Cheng, G. (2018). Human-agent co-adaptation using error-related potentials. Journal of Neural Engineering, 15(6), 066014.
  • Wirth, C., Dockree, P. M., Harty, S., Lacey, E., & Arvaneh, M. (2019). Towards error categorisation in BCI: single-trial EEG classification between different errors. Journal of Neural Engineering, 17(1), 016008.

------------

Topic: Evolution of functional connectivity across training in motor-imagery BCI tasks (already taken)

Advisor: Stefan Ehrlich

Abstract:

Motor imagery (MI) consists of imagining a movement without actually activating the muscles. It is widely used in EEG-based brain-computer interface (BCI) paradigms. MI tasks are, however, not trivial and require adequate subject training. MI is also known to enhance motor-execution performance when practiced in parallel with physical training [3], suggesting that MI and motor execution rely on overlapping learning processes. It has been shown that functional connectivity (FC) is modulated by MI-BCI tasks [2] and, furthermore, that it reflects an underlying learning process as the ability to control the BCI improves and can predict the learning rate in the subsequent trial [1]. However, it remains unknown how the network evolves over training, or which training paradigms are more efficient at modifying FC. In particular, identifying components of the functional network that are consistent across subjects once a threshold of control ability is reached would help to understand the neural mechanisms underlying motor imagery. Such a finding would support the generalizability of MI paradigms for rehabilitation, athletic training, and limb replacement.

The goal of this project is to study the evolution of the functional connectome throughout participant training in order to identify learning processes linked to MI, connectivity patterns linked to proficiency in the BCI task, and inter-subject and inter-paradigm differences.
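
To make the analysis concrete, here is a minimal sketch of one common way to estimate a functional connectivity matrix, magnitude-squared coherence between all channel pairs averaged over the mu/alpha band, and to compare it between an early and a late training session. The band limits, channel count, and synthetic data are illustrative assumptions, not choices prescribed by the references below.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(1)
fs, n_ch, n_sec = 250, 16, 60                 # sampling rate, channels, session length (s)

def connectivity_matrix(eeg, fs, band=(8.0, 13.0)):
    """Channel-by-channel FC: magnitude-squared coherence averaged in the mu/alpha band."""
    n_ch = eeg.shape[0]
    fc = np.zeros((n_ch, n_ch))
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            f, cxy = coherence(eeg[i], eeg[j], fs=fs, nperseg=2 * fs)
            mask = (f >= band[0]) & (f <= band[1])
            fc[i, j] = fc[j, i] = cxy[mask].mean()
    return fc

# Two hypothetical training sessions (synthetic noise here); real data would be
# loaded per subject and per session from the MI-BCI recordings.
sessions = [rng.normal(size=(n_ch, fs * n_sec)) for _ in range(2)]
fc_early, fc_late = [connectivity_matrix(s, fs) for s in sessions]

# Track how the network changes over training, e.g. node strength per channel
delta = fc_late - fc_early
print("mean connectivity early/late:", fc_early.mean(), fc_late.mean())
print("largest per-channel change in node strength:", np.abs(delta.sum(axis=1)).max())
```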

References:

[1] Corsi, Marie-Constance; Chavez, Mario; Schwartz, Denis; George, Nathalie; Hugueville, Laurent; Kahn, Ari E. et al. (2020). Functional disconnection of associative cortical areas predicts performance during BCI training. NeuroImage, vol. 209, p. 116500. DOI: 10.1016/j.neuroimage.2019.116500.

[2] Mottaz, Anaïs; Corbet, Tiffany; Doganci, Naz; Magnin, Cécile; Nicolo, Pierre; Schnider, Armin; Guggisberg, Adrian G. (2018). Modulating functional connectivity after stroke with neurofeedback: Effect on motor deficits in a controlled cross-over study. NeuroImage: Clinical, vol. 20, p. 336–346. DOI: 10.1016/j.nicl.2018.07.029.

[3] Schuster, Corina; Hilfiker, Roger; Amft, Oliver; Scheidhauer, Anne; Andrews, Brian; Butler, Jenny et al. (2011). Best practice for motor imagery: a systematic literature review on motor imagery training elements in five different disciplines. BMC Medicine, vol. 9, p. 75. DOI: 10.1186/1741-7015-9-75.

-----------

Topic: Multi-limb low-impact locomotion for humanoid robots (already taken)

Supervisor: Julio Rogelio Guadarrama Olvera

------------

Topic: Whole body control for competitive mobile service robots (already taken)

Supervisor: Julio Rogelio Guadarrama Olvera

------------

Topic: Error-related potentials in human-robot collaborative assembly tasks

Supervisor: Stefan Ehrlich and Dr. Emmanuel Dean-Leon

Abstract: Several works have trained robots using reinforcement learning with error-related potentials (ErrPs) as reward signals [1]; in these settings the human acts as a teacher and the robot is forced to adopt the human's preferred strategy. In some domains, however, there is no predefined dominant role and no explicit optimal strategy. Humans and robots are instead expected to adapt to each other by reflecting upon the behavior of their counterpart [2][3], so that the teamwork can take advantage of both human knowledge and robot capability, and so that the human's willingness to work with robots is enhanced.

The project aims to realize co-adaptation between robots and human workers in collaborative tasks. [2] has already demonstrated such co-adaptation using ErrPs as a feedback signal in a simplified human-robot interaction scenario. This work will be extended to a practical assembly task, in which human workers and robots work together to complete a task but may initially have different working styles and different ideas about the best strategy. During the collaboration they will learn more about each other, adjust their actions to improve team performance, and finally reach a consensus.
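
To illustrate the basic mechanism behind [1] and [2], the following is a minimal sketch, on simulated data, of a robot that updates a tabular value function from a binary "ErrP detected / not detected" reward. The number of assembly states and actions, the human's preferred action per state, and the detector accuracy are hypothetical placeholders; the update shown is a simple bandit-style rule rather than the full algorithms used in the cited works.

```python
import numpy as np

rng = np.random.default_rng(2)
n_states, n_actions = 5, 3                   # e.g. assembly steps and possible robot actions
q = np.zeros((n_states, n_actions))
alpha, epsilon = 0.2, 0.1                    # learning rate, exploration rate

# Hypothetical human preference: one "correct" action per assembly step; an ErrP
# is expected whenever the robot deviates from it.
preferred = rng.integers(0, n_actions, n_states)
detector_accuracy = 0.8                      # single-trial ErrP detection is imperfect

def errp_reward(state, action):
    """Simulated ErrP decoder output: -1 if an error potential is detected, +1 otherwise."""
    is_error = action != preferred[state]
    detected = is_error if rng.random() < detector_accuracy else not is_error
    return -1.0 if detected else 1.0

for step in range(3000):
    s = rng.integers(n_states)                                   # current assembly step
    a = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(q[s]))
    r = errp_reward(s, a)                                        # reward from the brain signal
    q[s, a] += alpha * (r - q[s, a])                             # bandit-style value update

print("learned policy:   ", q.argmax(axis=1))
print("human preference: ", preferred)
```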

References:

[1] Iturrate, I., Montesano, L., & Minguez, J. (2010). Robot reinforcement learning using EEG-based reward signals. 2010 IEEE International Conference on Robotics and Automation, 4822-4829.

[2] Ehrlich, S. K., & Cheng, G. (2018). Human-agent co-adaptation using error-related potentials. Journal of Neural Engineering, 15(6), 066014.

[3] Nikolaidis, S., Zhu, Y. X., Hsu, D., et al. (2017). Human-robot mutual adaptation in shared autonomy. 2017 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 294-302.

------------

Topic: A Comparison of Signal Characteristics and Classification Methods for Image Recognition and EEG Event-Related-Potentials to Contribute towards a More Generalized Representation of EEG Signals (already taken)

Supervisor: Stefan Ehrlich

Abstract: In recent years, researchers have successfully applied machine learning (ML) and neural networks (NNs) to key EEG paradigms such as the event-related P300 [1] and motor imagery [2,3]. However, most of these applications suffer from the EEG's non-stationarity and poor signal-to-noise ratio and are therefore adapted to specific tasks and subjects. To address this problem, some researchers have developed more generalized networks such as the convolutional EEGNet architecture, which yields results comparable to reference algorithms across subjects and tasks [3]. Nevertheless, we still observe a lack of generic, pre-trained algorithms that are transferable between scenarios. In image recognition, by contrast, transfer learning across datasets is common practice [4]. This raises the question of why NNs and other ML algorithms produce excellent results for some data types but unsatisfactory results for others, even though these data types seem to share certain key characteristics (e.g. spatiotemporal features). To answer this question, we aim to compare the signal characteristics and classification algorithms of image and EEG data as examples of best- and worst-case data scenarios.

The main objective of this project is to understand why machine learning algorithms (in particular convolutional NNs) produce excellent results for image data but unsatisfying outcomes for EEG data (in particular event-related EEG). We want to identify the data’s crucial signal characteristics and analyze how they are/could be encoded in classification algorithms. Further, we want to know if we can transfer knowledge from the image data and its classification to EEG data and its classification algorithms. Thereby, we aim to give general guidelines for the design of ML algorithms on the basis of data characteristics and to improve the architectures of algorithms deployed on EEG data.
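
For orientation, below is a minimal sketch of a compact, EEGNet-inspired convolutional network in PyTorch, showing the temporal-then-spatial (depthwise) convolution structure that such EEG architectures share with image CNNs. The layer sizes, channel and sample counts, and class count are illustrative defaults, not the exact configuration from [1] or [3].

```python
import torch
import torch.nn as nn

class CompactEEGNet(nn.Module):
    """Simplified EEGNet-style CNN: temporal conv -> depthwise spatial conv -> classifier."""

    def __init__(self, n_channels=64, n_samples=128, n_classes=2, f1=8, d=2, kern_len=64):
        super().__init__()
        self.features = nn.Sequential(
            # temporal convolution along the time axis (acts like a bank of frequency filters)
            nn.Conv2d(1, f1, (1, kern_len), padding=(0, kern_len // 2), bias=False),
            nn.BatchNorm2d(f1),
            # depthwise convolution along the electrode axis (learns spatial filters)
            nn.Conv2d(f1, f1 * d, (n_channels, 1), groups=f1, bias=False),
            nn.BatchNorm2d(f1 * d),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(0.5),
        )
        with torch.no_grad():
            n_feat = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(n_feat, n_classes)

    def forward(self, x):                      # x: (batch, 1, channels, samples)
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# One forward pass on random "EEG" data to check shapes
model = CompactEEGNet()
logits = model(torch.randn(4, 1, 64, 128))
print(logits.shape)                            # torch.Size([4, 2])
```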

References:

[1] Lawhern, V. J., Solon, A. J., Waytowich, N. R., Gordon, S. M., Hung, C. P., & Lance, B. J. (2018). EEGNet: a compact convolutional neural network for EEG-based brain–computer interfaces. Journal of neural engineering, 15(5), 056013.

[2] Tayeb, Z., Fedjaev, J., Ghaboosi, N., Richter, C., Everding, L., Qu, X., ... & Conradt, J. (2019). Validating deep neural networks for online decoding of motor imagery movements from EEG signals. Sensors, 19(1), 210.

[3] Schirrmeister, R. T., Springenberg, J. T., Fiederer, L. D. J., et al. (2017). Deep learning with convolutional neural networks for EEG decoding and visualization. Human Brain Mapping, 38(11), 5391-5420. doi:10.1002/hbm.23730

[4] Shin, H. C., Roth, H. R., Gao, M., Lu, L., Xu, Z., Nogues, I., ... & Summers, R. M. (2016). Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE transactions on medical imaging, 35(5), 1285-1298.


Bachelor Theses

Topic: Encoding different temperature levels using the ICS Robot Skin (already taken)

Advisor: Zied Tayeb / Dr. Emmanuel Carlos Dean Leon

Skin is a very important sensor for human beings. Up to 5 million discrete receptors of different modalities (e.g. temperature, force, and vibration) are distributed close to our body's surface. The skin helps us to learn more about our environment and how we can safely interact with it. The aim of this project is to distinguish and encode different measured temperature ranges using the ICS Robot Skin. These encoded levels can thereafter be translated into stimulation of an amputee's arm and/or be used in the context of real-time human-robot interaction.
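
As a small illustration of what encoding discrete temperature levels could look like downstream of the sensor readout, here is a minimal sketch that bins continuous temperature readings into levels. The level boundaries, level names, and simulated readings are placeholders; the actual ranges would follow from the skin-cell sensor specification and the target application.

```python
import numpy as np

# Hypothetical boundaries (°C) separating the discrete levels
LEVEL_EDGES = np.array([15.0, 25.0, 32.0, 40.0])   # -> 5 levels: cold ... hot
LEVEL_NAMES = ["cold", "cool", "neutral", "warm", "hot"]

def encode_temperature(samples_deg_c):
    """Map raw temperature readings to discrete levels 0..len(LEVEL_EDGES)."""
    return np.digitize(samples_deg_c, LEVEL_EDGES)

# Simulated stream of readings from one skin cell
readings = np.array([18.3, 22.7, 30.1, 36.5, 43.2])
levels = encode_temperature(readings)
for temp, lvl in zip(readings, levels):
    print(f"{temp:5.1f} °C -> level {lvl} ({LEVEL_NAMES[lvl]})")
```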

References:

  • Cheng, Gordon; Dean-Leon, Emmanuel; Bergner, Florian; Olvera, Julio Rogelio Guadarrama; Leboutet, Quentin; Mittendorfer, Philipp: A Comprehensive Realization of Robot Skin: Sensors, Sensing, Control, and Applications. Proceedings of the IEEE Volume 107 (10), 2019.
  • Zied Tayeb, Nicolai Waniek, Juri Fedjaev, Nejla Ghaboosi, Leonard Rychly, Christian Widderich, Christoph Richter, Jonas Braun, Matteo Saveriano, Gordon Cheng, Jörg Conradt, 'Gumpy: A Python toolbox suitable for hybrid brain-computer interfaces', Journal of neural engineering, Volume 15 (6), 2018.