Student Work
Our student theses, internships, and other student jobs can mostly be done remotely and will be supervised online (video chat, email, ...).
We offer students the opportunity to actively participate in interesting, cutting-edge research topics and concrete research projects by conducting their thesis in our group.
We offer topics for your Bachelor's Thesis and Master's Thesis so that you can complete your studies with a piece of scientific work. For students of the Department of Electrical and Computer Engineering, we supervise the Forschungspraxis (research internship / industrial internship) and the Ingenieurpraxis directly at our chair. For students with other specializations, such as Informatics, we offer to supervise your Interdisciplinary Project (IDP) (German: "Interdisziplinäres Projekt (IDP)"). Please contact us directly for more information.
Open Theses
Offer types: BA (Bachelor's Thesis), MA (Master's Thesis), IDP (Interdisciplinary Project), FP (Forschungspraxis), IP (Ingenieurpraxis), SEM (Seminar), SHK (Student Assistant)
Offered as: BA, FP, IP
Probability parameters of 5G RANs featuring dynamic functional split
Description
The architecture of 5G radio access networks features the division of the base station (gNodeB) into a centralized unit (CU) and a distributed unit (DU). This division enables cost reduction and a better user experience via enhanced interference mitigation. Recent research proposes the possibility to modify this functional split dynamically, that is, to change at runtime which functions run on the CU and which on the DU. This has interesting implications for network operation.
In this topic, the student will employ a dedicated simulator developed by LKN to characterize the duration of and transition rates between the functional splits as a function of several variables: population density, mitigation capabilities, mobility, etc. This characterization can then be used in traffic models to predict the network behavior.
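As a hedged illustration of the characterization step, the following minimal Python sketch (all names and the trace format are hypothetical, not part of the LKN simulator) estimates mean dwell times per split and empirical transition rates from a logged sequence of split states:

```python
from collections import defaultdict

def characterize_splits(trace):
    """trace: list of (timestamp_s, split_option) samples, e.g. from simulator logs.
    Returns mean dwell time per split and empirical transition rates (1/s)."""
    dwell = defaultdict(list)        # split -> list of observed dwell durations
    transitions = defaultdict(int)   # (from_split, to_split) -> count
    start_t, cur = trace[0]
    for t, split in trace[1:]:
        if split != cur:
            dwell[cur].append(t - start_t)
            transitions[(cur, split)] += 1
            start_t, cur = t, split
    mean_dwell = {s: sum(d) / len(d) for s, d in dwell.items()}
    # rate of leaving split s towards s': transition count / total time spent in s
    rates = {k: c / sum(dwell[k[0]]) for k, c in transitions.items()}
    return mean_dwell, rates

# toy trace of (time in seconds, split option id)
trace = [(0, "7.2"), (30, "7.2"), (60, "2"), (150, "7.2"), (200, "2")]
print(characterize_splits(trace))
```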
Prerequisites
MATLAB, some experience with mobile networks and simulators
Supervisor:
Offered as: MA, FP
Age of Information-based Multi-Hop Communication using GNU Radio & Software-defined Radios
GNU Radio, Software-defined Radios, Age of Information
Description
Age of Information (AoI) is defined as the time elapsed since the generation of the most recently received packet. It has been used as a cross-layer metric in remote monitoring and control scenarios.
AoI-aware networking and protocol design require modifications within the communication stack. To that end, our department has set up a testbed that consists of multiple software-defined radios (SDRs) programmed in GNU Radio. This enables us to develop our own solutions, e.g., MAC protocols and routing algorithms, to improve, for example, the AoI performance at the application layer.
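Since AoI is central to the task, here is a minimal sketch of the metric itself (a hypothetical helper, not part of the testbed code): the age grows linearly with time and drops whenever a fresher sample is received.

```python
def age_of_information(receptions, t_now):
    """receptions: list of (receive_time, generation_time) pairs at the monitor.
    Returns the current AoI: time since generation of the freshest received sample."""
    freshest = max((gen for _, gen in receptions), default=None)
    if freshest is None:
        return float("inf")  # nothing received yet
    return t_now - freshest

# toy example: two packets received, the second one carries a newer sample
receptions = [(1.0, 0.8), (2.5, 2.1)]
print(age_of_information(receptions, t_now=3.0))  # 3.0 - 2.1 = 0.9
```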
The tasks of this project consist of:
- Working with the existing setup, which consists of multiple SDRs programmed with GNU Radio / C++
- Scheduling flows over multi-hop paths between source and destination pairs
- Performance analysis and comparison of experimental results to analytical ones from the literature
Prerequisites
The project requires:
- Solid C/C++ knowledge and low-level programming
- Python skills are beneficial
- Solid fundamental knowledge in wireless communications
- Prior hands-on experience with practical implementations of wireless communication devices, such as SDRs or sensor motes (e.g., TelosB motes, Zolertia RE-Motes), is a great benefit
Supervisor:
Offered as: MA
Flexible Implementation of 5G User Plane Function using P4
5G, P4, UPF
Description
5G is the state-of-the-art cellular network technology for the coming decade. One of the critical network functions in the 5G core system is the evolved packet gateway, or User Plane Function (UPF). The UPF is responsible for carrying the users' packets from the base stations to the data network (e.g., the Internet).
P4, on the other hand, is a promising language for programming packet processors. It can be used to program different networking devices (software switches, FPGAs, ASICs, ...).
Using P4 to implement the UPF has many advantages in terms of flexibility and scalability. In this work, the student will implement the UPF in the P4 language. The advantages of this approach, especially in terms of performance gains, will then be evaluated.
Prerequisites
- Good programming skills
- Basic Linux knowledge
- Motivation and critical thinking
- Knowledge about LTE, 5G, or P4 is a plus.
Supervisor:
Offered as: BA, MA, FP, SHK
An SCTP Load Balancer for Kubernetes to aid RAN-Core Communication
5G, SCTP, Kubernetes, RAN, 5G Core, gNB, AMF
Description
Cloud-native deployments of the 5G Core network are gaining increasing interest, and many providers are exploring these options. One of the key technologies used to deploy these networks is Kubernetes (k8s).
In 5G, the NG Application Protocol (NGAP) is used for the gNB-AMF (RAN-Core) communication. NGAP uses SCTP as its transport layer protocol. In order to load-balance traffic coming from the gNB towards a resilient cluster of AMF instances, an L4 load balancer needs to be deployed in the Kubernetes cluster.
The goal of this project is to develop an SCTP load balancer to be used in a 5G Core network to aid the communication between the RAN and the Core.
The project will be developed in Go (https://golang.org/).
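To give a hedged feel for the core balancing decision (which AMF instance receives a new gNB association), here is a toy Python sketch of a deterministic gNB-to-AMF mapping; it is only an illustration of the mapping logic, not the Go/Kubernetes implementation the project targets, and all names are hypothetical:

```python
import hashlib

def pick_amf(gnb_id, amf_instances):
    """Map a gNB (identified e.g. by its source address) to one AMF instance.
    Hash-based mapping keeps an association pinned to the same backend as long
    as the AMF set does not change; a real L4 balancer must additionally track
    SCTP association state and re-map on AMF failure or scale-out."""
    digest = hashlib.sha256(gnb_id.encode()).hexdigest()
    return amf_instances[int(digest, 16) % len(amf_instances)]

amfs = ["amf-0.core.svc", "amf-1.core.svc", "amf-2.core.svc"]  # hypothetical backends
print(pick_amf("gnb-10.0.0.7:38412", amfs))
```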
Prerequisites
- General knowledge about Mobile Networks (RAN & Core).
- Good knowledge of cloud orchestration tools like Kubernetes.
- Strong programming skills. Knowledge of Go (https://golang.org/) is a plus.
Contact
endri.goshi@tum.de
Supervisor:
Offered as: MA
Wireless resource allocation for clients with LiFi and RF dual connections
Description
LiFi is becoming an increasingly important technology as the unlicensed ISM band gets overcrowded. LiFi, with its high data rates, emerges as a promising solution to provide connectivity using LED lights, thereby reducing the interference with other wireless technologies. The integration of LiFi into existing communication systems is best envisioned in the form of wireless heterogeneous networks (HetNets), where short-range LiFi cells providing additional capacity complement WiFi cells providing broader coverage. Any user device in such a HetNet is equipped with multiple wireless interfaces, i.e., LiFi and WiFi.
The challenge in such a heterogeneous network lies in the wireless resource allocation. Most of the existing research focuses on this problem in a scenario where the client is connected either to LiFi or to WiFi. However, in order to fully utilize the potential of a LiFi-WiFi HetNet free of inter-technology interference, the scenario where the client is simultaneously served by LiFi and WiFi must be considered.
In this work, the student is expected to formulate and solve an optimization problem that allocates wireless resources while maximizing the network sum throughput in a LiFi-RF dual-connectivity scenario. The scenario should then be simulated and the results compared with the existing literature.
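As a hedged sketch of how such a sum-throughput maximization could be written as a linear program (toy numbers, assumed achievable rates and a hypothetical per-user minimum-rate constraint; the actual formulation is part of the thesis):

```python
import numpy as np
from scipy.optimize import linprog

# Toy scenario: 3 users, each served simultaneously by one LiFi AP and one WiFi AP.
# Decision variables x = [t_lifi_1..3, t_wifi_1..3] = airtime fraction per user per AP.
r_lifi = np.array([200.0, 150.0, 80.0])   # assumed achievable LiFi rates (Mbps)
r_wifi = np.array([50.0, 60.0, 55.0])     # assumed achievable WiFi rates (Mbps)
min_rate = 20.0                           # assumed per-user rate guarantee (Mbps)

c = -np.concatenate([r_lifi, r_wifi])     # linprog minimizes, so negate the throughput

# Each user must reach min_rate over both links: -(r_lifi_i*t_i + r_wifi_i*w_i) <= -min_rate
fairness = [[-r_lifi[i] if j == i else 0.0 for j in range(3)] +
            [-r_wifi[i] if j == i else 0.0 for j in range(3)] for i in range(3)]
A_ub = np.array([
    [1, 1, 1, 0, 0, 0],                   # LiFi AP airtime budget (sum <= 1)
    [0, 0, 0, 1, 1, 1],                   # WiFi AP airtime budget (sum <= 1)
] + fairness)
b_ub = np.array([1.0, 1.0, -min_rate, -min_rate, -min_rate])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 6)
print("max sum throughput (Mbps):", -res.fun)
print("airtime shares:", res.x.round(2))
```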
Prerequisites
- Python
- Sound knowledge of wireless communication
- LiFi knowledge is an advantage but not necessary
- Experience with optimization problems is an advantage
Supervisor:
Offered as: MA, FP
Performance Analysis in GNU Radio and Testbed Implementation with Software-defined Radios
GNUradio, Software Defined Radios
Description
GNU Radio is a free & open-source software development toolkit that provides signal processing blocks to implement software radios. It can be used with readily-available low-cost external RF hardware to create software-defined radios, or without hardware in a simulation-like environment. It is widely used in research, industry, academia, government, and hobbyist environments to support both wireless communications research and real-world radio systems. (https://www.gnuradio.org/)
LKN has a testbed comprised of multiple feedback control loops communicating over a wireless network. The communication is done with software-defined radios (SDRs) that are programmed with the help of GNU Radio. The task of this project is to conduct a detailed latency analysis of the existing testbed code implemented with the GNU Radio toolkit.
The current framework consists of two main parts: 1) a graphical user interface implemented in Python, and 2) the wireless communication software implemented with GNU Radio in C++.
The task involves a close investigation of the latency contributions of the different parts of the system: where packets are buffered and how this can be optimized for the existing task. It requires getting into the details of GNU Radio.
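One hedged way to start such a latency breakdown, independent of GNU Radio internals, is to timestamp packets at a few probe points and aggregate per-stage latencies; the probe names below are purely hypothetical:

```python
from statistics import mean, quantiles

def stage_latencies(records):
    """records: list of dicts of timestamps (seconds) taken at probe points, e.g.
    {"app_tx": ..., "mac_tx": ..., "phy_tx": ..., "phy_rx": ..., "app_rx": ...}.
    Returns per-stage (mean, 99th-percentile) latency in milliseconds."""
    stages = [("app_tx", "mac_tx"), ("mac_tx", "phy_tx"),
              ("phy_tx", "phy_rx"), ("phy_rx", "app_rx")]
    result = {}
    for a, b in stages:
        deltas = [(r[b] - r[a]) * 1e3 for r in records]
        result[f"{a}->{b}"] = (mean(deltas), quantiles(deltas, n=100)[98])
    return result

# toy records for two packets
records = [
    {"app_tx": 0.000, "mac_tx": 0.002, "phy_tx": 0.004, "phy_rx": 0.010, "app_rx": 0.013},
    {"app_tx": 1.000, "mac_tx": 1.003, "phy_tx": 1.004, "phy_rx": 1.012, "app_rx": 1.016},
]
print(stage_latencies(records))
```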
Prerequisites
The student is asked to have:
- Experience with software development on Linux-based operating systems.
- Familiarity with common Linux system administration tasks and the command line.
- Programming ability in C, C++, Python.
- Good communication & writing skills in English
- Motivation and self-organization throughout the thesis
The following are beneficial for conducting the thesis:
- Some experience with practical hardware in the past, e.g. wireless sensor motes, SDRs.
- GNU Radio experience is beneficial but not mandatory.
Supervisor:
Offered as: MA, IDP, FP
Root Cause Analysis for TSN
network monitoring, root cause analysis, Linux, P4, TAS, TSN
Description
Time-Sensitive Networking (TSN) aims to provide mechanisms and methods for deterministic delay and jitter in industrial networks. The sources of delay and jitter in a TSN can be numerous: misconfigured or incorrectly dimensioned TAS schedules, congested egress ports, large clock jitter in the distributed time synchronization, congestion at egress/ingress ports of the end-stations, variable switch backplane delays, etc. are all possible causes of poor end-to-end performance.
In this work, an existing approach to end-to-end monitoring of transmission delay and jitter will be extended to support the quick identification of the source of an anomaly in a TSN scenario (i.e., the root cause). Two or more approaches for end-to-end monitoring of TSN streams will be proposed, relying on:
1) intermediate, pre-placed network TAPs;
2) hardware / software data plane agents capable of telemetry and dynamic re-programming to support the quick identification of anomaly sources for individual streams.
Any resulting overhead of the intermediary monitoring solution will be quantified for consideration in the TSN stream planning. The work will hence implement and validate the monitoring mechanisms in an existing TSN testbed that relies on a Linux-based control plane, and will optionally involve P4-based middlebox devices to serve as intermediate monitoring entities.
[1] 802.1Qbv - Enhancements for Scheduled Traffic - http://www.ieee802.org/1/pages/802.1bv.html
[2] Improving Network Monitoring and Management with Programmable Data Planes - https://p4.org/p4/inband-network-telemetry/
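As a hedged illustration of the TAP-based root-cause idea, the toy Python sketch below compares per-hop delays of one frame against per-hop worst-case budgets and flags the hop with the largest budget violation; all names and numbers are made up:

```python
def localize_delay_anomaly(tap_timestamps, per_hop_budget_us):
    """tap_timestamps: {tap_name: timestamp_us} for one frame, ordered along the path.
    per_hop_budget_us: {(tap_a, tap_b): expected worst-case delay in microseconds}.
    Returns the hop with the largest budget violation and the excess delay."""
    taps = list(tap_timestamps.items())
    worst_hop, worst_excess = None, 0.0
    for (a, ta), (b, tb) in zip(taps, taps[1:]):
        excess = (tb - ta) - per_hop_budget_us[(a, b)]
        if excess > worst_excess:
            worst_hop, worst_excess = (a, b), excess
    return worst_hop, worst_excess

taps = {"talker": 0.0, "sw1": 40.0, "sw2": 260.0, "listener": 300.0}
budget = {("talker", "sw1"): 50.0, ("sw1", "sw2"): 60.0, ("sw2", "listener"): 50.0}
print(localize_delay_anomaly(taps, budget))  # flags ("sw1", "sw2") with 160 us excess
```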
Prerequisites
- Good knowledge of networking concepts and architectures
- Good knowledge and practical experiences with Linux networking
- Good knowledge of Python, C or Rust
- Any prior experience with TSN, P4, SDN is a plus but not mandatory
Contact
ermin.sakic@siemens.com
blenk@tum.de
andreas.zirkler@siemens.com
Supervisor:
Offered as: BA, MA, IDP, FP, IP
Extension and Evaluation of a TSN Scheduler for Stream Redundancy
optimization, metaheuristics, redundancy, TAS, TSN
Description
IEEE 802.1Qbv Time-Aware Shaper (TAS) standardizes the mechanisms for TDMA-based packet transmission in future Ethernet deployments, as well as the interfaces for definition and configuration of associated transmission schedules [1].
In this work, a state-of-the-art end-to-end TAS-based scheduler will be extended with schemes enabling data plane redundancy under consideration of worst-case delay and jitter constraints. Support for TSN streams with 1:1 replication for proactive redundancy, as well as an approach to efficient schedule reservation with reactive 'fast-failover' redundancy, will be investigated, implemented, and subsequently evaluated in terms of adherence to jitter, delay, and packet loss constraints, as well as the computational complexity and acceptance rates of the approach.
The joint routing/scheduling task will first be formulated as an optimization problem and implemented using an optimization solver. The student will then extend the existing TAS scheduler, relying on metaheuristics for faster execution than with the solver.
[1] IEEE 802.1Qbv - Enhancements for Scheduled Traffic - http://www.ieee802.org/1/pages/802.1bv.html
[2] IEEE P802.1CB – Frame Replication and Elimination for Reliability - https://1.ieee802.org/tsn/802-1cb/
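A hedged toy sketch of the 1:1-replication part of the problem: pick, by brute force, two link-disjoint paths for a stream so that the worse of the two worst-case path delays is minimized. The actual thesis formulates this jointly with TAS scheduling in a solver and with metaheuristics; names and delays here are hypothetical:

```python
from itertools import combinations

def disjoint_pair(paths, link_delay_us):
    """paths: candidate paths, each a list of node names.
    link_delay_us: {(u, v): worst-case per-link delay in microseconds}.
    Returns (cost, path1, path2) for the link-disjoint pair minimizing the max path delay."""
    def links(p):
        return set(zip(p, p[1:]))

    def delay(p):
        return sum(link_delay_us[l] for l in links(p))

    best = None
    for p1, p2 in combinations(paths, 2):
        if links(p1) & links(p2):
            continue  # shared link -> no protection against a single link failure
        cost = max(delay(p1), delay(p2))
        if best is None or cost < best[0]:
            best = (cost, p1, p2)
    return best

paths = [["T", "A", "L"], ["T", "B", "L"], ["T", "A", "B", "L"]]
delays = {("T", "A"): 20, ("A", "L"): 30, ("T", "B"): 25, ("B", "L"): 25, ("A", "B"): 10}
print(disjoint_pair(paths, delays))  # -> (50, ['T', 'A', 'L'], ['T', 'B', 'L'])
```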
Prerequisites
- Good knowledge of networking concepts and architectures
- Good knowledge and practical experiences with Linux networking
- Good knowledge of Python
- Any prior experience with optimization methods and tools is a plus, but not mandatory
- Any prior experience with TSN and SDN is a plus
Contact
ermin.sakic@siemens.com
blenk@tum.de
andreas.zirkler@siemens.com
Supervisor:
Offered as: SHK
Support for the new lecture Data Networking in WS20/21
Data Networking replaces Broadband Communication Networks with a rich set of blended learning formats
Description
The course "Data Networking" (which replaces Broadband Communication Networks) of Prof. Wolfgang Kellerer teaches network-related methods of mobile communication: WiFi, 2G to 5G cellular networks, etc. In order to bridge the gap between the methods and real life, innovative teaching concepts will be applied and developed to present Data Networking in a blended learning format based on podcasts (the students learn and experiment in short episodes about wireless communication and networking in day-to-day scenarios), online quizzes, feedback channels, and recordings of the lecture, tutorial, and discussions.
In order to set up and support the course Data Networking in WS 20/21, Prof. Kellerer is looking for students experienced in programming and in video tools for recording and other online tools (incl. Moodle) to help him. Work on preparations can already start in September/October 2020.
Note: This work can also be done in parallel to taking the course Data Networking in WS 20/21.
Prerequisites
Programming skills; experience with online teaching tools, video tools and recording; knowledge of communication networks
Contact
Arled Papa (arled.papa@tum.de)
Anna Prado (anna.prado@tum.de)
Supervisor:
Offered as: BA, MA
Development of a Tracking Glove for Real Time Trajectory Simulation in Virtual Environments
Towards VR Beer Pong - Development of a Tracking Glove for Real Time Trajectory Simulation in Virtual Environments
Description
Problem Statement
The human hand is one of the most versatile instruments to interact with the environment. Consequently, hand interactions play a key role for realistic interactivity in virtual environments. Most existing hand tracking systems today are vision-based, which requires a complex instrumentation of the environment. This constrains portability and ad-hoc usage, limiting application use cases for industry and consumers.
The goal of this thesis is to implement a glove to track a hand position in space and visualize it in real time in VR to simulate the trajectory of an object released from the hand. The overall goal of this research project is to implement a physically realistic VR simulation of the popular game beer pong to investigate whether practice in VR environments results in training effects in real life.
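For the trajectory part, a minimal hedged sketch of the underlying physics: once the hand releases the object at position p0 with velocity v0, its path is plain projectile motion (air drag ignored here; all numbers are placeholders):

```python
import numpy as np

def trajectory(p0, v0, dt=0.01, t_max=2.0, g=9.81):
    """Sample the ballistic path of an object released at position p0 (m)
    with velocity v0 (m/s): p(t) = p0 + v0*t + 0.5*a*t^2 with a = (0, 0, -g)."""
    t = np.arange(0.0, t_max, dt)[:, None]
    a = np.array([0.0, 0.0, -g])
    points = np.asarray(p0) + np.asarray(v0) * t + 0.5 * a * t**2
    return points[points[:, 2] >= 0.0]   # keep samples above the floor (z >= 0)

# toy throw: released 1.5 m above the floor, 4 m/s forward, 3 m/s upward
path = trajectory(p0=[0.0, 0.0, 1.5], v0=[4.0, 0.0, 3.0])
print(path[-1])   # approximate landing point
```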
Tasks
- Review of related literature
yle="margin-left: 18.0pt; mso-add-space: auto; text-indent: -18.0pt; mso-list: l0 level1 lfo1;">- Prototypical development of a sensory glove
yle="margin-left: 18.0pt; mso-add-space: auto; text-indent: -18.0pt; mso-list: l0 level1 lfo1;">- Implementation of real-time hand tracking and visualization in VR (Unity)
yle="margin-left: 18.0pt; mso-add-space: auto; text-indent: -18.0pt; mso-list: l0 level1 lfo1;">- Implementation of a real-time trajectory simulation
yle="margin-left: 18.0pt; mso-add-space: auto; text-indent: -18.0pt; mso-list: l0 level1 lfo1;"> - Evaluation of the implemented trajectory simulation in VR in a user study
Prerequisites
Preferred Qualifications
- Hardware Prototyping (IMUs, Microcontrollers, e.g. nRF52 DK)
- Experience with signal processing and filtering (i.e. Kalman filtering)
- Interest in VR (Unity, C++ or C#)
- Independent thinking and creative problem solving
Related Work
Real Time VR
https://doi.org/10.1109/38.963459
Hand Tracking and Gloves
https://ieeexplore.ieee.org/document/6223225
https://doi.org/10.1109/TIM.2008.925725
https://doi.org/10.1145/1531326.1531369
Trajectory Prediction
https://www.3delement.com/?p=610
https://web.mit.edu/16.unified/www/FALL/systems/Lab_Notes/traj.pdf
Contact
Michael Fröhlich, CDTM (TUM & LMU)
http://www.cdtm.de
Email: froehlich@cdtm.de
Supervisor:
Ongoing Theses
Bachelor's Theses
Evaluation of telematic configurations with regard to the mobile radio transmission quality including predictions
Coexistence of 5G-NR-U and WiFi6 in aircraft cabins (SISO)
Depth Image based 5G Reliability Enhancement
Deep Learning, Wireless Channel Estimation, SDR
Description
Wireless channel estimation is an important topic when it comes to the reliability of wireless communication. Up to now, the wireless channel is estimated either from pilot signal transmissions or via blind estimation. However, the accuracy of these methods depends on the complexity of the estimation process and on how frequently the pilots are transmitted. Especially in the upcoming mission-critical machine-type communications (MC-MTC, i.e., URLLC), the pilot overhead would result in an inefficient usage of radio resources, considering the high reliability requirements and the short packet transmissions. Hence, the main purpose of this thesis is to obtain reliable wireless channel estimates for a specific room (but a dynamic environment) without requiring pilots. A previous work, which serves as a proof of concept, can be found in [1]; however, it targets an industrial IoT standard using DS-SS signals. The expected result of this work is to generalize the previous work to the new 5G-NR standard, which uses OFDM with many other possible configuration parameters.
[1] Serkut Ayvasik, H. Murat Gürsu, and Wolfgang Kellerer. 2019. Veni Vidi Dixi: reliable wireless communication with depth images. In Proceedings of the 15th International Conference on Emerging Networking Experiments And Technologies (CoNEXT '19). Association for Computing Machinery, New York, NY, USA, 172–185. DOI:https://doi.org/10.1145/3359989.3365418
Prerequisites
- OFDM Knowledge is required
- Wireless Channel Estimation knowledge is a plus
- Machine Learning understanding is a plus
- Medium Matlab experience is required
- C/C++ experience is a plus
Contact
serkut.ayvasik@tum.de
Supervisor:
Development of a Tracking Glove for Real-Time Trajectory Simulation in Virtual Environments
How do random body drops influence aircraft cabin channels for LiFi?
LiFi, 3D-Model, Aircraft Cabin
Description
In the future, data will be transmitted over light, especially in aircraft cabins. However, cabin channel models for communication based on light (LiFi) are non-existent. Therefore, we developed a 3D structure in Blender to simulate light propagation in aircraft. However, the model is still lacking passengers. Passengers will block the light path and cause disruptions in communication.
The goal of this thesis is to extend the model by adding passengers, with the help of pre-existing models. After extending the simulator, different scenarios should be simulated and analysed. The overall goal is to show the influence of adding passengers and to derive a blockage model for LiFi.
Related Work:
Blender
https://www.blender.org/
LiFi
H. Haas, L. Yin, Y. Wang and C. Chen, "What is LiFi?," in Journal of Lightwave Technology, vol. 34, no. 6, pp. 1533-1544, 15 March15, 2016, doi: 10.1109/JLT.2015.2510021.
Prerequisites
- Some experience in python
- Interest in 3D modeling
- Interest in new communication technologies
Supervisor:
Attention! - Learning a Link State Routing Protocol
Development of an East/West API for SD-RAN control communication
Description
Software-Defined Radio Access Network (SD-RAN) is receiving a lot of attention in 5G networks, since it offers means for a more flexible and programmable mobile network architecture.
At the heart of the SD-RAN architecture are the so-called SD-RAN controllers. Currently, initial prototypes have been developed and used in commercial and academic testbeds. However, most of the solutions only contain a single SD-RAN controller. A single controller, though, also becomes a single point of failure for the system, not only due to potential controller failures but also due to the high load induced by the devices in the data plane.
To this end, a multi-controller control plane often becomes a reasonable choice. However, a multi-controller control plane renders the communication among the controllers more challenging, since they often need to exchange control information with each other to keep an up-to-date network state. Unfortunately, no protocol is currently available for such communication.
The aim of this work is the development and implementation of an East/West API for SD-RAN controller communication according to the 5G standardization. The protocol should enable the exchange of information among the SD-RAN controllers regarding UEs, BSs, and the wireless channel state, and allow for control plane migration among controllers.
Prerequisites
- Experience with programming languages Python/C++.
- Experience with socket programming.
- Knowledge about SDN is a must.
- Knowledge about 4G/5G networks is a plus.
Supervisor:
Modelling and Analyzing the influence of passengers on aircraft cabin channel models
RF, 3D-Model, Aircraft Cabin
Extension of an aircraft cabin channel model and analysis of the resulting effects on the system model
Description
In the future, data will be transmitted over light, especially in aircraft cabins. However, cabin channel models for radio frequencies are scarce and outdated. Therefore, we developed a 3D structure in Blender to simulate light propagation in aircraft. However, the model is still lacking passengers.
The goal of this thesis is to extend the model by adding passengers, with the help of pre-existing models and automated by a script. After extending the simulator, different scenarios should be simulated and analysed. The overall goal is to show the influence of adding passengers.
Tasks:
- Review of related literature
- Extend the model with people
- Compare the new channel effect
- Evaluation of the simulation results
Related Work:
Blender
https://www.blender.org/
Prerequisites
- Some experience in python
- Interest in 3D modeling
- Interest in new communication technologies
Supervisor:
Finite-Horizon AoI-based Scheduling with Gilbert-Elliot Channel Model
Weighted Overlapping Networks
Traffic classification using graph attention neural networks
Adaptation of a hardware-in-the-loop simulation setup
Investigation of a backward-compatible data rate increase for proprietary fire alarm bus technology
Description
The topic of this Bachelor's thesis is the development, optimization, and evaluation of the proprietary fire alarm bus system LSNi1 of Bosch Sicherheitssysteme GmbH in Grasbrunn.
Fire alarm systems are an established and necessary part of building technology and are to be used for new networked IoT services. Due to the ever-increasing requirements of these services, the data rate over the fire alarm bus shall be increased. Currently, the bus system is used for the comparatively data-sparse alarm polling of and responses from the network elements. An important point is not to degrade the system characteristics of the fire alarm system. Such networks are declared safety systems and must therefore be extremely reliable in terms of failure resilience and error detection.
The task is to find a new transmission technique with a higher data rate on the physical layer of the bus and to realize it with the help of a prototype. The prototype shall be optimized through calculations and experiments with system elements. Subsequently, various test setups are evaluated in the company's own laboratory. The recorded data are analyzed with a Matlab evaluation program developed for this purpose in order to make clear statements about the functionality of the system. A particular challenge for systems of this kind are the long cable lengths and the large number of system elements on the bus. The evaluated results shall show to what extent the data rate on the bus can be increased.
Supervisor:
Master's Theses
Efficient high-bandwidth data transmission in distributed systems
Machine-Learning based fault diagnosis in flexible optical networks
Apply machine learning techniques to predict and detect faults based on available monitored data of a flexible optical network provided by ADVA.
Description
Flexible optical networks achieve a significant increase in network capacity by assigning to each demand only as much spectrum as needed. With advances in transponders supporting software-tunable channel configurations, network planners are able to select the best combination of data rate, modulation format, and forward error correction (FEC) for each lightpath. However, the impact of failures in these networks is higher due to the closeness between channels as well as the low OSNR margins.
Hence, in this thesis we will apply machine learning techniques to predict and detect faults based on available monitored data of a flexible optical network provided by ADVA.
Supervisor:
Hierarchical SDN control for Multi-domain TSN Industrial Networks
Description
In this thesis, the student will focus on designing and implementing a hierarchical SDN solution for industrial multi-domain TSN networks.
Contact
kostas.katsalis@huawei.com
Supervisor:
Realization of a TCP Proxy with P4
Description
In this thesis, the student will implement a simple TCP proxy with the P4 programming language.
Supervisor:
Placing LiFi OFEs in a 3D space
Description
LiFi (Light Fidelity) is becoming an increasingly important technology as the unlicensed ISM band gets overcrowded. LiFi, with its high data rates, emerges as a promising solution to provide connectivity using LED lights, thereby reducing the interference with other wireless technologies. Due to the unique properties of a visible light communication system, the position of the access points/LEDs/optical front ends (OFEs) has a significant effect on the network coverage and the illumination level in an indoor area.
The placement of these OFEs is a 3D placement problem. This work involves optimizing the placement of the OFEs given a set of requirements, e.g., throughput, luminance, etc. Given these requirements, the goal is to find the optimal placement such that, for example, the throughput is maximized.
Related Reading:
- H. Haas, L. Yin, Y. Wang and C. Chen, "What is LiFi?," in Journal of Lightwave Technology, vol. 34, no. 6, pp. 1533-1544, 15 March15, 2016, doi: 10.1109/JLT.2015.2510021.
- Uluturk, I., Uysal, I., & Chen, K. C. (2019, January). Efficient 3d placement of access points in an aerial wireless network. In 2019 16th IEEE Annual Consumer Communications & Networking Conference (CCNC) (pp. 1-7). IEEE.
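As a hedged toy sketch of the placement idea: exhaustively score candidate ceiling positions for a small number of OFEs and keep the combination that maximizes the minimum received line-of-sight gain over a grid of user positions, using the standard Lambertian LOS channel gain. All geometry and device parameters below are made-up placeholders; the actual objective (throughput and illuminance requirements) is part of the thesis.

```python
import itertools
import math

def los_gain(tx, rx, h, semi_angle_deg=60.0, area_m2=1e-4, fov_deg=70.0):
    """Lambertian line-of-sight channel gain between a ceiling OFE at (tx, height h)
    and an upward-facing receiver at rx on the work plane."""
    m = -math.log(2) / math.log(math.cos(math.radians(semi_angle_deg)))
    d = math.sqrt((tx[0] - rx[0])**2 + (tx[1] - rx[1])**2 + h**2)
    cos_ang = h / d                      # emission and incidence angle coincide here
    if cos_ang < math.cos(math.radians(fov_deg)):
        return 0.0                       # receiver outside the field of view
    return (m + 1) * area_m2 / (2 * math.pi * d**2) * cos_ang**(m + 1)

def best_placement(candidates, users, n_ofes, h=2.5):
    """Pick n_ofes ceiling positions maximizing the worst user's total LOS gain."""
    def worst_user(placement):
        return min(sum(los_gain(p, u, h) for p in placement) for u in users)
    return max(itertools.combinations(candidates, n_ofes), key=worst_user)

candidates = [(x, y) for x in (1, 2.5, 4) for y in (1, 2.5, 4)]   # 3x3 ceiling grid
users = [(0.5, 0.5), (4.5, 4.5), (2.5, 2.5)]                      # sample positions
print(best_placement(candidates, users, n_ofes=2))
```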
Prerequisites
- Python
- Sound knowledge of Wireless Communication
- Interest to learn new communication technologies
- Experience with optimization problems is an advantage
Supervisor:
Lossless Data Center Networks Congestion Control
Data Center, Congestion Control
Description
Congestion control (CC) can help achieve the requirements of today's and future-generation data center networks (DCNs), namely high bandwidth, ultra-low latency, and stability. There are several state-of-the-art proposals, but there is no optimal CC scheme yet. Categorizing the CC schemes into end-host control and in-network scheduling, we at Huawei Munich Research Center believe that in-network scheduling provides faster convergence of the transmission rate towards the fair rate of the network compared to end-host control. A novel in-network scheduling CC algorithm together with an optimal load balancing scheme has been designed for RoCEv2 (RDMA over Converged Ethernet version 2) transport. The proposed CC algorithm leverages in-network telemetry (INT) to estimate link load information and pace traffic accordingly, thereby reducing congestion. It provides optimal packet-level scheduling and is traffic-insensitive, thus fair to all flows (like Processor Sharing). The algorithm is also simple (in terms of the operations performed on packet arrival) and can be deployed in DCN switches. The goal of the algorithm is to minimize the FCT (flow completion time) of large flows and to provide a bounded delay for short flows. The load balancing scheme built on top of the optimal scheduling scheme aims to optimize the overall network performance, achieving higher network utilization while guaranteeing the performance of individual flows. We intend to compare this algorithm, by simulation in the OMNeT++ simulator, with some state-of-the-art algorithms from which it derives its motivation.
Supervisor:
Benchmarking P4Runtime Controllers
P4Runtime, P4, Benchmarking, Performance Evaluation
Description
The student will develop a benchmarking tool for P4Runtime controllers. Then, the performance of the ONOS controller with P4Runtime as the southbound protocol will be evaluated and compared to the case where OpenFlow is adopted as the southbound protocol.
Prerequisites
SDN, P4, and JAVA programming skills.
Supervisor:
Safe Runtime Reconfiguration of TAS-based TSN schedules
optimization, metaheuristics, reconfiguration, TAS, TSN, simulation
Description
Industry 4.0 scenarios have introduced novel requirements for dynamic device-to-device interaction without strict plan-ahead interactions (typical of the Industrial Ethernet of the last two decades). For example, the use cases of work piece de-attachment and interacting Automated Guided Vehicles (AGVs) in factory automation impose the need for dynamic inter-connection of distributed industrial applications at runtime [1] without a plan-ahead schedule.
Dynamic TSN stream establishment has been proposed by IEEE802.1 to address scenarios of applications requesting real-time communication services at runtime [2, 3]. TSN schedulers capable of computing the transmission schedules with guaranteed real-time properties (upper-bound constraints on latency, jitter, and guaranteed bandwidth) have accordingly been proposed in the past, however, they have traditionally focused on one-time / offline execution [4].
Combined with the requirement for dynamic and efficient resource (de-)allocation, transmission schedules must generally be adaptable at runtime in order to provide maximum efficiency in network resource utilization. This work will investigate the runtime adaptation of the TSN control and data plane to support dynamic stream allocation. Algorithms for the adaptation of existing TAS-based schedules with minimal impact on availability (packet loss) and QoS guarantees for running TSN streams will be proposed, implemented, and validated. Novel metrics for stream consistency conservation may need to be proposed by the approach. The resulting approach will eventually be evaluated in the NeSTiNg (OMNeT++/INET) simulator and/or a small-scale physical testbed comprising multiple TSN switches and end-stations.
[1] IEC/IEEE 60802 TSN Profile for Industrial Automation - https://1.ieee802.org/tsn/iec-ieee-60802/
[2] P802.1Qdd – Resource Allocation Protocol - https://1.ieee802.org/tsn/802-1qdd/
[3] 802.1Qat - Stream Reservation Protocol -/ http://www.ieee802.org/1/pages/802.1at.html
[4] Dürr et al., No-wait Packet Scheduling for IEEE Time-sensitive Networks (TSN) - https://dl.acm.org/doi/abs/10.1145/2997465.2997494
Prerequisites
- Very good knowledge of networking concepts and architectures
- Very good knowledge and practical experiences with Linux networking
- Good knowledge of Python, C, C++ or Rust
- Any prior experience with optimization methods and tools and OMNeT++ is a plus
- Any prior experience with TSN is a plus
- Any prior experience with SDN is a plus
- Proficient English and/or German communication skills
- Ability to work and pursue solutions under limited supervision
Contact
ermin.sakic@siemens.com
blenk@tum.de
andreas.zirkler@siemens.com
Supervisor:
5G-NR-U and WiFi6 Coexistence
5G-NR-U, WiFi6, Coexistence
Extension and Analysis of an Interference Cancellation Solution for Aircraft Cabins
Description
More and more devices communicate wirelessly in the cabin. This includes seat screens, passenger devices (mobile phones, laptops), sensors, and actuators. As of today, the complete unlicensed spectrum is used. In the future, 5G-NR-U base stations can be deployed in this scenario to offer new connectivity possibilities. This requires new coexistence mechanisms that do not put further strain on spectrum resources.
The goal of this thesis is to enhance a pre-existing simulator (based on Matlab) for 5G-NR-U signal demodulation, with the help of pre-existing Matlab 5G-NR-U functions. After extending the simulator, different scenarios and channel estimation techniques should be investigated, as well as several key performance indicators such as bit error rate, signal-to-interference-plus-noise ratio, and throughput.
The overall goal is to show that interference cancellation, the underlying concept of rate splitting and NOMA, is viable even in a challenging environment such as an aircraft cabin, where the high density of radio devices is a challenge.
Prerequisites
Tasks
- Review of related literature
- Extend our simulation from 4G to 5G
- Compare different channel estimation techniques
- Evaluation of the simulation results
Preferred Qualifications
- Experience in matlab
- Interest in signal modeling
- Interest in new communication technologies
Related Work
Interference Cancellation
https://www.researchgate.net/profile/Cheng-Xiang_Wang/publication/224083390_Cognitive_radio_networks/links/0fcfd50b57f883862f000000/Cognitive-radio-networks.pdf
5G-NR-U
https://de.mathworks.com/help/5g/examples/nr-pdsch-throughput.html
Channel Estimation Techniques
https://ieeexplore.ieee.org/document/1033876
http://engold.ui.ac.ir/~sabahi/Advanced%20digital%20communication/Channel%20estimation%20for%20wireless%20OFDM%20sysytems.pdf
Supervisor:
Performance Modeling of P4 Devices Using Queuing Theory
P4, Performance Modeling, Queuing Theory
Derive analytical models based on queuing theory for the performance of P4 devices.
Description
In this work, the student will derive analytical equations for different performance metrics related to P4 devices. The modeling will be based on queuing theory and previously conducted measurements on P4 devices. Simulations will be used to verify the derived performance model.
Prerequisites
Knowledge about P4, Queuing Theory, and Matlab.
Supervisor:
Adversarial Benchmarking of Data-driven Reconfigurable Data Center Networking
Data center networks, Adversarial input
Description
Today's Data Center (DC) networks are facing increasing demands and a plethora of requirements. Factors for this are the rise of Cloud Computing, Virtualization and emerging high data rate applications such as distributed Machine Learning frameworks.
Recently several new architectures have been proposed that rely on reconfigurable optics to evolve the topology over time, such as Helios[3], Projector[5], c-Through[1] or RotorNet[6].
Some of these approaches learn attributes of the traffic distribution in the network to determine topology configurations, e.g., xWeaver[4] or DeepConf[2].
A different stream of research focuses on generating adversarial input to networks or networking algorithms [7,8] to identify weak spots or improve robustness and solution quality of data-driven approaches.
The goal of this thesis is to apply the latter one to reconfigurable topology designs with the main focus on the data-driven variants (xWeaver) to benchmark their performance and to improve their solution quality.
The thesis builds upon an existing flow-level simulator in Python and initial algorithms that generate adversarial inputs for networking problems.
[1] G. Wang et al., “c-Through: Part-time Optics in Data Centers,” presented at the ACM SIGCOMM, 2010, p. 12.
[2] S. Salman, C. Streiffer, H. Chen, T. Benson, and A. Kadav, “DeepConf: Automating Data Center Network Topologies Management with Machine Learning,” in Proceedings of the 2018 Workshop on Network Meets AI & ML, New York, NY, USA, 2018, pp. 8–14, doi: 10.1145/3229543.3229554.
[3] N. Farrington et al., “Helios: A Hybrid Electrical/Optical Switch Architecture for Modular Data Centers,” pp. 1–12, 2010.
[4] M. Wang et al., “Neural Network Meets DCN: Traffic-driven Topology Adaptation with Deep Learning,” Proceedings of the ACM on Measurement and Analysis of Computing Systems, vol. 2, no. 2, pp. 1–25, Jun. 2018, doi: 10.1145/3224421.
[5] M. Ghobadi et al., “ProjecToR: Agile Reconfigurable Data Center Interconnect,” 2016, pp. 216–229, doi: 10.1145/2934872.2934911.
[6] W. M. Mellette et al., “RotorNet: A Scalable, Low-complexity, Optical Datacenter Network,” 2017, pp. 267–280, doi: 10.1145/3098822.3098838.
[7] S. Lettner and A. Blenk, “Adversarial Network Algorithm Benchmarking,” in Proceedings of the 15th International Conference on emerging Networking EXperiments and Technologies, Orlando FL USA, Dec. 2019, pp. 31–33, doi: 10.1145/3360468.3366779.
[8] J. Zerwas et al., “NetBOA: Self-Driving Network Benchmarking,” in Proceedings of the 2019 Workshop on Network Meets AI & ML - NetAI’19, Beijing, China, 2019, pp. 8–14, doi: 10.1145/3341216.3342207.
Supervisor:
Optimizing BGP Forwarding Information Base Aggregation Routines by Prediction of Updates
Description
The Border Gateway Protocol (BGP) lies at the heart of the Internet. Internet routers use BGP to exchange routing information with neighbors. Based on their routing policy, current updates, and internal network routes, routers populate their local routing information base (RIB). The final paths are pushed to the forwarding information base (FIB). For fast matching, FIBs are implemented with expensive TCAM. In order to use the TCAM space efficiently, aggregation algorithms compress the RIB before pushing the final paths to the FIB. However, each BGP update may demand a change of the FIB, leading to expensive compression and decompression tasks. Hence, this work focuses on the possibility of using machine learning to enhance the FIB compression routine.
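As a hedged toy of the aggregation step this work builds on (not a full FIB aggregation algorithm such as ORTC), the sketch below repeatedly merges sibling prefixes that share the same next hop into their common supernet:

```python
import ipaddress

def aggregate(fib):
    """fib: {prefix_string: next_hop}. Merge sibling prefixes with identical
    next hops into their supernet until no further merge is possible."""
    nets = {ipaddress.ip_network(p): nh for p, nh in fib.items()}
    changed = True
    while changed:
        changed = False
        for net, nh in list(nets.items()):
            if net not in nets:
                continue                      # already merged away in this pass
            parent = net.supernet()
            siblings = list(parent.subnets(prefixlen_diff=1))
            if all(nets.get(s) == nh for s in siblings):
                for s in siblings:
                    del nets[s]
                nets[parent] = nh             # both children agree -> keep one entry
                changed = True
    return {str(n): nh for n, nh in nets.items()}

fib = {"10.0.0.0/24": "A", "10.0.1.0/24": "A", "10.0.2.0/24": "B"}
print(aggregate(fib))   # {'10.0.0.0/23': 'A', '10.0.2.0/24': 'B'}
```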
Supervisor:
Deliberate Load-Imbalancing in Data Center Networks
Traffic Engineering, Scheduling, Data Center Networks
Goal of this thesis is the implementation and evaluation of an in-dataplane flow scheduling algorithm based on the online scheduling algorithm IMBAL in NS3.
Description
Recently, a scalable load balancing algorithm in the data plane has been proposed that leverages P4 to estimate the utilization in the network and assign flows to the least-utilized path. This approach can be interpreted as a form of Graham's List algorithm (a minimal sketch of this greedy assignment is given after the task list below).
In this thesis, the student is tasked to investigate how a different online scheduling algorithm called IMBAL performs compared to HULA. A prototype of IMBAL should be implemented in NS3. The tasks of this thesis are:
- Literature research and overview to online scheduling and traffic engineering in data center networks.
- Design how IMBAL can be implemented in NS3.
- Implementation of IMBAL in NS3.
- Evaluation of the implementation in NS3 with production traffic traces and comparison to HULA (a HULA implementation is provided by the chair; its implementation is not part of this thesis).
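The following hedged Python sketch shows only the baseline behavior described above (greedy online assignment of each arriving flow to the currently least-loaded path, i.e., the List-scheduling view of HULA); IMBAL itself and the NS3 prototype are the actual subject of the thesis:

```python
def assign_flows(flows, paths):
    """flows: list of (flow_id, demand); paths: list of path ids.
    Greedy online assignment: each flow goes to the currently least-loaded path."""
    load = {p: 0.0 for p in paths}
    placement = {}
    for flow_id, demand in flows:
        target = min(load, key=load.get)      # least-utilized path right now
        placement[flow_id] = target
        load[target] += demand
    return placement, load

flows = [("f1", 4), ("f2", 3), ("f3", 3), ("f4", 2), ("f5", 2)]
placement, load = assign_flows(flows, ["spine-A", "spine-B"])
print(placement, load)   # f1,f4,f5 -> spine-A (load 8); f2,f3 -> spine-B (load 6)
```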
Supervisor:
5G-RAN control plane modeling and Core network evaluation
Description
Next generation mobile networks are envisioned to cope with heterogeneous applications with diverse requirements. To this end, 5G is paving the way towards more scalable and higher performing deployments. This leads to a revised architecture, where the majority of the functionalities are implemented as network functions, which could be scaled up/down depending on the application requirements.
3GPP has already released the 5G architecture overview; however, there exists no actual open-source deployment of the RAN functionalities. Such a deployment will be crucial for the evaluation of the Core network, both in terms of scalability and performance. In this thesis, the student shall understand the 5G standardization, especially the control plane communication between the RAN and the 5G Core. Further, an initial RAN function compatible with the 5G standards shall be implemented, and an evaluation of the control plane performance will be carried out.
Prerequisites
- Strong knowledge on programming languages Python, C++ or Java.
- Knowledge about mobile networking is necessary.
- Knowledge about the 4G/5G architecture is a plus.
Supervisor:
Optimal Placement of SDN Controllers and Hypervisors in Virtualized Networks
Description
The introduction of the Software-Defined Networking (SDN) and network virtualization paradigms has brought new challenges to tackle. In this environment, network switches communicate with SDN controllers through a hypervisor. Therefore, considering a network graph, three questions may come up: 1) Where to deploy SDN controllers? 2) Where to deploy SDN hypervisors? 3) How to connect network switches to the respective SDN controllers to meet the traffic dynamicity and QoS requirements?
In this work, we plan to design and evaluate an optimization framework to optimally answer the above questions.
Prerequisites
Mathematical Optimization, Algorithms, Java / Python
Contact
amir.varasteh@tum.de
Supervisor:
Impact of noise-aware routing on different multiplexing techniques in flexible optical networks
optical networks, RSA
Implementation and evaluation of different multiplexing techniques
Description
Optical core networks deploy a fixed grid in order to accommodate their demands. The trend, however, is to migrate towards flexgrid networks that are able to use only as much spectrum as required by each demand, allowing a better use of the spectrum. However, the Routing and Wavelength Assignment (RWA) paradigm becomes more complex, as it has to find a suitable spectrum assignment for each demand.
This thesis will consider different multiplexing alternatives: space division, band division multiplexing, etc.
The objective is to compare the performance, throughput, BoM, etc. of the different alternatives for different networks and demand evolution matrices.
Prerequisites
Java programming skills
Basic knowledge of flexible optical networks
Supervisor:
Research Internship (Forschungspraxis)
Software Engineering for Automotive Ethernet
Coherent Optical High Speed Modems
Description
- yle="color: black; mso-margin-top-alt: auto; mso-margin-bottom-alt: auto; mso-list: l0 level1 lfo1; tab-stops: list 36.0pt;">Characterization of Coherent optical high speed modems
- yle="color: black; mso-margin-top-alt: auto; mso-margin-bottom-alt: auto; mso-list: l0 level1 lfo1; tab-stops: list 36.0pt;">Flexible bit rates of 100 Gbps to 600 Gbps with respect to possible standardization of these interfaces
Supervisor:
Design and Development of robust Risk and Verification Pipeline
Implementation and Evaluation of Scheduling Techniques for Reconfigurable, Hybrid Circuit/Packet Data-center Networks
Description
Today's Data Center (DC) networks are facing increasing demands and a plethora of requirements. Factors for this are the rise of Cloud Computing, Virtualization and emerging high data rate applications such as distributed Machine Learning frameworks.
Recently, several new architectures have been proposed that rely on topologies that can be reconfigured during run-time with the goal to account for changing demands.
One example is Solstice [1] which proposes a scheduling algorithm for hybrid networks with electrical packet switch parts and optical circuit switches.
It specifically considers the non-trivial switching times of optical circuit switches while also accounting for limited bandwidth on electrical packet switches.
The goal of this research internship is to re-implement and evaluate Solstice in an existing flow-level simulator, to verify the original results, compare it to other topologies, and identify potential weak spots in terms of traffic patterns that are not well handled by Solstice's approach.
[1]H. Liu et al., “Scheduling techniques for hybrid circuit/packet networks,” in Proceedings of the 11th ACM Conference on Emerging Networking Experiments and Technologies - CoNEXT ’15, Heidelberg, Germany, 2015, pp. 1–13, doi: 10.1145/2716281.2836126.
Supervisor:
Reimplementation and Evaluation of METTEOR: Robust Multi-Traffic Topology Engineering
Description
Today's Data Center (DC) networks are facing increasing demands and a plethora of requirements. Factors for this are the rise of Cloud Computing, Virtualization and emerging high data rate applications such as distributed Machine Learning frameworks.
Recently, several new architectures have been proposed that rely on topologies that can be reconfigured during run-time with the goal to account for changing demands.
One example is METTEOR [1] which considers a robust optimization approach to optimize the reconfigurable topology while aiming at a reduced amount of reconfigurations.
The goal of this internship is to re-implement and evaluate METTEOR, to verify the original results, compare it to other topologies, and identify potential weak spots in terms of traffic patterns that are not well handled by METTEOR's approach.
[1]M. Y. Teh, S. Zhao, and K. Bergman, “METTEOR: Robust Multi-Traffic Topology Engineering for Commercial Data Center Networks,” arXiv:2002.00473 [cs], Feb. 2020, Accessed: Aug. 12, 2020. [Online]. Available: http://arxiv.org/abs/2002.00473.
Supervisor:
Jitter Analysis and Comparison of Jitter Algorithms
Description
In electronics and telecommunication, jitter is a significant and undesired factor. The effect of jitter on the signal depends on the nature of the jitter. It is important to sample jitter and noise sources when the clock frequency is especially prone to jitter or when one is debugging failure sources in the transmission of high-speed serial signals. Managing jitter is of utmost importance, and the methods of jitter decomposition have changed considerably over the past years.
yle="background-image: initial; background-position: initial; background-size: initial; background-repeat: initial; background-origin: initial; background-clip: initial;">
yle="background-image: initial; background-position: initial; background-size: initial; background-repeat: initial; background-origin: initial; background-clip: initial;">In a system, jitter has many contributions and it is not an easy job to identify the contributors. It is difficult to get Random Jitter on a spectrogram. The waveforms are initially constant, but the 1/f noise and flicker noise cause a lot of disturbance when it comes to output measurement at particular frequencies in a system.
yle="background-image: initial; background-position: initial; background-size: initial; background-repeat: initial; background-origin: initial; background-clip: initial;">
yle="background-image: initial; background-position: initial; background-size: initial; background-repeat: initial; background-origin: initial; background-clip: initial;">The task is to understand the difference between the jitter calculations based on a step response estimation and the dual dirac model by comparing the jitter algorithms between the R&S oscilloscope and other competition oscilloscopes. Also to understand how well the jitter decomposition and identification is there.
yle="margin-bottom: 8.0pt; line-height: 105%;">
yle="background-image: initial; background-position: initial; background-size: initial; background-repeat: initial; background-origin: initial; background-clip: initial;">The tasks in detail are as follows.
yle="background-image: initial; background-position: initial; background-size: initial; background-repeat: initial; background-origin: initial; background-clip: initial;">Setup a waveform simulation environment and extend to elaborate test cases
yle="background-image: initial; background-position: initial; background-size: initial; background-repeat: initial; background-origin: initial; background-clip: initial;">Run the generated waveforms through the algorithms
yle="background-image: initial; background-position: initial; background-size: initial; background-repeat: initial; background-origin: initial; background-clip: initial;">Analyze and compare the results:
yle="background-image: initial; background-position: initial; background-size: initial; background-repeat: initial; background-origin: initial; background-clip: initial;">Frequency domain
yle="background-image: initial; background-position: initial; background-size: initial; background-repeat: initial; background-origin: initial; background-clip: initial;">Statistically (histogram, etc)
yle="background-image: initial; background-position: initial; background-size: initial; background-repeat: initial; background-origin: initial; background-clip: initial;">Time domain
yle="background-image: initial; background-position: initial; background-size: initial; background-repeat: initial; background-origin: initial; background-clip: initial;">Consistency of the results
yle="background-image: initial; background-position: initial; background-size: initial; background-repeat: initial; background-origin: initial; background-clip: initial;">Evaluate the estimation of the BER (bit error rate)
yle="background-image: initial; background-position: initial; background-size: initial; background-repeat: initial; background-origin: initial; background-clip: initial;">Identify the limitations of the dual-dirac model
yle="background-image: initial; background-position: initial; background-size: initial; background-repeat: initial; background-origin: initial; background-clip: initial;">Compare dual-dirac model results with a calculation based on the step response estimation
yle="background-image: initial; background-position: initial; background-size: initial; background-repeat: initial; background-origin: initial; background-clip: initial;">Generate new waveforms based on the analysis
yle="background-image: initial; background-position: initial; background-size: initial; background-repeat: initial; background-origin: initial; background-clip: initial;">Summarize findings
Supervisor:
Mobility trajectory prediction using machine learning algorithms
Mobility, trajectory prediction
Description
The task is to predict the trajectory (or path) of a mobile user using machine learning algorithms (e.g., reinforcement learning or recurrent neural networks). This information can then be used, for example, in LiFi to predict when a moving user will block the light for another user, or in 5G to predict when to perform handovers.
The tasks are:
- Create the training dataset by implementing an existing mathematical mobility model using Python (SLAW model).
- Use multiple learning techniques to predict the trajectory of a mobile user
- The goal is to compare the learning techniques to see how good the prediction can be, in which applications they can be used, and how this would impact handovers in LiFi and 5G (a simple baseline predictor is sketched below).
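As a hedged baseline against which learned predictors could be compared (the thesis itself targets SLAW-generated data and RNN/RL predictors), a first-order Markov next-location predictor over discretized grid cells might look like this; all names and data are illustrative:

```python
from collections import Counter, defaultdict

class MarkovPredictor:
    """First-order Markov baseline: predict the next grid cell as the one most
    frequently observed after the current cell in the training trajectories."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def fit(self, trajectories):
        for traj in trajectories:               # traj: list of cell ids
            for cur, nxt in zip(traj, traj[1:]):
                self.counts[cur][nxt] += 1

    def predict(self, current_cell):
        nxt = self.counts.get(current_cell)
        return nxt.most_common(1)[0][0] if nxt else current_cell  # fallback: stay

# toy trajectories over cells of a coarse grid
trajs = [["A", "B", "C", "B"], ["A", "B", "B", "C"], ["C", "B", "A"]]
model = MarkovPredictor()
model.fit(trajs)
print(model.predict("B"))   # most frequent successor of "B" in the toy data
```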
Related work:
Lee, K., Hong, S., Kim, S.J., Rhee, I. and Chong, S., 2009, April. Slaw: A new mobility model for human walks. In IEEE INFOCOM 2009 (pp. 855-863). IEEE.
Gebrie, H., Farooq, H. and Imran, A., 2019, May. What machine learning predictor performs best for mobility prediction in cellular networks?. In 2019 IEEE International Conference on Communications Workshops (ICC Workshops) (pp. 1-6). IEEE.
Prerequisites
- Python knowledge
- Knowledge in machine learning algorithms (especially RNNs)
- Interest in learning about mobility models
Supervisor:
Evaluation of Communication Mechanisms in 5G Control Plane
5G, 5G Core, NFV
Description
As the architecture of the 5G Core has already been standardized and is fundamentally different from the previous generations, a question arises regarding the efficiency of the communication mechanisms that will be used for the communication between Network Functions (NFs) inside the 5G Core network.
In this project, the student will evaluate these mechanisms used in an open-source implementation of the 5G Core (https://www.free5gc.org/), and propose improvements that will increase the performance and scalability of the Core Network (e.g., serialization, protocols, etc.).
Prerequisites
- General knowledge about Mobile Networks (RAN & Core).
- Motivated to learn more about the communication side of the Mobile Core Networks.
- Strong programming skills. Knowledge of Go (https://golang.org/) is a plus.
Contact
endri.goshi@tum.de
Supervisor:
A scalable and flexible 5G RAN Emulator
5G, gNB, UE, RAN, AMF
Description
3GPP has already released the 5G architecture overview; however, there exists no actual open-source deployment of the RAN functionalities. Such a deployment will be crucial for the evaluation of the Core network, both in terms of scalability and performance. In this thesis, the student shall understand the 5G standardization, especially the control plane communication between the RAN and the 5G Core. Further, an initial RAN function compatible with the 5G standards shall be implemented, and an evaluation of the control plane performance will be carried out.
Prerequisites
- Strong knowledge on programming languages Python, Go.
- Knowledge about mobile networking is necessary.
- Knowledge about the 4G/5G architecture is a plus.
Supervisor:
3D modeling of an indoor office and analysis of different LiFi access point placement strategies
Description
LiFi (Light Fidelity) is becoming an increasingly important technology as the unlicensed ISM band gets overcrowded. LiFi, with its high data rates, emerges as a promising solution to provide connectivity using LED lights, thereby reducing the interference with other wireless technologies. Due to the unique properties of a visible light communication system, the position of the access points/LEDs has a significant effect on the network coverage and the illumination level in an indoor area.
This work involves modelling an indoor office using Blender (a 3D modelling software) and analyzing the effects of different LED placement strategies. The goal is to answer the question: Which design is preferred for LiFi access point placement in an office environment?
Related Reading:
Blender
https://www.blender.org/
LiFi
H. Haas, L. Yin, Y. Wang and C. Chen, "What is LiFi?," in Journal of Lightwave Technology, vol. 34, no. 6, pp. 1533-1544, 15 March 2016, doi: 10.1109/JLT.2015.2510021.
Prerequisites
- Python
- Interest in new communication technologies
- Interest in 3D modelling
Supervisor:
Analysis and Development of a Network Slice Control Handover Scheme
Analysis and Development of a Network Slice Control Handover Scheme
Description
Software-Defined Radio Access Network (SD-RAN) and network slicing are foreseen as promising solutions to enable programmability and flexibility for next-generation 5G/6G networks. The concept of SD-RAN enables a hierarchical scheduling approach for wireless resource allocation, where each network slice is controlled by a centralized unit referred to as the SD-RAN controller.
However, a single centralized controller is a single point of failure: if it fails, there is no recovery for the network. Apart from such failures, another important issue is controller overload. Since controllers run on physical devices, they show undesirable behavior when the load of the underlying network increases. To address this, a multi-controller layer is proposed, in which different network slices can be managed by different controllers to reduce the load and keep the system stable. When one controller becomes overloaded, control of some network slices has to be offloaded to another controller, performing a type of load balancing. We refer to this problem as "vertical control handover", which we define as the handover of a slice's control from one controller to another.
The purpose of this thesis is to model and develop an optimization problem for vertical slice handover that minimizes packet losses, signaling overhead, and the load on the SD-RAN controllers. The developed optimization problem shall be analyzed with respect to these aspects for different system parameters and compared with alternative approaches from the literature.
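A minimal sketch of such a formulation is given below, assuming the PuLP library is available; the slice loads, controller capacities, current assignment and handover penalty are made-up toy values, and the objective only balances peak controller load against the number of handovers rather than the full set of metrics named above.

from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

slices = {"s1": 0.4, "s2": 0.3, "s3": 0.5, "s4": 0.2}   # slice control load
controllers = {"c1": 1.0, "c2": 1.0}                    # controller capacity
current = {"s1": "c1", "s2": "c1", "s3": "c1", "s4": "c2"}
alpha = 0.1   # relative cost of handing a slice over to a different controller

x = {(s, c): LpVariable(f"x_{s}_{c}", cat=LpBinary)
     for s in slices for c in controllers}
peak = LpVariable("peak_load", lowBound=0)

prob = LpProblem("vertical_control_handover", LpMinimize)
# Objective: balance controller load while penalizing slice-control handovers.
prob += peak + alpha * lpSum(x[s, c] for s in slices for c in controllers
                             if c != current[s])
for s in slices:                       # each slice is controlled by exactly one controller
    prob += lpSum(x[s, c] for c in controllers) == 1
for c, cap in controllers.items():     # capacity and peak-load definition
    load_c = lpSum(slices[s] * x[s, c] for s in slices)
    prob += load_c <= cap
    prob += load_c <= peak

prob.solve()
print({s: c for s in slices for c in controllers if value(x[s, c]) > 0.5})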
Prerequisites
- Experience with optimization problems (linear programming, convex optimization) is a plus
- Experience with programming: Python/Matlab is a plus.
- Knowledge about SDN is a plus
Supervisor:
Probabilistic Traffic Classification
Probabilistic Traffic Classification
Probabilistic Graphical Models, Markov Model, Hidden Markov Model, Machine Learning, Traffic Classification
Classification of packet level traces using Markov and Hidden Markov Models.
Description
The goal of this thesis is the classification of packet-level traces using Markov and Hidden Markov Models. The scenario is open-world: traffic of specific web applications should be distinguished from all possible web pages (background traffic). In addition, several applications should be differentiated from each other. Examples include: Google Maps, YouTube, Google Search, Facebook, Google Drive, Instagram, Amazon Store, Amazon Prime Video, etc.
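The basic idea can be illustrated with plain first-order Markov chains over discretized packet sizes; the sketch below uses purely synthetic traces in place of real per-application captures, and the number of bins is an arbitrary choice.

import numpy as np

N_BINS = 8
rng = np.random.default_rng(0)

def transition_matrix(seq, n=N_BINS):
    # First-order Markov transition matrix estimated from one symbol sequence,
    # with add-one smoothing so unseen transitions keep non-zero probability.
    counts = np.ones((n, n))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(seq, P):
    return sum(np.log(P[a, b]) for a, b in zip(seq[:-1], seq[1:]))

# "Training" traces per class, here purely synthetic stand-ins.
train = {"video": rng.integers(4, N_BINS, 500), "web": rng.integers(0, 4, 500)}
models = {app: transition_matrix(seq) for app, seq in train.items()}

test = rng.integers(4, N_BINS, 100)       # unknown trace to classify
scores = {app: log_likelihood(test, P) for app, P in models.items()}
print("classified as:", max(scores, key=scores.get))

Hidden Markov Models follow the same likelihood-comparison idea but add latent states; the open-world case additionally needs a background model or a likelihood threshold to reject unknown traffic.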
Supervisor:
Joint Planning of Optical and Satellite QKD networks
Joint Planning of Optical and Satellite QKD networks
Description
This research internship looks at Quantum Key Distribution (QKD) in a wide-area pan-European network. Different architectures, routing and spectrum allocation methods need to be considered, keeping in mind both the technological and economic constraints of such a combined deployment.
This work is relevant for optical networking as well as satellite network providers that aim to offer secure data transmission to multi-national governmental organizations, banks and security agencies spread over large regions.
Supervisor:
Lightpath Reconfiguration Algorithms for Optical Network Planning
Lightpath Reconfiguration Algorithms for Optical Network Planning
Description
In this internship/bachelor thesis, the student will research different lightpath reconfiguration algorithms and models and their application to flexible optical networking. They will then create an evaluation framework and finally simulate a network planning study. The results will be analyzed to draw useful conclusions.
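For orientation, a minimal sketch of one reconfiguration step is shown below, assuming networkx is available; the topology and the "congested" link are illustrative, and a real study would also track spectrum occupancy per link rather than simply removing edges.

import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([                  # (node, node, link length in km)
    ("A", "B", 300), ("B", "C", 250), ("A", "D", 400),
    ("D", "C", 350), ("B", "D", 200),
])

src, dst = "A", "C"
original = nx.shortest_path(G, src, dst, weight="weight")

# Reconfiguration step: assume link B-C runs out of free spectrum and the
# lightpath has to be rerouted around it.
G_reduced = G.copy()
G_reduced.remove_edge("B", "C")
rerouted = nx.shortest_path(G_reduced, src, dst, weight="weight")

print("original lightpath:", original)
print("rerouted lightpath:", rerouted)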
Prerequisites
Must Have
- Basic knowledge of graph theory, communication networks and optical networks
- Independent worker
- Working knowledge of Python
- Willingness to learn
Good to Have
- Knowledge of Java
- Familiarity with NetConf/YANG
Contact
To apply, please send your CV and grade transcripts to kireet.patri@tum.de
Supervisor:
FlexGraph: Flexibility Study of Network Topologies (Reliability Use Case)
FlexGraph: Flexibility Study of Network Topologies (Reliability Use Case)
Description
Flexibility is an indispensable feature of future communication networks and technologies. Only flexible networks allow coping with the fast-changing demands of future network applications and use cases like 5G/6G. In order to quantify the flexibility of networks, we have recently presented a new flexibility measurement framework. As many problems in network communication are studied on graphs, this thesis should investigate the impact of topologies on network flexibility by applying the concepts proposed in our framework. In detail, different networking use cases (like network programmability, network failures, and network verification) should be analyzed for a variety of network topologies. As graph measures are one way to characterize network graphs, the thesis should analyze the relation between graph features and network flexibility. The studies are expected to be data-intensive; hence, the student should have prior experience in carrying out automated, data-intensive analyses. Finally, the thesis should study whether flexibility can be predicted from a given topology by applying supervised machine learning models to specific networking problems.
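A minimal sketch of the last step is given below, assuming networkx and scikit-learn are available; the "flexibility score" used here is only a placeholder derived from graph measures, not the chair's actual flexibility metric, which would be computed with the measurement framework.

import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def graph_features(G):
    return [nx.density(G),
            nx.average_clustering(G),
            nx.node_connectivity(G),
            np.mean([d for _, d in G.degree()])]

rng = np.random.default_rng(1)
X, y = [], []
for _ in range(200):
    G = nx.gnp_random_graph(20, rng.uniform(0.15, 0.5), seed=int(rng.integers(1_000_000)))
    if not nx.is_connected(G):
        continue
    X.append(graph_features(G))
    y.append(nx.density(G) * nx.node_connectivity(G))   # placeholder "flexibility" score

X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y), test_size=0.3)
model = RandomForestRegressor(n_estimators=100).fit(X_tr, y_tr)
print("R^2 on held-out topologies:", round(model.score(X_te, y_te), 3))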
Supervisor:
Network Functions Placement into P4 Devices
Network Functions Placement into P4 Devices
P4, Network Functions Placement
Description
P4 is a high-level language designed for programming packet processors. Currently, it can be used to program different types of networking devices. https://p4.org/
Our goal in this thesis is to investigate the optimal placement of different network functions (or P4 programs) into different P4 devices based on the characteristics of the P4 programs and the P4 devices.
In this work, the student will formulate the placement problem as an optimization problem and solve it for different objectives and under the relevant constraints.
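For intuition, the sketch below shows a greedy first-fit-decreasing placement under a single resource dimension; the program sizes and device capacities are made-up numbers, and the thesis would replace this heuristic with an exact optimization formulation covering multiple resource types and objectives.

# Each P4 program is characterized here only by the match-action table entries it needs.
programs = {"firewall": 4000, "load_balancer": 2500, "nat": 3000, "telemetry": 1500}
devices = {"switch_1": 6000, "switch_2": 6000}          # free table entries per device

placement, free = {}, dict(devices)
# Greedy first-fit-decreasing heuristic: place the largest program first on the
# device with the most remaining capacity.
for prog, demand in sorted(programs.items(), key=lambda kv: -kv[1]):
    target = max(free, key=free.get)
    if free[target] < demand:
        raise RuntimeError(f"no device can host {prog}")
    placement[prog] = target
    free[target] -= demand

print(placement)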
Prerequisites
- Good theoretical background on optimization.
- Background on P4 language is a plus.
Supervisor:
Internships
Network Optimization
Student Assistant Jobs
Student Assistant for the Wireless Sensor Networks Lab WS20/21
Student Assistant for the Wireless Sensor Networks Lab WS20/21
Description
The Wireless Sensor Networks lab offers the opportunity to develop software solutions for wireless sensor networking systems, targeting innovative applications. For the next semester, a position is available to assist the participants in learning the programming environment and during the project development phase. The lab is planned to be held every Tuesday from 15:00 to 18:00.
Prerequisites
Solid knowledge in Wireless Communication: PHY, MAC, and network layers. Solid programming skills: C/C++. Linux knowledge. Experience with embedded systems and microcontroller programming knowledge is preferable.
Supervisor:
Implementation of a Techno-Economic tool for VLC
Implementation of a Techno-Economic tool for VLC
Development and implementation in Excel/VBA of a visible light communication (VLC) techno-economic tool for IoT services.
Description
Future IoT will need wireless links with high data rates, low latency and reliable connectivity despite the limited radio spectrum. Connected lighting is an interesting infrastructure for IoT services because it enables visible light communication (VLC), i.e., wireless communication using the unlicensed light spectrum. This work aims at developing a tool to perform an economic evaluation of the proposed solution in the particular case of a smart office.
For that purpose, the following tasks will have to be performed:
- Definition of a high-level framework specifying the different modules that will be implemented as well as the required inputs and the expected outputs of the tool.
- Development of a cost evaluation Excel/VBA tool. This tool will allow evaluating different variations of the selected case study and, if possible, comparing alternative models (e.g., dimensioning) or scenarios (e.g., building types); a rough sketch of such a cost aggregation follows this list.
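Although the tool itself is to be built in Excel/VBA, the Python snippet below illustrates the kind of CAPEX/OPEX aggregation it would perform; all unit costs and quantities are made-up placeholders for a smart-office deployment.

YEARS = 5
capex = {
    "vlc_luminaires": 40 * 150.0,     # 40 VLC-enabled LED luminaires
    "drivers_modems": 40 * 60.0,
    "cabling_install": 2500.0,
}
opex_per_year = {
    "energy": 40 * 8.0,               # per-luminaire energy cost per year
    "maintenance": 600.0,
}
total = sum(capex.values()) + YEARS * sum(opex_per_year.values())
print(f"total cost of ownership over {YEARS} years: {total:.0f} EUR")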
Prerequisites
- Excel and VBA
Supervisor:
Development of NCSbench
Development of NCSbench
Description
Networked Control Systems (NCS), i.e., control systems whose components are interconnected over a communication network, are a fundamental building block of future industrial automation systems. To study NCS, the NCSbench platform [1,2] was developed, allowing performance measurements of a real NCS. The platform is built using Lego, is developed in Python, communicates over standard IP network interfaces and is fully open source.
In order to further develop the platform, a working position is available.
[1] https://github.com/tum-lkn/NCSbench/wiki
[2] S. Zoppi, O. Ayan, F. Molinari, Z. Music, S. Gallenmüller, G. Carle, W. Kellerer, "NCSbench: Reproducible Benchmarking Platform for Networked Control Systems," IEEE Consumer Communications & Networking Conference (CCNC), 2020.
Prerequisites
This work will consist of a significant amount of programming and also of processing the measurement data.
Requirements:
- Good knowledge of wireless systems and protocols (WLAN, WSN).
- Basic knowledge of LTI control systems.
- Strong C and Python programming skills.
- Experience with Linux systems is recommended.
- Experience with Python data processing tools is beneficial.
Supervisor:
Machine-Learning based fault diagnosis in flexible optical networks
Machine-Learning based fault diagnosis in flexible optical networks
machine learning, classifiers, flexible optical networks
Develop, implement and evaluate two solutions to detect and identify failures in flexible optical networks.
Description
yle="text-align: justify;">Flexible optical networks achieve a significant increase of network capacity by assigning as much spectrum to the demands as needed. With advances in transponders supporting software tunable channel configurations, network planners are able to select the best combination of data rate, modulation format and forward error correction (FEC) for each light-path. However, the impact of failures in these networks is higher due to the closeness between channels as well as the low OSNR margins.
yle="text-align: justify;">Hence, in this thesis we will look into the monitoring capabilities of flexible optical networks provided by existing components as well as extra monitoring equipment. All the gathered monitoring information should be processed by machine learning based classification tools such that fast and efficient failure diagnosis can be achieved.
Prerequisites
Programming skills (Python)
Supervisor:
Implementation of Energy-Aware Algorithms for Service Function Chain Placement
Implementation of Energy-Aware Algorithms for Service Function Chain Placement
Description
Network Function Virtualization (NFV) is becoming a promising technology in modern networks. A challenging problem is determining the placement of Virtual Network Functions (VNFs). In this work, we plan to implement existing algorithms for embedding VNF chains in NFV-enabled networks.
Prerequisites
Experience in Python or Java, object oriented programming
Contact
amir.varasteh@tum.de
Supervisor:
Working student for innovative Podcasts for BCN lecture
Working student for innovative Podcasts for BCN lecture
Programming support for the design of podcasts/apps for the Broadband Communication Networks lecture
Description
The lecture Broadband Communication Networks (Prof. Wolfgang Kellerer) teaches network-related methods of mobile communication: WiFi, 2G to 5G cellular networks, etc. In order to bridge the gap between these methods and real life, innovative teaching concepts shall be developed in the form of short podcasts, in which students learn in short episodes about wireless communication and networking in day-to-day scenarios. The podcasts should also introduce students to short exercises they can perform on their own.
To support the design of these podcasts, Prof. Kellerer is looking for a student experienced in programming and, ideally, in podcast production.
Prerequisites
Very good programming skills; experience with podcasts; experience with app programming (Android/iOS)
Supervisor:
Working student for the SDN lab
Working student for the SDN lab
Description
The yle="color: #333333; font-family: Arial, sans-serif; font-size: 14px;">Software-Defined Networking (SDN) lab offers the opportunity to work with real hardware on interesting and entertaining projects related to the SDN network paradigm. For the next semester, a position is available to assist the teaching assistants for the lab (definition and preparation of the assignments, preparation of the hardware, etc.).
Prerequisites
- yle="list-style-type: square; margin-left: 0px; padding-left: 20px; color: #333333; font-family: Arial, sans-serif; font-size: 14px;">
- yle="padding-left: 0.3em;">Solid knowledge in computer networking (TCP/IP, SDN)
- yle="padding-left: 0.3em;">Solid knowledge of networking tools and Linux (iperf, ssh, etc)
- yle="padding-left: 0.3em;">Good programming skills: C/C++, Python