In Cooperation with:
Prof. Markus Rupp (Institute of Telecommunications, TU Vienna)
Prof. Armin Wittneben (Communication Technology Laboratory, ETH Zürich)
Organization: Andreas Barthelme
Target Audience: Master EI
Offered in: Summer Term
The Institute of Telecommunications at TU Vienna (Prof. Rupp, Prof. Mecklenbräuker and Prof. Görtz), the Communication Technology Laboratory at ETH Zürich (Prof. Wittneben) and the Methods of Signal Processing Group at Technische Universität München (Prof. Utschick) are organizing an international seminar on selected topics in signal processing and communications for students from the participating institutions during the summer term. The students (4-6 from each department) choose from topics offered by the corresponding supervisors, collect the required literature, work through the topic, summarize it in a two-page abstract and finally give a scientific talk. The talks are held in Zürich, Vienna and Munich; travel and accommodation costs are covered by the organizers.
Note that due to the Coronavirus crisis, we have to suspend traveling to our partner universities in the summer term 2020.
Please send your informal application with an up-to-date transcript of records by April 1, 2020, to email@example.com
May 22, 2020 (canceled) - TU Wien
May 29, 2020 (canceled) - ETH Zürich
June 26, 2020 - TU München
The ever-growing popularity of deep neural networks (DNNs) can easily be observed across many fields, especially in computer vision and data mining. However, DNNs generally lack interpretability, confidence quantification and robustness to adversarial attacks. The authors of [1] address this problem by introducing the Deep k-Nearest Neighbors (DkNN) algorithm, which combines the interpretability of the k-NN algorithm with the feature extraction capacity of DNNs. In this seminar, the student is expected to give an overview of the benefits of the DkNN algorithm in terms of interpretability, confidence and robustness. Moreover, the student should provide an intuitive explanation of the DkNN algorithm.
[1] N. Papernot and P. McDaniel, “Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning,” 2018. [Online]. Available: arxiv.org/abs/1803.04765
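The core idea of DkNN can be illustrated on a toy problem. The sketch below (an illustrative setup, not the authors' code) uses two hand-made "layer representations" in place of a trained DNN's activations: for a test point, the labels of the k nearest training points are collected in every layer's feature space, and the nonconformity of a candidate label is the number of neighbours across all layers that disagree with it.

```python
import numpy as np

# Toy sketch of the DkNN idea (hypothetical setup): per-layer features for a
# small labeled training set and a test point. Real DkNN uses the activations
# of a trained DNN and calibrates the scores on a held-out calibration set.
rng = np.random.default_rng(0)

def layer_features(x):
    """Stand-in for the activations of two network layers."""
    return [x, np.tanh(x)]          # layer 1 and layer 2 "representations"

# Labeled training data: two well-separated classes in 2-D.
X_train = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(+2, 0.5, (20, 2))])
y_train = np.array([0] * 20 + [1] * 20)

def knn_labels(layer_x, layer_train, k=5):
    """Labels of the k nearest training points in one layer's feature space."""
    d = np.linalg.norm(layer_train - layer_x, axis=1)
    return y_train[np.argsort(d)[:k]]

def nonconformity(x, label):
    """DkNN nonconformity: neighbours across all layers disagreeing with `label`."""
    feats_x = layer_features(x)
    feats_train = layer_features(X_train)
    return sum(np.sum(knn_labels(fx, ft) != label)
               for fx, ft in zip(feats_x, feats_train))

x_test = np.array([2.1, 1.9])       # clearly inside class 1's region
scores = {c: nonconformity(x_test, c) for c in (0, 1)}
pred = min(scores, key=scores.get)  # predict the label with fewest disagreements
print(pred, scores)
```

In the paper, these nonconformity scores are turned into calibrated confidence and credibility values via conformal prediction, which is where the interpretability and confidence guarantees come from.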
Clustering is central to many data-driven application domains and has been studied extensively. Given the success of generative modeling in recent years, it seems natural to leverage the potential of generative models for clustering tasks as well. The intuition behind [1] is that if a model learns to produce realistic samples from a low-dimensional representation, then similar data points should naturally lie close to each other in this low-dimensional representation.
In this seminar topic, the student should investigate a novel method called “ClusterGAN” [1], which performs clustering within the framework of Generative Adversarial Networks (GANs) [2].
[1] S. Mukherjee, H. Asnani, E. Lin, and S. Kannan, “ClusterGAN: Latent Space Clustering in Generative Adversarial Networks,” The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI), 2019.
[2] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative Adversarial Nets,” in Advances in Neural Information Processing Systems 27, 2014.
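A distinguishing ingredient of ClusterGAN is its mixed latent prior: the generator input concatenates a continuous Gaussian part with a discrete one-hot part, so each one-hot index seeds one cluster in latent space. The sketch below (assumed dimensions, not the paper's code) shows only this sampling step:

```python
import numpy as np

# Minimal sketch of a ClusterGAN-style latent prior (dimensions are arbitrary
# choices): a small continuous Gaussian component z_n plus a one-hot cluster
# code z_c. The generator would map this concatenation to data space.
rng = np.random.default_rng(1)

def sample_latent(batch, n_continuous=10, n_clusters=3, sigma=0.1):
    z_n = sigma * rng.normal(size=(batch, n_continuous))   # small Gaussian noise
    ids = rng.integers(0, n_clusters, size=batch)          # random cluster choice
    z_c = np.eye(n_clusters)[ids]                          # one-hot cluster code
    return np.hstack([z_n, z_c]), ids

z, ids = sample_latent(8)
print(z.shape)
```

At clustering time, an inverse-mapping (encoder) network projects data back into this latent space, and the cluster assignment is read off from the recovered discrete part.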
Recently, a framework for the optimization of MISO downlink beamforming was proposed, in which fast optimal downlink beamforming strategies are obtained by leveraging powerful deep learning (DL) techniques [1]. Three typical optimization problems are considered: the signal-to-interference-plus-noise ratio (SINR) balancing problem, the power minimization problem and the sum rate maximization problem [1]. Depending on the problem, specific approaches are devised to solve these computationally expensive problems with the help of a deep neural network consisting of a neural network module and a beamforming recovery module [1]. The student should provide a brief summary of the key findings of the paper, with a focus on identifying and understanding the differences between the proposed training strategies for the three optimization problems.
[1] W. Xia, G. Zheng, Y. Zhu, J. Zhang, J. Wang, and A. P. Petropulu, “A Deep Learning Framework for Optimization of MISO Downlink Beamforming.”
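The recovery-module idea can be illustrated with a classical result: instead of predicting all beamformer entries, a network only needs to output a few scalars per user, and the beam directions follow from a known optimal MMSE-like structure. The sketch below (illustrative notation and random stand-in values for the network outputs, not the paper's architecture) recovers unit-norm directions w_k proportional to (I + Σ_i q_i h_i h_i^H)^{-1} h_k:

```python
import numpy as np

# Sketch of a "beamforming recovery" step with assumed notation: a network
# would predict K scalars q_k; the beam directions are then recovered from
# the known solution structure rather than learned entry by entry.
rng = np.random.default_rng(2)
K, M = 3, 4                                   # users, transmit antennas
H = (rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))) / np.sqrt(2)
q = rng.uniform(0.5, 1.5, size=K)             # stand-in for network outputs

A = np.eye(M) + sum(q[k] * np.outer(H[k], H[k].conj()) for k in range(K))
W = np.linalg.solve(A, H.T.conj())            # columns: unnormalized directions
W = W / np.linalg.norm(W, axis=0)             # unit-norm beamformers

print(W.shape)
```

Mapping the K-dimensional network output to an M×K beamforming matrix this way shrinks the learning problem drastically, which is one reason the paper's approach scales to larger antenna arrays.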
Machine learning and deep learning techniques for physical layer communications have gained considerable popularity and are considered a key enabler for channel state information (CSI) prediction in frequency division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems. To address this problem, [1] develops a deep learning framework for a mixed scenario in which different users experience different wireless transmission environments. In particular, three algorithms for CSI prediction are proposed:
- the no-transfer algorithm, where the neural network is trained on multiple environments and then tested on a single environment;
- the direct-transfer algorithm, where, after a coarse estimation of the parameters, the network is fine-tuned for the single environment;
- the meta-learning algorithm, where the network learns to adapt to a new environment from a small number of labeled data.
The student can see how machine learning techniques such as those in [2] and [3] can be utilized to solve practical communication problems.
[1] Y. Yang, F. Gao, Z. Zhong, and A. Alkhateeb, “Deep Transfer Learning Based Downlink Channel Prediction for FDD Massive MIMO Systems.”
[2] S. J. Pan and Q. Yang, “A Survey on Transfer Learning.”
[3] C. Finn, P. Abbeel, and S. Levine, “Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks.”
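The meta-learning idea underlying the third algorithm can be shown in miniature. The sketch below (a toy scalar problem with a first-order meta-update, not the paper's or MAML's actual implementation) learns an initialization that adapts well to a new task after a single inner gradient step; the tasks are toy regressions y = a·x with a drawn at random:

```python
import numpy as np

# Tiny first-order MAML-style sketch: one scalar parameter theta, squared
# loss, tasks y = a * x with random a. The meta-update moves theta so that
# one inner gradient step suffices on a freshly sampled task.
rng = np.random.default_rng(3)
x = rng.normal(size=20)

def loss_grad(theta, a):
    """Gradient of the mean squared error of theta * x against a * x."""
    return 2 * np.mean((theta - a) * x**2)

theta, alpha, beta = 5.0, 0.1, 0.01      # poor init; inner/outer step sizes
for _ in range(1000):                    # outer (meta) loop over sampled tasks
    a = rng.uniform(-2, 2)               # sample a task
    theta_adapted = theta - alpha * loss_grad(theta, a)   # inner adaptation step
    theta -= beta * loss_grad(theta_adapted, a)           # first-order meta update

print(theta)
```

Here the best shared initialization is the task mean (zero), and the loop drifts toward it; in the CSI setting, the "tasks" are different propagation environments and the adapted parameters are the full network weights.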
A method to test how well a probabilistic model fits a set of observations has been proposed in [1]. The method combines Stein's identity with the concept of reproducing kernel Hilbert spaces. In the seminar, the student should summarize and present this method.
[1] Q. Liu, J. Lee, and M. Jordan, “A Kernelized Stein Discrepancy for Goodness-of-fit Tests.”
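The discrepancy is easy to compute numerically in one dimension. The sketch below (an illustrative choice of model and bandwidth, not the authors' code) evaluates the kernelized Stein discrepancy for the model p = N(0, 1) with an RBF kernel: the Stein kernel u_p combines the score s(x) = -x of p with kernel derivatives, and its U-statistic over samples is close to zero exactly when the samples match p.

```python
import numpy as np

# Kernelized Stein discrepancy for p = N(0, 1) with an RBF kernel
# (bandwidth h = 1 is an arbitrary choice).
def ksd(x, h=1.0):
    s = -x                                          # score of the standard normal
    d = x[:, None] - x[None, :]
    k = np.exp(-d**2 / (2 * h**2))
    u = (s[:, None] * s[None, :] * k                # s(x) s(y) k(x, y)
         + s[:, None] * (d / h**2) * k              # s(x) * dk/dy
         + s[None, :] * (-d / h**2) * k             # s(y) * dk/dx
         + (1 / h**2 - d**2 / h**4) * k)            # d2k/dxdy
    n = len(x)
    return (u.sum() - np.trace(u)) / (n * (n - 1))  # U-statistic over i != j

rng = np.random.default_rng(4)
good = ksd(rng.normal(0, 1, 500))    # samples from p: discrepancy near zero
bad = ksd(rng.normal(2, 1, 500))     # shifted samples: clearly positive
print(good, bad)
```

Note that only the score of p is needed, not its normalization constant, which is what makes the test attractive for unnormalized models.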
Approximate Message Passing (AMP) [1] is a well-known iterative approach to solving problems that fall into the compressed sensing framework. As has been proposed for similar iterative algorithms, unfolding the iterations of the AMP algorithm into the layers of a deep neural network can improve its reconstruction properties [2]. For the seminar, the student shall give an introduction to classical AMP methods and explain how unfolding can be used to improve the signal reconstruction.
[1] D. L. Donoho, A. Maleki, and A. Montanari, “Message Passing Algorithms for Compressed Sensing.”
[2] M. Borgerding, P. Schniter, and S. Rangan, “AMP-Inspired Deep Networks for Sparse Linear Inverse Problems.”
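A minimal classical AMP iteration can be written in a few lines. The sketch below (noiseless toy problem with illustrative parameter choices) recovers a sparse vector from y = A x via soft-thresholding; the Onsager correction term added to the residual is what distinguishes AMP from plain iterative soft thresholding, and it is precisely the step sizes and thresholds of such iterations that unfolded networks learn from data:

```python
import numpy as np

# Classical AMP for sparse recovery from y = A x (noiseless, toy sizes).
rng = np.random.default_rng(5)
n, m, k = 200, 100, 10                       # signal length, measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)     # i.i.d. Gaussian sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 3, k)
y = A @ x_true

def soft(v, t):
    """Soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, z = np.zeros(n), y.copy()
for _ in range(30):
    r = x + A.T @ z                          # pseudo-data (matched filter step)
    tau = np.linalg.norm(z) / np.sqrt(m)     # threshold from residual energy
    x_new = soft(r, tau)
    onsager = z * np.count_nonzero(x_new) / m   # Onsager correction term
    z = y - A @ x_new + onsager
    x = x_new

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(err)
```

Without the Onsager term, the effective noise in r would not stay Gaussian across iterations and convergence degrades noticeably, which is a good point to demonstrate in the talk.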
Unlike conventional downlink strategies, which transmit exclusively private messages to the users, the rate splitting (RS) approach divides the message of each user into a private part and a common part. The common part is decoded by all users, whilst the private part can only be decoded by the intended user. With an appropriate power allocation between the private and common messages, this method is shown to achieve a higher sum Degrees-of-Freedom (DoF) than conventional strategies. It is also shown to offer robustness in large-scale systems with imperfect channel state information at the transmitter (CSIT) [1].
The student should look into the RS approach and investigate the so-called hierarchical RS (HRS) proposed in [2] for the case of imperfect CSIT. Furthermore, the student is supposed to present the limitations and challenges of this strategy.
[1] M. Dai, B. Clerckx, D. Gesbert, and G. Caire, “A Rate Splitting Strategy for Massive MIMO with Imperfect CSIT.”
[2] M. Dai, B. Clerckx, D. Gesbert, and G. Caire, “A Hierarchical Rate Splitting Strategy for FDD Massive MIMO under Imperfect CSIT.”
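The achievable-rate mechanics of rate splitting can be illustrated on scalar channels. The sketch below (a two-user toy with hand-picked gains and power split, not the precoder designs of the papers) computes the RS sum rate: the common stream is decoded first by both users, so its rate is limited by the weaker user; each user then cancels it and decodes its private stream, seeing only the other private stream as interference.

```python
import numpy as np

# Two-user rate-splitting toy (scalar channels, assumed gains and powers).
g1, g2 = 1.0, 0.6           # channel gains of user 1 and user 2
Pc, P1, P2 = 6.0, 2.0, 2.0  # power split: common stream + two private streams
N0 = 1.0                    # noise power

# Common stream: decoded first, treating both private streams as noise;
# it must be decodable by BOTH users, hence the min.
Rc = min(np.log2(1 + g1 * Pc / (N0 + g1 * (P1 + P2))),
         np.log2(1 + g2 * Pc / (N0 + g2 * (P1 + P2))))
# Private streams: common part cancelled, other user's private part is noise.
R1 = np.log2(1 + g1 * P1 / (N0 + g1 * P2))
R2 = np.log2(1 + g2 * P2 / (N0 + g2 * P1))
sum_rate_rs = Rc + R1 + R2

# Conventional baseline: same total power, all of it on private streams,
# so the streams fully interfere with each other.
R1c = np.log2(1 + g1 * 5.0 / (N0 + g1 * 5.0))
R2c = np.log2(1 + g2 * 5.0 / (N0 + g2 * 5.0))
sum_rate_conv = R1c + R2c
print(sum_rate_rs, sum_rate_conv)
```

In this interference-limited regime the baseline saturates while the RS split does not, which mirrors the DoF advantage discussed above; the papers generalize this to massive MIMO precoding under imperfect CSIT.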