Seminar on Topics in Signal Processing
|Language of instruction|English|
|Position within curricula|See TUMonline|
- 18.10.2019 13:15-14:45 0406, Seminarraum
- 25.10.2019 13:15-14:45 0406, Seminarraum
- 08.11.2019 13:15-14:45 0406, Seminarraum
- 15.11.2019 13:15-14:45 0406, Seminarraum
- 22.11.2019 13:15-14:45 0406, Seminarraum
- 29.11.2019 13:15-14:45 0406, Seminarraum
- 06.12.2019 13:15-14:45 0406, Seminarraum
- 13.12.2019 13:15-14:45 0406, Seminarraum
- 20.12.2019 13:15-14:45 0406, Seminarraum
- 10.01.2020 13:15-14:45 0406, Seminarraum
- 17.01.2020 13:15-14:45 0406, Seminarraum
- 24.01.2020 13:15-14:45 0406, Seminarraum
- 31.01.2020 13:15-14:45 0406, Seminarraum
- 07.02.2020 13:15-14:45 0406, Seminarraum
Note: Please use TUMonline to register for the seminar: https://campus.tum.de/tumonline/lv.detail?clvnr=950112728
Teaching and learning methods
The kick-off meeting for the seminar is on 18.10.19 at 13:15 in room 0406; attendance is mandatory for registration due to the limited number of places.
Timetable for the course:
- Seminar on “How to write a scientific paper and give a presentation?”
- Private meetings with your supervisor
- First draft paper submission
- Final paper submission
Available topics are given below with additional information.
Autonomous vehicles have been developed with increasing success in recent years. Especially in the transition phase towards fully autonomous driving, however, it is very unlikely that such systems will never fail. There are several options for handling such failures: returning control to the driver, which is not suitable for driverless vehicles; stopping the vehicle safely; or resolving the failure with a remote human driver. The last approach poses major challenges in terms of the network, delay, infrastructure, and the general definition of application scenarios. In this topic we will investigate the challenges and issues of resolving autonomous driving failures with teleoperation.
Supervision: Markus Hofbauer (email@example.com)
 L. Kang, W. Zhao, B. Qi, and S. Banerjee, “Augmenting Self-Driving with Remote Control: Challenges and Directions,” in Proceedings of the 19th International Workshop on Mobile Computing Systems & Applications - HotMobile ’18, Tempe, Arizona, USA, 2018, pp. 19–24.
 L. Kang, H. Qiu, P. Liu, and S. Banerjee, “AutoMice: A Testbed Framework for Self-Driving Systems,” p. 8.
 A. Gohar, A. Raza, and S. Lee, “A Distributed Remote Driver Selection for Cost Efficient and Safe Driver Handover,” in 2018 International Conference on Information and Communication Technology Convergence (ICTC), Jeju, 2018, pp. 801–804.
 J.-M. Georg, J. Feiler, F. Diermeyer, and M. Lienkamp, “Teleoperated Driving, a Key Technology for Automated Driving? Comparison of Actual Test Drives with a Head Mounted Display and Conventional Monitors,” in 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, 2018, pp. 3403–3408.
In a task as challenging as autonomous driving, failures are inevitable. Failures can originate from different tasks such as object detection, image segmentation, and trajectory planning. One approach to mitigating this problem is to predict a failure of both a given task and the entire system far enough in advance to take appropriate countermeasures. In this seminar, relevant papers focusing on detecting critical situations seconds in advance are to be researched. This includes predicting the failure of individual tasks, but also predicting correlated events such as hard braking.
Supervision: Christopher Kuhn (firstname.lastname@example.org)
 Arguing Machines: Human Supervision of Black Box AI Systems That Make Life-Critical Decisions, Fridman et al. 2017
 Failure Prediction for Autonomous Driving, Hecker et al. 2018
 Drive2Vec: Multiscale State-Space Embedding of Vehicular Sensor Data, Hallac et al. 2018
 Evaluating Uncertainty Quantification in End-to-End Autonomous Driving Control, Michelmore et al. 2018
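One simple failure-prediction signal, underlying the Arguing Machines paper above, is the disagreement between two independently trained models. The following sketch is only an illustration of that idea; the predictions, the threshold value, and the function name are hypothetical placeholders, not taken from any of the papers.

```python
import numpy as np

def disagreement_alarm(pred_a, pred_b, threshold=0.3):
    """Flag frames where two redundant steering predictors disagree.

    pred_a, pred_b: per-frame steering angles from two independently
    trained models; a large absolute difference is treated as a proxy
    for an imminent failure of the driving system.
    """
    disagreement = np.abs(np.asarray(pred_a) - np.asarray(pred_b))
    return disagreement > threshold  # True = hand control to the human

# Hypothetical per-frame steering predictions from two models
a = [0.10, 0.05, 0.80, 0.02]
b = [0.12, 0.04, 0.10, 0.03]
print(disagreement_alarm(a, b))  # only the third frame raises an alarm
```

In practice the threshold would be tuned on held-out driving data so that the alarm rate matches how often a human supervisor can realistically intervene.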
An important part of autonomous driving is determining the exact position of the vehicle on the road. Various sensors and systems are used for this purpose. The best known is probably GPS, but it is not precise enough and not always available (e.g. in tunnels). In addition, laser sensors, so-called LiDARs, are used to create a map of the environment and simultaneously localize the vehicle within it. This is called Simultaneous Localization and Mapping (SLAM). The disadvantage of LiDARs is that they are very expensive and complex. In recent years, more and more SLAM systems have been developed that use cameras to capture the environment (visual SLAM systems). These mainly differ in the number of cameras (monocular or stereo) and the type of information (features, dense) used to build the map. However, visual SLAM still has many challenges to solve, such as low light, motion blur, and loss of tracking.
Supervision: Sebastian Eger (email@example.com)
 N. Zikos, V. Petridis, “6-DoF Low Dimensionality SLAM (L-SLAM)”;
 D. Ball et al., “OpenRatSLAM: an open source brain-based SLAM system”;
 R. Mur-Artal, J. D. Tardos, “Fast relocalisation and loop closing in keyframe-based SLAM”;
 J. Engel et al., “LSD-SLAM: Large-Scale Direct Monocular SLAM”.
The ability to lead a vehicle to a target location is a basic functionality required for autonomous driving. While path planning is a topic that has been researched for decades (e.g. A*, RRT, PRM), the aim of this seminar is to investigate recent advances in reinforcement learning for path planning.
Supervision: Martin Piccolrovazzi (firstname.lastname@example.org)
 You et al.: Advanced planning for autonomous vehicles using reinforcement learning and deep inverse reinforcement learning
 Pflueger et al.: Rover-IRL: Inverse Reinforcement Learning with Soft Value Iteration Networks for Planetary Rover Path Planning
 Lee et al.: PRM-RL: Long range robotic navigation tasks by combining reinforcement learning and sampling based planning
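For orientation, the classical planners mentioned above reduce to short search routines; A*, for example, is a priority-queue search with an admissible heuristic. The grid, start, and goal below are hypothetical toy values for illustration.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = obstacle).

    Uses the Manhattan distance as an admissible heuristic, so the
    returned path is optimal in the number of grid steps.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]  # (f, g, node, path)
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                heapq.heappush(open_set,
                               (g + 1 + h((r, c)), g + 1, (r, c), path + [(r, c)]))
    return None  # no path exists

# Toy 3x3 map: the middle row is blocked except for the rightmost cell
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

The reinforcement learning approaches in the papers above replace this explicit graph search with a learned policy or value function, which is what makes them attractive in high-dimensional or partially known environments.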
The high rate of multidimensional sensory data makes localization algorithms in autonomous or semi-autonomous systems computationally expensive. Research shows that not all sensory information at its highest update rate is required during the entire process for such a system to produce reliable output. Attention control is one effective way to reduce the data rate in high-dimensional sensory information processing. A bottom-up attention model, for instance, can improve the result of sensor fusion algorithms by concentrating on the most valuable sensory data based on the dynamics of the vehicle motion. The aim of this work is to investigate state-of-the-art attention control models that can be adapted to multidimensional sensory data acquisition systems and to compare them across different modalities.
Supervision: Mojtaba Karimi (email@example.com)
 Wu, Junfeng, Qing-Shan Jia, Karl Henrik Johansson, and Ling Shi. "Event-based sensor data scheduling: Trade-off between communication rate and estimation quality." IEEE Transactions on automatic control 58, no. 4 (2012): 1041-1046.
 Zhang, Xiang, Lina Yao, Chaoran Huang, Sen Wang, Mingkui Tan, Guodong Long, and Can Wang. "Multi-modality sensor data classification with selective attention." arXiv preprint arXiv:1804.05493 (2018).
 Ouerhani, Nabil, Alexandre Bur, and Heinz Hügli. "Visual attention-based robot self-localization." In Proceeding of European Conference on Mobile Robotics, pp. 8-13. 2005.
 Tamiz, Mohsen, Mojtaba Karimi, Ismaiel Mehrabi, and Saeed Shiry Ghidary. "A novel attention control modeling method for sensor selection based on fuzzy neural network learning." In 2013 First RSI/ISM International Conference on Robotics and Mechatronics (ICRoM), pp. 7-13. IEEE, 2013.
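The rate/quality trade-off studied by Wu et al. above can be illustrated with a send-on-delta rule: a sensor reading is transmitted only when it differs enough from the last transmitted value. This is a deliberately simplified sketch, not the scheduler from the paper; the readings and threshold are hypothetical.

```python
def event_triggered_schedule(measurements, threshold):
    """Event-based sensor scheduling: transmit a reading only when it
    deviates from the last transmitted value by more than `threshold`.

    Returns the indices of transmitted samples; a larger threshold
    lowers the communication rate at the cost of estimation quality.
    """
    sent = [0]  # always transmit the first sample
    last = measurements[0]
    for i, z in enumerate(measurements[1:], start=1):
        if abs(z - last) > threshold:
            sent.append(i)
            last = z
    return sent

readings = [0.0, 0.05, 0.07, 0.9, 0.95, 2.0]
print(event_triggered_schedule(readings, threshold=0.5))  # → [0, 3, 5]
```

Attention control generalizes this idea from a scalar threshold to a learned relevance measure over multiple sensor modalities.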
One of the key problems of autonomous driving is understanding the environment. The environment can be recorded with different kinds of sensors. Even though one prominent car manufacturer promises to do this with cameras only, from a security and safety point of view one should use several modalities. Multi-modal object detection shows better performance and is more robust to external influences such as weather conditions. In this work the student should present basic concepts of fusing different modalities, explain their main problems, and compare their results.
Supervision: Michael Adam (Michael.firstname.lastname@example.org)
 J. Ku, M. Mozifian, J. Lee, A. Harakeh, and S. L. Waslander, “Joint 3d proposal generation and object detection from view aggregation,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, pp. 1–8.
 A. Asvadi, L. Garrote, C. Premebida, P. Peixoto, and U. J. Nunes, “Multimodal vehicle detection: fusing 3D-LIDAR and color camera data,” Pattern Recognition Letters, vol. 115, pp. 20–29, 2018.
 X. Chen, H. Ma, J. Wan, B. Li, and T. Xia, “Multi-view 3d object detection network for autonomous driving,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1907–1915.
One of the main challenges of real-time teledriving applications is the communication delay inherently introduced by the network. Fixed delay due to physical limitations and variable delay due to changing network congestion deteriorate remote driving sessions. As both the teleoperator's perception and the command execution are delayed, the agility of the teleoperated vehicle decreases, compromising safety in urban scenarios. Therefore, various methods have been proposed to predict the future states of the teleoperated vehicle or the future control inputs of the teleoperator in order to compensate for the network delay. In this topic, we will investigate different prediction techniques and how they improve teleoperated driving sessions.
Supervision: Furkan Kaynar (email@example.com)
 Y. Zheng, M. J. Brudnak, P. Jayakumar, J. L. Stein, and T. Ersal, “A Predictor-Based Framework for Delay Compensation in Networked Closed-Loop Systems,” IEEE/ASME Transactions on Mechatronics, vol. 23, no. 5, pp. 2482–2493, 2018.
 X. Ge, Y. Zheng, M. J. Brudnak, P. Jayakumar, J. L. Stein, and T. Ersal, “Analysis of a Model-Free Predictor for Delay Compensation in Networked Systems,” Advances in Delays and Dynamics Time Delay Systems, pp. 201–215, 2017.
 R. Liu, D. Kwak, S. Devarakonda, K. Bekris, and L. Iftode, “Investigating Remote Driving over the LTE Network,” Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications - AutomotiveUI 17, 2017.
 A. Hosseini and M. Lienkamp, “Predictive safety based on track-before-detect for teleoperated driving through communication time delay,” 2016 IEEE Intelligent Vehicles Symposium (IV), 2016.
 A. Kuzu, S. Bogosyan, and M. Gokasan, “Predictive Input Delay Compensation with Grey Predictor for Networked Control System,” International Journal of Computers Communications & Control, vol. 11, no. 1, p. 67, 2016.
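As a minimal illustration of predictor-based delay compensation, the teleoperator display can extrapolate the last received vehicle state across the measured delay. The constant-velocity model below is a deliberate simplification of the predictors in the papers above, and all numbers are hypothetical.

```python
def predict_state(position, velocity, delay):
    """Model-based delay compensation: extrapolate the last received
    vehicle state forward by the measured network delay, assuming a
    constant-velocity motion model over that interval (in meters,
    meters per second, and seconds).
    """
    return position + velocity * delay

# Hypothetical values: last reported position 10.0 m along the road,
# speed 15 m/s, measured network delay 0.2 s
displayed = predict_state(10.0, 15.0, 0.2)
print(displayed)  # → 13.0
```

Real predictors additionally model steering dynamics and the uncertainty of the delay estimate, since over- or under-predicting the state can itself destabilize the control loop.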
The reliable detection of road lanes, barriers, and other road users is crucial for autonomous driving. The performance of these tasks usually relies on high-quality data captured by the sensors, e.g. images, videos, and depth maps. However, the sensors of a modern autonomous vehicle, especially its cameras, malfunction under visually impairing weather conditions such as sun glare, rain, snowfall, haze, and fog. Images and videos distorted by rain, snow, fog, or poor illumination dramatically decrease the performance of high-level computer vision tasks such as image segmentation and recognition. To make autonomous driving practical under all weather conditions, these problems must be solved. In this seminar topic, we will investigate the impact of adverse weather conditions on sensor data and explore potential solutions to alleviate or mitigate these negative effects.
Supervision: Kai Cui (firstname.lastname@example.org)
 Zang, Shizhe, et al. "The Impact of Adverse Weather Conditions on Autonomous Vehicles: How Rain, Snow, Fog, and Hail Affect the Performance of a Self-Driving Car." IEEE Vehicular Technology Magazine 14.2 (2019): 103-111.
 Porav, Horia, Tom Bruls, and Paul Newman. "I Can See Clearly Now: Image Restoration via De-Raining." arXiv preprint arXiv:1901.00893 (2019).
 Sakaridis, Christos, Dengxin Dai, and Luc Van Gool. "Semantic foggy scene understanding with synthetic data." International Journal of Computer Vision 126.9 (2018): 973-992.
 Das, Anik, Ali Ghasemzadeh, and Mohamed M. Ahmed. "Analyzing the effect of fog weather conditions on driver lane-keeping performance using the SHRP2 naturalistic driving study data." Journal of safety research 68 (2019): 71-80.