Seminar on Topics in Signal Processing

Lecturer (assistant)
Number: 240226687
Type: Seminar
Duration: 3 SWS
Term: Winter semester 2019/20
Language of instruction: English
Position within curricula: See TUMonline
Dates: See TUMonline

Admission information

See TUMonline
Note: Please use TUMonline to register for the seminar: https://campus.tum.de/tumonline/lv.detail?clvnr=950112728

Objectives

Every participant works on his/her own topic. The goal of the seminar is to train and enhance the ability to work independently on a scientific topic. Every participant is supervised individually by an experienced researcher. This supervisor helps the student to get started, provides links to the relevant literature and gives feedback and advice on draft versions of the paper and the presentation slides.

Description

The major goals of the seminar are to learn how to do scientific research and to learn and practice presentation techniques. Each student has to prepare a scientific talk about the topic he or she has registered for. The students have to collect the required literature, understand its contents, and prepare a presentation about it that summarizes the topic.

Teaching and learning methods

The main teaching methods are:
- Computer-based presentations by the student
- Work with high-quality and recent scientific publications

Examination

- Scientific paper (30%)
- Interaction with the supervisor and working attitude (20%)
- Presentation and discussion (50%)

Links

Main subject for WS19/20: Autonomous Driving Methods

The kick-off meeting for the seminar takes place on 18.10.19 at 13:15 in room 0406. Attendance is mandatory for registration because the number of places is limited.

Available topics are given below with additional information.

 

Teleoperation for Autonomous Driving Failures

Autonomous vehicles have been developed with increasing success in recent years. Especially in the transition phase towards fully autonomous driving, however, it is very unlikely that such systems will never fail. There are different options for handling such failures: returning control to the driver, which is not suitable for driverless vehicles, or stopping the vehicle safely. Another option is to resolve such failures with a remote human driver. This approach poses major challenges in terms of the network, delay, infrastructure, and the general definition of application scenarios. In this topic, we will investigate the challenges and issues of resolving autonomous driving failures with teleoperation.

Supervision: Markus Hofbauer (markus.hofbauer@tum.de)

References:

[1] L. Kang, W. Zhao, B. Qi, and S. Banerjee, "Augmenting Self-Driving with Remote Control: Challenges and Directions," in Proceedings of the 19th International Workshop on Mobile Computing Systems & Applications (HotMobile '18), Tempe, Arizona, USA, 2018, pp. 19–24.

[2] L. Kang, H. Qiu, P. Liu, and S. Banerjee, "AutoMice: A Testbed Framework for Self-Driving Systems," 8 pages.

[3] A. Gohar, A. Raza, and S. Lee, "A Distributed Remote Driver Selection for Cost Efficient and Safe Driver Handover," in 2018 International Conference on Information and Communication Technology Convergence (ICTC), Jeju, 2018, pp. 801–804.

[4] J.-M. Georg, J. Feiler, F. Diermeyer, and M. Lienkamp, "Teleoperated Driving, a Key Technology for Automated Driving? Comparison of Actual Test Drives with a Head Mounted Display and Conventional Monitors," in 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, 2018, pp. 3403–3408.

Early Failure Prediction in Autonomous Driving

In a task as challenging as autonomous driving, failures are inevitable. Failures can originate from different tasks such as object detection, image segmentation, and trajectory planning. One approach to mitigating this problem is to predict a failure of a given task, or of the entire system, far enough in advance to take appropriate countermeasures. In this seminar, relevant papers focusing on detecting critical situations seconds in advance should be surveyed. This includes predicting the failure of individual tasks, but also predicting correlated events such as hard braking.
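
As a first orientation, the sketch below shows the core idea of such a monitor in its simplest form: watching a task's per-frame confidence and raising an alarm when it stays low over a short window. The class name, window size, and threshold are illustrative choices, not taken from the referenced papers, which learn such predictors from data:

# Minimal sketch of a confidence-based failure monitor (illustrative only;
# the referenced papers use learned predictors rather than a fixed threshold).
from collections import deque

class FailureMonitor:
    """Raises a warning when a task's confidence stays low over a time window."""

    def __init__(self, window_size=30, threshold=0.6):
        self.scores = deque(maxlen=window_size)  # e.g. 30 frames of history
        self.threshold = threshold

    def update(self, confidence):
        """Feed one per-frame confidence score in [0, 1]; return True on alarm."""
        self.scores.append(confidence)
        if len(self.scores) < self.scores.maxlen:
            return False  # not enough history yet
        mean = sum(self.scores) / len(self.scores)
        return mean < self.threshold

# Usage: feed e.g. the detector's softmax confidence frame by frame.
monitor = FailureMonitor(window_size=3, threshold=0.6)
for score in [0.9, 0.8, 0.5, 0.4]:  # toy stream
    if monitor.update(score):
        print("possible failure ahead: hand over control or slow down")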

Supervision: Christopher Kuhn (christopher.kuhn@tum.de)

References:

[1] Arguing Machines: Human Supervision of Black Box AI Systems That Make Life-Critical Decisions, Fridman et al. 2017

[2] Failure Prediction for Autonomous Driving, Hecker et al. 2018

[3] Drive2Vec: Multiscale State-Space Embedding of Vehicular Sensor Data, Hallac et al. 2018

[4] Evaluating Uncertainty Quantification in End-to-End Autonomous Driving Control, Michelmore et al. 2018

Visual SLAM Methods

An important part of autonomous driving is determining the exact position of the vehicle on the road. Various sensors and systems are used for this purpose. The best known is probably GPS, but it is not precise enough and not always available (e.g., in tunnels). In addition, laser sensors, so-called LiDARs, are used to create a map of the environment and simultaneously localize the vehicle within it. This is called Simultaneous Localization and Mapping (SLAM). The disadvantage of LiDARs is that they are expensive and complex. In recent years, more and more SLAM systems have been developed that use cameras to capture the environment (Visual SLAM systems). These systems mainly differ in the number of cameras (monocular or stereo) and the type of information (sparse features or dense intensities) used to build the map. However, Visual SLAM still faces many challenges, such as low light, motion blur, and loss of tracking.
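
To give a flavor of the feature-based approach, the following sketch estimates the relative camera pose between two consecutive frames with OpenCV, which is the core front-end step of systems like ORB-SLAM. It is a minimal illustration under the assumption of a known camera matrix K; full SLAM systems add mapping, loop closing, and relocalization:

# Minimal sketch of the front end of a feature-based monocular visual SLAM /
# odometry system (illustrative; the referenced systems are far more involved).
# Assumes two consecutive grayscale frames and a known 3x3 camera matrix K.
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Estimate rotation R and (unit-scale) translation t between two frames."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Match binary ORB descriptors with Hamming distance.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC rejects outlier matches; E encodes the relative camera motion.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t  # note: monocular scale is unobservable (t has unit norm)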

Supervision: Sebastian Eger (sebastian.eger@tum.de)

References:

[1] N. Zikos, V. Petridis, “6-DoF Low Dimensionality SLAM (L-SLAM)”;

[2] D. Ball et al., “OpenRatSLAM: an open source brain-based SLAM system”;

[3] R. Mur-Artal, J. D. Tardos, “Fast relocalisation and loop closing in keyframe-based SLAM”;

[4] J. Engel et al., “LSD-SLAM: Large-Scale Direct Monocular SLAM”.

 

Path Planning

The ability to lead a vehicle to a target location is the basic functionality required for autonomous driving. While path planning has been researched for decades (e.g., A*, RRT, PRM), the aim of this seminar is to investigate recent advances in reinforcement learning for path planning.
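
For reference, a minimal implementation of the classical A* baseline on a 2D occupancy grid is sketched below (grid, start, and goal are toy inputs; the RL-based planners in the references replace the hand-crafted cost and heuristic with learned components):

# Minimal A* grid planner, as a classical baseline next to the RL approaches
# discussed in the references. 0 = free cell, 1 = obstacle.
import heapq

def astar(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}  # came_from doubles as the closed set
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue  # already expanded with a shorter path
        came_from[node] = parent
        if node == goal:  # reconstruct path by walking parents backwards
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                if g + 1 < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = g + 1
                    heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, node))
    return None

print(astar([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0)))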

Supervision: Martin Piccolrovazzi (martin.piccolrovazzi@tum.de)

References:

[1] You et al.: Advanced planning for autonomous vehicles using reinforcement learning and deep inverse reinforcement learning

[2] Pflueger et al.: Rover-IRL: Inverse Reinforcement Learning with Soft Value Iteration Networks for Planetary Rover Path Planning

[3] Faust et al.: PRM-RL: Long-range robotic navigation tasks by combining reinforcement learning and sampling-based planning

 

Personalized Autonomous Driving: Learning from Driver Demonstrations

It is expected that autonomous vehicles capable of driving without human supervision will reach the market within the next decade. For user acceptance, such vehicles should not only be safe and reliable but also provide a comfortable user experience. However, the individual perception of comfort may vary considerably among users. A learning-from-demonstration approach, which allows the user to simply demonstrate the desired style by driving the car manually, may help reproduce different driving styles. Learning from Demonstration (LfD) can jointly improve the driving experience and safety by exploiting driver demonstrations in a variety of situations. In this work, the student should investigate the LfD concept for autonomous driving and how it can be employed to personalize the driving experience.
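
As a toy illustration of the simplest LfD variant, behavioral cloning, the sketch below fits a linear steering controller to a hypothetical demonstration log via regularized least squares. The feature choice and the numbers are invented for illustration; the referenced works use inverse RL and considerably richer models:

# Minimal behavioral-cloning sketch: fit a linear controller to logged driver
# demonstrations (illustrative; the references use IRL / richer models).
import numpy as np

# Hypothetical demonstration log: each row is a state feature vector,
# e.g. [lateral offset, heading error, speed], with the driver's steering angle.
states = np.array([[0.5, 0.10, 12.0],
                   [0.2, 0.05, 13.0],
                   [-0.4, -0.08, 11.0],
                   [0.0, 0.00, 12.5]])
steering = np.array([-0.12, -0.05, 0.10, 0.00])

# Ridge-regularized least squares: w = argmin ||X w - y||^2 + lam ||w||^2
X = np.hstack([states, np.ones((len(states), 1))])  # add a bias term
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ steering)

def policy(state):
    """Imitated steering command for a new state."""
    return np.append(state, 1.0) @ w

print(policy([0.3, 0.06, 12.0]))  # steers gently back towards the lane center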

Supervision: Basak Gülecyüz (basak.guelecyuez@tum.de)

References:

[1] M. Kuderer, S. Gulati and W. Burgard, "Learning driving styles for autonomous vehicles from demonstration," 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, 2015

[2] S. Lefèvre, A. Carvalho and F. Borrelli, "A Learning-Based Framework for Velocity Control in Autonomous Driving," in IEEE Transactions on Automation Science and Engineering, vol. 13, no. 1, pp. 32-42, Jan. 2016.

[3] C. Vallon, Z. Ercan, A. Carvalho and F. Borrelli, "A machine learning approach for personalized autonomous lane change initiation and control," 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, 2017

[4] D. Silver, J. A. Bagnell, and A. Stentz, "Learning Autonomous Driving Styles and Maneuvers from Expert Demonstration," ISER, 2012.

[5] A. Mehta, A. Subramanian, and A. Subramanian, "Learning End-to-end Autonomous Driving using Guided Auxiliary Supervision," arXiv:1808.10393, 2018.

Multi-modal Object Detection

One of the key problems of autonomous driving is understanding the environment. The environment itself can be recorded with different kinds of sensors. Even though one prominent car manufacturer promises to do this with cameras only, from a safety and security point of view one should use different modalities. Multi-modal object detection shows better performance and is more robust to external influences such as weather conditions. In this work, the student should present basic concepts of fusing different modalities, explain their main problems, and compare their results.
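
To make the fusion idea concrete, the sketch below implements a simple late-fusion scheme that merges per-modality 2D detections by overlap. This is an illustrative baseline with invented inputs; the referenced networks such as MV3D and AVOD fuse features inside the network rather than final detections:

# Minimal late-fusion sketch: merge 2D boxes detected independently in the
# camera image and in the LiDAR projection (illustrative baseline only).
def iou(a, b):
    """Intersection over union of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def late_fusion(cam_dets, lidar_dets, iou_thr=0.5):
    """Each detection is (box, score); boost agreement, keep the rest."""
    fused, used = [], set()
    for box_c, s_c in cam_dets:
        best, best_iou = None, iou_thr
        for i, (box_l, s_l) in enumerate(lidar_dets):
            if i not in used and iou(box_c, box_l) >= best_iou:
                best, best_iou = i, iou(box_c, box_l)
        if best is not None:  # both modalities agree: combine the evidence
            used.add(best)
            s_l = lidar_dets[best][1]
            fused.append((box_c, 1 - (1 - s_c) * (1 - s_l)))
        else:  # single-modality detection: keep with lowered confidence
            fused.append((box_c, s_c * 0.8))
    fused += [(b, s * 0.8) for i, (b, s) in enumerate(lidar_dets) if i not in used]
    return fused

cam = [((10, 10, 50, 50), 0.7)]
lid = [((12, 8, 52, 48), 0.6), ((100, 100, 140, 140), 0.5)]
print(late_fusion(cam, lid))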

Supervision: Michael Adam (michael.adam@tum.de)

References:

[1] J. Ku, M. Mozifian, J. Lee, A. Harakeh, and S. L. Waslander, “Joint 3d proposal generation and object detection from view aggregation,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, pp. 1–8.

[2] A. Asvadi, L. Garrote, C. Premebida, P. Peixoto, and U. J. Nunes, “Multimodal vehicle detection: fusing 3D-LIDAR and color camera data,” Pattern Recognition Letters, vol. 115, pp. 20–29, 2018.

[3] X. Chen, H. Ma, J. Wan, B. Li, and T. Xia, “Multi-view 3d object detection network for autonomous driving,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1907–1915.

Predictive Methods for Network Delay Compensation in Teledriving

One of the main challenges of real-time teledriving applications is the communication delay inherently introduced by the network. Fixed delay due to physical limitations and variable delay due to changing network congestion deteriorate remote driving sessions. As both the teleoperator's perception and the command execution are delayed, the agility of the teleoperated vehicle decreases, which compromises safety in urban scenarios. Therefore, various methods have been proposed to predict the future states of the teleoperated vehicle or the future control inputs of the teleoperator in order to compensate for the network delay. In this topic, we will investigate different prediction techniques and how they improve the teleoperated driving session.
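
As a minimal example of the prediction idea, the sketch below dead-reckons the last received vehicle state ahead by the measured delay under a constant speed and yaw-rate assumption. It is an illustrative model-free predictor with invented inputs; the referenced frameworks use more elaborate predictor structures:

# Minimal sketch of state prediction for delay compensation: extrapolate the
# last received vehicle state by the measured delay (illustrative only).
import math

def predict_state(x, y, heading, speed, yaw_rate, delay_s):
    """Dead-reckon the vehicle pose delay_s seconds ahead, assuming constant
    speed and yaw rate (circular-arc or straight-line motion)."""
    if abs(yaw_rate) < 1e-6:  # straight-line motion
        return (x + speed * delay_s * math.cos(heading),
                y + speed * delay_s * math.sin(heading),
                heading)
    new_heading = heading + yaw_rate * delay_s
    r = speed / yaw_rate  # turning radius
    return (x + r * (math.sin(new_heading) - math.sin(heading)),
            y - r * (math.cos(new_heading) - math.cos(heading)),
            new_heading)

# The operator's display would render the vehicle at the predicted pose:
print(predict_state(x=0.0, y=0.0, heading=0.0, speed=10.0,
                    yaw_rate=0.2, delay_s=0.3))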

Supervision: Furkan Kaynar (furkan.kaynar@tum.de)

References:

[1] Y. Zheng, M. J. Brudnak, P. Jayakumar, J. L. Stein, and T. Ersal, “A Predictor-Based Framework for Delay Compensation in Networked Closed-Loop Systems,” IEEE/ASME Transactions on Mechatronics, vol. 23, no. 5, pp. 2482–2493, 2018.

[2] X. Ge, Y. Zheng, M. J. Brudnak, P. Jayakumar, J. L. Stein, and T. Ersal, “Analysis of a Model-Free Predictor for Delay Compensation in Networked Systems,” Advances in Delays and Dynamics Time Delay Systems, pp. 201–215, 2017.

[3] R. Liu, D. Kwak, S. Devarakonda, K. Bekris, and L. Iftode, “Investigating Remote Driving over the LTE Network,” Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications - AutomotiveUI 17, 2017.

[4] A. Hosseini and M. Lienkamp, “Predictive safety based on track-before-detect for teleoperated driving through communication time delay,” 2016 IEEE Intelligent Vehicles Symposium (IV), 2016.

[5] A. Kuzu, S. Bogosyan, and M. Gokasan, “Predictive Input Delay Compensation with Grey Predictor for Networked Control System,” International Journal of Computers Communications & Control, vol. 11, no. 1, p. 67, 2016.

 

Vision Enhancement for Autonomous Driving under Adverse Weather Conditions

The reliable detection of road lanes, barriers, and other road users is crucial for autonomous driving. The performance of these tasks usually relies on high-quality data captured by the sensors, e.g., images, videos, and depth maps. However, the sensors of a modern autonomous vehicle, especially cameras, degrade under visually impairing weather conditions such as sun glare, rain, snowfall, haze, and fog. Images and videos distorted by rain, snow, fog, and poor illumination dramatically decrease the performance of high-level computer vision tasks such as image segmentation and recognition. To make autonomous driving practical under all weather conditions, these problems must be solved. In this seminar topic, we will investigate the impact of adverse weather conditions on sensor data and potential solutions to alleviate or mitigate these negative effects.
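
As a simple, non-learned baseline for such enhancement, the sketch below applies CLAHE contrast enhancement to the luminance channel of a degraded image with OpenCV. The file names are placeholders; the referenced de-raining and defogging methods are learned and considerably more capable:

# Minimal vision-enhancement baseline: CLAHE on the lightness channel.
# Assumes an input file 'foggy.png' exists (placeholder name).
import cv2

img = cv2.imread("foggy.png")                      # BGR image
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)         # enhance lightness only
l, a, b = cv2.split(lab)

# Contrast Limited Adaptive Histogram Equalization on the L channel.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l_eq = clahe.apply(l)

enhanced = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
cv2.imwrite("enhanced.png", enhanced)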

Supervision: Kai Cui (kai.cui@tum.de)

References:

[1] Zang, Shizhe, et al. "The Impact of Adverse Weather Conditions on Autonomous Vehicles: How Rain, Snow, Fog, and Hail Affect the Performance of a Self-Driving Car." IEEE Vehicular Technology Magazine 14.2 (2019): 103-111.

[2] Porav, Horia, Tom Bruls, and Paul Newman. "I Can See Clearly Now: Image Restoration via De-Raining." arXiv preprint arXiv:1901.00893 (2019).

[3] Sakaridis, Christos, Dengxin Dai, and Luc Van Gool. "Semantic foggy scene understanding with synthetic data." International Journal of Computer Vision 126.9 (2018): 973-992.

[4] Das, Anik, Ali Ghasemzadeh, and Mohamed M. Ahmed. "Analyzing the effect of fog weather conditions on driver lane-keeping performance using the SHRP2 naturalistic driving study data." Journal of Safety Research 68 (2019): 71-80.