Open Theses
Type: MA (Master's Thesis)
Introspective Sensor Monitoring for Multimodal Object Detection
Description
In multimodal object detection, different sensors such as cameras or LIDAR have different strengths that are combined for optimal detection rates. However, different sensors also have different weaknesses. In this thesis, a monitoring model is trained for each individual sensor using that sensor's past performance. For a new input, the sensor's performance is then predicted based solely on the sensory input. The predicted performance score is used in the subsequent sensor fusion to reduce the impact of challenging sensory readings, allowing the fusion architecture to adapt dynamically and rely more on the other sensors instead.
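To illustrate how such a predicted performance score could enter a late-fusion stage, here is a minimal sketch of confidence-weighted averaging. The function name, the per-class score format, and the weighting scheme are illustrative assumptions, not the actual thesis architecture:

```python
import numpy as np

def fuse_detections(scores_per_sensor, perf_scores):
    """Fuse per-sensor detection confidences, down-weighting sensors
    whose monitoring model predicts poor performance.

    scores_per_sensor: (n_sensors, n_classes) detection confidences
    perf_scores: (n_sensors,) predicted performance in [0, 1]
    """
    scores = np.asarray(scores_per_sensor, dtype=float)
    perf = np.asarray(perf_scores, dtype=float)
    weights = perf / perf.sum()   # normalise predicted performance scores
    return weights @ scores       # weighted average of confidences per class

# Example: the camera monitor predicts degraded performance (0.2, e.g. fog),
# so the fused result leans towards the LIDAR branch.
fused = fuse_detections([[0.9, 0.1],    # camera per-class confidences
                         [0.2, 0.8]],   # LIDAR per-class confidences
                        [0.2, 0.9])     # predicted performance per sensor
```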
Prerequisites
Experience with Machine Learning and Object Detection
Supervisor:
Type: FP, IP
How Well Do Today's Autonomous Driving Models Perform?
Autonomous Driving
Description
This work can be done in German or English
At the current stage of autonomous driving, failures in complex situations are inevitable. A learning-based method to predict such failures could prevent dangerous situations or crashes. However, collecting real-life training data of crashes caused by autonomous vehicles is not feasible. A different solution is to use data from realistic simulations of a self-driving car, such as CARLA [1].
In this project, the objective is to set up available autonomous driving models such as [2, 3] and to use our existing data logging pipeline to evaluate these models' failure cases. The process should then be further improved by extending our logging pipeline.
Tasks
- Improvement of the existing data logging pipeline
- Setup of existing autonomous driving models
- Collection of driving data with the implemented system
- Evaluation of autonomous driving model failures and collection of failure data
References
[1] A. Dosovitskiy et al., "CARLA: An Open Urban Driving Simulator", 2017.
[2] https://github.com/erdos-project/pylot
[3] https://github.com/commaai/research
Prerequisites
- Experience with Python (ROS and Linux)
- Knowledge of Docker would be helpful
- General knowledge of Machine Learning
Supervisor:
Type: FP
Multimodal Object Detection with Introspective Experts
Description
Object detection can be performed with different sensor modalities, such as camera images, LIDAR point clouds or a fusion thereof. Adverse weather conditions such as rain or fog cause different failure cases for different sensor types. In this work, an approach for finding the best-suited model for a given condition is investigated. First, one model per sensor configuration (camera, LIDAR or fusion) is trained separately. The models are then finetuned using only the training images they performed well on, yielding a set of expert models, each specialized on a subset of the training images. Finally, a selection model needs to be designed and trained to choose the expert model most suitable for the current scene. The models can be trained and evaluated in the CARLA simulator, where different weather conditions can be generated easily.
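At inference time, such a selection model can be wired up very simply; the sketch below assumes the selector produces one score per expert (all names and the score format are placeholders, not part of the thesis setup):

```python
def select_expert(expert_outputs, selector_scores):
    """Pick the expert whose selector score is highest for the current scene.

    expert_outputs:  dict mapping expert name -> that expert's detections
    selector_scores: dict mapping expert name -> selection model's score
    Returns (chosen expert name, its detections).
    """
    best = max(selector_scores, key=selector_scores.get)
    return best, expert_outputs[best]

# Example: in heavy rain the selector favours the LIDAR expert.
name, detections = select_expert(
    {"camera": "cam_dets", "lidar": "lidar_dets", "fusion": "fusion_dets"},
    {"camera": -1.2, "lidar": 2.3, "fusion": 0.4},
)
```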
Prerequisites
Knowledge of Deep Learning and Linux
Supervisor:
Type: FP
Failure Prediction for LIDAR-Based Semantic Segmentation
Failure Prediction, Semantic Segmentation, LIDAR, Autonomous Driving
Description
LIDAR sensors capture a scene in 3D while being more robust than cameras to distortions such as rain. They are therefore an important part of autonomous driving, where they can be used for semantic segmentation of the environment. For this, each point in the 3D point cloud is classified as belonging to a semantic class such as "car", "pedestrian" or "road". In a safety-critical application such as driving, knowing when such a classification can be trusted is important. To this end, failure prediction methods such as introspection [1] can be used to predict where the segmentation failed.
In this internship, a state-of-the-art neural network such as [2] will be implemented to perform semantic segmentation of LIDAR point clouds. Afterwards, a state-of-the-art failure prediction approach will be implemented to detect incorrect classifications. The evaluation will be done using the CARLA driving simulator [3]. A reference implementation based on camera input for both semantic segmentation and failure prediction is available for comparison.
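One common way to obtain training targets for such a failure predictor is a per-point error mask comparing the segmentation output with ground truth. This is only a sketch of the target construction; introspection approaches like [1] learn richer signals on top of it:

```python
import numpy as np

def introspection_targets(pred_labels, gt_labels):
    """Binary per-point failure target for a failure-prediction model:
    1 where the predicted semantic class disagrees with ground truth,
    0 where the segmentation is correct.

    pred_labels, gt_labels: (n_points,) integer class ids.
    """
    pred = np.asarray(pred_labels)
    gt = np.asarray(gt_labels)
    return (pred != gt).astype(np.int64)

# Example: point 1 was misclassified (predicted 2, ground truth 0).
targets = introspection_targets([1, 2, 3, 0], [1, 0, 3, 0])
```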
References:
[1] "Introspective Failure Prediction for Semantic Image Segmentation", Kuhn et al., IEEE ITSC 2020
[2] "RangeNet++: Fast and accurate LiDAR semantic segmentation", Milioto et al., IEEE IROS 2019
[3] https://carla.org/
Prerequisites
Basic knowledge of Machine Learning, Python and Linux
Supervisor:
Ongoing Theses
Bachelor's Theses
Reverse Synthesis for Object Detection Failure Prediction
Description
Bounding boxes proposed by object detection networks do not always contain the classified object. In this work, a prediction method for such failures is investigated. The idea is based on reverse synthesis: using the predicted class and the proposed bounding box, an autoencoder is trained for each class to reconstruct the input image patch. If the bounding box does not contain the predicted class, the reconstruction ideally contains traces of the imagined or misclassified object, which can then be detected.
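The decision rule behind this idea can be sketched as a reconstruction-error threshold. The names and the plain per-pixel MSE criterion are illustrative assumptions; the thesis may use a different discrepancy measure for detecting the reconstruction traces:

```python
import numpy as np

def flag_box(patch, class_autoencoder, threshold):
    """Flag a proposed box as a likely failure if the autoencoder trained
    for the predicted class reconstructs the patch poorly.

    class_autoencoder: callable mapping a patch to its reconstruction
    (stands in for the trained per-class autoencoder).
    Returns (is_flagged, reconstruction_error).
    """
    recon = np.asarray(class_autoencoder(patch))
    err = float(np.mean((np.asarray(patch, dtype=float) - recon) ** 2))
    return err > threshold, err

# Example: a perfect reconstruction is not flagged, a poor one is.
patch = np.ones((4, 4))
ok, _ = flag_box(patch, lambda p: p, 0.1)                 # identity: err 0.0
bad, _ = flag_box(patch, lambda p: np.zeros_like(p), 0.1)  # all-zero recon
```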
Prerequisites
Basic knowledge of deep learning and Linux
Supervisor:
Master's Theses
Data and Model Verification for Machine Learning in Autonomous Vehicles
Autonomous Vehicles, Operational Design Domains, Constrained Random Verification
Description
Operational Design Domains (ODDs) represent an integral part of the specification of the expected operating conditions of autonomous vehicles. They provide the relevant environment description and verification criteria for developing autonomous vehicles. Especially for autonomous vehicles employing machine learning, the ODD defines completeness criteria for the data used during learning, as well as scenarios for verifying the performance of such systems.
The aim of this work is to:
• Create an ODD for Semantic Segmentation:
o modelling an ODD language based on JSON (considering expressive power and algorithmic feasibility of the following topics)
o subset of expected features from ongoing standardization activities (on ODDs and ontologies)
o discrete (hierarchical) conditions (like weather in {sunny, rainy, …}) describing the driving environment
o allowing for probability distributions and constraints (e.g. “disallow rain and strong wind at the same time”) on the conditions
• Explore the verification of a dataset and a learnt model against an ODD regarding the following questions:
o How well does a dataset cover an ODD (with or without attached probabilities)? This relates to the topics of constrained random counting/sampling.
o Does discrimination in a dataset translate to the learnt model?
o How well does a model perform if we sample from the ODD with attached probabilities?
• Implement the required tooling (processing the ODD, evaluating data and model against ODD) and apply it to a dataset (Berkeley DeepDrive dataset or synthetic data generated with CARLA) and model for image segmentation.
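As a toy example of what the JSON-based ODD language and a constraint-respecting sampler could look like: the schema, key names, and the rejection-sampling approach below are illustrative assumptions, not the language to be designed in this thesis.

```python
import json
import random

# Illustrative ODD: two discrete conditions with probabilities, plus one
# mutual-exclusion constraint ("disallow rain and strong wind together").
ODD = json.loads("""{
  "conditions": {
    "weather": {"sunny": 0.6, "rainy": 0.4},
    "wind":    {"calm": 0.7, "strong": 0.3}
  },
  "constraints": [{"disallow": {"weather": "rainy", "wind": "strong"}}]
}""")

def violates(sample, odd):
    """True if the sample matches any 'disallow' constraint completely."""
    return any(all(sample.get(k) == v for k, v in c["disallow"].items())
               for c in odd["constraints"])

def sample_scene(odd, rng):
    """Draw each condition from its distribution; reject constraint violations."""
    while True:
        s = {cond: rng.choices(list(vals), weights=list(vals.values()))[0]
             for cond, vals in odd["conditions"].items()}
        if not violates(s, odd):
            return s
```

Rejection sampling is only the simplest option here; the topics of constrained random counting/sampling mentioned above cover more principled approaches.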
Prerequisites
• Python and/or C++ programming
• Machine Learning, Logic, Algorithms (probably in the area of SAT solving for constraint modelling and constrained random verification) and Probability Theory
Contact
Supervisor:
Interdisciplinary Projects
How Well Do Today's Autonomous Driving Models Perform?
Autonomous Driving
Description
This work can be done in German or English in a team of 2-4 members
At the current stage of autonomous driving, failures in complex situations are inevitable. A learning-based method to predict such failures could prevent dangerous situations or crashes. However, collecting real-life training data of crashes caused by autonomous vehicles is not feasible. A different solution is to use data from realistic simulations of a self-driving car, such as CARLA [1].
In this project, the objective is to set up available autonomous driving models such as [2-7] and to use our existing data logging pipeline to evaluate these models' failure cases. The process should then be further improved by extending our logging pipeline with an orchestration layer that manages all other services.
Tasks
- Implementation and integration of an orchestration layer into the existing data logging pipeline
- Setup of existing autonomous driving models
- Collection of driving data with the implemented system
- Evaluation of autonomous driving model failures and collection of failure data
References
[1] A. Dosovitskiy et al., "CARLA: An Open Urban Driving Simulator", 2017.
[2] F. Codevilla, E. Santana, A. M. Lopez, and A. Gaidon, "Exploring the Limitations of Behavior Cloning for Autonomous Driving", 2019.
[3] M. Toromanoff, E. Wirbel, and F. Moutarde, "End-to-End Model-Free Reinforcement Learning for Urban Driving using Implicit Affordances", arXiv:1911.10868 [cs, stat], Nov. 2019.
[4] NVIDIA Drive, https://www.nvidia.com/de-de/self-driving-cars/drive-platform/
[5] https://github.com/tech-rules/DAVE2-Keras
[6] https://github.com/adityaguptai/Self-Driving-Car-
[7] https://github.com/commaai/research
Prerequisites
- Experience with Python (ROS and Linux)
- Knowledge of Docker would be helpful
- General knowledge of Machine Learning
Supervisor:
Forschungspraxis (Research Internship)
Multiview-Consistent ROI Prediction
Autonomous Driving, Region of Interest Prediction, Computer Vision
Description
This work can be done in German or English
In autonomous driving, predicting Regions of Interest (ROIs) allows the car to focus on areas critical to driving. A trained ROI prediction model is available. In an existing driving simulation setup, the ROIs of six cameras are predicted separately. However, the ROIs are not independent if an object appears in multiple views. For example, a traffic light detected as an ROI in one front-facing camera should also be marked as an ROI in the other front-facing cameras if it appears in their field of view as well.
In this internship, the goal is to check whether ROIs appear in more than one view and to project the ROIs into the other frames to make the ROI prediction consistent. Besides ensuring consistency, this can help improve robustness, as the separate ROI prediction models can share their knowledge by projecting their predictions into the other camera frames.
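The core geometric step, projecting a point visible in one camera into another camera's pixel frame, can be sketched with a standard pinhole model. The matrix conventions and names here are assumptions for illustration; the simulator provides the actual camera intrinsics and extrinsics:

```python
import numpy as np

def project_to_camera(p_world, K, T_world_to_cam):
    """Project a 3D world point into a camera's pixel coordinates.

    K: 3x3 intrinsic matrix; T_world_to_cam: 4x4 extrinsic transform.
    Returns (u, v) pixel coordinates, or None if the point lies
    behind the camera.
    """
    p_hom = np.append(np.asarray(p_world, dtype=float), 1.0)
    p_cam = T_world_to_cam @ p_hom          # world -> camera coordinates
    if p_cam[2] <= 0:                        # behind the image plane
        return None
    uv = K @ (p_cam[:3] / p_cam[2])          # pinhole projection
    return uv[:2]

# Example: a point 2 m straight ahead of a camera at the world origin
# projects onto the principal point (64, 64) of a 128x128 image.
K = np.array([[100.0, 0.0, 64.0],
              [0.0, 100.0, 64.0],
              [0.0,   0.0,  1.0]])
uv = project_to_camera([0.0, 0.0, 2.0], K, np.eye(4))
```

An ROI whose projection falls inside another camera's image bounds would then be marked as an ROI in that view as well.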
Prerequisites
- Linux and Python
- Basics of Computer Vision beneficial, but can also be learned during the internship
Supervisor:
Region of Interest Annotation for the Detection of Driving-Relevant Features and ROI Prediction
Description
This work can be done in German or English
For fully immersive telepresence in a vehicle, the operator needs to be aware of all driving-relevant features, which are mostly provided via RGB camera data. However, there remains the uncertainty that the operator did not recognize all driving-relevant information, meaning the operator lacks full situation awareness (SA). To measure the operator's SA, we first need to determine driving-relevant features (cars, pedestrians, traffic signs, ...) and ground-truth data. The main objective of this thesis is to select an existing tool [1] or develop a custom tool for image annotation, and to create a small dataset of labeled data as proposed in [2]. The data are provided by the CARLA simulator [3].
Your tasks:
- Find or develop a Region of Interest annotation tool
- Determine driving-relevant features and classify their importance
- Annotate Regions of Interest
- Optional: Train a simple CNN for ROI prediction
Requirements:
- Experience with Linux/ROS and one programming language (C++, Python or JavaScript)
References:
- [1]: https://github.com/opencv/cvat
- [2]: Y. Xia, D. Zhang, J. Kim, K. Nakayama, K. Zipser, and D. Whitney, "Predicting Driver Attention in Critical Situations", Nov. 2017.
- [3]: http://carla.org/