Speaker: Sungkyu Lee
At the current stage of autonomous driving, failures in complex situations are inevitable. Instead of minimizing the number of failures, the idea of introspection is to learn when a prediction for a given input cannot be trusted. The learned introspective model allows the vehicle to reject sensor inputs for which it is not qualified to make a decision.
In this thesis, the concept of introspective failure prediction is applied in the context of autonomous driving. Large-scale datasets such as BDD100K provide pixel-wise annotated images of urban driving scenes. On this data, an introspective failure prediction model is developed to predict the failures of a state-of-the-art semantic segmentation model.
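The core idea above can be sketched in code: a frame is labeled a "failure" whenever the segmentation model's quality (here measured by mean IoU against the pixel-wise annotations) drops below a threshold, and these binary labels then serve as training targets for the introspective model. The function names, the 0.5 threshold, and the use of mIoU as the quality metric are illustrative assumptions, not necessarily the exact setup of the thesis.

```python
import numpy as np

def miou(pred, gt, num_classes):
    """Mean intersection-over-union between a predicted and a
    ground-truth label map (only over classes present in either)."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 1.0

def failure_labels(preds, gts, num_classes, threshold=0.5):
    """Binary introspection targets: 1 if the segmentation of a frame
    is considered a failure (mIoU below threshold), else 0.
    These labels would supervise the introspective failure predictor."""
    return np.array(
        [int(miou(p, g, num_classes) < threshold) for p, g in zip(preds, gts)]
    )

# Toy example: one correct and one completely wrong segmentation.
gt = np.zeros((4, 4), dtype=int)
good_pred = gt.copy()            # perfect prediction -> mIoU = 1.0
bad_pred = np.ones((4, 4), int)  # every pixel wrong  -> mIoU = 0.0
labels = failure_labels([good_pred, bad_pred], [gt, gt], num_classes=2)
print(labels)  # -> [0 1]
```

An introspective model is then trained to predict these labels directly from the input image (or from internal features of the segmentation network), so that at test time failures can be anticipated without ground-truth annotations.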
Supervisor: Christopher Kuhn