Doctoral Research Seminar on "Refining action segmentation with hierarchical video representations" by Dr. Hyemin Ahn, Institute of Robotics and Mechatronics, German Aerospace Center (DLR), Germany.
In this talk, we introduce the Hierarchical Action Segmentation Refiner (HASR), which can refine temporal action segmentation results from various models by understanding the overall context of a given video in a hierarchical way. When a backbone model for action segmentation estimates how the given video should be segmented, our model extracts segment-level representations from the frame-level features, and then a video-level representation from the segment-level representations. Based on these hierarchical representations, our model can refer to the overall context of the entire video and predict how segment labels that are out of context should be corrected. HASR can be plugged into various action segmentation models (MS-TCN, SSTDA, ASRF) and improves the performance of these state-of-the-art models on three challenging datasets (GTEA, 50Salads, and Breakfast). For example, on the 50Salads dataset, the segmental edit score improves from 67.9% to 77.4% (MS-TCN), from 75.8% to 77.3% (SSTDA), and from 79.3% to 81.0% (ASRF). In addition, our model can refine the segmentation results of unseen backbone models that were not referred to during HASR training. This generalization performance makes HASR an effective tool for boosting existing approaches to temporal action segmentation.
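The hierarchical idea described above can be illustrated with a minimal sketch: given per-frame features and a backbone's per-frame label predictions, pool frame features within each predicted segment to get segment-level representations, then pool those into a single video-level representation. This is a simplified illustration with mean pooling, not the authors' actual HASR architecture; the function names and the pooling choice are assumptions made for clarity.

```python
import numpy as np

def segment_spans(frame_labels):
    """Split a per-frame label sequence into (start, end, label) segments,
    where each segment is a maximal run of identical labels."""
    spans = []
    start = 0
    for t in range(1, len(frame_labels) + 1):
        if t == len(frame_labels) or frame_labels[t] != frame_labels[start]:
            spans.append((start, t, frame_labels[start]))
            start = t
    return spans

def hierarchical_representations(frame_feats, frame_labels):
    """Mean-pool frame features within each predicted segment (segment level),
    then mean-pool the segment vectors into one video-level vector.
    (HASR itself uses learned encoders; mean pooling is a stand-in.)"""
    spans = segment_spans(frame_labels)
    seg_reps = np.stack([frame_feats[s:e].mean(axis=0) for s, e, _ in spans])
    video_rep = seg_reps.mean(axis=0)
    return spans, seg_reps, video_rep

# Toy example: 6 frames with 4-dim features, predicted labels [0,0,1,1,1,2]
feats = np.arange(24, dtype=float).reshape(6, 4)
spans, seg_reps, video_rep = hierarchical_representations(feats, [0, 0, 1, 1, 1, 2])
```

A refiner can then condition on `video_rep` together with each entry of `seg_reps` to decide whether a segment's label is out of context and should be corrected.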
Monday, Dec. 6th, 11:00-12:00, online via Zoom. To participate, get in touch with the organisers.