2nd Workshop on Semantic Policy and Action Representations for Autonomous Robots (SPAR)
Welcome to our website. Here you can find more information about the full-day workshop held on 24th September 2017 as part of the IROS 2017 conference. This workshop took place in Vancouver, Canada.
Date: 24th September 2017
Location: Vancouver, Canada
Poster and demonstration submission deadline: 12th August 2017 (extended from 25th July)
Notification of acceptance: 15th August 2017
Camera ready submission: 1st September 2017
Karinne Ramirez-Amaro, Technische Universität München, Germany
Please feel free to send us an email if you have any questions regarding this workshop.
This full day workshop is supported by
- The European Community's Seventh Framework Programme FP7/2007-2013, under grant agreement no. 609206, project Factory-in-a-Day.
- The European Union’s Horizon 2020 research and innovation programme under grant agreement No 641100, project Timestorm.
- The DFG Collaborative Research Center 1320: Everyday Activity Science and Engineering - EASE
Service and industrial robots are expected to become more autonomous and to work effectively around and alongside humans. This implies that robots need special capabilities, such as interpreting and understanding human intentions in different domains. The major challenge is to find appropriate mechanisms that explain observed raw sensor signals, such as poses, velocities, distances, and forces, in a way that allows robots to build informative, high-level descriptive models from them. Such models will, for instance, permit robots to understand the meaning of observations/demonstrations, to infer how to generate a similar behavior in other conditions/domains, and, more importantly, to communicate to the user/operator why they inferred that behavior. One promising way to achieve this is to use high-level semantic representations. Several methods have been proposed, for example linguistic approaches, syntactic approaches, and graphical models. Even though these methods have achieved robust performance, a missing component is a common set of measurements for comparing the proposed techniques on established benchmark data sets, since such data sets are not publicly available.
This workshop has mainly two objectives:
- We intend to highlight to the robotics community the recent developments in semantic reasoning representations and semantic policy generation, from the low level (sensory signals) to the high level (planning and execution). More importantly, we want to reconcile and integrate various bottom-up and top-down approaches for semantic action perception and execution in different domains.
- We aim to compare various state-of-the-art approaches for generic action and reasoning representations in both the computer vision and robotics communities, looking for common ground to combine seemingly different approaches for autonomous capability and reliability. For this, we would like to propose and define data sets that could potentially serve as benchmarks for comparing the presented methods. We would like to build on the recent efforts of laboratories that have made their testing data sets publicly available, and we encourage the participants of this workshop to adopt this best practice.
This workshop will present the main benefits of this emerging class of methods, such as allowing robots to learn generalized semantic models for different domains. We would also like to discuss the next breakthrough topics in this area, e.g. the scalability of learned models that can adapt to new scenarios/domains, in a way that allows the robot to transfer all the acquired knowledge and experience from existing data to new domains with very little human intervention.
The following topics are indicative but by no means exhaustive:
- AI-Based Methods
- Learning and Adaptive Systems
- Probabilistic and Statistical Methods
- Action grammars/libraries
- Machine learning techniques for semantic representations
- Spatiotemporal event encoding
- Reasoning Methods in Robotics and Automation
- Signal to symbol transition (Symbol grounding)
- Different levels of abstraction
- Semantics of manipulation actions
- Semantic policy representation
- Context modeling methods
- Human behavior recognition
- Learning from demonstration
- Object-action relations
- Bottom-up and top-down perception
- Task, geometric, and dynamic level plans and policies
- PDDL high-level planning
- Task and motion planning methods
- Human-robot interaction
- Prediction of human intentions
- Linking linguistic and visual data
This workshop proposes to discuss the most recent approaches in the area of semantic and reasoning (policy) representations, a topic that is not widely presented in the general IROS17 conference. The goal of this workshop is to spread the results and benefits of these approaches to a wider public looking for emerging new technologies. We intend to invite well-known experts in this area and bring them together in this workshop to better disseminate our growing community among the IROS17 attendees.
This workshop is intended for roboticists working in the areas of perception, control, planning, and learning. It is especially aimed at roboticists interested in improving the reliability and autonomy of robots. We hope to bring together outstanding researchers and graduate students to discuss current trends, problems, and opportunities in semantic action (policy) representations, encouraging communication and common practices such as sharing data-sets among scientists in this field.
Based on the first SPAR workshop at IROS 2015, we expect to attract attendees from both industry and academia, making the discussion of topics more diverse and productive in this new edition.
We would like to invite the attendees of this workshop to submit an extended abstract explaining their current work or developed systems on the topics of interest of this workshop. Accepted demonstrations will be able to show their systems and benefits to a broader audience.
The main goal of the poster and demonstration session is to motivate our expert speakers to interact with our audience and further discuss (1) how to link symbolic representations with sensory experience while remaining fully grounded at the signal level, coupling the perception and execution of actions, and (2) how to open new avenues for building robots with greater learning capability and autonomy. Given these insights, we want to discuss important next steps and open problems in semantic action perception and policy learning.
As a follow-up to this workshop, we are planning to propose a special issue of a journal (to be defined) covering the main topics of interest of this workshop, and we will invite a select number of papers from the poster sessions to submit their current novel work.
In this second edition we aim to demonstrate the maturity of state-of-the-art semantic and reasoning systems validated in real, complex scenarios. This will be shown through the proposed call for demonstrations, in which the presented systems can show their robustness in real-world situations.
Details for the submission for the poster and demonstrations can be downloaded here.
Details on all accepted posters here.
|09:00-09:05||Opening by Karinne Ramirez-Amaro|
|09:05-09:35||Gordon Cheng "Semantic representations to enable robots to observe, segment and recognize human activities"|
|09:35-10:05||Gregory D. Hager "Mentoring Robots: Showing, Telling, and Critiquing"|
|10:05-10:10||Poster teaser 1 (3 posters and demonstrations, 2.5 minutes each)|
|1. On-line simultaneous learning and recognition of everyday activities from virtual reality performances, Tamas Bates, Karinne Ramirez-Amaro, Tetsunari Inamura and Gordon Cheng.|
|2. System design to evaluate guidance by robots based on immersive VR, Yoshiaki Mizuchi and Tetsunari Inamura.|
|3. Robotic Agents that Know What They are Doing, Michael Beetz, Daniel Beßler and Gayane Kazhoyan.|
|10:10-11:00||Coffee break, Poster & Demo session|
|11:00-11:30||Yiannis Aloimonos "One-shot Visual Learning of Human-Environment Interactions"|
|11:30-12:00||Tamim Asfour "On Combining Human Demonstration and Natural Language for Semantic Action Representations"|
|12:00-12:30||Heni Ben Amor, TBA|
|14:00-14:30||Michael Beetz "Robotic Agents that Know What They are Doing"|
|15:00-15:20||Poster teaser 2 (5 posters, 2.5 minutes each)|
|4. The Use of Expert Systems for Semantic Reasoning in Service Robots, Jesus Savage, Julio Cruz, Reynaldo Martell, Hugo Leon, Marco Negrete and Jesus Cruz.|
|5. Pillar Networks for Action Recognition, B Sengupta and Y Qian.|
|6. A Question Selection Method for Active Learning of Context-Depending Motion Labels, Tatsuya Sakato and Tetsunari Inamura.|
|7. Combining Neural Networks and Tree Search for Task and Motion Planning in Challenging Environments, Chris Paxton, Vasumathi Raman, Gregory D. Hager and Marin Kobilarov.|
|8. Neural dynamic architecture for behavioral organization on chip, Julien Martel, David Lobato, and Yulia Sandamirskaya.|
|15:20-17:00||Coffee break, Poster & Demo session|
|17:00-17:30||Danny Zhu (on behalf of Manuela Veloso) "A Multi-layered Visualization Language for Video Augmentation"|