Description
This virtual lab addresses the challenge of enabling robots to autonomously prepare meals by bridging natural language recipe instructions and robotic action execution. We propose a novel methodology that leverages Actionable Knowledge Graphs to map recipe instructions onto core categories of robotic manipulation tasks, which represent the specific motion parameters required for task execution.
We created a knowledge graph containing recipes from the 1Mio. Recipe Dataset and made it available on this website. By selecting a recipe on this platform, you will gain access to:
- Detailed Instructions: A step-by-step guide on how to prepare the selected dish.
- Task List: A set of clearly structured and ordered tasks derived from the recipe instructions (a minimal sketch of such a task list is shown below).
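
To make this mapping concrete, here is a minimal sketch in Python of what a task list derived from a single instruction could look like. The field names (action_core, action_group, motion_parameters) and all values are illustrative assumptions, not the actual schema of our knowledge graph.

```python
# Hypothetical sketch: one recipe instruction mapped to ordered, structured
# tasks. Field names and values are illustrative assumptions, not the
# platform's actual schema.

instruction = "Slice the cucumber and transfer the pieces to the bowl."

task_list = [
    {
        "step": 1,
        "action_core": "Cutting",        # main manipulation capability
        "action_group": "Slicing",       # specific variant of that capability
        "target_object": "cucumber",
        "tool": "knife",
        "motion_parameters": {"cut_width_mm": 5, "peel_present": True},
    },
    {
        "step": 2,
        "action_core": "Transporting",
        "action_group": "Placing",
        "target_object": "cucumber_slices",
        "destination": "bowl",
        "motion_parameters": {"grasp_type": "scoop"},
    },
]

for task in task_list:
    print(task["step"], task["action_core"], "->", task["action_group"])
```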
Through this task-based methodology, we aim to break down the complexities of cooking into a framework that can be adopted by robots or any automated system. Our research showcases how strategic planning and task allocation can simplify culinary processes, leading to more consistent and efficient meal preparation.
We invite you to explore our platform, experience the recipes, and discover how the integration of clear instructions and concise task definitions can streamline cooking for both humans and machines.
With these actions and their parameters, you can instantiate the Action Core Lab and simulate a robot performing the actions.
Our objective is to empower a robotic system to autonomously prepare any recipe by systematically mapping each step in a recipe to clearly defined tasks. This website serves as a demonstration of our approach, illustrating how textual cooking instructions can be transformed into actionable subtasks for robotics applications.
Robots (still) don't prepare our daily dishes because the manipulation skills involved in meal preparation are highly complex. Even within a single action category like cutting, many factors influence action execution and the desired goal state: object properties (e.g. the existence of a peel), task variations (such as halving or slicing) and their influence on motion parameters, and the situational context (e.g. the available tools).
For robots to successfully compute the body motions needed to execute different recipe instructions, they need knowledge. This work addresses the question of how we can build knowledge bases for meal preparation actions that robots can use to translate the contained information into body motion parameters.
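
The sketch below illustrates, in Python, how such a knowledge-base lookup could turn an action category, a task variation, object properties, and the situational context into motion parameters. The rules and parameter names are invented for illustration; they are not the inference procedure used by our system.

```python
# Hypothetical sketch of a knowledge-base lookup that derives motion
# parameters from task, object, and context knowledge. All rules and
# parameter names below are illustrative assumptions.

def infer_motion_parameters(action, variation, obj, context):
    params = {"action": action, "variation": variation}

    # Task variation influences the motion plan (e.g. the number of cuts).
    if variation == "halving":
        params["num_cuts"] = 1
    elif variation == "slicing":
        params["num_cuts"] = "repeat_until_exhausted"

    # Object properties influence execution (e.g. a peel may require peeling first).
    if obj.get("has_peel"):
        params["prerequisite"] = "peeling"

    # The situational context constrains tool choice.
    available = context.get("available_tools", [])
    params["tool"] = "knife" if "knife" in available else "unknown"

    return params

print(infer_motion_parameters(
    action="cutting",
    variation="slicing",
    obj={"name": "cucumber", "has_peel": True},
    context={"available_tools": ["knife", "bowl"]},
))
```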
We build on three concepts:
- Action Cores (ACs): an AC is a main manipulation capability, like cutting, that can be translated into a general action plan.
- Action Groups (AGs): an AC consists of several more specific AGs that use a similar manipulation plan and thus result in similar body movements and outputs; for example, the AC of cutting consists of the AGs of dicing, slicing, etc.
- An Actionable Knowledge Graph that contains task, object, and environment knowledge and enables robots to infer the body motions needed to prepare any given recipe.
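
As a rough picture of how the AC/AG hierarchy could be organized, here is a minimal sketch in plain Python. The concrete Action Groups listed, and the idea of storing one shared plan per AC, are assumptions made for illustration only.

```python
# A minimal, hypothetical sketch of the AC/AG hierarchy. The listed groups
# and the per-AC general plan are illustrative assumptions.

action_cores = {
    "Cutting": {
        "general_plan": "approach -> position_tool -> cut -> retract",
        "action_groups": ["Slicing", "Dicing", "Halving"],
    },
    "Pouring": {
        "general_plan": "grasp -> lift -> tilt -> return",
        "action_groups": ["Draining", "Filling"],
    },
}

def action_core_of(action_group):
    """Resolve a specific Action Group back to its Action Core."""
    for core, data in action_cores.items():
        if action_group in data["action_groups"]:
            return core
    return None

print(action_core_of("Dicing"))  # -> Cutting
```

Grouping AGs under a shared AC plan means a robot only needs one general motion strategy per capability, specialized by the AG's parameters, rather than a separate plan for every task variation.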