Temporal Scaling: Robot Execution of Time-Constrained Tasks

We propose a method that extracts the semantic representation of human-demonstrated manipulation actions. Such a semantic representation helps robots accurately segment and recognize unique action primitives.
The proposed framework further lets robots reason about individual primitives. For instance, the robot can autonomously estimate both the temporal length and the type of each motion primitive.
To this end, we apply a novel trajectory sub-segmentation technique that computes local extrema, i.e., geometric variations in the trajectory pattern (e.g., curves, straight lines), to identify the main intention of each primitive.
By considering the distribution of all derived trajectory sub-segments, our method can measure the similarity between two primitives and determine whether the observed motion is periodic or discrete.
The periodicity information helps robots autonomously regenerate observed trajectories at different temporal scales without altering characteristic features such as the action speed.
For instance, in the stirring action shown in the video, the agent can repeat the derived periodic pattern until the given temporal constraints are met. In the case of a discrete motion, e.g. a pick-and-place action, the robot instead slows down to execute the action over a longer time scale.
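The two scaling strategies above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes the trajectory is a uniformly sampled array of waypoints, and that the periodic case has already been reduced to a single extracted period.

```python
import numpy as np

def scale_trajectory(traj, dt, target_duration, periodic):
    """Temporally scale a demonstrated trajectory to a target duration.

    traj: (N, D) array of waypoints sampled every dt seconds; for a
    periodic motion, traj holds one extracted period.
    periodic: if True, repeat the pattern; if False, time-stretch it.
    """
    n = len(traj)
    if periodic:
        # Repeat the extracted period until the target duration is met,
        # preserving the original execution speed of the pattern.
        reps = int(np.ceil(target_duration / (n * dt)))
        tiled = np.tile(traj, (reps, 1))
        return tiled[: int(round(target_duration / dt))]
    # Discrete motion (e.g. pick-and-place): keep the same spatial path
    # but resample it over the new, longer (or shorter) duration.
    old_t = np.linspace(0.0, 1.0, n)
    new_t = np.linspace(0.0, 1.0, int(round(target_duration / dt)))
    return np.column_stack(
        [np.interp(new_t, old_t, traj[:, d]) for d in range(traj.shape[1])]
    )
```

For a 1 s stirring period sampled at 50 Hz, a 3 s constraint yields three repetitions at unchanged speed, while a discrete reach stretched to 2 s traverses the same path at half the velocity.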

Generative Time Models (GTMs)

The ability to estimate and predict the duration of an activity can enable a robot to plan its actions ahead, allocate effort and resources to tasks that are time-constrained or critical, and facilitate interactions with the environment. Generative Time Models (GTMs) are an unsupervised method that estimates, by observation, the temporal properties of an activity. We demonstrate two example use cases of the concept, (i) wiping a table and (ii) chopping vegetables, for which we predict temporal information including (i) the overall task duration, (ii) the remaining time, and (iii) the fastest way to finish the activity.


GTMs require little prior knowledge about the behaviors involved and derive accurate predictions with little training. We investigate different methods to approximate the progress of each task and demonstrate how the method generalizes by transferring components across use cases.
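The core prediction task can be illustrated with a deliberately simple sketch. This is an assumption-laden stand-in for a GTM, not the actual model: it supposes task progress is observable as a fraction in [0, 1] (e.g. the fraction of table area already wiped) and extrapolates a constant progress rate to completion.

```python
import numpy as np

def predict_remaining(times, progress):
    """Estimate total duration and remaining time of an ongoing task.

    times: elapsed seconds at each observation.
    progress: observed task progress in [0, 1] at those times.
    Fits a least-squares line progress = rate * t through the origin
    (the task starts at progress 0) and extrapolates to progress = 1.
    """
    t = np.asarray(times, dtype=float)
    p = np.asarray(progress, dtype=float)
    rate = np.dot(t, p) / np.dot(t, t)   # least-squares slope
    total = 1.0 / rate                   # time at which progress hits 1
    remaining = max(total - t[-1], 0.0)
    return total, remaining
```

Observing progress 0.1, 0.2, 0.3 at 1, 2, 3 s gives a predicted total of 10 s and 7 s remaining; richer progress approximators would replace the linear fit while keeping the same interface.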


Multi-Agent Interaction

Incorporating time into the cognitive loop of artificial autonomous systems is a major goal for TimeStorm.

FORTH has developed a new time-informed framework for planning robot actions based on a fuzzy-number representation of time intervals. This representation enables mixing time with other quantitative measures of multi-agent interaction to compute multi-criteria optimal collaborative plans.
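A common fuzzy-number representation for uncertain durations is the triangular one. The sketch below is a minimal illustration of that idea, not FORTH's implementation: it assumes triangular fuzzy numbers (lo, peak, hi) and shows the two operations a planner needs, sequencing actions (fuzzy addition) and estimating one agent's delay relative to another (fuzzy subtraction), plus a centroid defuzzification for ranking candidate plans.

```python
class FuzzyTime:
    """Triangular fuzzy number (lo, peak, hi) for a duration in seconds."""

    def __init__(self, lo, peak, hi):
        assert lo <= peak <= hi
        self.lo, self.peak, self.hi = lo, peak, hi

    def __add__(self, other):
        # Sequencing two actions: durations add bound-wise.
        return FuzzyTime(self.lo + other.lo,
                         self.peak + other.peak,
                         self.hi + other.hi)

    def delay_wrt(self, other):
        # Fuzzy difference: how much later self may finish than other.
        return FuzzyTime(self.lo - other.hi,
                         self.peak - other.peak,
                         self.hi - other.lo)

    def defuzzify(self):
        # Centroid of the triangle: a crisp value for comparing plans.
        return (self.lo + self.peak + self.hi) / 3.0
```

Comparing the defuzzified delay of alternative task assignments gives the planner a crisp score to combine with other criteria, such as execution quality.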


The following video shows two agents collaborating in cereal-and-milk preparation. The planner distributes tasks among the agents, aiming to maximize their utility to the team (a combination of execution time and execution quality).

In a similar context, the following video shows two agents collaborating to prepare a salad. In this scenario, the planner assigns each agent the tasks that best fit its individual skills (the agents move at different speeds and exhibit different levels of efficiency for certain actions).

In a different context, the planner aims to enforce the coordination of two agents moving at different speeds. The use of fuzzy arithmetic enables estimating the delay of one agent relative to the other. The planner undertakes corrective actions (requesting a speed-up, simplified execution of actions) to maximize the synchronization of the individual agents.

Symbolic Episodic Memory

We develop a time-enriched symbolic memory that encodes important events experienced by the robot. The key concepts are summarized in the following video.
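The essence of such a memory is a time-indexed store of symbolic events that can be queried by interval. The sketch below is a hypothetical minimal structure, not the system shown in the video; the event predicates and the `between` query are illustrative assumptions.

```python
from dataclasses import dataclass, field
from bisect import insort

@dataclass(order=True)
class Event:
    time: float                           # timestamp in seconds
    predicate: str = field(compare=False) # symbolic event, e.g. "grasp(cup)"

class EpisodicMemory:
    """Time-enriched store of symbolic events, queryable by interval."""

    def __init__(self):
        self._events = []  # kept sorted by timestamp

    def record(self, time, predicate):
        insort(self._events, Event(time, predicate))

    def between(self, start, end):
        # All events whose timestamps fall within [start, end].
        return [e.predicate for e in self._events if start <= e.time <= end]
```

Keeping events ordered by time makes temporal queries ("what happened while the milk was being poured?") a simple interval scan.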