Ontology-based explicit long-term memory for robots

In the context of the ARISE project, CSIC, through its Institute of Robotics and Industrial Informatics team, is leading the development of the project's ontology-based framework and has recently developed an explicit long-term robot memory. This approach enables robots to store, organize, and reason about past experiences in complex, long-horizon tasks.

This contribution is implemented through two software tools, available online as code repositories:

  • know-plan, a Robot Operating System (ROS) package for ontology-based representation and reasoning about plans and their properties.
  • know-demo, a ROS package to create, store, and utilize demonstration-based episodic memories as an ontological knowledge base.

Both packages can be used independently or in combination, as they share the same structured ontological foundations. Together, they currently support applications such as robot planning and plan comparison and selection, and will soon be used for robot learning, behaviour introspection, and adaptive long-term task execution.
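To make the idea of a shared knowledge-base structure concrete, the sketch below mirrors it in plain Python. This is an illustrative assumption only: the class and property names (Action, Plan, Episode, expected_duration, observed_durations) are hypothetical and are not the actual ontology terms used by know-plan or know-demo.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified mirror of an ontological knowledge base:
# plans and episodic memories are entries that share one common
# structure, so a tool reading one kind can also read the other.

@dataclass
class Action:
    name: str
    expected_duration: float  # seconds, as modelled in the plan

@dataclass
class Plan:
    name: str
    steps: list[Action] = field(default_factory=list)

@dataclass
class Episode:
    """A demonstration-based episodic memory entry (know-demo style)."""
    plan: Plan
    observed_durations: dict[str, float] = field(default_factory=dict)

# Build a tiny knowledge-base entry for an illustrative task.
grasp = Action("grasp_lettuce", expected_duration=4.0)
place = Action("place_in_tray", expected_duration=3.0)
plan = Plan("transplant_demo", steps=[grasp, place])
episode = Episode(plan, observed_durations={"grasp_lettuce": 5.2,
                                            "place_in_tray": 2.9})
```

Because a plan and an episode reference the same action names, knowledge written by one tool remains queryable by the other, which is the interoperability property the packages rely on.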

Figure 1: Main functionalities supported by each of the two implemented repositories. Note that both provide a knowledge base with the same structure, so the two frameworks can interact and share knowledge.

Advancing robot long-term representation and reasoning

In long-term tasks, such as those addressed in the ARISE use cases, robots are expected to adapt to changes in the environment in which they operate and to the different users they work with. It is therefore crucial to equip them with the ability to model both their plans' expectations and their actual plan executions, so that they can compare plans (i.e. strategies for performing a task, such as human demonstrations) with one another and with respect to the actual execution. These technologies enable robots to combine planning and learning under a unified knowledge structure, enhancing their reasoning capabilities.
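One way to read "comparing plans with one another and with the actual execution" is as a cost comparison over a shared representation of per-action durations. The sketch below is a minimal illustration under that assumption; the helper names and duration-based cost are hypothetical and do not reproduce the know-plan API.

```python
# Hypothetical sketch: compare alternative plans by total expected cost,
# then compare the selected plan against its actual execution.

def total_cost(durations):
    """Sum per-action durations (seconds) into a single plan cost."""
    return sum(durations.values())

def select_most_efficient(plans):
    """Pick the plan with the lowest total expected cost."""
    return min(plans, key=lambda name: total_cost(plans[name]))

# Two alternative strategies for the same task, e.g. two demonstrations.
plans = {
    "demo_a": {"grasp": 4.0, "place": 3.0},   # total 7.0
    "demo_b": {"grasp": 5.5, "place": 2.0},   # total 7.5
}
best = select_most_efficient(plans)           # "demo_a"

# Compare the plan's expectations against what actually happened.
executed = {"grasp": 5.1, "place": 3.4}
deviation = {a: executed[a] - plans[best][a] for a in executed}
```

In the ontological setting, the same comparison is carried out over individuals in the knowledge base rather than raw dictionaries, which is what allows the results to feed reasoning and explanation.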

Scaling toward long-term explainable robot autonomy

The novelty of this approach lies in treating both plans (e.g. demonstrations) and executions within the same ontological model, enabling introspective reasoning. Unlike traditional systems, which focus on modeling either plans or executions, this framework provides a sound long-term explicit memory that can be used to evaluate and adapt behaviour over time in a transparent and explainable manner.
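Because plans and executions live in one model, introspection can be phrased as a single query over both. The sketch below shows one such check, flagging actions whose executed duration deviates from the plan's expectation; the function name, the simplified dict representation, and the 25% threshold are illustrative assumptions, not the framework's actual reasoning mechanism.

```python
def flag_atypical(expected, executed, tolerance=0.25):
    """Return actions whose executed duration deviates from the plan's
    expectation by more than `tolerance` (a fraction of the expectation).
    This mimics an introspective query over a unified plan/execution
    model; the representation here is deliberately simplified."""
    atypical = []
    for action, exp in expected.items():
        obs = executed.get(action)
        if obs is not None and abs(obs - exp) / exp > tolerance:
            atypical.append(action)
    return atypical

expected = {"approach": 2.0, "grasp": 4.0, "retreat": 2.0}
executed = {"approach": 2.1, "grasp": 6.0, "retreat": 2.2}
flagged = flag_atypical(expected, executed)  # only "grasp" exceeds 25%
```

A query of this kind is what turns stored memories into explainable behaviour: the flagged action points directly at the experience that needs adaptation or explanation.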

The technology has been validated using laboratory mock-ups of realistic human-robot interactive tasks, in which robots generate, store, and compare multiple plans and select the most efficient one. CSIC researchers will soon integrate this work with the real and simulated data collected in the context of ARISE use cases such as lettuce transplanting.

Looking ahead, the framework will be extended to include robot execution experiences, detect atypical behaviours, and support continuous robot learning and the generation of explanations of robot experiences. This paves the way toward robots capable of autonomous adaptation and long-term collaboration with humans.

To support this work, several scientific articles were recently published presenting the team’s findings:

Articles about know-plan

[1] Olivares-Alarcos, A., Foix, S., Borràs, J., Canal, G., and Alenyà, G. (2024). Ontological modeling and reasoning for comparison and contrastive narration of robot plans. In Proceedings of the 2024 International Conference on Autonomous Agents and Multiagent Systems, pp. 2405–2407. IFAAMAS.

[2] Olivares-Alarcos, A., Foix, S., Borràs, J., Canal, G., and Alenyà, G. (2026). Ontological foundations for contrastive explanatory narration of robot plans. Information Sciences, 123280.

Articles about know-demo

[3] Olivares-Alarcos, A., Muhammad, A., Sanjaya, S., Lin, H., and Alenyà, G. (2026). Blending ontologies and language models to generate sound and natural robot explanations. In Proceedings of the International Conference on Autonomous Agents and Multiagent Systems, to appear. IFAAMAS.

[4] Olivares-Alarcos, A., Ahsan, M., Sanjaya, S., Lin, H. I., and Alenyà, G. (2026). Ontological grounding for sound and natural robot explanations via large language models. arXiv preprint arXiv:2602.13800.