A research-grade exploration of continuity, experience, and accountability in embodied AI.
Persistence of Memory studies how an embodied agent carries experience forward. Memory is treated as lived continuity, not just storage. The goal is to understand how past context shapes future action, and how an agent can remain consistent, explainable, and responsible over time.
We approach memory as a layered system: episodic experiences, semantic knowledge, and behavioral routines. Each layer carries different risks, benefits, and ethical requirements.
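The three layers described above can be sketched as a simple data structure. This is a minimal illustration, not the project's actual implementation; the class and field names (`Episode`, `LayeredMemory`, `consolidate`) are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Episode:
    """A single lived experience: when it happened, in what context, with what outcome."""
    timestamp: float
    context: str
    outcome: str

@dataclass
class LayeredMemory:
    """Three layers with different lifetimes, risks, and ethical constraints."""
    episodic: list[Episode] = field(default_factory=list)   # raw experiences, revisable
    semantic: dict[str, Any] = field(default_factory=dict)  # distilled, durable knowledge
    routines: dict[str, Any] = field(default_factory=dict)  # learned behavioral patterns

    def record(self, ep: Episode) -> None:
        """Log a new experience into the episodic layer."""
        self.episodic.append(ep)

    def consolidate(self, key: str, lesson: Any) -> None:
        """Promote a lesson distilled from episodes into semantic knowledge."""
        self.semantic[key] = lesson
```

Separating the layers makes their different policies explicit: episodic entries might expire or be edited, while consolidated semantic knowledge persists under stricter rules.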
Most AI systems forget: each session starts fresh, and stored logs are not the same as memory. We are building systems that carry lessons, context, and relationships forward.
Human memory keeps meaning, not just facts. We remember what worked, what failed, and how it felt. A persistent AI remembers how a learner responds, and that history shapes every future interaction.
When advice fails, a stateless system shrugs. A persistent system updates its understanding of that person and becomes better at helping them over time.
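One way to picture "updating its understanding" is a per-person model that nudges a running effectiveness estimate for each teaching strategy after every success or failure. A minimal sketch, with hypothetical names (`LearnerModel`, `update`, `best_strategy`) and a simple exponential-moving-average update chosen for illustration:

```python
class LearnerModel:
    """Per-learner running estimate of how well each strategy works for them."""

    def __init__(self, lr: float = 0.3):
        self.lr = lr  # how strongly each new outcome shifts the estimate
        self.effectiveness: dict[str, float] = {}

    def update(self, strategy: str, succeeded: bool) -> None:
        """Move the strategy's estimate toward 1.0 on success, 0.0 on failure."""
        prev = self.effectiveness.get(strategy, 0.5)  # neutral prior
        target = 1.0 if succeeded else 0.0
        self.effectiveness[strategy] = prev + self.lr * (target - prev)

    def best_strategy(self) -> str:
        """Pick the strategy currently estimated to work best for this learner."""
        return max(self.effectiveness, key=self.effectiveness.get)
```

A stateless system would reset `effectiveness` every session; persistence is what lets the estimates accumulate into a usable picture of the individual.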
Continuity supports relationship building without pretending to be human, sustains momentum across sessions, and enables planning toward long-term goals.
True persistence requires consolidation, relevance, and timely retrieval. It also requires avoiding catastrophic forgetting while adapting to new information.
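The retrieval half of that requirement can be sketched as relevance scoring with recency decay: older memories fade unless they match the current context well. This is an illustrative toy (token-overlap similarity, exponential half-life decay), not the project's retrieval method; all names here are hypothetical.

```python
def relevance(query_tokens: set, item_tokens: set,
              age_s: float, half_life_s: float = 86400.0) -> float:
    """Jaccard overlap with the query, weighted by exponential recency decay."""
    overlap = len(query_tokens & item_tokens) / max(len(query_tokens | item_tokens), 1)
    recency = 0.5 ** (age_s / half_life_s)  # halves every half_life_s seconds
    return overlap * recency

def retrieve(memory: list, query: str, now: float, k: int = 3) -> list:
    """Return up to k stored texts, most relevant first; memory holds (timestamp, text)."""
    q = set(query.lower().split())
    scored = [(relevance(q, set(text.lower().split()), now - t), text)
              for t, text in memory]
    return [text for score, text in sorted(scored, reverse=True)[:k] if score > 0]
```

Consolidation would then periodically rehearse or summarize high-relevance episodes into durable knowledge, which is one common way to mitigate catastrophic forgetting.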
With persistence, AI becomes a long-term collaborator rather than a disposable tool. That difference is essential in education and care, where relationships matter.
Open questions include how much continuity is necessary for trust, what should be immutable versus editable, and how to ensure memory supports human dignity rather than surveillance or manipulation.