Memory is not a feature.
Why every agent we tried felt like a goldfish, and what changed when we treated long-term memory as the core abstraction instead of a plugin.
For about eighteen months we tried, in good faith, to make every general-purpose assistant on the market remember our work. We wrote elaborate system prompts. We pasted context windows the size of small novels. We built vector stores on the side and shoved retrieval into the chain. None of it stuck — not really.
The pattern was always the same: a great first session, a useful second one, a third one where the agent confidently invented a teammate, and a fourth where we silently went back to a text editor.
The mistake was treating memory like a feature you bolt on. Memory isn't a feature. It's the substrate the agent reasons over. If it isn't there from the first line of the runtime, everything above it is a guess.
So we wrote Iris.Md around a single inversion: the memory store is the primary object, and the chat loop is a thin policy on top of it. Every message, every tool call, every skill the agent writes for itself lands in the same place — content-addressed, embeddable, queryable. The model doesn't remember; the system does.
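To make the inversion concrete, here is a minimal sketch of what "one content-addressed, queryable place" means. The class and method names (`MemoryStore`, `record`, `query`) are illustrative, not Iris.Md's actual API, and a real store would add embeddings and an index rather than a linear scan:

```python
import hashlib
import json
import time


class MemoryStore:
    """Hypothetical sketch of a content-addressed memory store.

    Every event -- message, tool call, learned skill -- lands here.
    Records are keyed by a hash of their content, so identical events
    dedupe and any record can be re-verified by re-hashing.
    """

    def __init__(self):
        self._records = {}  # content hash -> record

    def record(self, kind, content, **meta):
        # Canonical JSON so the same content always hashes the same way.
        payload = json.dumps({"kind": kind, "content": content}, sort_keys=True)
        key = hashlib.sha256(payload.encode()).hexdigest()
        self._records[key] = {
            "kind": kind,
            "content": content,
            "meta": meta,
            "ts": time.time(),
        }
        return key

    def query(self, kind=None, contains=None):
        """Naive substring scan; a real store would use embeddings."""
        out = []
        for rec in self._records.values():
            if kind is not None and rec["kind"] != kind:
                continue
            if contains is not None and contains not in rec["content"]:
                continue
            out.append(rec)
        return out


# The chat loop is then just a policy over this store:
store = MemoryStore()
store.record("message", "prefer reports as markdown tables")
store.record("tool_call", "deploy step 3: run migrations")
hits = store.query(kind="message", contains="markdown")
```

The point of the sketch is the shape, not the code: the store exists before the loop, and everything the agent does is a write into it first and a model call second.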
What that buys you is unglamorous. Yesterday's decisions are still there today. The way you like your reports formatted is still there next month. The eight-step deploy you taught it in March still works in October without re-teaching. The agent stops being a precocious intern and starts being something more like a colleague who actually pays attention.
It isn't magic. It's just refusing to forget.