I’ve been thinking a lot about what’s currently called “agentic AI”. Many systems try to achieve agent-like behavior through planning, tool use, orchestration layers, or increasingly careful prompting. In practice, what I keep running into is that these systems don’t fail because models can’t reason or plan, but because they lack stable state. Without persistent state, coherence has to be re-established every turn. The result is longer prompts, retrieval pipelines, guardrails, and corrective instructions: all of which help access information, but don’t really solve continuity over time.

I’ve been experimenting with a different approach: making state explicit and persistent outside the model, but directly attached to the assistant’s working environment. Append-only logs, rules, inventories, histories: readable files that the model initializes from every run. Not queried opportunistically like a vector DB, just present as working context. (A minimal sketch of what I mean is at the end of this post.)

Once state is stable, a lot of “agentic” behavior seems to emerge naturally. The system stops reacting moment by moment and starts behaving coherently across longer timescales.

I’m curious how others here see this:

- Is persistent state under-discussed compared to planning and tooling?
- For those building agents with RAG / LangChain / similar stacks: how do you handle continuity across days or weeks?
- Am I underestimating what current agent frameworks already solve here?

Would love technical perspectives or counterexamples.
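
To make the idea concrete, here is a minimal sketch of the loop I have in mind, under my own assumptions: the state/ directory layout, the file names, and the call_model stub are placeholders for whatever model and stack you actually use, not a real framework.

```python
from pathlib import Path
from datetime import datetime, timezone

STATE_DIR = Path("state")              # e.g. rules.md, inventory.md, history.log
LOG_FILE = STATE_DIR / "history.log"   # append-only event log


def call_model(system: str, user: str) -> str:
    """Placeholder for the actual LLM call (OpenAI, Anthropic, local, ...)."""
    raise NotImplementedError


def load_state() -> str:
    """Read every state file and concatenate it into one context block."""
    STATE_DIR.mkdir(exist_ok=True)
    parts = []
    for path in sorted(STATE_DIR.glob("*")):
        if path.is_file():
            parts.append(f"## {path.name}\n{path.read_text()}")
    return "\n\n".join(parts)


def append_event(event: str) -> None:
    """Append-only: never rewrite history, only add to it."""
    stamp = datetime.now(timezone.utc).isoformat()
    with LOG_FILE.open("a") as f:
        f.write(f"{stamp}\t{event}\n")


def run_turn(user_msg: str) -> str:
    # Every run initializes from the same persistent files, so coherence
    # doesn't have to be re-established through the prompt alone.
    context = load_state()
    reply = call_model(system=context, user=user_msg)
    append_event(f"user: {user_msg}")
    append_event(f"assistant: {reply}")
    return reply
```

The point of the sketch is the shape, not the details: state is plain files on disk, it is loaded wholesale at the start of every run rather than retrieved selectively, and the only mutation is appending to a log.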