Three things worth reading
Managing agent memory / building reliable AI applications / justifying AI project ROI
When I’m trying to master a new domain, I focus on finding people who really know their stuff and absorbing what they’ve learned. And while I mainly want to feature hands-on learnings here, I sometimes come across resources that are high-leverage and worth sharing.
So I’m passing them along, with a bit of commentary. I’ll keep it tight. Let me know if this is useful; perhaps it becomes a Friday habit.
Memory Optimization Strategies in AI Agents
A big part of the magic of interacting with LLMs isn’t their reasoning; it’s their ability to work in context, offering replies that take a history of past interactions into account. It’s the difference between a logic machine with an encyclopedia and something that feels truly intelligent and aware.
Memory is complex and technical, but this article makes the concepts easy to grasp. I’m getting ideas for how to design more useful memory in my own applications.
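To make the idea a bit more concrete, here’s a toy sketch of one common memory strategy: a rolling buffer that keeps recent turns verbatim and folds older ones into a running summary once a word budget is exceeded. This is my own illustration, not code from the article, and the summarization step is a placeholder where a real agent would call an LLM.

```python
# Toy illustration of one common agent-memory strategy: keep recent turns
# verbatim, and fold older turns into a running summary once a budget is hit.
# The summarization step is a crude placeholder; a real agent would ask an
# LLM to compress the old turns instead.

from dataclasses import dataclass, field


@dataclass
class RollingMemory:
    max_words: int = 200                             # rough budget for the verbatim window
    summary: str = ""                                # compressed history of older turns
    turns: list[str] = field(default_factory=list)   # recent turns, kept verbatim

    def add(self, speaker: str, text: str) -> None:
        self.turns.append(f"{speaker}: {text}")
        # When the verbatim window gets too big, push the oldest turn into the summary.
        while self._word_count() > self.max_words and len(self.turns) > 1:
            oldest = self.turns.pop(0)
            self.summary = self._summarize(self.summary, oldest)

    def context(self) -> str:
        """Text an agent would prepend to its next prompt."""
        parts = []
        if self.summary:
            parts.append(f"Summary of earlier conversation: {self.summary}")
        parts.extend(self.turns)
        return "\n".join(parts)

    def _word_count(self) -> int:
        return sum(len(t.split()) for t in self.turns)

    @staticmethod
    def _summarize(summary: str, turn: str) -> str:
        # Placeholder: a real implementation would merge `turn` into `summary`
        # with an LLM call. Here we just append a shortened version.
        snippet = " ".join(turn.split()[:12])
        return (summary + " | " + snippet).strip(" |")


if __name__ == "__main__":
    memory = RollingMemory(max_words=30)
    memory.add("user", "I'm planning a trip to Lisbon in October with my family.")
    memory.add("agent", "October is mild; do you want museums or beaches?")
    memory.add("user", "Mostly food and a day trip to Sintra, please.")
    print(memory.context())
```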
12-Factor Agents: Patterns of reliable LLM applications
The author’s essay and presentation on 12-Factor Agents tackle a question that’s been troubling me since I started working on AI projects: What are the principles we can use to build LLM-powered software that is actually good enough to put in the hands of production customers?
At its core, it’s about designing for reliability beyond the 80% threshold, something most AI applications still struggle with.
He approaches this from a developer’s perspective, but I’ve run into many of the same challenges building with no-code/low-code (NCLC) platforms. It was a real relief to see that these problems are universal to the domain, not quirks of my particular toolset.
I think the design patterns and practices he recommends apply to internal systems as well, although NCLC tools don’t yet offer the level of control needed to implement them.
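To give a flavor of the kind of pattern that helps push reliability past that threshold, here’s a generic sketch of my own (not one lifted from the essay): treat the model’s output as untrusted data, validate it against a strict schema, and fall back deterministically when it doesn’t parse. The RefundToolCall shape, field names, and the approval threshold are all hypothetical.

```python
# Generic reliability pattern: validate a model's "tool call" reply against a
# strict schema and escalate on failure, rather than trusting free-text output.

import json
from dataclasses import dataclass
from typing import Optional


@dataclass
class RefundToolCall:
    order_id: str
    amount: float


def parse_tool_call(raw: str) -> Optional[RefundToolCall]:
    """Validate a model reply that is supposed to be a JSON tool call."""
    try:
        payload = json.loads(raw)
        return RefundToolCall(
            order_id=str(payload["order_id"]),
            amount=float(payload["amount"]),
        )
    except (json.JSONDecodeError, KeyError, TypeError, ValueError):
        return None  # malformed output: caller retries or escalates to a human


def handle_reply(raw: str) -> str:
    call = parse_tool_call(raw)
    if call is None:
        return "Could not parse a valid tool call; escalating to a human."
    if call.amount > 500:
        return f"Refund of {call.amount:.2f} needs manual approval."
    return f"Issuing refund of {call.amount:.2f} for order {call.order_id}."


if __name__ == "__main__":
    print(handle_reply('{"order_id": "A-1001", "amount": 42.50}'))
    print(handle_reply("Sure! I will refund the order right away."))
```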
How to Apply a Scenario Validation Framework for AI Agent ROI
There’s a fair bit of doom-and-gloom right now about the ROI of agentic AI. For example, Gartner estimates that over 40% of agentic AI projects will be canceled by the end of 2027.
The article gives a masterclass on how to provide reasonable and persuasive estimates of project ROI when pitching AI projects. It’s equally relevant for internal project leaders and for consultants selling services.
The key insight is that acknowledging uncertainty and building multiple scenarios actually provides MORE confidence, not less.
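As a rough illustration of what a multi-scenario estimate can look like (my own hypothetical numbers and structure, not the framework from the article), here’s a sketch that prices the same agent under conservative, expected, and optimistic assumptions:

```python
# Sketch of a multi-scenario ROI estimate: one cost, three sets of assumptions.
# All figures below are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class Scenario:
    name: str
    adoption_rate: float         # share of eligible work the agent actually handles
    hours_saved_per_task: float
    tasks_per_month: int
    hourly_cost: float           # fully loaded cost of the people doing the work today


def annual_roi(s: Scenario, build_and_run_cost: float) -> float:
    """Return ROI as (benefit - cost) / cost over one year."""
    monthly_benefit = (
        s.adoption_rate * s.tasks_per_month * s.hours_saved_per_task * s.hourly_cost
    )
    return (12 * monthly_benefit - build_and_run_cost) / build_and_run_cost


if __name__ == "__main__":
    cost = 120_000  # hypothetical annual build + run cost
    scenarios = [
        Scenario("conservative", 0.25, 0.3, 2_000, 55.0),
        Scenario("expected", 0.50, 0.5, 2_000, 55.0),
        Scenario("optimistic", 0.75, 0.8, 2_000, 55.0),
    ]
    for s in scenarios:
        print(f"{s.name:>12}: ROI = {annual_roi(s, cost):+.0%}")
```

Showing the conservative case, even when it comes out negative, is often what makes the expected case believable.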
I’d love to know what you learned this week. Just reply or drop a note.