Keep Your Context Current
Your AI context is always deteriorating. You need to keep it healthy.

This article covers:
- Context Drift
- Why Context Drifts
- How Context Is Managed Today
- How We Keep Context Current
Context Drift
A couple of weeks ago we wrote about the importance of context graphs for post-sale AI agents. Context is the structural knowledge an agent needs to understand your business. Without it, agents give the shallow, generic answers you'd expect from a first-day employee.
Context graphs require maintenance. The one you built last quarter was accurate at that point in time, but your business has probably changed since.
Context drift is the gap between what your agent believes about your business and what's actually true.
Take a common enterprise scenario:
Your company launches a new product. As part of the rollout, pricing changes across existing tiers with new bundles, list prices, and discount structures.
Here’s the chain of events:
Reality changes. There's a new product in your catalog. The pricing for your other three tiers has changed. Sales is already quoting the new numbers. Finance has updated the billing system.
The context graph falls behind. The agent's map of your business still has three products, not four. It's still using last quarter's pricing across the board. It doesn't know the new product exists, and it doesn't know the old prices are wrong.
The agent's effective context worsens. A CSM asks the agent to prep a renewal brief. The agent pulls the old pricing, calculates expansion opportunity against numbers that are no longer accurate, and doesn't mention the new product as an upsell option.
Outcomes degrade. The CSM walks into a renewal conversation quoting the wrong price. The customer corrects them. Now the CSM looks unprepared, and they stop trusting the tool. Leadership hears "the AI got the pricing wrong" and pulls back on the rollout.
This is a context issue, not a model issue. You could upgrade to the most powerful model available and still get bad answers if the context feeding it is stale.
Why Context Drifts
Drift isn't caused by some catastrophic system failure. It's caused by normal business operations that happen every quarter.
New data, new signals. Your data team instruments a new feature and starts tracking adoption. You add a product analytics tool. You start collecting a signal you didn't have before. The agent's map of where data lives is suddenly incomplete.
Products and definitions change. You roll out a new product tier that needs to show up in your product catalog, renewal discussions, and usage analytics. Or you run a correlation analysis and discover that executive alignment is one of the strongest predictors of retention, so you start tracking it as a churn signal. Your product is constantly evolving.
Health scores evolve. You redefine what "healthy account" means after a QBR. Maybe you add product champion usage as a health factor, or change the weight on support ticket volume. The formula in the business ontology needs an update too.
People and ownership shift. CSMs change accounts. Sales reps hand off post-close. A tier 3 account gets upgraded to tier 2, and now it's in a different segment with different playbooks. The CSM who knew that "ARR" in your Snowflake table excludes professional services just moved to a different team.
Processes evolve. You used to prep QBRs with a spreadsheet. Now you run a structured workflow. The old instructions are still in the context graph.
A context graph without a maintenance system is just an aging snapshot.
How Context Is Managed Today
Right now, most people provide context fresh every time they use an AI tool. You paste your CRM notes into ChatGPT. You copy a Gong call summary into the thread. You upload a product guide so the AI knows what your product actually does. Every conversation starts with some version of "here's what you need to know."
If you're more organized, you've set up a project in ChatGPT or Claude with reference files pre-loaded: your product catalog, your health score definitions, your QBR template, maybe a doc explaining how your team calculates ARR. Every new chat in that project starts with that context already available. If you're doing this consistently, you're ahead of most people.
But even this approach has a scaling problem. Those reference files are static. When your product changes, when a definition evolves, when you add a new data source, you have to manually update the file, re-upload it, and hope everyone on the team is working from the same version. There's no versioning, no approval process, no way to know if your colleague's project has the same definitions as yours.
The AI tools themselves have built-in features to help. If you've clicked through the ChatGPT settings, you've probably noticed the memory feature. Open Settings, go to Personalization, and you'll see a list of things ChatGPT has remembered about you: your role, your preferences, tools you use. You correct the AI in a thread ("actually, we use BigQuery, not Snowflake"), and it stores that as a memory so it doesn't get it wrong next time. That's a step beyond static reference files. It's context maintenance at the individual level, and it works.
The limitation is scope. These are personal memories. One person's corrections, stored as flat text, with no structure, no governance, and no way to share them across a team. They're useful for "remember that I prefer tables over charts." They don't work for "our enterprise tier calculates ARR differently, and that definition should apply to every analysis anyone on our team runs."
What you need is a system for your system: something that keeps shared context current, structured around your actual business entities, and governed so one person's preference doesn't silently become everyone's default.
How We Keep Context Current
The business ontology is the centerpiece. In Part 1, we described the context graph as a map of entity types and relationships. The ontology is that map, and everything in the maintenance system either feeds into it or updates it.
The Ontology: Your Business, Editable
The ontology defines your core entities (Account, User, Subscription, Ticket, Product), their relationships, synonyms, properties, lifecycle states, and which data sources map to them. It's the agent's understanding of what things are and where to find them.
When something changes in your business, the ontology is where you update it. You launch a new product? Add it as an entity. Rename a field? Update the mapping. Deprecate a data source? Remove it. The agent's understanding changes immediately.
This is the most direct lever for fighting context drift. If the agent is getting something wrong about your business structure, you edit the ontology.
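To make the shape of this concrete, here is a minimal sketch of what an editable ontology might look like in code. The entity names, property names, and the `billing.products_v2` mapping are illustrative assumptions, not Humm's actual data model: the point is that a product launch or a field rename is a single structured edit, not a rebuild.

```python
from dataclasses import dataclass, field

@dataclass
class EntityType:
    """One node in the context graph: what a thing is and where its data lives."""
    name: str
    synonyms: list[str] = field(default_factory=list)
    properties: dict[str, str] = field(default_factory=dict)       # property -> description
    source_mappings: dict[str, str] = field(default_factory=dict)  # property -> "source.table.column"

class Ontology:
    def __init__(self) -> None:
        self.entities: dict[str, EntityType] = {}

    def add_entity(self, entity: EntityType) -> None:
        self.entities[entity.name] = entity

    def remap(self, entity_name: str, prop: str, new_source: str) -> None:
        """Point a property at a different column, e.g. after a field rename."""
        self.entities[entity_name].source_mappings[prop] = new_source

ont = Ontology()
ont.add_entity(EntityType(
    name="Product",
    synonyms=["SKU", "offering"],
    properties={"list_price": "Current list price in USD"},
    source_mappings={"list_price": "billing.products.list_price_usd"},
))

# A new product tier launches and billing moves to a new table:
# two edits, and the agent's map is current again.
ont.add_entity(EntityType(name="Enterprise Plus", synonyms=["ent-plus"]))
ont.remap("Product", "list_price", "billing.products_v2.list_price_usd")
```

The design choice that matters here is that edits are scoped to entities and mappings, so a change to one definition can't silently ripple into unrelated parts of the graph.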
Automated Data Sync
Your integrations (CRM, support, product analytics, billing) sync on a schedule, typically hourly or daily. When you connect a new source, the system discovers schemas, tables, and columns automatically. When a source changes, the catalog updates.
The agent always knows what tables exist, what columns they have, and what the current schema looks like. Think of it like your agent re-reading the org chart every morning instead of relying on the one it memorized during onboarding.
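A scheduled sync run reduces to a diff-and-replace over the catalog. The sketch below assumes a connector whose introspection call returns a table-to-columns mapping (a hypothetical shape, not any specific vendor's API); the catalog always mirrors the live schema, and the returned summary is what you'd log or surface.

```python
def sync_catalog(catalog: dict, source_name: str, live_schema: dict) -> dict:
    """Refresh the agent-facing catalog from a source's live schema.

    `live_schema` maps table name -> list of column names, as returned
    by the connector's (assumed) introspection call. Returns a summary
    of what changed so the run can be logged.
    """
    previous = catalog.get(source_name, {})
    added = [t for t in live_schema if t not in previous]
    removed = [t for t in previous if t not in live_schema]
    catalog[source_name] = live_schema  # the catalog always mirrors current reality
    return {"added": sorted(added), "removed": sorted(removed)}

catalog = {"crm": {"accounts": ["id", "name", "arr"]}}

# Next scheduled run: the CRM gained a table.
changes = sync_catalog(catalog, "crm", {
    "accounts": ["id", "name", "arr"],
    "opportunities": ["id", "account_id", "stage"],
})
# changes == {"added": ["opportunities"], "removed": []}
```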
Auto-Generated Descriptions
Raw table and column names are cryptic. tbl_acct_hlth_v2 doesn't tell the agent much. The system generates human- and AI-readable descriptions for every schema, table, and column, and you can edit them to add more detail.
These descriptions feed directly into the context the agent uses when planning a query. Better descriptions mean better queries mean better answers. They also help when the same concept shows up differently across sources, clarifying what a column actually means so the agent doesn't confuse similar-sounding fields.
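As a toy illustration of the first pass, a description generator can start by expanding common schema abbreviations; a real system would go further, drawing on sampled values and an LLM. The abbreviation table here is invented for the example.

```python
# Toy abbreviation map; a production system would also use column
# samples and an LLM to produce richer descriptions.
ABBREVIATIONS = {
    "tbl": "table", "acct": "account", "hlth": "health",
    "amt": "amount", "qty": "quantity", "v2": "version 2",
}

def describe(raw_name: str) -> str:
    """Expand a cryptic schema name into a human/AI readable phrase."""
    words = [ABBREVIATIONS.get(part, part) for part in raw_name.lower().split("_")]
    return " ".join(words)

print(describe("tbl_acct_hlth_v2"))  # table account health version 2
```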
Thread-Based Memories
Thread-based memories are Humm's version of what ChatGPT and Claude are doing with personal memories, except they are designed for teams and enterprises, and tied to a shared ontology.
When you use the agent and correct it, the system analyzes that thread for context-relevant information. If it finds a discrepancy ("this user says 'active account' means something different than what the ontology defines") or missing context ("this metric isn't defined anywhere"), it surfaces a candidate update.
These candidates go through an approval queue. An admin reviews the proposed change, confirms it reflects how the business actually works, and approves or rejects it. Approved updates become part of the active ontology. Rejected ones are discarded with feedback logged.
This is how institutional knowledge flows from day-to-day conversations into the context graph without anyone needing to manually edit a schema. And the approval step prevents one person's correction from silently overwriting a shared definition.
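The candidate-then-approve flow can be sketched as a small queue in front of the active ontology. Everything here (the field names, the thread identifier, the example definitions) is assumed for illustration; the property that matters is that nothing touches the shared ontology until a human approves it.

```python
from dataclasses import dataclass

@dataclass
class CandidateUpdate:
    """A proposed ontology change surfaced from a conversation thread."""
    term: str
    proposed_definition: str
    source_thread: str
    status: str = "pending"  # pending -> approved | rejected

class ApprovalQueue:
    def __init__(self, ontology: dict) -> None:
        self.ontology = ontology              # term -> active definition
        self.pending: list[CandidateUpdate] = []

    def propose(self, update: CandidateUpdate) -> None:
        self.pending.append(update)           # nothing changes until a human reviews it

    def approve(self, update: CandidateUpdate) -> None:
        update.status = "approved"
        self.ontology[update.term] = update.proposed_definition

    def reject(self, update: CandidateUpdate, feedback: str) -> None:
        update.status = "rejected"            # logged with feedback, never applied
        update.feedback = feedback

ontology = {"active account": "logged in within 30 days"}
queue = ApprovalQueue(ontology)

# A thread surfaces a discrepancy with the current definition.
queue.propose(CandidateUpdate(
    term="active account",
    proposed_definition="logged in within 14 days AND used a core feature",
    source_thread="thread-4821",
))

# An admin reviews and approves; only now does the shared definition change.
queue.approve(queue.pending[0])
```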
Other Signal Sources
Thread-based memories are the primary mechanism, but the system also watches call transcripts (when a customer call surfaces a new term or a discrepancy with current context), negative feedback (when a user marks an answer as wrong, the system examines whether the underlying context contributed), and positive feedback (when an answer is confirmed as correct, the supporting context is reinforced).
Human-in-the-Loop Curation
Automated sync handles the plumbing. Thread-based memories catch the business logic changes. But the approval queue is what keeps humans in control.
The system is designed to make curation fast: review a suggested ontology update, approve a description change, toggle a data source on or off. Not rebuilding. Curating.
The Flywheel
On day one, you connect your data sources, generate your initial ontology, and write a few foundational descriptions and business rules. The agent understands your business structure and knows where to find data. It's like a well-onboarded new hire: it has the map, but hasn't developed judgment yet.
After a month, your team has run dozens of analyses. Thread-based memories have surfaced ontology updates: a clarified metric, a new product entity, a corrected data source mapping. Each one was reviewed and approved. The agent now reflects how your team actually works, not just how you thought it worked during setup.
After six months, the ontology is a living, governed document. New team members get the same quality of analysis as veterans because the definitions are shared and current. When you launch a new product or start tracking a new churn signal, the update flows through the ontology edit or approval queue, and every analysis from that point forward reflects it. Context drift is managed, not eliminated, but the system catches it before it compounds.
This isn't a one-time setup. It's a system that stays current because your team's daily work, the conversations, the corrections, the feedback, feeds back into the context that powers it.
In Summary
A context graph gives the agent structure. A context maintenance system prevents drift. Together, they're the difference between an AI that gets less useful over time and one that gets more useful. If your AI depends on business context, the real question isn't whether drift will happen; it's whether you've built a system to catch it before users lose trust.
If you're building context for your own AI tools, or curious how this works in practice, we'd love to talk.