Written by Niko Raes, Tue Dec 30 2025

Building the Context Graph Layer: Why AI Agents Need More Than Systems of Record

How validated semantic graphs with vector embeddings create the infrastructure for agent autonomy—capturing not just what happened, but why decisions were made.


Foundation Capital's recent piece "Context Graphs: AI's Trillion-Dollar Opportunity" articulates something we've been building toward since we started Konnektr: AI agents need a new kind of infrastructure—one that captures not just what happened, but why it was allowed to happen.

The article struck a nerve in the AI community because it names a problem everyone building agents has encountered: the decision trace gap. When a VP approves a 20% discount over policy, when a support lead escalates a ticket, when an engineer makes an exception—the reasoning behind those decisions lives in Slack threads, Zoom calls, and hallway conversations. It's never systematically captured, never becomes searchable precedent, and never informs future agent behavior.

As Jaya Gupta and Ashu Garg put it: "Rules tell an agent what should happen in general. Decision traces capture what happened in this specific case—we used X definition, under policy v3.2, with a VP exception, based on precedent Z, and here's what we changed."

This is the infrastructure challenge of the agent era. And it's what we're solving with Konnektr Graph.

The Missing Layer: Context Graphs vs Systems of Record

Systems of record (Salesforce, Workday, SAP) became trillion-dollar platforms by owning canonical data: customers, employees, transactions. They're optimized for storing objects—structured entities with defined schemas.

But agents don't just need objects. They need context:

  • Who made this decision and why?
  • What was the state of the world at decision time?
  • What precedents governed this exception?
  • How did this relationship evolve over time?

This is what Foundation Capital calls a context graph: "a living record of decision traces stitched across entities and time so precedent becomes searchable."

The key insight: context graphs are fundamentally different from systems of record. They're:

  • Cross-system by nature - synthesizing data from CRM, billing, support, Slack, meetings
  • Time-aware - preserving not just current state, but how relationships evolved
  • Decision-centric - capturing why things happened, not just what happened
  • Relationship-first - connections between entities are as important as the entities themselves

This is precisely what Konnektr Graph was designed for.

Why Validated Semantic Graphs Are the Foundation

Here's where the architectural choice matters. Several companies are positioning themselves in this context graph space, but there's a critical question: how do you ensure context remains trustworthy as agents both consume and generate it?

This is where Konnektr Graph's approach differs. We start with a principle: agents need semantic structure they can trust.

The Problem with Unstructured Context

Most "AI memory" solutions today treat context as unstructured text with embeddings:

  1. Ingest documents, conversations, data
  2. Generate embeddings
  3. Store in vector database
  4. Retrieve via similarity search
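
The four steps above can be sketched in a few lines. This is a toy illustration (the embedding function is a stand-in for a real model, and all names are invented); the point is that nothing in the loop checks what gets written back to the store:

```python
# Minimal sketch of the embed-and-retrieve pattern (names invented).
# Nothing here validates what an agent writes into the store.

import math

def embed(text: str) -> list[float]:
    # Stand-in for a real embedding model: a toy character-frequency vector.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors from embed() are unit-length, so the dot product suffices.
    return sum(x * y for x, y in zip(a, b))

store: list[dict] = []

def ingest(doc: str) -> None:
    # Steps 1-3: ingest, embed, store. No schema, no reference checks.
    store.append({"text": doc, "embedding": embed(doc)})

def retrieve(query: str, k: int = 3) -> list[str]:
    # Step 4: similarity search.
    q = embed(query)
    ranked = sorted(store, key=lambda d: cosine(q, d["embedding"]), reverse=True)
    return [d["text"] for d in ranked[:k]]

ingest("VP approved a 20% discount for customer-789 due to churn risk")
ingest("Support escalated ticket T-42 for customer-789")
ingest("Office party scheduled for Friday")
```

Anything an agent appends here is accepted as-is, which is exactly the flaw discussed below.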

This works for retrieval, but it has a fatal flaw for agent autonomy: there's no semantic validation. When an agent writes new context (creates relationships, captures decisions, links precedents), nothing prevents:

  • Contradictory information
  • Broken references (entity deleted, relationships remain)
  • Invalid property types
  • Relationship mismatches

As agents start writing to the context graph—not just reading from it—this becomes critical. You need schema validation or the graph degrades into noise.

Ontology Enforcement + Embeddings: The Hybrid Approach

Konnektr Graph enforces ontologies through validated schemas. This means:

Entities are validated:

{
  "@type": "Interface",
  "@id": "dtmi:com:example:Customer;1",
  "contents": [
    {
      "@type": "Property",
      "name": "tier",
      "schema": {
        "@type": "Enum",
        "valueSchema": "string",
        "enumValues": [
          {"name": "enterprise", "enumValue": "enterprise"},
          {"name": "standard", "enumValue": "standard"},
          {"name": "startup", "enumValue": "startup"}
        ]
      }
    },
    {
      "@type": "Property",
      "name": "revenue",
      "schema": "double"
    },
    {
      "@type": "Property",
      "name": "embedding",
      "schema": "vector" 
    },
    {
      "@type": "Relationship",
      "name": "hasContact",
      "target": "dtmi:com:example:Person;1"
    }
  ]
}

Relationships are constrained: When an agent creates a relationship, the target type is validated. You can't link a Customer to a Building if the ontology says Customers only link to Persons, Accounts, and Contracts.

Properties have schemas: If tier is defined as an enum ["enterprise", "standard", "startup"], an agent can't write tier: "random_value".
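A write-time validator enforcing these two rules can be sketched roughly as follows. The ontology dictionary and helper names are illustrative only, not the Konnektr Graph API:

```python
# Sketch of write-time validation against an ontology (illustrative,
# not the actual Konnektr Graph API). Writes are checked before they
# enter the graph.

ONTOLOGY = {
    "dtmi:com:example:Customer;1": {
        "properties": {
            "tier": {"enum": ["enterprise", "standard", "startup"]},
            "revenue": {"type": float},
        },
        "relationships": {
            "hasContact": "dtmi:com:example:Person;1",
        },
    },
    "dtmi:com:example:Person;1": {"properties": {}, "relationships": {}},
}

def validate_property(model: str, name: str, value) -> None:
    # Reject unknown properties, out-of-enum values, and wrong types.
    spec = ONTOLOGY[model]["properties"].get(name)
    if spec is None:
        raise ValueError(f"{model} has no property '{name}'")
    if "enum" in spec and value not in spec["enum"]:
        raise ValueError(f"'{value}' not in {spec['enum']} for '{name}'")
    if "type" in spec and not isinstance(value, spec["type"]):
        raise TypeError(f"'{name}' must be {spec['type'].__name__}")

def validate_relationship(model: str, rel: str, target_model: str) -> None:
    # Reject relationships whose target type the ontology doesn't allow.
    expected = ONTOLOGY[model]["relationships"].get(rel)
    if expected is None:
        raise ValueError(f"{model} has no relationship '{rel}'")
    if expected != target_model:
        raise ValueError(f"'{rel}' must target {expected}, got {target_model}")
```

With this in the write path, `tier: "random_value"` or a `hasContact` edge to a Building is rejected at write time instead of silently corrupting the graph.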

But embeddings are still first-class: We store vector embeddings as properties. Agents can:

  • Perform similarity search: "Find customers similar to this one"
  • Traverse relationships: "Get their account managers and recent support escalations"
  • Combine both: "Find similar customers who also have escalations in the last 30 days"

This is the hybrid that makes context graphs work for autonomous agents: semantic structure you can trust + vector search for similarity.
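A toy version of that third, combined query shows the shape of the pattern: a similarity threshold and a relationship filter applied in one pass over the same records. All data, embeddings, and names here are invented for illustration:

```python
# Toy hybrid query: vector similarity plus relationship filter in one
# pass. Entities, embeddings, and edge data are invented.

import math

customers = {
    "customer-1": {"embedding": [1.0, 0.0], "tier": "enterprise"},
    "customer-2": {"embedding": [0.9, 0.1], "tier": "enterprise"},
    "customer-3": {"embedding": [0.0, 1.0], "tier": "startup"},
}
# hasEscalation edges within the last 30 days
escalations = {"customer-2": ["ticket-99"]}

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def similar_with_escalations(query_vec: list[float], threshold: float = 0.8) -> list[str]:
    # "Find similar customers who also have escalations in the last 30 days"
    return [
        cid for cid, c in customers.items()
        if cosine(query_vec, c["embedding"]) >= threshold
        and cid in escalations
    ]
```

In a real graph both conditions run server-side in one query, so there is no second vector store to keep in sync with the relationships.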

The Deterministic Pipeline + Agent Augmentation Pattern

Foundation Capital's piece describes how context graphs form. In practice, we see two complementary flows:

1. Deterministic Ingestion (System of Record → Context Graph)

Structured pipelines pull canonical data from systems of record:

  • ERP → entities (customers, contracts, invoices)
  • CRM → relationships (account ownership, contact history)
  • Service Desk → incidents, escalations, resolutions
  • IoT/Telemetry → asset status, sensor readings, operational state

These are deterministic transforms: defined schemas, predictable mappings, validated on write. This forms the foundational layer—the entities and relationships agents need to ground their decisions.
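A deterministic transform of this kind is just a fixed mapping with validation on write. A minimal sketch, with the CRM field names and DTMI invented for illustration:

```python
# Sketch of a deterministic ingestion transform: a CRM row maps to a
# graph entity with a fixed, predictable schema (field names invented).

ALLOWED_TIERS = {"enterprise", "standard", "startup"}

def crm_to_entity(row: dict) -> dict:
    # Normalize, validate, and map to the graph entity shape.
    tier = row["Tier"].lower()
    if tier not in ALLOWED_TIERS:
        raise ValueError(f"unknown tier '{tier}'")
    return {
        "$dtId": f"customer-{row['AccountId']}",
        "$metadata": {"$model": "dtmi:com:example:Customer;1"},
        "tier": tier,
        "revenue": float(row["AnnualRevenue"]),
    }

entity = crm_to_entity({
    "AccountId": "789",
    "Name": "Acme Corp",
    "Tier": "Enterprise",
    "AnnualRevenue": 2400000,
})
```

The same row always produces the same entity, which is what makes this layer a stable foundation under the less predictable agent-written context.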

2. Agent Augmentation (Capturing Decision Context)

Agents capture the decision traces that don't live in systems of record:

  • Zoom/Slack/Email → "VP approved exception because customer threatened to churn"
  • Meeting transcripts → "Engineering committed to fix in next sprint based on revenue impact"
  • Exception approvals → "Used policy v3.2, applied precedent from Account-123, approved by Sarah Chen"

Agents write this context into the same validated graph:

// Agent creates decision trace entity
{
  "$dtId": "decision-20250130-001",
  "$metadata": { "$model": "dtmi:com:example:DecisionTrace;1" },
  "decisionType": "discount_approval",
  "policyVersion": "v3.2",
  "exceptionReason": "churn_risk",
  "approver": "sarah.chen@company.com",
  "precedent": "decision-20241115-042",
  "reasoning": "Customer has 3 P1 incidents in 30 days, threatened move to Competitor-X...",
  "embedding": [0.123, 0.456, ...]  // Semantic embedding of reasoning
}

// Agent creates relationships
decision-20250130-001 --appliedTo--> customer-789
decision-20250130-001 --basedOn--> decision-20241115-042
decision-20250130-001 --approvedBy--> employee-sarah-chen

The schema ensures:

  • Decision traces link to valid entities (customers, employees, policies)
  • Precedent chains are explicit and traversable
  • Future agents can query: "Show me all discount approvals for enterprise customers with churn risk in the last quarter"

This is how precedent becomes searchable—the core promise of context graphs.
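Because basedOn links are explicit edges, walking a precedent chain is a plain graph traversal. A minimal sketch over invented edge data:

```python
# Walking an explicit precedent chain of basedOn edges (data invented).

based_on = {
    "decision-20250130-001": ["decision-20241115-042"],
    "decision-20241115-042": ["decision-20240601-007"],
}

def precedent_chain(decision_id: str) -> list[str]:
    # Breadth-first walk over basedOn edges, oldest precedents last.
    chain, frontier, seen = [], [decision_id], {decision_id}
    while frontier:
        current = frontier.pop(0)
        for parent in based_on.get(current, []):
            if parent not in seen:
                seen.add(parent)
                chain.append(parent)
                frontier.append(parent)
    return chain
```

Contrast this with unstructured memory, where recovering the same chain would mean hoping similarity search surfaces the right documents in the right order.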

Real-World Architecture: How This Works in Practice

Let's walk through a concrete example from Foundation Capital's piece: renewal discount approval.

The Scenario

A renewal agent proposes a 20% discount for Customer-789, despite a 10% policy cap. The agent needs to:

  1. Understand customer context (tier, revenue, history)
  2. Check precedents (similar exceptions in the past)
  3. Gather evidence (recent incidents, Slack threads about churn risk)
  4. Get approval and capture the decision trace

With Konnektr Graph

Step 1: Query Current Context

The agent queries the validated graph for structured context:

MATCH (customer:Customer {id: 'customer-789'})
MATCH (customer)-[:hasContact]->(contact:Person)
MATCH (customer)-[:hasAccount]->(account:Account)
MATCH (account)-[:managedBy]->(am:Employee)
MATCH (customer)-[:hasIncident]->(ticket:SupportTicket)
WHERE ticket.createdAt > datetime() - duration('P30D')
RETURN customer, contact, account, am, collect(ticket) as recent_incidents

This returns validated, structured entities—not text chunks. The agent knows:

  • Customer tier: "enterprise"
  • Account manager: "john.doe@company.com"
  • Recent incidents: 3 P1 tickets in last 30 days
  • Revenue: $2.4M ARR

Step 2: Find Similar Precedents

The agent performs a hybrid vector + graph query:

MATCH (decision:DecisionTrace)
WHERE decision.decisionType = 'discount_approval'
  AND decision.exceptionReason = 'churn_risk'
  AND VectorDistance(decision.embedding, $currentContextEmbedding) < 0.3
MATCH (decision)-[:appliedTo]->(precedentCustomer:Customer)
WHERE precedentCustomer.tier = 'enterprise'
RETURN decision, precedentCustomer
ORDER BY decision.timestamp DESC
LIMIT 5

This finds semantically similar exceptions (via vector search) that also match structural criteria (enterprise tier, churn risk). The agent discovers:

  • 2 prior cases where 20% discount was approved for enterprise customers with churn risk
  • Both had similar incident patterns (3+ P1 tickets)
  • Both preserved revenue >$2M

Step 3: Capture External Context

The agent ingests unstructured context (Slack thread, meeting transcript) and extracts entities/relationships:

From Slack: "Sarah Chen (VP Sales): Customer threatened to move to Competitor-X if we don't match their pricing. This is our largest account in the region."

Agent creates:

{
  "$dtId": "context-slack-20250130",
  "$metadata": {"$model": "dtmi:com:example:ConversationContext;1"},
  "source": "slack",
  "channel": "#sales-escalations",
  "participants": ["sarah.chen@company.com", "john.doe@company.com"],
  "keyPoints": ["churn_threat", "competitor_pricing", "largest_regional_account"],
  "embedding": [...]  // Semantic embedding
}

Links to graph:

context-slack-20250130 --relatesTo--> customer-789
context-slack-20250130 --involvesPerson--> employee-sarah-chen

Step 4: Get Approval & Create Decision Trace

The agent routes to Sarah Chen for approval. When approved, it creates the decision trace:

{
  "$dtId": "decision-20250130-001",
  "$metadata": {"$model": "dtmi:com:example:DecisionTrace;1"},
  "decisionType": "discount_approval",
  "requestedBy": "renewal-agent-v2",
  "approvedBy": "sarah.chen@company.com",
  "customerTier": "enterprise",
  "discountPercent": 20,
  "policyCapPercent": 10,
  "exceptionReason": "churn_risk",
  "policyVersion": "v3.2",
  "precedents": ["decision-20241115-042", "decision-20241203-018"],
  "supportingContext": ["context-slack-20250130", "incident-p1-789-001"],
  "reasoning": "Customer has 3 P1 incidents in 30 days, threatened move to Competitor-X, matches precedent from similar enterprise accounts. Revenue preservation ($2.4M ARR) justifies exception.",
  "embedding": [...]  // Embedding of full reasoning
}

Links to graph:

decision-20250130-001 --appliedTo--> customer-789
decision-20250130-001 --approvedBy--> employee-sarah-chen
decision-20250130-001 --basedOn--> decision-20241115-042
decision-20250130-001 --basedOn--> decision-20241203-018
decision-20250130-001 --referencedContext--> context-slack-20250130
decision-20250130-001 --referencedIncident--> incident-p1-789-001

Step 5: Precedent Becomes Searchable

Future agents can now query:

  • "Show all discount approvals >15% for enterprise customers with churn risk"
  • "Find decision traces where Sarah Chen approved exceptions to v3.2 policy"
  • "Get precedents involving Competitor-X pricing threats"

The decision trace is structurally linked (validated relationships) and semantically searchable (vector embeddings).

Why Konnektr Graph?

Foundation Capital's piece doesn't prescribe technology, but it implies requirements:

  • Cross-system entity resolution
  • Temporal state preservation
  • Relationship-first modeling
  • Scale to millions of entities
  • Validated schemas so agent-written context stays trustworthy

We built Konnektr Graph specifically for this:

1. Validated Semantic Structure

Schema enforcement means agents can trust the context they read and write. Relationships are constrained, properties are typed, references are validated.

2. Hybrid Vector + Graph

Combine similarity search with relationship traversal in one query. No separate vector database to keep synchronized.

3. Built for Agent Autonomy

Agents don't just read—they write decision traces, capture context, create relationships. The graph validates everything at write time.

4. Open Source, Self-Hostable

100% open source. Audit the code, run on-premises, no vendor lock-in. Critical for enterprises building context graphs as competitive moats.

5. Production-Ready Infrastructure

Built on proven database technology. Transactional integrity, standard operations, scales to millions of entities.

Event Notifications & Data History: Keeping Context Fresh

Foundation Capital emphasizes that capturing decision traces requires being in the execution path at commit time. This is where Konnektr Graph's event streaming capabilities matter.

When an agent writes to the graph—whether deterministic ingestion or decision trace capture—Konnektr can stream those changes to external systems in real-time. This enables:

Context Graph Retention

As your graph grows, you may want to archive older entities while keeping recent context fresh. Event streams let you move historical data to cold storage while maintaining a working set in the operational graph.

Historical Analysis

Stream all changes to a time-series database (like Azure Data Explorer, InfluxDB, or TimescaleDB). This creates a complete audit trail of every decision, every relationship change, every context update. Analysts can query: "How did our discount approval patterns change over the last year?"

Reducing Hallucinations

By archiving complete decision history externally, you can provide agents with temporal context without bloating the operational graph. "Show me how we handled similar exceptions in Q3 2024" becomes queryable without loading all historical data into memory.

Multi-System Synchronization

Other systems can subscribe to graph changes and react accordingly. When a decision trace is created, trigger workflows in n8n, update dashboards, notify compliance systems, or propagate to downstream agents.
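The subscription pattern reduces to consuming a change feed and dispatching by event type. An in-process sketch, with the event shape and handler names as assumptions rather than Konnektr's actual event schema:

```python
# In-process sketch of subscribing to graph change events and fanning
# them out per event type. The event shape and handlers are assumptions,
# not Konnektr's actual event schema.

from collections import defaultdict

handlers = defaultdict(list)

def subscribe(event_type: str, handler) -> None:
    handlers[event_type].append(handler)

def publish(event: dict) -> None:
    # Every subscriber for this event type gets the event.
    for handler in handlers[event["type"]]:
        handler(event)

audit_log = []
notified = []

subscribe("DecisionTraceCreated", lambda e: audit_log.append(e["id"]))
subscribe("DecisionTraceCreated", lambda e: notified.append(("compliance", e["id"])))

# One decision trace written to the graph fans out to all subscribers.
publish({"type": "DecisionTraceCreated", "id": "decision-20250130-001"})
```

In production the same shape applies, with the in-memory dispatch replaced by a durable stream so consumers can replay history.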

This keeps the context graph as a living system—agents write to it, events flow from it, history is preserved externally, and the operational graph stays focused on current context.

What's Next: The Context Graph Ecosystem

Foundation Capital's thesis is that context graphs will be trillion-dollar platforms—"systems of record for decisions, not just objects."

We're building the infrastructure layer for that future. Specifically, the validated semantic graph layer that ensures context remains trustworthy as agents gain autonomy.

We're not trying to be the whole stack. We're building the foundation that sits between:

  • Below: Systems of record (ERP, CRM, service desk)
  • Above: Agent orchestration, workflow engines, LLM applications

We're the layer that ensures context remains validated, searchable, and actionable.

If you're building agents that need to capture decision traces, we'd love to hear what patterns you're seeing. What context are your agents struggling to capture? What precedents do they wish they could query?

Why This Matters Now

The timing is critical. As Foundation Capital notes: "Agents are shipping into real workflows—contract review, quote-to-cash, support resolution—and teams are hitting a wall that governance alone can't solve."

The wall is missing decision traces. And unlike observability tools that capture execution traces for debugging, context graphs capture organizational memory for autonomy.

Three trends make this urgent:

1. Agents Are Writing, Not Just Reading

Early LLM applications were retrieval-focused. Now agents are taking actions: approving discounts, escalating tickets, committing code. They're not just consuming context—they're creating it.

2. Multi-Agent Systems Need Shared Memory

When multiple agents coordinate (sales agent + support agent + finance agent), they need a shared understanding of entities, relationships, and precedents. Unstructured memory doesn't scale.

3. Compliance and Auditability

As agents make consequential decisions, organizations need audit trails: "Why was this exception allowed?" Context graphs make decisions explainable.

Building in the Open

Konnektr Graph is 100% open source. We're building this infrastructure layer in the open because:

  • Transparency matters for infrastructure-critical systems
  • Community input makes the platform better
  • No vendor lock-in enables competitive moats built on top

Foundation Capital's piece describes a market opportunity. We're building the infrastructure to enable it.

If you're building agents that need to:

  • Understand complex entity relationships across systems
  • Capture and query decision traces
  • Combine semantic search with structural validation
  • Maintain auditability as agents gain autonomy

Konnektr Graph was built for you.


Try It Today

The context graph layer is being built now, and we'd love your input along the way.


About the Author: Niko Raes is the founder of Konnektr and has spent years building digital twin and IoT platforms at Arcadis, working with semantic modeling, ontologies, and real-world infrastructure systems at scale. Konnektr Graph was born from the challenges of building agent-ready semantic infrastructure in production environments.
