
Context Management

How MetricChat constructs and manages context for every Agent run.

Why Context Matters

Even the best models or agentic systems will fail without the right context. Your organization has specific definitions, rules, and conventions that no pre-trained model knows. MetricChat treats context as a primary feature — every Agent run begins with a dedicated context block containing your definitions, rules, and accumulated learnings.

What Goes Into Context

Static Context (Persistent Organizational Knowledge)

Instructions: Scoped rules, guardrails, KPI definitions, preferred data joins, exclusions, and conventions. These encode your business logic and standards.

Schemas & Tables: Source metadata, column descriptions, relationships, and data types. This is the structural foundation the AI uses to generate correct queries.

Code Repositories: dbt and LookML files, Git repositories, and AGENTS.md documents that encode business logic and transformation rules.

Learnings: Historical query data, user feedback, validation results, and lineage tracking from previous interactions.
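The four static context types above can be pictured as records in a shared knowledge store. A minimal Python sketch, assuming a hypothetical `StaticContextItem` shape — the field names and example contents are illustrative, not MetricChat's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class StaticContextItem:
    """One persistent piece of organizational knowledge (hypothetical shape)."""
    kind: str                 # "instruction", "schema", "repo_file", or "learning"
    name: str                 # stable identifier for the item
    content: str              # the text that may be placed into a context block
    tags: list = field(default_factory=list)

# One illustrative item per static context type
items = [
    StaticContextItem("instruction", "revenue-definition",
                      "Revenue = gross bookings minus refunds and test orders."),
    StaticContextItem("schema", "orders",
                      "orders(order_id PK, customer_id FK, amount, created_at)"),
    StaticContextItem("repo_file", "models/orders.sql",
                      "-- dbt model applying the refund exclusion rule"),
    StaticContextItem("learning", "feedback-031",
                      "Users expect 'amount' reported in USD, not cents."),
]
```

Keeping each item small and individually addressable is what makes the later retrieval and ranking steps possible.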

Warm Context (Dynamic, Run-Specific Signals)

Previous Messages: Conversation history from the current session.

Tool Outputs: Queries, dashboards, and intermediate results from the current run.

User Clarifications: Additional information provided when the agent asks follow-up questions.

Observations: Errors, validation outcomes, and system reflections from the current execution.
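Warm context accumulates as a run progresses rather than being loaded up front. A minimal sketch of one possible container for these four signals — the class and field names are assumptions for illustration, not the product's API:

```python
from dataclasses import dataclass, field

@dataclass
class WarmContext:
    """Run-specific signals gathered during a single Agent session (hypothetical shape)."""
    messages: list = field(default_factory=list)        # previous conversation turns
    tool_outputs: list = field(default_factory=list)    # queries, dashboards, intermediate results
    clarifications: list = field(default_factory=list)  # answers to follow-up questions
    observations: list = field(default_factory=list)    # errors, validation outcomes, reflections

# Signals are appended as the run unfolds
ctx = WarmContext()
ctx.messages.append("user: What was Q3 revenue?")
ctx.observations.append("validation: query returned 0 rows; retried with wider date filter")
```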

How Context Is Assembled

Each Agent run assembles context in layers:

  1. Retrieve static context — Instructions, schemas, and repository content based on the query
  2. Add warm context — Conversation history, tool outputs, and clarifications
  3. Score and rank — Prioritize the most relevant context items
  4. Construct the context block — Package everything into a structured prompt for the LLM
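The four steps above can be sketched as a single function. Everything here is illustrative: `keyword_score` is a stand-in for whatever relevance scoring MetricChat actually uses, and context items are plain strings for brevity:

```python
def keyword_score(query, item):
    """Toy relevance score: count of query words that appear in the item."""
    words = set(query.lower().split())
    return sum(1 for w in set(item.lower().split()) if w in words)

def assemble_context(query, static_items, warm_items, scorer, limit=20):
    """Hypothetical sketch of the four assembly steps."""
    # 1. Retrieve static context relevant to the query
    retrieved = [it for it in static_items if scorer(query, it) > 0]
    # 2. Add warm context from the current session
    candidates = retrieved + list(warm_items)
    # 3. Score and rank, keeping the most relevant items
    ranked = sorted(candidates, key=lambda it: scorer(query, it), reverse=True)[:limit]
    # 4. Construct a structured context block for the LLM
    return "\n\n".join(f"[{i + 1}] {it}" for i, it in enumerate(ranked))

block = assemble_context(
    "q3 revenue definition",
    static_items=["revenue = bookings minus refunds", "orders table schema"],
    warm_items=["user asked about q3 earlier"],
    scorer=keyword_score,
)
```

In this sketch the irrelevant schema item is filtered out in step 1, while the warm-context message survives because step 2 adds session signals unconditionally before ranking.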

Monitoring Context Quality

MetricChat provides tools to monitor and improve context:

  • High-level metrics — Track context effectiveness across all runs
  • Individual run analysis — Inspect the full context block for any Agent run
  • Instruction Effectiveness Scoring — Evaluate how well business rules contributed to results
  • Context inspection — Administrators can review exactly what context was provided per run
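Instruction Effectiveness Scoring could, for example, aggregate validation outcomes per instruction across runs. A hypothetical sketch — the run-record fields (`instructions_used`, `validated`) and the scoring formula are assumptions, not MetricChat's actual schema:

```python
from collections import defaultdict

def instruction_effectiveness(runs):
    """Fraction of runs in which each instruction appeared in the context
    block AND the run's result passed validation (illustrative metric)."""
    used = defaultdict(int)
    helped = defaultdict(int)
    for run in runs:
        for name in run["instructions_used"]:
            used[name] += 1
            if run["validated"]:
                helped[name] += 1
    return {name: helped[name] / used[name] for name in used}

scores = instruction_effectiveness([
    {"instructions_used": ["revenue-definition"], "validated": True},
    {"instructions_used": ["revenue-definition", "fiscal-calendar"], "validated": False},
])
```

A low score flags an instruction worth rewriting or scoping more tightly; a high score confirms the rule is pulling its weight.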

See Monitoring for detailed information on tracking context quality.

Best Practices

  • Invest in instructions — Well-defined business rules have the highest impact on result quality
  • Keep schemas documented — Add column descriptions and relationship notes
  • Connect code repositories — dbt and LookML files provide rich transformation context
  • Monitor and iterate — Use effectiveness scores to identify and improve weak instructions
  • Review agent runs — Inspect context blocks to understand why the AI made specific decisions
