
Context Graph

AI coding assistants have a limited context window. Feeding them your entire codebase wastes tokens on irrelevant code, dilutes the signal, and pushes important context out of the window when the codebase is large.

But giving them too little context causes different failures: the AI makes changes that break existing patterns, duplicates abstractions that already exist, or misses dependencies that constrain what’s possible.

The challenge is relevance-ranked curation: surface the most useful context for the current task, within a token budget.

ComposeProof solves this with the cp_get_context tool, which builds a context graph — a ranked view of your codebase tuned to what the AI needs right now.


The context graph pipeline has three stages:

Your source files
  ↓
[1] AST parsing (tree-sitter)
  • Parse all Kotlin source files
  • Extract functions, classes, imports, composables, and annotations
  • Build a symbol → file index
  ↓
[2] Import graph + PageRank
  • Nodes: source files
  • Edges: import relationships (A imports B → edge A→B)
  • PageRank: files imported by many others rank higher
  • Output: a ranked list of "central" files
  ↓
[3] Scope filter + token budget
  • Apply the scope filter (structure / compose / previews / patterns / full)
  • Trim to the token budget (performance = 8K / balanced = 2K / minimal = 512)
  • Include the focus file and its direct neighbors first
  • Output: a curated context block for the AI
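Stage [2] is plain PageRank over the import graph. As a toy illustration (not ComposeProof's implementation; the file names, damping factor, and iteration count below are made up), it fits in a few lines of Kotlin:

```kotlin
// Toy PageRank over an import graph. Each entry maps a file to the files it
// imports; rank flows *to* imported files, so widely imported files rank highest.
fun pageRank(
    edges: Map<String, List<String>>,
    damping: Double = 0.85,
    iterations: Int = 20,
): Map<String, Double> {
    val nodes = (edges.keys + edges.values.flatten()).toSet()
    var rank = nodes.associateWith { 1.0 / nodes.size }
    repeat(iterations) {
        // Every node keeps a small base rank; the rest is redistributed along edges.
        val next = nodes.associateWith { (1 - damping) / nodes.size }.toMutableMap()
        for ((file, imports) in edges) {
            if (imports.isEmpty()) continue
            val share = damping * rank.getValue(file) / imports.size
            for (imported in imports) {
                next[imported] = next.getValue(imported) + share
            }
        }
        rank = next
    }
    return rank
}

fun main() {
    // Three screens import the theme, so the theme is the most "central" file.
    val edges = mapOf(
        "HomeScreen.kt" to listOf("AppTheme.kt", "BaseCard.kt"),
        "SettingsScreen.kt" to listOf("AppTheme.kt"),
        "ProfileScreen.kt" to listOf("AppTheme.kt"),
    )
    val ranked = pageRank(edges).entries.sortedByDescending { it.value }
    println(ranked.first().key) // prints AppTheme.kt
}
```

This is why design-system and theme files dominate the `structure` output: many files import them, so rank accumulates there.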

Each scope extracts a different view of your codebase.

structure (default for architecture questions)


Runs PageRank on the import graph. Returns the top N most-imported files — these are your foundational abstractions (design system, base components, domain models, navigation graph).

"show me the app architecture"
"what's the project structure?"
"give me an overview"

What you get: A ranked list of files by centrality, with their public API surface (class/function signatures, not implementations). High-centrality files are the ones that break the most things when changed.

Example output excerpt:

[centrality: 0.84] ui/theme/AppTheme.kt
    AppTheme(darkTheme: Boolean, content: @Composable () -> Unit)
    AppColors, AppTypography, AppShapes
[centrality: 0.71] ui/components/BaseCard.kt
    BaseCard(modifier: Modifier, elevation: Dp, content: @Composable () -> Unit)
    ClickableCard(onClick: () -> Unit, ...)
    SectionCard(title: String, ...)

compose

Focuses on Compose-specific signals. Returns composable function signatures, @Preview locations, compiler metrics (skippable/stable counts from Compose compiler reports, if available), and recomposition hot spots.

"analyze the compose layer"
"what composables exist?"
"find recomposition issues"

What you get: Composable function signatures grouped by file, with stability annotations and @Preview locations. If Compose compiler reports are available (generated by composeCompilerPluginOptions), includes skippable/restartable percentages.
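To make the skippable percentages concrete, here is a hypothetical sketch of how they could be aggregated once composables have been parsed. `ComposableInfo` and the per-file aggregation are illustrations, not ComposeProof's actual types:

```kotlin
// Hypothetical shape for one composable parsed from a Compose compiler report.
data class ComposableInfo(val name: String, val file: String, val skippable: Boolean)

// Summarize a skippable percentage per file, as the compose scope might report it.
fun skippablePercentByFile(composables: List<ComposableInfo>): Map<String, Int> =
    composables.groupBy { it.file }.mapValues { (_, list) ->
        (100.0 * list.count { it.skippable } / list.size).toInt()
    }

fun main() {
    val parsed = listOf(
        ComposableInfo("HomeScreen", "HomeScreen.kt", skippable = true),
        ComposableInfo("HomeHeader", "HomeScreen.kt", skippable = false),
    )
    println(skippablePercentByFile(parsed)) // prints {HomeScreen.kt=50}
}
```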

previews

Returns every @Preview function: name, file, line number, and annotation parameters. This is what cp_list_previews returns, but formatted as context rather than as a tool result.

"list all my previews"
"what screens have previews?"
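The preview extraction is simple in spirit. The sketch below is a toy line-based scan, only meant to show the name/line shape of the output; the real implementation walks the tree-sitter AST rather than raw text:

```kotlin
// Toy previews-scope scan: find @Preview annotations and the function they decorate.
data class PreviewEntry(val name: String, val line: Int)

fun scanPreviews(source: String): List<PreviewEntry> {
    val lines = source.lines()
    return lines.indices
        .filter { lines[it].trimStart().startsWith("@Preview") }
        .mapNotNull { i ->
            // The annotated function usually follows within the next few lines.
            val funLine = (i + 1 until lines.size).firstOrNull { "fun " in lines[it] }
                ?: return@mapNotNull null
            val name = lines[funLine].substringAfter("fun ").substringBefore("(").trim()
            PreviewEntry(name, i + 1) // 1-based line of the annotation
        }
}
```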

patterns

Scans for Compose anti-patterns and recurring implementation patterns:

  • Unstable composable parameters (non-primitive, non-@Stable types)
  • Missing key() in LazyColumn/LazyRow items
  • remember without keys on derived values
  • derivedStateOf misuse
  • Uncached lambdas in composable parameters
  • Consistent patterns you use (e.g., your error state handling approach)

"what patterns does this codebase use?"
"are there any anti-patterns?"
"before I add a new screen, what conventions should I follow?"
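As a flavor of what one such check looks like, here is a toy version of the "missing key() in lazy lists" detector. The real detector works on the AST; this line-based heuristic only illustrates the idea:

```kotlin
// Toy patterns-scope check: flag items(...) calls in lazy lists without a key.
// Returns 1-based line numbers of flagged calls.
fun findMissingLazyKeys(source: String): List<Int> =
    source.lines().withIndex()
        .filter { (_, line) -> "items(" in line && "key =" !in line }
        .map { (i, _) -> i + 1 }

fun main() {
    val src = listOf(
        "LazyColumn {",
        "    items(users) { user -> UserRow(user) }",                  // flagged: no key
        "    items(posts, key = { it.id }) { post -> PostRow(post) }", // ok
        "}",
    ).joinToString("\n")
    println(findMissingLazyKeys(src)) // prints [2]
}
```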

full

Returns all scopes combined, but applies focus-file priority: the specified file and its direct import neighbors are included in full. Everything else is truncated to signatures only.

"I'm working on HomeScreen.kt — give me full context"
"deep dive on the onboarding flow"

The token budget controls how much context is included. More budget means more completeness; less budget means tighter curation.

Budget        Tokens    Use case
performance   ~8,000    Long sessions where you've already established context
balanced      ~2,000    Default; good for most tasks
minimal       ~512      Quick questions, or when the remaining context window is small

The budget is enforced by trimming lower-ranked content first. High-centrality files and direct focus-file neighbors are always included within budget; peripheral files are truncated to signatures only.
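The trimming strategy can be sketched as follows. This is a simplified illustration, not ComposeProof's code: `FileContext`, the chars-per-token estimate, and the fixed "top two files in full" cutoff are all assumptions:

```kotlin
// Sketch of the budget stage: emit the highest-ranked files in full, truncate
// peripheral files to signatures, and stop once the token budget is spent.
data class FileContext(val path: String, val rank: Double, val full: String, val signatures: String)

fun trimToBudget(files: List<FileContext>, budgetTokens: Int, fullCount: Int = 2): String {
    val sb = StringBuilder()
    var used = 0
    for ((i, f) in files.sortedByDescending { it.rank }.withIndex()) {
        // Only the top-ranked files get full bodies; the rest get signatures only.
        val text = if (i < fullCount) f.full else f.signatures
        val cost = text.length / 4 // rough chars-per-token estimate
        if (used + cost > budgetTokens) break
        sb.appendLine(text)
        used += cost
    }
    return sb.toString()
}
```

Under a tight budget, even a high-ranked file's full body can be dropped once the running total would exceed the limit, which is why `minimal` feels so terse.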

# In Claude Code:
"use minimal context — I only have a small window left"
# Or configure globally:
"set context budget to performance"
# Stored in .composeproof/config.json

The context graph rebuilds per branch. If you switch branches, the next cp_get_context call detects the git HEAD change and rebuilds the AST index.

This matters for feature branches: the context graph reflects the code on your current branch, not main. If you’ve added a new component or refactored a screen, the context graph will show it.

# No configuration needed — ComposeProof reads git HEAD automatically
git checkout feature/redesign-home
> give me the compose context # rebuilds for this branch

The AST index is cached per git commit hash. Switching back to a previous commit restores the cached context immediately.
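Conceptually, the cache is a map keyed by commit hash: the first lookup per commit rebuilds the index, and revisiting a commit is a cache hit. A minimal sketch (illustrative only; the class and its `build` callback are not ComposeProof's API):

```kotlin
// Per-commit index cache: keyed by git HEAD hash, rebuilt only on unseen commits.
class ContextIndexCache(private val build: (commit: String) -> String) {
    private val byCommit = mutableMapOf<String, String>()
    var rebuilds = 0
        private set

    fun forHead(commit: String): String =
        byCommit.getOrPut(commit) {
            rebuilds++           // only runs on a cache miss
            build(commit)
        }
}

fun main() {
    val cache = ContextIndexCache { "index@$it" }
    cache.forHead("abc123") // rebuild
    cache.forHead("def456") // rebuild (branch switch)
    cache.forHead("abc123") // cache hit — no rebuild
    println(cache.rebuilds) // prints 2
}
```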


Use cp_configure_context to persist your preferences:

# Set default scope
"always use full context scope by default"
# Set default budget
"set context budget to performance"
# Specify which module is the UI module (helps scope focus)
"my UI code is in :feature:home and :core:ui"
# Exclude generated code from context
"exclude build/ and generated/ directories from context"

These preferences are saved in .composeproof/config.json and apply to all future cp_get_context calls.
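The exact schema of .composeproof/config.json isn't documented above; a file capturing the preferences from this section might look like the following (all field names are assumptions for illustration):

```json
{
  "context": {
    "defaultScope": "full",
    "budget": "performance",
    "uiModules": [":feature:home", ":core:ui"],
    "exclude": ["build/", "generated/"]
  }
}
```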


ComposeProof’s context graph uses tree-sitter for AST parsing and PageRank for file ranking. This gives solid results for most projects.

For deeper semantic understanding — finding related code by meaning rather than by import relationships — ComposeProof integrates with Kartograph, a separate MCP server that provides tree-sitter AST parsing combined with vector embeddings.

ComposeProof = eyes (renders UI, captures screenshots)
Kartograph = brain (understands code semantics, finds related code)

When both MCP servers are running, they work as peers. ComposeProof handles all visual tooling; Kartograph handles semantic code search. You can combine them in a single AI session:

# In .mcp.json (consumer project)
{
  "mcpServers": {
    "composeproof": {
      "command": "composeproof",
      "args": ["serve", "--project", "."]
    },
    "kartograph": {
      "command": "kartograph",
      "args": ["serve", "--project", "."]
    }
  }
}
> find all screens that handle authentication

Kartograph answers this with semantic search (finds code by meaning). ComposeProof renders the results. The two servers complement each other without overlap.


cp_get_context is most useful at the start of a task and when switching to a new area of the codebase.

# Start of a session — orient the AI
"understand this project before we start"
→ cp_insights → cp_get_context scope=full
# Starting UI work
"I need to add a new settings screen"
→ cp_get_context scope=compose
→ AI learns: existing components, theme tokens, patterns to follow
# Before a refactor
"I'm going to refactor the navigation setup"
→ cp_get_context scope=structure
→ AI learns: what depends on navigation, centrality of nav files
# Code review
"review this PR for Compose best practices"
→ cp_get_context scope=patterns
→ AI learns: existing patterns, then checks PR diffs against them

You don’t need to call cp_get_context before every tool call. Once the AI has the context in its window, it persists for the session. Call it again when you switch to a different part of the codebase or start a new task.