Code Graphs FAQ: Framework-Aware AI Context Layer Guide
Most "code graphs" are just syntax trees with extra steps. They'll tell you a function calls another function. Maybe they'll trace imports. But ask them "where does this API endpoint get hit?" or "which components depend on this hook?" and you get silence.
Framework-aware code graphs are different. They understand that app/api/users/[id]/route.ts is an API endpoint. They know a default export in pages/ creates a route. They map data flow through React contexts and server actions.
This matters because AI tools need context. When you ask Cursor or Claude to "optimize this component," should it know about the 47 other places that import it? Should it see the API route it calls? Of course. That's what framework awareness gives you.
A code graph maps relationships between code entities. Not just files — the actual things in your codebase. Functions, classes, components, routes, database models, API endpoints.
At minimum, a code graph tracks four things (see the data-model sketch after this list):
Definitions: Where things are declared
References: Where they're used
Dependencies: What depends on what
Relationships: How pieces connect
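To make that concrete, here's a minimal sketch of such a graph's data model in TypeScript. The entity kinds and edge kinds are hypothetical — real tools use richer schemas — but every code graph has some version of these shapes:

```typescript
// A minimal, hypothetical data model for a code graph. The entity and
// relationship kinds are illustrative, not any particular tool's schema.

type EntityKind =
  | "function" | "class" | "component"
  | "route" | "model" | "endpoint";

type EdgeKind =
  | "defines" | "references" | "imports"
  | "renders-in" | "calls-endpoint";

interface GraphNode {
  id: string;     // e.g. "app/api/users/route.ts#POST"
  kind: EntityKind;
  file: string;   // file where the entity is declared
}

interface GraphEdge {
  from: string;   // GraphNode id
  to: string;     // GraphNode id
  kind: EdgeKind;
}

interface CodeGraph {
  nodes: Map<string, GraphNode>;
  edges: GraphEdge[];
}
```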
The naive version parses your code into an AST (Abstract Syntax Tree) and calls it done. Better tools build a semantic graph that understands language constructs. Framework-aware graphs go further — they know Next.js rewrites, Prisma schemas, tRPC routers.
Glue builds framework-aware graphs by indexing codebases with pattern recognition for common frameworks. It maps not just "this file imports that file" but "this component renders in these routes" and "this API endpoint serves these frontend calls."
Why can't I just use grep or GitHub search?
You can. For simple queries, text search is faster.
But try finding "all components that use this custom hook" with grep. You'll get false positives from comments, test mocks, TypeScript types. You won't catch dynamic imports or re-exports through barrel files.
Code graphs solve this because they parse structure, not text. They know import { useAuth } from './hooks' is different from // TODO: useAuth here. They follow the re-export chain when ./hooks is just export * from './useAuth'.
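A concrete (hypothetical) barrel-file setup shows the difference:

```typescript
// hooks/useAuth.ts — the real definition
export function useAuth() {
  /* ... */
}

// hooks/index.ts — a barrel file; grep finds no useAuth "usage" here
export * from "./useAuth";

// components/Profile.tsx — imports through the barrel
import { useAuth } from "../hooks";

// A code graph resolves "../hooks" -> hooks/index.ts -> hooks/useAuth.ts
// and records Profile.tsx as a real reference. Meanwhile grep also matches
// noise like `// TODO: useAuth here`, which a structural index never
// mistakes for a reference.
```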
More importantly: graphs enable traversal queries. "Show me everything three levels downstream from this function." Text search can't do that without you manually repeating searches on every result.
What makes a graph "framework-aware"?
Understanding the conventions and implicit behaviors of frameworks.
In Next.js 13+, a file at app/dashboard/settings/page.tsx automatically becomes the /dashboard/settings route. A framework-aware graph knows this without you annotating anything. It can answer "what's the entry point for this URL?" instantly.
Same with API routes. app/api/users/route.ts with export async function POST means there's a POST endpoint at /api/users. The graph captures this as a typed relationship — not just "this file exports a function" but "this defines an HTTP endpoint."
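For instance, given this App Router file, a framework-aware indexer derives the endpoint from the file path and export name (the handler body here is just a placeholder):

```typescript
// app/api/users/route.ts
import { NextResponse } from "next/server";

// A framework-aware graph records this as a typed entity:
//   endpoint: POST /api/users   (derived from file path + export name)
// rather than merely "this file exports an async function named POST".
export async function POST(request: Request) {
  const body = await request.json();
  // ...validate and create the user (placeholder)...
  return NextResponse.json({ created: body }, { status: 201 });
}
```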
React patterns matter too. When you createContext and then useContext, there's data flow even though no function directly calls another. Framework-aware graphs track these context relationships. Same with Redux stores, Zustand state, React Query caches.
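A small sketch of that implicit flow — a hypothetical auth context where no function calls another, yet the graph should connect provider and consumers:

```typescript
import { createContext, useContext } from "react";

// Hypothetical auth context. No call edge connects the provider to the
// consumer below, but data flows between them at runtime.
const AuthContext = createContext<{ userId: string } | null>(null);

export const AuthProvider = AuthContext.Provider;

export function useCurrentUser(): string {
  const auth = useContext(AuthContext);
  if (!auth) throw new Error("useCurrentUser must run inside AuthProvider");
  return auth.userId;
}

// A framework-aware graph adds an edge like:
//   AuthContext --provides--> useCurrentUser --referenced-by--> its callers
```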
Without framework awareness, your graph is just generic language analysis. With it, you get semantic understanding of your actual architecture.
Can't language servers do this already?
Language servers (LSP) are good at local navigation. Go-to-definition, find-references, autocomplete. They excel at developer-initiated actions in a specific file.
But language servers are demand-driven and scoped to the files you're actively editing. They don't maintain a persistent, queryable graph of your entire codebase. They won't tell you "these 12 components broke when you changed that prop type" until you open each file. And they don't understand cross-cutting concerns like "all the places this feature flag is checked."
Code graphs are global and persistent. They index your whole codebase once, then answer structural queries instantly. "What's the blast radius of changing this API?" That's a graph query — trace all call paths from that endpoint. LSP can't help you there.
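In graph terms, blast radius is a plain reachability query. A minimal sketch (edge shape hypothetical) that walks reverse-dependency edges with BFS:

```typescript
interface Edge {
  from: string; // the dependent (importer, caller)
  to: string;   // the dependency (imported, called)
}

// Everything transitively affected when `changedId` changes: walk edges
// in reverse, collecting every node that depends on something affected.
function blastRadius(edges: Edge[], changedId: string): Set<string> {
  const affected = new Set<string>();
  const queue = [changedId];
  while (queue.length > 0) {
    const current = queue.shift()!;
    for (const edge of edges) {
      if (edge.to === current && !affected.has(edge.from)) {
        affected.add(edge.from);
        queue.push(edge.from);
      }
    }
  }
  return affected;
}
```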
The two are complementary. LSP for in-editor smarts, graphs for codebase-level intelligence.
How do code graphs help AI coding tools?
LLMs are context-limited. Even with 200k token windows, you can't dump your entire codebase into every prompt. You need to select relevant context.
This is where code graphs shine. When you ask an AI to modify a component, the graph can (see the sketch after this list):
Pull in the component's dependencies
Find where it's imported
Grab related type definitions
Include any context providers or hooks it uses
Check for similar components (via feature clustering)
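A minimal sketch of that selection step, reusing the hypothetical edge shape from the blast-radius example — real context selectors rank by relevance and budget tokens far more carefully:

```typescript
interface Edge {
  from: string;
  to: string;
  kind: string; // "imports", "uses-hook", "provides-context", ...
}

// Gather a component's immediate graph neighborhood as AI context,
// capped so the prompt stays surgical instead of dumping the codebase.
function selectContext(edges: Edge[], targetId: string, cap = 20): string[] {
  const related = new Set<string>();
  for (const edge of edges) {
    if (edge.from === targetId) related.add(edge.to);  // its dependencies
    if (edge.to === targetId) related.add(edge.from);  // its importers
  }
  // A real tool would rank by edge kind and distance before truncating.
  return [...related].slice(0, cap);
}
```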
Glue's MCP integration does exactly this for Cursor, Copilot, and Claude Desktop. The AI gets framework-aware context automatically. It knows when you're editing a Next.js route that there's probably an API handler to consider. It sees the database schema if you're writing a query.
Without graphs, AI tools either get too much context (slow, expensive) or too little (hallucinations, broken code). Graphs let you be surgical about what matters.
What's the difference between static and dynamic analysis?
Static analysis examines code without running it. Dynamic analysis watches actual execution.
Code graphs are static. They parse your source files and infer behavior from structure. This is fast and safe — no need to spin up your app, no risk of side effects.
The tradeoff: static analysis can miss runtime behavior. If your code does const Component = components[dynamicKey], a static graph might not know which component that is. Dynamic analysis would trace it at runtime.
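A short example of the problem (component names hypothetical):

```typescript
// Stub components standing in for real React components.
const TableView = () => null;
const ChartView = () => null;
const ListView = () => null;

const registry = { table: TableView, chart: ChartView, list: ListView };

function pick(dynamicKey: keyof typeof registry) {
  // Statically, the best a graph can record is
  // "one of TableView | ChartView | ListView".
  // Only dynamic analysis knows which one actually ran.
  return registry[dynamicKey];
}
```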
But most codebases are statically analyzable. Frameworks follow conventions. Types are declared. Imports are explicit. For 90% of queries, static graphs give you answers instantly. For the remaining 10%, you pair it with runtime instrumentation or trace logs.
How do you handle monorepos and workspace dependencies?
Monorepos are where basic code graphs fall apart. You've got shared packages, internal dependencies, different framework versions per app.
Framework-aware graphs need to understand workspace boundaries. When packages/ui exports a component and apps/web imports it, that's a cross-package dependency. The graph should track it as such — not just treat it like any import.
Build tools matter too. Turborepo caching, Nx task graphs, pnpm workspaces — these affect what gets rebuilt when something changes. A good code graph integrates with your build system to map "if this changes, what else needs rebuilding?"
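A toy version of the boundary check — given the workspace's package names (the @acme/* names here are hypothetical), classify each import specifier:

```typescript
// Hypothetical workspace layout: package name -> directory.
const workspacePackages = new Map([
  ["@acme/ui", "packages/ui"],
  ["@acme/utils", "packages/utils"],
]);

type ImportKind = "relative" | "cross-package" | "external";

function classifyImport(specifier: string): ImportKind {
  if (specifier.startsWith(".")) return "relative";
  // Match "@acme/ui" and subpaths like "@acme/ui/button".
  for (const name of workspacePackages.keys()) {
    if (specifier === name || specifier.startsWith(name + "/")) {
      return "cross-package"; // record as a workspace-boundary edge
    }
  }
  return "external"; // e.g. "react", resolved from node_modules
}
```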
This feeds into change impact analysis. Before you merge that PR touching shared utilities, the graph shows you every app and package affected. No surprises in production.
What about generated code and build artifacts?
Skip them in the graph. Index source files only.
Generated code pollutes the graph with noise. Your Prisma client generates hundreds of type definitions — you don't need those in the graph. Same with Next.js build artifacts, TypeScript output, bundled files.
The trick is knowing what to ignore. Framework-aware graphs use smart heuristics (sketched in code after this list):
Anything in .next/, dist/, build/ — ignored
node_modules/ — indexed at package level, not individual files
Generated Prisma/GraphQL files — marked as generated; only the schemas are indexed
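Those heuristics reduce to something like this sketch (patterns illustrative; a real indexer also honors .gitignore and framework configs):

```typescript
// Illustrative ignore heuristics for an indexer.
const IGNORED_DIRS = ["/.next/", "/dist/", "/build/"];
const GENERATED_MARKERS = [/@generated/, /DO NOT EDIT/i];

function shouldIndex(path: string, head: string): boolean {
  if (IGNORED_DIRS.some((dir) => path.includes(dir))) return false;
  if (path.includes("node_modules/")) return false; // package level only
  // `head` is the first few lines of the file: generated files are
  // flagged as generated rather than fully indexed.
  if (GENERATED_MARKERS.some((marker) => marker.test(head))) return false;
  return true;
}
```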
You want the source of truth, not artifacts. When the graph says "this function is used in 5 places," it means 5 places in source code you maintain, not 47 including transpiled output.
How do you keep graphs up to date?
Incremental updates. Watch file changes, re-parse only what changed.
When you edit a React component, the graph re-analyzes that file and updates edges. If you changed a prop type, it checks everywhere that component is imported and flags type mismatches. If you renamed a function, it updates all call sites in the graph (without modifying code).
The key is speed. Fully re-indexing a 100k-line codebase might take 30 seconds. Incremental updates for a single file? Under 100ms. Fast enough to stay in sync with your editor.
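The shape of that loop, sketched with the chokidar file watcher — reindexFile and removeFromGraph are stand-ins for the real parsing and graph-update steps:

```typescript
import chokidar from "chokidar";

// Stand-ins for the real work: parse one file, update its nodes and edges.
declare function reindexFile(path: string): Promise<void>;
declare function removeFromGraph(path: string): void;

const isSource = (path: string) => /\.(ts|tsx)$/.test(path);

// Watch the source tree and re-index only what changed.
const watcher = chokidar.watch("src", { ignoreInitial: true });

watcher.on("add", (path) => {
  if (isSource(path)) void reindexFile(path);
});
watcher.on("change", (path) => {
  if (isSource(path)) void reindexFile(path);
});
watcher.on("unlink", (path) => {
  if (isSource(path)) removeFromGraph(path);
});
```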
Glue does this continuously. As you code, the graph updates in the background. When you query it — whether through the UI or via MCP for AI context — you get current data. No stale analysis from yesterday's commit.
Can code graphs detect architectural problems?
Yes, if they track the right metrics.
Circular dependencies show up as cycles in the graph. Highly coupled modules have dense edges between them. Orphaned code has no incoming references.
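Finding those cycles is a standard depth-first search; a sketch over a module adjacency list:

```typescript
// Detect a circular dependency with DFS over module -> imports.
function findCycle(deps: Map<string, string[]>): string[] | null {
  const onPath = new Set<string>(); // nodes on the current DFS path
  const done = new Set<string>();   // nodes fully explored

  function dfs(node: string, path: string[]): string[] | null {
    if (onPath.has(node)) {
      return [...path.slice(path.indexOf(node)), node]; // cycle closed
    }
    if (done.has(node)) return null;
    onPath.add(node);
    for (const dep of deps.get(node) ?? []) {
      const cycle = dfs(dep, [...path, node]);
      if (cycle) return cycle;
    }
    onPath.delete(node);
    done.add(node);
    return null;
  }

  for (const node of deps.keys()) {
    const cycle = dfs(node, []);
    if (cycle) return cycle;
  }
  return null;
}
```

For a.ts importing b.ts importing a.ts, this returns ["a.ts", "b.ts", "a.ts"].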
More sophisticated analysis:
Change coupling: Files that always change together suggest a missing abstraction
Complexity hotspots: High cyclomatic complexity + high churn = refactor target
Ownership gaps: Code with no clear owner tends to accumulate debt
Glue combines code graphs with churn data and team activity to surface these patterns. It's not just "here's your call graph" but "this function has changed 47 times across 8 PRs by 5 different people and has 12 code paths." That's a problem worth addressing.
The graph gives you structure. Metrics give you priorities.
What's next for code graphs?
Deeper AI integration. Right now, code graphs provide context to AI. Soon, AI will help build better graphs.
Imagine: the graph encounters a dynamic import it can't resolve statically. It asks an LLM "given this pattern and surrounding code, what's likely being imported?" The LLM suggests candidates based on naming conventions and usage patterns. The graph validates them and updates.
Or: AI-suggested architectural improvements based on graph analysis. "These components are tightly coupled — here's a refactoring that reduces dependencies by 40%." The graph visualizes before/after structure.
Bidirectional flow. Graphs inform AI, AI enhances graphs. That's where codebase intelligence is heading.
Should you build your own or use existing tools?
Don't build your own unless you have to.
Parsing languages correctly is harder than it looks. Handling all of TypeScript's edge cases? Months of work. Adding framework awareness? More months. Keeping up with Next.js 15, React 19, new patterns? Ongoing maintenance forever.
Use tools built for this. Tree-sitter gives you parsers. Language servers give you semantics. Glue gives you framework-aware graphs with team insights and AI integration.
Build on top of existing graphs, don't rebuild them. Query the graph API to build custom analysis. Export graph data for visualization. Integrate with your CI pipeline for PR checks.
The value isn't in the graph itself — it's in what you do with it. Focus there.