MCP FAQ: Turn AI Assistants Into Product Intelligence Partners
You're in a PR review. Claude is helping you refactor a feature. You ask: "What depends on this authentication module?" Claude responds with generic advice about how auth systems typically work.
Wrong question? No. Wrong context.
Claude doesn't know your codebase. It can see the diff. Maybe the file you're editing. But it has no idea that your auth module is used by 47 different services, that the payments team is actively refactoring their integration, or that there's a feature flag controlling the new SSO flow.
This is the gap that Model Context Protocol (MCP) solves. Not by giving AI assistants a bigger context window. By giving them structured access to your actual codebase intelligence.
MCP is Anthropic's open protocol for connecting AI assistants to external data sources and tools. Think of it as an API standard, but designed specifically for AI context.
The simple explanation: Your AI assistant can now call functions that retrieve specific information from your tools. Want to know which features are in production? MCP server queries your codebase index. Need to check test coverage for a module? MCP server pulls metrics. Curious about who owns this service? MCP server checks your code health data.
The protocol defines how these requests work. The servers implement actual data retrieval. The AI assistant decides when to use them based on your conversation.
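Concretely, MCP messages are JSON-RPC 2.0 under the hood. When the assistant decides it needs data, it sends a tools/call request and gets structured content back. A simplified sketch (the tool name get_feature_usage is a hypothetical example, not part of the protocol itself):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_feature_usage",
    "arguments": { "module": "auth" }
  }
}
```

The server answers with structured content the assistant folds into its reply:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      { "type": "text", "text": "auth is used by 12 features, including SSO, API keys, and mobile auth." }
    ]
  }
}
```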
This matters because AI assistants have been operating blind. They're brilliant at code generation, decent at debugging, and surprisingly good at architecture discussions. But they're having those discussions without knowing what you've actually built.
Why This Changes Everything for Product Development
I've watched developers explain their codebase architecture to AI assistants dozens of times. Same conversation, different day:
"We have a microservices architecture..."
"The frontend uses React..."
"Our API layer handles authentication..."
You're wasting tokens and time providing context that should be automatic. Worse, you're providing incomplete context. You know the high-level architecture, but do you know every feature using that auth module? Do you remember which services are most complex or most frequently changed?
MCP flips this. Your AI assistant queries your codebase intelligence directly. At Glue, we implemented MCP so Claude, Cursor, and ChatGPT can access your feature catalog, code health metrics, and team ownership data without you manually explaining any of it.
The conversation shifts from "let me explain our system" to "analyze our actual implementation."
Common Questions About MCP (The Honest Answers)
Does MCP replace documentation?
No. It makes documentation queryable by AI. If your docs are garbage, MCP won't fix that. But if you have feature catalogs, architecture diagrams, or code health metrics, MCP makes them accessible exactly when needed.
Glue auto-generates documentation from code and discovers features via AI. That becomes the knowledge base your AI assistant queries. You're not maintaining docs for MCP—you're making existing intelligence accessible.
Is this just fancy retrieval?
Kind of, yes. But "just retrieval" undersells it. The magic isn't the retrieval mechanism—it's that the AI assistant knows when to retrieve and what to ask for.
You don't specify: "Query the feature catalog, check code health, then analyze dependencies." You say: "Should we refactor this module?" The assistant decides it needs feature usage data, checks what depends on it, reviews churn metrics, and then gives you an informed answer.
What data actually matters?
Three categories keep coming up:
Feature-level understanding: What features exist, where they're implemented, what they do
Code health metrics: Complexity, churn, test coverage, ownership
Dependencies and relationships: What depends on what, which teams own which code
These answer the questions developers actually ask: "What will break if I change this?" "Who should review this?" "Is this code stable or actively changing?"
Glue surfaces all three through MCP. Your AI assistant can query your feature catalog (what exists), check code health (should we change it), and map ownership (who to involve).
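To make that concrete: if you exposed those three categories as MCP tools, a server's tools/list response might advertise something like the following. The tool names and schemas are hypothetical, for illustration, not Glue's actual API:

```json
{
  "tools": [
    {
      "name": "get_feature_catalog",
      "description": "List features and where they are implemented",
      "inputSchema": {
        "type": "object",
        "properties": { "query": { "type": "string" } }
      }
    },
    {
      "name": "get_code_health",
      "description": "Return complexity, churn, test coverage, and ownership for a path",
      "inputSchema": {
        "type": "object",
        "properties": { "path": { "type": "string" } },
        "required": ["path"]
      }
    },
    {
      "name": "get_dependencies",
      "description": "Map what depends on a module and which teams own it",
      "inputSchema": {
        "type": "object",
        "properties": { "module": { "type": "string" } },
        "required": ["module"]
      }
    }
  ]
}
```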
Does MCP access my entire codebase?
It accesses what the MCP server provides. That's usually metadata, not raw code. Feature descriptions. Metrics. Ownership info. Dependency graphs.
Think of it like this: MCP isn't grep for your codebase. It's structured queries against your codebase intelligence layer.
Real Examples: MCP in Action
Let me show you what this looks like in practice.
Scenario 1: Planning a Refactor
Without MCP:
You: "I want to refactor the user authentication module."
Claude: "Here's a general approach to refactoring authentication systems..."
With MCP:
You: "I want to refactor the user authentication module."
Claude: [queries feature catalog] "That module is used by 12 features including SSO, API keys, and mobile auth. The payments team has active work in their integration (15 commits last week). Complexity score is 7.2/10. Three teams have ownership. Should we coordinate before starting?"
See the difference? One is generic advice. The other is specific intelligence about your codebase.
Scenario 2: Understanding Impact
Without MCP:
You: "What happens if I change this API endpoint?"
Claude: "You'll need to check all callers and update documentation..."
With MCP:
You: [shares code]
Claude: [queries dependencies and feature catalog] "This endpoint is called by the dashboard (user-facing), data sync job (internal), and mobile app (v2.3+). There's also a deprecated version in the legacy admin panel. Five features depend on this: user settings, profile management, notification preferences, export data, and admin overrides. The mobile team should review since they added error handling for rate limiting last month."
That's not generic advice. That's actionable intelligence.
Scenario 3: Code Review Context
Without MCP:
You: "Review this PR."
Claude: [reviews the diff] "Looks good, consider adding tests for edge cases..."
With MCP:
You: "Review this PR."
Claude: [checks code health and ownership] "This file has high churn (12 changes this month) and complexity score 8.1/10. Three developers have edited it recently. The error handling you're adding is good, but this file is a hotspot—consider splitting it. Also, @sarah owns the related feature and should probably review since this touches the core logic she refactored."
Context changes everything.
Setting Up MCP (Less Painful Than You Think)
Most AI assistants with MCP support use a configuration file. For Claude Desktop, it's claude_desktop_config.json. Cursor uses a similar setup.
The config specifies which MCP servers to connect to. Each server provides specific capabilities. Example:
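A minimal sketch for claude_desktop_config.json (the server name, package, and API-key variable are illustrative placeholders, not Glue's actual values—pull those from your provider's docs):

```json
{
  "mcpServers": {
    "glue": {
      "command": "npx",
      "args": ["-y", "@glue/mcp-server"],
      "env": {
        "GLUE_API_KEY": "<your-api-key>"
      }
    }
  }
}
```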
That's it. Restart your AI assistant. It now has access to your codebase intelligence.
The MCP server (in this case, Glue's) handles authentication, query processing, and data formatting. Your AI assistant just makes requests when it needs information.
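If you're curious what sits on the other side of that config, here's a minimal server sketch using the official TypeScript MCP SDK. This isn't Glue's implementation—the tool name and the stubbed data source are hypothetical—but it shows the shape of the work: register a tool, handle a query, return structured text over stdio.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical stand-in for a real intelligence layer (an index, database, or API).
async function lookupFeatureUsage(module: string): Promise<string[]> {
  const fakeIndex: Record<string, string[]> = {
    auth: ["SSO", "API keys", "mobile auth"],
  };
  return fakeIndex[module] ?? [];
}

const server = new McpServer({ name: "codebase-intel-demo", version: "0.1.0" });

// Register one tool. The assistant decides when to call it based on the conversation.
server.tool(
  "get_feature_usage",
  "List the features that depend on a module",
  { module: z.string() },
  async ({ module }) => {
    const features = await lookupFeatureUsage(module);
    return {
      content: [
        {
          type: "text",
          text: features.length
            ? `${module} is used by: ${features.join(", ")}`
            : `No known features depend on ${module}`,
        },
      ],
    };
  }
);

// Claude Desktop and Cursor launch local servers and talk to them over stdio.
await server.connect(new StdioServerTransport());
```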
What MCP Isn't (Yet)
Let's be real about limitations.
MCP isn't real-time code execution. Your AI assistant can query metadata but shouldn't be running arbitrary commands in your codebase. That's a security nightmare.
MCP isn't a replacement for local context. If you're editing a file, your AI assistant should see that file directly. MCP is for adjacent context—the stuff outside the immediate view.
MCP isn't magic synthesis. It returns structured data. The AI assistant still needs to interpret it correctly. If your feature catalog is messy, the responses will be messy.
MCP server capabilities aren't standardized yet. The protocol itself is stable, but different servers implement different tools, so what data you can query varies from server to server. This will improve as the ecosystem matures.
Why Glue Built MCP Support
We built Glue to solve a specific problem: engineering teams lose track of what they've built. Features scatter across services. Documentation goes stale. Nobody knows what's actually running in production.
Glue indexes your codebase, discovers features via AI, generates docs from code, and maps code health. It's a product intelligence layer.
MCP makes that intelligence conversational. Instead of logging into a dashboard to check feature usage, you ask your AI assistant while you're coding. Instead of manually searching for ownership info, it's surfaced automatically during PR reviews.
We're not trying to replace your AI assistant. We're making it smarter about your specific codebase.
The Future of AI-Assisted Development
MCP is the first step toward something bigger: AI assistants that understand not just code syntax but product context.
Right now, they're code generators that happen to be very good. With MCP, they become development partners that know your system architecture, understand feature relationships, and can reason about impact.
The best part? This doesn't require AGI or massive model improvements. It requires better context retrieval. The AI models are already capable—they just need the right information at the right time.
That's what MCP provides. Not smarter AI. Better-informed AI.
Your codebase has intelligence locked in commits, features, metrics, and team knowledge. MCP unlocks it. Your AI assistant stops being a generic code helper and becomes a product intelligence partner.
The question isn't whether this is useful. The question is why you'd develop any other way.