SonarQube has 5,000+ rules. And it still can't tell you if your architecture makes sense.
This is the fundamental limitation of rule-based static analysis: it can only find problems someone already thought to write a rule for. It's pattern matching at scale, not understanding.
Let me explain why this matters and what comes next.
The Rule-Based Ceiling
Traditional SAST (Static Application Security Testing) tools work like this:
- Parse code into an AST (Abstract Syntax Tree)
- Match patterns against a rule database
- Flag violations
// Rule: "Don't use System.out.println in production code"
// Matches this:
System.out.println("User logged in: " + userId);
// But has no idea:
// - Whether this is actually production code
// - Whether userId contains sensitive data
// - Whether there's a proper logging framework already in use
// - Whether this println breaks the app's logging strategy
The rule catches the symptom but misses the context.
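Under the hood, that whole pipeline reduces to a tree walk plus a pattern check. Here's a minimal sketch in TypeScript, assuming toy AstNode and Rule shapes that are illustrative only, not any real tool's internals:

// A toy AST node and rule shape. Illustrative only, not SonarQube's internals.
interface AstNode {
  kind: string;            // e.g. 'CallExpression'
  text: string;            // source text of the node
  children: AstNode[];
}

interface Rule {
  id: string;
  appliesTo: string;                   // node kind the rule cares about
  matches: (node: AstNode) => boolean; // the pattern check
  message: string;
}

interface Finding { ruleId: string; message: string; snippet: string }

// Walk the tree and flag every node that matches a rule. Nothing more.
function runRules(root: AstNode, rules: Rule[]): Finding[] {
  const findings: Finding[] = [];
  const visit = (node: AstNode) => {
    for (const rule of rules) {
      if (node.kind === rule.appliesTo && rule.matches(node)) {
        findings.push({ ruleId: rule.id, message: rule.message, snippet: node.text });
      }
    }
    node.children.forEach(visit);
  };
  visit(root);
  return findings;
}

// Example rule: flag System.out.println calls.
const noPrintln: Rule = {
  id: 'no-println',
  appliesTo: 'CallExpression',
  matches: (node) => node.text.startsWith('System.out.println'),
  message: 'Do not use System.out.println in production code',
};

Everything the engine "knows" is whatever the matches predicate encodes. It has no model of the surrounding system.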
What Rules Can't See
Here are real issues that rule-based analysis completely misses:
Architectural Violations
// api/handlers/userHandler.ts
import { prisma } from '../../lib/prisma';
// This handler directly accesses the database
// instead of going through UserService
export async function getUser(id: string) {
  return prisma.user.findUnique({ where: { id } });
}
No rule violation here. Types are correct. No security issues. But this bypasses your service layer and will cause problems when you need to add caching, logging, or authorization.
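For contrast, here's a hedged sketch of the same handler routed through the service layer; the userService import path and getById method are hypothetical stand-ins for whatever your service actually exposes:

// api/handlers/userHandler.ts (hypothetical refactor through the service layer)
import { userService } from '../../services/userService';

export async function getUser(id: string) {
  // Caching, logging, and authorization now live in one place: the service.
  return userService.getById(id);
}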
Feature Duplication
// File 1: src/utils/formatCurrency.ts
export function formatCurrency(amount: number): string {
  return '$' + amount.toFixed(2);
}
// File 2: src/helpers/money.ts
export function displayMoney(value: number): string {
  return '$' + value.toFixed(2);
}
Two teams implemented the same thing. Neither knows about the other. Rules can't detect this because each function is individually correct.
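An index-backed tool can still surface this kind of duplication. One common approach, sketched here under the assumption that function sources are available as strings: erase identifiers, then hash what's left, so structurally identical bodies land in the same bucket.

import { createHash } from 'crypto';

// Normalize a function body: collapse whitespace and replace identifiers with
// a placeholder so formatCurrency(amount) and displayMoney(value) hash alike.
function structuralFingerprint(source: string): string {
  const normalized = source
    .replace(/\s+/g, ' ')
    .replace(/[A-Za-z_]\w*/g, 'ID');   // crude identifier erasure
  return createHash('sha256').update(normalized).digest('hex');
}

// Group functions by fingerprint; any bucket with 2+ entries is a duplicate candidate.
function findDuplicates(functions: { name: string; body: string }[]): string[][] {
  const buckets = new Map<string, string[]>();
  for (const fn of functions) {
    const key = structuralFingerprint(fn.body);
    buckets.set(key, [...(buckets.get(key) ?? []), fn.name]);
  }
  return [...buckets.values()].filter((names) => names.length > 1);
}

Run this over formatCurrency and displayMoney and both names end up in one bucket, because only the identifiers differ.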
Dead Features
// This entire feature is never called
export class LegacyExportService {
  async exportToCSV(data: any[]) {
    // 500 lines of code nobody uses
  }
}
The code is valid. Tests might even pass. But it's dead weight that confuses new developers and increases maintenance burden.
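With a call graph, dead features fall out of a plain reachability walk. A minimal sketch, assuming the graph maps each symbol to the symbols it calls and that entry points (HTTP handlers, main, cron jobs) are known:

type SymbolId = string;

// callGraph: caller -> callees. entryPoints: HTTP handlers, main(), cron jobs, etc.
function findDeadSymbols(
  callGraph: Map<SymbolId, SymbolId[]>,
  entryPoints: SymbolId[],
): SymbolId[] {
  const reachable = new Set<SymbolId>();
  const stack = [...entryPoints];
  while (stack.length > 0) {
    const current = stack.pop()!;
    if (reachable.has(current)) continue;
    reachable.add(current);
    stack.push(...(callGraph.get(current) ?? []));
  }
  // Anything in the graph that no entry point can reach is a dead-code candidate.
  return [...callGraph.keys()].filter((symbol) => !reachable.has(symbol));
}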
Implicit Dependencies
// UserService expects NotificationService to be initialized first
// But nothing in the code makes this explicit
export class UserService {
  async createUser(data: UserData) {
    // Will crash if NotificationService.init() wasn't called
    await NotificationService.sendWelcomeEmail(data.email);
  }
}
No rule can detect this. It requires understanding initialization order across the entire application.
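The usual fix is to make the dependency explicit, for example via constructor injection, so the constraint lives in the type signature rather than in a runtime crash. A hypothetical sketch (Notifier and UserData are placeholder types):

// Hypothetical sketch: the dependency is now visible in the constructor,
// so UserService cannot be constructed before a working notifier exists.
interface Notifier {
  sendWelcomeEmail(email: string): Promise<void>;
}

interface UserData { email: string }

export class UserService {
  constructor(private readonly notifier: Notifier) {}

  async createUser(data: UserData) {
    await this.notifier.sendWelcomeEmail(data.email);
  }
}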
The Graph-Based Alternative
Instead of matching patterns, what if we built a complete model of code relationships?
That's what we did with Glue. Every codebase becomes a queryable graph:
// Our data model
interface CodeGraph {
  symbols: Map<SymbolId, Symbol>;
  calls: Map<SymbolId, SymbolId[]>;
  imports: Map<FileId, FileId[]>;
  types: Map<TypeId, TypeRelationship[]>;
}
// Now we can answer real questions:
const callers = await findCallers('UserService.createUser');
const dependencies = await getTransitiveDeps('PaymentModule');
const features = await discoverFeatures(workspaceId);
This isn't rule matching. It's structural understanding.
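As a sketch of how a query like findCallers can be answered from that model, assuming SymbolId is a string alias and calls maps each caller to its callees, as in the interface above:

// Invert the caller -> callees map to answer "what calls this symbol?"
function findCallers(graph: CodeGraph, target: SymbolId): SymbolId[] {
  const callers: SymbolId[] = [];
  for (const [caller, callees] of graph.calls) {
    if (callees.includes(target)) {
      callers.push(caller);
    }
  }
  return callers;
}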
What Intelligence-First Analysis Looks Like
Question: "Is this service properly isolated?"
Rule-based answer: ¯\_(ツ)_/¯
Graph-based answer:
PaymentService Isolation Analysis:
Inbound dependencies: 4 services (good - limited surface area)
Outbound dependencies: 12 services (warning - high coupling)
Unexpected connections:
- PaymentService → UserPreferencesService (why?)
- PaymentService → AnalyticsService (direct, should use events)
Database access: Direct (should use repository pattern)
Recommendation: Extract AnalyticsService calls to event-based
communication. Review UserPreferences dependency.
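The inbound and outbound numbers in that report are just edge counts over the graph. A rough sketch, assuming service-level call edges in a caller-to-callees map:

// Count inbound and outbound call edges for one service: the raw material
// for the isolation report above. ServiceId is a placeholder string alias.
type ServiceId = string;

function isolationMetrics(
  callEdges: Map<ServiceId, ServiceId[]>,  // caller service -> called services
  service: ServiceId,
) {
  const outbound = new Set(callEdges.get(service) ?? []);
  const inbound = new Set(
    [...callEdges]
      .filter(([, callees]) => callees.includes(service))
      .map(([caller]) => caller),
  );
  return { inbound: inbound.size, outbound: outbound.size };
}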
Question: "What's the blast radius of changing the User model?"
Rule-based answer: "3 type errors in files that import User"
Graph-based answer:
User Model Change Impact:
Direct references: 45 files
Transitive impact: 127 files
Affected API endpoints: 23
Affected background jobs: 7
Affected tests: 89
High-risk areas:
- AuthenticationService (core auth flow)
- SyncService (mobile data sync)
- ExportService (data export compliance)
Suggested change order:
1. Update User model
2. Update AuthenticationService (most dependencies)
3. Update API handlers
4. Update background jobs
5. Run full test suite
This is actionable intelligence, not just pattern violations.
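Under the hood, the transitive-impact number is a breadth-first walk over reverse dependency edges. A sketch, assuming a precomputed node-to-dependents map:

// Transitive impact: walk reverse dependency edges (who depends on X) outward
// from the changed symbol until no new dependents appear.
type NodeId = string;

function blastRadius(
  reverseDeps: Map<NodeId, NodeId[]>,  // node -> nodes that depend on it
  changed: NodeId,
): Set<NodeId> {
  const impacted = new Set<NodeId>();
  const queue = [changed];
  while (queue.length > 0) {
    const current = queue.shift()!;
    for (const dependent of reverseDeps.get(current) ?? []) {
      if (!impacted.has(dependent)) {
        impacted.add(dependent);
        queue.push(dependent);
      }
    }
  }
  return impacted;  // everything that could be affected by changing `changed`
}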
The Hybrid Approach
Rules aren't useless. They catch real bugs:
- SQL injection patterns
- XSS vulnerabilities
- Hardcoded credentials
- Memory leaks
But rules should be the floor, not the ceiling.
The ideal setup:
- Rule-based analysis for known bad patterns (security, performance)
- Type checking for interface contracts
- Graph-based analysis for architecture, dependencies, and understanding
What to Look For in Modern Tools
If you're evaluating static analysis tools, ask:
- Does it build a symbol graph? File-level analysis misses cross-file relationships.
- Can it answer "what calls this?" Basic dependency tracing is table stakes.
The Bottom Line
5,000 rules will catch 5,000 known problems. But the bugs that kill you are the ones nobody wrote rules for.
The future of static analysis isn't more rules. It's deeper understanding.
Code intelligence that knows your architecture, understands your features, and can answer questions about impact and relationships — that's what actually improves code quality.
Rules are where we started. Intelligence is where we're going.