Junior engineers review code for bugs. Senior engineers review code for the next 6 months.
After reviewing thousands of PRs and watching senior engineers work, I've noticed they look for completely different things than most review checklists suggest.
Here's what actually matters.
The Junior Review Checklist
Most code review guides focus on:
- Does it compile?
- Are there tests?
- Is the formatting correct?
- Are there obvious bugs?
This catches syntax issues. It misses everything important.
What Senior Engineers Actually Look For
1. "Does this fit how we do things here?"
```typescript
// PR: Add user validation
export async function validateUser(userId: string): Promise<boolean> {
  const response = await fetch(`/api/users/${userId}`);
  return response.ok;
}
```
A junior reviewer asks: "Does this work?" (Yes)
A senior reviewer asks:
- "Do we have a UserService that should handle this?" (Probably)
- "Are we using fetch directly elsewhere or do we have an API client?" (Check)
- "Where does this get called and does that make sense?" (Context)
The pattern: Code that works but doesn't fit existing patterns creates maintenance debt.
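For illustration, here's how the same change might look if it followed existing conventions. `apiClient` and `UserService` are hypothetical stand-ins for whatever this codebase already has:

```typescript
// Hypothetical shared HTTP wrapper; most codebases centralize the
// base URL, auth headers, and error handling in something like this.
const apiClient = {
  get: (path: string): Promise<Response> => fetch(`/api${path}`),
};

export class UserService {
  // Validation lives with the rest of the user logic and goes through
  // the shared client like every other call in the codebase.
  async validateUser(userId: string): Promise<boolean> {
    const response = await apiClient.get(`/users/${userId}`);
    return response.ok;
  }
}
```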
2. "What happens when this fails?"
```typescript
async function processPayment(order: Order) {
  const payment = await chargeCustomer(order.customerId, order.total);
  await updateOrderStatus(order.id, 'paid');
  await sendConfirmationEmail(order.customerId);
  await triggerShipment(order.id);
}
```
Junior: "Looks clean, approved."
Senior: "What if sendConfirmationEmail fails after payment succeeds? We've charged them, updated the order, but they never got confirmation and shipment didn't trigger. This needs error handling for partial failures."
The pattern: Happy path code is easy. Error handling is where bugs live.
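One way to make the failure modes explicit, sketched under assumptions: `refundCustomer` and the `enqueue` job queue are hypothetical helpers, and the right recovery strategy depends on your system:

```typescript
async function processPayment(order: Order) {
  const payment = await chargeCustomer(order.customerId, order.total);
  try {
    await updateOrderStatus(order.id, 'paid');
  } catch (err) {
    // Charged but couldn't record it: compensate, then surface the error.
    await refundCustomer(payment.id);
    throw err;
  }
  // Email and shipment can be retried safely, so hand them to a queue
  // instead of letting a transient failure strand a paid order.
  await enqueue('send-confirmation-email', { customerId: order.customerId });
  await enqueue('trigger-shipment', { orderId: order.id });
}
```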
3. "How does this affect the call graph?"
```typescript
// PR adds this to AuthService
import { AnalyticsService } from './analytics';

export class AuthService {
  async login(credentials: Credentials) {
    const user = await this.authenticate(credentials);
    await AnalyticsService.trackLogin(user.id); // New line
    return user;
  }
}
```
Junior: "Just a tracking call, looks fine."
Senior: "Wait — AuthService now depends on AnalyticsService. That creates a circular dependency if AnalyticsService already imports AuthService for user context. And now authentication fails if analytics is down. This should be an event, not a direct call."
The pattern: Every import is a dependency. Dependencies create coupling. Coupling creates brittleness.
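A minimal sketch of the event-based version the senior reviewer suggests, using Node's built-in EventEmitter. The bus, the event name, and the subscription site are illustrative:

```typescript
import { EventEmitter } from 'node:events';

// Shared event bus that neither service owns.
export const authEvents = new EventEmitter();

export class AuthService {
  async login(credentials: Credentials) {
    const user = await this.authenticate(credentials);
    // Emit instead of calling AnalyticsService directly: a broken
    // analytics pipeline can no longer break login.
    authEvents.emit('user:login', { userId: user.id });
    return user;
  }
}

// In the analytics module, the dependency now points the other way:
authEvents.on('user:login', ({ userId }) => AnalyticsService.trackLogin(userId));
```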
4. "Will this scale?"
```typescript
async function getActiveUsers(): Promise<User[]> {
  const users = await db.user.findMany();
  // 86400000 ms = 24 hours
  return users.filter(u => u.lastActive > Date.now() - 86400000);
}
```
Junior: "Works, has a test, approved."
Senior: "This loads ALL users into memory, then filters in JavaScript. With 100 users, fine. With 1M users, this crashes the server. Move the filter to the database query."
The pattern: Code that works today might not work at scale. Think about the data size in 2 years.
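The same function with the filter pushed into the query, assuming the Prisma-style client the snippet suggests and a `Date` column:

```typescript
const DAY_MS = 86_400_000; // 24 hours in milliseconds

async function getActiveUsers(): Promise<User[]> {
  // The database returns only matching rows; memory use stays flat
  // no matter how large the user table grows.
  return db.user.findMany({
    where: { lastActive: { gt: new Date(Date.now() - DAY_MS) } },
  });
}
```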
5. "Is this testable?"
```typescript
export class ReportGenerator {
  async generateReport(startDate: Date, endDate: Date) {
    const data = await (await fetch('/api/metrics')).json();
    const processed = this.processData(data);
    const pdf = await PDFLibrary.create(processed);
    await EmailService.send(getCurrentUser().email, pdf);
    return pdf;
  }
}
```
Junior: "Complex but handles everything."
Senior: "This method mixes three side effects (a network fetch, PDF generation, email sending) with data processing, and reads global state via getCurrentUser(). Impossible to unit test in isolation. Each dependency should be injectable, or this should be broken into smaller functions."
The pattern: If you can't test it in isolation, the design is wrong.
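A sketch of the injectable version. The interface names and the trivial `processData` body are invented for illustration; any concrete implementation, or a test stub, can be passed to the constructor:

```typescript
type Metrics = Record<string, number>;

// Illustrative seams: each collaborator is an interface, not an import.
interface MetricsClient { fetchMetrics(start: Date, end: Date): Promise<Metrics>; }
interface PdfRenderer { render(data: Metrics): Promise<Uint8Array>; }
interface Mailer { send(to: string, attachment: Uint8Array): Promise<void>; }

export class ReportGenerator {
  constructor(
    private readonly metrics: MetricsClient,
    private readonly pdf: PdfRenderer,
    private readonly mailer: Mailer,
  ) {}

  async generateReport(recipient: string, startDate: Date, endDate: Date) {
    const data = await this.metrics.fetchMetrics(startDate, endDate);
    const processed = this.processData(data); // pure: trivially unit-testable
    const report = await this.pdf.render(processed);
    await this.mailer.send(recipient, report); // recipient passed in, no global
    return report;
  }

  private processData(data: Metrics): Metrics {
    return data; // placeholder for the real transformation
  }
}
```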
6. "What tribal knowledge does this assume?"
```typescript
// Works perfectly, but...
const config = await loadConfig();
if (config.featureFlags.newCheckout) {
  await processWithNewFlow(order);
} else {
  await processLegacyFlow(order); // Don't touch this
}
```
Junior: "Clean feature flag pattern."
Senior: "What's the context on processLegacyFlow? That comment 'Don't touch this' is a red flag. Is there a reason we can't remove it? Is there documentation? Should we add a TODO with a date for removal?"
The pattern: Code without context becomes unmaintainable. The comment should explain why, not what.
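Same flag check, but with the comment carrying the missing context. The ticket number and business reason here are invented for illustration:

```typescript
if (config.featureFlags.newCheckout) {
  await processWithNewFlow(order);
} else {
  // Legacy flow is still required for enterprise contracts signed
  // before the 2024 migration; remove once the last one moves over.
  // (Hypothetical reference: TICKET-123 tracks the migration.)
  await processLegacyFlow(order);
}
```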
The Review Framework
Here's how I structure reviews now:
First pass (2 minutes): What is this PR trying to do? Does the approach make sense at a high level?
Second pass (5 minutes):
- What services/modules does this touch?
- Does it follow existing patterns?
- Are there new dependencies?
Third pass (detailed):
- Error handling
- Edge cases
- Test coverage (not just existence, but what is tested)
- Documentation if needed
Final question: "Would I be comfortable maintaining this code in 6 months?"
What Tools Can Help
Most of this requires understanding context. Tools can help surface it:
- Call graph analysis: "This function is called by 15 other functions across 8 files."
- Dependency tracking: "This PR adds a new dependency from AuthService to AnalyticsService."
- Blast radius: "Changes to this file typically require changes to 12 other files."
When we built Glue, we included PR context features for exactly this reason. Before reviewing, you can see:
- What features this code affects
- Who else has modified these files
- What the call relationships are
That context changes the review.
The Bottom Line
Good code review isn't about finding bugs. Automated tools can find bugs.
Good code review is about:
- Ensuring the code fits the existing architecture
- Preventing future maintenance problems
- Sharing knowledge about how the system works
- Making sure the next person can understand this
The best reviews I've received weren't "found a bug on line 47." They were "this approach will cause problems when X happens — here's a better pattern we use."
That's what separates senior review from checkbox review.