Kubernetes and serverless did not make infrastructure simpler. They made it someone else's problem - until it was not.
The developer who used to deploy a monolith to a single server now deploys 15 microservices across 3 Kubernetes namespaces with serverless functions handling the edges. The infrastructure complexity did not decrease. It distributed.
The New Understanding Problem
In a monolith, understanding the system meant reading one codebase. In a cloud-native architecture, understanding the system means:
Service boundaries: Which service owns which data? Where are the API contracts?
Deployment topology: Which services can talk to which? What are the network policies?
Data flow: A user action touches services A, B, and C through an event bus. Which one owns the transaction?
Failure modes: Service B goes down. What happens to A and C? Are there circuit breakers? Retry policies?
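To make the failure-mode question concrete, here is a minimal sketch of a client-side circuit breaker wrapping calls from service A to service B. The service names, failure threshold, and cooldown are illustrative assumptions, not something any particular stack prescribes.

```python
import time

class CircuitBreaker:
    """Minimal client-side circuit breaker (illustrative values only)."""

    def __init__(self, failure_threshold=5, reset_timeout_s=30.0):
        self.failure_threshold = failure_threshold  # failures before opening
        self.reset_timeout_s = reset_timeout_s      # cooldown before trying again
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        # If the breaker is open, fail fast until the cooldown expires.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout_s:
                raise RuntimeError("circuit open: downstream service assumed unhealthy")
            self.opened_at = None  # half-open: allow one trial call
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result

# Hypothetical usage: service A wrapping its calls to service B.
# breaker = CircuitBreaker()
# profile = breaker.call(service_b_client.get_profile, user_id)
```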
None of this is in any single codebase. It is distributed across repositories, Helm charts, Terraform configs, and the heads of the two engineers who set it up.
Why This Matters for Developer Productivity
The Understanding Tax multiplies with cloud-native architectures. A developer picking up a ticket in a monolith needs to understand one codebase. A developer picking up the same ticket in a microservices architecture needs to understand:
Their service (the one they will modify)
Upstream services (what calls their service)
Downstream services (what their service calls)
The data contracts between all of them
The deployment and scaling characteristics
This is 3-5x more context acquisition than in a monolith, for the same business-logic change.
The Serverless Trap
Serverless promised to eliminate infrastructure thinking. Instead, it created a new kind: you do not manage servers, but you must understand cold starts, concurrency limits, timeout constraints, and the 47 ways a Lambda can silently fail.
The developer who "does not need to worry about infrastructure" now needs to worry about:
15-second cold starts affecting user experience
6MB payload limits breaking file uploads
15-minute execution limits for batch jobs
Concurrency throttling during traffic spikes
IAM permissions for every service-to-service call
This is infrastructure understanding with extra steps.
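To make that concrete, here is a minimal sketch of a Lambda handler that defends against two of those constraints: the execution timeout and the roughly 6MB synchronous response limit. Only context.get_remaining_time_in_millis() comes from the real Lambda runtime context; the event shape, the buffer values, and process_record are hypothetical.

```python
import json

TIME_BUFFER_MS = 10_000          # stop early, leaving headroom before the hard timeout
MAX_RESPONSE_BYTES = 6_000_000   # approximate 6MB synchronous response limit

def process_record(record):
    # Placeholder for real business logic.
    return {"id": record.get("id"), "status": "ok"}

def handler(event, context):
    """Hypothetical batch handler that stops before the timeout instead of dying mid-record."""
    processed, deferred = [], []
    for record in event.get("records", []):
        # get_remaining_time_in_millis() is provided by the Lambda runtime context object.
        if context.get_remaining_time_in_millis() < TIME_BUFFER_MS:
            deferred.append(record)  # hand the rest back to the caller or a queue
            continue
        processed.append(process_record(record))

    body = json.dumps({"processed": processed, "deferred": deferred})
    if len(body.encode("utf-8")) > MAX_RESPONSE_BYTES:
        # A response over the limit fails the invocation; return a pointer instead.
        body = json.dumps({"error": "response too large; write results to storage and return a key"})
    return {"statusCode": 200, "body": body}
```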
What Actually Helps
1. Cross-Service Dependency Maps
Visualize which services depend on which. Not from documentation (always stale) but from actual API call patterns and deployment configs.
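A dependency map like this can start very small. The sketch below infers caller-to-callee edges from structured access-log entries; the log format and service names are assumptions for illustration, and the same idea extends to deployment configs.

```python
import json
from collections import defaultdict

def build_dependency_map(log_lines):
    """Infer a caller -> callee service graph from structured access logs.

    Each line is assumed (for illustration) to be JSON like:
      {"caller": "checkout", "callee": "payments", "path": "/charge"}
    """
    edges = defaultdict(set)
    for line in log_lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip unstructured lines
        if "caller" in entry and "callee" in entry:
            edges[entry["caller"]].add(entry["callee"])
    return {caller: sorted(callees) for caller, callees in edges.items()}

# Example with made-up traffic:
logs = [
    '{"caller": "checkout", "callee": "payments", "path": "/charge"}',
    '{"caller": "checkout", "callee": "inventory", "path": "/reserve"}',
    '{"caller": "payments", "callee": "ledger", "path": "/post"}',
]
print(build_dependency_map(logs))
# {'checkout': ['inventory', 'payments'], 'payments': ['ledger']}
```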
2. Distributed Call Tracing Context
When a developer picks up a ticket, show them the full distributed trace - which services are involved, what the data flow looks like, where the bottlenecks are.
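Surfacing that context presupposes the services emit traces at all. Here is a minimal instrumentation sketch using the OpenTelemetry Python API; the span name, attributes, and surrounding function are illustrative, and an SDK plus exporter still have to be configured elsewhere for spans to actually be recorded.

```python
# Requires the opentelemetry-api package (pip install opentelemetry-api).
from opentelemetry import trace

tracer = trace.get_tracer("checkout-service")  # tracer/service name is illustrative

def reserve_inventory(order_id: str, items: list[str]) -> None:
    # Each hop gets its own span; the collector stitches them into one distributed trace.
    with tracer.start_as_current_span("reserve-inventory") as span:
        span.set_attribute("order.id", order_id)
        span.set_attribute("order.item_count", len(items))
        # ... call the inventory service here; its spans become children of this one ...
```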
3. Infrastructure-as-Code Analysis
Parse Terraform, Helm charts, and serverless configs alongside application code. The networking rules and scaling policies are just as important as the business logic.
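As one example, the rendered output of a Helm chart can be mined for NetworkPolicy objects to answer "who is allowed to call whom." The sketch below follows the standard networking.k8s.io/v1 shape; the app label key and the way the rendered manifest is obtained are assumptions.

```python
# Requires PyYAML (pip install pyyaml).
import yaml

def allowed_ingress_edges(manifest_text):
    """Extract (source app, target app) pairs from NetworkPolicy objects in a rendered manifest."""
    edges = []
    for doc in yaml.safe_load_all(manifest_text):
        if not doc or doc.get("kind") != "NetworkPolicy":
            continue
        spec = doc.get("spec", {})
        # Which pods the policy applies to (the "app" label key is an assumption).
        target = spec.get("podSelector", {}).get("matchLabels", {}).get("app", "<any>")
        for rule in spec.get("ingress", []) or []:
            for peer in rule.get("from", []) or []:
                source = peer.get("podSelector", {}).get("matchLabels", {}).get("app", "<any>")
                edges.append((source, target))
    return edges

# Typical usage: feed it the output of `helm template <chart>`.
# print(allowed_ingress_edges(open("rendered.yaml").read()))
```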
4. Codebase Intelligence Across Repos
The killer feature: a tool that understands your entire system, not just one repository. Cross-repo dependency graphs, cross-service call tracing, and unified search across all your codebases.
This is what Glue provides - pre-code intelligence that spans service boundaries, not just file boundaries.
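Even a crude homegrown version of the cross-repo idea shows why this matters. The sketch below walks every checked-out repository and records which internal services each one references; the workspace path, the .internal URL convention, and the Python-only glob are all assumptions for illustration.

```python
import pathlib
import re
from collections import defaultdict

# Hypothetical layout: one checkout per service under ~/src, with services
# calling each other via URLs like "http://payments.internal/...".
SERVICE_URL = re.compile(r"https?://([a-z0-9-]+)\.internal")

def cross_repo_references(workspace="~/src"):
    """Map each repo to the set of internal services its code references."""
    root = pathlib.Path(workspace).expanduser()
    if not root.is_dir():
        return {}
    refs = defaultdict(set)
    for repo in root.iterdir():
        if not repo.is_dir():
            continue
        for path in repo.rglob("*.py"):  # extend the glob per language as needed
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            refs[repo.name].update(SERVICE_URL.findall(text))
    return dict(refs)

# print(cross_repo_references())
```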
Keep Reading
Cloud-native complexity is a specific form of the Understanding Tax. The more distributed your system, the higher the tax.
Glue provides pre-code intelligence that works across service boundaries - dependency graphs, feature maps, and tribal knowledge extraction that span your entire distributed system.