Code Review Is a Bottleneck — Here's How to Fix It
ScaledByDesign
code-review · engineering · productivity · process
The Hidden Bottleneck
Your engineers write code for 2 hours. Then the PR sits for 2 days waiting for review. Multiply by every engineer, every PR, every week — and code review is quietly killing your velocity.
Most teams don't measure this. They should.
The math on a typical team of 8:
PRs opened per day: ~12
Average time to first review: 18 hours
Average review cycles: 2.3
Average time from PR open to merge: 3.2 days
That's 3.2 days of context-switching, waiting, and rebasing — for every single change.
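To make the waste concrete, here's a back-of-the-envelope sketch of those numbers. The inputs come straight from the figures above; the 15-minute context-switch cost per review round-trip is an assumption, not a measurement.

```python
# Back-of-the-envelope cost of review latency for a team of 8.
# Inputs are the example figures above; CONTEXT_SWITCH_MIN is an
# assumed cost per review round-trip, per person.

PRS_PER_DAY = 12
REVIEW_CYCLES = 2.3
OPEN_TO_MERGE_DAYS = 3.2
CONTEXT_SWITCH_MIN = 15  # assumption: cost of re-loading context once

# Calendar hours PRs spend sitting open each week (5 workdays)
weekly_wait_hours = PRS_PER_DAY * 5 * OPEN_TO_MERGE_DAYS * 24

# Context-switch overhead: author and reviewer each switch once per cycle
weekly_switch_hours = PRS_PER_DAY * 5 * REVIEW_CYCLES * 2 * CONTEXT_SWITCH_MIN / 60

print(f"calendar hours PRs sit open per week: {weekly_wait_hours:,.0f}")
print(f"engineer hours lost to context switches per week: {weekly_switch_hours:.0f}")
```

Even with a conservative switch cost, the overhead alone is more than a full engineer-week of attention lost every week.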
Why Reviews Take So Long
Problem 1: PRs Are Too Large
PR size vs. review quality (typical industry figures):
< 200 lines: Review takes 15 min, catches 85% of issues
200-500 lines: Review takes 45 min, catches 60% of issues
500-1000 lines: Review takes 90 min, catches 40% of issues
> 1000 lines: Reviewer gives up, approves with "LGTM"
The biggest PRs get the worst reviews.
The fix: Maximum 400 lines per PR. No exceptions. If the feature is larger, break it into stacked PRs with clear boundaries.
Problem 2: No Clear Ownership
Bad: PR assigned to "the team" (everyone's problem = nobody's problem)
Good: Explicit review assignment with rotation
Monday: Sarah reviews Alex's PRs, Alex reviews Mike's
Tuesday: Mike reviews Sarah's PRs, Sarah reviews Jordan's
Wednesday: Jordan reviews Mike's PRs, Mike reviews Alex's
...
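A rotation like the one above doesn't need a spreadsheet; a round-robin shift generates it. This is a minimal sketch (the names and the pairing rule are illustrative, not a prescribed tool):

```python
def review_rotation(engineers, days):
    """Rotate review pairings daily so everyone eventually reviews
    everyone. On day d, each engineer reviews the person d+1
    positions ahead of them (mod team size), so nobody reviews
    themselves and pairings change every day."""
    n = len(engineers)
    schedule = []
    for d in range(days):
        shift = (d % (n - 1)) + 1  # shift of 0 would mean self-review
        pairs = [(engineers[i], engineers[(i + shift) % n]) for i in range(n)]
        schedule.append(pairs)
    return schedule

team = ["Sarah", "Alex", "Mike", "Jordan"]
for day, pairs in zip(["Monday", "Tuesday", "Wednesday"], review_rotation(team, 3)):
    print(day + ":", ", ".join(f"{reviewer} reviews {author}" for reviewer, author in pairs))
```

Post the output in your team channel every Monday and ownership is never ambiguous.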
Problem 3: Reviewers Don't Know What to Look For
A good code review checks (in priority order):
1. Correctness: Does it do what it's supposed to?
2. Edge cases: What happens with empty input, nulls, failures?
3. Security: Any auth bypasses, injection risks, data leaks?
4. Performance: Any O(n²) loops, missing indexes, N+1 queries?
5. Maintainability: Will someone understand this in 6 months?
A bad code review checks:
✗ Naming conventions (use a linter)
✗ Formatting (use a formatter)
✗ Import order (use a tool)
✗ Whether the reviewer would have done it differently
The System That Works
Rule 1: 4-Hour SLA
Every PR gets a first review within 4 business hours. Not a full review — a first pass.
How to enforce:
- Bot posts in Slack when a PR has no reviewer after 2 hours
- Daily standup includes "any PRs blocked on review?"
- Track time-to-first-review as a team metric
- Manager reviews the metric weekly
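The nudge bot's core logic is small. This sketch assumes a simplified list-of-dicts shape for open PRs — a stand-in for whatever your Git platform's API actually returns — and only finds the breaches; posting to Slack is left out.

```python
from datetime import datetime, timedelta, timezone

NUDGE_AFTER = timedelta(hours=2)  # the nudge threshold from the rules above

def prs_needing_nudge(open_prs, now=None):
    """Return PRs that still have no reviewer after the threshold.
    `open_prs` is a list of dicts with 'id', 'reviewers', and a
    timezone-aware 'opened_at' — an assumed shape, adapt to your API."""
    now = now or datetime.now(timezone.utc)
    return [
        pr for pr in open_prs
        if not pr["reviewers"] and now - pr["opened_at"] > NUDGE_AFTER
    ]

now = datetime(2024, 1, 8, 12, 0, tzinfo=timezone.utc)
prs = [
    {"id": 101, "reviewers": [], "opened_at": now - timedelta(hours=3)},
    {"id": 102, "reviewers": ["sarah"], "opened_at": now - timedelta(hours=5)},
    {"id": 103, "reviewers": [], "opened_at": now - timedelta(minutes=30)},
]
stale = prs_needing_nudge(prs, now=now)
print([pr["id"] for pr in stale])  # only PR 101 breaches the threshold
```

Run it on a 30-minute cron and pipe the result to your chat webhook of choice.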
Rule 2: Small PRs Only
PR size guidelines:
Feature work: Max 400 lines changed
Refactoring: Max 600 lines (moves are cheap to review)
Config/generated: No limit (but flag as "generated, no review needed")
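The size guidelines above can be enforced as a merge-gate check. The limits come straight from the table; the PR "kind" labels and the generated-file flag are assumptions about how your team tags PRs.

```python
# Merge-gate sketch for the PR size guidelines above.
# Limits are from the guidelines; the labels are assumed conventions.

LIMITS = {"feature": 400, "refactor": 600}

def size_check(kind, lines_changed, generated=False):
    """Return (ok, message) for a PR against the size guidelines."""
    if generated:
        return True, "generated/config: no limit, flag as skip-review"
    limit = LIMITS.get(kind, LIMITS["feature"])  # unknown kinds get the strictest limit
    if lines_changed > limit:
        return False, f"{lines_changed} lines > {limit}: split into stacked PRs"
    return True, "within limit"

print(size_check("feature", 950))
print(size_check("refactor", 550))
print(size_check("feature", 12000, generated=True))
```

Wire it into CI as a required status check and the 400-line rule stops depending on anyone's willpower.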
How to break up large features:
1. Data model changes (migration + model, no UI)
2. Backend logic (service layer, tested independently)
3. API endpoint (thin layer, calls service)
4. Frontend (UI consuming the API)
5. Integration tests (end-to-end verification)
Rule 3: PR Descriptions Are Mandatory
## What this PR does
[1-2 sentences explaining the change]
## Why
[Link to ticket/RFC, or brief explanation]
## How to test
[Steps to verify this works]
## Screenshots (if UI change)
[Before/after screenshots]
## Risks
[What could go wrong? What should reviewers pay attention to?]
Rule 4: Automate the Boring Stuff
# Everything that can be automated, should be:
ci:
- linting (ESLint, Prettier)
- type checking (TypeScript)
- unit tests
- integration tests
- security scanning (Snyk, Dependabot)
- bundle size check
- performance benchmarks
# Humans should review:
- Logic correctness
- Architecture decisions
- Edge case handling
- Security implications
- Whether the approach makes sense
Rule 5: Two Types of Comments
Blocking (must fix before merge):
"This SQL query is vulnerable to injection. Use parameterized queries."
"This will cause a null pointer exception when the user has no address."
Non-blocking (suggestion, take it or leave it):
"nit: Consider renaming this variable for clarity"
"optional: You could simplify this with a reduce()"
Prefix non-blocking comments with "nit:" or "optional:"
so the author knows they can merge without addressing them.
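A bot can enforce this convention mechanically. A minimal sketch, assuming the two prefixes above are your team's whole vocabulary:

```python
# Classify review comments by the prefix convention above.
# A comment blocks merge unless it explicitly opts out.

NON_BLOCKING_PREFIXES = ("nit:", "optional:")

def is_blocking(comment):
    """True if the comment must be addressed before merge."""
    return not comment.lower().lstrip().startswith(NON_BLOCKING_PREFIXES)

print(is_blocking("This SQL query is vulnerable to injection."))  # True
print(is_blocking("nit: Consider renaming this variable"))        # False
```

Count unresolved blocking comments and you have an automatic "ready to merge" signal.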
Measuring Review Health
Track weekly:
Time Metrics:
Time to first review: Target < 4 hours
Time to merge: Target < 24 hours
Review cycles: Target < 2
Quality Metrics:
PRs merged without tests: Target 0 (except config changes)
Production incidents from merged PRs: Track, aim for 0
Post-merge issues found: Target < 5% of PRs
Volume Metrics:
Reviews per engineer per day: Target 2-4
PR size (median lines): Target < 300
PRs open > 48 hours: Target 0
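Computing these from exported PR records is a few lines. The field names here are assumptions about your export format, not any platform's real schema:

```python
from statistics import median

def review_health(prs):
    """Summarize the weekly review-health metrics above from
    simplified PR records (field names are assumed, adapt them)."""
    return {
        "time_to_first_review_h": median(p["first_review_h"] for p in prs),
        "review_cycles": median(p["cycles"] for p in prs),
        "median_pr_lines": median(p["lines"] for p in prs),
        "open_over_48h": sum(p["open_h"] > 48 for p in prs),
    }

week = [
    {"first_review_h": 3, "cycles": 1, "lines": 180, "open_h": 20},
    {"first_review_h": 6, "cycles": 2, "lines": 420, "open_h": 52},
    {"first_review_h": 2, "cycles": 1, "lines": 90,  "open_h": 8},
]
print(review_health(week))
```

Medians resist the one pathological PR that would otherwise dominate an average.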
The Cultural Shift
What Good Review Culture Looks Like
- Reviews are a priority, not an interruption
- Feedback is about the code, not the person
- Authors respond to feedback quickly (same day)
- Disagreements are resolved in comments, not meetings
- Everyone reviews, including senior engineers and managers who code
What Bad Review Culture Looks Like
- Reviews pile up because "I'm heads down on my feature"
- Senior engineers' PRs get rubber-stamped
- Feedback is personal or nitpicky
- PRs are abandoned and re-created because they went stale
Start This Week
- Measure your current time-to-first-review (check your Git platform's analytics)
- Set a 4-hour SLA and announce it to the team
- Reject any PR over 400 lines — ask the author to split it
- Add a PR template with the description format above
- Automate linting, formatting, and type checking in CI
Code review should make your team faster and your code better. If it's doing neither, the process is broken — not the concept. Fix the process.