A Code Review Playbook That Catches Real Bugs

Al Amin Ahamed

Senior Engineer

9 min read

Code review is the highest-leverage activity in software engineering. A 30-minute review catches bugs that take 2 hours to debug in production, transfers knowledge between engineers, and establishes the quality bar for the codebase.

But most code reviews are bad. They focus on style nits, miss the structural issues that matter, and leave reviewer and author both frustrated. Here's the playbook I've refined over a decade.

Review in Three Passes

Pass 1: Does this need to exist?

Read the PR description. Skim the diff. Don't comment yet.

Questions to ask:

  • Is the problem worth solving?
  • Is this the right approach?
  • Are we duplicating something that already exists?

If the answer to any is "no", say so before reviewing line-by-line. Reviewing the implementation of code that shouldn't exist wastes everyone's time.

"This implements a custom rate limiter, but Laravel's built-in RateLimiter facade covers the same use case. Worth using that instead of maintaining our own?"
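A comment like that lands better when it shows the alternative it's pointing at. A rough sketch of what the built-in facade looks like (assuming Laravel 8+; the `send-report` key, the limit of 5, and the `$report` object are made up for illustration):

```php
use Illuminate\Support\Facades\RateLimiter;

// Hypothetical replacement for the custom limiter: let the framework
// count attempts per key. Key name and limits are illustrative.
$executed = RateLimiter::attempt(
    'send-report:'.$user->id,   // per-user key
    $perMinute = 5,             // max attempts per decay window
    function () use ($report) {
        $report->send();        // runs only if under the limit
    }
);

if (! $executed) {
    // Over the limit: return a 429 or queue for later.
}
```

Pasting three lines like this into a Pass 1 comment often settles the "should this exist?" question faster than any amount of abstract argument.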

Pass 2: Architectural integrity

Now read the code. Look for:

  • Boundaries — Is the layering respected? Service code in controllers? Database queries in views?
  • Abstraction level — Is there one layer of indirection too many? One too few?
  • Coupling — Does this PR add hidden dependencies between modules that should be independent?
  • Naming — Do the names express intent? Could a new engineer figure out what this does without reading the body?

Most "this code is hard to maintain" complaints reduce to one of these four.

Pass 3: Correctness

Now go line-by-line. Look for:

  • Off-by-one errors in loops
  • Null/undefined assumptions
  • Missing error handling at boundaries (external APIs, user input, file I/O)
  • Race conditions
  • N+1 queries
  • Missing tests for edge cases the diff implies
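The N+1 item is the one I flag most often, so here is the shape it usually takes in review (an Eloquent sketch; the `Post` model and `comments` relation are hypothetical):

```php
// N+1: one query for the posts, then one extra query per post
// when the comments relation is lazily loaded in the loop.
$posts = Post::all();
foreach ($posts as $post) {
    echo $post->comments->count();
}

// Fix: eager-load the relation, so it's two queries total
// regardless of how many posts there are.
$posts = Post::with('comments')->get();
foreach ($posts as $post) {
    echo $post->comments->count();
}
```

The diff rarely shows the loop and the query side by side, which is exactly why this pass has to be line-by-line.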

Style nits go in this pass too, but they're a footnote, not the main event.

Comment Like a Human

Bad comment:

This is wrong.

Good comment:

This treats `null` and `''` the same, but the API contract differentiates them (`null` = field not provided, `''` = explicitly empty). Suggest: `if ($value === null) { ... } elseif ($value === '') { ... }`. (Or push back if I'm wrong about the contract.)

Three things the good comment does:

  1. Explains why it's a problem
  2. Suggests a concrete fix
  3. Leaves room for the author to disagree

Severity Tags

I prefix every non-trivial comment:

  • [blocker] — Must be fixed before merging
  • [suggestion] — Worth considering but I'll merge without
  • [nit] — Style or preference, ignorable
  • [question] — I genuinely don't know — explain to me

Authors triage faster. Reviewers stop accidentally turning suggestions into blockers.

When To Push Back

Authors will sometimes push back on your reviews. Most of the time, accept it — they have context you don't. But push back yourself when:

  • The disagreement is about correctness (not preference)
  • You can articulate a specific failure mode
  • The cost of the fix is small relative to the cost of the bug

Don't push back on stylistic preferences once the author has said "I disagree". Let it go and let your team's linter rules decide.

Approving With Outstanding Comments

Most PRs ship with some unresolved discussion. That's fine if:

  • All [blocker] comments are resolved
  • The author has acknowledged [suggestion] and [nit] comments (even if declined)
  • No comment is left without acknowledgement (silence reads as ignored)

For complex PRs, leave a summary comment after approving:

Approved. Two things to follow up on:

  1. We should add an integration test covering the timeout case (filed #142)
  2. The retry strategy is fine for now, but we'll likely need exponential backoff when usage grows. Worth a separate PR rather than this one.

What I Don't Do

  • Approve a PR I haven't read end-to-end. Even if I trust the author. The reviewer is a second pair of eyes; rubber-stamping defeats the point.
  • Block on personal preference. If the project has no rule, the author wins.
  • Demand 100% test coverage. Cover the contract, the bug fixed, and the obvious edge cases. Coverage % targets are theatre.
  • Review more than 400 lines per session. Past that, my attention drops and I miss things. I split into multiple sessions.

What Authors Can Do To Help Reviewers

This is the other half. As an author:

  • Write a PR description that explains why, not just what. The diff shows what; only you know why.
  • Keep PRs under 400 lines when possible. If it's bigger, break it up or call it out explicitly: "This is large because X, please review in chunks."
  • Pre-empt obvious questions ("I considered X but went with Y because Z").
  • Reply to every reviewer comment, even with just "👍" or "good catch". Silence reads as dismissal.

A reviewer's job gets easier when authors do these. The team's overall throughput goes up.
