brandonwie.dev
backend · testing · coverage · knowledge

Fallback-Branch Test Coverage Gap

Tests pass. Coverage hits 100%. Removing the `|| randomUUID()` would still pass everything. How builder-driven fixtures hide the falsy branch.

6 min read

claude[bot] left an inline comment on PR #860: the test for `mRefreshToken = user.refreshToken || randomUUID()` only exercised the truthy branch. Every existing test built users via `UserBuilder.aBeliever().withRefreshToken('x')`. Drop the `|| randomUUID()` from production and every test still passes. The fallback was load-bearing in production (fresh signups have null refresh tokens) but invisible to the suite. This is a structural coverage gap that line-coverage tools never catch and AI reviewers spot mechanically.

Who, When, Where

This pattern shows up everywhere code uses `||` or `??` (or any short-circuit fallback) when the LHS is supplied by a builder or fixture. It bites every time a test fixture pre-populates a value that the production code expects to be sometimes-null. You’ll find the gap in unit tests for handlers, services, and factories — anywhere the test setup makes the falsy branch impossible to exercise.

What the Gap Looks Like

When production code has a fallback like `value = source || generateDefault()` and your test fixtures always populate `source`, the `generateDefault()` branch never runs. Every test passes. Coverage tools may even report 100% line coverage because the line is executed (the LHS evaluates and short-circuits). But branch coverage is incomplete — and dropping the `|| generateDefault()` would not cause any test to fail.

This is a structural coverage blind spot that:

  • Hides accidental refactors (someone removes the fallback in a “cleanup” pass — invisible regression)
  • Hides production-only execution paths (fresh signups have null fields; existing fixtures pre-populate them)
  • Is invisible to line-coverage gates and even to most branch-coverage tools when the LHS is a fixture-controlled variable

Why It’s a Gotcha

Three things conspire to hide the gap:

  1. Builder fluency biases tests toward “complete” objects. When you have `UserBuilder.aBeliever().withEmail(x).withRefreshToken(y).build()`, calling `.withRefreshToken(y)` is the ergonomic path. Future test authors copy-paste from existing tests and inherit the pre-populated state. The builder makes “complete” the default; “partial” requires conscious effort.
  2. Production code returns “incomplete” objects. A factory that returns `{ refreshToken: null }` for fresh signups doesn’t exist in the test fixture vocabulary unless someone explicitly omits the `withX()` call. The fixture’s defaults reflect what’s convenient to test, not what production actually emits.
  3. Coverage tools flatten line-level execution. `const x = a || b()` shows as one line. If `a` is truthy, `b()` doesn’t execute, but the line is “covered.” Branch-coverage tools sometimes catch this; line-coverage tools never do.
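
One partial mitigation for point 3: Jest’s default coverage provider (Istanbul) does model each arm of `||`/`??` as a branch, so a branch threshold will usually flag what a line threshold misses. A minimal config sketch (the thresholds are illustrative, not from the post):

```typescript
// jest.config.ts — sketch, assuming Istanbul-based coverage
import type { Config } from "jest";

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      lines: 100,    // satisfied even if `|| randomUUID()` never runs
      branches: 100, // fails while the falsy arm of `||` is unexercised
    },
  },
};

export default config;
```

This only helps when the fallback lives in code the coverage run instruments; it still won’t tell you *which* fixture to change.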

The Pre-emptive Fix

For every `||` or `??` fallback in production code, write at least two tests:

  1. LHS truthy — fixture pre-populates the value. Asserts the truthy path.
  2. LHS falsy — fixture explicitly does NOT populate the value. Asserts the fallback path executed (the generated default has the expected shape).

The falsy test should also assert the input precondition (`expect(input.x).toBeNull()`) so the test doesn’t silently degrade if the fixture default changes.

The Detective Fix

When you add a new fallback to production code, grep all tests that exercise that code and check:

  • Does every test pre-populate the LHS via the builder?
  • If yes — add one parallel test with the LHS omitted.
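
The grep step can be mechanized. A hypothetical heuristic (the builder-method name is illustrative): flag a test file in which every fixture build also pre-populates the LHS.

```typescript
// Heuristic: if every `.build()` in a test file is paired with the builder
// call that populates the LHS, the falsy branch is likely untested.
function alwaysPopulatesLhs(
  testSource: string,
  builderCall = ".withRefreshToken(" // hypothetical builder method
): boolean {
  const builds = (testSource.match(/\.build\(\)/g) ?? []).length;
  const populated = testSource.split(builderCall).length - 1;
  return builds > 0 && populated >= builds;
}

// Every fixture sets the token → flag it.
console.log(alwaysPopulatesLhs(`aBeliever().withRefreshToken('x').build()`)); // true
// At least one fixture omits it → the falsy branch can be reached.
console.log(alwaysPopulatesLhs(`aBeliever().build()`)); // false
```

A caller would feed this each test file’s source (e.g. via `fs.readFileSync`); it is a coarse text check, not an AST analysis, but it is enough to shortlist suspects.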

When you receive an AI review flagging this gap (claude[bot], copilot, codex), it’s almost always real. The pattern is mechanical and AI reviewers spot it reliably — much more reliably than a human reviewer scanning a long diff.

When the Fallback Uses randomUUID() or Random Output

You can’t assert the exact output, but you CAN:

  • Match the shape: `expect.stringMatching(/^[0-9a-f-]{36}$/i)` for UUIDs
  • Match against a captured value: `expect(updateMock).toHaveBeenCalledWith(id, accessToken, expect.any(String))`
  • Assert the redirect URL includes a UUID-shaped param: `expect(result).toMatch(/refreshToken=[0-9a-f-]{36}/i)`

Don’t skip the test because “the value is random” — the branch executing is what you’re proving, not the specific value.

A Regex-Composition Gotcha

If you write a regex with `^...$` anchors and then compose its `.source` into a larger pattern, the anchors come along for the ride, and the substring match fails:

```typescript
const uuidPattern = /^[0-9a-f-]{36}$/i;
// Wrong: anchors carry into the substring match
new RegExp(`mRefreshToken=${uuidPattern.source}`, 'i');
```

Strip the anchors when composing — keep them only for standalone-arg matching:

```typescript
// Reusable pattern body without anchors
const uuidBody = "[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}";
expect(arg).toMatch(new RegExp(`^${uuidBody}$`, "i")); // anchored for standalone arg
expect(url).toMatch(new RegExp(`token=${uuidBody}`, "i")); // unanchored for substring
```

Worked Example

Production code at `auth-v1.service.ts:277`:

```typescript
const mRefreshToken = user.refreshToken || randomUUID();
```

The existing test only exercised the truthy branch:

```typescript
it("should issue tokens for new user", () => {
  const newUser = UserBuilder.aBeliever()
    .withEmail("[email protected]")
    .withRefreshToken("existing-token") // ← LHS pre-populated
    .build();
  // ... runs, passes, never executes randomUUID()
});
```

The missing test exercises the falsy branch:

```typescript
it("should generate new refresh token when user has none", () => {
  const newUser = UserBuilder.aBeliever()
    .withEmail("[email protected]")
    // ← .withRefreshToken() OMITTED — refreshToken stays null
    .build();
  expect(newUser.refreshToken).toBeNull(); // precondition guard

  const result = service.handle(input);

  const uuidBody =
    "[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}";
  expect(updateTokenMock).toHaveBeenCalledWith(
    newUser.id,
    expect.any(String),
    expect.stringMatching(new RegExp(`^${uuidBody}$`, "i"))
  );
});
```

The precondition guard (`expect(newUser.refreshToken).toBeNull()`) is load-bearing — without it, a future change to `UserBuilder.aBeliever()` defaults could populate `refreshToken` and silently turn the test into a duplicate of the truthy case.

Key Points

  • `value || fallback()` is a two-branch expression. Tests need to cover both branches.
  • Builder-driven test fixtures bias every test toward the truthy branch. Be explicit about the falsy case.
  • Line coverage doesn’t catch this. Branch coverage sometimes does. AI review catches it reliably.
  • When the fallback is non-deterministic (UUID, timestamp, random), assert the shape of the output, not the value.

When to Use

  • Reviewing PRs that add new `||` / `??` fallbacks.
  • Writing tests for handlers/services with builder-driven fixtures.
  • Auditing legacy code where coverage looks high but production has bugs in default-generation paths.
  • After AI review flags missing branch coverage on a fallback expression.

When NOT to Use

  • Tests where the fallback is a constant (`x || 0`, `name || 'anonymous'`) and the fallback value is trivially asserted in the truthy test’s negative case.
  • Code where the fallback is genuinely impossible to reach in production (e.g., guarded by an upstream invariant) — but then the fallback itself is dead code; remove it instead of testing it.

Takeaway

The most useful thing AI reviewers do today isn’t catching novel bugs — it’s catching mechanical patterns that humans skim past. Fallback branch coverage is one of those patterns: easy to define, easy to mechanically check, easy to miss when you’re reading a diff for intent. When claude[bot] (or copilot, or codex) flags this kind of gap, treat the suggestion as a near-certain real find and add the parallel test.
