
From C+ to B+: Our 13-Point Quality Score Journey

Andres Muguira · February 26, 2026 · 7 min read
Tags: TypeScript, Quality, Testing, Zod

The Reckoning

Every fast-moving startup accumulates technical debt. We knew this intellectually. What we did not know was how much we had accumulated until we ran a comprehensive quality audit on our codebase in early February 2026. The result was a C+ score. Not failing, but not something you would put on the refrigerator either.

The audit measured four dimensions: type safety (how many escape hatches existed in our TypeScript), runtime validation (how many external inputs were validated before use), test coverage (percentage of code paths exercised by automated tests), and error observability (how many error paths reported to our monitoring system). We scored poorly on all four.

A C+ codebase works. It ships features. It serves users. But it does all of these things while holding its breath, hoping nothing unexpected happens. When something unexpected does happen, nobody finds out until a user reports it.
[Figure: Quality score card — C+ to B+, a 13-point improvement across four metrics: TypeScript, Validation, Testing, Monitoring]

Pillar 1: Eliminating 74 `as any` Casts

TypeScript's `as any` is the escape hatch that lets you bypass the type system entirely. It tells the compiler: "I know this is wrong, but let me do it anyway." Every `as any` in a codebase is a location where a runtime type error can occur without any compile-time warning. We had 74 of them.
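To make the hazard concrete, here is a minimal sketch (the `User` shape and typo'd payload are illustrative, not from our codebase): the cast compiles cleanly, and the mistake only surfaces at runtime, far from its cause.

```typescript
interface User {
  name: string;
}

// A payload with a typo'd key, as a client might actually send it.
const raw: unknown = JSON.parse('{"nmae": "Ann"}');

// `as any` silences the compiler, so this assignment type-checks...
const user = raw as any as User;

// ...but the missing field is only discovered at runtime.
console.log(user.name); // undefined
```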

Some were legitimate shortcuts taken during rapid prototyping. Others were workarounds for incorrect type definitions in third-party libraries. A few were genuinely lazy. We categorized all 74 into three groups and addressed each differently:

[Figure: Before/after — `as any` escape hatch vs. a proper Zod schema with compile-time types and runtime validation]

After eliminating all 74 casts, we enabled the `noUncheckedIndexedAccess` compiler flag and added a pre-commit hook that rejects any new `as any` usage. The rule is simple: if you need `as any`, you need to justify it in a code review comment, and the comment needs to explain what would break without it.
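The flag is worth a quick sketch. With `noUncheckedIndexedAccess` enabled, every indexed access into an array or record is typed `T | undefined`, so the compiler forces you to handle the missing case instead of crashing on an empty array:

```typescript
// Under noUncheckedIndexedAccess, words[0] is `string | undefined`,
// so the compiler rejects a bare words[0].toUpperCase() here.
function firstUpper(words: string[]): string {
  const first = words[0];
  return first?.toUpperCase() ?? "";
}

console.log(firstUpper(["hello", "world"])); // "HELLO"
console.log(firstUpper([])); // ""
```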

Pillar 2: Zod on 16 Edge Functions

Our Supabase edge functions receive data from the client, from webhooks, and from third-party APIs. Before the quality sprint, none of this data was validated at the function boundary. We trusted that the client would send the right shape, that Twilio's webhook payload would match the documentation, and that Gmail's API responses would be consistent. This trust was occasionally misplaced.

We added Zod schemas to all 16 of our edge functions. Each function now validates its input before processing it. If the input does not match the schema, the function returns a 400 error with a human-readable description of what was wrong. No more silent failures where a missing field causes a null reference error three function calls deep.

The Pattern We Adopted

Every edge function follows the same structure:

  1. Parse the raw request body
  2. Validate against the Zod schema with `.safeParse()`
  3. If validation fails, return a 400 with `error.flatten()`
  4. If validation passes, proceed with the typed, validated data

The Zod schemas also serve as documentation. Anyone reading the edge function can see exactly what inputs it expects, what types they are, and which ones are optional. We export the schemas as TypeScript types using `z.infer`, so the validated data carries its type through the rest of the function without any additional type annotations.

The most impactful validation was on our Gmail webhook handler. Gmail sends push notifications when a user's inbox changes, but the payload format varies depending on the type of change. Before Zod, we handled this with a series of optional field checks that missed edge cases. After Zod, we have a discriminated union schema that validates the exact payload shape for each notification type. Three bugs that had been intermittently causing sync failures disappeared immediately.

Pillar 3: 276 New Tests

Our test count went from approximately 1,350 to 1,626 during the quality sprint. The 276 new tests were not random coverage grabs. We identified the modules with the highest bug frequency from our Sentry data and wrote tests specifically targeting those code paths.

We also introduced Supabase query testing using a local Supabase instance that mirrors our production schema. Tests that hit the database run against this local instance, giving us confidence that our queries return the right data without mocking the database layer.

Pillar 4: Sentry in 84 Catch Blocks

The final pillar was error observability. We had try/catch blocks throughout the codebase, but most of them caught errors and either swallowed them silently or logged them to the console where nobody would ever see them. In production, `console.log` is a black hole.

We audited every catch block in the codebase and found 84 that were not reporting to Sentry. Each one was updated to call `Sentry.captureException()` with the error, plus context about where the error occurred (which function, which user, which operation). We also added breadcrumbs to critical user flows so that when an error does occur, the Sentry event includes the sequence of actions that led to it.
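The shape of the change can be sketched with a small wrapper. To keep the sketch runnable without a Sentry DSN, the injected `report` callback stands in for `Sentry.captureException(err, ctx)`; the wrapper name and context fields are hypothetical:

```typescript
// Stand-in for Sentry.captureException with attached context.
type Reporter = (err: unknown, ctx: Record<string, unknown>) => void;

// Wraps a unit of work so that failures are reported with context
// (which function, which user, which operation) and then rethrown.
function withReporting<T>(
  ctx: { fn: string; userId?: string; operation: string },
  report: Reporter,
  work: () => T,
): T {
  try {
    return work();
  } catch (err) {
    report(err, ctx); // never swallow: report, then rethrow
    throw err;
  }
}
```

The key discipline is the rethrow: reporting an error is not the same as handling it, and a catch block that reports and swallows just moves the black hole one layer up.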

[Figure: Sentry error tracking dashboard — 84 catch blocks audited and wired to `Sentry.captureException()`]
An error you cannot see is an error you cannot fix. And an error you cannot fix is a user you are about to lose.

The Results

After two weeks of quality work, we re-ran the audit. The score: B+. A 13-point improvement. More importantly, the qualitative difference is enormous.

We are not done. B+ is not A. The next push will focus on increasing integration test coverage, adding performance benchmarks to our CI pipeline, and achieving 100% type coverage (eliminating all remaining unknown types). But for now, we sleep better at night knowing that our code is honest about what it does and transparent when it fails.
