Engineering

From 5-Second Load to Instant: How We Fixed Email Performance

By Andres Muguira · September 22, 2025 · 6 min read
Tags: CRM Performance, Fast Email Loading, Engineering, PostgreSQL

Email in a CRM should feel like email in Gmail — instant. But for the first few months of SalesSheet, opening the email tab meant watching a spinner for three to five seconds. For a tool that promises speed, that was embarrassing.

This is the story of how we rewrote our email loading pipeline to go from a 5-second wait to sub-100ms renders, without changing the UI at all. Every fix happened behind the scenes, in the data layer.

[Image] Before: spinner and skeleton rows while 847 emails download. After: instant list with only the 12 relevant emails fetched.

The Original Problem: Client-Side Everything

Our first implementation was the classic mistake. When a user opened their inbox, the app fetched every email tied to their account from Supabase, dumped the full array into React state, and then filtered client-side by folder, date, and search query.

This worked fine during development when we had 30 test emails. It fell apart the moment a real user connected a Gmail account with 2,000 messages. The browser was downloading megabytes of JSON, parsing it, and then throwing away 98% of the rows to show the 12 messages that actually belonged in the current view.

The symptoms were obvious: three-to-five-second spinners on every inbox open, megabyte-plus JSON payloads on each fetch, and roughly 98% of the downloaded rows discarded before anything rendered.

Fix 1: Server-Side Filtering

The first and most impactful change was moving filtering to the database. Instead of SELECT * FROM emails WHERE user_id = ? and filtering in JavaScript, we pushed every filter parameter into the query itself:

SELECT * FROM emails WHERE user_id = ? AND folder = ? AND date >= ? ORDER BY date DESC LIMIT 25

This sounds obvious in hindsight, but it required restructuring our entire data-fetching layer. We had built our React Query hooks around the assumption that you always fetched a complete collection and filtered locally. Moving to parameterized server queries meant rethinking cache keys, pagination, and how folder switches worked.
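The key structural change can be sketched like this. The names (`EmailFilters`, `emailQueryKey`, `buildEmailQuery`) are illustrative, not our actual code, and the query is shown as a plain object rather than a live Supabase client call so the sketch stays self-contained. The essential rule: every filter that changes the server result must also appear in the React Query cache key.

```typescript
// Hypothetical sketch of parameterized email fetching.
interface EmailFilters {
  userId: string;
  folder: string;
  since?: string; // ISO date lower bound
  limit?: number;
}

// Cache key: if a filter changes the server result but is missing here,
// React Query would serve a stale page when the user switches folders.
function emailQueryKey(f: EmailFilters): unknown[] {
  return ["emails", f.userId, f.folder, f.since ?? null, f.limit ?? 25];
}

// Shape of the server-side query (the real call would be built with the
// Supabase client's .eq()/.gte()/.order()/.limit() chain).
function buildEmailQuery(f: EmailFilters) {
  return {
    table: "emails",
    filters: [
      ["user_id", "eq", f.userId],
      ["folder", "eq", f.folder],
      ...(f.since ? [["date", "gte", f.since]] : []),
    ],
    order: { column: "date", ascending: false },
    limit: f.limit ?? 25,
  };
}
```

Because the filters live in the cache key, switching back to a previously viewed folder hits the React Query cache instantly while a background refetch keeps it fresh.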

The payoff was immediate. Instead of returning 847 rows and filtering to 12, the database returned exactly 12 rows. Network payload dropped from ~1.2 MB to ~8 KB.

Fix 2: Diff/Merge Instead of Delete-and-Reload

The second problem was what happened when new emails arrived. Our original sync strategy was brutal: delete all cached emails, re-fetch everything from the provider, write all rows back to the database, then re-query. Every sync was a full teardown and rebuild.

We replaced this with a diff/merge approach. When the sync runs, we compare the provider's message list against what we already have stored. Only genuinely new messages get inserted. Only changed messages (read/unread status, label changes) get updated. Nothing gets deleted unless the user actually deleted it.

This cut our sync write volume by roughly 95%. A typical sync that used to insert 800+ rows now inserts 2–5 new messages and updates 0–3 existing ones.
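The merge rule above can be sketched as a pure diff function. The `Msg` shape and field names are illustrative assumptions, but the logic is the one described: insert only new ids, update only rows whose mutable fields changed, and delete only ids the provider no longer reports.

```typescript
// Illustrative message shape; real rows carry more fields.
interface Msg { id: string; read: boolean; labels: string[] }

interface SyncPlan { toInsert: Msg[]; toUpdate: Msg[]; toDeleteIds: string[] }

// Compare the provider's message list against locally stored rows and
// produce the smallest set of writes.
function planSync(provider: Msg[], stored: Msg[]): SyncPlan {
  const storedById = new Map(stored.map((m) => [m.id, m]));
  const providerIds = new Set(provider.map((m) => m.id));
  const plan: SyncPlan = { toInsert: [], toUpdate: [], toDeleteIds: [] };

  for (const msg of provider) {
    const existing = storedById.get(msg.id);
    if (!existing) {
      plan.toInsert.push(msg); // genuinely new message
    } else if (
      existing.read !== msg.read ||
      existing.labels.join(",") !== msg.labels.join(",")
    ) {
      plan.toUpdate.push(msg); // read/unread or label change
    }
  }
  // A message absent upstream was deleted by the user at the provider.
  for (const m of stored) {
    if (!providerIds.has(m.id)) plan.toDeleteIds.push(m.id);
  }
  return plan;
}
```

A typical run produces a plan with a handful of inserts and updates instead of the 800+ row teardown-and-rebuild.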

Fix 3: Composite Indexes

With server-side filtering in place, our queries were semantically correct but still slow. A query filtering on user_id, folder, and date was hitting three separate indexes and merging the results. On tables with hundreds of thousands of rows, the query planner was making poor choices.

We added a composite index on (user_id, folder, date DESC). This single index covers our most common query pattern — "show me this user's inbox sorted by newest first" — without any index merging. Query times dropped from ~200ms to ~3ms. For a query that runs on every folder click, that difference is the gap between feeling sluggish and feeling instant.
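A conceptual model of why the composite index wins: with rows ordered by `(user_id, folder, date DESC)`, "this user's inbox, newest first" is one contiguous run of index entries. The sketch below simulates that with a sorted array and a scan to the matching prefix (a real B-tree seeks to the prefix directly instead of scanning); it is a teaching model, not how Postgres is implemented.

```typescript
interface Row { userId: string; folder: string; date: string }

// Comparator matching the index column order: user_id, folder, date DESC.
function indexOrder(a: Row, b: Row): number {
  return (
    a.userId.localeCompare(b.userId) ||
    a.folder.localeCompare(b.folder) ||
    b.date.localeCompare(a.date) // descending date
  );
}

// With rows in index order, the query reads forward through the matching
// (userId, folder) prefix until LIMIT is reached or the prefix ends —
// no merging of three separate single-column indexes.
function scanInbox(sorted: Row[], userId: string, folder: string, limit = 25): Row[] {
  const out: Row[] = [];
  for (const r of sorted) {
    if (r.userId === userId && r.folder === folder) {
      out.push(r);
      if (out.length === limit) break;
    } else if (out.length > 0) {
      break; // left the matching prefix; index order guarantees no more hits
    }
  }
  return out;
}
```

The results come back already sorted newest-first, which is why the `ORDER BY date DESC LIMIT 25` costs nothing extra: the index order is the output order.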

[Image] The new email loading architecture: browser sends filter params, API checks cache, Supabase queries with composite indexes.

Fix 4: Rate-Limited Sync with 30-Second Polling

Email providers like Gmail have strict API rate limits. Our original implementation tried to sync on every page load, which meant that rapidly switching between folders could burn through quota in minutes.

We moved to a 30-second polling interval with a simple rule: if the last sync completed less than 30 seconds ago, skip it and serve from cache. This keeps the inbox fresh enough that users rarely see stale data, while staying well within provider rate limits.
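The skip-if-recent rule is small enough to show in full. This is a minimal sketch with the clock injected as a parameter (so it is testable); the function name and the `lastSyncedAt` bookkeeping are illustrative.

```typescript
// If the last sync completed less than 30 seconds ago, skip and serve cache.
const SYNC_WINDOW_MS = 30_000;

function shouldSync(lastSyncedAt: number | null, now: number): boolean {
  // Never synced, or the window has elapsed → hit the provider API.
  return lastSyncedAt === null || now - lastSyncedAt >= SYNC_WINDOW_MS;
}
```

Callers record the completion timestamp after each successful sync and pass `Date.now()` as `now`; rapid folder switches within the window all read from cache and cost zero provider quota.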

The 30-second window also solved another subtle problem: real-time collaboration conflicts. When two teammates are looking at the same contact's email thread, rapid sync cycles could cause flickering as rows were deleted and re-inserted. The polling window acts as a natural debounce.

Results After the Rewrite

We measured the impact across our user base over a two-week period after deploying all four fixes: inbox loads went from 3–5 seconds to sub-100ms, network payloads from ~1.2 MB to ~8 KB, the core folder query from ~200ms to ~3ms, and sync write volume dropped by roughly 95%.

The most telling metric, though, was user behavior. Before the fix, users averaged 3.2 folder switches per session. After, that number jumped to 11.7. When the tool is fast, people actually use it. When every click costs 5 seconds, people stop clicking.

Lessons Learned

Three takeaways that apply to any application fetching external data: filter and paginate in the database, never in the client; sync with diffs instead of delete-and-reload; and index for your actual query patterns rather than one column at a time.

If you are building a CRM — or any app that aggregates data from external APIs — these patterns will save you months of performance debugging. For more on how we handle real-time updates across the team, check out our post on building real-time collaboration with Supabase Realtime.

Experience the Fastest CRM Email You've Ever Used

SalesSheet loads your inbox in milliseconds, not seconds. Try it free.

Try SalesSheet Free — No Credit Card