When we set out to add real-time collaboration to SalesSheet, the goal was straightforward: when any team member changes a deal, contact, or activity, every other connected team member should see the update instantly. No polling. No refresh button. Just a living, breathing dataset that everyone shares.
We use Supabase Realtime as the backbone for this. Supabase provides PostgreSQL change detection via the Write-Ahead Log (WAL) and broadcasts changes over WebSocket channels. This post covers the three patterns we developed to make it production-ready: dedup windows, echo prevention, and stale-refetch strategy.
The Naive Approach and Why It Breaks
The simplest implementation of Supabase Realtime is to subscribe to a table, receive the full new row in every event payload, and patch your local state directly:
```typescript
channel
  .on(
    'postgres_changes',
    { event: '*', schema: 'public', table: 'deals' },
    (payload) => {
      // Patch local state directly with the raw event payload.
      setState(prev => prev.map(d => (d.id === payload.new.id ? payload.new : d)));
    },
  )
  .subscribe();
```
This works in a demo. It fails in production for three reasons:
- Duplicate events. Supabase Realtime delivers at-least-once. Network reconnections, channel resubscriptions, and WAL replay can all produce duplicate events for the same change. If you blindly apply each one, you get flickering UI and wasted renders.
- Echo events. When Client A writes a row, Client A also receives the change event via the WebSocket. If you apply it, you re-render a row that was already updated locally, sometimes causing a brief flicker as the optimistic update is replaced by the slightly-delayed server version.
- Partial payloads. Depending on your Realtime configuration and Row Level Security policies, the payload might not include all columns. Patching local state with an incomplete row corrupts your cache.
Pattern 1: The Dedup Window
To handle duplicate events, we maintain a Map of recently processed event IDs with timestamps. When an event arrives, we check whether we have already processed an event for this row ID within the last 2 seconds. If so, we skip it.
The 2-second window is wide enough to catch duplicates from reconnections (which typically replay within milliseconds) but narrow enough that a legitimate second update to the same row within 2 seconds still gets processed. In practice, two distinct updates to the same CRM deal within 2 seconds is extremely rare, but if it happens, the stale-refetch pattern (covered below) catches it on the next cycle.
The Map is stored in a React ref rather than state to avoid re-renders when entries are added. We also clean up old entries every 30 seconds to prevent the Map from growing unbounded during long sessions.
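The logic above can be sketched as a small standalone class. This is illustrative, not SalesSheet's actual code: the names (`DedupWindow`, `shouldProcess`) are ours, and timestamps are passed in explicitly so the behavior is easy to test; in the app the Map lives in a React ref and `Date.now()` is used directly.

```typescript
const DEDUP_WINDOW_MS = 2_000;
const CLEANUP_INTERVAL_MS = 30_000;

class DedupWindow {
  private seen = new Map<string, number>(); // rowId -> last processed timestamp
  private lastCleanup = 0;

  /** Returns true if the event should be processed, false if it is a duplicate. */
  shouldProcess(rowId: string, now: number = Date.now()): boolean {
    this.maybeCleanup(now);
    const last = this.seen.get(rowId);
    if (last !== undefined && now - last < DEDUP_WINDOW_MS) {
      return false; // duplicate within the 2-second window -- skip it
    }
    this.seen.set(rowId, now);
    return true;
  }

  /** Every 30 seconds, drop entries older than the window so the Map stays bounded. */
  private maybeCleanup(now: number): void {
    if (now - this.lastCleanup < CLEANUP_INTERVAL_MS) return;
    this.lastCleanup = now;
    for (const [id, ts] of this.seen) {
      if (now - ts >= DEDUP_WINDOW_MS) this.seen.delete(id);
    }
  }
}
```

Note that a skipped duplicate does not refresh the timestamp, so a genuine second update arriving just after the window still gets through.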
Pattern 2: Echo Prevention
Echo prevention is critical for maintaining a responsive UI. When a user updates a deal's stage, we optimistically update the local state immediately — the UI reflects the change before the server even confirms it. Then, milliseconds later, the WebSocket delivers a change event for the same row.
If we process that event, the optimistic update gets replaced by the server version, potentially causing a brief flicker. Worse, if the server response is slightly different from the optimistic state (perhaps a server-side trigger modified a timestamp), the UI jumps.
Our solution tracks which row IDs the current session has written to within the last 3 seconds. When a Realtime event arrives for a row in this "just-written" set, we skip it entirely. The local optimistic state is already correct, and the next refetch will reconcile any differences.
This is a deliberate trade-off: we prioritize UI consistency over data freshness for the author of a change, knowing that the stale-refetch pattern will correct any discrepancies within seconds.
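A minimal sketch of the "just-written" set, under the same caveat as before: `EchoFilter` and its method names are illustrative stand-ins, with the clock injected for testability.

```typescript
const JUST_WRITTEN_TTL_MS = 3_000;

class EchoFilter {
  private written = new Map<string, number>(); // rowId -> time of local write

  /** Call immediately after the session performs an optimistic write. */
  markWritten(rowId: string, now: number = Date.now()): void {
    this.written.set(rowId, now);
  }

  /** True if an incoming Realtime event is an echo of our own recent write. */
  isEcho(rowId: string, now: number = Date.now()): boolean {
    const ts = this.written.get(rowId);
    if (ts === undefined) return false;
    if (now - ts >= JUST_WRITTEN_TTL_MS) {
      this.written.delete(rowId); // entry expired -- treat as a remote change
      return false;
    }
    return true;
  }
}
```

In the event handler, `isEcho` runs alongside the dedup check: an event is applied only if it is neither a duplicate nor an echo.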
Pattern 3: Stale-Refetch Instead of Direct Patching
The most important architectural decision was choosing cache invalidation over direct state patching. When a Realtime event passes both the dedup and echo filters, we do not apply the event payload directly. Instead, we invalidate the relevant React Query cache key and let React Query refetch the data from the server.
This costs one additional network request per change event. But it eliminates an entire category of bugs: partial payloads, column mismatches, computed field inconsistencies, and RLS-filtered data. The refetch always returns the complete, correct, permission-checked row.
React Query's built-in stale-while-revalidate strategy means the user never sees a loading spinner during the refetch. The current data stays visible while the updated version loads in the background. The transition is seamless — the row simply updates in place.
We batch invalidations for rapid successive changes. If five events arrive for five different deals within 200 milliseconds (which happens during bulk operations), we coalesce them into a single cache invalidation rather than triggering five separate refetches.
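The coalescing step might look like the following sketch, with React Query abstracted behind a callback so the batching logic stands alone. `InvalidationBatcher` and `BATCH_WINDOW_MS` are our illustrative names, not a library API.

```typescript
const BATCH_WINDOW_MS = 200;

class InvalidationBatcher {
  private pending = new Set<string>(); // cache keys awaiting invalidation
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(private invalidate: (keys: string[]) => void) {}

  /** Queue a cache key; all keys queued within the window flush together. */
  add(key: string): void {
    this.pending.add(key);
    if (this.timer === null) {
      this.timer = setTimeout(() => this.flush(), BATCH_WINDOW_MS);
    }
  }

  /** Flush all pending keys in one call (also invoked by the timer). */
  flush(): void {
    if (this.timer !== null) {
      clearTimeout(this.timer);
      this.timer = null;
    }
    if (this.pending.size === 0) return;
    const keys = [...this.pending];
    this.pending.clear();
    this.invalidate(keys);
  }
}
```

The callback is where React Query comes in, e.g. `new InvalidationBatcher(keys => keys.forEach(k => queryClient.invalidateQueries({ queryKey: [k] })))`. Because `pending` is a Set, five events for the same deal collapse to a single key as well.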
Subscribing to Four Tables
SalesSheet maintains Realtime subscriptions on four tables: contacts, deals, activities, and emails. Each subscription is scoped to the user's organization using Supabase's filter parameter:
filter: `org_id=eq.${orgId}`
This server-side filtering is essential. Without it, every connected client would receive change events for every organization in the database. The filter ensures that each WebSocket channel only carries events relevant to the subscribing user's team.
We manage the four subscriptions through a single custom hook that accepts the table name and organization ID as parameters. This keeps the subscription lifecycle tied to the React component tree — when a component unmounts, its subscriptions are cleaned up automatically, preventing memory leaks and orphaned WebSocket connections.
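The subscription lifecycle behind that hook can be sketched as below, with the Supabase client reduced to a minimal structural interface so the example is self-contained. The function and channel-naming scheme are our assumptions; in the real hook this logic runs inside a `useEffect` whose cleanup function is the returned unsubscriber.

```typescript
interface RealtimeChannel {
  on(
    type: "postgres_changes",
    filter: { event: string; schema: string; table: string; filter: string },
    cb: (payload: unknown) => void,
  ): RealtimeChannel;
  subscribe(): RealtimeChannel;
}

interface RealtimeClient {
  channel(name: string): RealtimeChannel;
  removeChannel(channel: RealtimeChannel): void;
}

/** Subscribe to one table, scoped to an organization; returns a cleanup function. */
function subscribeToTable(
  client: RealtimeClient,
  table: string,
  orgId: string,
  onChange: (payload: unknown) => void,
): () => void {
  const channel = client
    .channel(`org-${orgId}-${table}`)
    .on(
      "postgres_changes",
      { event: "*", schema: "public", table, filter: `org_id=eq.${orgId}` },
      onChange,
    )
    .subscribe();
  return () => client.removeChannel(channel); // runs on component unmount
}
```

Calling this once per table from the hook, keyed on `[table, orgId]`, means a change of organization tears down the old channel and opens a correctly filtered new one.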
Monitoring and Debugging
Real-time systems are notoriously difficult to debug because the bugs are transient. An event that was duplicated, missed, or processed out of order might only be visible for a fraction of a second before the UI self-corrects via refetch.
We added lightweight instrumentation to every event handler: each processed, deduped, and echo-filtered event gets logged to a circular buffer with a timestamp, event type, table, and row ID. In development mode, this buffer is accessible from the browser console, making it straightforward to trace exactly what happened when something looks wrong.
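A circular buffer of this kind is simple to sketch. The type and field names below are illustrative, and the capacity is an arbitrary example; the key property is that the buffer holds the most recent N entries in order and never grows.

```typescript
type EventOutcome = "processed" | "deduped" | "echo-filtered";

interface LoggedEvent {
  ts: number;          // when the event was handled
  outcome: EventOutcome;
  eventType: string;   // INSERT | UPDATE | DELETE
  table: string;
  rowId: string;
}

class EventLog {
  private buffer: LoggedEvent[];
  private next = 0;  // index of the next write slot
  private count = 0; // entries currently held (<= capacity)

  constructor(private capacity: number = 200) {
    this.buffer = new Array(capacity);
  }

  log(entry: LoggedEvent): void {
    this.buffer[this.next] = entry;
    this.next = (this.next + 1) % this.capacity;
    this.count = Math.min(this.count + 1, this.capacity);
  }

  /** Retained entries, oldest first. */
  entries(): LoggedEvent[] {
    const out: LoggedEvent[] = [];
    const start = (this.next - this.count + this.capacity) % this.capacity;
    for (let i = 0; i < this.count; i++) {
      out.push(this.buffer[(start + i) % this.capacity]);
    }
    return out;
  }
}
```

Exposing the instance in development (for example, assigning it to a property on `window`) is what makes the buffer inspectable from the browser console.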
In production, we track three aggregate metrics: events received per minute, events processed per minute, and cache invalidations per minute. A significant divergence between received and processed (meaning too many events are being filtered) or between processed and invalidated (meaning the batching is too aggressive) signals a tuning problem.
For the product-level view of how real-time sync transforms team collaboration, see our companion post: No More Refresh: How Real-Time Sync Changes Team Selling. And if you are interested in how we optimized the email layer that runs alongside these Realtime subscriptions, check out how we fixed email performance.
Real-Time Collaboration, Built In
SalesSheet syncs your team's data instantly. No polling, no refresh, no stale dashboards.
Try SalesSheet Free — No Credit Card