Chapter 8: Real-time Data with Supabase Realtime
Theoretical Foundations
In traditional web development, the client-server relationship has long been governed by a simple, repetitive pattern: the client asks, and the server answers. This is polling. Imagine a child in the backseat of a car asking their parent every ten seconds, "Are we there yet?" The parent, even if they are driving directly to a known destination, must expend mental energy to answer each time, and the child only receives new information when they ask. This is inefficient, creates unnecessary load on the server (the parent), and introduces latency (the child only knows the answer after asking).
In previous chapters, we explored how tRPC provides a type-safe, declarative way to build these "ask and answer" APIs. We defined procedures that, when invoked, return a snapshot of data at a specific moment in time. However, in modern applications—collaborative tools, dashboards, live chats, or inventory systems—data is often mutable by other users or external processes. Relying on polling to detect these changes is like trying to watch a live sports game by refreshing a static webpage every minute; you miss the action in between.
Supabase Realtime fundamentally changes this dynamic. It shifts the paradigm from "pull" to "push." Instead of the client repeatedly asking for updates, the server (or rather, the database layer) actively pushes changes to the client the moment they occur. This is achieved through PostgreSQL's native publication and replication system, exposed via WebSocket connections.
To understand this deeply, we must look at the underlying mechanism. PostgreSQL maintains a Write-Ahead Log (WAL), a low-level record of every single change (insert, update, delete) made to the database. This log is primarily used for replication and point-in-time recovery. Supabase Realtime taps into this stream. It allows you to create Publications—essentially filters that decide which tables or even which specific rows (based on RLS policies) should have their changes broadcast.
When a change occurs, it is serialized into a payload and sent over a WebSocket to any subscribed clients. This is not an approximation; it is the exact same data flow that ensures a read-replica stays in sync with a primary database, but we are exposing it directly to the frontend.
The Analogy: The Newsroom vs. The Town Crier
To visualize this, let's compare the traditional polling approach to a Town Crier.
In the Town Crier model (Polling), a resident (the Client) must walk to the town square (the Server/API) every hour to ask the crier (the Backend) if there is any news. The crier might have no news, or they might have news from five minutes ago. The resident never knows exactly when the news breaks; they only know when they ask.
Supabase Realtime transforms this into a Modern Newsroom.
- The Publisher (The Database): The newspaper editor (PostgreSQL) writes a story (a database row).
- The Wire Service (WAL & Publications): As soon as the editor hits "print," the story is sent over the wire.
- The Subscribers (The Clients): Every client with a radio (WebSocket) tuned to that specific frequency receives the story instantly.
Crucially, this isn't a broadcast to everyone. It's targeted. If you subscribe only to "Sports News" (a specific table or row), you won't be disturbed by "Political News" (other tables). This is where Row Level Security (RLS) becomes the gatekeeper of the newsroom. RLS determines not just who can read the news, but who is allowed to publish it and who is allowed to subscribe to the updates.
Integrating Realtime with tRPC: The Type-Safe Bridge
In the context of our stack, we are using tRPC as the primary interface for our backend logic. While Supabase provides a JavaScript client library (@supabase/supabase-js) to handle WebSockets directly, mixing raw Supabase client calls with tRPC procedures can lead to fragmented logic and loss of type safety.
The theoretical goal here is to encapsulate the Realtime logic within the tRPC ecosystem. We treat the Realtime stream not as a separate side-channel, but as a specialized type of tRPC query or subscription.
Think of tRPC procedures as Microservices (as discussed in previous chapters regarding API architecture). A standard query is a synchronous microservice call. A mutation is a command that alters state. A subscription (using WebSockets) is a persistent, bi-directional microservice connection.
By wrapping the Supabase Realtime listener inside a tRPC subscription, we ensure that:
- Type Inference: The data structure pushed from the database is automatically typed on the client. If the database schema changes, TypeScript will alert us immediately, preventing runtime errors.
- Context Handling: We can leverage tRPC's context (middleware) to inject the authenticated Supabase client, ensuring that the Realtime subscription is tied to the user's session and RLS policies.
- Unified Developer Experience: The developer interacts with real-time data using the same mental model and syntax as fetching static data.
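To make the encapsulation idea concrete, here is a minimal, dependency-free sketch (the Tables map and createTypedStream helper are hypothetical; a real implementation would wrap the Supabase Realtime listener in an actual tRPC subscription procedure). The point it demonstrates is the first benefit above: the row type flows from the table definition to every subscriber with no manual casting.

```typescript
// Hypothetical sketch: a generic stream whose row type is inferred from the
// table definition, mimicking what tRPC's inference gives us when the
// Realtime listener is wrapped in a subscription procedure.
interface Tables {
  tasks: { id: string; title: string; status: 'pending' | 'done' };
  users: { id: string; email: string };
}

type Listener<T> = (row: T) => void;

function createTypedStream<K extends keyof Tables>(_table: K) {
  const listeners: Listener<Tables[K]>[] = [];
  return {
    // Client side: the callback parameter is typed to the table's row shape
    subscribe(fn: Listener<Tables[K]>): void {
      listeners.push(fn);
    },
    // Server side: push a row (stands in for a WAL event arriving)
    push(row: Tables[K]): void {
      listeners.forEach((fn) => fn(row));
    },
  };
}

// Usage: TypeScript rejects pushing a row that does not match the 'tasks' shape
const taskStream = createTypedStream('tasks');
const received: string[] = [];
taskStream.subscribe((row) => received.push(row.title));
taskStream.push({ id: '1', title: 'Write chapter', status: 'pending' });
```

If the schema for tasks changes, every subscribe callback fails to compile until it is updated, which is exactly the safety property we want from the tRPC bridge.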
The Role of Edge Functions and LLMs
The chapter outline mentions optimizing these streams for Edge Functions and preparing data for LLMs. This introduces a layer of complexity regarding data gravity and processing.
Edge Functions are serverless functions running closer to the user (at the "edge" of the network). However, WebSocket connections are stateful and long-lived. Standard serverless functions are stateless and short-lived. Therefore, we cannot run the WebSocket listener inside an Edge Function in the traditional sense. Instead, the architecture shifts:
- Supabase Realtime (The Source): Handles the persistent WebSocket connection and pushes data.
- Edge Function (The Processor): Acts as a consumer of this stream or a middleware. When a Realtime event occurs, it might trigger an Edge Function via a webhook or HTTP call to perform heavy computation (like generating embeddings) without blocking the main database.
- LLM Consumption (The Consumer): Large Language Models require structured data. A raw database row is often too noisy. We use the Edge Function to transform the Realtime payload into a clean, semantic format suitable for an LLM.
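As a sketch of that transformation step (the field names here are assumptions, not a fixed schema), an Edge Function might reduce a noisy row to a single semantic sentence before embedding it:

```typescript
// Hypothetical raw row as it might arrive in a Realtime payload
interface RawTaskRow {
  id: string;
  title: string;
  status: string;
  inserted_at: string;
  updated_at: string;
  owner_fk: string;
}

// Keep only what an LLM needs; drop timestamps, keys, and internal metadata
function toSemanticText(row: RawTaskRow): string {
  return `Task "${row.title}" is now ${row.status}.`;
}

const sentence = toSemanticText({
  id: 'a1',
  title: 'Ship invoices',
  status: 'done',
  inserted_at: '2026-01-01T00:00:00Z',
  updated_at: '2026-01-02T00:00:00Z',
  owner_fk: 'u42',
});
// sentence: 'Task "Ship invoices" is now done.'
```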
Analogy: The Assembly Line
- Database (Raw Material): Produces the raw data (the WAL).
- Realtime (Conveyor Belt): Moves the material instantly to the next station.
- Edge Function (Quality Control/Refinement): Inspects, cleans, and formats the data.
- LLM (Final Assembly): Takes the refined data to build the final product (insights, summaries, or generated code).
Visualizing the Data Flow
The following diagram illustrates the flow of data from the database change to the LLM, highlighting the separation of concerns between the persistent Realtime connection and the ephemeral Edge Function processing.
Under the Hood: The Mechanics of the Subscription
When we implement a tRPC subscription backed by Supabase Realtime, we are essentially creating a listener loop.
- Connection Establishment: The client initiates a WebSocket handshake with Supabase. This is authenticated via JWT (JSON Web Tokens), which Supabase verifies against the database's RLS policies.
- Channel Subscription: The client subscribes to a specific channel (e.g., realtime:public:todos). This channel name is constructed from the schema (public) and the table name (todos).
- Filtering via RLS: This is the most critical security aspect. Supabase Realtime respects PostgreSQL RLS. Even if a client subscribes to a table, they will only receive events for rows where their RLS policy allows SELECT (or INSERT/UPDATE if they are the ones making the changes). This prevents data leakage in multi-tenant applications.
- Event Payloads: When a change occurs, the payload contains:
  - eventType: INSERT, UPDATE, or DELETE.
  - new: The row data after the change.
  - old: The row data before the change (mostly relevant for UPDATE and DELETE).
  - columns: Metadata about the changed columns.
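These payload shapes can be modeled as a TypeScript discriminated union. This is a sketch, not the real supabase-js types (those are richer, and the exact contents of old depend on the table's replica identity), but it shows how narrowing on eventType tells the compiler which fields are present:

```typescript
interface TaskRow {
  id: string;
  title: string;
  status: 'pending' | 'done';
}

// Discriminated union over eventType: each operation carries different data
type ChangePayload =
  | { eventType: 'INSERT'; new: TaskRow; old: null }
  | { eventType: 'UPDATE'; new: TaskRow; old: Partial<TaskRow> }
  | { eventType: 'DELETE'; new: null; old: Pick<TaskRow, 'id'> };

function describeChange(p: ChangePayload): string {
  switch (p.eventType) {
    case 'INSERT':
      return `row ${p.new.id} created`;
    case 'UPDATE':
      return `row ${p.new.id} updated`;
    case 'DELETE':
      // TypeScript knows p.new is null here, so only p.old is accessible
      return `row ${p.old.id} deleted`;
  }
}

const msg = describeChange({ eventType: 'DELETE', new: null, old: { id: '42' } });
// msg: 'row 42 deleted'
```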
- tRPC Integration: In the tRPC backend, we define a subscription procedure. This procedure acts as the event handler. It listens for the WebSocket events and pushes them down the tRPC WebSocket transport layer to the client.
Optimizing for LLM Data Transformation
The final theoretical piece is the transformation of this real-time data for LLM consumption. LLMs operate on tokens and context windows. Sending a raw, verbose database row with timestamps, foreign keys, and internal metadata is inefficient and confusing for the model.
Here we conceptually apply the Supervisor Node / Worker Agent pattern (introduced in the book's definitions).
- The Supervisor Node (The Edge Function Trigger): It monitors the Realtime stream. It doesn't process the data itself; it decides if the data is relevant enough to be sent to a specialized worker.
- The Worker Agent (The Data Transformer): Once triggered, a specific Worker Agent (the Edge Function) takes the raw payload. It strips away irrelevant noise, formats the remaining text, and perhaps converts it into a vector embedding (using a model like text-embedding-ada-002).
- The Vector Store (The Memory): This transformed, vectorized data is then stored in a Vector Store (like pgvector within the same PostgreSQL database). Now, the LLM can query this semantic memory to understand the context of the real-time changes.
For example, if a user updates a status in a project management app:
- Realtime pushes the update to the UI immediately (User sees "Status: Done").
- Edge Function triggers, sees the status change, summarizes the task, and updates a vector embedding in the database.
- LLM later queries the vector store to answer "What tasks were completed today?" using semantic search, retrieving the exact context generated by the real-time event.
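The retrieval step in that last point can be sketched as follows. This is a toy: real systems store high-dimensional embeddings in pgvector and compute similarity in SQL, while here a fake 3-dimensional embedding stands in:

```typescript
type Vec = [number, number, number];

interface MemoryEntry {
  text: string;
  embedding: Vec;
}

// Cosine similarity: 1.0 means identical direction, 0 means unrelated
function cosine(a: Vec, b: Vec): number {
  const dot = a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
  const norm = (v: Vec) => Math.hypot(v[0], v[1], v[2]);
  return dot / (norm(a) * norm(b));
}

// Semantic search: return the stored entry closest to the query embedding
function nearest(store: MemoryEntry[], query: Vec): MemoryEntry {
  return store.reduce((best, entry) =>
    cosine(entry.embedding, query) > cosine(best.embedding, query) ? entry : best
  );
}

const store: MemoryEntry[] = [
  { text: 'Task "Ship invoices" completed today', embedding: [0.9, 0.1, 0] },
  { text: 'User signed up', embedding: [0, 0.2, 0.9] },
];
const hit = nearest(store, [1, 0, 0]); // query vector ~ "completed tasks"
// hit.text: 'Task "Ship invoices" completed today'
```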
This architecture ensures that the frontend remains responsive via Supabase Realtime, while the backend intelligence grows asynchronously via Edge Functions, feeding the LLM a constant stream of clean, semantic data.
Basic Code Example
This example demonstrates a minimal, self-contained SaaS dashboard component that listens for live changes to a tasks table in a Supabase database. Instead of polling, it uses Supabase's Realtime subscriptions to instantly update the UI when a new task is added or an existing one is modified.
We will implement this within a React component using TypeScript. The logic handles connection lifecycle, error resilience, and data transformation for immediate UI consumption.
The Architecture
The flow of data is straightforward: The Supabase client establishes a WebSocket connection to the database. When a database row changes, Supabase emits a payload. Our client-side listener receives this payload, transforms it, and updates the local React state.
The Implementation
Here is the complete TypeScript code. It is designed to be dropped into a Next.js or Vite project (assuming @supabase/supabase-js is installed).
// src/hooks/useRealtimeTasks.ts
import { useEffect, useState } from 'react';
import { createClient, SupabaseClient, RealtimeChannel } from '@supabase/supabase-js';
// 1. CONFIGURATION
// In a real app, load these from environment variables (.env.local)
const SUPABASE_URL = 'https://your-project-ref.supabase.co';
const SUPABASE_ANON_KEY = 'your-anon-public-key';
// 2. TYPE DEFINITIONS
// Defines the shape of our database table 'tasks'
interface Task {
id: string;
title: string;
status: 'pending' | 'done';
inserted_at: string;
}
// Defines the shape of the Realtime event payload
interface RealtimePayload {
eventType: 'INSERT' | 'UPDATE' | 'DELETE';
new: Task | null;
old: { id: string } | null;
}
/**
* A custom hook to manage a Realtime subscription to the 'tasks' table.
* It handles connection setup, event filtering, and state updates.
*/
const useRealtimeTasks = () => {
const [tasks, setTasks] = useState<Task[]>([]);
const [channel, setChannel] = useState<RealtimeChannel | null>(null);
const [error, setError] = useState<string | null>(null);
// Initialize Supabase Client once per component instance; the lazy
// useState initializer avoids re-creating the client on every render
const [supabase] = useState<SupabaseClient>(() => createClient(SUPABASE_URL, SUPABASE_ANON_KEY));
useEffect(() => {
/**
* 3. CHANNEL SETUP
* Create a dedicated channel for the 'tasks' table.
* 'schema': public
* 'table': tasks
* 'event': '*' (listen for all events: INSERT, UPDATE, DELETE)
*/
const taskChannel = supabase
.channel('public:tasks')
.on(
'postgres_changes',
{
event: '*', // Listen for all changes
schema: 'public',
table: 'tasks',
},
(payload: RealtimePayload) => {
// 4. DATA TRANSFORMATION & STATE UPDATE
handleRealtimeEvent(payload);
}
)
.subscribe((status) => {
// Handle connection status changes
if (status === 'SUBSCRIBED') {
console.log('Realtime connection established.');
setError(null);
}
if (status === 'CHANNEL_ERROR') {
setError('Failed to connect to Realtime server.');
}
});
setChannel(taskChannel);
// 5. CLEANUP
// Unsubscribe when the component unmounts to prevent memory leaks
return () => {
if (taskChannel) {
supabase.removeChannel(taskChannel);
}
};
}, []); // Run once on mount
/**
* 6. EVENT HANDLER LOGIC
* Pure function to determine how to update state based on event type.
*/
const handleRealtimeEvent = (payload: RealtimePayload) => {
const { eventType, new: newTask, old: oldTask } = payload;
switch (eventType) {
case 'INSERT':
if (newTask) {
// Append new task to the beginning of the list
setTasks((prev) => [newTask, ...prev]);
}
break;
case 'UPDATE':
if (newTask) {
// Map over existing tasks and replace the updated one
setTasks((prev) =>
prev.map((task) => (task.id === newTask.id ? newTask : task))
);
}
break;
case 'DELETE':
if (oldTask) {
// Filter out the deleted task by ID
setTasks((prev) => prev.filter((task) => task.id !== oldTask.id));
}
break;
default:
console.warn('Unknown event type:', eventType);
}
};
return { tasks, error };
};
// --- MOCK IMPLEMENTATION FOR DEMONSTRATION PURPOSES ---
// In a real app, this would be a separate component consuming the hook.
/**
* Dashboard Component
* Renders the list of tasks and displays real-time updates.
*/
export const TaskDashboard = () => {
const { tasks, error } = useRealtimeTasks();
if (error) {
return <div style={{ color: 'red' }}>Error: {error}</div>;
}
return (
<div style={{ padding: '20px', fontFamily: 'sans-serif' }}>
<h2>Live Task Dashboard</h2>
<p>Open this in two tabs. Add a task in one, and it appears instantly in the other.</p>
<ul>
{tasks.length === 0 ? (
<li>No tasks yet. Waiting for database changes...</li>
) : (
tasks.map((task) => (
<li key={task.id} style={{ marginBottom: '10px', padding: '10px', border: '1px solid #eee' }}>
<strong>{task.title}</strong> -
<span style={{ color: task.status === 'done' ? 'green' : 'orange' }}>
{' '}{task.status.toUpperCase()}
</span>
</li>
))
)}
</ul>
</div>
);
};
Detailed Line-by-Line Explanation
1. Configuration and Types
- Why: These credentials authenticate the client with your Supabase project.
- Security Note: In production, these must be injected via environment variables (e.g., process.env.NEXT_PUBLIC_SUPABASE_URL) to avoid committing secrets to version control.
- Why: TypeScript interfaces ensure type safety. When the Realtime event arrives, we cast the payload to RealtimePayload. This prevents runtime errors when accessing properties like payload.new.title.
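A minimal sketch of that environment-variable pattern (the variable names follow the common Next.js NEXT_PUBLIC_ convention and are assumptions, not fixed requirements):

```typescript
// Hypothetical helper: read credentials from the environment and fail fast
// instead of shipping hard-coded keys.
function loadSupabaseConfig(env: Record<string, string | undefined>) {
  const url = env.NEXT_PUBLIC_SUPABASE_URL;
  const key = env.NEXT_PUBLIC_SUPABASE_ANON_KEY;
  if (!url || !key) {
    throw new Error('Missing NEXT_PUBLIC_SUPABASE_URL or NEXT_PUBLIC_SUPABASE_ANON_KEY');
  }
  return { url, key };
}

// In a Next.js app you would call loadSupabaseConfig(process.env)
const config = loadSupabaseConfig({
  NEXT_PUBLIC_SUPABASE_URL: 'https://your-project-ref.supabase.co',
  NEXT_PUBLIC_SUPABASE_ANON_KEY: 'your-anon-public-key',
});
```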
2. The Custom Hook (useRealtimeTasks)
- Why: We use React's useState to hold the local cache of data. This is the "source of truth" for the UI while the WebSocket connection is active.
3. Channel Setup & Subscription
- Under the Hood: This creates a unique identifier for the WebSocket connection topic. If you have multiple listeners (e.g., one for tasks, one for users), you create separate channels.
.on(
'postgres_changes',
{ event: '*', schema: 'public', table: 'tasks' },
(payload: RealtimePayload) => { ... }
)
- postgres_changes: This is the specific event filter provided by Supabase. It hooks into the database's logical replication stream.
- event: '*': We listen for everything. In a high-traffic app, you might filter for specific events (e.g., event: 'INSERT') to reduce client-side processing.
- Callback: The third argument is the function executed every time a change occurs. This is where we trigger our state update.
- Lifecycle Management: The subscription callback receives the connection status.
  - SUBSCRIBED: The WebSocket is open and listening.
  - CHANNEL_ERROR: Something went wrong (e.g., network drop, RLS policy denial). We set an error state here to alert the user.
4. Event Handling Logic
The handleRealtimeEvent function is the brain of the data transformation. It maps database events to React state updates.
- INSERT: We prepend the new item ([newTask, ...prev]) so the user sees it immediately at the top of the list.
- UPDATE: We use .map() to find the specific task by ID and swap it with the new version. This is efficient for small lists.
- DELETE: We use .filter() to remove the item with the matching ID from the local array.
5. Cleanup (Critical for Performance)
- Why: If a user navigates away from this page without this cleanup, the WebSocket connection remains open in the background, consuming bandwidth and memory. The useEffect cleanup function ensures the connection is closed when the component unmounts.
Common Pitfalls
1. The "Stale Closure" Trap in Async Loops
When using useEffect, the setTasks function inside the subscription callback captures the tasks state from the time the effect ran. If you rely on the previous state (e.g., setTasks(prev => ...)), you are safe. However, if you reference the tasks variable directly inside the callback without the functional update pattern, you may overwrite newer changes made by other users.
- Bad: setTasks([...tasks, newTask]) (uses the stale tasks value from the initial render)
- Good: setTasks((prev) => [...prev, newTask]) (always uses the latest state)
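The trap can be demonstrated outside React with plain closures (a simplified setState stand-in, not the real React API):

```typescript
// A simplified stand-in for React state; the real useState behaves similarly
let state: number[] = [];
const setState = (updater: number[] | ((prev: number[]) => number[])): void => {
  state = typeof updater === 'function' ? updater(state) : updater;
};

// "First render": this snapshot never updates, like a captured state variable
const captured = state;

setState([...captured, 1]); // state is [1], looks fine so far
setState([...captured, 2]); // BUG: captured is still [], so the 1 is lost
// state is now [2]

setState((prev) => [...prev, 3]); // functional form reads the latest state
// state is now [2, 3]
```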
2. Row Level Security (RLS) Denials
Supabase Realtime is subject to RLS policies. If your RLS policy for the tasks table does not allow SELECT for the anonymous user (or the authenticated user), the WebSocket connection will connect successfully, but no events will be received.
- Fix: Ensure your RLS policy allows the SELECT operation for the relevant role. You can inspect existing policies in the Supabase Dashboard or via the SQL Editor (select * from pg_policies;), or monitor the network logs for 403 errors.
3. Vercel/Edge Function Timeouts
If you are using Edge Functions to write to the database that triggers these Realtime events, be aware that Realtime events are not instant. They are asynchronous.
- Issue: If an Edge Function writes data and immediately closes the connection, the Realtime event might fire after the client expects it.
- Mitigation: Do not rely on Realtime for the immediate confirmation of a write operation. Use the insert response from the client-side supabase.from('tasks').insert(...) to update the UI optimistically, and let Realtime act as a synchronization mechanism for other clients or subsequent updates.
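A sketch of that optimistic pattern (applyOptimistic and reconcile are hypothetical helpers; in a real app the draft would come from the insert call's returned row):

```typescript
interface Task {
  id: string;
  title: string;
  status: 'pending' | 'done';
}

// Apply the local write immediately so the UI never waits on the network
function applyOptimistic(prev: Task[], draft: Task): Task[] {
  return [draft, ...prev];
}

// When the Realtime event later echoes the same row, replace the draft by id
// instead of appending a duplicate
function reconcile(prev: Task[], confirmed: Task): Task[] {
  return prev.some((t) => t.id === confirmed.id)
    ? prev.map((t) => (t.id === confirmed.id ? confirmed : t))
    : [confirmed, ...prev];
}

const afterWrite = applyOptimistic([], { id: '1', title: 'Draft', status: 'pending' });
const afterEcho = reconcile(afterWrite, { id: '1', title: 'Draft', status: 'pending' });
// afterEcho still contains exactly one task with id '1'
```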
4. Resource Leaks on Rapid Navigation
In Single Page Applications (SPAs) like Next.js, users often navigate between pages quickly. If the useEffect cleanup function is missed or the removeChannel call never runs, you will accumulate open channels and subscriptions. Each leak consumes bandwidth and memory, and browsers also cap the number of concurrent connections per domain, so accumulated leaks can eventually block new requests.
- Mitigation: Always verify the cleanup function runs by adding a console.log('Cleaning up') inside the return statement of your useEffect.
5. Hallucinated JSON Payloads
The Realtime payload structure changes slightly depending on the operation.
- INSERT: Contains the new object; old is null.
- UPDATE: Contains the new object and the old object (with the changed fields).
- DELETE: new is null; old contains the deleted row (usually just the primary key).
- Risk: Assuming payload.new always exists will cause a runtime crash on delete events. Always check for existence (e.g., if (newTask) { ... }) before accessing properties.
The chapter continues with advanced code, exercises, and solutions with analysis; you can find them in the ebook on Leanpub.com or Amazon.
Code License: All code examples are released under the MIT License. Github repo.
Content Copyright: Copyright © 2026 Edgar Milvus | Privacy & Cookie Policy. All rights reserved.
All textual explanations, original diagrams, and illustrations are the intellectual property of the author. To support the maintenance of this site via AdSense, please read this content exclusively online. Copying, redistribution, or reproduction is strictly prohibited.