Supabase & PostgreSQL
PostgreSQL Query Optimization in Supabase: Practical Patterns
TL;DR
Every Supabase project runs on PostgreSQL, and PostgreSQL is fast — until it is not. I have watched a single missing index turn a 4ms query into a 12-second table scan on a 2M-row orders table. I have debugged N+1 queries that fired 200 database calls on a single page load. I have seen offset pagination bring a dashboard to its knees at page 500. These are not hypotheticals. These are production incidents from projects I have built through my services. This guide covers the patterns I use on every project: reading EXPLAIN ANALYZE output like a detective, choosing the right index type for the job, eliminating N+1 queries before they ship, configuring Supabase connection pooling correctly, switching from offset to cursor pagination, using materialized views for expensive aggregations, and the full optimization checklist I run before any launch.
EXPLAIN ANALYZE — Your Best Friend
If you take one thing from this article, let it be this: never create an index without running EXPLAIN ANALYZE first. Never assume a query is fast because it looks simple. Never guess.
EXPLAIN shows you the query plan — what PostgreSQL intends to do. EXPLAIN ANALYZE actually runs the query and shows you what it did, how long each step took, and how many rows it processed. The difference between the two is the difference between reading a recipe and tasting the food.
Here is what a slow query looks like in EXPLAIN ANALYZE output:
EXPLAIN ANALYZE
SELECT *
FROM orders
WHERE customer_email = 'john@example.com'
ORDER BY created_at DESC
LIMIT 10;

Sort (cost=45230.12..45230.15 rows=10 width=284) (actual time=1247.331..1247.334 rows=10 loops=1)
Sort Key: created_at DESC
Sort Method: top-N heapsort Memory: 27kB
-> Seq Scan on orders (cost=0.00..41842.00 rows=523 width=284) (actual time=0.028..1241.207 rows=489 loops=1)
Filter: (customer_email = 'john@example.com'::text)
Rows Removed by Filter: 1999511
Planning Time: 0.089 ms
Execution Time: 1247.402 ms

Read that output carefully. Three things jump out:
- Seq Scan on orders — PostgreSQL is reading every single row in the table. Two million rows scanned to find 489 matches. That is the database equivalent of searching every book in a library to find one author.
- Rows Removed by Filter: 1999511 — Almost two million rows examined and thrown away. That is wasted I/O, wasted CPU, wasted time.
- Execution Time: 1247.402 ms — Over a second for one query. Stack a few of these on a single page load and your users are staring at a spinner.
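Those red flags are mechanical enough that you can scan for them programmatically. Here is a hypothetical helper (illustrative only, not part of any Supabase or PostgreSQL API) that flags the same three signals in EXPLAIN ANALYZE text output:

```typescript
// Hypothetical red-flag scanner for EXPLAIN ANALYZE text output.
// Flags: sequential scans, large filter discards, slow execution.
function explainRedFlags(plan: string): string[] {
  const flags: string[] = [];
  if (plan.includes("Seq Scan")) flags.push("sequential scan");
  const removed = plan.match(/Rows Removed by Filter: (\d+)/);
  if (removed && Number(removed[1]) > 100_000) {
    flags.push(`filter discarded ${removed[1]} rows`);
  }
  const time = plan.match(/Execution Time: ([\d.]+) ms/);
  if (time && Number(time[1]) > 100) {
    flags.push(`slow execution: ${time[1]} ms`);
  }
  return flags;
}
```

Run it over the plan above and all three flags fire; run it over a healthy index-scan plan and it returns an empty array.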
Now watch what happens after we add an index:
CREATE INDEX idx_orders_customer_email ON orders (customer_email);

EXPLAIN ANALYZE
SELECT *
FROM orders
WHERE customer_email = 'john@example.com'
ORDER BY created_at DESC
LIMIT 10;

Limit (cost=1.56..24.83 rows=10 width=284) (actual time=0.041..0.068 rows=10 loops=1)
-> Index Scan using idx_orders_customer_email on orders (cost=0.43..1143.29 rows=489 width=284) (actual time=0.039..0.062 rows=10 loops=1)
Index Cond: (customer_email = 'john@example.com'::text)
Planning Time: 0.132 ms
Execution Time: 0.089 ms

From 1247ms to 0.089ms. That is a 14,000x improvement from a single index. But we can do better. The query still sorts all 489 matching rows to find the top 10 by created_at. A composite index handles both the filter and the sort:
CREATE INDEX idx_orders_customer_email_created ON orders (customer_email, created_at DESC);

Limit (cost=0.43..2.87 rows=10 width=284) (actual time=0.031..0.042 rows=10 loops=1)
-> Index Scan using idx_orders_customer_email_created on orders (cost=0.43..119.52 rows=489 width=284) (actual time=0.029..0.038 rows=10 loops=1)
Index Cond: (customer_email = 'john@example.com'::text)
Planning Time: 0.118 ms
Execution Time: 0.058 ms

The planner now walks the index in order and stops after 10 rows. No sort step at all. This is the kind of optimization that makes your API feel instant.
You can run EXPLAIN ANALYZE directly in the Supabase SQL Editor. I do it constantly — before adding indexes, after adding indexes, when a query feels slow, when I am reviewing someone else's schema. It is the single most useful debugging tool PostgreSQL gives you.
Indexing Strategy
Indexes are not free. Every index you add slows down writes because PostgreSQL has to update the index on every INSERT, UPDATE, and DELETE. Indexes consume disk space. Too many indexes on a table with heavy writes will degrade performance in a different way.
My rule is simple: index what you query, not what you might query someday.
Here is how I decide what to index:
Start with your queries
Look at every query your application actually runs. In a Supabase project, that means:
- Every .from('table').select() call in your frontend
- Every RPC function
- Every RLS policy (these run on every request and people forget to index them)
- Every Server Action that hits the database
For each query, identify the columns used in WHERE, JOIN ON, ORDER BY, and GROUP BY clauses. Those are your index candidates.
Check if the index will actually be used
PostgreSQL has a query optimizer and it is smart. If your table has 100 rows, the optimizer will ignore your index and do a sequential scan because it is faster to read 100 rows than to traverse an index tree. That is correct behavior. Do not fight the optimizer — it knows more than you think.
This is why you always EXPLAIN ANALYZE first. Create the index, run the query, and verify PostgreSQL is actually using it. If the planner chooses a Seq Scan on a table with an index, it has a reason. Usually the table is small, or the query returns a large percentage of the table's rows.
The selectivity test
An index is most useful when it is selective — when it narrows down the result set significantly. An index on a status column with three possible values (active, inactive, pending) is far less useful than an index on email where every value is unique.
Ask yourself: does this column filter out at least 90% of the rows? If yes, it is a good index candidate. If the column has low cardinality (few distinct values), a partial index or a different approach is usually better:
-- Instead of indexing the entire status column
-- Use a partial index for the specific status you query most
CREATE INDEX idx_orders_pending ON orders (created_at DESC)
WHERE status = 'pending';

This index is smaller, faster to maintain, and faster to scan because it only contains the rows you actually care about.
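The selectivity test lends itself to a quick back-of-the-envelope check. This hypothetical helper (illustrative only) takes the numbers you can read from pg_stats, the distinct-value count and the table row count, and applies the 90% rule from above:

```typescript
// Hypothetical selectivity check: would an index on this column filter
// out at least `threshold` of the rows for a typical equality lookup?
function passesSelectivityTest(
  distinctValues: number,
  totalRows: number,
  threshold = 0.9 // "filters out at least 90% of the rows"
): boolean {
  if (totalRows === 0 || distinctValues === 0) return false;
  // Average rows matched per distinct value, assuming even distribution.
  const avgRowsPerValue = totalRows / distinctValues;
  const fractionFilteredOut = 1 - avgRowsPerValue / totalRows;
  return fractionFilteredOut >= threshold;
}
```

A unique email column on a 2M-row table sails through; a three-value status column fails, which is exactly when you reach for a partial index instead.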
Common Index Types — B-tree, GIN, and GiST
PostgreSQL gives you several index types. Most developers only use B-tree because it is the default. That is like only using a hammer when the toolbox has wrenches and screwdrivers too.
B-tree (the default)
B-tree indexes handle equality (=) and range queries (<, >, <=, >=, BETWEEN) efficiently. They also support ORDER BY and IS NULL checks. If you run CREATE INDEX without specifying a type, you get a B-tree.
Use B-tree for:
- Primary keys and foreign keys (PostgreSQL does this automatically for primary keys)
- Email lookups, user ID lookups, any exact-match query
- Date range queries, price ranges, any range-based filter
- Columns you sort by
-- B-tree for exact match and range
CREATE INDEX idx_products_price ON products (price);
-- Composite B-tree for filter + sort
CREATE INDEX idx_orders_user_date ON orders (user_id, created_at DESC);

GIN (Generalized Inverted Index)
GIN indexes are built for composite values — arrays, JSONB, and full-text search. If you use Supabase's JSONB columns or PostgreSQL's tsvector for search, GIN is your index.
-- GIN index for JSONB queries
CREATE INDEX idx_products_metadata ON products USING gin (metadata);
-- Now this query uses the index
SELECT * FROM products
WHERE metadata @> '{"color": "red", "size": "large"}';
-- GIN index for full-text search
-- coalesce guards against NULLs: a NULL title or body would otherwise
-- make the whole concatenation (and the vector) NULL
ALTER TABLE articles ADD COLUMN search_vector tsvector
GENERATED ALWAYS AS (to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, ''))) STORED;
CREATE INDEX idx_articles_search ON articles USING gin (search_vector);
-- Full-text search query
SELECT title, ts_rank(search_vector, query) AS rank
FROM articles, to_tsquery('english', 'postgresql & optimization') AS query
WHERE search_vector @@ query
ORDER BY rank DESC
LIMIT 20;

GIN indexes are slower to build and update than B-tree, but they make queries on complex data types possible at scale. Without a GIN index, a JSONB containment query on a large table will do a sequential scan every time.
GiST (Generalized Search Tree)
GiST indexes handle geometric data, ranges, and proximity queries. If you are doing geospatial work with PostGIS (which Supabase supports out of the box), GiST is essential.
-- Enable PostGIS extension in Supabase
CREATE EXTENSION IF NOT EXISTS postgis;
-- GiST index for geospatial queries
CREATE INDEX idx_stores_location ON stores USING gist (location);
-- Find stores within 5km of a point
SELECT name, ST_Distance(location, ST_MakePoint(79.8612, 6.9271)::geography) AS distance
FROM stores
WHERE ST_DWithin(location, ST_MakePoint(79.8612, 6.9271)::geography, 5000)
ORDER BY distance;

The coordinates in that example are Colombo, Sri Lanka — where I often build location-based features for local clients. Without the GiST index, that proximity query scans every row in the table and calculates the distance for each one. With the index, PostgreSQL narrows it down to a small set of candidates before calculating distances.
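To demystify what ST_Distance is doing per row, here is the great-circle distance approximated with the haversine formula on a spherical Earth. This is a sketch of the underlying math, not PostGIS's actual geography implementation (which uses a more accurate spheroid model):

```typescript
// Great-circle distance via the haversine formula (spherical Earth,
// radius ~6,371 km). Arguments are (lon, lat), matching ST_MakePoint.
function haversineMeters(lon1: number, lat1: number, lon2: number, lat2: number): number {
  const R = 6_371_000; // mean Earth radius in metres
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}
```

Computing this for every row is exactly the work the GiST index lets PostgreSQL skip for the vast majority of the table.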
N+1 Query Prevention
The N+1 problem is the single most common performance killer I see in Supabase projects. It happens when your code fetches a list of items (1 query) and then fetches related data for each item individually (N queries).
Here is the classic example:
// THE N+1 DISASTER — do not do this
const { data: orders } = await supabase
.from('orders')
.select('*')
.eq('status', 'pending');
// This fires one query PER order
for (const order of orders) {
const { data: items } = await supabase
.from('order_items')
.select('*')
.eq('order_id', order.id);
order.items = items;
}
// 1 query for orders + 200 queries for items = 201 database calls

Supabase gives you the tools to solve this in one line. Use the foreign key relationship syntax:
// ONE QUERY — Supabase joins through foreign keys
const { data: orders } = await supabase
.from('orders')
.select(`
*,
order_items (
id,
product_name,
quantity,
unit_price
)
`)
.eq('status', 'pending');
// 1 query. All data. Done.

Under the hood, Supabase's PostgREST layer translates this into a single SQL query with a lateral join. One round trip to the database instead of 201.
For more complex joins that the Supabase client cannot express, use an RPC function:
CREATE OR REPLACE FUNCTION get_order_summary(p_status text)
RETURNS TABLE (
order_id uuid,
customer_name text,
total_amount numeric,
item_count bigint,
latest_item text
) AS $$
BEGIN
RETURN QUERY
SELECT
o.id AS order_id,
c.name AS customer_name,
SUM(oi.quantity * oi.unit_price) AS total_amount,
COUNT(oi.id) AS item_count,
(SELECT oi2.product_name
FROM order_items oi2
WHERE oi2.order_id = o.id
ORDER BY oi2.created_at DESC
LIMIT 1) AS latest_item
FROM orders o
JOIN customers c ON c.id = o.customer_id
JOIN order_items oi ON oi.order_id = o.id
WHERE o.status = p_status
GROUP BY o.id, c.name;
END;
$$ LANGUAGE plpgsql STABLE;

const { data } = await supabase.rpc('get_order_summary', { p_status: 'pending' });

One call. All the data your UI needs. The database does what databases are built to do — join, aggregate, and return structured results.
Connection Pooling with Supabase
Every Supabase project comes with two database connection options:
- Direct connection (port 5432) — A raw PostgreSQL connection. Each connection holds a server process. Limited by your plan's connection count.
- Pooled connection via Supavisor (port 6543) — A connection pooler that sits between your app and PostgreSQL. Multiple application connections share a smaller number of database connections.
For most applications, you want the pooled connection. Here is why.
PostgreSQL connections are expensive. Each connection consumes about 10MB of memory on the server. On a Supabase Free plan, you get roughly 60 direct connections. On Pro, around 200. If you are running a Next.js app on Vercel with serverless functions, each function invocation can open a new connection. During a traffic spike, you exhaust your connection limit in seconds and every subsequent request fails.
The pooler solves this by maintaining a pool of persistent connections to PostgreSQL and multiplexing your application's requests across them. A hundred serverless functions can share twenty database connections.
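The multiplexing idea is easy to model. Here is a conceptual sketch, not Supavisor's actual implementation: a fixed number of connection slots, with extra callers queuing until a slot frees up.

```typescript
// Conceptual pool: N callers share `size` connection slots; callers
// beyond the limit wait in a queue instead of failing.
class ConnectionPool {
  private available: number;
  private waiters: Array<() => void> = [];

  constructor(size: number) {
    this.available = size;
  }

  async acquire(): Promise<void> {
    if (this.available > 0) {
      this.available--;
      return;
    }
    // No free slot: park this caller until release() wakes it up.
    await new Promise<void>((resolve) => this.waiters.push(resolve));
  }

  release(): void {
    const next = this.waiters.shift();
    if (next) next(); // hand the slot straight to a waiting caller
    else this.available++;
  }
}
```

With a pool of 20 slots, 100 concurrent "function invocations" never hold more than 20 connections at once, which is the property that keeps a traffic spike from exhausting your PostgreSQL connection limit.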
Configuration
In your Supabase project settings under Database, you will find both connection strings. Use the pooled one for your application:
# Direct connection — use for migrations and admin tasks
DATABASE_URL="postgresql://postgres:[password]@db.[ref].supabase.co:5432/postgres"
# Pooled connection — use for your application
DATABASE_URL="postgresql://postgres.[ref]:[password]@aws-0-[region].pooler.supabase.com:6543/postgres?pgbouncer=true"

For Prisma specifically, you need to configure both because Prisma migrations require a direct connection:
# .env
DATABASE_URL="postgresql://postgres.[ref]:[password]@aws-0-[region].pooler.supabase.com:6543/postgres?pgbouncer=true"
DIRECT_URL="postgresql://postgres:[password]@db.[ref].supabase.co:5432/postgres"

// prisma/schema.prisma
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
directUrl = env("DIRECT_URL")
}

One thing to watch: if you use Supabase Realtime subscriptions or long-running queries, those need the direct connection. The pooler is designed for short-lived transactional queries, not persistent connections.
Pagination — Cursor vs Offset
Offset pagination is how most developers start:
SELECT * FROM products ORDER BY created_at DESC LIMIT 20 OFFSET 400;

This looks fine until you realize what PostgreSQL actually does. It fetches 420 rows, throws away the first 400, and gives you 20. At page 100 (offset 2000), it fetches 2020 rows and throws away 2000. The deeper you paginate, the slower it gets. On a table with millions of rows, page 500 can take several seconds.
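The arithmetic is worth making explicit. These hypothetical helpers (illustrative only) capture the row-fetch cost of each strategy:

```typescript
// Rows PostgreSQL must fetch to serve one page.
// OFFSET: read offset + limit rows, discard the first `offset`.
function rowsFetchedWithOffset(offset: number, limit: number): number {
  return offset + limit;
}

// Cursor: read exactly `limit` rows, regardless of depth.
function rowsFetchedWithCursor(limit: number): number {
  return limit;
}
```

Offset cost grows linearly with depth; cursor cost is constant. That difference is the whole story of this section.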
Cursor-based pagination does not have this problem:
-- First page
SELECT * FROM products
ORDER BY created_at DESC, id DESC
LIMIT 20;
-- Next page (use the last item's created_at and id as the cursor)
SELECT * FROM products
WHERE (created_at, id) < ('2025-02-10T14:30:00Z', 'abc-123')
ORDER BY created_at DESC, id DESC
LIMIT 20;

PostgreSQL uses the index to jump directly to the cursor position and reads 20 rows. It does not matter if you are on page 1 or page 5000 — the performance is identical.
Here is how I implement cursor pagination with Supabase:
interface PaginatedResult<T> {
data: T[];
nextCursor: string | null;
hasMore: boolean;
}
async function getProducts(
cursor?: string,
pageSize = 20
): Promise<PaginatedResult<Product>> {
let query = supabase
.from('products')
.select('id, name, price, created_at')
.order('created_at', { ascending: false })
.order('id', { ascending: false })
.limit(pageSize + 1); // Fetch one extra to check if there are more
if (cursor) {
const { created_at, id } = JSON.parse(
Buffer.from(cursor, 'base64url').toString()
);
query = query.or(
`created_at.lt.${created_at},and(created_at.eq.${created_at},id.lt.${id})`
);
}
const { data, error } = await query;
if (error) throw error;
const hasMore = data.length > pageSize;
const items = hasMore ? data.slice(0, pageSize) : data;
const lastItem = items[items.length - 1];
const nextCursor = hasMore
? Buffer.from(
JSON.stringify({ created_at: lastItem.created_at, id: lastItem.id })
).toString('base64url')
: null;
return { data: items, nextCursor, hasMore };
}

The cursor is opaque to the client — it is just a base64-encoded string. The client sends it back to get the next page. No offset counting. No performance degradation. This pattern works identically whether your table has 1,000 rows or 10 million.
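The cursor scheme from the function above, isolated for clarity: the last row's sort keys become an opaque base64url token, and decoding simply reverses the process.

```typescript
// Encode/decode an opaque pagination cursor from a row's sort keys.
interface Cursor {
  created_at: string;
  id: string;
}

function encodeCursor(cursor: Cursor): string {
  return Buffer.from(JSON.stringify(cursor)).toString("base64url");
}

function decodeCursor(token: string): Cursor {
  return JSON.parse(Buffer.from(token, "base64url").toString());
}
```

The round trip is lossless, and the base64url alphabet keeps the token URL-safe so it can travel as a query parameter without escaping.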
The tradeoff: cursor pagination does not let you jump to page 47 directly. For most UIs — infinite scroll, "Load More" buttons, API endpoints — that is perfectly fine. If your UI genuinely needs random page access, use offset for display but cache aggressively, and consider whether your users actually click past page 5. Most do not.
Materialized Views
Some queries are expensive by nature. Aggregating sales data across millions of rows, computing leaderboard rankings, generating reporting summaries — these queries do heavy computation and they do not need to be real-time.
Materialized views pre-compute the result and store it as a table. Subsequent reads are as fast as reading a regular table:
CREATE MATERIALIZED VIEW mv_daily_sales AS
SELECT
date_trunc('day', o.created_at) AS sale_date,
p.category,
COUNT(DISTINCT o.id) AS order_count,
SUM(oi.quantity) AS units_sold,
SUM(oi.quantity * oi.unit_price) AS revenue,
AVG(oi.quantity * oi.unit_price) AS avg_order_value
FROM orders o
JOIN order_items oi ON oi.order_id = o.id
JOIN products p ON p.id = oi.product_id
WHERE o.status = 'completed'
GROUP BY date_trunc('day', o.created_at), p.category
ORDER BY sale_date DESC, revenue DESC;
-- Index the materialized view for fast lookups
CREATE UNIQUE INDEX idx_mv_daily_sales ON mv_daily_sales (sale_date, category);Reading from it is instant:
-- This reads from the pre-computed table, not the raw data
SELECT * FROM mv_daily_sales
WHERE sale_date >= CURRENT_DATE - INTERVAL '30 days'
ORDER BY sale_date DESC;

The catch is that materialized views go stale. You need to refresh them:
-- Full refresh (locks the view during refresh)
REFRESH MATERIALIZED VIEW mv_daily_sales;
-- Concurrent refresh (no lock, requires a unique index)
REFRESH MATERIALIZED VIEW CONCURRENTLY mv_daily_sales;

I typically refresh materialized views on a cron schedule using pg_cron:
-- Enable pg_cron in Supabase (available on Pro plan)
SELECT cron.schedule(
'refresh-daily-sales',
'0 */4 * * *', -- Every 4 hours
'REFRESH MATERIALIZED VIEW CONCURRENTLY mv_daily_sales'
);

For dashboards where data can be a few hours old, this is a massive performance win. One expensive query runs every 4 hours instead of on every page load.
Query Caching Patterns
PostgreSQL does not have a built-in query cache like MySQL. Every query is parsed, planned, and executed. But Supabase gives you tools to build effective caching layers.
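Before reaching for framework machinery, it helps to see how small the core idea is: a keyed store with per-entry expiry. This is a minimal sketch of the concept that unstable_cache implements at the framework level (with persistence and tag invalidation on top); the injectable clock is only there to make it testable.

```typescript
// Minimal TTL cache: a Map of values with expiry timestamps.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(
    private ttlMs: number,
    private now: () => number = Date.now // injectable clock for testing
  ) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (this.now() > entry.expiresAt) {
      this.store.delete(key); // lazy eviction on read
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expiresAt: this.now() + this.ttlMs });
  }

  invalidate(key: string): void {
    this.store.delete(key);
  }
}
```

Everything in this section, Next.js caching, prepared statements, RPC functions, is a variation on this trade: serve a slightly stale answer instead of recomputing on every request.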
Application-level caching with Next.js
Next.js 16 with the unstable_cache API (or the cache function from React) lets you cache database results at the edge:
import { unstable_cache } from 'next/cache';
import { createClient } from '@/lib/supabase/server';
const getPopularProducts = unstable_cache(
async () => {
const supabase = await createClient();
const { data } = await supabase
.from('products')
.select('id, name, price, image_url')
.order('total_sales', { ascending: false })
.limit(12);
return data;
},
['popular-products'],
{
revalidate: 3600, // Revalidate every hour
tags: ['products'],
}
);

When a product is updated, invalidate the cache:
import { revalidateTag } from 'next/cache';
async function updateProduct(id: string, data: ProductUpdate) {
const supabase = await createClient();
await supabase.from('products').update(data).eq('id', id);
revalidateTag('products');
}

Prepared statements
For queries that run frequently with different parameters, prepared statements skip the parsing and planning phases:
PREPARE get_user_orders (uuid) AS
SELECT o.id, o.total, o.status, o.created_at
FROM orders o
WHERE o.user_id = $1
ORDER BY o.created_at DESC
LIMIT 20;
EXECUTE get_user_orders('550e8400-e29b-41d4-a716-446655440000');

Supabase's PostgREST layer uses prepared statements automatically for repeated query patterns. When you use the Supabase client SDK consistently, you get this optimization for free.
RPC functions for complex queries
Wrapping complex queries in PostgreSQL functions gives you another layer of optimization. The function's query plan is cached after the first call:
CREATE OR REPLACE FUNCTION get_dashboard_stats(p_user_id uuid)
RETURNS json AS $$
SELECT json_build_object(
'total_orders', (SELECT COUNT(*) FROM orders WHERE user_id = p_user_id),
'total_spent', (SELECT COALESCE(SUM(total), 0) FROM orders WHERE user_id = p_user_id AND status = 'completed'),
'pending_orders', (SELECT COUNT(*) FROM orders WHERE user_id = p_user_id AND status = 'pending'),
'last_order_date', (SELECT MAX(created_at) FROM orders WHERE user_id = p_user_id)
);
$$ LANGUAGE sql STABLE;

One RPC call, one round trip, four aggregations computed server-side. Combine this with Next.js caching and your dashboard loads in milliseconds.
Real Performance Wins
Here are three real optimizations from projects I shipped. Numbers are from actual production databases.
Win 1: Product search on an e-commerce platform
Before: Full-text search on a products table with 45,000 rows using ILIKE '%search_term%'. Average response time: 340ms. The ILIKE with leading wildcard cannot use a B-tree index.
After: Added a tsvector column with a GIN index and switched to @@ full-text search. Average response time: 8ms.
Win 2: Dashboard analytics
Before: A dashboard that computed daily revenue, order counts, and top products on every page load. Five separate queries hitting the orders and order_items tables (1.2M rows combined). Page load: 3.8 seconds.
After: Created a materialized view refreshed every 2 hours via pg_cron. Added Next.js unstable_cache with a 30-minute revalidation on top. Page load: 120ms.
Win 3: User activity feed
Before: An activity feed that joined activities, users, and projects tables with offset pagination. Page 1 was fine (90ms). Page 20 was painful (2.1 seconds). Page 100 was unusable (8+ seconds).
After: Switched to cursor-based pagination with a composite index on (created_at DESC, id DESC). Consistent 15ms response time regardless of page depth. Added the Supabase foreign key join syntax to eliminate the N+1 pattern for user and project data.
My Optimization Checklist
I run through this checklist on every Supabase project before it goes to production. Print it. Pin it. Use it.
Schema and Indexes
- [ ] Every foreign key column has an index
- [ ] Every column used in RLS policies has an index
- [ ] Every column in WHERE clauses on tables with more than 10,000 rows has an appropriate index
- [ ] Composite indexes match the query column order (most selective column first)
- [ ] Partial indexes used where full indexes have low selectivity
- [ ] GIN indexes on JSONB columns that are queried with @>, ?, or ?|
- [ ] GIN index on tsvector columns for full-text search
- [ ] No duplicate indexes (check pg_stat_user_indexes for unused indexes)
Queries
- [ ] EXPLAIN ANALYZE run on every query that touches tables with more than 1,000 rows
- [ ] No sequential scans on large tables (unless intentional for batch processing)
- [ ] No N+1 patterns — use Supabase's foreign key joins or RPC functions
- [ ] All SELECT statements specify columns instead of SELECT *
- [ ] Aggregations that do not need real-time data use materialized views
Pagination
- [ ] Cursor-based pagination for all list endpoints
- [ ] Offset pagination only used where total count and page jumping are truly required
- [ ] Appropriate LIMIT on all queries — never fetch unbounded result sets
Connections
- [ ] Application uses the pooled connection string (port 6543)
- [ ] Migrations and admin scripts use the direct connection string (port 5432)
- [ ] Prisma configured with both url and directUrl if applicable
- [ ] No long-running queries on the pooled connection
Caching
- [ ] Frequently accessed, rarely changing data cached with unstable_cache or ISR
- [ ] Cache invalidation strategy defined for every cached query
- [ ] Materialized views scheduled for refresh via pg_cron on appropriate intervals
Monitoring
- [ ] Supabase Dashboard query performance monitored weekly
- [ ] pg_stat_statements enabled to identify slow queries
- [ ] Alerts set for query duration exceeding acceptable thresholds
Key Takeaways
- EXPLAIN ANALYZE before everything. Do not guess. Measure. A query plan tells you exactly where time is spent and what to fix.
- Index what you query. Match your indexes to your actual query patterns. Composite indexes for multi-column filters. Partial indexes for low-cardinality status columns. GIN for JSONB and full-text search.
- Kill N+1 queries. Use Supabase's foreign key join syntax. One query, one round trip. If the join is complex, write an RPC function.
- Use the connection pooler. Serverless environments will exhaust your direct connections during traffic spikes. The pooler exists for a reason.
- Cursor pagination over offset. Consistent performance at any depth. The only reason to use offset is if your UI genuinely needs "jump to page N" functionality.
- Materialize expensive aggregations. If the data does not need to be real-time, pre-compute it. Your users do not care if the dashboard numbers are 2 hours old if the page loads in 100ms instead of 4 seconds.
- Cache at the right layer. PostgreSQL does not cache query results. Use Next.js caching, materialized views, or application-level caching to avoid re-computing expensive queries on every request.
PostgreSQL is extraordinarily powerful. Supabase makes it accessible. But neither tool can save you from poorly written queries and missing indexes. The patterns in this guide are the difference between an app that works and an app that works at scale.
If you are building a Supabase project and want help optimizing your database layer, check out my services — performance audits are one of the most impactful things I do for clients.
*Written by Uvin Vindula — Web3 and AI engineer building production-grade applications from Sri Lanka and the UK. I write about the tools and patterns I use every day at iamuvin.com.*