Deploying Next.js to Vercel: My Production Deployment Guide
Last updated: April 14, 2026
TL;DR
Vercel deployment for Next.js is straightforward until you're running multiple production sites with real traffic, real costs, and real uptime requirements. I deploy everything to Vercel under my uvindev account — EuroParts Lanka, uvin.lk, FreshMart, and iamuvin.com. This guide covers what I actually do in production: how I structure environment variables across preview and production, how I use edge functions to keep response times under 50ms globally, how I configure CI/CD with GitHub Actions before Vercel even touches the build, and how I keep the bill under control when running multiple projects on Pro. If you're past the "click deploy" stage and need production patterns, this is the guide.
Why Vercel for Next.js
I've deployed Next.js apps to AWS Amplify, Cloudflare Pages, Railway, and self-managed EC2 instances. I keep coming back to Vercel for one reason: it's built by the same team that builds Next.js. Every App Router feature — Server Components, streaming, ISR, Partial Prerendering — works on Vercel without configuration. On other platforms, I've spent days debugging SSR edge cases that Vercel handles out of the box.
Here's what I get from Vercel that I can't easily replicate elsewhere:
Zero-config Next.js support. Server Actions, ISR, middleware, image optimization — all work without Docker files, custom build scripts, or serverless.yml configs. I push code. It works.
Global edge network. Vercel's edge network spans 30+ regions. For EuroParts Lanka, customers in Colombo hit the Singapore edge. UK customers hit London. I didn't configure this — it's automatic.
Preview deployments. Every pull request gets a unique URL with its own environment. My clients review features on real URLs before they touch production. This alone saved me from at least a dozen "it worked on localhost" incidents.
Built-in analytics. Vercel Analytics gives me Core Web Vitals from real users, not synthetic tests. When I see LCP spike on a specific page, I know it's real because it's measured from actual visitor sessions.
The trade-off is vendor lock-in. If I use Vercel-specific features like @vercel/og or Edge Config, migrating to another platform takes work. I'm fine with that trade-off because the developer experience and reliability are worth it. If you're building production web applications, Vercel is where I'd start.
Project Setup
Every project I deploy to Vercel starts with a vercel.json at the root. You don't strictly need one — Vercel infers most settings from next.config.ts — but I want explicit control over headers, redirects, and region configuration.
Here's the vercel.json I use as a baseline for all my projects:
{
  "$schema": "https://openapi.vercel.sh/vercel.json",
  "framework": "nextjs",
  "regions": ["lhr1", "sin1", "iad1"],
  "headers": [
    {
      "source": "/(.*)",
      "headers": [
        { "key": "X-Frame-Options", "value": "DENY" },
        { "key": "X-Content-Type-Options", "value": "nosniff" },
        { "key": "Referrer-Policy", "value": "strict-origin-when-cross-origin" },
        { "key": "Permissions-Policy", "value": "camera=(), microphone=(), geolocation=()" }
      ]
    },
    {
      "source": "/api/(.*)",
      "headers": [
        { "key": "Cache-Control", "value": "no-store, max-age=0" }
      ]
    }
  ],
  "redirects": [
    { "source": "/blog/:slug", "destination": "/articles/:slug", "permanent": true }
  ]
}

The regions array is critical. I select lhr1 (London) for UK traffic, sin1 (Singapore) for Sri Lanka and Southeast Asia, and iad1 (Washington D.C.) for US East. Serverless functions execute in these regions, which keeps latency low for my actual user base rather than defaulting to iad1 only.
For project linking, I use the Vercel CLI:
npm i -g vercel
vercel link

This creates a .vercel directory with your project and org IDs. Add .vercel to .gitignore — it contains account-specific identifiers you don't want in version control.
My next.config.ts for production always includes these performance settings:
import type { NextConfig } from "next";

const config: NextConfig = {
  images: {
    formats: ["image/avif", "image/webp"],
    remotePatterns: [
      { protocol: "https", hostname: "**.supabase.co" },
      { protocol: "https", hostname: "images.unsplash.com" },
    ],
  },
  experimental: {
    ppr: true,
    optimizePackageImports: [
      "lucide-react",
      "@radix-ui/react-icons",
      "framer-motion",
    ],
  },
  logging: {
    fetches: { fullUrl: true },
  },
};

export default config;

The optimizePackageImports setting is huge for bundle size. Without it, importing a single icon from lucide-react pulls in the entire library during development. Vercel's build process tree-shakes this, but the explicit config ensures consistent behaviour across local dev and production builds.
Environment Variables Management
Environment variables are where most Vercel deployments go wrong. I've seen production databases get wiped because someone used the development Supabase URL in production. Vercel's environment variable system has three scopes — Production, Preview, and Development — and you need to use all three deliberately.
Here's how I structure environment variables for a typical project:
# Production only — real services, real data
NEXT_PUBLIC_SUPABASE_URL=https://abc123.supabase.co
SUPABASE_SERVICE_ROLE_KEY=eyJhbG...
STRIPE_SECRET_KEY=sk_live_...
NEXT_PUBLIC_SITE_URL=https://europarts.lk
# Preview only — staging services, seed data
NEXT_PUBLIC_SUPABASE_URL=https://def456.supabase.co
SUPABASE_SERVICE_ROLE_KEY=eyJhbG...
STRIPE_SECRET_KEY=sk_test_...
NEXT_PUBLIC_SITE_URL=https://*.vercel.app
# Development — pulled with `vercel env pull`
# Creates .env.local automatically

I set these through the Vercel dashboard under Project Settings > Environment Variables. For sensitive keys, I enable the "Sensitive" toggle, which encrypts the value and prevents it from being read after creation.
The vercel env pull command is something I run at the start of every project:
vercel env pull .env.local

This pulls your Development-scoped variables into .env.local. No more manually copying environment variables between team members. Everyone runs vercel env pull and gets the same development configuration.
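One optional convenience I'd suggest (this is my own habit, not something Vercel requires) is wiring the pull into npm's predev lifecycle hook, so the env file refreshes automatically before every npm run dev:

```json
{
  "scripts": {
    "predev": "vercel env pull .env.local",
    "dev": "next dev"
  }
}
```

npm runs predev before dev without any extra tooling, so stale local environment files stop being a class of bug.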
For projects with multiple environments, I use Vercel's environment variable groups with a naming convention:
NEXT_PUBLIC_* → Available in browser (public)
*_SECRET_KEY → Server-only (never prefixed with NEXT_PUBLIC_)
*_SERVICE_ROLE → Server-only, admin-level access

One pattern I've adopted for all my projects is a runtime environment check:
// lib/env.ts
import { z } from "zod";

const envSchema = z.object({
  NEXT_PUBLIC_SUPABASE_URL: z.string().url(),
  SUPABASE_SERVICE_ROLE_KEY: z.string().min(1),
  STRIPE_SECRET_KEY: z.string().startsWith("sk_"),
  NEXT_PUBLIC_SITE_URL: z.string().url(),
});

export const env = envSchema.parse({
  NEXT_PUBLIC_SUPABASE_URL: process.env.NEXT_PUBLIC_SUPABASE_URL,
  SUPABASE_SERVICE_ROLE_KEY: process.env.SUPABASE_SERVICE_ROLE_KEY,
  STRIPE_SECRET_KEY: process.env.STRIPE_SECRET_KEY,
  NEXT_PUBLIC_SITE_URL: process.env.NEXT_PUBLIC_SITE_URL,
});

This crashes the build if any variable is missing or malformed. I'd rather fail at build time than discover a missing API key from a 500 error in production.
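As a companion to the schema, a small check can catch server secrets that were accidentally given the NEXT_PUBLIC_ prefix (and therefore shipped to the browser). This is my own sketch; the secretHints list is illustrative, not exhaustive:

```typescript
// lib/env-leak-check.ts — sketch; extend secretHints for your own key names
function findLeakedSecrets(
  env: Record<string, string | undefined>
): string[] {
  const secretHints = ["SECRET", "SERVICE_ROLE", "PRIVATE_KEY", "TOKEN"];
  return Object.keys(env).filter(
    (key) =>
      key.startsWith("NEXT_PUBLIC_") &&
      secretHints.some((hint) => key.includes(hint))
  );
}

// Example: call early in lib/env.ts and fail loudly
// const leaked = findLeakedSecrets(process.env);
// if (leaked.length > 0) throw new Error(`Leaked: ${leaked.join(", ")}`);
```

Paired with the Zod schema, this turns "secret in the client bundle" from a security incident into a failed build.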
Preview Deployments
Preview deployments are the single most underrated feature of Vercel. Every time I push a branch or open a pull request, Vercel builds and deploys a unique URL like my-project-abc123-uvindev.vercel.app. The preview uses Preview-scoped environment variables, which means it hits staging databases and test Stripe keys.
I use preview deployments for three things:
Client review. When I'm building a project for a client, I share the preview URL instead of setting up screen recordings. They click the link, see the real site with real data, and leave feedback. This cuts review cycles in half.
Integration testing. My GitHub Actions CI runs Playwright tests against the preview URL. Vercel provides a VERCEL_URL environment variable in the build, and I expose the deployment URL to my test pipeline:
# .github/workflows/e2e.yml
name: E2E Tests
on:
  deployment_status:
jobs:
  test:
    if: github.event.deployment_status.state == 'success'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
      - run: npm ci
      - run: npx playwright install --with-deps chromium
      - run: npx playwright test
        env:
          BASE_URL: ${{ github.event.deployment_status.target_url }}

This workflow triggers after Vercel finishes the preview deployment, then runs E2E tests against the live preview URL. If tests fail, the PR gets a red check.
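For completeness, the BASE_URL the workflow exports still has to be wired into Playwright. A minimal playwright.config.ts sketch (the localhost fallback is my assumption for local runs):

```typescript
// playwright.config.ts
import { defineConfig } from "@playwright/test";

export default defineConfig({
  use: {
    // Preview URL injected by CI, falling back to a local dev server
    baseURL: process.env.BASE_URL ?? "http://localhost:3000",
  },
});
```

With baseURL set, tests can use relative paths like page.goto("/dashboard") and run unchanged against previews or localhost.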
Branch-specific previews for features. For larger projects like EuroParts Lanka, I create feature branches like feat/ai-part-finder and the preview URL becomes a staging environment for that specific feature. Product managers bookmark it and test over days, not minutes.
One caveat: preview deployments use the same Vercel project limits as production. If you're on the Hobby plan, you get 100 deployments per day. On Pro, it's 6,000. I've never hit the Pro limit, even with aggressive PR workflows.
Custom Domains
Every production project needs a custom domain. Vercel makes this simple, but there are patterns that matter for SEO and reliability.
For each project, I configure three things:
Primary domain: europarts.lk
www redirect: www.europarts.lk → europarts.lk (308 redirect)
Preview domain: staging.europarts.lk → latest preview branch

The www redirect is non-negotiable. Search engines treat www.europarts.lk and europarts.lk as different sites. Pick one canonical domain and redirect the other.
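Vercel handles the www redirect automatically once both domains are assigned to the project and one is marked as the redirect target, but it can also be made explicit in vercel.json with a host condition. A sketch using my domain as the example:

```json
{
  "redirects": [
    {
      "source": "/:path*",
      "has": [{ "type": "host", "value": "www.europarts.lk" }],
      "destination": "https://europarts.lk/:path*",
      "permanent": true
    }
  ]
}
```

The permanent flag produces the 308 status mentioned above, which preserves the request method across the redirect.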
For DNS, I use Cloudflare as the DNS provider with Vercel as the hosting platform. The setup is:
Type    Name  Value                  Proxy
A       @     76.76.21.21            DNS only (grey cloud)
CNAME   www   cname.vercel-dns.com   DNS only (grey cloud)

Important: disable Cloudflare's proxy (orange cloud) for Vercel domains. Running traffic through both Cloudflare's CDN and Vercel's edge network causes SSL certificate conflicts and doubles your latency. Use Cloudflare for DNS resolution only. Vercel handles SSL, CDN, and edge caching.
For my portfolio site uvin.lk, I also set up a staging subdomain that points to the develop branch:
# In Vercel dashboard: Settings > Domains
# Add staging.uvin.lk and assign it to the "develop" git branch

This gives me a permanent staging URL that automatically updates when I push to develop. No manual deployment triggers needed.
Edge Functions
Edge functions run on Vercel's edge network — the same 30+ regions that serve your static assets. They execute in under 50ms because the code runs geographically close to the user. I use them for three production patterns.
Geolocation-based routing. For EuroParts Lanka, I detect the user's country from the edge and serve localised pricing:
// middleware.ts
import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";
// Note: `request.geo` was removed from NextRequest in Next.js 15; on
// Vercel, the geolocation helper from @vercel/functions replaces it.
import { geolocation } from "@vercel/functions";

export function middleware(request: NextRequest) {
  const country = geolocation(request).country ?? "LK";
  const response = NextResponse.next();
  response.headers.set("x-user-country", country);
  if (country === "LK") {
    response.cookies.set("currency", "LKR", { path: "/" });
  } else if (country === "GB") {
    response.cookies.set("currency", "GBP", { path: "/" });
  } else {
    response.cookies.set("currency", "USD", { path: "/" });
  }
  return response;
}

export const config = {
  matcher: ["/((?!_next/static|_next/image|favicon.ico).*)"],
};

This runs on the edge before any page renders. The user sees prices in their local currency on the first paint — no flash of wrong currency, no client-side detection.
Authentication checks at the edge. Instead of checking auth in every Server Component, I validate JWT tokens in middleware:
// middleware.ts (auth section)
import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";
import { createServerClient } from "@supabase/ssr";

export async function middleware(request: NextRequest) {
  const response = NextResponse.next({
    request: { headers: request.headers },
  });
  const supabase = createServerClient(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
    {
      cookies: {
        getAll: () => request.cookies.getAll(),
        setAll: (cookies) => {
          cookies.forEach(({ name, value, options }) => {
            response.cookies.set(name, value, options);
          });
        },
      },
    }
  );
  const { data: { user } } = await supabase.auth.getUser();
  if (!user && request.nextUrl.pathname.startsWith("/dashboard")) {
    return NextResponse.redirect(new URL("/login", request.url));
  }
  return response;
}

A/B testing without client-side flicker. Edge middleware can assign users to experiment groups before the page renders:
export function middleware(request: NextRequest) {
  const bucket = request.cookies.get("ab-bucket")?.value;
  if (!bucket) {
    const newBucket = Math.random() > 0.5 ? "control" : "variant";
    const response = NextResponse.next();
    response.cookies.set("ab-bucket", newBucket, {
      path: "/",
      maxAge: 60 * 60 * 24 * 30,
    });
    return response;
  }
  return NextResponse.next();
}

Server Components then read the cookie and render the appropriate variant. No layout shift, no hydration mismatch, no flash of wrong content.
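The assignment logic above is easier to unit test if the random source is injected. A small pure-function sketch of the same 50/50 split (my own refactor, not part of the middleware API):

```typescript
// Pure version of the bucket assignment; the RNG is injected so tests
// can pin the outcome. Math.random is the production default.
type Bucket = "control" | "variant";

function assignBucket(
  existing: string | undefined,
  rng: () => number = Math.random
): Bucket {
  // A valid existing cookie always wins, so users stay in their group
  if (existing === "control" || existing === "variant") return existing;
  return rng() > 0.5 ? "control" : "variant";
}
```

The middleware then only deals with cookies; the experiment logic itself is covered by fast unit tests.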
Edge functions have a 128KB size limit after compression and a 25ms CPU time limit on Hobby (no limit on Pro). Keep them lean — no heavy npm packages, no database connections. They're for routing logic, auth checks, and header manipulation.
Vercel Analytics and Speed Insights
Vercel offers two monitoring tools: Analytics (traffic and Web Vitals) and Speed Insights (detailed performance metrics). I enable both on every production project.
npm install @vercel/analytics @vercel/speed-insights

// app/layout.tsx
import { Analytics } from "@vercel/analytics/react";
import { SpeedInsights } from "@vercel/speed-insights/next";

export default function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  return (
    <html lang="en">
      <body>
        {children}
        <Analytics />
        <SpeedInsights />
      </body>
    </html>
  );
}

What I actually monitor weekly:
LCP (Largest Contentful Paint). Target: under 2.5s. On iamuvin.com, my LCP was 3.8s because the hero image wasn't preloaded. Adding priority to the next/image component and using AVIF format brought it to 1.9s.
INP (Interaction to Next Paint). Target: under 200ms. FreshMart had INP spikes on the product filter page because I was re-rendering 200+ product cards on every filter change. Switching to useDeferredValue and virtualising the list fixed it.
CLS (Cumulative Layout Shift). Target: under 0.1. The most common CLS offender in my projects is web fonts. I use next/font with display: swap and explicit size-adjust to eliminate layout shifts from font loading.
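For reference, the next/font setup I'm describing looks roughly like this. Inter is a stand-in for whichever font the project actually uses; next/font computes the size-adjust fallback metrics automatically:

```typescript
// app/layout.tsx (font configuration sketch)
import { Inter } from "next/font/google";

const inter = Inter({
  subsets: ["latin"],
  display: "swap", // text renders in the adjusted fallback until Inter loads
});

// then apply it on the root element:
// <html lang="en" className={inter.className}>
```

Because the font is self-hosted at build time and the fallback is metric-matched, there's no external request to Google and no layout shift when the webfont swaps in.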
Vercel Analytics on Pro gives you 30 days of retention with 25,000 data points per month. For higher traffic sites, I supplement with a self-hosted Plausible instance. But for most of my projects, Vercel Analytics covers the metrics that matter.
The real value is the Audiences tab — it shows Web Vitals broken down by page, device, and country. When I see that mobile users in Sri Lanka have worse LCP than desktop users in the UK, I can investigate whether it's a network issue, an image size issue, or a server region issue.
CI/CD with GitHub
Vercel's Git integration triggers a deployment on every push. But I don't trust automated deployments without gates. My CI pipeline validates code quality before Vercel builds.
Here's the GitHub Actions workflow I use across all my projects:
# .github/workflows/ci.yml
name: CI
on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]
jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: "npm"
      - run: npm ci
      - run: npm run lint
      - run: npm run typecheck
      - run: npm run test -- --run
      - run: npm run build
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm audit --audit-level=high

The key insight: this runs *before* Vercel deploys. If linting, type checking, or tests fail, the PR gets a red check and I don't waste Vercel build minutes on broken code.
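The workflow assumes these package.json scripts exist — typecheck in particular isn't created by create-next-app. Here's the set I'd pair with it (a sketch; swap vitest for your own test runner):

```json
{
  "scripts": {
    "lint": "next lint",
    "typecheck": "tsc --noEmit",
    "test": "vitest",
    "build": "next build"
  }
}
```

The `-- --run` in the workflow passes Vitest's run flag through npm, so tests execute once instead of entering watch mode on CI.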
I also configure Vercel to only deploy when the CI passes. In the Vercel dashboard under Settings > Git:
- Ignored Build Step: I use the default behaviour for branch-based filtering
- Production Branch: main only
- Preview Branches: all branches except main
For monorepos (I don't use them often, but FreshMart started as one), Vercel's "Ignored Build Step" command is essential:
#!/bin/bash
# vercel-ignore-build.sh
if [ "$VERCEL_GIT_COMMIT_REF" = "main" ]; then
  echo "Building production..."
  exit 1  # non-zero exit tells Vercel to proceed with the build
fi
# Otherwise build only if apps/web/ changed: `git diff --quiet` exits 0
# (skip the build) when nothing changed, 1 (build) when something did.
git diff --quiet HEAD^ HEAD -- ./apps/web/

This only builds the web app when files in apps/web/ changed. No more rebuilding the marketing site because someone edited a backend config.
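To wire the script up, vercel.json accepts an ignoreCommand (the path is wherever you committed the script):

```json
{
  "ignoreCommand": "bash vercel-ignore-build.sh"
}
```

Vercel runs this command before every build; a zero exit skips the deployment entirely, which is what saves the build minutes.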
Performance Monitoring
Beyond Vercel Analytics, I set up three monitoring layers for production sites:
Vercel Logs. Runtime logs from serverless functions are available in the Vercel dashboard under Deployments > Functions. I filter by status code to catch 500 errors early:
// lib/logger.ts
export function logError(context: string, error: unknown) {
  console.error(JSON.stringify({
    context,
    error: error instanceof Error ? error.message : String(error),
    stack: error instanceof Error ? error.stack : undefined,
    timestamp: new Date().toISOString(),
  }));
}

Structured JSON logs make filtering in Vercel's log viewer much easier than plain text.
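A thin wrapper makes it easy to apply this logging consistently across route handlers. This is my own sketch, not a Vercel API, and it inlines a minimal logError so the example is self-contained:

```typescript
// Sketch: wrap any async operation so failures are logged as structured
// JSON before being re-thrown to the caller
function logError(context: string, error: unknown) {
  console.error(JSON.stringify({
    context,
    error: error instanceof Error ? error.message : String(error),
    timestamp: new Date().toISOString(),
  }));
}

async function withLogging<T>(
  context: string,
  fn: () => Promise<T>
): Promise<T> {
  try {
    return await fn();
  } catch (err) {
    logError(context, err);
    throw err; // still surface the error so the route returns a 500
  }
}

// Usage inside a handler (fetchProducts is a hypothetical helper):
// const products = await withLogging("GET /api/products", fetchProducts);
```

Every failure then reaches Vercel's log viewer with the same shape, so filtering by context works across the whole codebase.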
Cron jobs for health checks. Vercel supports cron functions in vercel.json:
{
  "crons": [
    {
      "path": "/api/health",
      "schedule": "*/5 * * * *"
    }
  ]
}

// app/api/health/route.ts
import { NextResponse } from "next/server";

export async function GET() {
  const checks = {
    database: await checkDatabase(),
    storage: await checkStorage(),
    timestamp: new Date().toISOString(),
  };
  const healthy = Object.values(checks).every(
    (v) => typeof v === "string" || v === true
  );
  return NextResponse.json(checks, { status: healthy ? 200 : 503 });
}

async function checkDatabase(): Promise<boolean> {
  try {
    const response = await fetch(
      `${process.env.NEXT_PUBLIC_SUPABASE_URL}/rest/v1/`,
      {
        headers: {
          apikey: process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
        },
      }
    );
    return response.ok;
  } catch {
    return false;
  }
}

async function checkStorage(): Promise<boolean> {
  try {
    const response = await fetch(
      `${process.env.NEXT_PUBLIC_SUPABASE_URL}/storage/v1/bucket`,
      {
        headers: {
          apikey: process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
        },
      }
    );
    return response.ok;
  } catch {
    return false;
  }
}

This runs every 5 minutes and validates that the database and storage are reachable. Note that cron jobs are plan-gated: the Hobby plan allows two cron jobs limited to once-a-day schedules, so the five-minute schedule above needs Pro, which allows up to 40. I pair this with an external uptime monitor (UptimeRobot, free tier) that pings the health endpoint and alerts me on Slack if it returns 503.
Build time tracking. I track build times across deployments. If a build that took 45 seconds suddenly takes 3 minutes, something changed — usually an unoptimised image import or a heavy dependency addition. Vercel shows build duration in the deployment list, and I review it weekly.
Cost Management
Vercel's pricing catches people off guard. The Pro plan is $20/month per team member, which sounds reasonable until you see the usage-based charges.
Here's my actual cost breakdown running four production sites on Vercel Pro:
Base plan (1 member): $20/month
Bandwidth (avg 150GB/month): Included in Pro (1TB limit)
Serverless function executions: Included (1M/month on Pro)
Edge function executions: Included (3M/month on Pro)
Image optimizations: ~$5/month (5,000 source images)
Analytics: Included on Pro
Speed Insights: Included on Pro
─────────────────────────────────────────────
Total: ~$25/month for 4 production sites

The biggest cost trap is image optimization. Vercel charges $5 per 1,000 source images beyond the included allowance. For EuroParts Lanka with its product catalogue, I optimise images at upload time using Sharp, so Vercel's image optimizer handles only layout-responsive resizing, not format conversion:
// lib/image-upload.ts
import sharp from "sharp";

export async function optimiseForUpload(buffer: Buffer) {
  return sharp(buffer)
    .resize(1920, 1080, { fit: "inside", withoutEnlargement: true })
    .avif({ quality: 80 })
    .toBuffer();
}

Other cost control strategies I use:
ISR over SSR. Pages that don't change per-request use Incremental Static Regeneration. My product listing pages revalidate every 60 seconds instead of server-rendering on every request. This cuts function executions by 90%.
// app/products/page.tsx
export const revalidate = 60;

Edge functions over serverless functions. Edge functions are 3x cheaper per invocation than serverless functions on Vercel. I use edge runtime for anything that doesn't need Node.js APIs:
// app/api/track/route.ts
export const runtime = "edge";

export async function POST(request: Request) {
  const body = await request.json();
  // lightweight analytics event tracking
  await fetch("https://analytics.example.com/event", {
    method: "POST",
    body: JSON.stringify(body),
  });
  return new Response("ok", { status: 200 });
}

Aggressive caching. Static assets get year-long cache headers automatically. For API responses that can tolerate staleness, I use stale-while-revalidate:
export async function GET() {
  const data = await fetchProducts();
  return NextResponse.json(data, {
    headers: {
      "Cache-Control": "s-maxage=60, stale-while-revalidate=300",
    },
  });
}

My Production Checklist
Before every production deployment, I run through this checklist. I've burned myself on every single one of these items at least once.
Pre-deployment:
[ ] All environment variables set for Production scope
[ ] Zod env validation passes in build
[ ] Security headers configured in vercel.json
[ ] Custom domain DNS propagated and SSL active
[ ] robots.txt allows search engine crawling
[ ] sitemap.xml generates correctly
[ ] OG images render for all pages
[ ] 404 and 500 error pages are styled (not default Next.js)
Performance:
[ ] LCP < 2.5s on mobile (test with PageSpeed Insights)
[ ] CLS < 0.1 (no layout shifts from fonts or images)
[ ] Bundle size checked — no unnecessary client-side JS
[ ] Images use next/image with explicit width/height
[ ] Fonts loaded via next/font with display swap
Security:
[ ] No secrets in NEXT_PUBLIC_ variables
[ ] API routes validate input with Zod
[ ] Rate limiting on auth endpoints
[ ] CORS configured for API routes
[ ] CSP header set for script sources
Monitoring:
[ ] Vercel Analytics component mounted
[ ] Speed Insights component mounted
[ ] Health check cron configured
[ ] Error logging outputs structured JSON
[ ] External uptime monitor configured
Post-deployment:
[ ] Smoke test critical user flows on production URL
[ ] Verify environment variables loaded (check a protected page)
[ ] Confirm preview deployments still work on develop branch
[ ] Check Vercel Functions tab for cold start times

I keep this as a GitHub issue template and create a new issue for every major release. The team checks off items as they verify each one.
Key Takeaways
Vercel is the best platform for Next.js production deployments. The zero-config support for every Next.js feature means I spend time building features, not debugging deployment configurations.
Environment variables need discipline. Use three scopes (Production, Preview, Development), validate with Zod at build time, and never put secrets in NEXT_PUBLIC_ variables.
Edge functions are your first choice for lightweight logic. Geo-routing, auth checks, and A/B testing belong on the edge. Save serverless functions for heavy compute and database operations.
Preview deployments change your workflow. Every PR gets a live URL. Use it for client reviews, E2E testing, and feature staging. It's the most underused feature on the platform.
Cost management is about architecture. ISR over SSR, edge over serverless, pre-optimised images over on-demand optimization. These architectural decisions determine your Vercel bill more than your traffic volume does.
Monitor what matters. LCP, INP, and CLS from real users via Vercel Analytics. Structured logs for debugging. Health check crons for uptime. External monitoring for alerting.
I run four production sites on Vercel for about $25/month total. The platform handles global CDN, SSL, preview deployments, and serverless compute. That's a fraction of what equivalent AWS infrastructure would cost — and I don't have to manage any of it. If you're deploying Next.js to production, Vercel is the obvious choice.
*Uvin Vindula is a Web3 and AI engineer based in Sri Lanka and the UK, building production applications with Next.js, Supabase, and Vercel. He deploys all his projects — including EuroParts Lanka, uvin.lk, and iamuvin.com — under the Vercel account uvindev. Follow his work at uvin.lk or explore his services.*