
SEO & Content Strategy

Technical SEO for Next.js Developers: The Complete Guide

Uvin Vindula·February 10, 2025·12 min read

Last updated: April 14, 2026


TL;DR

Technical SEO in Next.js is not about installing a plugin and forgetting about it. It's about understanding what search engines and AI crawlers actually need from your site, then implementing it with the tools Next.js gives you natively — generateMetadata, sitemap.ts, robots.ts, JSON-LD structured data, and now llms.txt for Generative Engine Optimization. I built iamuvin.com with every technique in this guide. The site targets #1 rankings for Web3 developer searches, and the SEO layer is entirely code — no WordPress plugins, no third-party SEO tools. This article walks through each implementation with TypeScript examples you can drop into your own project. If you're a developer who thinks SEO is "someone else's job," this will change your mind.


Why Developers Need to Understand SEO

Most developers treat SEO as a marketing concern. Someone hands them a list of meta tags, they paste them in, and they move on. That approach leaves rankings on the table, because the biggest SEO levers in 2025 are technical — and they live in your codebase.

Technical SEO for Next.js is the intersection of web performance, crawlability, and structured data. Google's ranking algorithm weighs Core Web Vitals directly. AI search engines like Perplexity, ChatGPT search, and Google's AI Overviews are pulling answers from sites that provide clean, machine-readable content. If your site isn't optimized for both traditional crawlers and LLM-based engines, you're invisible to the fastest-growing search channels.

Here's what I learned building iamuvin.com: the technical SEO layer took me about two days to implement properly. That two-day investment drives the majority of my organic traffic. Every page has structured data. Every route generates its own metadata. The sitemap rebuilds on every deploy. AI crawlers get a dedicated llms.txt file that summarizes what the site offers.

None of this required an SEO specialist. It required a developer who understood what the crawlers were looking for.

The techniques in this guide are everything I implemented on iamuvin.com. They work because Next.js gives you the primitives — you just need to know how to wire them together.


Metadata in Next.js — generateMetadata

Next.js App Router gives you generateMetadata, a function that runs on the server and returns metadata for each route. It replaces the Pages Router's next/head approach and is significantly more powerful: it can be async, fetch data, and generate metadata dynamically per page.

Here's the pattern I use on iamuvin.com for static pages:

typescript
// app/about/page.tsx
import type { Metadata } from "next";

export const metadata: Metadata = {
  title: "About Uvin Vindula — Web3 & AI Engineer",
  description:
    "Full-stack engineer specializing in Web3, AI integration, and Next.js. Based in Sri Lanka and UK. Building production-grade decentralized applications.",
  openGraph: {
    title: "About Uvin Vindula — Web3 & AI Engineer",
    description:
      "Full-stack engineer specializing in Web3, AI integration, and Next.js.",
    url: "https://iamuvin.com/about",
    siteName: "IAMUVIN",
    locale: "en_US",
    type: "website",
  },
  twitter: {
    card: "summary_large_image",
    title: "About Uvin Vindula — Web3 & AI Engineer",
    description:
      "Full-stack engineer specializing in Web3, AI integration, and Next.js.",
    creator: "@iamuvin",
  },
  alternates: {
    canonical: "https://iamuvin.com/about",
  },
};

For dynamic routes — like blog articles — you need generateMetadata as a function:

typescript
// app/blog/[slug]/page.tsx
import type { Metadata } from "next";
import { getArticleBySlug } from "@/lib/articles";

interface PageProps {
  params: Promise<{ slug: string }>;
}

export async function generateMetadata({
  params,
}: PageProps): Promise<Metadata> {
  const { slug } = await params;
  const article = await getArticleBySlug(slug);

  if (!article) {
    return { title: "Article Not Found" };
  }

  return {
    title: article.title, // the root layout's title template appends "— IAMUVIN"
    description: article.excerpt,
    keywords: article.keywords,
    openGraph: {
      title: article.title,
      description: article.excerpt,
      url: `https://iamuvin.com/blog/${slug}`,
      siteName: "IAMUVIN",
      type: "article",
      publishedTime: article.publishedAt,
      modifiedTime: article.updatedAt,
      authors: ["Uvin Vindula"],
    },
    twitter: {
      card: "summary_large_image",
      title: article.title,
      description: article.excerpt,
      creator: "@iamuvin",
    },
    alternates: {
      canonical: `https://iamuvin.com/blog/${slug}`,
    },
  };
}

Three things matter here. First, always set a canonical URL. Without it, search engines may index query parameter variants of your pages as duplicates. Second, include openGraph and twitter metadata — social shares drive backlinks, and backlinks drive rankings. Third, publishedTime and modifiedTime in Open Graph tell Google your content is fresh. I update updatedAt in frontmatter whenever I revise an article, and it gets picked up automatically.

The layout.tsx at the root should set defaults that every page inherits:

typescript
// app/layout.tsx
import type { Metadata } from "next";

export const metadata: Metadata = {
  metadataBase: new URL("https://iamuvin.com"),
  title: {
    default: "IAMUVIN — Web3 & AI Engineer",
    template: "%s — IAMUVIN",
  },
  description:
    "Uvin Vindula — Full-stack engineer building production-grade Web3 and AI applications.",
  robots: {
    index: true,
    follow: true,
    googleBot: {
      index: true,
      follow: true,
      "max-video-preview": -1,
      "max-image-preview": "large",
      "max-snippet": -1,
    },
  },
};

Setting metadataBase is critical. Without it, relative URLs in Open Graph images and canonical tags resolve incorrectly. I've seen sites lose ranking because their OG image URLs pointed to localhost:3000 in production.
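To see why, here's a minimal sketch of the resolution behavior. The function name is mine — Next.js does this internally — but the mechanics are standard WHATWG URL joining against metadataBase:

```typescript
// Sketch (assumption: this mirrors how Next.js resolves relative
// metadata URLs against metadataBase via standard URL joining).
const metadataBase = new URL("https://iamuvin.com");

function resolveMetadataUrl(pathOrUrl: string): string {
  // Relative paths resolve against metadataBase; absolute URLs pass through.
  return new URL(pathOrUrl, metadataBase).toString();
}

console.log(resolveMetadataUrl("/og/about.png"));
// → https://iamuvin.com/og/about.png
console.log(resolveMetadataUrl("https://cdn.example.com/og.png"));
// → https://cdn.example.com/og.png
```

Without metadataBase, there is no base to join against, and a relative "/og/about.png" falls back to whatever host the build ran on — hence the localhost:3000 failure mode.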


JSON-LD Structured Data

JSON-LD is how you tell search engines exactly what your content is — not through inference, but through explicit declarations. Google uses structured data for rich results: article cards, FAQ dropdowns, breadcrumbs, star ratings. If you're not adding JSON-LD to your pages, you're leaving rich snippets on the table.

I add structured data to every page on iamuvin.com. Here's my reusable component:

typescript
// components/json-ld.tsx
interface JsonLdProps {
  data: Record<string, unknown>;
}

export function JsonLd({ data }: JsonLdProps) {
  return (
    <script
      type="application/ld+json"
      // Escape "<" so string values can't break out of the script tag.
      dangerouslySetInnerHTML={{
        __html: JSON.stringify(data).replace(/</g, "\\u003c"),
      }}
    />
  );
}

For blog articles, the Article schema:

typescript
// app/blog/[slug]/page.tsx
import { notFound } from "next/navigation";
import { JsonLd } from "@/components/json-ld";

export default async function ArticlePage({ params }: PageProps) {
  const { slug } = await params;
  const article = await getArticleBySlug(slug);

  if (!article) notFound(); // render the 404 page for unknown slugs

  const articleJsonLd = {
    "@context": "https://schema.org",
    "@type": "Article",
    headline: article.title,
    description: article.excerpt,
    author: {
      "@type": "Person",
      name: "Uvin Vindula",
      url: "https://iamuvin.com",
      jobTitle: "Web3 & AI Engineer",
    },
    publisher: {
      "@type": "Organization",
      name: "IAMUVIN",
      url: "https://iamuvin.com",
    },
    datePublished: article.publishedAt,
    dateModified: article.updatedAt,
    url: `https://iamuvin.com/blog/${slug}`,
    keywords: article.keywords.join(", "),
    inLanguage: "en-US",
  };

  return (
    <>
      <JsonLd data={articleJsonLd} />
      <article>{/* Article content */}</article>
    </>
  );
}

For the homepage, I use Person and WebSite schemas together:

typescript
const personJsonLd = {
  "@context": "https://schema.org",
  "@type": "Person",
  "@id": "https://iamuvin.com/#person", // referenced by the WebSite schema below
  name: "Uvin Vindula",
  alternateName: "IAMUVIN",
  url: "https://iamuvin.com",
  jobTitle: "Web3 & AI Engineer",
  knowsAbout: [
    "Web3 Development",
    "Smart Contracts",
    "AI Integration",
    "Next.js",
    "TypeScript",
  ],
  sameAs: [
    "https://github.com/iamuvin",
    "https://twitter.com/iamuvin",
    "https://linkedin.com/in/iamuvin",
  ],
};

const websiteJsonLd = {
  "@context": "https://schema.org",
  "@type": "WebSite",
  name: "IAMUVIN",
  url: "https://iamuvin.com",
  description:
    "Web3 and AI engineering by Uvin Vindula. Smart contracts, DeFi protocols, AI-powered applications.",
  author: { "@id": "https://iamuvin.com/#person" },
};

Validate your structured data with Google's Rich Results Test after every change. I've caught malformed dates and missing required fields that would have silently killed rich snippet eligibility.
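The date problem can also be caught earlier. Here's a hypothetical build-time guard — isValidIsoDate and assertArticleDates are my names, not part of any library — that rejects malformed schema.org dates before they ship:

```typescript
// Hypothetical guard: reject malformed schema.org dates at build time.
function isValidIsoDate(value: string): boolean {
  // schema.org expects ISO 8601 (e.g. "2025-02-10" or a full timestamp).
  // The regex rejects non-ISO layouts; Date catches impossible dates.
  return /^\d{4}-\d{2}-\d{2}/.test(value) && !Number.isNaN(new Date(value).getTime());
}

function assertArticleDates(jsonLd: {
  datePublished: string;
  dateModified: string;
}): void {
  for (const key of ["datePublished", "dateModified"] as const) {
    if (!isValidIsoDate(jsonLd[key])) {
      throw new Error(`Malformed ${key} in Article JSON-LD: "${jsonLd[key]}"`);
    }
  }
}
```

Run it over every article's JSON-LD object during the build, and a bad date fails the deploy instead of silently killing rich snippet eligibility.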


Dynamic Sitemaps

Next.js lets you generate sitemaps programmatically with sitemap.ts. This is better than static XML files because the sitemap rebuilds on every deployment, automatically includes new pages, and never goes stale.

Here's my implementation:

typescript
// app/sitemap.ts
import type { MetadataRoute } from "next";
import { getAllArticles } from "@/lib/articles";
import { getAllProjects } from "@/lib/projects";

export default async function sitemap(): Promise<MetadataRoute.Sitemap> {
  const baseUrl = "https://iamuvin.com";

  const articles = await getAllArticles();
  const projects = await getAllProjects();

  const articleEntries: MetadataRoute.Sitemap = articles.map((article) => ({
    url: `${baseUrl}/blog/${article.slug}`,
    lastModified: new Date(article.updatedAt || article.publishedAt),
    changeFrequency: "monthly",
    priority: 0.7,
  }));

  const projectEntries: MetadataRoute.Sitemap = projects.map((project) => ({
    url: `${baseUrl}/work/${project.slug}`,
    lastModified: new Date(project.updatedAt),
    changeFrequency: "monthly",
    priority: 0.8,
  }));

  const staticPages: MetadataRoute.Sitemap = [
    {
      url: baseUrl,
      lastModified: new Date(),
      changeFrequency: "weekly",
      priority: 1.0,
    },
    {
      url: `${baseUrl}/about`,
      lastModified: new Date(),
      changeFrequency: "monthly",
      priority: 0.9,
    },
    {
      url: `${baseUrl}/services`,
      lastModified: new Date(),
      changeFrequency: "monthly",
      priority: 0.9,
    },
    {
      url: `${baseUrl}/blog`,
      lastModified: new Date(),
      changeFrequency: "weekly",
      priority: 0.8,
    },
    {
      url: `${baseUrl}/work`,
      lastModified: new Date(),
      changeFrequency: "monthly",
      priority: 0.8,
    },
    {
      url: `${baseUrl}/contact`,
      lastModified: new Date(),
      changeFrequency: "yearly",
      priority: 0.5,
    },
  ];

  return [...staticPages, ...projectEntries, ...articleEntries];
}

The key decisions: the homepage gets priority: 1.0 and changeFrequency: "weekly". Service pages and the portfolio get high priority because they convert visitors. Blog articles get 0.7 — important for traffic but secondary to conversion pages. The contact page gets 0.5 because it rarely changes. One caveat: Google has said it largely ignores priority and changeFrequency, but other crawlers may still read them, and the values document your intent either way.

This generates a /sitemap.xml route automatically. Submit it to Google Search Console and Bing Webmaster Tools. I check the sitemap after every deploy by visiting https://iamuvin.com/sitemap.xml to make sure new content appears.
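That manual check can be scripted. A minimal sketch (the helper name is mine) that pulls every <loc> out of the sitemap XML, so a post-deploy script can fetch /sitemap.xml and assert a new URL is listed:

```typescript
// Sketch: extract <loc> entries from sitemap XML. A post-deploy script
// could fetch https://iamuvin.com/sitemap.xml and run this over the body.
function sitemapUrls(xml: string): string[] {
  // Sitemap <loc> values contain no nested markup, so a
  // non-greedy match is sufficient here.
  return [...xml.matchAll(/<loc>(.*?)<\/loc>/g)].map((m) => m[1]);
}

const sample = `<urlset>
  <url><loc>https://iamuvin.com</loc></url>
  <url><loc>https://iamuvin.com/blog/new-post</loc></url>
</urlset>`;

console.log(sitemapUrls(sample).includes("https://iamuvin.com/blog/new-post"));
// → true
```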


robots.txt

The robots.ts file in Next.js controls which pages search engines can crawl. Getting this wrong can make your entire site uncrawlable — and eventually drop it from the index — so precision matters.

typescript
// app/robots.ts
import type { MetadataRoute } from "next";

export default function robots(): MetadataRoute.Robots {
  return {
    rules: [
      {
        userAgent: "*",
        allow: "/",
        disallow: ["/api/", "/admin/", "/_next/"],
      },
      {
        userAgent: "GPTBot",
        allow: "/",
      },
      {
        userAgent: "Google-Extended",
        allow: "/",
      },
      {
        userAgent: "ClaudeBot",
        allow: "/",
      },
      {
        userAgent: "PerplexityBot",
        allow: "/",
      },
    ],
    sitemap: "https://iamuvin.com/sitemap.xml",
  };
}

I explicitly allow AI crawlers — GPTBot, Google-Extended, ClaudeBot, PerplexityBot. Many sites block these by default, which means they never appear in AI-generated search results. If you want your content cited in ChatGPT, Perplexity, or Google AI Overviews, you need to let their crawlers in.

The disallow list blocks API routes, admin pages, and Next.js internal routes. You don't want search engines indexing your API endpoints or wasting crawl budget on framework internals.


GEO — Generative Engine Optimization

Generative Engine Optimization is the practice of making your content discoverable by AI-powered search engines. Traditional SEO gets you into Google's index. GEO gets you cited in AI-generated answers.

This matters because search behavior is shifting. When someone asks Perplexity "how to implement structured data in Next.js," it doesn't return ten blue links — it synthesizes an answer from multiple sources and cites them. If your content is well-structured, authoritative, and machine-readable, you get cited. If it's not, someone else does.

Here's what I've found works for GEO after optimizing iamuvin.com:

Clear, factual statements. AI models pull direct answers. Sentences like "Next.js generateMetadata is an async function that returns a Metadata object" get cited more than vague introductions.

Code examples with context. AI search engines love code blocks with preceding explanation. They pull the explanation as the answer and link to your page for the full implementation.

Structured content with H2/H3 hierarchy. AI crawlers parse heading structure to understand topic segmentation. Flat, wall-of-text articles get passed over.

Author attribution and E-E-A-T signals. Google's AI Overviews prioritize content from identifiable experts. A named author with a track record (linked via JSON-LD Person schema) ranks higher than anonymous blog posts.

Direct answers early in each section. I front-load the key takeaway in the first sentence of every section. AI models that extract snippet-style answers grab this first sentence preferentially.

GEO isn't a separate discipline from SEO. It's an extension. If you're already writing well-structured, technically accurate content with proper metadata and structured data, you're 80% of the way there.


llms.txt for AI Crawlers

llms.txt is a proposed standard (inspired by robots.txt) that gives AI crawlers a structured summary of your site's content. It sits at the root of your domain and tells LLMs what your site is about, what it offers, and where to find key content.

Here's the llms.txt I serve on iamuvin.com:

text
# IAMUVIN — Uvin Vindula

> Web3 and AI engineer based in Sri Lanka and UK. Building production-grade
> decentralized applications, AI-powered products, and developer tools.

## Services

- [Web3 Development](https://iamuvin.com/services): Smart contracts, DeFi protocols, NFT platforms
- [AI Integration](https://iamuvin.com/services): Claude API, RAG systems, AI-powered features
- [Full-Stack Development](https://iamuvin.com/services): Next.js, TypeScript, Supabase

## Portfolio

- [EuroParts Lanka](https://iamuvin.com/work/europarts-lanka): AI-powered auto parts finder
- [Heavenly Events](https://iamuvin.com/work/heavenly-events): Event management platform

## Blog Topics

- Web3 development guides and tutorials
- AI integration patterns for production applications
- Next.js performance and SEO optimization
- Smart contract security and auditing

## Contact

- Email: contact@uvin.lk
- Website: https://iamuvin.com
- GitHub: https://github.com/iamuvin

To serve this in Next.js, create a route handler:

typescript
// app/llms.txt/route.ts
import { NextResponse } from "next/server";
import fs from "fs";
import path from "path";

export async function GET() {
  const filePath = path.join(process.cwd(), "public", "llms.txt");
  const content = fs.readFileSync(filePath, "utf-8");

  return new NextResponse(content, {
    headers: {
      "Content-Type": "text/plain; charset=utf-8",
      "Cache-Control": "public, max-age=86400, s-maxage=86400",
    },
  });
}

Alternatively, just place the file in your public/ directory and it serves automatically at /llms.txt. I use the route handler approach because I want to set specific cache headers and potentially generate the content dynamically from my CMS in the future.

The llms.txt standard is still early, but adoption is growing. Sites that provide this file give AI models a clean, structured overview instead of forcing them to crawl and infer. It's five minutes of work with meaningful upside.


Core Web Vitals Impact on Rankings

Google uses three Core Web Vitals as ranking signals: LCP (Largest Contentful Paint), INP (Interaction to Next Paint), and CLS (Cumulative Layout Shift). Next.js gives you the tools to hit every threshold, but you have to use them correctly.

LCP < 2.5s — The largest visible element (usually a hero image or heading) must paint in under 2.5 seconds. In Next.js, this means using next/image with priority on above-the-fold images, preloading fonts, and avoiding client-side data fetching for hero content.

typescript
import Image from "next/image";

// Hero image with priority loading
<Image
  src="/hero.webp"
  alt="IAMUVIN — Web3 and AI Engineering"
  width={1200}
  height={630}
  priority
  sizes="100vw"
/>

INP < 200ms — Every interaction must produce a visual response within 200ms. Heavy client-side JavaScript is the usual culprit. Use Server Components by default — they send zero JavaScript to the client. Only add "use client" when you need interactivity, and keep those components small.

CLS < 0.1 — No layout shifts. In practice, this means setting explicit width and height on all images, reserving space for dynamic content with CSS, and never injecting content above the fold after initial paint.

typescript
// Font loading without CLS
// app/layout.tsx
import { Inter, Plus_Jakarta_Sans } from "next/font/google";

const inter = Inter({
  subsets: ["latin"],
  display: "swap",
  variable: "--font-inter",
});

const jakarta = Plus_Jakarta_Sans({
  subsets: ["latin"],
  display: "swap",
  variable: "--font-jakarta",
});

Using next/font with display: "swap" prevents invisible text during font loading and eliminates font-related CLS. The fonts are self-hosted automatically, removing external network requests.

I monitor Core Web Vitals on iamuvin.com through Vercel Analytics and Google Search Console. Both give field data from real users, which is what Google actually uses for ranking — not lab scores from Lighthouse.


Internal Linking Strategy

Internal links distribute page authority across your site and help search engines discover content. Most developer blogs ignore internal linking entirely, which means their deep pages never accumulate enough authority to rank.

My approach on iamuvin.com:

Link to service pages from blog content. Every technical article includes at least one contextual link to /services. This sends ranking authority from high-traffic blog posts to the pages that actually convert visitors into clients.

Cross-link related articles. At the bottom of every article, I link to 2-3 related posts. This keeps crawlers exploring and increases time-on-site, which is a soft ranking signal.

Use descriptive anchor text. "Click here" tells search engines nothing. "Learn more about Web3 development services" tells Google exactly what the linked page is about.

Breadcrumbs with structured data. Breadcrumbs improve navigation and generate rich results in Google. Here's the JSON-LD:

typescript
const breadcrumbJsonLd = {
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  itemListElement: [
    {
      "@type": "ListItem",
      position: 1,
      name: "Home",
      item: "https://iamuvin.com",
    },
    {
      "@type": "ListItem",
      position: 2,
      name: "Blog",
      item: "https://iamuvin.com/blog",
    },
    {
      "@type": "ListItem",
      position: 3,
      name: article.title,
      item: `https://iamuvin.com/blog/${article.slug}`,
    },
  ],
};

Internal linking isn't glamorous, but it compounds. Every new article I publish strengthens the existing ones through cross-links, and the service pages benefit from every blog post that references them.
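The cross-linking step can be automated. Here's a hypothetical helper (not from the site's codebase) that scores related articles by keyword overlap — the same frontmatter keywords the metadata layer already uses:

```typescript
// Hypothetical helper: rank related articles by shared keywords.
interface ArticleMeta {
  slug: string;
  keywords: string[];
}

function relatedArticles(
  current: ArticleMeta,
  all: ArticleMeta[],
  limit = 3,
): ArticleMeta[] {
  return all
    .filter((a) => a.slug !== current.slug)
    .map((a) => ({
      article: a,
      // Score = number of keywords shared with the current article.
      score: a.keywords.filter((k) => current.keywords.includes(k)).length,
    }))
    .filter(({ score }) => score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)
    .map(({ article }) => article);
}
```

Render the top results as a "Related articles" block at the end of each post, with article titles as the anchor text — descriptive anchors for free.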


Measuring Results

SEO without measurement is guesswork. Here are the tools I use and what I track:

Google Search Console — The primary source of truth. I check it weekly for: total impressions, average position for target keywords, click-through rates, and indexing errors. If a page isn't indexed, something is wrong with the technical implementation. Fix it before writing more content.

Vercel Analytics — Real-user Core Web Vitals data. I track LCP, INP, and CLS at the route level. If a specific page regresses, I investigate immediately. Performance issues compound — a slow page today becomes a ranking drop next month.

Rich Results status — After deploying structured data changes, I validate with Google's Rich Results Test and monitor the "Enhancements" tab in Search Console for errors.

AI citation tracking — I search for my target keywords in Perplexity and ChatGPT search weekly to check if iamuvin.com gets cited. This is manual for now, but it tells me whether the GEO optimization is working.

The metrics that matter most:

  • Indexed pages — Every page you want ranked should be indexed. Period.
  • Average position for target keywords — Track the 10-15 keywords you care about. Movement here validates your strategy.
  • Click-through rate — Low CTR on high-impression pages means your title and description need work.
  • Core Web Vitals pass rate — Aim for 100% of URLs passing all three metrics.

I review these monthly and adjust. SEO is iteration, not a one-time setup.


Key Takeaways

  1. `generateMetadata` is your primary SEO tool in Next.js. Use it on every route. Set canonical URLs, Open Graph, and Twitter cards. Don't forget metadataBase in your root layout.
  2. JSON-LD structured data belongs on every page. Article for blog posts, Person for about pages, WebSite for your homepage, BreadcrumbList for navigation. Validate with Google's Rich Results Test.
  3. Dynamic sitemaps via `sitemap.ts` never go stale. They rebuild on every deploy and automatically include new content. Submit to Search Console.
  4. Allow AI crawlers in `robots.ts`. If you block GPTBot, ClaudeBot, and PerplexityBot, you won't appear in AI-generated search results. That's an increasingly large share of search traffic.
  5. `llms.txt` is five minutes of work with real upside. Give AI models a structured summary of your site instead of making them infer it.
  6. Core Web Vitals are ranking signals. Use next/image with priority, next/font with swap, and Server Components by default. Monitor with real-user data, not just Lighthouse.
  7. Internal linking compounds. Link blog posts to your service pages, cross-link related content, and use descriptive anchor text.
  8. Measure or it didn't happen. Google Search Console weekly. Vercel Analytics for vitals. Rich Results Test after every structured data change.

Technical SEO isn't a separate discipline from web development. It's a core part of building a production-grade Next.js application. Every technique in this article is running on iamuvin.com right now. The code examples are real. The results are measurable. Start with generateMetadata and JSON-LD, then layer in the rest.


*Written by Uvin Vindula — Web3 and AI engineer based in Sri Lanka and UK. I build production-grade decentralized applications, AI-powered products, and the technical infrastructure behind them. Everything on iamuvin.com — including the SEO layer described in this article — is built and maintained by me. If you need a developer who understands both the code and the search engines that index it, get in touch.*

Uvin Vindula

Web3 and AI engineer based in Sri Lanka and the UK. Author of The Rise of Bitcoin. Director of Blockchain and Software Solutions at Terra Labz. Founder of uvin.lk — Sri Lanka's Bitcoin education platform with 10,000+ learners.