Forget the theory — here's what I actually did to get LCP under 2.5s, CLS to zero, and INP under 200ms on a real Next.js production site. Specific techniques, not vague advice.
I spent the better part of two weeks making this site fast. Not "looks fast in a Lighthouse audit on my M3 MacBook" fast. Actually fast. Fast on a $150 Android phone on a shaky 4G connection in a subway tunnel. Fast where it matters.
The result: LCP under 1.8s, CLS at 0.00, INP under 120ms. All three green in CrUX data, not just lab scores. And I learned something in the process — most performance advice on the internet is either outdated, vague, or both.
"Optimize your images" is not advice. "Use lazy loading" without context is dangerous. "Minimize JavaScript" is obvious but tells you nothing about what to cut.
Here's what I actually did, in the order that mattered.
Let me be direct: Google uses Core Web Vitals as a ranking signal. Not the only signal, and not even the most important one. Content relevance, backlinks, and domain authority still dominate. But at the margins — where two pages have comparable content and authority — performance is a tiebreaker. And on the internet, millions of pages live at those margins.
But forget SEO for a second. The real reason to care about performance is users. The data hasn't changed much in the last five years:
Core Web Vitals in 2026 consist of three metrics:
| Metric | What It Measures | Good | Needs Improvement | Poor |
|---|---|---|---|---|
| LCP | Loading performance | ≤ 2.5s | 2.5s – 4.0s | > 4.0s |
| CLS | Visual stability | ≤ 0.1 | 0.1 – 0.25 | > 0.25 |
| INP | Responsiveness | ≤ 200ms | 200ms – 500ms | > 500ms |
These thresholds haven't changed since INP replaced FID in March 2024. But the techniques to hit them have evolved, especially in the React/Next.js ecosystem.
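If you instrument your own monitoring, the table collapses to a few comparisons. A hypothetical helper to illustrate (the web-vitals library reports an equivalent `rating` field for you, so you'd rarely write this by hand):

```typescript
type Rating = "good" | "needs-improvement" | "poor";

// Mirrors the thresholds table above.
// LCP and INP are in milliseconds; CLS is unitless.
function rateMetric(name: "LCP" | "CLS" | "INP", value: number): Rating {
  const thresholds = {
    LCP: [2500, 4000],
    CLS: [0.1, 0.25],
    INP: [200, 500],
  } as const;
  const [good, poor] = thresholds[name];
  if (value <= good) return "good";
  if (value <= poor) return "needs-improvement";
  return "poor";
}
```

Note the boundaries are inclusive on the "good" side: an LCP of exactly 2.5s still rates as good.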
Largest Contentful Paint measures when the largest visible element in the viewport finishes rendering. For most pages, this is a hero image, a heading, or a large block of text.
Before optimizing anything, you need to know what your LCP element is. People assume it's their hero image. Sometimes it's a web font rendering the <h1>. Sometimes it's a background image applied via CSS. Sometimes it's a <video> poster frame.
Open Chrome DevTools, go to the Performance panel, record a page load, and look for the "LCP" marker. It tells you exactly which element triggered LCP.
You can also use the web-vitals library to log it programmatically:
import { onLCP } from "web-vitals";
onLCP((metric) => {
console.log("LCP element:", metric.entries[0]?.element);
console.log("LCP value:", metric.value, "ms");
});

On this site, the LCP element turned out to be the hero image on the homepage and the first paragraph of text on blog posts. Two different elements, two different optimization strategies.
If your LCP element is an image, the single most impactful thing you can do is preload it. By default, the browser discovers images when it parses the HTML, which means the image request doesn't start until after the HTML is downloaded, parsed, and the <img> tag is reached. Preloading moves that discovery to the very beginning.
In Next.js, you can add a preload link in your layout or page:
import Head from "next/head";
export default function HeroSection() {
return (
<>
<Head>
<link
rel="preload"
as="image"
href="/images/hero-optimized.webp"
type="image/webp"
fetchPriority="high"
/>
</Head>
<section className="relative h-[600px]">
<img
src="/images/hero-optimized.webp"
alt="Hero banner"
width={1200}
height={600}
fetchPriority="high"
decoding="sync"
/>
</section>
</>
);
}

Notice fetchPriority="high". This is the newer Fetch Priority API, and it's a game changer. Without it, the browser uses its own heuristics to prioritize resources — and those heuristics often get it wrong, especially when you have multiple images above the fold.
On this site, adding fetchPriority="high" to the LCP image dropped LCP by ~400ms. That's the single biggest win I've ever gotten from a one-line change.
CSS blocks rendering. All of it. If you have a 200KB stylesheet loaded via <link rel="stylesheet">, the browser won't paint anything until it's fully downloaded and parsed.
The fix is threefold:
Inline critical CSS — Extract the CSS needed for above-the-fold content and inline it in a <style> tag in the <head>. Next.js does this automatically when you use CSS Modules or Tailwind with proper purging.
Defer non-critical CSS — If you have stylesheets for below-the-fold content (a footer animation library, a chart component), load them asynchronously:
<link
rel="preload"
href="/styles/charts.css"
as="style"
onload="this.onload=null;this.rel='stylesheet'"
/>
<noscript>
<link rel="stylesheet" href="/styles/charts.css" />
</noscript>

Remove unused CSS — Audit what your third-party stylesheets actually contribute and drop what you don't use. On this site that meant deleting 180KB of component-library CSS.

LCP can't be fast if TTFB is slow. If your server takes 800ms to respond, your LCP will be at least 800ms + everything else.
On this site (Node.js + PM2 + Nginx on a VPS), I measured TTFB at around 180ms on a cold hit. Here's what I did to keep it there:
Set Cache-Control: public, s-maxage=3600, stale-while-revalidate=86400 on static pages, and compress everything at the Nginx layer:

# nginx.conf snippet
gzip on;
gzip_types text/plain text/css application/json application/javascript text/xml;
gzip_min_length 1000;
gzip_comp_level 6;
# Brotli (if ngx_brotli module is installed)
brotli on;
brotli_types text/plain text/css application/json application/javascript text/xml;
brotli_comp_level 6;

My before/after on LCP:
Cumulative Layout Shift measures how much visible content moves around during page load. A CLS of 0 means nothing shifted. A CLS above 0.1 means something is visually annoying your users.
CLS is the metric most developers underestimate. You don't notice it on your fast development machine with everything cached. Your users notice it on their phones, on slow connections, where fonts load late and images pop in one by one.
1. Images without explicit dimensions
This is the most common CLS cause. When an image loads, it pushes content below it down. The fix is embarrassingly simple: always specify width and height on <img> tags.
// BAD — causes layout shift
<img src="/photo.jpg" alt="Team photo" />
// GOOD — browser reserves space before image loads
<img src="/photo.jpg" alt="Team photo" width={800} height={450} />

If you're using Next.js <Image>, it handles this automatically as long as you provide dimensions or use fill with a sized parent container.
But here's the gotcha: if you use fill mode, the parent container must have explicit dimensions or the image will cause a CLS:
// BAD — parent has no dimensions
<div className="relative">
<Image src="/photo.jpg" alt="Team" fill />
</div>
// GOOD — parent has explicit aspect ratio
<div className="relative aspect-video w-full">
<Image src="/photo.jpg" alt="Team" fill sizes="100vw" />
</div>

2. Web fonts causing FOUT/FOIT
When a custom font loads, text rendered in the fallback font gets re-rendered in the custom font. If the two fonts have different metrics (they almost always do), everything shifts.
The modern fix is font-display: swap combined with size-adjusted fallback fonts:
// Using next/font — the best approach for Next.js
import { Inter } from "next/font/google";
const inter = Inter({
subsets: ["latin"],
display: "swap",
// next/font automatically generates size-adjusted fallback fonts
// This eliminates CLS from font swapping
});
export default function RootLayout({
children,
}: {
children: React.ReactNode;
}) {
return (
<html lang="en" className={inter.className}>
<body>{children}</body>
</html>
);
}

next/font is genuinely one of the best things in Next.js. It downloads fonts at build time, self-hosts them (no Google Fonts request at runtime), and generates size-adjusted fallback fonts so the swap from fallback to custom font causes zero layout shift. I measured CLS from fonts at 0.00 after switching to next/font. Before, with a standard Google Fonts <link>, it was 0.04-0.08.
3. Dynamic content injection
Ads, cookie banners, notification bars — anything that gets injected into the DOM after initial render causes CLS if it pushes content down.
The fix: reserve space for dynamic content before it loads.
// Cookie banner — reserve space at the bottom
function CookieBanner() {
const [accepted, setAccepted] = useState(false);
if (accepted) return null;
return (
// Fixed positioning doesn't cause CLS because it
// doesn't affect document flow
<div className="fixed bottom-0 left-0 right-0 z-50 bg-gray-900 p-4">
<p>We use cookies. You know the drill.</p>
<button onClick={() => setAccepted(true)}>Accept</button>
</div>
);
}

Using position: fixed or position: absolute for dynamic elements is a CLS-free approach because these elements don't affect the normal document flow.
4. The aspect-ratio CSS trick
For responsive containers where you know the aspect ratio but not the exact dimensions, use the CSS aspect-ratio property:
// Video embed without CLS
function VideoEmbed({ src }: { src: string }) {
return (
<div className="w-full aspect-video bg-gray-900 rounded-lg overflow-hidden">
<iframe
src={src}
className="w-full h-full"
title="Embedded video"
allow="accelerometer; autoplay; clipboard-write; encrypted-media"
allowFullScreen
/>
</div>
);
}

The aspect-video utility (which is aspect-ratio: 16/9) reserves exactly the right amount of space. No shift when the iframe loads.
5. Skeleton screens
For content that loads asynchronously (API data, dynamic components), show a skeleton that matches the expected dimensions:
function PostCardSkeleton() {
return (
<div className="animate-pulse rounded-lg border p-4">
<div className="h-48 w-full rounded bg-gray-200" />
<div className="mt-4 space-y-2">
<div className="h-6 w-3/4 rounded bg-gray-200" />
<div className="h-4 w-full rounded bg-gray-200" />
<div className="h-4 w-5/6 rounded bg-gray-200" />
</div>
</div>
);
}
function PostList() {
const { data: posts, isLoading } = usePosts();
if (isLoading) {
return (
<div className="grid grid-cols-1 gap-6 md:grid-cols-2 lg:grid-cols-3">
{Array.from({ length: 6 }).map((_, i) => (
<PostCardSkeleton key={i} />
))}
</div>
);
}
return (
<div className="grid grid-cols-1 gap-6 md:grid-cols-2 lg:grid-cols-3">
{posts?.map((post) => (
<PostCard key={post.id} post={post} />
))}
</div>
);
}

The key is that PostCardSkeleton and PostCard should have the same dimensions. If the skeleton is 200px tall and the actual card is 280px tall, you still get a shift.
My CLS results:
Interaction to Next Paint replaced First Input Delay in March 2024, and it's a fundamentally harder metric to optimize. FID only measured the delay before the first interaction was processed. INP observes every interaction throughout the page lifecycle and reports roughly the worst one; the 75th percentile you see in field data is then taken across page loads.
This means a page can have great FID but terrible INP if, say, clicking a dropdown menu 30 seconds after load triggers a 500ms reflow.
A classic INP killer is layout thrashing: reading layout properties (like offsetHeight) then writing them (like changing style.height) in a loop forces the browser to recalculate layout synchronously on every iteration. Batch your reads first, then your writes.

The most impactful technique for INP is breaking up long tasks so the browser can process user interactions between chunks:
async function processLargeDataset(items: DataItem[]) {
const results: ProcessedItem[] = [];
for (let i = 0; i < items.length; i++) {
results.push(expensiveTransform(items[i]));
// Every 10 items, yield to the browser
// This lets pending user interactions get processed
if (i % 10 === 0 && "scheduler" in globalThis) {
await scheduler.yield();
}
}
return results;
}

scheduler.yield() is available in Chrome 129+ (September 2024) and is the recommended way to yield to the main thread. For browsers that don't support it, you can fall back to a setTimeout(0) wrapper:
function yieldToMain(): Promise<void> {
if ("scheduler" in globalThis && "yield" in scheduler) {
return scheduler.yield();
}
return new Promise((resolve) => setTimeout(resolve, 0));
}

React 18+ gives us useTransition, which tells React that certain state updates are not urgent and can be interrupted by more important work (like responding to user input):
import { useState, useTransition } from "react";
function SearchableList({ items }: { items: Item[] }) {
const [query, setQuery] = useState("");
const [filteredItems, setFilteredItems] = useState(items);
const [isPending, startTransition] = useTransition();
function handleSearch(e: React.ChangeEvent<HTMLInputElement>) {
const value = e.target.value;
// This update is urgent — the input must reflect the keystroke immediately
setQuery(value);
// This update is NOT urgent — filtering 10,000 items can wait
startTransition(() => {
const filtered = items.filter((item) =>
item.name.toLowerCase().includes(value.toLowerCase())
);
setFilteredItems(filtered);
});
}
return (
<div>
<input
type="text"
value={query}
onChange={handleSearch}
placeholder="Search..."
className="w-full rounded border px-4 py-2"
/>
{isPending && (
<p className="mt-2 text-sm text-gray-500">Filtering...</p>
)}
<ul className="mt-4 space-y-2">
{filteredItems.map((item) => (
<li key={item.id}>{item.name}</li>
))}
</ul>
</div>
);
}

Without startTransition, typing in the search input would feel sluggish because React would try to filter 10,000 items synchronously before updating the DOM. With startTransition, the input updates immediately, and the filtering happens in the background.
I measured INP on a tool page that had a complex input handler. Before useTransition: 380ms INP. After: 90ms INP. That's a 76% improvement from a React API change.
For handlers that trigger expensive operations (API calls, heavy computation), debounce them:
import { useCallback, useRef, useState } from "react";
function useDebounce<T extends (...args: unknown[]) => void>(
fn: T,
delay: number
): T {
const timeoutRef = useRef<ReturnType<typeof setTimeout> | null>(null);
return useCallback(
((...args: unknown[]) => {
if (timeoutRef.current) {
clearTimeout(timeoutRef.current);
}
timeoutRef.current = setTimeout(() => fn(...args), delay);
}) as T,
[fn, delay]
);
}
// Usage
function LiveSearch() {
const [results, setResults] = useState<SearchResult[]>([]);
const search = useDebounce(async (query: string) => {
const response = await fetch(`/api/search?q=${encodeURIComponent(query)}`);
const data = await response.json();
setResults(data.results);
}, 300);
return (
<input
type="text"
onChange={(e) => search(e.target.value)}
placeholder="Search..."
/>
);
);

300ms is my go-to debounce value. It's short enough that users don't notice the delay, long enough to prevent firing on every keystroke.
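Stripped of React, the underlying pattern is only a few lines. A framework-free sketch (names are mine) that makes the collapsing behavior visible:

```typescript
// Framework-free debounce — the same pattern as the hook, minus React.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  delay: number
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    // Each call cancels the previous pending one;
    // only the last call within `delay` ms actually fires.
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delay);
  };
}
```

Three rapid calls inside the delay window collapse into a single invocation carrying the last arguments — which is exactly why it cuts handler work on fast typists.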
If you have genuinely heavy computation (parsing large JSON, image manipulation, complex calculations), move it off the main thread entirely:
// worker.ts
self.addEventListener("message", (event) => {
const { data, operation } = event.data;
switch (operation) {
case "sort": {
// This could take 500ms for large datasets
const sorted = data.sort((a: number, b: number) => a - b);
self.postMessage({ result: sorted });
break;
}
case "filter": {
const filtered = data.filter((item: DataItem) =>
complexFilterLogic(item)
);
self.postMessage({ result: filtered });
break;
}
}
});

// useWorker.ts
import { useEffect, useRef, useCallback } from "react";
function useWorker() {
const workerRef = useRef<Worker | null>(null);
useEffect(() => {
workerRef.current = new Worker(
new URL("../workers/worker.ts", import.meta.url)
);
return () => workerRef.current?.terminate();
}, []);
const process = useCallback(
(operation: string, data: unknown): Promise<unknown> => {
return new Promise((resolve) => {
if (!workerRef.current) return;
workerRef.current.onmessage = (event) => {
resolve(event.data.result);
};
workerRef.current.postMessage({ operation, data });
});
},
[]
);
return { process };
}

Web Workers operate on a separate thread, so even a 2-second computation won't affect INP at all. The main thread stays free to handle user interactions.
My INP results:
If you're on Next.js (13+ with App Router), you have access to some powerful performance primitives that most developers don't fully exploit.
next/image is great, but the default configuration leaves performance on the table:
// next.config.ts
import type { NextConfig } from "next";
const nextConfig: NextConfig = {
images: {
formats: ["image/avif", "image/webp"],
deviceSizes: [640, 750, 828, 1080, 1200],
imageSizes: [16, 32, 48, 64, 96, 128, 256],
minimumCacheTTL: 60 * 60 * 24 * 365, // 1 year
},
};
export default nextConfig;

Key settings:
- formats: ["image/avif", "image/webp"] — AVIF is 20-50% smaller than WebP. The order matters: Next.js tries AVIF first, falls back to WebP, then to the original format.
- minimumCacheTTL — Default is 60 seconds. For a blog, images don't change. Cache them for a year.
- deviceSizes and imageSizes — The defaults include 3840px. Unless you're serving 4K images, trim this list. Each size generates a separate cached image, and unused sizes waste disk space and build time.

And always use the sizes prop to tell the browser what size the image will be rendered at:
// Full-width hero image
<Image
src="/hero.jpg"
alt="Hero"
width={1200}
height={600}
sizes="100vw"
priority // LCP image — don't lazy load it!
/>
// Card image in a responsive grid
<Image
src="/card.jpg"
alt="Card"
width={400}
height={300}
sizes="(max-width: 768px) 100vw, (max-width: 1200px) 50vw, 33vw"
/>

Without sizes, the browser might download a 1200px image for a 300px slot. That's wasted bytes and wasted time.
The priority prop on the LCP image is critical. It disables lazy loading and adds fetchPriority="high" automatically. If your LCP element is a next/image, just add priority and you're most of the way there.
I covered this in the CLS section, but it deserves emphasis. next/font is the only font loading solution I've seen that consistently achieves zero CLS:
import { Inter, JetBrains_Mono } from "next/font/google";
const inter = Inter({
subsets: ["latin"],
display: "swap",
variable: "--font-inter",
});
const jetbrainsMono = JetBrains_Mono({
subsets: ["latin"],
display: "swap",
variable: "--font-mono",
});
export default function RootLayout({
children,
}: {
children: React.ReactNode;
}) {
return (
<html
lang="en"
className={`${inter.variable} ${jetbrainsMono.variable}`}
>
<body className="font-sans">{children}</body>
</html>
);
}

Two fonts, zero CLS, zero external requests at runtime. The fonts are downloaded at build time and served from your own domain.
This is where Next.js gets really interesting for performance. With the App Router, you can stream parts of the page to the browser as they become ready:
import { Suspense } from "react";
import { PostList } from "@/components/blog/PostList";
import { Sidebar } from "@/components/blog/Sidebar";
import { PostListSkeleton } from "@/components/blog/PostListSkeleton";
import { SidebarSkeleton } from "@/components/blog/SidebarSkeleton";
export default function BlogPage() {
return (
<div className="grid grid-cols-1 gap-8 lg:grid-cols-3">
<div className="lg:col-span-2">
{/* This loads fast — stream it immediately */}
<h1 className="text-4xl font-bold">Blog</h1>
{/* This requires a database query — stream it when ready */}
<Suspense fallback={<PostListSkeleton />}>
<PostList />
</Suspense>
</div>
<aside>
{/* Sidebar can load independently */}
<Suspense fallback={<SidebarSkeleton />}>
<Sidebar />
</Suspense>
</aside>
</div>
);
}

The browser receives the shell (heading, navigation, layout) immediately. The post list and sidebar stream in as their data becomes available. The user sees a fast initial load, and content fills in progressively.
This is particularly powerful for LCP. If your LCP element is the heading (not the post list), it renders immediately regardless of how long the database query takes.
Next.js lets you configure caching and revalidation at the route segment level:
// app/blog/page.tsx
// Revalidate this page every hour
export const revalidate = 3600;
// app/tools/[slug]/page.tsx
// These tool pages are fully static — generate at build time
export const dynamic = "force-static";
// app/api/search/route.ts
// API route — never cache
export const dynamic = "force-dynamic";

On this site, blog posts use revalidate = 3600 (1 hour). Tool pages use force-static because their content never changes between deployments. The search API uses force-dynamic because every request is unique.
The result: most pages serve from the static cache, TTFB is under 50ms for cached pages, and the server barely breaks a sweat.
Your perception of performance is unreliable. Your development machine has 32GB of RAM, an NVMe SSD, and a gigabit connection. Your users don't.
1. Chrome DevTools Performance Panel
The most detailed tool available. Record a page load, look at the flamechart, identify long tasks, find render-blocking resources. This is where I spend most of my debugging time.
Key things to look for:
2. Lighthouse
Good for a quick check, but don't optimize for Lighthouse scores. Lighthouse runs in a simulated throttled environment that doesn't perfectly match real-world conditions. I've seen pages score 98 in Lighthouse and have 4s LCP in the field.
Use Lighthouse for directional guidance, not as a scoreboard.
3. PageSpeed Insights
The most important tool for production sites because it shows real CrUX data — actual measurements from real Chrome users over the last 28 days. Lab data tells you what could happen. CrUX data tells you what does happen.
4. The web-vitals Library
Add this to your production site to collect real user metrics:
// components/analytics/WebVitals.tsx
"use client";
import { useEffect } from "react";
import { onCLS, onINP, onLCP, onFCP, onTTFB } from "web-vitals";
import type { Metric } from "web-vitals";
function sendToAnalytics(metric: Metric) {
// Send to your analytics endpoint
const body = JSON.stringify({
name: metric.name,
value: metric.value,
rating: metric.rating, // "good" | "needs-improvement" | "poor"
delta: metric.delta,
id: metric.id,
navigationType: metric.navigationType,
});
// Use sendBeacon so it doesn't block page unload
if (navigator.sendBeacon) {
navigator.sendBeacon("/api/vitals", body);
} else {
fetch("/api/vitals", {
body,
method: "POST",
keepalive: true,
});
}
}
export function WebVitals() {
useEffect(() => {
onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);
onFCP(sendToAnalytics);
onTTFB(sendToAnalytics);
}, []);
return null;
}

This gives you your own CrUX-like data, but with more detail. You can segment by page, device type, connection speed, geographic region — whatever you need.
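The receiving end can be tiny. A sketch of the /api/vitals route handler (App Router conventions; persistence is stubbed with a log here — an assumption, wire it to whatever storage you actually use):

```typescript
// app/api/vitals/route.ts — minimal sketch; persistence is left as a stub
export async function POST(request: Request): Promise<Response> {
  const metric = await request.json();

  // sendBeacon posts an opaque string body — validate the fields we rely on
  if (typeof metric.name !== "string" || typeof metric.value !== "number") {
    return new Response(null, { status: 400 });
  }

  // Assumption: log to stdout; swap for your database or analytics pipeline
  console.log(`[vitals] ${metric.name}=${metric.value} rating=${metric.rating}`);

  return new Response(null, { status: 204 });
}
```

A 204 keeps the response empty, which is fine: sendBeacon never reads the response anyway.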
5. Chrome User Experience Report (CrUX)
The CrUX BigQuery dataset is free and contains 28-day rolling data for millions of origins. If your site gets enough traffic, you can query your own data:
SELECT
origin,
p75_lcp,
p75_cls,
p75_inp,
form_factor
FROM
`chrome-ux-report.materialized.metrics_summary`
WHERE
origin = 'https://yoursite.com'
AND yyyymm = 202603

Third-party scripts are the number one performance killer on most websites. Here's what I found and what I did about it.
GTM itself is ~80KB. But GTM loads other scripts — analytics, marketing pixels, A/B testing tools. I've seen GTM configurations that load 15 additional scripts totaling 2MB.
My approach: Don't use GTM in production. Load analytics scripts directly, defer everything, and load scripts that can wait only when they're actually needed:
// Instead of GTM loading everything
// Load only what you need, when you need it
export function AnalyticsScript() {
return (
<script
defer
src="https://analytics.example.com/script.js"
data-website-id="your-id"
/>
);
}

If you absolutely must use GTM, load it after the page is interactive:
"use client";
import { useEffect } from "react";
export function DeferredGTM({ containerId }: { containerId: string }) {
useEffect(() => {
// Wait until after page load to inject GTM
const timer = setTimeout(() => {
const script = document.createElement("script");
script.src = `https://www.googletagmanager.com/gtm.js?id=${containerId}`;
script.async = true;
document.head.appendChild(script);
}, 3000); // 3 second delay
return () => clearTimeout(timer);
}, [containerId]);
return null;
}

Yes, you'll lose data from users who bounce in the first 3 seconds. In my experience, that's a trade-off worth making. Those users weren't converting anyway.
Live chat widgets (Intercom, Drift, Crisp) are some of the worst offenders. Intercom alone loads 400KB+ of JavaScript. On a page where 2% of users actually click the chat button, that's 400KB of JavaScript for 98% of users.
My solution: Load the widget on interaction.
"use client";
import { useState } from "react";
export function ChatButton() {
const [loaded, setLoaded] = useState(false);
function loadChat() {
if (loaded) return;
// Load the chat widget script only when the user clicks
const script = document.createElement("script");
script.src = "https://chat-widget.example.com/widget.js";
script.onload = () => {
// Initialize the widget after script loads
window.ChatWidget?.open();
};
document.head.appendChild(script);
setLoaded(true);
}
return (
<button
onClick={loadChat}
className="fixed bottom-4 right-4 rounded-full bg-blue-600 p-4 text-white shadow-lg"
aria-label="Open chat"
>
{loaded ? "Loading..." : "Chat with us"}
</button>
);
}

Run Coverage in Chrome DevTools (Ctrl+Shift+P > "Show Coverage"). It shows you exactly how much of each script is actually used on the current page.
On a typical Next.js site, I usually find:
- Component library overhead: you import one Button from a UI library, but the entire library gets bundled. Solution: use tree-shakeable libraries or import from subpaths (import Button from "lib/Button" instead of import { Button } from "lib").
- Polyfills for features every modern browser supports: Promise, fetch, or Array.prototype.includes. In 2026, you don't need them.

I use the Next.js bundle analyzer to find oversized chunks:
// next.config.ts
import withBundleAnalyzer from "@next/bundle-analyzer";
const nextConfig = {
// your config
};
export default process.env.ANALYZE === "true"
? withBundleAnalyzer({ enabled: true })(nextConfig)
: nextConfig;

ANALYZE=true npm run build

This opens a visual treemap of your bundles. I found a 120KB date formatting library that I replaced with native Intl.DateTimeFormat. I found a 90KB markdown parser imported on a page that didn't use markdown. Small wins that add up.
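That Intl swap is worth showing because it's so small. A sketch (the locale and options here are illustrative, not the site's exact format):

```typescript
// Native date formatting — zero bundle cost, built into every modern runtime
const formatDate = new Intl.DateTimeFormat("en-US", {
  year: "numeric",
  month: "long",
  day: "numeric",
});

const label = formatDate.format(new Date(2026, 0, 15)); // "January 15, 2026"
```

Construct the formatter once at module scope: Intl.DateTimeFormat construction is relatively expensive, while formatting with an existing instance is cheap.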
I mentioned this in the LCP section, but it's worth repeating because it's so common. Every <link rel="stylesheet"> in the <head> blocks rendering. If you have five stylesheets, the browser waits for all five before painting anything.
Next.js with Tailwind handles this well — CSS is inlined and minimal. But if you're importing third-party CSS, audit it:
// BAD — loads entire library CSS on every page
import "some-library/dist/styles.css";
// BETTER — dynamic import so it only loads on pages that need it
import dynamic from "next/dynamic";

const SomeComponent = dynamic(
() => import("some-library").then((mod) => {
// CSS is imported inside the dynamic component
import("some-library/dist/styles.css");
return mod.SomeComponent;
}),
{ ssr: false }
);

Let me walk through the actual optimization of this site's tools page. It's a page with 15+ interactive tools, each with its own component, and some of them (like the regex tester and JSON formatter) are JavaScript-heavy.
Initial measurements (CrUX data, mobile, 75th percentile):
Lighthouse score: 62.
LCP analysis: The LCP element was the page heading (<h1>), which should render instantly. But it was delayed by:
CLS analysis: Three sources:
INP analysis: The regex tester tool was the worst offender. Every keystroke in the regex input triggered:
Total time per keystroke: 280-400ms.
Week 1: LCP and CLS
Replaced Google Fonts CDN with next/font. Font is now self-hosted, loaded at build time, with size-adjusted fallback. CLS from fonts: 0.06 → 0.00
Removed the component library CSS. Rewrote the 3 components I was using from it with Tailwind. Total CSS removed: 180KB. Render-blocking CSS: eliminated
Added revalidate = 3600 to the tools page and tool detail pages. First hit is server-rendered, subsequent hits serve from cache. TTFB: 420ms → 45ms (cached)
Added explicit dimensions to all tool card components and used aspect-ratio for responsive layouts. CLS from cards: 0.04 → 0.00
Moved cookie banner to position: fixed at the bottom of the screen. CLS from banner: 0.02 → 0.00
Week 2: INP
Rewrote the regex tester so the match computation runs inside startTransition:

function RegexTester() {
const [pattern, setPattern] = useState("");
const [testString, setTestString] = useState("");
const [results, setResults] = useState<RegexResult[]>([]);
const [isPending, startTransition] = useTransition();
function handlePatternChange(value: string) {
setPattern(value); // Urgent: update the input
startTransition(() => {
// Non-urgent: compute matches
try {
const regex = new RegExp(value, "g");
const matches = [...testString.matchAll(regex)];
setResults(
matches.map((m) => ({
match: m[0],
index: m.index ?? 0,
groups: m.groups,
}))
);
} catch {
setResults([]);
}
});
}
return (
<div>
<input
value={pattern}
onChange={(e) => handlePatternChange(e.target.value)}
className={isPending ? "opacity-70" : ""}
/>
{/* results rendering */}
</div>
);
}

INP on regex tester: 380ms → 85ms
Added debouncing to the JSON formatter's input handler (300ms delay). INP on JSON formatter: 260ms → 60ms
Moved the hash generator's computation to a Web Worker. SHA-256 hashing of large inputs now happens off the main thread entirely. INP on hash generator: 200ms → 40ms
After two weeks of optimization (CrUX data, mobile, 75th percentile):
Lighthouse score: 62 → 97.
All three metrics solidly in the "Good" range. The page feels instant on mobile. And organic search traffic increased 12% in the month following the improvements (though I can't prove causation — other factors were at play).
If you take nothing else from this post, here's the checklist I run through on every project:
- Add priority (or fetchPriority="high") to the LCP image
- Inline critical CSS in the <head> and defer the rest
- Self-host fonts with next/font
- Give every image explicit width and height
- Eliminate font CLS with next/font's size-adjusted fallbacks
- Render injected content with position: fixed/absolute or reserved space
- Wrap non-urgent state updates in startTransition

Performance optimization is not a one-time task. It's a discipline. Every new feature, every new dependency, every new third-party script is a potential regression. The sites that stay fast are the ones where someone is watching the metrics continuously, not the ones where someone did a one-time optimization sprint.
Set up real user monitoring. Set up alerts when metrics regress. Make performance a part of your code review process. When someone adds a 200KB library, ask if there's a 5KB alternative. When someone adds a synchronous computation in an event handler, ask if it can be deferred or moved to a worker.
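One concrete way to bake this into code review is a performance budget in CI. A sketch of a Lighthouse CI config (lighthouserc.json) with assertions mirroring this post's thresholds — note that Lighthouse can't measure INP in the lab, so total-blocking-time stands in as a rough proxy; the URL and run count are placeholders:

```json
{
  "ci": {
    "collect": { "url": ["http://localhost:3000/"], "numberOfRuns": 3 },
    "assert": {
      "assertions": {
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }],
        "total-blocking-time": ["warn", { "maxNumericValue": 200 }]
      }
    }
  }
}
```

With this in the pipeline, a PR that regresses LCP past budget fails the build instead of quietly shipping.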
The techniques in this post aren't theoretical. They're what I actually did, on this site, with real numbers to back them up. Your mileage will vary — every site is different, every audience is different, every infrastructure is different. But the principles are universal: load less, load smarter, don't block the main thread.
Your users won't send you a thank-you note for a fast site. But they'll stay. They'll come back. And Google will notice.