Boost your website speed with 20 actionable performance optimization tips. Improve Core Web Vitals, reduce load times, and rank higher in search results.
A slow website is an invisible tax on everything you build. It erodes conversions, frustrates users, and quietly pushes you down in search rankings. The worst part is that most developers do not realize how slow their site actually is because they test it on fast hardware and fast networks.
In 2026, performance is not optional. Google uses Core Web Vitals as a ranking signal. Users abandon pages that take more than three seconds to load. And with the rise of mobile-first browsing in markets with inconsistent connectivity, every kilobyte and every millisecond counts.
This guide covers 20 concrete, actionable performance tips. No vague advice. No "just optimize your images" without telling you how. Each tip includes what to do, why it matters, and code examples where relevant.
Before diving into specific tips, you need to understand the three metrics Google cares about most:
| Metric | What It Measures | Good Threshold | Poor Threshold |
|---|---|---|---|
| LCP (Largest Contentful Paint) | Loading speed | Under 2.5s | Over 4.0s |
| INP (Interaction to Next Paint) | Responsiveness | Under 200ms | Over 500ms |
| CLS (Cumulative Layout Shift) | Visual stability | Under 0.1 | Over 0.25 |
INP replaced FID (First Input Delay) in March 2024 and is a much harder metric to pass because it measures every interaction during the entire page lifecycle, not just the first click.
Every tip below targets at least one of these metrics. Some improve all three.
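As a reference point, the thresholds in the table can be encoded in a small helper for classifying field samples. This is a sketch with our own names; the `web-vitals` library exposes a similar `rating` field on each metric:

```javascript
// Thresholds mirror the table above (ms for LCP/INP, unitless for CLS).
const THRESHOLDS = {
  LCP: { good: 2500, poor: 4000 },
  INP: { good: 200, poor: 500 },
  CLS: { good: 0.1, poor: 0.25 },
};

// Classify a single field sample against Google's buckets.
function rateMetric(name, value) {
  const t = THRESHOLDS[name];
  if (value <= t.good) return "good";
  if (value <= t.poor) return "needs-improvement";
  return "poor";
}

console.log(rateMetric("LCP", 2100)); // "good"
console.log(rateMetric("INP", 350));  // "needs-improvement"
console.log(rateMetric("CLS", 0.3));  // "poor"
```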
You cannot improve what you do not measure. Before changing a single line of code, establish your baseline.
Run your site through multiple tools to get both lab data (synthetic tests) and field data (real user measurements):
```bash
# Use Lighthouse from the command line for repeatable lab tests
npx lighthouse https://yoursite.com --output=json --output-path=./baseline.json

# Key metrics to record:
# - LCP, INP, CLS (Core Web Vitals)
# - TTFB (Time to First Byte)
# - Total Blocking Time (TBT)
# - Speed Index
```

Lab data from Lighthouse is useful for debugging, but field data from the Chrome User Experience Report (CrUX) is what Google actually uses for rankings. Check the PageSpeed Insights API or the CrUX dashboard in BigQuery for real-world numbers.
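If you save the JSON report as shown above, a small helper can pull the headline numbers out of it for tracking over time. The audit ids below are Lighthouse's standard ones; `extractMetrics` is our own sketch:

```javascript
// Extract key numeric metrics from a parsed Lighthouse JSON report.
function extractMetrics(report) {
  const ids = [
    "largest-contentful-paint",
    "cumulative-layout-shift",
    "total-blocking-time",
    "speed-index",
  ];
  // Each Lighthouse audit exposes numericValue (ms, or unitless for CLS).
  return Object.fromEntries(ids.map((id) => [id, report.audits[id]?.numericValue]));
}

// Usage with the file produced by the command above:
// const fs = require("node:fs");
// const report = JSON.parse(fs.readFileSync("./baseline.json", "utf8"));
// console.log(extractMetrics(report));
```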
You can also use the akousa.net Website Analyzer tool to get a comprehensive performance audit of your site, including Core Web Vitals scores, third-party script analysis, and specific recommendations for improvement.
LCP is the single most impactful metric for perceived speed. The first step is identifying what your LCP element actually is. It is usually a hero image, a heading rendered with a web font, or a large block of text.
```javascript
import { onLCP } from "web-vitals";

onLCP((metric) => {
  const entry = metric.entries[metric.entries.length - 1];
  console.log("LCP element:", entry.element);
  console.log("LCP time:", metric.value, "ms");
});
```

Once you know the element, apply targeted optimizations:
```html
<head>
  <link
    rel="preload"
    as="image"
    href="/hero-image.webp"
    type="image/webp"
    fetchpriority="high"
  />
</head>
```

The `fetchpriority="high"` attribute tells the browser to prioritize this resource over others discovered at the same time. This alone can shave 200-500ms off LCP on image-heavy pages.
AVIF and WebP deliver 30-50% smaller file sizes compared to JPEG at equivalent visual quality. In 2026, browser support for both formats is excellent.
```html
<picture>
  <source srcset="/hero.avif" type="image/avif" />
  <source srcset="/hero.webp" type="image/webp" />
  <img
    src="/hero.jpg"
    alt="Descriptive alt text"
    width="1200"
    height="630"
    loading="eager"
    decoding="async"
  />
</picture>
```

If you use Next.js, the built-in Image component handles format negotiation automatically:
```jsx
import Image from "next/image";

<Image
  src="/hero.jpg"
  alt="Descriptive alt text"
  width={1200}
  height={630}
  priority
  sizes="(max-width: 768px) 100vw, 1200px"
/>
```

The `priority` prop adds `fetchpriority="high"` and disables lazy loading, which is exactly what you want for above-the-fold LCP images.
Serving a 2400px-wide image to a 375px-wide phone screen is wasteful. Use srcset and sizes to let the browser pick the right resolution:
```html
<img
  src="/photo-800.webp"
  srcset="
    /photo-400.webp   400w,
    /photo-800.webp   800w,
    /photo-1200.webp 1200w,
    /photo-1600.webp 1600w
  "
  sizes="(max-width: 768px) 100vw, (max-width: 1200px) 50vw, 800px"
  alt="Descriptive alt text"
  width="1600"
  height="900"
  loading="lazy"
/>
```

This can reduce image payload by 60-80% on mobile devices without any visible quality loss.
Native lazy loading is now supported in all major browsers. Apply it to every image and iframe that is not visible in the initial viewport:
```html
<img src="/gallery-photo.webp" loading="lazy" alt="Gallery photo" width="600" height="400" />

<iframe src="https://www.youtube.com/embed/xyz" loading="lazy" title="Video"></iframe>
```

Critical rule: never lazy load your LCP element. If the largest image in the viewport has `loading="lazy"`, the browser deliberately delays fetching it, which directly hurts your LCP score.
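A quick dev-time audit can catch this mistake. The viewport check below is a pure function you can unit test; the DOM query under it is browser-only. Both are our own sketch, not a library API:

```javascript
// Pure predicate: does a bounding rect intersect the initial viewport?
function isAboveTheFold(rect, viewportHeight) {
  return rect.top < viewportHeight && rect.bottom > 0;
}

// Browser usage (e.g. in the dev console): flag lazy images in the viewport —
// any result here is a likely LCP mistake.
// [...document.querySelectorAll('img[loading="lazy"]')].filter((img) =>
//   isAboveTheFold(img.getBoundingClientRect(), window.innerHeight)
// );
```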
CSS blocks rendering by default. The browser will not paint anything until it has downloaded and parsed all CSS in the `<head>`. For large stylesheets, this creates a significant delay.
Extract the CSS needed for above-the-fold content and inline it directly:
```html
<head>
  <style>
    /* Critical CSS for the initial viewport */
    body { margin: 0; font-family: system-ui, sans-serif; }
    .hero { display: grid; min-height: 60vh; place-items: center; }
    .nav { display: flex; padding: 1rem 2rem; }
  </style>
  <link rel="preload" href="/styles/main.css" as="style" onload="this.rel='stylesheet'" />
  <noscript><link rel="stylesheet" href="/styles/main.css" /></noscript>
</head>
```

Tools like critters (used by Next.js automatically) can extract critical CSS at build time.
JavaScript is the most expensive resource on the web, byte for byte. It must be downloaded, parsed, compiled, and executed. Every step blocks the main thread and hurts INP.
```bash
# For webpack-based projects
npx webpack-bundle-analyzer stats.json

# For Next.js (requires @next/bundle-analyzer)
ANALYZE=true npx next build
```

```jsx
// Instead of importing everything upfront:
// import HeavyChart from "@/components/HeavyChart";

// Dynamically import components that are not needed immediately
import dynamic from "next/dynamic";

const HeavyChart = dynamic(() => import("@/components/HeavyChart"), {
  loading: () => <div className="h-96 animate-pulse bg-gray-200 rounded" />,
  ssr: false,
});
```

Target: keep your initial JavaScript payload under 150KB gzipped. Anything beyond that on mobile connections will push your Time to Interactive past acceptable thresholds.
Analytics, chat widgets, ad scripts, and social media embeds are often the biggest performance killers. They compete with your own code for bandwidth and main thread time.
```html
<!-- Bad: blocks parsing -->
<script src="https://analytics.example.com/tracker.js"></script>

<!-- Better: defers execution until HTML is parsed -->
<script defer src="https://analytics.example.com/tracker.js"></script>

<!-- Best: loads after the page is fully interactive -->
<script>
  window.addEventListener("load", () => {
    const script = document.createElement("script");
    script.src = "https://analytics.example.com/tracker.js";
    document.body.appendChild(script);
  });
</script>
```

For non-critical third-party scripts, consider loading them only after user interaction:
```javascript
let thirdPartyLoaded = false;

function loadThirdParty() {
  if (thirdPartyLoaded) return;
  thirdPartyLoaded = true;
  // Load chat widget, analytics, etc.
  const script = document.createElement("script");
  script.src = "/third-party-bundle.js";
  document.body.appendChild(script);
}

// Trigger on first interaction
["scroll", "click", "touchstart", "keydown"].forEach((event) => {
  window.addEventListener(event, loadThirdParty, { once: true, passive: true });
});
```

Resource hints tell the browser about resources it will need soon, allowing it to start fetching them before they are discovered in HTML:
```html
<head>
  <!-- DNS prefetch for third-party domains -->
  <link rel="dns-prefetch" href="https://fonts.googleapis.com" />

  <!-- Preconnect to establish early connections (DNS + TCP + TLS) -->
  <link rel="preconnect" href="https://cdn.example.com" crossorigin />

  <!-- Prefetch resources for likely next navigations -->
  <link rel="prefetch" href="/about" />

  <!-- Preload critical resources for the current page -->
  <link rel="preload" href="/fonts/inter-var.woff2" as="font" type="font/woff2" crossorigin />
</head>
```

Use `preconnect` sparingly. Each preconnect opens a full connection including the TLS handshake, which consumes CPU and bandwidth. Limit it to 2-3 critical origins.
Web fonts are a common LCP blocker. The browser discovers the font in CSS, then has to download it before rendering any text that uses it.
```css
@font-face {
  font-family: "Inter";
  src: url("/fonts/inter-var.woff2") format("woff2");
  font-weight: 100 900;
  font-display: swap;
  unicode-range: U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+2000-206F;
}
```

- `font-display: swap` shows fallback text immediately and swaps when the font loads (can cause CLS).
- `font-display: optional` gives the font roughly 100ms to load; if it misses that window, the fallback is used for the entire page visit (best for CLS, but some visits render with system fonts).

For the best balance of performance and visual consistency, use `swap` with a well-matched fallback font to minimize layout shift:
```css
/* Define a fallback face whose metrics are adjusted to match the web font */
@font-face {
  font-family: "Inter Fallback";
  src: local("Arial");
  size-adjust: 107%;
  ascent-override: 90%;
  descent-override: 22%;
  line-gap-override: 0%;
}

@font-face {
  font-family: "Inter";
  src: url("/fonts/inter-var.woff2") format("woff2");
  font-display: swap;
  font-weight: 100 900;
}

body {
  font-family: "Inter", "Inter Fallback", system-ui, sans-serif;
}
```

The `size-adjust`, `ascent-override`, and `descent-override` properties match the fallback font's metrics to your web font, virtually eliminating the layout shift when the font swaps in. Note that they belong on the fallback's `@font-face`, not the web font's; tools like `next/font` and Fontaine generate these override values automatically.
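If you want to derive `size-adjust` yourself, the value is just the ratio of average glyph widths between the web font and the fallback (measured, for example, with canvas `measureText` on a sample string). A tiny helper with illustrative numbers:

```javascript
// Given the measured width of the same sample text in the web font and in the
// fallback font, compute the size-adjust percentage for the fallback face.
// (Our own helper; the measurement step is browser-side and not shown.)
function computeSizeAdjust(webFontWidth, fallbackWidth) {
  const ratio = webFontWidth / fallbackWidth;
  return `${(ratio * 100).toFixed(1)}%`;
}

console.log(computeSizeAdjust(535, 500)); // "107.0%"
```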
CLS measures unexpected layout movement. The most common causes are images without dimensions, late-loading ads, dynamically injected content, and web font swaps.
```html
<!-- Bad: causes layout shift when the image loads -->
<img src="/photo.webp" alt="Photo" />

<!-- Good: browser reserves space before the image loads -->
<img src="/photo.webp" alt="Photo" width="800" height="600" />

<!-- Also good: use aspect-ratio in CSS -->
<img src="/photo.webp" alt="Photo" style="aspect-ratio: 4/3; width: 100%;" />
```

```css
/* Reserve space for an ad slot */
.ad-container {
  min-height: 250px;
  background: #f0f0f0;
  contain: layout;
}

/* Reserve space for a dynamically loaded widget */
.widget-placeholder {
  min-height: 400px;
  contain: layout size;
}
```

The `contain: layout` CSS property tells the browser that the element's contents will not affect anything outside it, which helps the browser optimize paint and layout operations.
Gzip and Brotli compression can reduce text-based asset sizes by 70-90%. Brotli consistently delivers 15-20% better compression than Gzip.
```nginx
# Enable Brotli (requires ngx_brotli module)
brotli on;
brotli_comp_level 6;
brotli_types text/plain text/css application/json application/javascript
             text/xml application/xml application/xml+rss text/javascript
             image/svg+xml;

# Fallback to Gzip
gzip on;
gzip_comp_level 6;
gzip_types text/plain text/css application/json application/javascript
           text/xml application/xml application/xml+rss text/javascript
           image/svg+xml;
```

Verify that compression is actually being served:

```bash
curl -H "Accept-Encoding: br,gzip" -I https://yoursite.com/
# Look for: content-encoding: br (Brotli) or content-encoding: gzip
```

If your response headers do not include a `content-encoding` header, compression is not working and you are leaving easy performance gains on the table.
Time to First Byte is the foundation. No amount of frontend optimization can compensate for a server that takes 2 seconds to respond.
```javascript
// Example: simple in-memory cache for expensive operations
const cache = new Map();
const CACHE_TTL = 60 * 1000; // 60 seconds

async function getCachedData(key, fetchFn) {
  const cached = cache.get(key);
  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }
  const data = await fetchFn();
  cache.set(key, { data, timestamp: Date.now() });
  return data;
}
```

Place your static assets and, ideally, your HTML behind a CDN. For static or ISR pages, serving cached HTML from edge nodes closest to the user can reduce TTFB from 500ms to under 50ms.
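Caching can also be layered into application code: serve a stale cached value immediately and refresh it in the background. A stale-while-revalidate sketch, with illustrative names rather than a library API:

```javascript
const swrCache = new Map();
const FRESH_MS = 60_000;   // serve without refetching for 1 minute
const STALE_MS = 600_000;  // after that, serve stale while revalidating, up to 10 minutes

async function swrGet(key, fetchFn) {
  const hit = swrCache.get(key);
  const age = hit ? Date.now() - hit.timestamp : Infinity;

  if (age < FRESH_MS) return hit.data; // fresh hit: no fetch at all

  if (age < STALE_MS) {
    // Stale hit: respond immediately, refresh in the background
    fetchFn()
      .then((data) => swrCache.set(key, { data, timestamp: Date.now() }))
      .catch(() => {}); // keep serving stale data if the refresh fails
    return hit.data;
  }

  // Miss (or too stale): fetch synchronously
  const data = await fetchFn();
  swrCache.set(key, { data, timestamp: Date.now() });
  return data;
}
```

The user-visible latency on a stale hit is a cache read, not a network round trip, which is exactly the property the `stale-while-revalidate` HTTP directive gives you at the CDN layer.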
Key TTFB targets:
The stale-while-revalidate pattern serves cached content immediately while fetching fresh data in the background:
```javascript
// Cache-Control header for static assets.
// Serve cached for 1 day, allow stale for 7 days while revalidating.
res.setHeader(
  "Cache-Control",
  "public, max-age=86400, stale-while-revalidate=604800"
);

// For Next.js API routes or page props
export const revalidate = 3600; // ISR: revalidate every hour
```

For immutable assets like hashed JS/CSS bundles, use aggressive caching:
```nginx
# Nginx: cache hashed assets for 1 year
location /_next/static/ {
  add_header Cache-Control "public, max-age=31536000, immutable";
}
```

INP measures the delay between a user interaction (click, tap, keypress) and the next visual update. Long tasks on the main thread directly block interactivity.
```javascript
// Bad: one long synchronous operation blocks the main thread
function processAllItemsSync(items) {
  items.forEach((item) => heavyComputation(item));
}

// Good: yield to the main thread between chunks
async function processAllItems(items) {
  const CHUNK_SIZE = 50;
  for (let i = 0; i < items.length; i += CHUNK_SIZE) {
    const chunk = items.slice(i, i + CHUNK_SIZE);
    chunk.forEach((item) => heavyComputation(item));
    // Yield to the browser so it can handle user input and paint
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
}
```

The same idea applies inside event handlers: do the urgent UI update first, then yield before the non-urgent work:

```javascript
async function handleClick() {
  // Process first part
  updateUI();

  // Yield to let the browser paint, then continue
  if ("scheduler" in window && "yield" in scheduler) {
    await scheduler.yield();
  } else {
    await new Promise((resolve) => setTimeout(resolve, 0));
  }

  // Process second part (non-urgent)
  sendAnalytics();
  prefetchNextPage();
}
```

`scheduler.yield()` is a newer API that yields to the browser while preserving task priority, unlike `setTimeout`, which pushes the continuation to the back of the task queue.
These CSS properties help the browser skip rendering work for off-screen content:
```css
/* Skip layout and paint for off-screen sections */
.below-the-fold-section {
  content-visibility: auto;
  contain-intrinsic-size: auto 500px;
}

/* Tell the browser this element's internals don't affect the rest of the page */
.card {
  contain: layout style paint;
}
```

`content-visibility: auto` can dramatically reduce initial rendering time on long pages. The browser skips layout and paint for elements that are not in the viewport, only rendering them when the user scrolls near them.
The contain-intrinsic-size property gives the browser an estimated size to use for the element before it is rendered, preventing scrollbar jumps.
Large CSS files with thousands of unused rules slow down both download and parse time. Modern tools can tree-shake unused CSS at build time:
```javascript
// postcss.config.js — example with PurgeCSS
module.exports = {
  plugins: [
    require("@fullhuman/postcss-purgecss")({
      content: ["./src/**/*.{js,jsx,ts,tsx}", "./public/**/*.html"],
      defaultExtractor: (content) => content.match(/[\w-/:]+(?<!:)/g) || [],
      safelist: ["html", "body", /^dark:/, /^data-/],
    }),
  ],
};
```

If you use Tailwind CSS, unused classes are already purged at build time. But watch out for CSS imported from third-party component libraries. A single `import "some-library/styles.css"` can add 200KB of unused CSS to your bundle.
Prefetching resources for pages the user is likely to visit next makes subsequent navigations feel instant:
```jsx
// Next.js automatically prefetches <Link> targets as they enter the viewport
import Link from "next/link";

<Link href="/pricing">Pricing</Link>

// For custom prefetching based on user behavior
function ProductCard({ slug }) {
  const prefetchProduct = () => {
    const link = document.createElement("link");
    link.rel = "prefetch";
    link.href = `/products/${slug}`;
    document.head.appendChild(link);
  };

  return (
    <div onMouseEnter={prefetchProduct}>
      {/* card content */}
    </div>
  );
}
```

Use the Speculation Rules API for more granular control in supported browsers:
```html
<script type="speculationrules">
  {
    "prerender": [
      {
        "where": { "href_matches": "/products/*" },
        "eagerness": "moderate"
      }
    ],
    "prefetch": [
      {
        "where": { "href_matches": "/blog/*" },
        "eagerness": "conservative"
      }
    ]
  }
</script>
```

The Speculation Rules API can prerender entire pages in the background, making navigations truly instant with zero perceived load time.
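Because not every browser supports speculation rules yet, you can build and inject them conditionally. The builder below is a hypothetical helper of our own; the JSON shape follows the spec, and `HTMLScriptElement.supports()` is the standard feature check:

```javascript
// Build a speculation-rules payload for sets of URL patterns.
function buildSpeculationRules({ prerender = [], prefetch = [] } = {}) {
  const rule = (pattern, eagerness) => ({ where: { href_matches: pattern }, eagerness });
  return {
    prerender: prerender.map((p) => rule(p, "moderate")),
    prefetch: prefetch.map((p) => rule(p, "conservative")),
  };
}

// Browser usage: inject only when the API is supported.
// if (HTMLScriptElement.supports?.("speculationrules")) {
//   const script = document.createElement("script");
//   script.type = "speculationrules";
//   script.textContent = JSON.stringify(
//     buildSpeculationRules({ prerender: ["/products/*"], prefetch: ["/blog/*"] })
//   );
//   document.head.appendChild(script);
// }
```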
HTTP/2 multiplexes multiple requests over a single connection, eliminating the head-of-line blocking problem in HTTP/1.1. HTTP/3 goes further with QUIC, reducing connection setup time and handling packet loss more gracefully.
```bash
curl -sI https://yoursite.com/ | grep -i "http/"
# Expected: HTTP/2 or HTTP/3
```

With HTTP/2, some old best practices become anti-patterns:

- Domain sharding: splitting assets across many hostnames adds DNS and connection overhead instead of increasing parallelism.
- Heavy file concatenation: one giant bundle invalidates the whole cache on any change, while multiplexing makes many small files cheap.
- Image sprites: individual images now cache and update independently without extra per-request cost.
Performance is not a one-time fix. New features, dependency updates, and content changes can quietly regress your scores. Set up continuous monitoring:
```javascript
// Report Core Web Vitals to your analytics
import { onCLS, onINP, onLCP } from "web-vitals";

function sendToAnalytics(metric) {
  const body = JSON.stringify({
    name: metric.name,
    value: metric.value,
    rating: metric.rating, // "good", "needs-improvement", or "poor"
    delta: metric.delta,
    id: metric.id,
    navigationType: metric.navigationType,
  });

  // Use sendBeacon for reliability during page unload
  if (navigator.sendBeacon) {
    navigator.sendBeacon("/api/vitals", body);
  } else {
    fetch("/api/vitals", { body, method: "POST", keepalive: true });
  }
}

onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);
```

In your CI/CD pipeline, enforce budgets with Lighthouse CI (`lighthouserc.json`):

```json
{
  "ci": {
    "assert": {
      "assertions": {
        "categories:performance": ["error", { "minScore": 0.9 }],
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }],
        "interactive": ["error", { "maxNumericValue": 3500 }],
        "total-byte-weight": ["error", { "maxNumericValue": 500000 }]
      }
    }
  }
}
```

Fail your CI build when performance regresses past acceptable thresholds. This is the only reliable way to prevent slow regressions from shipping.
Here is a summary of all 20 tips organized by which Core Web Vital they primarily improve:
Web performance optimization in 2026 is not about applying a single silver bullet. It is about systematically addressing dozens of small bottlenecks, each of which shaves off tens or hundreds of milliseconds. The compound effect is what takes a site from "mediocre" to "fast."
Start with measurement. Identify your worst Core Web Vital. Fix the lowest-hanging fruit first: image optimization, compression, and render-blocking resources typically deliver the biggest improvements with the least effort. Then work your way through the more nuanced optimizations like main thread scheduling, font metric matching, and caching strategies.
The tools and techniques covered here work across any framework and any hosting setup. Whether you are building with React, Vue, Svelte, or plain HTML, the browser does not care. It cares about bytes, timing, and visual stability.
Make your site fast. Your users will thank you, your conversion rates will reflect it, and Google will notice.