Slow websites lose visitors and rankings. Here's how to test your site's speed for free, understand Core Web Vitals, and fix the most common performance killers.
I lost a client last year because their website took 7.2 seconds to load on mobile. Not because the design was bad. Not because the content was thin. Because people hit the back button before they ever saw the homepage. The bounce rate was 73%. The conversion rate was basically a rounding error. And the worst part? They had no idea. They'd been testing on their fiber connection at the office and everything "seemed fine."
That experience taught me something I now tell every website owner I work with: if you haven't tested your website speed recently, you're flying blind. And you might be losing money, traffic, and search rankings every single day without knowing it.
The good news is that testing your site speed costs nothing. Zero. There are excellent free tools that will tell you exactly how fast (or slow) your site is, what's causing the problems, and what to fix first. You don't need a performance consultant. You don't need expensive monitoring software. You need 10 minutes, a free speed test tool, and this guide.
Let's fix your website.
You've probably heard that "speed matters" a hundred times. Let me give you the numbers that actually changed how I think about performance.
This isn't speculation. Google has confirmed, multiple times, that page speed is a ranking signal. In 2021, they rolled out the Page Experience update, which made Core Web Vitals a direct ranking factor. In March 2024, they raised the bar again by replacing First Input Delay with the stricter Interaction to Next Paint metric.
What does this mean practically? If two pages have similar content quality and backlink profiles, the faster one ranks higher. Not every time, not by an enormous margin, but consistently enough that speed is table stakes for competitive search terms.
I've seen sites climb 5-15 positions in search results purely from performance improvements. No new content. No new links. Just making the existing site faster.
Here are the often-cited numbers from Google's own research on mobile load times and bounce probability:

| Page Load Time | Probability of Bounce |
|---|---|
| 1s → 3s | Increases 32% |
| 1s → 5s | Increases 90% |
| 1s → 6s | Increases 106% |
| 1s → 10s | Increases 123% |
Read those again. Going from a 1-second load time to 5 seconds doesn't just lose you some visitors — it nearly doubles your bounce rate. For an e-commerce site doing $100K/month, that speed difference could easily cost $40-50K in lost revenue annually.
Portent's research found that a site that loads in 1 second has a conversion rate 3x higher than a site that loads in 5 seconds. Deloitte's "Milliseconds Make Millions" study showed that a 0.1-second improvement in site speed led to an 8.4% increase in conversions for retail sites and a 10.1% increase for travel sites.
Not 10 seconds. Not 1 second. A tenth of a second. That's how sensitive users are to speed, even when they don't consciously notice it.
Over 60% of web traffic now comes from mobile devices. And mobile connections are inherently slower — higher latency, more variable bandwidth, less processing power. A site that loads in 2 seconds on desktop might take 5-8 seconds on a mid-range phone on 4G.
Google moved to mobile-first indexing years ago. They're judging your site's speed primarily on how it performs on mobile, not desktop. If you're only testing on your laptop, you're looking at the wrong numbers.
Before we test anything, let's agree on what we're measuring and what "good" looks like.
Traditionally, we measured "page load time" — how long until the browser fires the load event. This was always a crude metric. A page could fire the load event in 2 seconds but show nothing useful for 5 seconds because JavaScript was still rendering the content.
Google introduced Core Web Vitals as a more meaningful way to measure user experience. Instead of one number, you get three metrics that capture different aspects of how a page actually feels to use.
These aren't obscure technical metrics. They directly correspond to questions users subconsciously ask: Has the page loaded yet? Does it respond when I tap or click? Does the content hold still while I read?
Based on Google's thresholds and industry data:
| Performance Aspect | Good | Needs Improvement | Poor |
|---|---|---|---|
| Largest Contentful Paint (LCP) | Under 2.5s | 2.5s - 4.0s | Over 4.0s |
| Interaction to Next Paint (INP) | Under 200ms | 200ms - 500ms | Over 500ms |
| Cumulative Layout Shift (CLS) | Under 0.1 | 0.1 - 0.25 | Over 0.25 |
| Total Page Load | Under 3s | 3s - 5s | Over 5s |
| Time to First Byte (TTFB) | Under 800ms | 800ms - 1800ms | Over 1800ms |
If all three Core Web Vitals are "Good," congratulations — you're in the top tier. If any of them are "Poor," you have a problem worth fixing immediately.
Let me break these down without the jargon.
What it measures: How long until the biggest visible element on the page finishes loading. Usually this is a hero image, a large text block, or a video poster.
Why it matters: LCP is what the user perceives as "the page loaded." Until the largest visible element appears, the page feels incomplete.
Common causes of bad LCP: slow server response times (high TTFB), render-blocking CSS and JavaScript, large unoptimized hero images, and client-side rendering that delays the main content.
Target: Under 2.5 seconds. Under 1.5 seconds if you want to be competitive.
What it measures: How long the page takes to respond when you click, tap, or press a key. Specifically, it measures the delay from the interaction to the next time the browser paints a visual update.
INP replaced First Input Delay (FID) in March 2024 as a Core Web Vital. FID only measured the delay of the first interaction. INP measures all interactions throughout the page's lifecycle and reports the worst one (approximately — it's the 98th percentile).
Why it matters: If you click a button and nothing happens for 400ms, the site feels sluggish and broken. Users lose trust.
Common causes of bad INP: long JavaScript tasks blocking the main thread, heavy event handlers, oversized DOM trees, and third-party scripts competing for CPU time.
Target: Under 200ms. Under 100ms is excellent.
What it measures: How much the visible content moves around unexpectedly while the page loads. You know that infuriating experience where you're about to click a link and suddenly an ad loads above it, pushing everything down, and you click the wrong thing? That's a layout shift.
Why it matters: Layout shifts are one of the most annoying user experiences on the web. They cause mis-clicks, reading interruptions, and a general feeling that the site is janky.
Common causes of bad CLS: images and embeds without reserved dimensions, ads injected above existing content, web fonts swapping in at a different size, and banners or notices inserted after the page renders.
Target: Under 0.1. Under 0.05 is excellent.
Now for the practical part. Here are the tools I use regularly, what each one is best at, and how to interpret the results.
URL: pagespeed.web.dev
Best for: Getting a quick Core Web Vitals score with both lab and field data.
PageSpeed Insights is the go-to for most people, and for good reason. It runs a Lighthouse audit (lab data) and also shows Chrome User Experience Report data (field data from real users) if your site has enough traffic.
How to use it: paste your URL, wait roughly 30 seconds for the audit, and read the mobile results first. Check the field data at the top (shown if your site has enough traffic), then work through the Opportunities and Diagnostics sections, which list specific fixes with estimated savings.
Strengths: Free, from Google (so you know it reflects their ranking criteria), shows both lab and real-world data, gives specific fix suggestions with estimated time savings.
Limitations: Only tests one page at a time. Lab data can vary between runs. The performance score is a weighted composite that can be confusing.
Pro tip: Always test the mobile version first. That's what Google uses for ranking.
URL: gtmetrix.com
Best for: Historical performance tracking and waterfall analysis.
GTmetrix gives you a detailed waterfall chart showing exactly when each resource loads. This is invaluable for diagnosing specific bottlenecks. The free tier lets you test from one location and keep a history of your tests.
Strengths: Beautiful waterfall visualization, historical tracking, shows page size and request counts clearly.
Limitations: Free tier limited to one test location (Vancouver). Paid plans needed for more locations and scheduled monitoring.
URL: webpagetest.org
Best for: Advanced diagnostics, filmstrip view, and multi-location testing.
WebPageTest is the most powerful free speed testing tool available. It lets you test from multiple global locations, choose specific connection speeds, see a frame-by-frame filmstrip of how your page loads, and run comparison tests.
Strengths: Incredibly detailed, multi-step testing, visual comparison, connection throttling, completely free.
Limitations: The interface is not beginner-friendly. Results can be overwhelming. Tests take longer to complete.
Chrome DevTools (Lighthouse). Best for: Testing during development and debugging specific issues.
You already have this. Press F12 in Chrome, go to the Lighthouse tab, and run an audit. It's the same engine behind PageSpeed Insights but running locally on your machine.
Strengths: No external service needed, test localhost/staging sites, full browser debugging tools available, customizable audit settings.
Limitations: Results are affected by your computer's performance and extensions. Always use incognito mode with extensions disabled.
If you want a comprehensive performance audit without installing anything, browser-based tools like the Website Analyzer on akousa.net can run speed tests, check Core Web Vitals, audit performance bottlenecks, and give you actionable recommendations — all from your browser. These are particularly useful when you need a quick check and don't want to wait for multiple tools to finish.
My recommendation: use at least two tools and compare results. No single tool gives you the complete picture.
| Tool | Best For | Difficulty | Core Web Vitals |
|---|---|---|---|
| PageSpeed Insights | Quick checks, SEO context | Easy | Yes (lab + field) |
| GTmetrix | Waterfall analysis, tracking | Easy | Yes (lab) |
| WebPageTest | Deep diagnostics | Advanced | Yes (lab) |
| Chrome DevTools | Development, debugging | Intermediate | Yes (lab) |
| Browser analyzers | Quick audits, accessibility | Easy | Varies |
My typical workflow: I start with PageSpeed Insights for the overview and real-user data, then use GTmetrix or WebPageTest for the waterfall when I need to diagnose a specific problem.
Here's something that surprises many website owners: your desktop and mobile scores can be drastically different. I've seen sites score 95 on desktop and 38 on mobile. Same site. Same content. Same server.
Processing power: Even flagship phones in 2026 have significantly less CPU power than a laptop. Mid-range phones (which represent the majority of mobile users globally) can be 4-6x slower at executing JavaScript.
Network conditions: 4G connections have 50-100ms of latency before a single byte transfers. 5G helps but isn't universal. Many users are on congested networks, metered connections, or older infrastructure.
Screen rendering: Mobile browsers have to do more work — responsive layout calculations, touch event handling, virtual keyboard management — all on weaker hardware.
After auditing hundreds of websites, these are the problems I see over and over. They account for roughly 90% of all speed issues.
Impact: Often the single biggest performance killer. I've seen single hero images that are 4MB — larger than the entire rest of the page combined.
How to identify: Run a page speed test and look for "Properly size images" or "Serve images in next-gen formats" in the recommendations. Check your waterfall for images that take more than 500ms to load.
Fixes:
Convert to modern formats: WebP is supported by every modern browser and is 25-34% smaller than JPEG at equivalent quality. AVIF is even better (20% smaller than WebP) but has slightly less browser support.
Resize appropriately: If your hero image displays at 1200px wide, don't upload a 4000px original. Use responsive images with srcset so the browser downloads the right size for each screen.
Compress aggressively: Most images can lose 60-80% of their file size with no visible quality loss. Tools like Squoosh, TinyPNG, or browser-based image compressors can do this in seconds.
Lazy load below-the-fold images: Only load images that are visible on screen. Use loading="lazy" on any image that isn't in the initial viewport.
Set explicit dimensions: Always include width and height attributes on <img> tags. This prevents layout shift (CLS) because the browser knows how much space to reserve before the image loads.
```html
<!-- Bad: No dimensions, wrong format, no lazy loading -->
<img src="photo.png" alt="Team photo">

<!-- Good: Dimensions, modern format, lazy loaded -->
<img
  src="photo.webp"
  alt="Team photo"
  width="800"
  height="600"
  loading="lazy"
  decoding="async"
/>
```

Expected improvement: 40-70% reduction in page weight for image-heavy sites.
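To illustrate the responsive-images fix mentioned above, here is a srcset sketch. The filenames and breakpoints are illustrative, not from the original article:

```html
<!-- Hypothetical filenames: the browser picks the smallest adequate size -->
<img
  src="photo-800.webp"
  srcset="photo-400.webp 400w,
          photo-800.webp 800w,
          photo-1600.webp 1600w"
  sizes="(max-width: 600px) 100vw, 800px"
  alt="Team photo"
  width="800"
  height="600"
  loading="lazy"
/>
```

The sizes attribute tells the browser how wide the image will render, so it can choose the right file before layout is even computed.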
Impact: JavaScript blocks the main thread, delays interactivity, and is often the primary cause of bad INP scores.
How to identify: In PageSpeed Insights, look for "Reduce unused JavaScript" and "Minimize main-thread work." In Chrome DevTools, the Performance tab shows you exactly which scripts are hogging the CPU.
Fixes:
Audit your scripts: Go through every JavaScript file loading on your page. Do you actually use all of them? I regularly find analytics scripts, chat widgets, A/B testing tools, and social media embeds that the site owner forgot they installed years ago.
Defer non-critical scripts: Any script that isn't needed for the initial render should use defer or async. This lets the browser continue parsing HTML while downloading the script.
```html
<!-- Blocks rendering -->
<script src="analytics.js"></script>

<!-- Better: Downloads in parallel, executes after HTML parsing -->
<script src="analytics.js" defer></script>
```

Code split: If you're using a framework, make sure you're code splitting so users only download the JavaScript they need for the current page, not the entire application.
Remove unused code: Tree-shaking, dead code elimination, and bundle analysis tools can dramatically reduce your JavaScript payload. I've seen sites cut 40-60% of their JavaScript by removing unused library imports.
Target: Under 200KB of compressed JavaScript for most sites. Under 100KB is excellent.
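As a sketch of code splitting at the page level, a dynamic import defers a heavy module until the user actually needs it. The module path and element IDs here are hypothetical:

```html
<script type="module">
  // Hypothetical: only download the chart code when the user opens the stats tab
  document.getElementById('stats-tab').addEventListener('click', async () => {
    const { renderChart } = await import('/js/chart.js'); // fetched on demand
    renderChart(document.getElementById('chart'));
  });
</script>
```

Users who never open the tab never pay the download or parse cost.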
Impact: The browser can't display anything until it has downloaded and parsed all CSS files in the <head>. Large or numerous CSS files delay the first paint.
How to identify: Look for "Eliminate render-blocking resources" in your speed test results.
Fixes:
Inline critical CSS: Identify the CSS needed for above-the-fold content and inline it directly in the <head>. Load the rest asynchronously.
Remove unused CSS: Most sites use CSS frameworks or accumulated stylesheets with massive amounts of unused rules. PurgeCSS or similar tools can strip out everything you don't need.
Minify CSS: Remove whitespace, comments, and redundant code. This typically saves 10-20%.
Use fewer CSS files: Each separate CSS file is an additional HTTP request, and each request adds latency. Combine where practical.
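One widely used pattern for the inline-critical-CSS fix, sketched with placeholder styles and a hypothetical stylesheet path:

```html
<head>
  <style>
    /* Critical above-the-fold styles, inlined (illustrative) */
    body { margin: 0; font-family: sans-serif; }
    .hero { min-height: 60vh; }
  </style>
  <!-- Load the full stylesheet without blocking the first paint -->
  <link rel="preload" href="/css/main.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/css/main.css"></noscript>
</head>
```

The preload-then-swap trick lets the first paint happen with only the inlined rules, while the full stylesheet arrives asynchronously.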
Expected improvement: 200-800ms faster first paint.
Impact: Custom fonts can block text from displaying, cause layout shifts, and add significant download weight.
How to identify: If you see a flash of invisible text (FOIT) or flash of unstyled text (FOUT) when the page loads, you have a font loading problem. Your speed test may flag "Ensure text remains visible during webfont load."
Fixes:
Use font-display: swap: This tells the browser to show a fallback system font immediately, then swap to the custom font when it loads. Users see content immediately instead of staring at blank space.
```css
@font-face {
  font-family: 'MyFont';
  src: url('myfont.woff2') format('woff2');
  font-display: swap;
}
```

Preload critical fonts: Tell the browser to start downloading your most important font file early.

```html
<link rel="preload" href="/fonts/main.woff2" as="font" type="font/woff2" crossorigin>
```

Use WOFF2 format: It's the most compressed web font format. If you're still serving TTF or OTF files, you're sending 30-50% more data than necessary.
Limit font variations: Each weight and style is a separate file. Do you really need Regular, Medium, SemiBold, Bold, and Black? Most sites can get by with 2-3 weights.
Consider system fonts: For body text, system font stacks (-apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif) are instantly available and perfectly readable. Reserve custom fonts for headings and branding.
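A minimal sketch of that split, assuming a hypothetical 'MyFont' reserved for headings:

```html
<style>
  /* System fonts render instantly for body text */
  body {
    font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
  }
  /* Custom font reserved for headings and branding */
  h1, h2, h3 {
    font-family: 'MyFont', Georgia, serif;
  }
</style>
```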
Expected improvement: 100-500ms faster text display, significant CLS reduction.
Impact: Every third-party script (analytics, ads, chat widgets, social embeds, marketing tags) adds DNS lookups, connection overhead, download time, and CPU processing. They're often the biggest single contributor to slow sites.
How to identify: Check your waterfall chart for requests to domains that aren't yours. Count them. I've seen sites with 30+ third-party scripts, each adding latency.
Fixes:
Audit everything: List every third-party script on your site. For each one, ask: "Is this actively providing value?" Remove anything you're not actively using.
Load non-essential scripts after interaction: Chat widgets, for example, don't need to load until the user scrolls or moves their mouse. Social share buttons can load lazily.
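Here is a sketch of loading a chat widget only after the first interaction. The widget URL and event choices are placeholders, not a specific vendor's API:

```html
<script>
  let chatLoaded = false;
  function loadChatWidget() {
    if (chatLoaded) return; // several events can fire; load only once
    chatLoaded = true;
    const s = document.createElement('script');
    s.src = 'https://chat.example.com/widget.js'; // placeholder URL
    s.defer = true;
    document.head.appendChild(s);
  }
  // First scroll, pointer movement, or touch triggers the load
  ['scroll', 'pointermove', 'touchstart'].forEach((evt) =>
    window.addEventListener(evt, loadChatWidget, { once: true, passive: true })
  );
</script>
```

Idle visitors never download the widget at all, and active visitors get it before they reach the chat button.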
Use a tag manager sparingly: Google Tag Manager makes it easy to add scripts — too easy. Audit your container regularly.
Self-host when possible: If you're loading a small library from a third-party CDN, consider hosting it yourself. This eliminates the extra DNS lookup and connection time.
Set resource hints: For scripts you must load from third parties, use dns-prefetch and preconnect to start the connection early.
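For example, for a hypothetical third-party analytics domain:

```html
<!-- preconnect: resolve DNS, open the connection, and negotiate TLS early -->
<link rel="preconnect" href="https://analytics.example.com" crossorigin>
<!-- dns-prefetch: a cheaper fallback that only resolves DNS -->
<link rel="dns-prefetch" href="https://analytics.example.com">
```

Use preconnect for the one or two domains you know you'll hit immediately; it holds a connection open, so don't spray it across every third party.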
Expected improvement: Highly variable. I've seen 2-4 second improvements from removing or deferring unnecessary third-party scripts.
Impact: Without proper caching, every visit requires the browser to re-download everything from scratch. Returning visitors get no speed benefit.
How to identify: Your speed test will flag "Serve static assets with an efficient cache policy." In the response headers, look for Cache-Control headers on your resources.
Fixes:
Set long cache times for static assets: CSS, JavaScript, images, and fonts that have versioned filenames (like style.a3f8b2.css) should be cached for at least one year.
```
Cache-Control: public, max-age=31536000, immutable
```
Use shorter cache times for HTML: HTML pages should have shorter cache times (or no-cache with revalidation) so users always get the latest content.
Expected improvement: 50-90% faster load times for returning visitors.
Impact: Time to First Byte (TTFB) is the foundation of all other metrics. If your server takes 2 seconds to respond, nothing else can start until those 2 seconds have passed.
How to identify: TTFB over 800ms in your speed test results. The first bar in your waterfall chart (the HTML document) shows this clearly.
Fixes:
Upgrade your hosting: This is the most impactful fix for slow TTFB. If you're on shared hosting that costs $3/month, you're sharing resources with hundreds of other sites. A basic VPS ($10-20/month) or quality managed hosting will often cut TTFB in half.
Enable server-side caching: If your pages are dynamically generated, cache the output so the server doesn't have to rebuild the page for every request. Most CMS platforms have caching plugins.
Use a CDN: A Content Delivery Network stores copies of your site on servers around the world. Instead of every request going to your origin server (which might be in Dallas), users get served from the nearest edge location (which might be in their city). More on this below.
Optimize database queries: Slow database queries are a common cause of high TTFB on dynamic sites. Add proper indexes, optimize queries, and consider query caching.
Expected improvement: 200ms-2s improvement in TTFB, which cascades to improve every other metric.
If your website serves visitors from multiple countries (or even multiple states), a CDN is probably the highest-impact optimization you can make.
Without a CDN, every request goes to your origin server. If your server is in New York and a user is in Tokyo, every request travels roughly 6,700 miles each way. That alone adds roughly 150-200ms of round-trip latency from physics: the speed of light through fiber optic cable, plus routing overhead.
A CDN has servers (called "edge nodes" or "points of presence") distributed globally. When someone visits your site, the CDN serves static content (images, CSS, JavaScript, fonts) from the nearest edge node. The Tokyo user gets served from a Tokyo edge node. The London user gets served from London.
Cloudflare (Free tier): The most popular free option. Generous free tier includes CDN, basic DDoS protection, and SSL. The free plan is honestly good enough for most small to medium sites.
Bunny CDN: Pay-per-use pricing starting at $0.01/GB. Extremely fast, great for sites with moderate traffic that don't want to commit to a monthly plan.
Cloud provider CDNs (AWS CloudFront, Google Cloud CDN, Azure CDN): Best if you're already on that cloud platform. Pricing is per-GB.
A CDN accelerates the delivery of your content, but it won't fix slow server-side rendering of uncached pages, oversized images and JavaScript payloads, or slow script execution on the user's device.
Think of a CDN as a faster delivery truck. It helps, but if you're shipping 50-pound boxes when 5-pound boxes would do, you still have a weight problem.
Caching is the art of storing things you've already computed or downloaded so you don't have to do it again. There are several layers of caching, and each one helps.
The browser stores files locally after the first download. On subsequent visits, it uses the cached versions instead of re-downloading. This is controlled by HTTP headers your server sends.
What to cache aggressively (long cache times): versioned CSS and JavaScript files, images, fonts, and any static asset whose filename changes when its content changes.
What to cache conservatively (short times or revalidation): HTML documents, API responses, and anything personalized or frequently updated.
Instead of regenerating a page from scratch for every request, the server can cache the finished HTML and serve it directly. Full-page caching is the fastest option for pages that don't change frequently. CDN edge nodes add another layer — they cache your content so requests don't even reach your origin server, combining the benefits of proximity with avoiding recomputation.
Not every speed issue is worth fixing. Here's how I prioritize: fix anything rated "Poor" in Core Web Vitals first, then tackle the recommendations with the largest estimated savings, and skip micro-optimizations that shave only a few milliseconds.
In my experience, 80% of the speed improvement comes from five things: optimizing images, reducing JavaScript, using a CDN, enabling proper caching, and upgrading from bad hosting. If you do those five things, you'll be faster than 80% of websites. Everything else is optimization at the margins.
Testing your speed once and moving on is a mistake. Website speed degrades over time as you add plugins, install new third-party scripts, upload unoptimized images, and accumulate features.
Monthly manual checks: Run your site through PageSpeed Insights and GTmetrix on the first of each month. Save the results. Look for trends.
Use Google Search Console: The Core Web Vitals report in Search Console shows how your real users experience your site over time. This is field data — it's what Google actually uses for ranking.
Test after every major change: New theme? Test speed. New plugin? Test speed. New ad network? Test speed.
And remember: don't chase the PageSpeed Insights score. Chase the Core Web Vitals. I've seen sites with a score of 72 that have excellent real-world performance. The individual metrics — LCP, INP, CLS — are what actually matter for rankings.
Here's everything in one actionable list. Work through it top to bottom — the items are roughly ordered by impact.
- loading="lazy" on all below-the-fold images
- width and height attributes on all <img> tags
- srcset for different screen sizes
- defer on non-critical scripts
- font-display: swap on all custom fonts, to prevent font-loading layout shifts

You've read 3,000+ words about website speed. Now do something with it. Here's your assignment: run your homepage through PageSpeed Insights on mobile, write down your worst Core Web Vital, and fix the single biggest issue it flags.
Don't try to fix everything at once. Fix the biggest problem first. Then retest. Then fix the next one. Incremental progress beats perfectionist paralysis every time.
Your visitors don't know what Core Web Vitals are. They don't care about your PageSpeed score. They just know whether your site feels fast or slow. And that feeling — that split-second judgment — determines whether they stay or leave.
Make them stay.
At minimum, monthly. After any major change (new theme, plugin, feature, ad network), test immediately. If you're actively working on SEO, weekly tests help you track progress.
No. The score is a composite metric. Google uses Core Web Vitals (LCP, INP, CLS) as ranking signals, not the overall score. You can have a score of 75 and pass all Core Web Vitals.
Mobile tests simulate a mid-range phone on a 4G connection. This means less CPU power (so JavaScript takes longer to process) and higher latency (so everything takes longer to download). It's closer to what most of your users actually experience.
Yes, significantly. Research consistently shows that each additional second of load time reduces conversions by 4.4% or more. For a site doing $50,000/month in revenue, that's $2,200/month per extra second.
All of them — and none of them. Speed tests are snapshots affected by server load, network conditions, and testing infrastructure. Run 3-5 tests and use the median. For the most accurate picture, look at field data (Chrome User Experience Report) in PageSpeed Insights or Google Search Console.
For most sites: LCP under 2.5 seconds, INP under 200ms, CLS under 0.1, and total page load under 3 seconds on mobile. If you hit those numbers, you're outperforming the vast majority of the web.
Speed isn't a project with a finish line. It's an ongoing discipline. The web gets heavier every year — larger images, more JavaScript, more third-party integrations. The sites that stay fast are the ones that measure regularly, catch regressions early, and treat performance as a feature, not an afterthought.
Start testing. Start fixing. Your users — and Google — will notice.