Image compression isn't just about making files smaller. It's about finding the sweet spot where file size drops 80% but your eyes can't tell the difference.
Last year I uploaded a 14MB hero image to a client's landing page, then spent an embarrassing amount of time wondering why their Core Web Vitals score tanked overnight. The Largest Contentful Paint went from 1.8 seconds to 4.6 seconds. Bounce rate jumped 23%. The client called me on a Friday evening.
The fix took thirty seconds: I compressed the image from 14MB to 340KB. The page loaded in 1.4 seconds. Nobody — not me, not the client, not a single user — could tell the difference visually.
That experience changed how I think about images on the web. Most developers treat image optimization as an afterthought, something you do "later" or "when we have time." But images typically account for 50-70% of a web page's total weight. Get this wrong and nothing else you optimize matters.
This guide is everything I've learned about compressing images without destroying them. Not theory — practical, opinionated advice you can use today.
Let me hit you with some real data, because vague statements like "faster pages are better" don't motivate anyone.
Google's own research shows:

- 53% of mobile visits are abandoned when a page takes longer than 3 seconds to load
- As load time goes from 1 second to 3 seconds, the probability of a bounce increases by 32%
And here's the thing: images are the #1 contributor to page weight. The HTTP Archive's 2026 data shows the median web page is now 2.4MB, and images account for roughly 1MB of that. On image-heavy pages (portfolios, e-commerce, blogs), it's often 3-5MB+ of unoptimized images.
This hits three critical areas: search rankings, bandwidth costs, and accessibility.

Search rankings: Google's ranking signals directly penalize slow-loading images. Largest Contentful Paint (LCP) is usually an image, and Cumulative Layout Shift (CLS) punishes images that load without reserved space — explicit width and height attributes solve this.

Bandwidth costs: If you're serving 100,000 pageviews per month with 2MB of uncompressed images per page, that's 200GB of bandwidth. Compress those images by 80% and you're at 40GB. At CDN rates of $0.08/GB, that's the difference between $16/month and $3.20/month. Scale to millions of pageviews and the savings are significant.
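The bandwidth arithmetic is worth making concrete. A quick back-of-the-envelope calculator in plain Python, using the illustrative rates from this example (not quotes from any particular CDN):

```python
# Back-of-the-envelope CDN cost estimate for image bandwidth.
# The $0.08/GB rate and 2MB/page weight are the illustrative
# numbers from the text, not real CDN pricing.

def monthly_image_cost(pageviews: int, mb_per_page: float,
                       rate_per_gb: float = 0.08) -> float:
    """Return the monthly CDN cost in dollars for image bandwidth."""
    gb = pageviews * mb_per_page / 1000  # MB -> GB (decimal)
    return gb * rate_per_gb

before = monthly_image_cost(100_000, 2.0)        # uncompressed: 2MB/page
after = monthly_image_cost(100_000, 2.0 * 0.2)   # after 80% compression
print(f"${before:.2f}/month -> ${after:.2f}/month")  # prints $16.00/month -> $3.20/month
```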
Here's a fact most developers in high-income countries forget: most of the world's internet users are on slow, metered connections. A 5MB page costs real money on pay-per-MB mobile plans in Southeast Asia, Africa, and South America. Compressing your images isn't just a performance decision — it's an accessibility decision.
Before we talk about formats, you need to understand the two fundamental approaches to compression. I'm going to explain this simply because most guides overcomplicate it.
Think of lossless compression like packing a suitcase more efficiently. Everything that went in comes back out — identical, bit-for-bit. You're just organizing the data more cleverly.
How it works: The algorithm finds patterns and redundancies in the image data and encodes them more efficiently. "This row of 200 pixels is all the same shade of blue" gets stored as "blue × 200" instead of listing each pixel individually.
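To make the idea concrete, here's a toy run-length encoder in Python. It's a drastically simplified stand-in for what real lossless codecs like PNG's DEFLATE do, not an actual image codec:

```python
def rle_encode(pixels):
    """Collapse runs of identical values into (value, count) pairs."""
    if not pixels:
        return []
    runs = []
    current, count = pixels[0], 1
    for p in pixels[1:]:
        if p == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = p, 1
    runs.append((current, count))
    return runs

def rle_decode(runs):
    """Expand (value, count) pairs back to the original sequence."""
    return [value for value, count in runs for _ in range(count)]

row = ["blue"] * 200 + ["white"] * 3
encoded = rle_encode(row)
assert rle_decode(encoded) == row  # lossless: bit-for-bit identical
print(encoded)                     # [('blue', 200), ('white', 3)]
```

The "blue × 200" row from above collapses to a single pair, and decoding reproduces the original exactly — that round-trip guarantee is what "lossless" means.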
Typical savings: 10-40% file size reduction.
When to use it: Screenshots, diagrams, text-heavy images, logos, anything where you need pixel-perfect reproduction. Also essential for images you'll edit later — re-saving a lossy image multiple times compounds quality loss (the "generation loss" problem).
Lossy compression is like summarizing a book. You keep the important parts and discard details that most readers won't notice. The original text is gone, but the meaning is preserved.
How it works: The algorithm analyzes which visual information humans are least likely to notice and removes it. We're terrible at seeing small color variations in busy areas of a photo, so the algorithm simplifies those regions aggressively while preserving sharp edges and high-contrast areas where we'd notice.
Typical savings: 60-90% file size reduction.
When to use it: Photographs, complex images, hero images, product photos — basically any real-world imagery. The quality loss at sensible settings is genuinely invisible to human eyes.
Here's my rule of thumb after years of doing this: use lossy compression for photos, lossless for everything else. If you remember nothing else from this section, remember that.
JPEG has been the backbone of web images for three decades, and it's still the most widely used format. Here's the uncomfortable truth: most people have no idea what quality level to use.
I've tested this extensively. I took 50 different photographs — portraits, landscapes, product shots, food photography, architecture — and exported each at quality levels from 100 down to 10 in steps of 5. Then I did blind comparisons.
Here's what I found:
| Quality Level | Avg File Size (vs. Q100) | Visible Difference |
|---|---|---|
| 100 | 100% (baseline) | None (the comparison baseline) |
| 95 | ~65% | Imperceptible to everyone |
| 90 | ~45% | Imperceptible to nearly everyone |
| 85 | ~35% | Imperceptible in blind tests |
| 80 | ~28% | Barely perceptible if you zoom in |
| 75 | ~23% | Slight softening in detailed areas |
| 70 | ~20% | Noticeable softening, minor artifacts |
| 60 | ~16% | Visible artifacts around text/edges |
| 50 | ~13% | Obviously degraded |
The 80-85 range is where the magic happens. You're dropping the file size to about a third of the original, and in blind side-by-side tests, almost nobody can tell the difference. Even professional photographers struggle to identify which is which without zooming to 400%.
Below 75, degradation becomes apparent, especially around sharp edges and text overlaid on photos. Below 60, you're in "obviously compressed" territory.
My recommendation: Default to quality 82. It's a great balance for general-purpose web images. For hero images and portfolio pieces where quality matters most, use 88-90. For thumbnails and images below 300px wide, you can go as low as 70 because the small display size hides artifacts.
Here's something that catches even experienced developers: never re-compress a JPEG. Each time you open a JPEG, edit it, and save it again, you're applying lossy compression on top of lossy compression. After 5-6 generations, the quality degradation is brutal.
Always keep your original files (RAW, TIFF, or PNG) and export to JPEG as the final step. One compression pass, not five.
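As a sketch of that final export step — using the Pillow library (an assumption; any image library with a quality knob works the same way) — keep the master file lossless and write the JPEG exactly once:

```python
from PIL import Image

def export_jpeg(master_path: str, out_path: str, quality: int = 82) -> None:
    """Export a lossless master (PNG/TIFF) to a web-ready JPEG in one pass."""
    img = Image.open(master_path).convert("RGB")  # JPEG has no alpha channel
    img.save(out_path, "JPEG",
             quality=quality,     # the 80-85 sweet spot from the table above
             optimize=True,       # extra entropy-coding pass, a few % smaller
             progressive=True)    # renders incrementally on slow connections

# export_jpeg("hero-master.png", "hero.jpg")  # hypothetical filenames
```

Because the master stays lossless, re-exporting at a different quality later starts from full fidelity — one compression generation, never five.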
PNGs use lossless compression, so you'd think there's nothing to optimize. Wrong. There's actually a lot of room.
A typical PNG from Photoshop or a screenshot tool has:

- metadata chunks (creation time, software tags, text comments) that browsers never use
- default compression settings rather than the best ones the format allows
- 24-bit color even when the image only uses a handful of distinct colors
All of these optimizations are lossless — the image looks identical, pixel for pixel. You're just removing waste.
For PNGs with lots of flat color (screenshots, UI mockups), reduction from 24-bit to indexed 8-bit color (256 colors) is often visually identical but dramatically smaller. If the image has gradients, stick with 24-bit to avoid banding.
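That 24-bit-to-indexed conversion can be sketched with Pillow (an assumption on my part — dedicated optimizers like pngquant do the same thing with smarter dithering):

```python
from PIL import Image

def to_indexed_png(src: str, dst: str, colors: int = 256) -> None:
    """Convert a 24-bit PNG to 8-bit indexed color.

    Visually identical in practice for flat-color images (screenshots,
    UI mockups); avoid for gradients, where 256 colors cause banding.
    """
    img = Image.open(src).convert("RGB")
    indexed = img.quantize(colors=colors)  # build an adaptive 256-color palette
    indexed.save(dst, optimize=True)
```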
If you're still serving only JPEG and PNG in 2026, you're leaving performance on the table. WebP has been supported by every major browser since 2020, and global support is now above 97%.
Here's why WebP matters:

- 25-35% smaller files than JPEG at equivalent visual quality
- a lossless mode that typically beats PNG by around 26%
- alpha transparency in both lossy and lossless modes (JPEG can't do transparency at all)
- animation support, making it a far more efficient replacement for GIF
That 25-35% savings over JPEG is not a marketing number. I've verified it repeatedly with my own testing. A 200KB JPEG at quality 82 typically becomes a 140-150KB WebP at equivalent visual quality. Across hundreds of images on a site, that adds up fast.
One gotcha: WebP's quality numbers don't map 1:1 to JPEG quality numbers. WebP quality 80 is roughly equivalent to JPEG quality 85 in terms of visual output. My WebP sweet spot is quality 75-80, which gives you output comparable to JPEG 82-85 at significantly smaller file sizes.
Short answer: almost always. Long answer: there are only two real caveats. Browsers from before 2020 need a JPEG fallback via the <picture> element, and for tiny flat-color graphics a well-optimized PNG occasionally comes out smaller than lossless WebP — worth a quick size comparison before committing.
AVIF is what happens when the video compression world and the image world collide. Based on the AV1 video codec, AVIF delivers stunning compression efficiency:

- roughly 50% smaller files than JPEG at equivalent visual quality
- around 20% smaller than WebP in most comparisons
- support for alpha transparency, HDR, and wide color gamut
Those numbers are real, and they're frankly remarkable. An image that's 200KB as a JPEG is 100KB as AVIF at the same visual quality. That's half the bandwidth, half the storage, half the CDN cost.
Browser support is at ~93% globally (Chrome, Firefox, Safari 16.4+, Edge). That's great, but not universal. You need fallbacks.
The bigger issue is encoding speed. AVIF encoding is computationally expensive — 5-20x slower than JPEG encoding. For a single hero image, this doesn't matter. For batch processing thousands of images, it's a real consideration.
My recommendation: Use AVIF as your primary format with WebP fallback and JPEG as the final fallback. The <picture> element makes this easy:

```html
<picture>
  <source srcset="hero.avif" type="image/avif" />
  <source srcset="hero.webp" type="image/webp" />
  <img src="hero.jpg" alt="Hero image" width="1200" height="630" />
</picture>
```

The browser picks the first format it supports. Users with modern browsers get AVIF (smallest), others get WebP, and the handful on ancient browsers get JPEG.
Every photo from a digital camera or smartphone carries metadata (EXIF data). This includes:

- camera make and model
- lens, aperture, shutter speed, and ISO settings
- date, time, and often GPS coordinates of where the photo was taken
- an embedded thumbnail preview
- the software used to edit the image
On a typical smartphone photo, metadata adds 20-100KB. On a DSLR photo, it can be 200KB+ because of embedded previews.
For web delivery, you almost always want to strip this data. It's dead weight that browsers ignore for rendering, and the GPS data is a legitimate privacy concern. You don't want your blog photos revealing your home address.
The only metadata worth keeping is the color profile (sRGB is standard for web) and orientation tags. Everything else can go.
Privacy note: If you're using any image optimization tool, make sure it strips EXIF data by default. I've seen sites that inadvertently exposed photographers' home locations through unstripped GPS coordinates in product photos.
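Here's a sketch of that strip-but-keep-the-color-profile step, using Pillow (an assumption — exiftool and most optimizers offer an equivalent option):

```python
from PIL import Image

def strip_metadata(src: str, dst: str, quality: int = 82) -> None:
    """Re-encode a JPEG without EXIF (camera info, GPS) but keep the
    ICC color profile so colors still render correctly."""
    img = Image.open(src)
    icc = img.info.get("icc_profile")  # the one chunk worth preserving
    kwargs = {"icc_profile": icc} if icc else {}
    # Pillow writes no EXIF unless you pass exif=..., so a fresh save strips it.
    img.convert("RGB").save(dst, "JPEG", quality=quality,
                            optimize=True, **kwargs)
```

Note that this re-encodes the JPEG — one extra lossy generation — so run it as part of the single final export, not on files you've already published.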
This is where many developers waste the most bandwidth without realizing it.
Your hero image is 2400px wide for desktop displays. A phone screen is 375px wide (750px at 2x DPI). If you're serving the same 2400px image to that phone, you're sending 3-4x more pixels than needed. The phone downloads it, then the browser downscales it. Wasted bandwidth, wasted decode time, wasted battery.
The srcset attribute solves this:

```html
<img
  src="hero-800.jpg"
  srcset="hero-400.jpg 400w, hero-800.jpg 800w, hero-1200.jpg 1200w, hero-1600.jpg 1600w, hero-2400.jpg 2400w"
  sizes="(max-width: 768px) 100vw, (max-width: 1200px) 80vw, 1200px"
  alt="Hero image"
  width="2400"
  height="1260"
/>
```

The browser automatically selects the most appropriate size based on the viewport width and device pixel ratio. A phone gets the 800px version. A laptop gets the 1600px version. A 4K monitor gets the 2400px version. (Note there's no loading="lazy" here — this is a hero image, and as covered below, the LCP image should never be lazy-loaded.)
After testing across dozens of projects, here are the sizes I generate for most images:
| Breakpoint | Width | Use Case |
|---|---|---|
| Small | 400px | Thumbnails, mobile cards |
| Medium | 800px | Mobile full-width, tablet cards |
| Large | 1200px | Tablet full-width, desktop cards |
| XL | 1600px | Desktop full-width |
| XXL | 2400px | Large desktop, retina full-width |
Five sizes per image covers virtually every scenario without going overboard. Some guides recommend 8-10 sizes — that's overkill unless you're Netflix.
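Generating those five widths from one master is easy to script. A sketch with Pillow (an assumption — sharp, ImageMagick, or your CDN can do the same job):

```python
from pathlib import Path
from PIL import Image

WIDTHS = [400, 800, 1200, 1600, 2400]  # the five breakpoints from the table

def make_variants(master: str, out_dir: str, quality: int = 82) -> list:
    """Write one JPEG per breakpoint width, never upscaling the master."""
    img = Image.open(master).convert("RGB")
    stem = Path(master).stem
    written = []
    for w in WIDTHS:
        if w > img.width:
            continue  # don't upscale: a 1600px master gets 4 variants, not 5
        h = round(img.height * w / img.width)  # preserve aspect ratio
        out = str(Path(out_dir) / f"{stem}-{w}.jpg")
        img.resize((w, h), Image.LANCZOS).save(
            out, "JPEG", quality=quality, optimize=True)
        written.append(out)
    return written
```

The naming convention (`hero-400.jpg`, `hero-800.jpg`, …) matches the srcset example above, so the output plugs straight into that markup.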
Here's a stat that changed my perspective: the average user only sees 50-60% of a web page's content before leaving. That means 40-50% of the images you're loading are never seen by the user.
Lazy loading defers image loading until the image is about to enter the viewport. It's absurdly simple in 2026:
```html
<img src="photo.webp" alt="Description" loading="lazy" width="800" height="600" />
```

That's it. The loading="lazy" attribute is now supported by 96%+ of browsers. No JavaScript library needed, no Intersection Observer polyfill, just one HTML attribute.
Never lazy-load your LCP image. Your hero image, first content image, or above-the-fold banner should load eagerly (the default). Lazy loading your LCP image adds a loading delay that directly harms your Core Web Vitals.
Always include width and height. Without explicit dimensions, the browser can't reserve space for lazy-loaded images, causing layout shifts (bad CLS scores).
Use fetchpriority="high" for your LCP image. This tells the browser to prioritize downloading it:
```html
<img src="hero.webp" alt="Hero" fetchpriority="high" width="1200" height="630" />
```

Everything below the fold gets loading="lazy". Blog post images, product grids, gallery thumbnails — all lazy.

You've compressed your images, chosen modern formats, generated responsive sizes, and added lazy loading. The last piece is how you deliver them.
A CDN (Content Delivery Network) caches your images on servers worldwide. When a user in Tokyo requests your image, they get it from a server in Tokyo — not from your origin server in Virginia. The difference in latency is dramatic: a round trip from Tokyo to Virginia runs roughly 150-200ms, while a nearby edge server responds in under 20ms — and that saving applies to every image on the page.
Most CDN providers also offer automatic image optimization — on-the-fly format conversion, resizing, and compression. You upload one high-quality original, and the CDN serves optimized versions automatically based on the requesting browser's capabilities.
If you're on Vercel, Netlify, or Cloudflare Pages, you already have a CDN. If you're self-hosting, Cloudflare's free tier gives you global CDN with image optimization.
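As one concrete example, Cloudflare's image resizing works through a URL prefix: you request the original path through `/cdn-cgi/image/` with transform options baked into the URL (the file path and sizes here are illustrative):

```html
<!-- The original upload stays untouched at /photos/hero.jpg -->
<img
  src="/cdn-cgi/image/width=800,quality=80,format=auto/photos/hero.jpg"
  alt="Hero image" width="800" height="420"
/>
```

With format=auto, the edge serves AVIF or WebP to browsers that advertise support and falls back to the original format for everyone else — no build step required.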
Every time I create an image for social media, I have to look up the recommended dimensions. So I made this table for myself, and now I'm sharing it with you:
| Platform | Image Type | Recommended Size | Aspect Ratio | Max File Size |
|---|---|---|---|---|
| X (Twitter) | In-stream photo | 1600 × 900px | 16:9 | 5MB |
| X (Twitter) | Card image | 800 × 418px | 1.91:1 | 5MB |
| Facebook | Feed image | 1200 × 630px | 1.91:1 | 8MB |
| Facebook | Story | 1080 × 1920px | 9:16 | 4MB |
| Instagram | Feed (square) | 1080 × 1080px | 1:1 | 8MB |
| Instagram | Feed (landscape) | 1080 × 566px | 1.91:1 | 8MB |
| Instagram | Story | 1080 × 1920px | 9:16 | 8MB |
| LinkedIn | Feed image | 1200 × 627px | 1.91:1 | 10MB |
| LinkedIn | Article cover | 1280 × 720px | 16:9 | 10MB |
| Pinterest | Pin | 1000 × 1500px | 2:3 | 20MB |
| YouTube | Thumbnail | 1280 × 720px | 16:9 | 2MB |
| Open Graph | Default | 1200 × 630px | 1.91:1 | Varies |
Pro tip: Design at the largest required dimensions and let the platform resize. Uploading an image that's too small results in blurry upscaling. Uploading too large just wastes upload time.
Compress your social media images before uploading. Platforms re-compress everything anyway (often aggressively), so starting with a well-compressed image gives the algorithm less to destroy.
Here's my quality-checking workflow for any image I compress:

1. Export the compressed version at your target quality.
2. Open the original and the compressed copy side by side at actual display size — the size users will see, not 100% of a 4000px original.
3. Scan the areas compression hurts first: sharp edges, text overlays, smooth gradients, and skin tones.
4. If anything looks off, bump the quality up 5 points and repeat.
If you can't see a difference at actual display size, the compression is good enough. Don't chase pixel-perfection at zoom levels no user will ever see.
Here's something that should concern you: many online image compression tools upload your images to their servers for processing. Your personal photos, client work, unreleased product shots — all passing through someone else's infrastructure.
I have a strong opinion on this: image compression should happen in your browser, on your machine.
Modern browsers have the computational power to compress images entirely client-side. There's no technical reason to upload your images to a remote server for compression. The only reason most tools do it is because they were built before browser APIs were powerful enough — or because they want to monetize your data.
Browser-based compression tools that process everything locally give you:

- privacy: your files never leave your machine
- speed: no upload or download round trip
- offline capability: once the page is loaded, no connection is needed
- no artificial limits: no accounts, file-size caps, or daily quotas
This is exactly how image tools should work in 2026. If a compression tool asks you to create an account or shows an upload progress bar, close the tab and find a better one.
Compressing one image at a time is fine for a blog post. But what about an e-commerce site with 10,000 product images? Or a photography portfolio with 500 photos? You need a batch workflow.
For static sites and apps with a build step, integrate compression into your build pipeline, so every image is automatically optimized during deployment.
This is the most reliable approach because it's automated. No human has to remember to compress images.
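A minimal sketch of such a build step in Python, walking a source directory and emitting compressed copies with Pillow (the directory layout and the choice of Pillow are assumptions; sharp or imagemin fill the same role in Node pipelines):

```python
from pathlib import Path
from PIL import Image

def compress_all(src_dir: str, out_dir: str, quality: int = 82) -> int:
    """Re-encode every JPEG/PNG under src_dir into out_dir. Returns count."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    count = 0
    for path in Path(src_dir).rglob("*"):
        if path.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
            continue  # skip non-image files untouched
        img = Image.open(path)
        dest = out / path.name
        if path.suffix.lower() == ".png":
            img.save(dest, "PNG", optimize=True)  # lossless optimization pass
        else:
            img.convert("RGB").save(dest, "JPEG",
                                    quality=quality, optimize=True)
        count += 1
    return count
```

Wire this into your deploy script (or a CI step) so the compressed output directory is what actually ships, while the lossless masters stay in source control or object storage.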
For CMS-driven sites where content editors upload images, compress at upload time: hook into the CMS's media pipeline so every image is optimized the moment it's uploaded, before it ever reaches storage.
For quick, ad-hoc compression without setting up a pipeline, browser-based tools that support multiple file processing are ideal. Drag in 50 images, set your quality level, download compressed versions. No server, no installation, no setup.
I use this approach for one-off projects and when I need to quickly compress a batch of images for a presentation or email. It's fast, private, and requires zero configuration.
| Approach | Best For | Savings | Effort | Privacy |
|---|---|---|---|---|
| JPEG Q82 | Photos, general web | 65-70% | Low | Depends on tool |
| WebP Q78 | Modern web delivery | 75-80% | Low | Depends on tool |
| AVIF Q65 | Cutting-edge browsers | 85-90% | Low | Depends on tool |
| PNG optimization | Screenshots, logos | 20-40% | Low | Depends on tool |
| Responsive images | All web images | 50-70% (mobile) | Medium | N/A |
| Lazy loading | Below-fold images | 40-50% bandwidth | Trivial | N/A |
| Metadata stripping | All images | 5-15% | Trivial | Improved |
| CDN + auto-format | High-traffic sites | 30-50% | Medium | Varies |
| Browser-based tools | Quick optimization | 60-80% | Low | Excellent |
| Build pipeline | Automated workflows | 70-85% | High setup, zero ongoing | Excellent |
The best strategy combines several of these. I typically use: modern formats (AVIF/WebP) + responsive sizes + lazy loading + metadata stripping + CDN delivery. Together, these can reduce image bandwidth by 90%+ compared to unoptimized originals.
After auditing dozens of websites last year, here are the mistakes I see most often:
1. Using PNG for photographs. A photo saved as PNG can be 5-10x larger than the same photo as JPEG/WebP. PNGs are for graphics, not photos. This single mistake is responsible for more wasted bandwidth than any other.
2. Serving 4000px-wide images in 300px containers. If the image is displayed at 300px, sending 4000px is insane. Even at 2x DPI, you only need 600px. Use responsive images.
3. Not using WebP in 2026. Browser support has been effectively universal for years. There's no excuse anymore.
4. Quality 100 JPEG exports. Quality 100 gives you a file 3x larger than quality 85 with zero visible improvement. JPEG is inherently lossy — quality 100 doesn't mean "lossless," it means "least lossy but still lossy and needlessly huge."
5. Forgetting width and height attributes. Without them, the browser can't reserve space, causing layout shifts as images load. This directly hurts your CLS score.
6. Lazy loading the LCP image. Your most important image should load eagerly with fetchpriority="high". Lazy loading it makes it load later, which is the opposite of what you want.
7. Re-compressing already compressed images. Downloading a JPEG from a stock site (already compressed) and compressing it again through your build pipeline adds a second generation of lossy compression. Keep originals in lossless formats.
8. Ignoring metadata/EXIF. Free 20-100KB savings per image, plus privacy protection. Strip it.
Before I publish any page with images, I run through this:
- srcset with 4-5 sizes for hero/large images
- width and height attributes on every <img>
- fetchpriority="high" on the LCP image
- loading="lazy" on every below-the-fold image

Run through this list and you'll have better image optimization than 95% of the web.
Image compression isn't glamorous work. Nobody's going to tweet about how you reduced your hero image from 2MB to 280KB. But it directly affects how fast your site loads, how much you pay for bandwidth, how well you rank in search, and whether users on slow connections can use your site at all.
The good news is that the tooling in 2026 makes this almost trivially easy. Browser-based compression tools handle everything locally without uploading your files anywhere. Modern formats like WebP and AVIF deliver dramatically better compression than JPEG ever could. And responsive images with lazy loading can cut bandwidth by half with a few HTML attributes.
If you want a single tool that handles image compression, format conversion, resizing, and metadata stripping — all running locally in your browser with zero uploads — check out akousa.net/tools. The image tools cover everything in this guide, and your files never leave your machine.
Stop serving 5MB hero images. Your users (and your hosting bill) will thank you.