Reduce image file size by up to 80% without visible quality loss. Compare PNG, JPEG, WebP, and AVIF compression methods with free online tools.
A single uncompressed hero image can weigh more than your entire HTML document, all your CSS, and half your JavaScript combined. I have seen product pages where a 4.8 MB background photo took longer to load than every other asset on the page put together. The page itself was 6.1 MB total. The image was 78% of it.
And the worst part? Nobody could tell the difference between that 4.8 MB original and a 420 KB compressed version. Not the designer. Not the client. Not the end user squinting at it on a 6-inch phone screen over a 4G connection.
Image compression is the single highest-impact performance optimization most websites can make. It costs nothing. It requires no code changes. It takes about 30 seconds per image. And yet, an astonishing number of websites still serve images that are 5x to 20x larger than they need to be.
This guide covers everything you need to know: what compression actually does at a technical level, the difference between lossy and lossless approaches, how every major image format handles it, practical quality settings for real-world use cases, batch workflows, and free tools that handle all of it in your browser without uploading your files to anyone's server.
Let's shrink some files.
Before we get into the how, let's talk about the why — because "faster page loads" is only the beginning.
Google has published data showing that as page load time goes from 1 second to 3 seconds, the probability of bounce increases by 32%. From 1 second to 5 seconds? That jumps to 90%. And images are the primary contributor to page weight on most websites.
Amazon famously found that every 100ms of additional load time cost them 1% in sales. For a company doing $500 billion in annual revenue, that is roughly $5 billion for every 100ms of delay. Your site probably is not Amazon, but the proportional impact is the same.
Google's Largest Contentful Paint (LCP) metric — one of the three Core Web Vitals that directly influence search ranking — measures how quickly the largest visible element loads. On most pages, that largest element is an image. If your hero image is 3 MB, your LCP will suffer, your CWV score will drop, and Google will rank you lower than a competitor whose images are properly compressed.
The threshold Google considers "good" for LCP is under 2.5 seconds. "Poor" is anything over 4 seconds. An uncompressed 5 MB image on a median mobile connection will blow past that 4-second mark by itself.
If you serve 100,000 page views per month and each page loads 2 MB of uncompressed images, that is 200 GB of bandwidth per month just for images. Compress those images by 75% and you are down to 50 GB. At typical CDN pricing, that savings adds up to real money over a year — and it scales linearly with traffic.
Not everyone is on gigabit fiber. The global median mobile connection speed in 2026 is around 45 Mbps, but that is a median — half of all connections are slower. In parts of South Asia, Africa, and South America, average mobile speeds are under 15 Mbps. A 4 MB image takes over 2 seconds to download at 15 Mbps, assuming zero latency and no other assets competing for bandwidth. In practice, it is worse.
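The arithmetic behind that claim is easy to check. Here is a back-of-envelope sketch in Python — it ignores latency, TCP slow start, and other assets sharing the connection, so real-world numbers will be worse:

```python
def transfer_seconds(size_mb: float, speed_mbps: float) -> float:
    # Convert megabytes to megabits (1 byte = 8 bits), then divide
    # by link speed. Best case: no latency, no contention.
    return (size_mb * 8) / speed_mbps

# A 4 MB image on a 15 Mbps connection:
print(round(transfer_seconds(4, 15), 2))    # → 2.13 seconds
# The same image compressed to 400 KB:
print(round(transfer_seconds(0.4, 15), 2))  # → 0.21 seconds
```

On a connection like that, compression alone is the difference between blowing the LCP budget and staying comfortably inside it.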
Every image compression method falls into one of two categories. Understanding the difference is essential to making the right choice for your use case.
Lossless compression reduces file size without discarding any image data. When you decompress a losslessly compressed image, you get back the exact original — pixel for pixel, bit for bit. Nothing is lost.
How does it shrink the file then? By finding more efficient ways to represent the same data. Think of it like this: instead of storing "red pixel, red pixel, red pixel, red pixel, red pixel" the algorithm stores "5 red pixels." The information is identical; the representation is shorter.
Common lossless techniques include:

- Run-length encoding — storing a repeated value as a count plus the value.
- Dictionary matching (LZ77-style) — replacing repeated byte sequences with references to earlier occurrences; this is the core of DEFLATE.
- Entropy coding (Huffman, arithmetic) — giving frequent values shorter bit codes than rare ones.
- Predictive filtering — storing each pixel as its difference from a predicted value, as PNG does before compressing.
Lossless compression typically reduces image file sizes by 20% to 50%, depending on the image content. Images with large areas of uniform color (screenshots, diagrams, logos) compress much better than photographs with continuous tonal variation.
Best for: Screenshots, diagrams, logos, illustrations, medical imaging, archival storage, any situation where pixel-perfect accuracy matters.
Lossy compression reduces file size by permanently discarding some image data. The discarded data is chosen carefully — algorithms prioritize removing information that the human visual system is least likely to notice.
This is possible because human vision has well-studied limitations:

- We are far more sensitive to changes in brightness than to changes in color, which is why most codecs store color information at lower resolution than luminance (chroma subsampling).
- We are poor at perceiving fine, high-frequency detail, so small errors in busy textures go unnoticed.
- Strong edges and textures "mask" nearby errors — noise that would be obvious in a smooth sky is invisible in foliage or fabric.
Lossy compression typically reduces file sizes by 60% to 90% while maintaining visual quality that is indistinguishable from the original to most viewers at normal viewing distances.
The trade-off is irreversibility. Once you apply lossy compression, the discarded data is gone. If you compress, then decompress, then re-compress, quality degrades further with each cycle. Always keep your originals and compress from the source.
Best for: Photographs, complex illustrations, web images, social media, any situation where file size matters more than pixel-perfect accuracy.
When people say "compress without losing quality," they usually mean one of two things:

1. Truly lossless — the decompressed image is bit-for-bit identical to the original.
2. Visually lossless — data has been discarded, but the difference is imperceptible at normal viewing distances.
For most web use cases, visually lossless compression is the sweet spot. You get dramatic file size savings with no perceptible quality difference. The rest of this guide focuses primarily on achieving visually lossless results, but I will note where true lossless methods are the better choice.
Each format has different compression characteristics, strengths, and ideal use cases. Choosing the right format is often more impactful than tweaking compression settings.
JPEG has been the default photographic image format since 1992, and for good reason — it was specifically designed for photographs.
Compression type: Lossy (with a rarely-used lossless mode).
How it works: JPEG converts the image from RGB to YCbCr color space (separating brightness from color), divides the image into 8x8 pixel blocks, applies a Discrete Cosine Transform (DCT) to each block to convert spatial data into frequency data, then quantizes (rounds) the frequency coefficients. The quantization step is where data is lost, and the quality slider controls how aggressively it rounds.
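You can see where the loss happens with a simplified sketch: a 1-D DCT followed by quantization. (Real JPEG applies the transform in two dimensions per 8x8 block, then zigzag-orders and entropy-codes the result; the quantization steps below are the first row of the standard JPEG luminance table.)

```python
import math

def dct_1d(samples):
    # Type-II DCT: turns 8 pixel values into 8 frequency coefficients.
    n = len(samples)
    coeffs = []
    for k in range(n):
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        coeffs.append(scale * sum(
            s * math.cos(math.pi * (i + 0.5) * k / n)
            for i, s in enumerate(samples)
        ))
    return coeffs

# First row of the standard JPEG luminance quantization table.
QUANT = [16, 11, 10, 16, 24, 40, 51, 61]

def quantize(coeffs, table):
    # This rounding is the lossy step: small high-frequency
    # coefficients collapse to zero and are never recovered.
    return [round(c / q) for c, q in zip(coeffs, table)]

# A smooth 8-pixel gradient, typical of photographic content.
row = [100, 102, 104, 106, 108, 110, 112, 114]
quantized = quantize(dct_1d(row), QUANT)
print(quantized)  # → [19, -1, 0, 0, 0, 0, 0, 0]
```

Eight pixel values become two non-zero numbers plus a run of zeros — exactly the kind of data entropy coding shrinks dramatically. Lower quality settings use bigger quantization steps, zeroing out more coefficients.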
Strengths: Excellent compression ratios for photographs. Universal browser support. Universal device support. Mature tooling. At quality 80-85, most photographs compress to 1/5th their uncompressed size with no visible artifacts.
Weaknesses: No transparency support. No animation. Block artifacts become visible at low quality settings (that telltale "blocky" look). Poor at sharp edges and text — the 8x8 block structure creates visible ringing around high-contrast boundaries. Each re-save degrades quality further.
Recommended quality settings:

- 90-95 for hero images, portfolio work, and photography where fidelity is paramount.
- 80-85 for general web photographs — the sweet spot for most content.
- 70-75 for thumbnails and small previews, where artifacts are hard to see at display size.
PNG was created in 1996 as a patent-free replacement for GIF, and it has become the standard for non-photographic web images.
Compression type: Lossless only. PNG uses DEFLATE compression (the same algorithm behind ZIP files) combined with predictive filtering.
How it works: Before compression, PNG applies a per-row filter that transforms pixel values relative to their neighbors (difference from the pixel above, difference from the pixel to the left, etc.). This creates data with more redundancy, which DEFLATE then compresses efficiently.
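The effect of filtering is easy to demonstrate with Python's `zlib` module, the standard-library implementation of DEFLATE. In this sketch, a smooth horizontal gradient compresses markedly better after a "Sub"-style filter (each byte replaced by its difference from the byte to its left):

```python
import zlib

# A smooth horizontal gradient, repeated -- the kind of slowly-varying
# data photographic and UI images are full of.
raw = bytes(i % 256 for i in range(64 * 1024))

# PNG's "Sub" filter: store each byte as its difference from the byte
# to its left (mod 256). The gradient becomes an almost-constant stream.
filtered = bytes([raw[0]] + [(raw[i] - raw[i - 1]) % 256
                             for i in range(1, len(raw))])

plain = len(zlib.compress(raw, 9))
subbed = len(zlib.compress(filtered, 9))
print(plain, subbed)   # the filtered version is markedly smaller
assert subbed < plain  # filtering creates redundancy DEFLATE can exploit
```

Same information, different representation — which is why the filter choice matters so much to final PNG size.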
Strengths: Lossless — no quality degradation ever. Full alpha transparency support (256 levels of transparency per pixel). Excellent for screenshots, UI elements, logos, text, and anything with sharp edges or flat colors. No generation loss from re-saving.
Weaknesses: Much larger file sizes than JPEG for photographs. A typical photo saved as PNG will be 3-5x larger than the same photo as JPEG at quality 85 — with no visible quality advantage. Even with maximum compression, PNG photographs are simply too large for web use.
Optimization tips: PNG compression level (0-9) controls the trade-off between compression speed and file size but does not affect quality — it is always lossless. Level 9 produces the smallest file but takes longer. For batch processing, level 6 is a good balance. Tools that "optimize" PNGs (like OptiPNG or pngquant) either find better DEFLATE parameters (lossless) or reduce the color palette (lossy, but often imperceptible for graphics).
Developed by Google and released in 2010, WebP was designed specifically to replace both JPEG and PNG on the web.
Compression type: Both lossy and lossless modes.
How it works: Lossy WebP uses predictive coding (predicting each block from previously decoded blocks) rather than JPEG's transform coding. Lossless WebP uses a combination of spatial transforms, color space transforms, and entropy coding.
Strengths: 25-35% smaller than JPEG at equivalent visual quality for lossy mode. 20-25% smaller than PNG for lossless mode. Supports both transparency and animation. Supported by all modern browsers (Chrome, Firefox, Safari, Edge — Safari added support with Safari 14 in 2020, making it effectively universal).
Weaknesses: Encoding is slower than JPEG. Maximum image dimensions are 16383 x 16383 pixels (sufficient for virtually all web use but a limitation for very large images). Lossy WebP can occasionally produce different artifacts than JPEG — some images that look perfect in JPEG show subtle "smearing" in WebP at equivalent file sizes.
Recommended quality settings:

- 85-90 for high-quality photographs.
- 75-85 for general web use — roughly comparable to JPEG quality 80-85 at a noticeably smaller file size.
- Lossless mode for screenshots, logos, and graphics with sharp edges.
AVIF is the newest major image format, derived from the AV1 video codec and finalized in 2019.
Compression type: Both lossy and lossless.
How it works: AVIF uses the same intra-frame coding techniques as AV1 video — block-based prediction, transforms, and quantization, but with significantly more advanced tools than JPEG. It supports block sizes up to 64x64 (vs. JPEG's fixed 8x8), uses directional intra prediction, and applies a post-processing "loop filter" that reduces artifacts.
Strengths: 30-50% smaller than JPEG at equivalent quality. 20-30% smaller than WebP. Excellent at low bitrates — where JPEG falls apart with block artifacts, AVIF maintains smooth, clean output. Supports HDR, wide color gamut, 12-bit depth, transparency, and animation.
Weaknesses: Encoding is very slow — 10-100x slower than JPEG encoding. Decoding is also slower (though fast enough for web use). Browser support is now broad (Chrome 85+, Firefox 93+, Safari 16.1+) but some older browsers still in circulation lack support. Maximum image dimensions vary by implementation. Progressive decoding is not supported the way JPEG handles it.
Recommended quality settings:

- 60-70 for general web photographs — comparable quality to JPEG 80-85 at roughly half the size.
- 45-55 where file size is critical and "good" rather than "excellent" quality is acceptable.
- Lossless mode only when pixel-perfect output is required.
SVG is fundamentally different from the formats above — it is a vector format, not raster.
Compression type: Not applicable in the traditional sense. SVGs are XML text files that describe shapes, paths, and colors mathematically.
How it works: Instead of storing a grid of pixels, SVG stores instructions: "draw a circle at coordinates (100, 50) with radius 30 and fill it with #FF0000." This means SVGs are resolution-independent and typically tiny for simple graphics.
Best for: Icons, logos, simple illustrations, charts, and any graphic composed of geometric shapes. A logo that might be 50 KB as a PNG could be 2 KB as an SVG — and the SVG looks perfect at any size, on any screen density.
Not suitable for: Photographs or any image with continuous tonal variation. A photograph "traced" to SVG would be either enormous (millions of tiny shapes) or inaccurate.
Here is a concrete workflow for compressing images for web use that balances quality, file size, and time.
Always begin with the original, uncompressed (or minimally compressed) image. If you are working with photographs, that means the RAW file or the highest-quality export from your camera or editing software. If you are working with screenshots, capture at native resolution.
Never compress an already-compressed image. Each round of lossy compression introduces additional artifacts. Compressing a JPEG that was already saved at quality 80 will produce noticeably worse results than compressing the original directly to the same target size.
This is the single most overlooked optimization. If your image will be displayed at 800 x 600 pixels on the page, there is no reason to serve a 4000 x 3000 pixel original. Resizing to the display dimensions (or 2x for retina displays, so 1600 x 1200) will reduce the file size by 75% or more before any compression is applied.
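The savings from resizing alone are easy to underestimate, because pixel count scales with the product of both dimensions. A quick check in Python:

```python
def pixel_reduction(orig_w, orig_h, new_w, new_h):
    # Fraction of pixel data eliminated by resizing alone,
    # before any compression is applied.
    return 1 - (new_w * new_h) / (orig_w * orig_h)

# 4000x3000 original resized to 1600x1200 (2x an 800x600 display slot):
print(f"{pixel_reduction(4000, 3000, 1600, 1200):.0%}")  # → 84%
```

84% of the data is gone before the compressor even runs — which is why resizing first, then compressing, beats cranking the quality slider down on an oversized original.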
Use your image editor or a browser-based tool to resize. For retina support, export at 2x the CSS display dimensions. For standard displays, 1x is sufficient.
Use the format selection guide above. For most web photographs, the answer in 2026 is: serve AVIF with WebP fallback and JPEG as the final fallback. The HTML <picture> element makes this straightforward:
```html
<picture>
  <source srcset="photo.avif" type="image/avif">
  <source srcset="photo.webp" type="image/webp">
  <img src="photo.jpg" alt="Description" width="800" height="600">
</picture>
```

Browsers automatically select the first format they support, falling back to the plain <img> if none of the sources match.
For each format, use the quality settings recommended earlier in this guide. The key insight is that you do not need to find the "perfect" setting — any quality level in the recommended range will produce good results. The difference between JPEG quality 80 and quality 83 is essentially invisible. Do not spend time agonizing over single-digit quality differences.
Open the compressed image at 100% zoom and compare it to the original. Look for:

- Blockiness in smooth areas like skies and skin.
- Ringing or halos around sharp edges and text.
- Banding in gradients.
- Smearing or loss of fine texture (hair, fabric, foliage).
- Color shifts, especially in saturated areas.
If you see artifacts, increase quality by 5-10 points and re-compress from the original (not from the already-compressed version).
EXIF data embedded in photographs can add 10-100 KB to every file. This metadata includes camera model, lens settings, GPS coordinates, timestamps, and sometimes embedded thumbnails. Unless you specifically need to preserve metadata (for archival or rights management purposes), strip it during compression. Most compression tools have an option to remove metadata.
Stripping metadata also has a privacy benefit — GPS coordinates in your photos can reveal your home address, workplace, or other locations you may not want to broadcast.
If you have dozens or hundreds of images to compress, doing them one at a time is impractical. Here are approaches for batch processing.
Modern browser-based image tools can process multiple files simultaneously. You select all your images, configure quality settings once, and the tool processes them in parallel using your computer's CPU. Because processing happens locally in your browser, your images never leave your machine.
On akousa.net, the image compression and conversion tools support batch processing — you can drop multiple files, choose your target format and quality, and download the results as a ZIP file. Everything runs client-side in your browser using WebAssembly, so your images stay private and processing is fast regardless of your internet speed.
For developers and power users, command-line tools offer maximum flexibility. Most image processing libraries can be scripted to process entire directories with a single command, applying consistent settings across hundreds or thousands of images.
If you use a static site generator (Next.js, Gatsby, Hugo, Astro) or a CMS (WordPress, Ghost), image optimization can be automated as part of your build or upload pipeline. Next.js, for example, has a built-in Image component that automatically serves optimized images in modern formats at appropriate sizes. WordPress plugins can compress images automatically on upload.
Instead of serving one image at one size, serve multiple sizes and let the browser choose the appropriate one based on the viewport. A mobile user on a 375-pixel-wide screen does not need a 1920-pixel-wide image.
```html
<img
  srcset="photo-400.webp 400w,
          photo-800.webp 800w,
          photo-1200.webp 1200w,
          photo-1600.webp 1600w"
  sizes="(max-width: 600px) 100vw,
         (max-width: 1200px) 50vw,
         800px"
  src="photo-800.webp"
  alt="Description"
  width="800"
  height="600"
  loading="lazy"
>
```

This alone can reduce the amount of image data transferred to mobile users by 60-80%.
Add loading="lazy" to images that are not visible in the initial viewport (anything below the fold). The browser will defer loading those images until the user scrolls near them. This does not reduce file sizes, but it reduces the amount of data transferred on initial page load, which directly improves LCP and perceived performance.
Do not lazy-load your hero image or any image visible above the fold — that would make LCP worse, not better.
If your PNG uses millions of colors but the image is a diagram or screenshot that only needs a few hundred distinct colors, reducing the color palette from 24-bit (16.7 million colors) to 8-bit (256 colors) can cut file size by 60-80% with no visible difference for that type of content. This is technically lossy (you are discarding colors), but for non-photographic content the result is visually identical.
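The arithmetic works out before DEFLATE even runs: truecolor PNG stores 3 bytes per pixel (4 with alpha), while 8-bit palette mode stores 1 index byte per pixel plus a palette of at most 256 x 3 bytes. A quick sketch:

```python
def palette_savings(width, height, channels=3):
    # Raw data size reduction from switching 24-bit RGB to
    # 8-bit palette mode, before DEFLATE compression is applied.
    truecolor = width * height * channels
    paletted = width * height * 1 + 256 * 3  # 1 byte/pixel + palette
    return 1 - paletted / truecolor

# A 1200x800 screenshot:
print(f"{palette_savings(1200, 800):.0%}")  # → 67%
```

DEFLATE then typically compresses the index data better than the truecolor data too, which is how tools like pngquant reach the 60-80% figure.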
Progressive JPEGs load in multiple passes — first a blurry version of the full image, then progressively sharper versions. This gives users something to see immediately rather than watching the image load top-to-bottom. Progressive JPEGs are also typically 2-5% smaller than baseline JPEGs for images over 10 KB. There is no reason not to use progressive encoding for web JPEGs.
Some advanced tools can analyze image content and apply different compression levels to different regions. For a portrait photograph, the face gets high quality while the out-of-focus background gets more aggressive compression. This can save an additional 10-20% over uniform compression with no perceptible quality loss in the areas that matter.
This is the number one mistake. Every round of lossy compression introduces additional artifacts. Downloading a JPEG from the web, editing it, and saving it as a JPEG again will produce a worse result than if you had the original. Always go back to the highest-quality source.
I see this constantly. Someone exports a photograph as PNG because they heard "PNG is higher quality." It is lossless, yes — but for photographs, that just means it is 5x larger than a JPEG at quality 85 with no perceptible visual difference. PNG is for graphics, not photographs.
The opposite mistake. JPEG's block-based compression creates visible artifacts around sharp text edges. Screenshots, code snippets, and UI mockups should be PNG or lossless WebP. The file sizes will be reasonable because these images have large areas of uniform color that compress well with lossless algorithms.
Compressing a 6000 x 4000 pixel image to quality 30 to get a small file is worse than resizing it to 1200 x 800 and compressing at quality 82. The resized version will look better at the display size and be a similar file size (or smaller). Always resize to display dimensions first, then compress.
Some people compress at quality 50 thinking "smaller is always better." Below about quality 70-75 for JPEG, artifacts become visible in normal viewing conditions. The user experience damage from ugly images outweighs the performance benefit of smaller files. Find the balance.
If you are still serving only JPEG and PNG in 2026, you are leaving 30-50% file size savings on the table. WebP has universal browser support. AVIF has broad support and is growing. The <picture> element makes serving modern formats with fallbacks trivial.
You do not need to install software or pay for subscriptions to compress images effectively.
The most privacy-friendly option is tools that run entirely in your browser. Your images never leave your computer — all processing happens locally using JavaScript and WebAssembly. This is ideal for sensitive images (medical, legal, personal) or when you are on a slow connection.
akousa.net offers a full suite of browser-based image tools including compression, format conversion (JPEG to WebP, PNG to AVIF, and more), resizing, and batch processing. Everything processes locally — your files stay on your machine.
Here are typical file size reductions you can expect for a 4000 x 3000 pixel photograph, starting from an uncompressed 36 MB bitmap:
| Format | Quality | File Size | Reduction | Visual Quality |
|---|---|---|---|---|
| PNG (lossless) | Max | 18.2 MB | 49% | Perfect |
| JPEG | 95 | 3.8 MB | 89% | Near-perfect |
| JPEG | 85 | 1.4 MB | 96% | Excellent |
| JPEG | 75 | 820 KB | 97.7% | Good |
| WebP (lossy) | 82 | 980 KB | 97.3% | Excellent |
| WebP (lossless) | - | 14.1 MB | 61% | Perfect |
| AVIF | 65 | 520 KB | 98.6% | Excellent |
| AVIF | 50 | 340 KB | 99.1% | Good |
These numbers are approximate and vary significantly with image content. Photographs with lots of fine detail (landscapes, cityscapes) tend to compress less efficiently than images with smoother content (portraits, product shots).
After compressing your images, measure the actual impact on your website performance.
Run your pages through PageSpeed Insights before and after compression. Look at the LCP metric specifically — this is where image optimization has the most direct impact. Also check the "Properly size images" and "Serve images in next-gen formats" audits.
Open DevTools, go to the Network tab, filter by "Img," and check the total transfer size. Compare before and after. You should see a dramatic reduction.
If you have analytics set up, monitor your CWV scores over time. You should see LCP improvements within days of deploying optimized images as Google collects field data from real users.
Here is the short version of everything above, distilled into actionable rules:
- Always compress from the original source, never from an already-compressed file.
- Resize to display dimensions (2x for retina) before compressing.
- Match the format to the content: AVIF/WebP/JPEG for photographs, PNG or lossless WebP for screenshots, text, and graphics.
- Compress within the recommended quality range for the format; do not agonize over single-digit differences.
- Strip metadata unless you need it for archival or rights management.
- Use the <picture> element to serve AVIF and WebP with a JPEG fallback.
- Add loading="lazy" to anything not in the initial viewport.

Follow these rules and you will cut your image payload by 70-90% without any visible quality loss. Your pages will load faster, your Core Web Vitals will improve, your bandwidth costs will drop, and your users — especially those on mobile connections — will thank you with longer sessions and lower bounce rates.
The tools are free. The knowledge is here. The only thing left is to go compress your images.