Find out if a website is down for everyone or just you. Learn how to check server status, diagnose outages, and get real-time downtime alerts for 1700+ services.
You open your browser, type in a URL, and nothing loads. The page spins. Maybe you get a white screen, maybe a cryptic error code, maybe the browser just gives up and tells you the site can't be reached. Your first thought is always the same: is this website actually down, or is something wrong on my end?
It is a surprisingly difficult question to answer on your own. The internet is not a single thing that is either "on" or "off." Between your device and the server hosting a website, there are dozens of systems that can fail independently. Your Wi-Fi router, your ISP, DNS servers, content delivery networks, load balancers, application servers, databases — any one of them going down can make a website unreachable for you while it works perfectly fine for someone in another city.
This post walks through how websites actually go down, what the different failure modes look like, how to diagnose them, and the tools you can use to figure out whether the problem is on your end or theirs.
Before you can understand why a website goes down, you need to understand what happens when a website works. Every time you type a URL into your browser and hit enter, a chain of events fires in sequence. If any link in that chain breaks, the page does not load.
The first step is DNS resolution. DNS stands for Domain Name System, and it is essentially the phone book of the internet. When you type example.com into your browser, your computer does not know what that means. Computers communicate using IP addresses — numerical addresses like 93.184.216.34 for IPv4 or longer hexadecimal strings for IPv6.
Your browser asks a DNS resolver (usually provided by your ISP, or a public one like Google's 8.8.8.8 or Cloudflare's 1.1.1.1) to look up the IP address associated with that domain name. The resolver may have it cached from a recent lookup. If not, it queries a chain of authoritative DNS servers — root servers, TLD servers, and finally the domain's own nameservers — to get the answer.
This process usually takes milliseconds. But when it fails, it fails silently and confusingly. Your browser will show "DNS_PROBE_FINISHED_NXDOMAIN" or "This site can't be reached" or "Server IP address could not be found." The website might be perfectly healthy, but if DNS resolution fails on your end, you will never reach it.
DNS failures can happen for several reasons:

- The domain has expired and its records have been removed.
- The domain's DNS records are misconfigured or were recently changed.
- Your DNS resolver is slow, down, or unreachable.
- A stale cached entry on your device or your resolver is pointing at the wrong address.
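The resolution step can be reproduced in a few lines of Python using the system resolver. This is an illustrative sketch (`resolve` is a hypothetical helper, not a standard API): if it raises, DNS is the culprit, before any connection to the website is even attempted.

```python
# Minimal sketch: resolve a hostname the way a browser's first step does.
# A socket.gaierror here means DNS resolution itself failed.
import socket

def resolve(hostname: str) -> list[str]:
    """Return the unique IP addresses a hostname resolves to."""
    try:
        infos = socket.getaddrinfo(hostname, None)
    except socket.gaierror as exc:
        raise RuntimeError(f"DNS resolution failed for {hostname}: {exc}") from exc
    # Each entry is (family, type, proto, canonname, sockaddr);
    # the IP address is the first element of sockaddr.
    return sorted({info[4][0] for info in infos})
```

Because this uses the operating system's resolver, the result reflects your local DNS configuration and cache, which is exactly why a site can "not exist" for you while resolving fine for everyone else.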
Once your browser has the IP address, it opens a TCP connection to the server. This involves a three-way handshake (SYN, SYN-ACK, ACK) that establishes a reliable communication channel. If the server is unreachable — because it is powered off, because a firewall is blocking traffic, because a network link between you and the server is broken — the TCP handshake times out and your browser shows a connection error.
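This step can be tested in isolation with a short Python sketch (`tcp_reachable` is an illustrative name): it attempts the handshake and reports whether it completed, without sending any HTTP at all.

```python
# Minimal sketch: check whether a TCP connection to a host/port can be
# established, mirroring what the browser does after DNS resolution.
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if the TCP three-way handshake completes within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and unreachable networks.
        return False
```

A `False` here, when DNS already resolved, narrows the problem down to the network path, a firewall, or the server itself being offline.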
For HTTPS websites (which is nearly all of them now), there is an additional TLS handshake after the TCP connection is established. Your browser and the server negotiate encryption parameters, the server presents its SSL/TLS certificate, and your browser verifies that the certificate is valid, not expired, and issued for the correct domain. If any of this fails, you get certificate errors — those scary "Your connection is not private" warnings.
After DNS resolution, TCP connection, and TLS handshake, your browser finally sends an HTTP request to the server. The server processes the request and sends back a response with a status code. This is where the familiar error codes come from.
When a website goes down, the error code you see (if you see one at all) tells you a lot about what went wrong. Not all "down" situations are the same.
Server errors (5xx status codes such as 500 Internal Server Error, 502 Bad Gateway, 503 Service Unavailable, and 504 Gateway Timeout) mean the server received your request but could not fulfill it. The problem is on the server side.
Client errors (4xx status codes such as 404 Not Found and 403 Forbidden) mean the server is running fine but rejected your specific request. Sometimes users confuse client errors with the website being down: a 404 on one page does not mean the whole site is offline.
The most frustrating case is when you get no HTTP status code at all. The browser just spins and eventually times out. This means the request never reached the server, or the server never responded. It could be a DNS failure, a network routing issue, a firewall blocking your traffic, or the server being completely offline.
Understanding the common causes of outages helps you diagnose them faster.
The most common reason popular websites go down is traffic spikes. When a website gets more simultaneous visitors than its infrastructure can handle, response times increase, memory fills up, and eventually the server starts rejecting connections or crashing entirely. This is why major product launches, viral social media moments, and breaking news events frequently take websites offline.
Modern web applications are deployed multiple times per day. Each deployment is a moment of vulnerability. A bad code change, a missing environment variable, a database migration that locks a table — any of these can take a website down immediately after deployment. Good teams use rolling deployments, canary releases, and automated rollbacks to minimize the blast radius, but mistakes still happen.
Web applications depend heavily on databases. When the database goes down, the application usually goes down with it. Database failures can be caused by disk space running out, connection pool exhaustion, replication lag, deadlocks, or corrupted indexes. Even if the web server is technically running, if every request requires a database query and the database is not responding, users see 500 errors.
Domains expire. DNS records get misconfigured. DNSSEC keys get rotated incorrectly. Registrar accounts get hacked. These are some of the most catastrophic failures because they affect the website at the most fundamental level — if users cannot resolve your domain name, nothing else matters.
Many large websites use Content Delivery Networks (CDNs) to serve content from servers geographically close to users. CDNs like Cloudflare, Akamai, and AWS CloudFront sit between users and the origin server. When the CDN has an issue, it can affect millions of websites simultaneously.
CDN outages are particularly confusing because they can be regional. The website might be down for all users in Europe but working fine in North America, because different CDN edge nodes are affected.
TLS certificates have expiration dates. When a certificate expires and is not renewed, browsers will show a security warning and refuse to load the page. This has taken down major services — including, embarrassingly, services owned by some of the largest tech companies in the world. Automated certificate renewal (like Let's Encrypt with certbot) has reduced this problem significantly, but it still happens.
Modern websites depend on dozens of third-party services: payment processors, authentication providers, email services, analytics platforms, CDNs, cloud hosting providers. When any of these go down, the websites that depend on them can go down too. A single AWS region having an outage can take down thousands of websites and services simultaneously.
Now for the practical part. When a website is not loading for you, here is how to figure out what is going on.
The simplest first step is to try loading the website from a different device or a different network. If it loads on your phone using mobile data but not on your laptop using Wi-Fi, the problem is with your local network, not the website.
The fastest way to check if a website is down for everyone is to use a dedicated monitoring service. These services continuously check thousands of popular websites from multiple locations around the world and can tell you immediately whether a site is experiencing an outage.
On akousa.net, we run a down detector that monitors 1,765 services across 8 categories. It shows real-time status, outage history, user reports, and even lets you compare the status of multiple services side by side. If you are wondering whether a particular service is having issues, that is the quickest way to find out without having to do any manual diagnosis.
The advantage of using a centralized monitoring tool is that it checks from infrastructure that is separate from your own network. If the down detector says the service is up, the problem is almost certainly on your end. If it shows the service is down, you know it is not just you.
If you want to dig deeper, start with DNS. Open a terminal or command prompt and run:
```
nslookup example.com
```

Or, for more detailed information:
```
dig example.com
```

If DNS resolution fails or returns unexpected results, you have found your culprit. Try switching to a different DNS resolver temporarily. On most systems, you can configure your network adapter to use 8.8.8.8 (Google) or 1.1.1.1 (Cloudflare) instead of your ISP's default DNS.
You can also flush your local DNS cache to clear out any stale entries:
On Windows:

```
ipconfig /flushdns
```

On macOS:

```
sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder
```

On Linux with systemd-resolved:

```
sudo systemd-resolve --flush-caches
```

(Or simply restart the systemd-resolved service.) If DNS is resolving correctly but the site still will not load, the problem might be somewhere in the network path between you and the server. Use traceroute to see every hop your packets take:
```
traceroute example.com
```

On Windows, the command is tracert example.com. Look for hops where the latency suddenly spikes or where you see * * * (timeouts). This can tell you whether the problem is with your ISP, a transit network, or the destination server itself.
Use curl to make a direct HTTP request and see exactly what the server returns:
```
curl -I https://example.com
```

The -I flag fetches only the headers, which is enough to see the status code. If you get a 200, the server is responding. If you get a 5xx error, the server is having problems. If the connection times out, the server is unreachable.
For more verbose output that shows the DNS resolution time, TCP connection time, TLS handshake time, and time to first byte:
```
curl -w "\nDNS: %{time_namelookup}s\nConnect: %{time_connect}s\nTLS: %{time_appconnect}s\nTTFB: %{time_starttransfer}s\nTotal: %{time_total}s\n" -o /dev/null -s https://example.com
```

This breaks down exactly where the delay is occurring. If DNS lookup takes 5 seconds, your DNS resolver is slow. If the connection time is high, there is network latency. If TTFB is high, the server is processing slowly.
If the page partially loads but something is broken, open your browser's developer tools (F12 or Ctrl+Shift+I) and check the Network tab. Look for failed requests — they will be highlighted in red. Check the Console tab for JavaScript errors. These tools can show you exactly which resource failed to load and why.
During major outages, social media lights up fast. Search Twitter/X for the service name plus "down" and sort by recent. Many large services also maintain official status pages (like status.github.com or status.aws.amazon.com) where they post incident updates in real time.
If a website is unreachable from every device on your network but a down detector reports it as up, the problem almost always points to a DNS or routing issue with your home network. Try these in order:
- Restart your router and the device you are browsing on.
- Flush your local DNS cache.
- Set your DNS resolver to 1.1.1.1 or 8.8.8.8 in your router settings or on your device directly.

Slow loading without a complete failure usually indicates server-side performance problems rather than an outage. The server is responding, but it is struggling. This could be due to high traffic, a slow database query, an overloaded application server, or a CDN that is not properly caching static assets.
Check the Network tab in your browser's developer tools. Look at the TTFB (Time To First Byte) for the main document request. If it is over 2-3 seconds, the server is processing slowly. If the TTFB is fast but the page still loads slowly, the issue is with the size or number of resources being loaded (images, scripts, stylesheets).
If your browser shows a certificate error, do not bypass it unless you know exactly what you are doing. The most common causes are:
- The certificate has expired and was not renewed by the site's operators.
- Your device's clock is wrong, which makes otherwise-valid certificates appear expired or not yet valid.
- The certificate covers www.example.com but you are visiting example.com (or vice versa). Try the other version of the URL.

If the website works in another browser but not in yours, that usually points to a browser-specific issue rather than the website being down. Clear your browser cache and cookies for that site. Disable browser extensions (ad blockers and privacy extensions sometimes break websites). Try an incognito or private browsing window, which runs without extensions and with a clean cache.
If the main pages load but a specific feature (login, checkout, or search) does not work, you are typically looking at a partial outage or a third-party dependency failure. The main website is up, but a service it depends on is down. Login systems, payment processing, search functionality, and media uploads often depend on separate backend services or third-party APIs that can fail independently.
If you run a website yourself, relying on users to tell you about outages is not a viable strategy. You need proactive monitoring.
At a minimum, set up an external HTTP check that hits your website every 1-5 minutes from multiple geographic locations. If the check fails from multiple locations simultaneously, trigger an alert. Services like UptimeRobot, Pingdom, and Better Stack provide this functionality.
A 200 status code does not mean everything is working. Your homepage might return a 200 while your API is down, your database is unreachable, or your payment processing is broken. Effective monitoring checks specific functionality, not just whether the server returns a success code.
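As a sketch, a functionality-aware check can pair the status code with a marker that only appears in a correctly rendered page, such as text produced by a database query (`healthy` and the marker strings are illustrative assumptions, not a real monitoring API):

```python
# Minimal sketch: a health check that verifies functionality, not just a
# 200 status. The marker should be content that only a working backend
# can produce (e.g. a heading rendered from database results).

def healthy(status: int, body: str, required_marker: str) -> bool:
    """Pass only if the status is a success AND the expected content is present."""
    return 200 <= status < 300 and required_marker in body
```

With this check, a server that returns 200 with an error page in the body still fails, which is exactly the failure mode a bare status check misses.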
Consider monitoring:

- Critical API endpoints, not just the homepage.
- The full login flow, end to end.
- Checkout or payment processing.
- A page that requires a database query to render.
- SSL certificate expiry dates.
- DNS resolution for your domain.
The alert itself matters as much as the monitoring. An alert that goes to an email inbox nobody checks is useless. Configure alerts to go to a channel your team actually watches — Slack, PagerDuty, SMS, whatever works for your team. Set up escalation policies so that if the primary on-call person does not acknowledge an alert within a few minutes, it escalates to someone else.
Avoid alert fatigue by setting reasonable thresholds. A single failed check is not necessarily an outage — networks are noisy and transient failures happen. Require multiple consecutive failures before triggering an alert. But do not set the threshold so high that real outages go unnoticed for 15 minutes.
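The consecutive-failure rule can be sketched in a few lines (`FailureStreakAlerter` is a hypothetical helper, not a feature of any particular monitoring product):

```python
# Minimal sketch: only raise an alert after N consecutive failed checks,
# so a single transient network blip does not page anyone.

class FailureStreakAlerter:
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.streak = 0

    def record(self, check_passed: bool) -> bool:
        """Record one check result; return True if an alert should fire."""
        if check_passed:
            self.streak = 0
            return False
        self.streak += 1
        # Fire exactly once, on the Nth consecutive failure.
        return self.streak == self.threshold
```

With a threshold of 3 and checks every minute, a real outage alerts within about three minutes while one-off glitches stay silent, which is the trade-off the paragraph above describes.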
If you have confirmed that a website you need is genuinely down, here is what you can do:

- Check the service's official status page for incident updates and an estimated time to recovery.
- Watch social media and community forums for reports and workarounds from other users.
- Set up a downtime alert so you are notified as soon as the service comes back.
- Wait it out. Most outages are resolved within minutes to a few hours, and constantly retrying will not speed things up.
Website outages are an inherent part of how the internet works. No system has 100% uptime. Even the most reliable cloud providers advertise 99.99% uptime, which still allows for about 52 minutes of downtime per year. Smaller services may experience significantly more.
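That 52-minute figure is simple arithmetic, which generalizes to any uptime guarantee:

```python
# Worked example: allowed downtime per year for a given uptime percentage.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(uptime_percent: float) -> float:
    return MINUTES_PER_YEAR * (1 - uptime_percent / 100)

# 99.99% uptime leaves about 52.6 minutes of downtime per year;
# 99.9% leaves about 8.8 hours; 99% leaves more than 3.5 days.
```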
The difference between a minor inconvenience and a major disaster is preparation — both for the people running the website and for the users who depend on it. Understanding how outages happen, knowing how to diagnose them, and having tools that give you real-time visibility into service status turns a confusing experience into one you can navigate quickly.
The next time a website does not load, you will know exactly where to start looking.