Convert Unix timestamps to human-readable dates and vice versa. Learn what epoch time is, how it works, and use free converter tools for instant results.
If you have ever looked at a database column and seen a number like 1711540800, you have encountered a Unix timestamp. It looks meaningless at first glance, but that single integer represents an exact moment in time — March 27, 2024, at noon UTC, to be precise.
Unix timestamps are one of the most universal ways to represent time in software. They appear in API responses, log files, database records, JWT tokens, file metadata, and countless other places. Understanding how they work is not optional for developers. It is a fundamental skill that prevents bugs, saves debugging time, and makes you better at working with time-sensitive data.
This guide covers everything you need to know about Unix timestamps and epoch time: what they are, how to convert them, common pitfalls that trip up even experienced developers, and practical code examples in multiple languages. If you need a quick conversion right now, try our Epoch Converter or Timestamp Converter tool.
Unix time, also called epoch time or POSIX time, counts the number of seconds that have elapsed since January 1, 1970, 00:00:00 UTC. That specific moment — midnight on the first day of 1970 in Coordinated Universal Time — is called the Unix epoch.
The concept is beautifully simple. Instead of storing dates as complex strings with varying formats, time zones, and calendar quirks, you store a single number. The timestamp 0 means the epoch itself. The timestamp 86400 means exactly one day later (24 hours times 60 minutes times 60 seconds). The timestamp 1000000000 marks September 9, 2001, at 01:46:40 UTC — a moment that Unix enthusiasts celebrated as the "billennium."
Negative timestamps represent dates before the epoch. The timestamp -86400 is December 31, 1969. This means Unix time can represent dates well into the past, though the practical range depends on whether the system uses 32-bit or 64-bit integers.
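These offsets are easy to verify in code. A quick sketch using Python's standard datetime module:

```python
from datetime import datetime, timezone

print(datetime.fromtimestamp(0, tz=timezone.utc))        # the epoch itself
print(datetime.fromtimestamp(86400, tz=timezone.utc))    # exactly one day later
print(datetime.fromtimestamp(-86400, tz=timezone.utc))   # one day before the epoch
print(datetime.fromtimestamp(1_000_000_000, tz=timezone.utc))  # the "billennium"
```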
The beauty of Unix timestamps is that they are timezone-agnostic at their core. The number 1711540800 means the same instant in time regardless of whether you are in Tokyo, London, or New York. It is only when you convert that number to a human-readable date that time zones enter the picture.
One of the most common sources of confusion is the difference between second-precision and millisecond-precision timestamps.
A Unix timestamp in seconds is typically 10 digits long (as of 2024, numbers like 1711540800). A millisecond timestamp is 13 digits long (1711540800000). The millisecond version is simply the seconds value multiplied by 1000, with additional precision for sub-second timing.
Different systems use different precisions:
| Precision | Typical digits | Where you will see it |
|---|---|---|
| Seconds | 10 | C time(), Python time.time() (returns a float), MySQL UNIX_TIMESTAMP() |
| Milliseconds | 13 | JavaScript Date.now(), Java System.currentTimeMillis(), many REST APIs |
| Microseconds | 16 | Python time.time_ns() // 1000, PostgreSQL extract(epoch) |
| Nanoseconds | 19 | Go time.Now().UnixNano(), some high-frequency trading systems |

Mixing these up is a classic bug. If you pass a millisecond timestamp to a function expecting seconds, you will get a date thousands of years in the future. If you pass seconds where milliseconds are expected, your date will land in January 1970, just a few weeks after the epoch.
```javascript
// Common mistake: mixing seconds and milliseconds
const timestampInSeconds = 1711540800;

// Wrong: the JavaScript Date constructor expects milliseconds
new Date(timestampInSeconds);
// Result: Tue Jan 20 1970 — not what you wanted

// Correct: multiply by 1000
new Date(timestampInSeconds * 1000);
// Result: Wed Mar 27 2024 12:00:00 UTC
```

When working with timestamps from external APIs, always check the documentation to confirm whether the value is in seconds or milliseconds. If the documentation is unclear, the digit count is your best clue: 10 digits means seconds, 13 means milliseconds.
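That digit-count heuristic is simple enough to encode directly. A minimal sketch in Python (the function name guess_precision is ours, not a library API, and it only makes sense for current-era values):

```python
def guess_precision(ts: int) -> str:
    """Guess a Unix timestamp's precision from its digit count (current-era values only)."""
    return {10: "seconds", 13: "milliseconds",
            16: "microseconds", 19: "nanoseconds"}.get(len(str(abs(ts))), "unknown")

print(guess_precision(1711540800))     # seconds
print(guess_precision(1711540800000))  # milliseconds
```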
If you have heard of the Y2K bug, the Year 2038 problem is its sequel. Many older systems store Unix timestamps as signed 32-bit integers. A signed 32-bit integer can hold values up to 2,147,483,647. That maximum value corresponds to January 19, 2038, at 03:14:07 UTC.
One second later, the integer overflows. On systems that do not handle this correctly, the timestamp wraps around to the minimum negative value, which represents a date in December 1901. This could cause crashes, data corruption, or incorrect calculations in any software that still relies on 32-bit timestamps.
The fix is straightforward: use 64-bit integers. A signed 64-bit integer can represent timestamps until approximately 292 billion years from now, which should be sufficient. Most modern operating systems, programming languages, and databases already use 64-bit time internally. However, legacy systems, embedded devices, and some file formats still use 32-bit timestamps.
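The wraparound is easy to demonstrate by forcing a value through a signed 32-bit representation. A Python sketch using the standard struct module:

```python
import struct
from datetime import datetime, timezone

MAX_INT32 = 2**31 - 1  # 2147483647 → 2038-01-19 03:14:07 UTC

# Simulate a signed 32-bit counter ticking one second past its maximum
wrapped = struct.unpack("<i", struct.pack("<I", (MAX_INT32 + 1) & 0xFFFFFFFF))[0]
print(wrapped)  # -2147483648
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))  # back to December 1901
```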
If you are building new software today, make sure your timestamp storage uses 64-bit integers. If you are maintaining legacy systems, audit your time-handling code before 2038 arrives.
Unix timestamps are always in UTC. This is both their greatest strength and a common source of mistakes. The timestamp itself has no time zone — it represents an absolute point in time. But the moment you display it to a user, you need to convert it to their local time zone.
Consider the timestamp 1711540800. In different time zones:

- UTC: March 27, 2024, 12:00
- London (GMT, UTC+0): March 27, 2024, 12:00
- New York (EDT, UTC-4): March 27, 2024, 08:00
- Tokyo (JST, UTC+9): March 27, 2024, 21:00
The underlying moment is identical. Only the human representation changes. This is why you should always store timestamps in UTC and only convert to local time at the presentation layer.
A related pitfall is Daylight Saving Time. When clocks spring forward, one hour is skipped. When they fall back, one hour repeats. If you are working with local times and converting to timestamps, you need to handle these transitions carefully. The safest approach is to always work with UTC internally and only apply time zone conversions when displaying dates to users.
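The store-UTC-internally approach looks like this in Python, assuming Python 3.9+ for the standard zoneinfo module and an available tz database:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

ts = 1711540800  # one absolute instant, stored as UTC

utc = datetime.fromtimestamp(ts, tz=timezone.utc)
print(utc.isoformat())                               # 2024-03-27T12:00:00+00:00

# Convert only at the presentation layer
print(utc.astimezone(ZoneInfo("America/New_York")))  # 2024-03-27 08:00:00-04:00
print(utc.astimezone(ZoneInfo("Asia/Tokyo")))        # 2024-03-27 21:00:00+09:00
```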
Need to handle time zone conversions? Our Time Zone Converter makes it easy to see the same moment across multiple zones.
Here are practical examples for the most common operations: getting the current timestamp, converting a timestamp to a date, and converting a date to a timestamp.
JavaScript:

```javascript
// Current timestamp in seconds
const now = Math.floor(Date.now() / 1000);

// Timestamp to date
const date = new Date(1711540800 * 1000);
console.log(date.toISOString()); // "2024-03-27T12:00:00.000Z"
console.log(date.toLocaleString("en-US", { timeZone: "America/New_York" }));

// Date to timestamp
const ts = Math.floor(new Date("2024-03-27T12:00:00Z").getTime() / 1000);
console.log(ts); // 1711540800
```

Python:

```python
import time
from datetime import datetime, timezone

# Current timestamp
now = int(time.time())

# Timestamp to date
dt = datetime.fromtimestamp(1711540800, tz=timezone.utc)
print(dt.isoformat())  # "2024-03-27T12:00:00+00:00"

# Date to timestamp
dt = datetime(2024, 3, 27, 12, tzinfo=timezone.utc)
ts = int(dt.timestamp())
print(ts)  # 1711540800
```

PHP:

```php
// Current timestamp
$now = time();

// Timestamp to date
$date = date('Y-m-d H:i:s', 1711540800);
echo $date; // "2024-03-27 12:00:00" (if the server timezone is UTC)

// With a specific timezone
$dt = new DateTime('@1711540800');
$dt->setTimezone(new DateTimeZone('America/New_York'));
echo $dt->format('Y-m-d H:i:s T'); // "2024-03-27 08:00:00 EDT"

// Date to timestamp
$ts = strtotime('2024-03-27 12:00:00 UTC');
echo $ts; // 1711540800
```

SQL:

```sql
-- MySQL: current timestamp
SELECT UNIX_TIMESTAMP();

-- MySQL: timestamp to date (rendered in the session time zone, here UTC)
SELECT FROM_UNIXTIME(1711540800);
-- Result: '2024-03-27 12:00:00'

-- MySQL: date to timestamp (interpreted in the session time zone)
SELECT UNIX_TIMESTAMP('2024-03-27 12:00:00');

-- PostgreSQL: current timestamp
SELECT EXTRACT(EPOCH FROM NOW())::INTEGER;

-- PostgreSQL: timestamp to date
SELECT TO_TIMESTAMP(1711540800);
-- Result: 2024-03-27 12:00:00+00

-- PostgreSQL: date to timestamp
SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '2024-03-27 12:00:00 UTC')::INTEGER;
```

Shell:

```bash
# Current timestamp
date +%s

# Timestamp to date (GNU/Linux)
date -d @1711540800

# Timestamp to date (macOS)
date -r 1711540800

# Date to timestamp (GNU/Linux)
date -d "2024-03-27 12:00:00 UTC" +%s
```

Do not store Unix timestamps as strings in your database. Use integer or bigint column types. String storage wastes space, prevents efficient sorting and range queries, and invites parsing errors.
Unix time does not account for leap seconds. A Unix day is always exactly 86,400 seconds, even though actual UTC days occasionally have 86,401 seconds (when a leap second is inserted). In practice, most systems handle this by repeating a second or smearing the adjustment over a longer period. For the vast majority of applications, you can safely ignore leap seconds. If you are working on scientific, astronomical, or high-precision timing applications, you will need a different time standard like TAI.
Some languages return timestamps as floating-point numbers (Python's time.time() returns a float, for example). Floating-point arithmetic can introduce tiny rounding errors. If you need exact second precision, cast to an integer. If you need sub-second precision, use dedicated high-resolution time APIs rather than floating-point seconds.
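A small Python illustration of the difference:

```python
import time

t = time.time()       # float seconds; fine for most uses, but floats can round
ns = time.time_ns()   # integer nanoseconds; no floating-point rounding
whole = int(t)        # cast when you need exact whole seconds
print(whole, ns // 1_000_000_000)
```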
The timestamp 1711540800 is always UTC. If your application creates a Date object and displays it without specifying a time zone, the result depends on the server or browser's local time zone. This leads to bugs that only appear in certain time zones or during DST transitions. Always be explicit about time zones when converting timestamps to human-readable dates.
When converting dates like "March 27, 2024" to a timestamp, the result depends on what time you assume. Midnight UTC gives you 1711497600. Midnight in US Eastern time (EDT, UTC-4) gives you 1711512000 (four hours later in UTC). If two systems interpret the same date string with different time zone assumptions, they will produce different timestamps, leading to records that appear on the wrong day.
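A Python sketch of how the same naive date yields two different timestamps depending on the assumed zone (zoneinfo requires Python 3.9+):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

naive = datetime(2024, 3, 27)  # "March 27, 2024", no time zone attached

as_utc = naive.replace(tzinfo=timezone.utc)
as_eastern = naive.replace(tzinfo=ZoneInfo("America/New_York"))

print(int(as_utc.timestamp()))      # 1711497600
print(int(as_eastern.timestamp()))  # 1711512000, four hours later
```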
Unix timestamps are ideal for:

- Storing points in time in databases, logs, and caches
- Sorting and comparing events with plain integer arithmetic
- Computing durations between two instants
- Exchanging times between services without time zone ambiguity
They are less ideal for:

- Displaying dates to users, which always requires a time zone conversion first
- Scheduling future events in local time, since time zone rules can change
- Representing calendar concepts like birthdays or recurring events, which are not single instants
For quick date arithmetic and comparisons, our Date Calculator can help you find the difference between dates or add and subtract time periods without manual calculations.
Always log in UTC. When your application writes log entries, use UTC timestamps. This makes it possible to correlate events across services running in different time zones.
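With Python's standard logging module, for example, one common way to force UTC timestamps is to point the formatter's converter at time.gmtime (the format string here is just an illustration):

```python
import logging
import time

handler = logging.StreamHandler()
formatter = logging.Formatter("%(asctime)sZ %(levelname)s %(message)s",
                              datefmt="%Y-%m-%dT%H:%M:%S")
formatter.converter = time.gmtime  # render log times in UTC, not server-local time
handler.setFormatter(formatter)
logging.getLogger("app").addHandler(handler)
```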
Validate timestamp ranges. If your application accepts timestamps from user input or external APIs, validate that they fall within a reasonable range. A 10-digit number is a plausible current-era timestamp in seconds; 13 digits suggests milliseconds, 16 microseconds, and 19 nanoseconds. Anything in between is probably an error.
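A minimal range check might look like this in Python (the function name and the year-2100 cutoff are arbitrary choices, not a standard):

```python
def is_plausible_epoch_seconds(ts: int,
                               lo: int = 0,
                               hi: int = 4102444800) -> bool:  # 2100-01-01 UTC
    """Reject values outside a sane window for second-precision timestamps."""
    return lo <= ts <= hi

print(is_plausible_epoch_seconds(1711540800))     # True
print(is_plausible_epoch_seconds(1711540800000))  # False: milliseconds slipped in
```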
Use ISO 8601 for interchange. When you need a human-readable date format in APIs, use ISO 8601 (2024-03-27T00:00:00Z). It is unambiguous, sortable as a string, and supported by every major programming language. You can convert between ISO 8601 and Unix timestamps easily.
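In Python, the round trip between the two formats is a one-liner in each direction (fromisoformat also accepts a trailing "Z" from Python 3.11 on; the explicit "+00:00" offset works everywhere):

```python
from datetime import datetime, timezone

# Unix timestamp → ISO 8601
iso = datetime.fromtimestamp(1711540800, tz=timezone.utc).isoformat()
print(iso)  # 2024-03-27T12:00:00+00:00

# ISO 8601 → Unix timestamp
ts = int(datetime.fromisoformat("2024-03-27T12:00:00+00:00").timestamp())
print(ts)  # 1711540800
```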
Test across time zones. Set your development environment to different time zones and verify that your date handling still works. Many timestamp bugs only surface when the server and client are in different zones.
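On Unix-like systems you can simulate this from a single test process by changing the TZ environment variable and calling time.tzset (not available on Windows):

```python
import os
import time
from datetime import datetime

ts = 1711540800
for zone in ("UTC", "America/New_York", "Asia/Tokyo"):
    os.environ["TZ"] = zone
    time.tzset()  # Unix only: re-read TZ from the environment
    # Naive fromtimestamp uses the process-local zone, so the rendering changes
    print(zone, datetime.fromtimestamp(ts))
```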
| Timestamp | Date (UTC) |
|---|---|
| 0 | January 1, 1970, 00:00:00 |
| 1000000000 | September 9, 2001, 01:46:40 |
| 1234567890 | February 13, 2009, 23:31:30 |
| 1711540800 | March 27, 2024, 12:00:00 |
| 2000000000 | May 18, 2033, 03:33:20 |
| 2147483647 | January 19, 2038, 03:14:07 (32-bit max) |
Unix timestamps solve a genuinely hard problem — representing time in a way that is compact, unambiguous, and universal. Once you understand that a timestamp is just seconds since 1970 UTC, everything else follows naturally: conversions are arithmetic, comparisons are integer operations, and storage is a single column.
The pitfalls are predictable and avoidable. Know whether your system uses seconds or milliseconds. Always be explicit about time zones. Use 64-bit integers. Test your date handling across time zones and DST boundaries.
For instant conversions without writing code, use our Epoch Converter to convert any timestamp to a human-readable date, or the Timestamp Converter to go the other direction. Both tools handle seconds, milliseconds, and multiple output formats.