Unix Timestamp Converter

Convert between Unix epoch timestamps and human-readable dates.

Current Unix Timestamp
1776694262

About Unix Timestamps:

  • Counts seconds since January 1, 1970 00:00:00 UTC
  • Used in programming, databases, and APIs
  • Millisecond timestamps have 13 digits; second timestamps have 10

What is a Unix Timestamp?

A Unix timestamp is the number of seconds that have elapsed since the Unix epoch — midnight UTC on 1 January 1970. It is a single integer, timezone-independent, language-independent, and trivially sortable. Every major operating system, programming language, and database stores "time" internally as some variant of this count: seconds on traditional Unix, milliseconds in JavaScript and Java, microseconds in PostgreSQL, nanoseconds in Go. When a system says an event happened at 1700000000, that means 14 November 2023 at 22:13:20 UTC — roughly 53 years and 10.5 months after the epoch.
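
As a quick illustration, a Python sketch of the conversion in both directions, using only the standard library:

    from datetime import datetime, timezone

    ts = 1700000000                                      # seconds since the epoch
    dt = datetime.fromtimestamp(ts, tz=timezone.utc)
    print(dt.isoformat())                                # 2023-11-14T22:13:20+00:00

    back = int(dt.timestamp())                           # datetime -> Unix seconds
    assert back == ts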

The epoch choice was pragmatic rather than profound: Ken Thompson and Dennis Ritchie needed a reference point when designing early Unix in the late 1960s, and 1970 was close enough to "now" to fit comfortably in a 32-bit signed integer. That decision, along with a small API choice to represent time as an integer rather than a complex date struct, became one of the most durable portable conventions in computing.

Seconds vs Milliseconds: The 10-vs-13-Digit Rule

Classic Unix APIs (time(), mtime, PHP time()) return seconds — a 10-digit number for any time between 2001 and 2286. JavaScript's Date.now(), Java System.currentTimeMillis(), and most JSON APIs return milliseconds — 13 digits in the same range. Microseconds (16 digits) and nanoseconds (19 digits) appear in high-resolution monitoring, tracing, and event-sourcing systems.

Rule of thumb: 10 digits = seconds, 13 digits = milliseconds, 16 digits = microseconds. Off-by-1000 errors cut both ways: treating a millisecond value as seconds places the timestamp tens of thousands of years in the future, while treating a seconds value as milliseconds places it in January 1970.
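
One way to apply the rule in code is a digit-count heuristic like the following Python sketch (the function name and cut-offs are illustrative, not a standard API):

    def to_seconds(ts: int) -> float:
        # Normalise a raw timestamp to seconds using the digit-count rule of thumb.
        digits = len(str(abs(int(ts))))
        if digits >= 19:                  # nanoseconds
            return ts / 1_000_000_000
        if digits >= 16:                  # microseconds
            return ts / 1_000_000
        if digits >= 13:                  # milliseconds
            return ts / 1_000
        return float(ts)                  # assume seconds

    print(to_seconds(1700000000))         # 1700000000.0  (already seconds)
    print(to_seconds(1700000000000))      # 1700000000.0  (was milliseconds)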

The Year 2038 Problem

A 32-bit signed integer overflows at 2³¹ − 1 = 2,147,483,647 seconds after the epoch — which is 03:14:07 UTC on 19 January 2038. At that instant, any system still storing Unix time in a 32-bit signed time_t will wrap to −2³¹, interpreting the moment as 13 December 1901. This is the direct analogue of the Y2K problem and it is real: embedded devices, older filesystems (ext3), and some database column types still use 32-bit time.

The fix is 64-bit time. Modern Linux, macOS, and Windows have used 64-bit time_t for years; 64-bit signed seconds do not wrap until approximately the year 292,277,026,596, far beyond any practical concern. For any new schema or protocol, specify 64-bit timestamps explicitly, even if 32-bit "looks fine" today.
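
The wrap-around is easy to demonstrate; here is a Python sketch (the 32-bit simulation via ctypes is purely illustrative):

    import ctypes
    from datetime import datetime, timezone

    limit = 2**31 - 1                                            # 2,147,483,647
    print(datetime.fromtimestamp(limit, tz=timezone.utc))        # 2038-01-19 03:14:07+00:00

    # One second later, a 32-bit signed counter wraps to -2**31:
    wrapped = ctypes.c_int32(limit + 1).value                    # -2147483648
    print(datetime.fromtimestamp(wrapped, tz=timezone.utc))      # 1901-12-13 20:45:52+00:00
    # (the negative conversion may raise OSError on Windows)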

UTC, Local Time, and ISO 8601

A Unix timestamp has no timezone — it is an absolute instant, interpretable only in UTC. "Local time" is a display concern: the same timestamp 1700000000 reads as 22:13 in London, 17:13 in New York, and 07:13 the next day in Tokyo. Rendering a timestamp always requires the viewer's timezone, which is why most APIs either (a) return the timestamp plus an explicit timezone string, or (b) normalise everything to UTC and let the client convert.

ISO 8601 is the textual standard for exchanging date-times: 2023-11-14T22:13:20Z where the trailing Z ("Zulu") means UTC. Offset form is also valid: 2023-11-14T17:13:20-05:00. ISO 8601 strings sort lexicographically in chronological order, an often-exploited property for database keys and log filenames.
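
To make the display-time point concrete, a Python sketch rendering the same instant in several zones (zoneinfo is in the standard library from Python 3.9; on some platforms it also needs the tzdata package):

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    utc = datetime.fromtimestamp(1700000000, tz=timezone.utc)
    print(utc.isoformat())                                # 2023-11-14T22:13:20+00:00 (same instant as the Z form)
    print(utc.astimezone(ZoneInfo("America/New_York")))   # 2023-11-14 17:13:20-05:00
    print(utc.astimezone(ZoneInfo("Asia/Tokyo")))         # 2023-11-15 07:13:20+09:00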

Worked Examples

Example 1: Convert a log timestamp

A log entry reads [1699891200] user.login. Dividing by 86,400 (seconds per day) gives day 19,674 after the epoch with 57,600 seconds (16 hours) left over — that is 13 November 2023, 16:00:00 UTC. In practice you paste the timestamp into the converter above to get the exact ISO string.
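
The same conversion in Python:

    from datetime import datetime, timezone

    ts = 1699891200
    print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())
    # 2023-11-13T16:00:00+00:00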

Example 2: Compute an expiration

An API issues a JWT valid for 1 hour. The token's exp claim contains the current timestamp plus 3600: if iat = 1700000000, then exp = 1700003600. Servers compare the incoming timestamp against server-side Unix time to decide if the token has expired — no timezone arithmetic required.
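
A sketch of the server-side check (is_expired is an illustrative helper, not part of any JWT library):

    import time

    iat = 1700000000
    exp = iat + 3600                       # expires one hour after issuance

    def is_expired(exp_claim: int) -> bool:
        # Compare the claim against the server's current Unix time; both are UTC by definition.
        return time.time() >= exp_claim

    print(is_expired(exp))                 # True today, because this exp is back in 2023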

Example 3: Caught by the ms-vs-seconds trap

A Python script calls datetime.fromtimestamp(ts) with ts = 1700000000000 (accidentally copied as milliseconds). Python interprets it as seconds and tries to build a date around the year 55,842 AD, which fails with an out-of-range error — the tell-tale sign of an off-by-1000 error. Divide by 1000 before converting: fromtimestamp(ts / 1000).
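
The fix in code:

    from datetime import datetime, timezone

    ts = 1700000000000                      # milliseconds pasted where seconds were expected
    # datetime.fromtimestamp(ts, tz=timezone.utc) would target a year around 55,842
    # and raise an out-of-range error, since datetime cannot represent years beyond 9999.
    fixed = datetime.fromtimestamp(ts / 1000, tz=timezone.utc)
    print(fixed.isoformat())                # 2023-11-14T22:13:20+00:00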

Example 4: Cross-system event ordering

Two microservices log events at 1700000000.120 and 1700000000.118. The fractional part shows that the second event actually happened 2 ms earlier than the first — but only if the two machines' clocks are synchronised (NTP-disciplined). Without clock sync, you cannot trust sub-second ordering across hosts; use a logical clock or a vector clock for strong ordering.

Common Pitfalls

  • Assuming all systems use seconds. JavaScript and Java default to milliseconds, and databases differ again (PostgreSQL stores microseconds). Always confirm which unit an API returns before converting.
  • Ignoring leap seconds. Unix time ignores leap seconds by design — when one occurs, the same second is repeated. Systems that need strict monotonicity (audit logs, distributed consensus) should use a monotonic clock, not wall-clock time; see the sketch after this list.
  • Treating Unix time as local. A timestamp is always UTC. If your database returns a "naive" datetime that looks local, it has been converted somewhere — audit the pipeline.
  • Storing timestamps as strings. ISO 8601 strings sort chronologically but take several times the bytes of a 64-bit integer and cannot be used in arithmetic directly. For high-volume columns, store the integer and render as ISO on display.
  • Assuming negative timestamps are invalid. Dates before 1970 produce negative Unix timestamps, and most libraries accept them — but some validate against >= 0 and reject historical data. Test with a pre-epoch date if that matters.
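
For the monotonicity point, a minimal Python sketch of timing an operation with a monotonic clock instead of wall-clock time:

    import time

    # Wall-clock time (time.time()) can step backwards under NTP corrections or
    # leap-second handling; the monotonic clock only ever moves forward.
    start = time.monotonic()
    time.sleep(0.1)                          # the operation being timed
    elapsed = time.monotonic() - start
    print(f"took {elapsed:.3f} s")           # use for intervals, not for dates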

Frequently Asked Questions

Why 1 January 1970?

Unix was designed in 1969–70 at Bell Labs, and 1 January 1970 was the most recent round date when the team settled on a 32-bit seconds-based time format. The epoch has been frozen ever since for backwards compatibility — changing it would invalidate every filesystem timestamp, every certificate, and every database row.

Can I represent dates before 1970?

Yes — most implementations use a signed integer, so negative Unix timestamps represent dates before the epoch. -86400 is 31 December 1969, 00:00:00 UTC. Some APIs reject negatives; check your language or database before storing historical dates as Unix time.
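
A quick check in Python (behaviour for negative values is platform-dependent):

    from datetime import datetime, timezone

    print(datetime.fromtimestamp(-86400, tz=timezone.utc))
    # 1969-12-31 00:00:00+00:00
    # (may raise OSError on Windows, whose C runtime rejects pre-epoch values)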

How do I get the current timestamp in different languages?

  • JavaScript: Date.now() (ms) or Math.floor(Date.now() / 1000) (s)
  • Python: time.time()
  • PHP: time()
  • Bash: date +%s
  • Go: time.Now().Unix()
  • Java: System.currentTimeMillis() / 1000

All describe the same instant at the same moment; only the unit differs.

What is the difference between Unix time and Windows FILETIME?

Windows FILETIME counts 100-nanosecond intervals since 1 January 1601 UTC. To convert to Unix time: divide by 10,000,000 and subtract 11,644,473,600. The 1601 epoch is the start of the first full 400-year Gregorian calendar cycle in effect when Windows NT was designed — a nod to calendrical tidiness that makes cross-platform date handling quirky.
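
A sketch of the conversion in Python (filetime_to_unix is an illustrative helper):

    from datetime import datetime, timezone

    FILETIME_EPOCH_DELTA = 11_644_473_600        # seconds between 1601-01-01 and 1970-01-01

    def filetime_to_unix(filetime: int) -> float:
        # FILETIME counts 100-nanosecond ticks since 1601; 10,000,000 ticks per second.
        return filetime / 10_000_000 - FILETIME_EPOCH_DELTA

    ft = 133_444_736_000_000_000                 # example FILETIME value
    print(datetime.fromtimestamp(filetime_to_unix(ft), tz=timezone.utc))
    # 2023-11-14 22:13:20+00:00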

Does Unix time account for timezones or daylight saving?

No — and that is its principal virtue. A Unix timestamp is one integer representing one specific moment, the same everywhere in the world. Timezones and DST are strictly display concerns, applied only when converting to a human-readable string for a given locale.


Disclaimer

This calculator is provided for educational and informational purposes only. While we strive for accuracy, users should verify all calculations independently. We are not responsible for any errors, omissions, or damages arising from the use of this calculator.

