A Unix timestamp is the number of seconds (or milliseconds) elapsed since 1970-01-01 00:00:00 UTC — the "Unix epoch". It's a simple integer, completely timezone-independent, and trivially comparable and sortable. That's why it's the default time representation in virtually every backend system.
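A minimal sketch in TypeScript, using only the standard Date API, of what "just an integer" buys you: comparison and sorting need no timezone logic at all.

```ts
const nowSeconds = Math.floor(Date.now() / 1000); // Date.now() returns milliseconds
const anHourAgo = nowSeconds - 3600;

console.log(anHourAgo < nowSeconds); // true: plain integer comparison

// Sorting events by time is just numeric sorting.
const events = [nowSeconds, anHourAgo, nowSeconds - 60];
events.sort((a, b) => a - b);
console.log(events);
```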
Unix timestamps appear throughout the stack: a JWT's iat (issued at) and exp (expiry) claims are Unix timestamps in seconds; PostgreSQL's TIMESTAMP WITH TIME ZONE is stored internally as a microsecond count, while MySQL's TIMESTAMP is stored as seconds; and the mtime, ctime, and atime fields on Linux/macOS filesystems are Unix timestamps.

A common source of bugs: some systems use seconds (a 10-digit timestamp in 2024, e.g. 1714000000), others use milliseconds (13 digits: 1714000000000). JavaScript's Date.now() returns milliseconds; the Unix shell's date +%s returns seconds. When storing or comparing across systems, always document the unit, or use ISO 8601 strings to eliminate the ambiguity entirely.
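A sketch of normalizing the unit when it isn't documented. toMilliseconds is a hypothetical helper, not a library function, and the magnitude check is only a heuristic that holds for present-day dates.

```ts
// Hypothetical helper: treat values below ~1e12 as seconds, larger ones as
// milliseconds. Works for current-era timestamps; documenting the unit is better.
function toMilliseconds(ts: number): number {
  return ts < 1e12 ? ts * 1000 : ts;
}

console.log(toMilliseconds(1714000000));    // 1714000000000 (input was seconds)
console.log(toMilliseconds(1714000000000)); // 1714000000000 (already milliseconds)
```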
A Unix timestamp is always in UTC. The timezone only matters when displaying it to a human. Best practice: store timestamps as UTC everywhere; convert to local time only at the presentation layer. Avoid storing "local time without timezone" — it makes daylight saving transitions and international users a nightmare.
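A sketch of that split: the stored value stays a plain UTC timestamp, and a timezone is applied only when formatting for display. The Europe/Berlin zone is just an assumed example of a viewer's locale.

```ts
const storedUtcSeconds = 1714000000; // what the database/API holds: UTC seconds

// Presentation layer: apply the viewer's timezone only when rendering.
const forDisplay = new Date(storedUtcSeconds * 1000).toLocaleString("en-US", {
  timeZone: "Europe/Berlin", // assumed viewer timezone; normally detected, not hardcoded
});
console.log(forDisplay); // e.g. "4/25/2024, 1:06:40 AM" for a Berlin viewer
```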
ISO 8601 (2024-04-25T14:30:00Z) is the recommended string format for interchange: unambiguous, lexicographically sortable, and parseable by every modern language. The trailing Z means UTC; an offset such as +02:00 expresses a local time together with its offset from UTC, which still pins down the exact instant.
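A quick sketch with the built-in Date API showing that the Z form and an offset form denote the same instant, and that both round-trip cleanly to a Unix timestamp.

```ts
const t = new Date("2024-04-25T14:30:00Z"); // trailing Z: this is UTC
console.log(t.toISOString());    // "2024-04-25T14:30:00.000Z"
console.log(t.getTime() / 1000); // 1714055400, the same instant as a Unix timestamp

// The same instant written as a local time with a +02:00 offset.
const sameInstant = new Date("2024-04-25T16:30:00+02:00");
console.log(sameInstant.getTime() === t.getTime()); // true
```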
All conversions run in your browser. No data is sent anywhere.