# Timestamp Precision Levels
Compare timestamp precision levels: seconds, milliseconds, microseconds, and nanoseconds. Learn which precision to use and how different systems handle each.
## Concept
s / ms / μs / ns
## Detailed Explanation
Timestamps can represent time at different levels of precision, from whole seconds down to nanoseconds. Choosing the right precision depends on your use case, storage constraints, and what your language or system supports natively.
**Precision levels explained:**

- Seconds (s): `1700000000` (10 digits)
- Milliseconds (ms): `1700000000000` (13 digits)
- Microseconds (μs): `1700000000000000` (16 digits)
- Nanoseconds (ns): `1700000000000000000` (19 digits)
Each level is 1000x more precise than the previous. One second = 1,000 milliseconds = 1,000,000 microseconds = 1,000,000,000 nanoseconds.
**What uses which precision:**

| Precision | Systems |
|---|---|
| Seconds | Unix `time_t`, Python `int(time.time())`, cron, MongoDB ObjectId |
| Milliseconds | JavaScript `Date.now()`, Java `System.currentTimeMillis()`, UUID v7 |
| Microseconds | PostgreSQL `timestamp`, Python `datetime` |
| Nanoseconds | Go `time.Time`, Rust `Instant`, Linux `clock_gettime` |
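A quick way to see these magnitudes side by side is to read the clock at each precision. The sketch below uses only the Python standard library; `time.time_ns()` (Python 3.7+) reads the clock as an integer, avoiding float rounding:

```python
import time

# Whole seconds: what Unix time_t and cron work with.
secs = int(time.time())

# Milliseconds: the unit JavaScript's Date.now() returns.
millis = time.time_ns() // 1_000_000

# Microseconds: the precision PostgreSQL timestamps store.
micros = time.time_ns() // 1_000

# Nanoseconds: the raw integer clock reading.
nanos = time.time_ns()

# Currently 10, 13, 16, and 19 digits respectively.
print(len(str(secs)), len(str(millis)), len(str(micros)), len(str(nanos)))
```

Note that reading the clock at nanosecond precision does not guarantee nanosecond *accuracy*; the underlying clock resolution is platform-dependent.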
**Storage considerations:**

A seconds-based timestamp fits in a signed 32-bit integer until January 2038 and in a 64-bit integer essentially forever. Nanosecond timestamps need 64 bits from the start: a signed 32-bit integer overflows after only about 2.1 seconds' worth of nanoseconds, and even a signed 64-bit nanosecond counter runs out in the year 2262. When storing timestamps in databases, keep in mind that higher precision means larger values, more storage, and potentially slower indexing.
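These limits are easy to verify with a short standard-library Python snippet:

```python
from datetime import datetime, timezone

INT32_MAX = 2**31 - 1   # largest signed 32-bit value
INT64_MAX = 2**63 - 1   # largest signed 64-bit value

# Seconds in 32 bits: runs out at the famous Y2038 moment.
y2038 = datetime.fromtimestamp(INT32_MAX, tz=timezone.utc)
print(y2038)                                # 2038-01-19 03:14:07+00:00

# Nanoseconds in 32 bits: only ~2.1 seconds of range.
print(INT32_MAX / 1_000_000_000)            # 2.147483647

# Nanoseconds in 64 bits: runs out in the year 2262.
y2262 = datetime.fromtimestamp(INT64_MAX / 1_000_000_000, tz=timezone.utc)
print(y2262.year)                           # 2262
```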
**Conversion formulas:**

```
seconds      = milliseconds / 1_000
milliseconds = microseconds / 1_000
microseconds = nanoseconds  / 1_000

// Example: nanoseconds to seconds
1_700_000_000_000_000_000 ns ÷ 1_000_000_000 = 1_700_000_000 s
```
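A minimal Python sketch of these step-down conversions (the function names are illustrative, not from any library; integer division means each step floors toward the coarser unit):

```python
def ns_to_us(ns: int) -> int:
    # nanoseconds -> microseconds
    return ns // 1_000

def us_to_ms(us: int) -> int:
    # microseconds -> milliseconds
    return us // 1_000

def ms_to_s(ms: int) -> int:
    # milliseconds -> seconds
    return ms // 1_000

ts_ns = 1_700_000_000_000_000_000
print(ms_to_s(us_to_ms(ns_to_us(ts_ns))))   # 1700000000
```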
**Common pitfall (silent truncation):** When converting nanoseconds to seconds, integer division in most languages truncates the fractional part. If you need to preserve the sub-second precision, keep the remainder or use floating-point (but be aware that IEEE 754 doubles only have about 15-17 significant digits, which can lose precision for nanosecond timestamps).
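Both failure modes are easy to demonstrate in Python. The timestamp value here is an arbitrary example with a nonzero sub-second part:

```python
ns = 1_700_000_000_123_456_789   # nanosecond timestamp with a sub-second part

# Integer division silently drops the 123_456_789 ns remainder.
print(ns // 1_000_000_000)            # 1700000000

# divmod keeps both parts losslessly.
secs, frac_ns = divmod(ns, 1_000_000_000)
print(secs, frac_ns)                  # 1700000000 123456789

# A double cannot hold 19 significant digits: the round trip fails.
print(int(float(ns)) == ns)           # False
```

The `divmod` pattern (whole seconds plus a nanosecond remainder) is also how structures like C's `struct timespec` represent high-precision time.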
**Choosing the right precision:** Use seconds for scheduling, caching, and APIs where sub-second precision is unnecessary. Use milliseconds for UI events, request logging, and database timestamps. Use microseconds or nanoseconds for distributed tracing, performance profiling, and scientific data acquisition.
## Use Case
Distributed tracing systems like Jaeger and Zipkin require microsecond-precision timestamps to accurately order spans within a request that traverses multiple microservices.