Time

Tech Notes

Computers and Time

March 01, 2016

How computers handle time

Computers usually use two kinds of clocks. One is tied to the incoming line voltage (110 volts in the U.S., 220 volts in Europe). Every voltage cycle causes an interrupt, 60 times per second in the U.S. or 50 in Europe. This is the simplest kind.

Another is based on a crystal oscillator. Tension is applied to a piece of crystal in such a way that it emits a signal, usually between 5 million and 100 million times per second (5 to 100 MHz). The signal is fed to a counter, which, after so many iterations, sends an interrupt to the CPU.

A 1 MHz clock sends a signal every microsecond, or every millionth of a second.

The number of bits in the counter (which is a register) is important, because the register can accumulate only so many ticks before flipping back to zero. A 32-bit counter accumulating ticks at a 60 Hz clock rate will overflow in a little over two years.
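
To sanity-check that figure, here is a small sketch that computes how long a 32-bit tick counter lasts at a few of the rates mentioned above (the rates are illustrative, not any particular machine's):

    #include <stdint.h>
    #include <stdio.h>

    /* How long until a 32-bit tick counter wraps around? */
    int main(void)
    {
        double rates_hz[] = { 60.0, 1e6, 100e6 };   /* line clock, 1 MHz, 100 MHz crystal */
        uint64_t counter_max = (uint64_t)1 << 32;   /* a 32-bit counter holds 2^32 ticks  */

        for (int i = 0; i < 3; i++) {
            double seconds = counter_max / rates_hz[i];
            printf("%12.0f Hz: wraps after %.0f seconds (%.2f years)\n",
                   rates_hz[i], seconds, seconds / (365.25 * 86400));
        }
        return 0;   /* at 60 Hz the counter lasts roughly 2.3 years */
    }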

Programmable clocks are ones whose interrupts can be controlled by software.

A clock driver for an operating system can keep the time and provide timing for programs and the CPU scheduler.

From Andrew S. Tanenbaum's "Modern Operating Systems" (3rd Edition).

The Network Time Protocol (NTP) is used to update a computer's clock time from a time server over the Internet.

Synchronization is done through an exchange of packets between an NTP time server and the computer, known here as the client. The client sends a request packet to the server that includes the time the packet was sent (called the "originate timestamp"). When the server receives the packet, it sends back another packet with the time it received the request (the "receive timestamp").

When the client gets the server's reply, it logs the time again. It can then use its two local timestamps, in conjunction with the server's receive timestamp, to estimate the correct time.

The current time would be roughly the receive timestamp, plus half the total travel time (the time the reply arrived minus the time the originate timestamp was sent), plus the remote processing time.
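
In the full protocol the server also stamps the reply when it transmits it, and the client computes its clock offset and the round-trip delay from the four timestamps. A minimal sketch of that standard arithmetic, with made-up values (a real client works in NTP's 64-bit fixed-point timestamp format, not doubles):

    #include <stdio.h>

    /* t1 = client sent the request  (originate timestamp)
       t2 = server received it       (receive timestamp)
       t3 = server sent the reply    (transmit timestamp)
       t4 = client received the reply                       */
    int main(void)
    {
        double t1 = 10.000, t2 = 10.120, t3 = 10.125, t4 = 10.045;  /* seconds, made up */

        double delay  = (t4 - t1) - (t3 - t2);          /* round trip minus server processing */
        double offset = ((t2 - t1) + (t3 - t4)) / 2.0;  /* how far the client clock is behind */

        printf("round-trip delay: %.3f s\n", delay);    /* prints 0.040 */
        printf("clock offset:     %.3f s\n", offset);   /* prints 0.100: client is 100 ms slow */
        return 0;
    }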

A series of these exchanges is executed to validate the time.

NTP gets the time to the client, but how does the operating system adjust its own time to this new time? The IETF put forth a generic way in RFC 1589.

The Unix kernel is alerted by a hardware counter interrupt at some fixed rate. In the RFC 1589 scheme, the OS keeps time by accumulating microseconds: at each interrupt it adds a fixed increment of microseconds, determined by the frequency of the counter (which, by default, is some divisor of the CPU frequency). If that frequency does not divide evenly into the million microseconds in a second, the OS periodically adds in a small correction.

For instance, the Ultrix kernel gets interrupted at 256 Hz. Since 1,000,000 microseconds does not divide evenly by 256, each tick adds 3,906 microseconds, which leaves the clock 64 microseconds short; the kernel adds those 64 microseconds back in each second.
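
Worked out in code (a trivial sketch of just the arithmetic from the paragraph above):

    #include <stdio.h>

    /* 1,000,000 microseconds does not divide evenly by 256, so each tick of
       3906 us undercounts, and the kernel adds the leftover back once per second. */
    int main(void)
    {
        long hz = 256;
        long usec_per_sec = 1000000;

        long tick    = usec_per_sec / hz;       /* 3906 us added per interrupt (truncated) */
        long counted = tick * hz;               /* 999,936 us accumulated per real second  */
        long fixup   = usec_per_sec - counted;  /* 64 us the kernel must add back          */

        printf("tick = %ld us, fixup = %ld us per second\n", tick, fixup);
        return 0;
    }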

For Unix systems, the NTP-driven clock adjustments are made using the adjtime() system call. One wrinkle, though: the clock frequency is changed by the value tickadj, which means the time can be slewed only at that rate. This rounding error can accumulate to the point that the time is eventually wrong, so a synchronization daemon must step in and make an adjustment.
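
A minimal sketch of requesting such a slew with adjtime(), assuming a Linux or BSD machine and root privileges (the kernel smears the correction in gradually rather than stepping the clock):

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <sys/time.h>

    int main(void)
    {
        struct timeval delta = { 0, 50000 };   /* slew the clock forward by 50 ms       */
        struct timeval olddelta;               /* receives any still-pending adjustment */

        if (adjtime(&delta, &olddelta) == -1) {
            fprintf(stderr, "adjtime: %s\n", strerror(errno));
            return 1;
        }
        printf("previous pending adjustment: %ld.%06ld s\n",
               (long)olddelta.tv_sec, (long)olddelta.tv_usec);
        return 0;
    }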

The venerable Network Time Protocol needs to be updated. It increments time in chunks too coarse for modern use.

For instance, how do you measure one-way packet delay from one node to another (through the trusty ping command)?

Well, the current Network Time Protocol (NTP) can do it to within 20 milliseconds. That was fine back in the day, but these days, when it takes about 50 nanoseconds to put a packet onto a 10 Gb/s link and a minimum-sized packet takes about 6 microseconds to traverse a 1 kilometer link, that level of precision is not enough.
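
Back-of-the-envelope, those numbers land in roughly that neighborhood (assuming a 64-byte minimum Ethernet frame and propagation at about two-thirds the speed of light, which are my assumptions, not figures from the text):

    #include <stdio.h>

    /* Rough serialization and propagation times for the 10 Gb/s, 1 km example. */
    int main(void)
    {
        double link_bps   = 10e9;         /* 10 Gb/s link                     */
        double frame_bits = 64 * 8;       /* minimum-size Ethernet frame      */
        double fiber_mps  = 2.0e8;        /* ~2/3 the speed of light in fiber */
        double length_m   = 1000.0;       /* 1 km                             */

        double serialize_ns = frame_bits / link_bps * 1e9;   /* ~51 ns */
        double propagate_us = length_m / fiber_mps * 1e6;    /* ~5 us  */

        printf("serialization: %.1f ns, propagation: %.1f us\n",
               serialize_ns, propagate_us);
        return 0;
    }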

Thinner slices of time could be useful elsewhere too: for synchronization at the MAC level; for an intra-PoP time-transfer mechanism used by cable companies; for WiMAX transmitter response times; and so on. Likewise, the electrical and printing industries need time increments of less than a few microseconds. And the military is looking into large sensor networks, which need to be synchronized as well.

Note: version 4 of NTP is accurate to about a microsecond, and its timestamp format has a resolution of 2^-32 seconds (a fraction of a nanosecond). --From the TICTOC working group charter and problem statement.

For Posix-based computers such as Unix or Linux machines, time began Jan. 1, 1970, at midnight UTC. The time according to those machines is the number of seconds that have accumulated since. For instance, Greenwich Mean Time 8:22 p.m., Tuesday, June 10, 2008, translates to 1213129345 in Posix-speak. (NOTE: This number is disputed on the Perl Datetime list.)
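
A quick way to check that figure on any Posix machine is to hand the count of seconds to gmtime() and print the result (a small sketch; the number is the one quoted above):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t posix_seconds = 1213129345;      /* the figure quoted in the text */
        struct tm *utc = gmtime(&posix_seconds);

        char buf[64];
        strftime(buf, sizeof buf, "%A, %B %d, %Y %H:%M:%S UTC", utc);
        printf("%lld -> %s\n", (long long)posix_seconds, buf);

        printf("right now: %lld seconds since the epoch\n", (long long)time(NULL));
        return 0;
    }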

At the SIGAda 2007 conference in Fairfax, Va., one developer bemoaned Linux’s measurement of time only in microseconds, or millionths of a second. Microseconds are fine for most purposes, but the developer had written a program to test applications for bugs and wanted to divide time into ever-finer slices. He wanted Linux to cycle in nanoseconds, or billionths of a second.

Using algorithms to estimate the offset caused by transmission times, the current version of NTP can synchronize local time with a reference clock to within a few hundred milliseconds, an accuracy that can be maintained by checking the time server every 1,024 seconds. The NTP update would bring the accuracy to within tens of milliseconds and allow as much as 36 hours between checks with the time server.

Despite the improved accuracy, many technologists still say too much ambiguity remains in NTP.

“There is a lot of ambiguity about how you time stamp [an Internet] packet,” said Symmetricom technologist Greg Dowd, speaking at a March IETF meeting in Philadelphia. “I have a tremendously difficult time trying to do high-accuracy, high-stability time transfer” using the NTP protocol, he said.

NIST physicist Till Rosenband is working on an atomic clock based on a pair of ions, one aluminum and the other beryllium. This clock is at least 10 times more accurate than the cesium-based clocks, NIST concluded after a year of measurements. The aluminum ion emits a steady vibration, which is amplified by the beryllium. A femtosecond oscillation of light emitted by a laser records the vibration. A femtosecond, if you’re keeping track, is a quadrillionth of a second.

“The aluminum clock is very accurate because it is insensitive to background magnetic and electric fields, and also to temperature,” Rosenband said. “Accuracy is measured by how well you reproduce the unperturbed frequency of this atom without any background magnetic or electric fields.” Rosenband said standards labs worldwide are in a race to build the next-generation atomic clock.

The new generation of atomic clocks would neither gain nor lose a second in more than 1 billion years — if they could run that long. Such clocks change no more than 1.6 quadrillionth of 1 percent per year. By comparison, the cesium clock can run without gaining or losing a second for only about 80 million years.
