April 2016

A second look at time

The RTT February Technology Topic, Time for 5G, reviewed the need for more accurate centralized and localized time coordination to support safety critical automotive transport systems, energy grid applications, e-health and m-health, and factory of the future applications, including a requirement to support end to end latency of less than 5 milliseconds. Light travelling in a straight line takes a millisecond to travel 186 miles, so by the time transmission loss and routing are added in, 5 milliseconds is an ambitious target over anything other than short distances.
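As a back-of-envelope check on that figure, the sketch below computes how far a signal can propagate within a given latency budget, in free space and in optical fibre (the fibre group index of roughly 1.47 is a typical value assumed here, not a figure from the original topic).

```python
# Back-of-envelope propagation distances for a latency budget.
# The fibre group index of ~1.47 is a typical assumed value,
# not taken from the article.

C_VACUUM_KM_S = 299_792          # speed of light in vacuum, km/s
FIBRE_INDEX = 1.47               # typical group index of optical fibre

def max_distance_km(latency_s: float, index: float = 1.0) -> float:
    """One-way distance reachable within latency_s at c/index."""
    return C_VACUUM_KM_S / index * latency_s

for budget_ms in (1, 5):
    budget_s = budget_ms / 1000
    print(f"{budget_ms} ms budget: "
          f"{max_distance_km(budget_s):,.0f} km in free space, "
          f"{max_distance_km(budget_s, FIBRE_INDEX):,.0f} km in fibre")

# 1 ms: ~300 km (~186 miles) in free space, ~204 km in fibre.
# 5 ms: ~1,499 km in free space, ~1,020 km in fibre, before any
# switching, routing or retransmission delay is added.
```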

Safety critical vertical markets also require a 1 in 10^5 packet loss threshold. A packet loss threshold is a combination of the packet loss rate and the end to end latency constraint. Packet loss rates can be reduced by resending packets, but this introduces delay and delay variability.

A 1 in 10^5 packet loss threshold might seem a modest target given that fibre is typically specified at 1 in 10^12, but cellular networks have typically been designed for 1 in 10^3 for legacy voice. Moving from 1 in 10^3 to 1 in 10^6 requires an extra 3 dB of link budget and more closely managed core and edge timing. Every dB of additional link budget translates into a 14% increase in network density. Tightening the packet loss threshold therefore has a direct impact on capital and operational cost.
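Taken at face value, those two rules of thumb compound as follows; a rough sketch assuming the 14% per dB figure applies multiplicatively, which the article does not state explicitly.

```python
# How extra link budget translates into network density, using the
# article's rules of thumb: ~3 dB to move from 1 in 10^3 to 1 in 10^6
# packet loss, and ~14% more network density per dB of link budget.
# Assumes the 14%/dB figure compounds multiplicatively.

DENSITY_PER_DB = 1.14

def density_multiplier(extra_db: float) -> float:
    """Relative increase in site density for extra_db of link budget."""
    return DENSITY_PER_DB ** extra_db

extra_db = 3.0
print(f"{extra_db} dB extra link budget -> "
      f"{(density_multiplier(extra_db) - 1) * 100:.0f}% more sites")

# 3 dB -> ~48% more sites, which is why a tighter packet loss
# threshold feeds directly into capital and operational cost.
```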

In this month’s topic we take a second look at these timing issues, including the implications of providing sub-microsecond accurate time to distributed server software for high added value applications such as high frequency trading.

Delivering one microsecond phase accuracy at the network edge implies a need for 100 nanosecond accuracy in the network core. While not impossible, these requirements introduce additional cost, both in timing hardware and in the management of end to end routing and router delay variability, and can be compromised by time inaccuracy in the server. A temperature change caused by a server or slave computer fan turning on and off can cause an oscillator to drift by several microseconds.

Thanks are due to Tony Flavin of Chronos Technology Limited for providing inputs and insights on this and related 5G timing issues.

If you ask a timing expert how accurate a time reference is needed for any given application, the answer will always be 'it all depends on…'.

One of the dependencies is the time over which the timing accuracy needs to be maintained.
For example, a time stamping requirement for a financial transaction or automated computer trading system of less than one millisecond of drift relative to Coordinated Universal Time (UTC) can be maintained for three hours independently of GPS using a standard temperature controlled oscillator. Maintaining the same specification over three weeks requires a high specification rubidium source.

Maintaining <1 microsecond of accuracy relative to UTC, needed for example for high frequency trading, smart grid or LTE Advanced mobile networks, is possible for around three minutes of holdover using a temperature compensated crystal oscillator (TCXO). Three hours of holdover would need a highly specified oven controlled crystal oscillator (OCXO) or a low specification rubidium source.
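The underlying arithmetic is straightforward: the fractional frequency accuracy an oscillator must hold is the drift budget divided by the holdover period. The sketch below reproduces the holdover figures quoted above; the mapping to oscillator classes is indicative rather than a formal specification.

```python
# Required fractional frequency accuracy = drift budget / holdover time.
# Reproduces the holdover examples quoted in the text; the oscillator
# class labels are indicative, not a formal specification.

def required_accuracy(drift_budget_s: float, holdover_s: float) -> float:
    """Fractional frequency accuracy needed to stay within budget."""
    return drift_budget_s / holdover_s

cases = [
    ("1 ms over 3 hours",   1e-3, 3 * 3600),    # standard oscillator
    ("1 ms over 3 weeks",   1e-3, 21 * 86400),  # high spec rubidium
    ("1 us over 3 minutes", 1e-6, 3 * 60),      # TCXO
    ("1 us over 3 hours",   1e-6, 3 * 3600),    # high spec OCXO / rubidium
]

for label, budget, holdover in cases:
    print(f"{label}: needs ~{required_accuracy(budget, holdover):.1e}")

# 1 ms / 3 h   -> ~9e-8
# 1 ms / 3 wk  -> ~6e-10  (rubidium territory)
# 1 us / 3 min -> ~6e-9
# 1 us / 3 h   -> ~9e-11
```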

Legacy networks have typically been deployed using a master clock frequency specified to the ITU G.811 standard, developed to prevent slips in international switch buffers, primarily for speech traffic, but also used as the master clock for systems such as SDH. This has been supplemented by ITU-T G.8272 for time, phase and frequency in packet networks and other recommendations in the G.827x series to compensate for the variable delay introduced by switch and router hardware and routing flexibility.

Digital networks, from PDH systems in the 1980s through SDH and SONET to today's optical networks, have required synchronization. All of these guided media protocols are inherently suitable for synchronization distribution due to the bit-by-bit deterministic way in which they transport data.

The transition to packet networks and Ethernet for backhaul in parallel with the need to maintain legacy TDM networks has meant that synchronization has to be maintained across non-deterministic packet networks.

A common method of achieving this is the Precision Time Protocol (PTP), based on a continuous exchange of time stamped packets which ensures that the grandmaster clock reference maintains the alignment of boundary and slave clocks. A parallel protocol, the Network Time Protocol (NTP), is used to synchronise computer clocks over a network.

These protocols can be compromised by frame delay (latency), frame delay variation (packet jitter) and frame loss. PTP operates in a similar manner to NTP, but at higher packet rates and generally at the Ethernet Layer rather than the IP layer. This allows PTP to achieve higher levels of accuracy than the one millisecond level generally quoted for NTP systems.

Inconveniently, packet delay in the network is often asymmetric, differing between master to slave and slave to master. This complicates the phase synchronization process because the offset computed by the slave will be wrong by half the difference between the two path delays.
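The standard two-way timestamp exchange makes the problem easy to see. The sketch below implements the textbook PTP offset calculation from the four timestamps t1 to t4 and shows how path asymmetry leaks into the result; the delay numbers are purely illustrative.

```python
# Textbook PTP offset estimate from a two-way timestamp exchange:
#   t1: master sends Sync, t2: slave receives it,
#   t3: slave sends Delay_Req, t4: master receives it.
# The estimator assumes the two path delays are equal; any asymmetry
# appears as an error of half the path difference.

def ptp_offset_estimate(t1, t2, t3, t4):
    """Slave clock offset, assuming symmetric path delay."""
    return ((t2 - t1) - (t4 - t3)) / 2

# Illustrative numbers (seconds): true offset is zero, but the
# master->slave path takes 40 us and slave->master only 10 us.
d_ms, d_sm = 40e-6, 10e-6
t1 = 0.0
t2 = t1 + d_ms          # no clock offset, just path delay
t3 = t2 + 1e-3          # slave responds 1 ms later
t4 = t3 + d_sm

est = ptp_offset_estimate(t1, t2, t3, t4)
print(f"estimated offset: {est * 1e6:.1f} us")          # 15.0 us
print(f"(d_ms - d_sm)/2 : {(d_ms - d_sm) / 2 * 1e6:.1f} us")

# A 30 us path asymmetry corrupts the estimate by 15 us, three
# hundred times the 50 ns timestamp accuracy discussed below.
```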

So, for example, a computer or server exchanging time stamps every second between a slave and master with 50 nanosecond accuracy could be transiting a switch or router introducing asymmetric path delay (packet delay variation) of the order of tens of microseconds.

The computer or server will be running an operating system coupled to a quartz oscillator which can add microseconds of error per day, and there will be an additional difference of several microseconds depending on whether the server is loaded (with the fan running) or unloaded. The filling and emptying of traffic buffers causes additional asymmetric delay variation.

The impact of this is that the core network reference has to be at least an order of magnitude more accurate than the boundary clock reference, for example one microsecond at the edge will need 100 nanoseconds at the core. This level of accuracy is also needed to provide back up when GPS is unavailable.

There seems to be an emerging consensus within the 5G standards community that a reference time accuracy of the order of 300 to 500 nanoseconds will be needed at the network edge, implying 30 nanoseconds at the core. It is hard to see how useful this will be if the other causes of end to end delay cannot be measured and managed, and it could result in unexpected edge timing and synchronisation costs.

This also implies a need to qualify the timing needs of network function virtualization (NFV), assumed to be one of the prime mechanisms for reducing delivery cost in 5G networks; a badly timed virtual network will by implication be a badly behaved virtual network. Packet timing protocols work adequately over Layer 2 (the data link layer) but not over Layer 3 (the network layer), and expensive workarounds may be needed which would negate the promised cost benefits.

The default answer is to use GPS, with the comfort and assurance that GPS is becoming more accurate and resilient to jamming with the addition of the L2 and L5 frequencies, the launch of the Galileo and Beidou constellations and an upgraded Glonass, but getting GNSS signals into buildings can be hard and expensive. Lightning strikes or high winds can take out external antennas, and satellite signals are subject to space weather effects. A wired alternative therefore continues to be a desirable backup.

What accuracy is needed and what will it cost?

The chip scale atomic clocks referenced in the January technology topic are one solution but are presently priced at around $1,500 and do not have the accuracy of a full scale atomic clock.
As with all electrical equipment, these devices will be subject to electrical failure.

Improving the accuracy of grandmaster clocks is also both desirable and necessary for 5G but has cost implications. An optimised cesium clock costs around $100,000, and cesium depletion means that the cesium tube will need replacing every five to ten years at a cost of $30,000.

Strontium based atomic clocks are being suggested as an alternative, as are optical clocks, but cesium and rubidium based devices will remain the default sources of accurate time in present and future networks for at least the next few years.

An optimised cesium clock today, performing at an offset from UTC of 1 x 10^-12, accumulates one picosecond per second of phase error compared to a UTC based device such as a GNSS/GPS timing receiver.

These devices can therefore maintain the ±1.5 microseconds needed for time and phase alignment of macro and micro cells in LTE TDD and LTE Advanced for a month, and future improvements will probably be sufficient to support more tightly toleranced 5G timing needs.
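The drift arithmetic is worth making explicit: accumulated phase error is simply the fractional frequency offset multiplied by elapsed time. The sketch below runs the numbers for the 1 x 10^-12 cesium figure, assuming a simple linear drift model with no steering or correction.

```python
# Accumulated phase error = fractional frequency offset x elapsed time.
# A simple linear drift model; real clocks are steered and may do better.

FREQ_OFFSET = 1e-12        # optimised cesium clock offset from UTC
PHASE_BUDGET = 1.5e-6      # +/- 1.5 us LTE TDD / LTE Advanced budget

for label, seconds in (("1 second", 1), ("1 day", 86_400),
                       ("1 week", 7 * 86_400)):
    print(f"{label}: {FREQ_OFFSET * seconds * 1e9:.3f} ns of phase error")

# 1 s -> 0.001 ns (1 ps), 1 day -> ~86 ns, 1 week -> ~0.6 us

holdover_days = PHASE_BUDGET / FREQ_OFFSET / 86_400
print(f"time to drift to 1.5 us: ~{holdover_days:.0f} days")

# ~17 days at exactly 1e-12; a unit performing somewhat better than
# this headline figure would hold the budget for the month quoted above.
```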

However, there are other performance parameters, including start up time, that are critical to radio network applications including broadcasting, satellite and terrestrial mobile broadband, and these introduce additional synchronization cost. The better clocks typically have much longer start up and stabilisation times.

Resilience is also a cost and generally depends on supplying multiple clock sources. The repurposing and recommissioning of legacy Loran Low Frequency (LF) transmitters is being studied and tested as a cost effective way to provide UTC traceable time to applications in GNSS denied environments. Initial test programme results suggest this could yield UTC traceable time with an accuracy of better than 100 ns, a quality comparable to GPS but with better indoor penetration.

Supplementary system innovations such as e-Loran are therefore likely to become progressively more important.

Summary  

The applications running in smart phones today have a relatively relaxed timing requirement, of the order of a second, but are dependent on tightly toleranced timing in the radio access layer and core network.

The transition from 4G to 5G implies a need for higher data rates, lower end to end latency, better resiliency, lower packet loss thresholds and low packet delay variability. These together with advanced interference management techniques in the radio layer imply at least an order of magnitude improvement in time accuracy both at the core and in boundary clock devices.

This improvement will also be needed to support network function virtualization (NFV). In particular the promised cost efficiency gains of NFV may be at least partially offset by additional synchronization costs. At the very least it is to be expected that synchronization costs are likely to increase as a percentage of network deployment costs as we move from 4G to 5G networks.

Clock quality is equally critical to all guided media, including next generation cable (DOCSIS 3), copper (G.fast) and fibre (GPON), as is the time domain integration of the radio access layer with copper, cable and fibre backhaul.

It is plausible to claim, and probably possible to prove, that clock quality value (the difference between the cost of improving clock quality and the additional value realized at network and device level) increases as bit rate increases. Traffic per watt efficiency will also increase, a subject to which we will return in a future technology topic.


http://www.chronos.co.uk/index.php/en/time-timing-phase-monitoring-systems

https://www.itu.int/rec/T-REC-G.811/en

http://www.ntp.org/

http://www.nist.gov/pml/div689/20150421_strontium_clock.cfm

https://www.rp-photonics.com/optical_clocks.html

http://www.chronos.co.uk/index.php/en/delivering-a-national-timescale-using-eloran

 

New 5G Vertical Markets study

There are potentially three key enablers that could realise a 5G physical layer capable of delivering a step function performance improvement over existing 4G systems.

Super accurate super small clocks – to enable the improved frequency, phase and time referencing needed for high throughput low latency low packet loss connectivity and enhanced interference management (for capacity gain).

Supercomputing including quantum computing – to enable cost efficient and energy efficient spatial processing.

Superconducting materials including single atomic layer 2D materials such as graphene, silicene and germanane – to enable the ultra-low loss transfer of heat and energy.

All three topics are covered in a new study on 5G Vertical Markets co-authored by Geoff Varrall and published by Policy Tracker – you can order a copy here

https://www.policytracker.com/5G_report

Geoff Varrall is also presenting two ninety minute modules on the Policy Tracker five day training course, Understanding Modern Spectrum Management, at the LS Telcom Training Centre in Central London, April 18-22 – to book a place follow the link

http://www.policytracker.com/training/understanding-modern-spectrum-management-london

The three topics (and many others) are also covered in Geoff’s new book published by Artech House, 5G Spectrum and Standards, which is now available to pre-order

http://www.artechhouse.com/International/Books/5G-Spectrum-and-Standards-2327.aspx