Artificial intelligence (AI) and various cloud services are driving the growth of data centers. AI training requires more computing resources, while cloud services such as media streaming require more storage and data processing.
The growth in storage and computing necessitates higher connectivity speeds. As a result, the PCIe 6.0 specification doubled the link data rate to 64 GT/s, and Ethernet lane speeds now reach as high as 224 Gbps. These higher-speed data links require lower-noise clocks to maintain signal integrity.
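To put these lane rates in perspective, here is a minimal, illustrative calculation (not from the article) of the unit interval (UI), the time allotted to each transmitted symbol. It assumes PAM4 signaling, which carries two bits per symbol, for both the PCIe 6.0 link and the 224-Gbps Ethernet lane; the helper function unit_interval_ps is a placeholder name for this sketch.

# Illustrative sketch: estimate the unit interval (UI) to show how the timing
# budget shrinks as lane rates climb. Assumes PAM4 signaling (2 bits per symbol).

def unit_interval_ps(lane_rate_gbps: float, bits_per_symbol: int = 2) -> float:
    """Return the symbol period (UI) in picoseconds for a given lane data rate."""
    symbol_rate_gbaud = lane_rate_gbps / bits_per_symbol  # e.g., 64 Gbps PAM4 -> 32 GBd
    return 1000.0 / symbol_rate_gbaud                     # 1/GBd = ns per symbol; x1000 -> ps

print(f"PCIe 6.0, 64 GT/s PAM4:       UI = {unit_interval_ps(64):.2f} ps")   # 31.25 ps
print(f"Ethernet lane, 224 Gbps PAM4: UI = {unit_interval_ps(224):.2f} ps")  # ~8.93 ps

With the UI of a 224-Gbps PAM4 lane below 9 ps, even a few hundred femtoseconds of reference-clock jitter consumes a noticeable fraction of the timing budget, which is why lower-noise clocks become necessary as data rates rise.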
Hao Zheng
Systems Engineer
1. A closer look at data centers. Data centers process, store and relay data through servers and switches.
2. Clocking in data centers. There are two types of Peripheral Component Interconnect Express (PCIe) clocking architectures that serve several purposes within data centers: common clock (CC) and independent reference clock (IR).
3. Trend toward lower jitter. As Ethernet data rates increase, lower jitter is necessary for high-speed SerDes.
4. Greater integration. Bulk acoustic wave (BAW) integration improves reliability and reduces jitter, size and cost.
As shown in Figure 1, data centers typically comprise racks of servers. On top of each server rack is a top-of-rack (ToR) switch that relays data packets between the servers and the network. A spine or fabric switch is a higher-layer switch that connects the ToR switches to the rest of the network.
To reduce losses at high data rates, active cables are typically used between servers and ToR switches, and optical modules between ToR switches and spine or fabric switches. Figure 2 shows the server blocks that typically require clocking components.