Issue 3, 2013
Mega Data Centers Drive Demand for a New Class of SoCs
Trends in the data center industry are driving the development of a new class of SoCs for ultra-low-power micro servers and software defined networks. Ron DiGiuseppe, senior strategic marketing manager in the Solutions Group at Synopsys, describes the factors for success in this technically demanding market.
The huge uptake of cloud-based services is forcing designers to fundamentally rethink how they develop next-generation data center systems-on-chip (SoCs). The enterprise market is rapidly shifting from desktop processing to cloud computing – replacing traditional installed PC applications with thin clients and hosted services in the cloud. Meanwhile, social media websites are putting significant data pressure on data centers: consider that Facebook alone must accommodate some 350 million photo uploads per day. These industry trends shift the processing and networking burden onto increasingly large mega data centers.
Figure 1: Mega data centers are demanding significant amounts of power and network management
As a result, companies operating the biggest data centers, such as Facebook, Google, Amazon and Microsoft, now depend on having hundreds of thousands of compute servers running 24/7 to deliver their services reliably.
Designing high-performance networks to support this large number of data center servers creates new challenges: managing network traffic, improving bandwidth provisioning, and scaling the high-bandwidth data center network. The sheer number of servers in tomorrow's mega data centers also makes power reduction a critical goal in order to minimize power and cooling operating costs.
A New Class of Low-Power Server SoCs
Today’s data center servers are power-hungry machines, consuming 400W or more each. Scale that consumption across a mega data center and you have a power load equivalent to a small city – just to run the servers alone. On top of that, the heat the servers dissipate must be removed, which demands a significant amount of additional electricity for air cooling.
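A rough back-of-the-envelope calculation shows why the "small city" comparison holds. The server count and cooling-overhead multiplier below are illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope estimate of mega data center power draw.
# Only the 400W per-server figure comes from the text above;
# the fleet size and PUE are illustrative assumptions.

watts_per_server = 400      # per-server draw cited above
server_count = 200_000      # hypothetical mega data center fleet size
pue = 1.5                   # assumed Power Usage Effectiveness (cooling/overhead multiplier)

it_load_mw = watts_per_server * server_count / 1e6   # IT load in megawatts
total_mw = it_load_mw * pue                          # including cooling overhead

print(f"IT load: {it_load_mw:.0f} MW, total facility load: {total_mw:.0f} MW")
# 400 W x 200,000 servers = 80 MW for the servers alone
```

At these assumed numbers, the servers draw 80 MW before cooling is even counted – comparable to the residential load of a small city, which is exactly the operating-cost pressure motivating low-power micro server SoCs.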
Figure 2: Reducing power with micro server SoCs
The need for lower power consumption is driving chip manufacturers to invest significant effort into developing a new class of low-power micro server SoCs (Figure 2).
These chips use ultra-low-power processors based on 64-bit microarchitectures, and integrate common functions such as Ethernet, SATA, PCI Express®, memory controllers, fabric switches and memory caches. Innovation in low-power micro servers will be one of the growth drivers in the IP market over the next few years.
Re-Architecting the Network
In the past, network architects have designed their networks using system switches and routers running proprietary network operating systems and software. However, new mega data centers are implementing Software Defined Networks (SDNs) to eliminate proprietary software and network operating systems in favor of a simplified network architecture based on a common software stack (Figure 3) running on standard system platforms. This simplification helps standardize traffic management of complex networks, which reduces network management costs.
Figure 3: Simplifying the data center network with SDN
The Open Networking Foundation (ONF) is leading the SDN standardization effort through a new protocol called OpenFlow. In an OpenFlow-based SDN, the control plane is decoupled from the data plane, centralizing network management in software that is abstracted from the underlying network infrastructure.
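The control/data plane split can be illustrated with a minimal sketch. The classes and field names below are simplified illustrations of the OpenFlow match/action model, not the actual OpenFlow wire protocol:

```python
# Minimal sketch of the SDN control/data plane split: a central
# controller installs match/action rules; the switch only does
# table lookups. Simplified illustration, not real OpenFlow.

from dataclasses import dataclass

@dataclass
class FlowEntry:
    match: dict        # header fields to match, e.g. {"dst_ip": "10.0.0.2"}
    action: str        # simplified action, e.g. "forward:port2" or "drop"
    priority: int = 0

class Switch:
    """Data plane: forwards packets purely by consulting its flow table."""
    def __init__(self):
        self.flow_table: list[FlowEntry] = []

    def handle_packet(self, packet: dict) -> str:
        # Highest-priority matching entry wins; unmatched packets are
        # punted to the controller, as in a real OpenFlow switch.
        for entry in sorted(self.flow_table, key=lambda e: -e.priority):
            if all(packet.get(k) == v for k, v in entry.match.items()):
                return entry.action
        return "send-to-controller"

class Controller:
    """Control plane: centrally decides policy and pushes rules to switches."""
    def install_rule(self, switch: Switch, match: dict, action: str, priority: int = 0):
        switch.flow_table.append(FlowEntry(match, action, priority))

# The controller programs the switch; the switch only executes lookups.
ctrl, sw = Controller(), Switch()
ctrl.install_rule(sw, {"dst_ip": "10.0.0.2"}, "forward:port2", priority=10)
print(sw.handle_packet({"dst_ip": "10.0.0.2"}))   # forward:port2
print(sw.handle_packet({"dst_ip": "10.0.0.9"}))   # send-to-controller
```

Because all policy lives in the controller, operators can manage traffic across the whole network from one software stack rather than configuring each proprietary switch individually – which is the cost reduction SDN promises.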
The move towards SDN heralds a fundamental change in the data center networking industry. The next generation of SDN-enabled communications systems will incorporate new communications processors and switch chips requiring high-performance, low-latency IP – such as Ethernet, PCI Express and DDR – that supports advanced data center protocols, improves reliability-availability-serviceability (RAS) and meets advanced FinFET process technology requirements.
Critical Design Success Factors
To successfully deliver a new generation of high-end, low-power SoCs that enable SDNs and micro servers, designers will be focused on implementing chips that deliver the following:
Advanced Protocol Support
An extensive portfolio of IP supporting key protocols for data center applications, such as PCI Express I/O virtualization, high-performance DDR4, 10G Ethernet TCP segmentation offload, and SATA host/device operation.
Low Power
Achieving lower power budgets for micro servers will require IP to support specific low-power features such as PCIe L1 substates, Energy Efficient Ethernet, hot-plug functionality, advanced power management and more.
Low Latency
Data centers must move user data through the network into their servers, run an application, and return the results to the user. Low latency is therefore a critical requirement for both the architecture and the IP.
Reliability, Availability and Serviceability
Data center users put huge demands on operators to achieve high levels of availability, which operators can only sustain by building highly reliable and easily serviceable servers.
Latest Process Nodes
Achieving the target performance and low-power goals will require transitioning to the latest 16-nm/14-nm FinFET manufacturing processes.
A New Class of IP for Data Center Micro Servers
Synopsys offers a comprehensive portfolio of IP designed to support the specific needs of these leading-edge applications, including servers, networking and storage (Figure 4). The IP solutions are optimized for high performance, low power and low latency, and support the most advanced protocols and process technologies, from 28-nm down to 16/14-nm FinFET.
For example, the DesignWare IP portfolio includes low-latency integrated memories sized appropriately for L1 and L2 caches, with multi-bit error correction to support RAS; a low-latency multi-port memory controller and PHYs optimized to share main memory among compute offload engines plus network and storage I/O resources; low-latency Ethernet digital controllers and PHYs enabling integrated network switches; PCI Express endpoint controllers optimized for AXI ordering rules; and a multi-protocol enterprise 10G PHY enabling flexible system I/O with support for 10GBase-KR, XAUI, CEI-6G, and PCIe 3.0 connectivity (Figure 4).
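To see what error correction in a cache memory involves, consider a classic Hamming(7,4) code. This is a deliberately simplified illustration – cache ECC in server SoCs typically uses SECDED or stronger multi-bit schemes, and this sketch does not represent any Synopsys implementation:

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits, correcting any
# single-bit error. A simplified cousin of the multi-bit ECC used in
# cache memories for RAS; illustrative only.

def encode(d):
    # d: four data bits [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    # codeword positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    c1, c2, c3, c4, c5, c6, c7 = c
    # Each syndrome bit re-checks one parity group.
    s1 = c1 ^ c3 ^ c5 ^ c7
    s2 = c2 ^ c3 ^ c6 ^ c7
    s3 = c4 ^ c5 ^ c6 ^ c7
    syndrome = s1 + 2 * s2 + 4 * s3   # position of the flipped bit, 0 = none
    if syndrome:
        c = c.copy()
        c[syndrome - 1] ^= 1          # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]   # recover d1..d4

word = [1, 0, 1, 1]
cw = encode(word)
cw[4] ^= 1                  # inject a single-bit fault in the stored word
assert decode(cw) == word   # the read-out data is still correct
```

The point for RAS is that the fault is repaired transparently on read, so a soft error in a cache cell never reaches software – availability is preserved without a reboot or service event.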
Figure 4: Data center SoC architecture incorporating DesignWare IP
Table 1 lists some of the specific attributes within the DesignWare IP components that make them suitable for data center SoC applications.
Table 1: DesignWare IP supports specific data center SoC needs
The move toward more digital applications and cloud services is driving the rise of the mega data center. However, operators recognize that data centers are reaching the limits of scalability. Data centers, in their present form, are becoming unsustainable due to the huge power demands and the increasing complexity of network management.
The industry is focusing on developing a new class of micro servers and implementing standardized SDNs to address these problems. The success of these solutions will require design teams to create highly advanced chips that incorporate a broad range of IP that is optimized for low power, high performance and low latency with support for advanced protocols, RAS and advanced FinFET processes.
About the Author
Ron DiGiuseppe is the Senior Strategic Marketing Manager in the Solutions Group at Synopsys. He is responsible for data center and enterprise segment marketing for Synopsys DesignWare IP Solutions for networking and micro server SoC applications.
Ron brings more than 18 years of semiconductor experience to Synopsys. Prior to joining Synopsys, Ron held a range of management positions at Xilinx for advanced connectivity and networking IP products as well as engineering development and management roles for companies including Oki Semiconductor, NEC, and Raytheon Corporation. Ron holds a bachelor's degree in Electrical Engineering from San Jose State University and a Certificate in Network Engineering from the University of California.