The presence of Ethernet in our lives has paved the way for the emergence of the Internet of Things (IoT). Ethernet has connected everything around us and beyond, from smart homes and businesses to industries, schools, and governments. It is even found in our vehicles, facilitating communication between internal devices. Ethernet has enabled high-performance computing data centers, accelerated industrial processes and commerce, and can be found in households worldwide. Yet despite advances such as the rise of 800G Ethernet and the standardization of 1.6T Ethernet, speeds above 100G remain a rarity in edge computing. This article explores how 100G Ethernet enables edge computing and describes applications and design challenges for IP designers.
"The Edge" encompasses any device that collects and processes data before it reaches a data center or cloud processing environment. This includes cameras, sensors, mobile devices, vehicles, routers, switches, and even smart appliances. Despite its dynamic and complex nature, the rapid growth of edge computing has been driven by the increasing number of edge devices, including machines, sensors, meters, and mobile and wearable devices. The adoption of AI in transportation, home, and metropolitan technologies has also contributed to this growth. Ethernet has been a key enabler of edge computing, as it allows for high-speed data transfer between the edge and the internet.
According to Vantage Market Research, the global edge computing market was valued at USD 7.1 billion in 2021 and is projected to reach USD 49.6 billion by 2028, a compound annual growth rate (CAGR) of 38.2%. The devices involved come in many form factors and architectures, but let's look at an individual server as representative of them.
Figure 1: Computing is Moving from Cloud to Edge
Figure 2: Paths to the Cloud from the Edge
Servers typically use a shared PCIe bus to attach network interface cards (NICs), and computers using PCIe 3.0 are the first generation with a bus fast enough, at 8 GT/s per lane, to support 100G Ethernet adapters over a x16 link (roughly 16 GB/s, or 128 Gb/s, in each direction). With PCIe 4.0, an 8-lane slot supports a 100G adapter at full speed, which is a sweet spot for today's machines because x8 slots are usually available on a PCIe bus. Even with the upcoming generation of PCIe 5.0/CXL 1.1 or 2.0 systems, a 100G data rate is a comfortable fit on a shared PCIe bus, unless designers are trying to accelerate parallel computation with maximum bandwidth and minimal latency for inter-process communication (IPC), as they must for HPC clusters.
Table 1: PCIe speeds as a function of version and lane count (Total BW shown is bidirectional)
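The bandwidth figures above can be sketched with a short calculation. This is a simplified model, assuming only the well-known per-lane transfer rates and 128b/130b line encoding; it ignores TLP/DLLP protocol overhead, so real usable throughput is slightly lower.

```python
# Rough per-lane usable bandwidth after line encoding (a sketch; ignores
# protocol overhead from packet framing on the PCIe link).
GENS = {
    # generation: (transfer rate in GT/s, line-encoding efficiency)
    "3.0": (8.0, 128 / 130),
    "4.0": (16.0, 128 / 130),
    "5.0": (32.0, 128 / 130),
}

def unidirectional_gbps(gen: str, lanes: int) -> float:
    """Usable unidirectional bandwidth in Gb/s for a given link width."""
    rate, eff = GENS[gen]
    return rate * eff * lanes

# A PCIe 3.0 x16 slot (~126 Gb/s) is the first that can feed a 100G NIC;
# a PCIe 4.0 x8 slot delivers the same headroom in half the lanes.
print(round(unidirectional_gbps("3.0", 16)))  # 126
print(round(unidirectional_gbps("4.0", 8)))   # 126
```

Note how each PCIe generation doubles the per-lane rate, which is why the x8 "sweet spot" for 100G arrived with PCIe 4.0.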
Edge devices are generally designed to pre-process, compress, and reduce the amount of data that needs to be transferred upstream. Even if an individual server produced enough post-processed data to saturate a 100G connection, that traffic still needs to be aggregated for datacenter-facing traffic across a concentrating set of routers and switches. Such architectures cannot service many simultaneous connections at full bandwidth unless their uplinks run at a significant multiple of the individual port speeds; a 32-port 100G Ethernet switch, for example, would need 3.2 Tb/s of uplink capacity to forward all of its traffic upstream without oversubscription. Link Aggregation Control Protocol (LACP) can bond multiple ports into a single logical connection, but even that protocol is limited to eight ports per bond. Using LACP on a fixed-radix switch quickly drives up the cost of infrastructure and cabling by rapidly reducing the number of downstream connections the device can provide. Wi-Fi connections are all individually well below 1 Gb/s, and even cellular 5G theoretically peaks at 20 Gb/s, so 100G at the aggregation layer serves those markets well.
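The oversubscription arithmetic above is worth making explicit. A minimal sketch, using only the port counts and speeds stated in the text:

```python
# Sketch: uplink capacity needed to run a fixed-radix switch without
# oversubscription, versus what a maximal LACP bond can supply.
PORT_SPEED_GBPS = 100
DOWNSTREAM_PORTS = 32
LACP_MAX_PORTS = 8   # a single LACP bond is limited to eight links

downstream_bw = PORT_SPEED_GBPS * DOWNSTREAM_PORTS   # 3200 Gb/s
bond_bw = PORT_SPEED_GBPS * LACP_MAX_PORTS           # 800 Gb/s

# Even a maximal eight-way bond of 100G ports covers only a quarter of
# the downstream bandwidth -- and every port moved into the bond is one
# fewer port available to downstream devices.
print(downstream_bw, bond_bw, downstream_bw / bond_bw)  # 3200 800 4.0
```

This is why uplinks on aggregation switches typically run at a multiple of the downstream port speed rather than relying on bonding alone.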
Automotive applications rarely need more than 10G to 25G Ethernet within the vehicle, but they do require many of the optional quality-of-service (QoS) and time-sensitive networking (TSN) features not yet found in higher-speed Ethernet specifications. If a network is shared between vehicle control systems, such as the brakes, and an entertainment system, it is important to prioritize vehicle control even if your kids are watching an engaging video. TSN features, soon to be supported at 100G, enable aggregation on industrial floors and serve audio-visual, security, health-care, and even high-end automotive applications at the edge!
Another advantage that 100G Ethernet provides, as opposed to its higher-speed counterparts, is support for all the required and many optional features specified by the IEEE standards such as:
All required features of the base IEEE 802.3/802.3ba standard
100G Ethernet is currently the fastest Ethernet speed that can be sustained over a single lane. The third generation of 100G Ethernet, using a single 100 Gb/s lane, was published in December 2022 as IEEE 802.3ck, along with 200G and 400G Ethernet using two and four of those lanes respectively; it is supported as 100GBASE-CR for twinax up to 2 m and 100GBASE-KR for electrical backplanes. At the other extreme of reach, the 100GBASE-ZR standard can carry 100G Ethernet more than 80 km over a single wavelength of a dense wavelength-division multiplexing (DWDM) system! For more cost-effective options, a four-lane configuration using 25G NRZ SerDes provides a reliable transport medium.
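The lane configurations discussed above can be summarized in a small table. This is an illustrative sketch; the variant names follow the IEEE 802.3 naming mentioned in the text, and the common property is that every configuration multiplies out to 100 Gb/s.

```python
# Common 100G Ethernet electrical-lane configurations:
# variant name -> (lane count, per-lane rate in Gb/s)
CONFIGS = {
    "100GBASE-KR4/CR4": (4, 25),   # 4 x 25G NRZ, the cost-effective option
    "100GBASE-KR2/CR2": (2, 50),   # 2 x 50G PAM4
    "100GBASE-KR/CR":   (1, 100),  # 1 x 100G PAM4 (IEEE 802.3ck)
}

for name, (lanes, gbps) in CONFIGS.items():
    assert lanes * gbps == 100, name   # every variant totals 100 Gb/s
    print(f"{name}: {lanes} x {gbps}G")
```

The trade-off runs from mature, inexpensive 25G NRZ SerDes at four lanes down to a single 100G PAM4 lane that minimizes lane count at the cost of newer SerDes technology.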
Figure 3: Secure Network Traffic with MACsec for Automotive, 5G & HPC SoCs
Security is important for all network environments, but it is particularly critical at the edge, where 100G Ethernet fully supports MACsec (IEEE 802.1AE). MACsec is a link-layer encryption mechanism, typically implemented in hardware, that protects data in flight, helping operators comply with privacy laws and preventing data theft. MACsec can also keep rogue devices from being connected to a network, a critical protection for an edge environment that may be both unmanaged and unmonitored. When encryption is controlled at higher layers, a given Ethernet connection (host to host, host to switch, or switch to switch) may carry a mix of encrypted and unencrypted traffic; once MACsec is enabled on a link, however, all traffic on that connection is secured from prying eyes.
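To make the link-layer placement concrete, the sketch below lays out the SecTAG that MACsec inserts after the Ethernet source address. This is an illustrative byte-layout exercise, not a working MACsec implementation: the field sizes (EtherType 0x88E5, 1-byte TCI/AN, 1-byte short length, 4-byte packet number, optional 8-byte SCI) follow IEEE 802.1AE, but the example values are arbitrary.

```python
import struct

MACSEC_ETHERTYPE = 0x88E5  # EtherType identifying a MACsec-protected frame

def build_sectag(tci_an: int, short_length: int, packet_number: int,
                 sci: bytes = b"") -> bytes:
    """Pack EtherType + SecTAG: TCI/AN (1B), SL (1B), PN (4B), optional SCI (8B).

    Illustrative only: real MACsec hardware also encrypts the payload
    (AES-GCM) and appends an integrity check value (ICV).
    """
    tag = struct.pack("!HBBI", MACSEC_ETHERTYPE, tci_an,
                      short_length, packet_number)
    if sci:  # the SCI field is present only when the SC bit of TCI is set
        tag += sci
    return tag

# Example values (arbitrary): TCI with SC/E/C bits set, AN 0, PN 1.
tag = build_sectag(tci_an=0x2C, short_length=0, packet_number=1,
                   sci=bytes(8))
print(len(tag))  # 16 bytes: 2 (EtherType) + 8 (SecTAG) + ... wait, + 8 (SCI) - 2
```

The per-frame overhead (SecTAG plus ICV) is why MACsec is done in hardware at 100G: the tagging, encryption, and integrity check must keep up with line rate.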
Lastly, the cost per port rises dramatically at the bleeding edge of high-speed Ethernet technology, and the cabling required for ultra-high-speed Ethernet makes edge devices that much more expensive. These factors conspire to make 100G the perfect top-end match for all but the most cutting-edge computing applications. That, in turn, has created a huge market of 100G Ethernet products at both the consumer and professional levels: switches and routers, NICs, and cables. The resulting competition has helped keep the price point manageable for edge deployments.
As the market for edge computing grows, 100G Ethernet remains the sweet spot for high-speed data transfer between the edge and the internet. Synopsys provides a complete solution for 100G Ethernet IP, including MAC, PCS, and a full range of PHY options, as well as Verification IP, software development, and IP prototyping kits. This solution is ideal for developers of NICs, switches, and routers in the edge market who are looking to incorporate 100G Ethernet technology into their products. In addition to its 100G Ethernet IP solution, Synopsys also offers high-speed Ethernet IP up to 800G today and is actively working with various standards groups to enable 1.6T going forward. This broad range of IP solutions allows Synopsys to address the needs of a wide range of customers in the networking market, from edge devices to high-end data centers.