At OFC 2026, the networking ecosystem is aligning around a shared reality: AI is fundamentally changing how Ethernet networks need to scale.
As accelerator clusters grow to support larger, more complex models and emerging agentic workloads, the pressure on network bandwidth, density, and efficiency is increasing fast. To meet that demand, the IEEE 802.3 Ethernet community is taking an important first step forward—initiating a new effort focused on 400 Gbps per lane Ethernet signaling.
This marks the beginning of the path toward the next generation of Ethernet, designed to better support the scale, performance, and flexibility required by AI-driven infrastructure.
Next-generation AI systems are pushing beyond traditional networking assumptions. In these environments, high-radix, high-bandwidth connectivity—GPU-to-GPU and GPU-to-switch—is becoming the dominant requirement, especially in back-end “scale-up” networks.
Figure 1. AI Data Center Network Hierarchy based on “AI Datacenters and their Diverse Network Requirements”, Ram Huggahalli (Microsoft), Ethernet Alliance TEF 2024, Oct 2024.
One of the biggest constraints designers face today is how much data can move on and off a piece of silicon. The “beachfront” of a chip—the available area for I/O—has become a gating factor for how much total throughput a compute cluster can deliver.
Increasing the per-lane data rate directly addresses this challenge. With higher per-lane speeds, the same number of silicon I/Os can carry more data, enabling more accelerators to communicate with each other faster and more efficiently. That translates into higher overall system throughput without simply adding more lanes.
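The beachfront argument is easy to see with back-of-the-envelope arithmetic. The sketch below is illustrative only (the lane count is a hypothetical example, not a figure from any specific device): with a fixed I/O budget, doubling the per-lane rate doubles aggregate throughput.

```python
def aggregate_tbps(lanes: int, gbps_per_lane: int) -> float:
    """Raw aggregate bandwidth in Tbps for a given lane count and per-lane rate."""
    return lanes * gbps_per_lane / 1000

# Hypothetical silicon edge with room for 512 SerDes lanes
lanes = 512
print(aggregate_tbps(lanes, 224))  # 114.688 Tbps at 224 Gbps per lane
print(aggregate_tbps(lanes, 448))  # 229.376 Tbps at 448 Gbps per lane
```

The point of the sketch: the second number is reached without adding a single lane of beachfront, which is exactly the constraint the higher per-lane rate relieves.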
Power and latency are also key drivers for AI networks. Historically, each new generation of high-speed SerDes has improved power efficiency, and higher data rates can reduce serialization overhead—helping lower latency across the system. Together, these factors set the stage for the next wave of AI-scale networking.
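The serialization-overhead point can also be made concrete. This is a rough sketch under simplifying assumptions (a single lane, no FEC or line-coding overhead, hypothetical 4 KB packet size): the time to clock a packet onto the wire falls in direct proportion to the line rate.

```python
def serialization_ns(packet_bytes: int, gbps: float) -> float:
    """Time in nanoseconds to clock a packet onto one lane at the given line rate.

    Gbps is conveniently bits-per-nanosecond, so the division is direct.
    Ignores FEC, line coding, and protocol overhead.
    """
    bits = packet_bytes * 8
    return bits / gbps

packet = 4096  # hypothetical 4 KB packet
print(round(serialization_ns(packet, 224), 1))  # ~146.3 ns at 224 Gbps
print(round(serialization_ns(packet, 448), 1))  # ~73.1 ns at 448 Gbps
```

Halving serialization time per hop compounds across the multiple switch hops of a large cluster, which is why it matters for end-to-end latency even though each saving looks small in isolation.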
The new IEEE 802.3 initiative is focused on creating an initial set of building blocks that future specification development efforts could use.
The new Study Group looks to open a broad “toolbox” to address the diversity of system architectures emerging across AI platforms. That includes support for a wide range of interconnect technologies—from traditional copper to active cables, to multiple optical approaches—as well as evolving system architectures.
Figure 2. IEEE 802.3 CFI Request: Initiation of Study Group to Standardize
Rather than narrowing options too early, the goal is to define a solid foundation that can accommodate innovation across this full range of interconnect technologies and system architectures.
Figure 3. Reach limitations of passive copper emphasizes the need for active solutions
This flexibility is critical. AI networks are evolving quickly, and future Ethernet standards need to work across different deployment models while maintaining interoperability across vendors.
Synopsys has been deeply engaged in the high-speed Ethernet ecosystem for many years, enabling more than 60 customer SoCs across 56G, 112G, and 224G. The company is actively helping shape the industry’s transition toward the 448G era through deep SerDes expertise, system-level interoperability work, close ecosystem collaboration, and sustained participation in standards discussions.
Over the past year, Synopsys teams have contributed studies and insights that help the ecosystem better understand the tradeoffs involved in next-generation signaling—from DSP and modulation considerations, to the relationship between hosts and channels, to how transmitter performance impacts overall system behavior.
This experience positions Synopsys to support the IEEE effort in a practical, system-aware way—helping to ensure that future standards enable real, interoperable solutions that can scale across vendors and architectures.
The launch of this IEEE 802.3 Study Group does not define the final standard, but it does set direction.
It signals that the industry recognizes both the urgency and the opportunity created by AI-scale networking, and that Ethernet is evolving to address these needs directly. By starting with a broad and inclusive framework, the ecosystem has a real opportunity to converge on a truly interoperable 400 Gbps per lane solution that enables multi-vendor innovation and supports the highest-performance AI clusters.
At Synopsys, we’re excited to support this progress and to engage with customers, partners, and peers at OFC 2026 as the path to 400 Gbps per lane Ethernet begins to take shape.