Securing Every Memory Path: Inside Synopsys’ Scalable IME Architecture

Dana Neustadter

Apr 13, 2026 / 6 min read

Introduction

Modern SoCs are in the midst of a fundamental architectural transition. Compute clusters are multiplying, memory hierarchies are expanding outward through CXL and UCIe, and bandwidth across DDR, LPDDR, and MRDIMM continues to rise generation after generation. At the same time, workloads tied to AI, confidential computing, and hyperscale cloud services are pushing exponentially greater volumes of sensitive data through these increasingly complex memory fabrics.

In this environment, memory can no longer be treated as a trusted zone inside the system perimeter. It has become an exposed interface, a high‑value target, and a critical link in the confidentiality chain. Understanding what threatens today’s DRAM systems is the first step in understanding why memory security must evolve.

Why Inline Memory Encryption Is Needed

Off‑chip DRAM has become a prime target for hardware‑level attacks because its cells are physically exposed, increasingly dense, and accessible through high‑bandwidth interfaces. Several real‑world attack techniques demonstrate why encrypting memory inline is essential.

  • Rowhammer can deliberately flip bits in adjacent memory rows by rapidly accessing nearby data, potentially corrupting critical structures or altering privileges.
  • RAMBleed takes advantage of the same underlying DRAM physics but uses them to extract information, threatening confidentiality even without modifying data.
  • Cold‑boot attacks allow an adversary who gains physical access to reboot a system and read residual charge from DRAM to recover sensitive material like encryption keys.

Together, these attack classes highlight a critical reality: DRAM is an exposed and actively exploited attack surface. These challenges make it clear that modern SoCs need protection that operates at the speed and scale of today’s memory systems.

Synopsys’ Inline Memory Encryption (IME) Security Module ensures that even if an attacker can probe, tamper with, or extract bits from DRAM, the information they obtain is unintelligible, which preserves both integrity and confidentiality across the entire memory subsystem. It provides a unified, scalable architecture for securing data wherever it travels inside an SoC or between chiplets.

IME’s Design Approach: Security That Adapts to the System

To understand how IME delivers this protection without compromising performance, it helps to look at the architecture behind it. IME is built around the idea that encryption must be deployed where the memory architecture needs it, not where a fixed block happens to fit. To make this possible, the module offers independent read and write encryption engines, each based on a deeply pipelined AES-XTS or SM4-XTS datapath. These pipelines scale with datapath width, allowing designs to target optimal area, power, or throughput whether they are working with 128-, 256-, or 512-bit configurations.
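To see why XTS is a natural fit for memory encryption, note that each 16-byte block in a data unit gets its own tweak, derived from the previous one by a single multiplication in GF(2^128), so per-block tweaks can be generated in a pipeline without serializing on the cipher itself. The sketch below shows only that tweak arithmetic (per IEEE P1619); it is illustrative and not Synopsys code.

```python
# Illustrative sketch of the XTS tweak sequence: block j of a data unit
# uses T_j = T_0 * alpha^j, with multiplication in GF(2^128) modulo
# x^128 + x^7 + x^2 + x + 1. Not IME internals, just the standard math.

def xts_mul_alpha(tweak: bytes) -> bytes:
    """Multiply a 16-byte little-endian tweak by alpha (i.e., x) in GF(2^128)."""
    t = int.from_bytes(tweak, "little")
    carry = (t >> 127) & 1
    t = (t << 1) & ((1 << 128) - 1)
    if carry:
        t ^= 0x87  # reduce by the XTS polynomial x^128 + x^7 + x^2 + x + 1
    return t.to_bytes(16, "little")

# Walk the tweak forward for the first few blocks of one data unit.
t0 = bytes([1] + [0] * 15)  # T_0 (normally the AES-encrypted unit number)
tweaks = [t0]
for _ in range(3):
    tweaks.append(xts_mul_alpha(tweaks[-1]))
```

Because each step depends only on the previous tweak, hardware can compute the whole tweak stream ahead of the data pipeline, which is one reason deeply pipelined XTS engines sustain full memory bandwidth.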

The architecture maintains exceptionally low latency, in some cases down to just two cycles when IME is integrated inside Synopsys DDR/LPDDR/MRDIMM controllers, while still supporting a rich set of protection modes, including dozens of address‑based regions or up to 1024 key-based contexts. The system’s configurability extends further, from support for ARM’s Confidential Compute Architecture (CCA) to mechanisms that ensure secure key handling and zeroization.
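The region and context model described above can be pictured as an address-range table consulted on each transaction: a hit selects a key context, while unmatched or explicitly plaintext traffic takes the bypass path. The names and structure below are hypothetical, meant only to model the concept, not the actual IME programming interface.

```python
# Hypothetical model of address-based region classification: each region
# maps an address range to a key context ID, or to None for the plaintext
# bypass path. Illustrative only; not the real IME register model.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Region:
    base: int                # inclusive lower bound
    limit: int               # exclusive upper bound
    key_ctx: Optional[int]   # None means bypass (no encryption)

def classify(addr: int, regions: List[Region]) -> Optional[int]:
    """Return the key context for addr, or None for the bypass path."""
    for r in regions:
        if r.base <= addr < r.limit:
            return r.key_ctx
    return None  # unmatched traffic also bypasses the crypto pipeline

regions = [
    Region(0x0000_0000, 0x4000_0000, key_ctx=7),     # encrypted DRAM window
    Region(0x4000_0000, 0x5000_0000, key_ctx=None),  # shared plaintext buffer
]
```

In hardware this lookup is a parallel range comparison rather than a loop, but the behavior is the same: the selected context determines which of the (up to 1024) keys whitens the burst.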

Internally, IME is organized around paired tweak and data pipelines, an optimized bypass path for unencrypted traffic, and a secure APB4 or APB5 interface for configuration and key programming. The block diagram in Figure 1 captures this structure clearly and serves as a foundation for the many deployment models that follow.


Figure 1: Synopsys IME Security Module Block Diagram

How IME Maintains Ultra-Low Latency

The module’s low-latency performance comes from a combination of architectural choices. Its deeply pipelined AES-XTS or SM4-XTS datapaths allow multiple rounds of encryption to execute simultaneously, while the integrated tweak generation engine ensures that block-based memory protocols never stall waiting for metadata. Because the bypass path is designed to maintain in-order delivery without touching the encryption pipeline, unencrypted regions pass through with no extra delay.
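The in-order guarantee for mixed traffic can be modeled as a small reorder stage: bypass responses may complete many cycles before encrypted ones, so completions are released only in original request order. This is a toy behavioral model under that assumption, not the actual IME microarchitecture.

```python
# Toy model of in-order release for mixed encrypted/bypass traffic:
# requests are tagged with sequence numbers; completions may arrive out
# of order (bypass finishes faster), but are released strictly in order.
import heapq

def in_order_release(completions):
    """completions: iterable of (seq, payload) pairs finishing out of order."""
    pending, released, expected = [], [], 0
    for seq, payload in completions:
        heapq.heappush(pending, (seq, payload))
        # Drain every completion that is now next in sequence.
        while pending and pending[0][0] == expected:
            released.append(heapq.heappop(pending)[1])
            expected += 1
    return released

# The bypass result (seq 1) finishes before the encrypted one (seq 0),
# yet the output preserves request order.
print(in_order_release([(1, "bypass"), (0, "encrypted"), (2, "encrypted")]))
# -> ['encrypted', 'bypass', 'encrypted']
```

The key point the model illustrates is that ordering is enforced at the merge point, so the bypass path itself never has to traverse, or wait on, the crypto pipeline.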

Key management is also engineered for minimal disruption: configuration locks, secure key loading, key freshness counters, and SRAM zeroization all operate in ways that protect the system without compromising the datapath’s speed. Collectively, these design elements enable IME to keep pace with the accelerating memory bandwidths of modern SoCs while still delivering robust, standards‑aligned protection.

A Unified Architecture Across Five Deployment Models

What makes IME unique is its ability to operate consistently across a wide range of memory topologies. Each deployment model solves a different system requirement while preserving the same underlying security foundation.

 

1. Integrated Protection Inside DDR/LPDDR/MRDIMM Controllers

In performance-critical SoCs, like AI accelerators, HPC processors, or high-bandwidth networking equipment, latency is paramount. These systems benefit most when IME sits directly inside the memory controller close to the PHY interface. In this configuration, IME encrypts and decrypts data on DRAM bursts with minimal latency while maintaining complete independence between the read and write channels. Designers gain region-based protection, compliance with AES-XTS standards, and virtually no performance penalty thanks to an encryption latency overhead as low as two cycles.


Figure 2: Synopsys Secure (LP)DDR Controller with IME Integrated

2. Protecting Memory in UCIe‑Based Chiplet Systems

The rise of chiplets brings new architectural challenges. Data frequently travels off the processing die, across UCIe links, before reaching memory chiplets or PCIe‑based storage devices. IME addresses the resulting exposure by encrypting data before it leaves the compute die. This approach preserves confidentiality across the entire multi‑die pathway, independent of how the memory chiplet itself is implemented. It’s a critical capability for advanced heterogeneous systems where CPU, GPU, and specialized accelerators all share memory at package scale.

Figure 3: IME Protecting Data Shared with Memory Chiplets

3. Coherent Memory Encryption for ARM CCA Platforms

Systems implementing ARM’s Confidential Compute Architecture (CCA) introduce a new requirement: memory isolation must be enforced not just for software processes but for hardware realms. IME supports this by acting as a standalone encryption module connected to the ARM CXL/CHI Coherent Network through a CXS interface. When deployed this way, it ensures that DRAM or CXL-attached memory remains protected across shared, coherent fabrics, and does so while supporting the extensive context requirements of Arm Realm Management Extension (RME) V2.

Figure 4: IME Connected to ARM CCG

4. Inline AXI Encryption for Drop‑In Integration

Not every system can modify its memory controller, and not every controller supports native encryption. For these designs, IME operates as an inline AXI engine inserted transparently between the application logic and the memory interface. This configuration allows SoCs to add memory encryption without redesigning existing controllers. Because the architecture matches system bandwidth and preserves ordering through an optimized bypass mechanism, it behaves like a natural extension of the memory path rather than an intrusive component.

Figure 5: IME Used as an Inline AXI Engine

5. LookAside Encryption for Heterogeneous and Multi‑Vendor SoCs

Some systems require even more architectural flexibility, particularly those that combine multiple independent memory interfaces or incorporate controllers from different vendors. In these situations, IME can serve as a lookaside engine receiving AXI transactions for encryption or decryption. This mode decouples security from the memory controller entirely, allowing IME to protect DRAM, SSDs, CXL memory devices, or virtually any storage subsystem. It’s a compelling solution for designs that must evolve quickly or support multiple memory technologies in parallel.

Figure 6: IME Used as an AXI LookAside Engine

A Scalable Memory Security Architecture for Modern SoCs

As designers contend with rapidly growing memory bandwidths, increasingly distributed compute topologies, and a shift toward chiplet-based designs, the need for robust, low-latency memory protection has become universal. Synopsys’ IME architecture provides a consistent, unified approach to solving this problem across every major memory path. Whether it is tightly integrated into an (LP)DDR controller, connected to ARM CCA coherent fabrics, embedded in chiplet systems, inserted inline through AXI, or deployed as a flexible lookaside engine, IME adapts to the system rather than forcing architectural compromises.

This adaptability, combined with ultra‑low latency and comprehensive standards support, makes IME not just an encryption block, but the foundation of a scalable memory security strategy for the next generation of SoC designs.
