As new technology nodes have become available, memory makers have aggressively adopted advanced process technology to meet the continually strong demand for memory from an array of electronic devices. With each new node, memory capacity has grown dramatically and performance per watt has improved.
As they have adopted new technologies, memory designers have been able to move forward with confidence that their products will be both denser and faster. Given the custom nature of memory design, teams have handcrafted new cells, cell arrays, and the sensing and control circuits on the periphery, with fairly predictable results.
In addition to scaling to new nodes, there have been many other innovations in the world of memory. Can you imagine today's electronic devices without multiple generations of double data rate (DDR) technology or content-addressable memory (CAM) for caches? Developing new memories has generally happened independently of process development, yet as new technologies were adopted, memories stayed at the leading edge of semiconductor development.
However, today's trend of increasing chip complexity in the deep submicron age has not bypassed memory. As a result, much closer cooperation is needed between design and process teams to drive continued improvements in memory density and performance. In this blog post, adapted from an article that originally appeared on Semiconductor Engineering, we discuss the need for memory design technology co-optimization (DTCO).