Content-addressable memory (CAM) is now common in networking applications but may also become useful in deep learning if its power consumption can be minimized. For example, ternary content-addressable memory (TCAM) is a specialized type of high-speed memory that searches its entire contents in a single clock cycle; it could become a lower-power solution for deep learning SoCs. The term "ternary" refers to the memory's ability to store and query data using three symbols: 0, 1, and X, where the "don't care" X bit acts as a wildcard during searches.
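The ternary matching behavior can be illustrated with a small software model (an assumed, illustrative sketch; a real TCAM compares the key against every stored entry in parallel within one clock cycle, whereas this emulation loops over the entries):

```python
# Software model of a TCAM lookup. Each stored entry is a string over
# {'0', '1', 'X'}; the 'X' ("don't care") symbol matches either bit value.
# Returns the indices of all entries that match the binary search key.

def tcam_search(entries, key):
    """Return indices of entries matching the binary key string."""
    def matches(entry, k):
        return all(e == 'X' or e == b for e, b in zip(entry, k))
    return [i for i, entry in enumerate(entries) if matches(entry, key)]

entries = ["10X1", "1XX0", "0101"]
print(tcam_search(entries, "1011"))  # → [0]: 'X' wildcards bit position 2
```

In hardware, each match is a parallel comparison across all rows, which is what makes the single-cycle search possible but also what drives TCAM's high power draw.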
Near-memory and in-memory processing may take advantage of non-volatile memory (NVM) technologies such as spin-transfer torque RAM (STT-RAM), phase-change memory (PCM), and resistive RAM (RRAM). Mainstream semiconductor manufacturers have produced these emerging technologies in discrete devices for years, and they could be used to perform multiplication in the analog domain using weighted bits and resistive ladders, with capacitors for accumulation. If these technologies can be integrated with CMOS logic, they could bring an order-of-magnitude improvement in energy efficiency to deep learning, as could quantum computing, though both are years away. Today, energy-efficient embedded SRAMs and register files are available for deep learning accelerators.
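The analog multiply-accumulate idea behind these NVM approaches can be sketched as follows (an idealized, noise-free model under assumed values, not a device-accurate simulation): weights are programmed as cell conductances, inputs are applied as voltages, Ohm's law performs each multiplication, and summing currents on a shared bit line performs the accumulation.

```python
# Idealized model of analog in-memory multiply-accumulate.
# Weights are stored as cell conductances G (e.g. RRAM/PCM resistance
# states); inputs are applied as word-line voltages V. Ohm's law gives
# per-cell currents I = G * V, and summing those currents on the shared
# bit line (Kirchhoff's current law) yields a dot product in one step.

def analog_dot(conductances, voltages):
    """Bit-line current = sum of G_i * V_i (ideal, noise-free)."""
    return sum(g * v for g, v in zip(conductances, voltages))

G = [0.5, 1.0, 0.25]   # programmed weights (siemens, illustrative)
V = [1.0, 0.2, 0.8]    # input activations encoded as voltages
print(analog_dot(G, V))  # ≈ 0.9 = 0.5*1.0 + 1.0*0.2 + 0.25*0.8
```

In practice, device variation, limited conductance precision, and ADC/DAC overhead at the array boundary are what stand between this ideal model and the projected efficiency gains.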