As SSD capacities grow, so does the need to do more processing inside the drive itself. Compute in storage addresses the latency and power costs of moving large amounts of data, while also extending drive life and improving reliability. In the past, data was moved from a drive to a compute device for processing. In enterprise systems, the data had to be transferred across multiple interfaces and protocols. Not only does this take time and increase latency, it also burns power. In the end, multiple copies of the data would be held at different points in the system, increasing memory requirements and weakening data security.
To eliminate these issues, compute is being moved into the storage device. This significantly reduces data movement, minimizes latency, lowers power consumption, and improves data security because the data never leaves the drive. An additional benefit is that the processing can be optimized for the workload, resulting in increased throughput and performance.
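The data-movement saving can be illustrated with a toy sketch. The ComputationalDrive class and its methods below are purely hypothetical stand-ins for whatever interface a real computational-storage device exposes (typically vendor- or standards-defined command sets, not a Python API); the point is only to contrast how much data crosses the host interface in each path.

```python
# Hypothetical sketch: host-side processing vs. an in-storage filter offload.
# ComputationalDrive is an illustrative toy model, not a real device API.

class ComputationalDrive:
    """Toy model of a drive that can run a filter next to the data."""

    def __init__(self, records):
        self.records = records  # data resident on the drive

    def read_all(self):
        # Conventional path: every record crosses the host interface.
        return list(self.records)

    def filter_in_storage(self, predicate):
        # Computational path: the predicate runs inside the drive and
        # only matching records cross the host interface.
        return [r for r in self.records if predicate(r)]


drive = ComputationalDrive(records=range(1_000_000))

# Host-side processing: transfer 1,000,000 records, then filter on the host.
host_hits = [r for r in drive.read_all() if r % 1000 == 0]

# In-storage processing: transfer only the ~1,000 matching records.
device_hits = drive.filter_in_storage(lambda r: r % 1000 == 0)

assert host_hits == device_hits
```

In this toy example both paths return the same result, but the in-storage path moves roughly a thousandth of the data across the interface, which is where the latency, power, and security benefits described above come from.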
Compute in storage is also being used to increase the reliability and endurance of SSDs. To increase the density of flash memory, designers have increased the number of bits stored per cell. As the bits per cell increase, the number of program/erase cycles the cell can endure declines sharply. This is a serious problem for SSDs, especially in enterprise applications where a cell may be erased and reprogrammed many times a day. Storage controllers therefore must be designed with enough performance to support in-storage compute while managing both the write amplification factor and the memory's limited program/erase cycles, in order to maximize endurance and reliability.
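A rough back-of-the-envelope calculation shows why write amplification matters so much for endurance. The sketch below estimates drive lifetime from capacity, rated P/E cycles, write amplification factor (WAF), and daily host writes; the figures are illustrative assumptions, not taken from any particular product.

```python
def estimated_lifetime_years(capacity_tb, pe_cycles, waf, host_writes_tb_per_day):
    """Rough endurance estimate: total NAND writes the media can absorb,
    divided by the physical writes actually generated per day."""
    total_nand_writes_tb = capacity_tb * pe_cycles          # rated media writes
    physical_writes_per_day = host_writes_tb_per_day * waf  # host writes amplified by WAF
    return total_nand_writes_tb / physical_writes_per_day / 365

# Illustrative numbers: a 4 TB drive with 3,000-cycle NAND, written with
# 4 TB of host data per day (one drive write per day).
print(estimated_lifetime_years(4, 3000, waf=4.0, host_writes_tb_per_day=4))  # ~2.1 years
print(estimated_lifetime_years(4, 3000, waf=1.5, host_writes_tb_per_day=4))  # ~5.5 years
```

Under these assumed numbers, cutting the write amplification factor from 4.0 to 1.5 more than doubles the estimated lifetime, which is why smart, workload-aware handling of writes inside the drive pays off directly in endurance.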