These challenges are daunting, but the capabilities offered in new embedded processors will help designers deal with them. While the clock speeds of embedded designs aren't increasing, performance continues to increase because the latest embedded processors can execute more instructions per clock. Adding the capability to issue and execute multiple instructions in parallel (multiple-issue), or to run multiple hardware threads concurrently (multithreading), increases processor performance without increasing frequency. Another approach is to use multi-core processors in either a symmetric or an asymmetric configuration. All of these approaches enable more work to be done in parallel, increasing both performance and throughput.
However, increasing the work done per clock doesn't address memory access limitations. The growing gap between memory access speeds and logic speeds is most pronounced for processors that allow only one pipeline stage to access memory. In 28-nm processes, memory access speeds will limit the best-case maximum clock speed of such processors to roughly 1 GHz or less, and processors with single-cycle memory access have few options for overcoming that limit. Newer high-performance embedded processors allow two or more cycles per memory access, so memories can be banked and accessed in parallel. With two-cycle memory access, a processor can run at twice the speed of the memory, achieving much higher maximum clock speeds at every process node, including the newer advanced nodes.
Unfortunately, increasing processor performance through more instructions per clock, multi-core configurations, or higher clock speeds enabled by multi-cycle memory access will burn more power, which is a problem for designs with constrained power budgets. Designers of embedded processors can no longer throw transistors at the problem of increasing performance and throughput as has been done in the past. Any increase in performance has to be balanced against the increase in power consumption that naturally results. Therefore, embedded processors are now being measured in terms of performance efficiency instead of raw performance or power alone. Measured as performance per milliwatt (DMIPS/mW, CoreMark/mW, etc.), performance efficiency has to be treated as a key design metric for any new embedded processor. Careful balancing of performance efficiency enables embedded application designers to take advantage of increases in processor performance while limiting the accompanying increases in power consumption.
Of course, performance efficiency is not the only lever for controlling power consumption. New embedded processors give the designer much greater control over how the processor uses power. The ability to create power islands and exercise dynamic control over the processor's power consumption helps designers meet their system-on-chip's (SoC's) power targets. Significant strides are also being made in instruction sets and compilers to improve embedded code density. Saving 10% or more in embedded code size reduces memory requirements and, in many cases, saves more power than the processor itself consumes.