Why Power Modeling Is So Difficult

Demand is increasing for consistency in power modeling, but progress has taken far longer than anyone would have guessed.

Power modeling has been talked about for years and promoted by EDA vendors and chipmakers as an increasingly important tool for advanced designs. But unlike hardware and software modeling, which have been proven to speed time to market for multiple generations of silicon, power modeling has some unique problems that are more difficult to solve.

Despite continued development in this field, power models generated by EDA vendors, IP companies and even large chipmakers remain largely incompatible. Those models also are often incomplete or contain errors. And even where they do work, many engineers don’t know how to effectively use them.

“Power modeling today is cumbersome and error-prone,” said John Redmond, associate technical director for digital video technology at Broadcom. “There are three primary issues. First, hardware models today are developed in a custom way and are not interchangeable between companies, and often within a single company. Second, there is a lack of a good way to represent the workloads in an efficient manner. The stimulus input needs to match up with the hardware and we don’t have a standard way of doing this. And third, the EDA community has good solutions for power analysis at the RT and netlist levels, but there is a lack of tools at the system level.”

This can be partly explained by the fact that power models and the need for data from those models vary greatly by block, chip, package, board and system, as well as by use case. They even vary from one design to the next within the same company. But as more companies are forced to deal with power—starting at 65nm and 45/40nm, with progressively more complex issues at each successive node—the pressure to create better models is growing.

“Today’s industry standards for power modeling are in pretty good shape at the RTL/gate level,” said Vita Vishnyakov, low power methodology specialist in Microsoft’s Devices Group. “However, there is a gap at any level higher than that, whether that is architecture, system or software. Various homegrown, autonomous methodologies have been developed within the larger companies to close this gap, but these methodologies are not available or supported industry-wide, and they usually require quite a high effort to bring up or maintain. The recently released UPF 3.0 takes the first step in standardizing the way we describe power at these levels. Additional industry-wide working groups are in flight to expand and supplement this effort, such as P2415 and P2416. However, this is just the first step. Industry-wide adoption by EDA tool companies, IP providers and SoC/system integrators is what needs to happen next for meaningful progress.”

Multiple power models
A major challenge in developing power models on a commercial scale has been defining common elements that need to be included in all of these models. From there, tools can be created that build off the lowest common denominator of requirements.

“The challenge is that power modeling needs to be done both at the system level and for the hardware architecture,” said Alan Gibbons, power architect at Synopsys. “So you need accuracy, but not just for milliwatts. You have to deal with physics at different accuracy levels. The system-level power model needs to be abstract enough and have just enough detail, but not too much data. You need to be accurate enough to get the job done. You see this on the software side where there is trend-based analysis, which is accurate enough for that task, but the task changes for software, hardware and SoCs.”

The delineation here is between qualitative and quantitative accuracy, and those distinctions vary greatly depending upon what the engineer is trying to accomplish at any point in the design flow.

“One problem is agreeing upon a set of methodologies—some still use spreadsheets, while others are using high-level simulation with activity profiles such as processing load and bandwidth to represent use cases,” said Ashley Crawford, power architect at ARM. “With those methods, you can make system or SoC choices like dimensioning, but you can’t do much about the software. Others are using a full, virtual platform simulation. With virtual prototyping you can do a lot about the software, if you have something representative to run. This has led to many types of solutions addressing only certain pockets of the problem. While there is no one-size-fits-all answer, the industry must address differing needs with the smallest number of methodologies and as much commonality between them as possible.

“The IEEE 1801/UPF 3.0 standardization effort has potential to drive this convergence because it enables interoperability and reuse across methodologies and EDA solutions. What should evolve over time is a clear picture of the needed use models. Component power models may need to have multiple levels of abstraction in order to support that.”
 
Adds Crawford: “Have we made progress? I think so. Some has taken place with organizations using their own solutions, so it isn’t readily apparent. The EDA providers are also active, so we will begin to see navigation away from private solutions to the supported EDA flows.”
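
As a rough illustration of the spreadsheet-style, activity-profile estimates Crawford describes, the Python sketch below sums duty-cycled active power plus leakage per block for a single use case. The component names, activity factors and power numbers are invented for illustration and do not come from any real design or tool.

```python
# Minimal sketch of a spreadsheet-style, activity-profile power estimate.
# All component names, duty cycles and power numbers below are illustrative
# assumptions, not data from any real design.

# Per-component characterization: active power and leakage in milliwatts.
COMPONENTS = {
    "cpu_cluster": {"active_mw": 450.0, "leakage_mw": 30.0},
    "gpu":         {"active_mw": 600.0, "leakage_mw": 45.0},
    "dram_ctrl":   {"active_mw": 120.0, "leakage_mw": 10.0},
}

# A use case expressed as activity factors (0.0 = idle, 1.0 = fully active).
VIDEO_PLAYBACK = {"cpu_cluster": 0.15, "gpu": 0.40, "dram_ctrl": 0.55}

def use_case_power_mw(activity: dict) -> float:
    """Sum duty-cycled active power plus always-on leakage for each block."""
    total = 0.0
    for name, params in COMPONENTS.items():
        duty = activity.get(name, 0.0)
        total += duty * params["active_mw"] + params["leakage_mw"]
    return total

print(f"video playback: {use_case_power_mw(VIDEO_PLAYBACK):.1f} mW")
```

A use-case spreadsheet encodes essentially this structure; high-level simulators go further by replacing the fixed activity factors with profiles that vary over time.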

“There are several issues here,” said Nagu Dhanwada, low power tools and methodology architect at IBM. “The first is component abstraction. The second aspect involves PVT (power voltage temperature) in the model. The third thing is that you need a certain level of abstraction for the model at the system level. In all of this, re-use is a key aspect.”
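
Dhanwada’s point about folding PVT into the model can be pictured with first-order relationships: dynamic power scaling roughly with C·V²·f, and leakage rising steeply with temperature. The sketch below uses those textbook approximations with made-up coefficients; a characterized model would replace them with library data.

```python
# First-order sketch of folding PVT into a component power model.
# Dynamic power is scaled as activity * Ceff * V^2 * f; leakage grows with
# temperature. The coefficients are invented for illustration, not silicon data.

def dynamic_power_mw(c_eff_nf: float, volts: float, freq_mhz: float,
                     activity: float) -> float:
    """Switching power: activity * Ceff * V^2 * f (nF * V^2 * MHz = mW)."""
    return activity * c_eff_nf * volts ** 2 * freq_mhz

def leakage_power_mw(leak_25c_mw: float, temp_c: float,
                     doubling_deg: float = 25.0) -> float:
    """Leakage at 25C scaled by an assumed doubling every `doubling_deg` C."""
    return leak_25c_mw * 2.0 ** ((temp_c - 25.0) / doubling_deg)

# Example corner: 0.8 V, 1.2 GHz, 85 C, 30% switching activity.
p_dyn = dynamic_power_mw(c_eff_nf=0.5, volts=0.8, freq_mhz=1200, activity=0.3)
p_leak = leakage_power_mw(leak_25c_mw=12.0, temp_c=85.0)
print(f"dynamic {p_dyn:.1f} mW, leakage {p_leak:.1f} mW")
```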

That has led to some debate over just how accurate these models need to be.

“The accuracy needs to be good enough to make the correct decision when it needs to be made,” said Broadcom’s Redmond. “Early decisions don’t always need to be precisely accurate, just relatively accurate. If comparing two options, the magnitude can have a larger error bar, but the relative error bar between the options needs to be smaller. Accuracy improves as the design progresses from system to RTL to netlist, and finally to silicon. The error bar in RTL power estimation is in the 10% to 15% range today, and we as an industry are okay with this. This accuracy allows for making good decisions for relative comparisons.”

Those numbers vary, depending upon where engineers come into the design and what their focus is, which is one of the reasons power modeling has been so difficult to develop. In effect, it means different things to different people, and sometimes it varies even for the same people depending upon what they are trying to accomplish at any given point in the design.

“Accuracy is the relation between the pre-hardware-availability models and the real hardware measured models,” said Microsoft’s Vishnyakov. “The accuracy requirements, however, are a function of the application. The accuracy bar for design signoff would be a pretty high one (less than a 5% to 10% gap vs. actual hardware). The accuracy bar for hardware exploration or software development could be as high as a 20% to 40% error vs. final hardware, as long as it provides the right direction on improvements and degradations.”
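
To make the relative-versus-absolute distinction concrete, the short sketch below shows how an early model that overestimates every option by a wide margin can still rank two candidates correctly, as long as the option-to-option skew stays small. The power numbers are hypothetical.

```python
# Sketch of why relative accuracy can matter more than absolute accuracy when
# comparing two design options early on. The numbers are illustrative only.

true_power = {"option_a": 410.0, "option_b": 455.0}   # hypothetical silicon, mW

# Suppose the early model overestimates both options by ~30% (large absolute
# error) but with only a small option-to-option skew (small relative error).
model_power = {"option_a": 410.0 * 1.30, "option_b": 455.0 * 1.28}

best_per_model = min(model_power, key=model_power.get)
best_in_silicon = min(true_power, key=true_power.get)

print(f"model picks {best_per_model}, silicon agrees: "
      f"{best_per_model == best_in_silicon}")
```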

Big chipmakers have been developing their own power models for the past several process nodes. As with most tools, building your own isn’t the best solution when commercial alternatives are available. Homegrown models don’t scale, reuse is limited, and development resources are tight even in large organizations. The fact that this is still being done, though, is a reflection of the state of power modeling across the industry.

“The adoption issue is about the right combination of tooling and the availability of good models,” said ARM’s Crawford. “Until both EDA tools and models supporting IEEE 1801 UPF 3.0 are available, the motivation to play in the open isn’t really there.”
 
He noted that this becomes very interesting for the IoT market because while the system isn’t necessarily complex in terms of hardware and software scale, the energy profile is critical. “That lower scale might suggest you can ‘work it out,’ but it also enables a relatively easy path to a power model of the complete system, including the software interactions. You can potentially make a lot of progress optimizing these together; including finding out if implementation of time-consuming hardware features is really worth it.”

“A few years ago each EDA and IP vendor created a homegrown solution,” said Synopsys’ Gibbons. “We’re seeing that change now, and that will continue to pick up over the next couple of years because there is a standard for this. From a business standpoint the investment is smaller for EDA companies, because they don’t have to support a number of vendors’ products, and IP vendors will not have to create new models every time they update their IP.”

Broadcom’s Redmond agrees. “Across all segments, power is becoming a larger concern. Mobile products have realized that power modeling is not optional and have incurred the cost to do it, whereas other less-power-sensitive products will wait until the power modeling cost comes down. The cost will be reduced significantly as modeling standards and system level power tools come online.”

At least that’s the goal.

“Standardization can make things more accessible,” said Vishnyakov. “Standardization can help develop consistent EDA tools that are easier to adopt than homegrown solutions. The system-level power/performance challenges in the IoT/IoE space can be more challenging than those of the ‘standard’ mobile world. In addition, the huge range of IoT/IoE devices and architectures drives a more urgent need for an easy and quick system-level optimization flow, with power/performance optimization being an important portion of that flow.”

There also is a healthy amount of optimism that change is coming for the better. “In time, people will look to IP providers for models,” said Crawford. “However, for complex IP the issue is with simply starting delivery of ‘something.’ This is because correct use of the model requires mapping to the functional model stimulating it. In order to marry the power model to the functional model, you need to know what stimulus is available for consistency. As mentioned, designers may work at different abstractions, so models might need multiple levels with the associated stimulus and fidelity. Unless both the functional and power model components are delivered by the IP provider, what is needed must be determined first.”

He noted that deployment of power models requires portability, which is dependent on separation of model behavior and model data—the power state representation and the form of the power estimation equations, among other things. “The model data then requires implementation-specific re-characterization in order to be scalable. Thus, a package to do so should come with the model. For that reason, expect the interchange of models to catalyze progress on characterization methods. The IEEE 2416 group is addressing this.”
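
One way to picture the behavior/data separation Crawford describes is a component model whose power states and estimation equation are fixed, while the characterized coefficients arrive as a separate, replaceable data package. The Python sketch below is an assumption-laden illustration of that split, not IEEE 2416’s actual format; the state names and coefficient values are invented.

```python
# Sketch of separating model behavior (power states and the form of the
# estimation equation) from model data (characterized coefficients), so the
# same model can be re-characterized per implementation. State names and
# coefficient values are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class StateCoefficients:
    """Model data: characterized per implementation / process / library."""
    base_mw: float        # static power in this state
    mw_per_mhz: float     # dynamic slope vs. clock frequency

class ComponentPowerModel:
    """Model behavior: the power states and the estimation equation."""

    STATES = ("OFF", "RETENTION", "ACTIVE")

    def __init__(self, data: dict[str, StateCoefficients]):
        missing = set(self.STATES) - set(data)
        if missing:
            raise ValueError(f"missing characterization for: {missing}")
        self.data = data

    def power_mw(self, state: str, freq_mhz: float = 0.0) -> float:
        coeff = self.data[state]
        return coeff.base_mw + coeff.mw_per_mhz * freq_mhz

# Re-characterizing for a new implementation means swapping only the data.
node_a = ComponentPowerModel({
    "OFF":       StateCoefficients(0.0,  0.0),
    "RETENTION": StateCoefficients(1.5,  0.0),
    "ACTIVE":    StateCoefficients(20.0, 0.12),
})
print(f"ACTIVE @ 800 MHz: {node_a.power_mw('ACTIVE', 800):.1f} mW")
```

Re-characterizing for a new implementation then means regenerating only the data package, which is what makes such a model portable.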



Comments

Kev says:

Power is something you can model perfectly well with Verilog-AMS. The proper flow is to use something like UPF to annotate power-intent to synthesis, and get Verilog-AMS out of synthesis. Verilog-AMS is an abstract modeling language too, so there’s probably a large overlap with UPF – particularly since there was no attempt to join the efforts at Accellera.

Given that SystemVerilog doesn’t have the potential & flow constructs of Verilog-AMS, it’s questionable how accurately you can model power with SV/UPF, particularly if you are trying to do DVFS or back-biasing (for FDSOI). AFAIK the only models in the cell libraries are fixed-voltage RTL/logic or SPICE level, and SPICE level isn’t usable for SoC verification.

Also, most EDA standards being handled at the IEEE are now SA member-only participation, which locks out a lot of the users and lets the EDA companies stamp a badge of approval on bad methodology.

Pitch Monk says:

The issue is not whether power could be modeled perfectly at higher abstraction layers, or whether standardization can get there. The issue is how these “standards” are getting adopted by various tools to benefit designers. The implementation of these standards is lagging so severely that even complete UPF 2.0 support has not yet been implemented by many EDA vendors.
