When I was an engineer fresh out of college, I worked for a large defense contractor in southern California. The workplace was filled with employees who had worked their whole careers with the company; some of them for as many as 40 years. To get an idea of how many people I’m talking about, there was a retirement party for at least 3 or 4 people every week just in our division. You can imagine that I heard many stories, got a lot of advice, and was frequently given “phrases of wisdom” from these soon-to-be-retired pillars of the engineering community. The topic of this blog brings to mind one of those phrases… “good enough for government work.”
The connotation of “good enough for government work” is that a solution was not perfect or to exacting standards but was simply good enough to get the job done because government standards were… well, I think you get the point. So what am I thinking of when I think of this phrase? The subject is static timing analysis.
When you think about how static timing analysis has evolved over the years, you come to realize that it has been a “good enough for government work” exercise that has gone through several levels of refinement driven by the decreasing feature sizes of chip designs. Early STA didn’t account for interconnect delays because cell delays were the dominant factor. Once gate delays got small enough, it became important to model delay through interconnect, and once routing pitch got small enough, cross-coupling delays on interconnect needed to be modeled. Then on-chip variation became a problem, and that needed to be modeled too. The constant in all of these approaches is that the delay of a cell is pre-characterized and contained in a timing model.
The timing model contains a lookup table indexed by the various external load and input slew conditions that the cell might see in a real circuit, and this is what brings me back to “good enough for government work.” Getting around the differences between how a cell is characterized and what it actually sees in a real circuit requires a bit of margin to be on the safe side. So a timing model is not a perfect solution, but it gets the job done with some pessimism thrown in. This is not to say that there aren’t other forms of margin and pessimism. Parasitic extraction error, delay calculation error, and errors from process, voltage, and temperature variation are just a few, but let’s focus on the timing model.
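The lookup-table mechanism described above can be sketched in a few lines. Everything below is hypothetical and for illustration only: the grid values, units, and function names are invented, not taken from any real timing library, but the bilinear interpolation over slew and load is the standard idea behind table-based delay models.

```python
# Sketch of a table-based cell delay model: delay is pre-characterized
# on a grid of input slew and output load, and STA interpolates between
# grid points. All numbers here are hypothetical, for illustration only.

import bisect

# Hypothetical characterization grid (ns slew, pF load -> ns delay)
slews = [0.01, 0.05, 0.10]           # input transition times
loads = [0.001, 0.005, 0.010]        # output capacitive loads
delay_table = [                      # rows: slew, cols: load
    [0.020, 0.035, 0.055],
    [0.025, 0.042, 0.065],
    [0.032, 0.050, 0.078],
]

def _bracket(axis, x):
    """Return indices (i, i+1) bracketing x, clamped to the grid edges."""
    i = bisect.bisect_right(axis, x) - 1
    i = max(0, min(i, len(axis) - 2))
    return i, i + 1

def cell_delay(slew, load):
    """Bilinearly interpolate the characterized delay table."""
    i0, i1 = _bracket(slews, slew)
    j0, j1 = _bracket(loads, load)
    ts = (slew - slews[i0]) / (slews[i1] - slews[i0])
    tl = (load - loads[j0]) / (loads[j1] - loads[j0])
    d00, d01 = delay_table[i0][j0], delay_table[i0][j1]
    d10, d11 = delay_table[i1][j0], delay_table[i1][j1]
    return (d00 * (1 - ts) * (1 - tl) + d01 * (1 - ts) * tl
            + d10 * ts * (1 - tl) + d11 * ts * tl)

# Real-circuit conditions rarely land exactly on a grid point, so the
# tool must interpolate (or extrapolate), and that is one source of the
# uncertainty that margin has to cover.
print(f"{cell_delay(0.03, 0.003):.4f} ns")
```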
As we get to 5nm and ultra-low-voltage operation, designers rely increasingly on margins to cover uncertainties in delay calculation. These designs run at multi-gigahertz frequencies, so it’s not so much that the margins are increasing but that the clock periods are decreasing, leaving fewer precious picoseconds for a signal to get from one flip-flop to another. For those critical signals that are marginally passing or failing, we need to rethink the idea of the timing model. In fact, I contend that we should abandon the timing model for critical paths and go straight to a transistor-level STA approach.
In a transistor-level STA approach, standard cells can be re-analyzed at the transistor level and characterized under the exact contextual conditions (input slew and output load), thereby removing the delay uncertainty of interpolating or extrapolating between points in a lookup table. Furthermore, operating at the transistor level brings SPICE-level analysis into the picture, which further reduces the uncertainty in computing delays along a path. Imagine an STA system that automatically applies transistor-level STA to cells in critical paths to reduce pessimism and waive violations.
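The pessimism-recovery argument can be made concrete with a toy comparison. The `exact_delay` formula below is an invented stand-in for a transistor-level (SPICE) simulation, chosen only because delay is nonlinear in slew; the numbers are hypothetical. The point it illustrates: interpolating a nonlinear delay between two characterization points gives a different (here, larger) answer than evaluating at the cell's actual operating point.

```python
# Hypothetical illustration: table interpolation vs. exact-context analysis.

def exact_delay(slew):
    # Stand-in for a transistor-level (SPICE) run at the cell's real
    # input slew; this formula is invented purely for illustration.
    return 0.020 + 0.3 * slew + 50.0 * slew ** 2   # ns

# Two characterized grid points (ns)
s0, s1 = 0.01, 0.05
d0, d1 = exact_delay(s0), exact_delay(s1)

# STA with a timing model: linear interpolation between grid points.
s = 0.03                                 # actual slew, off-grid
interp = d0 + (d1 - d0) * (s - s0) / (s1 - s0)

# Transistor-level STA: evaluate at the exact operating point instead.
exact = exact_delay(s)

print(f"interpolated={interp:.4f} ns, exact={exact:.4f} ns, "
      f"pessimism recovered={interp - exact:.4f} ns")
```

Because the toy delay curve is convex, interpolation overestimates the delay; in this made-up case the exact-context evaluation recovers 0.020 ns, which is the kind of pessimism a transistor-level pass on a critical path could give back.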
At Synopsys, we have the basis for transistor-level STA in our NanoTime product. It is a solution whose time has come for advanced process nodes beyond 7nm, and my example is just one of many potential uses.