Everyone is in a race to get to market first with a chip that has the best power, performance, and area (PPA). Engineering ingenuity goes a long way toward this goal; after all, smart engineers can always make things work. But that ingenuity carries a huge hidden cost. Time spent tinkering with disparate tools to improve correlation of results is money and design cycles wasted, and time not spent further differentiating your design. Some designers find themselves adding margins and essentially making the implementation tool work harder to achieve similar numbers at timing signoff. Unfortunately, this “forced correlation and convergence” approach can significantly “overcook” the system, manifesting as extra power or area in the resulting design.
Say you have a year to complete a project. If you could cut the time it takes to hit your initial PPA targets to six months, you’d have another half a year to rearchitect and further enhance your PPA metrics, perhaps well beyond what you imagined possible. Freed up to tackle optimization earlier in the cycle, you can enjoy the efficiencies and positive outcomes of a “shift left” approach. With today’s market pressures and distributed design teams, anything that brings greater efficiency can be turned into a competitive advantage.
The changing landscape of the hardware design world also calls for a reinvented tool flow. These days, traditional chip design houses are joined by hyperscalers designing their own high-performance chips for the massive data centers that support their core businesses, such as social media, search, and e-commerce platforms. These companies need their design engineering teams to ramp up and become productive quickly. A common platform of chip design tools simplifies the effort and fosters better outcomes, eliminating the time spent making disparate tools work together.
Tightly correlated tools also share metadata that can prove useful for optimization later in the design flow. By contrast, if you’re using point tools with typical standardized database or ASCII handoffs, that metadata gets lost. For example, when you synthesize an adder, you create the netlist, and often even the knowledge that it’s an adder disappears into the broad sea of elemental logic gates. A converged tool flow, however, remembers that you have a 32-bit adder. Later in the flow, you can easily change the adder’s structure if you find that it sits on a critical data path that needs to be faster.

Power intent is another useful parameter when shared across tools (and in a common data model). When you read in the RTL, you can understand the power intent. You can see that a particular register set you’ll infer later is connected, so you can treat it as a bank of multi-bit registers, keeping that information available in the data model. This way, you can optimize the bank of registers accordingly, something that wouldn’t be possible if you had no knowledge of the power intent across each register and the fact that the registers were meant to exist as a common structure.
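To make the adder example concrete, here is a minimal sketch in Python. It is purely illustrative: the class names and fields are invented for this article, not any real tool’s API, and real converged EDA data models are proprietary and far richer. It shows the difference between a flat gate-level handoff and a data model that keeps high-level intent alive.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Cell:
    """An elemental logic gate in a flat netlist."""
    name: str
    gate_type: str  # e.g., "XOR2", "NAND2"

@dataclass
class DatapathOp:
    """A cluster of cells annotated with high-level intent that a
    converged data model keeps alive after synthesis."""
    cells: List[Cell] = field(default_factory=list)
    operator: Optional[str] = None      # e.g., "adder"; dropped in an ASCII handoff
    width: Optional[int] = None         # e.g., 32
    architecture: str = "ripple-carry"  # the current structural choice

    def restructure_for_speed(self) -> None:
        """Late in the flow, retarget the structure if this op lands on a
        critical path; possible only because the 'adder' intent survived."""
        if self.operator == "adder":
            self.architecture = "carry-lookahead"

# Converged flow: the intent rides along with the gates, so a downstream
# step can recognize the 32-bit adder and swap in a faster architecture.
adder = DatapathOp(operator="adder", width=32)
adder.restructure_for_speed()
print(adder.architecture)  # -> "carry-lookahead"

# Point-tool handoff: only anonymous cells remain, so nothing downstream
# can tell these gates ever formed an adder, let alone restructure it.
flat_netlist = [Cell("U1", "XOR2"), Cell("U2", "NAND2")]
```

The same principle underlies the power-intent example: because the common data model records that the inferred registers belong together, downstream optimization can treat them as one multi-bit bank rather than as unrelated cells.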