In their latest article from “The Quest for Bugs” series, titled “Correct by Design,” Joe Convey and Bryan Dickman explore how using virtual prototyping to design and simulate product architectures can eliminate costly architectural design bugs before committing to register-transfer level (RTL). In other words, get the design architecture right first, then write the RTL! Joe and Bryan characterize bad architecture decisions as “architecture design bugs.” They examine this problem through the lens of ASIC design and verification and ask how such bugs can be found and eliminated during the architecture design phase. Failing to find these bugs can have costly downstream impacts on the power, performance, and security of the end product. Further, they point out that design viability can be compromised by poor architectural decisions, leading to designs that struggle to meet timing or are difficult to place and route due to wire congestion, for example.
As Joe and Bryan mention, architectural analysis was historically carried out using spreadsheets, an entirely static approach. This method is very limited, relying on a high degree of guesswork as you speculate on what the dynamic activity of the target system will look like, and it is easy to miscalculate. The trouble is, you may not realize your mistake until you reach the system validation stage, when you finally have functional RTL and functional software running together. This pre-silicon system validation stage is essential for wringing out hardware and software bugs, including performance and power issues. It is supported by performance-leading hardware acceleration platforms featuring best-in-class debug capabilities, such as ZeBu® Server 4, ZeBu EP1, ZeBu Empower, or the HAPS®-100 prototyping system from Synopsys.
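To make that limitation concrete, here is a minimal Python sketch of the spreadsheet-style check, with entirely invented initiator names, bandwidth figures, and burst factors. The static sum of averages comfortably “passes,” while an equally plausible burst-overlap scenario blows through the same budget:

```python
# A sketch of a static, spreadsheet-style bandwidth check: sum the average
# bandwidth demands and compare against link capacity. All numbers are
# hypothetical, chosen only to illustrate the point.

LINK_CAPACITY_GBPS = 25.6  # assumed shared-interconnect capacity

# Assumed average bandwidth demand per initiator (GB/s)
initiators = {"cpu_cluster": 6.0, "gpu": 9.5, "dma": 3.0, "isp": 4.5}

total_avg = sum(initiators.values())
verdict = "OK" if total_avg <= LINK_CAPACITY_GBPS else "OVER BUDGET"
print(f"static total: {total_avg:.1f} of {LINK_CAPACITY_GBPS} GB/s -> {verdict}")

# The static view misses peaks: if the GPU and ISP burst at, say, 3x their
# average at the same moment, instantaneous demand far exceeds capacity and
# latency-sensitive CPU traffic stalls -- a dynamic effect that only running
# realistic traffic profiles would expose.
peak = (initiators["cpu_cluster"] + 3 * initiators["gpu"]
        + initiators["dma"] + 3 * initiators["isp"])
print(f"burst-overlap demand: {peak:.1f} GB/s")
```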
So, bad product architecture decisions can lead to late RTL rework costs. Worse still, you might have successfully built the wrong product: functionally correct, but missing the mark on performance and power, for example. Adopting a simulation-oriented approach to architectural exploration enables a far more extensive and measurable analysis of design choices, achieved by executing realistic traffic profiles and critical software sequences. You wouldn’t dream of skipping the RTL or software validation stages, so why leave the critical architecture design stage to guesswork?
As Joe and Bryan discuss in “Correct by Design,” we need to simulate the architecture design because dynamic effects arise from workloads that change over time, or from multiple applications sharing the same resources. Modern many-core systems with dynamic scheduling have multiple initiators contending for shared interconnects, giving rise to arbitration delays and to memory access latencies that vary with cache behavior and DDR traffic. Such dynamic effects make performance very difficult to predict with static analysis.
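To illustrate (rather than reproduce any particular tool), here is a minimal Python sketch of several initiators sharing one memory port with round-robin arbitration and a cache hit/miss service-time mix; every parameter is an assumption chosen for illustration. The latency each initiator observes emerges from the interleaving of traffic, which is exactly what a static, per-initiator figure cannot capture:

```python
# Minimal sketch: initiators issue one request per round to a single shared
# memory port; requests queue behind each other, and service time depends on
# whether the access hits a cache or goes to DDR. All parameters are assumed.
import random, statistics

random.seed(0)

CACHE_HIT_CYCLES = 4   # assumed service time for a cache hit
DDR_MISS_CYCLES = 40   # assumed service time for a DDR access
HIT_RATE = 0.7         # assumed cache hit rate
ROUND_INTERVAL = 50    # assumed cycles between request rounds

def run(initiator_count, rounds=1000):
    """Round-robin arbitration over a single shared memory port."""
    port_free_at = 0   # cycle at which the port next becomes free
    latencies = []
    for r in range(rounds):
        issue_time = r * ROUND_INTERVAL
        for _ in range(initiator_count):  # one request per initiator per round
            hit = random.random() < HIT_RATE
            service = CACHE_HIT_CYCLES if hit else DDR_MISS_CYCLES
            start = max(issue_time, port_free_at)  # queue behind earlier traffic
            port_free_at = start + service
            latencies.append(port_free_at - issue_time)
    return statistics.mean(latencies), max(latencies)

for n in (1, 2, 4):
    mean, worst = run(n)
    print(f"{n} initiator(s): mean latency {mean:7.1f} cycles, worst {worst}")
```

With one initiator the port keeps up easily; with two, requests start queueing behind each other; with four, aggregate demand exceeds the port’s throughput and queueing delay grows without bound over the run. No per-initiator average in a spreadsheet predicts that knee.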
In the same way, the dynamic residency of workloads on resources leads to dynamic power consumption, making it difficult to estimate average power and realistic peak power. Dynamic power management, such as dynamic voltage and frequency scaling (DVFS), creates a closed control loop: the application workload drives resource utilization, utilization drives power-management decisions, and those decisions in turn change how fast the workload runs. This makes predicting power and performance even more difficult, as the elements of the loop become interdependent.
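Again purely as an illustration, the following Python sketch closes that loop with an assumed on-demand-style governor, assumed operating points, and the textbook dynamic-power relation P ≈ C·V²·f. Neither utilization nor power can be read off a static table here; they only fall out of running the loop:

```python
# Minimal sketch of a DVFS closed loop, with entirely assumed parameters:
# a workload presents time-varying demand, a simple governor picks an
# operating point from observed utilization, and dynamic power follows
# P ~ C * V^2 * f scaled by activity. Utilization depends on the frequency
# the governor chose, and the next choice depends on utilization.
import math

# Assumed operating points: (frequency in GHz, voltage in V)
OPP = [(0.5, 0.60), (1.0, 0.75), (1.5, 0.90), (2.0, 1.05)]
CAPACITANCE = 1.0  # assumed effective switched capacitance (normalized)

def governor(utilization, level):
    """Raise frequency when busy, lower it when idle (hypothetical policy)."""
    if utilization > 0.85 and level < len(OPP) - 1:
        return level + 1
    if utilization < 0.40 and level > 0:
        return level - 1
    return level

level = 0
for step in range(20):
    # Assumed workload: demand in work units per interval, varying over time.
    demand = 1.2 + math.sin(step / 3.0)
    freq, volt = OPP[level]
    utilization = min(1.0, demand / (2.0 * freq))  # capacity scales with freq
    power = CAPACITANCE * volt**2 * freq * utilization
    print(f"t={step:2d} demand={demand:4.2f} f={freq:.1f}GHz "
          f"util={utilization:4.2f} P={power:4.2f}")
    level = governor(utilization, level)  # feedback: utilization drives DVFS
```

Even in this toy loop, a burst of demand raises utilization, the governor raises frequency and voltage, power jumps superlinearly, and utilization falls again, so average and peak power depend on the whole trajectory, not on any single operating point.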