Cloud native EDA tools & pre-optimized hardware platforms
Synopsys Northern Europe is hosting a Technical Symposium providing updates on all aspects of creating state-of-the-art designs at emerging and established nodes.
This event provides an opportunity for users to stay connected with the latest products and innovations, and to pick up the tips, tricks and best practices that our experts will share.
Synopsys technical experts will provide an “under the hood” look at proven and new technologies that designers can use to meet their aggressive schedules and to achieve challenging Performance, Power and Area goals.
Multiple tracks will be offered where experts will update you on exciting new technologies and features that are now available.
We will conclude the day with a social event where you will have the opportunity to meet and discuss with your industry peers and Synopsys experts in an informal atmosphere.
You will also have the opportunity to win one of our prizes in the prize draw at the end of the day.
Thursday, 27 October
8:30 a.m. – 5:30 p.m.
Hilton Reading, Drake Way, Reading,
RG2 0GQ UK
09:30 - 10:30 - Windsor 1
Design Compiler is the industry's trusted solution for the best-quality RTL synthesis, and it continues to evolve with significant investments to leverage Fusion technologies. In this session we will highlight recent enhancements and preview the technology roadmap.
RTL Architect is gaining rapid adoption throughout the SoC and IP development community. It pinpoints PPA and congestion bottlenecks; offers powerful, automated floorplanning capabilities; provides a flexible, powerful and intuitive GUI cockpit; shares technologies with the Fusion Compiler RTL-to-GDSII implementation solution; and incorporates golden signoff power analysis capabilities and accuracy.
In this session we will introduce the use model and benefits of RTL Architect for RTL developers.
RTL Architect is gaining rapid adoption throughout the SoC and IP development community. It pinpoints PPA and congestion bottlenecks; offers powerful, automated floorplanning capabilities; provides a flexible, powerful and intuitive GUI cockpit; shares technologies with the Fusion Compiler RTL-to-GDSII implementation solution; and incorporates golden signoff power analysis capabilities and accuracy.
In this session we will introduce the use model and benefits of RTL Architect for Physical Designers.
Learn about the newest technologies in the Synopsys RTL-to-GDSII implementation flows using Fusion Compiler and IC Compiler II. Discover what’s different from separate synthesis and P&R point-tool flows, and why this benefits power, performance and area quality of results.
As we shift left in the ASIC flow, the Synopsys TestMAX family of DFT products provides a comprehensive integration flow from RTL to ATE. TestMAX Manager provides a Tcl-based framework for the interoperability of these products, enabling flow automation and customization along with design introspection and editing capabilities. This presentation will give an overview of these DFT solutions, with special emphasis on TestMAX Access and its support for IEEE 1687.
TestMAX Manager supports multiple features and technologies, each with various options. Designs usually have several hierarchical levels that require DFT features for integration. This flexibility leads to a nearly infinite number of possible design-for-test (DFT) implementations.
The TestMAX Mainstream and Automotive flows guide users along a proven path to DFT implementation via a documented, validated example testcase, and exercise the interoperability between TestMAX Manager and the other tools involved in design development. This presentation provides the necessary information on the DFT flow, the example design, and the features and technologies exercised.
The Failure Mode Distribution (FMD) quantifies how each component's failures are distributed across its failure modes, and how those modes affect product functionality. Entering the FMD as an estimated value in the FMEDA is deemed acceptable for ASIL A/B, but ISO 26262 requires a quantified, traceable FMD analysis, especially for ASIL D. Approaches to deriving the FMD data range from qualitative pin-distribution analysis up to fully quantitative analysis.
This presentation shows an automated, tool-based flow and introduces quantitative FMD calculation and reporting with the TestMAX FuSa tool.
An example will also show how TestMAX FuSa is used in the functional safety development flow for an ARC processor core, and how the quality of its reported results compares with other approaches.
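To make the quantitative FMD idea concrete, the sketch below distributes a component's total failure rate across its failure modes based on analyzed fault counts, rather than entering an estimate. All names and numbers are illustrative assumptions, not TestMAX FuSa output or its actual algorithm.

```python
# Hypothetical worked example of a quantitative Failure Mode Distribution
# (FMD): split a component's base failure rate (in FIT) across its failure
# modes in proportion to analyzed fault counts, e.g. from a pin-level
# structural analysis. Numbers are illustrative only.

base_fit = 120.0  # total failure rate of the component, in FIT (assumed)

# Fault counts per failure mode (illustrative):
mode_fault_counts = {
    "output stuck-at-0": 40,
    "output stuck-at-1": 35,
    "delay fault": 25,
}

total_faults = sum(mode_fault_counts.values())

# FMD: the fraction of failures attributed to each mode.
fmd = {mode: n / total_faults for mode, n in mode_fault_counts.items()}

# Per-mode failure rate: the base rate apportioned by the FMD.
fit_per_mode = {mode: frac * base_fit for mode, frac in fmd.items()}

for mode in mode_fault_counts:
    print(f"{mode}: FMD = {fmd[mode]:.1%}, rate = {fit_per_mode[mode]:.1f} FIT")
```

The key property of a quantified FMD is that the per-mode fractions sum to 1, so the per-mode rates sum back to the component's base failure rate, which is what makes the analysis traceable.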
Formality is an equivalence checking (EC) tool that uses formal, static techniques to determine if two versions of a design are functionally equivalent. Formality includes interactive and automated ECO functionality. In this session we will highlight recent enhancements and preview the technology roadmap.
As designs continue to grow in size and complexity, closing and signing them off within the schedule and with the available resources becomes increasingly difficult.
Synopsys signoff tools offer methodologies to achieve these goals with technologies called “HyperScale” and “HyperGrid”, which use the familiar PrimeTime flow and its core engines to perform analysis hierarchically and with design partitioning.
This presentation will share technical details on how these technologies can be used to sign off large designs efficiently.
Derating solutions are currently applied to deal with increased design variability at advanced nodes and in low-power designs, but they introduce over-margining and timing pessimism that directly impact PPA.
PrimeShield introduces new solutions in place of the current derating approaches, to reduce the pessimism and improve design robustness through:
- Analyzing and improving design variation robustness, including global and local effects
- Analyzing designs at high sigma using fast Monte Carlo and machine learning
- Improving voltage robustness and I/R drop resilience, while reducing Vdd or increasing frequency
- Analyzing global skew, including Vt skew, device skew or interconnect skew, allowing margins to be removed and/or the number of scenarios to be reduced
This presentation will cover how PrimeShield can help improve robustness and performance by tackling over-margining using the solutions outlined above.
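The "high sigma" analysis mentioned above addresses rare timing failures that plain Monte Carlo would need millions of samples to observe. A minimal sketch of the underlying statistical idea, using importance sampling as one fast Monte Carlo technique, is shown below; the path model, numbers, and method are illustrative assumptions, not PrimeShield's actual algorithm.

```python
import math
import random

# Sketch: estimate a rare ("high sigma") timing-failure probability with
# importance sampling instead of naive Monte Carlo. We model a path delay
# as N(mean, sigma) and want P(delay > mean + 5*sigma), whose exact value
# for a normal is ~2.87e-7 -- naive MC would need millions of samples per
# observed failure, so we sample from a proposal centred at the threshold
# and reweight by the likelihood ratio.

random.seed(0)
mean, sigma = 1.0, 0.05          # illustrative path-delay statistics
threshold = mean + 5.0 * sigma   # the 5-sigma failure boundary

def estimate_tail_prob(n_samples=200_000):
    total = 0.0
    for _ in range(n_samples):
        x = random.gauss(threshold, sigma)  # shifted proposal distribution
        if x > threshold:
            # Likelihood ratio: true density N(mean, sigma) divided by
            # proposal density N(threshold, sigma), simplified analytically.
            lr = math.exp(
                (threshold**2 - mean**2 - 2.0 * x * (threshold - mean))
                / (2.0 * sigma**2)
            )
            total += lr
    return total / n_samples

p = estimate_tail_prob()
print(f"estimated P(delay > 5 sigma) = {p:.2e}")
```

Because roughly half the proposal samples land beyond the threshold, the estimator converges with a modest sample count where naive sampling would see almost no failures at all, which is the practical point of fast Monte Carlo at high sigma.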
Ever-increasing complexity is making it harder for designers to achieve the best PPA, and machine learning can help close the gap. In this session we will explore controlling Fusion Compiler from a machine-learning environment.
Why RedHawk Fusion? With increasing power integrity challenges, IR drop closure becomes a major hurdle in signoff, since DRC, timing, power and EM/IR fixing all impact each other. RedHawk is the industry-leading IR signoff tool and produces accurate analysis data. RedHawk Fusion-driven power integrity optimization at an early stage aims to reduce project risk, turnaround time (TAT) and time to market (TTM).
The emerging paradigm shift towards Silicon Lifecycle Management is changing the way the Semiconductor Test and SoC communities think about device and system performance, silicon health and predictability. The new era of SLM has opened up opportunities for developing new, insightful monitoring and analytics technologies that are now providing solutions to the optimization challenges faced by test teams, chip and system developers across a wide range of applications. Silicon Lifecycle Management is one of the most exciting areas of evolution for the semiconductor industry, based on the value it brings to each phase of the device lifecycle, from early design right through to in-field operation.
As Silicon Lifecycle Management continues to gain momentum it is just a matter of time before adopting and using SLM is standard procedure on every project. The Synopsys Silicon Lifecycle Management family has been developed to improve silicon operational metrics at every phase of the device lifecycle and has been built on a foundation of enriched in-chip observability, analytics and integrated automation. Embedded monitors enable deep insights from silicon to system. Meaningful data is gathered at every opportunity for continuous analysis and actionable feedback.
Today Silicon Lifecycle Management (SLM) is generating a lot of interest within the semiconductor test and SoC communities as it will soon allow designers to optimize and track their devices throughout their entire lifetimes, from the early design phase, through manufacturing and finally during in-field operation. To aid in this new process, added visibility within the device is required and provided by the use of embedded monitors to gather key environmental and structural data from the device. This critical data is then transported off-chip to a unified SLM database ready for analysis.
Targeted analytics opportunities therefore lie within each of the lifecycle phases. In the design phase, silicon parametric data can be fed back for better power and performance design tuning. During the ramp phase, precise assessments can be made to quickly identify systematic issues to speed up yield ramp. The production phase utilizes analytics to greatly improve device screening for increased quality and reliability as well as reduced test cost while maintaining high yield. The final, in-field phase is where the collection and analysis of in-chip monitor data can help track the health and performance of the device, allowing predictive maintenance and, where permitted, performance tuning to extend device longevity and prevent disruptive downtime in the overall system hosting the device.
In this presentation, we will look at how Synopsys is starting to implement a holistic analytics solution as part of its integrated SLM family - complete end-to-end integrated analysis across the life of the device. In-silicon health, observability and insight are key when it comes to SLM and as an industry we can no longer afford to ignore what is happening inside our devices throughout their lifetime.
The Synopsys integrated Silicon Lifecycle Management (SLM) family is built on a foundation of enriched on-chip observability, high speed data access and analytics.
Embedded in-chip environmental and structural monitors enable this enhanced visibility and ensure optimal silicon health is achieved throughout the device lifecycle.
In-chip PVT monitors provide real-time data on dynamic conditions like process variability, voltage supply and thermal activity, and Path Margin Monitors measure the timing margins of real functional paths. The meaningful data from these types of embedded SLM monitors is gathered at every stage of the silicon lifecycle. The data is then transported off-chip where analytics is applied, allowing insightful decisions to be made and action to be taken.
Industry experts describe the amount of verification needed for processor-based systems with the term ‘deep cycles’. Only a high-performance prototype farm can deliver the deep cycles needed to run meaningful software workloads and system validation regressions.
In this tutorial we will showcase best-in-class prototyping methodologies harnessing all the capabilities of a state-of-the-art, high-performance prototyping system. We will show how to get full insight into the hardware running at prototyping speeds and efficiently run regressions leveraging a full range of visibility technologies and techniques, including SystemVerilog assertions for an Arm-based system.
We will also connect the prototype to Arm software debuggers and capture processor data for full visibility of the software execution. We will explain how to accelerate interface-subsystem validation through the use of real-world interfaces and pre-built interface prototyping kits.
Finally, we will explain how prototyping teams can enable their end users through a centralized deployment of prototyping systems using cloud-ready resource management systems. Throughout the tutorial we use examples based on the HAPS-100 prototyping system.
Low Power remains the #1 verification issue, whether to meet the battery constraints of portable devices, or contain the runtime costs of high density data centre computing, or to ensure the correct specification of SoC packaging and PCB power supplies.
Accurate power estimation requires real-world software payloads: short verification tests or peak usage assumptions are no longer enough to ensure the finished product meets its power goals.
Applying real software payloads to gate-level netlists, however, has usually been unachievable in realistic timescales.
ZeBu Empower meets the challenge, and provides an efficient way of accurately calculating power using realistic software payloads with workable turnaround times, as part of an end-to-end power methodology that ensures designs meet their low power requirements.
ECOs describe the process of changing a design database to address functional and timing discrepancies, typically, but not exclusively, late in the implementation flow. This presentation explores the commands and techniques that underpin their physical implementation.
When you’re developing a multi-million-gate chip, every seemingly small bit of power and area you can save from every block multiplies into a big impact on the overall power-performance-area (PPA) equation. An ECO (engineering change order) file is issued to the physical implementation (layout) tools to make final tweaks and correct any issues found. This process worked well for designs at older technology nodes, but it has become a big challenge in electronic design signoff at the advanced technology process nodes. This challenge frequently takes numerous iterations to converge, consumes growing hardware resources, and removes predictability in project schedules. This process has become worse in recent years. It is common for the ECO process to consume 50% or more of the design-closure time at advanced technology nodes.
In this session we will introduce the latest capabilities from Synopsys to help address the ever-growing challenge of effective and efficient ECO design closure.
Design teams are constantly in need of new technologies to minimize test time and test data volume, accelerate design-for-test (DFT) development, and ensure optimal design implementation. In this tutorial, we will provide guidance on using two key TestMAX technologies that address these challenges: the DFTMAX SEQ codec, which provides flexible sequential compression to significantly reduce test data volume for large designs; and Streaming Fabric, a high-throughput, bus-based data delivery structure designed to reduce test time while easing physical implementation. In addition, key runtime improvements for ATPG will also be covered.
As designs grow and GPIO availability for scan shrinks, it has become imperative to use functional I/Os for scan testing. This presentation will give an overview of how adaptive, high-bandwidth testing can be done over functional interfaces.
08:30 - 09:30 - Windsor Foyer
10:30 - 11:00
12:30 - 13:30
15:00 - 15:30
16:30 - 17:30 - Windsor Foyer