SNUG Silicon Valley – IP Summit 

 
IP Summit
March 25th - 26th, 2013
Santa Clara Convention Center


The IP Summit program, for Synopsys customers only, consists of seven sessions focused on how you can easily integrate IP into your SoC designs with less risk and improved time-to-market. Don't miss our special Lunch and Learn session on "Designing IP for FinFET Technology: The Opportunities and Challenges."


Customers, please REGISTER NOW for SNUG and the IP Summit.

Monday, March 25

MA09: Hardening CPUs for Performance and Power with DesignWare Logic Libraries and Embedded Memories (11:00-12:30)
With mobile devices requiring better performance and longer battery life, SoCs need to deliver excellent speed and consume less power. The implementation of the processors fundamentally determines the overall performance and power consumption of the chip. In this session, VeriSilicon shares its design experiences hardening a high-performance, low-power processor core using DesignWare Logic Libraries and Memory Compilers on a 28nm process, along with Synopsys' implementation and signoff tools. We will also show how choosing the correct IP and methodology helps achieve optimal results, and discuss best practices for fine-tuning the results to reduce leakage power. Imagination will also present best practices for implementing memories and libraries to deliver superior performance, power and area. (90 min)

MA10: Lunch and Learn: Designing IP for FinFET Technology: The Opportunities and Challenges (12:30-2:00)
Jamil Kawa, Director of Research and Development, Synopsys

MB09: 20-nm Mixed-Signal IP - A Stepping Stone to 16-nm FinFET? (2:00-3:30)
Word has it that 14-nm or 16-nm FinFET processes are based on a 20-nm back end of line. This would essentially mean that the development expertise from a 20-nm analog/mixed-signal IP design can be leveraged for a 16-nm design. But could this really be true? The interconnects and double-patterning technology are similar, but the devices are very different. This session starts by describing how 20-nm designs require a much deeper link between layout and the power, performance and area requirements compared to previous nodes. Furthermore, quantization of these devices means that the 20-nm development starts from the ground up, so the 28-nm design cannot be reused. Double patterning also comes into play. Attend this tutorial to see real-world design examples at 20-nm and 16-nm FinFET and determine whether 20-nm is truly a stepping stone to 16-nm.
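
As a rough illustration of why FinFET device sizing is "quantized" while planar sizing is not, the sketch below uses the standard effective-width approximation for a FinFET (each fin contributes two sidewalls plus the fin top). The fin dimensions are illustrative assumptions only; real fin geometries are foundry-specific and are not taken from this session.

    # Back-of-envelope illustration of FinFET width quantization.
    # Fin dimensions below are assumed placeholders, not real process data.
    FIN_HEIGHT_NM = 30.0   # assumed fin height
    FIN_WIDTH_NM  = 8.0    # assumed fin (silicon) thickness

    def finfet_effective_width_nm(num_fins: int) -> float:
        """Effective electrical width: each fin contributes two sidewalls
        plus the top surface."""
        return num_fins * (2 * FIN_HEIGHT_NM + FIN_WIDTH_NM)

    # A planar 20-nm device can be drawn at almost any width; a FinFET's
    # width only comes in integer-fin increments, so planar sizing choices
    # cannot be carried over directly.
    for fins in range(1, 5):
        print(f"{fins} fin(s): W_eff = {finfet_effective_width_nm(fins):.0f} nm")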

MC09: Integration of Synopsys' DesignWare DDR Controller & DDR3/2 PHY IP in 28nm (3:45 – 5:15)
Open-Silicon presents an overview of the steps and challenges involved in integrating Synopsys' DesignWare DDRn Memory Controller and DDR3/2 PHY in a 28nm home gateway application. The DDRn Controller is configured per the application requirements using Synopsys' coreConsultant tool and verified using the default PHY model. The same environment is then modified to include the actual Synopsys DesignWare DDR PHY (configured using the online DDR PHY compiler), and the entire DDR subsystem is verified using the same setup. This tutorial will also cover the key items that must be checked to achieve an optimal DDR I/O pad ring during back-end implementation of the PHY in the top-level design. (60 min)

Tuesday, March 26

TA09: Achieving Predictable and Highly Reliable 10G Backplane Designs
The presentation explores the challenges of implementing 10 Gbps backplane systems. These systems can have greater than 30" PCB traces with multiple connectors. It is also desirable to have bit-error rates (BER) better than 10⁻¹² for high-reliability applications, going beyond the base specification for real-world channels. A system model is described and representative channels are presented. The presentation then explores the architectural and circuit techniques required to meet these stringent requirements, including the trade-offs associated with PLL implementation and receiver equalization to enable high-reliability system design. (90 min)
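
To put the BER target in perspective, the sketch below is a simple back-of-envelope calculation (not a statistical-confidence BER analysis) of what 10⁻¹² means at a 10 Gbps line rate: roughly one bit error every 100 seconds, and well over a quarter hour of traffic to observe enough errors for a meaningful measurement.

    # Rough arithmetic behind the 10 Gbps / BER < 1e-12 target in the abstract.
    line_rate_bps = 10e9       # 10 Gbps serial rate
    target_ber    = 1e-12      # target bit-error rate

    mean_seconds_per_error = 1.0 / (line_rate_bps * target_ber)
    print(f"Mean time between errors at BER {target_ber:g}: "
          f"{mean_seconds_per_error:.0f} s")          # ~100 s

    # To capture ~10 errors (a common rule of thumb for a usable measurement),
    # the link must be observed for:
    bits_needed = 10 / target_ber
    print(f"Bits to capture ~10 errors: {bits_needed:.1e} "
          f"(~{bits_needed / line_rate_bps / 60:.0f} minutes of traffic)")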

TB09A: Deriving Timing Budgets for DDR4 Interfaces (1:30-2:30)
DDR memory interfaces have remained source synchronous in nature even as DDR4 SDRAM standards plan to reach 3200 Mbps and LPDDR4 has ambitions to reach 4266 Mbps. Achieving such high data rates in a wide parallel interface results in extreme challenges within the read and write timing budgets of the interface. This presentation will define the typical DDR timing budgets and describe how they are derived from various skew and jitter contributors. The session will also cover what changes to jitter accountability have been introduced for robust DDR4 interfaces. (60 min)
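
To make the "extreme challenges" concrete, the sketch below works out the unit interval (UI) at the data rates named in the abstract and sums a hypothetical set of write-budget contributors against half a UI. The contributor names and values are made-up placeholders for illustration only; a real budget uses the silicon-, package- and board-specific numbers the session describes.

    # Illustrative DDR timing-budget arithmetic (contributor values are
    # placeholders, not real DDR4/LPDDR4 budget data).
    def unit_interval_ps(data_rate_mbps: float) -> float:
        """One bit time (UI) in picoseconds for a given per-pin data rate."""
        return 1e6 / data_rate_mbps

    for rate in (3200, 4266):   # DDR4 / LPDDR4 rates mentioned in the abstract
        print(f"{rate} Mbps -> UI = {unit_interval_ps(rate):.1f} ps")

    # A hypothetical write budget: the contributors must fit within roughly
    # half a UI on each side of the strobe-to-data eye.
    ui = unit_interval_ps(3200)            # 312.5 ps
    contributors_ps = {
        "DRAM setup/hold (tDS/tDH)":  62.0,
        "PHY DQ-DQS skew":            25.0,
        "Package/board skew":         20.0,
        "Duty-cycle + random jitter": 30.0,
    }
    used = sum(contributors_ps.values())
    print(f"Budget used: {used:.0f} ps of {ui/2:.1f} ps half-UI "
          f"-> margin {ui/2 - used:.1f} ps")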

TB09B: In the Cloud with PCI Express (2:30 – 3:30)
With PCI Express continuing to be the de facto interconnect for Cloud computing systems, there is a growing need for functionality to address the increased storage requirements as well as greater path loss and equalization complexity at 8 GT/s. This tutorial discusses how the PCI Express interconnect is addressing storage requirements in server-based SoCs with standards such as SATA Express and NVM Express. In addition, we will examine the need for active repeaters to help compensate for the significant path loss at 8 GT/s and other developments in the specification to support the continued development of Cloud-based computing. Since the PCI Express protocol doesn't stop at servers, this session also examines how the latest PCI Express features help designers address low-power requirements in mobile applications, including Optimized Buffer Flush/Fill (OBFF), Latency Tolerance Reporting (LTR), L1 sub-states and the new M-PHY over PCI Express standard. (60 min)

TC09: Increasing SoC Performance and Reducing Power Consumption through Memory Request Optimization (3:45 – 5:15)
Memory latency limits performance in many of today's multi-client SoC designs. As CPU performance has continued to increase at almost exponential rates, memory performance has struggled to keep pace. Memory prefetch engines have been used as a technique to reduce memory latency, but with the vastly divergent request patterns of today's multi-client SoCs, yesterday's prefetch engines struggle to achieve high efficiency rates. Coupled with today's green design requirements, every false fetch can negatively affect your power budget. This paper demonstrates how system-level SoC performance can be improved using a next-generation prefetch engine with ARC processors, which simultaneously reduces the number of read requests that reach the external memory system and reduces power consumption through memory request optimization. As an example, a modern design with a Synopsys ARC processor is used to illustrate the results. (90 min)
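
For readers unfamiliar with prefetching, the sketch below is a minimal, generic confidence-gated stride prefetcher. It is not the Synopsys ARC prefetch engine presented in the session; it only illustrates the general idea that issuing prefetches only after a stable access pattern is confirmed reduces false fetches (and hence wasted bandwidth and power) compared with prefetching on every access.

    # Minimal conceptual sketch of a confidence-gated stride prefetcher.
    # This is NOT the next-generation engine described in the session.
    class StridePrefetcher:
        def __init__(self, confidence_threshold: int = 2):
            self.last_addr = None
            self.last_stride = None
            self.confidence = 0
            self.threshold = confidence_threshold

        def access(self, addr: int):
            """Observe a demand access; return a prefetch address or None."""
            prefetch = None
            if self.last_addr is not None:
                stride = addr - self.last_addr
                if stride == self.last_stride and stride != 0:
                    self.confidence += 1
                else:
                    self.confidence = 0
                self.last_stride = stride
                # Only prefetch once the stride has repeated enough times,
                # which avoids false fetches on irregular request streams.
                if self.confidence >= self.threshold:
                    prefetch = addr + stride
            self.last_addr = addr
            return prefetch

    pf = StridePrefetcher()
    for a in (0x1000, 0x1040, 0x1080, 0x10C0, 0x2000):
        p = pf.access(a)
        print(hex(a), "->", hex(p) if p is not None else "no prefetch")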

Customers, please REGISTER NOW for SNUG and the IP Summit.


