GenAI holds tremendous promise for large, established companies with deep design expertise. Industry leaders will deploy their own copilots to operationalize the treasure troves of methodology, architecture, and other domain-specific data accumulated over decades of experience (check out NVIDIA’s ChipNeMo paper for an excellent example). At the same time, in 2024, GenAI will further democratize chip design, allowing new silicon pioneers to innovate and scale faster than ever before, and to focus on their core value proposition while tapping into industry-standard reference flows and optimization knowledge.
EDA companies and IP providers will play an important role in combining their deep expertise, flows, and IP with customers’ own domain data to create powerful GenAI solutions across the entire technology stack. In 2024, we will likely see the formation of some early data ecosystems in chip design, similar to the ones discussed by OpenAI’s Sam Altman at OpenAI DevDay 2023. These data partnerships will drive the availability of large-scale data sets that span the chip design domain across multiple modalities, such as code, specifications, register-transfer level (RTL) descriptions, and simulation results, making it possible to train better, more efficient models for GenAI applications.
These partnerships, however, will not take flight unless new business models allow data to be shared in a secure, scalable, and economically viable manner. Here, the emergence of hosted micro-services offers a model that companies will look to explore and put into production use (for a good discussion, see NVIDIA’s recent announcement at AWS re:Invent).