Nvidia wants to use GPUs and AI to speed up and improve future chip design

Looking to the future: When not busy building some of the most advanced silicon around, Nvidia is exploring ways to improve the chip design process using that same silicon. The company expects the complexity of IC design to increase exponentially in the coming years, so it believes GPU-accelerated design tools will soon go from an intriguing laboratory experiment to a necessity for all chipmakers.

In a talk at this year’s GPU Technology Conference, Bill Dally, Chief Scientist and Senior Vice President of Research at Nvidia, discussed using GPUs to speed up various stages of the design process behind modern GPUs and other SoCs. Nvidia believes certain tasks could be done better and much faster with machine learning than by humans working by hand, freeing engineers to work on more advanced aspects of chip development.

Dally leads a team of approximately 300 researchers tackling everything from the technological challenges of building ever-faster GPUs to developing software that harnesses those GPUs to automate and accelerate tasks that were traditionally done mostly by hand. The team has grown from 175 people in 2019 and is expected to keep growing in the coming years.

When it comes to accelerating chip design, Dally says Nvidia has identified four areas where machine learning techniques can have a significant impact on the typical development timeline. For example, mapping where power is used in a GPU is an iterative process that takes three hours on a conventional CAD tool, but only a few minutes with an AI model trained specifically for the task. Once trained, the model can cut that time to seconds. Of course, AI models trade accuracy for speed, but Dally claims Nvidia’s tools already achieve 94% accuracy, which is still a respectable figure.
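
To make the idea concrete, here is a minimal sketch of what such a surrogate might look like: a small convolutional network (written here in PyTorch) that maps per-tile switching-activity features to a predicted per-tile power map, so a single forward pass stands in for a multi-hour CAD analysis. Nvidia has not published its tool, so the architecture, tile grid, and input features below are illustrative assumptions, not the company's actual model.

```python
# A minimal sketch of a power-map surrogate (not Nvidia's actual tool):
# a small convolutional network that maps per-tile switching-activity
# features to a power value per tile. All layer sizes are illustrative.
import torch
import torch.nn as nn

class PowerMapSurrogate(nn.Module):
    def __init__(self, in_channels: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1),  # one power value per tile
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features)

model = PowerMapSurrogate()
# Hypothetical input: a 64x64 grid of tiles, 8 activity features per tile.
activity = torch.rand(1, 8, 64, 64)
power_map = model(activity)  # (1, 1, 64, 64) predicted power per tile
print(power_map.shape)
```

In this framing, the accuracy figure Dally quotes would measure how closely such a network's predictions match the conventional CAD tool's ground-truth power maps.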


Circuit design is a labor-intensive process where engineers may need to revise the layout several times after running simulations on partial designs. Training AI models to make accurate parasitics predictions can thus eliminate much of the manual work behind the minor adjustments needed to meet the desired design specifications. Nvidia leverages GPUs to predict these parasitics using graph neural networks.
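
As an illustration of the technique, the sketch below builds a tiny graph neural network in PyTorch that predicts a parasitic value for each node of a netlist-like graph. Nvidia's model is not public, so the graph encoding, feature sizes, and the simple mean-aggregation layer here are assumptions chosen only to show the shape of the approach.

```python
# A minimal graph-neural-network sketch for parasitics prediction,
# assuming the netlist is encoded as a graph: nodes are devices/pins
# with feature vectors, edges are net connections. One mean-aggregation
# message-passing layer is shown; a real model would be far larger.
import torch
import torch.nn as nn

class MeanPassLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.lin = nn.Linear(2 * dim, dim)

    def forward(self, x, edge_index):
        # edge_index: (2, E) tensor of (source, target) node pairs.
        src, dst = edge_index
        agg = torch.zeros_like(x)
        agg.index_add_(0, dst, x[src])                 # sum neighbor features
        deg = torch.zeros(x.size(0), 1)
        deg.index_add_(0, dst, torch.ones(dst.size(0), 1))
        agg = agg / deg.clamp(min=1)                   # mean over neighbors
        return torch.relu(self.lin(torch.cat([x, agg], dim=1)))

class ParasiticsGNN(nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        self.layers = nn.ModuleList([MeanPassLayer(dim) for _ in range(3)])
        self.head = nn.Linear(dim, 1)  # predicted parasitic value per node

    def forward(self, x, edge_index):
        for layer in self.layers:
            x = layer(x, edge_index)
        return self.head(x)

# Hypothetical toy graph: 5 nodes, 16 features each, 4 undirected edges.
x = torch.rand(5, 16)
edges = torch.tensor([[0, 1, 1, 2, 2, 3, 3, 4],
                      [1, 0, 2, 1, 3, 2, 4, 3]])
print(ParasiticsGNN()(x, edges).shape)  # (5, 1)
```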


Dally explains that one of the biggest challenges in modern chip design is routing congestion – a flaw in a particular circuit layout where the transistors and the many tiny wires connecting them are not optimally placed. This can lead to something resembling a traffic jam, but with bits instead of cars. Using a graph neural network, engineers can quickly identify problem areas and adjust placement and routing accordingly.
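
Once a model of that kind produces a per-tile congestion estimate, flagging the trouble spots is straightforward. The short snippet below assumes a predicted demand-to-capacity grid (random numbers here stand in for real model output) and lists the tiles an engineer would revisit first.

```python
# A minimal sketch of flagging congestion hotspots, assuming a model
# has already produced a per-tile estimate of routing demand divided
# by routing capacity. The values below are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
congestion = rng.uniform(0.2, 1.3, size=(8, 8))  # predicted demand/capacity

# Tiles whose predicted demand exceeds capacity are the hotspots that
# engineers (or a placement tool) would revisit first.
for row, col in np.argwhere(congestion > 1.0):
    overflow = congestion[row, col] - 1.0
    print(f"tile ({row}, {col}): estimated overflow {overflow:.2f}")
```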

In these scenarios, Nvidia is essentially using AI to critique chip designs made by humans. Instead of embarking on a labor-intensive and computationally expensive process, engineers can quickly build a surrogate model, evaluate it, and iterate with AI. The company also wants to use AI to design the most basic features of the transistor logic used in GPUs and other advanced silicon.


Whenever Nvidia moves to a more advanced manufacturing node, several thousand so-called standard cells must be reworked to comply with complex design rules. A project called NVCell seeks to automate as much of this process as possible through an approach called reinforcement learning.

The trained AI model corrects design-rule errors until the cell layout is complete. Nvidia claims that, to date, it has achieved a 92% success rate. In some cases, the cells designed by the AI were smaller than those made by humans. This breakthrough could help improve overall design performance and reduce chip size and power requirements.
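
Nvidia has not released NVCell, but the reinforcement-learning loop it describes can be illustrated with a toy: an agent that earns reward for edits that reduce the number of design-rule violations in a cell. The environment, action names, and probabilities below are all invented for the sketch.

```python
# A toy sketch of the reinforcement-learning idea behind a tool like
# NVCell (not Nvidia's code): an agent is rewarded for edits that
# reduce design-rule violations. The environment is a stand-in that
# tracks only a violation count; fix probabilities are invented.
import random

random.seed(0)
ACTIONS = ["nudge_wire", "swap_tracks", "resize_via"]

def step(violations: int, action: str) -> int:
    """Hypothetical environment: each edit may fix or add a violation."""
    fix_prob = {"nudge_wire": 0.6, "swap_tracks": 0.5, "resize_via": 0.4}[action]
    delta = -1 if random.random() < fix_prob else 1
    return min(10, max(0, violations + delta))

# Tabular Q-learning: the state is the current violation count and the
# reward is the reduction in violations produced by each edit.
q = {(s, a): 0.0 for s in range(11) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.2

for episode in range(500):
    s = 10
    for _ in range(200):  # cap episode length
        if s == 0:
            break
        if random.random() < eps:
            a = random.choice(ACTIONS)          # explore
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])  # exploit
        s_next = step(s, a)
        reward = s - s_next
        best_next = max(q[(s_next, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next

# The agent learns which edit is most likely to shrink the error count.
print({a: round(q[(5, a)], 2) for a in ACTIONS})
```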

Process technology is rapidly approaching the theoretical limits of what we can do with silicon, and production costs increase with each node transition. Any slight improvement at the design stage can therefore lead to better yields, especially if it reduces chip size. Nvidia outsources manufacturing to Samsung and TSMC, but Dally says NVCell lets the company use two GPUs to do in a few days what would otherwise occupy a team of ten engineers, freeing them to focus on other areas.

Nvidia isn’t alone in taking the AI route to chip design. Google also uses machine learning to develop accelerators for AI tasks, and the search giant has found that AI can come up with unexpected ways to optimize performance and energy efficiency. Samsung’s foundry division uses a Synopsys tool called DSO.ai, which other companies large and small are gradually adopting.

It should also be noted that foundries could leverage AI when making chips on mature process nodes (12nm and above) to address the lack of fabrication capacity that has proven so detrimental to the automotive industry over the past two years. Most manufacturers are hesitant to invest in this area because the semiconductor space is highly competitive and focused on the cutting edge.


More than 50% of all chips are designed on mature process nodes, and analysts at International Data Corporation expect that share to grow to 68% by 2025. Synopsys CEO Aart de Geus believes AI can help companies design smaller, more energy-efficient chips for applications where performance is not a top priority, such as cars, household appliances, and some industrial equipment. This approach is much less expensive than migrating to a more advanced process node, and fitting more chips on each wafer yields additional cost savings.
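
The wafer economics are easy to sanity-check. Using the common dies-per-wafer approximation, a back-of-envelope calculation shows how a modest AI-driven die shrink translates into more dies per wafer and a lower cost per die; the wafer price and die areas below are illustrative, not actual foundry figures.

```python
# A rough back-of-envelope for why smaller dies cut cost, using the
# common approximation: dies ~= pi*(d/2)^2/A - pi*d/sqrt(2*A), where
# d is wafer diameter and A is die area. All numbers are illustrative.
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

wafer_cost = 4000.0  # hypothetical price of a mature-node 300 mm wafer, USD
for area in (100.0, 90.0):  # die area before and after a 10% shrink
    n = dies_per_wafer(300.0, area)
    print(f"{area:5.1f} mm^2 -> {n} dies, ${wafer_cost / n:.2f} per die")
```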

This story is not about AI replacing humans in the chip design process. Nvidia, Google, Samsung and others have found that AI can augment humans and do the heavy lifting as designs grow ever more complex. Humans still have to identify the right problems to solve and decide which data is valid for their chip designs.

There’s a lot of debate around artificial general intelligence and when we might be able to create it. Yet experts agree that the AI models we use today can only tackle specific problems that we know and can describe. Even then, they can produce unexpected results that aren’t necessarily helpful to the end goal.
