GTC Silicon Valley 2019, Session S9528: Using Machine Learning for VLSI Testability and Reliability
Miloni Mehta (NVIDIA), Mark Ren (NVIDIA)
In this session we will discuss applications of machine learning techniques for improving VLSI design reliability and testability. As we continue to push performance in semiconductor design, we shrink transistor geometries and move to non-planar FinFET devices. As a result, the area available for a device to dissipate the heat generated by current switching is shrinking, raising reliability concerns. High self-heat (SH) temperatures not only degrade device lifetime but also affect interconnect reliability. Because none of our sign-off tools can handle full-chip SH analysis, and running SPICE simulations at full-chip scale is infeasible, we developed a deep learning model that quickly and accurately predicts self-heat temperature.

Design testability is another important consideration in modern VLSI design. To improve testability, additional registers (test points) are inserted into the design to enhance its observability. Inspired by the recent success of graph convolutional networks (GCNs) on graph problems, we propose using a GCN to predict difficult-to-test nodes in a design; we believe a GCN is well suited to processing graph representations of logic circuits. We train a high-performance GCN classifier with multiple GPUs on a large set of design data, then use the classifier in an iterative process to select test-point insertion candidates for improving testability. Experimental results show that the proposed GCN model is more accurate than classical machine learning models and achieves better results than commercial test analysis tools.
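The abstract does not describe the self-heat predictor's architecture, but to make the first idea concrete, here is a minimal sketch, assuming a small feed-forward regressor that maps per-device features to a predicted temperature rise and is trained against SPICE-derived labels. The feature set, layer sizes, and training data below are illustrative assumptions, not NVIDIA's actual model.

```python
import torch
import torch.nn as nn

# Hypothetical per-device feature count; the real feature set is not
# described in the abstract (e.g., switching activity, drive strength,
# fin count, local metal density).
N_FEATURES = 8

class SelfHeatRegressor(nn.Module):
    """Small MLP predicting self-heat temperature rise (delta-T, in K)
    for one device from layout/electrical features."""
    def __init__(self, n_features: int = N_FEATURES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64),
            nn.ReLU(),
            nn.Linear(64, 64),
            nn.ReLU(),
            nn.Linear(64, 1),  # scalar delta-T prediction
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

# Toy training loop; random tensors stand in for SPICE-labeled data.
model = SelfHeatRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

features = torch.randn(1024, N_FEATURES)  # stand-in device features
labels = torch.rand(1024) * 30.0          # stand-in delta-T labels (K)

for epoch in range(100):
    opt.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    opt.step()
```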
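For the second idea, the sketch below shows a standard Kipf-and-Welling-style GCN classifier over a netlist graph, assuming the circuit is encoded as an adjacency matrix over gates with simple per-node features. The node features, layer count, toy graph, and the final candidate-selection step are all assumptions for illustration; the abstract does not specify the actual model or flow.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def normalize_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization D^-1/2 (A + I) D^-1/2 used by standard GCNs."""
    a_hat = adj + torch.eye(adj.size(0))
    deg = a_hat.sum(dim=1)
    d_inv_sqrt = torch.diag(deg.pow(-0.5))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt

class GCNLayer(nn.Module):
    """One graph convolution: aggregate neighbor features, then a linear map."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        return self.linear(a_hat @ h)

class TestabilityGCN(nn.Module):
    """Per-node binary classifier: is this node difficult to test?"""
    def __init__(self, in_dim: int, hidden: int = 32):
        super().__init__()
        self.gc1 = GCNLayer(in_dim, hidden)
        self.gc2 = GCNLayer(hidden, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, a_hat: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        h = F.relu(self.gc1(a_hat, x))
        h = F.relu(self.gc2(a_hat, h))
        return self.head(h).squeeze(-1)  # one logit per node

# Toy netlist graph: 6 gates with hypothetical 4-dim node features
# (e.g., gate-type encoding plus fan-in/fan-out counts).
adj = torch.zeros(6, 6)
for i, j in [(0, 2), (1, 2), (2, 3), (3, 4), (3, 5)]:
    adj[i, j] = adj[j, i] = 1.0  # treat the netlist as undirected here

a_hat = normalize_adjacency(adj)
x = torch.randn(6, 4)
labels = torch.tensor([0., 0., 0., 1., 0., 1.])  # stand-in hard-to-test flags

model = TestabilityGCN(in_dim=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = F.binary_cross_entropy_with_logits(model(a_hat, x), labels)
    loss.backward()
    opt.step()

# In an iterative flow, the highest-probability nodes would be chosen as
# test-point insertion candidates, the design updated, and the prediction
# repeated until testability targets are met.
```

A GCN fits this problem because each convolution mixes a node's features with those of its fan-in and fan-out neighbors, so stacked layers let the classifier reason about a node's local logic neighborhood rather than the node in isolation.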