Chapter 1
INTRODUCTION
1.1 Introduction
The continuous increase in transistor count in Integrated Circuits (ICs), along with
increasingly complex design flows, poses significant challenges for engineers. These
challenges range from the high cost of exploring extensive design spaces and lengthy
verification simulations to the expertise required for intricate optimization algorithms.
This is where Machine Learning (ML) emerges as a transformative force, equipping a
broad spectrum of IC design professionals, including circuit designers, physical design
engineers, and verification engineers. A typical IC design flow is shown in Figure 1.1,
where many complex processes make up the overall chip design process. By harnessing
its data-driven learning capabilities, ML effectively addresses the challenges mentioned
above.
The complexity of the ML process itself also poses a challenge. Training ML
models, tuning hyperparameters, and validating the models can be complex and time-
consuming. In some cases, there is simply not enough data available to train robust ML
models. Furthermore, integrating ML models into existing workflows and systems can be
slow and challenging, delaying adoption.
Implementing Machine Learning (ML) for Integrated Circuit (IC) design is not
without its challenges. One of the primary challenges is data quality: ML relies heavily
on data, and poor quality can significantly degrade the performance of ML models. This
is closely tied to underfitting and overfitting, two common problems in ML. Underfitting
occurs when the model is too simple to capture the underlying structure of the data, while
overfitting happens when the model is so complex that it memorizes the training data and
generalizes poorly to new designs.
B.E., Dept of ECE, BNMIT 1 2023-24
Machine Learning in Advanced IC Design: A Methodological Survey.
As data volumes grow, imperfections in the algorithms themselves can degrade the
performance of ML models. The characteristics of analog systems are particularly
challenging, and "always new" device models are continually introduced with new
specifications and functionalities. This presents a unique challenge for implementing ML
in IC design.
1.3 Objectives
The main objectives of the survey were to:
Chapter 2
LITERATURE SURVEY
2.1 Related Work
B. Khailany et al., “Accelerating Chip Design with Machine Learning” [1] offers
a comprehensive examination of Machine Learning (ML) applications in expediting chip
design processes. It delves into the mounting complexity of chip designs and the
accompanying challenges. Emphasizing ML's potential to streamline chip design, the
authors elucidate its utilization for overcoming these hurdles. The paper meticulously
reviews existing ML methodologies employed to accelerate chip design, encompassing
techniques like deep learning and reinforcement learning, and their application across
various design stages. Additionally, it examines the benefits and limitations of
integrating ML into chip design workflows, recognizing ML's potential to enhance
efficiency and accuracy while acknowledging the complexities involved. Serving as a
valuable resource for researchers and professionals, the paper furnishes a comprehensive
understanding of the current ML landscape in chip design and offers insights into
potential future trajectories in this swiftly evolving domain.
A. Krizhevsky et al., in "ImageNet Classification with Deep Convolutional Neural
Networks," trained a large, deep neural network to classify 1.2 million high-resolution
images from the ImageNet LSVRC-2010 contest into 1000 distinct classes. Remarkably,
they achieved significantly improved
performance, with top-1 and top-5 error rates of 37.5% and 17.0%, respectively,
surpassing previous state-of-the-art benchmarks. The neural network architecture
comprised five convolutional layers, some accompanied by max-pooling layers, and three
fully connected layers culminating in a 1000-way softmax output. Notably, to expedite
training, they employed non-saturating neurons and leveraged a highly efficient GPU
implementation for convolution operations. Additionally, to curb overfitting in the fully
connected layers, they introduced a novel regularization technique known as "dropout,"
which proved remarkably effective. This seminal work has profoundly impacted the
machine learning landscape, particularly in image classification, serving as a catalyst for
numerous subsequent studies and innovations.
Huang et al., “Machine learning for electronic design automation: A survey” [9]
underscores the rising demand for semiconductor ICs, coupled with advancements in ML
and the deceleration of Moore's law, which collectively fuel the growing interest in
leveraging ML to augment EDA and CAD tools and processes. The paper systematically
surveys existing EDA and CAD tools, methodologies, processes, and techniques for ICs
employing ML algorithms. ML-based EDA/CAD tools are categorized based on IC
design steps, encompassing synthesis, physical design (such as floorplanning, placement,
clock tree synthesis, and routing), IR drop analysis, static timing analysis (STA), design
for test (DFT), power delivery network analysis, and sign-off. Additionally, it delves into
the contemporary landscape of ML-based VLSI-CAD tools, current trends, and prospects
of ML in VLSI-CAD. Serving as a valuable resource, this paper furnishes researchers and
professionals with a comprehensive understanding of the current state and potential future
trajectories of ML in EDA and CAD for ICs.
Chapter 3
The key advantage of linear models lies in their simplicity and efficiency. Unlike
more complex models, they require less computational power and are easier to interpret.
This makes them particularly well-suited for real-time applications on chip, where
minimizing overhead is crucial. Imagine a scenario where someone tries to tamper with
your computer's hardware. Linear models can be used to analyse data like memory access
patterns and identify such security threats. Their low computational cost allows them to
run continuously on the chip itself, detecting suspicious activity in real-time with minimal
impact on performance.
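As a sketch of this idea, a linear model for flagging suspicious on-chip activity can be a simple weighted sum of telemetry features with a decision threshold. The feature names, weights, and threshold below are illustrative assumptions, not taken from any specific on-chip security scheme:

```python
# Minimal sketch: a linear model scoring hardware telemetry for anomalies.
# Weights would be learned offline from labeled traces (assumed here).

def linear_anomaly_score(features, weights, bias):
    """Weighted sum of telemetry features (e.g., memory-access and cache-miss rates)."""
    return sum(w * x for w, x in zip(weights, features)) + bias

def is_suspicious(features, weights, bias, threshold=0.0):
    """Flag activity whose linear score crosses the decision threshold."""
    return linear_anomaly_score(features, weights, bias) > threshold

# Two features: normalized memory-access rate and cache-miss rate.
weights = [0.8, 1.5]   # illustrative learned coefficients
bias = -1.0

normal_activity = [0.3, 0.2]   # low access and miss rates
tampering_like  = [0.9, 0.9]   # unusually high rates
```

Because the score is a single dot product, it is cheap enough to evaluate continuously, which is exactly the low-overhead property the text describes.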
Battery life is a major concern for mobile devices. Linear models can be employed
to predict future traffic on a chip's internal network, allowing for optimized power
management strategies. By anticipating workload demands, the model can help adjust
power settings for different components, leading to more efficient power usage. Linear
models have a key limitation: they assume that each feature has an independent effect on
the outcome. In real-world scenarios, features can be interconnected and influence each
other. For instance, high memory usage might also lead to increased network traffic. Just
like a good cook might adjust a recipe, researchers can modify the basic linear model
structure to improve its effectiveness for specific tasks. For example, using different types
of linear regression models might yield better results for predicting power consumption
across various types of processing cores on a chip.
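The traffic- or power-prediction use of linear regression can be sketched with an ordinary least-squares fit. The workload features and data below are synthetic illustrations, not measurements from a real chip:

```python
import numpy as np

# Sketch: least-squares fit predicting a chip's network traffic (or power)
# from workload features. Real deployments would train on measured traces.

# Columns: [memory_usage, compute_activity]; target: traffic level.
X = np.array([[0.2, 0.1], [0.5, 0.4], [0.8, 0.3], [0.9, 0.7]])
y = 2.0 * X[:, 0] + 1.0 * X[:, 1]          # synthetic linear relationship

A = np.hstack([X, np.ones((len(X), 1))])    # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(mem, compute):
    """Predicted traffic for a given workload state."""
    return coef[0] * mem + coef[1] * compute + coef[2]
```

Fitting and evaluating this model is just a small matrix solve and a dot product, which is why such predictors are attractive for on-chip power management.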
Combining linear layers can overcome this limitation: each layer focuses on a
specific set of features, and the combined output provides a more comprehensive
prediction. This approach can be used for tasks like workload and power prediction,
where different features such as memory usage and processing activity play a role. The
usefulness of linear models extends beyond power management and security; they also
contribute to the verification stages of IC design.
Linear models, despite their simplicity, offer a valuable toolbox for IC designers.
Their efficiency, ease of use, and diverse applications in areas like security, power
management, and verification make them a cornerstone for various stages of the design
process. However, it's important to be aware of their limitations and explore variations or
combinations with other models when dealing with complex relationships between
features.
Non-linear models commonly applied in IC design include Artificial Neural Networks
(ANNs), Gaussian Processes (GPs), Support Vector Machines (SVMs), and Random
Forests. Consider trying to predict the weather: while a simple linear model might
struggle to account for all the factors involved (e.g., temperature, humidity, wind),
non-linear models can handle these complexities. Here's how some of them achieve this:
Figure 3.2 (a) Activations. (b) Feature space mapping by the kernel.
Random Forests: This model takes a different approach by using adaptive basis
functions. Imagine dividing the input feature space (think of it as a map) into
regions. Random Forests define these regions using functions and then assign
weights to each region based on the data. This allows the model to capture
complex relationships within the data without relying on explicit non-linear
transformations.
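The region-splitting intuition behind tree ensembles can be sketched in a few lines. This toy version uses one fixed split point and made-up data purely to illustrate the adaptive-basis idea; a real random forest learns many such partitions on random subsets of the data:

```python
# Sketch of the "adaptive basis function" idea behind tree ensembles:
# split the input range into regions and predict each region's mean.

def fit_regions(xs, ys, split):
    """Average the targets on each side of a split point."""
    left  = [y for x, y in zip(xs, ys) if x <= split]
    right = [y for x, y in zip(xs, ys) if x > split]
    return (sum(left) / len(left), sum(right) / len(right))

def predict_region(x, split, means):
    """Piecewise-constant prediction: return the mean of x's region."""
    left_mean, right_mean = means
    return left_mean if x <= split else right_mean

xs = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
ys = [1.0, 1.2, 1.1, 3.0, 3.2, 3.1]   # two clearly different regimes
means = fit_regions(xs, ys, split=0.5)
```

No explicit non-linear transformation appears anywhere, yet the prediction is non-linear in x, which is the point made above.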
Design Rule Checking (DRC) Prediction: As chip designs become more intricate,
verifying adherence to design rules becomes time-consuming. A non-linear SVM
model can predict potential DRC violations after global routing (a stage in the
design flow) without requiring a detailed routing step, which can be
computationally expensive. The model analyses features such as connected pins and
cell density in the layout to identify potential problem areas.
Rapid Layout Feasibility Evaluation: Imagine quickly assessing if a layout design
is even feasible. An ANN model can estimate the correlations between various
interconnect parasitics (unwanted electrical effects) based on the layout
configuration. This is crucial because the relationship between hardware
configuration and performance can be highly non-linear, making traditional linear
models ineffective.
Fast Performance Estimation: Traditional analytical performance evaluation
methods can be computationally expensive. Support Vector Regression (SVR), a
type of non-linear model, can be used to rapidly estimate power consumption
based on input signals and hardware configuration. Similarly, non-linear models
have been used to estimate the performance of adders, replacing traditional EDA
(Electronic Design Automation) tools for faster power, delay, and area estimation.
Circuit structure information and tool settings are used as input features for these
models.
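A minimal non-linear regressor of this kind can be sketched with kernel ridge regression, a close cousin of the SVR mentioned above that is shorter to write out. The configuration feature and target values below are synthetic stand-ins, not real power data:

```python
import numpy as np

# Sketch: kernel ridge regression with an RBF kernel as a fast non-linear
# estimator of a performance metric (e.g., power) from configuration features.

def rbf(a, b, gamma=1.0):
    """RBF (Gaussian) kernel matrix between two sets of feature vectors."""
    d = a[:, None, :] - b[None, :, :]
    return np.exp(-gamma * np.sum(d * d, axis=-1))

def fit(X, y, lam=1e-6, gamma=1.0):
    """Solve (K + lam*I) alpha = y for the dual coefficients."""
    K = rbf(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(Xtrain, alpha, Xnew, gamma=1.0):
    """Predict by weighting kernel similarities to the training points."""
    return rbf(Xnew, Xtrain, gamma) @ alpha

X = np.array([[0.0], [0.5], [1.0], [1.5]])
y = np.sin(X[:, 0])            # stand-in for a non-linear power response
alpha = fit(X, y)
```

Once fitted, a prediction is just one kernel evaluation per training point, which is far cheaper than re-running an analytical evaluation or an EDA tool.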
Some non-linear models, like GPs, can provide predictions with uncertainties. This
allows for a technique called Bayesian optimization, which uses this information to guide
the selection of the most informative data points for training the model, reducing the
number of simulations needed and speeding up design exploration. By reducing
EDA tool runtime and enhancing learning during the design stage, Bayesian optimization
techniques based on regression forests enable faster exploration across areas such as
digital design, analog design, and hardware deployment.
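The uncertainty-guided selection step can be sketched with a minimal Gaussian-process posterior: compute mean and variance at candidate points and pick the most uncertain one as the next simulation to run. The kernel width, noise level, and data are illustrative assumptions, and choosing by pure variance is the simplest possible acquisition rule:

```python
import numpy as np

# Minimal GP sketch: posterior mean/variance under an RBF kernel, then
# choosing the next simulation point where uncertainty is highest.

def k(a, b, width=0.5):
    """RBF kernel between two 1-D arrays of design points."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * width ** 2))

def gp_posterior(X, y, Xq, noise=1e-6):
    """Posterior mean and variance at query points Xq given data (X, y)."""
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(Xq, X)
    Kinv = np.linalg.inv(K)
    mean = Ks @ Kinv @ y
    var = 1.0 - np.sum(Ks @ Kinv * Ks, axis=1)   # prior variance is 1
    return mean, var

X = np.array([0.0, 1.0])          # already-simulated design points
y = np.array([0.2, 0.8])          # measured results
Xq = np.linspace(0.0, 1.0, 5)     # candidate points
mean, var = gp_posterior(X, y, Xq)
next_point = Xq[np.argmax(var)]   # most informative candidate
```

The variance collapses near points already simulated and peaks between them, so the model naturally asks for the simulation that teaches it the most.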
Regression forests can also be used to find a sweet spot between different design
goals. Imagine you want a chip that is both powerful and energy efficient. Regression
forests can model the relationship between design choices and a metric called Pareto
hypervolume, which helps achieve a better trade-off between various design objectives.
Convolutional layers act like filters that scan the layout image. Each filter
focuses on detecting specific features, like lines, shapes, or patterns. As the filter slides
across the image, it performs calculations to highlight these features, creating a "feature
map." Pooling layers reduce the complexity of the feature maps, as shown in figure
3.4 a. Imagine summarizing the information in a small neighbourhood within an image:
pooling layers achieve this by taking the maximum value (max-pooling) or the average
value (average-pooling) within a specific region of the feature map, as shown in figure 3.4
b. This helps the model focus on the most important information. The final, fully
connected layers take the information extracted from the feature maps and use it for tasks
like classification (e.g., identifying a potential defect) or regression (e.g., predicting
performance metrics).
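The pooling step described above can be sketched directly; this toy function applies 2x2 max- or average-pooling to a small feature map (the values are arbitrary illustrations):

```python
# Sketch of 2x2 max-pooling and average-pooling on a small feature map,
# the "summarize each neighbourhood" step described above.

def pool2x2(fmap, mode="max"):
    out = []
    for i in range(0, len(fmap), 2):
        row = []
        for j in range(0, len(fmap[0]), 2):
            window = [fmap[i][j], fmap[i][j+1], fmap[i+1][j], fmap[i+1][j+1]]
            row.append(max(window) if mode == "max" else sum(window) / 4)
        out.append(row)
    return out

fmap = [[1, 3, 2, 0],
        [4, 2, 1, 1],
        [0, 1, 5, 6],
        [2, 2, 7, 8]]
```

Either way, a 4x4 map shrinks to 2x2: max-pooling keeps the strongest response per window, while average-pooling smooths over it.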
The field of deep learning is constantly evolving, and researchers are exploring
ways to improve CNNs. Imagine focusing on a specific part of an image while ignoring
the rest. The attention mechanism works similarly. It assigns weights to different areas of
the feature map, directing the model's focus towards crucial information for improved
accuracy. This powerful architecture is often used for tasks involving sequences. In IC
design, it can be used with CNNs to capture global relationships within the layout data.
Imagine looking at an entire image, not just individual parts, to understand the bigger
picture. Transformers with multi-head attention modules help achieve this for complex
layout analysis.
As current flows through wires on a chip, there can be a voltage drop (IR-drop).
Predicting this drop is crucial for ensuring chip reliability. CNNs can analyse the
distribution of power across the layout and predict the maximum IR-drop, guiding
designers towards layouts with minimal voltage drops.
Chip layouts can come in various sizes. A special type of CNN called a Fully
Convolutional Network (FCN), shown in figure 3.5, can address this challenge. FCNs
replace fully connected layers with convolutional layers, allowing them to handle inputs
of different sizes. This makes them suitable for tasks like predicting potential design rule
violations or performance metrics across layouts of varying dimensions.
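The size-agnostic property FCNs rely on can be seen in a bare-bones 2-D convolution: the same small filter slides over any grid, so the output size simply follows the input size. The "edge detector" kernel and grids below are illustrative toys, not a real layout feature extractor:

```python
# Sketch of why convolutional layers handle variable input sizes: the same
# filter applies to any grid, unlike a fixed-size fully connected layer.

def conv2d_valid(grid, kernel):
    """Valid (no-padding) 2-D convolution of a nested-list grid with a kernel."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(grid) - kh + 1, len(grid[0]) - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(kernel[a][b] * grid[i + a][j + b]
                            for a in range(kh) for b in range(kw))
    return out

edge_kernel = [[1, -1]]          # crude horizontal-change detector

small = [[0, 1, 1]]              # a 1x3 "layout"
large = [[0, 1, 1, 0],           # a 2x4 "layout"
         [0, 0, 1, 1]]
```

The same `edge_kernel` works on both grids, producing outputs of different sizes: that is exactly the flexibility FCNs exploit.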
Global routing is a stage where connections between different parts of the chip are
established. Traditionally, this involves trial-and-error routing to check for design rule
violations (DRC). FCNs can analyse the layout and predict potential DRC violations
without actual routing, saving significant time. They consider factors like placement of
blocks, wire density, and pin locations to make these predictions.
CNNs can be used to predict performance metrics like congestion or how easily
wires can be routed within a layout. By analysing how the model's output changes with
respect to the input layout, engineers can understand which layout features most
significantly impact performance. This information can then be used to guide the routing
and placement processes for optimal chip design.
Attention mechanisms can be easily integrated into CNNs for tasks like
lithography hotspot detection. By focusing on critical features in the layout, they can lead
to more accurate hotspot identification. Transformers with multi-head attention modules
can be used to develop one-stage detectors for identifying hotspots within large-scale
layouts. These models can directly predict the bounding boxes (areas containing hotspots)
within the layout image, offering a faster and more efficient approach.
Figure. 3.6 Attention (a) channel attention and (b) spatial attention.
In Figure 3.6 (b), max-pooling, mean-pooling, and convolutional layers are
stacked to obtain spatial attention from a channel-refined feature. Building on the
attention mechanism, the transformer has been applied to many sequence-to-sequence
tasks; transformers contain multi-headed attention modules that extract information from
different locations globally.
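The core operation inside those multi-headed modules is scaled dot-product attention, sketched below with tiny hand-picked matrices. A real transformer adds learned projections and multiple heads; this strips the idea to its essentials:

```python
import numpy as np

# Sketch of scaled dot-product attention: every position weighs every other
# position, which is how global relationships can be captured.

def attention(Q, K, V):
    """Return attention output and the softmax weight matrix."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # row softmax
    return weights @ V, weights

# Toy queries/keys/values: position 0 "matches" key 0, position 1 matches key 1.
Q = np.array([[1.0, 0.0], [0.0, 1.0]])
K = np.array([[1.0, 0.0], [0.0, 1.0]])
V = np.array([[10.0, 0.0], [0.0, 10.0]])
out, w = attention(Q, K, V)
```

Each row of the weight matrix sums to 1, and each query attends most to its matching key, pulling the corresponding value forward.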
GNNs don't just analyse individual devices; they also consider how the devices
are connected. Special techniques called embedding methods are used to create a more
comprehensive picture. Imagine asking each station on a transit map about its neighbours
and summarizing the answers: this is how GNNs capture the relationships between
devices within a circuit. One way to learn from connections is graph convolution. This
method considers a device's neighbours (up to a certain distance) and generates a new
"understanding" of that device based on the information gathered. For example, a GNN
model might consider all the devices connected to a specific transistor (within a two-hop
radius) to understand its role within the circuit, as shown in Figure 3.7.
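One round of that neighbour-aggregation step can be sketched on a toy netlist graph. The device names, adjacency, and scalar features below are illustrative; real GNN layers use feature vectors and learned weight matrices:

```python
# Sketch of one graph-convolution round on a tiny netlist graph: each
# device's new feature mixes its own feature with its neighbours' average.

def graph_conv(features, adjacency, self_weight=0.5):
    """Return updated node features after one aggregation round."""
    new = {}
    for node, neigh in adjacency.items():
        avg = sum(features[n] for n in neigh) / len(neigh)
        new[node] = self_weight * features[node] + (1 - self_weight) * avg
    return new

# Transistor M1 connected to a resistor R1 and a capacitor C1.
adjacency = {"M1": ["R1", "C1"], "R1": ["M1"], "C1": ["M1"]}
features = {"M1": 1.0, "R1": 3.0, "C1": 5.0}
updated = graph_conv(features, adjacency)
```

After the round, M1's feature reflects its neighbourhood; stacking such rounds lets information travel further, giving the multi-hop view described above.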
GNNs go beyond just representing the circuit structure. They use embedding
methods to understand how components relate to each other. Imagine asking your friends
about a restaurant: their opinions (information) influence your decision (the embedding).
Similarly, GNNs gather information from a node's neighbours (connected components)
and combine it into a new feature that represents the node's role within the circuit. One
common embedding method is neighbourhood aggregation:
This method considers how close connected components are to a node. It gathers
information from a node's neighbourhood (up to a certain distance) to understand its
context within the circuit. Imagine a GNN model analysing a transistor (M1): it might
use information from nearby resistors and capacitors to understand M1's role in the
circuit (Figure 3.7).
Predicting Aging Effects: Over time, transistors can become less effective. GNNs
can consider the different types of devices within a circuit and predict which ones might
be more susceptible to aging, allowing for preventive measures during design.
GAT can enhance GNNs' ability to learn from existing designs. This allows them
to predict electrical effects (called net parasitics) and device behaviour even without
complete physical design information, which can be helpful in the early stages of design.
GNNs with GAT can be used to estimate the length of wires before they're placed within
the circuit. By considering the relationships between devices and their neighbours, GAT
helps the model prioritize relevant information for more accurate length estimation.
Analog circuits often have specific design rules. Traditionally, engineers manually
identify these constraints. GNNs can analyse the circuit netlist (graph) and automatically
learn to identify critical structures that require special attention during design, saving
engineers time and effort. GNNs can be combined with other techniques like Generative
Adversarial Networks (GANs) to classify circuits and organize them into sub-blocks. This
can be particularly useful in complex analog designs, where better organization aids in
managing and understanding the circuit.
Verifying a circuit design involves checking for errors. GNNs can be used to
design classifiers that can quickly identify areas in the netlist that might require further
testing, accelerating the verification process. Electronic components can degrade over
time. GNNs can be used to identify transistors susceptible to aging effects in analog
circuits. By considering different types of elements within the circuit, GNNs can make
more accurate predictions.
Standard GNN methods might struggle with circuits that have many connections
between components. Graph Attention (GAT) addresses this by assigning weights
(importance scores) to connections (edges). Imagine focusing on the most helpful advice
from your friends about the restaurant – GAT works similarly. By assigning higher
weights to crucial connections, GAT allows the model to prioritize relevant information
from neighbouring nodes, leading to more accurate results. Additionally, GAT can
introduce multiple weights for each connection, enabling even more nuanced learning
based on the specific context.
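The weighted-neighbour idea can be sketched for a single node. The dot-product scoring below is a deliberate simplification; real GAT layers compute scores with learned attention parameters over feature vectors:

```python
import math

# Sketch of graph attention: score each neighbour, softmax the scores into
# weights, and take a weighted (rather than uniform) average.

def gat_aggregate(node_feat, neighbour_feats):
    """Return the attention-weighted neighbour average and the weights."""
    scores = [node_feat * f for f in neighbour_feats]        # similarity score
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    weights = [e / sum(exps) for e in exps]                  # softmax
    value = sum(w * f for w, f in zip(weights, neighbour_feats))
    return value, weights

# A node with three neighbours; the second is the most "relevant" one.
value, weights = gat_aggregate(1.0, [0.1, 2.0, 0.2])
```

A plain average of the neighbours would be about 0.77; attention pulls the aggregate toward the high-scoring neighbour instead, which is exactly the prioritization described above.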
Unwanted electrical effects (parasitics) can impact circuit performance. GAT can
enhance GNNs' ability to learn from existing designs and predict these parasitics, even for
new circuits without complete physical design information. Estimating the length of
connections (nets) before placing components on the chip is crucial. GNNs with GAT can
capture a more comprehensive understanding of a node and its neighbours, allowing for
more accurate net length estimations.
Generative models learn the underlying patterns in existing design data and create
new samples. This section explores two prominent generative models used in IC design:
A special type of GAN called a Conditional GAN (CGAN) takes things a step
further. CGANs can generate new designs based on specific requirements or conditions.
This is particularly useful in IC design, where designers often have certain goals in mind.
Noise can disrupt chip functionality. CGANs can be used to generate "noise maps" that
indicate areas where noise sensors are most needed. This helps engineers place sensors
strategically with minimal placements, ensuring efficient noise monitoring.
Manufacturing chips involves creating masks that define circuit patterns. CGANs
can be used to optimize these mask layouts. Imagine the CGAN taking an existing mask
layout and generating a new one that compensates for potential manufacturing issues, all
without requiring complex simulations. Clock signals are crucial for chip operation.
CGANs can be used to optimize clock tree design, which efficiently distributes these
signals. By providing desired parameters like power consumption and wire length, the
CGAN can recommend optimal layouts for the clock tree network.
In RL, an agent receives a reward based on the outcome of each action; for example,
a more efficient placement might yield a higher reward. Over time, the agent learns to
choose actions that consistently lead to better rewards. To formalize RL problems, we
use a concept called a Markov Decision Process (MDP), shown in figure 3.10. The MDP
can be viewed as a game with four key elements:
States: These represent all the possible situations the agent can encounter (e.g.,
different chip design layouts with varying block placements).
Actions: These are the choices the agent can make (e.g., selecting and placing a
specific block).
State Transitions: These represent the probability of moving from one situation
(e.g., current chip layout) to another (e.g., layout after placing a block).
Rewards: These are like feedback signals telling the agent how good or bad an
action was in a particular situation (e.g., higher reward for a more efficient
placement).
The goal of RL in IC design is for the agent to develop an optimal strategy, or policy,
that consistently leads to the best outcomes (highest rewards) over time.
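The MDP loop above can be sketched with tabular Q-learning on a toy problem. The four-state chain, reward, and hyperparameters below are illustrative stand-ins, not a real placement task:

```python
import random

# Toy Q-learning sketch of the MDP loop: a 4-state chain where moving
# "right" eventually reaches a rewarding "good layout" state.

random.seed(0)
N_STATES, GOAL = 4, 3
ACTIONS = ["left", "right"]
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """State transition and reward: reward 1 for entering the goal state."""
    nxt = min(state + 1, GOAL) if action == "right" else max(state - 1, 0)
    return nxt, (1.0 if nxt == GOAL else 0.0)

alpha, gamma, epsilon = 0.5, 0.9, 0.3
for _ in range(500):                      # episodes
    s = 0
    for _ in range(20):                   # steps per episode
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda x: Q[(s, x)]))
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        if nxt == GOAL:
            break
        s = nxt

# The learned policy: greedy action per state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)}
```

The epsilon-greedy choice is the exploration/exploitation trade-off in miniature: mostly exploit the best known action, occasionally explore, and let rewards propagate back through the Q-table until the policy points toward the goal.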
Machine learning (ML) has become a game-changer in IC design, but there's still
room for improvement. This section explores some of the key challenges and promising
solutions to push the boundaries of ML-powered chip design.
The gradient information within ML models can be harnessed to guide existing
optimization algorithms, leading them towards high-quality solutions faster. Additionally,
using powerful computing platforms like GPUs can significantly speed up the design
process.
All the methods discussed in this survey have their own advantages and
disadvantages, so no single method can be judged simply good or bad. From a
comprehensive analysis of the research articles, the overall survey is summarized in
Table 1 below:
Table 1 Summary of ML methodologies in IC design

Shallow Learning Models
Description: These traditional models encompass various statistical and linear algebra
techniques. They are typically less complex than deep learning approaches.
Strengths: Lower computational cost compared to deep learning. Interpretable results:
model behavior and decision-making processes can be more easily understood.
Weaknesses: Limited learning capacity for complex, non-linear relationships. May
struggle with high-dimensional data.
Applications in IC design: Regression analysis: modeling relationships between design
parameters and performance metrics (e.g., power consumption, delay). Classification
tasks: categorizing design elements or identifying defects (fault detection).

Convolutional Neural Networks (CNNs)
Description: CNNs are a powerful type of deep learning architecture specifically
designed to work with grid-like data, making them ideal for tasks involving spatial
relationships.
Strengths: Excellent at capturing local patterns and features within an image or layout
due to convolutional layers. Effective at learning hierarchical representations of data.
Weaknesses: Can be computationally expensive to train for large datasets. May require
significant data pre-processing for optimal performance.
Applications in IC design: Physical design: placement optimization (analyzing
placement configurations to minimize congestion or wirelength) and design rule
checking (DRC): identifying potential layout violations based on predefined rules.
Manufacturing process modeling: predicting yield or identifying potential hotspots for
failure during the manufacturing process based on chip layout images.

Graph Neural Networks (GNNs)
Description: GNNs are a type of deep learning model designed to work with graph-
structured data. Since circuit schematics can be naturally represented as graphs, GNNs
are well-positioned for tasks that involve analyzing these circuits.
Strengths: Well-suited for analyzing complex relationships within circuits due to their
ability to process graph data. Can effectively capture long-range dependencies in the
data.
Weaknesses: GNNs are a relatively new research area, and there are still ongoing efforts
to develop more efficient and scalable architectures.
Applications in IC design: Circuit design: topology synthesis (exploring different
circuit structures to find the optimal design for a given functionality) and transistor
sizing (optimizing transistor sizes within a circuit to meet performance targets). Design
verification: analyzing circuit connectivity and identifying potential bugs or errors
within the schematic.

Generative Models (VAEs, GANs)
Description: This category encompasses models that can learn from existing design
data and generate new designs or design elements. Two prominent examples are:
Variational Autoencoders (VAEs), which learn a latent representation of design data
and can reconstruct existing designs or generate new variations based on the learned
features; and Generative Adversarial Networks (GANs), which consist of two
competing neural networks: a generator that creates new designs and a discriminator
that tries to distinguish real designs from generated ones.
Strengths: Offer the ability to create entirely new design options or variations,
expanding design exploration capabilities. Can be used for data augmentation,
generating synthetic data to improve model performance.
Weaknesses: Generative models can be complex to train and require careful
hyperparameter tuning. The quality of generated designs might require additional
refinement or human intervention.
Applications in IC design: Physical design: mask optimization for lithography
(generating improved mask layouts that compensate for process variations) and
standard cell placement (proposing new placement configurations based on existing
library cells).

Reinforcement Learning (RL)
Description: RL employs a unique approach where an agent learns through trial and
error in a simulated environment. The agent interacts with the environment, taking
actions and receiving rewards based on the outcomes. Over time, the agent learns to
choose actions that lead to higher rewards, enabling it to discover optimal design
strategies.
Strengths: Well-suited for complex design optimization problems where the
relationship between actions and outcomes is not easily defined. Can learn effective
strategies without the need for explicit programming of all design rules.
Weaknesses: RL algorithms can be computationally expensive, especially for large and
complex design spaces. Exploration vs. exploitation trade-off: balancing learning new
strategies (exploration) with exploiting currently known good options.
Applications in IC design: Design optimization: device sizing (finding optimal
transistor sizes for a circuit to achieve desired performance targets), placement
optimization (optimizing the placement of blocks on a chip layout to minimize
congestion, wirelength, or power consumption), and clock tree synthesis (generating
efficient clock tree structures for distributing clock signals throughout the chip).
Chapter 4
RESULTS
The ever-growing complexity of integrated circuits (ICs) demands innovative
solutions for design automation and optimization. This survey explored the exciting
intersection of machine learning (ML) and advanced IC design, highlighting the immense
potential of ML to revolutionize this critical field:
Reinforcement Learning (RL) has emerged as a powerful tool for tackling various
design challenges, from placement and routing to power management. RL agents
can learn optimal design strategies through trial and error within a simulated
environment.
Value function-based methods provide a valuable framework for RL in IC design,
but limitations in convergence time and stability necessitate further research.
Policy function-based methods offer an alternative approach, particularly for
problems with vast state and action spaces. However, challenges related to sample
efficiency and local optima require careful consideration during implementation.
Scalability remains a hurdle for applying ML to massive industrial designs.
Promising solutions involve Model Order Reduction (MOR) techniques to create
smaller, faster-to-simulate circuits that capture essential information for effective
verification.
Current ML models in IC design primarily function as assistants, suggesting
improvements without significantly reducing design time. Future advancements
lie in integrating ML models as constraints within the design optimization process,
leading to faster exploration of high-quality solutions.
Performance-aware physical design with ML faces challenges in modelling
overall performance and interacting with existing Electronic Design Automation
(EDA) tools. Leveraging the gradient information within ML models and
deploying these approaches on high-performance computing platforms can offer
significant speed improvements.
Choosing the right RL technique depends on the specific design task, data
availability, and state/action space size.
Tailoring neural network architectures (actor and critic networks) to the design
problem can improve efficiency. Graph neural networks (GNNs) are particularly
effective for circuit schematics.
Addressing sample efficiency is crucial. Techniques like pre-training and focusing
on relevant state space regions can help.
Preventing local optima requires employing exploration techniques and diverse
initialization methods for the RL networks.
Scalability and model compression techniques like knowledge distillation or
pruning can enable deployment on resource-constrained hardware.
Chapter 5
CONCLUSION
The integration of machine learning (ML) into advanced integrated circuit (IC)
design signifies a promising frontier in chip creation, propelled by the escalating
intricacies of ICs demanding innovative design solutions. This survey delved into the
intersection of ML and advanced IC design, underscoring the transformative potential of
ML in this pivotal domain. Reinforcement Learning (RL) has emerged as a potent tool for
addressing diverse design challenges encompassing placement, routing, and power
management.
Envisioning the horizon, the marriage of ML and IC design holds the promise of
accelerated design cycles, streamlined exploration of the design space, and the assurance
of higher-quality chips meeting stringent performance standards. Beyond these
advancements, continued research in interpretable ML models, efficient hardware co-
design, and fortified security measures will be indispensable for the enduring success of
REFERENCES
[1] B. Khailany et al., "Accelerating Chip Design with Machine Learning," in IEEE
Micro, vol. 40, no. 6, pp. 23-32, 1 Nov.-Dec. 2020.
[4] K. Zhu et al., “GeniusRoute: A new analog routing paradigm using generative
neural network guidance,” in Proc. ICCAD, 2019, pp. 1–8.
[5] S. K. Mandal et al., “An energy-aware online learning framework for resource
management in heterogeneous platforms,” ACM Trans. Design Autom. Electron.
Syst., vol. 25, no. 3, pp. 1–26, May 2020.
[7] M. Rapp et al., "MLCAD: A Survey of Research in Machine Learning for CAD
Keynote Paper," in IEEE Transactions on Computer-Aided Design of Integrated
Circuits and Systems, vol. 41, no. 10, pp. 3162-3181, Oct. 2022.
[8] K. I. Gubbi et al., “Survey of machine learning for electronic design automation,”
in Proc. Great Lakes Symposium on VLSI (GLSVLSI), pp. 513–518, June 2022.
[9] Huang et al., “Machine learning for electronic design automation: A survey,”
ACM Trans. Design Autom. Electron. Syst., vol. 26, no. 5, pp. 1–46, Jun. 2021.