
GPU-parallel Laplacian solver and message passing

Sumaiya Dabeer
Department of Computer Science and Engineering
Indian Institute of Technology, Delhi
June 4, 2024

1 Research Interests and Objective


My research focuses on GPU-based parallel message passing on graphs. I have used this specialization
to build a Laplacian solver that handles symmetric, diagonally dominant systems on graphs with
millions of nodes and billions of edges. We have designed a data structure and a key-value reduction
operation for GPU-based message passing, which lets us model random walks and queueing systems
and ultimately underpins our Laplacian solver, GPU-LSolve. My goal is to bridge the gap between
theoretical advancements and practical applications across diverse domains, including spectral
clustering, communication locality, online learning, and brain network research, using GPU-LSolve.
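For concreteness, the Laplacian systems in question have the standard form from spectral graph
theory (stated here as textbook background, not as a description of GPU-LSolve internals):

\[
  L x = b, \qquad L = D - A, \qquad D_{ii} = \sum_{j} A_{ij},
\]

where A is the (weighted) adjacency matrix of the graph and D the diagonal degree matrix; L is
symmetric and diagonally dominant, which is exactly the structure such solvers exploit.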

2 Key Findings and Anticipated Outcomes


We implemented the solver in CUDA-C++ on top of our message-passing primitive. Its performance is
comparable to existing solvers on small datasets, but it significantly outperforms them on large
(million-scale) graphs, where state-of-the-art solvers either time out or run out of memory.
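As a rough illustration of the kind of primitive this builds on, a minimal sketch of one round of
message passing expressed as a key-value reduction over edges is shown below; the toy edge list and
variable names are assumptions for illustration, not the actual GPU-LSolve data structure.

// Sketch: one round of message passing as a key-value reduction over edges.
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/reduce.h>
#include <vector>

int main() {
    // Toy graph in edge-list form: each edge carries a message for its
    // destination node (e.g. a scaled value coming from its source node).
    std::vector<int>   h_dst = {1, 1, 2, 0, 2};
    std::vector<float> h_msg = {0.5f, 1.0f, 0.25f, 2.0f, 0.75f};
    thrust::device_vector<int>   dst(h_dst.begin(), h_dst.end());
    thrust::device_vector<float> msg(h_msg.begin(), h_msg.end());

    // Group the (destination, message) pairs by destination node ...
    thrust::sort_by_key(dst.begin(), dst.end(), msg.begin());

    // ... then aggregate each group with a single segmented reduction,
    // instead of issuing one atomic update per edge.
    thrust::device_vector<int>   recv_node(dst.size());
    thrust::device_vector<float> recv_sum(dst.size());
    auto ends = thrust::reduce_by_key(dst.begin(), dst.end(), msg.begin(),
                                      recv_node.begin(), recv_sum.begin());
    auto k = ends.first - recv_node.begin();  // number of nodes that received messages
    (void)k;
    return 0;
}

The segmented reduction aggregates every message addressed to the same node in a single pass, which
is the property that maps well onto GPU hardware.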
Laplacian solvers are used, directly or indirectly, for spectral clustering and communication
locality: minimizing the Rayleigh quotient whose optimum is the second smallest eigenvalue captures
the connectivity structure of the graph, which is then used to group data points into clusters.
These cluster partitions have good locality, reducing communication overhead in distributed
settings. The solver might also be used to approximate graph structure or node similarity, which
could feed into online discrepancy minimization algorithms.
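As background, the variational characterization behind this use of the solver is the standard one:

\[
  \lambda_2(L) \;=\; \min_{x \neq 0,\; x \perp \mathbf{1}} \frac{x^{\top} L x}{x^{\top} x},
\]

and the minimizing vector (the Fiedler vector) provides the partition, for example by splitting
nodes on the sign of its entries.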
Currently, we are refining and benchmarking our message-passing approach. We rely on reduction
operations, which perform well on GPUs. Competing methods use atomic operations, which suffer from
congestion caused by imbalanced degree distributions and also do not work well for dense graphs.
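To make the contrast concrete, the atomic-scatter alternative looks roughly like the sketch below;
this is an illustrative kernel written for this note, not code taken from any particular competing
solver.

// Illustrative atomic-scatter aggregation: every edge issues an atomicAdd on
// its destination node's accumulator. On skewed degree distributions many
// threads target the same address and serialize, which is the congestion the
// reduction-based formulation avoids.
#include <cuda_runtime.h>

__global__ void scatter_messages_atomic(const int* dst, const float* msg,
                                        float* node_sum, int num_edges) {
    int e = blockIdx.x * blockDim.x + threadIdx.x;
    if (e < num_edges) {
        atomicAdd(&node_sum[dst[e]], msg[e]);  // contended on high-degree nodes
    }
}

int main() {
    const int num_edges = 5, num_nodes = 3;
    int   h_dst[num_edges] = {1, 1, 2, 0, 2};
    float h_msg[num_edges] = {0.5f, 1.0f, 0.25f, 2.0f, 0.75f};
    int* d_dst; float* d_msg; float* d_sum;
    cudaMalloc((void**)&d_dst, num_edges * sizeof(int));
    cudaMalloc((void**)&d_msg, num_edges * sizeof(float));
    cudaMalloc((void**)&d_sum, num_nodes * sizeof(float));
    cudaMemcpy(d_dst, h_dst, num_edges * sizeof(int),   cudaMemcpyHostToDevice);
    cudaMemcpy(d_msg, h_msg, num_edges * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemset(d_sum, 0, num_nodes * sizeof(float));
    scatter_messages_atomic<<<1, 256>>>(d_dst, d_msg, d_sum, num_edges);
    cudaDeviceSynchronize();
    cudaFree(d_dst); cudaFree(d_msg); cudaFree(d_sum);
    return 0;
}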
GPU-enabled message passing is also anticipated to benefit label propagation and graph neural
networks (GNNs). An efficient message-passing algorithm will reduce the carbon footprint of graph
communication models, which would be a significant contribution to the AI/ML community.

3 Broader Significance and Impact


Graphs are the most natural representation of the world around us. Message passing has the
potential to model various operations, such as label propagation and diffusion processes; a major
use case is the graph neural network. To summarize, my research holds significant promise for
various fields such as machine learning, neuroscience, high-performance computing, and domain
specialization. Improved spectral clustering will enhance feature extraction and classification on
large datasets, impacting areas like social network analysis, recommender systems, and fraud
detection. Optimized communication strategies via GPU-enabled message passing can revolutionize
distributed graph processing tasks and accelerate scientific discovery across disciplines.
Understanding brain function through network analysis can lead to breakthroughs in the diagnosis
and treatment of neurological disorders like Alzheimer’s and schizophrenia. Combining these
techniques with domain-specific knowledge can yield customized solutions for network analysis in
areas like finance, biology, and social sciences.

4 Conclusion
By exploring the synergy between Laplacian solvers and GPU-based message passing, I strive to solve
Laplacian systems on million-scale graphs. This message passing, in turn, empowers advancements in
fields ranging from machine learning and high-performance computing to neuroscience and other
scientific domains.
