Page Rank With Apache Spark Graphx
The PageRank algorithm outputs a probability distribution used to represent the likelihood
that a person randomly clicking on links will arrive at any particular page. PageRank can be
calculated for collections of documents of any size. It is assumed in several research papers
that the distribution is evenly divided among all documents in the collection at the beginning
of the computational process. The PageRank computations require several passes, called
"iterations", through the collection to adjust approximate PageRank values to more closely
reflect the theoretical true value.
A probability is expressed as a numeric value between 0 and 1. A 0.5 probability is
commonly expressed as a "50% chance" of something happening. Hence, a document with a
PageRank of 0.5 means there is a 50% chance that a person clicking on a random link will be
directed to said document.
Companies that run search engines can set the price for placing an ad on a web page based on the page rank of that page; placing an ad on a higher-traffic page, conceivably one with a higher page rank, will cost more.
Below is a simple example:
Suppose you have a website with 4 pages, and each page links to some of the others. For simplicity, assume these links are static (hard-coded). In the real world, links (URLs) to web pages are usually rendered dynamically rather than hard-coded, so page ranks are dynamic as well, not static, and need to be recomputed whenever pages and links change.
Looking at page products.html, it has 2 outbound URL links: 1 to index.html and 1 to services.html.
Similarly, index.html has 3 outbound URL links: 1 to products.html, 1 to services.html and 1 to investor.html.
services.html has 1 outbound URL link, to products.html.
investor.html has 2 outbound URL links: 1 to products.html and 1 to index.html.
Given that all other pages have links to products.html, the PageRank of products.html, denoted PR(products.html), is calculated as follows (Formula 1):
PR(products.html) = PR(index.html)/3 + PR(services.html)/1 + PR(investor.html)/2
Why PR(index.html)/3? Because index.html has 3 outbound links and only 1 of them points to products.html, so only 1/3 of its PR value contributes to PR(products.html).
Likewise for PR(services.html)/1 and PR(investor.html)/2.
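Formula 1 can be checked with a short, self-contained Scala sketch (no Spark needed) that repeatedly applies the probability-based rank update over the 4 pages until it stabilizes. The page names and link structure come straight from the text above; everything else is illustrative:

```scala
// Link structure of the 4-page example: outLinks(p) = pages that p links to
val outLinks = Map(
  "products.html" -> List("index.html", "services.html"),
  "index.html"    -> List("products.html", "services.html", "investor.html"),
  "services.html" -> List("products.html"),
  "investor.html" -> List("products.html", "index.html")
)
val pages = outLinks.keys.toList

// Start with an even distribution over all pages, then repeatedly apply
// PR(p) = sum over pages q linking to p of PR(q) / outDegree(q)
var pr = pages.map(p => p -> 1.0 / pages.size).toMap
for (_ <- 1 to 100) {
  pr = pages.map { p =>
    p -> pages.filter(q => outLinks(q).contains(p))
              .map(q => pr(q) / outLinks(q).size)
              .sum
  }.toMap
}

pr.toList.sortBy(-_._2).foreach(println)
```

With this link structure the ranks settle near PR(products.html) = 0.4, the largest of the four, and the four values always sum to 1, as a probability distribution should.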
Recall that the PageRank algorithm outputs a probability distribution representing the likelihood that a person randomly clicking on links will arrive at any particular page. Since it is a probability, each Page Rank should be between 0 and 1, right?
Here is the Scala code that creates a random graph of 10 vertices, computes the page rank for each vertex and prints the page ranks in descending order (the graph-generation line is reconstructed here; GraphGenerators.logNormalGraph is an assumption, but a natural one given the imports):
import org.apache.spark.graphx._
import org.apache.spark.graphx.impl._
import org.apache.spark.graphx.lib._
import org.apache.spark.graphx.util._
import org.apache.spark.sql._

// Generate a random graph with 10 vertices, run PageRank with tolerance 0.0001,
// then print (VertexId, PageRank) pairs sorted by rank, descending
val graph = GraphGenerators.logNormalGraph(sc, numVertices = 10)
graph.pageRank(0.0001).vertices.sortBy(-_._2).collect.foreach(println)
/*
Output Tuple pair, 1st value is Vertex Id, 2nd value is Page Rank
(8,1.3816084012350922)
(2,1.2167791912510777)
(4,1.1607761148828422)
(7,1.0003408285776794)
(6,0.9886969400377069)
(0,0.9724138586272979)
(5,0.9366520865492737)
(9,0.896870650813802)
(3,0.7478478664160348)
(1,0.6980140616091922)
*/
I notice some page ranks are > 1. I also notice that if I add up all the page ranks, the total equals the number of vertices, per the following code (the final query line produces the sum shown below):
graph.pageRank(0.0001).vertices
  .sortBy(-_._2).toDF
  .withColumnRenamed("_1","VertexId")
  .withColumnRenamed("_2","PageRank")
  .createOrReplaceTempView("pagerank")

spark.sql("select sum(PageRank) from pagerank").show
/*
Output:
+-------------+
|sum(PageRank)|
+-------------+
| 10.0|
+-------------+
*/
If this were the probability-based Page Rank, adding up all the Page Ranks would evaluate to 1, and the Page Rank of each vertex would be between 0 and 1. However, according to the original research paper from Google, the formula to calculate page rank is
(Formula 2):
PR(A) = (1 - d) + d * (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))
where d is the damping factor (typically 0.85), T1..Tn are the pages that link to A, and C(Ti) is the number of outbound links on page Ti.
It is likely that the pageRank method from Spark GraphX is based on Formula 2. To verify this, look at the relevant open source code that computes the page rank of each vertex in a Graph:
1. pageRank is a method exposed in the abstract class Graph:
abstract class Graph[VD: ClassTag, ED: ClassTag] {
def pageRank(tol: Double, resetProb: Double = 0.15): Graph[Double, Double]
}
2. Method pageRank is implemented in class GraphOps:
class GraphOps[VD: ClassTag, ED: ClassTag](graph: Graph[VD, ED]) extends Serializable {
  /**
   * Run a dynamic version of PageRank returning a graph with vertex attributes
   * containing the PageRank and edge attributes containing the normalized edge weight.
   *
   * @see [[org.apache.spark.graphx.lib.PageRank$#runUntilConvergence]]
   */
  def pageRank(tol: Double, resetProb: Double = 0.15): Graph[Double, Double] = {
    PageRank.runUntilConvergence(graph, tol, resetProb)
  }
}
3. The actual implementation method runUntilConvergence is in object PageRank:
/**
 * PageRank algorithm implementation.
 * ...
 * The second implementation uses the `Pregel` interface and runs PageRank until
 * convergence:
 *
 * {{{
 * var PR = Array.fill(n)( 1.0 )
 * val oldPR = Array.fill(n)( 0.0 )
 * while( max(abs(PR - oldPR)) > tol ) {
 *   swap(oldPR, PR)
 *   for( i <- 0 until n if abs(PR[i] - oldPR[i]) > tol ) {
 *     PR[i] = alpha + (1 - alpha) * inNbrs[i].map(j => oldPR[j] / outDeg[j]).sum
 *   }
 * }
 * }}}
 *
 * `alpha` is the random reset probability (typically 0.15), `inNbrs[i]` is the set of
 * neighbors which link to `i` and `outDeg[j]` is the out degree of vertex `j`.
 *
 * @note This is not the "normalized" PageRank and as a consequence pages that have no
 * inlinks will have a PageRank of alpha.
 */
object PageRank extends Logging {
  /**
   * Run a dynamic version of PageRank returning a graph with vertex attributes
   * containing the PageRank and edge attributes containing the normalized edge weight.
   *
   * @tparam VD the original vertex attribute (not used)
   * @tparam ED the original edge attribute (not used)
   *
   * @param graph the graph on which to compute PageRank
   * @param tol the tolerance allowed at convergence (smaller => more accurate).
   * @param resetProb the random reset probability (alpha)
   *
   * @return the graph containing with each vertex containing the PageRank and each edge
   * containing the normalized weight.
   */
  def runUntilConvergence[VD: ClassTag, ED: ClassTag](
      graph: Graph[VD, ED], tol: Double, resetProb: Double = 0.15): Graph[Double, Double] =
  {
    runUntilConvergenceWithOptions(graph, tol, resetProb)
  }
  /**
   * Run a dynamic version of PageRank returning a graph with vertex attributes
   * containing the PageRank and edge attributes containing the normalized edge weight.
   *
   * @tparam VD the original vertex attribute (not used)
   * @tparam ED the original edge attribute (not used)
   *
   * @param graph the graph on which to compute PageRank
   * @param tol the tolerance allowed at convergence (smaller => more accurate).
   * @param resetProb the random reset probability (alpha)
   * @param srcId the source vertex for a Personalized Page Rank (optional)
   *
   * @return the graph containing with each vertex containing the PageRank and each edge
   * containing the normalized weight.
   */
  def runUntilConvergenceWithOptions[VD: ClassTag, ED: ClassTag](
      graph: Graph[VD, ED], tol: Double, resetProb: Double = 0.15,
      srcId: Option[VertexId] = None): Graph[Double, Double] =
  {
    require(tol >= 0, s"Tolerance must be no less than 0, but got ${tol}")
    require(resetProb >= 0 && resetProb <= 1, s"Random reset probability must belong" +
      s" to [0, 1], but got ${resetProb}")
    // Define the functions needed to implement PageRank in the GraphX version of Pregel
    def vertexProgram(id: VertexId, attr: (Double, Double), msgSum: Double): (Double, Double) = {
      val (oldPR, lastDelta) = attr
      val newPR = oldPR + (1.0 - resetProb) * msgSum
      (newPR, newPR - oldPR)
    }

    // ...

    // SPARK-18847 If the graph has sinks (vertices with no outgoing edges) correct the sum of ranks
    normalizeRankSum(rankGraph, personalized)
  }
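The unnormalized update from the scaladoc above, PR[i] = alpha + (1 - alpha) * sum of oldPR[j]/outDeg[j], can be simulated on the earlier 4-page example in a few lines of plain Scala. This is only a sketch, not the GraphX code itself, but it reproduces both effects observed earlier: individual ranks above 1, and a total equal to the number of vertices:

```scala
// Same 4-page link structure as the earlier example
val outLinks = Map(
  "products.html" -> List("index.html", "services.html"),
  "index.html"    -> List("products.html", "services.html", "investor.html"),
  "services.html" -> List("products.html"),
  "investor.html" -> List("products.html", "index.html")
)
val pages = outLinks.keys.toList
val alpha = 0.15 // random reset probability, GraphX's default resetProb

// GraphX-style (unnormalized) update: every vertex starts at 1.0 and
// PR(p) = alpha + (1 - alpha) * sum of PR(q)/outDeg(q) over pages q linking to p
var pr = pages.map(p => p -> 1.0).toMap
for (_ <- 1 to 100) {
  pr = pages.map { p =>
    p -> (alpha + (1 - alpha) * pages.filter(q => outLinks(q).contains(p))
                                     .map(q => pr(q) / outLinks(q).size)
                                     .sum)
  }.toMap
}

println(f"sum of ranks = ${pr.values.sum}%.4f") // stays at the vertex count, 4
pr.toList.sortBy(-_._2).foreach(println)
```

Because every page here has at least one outbound link, each iteration redistributes the full rank mass, so the sum stays pinned at N = 4 while PR(products.html) rises well above 1.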
Conclusion
That explains why the page rank code above produces page ranks that can be greater than 1, and why the page ranks added together total N (N = number of vertices in the graph). In fact, whether or not page rank is a probability value is not important; what matters is the relative significance of the page rank values. The higher the page rank of a vertex (web page), the more visiting traffic the page is likely to have, and the price for anyone placing ads can be set accordingly.
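That said, if a probability distribution is ever needed, the GraphX result can simply be rescaled: dividing each rank by the total (which equals N) yields values between 0 and 1 that sum to 1 without changing the ordering. A minimal sketch, using made-up ranks in the shape GraphX returns (the values are illustrative, not real output):

```scala
// Hypothetical (VertexId, PageRank) pairs of the unnormalized kind GraphX returns
val ranks = Map(0L -> 1.38, 1L -> 1.21, 2L -> 0.93, 3L -> 0.48)
val total = ranks.values.sum // for GraphX this equals the number of vertices

// Rescale into a probability distribution; the relative order is unchanged
val normalized = ranks.map { case (id, r) => id -> r / total }

normalized.toList.sortBy(-_._2).foreach(println)
```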