WWW and the PageRank-Related Problems




Contents
List of Figures ii
List of Tables iii
Introduction iv
Acknowledgement vi
Abstract ix
List of Glossaries xi
1 Objects’ ranks and applications to WWW 1
1.1 Rank of objects . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Rank for objects different from web page. . . . . . . . . . 2
1.1.2 Linkage usage in search engine . . . . . . . . . . . . . 4
1.2 Basis of PageRank . . . . . . . . . . . . . . . . . . . . . . 10
1.2.1 Mathematical basis . . . . . . . . . . . . . . . . . . . 11
1.2.2 Practical issues. . . . . . . . . . . . . . . . . . . . . 13
Conclusion of chapter . . . . . . . . . . . . . . . . . . . . . . . 19
2 Some PageRank-related problems 20
2.1 Accelerating problems . . . . . . . . . . . . . . . . . . . . . 20
2.1.1 Related works . . . . . . . . . . . . . . . . . . . . . 21
2.1.2 Exploiting block structure of the Web . . . . . . . . . . . 22
2.2 Connected-component PageRank approach . . . . . . . . . . . 30
2.2.1 Initial ideas. . . . . . . . . . . . . . . . . . . . . . . 30
2.2.2 Mathematical basis of CCP . . . . . . . . . . . . . . . 32
2.2.3 On practical side of CCP. . . . . . . . . . . . . . . . . 35
2.3 Spam and spam detection . . . . . . . . . . . . . . . . . . . 37
2.3.1 Introduction to Web spam . . . . . . . . . . . . . . . . 37
2.3.2 Spam techniques . . . . . . . . . . . . . . . . . . . . 38
Conclusion of chapter . . . . . . . . . . . . . . . . . . . . . . . 42
3 Implementations and experimental results 43
3.1 Search engine Nutch . . . . . . . . . . . . . . . . . . . . . 43
3.2 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.2.1 Infrastructure and data sets . . . . . . . . . . . . . . . 48
3.2.2 Results . . . . . . . . . . . . . . . . . . . . . . . . 48
Conclusion of chapter . . . . . . . . . . . . . . . . . . . . . . . 58
Conclusions and future works 59
References



Summary of content:

[...] a distinction between pages, because it yields quite different rank results. The results become more neutral as the value of α decreases: pages with medium ranks do not change much, but the highest- and lowest-ranked pages change quite clearly. More specifically, in Figure 1.3(a)
π = (0.1973, 0.2637, 0.2066, 0.1584, 0.1574, 0.0167)
and in Figure 1.3(i)
π = (0.1681, 0.2183, 0.1809, 0.1720, 0.1773, 0.0833),
so π2 goes down from 0.2637 to 0.2183 and π6 goes up from 0.0167 to 0.0833. The former changes at a rate of about 0.5 and the latter at about 0.6. This can lead to different search results if α is not chosen carefully. For better system performance we can choose a small value of α, but the result may be useless; for a usable result we should choose α near 1, but the cost may increase sharply, as shown in Table 1.1.
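The flattening effect described above can be reproduced with a short sketch. The 6-page transition matrix below is hypothetical (it is not the thesis's SmallSet), but it shows the same behavior: as α decreases, the PageRank vector drifts toward uniform and pages become harder to tell apart.

```python
import numpy as np

def pagerank(P, alpha, tol=1e-10, max_iter=1000):
    # Power iteration on G = alpha*P + (1 - alpha)/n; P is row-stochastic.
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new = alpha * pi @ P + (1.0 - alpha) / n
        if np.abs(new - pi).sum() < tol:   # L1 convergence test
            return new
        pi = new
    return pi

# Hypothetical 6-page transition matrix (NOT the thesis's SmallSet).
P = np.array([[0, 1/2, 1/2, 0, 0, 0],
              [0, 0, 1/2, 1/2, 0, 0],
              [1/3, 1/3, 0, 1/3, 0, 0],
              [0, 0, 0, 0, 1/2, 1/2],
              [1, 0, 0, 0, 0, 0],
              [0, 0, 0, 1, 0, 0]], dtype=float)

hi = pagerank(P, alpha=0.9)   # sharper discrimination between pages
lo = pagerank(P, alpha=0.5)   # flatter, more "neutral" ranks
```

The spread between the best- and worst-ranked page is visibly larger for α = 0.9 than for α = 0.5, which is exactly the discrimination-versus-cost trade-off discussed here.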
1.2 Basis of PageRank
[Figure 1.3: nine bar charts of the PR vector of SmallSet, pages 1-6 on the horizontal axis and ranks 0 to 0.4 on the vertical axis, with (a) α = .9; (b) α = .85; (c) α = .8; (d) α = .75; (e) α = .7; (f) α = .65; (g) α = .6; (h) α = .55; (i) α = .5]
Table 1.1 shows the number of iterations needed to compute the PR vector for each value of α.

Value of α | .9 | .85 | .8 | .75 | .7 | .65 | .6 | .55 | .5
Iterations | 18 | 16  | 14 | 12  | 12 | 10  | 10 |  8  |  8

Table 1.1: Iterations needed for each value of decay factor α on SmallSet
The number of iterations needed for α = .9 is more than double that needed for α = .5. As Figure 1.4 shows, the cost for the largest chosen value of α is remarkable, but, referring back to Figure 1.3, the result is comparatively better because we can discriminate between pages' ranks. For the smallest value of α, the number of iterations is smaller and the time taken therefore much lower, but the result may not be very good.
[Figure 1.4: graph of iterations needed with α ∈ [.5, .9] on SmallSet; α on the horizontal axis, iterations from 8 to 18 on the vertical axis]
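The shape of this curve matches the theory: the power method's error contracts roughly like α^k, because the second eigenvalue of the Google matrix is at most α, so the iteration count needed to reach a tolerance ε grows like log ε / log α. A rough sketch (the tolerance here is illustrative, not the thesis's stopping rule):

```python
import math

# Iterations needed to shrink the error by a factor eps when the error
# contracts like alpha^k per step; eps is an illustrative tolerance.
eps = 1e-3
iters = {alpha: math.ceil(math.log(eps) / math.log(alpha))
         for alpha in (0.9, 0.85, 0.8, 0.75, 0.7, 0.65, 0.6, 0.55, 0.5)}
```

The predicted counts fall steadily as α decreases, just as the measured iteration counts in Table 1.1 do.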
Algorithm 1.2 presents the PR computation in practical use.
Algorithm 1.2 Iterative method for practical PageRank computation
1: function PracticalPageRank(G)  ▷ G is the Web graph
2:   α ← any decay value  ▷ α is often chosen as .85
3:   tmp ← any random vector  ▷ tmp is often assigned the uniform vector (1/n, ..., 1/n)
4:   ε ← any threshold value  ▷ determines whether π has converged
5:   for indexRow ← 1 to n do
6:     remove rank-leak and rank-sink nodes
7:   end for
8:   construct P̃
9:   repeat  ▷ makes π converge to the principal eigenvector
10:     π ← tmp
11:     tmp ← π × P̃
12:   until ‖π − tmp‖_L ≤ ε
13:   return tmp  ▷ the final PR vector
14: end function
In fact, all rank-sink and rank-leak nodes are removed by connecting them to other nodes before constructing the matrix P̃. After that, we apply the same process as in Algorithm 1.1 to obtain the final result. Algorithm 1.2 also closes this chapter.
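As a concrete illustration, here is a minimal Python sketch of the procedure above, using dense NumPy matrices (suitable only for toy graphs). The function name, variable names, and the 3-page example are illustrative, not taken from the thesis.

```python
import numpy as np

def practical_pagerank(adj, alpha=0.85, eps=1e-8, max_iter=1000):
    # Patch rank-leak (dangling) rows, build P~, and iterate until the
    # L1 change between successive iterates falls below eps.
    n = adj.shape[0]
    A = adj.astype(float)
    dangling = A.sum(axis=1) == 0
    A[dangling] = 1.0                       # "connect" leak nodes to every page
    P = A / A.sum(axis=1, keepdims=True)    # row-stochastic transition matrix
    P_tilde = alpha * P + (1 - alpha) / n   # dense Google matrix (toy graphs only)
    pi = np.full(n, 1.0 / n)                # start from the uniform vector
    for _ in range(max_iter):
        new = pi @ P_tilde
        if np.abs(new - pi).sum() < eps:    # ||new - pi||_1 < eps
            return new
        pi = new
    return pi

adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 0, 0]])                 # the last page has no outlinks (a rank leak)
pi = practical_pagerank(adj)
```

Real Web graphs are far too large for a dense P̃; production implementations keep the sparse link matrix and apply the teleportation term implicitly.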
Conclusion of chapter
In this chapter, we have gone through a key idea for imposing an order on the objects in our world: objects in a database should be ranked, scientific papers should be ranked, and web pages must be ranked. Most importantly, putting everything in order lets us mine it for knowledge more easily. We also surveyed PageRank, a method for ranking web pages, covering its basis and the problems that arise in practical usage. In the next chapter, we will examine some matters surrounding the accelerating and spamming problems.
CHAPTER 2
Some PageRank-related problems
PageRank is a famous phenomenon, a trademark, and a tool from which Google benefits greatly. Because it is famous, many scientists pay attention to it and, as a result, there are many ways to tweak PageRank. Some accelerating methods for PageRank will be shown in this chapter as proof that PageRank is one of the biggest current affairs in Internet-related fields. Furthermore, because a page can easily exploit PageRank to obtain a higher rank, we must deal with a problem called spam4.
2.1 Accelerating problems
PageRank computation is a big problem that involves many fields of study, such as storage capacity and computational complexity. In this section, we discuss one of the major problems arising with the computation of PageRank, the so-called accelerating problem.
4This kind of spam is different from mail spam
2.1.1 Related works
The Web is becoming a massive repository for delivering information. Recent studies [8] show that most users access the Web by means of a search engine. Nielsen/NetRatings, one of the leading Internet and digital media audience information and analysis services, reports that there were an estimated 151 million active Internet users in the US in January 2004. Of these, 76% used a search engine at least once during the month, and the average time spent searching was about 40 minutes. The easiest way to access something on the Internet is therefore to enter some keywords into a favorite search engine and jump to the selected Web site returned as the answer. For these reasons, we have witnessed a growing interest and large investment efforts aimed at improving the quality of Web ranking algorithms. The research community has also devoted increased attention to reducing the computation time needed by Web ranking algorithms. In particular, much attention has been devoted to improving PageRank [6, 16], the well-known ranking algorithm used by Google. The core of PageRank is an iterative assignment of rank weights to the Web pages, repeated until a fixed point is reached. This fixed point turns out to be the (dominant) eigenpair of a matrix derived from the Web graph itself. Brin and Page originally suggested computing this pair using the well-known power method, and they also gave a nice interpretation of PageRank in terms of Markov chains.
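That Markov-chain interpretation can be made concrete with a tiny Monte Carlo sketch: a "random surfer" follows a random outlink with probability α and teleports to a uniformly random page otherwise, and the surfer's long-run visit frequencies approximate the PageRank vector. The three-page graph below is illustrative, not from any cited experiment.

```python
import random
from collections import Counter

def random_surfer(links, alpha=0.85, steps=200_000, seed=0):
    # Simulate the random surfer; visit frequencies approximate PageRank.
    rng = random.Random(seed)
    pages = list(links)
    visits = Counter()
    page = rng.choice(pages)
    for _ in range(steps):
        visits[page] += 1
        if links[page] and rng.random() < alpha:
            page = rng.choice(links[page])   # follow a random outlink
        else:
            page = rng.choice(pages)         # teleport (also rescues dangling pages)
    return {p: visits[p] / steps for p in pages}

# Illustrative 3-page graph; "c" is a dangling page with no outlinks.
freq = random_surfer({"a": ["b"], "b": ["a", "c"], "c": []})
```

The simulation converges slowly compared with the power method, but it explains why the fixed point is a probability distribution: it is the stationary distribution of the surfer's Markov chain.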
The most recent research on PageRank aims to address at least two different needs. The first is the desire to reduce the time spent weighting the nodes of a Web graph containing many billions of pages, which requires several days of computation. Note that Fetterly et al. [9] showed that 35% of the Web changed during the course of their study (eleven downloads over a period of 11 weeks). More recently, Cho et al. [14] showed that in a week more than 25% of links are changed and 5% of "new content" is created. They conclude that "This result indicates that search engines need to update link based ranking metrics (such as PageRank) very often... a week-old ranking may not reflect the current ranking of the pages very well." The second need is to assign many PageRank values to each Web page, as a result of PageRank's personalization, which was recently presented by Google as a beta service [1].
Previous approaches to accelerating PageRank have followed three different directions. The first tries to fit the graph structure into main memory by using compression techniques for the Web graph [5]. The second computes PageRank efficiently in external memory, using a sequence of scan operations on the Web graph. The last direction exploits efficient numerical methods to reduce the computation time. These numerical techniques are the most promising, and many intriguing results have appeared in the last few years. Arasu et al. [3] observe that computing PageRank is equivalent to solving a linear system and propose exploiting iterative methods such as the Gauss-Seidel algorithm. Kamvar et al. [12] suggest accelerating the computation with extrapolation methods, which periodically subtract estimates of the non-principal eigenvectors from the current iterate of the power method. The reported speedup is about 50-300% and depends on the particular parameter settings used for computing PageRank. Kamvar et al. [14] note that the Web graph's matrix, permuted by sorting the URLs lexicographically, assumes an approximate block structure, since most links are intra-domain. They suggest computing several "local" PageRank vectors separately, one for each block, and using the concatenation of these vectors as the starting point for the computation of the global web rank. In [13], the authors suggest avoiding the re-computation of the (sub-)components of the PageRank vector as soon as they arrive at a fixed point, which results in a speedup of about 17%.
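The linear-system view of Arasu et al. can be sketched in a few lines: for a row-stochastic P with no dangling rows, the PageRank vector satisfies π(I − αP) = ((1 − α)/n)·1, so a direct solver (or an iterative one such as Gauss-Seidel) reaches the same vector that power iteration converges to. The 3-node matrix below is illustrative only.

```python
import numpy as np

# PageRank as a linear system: pi (I - alpha P) = ((1 - alpha)/n) * ones.
alpha = 0.85
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
n = P.shape[0]
b = np.full(n, (1 - alpha) / n)
# Solve the transposed system so the unknown is the row vector pi.
pi_direct = np.linalg.solve((np.eye(n) - alpha * P).T, b)

# Power-method reference: both routes converge to the same vector.
pi_power = np.full(n, 1.0 / n)
for _ in range(200):
    pi_power = alpha * pi_power @ P + (1 - alpha) / n
```

Since P is row-stochastic, the solution automatically sums to 1, which is why no explicit normalization step appears above.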
In the following part, we will investigate a method motivated by the cost-reducing direction mentioned above. It exploits the st...
