Anatomy of Google etc

Advertisement
Project part B due a month from now (10/26)
Anatomy of Google
(circa 1999)
Slides from
http://www.cs.huji.ac.il/~sdbi/2000/google/index.htm
Some points…
• Fancy hits?
• Why two types of barrels?
• How is indexing parallelized?
• How does Google show that it doesn't quite care about recall?
• How does Google avoid crawling the same URL multiple times?
• What are some of the memory-saving things they do?
• Do they use TF/IDF?
• Do they normalize? (why not?)
• Can they support proximity queries?
• How are "page synopses" made?
Challenges in Web Search Engines
• Spam
– Text Spam
– Link Spam
– Cloaking
• Content Quality
– Anchor text quality
• Quality Evaluation
– Indirect feedback
• Web Conventions
– Articulate and develop validation
• Duplicate Hosts
– Mirror detection
• Vaguely Structured Data
– Page layout
– The advantage of making the rendering/content language be the same
Search Engine Size over Time
Number of indexed pages, self-reported
Google: 50% of the web?
Information from searchenginewatch.com
Google Search Engine Architecture
SOURCE: BRIN & PAGE
URL Server - provides URLs to be fetched
Crawler - distributed; fetches pages
Store Server - compresses and stores pages for indexing
Repository - holds pages for indexing (full HTML of every page)
Indexer - parses documents; records words, positions, font size, and capitalization
Lexicon - list of unique words found
HitList - efficient record of word locations + attributes
Barrels - hold (docID, (wordID, hitList*)*)* sorted; each barrel has a range of words
Anchors - keep information about links found in web pages
URL Resolver - converts relative URLs to absolute
Sorter - generates the Doc Index
Doc Index - inverted index of all words in all documents (except stop words)
Links - stores info about links to each page (used for PageRank)
PageRank - computes a rank for each page retrieved
Searcher - answers queries
Major Data Structures
• Big Files
– virtual files spanning multiple file systems
– addressable by 64-bit integers
– handles allocation & deallocation of file descriptors, since the OS's support is not enough
– supports rudimentary compression
Major Data Structures (2)
• Repository
– tradeoff between speed & compression ratio
– chose zlib (3 to 1) over bzip (4 to 1); a small sketch follows
– requires no other data structure to access it
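The speed-vs-ratio tradeoff is easy to demonstrate with Python's standard zlib module. This is a minimal sketch; the record layout and field names are assumptions, not the paper's actual repository format.

    import zlib

    def store_page(repo_file, doc_id, url, html_bytes):
        """Append one compressed page record to the repository file (hypothetical layout)."""
        compressed = zlib.compress(html_bytes, 6)   # zlib: roughly 3:1 on HTML, much faster than bzip
        header = f"{doc_id}\t{len(url)}\t{len(compressed)}\n".encode()
        repo_file.write(header + url.encode() + compressed)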
Major Data Structures (3)
• Document Index
– keeps information about each document
– fixed-width ISAM (Index Sequential Access Mode) index
– includes various statistics
  • pointer to repository, if crawled; pointer to info lists
– compact data structure
– we can fetch a record in 1 disk seek during search (sketched below)
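Because the records are fixed width, a lookup is just an offset computation plus one seek. A minimal sketch, assuming a hypothetical 64-byte record layout:

    import struct

    RECORD_SIZE = 64          # assumed fixed-width record
    LAYOUT = "<QI52s"         # e.g. repository pointer, status, padded statistics (hypothetical fields)

    def fetch_doc_record(index_file, doc_id):
        """One seek + one read per document, as in a fixed-width ISAM index."""
        index_file.seek(doc_id * RECORD_SIZE)
        data = index_file.read(RECORD_SIZE)
        return struct.unpack(LAYOUT, data)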
Major Data Structures (4)
• Lexicon
– can fit in memory for a reasonable price
  • currently 256 MB
  • contains 14 million words
– 2 parts (sketched below):
  • a list of words
  • a hash table
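A minimal in-memory lexicon along these lines: the word list plus a hash table from word to wordID. Names and structure are illustrative.

    class Lexicon:
        """Word list + hash table; small enough to fit in RAM (~14 M words in the 1998 system)."""
        def __init__(self):
            self.words = []          # wordID -> word
            self.word_to_id = {}     # word   -> wordID

        def lookup_or_add(self, word):
            wid = self.word_to_id.get(word)
            if wid is None:
                wid = len(self.words)
                self.words.append(word)
                self.word_to_id[word] = wid
            return wid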
Major Data Structures (4)
• Hit Lists
– include position, font & capitalization
– account for most of the space used in the indexes
– 3 encoding alternatives: simple, Huffman, hand-optimized
– hand encoding uses 2 bytes for every hit (sketched below)
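The paper describes a plain hit as a capitalization bit, a 3-bit font size, and a 12-bit position packed into 2 bytes; the exact bit order below is an assumption.

    def pack_plain_hit(capitalized, font_size, position):
        """Pack one 'plain' hit into 16 bits: 1-bit cap, 3-bit relative font size, 12-bit position.
        Bit layout is illustrative; the paper fixes the field widths, not the ordering."""
        position = min(position, 4095)   # positions beyond the 12-bit range are clamped (assumption)
        return (int(capitalized) << 15) | ((font_size & 0x7) << 12) | position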
Major Data Structures (4)
• Hit Lists (2)
Major Data Structures (5)
• Forward Index
– partially ordered
– uses 64 barrels
– each barrel holds a range of wordIDs
– requires slightly more storage
– each wordID is stored as a relative difference from the minimum wordID of the barrel (sketched below)
– saves considerable time in the sorting
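A toy illustration of the relative-difference trick; per the paper, the offset fits in 24 bits, leaving 8 bits for the hit-list length. The function name and the assert are illustrative.

    def encode_forward_word_id(word_id, barrel_min_word_id):
        """Store the wordID as an offset from the barrel's minimum wordID.
        With 64 barrels each covering a modest range, 24 bits suffice for the offset."""
        delta = word_id - barrel_min_word_id
        assert 0 <= delta < (1 << 24)
        return delta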
Major Data Structures (6)
• Inverted Index
– 64 barrels (same as the forward index)
– for each wordID, the Lexicon contains a pointer to the barrel that wordID falls into
– the pointer points to a doclist of docIDs together with their hit lists
– the order of the docIDs is important
  • by docID or by doc word-ranking
– two inverted barrels: the short barrel / the full barrel
Major Data Structures (7)
• Crawling the Web
– fast distributed crawling system
– URLserver & Crawlers are implemented in Python
– each Crawler keeps about 300 connections open
– at peak time the rate is about 100 pages (roughly 600 KB) per second
– uses an internal cached DNS lookup
– asynchronous IO to handle events (a modern sketch follows this list)
– a number of queues
– robust & carefully tested
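A hedged sketch of the same idea with today's tools: asyncio plus aiohttp, keeping a few hundred fetches in flight. The library choice and connection limit are illustrative stand-ins for the paper's own Python crawler; aiohttp also caches DNS lookups by default, echoing the internal cached DNS above.

    import asyncio, aiohttp

    async def crawl(urls, max_connections=300):
        """Keep ~300 fetches in flight using asynchronous IO (sketch, not the original crawler)."""
        connector = aiohttp.TCPConnector(limit=max_connections)
        async with aiohttp.ClientSession(connector=connector) as session:
            async def fetch(url):
                try:
                    async with session.get(url) as resp:
                        return url, await resp.read()
                except aiohttp.ClientError:
                    return url, None
            return await asyncio.gather(*(fetch(u) for u in urls))

    # pages = asyncio.run(crawl(["http://example.com/"]))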
Major Data Structures (8)
• Indexing the Web
– Parsing
  • must handle a huge range of errors:
    – HTML typos
    – kilobytes of zeros in the middle of a tag
    – non-ASCII characters
    – HTML tags nested hundreds deep
  • developed their own parser
    – involved a fair amount of work
    – did not cause a bottleneck
Major Data Structures (9)
• Indexing Documents into Barrels
– turning words into wordIDs
– in-memory hash table: the Lexicon
– new additions are logged to a file
– parallelization (sketched below)
  • shared base lexicon of 14 million words
  • log of all the extra words
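A minimal sketch of the wordID assignment step: words are looked up in an in-memory lexicon, and words missing from the shared base lexicon get fresh IDs and are appended to a log for later merging. All names are illustrative.

    def index_document(doc_id, words, lexicon, extra_word_log):
        """lexicon: dict word -> wordID (base lexicon); extra_word_log: open text file."""
        hits = []
        for position, word in enumerate(words):
            wid = lexicon.get(word)
            if wid is None:
                wid = len(lexicon)
                lexicon[word] = wid
                extra_word_log.write(f"{word}\t{wid}\n")   # new additions are logged to a file
            hits.append((wid, position))
        return doc_id, hits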
Major Data Structures (10)
• Indexing the Web
– Sorting
  • creating the inverted index
  • produces two types of barrels
    – for title and anchor hits (short barrels)
    – for full text (full barrels)
  • sorts every barrel separately
  • runs the sorters in parallel
  • the sorting is done in main memory (a toy inversion sketch follows)
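A toy in-memory version of the inversion step: re-group forward-barrel records so that each wordID maps to its doclist. The short/full split and the on-disk format are omitted.

    from collections import defaultdict

    def invert_barrel(forward_barrel):
        """forward_barrel: iterable of (doc_id, word_id, hit_list) records.
        Returns wordID -> list of (docID, hit_list), i.e. the doclists of the inverted barrel."""
        inverted = defaultdict(list)
        for doc_id, word_id, hit_list in forward_barrel:
            inverted[word_id].append((doc_id, hit_list))
        for doclist in inverted.values():
            doclist.sort(key=lambda entry: entry[0])   # order by docID (ranking-based orders also possible)
        return inverted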
Searching
• Algorithm (a query-evaluation sketch follows the steps)
  1. Parse the query
  2. Convert words into wordIDs
  3. Seek to the start of the doclist in the short barrel for every word
  4. Scan through the doclists until there is a document that matches all of the search terms
  5. Compute the rank of that document
  6. If we're at the end of the short barrels, start at the doclists of the full barrel, unless we have enough
  7. If we're not at the end of any doclist, go to step 4
  8. Sort the documents by rank; return the top K
     • (may jump here after 40k pages)
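A much-simplified sketch of steps 3-8: intersect the short-barrel doclists first and fall back to the full barrels only if too few documents match. The rank() function and the barrel dictionaries are placeholders, not Google's structures.

    def search(query_word_ids, short_barrels, full_barrels, rank, k=10, enough=40000):
        """short_barrels/full_barrels: dicts wordID -> {docID: hit_list}. Returns top-k docIDs."""
        def matches(barrels):
            doc_sets = [set(barrels.get(w, {})) for w in query_word_ids]
            return set.intersection(*doc_sets) if doc_sets else set()

        docs = matches(short_barrels)
        if len(docs) < enough:                          # step 6: move on to the full barrels
            docs |= matches(full_barrels)
        scored = sorted(docs, key=rank, reverse=True)   # step 8: sort by rank
        return scored[:k]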
The Ranking System
• The information
– Position, Font Size, Capitalization
– Anchor Text
– PageRank
• Hit types
  – title, anchor, URL, etc.
  – small font, large font, etc.
The Ranking System (2)
• Each hit type has its own weight
  – count-weights increase linearly with counts at first but quickly taper off; this is the IR score of the doc
  – (IDF weighting??)
• the IR score is combined with PageRank to give the final rank (a toy sketch follows this list)
• for a multi-word query
  – a proximity score for every set of hits, with a proximity-type weight
    • 10 grades of proximity
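A toy version of how the pieces could combine: per-type count-weights that taper off, summed into an IR score and mixed with PageRank. All weights, the log taper, and the linear combination are made up for illustration; the paper does not publish its formula.

    import math

    TYPE_WEIGHTS = {"title": 8.0, "anchor": 6.0, "url": 4.0, "large_font": 2.0, "plain": 1.0}   # illustrative

    def ir_score(hit_counts):
        """hit_counts: hit type -> number of hits. log1p makes the contribution taper off with count."""
        return sum(TYPE_WEIGHTS.get(t, 1.0) * math.log1p(c) for t, c in hit_counts.items())

    def final_rank(hit_counts, pagerank, alpha=0.5):
        """Combine the IR score with PageRank (hypothetical mixing function)."""
        return alpha * ir_score(hit_counts) + (1 - alpha) * pagerank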
Feedback
• A trusted user may optionally evaluate the results
• The feedback is saved
• When modifying the ranking function, we can see the impact of this change on all previous searches that were ranked
Results
• Produces better results than major commercial search engines for most searches
• Example: query “bill clinton”
– returns results from whitehouse.gov
– email addresses of the president
– all the results are high-quality pages
– no broken links
– no "bill" without "clinton" & no "clinton" without "bill"
Storage Requirements
• using compression on the repository
• about 55 GB for all the data used by the search engine
• most queries can be answered using just the short inverted index
• with better compression, a high-quality search engine can fit on a 7 GB drive of a new PC
Storage Statistics
  Total Size of Fetched Pages: 147.8 GB
  Compressed Repository: 53.5 GB
  Short Inverted Index: 4.1 GB
  Temporary Anchor Data: 6.6 GB
  Document Index (incl. variable-width data): 9.7 GB
  Links Database: 3.9 GB
  Total Without Repository: 55.2 GB
Web Page Statistics
  Number of Web Pages Fetched: 24 million
  Number of URLs Seen: 76.5 million
  Number of Email Addresses: 1.7 million
  Number of 404's: 1.6 million
System Performance
• it took 9 days to download 26 million pages
• 48.5 pages per second
• the Indexer & Crawler ran simultaneously
• the Indexer runs at 54 pages per second
• the sorters run in parallel using 4 machines; the whole process took 24 hours
Computing Page Rank
Practicality
• Challenges
– M no longer sparse (don’t represent explicitly!)
– Data too big for memory (be sneaky about disk usage)
• Stanford version of Google:
  – 24 million documents in crawl
  – 147 GB of documents
  – 259 million links
  – computing PageRank took "a few hours" on a single 1997 workstation
• But how?
  – next: discussion from the Haveliwala paper…
Efficient Computation:
Preprocess
• Remove 'dangling' nodes (sketched below)
  – pages with no children
• Then repeat the process
  – since pruning creates new danglers
• Stanford WebBase
– 25 M pages
– 81 M URLs in the link graph
– After two prune iterations: 19 M nodes
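A minimal sketch of the repeated pruning described above; the graph is a dict from node to its set of out-links. This version runs to a fixed point, whereas the WebBase numbers above reflect just two prune passes.

    def prune_dangling(graph):
        """graph: node -> set of successor nodes. Repeatedly drop nodes with no children,
        since removing them can turn other nodes into danglers."""
        changed = True
        while changed:
            dangling = {n for n, succs in graph.items() if not succs}
            changed = bool(dangling)
            graph = {n: succs - dangling for n, succs in graph.items() if n not in dangling}
        return graph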
Representing ‘Links’ Table
• Stored on disk in binary format:
  Source node (32-bit int) | Outdegree (16-bit int) | Destination nodes (32-bit ints)
  0 | 4 | 12, 26, 58, 94
  1 | 3 | 5, 56, 69
  2 | 5 | 1, 9, 10, 36, 78
• Size for Stanford WebBase: 1.01 GB
  – assumed to exceed main memory
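Reading one record of this binary format is straightforward with Python's struct module; the field order follows the table above, while the little-endian encoding is an assumption.

    import struct

    def read_link_record(f):
        """Read one (source, outdegree, destinations) record from the binary Links file; None at EOF."""
        header = f.read(6)                                   # 32-bit source + 16-bit outdegree
        if len(header) < 6:
            return None
        source, outdegree = struct.unpack("<IH", header)
        dests = struct.unpack(f"<{outdegree}I", f.read(4 * outdegree))
        return source, outdegree, list(dests)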
Algorithm 1
[Figure: Dest = Links × Source — the sparse Links matrix (source node × dest node) propagates rank from the Source vector to the Dest vector]
∀s: Source[s] = 1/N
while residual > ε {
    ∀d: Dest[d] = 0
    while not Links.eof() {
        Links.read(source, n, dest1, … destn)
        for j = 1 … n
            Dest[destj] = Dest[destj] + Source[source]/n
    }
    ∀d: Dest[d] = c * Dest[d] + (1-c)/N    /* dampening */
    residual = ||Source - Dest||            /* recompute every few iterations */
    Source = Dest
}
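The same algorithm as runnable Python, a sketch rather than Haveliwala's code. read_links() stands in for re-reading the Links file each pass and yields (source, outdegree, dest_list) records; c, epsilon, and N are parameters.

    def pagerank(read_links, N, c=0.85, epsilon=1e-6, max_iter=100):
        """Algorithm 1: Source/Dest are dense arrays of size N, streamed against the link records."""
        source = [1.0 / N] * N
        for _ in range(max_iter):
            dest = [0.0] * N
            for src, n, dests in read_links():
                share = source[src] / n
                for d in dests:
                    dest[d] += share
            dest = [c * x + (1 - c) / N for x in dest]          # dampening
            residual = sum(abs(s - d) for s, d in zip(source, dest))
            source = dest
            if residual <= epsilon:
                break
        return source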
Analysis of Algorithm 1
• If memory is big enough to hold Source & Dest
– IO cost per iteration is |Links|
– Fine for a crawl of 24 M pages
– But web ~ 800 M pages in 2/99 [NEC study]
– Increase from 320 M pages in 1997 [same authors]
• If memory is big enough to hold just Dest
– Sort Links on source field
– Read Source sequentially during rank propagation step
– Write Dest to disk to serve as Source for next iteration
– IO cost per iteration is |Source| + |Dest| + |Links|
• If memory can’t hold Dest
– Random access pattern will make the working set = |Dest|
– Thrash!!!
Block-Based Algorithm
• Partition Dest into B blocks of D pages each
– If memory = P physical pages
– D < P-2 since need input buffers for Source & Links
• Partition Links into B files
– Links_i only has some of the dest nodes for each source
– Links_i only has dest nodes such that
  • DD*i <= dest < DD*(i+1)
  • where DD = number of 32-bit integers that fit in D pages
[Figure: the same Dest = Links × Source picture, with Dest and the Links file partitioned into blocks]
Partitioned Link File
Each record: Source node (32-bit int) | Outdegree (16-bit) | Num out in this partition (16-bit) | Destination nodes (32-bit ints)
Links_0 (buckets 0-31):
  0 | 4 | 2 | 12, 26
  1 | 3 | 1 | 5
  2 | 5 | 3 | 1, 9, 10
Links_1 (buckets 32-63):
  0 | 4 | 1 | 58
  1 | 3 | 1 | 56
  2 | 5 | 1 | 36
Links_2 (buckets 64-95):
  0 | 4 | 1 | 94
  1 | 3 | 1 | 69
  2 | 5 | 1 | 78
Block-based Page Rank algorithm
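The slide's figure is not reproduced here; the following is a hedged sketch of the block-based variant. Dest is computed one block at a time, and the pass over block i reads only the partitioned file Links_i plus the Source vector (kept in memory here for brevity; the paper streams it from disk).

    def block_pagerank_pass(read_links_block, source, N, num_blocks, block_size, c=0.85):
        """One power-iteration pass. read_links_block(i) yields (source_node, outdegree,
        dests_in_block_i) records; only one Dest block is resident at a time."""
        dest = []
        for i in range(num_blocks):
            lo = i * block_size
            block = [0.0] * min(block_size, N - lo)
            for src, n, dests in read_links_block(i):
                share = source[src] / n          # n is the full outdegree, not the per-block count
                for d in dests:
                    block[d - lo] += share
            dest.extend(c * x + (1 - c) / N for x in block)   # dampen, then write the block out
        return dest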
Analysis of Block Algorithm
• IO cost per iteration =
  – B·|Source| + |Dest| + |Links|·(1+e)
  – e is the factor by which Links increases in size
    • typically 0.1-0.3
• Depends on number of blocks
• Algorithm ~ nested-loops join
Comparing the Algorithms
PageRank Convergence…
PageRank Convergence…
Summary of Key Points
• PageRank Iterative Algorithm
• Rank Sinks
• Efficiency of computation – Memory!
– Single-precision numbers.
– Don’t represent M* explicitly.
– Break arrays into Blocks.
– Minimize IO Cost.
• Number of iterations of PageRank.
• Weighting of PageRank vs. doc
similarity.
Shopping at job fairs
Push my resume
[But] jobs aren't what I seek
I will be your
walking student advertisement
Can't live on my research stipend
Everybody wants a Google shirt
HP, Amazon
Pixar, Cray, and Ford
I just can't decide
Help me score the most
free pens and free umbrellas
or a coffee mug from Bell Labs
Everybody wants a Google..
[Un]til I find a steady funder
I'll make do with cheap-a## plunder
Everybody wants a Google..
Wait! You will never never never need it
It's free; I couldn't leave it
Everybody wants a Google shirt
Shameless corp'rate carrion crows
Turn your backs and show your logos
Everybody wants a Google shirt
("Everybody Wants a Google Shirt" is based on
"Everybody Wants to Rule the World"
by Tears for Fears.
Alternate lyrics by Andy Collins, Kate Deibel,
Neil Spring, Steve Wolfman, and Ken Yasuhara.)
Discussion
• What parts of Google did you find to be in line with what you have learned so far?
• What parts of Google were different?
Beyond Google (and Pagerank)
• Are backlinks a reliable metric of importance?
  – It is a "one-size-fits-all" measure of importance…
    • not user specific
    • not topic specific
  – There may be a discrepancy between backlinks and actual popularity (as measured in hits)
    » the "sense" of the link is ignored (this is okay if you think that all publicity is good publicity)
• Mark Twain on classics
  – "A classic is something everyone wishes they had already read and no one actually had…" (paraphrase)
• Google may be its own undoing… (why would I need backlinks when I know I can get to it through Google?)
• Customization, customization, customization…
– Yahoo sez about their magic bullet.. (NYT 2/22/04)
– "If you type in flowers, do you want to buy flowers, plant flowers or see
pictures of flowers?"
The rest of the slides on Google as well as crawling were not specifically discussed one at a time, but have been discussed in essence (read: "you are still responsible for them").
Robot (4)
2. How to extract URLs from a web page?
Need to identify all possible tags and attributes that hold URLs (a minimal extraction sketch follows this list).
• Anchor tag: <a href="URL" …> … </a>
• Option tag: <option value="URL" …> … </option>
• Map: <area href="URL" …>
• Frame: <frame src="URL" …>
• Link to an image: <img src="URL" …>
• Relative path vs. absolute path: <base href=…>
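A minimal extraction sketch using Python's standard html.parser, covering the tags and attributes listed above and resolving relative paths against <base href>. Real crawlers handle many more cases; this is illustrative only.

    from html.parser import HTMLParser
    from urllib.parse import urljoin

    URL_ATTRS = {"a": "href", "area": "href", "frame": "src", "img": "src", "option": "value"}

    class URLExtractor(HTMLParser):
        def __init__(self, page_url):
            super().__init__()
            self.base, self.urls = page_url, []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "base" and attrs.get("href"):              # <base href=...> changes the resolution root
                self.base = attrs["href"]
            attr = URL_ATTRS.get(tag)
            if attr and attrs.get(attr):
                self.urls.append(urljoin(self.base, attrs[attr]))   # relative -> absolute

    # extractor = URLExtractor("http://example.com/a/"); extractor.feed(html_text); print(extractor.urls)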
Focused Crawling
• Classifier: Is crawled page P relevant to the topic?
  – an algorithm that maps a page to relevant/irrelevant
    • semi-automatic
    • based on page vicinity
• Distiller: Is crawled page P likely to lead to relevant pages?
  – an algorithm that maps a page to likely/unlikely
    • could be just A/H (authority/hub) computation, taking the HUBS
  – the distiller determines the priority of following links off of P (a frontier sketch follows)
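A small sketch of how classifier and distiller scores could drive the crawl frontier: the distiller's score for P sets the priority of the URLs extracted from P. The fetch, extract_urls, classify, and distill callables are placeholders for the actual components.

    import heapq

    def focused_crawl(seed_urls, fetch, extract_urls, classify, distill, budget=1000):
        """classify(page) -> True if relevant; distill(page) -> score in [0,1] (likely to lead to relevant pages)."""
        frontier = [(-1.0, url) for url in seed_urls]        # max-heap via negated priority
        heapq.heapify(frontier)
        seen, relevant = set(seed_urls), []
        while frontier and budget > 0:
            _, url = heapq.heappop(frontier)
            page = fetch(url)
            budget -= 1
            if classify(page):
                relevant.append(url)
            priority = distill(page)                         # distiller sets priority of out-links
            for out in extract_urls(page):
                if out not in seen:
                    seen.add(out)
                    heapq.heappush(frontier, (-priority, out))
        return relevant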