Title: Information Retrieval and Data Mining (AT71.07) Comp. Sc. and Inf. Mgmt. Asian Institute of Technology
1 Information Retrieval and Data Mining (AT71.07)
Comp. Sc. and Inf. Mgmt., Asian Institute of Technology
- Instructor: Dr. Sumanta Guha
- Slide sources: Introduction to Information Retrieval book slides from Stanford University, adapted and supplemented
- Chapter 4: Index construction
2 CS276: Information Retrieval and Web Search
- Christopher Manning and Prabhakar Raghavan
- Lecture 4: Index construction
3Index construction
Ch. 4
- How do we construct an index?
- What strategies can we use with limited main
memory?
4Hardware basics
Sec. 4.1
- Many design decisions in information retrieval are based on the characteristics of hardware.
- We begin by reviewing hardware basics.
5Hardware basics
Sec. 4.1
- Access to data in memory is much faster than access to data on disk.
- Disk seeks: no data is transferred from disk while the disk head is being positioned.
- Therefore: transferring one large chunk of data from disk to memory is faster than transferring many small chunks.
- Disk I/O is block-based: reading and writing of entire blocks (as opposed to smaller chunks).
- Block sizes: 8 KB to 256 KB.
6Hardware basics
Sec. 4.1
- Servers used in IR systems now typically have several GB of main memory, sometimes tens of GB.
- Available disk space is several (2-3) orders of magnitude larger.
- Fault tolerance is very expensive: it's much cheaper to use many regular machines rather than one fault-tolerant machine.
7Hardware assumptions
Sec. 4.1
- symbol | statistic | value
- s | average seek time | 5 ms = 5 x 10^-3 s
- b | transfer time per byte (from disk) | 0.02 µs = 2 x 10^-8 s
-   | processor's clock rate | 1 ns = 10^-9 s
-   | transfer time per byte in main memory | 5 ns = 5 x 10^-9 s
- p | low-level operation (e.g., compare or swap a word) | 10 ns = 10^-8 s
-   | size of main memory | several GB
-   | size of disk space | 1 TB or more
8 RCV1: Our collection for this lecture
Sec. 4.2
- Shakespeare's collected works definitely aren't large enough for demonstrating many of the points in this course.
- The collection we'll use isn't really large enough either, but it's publicly available and is at least a more plausible example.
- As an example for applying scalable index construction algorithms, we will use the Reuters RCV1 collection.
- This is one year of Reuters newswire (part of 1995 and 1996).
9A Reuters RCV1 document
Sec. 4.2
10Reuters RCV1 statistics
Sec. 4.2
- symbol | statistic | value
- N | documents | 800,000
- L | avg. tokens per doc | 200
- M | terms (= word types) | 400,000
-   | avg. bytes per token (incl. spaces/punct.) | 6
-   | avg. bytes per token (without spaces/punct.) | 4.5
-   | avg. bytes per term | 7.5
- T | non-positional postings | 100,000,000
Why 4.5 bytes per word token vs. 7.5 bytes per term? Tokens are dominated by frequent, short words, whereas all identical tokens count as only one term, so longer, rarer words carry more weight in the per-term average.
11Recall IIR Ch. 1 index construction
Sec. 4.2
- Documents are parsed to extract words and these
are saved with the Document ID.
Doc 1: I did enact Julius Caesar I was killed i' the Capitol; Brutus killed me.
Doc 2: So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious.
12 Key step
Sec. 4.2
- After all documents have been parsed, the
inverted file is sorted by terms.
We focus on this sort step. We have 100M items to
sort.
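To make the recalled Ch. 1 procedure concrete, here is a minimal Python sketch of the in-memory build (the toy docs dict and all function and variable names are illustrative, not from the slides): parse each document into (term, docID) pairs, sort the pairs, and group them into a dictionary with postings lists.

    # A toy in-memory index build in the spirit of IIR Ch. 1 (illustrative only).
    docs = {
        1: "I did enact Julius Caesar I was killed i' the Capitol Brutus killed me",
        2: "So let it be with Caesar The noble Brutus hath told you Caesar was ambitious",
    }

    # Parse: emit one (term, docID) pair per token (no real tokenizer/normalizer here).
    pairs = [(term.lower(), doc_id) for doc_id, text in docs.items()
             for term in text.split()]

    # Key step: sort by term, then by docID.
    pairs.sort()

    # Group into dictionary + postings lists, dropping duplicate docIDs per term.
    index = {}
    for term, doc_id in pairs:
        postings = index.setdefault(term, [])
        if not postings or postings[-1] != doc_id:
            postings.append(doc_id)

    print(index["caesar"])   # [1, 2]
    print(index["killed"])   # [1]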
13Scaling index construction
Sec. 4.2
- In-memory index construction does not scale.
- How can we construct an index for very large collections?
- Taking into account the hardware constraints we just learned about . . .
- Memory, disk, speed, etc.
14Sort-based index construction
Sec. 4.2
- As we build the index, we parse docs one at a time.
- While building the index, we cannot easily exploit compression tricks (you can, but it is much more complex).
- The final postings for any term are incomplete until the end.
- At 12 bytes per non-positional postings entry (termID 4 bytes + docID 4 bytes + freq 4 bytes), this demands a lot of space for large collections.
- Total: T = 100,000,000 entries in the case of RCV1.
- So we can do this in memory in 2009, but typical collections are much larger. E.g., the New York Times provides an index of >150 years of newswire.
- Thus: we need to store intermediate results on disk.
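As a rough check of the space demand: T = 100,000,000 entries x 12 bytes per entry = 1.2 GB, which still fits in the several GB of main memory assumed on slide 7; a much larger collection quickly exceeds it.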
15Use the same algorithm for disk?
Sec. 4.2
- Can we use the same index construction algorithm for larger collections, but by using disk instead of memory?
- No: sorting T = 100,000,000 records on disk is too slow (too many disk seeks).
- We need an external sorting algorithm.
16Bottleneck
Sec. 4.2
- Parse and build postings entries one doc at a time.
- Now sort postings entries by term (then by doc within each term).
- Doing this with random disk seeks would be too slow: we must sort T = 100M records.
If every comparison took 2 disk seeks, and N items could be sorted with N log2 N comparisons, how long would this take?
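A rough back-of-the-envelope answer, using the hardware assumptions from slide 7: with N = 10^8, log2 N is about 27, so we need about N log2 N = 2.7 x 10^9 comparisons; at 2 disk seeks per comparison and 5 ms per seek, that is about 2.7 x 10^9 x 2 x 5 x 10^-3 s = 2.7 x 10^7 s, i.e. on the order of 300 days. Seek-bound sorting is hopeless, hence the external sorting approach that follows.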
17 BSBI: Blocked sort-based indexing (sorting with fewer disk seeks)
Sec. 4.2
- 12-byte (4+4+4) records (termID, docID, freq).
- These are generated as we parse docs.
- Must now sort 100M such 12-byte records by term.
- Define a block: 10M such records.
- Can fit comfortably into memory for in-place sorting (e.g., quicksort).
- Will have 10 such blocks to start with.
- Basic idea of algorithm (a sketch follows below):
- Accumulate postings for each block, sort, write to disk.
- Then merge the blocks into one long sorted order.
Total: 100M records.
The term -> termID mapping (= dictionary) must already be available, built from a first pass.
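A minimal Python sketch of this basic idea, under simplifying assumptions (each block is an in-memory list of records, pickle files stand in for the on-disk runs, and all function names are illustrative):

    import os, pickle, tempfile, heapq

    def bsbi_sort_blocks(record_blocks):
        """Sort each block of (termID, docID, freq) records in memory and write it
        to disk as one sorted run; return the list of run file paths."""
        run_files = []
        for block in record_blocks:
            records = sorted(block)                    # fits in memory by assumption
            fd, path = tempfile.mkstemp(suffix=".run")
            with os.fdopen(fd, "wb") as f:
                pickle.dump(records, f)                # write the sorted run to disk
            run_files.append(path)
        return run_files

    def bsbi_merge(run_files):
        """Merge the sorted runs into one globally sorted stream of records."""
        runs = []
        for path in run_files:
            with open(path, "rb") as f:
                runs.append(pickle.load(f))            # a real system streams from disk
        yield from heapq.merge(*runs)                  # merge by (termID, docID, freq)

    # Toy usage: two 'blocks' of records, termIDs assigned by a first pass.
    blocks = [[(2, 1, 3), (1, 1, 2), (1, 2, 1)], [(1, 4, 4), (3, 5, 2)]]
    print(list(bsbi_merge(bsbi_sort_blocks(blocks))))

A real implementation would stream records from and to disk rather than loading whole runs; the buffered merge is spelled out after slide 21.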
18 Postings lists to be merged (on disk)
Block 1:
  brutus: d1,3  d3,2
  caesar: d1,2  d2,1  d4,4
  noble:  d5,2
  with:   d1,2  d3,1  d5,2
Block 2:
  brutus: d6,1  d8,3
  caesar: d6,4
  julius: d10,1
  killed: d6,4  d7,3
Merged postings lists:
  brutus: d1,3  d3,2  d6,1  d8,3
  caesar: d1,2  d2,1  d4,4  d6,4
  julius: d10,1
  killed: d6,4  d7,3
  noble:  d5,2
  with:   d1,2  d3,1  d5,2
19Sorting 10 blocks of 10M records
Sec. 4.2
- First, read each block, sort it in main memory, write it back to disk.
- Quicksort takes 2N ln N expected steps.
- In our case 2 x (10M ln 10M) steps.
- Exercise: estimate the total time to read each block from disk and quicksort it (a possible estimate follows below).
- 10 times this estimate gives us 10 sorted runs of 10M records each on disk. Now we need to merge them all!
- Done straightforwardly, the merge needs 2 copies of the data on disk (one for the lists to be merged, one for the merged output).
- But we can optimize this.
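One possible estimate for the exercise, using the hardware assumptions from slide 7: reading one block of 10M x 12 bytes = 120 MB takes about 1.2 x 10^8 bytes x 2 x 10^-8 s/byte = 2.4 s of transfer time (plus a seek), and quicksorting it takes about 2 x 10^7 x ln(10^7) = 3.2 x 10^8 low-level operations x 10^-8 s per operation = 3.2 s, so a few seconds per block and on the order of a minute for all 10 blocks.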
20Sec. 4.2
21 How to merge the sorted runs? (Source: Wikipedia)
Sec. 4.2
- External mergesort
- One pass:
- One example of external sorting is the external mergesort algorithm. For example, to sort 900 megabytes of data using only 100 megabytes of RAM:
1. Read 100 MB of the data into main memory and sort it by some conventional method, like quicksort.
2. Write the sorted data to disk.
3. Repeat steps 1 and 2 until all of the data is in sorted 100 MB chunks, which now need to be merged into one single output file.
4. Read the first 10 MB of each sorted chunk into input buffers in main memory and allocate the remaining 10 MB for an output buffer. (In practice, it might provide better performance to make the output buffer larger and the input buffers slightly smaller.)
5. Perform a 9-way merge and store the result in the output buffer. Whenever the output buffer fills, write it to the final sorted file. Whenever any of the 9 input buffers empties, fill it with the next 10 MB of its associated 100 MB sorted chunk, until no more data from the chunk is available.
For the 9-way merge, use a 9-element priority queue: repeatedly remove its smallest element (appending it to the output buffer) and replace it with the next element from the input buffer that the smallest element came from.
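A minimal Python sketch of this buffered multi-way merge (the tiny buffer size and list-based buffers are illustrative; heapq plays the role of the 9-element priority queue): each sorted chunk feeds an input buffer that is refilled as it empties, and the smallest buffered element is repeatedly moved to the output.

    import heapq

    def _take(it, n):
        """Yield up to n items from iterator it (stands in for a 10 MB disk read)."""
        for _ in range(n):
            try:
                yield next(it)
            except StopIteration:
                return

    def kway_merge(chunks, buffer_size=3):
        """Merge sorted 'chunks' (iterators over sorted items) using small input
        buffers and a priority queue holding one candidate per chunk."""
        buffers = [list(_take(c, buffer_size)) for c in chunks]   # initial buffer fill
        heap = [(buf[0], i) for i, buf in enumerate(buffers) if buf]
        heapq.heapify(heap)
        output = []                                  # stands in for the output buffer/file
        while heap:
            item, i = heapq.heappop(heap)            # smallest element over all buffers
            output.append(item)
            buffers[i].pop(0)
            if not buffers[i]:                       # buffer empty: refill from its chunk
                buffers[i] = list(_take(chunks[i], buffer_size))
            if buffers[i]:                           # re-insert the new head of buffer i
                heapq.heappush(heap, (buffers[i][0], i))
        return output

    runs = [iter([1, 4, 7, 10]), iter([2, 5, 8]), iter([3, 6, 9, 11, 12])]
    print(kway_merge(runs))   # [1, 2, 3, ..., 12]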
22 How to merge the sorted runs? (Source: Wikipedia)
Sec. 4.2
- External mergesort
- Multiple passes:
- The previous example shows a one-pass sort.
- For sorting, say, 50 GB in 100 MB of RAM, a one-pass sort wouldn't be efficient: the disk seeks required to fill the input buffers with data from each chunk would take up most of the sort time.
- Multi-pass sorting solves the problem. For example, to avoid doing a 500-way merge for the preceding example, a program could:
- Run a first pass merging 25 chunks at a time, resulting in 500/25 = 20 larger sorted chunks.
- Run a second pass to merge the 20 larger sorted chunks.
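In general, if available memory supports a k-way merge, merging n sorted chunks takes about ceil(log_k n) passes over the data; for the example above, ceil(log_25 500) = 2 passes (500 chunks -> 20 chunks -> 1 file).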
23Remaining problem with sort-based algorithm
Sec. 4.3
- Our assumption was: we can keep the dictionary in memory.
- We need the dictionary (which grows dynamically) in order to implement a term to termID mapping.
- Actually, we could work with (term, docID) postings instead of (termID, docID) postings . . .
- . . . but then intermediate files become very large. (We would end up with a scalable, but very slow index construction method.)
24 SPIMI: Single-pass in-memory indexing
Sec. 4.3
- Key idea 1: Generate separate dictionaries for each block; no need to maintain a term-to-termID mapping across blocks.
- In other words, sub-dictionaries are generated on the fly.
- Key idea 2: Don't sort. Accumulate postings in postings lists as they occur.
- With these two ideas we can generate a complete inverted index for each block.
- These separate indexes can then be merged into one big index.
25SPIMI-Invert
Sec. 4.3
Dictionary terms are generated on the fly!
- Merging of blocks is analogous to BSBI.
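A minimal Python sketch of the SPIMI-Invert idea for a single block (the (term, docID) token stream and the dict-based postings are illustrative simplifications of the algorithm in IIR Figure 4.4): terms are added to the block's dictionary as they are first seen, postings are accumulated without sorting, and terms are sorted only when the block is written out.

    def spimi_invert(token_stream):
        """One SPIMI block: token_stream yields (term, docID) pairs.
        Returns a block index as {term: [docID, ...]}; no global termID mapping needed."""
        dictionary = {}                                 # per-block dictionary, built on the fly
        for term, doc_id in token_stream:
            postings = dictionary.setdefault(term, [])  # add term on first occurrence
            if not postings or postings[-1] != doc_id:  # append docID (input grouped by doc)
                postings.append(doc_id)
        # Sort terms only when writing the block; postings are already in docID order.
        return dict(sorted(dictionary.items()))

    block = spimi_invert([("caesar", 1), ("brutus", 1), ("caesar", 2), ("noble", 2)])
    print(block)   # {'brutus': [1], 'caesar': [1, 2], 'noble': [2]}

In real SPIMI the block is written to disk whenever memory fills and a new block is started; this sketch processes one block only.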
26BSBI vs. SPIMI
[Diagram: BSBI. Pass 1: blocks 1-5 are brought into main memory one at a time, sorted against a single dictionary, and written back to disk as sorted runs. Pass 2 (merge phase): the sorted runs on disk are merged into one inverted index.]
27BSBI vs. SPIMI
[Diagram: SPIMI. In a single pass, each block is inverted in main memory with its own sub-dictionary and written to disk; the merge phase then combines the per-block indexes (and their sub-dictionaries) into one inverted index.]
28 SPIMI: Compression (from IIR Ch. 5)
Sec. 4.3
- Compression makes SPIMI even more efficient.
- Compression of terms
- Compression of postings
- Instead of storing successive docIDs, store successive gaps (offsets), e.g., instead of <1001, 1010, 1052, ...> store <1001, 9, 42, ...>. This gives rise to smaller numbers if the term occurs in many docs (a small sketch follows below).
- Store the gap values as a variable-size prefix code so that they can be stored one after another in a bit array, without having to reserve a fixed bit length (e.g., 32) for each. Examples of such codes include the Elias gamma and delta codes.
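A small Python sketch of the gap idea (function names are illustrative): store the first docID and then the differences between successive docIDs, which stay small for terms that occur in many documents and can then be fed to a variable-length code such as Elias gamma (next slides).

    def to_gaps(doc_ids):
        """[1001, 1010, 1052] -> [1001, 9, 42]"""
        return [doc_ids[0]] + [b - a for a, b in zip(doc_ids, doc_ids[1:])]

    def from_gaps(gaps):
        """[1001, 9, 42] -> [1001, 1010, 1052]"""
        ids = [gaps[0]]
        for g in gaps[1:]:
            ids.append(ids[-1] + g)
        return ids

    print(to_gaps([1001, 1010, 1052]))   # [1001, 9, 42]
    print(from_gaps([1001, 9, 42]))      # [1001, 1010, 1052]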
29Elias gamma coding
- The Elias gamma code is a prefix code for positive integers developed by Peter Elias.
- To code a number:
1. Write it in binary.
2. Subtract 1 from the number of bits written in step 1 and prepend that many zeros.
- An equivalent way to express the same process:
1. Separate the integer into the highest power of 2 it contains (2^N) and the remaining N binary digits of the integer.
2. Encode N in unary, that is, as N zeroes followed by a one.
3. Append the remaining N binary digits to this representation of N.
- Examples:
1 -> 1, 2 -> 010, 3 -> 011, 4 -> 00100, 5 -> 00101, 6 -> ?, 7 -> ?, 8 -> ?, 27 -> ?, 33 -> ?
- The sequence 1 2 3 4 5 -> 10100110010000101; decode it?
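A small Python sketch of Elias gamma coding as described above (bit strings are used in place of a packed bit array, purely for readability):

    def gamma_encode(n):
        """Elias gamma code of a positive integer, as a bit string."""
        binary = bin(n)[2:]                        # step 1: write n in binary
        return "0" * (len(binary) - 1) + binary    # step 2: prepend (length - 1) zeros

    def gamma_decode(bits):
        """Decode a concatenation of gamma codes back into a list of integers."""
        values, i = [], 0
        while i < len(bits):
            zeros = 0
            while bits[i] == "0":                  # unary part: count leading zeros
                zeros += 1
                i += 1
            values.append(int(bits[i:i + zeros + 1], 2))   # next zeros+1 binary digits
            i += zeros + 1
        return values

    print([gamma_encode(n) for n in (1, 2, 3, 4, 5)])
    # ['1', '010', '011', '00100', '00101']
    print(gamma_decode("10100110010000101"))   # [1, 2, 3, 4, 5]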
30Elias delta coding
- The Elias delta code is a prefix code for positive integers developed by Peter Elias.
- To code a number:
1. Separate the integer into the highest power of 2 it contains (2^N') and the remaining N' binary digits of the integer.
2. Encode N = N' + 1 with Elias gamma coding.
3. Append the remaining N' binary digits to this representation of N.
- Examples:
1 -> 1, 2 -> 0100, 3 -> 0101, 4 -> 01100, 5 -> 01101, 6 -> 01110, 7 -> ?, 8 -> ?, 27 -> ?, 33 -> ?
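Building on the gamma_encode sketch above (which must be in scope), a corresponding sketch for Elias delta:

    def delta_encode(n):
        """Elias delta code of a positive integer, as a bit string."""
        binary = bin(n)[2:]
        n_prime = len(binary) - 1                        # n = 2^N' plus N' remaining bits
        return gamma_encode(n_prime + 1) + binary[1:]    # gamma(N' + 1), then remaining bits

    print([delta_encode(n) for n in (1, 2, 3, 4, 5, 6)])
    # ['1', '0100', '0101', '01100', '01101', '01110']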
31Distributed indexing
Sec. 4.4
- For web-scale indexing (don't try this at home!)
- must use a distributed computing cluster
- Individual machines are fault-prone
- Can unpredictably slow down or fail
- How do we exploit such a pool of machines?
32Google data centers
Sec. 4.4
- Google data centers mainly contain commodity machines.
- Data centers are distributed around the world.
- Estimate: a total of 1 million servers, 3 million processors/cores (Gartner 2007).
- Estimate: Google installs 100,000 servers each quarter.
- Based on expenditures of 200-250 million dollars per year.
- This would be 10% of the computing capacity of the world!?
33Google data centers
Sec. 4.4
- If in a non-fault-tolerant system with 1000 nodes, each node has 99.9% uptime, what is the uptime of the system (i.e., the fraction of time during which all nodes are up)?
- Answer: (99.9%)^1000 = 0.999^1000, which is roughly 37%; equivalently, about 63% of the time at least one node is down.
34Distributed indexing
Sec. 4.4
- Maintain a master machine directing the indexing job; it is considered "safe".
- Break up indexing into sets of (parallel) tasks.
- Master machine assigns each task to an idle machine from a pool.
35Parallel tasks
Sec. 4.4
- We will use two sets of parallel tasks
- Parsers
- Inverters
- Break the input document collection into splits
- Each split is a subset of documents
(corresponding to blocks in BSBI/SPIMI)
36Parsers
Sec. 4.4
- Master assigns a split to an idle parser machine.
- Parser reads a document at a time and emits (term, doc) pairs.
- Parser writes pairs into j partitions.
- Each partition is for a range of terms' first letters (e.g., a-f, g-p, q-z); here j = 3.
- Now to complete the index inversion.
37Inverters
Sec. 4.4
- An inverter collects all (term, doc) pairs (= postings) for one term-partition.
- Sorts them and writes to postings lists.
38Data flow
Sec. 4.4
[Diagram: data flow. The master assigns splits to parsers and term partitions to inverters. Map phase: each parser reads its split and writes (term, doc) pairs into segment files partitioned by term range (a-f, g-p, q-z). Reduce phase: each inverter reads one term partition (a-f, g-p, or q-z) from all segment files and writes its postings.]
39MapReduce
Sec. 4.4
- The index construction algorithm we just described is an instance of MapReduce (a small sketch follows below).
- MapReduce (Dean and Ghemawat 2004) is a robust and conceptually simple framework for distributed computing, in which you do not have to write code for the distribution part.
- They describe the Google indexing system (ca. 2002) as consisting of a number of phases, each implemented in MapReduce.
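A minimal single-machine Python sketch of this map/reduce structure (the toy splits, the three term partitions a-f / g-p / q-z, and all function names are illustrative, not Google's or any MapReduce library's API): parsers map documents to (term, docID) pairs and scatter them into per-partition segment lists; inverters then reduce each partition by sorting it and grouping it into postings lists.

    from collections import defaultdict

    PARTITIONS = ["af", "gp", "qz"]                 # term ranges a-f, g-p, q-z (j = 3)

    def partition_of(term):
        c = term[0]
        return "af" if c <= "f" else ("gp" if c <= "p" else "qz")

    def parse(split):
        """Map phase: one parser handles one split and emits (term, docID) pairs
        into per-partition segment files (here: in-memory lists)."""
        segments = {p: [] for p in PARTITIONS}
        for doc_id, text in split:
            for term in text.lower().split():
                segments[partition_of(term)].append((term, doc_id))
        return segments

    def invert(pairs):
        """Reduce phase: one inverter sorts the pairs of one term partition
        and groups them into postings lists."""
        postings = defaultdict(list)
        for term, doc_id in sorted(pairs):
            if not postings[term] or postings[term][-1] != doc_id:
                postings[term].append(doc_id)
        return dict(postings)

    splits = [[(1, "Caesar was killed")], [(2, "noble Brutus killed Caesar")]]
    segment_files = [parse(s) for s in splits]         # run the parsers (map phase)
    index = {p: invert(sum((seg[p] for seg in segment_files), []))
             for p in PARTITIONS}                      # run the inverters (reduce phase)
    print(index["af"])   # {'brutus': [2], 'caesar': [1, 2]}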
40Dynamic indexing
Sec. 4.5
- Up to now, we have assumed that collections are static.
- They rarely are:
- Documents come in over time and need to be inserted.
- Documents are deleted and modified.
- This means that the dictionary and postings lists have to be modified:
- Postings updates for terms already in the dictionary
- New terms added to the dictionary
41Simplest approach
Sec. 4.5
- Maintain a big main index.
- New docs go into a small auxiliary index.
- Search across both, merge results (a small sketch follows below).
- Deletions:
- Invalidation bit-vector for deleted docs
- Filter the docs returned by a search using this invalidation bit-vector
- Periodically, re-index into one main index.
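A small Python sketch of this simplest approach (the in-memory dicts and the set-based invalidation structure are illustrative stand-ins for on-disk indexes and a bit-vector): answer the query from both indexes, merge the results, and filter out invalidated docs.

    main_index = {"caesar": [1, 2, 4], "brutus": [1, 3]}
    aux_index = {"caesar": [7], "palin": [8]}          # new docs go here until re-indexing
    deleted = {3}                                      # invalidation "bit-vector" as a set

    def search(term):
        """Look the term up in both indexes, merge, and drop invalidated docs."""
        postings = main_index.get(term, []) + aux_index.get(term, [])
        return [d for d in sorted(postings) if d not in deleted]

    print(search("caesar"))   # [1, 2, 4, 7]
    print(search("brutus"))   # [1]  (doc 3 has been deleted)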
42Issues with main and auxiliary indexes
Sec. 4.5
- Problem of frequent merges: you touch stuff a lot.
- Poor performance during merge.
- Actually:
- Merging of the auxiliary index into the main index is efficient if we keep a separate file for each postings list.
- The merge is then the same as a simple append.
- But then we would need a lot of files, which is inefficient for the O/S.
- Assumption for the rest of the lecture: the index is one big file.
- In reality: use a scheme somewhere in between (e.g., split very large postings lists, collect postings lists of length 1 in one file, etc.)
43Dynamic/Positional indexing at search engines
Sec. 4.5
- All the large search engines now do dynamic indexing.
- Their indices have frequent incremental changes:
- News items, blogs, new topical web pages
- Sarah Palin, ...
- But (sometimes/typically) they also periodically reconstruct the index from scratch.
- Query processing is then switched to the new index, and the old index is deleted.
- Positional indexes:
- Same sort of sorting problem, just larger. Why?
44Sec. 4.5
45 Resources for today's lecture
Ch. 4
- Chapter 4 of IIR
- MG Chapter 5
- Original publication on MapReduce: Dean and Ghemawat (2004)
- Original publication on SPIMI: Heinz and Zobel (2003)