Design a web crawler
Step 1: Outline use cases and constraints
Use cases
We'll scope the problem to handle only the following use cases
Service crawls a list of urls:
Generates reverse index of words to pages containing the search terms
Generates titles and snippets for pages
Title and snippets are static, they do not change based on search query
User inputs a search term and sees a list of relevant pages with titles and snippets the crawler generated
Only sketch high level components and interactions for this use case, no need to go into depth
Service has high availability
Out of scope
Search analytics
Personalized search results
Page rank
Constraints and assumptions
State assumptions
Traffic is not evenly distributed
Some searches are very popular, while others are only executed once
There is a hot entity issue: some search terms receive very high traffic
Support only anonymous users
Generating search results should be fast
Efficiency is a requirement
The web crawler should not get stuck in an infinite loop
We get stuck in an infinite loop if the graph contains a cycle
1 billion links to crawl
Pages need to be crawled regularly to ensure freshness
Average refresh rate of about once per week, more frequent for popular sites
4 billion links crawled each month
Average stored size per web page: 500 KB
For simplicity, count changes the same as new pages
100 billion searches per month
Exercise the use of more traditional systems - don't use existing systems such as Solr or Nutch.
Calculate usage
2 PB of stored page content per month
1,600 write requests per second
40,000 search requests per second
Handy conversion guide:
2.5 million seconds per month
1 request per second = 2.5 million requests per month
40 requests per second = 100 million requests per month
400 requests per second = 1 billion requests per month
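As a quick sanity check, the arithmetic behind these estimates can be reproduced from the rounded figures above (a back-of-the-envelope sketch, not precise capacity planning):

```python
# Back-of-the-envelope usage estimates using the rounded figures above.
SECONDS_PER_MONTH = 2.5 * 10**6          # handy conversion: ~2.5 million seconds per month

pages_crawled_per_month = 4 * 10**9      # 4 billion links crawled each month
avg_page_size_kb = 500                   # 500 KB stored per page
searches_per_month = 100 * 10**9         # 100 billion searches per month

stored_kb_per_month = pages_crawled_per_month * avg_page_size_kb
print(stored_kb_per_month / 10**12, 'PB of stored page content per month')        # ~2 PB

print(pages_crawled_per_month / SECONDS_PER_MONTH, 'write requests per second')   # ~1,600
print(searches_per_month / SECONDS_PER_MONTH, 'search requests per second')       # ~40,000
```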
Step 2: Create a high level design
Step 3: Design core components
Use case: Service crawls a list of urls
We'll assume we have an initial list of links_to_crawl, ranked initially based on overall site popularity. If this is not a reasonable assumption, we can seed the crawler with popular sites that link to outside content, such as Yahoo, DMOZ, etc.
We'll use a table crawled_links to store processed links and their page signatures.
We could store links_to_crawl and crawled_links in a key-value NoSQL Database. For the ranked links in links_to_crawl, we could use Redis with sorted sets to maintain a ranking of page links. We should discuss the use cases and tradeoffs between choosing SQL or NoSQL.
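A minimal sketch of how the ranked links_to_crawl could live in a Redis sorted set, assuming the redis-py client; the key name, urls, and scores are illustrative:

```python
import redis

r = redis.Redis(host='localhost', port=6379)

# Score each link by its popularity rank.
r.zadd('links_to_crawl', {'https://foo.com': 100, 'https://bar.com': 42})

# Fetch the most popular link to crawl next (highest score first).
next_url, score = r.zrevrange('links_to_crawl', 0, 0, withscores=True)[0]
```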
The Crawler Service processes each page link in a loop: it takes the highest-ranked link off links_to_crawl, checks crawled_links for an entry with a similar page signature, and either lowers the link's priority if a similar page was already crawled (preventing cycles) or crawls the page, generates its signature, removes the link from links_to_crawl, and adds it to crawled_links.
PagesDataStore is an abstraction within the Crawler Service that uses the NoSQL Database.
Page is an abstraction within the Crawler Service that encapsulates a page, its contents, child urls, and signature.
Crawler is the main class within the Crawler Service, composed of Page and PagesDataStore.
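A rough Python sketch of the crawl loop described above; the PagesDataStore and queue method names are assumptions for illustration, not a fixed interface:

```python
class Crawler(object):
    """Pulls ranked links off links_to_crawl and records results in crawled_links."""

    def __init__(self, data_store, reverse_index_queue, doc_index_queue):
        self.data_store = data_store                    # wraps the NoSQL tables
        self.reverse_index_queue = reverse_index_queue  # feeds the Reverse Index Service
        self.doc_index_queue = doc_index_queue          # feeds the Document Service

    def crawl(self):
        while True:
            page = self.data_store.extract_max_priority_page()  # fetches page contents
            if page is None:
                break
            page.signature = self.data_store.extract_page_signature(page)
            if self.data_store.crawled_similar(page.signature):
                # A similar page was already crawled, lower the priority to avoid cycles.
                self.data_store.reduce_priority_link_to_crawl(page.url)
            else:
                self.crawl_page(page)

    def crawl_page(self, page):
        for url in page.child_urls:
            self.data_store.add_link_to_crawl(url)
        self.reverse_index_queue.put(page)              # hypothetical queue API
        self.doc_index_queue.put(page)
        self.data_store.remove_link_to_crawl(page.url)
        self.data_store.insert_crawled_link(page.url, page.signature)
```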
Handling duplicates
We need to be careful the web crawler doesn't get stuck in an infinite loop, which happens when the graph contains a cycle.
We'll want to remove duplicate urls:
For smaller lists we could use something like sort | unique
With 1 billion links to crawl, we could use MapReduce to output only entries that have a frequency of 1, as in the sketch below
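A minimal MapReduce sketch of that frequency-of-one filter, assuming the mrjob library:

```python
from mrjob.job import MRJob


class RemoveDuplicateUrls(MRJob):
    """Emits only urls that appear exactly once in the input."""

    def mapper(self, _, line):
        yield line, 1

    def reducer(self, key, values):
        total = sum(values)
        if total == 1:
            yield key, total


if __name__ == '__main__':
    RemoveDuplicateUrls.run()
```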
Detecting duplicate content is more complex. We could generate a signature based on the contents of the page and compare the two signatures for similarity. Some potential algorithms are the Jaccard index and cosine similarity.
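For example, a minimal Jaccard index over page tokens (the tokenization and the 0.9 threshold below are assumptions):

```python
def jaccard_similarity(tokens_a, tokens_b):
    """Jaccard index: size of the intersection over size of the union."""
    a, b = set(tokens_a), set(tokens_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)


page_a = 'the quick brown fox jumps over the lazy dog'.split()
page_b = 'the quick brown fox jumped over a lazy dog'.split()
if jaccard_similarity(page_a, page_b) > 0.9:   # similarity threshold is an assumption
    print('treat as duplicate content')
```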
Determining when to update the crawl results
Pages need to be crawled regularly to ensure freshness. Crawl results could have a timestamp field that indicates the last time a page was crawled. After a default time period, say one week, all pages should be refreshed. Frequently updated or more popular sites could be refreshed in shorter intervals.
Although we won't dive into details on analytics, we could do some data mining to determine the mean time before a particular page is updated, and use that statistic to determine how often to re-crawl the page.
We might also choose to support a Robots.txt file that gives webmasters control of crawl frequency.
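A minimal sketch of the freshness check, assuming each crawl result stores a last-crawled timestamp; the default interval comes from the one-week assumption above:

```python
from datetime import datetime, timedelta

DEFAULT_REFRESH_INTERVAL = timedelta(weeks=1)   # default refresh rate from the constraints


def needs_recrawl(last_crawled_at, refresh_interval=DEFAULT_REFRESH_INTERVAL):
    """Return True if the page is stale and should be re-crawled."""
    return datetime.utcnow() - last_crawled_at >= refresh_interval


# Popular or frequently updated sites could pass a shorter interval, e.g. timedelta(days=1).
```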
Use case: User inputs a search term and sees a list of relevant pages with titles and snippets
The Client sends a request to the Web Server, running as a reverse proxy
The Web Server forwards the request to the Query API server
The Query API server does the following:
Parses the query:
Removes markup
Breaks up the text into terms
Fixes typos
Normalizes capitalization
Converts the query to use boolean operations
Uses the Reverse Index Service to find documents matching the query
The Reverse Index Service ranks the matching results and returns the top ones
Uses the Document Service to return titles and snippets
We'll use a public REST API for search queries; each result in the response carries the page's title, snippet, and link.
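A hedged example of what the search call might look like from a client, using Python's requests library; the endpoint and response fields are illustrative, not a fixed contract:

```python
import requests

response = requests.get('https://search.com/api/v1/search',
                        params={'query': 'hello world'})

# Illustrative response body:
# [
#     {"title": "foo's title", "snippet": "foo's snippet", "link": "https://foo.com"}
# ]
results = response.json()
```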
For internal communications, we could use Remote Procedure Calls.
Step 4: Scale the design
State you would 1) Benchmark/Load Test, 2) Profile for bottlenecks, 3) address bottlenecks while evaluating alternatives and trade-offs, and 4) repeat. See Design a system that scales to millions of users on AWS as a sample on how to iteratively scale the initial design.
It's important to discuss what bottlenecks you might encounter with the initial design and how you might address each of them. For example, what issues are addressed by adding a Load Balancer with multiple Web Servers? CDN? Master-Slave Replicas? What are the alternatives and Trade-Offs for each?
We'll introduce some components to complete the design and to address scalability issues. Internal load balancers are not shown to reduce clutter.
To avoid repeating discussions, refer to the following system design topics for main talking points, tradeoffs, and alternatives:
DNS
Load balancer
Horizontal scaling
Web server (reverse proxy)
API server (application layer)
Cache
NoSQL
Consistency patterns
Availability patterns
Some searches are very popular, while others are only executed once. Popular queries can be served from a Memory Cache such as Redis or Memcached to reduce response times and to avoid overloading the Reverse Index Service and Document Service. The Memory Cache is also useful for handling the unevenly distributed traffic and traffic spikes. Reading 1 MB sequentially from memory takes about 250 microseconds, while reading from SSD takes 4x and from disk takes 80x longer.
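A minimal cache-aside sketch for popular queries, assuming redis-py; the key scheme, TTL, and the search_fn callable standing in for the Query API are assumptions:

```python
import json

import redis

cache = redis.Redis(host='localhost', port=6379)
CACHE_TTL_SECONDS = 60   # assumption: a short TTL keeps popular results reasonably fresh


def cached_search(query, search_fn):
    """Cache-aside: serve popular queries from Redis, otherwise hit the backing services."""
    key = 'search:' + query.strip().lower()
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    results = search_fn(query)   # calls the Reverse Index Service and Document Service
    cache.setex(key, CACHE_TTL_SECONDS, json.dumps(results))
    return results
```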
Below are a few other optimizations to the Crawler Service:
To handle the data size and request load, the Reverse Index Service and Document Service will likely need to make heavy use of sharding and federation
DNS lookup can be a bottleneck; the Crawler Service can keep its own DNS lookup cache that is refreshed periodically (see the sketch after this list)
The Crawler Service can improve performance and reduce memory usage by keeping many open connections at a time, referred to as connection pooling
Switching to UDP could also boost performance
Web crawling is bandwidth intensive, ensure there is enough bandwidth to sustain high throughput
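As referenced in the DNS item above, a minimal in-process DNS cache sketch; the TTL and the use of socket.gethostbyname are assumptions for illustration:

```python
import socket
import time


class DnsCache(object):
    """Keeps resolved hostnames in memory and refreshes them after a TTL."""

    def __init__(self, ttl_seconds=300):             # TTL is an assumption
        self.ttl_seconds = ttl_seconds
        self._cache = {}                              # hostname -> (ip_address, resolved_at)

    def resolve(self, hostname):
        entry = self._cache.get(hostname)
        if entry and time.time() - entry[1] < self.ttl_seconds:
            return entry[0]
        ip_address = socket.gethostbyname(hostname)   # falls back to a real lookup
        self._cache[hostname] = (ip_address, time.time())
        return ip_address
```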
Additional talking points
SQL scaling patterns
Read replicas
Federation
Sharding
Denormalization
SQL Tuning
NoSQL
Key-value store
Document store
Wide column store
Graph database
SQL vs NoSQL
Caching
Where to cache
Client caching
CDN caching
Web server caching
Database caching
Application caching
What to cache
Caching at the database query level
Caching at the object level
When to update the cache
Cache-aside
Write-through
Write-behind (write-back)
Refresh ahead
Asynchronism and microservices
Message queues
Task queues
Back pressure
Microservices
Communications
Discuss tradeoffs:
External communication with clients - HTTP APIs following REST
Internal communications - RPC
Service discovery
Security
Refer to the security section.
Latency numbers
See Latency numbers every programmer should know.
Ongoing
Continue benchmarking and monitoring your system to address bottlenecks as they come up
Scaling is an iterative process