Design a key-value cache to save the results of the most recent web server queries

Step 1: Outline use cases and constraints

Use cases

  • User sends a search request resulting in a cache hit

  • User sends a search request resulting in a cache miss

  • Service has high availability

Constraints and assumptions

State assumptions

  • Traffic is not evenly distributed

    • Popular queries should almost always be in the cache

    • Need to determine how to expire/refresh

  • Serving from cache requires fast lookups

  • Low latency between machines

  • Limited memory in cache

    • Need to determine what to keep/remove

    • Need to cache millions of queries

  • 10 million users

  • 10 billion queries per month

Calculate usage

  • Cache stores ordered list of key: query, value: results

    • query - 50 bytes

    • title - 20 bytes

    • snippet - 200 bytes

    • Total: 270 bytes

  • 2.7 TB of cache data per month if all 10 billion queries are unique and all are stored

    • 270 bytes per search * 10 billion searches per month

    • Assumptions state limited memory, need to determine how to expire contents

  • 4,000 requests per second

Handy conversion guide:

  • 2.5 million seconds per month

  • 1 request per second = 2.5 million requests per month

  • 40 requests per second = 100 million requests per month

  • 400 requests per second = 1 billion requests per month
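
These figures can be sanity-checked with a quick back-of-the-envelope script (the byte sizes are the assumptions stated above):

# Back-of-the-envelope check of the sizing numbers above
bytes_per_entry = 50 + 20 + 200     # query + title + snippet = 270 bytes
queries_per_month = 10 * 10**9      # 10 billion searches per month
seconds_per_month = 2.5 * 10**6     # handy approximation from the guide

print(bytes_per_entry * queries_per_month / 10**12)  # ~2.7 TB if every query is unique
print(queries_per_month / seconds_per_month)         # ~4,000 requests per second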

Step 2: Create a high level design

Step 3: Design core components

Use case: User sends a request resulting in a cache hit

Popular queries can be served from a Memory Cache such as Redis or Memcached to reduce read latency and to avoid overloading the Reverse Index Service and Document Service. Reading 1 MB sequentially from memory takes about 250 microseconds, while reading from SSD takes about 4x longer and reading from disk about 80x longer.

Since the cache has limited capacity, we'll use a least recently used (LRU) approach to expire older entries.

  • The Client sends a request to the Web Server, running as a reverse proxy

  • The Web Server forwards the request to the Query API server

  • The Query API server does the following:

    • Parses the query

      • Removes markup

      • Breaks up the text into terms

      • Fixes typos

      • Normalizes capitalization

      • Converts the query to use boolean operations

    • Checks the Memory Cache for the content matching the query

      • If there's a hit in the Memory Cache, the Memory Cache does the following:

        • Updates the cached entry's position to the front of the LRU list

        • Returns the cached contents

      • Else, the Query API does the following:

        • Uses the Reverse Index Service to find documents matching the query

          • The Reverse Index Service ranks the matching results and returns the top ones

        • Uses the Document Service to return titles and snippets

        • Updates the Memory Cache with the contents, placing the entry at the front of the LRU list

Cache implementation

The cache can use a doubly-linked list: new items will be added to the head while items to expire will be removed from the tail. We'll use a hash table for fast lookups to each linked list node.

Query API Server implementation:

class QueryApi(object):

    def __init__(self, memory_cache, reverse_index_service):
        self.memory_cache = memory_cache
        self.reverse_index_service = reverse_index_service

    def parse_query(self, query):
        """Remove markup, break text into terms, deal with typos,
        normalize capitalization, convert to use boolean operations.
        """
        ...

    def process_query(self, query):
        query = self.parse_query(query)
        results = self.memory_cache.get(query)
        if results is None:
            # Cache miss: fetch from the reverse index, then populate the cache (cache-aside)
            results = self.reverse_index_service.process_search(query)
            self.memory_cache.set(query, results)
        return results

Node implementation:

class Node(object):

    def __init__(self, query, results):
        self.query = query
        self.results = results
        self.prev = None  # pointers needed by the doubly-linked list below
        self.next = None

LinkedList implementation:

class LinkedList(object):

    def __init__(self):
        self.head = None
        self.tail = None

    def move_to_front(self, node):
        ...

    def append_to_front(self, node):
        ...

    def remove_from_tail(self):
        ...
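
The elided methods could be filled in as follows; this is a minimal sketch that assumes the prev/next pointers on Node above, and the _unlink helper is our addition rather than part of the original outline:

class LinkedList(object):

    def __init__(self):
        self.head = None
        self.tail = None

    def _unlink(self, node):
        # Detach node from its neighbors, fixing head/tail as needed
        if node.prev is not None:
            node.prev.next = node.next
        else:
            self.head = node.next
        if node.next is not None:
            node.next.prev = node.prev
        else:
            self.tail = node.prev
        node.prev = node.next = None

    def move_to_front(self, node):
        if node is not self.head:
            self._unlink(node)
            self.append_to_front(node)

    def append_to_front(self, node):
        node.next = self.head
        node.prev = None
        if self.head is not None:
            self.head.prev = node
        self.head = node
        if self.tail is None:
            self.tail = node

    def remove_from_tail(self):
        if self.tail is not None:
            self._unlink(self.tail)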

Cache implementation:

class Cache(object):

    def __init__(self, MAX_SIZE):
        self.MAX_SIZE = MAX_SIZE
        self.size = 0
        self.lookup = {}  # key: query, value: node
        self.linked_list = LinkedList()

    def get(self, query):
        """Get the stored query result from the cache.

        Accessing a node updates its position to the front of the LRU list.
        """
        node = self.lookup.get(query)  # .get() avoids a KeyError on a cache miss
        if node is None:
            return None
        self.linked_list.move_to_front(node)
        return node.results

    def set(self, query, results):
        """Set the result for the given query key in the cache.

        When updating an entry, updates its position to the front of the LRU list.
        If the entry is new and the cache is at capacity, removes the oldest entry
        before the new entry is added.
        """
        node = self.lookup.get(query)
        if node is not None:
            # Key exists in cache, update the value
            node.results = results
            self.linked_list.move_to_front(node)
        else:
            # Key does not exist in cache
            if self.size == self.MAX_SIZE:
                # Remove the oldest entry from the linked list and lookup
                self.lookup.pop(self.linked_list.tail.query, None)
                self.linked_list.remove_from_tail()
            else:
                self.size += 1
            # Add the new key and value
            new_node = Node(query, results)
            self.linked_list.append_to_front(new_node)
            self.lookup[query] = new_node
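
A short usage example (the capacity of 2 and the query strings are arbitrary) shows the LRU eviction behavior:

cache = Cache(MAX_SIZE=2)
cache.set('foo', ['result1'])
cache.set('bar', ['result2'])
cache.get('foo')               # 'foo' moves to the front of the LRU list
cache.set('baz', ['result3'])  # at capacity: evicts 'bar', the least recently used
assert cache.get('bar') is None
assert cache.get('foo') == ['result1']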

When to update the cache

The cache should be updated when:

  • The page contents change

  • The page is removed or a new page is added

  • The page rank changes

The most straightforward way to handle these cases is to simply set a max time that a cached entry can stay in the cache before it is updated, usually referred to as time to live (TTL). Refer to When to update the cache for tradeoffs and alternatives. The approach above describes cache-aside.
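
A TTL could be sketched on top of the Cache above; the ExpiringNode name, the one-hour timeout, and the stale-check placement are assumptions for illustration:

import time

TTL_SECONDS = 60 * 60  # assumed expiry window of one hour

class ExpiringNode(Node):
    """Node that also records when its cached results go stale."""

    def __init__(self, query, results):
        super(ExpiringNode, self).__init__(query, results)
        self.expires_at = time.time() + TTL_SECONDS

# Cache.get would then treat a stale hit as a miss:
#
#     if node is not None and time.time() > node.expires_at:
#         return None  # caller re-fetches and re-sets the entry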

Step 4: Scale the design

Important: Do not simply jump right into the final design from the initial design!

It's important to discuss what bottlenecks you might encounter with the initial design and how you might address each of them. For example, what issues are addressed by adding a Load Balancer with multiple Web Servers? CDN? Master-Slave Replicas? What are the alternatives and trade-offs for each?

State you would 1) Benchmark/Load Test, 2) Profile for bottlenecks, 3) address bottlenecks while evaluating alternatives and trade-offs, and 4) repeat. See Design a system that scales to millions of users on AWS as a sample of how to iteratively scale the initial design.

We'll introduce some components to complete the design and to address scalability issues. Internal load balancers are not shown to reduce clutter.

To avoid repeating discussions, refer to the following system design topics for main talking points, tradeoffs, and alternatives:

  • DNS

  • Load balancer

  • Horizontal scaling

  • Web server (reverse proxy)

  • API server (application layer)

  • Cache

  • Consistency patterns

  • Availability patterns

Expanding the Memory Cache to many machines

To handle the heavy request load and the large amount of memory needed, we'll scale horizontally. We have three main options on how to store the data on our Memory Cache cluster:

  • Each machine in the cache cluster has its own cache - Simple, although it will likely result in a low cache hit rate.

  • Each machine in the cache cluster has a copy of the cache - Simple, although it is an inefficient use of memory.

  • The cache is sharded across all machines in the cache cluster - More complex, although it is likely the best option. We could use hashing to determine which machine could have the cached results of a query using machine = hash(query). We'll likely want to use consistent hashing, as in the sketch below.
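
A minimal sketch of shard selection under these assumptions (the host names and the md5-based hash are ours; plain modulo hashing is shown for brevity, while consistent hashing would limit how many keys move when hosts are added or removed):

import hashlib

class CacheShards(object):
    """Route each query to the cache machine that owns it.

    Sketch only: hosts and the hash function are assumptions.
    Modulo hashing remaps most keys when the cluster resizes,
    which is why consistent hashing is preferred in practice.
    """

    def __init__(self, hosts):
        self.hosts = hosts

    def machine_for(self, query):
        digest = hashlib.md5(query.encode('utf-8')).hexdigest()
        return self.hosts[int(digest, 16) % len(self.hosts)]

shards = CacheShards(['cache-0', 'cache-1', 'cache-2'])
print(shards.machine_for('obama news'))  # same query always routes to the same host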

Additional talking points

Additional topics to dive into, depending on the problem scope and time remaining.

SQL scaling patterns

  • Read replicas

  • Federation

  • Sharding

  • Denormalization

  • SQL Tuning

NoSQL

  • Key-value store

  • Document store

  • Wide column store

  • Graph database

  • SQL vs NoSQL

Caching

  • Where to cache

    • Client caching

    • CDN caching

    • Web server caching

    • Database caching

    • Application caching

  • What to cache

    • Caching at the database query level

    • Caching at the object level

  • When to update the cache

    • Cache-aside

    • Write-through

    • Write-behind (write-back)

    • Refresh ahead

Asynchronism and microservices

  • Message queues

  • Task queues

  • Back pressure

  • Microservices

Communications

  • Discuss tradeoffs:

    • External communication with clients - HTTP APIs following REST

    • Internal communications - RPC

  • Service discovery

Security

Refer to the security section.

Latency numbers

See Latency numbers every programmer should know.

Ongoing

  • Continue benchmarking and monitoring your system to address bottlenecks as they come up

  • Scaling is an iterative process
