146. LRU Cache
Medium
Design a data structure that follows the constraints of a Least Recently Used (LRU) cache.

Implement the LRUCache class:

LRUCache(int capacity) Initialize the LRU cache with positive size capacity.
int get(int key) Return the value of the key if the key exists, otherwise return -1.
void put(int key, int value) Update the value of the key if the key exists. Otherwise, add the key-value pair to the cache. If the number of keys exceeds the capacity from this operation, evict the least recently used key.

The functions get and put must each run in O(1) average time complexity.
Input: ["LRUCache", "put", "put", "get", "put", "get", "put", "get", "get", "get"] [[2], [1, 1], [2, 2], [1], [3, 3], [2], [4, 4], [1], [3], [4]] Output: [null, null, null, 1, null, -1, null, -1, 3, 4] Explanation: LRUCache lRUCache = new LRUCache(2); lRUCache.put(1, 1); // cache is {1=1} lRUCache.put(2, 2); // cache is {1=1, 2=2} lRUCache.get(1); // return 1 lRUCache.put(3, 3); // LRU key was 2, evicts key 2, cache is {1=1, 3=3} lRUCache.get(2); // returns -1 (not found) lRUCache.put(4, 4); // LRU key was 1, evicts key 1, cache is {4=4, 3=3} lRUCache.get(1); // return -1 (not found) lRUCache.get(3); // return 3 lRUCache.get(4); // return 4
1 <= capacity <= 3000
0 <= key <= 10^4
0 <= value <= 10^5
At most 2 * 10^5 calls will be made to get and put.
My Solution
The optimal solution for an LRU Cache uses a combination of:
HashMap for O(1) key-value lookups
Doubly Linked List for O(1) insertion/deletion
Pointer manipulation for updating access order
The LRU Cache implementation uses two main data structures (a Go sketch follows this breakdown):
HashMap (map[int]*Node):
Provides O(1) access to cache entries
Maps keys to doubly-linked list nodes
Enables quick lookups for get/put operations
Doubly Linked List:
Maintains access order of elements
Most recently used items at front
Least recently used items at back
Allows O(1) removal/insertion of nodes
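A minimal sketch of these two structures in Go. Only the `map[int]*Node` type comes from the notes above; the field names, the sentinel `head`/`tail` nodes, and the `Constructor` function are illustrative choices:

```go
package main

// Node is one entry in the doubly linked list; prev/next pointers
// let it be unlinked from any position in O(1).
type Node struct {
	key, value int
	prev, next *Node
}

// LRUCache pairs the hash map (O(1) lookup) with the list
// (O(1) ordering). head and tail are sentinel nodes, so the list
// is never empty and no nil checks are needed.
type LRUCache struct {
	capacity int
	cache    map[int]*Node
	head     *Node // most recently used sits right after head
	tail     *Node // least recently used sits right before tail
}

// Constructor wires the two sentinels to each other.
func Constructor(capacity int) LRUCache {
	head, tail := &Node{}, &Node{}
	head.next, tail.prev = tail, head
	return LRUCache{
		capacity: capacity,
		cache:    make(map[int]*Node, capacity),
		head:     head,
		tail:     tail,
	}
}
```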
Time Complexity:
Constructor: O(1)
Get: O(1)
Put: O(1)
All internal operations (moveToFront, removeLRU): O(1)
Space Complexity:
O(capacity) for both HashMap and Linked List
Each entry requires:
HashMap entry: O(1)
Linked List node: O(1)
Total per entry: O(1)
HashMap + Linked List Synergy:
HashMap provides instant access to nodes
Linked List maintains order without requiring shifts
Both structures complement each other's weaknesses
Sentinel Nodes (Head/Tail):
Eliminate edge cases in node manipulation
Simplify insertion/deletion logic
No need to check for null pointers
Encapsulated Operations (sketched in code below):
moveToFront handles the "recently used" aspect
removeLRU maintains capacity constraint
Clean separation of concerns
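Continuing the sketch above, the helper operations and the public Get/Put might look like this. The method names follow the write-up's moveToFront/removeLRU; the extra remove/addToFront helpers are an assumption to keep each step O(1) and avoid duplicated pointer logic:

```go
// remove unlinks a node from wherever it sits; the sentinels
// guarantee prev and next are never nil.
func (c *LRUCache) remove(n *Node) {
	n.prev.next = n.next
	n.next.prev = n.prev
}

// addToFront inserts a node right after the head sentinel,
// marking it as most recently used.
func (c *LRUCache) addToFront(n *Node) {
	n.prev = c.head
	n.next = c.head.next
	c.head.next.prev = n
	c.head.next = n
}

// moveToFront refreshes recency for a node already in the list.
func (c *LRUCache) moveToFront(n *Node) {
	c.remove(n)
	c.addToFront(n)
}

// removeLRU evicts the node just before the tail sentinel and
// deletes its key from the map so both structures stay consistent.
func (c *LRUCache) removeLRU() {
	lru := c.tail.prev
	c.remove(lru)
	delete(c.cache, lru.key)
}

// Get returns the value and refreshes recency, or -1 if absent.
func (c *LRUCache) Get(key int) int {
	if n, ok := c.cache[key]; ok {
		c.moveToFront(n)
		return n.value
	}
	return -1
}

// Put updates an existing key or inserts a new one, evicting the
// least recently used entry if capacity would be exceeded.
func (c *LRUCache) Put(key, value int) {
	if n, ok := c.cache[key]; ok {
		n.value = value
		c.moveToFront(n)
		return
	}
	if len(c.cache) == c.capacity {
		c.removeLRU()
	}
	n := &Node{key: key, value: value}
	c.cache[key] = n
	c.addToFront(n)
}
```

Note that every map mutation is paired with the matching list mutation (and vice versa), which is what keeps the two structures consistent.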
Use LRU Cache when:
Need O(1) access time for cache operations
Want to evict least recently used items when capacity is reached
Memory usage can be bounded by a fixed capacity
Recent access patterns are good predictors of future accesses
Common applications:
Database query caching
Web browser page caching
File system caching
Memory management in operating systems
Caching Pattern:
Fast access to recent items
Automatic eviction of stale items
Fixed memory footprint
Doubly Linked List Pattern:
O(1) removal from any position
O(1) insertion at any position
Bidirectional traversal capability
HashMap + Linked List Pattern:
Common in cache implementations
Combines fast lookup with ordered access
Used in many system designs
Implementation Strategy:
Start with data structure definitions
Implement basic operations first
Add edge case handling
Consider thread safety if asked
Key Points to Mention:
Why you chose the data structures
Time/space complexity analysis
How you handle edge cases
Possible optimizations
Common Follow-up Questions:
How to make it thread-safe? (one mutex-based approach is sketched below)
How to implement LFU Cache instead?
How to handle variable entry sizes?
How to implement distributed LRU Cache?
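For the thread-safety follow-up, one common answer is to guard the whole cache with a single mutex. This wrapper type and its name are assumptions, not part of the original solution; note that Get must also take the lock because it reorders the list:

```go
// SafeLRUCache wraps the sketch above with a sync.Mutex so that
// concurrent goroutines see a consistent map and list.
// (assumes "sync" is added to the file's imports)
type SafeLRUCache struct {
	mu    sync.Mutex
	inner LRUCache
}

func NewSafeLRUCache(capacity int) *SafeLRUCache {
	return &SafeLRUCache{inner: Constructor(capacity)}
}

func (s *SafeLRUCache) Get(key int) int {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.inner.Get(key)
}

func (s *SafeLRUCache) Put(key, value int) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.inner.Put(key, value)
}
```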
Gotchas to Watch For (the example replay below exercises several of these):
Memory leaks in evicted entries
Proper linking/unlinking of nodes
HashMap and List consistency
Capacity edge cases
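A quick way to check the linking/unlinking, map/list consistency, and the capacity edge case is to replay the example from the problem statement against the sketch above (a hypothetical main function):

```go
// (assumes "fmt" is added to the file's imports)
func main() {
	cache := Constructor(2)
	cache.Put(1, 1)           // cache is {1=1}
	cache.Put(2, 2)           // cache is {1=1, 2=2}
	fmt.Println(cache.Get(1)) // 1
	cache.Put(3, 3)           // evicts key 2
	fmt.Println(cache.Get(2)) // -1 (not found)
	cache.Put(4, 4)           // evicts key 1
	fmt.Println(cache.Get(1)) // -1 (not found)
	fmt.Println(cache.Get(3)) // 3
	fmt.Println(cache.Get(4)) // 4
}
```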
Leetcode: https://leetcode.com/problems/lru-cache/