LeetCode Problem Workspace
LFU Cache
Implement an LFU Cache using linked-list pointer manipulation with constant-time get and put operations for interview scenarios.
Practice Focus
Hard · Linked-list pointer manipulation
This problem requires designing a Least Frequently Used (LFU) cache that supports O(1) get and put operations. The solution hinges on combining a hash table with doubly-linked lists to track frequencies efficiently. Correct pointer manipulation ensures updates, evictions, and frequency tracking operate without performance degradation, handling edge cases where multiple keys share the same frequency.
Problem Statement
Design and implement a data structure that behaves as a Least Frequently Used (LFU) cache. The LFU cache should store key-value pairs and evict the least frequently used key when capacity is reached. If multiple keys have the same frequency, evict the least recently used among them.
Implement the LFUCache class with a constructor LFUCache(int capacity) and methods get(int key) and put(int key, int value). Maintain a use counter for each key, increment it on access, and ensure all operations, including evictions, occur in O(1) time using linked-list pointer manipulation and hash tables.
Examples
Example 1
Input
["LFUCache", "put", "put", "get", "put", "get", "get", "put", "get", "get", "get"]
[[2], [1, 1], [2, 2], [1], [3, 3], [2], [3], [4, 4], [1], [3], [4]]

Output
[null, null, null, 1, null, -1, 3, null, -1, 3, 4]
Explanation
// cnt(x) = the use counter for key x
// cache=[] will show the last used order for tiebreakers (leftmost element is most recent)
LFUCache lfu = new LFUCache(2);
lfu.put(1, 1);   // cache=[1,_], cnt(1)=1
lfu.put(2, 2);   // cache=[2,1], cnt(2)=1, cnt(1)=1
lfu.get(1);      // return 1
                 // cache=[1,2], cnt(2)=1, cnt(1)=2
lfu.put(3, 3);   // 2 is the LFU key because cnt(2)=1 is the smallest, invalidate 2.
                 // cache=[3,1], cnt(3)=1, cnt(1)=2
lfu.get(2);      // return -1 (not found)
lfu.get(3);      // return 3
                 // cache=[3,1], cnt(3)=2, cnt(1)=2
lfu.put(4, 4);   // Both 1 and 3 have the same cnt, but 1 is LRU, invalidate 1.
                 // cache=[4,3], cnt(4)=1, cnt(3)=2
lfu.get(1);      // return -1 (not found)
lfu.get(3);      // return 3
                 // cache=[3,4], cnt(4)=1, cnt(3)=3
lfu.get(4);      // return 4
                 // cache=[4,3], cnt(4)=2, cnt(3)=3
Constraints
- 1 <= capacity <= 10^4
- 0 <= key <= 10^5
- 0 <= value <= 10^9
- At most 2 * 10^5 calls will be made to get and put.
Solution Approach
Use Hash Table for Constant-Time Key Lookup
Maintain a hash map that maps keys to nodes containing values and frequency counters. This allows immediate access to any key without traversing the list, ensuring get and put operations remain O(1). The map must always stay synchronized with the linked lists to prevent pointer inconsistencies.
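As a minimal sketch of this first ingredient: each entry is a small node record carrying the key, value, and use counter, and a plain dict maps key to node so no list is ever traversed on lookup (the `Node` and `index` names here are illustrative, not part of the problem statement):

```python
# Sketch of the key -> node index; names are illustrative.
class Node:
    def __init__(self, key: int, value: int) -> None:
        self.key = key
        self.value = value
        self.freq = 1          # use counter; a key starts at frequency 1
        self.prev = None       # links used later by the per-frequency lists
        self.next = None

index = {}                     # key -> Node: O(1) average-time lookup
index[1] = Node(1, 10)
index[1].freq += 1             # bump the counter on each access
assert index[1].freq == 2 and index[1].value == 10
```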
Doubly-Linked List per Frequency
Create a doubly-linked list for each frequency count to track order of insertion and recent use. Nodes move between lists when their frequency increases. Linked-list pointer manipulation is crucial here to remove and insert nodes in constant time, preventing traversal delays during frequency updates.
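A hedged sketch of the bucket move (class and variable names are illustrative): each frequency owns a sentinel-guarded doubly-linked list, so bumping a node's counter is one O(1) unlink plus one O(1) insert, with no traversal:

```python
class Node:
    def __init__(self, key):
        self.key, self.prev, self.next = key, None, None

class DList:
    """Sentinel-guarded doubly-linked list: insert and remove are O(1)."""
    def __init__(self):
        self.head, self.tail = Node(None), Node(None)
        self.head.next, self.tail.prev = self.tail, self.head

    def add_first(self, node):
        node.prev, node.next = self.head, self.head.next
        self.head.next.prev = node
        self.head.next = node

    def remove(self, node):
        node.prev.next, node.next.prev = node.next, node.prev
        node.prev = node.next = None

    def keys(self):  # helper for inspection only; not O(1)
        out, cur = [], self.head.next
        while cur is not self.tail:
            out.append(cur.key)
            cur = cur.next
        return out

buckets = {1: DList(), 2: DList()}
n = Node("a")
buckets[1].add_first(n)        # key enters at frequency 1
buckets[1].remove(n)           # counter bumped: unlink from the freq-1 list...
buckets[2].add_first(n)        # ...and prepend to the freq-2 list
assert buckets[1].keys() == [] and buckets[2].keys() == ["a"]
```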
Eviction Using LFU Rules with LRU Tiebreaker
When the cache reaches capacity, remove the node from the lowest frequency list that is least recently used. Update both the hash map and linked list pointers carefully to avoid dangling references. This pattern ensures that the LFU policy is strictly enforced with O(1) complexity.
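The same policy can be sketched more compactly with `collections.OrderedDict` standing in for the per-frequency doubly-linked lists (insertion order plays the role of recency). This is an illustrative alternative to explicit pointer surgery, with hypothetical names, traced against the worked example above:

```python
from collections import OrderedDict, defaultdict

class LFUSketch:
    """Illustrative LFU with LRU tiebreak; OrderedDict buckets emulate
    the per-frequency lists (last inserted = most recently used)."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.freq = {}                           # key -> use counter
        self.buckets = defaultdict(OrderedDict)  # freq -> keys in recency order
        self.min_freq = 0

    def _touch(self, key):
        f = self.freq[key]
        value = self.buckets[f].pop(key)
        if not self.buckets[f] and f == self.min_freq:
            self.min_freq += 1                   # lowest occupied freq moved up
        self.freq[key] = f + 1
        self.buckets[f + 1][key] = value         # re-insert as most recent
        return value

    def get(self, key: int) -> int:
        if key not in self.freq:
            return -1
        return self._touch(key)

    def put(self, key: int, value: int) -> None:
        if self.capacity == 0:
            return
        if key in self.freq:
            self._touch(key)
            self.buckets[self.freq[key]][key] = value
            return
        if len(self.freq) == self.capacity:
            # Evict the LRU key of the lowest frequency: first in its bucket.
            old, _ = self.buckets[self.min_freq].popitem(last=False)
            del self.freq[old]
        self.freq[key] = 1
        self.buckets[1][key] = value
        self.min_freq = 1

lfu = LFUSketch(2)
lfu.put(1, 1); lfu.put(2, 2)
assert lfu.get(1) == 1
lfu.put(3, 3)                    # evicts key 2 (cnt(2)=1 is the smallest)
assert lfu.get(2) == -1 and lfu.get(3) == 3
lfu.put(4, 4)                    # cnt(1)=cnt(3)=2, but key 1 is LRU: evicted
assert [lfu.get(1), lfu.get(3), lfu.get(4)] == [-1, 3, 4]
```

Each `OrderedDict` pop and insert is O(1), so this mirrors the complexity of the pointer-based version while keeping the eviction rule easy to audit.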
Complexity Analysis
| Metric | Value |
|---|---|
| Time | O(1) |
| Space | O(N) |
All operations—get, put, and evict—run in O(1) time due to hash table lookups and direct pointer manipulation in doubly-linked lists. Space complexity is O(N) to store nodes, frequency lists, and the hash map, where N is the cache capacity.
What Interviewers Usually Probe
- Expect emphasis on linked-list pointer correctness and edge cases.
- Clarify how frequency counters integrate with hash table nodes.
- Demonstrate eviction order when multiple keys share the same frequency.
Common Pitfalls or Variants
Common pitfalls
- Failing to update both the hash map and linked lists on frequency change, leading to inconsistent state.
- Incorrectly handling eviction when multiple nodes share the lowest frequency.
- Using a single list for all nodes, which breaks O(1) time guarantees.
Follow-up variants
- Implement a Most Frequently Used (MFU) cache using the same linked-list pointer manipulation strategy.
- Support dynamic resizing of capacity while maintaining O(1) operations.
- Track both time-based and frequency-based evictions for hybrid LFU-LRU caches.
FAQ
What is the main challenge in implementing LFU Cache?
The primary challenge is updating frequency counts and moving nodes between lists in O(1) time while maintaining accurate pointers for eviction and retrieval.
How do I ensure constant-time get and put operations?
Use a hash map for key lookup and maintain separate doubly-linked lists per frequency, carefully updating pointers when nodes move between lists.
What happens when multiple keys have the same frequency?
Evict the least recently used key among them by removing the tail node from the corresponding frequency list, maintaining LFU order.
Can LFU Cache handle zero capacity?
Yes, but any put operation should be ignored, and get always returns -1, since no keys can be stored.
Why is linked-list pointer manipulation critical for LFU Cache?
It allows constant-time insertion, removal, and frequency updates without traversing lists, which is essential to maintain O(1) complexity.
Solution
Solution 1
#### Python3
```python
from collections import defaultdict


class Node:
    def __init__(self, key: int, value: int) -> None:
        self.key = key
        self.value = value
        self.freq = 1
        self.prev = None
        self.next = None


class DoublyLinkedList:
    def __init__(self) -> None:
        # Sentinel head and tail keep insertion/removal branch-free.
        self.head = Node(-1, -1)
        self.tail = Node(-1, -1)
        self.head.next = self.tail
        self.tail.prev = self.head

    def add_first(self, node: Node) -> None:
        node.prev = self.head
        node.next = self.head.next
        self.head.next.prev = node
        self.head.next = node

    def remove(self, node: Node) -> Node:
        node.next.prev = node.prev
        node.prev.next = node.next
        node.next, node.prev = None, None
        return node

    def remove_last(self) -> Node:
        return self.remove(self.tail.prev)

    def is_empty(self) -> bool:
        return self.head.next == self.tail


class LFUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.min_freq = 0
        self.map = {}  # key -> Node
        self.freq_map = defaultdict(DoublyLinkedList)  # freq -> node list

    def get(self, key: int) -> int:
        if self.capacity == 0 or key not in self.map:
            return -1
        node = self.map[key]
        self.incr_freq(node)
        return node.value

    def put(self, key: int, value: int) -> None:
        if self.capacity == 0:
            return
        if key in self.map:
            node = self.map[key]
            node.value = value
            self.incr_freq(node)
            return
        if len(self.map) == self.capacity:
            # Evict the least recently used node of the lowest frequency.
            ls = self.freq_map[self.min_freq]
            node = ls.remove_last()
            self.map.pop(node.key)
        node = Node(key, value)
        self.add_node(node)
        self.map[key] = node
        self.min_freq = 1

    def incr_freq(self, node: Node) -> None:
        freq = node.freq
        ls = self.freq_map[freq]
        ls.remove(node)
        if ls.is_empty():
            self.freq_map.pop(freq)
            if freq == self.min_freq:
                self.min_freq += 1
        node.freq += 1
        self.add_node(node)

    def add_node(self, node: Node) -> None:
        self.freq_map[node.freq].add_first(node)


# Your LFUCache object will be instantiated and called as such:
# obj = LFUCache(capacity)
# param_1 = obj.get(key)
# obj.put(key, value)
```