LeetCode Problem Workspace
Word Ladder II
Find all shortest transformation sequences from beginWord to endWord using a dictionary, leveraging backtracking search with pruning for efficiency.
Practice Focus
Hard · Backtracking search with pruning
This problem requires generating all shortest paths from beginWord to endWord while ensuring no unnecessary branches are explored, using a combination of BFS for distance mapping and backtracking for sequence reconstruction. Proper pruning prevents exploring longer or invalid sequences. Hash tables optimize lookups for valid next words and prevent revisiting nodes, keeping the search efficient and manageable.
Problem Statement
Given two words, beginWord and endWord, along with a dictionary of words, wordList, the goal is to return every shortest transformation sequence from beginWord to endWord. Each transformation must change exactly one character, and all intermediate words must exist in wordList. If no sequence exists, return an empty list. The output is a list of sequences, where each sequence is a list of words representing the transformation steps.
For example, transforming beginWord = "hit" to endWord = "cog" with wordList = ["hot","dot","dog","lot","log","cog"] yields sequences like ["hit","hot","dot","dog","cog"] and ["hit","hot","lot","log","cog"]. Implementing this efficiently requires backtracking with pruning to avoid redundant exploration and hash tables for constant-time word checks.
Examples
Example 1
Input: beginWord = "hit", endWord = "cog", wordList = ["hot","dot","dog","lot","log","cog"]
Output: [["hit","hot","dot","dog","cog"],["hit","hot","lot","log","cog"]]
There are 2 shortest transformation sequences:
"hit" -> "hot" -> "dot" -> "dog" -> "cog"
"hit" -> "hot" -> "lot" -> "log" -> "cog"
Example 2
Input: beginWord = "hit", endWord = "cog", wordList = ["hot","dot","dog","lot","log"]
Output: []
The endWord "cog" is not in wordList, therefore there is no valid transformation sequence.
Constraints
- 1 <= beginWord.length <= 5
- endWord.length == beginWord.length
- 1 <= wordList.length <= 500
- wordList[i].length == beginWord.length
- beginWord, endWord, and wordList[i] consist of lowercase English letters.
- beginWord != endWord
- All the words in wordList are unique.
- The sum of all shortest transformation sequences does not exceed 10^5.
Solution Approach
Breadth-First Search for Distance Mapping
First, perform a BFS from beginWord to build a distance map of the shortest steps to reach each word. This ensures that during backtracking, only the words that lie on a valid shortest path are explored. BFS also helps identify when no valid path exists early, preventing unnecessary recursion. Hash tables store neighbors efficiently for O(1) lookup, and each level of BFS represents a step in the transformation sequence, which guides pruning in subsequent backtracking.
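The distance-mapping step can be sketched as a standalone helper. This is an illustrative sketch, not the final solution's exact code; the function name `bfs_distances` and the convention that `begin_word` gets distance 0 are assumptions for this example:

```python
from collections import deque

def bfs_distances(begin_word, words):
    """Map each reachable word to its shortest distance (in steps)
    from begin_word. `words` is the dictionary as a set."""
    dist = {begin_word: 0}
    q = deque([begin_word])
    while q:
        cur = q.popleft()
        chars = list(cur)
        for i, original in enumerate(chars):
            for c in "abcdefghijklmnopqrstuvwxyz":
                chars[i] = c
                nxt = "".join(chars)
                # Visit each dictionary word at most once: the first time
                # BFS reaches it is necessarily along a shortest path.
                if nxt in words and nxt not in dist:
                    dist[nxt] = dist[cur] + 1
                    q.append(nxt)
            chars[i] = original
    return dist
```

If `endWord` never appears in the returned map, no transformation sequence exists and the backtracking phase can be skipped entirely.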
Backtracking with Pruning
After BFS, use backtracking to explore all paths from beginWord to endWord, adding words only if they match the next expected distance in the BFS map. Pruning occurs by ignoring words that are not at the correct distance, which prevents exploring longer sequences or loops. This ensures all returned sequences are minimal in length. Careful handling of visited states and hash table lookups allows backtracking to remain efficient and avoids repeating the same paths in multiple sequences.
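The pruning rule can be made concrete with a forward DFS over the distance map: a candidate word is only worth visiting if its BFS distance is exactly one more than the current word's. This sketch assumes a `dist` map of the kind the BFS step produces; the function name is hypothetical:

```python
def all_shortest_paths(begin_word, end_word, words, dist):
    """Backtrack from begin_word, stepping only to words whose BFS
    distance is exactly dist[cur] + 1 (the pruning rule)."""
    paths = []
    path = [begin_word]

    def dfs(cur):
        if cur == end_word:
            paths.append(path.copy())
            return
        chars = list(cur)
        for i, original in enumerate(chars):
            for c in "abcdefghijklmnopqrstuvwxyz":
                chars[i] = c
                nxt = "".join(chars)
                # Prune: nxt must lie on some shortest path through cur.
                if nxt in words and dist.get(nxt) == dist[cur] + 1:
                    path.append(nxt)
                    dfs(nxt)
                    path.pop()  # undo the choice before trying siblings
            chars[i] = original

    if end_word in dist:
        dfs(begin_word)
    return paths
```

Because every accepted step strictly increases the distance, cycles and longer-than-minimal sequences are impossible by construction, so no separate visited set is needed inside a path.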
Hash Table Optimization for Neighbor Lookup
Precompute possible word transformations by changing each character position and storing results in a hash table for O(1) neighbor retrieval. This reduces repeated scans through the wordList during recursion and BFS. Using hash tables also simplifies checking whether a candidate word exists in wordList, and prevents revisiting words within the same path. This combination of BFS distance mapping, backtracking, and hash table neighbor access ensures the algorithm explores only feasible paths with minimal overhead.
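One common way to precompute neighbors is to bucket words under wildcard patterns ("hot" falls under "*ot", "h*t", and "ho*"), so all neighbors of a word are found with `len(word)` hash lookups instead of a scan of `wordList`. A minimal sketch, with hypothetical helper names:

```python
from collections import defaultdict

def build_pattern_map(word_list):
    """Bucket words by wildcard patterns such as 'h*t' for O(1)
    retrieval of all words matching a given pattern."""
    patterns = defaultdict(list)
    for w in word_list:
        for i in range(len(w)):
            patterns[w[:i] + "*" + w[i + 1:]].append(w)
    return patterns

def neighbors(word, patterns):
    """All dictionary words exactly one character away from `word`."""
    result = set()
    for i in range(len(word)):
        for cand in patterns.get(word[:i] + "*" + word[i + 1:], []):
            if cand != word:
                result.add(cand)
    return result
```

Building the map costs O(N · L) time and space for N words of length L, after which each neighbor query touches only the words that can actually match.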
Complexity Analysis
| Metric | Value |
|---|---|
| Time | O(N · L · 26) for BFS, plus output-sensitive backtracking |
| Space | O(N · L) for the hash tables, plus the output sequences |
Here N is the number of words and L their length. BFS visits each word at most once and tries 26 · L single-character edits per word, so the search phase is polynomial; the backtracking phase is output-sensitive, and in the worst case the number of shortest sequences grows exponentially with sequence length. Space covers the BFS queue, the distance and predecessor hash tables, the recursion stack, and the sequences themselves.
What Interviewers Usually Probe
- Do you handle cycles and repeated words to avoid redundant paths in backtracking?
- Can you optimize neighbor lookups using hash tables to reduce time complexity?
- Will you ensure that all returned sequences are the minimal length possible?
Common Pitfalls or Variants
Common pitfalls
- Failing to prune paths not on the shortest sequence, leading to TLE or incorrect extra sequences.
- Revisiting words within the same transformation path, causing cycles or duplicate sequences.
- Ignoring BFS distance levels in backtracking, which can include longer-than-necessary transformations.
Follow-up variants
- Return only the count of distinct shortest transformation sequences instead of the sequences themselves.
- Modify the problem to allow transformations that change up to two characters at a time, adjusting BFS and backtracking logic.
- Find the shortest transformation path from beginWord to endWord while allowing intermediate words that are not in wordList if necessary.
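The first follow-up (counting rather than enumerating) can be handled without backtracking at all, by carrying a path counter alongside the BFS distance map. A sketch under the standard one-letter-edit rule; the function name is an assumption:

```python
from collections import deque

def count_shortest_sequences(begin_word, end_word, word_list):
    """Count distinct shortest transformation sequences without
    materializing them: ways[w] accumulates shortest-path counts."""
    words = set(word_list)
    if end_word not in words:
        return 0
    dist = {begin_word: 0}
    ways = {begin_word: 1}
    q = deque([begin_word])
    while q:
        cur = q.popleft()
        chars = list(cur)
        for i, original in enumerate(chars):
            for c in "abcdefghijklmnopqrstuvwxyz":
                chars[i] = c
                nxt = "".join(chars)
                if nxt in words:
                    if nxt not in dist:
                        # First visit: this is nxt's shortest level.
                        dist[nxt] = dist[cur] + 1
                        ways[nxt] = ways[cur]
                        q.append(nxt)
                    elif dist[nxt] == dist[cur] + 1:
                        # Another shortest route into nxt.
                        ways[nxt] += ways[cur]
            chars[i] = original
    return ways.get(end_word, 0)
```

This avoids the exponential output cost of enumerating every sequence, since only one integer per word is maintained.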
FAQ
What is the best approach pattern for Word Ladder II?
The problem is best solved using a backtracking search with pruning after BFS distance mapping, ensuring all returned sequences are minimal.
How does BFS help in generating shortest sequences?
BFS computes the minimal distance from beginWord to all reachable words, which guides backtracking to explore only paths that could yield shortest transformations.
Why is a hash table important in this problem?
Hash tables provide O(1) lookup for valid word neighbors, preventing repeated scans of wordList during BFS or backtracking and improving overall efficiency.
Can there be duplicate sequences in the output?
Proper pruning and BFS distance checking prevent duplicates, but failing to respect distance levels or revisiting words can produce repeated sequences.
What are common mistakes in Word Ladder II backtracking?
Common errors include not pruning longer paths, revisiting words in the same sequence, and ignoring BFS distance mapping, leading to TLE or incorrect results.
Solution
Solution 1
#### Python3
```python
from collections import defaultdict, deque
from typing import List


class Solution:
    def findLadders(
        self, beginWord: str, endWord: str, wordList: List[str]
    ) -> List[List[str]]:
        def dfs(path, cur):
            # Walk the predecessor graph backwards from endWord to
            # beginWord, emitting each completed path in forward order.
            if cur == beginWord:
                ans.append(path[::-1])
                return
            for precursor in prev[cur]:
                path.append(precursor)
                dfs(path, precursor)
                path.pop()

        ans = []
        words = set(wordList)
        if endWord not in words:
            return ans
        words.discard(beginWord)
        dist = {beginWord: 0}
        prev = defaultdict(set)
        q = deque([beginWord])
        found = False
        step = 0
        while q and not found:
            step += 1
            for _ in range(len(q)):  # process exactly one BFS level
                p = q.popleft()
                s = list(p)
                for i in range(len(s)):
                    ch = s[i]
                    for j in range(26):
                        s[i] = chr(ord('a') + j)
                        t = ''.join(s)
                        if dist.get(t, 0) == step:
                            # t was already reached at this level via
                            # another word; record p as an extra precursor.
                            prev[t].add(p)
                        if t not in words:
                            continue
                        prev[t].add(p)
                        words.discard(t)  # never enqueue a word twice
                        q.append(t)
                        dist[t] = step
                        if endWord == t:
                            found = True
                    s[i] = ch
        if found:
            path = [endWord]
            dfs(path, endWord)
        return ans
```