Longest Common Prefix of K Strings After Removal
Problem
You are given an array of strings words and an integer k.
For each index i in the range [0, words.length - 1], find the maximum length of a common prefix shared by any k strings (chosen at distinct indices) from the array that remains after removing the ith element.
Return an array answer, where answer[i] is the answer for the ith removal. If removing the ith element leaves fewer than k strings, answer[i] is 0.
Examples
Example 1
Input: words = ["jump","run","run","jump","run"], k = 2
Output: [3,4,4,3,4]
Explanation:
* Removing index 0 (`"jump"`):
* `words` becomes: `["run", "run", "jump", "run"]`. `"run"` occurs 3 times. Choosing any two gives the longest common prefix `"run"` (length 3).
* Removing index 1 (`"run"`):
* `words` becomes: `["jump", "run", "jump", "run"]`. `"jump"` occurs twice. Choosing these two gives the longest common prefix `"jump"` (length 4).
* Removing index 2 (`"run"`):
* `words` becomes: `["jump", "run", "jump", "run"]`. `"jump"` occurs twice. Choosing these two gives the longest common prefix `"jump"` (length 4).
* Removing index 3 (`"jump"`):
* `words` becomes: `["jump", "run", "run", "run"]`. `"run"` occurs 3 times. Choosing any two gives the longest common prefix `"run"` (length 3).
* Removing index 4 (`"run"`):
* `words` becomes: `["jump", "run", "run", "jump"]`. `"jump"` occurs twice. Choosing these two gives the longest common prefix `"jump"` (length 4).
Example 2
Input: words = ["dog","racer","car"], k = 2
Output: [0,0,0]
Explanation:
* Removing any index results in an answer of 0.
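The two examples can be sanity-checked with a small brute force that tries every k-subset of the remaining words. It enumerates all combinations, so it is only usable for tiny inputs; the function names here are illustrative, not part of the problem's API.

```python
from itertools import combinations


def lcp_len(strs) -> int:
    """Length of the longest common prefix of a group of strings."""
    prefix_len = 0
    for chars in zip(*strs):
        if len(set(chars)) != 1:  # characters at this position differ
            break
        prefix_len += 1
    return prefix_len


def longest_common_prefix_brute(words: list[str], k: int) -> list[int]:
    answer = []
    for i in range(len(words)):
        rest = words[:i] + words[i + 1:]
        # If fewer than k strings remain, combinations() yields nothing
        # and the answer stays 0, matching the problem statement.
        best = 0
        for group in combinations(rest, k):
            best = max(best, lcp_len(group))
        answer.append(best)
    return answer


print(longest_common_prefix_brute(["jump", "run", "run", "jump", "run"], 2))  # [3, 4, 4, 3, 4]
print(longest_common_prefix_brute(["dog", "racer", "car"], 2))                # [0, 0, 0]
```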
Constraints
- 1 <= k <= words.length <= 10^5
- 1 <= words[i].length <= 10^4
- words[i] consists of lowercase English letters.
- The sum of words[i].length is at most 10^5.
Solution
Method 1 – Trie with Dynamic Frequency Tracking
Intuition
The key insight is to use a modified Trie data structure where each node tracks the frequency of words passing through it. We can efficiently determine the longest common prefix among k strings by maintaining a global frequency map and a sorted set of valid prefix lengths.
The strategy: build a complete Trie with all words first; then, for each removal, temporarily erase that word from the Trie, read off the maximum valid prefix length, and reinsert it.
Approach
- Enhanced Trie structure: create a Trie where each node carries a frequency counter recording how many words pass through that node.
- Global tracking: maintain a frequency map counting, for each prefix length, how many nodes at that depth have frequency >= k, plus a descending-ordered set for quick access to the maximum valid prefix length.
- Initial population: insert all words into the Trie. During insertion, when a node's frequency reaches k, increment the count for that prefix length in the global map and add the length to the sorted set.
- Simulation process: for each word to be "removed":
  - Temporarily erase the word from the Trie.
  - Whenever a node's frequency drops below k, decrement the corresponding length count in the map.
  - If a length count reaches zero, remove that length from the sorted set.
  - Record the maximum valid prefix length (the largest element in the set).
  - Reinsert the word into the Trie.
- Result extraction: after each erase, the maximum element of the sorted set (or 0 if the set is empty) is the answer for that removal scenario.
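The bookkeeping above can be sketched without a Trie: keep a counter over every prefix of every word, and on each removal decrement that word's prefixes, scan for the longest prefix with count >= k, then restore. This maintains the same invariant the Trie tracks incrementally; the linear scan per step makes it slower, and the function name is illustrative.

```python
from collections import Counter


def longest_common_prefix_simulated(words: list[str], k: int) -> list[int]:
    # prefix_count[p] = how many words in the array have prefix p
    prefix_count = Counter(w[:j] for w in words for j in range(1, len(w) + 1))

    answer = []
    for word in words:
        # Temporarily remove this word's prefixes...
        for j in range(1, len(word) + 1):
            prefix_count[word[:j]] -= 1
        # ...take the longest prefix still shared by at least k words...
        best = max((len(p) for p, c in prefix_count.items() if c >= k),
                   default=0)
        answer.append(best)
        # ...and restore them.
        for j in range(1, len(word) + 1):
            prefix_count[word[:j]] += 1
    return answer
```

The optimized solution replaces the full scan for `best` with the ordered set of valid lengths, so each removal step costs only the length of the removed word plus a logarithmic lookup.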
Code
C++
class Solution {
private:
    // Must be inline static: the nested Trie class below cannot access
    // non-static members of the enclosing Solution.
    inline static map<int, int> lengthFreq;
    inline static set<int, greater<int>> validLengths;
struct TrieNode {
TrieNode* children[26];
int frequency;
bool isEnd;
TrieNode() {
for (int i = 0; i < 26; i++) {
children[i] = nullptr;
}
frequency = 0;
isEnd = false;
}
bool containsKey(char ch) {
return children[ch - 'a'] != nullptr;
}
void put(char ch, TrieNode* node) {
children[ch - 'a'] = node;
}
TrieNode* get(char ch) {
return children[ch - 'a'];
}
};
class Trie {
public:
TrieNode* root;
Trie() {
root = new TrieNode();
}
void insert(string& word, int k) {
TrieNode* node = root;
for (int i = 0; i < word.size(); i++) {
if (!node->containsKey(word[i])) {
node->put(word[i], new TrieNode());
}
node = node->get(word[i]);
node->frequency++;
if (node->frequency >= k) {
lengthFreq[i + 1]++;
if (lengthFreq[i + 1] == 1) {
validLengths.insert(i + 1);
}
}
}
node->isEnd = true;
}
void erase(string& word, int k) {
TrieNode* node = root;
for (int i = 0; i < word.size(); i++) {
node = node->get(word[i]);
node->frequency--;
if (node->frequency == k - 1) {
lengthFreq[i + 1]--;
if (lengthFreq[i + 1] == 0) {
validLengths.erase(i + 1);
}
}
}
}
};
public:
vector<int> longestCommonPrefix(vector<string>& words, int k) {
lengthFreq.clear();
validLengths.clear();
Trie* trie = new Trie();
vector<int> result;
// Insert all words initially
for (int i = 0; i < words.size(); i++) {
trie->insert(words[i], k);
}
// Process each word removal
for (int i = 0; i < words.size(); i++) {
trie->erase(words[i], k);
if (validLengths.empty()) {
result.push_back(0);
} else {
result.push_back(*validLengths.begin());
}
trie->insert(words[i], k);
}
return result;
}
};
Java
class Solution {
private Map<Integer, Integer> lengthFreq = new HashMap<>();
private TreeSet<Integer> validLengths = new TreeSet<>(Collections.reverseOrder());
class TrieNode {
TrieNode[] children;
int frequency;
boolean isEnd;
public TrieNode() {
children = new TrieNode[26];
frequency = 0;
isEnd = false;
}
public boolean containsKey(char ch) {
return children[ch - 'a'] != null;
}
public void put(char ch, TrieNode node) {
children[ch - 'a'] = node;
}
public TrieNode get(char ch) {
return children[ch - 'a'];
}
}
class Trie {
TrieNode root;
public Trie() {
root = new TrieNode();
}
public void insert(String word, int k) {
TrieNode node = root;
for (int i = 0; i < word.length(); i++) {
char ch = word.charAt(i);
if (!node.containsKey(ch)) {
node.put(ch, new TrieNode());
}
node = node.get(ch);
node.frequency++;
if (node.frequency >= k) {
lengthFreq.put(i + 1, lengthFreq.getOrDefault(i + 1, 0) + 1);
if (lengthFreq.get(i + 1) == 1) {
validLengths.add(i + 1);
}
}
}
node.isEnd = true;
}
public void erase(String word, int k) {
TrieNode node = root;
for (int i = 0; i < word.length(); i++) {
char ch = word.charAt(i);
node = node.get(ch);
node.frequency--;
if (node.frequency == k - 1) {
lengthFreq.put(i + 1, lengthFreq.get(i + 1) - 1);
if (lengthFreq.get(i + 1) == 0) {
validLengths.remove(i + 1);
}
}
}
}
}
public int[] longestCommonPrefix(String[] words, int k) {
lengthFreq.clear();
validLengths.clear();
Trie trie = new Trie();
int[] result = new int[words.length];
// Insert all words initially
for (String word : words) {
trie.insert(word, k);
}
// Process each word removal
for (int i = 0; i < words.length; i++) {
trie.erase(words[i], k);
if (validLengths.isEmpty()) {
result[i] = 0;
} else {
result[i] = validLengths.first();
}
trie.insert(words[i], k);
}
return result;
}
}
Python
from collections import defaultdict
import bisect
class TrieNode:
def __init__(self):
self.children = {}
self.frequency = 0
self.is_end = False
class Solution:
    def longestCommonPrefix(self, words: list[str], k: int) -> list[int]:
        # Closure variables shared with the nested Trie below. (Inside the
        # nested class, `self` is the Trie instance, not this Solution, so
        # attributes set on the Solution would not be reachable as `self.x`.)
        length_freq = defaultdict(int)
        valid_lengths = []  # negated lengths, kept sorted ascending

        class Trie:
            def __init__(self):
                self.root = TrieNode()

            def insert(self, word: str, k: int):
                node = self.root
                for i, char in enumerate(word):
                    if char not in node.children:
                        node.children[char] = TrieNode()
                    node = node.children[char]
                    node.frequency += 1
                    if node.frequency >= k:
                        length_freq[i + 1] += 1
                        if length_freq[i + 1] == 1:
                            # Store -(i + 1) so the largest length sits first
                            bisect.insort(valid_lengths, -(i + 1))
                node.is_end = True

            def erase(self, word: str, k: int):
                node = self.root
                for i, char in enumerate(word):
                    node = node.children[char]
                    node.frequency -= 1
                    if node.frequency == k - 1:
                        length_freq[i + 1] -= 1
                        if length_freq[i + 1] == 0:
                            valid_lengths.remove(-(i + 1))

        trie = Trie()
        result = []
        # Insert all words initially
        for word in words:
            trie.insert(word, k)
        # Process each word removal
        for word in words:
            trie.erase(word, k)
            if not valid_lengths:
                result.append(0)
            else:
                result.append(-valid_lengths[0])  # convert back from negative
            trie.insert(word, k)
        return result
Go
import (
	"sort"
)
type TrieNode struct {
children [26]*TrieNode
frequency int
isEnd bool
}
func NewTrieNode() *TrieNode {
return &TrieNode{
children: [26]*TrieNode{},
frequency: 0,
isEnd: false,
}
}
type Trie struct {
root *TrieNode
}
func NewTrie() *Trie {
return &Trie{root: NewTrieNode()}
}
func (t *Trie) insert(word string, k int, lengthFreq map[int]int, validLengths *[]int) {
node := t.root
for i, char := range word {
index := char - 'a'
if node.children[index] == nil {
node.children[index] = NewTrieNode()
}
node = node.children[index]
node.frequency++
if node.frequency >= k {
lengthFreq[i+1]++
if lengthFreq[i+1] == 1 {
// Insert in descending order
length := i + 1
pos := sort.Search(len(*validLengths), func(j int) bool {
return (*validLengths)[j] <= length
})
*validLengths = append(*validLengths, 0)
copy((*validLengths)[pos+1:], (*validLengths)[pos:])
(*validLengths)[pos] = length
}
}
}
node.isEnd = true
}
func (t *Trie) erase(word string, k int, lengthFreq map[int]int, validLengths *[]int) {
node := t.root
for i, char := range word {
index := char - 'a'
node = node.children[index]
node.frequency--
if node.frequency == k-1 {
lengthFreq[i+1]--
if lengthFreq[i+1] == 0 {
length := i + 1
for j, val := range *validLengths {
if val == length {
*validLengths = append((*validLengths)[:j], (*validLengths)[j+1:]...)
break
}
}
}
}
}
}
func longestCommonPrefix(words []string, k int) []int {
lengthFreq := make(map[int]int)
validLengths := make([]int, 0)
trie := NewTrie()
result := make([]int, len(words))
// Insert all words initially
for _, word := range words {
trie.insert(word, k, lengthFreq, &validLengths)
}
// Process each word removal
for i, word := range words {
trie.erase(word, k, lengthFreq, &validLengths)
if len(validLengths) == 0 {
result[i] = 0
} else {
result[i] = validLengths[0]
}
trie.insert(word, k, lengthFreq, &validLengths)
}
return result
}
Kotlin
import java.util.*
class Solution {
private val lengthFreq = mutableMapOf<Int, Int>()
private val validLengths = TreeSet<Int>(Collections.reverseOrder())
class TrieNode {
val children = Array<TrieNode?>(26) { null }
var frequency = 0
var isEnd = false
fun containsKey(ch: Char): Boolean {
return children[ch - 'a'] != null
}
fun put(ch: Char, node: TrieNode) {
children[ch - 'a'] = node
}
fun get(ch: Char): TrieNode? {
return children[ch - 'a']
}
}
inner class Trie {
val root = TrieNode()
fun insert(word: String, k: Int) {
var node = root
for (i in word.indices) {
val ch = word[i]
if (!node.containsKey(ch)) {
node.put(ch, TrieNode())
}
node = node.get(ch)!!
node.frequency++
if (node.frequency >= k) {
lengthFreq[i + 1] = lengthFreq.getOrDefault(i + 1, 0) + 1
if (lengthFreq[i + 1] == 1) {
validLengths.add(i + 1)
}
}
}
node.isEnd = true
}
fun erase(word: String, k: Int) {
var node = root
for (i in word.indices) {
val ch = word[i]
node = node.get(ch)!!
node.frequency--
if (node.frequency == k - 1) {
lengthFreq[i + 1] = lengthFreq[i + 1]!! - 1
if (lengthFreq[i + 1] == 0) {
validLengths.remove(i + 1)
}
}
}
}
}
fun longestCommonPrefix(words: Array<String>, k: Int): IntArray {
lengthFreq.clear()
validLengths.clear()
val trie = Trie()
val result = IntArray(words.size)
// Insert all words initially
for (word in words) {
trie.insert(word, k)
}
// Process each word removal
for (i in words.indices) {
trie.erase(words[i], k)
result[i] = if (validLengths.isEmpty()) {
0
} else {
validLengths.first()
}
trie.insert(words[i], k)
}
return result
}
}
Rust
use std::collections::{HashMap, BTreeSet};
use std::cmp::Reverse;
struct TrieNode {
children: [Option<Box<TrieNode>>; 26],
frequency: i32,
is_end: bool,
}
impl TrieNode {
fn new() -> Self {
TrieNode {
children: Default::default(),
frequency: 0,
is_end: false,
}
}
fn contains_key(&self, ch: char) -> bool {
let index = (ch as u8 - b'a') as usize;
self.children[index].is_some()
}
fn put(&mut self, ch: char, node: TrieNode) {
let index = (ch as u8 - b'a') as usize;
self.children[index] = Some(Box::new(node));
}
fn get_mut(&mut self, ch: char) -> &mut TrieNode {
let index = (ch as u8 - b'a') as usize;
self.children[index].as_mut().unwrap()
}
}
struct Trie {
root: TrieNode,
}
impl Trie {
fn new() -> Self {
Trie {
root: TrieNode::new(),
}
}
fn insert(&mut self, word: &str, k: i32, length_freq: &mut HashMap<i32, i32>, valid_lengths: &mut BTreeSet<Reverse<i32>>) {
let mut node = &mut self.root;
for (i, ch) in word.chars().enumerate() {
if !node.contains_key(ch) {
node.put(ch, TrieNode::new());
}
node = node.get_mut(ch);
node.frequency += 1;
if node.frequency >= k {
let length = (i + 1) as i32;
*length_freq.entry(length).or_insert(0) += 1;
if length_freq[&length] == 1 {
valid_lengths.insert(Reverse(length));
}
}
}
node.is_end = true;
}
fn erase(&mut self, word: &str, k: i32, length_freq: &mut HashMap<i32, i32>, valid_lengths: &mut BTreeSet<Reverse<i32>>) {
let mut node = &mut self.root;
for (i, ch) in word.chars().enumerate() {
node = node.get_mut(ch);
node.frequency -= 1;
if node.frequency == k - 1 {
let length = (i + 1) as i32;
*length_freq.get_mut(&length).unwrap() -= 1;
if length_freq[&length] == 0 {
valid_lengths.remove(&Reverse(length));
}
}
}
}
}
struct Solution; // placeholder type, provided by the judge on LeetCode
impl Solution {
pub fn longest_common_prefix(words: Vec<String>, k: i32) -> Vec<i32> {
let mut length_freq: HashMap<i32, i32> = HashMap::new();
let mut valid_lengths: BTreeSet<Reverse<i32>> = BTreeSet::new();
let mut trie = Trie::new();
let mut result = Vec::with_capacity(words.len());
// Insert all words initially
for word in &words {
trie.insert(word, k, &mut length_freq, &mut valid_lengths);
}
// Process each word removal
for word in &words {
trie.erase(word, k, &mut length_freq, &mut valid_lengths);
if valid_lengths.is_empty() {
result.push(0);
} else {
result.push(valid_lengths.iter().next().unwrap().0);
}
trie.insert(word, k, &mut length_freq, &mut valid_lengths);
}
result
}
}
TypeScript
class TrieNode {
children: Map<string, TrieNode>;
frequency: number;
isEnd: boolean;
constructor() {
this.children = new Map();
this.frequency = 0;
this.isEnd = false;
}
containsKey(ch: string): boolean {
return this.children.has(ch);
}
put(ch: string, node: TrieNode): void {
this.children.set(ch, node);
}
get(ch: string): TrieNode | undefined {
return this.children.get(ch);
}
}
class Trie {
root: TrieNode;
constructor() {
this.root = new TrieNode();
}
insert(word: string, k: number, lengthFreq: Map<number, number>, validLengths: number[]): void {
let node = this.root;
for (let i = 0; i < word.length; i++) {
const ch = word[i];
if (!node.containsKey(ch)) {
node.put(ch, new TrieNode());
}
node = node.get(ch)!;
node.frequency++;
if (node.frequency >= k) {
const length = i + 1;
lengthFreq.set(length, (lengthFreq.get(length) || 0) + 1);
if (lengthFreq.get(length) === 1) {
// Insert in descending order
let insertIndex = 0;
while (insertIndex < validLengths.length && validLengths[insertIndex] > length) {
insertIndex++;
}
validLengths.splice(insertIndex, 0, length);
}
}
}
node.isEnd = true;
}
erase(word: string, k: number, lengthFreq: Map<number, number>, validLengths: number[]): void {
let node = this.root;
for (let i = 0; i < word.length; i++) {
const ch = word[i];
node = node.get(ch)!;
node.frequency--;
if (node.frequency === k - 1) {
const length = i + 1;
lengthFreq.set(length, lengthFreq.get(length)! - 1);
if (lengthFreq.get(length) === 0) {
const index = validLengths.indexOf(length);
if (index > -1) {
validLengths.splice(index, 1);
}
}
}
}
}
}
function longestCommonPrefix(words: string[], k: number): number[] {
const lengthFreq = new Map<number, number>();
const validLengths: number[] = [];
const trie = new Trie();
const result: number[] = [];
// Insert all words initially
for (const word of words) {
trie.insert(word, k, lengthFreq, validLengths);
}
// Process each word removal
for (const word of words) {
trie.erase(word, k, lengthFreq, validLengths);
if (validLengths.length === 0) {
result.push(0);
} else {
result.push(validLengths[0]);
}
trie.insert(word, k, lengthFreq, validLengths);
}
return result;
}
Complexity
- ⏰ Time complexity: O(n × L × log m), where n is the number of words, L is the average word length, and m is the number of distinct prefix lengths tracked. Each word passes through the Trie three times in total (one initial insert, then an erase/reinsert pair during its own removal step), and each Trie step may also update the frequency map and the ordered set, which costs O(log m).
- 🧺 Space complexity: O(n × L), for the Trie over all prefixes plus the frequency map and the ordered set.