Changes from 2 commits
`Sprint-2/improve_with_precomputing/README.md` (20 additions, 0 deletions)

@@ -3,3 +3,23 @@

Each directory contains implementations that can be made faster by using pre-computing.
The problem they solve is described in docstrings and tests.

Speed up the solutions by introducing precomputing. You may rewrite as much of the solution as you want, and may add any tests, but must not modify existing tests.

## Improvements Made

### common_prefix

**Original Approach (Nested Loops):**
- Time Complexity: **O(n² × m)** - compares every string with every other string, giving O(n²) pairs, with each comparison taking up to O(m)

**Optimized Approach (Sort + Adjacent Pairs):**
- Time Complexity: **O(n × m × log n + n × m)** - sorting performs O(n log n) string comparisons, each costing up to O(m); afterwards only the n - 1 adjacent pairs are compared, each in O(m)

> **@cjyuan (reviewer):** If we are factoring in the length of the strings, m, then the complexity of sorting won't just be O(n log n). n log n measures only the number of comparisons needed.

- **Why better:** After sorting, any two strings that share the longest common prefix end up adjacent, so only adjacent pairs need to be compared. This reduces the number of pair comparisons from O(n²) to O(n).

> **@cjyuan (reviewer):** What do you mean by first and last strings? That sounds like only two comparisons are needed to find the longest prefix after sorting.

> **Author:** Thanks @cjyuan for the detailed feedback! You're absolutely right on both points. I've updated the README to correct these issues.

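The sort-plus-adjacent-pairs idea can be sketched self-contained. The function names below are illustrative, and `find_common_prefix` here is a stand-in for the solution's helper of the same name, which is not shown in full in this diff:

```python
def find_common_prefix(a: str, b: str) -> str:
    # Illustrative stand-in: the longest string that both a and b start with.
    prefix = []
    for char_a, char_b in zip(a, b):
        if char_a != char_b:
            break
        prefix.append(char_a)
    return "".join(prefix)


def longest_common_prefix_of_any_pair(strings: list) -> str:
    # Hypothetical name, for illustration only. After sorting, the two
    # strings sharing the longest prefix sit next to each other, so comparing
    # the n - 1 adjacent pairs replaces all n * (n - 1) / 2 pairwise checks.
    sorted_strings = sorted(strings)
    longest = ""
    for left, right in zip(sorted_strings, sorted_strings[1:]):
        common = find_common_prefix(left, right)
        if len(common) > len(longest):
            longest = common
    return longest


print(longest_common_prefix_of_any_pair(["flight", "flow", "apple", "flower"]))  # prints flow
```

Lexicographic order groups strings by shared prefixes, which is why no non-adjacent pair can share a longer prefix than the best adjacent pair does.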

### count_letters

**Original Approach (Repeated String Checks):**
- Time Complexity: **O(n²)** - for each letter, checking whether its lowercase version exists in the string is O(n), and this check is done up to n times

**Optimized Approach (Precomputed Set):**
- Time Complexity: **O(n)** - precompute the lowercase letters into a set in O(n); each subsequent membership check is O(1)
- **Why better:** Replaces repeated O(n) string searches with single O(n) precomputation and O(1) set lookups, reducing overall from O(n²) to O(n).
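As a self-contained illustration of the precomputed-set idea (the name below is hypothetical, and the built-in `str.isupper` stands in for the solution's `is_upper_case` helper, which is not shown in this diff):

```python
def count_upper_only_letters(s: str) -> int:
    # Illustrative standalone version of the count_letters approach.
    # Precompute the lowercase letters once: O(n) work up front...
    lower_letters = {letter for letter in s if letter.islower()}
    only_upper = set()
    for letter in s:
        # ...so each membership check here is O(1) instead of an O(n) scan of s.
        if letter.isupper() and letter.lower() not in lower_letters:
            only_upper.add(letter)
    return len(only_upper)


print(count_upper_only_letters("Hello World"))  # prints 2 (H and W occur only in upper case)
```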
`common_prefix` solution file:

@@ -2,17 +2,16 @@
```python
def find_longest_common_prefix(strings: List[str]):
    """
    find_longest_common_prefix returns the longest string common at the start of any two strings in the passed list.

    In the event that an empty list, a list containing one string, or a list of strings with no common prefixes is passed, the empty string will be returned.
    """
    if len(strings) < 2:
        return ""
    sorted_strings = sorted(strings)
    longest = ""
    # find_common_prefix is defined elsewhere in this file (truncated in this diff view)
    for index in range(len(sorted_strings) - 1):
        common = find_common_prefix(sorted_strings[index], sorted_strings[index + 1])
        if len(common) > len(longest):
            longest = common
    return longest
```
`count_letters` solution file:

@@ -1,11 +1,10 @@
```python
def count_letters(s: str) -> int:
    """
    count_letters returns the number of letters which only occur in upper case in the passed string.
    """
    lower_letters = {letter for letter in s if letter.islower()}
    only_upper = set()
    for letter in s:
        if is_upper_case(letter):  # helper defined elsewhere in this file (truncated in this diff view)
            if letter.lower() not in lower_letters:
                only_upper.add(letter)
    return len(only_upper)
```