
Coding Challenge: Two Sum

Last updated: Jul 7, 2024

The Two Sum coding challenge is another easy challenge that can get away from you fast if you aren't careful, i.e. it can slip into O(n^2).

Coding challenges are not only a fact of life for technical interviews and assessments, they are also fun! Mastering the process of solving coding challenges will give you the confidence to go into any technical interview or assessment and succeed. And we all know confidence is half the battle when interviewing. In this series, I am tackling various coding challenges and explaining my process along the way. For this challenge, I am using JavaScript since that is my primary focus these days.

The Challenge: Two Number Sum

This is another easy challenge with a gotcha if you aren't careful. It can quickly grow to a Big O of O(n^2), and that's never where you want to be.

Given a number, A, and an array of numbers, determine if any two numbers within the array sum to the number A.

Time Management

Time management is an important aspect of interviews and assessments and should be taken into consideration right from the start. It is better to have completely coded correct answers to all the challenges than to have one perfect answer and the rest unfinished. For that reason, I usually start with the "brute force" method. Just get a solution down. Worry about efficiency and elegance later. And yes, Big O considerations come after we get through the brute force phase.

Brute force

Our not-so-great brute force version:

function isTwoSum(numbers, A) {
  for (let i = 0; i < numbers.length; i++) {
    const result = A - numbers[i];
    // Look for the complement at a *different* index, so we never
    // pair an element with itself but still catch duplicates
    // (e.g. 6 = 3 + 3 in [3, 3]).
    if (numbers.some((n, j) => j !== i && n === result)) {
      console.log(`${A} = ${numbers[i]} + ${result}`);
      return true;
    }
  }
  return false;
}

At first glance, that looks fine, but as I alluded to, this results in a Big O of O(n^2). That means as the length of the array of numbers increases, processing time goes up quadratically. This is bad, but what does that really mean? Before going on to our optimizations, let's take a deeper dive into O(n^2). It's that important.

Big O

In Big O notation, O(n^2) (read as "O of n squared") describes an algorithm's time complexity that grows quadratically with the size of the input. Here's what it means in detail:

Explanation

Growth Rate: If an algorithm has a time complexity of O(n^2), it means that the time required to complete the algorithm is proportional to the square of the size of the input. In other words, if you double the size of the input, the time required will increase by a factor of four (since (2n)^2 = 4n^2).

Behavior: Algorithms with O(n^2) complexity typically involve nested loops where each loop runs n times. For example:

function exampleQuadraticAlgorithm(array) {
  for (let i = 0; i < array.length; i++) {
    for (let j = 0; j < array.length; j++) {
      // Some constant-time operation
    }
  }
}

In this example, the outer loop runs n times, and for each iteration of the outer loop, the inner loop also runs n times, resulting in n * n = n^2 total iterations.

Practical Implications

Performance: O(n^2) algorithms can become very slow as the input size grows. For small inputs, the performance might be acceptable, but for large inputs, the execution time can become impractically long.

Examples: Common examples of O(n^2) algorithms include certain sorting algorithms like bubble sort, selection sort, and insertion sort (in their worst-case scenarios), and algorithms that check all pairs of elements in a dataset, such as checking for duplicate elements in a naive manner.
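To make that concrete, here is a minimal bubble sort sketch, the classic O(n^2) example. The nested sweeps over the array are exactly the pattern described above:

```javascript
// Bubble sort: repeatedly sweep the array, swapping adjacent
// out-of-order elements. Worst case: roughly n passes of roughly n
// comparisons each, i.e. O(n^2) comparisons overall.
function bubbleSort(array) {
  const a = [...array]; // work on a copy; don't mutate the caller's array
  for (let i = 0; i < a.length; i++) {
    for (let j = 0; j < a.length - i - 1; j++) {
      if (a[j] > a[j + 1]) {
        [a[j], a[j + 1]] = [a[j + 1], a[j]]; // swap
      }
    }
  }
  return a;
}

console.log(bubbleSort([5, 1, 4, 2, 3])); // [1, 2, 3, 4, 5]
```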

Comparison with Other Complexities

Better than O(n^3): An algorithm with O(n^2) complexity is more efficient than one with O(n^3) complexity.

Worse than O(n log n): An algorithm with O(n^2) complexity is less efficient than one with O(n log n). O(n log n) algorithms are common among more efficient sorting algorithms like merge sort and quicksort.

Much Worse than O(n): O(n) algorithms, which grow linearly with the input size, are much more efficient.

Visualization

Imagine you have an input of size n:

  • For O(n), if n = 10, it performs about 10 operations.
  • For O(n^2), if n = 10, it performs about 10^2 = 100 operations.

As n increases, the difference becomes more pronounced:

  • For n = 100, O(n) performs 100 operations, while O(n^2) performs 10,000 operations.
  • For n = 1000, O(n) performs 1,000 operations, while O(n^2) performs 1,000,000 operations.
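Those numbers are easy to generate yourself; a quick sketch:

```javascript
// Operation counts for linear vs. quadratic growth at a given n.
function opCounts(n) {
  return { linear: n, quadratic: n * n };
}

for (const n of [10, 100, 1000]) {
  const { linear, quadratic } = opCounts(n);
  console.log(`n = ${n}: O(n) ~ ${linear} ops, O(n^2) ~ ${quadratic} ops`);
}
```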

This quadratic growth highlights why O(n^2) algorithms are impractical for large input sizes.

Refactor

Ok, time to refactor:

function isTwoSum(numbers, A) {
  const seen = new Set();
  for (const num of numbers) {
    const result = A - num;
    // `seen` only ever holds *earlier* elements, so a hit here is
    // always a different element -- even when result === num
    // (e.g. 6 = 3 + 3 in [3, 3]).
    if (seen.has(result)) {
      console.log(`${A} = ${num} + ${result}`);
      return true;
    }
    seen.add(num);
  }
  return false;
}

Here we track the numbers we have already seen in a Set. Sets are great because lookups are O(1) on average, and we build the Set as we go during a single pass over the array. That eliminates a whole dimension of operations. Thus our new function is now O(n), excellent. Let's take a look at the details.
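A few quick calls are a good sanity check on the refactor. The function is repeated here (lightly condensed) so the snippet runs on its own; the input arrays are just made-up examples:

```javascript
// Set-based Two Sum: one pass, O(1) average-time lookups.
function isTwoSum(numbers, A) {
  const seen = new Set();
  for (const num of numbers) {
    const result = A - num;
    if (seen.has(result)) {
      console.log(`${A} = ${num} + ${result}`);
      return true;
    }
    seen.add(num);
  }
  return false;
}

console.log(isTwoSum([3, 5, -4, 8, 11, 1, -1, 6], 10)); // true  (11 + -1)
console.log(isTwoSum([1, 2, 3], 100));                  // false
```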

Time Complexity Analysis

Initialization: const seen = new Set();

Initializing an empty set takes O(1) time.

Iterating Through the Array: for (const num of numbers)

This loop runs n times, where n is the length of the numbers array.

Set Operations:

  • const result = A - num; is a constant-time operation, O(1).
  • seen.has(result) is an average O(1) operation.
  • seen.add(num) is also an average O(1) operation.

Therefore, the total time complexity is:

Iterating through the array: O(n)

Each iteration performs constant-time operations: O(1)

Combining these steps, the overall time complexity is O(n).

Space Complexity Analysis

Set Storage: seen stores up to n elements, requiring O(n) space.

Other Variables: The variables num and result require O(1) space each.

Therefore, the space complexity is O(n).

Optimizations

The function is already efficient, with a time complexity of O(n) and a space complexity of O(n). It uses a Set to track the elements seen so far, which allows for O(1) average-time checks and insertions. This is a common and efficient approach for solving the Two Sum problem.

Conclusion

This coding challenge contained a big red flag, the kind interviewers and assessments watch for, that we want to avoid at all costs. The ability to understand the Big O of your functions and algorithms is key. We went a little deeper into that here, and I hope it helped you understand the importance of optimizing your code. We'll go even further as we continue tackling these coding challenges.
