Two Pointers

Difficulty: Easy–Medium

Imagine two people standing at opposite ends of a bridge, walking toward each other. They can see the same bridge, but from different perspectives. When they meet, they have collectively examined every plank. That is the Two Pointers pattern: two references scanning a data structure simultaneously, eliminating possibilities with every step.

Frequency: Very High
Google, Meta, Amazon, Apple, Microsoft

[Interactive demo: watch two pointers L and R scan the sorted array [1, 3, 5, 7, 9, 11] for a pair that sums to 8; the first comparison is arr[0] + arr[5] = 1 + 11 = 12 (target = 8).]

Why This Pattern Exists

Every pattern in computer science exists because it solves a problem that brute force handles poorly. Two pointers is no exception. To truly understand it, you need to see what it replaces and why the replacement is so much better.

The Brute Force Problem

Consider the classic problem: “Find two numbers in a sorted array that sum to a target.” The naive approach uses two nested loops — try every possible pair:

brute_force_pair_sum.py
# BRUTE FORCE: Check every pair — O(n^2) time
def two_sum_brute(arr, target):
    for i in range(len(arr)):                # Pick first element
        for j in range(i + 1, len(arr)):     # Pick second element
            if arr[i] + arr[j] == target:
                return [i, j]
    return []

# For n = 10,000 elements: ~50,000,000 pair checks
# For n = 100,000 elements: ~5,000,000,000 pair checks (TOO SLOW)

The outer loop picks element i, and the inner loop checks every element after it. For an array of 10,000 elements, that is roughly 50 million comparisons. For 100,000 elements, it is 5 billion. This is O(n²) — and it completely ignores the fact that the array is sorted.

The Two Pointer Breakthrough: O(n²) to O(n)

Here is the insight that changes everything: in a sorted array, you do not need to try every pair. When you place one pointer at the start (smallest) and one at the end (largest), every comparison tells you exactly which direction to go. If the sum is too small, the only way to increase it is to move the left pointer right. If it is too large, the only way to decrease it is to move the right pointer left. You eliminate entire swaths of pairs with a single comparison.
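To see the contrast with the brute-force code above, here is a minimal sketch of the same pair search on a sorted array with two converging pointers (the function name two_sum_sorted is ours; the fully commented template appears later in this chapter):

two_sum_sorted_sketch.py
def two_sum_sorted(arr, target):
    # Sketch only: assumes arr is sorted in ascending order
    left, right = 0, len(arr) - 1
    while left < right:
        s = arr[left] + arr[right]
        if s == target:
            return [left, right]   # found a pair
        elif s < target:
            left += 1              # sum too small: need a larger value
        else:
            right -= 1             # sum too large: need a smaller value
    return []                      # no pair sums to target

# Example: two_sum_sorted([1, 3, 5, 7, 9, 11], 8) returns [0, 3] (1 + 7 = 8)

Each iteration discards one candidate permanently, so the scan finishes in at most n - 1 steps.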

Brute Force, O(n²): n = 100k → ~5 billion ops

Two Pointers, O(n): n = 100k → ~100k ops

That is a 50,000x speedup for n = 100,000. The difference between a solution that takes 5 seconds and one that takes 0.0001 seconds. This is not an incremental improvement — it is a fundamentally different approach.

Build Your Mental Model

Think of it like this: you are in a library where books are arranged by page count, smallest to largest. You need to find two books whose combined page count is exactly 500. Instead of pulling out every possible pair (brute force), you grab the thinnest book from the left end and the thickest from the right end. If their combined pages are too many, you swap the thick book for the next thinnest one on the right. If too few, you swap the thin book for the next thickest on the left. Each swap eliminates an entire shelf of possibilities. You work your way inward until you find the perfect pair — or confirm none exists.

The key principle: each pointer movement eliminates an entire row or column of the comparison matrix. When you move the left pointer right, you are saying “every pair involving the old left value is too small, so skip them all.” When you move the right pointer left, you are saying “every pair involving the old right value is too large, so skip them all.” That is why O(n²) collapses to O(n).

Three Core Variations at a Glance

Not all two-pointer problems look the same. There are three distinct variations, and recognizing which one to use is the first step to solving the problem.

| Variation | How Pointers Move | When to Use | Classic Problems |
| --- | --- | --- | --- |
| Opposite Direction | Start at both ends, converge toward center | Sorted array + find pair/range, palindrome check | Two Sum II, 3Sum, Container With Most Water, Valid Palindrome |
| Same Direction | Both start at left; slow = write, fast = read | In-place filtering, removing elements, partitioning | Remove Duplicates, Move Zeroes, Sort Colors |
| Fast / Slow | Same start, different speeds (1x vs 2x) | Cycle detection, finding midpoints in linked lists | Linked List Cycle, Find Middle Node, Happy Number |

The Unifying Principle

All three variations share the same core idea: use two references into a data structure to avoid redundant work. Instead of checking every element against every other element (O(n²)), each pointer movement eliminates multiple candidates at once. The pattern works because there is an ordering invariant (sorted values, structural position, or speed difference) that guarantees correctness.

Pattern Recognition

The first step to solving any problem is recognizing which tool to reach for. Here are the signals that scream “two pointers.”

Use this pattern when you see...

  • "Find a pair in a sorted array that sums to X"
  • "Determine if a string is a palindrome"
  • "Remove duplicates from a sorted array in-place"
  • "Merge two sorted arrays"
  • "Move all zeros to the end"
  • "Container with most water / trapping rain water"
  • "Three values that sum to zero (3Sum)"
  • "Partition array around a pivot"

Decision Flowchart

Is the input sorted / sortable?
  • No → consider other patterns.
  • Yes → do you need to find a pair or shrink a range? If so, use Two Pointers:
    • Opposite Direction → pair sum, 3Sum, Container With Most Water
    • Same Direction → remove duplicates, partition array
    • Fast / Slow → cycle detection, find middle node

Two-Pointer Problems

  • Two Sum II (sorted input)
  • Valid Palindrome
  • 3Sum / 4Sum
  • Container With Most Water
  • Remove Duplicates from Sorted Array

NOT Two-Pointer Problems

  • Two Sum (unsorted -- use hash map)
  • Subarray Sum Equals K (prefix sums)
  • Longest Substring Without Repeating (sliding window)
  • Median of Two Sorted Arrays (binary search)
  • Word Break (dynamic programming)

How It Works — Visual Walkthrough

We will find two numbers in [1, 2, 4, 6, 10, 13, 17] that sum to 10. This example shows how the pointers zigzag — sometimes moving left, sometimes right — until they converge on the answer.

| Step | Left (L) | Right (R) | Sum | vs Target | Action & Why |
| --- | --- | --- | --- | --- | --- |
| 1 | 0 (val=1) | 6 (val=17) | | | Initialize pointers at both ends |
| 2 | 0 (1) | 6 (17) | 18 | 18 > 10 | Sum too large → move R left to decrease |
| 3 | 0 (1) | 5 (13) | 14 | 14 > 10 | Still too large → move R left again |
| 4 | 0 (1) | 4 (10) | 11 | 11 > 10 | Just 1 over! Move R left once more |
| 5 | 0 (1) | 3 (6) | 7 | 7 < 10 | Now too small! Switch direction → move L right to increase |
| 6 | 1 (2) | 3 (6) | 8 | 8 < 10 | Still too small → move L right again |
| 7 | 2 (4) | 3 (6) | 10 | 10 = 10 | Found! Return [2, 3] → values 4 + 6 = 10 |

Notice the zigzag: the right pointer moved left 3 times (steps 2-4), then the direction switched and the left pointer moved right twice (steps 5-6), before the pointers met at the answer. This convergence pattern is the key insight.

Detailed Algorithm Walkthrough

Watch the pointers zigzag — moving right when the sum is too small, left when too large — until they converge on a pair summing to 10 in [1, 2, 4, 6, 10, 13, 17]. The initialization step is described below; the table above traces every iteration.


Initialize: place left pointer at index 0 (value 1) and right pointer at index 6 (value 17). Our target sum is 10. The pointers start at opposite ends of the sorted array.

left=0, right=6 | Setup complete

Why This Works

Because the array is sorted, moving the left pointer right always increases the sum, and moving the right pointer left always decreases it. This gives us a guarantee that we will never miss a valid pair, and we scan the entire array in O(n) time.

The Template — Reusable Code

Memorize these three skeletons. Every two-pointer problem is a variation of one of them.

1. Opposite Direction (converging pointers)

opposite_direction_template
def two_pointer_opposite(arr, target):
    """Opposite-direction two pointers on a SORTED array."""

    # ---- SECTION 1: INITIALIZATION ----
    # WHY start at extremes? In a sorted array, arr[0] is the smallest
    # and arr[-1] is the largest — this gives us maximum range to work with
    left, right = 0, len(arr) - 1

    # ---- SECTION 2: MAIN LOOP ----
    # WHY < not <=? When left == right, both point to the same element.
    # We need TWO distinct elements for a pair, so we stop before they meet.
    while left < right:
        current = arr[left] + arr[right]

        # ---- SECTION 3: DECISION LOGIC ----
        if current == target:
            return [left, right]   # Found the answer
        elif current < target:
            left += 1    # WHY move left? Sum is too small — we need a
                         # larger value, and moving left RIGHT gives us one
                         # (array is sorted ascending)
        else:
            right -= 1   # WHY move right? Sum is too large — we need a
                         # smaller value, and moving right LEFT gives us one

    # ---- SECTION 4: RETURN ----
    return []  # No valid pair found — we exhausted all combinations

Template Breakdown

1. Initialization

Place pointers at opposite ends. This maximizes the search range and leverages the sorted property — the smallest and largest values are the natural starting points.

2. Main Loop (while left < right)

We use strict < because when pointers meet, there is only one element left — you cannot form a pair from a single element. The loop guarantees termination because at least one pointer moves inward every iteration.

3. Decision Logic

Compare the current sum to the target. Move the pointer that corrects the problem: too small → increase by moving left right; too large → decrease by moving right left. This works because the array is sorted.

4. Return

If the loop completes without finding a match, no valid pair exists. We return an empty result.

How to Apply This Template

  1. Step 1: Identify what left and right represent in YOUR problem. For pair sum, they are indices. For container water, they are wall positions.
  2. Step 2: Define the condition to move left vs right. Ask: “What does moving each pointer do to the quantity I am tracking?”
  3. Step 3: Define what to track/return. Sum? Area? Count? The template skeleton stays the same — only the comparison and return value change.

When to Modify This Template

  • Palindrome check: Change the loop condition to left <= right (a single middle character is valid). Replace the sum logic with character comparison (see the sketch after this list).
  • 3Sum / K-Sum: Wrap this template inside a for loop that fixes one element. The target becomes -nums[i]. Add duplicate-skipping logic.
  • Closest pair (not exact): Instead of returning on ==, track the minimum difference and update the best answer. Always move one pointer.
  • Container With Most Water: Replace the sum comparison with an area calculation. The pointer-movement rule becomes “move the shorter wall.”
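Here is a minimal sketch of the palindrome variation described above (the name is_palindrome is ours; it skips non-alphanumeric characters, as the LeetCode version of the problem requires):

valid_palindrome_sketch.py
def is_palindrome(s):
    left, right = 0, len(s) - 1
    while left <= right:                 # <= per the modification above; a lone middle char is fine
        if not s[left].isalnum():
            left += 1                    # skip punctuation / spaces on the left
        elif not s[right].isalnum():
            right -= 1                   # skip punctuation / spaces on the right
        elif s[left].lower() != s[right].lower():
            return False                 # mismatch: not a palindrome
        else:
            left += 1                    # characters match: move both pointers inward
            right -= 1
    return True

# Example: is_palindrome("A man, a plan, a canal: Panama") returns True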

2. Same Direction (read / write pointers)

same_direction_template
def two_pointer_same(arr):
    """Same-direction: slow = write position, fast = read position."""

    # ---- SECTION 1: INITIALIZATION ----
    # WHY start slow at 0? Because position 0 is where the first valid
    # element will be written. slow always points to the NEXT write slot.
    slow = 0

    # ---- SECTION 2: MAIN LOOP ----
    # WHY use fast as reader? fast scans every element exactly once.
    # slow only advances when we find an element worth keeping.
    for fast in range(len(arr)):
        # ---- SECTION 3: CONDITION + UPDATE ----
        # WHY this structure? We separate "reading" (fast) from "writing"
        # (slow). Only elements passing the filter get written to the front.
        if should_keep(arr[fast]):    # Replace with your condition
            arr[slow] = arr[fast]     # Write valid element at slow position
            slow += 1                 # Advance write pointer

    # ---- SECTION 4: RETURN ----
    # WHY return slow? Everything in arr[0..slow-1] is valid.
    # slow equals the count of kept elements.
    return slow

Template Breakdown

1. Initialization

slow = 0 because the first valid element goes to index 0. Think of slow as a “write cursor” and fast as a “read cursor.”

2. Main Loop

fast visits every element once. The key invariant: everything before slow is valid and finalized. Everything from slow to fast is “trash” that may be overwritten.

3. Condition + Update

Replace should_keep() with your problem's filter. For “remove duplicates”: arr[fast] != arr[slow-1]. For “move zeros”: arr[fast] != 0.

4. Return

slow is both the count of valid elements and the index just past the last valid one. The array is modified in-place — O(1) extra space.

How to Apply This Template

  1. Step 1: Decide what “keeping” means for your problem. Remove duplicates? Remove zeros? Remove a specific value?
  2. Step 2: Write the should_keep() condition. This is the only line that changes between problems.
  3. Step 3: Return slow as the new length. The first slow elements of the array are the result.

When to Modify This Template

  • Remove duplicates (sorted): should_keep becomes arr[fast] != arr[slow]. Start slow at 0, fast at 1.
  • Remove specific value: should_keep becomes arr[fast] != val. Start both at 0.
  • Move zeroes: should_keep becomes arr[fast] != 0. After the loop, fill the remaining positions with 0 (see the sketch after this list).
  • Sort Colors (Dutch National Flag): Use THREE pointers: low, mid, high. Swap elements to partition into three regions. Same idea, extended to three categories.
  • Partition array: should_keep tests whether an element belongs to the “left” partition. This is the core of quicksort's partition step.
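For example, a minimal sketch of the Move Zeroes variation from the list above (the name move_zeroes is ours; LeetCode's version modifies the list in place and returns nothing):

move_zeroes_sketch.py
def move_zeroes(nums):
    slow = 0                               # next write slot for a non-zero value
    for fast in range(len(nums)):
        if nums[fast] != 0:                # "should_keep" for this problem
            nums[slow] = nums[fast]
            slow += 1
    for i in range(slow, len(nums)):       # back-fill the tail with zeroes
        nums[i] = 0
    return nums                            # returned here only for convenience

# Example: move_zeroes([0, 1, 0, 3, 12]) returns [1, 3, 12, 0, 0]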

3. Fast / Slow (Floyd's Cycle Detection)

fast_slow_template
def has_cycle(head):
    """Fast/slow pointer — cycle detection in a linked list."""

    # ---- SECTION 1: INITIALIZATION ----
    # WHY both start at head? If there is a cycle, the fast pointer
    # will "lap" the slow pointer — like two runners on a track.
    slow = head
    fast = head

    # ---- SECTION 2: MAIN LOOP ----
    # WHY check fast AND fast.next? fast moves 2 steps, so we need
    # both the current node and the next node to exist. If either is
    # None, we have hit the end — no cycle possible.
    while fast and fast.next:
        slow = slow.next          # Move 1 step — the "tortoise"
        fast = fast.next.next     # Move 2 steps — the "hare"

        # ---- SECTION 3: CYCLE CHECK ----
        # WHY does meeting prove a cycle? In a non-cyclic list, fast
        # reaches the end. In a cyclic list, fast laps slow — they
        # MUST meet within one full cycle (at most n steps).
        if slow == fast:
            return True           # Cycle detected

    # ---- SECTION 4: RETURN ----
    # WHY False? fast hit None, meaning the list has an end — no cycle.
    return False

Template Breakdown

1. Initialization

Both start at the same node. The speed difference (1x vs 2x) is what creates the “lapping” behavior in a cycle. Think of two runners on a circular track.

2. Main Loop Guard

We check fast AND fast.next because fast jumps 2 nodes. If either is null, we have hit the tail — a tail means no cycle. The slow pointer never needs checking because it is always behind fast.

3. Cycle Check

If slow and fast point to the same node, fast has lapped slow — cycle confirmed. Mathematically, fast closes the gap by 1 node per step, so they meet within one cycle length.

4. Return

If we exit the loop, fast reached the end of the list. A list with an end cannot have a cycle.

How to Apply This Template

  1. Step 1: Identify if your problem involves cycles or “middle finding.” Cycle detection, finding middle node, and happy number all use this pattern.
  2. Step 2: Set the speed ratio. For cycle detection and middle finding, use 1x/2x. For “find cycle start,” add a second phase where you reset slow to head and advance both at 1x until they meet again.
  3. Step 3: Define what meeting means. For cycle detection: return true. For middle node: return slow (it is at the midpoint when fast hits the end).

When to Modify This Template

  • Find cycle start: After detecting the cycle (slow == fast), reset slow to head. Advance both at 1x speed. They meet at the cycle entrance. This is Floyd's full algorithm.
  • Find middle node: When fast reaches the end, slow is at the midpoint. For even-length lists, slow points to the second of the two middle nodes (adjust the loop guard to control which) — see the sketch after this list.
  • Happy Number: Instead of traversing a linked list, the “next node” is the sum of squared digits. Apply the same cycle detection logic — if it converges to 1, it is happy; if it cycles, it is not.
  • Find kth from end: Advance fast k steps ahead, then move both at 1x until fast reaches the end. Slow is at the kth node from the end.
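As one example, here is a minimal sketch of the find-middle-node variation mentioned above (the ListNode class is a stand-in for whatever node type your list uses):

find_middle_node_sketch.py
class ListNode:
    def __init__(self, val, next=None):
        self.val = val
        self.next = next

def middle_node(head):
    slow = fast = head
    while fast and fast.next:      # same guard as cycle detection
        slow = slow.next           # tortoise: 1 step
        fast = fast.next.next      # hare: 2 steps
    return slow                    # for even-length lists, this is the second middle node

# Example: for the list 1 -> 2 -> 3 -> 4 -> 5, middle_node returns the node with val 3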

Three Variations

Every two-pointer problem falls into one of these three buckets. Learn to identify which one you need in the first 30 seconds.

Opposite Direction

Pointers start at opposite ends and converge. Used when the array is sorted and you need to find pairs or shrink a range.

Examples: Pair sum, 3Sum, Container With Most Water, Valid Palindrome

Same Direction

Both pointers move left to right. One reads, the other writes. Used for in-place modifications and partitioning.

Examples: Remove Duplicates, Move Zeroes, Partition Array

Fast / Slow

One pointer moves at 2x speed. Used for cycle detection and finding the middle of a linked list.

Examples: Linked List Cycle, Find Middle Node, Happy Number

Worked Examples

Five problems, easy to hard. Each one teaches a different aspect of the two-pointer pattern — from basic pair finding to in-place array modification to the hardest interview classic.

1. Two Sum II — Input Array Is Sorted

Easy · Opposite Direction

Given a 1-indexed sorted array numbers and a target, return indices of the two numbers that add up to the target. You may not use the same element twice. The solution is guaranteed to be unique.

Example: numbers = [2, 7, 11, 15], target = 9 → [1, 2]

Think Before You Code

Before writing any code, ask yourself these questions:

  1. What property of the input can I exploit? The array is sorted. Sorted means values increase left-to-right. This gives us a predictable relationship between index position and value.
  2. If I pick the smallest and largest values, what does their sum tell me? If their sum is too big, the largest value is too large for ANY remaining element to pair with — we can eliminate it entirely. If too small, the smallest is too small for any remaining element.
  3. Can I avoid checking every pair? Yes. Each comparison eliminates an entire row or column of the pair matrix. Instead of n² checks, we need at most n.

Brute Force: O(n²)

The obvious approach: for each element, scan all elements after it to find a matching pair. This works but completely ignores the sorted property. For an array of 10,000 elements, you would check ~50 million pairs when the answer could be found in at most 10,000 steps.

The “Aha” Moment: From Brute Force to Optimal

Here is the insight: in the brute force, the inner loop exists because after picking arr[i], we do not know where the matching element is. But in a sorted array, we do know something: if we pick the smallest and largest elements and their sum is too big, then the largest element is too big to pair with any element (since the smallest is already as small as possible). We can safely discard it. This insight — one comparison eliminates one candidate permanently — is what collapses the inner loop into a single pointer movement.

Why Two Pointers Instead of a Hash Map?

You could solve this with a hash map in O(n) time and O(n) space. But two pointers achieves O(n) time with O(1) space — no extra data structure needed. In an interview, mention both approaches. The interviewer wants to see you recognize that sorted input unlocks a space-optimal solution.
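For reference, a hash-map version might look like the sketch below (the name two_sum_hashmap is ours). It runs in O(n) time with O(n) extra space and does not need sorted input:

two_sum_hashmap_sketch.py
def two_sum_hashmap(numbers, target):
    seen = {}                                        # value -> 0-based index where it was seen
    for i, num in enumerate(numbers):
        if target - num in seen:
            return [seen[target - num] + 1, i + 1]   # 1-indexed, as the problem requires
        seen[num] = i
    return []

# Example: two_sum_hashmap([2, 7, 11, 15], 9) returns [1, 2]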
two_sum_ii.py
def twoSum(numbers, target):
    left, right = 0, len(numbers) - 1   # Start at opposite ends of sorted array

    while left < right:   # WHY <? We need two DISTINCT elements for a valid pair
        s = numbers[left] + numbers[right]
        if s == target:
            return [left + 1, right + 1]   # Convert to 1-indexed as required
        elif s < target:
            left += 1    # WHY? Sum too small — only way to increase it in a
                         # sorted array is to move left pointer right
        else:
            right -= 1   # WHY? Sum too large — only way to decrease it is to
                         # move right pointer left

    return []  # Problem guarantees a solution exists, but defensive coding

# Time: O(n) | Space: O(1)

Dry Run: numbers = [2, 7, 11, 15], target = 9

| Step | left | right | sum | Action |
| --- | --- | --- | --- | --- |
| 1 | 0 (val=2) | 3 (val=15) | 17 | 17 > 9 → move right left |
| 2 | 0 (val=2) | 2 (val=11) | 13 | 13 > 9 → move right left |
| 3 | 0 (val=2) | 1 (val=7) | 9 | Found! Return [1, 2] |

Key Line Spotlight

elif s < target: left += 1

This single line is why the algorithm works. Because the array is sorted, incrementing left is guaranteed to increase the sum (or keep it the same if duplicates exist). Without the sorted property, moving left could increase or decrease the sum unpredictably, and the algorithm would fail.

Why O(n) Time, O(1) Space?

  • Time O(n): The pointers start n-1 positions apart, and every iteration moves at least one of them inward by one step, shrinking the gap. So the while loop runs at most n-1 times.
  • Space O(1): We only use two integer variables (left, right) regardless of input size. No hash map, no extra array.

Common Mistakes

  • Forgetting 1-indexed output: The problem says 1-indexed but internally you use 0-indexed. Return [left+1, right+1], not [left, right].
  • Using <= instead of <: With left <= right, you can pair an element with itself. The problem says “may not use the same element twice.”
  • Not checking if input is sorted: This algorithm only works on sorted arrays. If the interviewer gives you an unsorted array, sort it first (but the indices change) or use a hash map.

Time: O(n) — Space: O(1)

2. Container With Most Water

Medium · Opposite Direction

Given an array height where height[i] is the height of a line at position i, find two lines that together with the x-axis form a container that holds the most water. Return the maximum amount of water the container can store.

Example: height = [1,8,6,2,5,4,8,3,7] → 49

Think Before You Code

  1. What determines the area? Area = width * min(left_height, right_height). Two factors: width and the shorter wall.
  2. If I start with the widest container, what do I know? I have maximum width. Any movement will reduce width. The only way to compensate is to find a taller short-wall.
  3. Which wall should I move? Moving the taller wall can only make things worse (width shrinks, height bottleneck unchanged). Moving the shorter wall is the only move that has a chance of improvement.

Brute Force: O(n²)

Try every pair of lines and compute the area. For each pair (i, j), the area is (j - i) * min(height[i], height[j]). Track the maximum. This works but checks n*(n-1)/2 pairs — far too slow for large inputs.

The “Aha” Moment: From Brute Force to Optimal

The key insight is a proof by contradiction: suppose the left wall is shorter than the right wall. Could moving the right wall inward ever give us a better area? No. The width decreases by 1. The height is still capped by the left wall (since it is the shorter one). So the area can only decrease or stay the same. Moving the right wall is provably useless. The only productive move is replacing the bottleneck — the shorter wall. This greedy choice eliminates one candidate per step, giving us O(n).


The Key Insight

Always move the pointer pointing to the shorter line. Moving the taller line can never increase the area (the width shrinks and the height is still limited by the shorter line), but moving the shorter line might find a taller one.
container_with_most_water.py
def maxArea(height):
    left, right = 0, len(height) - 1   # WHY start at extremes? Maximize width first
    max_water = 0

    while left < right:
        # Area = width * min(height_left, height_right)
        # WHY min? Water overflows over the shorter wall
        w = right - left
        h = min(height[left], height[right])
        max_water = max(max_water, w * h)

        # WHY move the shorter side? Moving the taller side CANNOT help:
        # - Width decreases by 1 (guaranteed loss)
        # - Height is still capped by the SHORT side (no gain possible)
        # Moving the shorter side: width decreases, but height MIGHT increase
        # enough to compensate — it is our only chance for improvement
        if height[left] < height[right]:
            left += 1
        else:
            right -= 1

    return max_water

# Time: O(n) | Space: O(1)

Dry Run: height = [1, 8, 6, 2, 5, 4, 8, 3, 7]

| Step | L (h) | R (h) | Area | max | Action |
| --- | --- | --- | --- | --- | --- |
| 1 | 0 (1) | 8 (7) | 8*1=8 | 8 | h[0]=1 shorter, move L |
| 2 | 1 (8) | 8 (7) | 7*7=49 | 49 | h[8]=7 shorter, move R |
| 3 | 1 (8) | 7 (3) | 6*3=18 | 49 | h[7]=3 shorter, move R |
| 4 | 1 (8) | 6 (8) | 5*8=40 | 49 | Equal height, move R |

Continues until L meets R. Best area = 49 found at step 2.

Key Line Spotlight

if height[left] < height[right]: left += 1

This is the greedy choice that makes the algorithm correct. Moving the taller wall can never improve the area because the bottleneck (shorter wall) stays the same while width decreases. Moving the shorter wall is the only move that has any chance of finding a taller replacement.

Why O(n) Time, O(1) Space?

  • Time O(n): The while left < right loop runs at most n-1 times because every iteration moves exactly one pointer inward. Once they meet, the loop ends. Total: O(n).
  • Space O(1): Only a few scalar variables: left, right, w, h, max_water. No data structures scale with input.

Time: O(n) — Space: O(1)

3. 3Sum

Medium · Opposite Direction + Loop

Given an integer array nums, return all unique triplets [nums[i], nums[j], nums[k]] such that i != j != k and nums[i] + nums[j] + nums[k] == 0. The solution set must not contain duplicate triplets.

Example: nums = [-1, 0, 1, 2, -1, -4] → [[-1,-1,2], [-1,0,1]]

Think Before You Code

  1. Can I reduce this to a problem I already know? If I fix one element nums[i], the remaining problem is: “find two numbers in the rest of the array that sum to -nums[i].” That is exactly Two Sum II.
  2. How do I handle duplicates? Sorting groups identical values together. Then I can skip consecutive duplicates for the outer element AND for the inner two-pointer matches.
  3. What is the complexity? The outer loop is O(n). The inner two-pointer scan is O(n). Total: O(n²). The O(n³) brute force is eliminated.

Brute Force: O(n³)

Three nested loops check every possible triplet. For n=1000 elements, that is a billion operations. Even worse, you need a set to deduplicate triplets, adding memory overhead. The brute force is impractical for any real interview input.

The “Aha” Moment: Reduce K-Sum to (K-1)-Sum

This problem teaches a powerful general technique: reduce K-Sum to (K-1)-Sum by fixing one element. Fix nums[i], then find pairs summing to -nums[i] in the remaining subarray. After sorting, the inner problem is exactly Two Sum II — which we solve with two pointers in O(n). The outer loop runs n times, so total is O(n²). This same reduction works for 4Sum, 5Sum, etc.

Duplicate Handling Is the Hard Part

The algorithm itself is straightforward — it is the duplicate-skipping that trips people up. There are two levels of deduplication: (1) skip duplicate anchors in the outer loop with if i > 0 and nums[i] == nums[i-1]: continue, and (2) skip duplicate pairs in the inner loop after finding a match. Miss either one and your output will contain duplicate triplets.
three_sum.py
def threeSum(nums):
    nums.sort()   # WHY sort? Enables two-pointer technique AND duplicate skipping
    result = []

    for i in range(len(nums) - 2):   # WHY -2? Need room for left and right after i
        # WHY skip duplicates here? If nums[i] == nums[i-1], we would find
        # the exact same triplets again — wasted work AND duplicate output
        if i > 0 and nums[i] == nums[i - 1]:
            continue

        left, right = i + 1, len(nums) - 1
        target = -nums[i]   # WHY negate? We need nums[i]+nums[left]+nums[right]=0,
                            # so nums[left]+nums[right] must equal -nums[i]

        while left < right:
            s = nums[left] + nums[right]
            if s == target:
                result.append([nums[i], nums[left], nums[right]])
                # WHY skip duplicates AFTER finding a match? Without this,
                # [-1, -1, 0, 1, 1] would produce [-1,0,1] twice
                while left < right and nums[left] == nums[left + 1]:
                    left += 1
                while left < right and nums[right] == nums[right - 1]:
                    right -= 1
                left += 1     # WHY move both? We already recorded this pair,
                right -= 1    # and we need to find NEW combinations
            elif s < target:
                left += 1     # Sum too small — increase it
            else:
                right -= 1    # Sum too large — decrease it

    return result

# Time: O(n^2) | Space: O(1) ignoring output

Dry Run: nums = [-1, 0, 1, 2, -1, -4] → sorted: [-4, -1, -1, 0, 1, 2]

| i | nums[i] | L (val) | R (val) | Triplet sum | Action |
| --- | --- | --- | --- | --- | --- |
| 0 | -4 | 1 (-1) | 5 (2) | -3 | -3 < 0 → move L. Eventually no match. |
| 1 | -1 | 2 (-1) | 5 (2) | 0 | Found [-1,-1,2]! Skip dupes, move both |
| 1 | -1 | 3 (0) | 4 (1) | 0 | Found [-1,0,1]! |
| 2 | -1 | | | | Skipped! nums[2]==nums[1], duplicate anchor |

Key Line Spotlight

if i > 0 and nums[i] == nums[i - 1]: continue

This single line prevents duplicate triplets. Without it, the same anchor value would generate the same set of pairs in the inner loop. It must be i > 0 (not i >= 0) because the first occurrence of any value must still be processed.

Why O(n^2) Time?

  • Sorting: nums.sort() takes O(n log n). This is dominated by the main loop.
  • Outer loop: for i in range(len(nums) - 2) runs O(n) times.
  • Inner loop: For each i, the two-pointer scan runs at most O(n). Total: n * n = O(n^2).
  • Duplicate skipping: The while loops inside the match case do not add extra complexity — they advance pointers that would have been advanced anyway.

Time: O(n²) — Space: O(1) ignoring output

4. Trapping Rain Water

Hard · Opposite Direction

Given n non-negative integers representing an elevation map where the width of each bar is 1, compute how much water it can trap after raining.

Example: height = [0,1,0,2,1,0,1,3,2,1,2,1] → 6

Think Before You Code

  1. How much water sits above position i? Water level at position i = min(tallest wall to its left, tallest wall to its right). Water trapped = water level - height[i]. If height[i] is taller than the water level, no water (the bar sticks up above the water line).
  2. Do I need to know both max-left and max-right at the same time? No. If I know the left-max is smaller than the right-max, the bottleneck is the left side. I do not care exactly how tall the right side is — I only need to know it is at least as tall as the left max.
  3. Can two pointers help? Yes. Left pointer tracks left_max, right pointer tracks right_max. Always process the side with the smaller max (the bottleneck). This computes the exact water at each position in O(1).

Brute Force: O(n²) → Prefix Arrays: O(n) time, O(n) space

The brute force computes max-left and max-right for each position by scanning in both directions. That is O(n) work per position → O(n²) total. A common improvement is to precompute two arrays: leftMax[i] and rightMax[i]. This is O(n) time but uses O(n) extra space. The two-pointer approach eliminates even that extra space.
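For comparison, a minimal sketch of that prefix-array approach might look like this (the name trap_prefix is ours):

trap_prefix_arrays_sketch.py
def trap_prefix(height):
    n = len(height)
    if n == 0:
        return 0
    left_max = [0] * n      # left_max[i] = tallest bar in height[0..i]
    right_max = [0] * n     # right_max[i] = tallest bar in height[i..n-1]
    left_max[0] = height[0]
    for i in range(1, n):
        left_max[i] = max(left_max[i - 1], height[i])
    right_max[n - 1] = height[n - 1]
    for i in range(n - 2, -1, -1):
        right_max[i] = max(right_max[i + 1], height[i])
    # Water above each bar = min(tallest wall on either side) - bar height
    return sum(min(left_max[i], right_max[i]) - height[i] for i in range(n))

# Example: trap_prefix([0,1,0,2,1,0,1,3,2,1,2,1]) returns 6

The two-pointer solution below produces the same answer while dropping both O(n) arrays.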

The “Aha” Moment: You Only Need the Bottleneck Side

The prefix-array approach computes max from both sides for every position. But think about it: if leftMax = 3 and rightMax = 5, the water level is min(3, 5) = 3. We did not need to know rightMax was exactly 5 — we only needed to know it was at least 3. This means: when leftMax < rightMax, we can safely compute the water at the left position using only leftMax. The right side is guaranteed tall enough. This eliminates the need for precomputed arrays.

The Key Insight

Water at any position depends on min(maxLeft, maxRight) - height[i]. With two pointers, we track the running max from each side. We always process the side with the smaller max, because that side is the bottleneck and we know the other side is at least as tall. This gives us O(n) time with O(1) extra space — optimal on both dimensions.
trapping_rain_water.py
def trap(height):
    if not height:
        return 0

    left, right = 0, len(height) - 1
    # WHY track max from each side? Water level at any position
    # is determined by min(tallest wall to left, tallest wall to right)
    left_max, right_max = height[left], height[right]
    water = 0

    while left < right:
        # WHY process the smaller max side? If left_max < right_max, we KNOW
        # the right side has a wall at least as tall as left_max. So the water
        # level at position left is determined entirely by left_max — we don't
        # need to know the exact right_max for this position
        if left_max < right_max:
            left += 1
            left_max = max(left_max, height[left])   # Update running max
            water += left_max - height[left]         # WHY always >= 0? Because
            # left_max is the max of all heights seen from the left,
            # including height[left] itself, so left_max >= height[left]
        else:
            right -= 1
            right_max = max(right_max, height[right])   # Mirror logic
            water += right_max - height[right]

    return water

# Time: O(n) | Space: O(1)

Dry Run: height = [0, 1, 0, 2, 1, 0, 1, 3, 2, 1, 2, 1]

| Step | L | R | lMax | rMax | +water | total |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 (h=1) | 11 (h=1) | 1 | 1 | 0 | 0 |
| 2 | 1 (h=1) | 10 (h=2) | 1 | 2 | 0 | 0 |
| 3 | 2 (h=0) | 10 | 1 | 2 | +1 | 1 |
| 4 | 3 (h=2) | 10 | 2 | 2 | 0 | 1 |

Continues through all positions. Final total water = 6.

Key Line Spotlight

if left_max < right_max: ... water += left_max - height[left]

This is the heart of the algorithm. When left_max < right_max, we know the bottleneck for position left is the left side. The right side is guaranteed to have a wall at least as tall as left_max. So the water level at this position is exactly left_max - height[left], regardless of what is further to the right.

Why O(n) Time, O(1) Space?

  • Time O(n): Each iteration of the while loop moves exactly one pointer inward. Total moves = n-1. Each move does O(1) work (a max comparison and an addition).
  • Space O(1): Only 5 variables: left, right, left_max, right_max, water. Compare this to the O(n) prefix-max approach that needs two arrays.

Time: O(n) — Space: O(1)

5. Remove Duplicates from Sorted Array

Easy · Same Direction

Given a sorted array nums, remove the duplicates in-place such that each element appears only once. Return the number of unique elements. Do not allocate extra space — modify the input array in-place with O(1) extra memory.

Example: nums = [0,0,1,1,1,2,2,3,3,4] → 5, nums = [0,1,2,3,4,...]

Think Before You Code

  1. What does “in-place” mean? You cannot create a new array. You must overwrite the existing array so unique elements are at the front. The elements after the returned length do not matter.
  2. What advantage does “sorted” give us? Duplicates are adjacent. You only need to compare an element with the last unique element you kept — not with all previous elements.
  3. Do I need two pointers moving in the same direction? Yes. One pointer (slow) marks the position to write the next unique element. The other (fast) reads through every element. This is the classic read/write pointer pattern.

Alternative: Create a New Array — O(n) time, O(n) space

The simplest approach would be to create a new array and copy only unique elements into it. But the problem explicitly requires O(1) extra space. The same-direction two-pointer technique achieves this by overwriting duplicates in-place.
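For completeness, that extra-array version might look like the sketch below (the name remove_duplicates_extra_space is ours):

remove_duplicates_extra_space_sketch.py
def remove_duplicates_extra_space(nums):
    unique = []
    for x in nums:
        if not unique or x != unique[-1]:   # sorted input: duplicates are adjacent
            unique.append(x)
    nums[:len(unique)] = unique             # copy back so the result lands in nums
    return len(unique)

# Example: remove_duplicates_extra_space([0,0,1,1,1,2,2,3,3,4]) returns 5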

The “Aha” Moment: Separate Reading from Writing

Imagine you are rewriting a book and skipping duplicate paragraphs. You read every paragraph (fast pointer), but you only write a paragraph to the clean copy if it is different from the last one you wrote (slow pointer). The reading hand moves steadily forward. The writing hand only advances when there is something new to write. By the end, the first slow positions contain exactly the unique elements, in order.

remove_duplicates.py
def removeDuplicates(nums):
    if not nums:
        return 0

    # slow = position of the last unique element we wrote
    # Everything at and before slow is "finalized" — unique and in order
    slow = 0

    # fast = the reader that scans every element
    for fast in range(1, len(nums)):
        # WHY compare with nums[slow]? Because slow points to the last
        # unique value. If nums[fast] is different, it is a NEW unique value.
        if nums[fast] != nums[slow]:
            slow += 1                 # Advance write position
            nums[slow] = nums[fast]   # Write the new unique value

    # slow is the index of the last unique element
    # Number of unique elements = slow + 1
    return slow + 1

# Example: [0,0,1,1,1,2,2,3,3,4]
# slow=0, fast=1: 0==0, skip
# slow=0, fast=2: 1!=0, slow=1, write 1 → [0,1,1,1,1,2,2,3,3,4]
# slow=1, fast=3: 1==1, skip
# slow=1, fast=4: 1==1, skip
# slow=1, fast=5: 2!=1, slow=2, write 2 → [0,1,2,1,1,2,2,3,3,4]
# slow=2, fast=6: 2==2, skip
# slow=2, fast=7: 3!=2, slow=3, write 3 → [0,1,2,3,1,2,2,3,3,4]
# slow=3, fast=8: 3==3, skip
# slow=3, fast=9: 4!=3, slow=4, write 4 → [0,1,2,3,4,2,2,3,3,4]
# Return 5. First 5 elements: [0,1,2,3,4]
# Time: O(n) | Space: O(1)

Dry Run: nums = [0, 0, 1, 1, 1, 2, 2, 3, 3, 4]

| fast | nums[fast] | nums[slow] | Equal? | slow | Array (first 5) |
| --- | --- | --- | --- | --- | --- |
| 1 | 0 | 0 | Yes, skip | 0 | [0,0,1,1,1] |
| 2 | 1 | 0 | No, write! | 1 | [0,1,1,1,1] |
| 3 | 1 | 1 | Yes, skip | 1 | [0,1,1,1,1] |
| 4 | 1 | 1 | Yes, skip | 1 | [0,1,1,1,1] |
| 5 | 2 | 1 | No, write! | 2 | [0,1,2,1,1] |
| 6 | 2 | 2 | Yes, skip | 2 | [0,1,2,1,1] |
| 7 | 3 | 2 | No, write! | 3 | [0,1,2,3,1] |
| 8 | 3 | 3 | Yes, skip | 3 | [0,1,2,3,1] |
| 9 | 4 | 3 | No, write! | 4 | [0,1,2,3,4] |

Return 5 (slow + 1). First 5 elements are [0, 1, 2, 3, 4].

Key Line Spotlight

if nums[fast] != nums[slow]

This condition is the entire algorithm. We compare the reader (fast) against the last written unique value (slow). If different, it is a new unique element — write it. If the same, skip it. The sorted property guarantees all duplicates are adjacent, so we never miss one.

Common Mistakes

  • Starting slow at 1 instead of 0: Some implementations start slow at 1 and compare nums[fast] != nums[fast-1]. This works but is harder to reason about. The slow as “last unique index” approach is cleaner.
  • Forgetting the +1 in the return: slow is an index (0-based). The count of unique elements is slow + 1. Off-by-one errors here will fail test cases.
  • Trying to shift elements: Some candidates try to “shift” elements left when removing a duplicate. This is O(n) per removal and makes the total O(n²). The write-pointer approach avoids this entirely.

Why O(n) Time, O(1) Space?

  • Time O(n): The fast pointer visits each element exactly once. The slow pointer advances at most n times. Total work is proportional to n.
  • Space O(1): Only two pointer variables (slow and fast). The array is modified in-place — no extra data structures.

Time: O(n) — Space: O(1)

Common Mistakes & Pro Tips

Mistakes That Cost Offers

1. Forgetting to sort the array

Opposite-direction two pointers only work on sorted data. If the input is unsorted, sort it first (O(n log n)) or use a hash map instead. In an interview, always ask: “Is the input sorted?”

2. Wrong pointer movement direction

Moving the wrong pointer breaks the algorithm silently — you get wrong answers, not crashes. Always reason about what moving each pointer does to the value you are tracking (sum, area, distance).

3. Off-by-one: using ≤ instead of <

The loop condition should be left < right, not left <= right. With <=, the same element can be used twice, which is wrong for pair-finding problems. The only exception is palindrome checking where a single middle character is valid.

Pro Tips

Verbalize Your Pointer Logic

In the interview, always say out loud: “I am moving the left pointer right because the current sum is too small and I need to increase it.” This shows the interviewer you understand the invariant, not just the code.

Handle Duplicates Early

For problems like 3Sum, add duplicate-skipping logic immediately after finding a valid result. If the interviewer asks “what about duplicates?” and you have not handled it, it looks like you have not seen the problem before.

Two Pointers + Sorting = O(n log n) Minimum

If you sort first, your overall complexity is at least O(n log n). Mention this trade-off to the interviewer: “Sorting costs O(n log n), but it enables an O(n) scan, so total is O(n log n) which is optimal for this problem.”

Interviewer's Perspective

Here is what senior engineers are really evaluating when they give you a two-pointer problem.

What Interviewers Are Really Testing

  1. Can you identify the O(n) opportunity? Many candidates jump straight to brute force O(n²). The interviewer wants to see you recognize that sorting + two pointers (or working on sorted input) collapses nested loops into a single pass. Say: “Since the array is sorted, I can use two pointers to avoid the inner loop entirely.”
  2. Can you handle duplicates correctly? Problems like 3Sum have duplicate-skipping logic that trips up most candidates. The interviewer is watching whether you proactively handle duplicates or wait to be told about them. Always ask: “Can the input contain duplicates? Should the output contain duplicate results?”
  3. Do you understand WHY the pointers converge? The interviewer may ask: “Why does moving the left pointer right guarantee you do not miss a valid answer?” You need to articulate the invariant: if the current sum is too small, moving left right can only increase it; if too large, moving right left can only decrease it. No valid pair is skipped.
  4. Can you extend to harder variants? After you solve 2Sum II, expect: “What about 3Sum? 4Sum? K-Sum?” Show that you understand the recursive reduction: fix one element, reduce to (k-1)-Sum. This is the mark of a strong candidate.

Edge Cases & Follow-up Questions

Edge Cases to Watch For

  • Empty array or single element: Two pointers need at least 2 elements. Return early.
  • All elements identical: Does your duplicate-skip logic handle an array like [0,0,0,0]?
  • Negative numbers: For Container With Most Water, heights are non-negative. For 3Sum, negatives are expected. Ensure your logic handles both.
  • Already sorted vs unsorted: If the input is not sorted, sorting it first changes in-place requirements. Clarify with the interviewer.
  • Integer overflow: When summing elements, consider if the sum could overflow (especially in Java/C++). Use long if needed.

Common Follow-up Questions

  • “Can you solve this without sorting?” — Usually means use a hash map instead (trading O(n log n) sort for O(n) hash lookups).
  • “What if we want the closest sum to target, not exact?” — Track the minimum difference as you move pointers. Same template, different update logic.
  • “Extend to K-Sum.” — Fix one element at a time, reduce to (k-1)-Sum recursively. Base case is 2Sum with two pointers.
  • “What if the array is too large to sort in memory?” — External sort, then streaming two-pointer pass. Mention this for system design awareness.
  • “Can you do this in-place with O(1) extra space?” — Two pointers naturally uses O(1) space (the pointers themselves). This is one of its biggest advantages over hash-based approaches.

Related Patterns

Two pointers is closely related to these patterns. Master them next.

Ready to practice? Try Essential 75 problems that use the Two Pointers pattern.