Data Structure · Appears in ~50% of Interviews · Foundational

Arrays & Strings

Think of an array as a row of numbered mailboxes in an apartment lobby. Each mailbox has a number painted on it — 0, 1, 2, 3. If someone says “check mailbox #42,” you walk straight to it. No scanning, no searching. That instant access is the superpower of arrays: O(1) random access by index.

Array in Contiguous Memory

[Interactive visualization: contiguous memory block, each cell adjacent. Values 10, 23, 4, 57, 89, 12, 36, 71 at indices [0]–[7]; addresses start at 0x100, 0x104, 0x108, …]

How to Learn This Page (Student Guide)

This page covers both arrays and strings. Here's the recommended flow:

  1. Start with What is an Array? and How Arrays Work in Memory. Even if you think you know arrays, the memory model explanation will change how you think about performance.
  2. Play through How It Works — Visual Deep Dive and the Intuition & Mental Model section. Watch the O(1) access vs. O(n) insert animation — that tradeoff drives every array interview question.
  3. Study Interview Patterns — “When You See X, Think Y” carefully. These four patterns (Two Pointers, Sliding Window, Prefix Sum, Sort Then Solve) cover 90% of array problems.
  4. Scroll down to the Strings section. Strings are really just arrays of characters with one twist — immutability. Read String Interview Patterns to see how the same techniques adapt.
  5. Finally, work through the Implementation code and Practice Problems. Type each solution yourself.

Arrays and strings are the bread and butter of interviews — nearly every problem starts with one. Master the patterns here and you have the foundation for everything else.

What is an Array? — Start Here if You're New

If you have never worked with data structures before, welcome. Let's start from the very beginning. An array is the simplest and most fundamental way a computer organizes data. Think of it like this:

Real-World Analogy: Numbered Lockers

Imagine a hallway with a row of numbered lockers — locker #0, locker #1, locker #2, and so on. Each locker can hold exactly one item. Here is the key:

  • If someone says “go to locker #42,” you walk straight there. You do not need to check lockers #0 through #41 first. That instant jump is what we call O(1) access.
  • But if someone asks “which locker has the red notebook?” and you do not know the number, you have to open every locker one by one until you find it. That searching is O(n).
  • If you want to add a new locker in the middle of the row, every locker after that point must shift one position to the right to make room. That shifting is also O(n).

Why Do Arrays Exist?

Computers need a way to store collections of things — a list of test scores, a sequence of characters in a word, pixel values in an image. An array solves this by putting all the items right next to each other in memory, like books on a shelf with no gaps. This gives the computer two huge advantages:

Instant Lookup

Because every item is the same size and they are all packed together, the computer can calculate exactly where any item lives. Want item #500? The computer just does a tiny bit of math (start position + 500 x item size) and jumps directly there. No searching needed.

Speed Through Proximity

Modern computers have a cache — a small, ultra-fast memory near the processor. When the computer reads one array element, it pulls its neighbors into the cache automatically. This means looping through an array is incredibly fast because the next element is already loaded.

Strings Covered Below

A string is an array of characters — everything you learn about arrays applies to strings too. We cover strings in depth in Part 2: Strings below, including immutability, the string pool, and string-specific interview patterns.

You know what an array is conceptually. Now let's understand why arrays are so fast for some operations and slow for others — the answer lies in how they are physically stored in memory.

How Arrays Work in Memory

This is one of the most important things to understand, because it explains why every operation costs what it does. Let's peek behind the curtain at what the computer is actually doing.

Contiguous Memory — The Secret Sauce

Your computer's memory (RAM) is like a massive grid of numbered boxes. Each box has an address — a number like 0x100, 0x104, 0x108. When you create an array, the computer reserves a continuous block of these boxes, one right after another:

Memory Address:  0x100    0x104    0x108    0x10C    0x110
                 ┌────────┬────────┬────────┬────────┬────────┐
Array values:    │   10   │   23   │    4   │   57   │   89   │
                 └────────┴────────┴────────┴────────┴────────┘
Index:              [0]      [1]      [2]      [3]      [4]

Each element takes up the same amount of space (4 bytes for an integer). So the computer can find any element instantly with this formula:

address of element[i] = base_address + i * element_size

For example, to find arr[3]: 0x100 + 3 * 4 = 0x10C. The computer jumps directly to address 0x10C. No loops. No searching. Just math. This is why array access is O(1).
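A minimal sketch of this address arithmetic in Python. The `element_address` helper is purely illustrative; the real computation happens in the compiler and hardware, not in user code.

```python
# Illustrative sketch: the array indexing formula as a function.
def element_address(base_address: int, index: int, element_size: int = 4) -> int:
    # address of element[i] = base_address + i * element_size
    return base_address + index * element_size

assert element_address(0x100, 3) == 0x10C    # the arr[3] example above
assert element_address(0x200, 2, 8) == 0x210  # 8-byte elements work the same way
```

The key point: the cost of this lookup is one multiply and one add, regardless of the array's size.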

Why Inserting in the Middle is Expensive

Because array elements are packed together with no gaps, there is no room to squeeze in a new element. If you want to insert at position 2, every element from position 2 onward must physically move one slot to the right. For an array of 1,000 elements, inserting at the beginning means moving all 1,000 elements. That is O(n).

Appending to the end, on the other hand, is usually O(1) because nothing needs to shift. The only exception is when the internal buffer is full and needs to be resized (more on that below).
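A small illustrative sketch of the shifting cost. The `insert_at` helper below is hypothetical; it just counts how many elements each operation must move.

```python
# Illustrative sketch (insert_at is a hypothetical helper): count how many
# elements must shift right to make room for an insertion.
def insert_at(arr: list, index: int, value: int) -> int:
    shifted = len(arr) - index   # everything from `index` onward moves one slot
    arr.insert(index, value)     # Python performs this shifting internally
    return shifted

arr = list(range(1000))
print(insert_at(arr, 0, -1))         # 1000 -- every element shifted: O(n)
print(insert_at(arr, len(arr), 99))  # 0 -- append shifts nothing: O(1)
```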

Key Takeaway for Beginners

Arrays are fast when you know the position (index) of what you want. They are slow when you need to search by value or insert/delete in the middle. Almost every decision about which data structure to use comes down to this tradeoff.

How It Works — Visual Deep Dive

Arrays store elements in contiguous memory. The address of element i is simply base_address + i * element_size. No traversal, no pointer chasing. Click each operation below to see exactly what happens under the hood.


Initial array of 8 elements in contiguous memory

arr = [10, 23, 4, 57, 89, 12, 36, 48]

Intuition & Mental Model

Understanding why arrays behave the way they do gives you the ability to make instant decisions in interviews. Here are the two concepts you need to internalize.

Cache Friendliness

Because array elements sit next to each other in memory, iterating through an array is extremely cache-friendly. Modern CPUs fetch memory in 64-byte cache lines — when you access element 0, elements 1–15 are already in L1 cache for free.

This is why arrays beat linked lists in practice for sequential access, even though both are theoretically O(n). A linked list causes constant cache misses. Real benchmarks show arrays can be 10–100x faster for iteration.

Why Arrays Are the Default

O(1) random access + cache friendliness + simple implementation = the most used data structure in all of programming. When you don't know what data structure to use, start with an array.

The only time you should reach for something else is when you need O(1) insertions/deletions in the middle (linked list), O(1) lookups by value (hash map), or ordering guarantees (BST/heap). For everything else — array first, questions later.

Dynamic Array Resizing — Interactive Demo

Python lists, JavaScript arrays, and Java ArrayLists are all dynamic arrays. They double capacity when full. Click “Append” to watch the buffer grow.

Dynamic Array: Capacity Doubling

Size: 3 / Capacity: 4 — [1, 2, 3, empty] — 1 slot available

capacity: 4 | filled: 3 | amortized cost per append: O(1)

Under the Hood — Dynamic Arrays Are Just Array Copying

Python's list, Java's ArrayList, and JavaScript's Array all feel “dynamic” — you just keep adding elements and they grow. But internally, they are all backed by fixed-size arrays. When the internal array fills up, the runtime allocates a brand-new array (typically 1.5–2x larger), copies every element from the old array into the new one, and discards the old one. That single resize is O(n), but because it happens so rarely (each resize doubles capacity), the amortized cost per append stays O(1).

dynamic-array-internals.java / .py
// Java ArrayList under the hood — simplified version of what actually happens
// when you call list.add(element)

public class MyArrayList<T> {
    private Object[] data; // The backing fixed-size array
    private int size;      // How many elements are actually stored

    public MyArrayList() {
        data = new Object[4]; // Start with capacity 4
        size = 0;
    }

    public void add(T element) {
        // Step 1: Is the backing array full?
        if (size == data.length) {
            // Step 2: Create a NEW array with 2x capacity
            int newCapacity = data.length * 2;
            Object[] newData = new Object[newCapacity];

            // Step 3: COPY every element from old to new
            // This is the O(n) cost that makes resizing expensive
            System.arraycopy(data, 0, newData, 0, size);

            // Step 4: Point to the new array, old one gets garbage collected
            data = newData;
            System.out.println("Resized! " + data.length / 2 + " -> " + newCapacity);
        }

        // Step 5: Place the element and increment size
        data[size] = element;
        size++;
    }

    @SuppressWarnings("unchecked")
    public T get(int index) {
        if (index < 0 || index >= size) throw new IndexOutOfBoundsException();
        return (T) data[index]; // O(1) — just pointer arithmetic
    }

    public int size() { return size; }
    public int capacity() { return data.length; }
}

// Usage — watch the resizes happen:
// MyArrayList<Integer> list = new MyArrayList<>();
// list.add(1); // size=1, capacity=4
// list.add(2); // size=2, capacity=4
// list.add(3); // size=3, capacity=4
// list.add(4); // size=4, capacity=4 (full!)
// list.add(5); // RESIZE 4->8, copies 4 elements, then adds 5
// list.add(6); // size=6, capacity=8
// list.add(7); // size=7, capacity=8
// list.add(8); // size=8, capacity=8 (full!)
// list.add(9); // RESIZE 8->16, copies 8 elements, then adds 9
//
// Total copies after 9 adds: 4 + 8 = 12
// Average cost per add: 12/9 ≈ 1.33 — that's amortized O(1)

The Illusion of Dynamic Arrays

There is no such thing as a truly “dynamic” array at the hardware level. RAM only gives you fixed-size blocks. Every list.append() in Python, ArrayList.add() in Java, or array.push() in JavaScript is either filling an existing slot (fast, O(1)) or triggering a full copy to a bigger array (slow, O(n)).

Interview tip: When an interviewer asks “what is the time complexity of append?” the correct answer is “amortized O(1)”. Follow up with: “Individual appends are O(1), but every time the capacity doubles, there is a single O(n) copy. Since each element is copied at most O(log n) times across all doublings, the amortized cost per operation is O(1).”
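One way to watch this in practice is an observational sketch using `sys.getsizeof`. Note that CPython's real growth factor is smaller than the textbook 2x, but the amortized argument is identical.

```python
import sys

# Observational sketch: watch CPython's list over-allocate during appends.
# (CPython grows capacity in chunks with a factor below 2x, but the
# amortized-O(1) reasoning works the same way.)
lst = []
last_size = sys.getsizeof(lst)
resizes = []
for i in range(64):
    lst.append(i)
    size = sys.getsizeof(lst)
    if size != last_size:                 # the backing buffer was reallocated
        resizes.append((i + 1, size))     # (length at resize, new byte size)
        last_size = size

# Resizes are rare; between them each append is a plain O(1) slot write.
print(resizes)
```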

Array vs Linked List vs Hash Map — When to Pick Each

Use Case             Array       Linked List   Hash Map
Access by index      O(1)        O(n)          N/A
Search (contains)    O(n)        O(n)          O(1)
Insert at front      O(n)        O(1)          O(1)
Insert at end        O(1)*       O(1)          O(1)
Delete in middle     O(n)        O(1)          O(1)
Ordered iteration    Excellent   Good          No order
Cache performance    Best        Poor          Fair

* Amortized O(1) — occasional O(n) resize cost spread across all operations

Operations & Complexity

These are the numbers you need burned into memory. Every time you reach for an array operation in an interview, you should know the cost instantly.

Operation                          Time             Space
Access by index (arr[i])           O(1)             O(1)
Search (unsorted)                  O(n)             O(1)
Search (sorted -- binary search)   O(log n)         O(1)
Append to end (push)               O(1) amortized   O(1)
Insert at index i                  O(n)             O(1)
Delete at index i                  O(n)             O(1)
Sort                               O(n log n)       O(n) or O(log n)

Relative Cost Comparison

  • Access [i]: O(1)
  • Append: O(1)*
  • Search: O(n)
  • Insert (mid): O(n)
  • Delete (mid): O(n)
  • Sort: O(n log n)
  • Str concat (loop): O(n²)

* Amortized O(1) -- occasional O(n) resize cost is spread across all append operations

“Why is search O(n)?” — Think of it this way

You have 1,000 mailboxes. You know someone named “Alice” has a mailbox, but you don't know the number. Your only option? Open mailbox #0, check. Open mailbox #1, check. Keep going until you find it or reach the end. In the worst case, you check all 1,000. That is O(n). The array gives you no shortcut when you search by value instead of index.

The fix: If the array is sorted, use binary search — O(log n). If you need frequent lookups by value, use a hash map — O(1). Knowing which tool to reach for is exactly what interviewers are testing.
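The sorted-array fix can be sketched with Python's standard `bisect` module. The `contains_sorted` helper is illustrative, not a built-in.

```python
import bisect

# Sketch of the fix on sorted data: binary search via the bisect module.
def contains_sorted(sorted_nums: list[int], target: int) -> bool:
    i = bisect.bisect_left(sorted_nums, target)   # O(log n) probes
    return i < len(sorted_nums) and sorted_nums[i] == target

mailboxes = sorted([42, 7, 19, 3, 88])            # [3, 7, 19, 42, 88]
assert contains_sorted(mailboxes, 19)
assert not contains_sorted(mailboxes, 20)
```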

You understand the data structure and its complexity tradeoffs. Now let's connect theory to practice — these four patterns are the mental shortcuts that let you solve array problems in minutes, not hours.

Interview Patterns — “When You See X, Think Y”

When you see an array problem, your first move is to identify which pattern applies. Recognize the trigger phrases, reach for the right template, and you are already halfway to the solution.

Two Pointers

Use when you see: “sorted array,” “pair that sums to,” “remove duplicates in-place,” “reverse a string,” “palindrome,” “container with most water.”

Two flavors: opposite direction (left=0, right=end, move inward) for sorted arrays and palindromes. Same direction (slow/fast) for removing duplicates, partitioning, and cycle detection.

Reduces O(n²) brute force to O(n). No extra space needed.

How to recognize in an interview:

Look for keywords like “sorted array,” “in-place,” “pair,” “two ends,” “palindrome.” If the array is sorted and you need to find a pair satisfying a condition, two pointers moving inward from both ends is almost always the answer.

# Python two-pointer template (opposite ends)
def two_pointer(nums, target):
    left, right = 0, len(nums) - 1
    while left < right:
        total = nums[left] + nums[right]
        if total == target: return [left, right]
        elif total < target: left += 1
        else: right -= 1
    return []  # no pair found

Sliding Window

Use when you see: “contiguous subarray,” “substring of length k,” “maximum sum subarray of size k,” “longest substring without repeating characters,” “minimum window substring.”

Fixed window: expand to size k, then slide by adding the right element and removing the left. Dynamic window: expand right until condition breaks, then shrink left until it holds again.

Turns O(n*k) brute force into O(n). The trick is maintaining your window state efficiently.

How to recognize in an interview:

Look for “contiguous,” “subarray of size k,” “longest/shortest substring,” “at most k distinct.” If you are checking every possible subarray of every possible size, a sliding window can almost certainly do it in one pass.

# Python fixed-size sliding window template (window of size k)
def max_subarray_of_size_k(nums, k):
    window_sum = 0
    max_sum = float('-inf')  # handles all-negative arrays
    for right in range(len(nums)):
        window_sum += nums[right]
        if right >= k:
            window_sum -= nums[right - k]  # Drop the element leaving the window
        if right >= k - 1:
            max_sum = max(max_sum, window_sum)
    return max_sum
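The template above slides a fixed-size window. A dynamic window (expand right, shrink left while the condition is broken) can be sketched with the classic longest-substring-without-repeating-characters problem:

```python
# Dynamic-window sketch: longest substring without repeating characters.
def longest_unique_substring(s: str) -> int:
    seen = set()          # characters currently inside the window
    left = best = 0
    for right, ch in enumerate(s):
        while ch in seen:             # condition broken -- shrink from the left
            seen.remove(s[left])
            left += 1
        seen.add(ch)                  # window [left..right] is valid again
        best = max(best, right - left + 1)
    return best

assert longest_unique_substring("abcabcbb") == 3   # "abc"
assert longest_unique_substring("bbbbb") == 1
```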

Prefix Sum / Kadane's Algorithm

Use when you see: “subarray sum equals k,” “range sum query,” “maximum subarray,” “number of subarrays with sum,” “equilibrium index.”

Prefix sum precomputes cumulative sums so any range sum is O(1). Combine with a hash map for “subarray sum equals k.” Kadane's finds the maximum subarray sum in a single pass with O(1) space.

Prefix sums: O(n) precomputation, O(1) per query. Kadane's: O(n) time, O(1) space.

How to recognize in an interview:

Look for “subarray sum,” “range query,” “maximum subarray,” “equilibrium.” If the problem involves cumulative sums over ranges, use prefix sums. If it asks for the maximum contiguous sum, Kadane's is the one-pass answer.

# Python Kadane's algorithm template
def max_subarray(nums):
    best = current = nums[0]
    for num in nums[1:]:
        current = max(num, current + num)  # Extend or restart
        best = max(best, current)
    return best
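The prefix-sum plus hash-map combination mentioned above (count subarrays summing to exactly k) can be sketched as:

```python
from collections import defaultdict

# Sketch: count subarrays summing to k with a running prefix sum + hash map.
# If (prefix - k) was seen before, a subarray ending here sums to k.
def count_subarrays_with_sum(nums: list[int], k: int) -> int:
    seen = defaultdict(int)
    seen[0] = 1                     # sentinel: the empty prefix
    prefix = count = 0
    for num in nums:
        prefix += num
        count += seen[prefix - k]   # earlier prefixes that complete a k-sum
        seen[prefix] += 1
    return count

assert count_subarrays_with_sum([1, 1, 1], 2) == 2
assert count_subarrays_with_sum([1, 2, 3], 3) == 2   # [1, 2] and [3]
```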

Sort, Then Solve

Use when you see: “find all triplets,” “closest pair,” “merge intervals,” “meeting rooms,” “group anagrams.”

Sorting the array first (O(n log n)) often makes the actual problem trivial. After sorting, duplicates are adjacent, two pointers work, and binary search becomes available.

Total complexity: O(n log n) — dominated by the sort. Often better than O(n²) brute force.

How to recognize in an interview:

Look for “triplets,” “intervals,” “merge overlapping,” “closest to target,” “meeting rooms.” If the problem does not require preserving original order and the brute force is O(n²+), sorting first often unlocks a simpler O(n log n) approach.

# Python merge intervals template
def merge(intervals):
    if not intervals:                     # Edge case: empty input
        return []
    intervals.sort(key=lambda x: x[0])
    merged = [intervals[0]]
    for start, end in intervals[1:]:
        if start <= merged[-1][1]:        # Overlap
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged

Common Mistakes Candidates Make

  • Off-by-one errors in loop bounds. Always verify with a 1-element and 2-element array. Is it < n or <= n? Draw it out.
  • Mutating strings in a loop. O(n²) trap. Build a list and join at the end.
  • Forgetting the empty array check. Interviewers specifically watch for this edge case.
  • Thinking slicing is free. Python's nums[i:j] creates a new list — O(j-i) time and space. Use index variables instead.
  • Using O(n) extra space when the problem says “in-place.” Use two pointers or swap-based approaches.
  • Confusing index and value. Label variables val_to_index, not just seen.

The Key Insight

Most array problems reduce to four techniques. Before writing a single line of code, ask which one applies:

  • Two pointers / binary search — array is sorted
  • Hash map — you need frequency counts or lookups
  • Sliding window — question mentions “contiguous subarray”
  • Sorting — ordering unlocks a simpler approach

This mental framework solves 90% of array interview questions.

Implementation

Three battle-tested implementations covering the most important array patterns. Study these until you can write them on a whiteboard with your eyes closed. Every line is commented with why, not just what.

1. Two Pointers — Remove Duplicates from Sorted Array

The slow pointer marks the write position. The fast pointer scans ahead for new values. When fast finds something different from slow, advance slow and write. Classic in-place O(1) space pattern.

The Idea

Imagine you have a sorted deck of numbered cards with duplicates. You want to remove duplicates without using a second deck. Place your left finger on the first card (slow pointer). Scan forward with your right finger (fast pointer). Every time your right finger lands on a new number, slide your left finger forward and copy that number over.

Before you read the code, understand this:

  • Because the array is sorted, all duplicates sit next to each other — no need to hash or sort again.
  • The slow pointer always points to the last unique value written. Everything at or before slow is “clean.”
  • We never need extra space because we overwrite duplicates in-place — O(1) space.
two-pointer-remove-duplicates.py / .js
# Two Pointers: Remove duplicates from sorted array in-place
# Returns the count of unique elements.
# The first k elements of nums will contain the unique values.
#
# WHY this works: since the array is sorted, all duplicates
# are adjacent. The slow pointer only advances when we find
# a genuinely new value — guaranteeing uniqueness.

def remove_duplicates(nums: list[int]) -> int:
    if not nums:
        return 0  # Edge case: empty array

    # slow = index of the last unique element written
    slow = 0

    for fast in range(1, len(nums)):
        # WHY: if different from last unique, it's a new value
        if nums[fast] != nums[slow]:
            slow += 1                # advance write position
            nums[slow] = nums[fast]  # write the new value

    return slow + 1  # count of unique elements

# Example:
# nums = [1, 1, 2, 2, 3]
# After: nums = [1, 2, 3, _, _], returns 3
#
# Complexity: O(n) time, O(1) space — true in-place

Dry Run — step by step on nums = [1, 1, 2, 2, 3]

Step   fast   nums[fast]   nums[slow]   Action                 slow   Array state
Init   -      -            1            slow = 0               0      [1, 1, 2, 2, 3]
1      1      1            1            Same value, skip       0      [1, 1, 2, 2, 3]
2      2      2            1            New! slow++, write 2   1      [1, 2, 2, 2, 3]
3      3      2            2            Same value, skip       1      [1, 2, 2, 2, 3]
4      4      3            2            New! slow++, write 3   2      [1, 2, 3, 2, 3]

Result: slow + 1 = 3 unique elements. First 3 positions hold [1, 2, 3].

Key Line Spotlight

if nums[fast] != nums[slow]:

This single comparison is the heart of the algorithm. It answers the only question that matters: “Is this a new value I haven't written yet?” Because the array is sorted, comparing just these two positions is sufficient — no hash set, no frequency counter. This is the line interviewers want to see you write confidently.

Why O(n) time, O(1) space?

  • The for fast in range(1, len(nums)) loop visits each element exactly once → O(n).
  • No extra arrays, no hash maps — only two integer pointers (slow, fast) → O(1) space.
  • The comparison and write on each iteration are both O(1) operations.

2. Prefix Sum — Range Sum Query

Precompute cumulative sums once (O(n)), then answer any range sum query in O(1). The sentinel value at index 0 avoids special casing. This is the foundation for subarray-sum problems.

The Idea

Imagine you are tracking a running total of money deposits. After each deposit, you write down the cumulative total. If someone asks “how much was deposited between Tuesday and Thursday?” you simply subtract Tuesday's running total from Thursday's. No need to re-add individual deposits.

Before you read the code, understand this:

  • Brute force re-sums the range every time — O(n) per query. With 10,000 queries on an array of 100,000, that is 1 billion operations.
  • Prefix sum does O(n) work once, then every query is O(1) — just one subtraction.
  • The sentinel prefix[0] = 0 eliminates edge cases — it represents “nothing has been summed yet.”
prefix-sum.py / .js
# Prefix Sum: Precompute cumulative sums for O(1) range queries.
#
# WHY prefix[0] = 0: this sentinel avoids an if-statement
# for ranges starting at index 0. Clean, elegant, bug-free.
#
# prefix[i] = sum of nums[0..i-1]
# sum(nums[left..right]) = prefix[right+1] - prefix[left]

def build_prefix_sum(nums: list[int]) -> list[int]:
    # WHY n+1: the extra slot is the sentinel
    prefix = [0] * (len(nums) + 1)
    for i in range(len(nums)):
        # WHY cumulative: each position builds on the previous
        prefix[i + 1] = prefix[i] + nums[i]
    return prefix

def range_sum(prefix: list[int], left: int, right: int) -> int:
    # WHY subtraction works: prefix[right+1] has all elements
    # up to right; prefix[left] has everything before left.
    # The difference is exactly the range we want.
    return prefix[right + 1] - prefix[left]

# Example:
# nums = [3, 1, 4, 1, 5, 9]
# prefix = [0, 3, 4, 8, 9, 14, 23]
# range_sum(prefix, 1, 3) = prefix[4] - prefix[1] = 9 - 3 = 6
# (that's nums[1] + nums[2] + nums[3] = 1 + 4 + 1 = 6)
#
# Build: O(n) time, O(n) space
# Query: O(1) time, O(1) space

Dry Run — nums = [3, 1, 4, 1, 5, 9]

Step   i   nums[i]   prefix[i+1]   prefix array so far
Init   -   -         0             [0]
1      0   3         0+3=3         [0, 3]
2      1   1         3+1=4         [0, 3, 4]
3      2   4         4+4=8         [0, 3, 4, 8]
4      3   1         8+1=9         [0, 3, 4, 8, 9]
5      4   5         9+5=14        [0, 3, 4, 8, 9, 14]
6      5   9         14+9=23       [0, 3, 4, 8, 9, 14, 23]

Query: range_sum(prefix, 1, 3) = prefix[4] - prefix[1] = 9 - 3 = 6 — that is nums[1]+nums[2]+nums[3] = 1+4+1 = 6.

Key Line Spotlight

return prefix[right + 1] - prefix[left]

This single subtraction replaces an entire loop. Instead of summing elements from left to right every time, you precompute all cumulative sums once. Then any range sum is just one subtraction — O(1) instead of O(n). This is the line that turns a brute-force solution into an interview-winning one.

Why O(n) build + O(1) query?

  • The for i in range(len(nums)) loop runs once → O(n) build time.
  • The prefix array stores n+1 values → O(n) space.
  • Each query is a single subtraction of two precomputed values → O(1) per query.

3. Kadane's Algorithm — Maximum Subarray Sum

At each position, decide: extend the current subarray or start fresh. If the running sum is negative, it can only hurt — restart. One pass, O(1) space. This is the most elegant array algorithm.

The Idea

You are walking along a number line, picking up coins (positive numbers) and paying tolls (negative numbers). At each step you decide: “Is it worth continuing from where I started, or should I drop everything and start fresh here?” If your running total has gone negative, continuing can only drag you down — reset.

Before you read the code, understand this:

  • Brute force: check all O(n²) subarrays and compute each sum. That is O(n³) or O(n²) with prefix sums.
  • Kadane's insight: you only need to track the best sum ending at each position. At each element, it is either better to extend the previous subarray or start a new one from this element alone.
  • Initialize with nums[0] (not 0) to correctly handle arrays of all negative numbers.
kadanes-max-subarray.py / .js
# Kadane's Algorithm: Find the contiguous subarray with the
# largest sum. Classic DP / greedy hybrid.
#
# WHY it works: a negative running sum can never help future
# elements. If current_sum < 0, any extension makes it worse.
# So we "reset" — start fresh from the current element.

def max_subarray(nums: list[int]) -> int:
    if not nums:
        return 0

    max_sum = nums[0]      # WHY nums[0]: handles all-negative arrays
    current_sum = nums[0]  # WHY: tracks best sum ending HERE

    for i in range(1, len(nums)):
        # WHY max(): the core decision — extend or restart
        current_sum = max(nums[i], current_sum + nums[i])
        # WHY: update global best after each decision
        max_sum = max(max_sum, current_sum)

    return max_sum

# Example:
# nums = [-2, 1, -3, 4, -1, 2, 1, -5, 4]
# max subarray = [4, -1, 2, 1] = 6
#
# Walk through:
# i=0: current=-2, max=-2
# i=1: current=max(1, -2+1)=1, max=1
# i=2: current=max(-3, 1-3)=-2, max=1
# i=3: current=max(4, -2+4)=4, max=4
# i=4: current=max(-1, 4-1)=3, max=4
# i=5: current=max(2, 3+2)=5, max=5
# i=6: current=max(1, 5+1)=6, max=6 <-- answer
#
# Complexity: O(n) time, O(1) space

Dry Run — nums = [-2, 1, -3, 4, -1, 2, 1, -5, 4]

i   nums[i]   extend (cur+nums[i])   restart (nums[i])   current_sum   max_sum
0   -2        -                      -                   -2            -2
1   1         -2+1=-1                1                   1             1
2   -3        1+(-3)=-2              -3                  -2            1
3   4         -2+4=2                 4                   4             4
4   -1        4+(-1)=3               -1                  3             4
5   2         3+2=5                  2                   5             5
6   1         5+1=6                  1                   6             6
7   -5        6+(-5)=1               -5                  1             6
8   4         1+4=5                  4                   5             6

Answer: max_sum = 6, from subarray [4, -1, 2, 1] (indices 3–6). Notice at i=3 Kadane chose to restart (4 > -2+4=2) — dropping the negative prefix.

Key Line Spotlight

current_sum = max(nums[i], current_sum + nums[i])

This is the entire algorithm in one line. It asks a single question at every position:

  • Extend the current subarray by including this element, or
  • Start fresh from this element alone?

If current_sum went negative, extending would only reduce nums[i] — so we restart. This greedy decision at every position is what makes Kadane's O(n). Interviewers want to see you articulate this tradeoff clearly.

Why O(n) time, O(1) space?

  • The for i in range(1, len(nums)) loop visits each element exactly once → O(n).
  • Only two variables (current_sum, max_sum) — no arrays, no hash maps → O(1) space.
  • Each max() call is O(1), executed n-1 times total.

Interviewer's Perspective

Understanding what interviewers are actually evaluating helps you focus your preparation and communicate effectively during the interview.

What Interviewers Are Really Testing

  • Can you identify the right technique in under 2 minutes? When given an array problem, interviewers watch how quickly you recognize whether it needs two pointers, sliding window, hash map, or sorting. Hesitating for 10 minutes before starting code is a red flag.
  • Do you handle edge cases proactively? Before writing code, say out loud: “Let me consider edge cases — empty array, single element, all duplicates, all negative numbers.” This shows maturity.
  • Can you articulate the time-space tradeoff? Always mention alternatives: “I could use a hash map for O(n) time but O(n) space, or sort first for O(n log n) time but O(1) extra space. Which would you prefer?”
  • Do you start with brute force, then optimize? The best candidates say: “The brute force is O(n²) nested loops. But I notice the array is sorted, so I can use two pointers for O(n).” This structured thinking is exactly what gets a “Strong Hire.”

Arrays are the foundation for many algorithmic patterns. Master these patterns next to build on your array knowledge.

When to Use an Array — Decision Guide

Not every problem calls for an array. Here is a clear framework to help you decide when an array is the right tool and when you should reach for something else.

Use an Array When...

  • You need fast access by position (index). Arrays give you O(1) random access.
  • You mostly add/remove from the end (stack-like behavior). Append is O(1) amortized.
  • You need to iterate through all elements frequently. Arrays have the best cache performance.
  • Your data size is known in advance or grows slowly. Fixed-size arrays avoid resizing overhead.
  • You need to sort the data. Sorting algorithms work naturally on contiguous arrays.

Don't Use an Array When...

  • You need frequent insertions/deletions in the middle. Every insert shifts O(n) elements. Use a linked list instead.
  • You need fast lookup by value (not index). Searching an unsorted array is O(n). Use a hash map for O(1) lookup.
  • You need fast insertion at the front. Inserting at index 0 is O(n). Use a deque or linked list.
  • You need elements to stay sorted as you insert. Use a balanced BST or heap instead.
  • You need to check membership frequently. x in array is O(n). Use a hash set for O(1).

The Golden Rule

When you are unsure, start with an array. It is the simplest, fastest, and most memory-efficient general-purpose data structure.

Only switch to something else when you identify a specific operation that an array handles poorly (O(n) search, O(n) middle insert). In interviews, this “start simple, optimize when needed” approach is exactly what earns you points.

You have a solid foundation on arrays. Now let's shift to strings — which are essentially arrays of characters with one critical twist: immutability. This single property changes how you think about performance and shapes several interview-specific patterns.

Part 2: Strings

Everything below focuses on string-specific concepts, memory models, and interview patterns.

What is a String?

A string is an array of characters. The word “hello” is stored as the array ['h', 'e', 'l', 'l', 'o']. Everything you learned about arrays — contiguous memory, O(1) indexing, cache friendliness — applies to strings too. But strings have one critical property that changes everything about how you work with them in interviews:

Strings Are Immutable in Most Languages

In Python, JavaScript, and Java, you cannot change a character inside a string. If you write s = "hello" and then try to do s[0] = "H", it will either error or create a brand-new string. This single fact drives the majority of string interview questions and performance pitfalls.
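A quick Python sketch of immutability in action, and the standard list-then-join workaround:

```python
# Strings reject in-place mutation; build via a list + join instead.
s = "hello"
try:
    s[0] = "H"                      # raises TypeError: str is immutable
except TypeError:
    pass

# The O(n) fix: collect pieces in a mutable list, join once at the end.
chars = list(s)                     # ['h', 'e', 'l', 'l', 'o']
chars[0] = "H"                      # lists ARE mutable
capitalized = "".join(chars)
assert capitalized == "Hello"
assert s == "hello"                 # the original string is untouched
```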

String Memory — Visual Deep Dive

See exactly how strings behave in memory. The first visual shows immutability in action — why concatenation creates new objects. The second shows Java's String Pool vs Heap — a classic interview topic.

String Immutability (Python / JS / Java)

s = "hello" → evaluating s + "!" allocates a NEW 6-character string "hello!". The original "hello" is NOT modified. Doing this in a loop: O(n) per concatenation = O(n²) total. Fix: collect into a list/array, then .join() at the end = O(n) total.

Java String Pool vs Heap Memory


String s1 = "hello" — a string literal goes into the String Pool

Pool: ["hello"] | Heap: (empty)

String Immutability — Deep Dive

Understanding string immutability at a deep level is critical for interviews. Below we cover the fix per language, why strings are immutable, and Java's String Intern Pool.

String Immutability — Fix per Language

Python

Strings are immutable. s += "x" creates a new string every time. Fix: collect characters in a list, then "".join(parts).

JavaScript

Strings are immutable. Concatenation in a loop allocates n new strings. Fix: push into an array, then parts.join("").

Java

String is immutable. Fix: use StringBuilder for mutable string building. Its .append() is amortized O(1).
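To make the tradeoff concrete, here is a minimal Python sketch of both approaches (the helper names build_slow and build_fast are illustrative, not a standard API). Note that CPython sometimes optimizes += in place as an implementation detail, but you should never rely on it in an interview:

```python
def build_slow(chars):
    """O(n^2) in the worst case: each += may copy the entire string built so far."""
    s = ""
    for c in chars:
        s += c
    return s

def build_fast(chars):
    """O(n): collect parts in a list, allocate and copy once with join."""
    parts = []
    for c in chars:
        parts.append(c)
    return "".join(parts)

data = list("hello world")
print(build_slow(data) == build_fast(data))  # → True — same result, very different scaling
```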

Why Are Strings Immutable? The Real Reasons

This is a popular interview question, especially at companies like Google and Amazon. Every major language (Python, Java, JavaScript, Go, C#) chose to make strings immutable. Here is why — and knowing this shows the interviewer you think at a systems level.

Thread Safety

If a string can never change, multiple threads can read the same string simultaneously without locks. This is critical for server applications handling thousands of concurrent requests. Mutable strings would require synchronization on every access — a huge performance cost.

Hash Map Keys

Strings are the most common hash map key. An immutable string's hash can be computed once and cached. Java's String stores its hash in a field after first computation. If strings were mutable, every hash map lookup would need to recompute the hash.

Security

File paths, database URLs, class names, and API keys are all strings. If strings were mutable, a security-checked string could be modified after the check passes. Immutability ensures that once validated, a string stays validated.

Memory Optimization (String Interning)

Because strings cannot change, the runtime can reuse them. If two variables hold "hello", they can point to the same memory. This is called string interning — it saves massive memory in real applications.

Java's String Intern Pool — A Classic Interview Topic

Java maintains a special memory area called the String Pool (or intern pool). When you create a string literal like "hello", Java first checks if an identical string already exists in the pool. If yes, it reuses that reference. This is only possible because strings are immutable.

string-intern.java
// Java String Pool — a frequently asked interview question

// CASE 1: String literals — both point to the same pool object
String a = "hello";
String b = "hello";
System.out.println(a == b);      // true (same reference!)
System.out.println(a.equals(b)); // true (same content)

// CASE 2: new String() — creates a NEW object on the heap
String c = new String("hello");
System.out.println(a == c);      // false (different objects!)
System.out.println(a.equals(c)); // true (same content)

// CASE 3: intern() — forces the string into the pool
String d = c.intern();
System.out.println(a == d);      // true (d now points to the pool)

// WHY THIS MATTERS IN INTERVIEWS:
// 1. Always use .equals() for string comparison, never ==
// 2. == compares references (memory addresses), .equals() compares content
// 3. The pool saves memory: 1000 variables holding "hello" share ONE object
// 4. Only possible because String is immutable — if you could change a,
//    it would corrupt b (and every other reference to the same pool entry)

// Memory layout:
//
// String Pool (part of the heap since Java 7):
// ┌──────────────────────┐
// │  "hello" (one copy)  │ <── a, b, d all point here
// └──────────────────────┘
//
// Heap (regular objects):
// ┌──────────────────────┐
// │ "hello" (separate)   │ <── c points here
// └──────────────────────┘

How Different Languages Represent Strings Internally

Not all strings are created equal. Understanding the internal representation helps you predict performance and avoid common bugs (especially with Unicode).

| Language | Internal Encoding | Char Size | Mutable? | Key Detail |
| --- | --- | --- | --- | --- |
| Java | UTF-16 (compact: Latin-1 or UTF-16) | 1 or 2 bytes | No | Since Java 9, strings with only ASCII chars use 1 byte/char (compact strings) |
| Python 3 | UCS-1 / UCS-2 / UCS-4 (adaptive) | 1, 2, or 4 bytes | No | Python picks the smallest encoding that fits all chars: 'hello' = 1 byte/char, '你好' = 2 bytes/char |
| JavaScript | UTF-16 | 2 bytes | No | Emoji like 😀 take 2 UTF-16 code units (4 bytes). str.length returns code units, not visual characters |
| Go | UTF-8 (byte slice) | 1-4 bytes | No | Strings are read-only byte slices. Use []rune for character-level access |
| C | Byte array + null terminator | 1 byte (ASCII) | Yes | char* is mutable. No built-in Unicode. strlen() is O(n) — scans for \0 |
| Rust | UTF-8 (String / &str) | 1-4 bytes | String: Yes, &str: No | Indexing by byte position is O(1), but by character is O(n) — UTF-8 is variable width |
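You can observe Python's adaptive encoding directly with sys.getsizeof — the per-character cost is the size difference when one more character is added. The exact byte counts below are a CPython 3 implementation detail (PEP 393 "flexible string representation"), not guaranteed by the language spec:

```python
import sys

# Per-character cost = size growth when one more char of the same kind is added
ascii_cost = sys.getsizeof("aa") - sys.getsizeof("a")      # ASCII fits in Latin-1: 1 byte/char
cjk_cost   = sys.getsizeof("你你") - sys.getsizeof("你")    # BMP chars need UCS-2: 2 bytes/char
emoji_cost = sys.getsizeof("😀😀") - sys.getsizeof("😀")    # beyond BMP needs UCS-4: 4 bytes/char

print(ascii_cost, cjk_cost, emoji_cost)  # → 1 2 4 on CPython 3
```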

The .length Trap in JavaScript

"😀".length returns 2, not 1. Because JavaScript uses UTF-16, emojis require two code units (a “surrogate pair”). Use [..."😀"].length or Array.from("😀").length to get the visual character count. This is a common source of bugs and a popular interview gotcha.
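You can see why the JavaScript count is 2 from Python, which counts code points rather than UTF-16 code units — encoding the same emoji to UTF-16 reveals the surrogate pair:

```python
s = "😀"  # U+1F600, outside the Basic Multilingual Plane

print(len(s))                      # → 1 — Python counts Unicode code points
print(len(s.encode("utf-16-le")))  # → 4 bytes = 2 UTF-16 code units (a surrogate pair)
print(len(s.encode("utf-8")))      # → 4 bytes in UTF-8
```

The 2 UTF-16 code units are exactly what JavaScript's .length reports.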

Essential String Methods for Interviews

These are the string operations you will use in 90% of interview problems. Know them by heart — fumbling with string APIs wastes precious interview time.

string-methods.py
# ============================================================
# Essential Python String Methods for Interviews
# ============================================================

s = "Hello, World!"

# --- BASICS ---
len(s)               # 13 — O(1) in Python (length is cached)
s[0]                 # 'H' — O(1) indexing
s[-1]                # '!' — negative indexing (Python-specific)
s[0:5]               # 'Hello' — slicing, creates a new string, O(k)

# --- CHECKING ---
s.startswith("Hel")  # True — O(k) where k = prefix length
s.endswith("ld!")    # True — O(k)
"World" in s         # True — substring search, worst-case O(n*m)
s.isalpha()          # False — contains comma and space
s.isdigit()          # False
s.isalnum()          # False
"hello".islower()    # True
"HELLO".isupper()    # True

# --- TRANSFORMING (all return NEW strings — immutable!) ---
s.lower()            # 'hello, world!'
s.upper()            # 'HELLO, WORLD!'
s.strip()            # removes leading/trailing whitespace
s.lstrip()           # left strip only
s.rstrip()           # right strip only
s.replace("World", "Python")  # 'Hello, Python!'
s.split(", ")        # ['Hello', 'World!'] — split into a list
", ".join(["a", "b"])  # 'a, b' — join a list into a string, O(total chars)

# --- SEARCHING ---
s.find("World")      # 7 — returns index, -1 if not found
s.index("World")     # 7 — raises ValueError if not found
s.count("l")         # 3 — count occurrences

# --- INTERVIEW PATTERNS ---
# Reverse a string
s[::-1]              # '!dlroW ,olleH' — slice with step -1

# Convert a string to a list for in-place modification
chars = list(s)      # ['H', 'e', 'l', 'l', 'o', ...]
chars[0] = 'h'       # modify individual characters
result = "".join(chars)  # convert back to a string

# Character frequency counting
from collections import Counter
freq = Counter("banana")  # Counter({'a': 3, 'n': 2, 'b': 1})

# Check if two strings are anagrams
def is_anagram(a, b):
    return Counter(a) == Counter(b)

# ASCII value of a character
ord('A')  # 65
ord('a')  # 97
chr(65)   # 'A'

String Complexity Cheat Sheet

| Operation | Time | Why |
| --- | --- | --- |
| s[i] / charAt(i) | O(1) | Direct memory offset |
| len(s) / length() | O(1) | Cached as a field |
| s + t (concatenation) | O(n+m) | Copies both into a new string |
| s[a:b] / substring | O(b-a) | Copies the substring into a new string |
| s.find(t) / indexOf | O(n*m) | Naive search; O(n+m) with KMP |
| s.split(delim) | O(n) | Scans the entire string once |
| "".join(parts) | O(total) | One allocation, copies all parts |
| loop concat (s += x) | O(n²) | Each += copies the entire string |

You understand string internals and immutability. Now let's apply that knowledge — these four patterns cover the most common string problems you will see in FAANG interviews.

String Interview Patterns

String problems in interviews tend to fall into a few recurring categories. Recognizing the pattern is half the battle.

Palindrome Problems

Use when you see: “valid palindrome,” “longest palindromic substring,” “palindrome partitioning.”

Technique: Two pointers from both ends moving inward. For longest palindromic substring, use “expand around center” — try each character (and each pair of characters) as the center and expand outward while characters match.

Valid palindrome: O(n). Longest palindromic substring: O(n²) expand-around-center or O(n) with Manacher's.

How to recognize in an interview:

Look for “palindrome,” “reads the same forwards and backwards,” “mirror,” “symmetric string.” If the problem mentions palindromes, two pointers from the ends or expand-around-center is the approach.

# Python expand-around-center template
def longest_palindrome(s):
    def expand(l, r):
        while l >= 0 and r < len(s) and s[l] == s[r]:
            l -= 1; r += 1
        return s[l+1:r]
    best = ""
    for i in range(len(s)):
        odd = expand(i, i)        # Odd-length palindrome
        even = expand(i, i + 1)   # Even-length palindrome
        best = max(best, odd, even, key=len)
    return best

Anagram Problems

Use when you see: “valid anagram,” “group anagrams,” “find all anagrams in a string.”

Technique: Frequency counting with a hash map or 26-element array. Two strings are anagrams if and only if they have the same character frequencies. For “find all anagrams,” use a sliding window with a frequency map.

Valid anagram: O(n). Group anagrams: O(n * k) where k = max string length. Find all anagrams: O(n) sliding window.

How to recognize in an interview:

Look for “anagram,” “rearrangement,” “permutation of characters,” “same letters.” If two strings must have identical character counts, reach for a frequency array (int[26]) or Counter.
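A sketch of the sliding-window frequency-map technique for "find all anagrams" (the helper name find_anagrams is illustrative). The window Counter is updated in O(1) per step as the window slides, so the whole pass is O(n):

```python
from collections import Counter

def find_anagrams(s, p):
    """Return the start indices of p's anagrams in s — O(n) sliding window."""
    n, k = len(s), len(p)
    if k > n:
        return []
    need = Counter(p)
    window = Counter(s[:k])          # frequency map of the first window
    result = [0] if window == need else []
    for right in range(k, n):
        window[s[right]] += 1        # expand the right edge
        left = s[right - k]
        window[left] -= 1            # shrink the left edge
        if window[left] == 0:
            del window[left]         # keep the Counters comparable
        if window == need:
            result.append(right - k + 1)
    return result

print(find_anagrams("cbaebabacd", "abc"))  # → [0, 6]
```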

Substring / Sliding Window Problems

Use when you see: “longest substring without repeating characters,” “minimum window substring,” “substring with concatenation of all words.”

Technique: Dynamic sliding window with a hash map tracking character counts. Expand the right pointer to include characters; shrink the left pointer when a constraint is violated. The window state (hash map) updates in O(1) per step.

Longest substring without repeats: O(n). Minimum window substring: O(n).

How to recognize in an interview:

Look for “substring,” “window,” “without repeating,” “contains all characters,” “at most k distinct.” These are classic sliding window + hash map problems where you expand right and shrink left.
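The expand-right/shrink-left idea can be sketched for "longest substring without repeating characters" (the helper name longest_unique_substring is illustrative). Tracking each character's last-seen index lets the left pointer jump past a duplicate in O(1):

```python
def longest_unique_substring(s):
    """Length of the longest substring without repeating characters — O(n)."""
    last_seen = {}  # char -> index of its most recent occurrence
    left = 0        # left edge of the current window
    best = 0
    for right, c in enumerate(s):
        if c in last_seen and last_seen[c] >= left:
            left = last_seen[c] + 1      # jump the window past the duplicate
        last_seen[c] = right
        best = max(best, right - left + 1)
    return best

print(longest_unique_substring("abcabcbb"))  # → 3 ("abc")
print(longest_unique_substring("pwwkew"))    # → 3 ("wke")
```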

String Building / Manipulation

Use when you see: “reverse words in a string,” “string compression,” “encode/decode strings.”

Technique: Convert the string to a list/array of characters (since strings are immutable), perform in-place modifications, then join back. In Java, use StringBuilder. In Python, collect parts in a list and "".join() at the end. Never concatenate strings in a loop — that is the O(n²) trap.

String building with join: O(n). Loop concatenation: O(n²) — avoid this!

How to recognize in an interview:

Look for “reverse words,” “compress,” “encode/decode,” “build string.” Any time the problem asks you to construct or transform a string, convert to a mutable structure first. The O(n²) trap from += in a loop is the number one mistake interviewers watch for.
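Two quick Python sketches of the build-with-a-list pattern (the helper names reverse_words and compress are illustrative): both collect parts and join once, never concatenating with += in a loop:

```python
def reverse_words(s):
    """Reverse word order — O(n) via split + join, no += in a loop."""
    return " ".join(reversed(s.split()))

def compress(s):
    """Run-length encode, e.g. 'aaabcc' -> 'a3b1c2', built with a parts list."""
    parts = []
    i = 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1                       # scan to the end of this run
        parts.append(s[i] + str(j - i))  # one part per run: char + count
        i = j
    return "".join(parts)                # single O(n) join at the end

print(reverse_words("the sky is blue"))  # → "blue is sky the"
print(compress("aaabcc"))                # → "a3b1c2"
```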

The Key Insight for String Problems

Most string problems are really array problems with one twist: strings are immutable. This means:

  • Convert to char[] or list(s) when you need to modify characters
  • Use frequency arrays int[26] or int[128] for character counting
  • Build strings with StringBuilder / list + join, never with += in a loop
  • Two pointers, sliding window, and hash maps work identically on strings and arrays

Practice Problems

Start with these curated problems. They are ordered from easy to hard and each one drills a different array/string pattern.

Array Problems

String Problems

Test Your Knowledge


Ready to test your understanding?

25 questions covering Arrays & Strings — from easy fundamentals to hard interview-level challenges.

8 Easy · 9 Medium · 8 Hard