Data Structure Fundamentals

Linked Lists

The pointer-based sequential data structure. No contiguous memory, no random access — just nodes connected by references. Mastering linked lists means mastering pointer manipulation, and that skill transfers to trees, graphs, and beyond.

Imagine a scavenger hunt where each clue tells you where to find the next clue. You can't jump to clue #5 directly — you must follow the chain from the start. That is a linked list: each element (node) holds a value and a pointer to the next element. There is no index, no shortcut. To reach a specific node, you walk the chain.

This constraint sounds limiting — and for random access, it is. But it unlocks something arrays cannot do efficiently: constant-time insertions and deletions at any known position. No shifting elements, no resizing buffers. Just rewire two pointers and you are done.

What is a Linked List? — Start Here if You're New

If arrays are like a row of lockers, a linked list is a completely different approach to organizing data. Let's build your understanding from scratch.

Real-World Analogy: A Scavenger Hunt

Imagine a scavenger hunt where each clue is in a different location, and each clue tells you where to find the next one. Clue #1 is at the park, and it says “go to the library.” At the library, clue #2 says “go to the coffee shop.” And so on.

  • You cannot jump directly to clue #5. You must follow the chain from the beginning: clue #1 leads to #2, which leads to #3, and so on. This is why access is O(n).
  • But if you want to insert a new clue between clue #2 and clue #3, you just update clue #2 to point to the new clue instead. No other clues need to move. This is why insertion is O(1) once you are at the right spot.
  • Each “clue” is what we call a node. It holds two things: the data (the clue itself) and a pointer (the directions to the next clue).

How Linked Lists Work in Memory

Unlike arrays, linked list nodes are scattered throughout memory. Each node can live at any random address. The computer does not need them to be next to each other because each node carries a pointer that says exactly where the next node is.

Node at 0x200:  value=3, next=0x540
Node at 0x540:  value=7, next=0x120
Node at 0x120:  value=1, next=0x890
Node at 0x890:  value=9, next=null   (end of list)

The nodes are NOT next to each other in memory!
We follow the "next" pointers to traverse.

This is the fundamental difference from arrays. With arrays, the computer uses math to find any element instantly. With linked lists, the computer must follow the chain from the beginning. This means random access is impossible, but insertion and deletion only require changing a couple of pointers.
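
To make the scattered-layout point concrete, here is a small, hedged sketch in Python (the throwaway Node class below is purely illustrative; the tutorial's real ListNode class appears later in the Implementation section). In CPython, id() returns an object's memory address, so printing it shows that linked nodes need not sit next to each other even though the chain still works.

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None   # reference (pointer) to the next node

# Build 3 -> 7 -> 1 by hand
a, b, c = Node(3), Node(7), Node(1)
a.next = b
b.next = c

# The three nodes are typically not adjacent in memory, yet traversal
# works because each node stores a reference to the next one.
node = a
while node:
    print(f"value={node.value}, at address {hex(id(node))}")
    node = node.next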

Why Do Linked Lists Exist?

If arrays are so fast for access, why would anyone use a linked list? Because arrays have a big weakness: inserting or deleting in the middle is expensive. Every time you insert, all subsequent elements must shift. For applications that do many insertions and deletions (like an undo history, a music playlist, or a task scheduler), linked lists are more efficient.

Linked lists are also the building blocks for other important data structures you will learn later: stacks, queues, hash map chains, LRU caches, and adjacency lists for graphs. Understanding linked lists is not just about linked lists — it is about understanding pointer manipulation, the most fundamental skill in computer science.

Key Takeaway for Beginners

Arrays give you fast access by position but slow insertions. Linked lists give you fast insertions at known positions but slow access. Knowing when to choose each one is a core interview skill.

How a Linked List Works

A linked list is a collection of nodes. Each node stores a value and a pointer to the next node. The first node is the head; the last node points to null. Trace the traversal below — this is the only way to reach any element.

Traversal — Following the Pointer Chain

current
  |
HEAD -> 3 -> 7 -> 1 -> 9 -> null

The current pointer walks from the head, following each next pointer until it reaches null. This is the only way to reach any node — no shortcuts, no random access.

Operations — Visual Deep Dive

Step through each operation below. Pay attention to how pointers are rewired — this is the core skill you need for every linked list interview problem.

Starting state:

HEAD -> 3 -> 7 -> 1 -> 9 -> null

head -> 3 (the head pointer references the first node)

Singly vs Doubly vs Circular

Three variants of the same idea. The difference is the pointer structure — which determines traversal direction and memory cost.

Singly Linked List

A -> B -> C -> null

One pointer per node. Forward traversal only. Lowest memory overhead.

Doubly Linked List

null <- A <-> B <-> C -> null

Two pointers per node (next + prev). Traversal in both directions. Used in LRU Cache, browser history.

Circular Linked List

A -> B -> C -> back to A   (tail.next = head, no null)

Last node points back to the head. No null terminator. Used in round-robin scheduling, circular buffers.

Type     | Pointers per Node | Traversal       | Best For
Singly   | 1 (next)          | Forward only    | Stacks, simple chains, most interview problems
Doubly   | 2 (next + prev)   | Both directions | LRU Cache, browser history, deques
Circular | 1 or 2            | Continuous loop | Round-robin scheduling, circular buffers
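
As a rough sketch of what these variants look like in code (the field names follow the usual convention; nothing here is mandated by the table above), the only structural difference is how many pointers each node carries and how the last node is wired:

class SinglyNode:
    def __init__(self, val):
        self.val = val
        self.next = None    # one pointer: forward only

class DoublyNode:
    def __init__(self, val):
        self.val = val
        self.next = None    # forward pointer
        self.prev = None    # backward pointer (the extra memory cost)

# Circular: same node shape, different wiring. After building
# a -> b -> c, set c.next = a so the chain loops back to the head
# instead of ending in None.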

Array vs Linked List — The Key Tradeoff

This is a question interviewers love. Know the trade-offs cold. The visual below shows why insertion behaves so differently.

Insertion at index 1: Array must shift, Linked List just rewires

Array — O(n) insert

Before:                  [3, 7, 1, 9]
After insert 5 at [1]:   [3, 5, 7, 1, 9]   (7, 1, 9 shifted right: 3 elements had to shift!)

Linked List — O(1) rewire

Before:                        3 -> 7 -> ...
After insert 5 after node 3:   3 -> 5 -> 7 -> ...   (only 2 pointers changed!)
Criteria                   | Array                       | Linked List
Random access              | O(1)                        | O(n)
Insert/delete at head      | O(n)                        | O(1)
Insert/delete at known pos | O(n)                        | O(1)
Memory layout              | Contiguous (cache-friendly) | Scattered (cache-unfriendly)
Memory overhead            | None per element            | Pointer(s) per node
Resizing                   | Expensive (copy all)        | Not needed
Best for                   | Index-heavy, read-heavy     | Frequent insert/delete

Operations & Complexity

The O(n) for insert/delete at a position accounts for traversal to find the position. Once you are at the node, the actual pointer rewiring is always O(1). This is the fundamental insight.

Operation                      | Time | Space
Access by index                | O(n) | O(1)
Search by value                | O(n) | O(1)
Insert at head                 | O(1) | O(1)
Insert at tail (with tail ptr) | O(1) | O(1)
Insert at tail (no tail ptr)   | O(n) | O(1)
Insert at position             | O(n) | O(1)
Delete head                    | O(1) | O(1)
Delete by value                | O(n) | O(1)
Delete at position             | O(n) | O(1)
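
To see why the rewiring itself is O(1), here is a minimal sketch (it assumes the ListNode class defined later in the Implementation section, and the helper names are illustrative): given a reference to the node just before the position, insertion and deletion each touch only a couple of pointers.

def insert_after(node, val):
    """Insert a new node holding val right after `node`. O(1)."""
    new_node = ListNode(val)
    new_node.next = node.next    # new node adopts the rest of the chain
    node.next = new_node         # predecessor now points to the new node

def delete_after(node):
    """Remove the node right after `node`, if there is one. O(1)."""
    if node.next:
        node.next = node.next.next   # skip over the removed node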

Why is access O(n)?

Unlike an array, where you can jump directly to arr[5] using pointer arithmetic (base address + offset), a linked list has no contiguous memory layout. To reach the node at index 5, you must start at the head and follow five next pointers sequentially. There is no shortcut — this sequential walk is what makes access O(n) in the worst case.
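
A hedged sketch of what access-by-index costs (the function name get_at is illustrative, and it assumes the ListNode shape used throughout this guide): the loop must take one pointer hop per position before it can return anything, which is exactly the O(n) walk described above.

def get_at(head, index):
    """Return the value at position `index` (0-based). O(n) time, O(1) space."""
    node = head
    for _ in range(index):
        if node is None:
            raise IndexError("index out of range")
        node = node.next             # one pointer hop per position
    if node is None:
        raise IndexError("index out of range")
    return node.val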

Interview Patterns

Four patterns cover 90% of linked list interview questions. Recognize the pattern, apply the template, and you will solve the problem.

Fast / Slow Pointer

Two pointers move at different speeds. The slow pointer moves one step; the fast pointer moves two. When fast reaches the end, slow is at the middle. If there is a cycle, they will meet.

Use when you see: "find the middle node," "detect cycle," "find the start of a cycle," "kth node from the end"
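
A minimal sketch of the pattern (assuming the usual node shape with a next pointer, defined formally in the Implementation section below): when fast runs off the end, slow is sitting at the middle.

def find_middle(head):
    """Return the middle node (the second of two middles for even length).
    O(n) time, O(1) space."""
    slow = fast = head
    while fast and fast.next:
        slow = slow.next          # 1 step
        fast = fast.next.next     # 2 steps
    return slow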

Reverse a Linked List

The fundamental pointer manipulation exercise. Iteratively: use three pointers (prev, curr, next) to reverse links one by one. Recursively: reverse from the tail back, relinking as the call stack unwinds.

Use when you see: "reverse the list," "reverse between positions m and n," "palindrome linked list"

Merge Two Sorted Lists

Use a dummy head node and a tail pointer. Compare the front nodes of both lists, attach the smaller one to the merged list, advance that pointer. Attach whatever remains when one list is exhausted.

Use when you see: "merge two sorted lists," "merge k sorted lists" (extend with a min-heap)

Detect & Remove Cycle

Floyd's algorithm: run fast/slow pointers until they meet (cycle exists). To find the cycle start, reset one pointer to head and advance both at the same speed — they meet at the cycle entrance.

Use when you see: "linked list cycle," "find where the cycle starts," "happy number" (abstract cycle detection)

Common Mistakes — Avoid These

  • Losing the head reference. If you advance your only pointer past the head without saving it, you lose the entire list. Always keep a reference to head (or use a dummy node).
  • Null pointer on .next.next. Before accessing node.next.next, verify that node.next is not null. This is the #1 runtime error in linked list problems.
  • Not handling single-node lists. A list with one node is its own head and tail. Reversing it should return it unchanged. Many solutions break on this edge case.
  • Forgetting to update the tail pointer. If your data structure maintains a tail reference, every insertion and deletion must keep it in sync.
  • Creating cycles accidentally. When rewiring pointers in reverse or merge operations, verify you set the old tail's next to null. Otherwise, you create an infinite loop.

The Key Insight

Linked list problems are really pointer manipulation puzzles. The data in the nodes almost never matters — what matters is how you rewire the connections between them. Every linked list operation boils down to: (1) save the reference you are about to lose, (2) rewire the pointer, (3) advance. Master this three-step dance and every linked list problem becomes mechanical.

Implementation

Now that you understand the visual mechanics, here is the code. Every linked list problem starts with the same building block: the ListNode class. Then you manipulate pointers. Here are the four patterns you will use in 90% of linked list interview problems.

The Idea

Every linked list is built from the same tiny building block: a node that holds a value and a pointer to the next node. Think of it like a chain where each link knows only about the link directly after it — nothing about the chain as a whole.

Before you read the code, understand this:

  • A node is just two fields: val (the data) and next (pointer to the next node, or null).
  • The dummy node trick in build_list avoids special-casing the head — you will see this pattern in nearly every linked list problem.
ListNode — The Building Block
class ListNode:
    """Singly linked list node."""
    def __init__(self, val=0, next=None):
        self.val = val          # the data this node holds
        self.next = next        # pointer to the next node (or None)

    def __repr__(self):
        # Helper for debugging: prints the list from this node
        parts, node = [], self
        while node:
            parts.append(str(node.val))
            node = node.next
        return " -> ".join(parts) + " -> null"


# Helper: build a linked list from a Python list
def build_list(values):
    dummy = ListNode(0)              # dummy avoids special-casing head
    curr = dummy
    for v in values:
        curr.next = ListNode(v)      # create and link each node
        curr = curr.next             # advance the tail
    return dummy.next                # skip dummy, return real head
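
A quick usage sketch of the helpers above (the expected output is shown in comments and relies on the __repr__ defined in the class):

head = build_list([3, 7, 1, 9])
print(head)        # 3 -> 7 -> 1 -> 9 -> null
print(head.next)   # 7 -> 1 -> 9 -> null   (__repr__ prints from any node)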

Key Line Spotlight

dummy = ListNode(0)

The dummy node is the single most important trick in linked list coding. Without it, you need separate logic for “what if the head is null?” and “what if we need to insert before the head?” With a dummy, every node — including the first real one — has a predecessor. At the end, just return dummy.next. You will use this in almost every linked list interview problem.
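
To see the dummy trick on a concrete problem, here is a hedged sketch that removes every node holding a target value (the problem choice and function name are illustrative, not from this guide). Because the dummy sits before the head, deleting the first real node needs no special case.

def remove_elements(head, target):
    """Remove all nodes whose value equals target. O(n) time, O(1) space."""
    dummy = ListNode(0)
    dummy.next = head
    prev, curr = dummy, head
    while curr:
        if curr.val == target:
            prev.next = curr.next   # unlink curr; prev stays put
        else:
            prev = curr             # keep curr; advance prev
        curr = curr.next
    return dummy.next               # correct head even if the old head was removed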

The Idea

Reversing a linked list means making each node point backward instead of forward. Picture a one-way street: you need to make every car turn around so traffic flows the other direction. The trick is to do it one car at a time without losing track of the rest of the road.

Before you read the code, understand this:

  • You need three pointers: prev (already reversed), curr (being processed), and next_node (saved before we break the link).
  • The order matters: save next, reverse pointer, advance prev, advance curr. Mess up the order and you lose the rest of the list.
  • The iterative approach is O(1) space. The recursive approach uses O(n) stack space but is elegant to understand.
Reverse a Linked List — Iterative & Recursive
def reverse_list(head):
    """Reverse a singly linked list in-place. O(n) time, O(1) space."""
    prev = None      # will become the new tail (null)
    curr = head      # start at the head

    while curr:
        next_node = curr.next   # 1. save next before breaking link
        curr.next = prev        # 2. reverse the pointer
        prev = curr             # 3. advance prev forward
        curr = next_node        # 4. advance curr forward

    return prev                 # prev is now the new head


def reverse_list_recursive(head):
    """Reverse recursively. O(n) time, O(n) space (call stack)."""
    # Base case: empty list or single node
    if not head or not head.next:
        return head

    # Recurse: reverse everything after head
    new_head = reverse_list_recursive(head.next)

    # head.next is now the TAIL of the reversed sublist
    # Make it point back to head
    head.next.next = head
    head.next = None            # head is now the new tail

    return new_head

Dry Run — Reverse 1 -> 2 -> 3 -> null

Step | prev | curr | next_node | Action                            | Chain so far
Init | null | 1    | -         | Set up pointers                   | 1->2->3->null
1    | null | 1    | 2         | Save next=2, 1.next=null, advance | null<-1  2->3->null
2    | 1    | 2    | 3         | Save next=3, 2.next=1, advance    | null<-1<-2  3->null
3    | 2    | 3    | null      | Save next=null, 3.next=2, advance | null<-1<-2<-3
Done | 3    | null | -         | curr is null, return prev         | 3->2->1->null

Key Line Spotlight

next_node = curr.next # save before breaking

This line is where beginners get tripped up. Once you set curr.next = prev, the original forward link is destroyed. If you did not save curr.next beforehand, you would lose the rest of the list forever. This save-then-reverse pattern appears in every linked list pointer manipulation problem. Interviewers specifically watch for whether you handle this correctly.

Why O(n) time, O(1) space?

  • The while curr loop visits each node exactly once → O(n) time.
  • Only three pointer variables (prev, curr, next_node) regardless of list size → O(1) space.
  • The recursive version uses O(n) call stack space — mention this tradeoff in interviews.
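
The "reverse between positions m and n" variation mentioned in the patterns section combines the dummy-node trick with the same save/rewire/advance loop. A hedged sketch, assuming 1-indexed positions as in the common interview phrasing:

def reverse_between(head, m, n):
    """Reverse the sublist from position m to n (1-indexed, m <= n).
    O(n) time, O(1) space."""
    dummy = ListNode(0)
    dummy.next = head

    # Walk to the node just before position m
    before = dummy
    for _ in range(m - 1):
        before = before.next

    # Reverse n - m + 1 nodes with the standard three-pointer loop
    prev, curr = None, before.next
    for _ in range(n - m + 1):
        next_node = curr.next
        curr.next = prev
        prev = curr
        curr = next_node

    # Reconnect: before -> reversed sublist -> remainder
    before.next.next = curr   # old sublist head is now its tail
    before.next = prev        # prev is the new sublist head
    return dummy.next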

The Idea

Imagine two runners on a circular track. The fast runner (hare) goes twice as fast as the slow runner (tortoise). If the track is circular, they must eventually meet. If the track is straight (no cycle), the fast runner simply reaches the end. This is Floyd's algorithm — and it detects cycles using zero extra memory.

Before you read the code, understand this:

  • Brute force: use a hash set to store visited nodes — O(n) space. Floyd's does it in O(1) space.
  • If there is a cycle, slow and fast will meet inside the cycle. The math guarantees it.
  • To find where the cycle starts: after they meet, reset one pointer to head and move both at speed 1. They will meet at the cycle entrance.
Detect Cycle — Floyd's Tortoise and Hare
def has_cycle(head):
    """Detect if a linked list has a cycle. O(n) time, O(1) space."""
    slow = head   # tortoise: moves 1 step
    fast = head   # hare: moves 2 steps

    while fast and fast.next:
        slow = slow.next          # move one step
        fast = fast.next.next     # move two steps
        if slow is fast:
            return True           # they met — cycle exists

    return False                  # fast reached the end — no cycle


def find_cycle_start(head):
    """Find the node where the cycle begins.
    O(n) time, O(1) space.

    After detecting the cycle (slow == fast), reset one
    pointer to head. Move both at the same speed — they
    meet at the cycle entrance. This works because:
    distance(head -> start) == distance(meeting -> start).
    """
    slow = head
    fast = head

    # Phase 1: detect the cycle
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            break
    else:
        return None   # no cycle

    # Phase 2: find the entrance
    slow = head
    while slow is not fast:
        slow = slow.next
        fast = fast.next

    return slow   # this is the cycle start node

Dry Run — Cycle detection on 1->2->3->4->2 (cycle at node 2)

Step | slow | fast | slow == fast?
Init | 1    | 1    | Equal, but this is just the starting position
1    | 2    | 3    | No
2    | 3    | 2    | No
3    | 4    | 4    | Yes! Cycle detected

They meet at node 4. To find the cycle start: reset slow to head (1), move both at speed 1. slow: 1→2, fast: 4→2. They meet at node 2 — the cycle entrance.

Key Line Spotlight

if slow is fast: return True

This identity check (is in Python, === in JavaScript) tests whether both pointers reference the same node object in memory, not just nodes with the same value. If the hare (moving 2x speed) ever lands on the exact same node as the tortoise, a cycle must exist — because in a straight list, the hare would have reached null first. This is the key insight interviewers want you to explain.
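
A tiny demo of identity versus value comparison, using the ListNode class from earlier:

a = ListNode(5)
b = ListNode(5)
print(a is b)   # False: two distinct node objects, even though the values match
print(a is a)   # True:  the same object, which is what Floyd's check relies on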

Why O(n) time, O(1) space?

  • The fast pointer covers at most 2n steps before either reaching null or meeting slow → O(n) time.
  • Only two pointers (slow, fast) — no hash set needed → O(1) space.
  • Finding the cycle start (Phase 2) adds at most another O(n) pass, so total is still O(n).
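
The pattern is billed as "Detect & Remove Cycle," and the removal step is not shown above. Here is a hedged sketch of one way to finish the job once the entrance is known (the function name is illustrative): walk around the cycle until you reach the node whose next is the entrance, then cut that link.

def remove_cycle(head):
    """Break the cycle, if any, so the list ends in None. O(n) time, O(1) space."""
    start = find_cycle_start(head)
    if start is None:
        return head                  # no cycle, nothing to do
    node = start
    while node.next is not start:
        node = node.next             # walk around the cycle once
    node.next = None                 # cut the link back to the entrance
    return head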

The Idea

You have two sorted stacks of papers. To merge them into one sorted stack, you compare the top sheets of each stack, take the smaller one, and place it on the output pile. Repeat until both stacks are empty. This is exactly what the code does with pointers.

Before you read the code, understand this:

  • A dummy node avoids special-casing “which list provides the first node?”
  • We reuse existing nodes (no new allocations) — just re-wire pointers, giving O(1) space.
  • When one list runs out, attach the entire remainder of the other — it is already sorted.
Merge Two Sorted Linked Lists
def merge_two_lists(l1, l2):
    """Merge two sorted linked lists into one sorted list.
    O(n + m) time, O(1) space (reuses existing nodes).
    """
    # Dummy node simplifies edge cases (no special head handling)
    dummy = ListNode(0)
    tail = dummy

    while l1 and l2:
        if l1.val <= l2.val:
            tail.next = l1    # attach the smaller node
            l1 = l1.next      # advance that list's pointer
        else:
            tail.next = l2
            l2 = l2.next
        tail = tail.next      # advance the merged list's tail

    # One list is exhausted — attach the remainder of the other
    tail.next = l1 if l1 else l2

    return dummy.next         # skip the dummy, return the real head

Dry Run — Merge 1->3->5 and 2->4->6

Step | l1   | l2 | Compare  | Attach  | Merged so far
1    | 1    | 2  | 1 <= 2   | 1       | D->1
2    | 3    | 2  | 3 > 2    | 2       | D->1->2
3    | 3    | 4  | 3 <= 4   | 3       | D->1->2->3
4    | 5    | 4  | 5 > 4    | 4       | D->1->2->3->4
5    | 5    | 6  | 5 <= 6   | 5       | D->1->2->3->4->5
6    | null | 6  | l1 empty | rest(6) | D->1->2->3->4->5->6

Result: dummy.next = 1->2->3->4->5->6->null. The “D” is the dummy node we skip.

Key Line Spotlight

tail.next = l1 if l1 else l2

After the while loop, one list still has remaining nodes. This line attaches the entire remainder in one shot — no loop needed because those nodes are already sorted and connected. This is a common pattern: when one input is exhausted, short-circuit by linking the rest. Interviewers love to see this clean handling of the “leftover” case.

Why O(n + m) time, O(1) space?

  • The while loop compares and attaches one node per iteration. Total iterations = n + m → O(n + m).
  • No new nodes are created — we re-wire existing pointers. Only the dummy node is extra → O(1) space.
  • This extends naturally to “merge k sorted lists” using a min-heap — a common follow-up question.
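
Since that follow-up comes up so often, here is a hedged sketch of "merge k sorted lists" using Python's heapq (the tie-breaking index i keeps the heap from ever comparing ListNode objects, which have no ordering defined):

import heapq

def merge_k_lists(lists):
    """Merge k sorted linked lists into one. O(N log k) time, O(k) heap space."""
    heap = []
    for i, node in enumerate(lists):
        if node:
            heapq.heappush(heap, (node.val, i, node))   # i breaks value ties

    dummy = ListNode(0)
    tail = dummy
    while heap:
        _, i, node = heapq.heappop(heap)
        tail.next = node                 # attach the smallest front node
        tail = tail.next
        if node.next:
            heapq.heappush(heap, (node.next.val, i, node.next))
    return dummy.next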

Practice Problems

These are the linked list problems that appear most frequently in top-tier technical interviews. Solve them in order — each one builds on the previous.

Interviewer's Perspective

What Interviewers Are Really Testing

  1. Pointer manipulation fluency. Can you reverse a linked list, detect a cycle, or merge two sorted lists without hesitation? These are the bread-and-butter operations tested at every level.
  2. Edge case awareness. Empty list, single node, cycle to head, even/odd length for finding the middle. Missing any of these signals a lack of thoroughness.
  3. Fast & slow pointer mastery. Floyd's cycle detection, finding the middle node, and detecting the intersection of two lists all use this technique. Interviewers expect you to recognize and apply it instantly.
  4. In-place modification. Linked list problems often require O(1) space solutions. If you allocate a new list when you could rewire pointers, interviewers notice.

Related Patterns & Concepts

When to Use a Linked List — Decision Guide

Use a Linked List When...

  • You need frequent O(1) insertions/deletions at the head (or at a known node position).
  • You do not need random access by index. You only traverse sequentially.
  • You are building a stack, queue, or deque from scratch.
  • You need an LRU Cache (doubly linked list + hash map is the classic implementation).
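
As a rough sketch of that classic pairing (the class and method names follow the common interview-style interface, which is an assumption rather than something defined in this guide): the doubly linked list keeps entries in recency order between sentinel head/tail nodes, and the hash map gives O(1) access to any node so it can be moved or evicted without a scan.

class DNode:
    def __init__(self, key=0, val=0):
        self.key, self.val = key, val
        self.prev = self.next = None

class LRUCache:
    """Least-recently-used cache with O(1) get and put."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}                              # key -> node
        self.head, self.tail = DNode(), DNode()    # sentinels
        self.head.next, self.tail.prev = self.tail, self.head

    def _remove(self, node):
        # Unlink node from the list
        node.prev.next, node.next.prev = node.next, node.prev

    def _add_front(self, node):
        # Insert right after head: the most recently used position
        node.next, node.prev = self.head.next, self.head
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.map:
            return -1
        node = self.map[key]
        self._remove(node)            # move to front = mark as recently used
        self._add_front(node)
        return node.val

    def put(self, key, value):
        if key in self.map:
            self._remove(self.map[key])
        node = DNode(key, value)
        self.map[key] = node
        self._add_front(node)
        if len(self.map) > self.capacity:
            lru = self.tail.prev      # least recently used real node
            self._remove(lru)
            del self.map[lru.key]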

Don't Use a Linked List When...

  • You need random access (accessing element by index). Use an array instead.
  • You need cache-efficient iteration. Arrays beat linked lists 10-100x on sequential scans due to memory locality.
  • You need to sort the data. Sorting a linked list is possible but much slower and more complex than sorting an array.
  • Memory overhead matters. Each node stores an extra pointer (8 bytes on 64-bit systems), which can double memory usage for small data.