
Combining Arrays in JavaScript: Spread vs. concat


A practical, senior-dev guide to merging arrays without surprises — covering performance, edge cases, TypeScript tips, and real-world recipes.


Introduction

If you write JavaScript daily, you combine arrays all the time — merging props in React, stitching paginated results, building search indexes, or flattening data from APIs. And every few lines, you face a tiny fork in the road: spread ([...a, ...b]) or Array.prototype.concat?

It looks like a stylistic choice — until it isn’t. The differences matter for immutability, performance, type safety, edge cases (like holes and array-likes), and even subtle behaviors like Symbol.isConcatSpreadable.

This guide goes beyond “both copy” and shows when to choose which, with copy-pasteable patterns, gotchas to avoid, and bench-ready heuristics you can trust in real projects (React, Node, Next.js, TypeScript).


TL;DR Decision Tree

Quick picks you can memorize:

  • Most cases: const merged = [...a, ...b]. Spread is clear, terse, and idiomatic.
  • Need to merge many arrays at once (dynamic list): [].concat(...list) or list.flat() (if truly flattening).
  • Merging with non-array iterables (e.g., Set): use spread: [...setA, ...setB].
  • Preserve array holes as-is (rare): concat preserves holes; spread fills them with undefined because it uses the array iterator (see section 8).
  • Guard against Symbol.isConcatSpreadable tricks: prefer spread for predictability.
  • Typed arrays: use TypedArray.prototype.set into a pre-sized buffer; typed arrays have no concat, and spreading them copies element by element, so watch memory.
  • Huge merges in hot paths (hundreds of thousands+): benchmark; sometimes concat wins in engines that optimize it.
  • Immutable style in React / Redux: both are immutable; choose readability → spread.

1) What Actually Happens Under the Hood

Spread: [...]

const a = [1, 2];
const b = [3, 4];
const merged = [...a, ...b]; // [1, 2, 3, 4]
  • What: The array literal allocates a new array, then iterates each operand with the iteration protocol.
  • Key point: Anything iterable (arrays, Sets, Maps’ keys/values, generator results) can be spread.
  • Predictable with plain arrays; uses the iterator, not exotic concat rules.

concat:

const a = [1, 2];
const b = [3, 4];
const merged = a.concat(b); // [1, 2, 3, 4]
  • What: Creates a new array containing the receiver’s elements followed by each argument; the copy is shallow (element references only).
  • Special behavior: If an argument is an array (or has Symbol.isConcatSpreadable = true), concat spreads it; otherwise it’s appended as a single element:
const merged = [1].concat([2], 3, [4, 5]); // [1, 2, 3, 4, 5]
  • Exotic case: Objects can masquerade as “spreadable” via Symbol.isConcatSpreadable. That’s powerful but can be surprising.

Bottom line: Spread respects “is it iterable?”; concat respects “is it spreadable?”. For plain arrays they behave the same; differences show up with array-likes and custom objects.
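
A quick sketch of that contrast, using an illustrative array-like object:

const arrayLike = { 0: 'x', 1: 'y', length: 2 }; // array-like, not iterable
// [...arrayLike]                         // TypeError: arrayLike is not iterable
const viaConcat = [].concat(arrayLike);   // [arrayLike], appended as a single element
const viaFrom = [...Array.from(arrayLike)]; // ['x', 'y'], convert first, then spread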


2) Immutability, Side Effects, and Safety

Both spread and concat return new arrays (immutability ✅). Neither mutates the original arrays.
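
A quick sanity check (minimal sketch; a and b are throwaway examples):

const a = [1, 2];
const b = [3, 4];
const viaSpread = [...a, ...b];
const viaConcat = a.concat(b);
console.log(a, b); // [1, 2] [3, 4], both untouched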

  • Shallow, not deep: Both copy references. If inner objects mutate, both results “change.”
const a = [{id:1}];
const b = [{id:2}];
const merged = [...a, ...b];
merged[0].id = 42;
console.log(a[0].id); // 42 — same object reference

Rule: If you need deep copies, use structuredClone (modern), JSON.parse(JSON.stringify(...)) (with known limitations: drops functions and undefined, stringifies Dates), or a dedicated clone utility, not spread or concat.
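
For instance, a minimal deep-copy sketch using structuredClone (modern browsers, Node 17+):

const a = [{ id: 1 }];
const b = [{ id: 2 }];
const deepMerged = structuredClone([...a, ...b]); // deep copy of the merged result
deepMerged[0].id = 42;
console.log(a[0].id); // 1, the original object is untouched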


3) What You Can Merge: Arrays, Iterables, Array-Likes

Arrays + Iterables → Prefer Spread

const set = new Set([1, 2]);
const arr = [3, 4];
const merged = [...set, ...arr]; // [1, 2, 3, 4]

Array-likes (e.g., arguments, DOM NodeList)

  • Spread requires iterables. Array-likes aren’t always iterable.
  • concat spreads only things that look like arrays (isConcatSpreadable true or actual arrays).

Make array-likes iterable or arrays:

// Option A: from()
const nodes = document.querySelectorAll('a');
const all = [...Array.from(nodes), ...arr];

// Option B: spread directly if iterable (NodeList is iterable in modern DOMs)
const all2 = [...nodes, ...arr]; // works in modern browsers

Takeaway: If you’re unsure, wrap with Array.from() before spreading.


4) Shallow vs. Deep (and Flattening)

Combining arrays is different from flattening nested arrays.

const nested = [1, [2, 3], [4, [5]]];
// Combining:
const combined = [...nested, 6]; // [1, [2,3], [4,[5]], 6]
// Flattening:
const flat1 = nested.flat(); // [1, 2, 3, 4, [5]]
const flatAll = nested.flat(Infinity); // [1, 2, 3, 4, 5]

Gotcha: Many bugs come from accidentally flattening (or not).

  • Spread never deep-flattens; it expands each operand by exactly one level (for an array, its elements are copied as-is, nested arrays included).
  • concat likewise never deep-flattens: it spreads only its direct arguments when they are arrays/spreadable; elements inside them are left untouched.

Choosing flattening? Use flat, not spread/concat.


5) Order, Stability, and De-Duplication

Preserve relative order

Both spread and concat keep the original order of elements. This matters for UI lists, sorting pipelines, and diffing.

De-dupe while merging

const a = [1, 2, 2, 3];
const b = [2, 3, 4];

// Preserve first occurrence order:
const unique = [...new Set([...a, ...b])]; // [1,2,3,4]
  • Performance note: Building a Set is O(n). For large data, that’s fine and usually faster than nested indexOf checks.

De-dupe by key (objects)

const a = [{id:1, n:'A'}, {id:2, n:'B'}];
const b = [{id:2, n:'B*'}, {id:3, n:'C'}];

const byId = new Map([...a, ...b].map(x => [x.id, x]));
const uniqueById = [...byId.values()];
// Keeps the *last* occurrence per id → [{id:1,n:'A'}, {id:2,n:'B*'}, {id:3,n:'C'}]

If you want to keep the first occurrence:

const seen = new Set();
const keepFirst = [...a, ...b].filter(x => !seen.has(x.id) && seen.add(x.id)); // add() returns the Set, which is truthy

6) Conditional Merges, Guards, and Cleanups

Real code often needs “merge only if value exists” or “skip empty arrays”.

Only merge when there’s data

const maybe = (arr) => Array.isArray(arr) && arr.length ? arr : [];

const merged = [
  ...maybe(filters.tags),
  ...maybe(filters.categories),
  ...maybe(extra),
];

Insert conditionally

const base = [1, 2];
const includeThree = true;

const out = [
  ...base,
  ...(includeThree ? [3] : []),
];

Remove falsy while merging

const merged = [1, null, 2, undefined, 3].filter(Boolean); // [1,2,3]; note this also drops 0, '', and false

Merge unique values only

const appendUnique = (arr, ...items) => {
  const set = new Set(arr);
  for (const it of items) if (!set.has(it)) set.add(it);
  return [...set];
};

appendUnique([1,2], 2, 3); // [1,2,3]

7) Performance & Memory (Without Micro-Benchmark Theater)

Both spread and concat allocate new arrays—so cost scales with total elements. Differences are engine-dependent and shift over time.

Practical rules:

  • Normal sizes (≤ tens of thousands): use what’s clearest. Spread is usually fast enough.
  • Very large merges (hundreds of thousands+): consider a pre-sized write pattern when you know the total length:

const out = new Array(a.length + b.length);
let i = 0;
for (const x of a) out[i++] = x;
for (const x of b) out[i++] = x;

    This avoids repeated resizes and intermediate allocations.
  • Or mutate a target intentionally (document it!):

const out = a.slice(); // copy
out.push(...b);

  • Note: push(...b) spreads b into the call’s arguments. For extremely large b, this can hit engine argument limits; manual loops are safer there.
  • Memory headroom: both approaches roughly double your peak memory during the merge (inputs + output live at once). For large datasets in Node, consider streaming or chunked processing.
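
If a merge really is hot, measure it in your target engine rather than trusting folklore. A minimal sketch using the global performance object (modern browsers, Node 16+); the sizes here are illustrative, and single runs are noisy, so compare medians over repeated runs:

const a = Array.from({ length: 500_000 }, (_, i) => i);
const b = Array.from({ length: 500_000 }, (_, i) => i);

let t = performance.now();
const viaSpread = [...a, ...b];
console.log('spread:', (performance.now() - t).toFixed(1), 'ms');

t = performance.now();
const viaConcat = a.concat(b);
console.log('concat:', (performance.now() - t).toFixed(1), 'ms');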

Typed arrays: Prefer set for speed and low overhead:

const a = new Uint8Array([1,2]);
const b = new Uint8Array([3,4]);
const out = new Uint8Array(a.length + b.length);
out.set(a, 0);
out.set(b, a.length);

8) Edge Cases & Gotchas You’ll Actually Meet

1) Symbol.isConcatSpreadable

Objects can pretend to be arrays to concat:

const weirdo = {0:'x', 1:'y', length:2, [Symbol.isConcatSpreadable]: true};
[1].concat(weirdo); // [1, 'x', 'y']
  • Spread uses the iterator; if the object isn’t iterable, [...] throws.
  • Use spread for predictability unless you intentionally rely on spreadable behavior.
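
If you do want spread to work on such an object, give it a real iterator instead; a minimal sketch (Array.prototype.values is generic and walks any array-like):

const iterableLike = { 0: 'x', 1: 'y', length: 2, [Symbol.iterator]: Array.prototype.values };
console.log([...iterableLike]); // ['x', 'y']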

2) Holes vs. undefined

const a = [ , 1];                // a[0] is a hole
const spreadCopy = [...a];       // [undefined, 1]: spread fills holes with undefined
const concatCopy = [].concat(a); // [ , 1]: concat preserves the hole

concat preserves holes, and downstream ops like map/forEach skip them. Spread (like Array.from) produces a dense array with explicit undefineds:

const withUndefineds = Array.from(a); // also converts holes to undefined

3) Array-likes that aren’t iterable

function f() {
  const args = arguments;       // array-like; not iterable in pre-ES6 runtimes
  // [...args]                  // TypeError in older environments
  const arr = Array.from(args); // safe everywhere
}

4) Engine argument limits with push(...bigArray)

When bigArray.length is huge (hundreds of thousands+), the spread into a function call can exceed engine limits. Prefer:

out.push.apply(out, chunk); // still can hit limits on huge chunks
// safest:
for (const x of bigArray) out.push(x);

5) Don’t confuse flatten with merge

If you write:

const merged = [].concat(a, b); // ok
const flattenedOnce = [].concat(...listOfArrays); // flattens only one level (the arguments)

If listOfArrays elements are themselves nested, you’ll need .flat().
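
For example:

const listOfArrays = [[1, [2]], [3]];
[].concat(...listOfArrays);  // [1, [2], 3], the inner [2] survives
listOfArrays.flat(Infinity); // [1, 2, 3]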

6) Merging sparse arrays changes iteration semantics later

Be explicit if your consumers expect dense arrays:

const dense = Array.from(sparse); // convert holes to undefineds

9) Practical Recipes You’ll Reuse

Merge two arrays (idiomatic)

const merged = [...a, ...b];

Merge many arrays (dynamic input)

const merged = [].concat(...listOfArrays);
// or
const merged2 = listOfArrays.flat(); // if you truly want one-level flatten

Insert at the beginning, middle, end

// beginning
const withStart = [startValue, ...arr];
// middle
const withMiddle = [...arr.slice(0, i), value, ...arr.slice(i)];
// end
const withEnd = [...arr, endValue];

Immutable replace range

const replaceRange = (arr, start, end, ...items) => [
  ...arr.slice(0, start),
  ...items,
  ...arr.slice(end),
];

Paginated merge with uniqueness

const mergeUnique = (acc, page) => {
  const set = new Set(acc);
  for (const x of page) set.add(x);
  return [...set];
};

Merge deeply nested results (not flattening inner arrays)

const result = [...outerA, ...outerB]; // inner arrays remain intact

Merge & sort stable

const merged = [...a, ...b].sort((x, y) => x.createdAt - y.createdAt);

Merge objects lists by key (keep last)

const mergeByIdKeepLast = (...lists) => {
  const m = new Map();
  for (const list of lists) for (const item of list) m.set(item.id, item);
  return [...m.values()];
};

Merge objects lists by key (keep first)

const mergeByIdKeepFirst = (...lists) => {
  const m = new Map();
  for (const list of lists) {
    for (const item of list) if (!m.has(item.id)) m.set(item.id, item);
  }
  return [...m.values()];
};

10) Code Snippets With Explanations (JS/TS-Ready)

A. Simple merge with guard rails (TS)

function mergeSafe<T>(...parts: Array<ReadonlyArray<T> | null | undefined>): T[] {
  const out: T[] = [];
  for (const p of parts) if (p && p.length) out.push(...p);
  return out;
}

Why: Tolerates null/undefined, keeps type safety, avoids unnecessary allocations.

B. Unique merge by key with stable order (keep first)

function uniqueBy<T, K>(items: T[], key: (t: T) => K): T[] {
  const seen = new Set<K>();
  const out: T[] = [];
  for (const it of items) {
    const k = key(it);
    if (!seen.has(k)) { seen.add(k); out.push(it); }
  }
  return out;
}

// usage
const merged = uniqueBy([...a, ...b], x => x.id);

Why: Stable order is great for UI diffing (React lists) and predictable UX.

C. Chunk-safe push for huge arrays

function pushChunkSafe<T>(target: T[], source: T[], chunkSize = 50_000) {
  for (let i = 0; i < source.length; i += chunkSize) {
    const chunk = source.slice(i, i + chunkSize);
    for (let j = 0; j < chunk.length; j++) target.push(chunk[j]);
  }
}

Why: Avoids argument-length limits and keeps memory spikes in check.

D. Merge typed arrays efficiently

function concatTyped(a: Uint8Array, b: Uint8Array): Uint8Array {
  const out = new Uint8Array(a.length + b.length);
  out.set(a, 0);
  out.set(b, a.length);
  return out;
}

Why: Spread/concat would allocate intermediate arrays; set writes directly.

E. Ensure dense arrays (convert holes to undefined)

function toDense<T>(arr: Array<T>): T[] {
  return Array.from(arr);
}

Why: Some downstream operations break or behave oddly with holes.


11) React/Redux Patterns (Immutable by Design)

Add items to props immutably

const next = [...props.items, ...newItems];

Insert between sections

const withAd = [
  ...feed.slice(0, insertIndex),
  <Ad key="ad-1" />,
  ...feed.slice(insertIndex),
];

Normalize + merge server pages

const nextPage = response.items.map(normalize);
const allItems = uniqueBy([...state.items, ...nextPage], x => x.id);

Memoize merged arrays to prevent unnecessary re-renders

import { useMemo } from 'react';

const merged = useMemo(() => [...a, ...b], [a, b]);

Tip: If a and b are new identities each render, memoization won’t help. Normalize earlier in your state layer.


12) TypeScript Tips You’ll Be Glad You Knew

  • Readonly sources: allow merging without mutation fear.
function mergeReadonly<T>(...parts: ReadonlyArray<ReadonlyArray<T>>): T[] {
  return parts.flat(); // or ([] as T[]).concat(...parts)
}
  • as const preserves literal types when appropriate:
const a = [1, 2] as const; // readonly [1, 2]
const b = [3, 4] as const;
const merged = [...a, ...b];            // inferred as (1 | 2 | 3 | 4)[]
const widened: number[] = [...a, ...b]; // annotate to widen to number[]
  • Narrowing after merge: Use type predicates if combining heterogeneous arrays:
type A = { type: 'a'; a: number };
type B = { type: 'b'; b: string };
const merged: (A | B)[] = [...arrA, ...arrB];

function isA(x: A | B): x is A { return x.type === 'a'; }
const onlyA = merged.filter(isA);
  • Generics + key helpers for uniqueness:
function mergeUniqueBy<T, K>(lists: T[][], key: (t: T) => K): T[] {
  const m = new Map<K, T>();
  for (const list of lists) for (const item of list) m.set(key(item), item);
  return [...m.values()];
}

13) When to Prefer Spread vs. concat

  • Prefer spread: a fixed, small number of operands; mixing in non-array iterables (Set, Map views, generators); predictable iterator semantics; everyday immutable updates in React/Redux.
  • Prefer concat: merging a dynamic list of arrays ([].concat(...list)); preserving holes; intentional use of Symbol.isConcatSpreadable; hot paths where your benchmarks show it winning.
  • Prefer neither: flattening (flat), typed arrays (set), de-duplication (Set/Map helpers).

14) Mistakes to Avoid (Gotchas)

  • Using spread to “deep copy” nested arrays/objects. It’s shallow.
  • Assuming push(...b) is always safe. Big arrays can exceed argument limits.
  • Confusing merge with flatten. Use .flat() for flattening.
  • Forgetting order guarantees. If deduping by Map, last wins. If you need “first wins,” use a Set guard.
  • Merging sparse arrays accidentally. If consumer code breaks on holes, convert to dense via Array.from.

Conclusion

Spread and concat both create new arrays and feel interchangeable—until the details matter. Spread shines for clarity and iterables; concat shines when merging many arrays at once or when you’re operating near engine-tuned paths. For typed arrays, use set. For uniqueness, lean on Set or Map helpers that preserve the order you care about. And always distinguish merge from flatten—it saves hours of subtle bugs.

If you walk away with one mental model, make it this:

  • Spread for day-to-day merging across iterables (clean, readable).
  • concat (or flat) for merging lists of lists.
  • Typed arrays → set.
  • Big data → benchmark or chunk.

Pro Tip: Wrap the patterns you use most (e.g., mergeUniqueBy, replaceRange) into tiny utilities and re-use them across projects. Fewer footguns, more signal.


Call to Action (CTA)

What’s your preferred pattern: spread or concat, and why?
Drop your experience (and any perf anecdotes) in the comments.
If this helped, share it with a teammate or bookmark for your next refactor.


Appendix: Copy-Paste Utilities

// 1) Merge (guards null/undefined)
export function mergeSafe<T>(...parts: Array<ReadonlyArray<T> | null | undefined>): T[] {
  const out: T[] = [];
  for (const p of parts) if (p && p.length) out.push(...p);
  return out;
}

// 2) Unique by key (keep last)
export function mergeByKeyKeepLast<T, K>(lists: T[][], key: (t: T) => K): T[] {
  const m = new Map<K, T>();
  for (const list of lists) for (const item of list) m.set(key(item), item);
  return [...m.values()];
}
// 3) Unique by key (keep first)
export function mergeByKeyKeepFirst<T, K>(lists: T[][], key: (t: T) => K): T[] {
  const m = new Map<K, T>();
  for (const list of lists) {
    for (const item of list) {
      const k = key(item);
      if (!m.has(k)) m.set(k, item);
    }
  }
  return [...m.values()];
}
// 4) Replace range immutably
export function replaceRange<T>(arr: ReadonlyArray<T>, start: number, end: number, ...items: T[]): T[] {
  return [...arr.slice(0, start), ...items, ...arr.slice(end)];
}
// 5) Chunk-safe append
export function pushChunkSafe<T>(target: T[], source: ReadonlyArray<T>, chunkSize = 50_000) {
  for (let i = 0; i < source.length; i += chunkSize) {
    const end = Math.min(i + chunkSize, source.length);
    for (let j = i; j < end; j++) target.push(source[j]);
  }
}
