Mastering React Virtualized Lists: Performance Patterns and Tuning Guide
Build fast, memory‑efficient React virtualized lists with tuning tips for overscan, variable heights, a11y, and profiling—plus practical code examples.
Why virtualized lists matter in React
Rendering thousands of DOM nodes is expensive. Each element adds layout, style, paint, and memory overhead. On a 60 Hz display, you only have ~16 ms per frame to do everything: JavaScript, layout, painting, and compositing. Virtualized (windowed) lists keep the DOM small by rendering only what is visible plus a configurable buffer. The result is lower memory usage and smoother scrolling, especially on mid‑range devices.
This guide distills practical techniques for building and tuning high‑performance virtualized lists in React, with code snippets you can drop into real apps.
Popular libraries and when to use them
- react-window: Lightweight, modern API for fixed or variable sizes (List and Grid). Great default choice for most apps.
- react-virtualized: Feature‑rich predecessor with many components. Larger API surface; more knobs if you need them.
- TanStack Virtual: Headless virtualizer focused on flexibility; works well for tables and custom layouts.
- React Aria Components Virtualizer: Accessible, headless primitives from Adobe with strong a11y defaults.
Pick the smallest tool that meets your requirements. If you don’t know where to start, try react-window for lists/grids and TanStack Virtual for advanced table scenarios.
Fixed-size list with react-window (baseline)
Fixed heights are the simplest and fastest scenario: the virtualizer can compute positions without measuring.
import { FixedSizeList as List, ListChildComponentProps } from 'react-window'
import React, { memo } from 'react'
type User = { avatar: string; name: string } // item shape assumed for this example
// Row is memoized to avoid re-rendering when unrelated props change
const Row = memo(({ index, style, data }: ListChildComponentProps) => {
const item = data.items[index]
return (
<div style={style} className="row">
<img src={item.avatar} alt="" width={24} height={24} />
<span>{item.name}</span>
</div>
)
})
export default function UsersList({ items }: { items: User[] }) {
return (
<List
height={480}
width={360}
itemCount={items.length}
itemSize={48}
itemData={{ items }}
overscanCount={5} // tune this!
>
{Row}
</List>
)
}
Notes:
- Always pass a stable itemData object shape. If you re-create it on every render with new references, all rows will re-render. Use memoization when needed.
- Keep Row pure and small. Avoid inline heavy computations; precompute outside the renderer.
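To see why reference stability matters, here is a minimal sketch of the shallow comparison React.memo performs by default (the `shallowEqual` helper and the sample data are ours, for illustration): props are compared with Object.is per key, so an itemData object recreated on every render fails the check even when its contents are identical.

```typescript
// Shallow comparison, as React.memo's default check does per prop key.
function shallowEqual(a: Record<string, unknown>, b: Record<string, unknown>): boolean {
  const keysA = Object.keys(a)
  if (keysA.length !== Object.keys(b).length) return false
  return keysA.every(k => Object.is(a[k], b[k]))
}

const items = [{ id: 1, name: 'Ada' }]

// Re-created each render: new object identity, so every row re-renders.
const unstableA = { data: { items } }
const unstableB = { data: { items } }

// Memoized once: the same reference survives renders, so rows are skipped.
const stable = { items }
const stableA = { data: stable }
const stableB = { data: stable }

console.log(shallowEqual(unstableA, unstableB)) // false — rows re-render
console.log(shallowEqual(stableA, stableB))     // true — rows skipped
```

In a component, `const itemData = useMemo(() => ({ items }), [items])` gives you the stable reference on the right-hand side.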
Variable heights: measurement strategies
Variable item sizes are common (e.g., chat messages). They require measurement so the virtualizer can map items to scroll offsets. You have three broad options:
- Estimate, then correct once measured
- Provide a reasonable estimate to keep scroll positions stable.
- Measure with ResizeObserver after mount and update the size cache.
- Pre-measure off-DOM
- Render content in a hidden, non-scrolling container to measure before showing. This reduces jumps but adds complexity.
- Constrain content to buckets
- Snap content heights to a small set of sizes (e.g., 64, 96, 128 px). Easier caching and fewer layout shifts.
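The bucket strategy can be sketched as a small helper that snaps a measured height up to the nearest allowed size; the bucket values and the `snapToBucket` name here are illustrative, not prescribed:

```typescript
// Snap a measured content height up to the nearest bucket so the size
// cache only ever holds a handful of distinct values.
const BUCKETS = [64, 96, 128, 192, 256]

function snapToBucket(height: number, buckets: number[] = BUCKETS): number {
  for (const b of buckets) {
    if (height <= b) return b
  }
  // Clamp oversized content to the largest bucket; consider truncating
  // the content itself (e.g. "show more") rather than growing the row.
  return buckets[buckets.length - 1]
}

console.log(snapToBucket(70))  // 96
console.log(snapToBucket(64))  // 64
console.log(snapToBucket(500)) // 256
```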
Example with react-window’s VariableSizeList and ResizeObserver:
import { VariableSizeList as List } from 'react-window'
import React, { useCallback, useEffect, useMemo, useRef } from 'react'
function useSizeMap<T>() {
const map = useRef(new Map<number, number>())
const set = useCallback((index: number, size: number) => {
const prev = map.current.get(index)
if (prev !== size) map.current.set(index, size)
}, [])
const get = useCallback((index: number) => map.current.get(index) ?? 72, []) // estimate
return { set, get }
}
export default function Messages({ items }: { items: Message[] }) {
const listRef = useRef<List>(null)
const { set, get } = useSizeMap()
const setSize = useCallback((index: number, size: number) => {
set(index, size)
listRef.current?.resetAfterIndex(index) // reflow positions from index onward
}, [set])
const Row = useMemo(() => {
return function Row({ index, style }: { index: number; style: React.CSSProperties }) {
return (
<Measured style={style} index={index} onHeight={setSize}>
<MessageBubble message={items[index]} />
</Measured>
)
}
}, [items, setSize])
return (
<List
ref={listRef}
height={560}
width={480}
itemCount={items.length}
itemSize={get}
overscanCount={6}
>
{Row as any}
</List>
)
}
function Measured({ index, style, children, onHeight }:
{ index: number; style: React.CSSProperties; children: React.ReactNode; onHeight: (i: number, h: number) => void }) {
const ref = useRef<HTMLDivElement | null>(null)
useEffect(() => {
const el = ref.current!
const ro = new ResizeObserver(entries => {
for (const e of entries) onHeight(index, Math.ceil(e.contentRect.height))
})
ro.observe(el)
return () => ro.disconnect()
}, [index, onHeight])
// Measure an inner wrapper: the outer div's height is fixed by the style the
// virtualizer passes, so observing it would only echo the assigned size back.
return <div style={style}><div ref={ref}>{children}</div></div>
}
Tips for variable heights:
- Use a stable, close estimate to avoid big jumps near the scroll anchor.
- Batch measurement updates via requestAnimationFrame if many items resize at once.
- Avoid flushing layout synchronously in render; let ResizeObserver do the work.
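The batching tip can be sketched as a small queue that coalesces many size updates into one reflow per frame. The scheduler is injected so you can pass requestAnimationFrame in the browser and a synchronous function in tests; the names here are ours:

```typescript
type Schedule = (cb: () => void) => void

// Collect size updates and apply them once per scheduled tick. The latest
// measurement for an index wins if it resizes twice in one frame.
function createSizeBatcher(
  applySizes: (updates: Map<number, number>) => void, // e.g. update cache, then resetAfterIndex(min index)
  schedule: Schedule
) {
  const pending = new Map<number, number>()
  let scheduled = false
  return function queueSize(index: number, size: number) {
    pending.set(index, size)
    if (scheduled) return
    scheduled = true
    schedule(() => {
      scheduled = false
      const batch = new Map(pending)
      pending.clear()
      applySizes(batch)
    })
  }
}

// Demonstration with a synchronous scheduler standing in for rAF:
let applied = new Map<number, number>()
const tasks: Array<() => void> = []
const queueSize = createSizeBatcher(b => { applied = b }, cb => tasks.push(cb))
queueSize(3, 120)
queueSize(5, 88)
queueSize(3, 124)       // later measurement for index 3 wins
tasks.forEach(t => t()) // simulate the animation frame firing once
console.log([...applied]) // one flush: index 3 → 124, index 5 → 88
```

In production you would pass `requestAnimationFrame` as the scheduler and call `listRef.current?.resetAfterIndex(...)` inside `applySizes`.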
Overscan: striking the right balance
Overscan adds extra items before and after the viewport to hide rendering during fast scrolls. More overscan reduces popping but increases CPU and memory cost.
Rules of thumb:
- Start with overscanCount ~ 2–6 items for fixed heights or ~1–2 viewports for variable heights.
- Increase for low‑cost rows (text-only) and decrease for expensive rows (charts, canvases).
- Prefer symmetric overscan unless you have directional scroll behavior (e.g., chat anchored to bottom).
If your rows do async work in useEffect, ensure that offscreen rows don’t keep doing it. Guard side effects by checking visibility if necessary.
Avoiding unnecessary re-renders
Virtualization helps DOM count, but React re-renders can still be a bottleneck. Key practices:
- Stable itemKey
- Use item IDs, not indexes, to keep state and focus correct when the list changes order.
- In react-window, pass itemKey prop to control this mapping.
const itemKey = (index: number, data: { items: Item[] }) => data.items[index].id
<List itemKey={itemKey} ...>
- Memoize rows
- Wrap row components with React.memo and a custom comparator if props are nested.
const Row = React.memo(RowImpl, (prev, next) =>
  prev.index === next.index &&
  prev.style === next.style && // don't skip position updates after a resize
  prev.data.items[prev.index] === next.data.items[next.index])
- Stable callbacks and data
- useMemo/useCallback for itemData and handlers. Avoid creating new objects in parent render.
- Avoid context churn
- Passing large contexts into rows can cause widespread re-renders. Prefer itemData or lightweight context slices.
Styling and layout performance
- Contain your scroll area
- Apply CSS containment to the scroll container to isolate layout and paint work.
.virtual-scroll {
contain: layout paint style;
overflow: auto;
}
- Prefer transforms over top/left for moving inner elements
- Transforms are GPU-composited and often cheaper than changing layout properties.
- Avoid will-change on many children
- It can explode memory. Use it sparingly and only where you’ve measured a benefit.
- Use position: sticky for headers/footers inside the inner element instead of JS scroll listeners.
Sticky headers and footers with react-window
You can inject persistent UI into the inner scrolling element via innerElementType and style it as sticky.
const Inner = React.forwardRef<HTMLDivElement, React.HTMLProps<HTMLDivElement>>((props, ref) => (
<div ref={ref} {...props}>
<div className="sticky-header">Mailboxes</div>
{props.children}
<div className="sticky-footer">End</div>
</div>
))
<List innerElementType={Inner} ...> {Row} </List>
/* CSS */
.sticky-header, .sticky-footer { position: sticky; z-index: 1; background: white; }
.sticky-header { top: 0; }
.sticky-footer { bottom: 0; }
For tables with column headers, consider a two-pane layout: a virtualized body plus a non-virtual header aligned by CSS grid or synced widths.
Infinite loading and placeholders
Virtualized lists pair well with incremental data loading:
- Request the next page when the user scrolls within N items of the end.
- Show lightweight placeholders to avoid layout shifts; if heights are variable, estimate to keep scroll positions.
- Debounce network triggers to avoid spamming requests during fling scrolls.
Pseudocode pattern:
const nearEnd = useMemo(() => visibleStopIndex > items.length - 10, [visibleStopIndex, items.length])
useEffect(() => { if (nearEnd && !loading) fetchNextPage() }, [nearEnd, loading])
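The `!loading` guard above can be packaged as a small gate that fires at most one fetch per in-flight page, no matter how many scroll events cross the threshold during a fling; `createFetchGate` and its API are hypothetical names for this sketch:

```typescript
// Gate repeated "near the end" signals down to one fetch at a time.
function createFetchGate() {
  let inFlight = false
  return {
    // Call from onItemsRendered; true means "trigger fetchNextPage() now".
    shouldFetch(visibleStopIndex: number, itemCount: number, threshold = 10): boolean {
      if (inFlight) return false
      if (visibleStopIndex < itemCount - threshold) return false
      inFlight = true
      return true
    },
    // Call when the page request resolves or fails.
    done() { inFlight = false },
  }
}

const gate = createFetchGate()
console.log(gate.shouldFetch(80, 100)) // false — not near the end yet
console.log(gate.shouldFetch(95, 100)) // true  — fire the request
console.log(gate.shouldFetch(96, 100)) // false — a request is already in flight
gate.done()
console.log(gate.shouldFetch(96, 100)) // true  — next page may load
```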
Accessibility considerations (a11y)
Virtualization reduces DOM nodes, but you still need a coherent accessibility model:
- Landmarks and roles
- Give the list a role (list, listbox, grid) that matches interaction patterns.
- Set sizes
- For listbox/grid patterns, expose aria-setsize/aria-posinset or aria-rowcount/aria-rowindex where appropriate so assistive tech understands total length.
- Focus management
- Use roving tabindex for keyboard navigation.
- When recycling DOM nodes, ensure focused elements aren’t unmounted unexpectedly; anchor focus to items, not indexes.
- Announce dynamic loads
- Use live regions to announce “Loaded 50 more items.”
- Avoid autoFocus inside virtualized rows
- It can steal focus as items mount during scroll.
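For the set-size point, a row in a virtualized listbox only exists in the DOM while visible, so it must carry its position in the full collection explicitly. A minimal sketch (the `optionAriaProps` helper is ours):

```typescript
// Compute ARIA attributes for a row in a virtualized listbox so assistive
// tech can report "item 42 of 10,000" even though only ~20 rows exist.
function optionAriaProps(index: number, totalCount: number) {
  return {
    role: 'option' as const,
    'aria-posinset': index + 1, // 1-based position in the full set
    'aria-setsize': totalCount, // total items, not just the rendered window
  }
}

console.log(optionAriaProps(41, 10_000))
// { role: 'option', 'aria-posinset': 42, 'aria-setsize': 10000 }
```

Spread the result onto each row element (`<div {...optionAriaProps(index, items.length)} style={style}>`), and put `role="listbox"` on the outer container.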
Libraries like React Aria’s Virtualizer provide a strong baseline if a11y is central.
Profiling: find the real bottleneck
Don’t guess—measure. A quick workflow:
- React Profiler
- Record interactions while scrolling. Look for components with high render counts or long render times.
- Verify that only the visible rows re-render when props/state change.
- Chrome Performance panel
- Record a fling scroll on a real device profile (CPU throttled 4× for baseline).
- Look for long tasks > 50 ms, repeated layout passes, or expensive paints.
- Check the “Layers” and “Rendering” tracks for paint storms.
- Memory view
- Ensure DOM nodes and JS objects are released while scrolling past thousands of items. Watch for detached nodes or caches that grow unbounded.
- Lighthouse/Performance Insights (optional)
- Not perfect for scrolling scenarios, but helpful to catch main-thread hogs.
Common pitfalls (and fixes)
- Passing new itemData every render
- Fix: memoize itemData; keep references stable.
- Heavy effects in each row
- Fix: move work up, cache results, or lazy-initialize inside an intersection/visibility check.
- Using array index as key
- Fix: use a stable ID via itemKey to preserve state/focus and reduce diffing work.
- Measuring in render
- Fix: use ResizeObserver; avoid forced synchronous layouts.
- Overuse of portals/tooltips per row
- Fix: use a single layered portal; manage tooltip content via events rather than mounting one per item.
- Animating height of many rows
- Fix: prefer transform-based reveal (expand overlay) or snap to size buckets; avoid reflow on dozens of elements per frame.
When not to virtualize
- Small lists (< 200 simple items) where development complexity outweighs the gains.
- SEO-critical, indexable content where removing items from the DOM harms discoverability.
- Print views and server-rendered snapshots where full content must exist at once.
In these cases, render everything or use content-visibility: auto on sections to reduce offscreen costs without changing semantics.
Tables and data grids: special considerations
- Column virtualization
- For very wide datasets, virtualize both rows and columns (TanStack Virtual or specialized grids).
- Sticky columns/headers
- Use CSS grid or two synchronized panes. Avoid JS scroll listeners if CSS can do it.
- Measuring row heights with complex cell renderers
- Cache heights per row key, not index, to survive sorting/filtering.
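The key-based cache above can be sketched in a few lines; the `createKeyedSizeCache` name and its API are illustrative. An index-based cache would hand row 0's height to whatever item lands at index 0 after a sort, while a key-based one keeps measurements attached to their rows:

```typescript
// Cache measured heights by row key so sorting/filtering the data
// doesn't invalidate (or misattribute) measurements.
function createKeyedSizeCache(estimate: number) {
  const sizes = new Map<string, number>()
  return {
    set: (key: string, size: number) => { sizes.set(key, size) },
    // Resolve an index through the current key order, falling back to the estimate.
    getForIndex: (index: number, keys: string[]) => sizes.get(keys[index]) ?? estimate,
  }
}

const cache = createKeyedSizeCache(72)
cache.set('row-b', 120)
console.log(cache.getForIndex(1, ['row-a', 'row-b'])) // 120
// After sorting, the same row moved to index 0 — its measurement survives:
console.log(cache.getForIndex(0, ['row-b', 'row-a'])) // 120
console.log(cache.getForIndex(1, ['row-b', 'row-a'])) // 72 (estimate)
```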
Chat and inverted lists
For chat UIs anchored to the bottom:
- Invert visually with CSS (flex-direction: column-reverse) or maintain a bottom anchor by tracking scrollHeight - scrollTop.
- Prioritize overscan below the viewport (newer messages) and keep a smaller overscan above.
- On new message, preserve scroll position by adjusting scrollTop if the user is not scrolled to bottom.
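The anchor-preserving math can be sketched as a pure function of the scroll metrics (the function name and the 1px "at bottom" tolerance are our choices): follow new content when the reader is at the bottom, otherwise hold the view by restoring the distance from the bottom.

```typescript
// Compute the scrollTop to apply after content height changes, so the
// reader either sticks to the bottom or stays put mid-history.
function nextScrollTop(
  prevScrollTop: number,
  prevScrollHeight: number,
  newScrollHeight: number,
  clientHeight: number
): number {
  const distanceFromBottom = prevScrollHeight - prevScrollTop - clientHeight
  const atBottom = distanceFromBottom < 1 // tolerate sub-pixel gaps
  return atBottom
    ? newScrollHeight - clientHeight // follow the newest message
    : newScrollHeight - clientHeight - distanceFromBottom // hold position
}

// At the bottom (scrollTop 1400 of 2000, viewport 600); content grows to 2300:
console.log(nextScrollTop(1400, 2000, 2300, 600)) // 1700 — stick to bottom
// Scrolled up 500px into history; the view must not move:
console.log(nextScrollTop(900, 2000, 2300, 600))  // 1200 — same distance from bottom
```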
Server-side rendering and hydration
- Don’t SSR thousands of items; render only the first viewport worth plus overscan so hydration is fast.
- Ensure the client virtualizer uses the same estimated sizes to avoid hydration mismatches.
- If content length is unpredictable, hydrate a skeleton list first, then switch to the real virtualized list after mount.
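The "first viewport plus overscan" budget is simple arithmetic; a sketch with illustrative numbers (the `ssrItemCount` helper is ours):

```typescript
// How many items to server-render so the first paint fills the viewport
// plus overscan, without shipping the whole dataset.
function ssrItemCount(viewportHeight: number, estimatedItemSize: number, overscan: number): number {
  return Math.ceil(viewportHeight / estimatedItemSize) + overscan
}

console.log(ssrItemCount(480, 48, 5)) // 15 — render items.slice(0, 15) on the server
console.log(ssrItemCount(560, 72, 6)) // 14
```

Use the same `estimatedItemSize` on the client virtualizer so the hydrated markup matches what the server produced.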
Tuning checklist
- Pick the simplest virtualizer that fits your needs (react-window is often enough).
- Use fixed item sizes when you can; otherwise estimate and measure via ResizeObserver.
- Start with modest overscan and tune up or down based on row cost and scroll speed.
- Memoize rows, use stable itemKey, and keep itemData/callbacks referentially stable.
- Isolate the scroll container with CSS containment; prefer transforms and sticky over JS-driven layout.
- Profile with React Profiler and Chrome Performance; fix what the traces show, not what you suspect.
- Guard effects and cleanup to avoid work in offscreen rows; watch memory for leaks during long scrolls.
Conclusion
Virtualized lists deliver big wins with a small set of disciplined practices: keep the DOM tiny, avoid unnecessary React work, measure sizes sanely, and profile the real experience. With the patterns above, you can ship lists that feel instant—even with hundreds of thousands of items—while keeping your codebase maintainable.