
JSON Parser Benchmark

Measure JSON.parse and JSON.stringify performance. 8 cases, median-of-11 with warmup, throughput in MB/s, downloadable JSON. V8, SpiderMonkey, JSC.

Updated May 2026

5 warmup runs + 11 measured runs per case. Parse and stringify timed separately.


What it does

Measures real JSON.parse + JSON.stringify performance on your browser’s native engine across 8 realistic document shapes: small flat (100 records), medium (10K), large (100K), deeply-nested (50 / 500 levels), big-integer-heavy, Unicode-heavy with multi-script + emoji, and long-string-heavy (200KB strings × 50). Results show median, min, max, and throughput in MB/s — downloadable as JSON for citation.

“Is JSON.parse fast enough?” gets asked constantly with no measured answer. Published numbers are dated — most articles cite V8 internals from 2017 that don’t reflect current optimizations. This page generates current numbers on demand, with reproducible methodology, on the visitor’s real browser.

Embed this tool on your site

Paste this snippet into any page. Loads on-demand (lazy), no tracking scripts, and sized to most dashboards. Replace the height to fit your layout.

<iframe src="https://freetoolarena.com/embed/json-parser-benchmark" width="100%" height="720" frameborder="0" loading="lazy" title="JSON Parser Benchmark" style="border:1px solid #e2e8f0;border-radius:12px;max-width:720px;"></iframe>
Embed docs →

Example input & output

Input

Engine: V8 (Chrome)
Warmup: 5 runs / case
Measured: 11 runs / case (median reported)

Output

flat-100         parse 0.04 ms  stringify 0.02 ms  throughput 1,180 MB/s
flat-10k         parse 8.2 ms   stringify 3.5 ms   throughput   180 MB/s
flat-100k        parse 96 ms    stringify 38 ms    throughput   155 MB/s
deep-50          parse 0.05 ms  stringify 0.03 ms  throughput 1,420 MB/s
deep-500         parse 0.5 ms   stringify 0.4 ms   throughput   850 MB/s
big-numbers      parse 5.1 ms   stringify 4.2 ms   throughput    98 MB/s
unicode-1k       parse 1.8 ms   stringify 1.2 ms   throughput   220 MB/s
string-heavy     parse 3.4 ms   stringify 1.9 ms   throughput   2,940 MB/s

Numbers are illustrative — V8 hits ~150-300 MB/s on flat objects, ~1-3 GB/s on long-string-heavy docs (mostly memcpy). Run yourself for ground truth on your hardware.

How to use it

  1. Click 'Run benchmark' to measure all 8 cases on your current browser.
  2. Toggle individual cases under 'Choose cases' if you want to skip the largest case (100K records, ~15MB) on a slow machine.
  3. Read parse and stringify medians side-by-side. Parse is usually the bottleneck.
  4. Throughput (MB/s) lets you estimate how long a 100MB API response will take to parse on this engine; see the worked example after this list.
  5. Click 'Download JSON' to save raw measurements with engine, hardware, and methodology metadata.
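
To make step 4 concrete, here is the arithmetic with illustrative numbers taken from the sample output above (use the medians from your own run instead):

// 155 MB/s was the flat-100k parse throughput in the sample run above
const throughputMBps = 155;                     // median from your benchmark
const payloadMB = 100;                          // expected API response size
const estSeconds = payloadMB / throughputMBps;  // ≈ 0.65s of blocked main thread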

How it works

Key takeaways

  • JSON throughput on V8 is typically 100-300 MB/s on object-heavy documents (lots of keys), 1-3 GB/s on long-string-heavy ones (parser is mostly memcpy). Counterintuitive but explainable: per-character cost is much lower inside a string than inside a structural token.
  • JSON.parse is ~2-3× slower than JSON.stringify for the same document. Parsing is harder than serializing because you don’t know what comes next.
  • Deeply-nested documents (500+ levels) parse fine in V8/SpiderMonkey/JSC at current versions. Older browsers (pre-2020) blew the stack past ~256 levels. Most production parsers limit nesting to 1024 for security.
  • Big-integer documents are slow because parsing each digit string into an IEEE 754 double costs more than parsing a smaller number. And you lose precision past 2^53 — verify your ID format before sending big integers as JSON numbers (see the example after this list).
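
A quick console check of that last point. The rounding is a property of IEEE 754 doubles, so it behaves the same in every engine:

// 2^53 = 9007199254740992 is the last exactly-representable integer
JSON.parse('9007199254740993');          // → 9007199254740992 (silently rounded)
9007199254740993 === 9007199254740992;   // → true: the literal itself already rounds

// Workaround: ship big IDs as JSON strings, widen on the receiving side
BigInt(JSON.parse('"9007199254740993"'));  // → 9007199254740993n (exact)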

Methodology

Each case is a deterministic JSON-string generator. Same input every run, so numbers are comparable across measurements (within a single browser version).

  1. 5 warmup runs — result discarded. Lets V8 / SpiderMonkey JIT-compile the parser’s hot paths. JSON parsing is mostly C++ in modern engines (V8’s parser is hand-tuned C++, not JIT’d JS), but the JS-side wrapper still benefits from steady-state.
  2. 11 measured runs: performance.now() wraps each JSON.parse(text) call. Times are collected, sorted, and the median reported; min/max are included as a variance check. (The loop is sketched after this list.)
  3. Stringify measured separately — same 11-run pattern. We do parse first, then stringify the parsed object. This avoids cross-cache interference where stringify might pollute parse memory.
  4. 50ms inter-case delay via setTimeout — gives the browser GC a chance to clean up before the next case.
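
A minimal sketch of that loop. The result shape and the `cases` map are illustrative stand-ins, not this tool's actual source:

function median(xs) {
  const s = [...xs].sort((a, b) => a - b);
  return s[Math.floor(s.length / 2)];            // 11 samples → index 5
}

function benchCase(text, warmup = 5, runs = 11) {
  for (let i = 0; i < warmup; i++) JSON.parse(text);   // warmup, discarded
  const parseTimes = [];
  let obj;
  for (let i = 0; i < runs; i++) {
    const t0 = performance.now();
    obj = JSON.parse(text);
    parseTimes.push(performance.now() - t0);
  }
  const stringifyTimes = [];                     // stringify timed separately,
  for (let i = 0; i < runs; i++) {               // on the already-parsed object
    const t0 = performance.now();
    JSON.stringify(obj);
    stringifyTimes.push(performance.now() - t0);
  }
  return {
    parseMs: median(parseTimes),
    stringifyMs: median(stringifyTimes),
    minMs: Math.min(...parseTimes),
    maxMs: Math.max(...parseTimes),
    mbPerSec: (text.length / 1e6) / (median(parseTimes) / 1e3),  // ~bytes for ASCII
  };
}

async function runAll(cases) {                   // cases: { name: jsonString }
  const results = {};
  for (const [name, text] of Object.entries(cases)) {
    results[name] = benchCase(text);
    await new Promise(r => setTimeout(r, 50));   // 50ms pause lets GC run
  }
  return results;
}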

What the cases test

  • flat-100 / flat-10k / flat-100k: scaling. Object-heavy documents with mixed scalar fields. Each record has 6 keys. Tests how parse time scales with document size (generator sketched after this list).
  • deep-50 / deep-500: nesting cost. Recursive structure with object/value alternation. Tests recursion depth handling and cache locality.
  • big-numbers: numeric parser. Numbers near Number.MAX_SAFE_INTEGER (2^53 - 1) and very large floats. Each number requires a full digit-string-to-double conversion.
  • unicode-1k: escape and surrogate pair handling. Mixed Latin, CJK, Hebrew, Arabic, Cyrillic, plus emoji (BMP + supplementary plane).
  • string-heavy: per-byte cost. 50 records with one 200KB string each = 10MB total. Most of the work is reading characters; structural overhead is tiny. Highest throughput case.
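
For illustration, a flat-* generator might look like the sketch below. The field names are invented, but the shape (N records with the same 6 keys) matches the description above, and determinism is what keeps runs comparable:

// Hypothetical deterministic generator for the flat-* cases.
// Same output every call: no randomness, so measurements stay comparable.
function makeFlatCase(n) {
  const records = [];
  for (let i = 0; i < n; i++) {
    records.push({
      id: i,                           // 6 keys per record, stable shape
      name: "user_" + i,
      active: i % 2 === 0,
      score: i * 0.5,
      region: "region-" + (i % 10),
      createdAt: "2026-01-0" + ((i % 9) + 1),
    });
  }
  return JSON.stringify(records);      // the benchmark times parsing this string
}

const flat10k = makeFlatCase(10_000);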

Cross-engine notes

V8 (Chrome / Edge / Node): hand-tuned C++ parser. The fastest for most workloads. Special optimizations for ASCII keys and small integers.

SpiderMonkey (Firefox): also C++ parser, comparable to V8 on flat documents but ~10-30% slower on deeply-nested. Stronger Unicode handling due to better support for variant string encodings internally.

JavaScriptCore (Safari): usually within 20% of V8. Historically slower on big-integer parsing; recent versions caught up.

For binary alternatives: Protocol Buffers / FlatBuffers / Cap’n Proto parse 5-50× faster than JSON because they don’t need to parse text. MessagePack / CBOR are ~2-3× faster than JSON on most documents. BSON (MongoDB) is roughly the same speed as JSON. The cost is loss of human readability and tooling.
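To sanity-check the binary-format claim on your own data, a rough sketch follows. It assumes the @msgpack/msgpack npm package and its encode/decode functions (Uint8Array in and out); a single-shot timing like this is noisy, so apply the warmup-and-median methodology above before citing numbers:

// Rough JSON vs MessagePack decode comparison.
// Assumes the @msgpack/msgpack package; single-shot timing for brevity.
import { encode, decode } from "@msgpack/msgpack";

const doc = { items: Array.from({ length: 10_000 }, (_, i) => ({ id: i, v: i * 0.5 })) };
const jsonText = JSON.stringify(doc);
const packed = encode(doc);                    // Uint8Array

let t0 = performance.now();
JSON.parse(jsonText);
console.log("JSON.parse    ", (performance.now() - t0).toFixed(2), "ms");

t0 = performance.now();
decode(packed);
console.log("msgpack decode", (performance.now() - t0).toFixed(2), "ms");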

Common mistakes when benchmarking JSON

  • Reusing the same parsed object. Some benchmarks cache the parse result, then time stringify of the cached object. That's a different workload than a fresh parse-stringify cycle; the stringify sees a hot, JIT-friendly object (see the sketch after this list).
  • Including JSON generation in the timed window. We generate the input strings once per case before warmup; only parse/stringify is timed.
  • Forgetting variable-shape effects. V8's hidden classes optimize objects with stable key sets. Every record in our flat-* cases has the same 6 keys; that's realistic for API responses but not for heterogeneous data. Mixed-shape records would parse 20-30% slower.
  • Comparing across different machines. Throughput depends on CPU speed, memory bandwidth, and L3 cache size. A 2024 desktop is 2-3× faster than a 2020 laptop on the same browser. Always note the hardware.
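
To make the first mistake concrete, the wrong and the fair pattern side by side (sketch only):

// WRONG: times stringify of a cached, already-hot object and reports it
// as if it were the cost of a full parse-stringify cycle.
const cached = JSON.parse(text);
let t0 = performance.now();
JSON.stringify(cached);
const misleadingMs = performance.now() - t0;

// FAIR: a fresh parse, then stringify of that fresh result;
// `text` was generated once, before any timing started.
t0 = performance.now();
const obj = JSON.parse(text);
const parseMs = performance.now() - t0;
t0 = performance.now();
JSON.stringify(obj);
const stringifyMs = performance.now() - t0;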

When to use this tool

  • You're choosing between JSON and a binary format for high-frequency or large payloads.
  • You're debugging a parse-time slowdown in a real application.
  • You're writing a technical article that needs measured performance data.
  • You're sizing a streaming-parser switch (deciding the threshold above which JSON.parse becomes prohibitive).

When not to use it

  • Production performance profiling — the Chrome DevTools Performance tab on real traffic gives more accurate numbers, including network and deserialization cost.
  • Comparing JavaScript JSON parsers to native parsers in other languages — JSON.parse is comparable to Python's json or Go's encoding/json, but actual speed depends on the input shape and language runtime.
  • Microbenchmarking sub-millisecond cases — performance.now() is deliberately coarsened (roughly 5-100µs depending on browser and isolation settings) and noise dominates very small measurements.

Common use cases

  • Deciding whether to use streaming JSON parsers (jq, ijson, JSONStream) for a given workload size.
  • Comparing parse cost between Chrome, Firefox, and Safari for a cross-browser app.
  • Investigating a slow-load complaint on a JSON-heavy SPA.
  • Justifying a switch to an alternative format (CBOR, MessagePack, Protocol Buffers) with measured numbers.
  • Citing real parse-throughput numbers in engineering blog posts.

Frequently asked questions

Why is parse slower than stringify?
Parsing has to handle ambiguity at every step (is the next token a number, string, object, array?). Stringify knows exactly what's in memory and just emits it. Parse also has to validate (reject malformed input) and allocate every object and array it builds, while stringify only walks structures that already exist. The 2-3× ratio holds across V8, SpiderMonkey, and JavaScriptCore.
Can I trust these numbers for production sizing?
Within ~30%, yes. Real production has additional cost not measured here: response decompression (gzip / brotli), V8 flattening of cons strings before parsing, fetch() body handling, and per-request warmup. For SSR-rendered API responses, real-world parse cost is typically 1.5-2× this benchmark. For client-side fetch, multiply by 1.2-1.5×. For absolute precision, profile your actual workload with DevTools Performance.
How does V8 JSON.parse compare to Node.js performance?
Same parser. Node.js is V8 + libuv; the JSON parser is the same C++ code as Chrome's. Startup is faster in Node (no DOM, no rendering pipeline), but per-operation parse speed is identical. Benchmark numbers from this page transfer to Node.js for that operation.
Should I switch from JSON to a binary format?
Math: if your payloads are under 100KB and parse takes under 5ms median, switching saves <5ms per request — usually not worth the tooling complexity. Above 1MB or when latency matters in hot loops (high-frequency trading, real-time games), binary formats (Protocol Buffers, FlatBuffers, MessagePack, CBOR) cut 50-90% of parse time. The trade-off is human readability and broad ecosystem support. JSON wins on debuggability; binary wins on speed and size.
Why does my big-integer JSON parse slowly?
Each large number requires a long digit-string-to-double conversion. Numbers near 2^53 are slowest because the conversion path includes precision-loss handling. Also, numbers above 2^53 silently lose precision in JavaScript (12345678901234567890 becomes 12345678901234568000, a different value). For numeric IDs over 2^53, send them as JSON strings and parse to BigInt or a library-specific big-integer type on the receiving side.
What's the largest JSON file Chrome can practically parse?
In one JSON.parse call, the input is capped by V8's maximum string length (2^29 - 24 characters, about 536MB, on 64-bit builds; smaller on 32-bit). Performance degrades well before that: 100MB takes 5-15 seconds on a fast machine. For files >100MB, use streaming parsers (jq for CLI, ijson for Python, JSONStream for Node, or oboe.js in the browser). NDJSON / JSONL format (one JSON object per line) is trivially streamable line-by-line.
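
Because each NDJSON record is a complete document on its own line, a simple in-memory version is just split and parse (for true streaming over fetch(), feed the response body through a line splitter and parse per line):

// NDJSON / JSONL: one JSON document per line, each parsed independently.
const ndjson = '{"id":1}\n{"id":2}\n{"id":3}\n';
const records = ndjson
  .split("\n")
  .filter(line => line.trim() !== "")      // tolerate a trailing newline
  .map(line => JSON.parse(line));
// records → [{id:1}, {id:2}, {id:3}]; memory stays O(line), not O(file)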
