Regex Engine Benchmark
Measure regex performance on V8, SpiderMonkey, JavaScriptCore. 12 patterns, median-of-21 with warmup, downloadable JSON. Reproducible methodology.
50 warmup runs + 21 measured runs per case. Reported value is the median; min/max included for variance.
What it does
A reproducible regex engine benchmark that runs in your browser. It measures the wall-clock time of 12 representative patterns — literal match, character classes, alternation, lookahead, Unicode property classes, and a catastrophic-backtracking demonstration — against fixed inputs, on your machine, with the full methodology disclosed and raw measurements downloadable as JSON.
Why this exists: published regex performance comparisons online are fragmentary. jsperf is dead. V8/SpiderMonkey/JavaScriptCore teams publish internal benchmarks that don’t map to typical workloads. MDN gives no numbers. This page generates measured data on real browsers, with full methodology disclosure, on demand — for citation, comparison, and ReDoS auditing.
Embed this tool on your site
Paste this snippet into any page. It loads on demand (lazy), includes no tracking scripts, and is sized for most dashboards. Adjust the height to fit your layout.
<iframe src="https://freetoolarena.com/embed/regex-engine-benchmark" width="100%" height="720" frameborder="0" loading="lazy" title="Regex Engine Benchmark" style="border:1px solid #e2e8f0;border-radius:12px;max-width:720px;"></iframe>

Example input & output
Input
Selected cases: literal-match, char-class, alternation-3, email-pragmatic, html-strip-lazy, ipv4, lookahead, unicode-prop
Engine: V8 (Chrome/Chromium)
Warmup: 50 runs
Measured: 21 runs (median reported)

Output
literal-match median 0.142 ms (min 0.131 / max 0.158)
char-class median 0.487 ms (min 0.471 / max 0.612)
alternation-3 median 0.221 ms
email-pragmatic median 0.683 ms
html-strip-lazy median 0.412 ms
ipv4 median 0.018 ms
lookahead median 0.024 ms
unicode-prop median 1.243 ms

Numbers are illustrative — actual results depend on your browser, CPU, current system load, and recent JIT state. Run it yourself for ground truth.
How to use it
- Click 'Run benchmark' to measure all 12 cases on your current browser.
- Toggle individual cases under 'Choose cases' if you want to skip the catastrophic-backtracking demonstration or focus on a subset.
- Read the median (ms) column; min/max show variance.
- Click 'Download JSON' to save your raw measurements with engine, hardware, and methodology metadata for citation.
How it works
Key takeaways
- performance.now() resolution clamps at ~5µs in modern browsers (a Spectre mitigation). Sufficient for ms-scale measurements; useless for sub-microsecond ones.
- Median-of-21 runs is the right statistic. Mean is GC-pause sensitive; median ignores outliers. Variance (max/min) above 2× usually signals a GC event during measurement.
- Engines JIT-compile hot paths after ~10-100 invocations. We do 50 untimed warmup runs before the real measurement to capture steady-state performance, not cold start.
- Catastrophic backtracking is an exponential-time problem, not a constant-factor slowdown. (a+)+$ against 30+ a’s followed by a character that defeats the anchor takes tens of seconds; at 35 a’s, minutes (the FAQ below works the math). Engine speed is irrelevant when the algorithm is broken; a minimal demo follows this list.
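A minimal sketch of that blow-up, runnable in any browser console. The pattern and loop are my own illustration, not the tool's redos-trap case; stop well short of 30 a's unless you want a frozen tab.

// Each extra 'a' roughly doubles the work: the trailing 'b' defeats the
// anchor, so the engine tries every split of the a's between the two '+'s.
for (const n of [10, 14, 18, 20, 22]) {
  const input = 'a'.repeat(n) + 'b';
  const t0 = performance.now();
  /(a+)+$/.test(input);
  console.log(`${n} a's: ${(performance.now() - t0).toFixed(2)} ms`);
}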
Methodology
Each case is a (pattern, flags, input) triple. The runner constructs a fresh RegExp object for each measurement (so we’re not benchmarking compilation cache hits). Inputs are generated deterministically — the same input every run — so results are comparable run-to-run on the same browser.
Each case runs:
- 50 warmup runs — result discarded. Lets V8 / SpiderMonkey / JavaScriptCore JIT-compile the regex into optimized native code. Without warmup, you measure the parser/compiler, not the matching engine.
- 21 measured runs — performance.now() wraps each RegExp.exec() call (or the full matchAll iteration for global-flag cases). Times are collected and sorted; the median is reported, with min/max surfaced as a variance check.
- Iteration cap of 10,000 matches per global-flag case — prevents pathological inputs from running forever.
Garbage collection: we nudge the GC between cases via globalThis.gc?.() (works only on Chrome launched with --js-flags="--expose-gc"; ignored elsewhere). We also insert a 50ms setTimeout between cases to give the browser a chance to clean up before the next measurement. A sketch of the whole loop follows.
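A minimal sketch of that loop, assuming a simple case-object shape (id, source, flags, input). Illustrative only, not the tool's actual source; the global-flag matchAll path and the 10,000-iteration cap are omitted for brevity.

// Measure one case: 50 untimed warmup runs, then 21 timed runs,
// constructing a fresh RegExp per measurement.
function benchCase({ id, source, flags, input }, warmup = 50, runs = 21) {
  for (let i = 0; i < warmup; i++) {
    new RegExp(source, flags).exec(input); // untimed: let the JIT reach steady state
  }
  const times = [];
  for (let i = 0; i < runs; i++) {
    const re = new RegExp(source, flags); // fresh object: no compilation-cache hit
    const t0 = performance.now();
    re.exec(input);
    times.push(performance.now() - t0);
  }
  times.sort((a, b) => a - b);
  return { id, median: times[Math.floor(runs / 2)], min: times[0], max: times[runs - 1] };
}

// Run all cases with a GC nudge and a 50ms breather between them.
async function runAll(cases) {
  const results = [];
  for (const c of cases) {
    globalThis.gc?.(); // Chrome with --js-flags="--expose-gc" only; a no-op elsewhere
    await new Promise((resolve) => setTimeout(resolve, 50));
    results.push(benchCase(c));
  }
  return results;
}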
What the cases test
- literal-match: baseline. Fixed-string search in 50KB. Engines optimize this aggressively (Boyer-Moore in V8). Should be sub-ms.
- char-class / word-boundary: common log-parsing patterns. Tests how efficiently the engine handles \d, \w, and \b across mid-size text.
- alternation-3 / alternation-10: stress test on alternation cost. Many engines optimize a 3-way alternation into a trie internally; a 10-way alternation may fall back to linear search.
- email-pragmatic / url-match / html-strip-lazy: real-world extraction workloads. Lazy quantifiers, character classes, optional groups.
- ipv4: bounded numeric ranges with anchors. Tests how the engine compiles (?:25[0-5]|2[0-4]\d|[01]?\d\d?).
- lookahead: zero-width assertion overhead. A password-rule-style pattern with 4 lookaheads.
- unicode-prop: \p{L} requires the u flag. Tests Unicode property table lookups across mixed scripts.
- redos-trap: the classic (a+)+$ against 20 a’s followed by a character that defeats the anchor. Demonstrates measurable catastrophic backtracking without freezing the tab. Disabled by default; toggle it on to see it. WARNING: try 30+ a’s only if you want a multi-second hang.
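For concreteness, here are plausible shapes for three of these cases. These are my assumptions, not the tool's exact patterns or inputs:

// ipv4: bounded numeric octets with anchors
const ipv4 = /^(?:(?:25[0-5]|2[0-4]\d|[01]?\d\d?)\.){3}(?:25[0-5]|2[0-4]\d|[01]?\d\d?)$/;
console.log(ipv4.test('192.168.0.1')); // true
console.log(ipv4.test('256.1.1.1'));   // false

// lookahead: password-rule style with four zero-width assertions
const pw = /^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*\W).{8,}$/;
console.log(pw.test('Str0ng!pass')); // true

// unicode-prop: \p{L} needs the u flag; matches letters in any script
console.log('héllo мир 世界'.match(/\p{L}+/gu)); // ['héllo', 'мир', '世界']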
Comparing across browsers
Run the benchmark on Chrome, then Firefox, then Safari. Save each JSON. The differences tell a story:
- V8 (Chrome) vs SpiderMonkey (Firefox): V8 typically faster on alternation due to internal trie compilation. SpiderMonkey often faster on literal-match for small inputs because its bytecode startup is cheaper.
- JavaScriptCore (Safari): historically the slowest for cold regex but competitive once JIT-warmed. Recent versions narrow the gap.
- Unicode property classes: huge variance. JavaScriptCore was slower for \p{L} until ~2022; current versions are comparable to V8.
This benchmark can’t measure non-JavaScript engines (Python re, PCRE, Go RE2) — they’re not accessible from a browser. For PCRE / RE2 cross-checks, use published JIT benchmarks (PCRE2 author Philip Hazel publishes some) or run a native harness yourself. The methodology here adapts to those engines: same patterns, same inputs, just a different wrapper.
Why we publish raw JSON
Citation-grade data needs reproducibility. JSON exports include: timestamp, engine name, full user-agent string, hardware concurrency, methodology (warmup/measure counts, timer source), and per-case (pattern, flags, category, median, min, max, mean). With that, anyone can reproduce your numbers, audit your methodology, or feed them into their own analysis.
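A sketch of the export shape, with key names inferred from the list above (generatedAt is confirmed by the citation note below; the rest may differ in the actual file):

{
  "generatedAt": "2025-01-15T12:00:00.000Z",
  "engine": "V8",
  "userAgent": "Mozilla/5.0 ...",
  "hardwareConcurrency": 8,
  "methodology": { "warmupRuns": 50, "measuredRuns": 21, "timer": "performance.now" },
  "cases": [
    { "pattern": "...", "flags": "", "category": "baseline",
      "median": 0.142, "min": 0.131, "max": 0.158, "mean": 0.145 }
  ]
}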
If you cite these numbers, please link to this page and include the generatedAt timestamp from your JSON — engines change, and stale numbers mislead.
Common mistakes when benchmarking regex
- Benchmarking the parser, not the engine. Reusing the same new RegExp(pattern) across runs hits the compilation cache. Our harness creates a fresh RegExp per measurement to avoid this.
- No warmup. First-run numbers measure JIT compilation, not steady-state matching. 50 warmup runs is conservative for V8; SpiderMonkey warms faster.
- Mean instead of median. Means absorb GC pauses; medians ignore them. Always report the median for benchmarks under 100ms; a worked example follows this list.
- Microbenchmarking on a busy machine. Close other tabs, plug in the laptop charger, and watch for thermal throttling. Variance >2× is a red flag for measurement noise.
- Generalizing to production. This benchmark uses fixed inputs. Real production traffic has variance — some inputs are pathological. A pattern that’s fast here can hang on a malicious input. Always audit for ReDoS separately.
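The mean-vs-median point, made concrete with made-up timings where one GC pause lands mid-measurement:

// Five timed runs; the 6.2ms outlier is a GC pause, not matching cost.
const times = [0.14, 0.15, 0.14, 6.2, 0.15];
const mean = times.reduce((a, b) => a + b, 0) / times.length;  // ≈ 1.36 ms: misleading
const sorted = [...times].sort((a, b) => a - b);
const median = sorted[Math.floor(sorted.length / 2)];          // 0.15 ms: representative
console.log({ mean, median });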
When to use this tool
- Investigating a performance regression in a production regex.
- Building security tooling that needs to detect ReDoS patterns.
- Writing an engineering blog post or security advisory that needs measured data.
- Comparing two patterns to decide which to deploy in a hot path.
When not to use it
- Production performance profiling — use Chrome DevTools Performance tab on actual production traffic.
- Comparing against non-JavaScript regex engines (PCRE, Python re, Go RE2) — those need their own native runners.
- Microbenchmarking sub-microsecond patterns — performance.now() resolution clamps at ~5µs in modern browsers.
Common use cases
- Comparing regex performance across browsers (Chrome / Firefox / Safari).
- Auditing your own patterns against known-fast and known-slow benchmark cases.
- Demonstrating ReDoS to a colleague: see catastrophic backtracking measured live.
- Validating that engine optimization claims (V8 release notes, SpiderMonkey changelogs) match observed reality.
- Citing published numbers in security advisories, performance writeups, or technical posts.
Frequently asked questions
- How accurate is performance.now() in a browser?
- Modern browsers clamp performance.now() to ~5 microseconds resolution as a Spectre / fingerprinting mitigation (Chrome, Firefox, and Safari, all since 2018). For ms-scale regex measurements (1ms+), this is plenty accurate. For sub-microsecond patterns, the clamp dominates the signal and you can't measure differences. To benchmark sub-µs work, batch many invocations and divide, as sketched below.
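A sketch of that batching workaround; the pattern and input are placeholders:

// Amortize the ~5µs timer clamp: time N calls, divide by N.
const re = /\d+/;                    // placeholder pattern
const input = 'order 12345 shipped'; // placeholder input
const N = 10000;
const t0 = performance.now();
for (let i = 0; i < N; i++) re.exec(input);
const perCallMs = (performance.now() - t0) / N;
console.log(`${(perCallMs * 1000).toFixed(3)} µs per call`);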
- Why do my numbers vary 2-5x between runs on the same browser?
- Several causes: (1) JIT warmup didn't fully complete — try increasing warmup runs from 50 to 200. (2) GC pause during measurement — median ignores these but max will spike. (3) Background tabs or system processes consuming CPU — close them. (4) Thermal throttling on laptops — ensure plugged in and cool. (5) For very fast patterns (<0.5ms), the timer's 5µs clamp dominates noise. Variance under 1.5x is good; over 2x means rerun on a quieter system.
- Can I compare results from this page to PCRE / Python re / Go RE2?
- Not directly — those are different engines with different optimizations. The METHODOLOGY transfers (same patterns, same inputs, median-of-N with warmup), but you'd need to run a native harness for those engines. PCRE2 publishes some internal benchmarks; Go's regexp package is RE2-based and explicitly slower than PCRE for some patterns in exchange for linear-time guarantees. The patterns and inputs in this benchmark are designed to be portable: copy the pattern strings into a Python or Go harness and you'll get directly comparable numbers.
- What's the catastrophic-backtracking case actually demonstrating?
- The pattern (a+)+$ on input 'aaaa...b' (the trailing 'b' prevents the anchor from ever matching) exhibits exponential time complexity. The engine tries every possible split of the a's between the inner '+' and the outer '+'. With N a's, that's roughly 2^N paths. 20 a's = ~1M paths = ~10ms; 25 = ~32M = ~300ms; 30 = ~1B = ~10s; 35 = ~30B = ~5min. We use 20 a's by default to demonstrate measurable cost without freezing the tab. Add more a's at your own risk. This is why ReDoS is a real attack vector: a single malicious input can DoS your service.
- Why warm up 50 times before measuring?
- JavaScript engines are tiered JITs: first runs interpret bytecode (slowest), then a baseline compiler kicks in (~10-50 calls), then an optimizing compiler (~100-1000 calls). Without warmup, you measure the interpreter or baseline tier, which can be 10-100x slower than steady state. 50 warmup runs is conservative — it lands you firmly in the optimizing-compiler tier for V8 and SpiderMonkey. For very simple patterns, even 10 warmup runs would suffice; for complex patterns with deep backtracking trees, 100+ might be safer. We chose 50 as a compromise.
- Can I cite these numbers in a blog post or security advisory?
- Yes — that's the explicit purpose. Use the 'Download JSON' button to get the raw measurements with engine, user-agent, hardware concurrency, and methodology metadata. Include the generatedAt timestamp in your citation; engines change with browser versions and old benchmarks mislead. Always attribute to the page URL and include the timestamp. If you find a discrepancy with my notes (above) about engine behavior, run it yourself and report — the goal is accurate, reproducible measurements, not authority.
Learn more
Guides about this topic
- Regex Cheat Sheet: All Patterns Explained (Developers & Technical · Guide). Complete regex reference: every operator, flavor differences (ECMAScript, PCRE, Python, Go), and 30 patterns covering 95% of real matching tasks.
- How to test regex patterns (How-To & Life · Guide). Anchors, quantifiers, character classes, groups, lookarounds, flags across flavors (PCRE, JS, Python), catastrophic backtracking, and live-testing workflows.
- How to generate QR codes (Using Our Tools · Guide). Make a QR code for a URL, wifi, vCard, or plain text. What error correction means, how big to print, how to test it.
- How to create a strong password (Using Our Tools · Guide). The entropy math, 2026 NIST rules, passphrases vs passwords, password managers, MFA and hardware keys, where passkeys fit, and 5 mistakes that still lose accounts.
- How to encode and decode Base64 (Developers & Technical · Guide). What Base64 is (not encryption), the 3-to-4 encoding mechanics, standard vs URL-safe vs MIME variants, 33% overhead, when to use it, and common mistakes.
- How to choose a color palette (Design & Media · Guide). HSL color theory, four palette schemes (monochromatic, analogous, complementary, triadic), the 60-30-10 rule, WCAG contrast, dark mode, and palette tools.
Explore more developer utilities tools
- Port Number Lookup: quick reference for ~140 well-known TCP/UDP ports — search by number or service name. Web, mail, DNS, DB, SSH, Docker, Kafka, MQTT, more.
- Test Credit Card Numbers: reference table of canonical test card numbers from Stripe, Adyen, and Braintree sandbox docs. Plus Luhn validator + network detector.
- IPv6 Expander & Shortener: expand or shorten IPv6 addresses to RFC 5952 canonical form. Handles zone IDs, prefix length, embedded IPv4, ip6.arpa reverse DNS, and binary.
- Htpasswd Generator: generate .htpasswd lines for Apache + nginx Basic Auth. Browser-only SHA hashing. Includes nginx + Apache config snippets and a curl example.
- Chmod Calculator: calculate Unix file permissions: octal (755, 644) ↔ symbolic (rwxr-xr-x) ↔ rwx checkboxes. Covers setuid, setgid, sticky bit. With presets.
- Excel Formula Explainer: paste an Excel or Google Sheets formula, get a plain-English breakdown of every function. Covers 60+ functions, gotchas, modern alternatives.