How to convert SQL to JSON
Row-to-object mapping, NULL vs missing keys, type fidelity (decimals, bigints, dates), Postgres/MySQL/SQLite JSON functions, streaming.
Relational rows and JSON documents model data differently. SQL is tabular, normalized, foreign-keyed. JSON is hierarchical, denormalized, embedded. Converting between them involves more than column-to-key mapping: you choose whether to flatten, nest, or embed relations; how to handle NULL vs missing keys; how to represent dates, decimals, and blobs; and when to use SQL’s own JSON functions vs a post-query transform. This guide covers the row-to-object mapping, nested shapes via joins or aggregation, NULL semantics, type fidelity, modern SQL JSON functions (Postgres, MySQL, SQLite), and performance/streaming tradeoffs.
The basic row-to-object mapping
A table row becomes a JSON object; a result set becomes an array of objects:
-- SQL
SELECT id, name, email FROM users LIMIT 2;
-- JSON
[
{ "id": 1, "name": "Alice", "email": "a@x.com" },
{ "id": 2, "name": "Bob", "email": "b@x.com" }
]
Column names become keys. Column values become typed JSON scalars. Easy for flat tables.
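In application code, the mapping is a zip of column names onto each row. A minimal sketch in Python using the stdlib sqlite3 module, with an in-memory stand-in for the users table:

```python
import json
import sqlite3

# In-memory stand-in for the users table from the example above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [(1, "Alice", "a@x.com"), (2, "Bob", "b@x.com")])

# Map each row to a dict keyed by column name, then serialize the list.
cur = conn.execute("SELECT id, name, email FROM users LIMIT 2")
cols = [d[0] for d in cur.description]
rows = [dict(zip(cols, row)) for row in cur]
payload = json.dumps(rows)
```

Most drivers expose column names via cursor metadata (here, cursor.description) or a row factory, so the key list never needs to be hardcoded.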
NULL vs missing keys
SQL NULL has two reasonable JSON representations:
Include as null: {"email": null}. Preserves the column's existence. Default in most converters.
Omit the key: {}. Smaller payload; may confuse consumers expecting the key.
Pick one per API: mixing is the worst outcome. If your consumers use Zod / JSON Schema with required vs optional fields, explicit null is clearer.
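The two conventions are a one-line difference in the serializer. A sketch in Python (the row dict is illustrative):

```python
import json

# A row whose email column is SQL NULL (None in Python).
row = {"id": 1, "name": "Alice", "email": None}

# Convention 1: keep NULL as an explicit JSON null (most converters' default).
with_null = json.dumps(row)

# Convention 2: omit NULL columns entirely for a smaller payload.
without_null = json.dumps({k: v for k, v in row.items() if v is not None})
```

Whichever convention you pick, apply it to every endpoint; a consumer should never have to guess which one a given response uses.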
Type fidelity
SQL has types JSON doesn’t:
DECIMAL / NUMERIC: exact-precision types. JSON numbers are 64-bit doubles. For money, output as a string ("amount": "19.99") and parse with a decimal library on the consuming side.
BIGINT: JavaScript Number loses precision past 2^53. Output as string for IDs over that threshold.
DATE / TIMESTAMP: JSON has no date type. Convention: ISO 8601 strings ("2026-04-23T12:00:00Z"). UTC is strongly preferred to avoid timezone ambiguity.
BYTEA / BLOB: base64-encode. Don’t try to stuff raw binary into JSON strings directly — arbitrary bytes are rarely valid UTF-8, and unescaped control characters will break parsers.
UUID: string. Preserve hyphens for readability.
BOOLEAN: maps directly. MySQL stores as TINYINT (0/1) — coerce in your query or driver.
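The rules above can live in a single coercion function applied to every value before serialization. A sketch in Python; the function name and cutoff constant are illustrative:

```python
import base64
import json
from datetime import datetime, timezone
from decimal import Decimal

JS_MAX_SAFE_INT = 2**53 - 1  # Number.MAX_SAFE_INTEGER in JavaScript

def jsonable(value):
    """Coerce a SQL value to a JSON-safe representation."""
    if isinstance(value, Decimal):
        return str(value)  # exact money value, no double rounding
    if isinstance(value, int) and abs(value) > JS_MAX_SAFE_INT:
        return str(value)  # BIGINT past 2^53: string-encode
    if isinstance(value, datetime):
        return value.astimezone(timezone.utc).isoformat()  # ISO 8601, UTC
    if isinstance(value, (bytes, bytearray)):
        return base64.b64encode(bytes(value)).decode("ascii")  # BLOB -> base64
    return value  # str, bool, small int, float, None map directly

row = {"amount": Decimal("19.99"), "id": 2**60, "active": True}
encoded = json.dumps({k: jsonable(v) for k, v in row.items()})
```

Note the datetime branch assumes timezone-aware values; naive timestamps should be rejected or tagged with their zone before they reach the serializer.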
Joins — flat vs nested
A join returns a flat row. JSON often wants nested:
-- SQL
SELECT u.id, u.name, p.title AS post_title
FROM users u JOIN posts p ON p.user_id = u.id;
-- Flat JSON (same shape as SQL result)
[
{ "id": 1, "name": "Alice", "post_title": "Hello" },
{ "id": 1, "name": "Alice", "post_title": "Second" }
]
-- Nested (typical API shape)
[
{
"id": 1,
"name": "Alice",
"posts": [
{ "title": "Hello" },
{ "title": "Second" }
]
}
]
Flat is fine for rows that will render as a table. Nested is what APIs usually want. To produce nested, either:
Post-query group: run the flat query, group by parent key in application code. Easy but ships duplicated parent columns over the wire.
SQL aggregation: use the database’s JSON aggregate functions (shown next). Fewer bytes, fewer round-trips.
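The post-query grouping option is a small loop over the flat rows. A sketch in Python, using the flat join result from above:

```python
# Flat rows as returned by the users x posts join above.
flat = [
    {"id": 1, "name": "Alice", "post_title": "Hello"},
    {"id": 1, "name": "Alice", "post_title": "Second"},
]

# Group by the parent key; parent columns repeat on every row, so the
# first occurrence wins and each row contributes one child object.
by_user = {}
for r in flat:
    user = by_user.setdefault(
        r["id"], {"id": r["id"], "name": r["name"], "posts": []}
    )
    user["posts"].append({"title": r["post_title"]})

nested = list(by_user.values())
```

The duplicated parent columns still crossed the wire; SQL-side aggregation avoids that cost entirely.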
Postgres — json_agg, row_to_json, jsonb_build_object
Postgres has excellent JSON support:
SELECT jsonb_build_object(
'id', u.id,
'name', u.name,
'posts', (
SELECT jsonb_agg(jsonb_build_object('title', p.title))
FROM posts p WHERE p.user_id = u.id
)
) FROM users u;
Result is a column of JSONB documents, one per user, with an embedded posts array. Aggregate with jsonb_agg(...) at the outer level if you want a single JSON array of all users.
Use JSONB (the binary representation) for anything non-trivial: it is faster to process and can be indexed with GIN.
MySQL 5.7+ / MariaDB — JSON_OBJECT, JSON_ARRAYAGG
SELECT JSON_OBJECT(
'id', u.id,
'name', u.name,
'posts', (
SELECT JSON_ARRAYAGG(JSON_OBJECT('title', p.title))
FROM posts p WHERE p.user_id = u.id
)
) FROM users u;
Same shape as Postgres. MySQL 8+ is the sweet spot — 5.7 JSON functions exist but are slower and have more quirks.
SQLite 3.38+ — json_object, json_group_array
SELECT json_object(
'id', u.id,
'name', u.name,
'posts', (
SELECT json_group_array(json_object('title', p.title))
FROM posts p WHERE p.user_id = u.id
)
) FROM users u;
SQLite’s functions are a bit newer but fully functional. Perfect for local tools and embedded apps.
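Because SQLite ships in Python’s standard library, this pattern is easy to try end to end. One caveat in the sketch below: the subquery is wrapped in json() so the outer json_object treats its result as a JSON array rather than a plain string (SQLite’s internal JSON subtype does not reliably survive a subquery boundary). Assumes a SQLite build with the JSON functions available (built in by default since 3.38):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (user_id INTEGER, title TEXT);
    INSERT INTO users VALUES (1, 'Alice');
    INSERT INTO posts VALUES (1, 'Hello'), (1, 'Second');
""")

# Same shape as the query above; the document comes back as one TEXT column.
(doc,) = conn.execute("""
    SELECT json_object(
        'id', u.id,
        'name', u.name,
        'posts', json((
            SELECT json_group_array(json_object('title', p.title))
            FROM posts p WHERE p.user_id = u.id
        ))
    ) FROM users u
""").fetchone()

user = json.loads(doc)
```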
SQL Server — FOR JSON
SELECT u.id, u.name,
(SELECT p.title FROM posts p WHERE p.user_id = u.id FOR JSON PATH) AS posts
FROM users u
FOR JSON PATH, ROOT('users');
FOR JSON PATH is SQL Server’s idiom. Supports nested paths via dot notation in column aliases (posts.title).
Streaming large result sets
Building JSON in memory fails at scale. For a 10M-row dump:
NDJSON / JSON Lines: one JSON object per line, no wrapping array. Streamable end-to-end — write a row, read a row. Industry standard for data pipelines.
Cursor + streaming serializer: use a database cursor (Postgres DECLARE CURSOR, MySQL unbuffered results) and a streaming JSON writer (jq, or stream-json in Node). Emits an array but never holds it all in memory.
Avoid: loading the entire result set, then serializing. Hits memory limits fast.
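An NDJSON dump is a loop that serializes and writes one row at a time. A sketch in Python with sqlite3 standing in for the source database; the function name is illustrative:

```python
import io
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, kind TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(i, "click") for i in range(3)])

def dump_ndjson(cursor, out):
    """Write one JSON object per line; only one row is held in memory."""
    cols = [d[0] for d in cursor.description]
    for row in cursor:  # the cursor fetches incrementally, not all at once
        out.write(json.dumps(dict(zip(cols, row))) + "\n")

out = io.StringIO()  # stands in for a file or HTTP response stream
dump_ndjson(conn.execute("SELECT id, kind FROM events"), out)
```

Each line is independently parseable, so a consumer can start processing before the dump finishes.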
Quoting and escaping
If you handroll SQL-to-JSON in application code, make sure the serializer handles:
Quotes, backslashes, control characters in string columns (", \, \n, \u0000).
UTF-8 validity — Postgres can hand back non-UTF-8 bytes (bytea rendered with bytea_output = 'escape', or a database created with SQL_ASCII encoding); your serializer won’t like them.
Don’t build JSON with string concatenation. Use your language’s JSON library or the DB’s JSON functions.
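The point is easy to demonstrate: a real JSON library escapes hostile values and round-trips them losslessly, where string concatenation emits a broken document. A quick Python check:

```python
import json

# Values that break hand-rolled serializers: quotes, backslashes,
# a newline, and a NUL control character.
nasty = {"comment": 'She said "hi" \\ bye\nnext line \u0000'}

encoded = json.dumps(nasty)  # library escapes to \", \\, \n, \u0000
decoded = json.loads(encoded)

# What naive concatenation produces: invalid JSON.
broken = '{"comment": "' + nasty["comment"] + '"}'
try:
    json.loads(broken)
    broken_parses = True
except json.JSONDecodeError:
    broken_parses = False
```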
Schema evolution
JSON outputs are part of your API contract. Adding columns is safe (extra keys don’t break well-written consumers). Renaming and dropping aren’t safe — version the output or add a compatibility layer.
Common mistakes
Returning DECIMAL as JSON number. Silent precision loss on money. Strings only.
TIMESTAMPs without timezones. “2026-04-23 12:00” without a Z or offset means nothing to the consumer.
Cartesian-join flattening. JOINing users × posts without deduping users duplicates the user row per post. Use aggregation for nested output.
BIGINTs as JSON numbers. Silently truncated in JS consumers. String-encode.
Mixing null and missing-key conventions. Consumers can’t tell a “not set” from a “not known”.
Building JSON with string concatenation. Escaping bugs waiting to happen.
Serializing huge result sets in memory. Use NDJSON and streaming.
Run the numbers
Convert SQL table data to JSON with the SQL to JSON converter. Pair with the JSON formatter to validate output, and the SQL formatter to keep the source queries readable.