JSON and CSV are the two most common data formats, but converting between them is surprisingly tricky. Here's how to handle nested objects, arrays, and edge cases.
I thought converting JSON to CSV would take five minutes. Drag a file, click a button, done. Then I saw the output. My nested `address.city` field had become `[object Object]`. Half the rows were misaligned. And the CSV wouldn't open properly in Excel because — surprise — some of my data contained commas.
That was two years ago. Since then, I've converted hundreds of thousands of records between JSON and CSV for everything from API migrations to analytics exports to feeding data into spreadsheets for non-technical teammates. What I've learned is this: JSON-to-CSV conversion is a solved problem with a hundred unsolved edge cases, and the difference between a quick copy-paste and a multi-hour debugging session comes down to knowing those edge cases before they hit you.
This is the guide I wish I'd had.
On the surface, the mapping seems obvious. JSON is a list of objects. CSV is a table. Each object key becomes a column header. Each object becomes a row. Done, right?
Not even close.
JSON is a hierarchical, tree-structured format. It can nest infinitely. An object can contain arrays of objects that themselves contain arrays. There's no depth limit.
CSV is flat. It has exactly two dimensions: rows and columns. That's it. No nesting. No hierarchy. No arrays-within-cells.
So when you try to convert this:

```json
{
  "name": "Alice",
  "address": {
    "street": "123 Main St",
    "city": "Portland",
    "state": "OR"
  },
  "orders": [
    { "id": 1, "total": 49.99 },
    { "id": 2, "total": 129.0 }
  ]
}
```

...into CSV, you immediately face hard decisions:

- `address`: flatten it into `address.street`, `address.city`, `address.state`?
- `orders`: one row per order (duplicating the name and address)? Or serialize the array into a single cell?

There is no single correct answer. It depends on what you're going to do with the CSV.
Even flat JSON has gotchas:

```json
[
  { "name": "Alice", "age": 30, "city": "Portland" },
  { "name": "Bob", "city": "Seattle" },
  { "name": "Charlie", "age": 25, "city": "Denver", "state": "CO" }
]
```

Notice anything? The objects don't have consistent keys. Alice has `age` but not `state`. Bob has neither `age` nor `state`. Charlie has both.
In JSON, this is perfectly legal. In CSV, every row needs the same columns. So your converter needs to scan every object, take the union of all keys, and leave cells blank wherever a key is missing.

Most naive converters only look at the first object to determine headers. Feed them inconsistent data and you'll silently lose columns.
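A minimal sketch of that scan, with `collectHeaders` as a hypothetical helper name:

```javascript
// Collect the union of keys across every object, not just the first.
function collectHeaders(records) {
  const keys = new Set();
  for (const record of records) {
    for (const key of Object.keys(record)) keys.add(key);
  }
  return [...keys];
}

const rows = [
  { name: "Alice", age: 30, city: "Portland" },
  { name: "Bob", city: "Seattle" },
  { name: "Charlie", age: 25, city: "Denver", state: "CO" },
];
console.log(collectHeaders(rows)); // → ["name", "age", "city", "state"]
```

Insertion order is preserved, so columns appear in first-seen order, which is usually what readers expect.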
Let's talk about the elephant in the room. Real-world JSON from APIs is almost never flat. It's nested, sometimes deeply. You need a flattening strategy.
The most common approach is dot notation: nested keys get concatenated with dots:
```json
{
  "user": {
    "profile": {
      "name": "Alice",
      "settings": {
        "theme": "dark"
      }
    }
  }
}
```

Becomes:
| user.profile.name | user.profile.settings.theme |
|---|---|
| Alice | dark |
Pros: Preserves the full path, can be unflattened back to JSON. Cons: Column names get absurdly long with deep nesting. Excel columns become unreadable.
An alternative is underscore notation: same idea, but with underscores instead of dots:
| user_profile_name | user_profile_settings_theme |
|---|---|
| Alice | dark |
Pros: Works better with tools that choke on dots in column names (looking at you, pandas).
Cons: Ambiguous if your original keys already contain underscores. Is `user_first_name` a flattened `user.first_name` or a literal key `user_first_name`?
A third option is depth-limited flattening: only flatten to a certain depth, and beyond that, serialize values as JSON strings:
| user.profile.name | user.profile.settings |
|---|---|
| Alice | {"theme":"dark"} |
Pros: Keeps column count manageable. Preserves complex structures. Cons: Cells containing JSON strings are awkward to work with in spreadsheets.
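A sketch of depth-limited flattening under these assumptions: dot separator, `maxDepth` counted from 1, and arrays always serialized. `flattenToDepth` is my own name, not a library function.

```javascript
// Flatten nested objects up to maxDepth; anything deeper (or any array)
// is serialized as a JSON string in a single cell.
function flattenToDepth(obj, maxDepth = 3, prefix = "", depth = 1, out = {}) {
  for (const [key, value] of Object.entries(obj)) {
    const path = prefix ? `${prefix}.${key}` : key;
    const isPlainObject =
      value !== null && typeof value === "object" && !Array.isArray(value);
    if (isPlainObject && depth < maxDepth) {
      flattenToDepth(value, maxDepth, path, depth + 1, out);
    } else if (value !== null && typeof value === "object") {
      out[path] = JSON.stringify(value); // past the limit: serialize
    } else {
      out[path] = value;
    }
  }
  return out;
}
```

With the example above and `maxDepth = 3`, this yields a `user.profile.name` column and a serialized `user.profile.settings` cell, matching the table.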
For arrays, another strategy is row explosion: create a new row for each element:
```json
[
  {
    "name": "Alice",
    "orders": [
      { "product": "Widget", "qty": 2 },
      { "product": "Gadget", "qty": 1 }
    ]
  }
]
```

Becomes:
| name | orders.product | orders.qty |
|---|---|---|
| Alice | Widget | 2 |
| Alice | Gadget | 1 |
Pros: Every cell contains a single value. Perfect for pivot tables and aggregation. Cons: Duplicates parent data. A record with 50 orders becomes 50 rows. Your 1,000-row dataset might become 100,000 rows.
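A sketch of the explode strategy; `explodeArray` is a hypothetical helper, and the `parent.child` column naming is my choice:

```javascript
// Turn one record with an array field into one row per array element,
// duplicating the parent fields on every row.
function explodeArray(record, arrayKey) {
  const { [arrayKey]: items, ...parent } = record;
  if (!Array.isArray(items) || items.length === 0) return [parent];
  return items.map((item) => {
    const row = { ...parent };
    for (const [key, value] of Object.entries(item)) {
      row[`${arrayKey}.${key}`] = value;
    }
    return row;
  });
}
```

Note the empty-array case: a record with no orders still produces one row, so you don't silently drop it.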
For most use cases: dot notation with a depth limit of 3, arrays serialized as JSON strings. This gives you readable columns without exploding your row count. If the recipient needs to drill into arrays, they can always filter/expand later.
This is where things get philosophical. JSON distinguishes between:

- `null` — explicitly no value
- `undefined` — key doesn't exist (technically not valid in JSON, but you'll encounter it in JavaScript objects)
- `""` — empty string
- `0` — zero (a valid value, not "empty")
- `false` — boolean false (also a valid value)
- `[]` — empty array
- `{}` — empty object

CSV has... empty cells. That's it. One representation for all of the above.
So what do you do?
```json
[
  { "name": "Alice", "score": 0, "active": false, "notes": null },
  { "name": "Bob", "score": null, "active": true },
  { "name": "Charlie", "score": 85, "notes": "" }
]
```

Here's the approach I use:
| Value | CSV Representation | Why |
|---|---|---|
| `null` | (empty cell) | Safest default. Spreadsheets handle blank cells well. |
| Missing key | (empty cell) | Same as `null` — the consumer doesn't need to know the difference. |
| `""` | `""` | Preserve empty strings with explicit quotes to distinguish from `null`. |
| `0` | `0` | Keep it. Zero is a value, not absence of value. |
| `false` | `false` | Keep it. Boolean false is meaningful data. |
| `[]` | (empty cell) or `[]` | Depends on context. Usually empty cell. |
| `{}` | (empty cell) or `{}` | Same. |
The key principle: don't lose information silently. If 0 becomes an empty cell, your data analyst will filter it out thinking it's missing. That's a bug, not a feature.
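The table above can be sketched as a single cell serializer. The specific choices (blank cell for null and missing, quoted `""` for empty string) mirror the table; `cellValue` is my own name:

```javascript
// Map a JSON value to its CSV cell text per the table above.
function cellValue(value) {
  if (value === null || value === undefined) return ""; // blank cell
  if (value === "") return '""'; // explicit empty string, distinct from null
  if (typeof value === "object") {
    // Arrays and objects: blank if empty, JSON string otherwise
    return Object.keys(value).length ? JSON.stringify(value) : "";
  }
  return String(value); // 0 and false survive as "0" and "false"
}
```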
"CSV" stands for Comma-Separated Values. Except when it doesn't.
If your data contains commas (and it will — addresses, descriptions, names like "Smith, John"), the standard approach is to wrap the value in double quotes:

```csv
name,address,city
"Smith, John","123 Main St, Apt 4",Portland
```

But what if your data contains double quotes? Escape them by doubling:

```csv
name,quote
Alice,"She said ""hello"" to me"
```

This is defined in RFC 4180, and most tools handle it correctly. Most. Not all. I've seen converters that skip quoting entirely, escape quotes with backslashes instead of doubling them, or split quoted fields on embedded commas anyway.
If you're importing into a database or another program, TSV is often safer than CSV:

```
name	address	city
Alice	123 Main St, Apt 4	Portland
```
Tabs rarely appear in data, so you almost never need quoting. The downside: TSV files don't open as nicely in Excel by default (you need to use the import wizard).
European Excel installations default to semicolons as the delimiter because Europeans use commas as decimal separators. So 1.500,50 (one thousand five hundred and fifty cents) would conflict with a comma delimiter.
If you're sending CSV files to European colleagues, consider:
```csv
name;price;quantity
Widget;1.500,50;3
Gadget;299,99;1
```

Or better yet: use tab separation and avoid the entire problem.
This is the edge case that wastes the most developer time. You convert your JSON to CSV, open it in Excel, and see `Ã©` instead of `é`, `â€”` instead of `—`, or just squares and question marks everywhere.
Excel on Windows does not default to UTF-8 when opening CSV files. It defaults to the system locale encoding (usually Windows-1252 in Western systems). So your perfectly valid UTF-8 CSV looks like garbage.
The fix is a Byte Order Mark (BOM) — three bytes (0xEF 0xBB 0xBF) at the very beginning of the file. This signals to Excel: "Hey, this is UTF-8."
```javascript
// When generating CSV in the browser
const BOM = "\uFEFF";
const csvContent = BOM + headers + "\n" + rows.join("\n");
const blob = new Blob([csvContent], { type: "text/csv;charset=utf-8" });
```

With BOM: Excel opens the file correctly. Without BOM: Excel mangles every non-ASCII character.
Google Sheets handles UTF-8 without BOM. LibreOffice handles it too. It's specifically Excel on Windows that causes this headache. But since a huge percentage of your users will open CSVs in Excel, the BOM is practically required.
JSON is defined as UTF-8 (RFC 8259). But in practice, I've encountered JSON files encoded in UTF-16 (common from Windows tooling), Latin-1, and Windows-1252.

If your converter doesn't detect or handle the input encoding, the output will inherit whatever garbage encoding came in. Good converters let you specify input encoding or auto-detect it.
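One piece of auto-detection is cheap: checking for a byte order mark. This only catches files that actually carry a BOM; `detectBOM` is my own helper, not a standard API.

```javascript
// Check the first bytes of a buffer for a known byte order mark.
function detectBOM(bytes) {
  if (bytes[0] === 0xef && bytes[1] === 0xbb && bytes[2] === 0xbf) return "utf-8";
  if (bytes[0] === 0xff && bytes[1] === 0xfe) return "utf-16le";
  if (bytes[0] === 0xfe && bytes[1] === 0xff) return "utf-16be";
  return null; // no BOM: fall back to heuristics or a user setting
}
```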
Going the other direction has its own set of problems.
CSV is typeless. Every cell is a string. When you convert to JSON, do you want:

```json
{ "age": "30", "active": "true", "score": "0" }
```

Or:

```json
{ "age": 30, "active": true, "score": 0 }
```

Obviously the second one. But type inference is tricky:

- `"30"` → number? String? (What about `"007"`? Zip codes like `"01234"`?)
- `"true"` → boolean? The literal string `"true"`?
- `""` → `null`? Empty string?
- `"2026-03-23"` → date? String?
- `"1,234.56"` → number `1234.56`? String?

There's no way to know for certain without schema information. Smart converters sample many rows before committing to a column's type, and let you override the inference per column.
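A sketch of conservative per-cell inference. The rules here (full-string numeric match only, leading zeros stay strings) are my choices, not a standard:

```javascript
// Infer a JSON value from a CSV cell, erring on the side of strings.
function inferValue(raw) {
  if (raw === "") return null;
  if (raw === "true") return true;
  if (raw === "false") return false;
  // Only accept clean numerics: rejects "007", "01234", "1,234.56"
  if (/^-?(0|[1-9]\d*)(\.\d+)?$/.test(raw)) return Number(raw);
  return raw;
}
```

Keeping zip codes and padded IDs as strings by default is the safer failure mode; coercing them to numbers destroys data irreversibly.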
If you have a CSV with dot-notation headers like `address.city` and `address.state`, a good converter should be able to reconstruct the nested JSON:

```csv
name,address.city,address.state
Alice,Portland,OR
```

Should become:

```json
[
  {
    "name": "Alice",
    "address": {
      "city": "Portland",
      "state": "OR"
    }
  }
]
```

This is the reverse of the flattening we discussed earlier. Not all tools support it, but it's incredibly useful when round-tripping data.
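A sketch of that reverse operation, assuming dot-separated headers and no literal dots inside keys (`unflatten` is my own name):

```javascript
// Rebuild nested objects from dot-notation keys.
function unflatten(flat) {
  const result = {};
  for (const [path, value] of Object.entries(flat)) {
    const parts = path.split(".");
    let node = result;
    for (let i = 0; i < parts.length - 1; i++) {
      node = node[parts[i]] ??= {}; // create intermediate objects as needed
    }
    node[parts[parts.length - 1]] = value;
  }
  return result;
}
```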
Here's something most people don't know: you can convert multi-gigabyte files in the browser without uploading anything.
The trick is streaming. Instead of loading the entire file into memory, you read it chunk by chunk:

```javascript
// Reading a large file in chunks
const reader = file.stream().getReader();
const decoder = new TextDecoder();
let buffer = "";

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffer += decoder.decode(value, { stream: true });

  // Process complete lines from the buffer
  const lines = buffer.split("\n");
  buffer = lines.pop(); // Keep incomplete line in buffer
  for (const line of lines) {
    processRow(line);
  }
}

// Flush whatever is left after the final chunk
buffer += decoder.decode();
if (buffer) processRow(buffer);
```

For JSON, streaming is harder because JSON isn't line-delimited (unless you're using NDJSON/JSON Lines). But for CSV output, you can absolutely generate it incrementally and either stream it to a download or write it to a ReadableStream.
Browser-based tools that do this correctly can handle files that would crash server-based converters — because the server has to juggle thousands of simultaneous uploads while your browser only has to handle your one file.
A 500 MB CSV file loaded entirely into memory as a JavaScript string takes about 1 GB of memory (because JavaScript uses UTF-16 internally). With streaming, you can process it using only a few megabytes.
This matters for any environment where memory is scarce: browser tabs, mobile devices, serverless functions with tight limits.

If your converter tool loads the entire file before processing, it's doing it wrong.
This is the use case I deal with most often. Here's the workflow I've settled on after years of iteration:
Whether it's from Postman, curl, or your app's network tab, save the JSON response. If it's paginated, concatenate all pages into a single array first.
Before converting, understand what you're working with:
```javascript
// Quick structure analysis
const data = JSON.parse(jsonString);
const sample = Array.isArray(data) ? data[0] : data;

function getStructure(obj, prefix = "") {
  for (const [key, value] of Object.entries(obj)) {
    const path = prefix ? `${prefix}.${key}` : key;
    const type = Array.isArray(value) ? "array" : value === null ? "null" : typeof value;
    console.log(`${path}: ${type}`);
    if (type === "object") getStructure(value, path);
    if (type === "array" && value.length > 0 && typeof value[0] === "object") {
      getStructure(value[0], `${path}[]`);
    }
  }
}

getStructure(sample);
```

This tells you the depth and shape of your data before you commit to a flattening strategy.
You rarely need every field. API responses are verbose. A Stripe charge object has 40+ fields. Your spreadsheet probably needs 8 of them. Pre-filtering before conversion saves time and makes the output usable.
Choose your flattening depth, delimiter, and encoding based on the recipient. If it's for you, TSV with type inference. If it's for Excel users, CSV with BOM and proper quoting.
Always spot-check the output. Open it in a text editor (not Excel — Excel silently reformats data) and verify:

- the row count matches your record count
- no `[object Object]` anywhere
- special characters and long text fields survived intact

I learned this the hard way when a converter silently truncated long text fields at 255 characters. It took three days before anyone noticed the truncated descriptions.
I tested the most popular online converters with a real-world dataset: 1,000 records from an API response with nested objects, arrays, null values, Unicode characters, and fields containing commas and newlines.
| Feature | Most Free Sites | Server-Upload Tools | Browser-Based Converters |
|---|---|---|---|
| File size limit | 1-5 MB | 50-100 MB | Unlimited (RAM only) |
| Nested object handling | [object Object] | Basic flattening | Configurable depth + strategy |
| Array handling | Broken or ignored | JSON string in cell | Explode or serialize (your choice) |
| Null vs empty | No distinction | Sometimes | Configurable |
| Unicode/UTF-8 | Hit or miss | Usually OK | BOM option available |
| Delimiter options | Comma only | Comma, tab | Comma, tab, semicolon, custom |
| Privacy | File uploaded to server | File uploaded to server | Everything local, nothing leaves your machine |
| Type inference (CSV→JSON) | Everything is a string | Basic (numbers, booleans) | Configurable per-column |
| Streaming large files | Crashes or timeout | Server-limited | Streams in browser |
| Cost | Free with limits | Free tier + paid | Completely free |
The browser-based category is the clear winner for most use cases. Your data never leaves your machine, there are no file size limits beyond your available RAM, and the conversion happens instantly because there's no upload/download round trip.
If a JSON string value contains \n (which is common in descriptions, comments, and addresses), and your CSV doesn't properly quote that field, the newline will be interpreted as a row break. Your CSV will have phantom rows that break every parser.
Always quote fields that contain newlines. Better yet, use a converter that does this automatically.
JSON object key order is not guaranteed by the spec. So if you're generating CSV headers from the first object and then mapping subsequent objects by position, you'll get misaligned columns when the key order changes.
Always map by key name, not position.
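A sketch of key-based row mapping: each cell is looked up by header name, so the key order of individual records can vary freely.

```javascript
// Build a row by header lookup, never by Object.keys position.
function rowFor(headers, record) {
  return headers.map((h) => (h in record ? String(record[h]) : ""));
}

// Key order differs between records; output stays aligned.
const headers = ["name", "age"];
console.log(rowFor(headers, { name: "Alice", age: 30 })); // → ["Alice", "30"]
console.log(rowFor(headers, { age: 25, name: "Bob" }));   // → ["Bob", "25"]
```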
Excel loves to "help" you by auto-formatting data:

- `"001234"` becomes `1234` (leading zeros stripped)
- `"3/23"` becomes `23-Mar` (interpreted as a date)
- `"1E10"` becomes `10000000000` (interpreted as scientific notation)
- `"9007199254740993"` loses precision (exceeds the JavaScript Number limit)

If your data has IDs, zip codes, phone numbers, or codes that look like numbers or dates, warn your users or provide the CSV in a format that Excel won't auto-mangle (like a .xlsx file with explicit text formatting).
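The precision loss on the JavaScript side is easy to demonstrate in the console:

```javascript
// Past Number.MAX_SAFE_INTEGER (2^53 - 1), integers round silently.
const id = "9007199254740993";
console.log(Number(id));              // 9007199254740992 (off by one)
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991
console.log(BigInt(id).toString());   // "9007199254740993" (exact)
```

If large IDs matter, keep them as strings end to end, or use BigInt when you must do arithmetic.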
Valid JSON can technically have duplicate keys in the same object (though it's discouraged):

```json
{ "name": "Alice", "name": "Bob" }
```

Most parsers keep the last value. But when converting to CSV, if you're building headers from all keys, you might end up with two "name" columns — or silently lose one value. Validate your input.
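`JSON.parse` is one of the last-wins parsers, which you can verify directly:

```javascript
// JSON.parse silently keeps the last duplicate key.
const parsed = JSON.parse('{ "name": "Alice", "name": "Bob" }');
console.log(parsed.name);                // "Bob"
console.log(Object.keys(parsed).length); // 1 (the first value is gone)
```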
This sounds obvious, but I've seen converters that output data-only CSV without a header row, or that include the header row in the data count (so "export 100 rows" actually gives you 99 data rows + 1 header).
Always verify the output count matches your expectations.
If you have any control over the JSON format, consider using JSON Lines (NDJSON) instead:

```
{"name":"Alice","age":30}
{"name":"Bob","age":25}
{"name":"Charlie","age":35}
```

One JSON object per line. No wrapping array. No commas between records.

This format is:

- streamable: process one record at a time
- appendable: add records without rewriting the file
- greppable: `grep "Alice" data.jsonl` just works

If you're the one generating the JSON (from your API, your database export, your script), NDJSON is almost always the better choice for large datasets. Many modern tools support it natively.
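Converting between a JSON array and NDJSON takes a few lines; `toNDJSON`/`fromNDJSON` are my own helper names:

```javascript
// JSON array → NDJSON: one serialized record per line.
function toNDJSON(records) {
  return records.map((r) => JSON.stringify(r)).join("\n");
}

// NDJSON → JSON array, skipping blank lines.
function fromNDJSON(text) {
  return text
    .split("\n")
    .filter((line) => line.trim() !== "")
    .map((line) => JSON.parse(line));
}
```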
A minimal flattener:

```javascript
function flatten(obj, prefix = "", result = {}) {
  for (const [key, value] of Object.entries(obj)) {
    const newKey = prefix ? `${prefix}.${key}` : key;
    if (value && typeof value === "object" && !Array.isArray(value)) {
      flatten(value, newKey, result);
    } else if (Array.isArray(value)) {
      // Serialize arrays as JSON strings
      result[newKey] = JSON.stringify(value);
    } else {
      result[newKey] = value;
    }
  }
  return result;
}
```

Escaping a single CSV value per RFC 4180:

```javascript
function escapeCSV(value) {
  if (value === null || value === undefined) return "";
  const str = String(value);
  // Quote if contains comma, quote, newline, or leading/trailing whitespace
  if (/[,"\n\r]/.test(str) || str !== str.trim()) {
    return '"' + str.replace(/"/g, '""') + '"';
  }
  return str;
}
```

Putting it together, JSON to CSV:

```javascript
function jsonToCSV(data, delimiter = ",") {
  if (!data.length) return "";
  // Collect all keys across all objects
  const keys = [...new Set(data.flatMap((obj) => Object.keys(flatten(obj))))];
  const header = keys.map(escapeCSV).join(delimiter);
  const rows = data.map((obj) => {
    const flat = flatten(obj);
    return keys.map((key) => escapeCSV(flat[key])).join(delimiter);
  });
  return "\uFEFF" + header + "\n" + rows.join("\n");
}
```

And the reverse direction, CSV to JSON:

```javascript
function csvToJSON(csvString, delimiter = ",") {
  const lines = parseCSVLines(csvString); // Use a proper parser, not split('\n')
  const headers = lines[0];
  return lines.slice(1).map((values) => {
    const obj = {};
    headers.forEach((header, i) => {
      const value = values[i] || "";
      // Basic type inference
      if (value === "") obj[header] = null;
      else if (value === "true") obj[header] = true;
      else if (value === "false") obj[header] = false;
      else if (!isNaN(value) && value.trim() !== "") obj[header] = Number(value);
      else obj[header] = value;
    });
    return obj;
  });
}
```

Note: Don't write your own CSV parser for production use. CSV parsing is deceptively complex (quoted fields with embedded newlines, escaped quotes within quotes, UTF-8 BOM handling). Use a battle-tested parser.
Not every situation calls for conversion. Sometimes the answer is "just use JSON."
| Scenario | Best Format | Why |
|---|---|---|
| Sending data to a spreadsheet user | CSV | They can double-click and it opens |
| API communication | JSON | Native to HTTP, preserves types |
| Database import | CSV or TSV | Most DB import tools prefer flat formats |
| Configuration files | JSON | Hierarchy matters for config |
| Log analysis | NDJSON | Streamable, greppable |
| Data exchange between services | JSON | Schema validation, nested structures |
| Reporting/dashboards | CSV | BI tools love flat data |
| Machine learning datasets | CSV | Most ML libraries expect tabular data |
| Archival/backup | JSON | Preserves full structure |
Something I feel strongly about: your data conversion should happen locally.
When you paste a JSON response from your production API into an online converter, that data — which might contain customer emails, transaction amounts, internal IDs — gets sent to someone else's server. Even if they promise to delete it, even if they claim HTTPS encryption, you've still transmitted potentially sensitive data over the internet to a third party.
Browser-based converters that process everything client-side eliminate this risk entirely. The JSON you paste in and the CSV you download never leave your machine. Open your browser's network tab and verify: zero HTTP requests during conversion.
For enterprise data, this isn't just a nice-to-have — it's a compliance requirement. GDPR, HIPAA, SOC 2, and basically every data protection framework has something to say about transmitting data to unvetted third parties.
After testing dozens of converters, here's my checklist for a good one: configurable flattening (depth and separator), proper RFC 4180 quoting, a UTF-8 BOM option for Excel, explicit handling of null versus empty string, delimiter choices, streaming for large files, and fully client-side processing.
If you've been fighting with converter tools that choke on nested data, mangle your encoding, or quietly upload your files to unknown servers, it might be time to switch. Browser-based tools have gotten remarkably good. In fact, I've built a whole collection of over 200 converter tools — including JSON↔CSV, XML, YAML, TSV, and more — that run entirely in your browser at akousa.net. No uploads, no limits, no catch. Give them a try if your current workflow is causing headaches.
But whatever tool you use, understand the edge cases first. The five minutes you spend choosing the right flattening strategy and encoding settings will save you hours of debugging downstream.
Data conversion isn't glamorous. But getting it right is the difference between a clean dataset and a spreadsheet full of [object Object].