Format, validate, and pretty print JSON data instantly in your browser. Learn JSON syntax rules, common errors, and tips for working with JSON APIs.
You just got a 47KB JSON response from an API. It is a single line. No indentation, no line breaks, no mercy. You need to find a single boolean field buried three levels deep in a nested array of objects. You could squint at it in your terminal, scrolling sideways for eternity. Or you could paste it into a JSON formatter, hit a button, and see the entire structure laid out in front of you in under a second.
If you have ever worked with APIs, webhooks, configuration files, or any form of structured data exchange on the web, you have formatted JSON. Probably dozens of times today. Pretty printing JSON is one of those tasks that sounds trivial until you realize how much of your debugging and development workflow depends on it.
This guide covers everything you need to know about JSON formatting and validation: the spec itself, how pretty printing works under the hood, the most common syntax errors that trip people up, how to validate JSON properly, and practical techniques for working with complex nested structures. Whether you are a backend developer parsing API responses, a frontend engineer debugging state objects, or someone who just received a JSON file and needs to make sense of it, this is the reference you will keep coming back to.
JSON stands for JavaScript Object Notation. Douglas Crockford formalized the spec in the early 2000s, and it is defined in two RFCs: RFC 7159 (2014) and its successor RFC 8259 (2017). Despite the name, JSON is language-independent. Every major programming language has a JSON parser. It is the de facto standard for data interchange on the web.
The entire JSON specification fits on a single printed page. That simplicity is its greatest strength. There are exactly six data types:
- Object: { "key": value } pairs
- Array: [ value, value ] ordered lists
- String: "hello world"
- Number: 42, 3.14, -17, 2.998e8
- Boolean: true or false (lowercase, always)
- Null: null (lowercase)

That is the entire language. No comments, no functions, no date type, no undefined, no single quotes. This minimal surface area is why JSON is so portable and why parsers can be fast and reliable.
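These types map directly onto native types in every mainstream language. In Python's json module, for example:

```python
import json

# Each JSON type deserializes to a predictable Python built-in
doc = json.loads('{"s": "hi", "i": 42, "f": 2.998e8, "b": true, "x": null, "a": [1, 2]}')

print(type(doc).__name__)       # dict  (JSON object)
print(type(doc["s"]).__name__)  # str   (string)
print(type(doc["i"]).__name__)  # int   (number without fraction/exponent)
print(type(doc["f"]).__name__)  # float (number with fraction or exponent)
print(doc["b"], doc["x"])       # True None  (true -> bool, null -> None)
print(type(doc["a"]).__name__)  # list  (array)
```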
Here is a valid JSON document that uses every data type:
{
"name": "Alice Chen",
"age": 34,
"isActive": true,
"address": {
"street": "742 Evergreen Terrace",
"city": "Portland",
"state": "OR",
"zip": "97201"
},
"skills": ["Python", "SQL", "Kubernetes", "Terraform"],
"certifications": null,
"salary": 128500.00,
"projects": [
{
"name": "Data Pipeline v2",
"status": "completed",
"teamSize": 5
},
{
"name": "ML Inference Service",
"status": "in-progress",
"teamSize": 3
}
]
}

Clean, readable, self-documenting. That is what pretty-printed JSON looks like. Now imagine all of that on a single line with no whitespace. That is what your API sends back.
Pretty printing is the process of taking minified (compact) JSON and adding whitespace, indentation, and line breaks to make it human-readable. The data does not change. The structure does not change. You are adding cosmetic formatting only.
Here is the same data, minified:
{"name":"Alice Chen","age":34,"isActive":true,"address":{"street":"742 Evergreen Terrace","city":"Portland","state":"OR","zip":"97201"},"skills":["Python","SQL","Kubernetes","Terraform"],"certifications":null,"salary":128500.00,"projects":[{"name":"Data Pipeline v2","status":"completed","teamSize":5},{"name":"ML Inference Service","status":"in-progress","teamSize":3}]}

Both are semantically identical. A JSON parser will produce the exact same data structure from either version. But only one of them is something a human can actually read.
The pretty printing algorithm is straightforward:

- After every opening { or [, emit a line break and increase the indent level by one.
- After every comma, emit a line break at the current indent level.
- Before every closing } or ], emit a line break and decrease the indent level by one.
- Put a space after every colon.
- Never touch anything inside a string: braces and commas between quotes are data, not structure.
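A minimal character-walking formatter makes the algorithm concrete. This is an illustrative sketch only — it assumes the input is already valid JSON and does not special-case empty objects or arrays:

```python
def pretty_print(raw: str, indent: int = 2) -> str:
    """Minimal pretty printer: walk characters, tracking depth and strings."""
    out = []
    depth = 0
    in_string = False
    escaped = False
    for ch in raw:
        if in_string:
            out.append(ch)
            if escaped:
                escaped = False          # this char was escaped; reset
            elif ch == "\\":
                escaped = True           # next char is escaped
            elif ch == '"':
                in_string = False        # unescaped quote closes the string
            continue
        if ch == '"':
            in_string = True
            out.append(ch)
        elif ch in "{[":                 # open: newline, indent one level deeper
            depth += 1
            out.append(ch + "\n" + " " * (indent * depth))
        elif ch in "}]":                 # close: dedent first, then newline
            depth -= 1
            out.append("\n" + " " * (indent * depth) + ch)
        elif ch == ",":                  # comma: newline at current level
            out.append(",\n" + " " * (indent * depth))
        elif ch == ":":
            out.append(": ")             # space after colons
        elif ch in " \t\n\r":
            continue                     # discard existing inter-token whitespace
        else:
            out.append(ch)               # part of a number, true/false/null
    return "".join(out)

print(pretty_print('{"name":"Alice","skills":["Python","SQL"]}'))
```

A production formatter parses first and re-serializes; this version exists only to make the whitespace rules visible.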
In JavaScript, you can pretty print JSON in one line:
const formatted = JSON.stringify(data, null, 2);

The third argument is the indentation: 2 means two spaces per level. You can also pass "\t" for tabs. In Python:
import json
formatted = json.dumps(data, indent=2, ensure_ascii=False)

On the command line with jq:
echo '{"name":"Alice","age":34}' | jq .

These all produce the same result: nicely indented, readable JSON. But there is a crucial distinction between pretty printing and validation. Pretty printing assumes the input is already valid JSON. If it is not, the parser will throw an error. That is where validation comes in.
Validating JSON means confirming that a string conforms to the JSON specification. A validator will catch structural problems that would cause a parser to fail. But a good validator goes further than a simple parse check — it tells you exactly where the problem is and what you need to fix.
Here is valid JSON:
{
"status": "ok",
"count": 42
}

Here is invalid JSON that looks almost identical:
{
status: "ok",
count: 42
}

The keys are not wrapped in double quotes. This is valid JavaScript object syntax, but it is not valid JSON. This single distinction causes more confusion than any other JSON rule.
A proper validator will point to the exact character position where the error occurs: "Expected string at position 4, found s." That precision is what saves you from staring at a 500-line payload trying to figure out what went wrong.
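Python's standard library already reports this level of detail — json.JSONDecodeError carries the message, line, column, and character offset:

```python
import json

def validate(raw: str) -> tuple[bool, str]:
    """Parse-check a string; on failure, pinpoint the offending position."""
    try:
        json.loads(raw)
        return True, "Valid JSON"
    except json.JSONDecodeError as e:
        return False, f"{e.msg} at line {e.lineno}, column {e.colno} (char {e.pos})"

print(validate('{"status": "ok"}'))
print(validate('{\n  status: "ok"\n}'))  # unquoted key -> precise error location
```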
A complete JSON validator verifies:
- Balanced delimiters: every { has a }, every [ has a ].
- Legal value types: no undefined, no functions, no regex literals.
- Legal number formats: no leading zeros (except 0.x), no hex notation, no NaN or Infinity.
- Legal string escapes: only \", \\, \/, \b, \f, \n, \r, \t, and \uXXXX.

After years of debugging JSON payloads, these are the errors I see over and over again. Each one includes the broken JSON, what the validator reports, and the corrected version.
This is the number one JSON error across the entire internet. JavaScript allows trailing commas. JSON does not.
{
"name": "Alice",
"age": 34,
}

Error: Unexpected token } at position 33. Expected value after comma.
Fix: Remove the comma after the last element.
{
"name": "Alice",
"age": 34
}

The same applies to arrays:
{
"colors": ["red", "green", "blue",]
}

Remove the trailing comma after "blue".
Python developers hit this constantly. Python's str() representation of a dict uses single quotes by default.
{
'name': 'Alice',
'age': 34
}

Error: Expected double-quoted string at position 4.
Fix: Replace all single quotes with double quotes. Always use json.dumps() in Python, never str().
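The difference is easy to see in Python — str() emits repr syntax with single quotes, while json.dumps() emits valid JSON:

```python
import json

user = {"name": "Alice", "age": 34}

print(str(user))         # {'name': 'Alice', 'age': 34}  -- repr, not JSON
print(json.dumps(user))  # {"name": "Alice", "age": 34}  -- valid JSON
```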
{
"name": "Alice",
"age": 34
}

Valid in JavaScript, invalid in JSON. This one catches people who write a lot of JS and assume JSON follows the same rules.
{
name: "Alice",
age: 34
}

Error: Expected string at position 4.
Fix: Wrap every key in double quotes.
{
"name": "Alice",
"age": 34
}

Easy to miss when you are manually editing JSON. You add a new field and forget the comma on the line above.
{
"name": "Alice"
"age": 34
}

Error: Expected comma or } at position 22, found ".
Fix: Add a comma after "Alice".
{
"name": "Alice",
"age": 34
}

JSON does not support comments. Not single-line, not multi-line, not any kind. This is a deliberate design decision by Crockford. JSONC (JSON with Comments) exists as an extension used by VS Code and TypeScript configs, but it is not valid JSON.
{
"name": "Alice",
// This is the user's age
"age": 34
}

Error: Unexpected token / at position 28.
Fix: Remove the comment entirely. If you need metadata, add it as a data field:
{
"name": "Alice",
"age": 34,
"_comment": "age is in years since birth"
}

These are JavaScript values, not JSON values.
{
"name": "Alice",
"middleName": undefined,
"score": NaN,
"limit": Infinity
}

Error: Unexpected token u at position 34.
Fix: Use null for undefined values. Represent special numbers as strings or null.
{
"name": "Alice",
"middleName": null,
"score": null,
"limit": "Infinity"
}

Newlines, tabs, and backslashes inside strings must be escaped.
{
"path": "C:\new\folder",
"message": "Line 1
Line 2"
}

Error: Invalid control character at the literal line break inside "Line 1. Worse, the path does not even produce an error: \n and \f happen to be valid JSON escapes, so "C:\new\folder" silently parses as C:, a newline, ew, a form feed, and older instead of the intended path.
Fix: Escape backslashes and use \n for literal newlines.
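The safest fix is to never hand-write such strings at all — build the data in code and let the serializer produce the escapes:

```python
import json

# A raw string keeps the backslashes literal; json.dumps doubles them,
# and turns the real newline into the two-character sequence \n.
doc = {"path": r"C:\new\folder", "message": "Line 1\nLine 2"}
print(json.dumps(doc, indent=2))
```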
{
"path": "C:\\new\\folder",
"message": "Line 1\nLine 2"
}

JSON does not allow leading zeros. 07 is not a valid JSON number. This catches people who work with zip codes, IDs, or zero-padded values.
{
"zipCode": 07201,
"id": 003
}

Error: Unexpected number format at position 15.
Fix: Remove leading zeros, or use strings if the leading zero is significant.
{
"zipCode": "07201",
"id": 3
}

Especially common in deeply nested structures. You open with { but close with ], or you have an extra closing brace somewhere.
{
"users": [
{
"name": "Alice",
"roles": ["admin", "editor"]
}
}
}

Error: Expected ] at position 78, found }.
Fix: Match your brackets. The users array should close with ].
{
"users": [
{
"name": "Alice",
"roles": ["admin", "editor"]
}
]
}

The JSON spec does not define behavior for duplicate keys — RFC 8259 only says object names "SHOULD be unique," leaving the rest to the parser. Most parsers silently use the last value, which can lead to subtle bugs that are extremely hard to track down.
{
"name": "Alice",
"age": 34,
"name": "Bob"
}

This parses without error, but the name field will be "Bob" in most parsers. A good validator will warn you about duplicate keys even though the document is technically parseable.
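Python's json module demonstrates both behaviors — last-value-wins by default, with object_pairs_hook available to catch duplicates before they collapse into a dict:

```python
import json

raw = '{"name": "Alice", "age": 34, "name": "Bob"}'
print(json.loads(raw)["name"])  # Bob -- the last value silently wins

def reject_duplicates(pairs):
    """Hook that sees every (key, value) pair before dict construction."""
    keys = [k for k, _ in pairs]
    duplicates = sorted({k for k in keys if keys.count(k) > 1})
    if duplicates:
        raise ValueError(f"Duplicate keys: {duplicates}")
    return dict(pairs)

json.loads('{"name": "Alice", "age": 34}', object_pairs_hook=reject_duplicates)  # fine
# json.loads(raw, object_pairs_hook=reject_duplicates)  # raises ValueError
```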
Fix: Remove the duplicate.
{
"name": "Bob",
"age": 34
}

Real-world JSON from APIs is rarely flat. You will encounter responses with four, five, or six levels of nesting. Pretty printing is essential, but you also need strategies for navigating large structures.
Here is a realistic API response:
{
"data": {
"user": {
"id": "usr_a1b2c3",
"profile": {
"firstName": "Alice",
"lastName": "Chen",
"avatar": {
"url": "https://cdn.example.com/avatars/a1b2c3.jpg",
"dimensions": {
"width": 256,
"height": 256
}
},
"preferences": {
"theme": "dark",
"language": "en",
"notifications": {
"email": true,
"push": false,
"sms": false,
"frequency": "daily",
"categories": {
"marketing": false,
"product": true,
"security": true,
"billing": true
}
}
}
},
"subscription": {
"plan": "pro",
"status": "active",
"billingCycle": "annual",
"currentPeriod": {
"start": "2026-01-15T00:00:00Z",
"end": "2027-01-15T00:00:00Z"
},
"features": [
{
"name": "api-access",
"enabled": true,
"limit": 10000,
"used": 3847
},
{
"name": "storage",
"enabled": true,
"limit": 53687091200,
"used": 12884901888
},
{
"name": "team-members",
"enabled": true,
"limit": 25,
"used": 12
}
]
}
}
},
"meta": {
"requestId": "req_x9y8z7",
"timestamp": "2026-03-27T14:30:00Z",
"version": "v2"
}
}

That is nearly 70 lines of formatted JSON. Without pretty printing, it would be a single unreadable line. With formatting, you can immediately see the hierarchy: data.user.profile.preferences.notifications.categories.marketing is false. You can trace the path visually because of the indentation.
When you are dealing with JSON that is hundreds or thousands of lines long, simple pretty printing is not enough. Here are techniques that help:
Collapse sections: A good JSON formatter lets you collapse objects and arrays to see just the top-level structure. You can then expand only the sections you care about.
Path notation: Some formatters show you the JSONPath or dot-notation path to the element you are hovering over. Instead of counting braces, you see data.user.subscription.features[1].used directly.
Search and filter: Finding a specific key in a 2000-line JSON document by scrolling is impractical. Search by key name, value, or path.
Tree view: An alternative to indented text. Tree views show each node as a collapsible tree element, which works especially well for deeply nested structures.
Diffing: When you are comparing two JSON responses (before and after a change, for example), a JSON diff tool will highlight exactly which values changed, which keys were added, and which were removed.
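Path notation is simple to implement. Here is a hypothetical get_path helper (the name and behavior are illustrative, not a library API) that resolves dot/bracket paths like the ones a formatter displays:

```python
import re

def get_path(doc, path: str):
    """Resolve a path like 'data.user.features[1].used' against parsed JSON."""
    # Tokens are either a bracketed array index like [1] or a bare key
    for part in re.findall(r"\[\d+\]|[^.\[\]]+", path):
        if part.startswith("["):
            doc = doc[int(part[1:-1])]   # array index
        else:
            doc = doc[part]              # object key
    return doc

response = {"data": {"user": {"subscription": {"features": [
    {"name": "api-access", "used": 3847},
    {"name": "storage", "used": 12884901888},
]}}}}
print(get_path(response, "data.user.subscription.features[1].used"))  # 12884901888
```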
Beyond syntax validation, there is structural validation: does this JSON document match the shape you expect? JSON Schema is the standard for this.
A JSON Schema defines the expected structure:
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"type": "object",
"required": ["name", "email", "age"],
"properties": {
"name": {
"type": "string",
"minLength": 1,
"maxLength": 100
},
"email": {
"type": "string",
"format": "email"
},
"age": {
"type": "integer",
"minimum": 0,
"maximum": 150
},
"role": {
"type": "string",
"enum": ["admin", "editor", "viewer"]
}
},
"additionalProperties": false
}

This schema says: the JSON must be an object with required name, email, and age fields. name must be a non-empty string under 100 characters. email must be a valid email format. age must be an integer between 0 and 150. role, if present, must be one of three specific strings. No other fields are allowed.
Given this schema, this document is valid:
{
"name": "Alice Chen",
"email": "alice@example.com",
"age": 34,
"role": "admin"
}

And this one is not:
{
"name": "",
"email": "not-an-email",
"age": -5,
"role": "superadmin",
"extra": true
}

The validator would report five separate violations: name below minimum length, email not matching format, age below minimum, role not in the allowed enum, and extra not allowed by additionalProperties: false.
JSON Schema is invaluable for API contract testing, form validation, and configuration file validation.
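Libraries such as the third-party jsonschema package interpret schemas like this directly. To make the checks concrete without a dependency, here is a stdlib-only sketch that hard-codes the same rules — the validate_user helper and its error strings are illustrative, not part of any library:

```python
import re

ALLOWED_ROLES = ("admin", "editor", "viewer")
ALLOWED_FIELDS = {"name", "email", "age", "role"}

def validate_user(doc: dict) -> list[str]:
    """Hand-coded version of the schema's rules; returns all violations."""
    errors = []
    name = doc.get("name")
    if not (isinstance(name, str) and 1 <= len(name) <= 100):
        errors.append("name: required string of 1-100 characters")
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", str(doc.get("email", ""))):
        errors.append("email: not a valid email format")
    age = doc.get("age")
    if not (isinstance(age, int) and not isinstance(age, bool) and 0 <= age <= 150):
        errors.append("age: required integer between 0 and 150")
    if "role" in doc and doc["role"] not in ALLOWED_ROLES:
        errors.append("role: not one of admin/editor/viewer")
    errors += [f"{k}: not allowed" for k in doc if k not in ALLOWED_FIELDS]
    return errors

good = {"name": "Alice Chen", "email": "alice@example.com", "age": 34, "role": "admin"}
bad = {"name": "", "email": "not-an-email", "age": -5, "role": "superadmin", "extra": True}
print(validate_user(good))       # []
print(len(validate_user(bad)))   # 5
```

The advantage of a real schema over hand-coded checks is that the rules live in data, can be shared between client and server, and can validate themselves against the meta-schema.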
Understanding when JSON is the right choice (and when it is not) helps you make better architectural decisions.
Compare the same record in XML:

<user>
<name>Alice Chen</name>
<age>34</age>
<active>true</active>
</user>

And in JSON:

{
"name": "Alice Chen",
"age": 34,
"active": true
}

JSON is less verbose, easier to parse, and maps directly to programming language data structures. XML has attributes, namespaces, and schemas (XSD) that give it more expressive power, but at the cost of complexity. For web APIs, JSON won decisively.
The same record in YAML:

name: Alice Chen
age: 34
active: true
skills:
- Python
- SQL

YAML is a superset of JSON that is more human-readable. It supports comments, multi-line strings, and anchors/aliases. But YAML's significant whitespace makes it error-prone (one wrong indent breaks everything), and it has well-documented security issues with arbitrary code execution in some parsers. JSON is safer and more predictable for data interchange. YAML is often preferred for configuration files where humans edit the content directly.
CSV works well for flat, tabular data. JSON works well for nested, hierarchical data. If your data has a consistent set of columns and no nesting, CSV is more compact and easier to load into spreadsheets. If you have mixed-type fields, optional properties, or nested objects, JSON is the only practical choice.
Here are JSON patterns you will encounter constantly when working with APIs.
A paginated list response:

{
"data": [
{ "id": 1, "name": "Item A" },
{ "id": 2, "name": "Item B" },
{ "id": 3, "name": "Item C" }
],
"pagination": {
"page": 1,
"perPage": 20,
"totalPages": 15,
"totalItems": 294
}
}

A structured error response:

{
"error": {
"code": "VALIDATION_ERROR",
"message": "Request validation failed",
"details": [
{
"field": "email",
"message": "Must be a valid email address",
"value": "not-an-email"
},
{
"field": "age",
"message": "Must be a positive integer",
"value": -5
}
]
}
}

A response envelope with a success flag and metadata:

{
"success": true,
"data": {
"user": {
"id": "usr_123",
"name": "Alice"
}
},
"meta": {
"requestId": "req_abc",
"responseTime": 45
}
}

A batch operation request:

{
"operations": [
{ "action": "create", "resource": "user", "data": { "name": "Alice" } },
{ "action": "update", "resource": "user", "id": "usr_456", "data": { "name": "Bob" } },
{ "action": "delete", "resource": "user", "id": "usr_789" }
]
}

Being able to quickly format, read, and validate these patterns is a core part of API development. When you get back a 400 error and the response body is a minified JSON string, the first thing you do is format it to see the error details.
In production, JSON should almost always be minified. The whitespace in pretty-printed JSON adds significant bytes over the wire.
Consider a moderately complex API response. Pretty-printed with 2-space indentation, it might be 4.2 KB. Minified, the same data is 2.8 KB. That is a 33% reduction. Over millions of API calls, that difference translates to real bandwidth costs and latency.
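The effect is easy to measure in Python. Exact numbers depend on the data (this payload is synthetic), but the pattern holds — separators=(",", ":") removes the spaces json.dumps otherwise inserts:

```python
import json

# Synthetic payload; real savings depend on key lengths and nesting depth
data = {"users": [{"id": i, "name": f"user{i}", "active": True} for i in range(100)]}

pretty = json.dumps(data, indent=2)
minified = json.dumps(data, separators=(",", ":"))

print(len(pretty), len(minified))                  # minified is substantially smaller
print(json.loads(pretty) == json.loads(minified))  # same data either way
```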
However, there are cases where pretty-printed JSON is appropriate in production:
- Debug and introspection endpoints: a /debug/config endpoint that returns the current configuration, intended to be read by humans.

The rule of thumb: minified for machine-to-machine communication, pretty-printed for human consumption.
If you need a fast, reliable JSON formatter that runs entirely in your browser, akousa.net has a free JSON formatter and validator tool. Paste your JSON, and it will instantly pretty print it, validate it, and highlight any errors with their exact line and position. No data leaves your browser, there are no file size limits worth worrying about, and you do not need to create an account or install anything.
It handles all the use cases covered in this post: formatting minified API responses, catching trailing commas and missing quotes, navigating nested structures, and switching between different indentation levels. If you work with JSON regularly, bookmark it and save yourself the five seconds of googling every time.
Here is every rule of JSON syntax in one place, for easy reference.
Objects:
- Enclosed in { }
- Keys are strings in double quotes
- Key and value separated by :
- Pairs separated by ,

Arrays:
- Enclosed in [ ]
- Elements separated by ,

Strings:
- Enclosed in " " (double quotes only)
- Valid escapes: \", \\, \/, \b, \f, \n, \r, \t
- Unicode escapes: \u followed by four hex digits

Numbers:
- Optional leading minus; no leading zeros (except 0 or 0.x)
- Optional fraction (decimal point and digits)
- Optional exponent (e or E, optional + or -, digits)
- No NaN, Infinity, 0x hex, or 0o octal

Booleans: true or false (lowercase only)
Null: null (lowercase only)
Whitespace: Spaces, tabs, newlines, and carriage returns are allowed between any tokens and are ignored by parsers.
No comments: JSON has no comment syntax of any kind.
Encoding: UTF-8 is the standard encoding. The document should not begin with a BOM (byte order mark), though many parsers tolerate one.
JSON is the language of data on the web. Whether you are debugging an API response at 2 AM, writing a configuration file, or building a data pipeline, you will interact with JSON every single day. Knowing the spec cold, recognizing common errors instantly, and having a fast formatter in your toolkit are not optional skills for a modern developer. They are foundational.
Pretty printing transforms unreadable data into structured, navigable information. Validation catches errors before they become bugs in production. And understanding the nuances of JSON syntax, from the trailing comma rule to the double-quote requirement to the prohibition on comments, will save you hours of frustration over the course of your career.
Keep a good JSON formatter bookmarked. Use it without thinking about it. And the next time you get back a wall of minified JSON from an API, remember: the data is already there, neatly structured and logically organized. It just needs a few line breaks to reveal itself.