I keep a SQL cheat sheet taped to the wall next to my monitor. Not because I cannot remember how GROUP BY works — I have written thousands of queries — but because SQL has enough subtle syntax variations that looking things up is faster than guessing wrong and debugging for ten minutes.
This is that cheat sheet, expanded into something more useful than my scribbled sticky notes. Every example here runs against a standard relational database. I have tested them against PostgreSQL and MySQL, and noted where syntax differs. If you are just getting started with SQL or you want a quick reference you can bookmark, this is for you.
If you want to try these queries live, the Code Playground supports SQL execution right in your browser. And if you need to clean up messy queries, the SQL Formatter will handle indentation and capitalization for you.
Everything starts with SELECT. You are telling the database what columns you want back.
-- Get all columns from a table
SELECT * FROM users;
-- Get specific columns
SELECT first_name, last_name, email FROM users;
-- Alias columns for readability
SELECT
first_name AS "First Name",
last_name AS "Last Name",
created_at AS "Signup Date"
FROM users;

WHERE filters rows. Think of it as the "only show me rows where this condition is true" clause.
-- Exact match
SELECT * FROM users WHERE status = 'active';
-- Comparison operators
SELECT * FROM products WHERE price > 50.00;
SELECT * FROM orders WHERE created_at >= '2026-01-01';
-- Multiple conditions
SELECT * FROM users
WHERE status = 'active'
AND created_at >= '2026-01-01';
-- OR conditions
SELECT * FROM products
WHERE category = 'electronics'
OR category = 'accessories';
-- IN shorthand (cleaner than multiple ORs)
SELECT * FROM products
WHERE category IN ('electronics', 'accessories', 'gadgets');
-- Pattern matching with LIKE
SELECT * FROM users WHERE email LIKE '%@gmail.com';
SELECT * FROM products WHERE name LIKE 'Pro%';
-- NULL checks (= will not work for NULL)
SELECT * FROM users WHERE deleted_at IS NULL;
SELECT * FROM users WHERE phone IS NOT NULL;
-- Range with BETWEEN
SELECT * FROM orders
WHERE total BETWEEN 100 AND 500;

| Operator | Description | Example |
|---|---|---|
| = | Equal | WHERE status = 'active' |
| <> or != | Not equal | WHERE role <> 'admin' |
| >, < | Greater/less than | WHERE price > 100 |
| >=, <= | Greater/less or equal | WHERE age >= 18 |
| BETWEEN | Inclusive range | WHERE id BETWEEN 1 AND 100 |
| LIKE | Pattern match | WHERE name LIKE 'J%' |
| ILIKE | Case-insensitive pattern match (PostgreSQL) | WHERE name ILIKE 'john%' |
| IN | Match any in list | WHERE id IN (1, 2, 3) |
| IS NULL | Is null | WHERE email IS NULL |
| IS NOT NULL | Is not null | WHERE email IS NOT NULL |
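The NULL rows in that table trip people up constantly: NULL is not equal to anything, including itself, so `= NULL` silently matches nothing. Here is a quick, self-contained way to see it, using Python's built-in sqlite3 module and a made-up users table:

```python
import sqlite3

# In-memory database with a hypothetical users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, phone TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [("jane@example.com", "555-0100"), ("bob@example.com", None)],
)

# '= NULL' never matches: comparing anything to NULL yields NULL, not TRUE.
eq_null = conn.execute("SELECT * FROM users WHERE phone = NULL").fetchall()
print(len(eq_null))  # 0

# IS NULL is the correct check.
is_null = conn.execute("SELECT email FROM users WHERE phone IS NULL").fetchall()
print(is_null)  # [('bob@example.com',)]
```

The same three-valued logic is why `NOT IN` with a NULL in the list returns no rows at all.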
-- Sort ascending (default)
SELECT * FROM products ORDER BY price;
-- Sort descending
SELECT * FROM products ORDER BY price DESC;
-- Multiple sort columns
SELECT * FROM users ORDER BY last_name ASC, first_name ASC;
-- Limit results
SELECT * FROM products ORDER BY price DESC LIMIT 10;
-- Pagination with OFFSET
SELECT * FROM products
ORDER BY created_at DESC
LIMIT 20 OFFSET 40; -- Page 3 (20 items per page)

JOINs are where SQL gets powerful. You are connecting rows from different tables based on related columns. I have seen developers avoid JOINs because they seem intimidating, but they are just set operations. Once you visualize them, they are straightforward.
Returns only rows that have matching values in both tables. This is the most common JOIN and the default when you just write JOIN.
SELECT
orders.id AS order_id,
users.first_name,
users.email,
orders.total,
orders.created_at
FROM orders
INNER JOIN users ON orders.user_id = users.id
WHERE orders.total > 100
ORDER BY orders.created_at DESC;

Returns all rows from the left table, and matched rows from the right table. If there is no match, the right-side columns are NULL.
-- Find users who have never placed an order
SELECT
users.id,
users.email,
orders.id AS order_id
FROM users
LEFT JOIN orders ON users.id = orders.user_id
WHERE orders.id IS NULL;

-- RIGHT JOIN: all rows from right table, matched from left
SELECT
departments.name AS department,
employees.name AS employee
FROM employees
RIGHT JOIN departments ON employees.dept_id = departments.id;
-- FULL OUTER JOIN: all rows from both tables
-- (not supported in MySQL; emulate it with a UNION of LEFT and RIGHT JOINs)
SELECT
users.email,
subscriptions.plan
FROM users
FULL OUTER JOIN subscriptions ON users.id = subscriptions.user_id;

-- CROSS JOIN: every combination (Cartesian product)
SELECT colors.name, sizes.name
FROM colors
CROSS JOIN sizes;
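CROSS JOIN output size is simply the product of the two row counts, which is why an accidental CROSS JOIN on large tables is so painful. A throwaway check with Python's sqlite3 and invented colors/sizes tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE colors (name TEXT)")
conn.execute("CREATE TABLE sizes (name TEXT)")
conn.executemany("INSERT INTO colors VALUES (?)", [("red",), ("green",), ("blue",)])
conn.executemany("INSERT INTO sizes VALUES (?)", [("S",), ("L",)])

# Every color paired with every size: 3 x 2 = 6 combinations.
combos = conn.execute(
    "SELECT colors.name, sizes.name FROM colors CROSS JOIN sizes"
).fetchall()
print(len(combos))  # 6
```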
-- Self JOIN: join a table to itself
SELECT
e.name AS employee,
m.name AS manager
FROM employees e
LEFT JOIN employees m ON e.manager_id = m.id;

| JOIN Type | Returns |
|---|---|
| INNER JOIN | Only matching rows from both tables |
| LEFT JOIN | All rows from left + matches from right |
| RIGHT JOIN | All rows from right + matches from left |
| FULL OUTER JOIN | All rows from both tables |
| CROSS JOIN | Every combination of rows |
| Self JOIN | Table joined to itself |
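The table is easier to internalize with a tiny experiment. This sketch (Python's sqlite3, invented users/orders data) shows INNER JOIN dropping the unmatched user while LEFT JOIN keeps it with NULLs, which is exactly what the "users who never ordered" anti-join pattern relies on:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.execute("CREATE TABLE orders (id INTEGER, user_id INTEGER, total REAL)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "jane@example.com"), (2, "bob@example.com")])
conn.execute("INSERT INTO orders VALUES (1, 1, 120.0)")  # only Jane ordered

# INNER JOIN: Bob has no matching order row, so he disappears.
inner = conn.execute(
    "SELECT u.email FROM users u INNER JOIN orders o ON u.id = o.user_id"
).fetchall()
print(inner)  # [('jane@example.com',)]

# LEFT JOIN + IS NULL: keep unmatched users, then filter down to them.
never_ordered = conn.execute(
    """SELECT u.email FROM users u
       LEFT JOIN orders o ON u.id = o.user_id
       WHERE o.id IS NULL"""
).fetchall()
print(never_ordered)  # [('bob@example.com',)]
```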
Aggregation collapses multiple rows into summary rows. GROUP BY defines the groups, aggregate functions (COUNT, SUM, AVG, MIN, MAX) do the math.
-- Count orders per user
SELECT
user_id,
COUNT(*) AS order_count,
SUM(total) AS total_spent,
AVG(total) AS avg_order_value,
MAX(total) AS largest_order
FROM orders
GROUP BY user_id
ORDER BY total_spent DESC;
-- Count users by signup month
SELECT
DATE_TRUNC('month', created_at) AS month,
COUNT(*) AS signups
FROM users
GROUP BY DATE_TRUNC('month', created_at)
ORDER BY month;

HAVING filters groups after aggregation. It is the WHERE clause for aggregated data.
-- Users who have placed more than 5 orders
SELECT
user_id,
COUNT(*) AS order_count
FROM orders
GROUP BY user_id
HAVING COUNT(*) > 5;
-- Categories with average price above 100
SELECT
category,
AVG(price) AS avg_price,
COUNT(*) AS product_count
FROM products
GROUP BY category
HAVING AVG(price) > 100
ORDER BY avg_price DESC;

Common mistake: using WHERE when you mean HAVING. WHERE filters individual rows before grouping. HAVING filters groups after aggregation. If your condition involves an aggregate function, you need HAVING.
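You can watch the difference directly. In this sqlite3 sketch with invented orders data, WHERE removes small orders before grouping, while HAVING removes entire users after counting:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (user_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 10), (1, 200), (1, 300), (2, 250)])

# WHERE filters rows first: user 1's $10 order never reaches the grouping.
rows = sorted(conn.execute(
    """SELECT user_id, COUNT(*) FROM orders
       WHERE total > 100 GROUP BY user_id"""
).fetchall())
print(rows)  # [(1, 2), (2, 1)]

# HAVING filters the groups themselves: only users with 2+ orders survive.
groups = sorted(conn.execute(
    """SELECT user_id, COUNT(*) FROM orders
       GROUP BY user_id HAVING COUNT(*) >= 2"""
).fetchall())
print(groups)  # [(1, 3)]
```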
A subquery is a query nested inside another query. They are useful when you need to filter or compare against a computed result.
-- Find users whose orders exceed the average order total
SELECT * FROM users
WHERE id IN (
SELECT user_id FROM orders
WHERE total > (SELECT AVG(total) FROM orders)
);
-- Correlated subquery: reference outer query
SELECT
p.name,
p.price,
p.category
FROM products p
WHERE p.price > (
SELECT AVG(p2.price)
FROM products p2
WHERE p2.category = p.category
);
-- Subquery in SELECT (scalar subquery)
SELECT
u.first_name,
u.email,
(SELECT COUNT(*) FROM orders o WHERE o.user_id = u.id) AS order_count
FROM users u;
-- EXISTS: check if related rows exist
SELECT * FROM users u
WHERE EXISTS (
SELECT 1 FROM orders o
WHERE o.user_id = u.id
AND o.created_at >= '2026-01-01'
);

CTEs make complex queries readable. They let you define named temporary result sets that you can reference in your main query. Think of them as variables for query results.
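Before getting into the syntax, here is a minimal runnable sanity check of the idea, using Python's sqlite3 and made-up data: the WITH block defines a named result set that the outer query treats like a table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "active"), (2, "inactive"), (3, "active")])

# The CTE 'active_users' behaves like a temporary table scoped to this query.
result = conn.execute(
    """WITH active_users AS (
           SELECT id FROM users WHERE status = 'active'
       )
       SELECT COUNT(*) FROM active_users"""
).fetchone()
print(result[0])  # 2
```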
-- Basic CTE
WITH active_users AS (
SELECT id, first_name, email
FROM users
WHERE status = 'active'
AND last_login >= '2026-01-01'
)
SELECT
au.first_name,
au.email,
COUNT(o.id) AS order_count
FROM active_users au
LEFT JOIN orders o ON au.id = o.user_id
GROUP BY au.id, au.first_name, au.email;
-- Multiple CTEs
WITH monthly_revenue AS (
SELECT
DATE_TRUNC('month', created_at) AS month,
SUM(total) AS revenue
FROM orders
WHERE status = 'completed'
GROUP BY DATE_TRUNC('month', created_at)
),
monthly_growth AS (
SELECT
month,
revenue,
LAG(revenue) OVER (ORDER BY month) AS prev_revenue,
ROUND(
(revenue - LAG(revenue) OVER (ORDER BY month))
/ LAG(revenue) OVER (ORDER BY month) * 100, 2
) AS growth_pct
FROM monthly_revenue
)
SELECT * FROM monthly_growth
ORDER BY month DESC;
-- Recursive CTE: org chart traversal
WITH RECURSIVE org_tree AS (
SELECT id, name, manager_id, 1 AS depth
FROM employees
WHERE manager_id IS NULL
UNION ALL
SELECT e.id, e.name, e.manager_id, ot.depth + 1
FROM employees e
INNER JOIN org_tree ot ON e.manager_id = ot.id
)
SELECT * FROM org_tree ORDER BY depth, name;

Window functions perform calculations across a set of rows related to the current row, without collapsing them into a single output row. They are one of the most powerful features in SQL and something many developers do not learn until years into their career.
-- ROW_NUMBER: assign sequential numbers
SELECT
name,
department,
salary,
ROW_NUMBER() OVER (ORDER BY salary DESC) AS salary_rank -- "rank" is a reserved word in MySQL 8
FROM employees;
-- ROW_NUMBER with PARTITION: rank within each group
SELECT
name,
department,
salary,
ROW_NUMBER() OVER (
PARTITION BY department
ORDER BY salary DESC
) AS dept_rank
FROM employees;
-- RANK and DENSE_RANK: handle ties differently
SELECT
name,
score,
RANK() OVER (ORDER BY score DESC) AS score_rank,
DENSE_RANK() OVER (ORDER BY score DESC) AS score_dense_rank
FROM leaderboard;
-- RANK: 1, 2, 2, 4 (skips 3)
-- DENSE_RANK: 1, 2, 2, 3 (no skip)
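Those tie-handling comments are easy to verify. Here is a sqlite3 sketch with an invented leaderboard (window functions require SQLite 3.25+, which ships with any modern Python):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE leaderboard (name TEXT, score INTEGER)")
conn.executemany("INSERT INTO leaderboard VALUES (?, ?)",
                 [("a", 100), ("b", 90), ("c", 90), ("d", 80)])

rows = conn.execute(
    """SELECT name,
              RANK() OVER (ORDER BY score DESC),
              DENSE_RANK() OVER (ORDER BY score DESC)
       FROM leaderboard"""
).fetchall()
for name, r, dr in rows:
    print(name, r, dr)
# RANK gives 1, 2, 2, 4 (skips 3); DENSE_RANK gives 1, 2, 2, 3.
```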
-- Running totals with SUM
SELECT
date,
amount,
SUM(amount) OVER (ORDER BY date) AS running_total
FROM transactions;
-- Moving average
SELECT
date,
revenue,
AVG(revenue) OVER (
ORDER BY date
ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
) AS seven_day_avg
FROM daily_revenue;
-- LAG and LEAD: access previous/next rows
SELECT
month,
revenue,
LAG(revenue, 1) OVER (ORDER BY month) AS prev_month,
LEAD(revenue, 1) OVER (ORDER BY month) AS next_month
FROM monthly_stats;

| Function | Description |
|---|---|
| ROW_NUMBER() | Unique sequential number per row |
| RANK() | Rank with gaps on ties |
| DENSE_RANK() | Rank without gaps on ties |
| NTILE(n) | Distribute rows into n buckets |
| LAG(col, n) | Value from n rows before |
| LEAD(col, n) | Value from n rows after |
| FIRST_VALUE(col) | First value in the window |
| LAST_VALUE(col) | Last value in the window |
| SUM() OVER () | Running or windowed sum |
| AVG() OVER () | Running or windowed average |
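One more runnable check, since running totals and LAG are the two I reach for most. Again Python's sqlite3 with invented monthly numbers:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE monthly_stats (month TEXT, revenue INTEGER)")
conn.executemany("INSERT INTO monthly_stats VALUES (?, ?)",
                 [("2026-01", 100), ("2026-02", 150), ("2026-03", 120)])

rows = conn.execute(
    """SELECT month,
              SUM(revenue) OVER (ORDER BY month) AS running_total,
              LAG(revenue) OVER (ORDER BY month) AS prev_month
       FROM monthly_stats ORDER BY month"""
).fetchall()
for row in rows:
    print(row)
# ('2026-01', 100, None)  -- no previous month, so LAG returns NULL
# ('2026-02', 250, 100)
# ('2026-03', 370, 150)
```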
-- INSERT single row
INSERT INTO users (first_name, last_name, email, status)
VALUES ('Jane', 'Doe', 'jane@example.com', 'active');
-- INSERT multiple rows
INSERT INTO products (name, price, category)
VALUES
('Keyboard', 79.99, 'electronics'),
('Mouse', 49.99, 'electronics'),
('Desk Pad', 29.99, 'accessories');
-- INSERT from SELECT
INSERT INTO archived_orders (id, user_id, total, created_at)
SELECT id, user_id, total, created_at
FROM orders
WHERE created_at < '2025-01-01';
-- UPDATE with conditions
UPDATE users
SET status = 'inactive', updated_at = NOW()
WHERE last_login < '2025-01-01'
AND status = 'active';
-- UPDATE with JOIN (PostgreSQL syntax)
UPDATE orders o
SET status = 'cancelled'
FROM users u
WHERE o.user_id = u.id
AND u.status = 'banned';
-- DELETE with conditions
DELETE FROM sessions
WHERE expires_at < NOW();
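The write statements compose the same way in any engine. A small sqlite3 round trip over an invented sessions table (SQLite's datetime('now') plays the role of NOW()):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (id INTEGER, expires_at TEXT)")
conn.executemany("INSERT INTO sessions VALUES (?, ?)",
                 [(1, "2020-01-01 00:00:00"), (2, "2099-01-01 00:00:00")])

# Delete expired sessions, exactly like the DELETE example above.
deleted = conn.execute(
    "DELETE FROM sessions WHERE expires_at < datetime('now')"
).rowcount
print(deleted)  # 1

remaining = conn.execute("SELECT id FROM sessions").fetchall()
print(remaining)  # [(2,)]
```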
-- UPSERT (INSERT ... ON CONFLICT) - PostgreSQL
INSERT INTO user_settings (user_id, theme, language)
VALUES (42, 'dark', 'en')
ON CONFLICT (user_id)
DO UPDATE SET theme = EXCLUDED.theme, language = EXCLUDED.language;

Writing correct SQL is one thing. Writing fast SQL is another. Here are the patterns I use most often for optimization.
-- Single column index
CREATE INDEX idx_users_email ON users (email);
-- Composite index (column order matters!)
CREATE INDEX idx_orders_user_date
ON orders (user_id, created_at DESC);
-- Unique index
CREATE UNIQUE INDEX idx_users_email_unique
ON users (email);
-- Partial index (PostgreSQL) - only index active users
CREATE INDEX idx_users_active
ON users (email)
WHERE status = 'active';

Use EXPLAIN ANALYZE to see how the database executes your query:

EXPLAIN ANALYZE
SELECT * FROM orders
WHERE user_id = 42
ORDER BY created_at DESC
LIMIT 10;

Avoid SELECT * in production code. Fetch only the columns you need. Fetching fewer columns means less data transferred and potentially better index usage.
Index your WHERE and JOIN columns. If you filter or join on a column frequently, it should have an index.
Be careful with functions in WHERE clauses. This kills index usage:
-- BAD: function on column prevents index use
SELECT * FROM users WHERE LOWER(email) = 'jane@example.com';
-- GOOD: use a functional index or normalize data on insert
CREATE INDEX idx_users_email_lower ON users (LOWER(email));

Prefer EXISTS over IN for large subqueries:

-- Slower with large datasets
SELECT * FROM users WHERE id IN (SELECT user_id FROM orders);
-- Usually faster
SELECT * FROM users u
WHERE EXISTS (SELECT 1 FROM orders o WHERE o.user_id = u.id);

Limit your result set early. Applying LIMIT and tight WHERE conditions reduces the amount of work the database does.
Avoid N+1 queries in application code. If you are running a query inside a loop, you probably need a JOIN or a batch query instead.
| Problem | Solution |
|---|---|
| Slow WHERE filters | Add index on filtered column |
| Slow JOINs | Index the JOIN columns on both tables |
| Full table scans | Check EXPLAIN output, add targeted indexes |
| Sorting bottleneck | Index with matching sort order |
| Too much data returned | Use SELECT specific_columns not SELECT * |
| Function in WHERE | Create functional index or pre-compute |
| N+1 queries | Replace loop with JOIN or batch query |
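Most of that table boils down to a loop: check the plan, add the index, check again. SQLite's EXPLAIN QUERY PLAN (a lighter cousin of EXPLAIN ANALYZE) makes the loop easy to script; here the index name is invented and the exact plan text varies by SQLite version:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, user_id INTEGER, total REAL)")

def plan(sql):
    # The last column of EXPLAIN QUERY PLAN output is a human-readable
    # description like 'SCAN orders' or 'SEARCH orders USING INDEX ...'.
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()[0][3]

query = "SELECT * FROM orders WHERE user_id = 42"
before = plan(query)
print(before)  # full table scan

conn.execute("CREATE INDEX idx_orders_user ON orders (user_id)")
after = plan(query)
print(after)  # now an index search via idx_orders_user
```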
A few patterns that I reach for regularly:
-- CASE: conditional logic in queries
SELECT
name,
price,
CASE
WHEN price < 20 THEN 'Budget'
WHEN price < 100 THEN 'Mid-range'
ELSE 'Premium'
END AS tier
FROM products;
-- COALESCE: first non-null value
SELECT
COALESCE(nickname, first_name, 'Anonymous') AS display_name
FROM users;
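COALESCE's left-to-right, first-non-NULL behavior is portable and easy to confirm anywhere, sqlite3 included (invented rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (nickname TEXT, first_name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("JD", "Jane"), (None, "Bob"), (None, None)])

# COALESCE walks its arguments left to right and returns the first non-NULL.
names = conn.execute(
    "SELECT COALESCE(nickname, first_name, 'Anonymous') FROM users"
).fetchall()
print([n for (n,) in names])  # ['JD', 'Bob', 'Anonymous']
```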
-- DISTINCT ON (PostgreSQL): latest record per group
SELECT DISTINCT ON (user_id)
user_id, total, created_at
FROM orders
ORDER BY user_id, created_at DESC;
-- String aggregation
-- PostgreSQL:
SELECT department, STRING_AGG(name, ', ') AS members
FROM employees GROUP BY department;
-- MySQL:
SELECT department, GROUP_CONCAT(name SEPARATOR ', ') AS members
FROM employees GROUP BY department;

Date handling varies across databases, but these patterns cover the most common scenarios:
-- Current timestamp
SELECT NOW(); -- PostgreSQL, MySQL
SELECT CURRENT_TIMESTAMP; -- Standard SQL
-- Extract parts
SELECT EXTRACT(YEAR FROM created_at) AS year FROM orders;
SELECT EXTRACT(MONTH FROM created_at) AS month FROM orders;
-- Date arithmetic
SELECT created_at + INTERVAL '30 days' FROM subscriptions; -- PostgreSQL
SELECT DATE_ADD(created_at, INTERVAL 30 DAY) FROM subscriptions; -- MySQL
-- Truncate to period
SELECT DATE_TRUNC('month', created_at) AS month FROM orders; -- PostgreSQL

SQL has not changed dramatically since I first learned it, and that is a feature, not a bug. The queries in this cheat sheet will work in 2026 the same way they worked a decade ago. The fundamentals — filtering with WHERE, combining with JOINs, aggregating with GROUP BY, and analyzing with window functions — cover the vast majority of what you will need day to day.
Bookmark this page. Keep it open in a tab. And when you are building queries against real data, use the SQL Formatter to keep things readable, the Code Playground to test your queries, and tools like the JSON to CSV converter and CSV Viewer when you need to move data between formats.
The best way to get better at SQL is to write more of it. Pick a dataset, open a terminal, and start querying.