JSON to SQL INSERT (Bulk)
Convert large JSON arrays into optimized bulk SQL INSERT statements with batch sizing and dialect support.
About This Tool
The JSON to SQL INSERT (Bulk) converter is a specialized tool designed for generating high-performance bulk INSERT statements from large JSON arrays. Unlike single-row INSERT generators, this tool focuses on multi-row VALUES clauses that dramatically reduce the number of round-trips to the database and improve insertion speed.
When you paste a JSON array, the tool automatically detects column names from JSON keys and infers SQL types from the values. You can then customize the output by adjusting the batch size — splitting thousands of rows into manageable chunks of 50, 100, 500, or 1000 rows per INSERT statement. This is important because most databases have limits on query size and the number of parameters per statement.
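For example, a 250-row array with a batch size of 100 would be emitted as three statements. The table and column names below are illustrative, not fixed output of the tool:

```sql
-- Batch 1: rows 1-100 in a single multi-row VALUES clause
INSERT INTO users (id, name, email) VALUES
  (1, 'Ada', 'ada@example.com'),
  (2, 'Grace', 'grace@example.com'),
  -- ... rows 3-99 ...
  (100, 'Edsger', 'edsger@example.com');

-- Batch 2: rows 101-200 (same column list, next chunk of VALUES)
-- Batch 3: the remaining 50 rows
```

Each batch is an independent statement, so a failure partway through an import only affects the current chunk (unless the whole script is wrapped in a transaction).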
The tool supports four SQL dialects: MySQL, PostgreSQL, SQLite, and SQL Server. Each dialect uses its own quoting convention for identifiers (backticks for MySQL, double quotes for PostgreSQL, square brackets for SQL Server) and has different syntax for conflict resolution. You can choose between ON CONFLICT DO NOTHING (skip duplicates) and ON CONFLICT DO UPDATE (upsert) for PostgreSQL and SQLite, or ON DUPLICATE KEY UPDATE for MySQL.
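The same single-row insert rendered under each dialect's identifier-quoting convention might look like this (names are illustrative):

```sql
-- MySQL: backticks
INSERT INTO `users` (`id`, `first_name`) VALUES (1, 'Ada');

-- PostgreSQL and SQLite: double quotes
INSERT INTO "users" ("id", "first_name") VALUES (1, 'Ada');

-- SQL Server: square brackets
INSERT INTO [users] ([id], [first_name]) VALUES (1, 'Ada');
```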
Column mapping lets you rename JSON keys to different SQL column names, select which columns to include, and handle NULL values in three ways: as the SQL NULL keyword, by skipping the column, or as DEFAULT. You can optionally include a CREATE TABLE statement, wrap the entire operation in a transaction, and toggle identifier quoting.
All processing runs entirely in your browser. Your JSON data is never sent to any server — there are no network requests, no logging, and no third-party dependencies involved. This makes the tool safe for sensitive production data. For simpler single-row conversions, see the JSON to SQL INSERT tool, or generate database schemas with the JSON to SQL Schema converter.
How to Use
- Paste your JSON array into the Input (JSON Array) panel on the left, or click a preset button to load sample data.
- Set the Table Name to the target database table.
- Choose the SQL Dialect (MySQL, PostgreSQL, SQLite, or SQL Server) for the correct quoting and syntax.
- Adjust the Batch Size to control how many rows appear in each INSERT statement (e.g., 100 rows per batch).
- Configure Column Mapping: rename columns, uncheck columns to exclude them, or use Select All / Deselect All.
- Optionally enable ON CONFLICT handling (ignore or upsert), CREATE TABLE, Quote identifiers, and Wrap in transaction.
- Click Copy or press Ctrl+Shift+C to copy the generated SQL, or click the .sql button to download it as a file.
Popular Bulk INSERT Examples
FAQ
What is the advantage of bulk INSERT over individual INSERTs?
Bulk INSERT statements combine multiple rows into a single INSERT with a multi-row VALUES clause. This reduces the number of SQL statements the database needs to parse and execute, lowers network round-trips, and can be 10-100x faster than inserting rows one at a time. Most databases also optimize multi-row inserts internally with fewer transaction commits and index updates.
What batch size should I use?
The optimal batch size depends on your database and the size of each row; batches of 100-500 rows work well for typical data. MySQL caps total statement size via max_allowed_packet (4 MB by default in older versions, 64 MB in MySQL 8.0). PostgreSQL handles large statements well, but around 1000 rows per batch is a practical ceiling, and SQLite performs best with 100-500 rows per batch. If you see errors on import, try reducing the batch size.
How does ON CONFLICT / ON DUPLICATE KEY work?
When you enable conflict handling, the tool adds upsert syntax to each INSERT statement. For PostgreSQL and SQLite, it uses ON CONFLICT (column) DO NOTHING or DO UPDATE SET. For MySQL, it uses INSERT IGNORE or ON DUPLICATE KEY UPDATE. You select the conflict column (usually the primary key), and the tool generates the appropriate syntax for your chosen dialect.
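The generated upsert clauses follow the standard syntax for each dialect. A sketch with an illustrative `users` table and `id` as the conflict column:

```sql
-- PostgreSQL / SQLite: skip rows whose id already exists
INSERT INTO users (id, name) VALUES (1, 'Ada')
ON CONFLICT (id) DO NOTHING;

-- PostgreSQL / SQLite: update the existing row instead (upsert);
-- "excluded" refers to the row that failed to insert
INSERT INTO users (id, name) VALUES (1, 'Ada')
ON CONFLICT (id) DO UPDATE SET name = excluded.name;

-- MySQL equivalents
INSERT IGNORE INTO users (id, name) VALUES (1, 'Ada');

INSERT INTO users (id, name) VALUES (1, 'Ada')
ON DUPLICATE KEY UPDATE name = VALUES(name);
```

Note that MySQL's ON DUPLICATE KEY UPDATE triggers on any unique index violation, not just the primary key.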
Can I rename columns in the output?
Yes. The Column Mapping section shows each JSON key with an editable text field for the SQL column name. Change the name to map a JSON key like 'firstName' to a SQL column like 'first_name'. The inferred type is shown next to each column.
How are NULL values handled?
You can choose from three NULL handling strategies: 'NULL keyword' inserts the literal NULL, 'Skip column' omits the column entirely for that row (only works with individual inserts), and 'DEFAULT keyword' uses the SQL DEFAULT keyword to let the database apply its column default value.
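The three strategies produce different statements for a row where, say, an `email` field is null (names here are illustrative; note that not every database accepts the DEFAULT keyword in a VALUES list, so check your dialect's support):

```sql
-- 'NULL keyword': insert the literal NULL
INSERT INTO users (id, email) VALUES (7, NULL);

-- 'DEFAULT keyword': let the database apply the column default
INSERT INTO users (id, email) VALUES (7, DEFAULT);

-- 'Skip column': omit the column entirely (single-row inserts only)
INSERT INTO users (id) VALUES (7);
```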
Is my data safe?
Yes. All JSON parsing and SQL generation happens entirely in your browser using JavaScript. No data is sent to any server, no network requests are made, and no third-party services are involved. You can verify this by checking the Network tab in your browser's developer tools.
Does it handle nested JSON objects and arrays?
Nested JSON objects and arrays are serialized to their JSON string representation and inserted as text values (single-quoted JSON strings). This is useful for databases that support JSON columns like PostgreSQL's JSONB or MySQL's JSON type. The inferred type for such columns will be TEXT.
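For instance, a row like `{"id": 1, "tags": ["a", "b"], "meta": {"active": true}}` would be flattened into single-quoted JSON strings (column names illustrative):

```sql
INSERT INTO items (id, tags, meta) VALUES
  (1, '["a","b"]', '{"active":true}');
```

A JSON or JSONB column in the target table can then parse these strings directly; in a plain TEXT column they are stored as-is.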
Related Tools
JSON to SQL INSERT
Convert JSON arrays to SQL INSERT statements. Supports bulk inserts, table name customization, and multiple SQL dialects.
JSON to SQL Schema
Convert JSON to CREATE TABLE statements. Generate SQL schema for MySQL, PostgreSQL, and SQLite with automatic type inference.
Database Seed Generator
Generate realistic seed data from SQL CREATE TABLE schemas. Export as SQL INSERT, JSON, or CSV.
CSV ↔ JSON Converter
Convert between CSV and JSON formats with delimiter selection, header toggle, and file drag-and-drop.
JSON Formatter
Format, validate, and beautify JSON with syntax highlighting and tree view.