Bulk INSERT with Batch Size 100

Generate bulk INSERT statements with 100 rows per batch. Learn why 100 is a common default batch size and when to use it for balanced performance.

Batch Size 100: The Safe Default

A batch size of 100 rows per INSERT statement is a widely recommended default that balances performance with compatibility across all major databases.

Why 100 Rows?

The number 100 is a practical sweet spot:

  1. Well within limits: even SQLite's default limit of 999 bound variables per statement accommodates 100 rows of up to 9 columns each (at most 900 variables)
  2. Good performance: Reduces round-trips by 100x compared to individual INSERTs
  3. Manageable errors: If a batch fails, you only need to retry 100 rows, not thousands
  4. Memory-friendly: at 100 rows per statement, the generated SQL for each batch typically stays well under 1MB
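
The batching described above can be sketched in a few lines of Python. This is a minimal illustration, not the tool's actual implementation; the `build_batches` helper and the column names are hypothetical, and `repr()` stands in for proper SQL value escaping:

```python
def build_batches(table, columns, rows, batch_size=100):
    """Yield one multi-row INSERT statement per batch of up to batch_size rows."""
    col_list = ", ".join(f'"{c}"' for c in columns)
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        # repr() is a demo-only stand-in for real SQL quoting/escaping
        values = ",\n  ".join(
            "(" + ", ".join(repr(v) for v in row) + ")" for row in batch
        )
        yield f"INSERT INTO {table} ({col_list})\nVALUES\n  {values};"

# 350 rows split into batches of 100 -> 4 statements (100 + 100 + 100 + 50)
rows = [(i, f"customer{i}", round(1.5 * i, 2)) for i in range(1, 351)]
statements = list(build_batches("orders", ["id", "customer", "total"], rows))
```

The final batch simply carries whatever remainder is left, which is why a 350-row dataset produces three full batches and one batch of 50.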

Example Output

For a 350-row dataset:

-- Batch 1 (rows 1-100)
INSERT INTO orders ("id", "customer", "total", "status")
VALUES
  (1, 'Alice', 150.00, 'shipped'),
  (2, 'Bob', 89.50, 'pending'),
  -- ... rows 3-99 ...
  (100, 'Zara', 210.00, 'delivered');

-- Batch 2 (rows 101-200)
INSERT INTO orders ("id", "customer", "total", "status")
VALUES
  (101, 'Alex', 45.00, 'shipped'),
  -- ... rows 102-200 ...

-- Batch 3 (rows 201-300)
-- ...

-- Batch 4 (rows 301-350)
INSERT INTO orders ("id", "customer", "total", "status")
VALUES
  (301, 'Mike', 99.00, 'pending'),
  -- ... rows 302-349 ...
  (350, 'Nina', 175.00, 'shipped');

When to Choose 100

Use 100 rows per batch when:

  • You are unsure about database limits
  • Your rows have many columns (10+)
  • You need error isolation (easier to debug per-batch failures)
  • You are targeting multiple database engines
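
The error-isolation point can be made concrete with Python's standard `sqlite3` module. In this hedged sketch (the `load_in_batches` helper is illustrative), each batched INSERT runs in its own transaction, so a bad row fails only its own batch of 100, which can be logged and retried while the other batches commit:

```python
import sqlite3

def load_in_batches(conn, sql_statements):
    """Execute each batched INSERT in its own transaction; return failed batch numbers."""
    failed = []
    for i, stmt in enumerate(sql_statements, start=1):
        try:
            with conn:                 # commits on success, rolls back on error
                conn.execute(stmt)
        except sqlite3.Error:
            failed.append(i)           # record batch number for a later retry
    return failed

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE orders ("id" INTEGER, "customer" TEXT)')
stmts = [
    "INSERT INTO orders VALUES (1, 'Alice'), (2, 'Bob');",
    "INSERT INTO orders VALUES (3, 'Cara', 'extra');",  # wrong arity: this batch fails alone
]
failed = load_in_batches(conn, stmts)
```

Only the second batch fails; the first batch's rows remain committed, which is exactly the per-batch debugging behavior the list above describes.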

Use Case

You are building a data migration script that must work across development (SQLite), staging (PostgreSQL), and production (MySQL) environments. Batch size 100 is safe for all three databases.
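
A quick sanity check against the tightest of the three limits, SQLite's default of 999 bound variables per statement, confirms that 100 rows per batch is safe for this migration. The helper name below is illustrative:

```python
def max_safe_batch(variable_limit, num_columns):
    """Largest row count per INSERT given a per-statement bound-variable limit."""
    return variable_limit // num_columns

# With 999 variables and 9-column rows, up to 111 rows fit per statement,
# so a batch size of 100 stays comfortably inside the limit.
safe_rows = max_safe_batch(999, 9)
```

PostgreSQL and MySQL impose looser per-statement constraints (driven mainly by total statement size rather than a small variable count), so whichever batch size satisfies SQLite satisfies all three environments.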

Try It — JSON to Bulk INSERT
