# Bulk INSERT with Batch Size 1000
Generate high-throughput bulk INSERT statements with 1000 rows per batch. Ideal for PostgreSQL and MySQL imports where maximum speed is needed.
## Detailed Explanation
### Batch Size 1000: Maximum Throughput
A batch size of 1000 rows per INSERT pushes the performance boundaries and is ideal for high-volume data imports into PostgreSQL and MySQL.
### Performance Gains
Compared to smaller batch sizes, 1000 rows per INSERT provides:
| Batch Size | Statements for 10,000 rows | Relative Speed |
|---|---|---|
| 1 (individual) | 10,000 | 1x (baseline) |
| 100 | 100 | ~50x faster |
| 500 | 20 | ~80x faster |
| 1000 | 10 | ~100x faster |
The speed increase comes from:
- Fewer statement parse operations
- Reduced transaction commit overhead
- Better buffer utilization in the database
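The statement-count arithmetic in the table can be sketched in Python. This is a minimal illustration, not the tool's actual generator: `batched` and `build_insert` are hypothetical helper names, and the `repr`-based value quoting is a simplification (real imports should rely on the database driver's own escaping or parameter binding).

```python
from itertools import islice

def batched(rows, size):
    """Yield successive chunks of `size` rows; the last chunk may be smaller."""
    it = iter(rows)
    while chunk := list(islice(it, size)):
        yield chunk

def build_insert(table, columns, chunk):
    """Render one multi-row INSERT statement for a chunk of rows.
    NOTE: repr() quoting is illustrative only -- use driver escaping in practice."""
    cols = ", ".join(columns)
    values = ",\n".join(
        "(" + ", ".join(repr(v) for v in row) + ")" for row in chunk
    )
    return f"INSERT INTO {table} ({cols})\nVALUES\n{values};"

rows = [(i, i * 0.5) for i in range(10_000)]
statements = [build_insert("metrics", ("id", "cpu"), c)
              for c in batched(rows, 1000)]
print(len(statements))  # 10 statements instead of 10,000
```

With 10,000 rows and a batch size of 1000, the generator emits exactly 10 statements, which is where the parse and commit savings in the table come from.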
### Database Compatibility
| Database | 1000-Row Support |
|---|---|
| PostgreSQL | Excellent — handles 1000+ rows easily |
| MySQL | Good — watch max_allowed_packet for wide tables |
| SQL Server | At the limit — 1000 rows is the hard maximum per INSERT ... VALUES statement |
| SQLite | Limited — wide tables can exceed the default statement-length or bound-parameter limits |
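For MySQL, the practical constraint is that one full statement must fit in a single packet. A rough size check before sending is easy to sketch; the statement below is a hypothetical wide-table example, and the 64 MB figure is the assumed MySQL 8.0 default for `max_allowed_packet` (older versions default lower, so check your server).

```python
def batch_payload_bytes(statement: str) -> int:
    """Approximate the wire size of one INSERT statement in bytes."""
    return len(statement.encode("utf-8"))

# Hypothetical 1000-row statement for a wide table: one timestamp
# plus 20 numeric columns per row tuple.
row_tuple = "(" + ", ".join(["'2024-01-01 00:00:00'"] + ["12345.67"] * 20) + ")"
statement = "INSERT INTO metrics VALUES\n" + ",\n".join([row_tuple] * 1000) + ";"

payload = batch_payload_bytes(statement)
default_max_allowed_packet = 64 * 1024 * 1024  # assumed MySQL 8.0 default
print(payload, payload < default_max_allowed_packet)
```

Even this fairly wide 1000-row batch lands in the hundreds of kilobytes, far under a 64 MB packet cap; the check matters mostly for very wide rows or large text/blob columns.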
### Generated Output

```sql
INSERT INTO metrics ("timestamp", "cpu", "memory", "disk")
VALUES
('2024-01-01 00:00:00', 45.2, 72.1, 55.0),
('2024-01-01 00:01:00', 46.8, 71.9, 55.0),
-- ... 998 more rows ...
('2024-01-01 16:39:00', 52.1, 68.3, 56.2);
```
### When to Use 1000
- PostgreSQL or MySQL with default or increased packet sizes
- Time-series or log data with few columns
- Batch imports where speed is the top priority
- Combine with "Wrap in transaction" for atomic imports
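The "Wrap in transaction" option from the list above can be approximated as follows; `wrap_in_transaction` is a hypothetical helper, and `BEGIN`/`COMMIT` is the syntax both PostgreSQL and MySQL accept for an explicit transaction.

```python
def wrap_in_transaction(statements):
    """Surround a list of INSERT statements with BEGIN/COMMIT so the
    import is atomic: either every batch lands or none do."""
    return "\n".join(["BEGIN;", *statements, "COMMIT;"])

script = wrap_in_transaction([
    "INSERT INTO metrics VALUES (1, 45.2);",
    "INSERT INTO metrics VALUES (2, 46.8);",
])
print(script.splitlines()[0])  # BEGIN;
```

Committing once per import, rather than once per batch, removes most of the remaining commit overhead and guarantees a partial failure leaves the table unchanged.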
### Use Case
You are importing server metrics collected over months (500,000+ rows) into PostgreSQL for analytics. Using batch size 1000 with transactions reduces import time from hours to minutes.