CSV to SQL Generator
Convert CSV data to SQL CREATE TABLE and INSERT statements. Automatically infers column types (INT, DECIMAL, DATE, BOOLEAN, VARCHAR) and supports MySQL, PostgreSQL, and SQLite dialects.
What is a CSV to SQL Generator?
A CSV to SQL generator converts tabular CSV (Comma-Separated Values) data into SQL statements you can run directly against a database. It automatically infers column types from the data, generates a CREATE TABLE statement, and produces batched INSERT statements — saving you from writing dozens of SQL lines by hand. It supports MySQL, PostgreSQL, and SQLite dialects, with the correct quoting, type names, and boolean representations for each.
How type inference works
Type inference examines every non-empty value in each column. If every value matches -?\d+, the column becomes INT; if all values match a decimal pattern, it becomes DECIMAL; dates matching YYYY-MM-DD become DATE; columns containing only true/false/0/1 become BOOLEAN; everything else falls back to VARCHAR (or TEXT for PostgreSQL/SQLite). Empty cells are excluded from type inference and written as NULL.
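The inference described above can be sketched in a few lines. This is an illustrative reconstruction, not the tool's actual implementation; the regexes and the VARCHAR(255) fallback width are assumptions.

```python
import re

INT_RE = re.compile(r"-?\d+$")
DEC_RE = re.compile(r"-?\d+\.\d+$")
DATE_RE = re.compile(r"\d{4}-\d{2}-\d{2}$")
BOOLS = {"true", "false", "0", "1"}

def infer_type(values):
    """Infer a SQL type from a column's raw string values."""
    sample = [v for v in values if v != ""]   # empty cells are ignored
    if not sample:
        return "VARCHAR(255)"                 # no evidence: fall back
    if all(INT_RE.fullmatch(v) for v in sample):
        return "INT"
    if all(DEC_RE.fullmatch(v) or INT_RE.fullmatch(v) for v in sample):
        return "DECIMAL(10,2)"
    if all(DATE_RE.fullmatch(v) for v in sample):
        return "DATE"
    if all(v.lower() in BOOLS for v in sample):
        return "BOOLEAN"
    return "VARCHAR(255)"

infer_type(["1", "2", ""])          # INT (empty cell excluded)
infer_type(["1.5", "2"])            # DECIMAL(10,2)
infer_type(["2024-01-02"])          # DATE
```

Note that the check order matters: a column of only 0/1 values matches the integer pattern first and is typed INT rather than BOOLEAN.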
Frequently Asked Questions
How does SQL type inference work?
The generator inspects every non-empty value in a column and checks each against a series of patterns in order: integer, decimal, ISO date, boolean, then defaults to VARCHAR. All values in the column must match a pattern for that type to be assigned — a single non-matching value pushes the column to the next fallback type. This conservative approach avoids type mismatches that would cause import errors.
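The "all values must match" rule means a single outlier demotes the whole column. A minimal sketch of that fallback chain (patterns and type names are illustrative):

```python
import re

# Ordered checks: the first pattern that EVERY value satisfies wins.
CHECKS = [
    ("INT", re.compile(r"-?\d+$")),
    ("DECIMAL", re.compile(r"-?\d+(\.\d+)?$")),
    ("DATE", re.compile(r"\d{4}-\d{2}-\d{2}$")),
    ("BOOLEAN", re.compile(r"(?i)(true|false|0|1)$")),
]

def column_type(values):
    for name, pattern in CHECKS:
        if all(pattern.fullmatch(v) for v in values):
            return name
    return "VARCHAR"  # default when no pattern matches every value

column_type(["10", "20", "30"])       # INT
column_type(["10", "20", "thirty"])   # VARCHAR — one outlier demotes the column
```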
What is a batch INSERT and why does it matter?
A batch (or multi-row) INSERT combines multiple rows into a single INSERT INTO … VALUES (…), (…), (…) statement instead of one statement per row. This dramatically reduces round-trips to the database server and can be 10–50× faster for large imports. The "Rows per INSERT" control lets you tune the batch size — smaller batches use less memory per statement, while larger batches maximise throughput. MySQL caps statement size with max_allowed_packet (4 MB by default in older versions, 64 MB in MySQL 8.0), so keep batches under a few thousand rows for wide tables.
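The chunking behind "Rows per INSERT" can be sketched as follows. The function name and the naive single-quote escaping are illustrative only; a real generator escapes values per dialect and skips quoting for numeric columns.

```python
def batch_inserts(table, columns, rows, rows_per_insert=500):
    """Yield multi-row INSERT statements, rows_per_insert rows each."""
    col_list = ", ".join(columns)
    for i in range(0, len(rows), rows_per_insert):
        chunk = rows[i:i + rows_per_insert]
        values = ",\n".join(
            "(" + ", ".join("NULL" if v == "" else f"'{v}'" for v in row) + ")"
            for row in chunk
        )
        yield f"INSERT INTO {table} ({col_list}) VALUES\n{values};"

# 1200 rows with a batch size of 500 produce three statements (500 + 500 + 200):
stmts = list(batch_inserts("users", ["id", "name"], [["1", "a"]] * 1200))
```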
What are the differences between MySQL, PostgreSQL, and SQLite dialects?
Each dialect has distinct conventions. MySQL uses backtick (`) identifier quoting, TINYINT(1) for booleans, and VARCHAR(255) for strings. PostgreSQL uses double-quote identifier quoting, a native BOOLEAN type with TRUE/FALSE literals, and TEXT for unbounded strings. SQLite is dynamically typed — all types are advisory — so integers become INTEGER, decimals become REAL, and everything else is TEXT. Booleans in SQLite are stored as 0/1.
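The conventions above can be captured in a small lookup table. This is a sketch matching the description, not the generator's internals; the dictionary keys and helper name are assumptions.

```python
DIALECTS = {
    "mysql":      {"quote": "`", "bool": "TINYINT(1)", "text": "VARCHAR(255)"},
    "postgresql": {"quote": '"', "bool": "BOOLEAN",    "text": "TEXT"},
    "sqlite":     {"quote": '"', "bool": "INTEGER",    "text": "TEXT"},
}

def quote_ident(name, dialect):
    """Wrap an identifier in the dialect's quoting character."""
    q = DIALECTS[dialect]["quote"]
    return f"{q}{name}{q}"

quote_ident("order", "mysql")       # `order`
quote_ident("order", "postgresql")  # "order"
```

Quoting matters most for identifiers that collide with reserved words ("order", "user", "group"), which unquoted would be syntax errors.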
How do I import the generated SQL into my database?
Download the .sql file and run it with your database client. For MySQL: mysql -u user -p dbname < data.sql. For PostgreSQL: psql -U user -d dbname -f data.sql. For SQLite: sqlite3 mydb.sqlite < data.sql. In Laravel, you can run it via DB::unprepared(file_get_contents('data.sql')) or with a seeder that reads the file. Always review the generated SQL before importing into a production database.
How are NULL values and empty strings handled?
An empty CSV cell (two consecutive commas, or a field with no content) is written as SQL NULL, not as an empty string. This preserves the semantic difference between "no value" and "empty string" that matters for nullable columns, aggregations, and IS NULL checks. If you need empty strings to remain as '' in the database, pre-process your CSV to insert a placeholder before generating SQL. Quoted empty fields ("") are also treated as NULL by this tool.
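The NULL-versus-string distinction comes down to how each field is rendered as a literal. A minimal sketch (the helper name is illustrative; numeric and boolean columns would skip the quoting):

```python
def sql_literal(value):
    """Render one CSV field as a SQL literal: empty cells become NULL,
    everything else a single-quoted string with embedded quotes doubled."""
    if value == "":
        return "NULL"
    return "'" + value.replace("'", "''") + "'"

sql_literal("")          # NULL
sql_literal("O'Brien")   # 'O''Brien'
```

Doubling single quotes is the standard SQL escape accepted by all three dialects, so the same rendering works for MySQL, PostgreSQL, and SQLite.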