CSV formats
Distribu is CSV-friendly at the edges. You can bulk-load catalogs and customer lists from a spreadsheet, export the current state back out, and (soon) pull orders out for accounting. This section documents the exact column names, validation rules, and row-level behavior for every CSV flow in the app.
The three flows
| Flow | Import | Export | Page |
|---|---|---|---|
| Products | /dashboard/inventory/import | /dashboard/inventory/export | Products format |
| Customers | /dashboard/customers/import | /dashboard/customers/export | Customers format |
| Orders | — | coming soon | Orders export |
Products and customers are fully round-trippable: export the current data, edit in a spreadsheet, re-import, and Distribu will match rows by SKU / email and apply the diffs. Orders are read-only — there's no "import orders" flow today, and the export is still on the roadmap.
Conventions used everywhere
The same parser powers both imports. A few rules hold across every CSV flow:
Headers are case-insensitive and format-tolerant
Every CSV must have a header row. Columns are matched by normalized name — lowercased, with spaces, underscores, and dashes stripped. So `SKU`, `sku`, `Sku`, `item_code`, and `Item Code` all match the `sku` field. Each page below lists the full set of synonyms recognized for each column.
If a header isn't recognized, Distribu shows a column mapping UI before you commit the import so you can pick which CSV column to use for which field. Mapping dropdowns appear in the preview panel after uploading.
Imports are idempotent
Every import is an upsert by natural key:
- Products match on `sku` within your company.
- Customers match on `email` (lowercased) within your company.
Re-running the same import after fixing a typo is safe — existing rows get updated in place, missing rows get created, unchanged rows are skipped. The preview panel tells you exactly how many fall into each bucket before you commit.
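The create/update/unchanged bucketing can be sketched as follows; `plan_import` and the dict shapes are illustrative, not Distribu internals:

```python
# Diff incoming rows against existing records by natural key.
# "existing" maps key -> stored row; "rows" are the parsed CSV rows.
def plan_import(existing: dict[str, dict], rows: list[dict], key: str = "sku") -> dict:
    plan = {"create": [], "update": [], "unchanged": []}
    for row in rows:
        current = existing.get(row[key])
        if current is None:
            plan["create"].append(row)       # no match: new record
        elif current == row:
            plan["unchanged"].append(row)    # identical: skipped
        else:
            plan["update"].append(row)       # match with differences
    return plan
```

Re-running the same file is safe precisely because an already-applied row lands in the `unchanged` bucket.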
File size and row limits
| Limit | Value |
|---|---|
| Maximum file size | 2 MB |
| Maximum rows per import | 5,000 |
If you have more than 5,000 products or customers to load, split the file into multiple imports. The 2 MB cap is generous for most spreadsheet data (≈20k short rows), but a CSV packed with long descriptions can hit it sooner.
Imports also respect your plan limits — the preview panel projects how many new rows you'd be creating and flags any overage before you apply the changes.
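A pre-check of the two hard limits might look like this; the function name and messages are hypothetical:

```python
MAX_BYTES = 2 * 1024 * 1024  # 2 MB file-size cap
MAX_ROWS = 5_000             # per-import row cap

def check_limits(data: bytes, row_count: int) -> list[str]:
    """Return a list of limit violations (empty means the file is accepted)."""
    errors = []
    if len(data) > MAX_BYTES:
        errors.append("file exceeds 2 MB")
    if row_count > MAX_ROWS:
        errors.append("more than 5,000 rows")
    return errors
```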
Parsing is tolerant, validation is strict
Distribu handles all the usual CSV quirks for free: quoted values containing commas, embedded newlines in quoted cells, escape-quote-by-doubling (`""`), trimmed whitespace, and UTF-8 encoding. Empty lines are skipped.
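Most of these quirks (quoted commas, embedded newlines in quoted cells, doubled quotes) are standard CSV behavior; whitespace trimming is Distribu's extra step. Python's standard `csv` module, shown here purely as an illustration, parses them the same way:

```python
import csv
import io

# Quoted commas, an embedded newline, and a doubled quote in one sample file,
# plus a trailing empty line.
raw = 'sku,description\nA1,"Widget, large"\nB2,"He said ""hi""\nsecond line"\n\n'

# csv.reader yields [] for blank lines, so filter them out.
rows = [r for r in csv.reader(io.StringIO(raw)) if r]
```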
Once parsed, every row is validated against a strict schema. Errors (bad email, non-numeric price, duplicate SKU) are collected per row with a message and surfaced in the preview — no partial imports. You fix the file, re-upload, and only commit when the issue list is empty.
You can also download a `row,message` CSV of the parse errors from the preview panel to use as a punch list.
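A sketch of the strict per-row pass, collecting `(row, message)` pairs like the downloadable punch list; the specific field rules here are illustrative, not Distribu's full schema:

```python
# Validate parsed rows and collect every error instead of stopping at the
# first one. Row numbers start at 2 because row 1 is the header.
def validate_rows(rows: list[dict]) -> list[tuple[int, str]]:
    errors: list[tuple[int, str]] = []
    seen_skus: set[str] = set()
    for i, row in enumerate(rows, start=2):
        sku = row.get("sku", "").strip()
        if not sku:
            errors.append((i, "missing sku"))
        elif sku in seen_skus:
            errors.append((i, f"duplicate sku {sku!r}"))
        seen_skus.add(sku)
        try:
            float(row.get("price", ""))
        except ValueError:
            errors.append((i, "non-numeric price"))
    return errors
```

A row can contribute several messages, which is why the punch list is keyed by row number rather than one error per row.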
Preview before commit
The import flow has two phases:
- Preview — upload the file, see a diff (rows to create / update / unchanged), see any validation issues, adjust the column mapping if the auto-detection picked the wrong columns.
- Apply — commit the previewed changes in a single database transaction. If anything fails mid-apply, nothing is written.
You never "accidentally" run an import: the Apply changes button is enabled only when there are no issues, the mapping covers all required fields, and the projected row count is under your plan limit.
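The single-transaction apply phase can be sketched with SQLite standing in for the real database; the table, column names, and function are assumptions for illustration:

```python
import sqlite3

# All-or-nothing apply: one transaction, rolled back automatically if any
# row fails, so a mid-apply error writes nothing.
def apply_import(conn: sqlite3.Connection, rows: list[tuple[str, float]]) -> None:
    with conn:  # commits on success, rolls back on exception
        conn.executemany(
            "INSERT INTO products (sku, price) VALUES (?, ?) "
            "ON CONFLICT(sku) DO UPDATE SET price = excluded.price",
            rows,
        )
```

The `ON CONFLICT ... DO UPDATE` clause is the upsert-by-natural-key behavior from earlier expressed in SQL.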
Exports round-trip
Both products and customers can be exported to a CSV that's importable back in. Column order and header names match the import schema exactly, so you can:
- Export the current catalog.
- Edit in Excel / Google Sheets / Numbers.
- Re-import — new rows become creates, edits become updates.
Export filenames follow `{flow}-{YYYY-MM-DD}.csv` — for example `products-2026-04-16.csv`.
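The naming pattern as a one-liner (the helper function is hypothetical, not part of Distribu):

```python
from datetime import date

def export_filename(flow: str, on: date) -> str:
    """Build a "{flow}-{YYYY-MM-DD}.csv" export filename."""
    return f"{flow}-{on:%Y-%m-%d}.csv"
```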
Audit trail
Bulk imports land in the audit log:
- `ProductBulkImported` — metadata: `{ created, updated }` counts.
- `CustomerBulkImported` — metadata: `{ created, updated }` counts.
View the full log at Settings → Audit log.
Individual row-level changes are not audited separately — the summary row on the bulk import is the only trace. If you need per-row history, stay on the manual edit path for those specific records.
What's in this section
- Products format — every column, every synonym, validation rules, sample template.
- Customers format — email as the natural key, status/credit-limit fields, sample template.
- Orders export — the planned format for the upcoming orders CSV export, plus current workarounds via the REST API.
Next: Products format.
