A lightweight Python toolkit for CSV data integrity and organization. Features robust row-level deduplication with detailed reporting, structural file classification, and a CLI for predictable, transparent data cleaning.

csvsmith

Introduction

csvsmith is a lightweight collection of CSV utilities designed for data integrity, deduplication, and organization. It provides a robust Python API for programmatic data cleaning and a convenient CLI for quick operations. Whether you need to organize thousands of files based on their structural signatures or pinpoint duplicate rows in a complex dataset, csvsmith ensures the process is predictable, transparent, and reversible.

Table of Contents

- [Installation](#installation)
- [Python API Usage](#python-api-usage)
  - [Count duplicate values](#count-duplicate-values)
  - [Find duplicate rows in a DataFrame](#find-duplicate-rows-in-a-dataframe)
  - [Deduplicate with report](#deduplicate-with-report)
  - [CSV File Classification](#csv-file-classification)
- [CLI Usage](#cli-usage)
  - [Show duplicate rows](#show-duplicate-rows)
  - [Deduplicate and generate a duplicate report](#deduplicate-and-generate-a-duplicate-report)
  - [Classify CSVs](#classify-csvs)
- [Philosophy](#philosophy)
- [License](#license)

Installation

From PyPI:

pip install csvsmith

For local development:

git clone https://github.com/yeiichi/csvsmith.git
cd csvsmith
python -m venv .venv
source .venv/bin/activate
pip install -e .[dev]

Python API Usage

Count duplicate values

Works on any iterable of hashable items.

from csvsmith import count_duplicates_sorted

items = ["a", "b", "a", "c", "a", "b"]
print(count_duplicates_sorted(items))
# [('a', 3), ('b', 2)]

Find duplicate rows in a DataFrame

import pandas as pd
from csvsmith import find_duplicate_rows

df = pd.read_csv("input.csv")
dup_rows = find_duplicate_rows(df)
print(dup_rows)

Deduplicate with report

import pandas as pd
from csvsmith import dedupe_with_report

df = pd.read_csv("input.csv")

# Use all columns
deduped, report = dedupe_with_report(df)
deduped.to_csv("deduped.csv", index=False)
report.to_csv("duplicate_report.csv", index=False)

# Use all columns except an ID column
deduped_no_id, report_no_id = dedupe_with_report(df, exclude=["id"])

CSV File Classification

Organize files into directories based on their headers.

from csvsmith.classify import CSVClassifier

classifier = CSVClassifier(
    source_dir="./raw_data",
    dest_dir="./organized",
    auto=True  # Automatically group files with identical headers
)

# Execute the classification
classifier.run()

# Or rollback a previous run using its manifest
classifier.rollback("./organized/manifest_20260121_120000.json")
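
Grouping is driven by a structural signature of each file's header. csvsmith's exact signature scheme isn't documented here; purely as an illustration, a signature could be a short hash of the header tuple. The `header_signature` helper below is hypothetical, not part of the csvsmith API:

```python
import csv
import hashlib
from pathlib import Path

def header_signature(path: Path) -> str:
    """Return a short structural signature derived from a CSV's header row."""
    with path.open(newline="") as fh:
        header = next(csv.reader(fh))
    # Join on the ASCII unit separator so a comma inside a header name
    # cannot collide with the field delimiter.
    joined = "\x1f".join(header)
    return hashlib.sha256(joined.encode("utf-8")).hexdigest()[:12]
```

Under this sketch, files whose headers match column-for-column share a signature and land in the same destination folder, while a reordered or renamed header yields a different signature.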

CLI Usage

csvsmith includes a command-line interface for duplicate detection and file organization.

Show duplicate rows

csvsmith row-duplicates input.csv

Save only duplicate rows to a file:

csvsmith row-duplicates input.csv -o duplicates_only.csv

Deduplicate and generate a duplicate report

csvsmith dedupe input.csv --deduped deduped.csv --report duplicate_report.csv

Classify CSVs

Organize a mess of CSV files into structured folders based on their column headers.

# Preview what would happen (Dry Run)
csvsmith classify --src ./raw_data --dest ./organized --auto --dry-run

# Run classification with a signature config
csvsmith classify --src ./raw_data --dest ./organized --config signatures.json

# Undo a classification run
csvsmith classify --rollback ./organized/manifest_20260121_120000.json

Philosophy

  1. CSVs deserve tools that are simple, predictable, and transparent.
  2. A row has meaning only when its identity is stable and hashable.
  3. Collisions are sin; determinism is virtue.
  4. Let no delimiter sow ambiguity among fields.
  5. Love thy \x1f. The unseen separator, the quiet guardian of clean hashes.
  6. The pipeline should be silent unless something is wrong.
  7. Your data deserves respect, and your tools should help you give it.
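
Item 5 refers to the ASCII unit separator, `\x1f` (0x1F): joining a row's fields on a character that never appears as an ordinary delimiter keeps distinct rows distinct before hashing. A minimal sketch of the idea (not csvsmith's internals):

```python
# Two different rows whose fields happen to flatten to the same string
row_a = ["a,b", "c"]
row_b = ["a", "b,c"]

# A naive comma join collides: both rows become "a,b,c".
assert ",".join(row_a) == ",".join(row_b)

# The unit separator preserves field boundaries, so the identities differ,
# and so would any hash computed over the joined string.
assert "\x1f".join(row_a) != "\x1f".join(row_b)
```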

For more, see MANIFESTO.md.

License

MIT License.
