diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md
new file mode 100644
index 0000000..0306f3b
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/bug_report.md
@@ -0,0 +1,32 @@
+---
+name: Bug report
+about: Report a reproducible AdLint bug
+labels: bug
+---
+
+## What happened?
+
+## Expected behavior
+
+## Reproduction steps
+
+```bash
+adlint scan ...
+```
+
+## Input shape
+
+Please use synthetic or redacted examples only. Do not paste private customer data or sensitive campaign data.
+
+## Environment
+
+- OS:
+- Python version:
+- AdLint version/commit:
+- Local model enabled? yes/no
+
+## Logs or output
+
+```text
+paste relevant output here
+```
diff --git a/.github/ISSUE_TEMPLATE/eval_case.md b/.github/ISSUE_TEMPLATE/eval_case.md
new file mode 100644
index 0000000..2f60e12
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/eval_case.md
@@ -0,0 +1,31 @@
+---
+name: Eval case
+about: Add a synthetic, public-source, or paraphrased eval case
+labels: eval, good first issue
+---
+
+## Dataset target
+
+- [ ] seed
+- [ ] benchmark
+- [ ] real cases
+- [ ] blind holdout
+- [ ] rewrite quality
+
+## Case type
+
+- [ ] synthetic
+- [ ] public-source paraphrase
+- [ ] other
+
+## Expected decision
+
+- [ ] approved
+- [ ] needs_review
+- [ ] high_risk
+
+## Why this matters
+
+## Source/reference
+
+If public-source based, include source URL and paraphrase. Do not include private data.
diff --git a/.github/ISSUE_TEMPLATE/policy_rule.md b/.github/ISSUE_TEMPLATE/policy_rule.md
new file mode 100644
index 0000000..7e6fa4a
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/policy_rule.md
@@ -0,0 +1,35 @@
+---
+name: Policy rule request
+about: Suggest a new or improved ad policy rule
+labels: policy, good first issue
+---
+
+## Policy area
+
+Examples: health claims, finance, privacy, disclosure, landing-page mismatch, brand safety, platform policy.
+
+## Risk pattern
+
+What pattern should AdLint catch?
+
+## Example copy
+
+Use synthetic or public/paraphrased examples only.
+
+```text
+Example ad copy here
+```
+
+## Suggested decision
+
+- [ ] approved
+- [ ] needs_review
+- [ ] high_risk
+
+## Recommended action
+
+What should the user change to lower risk?
+
+## References
+
+Link public platform/legal/industry guidance if available.
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
new file mode 100644
index 0000000..1d6ccbe
--- /dev/null
+++ b/CONTRIBUTING.md
@@ -0,0 +1,83 @@
+# Contributing to AdLint
+
+Thanks for helping improve AdLint. The project is local-first decision-support software for preflight ad, landing-page, brand-safety, privacy, and disclosure checks.
+
+## Good first contributions
+
+Good first issues usually fall into one of these buckets:
+
+- Add or improve a policy YAML rule with clear evidence and recommended action.
+- Add an example ad config under `examples/`.
+- Add or improve eval rows in `evals/datasets/`.
+- Improve README, docs, or legal-boundary language.
+- Add tests for a policy edge case.
+
+## Development setup
+
+```bash
+python3 -m venv .venv
+. .venv/bin/activate
+python -m pip install -e ".[dev]"
+make test
+```
+
+Useful checks:
+
+```bash
+make eval
+make benchmark
+make real-cases-ci
+make real-world-blind-ci
+make policy-coverage-validate
+make rewrite-quality
+```
+
+Before opening eval or policy PRs, run:
+
+```bash
+make pr-preflight
+```
+
+## Policy contribution guidelines
+
+Policies should be explainable and conservative:
+
+- Include a stable `id`.
+- Set severity intentionally: `low`, `medium`, `high`, or `critical`.
+- Prefer specific signals over broad words that create false positives.
+- Include `recommended_action` so users know how to lower risk.
+- Use `requires_review: true` for sensitive legal/privacy/platform concerns instead of pretending AdLint can make definitive legal decisions.
+- Add tests or eval rows for the expected behavior.
+
+Example:
+
+```yaml
+policies:
+  - id: unsupported_health_claim
+    severity: high
+    category: health_claims
+    modules: [health_claims]
+    industries: [health, wellness]
+    signals:
+      - clinically proven
+      - medical breakthrough
+    recommended_action: Remove or qualify the claim and provide substantiation.
+    requires_review: true
+```
+
+## Privacy and safety boundaries
+
+AdLint should remain privacy-conscious:
+
+- Do not add raw submission persistence by default.
+- Do not include real private customer data in tests, examples, evals, screenshots, or docs.
+- Do not claim legal compliance, guaranteed platform approval, or definitive statutory determinations.
+- Keep local model features as decision support unless benchmarked evidence proves otherwise.
+
+## Pull request checklist
+
+- [ ] I ran relevant tests or documented why not.
+- [ ] I added/updated tests or eval rows for behavior changes.
+- [ ] I preserved decision-support and legal-boundary language.
+- [ ] I avoided adding private data, secrets, or raw real ad submissions.
+- [ ] I updated docs if the user-facing behavior changed.
diff --git a/README.md b/README.md
index db21787..6587883 100644
--- a/README.md
+++ b/README.md
@@ -11,6 +11,41 @@ evidence, recommended actions, and safer rewrite suggestions.
 
 AdLint is decision-support software, not legal advice. It does not guarantee
 platform approval or make definitive statutory violation determinations.
+
+## Why AdLint?
+
+Ad review usually fails late: after creative is built, traffic is ready, or a
+platform review blocks launch. Enterprise compliance tools can be opaque,
+expensive, and hard to adapt to a team's actual growth workflow. Generic LLM
+review is flexible, but often ungrounded and inconsistent.
+
+AdLint takes a different path:
+
+- **Local-first**: run checks without sending campaign copy to a hosted service.
+- **Policy-as-code**: review logic lives in auditable YAML files.
+- **Explainable**: every decision includes policy IDs, evidence, severities, and recommended actions.
+- **Composable**: use it as a CLI, FastAPI service, importable Python engine, or local Web UI.
+- **Benchmark-oriented**: eval datasets, policy coverage, and blind holdout diagnostics are first-class.
+- **Legally careful**: AdLint flags review risk; it does not promise compliance or platform approval.
+
+AdLint is for growth teams that want a preflight check before legal/platform
+review, not a black-box replacement for reviewers.
+
+## Demo surfaces
+
+AdLint currently has three demo-friendly entry points:
+
+1. **CLI** — scan a JSON/YAML campaign config and write JSON/Markdown reports.
+2. **Local Web UI** — paste copy, configure platform/industry/model settings, review findings, and export reports.
+3. **FastAPI** — embed `/analyze` into internal tools or CI workflows.
+
+Suggested screenshot/GIF flow for the public repo:
+
+```bash
+adlint scan examples/high_risk_tiktok_health.json --format markdown
+make api # then open http://127.0.0.1:8000/ui/
+```
+
 ## What runs today
 
 - Python package with the `adlint scan` CLI.
@@ -420,6 +455,21 @@ endpoint.
 
 If the model endpoint is unavailable, AdLint still returns rule-based
 findings and marks the model status as `unavailable`.
+
+## Contributing
+
+Contributions are welcome, especially policy rules, synthetic eval cases,
+platform-specific examples, documentation, and tests for edge cases. Start with
+[`CONTRIBUTING.md`](CONTRIBUTING.md) and the issue templates.
+
+High-value contribution areas:
+
+- Meta Ads policy coverage.
+- More public-source/paraphrased eval cases.
+- Landing-page extraction improvements.
+- Safer rewrite-quality evaluation.
+- Docs, examples, screenshots, and launch polish.
+
 ## Related docs
 
 - `docs/policy_design.md`
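Reviewer note: the policy guidelines in the `CONTRIBUTING.md` added above boil down to a few required fields (`id`, `severity`, `signals`, `recommended_action`). A minimal sketch of a local pre-PR sanity check for a contributed rule follows; `validate_policy` and `ALLOWED_SEVERITIES` are hypothetical helpers invented for illustration, not part of AdLint's API, and `make policy-coverage-validate` remains the authoritative check.

```python
# Illustrative only: a quick pre-PR sanity check for a contributed policy
# rule. validate_policy/ALLOWED_SEVERITIES are hypothetical, not AdLint API;
# the real validation is `make policy-coverage-validate`.

ALLOWED_SEVERITIES = {"low", "medium", "high", "critical"}

def validate_policy(policy: dict) -> list[str]:
    """Return a list of problems; an empty list means the rule looks well-formed."""
    problems = []
    if not policy.get("id"):
        problems.append("missing stable `id`")
    if policy.get("severity") not in ALLOWED_SEVERITIES:
        problems.append("severity must be one of low/medium/high/critical")
    if not policy.get("signals"):
        problems.append("at least one signal is required")
    if not policy.get("recommended_action"):
        problems.append("missing `recommended_action`")
    return problems

# The rule from the CONTRIBUTING.md example, as its parsed-YAML dict.
example = {
    "id": "unsupported_health_claim",
    "severity": "high",
    "category": "health_claims",
    "signals": ["clinically proven", "medical breakthrough"],
    "recommended_action": "Remove or qualify the claim and provide substantiation.",
    "requires_review": True,
}

assert validate_policy(example) == []
```

Each reported problem maps directly onto one of the policy contribution guidelines, so a clean run is a reasonable signal before opening the PR.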