32 changes: 32 additions & 0 deletions .github/ISSUE_TEMPLATE/bug_report.md
---
name: Bug report
about: Report a reproducible AdLint bug
labels: bug
---

## What happened?

## Expected behavior

## Reproduction steps

```bash
adlint scan ...
```

## Input shape

Please use synthetic or redacted examples only. Do not paste private customer data or sensitive campaign data.

## Environment

- OS:
- Python version:
- AdLint version/commit:
- Local model enabled? yes/no

## Logs or output

```text
paste relevant output here
```
31 changes: 31 additions & 0 deletions .github/ISSUE_TEMPLATE/eval_case.md
---
name: Eval case
about: Add a synthetic, public-source, or paraphrased eval case
labels: eval, good first issue
---

## Dataset target

- [ ] seed
- [ ] benchmark
- [ ] real cases
- [ ] blind holdout
- [ ] rewrite quality

## Case type

- [ ] synthetic
- [ ] public-source paraphrase
- [ ] other

## Expected decision

- [ ] approved
- [ ] needs_review
- [ ] high_risk

## Why this matters

## Source/reference

If the case is based on a public source, include the source URL and your paraphrase. Do not include private data.
35 changes: 35 additions & 0 deletions .github/ISSUE_TEMPLATE/policy_rule.md
---
name: Policy rule request
about: Suggest a new or improved ad policy rule
labels: policy, good first issue
---

## Policy area

Examples: health claims, finance, privacy, disclosure, landing-page mismatch, brand safety, platform policy.

## Risk pattern

What pattern should AdLint catch?

## Example copy

Use synthetic or public/paraphrased examples only.

```text
Example ad copy here
```

## Suggested decision

- [ ] approved
- [ ] needs_review
- [ ] high_risk

## Recommended action

What should the user change to lower risk?

## References

Link public platform/legal/industry guidance if available.
83 changes: 83 additions & 0 deletions CONTRIBUTING.md
# Contributing to AdLint

Thanks for helping improve AdLint. The project is local-first decision-support software for preflight ad, landing-page, brand-safety, privacy, and disclosure checks.

## Good first contributions

Good first issues usually fall into one of these buckets:

- Add or improve a policy YAML rule with clear evidence and recommended action.
- Add an example ad config under `examples/`.
- Add or improve eval rows in `evals/datasets/`.
- Improve README, docs, or legal-boundary language.
- Add tests for a policy edge case.
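For eval rows, here is a minimal sketch of the kind of record a dataset row might hold. The field names are illustrative assumptions, not AdLint's actual dataset schema; the allowed decisions (`approved`, `needs_review`, `high_risk`) come from the issue templates.

```python
import json

# Hypothetical eval row; field names are illustrative assumptions,
# not AdLint's real dataset schema.
row = {
    "id": "synthetic_health_001",
    "case_type": "synthetic",
    "copy": "Our supplement is clinically proven to cure fatigue.",
    "platform": "tiktok",
    "industry": "health",
    "expected_decision": "high_risk",  # one of: approved, needs_review, high_risk
}

# Basic sanity check before adding the row to a dataset file.
allowed = {"approved", "needs_review", "high_risk"}
assert row["expected_decision"] in allowed
print(json.dumps(row))
```

Keeping rows synthetic or paraphrased from public sources matches the privacy boundaries below.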

## Development setup

```bash
python3 -m venv .venv
. .venv/bin/activate
python -m pip install -e ".[dev]"
make test
```

Useful checks:

```bash
make eval
make benchmark
make real-cases-ci
make real-world-blind-ci
make policy-coverage-validate
make rewrite-quality
```

Before opening eval or policy PRs, run:

```bash
make pr-preflight
```

## Policy contribution guidelines

Policies should be explainable and conservative:

- Include a stable `id`.
- Set severity intentionally: `low`, `medium`, `high`, or `critical`.
- Prefer specific signals over broad words that create false positives.
- Include `recommended_action` so users know how to lower risk.
- Use `requires_review: true` for sensitive legal/privacy/platform concerns instead of pretending AdLint can make definitive legal decisions.
- Add tests or eval rows for the expected behavior.

Example:

```yaml
policies:
- id: unsupported_health_claim
severity: high
category: health_claims
modules: [health_claims]
industries: [health, wellness]
signals:
- clinically proven
- medical breakthrough
recommended_action: Remove or qualify the claim and provide substantiation.
requires_review: true
```

## Privacy and safety boundaries

AdLint should remain privacy-conscious:

- Do not add raw submission persistence by default.
- Do not include real private customer data in tests, examples, evals, screenshots, or docs.
- Do not claim legal compliance, guaranteed platform approval, or definitive statutory determinations.
- Keep local model features as decision support unless benchmarked evidence proves otherwise.

## Pull request checklist

- [ ] I ran relevant tests or documented why not.
- [ ] I added/updated tests or eval rows for behavior changes.
- [ ] I preserved decision-support and legal-boundary language.
- [ ] I avoided adding private data, secrets, or raw real ad submissions.
- [ ] I updated docs if the user-facing behavior changed.
50 changes: 50 additions & 0 deletions README.md
evidence, recommended actions, and safer rewrite suggestions.
AdLint is decision-support software, not legal advice. It does not guarantee
platform approval or make definitive statutory violation determinations.


## Why AdLint?

Ad review usually fails late: after creative is built, traffic is ready, or a
platform review blocks launch. Enterprise compliance tools can be opaque,
expensive, and hard to adapt to a team's actual growth workflow. Generic LLM
review is flexible, but often ungrounded and inconsistent.

AdLint takes a different path:

- **Local-first**: run checks without sending campaign copy to a hosted service.
- **Policy-as-code**: review logic lives in auditable YAML files.
- **Explainable**: every decision includes policy IDs, evidence, severities, and recommended actions.
- **Composable**: use it as a CLI, FastAPI service, importable Python engine, or local Web UI.
- **Benchmark-oriented**: eval datasets, policy coverage, and blind holdout diagnostics are first-class.
- **Legally careful**: AdLint flags review risk; it does not promise compliance or platform approval.

AdLint is for growth teams that want a preflight check before legal/platform
review, not a black-box replacement for reviewers.

## Demo surfaces

AdLint currently has three demo-friendly entry points:

1. **CLI** — scan a JSON/YAML campaign config and write JSON/Markdown reports.
2. **Local Web UI** — paste copy, configure platform/industry/model settings, review findings, and export reports.
3. **FastAPI** — embed `/analyze` into internal tools or CI workflows.

Suggested screenshot/GIF flow for the public repo:

```bash
adlint scan examples/high_risk_tiktok_health.json --format markdown
make api # then open http://127.0.0.1:8000/ui/
```
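For the FastAPI surface, a call to `/analyze` might look like the sketch below. The endpoint path comes from this README, but the payload field names are illustrative assumptions, not the documented request schema:

```python
import json
from urllib import request

# Hypothetical payload; /analyze is the real endpoint per the README,
# but these field names are illustrative assumptions.
payload = {
    "copy": "Clinically proven to double your savings overnight.",
    "platform": "tiktok",
    "industry": "finance",
}
body = json.dumps(payload).encode("utf-8")

req = request.Request(
    "http://127.0.0.1:8000/analyze",
    data=body,
    headers={"Content-Type": "application/json"},
)

if __name__ == "__main__":
    # Requires the API to be running locally (e.g. via `make api`).
    with request.urlopen(req) as resp:
        print(json.loads(resp.read()))
```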

## What runs today

- Python package with the `adlint scan` CLI.
endpoint.
If the model endpoint is unavailable, AdLint still returns rule-based findings
and marks the model status as `unavailable`.


## Contributing

Contributions are welcome, especially policy rules, synthetic eval cases,
platform-specific examples, documentation, and tests for edge cases. Start with
[`CONTRIBUTING.md`](CONTRIBUTING.md) and the issue templates.

High-value contribution areas:

- Meta Ads policy coverage.
- More public-source/paraphrased eval cases.
- Landing-page extraction improvements.
- Safer rewrite-quality evaluation.
- Docs, examples, screenshots, and launch polish.

## Related docs

- `docs/policy_design.md`