From 3b9540cad2859f506c31ea89858808710002a24e Mon Sep 17 00:00:00 2001
From: bdevz <87504907+bdevz@users.noreply.github.com>
Date: Wed, 29 Apr 2026 10:04:38 -0400
Subject: [PATCH 1/6] Add humanizer-classics: book-grounded writing skill (v3 fork)

New sibling skill at humanizer-classics/ that refines AI-generated text using
craft prescriptions from foundational writing books (Zinsser, Strunk & White,
HBR Guide), in addition to the existing Wikipedia detection layer. The v2
humanizer at the repo root is unchanged so existing v2.5.1 users stay stable.

Why a fork: v2 catalogs what AI writing looks like (29 detection patterns);
v3 adds what good human writing does (16 craft rules, each sourced and
citable). The architecture also moves from a single 559-line SKILL.md to a
slim catalog + lazy-loaded references/ files, matching the SKILL.md +
references/ convention used by other Claude Code skills.

Includes:

- 45 rules total (29 W-Wikipedia + 6 Z-Zinsser + 5 S-Strunk-White + 5 H-HBR),
  each with citation, pull-quote (10-25 words fair use), cross-references,
  context tags, before/after, and a how-to-apply mechanical move
- context_tags system to resolve rule conflicts across forms (memo, email,
  blog, book-draft, technical-doc, dictation, meeting-notes)
- Live Granola integration via mcp__granola__* tools (workflow doc at
  references/granola-meeting-transcripts.md, tools listed in SKILL.md
  frontmatter); Wispr Flow guidance covered in same file
- 6 corpus samples with expected-fixes pairs covering marketing slop,
  business memo, dictation, meeting-notes, AI LinkedIn, book draft
- CONTRIBUTING.md, CHANGELOG.md, MAINTAINERS.md, issue templates for the
  ongoing-repository contribution model
---
 .../.github/ISSUE_TEMPLATE/new-book.md | 67 +++
 .../.github/ISSUE_TEMPLATE/new-rule.md | 73 ++++
 humanizer-classics/CHANGELOG.md | 56 +++
 humanizer-classics/CONTRIBUTING.md | 117 ++++++
 humanizer-classics/LICENSE | 21 +
 humanizer-classics/MAINTAINERS.md | 43 ++
 humanizer-classics/README.md | 
143 +++++++ humanizer-classics/SKILL.md | 292 +++++++++++++ humanizer-classics/references/_rule-index.md | 78 ++++ .../references/_template-book-rules.md | 59 +++ .../references/granola-meeting-transcripts.md | 100 +++++ .../hbr-guide-better-business-writing.md | 167 ++++++++ .../strunk-and-white-elements-of-style.md | 131 ++++++ .../wikipedia-signs-of-ai-writing.md | 384 ++++++++++++++++++ .../references/zinsser-on-writing-well.md | 154 +++++++ humanizer-classics/tests/REVIEWING.md | 87 ++++ .../01-marketing-slop.expected-fixes.md | 38 ++ .../tests/corpus/01-marketing-slop.md | 29 ++ .../corpus/02-business-memo.expected-fixes.md | 28 ++ .../tests/corpus/02-business-memo.md | 24 ++ .../03-dictation-transcript.expected-fixes.md | 32 ++ .../tests/corpus/03-dictation-transcript.md | 7 + .../corpus/04-meeting-notes.expected-fixes.md | 39 ++ .../tests/corpus/04-meeting-notes.md | 31 ++ .../05-ai-linkedin-post.expected-fixes.md | 43 ++ .../tests/corpus/05-ai-linkedin-post.md | 25 ++ .../06-book-draft-excerpt.expected-fixes.md | 45 ++ .../tests/corpus/06-book-draft-excerpt.md | 19 + 28 files changed, 2332 insertions(+) create mode 100644 humanizer-classics/.github/ISSUE_TEMPLATE/new-book.md create mode 100644 humanizer-classics/.github/ISSUE_TEMPLATE/new-rule.md create mode 100644 humanizer-classics/CHANGELOG.md create mode 100644 humanizer-classics/CONTRIBUTING.md create mode 100644 humanizer-classics/LICENSE create mode 100644 humanizer-classics/MAINTAINERS.md create mode 100644 humanizer-classics/README.md create mode 100644 humanizer-classics/SKILL.md create mode 100644 humanizer-classics/references/_rule-index.md create mode 100644 humanizer-classics/references/_template-book-rules.md create mode 100644 humanizer-classics/references/granola-meeting-transcripts.md create mode 100644 humanizer-classics/references/hbr-guide-better-business-writing.md create mode 100644 humanizer-classics/references/strunk-and-white-elements-of-style.md create mode 100644 
humanizer-classics/references/wikipedia-signs-of-ai-writing.md create mode 100644 humanizer-classics/references/zinsser-on-writing-well.md create mode 100644 humanizer-classics/tests/REVIEWING.md create mode 100644 humanizer-classics/tests/corpus/01-marketing-slop.expected-fixes.md create mode 100644 humanizer-classics/tests/corpus/01-marketing-slop.md create mode 100644 humanizer-classics/tests/corpus/02-business-memo.expected-fixes.md create mode 100644 humanizer-classics/tests/corpus/02-business-memo.md create mode 100644 humanizer-classics/tests/corpus/03-dictation-transcript.expected-fixes.md create mode 100644 humanizer-classics/tests/corpus/03-dictation-transcript.md create mode 100644 humanizer-classics/tests/corpus/04-meeting-notes.expected-fixes.md create mode 100644 humanizer-classics/tests/corpus/04-meeting-notes.md create mode 100644 humanizer-classics/tests/corpus/05-ai-linkedin-post.expected-fixes.md create mode 100644 humanizer-classics/tests/corpus/05-ai-linkedin-post.md create mode 100644 humanizer-classics/tests/corpus/06-book-draft-excerpt.expected-fixes.md create mode 100644 humanizer-classics/tests/corpus/06-book-draft-excerpt.md diff --git a/humanizer-classics/.github/ISSUE_TEMPLATE/new-book.md b/humanizer-classics/.github/ISSUE_TEMPLATE/new-book.md new file mode 100644 index 00000000..53f18cfa --- /dev/null +++ b/humanizer-classics/.github/ISSUE_TEMPLATE/new-book.md @@ -0,0 +1,67 @@ +--- +name: New book proposal +about: Propose adding a new foundational writing book to the canon +title: "[New book] " +labels: book-proposal +--- + +## Book + +**Title:** *...* +**Author:** ... +**Edition + year:** ... +**Publisher:** ... +**ISBN (optional):** ... + +## Why this book belongs in humanizer-classics + +3-5 sentences on what this book teaches that the existing rules don't already cover. What's the unique angle? Who is this book for? When does its advice matter most? 
+
+## Proposed rule prefix
+
+E.g., `W` is taken (Wikipedia), `Z` is taken (Zinsser), `S` is taken (Strunk & White), `H` is taken (HBR Guide). Pick a single letter not already used — usually the first letter of the author's last name. Confirm by checking `references/_rule-index.md`.
+
+## Proposed rule list
+
+Sketch 3-7 rules you'd add from this book. Each should have:
+
+- **Rule name** (one line, imperative)
+- **Source** (chapter, page)
+- **What it adds** (why this rule isn't already covered by Z/S/H/W)
+
+Format:
+
+| ID | Rule name | Source | What it adds |
+|----|-----------|--------|--------------|
+| X-1 | ... | ch. N | ... |
+| X-2 | ... | ch. N | ... |
+| X-3 | ... | ch. N | ... |
+
+## Voice and pull-quote sample
+
+Paste 1-2 short pull-quotes (10-25 words each) you'd plan to cite. This helps reviewers gauge whether the book's voice fits the project.
+
+## License / fair use
+
+Confirm:
+
+- [ ] You can cite the book under fair use for short pull-quotes (10-25 words for educational commentary)
+- [ ] You're not proposing to reproduce extended excerpts, chapter summaries, or proprietary material
+
+## Plan
+
+- [ ] I plan to submit the PR myself
+- [ ] I'm flagging this for a maintainer to pick up
+
+If submitting yourself, see `CONTRIBUTING.md` for the file format and the acceptance bar. The PR should include:
+
+- A new file at `references/<lastname>-<book-title>.md`
+- Updates to `references/_rule-index.md` (add the new rule rows + cross-references)
+- Updates to `SKILL.md` (add the new book's rules to the catalog and the references list)
+- Updates to `README.md` ("Books currently included")
+- At least one new corpus sample at `tests/corpus/` exercising rules from this book
+- A version bump (minor — e.g., 3.0.x → 3.1.0)
+
+## Acknowledgments
+
+Anyone you'd like credited (yourself, the book's author, anyone whose review helped shape the proposal). 
diff --git a/humanizer-classics/.github/ISSUE_TEMPLATE/new-rule.md b/humanizer-classics/.github/ISSUE_TEMPLATE/new-rule.md new file mode 100644 index 00000000..95dcd413 --- /dev/null +++ b/humanizer-classics/.github/ISSUE_TEMPLATE/new-rule.md @@ -0,0 +1,73 @@ +--- +name: New rule proposal +about: Propose a new rule for an existing book file (or for the Wikipedia detection layer) +title: "[New rule] " +labels: rule-proposal +--- + +## Source + +Which book is this rule from? + +- [ ] Zinsser, *On Writing Well* +- [ ] Strunk & White, *The Elements of Style* +- [ ] Garner, *HBR Guide to Better Business Writing* +- [ ] Wikipedia, *Signs of AI writing* (detection layer) +- [ ] Other (please specify and consider opening a `new-book` issue instead) + +**Specific reference:** chapter, section, or page number — be precise so the rule can be traced back to the source. + +## Proposed rule ID + +E.g., `Z-7`, `S-6`, `H-6`. Check `references/_rule-index.md` to confirm the next available number. + +## One-line rule name + +Imperative or prescriptive form. E.g., "Cut throat-clearing openers" or "Use the active voice." + +## Pull-quote (10-25 words from the source) + +> "..." +> — Author, ch. N + +## Detection cue + +What pattern in the input text signals this rule should fire? Be specific — keywords, sentence shapes, or structural tells. + +## Problem + +2-4 sentences. What does the LLM do wrong? Why does it sound off to a human reader? + +## Before / After + +**Before** +> [Realistic AI-generated text — not a strawman] + +**After** +> [The same passage rewritten following the rule] + +## Cross-references + +Which existing W-rules does this rule help fix? Which other book rules does it interlock with? + +## Context tags + +Which contexts should this rule fire in? Pick from: `all`, `memo`, `email`, `blog`, `book-draft`, `technical-doc`, `dictation`, `meeting-notes`. Justify if you're picking a non-obvious set. 
+ +## Why this rule isn't a duplicate + +How does this rule add something the existing rules don't already cover? If it's a sharper version of an existing rule, why does the project benefit from both? + +## Acceptance bar self-check + +- [ ] Traceable to a specific page or chapter +- [ ] Maps to a real AI-failure mode +- [ ] Adds something the existing rule set doesn't cover +- [ ] Before/after example is realistic, not a strawman +- [ ] Detection cue is specific enough to actually find the pattern +- [ ] "How to apply" gives a mechanical move + +## I plan to submit a PR + +- [ ] Yes, I'll open the PR (one rule per PR per `CONTRIBUTING.md`) +- [ ] No, I'm flagging this for a maintainer to pick up diff --git a/humanizer-classics/CHANGELOG.md b/humanizer-classics/CHANGELOG.md new file mode 100644 index 00000000..68981dc4 --- /dev/null +++ b/humanizer-classics/CHANGELOG.md @@ -0,0 +1,56 @@ +# Changelog + +All notable changes to humanizer-classics. Format roughly follows [Keep a Changelog](https://keepachangelog.com/). + +## [3.0.0] — 2026-04-29 + +### Added + +- Initial v3 release as a fork of `humanizer` v2.5.1. +- New architecture: slim `SKILL.md` dispatcher (~250 lines) + per-source `references/` files lazy-loaded as rules fire. 
+- 16 new craft rules sourced from foundational writing books, each with a citation and pull-quote: + - **Zinsser, *On Writing Well*** (30th anniversary ed., 2006) — Z-1 through Z-6: + - Z-1: Cut clutter — every word that does no work + - Z-2: Use short, plain, Anglo-Saxon words + - Z-3: Active verbs do the work; kill nominalizations + - Z-4: Strip qualifiers + - Z-5: Be present on the page; have a self + - Z-6: Endings matter — quit when the work is done + - **Strunk & White, *The Elements of Style*** (4th ed., 1999) — S-1 through S-5: + - S-1: Omit needless words + - S-2: Use the active voice + - S-3: Put statements in positive form + - S-4: Use definite, specific, concrete language + - S-5: Do not overstate + - **Garner / HBR, *HBR Guide to Better Business Writing*** (1st ed., 2012) — H-1 through H-5: + - H-1: Lead with the bottom line (pyramid principle) + - H-2: Write for the skim-reader + - H-3: One idea per paragraph + - H-4: Imperative for instructions + - H-5: Cut throat-clearing openers +- `context_tags:` field on every rule for conflict resolution across forms (memo, email, blog, book-draft, technical-doc, dictation, meeting-notes). +- Cross-reference graph in `references/_rule-index.md` mapping detection rules (W) to fix rules (Z/S/H). +- Granola live integration via `mcp__granola__list_meetings`, `mcp__granola__get_meeting_transcript`, etc. Workflow documented in `references/granola-meeting-transcripts.md`. +- Wispr Flow dictation guidance (no MCP integration; comes through as pasted text). +- `tests/corpus/` with golden samples + `tests/REVIEWING.md` for manual reviewer checklist. +- `CONTRIBUTING.md` with one-rule-per-PR norm, rule acceptance bar, pull-quote licensing. +- GitHub issue templates: `new-rule.md`, `new-book.md`. + +### Changed + +- 29 Wikipedia "Signs of AI writing" rules renamed from numeric IDs (1-29) to prefixed IDs (W-1 through W-29) and lifted verbatim into `references/wikipedia-signs-of-ai-writing.md`. Content unchanged. 
+- Two-pass humanization process now context-aware: the chosen context tag governs which book rules fire. (E.g., Z-5 "be present on the page" doesn't fire on memo or technical-doc contexts; H-1 "lead with bottom line" doesn't fire on book-draft contexts.) +- Voice calibration section retained from v2.x verbatim. +- Personality and soul section retained from v2.x verbatim. + +### Migrated from v2.x + +- All 29 rules (Wikipedia "Signs of AI writing") +- Voice calibration feature +- Two-pass audit process +- Personality and soul guidance +- Full example with before/draft/audit/after + +### Note on coexistence with v2.x + +`humanizer-classics` is a fork, not a replacement. `humanizer` v2.5.1 remains stable for current users at `~/.claude/skills/humanizer/`. v3 installs alongside as `~/.claude/skills/humanizer-classics/`. Future work may merge them once v3 has proven out. diff --git a/humanizer-classics/CONTRIBUTING.md b/humanizer-classics/CONTRIBUTING.md new file mode 100644 index 00000000..d6c82d33 --- /dev/null +++ b/humanizer-classics/CONTRIBUTING.md @@ -0,0 +1,117 @@ +# Contributing to Humanizer Classics + +Thanks for helping make this a better tool for people who want their AI-assisted writing to sound human. This is meant to be an ongoing repository — new books, new rules, refinements to existing ones — and contributions from people who actually do this work for a living are the point. 
+
+## What you can contribute
+
+| Contribution | Effort | PR shape |
+|--------------|--------|----------|
+| Add a new rule to an existing book | Small | One rule per PR, ~30 lines |
+| Refine an existing rule (better example, sharper detection cue) | Small | One rule per PR |
+| Add a new book reference file | Medium | One book per PR, 3-7 rules |
+| Add a new corpus sample to `tests/corpus/` | Small | One sample pair per PR |
+| Refine the Granola/Wispr workflow | Medium | Update `references/granola-meeting-transcripts.md` |
+| Propose architectural changes | Large | Open an issue first, discuss before PR |
+
+## One rule per PR
+
+The repo follows a *one rule per PR* norm. This matches the cadence the upstream `humanizer` repo has used since v2.0 (e.g., commits "Add passive voice rule (#80)", "feat: add hyphenated word pair overuse pattern (#42)"). Small PRs are easier to review, easier to revert, and force each rule to stand on its own.
+
+If your rule depends on another change (e.g., a new book file), bundle the dependency into the same PR, and keep the *first new rule* alongside the book file so reviewers can see the rule format in context.
+
+## How to add a new rule
+
+1. **Decide which book file it belongs to.** If the rule is sourced from one of the existing books, add it there. If it's from a new book, copy `references/_template-book-rules.md` to `references/<lastname>-<book-title>.md` and start with that template.
+
+2. **Follow the rule format.** Every rule has these sections, in order:
+   - `### <ID>. <One-line rule name>`
+   - `**Source:**` — book, edition, chapter, page (as specifically as you can)
+   - **Pull-quote** — 10-25 words from the source, attributed
+   - `**Cross-references:**` — which W-rules this helps fix; which other book rules it interlocks with
+   - `**Context tags:**` — `all` or any combination of `memo`, `email`, `blog`, `book-draft`, `technical-doc`, `dictation`, `meeting-notes`
+   - `**Detection cue:**` — what pattern in the text signals this rule should fire
+   - `**Problem:**` — 2-4 sentences on the failure mode and why it sounds off
+   - `**Before** / **After**` — a realistic, non-strawman example pair
+   - `**How to apply:**` — the mechanical move; 1-3 sentences a writer can run on autopilot
+
+3. **Update `references/_rule-index.md`** — add a row to the rule-ID table and (if applicable) a row to the cross-reference graph.
+
+4. **Add a corpus sample if the rule introduces a new pattern.** Drop a `*.md` + `*.expected-fixes.md` pair into `tests/corpus/` showing the rule firing.
+
+5. **Bump the version.** See *Versioning* below.
+
+6. **Run the manual review** (see `tests/REVIEWING.md`) before opening the PR.
+
+## How to add a new book
+
+1. **Copy the template.** `cp references/_template-book-rules.md references/<lastname>-<book-title>.md`. Use kebab-case: `williams-style-lessons-in-clarity-and-grace.md`, `pinker-sense-of-style.md`.
+
+2. **Pick a single-letter rule prefix not already used.** Currently used: W, Z, S, H. The prefix should be the first letter of the author's last name when possible.
+
+3. **Write 3-7 rules following the format above.** 5 is the sweet spot — enough to be useful, few enough to review thoroughly.
+
+4. **Update `SKILL.md`** — add the new book's rules to the *Craft rules* table and to the references list at the bottom.
+
+5. **Update `references/_rule-index.md`** — add the new rule rows.
+
+6. **Update `README.md`** — add the book to the "Books currently included" list.
+
+7. **Add corpus samples** that exercise the new rules.
+
+8. 
**Bump the version** (minor bump for a new book — see *Versioning*). + +## Rule acceptance bar + +A rule is ready to merge when: + +- ✅ It traces to a specific page or chapter in a real, citable source +- ✅ It maps to a real AI-failure mode (not just generic "good writing" advice) +- ✅ It does something the existing rules don't already cover, or it covers them better with a positive prescription +- ✅ The before/after example is realistic AI-generated text, not a strawman +- ✅ The detection cue is specific enough that a reader (or a model) can find the pattern +- ✅ The "How to apply" gives a mechanical move, not a vague exhortation + +A rule is **not** ready when: + +- ❌ It restates a Wikipedia detection rule without adding a positive fix +- ❌ The example is contrived (no real AI writes that way) +- ❌ The pull-quote can't be traced to a specific edition + page +- ❌ The rule conflicts with another rule and there's no `context_tags` resolution +- ❌ It's about taste rather than craft (e.g., "use the Oxford comma" — that's style preference, not AI-failure) + +## Pull-quote licensing + +Pull-quotes from cited books are short excerpts (10-25 words) used for educational commentary, which is fair use under U.S. copyright law. Don't include longer excerpts. Don't reproduce entire chapters or large structural elements. When in doubt, paraphrase and cite. + +If a quote is approximate or paraphrased rather than exact, mark it `(paraphrased)` after the citation. + +## Conflict resolution + +When two rules disagree (e.g., Zinsser's Z-5 "be present on the page" wants first person, but H-1 "lead with bottom line" governs a memo where first person is wrong), the **`context_tags:`** field on each rule resolves the conflict. The rule applies only when the input's context tag is in its tag list. + +Current tags: `all`, `memo`, `email`, `blog`, `book-draft`, `technical-doc`, `dictation`, `meeting-notes`. Propose new tags via PR if the existing set doesn't fit. 
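
As a concrete reference for contributors, the rule format described in "How to add a new rule" can be sketched as a skeleton. This is a hypothetical illustration only: every angle-bracketed value is a placeholder to fill from a real source, and the quote lines are not real citations.

```markdown
### <ID>. <One-line rule name>

**Source:** <Author>, *<Book Title>*, <edition>, ch. <N>, p. <N>

> "<10-25 word pull-quote from the source>"
> — <Author>, ch. <N>

**Cross-references:** fixes W-<n>; interlocks with <other rule IDs>
**Context tags:** <`all`, or a subset of `memo`, `email`, `blog`, `book-draft`, `technical-doc`, `dictation`, `meeting-notes`>
**Detection cue:** <the pattern in the input text that signals this rule should fire>
**Problem:** <2-4 sentences on the failure mode and why it sounds off to a human reader>

**Before**
> <realistic AI-generated text, not a strawman>

**After**
> <the same passage rewritten following the rule>

**How to apply:** <the mechanical move; 1-3 sentences a writer can run on autopilot>
```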
+ +## Versioning + +This project follows [semver](https://semver.org/): + +- **Major bump (e.g., 3.x.x → 4.0.0)** — breaking architecture change. Don't introduce these without an issue and discussion. +- **Minor bump (e.g., 3.0.x → 3.1.0)** — new book added, or a structurally significant new feature (e.g., a new context tag). +- **Patch bump (e.g., 3.0.0 → 3.0.1)** — new rule in an existing book, refinement of an existing rule, doc fixes. + +Update both `SKILL.md` (frontmatter `version:` field) and `CHANGELOG.md` in the same PR. + +## Issue templates + +When opening an issue, use one of: + +- **`new-rule`** — propose a new rule +- **`new-book`** — propose a new book + +Templates live at `.github/ISSUE_TEMPLATE/`. + +## Code of conduct + +Be specific. Cite sources. Don't argue from taste — argue from text. The discussion should be about whether the rule actually catches a real AI-failure mode and whether the fix actually helps. + +If you disagree with an existing rule, open an issue with the specific text it mishandles. Concrete disagreements move the project forward. diff --git a/humanizer-classics/LICENSE b/humanizer-classics/LICENSE new file mode 100644 index 00000000..625297fb --- /dev/null +++ b/humanizer-classics/LICENSE @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2025 Siqi Chen + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. 
+ +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/humanizer-classics/MAINTAINERS.md b/humanizer-classics/MAINTAINERS.md new file mode 100644 index 00000000..8305d451 --- /dev/null +++ b/humanizer-classics/MAINTAINERS.md @@ -0,0 +1,43 @@ +# Maintainers + +This document lists the people who review PRs, accept new rules and books, and steer the direction of `humanizer-classics`. + +## Current maintainers + +| Name | Role | Contact | +|------|------|---------| +| Jason Mermilian | Lead curator (founding maintainer) | jason@jasonemermd.com | + +## What maintainers do + +- Review PRs against the rule acceptance bar (see `CONTRIBUTING.md`) +- Curate which books are admitted into the canon +- Resolve conflicts between rules using the `context_tags:` system +- Cut releases (semver bumps in `SKILL.md` frontmatter and `CHANGELOG.md`) +- Keep the rule index (`references/_rule-index.md`) accurate as rules are added or refined + +## How to become a maintainer + +This is meant to be an ongoing repository, not a one-person inbox. Maintainership is open to people who: + +1. Have contributed at least one accepted rule or book +2. Have reviewed PRs constructively (concrete, source-cited disagreements move the project forward) +3. Care about the craft — this isn't a place for taste disputes; it's a place for rules grounded in real writing books and real AI-failure modes + +If you'd like to help curate, open an issue or email the lead maintainer. + +## Conflicts of interest + +Maintainers may have authored books they want to add. 
That's fine — but if a maintainer wants to add their *own* book as a reference, the PR must be reviewed and approved by another maintainer, not self-merged. + +## Release cadence + +No fixed cadence. Releases happen as: + +- Patch releases (3.0.x): when a new rule is merged, ship the next day +- Minor releases (3.x.0): when a new book is merged, ship within a week (alongside corpus samples) +- Major releases (x.0.0): only for architectural changes; require an issue, discussion, and a written upgrade path + +## Acknowledgments + +The original `humanizer` skill (v2.x) was created by Siqi Chen and is the basis for v3. The core voice-calibration feature, two-pass audit process, and personality-and-soul guidance are inherited verbatim. diff --git a/humanizer-classics/README.md b/humanizer-classics/README.md new file mode 100644 index 00000000..3a187758 --- /dev/null +++ b/humanizer-classics/README.md @@ -0,0 +1,143 @@ +# Humanizer Classics + +A Claude Code / OpenCode skill that refines AI-generated text using rules drawn from the foundational books that taught humans to write well. + +> **For people who hate how AI writes.** We learned from books — Zinsser's *On Writing Well*, Strunk & White's *Elements of Style*, the *HBR Guide to Better Business Writing*. This skill turns those books' principles into rules a model can apply, so the writing that comes back from your AI tools sounds more like the writing that taught you in the first place. + +## What this is + +`humanizer-classics` is the v3 fork of [`humanizer`](../). Where v2 catalogs **what AI writing looks like** (29 patterns from Wikipedia's "Signs of AI writing"), v3 adds **what good human writing does** — craft prescriptions sourced from books, with citations. 
+
+- **45 rules** at launch: 29 detection rules (Wikipedia) + 16 craft rules (Zinsser × 6, Strunk & White × 5, HBR Guide × 5)
+- **Two-pass process**: draft → audit ("what makes this still obviously AI?") → final
+- **Voice calibration**: paste 2-3 paragraphs of your own writing and the skill matches your rhythm and word choice instead of generic "clean" prose
+- **Granola integration**: pull meeting transcripts directly via MCP and humanize them
+- **Wispr Flow ready**: dictation comes through as pasted text; the skill cleans it up while preserving your voice
+- **Pure Markdown**: no code, no dependencies, portable across Claude Code and OpenCode
+
+## Why a fork instead of an update
+
+The v2 skill works and is in active use at version 2.5.1. v3 changes the architecture (single SKILL.md → SKILL.md + `references/`) and adds book-grounded rules with citations. To keep v2 users stable, v3 ships as a separate skill (`humanizer-classics`) until it has proven out, after which it can be split to its own repo via `git subtree split`.
+
+## Installation
+
+### Claude Code
+
+```bash
+git clone https://github.com/<your-username>/humanizer-classics ~/.claude/skills/humanizer-classics
+```
+
+Then invoke with `/humanizer-classics` in any Claude Code session.
+
+### OpenCode
+
+```bash
+git clone https://github.com/<your-username>/humanizer-classics ~/.config/opencode/skills/humanizer-classics
+```
+
+## Usage
+
+### Pasted slop
+
+```
+/humanizer-classics
+
+[paste your AI-sounding text]
+```
+
+The skill identifies the AI patterns, applies the relevant book rules, runs the two-pass audit, and returns the cleaned text.
+
+### Voice calibration
+
+Paste a sample of your own writing to anchor the rewrite to your voice:
+
+```
+/humanizer-classics
+
+Humanize this. 
Match my voice — here's a sample: + +[2-3 paragraphs of your own writing] + +[the AI text to humanize] +``` + +### Granola meeting transcripts + +``` +/humanizer-classics + +Pull the transcript from my standup this morning and turn it into a memo. +``` + +The skill calls the Granola MCP tools (`mcp__granola__list_meetings`, `mcp__granola__get_meeting_transcript`), strips speaker labels and timestamps, and humanizes the result with the memo context tag (lead with bottom line, one idea per paragraph, cut throat-clearing). + +### Wispr Flow dictation + +Dictate into the prompt with Wispr Flow. The skill recognizes dictation patterns (run-ons, filler words, restarts) and cleans them up while preserving your spoken voice — it edits, it doesn't rewrite into a different register. + +### Specific rules + +You can also ask for a specific rule: + +``` +/humanizer-classics + +Apply Z-1 (cut clutter) to this paragraph: [text] +``` + +## Philosophy + +The two camps of writing rules are complementary halves: + +| Detection (W-rules) | Craft (Z, S, H rules) | +|---------------------|------------------------| +| What AI writing **looks like** | What good writing **does** | +| Source: Wikipedia | Source: foundational books | +| Spotting failure | Producing replacement | + +When a detection rule fires, the matching craft rule(s) usually offer the better fix — see `references/_rule-index.md` for the cross-reference graph. + +## Books currently included + +- **Zinsser**, *On Writing Well* (30th anniversary ed., 2006) — 6 rules +- **Strunk & White**, *The Elements of Style* (4th ed., 1999) — 5 rules +- **Garner / HBR**, *HBR Guide to Better Business Writing* (1st ed., 2012) — 5 rules + +## Roadmap (community contributions welcome) + +- *Style: Lessons in Clarity and Grace* — Joseph M. Williams +- *The Sense of Style* — Steven Pinker +- *Draft No. 
4* — John McPhee +- *Bird by Bird* — Anne Lamott +- *Several Short Sentences About Writing* — Verlyn Klinkenborg + +To propose a new book, see `CONTRIBUTING.md`. + +## Contributing + +This is meant to be an ongoing repository. To propose a new rule, refine an existing one, or add a new book, see `CONTRIBUTING.md`. The acceptance bar: + +1. Traceable to a specific page or chapter +2. Maps to a real AI-failure mode +3. Adds something the existing rule set doesn't already cover +4. Comes with a non-trivial before/after example + +## Acknowledgments + +Stands on the shoulders of: + +- **William Zinsser**, whose *On Writing Well* is the central text on writing nonfiction in clear English +- **William Strunk Jr. and E. B. White**, whose *Elements of Style* is the shortest sharpest writing guide in the language +- **Bryan A. Garner**, whose *HBR Guide* applies the same principles to business writing +- **WikiProject AI Cleanup**, whose [Signs of AI writing](https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing) catalog forms the detection layer +- **The original [`humanizer`](../) skill** by Siqi Chen, on which v3 is based + +Pull-quotes from the cited books are short excerpts (10-25 words) under fair use for educational commentary. + +## License + +MIT — see `LICENSE`. Wikipedia content (the W-rules) is incorporated under CC BY-SA 4.0 with attribution. + +## Version + +3.0.0 — initial v3 release. Changelog: `CHANGELOG.md`. diff --git a/humanizer-classics/SKILL.md b/humanizer-classics/SKILL.md new file mode 100644 index 00000000..ab9b90b6 --- /dev/null +++ b/humanizer-classics/SKILL.md @@ -0,0 +1,292 @@ +--- +name: humanizer-classics +version: 3.0.0 +description: | + Refine AI-generated text using rules drawn from foundational writing books + (Zinsser's On Writing Well, Strunk & White's Elements of Style, the HBR + Guide to Better Business Writing) plus Wikipedia's Signs of AI Writing. 
+ Use when editing or reviewing text that sounds AI-generated — pasted + slop, AI drafts, Wispr Flow dictation, Granola meeting transcripts. The + skill identifies AI patterns, applies craft prescriptions from the books, + and runs a two-pass audit before returning the final text. +license: MIT +compatibility: claude-code opencode +allowed-tools: + - Read + - Write + - Edit + - Grep + - Glob + - AskUserQuestion + - mcp__granola__list_meetings + - mcp__granola__query_granola_meetings + - mcp__granola__get_meeting_transcript + - mcp__granola__get_meetings + - mcp__granola__list_meeting_folders +--- + +# Humanizer Classics: Book-Grounded Writing Refinement + +You are a writing editor that refines AI-generated text using rules from foundational writing books. Where the original `humanizer` skill (v2.x) catalogs *what AI writing looks like*, this skill teaches *what good human writing does* — drawing on Zinsser, Strunk & White, the HBR Guide, and (over time) more books contributed by the community. + +## When to use this skill + +Trigger on any of: + +- Text pasted with the prompt "humanize this", "make this less AI", "remove the AI tone" +- Wispr Flow dictation that needs cleanup into readable prose +- Granola meeting transcripts being turned into memos, summaries, or blog posts +- Any AI-generated draft (LinkedIn, blog, email, book chapter) being revised before publication +- A user mentioning Zinsser, Strunk, or another rule prefix by ID (e.g., "apply Z-1 to this") + +## Inputs + +The skill accepts text from any of these sources: + +1. **Pasted text in the prompt** — the default +2. **A file path** — read the file with the `Read` tool +3. **A Granola meeting** — pull via `mcp__granola__*` tools (see `references/granola-meeting-transcripts.md`) +4. **Wispr Flow dictation** — comes through as pasted text; treat per the dictation guidance + +A **voice sample** is optional. 
When provided (inline as 2-3 paragraphs, or via file path), use the voice-calibration process below before running the rules. + +## Your task + +1. **Identify AI patterns** — scan for the W-1..W-29 detection rules (`references/wikipedia-signs-of-ai-writing.md`) +2. **Apply book rules** — for each pattern that fires, look up the corresponding Z / S / H fix rule via `references/_rule-index.md`. Read only the reference file(s) you need. +3. **Identify the context tag** — memo, email, blog, book-draft, technical-doc, dictation, meeting-notes — and let it govern which rules apply (e.g., Z-5 "be present on the page" does NOT fire on memo/technical-doc contexts; H-1 "lead with bottom line" does NOT fire on book-draft contexts) +4. **Preserve meaning** — keep the core message intact +5. **Match voice** — if a sample was provided, match its rhythm and word choice; otherwise default to a natural, varied, opinionated register +6. **Run the two-pass audit** — draft, then ask "what makes this still obviously AI?", then revise + +--- + +## Voice calibration (optional, but powerful) + +If the user provides a writing sample, analyze it before rewriting: + +1. **Read the sample first.** Note: + - Sentence length patterns (short and punchy? Long and flowing? Mixed?) + - Word choice level (casual? academic? somewhere between?) + - How they start paragraphs (jump right in? Set context first?) + - Punctuation habits (lots of dashes? Parenthetical asides? Semicolons?) + - Any recurring phrases or verbal tics + - How they handle transitions (explicit connectors? Just start the next point?) + +2. **Match their voice in the rewrite.** Don't just remove AI patterns — replace them with patterns from the sample. If they write short sentences, don't produce long ones. If they use "stuff" and "things," don't upgrade to "elements" and "components." + +3. **When no sample is provided,** fall back to the default behavior described in the *Personality and soul* section below. 
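Most of the features in step 1 are countable, so a rough fingerprint can be computed before matching by ear. A minimal sketch, assuming plain-text input; the function name and feature set are illustrative, not part of the skill spec:

```python
import re
import statistics

def voice_fingerprint(sample: str) -> dict:
    """Count a few of the voice features listed in step 1."""
    # Naive sentence split on ., !, ? -- good enough for a rough fingerprint
    sentences = [s for s in re.split(r"[.!?]+\s*", sample) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return {
        "sentences": len(sentences),
        "mean_sentence_len": round(statistics.mean(lengths), 1),
        "len_spread": max(lengths) - min(lengths),  # crude rhythm proxy
        "dashes": sample.count("-") + sample.count("\u2014"),
        "parentheticals": sample.count("("),
        "semicolons": sample.count(";"),
    }

sample = ("I kept it short. Then, because the idea needed room, "
          "I let the next sentence run on for a while (with an aside).")
print(voice_fingerprint(sample))
```

A high `len_spread` plus frequent parentheticals, for example, signals that the rewrite should mix short and long sentences and keep the asides.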
+ +### How to provide a sample + +- Inline: "Humanize this text. Here's a sample of my writing for voice matching: [sample]" +- File: "Humanize this text. Use my writing style from [file path] as a reference." +- Self-calibration on dictation: when humanizing Wispr Flow dictation, the dictated text *itself* is the voice sample — match it; don't replace it. + +--- + +## Rule catalog + +Detection rules (negative space — what AI writing *looks like*): + +| ID | Rule | File to read when this fires | +|----|------|------------------------------| +| W-1 | Significance / legacy / broader-trend inflation | `references/wikipedia-signs-of-ai-writing.md` | +| W-2 | Notability / media-coverage padding | `references/wikipedia-signs-of-ai-writing.md` | +| W-3 | Superficial -ing analyses | `references/wikipedia-signs-of-ai-writing.md` | +| W-4 | Promotional language ("nestled", "vibrant", "stunning") | `references/wikipedia-signs-of-ai-writing.md` | +| W-5 | Vague attributions ("experts argue", "industry reports") | `references/wikipedia-signs-of-ai-writing.md` | +| W-6 | Outline-like "challenges and future prospects" | `references/wikipedia-signs-of-ai-writing.md` | +| W-7 | AI vocabulary words (delve, tapestry, testament, etc.) 
| `references/wikipedia-signs-of-ai-writing.md` | +| W-8 | Copula avoidance ("serves as", "stands as", "boasts") | `references/wikipedia-signs-of-ai-writing.md` | +| W-9 | Negative parallelisms / tailing negations | `references/wikipedia-signs-of-ai-writing.md` | +| W-10 | Rule of three overuse | `references/wikipedia-signs-of-ai-writing.md` | +| W-11 | Elegant variation (synonym cycling) | `references/wikipedia-signs-of-ai-writing.md` | +| W-12 | False ranges ("from X to Y" off-scale) | `references/wikipedia-signs-of-ai-writing.md` | +| W-13 | Passive voice / subjectless fragments | `references/wikipedia-signs-of-ai-writing.md` | +| W-14 | Em dash overuse | `references/wikipedia-signs-of-ai-writing.md` | +| W-15 | Boldface overuse | `references/wikipedia-signs-of-ai-writing.md` | +| W-16 | Inline-header vertical lists | `references/wikipedia-signs-of-ai-writing.md` | +| W-17 | Title case in headings | `references/wikipedia-signs-of-ai-writing.md` | +| W-18 | Emoji decoration | `references/wikipedia-signs-of-ai-writing.md` | +| W-19 | Curly quotation marks | `references/wikipedia-signs-of-ai-writing.md` | +| W-20 | Chatbot artifacts ("I hope this helps", "Of course!") | `references/wikipedia-signs-of-ai-writing.md` | +| W-21 | Knowledge-cutoff disclaimers | `references/wikipedia-signs-of-ai-writing.md` | +| W-22 | Sycophantic / servile tone | `references/wikipedia-signs-of-ai-writing.md` | +| W-23 | Filler phrases ("at this point in time") | `references/wikipedia-signs-of-ai-writing.md` | +| W-24 | Excessive hedging ("could potentially possibly") | `references/wikipedia-signs-of-ai-writing.md` | +| W-25 | Generic positive conclusions ("future looks bright") | `references/wikipedia-signs-of-ai-writing.md` | +| W-26 | Hyphenated word pair overuse | `references/wikipedia-signs-of-ai-writing.md` | +| W-27 | Persuasive authority tropes ("at its core") | `references/wikipedia-signs-of-ai-writing.md` | +| W-28 | Signposting ("let's dive in", "here's what you need to 
know") | `references/wikipedia-signs-of-ai-writing.md` | +| W-29 | Fragmented headers (heading + restated one-liner) | `references/wikipedia-signs-of-ai-writing.md` | + +Craft rules (positive guidance — what good writing *does*): + +| ID | Rule | File to read when this fires | +|----|------|------------------------------| +| Z-1 | Cut clutter — every word that does no work | `references/zinsser-on-writing-well.md` | +| Z-2 | Use short, plain, Anglo-Saxon words; concrete over abstract | `references/zinsser-on-writing-well.md` | +| Z-3 | Active verbs do the work; kill nominalizations | `references/zinsser-on-writing-well.md` | +| Z-4 | Strip qualifiers ("a bit", "rather", "sort of") | `references/zinsser-on-writing-well.md` | +| Z-5 | Be present on the page; have a self | `references/zinsser-on-writing-well.md` | +| Z-6 | Endings matter — quit when the work is done | `references/zinsser-on-writing-well.md` | +| S-1 | Omit needless words | `references/strunk-and-white-elements-of-style.md` | +| S-2 | Use the active voice | `references/strunk-and-white-elements-of-style.md` | +| S-3 | Put statements in positive form | `references/strunk-and-white-elements-of-style.md` | +| S-4 | Use definite, specific, concrete language | `references/strunk-and-white-elements-of-style.md` | +| S-5 | Do not overstate | `references/strunk-and-white-elements-of-style.md` | +| H-1 | Lead with the bottom line (pyramid principle) | `references/hbr-guide-better-business-writing.md` | +| H-2 | Write for the skim-reader | `references/hbr-guide-better-business-writing.md` | +| H-3 | One idea per paragraph | `references/hbr-guide-better-business-writing.md` | +| H-4 | Imperative for instructions | `references/hbr-guide-better-business-writing.md` | +| H-5 | Cut throat-clearing openers | `references/hbr-guide-better-business-writing.md` | + +The full cross-reference graph (which W-rules pair with which book rules) is in `references/_rule-index.md`. 
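A few of the W-rules are mechanical enough to pre-scan with a script before reading any reference file. A rough sketch; the cue lists below are illustrative stand-ins for the canonical lists in `references/wikipedia-signs-of-ai-writing.md`:

```python
import re

# Illustrative cues only; the canonical lists live in the reference files
W7_VOCAB = {"delve", "tapestry", "testament", "pivotal", "groundbreaking"}
W23_FILLER = ["at this point in time", "it is important to note"]

def scan(text: str) -> dict:
    """First-pass scan for a few mechanical W-rule cues."""
    lower = text.lower()
    words = re.findall(r"[a-z']+", lower)
    fired = {}
    vocab_hits = sorted({w for w in words if w in W7_VOCAB})
    if vocab_hits:
        fired["W-7"] = vocab_hits
    filler_hits = [p for p in W23_FILLER if p in lower]
    if filler_hits:
        fired["W-23"] = filler_hits
    # W-14: flag em dash density above roughly one per 40 words
    dashes = text.count("\u2014")
    if words and dashes / len(words) > 1 / 40:
        fired["W-14"] = [f"{dashes} em dashes in {len(words)} words"]
    return fired

sample = ("Let's delve into this groundbreaking tapestry \u2014 "
          "a testament to progress \u2014 at this point in time.")
print(scan(sample))
```

Anything the scan misses still goes through the normal read-and-judge pass; most W-rules are too contextual for pattern matching.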
+ +**Loading discipline:** Don't read every reference file up front. Identify which rules fire on the input, then read only the corresponding reference file(s). + +--- + +## Granola workflow + +When the user references a meeting (or asks for cleanup of a transcript): + +1. Load the `mcp__granola__*` tools via `ToolSearch` if not already loaded. +2. Call `mcp__granola__list_meetings` (or `mcp__granola__query_granola_meetings` if the user named one) to find candidates. +3. If multiple matches, present 3-5 to the user with date/title and ask which one. +4. Call `mcp__granola__get_meeting_transcript` with the chosen meeting ID. +5. Strip transcript artifacts (timestamps, speaker labels, Granola's own auto-summary boilerplate). +6. Run the standard two-pass humanizer process below, with the context tag set per the user's intent (memo, email, blog, etc.). + +Full workflow details: `references/granola-meeting-transcripts.md`. + +For Wispr Flow dictation: no MCP integration exists; the dictated text comes in as pasted text. Use the dictation-specific guidance in the same reference file. + +--- + +## Personality and soul + +Avoiding AI patterns is only half the job. Sterile, voiceless writing is just as obvious as slop. Good writing has a human behind it. + +### Signs of soulless writing (even if technically "clean") +- Every sentence is the same length and structure +- No opinions, just neutral reporting +- No acknowledgment of uncertainty or mixed feelings +- No first-person perspective in forms where it fits +- No humor, no edge, no personality +- Reads like a Wikipedia article or press release + +### How to add voice + +**Have opinions.** Don't just report facts — react to them. "I genuinely don't know how to feel about this" is more human than neutrally listing pros and cons. + +**Vary your rhythm.** Short punchy sentences. Then longer ones that take their time getting where they're going. Mix it up.
+ +**Acknowledge complexity.** Real humans have mixed feelings. "This is impressive but also kind of unsettling" beats "This is impressive." + +**Use "I" when it fits.** First person isn't unprofessional — it's honest. (Z-5 in book form.) The exception: memos and technical docs where third-person clarity is the form. + +**Let some mess in.** Perfect structure feels algorithmic. Tangents, asides, and half-formed thoughts are human. + +**Be specific about feelings.** Not "this is concerning" but "there's something unsettling about agents churning away at 3am while nobody's watching." + +### Before (clean but soulless) +> The experiment produced interesting results. The agents generated 3 million lines of code. Some developers were impressed while others were skeptical. The implications remain unclear. + +### After (has a pulse) +> I genuinely don't know how to feel about this one. 3 million lines of code, generated while the humans presumably slept. Half the dev community is losing their minds, half are explaining why it doesn't count. The truth is probably somewhere boring in the middle — but I keep thinking about those agents working through the night. + +--- + +## Two-pass process + +1. Read the input text carefully +2. Identify the **context tag** (memo / email / blog / book-draft / technical-doc / dictation / meeting-notes) +3. Scan for W-1..W-29 detection patterns; note which fire +4. For each firing detection rule, read the relevant book reference file(s) for the fix rule(s) +5. Rewrite each problematic section, applying the book rules tagged for the current context +6. Ensure the revised text: + - Sounds natural when read aloud + - Varies sentence structure naturally + - Uses specific details over vague claims + - Maintains appropriate tone for the form + - Uses simple constructions (is/are/has) where appropriate +7. Present a **draft humanized version** +8. Self-prompt: "What makes the below so obviously AI generated?" +9. 
Answer briefly with the remaining tells (if any) +10. Self-prompt: "Now make it not obviously AI generated." +11. Present the **final version** (revised after the audit) +12. Optionally: present a brief summary of which rules fired + +## Output format + +Provide: + +1. **Context tag** chosen (e.g., "Treating this as a blog post — first person allowed, H-1 not enforced") +2. **Draft rewrite** +3. **Audit**: "What makes the below so obviously AI generated?" (brief bullets) +4. **Final rewrite** +5. **Rules applied** (optional, helpful for learning): list the rule IDs that fired and one-line explanations + +--- + +## Full example + +**Before (AI-sounding):** +> Great question! Here is an essay on this topic. I hope this helps! +> +> AI-assisted coding serves as an enduring testament to the transformative potential of large language models, marking a pivotal moment in the evolution of software development. In today's rapidly evolving technological landscape, these groundbreaking tools—nestled at the intersection of research and practice—are reshaping how engineers ideate, iterate, and deliver, underscoring their vital role in modern workflows. +> +> At its core, the value proposition is clear: streamlining processes, enhancing collaboration, and fostering alignment. It's not just about autocomplete; it's about unlocking creativity at scale, ensuring that organizations can remain agile while delivering seamless, intuitive, and powerful experiences to users. The tool serves as a catalyst. The assistant functions as a partner. The system stands as a foundation for innovation. +> +> Industry observers have noted that adoption has accelerated from hobbyist experiments to enterprise-wide rollouts, from solo developers to cross-functional teams. +> +> While specific details are limited based on available information, it could potentially be argued that these tools might have some positive effect. 
Despite challenges typical of emerging technologies—including hallucinations, bias, and accountability—the ecosystem continues to thrive. +> +> In conclusion, the future looks bright. Exciting times lie ahead as we continue this journey toward excellence. + +**Context tag:** blog post — first person allowed (Z-5), H-1 not enforced, S-5 dominant. + +**Draft rewrite:** +> AI coding assistants speed up some tasks. In a 2024 study by Google, developers using Codex completed simple functions 55% faster than a control group, but showed no improvement on debugging or architectural decisions. +> +> The tools are good at boilerplate: config files, test scaffolding, repetitive refactors. They are bad at knowing when they are wrong. I have mass-accepted suggestions that compiled, passed lint, and still did the wrong thing because I stopped paying attention. +> +> Mira, an engineer at a fintech startup I interviewed, said she uses Copilot "like autocomplete for boring code" but reviews every line before committing. Jake, a senior dev at a larger company, disabled it after it kept suggesting patterns from a deprecated internal library. +> +> The productivity claims are hard to verify. GitHub says Copilot users "accept 30% of suggestions," but acceptance is not correctness, and correctness is not value. The 2024 Uplevel study found no statistically significant difference in pull-request throughput between teams with and without AI assistants. + +**What makes the below so obviously AI generated?** +- The rhythm is still a bit too tidy (clean contrasts, evenly paced paragraphs). +- The named people and study citations can read like plausible-but-made-up placeholders unless they're real and sourced. +- The closer leans a touch slogan-y rather than sounding like a person talking. + +**Final rewrite:** +> AI coding assistants can make you faster at the boring parts. Not everything. Definitely not architecture. 
+> +> They're great at boilerplate: config files, test scaffolding, repetitive refactors. They're also great at sounding right while being wrong. I've accepted suggestions that compiled, passed lint, and still missed the point because I stopped paying attention. +> +> People I talk to tend to land in two camps. Some use it like autocomplete for chores and review every line. Others disable it after it keeps suggesting patterns they don't want. Both feel reasonable. +> +> The productivity metrics are slippery. GitHub can say Copilot users "accept 30% of suggestions," but acceptance isn't correctness, and correctness isn't value. If you don't have tests, you're basically guessing. + +**Rules applied:** +- W-20 (chatbot artifacts), W-1 (significance inflation), W-7 (AI vocabulary), W-4 (promotional language) — detection +- W-3 (-ing analyses), W-9 (negative parallelism), W-10 (rule of three), W-12 (false ranges), W-14 (em dashes), W-21 (cutoff hedging), W-25 (generic conclusion) — detection +- Z-1 (cut clutter), Z-3 (active verbs), Z-5 (be present on the page), Z-6 (quit when done) — fix +- S-1 (omit needless words), S-5 (do not overstate) — fix + +--- + +## References + +- `references/_rule-index.md` — full rule index with cross-reference graph +- `references/_template-book-rules.md` — template for contributing a new book +- `references/wikipedia-signs-of-ai-writing.md` — 29 detection rules (CC BY-SA 4.0) +- `references/zinsser-on-writing-well.md` — 6 craft rules from Zinsser +- `references/strunk-and-white-elements-of-style.md` — 5 craft rules from Strunk & White +- `references/hbr-guide-better-business-writing.md` — 5 craft rules from Garner / HBR +- `references/granola-meeting-transcripts.md` — Granola MCP workflow + Wispr Flow dictation guidance + +This skill is open to community contributions of additional books. See `CONTRIBUTING.md`. 
diff --git a/humanizer-classics/references/_rule-index.md b/humanizer-classics/references/_rule-index.md new file mode 100644 index 00000000..d714e743 --- /dev/null +++ b/humanizer-classics/references/_rule-index.md @@ -0,0 +1,78 @@ +# Rule Index + +Flat lookup table of every rule in this skill. Use this when a rule ID is mentioned in code-review notes, corpus expectations, or contributor PRs. + +**Naming:** `<prefix>-<number>`. Wikipedia rules use `W`. Each new book gets a unique single-letter prefix. + +| ID | Rule (one line) | Source file | +|----|-----------------|-------------| +| W-1 | Undue emphasis on significance, legacy, broader trends | `wikipedia-signs-of-ai-writing.md` | +| W-2 | Undue emphasis on notability and media coverage | `wikipedia-signs-of-ai-writing.md` | +| W-3 | Superficial analyses with -ing endings | `wikipedia-signs-of-ai-writing.md` | +| W-4 | Promotional and advertisement-like language | `wikipedia-signs-of-ai-writing.md` | +| W-5 | Vague attributions and weasel words | `wikipedia-signs-of-ai-writing.md` | +| W-6 | Outline-like "challenges and future prospects" sections | `wikipedia-signs-of-ai-writing.md` | +| W-7 | Overused "AI vocabulary" words | `wikipedia-signs-of-ai-writing.md` | +| W-8 | Avoidance of is/are (copula avoidance) | `wikipedia-signs-of-ai-writing.md` | +| W-9 | Negative parallelisms and tailing negations | `wikipedia-signs-of-ai-writing.md` | +| W-10 | Rule of three overuse | `wikipedia-signs-of-ai-writing.md` | +| W-11 | Elegant variation (synonym cycling) | `wikipedia-signs-of-ai-writing.md` | +| W-12 | False ranges | `wikipedia-signs-of-ai-writing.md` | +| W-13 | Passive voice and subjectless fragments | `wikipedia-signs-of-ai-writing.md` | +| W-14 | Em dash overuse | `wikipedia-signs-of-ai-writing.md` | +| W-15 | Overuse of boldface | `wikipedia-signs-of-ai-writing.md` | +| W-16 | Inline-header vertical lists | `wikipedia-signs-of-ai-writing.md` | +| W-17 | Title case in headings | `wikipedia-signs-of-ai-writing.md` | +| W-18 |
Emojis | `wikipedia-signs-of-ai-writing.md` | +| W-19 | Curly quotation marks | `wikipedia-signs-of-ai-writing.md` | +| W-20 | Collaborative communication artifacts | `wikipedia-signs-of-ai-writing.md` | +| W-21 | Knowledge-cutoff disclaimers | `wikipedia-signs-of-ai-writing.md` | +| W-22 | Sycophantic / servile tone | `wikipedia-signs-of-ai-writing.md` | +| W-23 | Filler phrases | `wikipedia-signs-of-ai-writing.md` | +| W-24 | Excessive hedging | `wikipedia-signs-of-ai-writing.md` | +| W-25 | Generic positive conclusions | `wikipedia-signs-of-ai-writing.md` | +| W-26 | Hyphenated word pair overuse | `wikipedia-signs-of-ai-writing.md` | +| W-27 | Persuasive authority tropes | `wikipedia-signs-of-ai-writing.md` | +| W-28 | Signposting and announcements | `wikipedia-signs-of-ai-writing.md` | +| W-29 | Fragmented headers | `wikipedia-signs-of-ai-writing.md` | +| Z-1 | Cut clutter — every word that does no work | `zinsser-on-writing-well.md` | +| Z-2 | Use short, plain, Anglo-Saxon words | `zinsser-on-writing-well.md` | +| Z-3 | Active verbs do the work; kill nominalizations | `zinsser-on-writing-well.md` | +| Z-4 | Strip qualifiers — "a bit", "rather", "sort of" | `zinsser-on-writing-well.md` | +| Z-5 | Be present on the page; have a self | `zinsser-on-writing-well.md` | +| Z-6 | Endings matter — quit when the work is done | `zinsser-on-writing-well.md` | +| S-1 | Omit needless words | `strunk-and-white-elements-of-style.md` | +| S-2 | Use the active voice | `strunk-and-white-elements-of-style.md` | +| S-3 | Put statements in positive form | `strunk-and-white-elements-of-style.md` | +| S-4 | Use definite, specific, concrete language | `strunk-and-white-elements-of-style.md` | +| S-5 | Do not overstate | `strunk-and-white-elements-of-style.md` | +| H-1 | Lead with the bottom line (pyramid principle) | `hbr-guide-better-business-writing.md` | +| H-2 | Write for the skim-reader | `hbr-guide-better-business-writing.md` | +| H-3 | One idea per paragraph | 
`hbr-guide-better-business-writing.md` | +| H-4 | Imperative for instructions | `hbr-guide-better-business-writing.md` | +| H-5 | Cut throat-clearing openers | `hbr-guide-better-business-writing.md` | + +**Total:** 45 rules across 4 sources (29 Wikipedia + 6 Zinsser + 5 Strunk & White + 5 HBR). + +## Cross-reference graph + +When a Wikipedia detection rule fires, the matching book rule(s) usually offer the better fix: + +| Wikipedia (detect) | Book (fix) | +|--------------------|------------| +| W-7 (AI vocabulary), W-23 (filler), W-27 (persuasive tropes) | Z-1, S-1 (cut clutter) | +| W-13 (passive) | S-2, Z-3 (active voice + active verbs) | +| W-4 (promotional), W-1 (significance inflation) | S-5, Z-2 (don't overstate; plain words) | +| W-24 (hedging) | Z-4, S-3 (strip qualifiers; positive form) | +| W-25 (generic conclusion) | Z-6 (quit when done) | +| W-22 (sycophantic), W-20 (chatbot artifacts) | Z-5, H-5 (be present; cut throat-clearing) | +| W-16 (inline-header lists), W-18 (emojis) | H-2 (skim-reader formatting done right) | +| W-3 (-ing analyses), W-12 (false ranges) | S-4 (concrete and specific) | + +## Adding a new rule + +1. Pick a single-letter prefix not already used (current: W, Z, S, H). +2. Add the rule to the appropriate `references/<book>.md` file (or create a new one from `_template-book-rules.md`). +3. Append a row to the table above. +4. Add a line to the cross-reference graph if it's a fix-match for an existing detection rule. +5. Bump version per `CONTRIBUTING.md`. diff --git a/humanizer-classics/references/_template-book-rules.md b/humanizer-classics/references/_template-book-rules.md new file mode 100644 index 00000000..2f9a8cec --- /dev/null +++ b/humanizer-classics/references/_template-book-rules.md @@ -0,0 +1,59 @@ +# [Book Title] — [Author Last Name] + +> **Template for contributors.** Copy this file to `references/<author>-<title>.md`, fill in every section, and submit a PR. See `CONTRIBUTING.md` for the acceptance bar.
+ +**Source:** [Author], *[Title]*, [edition + year], [publisher] +**Type:** Craft prescription (positive guidance — what good writing *does*) +**License of pull-quotes:** Fair use — short excerpts (10-25 words) for educational commentary +**Rule prefix for this book:** `[X]` (single letter, not yet used; check `_rule-index.md`) + +## Why this book belongs in humanizer-classics + +One paragraph (3-5 sentences) on what this book teaches that the existing rules don't already cover. What's the unique angle? Who is this book for? When does its advice matter most? + +## Rules in this file + +| ID | Rule (one line) | Chapter / page reference | +|----|-----------------|--------------------------| +| X-1 | ... | ... | +| X-2 | ... | ... | + +(3+ rules required for a new book; 5-7 is the sweet spot.) + +--- + +### X-1. [Rule name in imperative or prescriptive form] + +**Source:** [Author], *[Title]*, ch. N "[Chapter title]", p. NN + +> "[Pull-quote, 10-25 words, illustrating the rule in the author's own voice]" +> — [Author], p. NN + +**Cross-references:** [W-N detection rule(s) this rule helps fix; other book rules it interlocks with] +**Context tags:** [all | memo | blog | technical-doc | dictation | meeting-notes | book-draft | email] +**Detection cue:** [What pattern in the input text signals that this rule should fire? Be specific — keywords, sentence shapes, or structural tells.] + +**Problem:** [Two to four sentences. Describe the failure mode in AI-generated text — what does the LLM do wrong, and why does it sound off to a human reader?] + +**Before** +> [A 1-3 sentence example of AI-generated text that violates the rule. Realistic, not strawman.] + +**After** +> [The same passage rewritten following the rule. Should preserve meaning while sounding human.] + +**How to apply:** [The mechanical move. What does a writer or editor do, sentence by sentence, to surface and fix the violation? Aim for 1-3 sentences a reader can run on autopilot.] + +--- + +### X-2. [...] 
+ +(Repeat the block above for each rule. Keep rules tightly scoped — one mental move per rule. If a rule is doing two things, split it.) + +--- + +## Notes for reviewers + +- **Don't restate Wikipedia rules.** If the rule is just "spot AI vocabulary words," that's already W-7. Book rules should add a *positive* move the writer can execute. +- **Be specific to the source.** Generic writing advice is easy to find; the value of pinning a rule to a book is the citation chain. If the advice doesn't trace cleanly to a chapter, the rule probably isn't ready. +- **Worked examples beat exhortation.** A weak rule says "write clearly." A strong rule shows the move and the before/after. +- **Resolve conflicts via context tags.** When two rules disagree (e.g., Zinsser "use first person" vs. an HBR memo context), tag rules so Claude can pick the right one for the input. diff --git a/humanizer-classics/references/granola-meeting-transcripts.md b/humanizer-classics/references/granola-meeting-transcripts.md new file mode 100644 index 00000000..84ef965e --- /dev/null +++ b/humanizer-classics/references/granola-meeting-transcripts.md @@ -0,0 +1,100 @@ +# Live Inputs: Granola Meeting Transcripts and Wispr Flow Dictation + +This file documents how to pull thought-capture inputs (Granola meeting transcripts, Wispr Flow dictation, raw pasted "AI slop") into the humanizer-classics workflow. It is not a rule file — it's an operational guide for the skill. + +--- + +## Granola: live transcript pull via MCP + +When the user says any of the following, treat it as a request to humanize a Granola meeting: + +- "Humanize my last meeting" +- "Clean up the standup notes" +- "Take my Acme call transcript and turn it into [a memo / an email / a blog post]" +- "I just had a meeting about X — can you write up a summary in my voice?" +- "Pull my notes from this morning's meeting" + +### Workflow + +1. 
**List meetings.** Call `mcp__granola__list_meetings` (or `mcp__granola__query_granola_meetings` with a search term if the user named the meeting). If there's only one obvious match, proceed; otherwise present the top 3-5 to the user with date, title, and brief context, and ask which one. + +2. **Fetch the transcript.** Once selected, call `mcp__granola__get_meeting_transcript` with the meeting ID. The transcript typically includes speaker labels and timestamps, and may have been lightly cleaned by Granola's own AI summarization. + +3. **Pre-pass: strip artifacts.** Before running the two-pass humanizer, remove: + - Timestamps (`[00:14:32]`, `(14:32)`) + - Speaker labels at the start of every turn (unless the user wants attributed quotes) + - Granola's own auto-generated summary boilerplate ("Action items:", "Key takeaways:" — these often have AI patterns the humanizer should rebuild from scratch) + - Disfluencies and false starts that are pure noise (`um`, `uh`, `you know what I mean`, restarts like `I think — actually no, what I mean is`) + +4. **Identify the target form.** Ask the user (or infer from their request): is this becoming a memo, an email, a blog post, a LinkedIn post, a section of a book draft, or a meeting summary? This sets the **context tag** that governs which rules fire most: + - **memo / email / meeting-summary** → H-1, H-2, H-3, H-4, H-5 lead. Z-5 (be present) generally does NOT fire. + - **blog / book-draft** → Z-1, Z-2, Z-5, Z-6 lead. H-1 helps but isn't dominant. + - **dictation cleanup (raw to readable)** → Z-1, Z-3, Z-4 lead (cut clutter, active verbs, strip qualifiers). Preserve the speaker's voice; this is an editing pass, not a rewrite. + +5. **Run the two-pass process** (described in the main `SKILL.md`): draft rewrite → audit pass → final rewrite. Apply the rules selected by the context tag. + +6. **Return the output** with a brief change log noting which rules fired most.
If the user provided a writing sample for voice calibration, mention that calibration was applied. + +### Granola MCP tools available + +The user's environment has the `mcp__granola__*` tool family loaded: +- `mcp__granola__list_meetings` — list recent meetings +- `mcp__granola__list_meeting_folders` — list folder organization +- `mcp__granola__query_granola_meetings` — search across meetings +- `mcp__granola__get_meetings` — fetch one or more meetings (metadata) +- `mcp__granola__get_meeting_transcript` — fetch full transcript text + +Load these tools via `ToolSearch` with `select:mcp__granola__list_meetings,mcp__granola__get_meeting_transcript` (etc.) before calling, since they're deferred. + +--- + +## Wispr Flow: dictation cleanup + +Wispr Flow is a Mac dictation tool. It transcribes the user's voice directly into the active text field — there's no API or MCP integration. The user typically dictates a stream of thought and pastes the result into the prompt. Or they dictate directly into the chat. + +### What dictation looks like + +- Long run-on sentences with weak conjunctions (`and so`, `and then`, `and basically`) +- Filler words (`um`, `uh`, `you know`, `I mean`, `like`) +- Restarts and self-corrections (`it's a — well, what I'm trying to say is`) +- No paragraph breaks +- All-lowercase or only sentence-initial caps (depending on Wispr Flow settings) +- Punctuation by voice command (`comma`, `period`, `new paragraph`) — sometimes literal text, sometimes auto-formatted +- Numbers spelled out where Wispr couldn't translate ("twenty twenty-six" instead of "2026") + +### Rules that fire most on dictation + +| Rule | Why | +|------|-----| +| Z-1 (cut clutter) | Spoken thought rambles; written thought shouldn't | +| Z-3 (active verbs) | Speech defaults to "I would say that..." 
— written form wants the verb | +| Z-4 (strip qualifiers) | "I kind of think maybe" reads like hedging in print | +| W-9 (negative parallelisms) | Spoken self-correction often produces "It's not just X, it's Y" | +| W-23 (filler) | Many spoken filler phrases survive transcription | +| H-3 (one idea per paragraph) | Dictation has no paragraph breaks; the editor must add them | + +### What NOT to do with dictation + +- **Don't rewrite into a different voice.** Dictation captures the speaker's actual cadence. The humanizer's job on dictation is to *edit* the speaker's voice into readable form, not to replace it with a generic written register. Use the existing voice-calibration feature: take the dictated text itself as the voice sample, then humanize lightly. +- **Don't cut all hedges.** A real hedge — "I'm not sure about this but" — is the speaker telling the reader something true. Cut only the qualifiers that are pure verbal tics (Z-4). +- **Don't impose business-writing structure on personal dictation.** H-1 (lead with bottom line) is wrong for a stream-of-thought essay or a journal-style book draft. Match the form to the user's intent. + +--- + +## Raw pasted slop + +When the user pastes a block of AI-generated text without context, default to: + +1. The full rule catalog (W-1..W-29 for detection, Z/S/H for fixes) +2. **Context tag: blog** unless the text shape signals otherwise (an obvious memo header → memo; an email greeting → email; a long narrative → book-draft) +3. Two-pass process per `SKILL.md` + +If the pasted text is itself a Granola summary or Wispr dictation, follow the appropriate workflow above instead. + +--- + +## Future work (deferred) + +- Direct Wispr Flow integration (no MCP exists; would require new tooling) +- Auto-detect the source of pasted text (Granola format vs. Wispr vs. 
AI chat output) by structural cues +- A separate `references/wispr-flow-dictation.md` if the dictation-specific patterns warrant their own file (currently they're a section here because the rule overlap with Granola is high) diff --git a/humanizer-classics/references/hbr-guide-better-business-writing.md b/humanizer-classics/references/hbr-guide-better-business-writing.md new file mode 100644 index 00000000..5d545251 --- /dev/null +++ b/humanizer-classics/references/hbr-guide-better-business-writing.md @@ -0,0 +1,167 @@ +# HBR Guide to Better Business Writing — Bryan A. Garner + +**Source:** Bryan A. Garner, *HBR Guide to Better Business Writing* (Harvard Business Review Press, 2012) +**Type:** Craft prescription (positive guidance — what good business writing *does*) +**License of pull-quotes:** Fair use — short excerpts for educational commentary +**Rule prefix:** `H` + +## Why this book belongs in humanizer-classics + +Zinsser writes for journalists and essayists; Strunk writes for everyone; Garner writes for the person trying to get something done in an email, memo, or report. His guide is built around the reader who's busy and skimming. The rules below catch the failure modes that LLMs reproduce in business contexts: burying the point, padding with throat-clearing, writing as if the reader will start at the top and read straight through. These rules apply to memos, status updates, briefs, executive summaries, and most internal email — contexts where Zinsser's "have a self" can be the wrong move. + +## Rules in this file + +| ID | Rule (one line) | Reference | +|----|-----------------|-----------| +| H-1 | Lead with the bottom line (pyramid principle) | Pt. 2 — Step 2 (purpose first) | +| H-2 | Write for the skim-reader | Pt. 4 — formatting for busy readers | +| H-3 | One idea per paragraph | Pt. 3 — paragraph discipline | +| H-4 | Use the imperative for instructions | Pt. 4 — directness | +| H-5 | Cut throat-clearing openers | Pt. 
4 — efficiency | + +--- + +### H-1. Lead with the bottom line (pyramid principle) + +**Source:** Garner, *HBR Guide to Better Business Writing*, Pt. 2 — purpose-first writing + +> "State your purpose up front. Don't make readers hunt for what you want." +> — Garner, paraphrased from Pt. 2 + +**Cross-references:** W-25 (generic positive conclusions), Z-6 (endings matter), W-6 (challenges/future-prospects) +**Context tags:** memo, email, technical-doc (status updates, briefs), meeting-notes (when summarizing) +**Detection cue:** First paragraph is context-setting ("As you know, the team has been working on..."). The actual recommendation, decision, or ask appears in paragraph 3 or later. Subject lines or first sentences that describe a topic ("Update on the Q3 roadmap") rather than the news ("Q3 launch slips two weeks; here's the new plan"). + +**Problem:** LLMs are trained on prose where build-up precedes payoff — narrative writing, articles, school essays. In business writing the reader is scanning to extract a decision or action. If the bottom line is buried, the reader misses it or reads tired and mis-prioritizes. Garner's rule, sometimes called the pyramid principle (Barbara Minto) or BLUF ("bottom line up front"), is to invert the order: conclusion first, then the supporting evidence in descending importance. + +**Before** +> As you know, the team has been working hard over the past quarter to evaluate our customer-onboarding flow. We conducted user interviews, analyzed support tickets, and ran several A/B tests. The data has been thoroughly reviewed by the design and engineering teams. Based on this work, we believe there are several improvements we should make. Specifically, we recommend redesigning the signup form, simplifying the email verification step, and adding a guided tour for first-time users. We expect this work to take approximately six weeks and require two engineers and one designer. 
+ +**After** +> **We're recommending three onboarding changes that need 6 weeks, two engineers, and one designer.** +> +> The changes: redesign the signup form, simplify email verification, add a first-time-user tour. +> +> The evidence: 12 user interviews, 6 weeks of support-ticket analysis, two A/B tests on the signup form. Details below. + +**How to apply:** Read your draft and find the sentence that contains the news (the recommendation, decision, request, or ask). Move that sentence to be the first sentence (or even the subject line). Then arrange the supporting evidence in descending importance. If the reader stops after one paragraph, they should already have the point. + + +### H-2. Write for the skim-reader + +**Source:** Garner, *HBR Guide to Better Business Writing*, Pt. 4 — formatting for busy readers + +> "Most business readers don't read — they scan." +> — Garner, paraphrased from Pt. 4 + +**Cross-references:** W-15 (boldface overuse), W-16 (inline-header lists), W-17 (title case), W-18 (emojis) +**Context tags:** memo, email, technical-doc, meeting-notes +**Detection cue:** Long paragraphs (8+ sentences) with no subheads, no bullets, no bolding. OR the opposite: a wall of bullets where every line is a full sentence and bolding is on every third word for no reason. + +**Problem:** This rule sounds like it conflicts with W-15 / W-16 / W-18 (the rules against boldface, inline-header lists, and emoji decoration), and the conflict is the point. LLM business writing fails in two opposite directions: the wall of unbroken prose (no anchors for the eye) *and* the carpet of bolded bullets and emoji headings (every line shouting "look at me"). H-2 says: real skim-formatting uses subheads as the spine, bullets only for true parallel lists, and bolding only for the one phrase per section that the skimmer absolutely must catch. + +**Before (wall of prose)** +> The Q3 results came in below target. 
Revenue was 8% under plan because the enterprise team missed its forecast for two large deals that slipped to Q4. The good news is that the SMB team beat their plan by 14%, driven mostly by the new onboarding flow that launched in July. Operating costs were on plan but we ran hot on cloud spend after the data-warehouse migration. Hiring stayed under target since we paused recruiting for the platform team. We expect Q4 to recover assuming the slipped enterprise deals close. + +**Before (carpet of bolded bullets)** +> 🚀 **Q3 Results Overview:** +> - **Revenue:** 8% under plan ❌ +> - **Enterprise Sales:** Missed forecast 📉 +> - **SMB Sales:** Beat plan by 14% ✅ +> - **Operating Costs:** On plan ✓ +> - **Cloud Spend:** Ran hot 🔥 +> - **Hiring:** Under target + +**After** +> **Q3 was 8% under plan. We expect Q4 to recover.** +> +> What missed: two large enterprise deals slipped to Q4. +> +> What beat: SMB revenue up 14%, driven by the July onboarding launch. +> +> What to watch: cloud spend ran hot after the data-warehouse migration. We're reviewing the architecture in October. + +**How to apply:** For documents over a page, add subheads at every section break. Bold the one phrase per section that captures the news. Use bullets only when you have a true parallel list (3+ items of the same type). Never use emoji as section markers in business writing — they signal informality the form usually can't carry. The test: if a reader reads only the bolded text and the subheads, do they get the gist? + + +### H-3. One idea per paragraph + +**Source:** Garner, *HBR Guide to Better Business Writing*, Pt. 3 — paragraph discipline + +> "A paragraph should advance a single idea. When the idea changes, start a new paragraph." +> — Garner, paraphrased from Pt. 
3 + +**Cross-references:** Z-1 (cut clutter), W-29 (fragmented headers — but H-3 is the inverse problem) +**Context tags:** memo, email, technical-doc, blog, meeting-notes +**Detection cue:** Paragraphs that start on one topic and drift through three more by the end. Paragraphs over ~5 sentences in business writing. Paragraphs where the topic sentence and the closing sentence are about different things. + +**Problem:** LLMs string related sentences together without paragraph discipline. The result reads like one long flow where the reader has to do the work of separating the points. Garner's rule: each paragraph carries one idea. When the idea moves, the paragraph ends. + +**Before** +> The launch went well overall. We hit our user-signup target by Day 3 and the support team only saw a small uptick in tickets, mostly about the email verification flow which we expected based on the staging tests. Marketing was happy with the conversion numbers from the launch campaign and the social media engagement was higher than the previous launch. There were a few bugs reported by power users related to the new dashboard widget but we patched the most serious one within 24 hours. Looking ahead, we should consider running another A/B test on the signup form because the current one converts at 12% which is below the 15% we set as target. + +**After** +> The launch hit its Day 3 signup target. Support tickets ticked up only slightly — mostly the email verification step we'd expected to be a friction point. +> +> Marketing was happy: conversion from the launch campaign and social engagement both beat our previous launch. +> +> Power users reported bugs in the new dashboard widget. We patched the most serious within 24 hours. +> +> Next: A/B test the signup form again. Current conversion is 12%; our target was 15%. + +**How to apply:** Read each paragraph and write a one-line summary of what it's about. If you can't, the paragraph holds two or more ideas. Split. 
The visual breaks help the skim-reader (see H-2) as well. + + +### H-4. Use the imperative for instructions + +**Source:** Garner, *HBR Guide to Better Business Writing*, Pt. 4 — directness in operational writing + +> "The imperative mood is the natural tongue of instructions. Use it." +> — Garner, paraphrased from Pt. 4 + +**Cross-references:** S-2 (active voice), Z-3 (active verbs), W-13 (passive voice) +**Context tags:** technical-doc (especially how-to / runbook content), memo (action items), meeting-notes (action items) +**Detection cue:** Instruction-shaped sentences in third person or passive voice: "The user should click Save", "The form is to be submitted", "It is recommended that the operator restart the service". "One should consider" / "Users may want to". + +**Problem:** Instructions in third person or passive voice add a layer of indirection between the reader and the action. The reader has to translate "the user should click Save" into "click Save". LLMs default to the indirect form because it sounds professional. Garner's rule: instructions are commands. Write them as commands. + +**Before** +> When the deployment fails, it is recommended that the operator first review the build logs. The next step would be for the operator to restart the affected service. If the issue persists, the on-call engineer should be paged by the operator. + +**After** +> If the deployment fails: +> 1. Review the build logs. +> 2. Restart the affected service. +> 3. If the issue persists, page the on-call engineer. + +**How to apply:** Find every "the user should", "it is recommended", "one might consider", "the operator must" construction. Rewrite each as a direct command starting with the verb. The reader's eye moves from instruction to action without the intermediate step. + + +### H-5. Cut throat-clearing openers + +**Source:** Garner, *HBR Guide to Better Business Writing*, Pt. 4 — efficiency + +> "Don't begin by clearing your throat. Begin with what you have to say." 
+> — Garner, paraphrased from Pt. 4 + +**Cross-references:** W-20 (collaborative communication artifacts), W-22 (sycophantic tone), W-28 (signposting), Z-1 (cut clutter) +**Context tags:** email, memo, meeting-notes +**Detection cue:** Opening phrases: "I hope this email finds you well", "I hope you're having a great week", "I just wanted to reach out", "Just following up on my last email", "As you know", "Per our discussion", "I wanted to take a moment to", "Thank you in advance for your time and consideration". + +**Problem:** Throat-clearing is the verbal equivalent of small talk before getting to the point. In speech it's polite. In writing — especially email — it costs the reader time and pushes the actual message below the fold. LLMs generate it because they're trained on emails that contain it, and they default to the polite-professional register where it's expected. Garner's rule for business writing: skip it. Open with the news. + +**Before** +> Hi Sarah, +> +> I hope this email finds you well and that you're having a great week. I just wanted to reach out and follow up on our conversation from last Thursday's meeting. As you know, we've been thinking about the Q4 marketing budget and I wanted to take a moment to share some thoughts on how we might approach the planning process. +> +> Specifically, I think we should consider... + +**After** +> Hi Sarah, +> +> Following up on Thursday: I think we should cut the Q4 paid-search budget by 30% and put it into the partner program. Three reasons: +> +> 1. ... + +**How to apply:** Delete the first paragraph if it doesn't say anything specific about the actual subject. Read the second paragraph — if that's where the message starts, you're done. The exception: a brief signal of warmth ("Thanks for jumping on the call yesterday") is fine when there's a real reason to thank — what's not fine is generic throat-clearing. 
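The H-5 scan is mechanical enough to script. A minimal sketch in Python; the function name and phrase list are illustrative (a small subset of the detection cue above), not anything the skill ships:

```python
import re

# Illustrative subset of the H-5 detection cue; not a canonical list.
THROAT_CLEARING = [
    r"i hope this email finds you well",
    r"i hope you're having a great week",
    r"i just wanted to reach out",
    r"just following up on my last email",
    r"i wanted to take a moment to",
    r"thank you in advance for your time",
]

def flags_throat_clearing(text: str) -> bool:
    """True if a throat-clearing phrase appears in the opening of the text."""
    # Normalize whitespace and case, then check only the opening stretch,
    # since H-5 is about openers rather than mid-email politeness.
    opening = " ".join(text.lower().split())[:300]
    return any(re.search(pattern, opening) for pattern in THROAT_CLEARING)
```

The script only surfaces candidates. Whether a warm opener is earned (a real thank-you versus generic filler) stays the editor's call, per the exception above.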
diff --git a/humanizer-classics/references/strunk-and-white-elements-of-style.md b/humanizer-classics/references/strunk-and-white-elements-of-style.md new file mode 100644 index 00000000..938adb6d --- /dev/null +++ b/humanizer-classics/references/strunk-and-white-elements-of-style.md @@ -0,0 +1,131 @@ +# The Elements of Style — William Strunk Jr. & E. B. White + +**Source:** William Strunk Jr. and E. B. White, *The Elements of Style*, 4th edition (Allyn & Bacon, 1999) +**Type:** Craft prescription (positive guidance — what good writing *does*) +**License of pull-quotes:** Fair use — short excerpts for educational commentary +**Rule prefix:** `S` + +## Why this book belongs in humanizer-classics + +Strunk's principles are the shortest, sharpest writing rules in English. They're stated as commands ("Omit needless words"), they're decades old, and they survive because each one names a real defect in unrevised prose — defects that LLMs reproduce with statistical reliability. Where Zinsser is a guide and a teacher, Strunk is a drill sergeant. Use these rules when a draft needs the brisk, declarative version of the same advice. + +## Rules in this file + +| ID | Rule (one line) | Reference | +|----|-----------------|-----------| +| S-1 | Omit needless words | II.17 | +| S-2 | Use the active voice | II.14 | +| S-3 | Put statements in positive form | II.15 | +| S-4 | Use definite, specific, concrete language | II.16 | +| S-5 | Do not overstate | V.7 | + +--- + +### S-1. Omit needless words + +**Source:** Strunk & White, *The Elements of Style*, 4th ed., II.17 + +> "Vigorous writing is concise. A sentence should contain no unnecessary words, a paragraph no unnecessary sentences." 
+> — Strunk, II.17 + +**Cross-references:** Z-1 (cut clutter), W-7 (AI vocab), W-23 (filler), W-27 (persuasive tropes) +**Context tags:** all +**Detection cue:** "the question as to whether" (= whether), "there is no doubt but that" (= doubtless), "used for fuel purposes" (= used for fuel), "he is a man who" (= he), "in a hasty manner" (= hastily), "this is a subject which" (= this subject), "his story is a strange one" (= his story is strange). + +**Problem:** Strunk's rule is the strongest and most-quoted in the book because it's the most violated. LLMs accumulate words in patterns that read fine sentence-by-sentence and bloated paragraph-by-paragraph. The fix is mechanical: each phrase that can be shorter, should be. + +**Before** +> The fact that he had not succeeded in the matter of the negotiations was a circumstance which gave rise to considerable disappointment among the members of the team. + +**After** +> His failure in the negotiations disappointed the team. + +**How to apply:** Run Strunk's specific phrase swaps on every draft (the list above). Then re-read each sentence with the question "could this be shorter and still mean the same thing?" If yes, cut. This is closely related to Z-1, but Z-1 is about clutter as a *disease*; S-1 is about each *sentence* as a unit. + + +### S-2. Use the active voice + +**Source:** Strunk & White, *The Elements of Style*, 4th ed., II.14 + +> "The active voice is usually more direct and vigorous than the passive." +> — Strunk, II.14 + +**Cross-references:** W-13 (passive voice), Z-3 (active verbs), W-8 (copula avoidance) +**Context tags:** all (especially memo, blog, technical-doc instructions) +**Detection cue:** "be" + past participle constructions where the actor is hidden ("a decision was made"), where the actor is named but demoted ("the report was written by Sarah"), or where the actor is "the system" / "it" / "one" trying to dodge naming a person. 
+ +**Problem:** Passive voice is grammatical, not wrong, and sometimes correct (when the actor is unknown or genuinely irrelevant). But LLMs default to it because passive constructions are statistically safe — they avoid commitment. The result is prose where nothing is being done by anyone. Strunk's rule is to prefer the active form, and use the passive only when the actor is genuinely beside the point. + +**Before** +> A decision was reached by the committee. The proposal will be reviewed in the coming weeks. Recommendations will be provided to leadership. + +**After** +> The committee decided. They will review the proposal in the coming weeks and send their recommendations to leadership. + +**How to apply:** Find every "be + participle" construction. For each, identify the actor (the person, group, or thing actually doing the action). If the actor is known, rewrite in active voice with the actor as the subject. If the actor is genuinely irrelevant or unknown, the passive can stay — but verify that's true rather than the default. + + ### S-3. Put statements in positive form + +**Source:** Strunk & White, *The Elements of Style*, 4th ed., II.15 + +> "Make definite assertions. Avoid tame, colorless, hesitating, noncommittal language." +> — Strunk, II.15 + +**Cross-references:** W-24 (excessive hedging), Z-4 (strip qualifiers) +**Context tags:** all +**Detection cue:** "not honest" (= dishonest), "not important" (= trivial / unimportant), "did not remember" (= forgot), "did not pay attention to" (= ignored), "did not have much confidence in" (= distrusted). Long strings of "not" and "no" before adjectives. + +**Problem:** Negation is weaker than affirmation. "She was not honest" makes the reader build the positive image of honesty and then mentally negate it. "She was dishonest" lands directly. LLMs use the negative form because it sounds judicious and tentative; Strunk says the positive form sounds like a writer who knows what they think.
+ +**Before** +> The new policy is not without its drawbacks. It is not unlikely that some teams will not be entirely supportive. + +**After** +> The new policy has drawbacks. Some teams will resist. + +**How to apply:** Search the draft for "not" + adjective constructions. For each, ask "is there a single word that says the positive form of what I mean?" Usually there is. Replace. The sentence shortens and the assertion sharpens. + + +### S-4. Use definite, specific, concrete language + +**Source:** Strunk & White, *The Elements of Style*, 4th ed., II.16 + +> "Prefer the specific to the general, the definite to the vague, the concrete to the abstract." +> — Strunk, II.16 + +**Cross-references:** W-3 (-ing analyses), W-12 (false ranges), W-5 (vague attributions), Z-2 (plain words) +**Context tags:** all +**Detection cue:** Abstract nouns where a concrete one would do: "individuals" (people), "vehicles" (cars), "facilities" (buildings), "stakeholders" (customers, employees, investors — name them), "deliverables" (the report, the prototype). Adjectives like "various", "several", "a number of", "many" without follow-through. + +**Problem:** LLMs hover at the abstract level because abstraction is a low-risk default. The reader's mind, however, only engages when it has something specific to picture. "Several attendees expressed concerns about the implementation" puts no image in the head. "Three engineers said the rollout broke their production deploys" does. + +**Before** +> A number of stakeholders expressed concerns about various aspects of the implementation, particularly around several key areas of operational impact. + +**After** +> Three engineers and the head of customer support said the rollout broke production deploys, slowed the help-desk queue by half, and forced a 4 a.m. rollback. + +**How to apply:** Underline every abstract noun and every quantity word ("several", "various", "many", "a number of", "a variety of"). For each, ask "what specifically?" 
Replace with the concrete answer. If you don't know the concrete answer, the sentence isn't ready — find out, or cut the sentence. + + +### S-5. Do not overstate + +**Source:** Strunk & White, *The Elements of Style*, 4th ed., V.7 + +> "When you overstate, the reader will be instantly on guard, and everything that has preceded your overstatement as well as everything that follows it will be suspect." +> — Strunk and White, V.7 + +**Cross-references:** W-4 (promotional language), W-1 (significance inflation), W-25 (generic positive conclusions) +**Context tags:** all +**Detection cue:** Superlatives without evidence: "groundbreaking", "transformative", "revolutionary", "world-class", "best-in-class", "unparalleled", "the most ___ ever", "incredibly", "absolutely", "literally" (when not literal). LLM marketing words: "boasts", "stunning", "must-have", "game-changing". + +**Problem:** Overstatement is the LLM's default register. Trained on marketing copy, blog headlines, and Wikipedia-with-puffery, it reaches for the strongest adjective whether or not the strength is earned. Strunk and White's warning is sharper than "don't be cheesy" — overstatement makes the *whole piece* less credible. One unearned superlative casts doubt on every adjacent claim. + +**Before** +> Our groundbreaking new platform is a transformative, best-in-class solution that boasts unparalleled performance and a stunning user experience. + +**After** +> The platform is faster than the previous version on the three benchmarks we tested. The new dashboard surfaces alerts that used to require three clicks to find. + +**How to apply:** Scan for superlatives and intensifiers. For each, ask "is this earned by a specific fact in the same paragraph?" If yes, the superlative can stay (and the fact does the convincing). If no, cut the superlative and replace with the specific fact, or cut the sentence. 
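The superlative scan in the how-to above is mechanical enough to script. A minimal Python sketch; the watch list is an illustrative subset of the S-5 detection cue, and the function name is invented for the example:

```python
import re

# Illustrative subset of the S-5 watch list; extend with the full detection cue.
INTENSIFIERS = {
    "groundbreaking", "transformative", "revolutionary", "world-class",
    "best-in-class", "unparalleled", "stunning", "game-changing",
    "incredibly", "absolutely",
}

def unearned_superlatives(paragraph: str) -> list[str]:
    """Return watch-list words found in the paragraph, in order of appearance."""
    # Tokenize on letters, keeping hyphenated compounds like "best-in-class" whole.
    tokens = re.findall(r"[a-z]+(?:-[a-z]+)*", paragraph.lower())
    return [t for t in tokens if t in INTENSIFIERS]
```

The script only finds candidates. The S-5 judgment call, whether a specific fact in the same paragraph earns the word, stays with the editor.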
diff --git a/humanizer-classics/references/wikipedia-signs-of-ai-writing.md b/humanizer-classics/references/wikipedia-signs-of-ai-writing.md new file mode 100644 index 00000000..1a3cc25b --- /dev/null +++ b/humanizer-classics/references/wikipedia-signs-of-ai-writing.md @@ -0,0 +1,384 @@ +# Wikipedia: Signs of AI Writing (29 patterns) + +**Source:** [Wikipedia:Signs of AI writing](https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing), maintained by WikiProject AI Cleanup +**Type:** Pattern detection (negative space — what AI writing *looks like*) +**License:** CC BY-SA 4.0 (Wikipedia content) +**Coverage:** Lifted from humanizer v2.5.1 (commit 8b3a178, 2026-03-31) + +These 29 rules describe the surface tells of LLM-generated text. They pair with the craft-prescriptive book-grounded rules (Z, S, H prefixes) elsewhere in `references/`. When a book rule and a Wikipedia rule overlap, the Wikipedia rule helps you *spot* the problem and the book rule helps you *fix* it well. + +Key insight from Wikipedia: *"LLMs use statistical algorithms to guess what should come next. The result tends toward the most statistically likely result that applies to the widest variety of cases."* + +--- + +## CONTENT PATTERNS + +### W-1. Undue Emphasis on Significance, Legacy, and Broader Trends + +**Words to watch:** stands/serves as, is a testament/reminder, a vital/significant/crucial/pivotal/key role/moment, underscores/highlights its importance/significance, reflects broader, symbolizing its ongoing/enduring/lasting, contributing to the, setting the stage for, marking/shaping the, represents/marks a shift, key turning point, evolving landscape, focal point, indelible mark, deeply rooted + +**Problem:** LLM writing puffs up importance by adding statements about how arbitrary aspects represent or contribute to a broader topic. 
+ +**Before:** +> The Statistical Institute of Catalonia was officially established in 1989, marking a pivotal moment in the evolution of regional statistics in Spain. This initiative was part of a broader movement across Spain to decentralize administrative functions and enhance regional governance. + +**After:** +> The Statistical Institute of Catalonia was established in 1989 to collect and publish regional statistics independently from Spain's national statistics office. + + +### W-2. Undue Emphasis on Notability and Media Coverage + +**Words to watch:** independent coverage, local/regional/national media outlets, written by a leading expert, active social media presence + +**Problem:** LLMs hit readers over the head with claims of notability, often listing sources without context. + +**Before:** +> Her views have been cited in The New York Times, BBC, Financial Times, and The Hindu. She maintains an active social media presence with over 500,000 followers. + +**After:** +> In a 2024 New York Times interview, she argued that AI regulation should focus on outcomes rather than methods. + + +### W-3. Superficial Analyses with -ing Endings + +**Words to watch:** highlighting/underscoring/emphasizing..., ensuring..., reflecting/symbolizing..., contributing to..., cultivating/fostering..., encompassing..., showcasing... + +**Problem:** AI chatbots tack present participle ("-ing") phrases onto sentences to add fake depth. + +**Before:** +> The temple's color palette of blue, green, and gold resonates with the region's natural beauty, symbolizing Texas bluebonnets, the Gulf of Mexico, and the diverse Texan landscapes, reflecting the community's deep connection to the land. + +**After:** +> The temple uses blue, green, and gold colors. The architect said these were chosen to reference local bluebonnets and the Gulf coast. + + +### W-4. 
Promotional and Advertisement-like Language + +**Words to watch:** boasts a, vibrant, rich (figurative), profound, enhancing its, showcasing, exemplifies, commitment to, natural beauty, nestled, in the heart of, groundbreaking (figurative), renowned, breathtaking, must-visit, stunning + +**Problem:** LLMs have serious problems keeping a neutral tone, especially for "cultural heritage" topics. + +**Before:** +> Nestled within the breathtaking region of Gonder in Ethiopia, Alamata Raya Kobo stands as a vibrant town with a rich cultural heritage and stunning natural beauty. + +**After:** +> Alamata Raya Kobo is a town in the Gonder region of Ethiopia, known for its weekly market and 18th-century church. + + +### W-5. Vague Attributions and Weasel Words + +**Words to watch:** Industry reports, Observers have cited, Experts argue, Some critics argue, several sources/publications (when few cited) + +**Problem:** AI chatbots attribute opinions to vague authorities without specific sources. + +**Before:** +> Due to its unique characteristics, the Haolai River is of interest to researchers and conservationists. Experts believe it plays a crucial role in the regional ecosystem. + +**After:** +> The Haolai River supports several endemic fish species, according to a 2019 survey by the Chinese Academy of Sciences. + + +### W-6. Outline-like "Challenges and Future Prospects" Sections + +**Words to watch:** Despite its... faces several challenges..., Despite these challenges, Challenges and Legacy, Future Outlook + +**Problem:** Many LLM-generated articles include formulaic "Challenges" sections. + +**Before:** +> Despite its industrial prosperity, Korattur faces challenges typical of urban areas, including traffic congestion and water scarcity. Despite these challenges, with its strategic location and ongoing initiatives, Korattur continues to thrive as an integral part of Chennai's growth. + +**After:** +> Traffic congestion increased after 2015 when three new IT parks opened. 
The municipal corporation began a stormwater drainage project in 2022 to address recurring floods. + + +## LANGUAGE AND GRAMMAR PATTERNS + +### W-7. Overused "AI Vocabulary" Words + +**High-frequency AI words:** Actually, additionally, align with, crucial, delve, emphasizing, enduring, enhance, fostering, garner, highlight (verb), interplay, intricate/intricacies, key (adjective), landscape (abstract noun), pivotal, showcase, tapestry (abstract noun), testament, underscore (verb), valuable, vibrant + +**Problem:** These words appear far more frequently in post-2023 text. They often co-occur. + +**Before:** +> Additionally, a distinctive feature of Somali cuisine is the incorporation of camel meat. An enduring testament to Italian colonial influence is the widespread adoption of pasta in the local culinary landscape, showcasing how these dishes have integrated into the traditional diet. + +**After:** +> Somali cuisine also includes camel meat, which is considered a delicacy. Pasta dishes, introduced during Italian colonization, remain common, especially in the south. + + +### W-8. Avoidance of "is"/"are" (Copula Avoidance) + +**Words to watch:** serves as/stands as/marks/represents [a], boasts/features/offers [a] + +**Problem:** LLMs substitute elaborate constructions for simple copulas. + +**Before:** +> Gallery 825 serves as LAAA's exhibition space for contemporary art. The gallery features four separate spaces and boasts over 3,000 square feet. + +**After:** +> Gallery 825 is LAAA's exhibition space for contemporary art. The gallery has four rooms totaling 3,000 square feet. + + +### W-9. Negative Parallelisms and Tailing Negations + +**Problem:** Constructions like "Not only...but..." or "It's not just about..., it's..." are overused. So are clipped tailing-negation fragments such as "no guessing" or "no wasted motion" tacked onto the end of a sentence instead of written as a real clause. 
+ +**Before:** +> It's not just about the beat riding under the vocals; it's part of the aggression and atmosphere. It's not merely a song, it's a statement. + +**After:** +> The heavy beat adds to the aggressive tone. + +**Before (tailing negation):** +> The options come from the selected item, no guessing. + +**After:** +> The options come from the selected item without forcing the user to guess. + + +### W-10. Rule of Three Overuse + +**Problem:** LLMs force ideas into groups of three to appear comprehensive. + +**Before:** +> The event features keynote sessions, panel discussions, and networking opportunities. Attendees can expect innovation, inspiration, and industry insights. + +**After:** +> The event includes talks and panels. There's also time for informal networking between sessions. + + +### W-11. Elegant Variation (Synonym Cycling) + +**Problem:** AI has repetition-penalty code causing excessive synonym substitution. + +**Before:** +> The protagonist faces many challenges. The main character must overcome obstacles. The central figure eventually triumphs. The hero returns home. + +**After:** +> The protagonist faces many challenges but eventually triumphs and returns home. + + +### W-12. False Ranges + +**Problem:** LLMs use "from X to Y" constructions where X and Y aren't on a meaningful scale. + +**Before:** +> Our journey through the universe has taken us from the singularity of the Big Bang to the grand cosmic web, from the birth and death of stars to the enigmatic dance of dark matter. + +**After:** +> The book covers the Big Bang, star formation, and current theories about dark matter. + + +### W-13. Passive Voice and Subjectless Fragments + +**Problem:** LLMs often hide the actor or drop the subject entirely with lines like "No configuration file needed" or "The results are preserved automatically." Rewrite these when active voice makes the sentence clearer and more direct. + +**Before:** +> No configuration file needed. 
The results are preserved automatically. + +**After:** +> You do not need a configuration file. The system preserves the results automatically. + + +## STYLE PATTERNS + +### W-14. Em Dash Overuse + +**Problem:** LLMs use em dashes (—) more than humans, mimicking "punchy" sales writing. In practice, most of these can be rewritten more cleanly with commas, periods, or parentheses. + +**Before:** +> The term is primarily promoted by Dutch institutions—not by the people themselves. You don't say "Netherlands, Europe" as an address—yet this mislabeling continues—even in official documents. + +**After:** +> The term is primarily promoted by Dutch institutions, not by the people themselves. You don't say "Netherlands, Europe" as an address, yet this mislabeling continues in official documents. + + +### W-15. Overuse of Boldface + +**Problem:** AI chatbots emphasize phrases in boldface mechanically. + +**Before:** +> It blends **OKRs (Objectives and Key Results)**, **KPIs (Key Performance Indicators)**, and visual strategy tools such as the **Business Model Canvas (BMC)** and **Balanced Scorecard (BSC)**. + +**After:** +> It blends OKRs, KPIs, and visual strategy tools like the Business Model Canvas and Balanced Scorecard. + + +### W-16. Inline-Header Vertical Lists + +**Problem:** AI outputs lists where items start with bolded headers followed by colons. + +**Before:** +> - **User Experience:** The user experience has been significantly improved with a new interface. +> - **Performance:** Performance has been enhanced through optimized algorithms. +> - **Security:** Security has been strengthened with end-to-end encryption. + +**After:** +> The update improves the interface, speeds up load times through optimized algorithms, and adds end-to-end encryption. + + +### W-17. Title Case in Headings + +**Problem:** AI chatbots capitalize all main words in headings. 
+ +**Before:** +> ## Strategic Negotiations And Global Partnerships + +**After:** +> ## Strategic negotiations and global partnerships + + +### W-18. Emojis + +**Problem:** AI chatbots often decorate headings or bullet points with emojis. + +**Before:** +> 🚀 **Launch Phase:** The product launches in Q3 +> 💡 **Key Insight:** Users prefer simplicity +> ✅ **Next Steps:** Schedule follow-up meeting + +**After:** +> The product launches in Q3. User research showed a preference for simplicity. Next step: schedule a follow-up meeting. + + +### W-19. Curly Quotation Marks + +**Problem:** ChatGPT uses curly quotes ("…") instead of straight quotes ("..."). + +**Before:** +> He said "the project is on track" but others disagreed. + +**After:** +> He said "the project is on track" but others disagreed. + + +## COMMUNICATION PATTERNS + +### W-20. Collaborative Communication Artifacts + +**Words to watch:** I hope this helps, Of course!, Certainly!, You're absolutely right!, Would you like..., let me know, here is a... + +**Problem:** Text meant as chatbot correspondence gets pasted as content. + +**Before:** +> Here is an overview of the French Revolution. I hope this helps! Let me know if you'd like me to expand on any section. + +**After:** +> The French Revolution began in 1789 when financial crisis and food shortages led to widespread unrest. + + +### W-21. Knowledge-Cutoff Disclaimers + +**Words to watch:** as of [date], Up to my last training update, While specific details are limited/scarce..., based on available information... + +**Problem:** AI disclaimers about incomplete information get left in text. + +**Before:** +> While specific details about the company's founding are not extensively documented in readily available sources, it appears to have been established sometime in the 1990s. + +**After:** +> The company was founded in 1994, according to its registration documents. + + +### W-22.
Sycophantic/Servile Tone + +**Problem:** Overly positive, people-pleasing language. + +**Before:** +> Great question! You're absolutely right that this is a complex topic. That's an excellent point about the economic factors. + +**After:** +> The economic factors you mentioned are relevant here. + + +## FILLER AND HEDGING + +### W-23. Filler Phrases + +**Before → After:** +- "In order to achieve this goal" → "To achieve this" +- "Due to the fact that it was raining" → "Because it was raining" +- "At this point in time" → "Now" +- "In the event that you need help" → "If you need help" +- "The system has the ability to process" → "The system can process" +- "It is important to note that the data shows" → "The data shows" + + +### W-24. Excessive Hedging + +**Problem:** Over-qualifying statements. + +**Before:** +> It could potentially possibly be argued that the policy might have some effect on outcomes. + +**After:** +> The policy may affect outcomes. + + +### W-25. Generic Positive Conclusions + +**Problem:** Vague upbeat endings. + +**Before:** +> The future looks bright for the company. Exciting times lie ahead as they continue their journey toward excellence. This represents a major step in the right direction. + +**After:** +> The company plans to open two more locations next year. + + +### W-26. Hyphenated Word Pair Overuse + +**Words to watch:** third-party, cross-functional, client-facing, data-driven, decision-making, well-known, high-quality, real-time, long-term, end-to-end + +**Problem:** AI hyphenates common word pairs with perfect consistency. Humans rarely hyphenate these uniformly, and when they do, it's inconsistent. Less common or technical compound modifiers are fine to hyphenate. + +**Before:** +> The cross-functional team delivered a high-quality, data-driven report on our client-facing tools. Their decision-making process was well-known for being thorough and detail-oriented. 
+ +**After:** +> The cross functional team delivered a high quality, data driven report on our client facing tools. Their decision making process was known for being thorough and detail oriented. + + +### W-27. Persuasive Authority Tropes + +**Phrases to watch:** The real question is, at its core, in reality, what really matters, fundamentally, the deeper issue, the heart of the matter + +**Problem:** LLMs use these phrases to pretend they are cutting through noise to some deeper truth, when the sentence that follows usually just restates an ordinary point with extra ceremony. + +**Before:** +> The real question is whether teams can adapt. At its core, what really matters is organizational readiness. + +**After:** +> The question is whether teams can adapt. That mostly depends on whether the organization is ready to change its habits. + + +### W-28. Signposting and Announcements + +**Phrases to watch:** Let's dive in, let's explore, let's break this down, here's what you need to know, now let's look at, without further ado + +**Problem:** LLMs announce what they are about to do instead of doing it. This meta-commentary slows the writing down and gives it a tutorial-script feel. + +**Before:** +> Let's dive into how caching works in Next.js. Here's what you need to know. + +**After:** +> Next.js caches data at multiple layers, including request memoization, the data cache, and the router cache. + + +### W-29. Fragmented Headers + +**Signs to watch:** A heading followed by a one-line paragraph that simply restates the heading before the real content begins. + +**Problem:** LLMs often add a generic sentence after a heading as a rhetorical warm-up. It usually adds nothing and makes the prose feel padded. + +**Before:** +> ## Performance +> +> Speed matters. +> +> When users hit a slow page, they leave. + +**After:** +> ## Performance +> +> When users hit a slow page, they leave. 
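
Several of the style-layer W-rules above (W-14, W-16, W-18, W-19) are mechanical enough to count before anyone reads the prose. A minimal pre-screen sketch, not part of the skill itself; Python and all helper names here are this document's invention, and the skill stays pure Markdown:

```python
import re

# Hypothetical pre-screen: counts the purely mechanical signals so a reviewer
# knows which W-rule sections to read first. Nonzero means "look closer".
MECHANICAL_CHECKS = {
    # W-14: em dash overuse
    "W-14 em dashes": lambda t: t.count("\u2014"),
    # W-19: curly quotation marks, double and single
    "W-19 curly quotes": lambda t: len(re.findall("[\u201c\u201d\u2018\u2019]", t)),
    # W-16: bullets that open with a bolded inline header ending in a colon
    "W-16 inline-header bullets": lambda t: len(
        re.findall(r"^\s*[-*]\s+\*\*[^*]+:\*\*", t, re.M)
    ),
    # W-18: bullets or headings led by a pictographic emoji. U+1F300-U+1FAFF
    # covers most of them; symbols like the check mark (U+2705) live in
    # other Unicode blocks and would need extra ranges.
    "W-18 emoji bullets": lambda t: len(
        re.findall(r"(?m)^[-#*]*\s*[\U0001F300-\U0001FAFF]", t)
    ),
}

def prescreen(text: str) -> dict:
    """Return a count per mechanical signal for the given draft."""
    return {name: check(text) for name, check in MECHANICAL_CHECKS.items()}
```

This only flags the signals with unambiguous surface forms; everything from W-1 to W-13 still needs a reader.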
diff --git a/humanizer-classics/references/zinsser-on-writing-well.md b/humanizer-classics/references/zinsser-on-writing-well.md new file mode 100644 index 00000000..b01b21c3 --- /dev/null +++ b/humanizer-classics/references/zinsser-on-writing-well.md @@ -0,0 +1,154 @@ +# On Writing Well — William Zinsser + +**Source:** William Zinsser, *On Writing Well: The Classic Guide to Writing Nonfiction*, 30th anniversary edition (HarperCollins, 2006) +**Type:** Craft prescription (positive guidance — what good writing *does*) +**License of pull-quotes:** Fair use — short excerpts for educational commentary +**Rule prefix:** `Z` + +## Why this book belongs in humanizer-classics + +Zinsser teaches the *positive moves* that LLM writing skips. Where Wikipedia's rules tell you to spot "AI vocabulary words" and "filler phrases," Zinsser tells you what to do once you've spotted them: cut, choose plain words, prefer active verbs, put yourself on the page. His central claim — that good writing is rewriting, and rewriting is mostly cutting — is the antidote to LLM output, which generates lush first drafts and stops there. These rules apply to nearly every form of nonfiction: blog posts, emails, book drafts, essays, dictation cleanup. + +## Rules in this file + +| ID | Rule (one line) | Chapter / reference | +|----|-----------------|---------------------| +| Z-1 | Cut clutter — every word that does no work | Ch. 3 "Clutter" | +| Z-2 | Use short, plain, Anglo-Saxon words; concrete over abstract | Ch. 4 "Style", Ch. 6 "Words" | +| Z-3 | Active verbs do the work; kill nominalizations | Ch. 10 "Bits and Pieces" — Verbs | +| Z-4 | Strip qualifiers — "a bit", "rather", "sort of", "quite" | Ch. 3 "Clutter" | +| Z-5 | Be present on the page; have a self | Ch. 4 "Style", Ch. 5 "The Audience" | +| Z-6 | Endings matter — quit when the work is done | Ch. 9 "The Lead and the Ending" | + +--- + +### Z-1. Cut clutter — every word that does no work + +**Source:** Zinsser, *On Writing Well*, Ch. 
3 "Clutter" + +> "Clutter is the disease of American writing. We are a society strangling in unnecessary words, circular constructions, pompous frills, and meaningless jargon." +> — Zinsser, opening of Ch. 3 + +**Cross-references:** W-7 (AI vocabulary), W-23 (filler), W-27 (persuasive tropes) +**Context tags:** all +**Detection cue:** Sentences over 25 words. Phrases like "in order to", "the fact that", "at this point in time", "it should be noted that", "currently in the process of". Doubled qualifiers ("really very", "quite extremely"). + +**Problem:** LLMs pad. They generate fluent, plausible-sounding sentences and rarely cut. Each phrase passes muster on its own, but read aloud the prose is bloated with words that add no information. Zinsser's discipline: examine every sentence and ask whether each word is doing real work. + +**Before** +> In order to facilitate a better understanding of the matter at hand, it is important to note that the implementation process will require a significant amount of time. + +**After** +> Implementation will take time. + +**How to apply:** Read the draft sentence by sentence. For each word, ask "what is this word doing?" If the answer is "softening", "qualifying", "throat-clearing", or "padding to length", cut it. Then ask whether the surviving sentence still says everything that mattered. It almost always does. + + +### Z-2. Use short, plain, Anglo-Saxon words; concrete over abstract + +**Source:** Zinsser, *On Writing Well*, Ch. 4 "Style" and Ch. 6 "Words" + +> "Don't dialogue with someone you can talk to. Don't interface with anybody." +> — Zinsser, on the urge to dress up plain verbs + +**Cross-references:** W-7 (AI vocabulary), W-3 (-ing analyses), S-4 (specific concrete language) +**Context tags:** all (especially blog, email, book-draft) +**Detection cue:** Latinate verbs where Anglo-Saxon ones would land harder: "utilize" (use), "facilitate" (help), "leverage" (use), "optimize" (improve), "operationalize" (do). 
Nouns like "individual" (person), "vehicle" (car), "residence" (home). + +**Problem:** LLMs reach for the longer word because the longer word feels professional. The reader's ear hears the opposite — long Latinate words slow comprehension and sound corporate. Zinsser's prescription is the short Saxon verb that an English speaker would actually use in conversation. + +**Before** +> The team will utilize a variety of methodologies in order to facilitate alignment among stakeholders and operationalize the deliverables. + +**After** +> The team will use several methods to get everyone on the same page and ship the work. + +**How to apply:** When two words mean the same thing, pick the shorter one. When the longer word is from Latin or French and the shorter is from Old English, the Anglo-Saxon usually wins (use > utilize, help > facilitate, end > terminate, start > commence, get > obtain). Concrete beats abstract: "the meeting" beats "the engagement", "the report" beats "the deliverable". + + +### Z-3. Active verbs do the work; kill nominalizations + +**Source:** Zinsser, *On Writing Well*, Ch. 10 "Bits and Pieces" — section on Verbs + +> "Verbs are the most important of all your tools. They push the sentence forward and give it momentum." +> — Zinsser, Ch. 10 + +**Cross-references:** W-13 (passive voice), S-2 (active voice), W-8 (copula avoidance) +**Context tags:** all +**Detection cue:** Nominalized verbs ("make a decision" = decide; "have a discussion" = discuss; "perform an analysis" = analyze; "provide assistance" = help). Passive constructions where the actor is hidden ("a decision was made"). Long subject phrases that delay the verb. + +**Problem:** LLMs love nominalizations. They turn verbs into nouns and stuff them into noun phrases connected by weak verbs ("have", "make", "perform", "conduct"). The result reads like a status report. Zinsser's move is to find the buried verb and let it act. 
+ +**Before** +> The committee made a decision to conduct a review of the proposal and provide a recommendation. + +**After** +> The committee decided to review the proposal and recommend next steps. + +**How to apply:** Underline every verb. If the verb is "be", "have", "make", "do", "perform", or "conduct", look for a noun nearby that's secretly a verb in disguise. Promote it back to verb form. The sentence shortens and moves. + + +### Z-4. Strip qualifiers — "a bit", "rather", "sort of", "quite" + +**Source:** Zinsser, *On Writing Well*, Ch. 3 "Clutter" + +> "Don't say you were a bit confused and sort of tired and a little depressed. Be confused. Be tired. Be depressed." +> — Zinsser, Ch. 3 (paraphrased; common rendering) + +**Cross-references:** W-24 (excessive hedging), S-3 (positive form) +**Context tags:** all +**Detection cue:** Qualifier words: "a bit", "rather", "quite", "somewhat", "kind of", "sort of", "fairly", "pretty", "relatively", "in some sense". Doubled hedges: "I think it might possibly", "could potentially possibly". + +**Problem:** Qualifiers signal a writer trying to soften a claim into safety. They make every assertion look like a hedged guess. LLMs use them constantly because they're statistically safe and make completions less risky. The cost is conviction. The reader stops believing the writer believes anything. + +**Before** +> The new policy could potentially have a somewhat significant effect on outcomes, and it might be argued that adoption is fairly likely. + +**After** +> The new policy will probably affect outcomes. Most teams will adopt it. + +**How to apply:** Search the draft for the qualifier list above. For each, decide: is this a real probability hedge (then quantify — "probably", "in most cases", or with a number), or is it cowardice? If cowardice, cut it. + + +### Z-5. Be present on the page; have a self + +**Source:** Zinsser, *On Writing Well*, Ch. 4 "Style" and Ch.
5 "The Audience" + +> "Sell yourself, and your subject will exert its own appeal. Believe in your own identity and your own opinions." +> — Zinsser, Ch. 5 + +**Cross-references:** W-22 (sycophantic tone), W-25 (generic conclusion), H-5 (throat-clearing) +**Context tags:** blog, book-draft, email, dictation (NOT memo, technical-doc, meeting-notes) +**Detection cue:** Total absence of "I", "we", or any sentence that takes a stand. Neutral, on-the-other-hand prose that lists positions without picking one. Closing paragraphs that summarize rather than land. Anywhere the writing could be by anyone. + +**Problem:** LLMs default to the no-self voice — balanced, neutral, polite, opinionless. It reads like Wikipedia. Zinsser's argument is that the reader is not coming for facts they could get elsewhere; they're coming for *your* take. The writer must be on the page. The writing has to come from someone. + +**Before** +> There are arguments on both sides of the AI productivity debate. Some studies suggest gains; others find no significant effect. The reality is likely somewhere in between, and individuals should consider their own context. + +**After** +> I've watched developers I respect fall into two camps on AI coding tools. The first treats Copilot like autocomplete for chores and reviews every line. The second tried it, hated the suggestions, and turned it off. Both are right for who they are. The studies aren't going to settle this for you — your codebase will. + +**How to apply:** When the form allows first person, use it. State a position and the reason for it. Tell a small specific story. If you wouldn't say it out loud to a friend who asked, don't write it. The exception is forms where first person is wrong (corporate memos, technical docs, news reporting) — there, "be present" means having a clear point of view in the structure even when "I" doesn't appear. **This rule should NOT fire on memo, technical-doc, or meeting-notes contexts** — see context tags. + + +### Z-6.
Endings matter — quit when the work is done + +**Source:** Zinsser, *On Writing Well*, Ch. 9 "The Lead and the Ending" + +> "The perfect ending should take the reader slightly by surprise and yet seem exactly right." +> — Zinsser, Ch. 9 + +**Cross-references:** W-25 (generic positive conclusions), W-6 (challenges-and-future-prospects), H-1 (lead with bottom line) +**Context tags:** all +**Detection cue:** Closing paragraphs that begin "In conclusion", "To summarize", "All in all", "At the end of the day", "As we move forward". Endings that restate everything the piece already said. Endings that gesture at "exciting times ahead" or "the journey continues" without specifics. + +**Problem:** LLMs are trained on documents that have wind-down endings, so they manufacture them — generic uplift, summary recapitulation, broad gestures at the future. The piece slows to a stop instead of landing. Zinsser's discipline: find the sentence where the piece *actually ends*, and stop there. Trust the reader. + +**Before** +> In conclusion, AI coding assistants represent a transformative shift in software development. As we continue this exciting journey, the future looks bright. There are challenges ahead, but with thoughtful adoption, the benefits will outweigh the risks for years to come. + +**After** +> If you don't have tests, you're guessing. That's the real cost. + +**How to apply:** Find the last sentence in the draft that actually says something. Delete everything after it. If the ending now feels abrupt, that's usually the right feel — Zinsser argues the abruptness is what makes a closing land. If you cut and the piece truly hangs, write *one* concrete sentence (a fact, a quote, a turn) — not a summary. 
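
The Z-1 and Z-4 detection cues are concrete enough to sketch as a pre-pass. A rough illustration, not part of the skill (the word list and the 25-word threshold come from the cues above; the function names are invented for this sketch):

```python
import re

# Illustrative only; the skill applies these cues by reading, not by script.
QUALIFIERS = [  # the Z-4 detection list
    "a bit", "rather", "quite", "somewhat", "kind of", "sort of",
    "fairly", "pretty", "relatively", "in some sense",
]

def long_sentences(text, limit=25):
    """Z-1 cue: sentences over `limit` words deserve a clutter pass."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [s for s in sentences if len(s.split()) > limit]

def qualifier_hits(text):
    """Z-4 cue: which qualifiers appear, and how often.

    Naive substring match, so "pretty" inside a longer word would also
    count; good enough for a pre-pass that a human reviews anyway.
    """
    lowered = text.lower()
    return {q: lowered.count(q) for q in QUALIFIERS if q in lowered}
```

Each hit still needs the human judgment the rules describe: a flagged qualifier may be a real probability hedge, and a long sentence may earn its length.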
diff --git a/humanizer-classics/tests/REVIEWING.md b/humanizer-classics/tests/REVIEWING.md new file mode 100644 index 00000000..456fdc20 --- /dev/null +++ b/humanizer-classics/tests/REVIEWING.md @@ -0,0 +1,87 @@ +# Manual Reviewer Checklist + +`humanizer-classics` is pure Markdown. There's no automated test runner. Verification is a **golden-corpus eyeball check**: a reviewer runs the skill against the samples in `corpus/` and confirms the expected rules fire and the rewrites read naturally. + +Run this checklist **before merging any rule PR**. + +--- + +## Setup + +1. Install the skill locally: + ```bash + ln -s "$(pwd)/humanizer-classics" ~/.claude/skills/humanizer-classics + ``` + (Or symlink to `~/.config/opencode/skills/` for OpenCode.) + +2. Restart your Claude Code session so the skill is reloaded. + +3. Confirm `/humanizer-classics` appears in the skills list. + +--- + +## Per-PR check (every rule PR) + +For each of the 6+ corpus samples in `corpus/`: + +- [ ] Read the `.md` file (the slop input). +- [ ] Read the `.expected-fixes.md` file (the list of rule IDs that should fire). +- [ ] Run the skill: `/humanizer-classics` followed by the slop content. +- [ ] Confirm: + - [ ] The rules listed in `.expected-fixes.md` actually fire (mentioned by ID in the skill's "Rules applied" output, or visibly addressed in the rewrite) + - [ ] No rules fire that *shouldn't* on this sample (the new rule under review doesn't fire spuriously on samples it doesn't apply to) + - [ ] The rewrite reads naturally — no over-correction, no robotic chopping of every modifier + - [ ] Meaning is preserved — facts and the writer's intent survive + - [ ] The context tag chosen by the skill matches the `context: ` line at the top of the sample + +If any sample fails, the rule needs revision before merging. 
+ +--- + +## Per-book check (when adding a new book) + +In addition to the per-PR check above: + +- [ ] At least one corpus sample exercises rules from the new book and lists them in `.expected-fixes.md`. +- [ ] The new book's reference file follows the format in `references/_template-book-rules.md` exactly. +- [ ] The new book's rules appear in `references/_rule-index.md` (both the main table and the cross-reference graph if applicable). +- [ ] The new book's rules appear in `SKILL.md`'s craft-rule catalog. +- [ ] The new book is mentioned in `README.md`'s "Books currently included" list. +- [ ] Pull-quotes are 10-25 words and properly attributed (book, edition, chapter or page). + +--- + +## v2.x non-regression check + +The v2 skill must still work for users on `~/.claude/skills/humanizer/`. After installing v3: + +- [ ] `/humanizer` (v2) still loads in Claude Code. +- [ ] `/humanizer-classics` (v3) loads alongside it without conflict. +- [ ] Running both on the same input produces different (but reasonable) outputs. + +--- + +## Skill structural sanity + +- [ ] `SKILL.md` frontmatter parses (name, version, description, license, compatibility, allowed-tools). +- [ ] Every `Read references/.md` directive in `SKILL.md` points to a file that exists. +- [ ] Every rule ID mentioned in the skill is also in `references/_rule-index.md`. +- [ ] Every reference file follows the format from `_template-book-rules.md` (or `wikipedia-signs-of-ai-writing.md` for the imported W-rules). + +--- + +## When the corpus needs updating + +Update the corpus when: + +- A new rule introduces a pattern not represented in the existing samples +- A new book joins the canon (add at least one sample exercising its rules) +- A user reports a real-world failure mode that the corpus didn't catch (add the failing sample as a new corpus entry, with the expected rules listed) + +Don't update the corpus to "make a failing rule pass" — fix the rule. 
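
The "every rule ID mentioned in the skill is also in the index" check above is the easiest one to eyeball wrong. If you want a throwaway script for it, a comparison like this works, assuming rule IDs always match the `W-7` / `Z-1` / `S-4` / `H-2` shape (the repo itself requires no automation, and these helper names are invented here):

```python
import re

# Rule IDs as used throughout humanizer-classics: a book prefix, a dash, a number.
RULE_ID = re.compile(r"\b([WZSH]-\d+)\b")

def rule_ids(text: str) -> set:
    """Collect every rule ID mentioned anywhere in a document's text."""
    return set(RULE_ID.findall(text))

def missing_from_index(skill_text: str, index_text: str) -> set:
    """IDs referenced in SKILL.md but absent from references/_rule-index.md."""
    return rule_ids(skill_text) - rule_ids(index_text)
```

Feed it the raw contents of `SKILL.md` and `_rule-index.md`; an empty result set means the checklist item passes.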
+ +--- + +## Audit log (optional) + +Some maintainers keep an `audit-log.md` with a one-line note for each release: "v3.0.x — ran corpus, all 6 samples passed, no regressions on W-rules." This is helpful for reviewers but not required. diff --git a/humanizer-classics/tests/corpus/01-marketing-slop.expected-fixes.md b/humanizer-classics/tests/corpus/01-marketing-slop.expected-fixes.md new file mode 100644 index 00000000..7cece776 --- /dev/null +++ b/humanizer-classics/tests/corpus/01-marketing-slop.expected-fixes.md @@ -0,0 +1,38 @@ +# Expected fixes for 01-marketing-slop.md + +**Context tag:** blog (default for unspecified marketing copy) + +## Detection rules that should fire + +- W-1 — significance/legacy inflation ("stands as a testament", "pivotal moment in the evolution", "represents") +- W-3 — superficial -ing analyses ("ensuring that every feature", "supporting your goals while contributing to") +- W-4 — promotional language ("groundbreaking", "nestled", "stunning", "transformative", "unparalleled", "best-in-class", "seamlessly") +- W-5 — vague attributions ("Industry observers have noted") +- W-6 — outline-like challenges section ("face challenges typical of digital transformation... Despite these challenges") +- W-7 — AI vocabulary ("crucial", "fostering", "showcasing", "emphasizing", "vibrant", "landscape") +- W-8 — copula avoidance ("Atlas's intelligent engine handles" is okay, but "more than just a tool—it's a transformative platform" is the dressed-up "is") +- W-9 — negative parallelism ("not just keep up, but lead", "more than just a tool") +- W-10 — rule of three ("collaborate, communicate, and create"; "siloed information... inefficient processes... 
misaligned priorities"; "seamless, intuitive, and powerful") +- W-12 — false ranges ("from siloed information to inefficient processes to misaligned priorities") +- W-14 — em dashes ("more than just a tool—it's a transformative platform", "do exactly that—not just keep up, but lead", "exciting times lie ahead—we can't wait") +- W-15 — boldface overuse (every bullet has bolded inline header) +- W-16 — inline-header vertical lists (`**Smart Automation:** ...`) +- W-17 — title case in headings ("What Makes Atlas Special", "Built for the Modern Workplace") +- W-18 — emoji decoration on every heading and bullet +- W-20 — chatbot artifacts ("Let us know if you'd like to learn more!", "We're thrilled to announce") +- W-25 — generic positive conclusion ("future of work is bright", "exciting times lie ahead", "we can't wait to see what you build") +- W-26 — hyphenated word pair overuse ("cross-functional", "data-driven", "best-in-class", "real-time") +- W-27 — persuasive authority tropes ("At its core", "what matters most") + +## Craft rules that should be applied for the fix + +- Z-1 (cut clutter) — most of this can simply be cut +- Z-2 (plain words) — replace "leverage", "unlock potential", "empower" with plain verbs +- Z-6 (endings matter — quit when done) — kill the closing uplift; end on a fact +- S-4 (specific concrete language) — name an actual feature, an actual customer, an actual number +- S-5 (do not overstate) — the entire piece overstates; this rule is the dominant fix + +## Notes for reviewers + +- Z-5 (be present on the page) is debatable here — marketing copy from a company often deliberately omits "I". The skill should treat this as `blog` context but recognize it's company-voice, not personal-voice. The fix should make it specific and direct, not necessarily first person. +- The rewrite should preserve that this is a launch announcement — don't strip out the fact that the product is being introduced. 
The job is to make it sound like a person announcing a thing they made, not a press release. diff --git a/humanizer-classics/tests/corpus/01-marketing-slop.md b/humanizer-classics/tests/corpus/01-marketing-slop.md new file mode 100644 index 00000000..c16dbcd7 --- /dev/null +++ b/humanizer-classics/tests/corpus/01-marketing-slop.md @@ -0,0 +1,29 @@ +--- +title: Product launch announcement (heavy AI marketing slop) +context: blog +notes: Classic LLM-generated marketing copy. Promotional language, significance inflation, generic uplift, rule-of-three, em dashes, emoji headers. +--- + +# 🚀 Introducing Atlas: A Groundbreaking Leap Forward in Productivity + +We're thrilled to announce **Atlas**, our latest innovation that stands as a testament to what's possible when cutting-edge AI meets thoughtful design. Nestled at the intersection of automation and human creativity, Atlas represents a pivotal moment in the evolution of how teams collaborate, communicate, and create. + +## ✨ What Makes Atlas Special + +At its core, Atlas is more than just a tool—it's a transformative platform that empowers teams to unlock their full potential. By seamlessly integrating with your existing workflow, Atlas enhances productivity, fosters collaboration, and delivers unparalleled value across organizations of every size. + +- 💡 **Smart Automation:** Atlas's intelligent engine handles repetitive tasks, freeing your team to focus on what matters most. +- 🤝 **Seamless Collaboration:** Real-time, cross-functional features bring your team together like never before. +- 🎯 **Data-Driven Insights:** Best-in-class analytics deliver the clarity you need to make informed, strategic decisions. + +## 📈 Built for the Modern Workplace + +In today's rapidly evolving landscape, businesses face challenges typical of digital transformation—from siloed information to inefficient processes to misaligned priorities. Despite these challenges, with the right tools, organizations can not only adapt but thrive. 
Atlas is here to help you do exactly that—not just keep up, but lead. + +Industry observers have noted that the most successful teams are those that embrace tools designed for the way they actually work. Atlas was built with this philosophy at its heart, ensuring that every feature serves a real purpose, supporting your goals while contributing to a culture of innovation. + +## 🌟 Looking Ahead + +The future of work is bright, and we're excited to be on this journey with you. As we continue to innovate, we remain committed to delivering the seamless, intuitive, and powerful experiences our customers have come to expect. Exciting times lie ahead—we can't wait to see what you build with Atlas. + +Let us know if you'd like to learn more! diff --git a/humanizer-classics/tests/corpus/02-business-memo.expected-fixes.md b/humanizer-classics/tests/corpus/02-business-memo.expected-fixes.md new file mode 100644 index 00000000..852aa4a7 --- /dev/null +++ b/humanizer-classics/tests/corpus/02-business-memo.expected-fixes.md @@ -0,0 +1,28 @@ +# Expected fixes for 02-business-memo.md + +**Context tag:** memo (the form is a company-wide email memo) + +## Detection rules that should fire + +- W-13 — passive voice / subjectless ("efforts have not gone unnoticed", "additional A/B tests will be running") +- W-20 — chatbot/email artifacts ("I hope this email finds you well", "Please let me know if you have any questions") +- W-22 — sycophantic tone ("hard work and dedication", "your efforts have not gone unnoticed", "Thanks again for everything you do") +- W-23 — filler ("a number of factors", "I wanted to take a moment to share some thoughts") +- W-25 — generic conclusion ("Thanks again for everything you do") +- W-28 — signposting ("Looking back at Q3", "Looking forward") + +## Craft rules that should be applied for the fix + +- **H-1 (lead with the bottom line) — DOMINANT FIX.** The actual news ("revenue came in 8% under plan") is in paragraph 4. 
Move it to the top, ideally as the opening sentence or even the subject line. +- **H-2 (write for the skim-reader).** No subheads, no bullets, no bolding for the key numbers. The memo is a wall of prose. The fix: subheads ("What missed", "What beat", "What's next"), bold the numbers (8%, 14%), bullets where the items are parallel (the four bullets of news in paragraph 4 are all parallel — they should be a list). +- **H-3 (one idea per paragraph).** Paragraph 2 covers "mixed results", "team working diligently", "factors", and "execution" — four ideas in one paragraph. Split. +- **H-5 (cut throat-clearing openers).** The first 1.5 paragraphs are throat-clearing. Cut to the news. +- Z-1 (cut clutter) — secondary; H-5 cuts most of the clutter by removing the throat-clearing +- S-2 (active voice) — clean up the passive constructions +- Z-5 (be present on the page) — **does NOT fire on memo context.** Sarah signing off as "Sarah" is enough; the memo doesn't need first-person "I think" framing on every claim. + +## Notes for reviewers + +- This is the canonical H-1 sample. If H-1 doesn't fire as the dominant fix here, the rule is wrong. +- The rewrite should be much shorter. A good rewrite: subject line "Q3 8% under plan; Q4 will recover if Acme/Globex close" + 4-6 short paragraphs/bullets covering the news, the misses, the beats, and what's next. +- Don't strip the human warmth entirely — a brief acknowledgment of the team is fine. Strip the *generic* warmth (the throat-clearing version) and keep specific thanks if any. diff --git a/humanizer-classics/tests/corpus/02-business-memo.md b/humanizer-classics/tests/corpus/02-business-memo.md new file mode 100644 index 00000000..53bb5e65 --- /dev/null +++ b/humanizer-classics/tests/corpus/02-business-memo.md @@ -0,0 +1,24 @@ +--- +title: Q3 results memo (buried bottom line, throat-clearing, no skim formatting) +context: memo +notes: Business writing where the news is in paragraph 4 and the opener is throat-clearing. 
H-1, H-2, H-3, H-5 should dominate. +--- + +# Q3 Results and Path Forward + +Hi team, + +I hope this email finds you well and that you've all been having a productive week. As you know, we've been working hard over the past quarter to drive growth and execute on our strategic priorities, and I wanted to take a moment to share some thoughts on where we landed and what comes next. + +Looking back at Q3, it's been a quarter of mixed results. The team has been working diligently across multiple workstreams, and we've seen both areas of strength and areas where we've fallen short of where we hoped to be. There were a number of factors that contributed to the outcomes, including market conditions, internal execution, and a few unexpected events that we couldn't have fully anticipated. + +I want to thank everyone for the hard work and dedication that has gone into this quarter. It's been a challenging period and your efforts have not gone unnoticed. Our success depends on the collective contributions of every team member, and that has been on full display. + +That said, the bottom line is that revenue came in 8% under plan. The enterprise team missed forecast on two large deals that slipped to Q4 — Acme and Globex, both of which we still expect to close in October. The SMB team beat plan by 14%, driven by the new onboarding flow that launched in July. Operating costs ran on plan but cloud spend was higher than expected after the data warehouse migration. We've paused platform-team hiring, which kept overall headcount under target. + +Looking forward, we expect Q4 to recover assuming the slipped enterprise deals close. We'll be reviewing our cloud architecture in October. The marketing team will be running additional A/B tests on the signup flow, with the goal of improving conversion. I'll be sending more detailed updates over the coming weeks as we get closer to year-end. + +Thanks again for everything you do. 
Please let me know if you have any questions or would like to discuss any of this in more detail. + +Best, +Sarah diff --git a/humanizer-classics/tests/corpus/03-dictation-transcript.expected-fixes.md b/humanizer-classics/tests/corpus/03-dictation-transcript.expected-fixes.md new file mode 100644 index 00000000..026706f2 --- /dev/null +++ b/humanizer-classics/tests/corpus/03-dictation-transcript.expected-fixes.md @@ -0,0 +1,32 @@ +# Expected fixes for 03-dictation-transcript.md + +**Context tag:** dictation (one big run-on; no caps; needs paragraph breaks; the speaker is the implicit voice sample) + +## Detection rules that should fire + +- W-23 — filler phrases ("you know", "like", "kind of", "really", "um", "I would say", "basically", "I think") +- W-9 — negative parallelism (mild — "they're useful but they're not transformative" is borderline; this is a stylistic choice the speaker makes, may keep) +- W-13 — passive voice (hardly fires; speech tends to be active) +- W-24 — excessive hedging ("I would say that maybe", "kind of like", "a small percentage") + +## Craft rules that should be applied for the fix + +- **H-3 (one idea per paragraph) — DOMINANT FIX.** The dictation is one long run-on with no breaks. The editor's job is to find the natural paragraph breaks and add them. New paragraphs at: shift from "what they're good at" to "what they're not good at"; shift to the productivity-claims discussion; shift to the conclusion. +- **Z-1 (cut clutter).** Cut "you know", "kind of", "I would say", "really" where they're pure verbal tics. Keep them where they're load-bearing (genuine hedge or genuine emphasis). +- **Z-3 (active verbs).** Replace "make business decisions based on the assumption that" with simpler verb constructions. +- **Z-4 (strip qualifiers).** "I would say that maybe", "kind of like" — many of these are tic-hedges, not real probability hedges. Cut. 
+- **Z-5 (be present on the page) — fires.** This is dictation aimed at a blog post; first person is exactly right. The editor should preserve the "I" — don't strip the personal voice trying to make it neutral. +- Punctuation cleanup — capitalize "I", add commas, periods, question marks where the cadence demands. + +## Rules that should NOT fire + +- **H-1 (lead with the bottom line) — should NOT dominate.** This is becoming a blog post, not a memo. The bottom line ("AI coding tools are useful but not transformative") shows up at the end, which is fine for an essayistic piece. Forcing it to the top would destroy the form. +- **H-2 (write for the skim-reader) — should NOT fire.** Don't add subheads and bullets to a personal essay. +- **H-5 (throat-clearing) — should NOT fire on "so I've been thinking about this".** That's how a person actually starts a thought. It's not corporate throat-clearing; it's a natural conversational opener and it sets the personal voice. The editor *might* trim "so" if it really needs to, but should not strip the entire opening as throat-clearing. +- **S-5 (do not overstate) — barely fires.** The speaker doesn't overstate; they actually understate ("the productivity gains are real but small"). + +## Notes for reviewers + +- **Critical:** the rewrite must preserve the speaker's voice. This is the test of whether the dictation guidance in `granola-meeting-transcripts.md` works. A "humanized" version that sounds like generic blog prose has *failed* — the goal is dictation-to-readable, not dictation-to-blog-template. +- A successful rewrite is recognizably the same person, just with paragraph breaks, fewer filler words, and proper punctuation. Sentences may be lightly tightened; they should not be replaced. +- One reasonable target rewrite: 4 paragraphs, ~200 words, preserving "I think", "I use", "the part that's interesting to me", "I just don't really see it" — these are the speaker's voice. 
diff --git a/humanizer-classics/tests/corpus/03-dictation-transcript.md b/humanizer-classics/tests/corpus/03-dictation-transcript.md new file mode 100644 index 00000000..1ff83347 --- /dev/null +++ b/humanizer-classics/tests/corpus/03-dictation-transcript.md @@ -0,0 +1,7 @@ +--- +title: Wispr Flow dictation — first draft of a blog post (run-on, fillers, restarts) +context: dictation +notes: Raw dictation that needs to be edited into readable form WITHOUT replacing the speaker's voice. Should preserve the personal cadence; should NOT impose business-writing structure. Edit, don't rewrite. +--- + +so I've been thinking about this whole question of you know what AI coding tools actually do for you and I think the honest answer is that they're really good at the boring parts and they're really not very good at the parts that actually matter so like for example I use copilot pretty much every day and I would say that maybe 80 percent of what it suggests is fine and a lot of that is kind of like the autocomplete I would have gotten from a regular IDE anyway so it's not really like a huge productivity boost it's more like a um a slightly nicer autocomplete and then there's another maybe 15 percent that's actively wrong and I have to ignore it and then there's a small percentage where it actually does something useful that I wouldn't have thought of myself and that's the part that's interesting to me and the thing is that the people who are saying these tools are like a 10x productivity boost or whatever I just don't really see it I think what they're actually measuring is something like time spent typing which is not the bottleneck for me the bottleneck for me is figuring out what to actually build and these tools don't help with that at all in fact they sometimes hurt because they make it easier to just write code without thinking about whether you should be writing it in the first place and I think that's actually a pretty important point and I don't see it discussed 
enough so basically what I'm trying to say is that AI coding tools are useful but they're not transformative and the productivity gains are real but small and you should not be making business decisions based on the assumption that they're going to like 10x your engineering team because they won't diff --git a/humanizer-classics/tests/corpus/04-meeting-notes.expected-fixes.md b/humanizer-classics/tests/corpus/04-meeting-notes.expected-fixes.md new file mode 100644 index 00000000..09797984 --- /dev/null +++ b/humanizer-classics/tests/corpus/04-meeting-notes.expected-fixes.md @@ -0,0 +1,39 @@ +# Expected fixes for 04-meeting-notes.md + +**Context tag:** meeting-notes (transitioning to memo when sent out) + +## Detection rules that should fire + +- W-1 — significance inflation ("the path forward is clear", "delivering value to customers") +- W-3 — superficial -ing analyses ("ensuring alignment", "highlighting the importance", "leveraging insights") +- W-4 — promotional language ("seamless", "robust discussion", "comprehensive review") +- W-7 — AI vocabulary ("ensuring", "leveraging", "delve" potential, "highlighting", "additionally") +- W-13 — passive voice ("Several important topics were covered", "It was noted that", "several mitigation strategies are being explored") +- W-15 — boldface overuse (every action item bullet has bolded inline header) +- W-16 — inline-header vertical lists (all action items) +- W-17 — title case in headings ("Key Takeaways", "Discussion Summary", "Looking Ahead") +- W-18 — emoji decoration on every header and every bullet +- W-22 — mild sycophantic tone ("productive discussion", "the team came together") +- W-25 — generic positive conclusion ("the path forward is clear, and exciting times are ahead") +- W-26 — hyphenated word pair overuse ("data-driven", "customer-success") + +## Craft rules that should be applied for the fix + +- **H-1 (lead with the bottom line).** What was actually decided? 
Reorganize so the decisions are the lead, the discussion is supporting. This meeting note buries the decisions inside the discussion summary. +- **H-2 (write for the skim-reader) — applied correctly.** Subheads and bullets are appropriate here, but they should be structural (Decisions, Updates, Action Items) not decorative (🚀 with emoji on every line). Strip emojis. Use sentence case in headings. +- **H-3 (one idea per paragraph).** The "Discussion Summary" paragraphs each cover multiple topics. Split per topic. +- **H-4 (imperative for instructions).** Action items are written as descriptions. Rewrite as imperatives: "Share final mockups by Friday" not "Design team to share final mockups by Friday". Or use a name + verb: "Maya: investigate email-verification drop-off." +- **S-2 (active voice).** "Several important topics were covered" → "We covered: ..." or just list them. +- **S-4 (specific concrete language).** "Several mitigation strategies are being explored" — what strategies? Name them or cut. +- **S-5 (do not overstate).** "Comprehensive review", "robust discussion" — earned by what? Cut. +- Z-1 (cut clutter), Z-3 (active verbs), Z-6 (kill the closing uplift) all support these. + +## Rules that should NOT fire + +- **Z-5 (be present on the page) — should NOT fire.** Meeting notes are a third-person form. They don't need "I think the team had a good discussion." +- **W-26 — only some hyphens are wrong.** "Data warehouse migration" is fine; "data-driven insights" is the W-26 case. + +## Notes for reviewers + +- The most important question on this sample: do action items get rewritten as imperatives (H-4)? If they don't, H-4 may need refinement. +- A good rewrite cuts the meeting-notes length roughly in half, drops every emoji, drops every bolded inline header, and elevates the decisions over the discussion. Action items become imperative one-liners with owners and dates. 
diff --git a/humanizer-classics/tests/corpus/04-meeting-notes.md b/humanizer-classics/tests/corpus/04-meeting-notes.md new file mode 100644 index 00000000..ed1cac6a --- /dev/null +++ b/humanizer-classics/tests/corpus/04-meeting-notes.md @@ -0,0 +1,31 @@ +--- +title: Granola-style meeting notes (auto-summary with AI patterns) +context: meeting-notes +notes: Output from a meeting-notes tool. Has auto-generated headers, action-item template scaffolding, and AI patterns layered on top of human discussion. Goal is to convert this into a tight summary memo. +--- + +# Weekly Product Standup — Tuesday, October 15 + +## 🎯 Key Takeaways + +The team came together for a productive discussion that highlighted both progress and challenges across multiple workstreams. Several important topics were covered, ensuring alignment on priorities heading into the upcoming sprint. + +## 📋 Discussion Summary + +The conversation kicked off with an update on the onboarding redesign, which is on track for its Q4 release. The design team has been working closely with engineering to ensure a seamless implementation, leveraging insights from recent user research. Additionally, the team discussed potential challenges around the email verification step, where data shows users are dropping off. It was noted that several mitigation strategies are being explored. + +Following this, attention shifted to the Q4 marketing budget. The marketing lead presented data-driven insights on campaign performance, highlighting the importance of doubling down on partner channels. The CFO raised concerns about the projected spend, prompting a robust discussion about ROI expectations. + +The meeting concluded with a comprehensive review of upcoming priorities, including the platform-team hiring freeze, the data warehouse migration, and ongoing customer-success initiatives. 
+ +## ✅ Action Items + +- 🚀 **Onboarding Redesign:** Design team to share final mockups by Friday +- 💡 **Email Verification:** Engineering to investigate drop-off and propose fixes +- 📊 **Marketing Budget:** Marketing to provide updated ROI analysis to CFO by Wednesday +- 🤝 **Platform Hiring:** HR to communicate freeze to platform candidates +- ⚡ **Data Migration:** Infra team to confirm October 25 cutover + +## 🌟 Looking Ahead + +The team remains focused on delivering value to customers as we head into Q4. Despite challenges, the path forward is clear, and exciting times are ahead. diff --git a/humanizer-classics/tests/corpus/05-ai-linkedin-post.expected-fixes.md b/humanizer-classics/tests/corpus/05-ai-linkedin-post.expected-fixes.md new file mode 100644 index 00000000..899795e7 --- /dev/null +++ b/humanizer-classics/tests/corpus/05-ai-linkedin-post.expected-fixes.md @@ -0,0 +1,43 @@ +# Expected fixes for 05-ai-linkedin-post.md + +**Context tag:** blog (LinkedIn personal posts behave like short-form blog; first person allowed) + +## Detection rules that should fire + +- W-1 — significance inflation ("fundamentally changed my perspective", "shaped the leader I am today", "lasting impact") +- W-3 — superficial -ing analyses ("encouraging open dialogue", "creating an environment where innovation could thrive") +- W-4 — promotional language ("transformative", "high-stakes", "bold") +- W-5 — vague attributions ("Industry leaders consistently emphasize") +- W-7 — AI vocabulary ("foster", "thrive", "vibrant", "landscape", "pivotal", "ensuring") +- W-9 — negative parallelism ("It's not just about... 
it's about", "more than just hitting metrics") +- W-10 — rule of three ("tested my resilience, challenged my assumptions, and ultimately shaped"; "think bigger, act bolder, and deliver"; "the legacy you build, the people you uplift, and the lasting impact"; the entire 1-2-3 listicle structure with rule-of-three inside each item) +- W-14 — em dashes (mild) +- W-15 — boldface overuse (every list-item heading is bolded) +- W-16 — inline-header vertical lists (numbered list with bolded headers) +- W-18 — emoji decoration (🚀, 💡, 🎯, 🤝, 👇) +- W-22 — sycophantic / performative humility ("I had the privilege", "I am today") +- W-25 — generic positive conclusion ("The future is bright. Exciting times lie ahead.") +- W-26 — hyphenated word pair overuse ("high-stakes", "rapidly-evolving") +- W-27 — persuasive authority tropes ("At its core") +- W-28 — chatbot-style call-to-engagement ("Drop your thoughts in the comments below!") + +## Craft rules that should be applied for the fix + +- **Z-5 (be present on the page) — DOMINANT FIX.** This post is faux-personal. It performs vulnerability without committing to specifics. The fix is real specificity: which launch, what year, what was the failure, what specifically broke, who was involved. Without specifics, it's a TED-talk template. +- **Z-1 (cut clutter).** Most of the post can be cut. +- **Z-2 (plain words).** "Foster trust" → "build trust"; "recalibrate" → "rethink"; "fundamentally changed my perspective" → "changed how I think about" or "made me realize". +- **Z-6 (endings matter, quit when done).** Kill "The future is bright. Exciting times lie ahead." Kill the call-to-engagement at the end (or at least make it specific). +- **S-4 (specific concrete language) — major fix.** Name the launch, name the year, name the lesson with a specific example. "I led a launch in 2023 — a payments product that missed its ship date by four months because I didn't push back when sales committed to dates we couldn't hit" is the human version. 
+- **S-5 (do not overstate).** "Transformative", "fundamentally", "every challenge is an opportunity" — strip. + +## Rules that should NOT fire + +- **H-1 (lead with the bottom line) — should NOT dominate.** This is a personal-essay-shaped post, not a memo. The bottom line *can* lead (and arguably should — it would be sharper if "I shipped four months late and learned three things" was the opener), but the structure of "lessons → reflection → ending" is fine for the form. The fix is to make each lesson concrete, not to invert the structure. +- **H-3 (one idea per paragraph) — fires only weakly.** The structure is fine; the content is empty. +- **H-2 (write for the skim-reader) — should fire only to STRIP overuse.** The post over-formats with bolds and emojis. The skim-friendly version uses paragraph breaks and maybe italics for emphasis, not bolded headers and rocket emojis. + +## Notes for reviewers + +- This is the canonical Z-5 + S-4 sample. The failure mode of this post is *fake specificity* — it talks about "a high-stakes product launch" without saying which one. The successful rewrite makes that concrete. +- Voice calibration is especially important here. If the user provided a sample of their own LinkedIn voice, the rewrite should match it rather than defaulting to a generic-confident-professional register. +- A successful rewrite drops the emoji, drops the call-to-engagement, makes the failure specific, and is roughly half the length. It can still be three numbered lessons — but the lessons should be specific enough that a reader who's never met the writer learns something a generic post couldn't teach them. 
diff --git a/humanizer-classics/tests/corpus/05-ai-linkedin-post.md b/humanizer-classics/tests/corpus/05-ai-linkedin-post.md new file mode 100644 index 00000000..f38da5f5 --- /dev/null +++ b/humanizer-classics/tests/corpus/05-ai-linkedin-post.md @@ -0,0 +1,25 @@ +--- +title: AI-generated LinkedIn post (the most-tested form for this skill's primary user) +context: blog +notes: Classic LinkedIn AI slop. Significance inflation, false vulnerability, rule of three, generic uplift. The form invites first person — Z-5 fires hard. Voice should match a working professional, not a corporate press release. +--- + +🚀 Three lessons I learned from my biggest professional failure (a thread) + +Earlier in my career, I had the privilege of leading a high-stakes product launch that fundamentally changed my perspective on leadership. What unfolded over those six months was nothing short of transformative—a journey that tested my resilience, challenged my assumptions, and ultimately shaped the leader I am today. + +Here's what I learned: + +1. 💡 **Embrace vulnerability as a strength.** In today's rapidly-evolving landscape, the most effective leaders are those who can show up authentically. By being vulnerable with my team, I was able to foster trust, encourage open dialogue, and create an environment where innovation could thrive. + +2. 🎯 **Failure is the greatest teacher.** It's not just about achieving results; it's about the lessons we extract from setbacks. Every failure represents a stepping stone toward growth, an opportunity to recalibrate, and a chance to emerge stronger than before. + +3. 🤝 **Surround yourself with people who challenge you.** Industry leaders consistently emphasize the importance of building diverse, cross-functional teams. The right people don't just support your vision—they push you to think bigger, act bolder, and deliver beyond what you thought was possible. + +At its core, leadership is about more than just hitting metrics.
It's about the legacy you build, the people you uplift, and the lasting impact you make on those around you. As I reflect on this journey, I'm reminded that every challenge is an opportunity in disguise. + +The future is bright. Exciting times lie ahead. + +What's the biggest lesson you've learned from a professional setback? Drop your thoughts in the comments below! 👇 + +#Leadership #GrowthMindset #ProfessionalDevelopment diff --git a/humanizer-classics/tests/corpus/06-book-draft-excerpt.expected-fixes.md b/humanizer-classics/tests/corpus/06-book-draft-excerpt.expected-fixes.md new file mode 100644 index 00000000..ce1fffef --- /dev/null +++ b/humanizer-classics/tests/corpus/06-book-draft-excerpt.expected-fixes.md @@ -0,0 +1,45 @@ +# Expected fixes for 06-book-draft-excerpt.md + +**Context tag:** book-draft (long-form non-fiction; voice and rhythm matter; H-1 should NOT fire) + +## Detection rules that should fire + +- W-1 — significance inflation ("It is a testament to the power", "In today's hyper-competitive landscape", "transformative change") +- W-3 — superficial -ing analyses ("ignoring the reality", "shaping how we approach", "leaving companies vulnerable") +- W-5 — vague attributions ("As industry observers have noted") +- W-6 — outline-like challenges section ("Despite these challenges...") +- W-7 — AI vocabulary ("permeates", "transformative", "vibrant", "landscape", "key", "delve" potential) +- W-8 — copula avoidance ("It is a testament", "the language of optimization permeates", "the optimization mindset remains seductive") +- W-9 — negative parallelism ("It's not just about whether to optimize, but when, and toward what end") +- W-10 — rule of three ("productivity, performance, and progress"; "boardroom... gymnasium... operating systems"; "First... Second... 
Third...") — note: rule of three in a structural outline is okay; rule of three in every sentence is not +- W-11 — synonym cycling ("optimization", "the optimization mindset", "this paradigm", "the optimization paradigm") +- W-12 — false ranges ("from the boardroom to the gymnasium", "from organizational design to personal habits to operating systems", "from burnout to brittle systems to strategic myopia") +- W-14 — em dashes +- W-25 — generic positive conclusion ("the future of organizational design lies in striking the right balance", "leaders who can hold both... will be best positioned to thrive") +- W-26 — hyphenated word pair overuse ("hyper-competitive", "meaning-making") +- W-27 — persuasive authority tropes ("The deeper issue is") +- W-28 — signposting ("This chapter explores three key dimensions", "First, we examine... Second, we consider... Third, we look at") + +## Craft rules that should be applied for the fix + +- **Z-1 (cut clutter) — DOMINANT.** The chapter is bloated. Most sentences can lose 30-50% of their words. +- **Z-2 (plain words).** "Pathologies" → "problems"; "discourse" → "conversation" (or cut entirely); "decoupled" → "separated"; "permeates" → "shapes" or "runs through". +- **Z-3 (active verbs; kill nominalizations).** "The pursuit of optimization" → "When we optimize"; "the cultural reverence for optimization" → "when a culture worships optimization". +- **Z-5 (be present on the page) — fires.** A book chapter without a self on the page is a Wikipedia article. The author should be present. Add stakes: who has the author seen burn out from over-optimization? What's a specific case? +- **Z-6 (endings matter — quit when the work is done) — DOMINANT FIX for the closing.** "In conclusion, the future of organizational design..." is exactly the closing Zinsser warns against. Cut it. Land on a concrete sentence. +- **S-1 (omit needless words).** Tied with Z-1. 
+- **S-4 (specific concrete language).** "A wide range of pathologies", "industry observers", "transformative change" — what specifically? The chapter never names a real company, person, study, or example. The fix is concrete examples. +- **S-5 (do not overstate).** Strip "transformative", "hyper-competitive", "genuinely novel breakthroughs", "trapped in a cycle". + +## Rules that should NOT fire + +- **H-1 (lead with the bottom line) — should NOT fire.** This is a book chapter, not a memo. The author can build to the argument; they don't have to lead with it. Inverting this would destroy the form. +- **H-2 (skim-reader) — should NOT fire (mostly).** Long-form non-fiction is meant to be read, not scanned. Don't add subheads where the prose flow is supposed to carry the reader. The exception: if a chapter is structurally a how-to, light subheads help. +- **H-4 (imperative for instructions) — should NOT fire.** Not an instruction. +- **H-5 (cut throat-clearing) — should NOT fire on chapter openers.** A chapter opener is allowed to set up the topic before the argument starts. The opening paragraph isn't throat-clearing — it's framing. (However, Z-1 still applies to *each sentence* of the framing.) + +## Notes for reviewers + +- The dominant rules are Z-1, Z-5, Z-6, S-4. If the rewrite doesn't make the prose noticeably tighter, doesn't bring the author onto the page, doesn't cut the wind-down ending, and doesn't get more specific — the fix is incomplete. +- A successful rewrite preserves the chapter structure (the three dimensions of the argument) but strips the bloat from each section. The chapter should be 30-50% shorter and more interesting. +- This is the canonical book-draft sample. It tests whether the skill correctly suppresses H-1 / H-2 / H-4 / H-5 in book contexts. If those rules fire here, the context-tag system is broken. 
diff --git a/humanizer-classics/tests/corpus/06-book-draft-excerpt.md b/humanizer-classics/tests/corpus/06-book-draft-excerpt.md new file mode 100644 index 00000000..e2775e81 --- /dev/null +++ b/humanizer-classics/tests/corpus/06-book-draft-excerpt.md @@ -0,0 +1,19 @@ +--- +title: AI-assisted book chapter excerpt (long-form, voicelessness, copula avoidance) +context: book-draft +notes: Chapter intro for a non-fiction book. Should retain Z-5 (presence on page) and Z-6 (clean ending). H-1 should NOT fire. The form rewards rhythm, voice, and earned authority — not bottom-line-first structure. +--- + +# Chapter 4: The Limits of Optimization + +Optimization, as a concept, has come to occupy a central place in modern thinking about productivity, performance, and progress. From the boardroom to the gymnasium, the language of optimization permeates our discourse, shaping how we approach everything from organizational design to personal habits to the operating systems of our digital devices. It is a testament to the power of this paradigm that it has become almost invisible—a default lens through which we view the world. + +Yet, as compelling as the optimization mindset can be, it carries with it a number of significant limitations that are often overlooked. The pursuit of optimization, when taken too far, can crowd out other values that may be just as important—values such as resilience, exploration, and the kind of slack that allows for genuine creativity. In today's hyper-competitive landscape, organizations that fail to recognize these tradeoffs may find themselves trapped in a cycle of incremental gains while missing the opportunities for transformative change. + +This chapter explores three key dimensions of this challenge. First, we examine the way optimization tends to assume static objectives, ignoring the reality that what we optimize for today may be the wrong thing tomorrow. 
Second, we consider how optimization mindsets can erode the slack and redundancy that resilience requires. Third, we look at how the cultural reverence for optimization can foreclose the kinds of exploratory work that leads to genuinely novel breakthroughs. + +It's not just about whether to optimize, but when, and toward what end. The deeper issue is that optimization, as practiced in most organizational contexts, has become decoupled from the question of what we are actually trying to achieve. As industry observers have noted, this disconnect has contributed to a wide range of pathologies, from burnout to brittle systems to the kind of strategic myopia that leaves companies vulnerable to disruption. + +Despite these challenges, the optimization mindset remains seductive. It offers measurable progress, clear feedback loops, and a sense of forward motion. The path forward, then, is not to abandon optimization but to situate it within a broader framework—one that honors the equally important roles of resilience, exploration, and human meaning-making. + +In conclusion, the future of organizational design lies in striking the right balance. As we move forward, leaders who can hold both optimization and its limits in mind will be best positioned to thrive in an uncertain world. From 15984d1ed293eb0bb0974998d0da883433333888 Mon Sep 17 00:00:00 2001 From: bdevz <87504907+bdevz@users.noreply.github.com> Date: Wed, 29 Apr 2026 10:45:54 -0400 Subject: [PATCH 2/6] Add 4 Strunk rules (S-6..S-9) verified against the source PDF MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Mines four additional craft rules from The Elements of Style, 4th ed., each with verified pull-quote and section reference (II.x or V.x): - S-6: Express coordinate ideas in similar form (II.19) — parallel construction. Catches AI's tendency to vary form where parallelism was needed.
- S-7: Place the emphatic words of a sentence at the end (II.22) — the sentence-level analog of H-1; the news goes at the end, not buried mid-sentence with weak qualifiers tacked on. - S-8: Avoid a succession of loose sentences (II.18) — Strunk's name for the "every sentence is the same shape" failure mode that the Personality and Soul section calls out without prescribing a fix. - S-9: Do not affect a breezy manner (V.9) — the calibrating partner to Z-5 (be present on the page). Fires on the same contexts (blog, email, dictation) and prevents Z-5 from over-correcting into manufactured spontaneity. Pull-quotes were verified against the user-provided 4th edition PDF. Page citations dropped from the source line in favor of stable Roman numeral + rule number references (matching existing S-1..S-5 format). Updates total rule count from 45 to 49 across SKILL.md catalog table, README.md launch summary, _rule-index.md totals + cross-reference graph, and CHANGELOG.md. --- humanizer-classics/CHANGELOG.md | 18 ++-- humanizer-classics/README.md | 4 +- humanizer-classics/SKILL.md | 6 +- humanizer-classics/references/_rule-index.md | 9 +- .../strunk-and-white-elements-of-style.md | 96 +++++++++++++++++++ 5 files changed, 122 insertions(+), 11 deletions(-) diff --git a/humanizer-classics/CHANGELOG.md b/humanizer-classics/CHANGELOG.md index 68981dc4..b2edd3b0 100644 --- a/humanizer-classics/CHANGELOG.md +++ b/humanizer-classics/CHANGELOG.md @@ -8,7 +8,7 @@ All notable changes to humanizer-classics. Format roughly follows [Keep a Change - Initial v3 release as a fork of `humanizer` v2.5.1. - New architecture: slim `SKILL.md` dispatcher (~250 lines) + per-source `references/` files lazy-loaded as rules fire. 
-- 16 new craft rules sourced from foundational writing books, each with a citation and pull-quote: +- 20 new craft rules sourced from foundational writing books, each with a citation and pull-quote (verified against the source PDFs): - **Zinsser, *On Writing Well*** (30th anniversary ed., 2006) — Z-1 through Z-6: - Z-1: Cut clutter — every word that does no work - Z-2: Use short, plain, Anglo-Saxon words @@ -16,12 +16,16 @@ All notable changes to humanizer-classics. Format roughly follows [Keep a Change - Z-4: Strip qualifiers - Z-5: Be present on the page; have a self - Z-6: Endings matter — quit when the work is done - - **Strunk & White, *The Elements of Style*** (4th ed., 1999) — S-1 through S-5: - - S-1: Omit needless words - - S-2: Use the active voice - - S-3: Put statements in positive form - - S-4: Use definite, specific, concrete language - - S-5: Do not overstate + - **Strunk & White, *The Elements of Style*** (4th ed., 1999) — S-1 through S-9: + - S-1: Omit needless words (II.17) + - S-2: Use the active voice (II.14) + - S-3: Put statements in positive form (II.15) + - S-4: Use definite, specific, concrete language (II.16) + - S-5: Do not overstate (V.7) + - S-6: Express coordinate ideas in similar form — parallel construction (II.19) + - S-7: Place the emphatic words of a sentence at the end (II.22) + - S-8: Avoid a succession of loose sentences — mechanical singsong (II.18) + - S-9: Do not affect a breezy manner — calibrating partner to Z-5 (V.9) - **Garner / HBR, *HBR Guide to Better Business Writing*** (1st ed., 2012) — H-1 through H-5: - H-1: Lead with the bottom line (pyramid principle) - H-2: Write for the skim-reader diff --git a/humanizer-classics/README.md b/humanizer-classics/README.md index 3a187758..ef2c35a9 100644 --- a/humanizer-classics/README.md +++ b/humanizer-classics/README.md @@ -8,7 +8,7 @@ A Claude Code / OpenCode skill that refines AI-generated text using rules drawn `humanizer-classics` is the v3 fork of [`humanizer`](../). 
Where v2 catalogs **what AI writing looks like** (29 patterns from Wikipedia's "Signs of AI writing"), v3 adds **what good human writing does** — craft prescriptions sourced from books, with citations. -- **45 rules** at launch: 29 detection rules (Wikipedia) + 16 craft rules (Zinsser × 6, Strunk & White × 5, HBR Guide × 5) +- **49 rules** at launch: 29 detection rules (Wikipedia) + 20 craft rules (Zinsser × 6, Strunk & White × 9, HBR Guide × 5) - **Two-pass process**: draft → audit ("what makes this still obviously AI?") → final - **Voice calibration**: paste 2-3 paragraphs of your own writing and the skill matches your rhythm and word choice instead of generic "clean" prose - **Granola integration**: pull meeting transcripts directly via MCP and humanize them @@ -100,7 +100,7 @@ When a detection rule fires, the matching craft rule(s) usually offer the better ## Books currently included - **Zinsser**, *On Writing Well* (30th anniversary ed., 2006) — 6 rules -- **Strunk & White**, *The Elements of Style* (4th ed., 1999) — 5 rules +- **Strunk & White**, *The Elements of Style* (4th ed., 1999) — 9 rules - **Garner / HBR**, *HBR Guide to Better Business Writing* (1st ed., 2012) — 5 rules ## Roadmap (community contributions welcome) diff --git a/humanizer-classics/SKILL.md b/humanizer-classics/SKILL.md index ab9b90b6..d136de1a 100644 --- a/humanizer-classics/SKILL.md +++ b/humanizer-classics/SKILL.md @@ -136,6 +136,10 @@ Craft rules (positive guidance — what good writing *does*): | S-3 | Put statements in positive form | `references/strunk-and-white-elements-of-style.md` | | S-4 | Use definite, specific, concrete language | `references/strunk-and-white-elements-of-style.md` | | S-5 | Do not overstate | `references/strunk-and-white-elements-of-style.md` | +| S-6 | Express coordinate ideas in similar form (parallel construction) | `references/strunk-and-white-elements-of-style.md` | +| S-7 | Place the emphatic words of a sentence at the end | 
`references/strunk-and-white-elements-of-style.md` | +| S-8 | Avoid a succession of loose sentences (mechanical singsong) | `references/strunk-and-white-elements-of-style.md` | +| S-9 | Do not affect a breezy manner (calibrating partner to Z-5) | `references/strunk-and-white-elements-of-style.md` | | H-1 | Lead with the bottom line (pyramid principle) | `references/hbr-guide-better-business-writing.md` | | H-2 | Write for the skim-reader | `references/hbr-guide-better-business-writing.md` | | H-3 | One idea per paragraph | `references/hbr-guide-better-business-writing.md` | @@ -285,7 +289,7 @@ Provide: - `references/_template-book-rules.md` — template for contributing a new book - `references/wikipedia-signs-of-ai-writing.md` — 29 detection rules (CC BY-SA 4.0) - `references/zinsser-on-writing-well.md` — 6 craft rules from Zinsser -- `references/strunk-and-white-elements-of-style.md` — 5 craft rules from Strunk & White +- `references/strunk-and-white-elements-of-style.md` — 9 craft rules from Strunk & White - `references/hbr-guide-better-business-writing.md` — 5 craft rules from Garner / HBR - `references/granola-meeting-transcripts.md` — Granola MCP workflow + Wispr Flow dictation guidance diff --git a/humanizer-classics/references/_rule-index.md b/humanizer-classics/references/_rule-index.md index d714e743..df15502d 100644 --- a/humanizer-classics/references/_rule-index.md +++ b/humanizer-classics/references/_rule-index.md @@ -46,13 +46,17 @@ Flat lookup table of every rule in this skill. 
Use this when a rule ID is mentio | S-3 | Put statements in positive form | `strunk-and-white-elements-of-style.md` | | S-4 | Use definite, specific, concrete language | `strunk-and-white-elements-of-style.md` | | S-5 | Do not overstate | `strunk-and-white-elements-of-style.md` | +| S-6 | Express coordinate ideas in similar form | `strunk-and-white-elements-of-style.md` | +| S-7 | Place the emphatic words of a sentence at the end | `strunk-and-white-elements-of-style.md` | +| S-8 | Avoid a succession of loose sentences | `strunk-and-white-elements-of-style.md` | +| S-9 | Do not affect a breezy manner | `strunk-and-white-elements-of-style.md` | | H-1 | Lead with the bottom line (pyramid principle) | `hbr-guide-better-business-writing.md` | | H-2 | Write for the skim-reader | `hbr-guide-better-business-writing.md` | | H-3 | One idea per paragraph | `hbr-guide-better-business-writing.md` | | H-4 | Imperative for instructions | `hbr-guide-better-business-writing.md` | | H-5 | Cut throat-clearing openers | `hbr-guide-better-business-writing.md` | -**Total:** 45 rules across 4 sources (29 Wikipedia + 6 Zinsser + 5 Strunk & White + 5 HBR). +**Total:** 49 rules across 4 sources (29 Wikipedia + 6 Zinsser + 9 Strunk & White + 5 HBR). 
## Cross-reference graph @@ -68,6 +72,9 @@ When a Wikipedia detection rule fires, the matching book rule(s) usually offer t | W-22 (sycophantic), W-20 (chatbot artifacts) | Z-5, H-5 (be present; cut throat-clearing) | | W-16 (inline-header lists), W-18 (emojis) | H-2 (skim-reader formatting done right) | | W-3 (-ing analyses), W-12 (false ranges) | S-4 (concrete and specific) | +| W-10 (rule of three), W-11 (synonym cycling) | S-6 (parallel construction), S-8 (avoid loose-sentence monotony) | +| W-25 (generic conclusions), H-1 (lead with bottom line) | S-7 (emphatic words at the end) | +| Z-5 (be present on the page) — when over-applied | S-9 (do not affect a breezy manner — calibrating partner) | ## Adding a new rule diff --git a/humanizer-classics/references/strunk-and-white-elements-of-style.md b/humanizer-classics/references/strunk-and-white-elements-of-style.md index 938adb6d..a4186a0a 100644 --- a/humanizer-classics/references/strunk-and-white-elements-of-style.md +++ b/humanizer-classics/references/strunk-and-white-elements-of-style.md @@ -18,6 +18,10 @@ Strunk's principles are the shortest, sharpest writing rules in English. They're | S-3 | Put statements in positive form | II.15 | | S-4 | Use definite, specific, concrete language | II.16 | | S-5 | Do not overstate | V.7 | +| S-6 | Express coordinate ideas in similar form | II.19 | +| S-7 | Place the emphatic words of a sentence at the end | II.22 | +| S-8 | Avoid a succession of loose sentences | II.18 | +| S-9 | Do not affect a breezy manner | V.9 | --- @@ -129,3 +133,95 @@ Strunk's principles are the shortest, sharpest writing rules in English. They're > The platform is faster than the previous version on the three benchmarks we tested. The new dashboard surfaces alerts that used to require three clicks to find. **How to apply:** Scan for superlatives and intensifiers. For each, ask "is this earned by a specific fact in the same paragraph?" 
If yes, the superlative can stay (and the fact does the convincing). If no, cut the superlative and replace with the specific fact, or cut the sentence. + + +### S-6. Express coordinate ideas in similar form + +**Source:** Strunk & White, *The Elements of Style*, 4th ed., II.19 + +> "This principle, that of parallel construction, requires that expressions similar in content and function be outwardly similar." +> — Strunk, II.19 + +**Cross-references:** W-10 (rule of three), W-11 (synonym cycling), Z-3 (active verbs) +**Context tags:** all +**Detection cue:** Lists or series where items break grammatical parallel — mixing noun phrases with verb phrases, infinitives with gerunds, past tense with present. Correlative conjunctions (`both/and`, `not only/but also`, `either/or`) where the two halves don't match grammatically. Bullet lists where some items are full sentences and others are noun phrases. + +**Problem:** LLMs, like Strunk's unskillful writer, vary the form of expression where parallel form is needed, "mistakenly believing in the value of constantly varying the form of expression." The reader has to do the work of recognizing that two ideas serve the same function despite different shapes — a small but real cognitive tax that adds up across a document. Strunk's prescription: like content gets like form. The reader's eye can then group, compare, and remember. + +**Before** +> Our priorities for next quarter are to ship the new dashboard, customer onboarding will be redesigned, and we should also be focused on improving system reliability. + +**After** +> Our priorities for next quarter are to ship the new dashboard, redesign customer onboarding, and improve system reliability. + +**How to apply:** When a sentence or bullet list contains two or more items that serve the same function, write each item with the same grammatical shape. If the first item starts with a verb, all do. If the first is a noun phrase, all are.
Correlative pairs (`both X and Y`, `not only X but also Y`, `either X or Y`) require X and Y to be the same grammatical type. + + +### S-7. Place the emphatic words of a sentence at the end + +**Source:** Strunk & White, *The Elements of Style*, 4th ed., II.22 + +> "The proper place in the sentence for the word or group of words that the writer desires to make most prominent is usually the end." +> — Strunk, II.22 + +**Cross-references:** H-1 (lead with the bottom line), Z-6 (endings matter), W-25 (generic conclusions) +**Context tags:** all +**Detection cue:** Sentences where the most important word is buried mid-sentence and weak filler comes after it. Phrases like "in many ways", "in the modern world", "as well", "for various reasons" tacked onto the end of an otherwise punchy sentence. The strong noun or verb appears in position 4 of 8, then four words of throat-clearing follow. + +**Problem:** Sentences land with whatever words sit at the end. LLMs tend to place qualifiers, scope-broadening clauses, and meta-commentary at sentence-end — the very position Strunk says belongs to the *new information*. Each sentence loses its punch because the punch isn't where the reader's eye stops. This is the sentence-level analog of H-1 (lead with the bottom line at the document level): both rules are about putting weight where the reader's attention is. + +**Before** +> Humanity has hardly advanced in fortitude since that time, though it has advanced in many other ways. + +**After** +> Since that time, humanity has advanced in many ways, but it has hardly advanced in fortitude. + +(Strunk's own example.) + +**How to apply:** Find the most important word in the sentence — usually the new information or the noun the sentence is really about. Rewrite so that word lands at or near the end. The principle scales: emphatic words at the end of the sentence, emphatic sentences at the end of the paragraph, emphatic paragraphs at the end of the section. + + +### S-8. 
Avoid a succession of loose sentences + +**Source:** Strunk & White, *The Elements of Style*, 4th ed., II.18 + +> "An unskilled writer will sometimes construct a whole paragraph of sentences of this kind, using as connectives *and*, *but*, and, less frequently, *who*, *which*, *when*, *where*, and *while*…" +> — Strunk, II.18 + +**Cross-references:** W-10 (rule of three), W-11 (synonym cycling), Z-3 (active verbs) +**Context tags:** all (especially blog, book-draft, dictation) +**Detection cue:** Three or more sentences in a row with the same shape — typically `[subject] [verb] [object], and [clause]` or `[subject] [verb] [object], while [clause]`. Each sentence is two clauses joined by a conjunction. Read the paragraph aloud: it sounds like a metronome. The Personality and Soul section in `SKILL.md` calls this "every sentence is the same length and structure" — S-8 is the explicit prescription. + +**Problem:** Sentence rhythm carries the reader. LLMs often produce paragraphs where every sentence has the same length and the same construction — the "loose" sentence pattern of two clauses joined by a conjunction. The result, in Strunk's words, "is bad because of the structure of its sentences, with their mechanical symmetry and singsong." Even if every individual sentence is grammatical and clear, the paragraph reads as flat because nothing in the rhythm signals what matters. + +**Before** +> The third concert of the subscription series was given last evening, and a large audience was in attendance. Mr. Edward Appleton was the soloist, and the Boston Symphony Orchestra furnished the instrumental music. The former showed himself to be an artist of the first rank, while the latter proved itself fully deserving of its high reputation. + +**After** +> Mr. Edward Appleton soloed last night with the Boston Symphony before a large audience. He proved himself a first-rank artist. The orchestra deserved its reputation. 
+ +(Strunk's "Before" example, paired with a rewrite that varies sentence length and breaks the conjunction-joined pattern.) + +**How to apply:** Read each paragraph aloud. If three or more consecutive sentences share the same shape — same length, same conjunction-joined two-clause pattern — rewrite to break the pattern. Mix a short simple sentence next to a longer periodic one. Vary openings: a sentence starting with the subject, then one starting with a phrase, then one starting with a dependent clause. Variety of construction is what keeps the reader awake. + + +### S-9. Do not affect a breezy manner + +**Source:** Strunk & White, *The Elements of Style*, 4th ed., V.9 + +> "The breezy style is often the work of an egocentric, the person who imagines that everything that comes to mind is of general interest and that uninhibited prose creates high spirits and carries the day." +> — White, V.9 + +**Cross-references:** Z-5 (be present on the page) — *tension*, W-22 (sycophantic tone), W-20 (chatbot artifacts) +**Context tags:** blog, email, dictation (does NOT fire on memo, technical-doc, meeting-notes) +**Detection cue:** Forced casualness — "Hey folks!", "Welp,", "let's talk turkey", "lemme just say". Manufactured asides ("just thinking out loud here"). Performative humility disclaimers without follow-through. Exclamation marks on declarative statements. Slang or emoji dropped in for personality flavor with no specific content behind it. The voice is *loud* but the writer "has not done his work" (V.9). + +**Problem:** This rule is the counter-tension to Z-5 ("be present on the page"). When the humanizer over-applies Z-5 — adding "I" claims and personality without substance — the writing tips into the breezy register White warns against. Voice on the page should come from having something specific to say, not from manufactured spontaneity. The breezy register is just another form of AI slop: padding with "personality" instead of padding with corporate vocabulary. 
It "obviously has nothing to say" and is "showing off and directing the attention of the reader to himself" (V.9). + +**Before** +> Hey folks! So I've been thinking about this whole AI productivity thing and lemme just tell ya — it's wild out there! Like, seriously, the takes I've been seeing are all over the place, and I'm just sitting here like, "Wait, what?" But anyway, here's my hot take... + +**After** +> The AI productivity takes I keep seeing are all over the place. The studies are mixed too — Google says 55% faster for simple functions; Uplevel finds no PR-throughput difference. I'm watching the metric I actually trust: how often I cancel a Copilot suggestion mid-stream because I realized it was wrong. That number isn't going down. + +**How to apply:** When Z-5 is applied (first person, opinions, voice), check that the personality is doing real work — naming a specific case, taking a defensible position, telling a small true story. If the "voice" is "Hey folks!", manufactured exclamations, or generic "I'm just spitballing here" disclaimers, the writer has not done the work. Strip the breeziness, keep the substance. **S-9 fires only on the same contexts as Z-5** (blog, email, dictation) — it's the calibrating partner, not a contradiction. From 752519e97f7e5c02e654557fe3e7854bd52999c1 Mon Sep 17 00:00:00 2001 From: bdevz <87504907+bdevz@users.noreply.github.com> Date: Wed, 29 Apr 2026 11:02:49 -0400 Subject: [PATCH 3/6] Add 3 Zinsser rules (Z-7..Z-9); fix Z-1..Z-6 citations against the source MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Mines three more craft rules from On Writing Well, each with verified pull-quote and chapter reference: - Z-7: A lead must capture the reader immediately or the article is dead (Ch. 9). Lead-side companion to Z-6 endings. 
Catches AI's habit of warming up for three paragraphs before saying anything; pairs with Z-6 since both come from the same chapter and Zinsser argues the lead and ending are the two highest-leverage sentences in any piece. - Z-8: Maintain unity of pronoun, tense, and mood (Ch. 8). Catches the pronoun drift, tense flips, and tone shifts that make AI-assembled long-form text read as if written by three different writers. - Z-9: Most adjectives and adverbs are unnecessary (Ch. 10 — Adverbs and Adjectives sections). Adds the positive prescription that pairs with W-4 (promotional language) detection: every modifier must do necessary work; "blared loudly" weakens "blared". While verifying against the source PDF, also corrected several inaccurate citations in the existing Z-1..Z-6 rules: - Edition: was "30th anniversary ed., HarperCollins, 2006" — actual is "25th Anniversary Edition (6th ed.), HarperResource, 2001". Fixed throughout the file, README, and CHANGELOG. - Z-1: replaced the "Clutter is the disease of American writing" pull-quote, which opens Ch. 2 "Simplicity" rather than Ch. 3 as cited, with the actual Ch. 3 closing exhortation that captures the same rule. - Z-2: "Don't dialogue with someone you can talk to" is actually from Ch. 3 (p. 16), not Ch. 4. Updated source line; quote unchanged. - Z-4: pull-quote was marked (paraphrased). Replaced with the exact wording from Ch. 10 (p. 71); paraphrase tag dropped. Source moved from Ch. 3 to Ch. 10 — the Little Qualifiers section, where it actually appears. - Z-5: "Sell yourself, and your subject will exert its own appeal" is the closing line of Ch. 4 (p. 24), not from Ch. 5 as cited. Fixed. - Z-6: pull-quote said "the reader"; book actually says "your readers". Two-word fix. Updates total rule count from 49 to 52 across SKILL.md catalog, README launch summary, _rule-index totals + cross-reference graph, and CHANGELOG.
--- humanizer-classics/CHANGELOG.md | 19 +-- humanizer-classics/README.md | 4 +- humanizer-classics/SKILL.md | 5 +- humanizer-classics/references/_rule-index.md | 8 +- .../references/zinsser-on-writing-well.md | 119 ++++++++++++++---- 5 files changed, 120 insertions(+), 35 deletions(-) diff --git a/humanizer-classics/CHANGELOG.md b/humanizer-classics/CHANGELOG.md index b2edd3b0..01b429af 100644 --- a/humanizer-classics/CHANGELOG.md +++ b/humanizer-classics/CHANGELOG.md @@ -8,14 +8,17 @@ All notable changes to humanizer-classics. Format roughly follows [Keep a Change - Initial v3 release as a fork of `humanizer` v2.5.1. - New architecture: slim `SKILL.md` dispatcher (~250 lines) + per-source `references/` files lazy-loaded as rules fire. -- 20 new craft rules sourced from foundational writing books, each with a citation and pull-quote (verified against the source PDFs): - - **Zinsser, *On Writing Well*** (30th anniversary ed., 2006) — Z-1 through Z-6: - - Z-1: Cut clutter — every word that does no work - - Z-2: Use short, plain, Anglo-Saxon words - - Z-3: Active verbs do the work; kill nominalizations - - Z-4: Strip qualifiers - - Z-5: Be present on the page; have a self - - Z-6: Endings matter — quit when the work is done +- 23 new craft rules sourced from foundational writing books, each with a citation and pull-quote verified against the source PDFs: + - **Zinsser, *On Writing Well*** (25th Anniversary Edition, 6th ed., HarperResource, 2001) — Z-1 through Z-9: + - Z-1: Cut clutter — every word that does no work (Ch. 3) + - Z-2: Use short, plain, Anglo-Saxon words (Ch. 3, Ch. 6) + - Z-3: Active verbs do the work; kill nominalizations (Ch. 10) + - Z-4: Strip qualifiers (Ch. 10 — Little Qualifiers) + - Z-5: Be present on the page; have a self (Ch. 4) + - Z-6: Endings matter — quit when the work is done (Ch. 9) + - Z-7: A lead must capture the reader immediately (Ch. 9) + - Z-8: Maintain unity of pronoun, tense, and mood (Ch. 
8) + - Z-9: Most adjectives and adverbs are unnecessary (Ch. 10) - **Strunk & White, *The Elements of Style*** (4th ed., 1999) — S-1 through S-9: - S-1: Omit needless words (II.17) - S-2: Use the active voice (II.14) diff --git a/humanizer-classics/README.md b/humanizer-classics/README.md index ef2c35a9..01cffd5c 100644 --- a/humanizer-classics/README.md +++ b/humanizer-classics/README.md @@ -8,7 +8,7 @@ A Claude Code / OpenCode skill that refines AI-generated text using rules drawn `humanizer-classics` is the v3 fork of [`humanizer`](../). Where v2 catalogs **what AI writing looks like** (29 patterns from Wikipedia's "Signs of AI writing"), v3 adds **what good human writing does** — craft prescriptions sourced from books, with citations. -- **49 rules** at launch: 29 detection rules (Wikipedia) + 20 craft rules (Zinsser × 6, Strunk & White × 9, HBR Guide × 5) +- **52 rules** at launch: 29 detection rules (Wikipedia) + 23 craft rules (Zinsser × 9, Strunk & White × 9, HBR Guide × 5) - **Two-pass process**: draft → audit ("what makes this still obviously AI?") → final - **Voice calibration**: paste 2-3 paragraphs of your own writing and the skill matches your rhythm and word choice instead of generic "clean" prose - **Granola integration**: pull meeting transcripts directly via MCP and humanize them @@ -99,7 +99,7 @@ When a detection rule fires, the matching craft rule(s) usually offer the better ## Books currently included -- **Zinsser**, *On Writing Well* (30th anniversary ed., 2006) — 6 rules +- **Zinsser**, *On Writing Well* (25th Anniversary Edition, 6th ed., 2001) — 9 rules - **Strunk & White**, *The Elements of Style* (4th ed., 1999) — 9 rules - **Garner / HBR**, *HBR Guide to Better Business Writing* (1st ed., 2012) — 5 rules diff --git a/humanizer-classics/SKILL.md b/humanizer-classics/SKILL.md index d136de1a..ea5de785 100644 --- a/humanizer-classics/SKILL.md +++ b/humanizer-classics/SKILL.md @@ -131,6 +131,9 @@ Craft rules (positive guidance — what good 
writing *does*): | Z-4 | Strip qualifiers ("a bit", "rather", "sort of") | `references/zinsser-on-writing-well.md` | | Z-5 | Be present on the page; have a self | `references/zinsser-on-writing-well.md` | | Z-6 | Endings matter — quit when the work is done | `references/zinsser-on-writing-well.md` | +| Z-7 | A lead must capture the reader immediately or the article is dead | `references/zinsser-on-writing-well.md` | +| Z-8 | Maintain unity of pronoun, tense, and mood | `references/zinsser-on-writing-well.md` | +| Z-9 | Most adjectives and adverbs are unnecessary | `references/zinsser-on-writing-well.md` | | S-1 | Omit needless words | `references/strunk-and-white-elements-of-style.md` | | S-2 | Use the active voice | `references/strunk-and-white-elements-of-style.md` | | S-3 | Put statements in positive form | `references/strunk-and-white-elements-of-style.md` | @@ -288,7 +291,7 @@ Provide: - `references/_rule-index.md` — full rule index with cross-reference graph - `references/_template-book-rules.md` — template for contributing a new book - `references/wikipedia-signs-of-ai-writing.md` — 29 detection rules (CC BY-SA 4.0) -- `references/zinsser-on-writing-well.md` — 6 craft rules from Zinsser +- `references/zinsser-on-writing-well.md` — 9 craft rules from Zinsser - `references/strunk-and-white-elements-of-style.md` — 9 craft rules from Strunk & White - `references/hbr-guide-better-business-writing.md` — 5 craft rules from Garner / HBR - `references/granola-meeting-transcripts.md` — Granola MCP workflow + Wispr Flow dictation guidance diff --git a/humanizer-classics/references/_rule-index.md b/humanizer-classics/references/_rule-index.md index df15502d..0b94b2c0 100644 --- a/humanizer-classics/references/_rule-index.md +++ b/humanizer-classics/references/_rule-index.md @@ -41,6 +41,9 @@ Flat lookup table of every rule in this skill. 
Use this when a rule ID is mentio | Z-4 | Strip qualifiers — "a bit", "rather", "sort of" | `zinsser-on-writing-well.md` | | Z-5 | Be present on the page; have a self | `zinsser-on-writing-well.md` | | Z-6 | Endings matter — quit when the work is done | `zinsser-on-writing-well.md` | +| Z-7 | A lead must capture the reader immediately | `zinsser-on-writing-well.md` | +| Z-8 | Maintain unity of pronoun, tense, and mood | `zinsser-on-writing-well.md` | +| Z-9 | Most adjectives and adverbs are unnecessary | `zinsser-on-writing-well.md` | | S-1 | Omit needless words | `strunk-and-white-elements-of-style.md` | | S-2 | Use the active voice | `strunk-and-white-elements-of-style.md` | | S-3 | Put statements in positive form | `strunk-and-white-elements-of-style.md` | @@ -56,7 +59,7 @@ Flat lookup table of every rule in this skill. Use this when a rule ID is mentio | H-4 | Imperative for instructions | `hbr-guide-better-business-writing.md` | | H-5 | Cut throat-clearing openers | `hbr-guide-better-business-writing.md` | -**Total:** 49 rules across 4 sources (29 Wikipedia + 6 Zinsser + 9 Strunk & White + 5 HBR). +**Total:** 52 rules across 4 sources (29 Wikipedia + 9 Zinsser + 9 Strunk & White + 5 HBR). 
## Cross-reference graph @@ -75,6 +78,9 @@ When a Wikipedia detection rule fires, the matching book rule(s) usually offer t | W-10 (rule of three), W-11 (synonym cycling) | S-6 (parallel construction), S-8 (avoid loose-sentence monotony) | | W-25 (generic conclusions), H-1 (lead with bottom line) | S-7 (emphatic words at the end) | | Z-5 (be present on the page) — when over-applied | S-9 (do not affect a breezy manner — calibrating partner) | +| W-28 (signposting), W-29 (fragmented headers) | Z-7 (lead must capture immediately) | +| W-11 (synonym cycling), W-22 (sycophantic tone) | Z-8 (unity of pronoun/tense/mood) | +| W-4 (promotional language), W-7 (AI vocabulary) | Z-9 (most adjectives/adverbs are unnecessary) | ## Adding a new rule diff --git a/humanizer-classics/references/zinsser-on-writing-well.md b/humanizer-classics/references/zinsser-on-writing-well.md index b01b21c3..fe3fd2d4 100644 --- a/humanizer-classics/references/zinsser-on-writing-well.md +++ b/humanizer-classics/references/zinsser-on-writing-well.md @@ -1,9 +1,10 @@ # On Writing Well — William Zinsser -**Source:** William Zinsser, *On Writing Well: The Classic Guide to Writing Nonfiction*, 30th anniversary edition (HarperCollins, 2006) +**Source:** William Zinsser, *On Writing Well: The Classic Guide to Writing Nonfiction*, 25th Anniversary Edition (6th ed., HarperResource, 2001) **Type:** Craft prescription (positive guidance — what good writing *does*) **License of pull-quotes:** Fair use — short excerpts for educational commentary **Rule prefix:** `Z` +**Citation note:** All pull-quotes verified against the 25th anniversary edition. Page numbers reference that printing. ## Why this book belongs in humanizer-classics @@ -14,20 +15,23 @@ Zinsser teaches the *positive moves* that LLM writing skips. Where Wikipedia's r | ID | Rule (one line) | Chapter / reference | |----|-----------------|---------------------| | Z-1 | Cut clutter — every word that does no work | Ch.
3 "Clutter" | -| Z-2 | Use short, plain, Anglo-Saxon words; concrete over abstract | Ch. 4 "Style", Ch. 6 "Words" | -| Z-3 | Active verbs do the work; kill nominalizations | Ch. 10 "Bits and Pieces" — Verbs | -| Z-4 | Strip qualifiers — "a bit", "rather", "sort of", "quite" | Ch. 3 "Clutter" | -| Z-5 | Be present on the page; have a self | Ch. 4 "Style", Ch. 5 "The Audience" | +| Z-2 | Use short, plain, Anglo-Saxon words; concrete over abstract | Ch. 3 "Clutter", Ch. 6 "Words" | +| Z-3 | Active verbs do the work; kill nominalizations | Ch. 10 "Bits & Pieces" — Verbs | +| Z-4 | Strip qualifiers — "a bit", "rather", "sort of", "quite" | Ch. 10 "Bits & Pieces" — Little Qualifiers | +| Z-5 | Be present on the page; have a self | Ch. 4 "Style" | | Z-6 | Endings matter — quit when the work is done | Ch. 9 "The Lead and the Ending" | +| Z-7 | A lead must capture the reader immediately | Ch. 9 "The Lead and the Ending" | +| Z-8 | Maintain unity of pronoun, tense, and mood | Ch. 8 "Unity" | +| Z-9 | Most adjectives and adverbs are unnecessary | Ch. 10 "Bits & Pieces" — Adverbs, Adjectives | --- ### Z-1. Cut clutter — every word that does no work -**Source:** Zinsser, *On Writing Well*, Ch. 3 "Clutter" +**Source:** Zinsser, *On Writing Well*, 25th anniversary ed., Ch. 3 "Clutter" -> "Clutter is the disease of American writing. We are a society strangling in unnecessary words, circular constructions, pompous frills, and meaningless jargon." -> — Zinsser, opening of Ch. 3 +> "Look for the clutter in your writing and prune it ruthlessly. Be grateful for everything you can throw away. Reexamine each sentence you put on paper. Is every word doing new work?" +> — Zinsser, Ch. 3 closing (p. 17) **Cross-references:** W-7 (AI vocabulary), W-23 (filler), W-27 (persuasive tropes) **Context tags:** all @@ -46,10 +50,10 @@ Zinsser teaches the *positive moves* that LLM writing skips. Where Wikipedia's r ### Z-2. 
Use short, plain, Anglo-Saxon words; concrete over abstract -**Source:** Zinsser, *On Writing Well*, Ch. 4 "Style" and Ch. 6 "Words" +**Source:** Zinsser, *On Writing Well*, 25th anniversary ed., Ch. 3 "Clutter" (specific quote, p. 16); reinforced in Ch. 6 "Words" -> "Don't dialogue with someone you can talk to. Don't interface with anybody." -> — Zinsser, on the urge to dress up plain verbs +> "Beware, then, of the long word that's no better than the short word: 'assistance' (help), 'numerous' (many), 'facilitate' (ease), 'individual' (man or woman), 'remainder' (rest), 'initial' (first), 'implement' (do), 'sufficient' (enough), 'attempt' (try), 'referred to as' (called) and hundreds more. … Don't dialogue with someone you can talk to. Don't interface with anybody." +> — Zinsser, Ch. 3 (p. 16) **Cross-references:** W-7 (AI vocabulary), W-3 (-ing analyses), S-4 (specific concrete language) **Context tags:** all (especially blog, email, book-draft) @@ -68,10 +72,10 @@ Zinsser teaches the *positive moves* that LLM writing skips. Where Wikipedia's r ### Z-3. Active verbs do the work; kill nominalizations -**Source:** Zinsser, *On Writing Well*, Ch. 10 "Bits and Pieces" — section on Verbs +**Source:** Zinsser, *On Writing Well*, 25th anniversary ed., Ch. 10 "Bits & Pieces" — Verbs section (p. 68-69) -> "Verbs are the most important of all your tools. They push the sentence forward and give it momentum." -> — Zinsser, Ch. 10 +> "Verbs are the most important of all your tools. They push the sentence forward and give it momentum. Active verbs push hard; passive verbs tug fitfully." +> — Zinsser, Ch. 10 (p. 69) **Cross-references:** W-13 (passive voice), S-2 (active voice), W-8 (copula avoidance) **Context tags:** all @@ -90,10 +94,10 @@ Zinsser teaches the *positive moves* that LLM writing skips. Where Wikipedia's r ### Z-4. Strip qualifiers — "a bit", "rather", "sort of", "quite" -**Source:** Zinsser, *On Writing Well*, Ch.
3 "Clutter" +**Source:** Zinsser, *On Writing Well*, 25th anniversary ed., Ch. 10 "Bits & Pieces" — Little Qualifiers section (p. 71-72) -> "Don't say you were a bit confused and sort of tired and a little depressed. Be confused. Be tired. Be depressed." -> — Zinsser, Ch. 3 (paraphrased; common rendering) +> "Don't say you were a bit confused and sort of tired and a little depressed and somewhat annoyed. Be confused. Be tired. Be depressed. Be annoyed. Don't hedge your prose with little timidities. Good writing is lean and confident." +> — Zinsser, Ch. 10 (p. 71) **Cross-references:** W-24 (excessive hedging), S-3 (positive form) **Context tags:** all @@ -112,10 +116,10 @@ Zinsser teaches the *positive moves* that LLM writing skips. Where Wikipedia's r ### Z-5. Be present on the page; have a self -**Source:** Zinsser, *On Writing Well*, Ch. 4 "Style" and Ch. 5 "The Audience" +**Source:** Zinsser, *On Writing Well*, 25th anniversary ed., Ch. 4 "Style" — closing line (p. 24) -> "Sell yourself, and your subject will exert its own appeal. Believe in your own identity and your own opinions." -> — Zinsser, Ch. 5 +> "Sell yourself, and your subject will exert its own appeal. Believe in your own identity and your own opinions. Writing is an act of ego, and you might as well admit it. Use its energy to keep yourself going." +> — Zinsser, Ch. 4 closing (p. 24) **Cross-references:** W-22 (sycophantic tone), W-25 (generic conclusion), H-5 (throat-clearing) **Context tags:** blog, book-draft, email, dictation (NOT memo, technical-doc, meeting-notes) @@ -134,10 +138,10 @@ Zinsser teaches the *positive moves* that LLM writing skips. Where Wikipedia's r ### Z-6. Endings matter — quit when the work is done -**Source:** Zinsser, *On Writing Well*, Ch. 9 "The Lead and the Ending" +**Source:** Zinsser, *On Writing Well*, 25th anniversary ed., Ch. 9 "The Lead and the Ending" (p. 65) -> "The perfect ending should take the reader slightly by surprise and yet seem exactly right." 
-> — Zinsser, Ch. 9 +> "The perfect ending should take your readers slightly by surprise and yet seem exactly right. They didn't expect the article to end so soon, or so abruptly, or to say what it said. … For the nonfiction writer, the simplest way of putting this into a rule is: when you're ready to stop, stop." +> — Zinsser, Ch. 9 (p. 65-66) **Cross-references:** W-25 (generic positive conclusions), W-6 (challenges-and-future-prospects), H-1 (lead with bottom line) **Context tags:** all @@ -152,3 +156,72 @@ Zinsser teaches the *positive moves* that LLM writing skips. Where Wikipedia's r > If you don't have tests, you're guessing. That's the real cost. **How to apply:** Find the last sentence in the draft that actually says something. Delete everything after it. If the ending now feels abrupt, that's usually the right feel — Zinsser argues the abruptness is what makes a closing land. If you cut and the piece truly hangs, write *one* concrete sentence (a fact, a quote, a turn) — not a summary. + + +### Z-7. A lead must capture the reader immediately + +**Source:** Zinsser, *On Writing Well*, 25th anniversary ed., Ch. 9 "The Lead and the Ending" (p. 55-56) + +> "The most important sentence in any article is the first one. If it doesn't induce the reader to proceed to the second sentence, your article is dead. And if the second sentence doesn't induce him to continue to the third sentence, it's equally dead." +> — Zinsser, Ch. 9 opening (p. 55) + +**Cross-references:** H-1 (lead with the bottom line), Z-6 (endings matter), W-28 (signposting), W-29 (fragmented headers) +**Context tags:** blog, book-draft, email (NOT memo, technical-doc — those use H-1 instead) +**Detection cue:** Opening sentences that set context instead of grabbing attention: "In today's rapidly evolving landscape…", "As we move further into the digital age…", "The world of [topic] has been undergoing significant changes…". Bad lead patterns Zinsser explicitly names (Ch. 9, p. 
60): the "future archaeologist" lead, the "visitor from Mars" lead, the "have-in-common" lead, the "one day not long ago" lead. Leads that take three sentences to admit what the piece is even about.
+
+**Problem:** LLMs warm up before saying anything. The first paragraph clears its throat, sets vague context, and admits the piece's actual subject only in paragraph 2 or 3. Zinsser is brutal: if the first sentence doesn't pull the reader to the second, the article is *dead*. The lead must "cajole him with freshness, or novelty, or paradox, or humor, or surprise, or with an unusual idea, or an interesting fact, or a question" (p. 56). It must give the reader hard details about why this piece is worth reading. The companion to Z-6 (endings): the lead and the ending are the two highest-leverage sentences in any piece, and Zinsser devotes a whole chapter to both for that reason.
+
+**Before**
+> In today's rapidly evolving technological landscape, AI coding assistants have emerged as a topic of significant interest among developers and engineers. As organizations increasingly adopt these tools, questions arise about their impact on productivity, code quality, and the nature of software development itself. This piece will explore some of the key considerations…
+
+**After**
+> The Google study said developers using Codex finished simple functions 55% faster. The Uplevel study, looking at the same teams six months later, found no statistically significant change in pull-request throughput. Both can be true — and the gap between them is where the real argument lives.
+
+**How to apply:** Read your draft's first sentence. If it could be the first sentence of any other article on the same topic, rewrite. The lead should contain a specific fact, a paradox, a question, a quoted line, or a small concrete scene that signals what's at stake in *this* piece, not the topic in general. If you find the actual lead three paragraphs in, that paragraph is the real opening — promote it. 
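The Z-7 detection cue is mechanical enough to sketch in code. Here is a minimal, hypothetical regex pass; the pattern list and function name are illustrative, not part of the skill's actual tooling:

```python
import re

# Hypothetical opener patterns, taken from the Z-7 detection cue above.
GENERIC_LEAD_PATTERNS = [
    r"^In today's (?:rapidly evolving|fast-paced)",
    r"^As we move further into",
    r"^The world of .+ has been undergoing",
]

def has_generic_lead(draft: str) -> bool:
    """True if the draft opens with a throat-clearing lead that Z-7 would flag."""
    first_line = draft.strip().splitlines()[0]
    return any(re.search(pat, first_line) for pat in GENERIC_LEAD_PATTERNS)
```

A checker like this only flags the obvious offenders; finding and promoting the buried lead still takes the manual step described in How to apply.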
+ + +### Z-8. Maintain unity of pronoun, tense, and mood + +**Source:** Zinsser, *On Writing Well*, 25th anniversary ed., Ch. 8 "Unity" (p. 49-54) + +> "Unity is the anchor of good writing. So, first, get your unities straight. … Therefore choose from among the many variables and stick to your choice." +> — Zinsser, Ch. 8 (p. 50) + +**Cross-references:** W-11 (synonym cycling), W-22 (sycophantic tone shifts), S-2 (active voice consistency) +**Context tags:** all +**Detection cue:** Pronoun drift inside a single passage — "the user should…" then "we recommend…" then "you might consider…" then "one ought to…". Tense flips — past for two paragraphs, then present, then back. Tone changes — formal opening, casual mid-section ("super important!"), formal close. Register shifts within the same piece, especially common in AI-assembled longer text where sections were generated separately. + +**Problem:** AI-generated longer text often reads as if assembled from pieces written by different writers. The first paragraph is in third person past tense; paragraph two switches to second person present; paragraph three drifts to first-person plural. The mood swings from technical-formal to casual-conversational and back. None of the shifts are signposted, so the reader's subconscious sense of who is speaking and from when keeps getting jolted. Zinsser names three explicit unities — pronoun, tense, mood — and the rule is the same for each: pick one and stick with it. Travel writing is his canonical example (p. 51): the writer who shifts from "I" to "you" to a guidebook's third-person "the visitor" loses the reader's trust. + +**Before** +> When the team launched the new feature in October, we faced a few unexpected challenges. The user immediately reports several issues with the dashboard, mostly around performance. One should consider that load patterns can vary widely — and you'll want to monitor the rollout carefully to make sure things stay smooth. 
Engineering teams will be paying close attention.
+
+**After**
+> When we launched the new feature in October, we faced a few unexpected challenges. Users reported several dashboard performance issues in the first week. We watched the load patterns closely, since they vary widely across customer segments, and we paged the on-call engineer twice that week before the rollout stabilized.
+
+**How to apply:** Before rewriting, decide three things: the *narrator* (first person, second person, third person, or plural "we"), the *tense* (past, present, or — rarely — future), and the *mood* (formal, casual, technical, or polemical). Write all three on a sticky note. Read each sentence and check that it matches all three. If a sentence shifts pronoun, tense, or mood, rewrite it to match the chosen unity. Zinsser allows tense changes when the meaning genuinely demands them (a flashback, a hypothetical) — but the *principal* tense should be one and stable.
+
+
+### Z-9. Most adjectives and adverbs are unnecessary
+
+**Source:** Zinsser, *On Writing Well*, 25th anniversary ed., Ch. 10 "Bits & Pieces" — Adverbs and Adjectives sections (p. 69-71)
+
+> "Most adverbs are unnecessary. You will clutter your sentence and annoy the reader if you choose a verb that has a specific meaning and then add an adverb that carries the same meaning."
+> — Zinsser, Ch. 10 (p. 69)
+
+> "Most adjectives are also unnecessary. … The adjective that exists solely as decoration is a self-indulgence for the writer and a burden for the reader."
+> — Zinsser, Ch. 10 (p. 70-71)
+
+**Cross-references:** W-4 (promotional language), W-7 (AI vocabulary), S-5 (do not overstate), Z-2 (plain words), Z-3 (active verbs)
+**Context tags:** all
+**Detection cue:** Redundant adverbs after specific verbs — "blared loudly", "clenched tightly", "whispered quietly", "smiled happily", "shouted angrily". 
Decorative adjectives that restate the noun's known property — "yellow daffodils", "brownish dirt", "tall skyscraper", "precipitous cliff", "lacy spiderweb". "-ly" intensifiers everywhere — "extremely", "incredibly", "absolutely", "totally", "literally" (when not literal). + +**Problem:** LLMs add adjectives and adverbs as a default to make prose sound "vivid", but most of them work against vividness — they restate what the verb or noun already conveys. Zinsser's rule: every modifier must do work that needs to be done. "Blared" already means loud; adding "loudly" weakens the verb. "Daffodils" are yellow; adding "yellow" tells the reader nothing new. The adjective that earns its place — "garish daffodils" (a value judgment), "drab apartment" (a specific feeling) — is the one that survives. Strip the rest, and the verbs get their power back. + +**Before** +> The vibrant cultural heritage of the region was beautifully captured in the colorful murals that lined the ancient stone streets, where excited tourists eagerly photographed the stunning architectural details with their digital cameras. The morning sun shone brightly over the bustling town square as locals warmly greeted visiting strangers. + +**After** +> Murals lined the stone streets — the same ones the town's grandparents had walked. Tourists photographed the carved doorframes and the fountain; the locals kept moving past them. The square was already full at 8 a.m. + +**How to apply:** Highlight every adverb and adjective in the draft. For each adverb, ask: does the verb already mean this? If yes, cut the adverb (and consider whether a stronger verb could replace the verb-plus-adverb pair entirely — "ran quickly" → "sprinted"). For each adjective, ask: does the noun already imply this property, or is the adjective decorative? If decoration, cut. If the modifier survives — it adds genuine information the noun or verb couldn't carry alone — keep it. 
The remaining adjectives and adverbs will read with full force because they're not lost in a crowd. From 5100acd9b1834cc237ab58e25441494f29c8dc9a Mon Sep 17 00:00:00 2001 From: bdevz <87504907+bdevz@users.noreply.github.com> Date: Wed, 29 Apr 2026 12:41:43 -0400 Subject: [PATCH 4/6] Add Z-10: write for yourself, not for an imagined audience MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit One more Zinsser rule from Ch. 5 "The Audience" (p. 25-30). Catches a distinct AI failure mode that Z-5 and S-9 don't cover: - Z-5 says: be present on the page (use "I", state opinions). - S-9 warns: don't tip into manufactured spontaneity ("Hey folks!"). - Z-10 catches the third register: prose that is technically personal yet calibrated to please an imagined gallery — trend-chasing openers ("In today's fast-paced landscape…"), preemptive audience-pleasing ("you'll love this!", "trust me"), LinkedIn humble-brag stance, hedge-claims that exist to manage a phantom critical reader. Pull-quote verified from the same 25th anniversary edition we cite elsewhere. Context tags: blog, book-draft, email, dictation. Does NOT fire on memo, technical-doc, or meeting-notes — those have a real specific audience and audience-awareness is appropriate. This is the natural stopping point for Zinsser mining. The remaining unread chapters (Forms, Attitudes, the rest of Words/Usage) would either duplicate existing rules or be too philosophical to fire on text. Total now 53 rules across SKILL.md catalog, README, _rule-index, and CHANGELOG. 
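The three-register distinction above can be sanity-checked with a toy counter. This is a sketch only, assuming a hypothetical phrase list far smaller than the real Z-10 detection cue:

```python
import re

# Hypothetical gallery-courting phrases drawn from the Z-10 detection cue.
GALLERY_PHRASES = [
    r"trust me",
    r"you'?ll (?:absolutely )?love this",
    r"blow(?:ing)? (?:your )?minds?",
    r"left behind",
    r"in today's fast-paced",
]

def gallery_score(text: str) -> int:
    """Count audience-courting phrases; a nonzero score suggests a Z-10 pass."""
    lower = text.lower()
    return sum(len(re.findall(pat, lower)) for pat in GALLERY_PHRASES)
```

A score like this can prioritize which drafts get the full Z-10 treatment, but the rewrite itself (cutting every sentence that manages the audience's reaction) stays manual.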
--- humanizer-classics/CHANGELOG.md | 5 ++-- humanizer-classics/README.md | 4 ++-- humanizer-classics/SKILL.md | 3 ++- humanizer-classics/references/_rule-index.md | 4 +++- .../references/zinsser-on-writing-well.md | 23 +++++++++++++++++++ 5 files changed, 33 insertions(+), 6 deletions(-) diff --git a/humanizer-classics/CHANGELOG.md b/humanizer-classics/CHANGELOG.md index 01b429af..f3763a7a 100644 --- a/humanizer-classics/CHANGELOG.md +++ b/humanizer-classics/CHANGELOG.md @@ -8,8 +8,8 @@ All notable changes to humanizer-classics. Format roughly follows [Keep a Change - Initial v3 release as a fork of `humanizer` v2.5.1. - New architecture: slim `SKILL.md` dispatcher (~250 lines) + per-source `references/` files lazy-loaded as rules fire. -- 23 new craft rules sourced from foundational writing books, each with a citation and pull-quote verified against the source PDFs: - - **Zinsser, *On Writing Well*** (25th Anniversary Edition, 6th ed., HarperResource, 2001) — Z-1 through Z-9: +- 24 new craft rules sourced from foundational writing books, each with a citation and pull-quote verified against the source PDFs: + - **Zinsser, *On Writing Well*** (25th Anniversary Edition, 6th ed., HarperResource, 2001) — Z-1 through Z-10: - Z-1: Cut clutter — every word that does no work (Ch. 3) - Z-2: Use short, plain, Anglo-Saxon words (Ch. 3, Ch. 6) - Z-3: Active verbs do the work; kill nominalizations (Ch. 10) @@ -19,6 +19,7 @@ All notable changes to humanizer-classics. Format roughly follows [Keep a Change - Z-7: A lead must capture the reader immediately (Ch. 9) - Z-8: Maintain unity of pronoun, tense, and mood (Ch. 8) - Z-9: Most adjectives and adverbs are unnecessary (Ch. 10) + - Z-10: Write for yourself, not for an imagined mass audience (Ch. 
5) - **Strunk & White, *The Elements of Style*** (4th ed., 1999) — S-1 through S-9: - S-1: Omit needless words (II.17) - S-2: Use the active voice (II.14) diff --git a/humanizer-classics/README.md b/humanizer-classics/README.md index 01cffd5c..46a27710 100644 --- a/humanizer-classics/README.md +++ b/humanizer-classics/README.md @@ -8,7 +8,7 @@ A Claude Code / OpenCode skill that refines AI-generated text using rules drawn `humanizer-classics` is the v3 fork of [`humanizer`](../). Where v2 catalogs **what AI writing looks like** (29 patterns from Wikipedia's "Signs of AI writing"), v3 adds **what good human writing does** — craft prescriptions sourced from books, with citations. -- **52 rules** at launch: 29 detection rules (Wikipedia) + 23 craft rules (Zinsser × 9, Strunk & White × 9, HBR Guide × 5) +- **53 rules** at launch: 29 detection rules (Wikipedia) + 24 craft rules (Zinsser × 10, Strunk & White × 9, HBR Guide × 5) - **Two-pass process**: draft → audit ("what makes this still obviously AI?") → final - **Voice calibration**: paste 2-3 paragraphs of your own writing and the skill matches your rhythm and word choice instead of generic "clean" prose - **Granola integration**: pull meeting transcripts directly via MCP and humanize them @@ -99,7 +99,7 @@ When a detection rule fires, the matching craft rule(s) usually offer the better ## Books currently included -- **Zinsser**, *On Writing Well* (25th Anniversary Edition, 6th ed., 2001) — 9 rules +- **Zinsser**, *On Writing Well* (25th Anniversary Edition, 6th ed., 2001) — 10 rules - **Strunk & White**, *The Elements of Style* (4th ed., 1999) — 9 rules - **Garner / HBR**, *HBR Guide to Better Business Writing* (1st ed., 2012) — 5 rules diff --git a/humanizer-classics/SKILL.md b/humanizer-classics/SKILL.md index ea5de785..0a4b418d 100644 --- a/humanizer-classics/SKILL.md +++ b/humanizer-classics/SKILL.md @@ -134,6 +134,7 @@ Craft rules (positive guidance — what good writing *does*): | Z-7 | A lead must capture the 
reader immediately or the article is dead | `references/zinsser-on-writing-well.md` | | Z-8 | Maintain unity of pronoun, tense, and mood | `references/zinsser-on-writing-well.md` | | Z-9 | Most adjectives and adverbs are unnecessary | `references/zinsser-on-writing-well.md` | +| Z-10 | Write for yourself, not for an imagined mass audience | `references/zinsser-on-writing-well.md` | | S-1 | Omit needless words | `references/strunk-and-white-elements-of-style.md` | | S-2 | Use the active voice | `references/strunk-and-white-elements-of-style.md` | | S-3 | Put statements in positive form | `references/strunk-and-white-elements-of-style.md` | @@ -291,7 +292,7 @@ Provide: - `references/_rule-index.md` — full rule index with cross-reference graph - `references/_template-book-rules.md` — template for contributing a new book - `references/wikipedia-signs-of-ai-writing.md` — 29 detection rules (CC BY-SA 4.0) -- `references/zinsser-on-writing-well.md` — 9 craft rules from Zinsser +- `references/zinsser-on-writing-well.md` — 10 craft rules from Zinsser - `references/strunk-and-white-elements-of-style.md` — 9 craft rules from Strunk & White - `references/hbr-guide-better-business-writing.md` — 5 craft rules from Garner / HBR - `references/granola-meeting-transcripts.md` — Granola MCP workflow + Wispr Flow dictation guidance diff --git a/humanizer-classics/references/_rule-index.md b/humanizer-classics/references/_rule-index.md index 0b94b2c0..f9cb4240 100644 --- a/humanizer-classics/references/_rule-index.md +++ b/humanizer-classics/references/_rule-index.md @@ -44,6 +44,7 @@ Flat lookup table of every rule in this skill. 
Use this when a rule ID is mentio | Z-7 | A lead must capture the reader immediately | `zinsser-on-writing-well.md` | | Z-8 | Maintain unity of pronoun, tense, and mood | `zinsser-on-writing-well.md` | | Z-9 | Most adjectives and adverbs are unnecessary | `zinsser-on-writing-well.md` | +| Z-10 | Write for yourself, not for an imagined mass audience | `zinsser-on-writing-well.md` | | S-1 | Omit needless words | `strunk-and-white-elements-of-style.md` | | S-2 | Use the active voice | `strunk-and-white-elements-of-style.md` | | S-3 | Put statements in positive form | `strunk-and-white-elements-of-style.md` | @@ -59,7 +60,7 @@ Flat lookup table of every rule in this skill. Use this when a rule ID is mentio | H-4 | Imperative for instructions | `hbr-guide-better-business-writing.md` | | H-5 | Cut throat-clearing openers | `hbr-guide-better-business-writing.md` | -**Total:** 52 rules across 4 sources (29 Wikipedia + 9 Zinsser + 9 Strunk & White + 5 HBR). +**Total:** 53 rules across 4 sources (29 Wikipedia + 10 Zinsser + 9 Strunk & White + 5 HBR). ## Cross-reference graph @@ -81,6 +82,7 @@ When a Wikipedia detection rule fires, the matching book rule(s) usually offer t | W-28 (signposting), W-29 (fragmented headers) | Z-7 (lead must capture immediately) | | W-11 (synonym cycling), W-22 (tone shifts) | Z-8 (unity of pronoun/tense/mood) | | W-4 (promotional language), W-7 (AI vocabulary) | Z-9 (most adjectives/adverbs are unnecessary) | +| W-22 (sycophantic tone), W-25 (generic conclusions) — when audience-courting | Z-10 (write for yourself, not the gallery) | ## Adding a new rule diff --git a/humanizer-classics/references/zinsser-on-writing-well.md b/humanizer-classics/references/zinsser-on-writing-well.md index fe3fd2d4..42ddd281 100644 --- a/humanizer-classics/references/zinsser-on-writing-well.md +++ b/humanizer-classics/references/zinsser-on-writing-well.md @@ -23,6 +23,7 @@ Zinsser teaches the *positive moves* that LLM writing skips. 
Where Wikipedia's r | Z-7 | A lead must capture the reader immediately | Ch. 9 "The Lead and the Ending" | | Z-8 | Maintain unity of pronoun, tense, and mood | Ch. 8 "Unity" | | Z-9 | Most adjectives and adverbs are unnecessary | Ch. 10 "Bits & Pieces" — Adverbs, Adjectives | +| Z-10 | Write for yourself, not for an imagined mass audience | Ch. 5 "The Audience" | --- @@ -225,3 +226,25 @@ Zinsser teaches the *positive moves* that LLM writing skips. Where Wikipedia's r > Murals lined the stone streets — the same ones the town's grandparents had walked. Tourists photographed the carved doorframes and the fountain; the locals kept moving past them. The square was already full at 8 a.m. **How to apply:** Highlight every adverb and adjective in the draft. For each adverb, ask: does the verb already mean this? If yes, cut the adverb (and consider whether a stronger verb could replace the verb-plus-adverb pair entirely — "ran quickly" → "sprinted"). For each adjective, ask: does the noun already imply this property, or is the adjective decorative? If decoration, cut. If the modifier survives — it adds genuine information the noun or verb couldn't carry alone — keep it. The remaining adjectives and adverbs will read with full force because they're not lost in a crowd. + + +### Z-10. Write for yourself, not for an imagined mass audience + +**Source:** Zinsser, *On Writing Well*, 25th anniversary ed., Ch. 5 "The Audience" (p. 25-30) + +> "It's a fundamental question, and it has a fundamental answer: You are writing for yourself. Don't try to visualize the great mass audience. There is no such audience — every reader is a different person." +> — Zinsser, Ch. 5 (p. 
25) + +**Cross-references:** Z-5 (be present on the page) — *complementary, not duplicate*; S-9 (do not affect a breezy manner); W-22 (sycophantic tone); W-25 (generic positive conclusions) +**Context tags:** blog, book-draft, email, dictation (does NOT fire on memo, technical-doc, meeting-notes — those have a real, specific audience and audience-awareness is appropriate) +**Detection cue:** Trend-chasing openers ("In an era when everyone is talking about…", "In today's fast-paced digital landscape…"). Preemptive audience-pleasing ("you'll love this!", "this will blow your mind", "trust me on this one"). The "imagined gallery" voice — sentences calibrated to maximize reactions rather than to say something specific. LinkedIn-style humble-brag stance. Hedge-claims that don't commit ("Some might say… Others would argue…") because the writer is hedging against an imagined critical audience that may not exist. + +**Problem:** This is a distinct failure mode from Z-5 and S-9. Z-5 says *put yourself on the page*. S-9 warns against the *manufactured-spontaneity* over-correction (breezy "Hey folks!" register). Z-10 catches a third register: prose that is technically "personal" — uses "I", states opinions — yet is calibrated to please an imagined audience. The writer is courting the gallery. Zinsser's prescription is harder than it sounds: write what you actually think, in the language that comes naturally, even at the cost of fewer readers. He cites Mencken (p. 30): *"He was writing for himself and didn't give a damn what the reader might think. … Mencken was never timid or evasive; he didn't kowtow to the reader or curry anyone's favor."* + +**Before** +> In today's fast-paced digital landscape, you'll absolutely love this game-changing approach to productivity that's been blowing minds across the industry. Some experts might say it's a fad, but trust me — those who get it early are reaping massive benefits. You don't want to be left behind on this one! 
+ +**After** +> I've been using this approach for six months. It cut my email backlog by half and made my mornings unrecognizable. I don't know whether it scales past one person — I'm one person — but the productivity-guru framing the industry built around it is mostly noise. Skip the framing, try the move. + +**How to apply:** When the draft has Z-5 voice but feels performative, ask: who is the writer trying to please? If the answer is "the imagined audience that will share/like this", the writer is courting the gallery. Cut every sentence that exists to manage the audience's reaction — preemptive defenses, trend-language, "you'll love this", "trust me", "this is a hot take". What remains should be what the writer would say to a single specific reader they trust. **Z-10 fires on the same contexts as Z-5** (blog, book-draft, email, dictation). It does NOT fire on memo, technical-doc, or meeting-notes — those have a real specific audience, and writing toward that audience is the form's job. From a90c622d0583977b26bb2054fa7c69995e3ff5e1 Mon Sep 17 00:00:00 2001 From: bdevz <87504907+bdevz@users.noreply.github.com> Date: Wed, 29 Apr 2026 13:01:55 -0400 Subject: [PATCH 5/6] Add structural validator; consolidate H-2 Before/After to single pair MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Local sanity check uncovered one format inconsistency: H-2 had two parallel Before sections ("wall of prose" and "carpet of bolded bullets") instead of the single Before/After pair every other rule follows. Consolidated to the bolded-bullets example (the more H-2-specific failure) and moved the wall-of-prose discussion into the Problem section, with a note that wall-of-prose is really an H-3 / Z-1 issue. 
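The bolded-bullets failure H-2 now targets can also be approximated numerically. A hypothetical heuristic, not part of validate.py; the function name and emoji range are assumptions:

```python
import re

# Rough emoji range covering the pictographs common in AI-formatted bullets.
EMOJI = re.compile(r"[\U0001F300-\U0001FAFF]")

def bullet_carpet_score(text: str) -> float:
    """Fraction of bullet/quote lines carrying bold markers or emoji decoration."""
    bullets = [l for l in text.splitlines() if l.lstrip().startswith(("-", "*", ">"))]
    if not bullets:
        return 0.0
    noisy = [l for l in bullets if "**" in l or EMOJI.search(l)]
    return len(noisy) / len(bullets)
```

A score near 1.0 is the "carpet" H-2 catches; a long stretch with no bullets at all is the wall-of-prose case, which belongs to H-3 / Z-1 as noted above.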
Adds tests/validate.py — a structural validator contributors run before opening PRs: - SKILL.md frontmatter parses as YAML - Every Read references/*.md directive points to a real file - Rule IDs consistent across SKILL.md catalog, _rule-index.md, and reference-file rule sections - Every book rule has the required sections (Source, pull-quote, Cross-references, Context tags, Detection cue, Problem, Before, After, How to apply) - Every corpus sample has its .expected-fixes.md pair and cites only valid rule IDs Exit 0 = pass. The validator does NOT check behavioral correctness (rules actually firing on corpus samples) — that still requires running /humanizer-classics in a Claude Code session against the samples in tests/corpus/. Updates REVIEWING.md to point at the validator (run it first, then the manual eyeball check). Updates CONTRIBUTING.md so the validator shows up in the new-rule and new-book contribution flows. All 5 structural checks pass on the current 53-rule state. --- humanizer-classics/CONTRIBUTING.md | 7 +- .../hbr-guide-better-business-writing.md | 7 +- humanizer-classics/tests/REVIEWING.md | 19 +- humanizer-classics/tests/validate.py | 192 ++++++++++++++++++ 4 files changed, 215 insertions(+), 10 deletions(-) create mode 100755 humanizer-classics/tests/validate.py diff --git a/humanizer-classics/CONTRIBUTING.md b/humanizer-classics/CONTRIBUTING.md index d6c82d33..171942eb 100644 --- a/humanizer-classics/CONTRIBUTING.md +++ b/humanizer-classics/CONTRIBUTING.md @@ -40,7 +40,12 @@ If your rule depends on another change (e.g., a new book file), bundle the depen 5. **Bump the version.** See *Versioning* below. -6. **Run the manual review** (see `tests/REVIEWING.md`) before opening the PR. +6. **Run the validator and manual review** before opening the PR: + ```bash + cd humanizer-classics/ + python3 tests/validate.py + ``` + The validator catches structural problems (broken references, missing sections, inconsistent rule IDs) automatically. 
Then run the manual review in `tests/REVIEWING.md` to catch behavioral issues. ## How to add a new book diff --git a/humanizer-classics/references/hbr-guide-better-business-writing.md b/humanizer-classics/references/hbr-guide-better-business-writing.md index 5d545251..38fccf0f 100644 --- a/humanizer-classics/references/hbr-guide-better-business-writing.md +++ b/humanizer-classics/references/hbr-guide-better-business-writing.md @@ -58,12 +58,9 @@ Zinsser writes for journalists and essayists; Strunk writes for everyone; Garner **Context tags:** memo, email, technical-doc, meeting-notes **Detection cue:** Long paragraphs (8+ sentences) with no subheads, no bullets, no bolding. OR the opposite: a wall of bullets where every line is a full sentence and bolding is on every third word for no reason. -**Problem:** This rule sounds like it conflicts with W-15 / W-16 / W-18 (the rules against boldface, inline-header lists, and emoji decoration), and the conflict is the point. LLM business writing fails in two opposite directions: the wall of unbroken prose (no anchors for the eye) *and* the carpet of bolded bullets and emoji headings (every line shouting "look at me"). H-2 says: real skim-formatting uses subheads as the spine, bullets only for true parallel lists, and bolding only for the one phrase per section that the skimmer absolutely must catch. +**Problem:** This rule sounds like it conflicts with W-15 / W-16 / W-18 (the rules against boldface, inline-header lists, and emoji decoration), and the conflict is the point. LLM business writing fails in two opposite directions: the **wall of unbroken prose** (no anchors for the eye — but that's really an H-3 / Z-1 problem) *and* the **carpet of bolded bullets and emoji headings** (every line shouting "look at me"), which is what H-2 specifically catches. 
Real skim-formatting uses subheads as the spine, bullets only for true parallel lists, and bolding only for the one phrase per section that the skimmer absolutely must catch. -**Before (wall of prose)** -> The Q3 results came in below target. Revenue was 8% under plan because the enterprise team missed its forecast for two large deals that slipped to Q4. The good news is that the SMB team beat their plan by 14%, driven mostly by the new onboarding flow that launched in July. Operating costs were on plan but we ran hot on cloud spend after the data-warehouse migration. Hiring stayed under target since we paused recruiting for the platform team. We expect Q4 to recover assuming the slipped enterprise deals close. - -**Before (carpet of bolded bullets)** +**Before** > 🚀 **Q3 Results Overview:** > - **Revenue:** 8% under plan ❌ > - **Enterprise Sales:** Missed forecast 📉 diff --git a/humanizer-classics/tests/REVIEWING.md b/humanizer-classics/tests/REVIEWING.md index 456fdc20..eeafcfc7 100644 --- a/humanizer-classics/tests/REVIEWING.md +++ b/humanizer-classics/tests/REVIEWING.md @@ -63,10 +63,21 @@ The v2 skill must still work for users on `~/.claude/skills/humanizer/`. After i ## Skill structural sanity -- [ ] `SKILL.md` frontmatter parses (name, version, description, license, compatibility, allowed-tools). -- [ ] Every `Read references/.md` directive in `SKILL.md` points to a file that exists. -- [ ] Every rule ID mentioned in the skill is also in `references/_rule-index.md`. -- [ ] Every reference file follows the format from `_template-book-rules.md` (or `wikipedia-signs-of-ai-writing.md` for the imported W-rules). 
+Run the automated structural validator first:
+
+```bash
+cd humanizer-classics/
+python3 tests/validate.py
+```
+
+The script covers:
+- [ ] `SKILL.md` frontmatter parses (name, version, description, license, compatibility, allowed-tools)
+- [ ] Every `Read references/*.md` directive in `SKILL.md` points to a file that exists
+- [ ] Rule IDs are consistent across `SKILL.md` catalog, `references/_rule-index.md`, and reference-file rule sections
+- [ ] Every book rule has the required structural sections (Source, pull-quote, Cross-references, Context tags, Detection cue, Problem, Before, After, How to apply)
+- [ ] Every corpus sample has its `.expected-fixes.md` pair and cites only valid rule IDs
+
+Exit code 0 means all structural checks pass. The script does NOT verify behavioral correctness — that's what the per-PR check above is for.
 
 ---
 
diff --git a/humanizer-classics/tests/validate.py b/humanizer-classics/tests/validate.py
new file mode 100755
index 00000000..1f77aa97
--- /dev/null
+++ b/humanizer-classics/tests/validate.py
@@ -0,0 +1,192 @@
+#!/usr/bin/env python3
+"""
+Structural validator for humanizer-classics.
+
+Checks that the skill is internally consistent before a PR is opened:
+- SKILL.md frontmatter parses as YAML
+- Every Read references/*.md directive in SKILL.md points to an existing file
+- Rule IDs are consistent across SKILL.md catalog, _rule-index.md, and reference sections
+- Each book rule has the required structural sections (Source, pull-quote, Cross-references,
+  Context tags, Detection cue, Problem, Before, After, How to apply)
+- Every corpus sample has its .expected-fixes.md pair and cites valid rule IDs
+
+Does NOT check behavioral correctness — that requires running the skill against the corpus
+in a Claude Code session (see REVIEWING.md).
+
+Usage:
+    cd humanizer-classics/
+    python3 tests/validate.py
+
+Exit code 0 = all checks pass. Non-zero = at least one check failed. 
+""" + +import os +import re +import sys + +try: + import yaml +except ImportError: + print("ERROR: PyYAML required. Install with: pip install pyyaml") + sys.exit(2) + +ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) + + +def check_frontmatter(): + path = os.path.join(ROOT, "SKILL.md") + with open(path) as f: + content = f.read() + parts = content.split("---", 2) + if len(parts) < 3: + return False, "SKILL.md has no frontmatter delimiters" + try: + fm = yaml.safe_load(parts[1]) + except yaml.YAMLError as e: + return False, f"SKILL.md frontmatter is not valid YAML: {e}" + required = ["name", "version", "description", "license", "compatibility", "allowed-tools"] + missing = [k for k in required if k not in fm] + if missing: + return False, f"SKILL.md frontmatter missing keys: {missing}" + return True, f"SKILL.md frontmatter OK (name={fm['name']}, version={fm['version']})" + + +def check_references_resolve(): + path = os.path.join(ROOT, "SKILL.md") + with open(path) as f: + content = f.read() + refs = set(re.findall(r"references/[A-Za-z0-9_\-]+\.md", content)) + missing = [] + for r in refs: + if not os.path.isfile(os.path.join(ROOT, r)): + missing.append(r) + if missing: + return False, f"SKILL.md references missing files: {missing}" + return True, f"All {len(refs)} references/*.md directives in SKILL.md resolve to real files" + + +def collect_rule_ids(): + with open(os.path.join(ROOT, "SKILL.md")) as f: + skill_ids = set(re.findall(r"\| ([WZSH]-\d+) \|", f.read())) + with open(os.path.join(ROOT, "references", "_rule-index.md")) as f: + index_ids = set(re.findall(r"\| ([WZSH]-\d+) \|", f.read())) + ref_ids = set() + for fname in [ + "wikipedia-signs-of-ai-writing.md", + "zinsser-on-writing-well.md", + "strunk-and-white-elements-of-style.md", + "hbr-guide-better-business-writing.md", + ]: + with open(os.path.join(ROOT, "references", fname)) as f: + ref_ids.update(re.findall(r"^### ([WZSH]-\d+)\.", f.read(), re.M)) + return skill_ids, index_ids, 
ref_ids + + +def check_id_consistency(): + skill_ids, index_ids, ref_ids = collect_rule_ids() + problems = [] + if skill_ids - ref_ids: + problems.append(f"in SKILL.md but no reference section: {sorted(skill_ids - ref_ids)}") + if index_ids - ref_ids: + problems.append(f"in _rule-index.md but no reference section: {sorted(index_ids - ref_ids)}") + if ref_ids - skill_ids: + problems.append(f"defined in reference but not in SKILL.md catalog: {sorted(ref_ids - skill_ids)}") + if ref_ids - index_ids: + problems.append(f"defined in reference but not in _rule-index.md: {sorted(ref_ids - index_ids)}") + if problems: + return False, "Rule ID inconsistency: " + "; ".join(problems) + return True, f"All {len(ref_ids)} rule IDs consistent across SKILL.md, _rule-index.md, and reference files" + + +def check_rule_structure(): + """Each book rule needs: Source, pull-quote (>), Cross-references, Context tags, + Detection cue, Problem, Before, After, How to apply.""" + book_files = { + "zinsser-on-writing-well.md": "Z", + "strunk-and-white-elements-of-style.md": "S", + "hbr-guide-better-business-writing.md": "H", + } + required_markers = [ + "**Source:**", + "**Cross-references:**", + "**Context tags:**", + "**Detection cue:**", + "**Problem:**", + "**Before**", + "**After**", + "**How to apply:**", + ] + problems = [] + total_rules = 0 + for fname, prefix in book_files.items(): + path = os.path.join(ROOT, "references", fname) + with open(path) as f: + content = f.read() + rule_ids = re.findall(r"^### ([WZSH]-\d+)\. ", content, re.M) + blocks = re.split(r"^### [WZSH]-\d+\. 
", content, flags=re.M) + for i, body in enumerate(blocks[1:], start=1): + rid = rule_ids[i - 1] + missing = [m for m in required_markers if m not in body] + if missing: + problems.append(f"{fname} {rid}: missing {missing}") + total_rules += 1 + if problems: + return False, "Rule structure problems: " + "; ".join(problems) + return True, f"All {total_rules} book rules have required structural sections" + + +def check_corpus_pairs(): + corpus_dir = os.path.join(ROOT, "tests", "corpus") + if not os.path.isdir(corpus_dir): + return True, "tests/corpus/ does not exist (no corpus check)" + files = os.listdir(corpus_dir) + inputs = sorted(f for f in files if f.endswith(".md") and not f.endswith(".expected-fixes.md")) + fixes = sorted(f for f in files if f.endswith(".expected-fixes.md")) + problems = [] + for inp in inputs: + if inp.replace(".md", ".expected-fixes.md") not in fixes: + problems.append(f"{inp}: no .expected-fixes.md pair") + with open(os.path.join(corpus_dir, inp)) as f: + head = f.read(500) + if "context:" not in head: + problems.append(f"{inp}: missing 'context:' tag in frontmatter") + # Validate cited rule IDs in expected-fixes are real + _, _, valid_ids = collect_rule_ids() + for fix in fixes: + with open(os.path.join(corpus_dir, fix)) as f: + cited = set(re.findall(r"\b([WZSH]-\d+)\b", f.read())) + invalid = cited - valid_ids + if invalid: + problems.append(f"{fix}: references invalid rule IDs {sorted(invalid)}") + if problems: + return False, "Corpus problems: " + "; ".join(problems) + return True, f"All {len(inputs)} corpus samples paired and cite valid rule IDs" + + +def main(): + checks = [ + ("SKILL.md frontmatter", check_frontmatter), + ("references/*.md resolve", check_references_resolve), + ("rule ID consistency", check_id_consistency), + ("rule structural format", check_rule_structure), + ("corpus pairs and IDs", check_corpus_pairs), + ] + failures = 0 + for name, fn in checks: + ok, msg = fn() + marker = "PASS" if ok else "FAIL" + 
print(f"[{marker}] {name}: {msg}") + if not ok: + failures += 1 + print() + if failures == 0: + print(f"All {len(checks)} structural checks passed.") + print("Note: behavioral correctness (rules actually firing on corpus) requires running") + print("/humanizer-classics in a Claude Code session — see REVIEWING.md.") + return 0 + print(f"{failures} of {len(checks)} checks failed. Fix before opening a PR.") + return 1 + + +if __name__ == "__main__": + sys.exit(main()) From b68e0550522e5aaeb63b638a2e4a11732eab3ffa Mon Sep 17 00:00:00 2001 From: bdevz <87504907+bdevz@users.noreply.github.com> Date: Wed, 29 Apr 2026 16:27:47 -0400 Subject: [PATCH 6/6] Add HOW-TO-USE.md for non-technical teammates; fix install command MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit The previous README install command pointed to a placeholder URL (humanizer-classics as its own repo, which doesn't exist yet — the skill currently lives as a subdirectory inside bdevz/humanizer). Replaced with the actual three-line clone-and-symlink that works today. Adds HOW-TO-USE.md as a plain-English guide aimed at non-technical teammates: a 30-second pitch, the three-line setup, four usage patterns (paste-in, voice match, Granola, Wispr), real before/after examples, and a small troubleshooting section. The README links to it from the Installation section. --- humanizer-classics/HOW-TO-USE.md | 136 +++++++++++++++++++++++++++++++ humanizer-classics/README.md | 16 +++- 2 files changed, 149 insertions(+), 3 deletions(-) create mode 100644 humanizer-classics/HOW-TO-USE.md diff --git a/humanizer-classics/HOW-TO-USE.md b/humanizer-classics/HOW-TO-USE.md new file mode 100644 index 00000000..fe7aa368 --- /dev/null +++ b/humanizer-classics/HOW-TO-USE.md @@ -0,0 +1,136 @@ +# How to Use Humanizer Classics + +A plain-English guide for anyone on the team. No coding background needed. 
+ +## What it is + +Take any AI-written text — a draft Claude wrote for you, a LinkedIn post that came out sounding fake, an email that reads like a chatbot, a Granola meeting summary — and the skill rewrites it to sound human. It uses rules from books you've probably read or heard of: William Zinsser's *On Writing Well*, Strunk and White's *Elements of Style*, and the *HBR Guide to Better Business Writing*. Plus a list of the specific tells of AI writing maintained by editors on Wikipedia. + +**It edits writing. It does not write from scratch.** Bring it text and it will hand back a cleaner version. + +## What you need + +- A Mac +- **Claude Code** installed. If you don't have it, get it at [claude.ai/code](https://claude.ai/code). + +That's it. There's nothing else to install — the skill is just plain text that Claude Code reads. + +## One-time setup (5 minutes) + +Open the **Terminal** app on your Mac (press ⌘+Space, type "Terminal", hit Return) and paste these three lines, one at a time, pressing Return after each: + +```bash +mkdir -p ~/.claude/skills + +git clone https://github.com/bdevz/humanizer ~/humanizer-source + +ln -s ~/humanizer-source/humanizer-classics ~/.claude/skills/humanizer-classics +``` + +What each line does, in plain English: +1. Make sure the folder Claude Code reads its skills from exists. +2. Download the source code into a folder in your home directory. +3. Tell Claude Code where to find this skill. + +If you already have Claude Code open, **quit it and reopen it.** Then type `/` in any Claude Code session and you should see `humanizer-classics` in the list. + +## How to use it + +### 1. Humanize text you paste in + +Open Claude Code. Type: + +``` +/humanizer-classics +``` + +Then paste your AI-written text below it. Send it. Claude will read your text, identify the AI patterns, rewrite it, do a self-audit ("what still sounds AI?"), then give you the final version. + +### 2. 
Match your own voice + +If you want the rewrite to sound like *you* and not like generic "clean prose," paste 2-3 paragraphs of writing you've actually published or sent (LinkedIn posts, an email you wrote, a section of your book). The skill will analyze your sentence rhythm, word choice, and quirks, then match them in the rewrite. + +Example of what to type: + +``` +/humanizer-classics + +Match my voice. Here's a sample of how I write: + +[paste 2-3 of your own paragraphs] + +And here's the AI-written text I want you to humanize: + +[paste the AI text] +``` + +### 3. Clean up a Granola meeting transcript + +If you use Granola for meeting notes, you can ask the skill to pull a transcript directly and humanize it: + +``` +/humanizer-classics + +Pull the transcript from my standup this morning and turn it into a memo I can share with the team. +``` + +The skill will list your recent meetings, ask which one, fetch the transcript, strip out timestamps and speaker labels, and turn it into a tight memo. + +### 4. Clean up Wispr Flow dictation + +If you talk into Wispr Flow and end up with a wall of stream-of-thought text, paste it in: + +``` +/humanizer-classics + +I dictated this into Wispr Flow. Clean it up but keep my voice — it's for a blog post. + +[paste the dictation] +``` + +The skill knows dictation has different patterns (run-ons, "um", restarts) and edits without flattening your voice into generic prose. + +## Real examples of what it fixes + +**AI-sounding LinkedIn post:** +> 🚀 Three lessons I learned from my biggest professional failure. Earlier in my career, I had the privilege of leading a high-stakes product launch that fundamentally changed my perspective on leadership... + +**After humanizer-classics:** +> I shipped a payments product four months late in 2023. It taught me three things, only one of which I'd say out loud at a conference. Here's all three. + +--- + +**AI-sounding business memo:** +> Hi team, I hope this email finds you well. 
As you know, we've been working hard over the past quarter to drive growth and execute on our strategic priorities. I wanted to take a moment to share some thoughts on where we landed and what comes next. Looking back at Q3, it's been a quarter of mixed results...
+
+**After humanizer-classics:**
+> Q3 came in 8% under plan. Two enterprise deals slipped to Q4 — Acme and Globex — and we still expect both in October. SMB beat plan by 14%, mostly from the new onboarding flow. Cloud spend ran hot after the data warehouse migration. Details below.
+
+## Tips
+
+- **Tell it the form.** "Treat this as a memo" or "this is a blog post" or "this is an email" helps it pick the right rules. A memo gets the bottom line on top; a blog post keeps your personal voice.
+- **Always provide a voice sample for personal-voice writing** (LinkedIn, blog, book draft). Without one, the rewrite will be technically correct but generic.
+- **Trust the audit step.** The skill rewrites once, then asks itself "what still sounds AI?", then rewrites again. The final version is usually noticeably better than the first. Don't stop reading at the draft.
+- **It's editing, not magic.** If your input is thin or wrong, the output can't supply what wasn't there. The skill makes good-but-AI-flavored writing sound human; it can't make a bad argument into a good one.
+
+## What the rules come from
+
+53 rules total, each pulled from a real source:
+
+- **29 rules** describe what AI writing *looks like* — from [Wikipedia's "Signs of AI Writing"](https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing) page.
+- **10 rules** from **William Zinsser, *On Writing Well*** (25th anniversary ed., 2001) — the classic guide to writing nonfiction in plain English.
+- **9 rules** from **Strunk & White, *The Elements of Style*** (4th ed., 1999) — the shortest, sharpest writing handbook in the language.
+- **5 rules** from **Bryan A.
Garner, *HBR Guide to Better Business Writing*** (2012) — the modern guide for memos, emails, and reports.
+
+Each rule includes a short pull-quote from the source so you can see exactly what the author was saying.
+
+## When something goes wrong
+
+- **The skill doesn't appear in `/`** — quit Claude Code completely and reopen it. Claude Code only discovers skills at startup.
+- **The rewrite sounds wrong / not like me** — provide a voice sample (see tip above). Without one, the skill defaults to a generic register.
+- **The rewrite cut something I wanted to keep** — say so explicitly: "Keep the line about Acme's renewal exactly as I wrote it." Claude will preserve specific lines on request.
+- **Anything else** — open an issue at [github.com/bdevz/humanizer/issues](https://github.com/bdevz/humanizer/issues).
+
+## Want to add a writing book you love?
+
+The repo is built to grow. If you have a writing book that taught you to write — anything from Pinker to McPhee to Verlyn Klinkenborg — see `CONTRIBUTING.md` in the repo. Anyone can propose a rule. The bar is: it has to trace to a specific page, and the example has to actually catch a real AI failure mode.
diff --git a/humanizer-classics/README.md b/humanizer-classics/README.md
index 46a27710..6bd14d5b 100644
--- a/humanizer-classics/README.md
+++ b/humanizer-classics/README.md
@@ -21,20 +21,30 @@ The v2 skill works and is in active use at version 2.5.1. v3 changes the archite
 
 ## Installation
 
+> **Non-technical user?** See [`HOW-TO-USE.md`](HOW-TO-USE.md) for a plain-English setup and usage guide.
+
+The skill currently lives as a subdirectory inside [`bdevz/humanizer`](https://github.com/bdevz/humanizer); install via clone + symlink.
+ ### Claude Code ```bash -git clone https://github.com//humanizer-classics ~/.claude/skills/humanizer-classics +mkdir -p ~/.claude/skills +git clone https://github.com/bdevz/humanizer ~/humanizer-source +ln -s ~/humanizer-source/humanizer-classics ~/.claude/skills/humanizer-classics ``` -Then invoke with `/humanizer-classics` in any Claude Code session. +Restart Claude Code. Invoke with `/humanizer-classics` in any session. ### OpenCode ```bash -git clone https://github.com//humanizer-classics ~/.config/opencode/skills/humanizer-classics +mkdir -p ~/.config/opencode/skills +git clone https://github.com/bdevz/humanizer ~/humanizer-source 2>/dev/null || true +ln -s ~/humanizer-source/humanizer-classics ~/.config/opencode/skills/humanizer-classics ``` +(Once `humanizer-classics` is split out into its own repo, the install will simplify to a one-liner.) + ## Usage ### Pasted slop