diff --git a/instructor/notes.md b/instructor/notes.md
index c1e4451..548346c 100644
--- a/instructor/notes.md
+++ b/instructor/notes.md
@@ -10,6 +10,13 @@ Build and run a workshop that trains engineering judgment under ambiguity:
 clarifying requirements, surfacing assumptions, defining success criteria,
 anticipating risks, and evaluating outcomes against intent.
 
+## Product Judgment Research Workspace
+
+Research artifacts for EpicProduct.engineer/product-judgment content live in
+`instructor/product-judgment-research/`, indexed in its `readme.md`. Use those
+scenario cards, templates, outreach prompts, and follow-up lists when grounding
+future exercises in real examples.
+
 ## Success Criteria
 
 - Learners explicitly document assumptions before implementation.
diff --git a/instructor/product-judgment-research/decision-record-template.md b/instructor/product-judgment-research/decision-record-template.md
new file mode 100644
index 0000000..21dedf8
--- /dev/null
+++ b/instructor/product-judgment-research/decision-record-template.md
@@ -0,0 +1,101 @@
+# Product Judgment Decision Record Template
+
+Use this as a lightweight artifact for interviews, exercises, and post-decision
+reflection. It should make judgment inspectable without pretending the team had
+perfect information.
+
+## Decision Title
+
+Short noun phrase:
+
+## Situation
+
+- What is happening?
+- Why does this matter now?
+- What user, business, system, or team consequence makes this decision real?
+
+## Beneficiary and Pain
+
+- Who benefits if we get this right?
+- What pain are they experiencing?
+- What language have they used to describe it?
+- What proposed solution should we not blindly accept?
+
+## Decision to Make
+
+- What specific call are we making?
+- What is intentionally out of scope?
+- Who owns the recommendation?
+- Who can approve, block, or override?
+
+## Options
+
+| Option   | Why consider it? | What does it risk? |
+| -------- | ---------------- | ------------------ |
+| Option A |                  |                    |
+| Option B |                  |                    |
+| Option C |                  |                    |
+
+## Evidence
+
+| Evidence | Type                                                            | Confidence          | Notes |
+| -------- | --------------------------------------------------------------- | ------------------- | ----- |
+|          | Quantitative / qualitative / stakeholder / engineering / taste  | Low / medium / high |       |
+
+## Constraints
+
+- Time:
+- Team/capacity:
+- User experience:
+- Technical/system:
+- Business:
+- Cost:
+- Compliance/security:
+- Authority/governance:
+
+## Reversibility
+
+- How hard is this to change later?
+- What migration, compatibility, or trust cost would reversal create?
+- Can we contain the decision behind a flag, beta, abstraction, manual process,
+  or communication plan?
+
+## Recommendation
+
+- Recommended option:
+- Why this option fits the current product phase:
+- Main tradeoff we are accepting:
+- Main risk we are mitigating:
+
+## Rejected Options
+
+- Option rejected:
+  - Why:
+  - What evidence could make it viable later:
+
+## Decision Rights
+
+- Recommendation owner:
+- Approver:
+- Consulted:
+- Informed:
+- Escalation path:
+- Disagreement record:
+
+## Revisit Plan
+
+- Revisit trigger:
+- Revisit date or milestone:
+- Signal to watch:
+- What would change the decision:
+- Who is responsible for checking:
+
+## Outcome Reflection
+
+Complete after the decision has had contact with reality.
+
+- What happened?
+- Which assumptions were validated?
+- Which assumptions were wrong?
+- What did we learn about users, business, system, or team?
+- What should change in the next decision record?
diff --git a/instructor/product-judgment-research/follow-up-contact-list.md b/instructor/product-judgment-research/follow-up-contact-list.md new file mode 100644 index 0000000..eb047a8 --- /dev/null +++ b/instructor/product-judgment-research/follow-up-contact-list.md @@ -0,0 +1,43 @@ +# First-Pass Follow-Up Contact List + +Source: Kody stash notes from Kent's product-decision X prompt and follow-ups. +Prioritize examples with real pain around making, explaining, defending, or +revisiting product judgment calls. + +## Priority Follow-Ups + +| Priority | Contact/example | Why this matters | Ask next | Likely use | +| -------- | --------------- | -------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------- | +| 1 | Manoj Thokala | Direct pain signal: explicitly lacks confidence making and defending product decisions. | "Can you describe the last product-shaped decision you struggled to defend? What was at stake, who pushed back, and what would have helped?" | Target-learner interview and confidence-gap scenario. | +| 2 | Stevie P | Strong MVP/GA triage loop: patch vs right-way fix, feature flags, evidence weighting, alpha exposure, disagree-and-commit. | "How did you estimate impact, who owned the call, and what evidence would have changed patch-now/refactor-later?" | MVP triage card, evidence weighting card, decision-rights practice. | +| 3 | Ronan Berder | Expert practice model: small sticky decisions, written process, AI as debate partner, deliberate revisit loops. | "Can we walk through one decision record for attachments/comments/API shape, including options rejected and revisit trigger?" | Small sticky decisions card and decision-record exemplar. | +| 4 | Max Andrews | Prototype-as-spec loop with customer notes, production-context prototyping, design feedback, and engineering handoff. | "What painful earlier experience taught you to prototype this way, and where can this approach fail?" | Prototype-as-spec card and product/engineering handoff exercise. | +| 5 | Headmaster Duck | Organizational uncertainty loop: reversing a best practice, RFC resistance, delayed outcomes, enough feedback threshold. | "What objections uncovered new risks versus repeated known tradeoffs, and who had authority to proceed?" | RFC/feedback sufficiency card and delayed-outcome decision record. | +| 6 | Kirill Goldin | Enterprise pressure loop: loud customers, competitor parity, prioritization math, and unavailable "no." | "How did the team distinguish broader segment need from one-account pressure, and who could protect product coherence?" | Enterprise pressure card and decision-rights/governance practice. | +| 7 | Sean Manzano | Feature value versus feature framing: AI-aversion, backlash, paid-tier behavior, timing of public response. | "What would you do differently next time, and what evidence is enough to keep investing despite sentiment risk?" | Feature framing card and audience-worldview research. | +| 8 | Marc Beinder | Launch performance bar: taste, target customer expectations, caching as quick win, longer-term cleanup. | "What exact threshold makes the product feel launch-ready, and what signal would force deeper refactor before launch?" | Launch-quality card and perceived-quality exercise. 
| + +## Secondary Follow-Ups + +| Contact/example | Why | Ask next | +| --------------- | ------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------- | +| Nik (@neke989) | Temporary pain versus final product form during migration. | "What signal would turn known interim pain into new risk that requires optimization?" | +| Alem Tuzlak | API/DX taste as product judgment, simplifying AI-generated proposals before feedback. | "Which primitive did you drop, and what made it feel bloated or hard to follow?" | +| Cosmin Pruteanu | Churn/business milestone pressure justifying a long fix. | "How did you measure churn impact, what alternatives were considered, and why was the delay acceptable?" | +| Ryan Allred | Fog-of-war loop: ship known next step, run experiments, delete as uncertainty clears. | "Can you share a concrete example where you deleted or redirected work after the fog cleared?" | + +## Confidential Permission Queue + +| Source | Status | Safe use now | Ask before using | +| --------------------------------------- | ------------------------- | --------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------- | +| Confidential buy-versus-build DM source | Private. Generalize only. | Use the anonymized pattern: AI changes build thresholds only where domain expertise plus internal capability creates asymmetric upside. | Any name, employer, industry, direct quote, or specific platform example. | + +## Outreach Sequencing + +1. Start with Manoj to validate direct learner pain. +2. Pair one expert-practice interview (Ronan or Max) with one messy authority + interview (Stevie P, Headmaster Duck, or Kirill). +3. Use Sean, Marc, Nik, and Alem to diversify beyond classic feature + prioritization into framing, launch quality, migration, and DX judgment. +4. Keep the confidential buy-versus-build source out of public examples unless + Kent confirms permission. diff --git a/instructor/product-judgment-research/outreach-prompts.md b/instructor/product-judgment-research/outreach-prompts.md new file mode 100644 index 0000000..9953184 --- /dev/null +++ b/instructor/product-judgment-research/outreach-prompts.md @@ -0,0 +1,142 @@ +# Outreach Prompts + +Use these to elicit real product-judgment stories, not abstract advice. Keep the +ask focused on one concrete decision the person actually made, defended, or +revisited. + +## Public Reply Prompt + +> This is exactly the kind of product-engineering judgment I'm trying to +> understand better. Would you be open to sharing one more detail: what evidence +> or constraint most shaped the decision, and what would have changed your mind? + +## Public Reply for Direct Pain + +> This is helpful because it gets at the skill I'm trying to teach: not just +> "know product," but make and defend a judgment call with incomplete evidence. +> What's a recent decision you struggled to defend? What was at stake? + +## Public Reply for Expert Practice + +> I love how concrete this is. If you are open to it, I'd like to understand the +> decision loop: options considered, evidence used, what you rejected, and when +> you knew it was time to revisit. Which part was hardest to explain to others? 
+ +## DM Prompt for a 20-30 Minute Interview + +> Hey [name], thanks for sharing your product-decision example. I'm doing +> research for EpicProduct.engineer around how developers build product judgment +> as implementation gets cheaper with AI. +> +> The thing I'm trying to understand is not generic product-management theory. +> It's the real moment where an engineer has to make, explain, or revisit a +> product-shaped call with incomplete evidence, mixed authority, and real +> consequences. +> +> Would you be open to a short call where we reconstruct one decision? I'd ask +> about: +> +> - what was at stake +> - options you considered +> - evidence you trusted or distrusted +> - who could approve/block/override +> - what changed your mind, if anything +> - what artifact or language would have helped +> +> Happy to keep details private or anonymized. + +## DM Prompt for Expert Models + +> Hey [name], your example feels like expert product-engineering practice rather +> than beginner confusion, which makes it especially useful as a model. +> +> Could we walk through one real decision in detail? I'm particularly interested +> in how you write down the process, use AI without outsourcing judgment, decide +> what options to reject, and choose when to revisit. +> +> If we use anything publicly, I'll confirm attribution and quotes first. + +## DM Prompt for Sensitive or Business-Specific Examples + +> Hey [name], your example sounds useful but potentially sensitive. I do not +> want to expose company, customer, or strategic details without permission. +> +> If you are open to it, could we talk through the generalized judgment pattern? +> I can keep it anonymized and focus on the decision shape: evidence, options, +> constraints, authority, and revisit trigger. +> +> Before anything becomes public-facing, I'll confirm exactly what can be quoted +> or attributed. + +## Async Interview Prompt + +> Thanks for being willing to help. Please answer with one real product-shaped +> decision, not general advice. Short bullets are fine. +> +> 1. What was the decision? +> 2. What was at stake for users, business, system quality, or the team? +> 3. What options did you consider? +> 4. What evidence did you have? +> 5. What evidence was missing or untrustworthy? +> 6. Who owned the call? Who could block or override it? +> 7. What tradeoff did you accept? +> 8. What would have changed your mind? +> 9. Did you revisit it later? What triggered the revisit? +> 10. What would have helped you make or defend the decision earlier? +> +> Also: can this be quoted publicly, attributed to you, anonymized, or kept +> private? + +## Follow-Up Prompts by Judgment Loop + +### Prototype as Spec + +> What did the prototype prove that a written spec would not have proven? What +> production constraint would a throwaway prototype have missed? + +### Enough Feedback to Proceed + +> When did objections stop uncovering new risks and start repeating known +> tradeoffs? + +### Small Sticky Decisions + +> How did you decide what "right" meant: user expectations, compatibility, +> migration cost, implementation complexity, business cost, or taste? + +### MVP/GA Triage + +> What made patch-now/refactor-later acceptable, and what signal would have +> forced the right-way fix before launch? + +### Evidence Weighting + +> Which evidence looked objective but was actually too thin or noisy to decide +> from? + +### Decision Rights + +> Was the decision owner explicit? If not, how did authority actually work? 
+ +### Enterprise Pressure + +> How did you distinguish a broader segment need from one loud customer's +> request? + +### Feature Framing + +> Did backlash reveal feature risk, messaging risk, or audience-worldview risk? + +### Buy Versus Build + +> Where has AI changed the build-versus-buy threshold, and where has it not? + +### Temporary Pain Versus Final Form + +> What signal would have turned known interim pain into a reason to pause and +> optimize? + +### Fog of War + +> What did you ship because it was known, and what did you intentionally treat +> as an experiment that might be deleted? diff --git a/instructor/product-judgment-research/raw-example-capture-template.md b/instructor/product-judgment-research/raw-example-capture-template.md new file mode 100644 index 0000000..33fa8c2 --- /dev/null +++ b/instructor/product-judgment-research/raw-example-capture-template.md @@ -0,0 +1,120 @@ +# Raw Example Capture Template + +Use this before synthesis. Capture the example in the source's language first, +then add light interpretation. The goal is to preserve real product-judgment +pain around making, explaining, and revisiting calls. + +## Metadata + +- Source name: +- Source handle/link: +- Date captured: +- Capture method: public reply / DM / interview / async form / other +- Permission status: public / private / anonymize / ask before quoting +- Related stash/source id: +- Follow-up owner: + +## Audience Fit + +- Role: +- Team/company stage: +- Product context: +- Is this person a target learner, expert model, or example source? +- Why: + +## Situation + +- What was happening? +- What made this decision product-shaped rather than only technical? +- What was at stake for users, business, team, or system quality? +- What deadline, launch, migration, customer, or authority pressure existed? + +## Decision + +- What call had to be made? +- Who made the call? +- Who could block or override it? +- Was the decision explicit, implied, or reconstructed after the fact? + +## Options + +List the real options the team considered. + +1. +2. +3. + +List options that seem obvious in hindsight but were not considered. + +1. +2. + +## Evidence + +- Quantitative signal: +- Qualitative signal: +- Customer/user language: +- Internal stakeholder input: +- Engineering instinct or code signal: +- Product/business strategy signal: +- Missing evidence: + +## Constraints + +- Time: +- Team/capacity: +- Technical: +- User experience: +- Business: +- Authority/governance: +- Reversibility/migration: + +## Pain + +- What was hard about making the call? +- What was hard about explaining or defending the call? +- What was hard about revisiting the call later? +- What artifact, language, or practice was missing? + +## Outcome + +- What happened after the decision? +- What changed their mind, if anything? +- What debt or risk remained? +- Was the decision revisited? If yes, what triggered it? + +## Judgment Loop Tags + +Check all that apply. 
+
+- [ ] Prototype as spec
+- [ ] Enough feedback to proceed
+- [ ] Small sticky decision
+- [ ] Launch quality bar
+- [ ] MVP/GA triage
+- [ ] Evidence weighting
+- [ ] Decision rights or override
+- [ ] Enterprise pressure
+- [ ] Feature framing or market perception
+- [ ] Buy versus build
+- [ ] Temporary pain versus final form
+- [ ] API/DX taste
+- [ ] Fog of war experiments
+- [ ] Confidence defending decisions
+
+## Researcher Synthesis
+
+- Strongest quote:
+- Core pain:
+- Product-judgment skill involved:
+- Possible scenario card:
+- Possible exercise artifact:
+- Follow-up question:
+
+## Confidentiality Notes
+
+- Can we quote this?
+- Can we name the person?
+- Can we name the company/product?
+- Must details be generalized?
+- Anything Kent must approve before use:
diff --git a/instructor/product-judgment-research/readme.md b/instructor/product-judgment-research/readme.md
new file mode 100644
index 0000000..df45f8c
--- /dev/null
+++ b/instructor/product-judgment-research/readme.md
@@ -0,0 +1,47 @@
+# Product Judgment Research Workspace
+
+This folder turns the Kody stash research into practical product-content
+artifacts for EpicProduct.engineer and the Engineering Judgment workshop.
+
+## Working Thesis
+
+Developers do not only need more information about product engineering. They
+need practice making, explaining, and revisiting product judgment calls when
+evidence is incomplete, authority is mixed, implementation is cheap, and
+consequences are real.
+
+## Source Notes
+
+- Handoff stash: `00d6dcbd-dbad-484f-8abf-f2b39140b231`
+- Detailed source stash: `380d4438-8d0b-41eb-8f10-40630bac27f8`
+- Original public X prompt:
+
+
+## Contents
+
+- `scenario-cards/` - one-page scenario cards for the strongest judgment loops.
+- `raw-example-capture-template.md` - field template for capturing future real
+  examples without over-synthesizing them.
+- `decision-record-template.md` - lightweight artifact shape to test in
+  exercises and interviews.
+- `follow-up-contact-list.md` - first-pass contact/example list from the source
+  examples.
+- `outreach-prompts.md` - public reply, DM, and async interview prompts.
+
+## Confidentiality
+
+One buy-vs-build source came from a confidential DM. Keep that example
+generalized unless Kent confirms attribution and specific details are permitted.
+Do not name the person, company, or industry in public-facing material.
+
+## How to Use This Workspace
+
+1. Start with the scenario cards when choosing exercise or interview prompts.
+2. Use the raw capture template during follow-up interviews before synthesis.
+3. Ask participants or interviewees to fill out a decision record after
+   describing a real judgment call.
+4. Update the contact list as Kent receives replies, permissions, or
+   disqualifying context.
+5. Keep every artifact anchored in real pain: making the call, explaining the
+   call, defending it under mixed authority, and revisiting it after reality
+   changes.
diff --git a/instructor/product-judgment-research/scenario-cards/ai-era-buy-vs-build.md b/instructor/product-judgment-research/scenario-cards/ai-era-buy-vs-build.md
new file mode 100644
index 0000000..be64e5f
--- /dev/null
+++ b/instructor/product-judgment-research/scenario-cards/ai-era-buy-vs-build.md
@@ -0,0 +1,76 @@
+# Scenario Card: AI-Era Buy vs Build
+
+## Use When
+
+You want learners to practice strategic buy-versus-build judgment after AI
+changes what an internal engineering team can realistically create and support.
+ +## Real Pain Pattern + +Implementation is cheaper, so "we can build it" becomes true more often. That +does not mean "we should build it" is true. The judgment call is whether +internal ownership creates asymmetric domain advantage or merely reduces vendor +spend. + +## Scenario Seed + +A company in a domain-specific, operationally complex market has historically +bought brittle external platforms. AI assistance and a stronger internal +engineering capability make it plausible to build some domain-specific systems +in-house. Generic back-office software remains less compelling to rebuild. + +## Decision to Practice + +Should the company build a domain-specific platform internally, buy a vendor +solution, hybridize around vendor APIs, or defer until internal capability is +proven? + +## Evidence Available + +- Vendor cost and brittleness. +- Internal domain expertise. +- Internal engineering support capacity. +- Strategic differentiation from owning the workflow. +- Maintenance burden after the first build. +- Weak falsifiability: the decision is partly a directional bet. + +## Constraints and Pressure + +- AI changes resource constraints but not accountability. +- "Save vendor money" is weaker than "own a differentiated business capability." +- Internal teams must support what they build. +- Some directional bets cannot be cleanly validated before committing. + +## Options to Weigh + +1. Buy the vendor solution. +2. Build in-house because domain ownership changes business outcomes. +3. Hybridize: buy commodity surfaces and build differentiating workflows. +4. Defer until internal engineering capability is demonstrated elsewhere. + +## Participant Artifact + +Create a buy-versus-build judgment memo: + +- Capability being considered. +- Differentiation hypothesis. +- Internal expertise and support burden. +- Vendor brittleness and switching cost. +- What AI changes and what it does not change. +- Directional bet, risk controls, and revisit signals. + +## Debrief Prompts + +- Is the upside asymmetric or just cheaper? +- What would make the build burden unacceptable? +- Which parts are commodity and which are strategic? +- How will the team stay close to signal if the bet is not quickly falsifiable? + +## Follow-Up Research Question + +"Where has AI changed your build-versus-buy threshold, and where has it not?" + +## Confidentiality Note + +This card is generalized from confidential source material. Do not attribute it +or add specific company, person, or industry details without Kent's permission. diff --git a/instructor/product-judgment-research/scenario-cards/api-design-as-product-judgment.md b/instructor/product-judgment-research/scenario-cards/api-design-as-product-judgment.md new file mode 100644 index 0000000..2786d77 --- /dev/null +++ b/instructor/product-judgment-research/scenario-cards/api-design-as-product-judgment.md @@ -0,0 +1,71 @@ +# Scenario Card: API Design as Product Judgment + +## Use When + +You want learners to practice applying product judgment to developer experience, +especially when AI-generated options are plausible but bloated. + +## Real Pain Pattern + +The maintainer is close to the user and has strong taste, but the API still +needs external feedback before release. AI can generate many abstractions; the +judgment is choosing what primitives to expose, what to drop, and how much +control users should keep. + +## Scenario Seed + +An OSS maintainer is designing an API for agent orchestration. 
AI-generated +proposals contain useful concepts but feel hard to follow. The maintainer keeps +useful ideas, simplifies toward plain JavaScript, shifts control to users, drops +unhelpful primitives, and plans feedback before shipping. + +## Decision to Practice + +Should the API expose powerful abstractions, keep the surface close to the +language, split advanced primitives into later releases, or seek more external +feedback before deciding? + +## Evidence Available + +- Maintainer taste from being a likely user. +- Prototype ergonomics. +- Feedback from other potential users. +- Complexity of generated proposals. +- Risk that an early public API becomes sticky. + +## Constraints and Pressure + +- API design creates long-lived commitments. +- DX frustration is product pain. +- Personal taste is a useful hypothesis, not a complete validation loop. +- AI can inflate the primitive set. + +## Options to Weigh + +1. Ship the richer abstraction set. +2. Simplify to plain-language primitives. +3. Keep advanced primitives experimental. +4. Delay public release for external feedback. + +## Participant Artifact + +Create a DX judgment note: + +- Target developer job. +- Proposed primitives and user control model. +- Concepts dropped and why. +- Taste-based assumptions. +- Feedback required before shipping. +- Revisit trigger after adoption. + +## Debrief Prompts + +- What did "simple" mean for the target developer? +- Which generated concepts were useful but not worth exposing? +- Where did personal taste help, and where could it mislead? +- What would make the API hard to change later? + +## Follow-Up Research Question + +"Can you name a specific API primitive you dropped and what signal made it feel +wrong?" diff --git a/instructor/product-judgment-research/scenario-cards/confidence-gap-defending-decisions.md b/instructor/product-judgment-research/scenario-cards/confidence-gap-defending-decisions.md new file mode 100644 index 0000000..242e3e6 --- /dev/null +++ b/instructor/product-judgment-research/scenario-cards/confidence-gap-defending-decisions.md @@ -0,0 +1,70 @@ +# Scenario Card: Confidence Gap Defending Decisions + +## Use When + +You want to test the most direct pain signal: engineers who know they need help +making and defending product-shaped decisions. + +## Real Pain Pattern + +The engineer is not asking for more product-management theory. They need a way +to reconstruct a real decision, name the evidence, explain the tradeoffs, and +defend the call without pretending uncertainty disappeared. + +## Scenario Seed + +An engineer recently struggled to defend a product-shaped technical decision. +They had some evidence, some instinct, and some stakeholder pressure, but no +clear artifact or language for explaining why their recommendation was +reasonable. + +## Decision to Practice + +How should the engineer reconstruct the decision so they can explain what they +knew, what they assumed, who owned the call, and what would cause a revisit? + +## Evidence Available + +- Their memory of the decision. +- Stakeholder objections. +- Options considered or skipped. +- Outcome so far. +- Missing artifact that would have helped. + +## Constraints and Pressure + +- The decision may already have happened. +- The engineer may feel judged for uncertainty. +- Authority may have been mixed. +- The goal is not to prove they were right; it is to make judgment inspectable. + +## Options to Weigh + +1. Reconstruct the decision record after the fact. +2. Interview stakeholders to fill evidence gaps. +3. 
Create a current recommendation with explicit uncertainty. +4. Treat it as a practice case for a future scenario card. + +## Participant Artifact + +Create a decision reconstruction: + +- The call that was hard to defend. +- What was at stake. +- Evidence and assumptions at the time. +- Alternatives and why they were rejected. +- Who had authority. +- What happened after. +- What would have helped before the call. + +## Debrief Prompts + +- Which part was confidence, and which part was missing structure? +- What language would have made the tradeoff easier to defend? +- Did the engineer need more evidence or a clearer owner? +- How could this become deliberate practice? + +## Follow-Up Research Question + +"Tell me about the last product-shaped decision you struggled to defend. What +would have helped you make the call earlier?" diff --git a/instructor/product-judgment-research/scenario-cards/enough-feedback-to-reverse-best-practice.md b/instructor/product-judgment-research/scenario-cards/enough-feedback-to-reverse-best-practice.md new file mode 100644 index 0000000..4e6a709 --- /dev/null +++ b/instructor/product-judgment-research/scenario-cards/enough-feedback-to-reverse-best-practice.md @@ -0,0 +1,68 @@ +# Scenario Card: Enough Feedback to Reverse a Best Practice + +## Use When + +You want learners to practice organizational product judgment where the outcome +will not be known quickly and technical "best practice" is not enough. + +## Real Pain Pattern + +A long-standing internal best practice creates complexity outside the team's +direct control. Reversing it triggers resistance, but waiting for perfect proof +means continuing to export complexity to customers, partners, or other systems. + +## Scenario Seed + +A company has modeled product identity around a legacy compound concept for +years. Modern commerce, portals, invoices, integrations, and future syndication +need stable single identifiers. The proposed change moves complexity back into +internal systems the team controls. + +## Decision to Practice + +When do objections stop revealing new risks and start repeating known tradeoffs? +Should the team proceed, gather more feedback, narrow the change, or preserve +the existing practice? + +## Evidence Available + +- RFC feedback from impacted teams. +- External integration requirements. +- Known failure modes from the current model. +- No clean near-term win and a long feedback loop before success is obvious. + +## Constraints and Pressure + +- Outcomes may take many months to validate. +- Multiple teams own parts of the current workflow. +- External ecosystems punish unstable or compound identity. +- Internal systems can absorb complexity more safely than public-facing systems. + +## Options to Weigh + +1. Keep the existing best practice and document workarounds. +2. Reverse the practice broadly after RFC review. +3. Pilot the new identity model in one bounded surface. +4. Delay until a clearer business incident creates urgency. + +## Participant Artifact + +Create an RFC decision addendum: + +- Why the old best practice is wrong for this context. +- Which risks are bounded internally versus unbounded externally. +- Objections received and whether each uncovered new risk. +- Decision owner and escalation path. +- Revisit trigger and expected evidence. + +## Debrief Prompts + +- Which arguments were technical preferences versus product/business signals? +- What made the risk surface bounded or unbounded? +- Who had authority to decide that enough feedback had been gathered? 
+- How would you explain the delayed outcome to a skeptical executive? + +## Follow-Up Research Question + +"How did you know you had enough feedback to move forward instead of running one +more alignment cycle?" diff --git a/instructor/product-judgment-research/scenario-cards/enterprise-pressure-vs-product-coherence.md b/instructor/product-judgment-research/scenario-cards/enterprise-pressure-vs-product-coherence.md new file mode 100644 index 0000000..5d9de88 --- /dev/null +++ b/instructor/product-judgment-research/scenario-cards/enterprise-pressure-vs-product-coherence.md @@ -0,0 +1,70 @@ +# Scenario Card: Enterprise Pressure vs Product Coherence + +## Use When + +You want learners to practice customer-request judgment when retention threats, +competitor parity, prioritization math, and decision rights collide. + +## Real Pain Pattern + +A large customer threatens to leave unless a feature is built. Competitors have +something similar. The prioritization framework considers customer size, request +count, and churn risk, but "no" may be unavailable because senior leadership can +overrule the product team. + +## Scenario Seed + +A B2B SaaS product serves giant corporate customers. One customer wants a niche +feature framed as competitor parity. The team worries that saying yes preserves +short-term revenue while pulling the product away from a coherent vision. + +## Decision to Practice + +Should the team build the requested feature, solve the deeper need differently, +commit to discovery only, or push back despite retention pressure? + +## Evidence Available + +- Customer size and revenue exposure. +- Number of customers requesting the feature. +- Competitor checklist pressure. +- Product vision or roadmap. +- Escalation path and likelihood of override. + +## Constraints and Pressure + +- A prioritization formula can exist and still fail if decision rights make "no" + impossible. +- The loudest customer may not represent the target segment. +- Short-term retention may be real, not merely emotional pressure. +- Engineering may inherit complexity from decisions made elsewhere. + +## Options to Weigh + +1. Build the requested feature. +2. Solve the underlying need with a narrower or more coherent alternative. +3. Offer a manual/service workaround while validating broader demand. +4. Say no and accept retention risk. + +## Participant Artifact + +Create a customer-pressure decision brief: + +- Customer request and deeper need. +- Segment evidence versus single-account pressure. +- Product coherence risk. +- Decision rights and escalation path. +- Recommendation and fallback if overruled. +- Revisit trigger after adoption, churn, or usage data. + +## Debrief Prompts + +- Did the framework actually decide, or did authority decide? +- What would prove the request is broader than one loud customer? +- How could engineering make the cost of saying yes legible? +- What is the smallest promise that preserves trust? + +## Follow-Up Research Question + +"How did the team decide whether a request represented a broader segment need or +just one loud customer?" 
diff --git a/instructor/product-judgment-research/scenario-cards/evidence-weighting-and-decision-rights.md b/instructor/product-judgment-research/scenario-cards/evidence-weighting-and-decision-rights.md new file mode 100644 index 0000000..bebeb2a --- /dev/null +++ b/instructor/product-judgment-research/scenario-cards/evidence-weighting-and-decision-rights.md @@ -0,0 +1,71 @@ +# Scenario Card: Evidence Weighting and Decision Rights + +## Use When + +You want learners to practice deciding with weak data, qualitative feedback, +engineering instinct, and mixed authority. + +## Real Pain Pattern + +The team has analytics, but usage is low enough that the numbers do not settle +the decision. Users say one thing, executives want momentum, product owns the +roadmap, and engineering sees future cost. The hard part is not finding one more +metric. It is deciding what evidence counts and who owns the call. + +## Scenario Seed + +A young product has enough events to inspect behavior but not enough volume to +trust every quantitative signal. A decision must be made before launch about +whether to optimize a technical concern, simplify UX, or ship the MVP to prove +the model. + +## Decision to Practice + +When should qualitative feedback outweigh analytics, when should engineering +drive, and when should product or executives override? + +## Evidence Available + +- Analytics events with limited volume. +- User conversations. +- Alpha feedback. +- Executive/product roadmap pressure. +- Engineering concern about scalability or maintainability. + +## Constraints and Pressure + +- Quantitative evidence may be measurable but misleading. +- Product strategy changes the right technical choice. +- Engineering concerns are real, but may be over-optimized for the current + phase. +- Decision rights may be unclear or contested. + +## Options to Weigh + +1. Follow the analytics. +2. Prioritize qualitative user conversations. +3. Let engineering drive because the risk is technical. +4. Let product/executives override because the product phase demands momentum. + +## Participant Artifact + +Create an evidence weighting memo: + +- Decision to make. +- Evidence types and confidence level. +- Product phase: proof-of-model or scale-the-model. +- Recommended owner of the decision. +- Recommendation and rejected alternatives. +- What future evidence would change the call. + +## Debrief Prompts + +- Which evidence looked objective but was actually low signal? +- What did product phase change about the right answer? +- Was authority explicit or inferred? +- How would you defend the decision to someone with a different risk model? + +## Follow-Up Research Question + +"Can you walk me through a decision where analytics existed but talking to users +was still more useful?" diff --git a/instructor/product-judgment-research/scenario-cards/feature-value-vs-framing.md b/instructor/product-judgment-research/scenario-cards/feature-value-vs-framing.md new file mode 100644 index 0000000..f7dfa3c --- /dev/null +++ b/instructor/product-judgment-research/scenario-cards/feature-value-vs-framing.md @@ -0,0 +1,71 @@ +# Scenario Card: Feature Value vs Feature Framing + +## Use When + +You want learners to practice separating whether a feature is valuable from +whether the market accepts how it is described. + +## Real Pain Pattern + +A feature works and some customers pay for it, but the audience dislikes the +category or language around it. Community backlash creates pressure to remove +the feature, hide it, rename it, or explain it harder. 
+ +## Scenario Seed + +A product includes an AI-assisted coaching feature grounded in the product's own +data. The audience is skeptical of AI. A public community post creates backlash, +but paid-tier purchase ratios do not clearly collapse. The team changes +positioning and wonders whether to keep investing. + +## Decision to Practice + +Should the team remove the feature, rename/reframe it, stop leading with it, +publish methodology, wait before responding again, or double down because usage +and conversion still look healthy? + +## Evidence Available + +- Public backlash and sentiment. +- Visit volume after the controversy. +- Paid-tier ratio before and after the framing change. +- Qualitative user objections. +- Product belief that the feature solves a real problem. + +## Constraints and Pressure + +- Market perception can be a product constraint even when the feature works. +- Backlash can increase awareness and purchases while damaging trust. +- Explaining methodology too quickly may sound defensive. +- Usage and perception may point in different directions. + +## Options to Weigh + +1. Remove the feature. +2. Keep the feature but rename/reposition it. +3. Keep it behind a higher tier and stop leading with it. +4. Publish methodology and education. +5. Pause public response and watch behavior. + +## Participant Artifact + +Create a feature-framing decision note: + +- Feature value hypothesis. +- Perception risk. +- Behavioral evidence versus sentiment evidence. +- Positioning change. +- What would justify more investment. +- What would justify removal. + +## Debrief Prompts + +- Did backlash reveal product risk, messaging risk, or both? +- Which signal mattered more: conversion ratio, usage, comments, or support? +- What did the team learn about audience worldview? +- How would you avoid overreacting to a loud channel? + +## Follow-Up Research Question + +"What would you do differently next time a useful feature triggers audience +aversion?" diff --git a/instructor/product-judgment-research/scenario-cards/fog-of-war-experiments.md b/instructor/product-judgment-research/scenario-cards/fog-of-war-experiments.md new file mode 100644 index 0000000..96a12bc --- /dev/null +++ b/instructor/product-judgment-research/scenario-cards/fog-of-war-experiments.md @@ -0,0 +1,70 @@ +# Scenario Card: Fog of War Experiments + +## Use When + +You want learners to practice moving forward when the next step is clear but the +whole path is not. + +## Real Pain Pattern + +Teams often wait for certainty they cannot get, or they overbuild a full plan +from weak information. The better move may be to ship the known next step, run +small experiments into the unknown, and stay willing to delete work as the fog +clears. + +## Scenario Seed + +A team knows the immediate product improvement customers need, but follow-on +steps are speculative. Several small experiments could reveal which path is +worth continuing. Some of that work may need to be deleted later. + +## Decision to Practice + +What should be shipped as the known next step, which experiments should probe +the unknowns, and what deletion/revisit boundary keeps exploration from turning +into accidental product surface? + +## Evidence Available + +- Confidence in one immediate next step. +- Lower confidence in follow-on roadmap. +- Several experiment ideas. +- Cost of deleting or migrating exploratory work. +- User or business signals that could reduce uncertainty. + +## Constraints and Pressure + +- Implementation is cheap enough to over-experiment. 
+- Deleted work can still be good judgment if it bought learning. +- Experiments need explicit stop conditions. +- Shipping known value should not be blocked by speculative roadmap anxiety. + +## Options to Weigh + +1. Ship the known next step only. +2. Ship the known step plus several bounded experiments. +3. Delay until the roadmap is clearer. +4. Build a broad flexible platform for all possible futures. + +## Participant Artifact + +Create an experiment boundary note: + +- Known next step. +- Unknowns worth probing. +- Experiments and expected learning. +- Delete/continue criteria. +- Owner and revisit date. +- What must not become permanent without a decision record. + +## Debrief Prompts + +- Which uncertainty mattered now versus later? +- Did the experiment have a learning goal or was it just extra scope? +- What would make deletion emotionally hard? +- How does the team avoid mistaking shipped experiments for strategy? + +## Follow-Up Research Question + +"Can you describe a time you shipped through fog and later deleted or redirected +work as the path became clearer?" diff --git a/instructor/product-judgment-research/scenario-cards/launch-performance-bar.md b/instructor/product-judgment-research/scenario-cards/launch-performance-bar.md new file mode 100644 index 0000000..eb93123 --- /dev/null +++ b/instructor/product-judgment-research/scenario-cards/launch-performance-bar.md @@ -0,0 +1,69 @@ +# Scenario Card: Launch Performance Bar + +## Use When + +You want learners to practice deciding what "fast enough" means when performance +is tied to perceived product quality, not just engineering preference. + +## Real Pain Pattern + +The product still uses MVP-era rendering. A deeper refactor would be cleaner, +but launch is approaching and the first impression needs to feel credible. The +team must decide whether a tactical cache or partial refactor is enough. + +## Scenario Seed + +A CMS product powers its own marketing site. The page load feels slow and janky +during launch prep. The founder worries customers will not believe in the CMS if +the CMS cannot make its own site feel good. + +## Decision to Practice + +Should the team ship a quick caching improvement, delay launch for deeper +architecture cleanup, narrow the launch audience, or accept the current +performance for now? + +## Evidence Available + +- First response time ranges. +- Target customer expectations. +- Qualitative "wow factor" and perceived quality. +- Knowledge that pages do not change constantly. +- Known longer-term simplification opportunities. + +## Constraints and Pressure + +- Launch perception matters. +- Taste may be the strongest current signal. +- Predictability may matter more than theoretical speed. +- A short-term patch should not erase the need for future cleanup. + +## Options to Weigh + +1. Add caching now and keep the launch date. +2. Delay launch for deeper rendering refactor. +3. Launch to a smaller audience with known limitations. +4. Keep current performance and monitor complaints. + +## Participant Artifact + +Create a launch-quality bar: + +- Target user and expectation. +- Current performance and perceived quality problem. +- "Good enough for launch" threshold. +- Tactical fix and why it is acceptable. +- Cleanup debt intentionally carried forward. +- Revisit trigger after launch. + +## Debrief Prompts + +- What made this a product decision rather than pure tech debt? +- Which evidence was measurable versus taste-based? +- What would make the caching patch irresponsible? 
+- What customer expectation did the bar optimize for? + +## Follow-Up Research Question + +"How are you deciding when it is fast enough: target metric, target customer +expectation, test usage, or taste?" diff --git a/instructor/product-judgment-research/scenario-cards/mvp-ga-triage.md b/instructor/product-judgment-research/scenario-cards/mvp-ga-triage.md new file mode 100644 index 0000000..125d952 --- /dev/null +++ b/instructor/product-judgment-research/scenario-cards/mvp-ga-triage.md @@ -0,0 +1,69 @@ +# Scenario Card: MVP/GA Triage + +## Use When + +You want learners to practice edge-case sizing, patch-versus-refactor +sequencing, and risk containment right before a general availability milestone. + +## Real Pain Pattern + +The team finds a bug or confusing UX before GA. The "right way" fix is larger +than the launch window allows. The small fix feels incomplete, but the MVP's job +is to prove the model, not solve every future scaling concern. + +## Scenario Seed + +An analysis workflow has an edge-case bug. The team can patch the specific case, +feature-flag a risky path, add targeted explanatory UI, or do a holistic +refactor after launch. Alpha users have seen the issue, but usage is still too +thin for clean quantitative certainty. + +## Decision to Practice + +Should the team patch now and refactor later, delay GA for the right-way fix, +feature-flag the path, or accept the risk because the edge case is small? + +## Evidence Available + +- Rough estimate of affected edge-case percentage. +- Alpha user exposure. +- Qualitative user conversations. +- Analytics that exist but may be too thin or noisy. +- Product or executive pressure to prove the model. + +## Constraints and Pressure + +- MVP margins are thin. +- Product, executives, and engineering may disagree about the bar. +- A targeted UX fix may produce more user value than a deeper engineering fix. +- The engineering roadmap may absorb debt later, but not if GA fails. + +## Options to Weigh + +1. Patch the specific bug and schedule the broader refactor. +2. Delay GA for a holistic fix. +3. Feature-flag or disable risky behavior with explanatory text. +4. Accept the bug and monitor after launch. + +## Participant Artifact + +Create a GA triage note: + +- What is affected and estimated impact. +- Why now versus later. +- Recommended containment path. +- Authority and disagreement record. +- Tech debt paid now versus intentionally deferred. +- Revisit date or trigger. + +## Debrief Prompts + +- What signal made patch-now/refactor-later acceptable? +- Who should own the call when evidence is mixed? +- Did the team optimize for proof-of-model or scale-the-model? +- How would you disagree-and-commit without hiding risk? + +## Follow-Up Research Question + +"How did you estimate the edge-case impact, and what would have changed your +recommendation?" diff --git a/instructor/product-judgment-research/scenario-cards/prototype-as-spec.md b/instructor/product-judgment-research/scenario-cards/prototype-as-spec.md new file mode 100644 index 0000000..b95df0c --- /dev/null +++ b/instructor/product-judgment-research/scenario-cards/prototype-as-spec.md @@ -0,0 +1,70 @@ +# Scenario Card: Prototype as Spec + +## Use When + +You want learners to practice translating messy customer context into a product +direction before asking engineering to formalize it. + +## Real Pain Pattern + +The customer expresses needs through notes, workflows, and domain language. 
The +danger is either building their requested solution literally or handing +engineering a vague problem statement that creates multiple product/engineering +iteration loops. + +## Scenario Seed + +A customer has a deep, messy operational taxonomy. They need a way to move from +high-level allocation categories down to concrete work items and implementation +details. Product has customer notes and a rough schema dump, but no polished +spec. + +## Decision to Practice + +Should the product engineer write a traditional requirements doc, build a +throwaway prototype, prototype inside production context, or hand the problem to +engineering for discovery? + +## Evidence Available + +- Customer notes with expressed pains and proposed solutions. +- Internal schema or data model context. +- Design feedback on what makes the workflow understandable. +- Engineering feedback on feasibility, stability, and handoff cost. + +## Constraints and Pressure + +- The prototype must reveal product shape, not just technical possibility. +- A greenfield prototype may diverge from production realities. +- Engineering time should focus on efficiency, stability, and integration, not + guessing what product meant. +- The source data and customer language may contain jargon that hides the real + need. + +## Options to Weigh + +1. Write a traditional spec from customer notes. +2. Build a greenfield prototype to explore interaction shape. +3. Prototype against production-like context and use the prototype as the spec. +4. Defer until engineering can join discovery directly. + +## Participant Artifact + +Create a prototype-as-spec brief: + +- Customer need in the customer's words. +- The proposed interaction and what it proves. +- Assumptions the prototype intentionally does not prove. +- Engineering handoff notes. +- Revisit trigger after engineering implementation starts. + +## Debrief Prompts + +- Where did the customer describe a need versus propose a solution? +- What did the prototype make clearer than a document would have? +- What production constraints would a throwaway prototype have missed? +- What should engineering still challenge after receiving the prototype? + +## Follow-Up Research Question + +"What did you learn the hard way that made you start prototyping this way?" diff --git a/instructor/product-judgment-research/scenario-cards/readme.md b/instructor/product-judgment-research/scenario-cards/readme.md new file mode 100644 index 0000000..43b6042 --- /dev/null +++ b/instructor/product-judgment-research/scenario-cards/readme.md @@ -0,0 +1,45 @@ +# Scenario Cards + +Each card is a one-page seed for interviews, exercises, or facilitated +reflection. The cards are intentionally about product judgment loops, not +generic product-management concepts. + +## Strongest Loops + +- `confidence-gap-defending-decisions.md` - direct learner pain around defending + product-shaped calls. +- `mvp-ga-triage.md` - launch pressure, edge-case sizing, patch versus refactor. +- `evidence-weighting-and-decision-rights.md` - thin analytics, qualitative + signal, authority, and overrides. +- `small-sticky-product-decisions.md` - durable consequences of syntax, + permissions, comments, APIs, and cost choices. +- `prototype-as-spec.md` - customer notes to production-context prototype to + engineering handoff. +- `enough-feedback-to-reverse-best-practice.md` - RFC resistance, external + ecosystem risk, and delayed outcomes. 
+- `enterprise-pressure-vs-product-coherence.md` - retention threats, competitor
+  parity, prioritization math, and unavailable "no."
+- `feature-value-vs-framing.md` - useful feature versus hostile market framing.
+- `temporary-pain-vs-final-form.md` - known migration cost versus new risk.
+- `launch-performance-bar.md` - perceived quality and target-customer
+  expectations before launch.
+- `api-design-as-product-judgment.md` - DX taste, API stickiness, and AI-bloated
+  proposals.
+- `ai-era-buy-vs-build.md` - build only where domain expertise plus internal
+  capability creates asymmetric upside.
+- `fog-of-war-experiments.md` - ship the known next step and bound exploratory
+  work.
+
+## Facilitator Use
+
+1. Pick one card that matches the exercise muscle.
+2. Ask learners to fill in the decision-record template before implementation
+   or recommendation.
+3. During debrief, focus on evidence, authority, reversibility, and revisit
+   triggers.
+4. Capture fresh examples with the raw-example template before revising cards.
+
+## Public Use Caution
+
+The buy-versus-build card is generalized from confidential material. Keep it
+anonymous and non-specific unless Kent grants permission to use more detail.
diff --git a/instructor/product-judgment-research/scenario-cards/small-sticky-product-decisions.md b/instructor/product-judgment-research/scenario-cards/small-sticky-product-decisions.md
new file mode 100644
index 0000000..797c1dc
--- /dev/null
+++ b/instructor/product-judgment-research/scenario-cards/small-sticky-product-decisions.md
@@ -0,0 +1,70 @@
+# Scenario Card: Small Sticky Product Decisions
+
+## Use When
+
+You want learners to feel how apparently small implementation choices become
+durable product commitments.
+
+## Real Pain Pattern
+
+The decision sounds small: syntax, storage path, comment behavior, permissions,
+agent API shape, or hosting cost. But changing it later may break user habits
+and cross-platform compatibility, force data migrations, and invalidate pricing
+assumptions or ecosystem expectations.
+
+## Scenario Seed
+
+A Markdown collaboration product must choose attachment syntax, attachment
+storage paths, comment behavior across desktop/mobile, workspace permissions,
+agent APIs, and static asset hosting strategy before launch.
+
+## Decision to Practice
+
+How should the team decide what "right" means before real usage proves the
+choice, and how should they record the decision so it can be revisited without
+pretending it was obvious?
+
+## Evidence Available
+
+- Existing tool conventions from products users already know.
+- Prototype feedback.
+- Code smells or brittle API shapes found during implementation.
+- Security, cost, and migration concerns.
+- Personal product taste from being close to the user.
+
+## Constraints and Pressure
+
+- Launch decisions may be expensive to migrate.
+- Mobile and desktop needs may conflict.
+- Compatibility with adjacent tools may matter more than local elegance.
+- AI can generate many plausible APIs, but cannot own the judgment.
+
+## Options to Weigh
+
+1. Follow the dominant convention from comparable tools.
+2. Choose a custom behavior that better fits this product's model.
+3. Hide the decision behind a migration-friendly abstraction.
+4. Defer the surface until more usage exists.
+
+## Participant Artifact
+
+Create a small-decision record:
+
+- Decision and scope.
+- User expectation being honored or intentionally broken.
+- Options rejected.
+- Reversibility and migration cost.
+- Signals that would force a revisit.
+- Commit/tag or artifact that locks the decision for now. + +## Debrief Prompts + +- Which part of the decision was product judgment, not implementation detail? +- Did the team benchmark tools or just copy them? +- What would make the decision feel wrong after launch? +- How did AI help generate options without replacing judgment? + +## Follow-Up Research Question + +"Have you regretted one of these small sticky decisions before, and what would a +better record have changed?" diff --git a/instructor/product-judgment-research/scenario-cards/temporary-pain-vs-final-form.md b/instructor/product-judgment-research/scenario-cards/temporary-pain-vs-final-form.md new file mode 100644 index 0000000..3674492 --- /dev/null +++ b/instructor/product-judgment-research/scenario-cards/temporary-pain-vs-final-form.md @@ -0,0 +1,70 @@ +# Scenario Card: Temporary Pain vs Final Product Form + +## Use When + +You want learners to practice distinguishing genuinely new risk from a known, +accepted migration cost. + +## Real Pain Pattern + +The team is in an interim product state. A new pain appears. Fixing it would +make the interim state nicer but slow the move to the final product form and +increase the impact radius across other teams. + +## Scenario Seed + +A product migration creates onboarding confusion during an interim phase. The +team already expected some support burden during this phase. New confusion is +reported by internal teams, not external users. Internal conversations suggest +the confusion is workable with communication. + +## Decision to Practice + +Should the team optimize the interim state, communicate/support through the +known pain, change the migration plan, or pause migration until the confusion is +resolved? + +## Evidence Available + +- Internal reports of confusion. +- Prior acceptance of direct support burden. +- No clear external user signal yet. +- Timeline impact and cross-team impact radius. +- Final product form that reduces the interim problem. + +## Constraints and Pressure + +- Not all pain is new information. +- Internal reports matter, but differ from user-facing product failure. +- Optimizing the interim state can delay the better end state. +- Communication and support are legitimate product moves. + +## Options to Weigh + +1. Build interim optimizations. +2. Continue migration and improve internal communications. +3. Pause migration until the confusion is removed. +4. Narrow the migration or change rollout sequencing. + +## Participant Artifact + +Create a migration pain decision note: + +- Current pain and who feels it. +- Whether the pain was known, accepted, or new. +- Impact radius of an interim optimization. +- Communication/support plan. +- Signal that would force a product or engineering change. +- Revisit date or migration milestone. + +## Debrief Prompts + +- What made the pain acceptable for now? +- Was the source internal, external, or both? +- What would turn known cost into new risk? +- Did the team choose momentum because it was easier or because it was right? + +## Follow-Up Research Question + +"What signal would have forced you to pause the migration and optimize before +reaching the final form?"