7 changes: 7 additions & 0 deletions instructor/notes.md
@@ -10,6 +10,13 @@

Build and run a workshop that trains engineering judgment under ambiguity: clarifying requirements, surfacing assumptions, defining success criteria, anticipating risks, and evaluating outcomes against intent.

## Product Judgment Research Workspace

Research artifacts for EpicProduct.engineer/product-judgment content live in
`instructor/product-judgment-research/readme.md`. Use those scenario cards,
templates, outreach prompts, and follow-up lists when grounding future exercises
in real examples.

## Success Criteria

- Learners explicitly document assumptions before implementation.
101 changes: 101 additions & 0 deletions instructor/product-judgment-research/decision-record-template.md
@@ -0,0 +1,101 @@
# Product Judgment Decision Record Template

Use this as a lightweight artifact for interviews, exercises, and post-decision
reflection. It should make judgment inspectable without pretending the team had
perfect information.

## Decision Title

Short noun phrase:

## Situation

- What is happening?
- Why does this matter now?
- What user, business, system, or team consequence makes this decision real?

## Beneficiary and Pain

- Who benefits if we get this right?
- What pain are they experiencing?
- What language have they used to describe it?
- What proposed solution should we not blindly accept?

## Decision to Make

- What specific call are we making?
- What is intentionally out of scope?
- Who owns the recommendation?
- Who can approve, block, or override?

## Options

| Option | Why consider it? | What does it risk? |
| -------- | ---------------- | ------------------ |
| Option A | | |
| Option B | | |
| Option C | | |

## Evidence

| Evidence | Type | Confidence | Notes |
| -------- | -------------------------------------------------------------- | ------------------- | ----- |
| | Quantitative / qualitative / stakeholder / engineering / taste | Low / medium / high | |

## Constraints

- Time:
- Team/capacity:
- User experience:
- Technical/system:
- Business:
- Cost:
- Compliance/security:
- Authority/governance:

## Reversibility

- How hard is this to change later?
- What migration, compatibility, or trust cost would reversal create?
- Can we contain the decision behind a flag, beta, abstraction, manual process,
or communication plan?

## Recommendation

- Recommended option:
- Why this option fits the current product phase:
- Main tradeoff we are accepting:
- Main risk we are mitigating:

## Rejected Options

- Option rejected:
- Why:
- What evidence could make it viable later:

## Decision Rights

- Recommendation owner:
- Approver:
- Consulted:
- Informed:
- Escalation path:
- Disagreement record:

## Revisit Plan

- Revisit trigger:
- Revisit date or milestone:
- Signal to watch:
- What would change the decision:
- Who is responsible for checking:

## Outcome Reflection

Complete after the decision has had contact with reality.

- What happened?
- Which assumptions were validated?
- Which assumptions were wrong?
- What did we learn about users, business, system, or team?
- What should change in the next decision record?
43 changes: 43 additions & 0 deletions instructor/product-judgment-research/follow-up-contact-list.md
@@ -0,0 +1,43 @@
# First-Pass Follow-Up Contact List

Source: Kody stash notes from Kent's product-decision X prompt and follow-ups.
Prioritize examples with real pain around making, explaining, defending, or
revisiting product judgment calls.

## Priority Follow-Ups

| Priority | Contact/example | Why this matters | Ask next | Likely use |
| -------- | --------------- | -------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------- |
| 1 | Manoj Thokala | Direct pain signal: explicitly lacks confidence making and defending product decisions. | "Can you describe the last product-shaped decision you struggled to defend? What was at stake, who pushed back, and what would have helped?" | Target-learner interview and confidence-gap scenario. |
| 2 | Stevie P | Strong MVP/GA triage loop: patch vs right-way fix, feature flags, evidence weighting, alpha exposure, disagree-and-commit. | "How did you estimate impact, who owned the call, and what evidence would have changed patch-now/refactor-later?" | MVP triage card, evidence weighting card, decision-rights practice. |
| 3 | Ronan Berder | Expert practice model: small sticky decisions, written process, AI as debate partner, deliberate revisit loops. | "Can we walk through one decision record for attachments/comments/API shape, including options rejected and revisit trigger?" | Small sticky decisions card and decision-record exemplar. |
| 4 | Max Andrews | Prototype-as-spec loop with customer notes, production-context prototyping, design feedback, and engineering handoff. | "What painful earlier experience taught you to prototype this way, and where can this approach fail?" | Prototype-as-spec card and product/engineering handoff exercise. |
| 5 | Headmaster Duck | Organizational uncertainty loop: reversing a best practice, RFC resistance, delayed outcomes, enough feedback threshold. | "What objections uncovered new risks versus repeated known tradeoffs, and who had authority to proceed?" | RFC/feedback sufficiency card and delayed-outcome decision record. |
| 6 | Kirill Goldin | Enterprise pressure loop: loud customers, competitor parity, prioritization math, and an unavailable "no." | "How did the team distinguish broader segment need from one-account pressure, and who could protect product coherence?" | Enterprise pressure card and decision-rights/governance practice. |
| 7 | Sean Manzano | Feature value versus feature framing: AI-aversion, backlash, paid-tier behavior, timing of public response. | "What would you do differently next time, and what evidence is enough to keep investing despite sentiment risk?" | Feature framing card and audience-worldview research. |
| 8 | Marc Beinder | Launch performance bar: taste, target customer expectations, caching as quick win, longer-term cleanup. | "What exact threshold makes the product feel launch-ready, and what signal would force deeper refactor before launch?" | Launch-quality card and perceived-quality exercise. |

## Secondary Follow-Ups

| Contact/example | Why | Ask next |
| --------------- | ------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------- |
| Nik (@neke989) | Temporary pain versus final product form during migration. | "What signal would turn known interim pain into new risk that requires optimization?" |
| Alem Tuzlak | API/DX taste as product judgment, simplifying AI-generated proposals before feedback. | "Which primitive did you drop, and what made it feel bloated or hard to follow?" |
| Cosmin Pruteanu | Churn/business milestone pressure justifying a long fix. | "How did you measure churn impact, what alternatives were considered, and why was the delay acceptable?" |
| Ryan Allred | Fog-of-war loop: ship known next step, run experiments, delete as uncertainty clears. | "Can you share a concrete example where you deleted or redirected work after the fog cleared?" |

## Confidential Permission Queue

| Source | Status | Safe use now | Ask before using |
| --------------------------------------- | ------------------------- | --------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------- |
| Confidential buy-versus-build DM source | Private. Generalize only. | Use the anonymized pattern: AI changes build thresholds only where domain expertise plus internal capability creates asymmetric upside. | Any name, employer, industry, direct quote, or specific platform example. |

## Outreach Sequencing

1. Start with Manoj to validate direct learner pain.
2. Pair one expert-practice interview (Ronan or Max) with one messy authority
interview (Stevie P, Headmaster Duck, or Kirill).
3. Use Sean, Marc, Nik, and Alem to diversify beyond classic feature
prioritization into framing, launch quality, migration, and DX judgment.
4. Keep the confidential buy-versus-build source out of public examples unless
Kent confirms permission.
142 changes: 142 additions & 0 deletions instructor/product-judgment-research/outreach-prompts.md
@@ -0,0 +1,142 @@
# Outreach Prompts

Use these to elicit real product-judgment stories, not abstract advice. Keep the
ask focused on one concrete decision the person actually made, defended, or
revisited.

## Public Reply Prompt

> This is exactly the kind of product-engineering judgment I'm trying to
> understand better. Would you be open to sharing one more detail: what evidence
> or constraint most shaped the decision, and what would have changed your mind?

## Public Reply for Direct Pain

> This is helpful because it gets at the skill I'm trying to teach: not just
> "know product," but make and defend a judgment call with incomplete evidence.
> What's a recent decision you struggled to defend? What was at stake?

## Public Reply for Expert Practice

> I love how concrete this is. If you are open to it, I'd like to understand the
> decision loop: options considered, evidence used, what you rejected, and when
> you knew it was time to revisit. Which part was hardest to explain to others?

## DM Prompt for a 20-30 Minute Interview

> Hey [name], thanks for sharing your product-decision example. I'm doing
> research for EpicProduct.engineer around how developers build product judgment
> as implementation gets cheaper with AI.
>
> The thing I'm trying to understand is not generic product-management theory.
> It's the real moment where an engineer has to make, explain, or revisit a
> product-shaped call with incomplete evidence, mixed authority, and real
> consequences.
>
> Would you be open to a short call where we reconstruct one decision? I'd ask
> about:
>
> - what was at stake
> - options you considered
> - evidence you trusted or distrusted
> - who could approve/block/override
> - what changed your mind, if anything
> - what artifact or language would have helped
>
> Happy to keep details private or anonymized.

## DM Prompt for Expert Models

> Hey [name], your example feels like expert product-engineering practice rather
> than beginner confusion, which makes it especially useful as a model.
>
> Could we walk through one real decision in detail? I'm particularly interested
> in how you write down the process, use AI without outsourcing judgment, decide
> what options to reject, and choose when to revisit.
>
> If we use anything publicly, I'll confirm attribution and quotes first.

## DM Prompt for Sensitive or Business-Specific Examples

> Hey [name], your example sounds useful but potentially sensitive. I do not
> want to expose company, customer, or strategic details without permission.
>
> If you are open to it, could we talk through the generalized judgment pattern?
> I can keep it anonymized and focus on the decision shape: evidence, options,
> constraints, authority, and revisit trigger.
>
> Before anything becomes public-facing, I'll confirm exactly what can be quoted
> or attributed.

## Async Interview Prompt

> Thanks for being willing to help. Please answer with one real product-shaped
> decision, not general advice. Short bullets are fine.
>
> 1. What was the decision?
> 2. What was at stake for users, business, system quality, or the team?
> 3. What options did you consider?
> 4. What evidence did you have?
> 5. What evidence was missing or untrustworthy?
> 6. Who owned the call? Who could block or override it?
> 7. What tradeoff did you accept?
> 8. What would have changed your mind?
> 9. Did you revisit it later? What triggered the revisit?
> 10. What would have helped you make or defend the decision earlier?
>
> Also: can this be quoted publicly, attributed to you, anonymized, or kept
> private?

## Follow-Up Prompts by Judgment Loop

### Prototype as Spec

> What did the prototype prove that a written spec would not have proven? What
> production constraint would a throwaway prototype have missed?

### Enough Feedback to Proceed

> When did objections stop uncovering new risks and start repeating known
> tradeoffs?

### Small Sticky Decisions

> How did you decide what "right" meant: user expectations, compatibility,
> migration cost, implementation complexity, business cost, or taste?

### MVP/GA Triage

> What made patch-now/refactor-later acceptable, and what signal would have
> forced the right-way fix before launch?

### Evidence Weighting

> Which evidence looked objective but was actually too thin or noisy to decide
> from?

### Decision Rights

> Was the decision owner explicit? If not, how did authority actually work?

### Enterprise Pressure

> How did you distinguish a broader segment need from one loud customer's
> request?

### Feature Framing

> Did backlash reveal feature risk, messaging risk, or audience-worldview risk?

### Buy Versus Build

> Where has AI changed the build-versus-buy threshold, and where has it not?

### Temporary Pain Versus Final Form

> What signal would have turned known interim pain into a reason to pause and
> optimize?

### Fog of War

> What did you ship because it was known, and what did you intentionally treat
> as an experiment that might be deleted?