A collection of ten structured AI prompt templates that form a five-stage workflow for turning raw papers, videos, and lectures into processed, actionable knowledge.
Built for knowledge workers who consume large volumes of information — researchers, analysts, students, or anyone who needs to process more than they can read.
Each prompt handles one stage of the workflow. The stages are designed to be used in sequence, though each prompt works independently.
| Stage | Purpose | Prompts |
|---|---|---|
| 01 Triage | Decide if a paper is worth reading in full | research_summary.md |
| 02 Decode | Build mental scaffolding and extract signal | scaffolder.md, yt_transcript_summary.md, lecture_study.md |
| 03 Audit | Challenge and validate what you just processed | auditor.md, reproducibility_audit.md |
| 04 Test | Verify you actually understood the material | questions.md |
| 05 Benchmark | Map your skills against real job requirements | reverse_job_resume.md |
A meta-layer prompt (prompt_optimizer.md) sits above the pipeline and is used to refine any rough prompt ideas before use.
knowledge-pipeline-prompts/
├── README.md
├── LICENSE
├── analysis/
│ ├── auditor.md # Logic auditing and stress-testing arguments
│ ├── lecture_study.md # Processing lecture transcripts and Coursera videos
│ ├── research_summary.md # Rapid triage and deep extraction from research papers
│ ├── scaffolder.md # Decoding author assumptions and mental models
│ └── yt_transcript_summary.md # Decoding YouTube video transcripts
├── generation/
│ ├── prompt_optimizer.md # Rewriting rough prompt ideas into clean instructions
│ ├── reverse_job_resume.md # Reverse-engineering job descriptions into resume profiles
│ └── term-mapper.md # Creating concept maps from user-given terms or concepts
└── evaluation/
├── questions.md # Generating high-level questions from any source material
└── reproducibility_audit.md # Auditing research papers for reproducibility standards
Each prompt is a standalone instruction set. Copy the contents of the relevant .md file and paste it into your AI assistant of choice (Claude, GPT-4, Gemini, etc.) as the system prompt or opening instruction, then follow up with your source material.
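If you drive an assistant through an API rather than a chat UI, the same pattern applies: the prompt file becomes the system message and your source material becomes the first user message. A minimal sketch, assuming the common chat-message dictionary shape — the file paths and helper name are illustrative, and you would hand the resulting list to your provider's SDK:

```python
from pathlib import Path


def build_messages(prompt_path: str, source_material: str) -> list[dict]:
    """Pair a pipeline prompt (system role) with source material (user role)."""
    system_prompt = Path(prompt_path).read_text(encoding="utf-8")
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": source_material},
    ]


# Example (hypothetical file names): triage a paper with the research summary prompt
# messages = build_messages("analysis/research_summary.md", Path("paper.txt").read_text())
```

The same helper works for every stage of the pipeline — only the prompt file and the source material change.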
For a dense or jargon-heavy text:
- Run `analysis/scaffolder.md` — build the scaffolding and surface hidden assumptions.
- Run `analysis/auditor.md` — get the counter-arguments and blind spots.
- Run `evaluation/questions.md` — test your understanding.
For a research paper:
- Run `analysis/research_summary.md` — quick triage, extract core findings.
- Run `evaluation/reproducibility_audit.md` — check if the methods hold up.
- Run `evaluation/questions.md` — test your understanding.
For a YouTube video:
- Run `analysis/yt_transcript_summary.md` — strip filler, extract argument and evidence.
- Run `analysis/auditor.md` — get the counter-arguments and blind spots.
For a lecture or Coursera video:
- Run `analysis/lecture_study.md` before watching for context, or after to check for gaps.
- Run `evaluation/questions.md` for active recall.
- Run `generation/term-mapper.md` for structured, domain-specific terminology maps.
For career benchmarking:
- Find a job description you are interested in.
- Run `generation/reverse_job_resume.md` — see where you land as a High, Medium, or Borderline match.
- Use the gap analysis to decide what to learn next.
All prompts in this collection follow the same conventions:
- Role declaration — each prompt defines a specific expert persona.
- Input type — each prompt states exactly what kind of content it expects.
- Interaction rule — each prompt handles missing input gracefully.
- Zero-fluff output — no "the author says" preamble in any output.
- Structured output — all prompts produce consistent, scannable formats.
These prompts emerged from a practical problem: three-plus years working in research generates an unmanageable backlog of papers, videos, and lectures. The prompts were built iteratively to solve specific bottlenecks in that workflow, then mapped into a coherent five-stage pipeline.
The reproducibility_audit.md prompt is bioinformatics-specific by default. All other prompts are domain-agnostic and work equally well in any research, academic, or knowledge-intensive field.
For the full story behind this system, see the accompanying blog post: The 10-Prompt System That Turns Raw Papers, Videos, and Lectures Into Structured Knowledge.
Improvements, adaptations, and new prompt suggestions are welcome. If you extend a prompt for a specific domain (e.g., law, medicine, economics), feel free to open a pull request.
When contributing:
- Follow the existing prompt structure (Role, Input Type, Interaction Rule, Output Format).
- Keep output format sections explicit and scannable.
- Do not add emojis or decorative formatting to prompt files.
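As a hedged illustration, a new prompt file following these conventions might be skeletoned like this — the section names come from the guidelines above, while all body text is placeholder to be replaced with your domain-specific instructions:

```markdown
## Role
You are a [specific expert persona, e.g., a legal-brief analyst].

## Input Type
This prompt expects [a research paper / transcript / job description].

## Interaction Rule
If no input is provided, ask the user for it instead of generating output.

## Output Format
- [Section 1]: [what it contains]
- [Section 2]: [what it contains]
```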
This work is licensed under the Creative Commons Attribution 4.0 International License. You are free to use, adapt, and redistribute these prompts for any purpose, provided you give appropriate credit.
See LICENSE for full terms.