agents

help agents write code for humans

A huge thank you goes out to CodeAesthetic on YouTube, whose videos inspired many of these design and architecture recommendations. Support him via:

vibe coding speedrun checklist

DOs:

  • DO approach vibe coding like you are learning a new language - it is a buildable skill that takes practice
  • DO write 500-1000+ line prompts
  • DO ask agents to help you improve your prompts before submitting the final one
  • DO choose popular tools like React and Next.js over newer or more niche frameworks that do not yet have a deep well of documentation and troubleshooting knowledge
  • DO define an AGENTS.md which specifies your high-level system design goals
  • DO remind your agent to explicitly refer to AGENTS.md in every prompt before continuing
  • DO end your sessions and start new ones frequently
  • DO break large tasks into small chunks
  • DO be as specific in your expectations as possible
  • DO include both a "macro" goal (the broader vision you are working toward) and a "micro" goal (exactly what the agent should accomplish at this stage)
  • DO A/B test the "micro" goal with various phrasing
  • DO commit to version control in between every prompt in which you have made substantial progress
  • DO have a consistent pre-prompt that describes the progress you have made so far, any MCP tools that are available for use, and any assumptions you have
  • DO keep a record of your prompts in ./prompt_history committed to version control so that agents can refer back to them
  • DO remind your agents that ./prompt_history exists and that they can refer to it when necessary
  • DO have agents record incremental progress when you are certain that progress is successful
  • DO use tools like basic memory that allow agents to maintain a running knowledge base
  • DO create templates for common prompt patterns you use repeatedly
  • DO periodically test different models and compare their strengths/weaknesses
  • DO remind agents to be honest when things are not working
  • DO remind agents that pre-celebration, self-promotion, and self-aggrandizing are not helpful
  • DO remind agents that we are not implementing a "mock", "stub", "partial", "sample", or "example" solution to revisit later
  • DO periodically take a step back if you find yourself in an error/debugging loop - have you provided enough context?
  • DO modify your original prompt and start over from a clean slate rather than sending 10 additional prompts in the same session
  • DO encourage agents to brainstorm 3 or more potential root causes of an issue and test each one independently
  • DO encourage agents to be extremely verbose in logging
  • DO ask agents to use a two-step process when replicating existing behavior: first explain the existing implementation, then use those docs to implement it
  • DO ask agents to be critical about small differences between two approaches where one is working and one is broken
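
The ./prompt_history and version-control habits above can be sketched as a small shell workflow. The date-and-slug filename and the prompt contents are illustrative assumptions, not a prescribed format:

```shell
# Sketch of the ./prompt_history convention; naming scheme is an assumption.
mkdir -p prompt_history

# Record the prompt you are about to submit so agents can grep for it later.
cat > prompt_history/2025-01-15-login-form.md <<'EOF'
Refer to AGENTS.md before continuing.
Macro goal: a Next.js app with email/password auth.
Micro goal: build only the login form component; do not touch the API routes.
EOF

# After a prompt that made substantial progress, commit both the code
# and the prompt record together, e.g.:
#   git add -A && git commit -m "login form + prompt record"
```

Keeping the prompt file and the resulting code in the same commit makes it easy to trace which prompt produced which change.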

DO NOTs:

  • DO NOT approach a large task with a prompt like "migrate this to TypeScript" or "refactor the repo"
  • DO NOT use vague qualifiers like "make it better" or "optimize this" without specific criteria
  • DO NOT expect that a larger context window will fix all your problems
  • DO NOT allow agents to pollute your workspace with noise or any documentation that may be partially incorrect/misleading
  • DO NOT assume agents will remember that context, notes, tools, or AGENTS.md from previous conversations exist without explicitly restating it
  • DO NOT obsess over token cost unless you are an open sourcer or bootstrapper
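
Several points above lean on an AGENTS.md file; a minimal sketch of what one might contain, assuming the stack and conventions named in this checklist (every heading and detail here is illustrative, not a required schema):

```markdown
# AGENTS.md

## High-level system design goals
- Next.js app; components stay presentational, business logic lives in /lib (illustrative stack)
- No "mock", "stub", "partial", "sample", or "example" solutions left to revisit later

## Conventions
- Past prompts are recorded in ./prompt_history; consult them when context is missing
- Commit after every substantial, verified step
- Log verbosely while debugging; be honest when something is not working
```

Restating in each prompt that this file exists, rather than assuming the agent remembers it, is what makes it useful across sessions.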