Discussion about this post

Karl Wirth:

Thank you. This is helpful. I agree with the four things you list under "What makes this actually work?" They are all about giving the AI agent the tools and, especially, the context it needs. I would add a fifth: for a major feature, don't just prompt the agent; iterate with it on a plan/spec for that feature. Taking the time to go back and forth with the AI to hone your plan for the feature makes a huge difference in the quality of the output. It doesn't need to be a book, but laying out the goal, features, use cases, a diagram if needed, a data model if needed, an implementation plan ... we have seen the quality of output leap when proceeding this way. The more, and faster, the human-AI iteration, the better.

Neural Foundry:

This workflow transformation from waterfall to prompt-evaluate-iterate is legit. The AGENTS.md approach for onboarding AI context is clever, kind of like creating institutional knowledge that persists across sessions instead of constantly re-explaining things. I've been messing around with similar setups for prototyping data pipelines, and the worktree parallelization is a game-changer once you realize prototypes are basically free now. That ASCII wireframing output is wild, though: way faster than opening Figma for quick iterations.
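The worktree parallelization mentioned above can be sketched with plain `git worktree` commands. This is a minimal, self-contained demo; the repo location and the branch names (proto-a, proto-b) are placeholders, not anything from the post:

```shell
# Throwaway repo for demonstration; in practice run this in your project.
tmp=$(mktemp -d)
git init -q "$tmp/main"
git -C "$tmp/main" -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "init"

# One worktree per prototype: each gets its own checkout and branch,
# so parallel agent sessions never step on each other's files.
git -C "$tmp/main" worktree add "$tmp/proto-a" -b proto-a
git -C "$tmp/main" worktree add "$tmp/proto-b" -b proto-b
git -C "$tmp/main" worktree list

# Discarding a prototype is cheap: remove the worktree, delete the branch.
git -C "$tmp/main" worktree remove "$tmp/proto-a"
git -C "$tmp/main" branch -q -D proto-a
```

All worktrees share one object store, so each extra prototype costs little beyond its checked-out files, which is what makes spinning up several in parallel effectively free.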
