Author here. I've shared PrivyDrop on HN before; this post is specifically about the reusable "AI collaboration playbook" (templates + workflow) I extracted from the project after iterating on it in real dev work.
Quick entry points:
- `AGENTS.md` (guardrails + Done)
- `docs/ai-playbook/index.md` (1-page navigation)
- `docs/ai-playbook/code-map.md` (where to change)
- `docs/ai-playbook/flows.md` (how the system runs)
- `docs/ai-playbook/collab-rules.md` (plan-first template)
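
To make the first bullet concrete: a "Done" section in an `AGENTS.md` might look something like the sketch below. This is a hypothetical illustration of the format, not the actual file contents; the specific commands and checklist items are assumptions for the sake of the example.

```markdown
## Done (acceptance criteria)

A change is Done only when ALL of the following hold:

- [ ] Lint and type checks pass (e.g. `npm run lint && npx tsc --noEmit`)
- [ ] Existing tests pass; any new behavior has at least one test
- [ ] `docs/ai-playbook/code-map.md` is updated if files moved or modules were added
- [ ] Hard red-lines respected: no new dependencies, no secrets in code,
      no telemetry/analytics added (violations = stop and ask, never "fix later")
```

Keeping the red-lines inline in the same checklist the agent evaluates before declaring Done, rather than in a separate policy doc, is one way to make them harder to skip.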
Two questions I’d love input on:
1. What’s your most effective “Done” / acceptance criteria format for coding agents?
2. Where do you put hard red-lines (security/privacy/architecture) so they actually get followed?