Automating values, minimising the delta & hiring in the era of AI agents: it was never about .md files

A few weeks ago, Garry Tan (the president of Y Combinator, if you don’t know him) published something called gstack on Product Hunt. It was a collection of markdown files. Plain text files with instructions for his AI agents. That’s it. That’s the whole thing.
The internet went two ways on it. One camp said it was god mode. The other laughed. “This is on Product Hunt? It’s just a bunch of .md files.”
I’ve been sitting with that reaction for a while now, because I think the people laughing are looking at the artifact and completely missing what it represents.
Garry isn’t the only one doing this. Matt Pocock, Paul Hammond and a bunch of other people building seriously in tech have been quietly doing the same thing. Writing down exactly how they want their AI agents to work, what good output looks like, where not to cut corners, what matters. Just writing it down. In markdown.
So what’s actually going on here?
The Delta
Here’s how I think about it. When you work with an AI agent, there is always a gap between what you had in your head and what the agent actually delivers. Maybe it’s small. Maybe it’s huge. But it’s always there. And the whole job of those .md files, whether people realise it or not, is to close that gap.
I’m calling that gap the delta. And minimising the delta is, I think, the most important skill in the next few years of working with AI.
You are the Maestro 🎼
Think of it like being a music conductor. The conductor doesn’t play an instrument. They hold a vision of how the final piece should sound and spend the whole performance closing the gap between that vision and what’s actually coming off the stage. Over time, a good conductor builds chemistry with the orchestra. The musicians start to understand the assignment even before the baton moves.
AI agents don’t get to build that chemistry with you. Every new conversation, they arrive completely fresh. Technically capable. But with no sense of your taste, your standards, or your definition of done. So you have to write it down. Every time. In a file they can actually read.
But here’s the thing that doesn’t get talked about enough.
When you conduct an orchestra, your musicians already know how to play violin. The challenge is interpretation and coordination. With AI agents, you’re also doing something a conductor never has to do: you’re explaining what a violin is. What strings do. Why the second movement should feel heavy. Your .md files aren’t just coordinating a capable team. They’re defining first principles for a collaborator who has no defaults except the ones you give it.
Automating your Value systems
Here’s where it gets genuinely interesting to me.
When you hire a human collaborator, they show up pre-loaded. Their own opinion on what “good enough” means. Their own instinct about when to ship versus when to keep polishing. Their own idea of what urgency looks like. You can influence those values over time, but you can’t install them. You inherit whoever walks in.
AI agents are different. There’s no ego. No pre-existing bias about how things should be done. No defending a decision they made last month. You can write down exactly what you value and they will work toward it without pushback.
That’s what these .md files really are. They’re not prompt engineering. They’re not clever tricks. They are a direct encoding of your value system into a collaborator. You are automating your values. For the first time, you can actually choose your collaborator’s values before they show up to work.
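To make this concrete, here is a sketch of what such a file might contain. Everything in it is invented for illustration; it is not taken from gstack or anyone's actual setup, and the specific rules are just one person's hypothetical values.

```markdown
# Working standards (read before doing anything)

## Definition of done
- A change ships with tests; an untested change is an unfinished change.
- Prefer the boring, obvious solution over the clever one.

## When in doubt
- Ask one clarifying question rather than guessing at intent.
- Small, reviewable diffs beat one sweeping rewrite.

## Non-negotiables
- Never silently swallow errors.
- Never invent an API; check that it exists first.
```

The specifics matter less than the mechanism: values that normally live in someone's head are written down where the agent can read them at the start of every conversation.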
That has never been possible before. And I think we’re massively underselling how profound that is.
On hiring
So who do you hire now? Not necessarily the best coder. Not the person who has memorised the most syntax or knows every API by heart. Those things are still useful, but they stop being the ceiling on what you can produce.
The ceiling now is judgment. And whether you can make that judgment legible enough for an agent to act on it. Here is what I think the profile looks like:
Someone who thinks in systems. Not someone who solves one problem in isolation, but someone who sees how the parts connect. Who can look at a workflow and immediately spot where the delta is widest. You are not an implementer anymore. You are an architect of how work gets done.
Someone who generates ideas and can actually explain them. This is underrated. Having a good idea is one thing. Being able to hand it to an AI agent with enough clarity that the agent can run with it is a completely different skill. The people who can do both are rare and increasingly valuable. Your primary output is no longer code or a deliverable. It is a well-explained intent.
Someone with high integrity and a knack for shipping fast. These two together are the real unlock. Integrity means your standards are real, not performative. Shipping fast means you don’t let those standards become an excuse for never finishing. AI agents will expose both. They amplify high standards. They also amplify vagueness. The person who is genuinely committed to quality and knows when to stop is the one who gets the most out of them.
Someone who communicates with precision. Your instructions are your output now. If you can’t write clearly, no agent will save you. This is different from traditional communication skills. It is the ability to write for a collaborator with zero shared context, no ability to read between the lines, and no patience for ambiguity.
Someone who stays open to feedback and keeps updating. In an AI-first workflow you are constantly iterating on your own instructions. Every time the delta is wider than expected, the question is: what did I fail to communicate? The people who ask that question honestly will keep improving. The ones who blame the agent will plateau.
What gets devalued? Memorising syntax. Knowing every API endpoint. Deep expertise in narrow implementation details that an AI retrieves faster than you can type. Still useful. No longer the ceiling.
The Long Game
Will this change as AI improves? Probably. Memory and personalisation are getting better fast. But I don’t think the core of this goes away. Even a collaborator who remembers everything still needs to know what you value. The best teams don’t run on shared memory alone. They run on shared standards, written down, kept honest. The .md file is that document.
The people who figure this out early are building a compounding advantage. Not because they got access to better tools. Because they did the harder work of getting clear on what they actually believe, and then found a way to make it show up in everything they build.
That’s the bigger game Garry Tan is playing. And honestly? It deserves to be on Product Hunt.