I've been watching everyone obsess over AI prompts lately.
"You have to be specific with your instructions!" "The more detailed your prompt, the better the output!" "Don't assume the AI knows what you want!"
Meanwhile, these same people are giving their human teams instructions like "handle this strategically" and "make sure you communicate well."
Apparently, we've figured out how to talk to artificial intelligence but we're still terrible at talking to actual intelligence.
The Irony That's Killing Your Performance
You spend 20 minutes crafting the perfect ChatGPT prompt because you know vague instructions get mediocre results.
Then you turn to your team member and say "I need this client-ready ASAP" and wonder why they didn't read your mind about what "client-ready" and "ASAP" mean to you.
You've discovered that clarity works with AI. You just haven't applied that discovery to humans yet.
The Clarity Crisis in Real Time
Scene 1: The Project Meeting
You: "I need this done ASAP."
Them (thinking): "How ASAP? By end of day? End of week? Before I go home for Christmas?"
Them (out loud): "Sure, no problem."
Three days later you're wondering why it's not done.
Scene 2: The Quality Standards
You: "Make sure this is client-ready."
Them (thinking): "Client-ready like rough draft client-ready? Or client-ready like we're presenting to the board? What does client-ready even mean here?"
Them (out loud): "Absolutely."
You review their work and think they clearly don't understand quality.
Scene 3: The Performance Expectation
You: "I need you to be more proactive."
Them (thinking): "More proactive about what? Should I email you more? Less? Make decisions without asking? Ask more questions?"
Them (out loud): "I'll work on that."
Nothing changes because nobody knows what "more proactive" actually means.
What You've Learned About AI (But Forgot About Humans)
With AI: You give specific, detailed prompts because you want quality output
With humans: You give vague instructions and expect them to guess what you want
With AI: You iterate and refine your prompts when you don't get the result you want
With humans: You assume they "don't get it" when your unclear instructions produce unclear results
With AI: You know garbage input = garbage output
With humans: You give garbage instructions and blame them for garbage performance
What Clarity Actually Sounds Like
Instead of: "I need this done well"
Try: "I need this done by 3pm Thursday, with the financial analysis double-checked and formatted according to the client template"
Instead of: "Communicate better with the team"
Try: "Send a project update email to the team every Friday by 4pm with current status, next week's priorities, and any blockers"
Instead of: "Be more client-focused"
Try: "Respond to client emails within 4 hours during business days, and if you can't solve their issue immediately, send an update within 24 hours"
You wouldn't prompt AI this vaguely. Don't prompt your humans this vaguely either.
The Human Prompt Engineering You're Missing
You've become a prompt engineer for AI. Time to become a prompt engineer for your team.
The same principles apply:
- Be specific about the desired outcome
- Provide context and constraints
- Define what success looks like
- Give examples when helpful
- Test and refine your "prompts" (instructions)
Your Clarity Challenge
This week, treat your team instructions like AI prompts. Before you ask someone to do something, ask yourself: "Is this specific enough that it would get me good results from ChatGPT?"
If the answer is no, refine your human prompt.
You'll probably discover that the "performance issues" you thought you had were actually clarity issues in disguise.
Most people aren't failing because they don't care. They're failing because they don't know what success looks like.
The Bottom Line
You've figured out how to get great results from artificial intelligence through clear, specific instructions.
Your human intelligence deserves the same courtesy.
Stop making your team guess. Start making success achievable.