How to Write Better AI Prompts
The 4-part prompt formula, chain-of-thought, temperature, and weak-vs-strong prompt examples.
The difference between a useless AI output and a genuinely useful one is almost never the model — it’s the prompt.
Prompt engineering sounds intimidating, but for small-business owners and solo founders it boils down to a few repeatable habits. If you’re pasting one-line questions into ChatGPT or Claude and getting mushy, generic answers, you’re leaving most of the model’s capability on the table. This guide walks through the formula that consistently produces publishable, usable output — no PhD required.
The 4-part formula: role + task + context + format
Every strong prompt names a role (who the AI should act as), a task (what it should do), context (the specifics of your situation), and a format (how the answer should be shaped). Miss any one of the four and quality drops. Think of it as briefing a freelancer: you wouldn’t just say “write me something.”
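The four parts are easy to wire into a small template helper so you never forget one. A minimal sketch — the function name and layout here are my own illustration, not a standard API:

```python
def build_prompt(role: str, task: str, context: str, fmt: str) -> str:
    """Assemble a prompt from the four parts: role, task, context, format."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Format: {fmt}"
    )

# Example: the cold-email prompt from the list below, built from its four parts.
prompt = build_prompt(
    role="a B2B SaaS sales rep",
    task="write a 90-word cold email to a head of marketing",
    context="our tool cuts reporting time 60%; target is a 50-person agency",
    fmt="subject line + 3 short paragraphs + single CTA",
)
```

If any argument is empty, you’ll notice immediately — which is exactly the point of the formula.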
Weak vs strong prompts, side by side
- Weak: “Write a cold email.” Strong: “You’re a B2B SaaS sales rep. Write a 90-word cold email to a head of marketing at a 50-person agency. Hook: our tool cuts reporting time 60%. Format: subject line + 3 short paragraphs + single CTA.”
- Weak: “Summarize this.” Strong: “Summarize this customer call transcript for a busy CEO. Return: 3 bullets of wins, 3 bullets of concerns, one recommended next step.”
- Weak: “Help me name my product.” Strong: “You’re a brand strategist. Suggest 10 names for a dog-walking app targeting urban millennials. Tone: warm and playful. Avoid anything with ‘paw’ or ‘pup.’ Include the .com availability guess for each.”
System prompt vs user prompt
The system prompt sets the persona and the unchanging rules — who the AI is, what it never does, the tone it holds across a whole session. The user prompt is the specific request you send each turn. If you’re building a custom GPT or a Claude Project, front-load the durable stuff (style guide, brand voice, banned words) into the system prompt so you don’t repeat yourself in every message.
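In chat-style APIs this split maps directly onto message roles. A minimal sketch, assuming the common `messages` list convention (the wording of both prompts is illustrative):

```python
# Durable rules live in the system message; the per-turn request is the user message.
system_prompt = (
    "You are our brand copywriter. Tone: warm, plain English. "
    "Never use the words 'synergy' or 'leverage'."
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Write a 50-word product update for our newsletter."},
]
```

The system message stays fixed across the whole session; only the user messages change turn to turn.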
Chain-of-thought and few-shot examples
Chain-of-thought simply means asking the model to “think step by step” or “work through this out loud before giving the final answer.” It noticeably improves accuracy on anything involving math, logic, or multi-step reasoning. Few-shot means showing the model 2–3 examples of input → desired output before the real task. For anything repetitive like tagging, classifying, or formatting, few-shot usually beats paragraphs of instructions.
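Few-shot is just alternating example inputs and desired outputs before the real request. A minimal sketch of a classification task in the same chat-message convention (the ticket texts and labels are made up for illustration):

```python
# Two labeled examples (the "shots"), then the real input as the final user turn.
few_shot = [
    {"role": "system", "content": "Classify each support ticket as 'billing', 'bug', or 'feature request'. Reply with the label only."},
    {"role": "user", "content": "I was charged twice this month."},
    {"role": "assistant", "content": "billing"},
    {"role": "user", "content": "The export button crashes the app."},
    {"role": "assistant", "content": "bug"},
    {"role": "user", "content": "Could you add dark mode?"},  # the real task
]
```

The examples do double duty: they teach the pattern and they pin the output format, so you rarely need to describe either in words.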
Temperature: 0 vs 0.7
Temperature controls randomness. Set it to 0 (or close to it) when you want deterministic, factual, repeatable output — data extraction, code, summaries, anything where wrong is worse than boring. Crank it up to 0.7–1.0 when you want creativity — brainstorms, taglines, fiction, variant generation. Most chat UIs hide temperature, but the API and most playgrounds expose it.
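In API terms, temperature is just one field on the request. A sketch of the two ends of the dial, assuming a generic chat-completion payload shape (model name and prompt text are placeholders):

```python
# Deterministic setting: data extraction, code, summaries.
extraction_request = {
    "model": "your-model-here",
    "temperature": 0,  # same input -> (near-)same output every run
    "messages": [{"role": "user", "content": "Extract every date mentioned in this text, one per line."}],
}

# Creative setting: brainstorms, taglines, variant generation.
brainstorm_request = {
    "model": "your-model-here",
    "temperature": 0.9,  # more randomness, more varied candidates
    "messages": [{"role": "user", "content": "Give me 10 taglines for a dog-walking app."}],
}
```

A useful habit: pick the temperature before you write the prompt, because it tells you whether you want one right answer or many candidates.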
Iterate, save wins, build a library
Treat prompts like code. When one works, save it to a Notion page, a Google Doc, or a dedicated prompt library tool. Name it, tag it, note what model and what date. Next month when you need the same thing, you won’t reinvent it. Solo founders who do this compound a personal moat over time.
Common mistakes
Vague instructions (“make it better”), stuffing 10 tasks into one prompt, and telling instead of showing. The fix for all three is the same: show the model an example of what “good” looks like rather than describing it in adjectives. This is the show-don’t-tell rule, and it’s the single biggest upgrade most people can make today.
Bottom line
Use role + task + context + format, show examples, pick temperature deliberately, and save what works. Do that for a month and you’ll out-prompt 95% of users on any model you touch.