Chain of thought (CoT)
Chain of thought (CoT) is a prompting technique that asks the AI to reason step-by-step before giving the final answer. Dramatically improves accuracy on math, logic, and multi-step reasoning tasks.
What it means
The original CoT work (Wei et al., 2022) showed that prompting models to reason step by step improved accuracy by 20+ percentage points on math benchmarks. Modern reasoning models (OpenAI's o-series, DeepSeek R1, Claude with extended thinking) build CoT into the model itself — generating internal 'thinking' tokens before the visible answer. You can also explicitly request step-by-step reasoning at the prompt level for any model.
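A minimal sketch of prompt-level CoT: wrap any question in an explicit step-by-step instruction before sending it to a model. The function name and answer-line convention here are illustrative, not a standard API.

```python
def cot_prompt(question: str) -> str:
    """Wrap a question with a zero-shot chain-of-thought instruction."""
    return (
        f"{question}\n\n"
        "Let's think step by step. When you are done, state the final "
        "answer on its own line starting with 'Answer:'."
    )

# Example: a multi-step arithmetic question benefits most from CoT.
prompt = cot_prompt("A train travels 60 km in 45 minutes. "
                    "What is its average speed in km/h?")
print(prompt)
```

The trailing 'Answer:' convention makes the final answer easy to parse out of the model's reasoning text.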
Why it matters
CoT is the cheapest prompt-engineering technique that produces real quality gains. For any task involving math, logic, multi-step planning, or complex extraction, requesting step-by-step reasoning improves accuracy meaningfully. The extra 'thinking' tokens cost more, but the accuracy gain is usually worth it.
Frequently asked questions
Should I always use it?
For math, logic, and planning tasks — yes. For simple Q&A or stylistic generation it's usually overkill and just adds cost.
Reasoning models vs CoT prompting?
Reasoning models perform CoT internally and more reliably. For non-reasoning models (GPT-4o, Claude Sonnet), CoT prompting is a simple way to get much of the benefit.
Related terms
- **System prompt** — A system prompt is the persistent instruction sent to an LLM before user messages. It defines the AI's role, style, behavior, and constraints. Cached on most providers, so investing in a good one is cheap.
- **Few-shot prompting** — Few-shot prompting includes 1-5 examples of desired input-output pairs in your prompt to guide the AI's response style or format. Beats zero-shot for tasks where format matters.