Free Tool Arena


Chain of thought (CoT)


Updated May 2026 · 4 min read

Definition

Chain of thought (CoT) is a prompting technique that asks the AI to reason step-by-step before giving its final answer. It dramatically improves accuracy on math, logic, and multi-step reasoning tasks.

What it means

The original CoT work (Wei et al., 2022) showed that prompting a model to reason step by step, whether through worked examples or simply the phrase "Let's think step by step" in the zero-shot variant, improved accuracy by 20+ percentage points on math benchmarks. Modern reasoning models (OpenAI o-pro, DeepSeek R1, Claude with extended thinking) bake CoT into the model itself, generating internal "thinking" tokens before the visible answer. You can also explicitly request step-by-step reasoning at the prompt level for any model.
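Prompt-level CoT is just string construction. As a minimal sketch (the template wording and function name are illustrative, not from any library or paper), you wrap the question with a step-by-step instruction and ask for a clearly marked final answer:

```python
# Minimal sketch of prompt-level chain of thought. The instruction asks the
# model to show its reasoning, then emit the answer on a marked final line.

COT_TEMPLATE = (
    "{question}\n\n"
    "Let's think step by step. Show your reasoning, then give the result "
    "on its own line as: Final answer: <answer>"
)

def build_cot_prompt(question: str) -> str:
    """Return a chain-of-thought prompt usable with any chat model."""
    return COT_TEMPLATE.format(question=question)

print(build_cot_prompt(
    "If a train travels 60 km in 45 minutes, what is its speed in km/h?"
))
```

The marked "Final answer:" line makes the response easy to parse downstream, which matters because CoT responses are mostly reasoning text.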


Why it matters

CoT is the cheapest prompt-engineering trick that produces real quality gains. For any task involving math, logic, multi-step planning, or complex extraction, requesting step-by-step reasoning improves accuracy meaningfully. The "thinking" tokens cost extra, but the accuracy gain is usually worth it.
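Because the reasoning tokens are overhead, you typically strip them before using the answer. A hedged sketch (the "Final answer:" marker is an assumed prompt convention, not a model guarantee):

```python
def extract_final_answer(response: str) -> str:
    """Return the text after the last 'Final answer:' marker.

    Falls back to the whole response if the model ignored the format.
    """
    marker = "Final answer:"
    idx = response.rfind(marker)
    if idx == -1:
        return response.strip()
    return response[idx + len(marker):].strip()

reply = (
    "45 minutes is 0.75 hours, so the speed is 60 / 0.75 = 80 km/h.\n"
    "Final answer: 80 km/h"
)
print(extract_final_answer(reply))  # prints "80 km/h"
```

Using `rfind` takes the last marker, which guards against the model mentioning the phrase mid-reasoning.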


Frequently asked questions

Should I always use it?

For math, logic, and planning: yes. For simple Q&A or stylistic generation it's usually overkill and just adds cost.

Reasoning models vs CoT prompting?

Reasoning models perform CoT internally and more reliably. For non-reasoning models (GPT-4o, Claude Sonnet), CoT prompting is a simple way to get most of the benefit.
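The practical rule is: add the instruction only when the model won't reason on its own. A small sketch, where the set of reasoning-model identifiers is an illustrative assumption, not an authoritative list:

```python
# Assumed model identifiers, for illustration only.
REASONING_MODELS = {"deepseek-r1", "o-pro"}

def prepare_prompt(model: str, question: str) -> str:
    """Add an explicit CoT instruction only for non-reasoning models."""
    if model in REASONING_MODELS:
        # The model generates its own internal thinking tokens.
        return question
    return question + "\n\nLet's think step by step."
```

So the same helper sends the bare question to a reasoning model and the CoT-augmented version everywhere else.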
