Chain of Thought Prompting Explained


Chain-of-thought (CoT) is the principle that AI models produce better answers when they reason step by step rather than jumping straight to a conclusion. It’s one of the most important ideas in prompting — but how you use it depends on which model you’re working with.

Two Eras of CoT

Then (2022-2023): Early large language models needed you to explicitly ask for step-by-step reasoning. Adding “think step by step” to a prompt could be the difference between a wrong answer and a right one.

Now: Models like Claude, OpenAI’s o-series, and DeepSeek-R1 have built-in reasoning. They automatically break down complex problems internally before responding — no magic phrase needed. The latest models even use adaptive thinking, dynamically deciding when and how much to reason based on the complexity of your query.

What Reasoning Models Do Automatically

When you ask a modern reasoning model a complex question, it internally:

  1. Breaks down the problem into sub-steps
  2. Considers multiple approaches
  3. Checks its own logic before answering

You don’t need to prompt for this — it happens by default. A question like “What do you pay on an $80 item with 25% off plus a 10% coupon on the sale price?” gets the correct answer, $54, without any CoT prompting.
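The arithmetic behind that answer — the decomposition a reasoning model performs internally — can be checked in a few lines (a plain calculation, not a model call):

```python
# 25% off an $80 item, then a 10% coupon applied to the sale price.
price = 80.00
sale_price = price * (1 - 0.25)        # $60.00 after the 25% discount
final_price = sale_price * (1 - 0.10)  # $54.00 after the 10% coupon
print(f"${final_price:.2f}")           # → $54.00
```

The key step a model can fumble without decomposition is applying the coupon to the *sale* price ($60), not the original $80.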

When Explicit CoT Still Matters

Even with reasoning models, there are times you want to ask for visible step-by-step thinking:

  • Auditing the logic — “Show your reasoning” lets you verify how the model reached its answer, not just what it answered
  • Steering the approach — “First analyze the requirements, then evaluate each option” guides which steps the model takes
  • Non-reasoning models — Smaller or older models still benefit enormously from explicit CoT prompts
  • Teaching and documentation — When the reasoning itself is the deliverable (explaining a decision, writing a proof)
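As a concrete illustration of the “steering” and “auditing” cases, an explicit CoT scaffold can be as simple as prepending structured instructions to the task. The wording below is illustrative, not a canonical template:

```python
task = "Which database should we use for a write-heavy logging service?"

# Steered prompt: dictates which steps the model takes and makes them visible,
# so the reasoning can be audited, not just the final answer.
steered_prompt = (
    "First analyze the requirements, then evaluate each option, "
    "then state your recommendation. Show your reasoning at each step.\n\n"
    + task
)

# Plain prompt: lets a reasoning model decompose the problem internally.
plain_prompt = task
```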

A Practical Tip

When you do ask a model to reason, keep your instructions general. Saying “think this through carefully” tends to produce better results than spelling out exact steps — the model’s own reasoning often surpasses what you’d prescribe.

The Takeaway

Chain-of-thought isn’t a hack you apply to every prompt. It’s a mental model: complex problems need decomposition. The best models now do this internally. Your job as a prompter is knowing when to let the model reason on its own and when to ask it to show — or structure — its thinking.

Quick Quiz

Question 1 of 2

When is explicitly asking for chain-of-thought reasoning most useful?