“Ask an AI to do extended thinking…” — and it tells you what it’s doing?! (Sort of.)
A Level Computing | Philip M Russell Ltd
You’ve seen it happen:
You ask an AI a tricky question.
You click the button that says something like “extended thinking”.
And then—instead of just blurting out an answer like an overconfident Year 10—you get a response that sounds like the AI is narrating its brain:
“First I’ll break the problem down… then I’ll check edge cases… then I’ll verify…”
It feels like watching a student show their working. Which is oddly comforting.
But here’s the important bit for A Level Computing:
The AI isn’t “showing its thoughts” in the way you think
Most modern AI systems do not reveal their full internal reasoning (often called chain-of-thought). What you’re seeing is usually a summary of the approach: a tidy, human-readable explanation of the steps it took or would take.
That’s not a bad thing. In fact, for learning, it can be brilliant — but you need to understand what you’re getting.
What “extended thinking” usually means (in plain English)
When you request extended thinking, you’re generally asking the model to:
- Spend more compute/time on reasoning
- Break the task into sub-problems
- Self-check for contradictions and missing cases
- Explain the method more explicitly than usual
In A Level terms, it’s similar to switching from “Give me the answer” to “Show me your algorithm, and then run it carefully.”
Why it looks like the AI is narrating its process
Because narration is useful.
A well-structured explanation often includes:
- Identifying inputs/outputs (specification thinking)
- Planning a method (algorithm design)
- Checking constraints (edge cases, assumptions)
- Verifying results (testing / validation)
That’s basically the Computational Thinking toolkit:
Decomposition, abstraction, algorithmic thinking, evaluation.
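To see that structure in code, here’s a minimal Python sketch (my own made-up task, not anything an AI produced) that walks through all four stages for one small problem: finding the second-largest distinct value in a list.

```python
def second_largest(values):
    """Specification: input is a list of numbers; output is the
    second-largest distinct value, or None if there isn't one."""
    # Plan (algorithm design): remove duplicates, sort, take the
    # second value from the end.
    distinct = sorted(set(values))
    # Constraints (edge cases): empty list, single value, all equal.
    if len(distinct) < 2:
        return None
    return distinct[-2]

# Verification (testing / validation): a normal case plus edge cases.
assert second_largest([3, 7, 7, 2]) == 3
assert second_largest([5]) is None
assert second_largest([4, 4, 4]) is None
print("All checks passed")
```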
So the AI is doing what your teacher has been nagging you to do all along. (Annoying, isn’t it?)
The catch: “explanations” are not the same as “proof”
Even if the AI gives you a lovely step-by-step explanation, it can still:
- use a wrong assumption,
- miss a constraint,
- produce an answer that sounds correct but isn’t.
So treat it like a very fast study partner who sometimes confidently walks into lampposts.
A Level-friendly rule:
Use the AI’s explanation as a draft algorithm — then test it like you would test your own code.
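Here’s a made-up illustration of what that looks like in practice. Suppose the AI confidently explains that “a year is a leap year if it divides by 4”. The explanation reads perfectly well, but a quick test harness (written exactly as you’d test your own code) catches the missing century rule.

```python
# A hypothetical "draft algorithm" an AI might explain convincingly:
# "A year is a leap year if it is divisible by 4."
def is_leap_year(year):
    return year % 4 == 0

# Test it like your own code: the explanation sounded fine, but the
# century rule is missing (1900 was not a leap year; 2000 was).
tests = {2024: True, 2023: False, 2000: True, 1900: False}
for year, expected in tests.items():
    result = is_leap_year(year)
    status = "OK" if result == expected else "WRONG"
    print(f"{year}: got {result}, expected {expected} -> {status}")
```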
How to prompt it properly (so it actually helps you learn)
Try these prompt styles:
1) Ask for a plan first (before the final answer)
Prompt:
“Give me a brief plan (like pseudocode / method) before the final answer.”
Why it helps: you can spot dodgy logic early.
2) Force it to state assumptions
Prompt:
“List your assumptions explicitly before solving.”
Why it helps: you can challenge the weak bits.
3) Ask it to check edge cases
Prompt:
“After answering, test your solution against 3 edge cases.”
Why it helps: that’s literally exam evaluation.
4) Ask for a marking-grid style response
Prompt:
“Answer like an A Level student: define terms, show method, give final result, then evaluate limitations.”
Why it helps: it mirrors how marks are awarded.
A quick example: “Explain how you’d search for the fastest route”
Instead of:
“Find the fastest route.”
Try:
“Explain how you’d approach this: identify the graph model, choose an algorithm, and justify it.”
Now you’re doing proper A Level:
- Graph representation (nodes/edges/weights)
- Algorithm choice (Dijkstra vs A* vs BFS)
- Justification (constraints, complexity, correctness)
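To get a feel for what that algorithm choice turns into, here’s a minimal sketch of Dijkstra’s algorithm on a tiny invented road network (the node names and weights are made up purely for illustration).

```python
import heapq

def dijkstra(graph, start):
    """Shortest distances from start to every node in a weighted graph.
    graph: dict mapping node -> list of (neighbour, weight) pairs."""
    distances = {node: float("inf") for node in graph}
    distances[start] = 0
    queue = [(0, start)]               # priority queue of (distance, node)
    while queue:
        dist, node = heapq.heappop(queue)
        if dist > distances[node]:     # stale entry; a better route was found
            continue
        for neighbour, weight in graph[node]:
            new_dist = dist + weight
            if new_dist < distances[neighbour]:
                distances[neighbour] = new_dist
                heapq.heappush(queue, (new_dist, neighbour))
    return distances

# A tiny made-up road network: nodes are junctions, weights are minutes.
roads = {
    "A": [("B", 5), ("C", 2)],
    "B": [("D", 4)],
    "C": [("B", 1), ("D", 7)],
    "D": [],
}
print(dijkstra(roads, "A"))   # {'A': 0, 'B': 3, 'C': 2, 'D': 7}
```

And the justification bit: BFS would only be correct if every edge had the same weight; Dijkstra handles weighted edges, at roughly O((V + E) log V) with a binary heap.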
So… should you trust the “thinking”?
Trust it the way you trust a calculator:
- Great for speed
- Great for structure
- Still your job to check it’s answering the right question
And if it gives you a neat method: brilliant. That’s basically revision.
Just don’t confuse “a convincing explanation” with “guaranteed correct”.