The Core Principle
A language model's first answer often contains confident-sounding claims that rest on implicit assumptions the model has never been asked to justify. Maieutic prompting reveals these assumptions by demanding explanation. When you ask 'why is that true?' after each claim, the model is forced to either produce a valid justification (which builds confidence) or reveal that the claim was a pattern-match without deep support (which surfaces uncertainty). The recursive self-explanation pressure is what distinguishes maieutic prompting from a simple follow-up question — it continues until the reasoning chain either reaches solid ground or reveals its weakest link.
How to Implement It
The practical implementation is simple: after receiving an initial response, follow up with: 'Now, for each claim in your response, explain why it is true and flag any claim where you are less than fully certain.' This single follow-up prompt consistently reveals the shakiest parts of the initial answer — the claims that were confident but shallow. For deeper analysis, continue the questioning: 'You said X is true because Y. Why is Y true? Are there exceptions? How confident are you?' Each layer of questioning either validates the reasoning or exposes the depth at which the model's certainty runs out.
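The follow-up loop above can be sketched in a few lines of Python. This is a minimal sketch under assumptions: the `ask` callable is a hypothetical stand-in for whatever chat-completion client you use (it takes an OpenAI-style message list and returns the reply text), and the message format is assumed, not prescribed by any particular API.

```python
# Hedged sketch of a maieutic follow-up loop.
# Assumption: `ask(messages)` is a hypothetical client function that takes a
# list of {"role": ..., "content": ...} dicts and returns the reply as a string.

FOLLOW_UP = (
    "Now, for each claim in your response, explain why it is true "
    "and flag any claim where you are less than fully certain."
)

def deepen(claim: str, reason: str) -> str:
    """Build the next layer of questioning for one claim/justification pair."""
    return (
        f"You said {claim} is true because {reason}. "
        f"Why is {reason} true? Are there exceptions? How confident are you?"
    )

def maieutic_session(ask, question: str, depth: int = 2) -> list[str]:
    """Ask the initial question, then press for self-explanation `depth` times.

    Returns the list of replies: the initial answer followed by each
    round of justification.
    """
    messages = [{"role": "user", "content": question}]
    transcript = []
    for _ in range(depth + 1):
        reply = ask(messages)                                   # model turn
        transcript.append(reply)
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": FOLLOW_UP})  # press for reasons
    return transcript
```

In practice you would stop early once the replies stop changing, or switch from the generic `FOLLOW_UP` to a targeted `deepen(...)` probe aimed at the claim that looked weakest in the previous round.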
Combining With Chain of Thought
Maieutic prompting works best layered on an existing chain-of-thought response. Use chain-of-thought for the initial reasoning pass — this produces an explicit reasoning chain you can then interrogate. Then apply maieutic questioning to the weakest-looking steps: 'In step 3 you said [X]. Why is this the right move here? What would happen if [alternative assumption] were true instead?' This two-layer approach is more efficient than applying maieutic questioning to a non-CoT answer, because you have explicit reasoning steps to target rather than a flat answer.
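The two-layer approach can be made concrete with a small helper that pulls numbered steps out of a chain-of-thought reply and builds the step-targeted probe. This is a sketch under assumptions: it presumes (hypothetically) that the reasoning pass was asked to number its steps as '1.', '2.', and so on; real replies may need more robust parsing.

```python
import re

def extract_steps(cot_answer: str) -> dict[int, str]:
    """Map step number -> step text for lines formatted like '3. ...' or '3) ...'.

    Assumption: the chain-of-thought pass was instructed to number its steps,
    one per line. Unnumbered or free-form reasoning will not be captured.
    """
    return {
        int(m.group(1)): m.group(2).strip()
        for m in re.finditer(r"^\s*(\d+)[.)]\s+(.+)$", cot_answer, re.MULTILINE)
    }

def probe_step(steps: dict[int, str], n: int, alternative: str) -> str:
    """Build the maieutic probe for one step of the reasoning chain."""
    return (
        f"In step {n} you said: {steps[n]} Why is this the right move here? "
        f"What would happen if {alternative} were true instead?"
    )
```

The design choice here is deliberate: by parsing the chain into addressable steps first, each probe targets one explicit move in the reasoning rather than the answer as a whole, which is exactly the efficiency gain the two-layer approach buys you.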
Practical Applications
Maieutic prompting is particularly useful for: evaluating AI-generated analyses before acting on them (surface the assumptions before relying on the conclusion), checking logical arguments for hidden weaknesses, stress-testing decisions (make the model justify each element of a recommendation), and learning (having AI teach you something by explaining each step until you genuinely understand rather than just receiving a confident explanation). It's less useful for creative tasks, simple factual questions, and tasks where the first answer is already well-supported.
Maieutic Prompting as a Hallucination Check
One of the most practical uses of maieutic prompting is as a post-hoc hallucination check. After receiving any answer that contains specific claims (statistics, citations, historical facts, technical specifications), apply: 'Review each factual claim in your response. For each, rate your certainty (high/medium/low) and explain the basis for that rating. Flag any claim you are not highly certain about.' This prompts the model to surface its own uncertainty — and while this isn't foolproof (the model can be confidently wrong), it consistently catches a meaningful fraction of potential hallucinations before you rely on them.