ReAct Prompting
Interleaving reasoning traces and actions so a model can use tools and reflect on their results.
Full Definition
ReAct (Reasoning + Acting) is a prompting framework where the model alternates between Thought steps (internal reasoning), Action steps (calling a tool or API), and Observation steps (the tool's result) until it reaches a final answer. This structure grounds the model's reasoning in real retrieved data and allows it to correct course mid-task if a tool returns unexpected results. ReAct was introduced in a 2022 Princeton/Google paper ("ReAct: Synergizing Reasoning and Acting in Language Models", Yao et al.) and became a foundational pattern for LLM agents. It outperforms both pure chain-of-thought (reasoning with no tools) and pure tool-use (tools with no reasoning) on tasks requiring dynamic information retrieval.
Examples
Question: How many euros is 100 USD? Thought: I need the current exchange rate. Action: search('USD to EUR exchange rate today'). Observation: 1 USD = 0.92 EUR. Thought: 100 × 0.92 = 92. Answer: €92.
A ReAct agent answering a multi-hop Wikipedia question by iterating search → read → reason until all sub-questions are resolved.
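The loop behind traces like these can be sketched in a few lines. This is a minimal illustration, not a production agent: the `model` function below is a scripted stand-in for a real LLM call, and `search` is a hypothetical mock tool that returns a canned result.

```python
import re

def search(query: str) -> str:
    # Hypothetical tool: a real agent would hit a search API here.
    return "1 USD = 0.92 EUR"

# Scripted model outputs standing in for LLM completions.
SCRIPT = [
    "Thought: I need the current exchange rate.\n"
    "Action: search('USD to EUR exchange rate today')",
    "Thought: 100 * 0.92 = 92.\nAnswer: €92",
]

def model(history: str, step: int) -> str:
    return SCRIPT[step]  # stand-in for an LLM call on the transcript so far

def react(question: str, max_steps: int = 5) -> str:
    history = f"Question: {question}"
    for step in range(max_steps):
        output = model(history, step)
        history += "\n" + output
        answer = re.search(r"Answer:\s*(.+)", output)
        if answer:  # model produced a final answer; stop the loop
            return answer.group(1)
        action = re.search(r"Action:\s*search\('(.+?)'\)", output)
        if action:  # run the tool and feed the result back as an Observation
            history += f"\nObservation: {search(action.group(1))}"
    return "no answer reached"

print(react("How many euros is 100 USD?"))  # → €92
```

The key design point is the Observation step: the tool's output is appended to the transcript before the next model call, so each new Thought is grounded in real retrieved data rather than the model's prior guess.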
Apply this in your prompts
PromptITIN automatically uses techniques like ReAct Prompting to build better prompts for you.