✍️ Instruction-based Prompting

Definition: Giving direct commands or instructions to the model.

Example: “Summarize this article in 5 bullet points.”
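
A minimal sketch of what this looks like in code, using the OpenAI Python SDK; the model name and the article text are illustrative placeholders, not part of the original example:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

article = "..."  # placeholder: the article text to summarize

# Instruction-based prompting: the whole prompt is a direct command.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "user", "content": f"Summarize this article in 5 bullet points:\n\n{article}"}
    ],
)
print(response.choices[0].message.content)
```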

🚀 Zero-shot Prompting

Definition: The model is expected to complete the task based only on instructions, with no examples provided.

Example: “Translate this sentence into French: ‘I love learning.’”
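
As a sketch (same assumptions as above: OpenAI SDK, illustrative model name), a zero-shot prompt sends only the instruction, with no demonstrations:

```python
from openai import OpenAI

client = OpenAI()

# Zero-shot: the instruction alone; no example translations are provided.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "user", "content": "Translate this sentence into French: 'I love learning.'"}
    ],
)
print(response.choices[0].message.content)
```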

🧩 One-shot Prompting

Definition: Providing one example along with the instruction to guide the model’s response.

Example: “Translate the following: ‘I love dogs’ → ‘J’aime les chiens’. Now translate: ‘I enjoy music’.”
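
A sketch of the same example as code; the single worked translation sits in the prompt right before the real request (model name is an illustrative assumption):

```python
from openai import OpenAI

client = OpenAI()

# One-shot: exactly one worked example precedes the actual request.
prompt = (
    "Translate the following: 'I love dogs' → 'J'aime les chiens'.\n"
    "Now translate: 'I enjoy music'"
)
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```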

📚 Few-shot Prompting

Definition: Providing multiple examples to help the model learn the pattern before asking it to perform a new task.

Example:

Translate: ‘Good morning’ → ‘Bonjour’
‘Good night’ → ‘Bonne nuit’
Now translate: ‘See you tomorrow’
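
A minimal sketch of the same few-shot prompt as a single string sent through the OpenAI SDK (model name is an illustrative assumption); an alternative is to supply each demonstration as a prior user/assistant turn:

```python
from openai import OpenAI

client = OpenAI()

# Few-shot: several demonstrations establish the pattern before the new request.
prompt = (
    "Translate: 'Good morning' → 'Bonjour'\n"
    "'Good night' → 'Bonne nuit'\n"
    "Now translate: 'See you tomorrow'"
)
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```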

🧠 Chain-of-Thought Prompting

Definition: Asking the model to show its reasoning step-by-step to improve logical coherence or solve complex problems.

Example: “If a car travels 60 km in 2 hours, what is its speed? Let’s think step by step.”
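
As a sketch, the only change from a plain prompt is the trailing reasoning cue (OpenAI SDK and model name are illustrative assumptions):

```python
from openai import OpenAI

client = OpenAI()

# Chain-of-thought: the closing cue asks the model to lay out its reasoning.
prompt = (
    "If a car travels 60 km in 2 hours, what is its speed? "
    "Let's think step by step."
)
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
# Expected output: intermediate steps, then the answer (60 km / 2 h = 30 km/h).
print(response.choices[0].message.content)
```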

🎭 Role-based Prompting

Definition: Asking the model to respond as if it were a specific expert, profession, or persona.

Example: “You are a senior software engineer. Explain how multithreading works in simple terms.”
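
In chat-style APIs the persona is commonly placed in the system message, with the task in the user message. A minimal sketch (model name is an illustrative assumption):

```python
from openai import OpenAI

client = OpenAI()

# Role-based: the persona goes in the system message, the task in the user message.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a senior software engineer."},
        {"role": "user", "content": "Explain how multithreading works in simple terms."},
    ],
)
print(response.choices[0].message.content)
```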

💬 Multi-turn (Conversational) Prompting

Definition: Building and maintaining context over multiple exchanges between the user and the model.

Example:

User: “Explain SQL.”
AI: “SQL stands for Structured Query Language…”
User: “How does a JOIN work?”
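
A sketch of how this dialogue is carried by code: the full history is resent on every turn so the follow-up question (“How does a JOIN work?”) is interpreted in context. The OpenAI SDK and model name are illustrative assumptions:

```python
from openai import OpenAI

client = OpenAI()
model = "gpt-4o-mini"  # illustrative model name

# Turn 1: ask the first question and record the model's reply in the history.
history = [{"role": "user", "content": "Explain SQL."}]
first = client.chat.completions.create(model=model, messages=history)
history.append({"role": "assistant", "content": first.choices[0].message.content})

# Turn 2: the follow-up relies on the earlier exchange for context.
history.append({"role": "user", "content": "How does a JOIN work?"})
second = client.chat.completions.create(model=model, messages=history)
print(second.choices[0].message.content)
```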

🧾 Contextual Prompting

Definition: Supplying extra background or relevant context to improve the model’s output quality.

Example: “Given the recent economic downturn, write a report on its impact on small businesses.”
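
A minimal sketch: the background is assembled separately (here a placeholder string) and prepended to the instruction before the call. The SDK and model name are illustrative assumptions:

```python
from openai import OpenAI

client = OpenAI()

# Contextual prompting: background information is supplied ahead of the instruction.
context = "Background: the region has seen a sharp economic downturn over the past year."  # placeholder
prompt = f"{context}\n\nWrite a report on its impact on small businesses."
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```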

🔗 Prompt Chaining

Definition: Breaking a complex task into multiple smaller prompts and combining their outputs to solve the original problem.

Example:

First prompt: “Summarize this article.”
Second prompt: “Extract action items from the summary.”
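
A sketch of the chain: the output of the first call is interpolated into the second prompt. The article text, SDK usage, and model name are illustrative placeholders:

```python
from openai import OpenAI

client = OpenAI()
model = "gpt-4o-mini"  # illustrative model name

article = "..."  # placeholder: the article text

# Step 1: summarize the article.
summary = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": f"Summarize this article:\n\n{article}"}],
).choices[0].message.content

# Step 2: feed the first output into the second prompt.
actions = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": f"Extract action items from this summary:\n\n{summary}"}],
).choices[0].message.content

print(actions)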

⚙️ ReAct Prompting (Reasoning + Acting)

Definition: The model reasons step-by-step and then performs an action (such as calling a function, tool, or search API) to complete the task.

Use Case: Commonly used in LLM agents that require both thinking and tool usage, such as answering with up-to-date facts or performing calculations.
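
A highly simplified sketch of the ReAct loop: the model alternates Thought/Action/Observation lines, the program executes the requested tool, and the observation is fed back until a final answer appears. The calculator tool, the text-parsing convention, and the model name are all illustrative assumptions; real agent frameworks handle tool dispatch far more robustly:

```python
from openai import OpenAI

client = OpenAI()
model = "gpt-4o-mini"  # illustrative model name

# A toy tool the model can "act" with; real agents register many such tools.
def calculator(expression: str) -> str:
    return str(eval(expression))  # illustration only; never eval untrusted input in practice

SYSTEM = (
    "Answer the question by alternating Thought, Action, and Observation lines.\n"
    "When arithmetic is needed, write: Action: calculator(<expression>)\n"
    "Finish with: Final Answer: <answer>"
)

messages = [
    {"role": "system", "content": SYSTEM},
    {"role": "user", "content": "What is 17% of 2450?"},
]

# ReAct loop: reason, act via a tool, feed the observation back, repeat.
for _ in range(5):
    reply = client.chat.completions.create(model=model, messages=messages).choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    if "Final Answer:" in reply:
        print(reply)
        break
    if "Action: calculator(" in reply:
        expr = reply.split("Action: calculator(", 1)[1].split(")", 1)[0]
        messages.append({"role": "user", "content": f"Observation: {calculator(expr)}"})
```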