Prompting LLMs – Essential Guide
Google / OpenAI / Anthropic / Hugging Face
Table of Contents
1. Introduction
2. Why Prompting Matters
3. LLM Landscape (Google, OpenAI, Anthropic, Hugging Face)
4. Principles of Effective Prompts
5. Prompt Types
6. Practical Examples
7. Best Practices & Common Pitfalls
8. Pro Tips by Provider
9. Educational & Business Use Cases
10. References & Signature
1. Introduction
Prompting: The structured method for leveraging Large Language Models (LLMs).
Used by Google (Gemini), OpenAI (GPT-4o), Anthropic (Claude), Hugging Face
(Transformers).
Goal: Obtain reliable, auditable, useful results for real-world needs.
This guide synthesizes key educational and advanced fundamentals.
2. Why Prompting Matters
LLMs predict text; they don’t understand context like humans.
Prompt formulation directly determines output quality (OpenAI, Anthropic).
“Garbage In, Garbage Out”: clarity drives results.
Prompting mastery = higher work efficiency, less noise, safer outputs.
3. LLM Landscape
Google: Gemini, PaLM. Strong in multimodality, Google Suite integration.
OpenAI: GPT-4o, GPT-4. Versatile, widely used APIs.
Anthropic: Claude 3, Claude 2. Focus on safety, reasoning, business use.
Hugging Face: Open platform, thousands of models (Llama, Mistral…). Customizable,
open-source.
Each provider has its own context window size, output limits, and preferred prompt style.
4. Principles of Effective Prompts
Start simple and explicit—be specific about the task.
Specify the output format (text, JSON, table…).
Add context: role, goal, constraints, sample data.
Version, test, and document your prompts.
Examples:
“Act as an HR expert. Summarize this text in 3 bullets for a busy manager.”
“Classify each sentence as POSITIVE, NEUTRAL, or NEGATIVE.”
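The principles above can be assembled into a reusable prompt template. A minimal sketch; the function and parameter names are illustrative, not any provider's API:

```python
def build_prompt(role, task, output_format, context="", examples=""):
    """Assemble a prompt from the components recommended above:
    role, task, output format, and optional context and examples.
    All names here are illustrative; adapt them to your workflow."""
    parts = [
        f"Act as {role}.",
        f"Task: {task}",
        f"Output format: {output_format}",
    ]
    if context:
        parts.append(f"Context: {context}")
    if examples:
        parts.append(f"Examples:\n{examples}")
    return "\n".join(parts)

prompt = build_prompt(
    role="an HR expert",
    task="Summarize this text in 3 bullets for a busy manager.",
    output_format="3 bullet points, plain text",
)
print(prompt)
```

Versioning then becomes simple: store each template in source control and log which version produced which output.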
5. Prompt Types
Zero-shot: Simple instruction.
Translate this text to English: ...
One-shot: With one example.
Example:
Text: Great product. | Sentiment: POSITIVE
Now:
Text: Slow service. | Sentiment:
Few-shot: Multiple examples.
Text: Amazing! → POSITIVE
Text: Bad experience. → NEGATIVE
Text: Delivery okay. → NEUTRAL
Text: ... →
Chain of Thought (CoT): Step-by-step reasoning.
Q: I have 3 apples, eat 1. How many left? Think step by step...
ReAct: Reasoning + Action (needs tools/agents).
Question: Who is Google’s CEO?
Thought: Search online.
Action: Google Search “CEO Google”
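The few-shot pattern above can be generated programmatically from labeled examples. A minimal sketch using the sentiment examples from this section:

```python
def few_shot_prompt(examples, new_text):
    """Build a few-shot classification prompt from (text, label) pairs,
    ending with the unlabeled text the model should complete."""
    lines = [f"Text: {text} -> {label}" for text, label in examples]
    lines.append(f"Text: {new_text} ->")
    return "\n".join(lines)

examples = [
    ("Amazing!", "POSITIVE"),
    ("Bad experience.", "NEGATIVE"),
    ("Delivery okay.", "NEUTRAL"),
]
print(few_shot_prompt(examples, "Slow service."))
```

Keeping examples in a list makes it easy to test how many shots a given model actually needs.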
6. Practical Examples
Decision-focused summary
You are a strategy consultant. Summarize the text:
- In 3 bullets: issue, tension, option
Text: ...
JSON Extraction
Analyze and return JSON:
{ "entity": ..., "date": ..., "action": ..., "emotion": ... }
Text: ...
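When requesting JSON, validate the reply before using it. A minimal sketch, assuming the model may wrap its answer in a Markdown code fence (a common behavior, not guaranteed):

```python
import json

REQUIRED_KEYS = {"entity", "date", "action", "emotion"}

def parse_extraction(reply):
    """Strip an optional ``` fence, parse the JSON, and verify
    that all required keys from the prompt's schema are present."""
    text = reply.strip()
    if text.startswith("```"):
        # Drop the opening fence line and the trailing fence.
        text = text.split("\n", 1)[1].rsplit("```", 1)[0]
    data = json.loads(text)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"Missing keys: {sorted(missing)}")
    return data

reply = '```json\n{"entity": "ACME", "date": "2025-06-23", '\
        '"action": "refund", "emotion": "frustrated"}\n```'
print(parse_extraction(reply))
```

Failing loudly on malformed replies is safer than silently passing bad data downstream.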
GPT API Prompt (OpenAI)
{
  "model": "gpt-4o",
  "temperature": 0.2,
  "messages": [
    {"role": "system", "content": "You are a legal agent."},
    {"role": "user", "content": "Analyze this contract."}
  ]
}
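The same request body can be built in Python before sending it to the Chat Completions endpoint. A sketch of the payload only; actually sending it requires the `openai` SDK or an HTTP client plus an API key, which are not shown here:

```python
def build_chat_request(system_msg, user_msg, model="gpt-4o", temperature=0.2):
    """Return a Chat Completions request body matching the JSON above."""
    return {
        "model": model,
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": system_msg},
            {"role": "user", "content": user_msg},
        ],
    }

request = build_chat_request("You are a legal agent.", "Analyze this contract.")
print(request["model"])
```

A low temperature (0.2) favors consistent, reproducible analysis over creative variation.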
Claude Prompt (Anthropic)
{
  "system": "You are an SMB advisor.",
  "context": "Business dispute",
  "task": "Write a firm, polite response."
}
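In Anthropic's Messages API, `system` is a top-level field and the context/task go in the user message. A sketch of the request body only; the model name is an assumption, and sending it requires the `anthropic` SDK and an API key, not shown here:

```python
def build_claude_request(system_msg, user_msg,
                         model="claude-3-5-sonnet-latest", max_tokens=1024):
    """Return a Messages API request body: top-level system prompt,
    plus a user message carrying the context and task."""
    return {
        "model": model,
        "max_tokens": max_tokens,
        "system": system_msg,
        "messages": [{"role": "user", "content": user_msg}],
    }

request = build_claude_request(
    "You are an SMB advisor.",
    "Context: Business dispute. Task: Write a firm, polite response.",
)
print(request["system"])
```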
7. Best Practices & Common Pitfalls
Always specify the desired format.
Avoid ambiguity—use positive, direct instructions.
Test with multiple inputs (see Hugging Face logs).
Avoid:
Overly long or vague prompts.
Contradictory instructions.
Missing output format constraints.
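Testing with multiple inputs can be automated with a small harness. A minimal sketch; `call_model` is a hypothetical stub standing in for a real API or pipeline call:

```python
def call_model(prompt):
    """Hypothetical stub; replace with a real provider call."""
    return "POSITIVE"  # fixed placeholder reply for demonstration

def check_outputs(template, inputs, allowed_labels):
    """Run the prompt template over several inputs and collect
    any replies that violate the expected output format."""
    failures = []
    for text in inputs:
        reply = call_model(template.format(text=text)).strip()
        if reply not in allowed_labels:
            failures.append((text, reply))
    return failures

template = "Classify the sentence as POSITIVE, NEUTRAL, or NEGATIVE.\nText: {text}"
inputs = ["Great product.", "Slow service.", "Delivery okay."]
failures = check_outputs(template, inputs, {"POSITIVE", "NEUTRAL", "NEGATIVE"})
print(failures)
```

An empty failure list means every reply matched the format constraint; anything else pinpoints which input broke the prompt.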
8. Pro Tips by Provider
Google: Gemini handles images and multi-format well; use short, structured prompts.
OpenAI: GPT-4o works best with structured “system”, “user”, “assistant” roles. Temperature
adjusts creativity.
Anthropic: Claude values context; use a “system/context/task” structure; excels at step-by-step reasoning.
Hugging Face: Use open-source models for domain-specific prompts; supports long prompts (e.g., Llama 3); can fine-tune.
9. Educational & Business Use Cases
Course summaries: Reduce 10 pages to 5 bullets for students.
Text analysis: Extract key points from an article for teachers.
Assisted writing: Generate outlines for essays or business reports.
Classification: Sort student answers or customer emails.
Quiz creation: Generate 10 MCQs from educational material.
10. References & Signature
Sources:
Google
OpenAI
Anthropic
Hugging Face
Compiled by: Jean-Philippe Maltais
Date: 2025-06-23