Chapter 7: Prompt Engineering
Guiding Model Behavior
Learning Objectives
- Understand prompt engineering fundamentals
- Master the mathematical foundations
- Learn practical implementation
- Apply knowledge through examples
- Recognize real-world applications
Prompt Engineering
Introduction
Guiding Model Behavior
This chapter provides comprehensive coverage of prompt engineering, including detailed explanations, mathematical formulations, code implementations, and real-world examples.
📚 Why This Matters
Understanding prompt engineering is crucial for mastering modern AI systems. This chapter breaks down complex concepts into digestible explanations with step-by-step examples.
Key Concepts
Prompt Engineering Fundamentals
What is prompt engineering: Crafting input text to guide LLM behavior and improve performance without modifying model weights.
Key principles:
- Clarity: Be specific and unambiguous
- Context: Provide relevant background
- Examples: Show desired format (few-shot)
- Structure: Use clear formatting and organization
Prompting Strategies
Zero-shot: Just describe the task. Model uses its pre-trained knowledge.
Few-shot: Provide examples in the prompt. Model learns the pattern from examples.
Chain-of-thought: Ask model to show reasoning steps. Improves complex reasoning tasks.
Role-playing: "Act as a..." helps model adopt specific perspective or expertise.
Prompt Components
Effective prompts include:
- Task description: What you want the model to do
- Context: Relevant background information
- Examples: Demonstrations of desired behavior
- Constraints: Limitations or requirements
- Output format: How you want the response structured
Mathematical Formulations
Prompt-based Prediction
Where:
- \(y\): Desired output
- \(\text{prompt}\): Crafted input including task description and examples
- \(x\): Actual input to process
- Model conditions on entire prompt context
Few-shot Learning
Where:
- \(\{(x_i, y_i)\}\): k examples in prompt
- Model learns pattern from examples
- Applies pattern to new input x
- k typically 1-5 examples
Chain-of-Thought Prompting
By explicitly modeling reasoning steps, the model breaks down complex problems into simpler sub-problems, improving accuracy on reasoning tasks.
Detailed Examples
Example: Zero-shot vs Few-shot
Zero-shot prompt:
Classify the sentiment of this review: "This movie was amazing!" Sentiment:
Few-shot prompt:
Review: "Great product!" Sentiment: Positive Review: "Terrible quality." Sentiment: Negative Review: "This movie was amazing!" Sentiment:
Result: Few-shot typically performs better as model learns the pattern from examples.
Example: Chain-of-Thought
Without CoT:
Q: A store has 15 apples. They sell 6. How many left? A:
With CoT:
Q: A store has 15 apples. They sell 6. How many left? A: Let me think step by step. The store starts with 15 apples. They sell 6 apples. So remaining = 15 - 6 = 9 apples. Therefore, 9 apples are left.
Result: CoT improves accuracy on math and reasoning problems.
Implementation
Few-shot Prompting Function
def few_shot_classify(text, examples, model, tokenizer):
"""
Classify text using few-shot prompting
"""
# Build prompt with examples
prompt = ""
for example_text, example_label in examples:
prompt += f"Text: {example_text}\nLabel: {example_label}\n\n"
# Add input text
prompt += f"Text: {text}\nLabel:"
# Generate
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
inputs.input_ids,
max_length=inputs.input_ids.shape[1] + 10,
temperature=0.3,
do_sample=True
)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
label = result.split("Label:")[-1].strip()
return label
# Example usage
examples = [
("Great product!", "Positive"),
("Terrible quality.", "Negative")
]
text = "This movie was amazing!"
label = few_shot_classify(text, examples, model, tokenizer)
print(label) # "Positive"
Chain-of-Thought Prompting
def chain_of_thought_reasoning(question, model, tokenizer):
"""
Use chain-of-thought prompting for reasoning
"""
prompt = f"""Q: {question}
A: Let me think step by step.
"""
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
inputs.input_ids,
max_length=inputs.input_ids.shape[1] + 200,
temperature=0.7,
do_sample=True
)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
reasoning = result.split("A:")[-1].strip()
return reasoning
# Example
question = "A store has 15 apples. They sell 6. How many left?"
reasoning = chain_of_thought_reasoning(question, model, tokenizer)
print(reasoning)
Real-World Applications
Prompt Engineering in Practice
Chatbot development:
- Craft system prompts to define chatbot personality
- Use few-shot examples to show desired conversation style
- Iterate on prompts to improve responses
Content generation:
- Use role-playing prompts: "Act as a marketing expert..."
- Provide examples of desired writing style
- Specify output format and constraints
Task automation:
- Format conversion tasks
- Data extraction from unstructured text
- Code generation with specific requirements
Prompt Engineering Best Practices
Do:
- Be specific and clear
- Provide context and examples
- Specify output format
- Test and iterate
- Use chain-of-thought for complex reasoning
Don't:
- Be vague or ambiguous
- Assume model knows context
- Use overly complex prompts
- Ignore prompt injection risks