Chapter 1: Introduction to RAG

Retrieval-Augmented Generation

Learning Objectives

  • Understand the fundamentals of Retrieval-Augmented Generation (RAG)
  • Master the mathematical foundations
  • Learn practical implementation
  • Apply knowledge through examples
  • Recognize real-world applications

Introduction to RAG

What is RAG?

Retrieval-Augmented Generation (RAG) combines information retrieval with language generation. Instead of relying solely on the LLM's training data, RAG retrieves relevant information from external knowledge sources and uses it to generate more accurate, up-to-date responses.

Think of RAG like a research assistant:

  • Traditional LLM: Like answering from memory - might be outdated or incomplete
  • RAG System: Like a researcher who looks up current information, then answers based on what they found
  • Result: More accurate, factual, and up-to-date responses

⚠️ The Problem with LLMs

LLMs have three critical limitations:

1. Hallucination

LLMs can generate plausible-sounding but incorrect information:

  • Question: "What is the capital of France?"
  • LLM might say: "The capital of France is Paris" (correct)
  • But also: "The capital of France is Lyon" (incorrect, but sounds plausible)
  • Problem: No way to verify without external knowledge

2. Outdated Information

LLMs are trained on data up to a cutoff date:

  • GPT-3.5 trained on data up to September 2021
  • Cannot know about events after that date
  • Question: "Who won the 2024 World Cup?" → Might not know or hallucinate
3. Limited Context Window

LLMs have fixed context limits:

  • Cannot store entire knowledge bases in context
  • Cannot access private/internal documents
  • Limited to what fits in the prompt

✅ How RAG Solves These Problems

RAG architecture:

  1. Retrieval: Search external knowledge base for relevant information
  2. Augmentation: Add retrieved information to the prompt
  3. Generation: LLM generates answer based on retrieved context

Benefits:

  • ✅ Reduces hallucination (grounded in retrieved facts)
  • ✅ Provides up-to-date information (can update knowledge base)
  • ✅ Accesses private documents (can index internal docs)
  • ✅ More transparent (can cite sources)

📊 RAG Data Flow Diagram

The following outlines how data flows through a RAG system, split into a one-time document indexing phase and a per-query processing phase:

📚 Document Indexing Phase (One-time Setup)

  1. 📄 Raw Documents: "France is a country...", "Germany is...", etc.
  2. 🔢 Document Embedding (SentenceTransformer): convert each document to a vector, e.g. [0.1, -0.3, 0.7, ...]
  3. 💾 Store in Vector Database (Pinecone / Weaviate / FAISS): store document embeddings for fast retrieval

🔍 Query Processing Phase (Per Query)

  1. 👤 User Query: "What is the capital of France?"
  2. 🔢 Query Embedding (SentenceTransformer): convert the query to a vector, e.g. [0.2, -0.5, 0.8, ...]
  3. 📐 Cosine Similarity (NumPy): cos(θ) = (q·d) / (||q|| × ||d||); compare the query vector with all document vectors
  4. 🔍 Top-k Retrieval (NumPy argsort / Vector DB): select the top 3-5 documents with the highest similarity scores
  5. 📝 Context Assembly (Python string): build the prompt as context + question
  6. 🤖 LLM Generation (OpenAI API / Transformers): generate the answer based on the retrieved context
  7. Final Answer: "The capital of France is Paris."
Key Libraries & Components:
  • SentenceTransformer: Converts text (queries and documents) into dense vector embeddings. Used in both indexing (documents) and query processing phases.
  • Vector Database: Pinecone, Weaviate, Chroma, or FAISS stores document embeddings for fast similarity search. Documents are embedded once during indexing.
  • NumPy: Computes cosine similarity using the formula: cos(θ) = (q·d) / (||q|| × ||d||). Compares query embedding with all stored document embeddings.
  • NumPy argsort: Sorts similarity scores to find top-k documents with highest cosine similarity values.
  • OpenAI API / Transformers: Language model for generating answers based on retrieved context
  • Python String Operations: Assembles the final prompt with context and question
Key Process Steps:
  1. Document Indexing (One-time): All documents are embedded using SentenceTransformer and stored in a vector database. This happens once when building the knowledge base.
  2. Query Processing (Per Query): Each user query is embedded, then cosine similarity is computed against all stored document embeddings to find the most relevant documents.
  3. Retrieval: Top-k documents with highest similarity scores are retrieved and used as context.
  4. Generation: LLM generates the final answer using the retrieved context.
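
As a compact illustration of the two phases above, here is a minimal sketch that indexes three example documents once, then embeds a query, ranks the documents by cosine similarity, and assembles the prompt. It assumes the sentence-transformers package and the all-MiniLM-L6-v2 model used later in this chapter; the final LLM call is omitted here and shown in the Implementation section.

from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer('all-MiniLM-L6-v2')

# --- Document indexing phase (one-time) ---
documents = [
    "France is a country in Europe. Its capital is Paris.",
    "Germany is a country in Central Europe. Its capital is Berlin.",
    "Italy is a country in Southern Europe. Its capital is Rome."
]
doc_embeddings = embedder.encode(documents, convert_to_tensor=True)

# --- Query processing phase (per query) ---
query = "What is the capital of France?"
query_embedding = embedder.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every document embedding
scores = util.cos_sim(query_embedding, doc_embeddings)[0]

# Top-k retrieval and context assembly (k=2 here)
top_results = scores.topk(2)
context = "\n".join(documents[int(i)] for i in top_results.indices)
prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
print(prompt)  # this prompt is what would be sent to the LLM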

Key Concepts

RAG Architecture

Components:

  • Knowledge Base: Collection of documents (vector database)
  • Retriever: Finds relevant documents for query
  • LLM: Generates answer using retrieved context

Process:

  1. User asks question
  2. Retriever searches knowledge base
  3. Top-k relevant documents retrieved
  4. Documents added to LLM prompt as context
  5. LLM generates answer based on context

RAG vs Fine-tuning

RAG advantages:

  • No training required
  • Easy to update knowledge (just add documents)
  • Can cite sources
  • Works with any LLM

Fine-tuning advantages:

  • Better for learning specific patterns
  • No retrieval latency
  • More consistent behavior

Mathematical Formulations

RAG Generation Process

\[P_{\text{RAG}}(y \mid q) = P\big(y \mid q, \text{Retrieve}(q, D)\big)\]
What This Formula Means:

This formula captures the core principle of RAG: instead of modeling the answer distribution from the query alone, a RAG system generates the answer \(y\) conditioned on both the query \(q\) and the documents retrieved from the knowledge base.

Breaking It Down:
  • \(P(y \mid q)\): Traditional LLM approach - probability of answer \(y\) given only the query \(q\). This relies solely on the model's training data.
  • \(P_{\text{RAG}}(y \mid q) = P(y \mid q, \text{Retrieve}(q, D))\): RAG approach - probability of answer \(y\) given both the query \(q\) AND the retrieved context from knowledge base \(D\).
  • \(\text{Retrieve}(q, D)\): Function that searches knowledge base \(D\) and returns the most relevant documents for query \(q\).
Key Insight:

The formula shows that RAG augments the generation process by conditioning on retrieved documents. Instead of generating from memory alone, the LLM generates based on both the query and the retrieved factual context, leading to more accurate and up-to-date responses.

Example:

If a user asks "What happened in Q4 2024?", the traditional model might say "I don't have information about that" (because its training data cuts off earlier). But with RAG, \(\text{Retrieve}(q, D)\) finds the Q4 2024 report in the knowledge base, and the model generates an answer based on that actual document.
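
To make the conditioning concrete: the only thing that changes between the two sides of the formula is what the prompt contains. Below is a minimal sketch using the France example from this chapter, with the retrieval result hard-coded to stand in for \(\text{Retrieve}(q, D)\); a real system would call a retriever instead.

query = "What is the capital of France?"

# P(y | q): the LLM conditions on the query alone
prompt_without_rag = f"Question: {query}\nAnswer:"

# P(y | q, Retrieve(q, D)): the LLM also conditions on retrieved documents
# (hard-coded here purely for illustration)
retrieved_docs = ["France is a country in Europe. Its capital is Paris."]
context = "\n".join(retrieved_docs)
prompt_with_rag = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(prompt_without_rag)
print(prompt_with_rag)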

Retrieval Score (Cosine Similarity)

\[\text{score}(q, d) = \frac{q \cdot d}{\|q\| \|d\|} = \cos(\theta)\]
What This Formula Measures:

Cosine similarity measures how similar two vectors are in direction, regardless of their magnitude. It's the cosine of the angle \(\theta\) between the query embedding vector \(q\) and document embedding vector \(d\).

Breaking It Down:
  • \(q \cdot d\): Dot product of query and document vectors. Measures how much the vectors point in the same direction.
  • \(\|q\|\): Magnitude (length) of query vector = \(\sqrt{q_1^2 + q_2^2 + \ldots + q_n^2}\)
  • \(\|d\|\): Magnitude (length) of document vector
  • \(\frac{q \cdot d}{\|q\| \|d\|}\): Normalizes the dot product by dividing by the product of magnitudes, giving us the cosine of the angle.
  • \(\cos(\theta)\): The cosine of the angle between vectors. When vectors point in the same direction, \(\theta = 0°\) and \(\cos(0°) = 1\) (maximum similarity).
Why Cosine Similarity?
  • Range: Values always lie between -1 and 1. For sentence embeddings of natural-language text, scores typically fall between 0 and 1, because embeddings of real sentences rarely point in opposite directions.
  • Scale-invariant: Only cares about direction, not magnitude. A document about "machine learning" will have high similarity to a query about "ML" even if one is longer.
  • Semantic meaning: Embeddings capture semantic meaning, so similar meanings = similar directions = high cosine similarity.
Example:

Query: "What is artificial intelligence?"
Document 1: "AI is the simulation of human intelligence by machines" → High similarity (0.92)
Document 2: "The weather today is sunny" → Low similarity (0.15)
The retrieval system ranks Document 1 higher because its embedding vector points in a similar direction to the query embedding.
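
The formula maps directly onto a few lines of NumPy. Below is a minimal sketch using small hand-made 3-dimensional vectors purely for illustration; real sentence embeddings have hundreds of dimensions.

import numpy as np

def cosine_similarity(q: np.ndarray, d: np.ndarray) -> float:
    """score(q, d) = (q . d) / (||q|| * ||d||)"""
    return float(np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d)))

# Toy 3-dimensional "embeddings" (illustrative values only)
query = np.array([0.2, -0.5, 0.8])
doc_similar = np.array([0.25, -0.45, 0.75])   # points in a similar direction
doc_unrelated = np.array([-0.7, 0.6, -0.1])   # points in a different direction

print(cosine_similarity(query, doc_similar))    # close to 1.0 (high similarity)
print(cosine_similarity(query, doc_unrelated))  # much lower (here negative)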

Top-k Retrieval

\[D_{\text{retrieved}} = \underset{d \in D}{\text{argmax}_k}\; \text{score}(q, d)\]
What This Formula Does:

This formula selects the top \(k\) documents from knowledge base \(D\) that have the highest similarity scores with query \(q\). The \(\text{argmax}_k\) function finds the \(k\) documents that maximize the score function.

Breaking It Down:
  • \(D\): The entire knowledge base (all documents available for retrieval)
  • \(d \in D\): Each document \(d\) in the knowledge base \(D\)
  • \(\text{score}(q, d)\): Similarity score between query \(q\) and document \(d\) (typically cosine similarity)
  • \(\text{argmax}_k\): Returns the \(k\) documents with the highest scores (not just the maximum, but the top \(k\))
  • \(D_{\text{retrieved}}\): The final set of \(k\) documents selected for context
Why Top-k Instead of Just the Best?
  • Context completeness: A single document might not contain all relevant information. Multiple documents provide richer context.
  • Redundancy: Multiple sources can confirm information, reducing hallucination risk.
  • Coverage: Different documents might cover different aspects of the query.
Choosing k:
  • Small k (3-5): Faster, lower cost, but might miss relevant information. Good for simple queries.
  • Medium k (5-10): Balanced approach, most common in production RAG systems.
  • Large k (10+): More comprehensive but increases latency, cost, and may include irrelevant documents that confuse the LLM.
Example:

Query: "How does RAG work?"
Knowledge base has 1000 documents. The system calculates similarity scores for all 1000, then selects the top 5 documents with scores: [0.95, 0.92, 0.89, 0.87, 0.85]. These 5 documents are passed to the LLM as context for generating the answer.
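
In code, top-k selection reduces to sorting (or partially sorting) the similarity scores. Below is a minimal sketch with illustrative scores that include the five values from the example above; np.argpartition is an optional optimization that avoids fully sorting a large score array.

import numpy as np

# Illustrative similarity scores for 10 documents (made-up values)
scores = np.array([0.41, 0.95, 0.12, 0.89, 0.92, 0.33, 0.87, 0.85, 0.20, 0.64])
k = 5

# Full sort: indices of the k highest-scoring documents, best first
top_k_indices = np.argsort(scores)[-k:][::-1]
print(top_k_indices, scores[top_k_indices])   # scores 0.95, 0.92, 0.89, 0.87, 0.85

# Partial sort: argpartition finds the top k in O(n), then only those k are sorted
candidates = np.argpartition(scores, -k)[-k:]
top_k_indices = candidates[np.argsort(scores[candidates])[::-1]]
print(top_k_indices, scores[top_k_indices])   # same result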

Detailed Examples

Example: RAG Question Answering

User query: "What is the capital of France?"

Step 1: Query Embedding

  • Convert query to embedding vector: [0.2, -0.5, 0.8, ...]

Step 2: Retrieval

  • Search knowledge base for similar embeddings
  • Find document: "France is a country in Europe. Its capital is Paris."
  • Similarity score: 0.92

Step 3: Context Augmentation

  • Build prompt: "Context: France is a country in Europe. Its capital is Paris. Question: What is the capital of France?"

Step 4: Generation

  • LLM generates: "The capital of France is Paris."
  • Answer is grounded in retrieved context

Example: Without RAG vs With RAG

Query: "What happened in the company Q4 2024 earnings?"

Without RAG:

  • LLM only knows training data (cutoff date)
  • May hallucinate or say "I don't have information about that"

With RAG:

  • Retrieves Q4 2024 earnings report from knowledge base
  • LLM generates answer based on actual report
  • Accurate, up-to-date information

Implementation

Simple RAG Implementation

from sentence_transformers import SentenceTransformer
import numpy as np
from openai import OpenAI  # Or use transformers pipeline for local models

class SimpleRAG:
    """
    Basic RAG implementation with proper cosine similarity calculation.
    
    This class implements the core RAG pipeline:
    1. Document embedding and storage
    2. Query embedding and retrieval
    3. Context augmentation and generation
    """
    
    def __init__(self, embedding_model='all-MiniLM-L6-v2'):
        """
        Initialize RAG system with embedding model.
        
        Args:
            embedding_model: Sentence transformer model name
        """
        self.embedder = SentenceTransformer(embedding_model)
        # For production, configure an LLM client here, e.g.:
        # self.client = OpenAI(api_key="your-api-key")
        self.client = None  # set to an OpenAI client to enable use_openai=True in generate()
        self.documents = []
        self.embeddings = None
    
    def add_documents(self, docs):
        """
        Add documents to knowledge base and compute embeddings.
        
        Args:
            docs: List of document strings to add
        """
        if not docs:
            raise ValueError("Documents list cannot be empty")
        
        # Note: this replaces any previously added documents and their embeddings
        self.documents = docs
        # Encode all documents into embeddings (vectors)
        self.embeddings = self.embedder.encode(docs, show_progress_bar=False)
        print(f"Added {len(docs)} documents to knowledge base")
    
    def retrieve(self, query, top_k=3):
        """
        Retrieve top-k most relevant documents using cosine similarity.
        
        Args:
            query: User query string
            top_k: Number of documents to retrieve
            
        Returns:
            List of top-k most relevant document strings
        """
        if self.embeddings is None or len(self.documents) == 0:
            raise ValueError("No documents in knowledge base. Call add_documents() first.")
        
        # Encode query into embedding vector
        query_embedding = self.embedder.encode([query])
        
        # Compute cosine similarity: (q · d) / (||q|| * ||d||)
        # Step 1: Compute dot product between query and all document embeddings
        dot_products = np.dot(self.embeddings, query_embedding.T).flatten()
        
        # Step 2: Compute norms (magnitudes) of embeddings
        query_norm = np.linalg.norm(query_embedding)
        doc_norms = np.linalg.norm(self.embeddings, axis=1)
        
        # Step 3: Normalize to get cosine similarity (range: -1 to 1, typically 0 to 1)
        cosine_similarities = dot_products / (query_norm * doc_norms)
        
        # Step 4: Get indices of top-k documents with highest similarity
        top_indices = np.argsort(cosine_similarities)[-top_k:][::-1]
        
        # Return the actual documents (not just indices)
        retrieved_docs = [self.documents[i] for i in top_indices]
        
        print(f"Retrieved {len(retrieved_docs)} documents with similarities: {cosine_similarities[top_indices]}")
        return retrieved_docs
    
    def generate(self, query, top_k=3, use_openai=False):
        """
        Generate answer using RAG pipeline.
        
        Args:
            query: User query
            top_k: Number of documents to retrieve
            use_openai: Whether to use OpenAI API (requires API key)
            
        Returns:
            Generated answer string
        """
        # Step 1: Retrieve relevant documents
        context_docs = self.retrieve(query, top_k)
        
        # Step 2: Build prompt with retrieved context
        context = "\n\n".join([f"Document {i+1}: {doc}" for i, doc in enumerate(context_docs)])
        prompt = f"""Based on the following context documents, answer the question.

Context:
{context}

Question: {query}

Answer:"""
        
        # Step 3: Generate answer using LLM
        if use_openai:
            # Using OpenAI API (recommended for production)
            if self.client is None:
                raise ValueError("OpenAI client not configured. Set self.client = OpenAI(api_key=...) first.")
            response = self.client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=[
                    {"role": "system", "content": "You are a helpful assistant that answers questions based on provided context."},
                    {"role": "user", "content": prompt}
                ],
                temperature=0.7,
                max_tokens=200
            )
            return response.choices[0].message.content
        else:
            # Using local model (for demonstration)
            # Note: GPT-2 is not ideal for Q&A. Use better models in production.
            from transformers import pipeline
            llm = pipeline("text-generation", model="gpt2")
            # max_new_tokens limits only the generated continuation, independent of prompt length
            result = llm(prompt, max_new_tokens=100, num_return_sequences=1)
            # Extract only the generated part (after the prompt)
            generated_text = result[0]['generated_text']
            answer = generated_text[len(prompt):].strip()
            return answer

# Example usage
if __name__ == "__main__":
    # Initialize RAG system
    rag = SimpleRAG()
    
    # Add documents to knowledge base
    rag.add_documents([
        "France is a country in Europe. Its capital is Paris, a major cultural and economic center.",
        "Germany is a country in Central Europe. Its capital is Berlin, known for its history and technology sector.",
        "Italy is a country in Southern Europe. Its capital is Rome, famous for its ancient history and art."
    ])
    
    # Ask a question
    query = "What is the capital of France?"
    answer = rag.generate(query, top_k=2)
    print(f"\nQuestion: {query}")
    print(f"Answer: {answer}")
Key Implementation Details:
  • Cosine Similarity Calculation: Properly normalized by dividing dot product by the product of vector norms. This ensures values are in the range [-1, 1] and represent true cosine similarity.
  • Document Embedding: All documents are embedded once when added to the knowledge base, making retrieval fast.
  • Top-k Retrieval: Uses np.argsort to efficiently find the k documents with highest similarity scores.
  • Prompt Engineering: Structured prompt that clearly separates context from question, helping the LLM understand what to answer.
  • Error Handling: Checks for empty documents and missing knowledge base before operations.
Production Considerations:
  • Use OpenAI API, Anthropic Claude, or better local models (like Llama 2) instead of GPT-2 for better Q&A performance.
  • Implement a vector database (Pinecone, Weaviate) or an ANN index such as FAISS for large-scale document storage, as sketched after this list.
  • Add caching for frequently asked queries.
  • Implement proper logging and monitoring.
  • Add source citation to show which documents were used.
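
As one possible sketch of the vector database point above, the snippet below swaps the brute-force NumPy search for a FAISS index (IndexFlatIP, an exact inner-product index; embeddings are L2-normalized so inner product equals cosine similarity). It assumes the faiss-cpu and sentence-transformers packages; for very large collections you would move to an approximate index such as IndexIVFFlat or a managed vector database. The returned indices and scores also make source citation straightforward.

import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer('all-MiniLM-L6-v2')

documents = [
    "France is a country in Europe. Its capital is Paris.",
    "Germany is a country in Central Europe. Its capital is Berlin.",
    "Italy is a country in Southern Europe. Its capital is Rome."
]

# Index once: normalize embeddings so inner product equals cosine similarity
doc_embeddings = np.asarray(embedder.encode(documents), dtype="float32")
faiss.normalize_L2(doc_embeddings)
index = faiss.IndexFlatIP(doc_embeddings.shape[1])
index.add(doc_embeddings)

# Query: search returns similarity scores and document indices
query_embedding = np.asarray(embedder.encode(["What is the capital of France?"]), dtype="float32")
faiss.normalize_L2(query_embedding)
scores, indices = index.search(query_embedding, 2)

for score, idx in zip(scores[0], indices[0]):
    print(f"[score={score:.2f}] Source doc {idx}: {documents[idx]}")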

Real-World Applications

1. Customer Support Chatbots - Real-World Example

Company: Major e-commerce platform (similar to Amazon, Shopify)

Problem: Customer support receives thousands of questions daily about product features, shipping policies, returns, and technical issues. Traditional chatbots with fixed responses couldn't handle the variety of questions, and human agents were overwhelmed.

RAG Solution:

  • Knowledge Base: Indexed all product documentation, FAQ pages, shipping policies, return procedures, and troubleshooting guides (50,000+ documents)
  • Implementation: Built RAG system that retrieves relevant documentation for each customer query and generates accurate, up-to-date answers
  • Results:
    • ✅ 70% of customer queries resolved automatically (up from 30% with rule-based chatbot)
    • ✅ Response time reduced from 5 minutes (human agent) to 10 seconds (RAG chatbot)
    • ✅ Customer satisfaction increased by 25%
    • ✅ Support costs reduced by 40%

Example Interaction:

Customer: "How do I return an item I ordered last week?"

RAG System: Retrieves return policy document, shipping policy, and order management guide. Generates answer:

"You can return items within 30 days of purchase. To initiate a return: 1) Go to 'My Orders' in your account, 2) Select the item and click 'Return', 3) Print the prepaid return label, 4) Package the item and drop it off at any carrier location. Refunds are processed within 5-7 business days after we receive the item. [Source: Return Policy v2.3, Shipping Guide 2024]"

Why RAG Works Here: Policies change frequently, products are added daily, and customers ask questions in many different ways. RAG can handle all of this without retraining the model.

2. Enterprise Knowledge Base - Real-World Example

Company: Large technology company (similar to Microsoft, Google)

Problem: Company has 10,000+ employees across multiple departments. Internal documentation includes technical specs, API documentation, HR policies, project documentation, and meeting notes. Employees spent hours searching through documents to find information, and often couldn't find what they needed.

RAG Solution:

  • Knowledge Base: Indexed all internal documentation (200,000+ documents) including:
    • Technical documentation and API references
    • HR policies and employee handbooks
    • Project documentation and meeting notes
    • Code documentation and architecture decisions
  • Implementation: Built internal RAG-powered search system accessible via Slack, Teams, and web interface
  • Results:
    • ✅ 60% reduction in time spent searching for information
    • ✅ 80% of queries successfully answered (vs. 40% with traditional search)
    • ✅ Knowledge sharing improved - employees discover relevant documentation they didn't know existed
    • ✅ Onboarding time for new employees reduced by 30%

Example Interaction:

Employee: "What's the process for deploying to production?"

RAG System: Retrieves deployment guide, CI/CD documentation, and recent deployment notes. Generates comprehensive answer with step-by-step process, links to relevant documentation, and recent changes.

Why RAG Works Here: Documentation is constantly updated, spread across many systems, and employees need to find information quickly. RAG understands semantic meaning, so it finds relevant docs even when exact keywords don't match.

3. Legal Document Analysis - Real-World Example

Company: Law firm specializing in contract review

Problem: Lawyers spend hours reviewing contracts, legal documents, and case files to answer client questions. Each contract review takes 4-6 hours, and lawyers need to reference previous cases, legal precedents, and regulatory documents.

RAG Solution:

  • Knowledge Base: Indexed:
    • All previous contracts and case files (100,000+ documents)
    • Legal precedents and case law
    • Regulatory documents and compliance guidelines
    • Client-specific documentation
  • Implementation: RAG system that helps lawyers quickly find relevant precedents, similar contracts, and regulatory requirements
  • Results:
    • ✅ Contract review time reduced from 4-6 hours to 1-2 hours
    • ✅ 90% accuracy in finding relevant precedents (vs. 60% with manual search)
    • ✅ Lawyers can handle 2x more cases
    • ✅ Better consistency - all lawyers have access to same knowledge base

Example Interaction:

Lawyer: "What are the standard terms for data privacy clauses in software licensing agreements?"

RAG System: Retrieves relevant contracts, GDPR compliance documents, and previous case analyses. Generates answer with common terms, variations, and relevant precedents, all with source citations.

Why RAG Works Here: Legal documents use precise terminology, but questions can be phrased in many ways. RAG understands semantic relationships and can find relevant documents even when exact phrases don't match. Source attribution is critical for legal work, which RAG provides.

4. Medical Information System - Real-World Example

Organization: Hospital system with multiple facilities

Problem: Doctors and medical staff need quick access to medical guidelines, drug information, treatment protocols, and research papers. Medical information changes frequently, and staff need the most up-to-date information to make critical decisions.

RAG Solution:

  • Knowledge Base: Indexed:
    • Medical guidelines and treatment protocols
    • Drug databases and interaction information
    • Recent research papers and clinical trials
    • Hospital-specific procedures and policies
  • Implementation: RAG-powered medical assistant accessible via tablets and computers in patient rooms
  • Results:
    • ✅ Doctors get answers in 10-15 seconds (vs. 5-10 minutes searching databases)
    • ✅ Always up-to-date - new research and guidelines added immediately
    • ✅ Better patient care - doctors have access to latest treatment options
    • ✅ Reduced medical errors - system provides evidence-based recommendations with citations

Example Interaction:

Doctor: "What are the latest treatment options for Type 2 diabetes in elderly patients with kidney complications?"

RAG System: Retrieves latest diabetes treatment guidelines, kidney disease management protocols, and recent research on elderly patients. Generates comprehensive answer with treatment options, dosages, contraindications, and links to full guidelines.

Why RAG Works Here: Medical information is constantly evolving. New research, updated guidelines, and drug approvals happen frequently. RAG allows the system to stay current without retraining. Source citations are essential for medical decisions, which RAG provides.

Key Advantages of RAG in Production

1. Up-to-Date Information: Unlike traditional LLMs with fixed training data, RAG systems can be updated instantly by adding new documents. This is critical for industries where information changes rapidly (legal, medical, technology).

2. Source Attribution: RAG systems can cite the exact documents used to generate answers. This is essential for:

  • Legal work (need to cite precedents)
  • Medical decisions (need evidence-based sources)
  • Enterprise knowledge (need to verify information)
  • Customer support (need to reference policy documents)

3. Reduced Hallucination: By grounding answers in retrieved documents, RAG systems are much less likely to "make up" information. This is critical for applications where accuracy is essential (medical, legal, financial).

4. Domain-Specific Knowledge: RAG systems can work with any domain-specific knowledge base without fine-tuning the LLM. This makes it cost-effective and fast to deploy for specialized use cases.

5. Easy Updates: Adding new information is as simple as adding documents to the knowledge base. No model retraining required, making it practical for production systems that need to stay current.

Test Your Understanding

Question 1: What is RAG (Retrieval-Augmented Generation)?

A) A system that combines information retrieval with language generation, retrieving relevant information from external sources before generating responses
B) A method to train LLMs faster
C) A type of neural network architecture
D) A database system

Question 2: What are the main limitations of traditional LLMs that RAG addresses?

A) Speed and cost only
B) Hallucination, outdated information, lack of source attribution, and knowledge cutoff dates
C) Model size
D) Training time

Question 3: In the RAG formula \(P_{\text{RAG}}(y \mid q) = P(y \mid q, \text{Retrieve}(q, D))\), what does \(\text{Retrieve}(q, D)\) represent?

A) The generated answer
B) The query embedding
C) The retrieved relevant documents from knowledge base D for query q
D) The LLM model

Question 4: Interview question: "What are the key advantages of RAG over fine-tuning?"

A) Faster inference
B) Smaller model size
C) Lower cost
D) No training required, easy to update knowledge by adding documents, can cite sources, works with any LLM, and no retraining needed for new information

Question 5: What is cosine similarity used for in RAG?

A) Training the LLM
B) Generating embeddings
C) Chunking documents
D) Measuring semantic similarity between query and document embeddings to find relevant documents

Question 6: What does top-k retrieval mean in RAG?

A) Using k different models
B) Selecting the k documents with highest similarity scores (typically k=3-10)
C) Training k times
D) Processing k queries

Question 7: Interview question: "When would you choose RAG over fine-tuning?"

A) Always choose RAG
B) Always choose fine-tuning
C) When you need up-to-date information, want source attribution, have frequently changing knowledge, or need to work with domain-specific documents without retraining
D) When you need faster inference

Question 8: What is the typical RAG pipeline flow?

A) Query → LLM → Response
B) Documents → Training → Model
C) Query → Database → Response
D) Query → Embedding → Retrieval → Context Assembly → LLM Generation → Response

Question 9: What is the main difference between RAG and traditional LLM responses?

A) RAG is faster
B) RAG uses smaller models
C) RAG retrieves relevant context from external knowledge sources before generating, while traditional LLMs only use training data
D) There's no difference

Question 10: Interview question: "How does RAG reduce hallucination?"

A) By using smaller models
B) By training more
C) By using faster inference
D) By grounding responses in retrieved documents from the knowledge base, providing factual context that constrains the LLM's generation

Question 11: What is the cosine similarity formula \(\text{score}(q, d) = \frac{q \cdot d}{\|q\| \|d\|}\) measuring?

A) Euclidean distance
B) Dot product only
C) Vector magnitude
D) The cosine of the angle between query and document vectors, indicating semantic similarity (range: -1 to 1, higher = more similar)

Question 12: Interview question: "What are the key components of a RAG system?"

A) Just an LLM
B) Just a database
C) Document knowledge base, embedding model, vector database, retrieval mechanism, LLM for generation, and context assembly logic
D) Just embeddings