The Art of the Prompt: How to A/B Test Your Prompts in a Live Application

Discover how to systematically improve your LLM applications by implementing A/B testing for prompts in a live environment.

Posted on: 2026-03-24 by AI Assistant


Prompt engineering is rarely a “one and done” task. What works perfectly during local testing might fail spectacularly when exposed to real users. The only way to know for sure which prompt yields the best results (higher accuracy, lower latency, or better user engagement) is to test the candidates against each other in the wild.

In this tutorial, you will learn how to set up an A/B testing framework for your LLM prompts in a live Python application using an observability tool like LangSmith or a custom database logger.

Why A/B Test Prompts?

A/B testing prompts allows you to:

- Compare prompt variants on real traffic instead of hand-picked test cases
- Measure the impact of a change on accuracy, latency, and user engagement
- Make data-driven decisions about which prompt to promote to all users
- Iterate continuously without risking a regression for your entire user base

Prerequisites

To follow this guide, you will need:

- Python 3.9 or later
- The fastapi and pydantic packages installed (plus uvicorn to run the server)
- Basic familiarity with calling an LLM API from Python

Step 1: Managing Prompt Versions

Instead of hardcoding prompts directly into your application, you need a way to manage and retrieve different versions dynamically. Let’s create a simple dictionary to simulate a prompt registry.

# prompt_registry.py

PROMPT_VARIANTS = {
    "v1_concise": "You are a helpful assistant. Answer the user's question in exactly one sentence. Question: {question}",
    "v2_detailed": "You are an expert assistant. Provide a detailed, step-by-step answer to the user's question. Question: {question}"
}

def get_prompt(variant_id: str) -> str:
    return PROMPT_VARIANTS.get(variant_id, PROMPT_VARIANTS["v1_concise"])
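One useful property of this registry: unknown variant IDs fall back to the default instead of raising a KeyError, so a stale ID in a config or log replay never crashes a live request. A quick standalone check (the registry is repeated here so the snippet runs on its own):

```python
PROMPT_VARIANTS = {
    "v1_concise": "You are a helpful assistant. Answer the user's question in exactly one sentence. Question: {question}",
    "v2_detailed": "You are an expert assistant. Provide a detailed, step-by-step answer to the user's question. Question: {question}",
}

def get_prompt(variant_id: str) -> str:
    # Unknown IDs fall back to the default variant rather than raising KeyError.
    return PROMPT_VARIANTS.get(variant_id, PROMPT_VARIANTS["v1_concise"])

# A misspelled or retired variant ID degrades gracefully to the default.
print(get_prompt("no_such_variant") == PROMPT_VARIANTS["v1_concise"])  # True

# Templates are formatted with str.format at request time.
print(get_prompt("v2_detailed").format(question="What is A/B testing?"))
```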

Step 2: Implementing the Routing Logic

When a request comes in, you need to randomly assign the user (or request) to one of the variants.

# router.py
import random
from prompt_registry import get_prompt

def select_variant() -> str:
    # 50/50 split between v1 and v2
    return random.choice(["v1_concise", "v2_detailed"])

def generate_response(question: str):
    variant_id = select_variant()
    prompt_template = get_prompt(variant_id)
    
    # Format the prompt
    formatted_prompt = prompt_template.format(question=question)
    
    # ... call your LLM here using formatted_prompt ...
    llm_response = f"Simulated response for: {formatted_prompt}"
    
    return {
        "variant_id": variant_id,
        "response": llm_response
    }
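Purely random assignment means a returning user can bounce between variants on every request, which muddies engagement metrics. A common refinement is sticky assignment: hash a stable user ID so the same user always lands in the same bucket while the population still splits roughly evenly. A minimal sketch (the user_id parameter is an assumption, not part of the code above):

```python
import hashlib

def select_variant_for_user(user_id: str,
                            variants=("v1_concise", "v2_detailed")) -> str:
    """Deterministically assign a user to a variant (sticky A/B bucketing)."""
    # Hashing the user ID makes the assignment stable across requests,
    # while SHA-256's uniformity keeps the split close to 50/50 overall.
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same user always gets the same variant.
print(select_variant_for_user("user-123") == select_variant_for_user("user-123"))  # True
```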

Step 3: Logging and Gathering Feedback

A/B testing is useless without data. You must log which variant was used and, crucially, gather feedback from the user.

Let’s build a simple FastAPI endpoint that returns the response, plus a second endpoint to capture user feedback (e.g., a thumbs up or down).

# main.py
from fastapi import FastAPI
from pydantic import BaseModel
from router import generate_response
import logging
import random

app = FastAPI()

# Setup basic logging to simulate a database
logging.basicConfig(level=logging.INFO)

class QueryRequest(BaseModel):
    question: str

class FeedbackRequest(BaseModel):
    query_id: str
    variant_id: str
    score: int # 1 for thumbs up, 0 for thumbs down

@app.post("/ask")
async def ask_question(req: QueryRequest):
    # In a real app, you'd generate a unique query_id here
    query_id = f"q_{random.randint(1000, 9999)}"
    
    result = generate_response(req.question)
    
    # Log the variant assignment
    logging.info(f"Query {query_id} assigned to variant {result['variant_id']}")
    
    return {
        "query_id": query_id,
        "variant_id": result["variant_id"],
        "answer": result["response"]
    }

@app.post("/feedback")
async def submit_feedback(req: FeedbackRequest):
    # Log the feedback against the specific variant
    logging.info(f"Feedback for {req.variant_id} on query {req.query_id}: {req.score}")
    return {"status": "success"}

Step 4: Analyzing the Results

After running your application for a while, you can analyze your logs or database to calculate the win rate for each prompt variant.

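With feedback logged, computing a per-variant win rate is a small aggregation. Here is a sketch over hypothetical in-memory feedback records (in production, these rows would come from your database or log store):

```python
from collections import defaultdict

def win_rates(feedback_records):
    """Compute the fraction of thumbs-up (score == 1) per variant."""
    totals = defaultdict(lambda: {"wins": 0, "count": 0})
    for rec in feedback_records:
        stats = totals[rec["variant_id"]]
        stats["count"] += 1
        stats["wins"] += rec["score"]
    return {v: s["wins"] / s["count"] for v, s in totals.items()}

# Hypothetical feedback rows, shaped like the FeedbackRequest payloads above.
records = [
    {"variant_id": "v1_concise", "score": 1},
    {"variant_id": "v1_concise", "score": 0},
    {"variant_id": "v2_detailed", "score": 1},
    {"variant_id": "v2_detailed", "score": 1},
]
print(win_rates(records))  # {'v1_concise': 0.5, 'v2_detailed': 1.0}
```

Before acting on a gap between variants, feed the raw counts into a two-proportion z-test or chi-squared test so you know the difference is not just noise from a small sample.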
If one variant consistently earns higher feedback scores across a meaningful sample size, you have quantitative evidence for promoting it to all users.

Conclusion

A/B testing prompts is a crucial step in moving from a prototype to a production-grade AI application. By separating your prompt logic, routing users intelligently, and diligently capturing feedback, you can continuously iterate and improve the quality of your LLM outputs.

What’s Next? Try integrating an LLM observability platform like LangSmith, Helicone, or Arize to automate the tracking and dashboarding of your A/B test results. They provide out-of-the-box support for variant tagging and feedback collection.