A Deep Dive into LangChain Expression Language (LCEL)
Learn how to construct complex AI pipelines effortlessly using the declarative power of LangChain Expression Language (LCEL).
Posted on: 2026-03-24 by AI Assistant

Building complex applications with Large Language Models (LLMs) often involves chaining together multiple components: prompt templates, models, output parsers, and external tools. While you can write custom Python code to link these parts, it quickly becomes verbose and difficult to manage.
Enter LangChain Expression Language (LCEL).
In this tutorial, you will learn how LCEL simplifies the creation, modification, and execution of AI pipelines, offering a cleaner, more declarative way to orchestrate your LLM applications.
What is LCEL?
LangChain Expression Language is a declarative syntax provided by LangChain that makes it easy to compose chains together. It uses the | (pipe) operator—similar to Unix pipes—to pass the output of one component directly as the input to the next.
LCEL isn’t just about syntax; it comes with built-in features like streaming, asynchronous support, batched execution, and automatic fallbacks, making it highly suitable for production environments.
Prerequisites
Before we begin, ensure you have the following:
- Python 3.10+
- The langchain and langchain-openai packages installed.
- An OpenAI API key.
pip install langchain langchain-openai
export OPENAI_API_KEY="your-api-key-here"
The Traditional Way vs. LCEL
Let’s look at a simple example: generating a joke about a specific topic and then parsing the output.
Without LCEL (The old way)
from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
prompt = PromptTemplate.from_template("Tell me a short joke about {topic}")
model = ChatOpenAI(model="gpt-3.5-turbo")
parser = StrOutputParser()
# Execution involves manually passing data
formatted_prompt = prompt.format(topic="ice cream")
response = model.invoke(formatted_prompt)
parsed_output = parser.invoke(response)
print(parsed_output)
With LCEL (The new way)
Using LCEL, we define the entire chain as a single, readable pipeline:
from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
prompt = PromptTemplate.from_template("Tell me a short joke about {topic}")
model = ChatOpenAI(model="gpt-3.5-turbo")
parser = StrOutputParser()
# The LCEL Chain
chain = prompt | model | parser
# Execution is a single call
result = chain.invoke({"topic": "ice cream"})
print(result)
Notice how prompt | model | parser reads exactly like the flow of data. The invoke method automatically passes the dictionary {"topic": "ice cream"} into the prompt, feeds the formatted prompt to the model, and then sends the model’s output to the parser.
Advanced Routing with LCEL
LCEL truly shines when you need complex logic, such as parallel execution or routing. Let’s say we want to route a user’s question to a specific expert based on the topic.
We can use RunnableBranch or a custom routing function wrapped in a RunnableLambda.
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
model = ChatOpenAI(model="gpt-3.5-turbo")
parser = StrOutputParser()
# Define our experts
math_prompt = PromptTemplate.from_template("You are a math genius. Answer: {question}")
history_prompt = PromptTemplate.from_template("You are a history buff. Answer: {question}")
general_prompt = PromptTemplate.from_template("Answer the question: {question}")
# A simple classifier model
classifier_prompt = PromptTemplate.from_template(
    "Classify the question as 'math', 'history', or 'general'. Question: {question}"
)
classifier_chain = classifier_prompt | model | parser
def route_question(info):
    category = info["topic"].strip().lower()
    if "math" in category:
        return math_prompt | model | parser
    elif "history" in category:
        return history_prompt | model | parser
    else:
        return general_prompt | model | parser
# The full routing chain
full_chain = {
    "topic": classifier_chain,
    "question": lambda x: x["question"],
} | RunnableLambda(route_question)
result = full_chain.invoke({"question": "Who was the first president of the United States?"})
print(result)
In this example, the input question is first sent to the classifier chain, while the original question string is forwarded alongside it under the question key. The combined dictionary is then sent to our custom router, which dynamically selects the appropriate sub-chain to execute.
Built-in Superpowers
By constructing chains with LCEL, you automatically inherit powerful methods:
- chain.stream(): Stream chunks of the response as they are generated.
- chain.ainvoke(): Run the chain asynchronously.
- chain.batch(): Process a list of inputs concurrently.
- chain.with_fallbacks(): Define fallback models in case the primary one fails (e.g., rate limits or downtime).
Conclusion
LangChain Expression Language transforms the way developers build AI pipelines. By adopting a declarative approach, your code becomes easier to read, test, and scale. You’ve learned how to create basic pipelines, utilize custom routing logic, and take advantage of built-in features like streaming and batching.
What’s Next? Try adding a fallback model to your chain using .with_fallbacks(), or experiment with RunnableParallel to execute multiple chains simultaneously and combine their results. Happy chaining!