Power Up Your Terminal: Build a Natural Language CLI Tool with Typer and an LLM
Learn how to build a custom CLI tool that understands natural language commands using Python, Typer, and the Gemini API.
Posted on: 2026-03-13 by AI Assistant

Imagine being able to type `ask "Find all large log files in /var/log and zip them"` into your terminal and having it happen automatically. In this tutorial, we’ll build exactly that: a CLI tool that translates natural language into executable shell commands using Python, Typer, and the Gemini API.
Why Typer?
Typer is a library for building CLI applications based on Python type hints. It’s incredibly intuitive, provides automatic help pages, and makes building professional CLI tools a breeze.
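To see what that means in practice, here is a minimal, self-contained sketch (the `greet` command and `format_greeting` helper are illustrative, not part of the tool we’ll build): the `str` type hint becomes a required argument and the `bool` hint becomes a `--shout/--no-shout` flag, all inferred by Typer.

```python
import typer

app = typer.Typer()


def format_greeting(name: str, shout: bool = False) -> str:
    """Build the greeting string; kept separate so it is easy to test."""
    message = f"Hello, {name}!"
    return message.upper() if shout else message


@app.command()
def greet(name: str, shout: bool = False):
    """Greet NAME, optionally in uppercase."""
    # Typer turns the type hints into CLI arguments and flags automatically.
    typer.echo(format_greeting(name, shout))
```

Add `app()` under an `if __name__ == "__main__":` guard, save as `greet.py`, and `python greet.py --help` prints a generated usage page listing `NAME` and `--shout` with no extra code.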
Prerequisites
- Python 3.10+
- A Gemini API Key (get one at aistudio.google.com)
Install the dependencies:

```shell
pip install typer google-generativeai rich
```
The Plan
- Capture user input (the natural language command).
- Send the input to the Gemini API with a specific prompt.
- Receive the generated shell command.
- (Optional but recommended) Ask for user confirmation before executing.
- Execute the command.
Step 1: Setting up the CLI structure
First, let’s create our basic Typer app.
```python
import os
import subprocess

import typer
import google.generativeai as genai
from rich import print
from rich.prompt import Confirm

app = typer.Typer()

# Configure Gemini. Read the key from the environment rather than
# hardcoding it in the source file.
genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")


@app.command()
def ask(prompt: str):
    """
    Translate natural language to shell commands and execute them.
    """
    print(f"[bold blue]Processing:[/bold blue] {prompt}")

    # Generate the command
    command = generate_command(prompt)
    if not command:
        print("[bold red]Could not generate a command.[/bold red]")
        return

    print(f"\n[bold green]Suggested Command:[/bold green]\n{command}\n")

    # Confirm before execution
    if Confirm.ask("Do you want to run this command?"):
        execute_command(command)
    else:
        print("[yellow]Operation cancelled.[/yellow]")


def generate_command(user_prompt: str) -> str:
    system_prompt = (
        "You are an expert Linux sysadmin. Translate the user's request "
        "into a single, safe, and efficient shell command. "
        "Return ONLY the command itself, no explanation, no backticks."
    )
    response = model.generate_content(f"{system_prompt}\n\nUser request: {user_prompt}")
    return response.text.strip()


def execute_command(command: str):
    try:
        # shell=True runs the full command string through the shell;
        # check=True raises CalledProcessError on a non-zero exit code.
        result = subprocess.run(
            command, shell=True, check=True, text=True, capture_output=True
        )
        print("[bold cyan]Output:[/bold cyan]")
        print(result.stdout)
    except subprocess.CalledProcessError as e:
        print(f"[bold red]Error executing command:[/bold red]\n{e.stderr}")


if __name__ == "__main__":
    app()
```
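One practical caveat: even with the “no backticks” instruction, models occasionally wrap their reply in a Markdown code fence. A small defensive cleanup step helps; the `clean_command` helper below is a sketch of my own, not part of any official API.

```python
import re


def clean_command(raw: str) -> str:
    """Strip Markdown code fences and surrounding whitespace from LLM output.

    Defensive sketch: the model is asked to return a bare command, but
    wrapping the reply in ``` fences is a common failure mode.
    """
    text = raw.strip()
    # Remove a leading fence like ``` or ```bash, then a trailing ```
    text = re.sub(r"^```[a-zA-Z]*\n?", "", text)
    text = re.sub(r"\n?```$", "", text)
    return text.strip()
```

You could then return `clean_command(response.text)` from `generate_command` instead of `response.text.strip()`.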
Step 2: Refinement and Safety
When building LLM-powered CLI tools, safety is paramount. Notice how we:
- Ask for confirmation using `rich.prompt.Confirm`.
- Instruct the LLM to be “safe and efficient.”
- Capture errors with a `try`/`except` block around `subprocess.run`.
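For an extra layer of defense, you can refuse obviously destructive commands before they ever reach the confirmation prompt. The blocklist below is a sketch with illustrative patterns, far from exhaustive; treat it as a seatbelt, not a substitute for reading the suggested command yourself.

```python
import re

# Illustrative patterns only -- a real blocklist needs far more care.
DANGEROUS_PATTERNS = [
    r"\brm\s+(-[a-zA-Z]*r[a-zA-Z]*f|-[a-zA-Z]*f[a-zA-Z]*r)\b",  # rm -rf and friends
    r"\bmkfs\b",        # formatting a filesystem
    r"\bdd\s+if=",      # raw disk writes
    r">\s*/dev/sd",     # redirecting onto a block device
]


def looks_dangerous(command: str) -> bool:
    """Return True if the command matches a known-destructive pattern."""
    return any(re.search(p, command) for p in DANGEROUS_PATTERNS)
```

In `ask`, you could call `looks_dangerous(command)` before `Confirm.ask` and bail out (or require the user to retype the full command) when it returns True.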
How to use it
Save the code as `cli.py`. You can now run it from your terminal:

```shell
python cli.py "list the 5 largest files in my downloads folder"
```
The output will look something like this:

```
Processing: list the 5 largest files in my downloads folder

Suggested Command:
du -sh ~/Downloads/* | sort -rh | head -n 5

Do you want to run this command? [y/N]:
```
Conclusion
With just a few lines of Python, you’ve created a tool that bridges the gap between your intent and the technical syntax of the shell. You can extend this further by adding support for different shells (PowerShell, Zsh), keeping a history of commands, or even “learning” your specific project structure.
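As a taste of the command-history idea, here is a minimal sketch that appends each prompt/command pair to a JSON Lines file. The `~/.ask_history.jsonl` path and the field names are my own choices for illustration, not an established convention.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

HISTORY_FILE = Path.home() / ".ask_history.jsonl"  # hypothetical location


def log_command(prompt: str, command: str, history_file: Path = HISTORY_FILE) -> None:
    """Append one prompt/command pair as a single JSON line."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "command": command,
    }
    with history_file.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

Calling `log_command(prompt, command)` just before `execute_command` would give you a greppable audit trail of everything the tool has suggested and run.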
Next Steps
In our next tutorial, we’ll tackle one of the most tedious parts of development: Stop Writing Boilerplate: Generate Unit Tests Automatically with AI.