# Prompt Engineering 101: How to Talk to LLMs and Get the Code You Want
Master the art of prompt engineering to improve your AI-assisted coding workflow. Learn practical techniques to get better code, faster.
Posted on: 2026-03-13 by AI Assistant

As developers, we’re increasingly using Large Language Models (LLMs) like Gemini, Claude, and GPT to help us write code, debug issues, and design systems. However, the quality of the output you get is directly proportional to the quality of the input you provide. This is the essence of Prompt Engineering.
In this guide, we’ll explore practical prompt engineering techniques specifically tailored for developers to help you get the exact code you need.
## The Problem: “It’s Not Quite Right”
We’ve all been there: you ask an LLM for a React component, and it gives you something that looks right but uses outdated libraries, lacks proper typing, or misses a crucial edge case. The frustration isn’t with the model’s capability, but often with the lack of context or clarity in our request.
## Core Techniques for Better Code
### 1. Provide Clear Context (The “Who” and “Where”)
LLMs perform significantly better when they know their role and the environment they’re working in.
Bad Prompt:
Write a function to sort a list of users by their age.
Better Prompt:
You are a senior TypeScript developer. Write a pure function for a Node.js backend that sorts an array of `User` objects by their `age` property in descending order. Use modern ES6+ syntax.
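For comparison, here is one plausible response the better prompt might elicit (a sketch only; the `User` shape and the function name are assumed for illustration):

```typescript
// Minimal User shape assumed for this example.
interface User {
  name: string;
  age: number;
}

// Pure function: returns a new array sorted by age, oldest first,
// leaving the input array untouched.
const sortUsersByAgeDesc = (users: User[]): User[] =>
  [...users].sort((a, b) => b.age - a.age);
```

Notice how each piece of context in the prompt (TypeScript, pure function, descending order, modern syntax) shows up directly in the result.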
### 2. The Few-Shot Technique (Provide Examples)
One of the most powerful ways to influence the output is to provide a few examples of the style or logic you’re looking for.
Prompt:
I want you to convert natural language descriptions into SQL queries for a PostgreSQL database. Here are some examples:
Input: 'Find all users who signed up in the last 30 days.'
Output: `SELECT * FROM users WHERE created_at >= NOW() - INTERVAL '30 days';`
Input: 'Get the total revenue from orders in 2023.'
Output: `SELECT SUM(total_amount) FROM orders WHERE EXTRACT(YEAR FROM order_date) = 2023;`
Input: 'Find the top 5 most expensive products.'
Output:
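In application code, few-shot prompts like this are usually assembled programmatically from a list of example pairs. A minimal sketch (the helper name and `Example` shape are made up for illustration):

```typescript
interface Example {
  input: string;
  output: string;
}

// Builds a few-shot prompt: task description, worked examples,
// then the new input with a trailing "Output:" for the model to complete.
function buildFewShotPrompt(task: string, examples: Example[], query: string): string {
  const shots = examples
    .map((e) => `Input: '${e.input}'\nOutput: ${e.output}`)
    .join("\n");
  return `${task}\n${shots}\nInput: '${query}'\nOutput:`;
}
```

Leaving the final `Output:` empty is the point of the technique: the model's most natural continuation is to complete the pattern you established.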
### 3. Use Delimiters for Clarity
Help the model distinguish between instructions, code snippets, and data by using clear delimiters such as triple backticks, XML-like tags (`<context></context>`), or dashed lines.
Prompt:
Refactor the following code to use `async/await` instead of callbacks.
Code:
```javascript
function fetchData(callback) {
  request('https://api.example.com/data', function (error, response, body) {
    if (error) return callback(error);
    callback(null, JSON.parse(body));
  });
}
```
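For reference, here is one plausible refactor the model might produce, swapping the deprecated `request` library for the built-in `fetch` API (available in Node 18+); the URL is the placeholder from the prompt:

```typescript
// async/await version: errors surface as thrown exceptions
// instead of callback arguments.
async function fetchData(): Promise<unknown> {
  const response = await fetch('https://api.example.com/data');
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json();
}
```

Because the delimiters make it obvious where the code starts and ends, the model is far less likely to refactor your instructions along with the snippet.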
### 4. Chain of Thought (Think Step-by-Step)
For complex logic, ask the model to “think step-by-step” or describe its approach before writing the code. This often leads to more accurate results with fewer bugs.
Prompt:
I need to implement a complex permission system in Python. First, explain the logic of how you would handle nested groups and inherited permissions. Then, provide the implementation using Pydantic for data validation.
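To see why the up-front explanation matters, here is a sketch of the nested-group logic such a step-by-step answer might settle on, written in TypeScript here rather than the Python/Pydantic asked for in the prompt (all names are hypothetical):

```typescript
interface Group {
  name: string;
  permissions: Set<string>;
  parents: Group[];
}

// A group's effective permissions are its own plus everything
// inherited from its (transitive) parent groups.
function effectivePermissions(group: Group, seen = new Set<Group>()): Set<string> {
  if (seen.has(group)) return new Set(); // guard against cyclic group graphs
  seen.add(group);
  const result = new Set(group.permissions);
  for (const parent of group.parents) {
    for (const p of effectivePermissions(parent, seen)) {
      result.add(p);
    }
  }
  return result;
}
```

Edge cases like cyclic group membership are exactly what the “explain first” step tends to surface before any code is written.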
### 5. Constraint-Based Prompting
Explicitly state what you don’t want. This prevents the model from taking shortcuts or using libraries you want to avoid.
Prompt:
Generate a CSS grid layout for a 3-column dashboard.
Constraints:
- Do not use any external frameworks like Bootstrap or Tailwind.
- Use only CSS Variables for colors.
- Ensure it is responsive without using media queries if possible (use `minmax` and `auto-fill`).
## Putting It All Together
Prompt engineering isn’t about magic spells; it’s about clear communication. By treating the LLM like a highly capable but context-deprived junior developer, you can provide the necessary guardrails to ensure the output is professional, correct, and immediately useful.
## What’s Next?
Start applying these techniques in your daily workflow. Try refactoring a prompt that previously failed you and see the difference. In our next post, we’ll dive into AI APIs vs. Local Models to help you decide where to run these prompts.