Stop Writing Boilerplate: Generate Unit Tests Automatically with AI

Unit testing is essential but often tedious. Learn how to use AI to generate comprehensive test suites and edge cases in seconds.

Posted on: 2026-03-13 by AI Assistant


Writing unit tests is like going to the gym: everyone knows it’s good for you, but it’s often the first thing to get skipped when you’re busy. The boilerplate involved—setting up mocks, creating dummy data, and handling repetitive assertions—can be a significant drag on development velocity.

But what if you could automate the “boring” parts of testing? With Large Language Models (LLMs), you can generate comprehensive test suites, including edge cases you might have missed, in a fraction of the time.

The Strategy: AI as a Testing Assistant

The goal isn’t to let the AI write all your tests blindly. Instead, use it as a powerful assistant to:

  1. Generate the skeleton: Create the setup and teardown boilerplate.
  2. Suggest edge cases: Identify null values, empty strings, or extreme numeric ranges.
  3. Mock dependencies: Automatically create mock objects for databases or APIs.
  4. Translate requirements: Convert user stories directly into test cases.

Practical Example: Testing a TypeScript Service

Let’s say we have a simple service that calculates a discount based on a user’s loyalty tier.

// loyalty.service.ts
export type Tier = 'BRONZE' | 'SILVER' | 'GOLD';

export function calculateDiscount(price: number, tier: Tier): number {
  if (price < 0) throw new Error('Price cannot be negative');
  
  switch (tier) {
    case 'GOLD': return price * 0.2;
    case 'SILVER': return price * 0.1;
    case 'BRONZE': return price * 0.05;
    default: return 0;
  }
}

The Prompt

Instead of writing the tests manually, we can use a prompt like this:

“Act as a senior QA engineer. Using Vitest, write a comprehensive suite of unit tests for the calculateDiscount function provided below. Include tests for:

  1. Valid inputs for all tiers.
  2. Edge cases like price = 0.
  3. Error handling for negative prices.
  4. Precisely typed inputs and outputs.”

The AI-Generated Result

The AI will output something like this:

import { describe, it, expect } from 'vitest';
import { calculateDiscount } from './loyalty.service';

describe('calculateDiscount', () => {
  it('should apply 20% discount for GOLD tier', () => {
    expect(calculateDiscount(100, 'GOLD')).toBe(20);
  });

  it('should apply 10% discount for SILVER tier', () => {
    expect(calculateDiscount(100, 'SILVER')).toBe(10);
  });

  it('should apply 5% discount for BRONZE tier', () => {
    expect(calculateDiscount(100, 'BRONZE')).toBe(5);
  });

  it('should return 0 discount when price is 0', () => {
    expect(calculateDiscount(0, 'GOLD')).toBe(0);
  });

  it('should throw an error for negative prices', () => {
    expect(() => calculateDiscount(-1, 'GOLD')).toThrow('Price cannot be negative');
  });

  it('should handle large prices correctly', () => {
    expect(calculateDiscount(1000000, 'GOLD')).toBe(200000);
  });
});

Best Practices for AI-Generated Tests

  1. Review is Mandatory: Never commit AI-generated tests without reading them. AI can “hallucinate” functions or mock objects that don’t exist.
  2. Focus on Edge Cases: LLMs are particularly good at “What if?” scenarios. Ask them specifically to “Find three edge cases I might have missed.”
  3. Keep It DRY (but not too DRY): AI sometimes generates repetitive tests. Use its output as a starting point, then refactor using it.each or similar helpers.
  4. Use Context: If your function depends on a complex database schema, provide the schema (or a simplified version) in the prompt so the AI can generate realistic mocks.

Conclusion

Automating unit test generation doesn’t just save time; it leads to more robust codebases by making it easier to achieve high test coverage. By offloading the boilerplate to AI, you can focus on the architectural logic and complex business rules that truly matter.

What’s Next?

Now that your tests are running, how do you keep track of the cost and performance of your AI integrations? In our next post, we’ll explore The Missing Piece: How to Monitor and Log Your LLM Apps.