
How to Write Unit Tests with AI

Generate comprehensive unit tests that cover happy paths, edge cases, and error conditions for any function or module.

Writing tests is time-consuming but non-negotiable for production code quality. AI can analyze a function's logic, identify edge cases a human might overlook, and generate a full test suite with descriptive names — dramatically increasing coverage without the tedium of manually writing every assertion.

Why unit test coverage is hard to maintain manually

Unit test coverage degrades over time for predictable reasons. New functions get written without tests because 'I'll add them later.' Edge cases get missed because the developer who wrote the function has mental blind spots about inputs they never considered. Test names are too vague to be useful as documentation ('testFunction1', 'shouldWork'). And async code paths — error states, timeout handling, concurrent calls — are consistently undertested because they are harder to set up. The result is a test suite that covers the happy path thoroughly and leaves the failure modes that actually cause production incidents largely untested.

How AI improves unit test quality and coverage

AI generates unit tests from function code by reasoning about what the function promises to do and what could go wrong. It identifies edge cases systematically — null inputs, empty arrays, boundary values at numeric limits, type mismatches, async rejection states — that human developers miss because they are thinking about implementation rather than adversarial inputs. More importantly, AI generates tests with descriptive names that document the function's expected behavior, making the test suite useful as living documentation rather than just a coverage metric. For complex functions, AI also identifies cases where the function's contract is ambiguous and the test should clarify expected behavior.
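As a concrete illustration, here is the kind of systematic edge-case coverage this reasoning produces. The `parse_port` function below is hypothetical, invented for this example, and the checks are written as framework-agnostic assertions so the categories stand out:

```python
def parse_port(raw: str) -> int:
    """Parse a TCP port number from a string, validating the 1-65535 range."""
    if not isinstance(raw, str):
        raise TypeError("port must be given as a string")
    if raw.strip() == "":
        raise ValueError("port must not be empty")
    port = int(raw)  # raises ValueError on non-numeric input
    if not 1 <= port <= 65535:
        raise ValueError("port out of range")
    return port

# Happy path: standard valid input
assert parse_port("8080") == 8080

# Boundary values at the numeric limits of the contract
assert parse_port("1") == 1
assert parse_port("65535") == 65535

# Adversarial inputs an implementation-focused author often skips:
# empty string, off-by-one boundaries, and non-numeric text
for bad in ["", "0", "65536", "not-a-number"]:
    try:
        parse_port(bad)
        raise AssertionError(f"expected failure for {bad!r}")
    except (ValueError, TypeError):
        pass
```

Note how each failure mode maps to a named category (empty input, boundary violation, type mismatch) rather than being discovered by accident.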

What context produces the best AI-generated tests

The quality of AI-generated unit tests depends on three inputs: the complete function with type signatures, the testing framework and any relevant conventions (describe block naming, assertion style, mock library), and any known edge cases or constraints the function must handle. Pasting just the function body without types often causes the AI to generate tests that don't typecheck. Specifying the test naming format (e.g., 'should [expected behavior] when [condition]') produces consistently descriptive test names rather than generic ones. If there are specific edge cases you already know about — a constraint from the business domain, a previous bug that was fixed — mention them explicitly so they are tested intentionally rather than discovered accidentally.

Step-by-step guide

1. Provide the function to test. Paste the complete function with its type signatures and any dependencies it imports.

2. Specify the testing framework. State whether you are using Jest, Vitest, Pytest, or another framework so the generated syntax is correct.

3. Request coverage of edge cases. Explicitly ask AI to cover null inputs, boundary values, type mismatches, and async failure states.

4. Review and add missing cases. Ask AI to audit its own test suite and identify any scenarios it may have missed.
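The four steps above can be sketched as a small prompt-assembly helper. `build_test_prompt` and its parameters are illustrative names for this article, not part of any library:

```python
def build_test_prompt(function_source: str, framework: str,
                      known_edge_cases: list[str]) -> str:
    """Assemble a test-generation prompt following the four steps above."""
    edge_lines = "\n".join(f"- {c}" for c in known_edge_cases)
    return (
        # Step 2: name the framework so syntax and assertion style are correct
        f"Write a complete {framework} test suite for this function.\n\n"
        # Step 1: the full function, with type signatures and imports
        f"Function to test:\n{function_source}\n\n"
        # Step 3: request edge-case coverage explicitly
        "Cover: happy path, null/empty inputs, boundary values, "
        "type mismatches, and async failure states.\n"
        f"Known edge cases to test explicitly:\n{edge_lines}\n\n"
        # Step 4: ask for a self-audit of missing scenarios
        "Finally, audit your own suite and list any scenarios it misses."
    )

prompt = build_test_prompt("def divide(a, b): ...", "Pytest",
                           ["b == 0 must raise ZeroDivisionError"])
```

Keeping known edge cases as an explicit list makes it easy to append a regression case each time a production bug is fixed.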

Ready-to-use prompts

Full test suite — TypeScript/Jest
Write a complete Jest test suite for this TypeScript function. Testing framework: Jest with ts-jest. Follow these conventions: describe blocks grouped by scenario ('when input is valid', 'when input is invalid', 'when async operation fails'), test names in format 'should [expected behavior] when [condition]', use expect.assertions(n) for async tests.

Function to test:
[PASTE FUNCTION WITH FULL TYPE SIGNATURES AND IMPORTS]

Cover these scenarios:
1. Happy path — standard valid input
2. Boundary values (0, negative numbers, empty string, maximum allowed value if applicable)
3. Null and undefined inputs for each parameter
4. Type mismatches (if function accepts string, test with number, etc.)
5. Async error/rejection states if applicable
6. Any business logic edge cases: [DESCRIBE ANY KNOWN EDGE CASES]

Do not mock internal logic — only mock external dependencies. List any dependencies that need mocking at the top.

Why it works

Specifying describe block grouping by scenario and the test naming format produces a test suite that reads as documentation, not just coverage. Explicitly listing the edge case categories prevents the AI from stopping after the happy path.

Pytest parametrize for Python
Write a Pytest test suite for this Python function using parametrize for data-driven tests.

Function:
[PASTE FUNCTION WITH TYPE HINTS]

Conventions: use @pytest.mark.parametrize for happy path variants and common edge cases, separate test functions for error conditions that raise exceptions, use pytest.raises for exception testing, descriptive test IDs in parametrize.

Test scenarios to cover:
- Happy path: [LIST 3-4 VALID INPUT VARIANTS]
- Invalid inputs: [LIST INVALID INPUT TYPES]
- Boundary values: [LIST BOUNDARY CONDITIONS]
- Exception cases: [DESCRIBE WHEN FUNCTION SHOULD RAISE]

Include a fixture if any setup is shared across tests. Do not use unittest.mock unless the function has external I/O.

Why it works

Separating parametrize-based tests from exception-raising tests keeps the test suite readable. Providing specific happy path variants in the prompt ensures the parametrize cases test real scenarios rather than trivial duplicates.
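A suite following these conventions might look like the sketch below. `percentage` is a made-up function under test, used only to show the split between parametrized data-driven cases and separate exception tests:

```python
import pytest

def percentage(part: float, whole: float) -> float:
    """Illustrative function under test: part as a percentage of whole."""
    if whole == 0:
        raise ZeroDivisionError("whole must be non-zero")
    return round(part / whole * 100, 2)

# Data-driven happy-path and boundary cases, with descriptive test IDs
@pytest.mark.parametrize(
    "part, whole, expected",
    [
        pytest.param(50, 200, 25.0, id="typical_quarter"),
        pytest.param(0, 10, 0.0, id="zero_part_boundary"),
        pytest.param(10, 10, 100.0, id="full_whole_boundary"),
        pytest.param(1, 3, 33.33, id="rounds_to_two_places"),
    ],
)
def test_returns_expected_percentage(part, whole, expected):
    assert percentage(part, whole) == expected

# Exception cases kept in a separate test function, per the conventions above
def test_raises_when_whole_is_zero():
    with pytest.raises(ZeroDivisionError):
        percentage(5, 0)
```

The descriptive IDs (`typical_quarter`, `zero_part_boundary`) show up in Pytest's failure output, so a red test names the broken scenario directly.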

Practical tips

  • Paste the complete function with type signatures, not just the body — AI that knows the types generates correct assertions and catches type-related edge cases automatically.
  • Specify the test naming convention explicitly ('should [expected] when [condition]') — without it, AI defaults to generic names like 'test_valid_input' that provide no documentation value.
  • Ask AI to identify which of its generated tests it considers the most likely to catch a real bug — this surfaces the highest-value tests for your review.
  • After generating the suite, ask 'what inputs could make this function fail that are not covered in these tests?' — AI often identifies an additional 2-3 edge cases on the second pass.
  • For async functions, always explicitly ask for rejection/error state tests — AI defaults to testing the success path for async unless instructed to test failure modes.
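To illustrate the last tip, here is a minimal sketch of testing both the success and rejection paths of a hypothetical async `fetch_user`. It uses plain `asyncio.run` inside synchronous tests so no pytest-asyncio plugin is assumed:

```python
import asyncio

async def fetch_user(user_id: int) -> dict:
    """Hypothetical async lookup that rejects invalid IDs."""
    if user_id <= 0:
        raise ValueError("user_id must be positive")
    await asyncio.sleep(0)  # stand-in for real I/O
    return {"id": user_id, "name": "demo"}

# Success path: the case AI covers by default
def test_returns_user_for_valid_id():
    assert asyncio.run(fetch_user(7))["id"] == 7

# Failure path: the case you must request explicitly
def test_rejects_non_positive_id():
    try:
        asyncio.run(fetch_user(0))
        raise AssertionError("expected ValueError")
    except ValueError:
        pass
```

The same pattern extends to timeouts (`asyncio.wait_for` with a short deadline) and to asserting that a rejected call leaves no partial state behind.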

Recommended AI tools

Cursor · GitHub Copilot · ChatGPT

Continue learning

Generate test cases · Debug code · Code review automation


More Coding use cases

Debug Code · Write API Documentation · Generate Test Cases · Refactor Legacy Code

← Browse all use cases