Generating and Updating Tests with AI

Use AI agent skills to generate Drift tests consistently, validate coverage, and maintain tests as your OpenAPI spec evolves.

What this guide covers

  • Install Drift AI skills for your agent or IDE
  • Use the skills to generate tests incrementally
  • Maintain and evolve tests safely over time

Install the Drift skill

The pact-agentic-tooling-extensions repository provides two skills designed to work together:

  • Drift — writes and iterates test cases, configures lifecycle hooks, and publishes results to PactFlow
  • OpenAPI Parser — analyses complex OpenAPI specs and generates Drift test case scaffolding

For full installation steps across all supported agents and IDEs (Claude Code, GitHub Copilot, OpenCode, Cursor, Windsurf, Codex, Kiro, and others), see Drift AI Agent Skills.

Use the instructions iteratively

The Drift instructions work best when you run them in small, verifiable steps.

Getting started when you have zero tests

If you are starting from scratch, this is a good first prompt:

Create a comprehensive Drift test suite for this project, based on this OpenAPI document.

Note: attach the relevant OpenAPI document to the prompt as context.

This should produce:

  • A broad test suite across your OpenAPI operations
  • A dataset file with starter data structures
  • A Lua script template with lifecycle hooks ready to populate

Important expectation

The generated suite is a strong starting point, but it may not pass out of the box. Most teams need to add state and data setup (seed data, auth tokens, teardown, or environment-specific values) before tests become stable. Work through failures with a red-green-refactor loop:

  1. Red: Run one operation and observe failures.
  2. Green: Add only the minimum setup/data needed to make that operation pass.
  3. Refactor: Move repeated setup into dataset entries, globals, or lifecycle hooks.
  4. Repeat operation-by-operation until the full suite is stable.

Use single-operation execution while bootstrapping:

drift verify --test-files path/to/testcase.yaml --server-url http://localhost:8080 --operation <operationId>
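While bootstrapping, the loop amounts to invoking this command once per operation. A minimal shell sketch, where the operation IDs (`getUsers`, `createUser`) are placeholders for your own operationIds:

```shell
# Build the single-operation verify command shown above; the test file
# path and server URL mirror the guide, and the operationId is a placeholder.
verify_one() {
  echo "drift verify --test-files path/to/testcase.yaml" \
       "--server-url http://localhost:8080 --operation $1"
}

# Red-green loop: handle one operation at a time, getting it passing
# before moving on to the next (replace the echo inside verify_one
# with a real invocation once your setup is stable).
for op in getUsers createUser; do
  verify_one "$op"
done
```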

Once your initial suite exists, use this same operation-by-operation loop to expand and harden coverage.

When you are not generating the full suite from scratch

Use this plan-first workflow for controlled, operation-by-operation progress.

1) Ask for a coverage plan and TODO file

Prompt your AI assistant:

Analyze this OpenAPI document and create a comprehensive plan to reach 100% Drift test coverage.
Create a DRIFT_TODO.md file that lists tests by operation, status code, and media type.
Use checkboxes to track progress.
Then scaffold the initial Drift setup (test case YAML, dataset YAML, and Lua script), and implement only the first test from the plan.
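The exact layout of DRIFT_TODO.md is up to your assistant, but one hypothetical shape that matches the prompt (the operations, status codes, and media types below are invented examples):

```markdown
## Drift coverage plan

- [x] getUsers | 200 | application/json
- [ ] getUsers | 404 | application/problem+json
- [ ] createUser | 201 | application/json
- [ ] createUser | 400 | application/problem+json
```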

2) Get the first test passing

Run only the first planned operation:

drift verify --test-files path/to/testcase.yaml --server-url http://localhost:8080 --operation <first-test-operationId>

If it fails, update only the minimum setup/data needed and re-run that same operation.

3) Progress through DRIFT_TODO.md one test at a time

Prompt your AI assistant:

Continue with the next unchecked item in DRIFT_TODO.md.
Implement exactly one test operation.
Do not move to the next item until I confirm this operation passes.

4) Repeat until coverage goals are met

After each passing operation, ask the assistant to:

  • Mark the completed item in DRIFT_TODO.md
  • Implement the next unchecked test
  • Keep naming and structure consistent with existing tests

Recommended audit prompt:

Audit DRIFT_TODO.md and the current test files.
List any remaining coverage gaps by operation, status code, and media type.

Maintain tests over time

When your API changes, use this update loop:

  1. Diff spec changes (new/changed/removed operations and response codes).
  2. Regenerate only impacted tests (avoid broad rewrites).
  3. Run changed operations first using --operation.
  4. Run the full drift verify command only after targeted checks pass.
  5. Keep datasets and Lua minimal; avoid custom helpers unless explicitly needed.
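Steps 3 and 4 of the loop can be sketched in shell. The changed operation IDs below stand in for whatever your spec diff actually reports:

```shell
# Sketch of the maintenance loop: targeted runs first, full run last.
# The changed operationIds come from your spec diff; these are placeholders.
changed="createUser updateUser"

run_cmds() {
  # Run each changed operation individually with --operation.
  for op in $changed; do
    echo "drift verify --test-files path/to/testcase.yaml" \
         "--server-url http://localhost:8080 --operation $op"
  done
  # Full verification only after the targeted checks pass.
  echo "drift verify --test-files path/to/testcase.yaml" \
       "--server-url http://localhost:8080"
}

run_cmds   # prints the commands; swap the echoes for real invocations
```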

Recommended maintenance prompt:

Update only the tests affected by this OpenAPI diff.
Preserve existing operation IDs where possible.
Show a checklist of added/updated/removed tests.

Example prompts for maintaining tests

New endpoint

Create a new set of Drift tests for the new <x> endpoint.
Update DRIFT_TODO.md and include happy-path, validation, and auth scenarios where applicable.

Update existing endpoint behavior

Update the Drift test case for the <x> endpoint, <y> status code, and <z> media type.
Only modify impacted tests and preserve existing operation IDs where possible.

Adapt skills across providers

The Drift and OpenAPI Parser skills follow the Agent Skills Open Standard, so the core instruction content — objective, workflow, guardrails, and execution guidance — is consistent across agents. Only the file placement and discovery mechanism differ per agent. See Drift AI Agent Skills for agent-specific placement instructions.
