# Writing Test Cases
The operations section of your Drift test file is where you define individual test scenarios. Each entry describes a specific interaction with your API and the expected outcome. This guide covers everything from minimal test cases to advanced scenarios.
## The Minimal Test Case
The simplest Drift test requires only two things: a target and an expected status code.
```yaml
# yaml-language-server: $schema=https://download.pactflow.io/drift/schemas/drift.testcases.v1.schema.json
drift-testcase-file: v1

operations:
  getAllProducts_Success:
    target: source-oas:getAllProducts
    expected:
      response:
        statusCode: 200
```
What Drift does automatically:
- Reads the request/response schema from your OpenAPI spec
- Uses the first available example for request bodies (if required)
- Validates the response against the JSON schema
- Checks the content type matches the spec
This minimal approach is perfect for "happy path" tests where you trust your OpenAPI examples.
## 1. Targeting Operations
To test an endpoint, you must point Drift to the correct operation in your source specification.
### Using Operation IDs

If your OpenAPI spec defines an `operationId`, use it directly:

```yaml
target: source-name:getAllProducts
```
### Targeting Without an Operation ID

If an `operationId` is missing, target the endpoint using the method and path exactly as they appear in the OpenAPI description:

```yaml
# Format: source:method:path
target: product-oas:get:/products/{id}
```
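For context, a `method:path` target like this assumes the source OpenAPI description contains a matching path item with no `operationId`. A minimal, hypothetical fragment of such a spec might look like:

```yaml
# Hypothetical fragment of the product-oas OpenAPI description.
# No operationId is defined, so the test targets get:/products/{id}.
paths:
  /products/{id}:
    get:
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: integer
      responses:
        "200":
          description: A single product
```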
## 2. Request and Response Bodies
Drift is designed to minimize boilerplate by leveraging your OpenAPI specification.
### Omission Logic (Leveraging Examples)

If your OpenAPI spec includes examples for a request or response body, you can omit the `body` field in your YAML.
- Behavior: Drift will automatically pick the first example found in the specification to use for the test.
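For instance, given a request body definition like the hypothetical fragment below, omitting `body` in the test would cause Drift to use the first example (here, `basic`, assuming document order):

```yaml
# Hypothetical OpenAPI fragment. With body omitted in the test case,
# Drift picks the first example found ("basic" here).
requestBody:
  required: true
  content:
    application/json:
      examples:
        basic:
          value: { id: 1, name: "Sample Product" }
        withPrice:
          value: { id: 2, name: "Priced Product", price: 9.99 }
```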
### Overriding with Datasets
When you need to test specific scenarios (like a specific product ID or type), use a dataset to provide the body:
```yaml
parameters:
  request:
    body: ${product-data:products.product10}
```
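The `${product-data:products.product10}` expression resolves against a dataset source. A sketch of what `product.dataset.yaml` might contain is shown below; the field names are illustrative, not prescribed by Drift:

```yaml
# Hypothetical dataset file (product.dataset.yaml). The expression
# products.product10 would resolve to the mapping under that key.
products:
  product10:
    id: 10
    name: "Test Product"
    price: 19.99
```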
## 3. Assertions and Expected Outcomes
The expected block defines what a "pass" looks like.
```yaml
expected:
  response:
    statusCode: 200
    body: ${equalTo(product-data:products.product10)}
```
### Supported Matchers

| Matcher | Description |
|---|---|
| `equalTo(value)` | Performs a deep equality check against the provided value. |
| [Placeholder] | Additional matchers such as regex or type validation to be documented. |
### Non-JSON (Binary) Validation
For non-JSON bodies (e.g., images, PDFs), Drift performs a byte comparison to ensure the provider's output matches the expected source.
- [Placeholder]: Documentation on property-based checks (e.g., Content-Length) for binary data.
## 4. Testing Negative Scenarios
Most APIs need validation for error cases like missing authentication, invalid input, or non-existent resources. These "negative tests" are critical for ensuring robust error handling.
### Testing Unauthorized Access (401)
To test authentication failures, you need to override or exclude the global authentication:
```yaml
operations:
  # Using exclude to remove global auth
  getAllProducts_Unauthorized:
    target: source-oas:getAllProducts
    description: "Get all products with invalid authorization"
    exclude:
      - auth
    parameters:
      headers:
        authorization: "Bearer invalid-token"
    expected:
      response:
        statusCode: 401
```
The `exclude` field removes globally applied configuration (like authentication headers), allowing you to test what happens when auth is missing or invalid.
### Testing Bad Request (400)
Test invalid input by providing malformed data:
```yaml
operations:
  getProductByID_InvalidID:
    target: source-oas:getProductByID
    description: "Get a product with invalid ID format"
    parameters:
      path:
        id: "invalid" # Send string where integer expected
    expected:
      response:
        statusCode: 400
```
When testing bad requests, you're deliberately sending invalid data to verify the API rejects it. However, Drift will normally validate your request against the OpenAPI schema and report errors when you send malformed data.
### Suppressing Schema Validation Errors

To silence schema validation errors for bad request tests, set `schema: true` under the `ignore` field:
```yaml
operations:
  createProduct_MissingRequired:
    target: source-oas:createProduct
    description: "Create product without required fields"
    parameters:
      request:
        body:
          price: 9.99 # Missing required 'name' field
    ignore:
      schema: true # Ignore request schema validation errors
    expected:
      response:
        statusCode: 400
```
What this does:
- Prevents Drift from reporting schema validation errors for the request you're sending
- Allows you to test invalid inputs without test failures from schema mismatches
- Still validates the response against its schema
This is useful when:
- Testing validation error responses (400, 422, etc.)
- Deliberately sending invalid data to check error handling
- Verifying the API properly rejects malformed requests
### Testing Not Found (404)
Verify the API returns 404 for non-existent resources:
```yaml
operations:
  getProductByID_NotFound:
    target: source-oas:getProductByID
    description: "Get a product that does not exist"
    parameters:
      path:
        id: 99999 # ID that doesn't exist
    expected:
      response:
        statusCode: 404
```
### Forbidden Access (403)
Test authorization (not authentication) by using a valid token with insufficient permissions:
```yaml
operations:
  deleteProduct_Forbidden:
    target: source-oas:deleteProduct
    description: "Delete product with read-only token"
    parameters:
      headers:
        authorization: "Bearer ${functions:readonly_token}"
      path:
        id: 10
    expected:
      response:
        statusCode: 403
```
## 5. Using Global Configuration with `exclude`
Global configuration lets you define common settings once and apply them to all operations.
### Defining Global Authentication
```yaml
global:
  auth:
    apply: true # Automatically applies to all operations
    parameters:
      authentication:
        scheme: bearer
        token: ${functions:bearer_token}
```
### Excluding Global Configuration

When testing negative cases, use `exclude` to remove specific global settings:
```yaml
operations:
  createProduct_Unauthorized:
    target: source-oas:createProduct
    description: "Create product without authentication"
    exclude:
      - auth # Don't apply the global auth config
    parameters:
      headers:
        authorization: "Bearer invalid-token"
    expected:
      response:
        statusCode: 401
```
Why use `exclude`?
- Keeps your test file DRY (Don't Repeat Yourself)
- Makes it explicit when a test intentionally deviates from the norm
- Prevents accidentally inheriting unwanted configuration
## 6. Organizing Tests with Tags
Tags help you categorize and selectively run subsets of your test suite.
### Adding Tags to Operations
```yaml
operations:
  getAllProducts_Success:
    target: source-oas:getAllProducts
    tags:
      - smoke
      - products
      - read-only
    expected:
      response:
        statusCode: 200

  createProduct_Success:
    target: source-oas:createProduct
    tags:
      - products
      - write
    expected:
      response:
        statusCode: 201

  getProductByID_Unauthorized:
    target: source-oas:getProductByID
    tags:
      - security
      - auth
    exclude:
      - auth
    expected:
      response:
        statusCode: 401
```
### Running Tests by Tag
```shell
# Run only smoke tests
drift verify --test-files drift.yaml --tags smoke

# Run all security tests
drift verify --test-files drift.yaml --tags security

# Run tests with multiple tags (AND logic)
drift verify --test-files drift.yaml --tags products,write

# Exclude certain tags (NOT logic)
drift verify --test-files drift.yaml --tags '!security'
```
Common tag strategies:

- By functionality: `products`, `users`, `orders`
- By test type: `smoke`, `integration`, `regression`
- By stability: `stable`, `flaky`, `experimental`
- By concern: `security`, `performance`, `validation`
- By mutation level: `read-only`, `write`, `destructive`
## 7. Controlling Test Execution Order
By default, Drift executes operations in alphanumeric order by their keys. This works for most stateless APIs, but sometimes you need to control the order — such as when setting up or cleaning up state, or when tests depend on each other.
### Using the `sequence` Field

You can add an optional `sequence` field to any operation to control its execution order:
- Negative sequence: Runs before all operations without a sequence, in ascending order (more negative first).
- No sequence: Runs next, ordered by key (the default behavior).
- Zero or positive sequence: Runs after unsequenced operations, in ascending order (lowest first).
- Ties: If multiple operations have the same sequence, they are ordered by key.
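The ordering rules above can be modeled with a small sort key. This is an illustrative sketch of the documented behavior, not Drift's actual implementation:

```python
# Illustrative model of the documented execution order: negative
# sequences run first (ascending), then unsequenced operations
# (ordered by key), then zero/positive sequences (ascending);
# ties within a sequence are broken by key.
def execution_order(operations):
    def sort_key(item):
        key, op = item
        seq = op.get("sequence")
        if seq is None:
            return (1, 0, key)    # unsequenced: middle bucket, key order
        if seq < 0:
            return (0, seq, key)  # negative: first, more negative earlier
        return (2, seq, key)      # zero/positive: last, lowest first
    return [key for key, _ in sorted(operations.items(), key=sort_key)]

ops = {
    "cleanup": {"sequence": 10},
    "createProduct": {},
    "getProduct": {},
    "setupDatabase": {"sequence": -10},
}
print(execution_order(ops))
# ['setupDatabase', 'createProduct', 'getProduct', 'cleanup']
```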
### Example: Setup, Main, and Cleanup
```yaml
operations:
  setupDatabase:
    target: source-oas:setupDatabase
    description: "Prepare database for tests"
    sequence: -10 # Runs first
    expected:
      response:
        statusCode: 200

  createProduct:
    target: source-oas:createProduct
    description: "Create a product"
    # No sequence: runs after negative, before positive
    expected:
      response:
        statusCode: 201

  getProduct:
    target: source-oas:getProduct
    description: "Get the created product"
    # No sequence: runs after negative, before positive
    expected:
      response:
        statusCode: 200

  cleanup:
    target: source-oas:cleanup
    description: "Clean up test data"
    sequence: 10 # Runs last
    expected:
      response:
        statusCode: 204
```
### Grouping Operations
If you assign the same sequence number to multiple operations, they form a group. The group runs in sequence order, and within the group, operations are ordered by key:
```yaml
operations:
  stepA:
    sequence: 1
    # ...
  stepB:
    sequence: 1
    # ...
  stepC:
    sequence: 2
    # ...
```
Here, `stepA` and `stepB` run together (ordered by key), then `stepC`.
### When to Use `sequence`
- When you need a setup or teardown step
- When tests depend on state created by previous operations
- When you want to avoid ugly key prefixes like `01_`, `02_`, etc.
For advanced workflows (complex dependencies, dynamic state), consider using Lua scripting or integrating with your test framework.
## 8. Complete Example
Here's a comprehensive test suite combining all these concepts:
```yaml
# yaml-language-server: $schema=https://download.pactflow.io/drift/schemas/drift.testcases.v1.schema.json
drift-testcase-file: v1
title: "Product API Tests"

sources:
  - name: source-oas
    path: ../openapi.yaml
  - name: product-data
    path: product.dataset.yaml
  - name: functions
    path: product.lua

plugins:
  - name: oas
  - name: json
  - name: data

global:
  auth:
    apply: true
    parameters:
      authentication:
        scheme: bearer
        token: ${functions:bearer_token}

operations:
  # Happy path - minimal test
  getAllProducts_Success:
    target: source-oas:getAllProducts
    tags:
      - smoke
      - read-only
    expected:
      response:
        statusCode: 200

  # Using dataset for specific data
  createProduct_Success:
    target: source-oas:createProduct
    tags:
      - products
      - write
    dataset: product-data
    parameters:
      request:
        body: ${product-data:products.product10}
    expected:
      response:
        statusCode: 201

  # Using OpenAPI examples (body omitted)
  createProduct_SuccessWithExample:
    target: source-oas:createProduct
    description: "Create a product using OpenAPI example"
    tags:
      - smoke
    expected:
      response:
        statusCode: 201

  # Negative test - unauthorized
  createProduct_Unauthorized:
    target: source-oas:createProduct
    description: "Create product with invalid token"
    tags:
      - security
      - auth
    exclude:
      - auth
    parameters:
      headers:
        authorization: "Bearer invalid-token"
      request:
        body:
          id: 20
          name: "test product"
    expected:
      response:
        statusCode: 401

  # Negative test - not found
  getProductByID_NotFound:
    target: source-oas:getProductByID
    description: "Get non-existent product"
    tags:
      - products
    parameters:
      path:
        id: 99999
    expected:
      response:
        statusCode: 404

  # Negative test - bad input
  getProductByID_InvalidID:
    target: source-oas:getProductByID
    description: "Get product with invalid ID format"
    tags:
      - validation
    parameters:
      path:
        id: "invalid"
    expected:
      response:
        statusCode: 400
```
## 9. When Declarative Tests Aren't Enough
The examples above work great when your API is stateless or when test data already exists. But what happens when tests depend on specific system state?
### The State Problem
Consider this test that expects a product to exist:
```yaml
operations:
  deleteProduct_Success:
    target: source-oas:deleteProduct
    description: "Delete an existing product"
    parameters:
      path:
        id: 10 # Assumes product ID 10 exists!
    expected:
      response:
        statusCode: 204
```
Problem: What if product ID 10 doesn't exist? The test will fail unpredictably.
### The Solution: Lifecycle Hooks
For scenarios requiring setup or cleanup, use lifecycle hooks in Lua scripts:
```lua
-- product.lua
local exports = {
  event_handlers = {
    ["operation:started"] = function(event, data)
      -- Create test product before the operation runs
      local res = http({
        url = "http://localhost:8080/products",
        method = "POST",
        body = json.encode({id = 10, name = "Test Product"})
      })
    end,
    ["operation:finished"] = function(event, data)
      -- Clean up after the test completes
      http({
        url = "http://localhost:8080/products/10",
        method = "DELETE"
      })
    end
  }
}
return exports
```
### Common Use Cases for Hooks
- Creating prerequisite data before tests run
- Cleaning up test data to ensure test isolation
- Authenticating and refreshing tokens dynamically
- Polling for async operations to complete
- Seeding databases with known test states
- Resetting state between test runs
### When to Use Hooks vs. Declarative Tests
| Scenario | Approach |
|---|---|
| API is stateless (read-only endpoints) | Declarative tests |
| Test data exists and is stable | Declarative tests |
| Need to create/modify data before testing | Use hooks |
| Need cleanup between tests | Use hooks |
| Need dynamic values (timestamps, UUIDs) | Use hooks or data expressions |
| Testing race conditions or timing | Use hooks |
Learn more: See the complete guide on Lifecycle Hooks for detailed examples and available event types.
## Next Steps
- Use datasets to manage complex test data
- Add lifecycle hooks for setup and teardown
- Configure authentication for secured endpoints
- Explore data expressions for dynamic test generation