
Running Tests in Parallel

As your API grows, test suites can become large and slow. Running tests in parallel can significantly reduce execution time in CI/CD pipelines. This guide shows you how to organize and execute Drift tests across multiple files for parallel execution.

When to Use This Approach

Consider splitting your test suite when:

  • Test execution is slow: your single test suite takes more than a few minutes to run
  • Logical separation exists: your API has distinct domains (e.g., /pet, /store, /user endpoints)
  • You want faster CI feedback: parallel CI jobs shorten the feedback loop on pull requests
  • Teams own different domains: different teams maintain different parts of the API

How It Works

Drift doesn't currently support native parallel execution or importing other Drift files. However, you can achieve parallelism by:

  1. Creating multiple Drift test files, each covering a subset of operations
  2. Sharing common resources (datasets, Lua scripts, OpenAPI specs)
  3. Running each test file in separate CI jobs or processes

Example: Splitting the Petstore API

The Swagger Petstore API has three logical domains: pets, store, and users. Let's split these into separate test suites.

Step 1: Organize Your Test Files

Create a directory structure that separates test suites while sharing common files:

project/
├── openapi.yaml            # Shared OpenAPI spec
├── petstore.dataset.yaml   # Shared dataset
├── petstore.lua            # Shared Lua functions
└── drift/
    ├── pets.yaml           # Tests for /pet/* endpoints
    ├── store.yaml          # Tests for /store/* endpoints
    └── users.yaml          # Tests for /user/* endpoints

Step 2: Create Separate Test Suites

Each Drift file references the same shared resources but tests different operations.

drift/pets.yaml - Tests for /pet/* endpoints:

# yaml-language-server: $schema=https://download.pactflow.io/drift/schemas/drift.testcases.v1.schema.json
drift-testcase-file: v1
title: "Petstore API - Pet Endpoints"

sources:
  - name: petstore-oas
    path: ../openapi.yaml
  - name: petstore-data
    path: ../petstore.dataset.yaml
  - name: functions
    path: ../petstore.lua

plugins:
  - name: oas
  - name: json
  - name: data
  - name: junit-output

global:
  auth:
    apply: true
    parameters:
      authentication:
        scheme: apiKey
        name: api_key
        in: header
        value: ${functions:api_key}

operations:
  addPet_Success:
    target: petstore-oas:addPet
    description: "Add a new pet to the store"
    parameters:
      request:
        body:
          id: 12345
          name: "Fluffy"
          status: "available"
    expected:
      response:
        statusCode: 200

  getPetById_Success:
    target: petstore-oas:getPetById
    description: "Get pet by ID"
    dataset: petstore-data
    parameters:
      path:
        petId: ${petstore-data:pets.pet1.id}
    expected:
      response:
        statusCode: 200

  updatePet_Success:
    target: petstore-oas:updatePet
    description: "Update an existing pet"
    parameters:
      request:
        body:
          id: 12345
          name: "Fluffy Updated"
          status: "sold"
    expected:
      response:
        statusCode: 200

  deletePet_Success:
    target: petstore-oas:deletePet
    description: "Delete a pet"
    parameters:
      path:
        petId: 12345
    expected:
      response:
        statusCode: 200

drift/store.yaml - Tests for /store/* endpoints:

# yaml-language-server: $schema=https://download.pactflow.io/drift/schemas/drift.testcases.v1.schema.json
drift-testcase-file: v1
title: "Petstore API - Store Endpoints"

sources:
  - name: petstore-oas
    path: ../openapi.yaml
  - name: petstore-data
    path: ../petstore.dataset.yaml
  - name: functions
    path: ../petstore.lua

plugins:
  - name: oas
  - name: json
  - name: data
  - name: junit-output

operations:
  getInventory_Success:
    target: petstore-oas:getInventory
    description: "Get store inventory"
    expected:
      response:
        statusCode: 200

  placeOrder_Success:
    target: petstore-oas:placeOrder
    description: "Place an order for a pet"
    parameters:
      request:
        body:
          id: 1
          petId: 12345
          quantity: 1
          status: "placed"
    expected:
      response:
        statusCode: 200

  getOrderById_Success:
    target: petstore-oas:getOrderById
    description: "Get order by ID"
    parameters:
      path:
        orderId: 1
    expected:
      response:
        statusCode: 200

  deleteOrder_Success:
    target: petstore-oas:deleteOrder
    description: "Delete an order"
    parameters:
      path:
        orderId: 1
    expected:
      response:
        statusCode: 200

drift/users.yaml - Tests for /user/* endpoints:

# yaml-language-server: $schema=https://download.pactflow.io/drift/schemas/drift.testcases.v1.schema.json
drift-testcase-file: v1
title: "Petstore API - User Endpoints"

sources:
  - name: petstore-oas
    path: ../openapi.yaml
  - name: petstore-data
    path: ../petstore.dataset.yaml
  - name: functions
    path: ../petstore.lua

plugins:
  - name: oas
  - name: json
  - name: data
  - name: junit-output

operations:
  createUser_Success:
    target: petstore-oas:createUser
    description: "Create a new user"
    parameters:
      request:
        body:
          id: 1
          username: "testuser"
          email: "test@example.com"
    expected:
      response:
        statusCode: 200

  getUserByName_Success:
    target: petstore-oas:getUserByName
    description: "Get user by username"
    parameters:
      path:
        username: "testuser"
    expected:
      response:
        statusCode: 200

  updateUser_Success:
    target: petstore-oas:updateUser
    description: "Update an existing user"
    parameters:
      path:
        username: "testuser"
      request:
        body:
          id: 1
          username: "testuser"
          email: "updated@example.com"
    expected:
      response:
        statusCode: 200

  deleteUser_Success:
    target: petstore-oas:deleteUser
    description: "Delete a user"
    parameters:
      path:
        username: "testuser"
    expected:
      response:
        statusCode: 200

Step 3: Run Tests in Parallel

Local Execution

Run each suite independently in separate terminal sessions or use shell backgrounding:

# Run all suites in parallel (backgrounded)
drift --test-files drift/pets.yaml --server-url http://localhost:8080 &
drift --test-files drift/store.yaml --server-url http://localhost:8080 &
drift --test-files drift/users.yaml --server-url http://localhost:8080 &

# Wait for all background jobs to complete
wait
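Note that a bare `wait` succeeds even when one of the backgrounded jobs fails, so the shell above won't report a failing suite through its exit status. If you need the overall run to fail when any suite fails, wait on each job individually and collect the results. A minimal sketch of the pattern — `true`/`false` are stand-ins so it runs anywhere; in a real run each background line is a drift invocation such as `drift --test-files drift/pets.yaml --server-url http://localhost:8080 &`:

```shell
# Launch each suite in the background and remember its PID.
true &  pid_pets=$!
false & pid_store=$!   # stand-in: pretend the store suite fails
true &  pid_users=$!

# Wait on each PID; `wait <pid>` returns that job's exit status.
status=0
wait "$pid_pets"  || { echo "pets suite failed" >&2;  status=1; }
wait "$pid_store" || { echo "store suite failed" >&2; status=1; }
wait "$pid_users" || { echo "users suite failed" >&2; status=1; }
echo "overall status: $status"
```

Exiting with the collected status makes the pattern usable in scripts and CI steps alike.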

CI/CD Execution (GitHub Actions)

Use a matrix strategy to run each test suite in a separate job:

name: API Contract Tests (Parallel)

on:
  pull_request:
    branches: [ main ]
  push:
    branches: [ main ]

jobs:
  drift-tests:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        suite: [pets, store, users]
      fail-fast: false # Continue running other suites if one fails

    steps:
      - name: Checkout Code
        uses: actions/checkout@v4

      - name: Download and Install Drift
        run: |
          wget https://download.pactflow.io/drift/latest/linux-x86_64.zip
          unzip linux-x86_64.zip
          echo "$(pwd)" >> $GITHUB_PATH

      - name: Start API Provider
        run: |
          npm install
          npm start &
          sleep 5

      - name: Run Drift Tests - ${{ matrix.suite }}
        run: |
          drift --test-files drift/${{ matrix.suite }}.yaml \
            --server-url http://localhost:8080 \
            --output-dir ./drift-results-${{ matrix.suite }}

      - name: Archive Test Results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: drift-junit-report-${{ matrix.suite }}
          path: ./drift-results-${{ matrix.suite }}

This configuration runs three parallel jobs: drift-tests (pets), drift-tests (store), and drift-tests (users).
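The `sleep 5` after `npm start` is a fixed guess; if the provider takes longer to boot, the first test requests will fail. A more robust alternative is to poll the server until it answers. A sketch, assuming `curl` is available on the runner (it is on `ubuntu-latest`):

```shell
# wait_for_url URL MAX_ATTEMPTS
# Polls URL once per second until it responds or the attempt budget runs out.
wait_for_url() {
  url=$1
  attempts=$2
  i=1
  while [ "$i" -le "$attempts" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "server is up after $i attempt(s)"
      return 0
    fi
    sleep 1
    i=$((i + 1))
  done
  echo "server did not respond after $attempts attempt(s)" >&2
  return 1
}
```

In the workflow, the `sleep 5` line could then be replaced with something like `wait_for_url http://localhost:8080 30`.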

Publishing Results to PactFlow (BDCT)

When using parallel test execution with Bi-Directional Contract Testing (BDCT), you'll encounter a challenge: each test suite produces its own verification result file, but PactFlow requires a single bundled result for publishing.

The Problem

When you run three parallel suites, you get three separate result files:

drift-results-pets/verification-result.json
drift-results-store/verification-result.json
drift-results-users/verification-result.json

PactFlow's pactflow publish-provider-contract command expects a single verification result bundle representing the complete provider verification.

The Solution: Bundle Results

Work in Progress

The result bundling feature is currently under development. The command syntax and output format may change in future releases.

Drift provides a bundle command to merge multiple verification results into a single file:

drift bundle --results drift-results-pets drift-results-store drift-results-users

This produces a single bundled result file that can be published to PactFlow.

Updated CI/CD Workflow with Result Bundling

Here's an enhanced GitHub Actions workflow that collects results from parallel jobs and publishes them to PactFlow:

name: API Contract Tests with PactFlow

on:
  pull_request:
    branches: [ main ]
  push:
    branches: [ main ]

jobs:
  drift-tests:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        suite: [pets, store, users]
      fail-fast: false

    steps:
      - name: Checkout Code
        uses: actions/checkout@v4

      - name: Download and Install Drift
        run: |
          wget https://download.pactflow.io/drift/latest/linux-x86_64.zip
          unzip linux-x86_64.zip
          echo "$(pwd)" >> $GITHUB_PATH

      - name: Start API Provider
        run: |
          npm install
          npm start &
          sleep 5

      - name: Run Drift Tests - ${{ matrix.suite }}
        run: |
          drift --test-files drift/${{ matrix.suite }}.yaml \
            --server-url http://localhost:8080 \
            --output-dir ./drift-results-${{ matrix.suite }}

      - name: Upload Verification Results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: verification-results-${{ matrix.suite }}
          path: ./drift-results-${{ matrix.suite }}

  publish-to-pactflow:
    needs: drift-tests
    runs-on: ubuntu-latest
    if: always() # Run even if some tests failed

    steps:
      - name: Checkout Code
        uses: actions/checkout@v4

      - name: Download and Install Drift
        run: |
          wget https://download.pactflow.io/drift/latest/linux-x86_64.zip
          unzip linux-x86_64.zip
          echo "$(pwd)" >> $GITHUB_PATH

      - name: Download All Verification Results
        uses: actions/download-artifact@v4
        with:
          path: ./all-results
          pattern: verification-results-*

      - name: Bundle Verification Results
        run: |
          drift bundle --results \
            ./all-results/verification-results-pets \
            ./all-results/verification-results-store \
            ./all-results/verification-results-users \
            --output ./bundled-results

      - name: Publish to PactFlow
        run: |
          pactflow publish-provider-contract \
            openapi.yaml \
            --provider "Petstore API" \
            --provider-app-version ${{ github.sha }} \
            --branch ${{ github.ref_name }} \
            --verification-results ./bundled-results/verification-result.json \
            --verification-results-content-type application/json \
            --verifier drift
        env:
          PACT_BROKER_BASE_URL: ${{ secrets.PACTFLOW_BASE_URL }}
          PACT_BROKER_TOKEN: ${{ secrets.PACTFLOW_TOKEN }}

Key differences from the basic workflow:

  1. Separate jobs: Tests run in parallel in drift-tests job, publishing happens in publish-to-pactflow job
  2. Artifact collection: Each parallel job uploads its verification results as artifacts
  3. Result download: The publish job downloads all artifacts using a pattern match
  4. Result bundling: drift bundle merges multiple result directories into a single output
  5. Single publish: One pactflow publish-provider-contract call with the bundled results

Important: State Management Considerations

⚠️ Be careful with stateful tests when running in parallel.

When testing the same API instance concurrently, tests can interfere with each other if they modify shared state:

Problem Scenario

# Suite A: Testing DELETE returns 404 after deletion
deletePet_ThenGet404:
  # Step 1: Delete pet ID 12345
  # Step 2: Verify GET returns 404

# Suite B: Running in parallel, testing GET returns 200
getPet_Success:
  # Expects pet ID 12345 to exist and return 200

If both suites run against the same API instance simultaneously, Suite A might delete the pet while Suite B is trying to retrieve it, causing Suite B to fail unexpectedly.

Solutions

Option 1: Use Unique Test Data

Ensure each suite operates on different data:

  • Suite A tests pet IDs 10000-19999
  • Suite B tests pet IDs 20000-29999
  • Suite C tests pet IDs 30000-39999
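For example, the `${petstore-data:pets.pet1.id}` references earlier imply a nested key structure in the shared dataset, so each suite could draw its IDs from its own range. A hypothetical sketch — the keys below are illustrative, not a prescribed dataset schema:

```yaml
# petstore.dataset.yaml (hypothetical partitioned layout)
pets:
  pet1:
    id: 10001        # pets suite range: 10000-19999
store:
  order1:
    petId: 20001     # store suite range: 20000-29999
users:
  user1:
    petId: 30001     # users suite range: 30000-39999
```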

Option 2: Run Against Isolated Instances

If your CI supports it, spin up a separate API instance per suite:

jobs:
  drift-tests:
    strategy:
      matrix:
        # Pair each suite with its own port via `include` (separate
        # `suite:` and `port:` lists would produce a 3x3 cross product).
        include:
          - suite: pets
            port: 8080
          - suite: store
            port: 8081
          - suite: users
            port: 8082

    steps:
      - name: Start API Provider on Port ${{ matrix.port }}
        run: |
          PORT=${{ matrix.port }} npm start &
          sleep 5

      - name: Run Tests
        run: |
          drift --test-files drift/${{ matrix.suite }}.yaml \
            --server-url http://localhost:${{ matrix.port }}

Option 3: Run Serially for Stateful Tests

For tests that inherently conflict (creating/deleting the same resources), run them serially:

jobs:
  drift-tests-parallel:
    # Read-only tests run in parallel
    strategy:
      matrix:
        suite: [get-pets, get-store, get-users]

  drift-tests-serial:
    # Stateful tests run after parallel tests complete
    needs: drift-tests-parallel
    runs-on: ubuntu-latest
    steps:
      - name: Run Full CRUD Test Suite
        run: drift --test-files drift/full-crud.yaml

Tips and Best Practices

1. Logical Grouping

Group tests by:

  • API domain/resource (e.g., /pets, /orders)
  • HTTP method (e.g., read-operations.yaml, write-operations.yaml)
  • Test type (e.g., happy-path.yaml, error-cases.yaml)

2. Share Common Configuration

Extract authentication, common headers, and base URLs into shared global sections to avoid duplication.

3. Use Descriptive Names

Name your test files clearly so CI logs are easy to understand:

  • drift/user-authentication-tests.yaml ✓
  • drift/product-catalog-crud.yaml ✓
  • drift/test1.yaml ✗ (says nothing about what it covers)

4. Monitor Individual Suite Performance

Track how long each suite takes to identify bottlenecks:

time drift --test-files drift/pets.yaml

5. Balance Parallelism

Splitting into too many small suites can add overhead. Aim for 3-10 logical groups depending on your API size.

Verification

After setting up parallel execution, verify your configuration:

  1. Check total execution time: Parallel execution should be faster than serial
  2. Verify no conflicts: Ensure tests don't interfere with each other
  3. Check CI reporting: Confirm all suites report results correctly
  4. Test failure handling: Verify that a failure in one suite doesn't block others (e.g. use fail-fast: false in GitHub Actions)
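For the first check, you can compare serial and parallel wall-clock time locally. The sketch below uses `sleep` as a stand-in for the drift invocations so it runs anywhere; substitute your real commands to measure actual suites:

```shell
# Serial: run the three "suites" one after another.
serial_start=$(date +%s)
sleep 1; sleep 1; sleep 1
serial=$(( $(date +%s) - serial_start ))

# Parallel: background all three, then wait for them to finish.
parallel_start=$(date +%s)
sleep 1 & sleep 1 & sleep 1 &
wait
parallel=$(( $(date +%s) - parallel_start ))

echo "serial=${serial}s parallel=${parallel}s"
```

If the parallel figure isn't meaningfully lower for your real suites, the split may not be worth the extra CI setup.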

Next Steps

Once you have parallel tests running:

  1. Monitor your CI pipeline execution times
  2. Identify slow-running suites for further optimization
  3. Consider provider verification with PactFlow for distributed testing
  4. Set up test data isolation strategies to prevent conflicts