Debugging Test Cases

When a test suite fails, or when you are developing complex new scenarios, running the entire suite can be slow. Drift provides several mechanisms to isolate and debug specific operations.

1. Tag-Based Filtering

Tags allow you to execute specific subsets of your test suite. You can include or exclude tags using the --tags flag.

Important Logic Note: Drift uses OR logic between tags. If you provide multiple tags, Drift will run any operation that matches at least one of the specified criteria. It does not currently support AND logic.

Examples

Run operations tagged with "post" OR "error-response":

drift verify -u http://localhost:8080 -f product.testcases.yaml --tags 'post,error-response'

Run operations tagged with "post" but EXCLUDE those also tagged with "error-response". Use the ! prefix to exclude a tag:

drift verify -u http://localhost:8080 -f product.testcases.yaml --tags 'post,!error-response'
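
The selection rules above can be modeled with a short sketch. This is hypothetical Python illustrating the semantics described on this page, not Drift's actual implementation; behavior for selectors containing only exclusions is not specified here and is left out of the model.

```python
def matches(op_tags, selector):
    """Model of --tags selection: OR across included tags; '!'-prefixed tags exclude."""
    tags = [t.strip() for t in selector.split(",")]
    includes = [t for t in tags if not t.startswith("!")]
    excludes = [t[1:] for t in tags if t.startswith("!")]
    # An exclusion always wins, even if an include tag also matches.
    if any(t in op_tags for t in excludes):
        return False
    # OR logic: the operation runs if it carries at least one included tag.
    return any(t in op_tags for t in includes)
```

Under this model, `--tags 'post,!error-response'` runs an operation tagged only "post", but skips one tagged both "post" and "error-response".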

2. Running a Single Operation

To debug one specific failure, use the --operation flag to execute a single operation by its ID.

drift verify -u http://localhost:8080 -f drift.yaml --operation getProductByID_Success

3. Re-running Failed Tests

If you have a large suite with multiple failures, the --failed flag allows you to focus exclusively on the tests that did not pass during the previous run.

drift verify -u http://localhost:8080 -f drift.yaml --failed

4. Generating an AI Fix Prompt

When tests fail and the cause is not immediately obvious, use the --generate-fix-prompt flag to produce an AI prompt file. Drift writes this file to the output directory alongside the other result files. You can then pass it to an AI agent to get targeted suggestions for fixing the failing tests.

drift verify -u http://localhost:8080 -f drift.yaml --generate-fix-prompt

The prompt file captures the failure details in a format designed to give an AI agent the context it needs to diagnose root causes and suggest corrections to your test cases or API implementation.

Using the prompt file with an AI agent

  1. Run drift verify with --generate-fix-prompt.
  2. Locate the generated prompt file in the output directory (default: the same directory as your test case files).
  3. Pass the file to your AI agent — for example, by attaching it to a chat session or referencing it in a prompt.

If you have the Drift AI agent skill installed, you can ask it to read the prompt file directly and suggest fixes.


5. Reading Test Output

Drift displays test results in a formatted table. This helps you quickly identify which operations passed or failed.

Successful Run

─[ Summary ]───────────────────────────────────────────────────────────────────────────────────────────

Executed 1 test case (1 passed, 0 failed)
Executed 9 operations (9 passed, 0 failed, 0 skipped)
Execution time 1.288865209s
Setup time 72.192458ms

┌────────────────────────────────┬──────────────────────────────────┬───────────────────────────┬────────┐
│ Testcase ┆ Operation ┆ Target ┆ Result │
╞════════════════════════════════╪══════════════════════════════════╪═══════════════════════════╪════════╡
│ Product API Tests ┆ createProduct_Success ┆ source-oas:createProduct ┆ OK │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ ┆ getAllProducts_Success ┆ source-oas:getAllProducts ┆ OK │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ ┆ getProductByID_Success ┆ source-oas:getProductByID ┆ OK │
└────────────────────────────────┴──────────────────────────────────┴───────────────────────────┴────────┘

Each row shows:

  • Testcase: The name of your test suite
  • Operation: The specific operation that was executed
  • Target: Where the operation came from (e.g., source-oas:operationId)
  • Result: OK (passed) or FAILED

Failed Run with Error Details

When tests fail, Drift displays them in the table with a FAILED status, followed by a detailed failure section:

─[ Summary ]───────────────────────────────────────────────────────────────────────────────────────────

Executed 1 test case (0 passed, 1 failed)
Executed 9 operations (7 passed, 2 failed, 0 skipped)

┌───────────────────┬──────────────────────────────────┬───────────────────────────┬────────┐
│ Testcase ┆ Operation ┆ Target ┆ Result │
╞═══════════════════╪══════════════════════════════════╪═══════════════════════════╪════════╡
│ Product API Tests ┆ createProduct_Success ┆ source-oas:createProduct ┆ FAILED │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ ┆ getAllProducts_Success ┆ source-oas:getAllProducts ┆ FAILED │
└───────────────────┴──────────────────────────────────┴───────────────────────────┴────────┘

─[ Failures ]──────────────────────────────────────────────────────────────────────────────────────────

┌─ Testcase 'Product API Tests'
│ Operation 'createProduct_Success':
│ status_code: Expected response status Client Error (404) but got 201
│ Operation 'getAllProducts_Success':
│ status_code: Expected response status Success (201) but got 200

The Failures section provides:

  • Which operation failed
  • The specific assertion that didn't match (e.g., status_code)
  • What was expected vs. what was received

Use this information to quickly identify and fix issues in your test cases or API implementation.
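
Because the summary lines follow a fixed format, a CI script can extract the operation counts. The helper below is a hypothetical sketch, assuming the exact wording shown in the output above:

```python
import re

SUMMARY_RE = re.compile(
    r"Executed (\d+) operations \((\d+) passed, (\d+) failed, (\d+) skipped\)"
)

def parse_operation_summary(text):
    """Pull the operation counts out of a Drift summary block; None if absent."""
    m = SUMMARY_RE.search(text)
    if not m:
        return None
    total, passed, failed, skipped = map(int, m.groups())
    return {"total": total, "passed": passed, "failed": failed, "skipped": skipped}
```

For example, feeding it the line "Executed 9 operations (7 passed, 2 failed, 0 skipped)" yields failed=2, which a CI job could use as its exit criterion.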


6. Increasing Log Verbosity

When a test fails and the reason is unclear, increase the logging level to debug or trace to see the full request/response exchange.

# Set level to 'debug' for detailed internal logs
drift verify -u http://localhost:8080 -f drift.yaml --log-level debug

# Or 'trace' for the most verbose output, including the request/response exchange
drift verify -u http://localhost:8080 -f drift.yaml --log-level trace

7. Script-Level Debugging

Within your Lua scripts, use the built-in dbg() function to print the structure of event data directly to the console.

-- drift.lua
["operation:started"] = function(event, data)
    -- Prints a readable table of the operation metadata, including tags and ID
    print(dbg(data))
end