Test Suites

Organize tests into named collections, run them in batch with configurable parallel execution, and track progress in real time with live streaming updates.

What Are Suites?

A test suite is a named collection of tests within a project. Suites let you group related tests together and run them as a batch — for example, you might create a "Smoke Tests" suite with your most critical tests, a "Checkout Flow" suite for all e-commerce tests, or a "Full Regression" suite that includes everything.

Suites provide two key capabilities:

  • Organization — logically group tests by feature, priority, or any other criteria that make sense for your workflow
  • Batch execution — run all tests in a suite (or all tests in the project) with a single click, with configurable parallel execution

Creating a Suite

Suites are created and managed from the Tests page within your project.

Open the Tests Page

Navigate to your project and click the Tests tab. This shows all tests in the project along with suite management controls.

Create a New Suite

Click the New Suite button. Enter a descriptive name for the suite (e.g., "Login Tests", "API Smoke Tests", "Full Regression").

Select Tests

Choose which tests to include in the suite. You can select any combination of tests from the project. A test can belong to multiple suites.

Save

Save the suite. It now appears in the suite list and can be run at any time.

Running Tests in Batch

QA Studio provides two ways to run multiple tests at once:

Run All Project Tests

From the Tests page, click the Run All button to execute every test in the project. Before the run starts, you can select the concurrency level (1 through 5) using the concurrency selector next to the button.

Run a Specific Suite

Select a suite from the suite list and click its Run button. Only the tests included in that suite will execute. The same concurrency selector is available for suite runs.

Parallel Execution

QA Studio supports parallel test execution using a semaphore pattern. The concurrency setting controls how many tests run simultaneously.

Concurrency Setting

Set the concurrency anywhere from 1 to 5. Each test gets its own isolated browser instance, so parallel tests do not interfere with each other.

  • 1 — Sequential execution: one test at a time, each waiting for the previous to finish. Best for debugging, resource-constrained environments, and tests with shared state.
  • 2 — Two tests run in parallel at any given time. Best for light parallelism with moderate resource usage.
  • 3 — Three tests run in parallel. Best for balanced speed and resource usage.
  • 4 — Four tests run in parallel. Best for faster execution on machines with adequate resources.
  • 5 — Maximum parallelism, with five tests running concurrently. Best for maximum speed on powerful machines.

How the Semaphore Works

The batch runner uses a semaphore (counting lock) to enforce the concurrency limit. When a batch run starts:

  1. All tests are queued for execution
  2. The semaphore allows up to N tests (where N is the concurrency setting) to acquire a "slot" and begin running
  3. When a test completes (pass or fail), it releases its slot
  4. The next queued test immediately acquires the freed slot and starts running
  5. This continues until all tests have completed

Each test runs in its own isolated Playwright browser context, so parallel tests do not share cookies, local storage, or any other browser state.
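The slot-handoff behavior described above can be sketched in TypeScript as follows. This is a minimal illustration of the semaphore pattern, not QA Studio's actual implementation; the class and function names are made up for the example.

```typescript
// Minimal counting-semaphore sketch (illustrative names, not QA Studio's code).
class Semaphore {
  private waiters: Array<() => void> = [];
  private active = 0;

  constructor(private readonly limit: number) {}

  async acquire(): Promise<void> {
    if (this.active < this.limit) {
      this.active++; // a slot is free: take it immediately
      return;
    }
    // No free slot: wait until a finishing test hands its slot over.
    await new Promise<void>((resolve) => this.waiters.push(resolve));
  }

  release(): void {
    const next = this.waiters.shift();
    if (next) {
      next(); // pass the slot directly to the next queued test
    } else {
      this.active--; // nobody waiting: the slot becomes free
    }
  }
}

// Run all tests, allowing at most `concurrency` of them at once.
async function runBatch(
  tests: Array<() => Promise<void>>,
  concurrency: number
): Promise<void> {
  const sem = new Semaphore(concurrency);
  await Promise.all(
    tests.map(async (test) => {
      await sem.acquire();
      try {
        await test(); // run the test (pass or fail)
      } finally {
        sem.release(); // always free the slot so the next queued test starts
      }
    })
  );
}
```

Note that `release()` hands the slot directly to the next waiter rather than decrementing and re-incrementing the count, which keeps the number of running tests from ever exceeding the limit.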

Choosing Concurrency

Start with concurrency 1 when debugging or setting up new tests. This makes output easier to follow and isolates any failures. Once your tests are stable, increase concurrency to 3-5 for faster execution. Watch your server's CPU and memory — each browser instance consumes resources.

Live Progress Tracking

During a batch run, QA Studio streams progress updates to the dashboard in real time using Server-Sent Events (SSE). The dashboard displays the BatchRunProgress component, which provides a live view of the run.

SSE Streaming Events

The server emits the following events during a batch run:

  • test-start — payload: test ID and test name. Emitted when a test begins execution; the UI moves the test from "queued" to "running" status.
  • test-complete — payload: test ID, status (passed/failed), duration in ms, and the error message if the test failed. Emitted when a test finishes, including its outcome and how long it took.
  • suite-complete — payload: total tests, passed count, failed count, and total duration. Emitted when all tests in the batch have finished; provides aggregate results.
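To make the wire format concrete, here is a sketch of how a server could serialize these events as SSE frames. Only the three event names come from the list above; the TypeScript types and payload field names are assumptions for illustration.

```typescript
// Sketch of serializing batch-run events as SSE frames.
// Event names match the docs; payload field names are illustrative.
type BatchEvent =
  | { event: "test-start"; data: { testId: string; name: string } }
  | {
      event: "test-complete";
      data: { testId: string; status: "passed" | "failed"; durationMs: number; error?: string };
    }
  | {
      event: "suite-complete";
      data: { total: number; passed: number; failed: number; durationMs: number };
    };

function toSseFrame(e: BatchEvent): string {
  // SSE wire format: an "event:" line naming the event type,
  // a "data:" line carrying the payload, and a blank line
  // terminating the frame.
  return `event: ${e.event}\ndata: ${JSON.stringify(e.data)}\n\n`;
}
```

On the client, an EventSource subscribed to the stream dispatches each frame to a listener registered for its `event:` name, which is how the dashboard can handle the three event types separately.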

BatchRunProgress Component

The dashboard's BatchRunProgress component consumes these SSE events and renders a live progress interface:

  • Progress bar — a visual bar showing the percentage of completed tests (passed + failed out of total)
  • Pass/fail counters — running totals of passed and failed tests, updated in real time as each test completes
  • Per-test status list — a list of all tests in the batch showing their current state:
    • Queued — waiting to start
    • Running — currently executing (with a spinner indicator)
    • Passed — completed successfully (with duration)
    • Failed — completed with an error (with duration and error message)
  • Live updates — the entire view updates as each SSE event arrives, with no page refreshes needed
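A progress view like this typically folds the incoming SSE events into a single state object. The sketch below shows one plausible reducer for that state; the type names and fields are assumptions, not the component's actual code.

```typescript
// Hypothetical state + reducer for a live batch-run progress view.
type TestStatus = "queued" | "running" | "passed" | "failed";

interface TestRow {
  id: string;
  name: string;
  status: TestStatus;
  durationMs?: number;
  error?: string;
}

interface ProgressState {
  tests: TestRow[];
  passed: number;
  failed: number;
}

type ProgressEvent =
  | { type: "test-start"; id: string }
  | { type: "test-complete"; id: string; status: "passed" | "failed"; durationMs: number; error?: string };

function reduce(state: ProgressState, ev: ProgressEvent): ProgressState {
  switch (ev.type) {
    case "test-start":
      // Move the test from "queued" to "running".
      return {
        ...state,
        tests: state.tests.map((t) => (t.id === ev.id ? { ...t, status: "running" } : t)),
      };
    case "test-complete":
      // Record the outcome and bump the matching counter.
      return {
        tests: state.tests.map((t) =>
          t.id === ev.id
            ? { ...t, status: ev.status, durationMs: ev.durationMs, error: ev.error }
            : t
        ),
        passed: state.passed + (ev.status === "passed" ? 1 : 0),
        failed: state.failed + (ev.status === "failed" ? 1 : 0),
      };
  }
}

// Progress bar percentage: completed tests (passed + failed) out of total.
function percentComplete(s: ProgressState): number {
  return s.tests.length === 0
    ? 0
    : Math.round(((s.passed + s.failed) / s.tests.length) * 100);
}
```

Each SSE event triggers one `reduce` call, so the progress bar, counters, and per-test list all stay in sync from a single source of truth.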

Suite Run History

Every batch run (whether triggered from "Run All" or from a specific suite) is recorded and accessible from the Tests page. The run history is paginated and shows:

  • Run timestamp — when the batch run was initiated
  • Suite name — which suite was run (or "All Tests" for project-wide runs)
  • Results summary — total tests, passed, failed
  • Duration — total wall-clock time for the batch run

Suite Run Details

Click on any suite run in the history to view its details. The detail view provides:

  • Aggregate metrics — total tests, passed count, failed count, pass rate percentage
  • Individual test results — each test in the suite with its status, duration, and any error messages
  • Links to individual runs — click on any test result to navigate to its individual run detail page, where you can see step-by-step execution results, screenshots, and visual diffs

This two-level view (suite summary to individual test detail) lets you quickly identify which tests failed and drill down into the specifics of each failure.

Best Practices

  • Create focused suites — group tests by feature or user journey rather than creating one massive suite. This makes it easier to run targeted regression checks.
  • Use a "Smoke" suite — create a small suite of your most critical tests (5-10) that validates core functionality. Run this suite frequently for fast feedback.
  • Match concurrency to resources — higher concurrency runs tests faster but uses more CPU and memory. Monitor your server's resource usage and dial back if you see instability.
  • Review failed tests promptly — use the suite run detail view to identify failures and click through to individual run details for debugging.
  • Leverage scheduled runs — combine suites with scheduled runs to automatically execute your test suites on a regular cadence.