Visual Regression

Detect unintended visual changes in your application with pixel-level screenshot comparison, baseline management, and an intuitive approve/reject workflow.

Overview

Visual regression testing catches UI changes that functional tests miss. A button might still work correctly, but if its color, size, or position changed unexpectedly, visual regression testing will flag it.

QA Studio's visual regression system works in three stages:

  1. Capture — take screenshots during test execution using the Screenshot action
  2. Compare — compare captured screenshots against stored baselines using pixel-level diffing
  3. Review — inspect diffs and approve or reject changes through the visual diff viewer

Adding Screenshot Steps

To capture screenshots during a test run, add Screenshot action steps to your test using the test builder.

The Screenshot action has two fields:

  • name (required) — A unique name for this screenshot within the test, used to match against baselines. Example: homepage-hero
  • fullPage (optional, default: false) — When enabled, captures the entire scrollable page instead of just the visible viewport.

Full-Page Screenshots

Use fullPage: true for comprehensive visual coverage. This captures everything on the page, including content below the fold. It is especially useful for landing pages, long forms, and content-heavy layouts where changes could happen anywhere on the page.
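To make the two fields concrete, here is a hypothetical shape for a Screenshot step as it might appear in an exported test definition. Only name and fullPage come from the documentation above; the surrounding keys (action, the interface name) are illustrative assumptions, not QA Studio's actual schema.

```typescript
// Hypothetical representation of a Screenshot step in a test definition.
// Only `name` and `fullPage` are documented fields; the rest is assumed.
interface ScreenshotStep {
  action: "screenshot";
  name: string;       // unique within the test; matched against baselines
  fullPage?: boolean; // default false: capture only the visible viewport
}

const heroShot: ScreenshotStep = {
  action: "screenshot",
  name: "homepage-hero",
  fullPage: true, // also capture content below the fold
};
```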

Place Screenshot steps at key points in your test — after page loads, after interactions, after animations settle. Each screenshot acts as a visual checkpoint that will be compared against its baseline on subsequent runs.

Running a Test with Screenshots

When you run a test that contains Screenshot steps, the runner captures a PNG image at each screenshot step and stores it as part of the run results. The first time you run a test with screenshots, there are no baselines to compare against, so every screenshot is recorded with a status of new.

Once baselines are set (see next section), subsequent runs will automatically compare each captured screenshot against its corresponding baseline and generate diff data.

Setting Baselines

Baselines are the "known good" reference screenshots that future runs are compared against. You set baselines from a passing test run.

Run the Test

Execute the test and verify that the application looks correct. This run will serve as your visual baseline.

Open the Baselines Panel

Navigate to the test's detail view and open the Baselines panel. This panel shows all existing baselines for the test and provides controls for managing them.

Set Baselines from Run

Click the "Set Baselines from Run" button. A dropdown lets you select which run to use. Choose the run that represents the correct visual state of your application.

Confirm

The system extracts all screenshots from the selected run and stores them as baselines. Each baseline is keyed by the screenshot step's name, so baselines are per-test, per-step.
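The per-test, per-step keying described above can be sketched as a nested map. This is an illustrative sketch under assumed names, not QA Studio's actual storage code.

```typescript
// Sketch: baselines keyed per-test, then per screenshot step name.
type BaselineStore = Map<string, Map<string, Uint8Array>>; // testId -> (screenshot name -> PNG bytes)

function setBaselinesFromRun(
  store: BaselineStore,
  testId: string,
  runScreenshots: Record<string, Uint8Array>, // screenshot name -> captured PNG
): void {
  const perTest = store.get(testId) ?? new Map<string, Uint8Array>();
  for (const [name, png] of Object.entries(runScreenshots)) {
    perTest.set(name, png); // the selected run becomes the reference image
  }
  store.set(testId, perTest);
}
```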

How Comparison Works

When a test runs and baselines exist, QA Studio performs pixel-level comparison using the pixelmatch library, with pngjs handling PNG parsing.

The comparison process for each screenshot step:

  1. The runner captures the current screenshot (the "actual" image)
  2. The system looks up the baseline image for this test + screenshot name combination
  3. If a baseline exists, pixelmatch compares the two images pixel by pixel
  4. A diff image is generated highlighting every pixel that differs (shown in magenta/red)
  5. A diff percentage is calculated representing the proportion of pixels that changed

The result is stored as a screenshot diff record containing references to the baseline image, actual image, diff image, and the calculated diff percentage.
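The comparison steps above can be illustrated with a simplified standalone sketch. The real system uses pixelmatch; this version only reproduces the core idea (exact per-pixel comparison of equal-sized RGBA buffers, a magenta-highlighted diff image, and a diff percentage) and all names here are assumptions.

```typescript
// Simplified illustration of pixel-level diffing (not the pixelmatch library).
interface DiffResult {
  diffPixels: number;           // count of pixels that differ
  diffPercent: number;          // proportion of changed pixels, as a percentage
  diffImage: Uint8ClampedArray; // actual image with changed pixels in magenta
}

function diffRgba(
  baseline: Uint8ClampedArray,
  actual: Uint8ClampedArray,
  width: number,
  height: number,
): DiffResult {
  const total = width * height;
  const diffImage = new Uint8ClampedArray(actual); // start from the actual image
  let diffPixels = 0;
  for (let i = 0; i < total; i++) {
    const o = i * 4; // RGBA stride: 4 bytes per pixel
    const same =
      baseline[o] === actual[o] &&
      baseline[o + 1] === actual[o + 1] &&
      baseline[o + 2] === actual[o + 2] &&
      baseline[o + 3] === actual[o + 3];
    if (!same) {
      diffPixels++;
      // highlight the changed pixel in magenta
      diffImage[o] = 255;
      diffImage[o + 1] = 0;
      diffImage[o + 2] = 255;
      diffImage[o + 3] = 255;
    }
  }
  return { diffPixels, diffPercent: (diffPixels / total) * 100, diffImage };
}
```

Note that pixelmatch itself also applies a perceptual color-difference threshold and anti-aliasing detection rather than the exact equality used here.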

Viewing Diffs

After running a test with baselines, open the run detail panel (RunDetailPanel) to inspect visual diffs. The panel includes a VisualDiffViewer component that provides a side-by-side comparison view.

Diff Viewer Layout

The visual diff viewer displays three images for each screenshot comparison:

  • Baseline — The stored reference image representing the expected visual state
  • Actual — The screenshot captured during the current test run
  • Diff — A generated image highlighting pixel differences between baseline and actual, with changed pixels shown in a contrasting color

Above the images, the diff percentage is displayed, giving you an immediate sense of how much has changed. A diff of 0% means the images are identical; higher percentages indicate more visual change.

Diff Statuses

Each screenshot comparison is assigned one of the following statuses:

  • match — The actual screenshot is identical to the baseline (0% diff). No visual regression detected.
  • mismatch — The actual screenshot differs from the baseline. The diff percentage and diff image show what changed. Requires review.
  • new — No baseline exists for this screenshot step. This is normal for the first run or when a new screenshot step is added.
  • approved — A mismatch was reviewed and approved. The baseline is updated to the new screenshot.
  • rejected — A mismatch was reviewed and rejected. The diff is flagged as a visual regression that needs to be fixed.
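The automatic portion of this status assignment can be sketched as a small helper. This is an assumed illustration, not QA Studio's actual code; approved and rejected are only ever set later by reviewer action.

```typescript
// Statuses a screenshot comparison can carry.
type DiffStatus = "match" | "mismatch" | "new" | "approved" | "rejected";

// Hypothetical helper: the status assigned right after comparison.
function initialStatus(hasBaseline: boolean, diffPercent: number): DiffStatus {
  if (!hasBaseline) return "new"; // first run, or a newly added screenshot step
  return diffPercent === 0 ? "match" : "mismatch"; // any mismatch requires review
}
```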

Approve Workflow

When a screenshot comparison results in a mismatch, you need to decide whether the change is intentional or a regression.

If the change is intentional (for example, you redesigned a component), click the Approve button on the diff viewer. Approving a diff:

  • Updates the baseline to the new (actual) screenshot
  • Sets the diff status to approved
  • Future runs will compare against the updated baseline

This effectively "accepts" the new visual state as the new standard.

Reject Workflow

If the change is unintentional (a visual regression), click the Reject button. Rejecting a diff:

  • Keeps the original baseline unchanged
  • Sets the diff status to rejected
  • Marks the diff as a known regression that needs to be fixed in the application

The rejected status serves as a flag for your team that a visual bug has been introduced and needs attention. Once the application is fixed and the test is re-run, the screenshot should match the baseline again.
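The two review transitions can be summarized in a short sketch. The names and shapes here are assumptions for illustration, not QA Studio's actual API; the key contrast is that approving replaces the baseline while rejecting leaves it untouched.

```typescript
// Sketch of the approve/reject transitions on a mismatched diff.
interface ScreenshotDiff {
  status: "mismatch" | "approved" | "rejected";
  baseline: Uint8Array; // reference image bytes
  actual: Uint8Array;   // image captured in the current run
}

// Approve: the actual image becomes the new baseline for future runs.
function approve(diff: ScreenshotDiff): ScreenshotDiff {
  return { ...diff, status: "approved", baseline: diff.actual };
}

// Reject: the baseline stays; the diff is flagged as a known regression.
function reject(diff: ScreenshotDiff): ScreenshotDiff {
  return { ...diff, status: "rejected" };
}
```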

Managing Baselines

The Baselines panel for each test shows all stored baselines, organized by screenshot name. From this panel you can:

  • View the current baseline image for each screenshot step
  • Delete individual baselines if they are no longer needed
  • Set new baselines from a different run if the current ones are outdated

Baseline Scope

Baselines are stored per-test, per-step. Each test maintains its own set of baselines keyed by the screenshot step name. Re-running a test overwrites the comparison images (actual and diff) but does not automatically update baselines — baselines only change when you explicitly set them from a run or approve a diff.

Best Practices

  • Use descriptive screenshot names — names like login-form, dashboard-loaded, or settings-modal make it clear what each baseline represents.
  • Wait for stability — add Wait steps before screenshots to ensure animations, transitions, and lazy-loaded content have finished rendering.
  • Set baselines from clean runs — only set baselines from runs where the application is in its correct, expected visual state.
  • Review diffs promptly — mismatches left unreviewed accumulate and make it harder to distinguish intentional changes from regressions.
  • Use full-page screenshots strategically — full-page captures provide thorough coverage but produce larger images and are more sensitive to layout shifts in unrelated page areas.
  • Keep viewport consistent — visual diffs are sensitive to viewport size. Use consistent browser configuration across runs to avoid false positives from viewport differences.