Paul Hammant's Blog:
Selenium Component Testing and visual documentation
A few days ago I posted "UI component testing revisited" and "Cypress component testing", the latter being a Cypress version of the Playwright component test from earlier in the week (and a refresher on the ideas).
Selenium Component Testing
Branch: selenium_instead_of_playwright
Selenium Visual Implementation: Complete Test Documentation Through Screenshots
The Selenium implementation takes a similar visual-first approach, generating detailed screenshots for each test interaction. Here’s the identical Test Harness Component Testing pattern implemented with Selenium WebDriver, producing the same visual layout as both Playwright and Cypress implementations.
Example Initial Component State
The Selenium implementation produces the same visual pattern as Playwright and Cypress:
- Component Under Test (blue border) - The actual React component being tested
- Test Harness State (green border) - Shows the parent component state that would exist in the real app
- Event Log (yellow border) - Traces the complete interaction history for debugging
Units Conversion Cycle Component Test
Initial State: Metric Mode (Switch to mph available)
After Click: Imperial Mode (Switch to km/h available)
After Second Click: Back to Metric Mode
The Selenium implementation captures the same interaction flow:
- Button text changing from “Switch to mph” → “Switch to km/h” → “Switch to mph”
- Harness state updating from “METRIC (km/h)” → “IMPERIAL (mph)” → “METRIC (km/h)”
- Event log accumulating each interaction: “Units changed to imperial” → both events visible
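The cycle above reduces to a two-state toggle. Here is a minimal model of it, runnable without any browser; the real React component lives in the car-doppler repo and is not shown in this post, so the function names and the second event-log string are assumptions, while the labels are taken from the screenshots:

```typescript
// Minimal model of the units-toggle cycle shown in the screenshots above.
// Assumption: the component is a plain two-state toggle; names here are
// illustrative, the label strings come from the post's screenshots.
type Units = 'metric' | 'imperial';

// The button advertises the *other* mode, so its label flips with the state.
function buttonLabel(units: Units): string {
  return units === 'metric' ? 'Switch to mph' : 'Switch to km/h';
}

// The harness panel (green border) shows the parent-state view of the mode.
function harnessLabel(units: Units): string {
  return units === 'metric' ? 'METRIC (km/h)' : 'IMPERIAL (mph)';
}

// Each click toggles the mode and appends one entry to the event log.
function clickToggle(units: Units, eventLog: string[]): Units {
  const next: Units = units === 'metric' ? 'imperial' : 'metric';
  eventLog.push(`Units changed to ${next}`);
  return next;
}
```

A component test, in any of the three frameworks, is essentially asserting this model's outputs against the rendered DOM after each click.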
Architecture: External Test Server vs Built-in Mounting
As an experiment in testing framework diversity, the car-doppler project also implemented the same Test Harness Component Testing approach using Selenium WebDriver with Jest, providing an interesting comparison point to the Playwright implementation.
The Selenium approach uses a different architecture from Playwright’s built-in component mounting:
Selenium + Jest Setup:
- Spawns an external component test server (`server.ts`); tests run against real HTTP endpoints
- More complex setup, but closer to a production environment
- Browser automation via the WebDriver protocol
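For illustration, here is a hedged, minimal sketch of what such an external component test server could look like using only `node:http`. The repo's real `server.ts` uses Express and server-side React rendering; the `/harness` route and the test-ids below are assumptions:

```typescript
// Hedged sketch of an external component test server in the spirit of the
// repo's server.ts. The real one uses Express plus server-side React
// rendering; this stand-in serves a static page shell via node:http, and
// the test-ids and /harness route are assumptions.
import * as http from 'node:http';

// Pure page-shell builder, so the markup can be checked without a browser.
function harnessPage(componentName: string): string {
  return `<!doctype html>
<html>
  <body>
    <div data-testid="component-under-test">${componentName} renders here</div>
    <div data-testid="harness-state">METRIC (km/h)</div>
    <ol data-testid="event-log"></ol>
  </body>
</html>`;
}

function startComponentTestServer(port: number): http.Server {
  return http
    .createServer((req, res) => {
      // e.g. GET /harness?component=SpeedDisplay
      const url = new URL(req.url ?? '/', `http://localhost:${port}`);
      const component = url.searchParams.get('component') ?? 'Unknown';
      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end(harnessPage(component));
    })
    .listen(port);
}
```

The point of the extra moving part is that WebDriver then exercises a real page over real HTTP, exactly as production traffic would.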
Key Architectural Difference:
Component-Under-Test == CUT
Playwright: mount(<CUT />) → virtual DOM → browser context
Selenium: CUT in a page on an HTTP server → real DOM → WebDriver → browser automation
Performance Trade-offs and Optimization Journey
For the Selenium Implementation:
- One Shared WebDriver instance across all tests
- Single browser navigation + page replacement strategy
- Optional screenshot generation (SKIP_SCREENSHOTS env var)
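The three optimizations above can be sketched as follows. The driver shape is a structural subset so the snippet runs without a browser; selenium-webdriver's `WebDriver` provides matching `get()` and `takeScreenshot()` methods, and the helper bodies are assumptions:

```typescript
// Sketch of the optimization strategy: one shared driver, page replacement
// via navigation, and screenshots that can be switched off wholesale.
// MinimalDriver is a structural stand-in for selenium-webdriver's WebDriver.
interface MinimalDriver {
  get(url: string): Promise<void>;          // navigation replaces the page in place
  takeScreenshot(): Promise<string>;         // base64 PNG in selenium-webdriver
}

let sharedDriver: MinimalDriver | null = null;

// One shared WebDriver instance across all tests; created lazily, reused after.
function getDriver(create: () => MinimalDriver): MinimalDriver {
  if (!sharedDriver) sharedDriver = create();
  return sharedDriver;
}

// Screenshots are skipped entirely when SKIP_SCREENSHOTS is set.
async function maybeScreenshot(driver: MinimalDriver): Promise<string | null> {
  if (process.env.SKIP_SCREENSHOTS) return null;
  return driver.takeScreenshot();
}
```

Reusing one browser process avoids paying WebDriver session-creation costs per test, which is where most of the per-test overhead would otherwise go.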
Selenium Performance Analysis (100 iterations of an identical 'best case' test, headless):
- WITH Screenshots: 1.248s average per test (≈0.8 tests per second)
- WITHOUT Screenshots: 0.585s average per test (≈1.7 tests per second)
The browser was Firefox (headless), and the tests control the component within the test harness. Admittedly, this best-case test has no interactions, so more interactive tests would be slower.
The Performance Gap
Despite aggressive optimization, the 100-iteration analysis reveals Selenium’s consistent performance disadvantage due to architectural differences. While Selenium is only marginally slower than Playwright without screenshots (0.585s vs 0.517s), the gap widens with screenshots. Both are much slower than Cypress’s optimized performance.
Process Architecture - The “Hops” Problem:
Selenium (Many Hops):
Jest Process → HTTP → Express Server → Server-Side React Render →
HTTP Response → WebDriver JSON Protocol → geckodriver → Firefox Process →
DOM Updates → WebDriver Response → HTTP → Jest
Playwright Component Testing (Minimal Hops):
Jest Process → Vite Dev Server (same process) →
Direct Component Mount → Embedded Chromium → DOM → Direct Response
Practical Advice for Selenium-using Teams Today
If you need Selenium-level browser coverage, accept the roughly one-second-per-test tax as the cost of cross-browser compatibility. Definitely bypass all interstitial steps/pages and go directly to the component under test.
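"Directly to the component under test" in practice means a deep-link URL to the harness page rather than clicking through the app. A tiny hypothetical builder; the `/harness` path and `component` query parameter are assumptions, not from the repo:

```typescript
// Hypothetical deep-link builder: navigate straight to the harness page for
// one component instead of walking through the app's interstitial pages.
function componentUrl(base: string, component: string): string {
  const url = new URL('/harness', base);
  url.searchParams.set('component', component);
  return url.toString();
}

// Usage (sketch): await driver.get(componentUrl('http://localhost:3000', 'SpeedDisplay'));
```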
When Selenium Component Testing Makes Sense
**Use Cases:**
- High-stakes UI components where visual regression is critical
- Stakeholder communication - screenshots are self-explanatory
- Complex interaction flows that benefit from step-by-step visual documentation
Integration with WebDriver Infrastructure
For teams already using Selenium for E2E testing, extending the same infrastructure to component testing provides consistency:
- Same browser automation patterns
- Some shared WebDriver utilities and helpers
- Consistent screenshot/video capture approaches
- Single testing technology stack
Component test-base: Playwright vs Selenium
| Topic | Playwright test-base | Selenium test-base |
|---|---|---|
| Core library | `@playwright/test` + `@playwright/experimental-ct-react` | `selenium-webdriver` (Jest/Mocha wrapper) |
| Browser control | In-process Playwright browsers (`chromium`, `firefox`, `webkit`) | Remote/WebDriver sessions (`Builder`, `By`, `until`) |
| Startup / teardown | Handled by Playwright fixtures; no manual server code | Helpers `startTestServer()` / `stopTestServer()` spin up a dev server on demand |
| Global fixtures | `test`, `page`, automatic context per test | Custom driver via `getDriver()`; shared helpers |
| DOM access | `page.locator()`, `page.getByRole()`, `page.getByTestId()` | `driver.findElement(By.css('[data-testid="…"]'))` + helper wrappers |
| Wait strategy | Auto-wait built in; `expect(locator).toBeVisible()` | Explicit `driver.wait(until.elementLocated())` for every action/assert |
| Assertions | Playwright's `expect` matchers | Node `assert` / Jest `expect` |
| Timeouts | Defaults to ~30s per Playwright | Test-suite timeout manually bumped (e.g. `jest.setTimeout(60000)`) |
| Artifacts | Trace viewer, video, screenshots configurable | No built-ins; screenshots need extra code |
| File naming | `*.ct.playwright.test.ts(x)` | `*.ct.selenium.test.ts(x)` |
| Dependencies | `@playwright/*`, Playwright CT config | Adds `selenium-webdriver`, `ts-jest`, `@types/selenium-webdriver`; removes all Playwright packages |
| Typical helpers (from diff) | N/A (built into Playwright) | `loadTestHarness(url)`, `findElementByTestId(id)`, `clickElementByTestId(id)`, `getTextByTestId(id)` |
| Speed / flakiness | Faster, less boilerplate; auto-waiting reduces flake | Slower, more boilerplate; explicit waits required |
Overall, the Selenium version replicates Playwright functionality but requires custom scaffolding for server control, waiting, and assertions, resulting in more verbose and potentially slower tests.
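To make that scaffolding concrete, here is a hedged reconstruction of the helper wrappers named in the table. To stay runnable without a browser it uses plain CSS selector strings against a structural driver interface; the real helpers build on selenium-webdriver's `By.css` and `until.elementLocated`, and the wrapper bodies and test-ids below are assumptions:

```typescript
// Hedged reconstruction of the helper wrappers from the comparison table.
// Assumptions: the structural Drv/Elem interfaces and the test-id values;
// the real helpers use selenium-webdriver (By.css, until.elementLocated).
interface Elem {
  getText(): Promise<string>;
  click(): Promise<void>;
}

interface Drv {
  get(url: string): Promise<void>;
  findElement(cssSelector: string): Promise<Elem>;
}

// data-testid selectors, as used throughout the harness pages
const byTestId = (id: string) => `[data-testid="${id}"]`;

async function loadTestHarness(driver: Drv, url: string): Promise<void> {
  await driver.get(url);
  // the real helper waits via until.elementLocated before proceeding
  await driver.findElement(byTestId('component-under-test'));
}

async function getTextByTestId(driver: Drv, id: string): Promise<string> {
  const el = await driver.findElement(byTestId(id));
  return el.getText();
}

async function clickElementByTestId(driver: Drv, id: string): Promise<void> {
  const el = await driver.findElement(byTestId(id));
  await el.click();
}
```

Each wrapper bakes the explicit-wait boilerplate into one place, which is most of the "custom scaffolding" that Playwright's auto-waiting makes unnecessary.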
Serial vs parallel test execution
The testing in these blog entries was serial execution on a single machine or VM. Parallel and distributed execution capabilities vary significantly and are worth summarizing:
- Cypress: Limited parallel execution (Cypress Cloud/Dashboard for CI, but component tests typically run serially)
- Playwright: Excellent built-in parallel execution (`--workers=4`), and can distribute across multiple machines
- Selenium: Superior distributed execution via Selenium Grid, supporting large-scale parallel runs across browser farms, plus the 10+ SaaS vendors offering commercial Selenium grids (and more)
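For reference, the relevant command-line knobs are Playwright's `--workers` and Jest's `--maxWorkers` (both real, documented flags); Selenium Grid distribution is instead configured where the driver session is built, not on the test-runner command line:

```shell
# Playwright: run component tests on four parallel workers
npx playwright test --workers=4

# Jest (driving Selenium): raise the worker count for parallel test files
npx jest --maxWorkers=4
```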
Conclusion
Selenium component testing feels more production-like, even in test environments. While it comes with performance trade-offs compared to Cypress or (less so) Playwright, it still yields compelling test evidence, potentially across multiple browsers, that is immediately understandable to stakeholders. And with parallel/distributed execution via the likes of Selenium Grid and commercial services, the wait for news, good or bad, can be reduced.
The key insight is that Test Harness Component Testing works equally well across different automation frameworks, allowing teams to choose based on their specific priorities: speed (Cypress), cross-browser capabilities (Playwright), or comprehensive visual documentation and production-like environments (Selenium).