UI Component Testing Revisited: Modern Implementation with Visual Verification
Eight years ago, I wrote about UI Component Testing - the idea of testing UI components in isolation within test harnesses, focusing on the smallest reasonable rectangle while maintaining realistic event coupling. The goal was to be closer to the speed of unit tests with the confidence of integration tests, targeting “multiple tests per second” throughput.
Fast forward to 2025, and I’ve implemented this pattern in a real-world React TypeScript application with some fascinating modern twists. This post explores both the evolution of the testing approach and the unique challenges of building a browser-based Doppler speed detection app.
The Application: Doppler Speed Detection in the Browser
Before diving into the testing implementation, let me describe the application that provided the testing canvas. “Car doppler” is a React + TypeScript web application that attempts to detect vehicle speeds using Doppler shift analysis through your device’s microphone.
The app analyzes audio in real-time, looking for frequency shifts that indicate approaching or receding vehicles. When a car drives past at 40 mph in a 30 mph zone, the app attempts to calculate the speed based on the Doppler effect - though as prominently warned throughout the application, the calculations are wildly inaccurate and should never be used for law enforcement or any official speed measurements. This is an experimental proof-of-concept for browser audio processing, not a precision instrument.
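To make the physics concrete, here is a back-of-envelope sketch of the calculation involved. It is illustrative only - not necessarily the algorithm the doppler library uses - and the function name and example frequencies are mine.

```typescript
// Rough Doppler arithmetic for a vehicle driving past a stationary microphone:
// the tone it emits is heard higher-pitched while approaching (fApproach)
// and lower-pitched while receding (fRecede).

const SPEED_OF_SOUND_MS = 343; // metres per second in air at ~20°C

/** Estimate vehicle speed (m/s) from the pitch heard before and after it passes. */
function speedFromDopplerShift(fApproach: number, fRecede: number): number {
  return (SPEED_OF_SOUND_MS * (fApproach - fRecede)) / (fApproach + fRecede);
}

// Example: a tone heard at 1070 Hz approaching and 965 Hz receding works out to
// roughly 17.7 m/s, which is about 40 mph.
const metresPerSecond = speedFromDopplerShift(1070, 965);
const mph = metresPerSecond * 2.23694;
console.log(`${mph.toFixed(1)} mph`); // ≈ 39.6 mph
```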
Why a Web App Instead of Native Mobile?
This type of application would arguably have better user experience as a native iOS or Android app - better microphone access, background processing, and more natural mobile UX patterns. However, as any developer who’s dealt with app store submissions knows, the overhead is substantial:
- App store review processes and policies
- Developer account fees and maintenance
- Platform-specific development or cross-platform complexity
- Distribution and update mechanisms
- Content policy compliance (speed detection might raise flags)
A web app sidesteps all of this drama. Deploy to GitHub Pages, share a URL, and users can immediately access the functionality. The trade-off is dealing with browser limitations and web platform constraints.
Technical Challenges: WASM-SIMD and the Browser Environment
The audio processing pipeline presented several interesting technical challenges that influenced both the component testing approach and some fundamental technology choices.
WASM with SIMD: Performance at a Cost
For the computationally intensive FFT (Fast Fourier Transform) operations required for Doppler analysis, the app uses WebAssembly with SIMD (Single Instruction, Multiple Data) optimizations. This provides near-native performance for audio processing, but comes with complexity. Presently, making a Node-centric solution use a WASM piece and work a) in Node itself for compile/test, and b) on the web itself, is hard. Hard enough for me to hive off the algorithm work into a separate project and use “Runtime-Linking” (not build-time linking) to bring it into the solution as, in effect, a doppler library - as opposed to a TypeScript component/service among many within one solution. My project for that is https://github.com/paul-hammant/Car-Speed-Via-Doppler-Library and it auto-deploys to GitHub Pages on git-push.
The FFT tech itself is from https://github.com/echogarden-project/pffft-wasm, which purports to deliver a .wasm file for use in a web environment. The prod code is web-centric (no npm, no Node.js); the test suite uses regular npm/node choices, with tests running in a real browser via Playwright. The build of pffft-wasm is hard, so I checked in two .wasm files to shortcut the rest of that repo’s build a little: one SIMD-enabled, one SIMD-disabled. You can press the “test now” buttons for the three implementations on the GitHub Pages site: https://paul-hammant.github.io/Car-Speed-Via-Doppler-Library/ Despite significant effort, I could not get the SIMD-enabled implementation working reliably across browsers - it remains a known limitation.
Another challenge is that SIMD support and performance vary across browsers and devices. The app needs to degrade gracefully through multiple fallback strategies, and each path needs testing. This is where the component testing approach becomes valuable - you can test the UI behavior across different WASM loading states without actually loading WASM in every test.
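As a sketch of what that degradation could look like at the loading layer - with the function names, the .wasm filenames, and the FFT signature below being illustrative rather than the repo’s actual API - the three-way fallback is roughly:

```typescript
// A sketch of the three-way fallback: WASM with SIMD, WASM without SIMD, then pure
// JavaScript. Names and file paths are illustrative, not the actual library API.

type FftImpl = (samples: Float32Array) => Float32Array;

// Instantiate one of the checked-in pffft-wasm builds. Marshalling audio samples in and
// out of WASM linear memory is elided; the exported function name 'fft' is assumed.
async function loadWasmFft(wasmUrl: string): Promise<FftImpl> {
  const { instance } = await WebAssembly.instantiateStreaming(fetch(wasmUrl));
  if (typeof instance.exports.fft !== 'function') {
    throw new Error(`No fft export in ${wasmUrl}`);
  }
  return (samples) => {
    // Real code would copy samples into WASM memory, call the export, and read back
    // the magnitude spectrum; returning the input keeps this sketch type-correct.
    return samples;
  };
}

// Dependable last resort: a naive O(n^2) JavaScript DFT returning magnitudes.
const pureJsFft: FftImpl = (samples) => {
  const n = samples.length;
  const magnitudes = new Float32Array(n);
  for (let k = 0; k < n; k++) {
    let re = 0;
    let im = 0;
    for (let t = 0; t < n; t++) {
      const angle = (-2 * Math.PI * k * t) / n;
      re += samples[t] * Math.cos(angle);
      im += samples[t] * Math.sin(angle);
    }
    magnitudes[k] = Math.hypot(re, im);
  }
  return magnitudes;
};

async function pickFftImplementation(): Promise<FftImpl> {
  try {
    // Try the SIMD build first; engines without SIMD support reject it at compile time.
    return await loadWasmFft('pffft-simd.wasm');
  } catch {
    try {
      return await loadWasmFft('pffft-nosimd.wasm');
    } catch {
      return pureJsFft;
    }
  }
}
```

In the component tests, none of this needs to actually run: the UI’s “which engine am I on?” states can be driven by the test harness directly.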
Node.js Modules in the Browser: The require() problem
Modern JavaScript development often involves libraries originally written for Node.js that use require() statements. Browsers don’t support require() - they use ES6 import/export syntax. The car-speed-via-doppler-analysis library had to be carefully architected to avoid Node.js-specific patterns:
// ❌ Not allowed in browser context
const fs = require('fs');
const { SomeUtil } = require('./utils');
// ✅ Browser-compatible ES6 modules
import { SomeUtil } from './utils.js';
This constraint is well known: When will CommonJS modules (require) be deprecated and removed? - issue on GitHub.
The complexity for me is that getting AIs to help with a dual need - integration tests that must pass under Node execution AND in a first-class browser - is hard. Not only in the browser, but coupled to three modes of operation: 1. WASM with SIMD enabled, 2. WASM with SIMD disabled, 3. pure JavaScript. That is a lot of permutations to juggle concurrently while holding to a non-functional requirement that “all tests must pass”. So at one point I abandoned the node/npm-centric core of the doppler library - https://github.com/echogarden-project/pffft-wasm - and moved to runtime-linkage on the web only.
Modern Component Testing: The 2025 Implementation
The 2017 blog post described the concept; here’s how it looks implemented with modern tooling.
“Component Under Test” in a Test Harness
Following the original article, each component test creates a test harness that simulates how the component would be used in the real application. The key insight remains the same: test both the component AND the test harness state through realistic event coupling.
The visual layout implements the pattern perfectly (a sketch of such a harness follows the list):
- Component Under Test (blue border) - The actual React component being tested
- Test Harness State (green border) - Shows the parent component state that would exist in the real app
- Event Log (yellow border) - Traces the complete interaction history for debugging
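Here is a sketch of what such a harness might look like. The real ControlsTestHarness will differ; in particular the Controls component’s import path and its props (isRecording, onRecordingChange) are assumptions, while the data-testid values match the test shown in the next section.

```tsx
// ControlsTestHarness.tsx - a sketch; Controls' import path and props are assumed.
import React, { useState } from 'react';
import { Controls } from '../src/components/Controls'; // path assumed

export const ControlsTestHarness: React.FC<{ testName: string }> = ({ testName }) => {
  const [isRecording, setIsRecording] = useState(false);
  const [eventLog, setEventLog] = useState<string[]>([]);
  const log = (entry: string) => setEventLog((prev) => [...prev, entry]);

  return (
    <div>
      <h3>{testName}</h3>

      {/* Component Under Test (blue border) */}
      <div style={{ border: '2px solid blue', padding: 8 }}>
        <Controls
          isRecording={isRecording}
          onRecordingChange={(recording: boolean) => {
            setIsRecording(recording);
            log(recording ? 'Recording started' : 'Recording stopped');
          }}
        />
      </div>

      {/* Test Harness State (green border) - the parent state the real app would hold */}
      <div style={{ border: '2px solid green', padding: 8 }}>
        Recording:{' '}
        <span data-testid="harness-recording-state">{isRecording ? 'ON' : 'OFF'}</span>
      </div>

      {/* Event Log (yellow border) - interaction trace for debugging */}
      <div data-testid="event-log" style={{ border: '2px solid yellow', padding: 8 }}>
        {eventLog.map((entry, i) => (
          <div key={i}>{entry}</div>
        ))}
      </div>
    </div>
  );
};
```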
Dual Assertions: Component + Harness
Here’s what the 2017 concept looks like in modern TypeScript with Playwright:
import { test, expect } from '@playwright/experimental-ct-react';
import { ControlsTestHarness } from './ControlsTestHarness'; // import path illustrative

test('demonstrates event coupling - recording toggle', async ({ mount, page }) => {
  const component = await mount(
    <ControlsTestHarness testName="Recording Toggle Event Coupling" />
  );

  // Assert on COMPONENT (traditional component testing)
  await expect(component.getByTestId('record-button')).toContainText('Start');

  // Assert on TEST HARNESS (the 2017 insight)
  await expect(component.getByTestId('harness-recording-state')).toContainText('OFF');

  // 5 second pause to see initial state
  await page.waitForTimeout(5000);

  // User interaction
  await component.getByTestId('record-button').click();

  // 5 second pause to see the change
  await page.waitForTimeout(5000);

  // Assert BOTH updated via event coupling
  await expect(component.getByTestId('record-button')).toContainText('Stop');
  await expect(component.getByTestId('harness-recording-state')).toContainText('ON');

  // Assert on EVENT COUPLING trace
  await expect(component.getByTestId('event-log')).toContainText('Recording started');
});
Visual Verification: Screenshots
One major testing-tech evolution since 2017 is the ability to easily capture visual verification. Each test automatically generates:
- Screenshots at each interaction point
- Interactive browser sessions for debugging
This addresses one of the original challenges: convincing stakeholders that component testing provides real value. When you can show screenshots of the test harness demonstrating realistic user interactions, it becomes more tangible to non-developer collaborators and stakeholders.
Of course Nx, Cypress, and Storybook popularized this way of working for TypeScript/JavaScript development. And that tooling hasn’t stood still either - you can now mock interactions with backend services in the same tech. We’re using Playwright as mentioned (not Cypress), and not Selenium (I was co-creator of v1), but either of those two would be possible here, and possibly just as fast in execution.
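As one illustration of that backend mocking, Playwright can intercept network calls and answer them from the test itself. The endpoint, payload, and test id below are made up for illustration; the car-doppler app itself is a static, microphone-driven page.

```typescript
// A sketch of Playwright network stubbing; the '/api/speed-log' endpoint is hypothetical.
import { test, expect } from '@playwright/test';

test('degrades politely when a (hypothetical) speed-log service is down', async ({ page }) => {
  await page.route('**/api/speed-log', (route) =>
    route.fulfill({
      status: 503,
      contentType: 'application/json',
      body: JSON.stringify({ error: 'unavailable' }),
    })
  );

  await page.goto('/');
  await expect(page.getByTestId('speed-log-status')).toContainText('unavailable');
});
```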
Test Execution: Multiple Modes
The testing setup supports different execution modes for different purposes (a config sketch follows the list):
# Bottom of the test pyramid - fastest units/specs
npm test
# Fast feedback during component testing during development
npm run test:ct
# Visual verification and demos - I slowed this down to take screenshots.
npm run test:ct:headed
# Interactive eyeball debugging
npm run test:ct:ui
# Specific component test focus
npm run test:ct:headed -- UnitsConversion.ct.test.tsx
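Those scripts point at a Playwright component-testing configuration. A minimal sketch of such a config - not the repo’s actual file - looks something like this:

```typescript
// playwright-ct.config.ts - a minimal sketch, not the project's actual configuration
import { defineConfig, devices } from '@playwright/experimental-ct-react';

export default defineConfig({
  testDir: './src',
  testMatch: '**/*.ct.test.tsx', // e.g. UnitsConversion.ct.test.tsx
  fullyParallel: true,
  use: {
    ctViewport: { width: 900, height: 600 },
    screenshot: 'on',            // the visual-verification evidence discussed above
    trace: 'on-first-retry',
  },
  projects: [{ name: 'chromium', use: { ...devices['Desktop Chrome'] } }],
});
```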
Real-World Example: Units Conversion Testing
The units conversion functionality (mph to/from km/h) provides a perfect example of the pattern in action:
Before Playwright interaction:
After Playwright interaction:
After one more Playwright interaction:
The test demonstrates:
- Initial state: “Switch to mph” button, harness shows “METRIC (km/h)”
- After click: “Switch to km/h” button, harness shows “IMPERIAL (mph)”
- After second click: Back to original state, event log shows complete history
This single test verifies (see the sketch after this list):
- Component visual state changes ✓
- Parent component state management ✓
- Event coupling integrity ✓
- User interaction flow ✓
- Debugging trace ✓
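A sketch of what that test might look like follows; the harness component name and the test ids are assumed, modelled on the recording-toggle test earlier, while the button labels and harness text come from the screenshots above.

```tsx
// UnitsConversion.ct.test.tsx - a sketch; UnitsTestHarness and the test ids are assumed.
import { test, expect } from '@playwright/experimental-ct-react';
import { UnitsTestHarness } from './UnitsTestHarness';

test('units toggle - component, harness and event log stay in step', async ({ mount }) => {
  const component = await mount(<UnitsTestHarness testName="Units Conversion" />);

  // Initial state: metric
  await expect(component.getByTestId('units-toggle')).toContainText('Switch to mph');
  await expect(component.getByTestId('harness-units-state')).toContainText('METRIC (km/h)');

  // First click: component and harness both flip to imperial
  await component.getByTestId('units-toggle').click();
  await expect(component.getByTestId('units-toggle')).toContainText('Switch to km/h');
  await expect(component.getByTestId('harness-units-state')).toContainText('IMPERIAL (mph)');

  // Second click: back to the original state, with the full history in the event log
  await component.getByTestId('units-toggle').click();
  await expect(component.getByTestId('harness-units-state')).toContainText('METRIC (km/h)');
  await expect(component.getByTestId('event-log')).toContainText('Units changed to imperial');
  await expect(component.getByTestId('event-log')).toContainText('Units changed to metric');
});
```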
Performance: Achieving the 2017 prediction
The original post aimed for “multiple tests per second.” With modern hardware and optimized tooling, we’re achieving:
- Unit tests: 2-5ms each (200-500 tests per second)
- Component tests: sub-second each (ignoring browser startup)
- Full app tests: 5-15 seconds (complete user workflows)
I’m cheating a little - running the same component test 100 times, then dividing the total time by 100, in order to factor out test startup time. Breaking down the CT times:
- WITH Screenshots we have an average per test of 0.730s (1.37 tests per second)
- WITHOUT Screenshots we have an average per test of 0.517s (1.93 tests per second)
At least on my Chromebook (i7-1265U w/ 32 GB RAM), that’s the perf. Component tests of this nature might hit 6+ tests a second on an M4 Mac. More importantly, they deliver visual verification, realistic event coupling, and stakeholder-demonstrable test evidence.
The fundamental component testing insight remains the same as in the 2017 blog entry: test the smallest reasonable rectangle with representative event coupling to the surrounding UI, even if that surrounding piece is not prod code. The tooling these days makes this easy.
Lessons Learned
What Worked Well
- Visual Verification - Screenshots and videos provide compelling evidence of test value
- Event Coupling - Testing both component and harness state catches integration bugs unit tests miss
- Stakeholder Communication - Non-technical stakeholders can understand what these tests verify
- Debugging Aid - Event logs provide clear audit trails for interaction flows
Challenges and Trade-offs
- Test Speed - Component tests are slower than unit tests, requiring careful test pyramid balance
- Maintenance - Test harnesses require maintenance as components evolve
- Tooling Complexity - Setting up browser-based testing requires more infrastructure
- Flakiness - Browser tests can be less reliable than pure unit tests
When to Use This Pattern
Good Candidates:
- Complex interactive components with state management
- Components with multiple interaction modes
- Critical user workflow components
- Components that integrate with external systems
Poor Candidates:
- Simple presentational components
- Pure computational functions
- Components with minimal user interaction
The Future: AI and Component Testing
Looking forward, I’m curious how AI coding assistants will interact with this testing pattern. The visual nature of the test harnesses and event coupling traces might provide excellent training data for AI systems to understand component behavior and generate more sophisticated tests.
Conclusion
The UI component testing pattern from 2017 is mainstream now, for TypeScript/React/Angular teams at least.
Visual verification capabilities have transformed this class of tests from pure validation tools into communication and documentation aids. When stakeholders can review test screenshots and see exactly how a component behaves under different conditions, the value proposition becomes undeniable. Again, Nx/Storybook/Cypress really pushed this for many years.
For teams building complex React or Angular applications, especially those dealing with real-time data processing, audio/video manipulation, or other browser-constrained scenarios, this testing approach provides a robust foundation for maintaining component quality while supporting rapid development iteration.
Repos
The complete React-app source code and test implementations are available in the github.com/paul-hammant/car-doppler repo.
The doppler library is in its own repo, [github.com/paul-hammant/Car-Speed-Via-Doppler-Library](https://github.com/paul-hammant/Car-Speed-Via-Doppler-Library), but it needs better WAV files for testing, and more strategies for detecting various things.