Automated testing is a process used to validate functionality and conformity during development and deployment.

Testing is hard. There are so many tools and techniques out there that it’s hard to figure out the right way of doing things, especially when it comes to React and front-end development.

Automated testing used to be more common in back-end projects, and because of this, some patterns and best practices are already in place. Thankfully, front-end development is finally catching up.

We can’t cover everything about testing in one post, so instead we’ll explore the guiding concepts and principles, with examples of some common scenarios.

Why is testing important?

It might not seem obvious why automated testing is needed in the first place; after all, we already make sure newly developed features work as expected during development.

The main strength of automated testing comes from its ability to ensure our application continues working as expected in the future. It guards against unintended behavior changes and bugs.

We can sort its benefits into three categories:

  1. During feature development: Having tests helps us stick to the acceptance criteria and gives us a clear guide to when something is “done”.
  2. In the future: When a feature is well tested, it’s guarded against future changes in other parts of the application that can affect its behavior. We also have more confidence to refactor the implementation since we know tests will catch any behavior changes.
  3. Teamwork: Tests act as documentation, helping developers understand what a feature’s or unit’s responsibility is. This facilitates project onboarding and collaboration.

In general, we test to gain confidence that our application is working now and will keep working in the future, and to better document its behavior.


To help us navigate the different tools and techniques of testing, we have the following principles to guide our decision-making; we should aim to stay aligned with them whenever we make decisions.

Write tests. Not too many. Mostly integration

The first part is obvious: tests are good, and we should write them. However, tests are often overlooked and come as an afterthought, especially on tight deadlines. Tests make it easier to collaborate and refactor; fewer tests mean less maintainability.

Tests should be part of the “feature work” and not something that gets added later when we have time.

Unit, integration, and E2E are all different types of tests, each with its own tradeoffs.

Unit tests are fast but don’t guarantee that different units will work together to produce the expected results, and they usually require extensive mocking.

E2E tests give the most confidence and rarely require any mocking at all, but they are slow and expensive to run.

Integration tests come in between, giving us great confidence that our applications and features are working as expected while staying fast and cheap enough.

That said, not everything needs to be an integration test. Unit tests are best for pure utility functions, and E2E tests come in pretty handy to make sure critical flows are working as expected across all the different layers.

Avoid testing implementation details

We test to gain confidence that our application will continue working as expected through future changes and refactors. If tests depend on how the logic is implemented, they will break whenever the test subject is refactored.

These are false negatives: tests that fail even though the behavior is unchanged. We should instead focus on testing the application’s or feature’s behavior, no matter how it’s implemented.

This type of testing is called “black-box testing”, which means manipulating the test subject through its public API without “seeing” or worrying about its inner workings.

A common example of this mistake in front-end development is testing against state changes instead of their effects.

Only mock when necessary

The need for mocking is usually an indicator of tight coupling. This is most apparent in unit tests: to isolate a certain unit, we usually need to mock all its dependencies.

Mocking goes hand in hand with testing implementation details. A common practice is to mock an internal method of a dependency, which is an implementation detail; the test will break if that function no longer exists or its interface changes.

With integration testing, the need to mock all dependencies is greatly reduced since the test will cover the final behavior regardless of what and how dependencies are used.

However, it’s not always possible to avoid mocking. One example is timing functions: tests would take much longer without mocking functions like setInterval and setTimeout.
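To illustrate why, here is a minimal hand-rolled fake timer in plain JavaScript; real suites would use something like Jest’s jest.useFakeTimers(), and the notifyAfterDelay and runAllTimers helpers are hypothetical, made up for this sketch.

```javascript
// Minimal fake timer: collect scheduled callbacks instead of waiting
// for real time to pass (real suites would use jest.useFakeTimers())
const scheduled = [];
const realSetTimeout = global.setTimeout;
global.setTimeout = (callback, delay) => {
  scheduled.push({ callback, delay });
  return scheduled.length; // fake timer id
};

// A unit that would normally force the test to wait 10 seconds
function notifyAfterDelay(onNotify) {
  setTimeout(() => onNotify('time is up'), 10_000);
}

// "Advance" time synchronously by flushing the scheduled callbacks
function runAllTimers() {
  while (scheduled.length > 0) {
    scheduled.shift().callback();
  }
}

let message = null;
notifyAfterDelay((msg) => { message = msg; });
runAllTimers(); // the 10-second wait completes instantly

console.log(message); // → 'time is up'
global.setTimeout = realSetTimeout; // restore the real timer
```

The test observes the same behavior (the notification fires) without the suite ever sleeping.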

Another example is API requests. Unit and integration tests should run in isolation and not affect anything outside of the test scope, so if the test subject communicates with a remote service through an API, it’s crucial to mock this connection. Tools like MSW enable us to mock API requests without being coupled to a particular implementation, so we can test the behavior of the request regardless of how it’s made.

Stay as close as possible to user behavior

As a continuation of previous points, tests should stay as close as possible to how users would normally interact with an application. For example, a user would click on a tab and then see the corresponding content appear, so instead of testing how our state changes when a tab is clicked, we should assert the right content is shown.

Libraries like “Testing Library” make this very easy through tools such as userEvent, which mimics user interactions like clicking, typing, and selecting text, and the screen utility, which helps locate DOM elements similar to how regular users would.

Use-case coverage > code coverage

Code coverage reporting tools give us great insight into the relationship between our business logic and our tests. They tell us which lines of code are invoked by the tests, the percentage of logical branches covered, and which files are missing tests. However, development teams commonly treat code coverage as a metric and a goal in itself.

The problem is that a project can have 100% code coverage while still missing a lot of use cases, because the same function or line of code can behave differently depending on its input.

Chasing 100% (or any percentage, really) code coverage leads to practices that don’t always align with the rest of the principles, like snapshot testing or testing implementation details.

Code coverage reports should be treated as a guide rather than a goal.

Example: const arrayify = (input) => [input].filter(Boolean)

This line of code has two use cases:

  1. It “arrayifies” the input, and
  2. returns an empty array if the input is falsy

A test like expect(arrayify(5)).toEqual([5]) would give us 100% code coverage while ignoring the second use case.

Write fewer, longer tests

Given that test subjects often cover multiple use cases, it’s often necessary to run them against multiple assertions. Traditionally it was advised to have one assertion per test, mainly because testing tools could not pinpoint which assertion failed, making it hard to know what was broken.

Modern tools don’t have this problem anymore: they can show us exactly which assertion failed and even print out the DOM with meaningful error messages.

Given the arrayify method from above, a longer test would look something like the following:

// One longer test covering both use cases with multiple assertions
it('Should return input in an array', () => {
  expect(arrayify(5)).toEqual([5]);
  expect(arrayify('hello')).toEqual(['hello']);
  expect(arrayify(null)).toEqual([]);
  expect(arrayify(undefined)).toEqual([]);
});

Common Scenarios

Simple tests

It’s pretty common to abuse snapshot testing to either achieve higher coverage or as a simple test to make sure a component is rendering correctly.

Even though snapshot tests are much faster and easier to write, they don’t serve the goals of testing, for multiple reasons:

  1. Tightly coupled to the implementation: change how you style a component or which element you use, and the test will fail (a false negative)
  2. Doesn’t test the behavior: if the markup stays the same but the functionality or behavior changes, the test will still pass (a false positive)
  3. Gives a false sense of confidence, since it inflates the coverage report and makes the codebase look “tested”
  4. Doesn’t help anyone figure out the functionality or behavior of the tested unit, so it can’t serve as documentation

In general, snapshot tests miss all the reasons we write tests in the first place.


// A snapshot test only asserts that the markup stays identical
it('Should render correctly', () => {
  const { container } = render(<ProductCard />);

  expect(container).toMatchSnapshot();
});


// A behavior test asserts what the user actually sees
// ('White T-shirt' and '100 SEK' assume ProductCard renders these details)
it('Should show product details', () => {
  render(<ProductCard />);

  const productName = screen.getByText('White T-shirt')
  const productPrice = screen.getByText('100 SEK')

  expect(productName).toBeInTheDocument()
  expect(productPrice).toBeInTheDocument()
});


Component with routing

Routing can be tricky to test since we don’t load the whole application in the test case. Usually, developers reach for mocking the router to assert the relevant method invocation. Unfortunately, this is an implementation detail and doesn’t ensure the application actually reacts to routing as expected.

To correctly test the routing behavior we would need to implement real routing. React Router provides a MemoryRouter component that behaves the same as BrowserRouter but without the dependency on a browser.

We can then set up our own “micro” routes and assert the correct page is rendering.


// Mocking the router couples the test to the navigate() call,
// an implementation detail
const mockedNavigate = jest.fn()

jest.mock('react-router-dom', () => ({
  ...jest.requireActual('react-router-dom'),
  useNavigate: () => mockedNavigate,
}))

it('Should invoke the navigate method', async () => {
  const user = userEvent.setup();

  render(<Login />);

  const usernameInput = screen.getByLabelText(/Username/);
  const passwordInput = screen.getByLabelText(/Password/);
  const loginButton = screen.getByRole('button', { name: 'Log in' });

  await user.type(usernameInput, 'admin');
  await user.type(passwordInput, 'p@ssw0rd');
  await user.click(loginButton);

  // Asserts how the redirection is made, not that it actually happens
  expect(mockedNavigate).toHaveBeenCalledWith('/');
});


import { MemoryRouter, Routes, Route } from 'react-router-dom';

// Configure a micro router instead of the real one
// This enables asserting the redirection to the homepage
const LoginWithRouter = () => {
  return (
    <MemoryRouter initialEntries={['/login']}>
      <Routes>
        <Route path="/" element={<h1>HOMEPAGE</h1>} />
        <Route path="/login" element={<Login />} />
      </Routes>
    </MemoryRouter>
  );
};

it('Should redirect to homepage after logging in', async () => {
  const user = userEvent.setup();

  render(<LoginWithRouter />);

  const usernameInput = screen.getByLabelText(/Username/);
  const passwordInput = screen.getByLabelText(/Password/);
  const loginButton = screen.getByRole('button', { name: 'Log in' });

  await user.type(usernameInput, 'admin');
  await user.type(passwordInput, 'p@ssw0rd');

  // Trigger the action causing the redirection
  await user.click(loginButton);

  // Assert the actual redirection similar to how users would
  expect(await screen.findByText('HOMEPAGE')).toBeInTheDocument();
});

Component with network requests

Components often fetch something over the network, which makes them tricky to test. We don’t want tests to affect anything outside of the test scope, and that includes hitting a server and potentially mutating data.

Developers often mock networking tools such as fetch or Axios. Again, this is an implementation detail: a project can switch from one tool to another at any time while the behavior stays the same.

Modern tools such as MSW enable us to set up network interceptors through service workers which aren’t coupled to a specific tool.


// Mocking fetch directly (or using something like jest-fetch-mock)
// couples the test to how the request is made
// (the website URL is a placeholder; the original value was lost)
const mockedFetch = jest.fn(() =>
  Promise.resolve({
    json: () => Promise.resolve({ name: 'Helicon', website: 'https://example.com' }),
  })
)

global.fetch = mockedFetch

it('Should show company information', () => {
  render(<CompanyInfo />);

  // Asserts against the mock, an implementation detail,
  // rather than what the user actually sees
  expect(mockedFetch).toHaveBeenCalled()
});


// Set up a mock server to avoid tight coupling to fetch, Axios, or any other library
// Tools like MSW enable us to set up request handlers that intercept the real requests
// (the endpoint and website URLs below are placeholders; the original values were lost)
import { rest } from 'msw'
import { setupServer } from 'msw/node'

const server = setupServer(
  rest.get('https://example.com/api/company', async (req, res, ctx) => {
    return res(
      ctx.json({ name: 'Helicon', website: 'https://example.com' })
    )
  })
)

beforeAll(() => server.listen())
afterAll(() => server.close())

it('Should show company information', async () => {
  render(<CompanyInfo />);

  const companyName = await screen.findByText('Helicon');
  const companyWebsite = await screen.findByText('https://example.com');

  expect(companyName).toBeInTheDocument();
  expect(companyWebsite).toBeInTheDocument();
});

