Use a fake server (not only) for testing your UI

Story time

Recently, in a project I'm working on, we faced the challenge of refactoring one of the largest forms in our UI. The form itself is one of the more complex ones: it spans a couple of steps with some inputs that depend on each other and others that fetch data from our GraphQL API, which adds up to a handful of paths the user can choose.

Along with the refactoring, we wanted to add some integration tests for the form itself, to give us confidence that it's working properly right now and to notify us if any regressions slip between the lines of code in the future. All of this to ensure we can ship things with confidence and sleep calmly at night.

How did we decide to approach it?

We already have the "whys?" for writing the tests, but before jumping straight to the coding, we stopped to think about the "hows?". I've had the pleasure of working with some great people, who showed me some wisdom and shared some principles and guidelines that have stayed with me to this day. They're summarised briefly below, along with links to articles by the irreplaceable Kent C. Dodds that extend the topics.

The more your tests resemble the way your software is used, the more confidence they can give you —  Kent C. Dodds

Having our end user in mind when developing apps is a great principle, which often leads to better and more user-friendly software. Similarly, designing our tests to resemble user behaviour as closely as possible will give us confidence that our product works properly when our users perform similar actions the next day.

With such an approach in mind, we create tests that don't rely on nitty-gritty implementation details, which may change over time. I remember a colleague suggesting in a review a better way to structure the components and state. He also noted that although we had quite a few tests there, they were implemented independently from the internals, so he wouldn't need to change them. Indeed, he was right.

Read Testing Implementation Details if you're interested in more details.

Write tests. Not too many. Mostly integration. —  Guillermo Rauch

As with many things in life, writing tests is an art of tradeoffs. Consider a suite of e2e tests that spins up the entire app with the backend, clicks some buttons and types some values into inputs, pretending to use our app as a user would. Such tests give a lot of confidence, but with great power comes great... cost. Cost in the form of the time the tests take to run, as well as the time required to write and maintain them and to ensure that external factors don't affect the results due to a lack of resiliency and proper isolation. Writing good e2e tests is hard; it's an art of balancing confidence, resiliency and total execution time. On the other hand, unit tests are fast to write and run, but they only cover simple actions that are most often separated from the actual usage path. In my opinion, well-written integration tests are a good middle ground between the two types mentioned above. They can give a lot of confidence that the UI is working properly, by covering various user paths in more detail, while at the same time being much cheaper to write and maintain. I'm not saying to neglect e2e tests - we need them to complete the whole picture - but leverage them to test the main paths, while covering all the remaining ones and edge cases in integration tests.

Read Write tests. Not too many. Mostly integration. if you're interested in more details.

Back to the story

We're using React in our UI, so leveraging @testing-library/react and its benefits was an obvious choice for us. However, we had a discussion on how we'd like to tackle the API calls during tests. We knew that mocking the client was not an option, especially since it would tightly couple the tests to an implementation detail, on top of some Amplify mumbo-jumbo we want to get rid of. We knew about msw and the way we could incorporate it to mock our GraphQL API. It leverages a service worker, which can be configured to intercept requests and respond to them with predefined responses. All of it works seamlessly without relying on implementation details, such as the library used for fetching the data. We could write some fixtures with API responses, configure msw to respond with them and call it a day... but on the other hand, we were aware that we'd love to add more integration tests in other places in our app that use a similar set of resources stored on the backend. So maybe we could do it a little bit smarter?

Some time ago, I was working on a project with one smart guy who showed us mirage.js, which gave us the possibility to intercept API calls, mimic the behaviour of our API with a little bit of code, and store the data in a lightweight, in-memory database.

I started wondering if such a thing would be possible with msw, and after some wandering, another smart guy pointed me to @mswjs/data, which turned out to be the missing puzzle piece. It allows us to define models of the resources used in the app, which can then be stored in and retrieved from an in-memory database, as well as neatly integrated with msw handlers. Thanks to that, we would be able to define a fake of our backend that can be flexibly reused across multiple integration tests of our app.

How it works under the hood

If you're interested in a deeper insight, I can highly recommend the talk Beyond API Mocking given by Artem Zakharchenko, the creator and maintainer of Mock Service Worker.

For more hands-on knowledge, you can visit the official docs, which provide a lot of useful examples, or read other articles provided by Artem:

What we've made

Before jumping further, I want to mention that this is not the one and only way to incorporate such tests into an app. I'm presenting the approach we decided on because we saw a handful of benefits from it, which are summarised at the end of the article. However, other approaches may be more suitable for your use cases, so I recommend approaching my story with pragmatism and validating whether it would serve you well.

I can't share the details of our work explicitly, but if you'd like to see a complete example, I've prepared a sample repository which provides a configuration of a fake server for managing books and associating authors with them, including some simple integration tests. Users can view the listing of books, see their details and add new entries, which is the scope I want to cover.


Below is a description of the subsequent steps required to prepare such a config.

Configuring the fake database

First, we need to specify the structure of the models representing the resources we want to store in our fake database. Below, example models of an author and a book are specified.

import { primaryKey } from "@mswjs/data";
import { v4 } from "uuid";
import faker from "faker";

export const author = {
  id: primaryKey(() => v4()),
  name: () =>,
};

import { oneOf, primaryKey } from "@mswjs/data";
import faker from "faker";

export const book = {
  isbn: primaryKey(() => String(faker.datatype.number(9999999999999))),
  title: () => faker.random.words(3),
  author: oneOf("author"),
};

As you can see, each of the models needs to have a primaryKey specified. Other fields can have an initialising function, which is used to infer the type of the property as well as to return a default value if none is passed during initialisation. In this case, I'm using faker to provide some mock data. Lastly, the oneOf function defines the one-to-one relationship between the book and the author.

import { factory } from "@mswjs/data";
import { author } from "./factories/authors.factory";
import { book } from "./factories/books.factory";

export const db = factory({ author, book });

The prepared models are then passed to the factory function, which exposes the functionality of a fully typed, fake, in-memory database.

Configuring handlers

Such a database can then be used in the implementation of handlers for our fake server. The methods exposed by @mswjs/data are inspired by the Prisma API, which is really convenient to use (especially for someone who has used Prisma before).
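To illustrate that query shape, below is a tiny, self-contained re-implementation of a Prisma-style findFirst supporting only the equals comparator. It's an intuition-building sketch under my own assumptions, not code from @mswjs/data (the real library supports many more comparators such as contains, gte, etc.):

```javascript
// Sample records, shaped like the book model from the factories above.
const books = [
  { isbn: "111", title: "Atomic Habits" },
  { isbn: "222", title: "Essentialism" },
];

// Minimal Prisma-style findFirst: every field in `where` must match via `equals`.
function findFirst(records, { where }) {
  return (
    records.find((record) =>
      Object.entries(where).every(
        ([field, { equals }]) => record[field] === equals
      )
    ) ?? null
  );
}

findFirst(books, { where: { isbn: { equals: "222" } } }); // → { isbn: "222", title: "Essentialism" }
findFirst(books, { where: { isbn: { equals: "999" } } }); // → null
```

The nested `{ field: { comparator: value } }` shape is what makes queries like `db.book.findFirst({ where: { isbn: { equals: isbn } } })` in the handlers below read so naturally.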

import { graphql } from "msw";
import { Book, BookInput } from "../../graphql/generated-types";
import { db } from "../db";

export const handlers = [
  graphql.query("Book", (req, res, ctx) => {
    const { isbn } = req.variables;

    return res({
        book: db.book.findFirst({ where: { isbn: { equals: isbn } } }),
      })
    );
  }),

  graphql.query("Books", (req, res, ctx) => {
    return res({ books: db.book.getAll() }));
  }),

  graphql.mutation<{ createBook: Book }, { input: BookInput }>(
    "CreateBook",
    (req, res, ctx) => {
      const { isbn, title, authorId } = req.variables.input;

      const author = db.author.findFirst({
        where: { id: { equals: authorId } },
      })!;
      const newBook = db.book.create({ isbn, title, author });

      return res({
          createBook: { ...newBook, author },
        })
      );
    }
  ),
];

The snippet above presents example handlers for the GraphQL operations prepared for the book resource. The graphql.query("Book", ( ... ) => { ... }) call registers the handler for the Book query and retrieves the book with the given isbn number. The request properties can be extracted from the req parameter, res returns the response, while ctx includes a set of helper functions.

Configuring server

Such a set of handlers can then be gathered and passed to the setupServer function from msw to expose the functionality of the fake server.

import { handlers as authorHandlers } from "./handlers/authors.handlers";
import { handlers as bookHandlers } from "./handlers/books.handlers";

export const handlers = [...authorHandlers, ...bookHandlers];

import { setupServer } from "msw/node";
import { handlers } from "./handlers";

export const server = setupServer(...handlers);

The server can then be used in tests:

import "@testing-library/jest-dom/extend-expect";
import { drop } from "@mswjs/data";
import { client } from "./ApolloClient";
import { server } from "./mockServer/server";
import { db } from "./mockServer/db";

beforeAll(() => {
  server.listen();
});

beforeEach(() => {
  return client.clearStore();
});

afterEach(() => {
  server.resetHandlers();
  drop(db);
});

afterAll(() => {
  server.close();
});

The snippet above presents setupTests.ts, which is used to configure the tests. The beforeAll and afterAll hooks are responsible for spinning up and tearing down the fake server. afterEach resets the state of handlers and the fake database between tests, while beforeEach is specific to @apollo/client and clears its cache.

I decided on such a global setup in this case; however, it spins up the fake server for all the tests. It may be beneficial to add the setup to test files explicitly instead, because some unit tests don't need to communicate with the fake server, and skipping the unnecessary steps decreases their execution time.
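For context, a global setup file like this is typically registered via Jest's setupFilesAfterEach option; a sketch of that wiring is below (the file path is an assumption based on a typical project layout):

```javascript
// jest.config.js - registers setupTests.ts to run before each test file.
// Removing this entry and importing the setup module explicitly in the
// integration test files would make the fake server opt-in per file.
module.exports = {
  setupFilesAfterEach: ["<rootDir>/src/setupTests.ts"],
};
```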

Using the server in tests

Below are two snippets with two test cases covering the happy path and the case in which a book with the given ISBN already exists. Additional comments explain the particular steps of each test.

import * as React from "react";
import { screen, waitForElementToBeRemoved } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import { waitFor } from "@testing-library/dom";
import { graphql } from "msw";
import { renderWithProviders } from "../../testUtils/render";
import { db } from "../../mockServer/db";
import { server } from "../../mockServer/server";
import Books from "./index";

function seedData() {
  const authors = [{ name: "James Clear" }),{ name: "Greg McKeown" }),
  ];

  const books = [{ title: "Atomic Habits", author: authors[0] }),{ title: "Essentialism", author: authors[1] }),
  ];

  return { authors, books };
}

test("should create a new book when form is submitted with valid data", async () => {
  // seed some fake data...
  seedData();

  // ... and add some specific for the test
  const authorName = "Andrzej Pilipiuk";{ name: authorName });

  // render component
  renderWithProviders(<Books />);

  // wait for books to be loaded
  await waitForElementToBeRemoved(() => screen.getByText(/Loading/));

  // go to create book view"link", { name: "Create new book" }));

  // fill in the form
  const isbn = "1234567891011";
  const isbnInput = screen.getByRole("textbox", { name: "ISBN" });
  userEvent.type(isbnInput, isbn);

  const title = "Chronicles of Jakub Wędrowycz";
  const titleInput = screen.getByRole("textbox", { name: "Title" });
  userEvent.type(titleInput, title);

  const authorSelect = screen.getByRole("combobox", { name: "Author Id" });
  // wait for select to be enabled - options are loaded
  await waitFor(() => expect(authorSelect).toBeEnabled());
  userEvent.selectOptions(authorSelect, authorName);

  // submit the form"button", { name: "Create book" }));

  // wait for the book to be shown - queries are invalidated which leads to refetching
  await waitFor(() => expect(screen.getByText(title)).toBeInTheDocument());

  // assert that the results are stored in fake database
  expect(db.book.findFirst({ where: { isbn: { equals: isbn } } })).toEqual(
    expect.objectContaining({
      title,
      author: expect.objectContaining({ name: authorName }),
    })
  );
});

Let's summarise some key points of the first test and comment on them.

I've created a seedData function to provide some initial fake data and reuse it in both tests. After adding data specific to the first test, I rendered the component tree and performed the actions a user would perform - entering the view with the form, filling in the data and submitting it. Afterwards, I wait for the newly added book to appear on the listing, because Apollo's cache is invalidated, which leads to refetching the data. Lastly, I check that the value is stored correctly in the fake database. Alternatively, I could enter the details view for the newly created entry and verify that the data returned from the fake server is correct (which would be even better, because that's how the user would interact with the app, right?).

Now let's cover the second test case with an error returned from the fake server.

test("should show an error when book with given ISBN already exists", async () => {
  // seed some fake data to use it later
  const {
    books: [book],
    authors: [author],
  } = seedData();

  // overwrite handler to give us specific error
  const errorMessage = "Book with given ISBN already exists";
    graphql.mutation("CreateBook", (req, res, ctx) =>
          {
            message: errorMessage,
            path: ["input", "isbn"],
          },
        ])
      )
    )
  );
  // render component
  renderWithProviders(<Books />);

  // wait for books to be loaded
  await waitForElementToBeRemoved(() => screen.getByText(/Loading/));

  // go to create book view"link", { name: "Create new book" }));

  // fill in the form
  const isbnInput = screen.getByRole("textbox", { name: "ISBN" });
  userEvent.type(isbnInput, book.isbn);

  const title = "Chronicles of Jakub Wędrowycz";
  const titleInput = screen.getByRole("textbox", { name: "Title" });
  userEvent.type(titleInput, title);

  const authorSelect = screen.getByRole("combobox", { name: "Author Id" });
  // wait for select to be enabled - options are loaded
  await waitFor(() => expect(authorSelect).toBeEnabled());
  userEvent.selectOptions(authorSelect,;

  // submit the form"button", { name: "Create book" }));

  // wait for the error message from the backend to be rendered
  await waitFor(() =>
  );
});

The second test covers the case in which a user enters data for a book that already exists. Here, I've overwritten the handler for the GraphQL operation under test. Alternatively, we could incorporate the error handling logic into the handler itself; however, that could lead to reimplementing the logic from the real server. As you can see, it's yet another tradeoff we need to consider. The rule of thumb I decided on is to implement the happy paths in the fake server's handlers and keep them as lean as possible, while overwriting them in particular tests for the error handling cases. It gives me greater readability of the handlers and reusability across multiple tests, while preserving flexibility when writing new tests and refactoring old ones. Lastly, I'm asserting that the error returned from the backend is displayed to the user, which is the desired behaviour of the app.

Using the server in app

The capabilities of msw are not limited to tests... wait, have I mentioned that the backend for this app doesn't exist, and that if you run it normally, it works as expected thanks to the fake database? Indeed, msw can also be used for prototyping and for reproducing errors more easily.

import { setupWorker } from "msw";
import { handlers } from "./handlers";

export const worker = setupWorker(...handlers);

The snippet above is similar to setupServer for tests; however, setupWorker creates a client-side worker instance, which can then be activated to intercept requests while working on the UI.

// Start the mocking conditionally.
if (process.env.NODE_ENV === "development") {
  const { worker } = require("./mockServer/browser");

  // seed the fake database with some initial data{
    title: "Atomic Habits",
    author:{ name: "James Clear" }),
  });{
    title: "Essentialism",
    author:{ name: "Greg McKeown" }),
  });{
    title: "Chronicles of Jakub Wędrowycz",
    author:{ name: "Andrzej Pilipiuk" }),
  });

  // activate the worker to intercept requests;
}

ReactDOM.render(
  <App />,
  document.getElementById("root")
);

The snippet above shows how the worker can be registered on the client side, along with seeding the fake database with initial data.

Is it worth it?

As mentioned many times before, different solutions to various problems come with tradeoffs, and similarly, the described approach for mocking the API has some pros and cons we need to consider. I decided to gather and describe them briefly below.


Pros:

  • Possibility to fake the behaviour of the real server - If our app performs operations such as reading, creating and updating resources, we can prepare the handlers and leverage the in-memory store from @mswjs/data to mimic the behaviour of the real server and test our app more thoroughly. We can fill in and submit a form, or perform some other action resulting in an API call, and check afterwards whether the data is returned in another place. It resembles the way users interact with our app, without testing implementation details, which should give us enough confidence to sleep calmly at night and ship our product.

  • It's flexible to be reused in various tests, contrary to hardcoded responses - If there are multiple places in which we're using some of the resources, we can flexibly manage the mocked data with little effort. If we plan to make similar API calls with the same operations or endpoints, we'll benefit from setting up the fake server and resource handlers once, and later we'd be able to use it extensively.

  • Possibility to use such a server for development... - As you may have already experienced, sometimes deadlines happen... and it would be nice to start developing the frontend, but the backend is not ready yet. If you haven't faced a situation like that, then you're a lucky one, but I bet that sooner or later you will. In that case, instead of looking for someone to blame for the tight deadlines, I encourage you to go talk with the backend folks, discuss the API contract and start developing the UI against a fake server implementing that contract (which can be freely reused afterwards in integration tests). I bet your manager will be mind-blown by your agility and resourcefulness!

  • ...As well as for debugging and prototyping - Somewhere in your app you're getting an error that is reproducible only after performing a couple of convoluted steps, and you need to handle it? You can capture the response once it's returned and put it into the fake server to reproduce the problem more easily in the future. Moreover, you can prototype your frontend without a working backend, for example to provide a proof of concept, which can later be integrated with the real backend. That's how I managed to prototype and run the simple example linked in this article, before covering it with tests.


Cons:

  • Higher cost of introduction and maintenance - Every abstraction brings some cost with it, and similarly, the fake server needs to be configured initially with some of the resources that will be used, and maintained later as the real server evolves. Nevertheless, from my experience, you don't need to build it all at once; you can approach it gradually by introducing the models you need for the tests you're writing. The initial phase may be a little slower, but the more handlers you introduce, the more you'll be able to reuse in the future. At some point you may end up with 20% of your real server prepared, covering 80% of your use cases, and you'll add or adjust the missing pieces from time to time, based on current needs.


My goal here was to present the idea behind creating the fake server, show an example on how it could be used (not only) in tests and discuss pros and cons of that solution.

In our case, introducing and configuring it took some time initially, but with the flexibility it gave us, we could later reuse it with little effort in other parts of our app, because we were using a similar set of API calls. I feel it was a good decision that will pay off in the long run, because it enabled us to write better tests, which give us more confidence since they resemble the way users interact with our app.

I'm really happy as a developer that I can use tools like @testing-library/react, mirage.js, msw and Testing Playground that make writing quality tests so pleasant. If you don't know some of them, I highly recommend spending some time with them, but I'll warn you that there is no going back. Also, kudos to all the awesome folks who are working on them and making our lives so much easier!

Lastly, I'm wondering what your take on this is. That's my story and my experience, but maybe you don't fully agree with me, or you see something I could improve? I'd be more than happy to hear your opinion!