Scaling React Apps at MOIA

MOIA Engineering
Dec 16, 2022


by: Kim Schneider, Tobias Pickel, Michael Schmidt-Voigt, Filip Ilievski

[Image: Two software engineers looking and pointing at a display in an office environment.]

Scaling JavaScript projects is a challenging task. We manage a React-based Backoffice web application for MOIA. This app grew from about 500 components in 2019 to almost 1,000 in 2021. As you might imagine, doubling the size of a project has consequences for nearly all aspects of managing that codebase.

The most pressing issues that arose were:

  • Missing confidence while refactoring, as the code was written in plain JavaScript using very shallow PropTypes.
  • The overall app architecture was hard to maintain, as there was no clear boundary or API defined between components, and everything just grew organically.
  • With more code, the CI became a bottleneck and took a lot of time to finish.

In this post, we want to introduce the actions we took to tackle those issues and keep our code manageable and fun to maintain.

TypeScript

TypeScript is the JavaScript derivative of choice for any codebase that needs to live for a long time and is likely to grow. We expected TypeScript to help us write more self-documenting code and to give us confidence during larger refactorings and migrations.

As a plus, TypeScript and React play nicely together. Since the release of hooks, the way TypeScript is used with React has drastically improved: it is no longer necessary to type complex higher-order components, as every hook is just a regular function. Apart from that, a wide range of common patterns is maintained by the TypeScript community.

Another benefit of TypeScript is that you do not need to convert the whole codebase simultaneously. To quote the official Microsoft TypeScript migration guide for React:

Adopting TypeScript in any project can be broken down into 2 phases:

- Adding the TypeScript compiler (tsc) to your build pipeline.
- Converting JavaScript files into TypeScript files.

So, it is perfectly fine to add it incrementally, and that is what we did. The rule we decided on was to convert any JavaScript file to TypeScript as soon as it is touched. With that setup, we migrated 90% of all code to TypeScript over the last two years.
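For reference, an incremental setup like this boils down to a compiler configuration that lets both file types coexist. A minimal sketch (the exact flags depend on the project, this is not our literal config):

```json
// tsconfig.json — a minimal sketch for incremental adoption
{
  "compilerOptions": {
    "allowJs": true,     // let .js and .ts files coexist during the migration
    "checkJs": false,    // do not type-check files that are still JavaScript
    "strict": true,      // newly converted .ts files get the full checks
    "jsx": "react-jsx",
    "noEmit": true       // the bundler emits; tsc only type-checks
  }
}
```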

Of course, when migrating more complex components, there is more to it than just renaming a file.

For example, we use MaterialUi as a base for many components, and, like everybody else in the industry, we have a custom button component. In JavaScript, we used PropTypes for basic type-checking on components. Our ButtonPropType mirrored some of the MaterialButton APIs, but not all of them; still, we spread the remaining props onto the ButtonComponent. On top of that, a lot of PropTypes lacked detail: plain PropTypes.object and PropTypes.func could be found everywhere, whereas PropTypes.shape was relatively rare. Of course, we could have copied the old PropTypes and migrated them to TypeScript. However, it is a tough challenge to strictly type a PropTypes.object without knowing every detail of its context and usage.

So what we did instead was to rely on already existing types and adapt those. As we already used MaterialUi, and it ships with type definitions, we could reuse what was already there and pick, omit, or intersect existing types. Additionally, that has the benefit of not having to maintain a partial copy of those definitions ourselves.
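As a rough sketch of this approach (the picked keys and the loading prop are illustrative, and the import path depends on the MaterialUi version in use):

```tsx
import { Button, ButtonProps } from '@material-ui/core'

// Derive our props from MaterialUi's own definitions instead of maintaining
// a hand-written partial copy. "loading" is an illustrative custom addition.
type MoiaButtonProps = Pick<ButtonProps, 'onClick' | 'disabled' | 'children' | 'variant'> & {
  loading?: boolean
}

export const MoiaButton = ({ loading, disabled, ...rest }: MoiaButtonProps) => (
  <Button disabled={disabled || loading} {...rest} />
)
```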

During the refactoring, we realized that components based on libraries benefit a lot from reusing existing definitions, and that our business feature components can reuse types from our API. It was a breeze to eliminate PropTypes.object by reusing an existing type instead.

Right on time, our backend colleagues pushed to migrate from REST to GraphQL. And that was the icing on the cake.

GraphQL

TypeScript and React are already a perfect fit. If GraphQL fits your use case, you can bring those three happily together, joined by the fabulous graphql-code-generator. With that, we not only got a single endpoint to use, but also a single source of truth for all of our API types. Breaking changes in the API no longer result in hard-to-track-down bugs; instead, TypeScript tells us which component needs which kind of migration.
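To illustrate, here is a sketch of a component using a hook generated by graphql-code-generator's typescript-react-apollo plugin; the useVehiclesQuery hook and its fields are made up for this example:

```tsx
import { useVehiclesQuery } from './generated/graphql'

export const VehicleList = () => {
  // The hook and the shape of `data` are generated from the schema,
  // so a breaking API change shows up here as a type error.
  const { data, loading, error } = useVehiclesQuery()

  if (loading) return <p>Loading…</p>
  if (error) return <p>Something went wrong</p>

  return (
    <ul>
      {data?.vehicles.map((vehicle) => (
        <li key={vehicle.id}>{vehicle.licensePlate}</li>
      ))}
    </ul>
  )
}
```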

On top of GraphQL, we introduced Apollo Client as a state-management tool. After getting used to Apollo, we realized that most of our Redux state management was unnecessary. While everything else was rapidly growing, the usage of Redux went down from 130 files to zero.

In summary, TypeScript paired with GraphQL helped to keep our codebase cleaner and easier to maintain.

One crucial missing piece regarding maintainability, though, is testing.

Testing Library

A good testing setup is a reliable safety net that prevents you from breaking critical features while refactoring and is a valid form of documenting and explaining more complex features.

There are several options for approaching a good testing strategy, like the testing pyramid or the testing trophy; the first focuses more on unit tests and the latter on integration tests. Sadly, we had neither of them: out of the many components, only a few were unit tested, and most of those were simple snapshot assertions.

The integration tests used Cypress and a mock server returning JSON fixtures. The coverage provided some confidence while updating dependencies without breaking critical features. Most of those tests consisted of two parts: first, test ids to query elements, and second, wait statements to assert that a request happened. Luckily, with the release of Testing Library, a new testing approach came to life that helped us improve our tests.

The way we write tests now resembles how a user would use the page: scan for a unique label and maybe trigger an action. As a bonus, that helps a lot while debugging broken tests. It is obvious which of the two expressions below is easier to map to a form input:

```ts
cy.get(CouponAreaDataTestIds.Create.CouponBatchAbsoluteDiscount).type(discount)
cy.findByLabelText('Amount').type(discount)
```

After the refactoring from test ids to Testing Library selectors, we could identify tests missing accurate assertions, as well as fundamental UI flaws. For example, does the FormInput have a meaningful and unique label? Is there a missing success message to indicate that an action worked? Are there missing loading or transition states? As a plus, we no longer have to strip test ids from our production build, which used to bloat our bundle.

With TypeScript as a base, react-testing-library for unit and integration tests, and cypress-testing-library for end-to-end tests, we could finally set up a fine testing trophy and gain confidence while improving our code.
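A unit test in that style could look roughly like this (the component and labels are illustrative):

```tsx
import { render, screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
import '@testing-library/jest-dom'
import { CouponForm } from './CouponForm'

test('creates a coupon with an absolute discount', async () => {
  render(<CouponForm />)

  // Query by the label a user would actually see — no test ids involved.
  await userEvent.type(screen.getByLabelText('Amount'), '10')
  await userEvent.click(screen.getByRole('button', { name: 'Create' }))

  // Assert on visible feedback instead of waiting for a network request.
  expect(await screen.findByText('Coupon created')).toBeInTheDocument()
})
```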

So, we can check “Missing confidence while refactoring” off the list and start refactoring. But how do you detangle a big codebase without creating an even bigger mess?

We researched possible options and decided to give nrwl’s nx a try.

NX

As nrwl’s nx states on its webpage, it is the “Next generation build system with first-class monorepo support and powerful integrations”.

By introducing nx, we expected to get support in splitting our huge monolith into smaller, more maintainable pieces, and, while doing so, to profit from top-notch dev tooling like pre-configured Webpack configs, code generators, and custom lint rules.

One of nx’s main selling points is its concept of affected: only the parts of the app affected by the current change need to be tested or built. We split our app into smaller feature libraries to start profiting from that.
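In practice, affected is driven from the command line; commands along these lines (comparing against the main branch) run only what the current change touches:

```sh
# Run tests and builds only for projects affected by the current change
npx nx affected --target=test --base=main
npx nx affected --target=build --base=main
```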

We added nx to the project with one app for the actual application, one for the mock server, and one e2e-test app per area. We kept one huge common library from which the other libraries import components and functions.

Even though we did not do a big bang release, the refactor took quite some time, and the initial pull request was huge.

To show how the structure changed over time, we can look at the dependency diagram of a feature library: first shortly after the initial commit, then after a year of using nx, and finally a current diagram of the whole project.

[Diagram: dependency graph shortly after the initial commit]
[Diagram: dependency graph after one year of using nx]
[Diagram: recent dependency graph of the whole project]

While splitting our app, we uncovered a series of hidden problems in the existing architecture. Before nx, features just lived in a folder called “areas”, which seemed okay at first sight. What was very hard to track was that many areas cross-imported code. Defining libraries with a clear scope and a well-defined API helped track which parts belong together and which should be separated.

On top of that, there was no rule on how to import things. We had configured an alias ‘~’ to reference the top-level folder of our project. That seemed handy initially but turned into a foot gun as the repo grew. For example, our custom Button could be imported by alias from the top-level index: import { MoiaButton } from ‘~/components’. Some people preferred to import it from deeper nested index files, e.g. import { MoiaButton } from ‘~/components/atom’. And some imports pointed directly at the button component due to circular dependency issues. As everything was still in one big folder, several relative imports with various levels of nesting also existed.

Restructuring the way we organized shared components was a big pain point at the time. We moved those components to dedicated libraries and updated all imports to be consistent. That helped a lot while refactoring the overall app structure. Luckily, nx provides lint rules to enforce consistent imports, so we do not run into this import madness anymore.
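The relevant rule is @nrwl/nx/enforce-module-boundaries; a sketch of how it can be configured in the workspace's .eslintrc.json (the tags are illustrative, not our actual setup):

```json
{
  "rules": {
    "@nrwl/nx/enforce-module-boundaries": [
      "error",
      {
        "depConstraints": [
          { "sourceTag": "scope:feature", "onlyDependOnLibsWithTags": ["scope:shared"] },
          { "sourceTag": "scope:shared", "onlyDependOnLibsWithTags": ["scope:shared"] }
        ]
      }
    ]
  }
}
```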

As already teased, circular dependencies were the next issue popping up, as there were no rules about which area could import what from where. We introduced libraries with shared code to break the cycles, which sounds easy at first but is a huge challenge when you need to deal with lots of files. Luckily, we discovered an import codemod, and with its help, updating imports in hundreds of files became more bearable.

With a growing number of clearly separated libraries, we could finally start benefiting from running affected tests and checks. We were quite happy with the progress of our refactoring, but soon we realized that there were some issues for which nx did not yet provide a working solution.

When we started with nx, adding and maintaining the custom code generators provided by nx seemed overly complex. At the time, we stumbled upon a project called projen. Setting up the base boilerplate with it was way more straightforward, which is why we started with projen as a code generator. Adding or removing an e2e test suite for an existing library or generating a new feature was as easy as flipping a switch. And as projen always ran for the whole setup, we no longer had to deal with complex merge conflicts. Luckily, nx version 13.10 introduced the feature to run generators from nx plugins in the workspace they were created in. With that, it was reasonably easy to refactor our projen generator to use the nx one.

Another problem was that nx/react did not support hot module replacement and fast refresh; the feature was only merged on 10 May 2021. As a result, the initial developer experience was way worse than before. We tried to run smaller apps instead of the whole monolith, but it did not improve the situation. We helped ourselves by manually adding the feature to a custom Webpack config. The lesson we learned is that in complex projects, even with tools like nx, you will have to add custom tooling if you do not want to compromise on developer experience.

On top of that, running the ForkTsCheckerWebpackPlugin and the CircularDependencyPlugin in dev mode resulted in very long feedback times, so we needed to disable both. After that, we found out there was no easy way to run a type-check script with nx. We worked around that limitation by providing a custom executor that does the type-checking for us.
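Our actual executor is more involved, but a simpler way to get a dedicated type-check target with today's nx is to wrap tsc in run-commands; a sketch in a library's project.json:

```json
{
  "targets": {
    "type-check": {
      "executor": "nx:run-commands",
      "options": {
        "command": "tsc -p tsconfig.lib.json --noEmit"
      }
    }
  }
}
```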

After getting the developer experience back on track, there was another problem to solve. Although affected did a tremendous job of keeping the CI fast for changes affecting only a few libraries, complete runs triggered by global code changes resulted in even longer CI runs than before our nx refactor.

To get faster runs, we evaluated the paid Cypress Dashboard, which helped run e2e tests in parallel, but in the end, it was way too costly for the provided value.

We decided to migrate to a custom script that returns a list of affected e2e tests and a hardcoded number of machines that should run each test suite on the CI. With that setup, we achieved similar timings as with the Cypress Dashboard, but without the monthly fee.
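A sketch of how such a script could look: nx print-affected lists the affected projects, and a round-robin split assigns them to a fixed number of machines (the chunking logic is illustrative, not our exact implementation):

```ts
import { execSync } from 'node:child_process'

const MACHINES = 4 // hardcoded number of parallel CI machines

// nx prints the affected projects for a target as a comma-separated list.
const affected = execSync(
  'npx nx print-affected --target=e2e --select=tasks.target.project',
  { encoding: 'utf8' },
)
  .trim()
  .split(', ')
  .filter(Boolean)

// Distribute the e2e suites round-robin; machine i runs the suites in buckets[i].
const buckets: string[][] = Array.from({ length: MACHINES }, () => [])
affected.forEach((project, i) => buckets[i % MACHINES].push(project))

console.log(JSON.stringify(buckets))
```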

While looking for more possible improvements, we realized that we had two low-hanging fruits to improve the overall timing.

First, we were still running yarn v1, and node-module install times were terrible: running a yarn install could easily take three to five minutes, although we already reused and shared caches on the CI.

To improve that situation, we tried pnpm. After an easy migration of some misaligned peer dependencies and an additional setup step in the CI, we had pnpm up and running. The size of our node_modules folder decreased from 990MB to 600MB, and setup times were even better: including reading and writing the cache, the pnpm setup always stayed under a minute.

The second big chunk of CI time was spent building the app for e2e tests. We initially built our app and shared that artifact between all test machines. This initial build step could easily take five minutes or longer.

By that time, esbuild had gained more and more popularity. We tried using plain esbuild but decided to go with Vite, as it seemed to almost work out of the box. Of course, we had to deal with some Webpack-specific errors and some related to invalid ECMAScript modules. But after a short and straightforward migration, we had Vite up and running.
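The resulting configuration is pleasantly small; a sketch of a setup comparable to ours (the plugin choice is an assumption, not our exact config):

```ts
// vite.config.ts — a minimal sketch, not our full configuration
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'
import tsconfigPaths from 'vite-tsconfig-paths'

export default defineConfig({
  plugins: [
    react(),         // fast refresh and JSX support
    tsconfigPaths(), // resolve the workspace's tsconfig path aliases
  ],
})
```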

As a result, we could strip down the CI build. As a plus, the cold startup of the local development server went down from 60 seconds to 2, and hot module replacement became almost instant.

In conclusion, we love nx’s guidance on splitting our monorepo into apps and libraries. Tools like the dependency graph and custom lint rules to detect cycles and enforce clear module boundaries help keep the structure in good shape. And as the cherry on top, the affected concept helps prevent running unnecessary pipelines on the CI.

The impact new tooling had on our day-to-day work varied: the benefits of Vite and pnpm were instantly visible, and their migration paths were very smooth. Although nx promises to “Work for Projects of Any Size”, we still had to add some custom tooling on top to make it work for us.

Including those extra steps, we managed to get a fast CI, stricter architectural rules, better maintainability of our tooling, and, overall, more fun while maintaining and scaling our React project.

For further improvements, we already have an eye on Vitest to get our project’s next big speed bump.
