
Optimize React App Performance By Code Splitting

Akash Rajput

Full-stack Development

Prerequisites

This blog post assumes you are familiar with the basics of React and with routing inside React SPAs using react-router. We will use Chrome DevTools to measure the actual performance benefits achieved in the example, and Webpack, a well-known bundler for JavaScript projects, to build it.

What is Code Splitting?

Code splitting is simply dividing a huge code bundle into smaller chunks that can be loaded on demand. As SPAs grow in terms of components and plugins, the need to split the code into smaller chunks arises. Bundlers like Webpack and Rollup provide support for code splitting.
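To make this concrete, here is a minimal sketch of the mechanism (the module name is hypothetical, not from the example project): a static import pulls a module into the main bundle, while a dynamic import() marks a split point that Webpack turns into a separate chunk fetched at runtime.

// Static import: heavyChart would end up inside the main bundle.
// import { drawChart } from './heavyChart';

// Dynamic import: Webpack emits './heavyChart' as its own chunk,
// downloaded only the first time this function runs.
export function showChart(container) {
  return import('./heavyChart').then(({ drawChart }) => drawChart(container));
}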

Several different code splitting strategies can be implemented depending on the application structure. We will be taking a look at an example in which we implement code splitting inside an admin dashboard for better performance.

Let's Get Started

We will start with a project that is already configured with Webpack as its bundler and has a considerable bundle size. This simple GitHub repository dashboard has four routes showing various details about a single repository, and it uses packages such as react-table, TinyMCE, and Recharts to render those details.

Github repo dashboard

Before optimizing the bundle

To get a baseline for the performance changes, let's note the metrics for the app's current bundle. We'll check the loading time in the Network tab with the following setup:

  • Browser incognito tab
  • Cache disabled
  • Throttling set to Fast 3G

Development Build

As you can see, the development bundle without any optimization has a network transfer size of around 1.3 MB and takes around 7.85 seconds to load for the first time on a Fast 3G connection.

Development build network tab

However, we would never want to serve this unoptimized development bundle in production, so let's measure the production bundle with the same setup.

Production Build

The project is already configured to generate a Webpack production build. The production bundle is much smaller than the development bundle, with a network transfer size of 534 kB, and takes around 3.54 seconds to load on a Fast 3G connection. This is still a problem, as best practice suggests keeping page load times below 3 seconds. Let's see what happens on a Slow 3G connection.

Production build network tab

The production bundle took 12.70 seconds to load for the first time on a Slow 3G connection, which is long enough to annoy users.

Production build network tab - slow connection

If we look at the Lighthouse report, we see a warning indicating that we’re loading more code than needed:

Production build lighthouse report

As per the warning, we’re loading some code during the first render that is never used and could instead be loaded later. The Lighthouse report indicates that we can save up to 404 KiB on the initial page load.

There’s also a suggestion to split the bundle using React.lazy(). Lighthouse reports various other metrics we could work on, but we will focus on bundle size in this case.

The unused code in the bundle is not only bad in terms of download size; it also hurts the user experience. Let’s use the Performance tab to see how. Profiling a page reload shows that it takes around 10 seconds before the user sees actual content:

Production build performance report

Webpack Bundle Analyzer Report

We can visualize the bundles with the webpack-bundle-analyzer tool, which gives us a way to track and measure bundle size changes over time. Please follow the installation instructions given here.
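For reference, here is a rough sketch of how the plugin is typically wired into a Webpack config, assuming webpack-bundle-analyzer has been installed as a dev dependency; the example project’s actual config may differ.

// webpack.config.js (sketch)
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
  // ...existing entry, output, loaders, and plugins stay as they are.
  plugins: [
    // Produces an interactive treemap report of the emitted bundles after each build.
    new BundleAnalyzerPlugin(),
  ],
};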

So, this is what our production build bundle report looks like:

Production build -  bundle analyzer report 

As we can see, our current production build has one giant chunk, main.201d82c8.js, which can be divided into smaller chunks.

The bundle analyzer report not only shows the chunk sizes but also which modules each chunk contains and how large they are. This gives us the opportunity to find such modules and move them out of the main bundle for better performance. Here, for example, is one module that adds considerable size to our main bundle:

Production build -  bundle analyzer individual module preview

Using React.lazy() for Code Splitting

React.lazy allows us to render dynamically imported components, which means we can load them only when they’re needed and shrink the initial bundle. Since our dashboard app has four top-level routes wrapped inside react-router’s Switch, we know they will never all be needed at once.

So we can split these top-level components into four separate bundle chunks and load them on demand. To do that, we need to convert our imports from:

CODE: https://gist.github.com/velotiotech/972999ef8126c7618814be299d326a62.js

To:

CODE: https://gist.github.com/velotiotech/e910f0f8737f6bf05bccfa992ffc0c64.js
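Since the code above is embedded from gists, here is a hedged sketch of that conversion; the route name and file path are illustrative rather than the exact ones used in the repo.

// Before: a static import, so the route ships inside the main bundle.
// import Commits from './routes/Commits';

// After: React.lazy wraps a dynamic import, so Webpack emits the route as a
// separate chunk that is fetched the first time the component renders.
// Note: React.lazy expects the imported module to have a default export.
import React from 'react';

const Commits = React.lazy(() => import('./routes/Commits'));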

This also requires wrapping our routes in Suspense, which shows a fallback UI until the dynamically imported component is ready to render.

CODE: https://gist.github.com/velotiotech/c316484dc70e72920473366080cdcbfc.js
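Again, as a sketch only, a Suspense wrapper around react-router v5 routes might look roughly like this; the paths and component names are illustrative, and only two of the four routes are shown.

import React, { Suspense } from 'react';
import { BrowserRouter, Switch, Route } from 'react-router-dom';

const Commits = React.lazy(() => import('./routes/Commits'));
const Charts = React.lazy(() => import('./routes/Charts'));

export default function App() {
  return (
    <BrowserRouter>
      {/* The fallback renders until the lazily imported route chunk has loaded. */}
      <Suspense fallback={<div>Loading…</div>}>
        <Switch>
          <Route path="/commits" component={Commits} />
          <Route path="/charts" component={Charts} />
        </Switch>
      </Suspense>
    </BrowserRouter>
  );
}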

After this change, Webpack recognizes the dynamic imports and splits the main chunk into smaller chunks. In the production build, we can see the following bundles being downloaded. We have reduced the load time for the main bundle chunk from 12 seconds to 3.10 seconds, a solid improvement, because we're no longer loading unnecessary JavaScript on the first load.
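As an optional tweak that the example does not rely on, Webpack’s magic comments can give the generated chunks readable names instead of numeric ones, which makes the Network tab and the analyzer report easier to scan:

// The comment is read by Webpack at build time and only affects the chunk's file name.
import React from 'react';

const Commits = React.lazy(() =>
  import(/* webpackChunkName: "commits" */ './routes/Commits')
);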

As we can see in the waterfall view of the Network tab, the other required chunks are loaded in parallel as soon as the main chunk has loaded.

Production build after splitting - network tab

If we look at the Lighthouse report, the warning about unused JavaScript is gone and the check now passes.

Production build after splitting - lighthouse report

This is good for the landing page, but what about the other routes when we visit them? The following shows that additional small chunks are now loaded when a lazily loaded component is rendered on a menu item click.

Production build after splitting - other routes

With the current setup, we should see improved performance in our applications, and we can always tweak how Webpack forms chunks when needed, as sketched below.
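One common tweak, shown here purely as an illustration of Webpack’s optimization.splitChunks API and not as part of the dashboard’s actual configuration, is moving everything imported from node_modules into a shared vendor chunk that caches well across deployments:

// webpack.config.js (sketch)
module.exports = {
  // ...existing entry, output, loaders, and plugins stay as they are.
  optimization: {
    splitChunks: {
      cacheGroups: {
        vendors: {
          // Pull all node_modules code into one long-lived "vendors" chunk.
          test: /[\\/]node_modules[\\/]/,
          name: 'vendors',
          chunks: 'all',
        },
      },
    },
  },
};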

To measure how the code-splitting change affects the user experience, we can again generate a performance report with Chrome DevTools. We can quickly see that the idle frame time has dropped to around 1 second, far better than the previous setup.

Production build after splitting - performance report

Reading through the timeline, the user sees a blank frame for up to 1 second and then the sidebar in the next second. Once the main bundle is loaded, the lazy-loaded commits chunk starts downloading, and until it arrives we see our fallback loading component.

Also, when we navigate to the other routes, we can see the chunks loaded lazily when they’re needed.

Let’s have a look at the bundle analyzer report generated after the changes. We can see that the single large chunk has been divided into smaller ones, and that each chunk contains only the code it needs. For example, the 51.573370a6.js chunk is the commits route and contains the react-table code, and the charts module similarly ends up in its own chunk.

Conclusion

Depending on the project structure, we can set up code splitting inside React applications with little effort, which leads to better-performing applications and a more pleasant experience for users.

You can find the referenced code in this repo.

