Code Splitting with Vite SSR (Server Side Rendering)
Lessons from battling hydration mismatches and the simple optimizations that moved the needle

1. What is code splitting and why you should care
As your app grows, features, utilities, and third-party packages pile up, and the amount of JavaScript you ship grows with them. That extra JS has to be downloaded, parsed, and executed before users can fully interact (and in many SPAs, before the initial screen hydrates). So, the more you ship, the slower the startup? Pretty much, yes, unless you do some code splitting.
Think of code splitting as slicing your app into smaller bundles that the browser loads only when they’re needed (for the current route or interaction). This optimization shortens the initial load and directly improves metrics measured by tools like Lighthouse, Google’s automated tool for auditing web performance and accessibility. Strong Lighthouse scores often reflect healthy Core Web Vitals, metrics like Largest Contentful Paint (LCP) and Cumulative Layout Shift (CLS) that affect both user experience and search ranking. You can read more about code splitting in the React docs.
2. React-based code splitting
a. Route-based splitting
This is the classic move: split by page or route. Each route’s UI becomes its own bundle, loaded only when the user navigates there. It’s clean because routes are natural boundaries in your app. This is often the first cut people make.
// Assume a router setup with React Router
import React, { Suspense } from 'react';
import { Routes, Route } from 'react-router-dom';

const Home = React.lazy(() => import('./pages/Home'));
const Profile = React.lazy(() => import('./pages/Profile'));

function AppRouter() {
  return (
    <Suspense fallback={<div>Loading…</div>}>
      <Routes>
        <Route path="/" element={<Home />} />
        <Route path="/profile" element={<Profile />} />
      </Routes>
    </Suspense>
  );
}
b. Component-level splitting
Within a route, you may have heavy components (charts, editors, modals, maps). Use lazy loading on those so you avoid shipping them by default. Even if the user never opens a modal or expands the editor, they won’t incur that cost.
import React, { Suspense } from 'react';

// Declare the lazy component at module scope, not inside the component body:
// otherwise React.lazy creates a new component on every render, remounting the chart.
const HeavyChart = React.lazy(() => import('./charts/HeavyChart'));

function Dashboard() {
  return (
    <div>
      <h2>Your activity</h2>
      <Suspense fallback={<div>Chart loading…</div>}>
        <HeavyChart />
      </Suspense>
    </div>
  );
}
c. Library splitting
You can extract large third-party libraries (Stripe, DnD, fabric, data viz libs) into their own chunks. They’re loaded only when features requiring them activate. This lets your core app stay lean, while these “big rocks” sit dormant until needed.
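The mechanic is the same dynamic import() used above, just gated behind an interaction. Here is a minimal sketch of the idea; the helper name (loadOnce) and the checkout handler are ours for illustration, not code from our app:

```javascript
// Cache the in-flight promise so the library's chunk is fetched at most once,
// no matter how many times the feature is triggered.
function loadOnce(importFn) {
  let pending = null;
  return () => pending ?? (pending = importFn());
}

// Usage sketch, assuming Stripe is only needed once checkout starts:
const loadStripeLib = loadOnce(() => import('@stripe/stripe-js'));

async function onCheckoutClick() {
  const { loadStripe } = await loadStripeLib(); // chunk downloaded here, once
  // ... continue with Stripe setup
}
```

Because the import() sits inside a function, the bundler emits the library as its own chunk and the browser never fetches it for users who skip checkout.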
3. Route Based Splitting and Vite SSR are not good friends
We built our stack on Vite’s SSR, expecting route-based splitting to mostly work out of the box. Instead, we ended up staring at broken routes, weird freezes, and more console errors than confidence. The problems we encountered boiled down to three systemic pains:
Hydration mismatches: the HTML sent by our server would sometimes diverge from what React expected on the client, triggering errors like “Hydration failed”. In SSR environments, mismatches aren’t rare. To tame this, we looked at libraries that advertise smoother SSR + lazy boundary integration. Loadable Components, for instance, presents itself as a better fit for SSR than React.lazy, offering tooling to help align server and client rendering.
Tooling friction with SSR + lazy libraries: We experimented with Loadable Components, but moved away since it is not compatible with Vite. We then tried Vite preload, but wiring it into our stack wasn’t as smooth as the docs suggest. Parts of our app became unresponsive, and we had to build custom logic to patch over some of those issues. What eventually stabilized cracked again when authentication flows were involved.
Auth logic: Even when chunking worked, some user flows, especially around login, started failing. For example, after authentication the app sometimes treated the user as still unauthenticated.
Together, these pains taught us that “route splitting under Vite SSR” is far from a lightweight refactor or a quick win. It demands deep orchestration between routing, lazy loading, manifest wiring, and authentication. Trying to build it from scratch felt a lot like rewriting your own framework. So we decided to look elsewhere for performance gains.
4. What we shipped: Good old library splitting
We took a pragmatic approach, not trying to split everything, but isolating what hurt us most. The ultimate goal was to reduce our main bundle size and shave milliseconds off startup. That’s why we pivoted toward library splitting instead of wrestling with full route-based code splitting under SSR.
import { defineConfig } from 'vite';

const MODULES_FOR_SEPARATE_CHUNKS = [
  '@stripe/stripe-js',
  '@stripe/react-stripe-js',
  ... // rest of heavy libraries
];

export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        manualChunks(id) {
          const ROLLUP_COMMON_MODULES = [
            'vite/preload-helper',
            'vite/modulepreload-polyfill',
            'vite/dynamic-import-helper',
            'commonjsHelpers',
            'commonjs-dynamic-modules',
            '__vite-browser-external',
          ];

          // Heavy libraries we listed get their own chunk, named after the package
          if (
            id.includes('node_modules') &&
            MODULES_FOR_SEPARATE_CHUNKS.find((module) => id.includes(module))
          ) {
            return id.toString().split('node_modules/')[1].split('/')[0].toString();
          }

          // Everything else from node_modules (plus Rollup/Vite helpers) goes to a shared vendor chunk
          if (
            id.includes('node_modules') ||
            ROLLUP_COMMON_MODULES.some((commonModule) => id.includes(commonModule))
          ) {
            return 'vendor';
          }
        },
      },
    },
  },
  ... // rest of config
})
In our vite.config.js, we declared a MODULES_FOR_SEPARATE_CHUNKS array that lists heavyweight dependencies we want carved out of the main bundle. In the manualChunks(id) function, we check whether the module path (id) comes from node_modules and matches one of those listed modules. If so, we return the module’s name so Vite/Rollup builds it into its own chunk. Anything else in node_modules is funneled into a shared vendor chunk. The result: heavy libraries live in isolated bundles only fetched when needed, while the core of our app stays lean.
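To make the naming concrete, here is the same chunk-name extraction pulled out as a standalone function (our illustration, not part of the config). One subtlety worth knowing: for scoped packages the first path segment after node_modules/ is the scope, so both Stripe packages above would land in a single '@stripe' chunk:

```javascript
// Our illustration of the chunk-name extraction used in manualChunks:
// take the first path segment after 'node_modules/'.
const chunkNameFor = (id) => id.split('node_modules/')[1].split('/')[0];

chunkNameFor('/app/node_modules/lodash/index.js');                 // 'lodash'
chunkNameFor('/app/node_modules/@stripe/stripe-js/dist/index.js'); // '@stripe'
```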
A few caveats worth knowing: Vite/Rollup’s manual chunking isn’t foolproof. Issues like chunks loading earlier than expected or duplication sometimes appear in real-world setups. Here is a link to the issue on GitHub: https://github.com/vitejs/vite/issues/5189
5. Faster load times
After deploying our optimizations, our app’s load time indeed felt significantly faster. We started seeing Lighthouse scores pop into the 90s on several pages. That kind of result signals you’re in the “green zone” for performance (Lighthouse marks 90+ as “good”). These results confirmed that our surgical splitting, deferring of heavy scripts, and chunking strategy weren’t just theoretical: they moved practical metrics.
To be precise, the scores referenced here were measured on October 7, 2025, giving us a date-stamped benchmark for comparison.
6. What’s Next?

We started off looking for a quick win: route splitting, pull the lever, get performance. But Vite SSR and route-based code splitting turned out to be a beast: hydration mismatches, odd auth bugs, and tooling friction were true boss fights. By focusing on surgical cuts, like deferring third-party scripts, isolating heavy libs, and letting the main bundle stay lean, we managed to achieve really nice results.
Of course, optimizations never really stop. We can continue pushing: splitting out more chunks, preloading fonts only in the editor, asynchronously loading those small CSS files only when needed, tightening cache lifetimes, and squashing layout shifts. And if those efforts don’t feel enough, migrating to a purpose-built framework remains on the table as a fallback, though not our only path forward.