After implementing micro-frontends at two companies, here's what actually happened: Module Federation, shared dependencies, routing, styling conflicts, and why the org chart matters more than the architecture.
I have implemented micro-frontends at two different companies. The first time, it was a disaster that took eight months to unwind. The second time, it worked reasonably well. The difference had nothing to do with technical choices and everything to do with organizational structure.
That is the central thesis of this entire post, and I am giving it to you upfront because the micro-frontend ecosystem is drowning in articles that start with webpack configuration and never mention the actual reason you would (or would not) adopt this pattern.
So let me be blunt: if you are a team of fewer than 30 frontend engineers, you almost certainly do not need micro-frontends. If you are a team of five reading this because someone saw a conference talk, close this tab and go set up proper code splitting in your monolith. You will ship faster and sleep better.
Still here? Good. Let's talk about what actually happens when you split a frontend into independently deployed pieces.
The pitch is seductive. Independent teams. Independent deployments. Technology freedom. Scaling development across organizations without stepping on each other's toes.
Here is what micro-frontends actually solve: deployment coupling between teams that don't trust each other's release processes.
That's it. That is the core value proposition. Not "better architecture." Not "cleaner code." Not "technology diversity." The real benefit is that Team A can ship on Tuesday without waiting for Team B to finish their sprint, fix their flaky tests, and get their PR approved.
Everything else -- the technology freedom, the independent scaling, the "just like microservices but for the frontend" -- is either a side effect or a myth.
Melvin Conway observed in 1967 that organizations produce system designs that mirror their communication structures. This is not a cute observation. It is an iron law of software development.
If you have four teams, you will get a four-part system. If those teams sit in different buildings, the interfaces between those parts will be formal and well-documented. If those teams share a Slack channel, the interfaces will be implicit and leaky.
Micro-frontends are Conway's Law made explicit. You are saying: "Our teams are organizationally independent, so our frontend should be architecturally independent."
The problem is that most teams adopting micro-frontends are not organizationally independent. They share designers, they share a design system, they share backend services, they have a single product manager. They are one team pretending to be many teams because someone read a blog post about how Spotify organizes squads.
When I see a company with 12 frontend developers split into three "autonomous" micro-frontend teams, I see three teams that are about to spend 40% of their time on coordination overhead that a monorepo would have handled for free.
At Company A, we had a React monolith serving a B2B SaaS dashboard. About 15 frontend engineers, growing to maybe 20. The app had a few major sections: analytics, user management, billing, and a real-time monitoring view.
Someone -- and I will take partial blame because I did not push back hard enough -- decided we should "go micro-frontend" to let teams deploy independently. The reasoning was that the monolith's CI pipeline took 18 minutes, and merging to main was becoming a bottleneck.
We chose Module Federation with webpack 5. We split the app into a shell and four remotes. Here is a rough picture of what we created:
shell-app/ (host application)
- top nav, sidebar, routing shell
- shared auth context
- shared design system
analytics-remote/ (Team Alpha)
- /analytics/*
- charts, dashboards, data export
users-remote/ (Team Beta)
- /users/*
- user management, roles, permissions
billing-remote/ (Team Gamma)
- /billing/*
- invoices, subscriptions, payment methods
monitoring-remote/ (Team Delta)
- /monitoring/*
- real-time WebSocket views, alerting
Here is what went wrong, roughly in order of severity:
1. Shared dependency version drift. Within two months, the analytics team wanted React 18.3 for a new concurrent feature. The billing team was pinned to 18.2 because of a third-party payment component. The shell was on 18.2.1. This created subtle rendering bugs that only appeared in production when remotes were composed together.
2. Shared state was a nightmare. Users expected to click "Upgrade Plan" in the billing section and see the analytics section immediately reflect the new feature limits. With micro-frontends, that state lived in different JavaScript bundles with different React trees. We ended up building a custom event bus, which was just a worse version of having a shared state store.
3. Routing became a two-level problem. The shell handled top-level navigation, but each remote had its own internal routing. Deep linking, browser back/forward, and URL-based state all had edge cases that took weeks to iron out.
4. The build pipeline was more complex, not less. Instead of one 18-minute build, we had five 6-minute builds plus a deployment orchestration layer. Total wall clock time was shorter for individual changes, but the system was dramatically harder to reason about.
5. Performance got worse. The initial load now required the shell bundle plus whatever remote was being rendered. Module Federation's async boundaries added loading states everywhere. Users noticed.
After eight months, we migrated back to a monorepo with Turborepo and proper code splitting. The CI pipeline dropped to 9 minutes with remote caching. Nobody missed the micro-frontends.
Company B was different. 80+ frontend engineers across genuinely independent product lines. The "platform" was more like a suite of applications that happened to share a navigation bar and a login system. Think Google's approach: Gmail and Google Docs share a top bar, but they are completely independent applications.
Here, micro-frontends made sense because the boundaries were organizational facts, not aspirations: separate products, separate teams, separate release cadences, and almost nothing to share beyond the authenticated session and the shell chrome. The architecture looked like this:
platform-shell/
- authentication
- top navigation
- product switcher
- shared analytics SDK
product-a/ (30 engineers, real-time collab)
- full React app
- WebSocket-heavy
- own design system extensions
product-b/ (20 engineers, forms/CRUD)
- full React app
- heavy form validation
- wizard-style workflows
product-c/ (15 engineers, data visualization)
- React + D3
- canvas-heavy rendering
- minimal shared UI beyond the shell
product-d/ (10 engineers, admin/settings)
- simpler React app
- mostly configuration screens
This worked because the boundaries were real. These products did not need to share state. They did not need to communicate in real-time. A user switching from Product A to Product B expected a page transition. The contract between the shell and each product was minimal: render yourself in this DOM node, here is the authenticated user, here is a function to navigate to other products.
Let me get into the technical details because this is where most articles either oversimplify or get lost in configuration.
Module Federation is a webpack (and now rspack) feature that lets one JavaScript build consume modules from another JavaScript build at runtime. The "host" application declares what "remotes" it expects, and the remotes expose specific modules.
// shell-app/webpack.config.js
const { ModuleFederationPlugin } = require('webpack').container;
module.exports = {
plugins: [
new ModuleFederationPlugin({
name: 'shell',
remotes: {
analytics: 'analytics@https://cdn.example.com/analytics/remoteEntry.js',
billing: 'billing@https://cdn.example.com/billing/remoteEntry.js',
users: 'users@https://cdn.example.com/users/remoteEntry.js',
},
shared: {
react: {
singleton: true,
requiredVersion: '^18.2.0',
eager: true,
},
'react-dom': {
singleton: true,
requiredVersion: '^18.2.0',
eager: true,
},
'react-router-dom': {
singleton: true,
requiredVersion: '^6.20.0',
},
},
}),
],
};

// analytics-remote/webpack.config.js
const { ModuleFederationPlugin } = require('webpack').container;
module.exports = {
plugins: [
new ModuleFederationPlugin({
name: 'analytics',
filename: 'remoteEntry.js',
exposes: {
'./App': './src/App',
'./DashboardWidget': './src/components/DashboardWidget',
},
shared: {
react: {
singleton: true,
requiredVersion: '^18.2.0',
},
'react-dom': {
singleton: true,
requiredVersion: '^18.2.0',
},
'react-router-dom': {
singleton: true,
requiredVersion: '^6.20.0',
},
},
}),
],
};

// shell-app/src/remotes/AnalyticsApp.tsx
import React, { Suspense, lazy } from 'react';
import { RemoteErrorBoundary } from '../components/RemoteErrorBoundary';
import { RemoteLoadingSkeleton } from '../components/RemoteLoadingSkeleton';
import { RemoteUnavailable } from '../components/RemoteUnavailable';
const AnalyticsApp = lazy(() => import('analytics/App'));
export function AnalyticsRemote() {
return (
<RemoteErrorBoundary
fallback={<RemoteUnavailable name="Analytics" />}
>
<Suspense fallback={<RemoteLoadingSkeleton />}>
<AnalyticsApp />
</Suspense>
</RemoteErrorBoundary>
);
}

This looks clean. The reality is much messier.
That remoteEntry.js URL is hardcoded. In practice, you need a discovery service or a manifest file that tells the shell where each remote's entry point lives, because entry URLs change with every versioned deployment, differ between environments, and must be swappable to roll back one remote independently.
So you end up building something like this:
// shell-app/src/remotes/registry.ts
interface RemoteConfig {
url: string;
scope: string;
module: string;
version: string;
integrity?: string;
}
interface RemoteRegistry {
[remoteName: string]: RemoteConfig;
}
async function fetchRemoteRegistry(): Promise<RemoteRegistry> {
const response = await fetch('/api/remote-registry', {
headers: { 'Cache-Control': 'no-cache' },
});
if (!response.ok) {
throw new Error(`Remote registry fetch failed: ${response.status}`);
}
return response.json();
}
// Dynamic remote loading -- this is where the fun begins
async function loadRemote(remoteName: string): Promise<any> {
const registry = await fetchRemoteRegistry();
const config = registry[remoteName];
if (!config) {
throw new Error(`Remote "${remoteName}" not found in registry`);
}
// Check if the remote's container is already loaded
if (window[config.scope]) {
return window[config.scope];
}
// Dynamically inject the script tag
await new Promise<void>((resolve, reject) => {
const script = document.createElement('script');
script.src = config.url;
script.type = 'text/javascript';
script.async = true;
if (config.integrity) {
script.integrity = config.integrity;
script.crossOrigin = 'anonymous';
}
script.onload = () => resolve();
script.onerror = () => reject(
new Error(`Failed to load remote: ${config.url}`)
);
document.head.appendChild(script);
});
  // Initialize webpack's default share scope, then the remote container
  await __webpack_init_sharing__('default');
  const container = window[config.scope];
  await container.init(__webpack_share_scopes__.default);
return container;
}
export async function loadRemoteModule(
remoteName: string,
modulePath: string
): Promise<any> {
const container = await loadRemote(remoteName);
const factory = await container.get(modulePath);
return factory();
}

Now you have a remote registry service to build, deploy, and maintain. You need health checks for each remote. You need fallback behavior when a remote is down. You need version compatibility checking.
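For the fallback piece, here is a hypothetical sketch of the load wrapper being described: retry with backoff, then return a typed failure the shell can render as an "unavailable" state instead of crashing.

```typescript
type LoadResult<T> =
  | { ok: true; module: T }
  | { ok: false; error: Error };

// Retries a remote load with exponential backoff before giving up.
async function loadWithRetry<T>(
  load: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 250
): Promise<LoadResult<T>> {
  let lastError = new Error('no attempts made');
  for (let i = 0; i < attempts; i++) {
    try {
      return { ok: true, module: await load() };
    } catch (err) {
      lastError = err instanceof Error ? err : new Error(String(err));
      // Back off: 250ms, 500ms, 1000ms, ...
      await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  return { ok: false, error: lastError };
}
```

The error boundary around each remote then renders the fallback UI from the `ok: false` branch, keeping the shell alive when a remote's CDN has a bad day.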
This is the kind of infrastructure that makes sense when you have a platform team dedicated to developer experience. It does not make sense when you are three teams sharing a Jira board.
This is, in my experience, the single biggest source of pain in micro-frontend architectures. Let me walk through why.
React must be a singleton. You cannot have two copies of React running on the same page -- hooks will break, context will not propagate, state will be invisible across boundaries. So you mark React as a singleton shared dependency.
But "shared singleton" means every remote must use a compatible version. In theory, semver handles this. In practice:
// Remote A says:
shared: {
react: { singleton: true, requiredVersion: '^18.2.0' }
}
// Remote B says:
shared: {
react: { singleton: true, requiredVersion: '^18.3.0' }
}
// Shell provides React 18.2.0
// Remote B's range ^18.3.0 allows >= 18.3.0 < 19.0.0
// Shell's 18.2.0 does NOT satisfy ^18.3.0
// Module Federation will either:
// a) Load a second copy of React (breaking singleton)
// b) Throw an error at runtime
// c) Use 18.2.0 anyway and hope for the best

The behavior depends on your strictVersion and singleton settings, and the failure modes are confusing. I have spent entire debugging sessions staring at webpack's module federation runtime trying to understand why a hook was throwing "Invalid hook call" errors that only appeared in production.
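To make the mismatch concrete, here is the caret-range arithmetic in miniature. This is a deliberately simplified sketch: real resolution is done by the Module Federation runtime with full semver handling, including prerelease tags and 0.x rules, which this ignores.

```typescript
// Does `version` satisfy a caret range like "^18.3.0"?
function satisfiesCaret(version: string, range: string): boolean {
  const v = version.split('.').map(Number);
  const r = range.replace('^', '').split('.').map(Number);
  if (v[0] !== r[0]) return false;       // caret pins the major version
  if (v[1] !== r[1]) return v[1] > r[1]; // minor may only move upward
  return v[2] >= r[2];                   // patch may only move upward
}
```

With the shell providing 18.2.0, Remote B's ^18.3.0 fails the minor check, and that single boolean is the difference between a shared singleton and a second copy of React.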
It is not just React. Here is a realistic list of shared dependencies for a React-based micro-frontend setup:
shared: {
'react': { singleton: true, requiredVersion: '^18.2.0' },
'react-dom': { singleton: true, requiredVersion: '^18.2.0' },
'react-router-dom': { singleton: true, requiredVersion: '^6.20.0' },
'@tanstack/react-query': { singleton: true, requiredVersion: '^5.0.0' },
'zustand': { singleton: true, requiredVersion: '^4.4.0' },
'@company/design-system': { singleton: true, requiredVersion: '^3.0.0' },
'@company/auth-sdk': { singleton: true, requiredVersion: '^2.1.0' },
'@company/analytics-sdk': { singleton: true, requiredVersion: '^1.5.0' },
'date-fns': { requiredVersion: '^2.30.0' },
'lodash-es': { requiredVersion: '^4.17.0' },
'axios': { requiredVersion: '^1.6.0' },
}

Every one of these is a coordination point. Every major version bump of any shared dependency requires synchronized updates across all remotes. You have recreated the monolith's coordination problem, except now it is distributed across multiple repositories with independent CI/CD pipelines.
The design system is the worst offender. A design system update that changes a component's API needs to be adopted by all remotes simultaneously, or you get visual inconsistencies where one section of the app has the new button style and another has the old one.
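One cheap mitigation in the spirit of what we did (hypothetical code, not the actual tooling): have each remote report the design-system version it was built against, and log loudly when majors diverge instead of silently rendering mixed button styles.

```typescript
// Returns a warning string when the remote's design-system major does not
// match the shell's, or null when they are compatible.
function designSystemMismatch(
  shellVersion: string,
  remoteVersion: string,
  remoteName: string
): string | null {
  const [shellMajor] = shellVersion.split('.');
  const [remoteMajor] = remoteVersion.split('.');
  if (shellMajor !== remoteMajor) {
    return `${remoteName}: built against design-system ${remoteVersion}, ` +
      `shell serves ${shellVersion} -- expect visual drift`;
  }
  return null;
}
```

A check like this does not prevent the skew, but it turns a vague "the buttons look off" bug report into a log line with a culprit.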
After painful experience, here is what I recommend for shared dependency management:
Pin major versions ruthlessly. Do not rely on semver ranges for shared singletons. If the shell provides React 18.2.0, all remotes should declare exactly 18.2.0. Yes, this means coordinated updates. No, there is no way around it.
Use a shared package.json template. Create a tool that generates the shared configuration from a single source of truth:
// tools/shared-deps.js
// Single source of truth for all shared dependency versions
const SHARED_DEPS = {
react: { version: '18.2.0', singleton: true, eager: false },
'react-dom': { version: '18.2.0', singleton: true, eager: false },
'react-router-dom': { version: '6.22.0', singleton: true, eager: false },
'@tanstack/react-query': { version: '5.17.0', singleton: true, eager: false },
'@company/design-system': { version: '3.2.1', singleton: true, eager: false },
};
function generateSharedConfig(options = {}) {
const { eager = false } = options;
const shared = {};
for (const [pkg, config] of Object.entries(SHARED_DEPS)) {
shared[pkg] = {
singleton: config.singleton,
requiredVersion: config.version,
eager: eager || config.eager,
};
}
return shared;
}
module.exports = { SHARED_DEPS, generateSharedConfig };

Publish this as an internal package and use it in every remote's webpack config. When you update a shared dependency, you update this package, and all remotes pick up the change on their next build.
Run a compatibility check in CI. Before a remote deploys, verify that its shared dependency versions are compatible with what the shell currently serves:
// ci/check-shared-deps.js
const { SHARED_DEPS } = require('@company/shared-deps');
const pkg = require('../package.json');
const violations = [];
for (const [dep, config] of Object.entries(SHARED_DEPS)) {
const installed = pkg.dependencies[dep] || pkg.devDependencies[dep];
if (!installed) continue;
const cleanInstalled = installed.replace(/[\^~]/, '');
if (config.singleton && cleanInstalled !== config.version) {
violations.push(
`${dep}: installed ${cleanInstalled}, expected ${config.version}`
);
}
}
if (violations.length > 0) {
console.error('Shared dependency violations:');
violations.forEach(v => console.error(` - ${v}`));
process.exit(1);
}

Routing in a micro-frontend architecture is deceptively complex. The obvious approach -- shell handles top-level routes, remotes handle sub-routes -- breaks down quickly.
If the shell uses React Router and each remote also uses React Router, you have multiple router instances fighting over the URL. The shell's router does not know about the remote's internal routes, and the remote's router does not know about sibling remotes.
// Shell routing
function ShellApp() {
return (
<BrowserRouter>
<ShellLayout>
<Routes>
<Route path="/analytics/*" element={<AnalyticsRemote />} />
<Route path="/billing/*" element={<BillingRemote />} />
<Route path="/users/*" element={<UsersRemote />} />
<Route path="/" element={<Dashboard />} />
</Routes>
</ShellLayout>
</BrowserRouter>
);
}
// Analytics remote -- DO NOT create another BrowserRouter!
function AnalyticsApp() {
// This remote must use relative routes within the shell's router context
return (
<Routes>
<Route path="/" element={<AnalyticsDashboard />} />
<Route path="/reports" element={<Reports />} />
<Route path="/reports/:id" element={<ReportDetail />} />
<Route path="/export" element={<DataExport />} />
</Routes>
);
}

This only works if the remote is rendered within the shell's BrowserRouter context. If Module Federation loads the remote into a separate React tree (which happens if you are not careful with how you mount remotes), the router context is lost. The remote's useNavigate hook will throw. Its Link components will not work.
Users bookmark URLs. They share URLs in Slack. They click the back button. All of these need to work seamlessly across micro-frontend boundaries.
Consider this scenario: a user is on /analytics/reports/42, clicks a notification that takes them to /billing/invoices/abc, then clicks the back button. The shell needs to unmount the billing remote, mount the analytics remote, and the analytics remote needs to render the correct report. If the analytics remote does any async data fetching during mount (it does), the user sees a loading skeleton for content they just had on screen.
There is no elegant solution. The options are:
1. Keep-alive mounting. Mount all remotes but only show the active one. This wastes memory and means all remotes are executing their effects and subscriptions even when not visible.
function ShellApp() {
const location = useLocation();
return (
<ShellLayout>
<div style={{ display: location.pathname.startsWith('/analytics') ? 'block' : 'none' }}>
<AnalyticsRemote />
</div>
<div style={{ display: location.pathname.startsWith('/billing') ? 'block' : 'none' }}>
<BillingRemote />
</div>
<div style={{ display: location.pathname.startsWith('/users') ? 'block' : 'none' }}>
<UsersRemote />
</div>
</ShellLayout>
);
}

2. Client-side caching. Let remotes unmount and remount, but use a shared cache (like React Query's cache) so data is not re-fetched. This is better but requires all remotes to use the same caching strategy.
3. Accept the tradeoff. Show loading states on navigation. This is what most micro-frontend implementations actually do. Users notice, but it is the simplest solution to maintain.
When two independently developed applications render on the same page, their styles will collide. This is a guarantee, not a possibility.
CSS Modules with unique prefixes. Each remote uses CSS Modules, but you add a remote-specific prefix to prevent collisions:
// analytics-remote/webpack.config.js
module.exports = {
module: {
rules: [
{
test: /\.module\.css$/,
use: [
'style-loader',
{
loader: 'css-loader',
options: {
modules: {
localIdentName: 'analytics__[local]__[hash:base64:5]',
},
},
},
],
},
],
},
};

This works for component-scoped styles. It does not help with global styles, CSS resets, or third-party component libraries that inject their own stylesheets.
Shadow DOM. The nuclear option. Each remote renders inside a Shadow DOM boundary, which provides true CSS isolation:
// shell-app/src/remotes/ShadowRemote.tsx
import { useRef, useEffect } from 'react';
import { createRoot, Root } from 'react-dom/client';

interface ShadowRemoteProps {
  children: React.ReactNode;
}

export function ShadowRemote({ children }: ShadowRemoteProps) {
  const hostRef = useRef<HTMLDivElement>(null);
  const rootRef = useRef<Root | null>(null);

  useEffect(() => {
    const host = hostRef.current;
    if (!host) return;
    // attachShadow throws if called twice on the same element,
    // so reuse an existing shadow root if one is present
    const shadow = host.shadowRoot ?? host.attachShadow({ mode: 'open' });
    const container = document.createElement('div');
    shadow.appendChild(container);
    rootRef.current = createRoot(container);
    return () => {
      rootRef.current?.unmount();
      rootRef.current = null;
      shadow.removeChild(container);
    };
  }, []);

  // Re-render on children changes without tearing down the React root
  useEffect(() => {
    rootRef.current?.render(children);
  }, [children]);

  return <div ref={hostRef} />;
}

Shadow DOM solves CSS isolation but creates new problems:
- React portals and modals that render into document.body land outside the shadow root, so the remote's styles no longer apply to them
- events are retargeted at the shadow boundary, which confuses some third-party libraries

Tailwind CSS with prefixes. If all remotes use Tailwind, you can configure different prefixes:
// analytics-remote/tailwind.config.js
module.exports = {
prefix: 'an-',
// ...
};
// billing-remote/tailwind.config.js
module.exports = {
prefix: 'bl-',
// ...
};

But this means the design system's Tailwind classes do not match. Your shared Button component uses bg-blue-500 but the remote expects an-bg-blue-500. You end up maintaining a translation layer or abandoning the shared design system's styles.
My actual recommendation: Use CSS Modules with remote-specific prefixes, inject the design system as a shared dependency that includes its own stylesheet, and accept that you will spend time debugging style collisions. There is no perfect solution. The question is which set of problems you want to deal with.
Micro-frontends are supposed to be independent. But users do not think in terms of "micro-frontends." They think in terms of "the application." When they update their profile picture in the settings section, they expect to see it change in the navigation bar immediately. When they add an item to their cart in the product listing, they expect the cart icon in the header to update.
Custom Events. The simplest approach. Remotes dispatch events on window, and other remotes listen:
// Shared event types -- published as a package
// @company/micro-frontend-events
import { useEffect } from 'react';

export interface MFEvents {
'user:profile-updated': { avatarUrl: string; displayName: string };
'cart:item-added': { productId: string; quantity: number };
'notifications:count-changed': { count: number };
'theme:changed': { mode: 'light' | 'dark' };
}
export function emitMFEvent<K extends keyof MFEvents>(
eventName: K,
detail: MFEvents[K]
) {
window.dispatchEvent(
new CustomEvent(`mf:${eventName}`, { detail })
);
}
export function onMFEvent<K extends keyof MFEvents>(
eventName: K,
handler: (detail: MFEvents[K]) => void
): () => void {
const listener = (event: Event) => {
handler((event as CustomEvent).detail);
};
window.addEventListener(`mf:${eventName}`, listener);
return () => window.removeEventListener(`mf:${eventName}`, listener);
}
// React hook for convenience
export function useMFEvent<K extends keyof MFEvents>(
eventName: K,
handler: (detail: MFEvents[K]) => void
) {
useEffect(() => {
return onMFEvent(eventName, handler);
}, [eventName, handler]);
}

This works. It also means you have reinvented a pub/sub system with no replay, no persistence, and no guaranteed ordering. If the navigation bar mounts after the settings remote dispatches the profile update event, the navigation bar never gets the update.
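The late-subscriber gap can be patched with a last-value replay cache. A hypothetical sketch follows (not the event system we actually shipped, and note it still has no persistence or ordering guarantees):

```typescript
type Handler<T> = (detail: T) => void;

class ReplayBus {
  private last = new Map<string, unknown>();
  private handlers = new Map<string, Set<Handler<unknown>>>();

  emit<T>(name: string, detail: T): void {
    this.last.set(name, detail); // remember for late subscribers
    this.handlers.get(name)?.forEach(handler => handler(detail));
  }

  on<T>(name: string, handler: Handler<T>): () => void {
    const set = this.handlers.get(name) ?? new Set<Handler<unknown>>();
    this.handlers.set(name, set);
    set.add(handler as Handler<unknown>);
    // Replay the most recent value so late-mounting remotes catch up.
    if (this.last.has(name)) {
      handler(this.last.get(name) as T);
    }
    return () => set.delete(handler as Handler<unknown>);
  }
}
```

Notice how quickly this grows toward a real state store, which is the argument for the next approach.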
Shared state store. Better for state that needs to be read by multiple remotes at any time:
// @company/shared-store
import { useSyncExternalStore } from 'react';
type Listener = () => void;
class SharedStore<T> {
private state: T;
private listeners = new Set<Listener>();
constructor(initialState: T) {
this.state = initialState;
}
getState = (): T => {
return this.state;
};
setState = (updater: T | ((prev: T) => T)): void => {
this.state = typeof updater === 'function'
? (updater as (prev: T) => T)(this.state)
: updater;
this.listeners.forEach(listener => listener());
};
subscribe = (listener: Listener): (() => void) => {
this.listeners.add(listener);
return () => this.listeners.delete(listener);
};
}
// Global stores, exposed via the shell
export interface UserState {
id: string;
displayName: string;
avatarUrl: string;
permissions: string[];
}
export interface NotificationState {
unreadCount: number;
lastChecked: number;
}
export const userStore = new SharedStore<UserState | null>(null);
export const notificationStore = new SharedStore<NotificationState>({
unreadCount: 0,
lastChecked: Date.now(),
});
// React hooks
export function useSharedUser(): UserState | null {
return useSyncExternalStore(
userStore.subscribe,
userStore.getState,
userStore.getState,
);
}
export function useSharedNotifications(): NotificationState {
return useSyncExternalStore(
notificationStore.subscribe,
notificationStore.getState,
notificationStore.getState,
);
}

This is cleaner than custom events but now you have another shared package to version and another coordination point.
The honest truth: every shared state mechanism in a micro-frontend architecture is worse than just having a shared state store in a monolith. The complexity exists because of the architectural boundary, not despite it. You are paying a tax for independence.
Let me share some real numbers from the second (successful) implementation.
Monolith total bundle (code-split): 1.2 MB gzipped
Micro-frontend total loaded resources for the same features: 1.8 MB gzipped
The difference comes mostly from per-remote overhead: each remote ships its own remoteEntry.js manifest (2-5 KB each, plus runtime scaffolding per remote), and any dependency that cannot be shared as a singleton gets bundled more than once.

A 50% increase in total bundle size is not unusual. You can mitigate it with aggressive code splitting within each remote, but the baseline overhead is real.
In a monolith with code splitting, the loading sequence is:
1. Load shell HTML [50ms]
2. Load main JS bundle (includes React) [200ms]
3. Load route-specific chunk [100ms]
4. Render [50ms]
Total: ~400ms
In a micro-frontend setup:
1. Load shell HTML [50ms]
2. Load shell JS bundle [150ms]
3. Shell renders, determines which remote [20ms]
4. Fetch remote registry [50ms]
5. Load remoteEntry.js for the target [80ms]
6. Module Federation negotiates shared deps [30ms]
7. Load remote's route-specific chunk [100ms]
8. Remote renders [50ms]
Total: ~530ms
Steps 4-6 are the micro-frontend tax. You can reduce step 4 by inlining the registry. You can reduce step 5 by preloading remoteEntry.js files. But you cannot eliminate the waterfall entirely.
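Inlining the registry (step 4) is the easiest win: stamp the JSON into the shell's HTML at deploy time and parse it synchronously. A hypothetical sketch, kept DOM-free by taking the raw text as a string:

```typescript
interface InlineRemoteConfig {
  url: string;
  scope: string;
  module: string;
}

// In the browser, `raw` would come from something like
//   document.getElementById('remote-registry')?.textContent
// where the deploy pipeline has written the registry JSON into a
// <script type="application/json"> tag in the shell's HTML.
function parseInlineRegistry(raw: string | null): Record<string, InlineRemoteConfig> {
  if (!raw) {
    throw new Error('inline remote registry missing from shell HTML');
  }
  return JSON.parse(raw);
}
```

The tradeoff is that the registry is now baked into the shell's HTML cache lifetime, so registry updates only reach users as fast as your HTML cache invalidates.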
In our case, LCP increased by about 200ms and INP was roughly unchanged. FCP was actually slightly better because the shell was smaller and rendered faster, but users saw meaningful content later because the remote had to load.
We addressed this with aggressive preloading:
// shell-app/src/remotes/preload.ts
const PRELOAD_MAP: Record<string, string> = {
'/analytics': 'analytics',
'/billing': 'billing',
'/users': 'users',
};
export function preloadRemoteOnHover(path: string) {
const remoteName = Object.entries(PRELOAD_MAP)
.find(([prefix]) => path.startsWith(prefix))?.[1];
if (!remoteName) return;
// Preload the remoteEntry.js on hover
const link = document.createElement('link');
link.rel = 'prefetch';
link.href = getRemoteEntryUrl(remoteName);
link.as = 'script';
document.head.appendChild(link);
}
// Usage in navigation
function NavLink({ to, children }: NavLinkProps) {
return (
<Link
to={to}
onMouseEnter={() => preloadRemoteOnHover(to)}
onFocus={() => preloadRemoteOnHover(to)}
>
{children}
</Link>
);
}

This helps, but it is another thing to maintain. And it only helps for navigation -- direct URL access still has the full waterfall.
In a monolith, your deploy pipeline is one build, one artifact, one rollback target.

In a micro-frontend architecture, every remote deploys on its own schedule, and the composed application that users actually experience never corresponds to a single build.
When something breaks in production, which remote caused it? If three remotes deployed in the last hour, you need to be able to identify the culprit and roll back just that remote while keeping the others stable.
This requires a deployment manifest:
{
"timestamp": "2025-12-19T14:32:00Z",
"shell": "2.1.0",
"remotes": {
"analytics": {
"version": "1.45.2",
"deployedAt": "2025-12-19T14:30:00Z",
"entryUrl": "https://cdn.example.com/analytics/1.45.2/remoteEntry.js",
"commitSha": "abc123"
},
"billing": {
"version": "3.12.0",
"deployedAt": "2025-12-18T09:15:00Z",
"entryUrl": "https://cdn.example.com/billing/3.12.0/remoteEntry.js",
"commitSha": "def456"
}
},
"sharedDeps": {
"react": "18.2.0",
"react-dom": "18.2.0"
}
}

You need tooling to manage this manifest, update it during deploys, and use it for rollbacks. This is infrastructure that a monolith does not need because a monolith has one version: the currently deployed commit.
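The rollback path can be as simple as rewriting one entry in that manifest. A hypothetical sketch (field names mirror the example manifest above; the surrounding deploy tooling is assumed):

```typescript
interface RemoteDeployment {
  version: string;
  deployedAt: string;
  entryUrl: string;
  commitSha: string;
}

interface DeploymentManifest {
  timestamp: string;
  shell: string;
  remotes: Record<string, RemoteDeployment>;
  sharedDeps: Record<string, string>;
}

// Roll back a single remote to a previously deployed entry without touching
// the shell or any sibling remote. Returns a new manifest; the input is
// left unmodified so the old manifest remains auditable.
function rollbackRemote(
  manifest: DeploymentManifest,
  name: string,
  previous: RemoteDeployment
): DeploymentManifest {
  if (!manifest.remotes[name]) {
    throw new Error(`Unknown remote: ${name}`);
  }
  return {
    ...manifest,
    timestamp: new Date().toISOString(),
    remotes: { ...manifest.remotes, [name]: previous },
  };
}
```

The hard part is not this function; it is keeping an archive of previous RemoteDeployment entries and their still-cached assets so there is always something to roll back to.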
Unit tests for individual remotes are straightforward. Integration tests that verify the composed application works are not.
// integration-tests/analytics-in-shell.test.tsx
import { test, expect } from '@playwright/test';
test.describe('Analytics remote in shell', () => {
test('loads analytics dashboard', async ({ page }) => {
await page.goto('/analytics');
// Wait for the remote to load (this is the micro-frontend tax in tests)
await page.waitForSelector('[data-testid="analytics-dashboard"]', {
timeout: 10000, // remotes can be slow to load in CI
});
await expect(
page.locator('[data-testid="analytics-dashboard"]')
).toBeVisible();
});
test('navigation between remotes preserves shell state', async ({ page }) => {
await page.goto('/analytics');
await page.waitForSelector('[data-testid="analytics-dashboard"]');
// Verify user info in shell header
const userName = await page.textContent('[data-testid="shell-user-name"]');
// Navigate to billing
await page.click('[data-testid="nav-billing"]');
await page.waitForSelector('[data-testid="billing-dashboard"]', {
timeout: 10000,
});
// Verify user info is still correct
await expect(page.locator('[data-testid="shell-user-name"]'))
.toHaveText(userName!);
});
test('error boundary catches remote load failure', async ({ page }) => {
// Simulate remote being unavailable
await page.route('**/analytics/remoteEntry.js', route => route.abort());
await page.goto('/analytics');
await expect(
page.locator('[data-testid="remote-unavailable"]')
).toBeVisible();
// Shell should still be functional
await expect(
page.locator('[data-testid="shell-header"]')
).toBeVisible();
});
});These tests are slow. They require either a full running environment with all remotes or a mock setup that approximates production. In CI, they add 5-15 minutes depending on the number of remotes and the complexity of the interactions being tested.
You also need contract tests between the shell and each remote to catch interface changes before deployment:
// contracts/analytics-contract.test.ts
import { test, expect } from 'vitest';
test('analytics remote exports expected interface', async () => {
// This would load the actual built remote in a test environment
const analyticsModule = await import('analytics/App');
// Verify the remote exports a valid React component
expect(analyticsModule.default).toBeDefined();
expect(typeof analyticsModule.default).toBe('function');
// If the remote accepts props, verify the interface
// This catches breaking changes before deployment
Webpack's Module Federation was the pioneer, but the ecosystem is evolving. Rspack (the Rust-based webpack alternative) supports Module Federation and is significantly faster.

Module Federation 2.0 introduces several improvements, including manifest-based remotes, runtime plugins, and automatic TypeScript type generation:
// rspack.config.js with Module Federation 2.0
const { ModuleFederationPlugin } = require('@module-federation/enhanced/rspack');
module.exports = {
plugins: [
new ModuleFederationPlugin({
name: 'shell',
remotes: {
analytics: 'analytics@https://cdn.example.com/analytics/mf-manifest.json',
},
shared: {
react: { singleton: true, requiredVersion: '^18.2.0' },
'react-dom': { singleton: true, requiredVersion: '^18.2.0' },
},
// 2.0 features
runtimePlugins: [
require.resolve('./src/mf-plugins/version-check'),
require.resolve('./src/mf-plugins/error-reporting'),
],
}),
],
};

The type generation is the most impactful feature. In Module Federation 1.0, consuming a remote module meant trusting that the interface had not changed. With 2.0, types are generated and published alongside the remote build, so the host gets compile-time errors when a remote's API changes.
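As a sketch of how a remote might opt into type generation, assuming the `@module-federation/enhanced` plugin (the exact shape of the `dts` option varies by version, so treat this as illustrative rather than definitive):

```javascript
// rspack.config.js for the analytics remote -- a sketch, not a verified config
const { ModuleFederationPlugin } = require('@module-federation/enhanced/rspack');

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: 'analytics',
      exposes: {
        './App': './src/App',
      },
      // Generate .d.ts files for exposed modules and publish them with the build.
      // Hosts that consume this remote pull the types in and fail compilation
      // when the remote's interface drifts.
      dts: {
        generateTypes: true,
        consumeTypes: true,
      },
    }),
  ],
};
```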
This is a genuine improvement, but it does not change the fundamental tradeoffs. The organizational and architectural costs remain the same.
Across the implementations I have been involved with and the ones I have observed at other companies, the successful micro-frontend architectures share these characteristics:
1. Genuinely independent products. Not "features" within a product, but products that could plausibly exist as separate applications. The micro-frontend architecture is not creating the boundary; it is acknowledging one that already exists.
2. Large engineering organizations (50+ frontend engineers). The coordination cost of micro-frontends is high. You need enough engineers that the coordination cost of a monolith is even higher.
3. A dedicated platform team. Someone needs to own the shell, the shared dependency management, the deployment infrastructure, the remote registry, the integration tests. This is not a part-time job. At Company B, we had a team of four engineers whose full-time job was the micro-frontend platform.
4. Minimal cross-remote state. The products share a user session and a navigation bar. They do not share shopping carts, real-time notifications, or collaborative editing state. When the shared surface area is large, the micro-frontend boundary creates more problems than it solves.
5. Different release cadences. One product ships daily, another ships weekly, another ships monthly. If all teams are on the same two-week sprint and deploy on the same day, the independent deployment benefit evaporates.
I am going to be direct: most teams considering micro-frontends should not adopt them. Here is a non-exhaustive list of situations where I have seen micro-frontends adopted and then abandoned:
"Our build is slow." Fix your build. Use Turborepo. Use rspack. Add remote caching. A slow build is a build problem, not an architecture problem.
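To make the "fix your build" route concrete: a cached, dependency-aware Turborepo pipeline is a few lines of configuration. This is a minimal sketch (newer Turborepo versions use a `tasks` key instead of `pipeline`; check the version you are on):

```json
{
  "$schema": "https://turbo.build/schema.json",
  "pipeline": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**"]
    },
    "test": {
      "dependsOn": ["build"]
    }
  }
}
```

Paired with remote caching (`npx turbo login` and `npx turbo link` for Vercel's hosted cache, or a self-hosted equivalent), packages that did not change are never rebuilt, locally or in CI.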
"Teams keep stepping on each other's toes." Improve your code ownership model. Use CODEOWNERS. Set up proper code review. Create clear module boundaries within the monolith. These are process problems.
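The ownership model above is a one-file fix on GitHub. A sketch, with hypothetical team handles:

```
# .github/CODEOWNERS -- reviews are automatically requested from the owning team
/src/features/analytics/   @acme/team-alpha
/src/features/billing/     @acme/team-beta
/src/features/users/       @acme/team-gamma
/packages/design-system/   @acme/platform
```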
"We want technology freedom." No, you do not. You want everyone to use the same framework, the same component library, and the same state management approach. Technology diversity in a frontend increases maintenance burden exponentially. The one team that chose Svelte will regret it when they need to integrate with the React design system.
"We want to scale like [insert large tech company]." You are not that company. They have hundreds of frontend engineers and dedicated platform teams. You have twelve developers and a part-time DevOps engineer.
"Microservices on the backend, micro-frontends on the frontend." Backend service boundaries and frontend boundaries are different things. A microservice might serve data to five different parts of the UI. Splitting the frontend along the same lines as the backend is a category error.
For teams between 5 and 50 frontend engineers, the right answer is almost always a well-structured monorepo with code-level boundaries that could become deployment boundaries later if needed.
monorepo/
packages/
design-system/ (shared UI components)
shared-utils/ (shared utilities)
shared-types/ (TypeScript interfaces)
apps/
main-app/ (single deployable application)
src/
features/
analytics/ (Team Alpha owns this)
components/
hooks/
routes/
index.ts (public API -- only import from here)
billing/ (Team Beta owns this)
components/
hooks/
routes/
index.ts
users/ (Team Gamma owns this)
components/
hooks/
routes/
index.ts
With ESLint boundaries and proper code ownership:
// .eslintrc.js (using eslint-plugin-boundaries)
module.exports = {
plugins: ['boundaries'],
settings: {
'boundaries/elements': [
{ type: 'analytics', pattern: 'features/analytics/*' },
{ type: 'billing', pattern: 'features/billing/*' },
{ type: 'users', pattern: 'features/users/*' },
{ type: 'shared', pattern: 'shared/*' },
],
},
rules: {
'boundaries/element-types': [
'error',
{
default: 'disallow',
rules: [
// Features can only import from shared, not from each other
{ from: 'analytics', allow: ['shared'] },
{ from: 'billing', allow: ['shared'] },
{ from: 'users', allow: ['shared'] },
],
},
],
},
};

This gives you enforced ownership boundaries, compile-time errors on cross-feature imports, and clear public APIs between features -- without any of the runtime complexity of micro-frontends.
The key insight is that code boundaries and deployment boundaries are independent decisions. You can have strong code boundaries within a single deployable unit. You can add deployment boundaries later when -- and only when -- the organizational pain justifies the technical cost.
Here is my decision framework. Answer these questions honestly:
Do you have more than 50 frontend engineers? If no, stop. Use a monorepo with code-level boundaries.
Are the products genuinely independent? If a product manager could describe each product without referencing the others, they might be independent. If the products share workflows, data, or user journeys, they are not independent. They are features of one product.
Can you afford a platform team? You need 2-4 engineers whose full-time job is the micro-frontend infrastructure. Shared dependency management, deployment orchestration, integration testing, developer experience. If you cannot dedicate these engineers, you cannot maintain the architecture.
Is independent deployment actually valuable? If all teams deploy on the same schedule, independent deployment adds complexity without benefit. Independent deployment is valuable when teams have genuinely different release cadences and cannot tolerate waiting for each other.
Have you exhausted simpler solutions? Turborepo with remote caching, proper code splitting, CODEOWNERS, ESLint boundary rules, feature flags for incremental rollout. If you have not tried these, try them first. They solve 80% of the problems that micro-frontends are adopted to solve, at 10% of the cost.
Is the pain organizational, not technical? If the pain is "our build is slow" or "our bundle is big," that is a technical problem with technical solutions. If the pain is "Team A cannot ship because Team B broke the test suite again and their on-call is in a different timezone," that is an organizational problem that micro-frontends might actually help with.
If you answered "yes" to all six questions, micro-frontends might be right for you. Might. Start with the simplest possible implementation: a shell that loads independent SPAs into designated DOM nodes. Do not use Module Federation on day one. Do not build a shared state bus on day one. Start with iframes if you can tolerate the UX tradeoffs. Add complexity only when you have a specific problem that requires it.
Micro-frontends are a solution to an organizational problem masquerading as a technical architecture. They work when the organization has genuinely independent teams building genuinely independent products. They fail when they are adopted for technical reasons by organizations that do not have the independence to justify the overhead.
The most successful frontend architectures I have seen are boring. A monorepo. A single build pipeline. A shared design system. Code-level boundaries enforced by linting. Team ownership enforced by CODEOWNERS. Feature flags for incremental rollout. Remote caching for fast builds.
If you are considering micro-frontends, start by drawing your org chart. If the teams that would own each micro-frontend do not already operate independently -- different product managers, different roadmaps, different release schedules -- the architecture will fight the organization. And the organization always wins.
Build the simplest thing that works. Draw boundaries in code, not in infrastructure. And if you ever genuinely need micro-frontends, the well-structured monorepo with enforced boundaries will be dramatically easier to decompose than the tangled monolith you would have had without boundaries at all.
The best micro-frontend migration is the one you never have to do because you structured your monolith well enough from the start.