The strategic decisions behind monorepo tooling that blog posts skip over. Nx vs Turborepo vs Bazel in practice, remote caching failures, CI pipelines that actually work at scale, and when your monorepo has quietly become a monolith.
I have managed monorepos ranging from four packages to four hundred. The small ones felt magical. The large ones felt like maintaining a nuclear reactor -- one wrong dependency and the whole graph collapses, builds take forty minutes, and someone on the platform team spends their Friday night debugging why remote cache invalidation broke the staging deploy.
The conversation around monorepos in 2026 is frustratingly shallow. You get two flavors of content: "Monorepos are great, here is how to set up Turborepo in ten minutes" and "Google uses a monorepo so you should too." Neither addresses the strategic decisions that determine whether your monorepo becomes a productivity multiplier or a coordination tax disguised as shared infrastructure.
This is not a setup guide. I already wrote one of those for Turborepo. This is the post about the decisions that will cost you weeks or months if you get them wrong, and the lessons I learned the hard way across multiple organizations and tooling migrations.
Let me dismantle the most dangerous argument in monorepo advocacy: "Google does it, so it works."
Google has a monorepo. It contains billions of lines of code. It is the most successful monorepo in the history of software engineering. It is also completely irrelevant to your situation.
Google built Blaze (the internal precursor to Bazel) over many years with a dedicated team of infrastructure engineers. They built a custom distributed file system (Piper) because regular Git cannot handle a repository that large. They built a custom code review tool (Critique), a custom CI system, custom IDE integrations, and custom dependency management tools. The team that maintains this infrastructure is larger than most companies' entire engineering departments.
When someone says "Google uses a monorepo," what they are actually saying is "Google spent hundreds of millions of dollars building custom infrastructure to make a monorepo work at their scale." Meta did the same thing with their Mercurial-based infrastructure. Microsoft built VFS for Git because even Git's architecture could not handle the Windows repository without a virtual file system layer.
The question is not whether monorepos work. They obviously do. The question is whether your organization has the tooling budget to make one work at the scale you are targeting. For most teams up to about fifty engineers with a few dozen packages, the answer is yes -- off-the-shelf tools handle this well. For teams pushing into the hundreds of packages with complex dependency graphs, you need to start thinking about whether you are going to invest seriously in build infrastructure or whether you are going to drown in slow CI and broken caches.
I have seen three companies attempt Google-style monorepos without Google-style infrastructure. All three eventually split their repositories, not because the monorepo concept was wrong, but because nobody was willing to fund a platform team to maintain it. The engineers who set it up moved on, and the institutional knowledge of how the build system worked left with them.
The takeaway is boring but critical: match your monorepo ambitions to your infrastructure investment. A well-maintained monorepo with twenty packages and good tooling is worth infinitely more than a two-hundred-package monorepo where nobody understands the task graph.
I have used all four major tools -- Turborepo, Nx, Bazel, and Pants -- in production. Here is what I actually think, stripped of the usual diplomatic "it depends" hedging.
Turborepo is the easiest to adopt. If you have an existing pnpm or npm workspace and you want caching and task orchestration without changing your mental model, Turborepo is the answer. You add a turbo.json, define your pipeline, and things get faster immediately.
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**", ".next/**"]
    },
    "test": {
      "dependsOn": ["^build"],
      "cache": true
    },
    "lint": {
      "cache": true
    },
    "dev": {
      "persistent": true,
      "cache": false
    }
  }
}

The strength of Turborepo is its simplicity. The ^ syntax for topological dependencies is intuitive. The caching is file-hash-based and works without configuration. Remote caching through Vercel is turnkey. In 2026, Turborepo has matured significantly -- watch mode, task boundaries, and better error messages have all improved.
The weakness is that Turborepo is opinionated about being minimal. It does not generate code. It does not manage dependency constraints between packages. It does not have a plugin system. When you need those things, you either build them yourself or you look at Nx.
The other weakness, and this is the one nobody talks about, is that Turborepo's task hashing can produce false cache hits in edge cases involving environment variables and file system state. I have spent entire days tracking down bugs where a cached build artifact was stale because an environment variable changed but the turbo hash did not. You solve this with globalEnv and env declarations in turbo.json, but you have to know about every environment variable that affects your build, and you will forget one.
{
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": [".next/**", "dist/**"],
      "env": ["DATABASE_URL", "NEXT_PUBLIC_API_URL"],
      "passThroughEnv": ["NODE_ENV", "CI"]
    }
  },
  "globalEnv": ["VERCEL_URL", "GITHUB_SHA"]
}

Nx is the Swiss Army knife. It does everything: code generation, dependency graph visualization, constraint enforcement, affected command computation, distributed task execution, and it has a plugin system that covers most frameworks.
// nx.json -- workspace configuration
{
  "targetDefaults": {
    "build": {
      "dependsOn": ["^build"],
      "inputs": ["production", "^production"],
      "cache": true
    },
    "test": {
      "inputs": ["default", "^production", "{workspaceRoot}/jest.preset.js"],
      "cache": true
    }
  },
  "namedInputs": {
    "default": ["{projectRoot}/**/*", "sharedGlobals"],
    "production": [
      "default",
      "!{projectRoot}/**/?(*.)+(spec|test).[jt]s?(x)?(.snap)",
      "!{projectRoot}/tsconfig.spec.json",
      "!{projectRoot}/jest.config.[jt]s"
    ],
    "sharedGlobals": ["{workspaceRoot}/.github/workflows/ci.yml"]
  }
}

The strength of Nx is its intelligence about the dependency graph. The nx affected command genuinely understands which projects are impacted by a code change, and it does this by analyzing imports, not just file hashes. This means if you change a shared utility, Nx knows exactly which apps and libraries consume it and only runs their tests. Turborepo does something similar, but Nx's graph analysis has historically been more granular.
Nx also has module boundary enforcement, which is underrated:
// .eslintrc.json with @nx/enforce-module-boundaries
{
  "rules": {
    "@nx/enforce-module-boundaries": [
      "error",
      {
        "depConstraints": [
          {
            "sourceTag": "scope:shared",
            "onlyDependOnLibsWithTags": ["scope:shared"]
          },
          {
            "sourceTag": "scope:admin",
            "onlyDependOnLibsWithTags": ["scope:shared", "scope:admin"]
          },
          {
            "sourceTag": "type:feature",
            "onlyDependOnLibsWithTags": ["type:util", "type:data-access", "type:ui"]
          }
        ]
      }
    ]
  }
}

This is the feature that prevents your monorepo from becoming a monolith. Without constraints, developers will import anything from anywhere, and your dependency graph will become a tangled mess where every change affects every package. I have seen this happen in under six months on a team of fifteen.
The weakness of Nx is its learning curve and its weight. The project.json files, the generators, the executors, the plugins -- it is a lot of machinery. When something goes wrong deep in the Nx plugin system, debugging it requires understanding how Nx resolves and executes tasks, which is a nontrivial investment. I have lost days to executor configuration issues where the error message pointed to the wrong problem entirely.
Nx Cloud (their remote caching and distributed task execution service) is good but creates vendor dependency. In 2026, self-hosted Nx Cloud via their Powerpack offering is an option, but it adds operational overhead. You are essentially running another piece of infrastructure.
Bazel is the correct choice if you have a polyglot monorepo with more than two hundred packages and a dedicated platform team of at least three engineers. For everyone else, it is almost certainly overkill.
I am not being dismissive. Bazel's model is genuinely superior for large-scale builds. The hermeticity guarantees -- where every build is reproducible regardless of local machine state -- are something neither Turborepo nor Nx can match. The distributed execution model is proven at a scale that dwarfs anything in the JavaScript ecosystem.
# BUILD.bazel for a TypeScript library
load("@aspect_rules_ts//ts:defs.bzl", "ts_project")
load("@aspect_rules_jest//jest:defs.bzl", "jest_test")

ts_project(
    name = "shared-utils",
    srcs = glob(["src/**/*.ts"]),
    deps = [
        "//packages/types:types",
        "//:node_modules/@types/node",
    ],
    declaration = True,
    tsconfig = ":tsconfig.json",
    visibility = ["//visibility:public"],
)

jest_test(
    name = "test",
    srcs = glob(["src/**/*.test.ts"]),
    deps = [":shared-utils"],
    config = ":jest.config.js",
)

The weakness is adoption cost. Learning BUILD files, understanding Starlark, setting up rules for your specific stack, maintaining the toolchain -- this is a multi-month investment. Every JavaScript developer on your team needs to understand at least the basics of how Bazel resolves dependencies, or they will be blocked every time they add a new file or dependency.
I have watched two teams adopt Bazel for JavaScript monorepos. One had a platform team that owned the toolchain and it worked well. The other expected individual feature teams to maintain their own BUILD files and it was a slow-motion disaster. BUILD file conflicts became the number one source of merge conflicts, and developers started routing around the build system by copying files instead of creating proper dependencies.
Pants deserves mention because it solves a real problem: Bazel-like hermeticity without Bazel-level complexity. It infers dependency information from source code instead of requiring manual BUILD files, which removes a massive source of friction.
In 2026, Pants has solid support for Python and Go, decent support for Java and Kotlin, and emerging support for JavaScript/TypeScript through community plugins. If your monorepo is primarily Python or Go, Pants is genuinely competitive with Bazel and far easier to adopt. For JavaScript-heavy monorepos, Nx or Turborepo are still better choices because the ecosystem support is more mature.
Here is how I actually decide:
Under ten packages, JavaScript-only: Turborepo. The simplicity is worth more than the features you are missing.
Ten to one hundred packages, JavaScript-primary: Nx. You need the graph intelligence, the affected commands, and the module boundary enforcement.
One hundred plus packages, polyglot: Evaluate Bazel or Pants, but only if you can staff a platform team. Otherwise, consider splitting into two or three focused monorepos.
One common mistake is starting with Bazel because you anticipate scale. Do not do this. Start with Turborepo or Nx, solve your immediate problems, and migrate to Bazel later if and when you hit the limits of those tools. I have seen this migration done successfully. I have never seen a team start with Bazel and not regret the complexity during the first year when they had twenty packages and three developers.
Caching is the headline feature of every monorepo tool. "Zero unnecessary rebuilds." "10x faster CI." The marketing writes itself because it is genuinely transformative when it works.
The problem is the "when it works" part. I want to walk through the three levels of caching and where each one breaks.
Local caching means your machine remembers the output of previous tasks. If you run build on package A, then change package B and run build again, package A's build is served from cache because its inputs have not changed.
This works well and rarely breaks. The main failure mode is cache size -- on a large monorepo, cached artifacts can consume tens of gigabytes. Turborepo and Nx both handle cache eviction, but developer machines and CI runners with limited disk space sometimes hit issues.
Remote caching means sharing cache artifacts across machines. Developer A builds package X on their laptop, the artifact goes to a remote cache, and when CI runs the same build, it pulls from cache instead of rebuilding. This is where the magic happens and also where the pain lives.
# Turborepo remote cache configuration
npx turbo login
npx turbo link

# Or self-hosted with a custom endpoint
# turbo.json
{
  "remoteCache": {
    "signature": true,
    "enabled": true
  }
}

The failure modes of remote caching are subtle and maddening:
Platform mismatches. If developer A is on macOS ARM and CI runs on Linux x86, cache artifacts that include native binaries are not portable. This bites you with any package that has native dependencies -- sharp, better-sqlite3, esbuild platform-specific binaries, and others. You end up needing platform-aware cache keys, which both Turborepo and Nx support but which you have to configure correctly.
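One hedged way to keep macOS and Linux artifacts from colliding is to fold the platform into the task hash through a declared environment variable. PLATFORM_KEY below is a name chosen for illustration, not a Turborepo built-in; your CI config and developer shells would need to export it themselves, for example from node -p "process.platform + '-' + process.arch".

```json
{
  "globalEnv": ["PLATFORM_KEY"],
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**"]
    }
  }
}
```

Because PLATFORM_KEY participates in every task hash, a Linux CI runner and an ARM MacBook write distinct cache entries instead of sharing non-portable native binaries.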
Environment variable drift. I mentioned this earlier but it bears repeating. If your build output depends on NODE_ENV or API_URL and you do not declare these in your cache key configuration, you will serve production artifacts in development or vice versa. This class of bug is particularly insidious because everything looks correct -- the build "succeeded," the cache "hit" -- but the output is wrong.
Cache poisoning. If a developer's local build produces incorrect output and that gets pushed to the remote cache, every subsequent consumer of that cache gets the bad artifact. Turborepo's signature verification helps, but it does not validate correctness, only authenticity. The defense is good CI hygiene: always rebuild from clean state periodically, and have a way to flush the remote cache when things go wrong.
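To make the failure modes above concrete, here is a deliberately simplified, hypothetical model of how a tool like Turborepo derives a cache key: hash the input file contents, the declared environment variables, and the platform. Any input not folded into this hash can change without invalidating the cache -- which is exactly the stale-artifact bug described above. This is a sketch, not Turborepo's actual algorithm.

```typescript
import { createHash } from "node:crypto";

// Hypothetical cache-key derivation: only declared inputs affect the hash.
function cacheKey(
  fileHashes: Record<string, string>,
  declaredEnv: string[],
  env: Record<string, string | undefined>,
  platform: string
): string {
  const h = createHash("sha256");
  // file contents (sorted for determinism)
  for (const [file, hash] of Object.entries(fileHashes).sort(([a], [b]) => a.localeCompare(b))) {
    h.update(`${file}=${hash};`);
  }
  // only DECLARED env vars participate in the key
  for (const name of [...declaredEnv].sort()) {
    h.update(`${name}=${env[name] ?? ""};`);
  }
  h.update(`platform=${platform}`);
  return h.digest("hex").slice(0, 16);
}

const files = { "src/index.ts": "abc123" };
const envA = { DATABASE_URL: "postgres://prod" };
const envB = { DATABASE_URL: "postgres://staging" };

// Declared: the key changes when the variable changes (a correct cache miss)
console.log(
  cacheKey(files, ["DATABASE_URL"], envA, "linux-x64") !==
  cacheKey(files, ["DATABASE_URL"], envB, "linux-x64")
); // true

// Undeclared: identical keys despite different env -- a false cache hit
console.log(
  cacheKey(files, [], envA, "linux-x64") ===
  cacheKey(files, [], envB, "linux-x64")
); // true
```

The second comparison is the bug in miniature: the build output depends on DATABASE_URL, but the key does not, so a stale artifact is served with full confidence.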
Distributed task execution is the frontier. Instead of running all tasks on one CI machine, you distribute them across multiple machines that coordinate through a central orchestrator.
Nx Cloud offers this as Distributed Task Execution (DTE). The idea is brilliant: if you have one hundred test suites that take a total of sixty minutes sequentially, distribute them across ten machines and finish in six minutes.
# Example CI config with Nx Cloud DTE
name: CI
on: [push]
jobs:
  main:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: nrwl/nx-set-shas@v4
      - run: npm ci
      - run: npx nx-cloud start-ci-run --distribute-on="5 linux-medium-js"
      - run: npx nx affected -t lint test build --parallel=3
      - run: npx nx-cloud stop-all-agents

The practical reality is messier. Task distribution requires that every task be hermetic -- it cannot depend on local file system state that other tasks create. If task A writes a file that task B reads, and they run on different machines, task B fails. Finding and fixing these implicit dependencies is the bulk of the work in adopting DTE.
I spent three weeks untangling implicit task dependencies when setting up DTE for a sixty-package monorepo. The test suite had been written assuming sequential execution, with test fixtures that relied on build artifacts being present on the same machine. Refactoring those tests to be truly independent was the cost of admission.
If you take one thing from this post, make it this: the dependency graph is the most important artifact in your monorepo. Not your CI config. Not your build scripts. The graph.
Every monorepo tool works by building a directed acyclic graph (DAG) of dependencies between packages. When you change package A, the tool walks the graph to find every package that depends on A (directly or transitively) and runs the relevant tasks on those packages. This is the "affected" command, and it is the foundation of efficient monorepo CI.
# Nx: show what is affected by current changes
npx nx graph --affected

# Turborepo: run tests only on affected packages
npx turbo run test --filter=...[HEAD~1]

# Both of these walk the dependency graph from changed packages
# to find everything downstream

The graph is built from dependency declarations -- package.json dependencies, imports, and configuration files. When these declarations are incomplete or incorrect, the affected command misses packages and you get false negatives (changed code that does not trigger a rebuild).
Here are the three most common causes:
Undeclared dependencies. Package A imports a file from package B using a relative path instead of a workspace dependency. The build works because the file system resolves the path, but the dependency graph does not know about the relationship. When package B changes, package A is not flagged as affected.
// BAD: relative import bypasses the dependency graph
import { helper } from "../../packages/shared/src/helper";
// GOOD: workspace dependency, visible in the graph
import { helper } from "@myorg/shared";

Implicit global dependencies. Your build depends on a root-level configuration file (say, a shared Tailwind config or a PostCSS config), but the dependency graph does not know about it. When you change that config, nothing is flagged as affected. The fix is declaring these as implicit dependencies in your tool configuration.
// nx.json -- declaring implicit dependencies
{
  "namedInputs": {
    "sharedGlobals": [
      "{workspaceRoot}/tailwind.config.ts",
      "{workspaceRoot}/postcss.config.js",
      "{workspaceRoot}/tsconfig.base.json"
    ]
  }
}

Circular dependencies. This is the one that catches people off guard. Two packages import from each other, creating a cycle in the graph. Most monorepo tools detect and reject this, but it can also manifest subtly through transitive dependencies: A depends on B, B depends on C, C depends on A. The solution is usually to extract the shared code into a new package that all three depend on, breaking the cycle.
I keep a strict rule: run nx graph (or equivalent) in CI and fail the build if the graph contains cycles. It is far easier to prevent cycles than to untangle them after they have been baked into the codebase for months.
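The check itself is small. Here is a minimal sketch of what that CI gate does: build the package graph (package to its dependencies) and fail if a depth-first search finds a back edge. The package names are hypothetical; in practice you would feed this the output of your tool's graph command.

```typescript
type Graph = Record<string, string[]>; // package -> its dependencies

// Returns the first cycle found as a path (e.g. ["a","b","c","a"]), or null.
function findCycle(graph: Graph): string[] | null {
  const visiting = new Set<string>(); // nodes on the current DFS path
  const done = new Set<string>();     // fully explored, known cycle-free
  const path: string[] = [];

  function dfs(node: string): string[] | null {
    if (done.has(node)) return null;
    if (visiting.has(node)) {
      // back edge: the cycle is the path segment from `node` onward
      return [...path.slice(path.indexOf(node)), node];
    }
    visiting.add(node);
    path.push(node);
    for (const dep of graph[node] ?? []) {
      const cycle = dfs(dep);
      if (cycle) return cycle;
    }
    path.pop();
    visiting.delete(node);
    done.add(node);
    return null;
  }

  for (const node of Object.keys(graph)) {
    const cycle = dfs(node);
    if (cycle) return cycle;
  }
  return null;
}

// The transitive cycle from the text: A -> B -> C -> A
const cyclic: Graph = { a: ["b"], b: ["c"], c: ["a"] };
console.log(findCycle(cyclic)); // [ 'a', 'b', 'c', 'a' ]

const acyclic: Graph = { a: ["b"], b: ["c"], c: [] };
console.log(findCycle(acyclic)); // null
```

Wire the equivalent into CI (Nx's graph JSON output, or madge over your packages) and exit non-zero when a cycle comes back.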
The promise of monorepos is shared code. The reality is that shared code comes in several flavors, and choosing the wrong one creates friction that erodes the benefits.
Internal packages are the simplest pattern. Package A imports directly from package B's source code. No build step for the library, no compiled artifacts. The consuming application's bundler (webpack, Vite, esbuild) compiles the library code as part of its own build.
// packages/ui/package.json
{
  "name": "@myorg/ui",
  "main": "./src/index.ts",
  "types": "./src/index.ts",
  "exports": {
    ".": "./src/index.ts",
    "./button": "./src/components/Button.tsx",
    "./dialog": "./src/components/Dialog.tsx"
  }
}

This is the fastest development loop because there is no intermediate build step. You change the library, the consuming app's dev server picks it up immediately through hot module replacement. For internal-only packages that will never be published to npm, this is almost always the right choice.
The drawback is that every consuming application must be able to compile the library's source. If your library uses TypeScript features or JSX that the consumer's bundler does not understand, you need to configure transpilation in the consumer. With modern bundlers and Next.js's transpilePackages, this is straightforward but it is one more thing to configure.
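With Next.js, for example, the consumer opts into compiling the internal package's source via transpilePackages. The package names here are illustrative:

```javascript
// next.config.js -- package names are hypothetical
/** @type {import('next').NextConfig} */
const nextConfig = {
  // compile these workspace packages from source as part of the app build
  transpilePackages: ["@myorg/ui", "@myorg/shared-utils"],
};

module.exports = nextConfig;
```

Vite and esbuild setups have equivalent knobs; the point is that the configuration lives in the consumer, not the library.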
Buildable libraries have their own build step that produces compiled JavaScript and declaration files. Consumers import from the compiled output, not the source.
// packages/shared-utils/package.json
{
  "name": "@myorg/shared-utils",
  "main": "./dist/index.js",
  "types": "./dist/index.d.ts",
  "scripts": {
    "build": "tsup src/index.ts --format esm,cjs --dts"
  }
}

The advantage is isolation. The library's build is self-contained. Consumers do not need to know what language features the library uses because they receive standard JavaScript and type declarations. This is necessary when your library will be consumed by applications with different build toolchains.
The disadvantage is the build step. In development, you need to either rebuild the library on every change (slow) or use watch mode (one more process to run). The dependency graph must ensure libraries are built before their consumers, which adds complexity to your task pipeline.
Publishable libraries are buildable libraries with versioning, changelogs, and the ability to be published to an npm registry (public or private). You use this pattern when code needs to be consumed both inside and outside the monorepo.
This is where things get complicated. You now have two distribution channels: workspace links for internal consumers and published packages for external consumers. They can diverge. An internal consumer uses the latest source, but an external consumer uses whatever version they have installed. If you are not careful, you end up with the same "which version is deployed" problem that monorepos were supposed to solve.
My rule of thumb: default to internal packages. Promote to buildable only when you need build isolation. Promote to publishable only when external consumption is a concrete requirement, not a hypothetical future need.
The single most common monorepo failure I see is CI that runs everything on every commit. A team sets up a monorepo, configures a CI pipeline that runs lint, test, and build for all packages, and things are fine for the first few months. Then the monorepo grows to thirty packages and CI takes twenty-five minutes. Then fifty packages and forty minutes. Then someone adds a slow integration test suite and CI takes over an hour.
At this point, developers start skipping CI. They merge without waiting for checks. Bugs slip through. The monorepo gets blamed for slow development velocity when the real problem is a CI pipeline that does not understand the dependency graph.
Every monorepo CI pipeline should run tasks only on packages affected by the current change. Here is a GitHub Actions example with Nx:
name: CI
on:
  pull_request:
    branches: [main]
jobs:
  affected:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: pnpm/action-setup@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: "pnpm"
      - run: pnpm install --frozen-lockfile
      - uses: nrwl/nx-set-shas@v4
      - name: Lint affected
        run: npx nx affected -t lint --parallel=5
      - name: Test affected
        run: npx nx affected -t test --parallel=3 --configuration=ci
      - name: Build affected
        run: npx nx affected -t build --parallel=3

The fetch-depth: 0 is critical -- the tool needs the full git history to determine what changed between the PR branch and main. I have seen teams waste days debugging affected commands that returned "everything is affected" because their CI was doing a shallow clone.
The nx-set-shas action determines the correct base commit for comparison. Without it, affected calculation on CI can compare against the wrong commit, leading to either too many or too few packages being flagged.
Turborepo uses the --filter flag with git range syntax:
- name: Build affected
  run: npx turbo run build --filter="...[origin/main...HEAD]"
- name: Test affected
  run: npx turbo run test --filter="...[origin/main...HEAD]"

The ... syntax means "packages that changed between these two commits, plus all their dependents." It is powerful but the syntax is not intuitive and I have seen developers get it wrong more than once.
Even with affected-only CI, you should run a full build on the main branch after every merge. This catches the rare case where two PRs individually pass CI but together introduce a conflict -- for example, both PRs modify the API contract of a shared package in incompatible ways.
name: Main Build
on:
  push:
    branches: [main]
jobs:
  full-build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: "pnpm"
      - run: pnpm install --frozen-lockfile
      - run: npx nx run-many -t lint test build --parallel=3

This is your safety net. It runs everything. It is the pipeline that tells you "the monorepo is healthy." If this pipeline fails on main, it is an all-hands-on-deck situation.
I have done this migration three times. Twice into a monorepo, once back out. Here is what I wish someone had told me.
For a team of six with four repositories, expect six to eight weeks of part-time work. Not two weeks. Not "a weekend project." Here is the actual breakdown:
Week one and two. Set up the monorepo structure, workspace configuration, and build tooling. Get one application building and its tests passing. This is the proof of concept and it goes quickly because you are only dealing with one app.
Week three and four. Migrate the remaining applications. This is where dependency conflicts surface. Application A uses React 18.2 and application B uses React 18.3. Application C uses a different version of TypeScript. Resolving these version conflicts is the most tedious part of the migration because it can trigger cascading type errors.
Week five and six. Extract shared code into packages. You will discover that what you thought was "shared" code actually has subtle differences between repositories. The date formatting utility in repo A handles timezones differently from the one in repo B. Reconciling these differences requires product decisions, not just code changes.
Week seven and eight. CI/CD pipeline, remote caching, developer documentation, and onboarding. Setting up the CI pipeline correctly -- with affected commands, proper caching, and reasonable parallelism -- takes longer than you expect. Write documentation for your team. Record a walkthrough video. Nobody reads documentation, but having it prevents the same questions from being asked repeatedly.
You will need to decide whether to preserve git history from the original repositories. You can use git subtree or manual surgery to bring commits into the monorepo, but the history often does not map cleanly because file paths change. My recommendation: bring the history if it is recent and actively referenced. If the code has been stable and nobody looks at blame older than six months, start fresh with a migration commit and archive the original repositories as read-only.
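For the "bring the history" path, the subtree-merge recipe is worth sketching. This demo is fully self-contained -- it fabricates both a standalone repo and a monorepo in a temp directory so the commands are runnable as-is; the repo names and the apps/web prefix are invented. Against real repositories, only the last five commands (remote add through commit) are the actual import.

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

# Fixture: a standalone repo with two commits of history
git init -q -b main web && cd web
echo "export {};" > index.ts
git add . && git commit -qm "web: initial commit"
echo "// v2" >> index.ts
git commit -qam "web: update index"
cd ..

# Fixture: the monorepo
git init -q -b main mono && cd mono
git commit -q --allow-empty -m "mono: scaffold"

# The import: merge the old repo's history, then graft its tree under apps/web
git remote add web-old ../web
git fetch -q web-old
git merge -q -s ours --no-commit --allow-unrelated-histories web-old/main
git read-tree --prefix=apps/web/ -u web-old/main
git commit -qm "Import web app, preserving history"

# Both original commits are now ancestors in the monorepo's history
git log --oneline | grep -c "web:"
```

The trade-off described above still applies: the old commits touched index.ts, not apps/web/index.ts, so per-file blame needs git log --follow to cross the graft.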
I have seen one monorepo split, and it was the right decision. The monorepo contained a consumer-facing web application and an internal analytics platform. They shared a component library and a handful of utilities. Over eighteen months, the teams diverged: different deployment cadences, different testing philosophies, different infrastructure requirements. The shared code became a coordination bottleneck rather than a productivity boost.
The split took about four weeks. We identified the shared packages that both applications needed, published them to a private npm registry, and moved each application into its own repository. The shared packages remained in a small monorepo of their own, versioned and published through a CI pipeline.
The key insight: the split was not a failure. The monorepo served its purpose during the period when the teams were aligned. When organizational dynamics changed, the architecture needed to follow. Treating a monorepo as a permanent architectural decision is a mistake. It is an organizational tool, and organizations change.
If your monorepo contains publishable packages, you need a versioning strategy. There are two schools, and they both have real problems.
With fixed versioning, all packages share a single version number. When any package changes, all packages get a new version. This is what Angular does with their monorepo.
// Example: all packages at v5.12.0
// packages/core: 5.12.0
// packages/router: 5.12.0
// packages/forms: 5.12.0

The advantage is simplicity. You never wonder which version of @myorg/core is compatible with which version of @myorg/router. They are always the same version.
The disadvantage is noise. A bug fix in @myorg/forms bumps the version of @myorg/core and @myorg/router even though nothing changed in those packages. Consumers who pin versions see "new version available" for packages that have no meaningful changes. At scale, this creates upgrade fatigue.
With independent versioning, each package has its own version number, incremented independently based on changes to that specific package. This is what Changesets manages.
# Generate a changeset
npx changeset
# This creates a markdown file describing the change
# .changeset/cool-tigers-dance.md
# ---
# "@myorg/ui": minor
# "@myorg/shared-utils": patch
# ---
# Added new Dialog component and fixed timezone utility

The advantage is precision. Consumers know exactly what changed in each package. Version bumps are meaningful.
The disadvantage is coordination overhead. If you change a shared utility and three packages depend on it, you need to version-bump all four. Changesets handles this automatically through its dependency graph analysis, but the mental model is more complex. I have seen teams where half the PR comments were about whether the changeset was correct, which version bump was appropriate, and whether a transitive dependency should be patch or minor.
My recommendation: use fixed versioning unless you have external consumers who need fine-grained version control. The simplicity is worth the noise.
VS Code starts struggling around two hundred packages. TypeScript language server memory usage climbs, file watching consumes CPU, and "Go to Definition" takes noticeable seconds instead of being instant. Here is what actually helps.
TypeScript project references let the language server understand package boundaries without loading the entire codebase into a single program.
// tsconfig.json at the root
{
  "references": [
    { "path": "./packages/ui" },
    { "path": "./packages/shared-utils" },
    { "path": "./packages/api-client" },
    { "path": "./apps/web" },
    { "path": "./apps/admin" }
  ],
  "files": []
}

// packages/ui/tsconfig.json
{
  "extends": "../../tsconfig.base.json",
  "compilerOptions": {
    "composite": true,
    "outDir": "./dist",
    "rootDir": "./src",
    "declarationMap": true
  },
  "references": [
    { "path": "../shared-utils" }
  ],
  "include": ["src"]
}

The composite flag tells TypeScript to treat each package as a discrete compilation unit. The language server can then load declaration files for dependencies instead of re-parsing their entire source tree. This makes a dramatic difference -- I have seen TypeScript memory usage drop by sixty percent after properly setting up project references.
Tell VS Code to stop watching directories it does not need to:
// .vscode/settings.json
{
  "files.watcherExclude": {
    "**/node_modules/**": true,
    "**/dist/**": true,
    "**/.turbo/**": true,
    "**/.nx/**": true,
    "**/coverage/**": true,
    "**/.next/**": true
  },
  "search.exclude": {
    "**/dist": true,
    "**/node_modules": true,
    "**/.turbo": true,
    "**/coverage": true
  },
  "typescript.tsdk": "node_modules/typescript/lib",
  "typescript.preferences.importModuleSpecifier": "non-relative"
}

For very large monorepos where even project references are not enough, open only the packages you are actively working on as a VS Code multi-root workspace. This sacrifices some cross-package navigation for IDE responsiveness. I use this for monorepos with more than three hundred packages and it keeps VS Code usable.
The real answer for truly massive monorepos is a language server that supports lazy loading -- only analyzing packages as you navigate into them. JetBrains IDEs handle this better than VS Code in 2026, though VS Code's TypeScript server has improved. If you are hitting IDE limits, consider whether your team would benefit from WebStorm or IntelliJ.
A monorepo is a repository that contains multiple independent projects with well-defined boundaries. A monolith is a single application with no internal boundaries. The difference is not about the number of files or the size of the repository. It is about the dependency graph.
Here are the signs that your monorepo has quietly become a monolith:
Every change triggers every test. If your nx affected output consistently shows eighty percent or more of your packages, your dependency graph is too interconnected. The packages are not independent -- they are tightly coupled modules wearing package.json costumes.
Shared packages have become dumping grounds. You have a @myorg/shared or @myorg/common package that everything depends on. It started with a few utilities and now contains business logic, API clients, configuration, and React components. Every commit to this package invalidates every downstream cache. This is the monorepo equivalent of a God object.
Teams cannot deploy independently. The whole point of separate packages is independent lifecycles. If Team A's deployment is regularly blocked by changes in Team B's package, the boundaries are not real. This manifests as "merge to main and hope nothing breaks" culture.
Build times are not improving with caching. If remote cache hit rates are below fifty percent, something is wrong. Either your cache key configuration is too broad (invalidating caches unnecessarily) or your packages change so frequently that caching provides little benefit. Low cache hit rates are a symptom of poor package boundaries.
Nobody can draw the dependency graph. If you ask five senior engineers to sketch the dependency relationships between your top twenty packages and they all produce different diagrams, the architecture is not clear enough. The graph should be obvious. If it is not, boundaries need to be re-drawn.
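The "everything is affected" symptom can be made quantitative. Here is a small sketch -- package names invented -- that measures, for each package, what fraction of the graph a change to it would invalidate. A healthy graph has a few high-fanout leaves (shared utilities) and many packages that affect only themselves; a monolith-in-disguise has high fractions nearly everywhere.

```typescript
type DepGraph = Record<string, string[]>; // package -> its dependencies

// Reverse reachability: the changed package plus everything that
// (transitively) depends on it. Iterates until no new package is added.
function affectedBy(graph: DepGraph, changed: string): Set<string> {
  const affected = new Set<string>([changed]);
  let grew = true;
  while (grew) {
    grew = false;
    for (const [pkg, deps] of Object.entries(graph)) {
      if (!affected.has(pkg) && deps.some((d) => affected.has(d))) {
        affected.add(pkg);
        grew = true;
      }
    }
  }
  return affected;
}

// A healthy shape: apps depend on shared libs, nothing depends on apps.
const graph: DepGraph = {
  "date-utils": [],
  "ui": ["date-utils"],
  "web": ["ui"],
  "admin": ["ui"],
};

console.log(affectedBy(graph, "web").size);        // 1 -- only itself
console.log(affectedBy(graph, "date-utils").size); // 4 -- the whole graph
```

If that second number divided by the package count hovers near 1.0 for most of your packages, not just the foundational ones, the boundaries are decorative.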
The fix is not splitting the repository. The fix is re-establishing boundaries within the repository. This usually means:
One, break up the God package. @myorg/shared becomes @myorg/date-utils, @myorg/api-client, @myorg/ui-primitives, and so on. Each new package has a clear responsibility and a narrow dependency footprint.
Two, enforce dependency constraints. Use Nx module boundaries or a custom lint rule that prevents packages from depending on things they should not. Make the constraint enforcement part of CI so it cannot be bypassed.
Three, audit and reduce cross-package imports. For every import that crosses a package boundary, ask: does this dependency make sense architecturally? Would I be comfortable if these two packages were in separate repositories? If the answer is no, the code needs to be restructured.
Four, consider strategic splitting. If your monorepo genuinely contains two unrelated product lines with separate teams, separate deployment pipelines, and separate business priorities, a split might be the right call. But split into two focused monorepos, not back into a polyrepo free-for-all. Each monorepo should still contain packages that share code and deploy together.
The monorepo is a tool. Like any tool, it works brilliantly when matched to the problem and maintained with discipline. The engineers I respect most are not the ones who can configure the most complex Nx workspace or optimize Turborepo cache hit rates to ninety-nine percent. They are the ones who can look at a dependency graph and say, "This boundary is wrong, and here is how we fix it."
That judgment -- knowing where to draw the lines -- is the real skill of monorepo architecture. The tooling is secondary.