Part 3. Multiple DbContexts per Module Without Breaking Transactions

By the time you’ve enforced real module boundaries and organised behaviour using vertical slices, you run head-first into the next uncomfortable question:
If modules are truly independent, why are they still sharing a DbContext?
This is where most modular monoliths quietly cheat.
They talk about ownership and boundaries, but behind the scenes everything funnels through a single EF Core DbContext, backed by a single schema, and stitched together with navigation properties that bleed across modules.
The Shared DbContext Lie
Let’s call it what it is. A shared DbContext across modules is a lie, or at least a very convincing one. It claims that modules are independent while quietly giving them direct and implicit access to each other’s persistence concerns. On the surface everything looks clean, but underneath, the boundaries are already compromised.
Once a shared DbContext exists, a familiar pattern starts to emerge. Someone adds a cross-module join “just this once” to save time. Navigation properties begin to grow tentacles, reaching into parts of the system they were never meant to know about. Performance tuning stops being a local concern and turns into a global exercise, because a change for one feature can ripple unpredictably across others.
The longer this goes on, the harder it becomes to reverse. When you eventually want to extract a module, whether into its own service or simply into a cleaner internal boundary, you discover that everything is entangled at the data level. What looked like a modular design turns out to be tightly coupled where it matters most.
At that point, you are not really building a modular monolith. You are building a monolith with folders. It’s the architectural equivalent of what my Spanish friend says when I drift too far from his paella recipe. Once you start throwing in whatever is convenient, it stops being paella and becomes “arroz con cosas”, rice with things. It might still be edible, but it is no longer the thing you set out to make.
That is exactly what happens with a shared DbContext. The intent was modularity, but convenience takes over. What you end up with may still work, but it has quietly lost its identity, and undoing that damage later is far harder than getting it right up front.
What is Data Ownership?
Data ownership is another area where the language sounds clear but the reality often gets blurred. If a module owns a business concept, then it must own everything that makes that concept real in the system. That includes the schema, the mappings, the persistence rules, and the full lifecycle of the data from creation to deletion. Anything less than that is shared ownership, and shared ownership is where boundaries quietly fall apart.
There is no such thing as “mostly owns” or “owns it except for reporting”. The moment another module can shape, query, or optimise that data on its own terms, ownership is already compromised. The module may still be responsible for the concept in theory, but in practice the data has become a shared resource.
The architectural consequence of real ownership is simple and uncomfortable for some teams: one DbContext per module. Not one per feature, and not multiple bounded contexts hiding inside the same module. One module, one persistence boundary. That is what makes ownership explicit, enforceable, and durable as the system grows.
The Immediate Pushback
The pushback comes immediately, and to be fair, it is justified. The moment you say “multiple DbContexts”, the same questions surface almost every time. What about transactions? What if a single operation needs to update two modules? Isn’t this just distributed systems inside a single process?
These are good questions, and they are honest ones. They usually come from people who have been burned by data inconsistency or partial failures before, and who are rightly cautious about introducing new failure modes.
The mistake is not in asking those questions. The mistake is in trying to answer them with the wrong tools.
What a Multi-DbContext Modular Monolith Looks Like
At a structural level, it’s simple.
Users.Module
UsersDbContext
Billing.Module
BillingDbContext
Each DbContext:
Lives inside its module
Is internal to that module
Only knows about its own aggregates
No shared base DbContext.
No shared migrations.
No shared entity configurations.
Here’s the mental model:

Same physical database if you want. Different schemas. Different contexts. Different ownership.
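As a minimal sketch, assuming EF Core, the two contexts might look like this. The entity types (User, BillingProfile) and configuration details are illustrative placeholders, not a prescribed implementation:

```csharp
using Microsoft.EntityFrameworkCore;

// Inside Users.Module — internal, so other modules cannot reference it.
internal sealed class UsersDbContext : DbContext
{
    public DbSet<User> Users => Set<User>();

    public UsersDbContext(DbContextOptions<UsersDbContext> options)
        : base(options) { }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Each module owns its own schema inside the shared physical database.
        modelBuilder.HasDefaultSchema("users");
        modelBuilder.ApplyConfigurationsFromAssembly(typeof(UsersDbContext).Assembly);
    }
}

// Inside Billing.Module — same shape, different schema, zero shared types.
internal sealed class BillingDbContext : DbContext
{
    public DbSet<BillingProfile> Profiles => Set<BillingProfile>();

    public BillingDbContext(DbContextOptions<BillingDbContext> options)
        : base(options) { }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.HasDefaultSchema("billing");
        modelBuilder.ApplyConfigurationsFromAssembly(typeof(BillingDbContext).Assembly);
    }
}
```

Each module then maintains its own migrations, targeted with the EF CLI’s `--context` switch, e.g. `dotnet ef migrations add Initial --context UsersDbContext`.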
“But I Need a Transaction Across Modules”
This is the point where most people reach for the wrong answer. Faced with the idea of changing state in more than one module, the instinct is to try to stretch a transaction across the boundary and make the problem go away. The tooling makes this feel tempting, even safe, and there is always some variation of “the framework can handle it if we’re careful enough”.
And technically, some of that does work. You can coordinate multiple persistence operations, get everything to commit or roll back together, and walk away with a sense of consistency. On the surface, it looks like the cleanest solution.
Architecturally, it is a trap. The moment you rely on a shared transactional boundary, you have effectively collapsed the modules back into one. The boundary still exists in name, but not in behaviour. What you gain in short-term convenience, you lose in long-term modularity, flexibility, and the ability to evolve the system without fear.
Why Cross-Module Transactions Are a Smell
Here’s the uncomfortable truth. If two modules must commit atomically, then they are not independent. No amount of layering or careful naming changes that reality. Atomic consistency is a stronger signal than any diagram you can draw. That does not automatically mean your design is bad. It means the boundary is wrong. What you have separated conceptually does not match how the business actually needs the system to behave. The architecture is telling you something, and it is usually worth listening.
In practice, there are only two real possibilities. Either the modules are genuinely part of the same consistency boundary and should be treated as such, or the operation does not truly require atomic consistency and you are reaching for it out of convenience. Most systems blur this line, choosing the comfort of transactions instead of questioning whether the boundary itself makes sense.
Strong Consistency Is Rarely Needed
Imagine a Users module and a Billing module. You create a user and, as part of that flow, you also create a billing profile. The reflex is to assume that both of those actions must succeed or fail together, wrapped in a single atomic transaction. But step back and ask what actually matters. The user needs to exist. Billing needs to know about that user. That is the real requirement. If billing finds out about the new user 50 milliseconds later, nothing meaningful breaks. There is no business catastrophe hiding in that gap.
This is where strong consistency quietly reveals itself as optional rather than mandatory. Inside a monolith, eventual consistency is not a compromise or a failure of design. In many cases, it is the more honest reflection of the business process. By allowing modules to communicate asynchronously and converge on consistency over time, you preserve boundaries, reduce coupling, and end up with a system that is easier to reason about as it grows.
The Correct Pattern: Local Transactions + Events
The correct pattern is much simpler than it first appears: local transactions combined with events. Inside a module, nothing exotic is required. You use a normal EF Core transaction, make your changes, and commit them. The module stays fully in control of its own data and its own consistency. Once that work is complete, the module publishes an event to say that something meaningful has happened. That event is not an implementation detail, it is a deliberate signal to the rest of the system. It represents the only sanctioned way for other modules to react.
That event becomes the boundary crossing point. Other modules can listen, respond, and update their own state in their own time, using their own transactions. Consistency is achieved through coordination rather than coupling, and the integrity of each module’s boundary remains intact.
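On the receiving side, that might look like the following sketch. The `IEventHandler<T>` interface, the handler name, and the entity types are assumptions for illustration, not a specific library’s API:

```csharp
// Billing's reaction to an event published by Users.
// It runs in Billing's own transaction, on Billing's own DbContext.
internal sealed class UserCreatedHandler : IEventHandler<UserCreatedEvent>
{
    private readonly BillingDbContext _db;

    public UserCreatedHandler(BillingDbContext db) => _db = db;

    public async Task HandleAsync(UserCreatedEvent @event, CancellationToken ct)
    {
        // Billing only ever sees the event payload, never Users' tables.
        _db.Profiles.Add(new BillingProfile(@event.UserId));

        // SaveChangesAsync wraps the change in its own local transaction.
        await _db.SaveChangesAsync(ct);
    }
}
```

Note that the handler depends only on the event contract and its own DbContext, which is exactly what keeps the boundary intact.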

No shared DbContext.
No distributed transaction.
No lies.
“But What If Billing Fails?”
Good. Now we’re talking about reality. Failure is not an edge case, it is the normal state of complex systems. This is exactly the point where layered monoliths tend to hide that reality by forcing everything through a single transaction. It feels safe because nothing appears to fail, but it is also brittle, because all failure is collapsed into one silent rollback.
With a modular approach, failure becomes explicit. When modules communicate through events, you do not pretend that everything always succeeds. You design for the fact that things can and will fail. Recovery becomes intentional rather than accidental, and system state becomes observable instead of being hidden behind a transaction boundary.
If Billing fails, the user still exists. The event that announced the new user can be retried. The failure is visible, traceable, and something you can respond to. That is not a step backwards. It is an honest representation of how the system actually behaves, and honesty is what gives you control when things go wrong.
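Retrying implies the same event can be delivered more than once, so handlers should be idempotent. A sketch, reusing illustrative names (`IEventHandler<T>`, `BillingProfile`) that are assumptions, not a specific library:

```csharp
// An idempotent version of Billing's handler: a redelivered
// UserCreatedEvent becomes a harmless no-op.
internal sealed class UserCreatedHandler : IEventHandler<UserCreatedEvent>
{
    private readonly BillingDbContext _db;

    public UserCreatedHandler(BillingDbContext db) => _db = db;

    public async Task HandleAsync(UserCreatedEvent @event, CancellationToken ct)
    {
        // If a previous delivery already succeeded, do nothing.
        var exists = await _db.Profiles
            .AnyAsync(p => p.UserId == @event.UserId, ct);
        if (exists) return;

        _db.Profiles.Add(new BillingProfile(@event.UserId));
        await _db.SaveChangesAsync(ct);
    }
}
```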
The Outbox Pattern
Inside the Users module:
// Persist the user and the outbox message in one local transaction,
// so the event cannot be lost between the write and the publish.
using var tx = await db.Database.BeginTransactionAsync(stopToken);

db.Users.Add(user);
db.Outbox.Add(new OutboxMessage(
    "UserCreated",
    new UserCreatedEvent(user.Id)));

await db.SaveChangesAsync(stopToken);
await tx.CommitAsync(stopToken);
Billing never touches Users’ DbContext.
It just reacts to what Users emits.
This is the same pattern that lets you split later without rewriting everything.
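The other half of the outbox is a background worker that drains the table and publishes each message. A sketch, assuming .NET’s `BackgroundService` base class; the `IEventBus` abstraction and the `Processed` column on `OutboxMessage` are illustrative assumptions:

```csharp
// Polls the Users module's outbox and publishes pending messages.
// If publishing throws, Processed stays null and the message is
// retried on the next pass — which is why handlers must be idempotent.
internal sealed class OutboxDispatcher : BackgroundService
{
    private readonly IServiceScopeFactory _scopes;
    private readonly IEventBus _bus; // in-process publisher, illustrative

    public OutboxDispatcher(IServiceScopeFactory scopes, IEventBus bus)
        => (_scopes, _bus) = (scopes, bus);

    protected override async Task ExecuteAsync(CancellationToken stopToken)
    {
        while (!stopToken.IsCancellationRequested)
        {
            using var scope = _scopes.CreateScope();
            var db = scope.ServiceProvider.GetRequiredService<UsersDbContext>();

            var pending = await db.Outbox
                .Where(m => m.Processed == null)
                .OrderBy(m => m.Id)
                .Take(20)
                .ToListAsync(stopToken);

            foreach (var message in pending)
            {
                await _bus.PublishAsync(message, stopToken);

                // Only mark the message done once publishing succeeded.
                message.Processed = DateTime.UtcNow;
                await db.SaveChangesAsync(stopToken);
            }

            await Task.Delay(TimeSpan.FromSeconds(5), stopToken);
        }
    }
}
```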
What About Read Models?
Another common objection shows up quickly. “But I need to join Users and Billing for queries”. This is usually framed as a hard requirement, but it is really a question about reads, not ownership. Reads do not define ownership. Writes do. The fact that you want to view data together does not mean it should be stored or managed together. Conflating the two is how boundaries get eroded under the guise of convenience.
If you genuinely need a combined view, the answer is a read model. You build a projection that listens to the relevant events, materialise a model that is shaped for querying, and optimise it for the questions you actually need to answer. That model can live wherever it makes the most sense, without punching holes through module boundaries.
Yes, this means duplication. And yes, that duplication is intentional. Data is copied because it serves a different purpose. It is fine. In fact, it is often the cleanest way to keep writes honest while still giving reads the flexibility they need.
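Such a projection can be a plain event handler that maintains a denormalised table. In this sketch, `UserBillingView`, `ReadModelDbContext`, `BillingProfileCreatedEvent`, and the handler interface are all illustrative names, not part of any prescribed API:

```csharp
// Builds a combined Users + Billing view by reacting to each module's
// events. The read model owns its own storage; neither module's
// DbContext is touched.
internal sealed class UserBillingProjection :
    IEventHandler<UserCreatedEvent>,
    IEventHandler<BillingProfileCreatedEvent>
{
    private readonly ReadModelDbContext _db;

    public UserBillingProjection(ReadModelDbContext db) => _db = db;

    public async Task HandleAsync(UserCreatedEvent @event, CancellationToken ct)
    {
        _db.UserBillingViews.Add(new UserBillingView { UserId = @event.UserId });
        await _db.SaveChangesAsync(ct);
    }

    public async Task HandleAsync(BillingProfileCreatedEvent @event, CancellationToken ct)
    {
        var row = await _db.UserBillingViews
            .SingleAsync(v => v.UserId == @event.UserId, ct);

        // Intentional duplication: the data is copied because it serves
        // a different purpose, shaped purely for querying.
        row.HasBillingProfile = true;
        await _db.SaveChangesAsync(ct);
    }
}
```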
Up Next in the Series
Now that we’ve:
Enforced boundaries
Structured behaviour
Isolated data
The next problem shows up immediately:
How do modules talk to each other without turning into a distributed monolith?
That’s where in-process messaging, contracts, and intent-driven communication come in.





