
Part 4. Inter-Module Communication Without Creating a Distributed Monolith


Once you’ve enforced boundaries, adopted vertical slices, and given each module its own DbContext, a new tension appears almost immediately.

Modules are now properly isolated. They own their data. They commit independently. They even fail independently.

And then you hit the next question:

How do these things actually talk to each other?

This is the point where a lot of otherwise solid modular monoliths quietly fall apart. Not all at once. Slowly. Politely. One innocent decision at a time.

A direct method call here.
A shared DTO there.
A synchronous dependency that “should be fine”.

Before long, you haven’t broken any rules explicitly, but you’ve recreated the very thing you were trying to escape.

The Trap: Treating Modules Like Classes

The most common mistake I see is treating modules like they are just big classes. The reasoning usually sounds harmless enough. They are in the same process, so why not just call across and get the answer directly? From a technical point of view, that is correct. From an architectural point of view, it is disastrous. The moment one module calls another synchronously and expects an immediate answer, a chain of coupling is introduced, whether you intended it or not. Execution order becomes fixed. Failure modes bleed across boundaries. Time suddenly matters in ways you did not plan for, and assumptions about internal behaviour start to leak out. What looked like a clean boundary turns into a thin veil. The module may still exist on the filesystem, but its autonomy is gone.

What You’re Actually Trying to Preserve

Before choosing any communication mechanism, it helps to be clear about what you are actually trying to preserve. The goal is not elegance or convenience in the short term. It is about protecting a set of properties that make the system resilient over time. You are trying to preserve autonomy, so each module can make its own decisions without being dragged into someone else’s execution flow. You are trying to preserve replaceability, so modules can evolve, change shape, or even be rewritten without forcing a cascade of changes elsewhere. You are trying to preserve honest failure, where problems surface clearly instead of being buried inside a larger operation. And you are trying to preserve extractability, so future you still has real options when the system needs to change.

If a communication style undermines any of those, it is the wrong choice, no matter how convenient or familiar it feels today. Convenience fades quickly. The consequences of broken boundaries tend to stick around much longer.

Three Ways Modules Can Communicate

In a modular monolith, there are really only three legitimate ways modules should talk to each other.

Everything else is a variation or a mistake.

1. Asking a Question (Synchronous, Contract-Only)

Sometimes a module genuinely needs to ask another module a question. Not to delegate work, and not to coordinate behaviour, but simply to retrieve information that the other module owns. This is the narrowest and safest form of synchronous communication you can allow. In cases like this, the intent is clear. Billing might need to know whether a user exists. An authorisation module might need to check permissions. A reporting feature might need a snapshot of reference data. In each case, the caller is asking for information, not trying to drive the other module’s behaviour.

The constraints around this kind of interaction are non-negotiable. You depend on a contract, never an implementation. You accept that the call can fail and design accordingly. And you do not embed business flow or orchestration logic into the response. When those rules are respected, synchronous queries can exist without eroding the autonomy of the modules involved.

This is not orchestration. It’s lookup.

If the answer disappears tomorrow and you have to replace it with a cache or a projection, nothing fundamental breaks.
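A minimal sketch of what "contract-only" means in practice, written in TypeScript for brevity. The module names and the `IUserExistenceQuery` contract are hypothetical, not from the series; the point is that the caller depends on an interface owned by the other module, never on its implementation or its data store.

```typescript
// Hypothetical contract published by the Users module. Billing depends
// only on this interface -- never on Users' internals or its database.
interface IUserExistenceQuery {
  userExists(userId: string): boolean;
}

// The Users module owns the implementation; callers never reference it.
class UserExistenceQuery implements IUserExistenceQuery {
  constructor(private readonly knownUsers: Set<string>) {}
  userExists(userId: string): boolean {
    return this.knownUsers.has(userId);
  }
}

// Billing asks a question. It retrieves information; it does not
// delegate work or drive the Users module's behaviour.
function canInvoice(users: IUserExistenceQuery, userId: string): boolean {
  return users.userExists(userId);
}
```

If the Users module is later rewritten, or the answer is served from a cache or a projection instead, `canInvoice` never changes.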

2. Announcing Something Happened (Asynchronous, Event-Driven)

This is the most important pattern in the entire series.

When a module completes work it owns, it announces a fact, not an instruction.

“A user was created.”
“An invoice was issued.”
“A policy was cancelled.”

It does not care who reacts. It does not wait. It does not coordinate.

This preserves autonomy better than anything else you can do.

Each module:

  • Decides if it cares

  • Handles the event in its own time

  • Fails independently

If Billing is down, Users still works.
If Notifications breaks, Billing doesn’t care.

That’s not a compromise. That’s the design doing its job.
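The behaviour above can be sketched with a deliberately tiny in-process event bus (TypeScript, illustrative names only). The key property is that a subscriber throwing does not affect the publisher or the other subscribers: Users announces a fact, and each module reacts or fails on its own.

```typescript
// Minimal in-process event bus sketch. Handler failures are isolated.
type Handler<E> = (event: E) => void;

class EventBus {
  private handlers = new Map<string, Handler<unknown>[]>();

  subscribe<E>(eventName: string, handler: Handler<E>): void {
    const list = this.handlers.get(eventName) ?? [];
    list.push(handler as Handler<unknown>);
    this.handlers.set(eventName, list);
  }

  publish<E>(eventName: string, event: E): void {
    for (const handler of this.handlers.get(eventName) ?? []) {
      try {
        handler(event); // each module reacts (or fails) independently
      } catch {
        // log and move on; the publisher does not care who failed
      }
    }
  }
}

interface UserCreated { userId: string; }

const bus = new EventBus();
const billingLedger: string[] = [];

// Billing decides it cares about the fact "a user was created".
bus.subscribe<UserCreated>("UserCreated", (e) => billingLedger.push(e.userId));

// Notifications is broken -- and Users does not care.
bus.subscribe<UserCreated>("UserCreated", () => { throw new Error("SMTP down"); });

bus.publish<UserCreated>("UserCreated", { userId: "u42" });
```

Billing still records the new user even though Notifications threw; the publisher never waited on either of them.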

3. Issuing a Command (Rare, Explicit, Dangerous)

This one should make you uncomfortable, and that discomfort is intentional. A command is not a notification and it is not a question. It is one module telling another module to do something. Not “something happened”, but “you must act”.

There are situations where this is genuinely necessary, but they should be rare. A command carries weight because it explicitly coordinates behaviour across boundaries. When you issue one, you are saying that your operation is not complete unless another module performs work on your behalf. That is a strong form of coupling, even when it is wrapped in a clean interface.

If you find yourself doing this often, your boundaries are lying to you. Either the modules are not as independent as you think, or the responsibility is split in the wrong place.

When commands are unavoidable, treat them with care. Make them explicit so their intent is obvious. Make them intentional so they are not introduced casually. And above all, make them rare, because every command chips away at the autonomy you are trying to preserve.

If this starts to feel like a workflow engine, that’s because it is. At that point, you should acknowledge it and design accordingly, not pretend it’s “just a call”.
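One way to make a command explicit rather than casual is to give it a named type of its own, as in this hedged TypeScript sketch (the `CancelPolicy` command and the Policies module are invented for illustration). The discomfort is visible in the type system: issuing this command says your operation is incomplete until another module acts.

```typescript
// A command is a visible, named artefact -- not a casual method call.
interface CancelPolicy {
  readonly kind: "CancelPolicy"; // the intent is explicit in the type itself
  policyId: string;
  reason: string;
}

// Exactly one handler, owned by the Policies module. Issuing this command
// means your operation is not complete until Policies does work on your
// behalf -- strong coupling, stated out loud instead of hidden in a call.
class PoliciesModule {
  private cancelled = new Map<string, string>();

  handle(command: CancelPolicy): void {
    this.cancelled.set(command.policyId, command.reason);
  }

  isCancelled(policyId: string): boolean {
    return this.cancelled.has(policyId);
  }
}
```

Because every command is a named type with one owner, a quick search for `kind: "..."` tells you exactly how much cross-boundary coordination the system has accumulated.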

The Illusion of Safety in Synchronous Chains

Here’s the pattern that causes the most damage: a chain of synchronous calls, each module waiting on the next to finish before it can complete its own work.
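A hedged TypeScript sketch of that shape, with invented module names. Each module calls the next and waits, so a failure at the end of the chain surfaces as a failure at the start of it.

```typescript
// The damaging shape: each module calls the next and blocks on it.
class Notifications {
  send(userId: string): void {
    throw new Error(`mail provider timeout for ${userId}`); // the last link fails
  }
}

class Billing {
  constructor(private notifications: Notifications) {}
  createAccount(userId: string): void {
    // billing work happens here, then...
    this.notifications.send(userId); // ...Billing waits on Notifications
  }
}

class Users {
  constructor(private billing: Billing) {}
  register(userId: string): void {
    // the user is saved here, then...
    this.billing.createAccount(userId); // ...Users waits on Billing
  }
}

// Registering a user now fails because a mail provider timed out
// three boundaries away: a distributed transaction in disguise.
```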

On a diagram, this kind of flow looks neat. It is linear, predictable, and easy to explain. One box calls the next, work flows left to right, and everything appears nicely ordered.

In reality, a very different set of behaviours emerges. Latency stacks as each call waits on the next. Failures cascade across boundaries. Retries multiply in unexpected ways. Partial success becomes invisible because everything is hidden behind synchronous calls and assumptions of immediacy.

What you have really created is a distributed transaction in disguise, but without any of the tooling, visibility, or honesty that distributed systems demand. And worse, you have done it inside a monolith, where nobody is watching for those failure modes because the architecture claims they do not exist.

Why Events Feel Uncomfortable at First

Events tend to feel uncomfortable at first because most developers are trained to think in terms of control flow. You call this, then you call that, and if something goes wrong you roll everything back. The path is explicit, linear, and easy to follow in your head.

Events break that mental model. When you publish an event, you do not know who will react to it, when they will react, or in what order. That loss of immediate control can feel dangerous, especially if you are used to relying on transactions to keep everything tidy.

What actually changes is where correctness lives. Instead of being enforced by a single transactional boundary, correctness is enforced through idempotency, retries, observability, and explicit state management. These mechanisms are harder to fake and harder to ignore. They force you to deal with reality rather than hiding it behind rollback semantics, and that honesty is exactly what makes event-driven designs robust over time.

In-Process Messaging Is Not a Shortcut

In-process messaging is often misunderstood as a shortcut. Developers reach for a messaging library and tell themselves that it is just method calls with a bus in the middle. That assumption is where the trouble starts.

The bus is not an implementation detail. The bus is the boundary. If you treat it as invisible, you will design handlers that quietly rely on immediate execution, guaranteed ordering, and the absence of failure. Those assumptions hold only as long as everything stays exactly where it is.

The moment you move that bus out of process, all of those hidden assumptions surface at once, and things start breaking in surprising ways. The system was never designed for the realities it now has to face.

The right mental model is to design your in-process messaging as if it were already remote. Assume latency. Assume failure. Assume retries and reordering. If you do that, extracting the messaging infrastructure later becomes a mechanical change rather than a fundamental redesign, and the boundaries you established early continue to hold.
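"Assume failure and retries" can be baked into the delivery path itself. This TypeScript sketch (names invented) shows an in-process dispatcher that already behaves like remote infrastructure: it retries a bounded number of times and reports undeliverable events instead of letting them crash the publisher.

```typescript
// Delivery sketch that assumes failure, even in-process.
type RetryHandler<E> = (event: E) => void;

function deliverWithRetry<E>(
  handler: RetryHandler<E>,
  event: E,
  attempts = 3,
): boolean {
  for (let i = 0; i < attempts; i++) {
    try {
      handler(event);
      return true; // delivered
    } catch {
      // assume the failure is transient; a real system would back off here
    }
  }
  return false; // out of budget: dead-letter it for inspection
}
```

Handlers written against this contract already tolerate redelivery and delay, so swapping the dispatcher for a real broker later is a mechanical change, not a redesign.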

A Rule That Saves You Years Later

Here’s a rule I have learned to trust over time. If a module needs to wait for another module to finish work, you probably have the wrong boundary. Not always, but often enough that it is worth pausing and questioning the design when it happens. Waiting implies coordination. Coordination implies shared responsibility. And shared responsibility implies coupling. Each step pulls the modules closer together, even if the code still looks clean on the surface.

When you notice this pattern, it is a signal to slow down and reassess. Either the work truly belongs in the same module, or the interaction should be reshaped so that one module can proceed independently. Catching this early can save you years of friction later on.

Living With Partial Failure

This is the part most architectures try hard to avoid. Partial failure feels messy and uncomfortable, so the instinct is to design it away. In reality, partial failure is normal. One module succeeds, another fails, and the system continues to run. When that happens, the response should be deliberate. You log what happened. You retry where it makes sense. You compensate if the business process requires it. And, crucially, you observe it so you can understand how often it occurs and why. What you do not do is hide that failure inside a transaction and pretend it never happened. That illusion might hold for a while, but it always leaks eventually, and it almost always leaks at the worst possible time, when the system is under pressure and the cost of surprises is highest.

Why This Matters

I don’t want systems where I have to remember invisible rules. I do not want to rely on assumptions like “this always happens before that” when there is nothing in the system actually enforcing it. When I come back to a codebase after a break, or late at night when my brain is half-fried, I want the behaviour to be explicit. I want communication patterns that tell the truth about dependencies instead of hiding them behind convention or tribal knowledge. Modules that announce facts and react independently are easier to reason about. They are easier to monitor, easier to debug, and easier to evolve without fear. And they age better, which is something that matters far more than most people like to admit when the system is still young.

Bringing It All Together

So far in the series, we’ve established that:

  • Boundaries must be enforced, not documented

  • Behaviour belongs in vertical slices

  • Data must be owned per module

  • Communication must preserve autonomy

If you skip any one of these, the others weaken.

Get them all roughly right, and you end up with something rare: a monolith that doesn’t rot as it grows.


Up Next in the Series

Now that modules can talk safely, the next problem shows up immediately:

How do you apply cross-cutting concerns like authorisation without blowing holes through your boundaries?

Permissions, policies, and identity have a habit of leaking everywhere if you’re not careful.