Denormalisation for Performance in C#

How to unlock real performance gains without turning your data model into a mess
Most engineers start in the same place. You normalise the schema, remove duplication, keep each fact in one place, and rely on joins to reconstruct the answer when the application needs it. That is still the right default for a transactional system. The problem is that many people quietly stretch that rule too far. They end up assuming that a data model which is logically clean must also be operationally fast. In production, that falls apart quickly.
The real cost of a heavily normalised model is not usually visible in one query. It shows up in repetition. The same joins run over and over. The same aggregates are recalculated on every request. The same object graph is rebuilt for every page, every API call, every dashboard tile, and every export. The database becomes a reconstruction engine. The application becomes a shaping engine. Both work hard, not because the business needs new information, but because the model forces them to keep rebuilding information the system already knows.
This is where denormalisation comes in.
Denormalisation is a deliberate decision to move work away from the read path and into the write path, or into a background projection step, because doing that once is cheaper than doing the same work thousands of times on demand. In a modern C# system, especially one built with ASP.NET Core, EF Core, background workers, queues, Redis, and event driven processing, that trade can transform performance.
The gains are not abstract. You see them in lower latency, lower database CPU, fewer allocations, more stable p95 and p99 response times, better concurrency, and less fragile query behaviour. You also get a cleaner separation between the source of truth and the shape that the application actually needs at the edge.
This is important mainly in systems with heavy read traffic, complex dashboards, queue screens, search pages, configuration endpoints, reporting APIs, and integration surfaces that repeatedly ask for the same shaped view of the data. If you treat every one of those reads as a fresh act of discovery, the system wastes time. If you precompute and persist the shape once, the request becomes a cheap lookup.
The key idea is simple. Normalisation optimises storage and correctness. Denormalisation optimises access. Mature systems usually need both.
The hidden cost of clean relational models
A normalised schema protects integrity. That is its job. It makes writes understandable and it keeps the domain tidy. The trouble starts when the application’s hot paths are read heavy and shape heavy.
Imagine a common business screen. You need to show a case list with case number, customer name, policy type, current stage, outstanding balance, number of open actions, last correspondence date, assigned handler, SLA status, and a search summary. In a fully normalised design, those values may come from six or seven tables, plus a few aggregate queries, plus a handful of rules in application code. Nothing about that is inherently wrong. The problem is frequency. If that same shape is requested constantly, the system is paying the cost of reconstruction on every request.
The request path starts to look like this. A request arrives, the ORM issues a query across six or seven tables, the database runs the joins and the aggregates, EF Core materialises the entity graph, application code reshapes it into a DTO, and the serialiser writes the response. The next request repeats the whole sequence.
That path is often acceptable in development. Small data volumes hide the cost. A local database hides the latency. The ORM hides the SQL. Then production arrives, concurrency rises, the dataset grows, and the endpoint that felt harmless becomes one of the hottest parts of the estate.
The first thing people usually try is index tuning. That helps. Then they add projections in LINQ. That helps a bit more. Then they introduce caching. That can help a lot, but it often hides the problem rather than fixing it. A cache miss still falls back to the same expensive reconstruction path. Once the underlying shape is wrong for the read pattern, you are tuning around the problem rather than changing it.
That is why denormalisation is so powerful. It does not ask how to run the same expensive query a little faster. It asks whether the query should exist in that form at all.
What denormalisation really means in a C# system
In practice, denormalisation in a C# system usually takes one of a few forms.
You store precomputed aggregates directly on a parent row, such as current balance, open item count, or last activity date.
You build a dedicated read model that already matches a page or API response.
You snapshot descriptive values, such as broker name or product name, onto a transactional record so you do not join to reference tables on every read.
You persist flags and classifications, such as IsUrgent, RiskBand, or HasOpenTasks, instead of recalculating them repeatedly.
You compile expensive response payloads into a cached or persisted format, often as JSON, so the request path can serve them directly.
Those are all forms of the same idea. You take work that would otherwise happen every time the application reads data and you pay for it once when the data changes.
That changes the economics of the system.
If a value changes once an hour but is read ten thousand times in that hour, it is usually madness to compute it ten thousand times. Store it. Keep it fresh. Read it cheaply.
Where the performance gains come from
The gains from denormalisation are easy to hand-wave about. The useful part is knowing where they show up.
The first gain is query simplification. A query that previously needed multiple joins, aggregates, and conditional expressions can become a simple index seek against a flat row. That cuts database CPU, logical reads, memory pressure, and plan complexity.
The second gain is lower application overhead. Even if the database work is acceptable, materialising nested EF Core graphs and then reshaping them into DTOs still costs CPU and memory in the application. A flat read model avoids much of that.
The third gain is better tail performance. Complex queries are far more likely to show unstable p95 and p99 latencies, especially under concurrency or when parameter values vary. Simple denormalised queries are usually more predictable.
The fourth gain is improved cache behaviour. A denormalised row or payload already matches the response shape, so a cache miss is not painful. You do not have to rebuild the world before you can refill the cache.
The fifth gain is fewer cross service dependencies. If you snapshot small pieces of descriptive data, one service no longer needs to ask another service the same question on every request. That is often a bigger win than any SQL optimisation.
The sixth gain is better scaling behaviour. Once each request does less work, every instance can serve more traffic with more stable latency. Horizontal scaling starts to work properly because the units of work are cheaper and more predictable.
The right way to picture this is as two separate paths. On the write path, the system saves the canonical record and then updates the denormalised shape once, so writes get slightly heavier. On the read path, every request performs a single cheap lookup against that shape, so reads become dramatically cheaper. In read heavy systems that is exactly the trade you want.
Technique one, precomputed aggregates
This is the most obvious denormalisation technique and still one of the most effective. If an aggregate is read often and changes comparatively rarely, store it.
Think about balances, counts, totals, last updated timestamps, most recent activity, most recent payment date, open task count, total claim value, or number of outstanding documents. Engineers often recalculate these on every request because the database can do it. That does not mean it should.
A typical normalised query might look like this.
public sealed class AccountSummaryService
{
private readonly FinanceDbContext _db;
public AccountSummaryService(FinanceDbContext db)
{
_db = db;
}
public async Task<AccountSummaryDto?> GetAsync(Guid accountId, CancellationToken stopToken)
{
return await _db.Accounts
.Where(x => x.Id == accountId)
.Select(x => new AccountSummaryDto
{
AccountId = x.Id,
CustomerName = x.Customer.Name,
CurrentBalance = x.Transactions.Sum(t => t.Amount),
OpenInvoiceCount = x.Invoices.Count(i => !i.IsPaid),
LastPaymentUtc = x.Payments
.OrderByDescending(p => p.PaidAtUtc)
.Select(p => (DateTime?)p.PaidAtUtc)
.FirstOrDefault()
})
.SingleOrDefaultAsync(stopToken);
}
}
This is tidy. It is also doing real work every time the endpoint is called. The sum is recomputed. The count is recomputed. The payment ordering is revisited. The joins are rebuilt. If that account page is busy, you are paying that cost repeatedly for no gain in truth.
A denormalised design moves those values onto the account row or onto a dedicated account summary table.
public sealed class Account
{
public Guid Id { get; set; }
public Guid CustomerId { get; set; }
public decimal CurrentBalance { get; set; }
public int OpenInvoiceCount { get; set; }
public DateTime? LastPaymentUtc { get; set; }
}
The read path becomes much simpler.
public sealed class AccountSummaryService
{
private readonly FinanceDbContext _db;
public AccountSummaryService(FinanceDbContext db)
{
_db = db;
}
public async Task<AccountSummaryDto?> GetAsync(Guid accountId, CancellationToken stopToken)
{
return await _db.Accounts
.AsNoTracking()
.Where(x => x.Id == accountId)
.Select(x => new AccountSummaryDto
{
AccountId = x.Id,
CustomerName = x.Customer.Name,
CurrentBalance = x.CurrentBalance,
OpenInvoiceCount = x.OpenInvoiceCount,
LastPaymentUtc = x.LastPaymentUtc
})
.SingleOrDefaultAsync(stopToken);
}
}
The gain here is not subtle. You have shifted work from every read to only the writes that actually change the values.
There are two good ways to maintain these fields. If the value is part of a hard business invariant, update it in the same transaction as the canonical write. If the value is mainly for display or read optimisation, project it asynchronously through an outbox driven worker.
Here is a synchronous example.
public sealed class PaymentService
{
private readonly FinanceDbContext _db;
public PaymentService(FinanceDbContext db)
{
_db = db;
}
public async Task RecordPaymentAsync(Guid accountId, decimal amount, DateTime paidAtUtc, CancellationToken stopToken)
{
var account = await _db.Accounts.SingleAsync(x => x.Id == accountId, stopToken);
_db.Payments.Add(new Payment
{
Id = Guid.NewGuid(),
AccountId = accountId,
Amount = amount,
PaidAtUtc = paidAtUtc
});
account.CurrentBalance -= amount;
account.LastPaymentUtc = paidAtUtc;
await _db.SaveChangesAsync(stopToken);
}
}
That looks almost boring, which is exactly the point. Good denormalisation is often simple. It gives you a cheap read path because the system has already done the work.
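If the summary fields were purely for display rather than part of a business invariant, the same transaction could record an outbox message instead of touching the account row, and a background worker would apply the projection shortly afterwards. A minimal sketch, assuming an OutboxMessage table shaped like the one the projection worker later in this article consumes:
public sealed class PaymentService
{
    private readonly FinanceDbContext _db;

    public PaymentService(FinanceDbContext db)
    {
        _db = db;
    }

    public async Task RecordPaymentAsync(Guid accountId, decimal amount, DateTime paidAtUtc, CancellationToken stopToken)
    {
        _db.Payments.Add(new Payment
        {
            Id = Guid.NewGuid(),
            AccountId = accountId,
            Amount = amount,
            PaidAtUtc = paidAtUtc
        });

        // The signal commits with the payment itself, so the projection
        // cannot be lost between the write and the worker picking it up.
        _db.OutboxMessages.Add(new OutboxMessage
        {
            Id = Guid.NewGuid(),
            AggregateId = accountId,
            Type = "PaymentRecorded",
            OccurredUtc = DateTime.UtcNow
        });

        await _db.SaveChangesAsync(stopToken);
    }
}
The summary then lags by however long the worker takes to run, which is the eventual consistency trade discussed later.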
Technique two, dedicated read models
This is where denormalisation starts to move from a tactical optimisation into a strategic design choice. A read model is a table or document that exists purely because a specific screen, endpoint, or integration needs data in a specific shape.
Suppose you have an underwriting queue page. The page needs submission number, insured name, broker name, product class, status, risk rating, attachment count, assigned underwriter, created date, and an urgency flag. In a fully normalised schema those values may be scattered across several tables and some of them may need to be computed. You can absolutely query them on demand. You will just keep paying for it.
A denormalised read model lets you store exactly what the queue needs.
public sealed class SubmissionReviewQueueItem
{
public Guid SubmissionId { get; set; }
public string SubmissionNumber { get; set; } = string.Empty;
public string InsuredName { get; set; } = string.Empty;
public string BrokerName { get; set; } = string.Empty;
public string ProductClass { get; set; } = string.Empty;
public string Status { get; set; } = string.Empty;
public string RiskRating { get; set; } = string.Empty;
public int AttachmentCount { get; set; }
public string AssignedUnderwriterName { get; set; } = string.Empty;
public bool IsUrgent { get; set; }
public DateTime CreatedUtc { get; set; }
}
The query then becomes a simple paged lookup.
public sealed class ReviewQueueService
{
private readonly UnderwritingDbContext _db;
public ReviewQueueService(UnderwritingDbContext db)
{
_db = db;
}
public async Task<IReadOnlyList<ReviewQueueItemDto>> GetPageAsync(int page, int pageSize, CancellationToken stopToken)
{
return await _db.SubmissionReviewQueueItems
.AsNoTracking()
.OrderByDescending(x => x.IsUrgent)
.ThenBy(x => x.CreatedUtc)
.Skip((page - 1) * pageSize)
.Take(pageSize)
.Select(x => new ReviewQueueItemDto
{
SubmissionId = x.SubmissionId,
SubmissionNumber = x.SubmissionNumber,
InsuredName = x.InsuredName,
BrokerName = x.BrokerName,
ProductClass = x.ProductClass,
Status = x.Status,
RiskRating = x.RiskRating,
AttachmentCount = x.AttachmentCount,
AssignedUnderwriterName = x.AssignedUnderwriterName,
IsUrgent = x.IsUrgent,
CreatedUtc = x.CreatedUtc
})
.ToListAsync(stopToken);
}
}
This is the kind of change that can take an endpoint from unpredictable and expensive to stable and cheap. It also changes how you index. Instead of trying to satisfy a messy query against the whole domain model, you can build indexes specifically for the queue.
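For example, the EF Core mapping can declare an index that mirrors the queue's ordering exactly. A sketch of the relevant configuration, assuming EF Core 7 or later for IsDescending:
public sealed class UnderwritingDbContext : DbContext
{
    public DbSet<SubmissionReviewQueueItem> SubmissionReviewQueueItems => Set<SubmissionReviewQueueItem>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<SubmissionReviewQueueItem>(entity =>
        {
            entity.HasKey(x => x.SubmissionId);

            // Mirrors OrderByDescending(IsUrgent).ThenBy(CreatedUtc), so the
            // paged query walks the index instead of sorting the whole table.
            entity.HasIndex(x => new { x.IsUrgent, x.CreatedUtc })
                .IsDescending(true, false);
        });
    }
}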
The strongest way to maintain a read model is with a projection pipeline. The domain write commits. An outbox message is recorded in the same transaction. A background worker consumes the outbox and updates the read model. The read path never has to rebuild the queue shape on demand.
That pattern gives you reliability, observability, and a clean failure model. If a projection fails, you can retry it. If you need to rebuild, you can replay events or recalculate from the source of truth.
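The projector itself can stay plain EF Core code. A sketch, with the caveat that SubmissionNumber, Status, and the Attachments collection are assumed here; the Submission entity shown in this article only includes a subset of these fields:
public sealed class SubmissionProjector
{
    private readonly UnderwritingDbContext _db;

    public SubmissionProjector(UnderwritingDbContext db)
    {
        _db = db;
    }

    public async Task ProjectAsync(Guid submissionId, CancellationToken stopToken)
    {
        // Read the canonical shape once, on the write side of the system.
        var source = await _db.Submissions
            .Where(x => x.Id == submissionId)
            .Select(x => new
            {
                x.SubmissionNumber,
                x.InsuredName,
                BrokerName = x.BrokerNameSnapshot,
                x.Status,
                AttachmentCount = x.Attachments.Count,
                x.CreatedUtc
            })
            .SingleAsync(stopToken);

        var row = await _db.SubmissionReviewQueueItems
            .FindAsync(new object[] { submissionId }, stopToken);

        if (row is null)
        {
            row = new SubmissionReviewQueueItem { SubmissionId = submissionId };
            _db.SubmissionReviewQueueItems.Add(row);
        }

        // Overwrite every projected column so replays are idempotent.
        row.SubmissionNumber = source.SubmissionNumber;
        row.InsuredName = source.InsuredName;
        row.BrokerName = source.BrokerName;
        row.Status = source.Status;
        row.AttachmentCount = source.AttachmentCount;
        row.CreatedUtc = source.CreatedUtc;

        await _db.SaveChangesAsync(stopToken);
    }
}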
Technique three, snapshot descriptive data
A huge amount of hidden query cost comes from descriptive joins. Broker name. Product name. Region name. Handler display name. Organisation name. Status description. These are often joined into hot queries simply because they are stored elsewhere. That keeps the schema pure. It also keeps the read path unnecessarily busy.
Snapshotting descriptive values means copying them at the point where they matter. That often improves performance, and in many domains it also improves auditability because it preserves the value as it was when the transaction occurred.
Here is a simple example.
public sealed class Submission
{
public Guid Id { get; set; }
public Guid BrokerId { get; set; }
public string BrokerNameSnapshot { get; set; } = string.Empty;
public string ProductNameSnapshot { get; set; } = string.Empty;
public string InsuredName { get; set; } = string.Empty;
public DateTime CreatedUtc { get; set; }
}
When you create the submission, you take the snapshot.
public sealed class SubmissionService
{
private readonly UnderwritingDbContext _db;
public SubmissionService(UnderwritingDbContext db)
{
_db = db;
}
public async Task<Guid> CreateAsync(CreateSubmissionCommand command, CancellationToken stopToken)
{
var broker = await _db.Brokers.SingleAsync(x => x.Id == command.BrokerId, stopToken);
var product = await _db.Products.SingleAsync(x => x.Id == command.ProductId, stopToken);
var submission = new Submission
{
Id = Guid.NewGuid(),
BrokerId = broker.Id,
BrokerNameSnapshot = broker.DisplayName,
ProductNameSnapshot = product.Name,
InsuredName = command.InsuredName,
CreatedUtc = DateTime.UtcNow
};
_db.Submissions.Add(submission);
await _db.SaveChangesAsync(stopToken);
return submission.Id;
}
}
Now every queue, dashboard, export, and search result that needs the broker or product name can read it directly from the submission or from a projection row built from it. That removes joins from the hot path and makes results historically accurate. If the broker later changes their display name, older submissions do not silently rewrite history.
This is one of the most underused denormalisation techniques because people dismiss it as duplication. In reality it is often one of the cleanest improvements you can make.
Technique four, persist flags and classifications
A lot of expensive read logic is not about fetching data at all. It is about classifying it. Is this item urgent. Is this account over limit. Is this submission nearing SLA breach. Does this customer require manual review. Is this case ready for escalation. Those rules often combine dates, counts, statuses, and related rows. If they sit on the hot read path, they get recalculated constantly.
If the answer is needed often, persist it.
public sealed class SubmissionReviewQueueItem
{
public Guid SubmissionId { get; set; }
public bool IsUrgent { get; set; }
public string RiskBand { get; set; } = string.Empty;
public DateTime? EscalationDueUtc { get; set; }
}
The projector decides the values once.
public static class SubmissionClassification
{
public static bool CalculateUrgency(DateTime createdUtc, string status, int attachmentCount)
{
if (status == "Completed")
{
return false;
}
if (attachmentCount == 0)
{
return true;
}
return DateTime.UtcNow - createdUtc > TimeSpan.FromHours(24);
}
public static string CalculateRiskBand(decimal score)
{
if (score >= 80m)
{
return "High";
}
if (score >= 50m)
{
return "Medium";
}
return "Low";
}
}
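A projector can then stamp the persisted values in one place, so every projection path classifies identically. A sketch; the riskScore parameter and the 48 hour escalation rule are illustrative rather than taken from the article's model:
public static class SubmissionQueueClassifier
{
    public static void Apply(SubmissionReviewQueueItem row, DateTime createdUtc, string status, int attachmentCount, decimal riskScore)
    {
        // Classification happens once, at projection time, not on every read.
        row.IsUrgent = SubmissionClassification.CalculateUrgency(createdUtc, status, attachmentCount);
        row.RiskBand = SubmissionClassification.CalculateRiskBand(riskScore);

        // Illustrative rule: only urgent items carry an escalation deadline.
        row.EscalationDueUtc = row.IsUrgent ? createdUtc.AddHours(48) : (DateTime?)null;
    }
}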
Once you store these values, the database can index them directly. That is the real shift. Instead of asking the engine to compute urgency on every candidate row, you let it seek on IsUrgent or RiskBand. That changes both performance and plan quality.
Technique five, flattened search columns
Search is a classic source of accidental complexity. Users want one box. They expect it to match case number, broker name, customer name, postcode, product, maybe even a phone number or note. A normalised model turns that into a wide OR condition with several joins, or pushes the team into adding a separate search engine earlier than they really need one.
A useful middle ground is to denormalise search into a dedicated row with flattened searchable fields.
public sealed class SubmissionSearchRow
{
public Guid SubmissionId { get; set; }
public string SubmissionNumber { get; set; } = string.Empty;
public string BrokerName { get; set; } = string.Empty;
public string InsuredName { get; set; } = string.Empty;
public string Postcode { get; set; } = string.Empty;
public string ProductName { get; set; } = string.Empty;
public string SearchText { get; set; } = string.Empty;
public DateTime CreatedUtc { get; set; }
}
A projector builds the flattened text.
public sealed class SubmissionSearchProjector
{
private readonly UnderwritingDbContext _db;
public SubmissionSearchProjector(UnderwritingDbContext db)
{
_db = db;
}
public async Task RebuildAsync(Guid submissionId, CancellationToken stopToken)
{
var source = await _db.Submissions
.Where(x => x.Id == submissionId)
.Select(x => new
{
x.Id,
x.SubmissionNumber,
x.BrokerNameSnapshot,
x.InsuredName,
x.Postcode,
x.ProductNameSnapshot,
x.CreatedUtc
})
.SingleAsync(stopToken);
var searchText = string.Join(' ',
source.SubmissionNumber,
source.BrokerNameSnapshot,
source.InsuredName,
source.Postcode,
source.ProductNameSnapshot)
.ToLowerInvariant();
var row = await _db.SubmissionSearchRows.FindAsync(new object[] { submissionId }, stopToken);
if (row is null)
{
row = new SubmissionSearchRow { SubmissionId = submissionId };
_db.SubmissionSearchRows.Add(row);
}
row.SubmissionNumber = source.SubmissionNumber;
row.BrokerName = source.BrokerNameSnapshot;
row.InsuredName = source.InsuredName;
row.Postcode = source.Postcode;
row.ProductName = source.ProductNameSnapshot;
row.CreatedUtc = source.CreatedUtc;
row.SearchText = searchText;
await _db.SaveChangesAsync(stopToken);
}
}
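The read side then collapses to a single predicate on one column. A sketch using EF.Functions.Like, which translates to a SQL LIKE:
public sealed class SubmissionSearchService
{
    private readonly UnderwritingDbContext _db;

    public SubmissionSearchService(UnderwritingDbContext db)
    {
        _db = db;
    }

    public async Task<IReadOnlyList<SubmissionSearchRow>> SearchAsync(string term, int take, CancellationToken stopToken)
    {
        // The projector lower cased SearchText, so lower case the input too.
        var pattern = "%" + term.Trim().ToLowerInvariant() + "%";

        return await _db.SubmissionSearchRows
            .AsNoTracking()
            .Where(x => EF.Functions.Like(x.SearchText, pattern))
            .OrderByDescending(x => x.CreatedUtc)
            .Take(take)
            .ToListAsync(stopToken);
    }
}
A LIKE with a leading wildcard still scans, so this pairs with SQL full text indexing, or a dedicated search engine, once row counts demand it.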
This is not a replacement for a true search platform in every case. It is a practical step that often solves internal search needs very well and removes painful joins from the request path.
Technique six, compiled JSON payloads
Some read paths are expensive not because the data is hard to query, but because the response is expensive to build. Configuration payloads, product catalogues, pricing rules, feature flag definitions, and reference datasets often fall into this category. The source data may be split across multiple tables and the application may have to turn it into a nested object model before serialising it.
If the payload changes relatively rarely and is read heavily, compile it once and store the result.
public sealed class CompiledProductConfig
{
public Guid ProductId { get; set; }
public string Version { get; set; } = string.Empty;
public string JsonPayload { get; set; } = string.Empty;
public DateTime CompiledUtc { get; set; }
}
A compiler service generates the payload after changes.
public sealed class ProductConfigCompiler
{
private readonly ProductDbContext _db;
public ProductConfigCompiler(ProductDbContext db)
{
_db = db;
}
public async Task CompileAsync(Guid productId, CancellationToken stopToken)
{
var product = await _db.Products
.Where(x => x.Id == productId)
.Select(x => new
{
x.Id,
x.Name,
Rules = x.Rules
.OrderBy(r => r.Priority)
.Select(r => new
{
r.Key,
r.Operator,
r.Value
})
.ToList()
})
.SingleAsync(stopToken);
var payload = JsonSerializer.Serialize(product);
var row = await _db.CompiledProductConfigs.FindAsync(new object[] { productId }, stopToken);
if (row is null)
{
row = new CompiledProductConfig { ProductId = productId };
_db.CompiledProductConfigs.Add(row);
}
row.Version = Guid.NewGuid().ToString("N");
row.JsonPayload = payload;
row.CompiledUtc = DateTime.UtcNow;
await _db.SaveChangesAsync(stopToken);
}
}
The request path then becomes almost trivial.
public sealed class ProductConfigService
{
private readonly ProductDbContext _db;
public ProductConfigService(ProductDbContext db)
{
_db = db;
}
public async Task<string?> GetCompiledJsonAsync(Guid productId, CancellationToken stopToken)
{
return await _db.CompiledProductConfigs
.AsNoTracking()
.Where(x => x.ProductId == productId)
.Select(x => x.JsonPayload)
.SingleOrDefaultAsync(stopToken);
}
}
This is denormalisation at the payload level. It is blunt, and when used in the right place it is extremely effective. You remove query assembly and serialisation work from the hot path entirely.
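Because the stored payload is already JSON, the endpoint can return it verbatim rather than deserialising and serialising it again. A minimal sketch, assuming an ASP.NET Core minimal API with ProductConfigService registered in the container:
app.MapGet("/products/{productId:guid}/config",
    async (Guid productId, ProductConfigService service, CancellationToken stopToken) =>
{
    var json = await service.GetCompiledJsonAsync(productId, stopToken);

    // Results.Content writes the stored string straight to the response,
    // so no object graph is rebuilt and nothing is serialised again.
    return json is null
        ? Results.NotFound()
        : Results.Content(json, "application/json");
});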
How to implement denormalisation cleanly in .NET
The biggest risk with denormalisation is not duplication. It is ambiguity. If nobody can tell which model is canonical, which values are derived, how freshness works, and how to repair drift, the design will decay.
A clean .NET implementation usually has four parts.
You keep the canonical write model explicit. This is the model the business truly owns.
You emit an outbox event or domain event when the source data changes.
You project that event into one or more read models in a background process.
You make the read endpoints talk to the read models directly, without trying to reconstruct the domain again.
A simple hosted service can handle the projection side. In larger systems that may be a separate worker or Azure Function. The important part is not the hosting model. The important part is that the projection is idempotent and observable.
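The outbox row the worker below consumes can stay minimal. A sketch matching the fields the worker reads:
public sealed class OutboxMessage
{
    public Guid Id { get; set; }
    public Guid AggregateId { get; set; }
    public string Type { get; set; } = string.Empty;
    public DateTime OccurredUtc { get; set; }
    public DateTime? ProcessedUtc { get; set; }
}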
public sealed class SubmissionProjectionWorker : BackgroundService
{
private readonly IServiceScopeFactory _scopeFactory;
private readonly ILogger<SubmissionProjectionWorker> _logger;
public SubmissionProjectionWorker(
IServiceScopeFactory scopeFactory,
ILogger<SubmissionProjectionWorker> logger)
{
_scopeFactory = scopeFactory;
_logger = logger;
}
protected override async Task ExecuteAsync(CancellationToken stopToken)
{
while (!stopToken.IsCancellationRequested)
{
using var scope = _scopeFactory.CreateScope();
var db = scope.ServiceProvider.GetRequiredService<UnderwritingDbContext>();
var projector = scope.ServiceProvider.GetRequiredService<SubmissionProjector>();
var batch = await db.OutboxMessages
.Where(x => x.ProcessedUtc == null && x.Type == "SubmissionChanged")
.OrderBy(x => x.OccurredUtc)
.Take(100)
.ToListAsync(stopToken);
if (batch.Count == 0)
{
await Task.Delay(TimeSpan.FromSeconds(1), stopToken);
continue;
}
foreach (var message in batch)
{
try
{
await projector.ProjectAsync(message.AggregateId, stopToken);
message.ProcessedUtc = DateTime.UtcNow;
}
catch (Exception ex)
{
_logger.LogError(ex, "Failed to project submission {SubmissionId}", message.AggregateId);
}
}
await db.SaveChangesAsync(stopToken);
}
}
}
That worker does not need to be clever. It needs to be reliable. Projection code should be deterministic, repeatable, and easy to rebuild.
This pattern works because it gives you separation. Writes preserve truth. Projections shape data for speed. Reads stay lean.
Measuring the gains properly
If you denormalise without measuring, you are guessing. Sometimes the guess is right, but expert engineering means proving it.
Do not just measure average response time. That hides a lot. You want median, p95, and p99. You want database CPU and logical reads. You want application allocations. You want throughput under concurrency. You want to know whether you made writes slightly heavier and whether that matters.
The pattern to look for is simple. If the old design has acceptable median latency but poor p95 and p99, denormalisation usually helps a lot because it simplifies the work and stabilises the plan. If the old design burns database CPU and application allocations on every request, denormalisation usually helps there too.
In C# terms, you should benchmark at three levels. Measure the database query cost. Measure the endpoint under realistic concurrency. Measure the shaping cost in process if serialisation or object mapping is part of the problem. A single stopwatch around an API call is not enough.
It is common to see a read endpoint go from well over one hundred milliseconds to below twenty once it moves from reconstruction to direct lookup. More importantly, the tail often tightens dramatically. The endpoint stops having bad days.
That is a stronger win than a modest median improvement because production pain lives in the tail.
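For the in-process level, a BenchmarkDotNet comparison makes the shaping cost visible. A sketch; the factory, the test data, and the DenormalisedAccountSummaryService name are hypothetical, introduced so both versions of the service can coexist:
[MemoryDiagnoser]
public class AccountSummaryBenchmarks
{
    private FinanceDbContext _db = null!;
    private Guid _accountId;

    [GlobalSetup]
    public void Setup()
    {
        // Point at a realistically sized dataset. An empty local
        // database hides the reconstruction cost entirely.
        _db = FinanceDbContextFactory.Create();
        _accountId = KnownTestData.BusyAccountId;
    }

    [Benchmark(Baseline = true)]
    public Task<AccountSummaryDto?> Reconstructed()
        => new AccountSummaryService(_db).GetAsync(_accountId, CancellationToken.None);

    [Benchmark]
    public Task<AccountSummaryDto?> Denormalised()
        => new DenormalisedAccountSummaryService(_db).GetAsync(_accountId, CancellationToken.None);
}
This only covers the in-process cost. The p95 and p99 figures still need a load test against the running endpoint under realistic concurrency.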
The trade-offs you must own
Denormalisation works because it changes where the work happens. That means the cost does not disappear. It moves.
Writes may become heavier because you now update projections or summary fields.
You may accept eventual consistency if projections run asynchronously.
You add more moving parts, especially if you use outbox processing and background workers.
You need rebuild and reconciliation tooling because projections can drift if there is a bug.
None of those are reasons to avoid denormalisation. They are reasons to design it properly.
The important discipline is to be explicit. Name read models as read models. Keep projection logic out of the domain core where possible. Decide which values must be transactionally current and which values can lag slightly. Build replays or rebuild jobs so you can recover from bad logic. Make it obvious to every engineer which table tells the truth and which table exists for speed.
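A rebuild job does not need to be sophisticated. A sketch, assuming the projector shown earlier; because that projector overwrites every column, replaying the whole set is safe:
public sealed class QueueRebuildJob
{
    private readonly UnderwritingDbContext _db;
    private readonly SubmissionProjector _projector;

    public QueueRebuildJob(UnderwritingDbContext db, SubmissionProjector projector)
    {
        _db = db;
        _projector = projector;
    }

    public async Task RunAsync(CancellationToken stopToken)
    {
        // Re-project everything from the source of truth.
        var ids = await _db.Submissions
            .OrderBy(x => x.CreatedUtc)
            .Select(x => x.Id)
            .ToListAsync(stopToken);

        foreach (var id in ids)
        {
            await _projector.ProjectAsync(id, stopToken);
        }
    }
}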
If you fail to do that, denormalisation becomes accidental duplication. That is where teams get burned.
When not to denormalise
Do not denormalise because a query feels ugly. Ugly code is not always expensive code.
Do not denormalise values you cannot clearly derive and refresh.
Do not duplicate fields with no owner and no repair story.
Do not turn projections into hidden sources of truth.
Do not assume eventual consistency is always harmless. A stale dashboard count is one thing. A stale available credit decision is another.
Do not denormalise everything. Most systems only need it in a few hot places. If you apply it everywhere, you increase complexity without improving the parts that matter.
The expert judgement is knowing where the hot paths really are and shaping only those.
Denormalisation is one of the few performance techniques that changes the shape of the problem instead of merely tuning around it. Indexes, cache layers, and ORM tweaks all matter, but they mostly help you execute the same work more efficiently. Denormalisation asks a better question. Should the system be doing this work on every read at all.
In many serious C# systems, the honest answer is no.
If the application already knows a balance, a count, a risk band, a queue shape, a search row, or a compiled payload, and if that value is read far more often than it changes, storing it in the form the read path needs is not a compromise. It is good engineering.
The strongest systems keep their transactional core clean and truthful. Then they build denormalised shapes around that core for speed. They measure the gains. They own the trade-offs. They keep projections rebuildable. They keep the source of truth clear. That is how you get fast systems without losing control of the design.
If you want real performance gains from denormalisation, do not think of it as breaking the rules. Think of it as moving work to the cheapest place in the system. When you do that deliberately, your database stops reconstructing the obvious, your APIs stop carrying unnecessary weight, and your read path starts behaving like it was designed for production rather than for a whiteboard.