JasperFx Software works hand in hand with our clients to improve their outcomes on software projects using the “Critter Stack” (Marten and Wolverine). Based on our engagements with client projects as well as the greater Critter Stack user base, we’ve built up quite a few optional usages and settings in the two frameworks to solve specific technical challenges.
The unfortunate reality of managing a long-lived application framework such as Wolverine or a complicated library like Marten is the need to continuously improve the tools while trying really hard not to introduce regression errors for our clients when they upgrade. To that end, we’ve had to make several potentially helpful features “opt in,” meaning that users have to explicitly turn on feature flag type settings for them. A common cause of this is any change that introduces database schema changes, as we try really hard to only do that in major version releases (Wolverine 5.0 added some new tables to SQL Server or PostgreSQL storage, for example).
And yes, we’ve still introduced regression bugs in Marten or Wolverine far more times than I’d like, even with trying to be careful. In the end, I think the only guaranteed way to constantly and safely improve tools like the Critter Stack is to just be responsive to whatever problems slip through your quality gates and try to fix those problems quickly to regain trust.
With all that being said, let’s pretend we’re starting a greenfield project with the Critter Stack and we want to build in the best performing system possible with some added options for improved resiliency as well. To jump to the end state, this is what I’m proposing for a new optimized greenfield setup for users:
var builder = Host.CreateApplicationBuilder();

builder.Services.AddMarten(m =>
{
    // Much more coming...
    m.Connection(builder.Configuration.GetConnectionString("marten"));

    // 50% improvement in throughput, less "event skipping"
    m.Events.AppendMode = EventAppendMode.Quick;

    // or if you care about the timestamps -->
    m.Events.AppendMode = EventAppendMode.QuickWithServerTimestamps;

    // 100% do this, but be aggressive about taking advantage of it
    m.Events.UseArchivedStreamPartitioning = true;

    // These cause some database changes, so can't be defaults,
    // but they might help "heal" systems that have problems later
    m.Events.EnableAdvancedAsyncTracking = true;

    // Enables you to mark events as just plain bad so they are skipped
    // in projections from here on out
    m.Events.EnableEventSkippingInProjectionsOrSubscriptions = true;

    // If you do this, you pretty well have to use FetchForWriting()
    // in your command handlers -- but you should be doing that anyway.
    // This will optimize the usage of Inline projections, but will force
    // you to treat your aggregate projection "write models" as being
    // immutable in your command handler code.
    // You'll want to use the "Decider Pattern" / "Aggregate Handler Workflow"
    // style for your commands rather than a self-mutating "AggregateRoot"
    m.Events.UseIdentityMapForAggregates = true;

    // Future proofing a bit. Will help with some future
    // rebuild optimizations
    m.Events.UseMandatoryStreamTypeDeclaration = true;

    // This is just annoying anyway
    m.DisableNpgsqlLogging = true;
})

    // This will remove some runtime overhead from Marten
    .UseLightweightSessions()

    .IntegrateWithWolverine(x =>
    {
        // Let Wolverine do the load distribution better than
        // what Marten by itself can do
        x.UseWolverineManagedEventSubscriptionDistribution = true;
    });
builder.Services.AddWolverine(opts =>
{
    // This *should* have some performance improvements, but would
    // require downtime to enable in existing systems
    opts.Durability.EnableInboxPartitioning = true;

    // Extra resiliency for unexpected problems, but can't be
    // defaults because this causes database changes
    opts.Durability.InboxStaleTime = 10.Minutes();
    opts.Durability.OutboxStaleTime = 10.Minutes();

    // Just annoying
    opts.EnableAutomaticFailureAcks = false;

    // Relatively new behavior that will store "unknown" messages
    // in the dead letter queue for possible recovery later
    opts.UnknownMessageBehavior = UnknownMessageBehavior.DeadLetterQueue;
});
using var host = builder.Build();
return await host.RunJasperFxCommands(args);
Now, let’s talk more about some of these settings…
Lightweight Sessions with Marten
The first option we’re going to explicitly add is to use “lightweight” sessions in Marten:
var builder = Host.CreateApplicationBuilder();

builder.Services.AddMarten(m =>
{
    // Elided configuration...
})

    // This will remove some runtime overhead from Marten
    .UseLightweightSessions()
By default, Marten will use a heavier version of IDocumentSession that incorporates an identity map internally to track documents (entities) already loaded by that session. When you request to load an entity by its identity, Marten’s session will happily check whether it has already loaded that entity and give you the same object back without making another database call.
The identity map usage is mostly helpful when you have unclear or deeply nested call stacks where different elements of the code might try to load the same data as part of the same HTTP request or command handling. If you follow what we’d call “Critter Stack best practices,” especially for Wolverine usage, you’ll know that we very strongly recommend against deep call stacks and excessive layering.
Moreover, I would argue that you should never need the identity map behavior if you were building a system with an idiomatic Critter Stack approach, so the default session type is actually harmful in that it adds extra runtime overhead. The “lightweight” sessions run leaner by completely eliminating all the dictionary storage and lookups.
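Just to make the difference concrete, here’s a little sketch assuming a hypothetical User document type, an IDocumentStore named store, and a known userId:

// With the default identity-mapped session...
await using var tracked = store.IdentitySession();
var one = await tracked.LoadAsync<User>(userId);
var two = await tracked.LoadAsync<User>(userId);
// one and two are the exact same object instance, and the second
// LoadAsync() call never touched the database

// ...versus a lightweight session
await using var lightweight = store.LightweightSession();
var three = await lightweight.LoadAsync<User>(userId);
var four = await lightweight.LoadAsync<User>(userId);
// three and four are two separate objects from two separate database
// loads, with no tracking dictionaries in the middle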
Why, you ask, is the identity map behavior the default?
We were originally designing Marten as a near drop in replacement for RavenDb in a big system, so we had to mimic that behavior right off the bat to be able to make the replacement in a timely fashion
If we changed the default behavior, it can easily break code in existing systems that upgrade in ways that are very hard to predict and unfortunately hard to diagnose. And of course, this is most likely a problem in the exact kind of codebases that are hard to reason about. How do I know this and why am I so very certain this is so you ask? Scar tissue.
The Wolverine community fields a lot of questions from people who are moving to Wolverine from their previous MediatR usage. A natural response is to try to use Wolverine as a pure drop-in replacement for MediatR, and even to keep the existing MediatR idioms they’re already used to. However, Wolverine comes from a different philosophy than MediatR and most of the other “mediator” tools that inspired it, and using Wolverine with its own idioms can lead to much simpler code and more efficient execution. Inspired by a conversation I had online today, let’s jump into an example that I think shows quite a bit of contrast between the tools.
We’ve tried to lay out some of the differences between the tools in our Wolverine for MediatR Users guide, including the section this post is taken from.
Here’s an example of MediatR usage I borrowed from this blog post that shows the usage of MediatR within a shopping cart subsystem:
// (The start of this controller action was cut off in the original;
// it would have been roughly this shape, with result being the
// custom Result<T> discussed below)
[HttpPost]
public async Task<IActionResult> AddToCart(AddToCartRequest request)
{
    var result = await _mediator.Send(request);

    if (result.IsSuccess)
    {
        return Ok("Product added to the cart successfully.");
    }
    else
    {
        return BadRequest(result.ErrorMessage);
    }
}
Note the usage of the custom Result<T> type from the message handler. Folks using MediatR love these custom Result types for passing information between logical layers because they avoid throwing exceptions and communicate failure cases more clearly.
Wolverine is all about reducing code ceremony and we always strive to write application code as synchronous pure functions whenever possible, so let’s just write the exact same functionality as above using Wolverine idioms to shrink down the code:
public static class AddToCartRequestEndpoint
{
    // Remember, we can do validation in middleware, or
    // even do a custom Validate() : ProblemDetails method
    // to act as a filter so the main method is the happy path
    [WolverinePost("/api/cart/add"), EmptyResponse]
    public static Update<Cart> Post(
        AddToCartRequest request,

        // This usage will return a 400 status code if the Cart
        // cannot be found (the rest of this sample was cut off in
        // the original; a plausible completion follows)
        [Entity] Cart cart)
    {
        cart.AddItem(request.ProductId, request.Quantity); // hypothetical domain method
        return Storage.Update(cart);
    }
}
There’s a lot going on above, so let’s dive into some of the details:
I used Wolverine.HTTP to write the HTTP endpoint so we only have one piece of code for our “vertical slice” instead of having both a Controller method and a matching message handler for the same logical command. Wolverine.HTTP embraces our Railway Programming model with direct support for the ProblemDetails specification as a means of stopping the HTTP request, so that validation pre-conditions can be checked by middleware and the main endpoint method really is just the “happy path”.
The code above is using Wolverine’s “declarative data access” helpers you see in the [Entity] usage. We realized early on that a lot of message handlers or HTTP endpoints need to work on a single domain entity or a handful of entities loaded by identity values riding on either command messages, HTTP requests, or HTTP routes. At runtime, if the Cart isn’t found by loading it from your configured application persistence (which could be EF Core, Marten, or RavenDb at this time), the whole HTTP request would stop with status code 400 and a message communicated through ProblemDetails that the requested Cart cannot be found.
The key point I’m trying to prove is that idiomatic Wolverine results in potentially less repetitive code, less code ceremony, and less layering than MediatR idioms. Sure, it’s going to take a bit to get used to Wolverine idioms, but the potential payoff is code that’s easier to reason about and much easier to unit test — especially if you’ll buy into our A-Frame Architecture approach for organizing code within your slices.
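As a quick illustration of that handler style, here’s what a synchronous, pure-function handler with a cascading message might look like. This is a sketch with hypothetical ShipOrder, OrderShipped, and Order types (and persistence of the changed Order elided):

public record ShipOrder(Guid OrderId);
public record OrderShipped(Guid OrderId);

public static class ShipOrderHandler
{
    // [Entity] loads the Order by the OrderId on the incoming command.
    // The returned OrderShipped is a "cascading message" that Wolverine
    // publishes only after the handler succeeds -- no IMediator, no
    // injected bus, just a pure function that's trivial to unit test
    public static OrderShipped Handle(ShipOrder command, [Entity] Order order)
    {
        order.MarkShipped(); // hypothetical domain method
        return new OrderShipped(order.Id);
    }
}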
Validation Middleware
As another example just to show how Wolverine’s runtime is different than MediatR’s, let’s consider the very common case of using Fluent Validation (or now DataAnnotations too!) middleware in front of message handlers or HTTP requests. With MediatR, you might use an IPipelineBehavior<T> implementation like this that will wrap all requests:
public class ValidationBehaviour<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
    where TRequest : IRequest<TResponse>
{
    private readonly IEnumerable<IValidator<TRequest>> _validators;

    public ValidationBehaviour(IEnumerable<IValidator<TRequest>> validators)
    {
        _validators = validators;
    }

    public async Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken, RequestHandlerDelegate<TResponse> next)
    {
        if (_validators.Any())
        {
            var context = new ValidationContext<TRequest>(request);
            var validationResults = await Task.WhenAll(_validators.Select(v => v.ValidateAsync(context, cancellationToken)));
            var failures = validationResults.SelectMany(r => r.Errors).Where(f => f != null).ToList();

            if (failures.Count != 0)
                throw new ValidationException(failures);
        }

        return await next();
    }
}
I’ve seen plenty of alternatives out there with slightly different implementations. In some cases folks will use service location to probe the application’s IoC container for any possible IValidator<T> implementations for the current request. In all cases though, the implementations are using runtime logic on every possible request to check whether there is any validation logic. The Wolverine version of Fluent Validation middleware does things a bit differently, with less runtime overhead, which also results in cleaner exception stack traces when things go wrong. Don’t laugh, we really did design Wolverine quite purposely to avoid the really nasty kind of exception stack traces you get from many other middleware or “behavior” frameworks, like Wolverine’s predecessor tool FubuMVC did 😦
Let’s say that you have a Wolverine.HTTP endpoint like so:
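Judging from the generated handler shown below, the endpoint is roughly this shape (reconstructed from the WolverineWebApi sample project the generated code references, so treat the exact property names as an assumption):

public record CreateCustomer(string FirstName, string LastName, string PostalCode);

public static class ValidatedEndpoint
{
    [WolverinePost("/validate/customer")]
    public static string Post(CreateCustomer customer)
    {
        return "Got a new customer";
    }
}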
Just like with MediatR, you would need to register the Fluent Validation validator types in your IoC container as part of application bootstrapping. Now, here’s how Wolverine’s model is very different from MediatR’s pipeline behaviors. While MediatR is applying that ValidationBehaviour to each and every message handler in your application whether or not that message type actually has any registered validators, Wolverine is able to peek into the IoC configuration and “know” whether there are registered validators for any given message type. If there are any registered validators, Wolverine will utilize them in the code it generates to execute the HTTP endpoint method shown above for creating a customer. If there is only one validator, and that validator is registered as a Singleton scope in the IoC container, Wolverine generates this code:
public class POST_validate_customer : Wolverine.Http.HttpHandler
{
    private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions;
    private readonly Wolverine.Http.FluentValidation.IProblemDetailSource<WolverineWebApi.Validation.CreateCustomer> _problemDetailSource;
    private readonly FluentValidation.IValidator<WolverineWebApi.Validation.CreateCustomer> _validator;

    public POST_validate_customer(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions, Wolverine.Http.FluentValidation.IProblemDetailSource<WolverineWebApi.Validation.CreateCustomer> problemDetailSource, FluentValidation.IValidator<WolverineWebApi.Validation.CreateCustomer> validator) : base(wolverineHttpOptions)
    {
        _wolverineHttpOptions = wolverineHttpOptions;
        _problemDetailSource = problemDetailSource;
        _validator = validator;
    }

    public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
    {
        // Reading the request body via JSON deserialization
        var (customer, jsonContinue) = await ReadJsonAsync<WolverineWebApi.Validation.CreateCustomer>(httpContext);
        if (jsonContinue == Wolverine.HandlerContinuation.Stop) return;

        // Execute FluentValidation validators
        var result1 = await Wolverine.Http.FluentValidation.Internals.FluentValidationHttpExecutor.ExecuteOne<WolverineWebApi.Validation.CreateCustomer>(_validator, _problemDetailSource, customer).ConfigureAwait(false);

        // Evaluate whether or not the execution should be stopped based on the IResult value
        if (result1 != null && !(result1 is Wolverine.Http.WolverineContinue))
        {
            await result1.ExecuteAsync(httpContext).ConfigureAwait(false);
            return;
        }

        // The actual HTTP request handler execution
        var result_of_Post = WolverineWebApi.Validation.ValidatedEndpoint.Post(customer);

        await WriteString(httpContext, result_of_Post);
    }
}
I should note that Wolverine’s Fluent Validation middleware will not generate any code for any HTTP endpoint where there are no known Fluent Validation validators for the endpoint’s request model. Moreover, Wolverine can even generate slightly different code for having multiple validators versus a singular validator as a way of wringing out a little more efficiency in the common case of having only a single validator registered for the request type.
The point here is that Wolverine is trying to generate the most efficient code possible based on what it can glean from the IoC container registrations and the signature of the HTTP endpoint or message handler methods while the MediatR model has to effectively use runtime wrappers and conditional logic at runtime.
Marten has very rich support for projecting events into read, write, or query models. While there are other capabilities as well, the most common usage is probably to aggregate related events into a singular view. Marten projections can be executed Live, meaning that Marten does the creation of the view by loading the target events into memory and building the view on the fly. Projections can also be executed Inline, meaning that the projected views are persisted as part of the same transaction that captures the events that apply to that projection. For this post though, I’m mostly talking about projections running asynchronously in the background as events are captured into the database (think eventual consistency).
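For reference, the projection lifecycle is a registration-time decision in Marten. A minimal sketch, using a hypothetical DayProjection type:

var store = DocumentStore.For(opts =>
{
    opts.Connection("some connection string");

    // Choose exactly one lifecycle per projection registration:
    // ProjectionLifecycle.Live   -- built on demand in memory, never persisted
    // ProjectionLifecycle.Inline -- persisted in the same transaction as the events
    // ProjectionLifecycle.Async  -- built in the background by the async daemon
    opts.Projections.Add(new DayProjection(), ProjectionLifecycle.Async);
});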
Aggregate Projections in Marten combine some sort of grouping of events and process them to create a single aggregated document representing the state of those events. These projections come in two flavors:
Single Stream Projections create a rolled up view of all or a segment of the events within a single event stream. These projections are done either by using the SingleStreamProjection<TDoc, TId> base type or by creating a “self aggregating” Snapshot approach with conventional Create/Apply/ShouldDelete methods that mutate or evolve the snapshot based on new events.
Multi Stream Projections create a rolled up view of a user-defined grouping of events across streams. These projections are done by sub-classing the MultiStreamProjection<TDoc, TId> class and are further described in Multi-Stream Projections. An example of a multi-stream projection might be a “query model” within an accounting system of some sort that rolls up the value of all unpaid invoices by active client.
You can also use a MultiStreamProjection to create views that are a segment of a single stream over time or version. Imagine that you have a system that models the activity of a bank account with event sourcing. You could use a MultiStreamProjection to create a view that summarizes the activity of a single bank account within a calendar month.
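To sketch the “unpaid invoices by active client” idea from above, here’s a minimal multi-stream projection with hypothetical InvoiceRaised/InvoicePaid events and an UnpaidInvoices view document:

public record InvoiceRaised(Guid ClientId, decimal Amount);
public record InvoicePaid(Guid ClientId, decimal Amount);

public class UnpaidInvoices
{
    public Guid Id { get; set; }       // the client id
    public decimal Total { get; set; } // outstanding balance
}

public class UnpaidInvoicesProjection : MultiStreamProjection<UnpaidInvoices, Guid>
{
    public UnpaidInvoicesProjection()
    {
        // "Slice" events from many invoice streams by the client id
        Identity<InvoiceRaised>(x => x.ClientId);
        Identity<InvoicePaid>(x => x.ClientId);
    }

    public void Apply(InvoiceRaised e, UnpaidInvoices view)
    {
        view.Id = e.ClientId;
        view.Total += e.Amount;
    }

    public void Apply(InvoicePaid e, UnpaidInvoices view) => view.Total -= e.Amount;
}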
The ability to use explicit code to define projections was hugely improved in the Marten 8.0 release.
Within your aggregation projection, you can express the logic about how Marten combines events into a view through either conventional methods (original, old school Marten) or through completely explicit code.
Within an aggregation, you have advanced options to:
Append new events or send messages in response to projection updates through “side effects”
Simple Example
The most common usage is to create a “write model” that projects the current state for a single stream, so on that note, let’s jump into a simple example.
I’m huge into epic fantasy book series, hence the silly original problem domain in the very oldest code samples. Hilariously, Marten has fielded and accepted pull requests that corrected our modeling of the timeline of the Lord of the Rings in sample code.
Let’s say that we’re building a system to track the progress of a traveling party on a quest within an epic fantasy series like “The Lord of the Rings” or the “Wheel of Time” and we’re using event sourcing to capture state changes when the “quest party” adds or subtracts members. We might very well need a “write model” for the current state of the quest for our command handlers like this one:
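A minimal version of that write model, using the “self aggregating” snapshot conventions (the event shapes here are simplified sketches):

public record QuestStarted(Guid QuestId, string Name);
public record MembersJoined(Guid QuestId, string[] Members);

public class QuestParty
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public List<string> Members { get; set; } = new();

    // Conventional "self aggregating" snapshot methods
    public static QuestParty Create(QuestStarted started)
        => new() { Id = started.QuestId, Name = started.Name };

    public void Apply(MembersJoined joined)
        => Members.AddRange(joined.Members);
}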
Just to understand a little bit more about the capabilities of Marten’s aggregation projections, let’s look at the diagram below that tries to visualize the runtime workflow of aggregation projections inside of the Async Daemon background process:
The Daemon is constantly pushing a range of events at a time to an aggregation projection. For example, Events 1,000 to 2,000 by sequence number
The aggregation “slices” the incoming range of events into a group of EventSlice objects that establishes a relationship between the identity of an aggregated document and the events that should be applied during this batch of updates for that identity. To be more concrete, a single stream projection for QuestParty would be creating an EventSlice for each quest id it sees in the current range of events. Multi-stream projections will have some kind of custom “slicing” or grouping. For example, maybe in our Quest tracking system we have a multi-stream projection that tries to track how many monsters of each type are defeated. That projection might “slice” by looking for all MonsterDefeated events across all streams and group or slice incoming events by the type of monster. The “slicing” logic is automatic for single stream projections, but will require explicit configuration or explicitly written logic for multi stream projections.
Once the projection has a known list of all the aggregate documents that will be updated by the current range of events, the projection will fetch each persisted document, first from any active aggregate cache in memory, then by making a single batched request to the Marten document storage for any missing documents and adding these to any active cache (see Optimizing Performance for more information about the potential caching).
The projection will execute any event enrichment against the now-known group of EventSlice objects. This step gives you a hook to efficiently “enrich” the raw event data with extra data lookups from Marten document storage or even other sources.
Most of the work as a developer is in the application or “Evolve” step of the diagram above. After the “slicing”, the aggregation has turned the range of raw event data into EventSlice objects that contain the current snapshot of a projected document by its identity (if one exists), the identity itself, and the events from within that original range that should be applied on top of the current snapshot to “evolve” it to reflect those events. This can be coded either with the conventional Apply/Create/ShouldDelete methods or using explicit code — which almost inevitably means a switch statement. Using the QuestParty example again, the aggregation projection would get an EventSlice that contains the identity of an active quest, the snapshot of the current QuestParty document that is persisted by Marten, and the new MembersJoined et al events that should be applied to the existing QuestParty object to derive the new version of QuestParty.
Just before Marten persists all the changes from the application / evolve step, you have the RaiseSideEffects() hook to potentially raise “side effects” like appending additional events based on the now updated state of the projected aggregates or publishing the new state of an aggregate through messaging (Wolverine has first class support for Marten projection side effects through its Marten integration into the full “Critter Stack”)
For the current event range and event slices, Marten will send all aggregate document updates or deletions, new event appending operations, and even outboxed, outgoing messages sent via side effects (if you’re using the Wolverine integration) in batches to the underlying PostgreSQL database. I’m calling this out because we’ve constantly found in Marten development that command batching to PostgreSQL is a huge factor in system performance and the async daemon has been designed to try to minimize the number of network round trips between your application and PostgreSQL at every turn.
Assuming the transaction succeeds for the current event range and the operation batch in the previous step, Marten will call “after commit” observers. This notification for example will release any messages raised as a side effect and actually send those messages via whatever is doing the actual publishing (probably Wolverine).
Marten happily supports immutable data types for the aggregate documents produced by projections, but supports mutable types as well. The usage in application code is a little different between the two though.
Starting with Marten 8.0, we’ve tried somewhat to conform to the terminology used by the Functional Event Sourcing Decider paper by Jeremie Chassaing. To that end, the API now refers to a “snapshot” that really just means a version of the projection and “evolve” as the step of applying new events to an existing “snapshot” to calculate a new “snapshot.”
Wolverine has had a very frequent release cadence the past couple months as community contributions, requests from JasperFx Software clients, and yes, sigh, bug reports have flowed in. Right now I think I can justifiably claim that Wolverine is innovating much faster than any of the other comparable tools in the .NET ecosystem.
Some folks clearly don’t like that level of change of course, and I’ve always had to field some online criticism for our frequency of releases. I don’t think that cadence continues forever of course.
I thought that now would be a good time to write a little bit about the new features and improvements just because so much of it happened over the holiday season. Starting somewhat arbitrarily with the first of December to now…
Inferred Message Grouping in Wolverine 5.5
A massively important new feature in Wolverine 5 was our “Partitioned Sequential Messaging” that seeks to effectively head off problems with concurrent message processing by segregating message processing by some kind of business entity identity. Long story short, this feature can almost completely eliminate issues with concurrent access to data without eliminating parallel processing across unrelated messages.
“Classic” .NET Domain Events with EF Core in Wolverine 5.6
Wolverine is attracting a lot of new users lately who might honestly have only been interested originally because of other tools’ recent licensing changes, and those users tend to come with a more typical .NET approach to application architecture than Wolverine’s idiomatic vertical slice architecture approach. These new users are also a lot more likely to be using EF Core than Marten, so we’ve had to invest more in EF Core integration.
There weren’t many new features of note, but Wolverine 5.7, less than a week after 5.6, had five contributors and knocked out a dozen issues. The open issue count in Wolverine crested in December in the low 70s and it’s down to the low 30s right now.
Client Requests in Wolverine 5.8
Wolverine 5.8 gave us some bug fixes, but also a couple new features requested by JasperFx clients:
The Community Went Into High Gear with Wolverine 5.9
Wolverine 5.9 dropped the week before Christmas with contributions from 7 different people.
The highlights are:
Sandeep Desai has been absolutely on fire as a contributor to Wolverine, and he made the HTTP Messaging Transport finally usable in this release, with several other pull requests in later versions that further improved it. This enables Wolverine to use HTTP itself as a messaging transport, a feature I’ve long wanted as a prerequisite for CritterWatch.
The Rabbit MQ integration got more robust about reconnecting on errors
Wolverine 5.10 Kicked off 2026 with a Bang!
Wolverine 5.10 came out last week with contributions from eleven different folks. Plenty of bug fixes and contributions built up over the holidays. The highlights include:
That release also included several bug fixes and an effort from me to go fill in some gaps in the documentation website. That release got us down to the lowest open issue count in years.
Summary
The Wolverine community has been very busy. It really is a community of developers from all over the world, and we’re improving fast.
I do think that the release cadence will slow down somewhat though as this has been an unusual burst of activity.
The Marten community made our first big release of the new year with 8.18 this morning. I’m particularly happy with a couple significant things in this release:
We had 8 different contributors in just the last month of work that this release represents
The entire documentation section on projections got a much needed revamp and now includes a lot more information about capabilities from our big V8 release last year. I’m hopeful that the new structure and content makes this crucial feature set more usable.
The “Composite or Chained Projections” feature has been something we’ve talked about as a community for years, and now we have it
The one consistent theme in those points is that Marten just got a lot better for our users at creating “query models” in their systems.
Let’s Build a TeleHealth System!
I got to be a part of a project like this for a startup during the pandemic. Fantastic project with lots of great people. Even though I wasn’t able to use Marten on the project at that time (we used a hand-rolled Event Sourcing solution with Node.JS + TypeScript), that project has informed several capabilities added to Marten in the years since, including the features shown in this post.
Just to have a problem domain for the sample code, let’s pretend that we’re building a new online TeleHealth system that allows patients to register for an appointment online and get matched up with a healthcare provider for an appointment that day. The system will do all the work of coordinating these appointments as well as tracking how the healthcare providers spend their time.
That domain might have some plain Marten document storage for reference data including:
Provider — representing a medical provider (Nurse? Physician? PA?) who fields appointments
Specialty — models a medical specialty
Patient — personal information about patients who are requesting appointments in our system
Switching to event streams, we may be capturing events for:
Board – events modeling a single, closely related group of appointments during a single day. Think of “Pediatrics in Austin, Texas for January 19th”
ProviderShift – events modeling the activity of a single provider working in a single Board during a single day
Appointment – events recording the progress of an appointment including requesting an appointment through the appointment being cancelled or completed
Better Query Models
The easiest and most common form of a projection in Marten is a simple “write model” that projects the information from a single event stream to a projected document. From our TeleHealth domain, here’s the “self-aggregating” Board:
public class Board
{
    private Board()
    {
    }

    public Board(BoardOpened opened)
    {
        Name = opened.Name;
        Activated = opened.Opened;
        Date = opened.Date;
    }

    public void Apply(BoardFinished finished)
    {
        Finished = finished.Timestamp;
    }

    public void Apply(BoardClosed closed)
    {
        Closed = closed.Timestamp;
        CloseReason = closed.Reason;
    }

    public Guid Id { get; set; }
    public string Name { get; private set; }
    public DateTimeOffset Activated { get; set; }
    public DateTimeOffset? Finished { get; set; }
    public DateOnly Date { get; set; }
    public DateTimeOffset? Closed { get; set; }
    public string CloseReason { get; private set; }
}
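For completeness, the event types assumed by that snapshot would look something like this (inferred from the Apply methods above, so a sketch rather than the exact sample code):

public record BoardOpened(string Name, DateOnly Date, DateTimeOffset Opened);
public record BoardFinished(DateTimeOffset Timestamp);
public record BoardClosed(DateTimeOffset Timestamp, string Reason);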
Easy money. All the projection has to do is apply the raw event data for that one stream and nothing else. Marten is even doing the event grouping for you, so there’s just not much to think about at all.
Now let’s move on to more complicated usages. One of the things that makes Marten such a great platform for Event Sourcing is that it also has its dedicated document database feature set on top of the PostgreSQL engine. All that means that you can happily keep some relatively static reference data back in just plain ol’ documents or even raw database tables.
To that end, let’s say in our TeleHealth system that we want to just embed all the information for a Provider (think a nurse or a physician) directly into our ProviderShift for easier usage:
// The class declaration was cut off in the original; from the usage
// below, it's a primary constructor type along these lines:
public class ProviderShift(Guid boardId, Provider provider)
{
    public Guid BoardId { get; set; } = boardId;

    // I was admittedly lazy in the testing, so I just
    // completely embedded the Provider document directly
    // in the ProviderShift for easier querying later
    public Provider Provider { get; set; } = provider;
}
When mixing and matching document storage and events, Marten has always given you the ability to utilize document data during projections by brute force lookups in your projection code like this:
public async Task<ProviderShift> Create(
    // The event data
    ProviderJoined joined,
    IQuerySession session)
{
    var provider = await session
        .LoadAsync<Provider>(joined.ProviderId);

    return new ProviderShift(joined.BoardId, provider);
}
The code above is easy to write and conceptually easy to understand, but when the projection is being executed in our async daemon where the projection is processing a large batch of events at one time, the code above potentially sets you up for an N+1 query anti-pattern where Marten has to make lots of small database round trips to get each referenced Provider every time there’s a separate ProviderJoined event.
Instead, let’s use Marten’s recent hook for event enrichment and the new declarative syntax we just introduced in 8.18 today to get all the related Provider information in one batched query for maximum efficiency:
public override async Task EnrichEventsAsync(SliceGroup<ProviderShift, Guid> group, IQuerySession querySession, CancellationToken cancellation)
{
    await group
        // First, let's declare what document type we're going to look up
        .EnrichWith<Provider>()

        // What event type or marker interface type or common abstract type
        // we could look for within each EventSlice that might reference
        // providers
        .ForEvent<ProviderJoined>()

        // Tell Marten how to find an identity to look up
        .ForEntityId(x => x.ProviderId)

        // And finally, execute the look up in one batched round trip,
        // and apply the matching data to each combination of EventSlice,
        // event within that slice that had a reference to a ProviderId,
        // and the Provider
        .EnrichAsync((slice, e, provider) =>
        {
            // In this case we're swapping out the persisted event with the
            // enhanced event type before each event slice is then passed
            // in for updating the ProviderShift aggregates
            slice.ReplaceEvent(e, new EnhancedProviderJoined(e.Data.BoardId, provider));
        });
}
Now, inside the actual projection for ProviderShift, we can use the EnhancedProviderJoined event from above like this:
// This is a recipe introduced in Marten 8 to just write explicit code
// to "evolve" aggregate documents based on event data
public override ProviderShift Evolve(ProviderShift snapshot, Guid id, IEvent e)
{
    switch (e.Data)
    {
        case EnhancedProviderJoined joined:
            snapshot = new ProviderShift(joined.BoardId, joined.Provider)
            {
                Provider = joined.Provider,
                Status = ProviderStatus.Ready
            };
            break;

        case ProviderReady:
            snapshot.Status = ProviderStatus.Ready;
            break;

        case AppointmentAssigned assigned:
            snapshot.Status = ProviderStatus.Assigned;
            snapshot.AppointmentId = assigned.AppointmentId;
            break;

        case ProviderPaused:
            snapshot.Status = ProviderStatus.Paused;
            snapshot.AppointmentId = null;
            break;

        case ChartingStarted:
            snapshot.Status = ProviderStatus.Charting;
            break;
    }

    return snapshot;
}
In the sample above, I replaced the ProviderJoined event being sent to our projection with the richer EnhancedProviderJoined event, but there are other ways to send data to projections with a new References<T> event type that’s demonstrated in our documentation on this feature.
Sequential or Composite Projections
This feature was introduced in Marten 8.18 in response to feedback from several JasperFx Software clients who needed to efficiently create projections that effectively made de-normalized views across multiple stream types and used reference data outside of the events. Expect this feature to grow in capability as we get more feedback about its usage.
Here are a handful of scenarios that Marten users have hit over the years:
Wanting to use the build products of Projection 1 as an input to Projection 2. You can do that today by running Projection 1 as Inline and Projection 2 as Async, but that’s imperfect and sensitive to timing. Plus, you might not have wanted to run the first projection Inline.
Needing to create a de-normalized projection view that incorporates data from several other projections and completely different types of event streams, but that previously required quite a bit of duplicated logic between projections
Looking for ways to improve the throughput of asynchronous projections by doing more batching of event fetching and projection updates by trying to run multiple projections together
To meet these somewhat common needs more easily, Marten has introduced the concept of a “composite” projection where Marten is able to run multiple projections together and possibly divided into multiple, sequential stages. This provides some potential benefits by enabling you to safely use the build products of one projection as inputs to a second projection. Also, if you have multiple projections using much of the same event data, you can wring out more runtime efficiency by building the projections together so your system is doing less work fetching events and able to make updates to the database with fewer network round trips through bigger batches.
In our TeleHealth system, we need to have single stream “write model” projections for each of the three stream types. We also need to have a rich view of each Board that combines all the common state of the active Appointment and ProviderShift streams in that Board including the more static Patient and Provider information that can be used by the system to automate the assignment of providers to open patients (a real telehealth system would need to be able to match up the requirements of an appointment with the licensing, specialty, and location of the providers as well as “knowing” what providers are available or estimated to be available). We probably also need to build a denormalized “query model” about all appointments that can be efficiently queried by our user interface on any of the elements of Board, Appointment, Patient, or Provider.
What we really want is some way to efficiently utilize the upstream products and updates of the Board, Appointment, and ProviderShift “write model” projections as inputs to what we’ll call the BoardSummary and AppointmentDetails projections. We’ll use the new “composite projection” feature to run these projections together in two stages like this:
Before we dive into each child projection, this is how we can set up the composite projection using the StoreOptions model in Marten:
Now, let’s go downstream and look at the AppointmentDetailsProjection that will ultimately need to use the build products of all three upstream projections:
Note the usage of the Updated<T> event types that the downstream projections are using in their Evolve or DetermineAction methods. That is a synthetic event added by Marten to communicate to the downstream projections what projected documents were updated for the current event range. These events are carrying the latest snapshot data for the current event range so the downstream projections can just use the build products without making any additional fetches. It also guarantees that the downstream projections are seeing the exact correct upstream projection data for that point of the event sequencing.
Moreover, the composite “telehealth” projection is reading the event range once for all five constituent projections, and also applying the updates for all five projections at one time to guarantee consistency.
See the documentation on Composite Projections for more information about how this feature fits in with rebuilding, versioning, and non-stale querying.
Summary
Marten has hopefully gotten much better at building “query model” projections that you’d use for bigger dashboard screens or search within your application. We’re hoping that this makes Marten a better tool for real life development.
The best way for an OSS project to grow healthily is having a lot of user feedback and engagement coupled with the maintainers reacting to that feedback with constant improvement. And while I’d sometimes like to have the fire hose of that “feedback” stop for a couple days, it helps drive the tools forward.
The advent of JasperFx Software has enabled me to spend much more time working with our users and seeing the real problems they face in their system development. The features I described in this post are a direct result of engagements with at least four different JasperFx clients in the past year and a half. Drop us a line anytime at sales@jasperfx.net and I’d be happy to talk to you about how we can help you be more successful with Event Sourcing using Marten.
Reach out anytime to sales@jasperfx.net to ask us about how we could potentially help your shop with software development using the Critter Stack.
It’s a New Year and hopefully we all get to start on some great new software initiatives. If you happen to be starting something this year that’s going to get you into Event Driven Architecture or Event Sourcing, the Critter Stack (Marten and Wolverine) is a great toolset to get you where you’re going. And of course, JasperFx Software is around to help our clients get the most out of the Critter Stack and support you through architectural decisions, business modeling, and test automation as well.
A JasperFx support plan is more than just a throat to choke when things go wrong. We build in consulting time, and mostly interact with our clients through IM tools like Discord or Slack and occasional Zoom calls when that’s appropriate. And GitHub issues of course for tracking problems or feature requests.
Just thinking about the past week or so, JasperFx has helped clients with:
Troubleshooting a couple of production and development issues with clients
Modeling events, event streams, and strategies for projections
A deep dive into the multi-tenancy support in Marten and Wolverine, the implications of different options, possible performance optimizations that probably have to be done upfront as well as performance optimizations that could be done later, and how these options fit our client’s problem domain and business.
For a greenfield project, we laid out several options with Marten to optimize the future performance and scalability with several opt in features and of course, the potential drawbacks of those features (like event archiving or stream compacting).
Worked with a couple clients on how best to configure Wolverine when multiple applications or multiple modules within the same application are targeting the same database
Worked with a client on how to configure Wolverine to enable a modular monolith approach to utilize completely separate databases and a mix and match of database per tenant with separate databases per module.
How authorization and authentication can be integrated into Wolverine.HTTP — which basically boils down to “basically the same as MVC Core”
A lot of conversations about how to protect your system against concurrency issues and what features in both Marten and Wolverine will help you be more resilient
Talked through many of the configuration possibilities for message sequencing or parallelism in Wolverine and how to match that to different needs
Fielded several small feature requests to improve Wolverine’s usage within modular monolith applications where the same message might need to be handled independently by separate modules
Pushed a new Wolverine release that included some small requests from a client for their particular usage
Conferred with a current client on some very large, forthcoming features in Marten that will hopefully improve its usability for applications that require complex dashboard screens that display very rich data. The feature isn’t directly part of the client’s support agreement per se, but we absolutely pay attention to our client’s use cases within our own internal roadmap for the Critter Stack tools.
But again, that’s only the past couple weeks. If you’re interested in learning more, or want JasperFx to be helping your shop, drop us an email at sales@jasperfx.net or you can DM me just about anywhere.
At least professionally, I tend to be mostly focused on what’s next on the roadmap, upcoming client work, or long planned strategic improvements to the Critter Stack (Marten and Wolverine). One of the things I do every year is write out a blog post stating the technical goals for the OSS projects that I lead, with this year’s version, Critter Stack Roadmap for 2026, already up (though I’m going to publish a new version later this week). I’ll frequently look back at what I wrote in the previous January and be frustrated by the hoped-for initiatives that still haven’t been completed or even started. All the same though, 2025 was a very productive year for the Critter Stack and there are plenty of accomplishments and improvements worth reflecting on.
JasperFx Software in 2025
JasperFx Software more than doubled our roster of ongoing support clients while doing quite a bit of one-off consulting and delivery work as well. The biggest improvement and growth is that I’ve stopped fretting on a daily basis about whether I gambled my family’s financial well-being on an ego-driven attempt to stave off a mid-life crisis, and started confidently planning the company’s future around what appears to be a very successful and promising technical toolset.
Along the way, we helped our clients through interactions on Discord, Slack, MS Teams (I’m not yet a fan), Zoom, and GitHub. Common topics for Critter Stack usage included:
Designing long lived workflows
Event Sourcing usage
Resiliency strategies of all sorts
Multi-Tenancy
Dealing with concurrency. Lots of concurrency-related issues
Test Automation
Quite a bit of troubleshooting
Instrumentation
If you would like any level of help with your Critter Stack usage, feel free to reach out to sales@jasperfx.net for a conversation about what or how JasperFx could help you out.
Marten in 2025
2024 was an insanely busy year for Marten improvements, and after that, I feel like there just wasn’t much lacking anymore in Marten’s feature set for productive event sourcing. You can definitely see Marten development slowed down a bit in 2025. Even so, we had 16 folks commit code this past year — with far more folks contributing through Discord discussions and feedback in GitHub issues.
Marten 8.0 dropped at the first of June, which included:
A big consolidation and reorganization of the shared dependencies underneath Marten and Wolverine
A lot of improvements to Marten’s Projections API including much better (we hope) options for writing explicit code and a streamlined API for “event slicing and grouping” in multi-stream projections
JasperFx Software built the Stream Compacting feature for our largest client as a mechanism to keep a busy system running smoothly over time by keeping the database size relatively stable
We added a lot more support for strong typed identifiers in different usage scenarios to Marten throughout the year. I won’t claim there aren’t still some potential problems out there, but dammit, we tried really hard on that front
Again in collaboration with JasperFx clients, we added far more metrics publishing to understand the Async Daemon behavior in production
Switched Marten from the venerable TPL DataFlow library to using System.Threading.Channels. I was very happy with how smoothly that went after the first day of toe stubbing.
After saying that I felt like Marten was essentially “done” at the beginning of this section, I think we actually do have a pretty ambitious set of enhancements for Marten projection support and cross-document querying teed up for early 2026.
All the same though, I’ll stand by my recent statements that Marten is clearly the best technical tool for Event Sourcing on the .NET platform and competitive with anything else out there in other ecosystems.
Wolverine in 2025
Buckle in, this list is much longer and I’m only going to hit the highlights because Wolverine development was crazily busy in 2025:
Wolverine had 66 different people contribute code in 2025. Some of those are little pull requests to improve documentation (i.e., fixing my errors in markdown files), but that number dramatically undercounts the contribution from people in GitHub issues and Discord discussions. And also, those little pull requests to improve documentation are very much appreciated by me and I think they definitely improve the project on the whole.
Lots of improvements to Wolverine’s support for Modular Monolith architectures
Yet more support for strong typed identifiers. Curse those little maggots, but some of you really, really like them
Wolverine 3.13 improved Wolverine.HTTP with [AsParameters] binding support
Wolverine 4.0 brought the consolidation of shared dependencies with Marten as well as Multi-Tenancy through separate databases with Entity Framework Core (for a JasperFx client).
More Wolverine transport options support the “named broker” approach such that you can connect to multiple Rabbit MQ brokers, Azure Service Bus namespaces, Kafka brokers, or Amazon SQS endpoints from one system
Wolverine got much better support for “hybrid” HTTP endpoint/handler combinations (in collaboration with a JasperFx support client)
JasperFx working with another client improved the usability of F# idioms with Wolverine
New messaging transport options including GCP Pub/Sub, SignalR, Redis, an HTTP option, and a NATS.io option ready to go in the first release of 2026. Dang.
And there was a massive rush of activity at the end of the year as I scurried to address issues and requests from recent JasperFx clients for yet more resiliency, error handling, instrumentation, and inevitably some bugs. I’ll be writing a blog post later this week to go over the new additions from the US Thanksgiving holiday through the end of 2025.
I guess the big takeaway is that Wolverine improved a lot in 2025 and I expect that trend to continue at least during the early part of 2026. I would argue that, regardless of exactly how Wolverine stacks up on features and usability compared to other options in .NET, Wolverine is improving and innovating much faster than any of its competitors.
Me in 2025
I’m maybe working at a bit of an unsustainable pace for the longer term, but I think I’m good for at least one more year at this pace. At the end of the day though, I feel extremely fortunate to be living out my long term professional dream to have my own company centered around the OSS tools that I’ve created or led.
I normally write this out in January, but I’m feeling like now is a good time to get this out as some of it is in flight. So with plenty of feedback from the other Critter Stack Core team members and a lot of experience seeing where JasperFx Software clients have hit friction in the past couple years, here’s my current thinking about where the Critter Stack development goes for 2026.
As I’m sure you can guess, every time I’ve written this yearly post, it’s been absurdly off the mark of what actually gets done through the year.
Critter Watch
For the love of all that’s good in this world, JasperFx Software needs to get an MVP out the door that’s usable for early adopters who are already clamoring for it. The “Critter Watch” tool, in a nutshell, should be able to tell you everything you need to know about how or why a Critter Stack application is unhealthy and then also give you the tools you need to heal your systems when anything does go wrong.
The MVP is still shaping up as:
A visualization and explanation of the configuration of your Critter Stack application
Performance metrics integration from both Marten and Wolverine
Event Store monitoring and management of projections and subscriptions
Wolverine node visualization and monitoring
Dead Letter Queue querying and management
Alerting – but I don’t have a huge amount of detail yet. I’m paying close attention to the issues JasperFx clients see in production applications though, and using that to inform what information Critter Watch will surface through its user interface and push notifications
This work is heavily in flight, and will hopefully accelerate over the holidays and January as JasperFx Software clients tend to be much quieter. I will be publishing a separate vision document soon for users to review.
The Entire “Critter Stack”
We’re standing up the new docs.jasperfx.net (Babu is already working on this) to hold documentation on supporting libraries and more tutorials and sample projects that cross Marten & Wolverine. This will finally add some documentation for Weasel (database utilities and migration support), our command line support, the stateful resource model, the code generation model, and everything to do with DevOps recipes.
Play the “Cold Start Optimization” epic across both Marten and Wolverine (and possibly Lamar). I don’t think that true AOT support is feasible, but maybe we can get a lot closer. Have an optimized start mode of some sort that eliminates all or at least most of:
Reflection usage in bootstrapping
Reflection usage at runtime, which today is really just occasional calls to object.GetType()
Assembly scanning of any kind, which we know can be very expensive for some systems with very large dependency trees.
Increased and improved integration with EF Core across the stack
Marten
The biggest set of complaints I’m hearing lately is all around views between multiple entity types or projections involving multiple stream types or multiple entity types. I also got some feedback from multiple past clients about the limitation of Marten as a data source underneath UI grids, which isn’t particularly a new bit of feedback. In general, there also appears to be a massive opportunity to improve Marten’s usability for many users by having more robust support in the box for projecting event data to flat, denormalized tables.
I think I’d like to prioritize a series of work in 2026 to alleviate the complicated view problem:
The “Composite Projections” epic, where you might use the build products of upstream projections to create multi-stream projection views. I’ve gotten positive feedback from a couple of JasperFx clients about this, and it’s also a big opportunity to increase the throughput and scalability of the Async Daemon by making fewer database requests
Revisit GroupJoin in the LINQ support even though that’s going to be absolutely miserable to build. GroupJoin() might end up being a much easier usage than all our Include() functionality.
A first class model to project Marten event data with EF Core. In this proposed model, you’d use an EF Core DbContext to do all the actual writes to a database.
Other than that, some other ideas that have kicked around for a while are:
Improve the documentation and sample projects, especially around the usage of projections
Take a better look at the full text search features in Marten
Finally support the PostGIS extension in Marten. I think that could be something flashy and quick to build, but I’d strongly prefer to do this in the context of an actual client use case.
Continue to improve our story around multi-stream operations. I’m not enthusiastic about “Dynamic Consistency Boundary” (DCB) in regards to Marten though, so I’m not sure what this actually means yet. This might end up centering much more on the integration with Wolverine’s “aggregate handler workflow,” which is already perfectly happy to support strong consistency models even with operations that touch more than one event stream.
Wolverine
Wolverine is far and away the busiest part of the Critter Stack in terms of active development right now, but I think that slows down soon. To be honest, most work at this point is us reacting tactically to JasperFx client or user needs. In terms of general, strategic themes, I think that 2026 will involve:
In conjunction with “CritterWatch”, improving Wolverine’s management story around dead letter queueing
I would love to expand Wolverine’s database support beyond “just” SQL Server and PostgreSQL
Improving the Kafka integration. That’s not our most widely used messaging broker, but that seems to be the leading source of enhancement requests right now
New Critters?
We’ve done a lot of preliminary work to potentially build new Critter Stack event store alternatives based on different database engines. I’ve always believed that SQL Server would be the logical next database engine, but we’ve gotten fewer and fewer requests for this as PostgreSQL has become a much more popular database choice in the .NET ecosystem.
I’m not sure this will be a high priority in 2026, but you never know…
I was helping a new JasperFx Software client this week to best integrate a Domain Events strategy into their new Wolverine codebase. This client wanted to use the common pattern of an EF Core DbContext harvesting domain events raised by different entities and relaying those to Wolverine messaging, with proper Wolverine transactional outbox support for system durability. As part of that assistance — and also to have some content for other Wolverine users trying the same thing later — I promised to write a blog post showing how I’d do this kind of integration myself with Wolverine and EF Core, or at least consider a few options. To try to more permanently head off this usage problem for other users, I went into mad scientist mode this evening and rolled out a new Wolverine 5.6 with some important improvements to make this Domain Events pattern much easier to use in combination with EF Core.
Let’s start with some context about the general kind of approach I’m referring to:
// Base class that establishes the pattern for publishing
// domain events within an entity
public abstract class Entity : IEntity
{
    [NotMapped]
    private readonly ConcurrentQueue<IDomainEvent> _domainEvents = new ConcurrentQueue<IDomainEvent>();

    [NotMapped]
    public IProducerConsumerCollection<IDomainEvent> DomainEvents => _domainEvents;

    protected void PublishEvent(IDomainEvent @event)
    {
        _domainEvents.Enqueue(@event);
    }

    protected Guid NewIdGuid()
    {
        return MassTransit.NewId.NextGuid();
    }
}
public class BacklogItem : Entity
{
public Guid Id { get; private set; }
[MaxLength(255)]
public string Description { get; private set; }
public virtual Sprint Sprint { get; private set; }
public DateTime CreatedAtUtc { get; private set; } = DateTime.UtcNow;
private BacklogItem() { }
public BacklogItem(string desc)
{
this.Id = NewIdGuid();
this.Description = desc;
}
public void CommitTo(Sprint s)
{
this.Sprint = s;
this.PublishEvent(new BacklogItemCommitted(this, s));
}
}
Note the CommitTo() method that publishes a BacklogItemCommitted event. In his sample, that event is published via MediatR with some customization of an EF Core DbContext like the following, taken from the referenced post with some comments that I added:
public override async Task<int> SaveChangesAsync(CancellationToken cancellationToken = default(CancellationToken))
{
await _preSaveChanges();
var res = await base.SaveChangesAsync(cancellationToken);
return res;
}
private async Task _preSaveChanges()
{
await _dispatchDomainEvents();
}
private async Task _dispatchDomainEvents()
{
// Find any entity objects that were changed in any way
// by the current DbContext, and relay them to MediatR
var domainEventEntities = ChangeTracker.Entries<IEntity>()
.Select(po => po.Entity)
.Where(po => po.DomainEvents.Any())
.ToArray();
foreach (var entity in domainEventEntities)
{
// _dispatcher was an abstraction in his post
// that was a light wrapper around MediatR
IDomainEvent dev;
while (entity.DomainEvents.TryTake(out dev))
await _dispatcher.Dispatch(dev);
}
}
The goal of this approach is to make DDD style entity types the entry point and governing “decider” of all business behavior and workflow, while giving these domain model types a way to publish event messages to the rest of the system for side effects outside of the entity’s own state. For example, maybe the backlog system has to publish a message to a Slack room about the backlog item being added to the sprint. You sure as hell don’t want your domain entity to have to know about the infrastructure you use to talk to Slack or web services or whatever.
Mechanically, I’ve seen this typically done with some kind of Entity base class that either exposes a collection of published domain events like the sample above, or puts some kind of interface like this directly into the Entity objects:
// Just assume that this little abstraction
// eventually relays the event messages to Wolverine
// or whatever messaging tool you're using
public interface IEventPublisher
{
void Publish<T>(T @event);
}
// Using a Nullo just so you don't have potential
// NullReferenceExceptions
public class NulloEventPublisher : IEventPublisher
{
public void Publish<T>(T @event)
{
// Do nothing.
}
}
public abstract class Entity
{
public IEventPublisher Publisher { get; set; } = new NulloEventPublisher();
}
public class BacklogItem : Entity
{
public Guid Id { get; private set; } = Guid.CreateVersion7();
public string Description { get; private set; }
// ZOMG, I forgot how annoying ORMs are. Use a document database
// and stop worrying about making things virtual just for lazy loading
public virtual Sprint Sprint { get; private set; }
public void CommitTo(Sprint sprint)
{
Sprint = sprint;
Publisher.Publish(new BacklogItemCommitted(Id, sprint.Id));
}
}
In the approach of using the abstraction directly inside of your entity classes, you incur the extra overhead of connecting the Entity objects loaded out of EF Core with the implementation of your IEventPublisher interface at runtime. I’ll do a few thought experiments later in this post and try out a couple different alternatives.
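To make that runtime connection a little more concrete, one possibility is to hook EF Core’s ChangeTracker.Tracked event inside your DbContext so that every entity EF Core starts tracking gets handed the scoped publisher. This is just a sketch, and the constructor-injected IEventPublisher is an assumption about your DI setup:
public class ItemsDbContext : DbContext
{
    public ItemsDbContext(DbContextOptions<ItemsDbContext> options, IEventPublisher publisher)
        : base(options)
    {
        // Attach the scoped publisher to every Entity that EF Core
        // starts tracking, including entities materialized by queries
        ChangeTracker.Tracked += (_, e) =>
        {
            if (e.Entry.Entity is Entity entity)
            {
                entity.Publisher = publisher;
            }
        };
    }

    public DbSet<BacklogItem> BacklogItems => Set<BacklogItem>();
    public DbSet<Sprint> Sprints => Set<Sprint>();
}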
Before going back to EF Core integration ideas, let me deviate into…
Idiomatic Critter Stack Usage
Forget EF Core for a second; let’s examine a possible usage with the full “Critter Stack” and use Marten for Event Sourcing instead. In this case, a command handler to add a backlog item to a sprint could look something like this (folks, I didn’t spend much time thinking about how a backlog system would really be built here):
public record BacklogItemCommitted(Guid SprintId);
public record CommitToSprint(Guid BacklogItemId, Guid SprintId);
// This is utilizing Wolverine's "Aggregate Handler Workflow"
// which is the Critter Stack's flavor of the "Decider" pattern
public static class CommitToSprintHandler
{
public static Events Handle(
// The actual command
CommitToSprint command,
// Current state of the back log item,
// and we may decide to make the commitment here
[WriteAggregate] BacklogItem item,
// Assuming that Sprint is event sourced,
// this is just a read only view of that stream
[ReadAggregate] Sprint sprint)
{
// Use the item & sprint to "decide" if
// the system can proceed with the commitment
return [new BacklogItemCommitted(command.SprintId)];
}
}
In the code above, we’re appending the BacklogItemCommitted event returned from the method to Marten. If you need to carry out side effects outside of the scope of this handler using that event as a message input, you have a couple of options to have Wolverine relay it through any of its messaging: event forwarding (faster, but unordered) or event subscriptions (strictly ordered, but that always means slower).
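For reference, both options hang off of the Marten-to-Wolverine integration. The following is only a rough sketch from memory of the Wolverine.Marten APIs (the subscription name is made up), so treat the exact signatures as assumptions to verify against the docs:
// Chained onto the AddMarten() registration shown earlier:
.IntegrateWithWolverine()

    // Option 1: forward captured events to Wolverine messaging
    // as they're appended (faster, but unordered)
    .EventForwardingToWolverine()

    // or Option 2: a strictly ordered subscription that pushes
    // the events through Wolverine handlers in sequence
    .ProcessEventsWithWolverineHandlersInStrictOrder("DomainEvents");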
I should also say that if the events returned from the function above are also being forwarded as messages and not just being appended to the Marten event store, that messaging is completely integrated with Wolverine’s transactional outbox support. That’s a key differentiation all by itself from a similar MediatR based approach that doesn’t come with outbox support.
That’s it, that’s the whole handler, but here are some things I would want you to take away from that code sample above:
Yes, the business logic is embedded directly in the handler method instead of being buried in the BacklogItem or Sprint aggregates. We are very purposely going down a Functional Programming (adjacent? curious?) approach where the logic is primarily in pure “Decider” functions
I think the code above clearly shows the relationship between the system input (the CommitToSprint command message) and the potential side effects and changes in state of the system. This relative ease of reasoning about the code is of the utmost importance for system maintainability. We can look at the handler code and know that executing that message will potentially lead to events or event messages being published. I’m going to hit this point again from some of the other potential approaches because I think this is a vital point.
Testability of the business logic is easy with the pure function approach (see the test sketch just after this list)
There are no marker interfaces, Entity base classes, or jumping through layers. There’s no repository or factory
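To put a little proof behind that testability claim, a unit test for the handler above can be a plain xUnit fact with no infrastructure whatsoever. A minimal sketch, assuming xUnit and Shouldly, and assuming you have some way to construct BacklogItem and Sprint instances for a test:
public class CommitToSprintHandlerTests
{
    [Fact]
    public void decides_to_commit_the_backlog_item()
    {
        var command = new CommitToSprint(Guid.NewGuid(), Guid.NewGuid());

        // Hypothetical test setup; in real usage these aggregates
        // would reflect previously captured events
        var item = new BacklogItem();
        var sprint = new Sprint();

        var events = CommitToSprintHandler.Handle(command, item, sprint);

        // Pure inputs in, events out, no mocks required
        events.ShouldHaveSingleItem()
            .ShouldBeOfType<BacklogItemCommitted>()
            .SprintId.ShouldBe(command.SprintId);
    }
}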
So enough of that, let’s start with some possible alternatives for Wolverine integration of domain events from domain entity objects with EF Core.
Relay Events from Your Entity Subclass to Wolverine
Switching back to EF Core integration, let’s look at a possible approach to teach Wolverine how to scrape domain events for publishing from your own custom Entity layer supertype like this one that we’ll put behind our BacklogItem type:
// Of course, if you're into DDD, you'll probably
// use many more marker interfaces than I do here,
// but you do you and I'll do me in throwaway sample code
public abstract class Entity
{
public List<object> Events { get; } = new();
public void Publish(object @event)
{
Events.Add(@event);
}
}
public class BacklogItem : Entity
{
public Guid Id { get; private set; }
public string Description { get; private set; }
public virtual Sprint Sprint { get; private set; }
public DateTime CreatedAtUtc { get; private set; } = DateTime.UtcNow;
public void CommitTo(Sprint sprint)
{
Sprint = sprint;
Publish(new BacklogItemCommitted(Id, sprint.Id));
}
}
Let’s utilize this a little bit within a Wolverine handler, first with explicit code:
public static class CommitToSprintHandler
{
public static async Task HandleAsync(
CommitToSprint command,
ItemsDbContext dbContext)
{
var item = await dbContext.BacklogItems.FindAsync(command.BacklogItemId);
var sprint = await dbContext.Sprints.FindAsync(command.SprintId);
// This method would cause an event to be published within
// the BacklogItem object here that we need to gather up and
// relay to Wolverine later
item.CommitTo(sprint);
// Wolverine's transactional middleware handles
// everything around SaveChangesAsync() and transactions
}
}
Or alternatively, using Wolverine’s [Entity] attribute to have middleware load the entities for you:
public static class CommitToSprintHandler
{
public static IStorageAction<BacklogItem> Handle(
CommitToSprint command,
// There's a naming convention here about how
// Wolverine "knows" the id for the BacklogItem
// from the incoming command
[Entity] BacklogItem item,
[Entity] Sprint sprint
)
{
// This method would cause an event to be published within
// the BacklogItem object here that we need to gather up and
// relay to Wolverine later
item.CommitTo(sprint);
// This is necessary to "tell" Wolverine to put transactional
// middleware around the handler. Just taking in the right
// DbContext type as a dependency would work just as well
// if you don't like the Wolverine magic
return Storage.Update(item);
}
}
Now, let’s add some Wolverine configuration to just make this pattern work:
builder.Host.UseWolverine(opts =>
{
// Setting up Sql Server-backed message storage
// This requires a reference to Wolverine.SqlServer
opts.PersistMessagesWithSqlServer(connectionString, "wolverine");
// Set up Entity Framework Core as the support
// for Wolverine's transactional middleware
opts.UseEntityFrameworkCoreTransactions();
// THIS IS A NEW API IN Wolverine 5.6!
opts.PublishDomainEventsFromEntityFrameworkCore<Entity>(x => x.Events);
// Enrolling all local queues into the
// durable inbox/outbox processing
opts.Policies.UseDurableLocalQueues();
});
In the Wolverine configuration above, the EF Core transactional middleware now “knows” how to scrape out possible domain events from the active DbContext.ChangeTracker and publish them through Wolverine. Moreover, the EF Core transactional middleware is doing all the operation ordering for you so that the events are enqueued as outgoing messages as part of the transaction and potentially persisted to the transactional inbox or outbox (depending on configuration) before the transaction is committed.
To make this as clear as possible, this approach is completely reliant on the EF Core transactional middleware.
Oh, and also note that this domain event “scraping” is supported and tested with the IDbContextOutbox<T> service if you want to use it in application code outside of Wolverine message handlers or HTTP endpoints.
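If you haven’t seen IDbContextOutbox<T> before, here’s a rough sketch of that usage, reusing the ItemsDbContext from earlier in a hypothetical service (the method names come from the Wolverine EF Core integration, but double check the docs):
public class BacklogService(IDbContextOutbox<ItemsDbContext> outbox)
{
    public async Task CommitToSprintAsync(Guid itemId, Guid sprintId)
    {
        var item = await outbox.DbContext.BacklogItems.FindAsync(itemId);
        var sprint = await outbox.DbContext.Sprints.FindAsync(sprintId);

        // Raises a domain event inside the entity that will be
        // scraped out of the DbContext and published by Wolverine
        item.CommitTo(sprint);

        // Saves the DbContext changes and flushes the captured
        // messages through Wolverine's outbox together
        await outbox.SaveChangesAndFlushMessagesAsync();
    }
}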
In the future, this approach could also support the thread safe event collection that the sample from the first section used, but I’m dubious that that’s actually necessary.
If I were building a system that embeds domain event publishing directly in domain model entity classes, I would prefer this approach. But, let’s talk about another option that will not require any changes to Wolverine…
Relay Events from Entity to Wolverine Cascading Messages
In this approach, which I grant some people won’t like at all, we’ll simply pipe the event messages from the domain entity right to Wolverine by utilizing Wolverine’s cascading messages feature.
This time I’m going to change the BacklogItem entity class to something like this:
public class BacklogItem
{
public Guid Id { get; private set; }
public string Description { get; private set; }
public virtual Sprint Sprint { get; private set; }
public DateTime CreatedAtUtc { get; private set; } = DateTime.UtcNow;
// The exact return type isn't hugely important here
public object[] CommitTo(Sprint sprint)
{
Sprint = sprint;
return [new BacklogItemCommitted(Id, sprint.Id)];
}
}
With the handler signature:
public static class CommitToSprintHandler
{
public static object[] Handle(
CommitToSprint command,
// There's a naming convention here about how
// Wolverine "knows" the id for the BacklogItem
// from the incoming command
[Entity] BacklogItem item,
[Entity] Sprint sprint
)
{
return item.CommitTo(sprint);
}
}
The approach above lets you make the handler a single pure function, which is always great for unit testing; eliminates the need to do any customization of the DbContext type; makes it unnecessary to bother with any kind of IEventPublisher interface; and lets you keep the logic for which event messages should be raised completely in your domain model entity types.
I’d also argue that this approach makes it clearer to later developers that “hey, additional messages may be published as part of handling the CommitToSprint command,” and I think that’s invaluable. I’ll harp on this more later, but I think the traditional, MediatR-flavored approach to domain events from the first example at the top makes application code harder to reason about and therefore more buggy over time.
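As a quick illustration of that pure function testability, a unit test might go something like this (again assuming xUnit/Shouldly and some test-friendly way to construct the entities):
public class CommitToSprintCascadingTests
{
    [Fact]
    public void relays_the_event_from_the_entity()
    {
        // Hypothetical construction of the entities under test
        var item = new BacklogItem();
        var sprint = new Sprint();
        var command = new CommitToSprint(item.Id, sprint.Id);

        var messages = CommitToSprintHandler.Handle(command, item, sprint);

        // The cascading messages are just the return value
        messages.ShouldHaveSingleItem()
            .ShouldBeOfType<BacklogItemCommitted>();
    }
}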
Embedding IEventPublisher into the Entities
Lastly, let’s move on to what is my least favorite approach, one that I will from this moment be recommending against for any JasperFx clients, but that is now completely supported by Wolverine 5.6+! Let’s use an IEventPublisher interface like this:
// Just assume that this little abstraction
// eventually relays the event messages to Wolverine
// or whatever messaging tool you're using
public interface IEventPublisher
{
void Publish<T>(T @event) where T : IDomainEvent;
}
// Using a Nullo just so you don't have potential
// NullReferenceExceptions
public class NulloEventPublisher : IEventPublisher
{
public void Publish<T>(T @event) where T : IDomainEvent
{
// Do nothing.
}
}
public abstract class Entity
{
public IEventPublisher Publisher { get; set; } = new NulloEventPublisher();
}
public class BacklogItem : Entity
{
public Guid Id { get; private set; } = Guid.CreateVersion7();
public string Description { get; private set; }
// ZOMG, I forgot how annoying ORMs are. Use a document database
// and stop worrying about making things virtual just for lazy loading
public virtual Sprint Sprint { get; private set; }
public void CommitTo(Sprint sprint)
{
Sprint = sprint;
Publisher.Publish(new BacklogItemCommitted(Id, sprint.Id));
}
}
Now, on to a Wolverine implementation for this pattern. You’ll need to do just a couple things. First, add this line of configuration to Wolverine, and note there are no generic arguments here:
// This will set you up to scrape out domain events in the
// EF Core transactional middleware using a special service
// I'm just about to explain
opts.PublishDomainEventsFromEntityFrameworkCore();
Now, build a real implementation of that IEventPublisher interface above:
public class EventPublisher(OutgoingDomainEvents events) : IEventPublisher
{
    public void Publish<T>(T e) where T : IDomainEvent
    {
        events.Add(e);
    }
}
OutgoingDomainEvents is a service from the WolverineFx.EntityFrameworkCore NuGet package that is registered as Scoped by the usage of the EF Core transactional middleware. Next, register your custom IEventPublisher with the Scoped lifecycle:
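// The Scoped lifecycle matters here so that your EventPublisher and
// the EF Core transactional middleware share the same OutgoingDomainEvents
builder.Services.AddScoped<IEventPublisher, EventPublisher>();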
How do you wire up IEventPublisher to the domain entities getting loaded out of your EF Core DbContext? Frankly, I don’t want to know. Maybe a repository abstraction around your DbContext types? Dunno. I hate that kind of thing in code, but I perfectly trust *you* to do that and to not make me see that code.
What’s important is that within a message handler or HTTP endpoint, if you resolve the IEventPublisher through DI and use the EF Core transactional middleware, the domain events published to that interface will be piped correctly into Wolverine’s active messaging context.
Likewise, if you are using IDbContextOutbox<T>, the domain events published to IEventPublisher will be correctly piped to Wolverine as long as you pull both IEventPublisher and IDbContextOutbox<T> from the same scoped service provider (a nested container in Lamar / StructureMap parlance). Admittedly, this is all a bit of sleight of hand, but it keeps your domain entities synchronous while the actual message publishing happens later.
One last note: in unit testing you might use a stand-in “spy” like this:
public class RecordingEventPublisher : OutgoingMessages, IEventPublisher
{
    public void Publish<T>(T @event) where T : IDomainEvent
    {
        Add(@event);
    }
}
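And a test using that spy might look something like this (the entity construction is again hypothetical):
[Fact]
public void publishes_the_committed_event()
{
    var publisher = new RecordingEventPublisher();
    var sprint = new Sprint();
    var item = new BacklogItem { Publisher = publisher };

    item.CommitTo(sprint);

    // RecordingEventPublisher is just a collection of what was published
    publisher.ShouldHaveSingleItem()
        .ShouldBeOfType<BacklogItemCommitted>();
}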
Summary
I have always hated this Domain Events pattern and much prefer the full “Critter Stack” approach with the Decider pattern and event sourcing. But Wolverine is picking up a lot more users who combine it with EF Core (and JasperFx deeply appreciates these customers!), and I know damn well that there will be more and more demand for this pattern as people with more traditional DDD backgrounds who are used to more DI-reliant tools transition to Wolverine. Now was an awfully good time to plug this gap.
If it were me, I would also prefer having an Entity just store published domain events on itself and depending on Wolverine “scraping” these events out of the DbContext change tracking, so you don’t have to do any kind of gymnastics and extra layering to attach some kind of IEventPublisher to your Entity types.
Lastly, if you’re comparing this straight up to the MediatR approach, just keep in mind that this is not an apples to apples comparison, because Wolverine is also correctly utilizing its transactional outbox for resiliency, which is a feature that MediatR does not provide.
Starting today, Babu Annamalai is taking a larger role at JasperFx Software (LLC) to help expand our support coverage to just about every time zone on the planet. Babu is a long-time member of the Marten and now Critter Stack core team. In addition to some large contributions like the Patch API in Marten and smoothing out database migrations, he’s been responsible for most of our DevOps support and the documentation websites that help keep the Critter Stack moving forward.
A little more about Babu:
Babu has over 28 years of experience, excelling in technology and product management roles within renowned enterprise firms. His expertise lies in crafting cutting-edge products and solutions customised for the ever-evolving domain of investment management and research. Co-maintainer of Marten. Owns and manages .NET OSS libraries ReverseMarkdown.Net and MysticMind.PostgresEmbed. Drawing from his wealth of knowledge, he recently embarked on a thrilling entrepreneurial journey, establishing Radarleaf Technologies, providing top-notch consultancy and bespoke software development services.