On Debugging Problems

This post is from a decade-old draft where I thought I would write a grand treatise about every bit of wisdom I could possibly impart to other developers. So let this be proof that I absolutely can still write about software development outside of promoting the “Critter Stack” if you give me a decade to finish the post and a day when my newfangled AI agent is being extremely slow!

A lot of my time over the past 15 years has been spent helping other software developers analyze, debug, and fix problems in their own work. I also spend a lot of time working in some fairly complicated problem domains where I not infrequently have to unwind problems of my own creation. It follows pretty naturally that I’ve spent a significant amount of time thinking both about how I debug problems in code and how I would try to teach other developers to be better at debugging their own problems.

Here, then, are a few thoughts about the act of debugging problems in software systems. You’ll note that I’m not really going to talk about specific tools at all. I’m happy to leave that to other people and instead just focus on how you use your noggin to think your way into determining root causes and fixes.

Do what I say, not what I just did…

And of course, while I was writing this I wasn’t practicing what I’m preaching, so let’s look at a microcosm of all this advice for a problem I was troubleshooting right after I wrote most of this post. Right now I’m focused on a forthcoming product for JasperFx Software named “CritterWatch.” That product will have to manage communication between Wolverine systems, the centralized CritterWatch service, and connected CritterWatch browsers. Last week I was struggling with why the initial load of the web pages was missing some data from the backend, as shown in this diagram:

I was working pretty late in the afternoon waiting for my daughter to wrap up an after school activity, and I admittedly wasn’t at my best. I followed these steps:

  1. There were custom value types in the information flow from Wolverine to CritterWatch to the CritterWatch user interface, so I first went with the reasonable theory that there were JSON serialization problems that had cropped up since last I’d worked seriously in this codebase.
  2. I was feeling lazy at the end of the day, so I tried to get Claude to “solve” the problem for me and told it to look at serialization issues. Claude claimed to have fixed a problem, but no dice when I ran the whole stack.
  3. I wasn’t really getting into the whole flow yet, so I kept trying to just add more tests to verify bidirectional JSON serialization and deserialization of the custom types, and all of those new tests were passing just fine.
  4. I finally listened to what the feedback was telling me and moved off of my initial theory once I had really disproven that theory about serialization problems six ways to Sunday.
  5. I walked away for the night and came back fresh the next morning.
  6. After some rest, I went a lot deeper into the code and got back up to speed on how data flowed throughout. With a little more judicious debugging, I was able to determine that the data was flowing incorrectly much earlier in the workflow than I’d originally thought, and that the problem was very different from what I’d first theorized. To use the analogy I’ll introduce in a bit, I took my mental X-Wing right into the trench to go find the thermal exhaust port.
  7. Knowing where the problem was, I switched to using automated tests that were finer grained than running the whole system end to end and much more quickly repeatable.
  8. With the faster feedback loop and a new theory about what could be going wrong, I made a change to the internals that turned out to be correct and I got the tests passing before moving on to running the full stack end to end and seeing the whole system function correctly. Finally.
  9. And lastly, I actually did remember to go rip out some of the temporary tracing code I’d put in while diagnosing the problems.

Does any of this matter if we have LLMs now?

Definitely for the short term, and probably for the longer term too? I’ve been very impressed with my usage of Claude Code so far, but I’ve seen plenty of times when I could probably have solved an issue much faster myself than letting Claude hit it with brute force. Even with a tool like Claude or Copilot, I’d still say you want to feed it what you think the likely cause is, along with whatever facts you know, to speed up its work. And I guess at the end of the day I just don’t believe an LLM is going to be better than a capable human at everything, especially when you’re facing a novel problem.

I have found it advantageous to have the LLM tools retrofit tests to verify or disprove theories about why code is broken. It’s also been helpful to get an LLM to create summaries of a code subsystem including dumping that out to a markdown file with requested Mermaid diagrams explaining the flow.

Oh, and I absolutely think it’s important for you to understand how the fancy AI tool was able to fix the issue later anyway.

15/15 Minute Rule

Before we get into the real meat of this post, there are two unhealthy behaviors I see from other developers asking for help that drive me absolutely bananas:

  1. Not making the slightest attempt to solve their problem on their own before they ask for help, though I think this is commonly caused by developers who can’t do what I’m going to call step 1 below
  2. Spending hours banging their head against the wall trying to solve something that I could have helped them get through much more quickly

I really wish I’d made a note of who wrote this first — and we can happily quibble about what exactly the time duration should be — but I’d highly recommend the “15/15 Rule” of debugging. You should spend at least 15 minutes trying to debug a problem on your own, both to develop your own skills and to keep from overloading your colleagues and maybe causing them to resent you. After that though, spend no more than 15 more minutes trying to solve the problem before you lift your head up and ask someone else for help.

It’s been years since I have been an active development lead, but I vividly remember how frustrating either extreme can be. At the time I started this post I was a remote “architect” who was on call to help anyone who could reach me on Hipchat (I think? Long before Slack), and I tried to take that role seriously. Now, one very important thing I should have taken more seriously then and that I have tried to impart to technical leaders since is that you should try really hard to teach the other people on your team how you debugged the problem and found the eventual answer. That could include knowing to check certain log files, database tables, or just imparting more about how the system works.

And of course, there were days when I wished fewer people needed my help on any given day. Then a few years later that company hired a lot more people in the main office with a few senior folks mixed in and people stopped asking me for help altogether. And if you are assuming that probably led to me no longer working there, you’d probably be at least partially right.

Step 1: Believing that you can figure it out

I despise touchy-feely, kumbaya statements about software development, but I do actually think that believing you can understand the problem you’re facing and fix it is the all-important first step. If you stay on the outside and just randomly try different things without understanding what you’re dealing with, or set Debugger breakpoints in what you hope might be a useful spot, you might get lucky, but in my experience that’s not a sure thing.

Look, I’m solidly Gen X, so the analogy I’m going to make is trying to destroy the Death Star. Sure, you can try to lob shots from the outside, but the only sure way is to fly your X-Wing right through that trench until you find the thermal exhaust port that will blow up the whole thing.

Long story short, I think you need to be ready to roll your sleeves up and learn enough about whatever it is you’re working on to have a good enough mental model of how information flows and where the points are that the process could be failing.

Oh, and another important point: experience helps tremendously in debugging problems, and the only way to get experience is to debug problems yourself, or at least not ask the older dev on your team to just fix things for you.

Create a Mental Model

Okay, so once we actually believe that we can ultimately understand the code, the problem, and the eventual solution, I suggest you create or refresh your own mental model of how the code that’s failing works. I’d say to first focus on:

  • Where, when, and how system state is mutated
  • The workflow of the code or system or systems in play. How does information pass between elements of the code or the system?

What I think you’re specifically looking for is the most likely place in that workflow where things are going wrong, and then you should…

One thing I already like AI for is to quickly say “make me a summary of XYZ code in markdown with a Mermaid sequence diagram for ABC.”
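
As an example of the kind of output I ask for (this diagram is a hypothetical sketch, not generated from any real codebase), a Mermaid sequence diagram summarizing a message flow might look like:

```mermaid
sequenceDiagram
    participant App as Wolverine App
    participant Service as Backend Service
    participant Browser as Browser UI
    App->>Service: publishes status and metrics messages
    Service->>Service: aggregates and persists state
    Service->>Browser: pushes updates to connected clients
```

Even a rough diagram like this hands you a short list of hand-off points to inspect when data goes missing.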

Have a Theory, then Prove or Disprove It

Formulate the most likely theory about why the functionality is failing and devise some way to verify or disprove that theory. Sometimes you can do this through targeted Debugger usage, but I’ll also highly recommend trying to build automated tests to verify intermediate steps as that’s almost always going to be a much faster feedback cycle than manually running through a system or series of systems. To put this more bluntly, always be looking to shorten your debugging session by looking for faster, finer grained feedback cycles that can tell you something valuable. And of course, absolutely leave behind automated test coverage as regression tests against whatever the problem turns out to be.
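
To make that concrete, here’s the shape of a quick round-trip test you might write to prove or disprove a “JSON serialization is broken” theory. This is a hypothetical sketch assuming xUnit and System.Text.Json; the NodeId type is invented for illustration:

```csharp
using System;
using System.Text.Json;
using Xunit;

// Hypothetical custom value type standing in for whatever custom
// types are flowing through your own system
public record NodeId(Guid Value);

public class SerializationTheoryTests
{
    [Fact]
    public void node_id_round_trips_through_json()
    {
        var original = new NodeId(Guid.NewGuid());

        var json = JsonSerializer.Serialize(original);
        var restored = JsonSerializer.Deserialize<NodeId>(json);

        // If this reliably passes, the serialization theory is disproven
        // and it's time to look elsewhere in the workflow
        Assert.Equal(original, restored);
    }
}
```

If a test like this passes consistently, you’ve cheaply eliminated one theory and can move on to the next most likely failure point.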

Having a working theory will provide some structure to how you’re trying to debug and fix your broken code. However, you need to be able to drop that working theory and look for a new one as soon as you get feedback to the contrary. Several of the longest debugging sessions I’ve undergone were prolonged by me refusing to give up on my initial explanation for what was wrong.

I was joking online the other day that Extreme Programming was the fount of all knowledge about software development. One of the ideas I learned from XP was that having to frequently use a Debugger tool meant you should be adding finer-grained automated tests, and that you generally don’t want to spend a long time fussing with a Debugger if you can help it.

Pay Attention to Trends and Correlation

Correlation isn’t necessarily causation, but it will absolutely suggest issues with certain types of input or configuration, point you toward something that deserves further investigation, and serve as an input into creating a viable theory.

Select isn’t Broken

The saying “Select isn’t Broken” comes from the seminal Pragmatic Programmer book that every aspiring developer should read at some point (it’s pretty short). I also like the Coding Horror take on this from years ago.

The gist of this is that the source of your problem is far more likely to be in your freshly written code than in the much more mature tools you may be using. As a .NET developer, I know that a bug in my system is much more likely caused by my code than by the core CLR code underneath it that’s used by millions of developers.

This isn’t necessarily saying that you’re a bad developer, but it can definitely save you time to assume that the problem is in your code and start from there.

Another way to put this is to always apply…

Occam’s Razor

From Wikipedia’s entry on Occam’s Razor:

Occam’s razor is the problem-solving principle that, when presented with competing hypothetical answers to a problem, one should select the one that makes the fewest assumptions.

This one is simple: look for the most likely explanation for your problem and focus in on that in your testing and debugging. Sorry to say this, but Occam’s Razor is frequently going to point at your code.

I probably shouldn’t tell anybody this, but an easy “tell” that I’m getting annoyed or impatient with somebody that I’m trying to help solve a problem is if I use either the phrase “Occam’s Razor says…” or something to the effect of “Dude, SELECT isn’t broken…”

I wrote that last sentence a decade ago but it’s still completely true today.

Don’t be so quick to blame the weird thing

So here’s the situation: you just got assigned to a project that uses some kind of technology that is completely new to you. Or maybe you pulled down some hotshot OSS library for the first time. Either way, you might find yourself pounding your head on your desk, unable to understand why some piece of code that uses that weird new thing isn’t working.

The common reaction is to blame the weird new thing, but you might just be completely wrong. Even when faced with some kind of novel technology, you still need to check for the normal, banal problems. I’ve fallen prey to this myself several times in the past, only to realize that my code that used the strange new thing was wrong. In particular, I remember feeling pretty stupid when I realized that I had a file path in the code wrong. It wasn’t the strange new thing; it was just the common kind of mistake that I’ve made and quickly fixed several hundred times before.

Another way to put this is to always blame your code first. I say that because it should be easiest and quickest to troubleshoot your own code and eliminate any possible problems there before getting into the strange new thing.

This is closely related to the initial “Believing that you can figure it out” rule, because blaming the “weird thing” allows you to push off all responsibility instead of diving in to try to understand what’s not working.

This section was written a decade ago after an incident where I happened to be in the main office when a developer said in a stand up meeting that he had a “FubuMVC problem.” As the primary author of FubuMVC, I jumped in and said I’d pair with him to try to fix whatever it was. When we sat down, he showed me the exception he was getting, and a quick glance at the inner exception pointed at it being just a run of the mill issue with a collection in his code not being initialized before the code tried to modify it. Easy money. But of course, the next day I popped into their stand up again and he told them that we had fixed the “FubuMVC problem.” Grr.

As the author of many OSS tools meant for other developers, I sometimes get the brunt of this issue. I’ve got some thoughts and lessons learned about better or worse stack trace and exception messages from my time writing tools and frameworks for other developers that I tackle in a later section — but, the main thing I want to tell other developers sometimes is to…

In no small part, Wolverine’s runtime architecture was purposely designed to streamline Exception stack traces compared to how FubuMVC at that time (or ASP.NET Core today, for that matter) would add an absurd amount of framework noise to stack traces. I would argue that this is a significant advantage of Wolverine over the middleware strategies of other tools in the .NET space today.

Read the Exception Message Carefully

Yeah, this one is self-explanatory. Yet this section is here because oftentimes the most important information you need to pay attention to is buried in an inner exception. And don’t jump to incorrect conclusions about what the exception message and stack trace are telling you.

And also, the single most important debugging rule: if you jump online and flag me down to help you solve your problem, don’t ever just say “I had an Exception.” Instead, post the whole damn stack trace (please)!

Just Flat Out Walk Away

Yeah, I’m not kidding. Sometimes if you’re really stuck and you can get away with this, just walk away and go do something else and hopefully recharge. It’s not a perfectly reliable or predictable strategy at all, but your subconscious will frequently throw up the eventual answer — or at least a new theory — at some random time while you’re walking the dog or doing dishes or whatever your daily routines are.

Probably more importantly, getting some mental rest and coming back with a fresh mind, ready to entertain new ideas about what’s going wrong and how to solve it, is frequently far more helpful than banging your head on your desk and hoping that this time you’ll notice something in the Debugger that tips you off to the problem.

The Secret of Management

As I was wrapping this up, I realized that I had recreated this classic episode of NewsRadio (one of the greatest sitcoms ever, even if it gave us Joe Rogan, and RIP Phil Hartman):

Using the Azure Service Bus Emulator for Local Wolverine Development

As I wrote last week, I finally got on the AI bus and started using Claude Code. One of the things I’ve been doing with that so far is ripping through “chore” tasks that I’ve long wanted to do, but sounded too time consuming. One of those things was converting Wolverine’s own test suite for its Azure Service Bus integration into using the Azure Service Bus Emulator — which turned out to be extremely fortunate timing as the emulator recently gained support for the Azure Service Bus Management API that finally made the emulator usable.

The emulator is already turning out to be very useful for Wolverine development, especially in areas where we needed 3-5 namespaces just to test features like “namespace per tenant” and named broker features inside of Wolverine. A JasperFx client just happened to ask about that this week, so I lazily had Claude build a new section in the Wolverine docs to explain how that’s working for us and how you might use the emulator for your own local testing. Maybe in the next Wolverine (5.17) we’ll add some syntactical sugar to make this a little easier.

In the meantime, here’s how we’re using the emulator for Wolverine testing:

The Azure Service Bus Emulator allows you to run integration tests against a local emulator instance instead of a real Azure Service Bus namespace. This is exactly what Wolverine uses internally for its own test suite.

Docker Compose Setup

The Azure Service Bus Emulator requires a SQL Server backend. Here is a minimal Docker Compose setup:

networks:
  sb-emulator:

services:
  asb-sql:
    image: "mcr.microsoft.com/azure-sql-edge"
    environment:
      - "ACCEPT_EULA=Y"
      - "MSSQL_SA_PASSWORD=Strong_Passw0rd#2025"
    networks:
      sb-emulator:

  asb-emulator:
    image: "mcr.microsoft.com/azure-messaging/servicebus-emulator:latest"
    volumes:
      - ./docker/asb/Config.json:/ServiceBus_Emulator/ConfigFiles/Config.json
    ports:
      - "5673:5672" # AMQP messaging
      - "5300:5300" # HTTP management
    environment:
      SQL_SERVER: asb-sql
      MSSQL_SA_PASSWORD: "Strong_Passw0rd#2025"
      ACCEPT_EULA: "Y"
      EMULATOR_HTTP_PORT: 5300
    depends_on:
      - asb-sql
    networks:
      sb-emulator:

TIP

The emulator exposes two ports: the AMQP port (5672) for sending and receiving messages, and an HTTP management port (5300) for queue/topic administration. These must be mapped to different host ports.

Emulator Configuration File

The emulator reads a Config.json file on startup. A minimal configuration that lets Wolverine auto-provision everything it needs:

{
  "UserConfig": {
    "Namespaces": [
      {
        "Name": "sbemulatorns"
      }
    ],
    "Logging": {
      "Type": "File"
    }
  }
}

You can also pre-configure queues and topics in this file if needed:

{
  "UserConfig": {
    "Namespaces": [
      {
        "Name": "sbemulatorns",
        "Queues": [
          {
            "Name": "my-queue",
            "Properties": {
              "MaxDeliveryCount": 3,
              "LockDuration": "PT1M",
              "RequiresSession": false
            }
          }
        ],
        "Topics": [
          {
            "Name": "my-topic",
            "Subscriptions": [
              {
                "Name": "my-subscription",
                "Properties": {
                  "MaxDeliveryCount": 3,
                  "LockDuration": "PT1M"
                }
              }
            ]
          }
        ]
      }
    ],
    "Logging": {
      "Type": "File"
    }
  }
}

Connection Strings

The emulator uses standard Azure Service Bus connection strings with UseDevelopmentEmulator=true:

// AMQP connection for sending/receiving messages
var messagingConnectionString =
    "Endpoint=sb://localhost:5673;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=SAS_KEY_VALUE;UseDevelopmentEmulator=true;";

// HTTP connection for management operations (creating queues, topics, etc.)
var managementConnectionString =
    "Endpoint=sb://localhost:5300;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=SAS_KEY_VALUE;UseDevelopmentEmulator=true;";

WARNING

The emulator uses separate ports for messaging (AMQP) and management (HTTP) operations. In production Azure Service Bus, a single connection string handles both, but the emulator requires you to configure these separately.

Configuring Wolverine with the Emulator

The key to using the emulator with Wolverine is setting both the primary connection string (for AMQP messaging) and the ManagementConnectionString (for HTTP administration) on the transport:

var builder = Host.CreateApplicationBuilder();
builder.UseWolverine(opts =>
{
    opts.UseAzureServiceBus(messagingConnectionString)
        .AutoProvision()
        .AutoPurgeOnStartup();

    // Required for the emulator: set the management connection string
    // to the HTTP port since it differs from the AMQP port
    var transport = opts.Transports.GetOrCreate<AzureServiceBusTransport>();
    transport.ManagementConnectionString = managementConnectionString;

    // Configure your queues, topics, etc. as normal
    opts.ListenToAzureServiceBusQueue("my-queue");
    opts.PublishAllMessages().ToAzureServiceBusQueue("my-queue");
});

Creating a Test Helper

Wolverine’s own test suite uses a static helper extension method to standardize emulator configuration across all tests. Here’s the pattern:

public static class AzureServiceBusTesting
{
    // Connection strings pointing at the emulator
    public static readonly string MessagingConnectionString =
        "Endpoint=sb://localhost:5673;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=SAS_KEY_VALUE;UseDevelopmentEmulator=true;";

    public static readonly string ManagementConnectionString =
        "Endpoint=sb://localhost:5300;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=SAS_KEY_VALUE;UseDevelopmentEmulator=true;";

    private static bool _cleaned;

    public static AzureServiceBusConfiguration UseAzureServiceBusTesting(
        this WolverineOptions options)
    {
        // Delete all queues and topics on first usage to start clean
        if (!_cleaned)
        {
            _cleaned = true;
            DeleteAllEmulatorObjectsAsync().GetAwaiter().GetResult();
        }

        var config = options.UseAzureServiceBus(MessagingConnectionString);
        var transport = options.Transports.GetOrCreate<AzureServiceBusTransport>();
        transport.ManagementConnectionString = ManagementConnectionString;

        return config.AutoProvision();
    }

    public static async Task DeleteAllEmulatorObjectsAsync()
    {
        var client = new ServiceBusAdministrationClient(ManagementConnectionString);

        await foreach (var topic in client.GetTopicsAsync())
        {
            await client.DeleteTopicAsync(topic.Name);
        }

        await foreach (var queue in client.GetQueuesAsync())
        {
            await client.DeleteQueueAsync(queue.Name);
        }
    }
}

Writing Integration Tests

With the helper in place, integration tests become straightforward:

public class when_sending_messages : IAsyncLifetime
{
    private IHost _host;

    public async Task InitializeAsync()
    {
        _host = await Host.CreateDefaultBuilder()
            .UseWolverine(opts =>
            {
                opts.UseAzureServiceBusTesting()
                    .AutoPurgeOnStartup();

                opts.ListenToAzureServiceBusQueue("send_and_receive");

                opts.PublishMessage<MyMessage>()
                    .ToAzureServiceBusQueue("send_and_receive");
            }).StartAsync();
    }

    public async Task DisposeAsync()
    {
        await _host.StopAsync();
    }

    [Fact]
    public async Task send_and_receive_a_single_message()
    {
        var message = new MyMessage("Hello");

        var session = await _host.TrackActivity()
            .IncludeExternalTransports()
            .Timeout(30.Seconds())
            .SendMessageAndWaitAsync(message);

        session.Received.SingleMessage<MyMessage>()
            .Name.ShouldBe("Hello");
    }
}

TIP

Use .IncludeExternalTransports() on the tracked session so Wolverine waits for messages that travel through Azure Service Bus rather than only tracking in-memory activity.

Disabling Parallel Test Execution

Because the emulator is a shared resource, tests that create and tear down queues or topics can interfere with each other when run in parallel. Wolverine’s own test suite disables parallel execution for its Azure Service Bus tests:

// Add to a file like NoParallelization.cs in your test project
[assembly: CollectionBehavior(CollectionBehavior.CollectionPerAssembly)]
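
Alternatively (and this is standard xUnit configuration, not anything Wolverine-specific), you can switch off parallel collection execution with an `xunit.runner.json` file in the test project, copied to the output directory:

```json
{
  "$schema": "https://xunit.net/schema/current/xunit.runner.schema.json",
  "parallelizeTestCollections": false
}
```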

Critter Stack Roadmap Update for 1st Quarter 2026

That is an American Polecat (black-footed ferret), our avatar for our newest Critter Stack project.

Mostly for my own sake, to collect my own thoughts, I wanted to do a little update on the Critter Stack roadmap as it looks right now. This is an update on Critter Stack Roadmap for 2026 from December. Things have changed a little bit, or really just become more clear. While the rest of the Critter Stack core team have been early adopters of AI tools, I was late to the party. But two weeks into my own adoption of Claude Code, my ambition for the year has hugely expanded, and this new update will reflect that.

Also, we’ve delivered an astonishing amount of new functionality in the first six weeks of 2026:

  • Marten’s new composite projection capability that is already getting usage. This feature is going to hopefully make it much easier to create denormalized “query model” projections with Marten to support reporting and dashboard screens
  • Wolverine got rate limiting middleware support (community built feature)
  • Wolverine’s options for transactional middleware, inbox, outbox, and scheduled messaging support grew to include Oracle, MySql, Sqlite, and CosmosDb. Weasel support for Critter Stack-style “it just works” migrations was added for Oracle, MySql, and Sqlite as well

Short to Medium Term Roadmap

I think we are headed toward Marten 9.0 and Wolverine 6.0 releases this year, but probably not until the 2nd or even 3rd quarter.

CritterWatch

My personal focus (i.e. JasperFx’s) is switching to CritterWatch as of today. We have a verbal agreement with a JasperFx Software client to have a functional CritterWatch MVP in their environment by the end of March 2026, so here we go! More on this soon, as I’ll probably do quite a bit of thinking and analysis out loud about how this should function. The MVP scope is still this:

  • A visualization and explanation of the configuration of your Critter Stack application
  • Performance metrics integration from both Marten and Wolverine
  • Event Store monitoring and management of projections and subscriptions
  • Wolverine node visualization and monitoring
  • Dead Letter Queue querying and management
  • Alerting – but I don’t have a huge amount of detail yet. I’m paying close attention to the issues JasperFx clients see in production applications though, and using that to inform what information CritterWatch will surface through its user interface and push notifications

Marten 8.*

I think that Marten 8.* has just about played out and there’s only a handful of new features I’m personally thinking about before we effectively turn the page on Marten 8.*:

  1. First Class EF Core Projections. Just the ability to use an EF Core DbContext to write projected data. I’ve thought that this would further help Marten users with reporting needs.
  2. An ability to tag event streams with user-defined “natural keys”, and efficient mechanisms to use those natural keys in APIs like FetchForWriting() and FetchLatest(). This will be done in conjunction with Wolverine’s “aggregate handler workflow.” This has been occasionally requested and on our roadmap for a couple years, but it moves up now because of some ongoing client work

Add in some ongoing improvements to the new “composite projection” feature and some improvements to the robustness of the Async Daemon subsystem and I think that’s a wrap on Marten 8.

One wild card is that Marten will gain some kind of model for Dynamic Consistency Boundaries (DCB) this year. I’m not sure whether I think that could or should be done in 8.* or wait for 9.0 though. I was initially dubious about DCB because it largely seemed to be a workaround for event store tools that can’t support strong consistency between event streams the way that Marten can. I’ve come around to DCB a little bit more after reviewing some JasperFx client code where they need to do quite a few cross-stream operations and seeing some opportunity to reduce repetitive code. This will be part of an ongoing process of improving the full Critter Stack’s ability to express cross-stream commands and will involve the integration into Wolverine as well.

Wolverine 5.*

Wolverine has exploded in development and functionality over the past three months, but I think that’s mostly played out as well. Looking at the backlog today, it’s mostly small ball refinements here and there. As mentioned before, I think Wolverine will be part of the improvements to cross-stream operations with Marten as well.

Wolverine gets a lot of community contributions though, and that could continue as a major driver of new features.

Introducing Polecat!

After 10 years of people sagely telling us that Marten would be much more popular if only it supported SQL Server, let’s all welcome Polecat to the Critter Stack. Polecat is going to be a SQL Server-backed Event Store and Document Db tool within the greater Critter Stack ecosystem. As you can imagine, Polecat is very much based on Marten with some significant simplifications. Right now the very basic event sourcing capabilities are already in place, but there’s plenty more to do before I’d suggest using it in a production application.

The key facts about its approach so far:

  • Supply robust Event Store functionality using SQL Server as the storage mechanism
  • Mimics Marten’s API, and it’s likely some of the public API ends up being standardized between the two tools
  • Uses the same JasperFx.Events library for event abstractions and projection or subscription base types
  • Uses Weasel.SqlServer for automatic database migrations similar to Marten
  • Supports the bigger Critter Stack “stateful resource” model with Weasel to build out schema objects
  • Supports both conjoined and separate database multi-tenancy
  • Projections will be based on the model in JasperFx.Events and supply SingleStreamProjection, MultiStreamProjection, EventProjection, and FlatTableProjection right out of the box
  • STJ only for the serialization. No Newtonsoft support this time
  • QuickAppend will be the default event appending approach
  • Only support .NET 10
  • Only support SQL Server 2025 (v17)
  • Utilize the new SQL Server JSON type much like Marten uses the PostgreSQL JSONB
  • Strictly using source generators instead of the Marten code generation model — but let’s call this an experiment for now that might end up moving to Marten 9.0 later on
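
To make the SQL Server bullets above concrete, here is a purely hypothetical sketch of what an events table using SQL Server 2025’s native json type might look like. Every table and column name here is invented for illustration; Polecat’s actual schema will almost certainly differ:

```sql
-- Hypothetical sketch only; not Polecat's real schema
CREATE TABLE polecat_events
(
    seq_id      bigint IDENTITY(1,1) PRIMARY KEY,
    stream_id   uniqueidentifier NOT NULL,
    version     bigint NOT NULL,
    event_type  varchar(250) NOT NULL,

    -- SQL Server 2025's native json type, playing the role that
    -- jsonb plays for Marten on PostgreSQL
    data        json NOT NULL,

    occurred_at datetimeoffset NOT NULL DEFAULT SYSDATETIMEOFFSET()
);
```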

I blew a tremendous amount of time in late 2024 and throughout 2025 getting ready to do this work by pulling out much of the guts of Marten Event Sourcing into potentially reusable libraries, and Polecat is the result.

Selfishly, the CritterWatch approach requires its own event sourced persistence, and I’m hoping that Polecat and SQL Server could be used as an alternative to Marten and PostgreSQL for shops that are interested in CritterWatch but don’t today use PostgreSQL.

Marten 9.0 and Wolverine 6.0

There will be major version releases of the two main critters later this year. These releases will be all about optimizing the cold start time of the two tools and at least moving closer to true AOT compliance. We’ll be reevaluating the code generation model of both tools as part of this work.

The only other concrete detail we know is that these releases will dump .NET 8.0 support.

Summary

The road map changes all the time based on what issues clients and users are hitting and sometimes because we just have to stop and respond to something Microsoft or other technologies are doing. But at least for this moment, this is what the Critter Stack core team and I are thinking about.

Big Critter Stack Releases

The Critter Stack had a big day today with releases for both Marten and Wolverine.

First up, we have Marten 8.22 that included:

  • Lots of bug fixes, including several old LINQ related bugs and issues related to full text search that finally got addressed
  • Some improvements for the newer Composite Projections feature as users start to use it in real project work. Hat tip to Anne Erdtsieck on this one (and a JasperFx client needing an addition to it as well)
  • Some optimizations, including a potentially big one as Marten can now use a source generator to build some of the projection code that before depended on not perfectly efficient Expression compilation. This will impact “self aggregating” snapshot projections that use the Apply / Create / ShouldDelete conventions

Next, a giant Wolverine 5.16 release that brings:

  • Many, many bug fixes
  • Several small feature requests for our HTTP support
  • Improved resiliency for Kafka especially but also for any usage of external message brokers with Wolverine. See Sending Error Handling. Plus better error handling for durable listener endpoints when the transactional inbox database is unavailable
  • Wait, what? Wolverine has experimental support for CosmosDb as a transactional inbox/outbox and all of Wolverine’s declarative persistence helpers?
  • The ability to mark some message handlers or HTTP endpoints as opting out of automatic transactional middleware (for a JasperFx client). See this, but it applies to all persistence options.
  • Modular monolith usage improvements for a pair of JasperFx clients who are helping us stretch Wolverine to yet more use cases.
  • More to come on this, but we’ve recently slipped in Sqlite and Oracle support for Wolverine

2 Weeks of Claude Code for Me

I’m busy all the time with the Critter Stack tools, answering questions on Slack or Discord, and trying like hell to make JasperFx Software go. I’ve admittedly had my head in the sand a bit about the AI tools for coding, thinking that what I do is relatively novel for the most part and that I wasn’t missing out on anything yet because the AI stuff was probably mostly trained up on, and most useful for, repetitive feature work.

The unfortunate analogy I have to make for myself is harking back to my first job as a piping engineer helping design big petrochemical plants. I got to work straight out of college with a fantastic team of senior engineers who were happy to teach me and to bring me along instead of just being dead weight for them. This just happened to be right at the time the larger company was transitioning from old fashioned paper blueprint drafting to 3D CAD models for the piping systems. Our team got a single high powered computer with a then revolutionary Riva 128 (with a gigantic 8 whole megabytes of memory!) video card that was powerful enough to let you zoom around the 3D models of the piping systems we were designing. Within a couple weeks I was much faster doing some kinds of common work than my older peers just because I knew how to use the new workstation tools to zip around the model of our piping systems. It occurred to me a couple weeks ago that in regards to AI I was probably on the wrong side of that earlier experience with 3D CAD models and knew it was time to take the plunge and get up to speed.

Anyway, enough of that. I spent a week thinking about what I’d try to do first with AI coding agents and spent some time watching some YouTube videos on writing prompts. I signed up for a Claude Max subscription at the beginning of last week to just go jump into the deep end. My tally so far in two weeks for progress is:

  • Added MySql and Oracle database engine support to Weasel and Wolverine up to and including the ability for the Critter Stack to manage database migrations on the fly like we already did for PostgreSQL and SQL Server. Granted it took a couple attempts at the Oracle support, but it just doesn’t hurt to throw away code that didn’t cost you much to write. Babu added Sqlite support as well.
  • Filled in a gap in our SQL Server support for a queue per tenant database that had been outstanding for quite a while
  • I had Claude fix some holes in our compliance test suite for our RavenDb support that I’d been neglecting for a while
  • Still in progress, but I have the beginning of a “Marten style migrations for EF Core” subsystem going that’s going to make the Wolverine testing for our EF Core integration go a lot smoother when the kinks are worked out as well as potentially making EF Core less aggravating to use for just about anyone
  • I’m almost done with a potentially big performance optimization for Marten projections that I’d wanted to do for 6 months, but never had anywhere near enough time to research fully enough to do. In the end that took 30 minutes of my time and a couple hours of chugging. Just to hammer this point home: it helps tremendously to have a large base of tests
  • I improved quite a few “blinking” tests in the Wolverine codebase. Not perfect, but way better than before
  • I pushed quite a few improvements to the Wolverine CI infrastructure. That’s a work in progress, but hey, it is progress
  • I got a previously problematic test suite in Marten running in CI for the first time
  • Marten’s open issue count (bugs and enhancements) is at 16 as I write this, and that’s the least that number has been since I filled out the initial story list in GitHub in late 2015.
  • Wolverine’s open issue count is coincidentally down to 16. That number has hovered between 50-70 for the past several years. I was able to address a handful of LINQ related bugs that have been hanging around for years because the effort to reward ratios seemed all wrong
  • I filled in some significant gaps in documentation in Wolverine that I’d been putting off for ages. I certainly went in after the fact and made edits, but we’re in better shape now. But of course, I’ve already got a tiny bit of feedback about something in that being wrong that I should have caught.
  • I had Claude look for savings in object allocations in both Marten and Wolverine, and got plenty of little micro-optimizations – mostly around convenient usages of LINQ instead of slightly uglier C# usage. I’m not the very best guy in the world around low level things, so that’s been nice.
  • I converted a couple of our solutions to centralized package management. That’s something I’ve kind of wanted to do for a while, but who has time to mess with something like that in a big solution?

And really to make this sound a bit more impressive, this was with me doing 8 hours of workshops for a client and probably about 10-12 other meetings with clients during these two weeks, so it’s not as if I had unbroken blocks of time in which to crank away. I also don’t have a terribly good handle on “Vibe Programming” and I’m not sure at all what a “Ralph Loop” is, so all of that very real progress was without me being completely up to speed on how to best incorporate AI tools.

Moreover, it’s already changed my perspective on the Critter Stack roadmap for this year because some things I’ve long wanted to do that sounded like too much work and too much risk now seem actually quite feasible based on the past couple weeks.

With all of that said, here are my general takeaways:

  • I think Steve Yegge’s AI Vampire post is worth some thought — and I also just thought it was cool that Steve Yegge is still around because he has to be older than me. I think the usage of AI is a little exhausting sometimes just because it encourages you to do a lot of context shifting as you get long running AI agent work going on different codebases and different features.
  • I already resent the feeling that I’m wasting time if I don’t have an agent loaded and churning
  • It’s been great when you have very detailed compliance test frameworks that the AI tools can use to verify the completion of the work
  • It’s also been great for tasks that have relatively straightforward acceptance criteria, but will involve a great deal of repetitive keystrokes to complete
  • I’ve been completely shocked at how well Claude Opus has been able to pick up on some of the internal patterns within Marten and Wolverine and utilize them correctly in new features
  • The Critter Stack community by and large does a great job of writing up reproduction steps and even reproduction GitHub repositories in bug reports. In many cases I’ve been able to say “suggest an approach to fix [link to github issue]” and been able to approve Claude’s suggestion.
  • I’m still behind the learning curve, but a few times now I’ve gotten Claude to work interactively to explore approaches to new features and get to a point where I could just turn it loose
  • Yeah, there’s no turning back unless the economic model falls apart
  • I’m absolutely conflicted about tools like Claude clearly using *my* work and *my* writings in the public to build solutions that rival Marten and Wolverine and there’s already some cases of that happening
  • The Tailwind thing upset me pretty badly, truth be told

Anyway, I’m horrified, elated, excited, and worried all at once about the AI coding agents after just two weeks, and I’m absolutely concerned about how that plays out in our industry, my own career, and our society.

Building a Greenfield System with the Critter Stack

JasperFx Software works hand in hand with our clients to improve our client’s outcomes on software projects using the “Critter Stack” (Marten and Wolverine). Based on our engagements with client projects as well as the greater Critter Stack user base, we’ve built up quite a few optional usages and settings in the two frameworks to solve specific technical challenges.

The unfortunate reality of managing a long lived application framework such as Wolverine or a complicated library like Marten is the need to continuously improve the tools while trying really hard not to introduce regression errors for our clients when they upgrade. To that end, we’ve had to make several potentially helpful features “opt in,” meaning that users have to explicitly turn on feature flag type settings for these features. A common cause of this is any change that introduces database schema changes, as we try really hard to only do that in major version releases (Wolverine 5.0 added some new tables to SQL Server or PostgreSQL storage, for example).

And yes, we’ve still introduced regression bugs in Marten or Wolverine far more times than I’d like, even with trying to be careful. In the end, I think the only guaranteed way to constantly and safely improve tools like the Critter Stack is to just be responsive to whatever problems slip through your quality gates and try to fix those problems quickly to regain trust.

With all that being said, let’s pretend we’re starting a greenfield project with the Critter Stack and we want to build the best performing system possible, with some added options for improved resiliency as well. To jump to the end state, this is what I’m proposing as a new optimized greenfield setup for users:

var builder = Host.CreateApplicationBuilder();

builder.Services.AddMarten(m =>
{
    // Much more coming...
    m.Connection(builder.Configuration.GetConnectionString("marten"));

    // 50% improvement in throughput, less "event skipping"
    m.Events.AppendMode = EventAppendMode.Quick;
    // or if you care about the timestamps -->
    m.Events.AppendMode = EventAppendMode.QuickWithServerTimestamps;

    // 100% do this, but be aggressive about taking advantage of it
    m.Events.UseArchivedStreamPartitioning = true;

    // These cause some database changes, so can't be defaults,
    // but these might help "heal" systems that have problems
    // later
    m.Events.EnableAdvancedAsyncTracking = true;

    // Enables you to mark events as just plain bad so they are skipped
    // in projections from here on out.
    m.Events.EnableEventSkippingInProjectionsOrSubscriptions = true;

    // If you do this, you pretty well have to use FetchForWriting
    // in your commands.
    // But also, you should use FetchForWriting() in your command
    // handlers anyway
    // This will optimize the usage of Inline projections, but will force
    // you to treat your aggregate projection "write models" as being 
    // immutable in your command handler code
    // You'll want to use the "Decider Pattern" / "Aggregate Handler Workflow"
    // style for your commands rather than a self-mutating "AggregateRoot"
    m.Events.UseIdentityMapForAggregates = true;

    // Future proofing a bit. Will help with some future projection
    // rebuild optimizations
    m.Events.UseMandatoryStreamTypeDeclaration = true;

    // This is just annoying anyway
    m.DisableNpgsqlLogging = true;
})
// This will remove some runtime overhead from Marten
.UseLightweightSessions()

.IntegrateWithWolverine(x =>
{
    // Let Wolverine do the load distribution better than
    // what Marten by itself can do
    x.UseWolverineManagedEventSubscriptionDistribution = true;
});

builder.Services.AddWolverine(opts =>
{
    // This *should* have some performance improvements, but would
    // require downtime to enable in existing systems
    opts.Durability.EnableInboxPartitioning = true;

    // Extra resiliency for unexpected problems, but can't be
    // defaults because this causes database changes
    opts.Durability.InboxStaleTime = 10.Minutes();
    opts.Durability.OutboxStaleTime = 10.Minutes();

    // Just annoying
    opts.EnableAutomaticFailureAcks = false;

    // Relatively new behavior that will store "unknown" messages
    // in the dead letter queue for possible recovery later
    opts.UnknownMessageBehavior = UnknownMessageBehavior.DeadLetterQueue;
});

using var host = builder.Build();

return await host.RunJasperFxCommands(args);

Now, let’s talk more about some of these settings…

Lightweight Sessions with Marten

The first option we’re going to explicitly add is to use “lightweight” sessions in Marten:

var builder = Host.CreateApplicationBuilder();

builder.Services.AddMarten(m =>
{
    // Elided configuration...
})
// This will remove some runtime overhead from Marten
.UseLightweightSessions()

By default, Marten will use a heavier version of IDocumentSession that incorporates an Identity Map internally to track documents (entities) already loaded by that session. Likewise, when you request to load an entity by its identity, Marten’s session will happily check if it has already loaded that entity and give you the same object back without making another database call.

The identity map usage is mostly helpful when you have unclear or deeply nested call stacks where different elements of the code might try to load the same data as part of the same HTTP request or command handling. If you follow what we consider to be Critter Stack best practices, especially for Wolverine usage, you’ll know that we very strongly recommend against deep call stacks and excessive layering.

Moreover, I would argue that you should never need the identity map behavior if you’re building a system with an idiomatic Critter Stack approach, so the default session type is actually harmful in that it adds extra runtime overhead. The “lightweight” sessions run leaner by completely eliminating all the dictionary storage and lookups.
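To make that difference concrete, here’s a quick sketch of the behavioral contrast. The `User` document type and the `Show` helper are my own stand-ins for illustration, not types from this post; `IdentitySession()`, `LightweightSession()`, and `LoadAsync<T>()` are the real Marten APIs:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Marten;

// Hypothetical document type, purely for illustration
public record User(Guid Id, string Name);

public static class SessionComparison
{
    public static async Task Show(IDocumentStore store, Guid userId)
    {
        // Identity map session: loaded documents are tracked in a dictionary
        await using var tracked = store.IdentitySession();
        var a = await tracked.LoadAsync<User>(userId);
        var b = await tracked.LoadAsync<User>(userId);
        // The second load is served from the in-memory map:
        // same object instance, only one database round trip
        Debug.Assert(ReferenceEquals(a, b));

        // Lightweight session: no identity map, no dictionary bookkeeping
        await using var lightweight = store.LightweightSession();
        var c = await lightweight.LoadAsync<User>(userId);
        var d = await lightweight.LoadAsync<User>(userId);
        // Two loads mean two database calls and two distinct objects
        Debug.Assert(!ReferenceEquals(c, d));
    }
}
```

If your handlers load each entity exactly once, as idiomatic Critter Stack code tends to, that tracking dictionary is pure overhead.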

Why, you ask, is the identity map behavior the default?

  1. We were originally designing Marten as a near drop in replacement for RavenDb in a big system, so we had to mimic that behavior right off the bat to be able to make the replacement in a timely fashion
  2. If we changed the default behavior, it could easily break code in existing systems that upgrade, in ways that are very hard to predict and unfortunately hard to diagnose. And of course, this is most likely a problem in the exact kind of codebases that are hard to reason about. How do I know this, and why am I so very certain this is so, you ask? Scar tissue.

Wolverine Idioms for MediatR Users

The Wolverine community fields a lot of questions from people who are moving to Wolverine from their previous MediatR usage. A quite natural response is to try to use Wolverine as a pure drop in replacement for MediatR and even try to use the existing MediatR idioms they’re already used to. However, Wolverine comes from a different philosophy than MediatR and most of the other “mediator” tools it has inspired, and using Wolverine with its own idioms might lead to much simpler code or more efficient execution. Inspired by a conversation I had online today, let’s jump into an example that I think shows quite a bit of contrast between the tools.

We’ve tried to lay out some of the differences between the tools in our Wolverine for MediatR Users guide, including the section this post is taken from.

Here’s an example of MediatR usage I borrowed from this blog post that shows the usage of MediatR within a shopping cart subsystem:

public class AddToCartRequest : IRequest<Result>
{
    public int ProductId { get; set; }
    public int Quantity { get; set; }
}

public class AddToCartHandler : IRequestHandler<AddToCartRequest, Result>
{
    private readonly ICartService _cartService;

    public AddToCartHandler(ICartService cartService)
    {
        _cartService = cartService;
    }

    public async Task<Result> Handle(AddToCartRequest request, CancellationToken cancellationToken)
    {
        // Logic to add the product to the cart using the cart service
        bool addToCartResult = await _cartService.AddToCart(request.ProductId, request.Quantity);
        bool isAddToCartSuccessful = addToCartResult; // Check if adding the product to the cart was successful.
        return Result.SuccessIf(isAddToCartSuccessful, "Failed to add the product to the cart."); // Return failure if adding to cart fails.
    }
}

public class CartController : ControllerBase
{
    private readonly IMediator _mediator;

    public CartController(IMediator mediator)
    {
        _mediator = mediator;
    }

    [HttpPost]
    public async Task<IActionResult> AddToCart([FromBody] AddToCartRequest request)
    {
        var result = await _mediator.Send(request);
        if (result.IsSuccess)
        {
            return Ok("Product added to the cart successfully.");
        }
        else
        {
            return BadRequest(result.ErrorMessage);
        }
    }
}

Note the usage of the custom Result type from the message handler. Folks using MediatR love these custom Result types when passing information between logical layers because they avoid throwing exceptions and communicate failure cases more clearly.

See Andrew Lock on Working with the result pattern for more information about the Result pattern.
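For readers who haven’t seen the pattern, a minimal sketch of a Result type like the one the sample uses might look like this. This is my own illustration of the general pattern, not the actual type from the library the sample was borrowed from:

```csharp
public record Result
{
    public bool IsSuccess { get; private init; }
    public string ErrorMessage { get; private init; } = string.Empty;

    public static Result Success() => new() { IsSuccess = true };
    public static Result Failure(string error) => new() { ErrorMessage = error };

    // Mirrors the Result.SuccessIf(...) call in the handler above:
    // success when the condition holds, a failure message otherwise
    public static Result SuccessIf(bool condition, string errorIfFalse)
        => condition ? Success() : Failure(errorIfFalse);
}
```

The point of the pattern is that the caller branches on `IsSuccess` instead of catching exceptions for expected failure cases.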

Wolverine is all about reducing code ceremony and we always strive to write application code as synchronous pure functions whenever possible, so let’s just write the exact same functionality as above using Wolverine idioms to shrink down the code:

public static class AddToCartRequestEndpoint
{
    // Remember, we can do validation in middleware, or
    // even do a custom Validate() : ProblemDetails method
    // to act as a filter so the main method is the happy path
    [WolverinePost("/api/cart/add"), EmptyResponse]
    public static IStorageAction<Cart> Post(
        AddToCartRequest request,

        // This usage will return a 400 status code if the Cart
        // cannot be found
        [Entity(OnMissing = OnMissing.ProblemDetailsWith400)] Cart cart)
    {
        return cart.TryAddRequest(request) ? Storage.Update(cart) : Storage.Nothing(cart);
    }
}

There’s a lot going on above, so let’s dive into some of the details:

I used Wolverine.HTTP to write the HTTP endpoint so we only have one piece of code for our “vertical slice” instead of having both the Controller method and the matching message handler for the same logical command. Wolverine.HTTP embraces our Railway Programming model and directly supports the ProblemDetails specification as a means of stopping the HTTP request, so validation pre-conditions can be checked by middleware and the main endpoint method really is the “happy path”.

The code above is using Wolverine’s “declarative data access” helpers you see in the [Entity] usage. We realized early on that a lot of message handlers or HTTP endpoints need to work on a single domain entity or a handful of entities loaded by identity values riding on either command messages, HTTP requests, or HTTP routes. At runtime, if the Cart isn’t found by loading it from your configured application persistence (which could be EF Core, Marten, or RavenDb at this time), the whole HTTP request would stop with status code 400 and a message communicated through ProblemDetails that the requested Cart cannot be found.

The key point I’m trying to prove is that idiomatic Wolverine results in potentially less repetitive code, less code ceremony, and less layering than MediatR idioms. Sure, it’s going to take a bit to get used to Wolverine idioms, but the potential payoff is code that’s easier to reason about and much easier to unit test — especially if you’ll buy into our A-Frame Architecture approach for organizing code within your slices.
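To back up that unit testing claim with a sketch: because the endpoint is a static pure function, a test is just a method call, with no TestServer and no mocked repository. This assumes xUnit, and the `Cart` and `AddToCartRequest` construction here is my guess at the domain types from the sample, not code from the original post:

```csharp
using Xunit;

public class AddToCartEndpointTests
{
    [Fact]
    public void endpoint_is_testable_as_a_pure_function()
    {
        // Arrange: plain objects, no infrastructure required
        var cart = new Cart();
        var request = new AddToCartRequest { ProductId = 5, Quantity = 2 };

        // Act: call the endpoint method directly
        var action = AddToCartRequestEndpoint.Post(request, cart);

        // Assert on the returned storage side effect rather than
        // verifying calls against a mocked ICartService
        Assert.NotNull(action);
    }
}
```

In a real test you would go further and assert on the specific storage action returned and the state of the cart itself.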

Validation Middleware

As another example, just to show how Wolverine’s runtime is different from MediatR’s, let’s consider the very common case of using Fluent Validation (or now DataAnnotations too!) middleware in front of message handlers or HTTP requests. With MediatR, you might use an IPipelineBehavior<T> implementation like this that will wrap all requests:

    public class ValidationBehaviour<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse> where TRequest : IRequest<TResponse>
    {
        private readonly IEnumerable<IValidator<TRequest>> _validators;
        public ValidationBehaviour(IEnumerable<IValidator<TRequest>> validators)
        {
            _validators = validators;
        }
      
        public async Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken, RequestHandlerDelegate<TResponse> next)
        {
            if (_validators.Any())
            {
                var context = new ValidationContext<TRequest>(request);
                var validationResults = await Task.WhenAll(_validators.Select(v => v.ValidateAsync(context, cancellationToken)));
                var failures = validationResults.SelectMany(r => r.Errors).Where(f => f != null).ToList();
                if (failures.Count != 0)
                    throw new ValidationException(failures);
            }
          
            return await next();
        }
    }

I’ve seen plenty of alternatives out there with slightly different implementations. In some cases folks will use service location to probe the application’s IoC container for any possible IValidator<T> implementations for the current request. In all cases though, the implementations use runtime logic on every possible request to check whether there is any validation logic at all. The Wolverine version of Fluent Validation middleware does things a bit differently, with less runtime overhead, and it will also result in cleaner exception stack traces when things go wrong. Don’t laugh: we really did design Wolverine quite purposely to avoid the really nasty kind of exception stack traces you get from many other middleware or “behavior” based frameworks, like Wolverine’s predecessor tool FubuMVC did 😦

Let’s say that you have a Wolverine.HTTP endpoint like so:

public record CreateCustomer
(
    string FirstName,
    string LastName,
    string PostalCode
)
{
    public class CreateCustomerValidator : AbstractValidator<CreateCustomer>
    {
        public CreateCustomerValidator()
        {
            RuleFor(x => x.FirstName).NotNull();
            RuleFor(x => x.LastName).NotNull();
            RuleFor(x => x.PostalCode).NotNull();
        }
    }
}

public static class CreateCustomerEndpoint
{
    [WolverinePost("/validate/customer")]
    public static string Post(CreateCustomer customer)
    {
        return "Got a new customer";
    }

    [WolverinePost("/validate/customer2")]
    public static string Post2([FromQuery] CreateCustomer customer)
    {
        return "Got a new customer";
    }
}

In the application bootstrapping, I’ve added this option:

app.MapWolverineEndpoints(opts =>
{
    // more configuration for HTTP...

    // Opting into the Fluent Validation middleware from
    // Wolverine.Http.FluentValidation
    opts.UseFluentValidationProblemDetailMiddleware();
});

Just like with MediatR, you would need to register the Fluent Validation validator types in your IoC container as part of application bootstrapping. Now, here’s how Wolverine’s model is very different from MediatR’s pipeline behaviors. While MediatR applies that ValidationBehaviour to each and every message handler in your application whether or not that message type actually has any registered validators, Wolverine is able to peek into the IoC configuration and “know” whether there are registered validators for any given message type. If there are any registered validators, Wolverine will utilize them in the code it generates to execute the HTTP endpoint method shown above for creating a customer. If there is only one validator, and that validator is registered with Singleton scope in the IoC container, Wolverine generates this code:

        public class POST_validate_customer : Wolverine.Http.HttpHandler
        {
            private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions;
            private readonly Wolverine.Http.FluentValidation.IProblemDetailSource<WolverineWebApi.Validation.CreateCustomer> _problemDetailSource;
            private readonly FluentValidation.IValidator<WolverineWebApi.Validation.CreateCustomer> _validator;
    
            public POST_validate_customer(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions, Wolverine.Http.FluentValidation.IProblemDetailSource<WolverineWebApi.Validation.CreateCustomer> problemDetailSource, FluentValidation.IValidator<WolverineWebApi.Validation.CreateCustomer> validator) : base(wolverineHttpOptions)
            {
                _wolverineHttpOptions = wolverineHttpOptions;
                _problemDetailSource = problemDetailSource;
                _validator = validator;
            }
    
    
    
            public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
            {
                // Reading the request body via JSON deserialization
                var (customer, jsonContinue) = await ReadJsonAsync<WolverineWebApi.Validation.CreateCustomer>(httpContext);
                if (jsonContinue == Wolverine.HandlerContinuation.Stop) return;
                
                // Execute FluentValidation validators
                var result1 = await Wolverine.Http.FluentValidation.Internals.FluentValidationHttpExecutor.ExecuteOne<WolverineWebApi.Validation.CreateCustomer>(_validator, _problemDetailSource, customer).ConfigureAwait(false);
    
                // Evaluate whether or not the execution should be stopped based on the IResult value
                if (result1 != null && !(result1 is Wolverine.Http.WolverineContinue))
                {
                    await result1.ExecuteAsync(httpContext).ConfigureAwait(false);
                    return;
                }
    
    
                
                // The actual HTTP request handler execution
                var result_of_Post = WolverineWebApi.Validation.ValidatedEndpoint.Post(customer);
    
                await WriteString(httpContext, result_of_Post);
            }
    
        }

I should note that Wolverine’s Fluent Validation middleware will not generate any code for any HTTP endpoint where there are no known Fluent Validation validators for the endpoint’s request model. Moreover, Wolverine can even generate slightly different code for multiple validators versus a singular validator as a way of wringing out a little more efficiency in the common case of having only a single validator registered for the request type.

The point here is that Wolverine tries to generate the most efficient code possible based on what it can glean from the IoC container registrations and the signatures of the HTTP endpoint or message handler methods, while the MediatR model has to use runtime wrappers and conditional logic on every request.

Marten’s Aggregation Projection Subsystem

Marten has very rich support for projecting events into read, write, or query models. While there are other capabilities as well, the most common usage is probably to aggregate related events into a singular view. Marten projections can be executed Live, meaning that Marten creates the view by loading the target events into memory and building the view on the fly. Projections can also be executed Inline, meaning that the projected views are persisted as part of the same transaction that captures the events that apply to that projection. For this post though, I’m mostly talking about projections running asynchronously in the background as events are captured into the database (think eventual consistency).
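The lifecycle is chosen when you register the projection with Marten. As a hedged sketch, assuming a self-aggregating `Order` snapshot type of my own invention; `SnapshotLifecycle` is the registration enum in recent Marten versions, but double check the exact names against the Marten docs for your version:

```csharp
builder.Services.AddMarten(opts =>
{
    opts.Connection(connectionString);

    // Live: nothing is stored; the view is built on demand
    // from the raw events every time you ask for it
    opts.Projections.Snapshot<Order>(SnapshotLifecycle.Live);

    // Inline: the view is persisted in the same transaction
    // that appends the events
    // opts.Projections.Snapshot<Order>(SnapshotLifecycle.Inline);

    // Async: the Async Daemon builds the view in the background,
    // giving you eventual consistency
    // opts.Projections.Snapshot<Order>(SnapshotLifecycle.Async);
});
```

A given projection type registers exactly one lifecycle, so the alternatives are shown commented out.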

Aggregate Projections in Marten take some sort of grouping of events and process them into a single aggregated document representing the state of those events. These projections come in two flavors:

Single Stream Projections create a rolled up view of all or a segment of the events within a single event stream. These projections are done either by using the SingleStreamProjection<TDoc, TId> base type or by creating a “self aggregating” Snapshot approach with conventional Create/Apply/ShouldDelete methods that mutate or evolve the snapshot based on new events.

Multi Stream Projections create a rolled up view of a user-defined grouping of events across streams. These projections are done by sub-classing the MultiStreamProjection<TDoc, TId> class and are further described in Multi-Stream Projections. An example of a multi-stream projection might be a “query model” within an accounting system of some sort that rolls up the value of all unpaid invoices by active client.

You can also use a MultiStreamProjection to create views that are a segment of a single stream over time or version. Imagine that you have a system that models the activity of a bank account with event sourcing. You could use a MultiStreamProjection to create a view that summarizes the activity of a single bank account within a calendar month.
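The unpaid invoices example above might be sketched like this. All of the event and document type names here are my own invention; `MultiStreamProjection<TDoc, TId>`, the `Identity<TEvent>()` grouping call, and the conventional `Apply` methods are the real Marten building blocks:

```csharp
using System;
using Marten.Events.Projections;

// Hypothetical events captured on individual invoice streams
public record InvoiceIssued(Guid InvoiceId, Guid ClientId, decimal Amount);
public record InvoicePaid(Guid InvoiceId, Guid ClientId, decimal Amount);

// The rolled up, cross-stream query model: one document per client
public class ClientUnpaidTotal
{
    public Guid Id { get; set; } // the ClientId
    public decimal Outstanding { get; set; }
}

public class UnpaidInvoicesProjection : MultiStreamProjection<ClientUnpaidTotal, Guid>
{
    public UnpaidInvoicesProjection()
    {
        // Tell Marten how to group events from many invoice streams
        // onto a single aggregated document identity
        Identity<InvoiceIssued>(x => x.ClientId);
        Identity<InvoicePaid>(x => x.ClientId);
    }

    // Conventional Apply methods evolve the view one event at a time
    public void Apply(InvoiceIssued e, ClientUnpaidTotal view) => view.Outstanding += e.Amount;
    public void Apply(InvoicePaid e, ClientUnpaidTotal view) => view.Outstanding -= e.Amount;
}
```

The grouping is the interesting part: events from any number of streams fold into one document per client identity.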

The ability to use explicit code to define projections was hugely improved in the Marten 8.0 release.

Within your aggregation projection, you can express the logic for how Marten combines events into a view through either conventional methods (original, old school Marten) or through completely explicit code.

Within an aggregation, you also have a number of more advanced options.

Simple Example

The most common usage is to create a “write model” that projects the current state for a single stream, so on that note, let’s jump into a simple example.

I’m huge into epic fantasy book series, hence the silly original problem domain in the very oldest code samples. Hilariously, Marten has fielded and accepted pull requests that corrected our modeling of the timeline of the Lord of the Rings in sample code.

Martens on a Quest

Let’s say that we’re building a system to track the progress of a traveling party on a quest within an epic fantasy series like “The Lord of the Rings” or the “Wheel of Time” and we’re using event sourcing to capture state changes when the “quest party” adds or subtracts members. We might very well need a “write model” for the current state of the quest for our command handlers like this one:

    public sealed record QuestParty(Guid Id, List<string> Members)
    {
        // These methods take in events and update the QuestParty
        public static QuestParty Create(QuestStarted started) => new(started.QuestId, []);

        public static QuestParty Apply(MembersJoined joined, QuestParty party) =>
            party with
            {
                Members = party.Members.Union(joined.Members).ToList()
            };

        public static QuestParty Apply(MembersDeparted departed, QuestParty party) =>
            party with
            {
                Members = party.Members.Where(x => !departed.Members.Contains(x)).ToList()
            };

        public static QuestParty Apply(MembersEscaped escaped, QuestParty party) =>
            party with
            {
                Members = party.Members.Where(x => !escaped.Members.Contains(x)).ToList()
            };
    }

    For a little more context, the QuestParty above might be consumed in a command handler like this:

    public record AddMembers(Guid Id, int Day, string Location, string[] Members);

    public static class AddMembersHandler
    {
        public static async Task HandleAsync(AddMembers command, IDocumentSession session)
        {
            // Fetch the current state of the quest
            var quest = await session.Events.FetchForWriting<QuestParty>(command.Id);
            if (quest.Aggregate == null)
            {
                // Bad quest id, do nothing in this sample case
                return;
            }

            var newMembers = command.Members.Where(x => !quest.Aggregate.Members.Contains(x)).ToArray();
            if (!newMembers.Any())
            {
                return;
            }

            quest.AppendOne(new MembersJoined(command.Id, command.Day, command.Location, newMembers));
            await session.SaveChangesAsync();
        }
    }

    How Aggregation Works

    Just to understand a little bit more about the capabilities of Marten’s aggregation projections, let’s look at the diagram below that tries to visualize the runtime workflow of aggregation projections inside of the Async Daemon background process:

    How Aggregation Works
    1. The Daemon is constantly pushing a range of events at a time to an aggregation projection. For example, Events 1,000 to 2,000 by sequence number
    2. The aggregation “slices” the incoming range of events into a group of EventSlice objects that establishes a relationship between the identity of an aggregated document and the events that should be applied during this batch of updates for that identity. To be more concrete, a single-stream projection for QuestParty would create an EventSlice for each quest id it sees in the current range of events. Multi-stream projections will have some kind of custom “slicing” or grouping. For example, maybe in our Quest tracking system we have a multi-stream projection that tries to track how many monsters of each type are defeated. That projection might “slice” by looking for all MonsterDefeated events across all streams and group or slice incoming events by the type of monster. The “slicing” logic is automatic for single-stream projections, but will require explicit configuration or explicitly written logic for multi-stream projections.
    3. Once the projection has a known list of all the aggregate documents that will be updated by the current range of events, the projection will fetch each persisted document, first from any active aggregate cache in memory, then by making a single batched request to the Marten document storage for any missing documents and adding these to any active cache (see Optimizing Performance for more information about the potential caching).
    4. The projection will execute any event enrichment against the now known group of EventSlice. This process gives you a hook to efficiently “enrich” the raw event data with extra data lookups from Marten document storage or even other sources.
    5. Most of the work as a developer is in the application or “Evolve” step of the diagram above. After the “slicing”, the aggregation has turned the range of raw event data into EventSlice objects that contain the current snapshot of a projected document by its identity (if one exists), the identity itself, and the events from within that original range that should be applied on top of the current snapshot to “evolve” it to reflect those events. This can be coded either with the conventional Apply/Create/ShouldDelete methods or using explicit code, which almost inevitably means a switch statement. Using the QuestParty example again, the aggregation projection would get an EventSlice that contains the identity of an active quest, the snapshot of the current QuestParty document that is persisted by Marten, and the new MembersJoined et al. events that should be applied to the existing QuestParty object to derive the new version of QuestParty.
    6. Just before Marten persists all the changes from the application / evolve step, you have the RaiseSideEffects() hook to potentially raise “side effects” like appending additional events based on the now updated state of the projected aggregates or publishing the new state of an aggregate through messaging (Wolverine has first class support for Marten projection side effects through its Marten integration into the full “Critter Stack”).
    7. For the current event range and event slices, Marten will send all aggregate document updates or deletions, new event appending operations, and even outboxed, outgoing messages sent via side effects (if you’re using the Wolverine integration) in batches to the underlying PostgreSQL database. I’m calling this out because we’ve constantly found in Marten development that command batching to PostgreSQL is a huge factor in system performance and the async daemon has been designed to try to minimize the number of network round trips between your application and PostgreSQL at every turn.
    8. Assuming the transaction succeeds for the current event range and the operation batch in the previous step, Marten will call “after commit” observers. This notification for example will release any messages raised as a side effect and actually send those messages via whatever is doing the actual publishing (probably Wolverine).
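    To make the “slicing” idea in step 2 concrete, here is a hypothetical multi-stream projection for the monster tally scenario that groups MonsterDefeated events by monster type. The MonsterDefeated event and MonsterTally document are made up for illustration:

```csharp
// Hypothetical event and view for the monster tally example
public record MonsterDefeated(string MonsterType);

public class MonsterTally
{
    // The monster type acts as the identity of the view
    public string Id { get; set; }
    public int Defeated { get; set; }
}

public class MonsterTallyProjection: MultiStreamProjection<MonsterTally, string>
{
    public MonsterTallyProjection()
    {
        // The custom "slicing": group MonsterDefeated events from *any* stream
        // by the type of monster rather than by their original stream id
        Identity<MonsterDefeated>(x => x.MonsterType);
    }

    public void Apply(MonsterDefeated e, MonsterTally tally) => tally.Defeated++;
}
```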

    Marten happily supports immutable data types for the aggregate documents produced by projections, but also happily supports mutable types as well. The shape of the application code is a little different between the two, though.

    Starting with Marten 8.0, we’ve tried somewhat to conform to the terminology used by the Functional Event Sourcing Decider paper by Jeremie Chassaing. To that end, the API now refers to a “snapshot,” which really just means a version of the projected document, and to “evolve” as the step of applying new events to an existing snapshot to calculate a new snapshot.
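    In Decider terms, the “evolve” step is conceptually just a pure left fold of events over the snapshot. This is a minimal conceptual sketch, not a Marten API:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class DeciderSketch
{
    // evolve: fold each event into the snapshot with a pure "apply" function,
    // producing the new snapshot version from the old one plus the new events
    public static TSnapshot Evolve<TSnapshot, TEvent>(
        TSnapshot snapshot,
        IEnumerable<TEvent> events,
        Func<TSnapshot, TEvent, TSnapshot> apply)
        => events.Aggregate(snapshot, apply);
}
```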

    Catching Up with Recent Wolverine Releases

    Wolverine has had a very frequent release cadence the past couple months as community contributions, requests from JasperFx Software clients, and yes, sigh, bug reports have flowed in. Right now I think I can justifiably claim that Wolverine is innovating much faster than any of the other comparable tools in the .NET ecosystem.

    Some folks clearly don’t like that level of change of course, and I’ve always had to field some online criticism of our frequency of releases. I don’t think that pace continues forever.

    I thought that now would be a good time to write a little bit about the new features and improvements, just because so much of it happened over the holiday season, starting somewhat arbitrarily with the first of December and running to now.

    Inferred Message Grouping in Wolverine 5.5

    A massively important new feature in Wolverine 5 was our “Partitioned Sequential Messaging” that seeks to effectively head off problems with concurrent message processing by segregating message processing by some kind of business entity identity. Long story short, this feature can almost completely eliminate issues with concurrent access to data without eliminating parallel processing across unrelated messages.

    In Wolverine 5.5 we added the now obvious capability to let Wolverine automatically infer the messaging group id for messages handled by a Saga (the saga identity) or with the Aggregate Handler Workflow (the stream id of the primary event stream being altered in the handler):

    // Telling Wolverine how to assign a GroupId to a message, that we'll use
    // to predictably sort into "slots" in the processing
    opts.MessagePartitioning

        // This tells Wolverine to use the Saga identity as the group id for any message
        // that impacts a Saga or the stream id of any command that is part of the "aggregate handler workflow"
        // integration with Marten
        .UseInferredMessageGrouping()

        .PublishToPartitionedLocalMessaging("letters", 4, topology =>
        {
            topology.MessagesImplementing<ILetterMessage>();
            topology.MaxDegreeOfParallelism = PartitionSlots.Five;
            topology.ConfigureQueues(queue =>
            {
                queue.BufferedInMemory();
            });
        });

    “Classic” .NET Domain Events with EF Core in Wolverine 5.6

    Wolverine is attracting a lot of new users lately who, honestly, might originally have been interested only because of other tools’ recent licensing changes, and those users tend to come with a more typical .NET approach to application architecture than Wolverine’s idiomatic vertical slice architecture approach. These new users are also a lot more likely to be using EF Core than Marten, so we’ve had to invest more in EF Core integration.

    Wolverine 5.6 brought an ability to cleanly and effectively utilize a traditional .NET approach for “Domain Event” publishing through EF Core to Wolverine’s messaging.

    I wrote about that at the time in “Classic” .NET Domain Events with Wolverine and EF Core.

    Wolverine 5.7 Knocked Out Bugs

    There weren’t many new features of note, but Wolverine 5.7, released less than a week after 5.6, had five contributors and knocked out a dozen issues. The open issue count in Wolverine crested in December in the low 70s, and it’s down to the low 30s right now.

    Client Requests in Wolverine 5.8

    Wolverine 5.8 gave us some bug fixes, but also a couple of new features requested by JasperFx clients.

    The Community Went Into High Gear with Wolverine 5.9

    Wolverine 5.9 dropped the week before Christmas with contributions from seven different people.

    The highlights are:

    • Sandeep Desai has been absolutely on fire as a contributor to Wolverine, and in this release he finally made the HTTP messaging transport usable, with several follow-up pull requests in later versions improving that feature further. This enables Wolverine to use plain HTTP as a messaging transport, a capability I’ve long wanted as a prerequisite for CritterWatch.
    • Lodewijk Sioen added Wolverine middleware support for using Data Annotations with Wolverine.HTTP
    • The Rabbit MQ integration got more robust about reconnecting on errors

    Wolverine 5.10 Kicked off 2026 with a Bang!

    Wolverine 5.10 came out last week with contributions from eleven different folks. Plenty of bug fixes and contributions built up over the holidays. The highlights include:

    And several ad hoc requests from JasperFx clients, because that’s part of how we support our clients.

    Wolverine 5.11 Adds More Idempotency Options

    Wolverine 5.11 dropped this week with more bug fixes and new capabilities from five contributors. The big new feature was an improved option for enforcing message idempotency on non-transactional handlers as a request from a JasperFx support client.

    using var host = await Host.CreateDefaultBuilder()
        .UseWolverine(opts =>
        {
            opts.Durability.Mode = DurabilityMode.Solo;

            opts.Services.AddDbContextWithWolverineIntegration<CleanDbContext>(x =>
                x.UseSqlServer(Servers.SqlServerConnectionString));

            opts.Services.AddResourceSetupOnStartup(StartupAction.ResetState);

            opts.Policies.AutoApplyTransactions(IdempotencyStyle.Eager);

            opts.PersistMessagesWithSqlServer(Servers.SqlServerConnectionString, "idempotency");
            opts.UseEntityFrameworkCoreTransactions();

            // THIS RIGHT HERE
            opts.Policies.AutoApplyIdempotencyOnNonTransactionalHandlers();
        }).StartAsync();

    That release also included several bug fixes and an effort from me to go fill in some gaps in the documentation website. That release got us down to the lowest open issue count in years.

    Summary

    The Wolverine community has been very busy. It truly is a community of developers from all over the world, and we’re improving fast.

    I do think that the release cadence will slow down somewhat though as this has been an unusual burst of activity.

    Easier Query Models with Marten

    The Marten community made our first big release of the new year with 8.18 this morning. I’m particularly happy with a couple significant things in this release:

    1. We had eight different contributors in just the month of work that this release represents
    2. Anne Erdtsieck did a lot to improve our documentation for using our multi-stream projections for advanced query model projections
    3. The entire documentation section on projections got a much needed revamp and now includes a lot more information about capabilities from our big V8 release last year. I’m hopeful that the new structure and content makes this crucial feature set more usable.
    4. We improved Marten’s event enrichment ability within projections to more easily and efficiently incorporate information from outside of the raw event data
    5. The “Composite or Chained Projections” feature has been something we’ve talked about as a community for years, and now we have it

    The one consistent theme in those points is that Marten just got a lot better for our users for creating “query models” in systems.

    Let’s Build a TeleHealth System!

    I got to be a part of a project like this for a startup during the pandemic. Fantastic project with lots of great people. Even though I wasn’t able to use Marten on that project at the time (we used a hand-rolled Event Sourcing solution with Node.js + TypeScript), that project has informed several capabilities added to Marten in the years since, including the features shown in this post.

    Just to have a problem domain for the sample code, let’s pretend that we’re building a new online TeleHealth system that allows patients to register for an appointment online and get matched up with a healthcare provider for an appointment that day. The system will do all the work of coordinating these appointments as well as tracking how the healthcare providers spend their time.

    That domain might have some plain Marten document storage for reference data including:

    • Provider — representing a medical provider (Nurse? Physician? PA?) who fields appointments
    • Specialty — models a medical specialty
    • Patient — personal information about patients who are requesting appointments in our system

    Switching to event streams, we may be capturing events for:

    • Board – events modeling a single, closely related group of appointments during a single day. Think of “Pediatrics in Austin, Texas for January 19th”
    • ProviderShift – events modeling the activity of a single provider working in a single Board during a single day
    • Appointment – events recording the progress of an appointment including requesting an appointment through the appointment being cancelled or completed
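    For a rough sense of what those streams hold, here are event shapes consistent with the names used in the code samples later in this post. The names match the later samples, but the exact properties are assumptions on my part:

```csharp
// Hypothetical event types for the Appointment stream; exact shapes assumed
public record AppointmentRequested(Guid PatientId, string SpecialtyCode);
public record AppointmentRouted(Guid BoardId);
public record ProviderAssigned(Guid ProviderId);
```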

    Better Query Models

    The easiest and most common form of a projection in Marten is a simple “write model” that projects the information from a single event stream to a projected document. From our TeleHealth domain, here’s the “self-aggregating” Board:

    public class Board
    {
        private Board()
        {
        }

        public Board(BoardOpened opened)
        {
            Name = opened.Name;
            Activated = opened.Opened;
            Date = opened.Date;
        }

        public void Apply(BoardFinished finished)
        {
            Finished = finished.Timestamp;
        }

        public void Apply(BoardClosed closed)
        {
            Closed = closed.Timestamp;
            CloseReason = closed.Reason;
        }

        public Guid Id { get; set; }
        public string Name { get; private set; }
        public DateTimeOffset Activated { get; set; }
        public DateTimeOffset? Finished { get; set; }
        public DateOnly Date { get; set; }
        public DateTimeOffset? Closed { get; set; }
        public string CloseReason { get; private set; }
    }

    Easy money. All the projection has to do is apply the raw event data for that one stream and nothing else. Marten is even doing the event grouping for you, so there’s just not much to think about at all.

    Now let’s move on to more complicated usages. One of the things that makes Marten such a great platform for Event Sourcing is that it also has its dedicated document database feature set on top of the PostgreSQL engine. All that means that you can happily keep some relatively static reference data back in just plain ol’ documents or even raw database tables.

    To that end, let’s say in our TeleHealth system that we want to just embed all the information for a Provider (think a nurse or a physician) directly into our ProviderShift for easier usage:

    public class ProviderShift(Guid boardId, Provider provider)
    {
        public Guid Id { get; set; }
        public int Version { get; set; }

        public Guid BoardId { get; private set; } = boardId;
        public Guid ProviderId => Provider.Id;
        public ProviderStatus Status { get; set; } = ProviderStatus.Paused;
        public string Name { get; init; }
        public Guid? AppointmentId { get; set; }

        // I was admittedly lazy in the testing, so I just
        // completely embedded the Provider document directly
        // in the ProviderShift for easier querying later
        public Provider Provider { get; set; } = provider;
    }

    When mixing and matching document storage and events, Marten has always given you the ability to utilize document data during projections by brute force lookups in your projection code like this:

        public async Task<ProviderShift> Create(
            // The event data
            ProviderJoined joined, 
            IQuerySession session)
        {
            var provider = await session
                .LoadAsync<Provider>(joined.ProviderId);
    
            return new ProviderShift(joined.BoardId, provider);
        }

    The code above is easy to write and conceptually easy to understand, but when the projection is being executed in our async daemon where the projection is processing a large batch of events at one time, the code above potentially sets you up for an N+1 query anti-pattern where Marten has to make lots of small database round trips to get each referenced Provider every time there’s a separate ProviderJoined event.

    Instead, let’s use Marten’s recent hook for event enrichment and the new declarative syntax we just introduced in 8.18 today to get all the related Provider information in one batched query for maximum efficiency:

        public override async Task EnrichEventsAsync(SliceGroup<ProviderShift, Guid> group, IQuerySession querySession, CancellationToken cancellation)
        {
            await group
    
                // First, let's declare what document type we're going to look up
                .EnrichWith<Provider>()
    
                // What event type or marker interface type or common abstract type
                // we could look for within each EventSlice that might reference
                // providers
                .ForEvent<ProviderJoined>()
    
                // Tell Marten how to find an identity to look up
                .ForEntityId(x => x.ProviderId)
    
                // And finally, execute the look up in one batched round trip,
                // and apply the matching data to each combination of EventSlice, event within that slice
                // that had a reference to a ProviderId, and the Provider
                .EnrichAsync((slice, e, provider) =>
                {
                    // In this case we're swapping out the persisted event with the
                    // enhanced event type before each event slice is then passed
                    // in for updating the ProviderShift aggregates
                    slice.ReplaceEvent(e, new EnhancedProviderJoined(e.Data.BoardId, provider));
                });
        }

    Now, inside the actual projection for ProviderShift, we can use the EnhancedProviderJoined event from above like this:

        // This is a recipe introduced in Marten 8 to just write explicit code
        // to "evolve" aggregate documents based on event data
        public override ProviderShift Evolve(ProviderShift snapshot, Guid id, IEvent e)
        {
            switch (e.Data)
            {
                case EnhancedProviderJoined joined:
                    snapshot = new ProviderShift(joined.BoardId, joined.Provider)
                    {
                        Provider = joined.Provider, Status = ProviderStatus.Ready
                    };
                    break;
    
                case ProviderReady:
                    snapshot.Status = ProviderStatus.Ready;
                    break;
    
                case AppointmentAssigned assigned:
                    snapshot.Status = ProviderStatus.Assigned;
                    snapshot.AppointmentId = assigned.AppointmentId;
                    break;
    
                case ProviderPaused:
                    snapshot.Status = ProviderStatus.Paused;
                    snapshot.AppointmentId = null;
                    break;
    
            case ChartingStarted:
                    snapshot.Status = ProviderStatus.Charting;
                    break;
            }
    
            return snapshot;
        }

    In the sample above, I replaced the ProviderJoined event being sent to our projection with the richer EnhancedProviderJoined event, but there are other ways to send data to projections with a new References<T> event type that’s demonstrated in our documentation on this feature.

    Sequential or Composite Projections

    This feature was introduced in Marten 8.18 in response to feedback from several JasperFx Software clients who needed to efficiently create projections that effectively made de-normalized views across multiple stream types and used reference data outside of the events. Expect this feature to grow in capability as we get more feedback about its usage.

    Here are a handful of scenarios that Marten users have hit over the years:

    • Wanting to use the build products of Projection 1 as an input to Projection 2. You can do that today by running Projection 1 as Inline and Projection 2 as Async, but that’s imperfect and sensitive to timing. Plus, you might not have wanted to run the first projection Inline.
    • Needing to create a de-normalized projection view that incorporates data from several other projections and completely different types of event streams, but that previously required quite a bit of duplicated logic between projections
    • Looking for ways to improve the throughput of asynchronous projections by doing more batching of event fetching and projection updates by trying to run multiple projections together

    To meet these somewhat common needs more easily, Marten has introduced the concept of a “composite” projection where Marten is able to run multiple projections together and possibly divided into multiple, sequential stages. This provides some potential benefits by enabling you to safely use the build products of one projection as inputs to a second projection. Also, if you have multiple projections using much of the same event data, you can wring out more runtime efficiency by building the projections together so your system is doing less work fetching events and able to make updates to the database with fewer network round trips through bigger batches.

    In our TeleHealth system, we need to have single-stream “write model” projections for each of the three stream types. We also need a rich view of each Board that combines all the common state of the active Appointment and ProviderShift streams in that Board, including the more static Patient and Provider information, that can be used by the system to automate the assignment of providers to open appointments (a real telehealth system would need to be able to match up the requirements of an appointment with the licensing, specialty, and location of the providers as well as “knowing” which providers are available or estimated to be available). We probably also need to build a denormalized “query model” of all appointments that can be efficiently queried by our user interface on any of the elements of Board, Appointment, Patient, or Provider.

    What we really want is some way to efficiently utilize the upstream products and updates of the Board, Appointment, and ProviderShift “write model” projections as inputs to what we’ll call the BoardSummary and AppointmentDetails projections. We’ll use the new “composite projection” feature to run these projections together in two stages like this:

    Before we dive into each child projection, this is how we can set up the composite projection using the StoreOptions model in Marten:

    opts.Projections.CompositeProjectionFor("TeleHealth", projection =>
    {
        projection.Add<ProviderShiftProjection>();
        projection.Add<AppointmentProjection>();
        projection.Snapshot<Board>();

        // 2nd stage projections
        projection.Add<AppointmentDetailsProjection>(2);
        projection.Add<BoardSummaryProjection>(2);
    });

    First, let’s just look at the simple ProviderShiftProjection:

    public class ProviderShiftProjection: SingleStreamProjection<ProviderShift, Guid>
    {
        public ProviderShiftProjection()
        {
            // Make sure this is turned on!
            Options.CacheLimitPerTenant = 1000;
        }

        public override async Task EnrichEventsAsync(SliceGroup<ProviderShift, Guid> group, IQuerySession querySession, CancellationToken cancellation)
        {
            await group

                // First, let's declare what document type we're going to look up
                .EnrichWith<Provider>()

                // What event type or marker interface type or common abstract type
                // we could look for within each EventSlice that might reference
                // providers
                .ForEvent<ProviderJoined>()

                // Tell Marten how to find an identity to look up
                .ForEntityId(x => x.ProviderId)

                // And finally, execute the look up in one batched round trip,
                // and apply the matching data to each combination of EventSlice, event within that slice
                // that had a reference to a ProviderId, and the Provider
                .EnrichAsync((slice, e, provider) =>
                {
                    // In this case we're swapping out the persisted event with the
                    // enhanced event type before each event slice is then passed
                    // in for updating the ProviderShift aggregates
                    slice.ReplaceEvent(e, new EnhancedProviderJoined(e.Data.BoardId, provider));
                });
        }

        public override ProviderShift Evolve(ProviderShift snapshot, Guid id, IEvent e)
        {
            switch (e.Data)
            {
                case EnhancedProviderJoined joined:
                    snapshot = new ProviderShift(joined.BoardId, joined.Provider)
                    {
                        Provider = joined.Provider, Status = ProviderStatus.Ready
                    };
                    break;

                case ProviderReady:
                    snapshot.Status = ProviderStatus.Ready;
                    break;

                case AppointmentAssigned assigned:
                    snapshot.Status = ProviderStatus.Assigned;
                    snapshot.AppointmentId = assigned.AppointmentId;
                    break;

                case ProviderPaused:
                    snapshot.Status = ProviderStatus.Paused;
                    snapshot.AppointmentId = null;
                    break;

                case ChartingStarted:
                    snapshot.Status = ProviderStatus.Charting;
                    break;
            }

            return snapshot;
        }
    }

    Now, let’s go downstream and look at the AppointmentDetailsProjection that will ultimately need to use the build products of all three upstream projections:

    public class AppointmentDetailsProjection : MultiStreamProjection<AppointmentDetails, Guid>
    {
        public AppointmentDetailsProjection()
        {
            Options.CacheLimitPerTenant = 1000;

            Identity<Updated<Appointment>>(x => x.Entity.Id);
            Identity<IEvent<ProviderAssigned>>(x => x.StreamId);
            Identity<IEvent<AppointmentRouted>>(x => x.StreamId);
        }

        public override async Task EnrichEventsAsync(SliceGroup<AppointmentDetails, Guid> group, IQuerySession querySession, CancellationToken cancellation)
        {
            // Look up and apply specialty information from the document store
            // Specialty is just reference data stored as a document in Marten
            await group
                .EnrichWith<Specialty>()
                .ForEvent<Updated<Appointment>>()
                .ForEntityId(x => x.Entity.Requirement.SpecialtyCode)
                .AddReferences();

            // Also reference data (for now)
            await group
                .EnrichWith<Patient>()
                .ForEvent<Updated<Appointment>>()
                .ForEntityId(x => x.Entity.PatientId)
                .AddReferences();

            // Look up and apply provider information
            await group
                .EnrichWith<Provider>()
                .ForEvent<ProviderAssigned>()
                .ForEntityId(x => x.ProviderId)
                .AddReferences();

            // Look up and apply Board information that matches the events being
            // projected
            await group
                .EnrichWith<Board>()
                .ForEvent<AppointmentRouted>()
                .ForEntityId(x => x.BoardId)
                .AddReferences();
        }

        public override AppointmentDetails Evolve(AppointmentDetails snapshot, Guid id, IEvent e)
        {
            switch (e.Data)
            {
                case AppointmentRequested requested:
                    snapshot ??= new AppointmentDetails(e.StreamId);
                    snapshot.SpecialtyCode = requested.SpecialtyCode;
                    snapshot.PatientId = requested.PatientId;
                    break;

                // This is an upstream projection. Triggering off of a synthetic
                // event that Marten publishes from the early stage
                // to this projection running in a secondary stage
                case Updated<Appointment> updated:
                    snapshot ??= new AppointmentDetails(updated.Entity.Id);
                    snapshot.Status = updated.Entity.Status;
                    snapshot.EstimatedTime = updated.Entity.EstimatedTime;
                    snapshot.SpecialtyCode = updated.Entity.SpecialtyCode;
                    break;

                case References<Patient> patient:
                    snapshot.PatientFirstName = patient.Entity.FirstName;
                    snapshot.PatientLastName = patient.Entity.LastName;
                    break;

                case References<Specialty> specialty:
                    snapshot.SpecialtyCode = specialty.Entity.Code;
                    snapshot.SpecialtyDescription = specialty.Entity.Description;
                    break;

                case References<Provider> provider:
                    snapshot.ProviderId = provider.Entity.Id;
                    snapshot.ProviderFirstName = provider.Entity.FirstName;
                    snapshot.ProviderLastName = provider.Entity.LastName;
                    break;

                case References<Board> board:
                    snapshot.BoardName = board.Entity.Name;
                    snapshot.BoardId = board.Entity.Id;
                    break;
            }

            return snapshot;
        }
    }

    And also the definition for the downstream BoardSummary view:

    public class BoardSummaryProjection: MultiStreamProjection<BoardSummary, Guid>
    {
        public BoardSummaryProjection()
        {
            Options.CacheLimitPerTenant = 100;

            Identity<Updated<Appointment>>(x => x.Entity.BoardId ?? Guid.Empty);
            Identity<Updated<Board>>(x => x.Entity.Id);
            Identity<Updated<ProviderShift>>(x => x.Entity.BoardId);
        }

        public override Task EnrichEventsAsync(SliceGroup<BoardSummary, Guid> group, IQuerySession querySession, CancellationToken cancellation)
        {
            return group.ReferencePeerView<Board>();
        }

        public override (BoardSummary, ActionType) DetermineAction(BoardSummary snapshot, Guid identity, IReadOnlyList<IEvent> events)
        {
            snapshot ??= new BoardSummary { Id = identity };

            if (events.TryFindReference<Board>(out var board))
            {
                snapshot.Board = board;
            }

            var shifts = events.AllReferenced<ProviderShift>().ToArray();
            foreach (var providerShift in shifts)
            {
                snapshot.ActiveProviders[providerShift.ProviderId] = providerShift;
                if (providerShift.AppointmentId.HasValue)
                {
                    snapshot.Unassigned.Remove(providerShift.ProviderId);
                }
            }

            foreach (var appointment in events.AllReferenced<Appointment>())
            {
                if (appointment.ProviderId == null)
                {
                    snapshot.Unassigned[appointment.Id] = appointment;
                    snapshot.Assigned.Remove(appointment.Id);
                }
                else
                {
                    snapshot.Unassigned.Remove(appointment.Id);

                    var shift = shifts.FirstOrDefault(x => x.Id == appointment.ProviderId.Value);
                    snapshot.Assigned[appointment.Id] = new AssignedAppointment(appointment, shift?.Provider);
                }
            }

            return (snapshot, ActionType.Store);
        }
    }

Note the usage of the Updated<T> event types in the downstream projections' Evolve or DetermineAction methods. Updated<T> is a synthetic event added by Marten to tell the downstream projections which projected documents were updated for the current event range. These events carry the latest snapshot data for that event range, so the downstream projections can consume the upstream build products directly without making any additional fetches. This also guarantees that the downstream projections see exactly the right upstream projection data for that point in the event sequence.
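As a concrete illustration, a minimal downstream projection that listens only for Updated<Board> events might look something like the sketch below. The BoardNameLookup document is invented for this example; the Identity and TryFindReference calls mirror the BoardSummaryProjection shown above:

    // BoardNameLookup is a hypothetical read model invented for this sketch
    public class BoardNameLookup
    {
        public Guid Id { get; set; }
        public string? Name { get; set; }
    }

    public class BoardNameLookupProjection: MultiStreamProjection<BoardNameLookup, Guid>
    {
        public BoardNameLookupProjection()
        {
            // Slice the synthetic Updated<Board> events by the board's id
            Identity<Updated<Board>>(x => x.Entity.Id);
        }

        public override (BoardNameLookup, ActionType) DetermineAction(BoardNameLookup snapshot, Guid identity,
            IReadOnlyList<IEvent> events)
        {
            snapshot ??= new BoardNameLookup { Id = identity };

            // The Updated<Board> event already carries the latest Board snapshot,
            // so there's no need for an extra database fetch here
            if (events.TryFindReference<Board>(out var board))
            {
                snapshot.Name = board.Name;
            }

            return (snapshot, ActionType.Store);
        }
    }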

Moreover, the composite "telehealth" projection reads the event range once for all five constituent projections, and also applies the updates for all five projections at the same time to guarantee consistency between them.
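In rough pseudocode, that single-pass behavior amounts to something like this. Every name here is illustrative only, describing the behavior rather than Marten's actual internals:

    // Illustrative pseudocode only -- these types and methods are placeholders,
    // not Marten's internal implementation
    var events = await FetchEventRange(sequenceFloor, sequenceCeiling); // one read for the whole range

    await using var session = store.LightweightSession();

    // Each constituent projection evolves its documents from the same event range...
    foreach (var projection in compositeProjection.Constituents)
    {
        projection.Apply(events, session);
    }

    // ...and all the resulting document updates are committed in one
    // transaction, so the five read models can never be out of step
    await session.SaveChangesAsync();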

See the documentation on Composite Projections for more information about how this feature fits in with rebuilding, versioning, and non-stale querying.

    Summary

Marten has hopefully gotten much better at building the "query model" projections you'd use for bigger dashboard screens or for search within your application. We're hoping this makes Marten a better tool for real-life development.

The best way for an OSS project to grow healthily is to have a lot of user feedback and engagement, coupled with the maintainers reacting to that feedback with constant improvement. And while I'd sometimes like the fire hose of that "feedback" to stop for a couple of days, it does help drive the tools forward.

    The advent of JasperFx Software has enabled me to spend much more time working with our users and seeing the real problems they face in their system development. The features I described in this post are a direct result of engagements with at least four different JasperFx clients in the past year and a half. Drop us a line anytime at sales@jasperfx.net and I’d be happy to talk to you about how we can help you be more successful with Event Sourcing using Marten.