Critter Stack Sample Projects and Our Curated AI Skills

JasperFx Software has been busy lately creating a new set of AI Skill files that incorporate our recommendations for using the Critter Stack tools (Marten, Polecat, Wolverine, Weasel, Alba, and the forthcoming CritterWatch tool). As of the end of the day tomorrow (April 16th, 2026), these skill files will be available to all current JasperFx Software support clients.

If you’re interested in gaining access to these skills, just message us at sales@jasperfx.net.

How did you build the Skills files (so far)?

Let me tell you, this has been an exercise in sheer T*E*D*I*U*M. In some order, the Critter Stack core team and I:

  • Started with a Google doc laying out the subjects we needed to include in our skills files along with the key points about usage, design, and software architecture we wanted to enforce
  • I then used Claude to build a plan from that document after the team reviewed it, pulling in all the documentation websites, my blog, and a pass through Discord discussions to identify common areas of questions or confusion (which also helped us improve the documentation)
  • I admittedly let Claude take the first pass at the skill files, then reviewed each file by hand and made some course corrections as I went
  • I then let Claude use the new skills to convert several sample projects published online to Wolverine + Marten, then reviewed each conversion and corrected or added to the skills content as needed
  • Yet another pass at converting some additional sample projects with the corrected skills
  • Had Claude run the AI skills against a pair of large JasperFx client systems to identify issues in their code, and painstakingly reviewed that report while making yet more refinements and additions to the skills. Part of the goal was to distinguish which advice is strongly recommended for greenfield systems from which can safely be bypassed in existing systems, along with plenty of exception cases. This also turned into an exercise in identifying performance optimization opportunities. One of my hopes for the AI Skills is that developers can write the straightforward conceptual code first, then let the skills move that (hopefully test-covered) code to more idiomatic Wolverine usage and opt into some non-obvious Wolverine features that can lead to better performance.
  • Let a friendly community member try the AI skills against their new system, and we again refined the skills based on what that run found. For the record, it absolutely identified some important changes that needed to be made.

Whew, take my word for it, that was exhausting. But, the result is that I feel good about letting these new skill files out into the wild! Even knowing that we’ll 100% have to continue to refine and curate these things over time.

Summary of the AI Skills (So Far)

Each skill file is a structured Markdown document that gives AI assistants deep knowledge about a specific pattern, API, or migration path. When an AI assistant has access to these skills, it can generate idiomatic Critter Stack code, follow established conventions, and avoid common pitfalls — rather than guessing from generic .NET patterns.

Skill Categories

Getting Started (6 skills)

  • New project bootstrapping for Wolverine + Marten, Wolverine + EF Core, Wolverine + Polecat, and Wolverine + CosmosDB
  • Vertical slice architecture fundamentals
  • Modular monolith patterns with Wolverine

Wolverine Handlers (8 skills)

  • Building handlers with convention-based discovery
  • Pure function handlers and A-Frame Architecture
  • Declarative persistence with [Entity] and IStorageAction<T>
  • EF Core integration patterns
  • Middleware, Railway Programming, and FluentValidation
  • IoC and service optimization

Wolverine HTTP (3 skills)

  • HTTP endpoint fundamentals with [WolverineGet], [WolverinePost], etc.
  • HTTP + Marten integration with [Aggregate] and [WriteAggregate]
  • Hybrid handlers (HTTP + messaging)

Wolverine Messaging (2 skills)

  • Message routing, outbox, scheduled messages, partitioning
  • Resiliency policies, retry strategies, circuit breakers, DLQ handling

Marten Event Sourcing (14 skills)

  • Aggregate handler workflow with [AggregateHandler]
  • Event subscriptions and forwarding
  • 5 projection types: Single Stream, Multi Stream, Flat Table, Composite, Event Enrichment
  • 7 advanced topics: Async Daemon, Cross-Stream Operations, Dynamic Consistency Boundary, Indexes, Load Distribution, Multi-Tenancy, Optimization

Polecat (1 skill)

  • Setup guide and decision criteria for SQL Server 2025 with native JSON

Migration Skills (7 skills)

  • Converting from MediatR, MassTransit, NServiceBus, EventStoreDB/Eventuous, Minimal API, and MVC Core
  • Each includes API mapping tables, before/after code examples, and migration checklists

Messaging Transports (9 skills)

  • RabbitMQ, Azure Service Bus, AWS SQS/SNS, Kafka, SignalR, NATS, Redis, Apache Pulsar, MQTT
  • Each covers configuration, topology, error handling, and best practices

Testing (7 skills)

  • Alba HTTP testing, Wolverine tracked sessions, Wolverine + Marten integration testing
  • Marten-specific test patterns (CleanAllMartenDataAsync(), DocumentStore())
  • Test parallelization for xUnit, TUnit, NUnit, MSTest

Observability (5 skills)

  • OpenTelemetry setup, Prometheus metrics, Grafana dashboards
  • CLI diagnostics (describe, codegen-preview, db-apply)
  • Code generation strategies

CritterWatch Integration (1 skill)

  • Installing and configuring CritterWatch monitoring
  • Adding monitoring to Wolverine applications
  • Aspire orchestration patterns

Key Principles Taught

The skills encode battle-tested patterns refined through real-world sample conversions:

  1. Prefer synchronous handlers — let Wolverine middleware handle async persistence
  2. Avoid IResult — return concrete types for OpenAPI inference
  3. Use [Entity] aggressively — declarative entity loading replaces manual LoadAsync + null checks
  4. Move sad-path validation into Validate/ValidateAsync — keep handlers focused on the happy path
  5. Use Results.NoContent() over [EmptyResponse] — more intention-revealing for 204 responses
  6. Use IntegrateWithWolverine() + AutoApplyTransactions() — the foundation for everything
  7. Name commands in imperative form — CreateOrder, not OrderRequest
  8. One file per request type — colocate command record, validator, and endpoint class
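To make the principles above concrete, here is a hedged sketch of what a handler following several of them might look like. All the type names (ApproveOrder, Order, OrderApproved) and the Approve() domain method are hypothetical illustrations rather than code from a real sample, and I'm glossing over the persistence details that Wolverine's middleware would handle:

```csharp
// Hypothetical command and response types; imperative naming per principle 7
public record ApproveOrder(Guid Id);
public record OrderApproved(Guid Id);

public static class ApproveOrderEndpoint
{
    // Principle 4: sad-path checks live in Validate(), keeping Handle() happy-path only
    public static ProblemDetails Validate(Order order)
        => order.IsClosed
            ? new ProblemDetails { Detail = "Order is already closed", Status = 400 }
            : WolverineContinue.NoProblems;

    // Principles 1, 2, and 3: a synchronous handler returning a concrete type,
    // with [Entity] replacing manual LoadAsync + null checks
    [WolverinePost("/api/orders/{id}/approve")]
    public static OrderApproved Handle(ApproveOrder command, [Entity] Order order)
    {
        order.Approve(); // hypothetical domain method
        return new OrderApproved(order.Id);
    }
}
```

The point isn't the exact shapes, but that the happy path reads as a pure function while the framework middleware absorbs the loading, validation wiring, and persistence.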

Validated by Real Conversions

These skills were tested and refined by converting 10 real-world open-source projects in the JasperFx/CritterStackSamples repository — from MediatR, MassTransit, Clean Architecture, EventStoreDB, and modular monolith patterns to the Critter Stack. 107 Alba integration tests pass across all samples.

The Sample Projects

We’ve long fielded complaints, with some legitimacy, that the Critter Stack needed more sample projects. Luckily, as a side effect of all this AI skill file work, we now have the CritterStackSamples repository with all these converted projects! So far this is mostly showing Wolverine + Marten work without much asynchronous messaging, but we’ll continue to add to these samples over time. I know the next sample application I’m building is going to involve Marten’s new DCB capability. And we’ll surely add more samples for Polecat too.

Why aren’t these skills free?

Really just two reasons:

  1. These skills have been primarily built through lessons learned as JasperFx has assisted our clients, and they have even been trained and corrected through usage on code from JasperFx customers. Moreover, the skills will be constantly improved based on JasperFx client usage.
  2. The long term viability of the Critter Stack depends on there being a successful company behind the tools. Especially in the .NET ecosystem, it is not feasible to succeed as an OSS project of this complexity without commercial support. This is part of the answer to that need.

In other words, I just want some sweeteners for folks considering JasperFx support contracts!

Are you changing your mind about licensing?

No, and for all of you just ready to scream at us if we even flirt with making the same licensing change as MediatR or MassTransit, we’re still committed to the “Open Core” model for the Critter Stack. I.e., the currently MIT-licensed core products will remain that way.

But, as I said before, I’m concerned about the consulting and services model being insufficient in the future, so we’re pivoting to a services + commercial add on product model.

Early April Releases for the Critter Stack

As active members of the Critter Stack community know, I’ve been increasingly self-conscious about our frequent release cadence, something we have been criticized for in the past. At least now, I feel I can justifiably claim this is mostly due to the high volume of community contributions we get plus JasperFx Software client requests, rather than a need to patch new bugs.

April was an exceptionally busy month for the Critter Stack. Across Wolverine, Marten, Weasel, and Polecat we shipped 4 Wolverine releases, 3 Marten releases, 6 Weasel releases, and 3 Polecat releases — driven by a healthy mix of new features, important bug fixes, and a growing stream of community contributions. Here’s a tour of what’s new.

Wolverine

Wolverine saw four releases this month: V5.28.0, V5.29.0, V5.30.0, and V5.31.0, totaling 59 merged pull requests — the most active month in Wolverine’s history.

Wire Tap for Message Auditing

Wolverine now supports the Wire Tap pattern from the Enterprise Integration Patterns book. A wire tap lets you record a copy of every message flowing through configured endpoints for auditing, compliance, analytics, or monitoring — without affecting the primary message processing pipeline.

Implement the IWireTap interface:

public interface IWireTap
{
    ValueTask RecordSuccessAsync(Envelope envelope);
    ValueTask RecordFailureAsync(Envelope envelope, Exception exception);
}

Then enable it on specific endpoints:

opts.Services.AddSingleton<IWireTap, AuditWireTap>();

opts.ListenToRabbitQueue("incoming").UseWireTap();
opts.PublishAllMessages().ToRabbitExchange("outgoing").UseWireTap();

// Or enable across all listeners
opts.Policies.AllListeners(x => x.UseWireTap());

You can even use keyed services to assign different wire tap implementations to different endpoints. See the full Wire Tap documentation for details.

Transport Health Checks

All Wolverine transports — RabbitMQ, Kafka, Azure Service Bus, Amazon SQS, Redis, NATS, and Pulsar — now expose ASP.NET Core IHealthCheck implementations. This means you can integrate transport connectivity checks directly into your existing health check infrastructure with minimal configuration.
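The Wolverine docs will have the authoritative registration story, but since these are standard IHealthCheck implementations, wiring them into an application should look like ordinary ASP.NET Core health check plumbing. A minimal sketch, assuming the transport packages contribute their checks to the standard health check pipeline:

```csharp
var builder = WebApplication.CreateBuilder(args);

builder.Host.UseWolverine(opts =>
{
    // Transport configuration, e.g. opts.UseRabbitMq(...), goes here
});

// Standard ASP.NET Core health check registration; the Wolverine
// transport health checks participate alongside your own checks
builder.Services.AddHealthChecks();

var app = builder.Build();

// Expose the aggregate health status, transport connectivity included
app.MapHealthChecks("/health");

app.Run();
```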

Retry Jitter

When many nodes retry at the same fixed delay after a shared failure, they produce a “thundering herd” that can overwhelm a recovering dependency. Wolverine now supports additive jitter on delay-based error policies with three strategies:

// Full jitter: effective delay ∈ [d, 2·d]
opts.OnException<DownstreamUnavailableException>()
    .RetryWithCooldown(50.Milliseconds(), 100.Milliseconds(), 250.Milliseconds())
    .WithFullJitter();

// Bounded jitter: effective delay ∈ [d, d × (1 + percent)]
opts.OnException<DownstreamUnavailableException>()
    .ScheduleRetry(1.Seconds(), 5.Seconds(), 30.Seconds())
    .WithBoundedJitter(0.25); // +0% to +25%

// Exponential jitter: spread widens with each attempt
opts.OnException<DownstreamUnavailableException>()
    .PauseThenRequeue(5.Seconds())
    .WithExponentialJitter();

Jitter only extends the configured delay, never shortens it — the configured values remain the lower bound. See the error handling docs for the full details. Thanks to @BlackChepo for the contribution in #2504.

IHandlerConfiguration Interface

Handler chain customization now has a compile-time safe alternative to the convention-based Configure(HandlerChain) method. Just implement IHandlerConfiguration:

public class InterfaceConfiguredHandler : IHandlerConfiguration
{
    public void Handle(InterfaceConfiguredMessage message)
    {
        // handle the message
    }

    public static void Configure(HandlerChain chain)
    {
        chain.Middleware.Add(new CustomFrame());
        chain.SuccessLogLevel = LogLevel.None;
    }
}

This makes it explicit in the type system that your handler participates in chain configuration, rather than relying on method name conventions alone.

Header Propagation

Wolverine can now automatically forward headers from an incoming message to all outgoing messages produced within the same handler context. This is useful for propagating correlation identifiers or tracing metadata across a chain of messages:

builder.Host.UseWolverine(opts =>
{
    // Forward a single header
    opts.Policies.PropagateIncomingHeaderToOutgoing("x-on-behalf-of");

    // Or forward multiple headers at once
    opts.Policies.PropagateIncomingHeadersToOutgoing(
        "x-correlation-id", "x-source-system");
});

This works across all transports. See the full header propagation docs. Thanks to @lyall-sc for the contribution in #2446.

Soft-Deleted Sagas with Marten

When a Marten-backed saga calls MarkCompleted(), Wolverine now respects Marten’s soft-delete configuration. If your saga type is configured for soft-deletes, the document will be soft-deleted rather than hard-deleted, allowing you to keep a history of completed sagas:

[SoftDeleted]
public class SubscriptionSaga : Saga, ISoftDeleted
{
    public Guid Id { get; set; }
    public string PlanName { get; set; } = string.Empty;
    public bool IsActive { get; set; }

    // ISoftDeleted members — Marten populates these automatically
    public bool Deleted { get; set; }
    public DateTimeOffset? DeletedAt { get; set; }

    public void Handle(CancelSubscription command)
    {
        IsActive = false;
        MarkCompleted(); // Marten will soft-delete instead of hard-delete
    }

    public void Handle(UpgradeSubscription command)
    {
        if (Deleted)
        {
            // Saga was already completed — do nothing
            return;
        }
        PlanName = command.NewPlanName;
    }
}

See the Marten saga documentation for more details.

TenantId Overloads for MartenOps

All MartenOps and IMartenOp return types now support explicit TenantId overloads, making it straightforward to perform cross-tenant operations in multi-tenanted Marten setups from within Wolverine handlers.
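I haven't reproduced the exact overload signatures here, but the idea is that a handler running in one tenant's context can return a side effect targeted at a different tenant. A hedged sketch, where the tenantId argument shape and all type names are my assumptions:

```csharp
// Hypothetical handler performing a cross-tenant write as a Wolverine side effect.
// PromoteCustomer and AuditEntry are illustrative types; the exact overload
// shape is an assumption, so check the Wolverine docs for specifics.
public static class PromoteCustomerHandler
{
    public static IMartenOp Handle(PromoteCustomer command)
    {
        var audit = new AuditEntry(command.CustomerId, "promoted");

        // Store the document against an explicit tenant rather than
        // the tenant of the current message context
        return MartenOps.Store(audit, tenantId: "tenant-b");
    }
}
```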

New Wolverine Diagnostics CLI Commands

Three new sub-commands were added to help diagnose Wolverine applications, designed specifically with AI-assisted development in mind:

  • codegen-preview — Preview generated handler code for specific message types
  • describe-routing — Display all configured message routing rules
  • describe-resiliency — Show error handling and circuit breaker configurations
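Like the other JasperFx command line extensions, these run through your application's own entry point. Assuming the usual command line hookup in Program.cs, invocation would look roughly like this (the argument format for codegen-preview is my guess):

```shell
# Preview the generated handler code for a specific message type
dotnet run -- codegen-preview MyApp.Messages.CreateOrder

# Display all configured message routing rules
dotnet run -- describe-routing

# Show error handling and circuit breaker configuration
dotnet run -- describe-resiliency
```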

OTEL Improvements

This is largely about some future CritterWatch work to be able to quickly tie Wolverine messages to the full Open Telemetry span history in Jaeger, DataDog, or AppInsights (for now).

  • Handler and HTTP endpoint spans now include a handler.type tag for more granular tracing
  • Saga spans are tagged with wolverine.saga.id and wolverine.saga.type
  • Aggregate handler workflow spans include wolverine.stream.id and wolverine.stream.type
  • Fixed a trace ID leak after circuit breaker trip/resume cycles (#2494)

Wolverine.HTTP Enhancements

I made a concerted effort a couple weeks ago to plug any remaining gaps between Wolverine.HTTP and ASP.Net Core Minimal API or MVC Core. That also included quite a bit of new documentation on the Wolverine website mapping MVC Core concepts to Wolverine.HTTP concepts — or just telling you which ASP.Net Core features work as-is with Wolverine.HTTP.

V5.28.0 included a substantial expansion of the HTTP capabilities:

  • Route prefix groups for organizing endpoints
  • Antiforgery/CSRF protection for form endpoints
  • SSE/streaming response support
  • Rate limiting integration
  • Output caching integration
  • Content negotiation with [Writes] and ConnegMode
  • API versioning documentation
  • OnException convention for exception handling

Additional Notable Fixes

  • Fixed Turkish culture (dotless-i) corruption of SQL identifiers
  • Fixed EF Core DomainEventScraper O(n) full ChangeTracker scan (#2476)
  • Fixed strong-typed saga ID causing CS0246 code generation errors
  • Parallelized tenant database initialization in EF Core multi-tenancy
  • Fixed Redis stream listener ignoring endpoint DatabaseId (thanks @BlackChepo)
  • Fixed CloudEvents fallback and message aliases (thanks @lahma)
  • Exposed FluentValidation configuration through UseFluentValidation overload (thanks @outofrange-consulting)

Marten

Marten shipped three releases this month: V8.29.0, V8.29.3, and V8.30.0.

EnableBigIntEvents for 64-bit Event Sequences

For high-volume event stores, Marten now supports 64-bit event sequences via a new EnableBigIntEvents flag. This addresses the int32 overflow reported in #4246 where mt_quick_append_events could fail with “integer out of range” when sequences exceed int32 limits. Thanks to @vicmaeg for both reporting and contributing the SQL fix in #4248.

ProjectLatest API

A new ProjectLatest API lets you project aggregates with pending (uncommitted) events — useful for validation scenarios where you need to see the would-be state of an aggregate before committing or return the new state of a projection from a Wolverine command handler.

EnrichEventsAsync Hook

EventProjection now supports an EnrichEventsAsync hook, allowing you to augment events with additional data during projection processing.

ConfigureNpgsqlDataSourceBuilder

Marten now exposes ConfigureNpgsqlDataSourceBuilder for Npgsql plugin registration, making it easier to register custom type mappings and other Npgsql extensions. This innocuous little item was added to enable our commercial add ons for Marten using PgVector and PostGIS.

OrderByNgramRank for Full-Text Search

A new OrderByNgramRank method enables relevance-sorted ngram search results, useful for “fuzzy” full-text search scenarios where you want to rank results by how closely they match.
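I haven't verified the final signature, but combined with Marten's existing NgramSearch() LINQ extension, I'd expect usage roughly along these lines (the OrderByNgramRank parameters and the User document type are my assumptions):

```csharp
// Hypothetical fuzzy search over a User document type, ranking results
// by how closely they match the search term. The OrderByNgramRank
// signature here is assumed, not copied from the Marten docs.
var matches = await session.Query<User>()
    .Where(x => x.UserName.NgramSearch(searchTerm))
    .OrderByNgramRank(x => x.UserName, searchTerm)
    .ToListAsync();
```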

Adaptive EventLoader for Sparse Projections

Long story short, this change made Marten’s Async Daemon more resilient for certain runtime circumstances.

An opt-in event type index and adaptive EventLoader now make sparse projections significantly more efficient — projections that only care about a small subset of event types can skip irrelevant events entirely at the database level.

Removed FSharp.Core Compile-Time Dependency

Marten no longer has a compile-time dependency on FSharp.Core, reducing dependency bloat for C#-only projects.

Notable Bug Fixes

  • Fixed EF Core 10 JSON column mapping compatibility (via Weasel 8.11.4)
  • Fixed NaturalKeySource discovery when methods are on the projection class
  • Fixed natural key and DCB tag operations with archived stream partitioning
  • Fixed mt_update and mt_upsert WHERE clauses for partitioned tables
  • Fixed long identifier names exceeding PostgreSQL’s NAMEDATALEN limit
  • Quoted column names in DuplicatedField update SQL fragments

Weasel

Weasel had six releases this month, from V8.11.2 through V8.13.0. The headline feature is the new EF Core testing infrastructure.

EF Core Batch Queries

Wolverine is also able to support this in code generation, similar to what it does today for Marten. Blog post coming soon on this.

The new BatchedQuery API combines multiple IQueryable<T> queries into a single database round trip using ADO.NET’s DbBatch. This works across PostgreSQL, SQL Server, and SQLite — a significant performance win for scenarios that need to load multiple independent datasets.
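As a hedged sketch of the idea only — the method names below are assumptions, not the published Weasel API — batching two independent queries into one round trip might look like:

```csharp
// Hypothetical usage: register multiple IQueryable<T> queries, execute one
// DbBatch round trip, then read each result set. The actual Weasel API
// surface may differ; this just illustrates the shape of the feature.
var batch = context.CreateBatchedQuery();

var customersTask = batch.Query(context.Customers.Where(c => c.IsActive));
var ordersTask = batch.Query(context.Orders.Where(o => o.PlacedOn >= cutoff));

// One database round trip instead of two
await batch.ExecuteAsync(cancellationToken);

var customers = await customersTask;
var orders = await ordersTask;
```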

IDatabaseCleaner for Integration Testing

Inspired by Respawn and Marten’s ResetAllData(), the new IDatabaseCleaner<TContext> provides FK-aware database cleanup for integration testing with EF Core. It supports multi-tenant scenarios with explicit DbConnection overloads and uses provider-specific SQL for PostgreSQL, SQL Server, and SQLite.
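In an integration test harness, I'd expect it to slot in roughly like this. The DeleteAllDataAsync method name is my assumption by analogy with Marten's ResetAllData(), and OrderingDbContext is an illustrative DbContext type:

```csharp
// Hypothetical per-test cleanup using IDatabaseCleaner<TContext> with xUnit.
// The cleaner deletes rows in FK-safe order rather than dropping schema,
// which keeps integration test resets fast.
public class IntegrationContext : IAsyncLifetime
{
    private readonly IDatabaseCleaner<OrderingDbContext> _cleaner;

    public IntegrationContext(IDatabaseCleaner<OrderingDbContext> cleaner)
        => _cleaner = cleaner;

    // Wipe data before each test so tests never depend on leftover state
    public Task InitializeAsync() => _cleaner.DeleteAllDataAsync();

    public Task DisposeAsync() => Task.CompletedTask;
}
```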

PostgreSQL Identifier Improvements

  • New PostgresqlIdentifier.Shorten() for deterministic identifier truncation when names exceed PostgreSQL’s NAMEDATALEN limit. This has been a continuously annoying problem since Marten 1.0!
  • Reserved keywords are now properly quoted in index column expressions and function update fragments (thanks @MarkVDD for the contributions)

Notable Bug Fixes

  • Fixed primary key migration when other tables have referencing foreign keys
  • Fixed deadlock in ManagedListPartitions.InitializeAsync
  • Fixed culture-invariant SQL identifier casing (Turkish locale issue)
  • Fixed Npgsql v10 compatibility for cidr/IPNetwork mapping
  • Fixed EF Core 10 JSON column mapping

VitePress Documentation Site

Weasel now has its own VitePress documentation site, making it easier to find and navigate Weasel-specific documentation. We’ll have that live in the next week or two.


Polecat

Polecat shipped three releases: V1.6.1, V2.0.0, and V2.0.1.

Polecat 2.0 — SQL Server 2025 Native JSON

Yeah, this one was an oopsie we fixed :(

The major V2.0.0 release defaults to SQL Server 2025’s native JSON column type, taking advantage of the database engine’s built-in JSON support for better performance and query capabilities.

DDL Migration to Weasel SchemaMigration

V2.0.1 migrated all DDL generation to Weasel’s SchemaMigration infrastructure, aligning Polecat with the rest of the Critter Stack’s database migration approach.

EnrichEventsAsync and Time-Based Projections

Polecat picked up EnrichEventsAsync tests for EventProjection and a new time-based multi-stream projection example, expanding the projection capabilities available on SQL Server.


Community Contributions

A special thank you to the community contributors who made April so productive:

  • @BlackChepo — Retry jitter support (#2504), Redis stream DatabaseId fix (#2452), and silent message loss fix for RabbitMQ/MQTT (#2511)
  • @lyall-sc — Header propagation (#2446) and publicly exposed MetadataRules (#2464)
  • @outofrange-consulting — FluentValidation configuration overload (#2497) and MassTransit envelope header fix (#2439)
  • @lahma — CloudEvents fixes (#2453) and NUKE build upgrade (#2454)
  • @vicmaeg — Marten int32 overflow fix (#4248)
  • @MarkVDD — Weasel PostgreSQL reserved keyword quoting (#238, #242)
  • @Sonic198 — Kafka PartitionId on Envelope (#2440)
  • @Ferchke7 — Fluent circuit breaker configuration for Kafka listeners (#2506)
  • @codeswithfists — OpenAPI OperationId and Summary/Description support (#2445)
  • @ali-pouneh — New MessagesImplementing overload (#2449)
  • @LodewijkSioen — ValidationResult as validation return type (#2332)
  • @dmytro-pryvedeniuk — Trigger restriction fix (#2398) and Alba auto-start fix (#2411)
  • @Shield1739, @benv-nti, @ericwscott, @Blackclaws — Documentation fixes across the stack

April’s new contributors: @Sonic198, @ali-pouneh, @codeswithfists, @Ferchke7, and @ericwscott — welcome to the Critter Stack!


What’s Next

We’ll see! JasperFx & I are admittedly moving more to our commercial add on tools for a little bit.

As always, find us on the JasperFx Discord or file issues on GitHub!

The Fastest Possible HTTP Queries with Marten

I’ve been piddling around this weekend, testing out JasperFx Software’s soon-to-be officially curated AI Skills. To test and refine those new skills, I’ve been using my buddies Chris Woodruff and Joseph Guadagno’s MoreSpeakers application as a sample application to port to Wolverine and Marten (along with a half dozen others so far).

I’m sure you’ll be positively shocked to learn that it’s taken quite a few minor corrections and “oh, yeah” enhancements to the guidance in the skills to get the translated code exactly where I’d want it. It’s not quite this bad, but the experience most reminds me of coaching youth basketball teams of very young kids, where I’d constantly kick myself after the first game for all the very basic basketball rules and strategies I’d forgotten to tell them about.

Anyway, on to the Marten and Wolverine part of this. Consider this HTTP endpoint in the translated system:

public static class GetExpertiseCategoriesEndpoint
{
    [WolverineGet("/api/expertise")]
    public static Task<IReadOnlyList<ExpertiseCategory>> Get(IQuerySession session, CancellationToken ct)
        => session.Query<ExpertiseCategory>()
            .Where(c => c.IsActive)
            .OrderBy(c => c.Sector)
            .ThenBy(c => c.Name)
            .ToListAsync(ct);
}

Pretty common request: run a query against the database, then stream the results down to the HTTP response. I’ll write a follow up post later to discuss the greater set of changes, but let’s take that endpoint code above and make it a whole lot more efficient by utilizing Marten.AspNetCore’s ability to just stream JSON right out of the database like this:

public static class GetExpertiseCategoriesEndpoint
{
    [WolverineGet("/api/expertise")]
    // It's an imperfect world. I've never been able to come up with a syntax
    // option that would eliminate the need for this attribute that isn't as ugly
    // as using the attribute, so ¯\_(ツ)_/¯
    [ProducesResponseType<ExpertiseCategory[]>(200, "application/json")]
    public static Task Get(IQuerySession session, HttpContext context)
        => session.Query<ExpertiseCategory>()
            .Where(c => c.IsActive)
            .OrderBy(c => c.Sector)
            .ThenBy(c => c.Name)
            .WriteArray(context);
}

The version above is 100% functionally equivalent to the first version, but it’s a lot more efficient at runtime because what it’s doing is writing the JSON directly from the database (Marten is already storing state using PostgreSQL’s JSONB type) right to the HTTP response byte by byte.

And just to be silly and get even more serious about the optimization, let’s introduce Marten’s compiled query feature, which completely eliminates the runtime work of interpreting the LINQ expression into an executable query plan:

// Compiled query — Marten pre-compiles the SQL and query plan once,
// then reuses it for every execution. Combined with WriteArray(),
// the result streams raw JSON from PostgreSQL with zero C# allocation.
public class ActiveExpertiseCategoriesQuery : ICompiledListQuery<ExpertiseCategory>
{
    public Expression<Func<IMartenQueryable<ExpertiseCategory>, IEnumerable<ExpertiseCategory>>> QueryIs()
        => q => q.Where(c => c.IsActive)
            .OrderBy(c => c.Sector)
            .ThenBy(c => c.Name);
}

public static class GetExpertiseCategoriesEndpoint
{
    [WolverineGet("/api/expertise")]
    [ProducesResponseType<ExpertiseCategory[]>(200, "application/json")]
    public static Task Get(IQuerySession session, HttpContext context)
        => session.WriteArray(new ActiveExpertiseCategoriesQuery(), context);
}

That’s a little bit uglier code that we had to go out of our way to write compared to the simpler, original mechanism, but that’s basically how performance optimization generally goes!

At no point is it ever trying to deserialize the actual ExpertiseCategory objects in memory. There are of course some limitations or gotchas:

  • There’s no anti-corruption layer of any kind, and this can only send down exactly what is persisted in the Marten database. I’ll tackle this in more detail in a follow up post about the conversion, but I’m going to say I don’t really think this is a big deal at all, and we can introduce some kind of mapping later if we want to change what’s actually stored or how the JSON is served up to the client.
  • You may have to be careful to make Marten’s JSON storage configuration match what HTTP clients want — which is probably just using camel casing and maybe opting into Enum values being serialized as strings.
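For that second bullet, Marten's serialization settings can be aligned with typical HTTP clients at configuration time using Marten's System.Text.Json support. A minimal sketch, assuming the connection string name used here:

```csharp
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMarten(opts =>
{
    // "marten" is an assumed connection string name for this sketch
    opts.Connection(builder.Configuration.GetConnectionString("marten"));

    // Store (and therefore stream) JSON with camel-cased property names
    // and enums as strings, matching what most JavaScript clients expect
    opts.UseSystemTextJsonForSerialization(
        EnumStorage.AsString,
        Casing.CamelCase);
});
```

Remember that since WriteArray() streams the stored JSON verbatim, the stored shape is the wire shape, so this configuration matters more than it would with in-memory serialization.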

But now, let’s compare the code above to what the original version using EF Core had to do. Let’s say it’s about a wash in how long it takes Marten and EF Core to translate the LINQ expression. From there:

  1. EF Core has to parse the LINQ expression and turn that into both SQL and some internal execution plan about how to turn the raw results into C# objects
  2. EF Core executes the SQL statement, and if this happens to be a .NET type that has nested objects or collections, this could easily be an ugly SQL statement with multiple JOINs — which Marten doesn’t have to do at all
  3. EF Core has to loop around the database results and create .NET objects that map to the raw database results
  4. The original version used AutoMapper in some places to map the internal entities to the DTO types that were going to be delivered over HTTP. That’s a very common .NET architecture, but that’s more runtime overhead and Garbage Collection thrashing than the Marten version
  5. My buddies used an idiomatic Clean/Onion Architecture approach, so there are a couple extra layers of indirection in their endpoints that require a DI container to build more objects on each request, meaning even more GC thrashing. It’s not obvious at all, but in the Wolverine versions of the endpoint, there’s absolutely zero usage of the DI container at runtime (that’s not true for every possible endpoint, of course).
  6. ASP.Net Core feeds those newly created objects into a JSON serializer and writes the results down to the HTTP response. The AspNetCore team has optimized the heck out of that process, but it’s still overhead.

The whole point of that exhaustive list is just to illustrate how much more efficient the Marten version potentially is compared to the typical .NET approach with EF Core and Clean Architecture.

I’ll come back later this week with a bigger post on the differences in structure between the original version and the Critter Stack result. It’s actually turning out to be a great exercise for me because the problem domain and domain model mapping of MoreSpeakers actually lends itself to a good example of using DCB to model Event Sourcing. Stay tuned later for that one!

Marten, Polecat, and Wolverine Releases — One Shining Moment Edition

For non basketball fans, the NCAA Tournament championship game broadcasts end each year with a highlight montage to a cheesy song called “One Shining Moment” that’s one of my favorite things to watch each year.

The Critter Stack community is pretty much always busy, but we were able to make some releases to Marten, Polecat, and Wolverine yesterday and today that dropped our open issue counts on GitHub to the lowest number in a decade. That’s bug fixes, some long overdue structural improvements, quite a few additions to the documentation, new features, and some quiet enablement of near term improvements in CritterWatch and our AI development strategy.

Wolverine 5.28.0 Released

We’re happy to announce Wolverine 5.28.0, a feature-packed release that significantly strengthens both the messaging and HTTP sides of the framework. This release includes major new infrastructure for transport observability, powerful new Wolverine.HTTP capabilities bringing closer parity with ASP.NET Core’s feature set, and several excellent community contributions.

Last week I took some time to do a “gap analysis” of Wolverine.HTTP against Minimal API and MVC Core for missing features, and did a similar exercise comparing Wolverine’s asynchronous messaging support against other offerings in the .NET and Java worlds. This release actually plugs most of those gaps — albeit with just documentation in many cases.

Highlights

🔍 Transport Health Checks

This has been one of our most requested features. Wolverine now provides built-in health check infrastructure for all message transports — RabbitMQ, Kafka, Azure Service Bus, Amazon SQS, NATS, Redis, and MQTT. The new WolverineTransportHealthCheck base class reports point-in-time health status including connection state and, where supported, broker queue depth — critical for detecting the “silent failure” scenario where messages are piling up on the broker but aren’t being consumed (a situation we’ve seen in production with RabbitMQ).

Health checks integrate with ASP.NET Core’s standard IHealthCheck interface, so they plug directly into your existing health monitoring infrastructure.
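On the application side, the wiring is the standard ASP.NET Core health check surface. A minimal sketch follows; how Wolverine's own checks get registered is covered in the linked documentation, and the endpoint path here is an arbitrary choice:

```csharp
// Program.cs: standard ASP.NET Core health check wiring.
// Wolverine's transport checks implement IHealthCheck, so they
// surface through the same pipeline as any other check.
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddHealthChecks();

var app = builder.Build();

// Expose an endpoint for monitoring infrastructure to poll
app.MapHealthChecks("/health");

app.Run();
```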

Transport health check documentation →

This was built specifically for CritterWatch integration. I should also point out that CritterWatch can now detect the “silent failure” issues where Marten/Polecat projections claim to be running but aren’t advancing, and messaging listeners appear to be active but aren’t actually receiving messages.

🔌 Wire Tap (Message Auditing)

Implementing the classic Enterprise Integration Patterns Wire Tap, this feature lets you record a copy of every message flowing through configured endpoints — without affecting the primary processing pipeline. It’s ideal for compliance logging, analytics, or debugging.

opts.ListenToRabbitQueue("orders")
    .UseWireTap();

Implement the IWireTap interface with RecordSuccessAsync() and RecordFailureAsync() methods, and Wolverine handles the rest. Supports keyed services for different implementations per endpoint.
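The post doesn't show the method signatures, so the `Envelope` and `Exception` parameters below are assumptions; a sketch of what an audit-logging implementation might look like:

```csharp
// Hypothetical sketch: the exact IWireTap signatures aren't shown
// in this post, so the parameters here are assumptions. See the
// Wire Tap documentation for the real contract.
public class AuditWireTap : IWireTap
{
    public Task RecordSuccessAsync(Envelope envelope)
    {
        // Persist a copy of the message for compliance or analytics
        Console.WriteLine($"OK: {envelope.MessageType}");
        return Task.CompletedTask;
    }

    public Task RecordFailureAsync(Envelope envelope, Exception ex)
    {
        Console.WriteLine($"FAIL: {envelope.MessageType}: {ex.Message}");
        return Task.CompletedTask;
    }
}
```

Because keyed services are supported, different endpoints could record to entirely different sinks.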

Wire Tap documentation →

📋 Declarative Marten Data Requirements

This feature is meant to be a new type of “declarative invariant” that will enable Critter Stack systems to be more efficient. If this is used with other declarative persistence helpers in the same HTTP endpoint or message handler, Wolverine is able to opt into Marten’s batch querying for more efficient code.

New [DocumentExists<T>] and [DocumentDoesNotExist<T>] attributes let you declaratively guard handlers with Marten document existence checks. Wolverine generates optimized middleware at compile time — no manual boilerplate needed:

[DocumentExists<Customer>]
public static OrderConfirmation Handle(PlaceOrder command)
{
    // Customer is guaranteed to exist here
}

Throws RequiredDataMissingException if the precondition fails.
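The post doesn't show how teams might surface that exception to HTTP callers, but one hypothetical option is to pair it with the OnException convention that's also part of this release:

```csharp
// Hypothetical pairing: translate the failed precondition into a
// ProblemDetails response via the OnException naming convention
public static ProblemDetails OnException(RequiredDataMissingException ex)
{
    return new ProblemDetails { Status = 404, Detail = ex.Message };
}
```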

Marten integration documentation →

🎯 Confluent Schema Registry Serializers for Kafka

A community contribution that adds first-class support for Confluent Schema Registry serialization with Kafka topics. Both JSON Schema and Avro (for ISpecificRecord types) serializers are included, with automatic schema ID caching and the standard wire format (magic byte + 4-byte schema ID + payload).

opts.UseKafka("localhost:9092")
    .ConfigureSchemaRegistry(config =>
    {
        config.Url = "http://localhost:8081";
    })
    .UseSchemaRegistryJsonSerializer();

Kafka Schema Registry documentation →

Wolverine.HTTP Improvements

This release brings a wave of HTTP features that close the gap with vanilla ASP.NET Core while maintaining Wolverine’s simpler programming model:

Response Content Negotiation

New ConnegMode configuration with Loose (default, falls back to JSON) and Strict (returns 406 Not Acceptable) modes. Use the [Writes] attribute to declare supported content types and [StrictConneg] to enforce strict matching per endpoint.

Content negotiation documentation →

OnException Convention

This is orthogonal to Wolverine’s error handling policies.

Handler and middleware methods named OnException or OnExceptionAsync are now automatically wired as exception handlers, ordered by specificity. Return ProblemDetails, IResult, or HandlerContinuation to control the response:

public static ProblemDetails OnException(OrderNotFoundException ex)
{
    return new ProblemDetails { Status = 404, Detail = ex.Message };
}

Exception handling documentation →

Output Caching

Direct integration with ASP.NET Core’s output caching middleware via the [OutputCache] attribute on endpoints, supporting policy names, VaryByQuery, VaryByHeader, and tag-based invalidation.

Output caching documentation →

Rate Limiting

Apply ASP.NET Core’s rate limiting policies to Wolverine endpoints with [EnableRateLimiting("policyName")] — supporting fixed window, sliding window, token bucket, and concurrency algorithms.

Rate limiting documentation →

Antiforgery / CSRF Protection

Form endpoints automatically require antiforgery validation. Use [ValidateAntiforgery] to opt non-form endpoints in, or [DisableAntiforgery] to opt out. Global configuration is available via opts.RequireAntiforgeryOnAll().

Antiforgery documentation →

Route Prefix Groups

Organize endpoints with class-level [RoutePrefix("api/v1")] or namespace-based prefixes for cleaner API versioning:

opts.RoutePrefix("api/orders", forEndpointsInNamespace: "MyApp.Features.Orders");

Routing documentation →

SSE / Streaming Responses

Documentation and examples for Server-Sent Events and streaming responses using ASP.NET Core’s Results.Stream(), fully integrated with Wolverine’s service injection.

Streaming documentation →

Community Contributions

Thank you to our community contributors for this release:

  • @LodewijkSioen — Structured ValidationResult support for FluentValidation (#2332)
  • @dmytro-pryvedeniuk — AutoStartHost enabled by default (#2411)
  • @outofrange-consulting — Bidirectional MassTransit header mapping (#2439)
  • @Sonic198 — PartitionId on Envelope for Kafka partition tracking (#2440)
  • Confluent Schema Registry serializers for Kafka (#2443)

Bug Fixes

  • Fixed exchange naming when using FromHandlerType conventional routing (#2397)
  • Fixed flaky GloballyLatchedListenerTests caused by async disposal race condition in TCP SocketListener
  • Added handler.type OpenTelemetry tag for better tracing of message handlers and HTTP endpoints

New Documentation

We’ve also added several new tutorials and guides:

Marten 8.29.0 Release — Performance, Extensibility, and Bug Fixes

Marten 8.29.0 shipped yesterday with a packed release: a new LINQ operator, event enrichment for EventProjection, major async daemon performance improvements, the removal of the FSharp.Core dependency, and several important bug fixes for partitioned tables.

New Features

OrderByNgramRank — Sort Search Results by Relevance

You can now sort NGram search results by relevance using the new OrderByNgramRank() LINQ operator:

var results = await session
    .Query<Product>()
    .Where(x => x.Name.NgramSearch("blue shoes"))
    .OrderByNgramRank(x => x.Name, "blue shoes")
    .ToListAsync();

This generates ORDER BY ts_rank(mt_grams_vector(...), mt_grams_query(...)) DESC under the hood — no raw SQL needed.

EnrichEventsAsync for EventProjection

The EnrichEventsAsync hook that was previously only available on aggregation projections (SingleStreamProjection, MultiStreamProjection) is now available on EventProjection too. This lets you batch-load reference data before individual events are processed, avoiding N+1 query problems:

public class TaskProjection : EventProjection
{
    public override async Task EnrichEventsAsync(
        IQuerySession querySession, IReadOnlyList<IEvent> events,
        CancellationToken cancellation)
    {
        // Batch-load users for all TaskAssigned events in one query
        var userIds = events.OfType<IEvent<TaskAssigned>>()
            .Select(e => e.Data.UserId).Distinct().ToArray();
        var users = await querySession.LoadManyAsync<User>(cancellation, userIds);
        // ... set enriched data on events
    }
}

ConfigureNpgsqlDataSourceBuilder — Plugin Registration for All Data Sources

A new ConfigureNpgsqlDataSourceBuilder API on StoreOptions ensures Npgsql plugins like UseVector(), UseNetTopologySuite(), and UseNodaTime() are applied to every NpgsqlDataSource Marten creates — including tenant databases in multi-tenancy scenarios:

opts.ConfigureNpgsqlDataSourceBuilder(b => b.UseVector());

This is the foundation for external PostgreSQL extension packages (PgVector, PostGIS, etc.) to work correctly across all tenancy modes.

And by the way, JasperFx will be releasing formal Marten support for pgvector and PostGIS in commercial add-ons very soon.

Performance Improvements

Opt-in Event Type Index for Faster Projection Rebuilds

If your projections filter on a small subset of event types and your event store has millions of events, rebuilds can time out scanning through non-matching events. A new opt-in composite index solves this:

opts.Events.EnableEventTypeIndex = true;

This creates a (type, seq_id) B-tree index on mt_events, letting PostgreSQL jump directly to matching event types instead of sequential scanning.

And as always, remember that adding more indexes can slow down inserts, so use this judiciously.

Adaptive EventLoader

TL;DR: this makes the Async Daemon more reliable in the face of unexpected usage and more adaptive in getting past unusual errors in production.

Even without the index, the async daemon now automatically adapts when event loading times out. It falls back through progressively simpler strategies — skip-ahead (find the next matching event via MIN(seq_id)), then window-step (advance in 10K fixed windows) — and resets when events flow normally. No configuration needed.

See the expanded tuning documentation for guidance on when to enable the index and how to diagnose slow rebuilds.

FSharp.Core Dependency Removed

Marten no longer has a compile-time dependency on FSharp.Core. F# support still works — if your project references FSharp.Core (as any F# project does), Marten detects it at runtime via reflection. This unblocks .NET 8 users who were stuck on older Marten versions due to the FSharp.Core 9.0.100 requirement.

If you use F# types with Marten (FSharpOption, discriminated union IDs, F# records), everything continues to work unchanged. The dependency just moved from Marten’s package to your project.

Bug Fixes

Partitioned Table Composite PK in Update Functions (#4223)

The generated mt_update_* PostgreSQL function now correctly uses all composite primary key columns in its WHERE clause. Previously, for partitioned tables with a PK like (id, date), the update only matched on id, causing duplicate key violations when multiple rows shared the same ID with different partition keys.

Long Identifier Names (#4224)

Auto-discovered tag types with long names (e.g., BootstrapTokenResourceName) no longer cause PostgresqlIdentifierTooLongException at startup. Generated FK, PK, and index names that exceed PostgreSQL’s 63-character limit are now deterministically shortened with a hash suffix.

This has been a longstanding problem in Marten, and we probably should have dealt with it years ago :-(

EF Core 10 Compatibility (#4225)

Updated Weasel to 8.12.0 which fixes MissingMethodException when using Weasel.EntityFrameworkCore with EF Core 10 on .NET 10.

Upgrading

dotnet add package Marten --version 8.29.0

The full changelog is on GitHub.

Polecat 2.0.1

Sometime in the last couple of weeks I wrote a blog post about my experiences so far with Claude-assisted development, where I tried to say that you absolutely have to review carefully what your AI tools are doing because they can take shortcuts. So, yeah, I should do that even more closely.

Polecat 2.0.1 is using the SQL Server 2025 native JSON type correctly now, and the database migrations are now all done with the underlying Weasel library that enables Polecat to play nicely with all of the Critter Stack command line support for migrations.

Wolverine “Gap” Analysis

This is the kind of post I write for myself and just share on a Friday or weekend when not many folks are paying any attention.

I’ve taken a couple of days at the end of this week after a month-long crush to just think about the strategic technical vision for the Critter Stack and the commercial add-on products that we’re building under the JasperFx Software rubric. As part of my “deep think, but don’t work too hard” day, I had Claude help me do a gap analysis between Wolverine.HTTP and ASP.Net Core Minimal API & MVC Core and even FastEndpoints. I also did the same for Wolverine’s messaging feature set against the widely used .NET messaging frameworks (I think .NET has more strong options here than any other platform, and it still irritates me that Microsoft seriously tried to butt into that space) and several options in the Java ecosystem.

Before I share the results and what I thought was and wasn’t important, let me share one big insight. Different tools in the same problem space frequently solve the same problems, but with very different technical solutions, concepts, and abstractions. Sometimes different tools even have very similar solutions to common problems, but use very different nomenclature. All this is to say that this effort helped me identify several places where we can improve the documentation to map features from other tools to their equivalents in Wolverine, as Claude “identified” almost two dozen functional “gaps” where I felt Wolverine already happily solved the same problems that features in MassTransit, NServiceBus, Mulesoft, or other tools did.

There’s also a lesson here for folks who switch tools: understand the different concepts in the new tool instead of automatically mapping your mental model from tool A onto tool B without first learning what’s really different.

And lastly, a lesson for anybody who ever does any kind of support of development tools: remember to ask a user who is struggling what their end goals are or their real use case is instead of just focusing on the sometimes oddball implementation or API questions they’re asking you. And that goes double when a user is quite possibly trying to force fit their mental model of a completely different tool into your tool.

Anyway, here’s what I ended up adding to our backlog as well as things that I didn’t think were valuable at this time.

On the HTTP front, I came up with several things, with the big items being:

  1. I originally thought about an equivalent to MVC’s IExceptionFilter, but we might just use that as is. That’s come up plenty of times before
  2. Anti-forgery support. I originally thought that Wolverine.HTTP would mostly be used for API development, so didn’t really bother much upfront with too much for supporting HTTP forms, but I think there’s a significant overlap between Wolverine.HTTP usage and htmx where forms are used more heavily, so here we go.
  3. Routing prefixes. It’s come up occasionally, and been just barely on my radar
  4. Endpoint rate limiting middleware for HTTP. This will build on our new rate limiting middleware for message handlers
  5. Server Sent Events support. Why not? For whatever reason, SSE seems to be getting rediscovered by folks. FubuMVC (Wolverine’s predecessor in the early 2010’s) actually had a first class SSE support all those years ago
  6. Output Caching. This has been in my thinking for quite a while. I think this is going to be two pronged, with direct support for ASP.Net Core caching middleware and maybe some more directed “per entity” caching around our existing “declarative persistence” helpers. I think the second actually lives inside of message handlers as well
  7. API versioning of some sort. It’s easy enough to just add “1.0” into your routes, but we’ll look at more alternatives as well
  8. A little bit of content negotiation support, but that’s been on the periphery of my attention from the beginning. My thought all along was to not bother with that until people explicitly asked for it, but now I just want to close the gaps. FubuMVC had that 15 years ago, so I’ve already dealt with that successfully before — but that was in the ReST craze and “conneg” just isn’t nearly as common in usage as far as I can tell.

And the gap analysis helped point out several areas where we had opportunities to improve the documentation (and future AI skills) to help map Minimal API or MVC Core concepts to existing features in Wolverine.HTTP.

Now, on to the messaging support which turned up almost nothing that I was actually interested in adding to Wolverine except for these:

  1. Formal support for the EIP “Claim Check” pattern. I’ve never pursued that before because I’ve felt like it’s just not that much explicit code, but I still added that to the backlog for “completeness”
  2. Built-in EIP “Wire Tap” support to persist messages, but that was already in our backlog as it comes up from users and because we have plans to expose it through MCP and command line AI support tools. I’m not enthusiastic, though, about bothering with the “command sourcing” concept from Greg Young, but we’ll see if anybody ever wants it.

Claude came up with about 35 different things to consider, but other than those two things above, those items fell into either functionality we already had with different names or different conceptual solutions, features I just have no interest in supporting or I don’t see being used or requested by our users, or a third group of features that are happily planned and already underway with our forthcoming CritterWatch commercial add on.

Just for completeness, the features I’m saying we won’t even plan to support right now were:

  • The EIP “Routing Slip” concept. I know that MassTransit supports it, but I’m deeply unenthusiastic about both the concept and any attempt to support that in Wolverine. They can have that one.
  • Distributed transaction support. I don’t even know why I would need to explain why not!
  • “Change Data Capture” integration with something like Debezium. I just don’t see a demand for that with Wolverine
  • Any kind of visual process designer. Even on the Marten/Polecat side, I’m wanting us to focus on Markdown or Gherkin specifications or just flat out making our code as simple as possible to write instead of blowing energy on visual tools that generate XML that in turn get generated into Java code. Not that I’m necessarily giving some side eye to any other tool out there *cough* liar! *cough*
  • Batch processing support that really touched on ETL concerns
  • A long lived job model. Maybe down the road, but I’d push folks to just break that up into smaller actions whenever possible anyway. It’s trivial in Wolverine to have message handlers cascade out a request for the next step. Actually, this one is probably the one I’m most likely to have to change my mind about, but we’ll see
  • NServiceBus has their “messaging bridge” that I think would be trivial to build out later if that’s ever valuable for someone, but nobody is asking for that today and Wolverine happily lets you mix and match all the transports and even multiple brokers in one application

And of course, there were some random quirky features of some of the other tools that I just didn’t think were worth any consideration outside of client requests or common user community requests.

Multi-Tenancy in the Critter Stack

We put on another Critter Stack live stream today to give a highlight tour of the multi-tenancy features and support across the entire stack. Long story short, I think we have by far and away the most comprehensive feature set for multi-tenancy in the .NET ecosystem, but I’ll let you judge that for yourself:

The Critter Stack provides comprehensive multi-tenancy support across all three tools — Marten, Wolverine, and Polecat — with tenant context flowing seamlessly from HTTP requests through message handling to data persistence. Here are some links to various bits of documentation, with some older blog posts at the bottom as well.

Marten (PostgreSQL)

Marten offers three tenancy strategies for both the document database and event store:

  • Conjoined Tenancy — All tenants share tables with automatic tenant_id discrimination, cross-tenant querying via TenantIsOneOf() and AnyTenant(), and PostgreSQL LIST/HASH partitioning on tenant_id (Document Multi-Tenancy, Event Store Multi-Tenancy)
  • Database per Tenant — Four strategies ranging from static mapping to single-server auto-provisioning, master table lookup, and runtime tenant registration (Database-per-Tenant Configuration)
  • Sharded Multi-Tenancy with Database Pooling — Distributes tenants across a pool of databases using hash, smallest-database, or explicit assignment strategies, combining conjoined tenancy with database sharding for extreme scale (Database-per-Tenant Configuration)
  • Global Streams & Projections — Mix globally-scoped and tenant-specific event streams within a conjoined tenancy model (Event Store Multi-Tenancy)

Wolverine (Messaging, Mediator, and HTTP)

Wolverine propagates tenant context automatically through the entire message processing pipeline:

  • Handler Multi-Tenancy — Tenant IDs tracked as message metadata, automatically propagated to cascaded messages, with InvokeForTenantAsync() for explicit tenant targeting (Handler Multi-Tenancy)
  • HTTP Tenant Detection — Built-in strategies for detecting tenant from request headers, claims, query strings, route arguments, or subdomains (HTTP Multi-Tenancy)
  • Marten Integration — Database-per-tenant or conjoined tenancy with automatic IDocumentSession scoping and transactional inbox/outbox per tenant database (Marten Multi-Tenancy)
  • Polecat Integration — Same database-per-tenant and conjoined patterns for SQL Server (Polecat Multi-Tenancy)
  • EF Core Integration — Multi-tenant transactional inbox/outbox with separate databases and automatic migrations (EF Core Multi-Tenancy)
  • RabbitMQ per Tenant — Map tenants to separate virtual hosts or entirely different brokers (RabbitMQ Multi-Tenancy)
  • Azure Service Bus per Tenant — Map tenants to separate namespaces or connection strings (Azure Service Bus Multi-Tenancy)

Polecat (SQL Server)

Polecat mirrors Marten’s tenancy model for SQL Server.

Related Blog Posts

  • Feb 2024: Dynamic Tenant Databases in Marten
  • Mar 2024: Recent Critter Stack Multi-Tenancy Improvements
  • May 2024: Multi-Tenancy: What is it and why do you care?
  • May 2024: Multi-Tenancy: Marten’s “Conjoined” Model
  • Jun 2024: Multi-Tenancy: Database per Tenant with Marten
  • Sep 2024: Multi-Tenancy in Wolverine Messaging
  • Dec 2024: Message Broker per Tenant with Wolverine
  • Feb 2025: Critter Stack Roadmap Update for February
  • May 2025: Wolverine 4 is Bringing Multi-Tenancy to EF Core
  • Oct 2025: Wolverine 5 and Modular Monoliths
  • Mar 2026: Announcing Polecat: Event Sourcing with SQL Server
  • Mar 2026: Critter Stack Wide Releases — March Madness Edition

Critter Stack Wide Releases — March Madness Edition

As anybody who follows the Critter Stack on our Discord server knows, I’m uncomfortable with the rapid pace of releases we’ve sustained over the past couple of quarters and would like the cadence to slow down. However, open issues and pull requests feel like money burning a hole in my pocket, and I don’t let things linger very long. Our rapid cadence is driven partly by JasperFx Software client requests, partly by our community being quite aggressive in contributing changes, and partly by our users finding new issues that need to be addressed. While I’ve been known to be very unhappy with feedback claiming that our frequent release cadence must be a sign of poor quality, our community seems to mostly appreciate that we move relatively fast. I believe we are innovating much faster and more aggressively than any of the other asynchronous messaging tools in the .NET space, so there’s that. Anyway, enough of that; here’s a rundown of the new releases today.

It’s been a busy week across the Critter Stack! We shipped coordinated releases today across all five projects: Marten 8.27, Wolverine 5.25, Polecat 1.5, Weasel 8.11.1, and JasperFx 1.21.1. Here’s a rundown of what’s new.


Marten 8.27.0

Sharded Multi-Tenancy with Database Pooling

For teams operating at extreme scale — we’re talking hundreds of billions of events — Marten now supports a sharded multi-tenancy model that distributes tenants across a pool of databases. Each tenant gets its own native PostgreSQL LIST partition within a shard database, giving you the isolation benefits of per-tenant databases with the operational simplicity of a managed pool.

Configuration is straightforward:

opts.MultiTenantedWithShardedDatabases(x =>
{
    // Connection to the master database that holds the pool registry
    x.ConnectionString = masterConnectionString;

    // Schema for the registry tables in the master database
    x.SchemaName = "tenants";

    // Seed the database pool on startup
    x.AddDatabase("shard_01", shard1ConnectionString);
    x.AddDatabase("shard_02", shard2ConnectionString);
    x.AddDatabase("shard_03", shard3ConnectionString);
    x.AddDatabase("shard_04", shard4ConnectionString);

    // Choose a tenant assignment strategy (see below)
    x.UseHashAssignment(); // this is the default
});

Calling MultiTenantedWithShardedDatabases() automatically enables conjoined tenancy for both documents and events, with native PG list partitions created per tenant.

Three tenant assignment strategies are built-in:

  • Hash Assignment (default) — deterministic FNV-1a hash of the tenant ID. Fast, predictable, no database queries needed. Best when tenants are roughly equal in size.
  • Smallest Database — assigns new tenants to the database with the fewest existing tenants. Accepts a custom IDatabaseSizingStrategy for balancing by row count, disk usage, or any other metric.
  • Explicit Assignment — you control exactly which database hosts each tenant via the admin API.
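To illustrate why the hash strategy needs no database queries, here is a minimal FNV-1a sketch; this is illustrative only, not Marten's actual implementation:

```csharp
using System.Text;

static class TenantSharding
{
    // 32-bit FNV-1a: the same tenant ID always hashes identically,
    // so the shard assignment is deterministic across nodes
    static uint Fnv1a(string tenantId)
    {
        uint hash = 2166136261;
        foreach (var b in Encoding.UTF8.GetBytes(tenantId))
        {
            hash ^= b;
            hash *= 16777619;
        }
        return hash;
    }

    public static int ShardFor(string tenantId, int poolSize)
        => (int)(Fnv1a(tenantId) % (uint)poolSize);
}

// ShardFor("tenant-abc", 4) returns the same index on every call,
// on every node, with no lookup query
```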

The admin API lets you manage the pool at runtime: AddTenantToShardAsync, AddDatabaseToPoolAsync, and MarkDatabaseFullAsync — all with advisory-locked concurrent safety.

See the multi-tenancy documentation for the full details.

Bulk COPY Event Append for High-Throughput Seeding

For data migrations, test fixture setup, load testing, or importing events from external systems, Marten now supports a bulk COPY-based event append that uses PostgreSQL’s COPY ... FROM STDIN BINARY for maximum throughput:

// Build up a list of stream actions with events
var streams = new List<StreamAction>();

for (int i = 0; i < 1000; i++)
{
    var streamId = Guid.NewGuid();
    var events = new object[]
    {
        new OrderPlaced(streamId, "Widget", 5),
        new OrderShipped(streamId, $"TRACK-{i}"),
        new OrderDelivered(streamId, DateTimeOffset.UtcNow)
    };

    streams.Add(StreamAction.Start(store.Events, streamId, events));
}

// Bulk insert all events using PostgreSQL COPY for maximum throughput
await store.BulkInsertEventsAsync(streams);

This supports all combinations of Guid/string identity, single/conjoined tenancy, archived stream partitioning, and metadata columns. When using conjoined tenancy, a tenant-specific overload is available:

await store.BulkInsertEventsAsync("tenant-abc", streams);

See the event appending documentation for more.

Other Fixes

  • FetchForWriting now auto-discovers natural keys without requiring an explicit projection registration, and works correctly with strongly typed IDs combined with UseIdentityMapForAggregates
  • Compiled queries using IsOneOf with array parameters now generate correct SQL
  • EF Core OwnsOne().ToJson() support (via Weasel 8.11.1) — schema diffing now correctly handles JSON column mapping when Marten and EF Core share a database
  • Thanks to @erdtsieck for fixing duplicate codegen when using secondary document stores!

Wolverine 5.25.0

This is a big release with 12 PRs merged — a mix of bug fixes, new features, and community contributions.

MassTransit and NServiceBus Interop for Azure Service Bus Topics

Previously, MassTransit and NServiceBus interoperability was only available on Azure Service Bus queues. With 5.25, you can now interoperate on ASB topics and subscriptions too — making it much easier to migrate incrementally or coexist with other .NET messaging frameworks:

// Publish to a topic with NServiceBus interop
opts.PublishAllMessages().ToAzureServiceBusTopic("nsb-topic")
    .UseNServiceBusInterop();

// Listen on a subscription with MassTransit interop
opts.ListenToAzureServiceBusSubscription("wolverine-sub")
    .FromTopic("wolverine-topic")
    .UseMassTransitInterop(mt => { })
    .DefaultIncomingMessage<ResponseMessage>().UseForReplies();

Both UseMassTransitInterop() and UseNServiceBusInterop() are available on AzureServiceBusTopic (for publishing) and AzureServiceBusSubscription (for listening). This is ideal for brownfield scenarios where you’re migrating services one at a time and need different messaging frameworks to talk to each other through shared ASB topics.

Other New Features

  • Handler Type Naming for Conventional Routing — NamingSource.FromHandlerType names listener queues after the handler type instead of the message type, useful for modular monolith scenarios with multiple handlers per message
  • Enhanced WolverineParameterAttribute — new FromHeader, FromClaim, and FromMethod value sources for binding handler parameters to HTTP headers, claims, or static method return values
  • Full Tracing for InvokeAsync — opt-in InvokeTracingMode.Full emits the same structured log messages as transport-received messages, with zero overhead in the default path
  • Configurable SQL transport polling interval — thanks to new contributor @xwipeoutx!

Bug Fixes


Polecat 1.5.0

Polecat — the Critter Stack’s newer, lighter-weight event store option — had a big jump from 1.2 to 1.5:

  • net9.0 support and CI workflow
  • SingleStreamProjection<TDoc, TId> with strongly-typed ID support
  • Auto-discover natural keys for FetchForWriting
  • Conjoined tenancy support for DCB tags and natural keys
  • Fix for FetchForWriting with UseIdentityMapForAggregates and strongly typed IDs

Weasel 8.11.1

  • EF Core OwnsOne().ToJson() support — Weasel’s schema diffing now correctly handles EF Core’s JSON column mapping, preventing spurious migration diffs when Marten and EF Core share a database

JasperFx 1.21.1 / JasperFx.Events 1.24.1

  • Skip unknown flags when AutoStartHost is true — fixes an issue where unrecognized CLI flags would cause errors during host auto-start
  • Retrofit IEventSlicer tests

Upgrading

All packages are available on NuGet now. The Marten and Wolverine releases are fully coordinated — if you’re using the Critter Stack together, upgrade both at the same time for the best experience.

As always, please report any issues on the respective GitHub repositories and join us on the Critter Stack Discord if you have questions!

The World’s Crudest Chaos Monkey

I’m working pretty hard this week and early next to deliver the CritterWatch MVP (our new management and observability console for the Critter Stack) to a JasperFx Software client. One of the things we need to do for testing is to fake out several failure conditions in message handlers to be able to test CritterWatch’s “Dead Letter Queue” management and alerting features. To that end, we have some fake systems that constantly process messages, and we’ve rigged up what I’m going to call the world’s crudest Chaos Monkey in Wolverine middleware:

    public static async Task Before(ChaosMonkeySettings chaos)
    {
        // Configurable slow handler for testing back pressure
        if (chaos.SlowHandlerMs > 0)
        {
            await Task.Delay(chaos.SlowHandlerMs);
        }

        if (chaos.FailureRate <= 0) return;

        // Chaos monkey — distribute failure rate equally across 5 exception types
        var perType = chaos.FailureRate / 5.0;
        var next = Random.Shared.NextDouble();

        if (next < perType)
        {
            throw new TripServiceTooBusyException("Just feeling tired at " + DateTime.Now);
        }

        if (next < perType * 2)
        {
            throw new TrackingUnavailableException("Tracking is down at " + DateTime.Now);
        }

        if (next < perType * 3)
        {
            throw new DatabaseIsTiredException("The database wants a break at " + DateTime.Now);
        }

        if (next < perType * 4)
        {
            throw new TransientException("Slow down, you move too fast.");
        }

        if (next < perType * 5)
        {
            throw new OtherTransientException("Slow down, you move too fast.");
        }
    }
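Because each earlier branch throws, the cascading `next < perType * k` checks effectively split [0, FailureRate) into five equal slices. A quick standalone simulation, entirely outside Wolverine, illustrates the distribution:

```csharp
var rate = 0.20;
var perType = rate / 5.0;
var counts = new int[6]; // five exception buckets plus one for success

for (var i = 0; i < 1_000_000; i++)
{
    var next = Random.Shared.NextDouble();

    // Mirror the middleware: the first matching threshold "wins"
    var bucket = next < rate ? (int)(next / perType) : 5;
    counts[bucket]++;
}

// Each failure bucket lands near rate / 5 (4% here), and the
// remaining ~80% of draws fall through to normal handling
Console.WriteLine(string.Join(", ", counts));
```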

And this to control it remotely in tests or just when doing exploratory manual testing:

    private static void MapChaosMonkeyEndpoints(WebApplication app)
    {
        var group = app.MapGroup("/api/chaos")
            .WithTags("Chaos Monkey");

        group.MapGet("/", (ChaosMonkeySettings settings) => Results.Ok(settings))
            .WithSummary("Get current chaos monkey settings");

        group.MapPost("/enable", (ChaosMonkeySettings settings) =>
        {
            settings.FailureRate = 0.20;
            return Results.Ok(new { message = "Chaos monkey enabled at 20% failure rate", settings });
        }).WithSummary("Enable chaos monkey with default 20% failure rate");

        group.MapPost("/disable", (ChaosMonkeySettings settings) =>
        {
            settings.FailureRate = 0;
            return Results.Ok(new { message = "Chaos monkey disabled", settings });
        }).WithSummary("Disable chaos monkey (0% failure rate)");

        group.MapPost("/failure-rate/{rate:double}", (double rate, ChaosMonkeySettings settings) =>
        {
            rate = Math.Clamp(rate, 0, 1);
            settings.FailureRate = rate;
            return Results.Ok(new { message = $"Failure rate set to {rate:P0}", settings });
        }).WithSummary("Set chaos monkey failure rate (0.0 to 1.0)");

        group.MapPost("/slow-handler/{ms:int}", (int ms, ChaosMonkeySettings settings) =>
        {
            ms = Math.Max(0, ms);
            settings.SlowHandlerMs = ms;
            return Results.Ok(new { message = $"Handler delay set to {ms}ms", settings });
        }).WithSummary("Set artificial handler delay in milliseconds (for back pressure testing)");

        group.MapPost("/projection-failure-rate/{rate:double}", (double rate, ChaosMonkeySettings settings) =>
        {
            rate = Math.Clamp(rate, 0, 1);
            settings.ProjectionFailureRate = rate;
            return Results.Ok(new { message = $"Projection failure rate set to {rate:P0}", settings });
        }).WithSummary("Set projection failure rate (0.0 to 1.0)");
    }

In this case, the Before middleware is just baked into the message handlers, but in your own systems the “chaos monkey” middleware could be applied only during testing through a Wolverine extension.
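
To sketch that idea: Wolverine lets you package optional configuration into an `IWolverineExtension` that you only register in test bootstrapping. This is a rough, untested sketch; the extension and middleware type names are hypothetical, and the exact policy API may vary by Wolverine version:

    // Hypothetical test-only extension that bolts on the chaos monkey
    public class ChaosMonkeyExtension : IWolverineExtension
    {
        public void Configure(WolverineOptions options)
        {
            // Make the settings available to the middleware and endpoints
            options.Services.AddSingleton<ChaosMonkeySettings>();

            // Apply the Before() middleware to all message handlers
            options.Policies.AddMiddleware(typeof(ChaosMonkeyMiddleware));
        }
    }

In test harness code you would then register the extension (e.g. `services.AddSingleton<IWolverineExtension, ChaosMonkeyExtension>()`) while leaving the production configuration untouched.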

And I was probably listening to Simon &amp; Garfunkel when I did the first cut at the chaos monkey.

New Option for Simple Projections in Marten or Polecat

JasperFx Software is around and ready to assist you with getting the best possible results using the Critter Stack.

The projections model in Marten and now Polecat has evolved quite a bit over the past decade. Consider this simple aggregated projection of data for our QuestParty in our tests:

    public class QuestParty
    {
        public List<string> Members { get; set; } = new();
        public IList<string> Slayed { get; } = new List<string>();
        public string Key { get; set; }
        public string Name { get; set; }

        // In this particular case, this is also the stream id for the quest events
        public Guid Id { get; set; }

        // These methods take in events and update the QuestParty
        public void Apply(MembersJoined joined) => Members.Fill(joined.Members);
        public void Apply(MembersDeparted departed) => Members.RemoveAll(x => departed.Members.Contains(x));
        public void Apply(QuestStarted started) => Name = started.Name;

        public override string ToString()
        {
            return $"Quest party '{Name}' is {Members.Join(", ")}";
        }
    }

That type is mutable, but the projection library underneath Marten and Polecat happily supports projecting to immutable types as well.

Some people actually like the conventional method approach up above with the Apply, Create, and ShouldDelete methods. From the perspective of Marten’s or Polecat’s internals, it’s always been helpful because the projection subsystem “knows” in this case that the QuestParty is only applicable to the specific event types referenced in those methods, and when you call this code:

    var party = await query
        .Events
        .AggregateStreamAsync<QuestParty>(streamId);

Marten and Polecat are able to quietly use extra SQL filters to limit the events fetched from the database to only the types utilized by the projected QuestParty aggregate.

Great, right? Except that some folks don’t like the naming conventions, simply prefer explicit code, or do some clever things with subclasses on events that can confuse Marten or Polecat about the precedence of the event type handlers. To that end, Marten 8.0 introduced more options for explicit code. We can rewrite the projection part of the QuestParty above as a completely separate class with explicit handling code:

    public class QuestPartyProjection: SingleStreamProjection<QuestParty, Guid>
    {
        public QuestPartyProjection()
        {
            // This is *no longer necessary* in
            // the very most recent versions of Marten,
            // but used to be just to limit Marten's
            // querying of event types when doing live
            // or async projections
            IncludeType<MembersJoined>();
            IncludeType<MembersDeparted>();
            IncludeType<QuestStarted>();
        }

        public override QuestParty Evolve(QuestParty snapshot, Guid id, IEvent e)
        {
            snapshot ??= new QuestParty{ Id = id };
            switch (e.Data)
            {
                case MembersJoined j:
                    // Small helper in JasperFx that prevents
                    // duplicate values
                    snapshot.Members.Fill(j.Members);
                    break;
                case MembersDeparted departed:
                    snapshot.Members.RemoveAll(x => departed.Members.Contains(x));
                    break;
            }

            return snapshot;
        }
    }

There are several more hooks in that SingleStreamProjection base type, like versioning or fine-grained control over asynchronous projection behavior, that might be valuable later. But for now, let’s look at a new feature in Marten and Polecat that lets you use explicit code right in the aggregate type itself:

    public class QuestParty
    {
        public List<string> Members { get; set; } = new();
        public IList<string> Slayed { get; } = new List<string>();
        public string Key { get; set; }
        public string Name { get; set; }

        // In this particular case, this is also the stream id for the quest events
        public Guid Id { get; set; }

        public void Evolve(IEvent e)
        {
            switch (e.Data)
            {
                case QuestStarted _:
                    // A little goofy, but this lets Marten know that
                    // the projection cares about this event type
                    break;
                case MembersJoined j:
                    // Small helper in JasperFx that prevents
                    // duplicate values
                    Members.Fill(j.Members);
                    break;
                case MembersDeparted departed:
                    Members.RemoveAll(x => departed.Members.Contains(x));
                    break;
            }
        }

        public override string ToString()
        {
            return $"Quest party '{Name}' is {Members.Join(", ")}";
        }
    }

This is admittedly yet another conventional method in terms of the method name and the possible arguments, but hopefully the switch statement approach is much more explicit for folks who prefer that. As an additional bonus, Marten is able to automatically register the event types via a source generator, so the version of QuestParty just above gets all the benefits of the event filtering without making users do any extra explicit configuration.

Projecting to Immutable Views

Just for completeness, let’s look at alternative versions of QuestParty to see what it looks like if you make the aggregate an immutable type. First up is the conventional method approach:

    public sealed record QuestParty(Guid Id, List<string> Members)
    {
        // These methods take in events and update the QuestParty
        public static QuestParty Create(QuestStarted started) => new(started.QuestId, []);

        public static QuestParty Apply(MembersJoined joined, QuestParty party) =>
            party with
            {
                Members = party.Members.Union(joined.Members).ToList()
            };

        public static QuestParty Apply(MembersDeparted departed, QuestParty party) =>
            party with
            {
                Members = party.Members.Where(x => !departed.Members.Contains(x)).ToList()
            };

        public static QuestParty Apply(MembersEscaped escaped, QuestParty party) =>
            party with
            {
                Members = party.Members.Where(x => !escaped.Members.Contains(x)).ToList()
            };
    }

And with the Evolve approach:

    public sealed record QuestParty(Guid Id, List<string> Members)
    {
        public static QuestParty Evolve(QuestParty? party, IEvent e)
        {
            switch (e.Data)
            {
                case QuestStarted s:
                    return new(s.QuestId, []);
                case MembersJoined joined:
                    return party with
                    {
                        Members = party.Members.Union(joined.Members).ToList()
                    };
                case MembersDeparted departed:
                    return party with
                    {
                        Members = party.Members.Where(x => !departed.Members.Contains(x)).ToList()
                    };
                case MembersEscaped escaped:
                    return party with
                    {
                        Members = party.Members.Where(x => !escaped.Members.Contains(x)).ToList()
                    };
            }

            return party;
        }
    }

Summary

What do I recommend? Honestly, just whatever you prefer. This is a case where I’d like everyone to be happy with one of the available options. And yes, it’s not always good that there is more than one way to do the same thing in a framework, but I think we’re going to just keep all these options in the long run. It wasn’t shown here at all, but I think we’ll kill off the early options to define projections through a ton of inline Lambda functions within a fluent interface. That stuff can just die.

In the medium and longer term, we’re going to be utilizing more source generators across the entire Critter Stack as a way of both eliminating some explicit configuration requirements and to optimize our cold start times. I’m looking forward to getting much more into that work.

CQRS and Event Sourcing with Polecat and SQL Server

If you’re already familiar with Marten and Wolverine, this is all old news except for the part where we’re using SQL Server. If you’re brand new to the “Critter Stack,” Event Sourcing, or CQRS, hang around! And just so you know, JasperFx Software is completely ready to support our clients using Polecat.

All of the sample code in this blog post can be found in the Wolverine codebase on GitHub here.

With the advent of Polecat going 1.0 last week, you now have a robust solution for Event Sourcing using SQL Server 2025 as the backing store. If you’re reading this, you’re surely involved in software development, and that means your job at some point has been dictated by some kind of issue tracking tool, so let’s use that as our example system and pretend we’re creating an incident tracking system for our help desk folks.

To get started, I’m a fan of using the Event Storming technique to identify some of the meaningful events we should capture in our system and start to identify possible commands within our system.

Having at least some initial thoughts about the shape of our system, let’s start a new web service project in .NET with:

    dotnet new webapi

Then add both Polecat (for persistence) and Wolverine (for both HTTP endpoints and asynchronous messaging) with:

    dotnet add package WolverineFx.Polecat
    dotnet add package WolverineFx.Http

And now, let’s jump into our Program file to wire up Polecat to an existing SQL Server database and configure Wolverine as well:

    using Polecat;
    using Polecat.Projections;
    using PolecatIncidentService;
    using Wolverine;
    using Wolverine.Http;
    using Wolverine.Polecat;

    var builder = WebApplication.CreateBuilder(args);

    builder.Services.AddOpenApi();

    builder.Services.AddPolecat(opts =>
        {
            var connectionString = builder.Configuration.GetConnectionString("SqlServer")
                ?? "Server=localhost,1434;User Id=sa;Password=P@55w0rd;Timeout=5;MultipleActiveResultSets=True;Initial Catalog=master;Encrypt=False";

            opts.ConnectionString = connectionString;
            opts.DatabaseSchemaName = "incidents";

            // We'll talk about this soon...
            opts.Projections.Snapshot<Incident>(SnapshotLifecycle.Inline);
        })

        // For Marten users, *this* is the default for Polecat!
        //.UseLightweightSessions()
        .IntegrateWithWolverine(x => x.UseWolverineManagedEventSubscriptionDistribution = true);

    builder.Host.UseWolverine(opts => { opts.Policies.AutoApplyTransactions(); });

    builder.Services.AddWolverineHttp();

    var app = builder.Build();

    if (app.Environment.IsDevelopment())
    {
        app.MapOpenApi();
    }

    // Adding Wolverine.HTTP
    app.MapWolverineEndpoints();

    // This gets you a lot of CLI goodness from the
    // greater JasperFx / Critter Stack ecosystem
    // and will soon feed quite a bit of AI assisted development as well
    return await app.RunJasperFxCommands(args);

    // For test bootstrapping in case you want to work w/
    // more than one system at a time
    public partial class Program
    {
    }

Our commands are just going to be some immutable records like this:

    public record LogIncident(
        Guid CustomerId,
        Contact Contact,
        string Description,
        Guid LoggedBy
    );

    public record CategoriseIncident(
        IncidentCategory Category,
        Guid CategorisedBy,
        int Version
    );

    public record CloseIncident(
        Guid ClosedBy,
        int Version
    );

It’s not mandatory to use immutable types for commands and events, but it’s idiomatic, so you might as well.
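
The event records themselves aren’t shown in this listing, but from how the handlers below construct them, they’d look something like this (these exact shapes are my inference rather than the published sample):

    // Inferred event shapes, based on how the endpoints construct them
    public record IncidentLogged(
        Guid CustomerId,
        Contact Contact,
        string Description,
        Guid LoggedBy
    );

    public record IncidentCategorised(
        Guid IncidentId,
        IncidentCategory Category,
        Guid CategorisedBy
    );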

Let’s start with our LogIncident use case and build out an HTTP endpoint that creates a new “event stream” for events related to a single, logical Incident:

    public static class LogIncidentEndpoint
    {
        [WolverinePost("/api/incidents")]
        public static (CreationResponse<Guid>, IStartStream) Post(LogIncident command)
        {
            var (customerId, contact, description, loggedBy) = command;
            var logged = new IncidentLogged(customerId, contact, description, loggedBy);
            var start = PolecatOps.StartStream<Incident>(logged);

            var response = new CreationResponse<Guid>("/api/incidents/" + start.StreamId, start.StreamId);

            return (response, start);
        }
    }

Polecat does support “Dynamic Consistency Boundary” event sourcing as well, but that’s not where I think most people should start, and I’ll get to that in a later post I keep putting off…

With some help from Alba, another JasperFx supported library, we can write both unit tests for the business logic (such as it is) and an end-to-end test through the HTTP endpoint like this:

    public class when_logging_an_incident : IntegrationContext
    {
        public when_logging_an_incident(AppFixture fixture) : base(fixture)
        {
        }

        [Fact]
        public void unit_test()
        {
            var contact = new Contact(ContactChannel.Email);
            var command = new LogIncident(Guid.NewGuid(), contact, "It's broken", Guid.NewGuid());

            // Pure function FTW!
            var (response, startStream) = LogIncidentEndpoint.Post(command);

            // Should only have the one event
            startStream.Events.ShouldBe([
                new IncidentLogged(command.CustomerId, command.Contact, command.Description, command.LoggedBy)
            ]);
        }

        [Fact]
        public async Task happy_path_end_to_end()
        {
            var contact = new Contact(ContactChannel.Email);
            var command = new LogIncident(Guid.NewGuid(), contact, "It's broken", Guid.NewGuid());

            // Log a new incident first
            var initial = await Scenario(x =>
            {
                x.Post.Json(command).ToUrl("/api/incidents");
                x.StatusCodeShouldBe(201);
            });

            // Read the response body by deserialization
            var response = initial.ReadAsJson<CreationResponse<Guid>>();

            // Reaching into Polecat to build the current state of the new Incident
            await using var session = Store.LightweightSession();
            var incident = await session.Events.FetchLatest<Incident>(response.Value);
            incident!.Status.ShouldBe(IncidentStatus.Pending);
        }
    }

Now, to build out a command handler for potentially categorizing an incident, we’ll need to:

  1. Know the current state of the logical Incident by rolling up the events into some kind of representation of the state so that we can “decide” which events, if any, should be appended at this time. In Event Sourcing terms, I’d refer to this as the “write model.”
  2. Define the command type itself
  3. Validate the input
  4. Like I said earlier, decide which events should be appended
  5. Do some metadata correlation for observability. It’s not obvious from the code, but in the sample below Wolverine & Polecat are tracking the events captured against the correlation id of the current HTTP request
  6. Establish transactional boundaries, including any outbound messaging that might be taking place in response to the events that are being appended. This is something that Wolverine does for Polecat (and Marten) in command handlers, and it includes Wolverine’s transactional outbox support.
  7. Protect against concurrent writes to any given Incident stream, which Wolverine and Polecat do for you in the next endpoint by applying optimistic concurrency checks to guarantee that no other thread changed the Incident since this CategoriseIncident command was issued by the caller

That’s actually quite a bit of responsibility for the command handler, but not to worry, Wolverine and Polecat are going to keep your code nice and simple. Hopefully even a pure function “Decider” for the business logic in many cases. Before I get into the command handler, here’s the “projection” that gives us the current state of the Incident by applying events:

    public class Incident
    {
        public Guid Id { get; set; }

        // Polecat will set this itself for optimistic concurrency
        public int Version { get; set; }

        public IncidentStatus Status { get; set; } = IncidentStatus.Pending;
        public IncidentCategory? Category { get; set; }
        public bool HasOutstandingResponseToCustomer { get; set; } = false;

        public Incident()
        {
        }

        public void Apply(IncidentLogged _) { }
        public void Apply(IncidentCategorised e) => Category = e.Category;
        public void Apply(AgentRespondedToIncident _) => HasOutstandingResponseToCustomer = false;
        public void Apply(CustomerRespondedToIncident _) => HasOutstandingResponseToCustomer = true;
        public void Apply(IncidentResolved _) => Status = IncidentStatus.Resolved;
        public void Apply(ResolutionAcknowledgedByCustomer _) => Status = IncidentStatus.ResolutionAcknowledgedByCustomer;
        public void Apply(IncidentClosed _) => Status = IncidentStatus.Closed;

        public bool ShouldDelete(Archived @event) => true;
    }

And finally, the command handler:

    public record CategoriseIncident(
        IncidentCategory Category,
        Guid CategorisedBy,
        int Version
    );

    public static class CategoriseIncidentEndpoint
    {
        public static ProblemDetails Validate(Incident incident)
        {
            return incident.Status == IncidentStatus.Closed
                ? new ProblemDetails { Detail = "Incident is already closed" }
                : WolverineContinue.NoProblems;
        }

        [EmptyResponse]
        [WolverinePost("/api/incidents/{incidentId:guid}/category")]
        public static IncidentCategorised Post(
            CategoriseIncident command,
            [Aggregate("incidentId")] Incident incident)
        {
            return new IncidentCategorised(incident.Id, command.Category, command.CategorisedBy);
        }
    }
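
Following the same pure-function testing approach shown earlier for LogIncidentEndpoint, unit tests for this handler and its validation rule could look something like this. A sketch only; the `IncidentCategory.Database` enum member is illustrative and not from the published sample:

    public class CategoriseIncidentTests
    {
        [Fact]
        public void categorising_an_open_incident_emits_the_event()
        {
            var incident = new Incident { Id = Guid.NewGuid() };
            var command = new CategoriseIncident(IncidentCategory.Database, Guid.NewGuid(), 1);

            // Pure function, so no database or test harness required
            var @event = CategoriseIncidentEndpoint.Post(command, incident);

            @event.ShouldBe(new IncidentCategorised(incident.Id, command.Category, command.CategorisedBy));
        }

        [Fact]
        public void cannot_categorise_a_closed_incident()
        {
            var incident = new Incident { Status = IncidentStatus.Closed };

            CategoriseIncidentEndpoint.Validate(incident)
                .Detail.ShouldBe("Incident is already closed");
        }
    }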

And I admit that that’s a lot of code thrown at you all at once, and maybe even a lot of new concepts. For further reading, see: