
EventSourcing .NET

Tutorial, practical samples and other resources about Event Sourcing in .NET. See also my similar repositories for JVM and NodeJS.

1. Event Sourcing

1.1 What is Event Sourcing?

Event Sourcing is a design pattern in which results of business operations are stored as a series of events.

It is an alternative way to persist data. In contrast with state-oriented persistence that only keeps the latest version of the entity state, Event Sourcing stores each state change as a separate event.

Thanks to that, no business data is lost. Each operation results in an event stored in the database. That enables extended auditing and diagnostics capabilities (both technical and business-wise). What's more, as events contain the business context, they enable rich business analysis and reporting.

In this repository I'm showing different aspects and patterns around Event Sourcing, from basic to advanced practices.

Read more in my articles:

1.2 What is an Event?

Events represent facts in the past. They carry information about something that has been accomplished. They should be named in the past tense, e.g. "user added", "order confirmed". Events are not directed to a specific recipient - they're broadcast information. It's like telling a story at a party. We hope that someone listens to us, but we may quickly realise that no one is paying attention.

Events:

  • are immutable: "What has been seen, cannot be unseen".
  • can be ignored but cannot be retracted (as you cannot change the past).
  • can be interpreted differently. The basketball match result is a fact. Winning team fans will interpret it positively. Losing team fans - not so much.
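In .NET, immutable records map naturally onto events. A minimal sketch (the event names and fields below are illustrative, not taken from the samples):

```csharp
using System;

// Events are immutable facts, named in the past tense.
// C# positional records give init-only properties and value-based equality for free.
public record UserAdded(
    Guid UserId,
    string Name,
    DateTimeOffset AddedAt
);

public record OrderConfirmed(
    Guid OrderId,
    DateTimeOffset ConfirmedAt
);
```

Once created, such an event instance cannot be modified; consumers can only read it or ignore it, which matches the rules above.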

Read more in my articles:

1.3 What is a Stream?

Events are logically grouped into streams. In Event Sourcing, streams are the representation of the entities. All the entity state mutations end up as the persisted events. Entity state is retrieved by reading all the stream events and applying them one by one in the order of appearance.

A stream should have a unique identifier representing the specific object. Each event has its own unique position within a stream. This position is usually represented by a numeric, incremental value. This number can be used to define the order of the events while retrieving the state. It can also be used to detect concurrency issues.
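The position-based concurrency check mentioned above can be sketched as follows (a hypothetical in-memory stream; real event stores expose a similar expected-version parameter on their append APIs):

```csharp
using System;
using System.Collections.Generic;

// Illustrative only: a single stream guarded by an expected-version check.
public class EventStream
{
    private readonly List<object> events = new();

    public long Version => events.Count;

    // The caller states the version it last observed; if another writer
    // appended in the meantime, the versions differ and the write is refused.
    public void Append(object @event, long expectedVersion)
    {
        if (Version != expectedVersion)
            throw new InvalidOperationException(
                $"Concurrency conflict: expected {expectedVersion}, actual {Version}");

        events.Add(@event);
    }
}
```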

1.4 Event representation

Technically events are messages.

They may be represented in, e.g., JSON, binary or XML format. Besides the data, they usually contain:

  • id: unique event identifier.
  • type: name of the event, e.g. "invoice issued".
  • stream id: object id for which event was registered (e.g. invoice id).
  • stream position (also named version, order of occurrence, etc.): the number used to decide the order of the event's occurrence for the specific object (stream).
  • timestamp: representing a time at which the event happened.
  • other metadata like correlation id, causation id, etc.

Sample event JSON can look like:

{
  "id": "e44f813c-1a2f-4747-aed5-086805c6450e",
  "type": "invoice-issued",
  "streamId": "INV/2021/11/01",
  "streamPosition": 1,
  "timestamp": "2021-11-01T00:05:32.000Z",

  "data":
  {
    "issuedTo": {
      "name": "Oscar the Grouch",
      "address": "123 Sesame Street"
    },
    "amount": 34.12,
    "number": "INV/2021/11/01",
    "issuedAt": "2021-11-01T00:05:32.000Z"
  },

  "metadata":
  {
    "correlationId": "1fecc92e-3197-4191-b929-bd306e1110a4",
    "causationId": "c3cf07e8-9f2f-4c2d-a8e9-f8a612b4a7f1"
  }
}

Read more in my articles:

1.5 Event Storage

Event Sourcing is not tied to any specific storage implementation. As long as it fulfils the assumptions, it can be implemented on top of any backing database (relational, document, etc.). The state has to be represented as an append-only log of events. Events are stored in chronological order, and each new event is appended after the previous one. Event stores are a category of databases designed explicitly for this purpose.
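Whatever the backing database, the core abstraction stays the same: an append-only log keyed by stream id. A minimal in-memory sketch of that contract (illustrative, not how any particular event store is implemented):

```csharp
using System;
using System.Collections.Generic;

// Append-only: events can be added and read back in order,
// but there is deliberately no update or delete API.
public class InMemoryEventStore
{
    private readonly Dictionary<string, List<object>> streams = new();

    public void Append(string streamId, object @event)
    {
        if (!streams.TryGetValue(streamId, out var stream))
            streams[streamId] = stream = new List<object>();

        stream.Add(@event);
    }

    public IReadOnlyList<object> ReadStream(string streamId) =>
        streams.TryGetValue(streamId, out var stream)
            ? stream
            : Array.Empty<object>();
}
```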

Read more in my articles:

1.6 Retrieving the current state from events

In Event Sourcing, the state is stored in events. Events are logically grouped into streams. Streams can be thought of as the entities' representation. Traditionally (e.g. in relational or document approach), each entity is stored as a separate record.

Id        IssuerName        IssuerAddress      Amount  Number          IssuedAt
e44f813c  Oscar the Grouch  123 Sesame Street  34.12   INV/2021/11/01  2021-11-01

In Event Sourcing, the entity is stored as the series of events that happened for this specific object, e.g. InvoiceInitiated, InvoiceIssued, InvoiceSent.

[
    {
        "id": "e44f813c-1a2f-4747-aed5-086805c6450e",
        "type": "invoice-initiated",
        "streamId": "INV/2021/11/01",
        "streamPosition": 1,
        "timestamp": "2021-11-01T00:05:32.000Z",

        "data":
        {
            "issuer": {
                "name": "Oscar the Grouch",
                "address": "123 Sesame Street",
            },
            "amount": 34.12,
            "number": "INV/2021/11/01",
            "initiatedAt": "2021-11-01T00:05:32.000Z"
        }
    },
    {
        "id": "5421d67d-d0fe-4c4c-b232-ff284810fb59",
        "type": "invoice-issued",
        "streamId": "INV/2021/11/01",
        "streamPosition": 2,
        "timestamp": "2021-11-01T00:11:32.000Z",

        "data":
        {
            "issuedTo": "Cookie Monster",
            "issuedAt": "2021-11-01T00:11:32.000Z"
        }
    },
    {
        "id": "637cfe0f-ed38-4595-8b17-2534cc706abf",
        "type": "invoice-sent",
        "streamId": "INV/2021/11/01",
        "streamPosition": 3,
        "timestamp": "2021-11-01T00:12:01.000Z",

        "data":
        {
            "sentVia": "email",
            "sentAt": "2021-11-01T00:12:01.000Z"
        }
    }
]

All of those events share the stream id ("streamId": "INV/2021/11/01"), and have incremented stream positions.

In Event Sourcing each entity is represented by its stream: the sequence of events correlated by the stream id ordered by stream position.

To get the current state of an entity we need to perform the stream aggregation process. We're translating the set of events into a single entity. This can be done with the following steps:

  1. Read all events for the specific stream.
  2. Order them ascending in the order of appearance (by the event's stream position).
  3. Construct the empty object of the entity type (e.g. with default constructor).
  4. Apply each event on the entity.

This process is also called stream aggregation or state rehydration.

We could implement that as:

public record Person(
    string Name,
    string Address
);

public record InvoiceInitiated(
    double Amount,
    string Number,
    Person IssuedTo,
    DateTime InitiatedAt
);

public record InvoiceIssued(
    string IssuedBy,
    DateTime IssuedAt
);

public enum InvoiceSendMethod
{
    Email,
    Post
}

public record InvoiceSent(
    InvoiceSendMethod SentVia,
    DateTime SentAt
);

public enum InvoiceStatus
{
    Initiated = 1,
    Issued = 2,
    Sent = 3
}

public class Invoice
{
    public string Id { get; set; }
    public double Amount { get; private set; }
    public string Number { get; private set; }

    public InvoiceStatus Status { get; private set; }

    public Person IssuedTo { get; private set; }
    public DateTime InitiatedAt { get; private set; }

    public string IssuedBy { get; private set; }
    public DateTime IssuedAt { get; private set; }

    public InvoiceSendMethod SentVia { get; private set; }
    public DateTime SentAt { get; private set; }

    public void When(object @event)
    {
        switch (@event)
        {
            case InvoiceInitiated invoiceInitiated:
                Apply(invoiceInitiated);
                break;
            case InvoiceIssued invoiceIssued:
                Apply(invoiceIssued);
                break;
            case InvoiceSent invoiceSent:
                Apply(invoiceSent);
                break;
        }
    }

    private void Apply(InvoiceInitiated @event)
    {
        Id = @event.Number;
        Amount = @event.Amount;
        Number = @event.Number;
        IssuedTo = @event.IssuedTo;
        InitiatedAt = @event.InitiatedAt;
        Status = InvoiceStatus.Initiated;
    }

    private void Apply(InvoiceIssued @event)
    {
        IssuedBy = @event.IssuedBy;
        IssuedAt = @event.IssuedAt;
        Status = InvoiceStatus.Issued;
    }

    private void Apply(InvoiceSent @event)
    {
        SentVia = @event.SentVia;
        SentAt = @event.SentAt;
        Status = InvoiceStatus.Sent;
    }
}

and use it as:

var invoiceInitiated = new InvoiceInitiated(
    34.12,
    "INV/2021/11/01",
    new Person("Oscar the Grouch", "123 Sesame Street"),
    DateTime.UtcNow
);
var invoiceIssued = new InvoiceIssued(
    "Cookie Monster",
    DateTime.UtcNow
);
var invoiceSent = new InvoiceSent(
    InvoiceSendMethod.Email,
    DateTime.UtcNow
);

// 1,2. Get all events and sort them in the order of appearance
var events = new object[] {invoiceInitiated, invoiceIssued, invoiceSent};

// 3. Construct empty Invoice object
var invoice = new Invoice();

// 4. Apply each event on the entity.
foreach (var @event in events)
{
    invoice.When(@event);
}

and generalise this into an Aggregate base class:

public abstract class Aggregate<T>
{
    public T Id { get; protected set; }

    protected Aggregate() { }

    public virtual void When(object @event) { }
}

The biggest advantage of "online" stream aggregation is that it always uses the most recent business logic. After a change in the Apply method, it's automatically reflected on the next run. As long as the event data is fine, no migration or updates are needed.

In Marten, the When method is not needed. Marten uses a naming convention and calls the Apply method internally. The Apply method has to:

  • have a single parameter with the event object,
  • have void as the result type.
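Under that convention, an aggregate only needs the Apply overloads; the When dispatcher disappears. A trimmed-down sketch (the event record is redefined here to keep the snippet self-contained):

```csharp
using System;

public record InvoiceIssued(
    string IssuedBy,
    DateTime IssuedAt
);

public class Invoice
{
    public string IssuedBy { get; private set; } = default!;
    public DateTime IssuedAt { get; private set; }

    // Matched by convention: a single event parameter and a void result.
    public void Apply(InvoiceIssued @event)
    {
        IssuedBy = @event.IssuedBy;
        IssuedAt = @event.IssuedAt;
    }
}
```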

See samples:

Read more in my article:

1.7 Strongly-Typed ids with Marten

Strongly typed ids (or, in general, a proper type system) can make your code more predictable. They reduce the chance of trivial mistakes, like accidentally swapping parameters of the same primitive type.

So for such code:

var reservationId = "RES/01";
var seatId = "SEAT/22";
var customerId = "CUS/291";

var reservation = new Reservation(
    reservationId,
    seatId,
    customerId
);

the compiler won't catch it if you swap reservationId with seatId.

If you use strongly typed ids, then the compiler will catch that issue:

var reservationId = new ReservationId("RES/01");
var seatId = new SeatId("SEAT/22");
var customerId = new CustomerId("CUS/291");

var reservation = new Reservation(
    reservationId,
    seatId,
    customerId
);

They're not ideal, as they usually don't play well with storage engines. Typical issues include serialisation, LINQ queries, etc. In some cases they may simply be overkill. You need to pick your poison.

To reduce tedious, copy/paste code, it's worth defining a strongly-typed id base class, like:

public class StronglyTypedValue<T>: IEquatable<StronglyTypedValue<T>> where T: IComparable<T>
{
    public T Value { get; }

    public StronglyTypedValue(T value)
    {
        Value = value;
    }

    public bool Equals(StronglyTypedValue<T>? other)
    {
        if (ReferenceEquals(null, other)) return false;
        if (ReferenceEquals(this, other)) return true;
        return EqualityComparer<T>.Default.Equals(Value, other.Value);
    }

    public override bool Equals(object? obj)
    {
        if (ReferenceEquals(null, obj)) return false;
        if (ReferenceEquals(this, obj)) return true;
        if (obj.GetType() != this.GetType()) return false;
        return Equals((StronglyTypedValue<T>)obj);
    }

    public override int GetHashCode()
    {
        return EqualityComparer<T>.Default.GetHashCode(Value);
    }

    public static bool operator ==(StronglyTypedValue<T>? left, StronglyTypedValue<T>? right)
    {
        return Equals(left, right);
    }

    public static bool operator !=(StronglyTypedValue<T>? left, StronglyTypedValue<T>? right)
    {
        return !Equals(left, right);
    }
}

Then you can define specific id class as:

public class ReservationId: StronglyTypedValue<Guid>
{
    public ReservationId(Guid value) : base(value)
    {
    }
}

You can even add additional rules:

public class ReservationNumber: StronglyTypedValue<string>
{
    public ReservationNumber(string value) : base(value)
    {
        if (string.IsNullOrEmpty(value) || !value.StartsWith("RES/") || value.Length <= 4)
            throw new ArgumentOutOfRangeException(nameof(value));
    }
}

The base class working with Marten can be defined as:

public abstract class Aggregate<TKey, T>
    where TKey: StronglyTypedValue<T>
    where T : IComparable<T>
{
    public TKey Id { get; set; } = default!;

    [Identity]
    public T AggregateId
    {
        get => Id.Value;
        set { }
    }

    public int Version { get; protected set; }

    [JsonIgnore] private readonly Queue<object> uncommittedEvents = new();

    public object[] DequeueUncommittedEvents()
    {
        var dequeuedEvents = uncommittedEvents.ToArray();

        uncommittedEvents.Clear();

        return dequeuedEvents;
    }

    protected void Enqueue(object @event)
    {
        uncommittedEvents.Enqueue(@event);
    }
}

Marten requires an id with a public getter and setter of type string or Guid. We used a trick: we added AggregateId backed by the strongly-typed Id, and marked it with the Identity attribute so that Marten uses this property in its internals.

Example aggregate can look like:

public class Reservation : Aggregate<ReservationId, Guid>
{
    public CustomerId CustomerId { get; private set; } = default!;

    public SeatId SeatId { get; private set; } = default!;

    public ReservationNumber Number { get; private set; } = default!;

    public ReservationStatus Status { get; private set; }

    public static Reservation CreateTentative(
        SeatId seatId,
        CustomerId customerId)
    {
        return new Reservation(
            new ReservationId(Guid.NewGuid()),
            seatId,
            customerId,
            new ReservationNumber(Guid.NewGuid().ToString())
        );
    }

    // (...)
}

See the full sample here.

Read more in the article:

2. Videos

2.1. Practical Event Sourcing with Marten

Pragmatic Event Sourcing with Marten

2.2. Keep your streams short! Or how to model event-sourced systems efficiently

Keep your streams short! Or how to model event-sourced systems efficiently

2.3. Let's build event store in one hour!

Let's build event store in one hour!

2.4. CQRS is Simpler than you think with C#11 & NET7

CQRS is Simpler than you think with C#11 & NET7

2.5. Practical Introduction to Event Sourcing with EventStoreDB

Practical introduction to Event Sourcing with EventStoreDB

2.6. How to deal with privacy and GDPR in Event-Sourced systems

How to deal with privacy and GDPR in Event-Sourced systems

2.7 Let's build the worst Event Sourcing system!

Let's build the worst Event Sourcing system!

2.8 The Light and The Dark Side of the Event-Driven Design

The Light and The Dark Side of the Event-Driven Design

2.9 Implementing Distributed Processes

Implementing Distributed Processes

2.10 Conversation with Yves Lorphelin about CQRS

Event Store Conversations: Yves Lorphelin talks to Oskar Dudycz about CQRS (EN)

2.11. Never Lose Data Again - Event Sourcing to the Rescue!

Never Lose Data Again - Event Sourcing to the Rescue!

3. Support

Feel free to create an issue if you have any questions or requests for more explanations or samples. I also take Pull Requests!

💖 If this repository helped you - I'd be more than happy if you joined the group of my official supporters at:

👉 GitHub Sponsors

⭐ Starring it on GitHub or sharing it with your friends will also help!

4. Prerequisites

To run the Event Store examples, you need:

  1. .NET 6 installed - https://dotnet.microsoft.com/download/dotnet/6.0
  2. Docker installed. Then go to the docker folder and run:
docker-compose up

You can find more information about using .NET, WebApi and Docker in my other tutorial: WebApi with .NET.

5. Tools used

  1. Marten - Event Store and Read Models
  2. EventStoreDB - Event Store
  3. Kafka - External Durable Message Bus to integrate services
  4. ElasticSearch - Read Models

6. Samples

See also fully working, real-world samples of Event Sourcing and CQRS applications in Samples folder.

Samples are using CQRS architecture. They're sliced based on the business modules and operations. Read more about the assumptions in "How to slice the codebase effectively?".

  • Simplest CQRS and Event Sourcing flow using Minimal API,
  • Cutting the number of layers and boilerplate complex code to bare minimum,
  • Using all Marten helpers like WriteToAggregate, AggregateStream to simplify the processing,
  • Examples of all the typical Marten's projections,
  • Example of how and where to use C# Records, Nullable Reference Types, etc,
  • No Aggregates. Commands are handled in the domain service as pure functions.
  • typical Event Sourcing and CQRS flow,
  • DDD using Aggregates,
  • microservices example,
  • stores events to Marten,
  • distributed processes coordinated by Saga (Order Saga),
  • Kafka as a messaging platform to integrate microservices,
  • example of the case when some services are event-sourced (Carts, Orders, Payments) and some are not (Shipments using EntityFramework as ORM)
  • typical Event Sourcing and CQRS flow,
  • functional composition, no aggregates, just data and functions,
  • stores events to EventStoreDB,
  • Builds read models using Subscription to $all,
  • Read models are stored as Postgres tables using EntityFramework.
  • orchestrate and coordinate business workflow spanning across multiple aggregates using Saga pattern,
  • handle distributed processing both for asynchronous commands scheduling and events publishing,
  • getting at-least-once delivery guarantee,
  • implementing command store and outbox pattern on top of Marten and EventStoreDB,
  • unit testing aggregates and Saga with a little help from Ogooreck,
  • testing asynchronous code.
  • typical Event Sourcing and CQRS flow,
  • DDD using Aggregates,
  • stores events to EventStoreDB,
  • Builds read models using Subscription to $all.
  • Read models are stored as Marten documents.
  • simplest CQRS flow using .NET Endpoints,
  • example of how and where to use C# Records, Nullable Reference Types, etc,
  • No Event Sourcing! Using Entity Framework to show that CQRS is not bounded to Event Sourcing or any type of storage,
  • No Aggregates! CQRS does not need DDD. Business logic can be handled in handlers.

Variation of the previous example, but:

Shows how to handle basic event schema versioning scenarios using event and stream transformations (e.g. upcasting):

Shows how to compose event handlers in the processing pipelines to:

  • filter events,
  • transform them,
  • NOT requiring marker interfaces for events,
  • NOT requiring marker interfaces for handlers,
  • enables composition through regular functions,
  • allows using interfaces and classes if you want to,
  • can be used with Dependency Injection, but also without through builder,
  • integrates with MediatR if you want to.
  • ๐Ÿ“ Read more How to build a simple event pipeline
  • typical Event Sourcing and CQRS flow,
  • DDD using Aggregates,
  • microservices example,
  • stores events to Marten,
  • Kafka as a messaging platform to integrate microservices,
  • read models handled in separate microservice and stored to other database (ElasticSearch)
  • typical Event Sourcing and CQRS flow,
  • DDD using Aggregates,
  • stores events to Marten.
  • typical Event Sourcing and CQRS flow,
  • DDD using Aggregates,
  • stores events to Marten,
  • asynchronous projections rebuild using AsyncDaemon feature.

7. Self-paced training Kits

I prepared self-paced training kits for Event Sourcing. See more in the Workshop description.

Event Sourcing is perceived as a complex pattern. Some believe that it's like Nessie: everyone's heard of it, but hardly anyone has seen it. In fact, Event Sourcing is a pretty practical and straightforward concept. It helps build predictable applications closer to business. Nowadays, storage is cheap and information is priceless. In Event Sourcing, no data is lost.

The workshop aims to build participants' knowledge of the general concept and its related patterns. The acquired knowledge will allow for the conscious design of architectural solutions and the analysis of associated risks.

The emphasis will be on a pragmatic understanding of architectures and applying it in practice using Marten and EventStoreDB.

You can do the workshop as a self-paced kit. That should give you a good foundation for starting your journey with Event Sourcing and learning tools like Marten and EventStoreDB. If you'd like to get full coverage with all nuances of the private workshop, feel free to contact me via email.

  1. Events definition.
  2. Getting State from events.
  3. Appending Events:
  4. Getting State from events
  5. Business logic:
  6. Optimistic Concurrency:
  7. Projections:

It teaches the event store basics by showing how to build your own event store on top of a relational database. It starts with the table setup, goes through appending events, aggregations, projections, snapshots, and finishes with the Marten basics.

  1. Streams Table
  2. Events Table
  3. Appending Events
  4. Optimistic Concurrency Handling
  5. Event Store Methods
  6. Stream Aggregation
  7. Time Travelling
  8. Aggregate and Repositories
  9. Snapshots
  10. Projections
  11. Projections With Marten

8. Articles

Read also more on the Event Sourcing and CQRS topics in my blog posts:

9. Event Store - Marten

  • Creating event store
  • Event Stream - the representation of the entity in event sourcing. It's the set of events that happened for the entity with the given id. The stream id should be unique; it can have different types, but usually it's a Guid.
    • Stream starting - a stream should always be started with a unique id. Marten provides three ways of starting a stream:
      • calling StartStream method with a stream id
        var streamId = Guid.NewGuid();
        documentSession.Events.StartStream<IssuesList>(streamId);
      • calling StartStream method with a set of events
        var @event = new IssueCreated { IssueId = Guid.NewGuid(), Description = "Description" };
        var streamId = documentSession.Events.StartStream<IssuesList>(@event);
      • just appending events with a stream id
        var @event = new IssueCreated { IssueId = Guid.NewGuid(), Description = "Description" };
        var streamId = Guid.NewGuid();
        documentSession.Events.Append(streamId, @event);
    • Stream loading - it should be possible to load back all events that were appended to the event store. Marten allows you to:
      • get the list of events by calling the FetchStream method with a stream id
        var eventsList = documentSession.Events.FetchStream(streamId);
      • getting a single event by its id
        var @event = documentSession.Events.Load<IssueCreated>(eventId);
    • Stream loading from exact state - Marten also allows you to load the stream state as of a specific point in time or version by:
      • timestamp (has to be in UTC)
        var dateTime = new DateTime(2017, 1, 11);
        var events = documentSession.Events.FetchStream(streamId, timestamp: dateTime);
      • version number
        var versionNumber = 3;
        var events = documentSession.Events.FetchStream(streamId, version: versionNumber);
  • Event stream aggregation - the stored events can be aggregated to form the entity once again. During the aggregation process, events are taken by the stream id and then replayed event by event (e.g. NewTaskAdded, DescriptionOfTaskChanged, TaskRemoved). First, an empty entity instance is created (by calling the default constructor). Then the events, in order of appearance, are applied to the entity instance by calling the proper Apply methods.
    • Online Aggregation - the entity instance is constructed on the fly from events. Events are read from the database and the aggregation is performed. The biggest advantage of online aggregation is that it always uses the most recent business logic, so after a change it's automatically reflected and no migration or updates are needed.
    • Inline Aggregation (Snapshot) - a snapshot of the entity is read from the database instead, so it's not necessary to fetch all events. Marten stores the snapshot as a document. This is good for performance, because only one record is materialised. The downside is that after the business logic changes, the records need to be reaggregated.
    • Reaggregation - one of the biggest advantages of event sourcing is flexibility with respect to business logic updates: no complex migration is needed. Online aggregation never needs reaggregation - it happens automatically. Inline aggregations (snapshots) do need it. Reaggregation can be done by performing online aggregation on all stream events and storing the result as a new snapshot.
      • reaggregation of inline snapshot with Marten
        var onlineAggregation = documentSession.Events.AggregateStream<TEntity>(streamId);
        documentSession.Store<TEntity>(onlineAggregation);
        documentSession.SaveChanges();
  • Event transformations
  • Events projection
  • Multitenancy per schema

10. CQRS (Command Query Responsibility Segregation)

11. NuGet packages to help you get started.

I gathered and generalised all of the practices used in this tutorial and its samples in the NuGet packages of the GoldenEye Framework, which I maintain. See more in:

  • GoldenEye DDD package - provides a set of base and bootstrap classes that help you reduce boilerplate code and focus on writing business logic. You'll find classes like command/query/event handlers and many more. To use it, run:

    dotnet add package GoldenEye

  • GoldenEye Marten package - contains helpers and abstractions for using Marten as a document/event store, e.g. repositories. To use it, run:

    dotnet add package GoldenEye.Marten

12. Other resources

12.1 Introduction

12.2 Event Sourcing on production

12.3 Projections

12.4 Snapshots

12.5 Versioning

12.6 Storage

12.7 Design & Modeling

12.8 GDPR

12.9 Conflict Detection

12.10 Functional programming

12.12 Testing

12.13 CQRS

12.14 Tools

12.15 Event processing

12.16 Distributed processes

12.17 Domain Driven Design

12.18 Whitepapers

12.19 Event Sourcing Concerns

12.20 This is NOT Event Sourcing (but Event Streaming)

12.21 Architecture Weekly

If you're interested in Architecture resources, check my other repository: https://github.com/oskardudycz/ArchitectureWeekly/.

It contains a weekly updated list of materials I found valuable and educational.


EventSourcing.NetCore is Copyright © 2017-2022 Oskar Dudycz and other contributors under the MIT license.


Contributors

ardalis, axelbrinck, bartelink, bradknowles, brendonparker, cjbanna, dennisdoomen, eifinger, grzegorzorwat, havret, jbarczyk, jchannon, laeckerv, laurentkempe, lukasz-pyrzyk, monkeywithacupcake, mrnustik, nicojuicy, oskardudycz, rudelafuente, skovsende, slang25, stavris8894, stemadocafe, tstuttard, yordis


eventsourcing.netcore's Issues

OpenTelemetry and Hosted Services

When I was using the ECommerce sample as a training ground for adding OpenTelemetry to microservices, I found that some of the activities created in Hosted Services weren't exported. As it turned out, those activities were created before OpenTelemetry registered its exporters. I described it in more detail on StackOverflow.

In the case of the ECommerce sample, the problem is easiest to notice when the microservice acting as the Kafka consumer is turned off while a new message is produced. Then, when it starts, it consumes the message and tries to create an activity for it, but it can't, as OpenTelemetry hasn't registered its exporters yet.

I can create PR, but first, I would like to hear from you if you want it and which solution I described best fits your code.

Core.Marten.Repository.MartenRepository.Delete does not delete.

Hi there,

the implementation calls the private Store function, which is implemented as:

        {
            var events = aggregate.DequeueUncommittedEvents();
            documentSession.Events.Append(
                aggregate.Id,
                events
            );
            await documentSession.SaveChangesAsync(cancellationToken);
            await eventBus.Publish(events);
        }

Sample directories could be cleaned

AsyncProjections: not compiling (on Mac M1):

  • When trying to compile the solution:
    wrong path reference to EventSourcing.NetCore/Sample/Tickets/Core/Core.csproj

  • When trying to run the SmartHome.api project:
    dotnet run
    Unhandled exception. System.AggregateException: Some services are not able to be constructed (Error while validating the service descriptor 'ServiceType: Swashbuckle.AspNetCore.Swagger.ISwaggerProvider Lifetime: Transient ImplementationType: Swashbuckle.AspNetCore.SwaggerGen.SwaggerGenerator': No constructor for type 'Swashbuckle.AspNetCore.SwaggerGen.SwaggerGenerator' can be instantiated using services from the service container and default values.) (Error while validating the service descriptor 'ServiceType: Swashbuckle.AspNetCore.Swagger.IAsyncSwaggerProvider Lifetime: Transient ImplementationType: Swashbuckle.AspNetCore.SwaggerGen.SwaggerGenerator': No constructor for type 'Swashbuckle.AspNetCore.SwaggerGen.SwaggerGenerator' can be instantiated using services from the service container and default values.) (Error while validating the service descriptor 'ServiceType: Microsoft.Extensions.ApiDescriptions.IDocumentProvider Lifetime: Singleton ImplementationType: Microsoft.Extensions.ApiDescriptions.DocumentProvider': No constructor for type 'Swashbuckle.AspNetCore.SwaggerGen.SwaggerGenerator' can be instantiated using services from the service container and default values.)

CustomerIncidentsSummaryGrouper Issue in Helpdesk sample

There is an issue with the CustomerIncidentsSummaryGrouper: IAggregateGrouper<Guid> class.

The code:

var filteredEvents = events
    .Where(ev => eventTypes.Contains(ev.GetType()))
    .ToList();

returns an empty list.

events:
{Marten.Events.Event<Helpdesk.Api.Incidents.IncidentResolved>}

eventTypes:
[0]: {Marten.Events.IEvent`1[Helpdesk.Api.Incidents.IncidentResolved]}
[1]: {Marten.Events.IEvent`1[Helpdesk.Api.Incidents.ResolutionAcknowledgedByCustomer]}
[2]: {Marten.Events.IEvent`1[Helpdesk.Api.Incidents.IncidentClosed]}
It seems that Contains returns false.
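The mismatch can be reproduced with minimal stand-in types. The `IEvent<T>`/`Event<T>` declarations below are hypothetical simplifications of Marten's types, not the real ones: the list holds `typeof(IEvent<...>)` entries while `ev.GetType()` returns the concrete wrapper type, so an exact-type `Contains` never matches, whereas an `IsAssignableFrom` check does:

```csharp
using System;
using System.Linq;

// Hypothetical stand-ins for Marten's event wrapper types
public interface IEvent<T> { }
public class Event<T> : IEvent<T> { }
public class IncidentResolved { }

public static class Program
{
    public static void Main()
    {
        object ev = new Event<IncidentResolved>();
        var eventTypes = new[] { typeof(IEvent<IncidentResolved>) };

        // Exact type comparison: Event<T> is not the same Type as IEvent<T>
        Console.WriteLine(eventTypes.Contains(ev.GetType()));                     // False

        // Interface-aware comparison matches the implemented interface
        Console.WriteLine(eventTypes.Any(t => t.IsAssignableFrom(ev.GetType()))); // True
    }
}
```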

AccountSummaryViewProjection - Guid - Why ?

Why did you hard-code the GUID inside AccountSummaryViewProjection?

readonly Guid viewid = new Guid("a8c1a4ac-686d-4fb7-a64a-710bc630f471");

Inside ctor it's:

ProjectEvent<NewAccountCreated>((ev)=> viewid, Persist);

Why not this instead:

ProjectEvent<NewAccountCreated>((ev)=> ev.AccountId, Persist);


Tests failing on PRs from forked repositories

I believe the tests are failing on the PRs based on this comment from the test-reporter README.md:

Following setup does not work in workflows triggered by pull request from forked repository.

The author suggests splitting the workflow into two due to how PRs from forked repositories are handled.
Recommended setup for public repositories

Workflows triggered by pull requests from forked repositories are executed with read-only token and therefore can't create check runs. To workaround this security restriction, it's required to use two separate workflows.
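That split could look roughly like the following sketch, based on the pattern described in the test-reporter README (file names, action versions, and the test command are illustrative, not this repo's actual workflows): the first workflow runs under the fork's read-only token and only uploads results; the second is triggered by `workflow_run` in the base repository, where it has a write token and can create the check run.

```yaml
# .github/workflows/test.yml — runs with the fork's read-only token, only uploads results
name: test
on: pull_request
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: dotnet test --logger "trx;LogFileName=results.trx"
      - uses: actions/upload-artifact@v4
        if: success() || failure()
        with:
          name: test-results
          path: '**/results.trx'
---
# .github/workflows/report.yml — runs in the base repo with a write token
name: report
on:
  workflow_run:
    workflows: [test]
    types: [completed]
permissions:
  checks: write
jobs:
  report:
    runs-on: ubuntu-latest
    steps:
      - uses: dorny/test-reporter@v1
        with:
          artifact: test-results
          name: Tests
          path: '**/*.trx'
          reporter: dotnet-trx
```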

If Match Header value

Hi,
I was looking at the HelpDesk sample in the repo and found that the API accepts the If-Match request header. I'm not sure what value needs to be passed in this header, or what its significance is as far as versioning is concerned.

I appreciate any help you can provide.
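For context, If-Match in such APIs typically carries the expected stream version for optimistic concurrency: the client sends back the version (ETag) it last read, and the server rejects the append if the stream has moved on in the meantime (this matches the WrongExpectedVersionException behaviour reported in the EventStoreDBExpectedStreamRevisionProvider issue elsewhere in this list). A hypothetical exchange, with illustrative paths and values:

```
GET /api/incidents/some-id
→ 200 OK, ETag: "1"

POST /api/incidents/some-id/acknowledge
If-Match: "1"
→ succeeds if the stream is still at version 1; fails with a precondition/version error otherwise
```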

CreateCommand_ShouldCreate_Package test fails due to packageDetails.SentAt assertion

When running Shipments.Api.Tests.Packages.SendPackageTests.CreateCommand_ShouldCreate_Package the test fails with this message.

Expected packageDetails.SentAt to be after <2021-07-31 05:28:33.2309098>, but found <2021-07-31 01:28:34.575791>.

I believe it is due to SentAt being set to DateTime.Now instead of DateTime.UtcNow.

var package = new Package
{
    Id = Guid.NewGuid(),
    OrderId = request.OrderId,
    ProductItems = request.ProductItems.Select(pi =>
        new ProductItem {Id = Guid.NewGuid(), ProductId = pi.ProductId, Quantity = pi.Quantity}).ToList(),
    SentAt = DateTime.Now
};
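The gap between the two timestamps is the local UTC offset. A minimal, self-contained illustration (not code from the sample) shows the difference via the Kind flag of each value:

```csharp
using System;

public static class Program
{
    public static void Main()
    {
        var local = DateTime.Now;    // wall-clock time in the local timezone
        var utc   = DateTime.UtcNow; // coordinated universal time

        Console.WriteLine(local.Kind); // Local
        Console.WriteLine(utc.Kind);   // Utc

        // In any non-UTC timezone the raw values differ by the local offset,
        // which makes "SentAt should be after the test's UTC start time" flaky.
    }
}
```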

Error: BuildYourOwnEventStore/ docker-compose up Error Message

I am attempting to go through the self-paced workshop in Workshops/BuildYourOwnEventStore/. I have gotten to step 12 and receive the following error when running docker-compose up from C:\repos\EventSourcing.NetCore\Workshops\BuildYourOwnEventStore\docker:

-Network docker_postgres Error 15.0s
failed to create network docker_postgres: Error response from daemon: plugin "postgres" not found.

I'm fairly new to this and am not sure if it's something I did on my end or not. So far I have been following all the instructions for setting up my environment by downloading and installing everything. Looking forward to resolving this issue to get started.

Problem ECommerce with Marten AddProduct

Hello,

I am testing the ECommerce example with Marten, specifically the actions of adding carts and products. When I create the cart and add the first product, everything works correctly. The problem appears when adding a second product, as the following error is displayed.

(error screenshots omitted)

It seems to be a problem when trying to aggregate the event stream.

(screenshot omitted)

I hope you can help me, thank you very much

Some bugs to fix and a question

Thank you for the update following my previous request.
I'm trying to run some of the demos, and there are a few things to correct in some of the files.

  1. There is a problem in all projects in the current commit: the Marten package is referenced with two versions, one of which (the 5.0 one) is unavailable.
    <PackageReference Include="Marten" Version="4.0.0-alpha.4" />
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="16.8.3" />
    <PackageReference Include="Marten" Version="5.0.0" />

  2. There is a problem with ElasticSearch when running this docker file.
    https://github.com/oskardudycz/EventSourcing.NetCore/blob/main/Workshops/BuildYourOwnEventStore/02-EventSourcingAdvanced/docker/docker-compose.yml

I get the following errors,
{"type":"log","@timestamp":"2021-01-03T18:29:14Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
kibana_1 | {"type":"log","@timestamp":"2021-01-03T18:29:18Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}

I managed to solve them by using the elasticsearch and kibana docker specifications from this url
https://medium.com/@TimvanBaarsen/how-to-run-an-elasticsearch-7-x-single-node-cluster-for-local-development-using-docker-compose-2b7ab73d8b82

  3. Related to the same docker file, I get this error when the MeetingsManagement.Api and MeetingsSearch.Api projects try to connect to Kafka.

%3|1609698927.479|FAIL|rdkafka#consumer-1| [thrd:GroupCoordinator]: GroupCoordinator: Failed to resolve 'kafka:9092': No such host is known. (after 2287ms in state CONNECT)

I managed to solve it by changing this line in the docker file (localhost instead of kafka):
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092

I used this post to solve the error
https://stackoverflow.com/questions/52438822/docker-kafka-w-python-consumer/52440056

  4. Finally, I have a question: I'm not sure if the messages are flowing to Elasticsearch in the MeetingsSearch.Api project. The MeetingsSearch API requires a filter to query in Swagger, and I don't know which format to use for this filter.
    When I connect to Kibana, I don't find any data stored in Elasticsearch, which should be there if the view creation was working.

Would it be possible for you to check whether the project is working and give an example query after a meeting and its schedule are created? I think it's a great example for understanding your ideas, and I would be very happy to get it working.

Thank you for your work

Event Pipelines performance optimisations

Per @kbroniek suggestion

OK. It would be nice to add a benchmark, but I know it will consume time that we don't have these days.
I thought you could implement this as .NET events or take advantage of source code generation. It could be fast then.
What if we add multiple handlers in a multi-threaded app? Consider using ConcurrentDictionary.

Do benchmarks and optimise the code. Consider making the pipelines more generic so they can also be applied to command and query handling.
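A sketch of what thread-safe handler registration with ConcurrentDictionary could look like (hypothetical types, not the repo's actual pipeline code); immutable handler lists keep Publish lock-free while registrations race safely:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Immutable;

// Hypothetical sketch: thread-safe handler registry keyed by event type.
public class EventHandlerRegistry
{
    private readonly ConcurrentDictionary<Type, ImmutableList<Action<object>>> handlers = new();

    public void Register<TEvent>(Action<TEvent> handler) =>
        handlers.AddOrUpdate(
            typeof(TEvent),
            _ => ImmutableList.Create<Action<object>>(e => handler((TEvent)e)),
            (_, list) => list.Add(e => handler((TEvent)e))); // Add returns a new list

    public void Publish<TEvent>(TEvent @event)
    {
        // Reads see a consistent immutable snapshot; no locking needed
        if (handlers.TryGetValue(typeof(TEvent), out var list))
            foreach (var handle in list)
                handle(@event!);
    }
}

public static class Program
{
    public static void Main()
    {
        var registry = new EventHandlerRegistry();
        var count = 0;
        registry.Register<string>(_ => count++);
        registry.Register<string>(_ => count++);
        registry.Publish("order-confirmed");
        Console.WriteLine(count); // 2
    }
}
```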

When to use the EventSource?

Hello Oskar,
Is there a reason why you used EventSource as the base class for the Account domain model and not for the Client?

Thanks

cp-schema-registry container cannot run?

I ran the containers with the command docker-compose -f docker-compose.yml up -d. However, the cp-schema-registry container is not running.

docker logs cp-schema-registry

EndOfStreamException: Unable to read additional data from server sessionid 0x100006c060d0001, likely server has closed socket
        at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:77)
        at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1290)
[main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x100006c060d0001 closed
[main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x100006c060d0001
[main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@3d921e20
[main] INFO org.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer value is 1048575 Bytes
[main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.request.timeout value is 0. feature enabled=false
[main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/172.28.0.2:2181.
[main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - SASL config status: Will not attempt to authenticate using SASL (unknown error)
[main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /172.28.0.4:60808, server: zookeeper/172.28.0.2:2181
[main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server zookeeper/172.28.0.2:2181, session id = 0x100006c060d0002, negotiated timeout = 40000
[main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x100006c060d0002 closed
[main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x100006c060d0002
[main] INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values:
        bootstrap.servers = [kafka:29092]
        client.dns.lookup = use_all_dns_ips
        client.id =
        connections.max.idle.ms = 300000
        default.api.timeout.ms = 60000
        metadata.max.age.ms = 300000
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        receive.buffer.bytes = 65536
        reconnect.backoff.max.ms = 1000
        reconnect.backoff.ms = 50
        request.timeout.ms = 30000
        retries = 2147483647
        retry.backoff.ms = 100
        sasl.client.callback.handler.class = null
        sasl.jaas.config = null
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.login.callback.handler.class = null
        sasl.login.class = null
        sasl.login.refresh.buffer.seconds = 300
        sasl.login.refresh.min.period.seconds = 60
        sasl.login.refresh.window.factor = 0.8
        sasl.login.refresh.window.jitter = 0.05
        sasl.mechanism = GSSAPI
        security.protocol = PLAINTEXT
        security.providers = null
        send.buffer.bytes = 131072
        socket.connection.setup.timeout.max.ms = 30000
        socket.connection.setup.timeout.ms = 10000
        ssl.cipher.suites = null
        ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
        ssl.endpoint.identification.algorithm = https
        ssl.engine.factory.class = null
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.certificate.chain = null
        ssl.keystore.key = null
        ssl.keystore.location = null
        ssl.keystore.password = null
        ssl.keystore.type = JKS
        ssl.protocol = TLSv1.3
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.certificates = null
        ssl.truststore.location = null
        ssl.truststore.password = null
        ssl.truststore.type = JKS
org.apache.kafka.common.config.ConfigException: No supported Kafka endpoints are configured. kafkastore.bootstrap.servers must have at least one endpoint matching kafkastore.security.protocol.
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig.endpointsToBootstrapServers(SchemaRegistryConfig.java:666)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig.bootstrapBrokers(SchemaRegistryConfig.java:615)
        at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.kafkaClusterId(KafkaSchemaRegistry.java:1494)
        at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.<init>(KafkaSchemaRegistry.java:166)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.initSchemaRegistry(SchemaRegistryRestApplication.java:71)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.configureBaseApplication(SchemaRegistryRestApplication.java:90)
        at io.confluent.rest.Application.configureHandler(Application.java:271)
        at io.confluent.rest.ApplicationServer.doStart(ApplicationServer.java:245)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:44)

About DbContext/DB Support in sample Project

hi there,

Is there any DB call support or an EF DbContext in the sample project? It would be great if you could provide or point me to guidance on how to make DB calls from the sample application (NOT the event store).

Thanks & Regards,
Sunny Dave

Question: Core.Marten.Repository.MartenRepository.Store & Ambient Transactions

Hi there,

the Core.Marten.Repository.MartenRepository.Store is implemented as follows:

        private async Task Store(T aggregate, CancellationToken cancellationToken)
        {
            var events = aggregate.DequeueUncommittedEvents();
            documentSession.Events.Append(
                aggregate.Id,
                events
            );
            await documentSession.SaveChangesAsync(cancellationToken);
            await eventBus.Publish(events);
        }

The uncommitted events are saved and published.
If a subscriber in turn also needs to save its modified aggregate and that operation fails for some reason, we end up with inconsistencies.

Is there an option to check whether an ambient transaction is active and, if not, to explicitly start one before saving and commit it after publishing the events?
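Detecting an ambient transaction is possible via System.Transactions. A minimal illustration of the detection itself (whether Marten's documentSession and the event bus would actually enlist in such a transaction is a separate question):

```csharp
using System;
using System.Transactions;

public static class Program
{
    public static void Main()
    {
        // No ambient transaction outside a TransactionScope
        Console.WriteLine(Transaction.Current is null);     // True

        using (var scope = new TransactionScope())
        {
            // Inside the scope an ambient transaction is active,
            // so code could skip starting its own
            Console.WriteLine(Transaction.Current is not null); // True
            scope.Complete();
        }
    }
}
```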

I think that is somehow related to your open task Add sample outbox pattern implementation

Using multiple constructors in the aggregate

Hi Oskar,
I have a question about this private constructor in the Order.

    private Order(Guid id, Guid clientId, IReadOnlyList<PricedProductItem> productItems, decimal totalPrice)
    {
        var @event = OrderInitialized.Create(
            id,
            clientId,
            productItems,
            totalPrice,
            DateTime.UtcNow
        );

        Enqueue(@event);
        Apply(@event);
    }

Why do you need this constructor? We could just move this functionality into the static Initialize method, as in the code below, and remove the parameterized constructor:

    public static Order Initialize(
        Guid orderId,
        Guid clientId,
        IReadOnlyList<PricedProductItem> productItems,
        decimal totalPrice)
    {
        var order = new Order();

        var @event = OrderInitialized.Create(
            orderId,
            clientId,
            productItems,
            totalPrice,
            DateTime.UtcNow
        );

        order.Enqueue(@event);
        order.Apply(@event);

        return order;
    }

It is less code :D

Error resolving ShipmentsDbContext when running PracticalEventSourcing.sln

When running PracticalEventSourcing.sln and using Shipments.Api as the startup project, I get an exception (see below) as the database migrations attempt to run.

The code throwing the exception.

public static void ConfigureShipmentsModule(this IServiceProvider serviceProvider)
{
    var environment = Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT") ?? "Development";
    if (environment == "Development")
    {
        serviceProvider.GetRequiredService<ShipmentsDbContext>().Database.Migrate();
    }
}

The exception details.

System.InvalidOperationException
  HResult=0x80131509
  Message=Cannot resolve scoped service 'Shipments.Storage.ShipmentsDbContext' from root provider.
  Source=Microsoft.Extensions.DependencyInjection
  StackTrace:
   at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteValidator.ValidateResolution(Type serviceType, IServiceScope scope, IServiceScope rootScope)
   at Microsoft.Extensions.DependencyInjection.ServiceProvider.Microsoft.Extensions.DependencyInjection.ServiceLookup.IServiceProviderEngineCallback.OnResolve(Type serviceType, IServiceScope scope)
   at Microsoft.Extensions.DependencyInjection.ServiceLookup.ServiceProviderEngine.GetService(Type serviceType, ServiceProviderEngineScope serviceProviderEngineScope)
   at Microsoft.Extensions.DependencyInjection.ServiceLookup.ServiceProviderEngineScope.GetService(Type serviceType)
   at Microsoft.Extensions.DependencyInjection.ServiceProviderServiceExtensions.GetRequiredService(IServiceProvider provider, Type serviceType)
   at Microsoft.Extensions.DependencyInjection.ServiceProviderServiceExtensions.GetRequiredService[T](IServiceProvider provider)
   at Shipments.Config.ConfigureShipmentsModule(IServiceProvider serviceProvider) in ...\EventSourcing.NetCore\Sample\ECommerce\Shipments\Shipments\Config.cs:line 33
   at Shipments.Api.Startup.Configure(IApplicationBuilder app, IWebHostEnvironment env) in ...\EventSourcing.NetCore\Sample\ECommerce\Shipments\Shipments.Api\Startup.cs:line 70
   at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor, Boolean wrapExceptions)
   at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
   at Microsoft.AspNetCore.Hosting.ConfigureBuilder.Invoke(Object instance, IApplicationBuilder builder)
   at Microsoft.AspNetCore.Hosting.ConfigureBuilder.<>c__DisplayClass4_0.<Build>b__0(IApplicationBuilder builder)
   at Microsoft.AspNetCore.Hosting.GenericWebHostBuilder.<>c__DisplayClass15_0.<UseStartup>b__1(IApplicationBuilder app)
   at Microsoft.AspNetCore.Mvc.Filters.MiddlewareFilterBuilderStartupFilter.<>c__DisplayClass0_0.<Configure>g__MiddlewareFilterBuilder|0(IApplicationBuilder builder)
   at Microsoft.AspNetCore.HostFilteringStartupFilter.<>c__DisplayClass0_0.<Configure>b__0(IApplicationBuilder app)
   at Microsoft.AspNetCore.Hosting.GenericWebHostService.<StartAsync>d__31.MoveNext()

docker-compose up doesn't work in the main folder

Building the sample project in Docker doesn't work using the latest version that uses .NET 5. There are some version issues in the Dockerfile.

It builds using the commit from the 31st of October.

I get this error though:
event_sourcing_sample | Application startup exception: System.InvalidOperationException: No service for type 'Microsoft.AspNetCore.Hosting.Server.IServer' has been registered.

In all cases, thank you for your great work

ECommerce Sample APIs all using port 5000

Based on the settings of the launchUrl in the linked launchSettings.json profiles, it appears the intent is that each API should be on its own port. However, each one starts on port 5000.

PS C:\Code\EventSourcing.NetCore\Sample\ECommerce> dotnet run --project .\Carts\Carts.Api\Carts.Api.csproj
info: Microsoft.Hosting.Lifetime[0]
====>     Now listening on: http://localhost:5000  <====
info: Microsoft.Hosting.Lifetime[0]
====>     Now listening on: https://localhost:5001 <====
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Development
info: Microsoft.Hosting.Lifetime[0]
      Content root path: C:\Code\EventSourcing.NetCore\Sample\ECommerce\Carts\Carts.Api

PS C:\Code\EventSourcing.NetCore\Sample\ECommerce> dotnet run --project .\Orders\Orders.Api\Orders.Api.csproj
info: Core.Streaming.Consumers.ExternalEventConsumerBackgroundWorker[0]
      External Event Consumer started
info: Core.Streaming.Kafka.Consumers.KafkaConsumer[0]
      Kafka consumer started
info: Microsoft.Hosting.Lifetime[0]
====>      Now listening on: http://localhost:5000 <====
info: Microsoft.Hosting.Lifetime[0]
====>      Now listening on: https://localhost:5001 <====
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Development
info: Microsoft.Hosting.Lifetime[0]
      Content root path: C:\Code\EventSourcing.NetCore\Sample\ECommerce\Orders\Orders.Api

Adding applicationUrl to each launchSettings.json profile should correct the issue. The added benefit is that the Swagger UI will automatically open on run from Visual Studio. Correcting this would also allow multiple APIs to run simultaneously.

"profiles": {
"CartsApi": {
"commandName": "Project",
"launchBrowser": true,
"launchUrl": "http://localhost:5500",
"environmentVariables": {
"ASPNETCORE_ENVIRONMENT": "Development"
}
}
}

"profiles": {
"OrdersApi": {
"commandName": "Project",
"launchBrowser": true,
"launchUrl": "http://localhost:5501",
"environmentVariables": {
"ASPNETCORE_ENVIRONMENT": "Development"
}
}
}

"profiles": {
"PaymentsApi": {
"commandName": "Project",
"launchBrowser": true,
"launchUrl": "http://localhost:5502",
"environmentVariables": {
"ASPNETCORE_ENVIRONMENT": "Development"
}
}
}

"profiles": {
"ShipmentsApi": {
"commandName": "Project",
"launchBrowser": true,
"launchUrl": "http://localhost:5503",
"environmentVariables": {
"ASPNETCORE_ENVIRONMENT": "Development"
}
}
}

Confusing typo in readme.md

I think there is a typo in section 1.7 of the readme.md.

I think the object type here:

var reservation = new ReservationId (
    reservationId,
    seatId,
    customerId
);

Should be Reservation rather than ReservationId i.e:

var reservation = new Reservation (
    reservationId,
    seatId,
    customerId
);

I think it is correct later on in the section i.e.

public class Reservation : Aggregate<ReservationId, Guid>
{
    ...
}

Apologies if I have misunderstood what's happening here.

Thanks.

Should use Singleton for `EventStoreDBExpectedStreamRevisionProvider`

EventStoreDBExpectedStreamRevisionProvider doesn't always return what's set in If-Match because it's injected as a scoped instance: when it's set to a new version, another instance (in a different scope) will still read the older version when trying to append an event.

How to reproduce:

Run the Simple EventStoreDB example: initialize a shopping cart, confirm it (version 0), then confirm it again (with the validation turned off, version 1). You'll see a WrongExpectedVersionException with expected version 0 but actual version 1.

ECommerce Sample Payments API has conflicting delete routes

When attempting to execute the DiscardPayment action, this error message is returned:

{
    "StatusCode": 500,
    "Error": "The request matched multiple endpoints. Matches: \r\n\r\nPayments.Api.Controllers.PaymentsController.DiscardPayment (Payments.Api)\r\nPayments.Api.Controllers.PaymentsController.TimeoutPayment (Payments.Api)"
}

Both actions are decorated with the same [HttpDelete("{id}")] attribute and ASP.NET doesn't know which to choose.

[HttpDelete("{id}")]
public async Task<IActionResult> DiscardPayment(Guid id, [FromBody] DiscardPaymentRequest request)
{
    Guard.Against.Null(request, nameof(request));

    var command = Payments.DiscardingPayment.DiscardPayment.Create(
        id,
        request.DiscardReason
    );
    await commandBus.Send(command);

    return Ok();
}

[HttpDelete("{id}")]
public async Task<IActionResult> TimeoutPayment(Guid id, [FromBody] TimeOutPaymentRequest request)
{
    Guard.Against.Null(request, nameof(request));

    var command = TimeOutPayment.Create(
        id,
        request.TimedOutAt
    );
    await commandBus.Send(command);

    return Ok();
}
