
Dapr.EventStore

Demo implementation of a basic EventStore on top of DaprClient.

DaprClient

Dapr state is a key/value store; in the non-actor scenario keys follow the scheme "app-id||key".

A state store may have transactional support and may be optimized for bulk operations. To accommodate these differences, the EventStore has a few different modes, called SliceMode. The default is Off. Slices are used when bulk write/read is not optimal, or to minimize the number of transactions when each operation would otherwise run in its own.
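
As a sketch of the key scheme in the non-actor scenario, keys can be expressed as plain string composition (the helper names are illustrative, not part of Dapr.EventStore's public API):

```csharp
// Illustrative helpers for the key scheme; Dapr itself prefixes stored
// keys with the app id ("app-id||key"), the EventStore adds the
// stream-relative part.
public static class KeyScheme
{
    // The key as it ends up in the underlying store.
    public static string Stored(string appId, string key) => $"{appId}||{key}";

    // Stream head entry: "streamName|head".
    public static string Head(string streamName) => $"{streamName}|head";

    // Event/slice entry: "streamName|version".
    public static string Slice(string streamName, long version) => $"{streamName}|{version}";
}
```

With app id es-test and stream teststream-f83d3, this reproduces the ids seen in the stored documents below, e.g. es-test||teststream-f83d3|head.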

Append (slice mode - Off)

To build an "append only" stream on top of that, this ES saves an entry for the stream head, with streamName as key, and each event as its own entry with "streamName-Version" as key. To come close to append-only semantics, etags are used to check that the events do not already exist. The stream head is updated when new events are written; the head and the events are in the same transaction.

Stream head

{
    "id": "es-test||teststream-f83d3|head",
    "value": {
        "Version": 2
    },
    "partitionKey": "teststream-f83d3",
    ...
}

Event

{
    "id": "es-test||teststream-f83d3|2",
    "value": {
        "Data": "hello 2",
        "EventId": "2bde5f48-a5a7-4b68-b8bc-5d769ef95917",
        "EventName": "test",
        "StreamName": "teststream-f83d3",
        "Version": 2
    },
    "partitionKey": "teststream-f83d3",
    ...
}
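
A minimal sketch of an Off-mode append against DaprClient, assuming one upsert per event plus the head update in a single state transaction (ExecuteStateTransactionAsync and StateTransactionRequest are from the Dapr SDK; everything else, including the etag handling, is illustrative):

```csharp
using System.Collections.Generic;
using System.Text.Json;
using System.Threading.Tasks;
using Dapr.Client;

// Sketch of an Off-mode append: one upsert per event plus the head update,
// all in a single state transaction. AppendSketch/EventData are
// illustrative names, not Dapr.EventStore's actual API.
public record EventData(string EventId, string EventName, string StreamName, string Data, long Version);

public static class AppendSketch
{
    // Pure part: build the transaction operations for a set of events.
    public static List<StateTransactionRequest> BuildOperations(
        string streamName, long newVersion, IEnumerable<EventData> events, string headEtag)
    {
        var ops = new List<StateTransactionRequest>();
        foreach (var e in events)
            ops.Add(new StateTransactionRequest(
                $"{streamName}|{e.Version}",
                JsonSerializer.SerializeToUtf8Bytes(e),
                StateOperationType.Upsert)); // per-event etag checks elided in this sketch
        ops.Add(new StateTransactionRequest(
            $"{streamName}|head",
            JsonSerializer.SerializeToUtf8Bytes(new { Version = newVersion }),
            StateOperationType.Upsert,
            etag: headEtag)); // guard against concurrent appends
        return ops;
    }

    public static Task AppendAsync(DaprClient client, string store,
        string streamName, long newVersion, IEnumerable<EventData> events, string headEtag)
        => client.ExecuteStateTransactionAsync(store,
            BuildOperations(streamName, newVersion, events, headEtag));
}
```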

Append (slice mode - TwoPhased)

To build an "append only" stream on top of that, this ES saves an entry for the stream head, with streamName as key, and each set/slice of events as one entry with "streamName-Version" as key. To come close to append-only semantics, etags are used to check that the slice does not already exist. The stream head is updated when a new slice is written, but the head and the slice are not in the same transaction.

Stream head

{
    "id": "es-test||teststream-36087|head",
    "value": {
        "Version": 2
    },
    "partitionKey": "teststream-36087",
    ...
}

Event

Note that value is an array of events.

{
    "id": "es-test||teststream-36087|2",
    "value": [
        {
            "Data": "hello 2",
            "EventId": "e17976c9-e151-4b28-aa53-f0640677ff6e",
            "EventName": "test",
            "StreamName": "teststream-36087",
            "Version": 2
        }
    ],
    "partitionKey": "teststream-36087",
    ...
}
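
A sketch of the two phases, assuming the slice is guarded with an etag and the head is written afterwards in a separate call (TrySaveStateAsync and SaveStateAsync are Dapr SDK methods; the names, the empty-etag guard, and the head shape are illustrative):

```csharp
using System.Threading.Tasks;
using Dapr.Client;

// Sketch of a TwoPhased append: write the slice first, then the head, in
// two separate calls - so the two writes are NOT atomic.
public static class TwoPhasedSketch
{
    public static string SliceKey(string streamName, long version) => $"{streamName}|{version}";

    public static async Task<bool> AppendSliceAsync<TSlice>(
        DaprClient client, string store, string streamName, long newVersion, TSlice slice)
    {
        // Phase 1: try to create the slice. How "first write wins" maps to
        // etags is component-specific; an empty etag is only a sketch.
        var written = await client.TrySaveStateAsync(store, SliceKey(streamName, newVersion), slice, etag: "");
        if (!written) return false; // slice already exists - concurrent append

        // Phase 2: update the head. If this call fails, head and slice drift
        // apart, which is the trade-off of this mode.
        await client.SaveStateAsync(store, $"{streamName}|head", new { Version = newVersion });
        return true;
    }
}
```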

Append (slice mode - Transactional)

To build an "append only" stream on top of that, this ES saves an entry for the stream head, with streamName as key, and each set/slice of events as one entry with "streamName-Version" as key. To come close to append-only semantics, etags are used to check that the slice does not already exist. The stream head is updated when a new slice is written; the head and the slice are in the same transaction.

Stream head

{
    "id": "es-test||teststream-77b42|head",
    "value": {
        "Version": 2
    },
    "partitionKey": "teststream-77b42",
    ...
}

Event

Note that events are stored binary serialized, even if the metadata is set to json (known issue).

{
    "id": "es-test||teststream-77b42|2",
    "value": "W3siRXZlbnRJZCI6ImIyMzU2YWY1LTcwYjYtNDE3Yi04Nzc5LWRhYmIyMzc5YTRkYiIsIkV2ZW50TmFtZSI6InkiLCJTdHJlYW1OYW1lIjoidGVzdHN0cmVhbS03N2I0MiIsIkRhdGEiOiJoZWxsbyAyIiwiVmVyc2lvbiI6Mn1d",
    "partitionKey": "teststream-77b42",
    ...
}
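
Since the slice is stored as base64 of the JSON-serialized event array, it can be decoded back with plain System.Text.Json (the StoredEvent record and SliceDecoder name are illustrative; the record just mirrors the event fields shown earlier):

```csharp
using System;
using System.Text.Json;

// Decode a transactionally stored slice: the value is base64 of the
// JSON-serialized event array.
public record StoredEvent(string EventId, string EventName, string StreamName, string Data, long Version);

public static class SliceDecoder
{
    public static StoredEvent[] Decode(string base64Value) =>
        JsonSerializer.Deserialize<StoredEvent[]>(Convert.FromBase64String(base64Value))!;
}
```

Decoding the value shown above yields the same single "hello 2" event as the other modes store as plain JSON.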

Append (slice mode - OffAndSharedAll)

Stores all events in a collection with the same partitionKey. To build an "append only" stream on top of that, this ES saves an entry for the stream head, with streamName as key, and each event as one entry with the stream name and eventId as key.

Stream head

{
    "id": "es-test||teststream-22f9f|head",
    "value": {
        "Version": 2
    },
    "partitionKey": "all",
    ...
}

Event

Note that the id is a composite with the eventId, and the partitionKey is set to "all".

{
    "id": "es-test||teststream-22f9f|2d49634a-5ed6-4217-83fc-526d05d51dfd",
    "value": {
        "Data": "hello 2",
        "EventId": "2d49634a-5ed6-4217-83fc-526d05d51dfd",
        "EventName": "y",
        "StreamName": "teststream-22f9f",
        "Version": 2
    },
    "partitionKey": "all",
    ...
}
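
The key composition for this mode can be sketched as (illustrative names):

```csharp
using System;

// Sketch of the OffAndSharedAll scheme: all entries share one partition
// ("all"), and each event is keyed by stream name plus eventId.
public static class SharedAllKeys
{
    public const string PartitionKey = "all";

    public static string EventKey(string streamName, Guid eventId) => $"{streamName}|{eventId}";
}
```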

Load/Read (slice mode - Off)

To be able to read all events in a stream, the current version is first read from the stream head; then all events are read using bulk read until the given version is reached. So there are multiple reads: one for the head and one for the events (optimized based on the underlying store's bulk read implementation).
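
A sketch of that read path, assuming versions start at 1 and the head value deserializes as shown above (GetStateAsync and GetBulkStateAsync are Dapr SDK methods; the rest is illustrative):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Dapr.Client;

// Sketch of an Off-mode read: fetch the head to learn the current version,
// then bulk-read every event key up to that version.
public static class LoadSketch
{
    // Pure part: the event keys that make up a stream at a given version.
    public static IReadOnlyList<string> EventKeys(string streamName, long version) =>
        Enumerable.Range(1, (int)version).Select(v => $"{streamName}|{v}").ToList();

    public static async Task<IReadOnlyList<BulkStateItem>> LoadAsync(
        DaprClient client, string store, string streamName)
    {
        var head = await client.GetStateAsync<Dictionary<string, long>>(store, $"{streamName}|head");
        if (head is null) return new List<BulkStateItem>();
        // One bulk call for all events; the store decides how to optimize it.
        return await client.GetBulkStateAsync(store, EventKeys(streamName, head["Version"]), parallelism: null);
    }
}
```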

Load/Read (slice)

To be able to read all events in a stream, the current version is first read from the stream head; then, starting with the latest slice, each slice is read until the given version is reached. So there are multiple reads: one for the head and one for each slice.
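
A sketch of the slice read loop, assuming each slice is keyed by the highest version it contains (GetStateAsync is the Dapr SDK method; the names and the stepping logic are illustrative):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Dapr.Client;

// Sketch of a slice-mode read: fetch the head, then walk slices newest
// first until the requested version is covered.
public static class SliceLoadSketch
{
    // Pure part: the version to continue from after consuming a slice,
    // assuming the slice key is the highest version the slice contains.
    public static long NextVersion(long current, int sliceLength) => current - sliceLength;

    public static async Task<List<TEvent>> LoadAsync<TEvent>(
        DaprClient client, string store, string streamName, long untilVersion = 1)
    {
        var events = new List<TEvent>();
        var head = await client.GetStateAsync<Dictionary<string, long>>(store, $"{streamName}|head");
        if (head is null) return events;

        var version = head["Version"];
        while (version >= untilVersion)
        {
            // One read per slice, latest slice first.
            var slice = await client.GetStateAsync<TEvent[]>(store, $"{streamName}|{version}");
            if (slice is null || slice.Length == 0) break; // missing slice
            events.InsertRange(0, slice);
            version = NextVersion(version, slice.Length);
        }
        return events;
    }
}
```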

Load/Read (slice mode - OffAndSharedAll)

This mode uses the Query API to get the stream events. Sorting is done in memory (known issue).
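
A sketch of that read path, using the Dapr state query API (an alpha API; QueryStateAsync is the SDK method, while the record, the query JSON shape, and the names are illustrative assumptions):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Dapr.Client;

// Sketch of the Query-API read path: filter events by stream name, then
// sort by version in memory, since component-side sorting can't be relied
// on here.
public static class QueryLoadSketch
{
    public record Ev(string EventId, string EventName, string StreamName, string Data, long Version);

    // Pure part: the in-memory ordering.
    public static Ev[] SortByVersion(IEnumerable<Ev> events) =>
        events.OrderBy(e => e.Version).ToArray();

    public static async Task<Ev[]> LoadAsync(DaprClient client, string store, string streamName)
    {
        // Query shape per the Dapr state query API (alpha); adjust per store.
        var query = "{\"filter\": {\"EQ\": {\"StreamName\": \"" + streamName + "\"}}}";
        var response = await client.QueryStateAsync<Ev>(store, query);
        return SortByVersion(response.Results.Select(r => r.Data));
    }
}
```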

Partitioning

Partitioning follows a fixed key scheme, which can affect the underlying store's partitioning. Some stores let you control the partition through metadata. The event store lets you pass custom metadata per stream.
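
For example, the Azure Cosmos DB state component honours a per-request "partitionKey" metadata entry; a sketch of passing stream-based metadata (the helper names are illustrative, and other components use other metadata keys):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Dapr.Client;

// Sketch of steering the store's partitioning through per-request metadata.
// "partitionKey" is the key the Azure Cosmos DB component honours; treat
// this as component-specific.
public static class PartitionSketch
{
    public static IReadOnlyDictionary<string, string> MetadataFor(string streamName) =>
        new Dictionary<string, string> { ["partitionKey"] = streamName };

    public static Task SaveAsync<T>(DaprClient client, string store, string key, T value, string streamName) =>
        client.SaveStateAsync(store, key, value, metadata: MetadataFor(streamName));
}
```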

Actors

TBD.

Actors follow a different key scheme, which might allow a component to support multi-item operations and an easier implementation of query by state key (*).

Test

The test suite runs against a local dapr instance.

dapr run --app-id es-test --dapr-grpc-port 50001

Using the Cosmos DB emulator, a state component can be configured.

When dapr initializes, the database and collection need to exist. The test suite will tear down and recreate the collection.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.azure.cosmosdb
  version: v1
  metadata:
  - name: url
    value: https://localhost:8081
  - name: masterKey
    value: C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==
  - name: database
    value: statestore
  - name: collection
    value: statestore

The event store test suite needs this environment variable to be set (in the setup of the test suite) to run against the local dapr instance. Without it set, the tests run against a local "fake" dapr client.

Environment.SetEnvironmentVariable("DAPR_GRPC_PORT", "50001");


Known Issues

Transactional stores events byte serialized.

StateTransactionRequest stores the value byte serialized, even if metadata is set to json.

var headReq = new StateTransactionRequest(streamHeadKey, JsonSerializer.SerializeToUtf8Bytes(head), Client.StateOperationType.Upsert, metadata: meta);

LoadEventStreamAsync - crashes

Running in dapr with Cosmos:
System.InvalidOperationException: An attempt was made to transition a task to a final state when it had already completed.

Load from head version - with missing slices

In the case where the head has a newer version (but no matching slices), due to failure #1:
ignore missing slices, and always return the version based on the returned events.

(Warn on mismatch.)
