akka-ddd's Issues

Question - Utilize Cassandra rather than EventStore

Hi,

Is it possible to utilize Cassandra rather than EventStore (geteventstore.com)? Additionally, would it be possible to incorporate PersistentView (or its new name, Persistence Query)?

I am looking to implement a solution similar to your ddd-leaven-akka-v2 example, but was hoping to use Cassandra rather than EventStore. I do not know anything about EventStore, which is why I am leaning towards Cassandra. Is EventStore as mature as Cassandra, and is it production-ready?
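For reference, a minimal sketch of what switching the write side to Cassandra might look like, assuming the akka-persistence-cassandra plugin and assuming nothing in the setup depends directly on the EventStore plugin (the system name is illustrative, not akka-ddd's API):

import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

// Hedged sketch only: swap the journal/snapshot plugins via configuration.
// Plugin ids come from akka-persistence-cassandra.
object CassandraBackedSystem extends App {
  val cassandraConfig = ConfigFactory.parseString(
    """
      |akka.persistence.journal.plugin = "cassandra-journal"
      |akka.persistence.snapshot-store.plugin = "cassandra-snapshot-store"
    """.stripMargin)

  val system = ActorSystem("write-back", cassandraConfig.withFallback(ConfigFactory.load()))
}

Note that the read side (view-update) may still rely on EventStore-specific features such as projections, so swapping the journal alone may not be enough.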

Thanks!

Scala 2.12 Support

Scala 2.12 was just released. I think it would be a good idea to publish 2.12 artifacts.
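A minimal build.sbt sketch of cross-publishing, assuming the sources compile unchanged on both versions (version numbers are illustrative):

// build.sbt (sketch): build and publish for both Scala versions
crossScalaVersions := Seq("2.11.8", "2.12.0")

// then publish all cross versions, e.g.:
//   sbt "+publish"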

Add Role to SingletonManagerFactory

I am trying the following scenario: start a simple cluster seed node (cluster monitor) first, and join the nodes with the business logic afterwards. But the Receptor in Headquarters does not receive messages, because "the singleton actor is always running on the oldest member with the specified role" and the first node has no implementation for the ClusterSingletonManager. One way to solve this is to use ClusterSingletonManagerSettings.withRole (as in ShardingSupport):

trait SingletonSupport {

  // The department of the local office is used as the cluster role
  implicit def singletonManagerFactory[A <: BusinessEntity : LocalOfficeId](implicit as: ActorSystem): SingletonManagerFactory =
    new SingletonManagerFactory(implicitly[LocalOfficeId[A]].department)

}

class SingletonManagerFactory(role: String)(implicit system: ActorSystem) extends CreationSupport {

  override def getChild(name: String): Option[ActorRef] = throw new UnsupportedOperationException

  override def createChild(props: Props, name: String): ActorRef = {
    val singletonManagerName: String = s"singletonOf$name"
    // Restrict the singleton to cluster members carrying the given role
    val managerSettings = ClusterSingletonManagerSettings(system).withSingletonName(name).withRole(role)
    system.actorOf(
      ClusterSingletonManager.props(
        singletonProps = props,
        terminationMessage = PoisonPill,
        managerSettings
      ),
      name = singletonManagerName)

    val proxySettings = ClusterSingletonProxySettings(system).withSingletonName(name)
    system.actorOf(
      ClusterSingletonProxy.props(
        singletonManagerPath = s"/user/$singletonManagerName",
        proxySettings),
      name = s"${name}Proxy")
  }
}

Durable scheduler

Motivation

Sagas (Process Managers) should be able to schedule timeout events (deadlines) so that they are notified when a deadline arrives.

Solution proposal

I came up with the idea of a durable scheduler built on top of EventStore.
See the initial concept here.
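To make the idea concrete, a hypothetical sketch (names are illustrative, not the actual akka-ddd API) of the message a Saga could send to such a scheduler office, which would persist it in EventStore and emit the event back once the deadline passes:

import java.time.OffsetDateTime

// Hypothetical request handled by an event-sourced "scheduling office":
// it is persisted durably, so the deadline survives restarts.
case class ScheduleEvent(
  businessUnit: String,      // logical stream the scheduler appends to
  targetProcess: String,     // saga / process instance to notify
  deadline: OffsetDateTime,  // when the event becomes due
  event: Any                 // the timeout / deadline event to deliver
)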

Support for persisting a sequence of events

Do you think we should add support for this:

/**
 * Asynchronously persists events in specified order. This is equivalent to calling
 * persist[A](event: A)(handler: A => Unit) multiple times with the same handler,
 * except that events are persisted atomically with this method.
 *
 * @param events events to be persisted
 * @param handler handler for each persisted event
 */
final def persist[A](events: immutable.Seq[A])(handler: A ⇒ Unit): Unit =
  events.foreach(persist(_)(handler))

There are cases where, from a single command, we may want to raise several events and be sure they will be handled atomically.
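For context, a minimal plain akka-persistence sketch (with hypothetical domain types) of the behaviour being asked for, using persistAll, which in newer Akka versions replaces the quoted Seq overload:

import scala.collection.immutable
import akka.persistence.PersistentActor

// Hypothetical domain types, for illustration only
case class AddItems(items: immutable.Seq[String])
case class ItemAdded(item: String)

class CartActor(val persistenceId: String) extends PersistentActor {
  private var items = Vector.empty[String]

  override def receiveCommand: Receive = {
    case AddItems(newItems) =>
      // All ItemAdded events produced by this single command are written
      // as one atomic batch; the handler runs once per persisted event.
      persistAll(newItems.map(ItemAdded)) { e =>
        items :+= e.item
      }
  }

  override def receiveRecover: Receive = {
    case ItemAdded(item) => items :+= item
  }
}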

Best way to manage the event sequence number for the view-update-elastic module

The view-update module uses the event sequence number provided by the event store to skip the messages already processed by the view itself.

In view-update-mysql you store the event number in a separate table, wrapped in a transaction, so if something goes wrong the event number is not persisted and the transaction is rolled back.

I am thinking of doing the same for the Elasticsearch module and creating a document responsible for keeping the event sequence numbers. The main difference, though, is that Elasticsearch does not support transactions and does not provide any rollback mechanism, leaving us with a potential inconsistency in the data if something goes wrong. This is not an issue for the work I am doing, but maybe we can think of a solution to make this better.
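One option, sketched below under the assumption that no transactional store is available: keep the last applied event number inside the projected document itself, so the data and the sequence number are updated in a single index operation and re-delivered events can be detected and skipped (names and the storage abstraction are hypothetical):

// Hypothetical sketch: an idempotent view update for a store without transactions.
// The last applied event number lives inside the projected document, so updating
// the data and recording the sequence number is a single write.
case class ViewDoc[A](data: A, lastEventNr: Long)

trait IdempotentProjection[A] {
  def load(id: String): Option[ViewDoc[A]]      // read the projected document, if any
  def index(id: String, doc: ViewDoc[A]): Unit  // overwrite the projected document

  def applyEvent(id: String, eventNr: Long)(update: Option[A] => A): Unit = {
    val current = load(id)
    // Skip events that were already applied before a crash / redelivery
    if (current.forall(_.lastEventNr < eventNr))
      index(id, ViewDoc(update(current.map(_.data)), eventNr))
  }
}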

A few questions about akka-ddd status

Hello,
We at Actor are considering implementing a database-agnostic persistence layer. We are going to use event sourcing for everything, but that raises a question: how should we build dashboards and do analytics if we don't have a consistent data representation available for use by third-party systems?
We were considering implementing our own solution for synchronizing the state of Persistent Actors (like User and Group names, Group participants, User balances, etc.). But today I stumbled upon akka-ddd and found out that you already have solutions for this, so now I am considering modifying akka-ddd for our needs and using it in actor-platform, and of course contributing our modifications back to the akka-ddd project.
But before we finally decide to use akka-ddd, there are some questions:

– Some of our deployments use Cassandra for event persistence. Do I understand correctly that the akka-persistence-* modules need to be extended for compatibility with akka-ddd? Is there a way to completely replace eventstore-akka-persistence with another akka-persistence-* module?
– We use protobuf in our client-server protocol, and we are migrating our internal server serialization to it as well, using ScalaPB. It is backward- and forward-compatible, fast, and produces small serialized payloads. Is there a way to use custom serialization with akka-ddd (see the sketch below)?
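For the serialization question, a hedged sketch (assuming a recent ScalaPB; the class name, identifier and registry are illustrative) of a custom Akka serializer that could be bound to the protobuf-generated events via akka.actor.serialization-bindings:

import akka.serialization.SerializerWithStringManifest
import scalapb.{GeneratedMessage, GeneratedMessageCompanion}

// Hypothetical serializer: binds ScalaPB-generated messages to Akka serialization.
class ProtobufEventSerializer extends SerializerWithStringManifest {

  // Illustrative registry: fill in event class name -> generated companion.
  private val companions: Map[String, GeneratedMessageCompanion[_ <: GeneratedMessage]] =
    Map.empty

  override def identifier: Int = 9001 // arbitrary, must be unique per ActorSystem
  override def manifest(o: AnyRef): String = o.getClass.getName

  override def toBinary(o: AnyRef): Array[Byte] = o match {
    case m: GeneratedMessage => m.toByteArray
    case other => throw new IllegalArgumentException(s"Cannot serialize ${other.getClass}")
  }

  override def fromBinary(bytes: Array[Byte], manifest: String): AnyRef =
    companions.get(manifest)
      .map(_.parseFrom(bytes))
      .getOrElse(throw new IllegalArgumentException(s"No companion registered for $manifest"))
}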

Thanks for your time, looking forward to hearing your feedback!

A single projection failure causes Projection to spam lots of db connections

I deduced that it spams db connections because, in my case (using PostgreSQL), it prints out
"FATAL: remaining connection slots are reserved for non-replication superuser connections" after a projection fails. I cannot use any other client to connect to Postgres afterwards, so I cannot fix the db inconsistencies manually.

Old snapshots (before 1.4.0) are not loaded when recovering an actor.

In JsonSerializerExtension:

Old snapshots don't have a dataSerializerId; I think one more pattern match is needed for snapshots without a data serializer id:

def deserialize(implicit format: Formats): PartialFunction[(TypeInfo, JValue), Snapshot] = {
  case (TypeInfo(Clazz, _), JObject(List(
          JField("dataClass", JString(dataClass)),
          JField("dataSerializerId", JInt(serializerId)),
          JField("data", JString(x)),
          JField("metadata", metadata)))) =>
    import Base64._

    val data = if (serializerId.intValue == EmptySerializerId) {
      serialization.deserialize(x.toByteArray, Class.forName(dataClass)).get
    } else {
      serialization.deserialize(x.toByteArray, serializerId.intValue, dataClass).get
    }
    val metaData = metadata.extract[SnapshotMetadata]
    Snapshot(data, metaData)
}
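A hedged sketch of the suggested additional case, reusing the field names and helpers of the existing deserializer, for snapshots written before 1.4.0 that carry no dataSerializerId (meant to be added as another case of the partial function above):

// Hypothetical extra case: old snapshot layout without "dataSerializerId"
case (TypeInfo(Clazz, _), JObject(List(
        JField("dataClass", JString(dataClass)),
        JField("data", JString(x)),
        JField("metadata", metadata)))) =>
  import Base64._

  val data = serialization.deserialize(x.toByteArray, Class.forName(dataClass)).get
  Snapshot(data, metadata.extract[SnapshotMetadata])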

Tenant Id

Do you think the tenant id should be part of the metadata?

Command Queue

It should be possible to send a command to a remote office that is not available in the current Akka Cluster.

Support for creating many receptors for the same bpsName

Different Process Managers cannot use the same stream of events, because of a Receptor naming issue: the Receptor name depends only on the bps (business process stream) name, which is the same for those Process Managers.

abstract class ProcessConfig[E : ClassTag](val bpsName: String, departmentId: EntityId = null)
  extends LocalOfficeId[E](bpsName, Option(departmentId).getOrElse(bpsName)) {

  /**
    * Correlation ID identifies process instance.
    */
  def correlationIdResolver: CorrelationIdResolver

  def processName: EntityId = id
}
class CoordinationOffice[E: ClassTag](val config: ProcessConfig[E], actor: ActorRef) extends Office(config, actor) {

  def receptorConfig: ReceptorConfig =
    ReceptorBuilder()
      .reactTo(BusinessProcessId(config.bpsName, config.department))
      .reactFor(config.processName)
abstract class ReceptorActorFactory[A : LocalOfficeId : CreationSupport](implicit system: ActorSystem) {
...
    implicitly[CreationSupport[A]].createChild(receptorProps, s"Receptor-${receptorConfig.stimuliSource.id}-${receptorConfig.reactFor}")
...}

Exception in receiveRecover when replaying event type EventMessage

I am using a Saga to coordinate events, and this appears in the log quite often:
Exception in receiveRecover when replaying event type [pl.newicom.dddd.messaging.event.EventMessage$$anon$1] with sequence number [2] for persistenceId [productsSaga-96577cb9-7bb3-3edc-83f7-24d5208420a8]. scala.MatchError: Processed(1,Success(OK)).

Full stacktrace:

at scala.PartialFunction$$anon$1.apply(PartialFunction.scala:253)
at scala.PartialFunction$$anon$1.apply(PartialFunction.scala:251)
at co.styletheory.context.inventory.domain.write.InventorySaga$$anonfun$1.applyOrElse(InventorySaga.scala:43)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
at pl.newicom.dddd.process.SagaStateHandling$$anonfun$state$1.apply(SagaStateHandling.scala:20)
at pl.newicom.dddd.process.SagaStateHandling$$anonfun$state$1.apply(SagaStateHandling.scala:20)
at scala.Option.getOrElse(Option.scala:121)
at pl.newicom.dddd.process.SagaStateHandling$class.state(SagaStateHandling.scala:20)
at pl.newicom.dddd.process.ProcessManager.state(ProcessManager.scala:3)
at pl.newicom.dddd.process.SagaStateHandling$class.updateState(SagaStateHandling.scala:40)
at pl.newicom.dddd.process.ProcessManager.updateState(ProcessManager.scala:3)
at pl.newicom.dddd.process.Saga.pl$newicom$dddd$process$Saga$$_updateState(Saga.scala:60)
at pl.newicom.dddd.process.Saga$$anonfun$receiveRecover$1.applyOrElse(Saga.scala:30)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
at akka.persistence.Eventsourced$$anon$3$$anonfun$1.applyOrElse(Eventsourced.scala:461)
at akka.actor.Actor$class.aroundReceive(Actor.scala:480)
at pl.newicom.dddd.process.Saga.akka$persistence$Eventsourced$$super$aroundReceive(Saga.scala:23)
at akka.persistence.Eventsourced$$anon$4.stateReceive(Eventsourced.scala:505)
at akka.persistence.Eventsourced$class.aroundReceive(Eventsourced.scala:173)
at pl.newicom.dddd.process.Saga.akka$persistence$AtLeastOnceDeliveryLike$$super$aroundReceive(Saga.scala:23)
at akka.persistence.AtLeastOnceDeliveryLike$class.aroundReceive(AtLeastOnceDelivery.scala:357)
at pl.newicom.dddd.process.Saga.akka$contrib$pattern$ReceivePipeline$$super$aroundReceive(Saga.scala:23)
at akka.contrib.pattern.ReceivePipeline$class.aroundReceive(ReceivePipeline.scala:112)
at pl.newicom.dddd.process.Saga.aroundReceive(Saga.scala:23)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
at akka.actor.ActorCell.invoke(ActorCell.scala:495)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
at akka.dispatch.Mailbox.run(Mailbox.scala:224)
at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

Saga code:

object InventorySaga {
  sealed trait InventoryStatus extends SagaState[InventoryStatus] {
    def isNew = false
  }

  case object New extends InventoryStatus {
    override def isNew: Boolean = true
  }
  case object Available extends InventoryStatus
  case object Rented extends InventoryStatus
  case object toCustomer extends InventoryStatus
  case object returning extends InventoryStatus
  case object BeingWashed extends InventoryStatus

  implicit object InventorySagaConfig extends SagaConfig[InventorySaga]("products") {
    def correlationIdResolver = {
      case ProductParentCreated(productUUID, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _) => productUUID
      case ProductChildCreated(productUUID, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _) => productUUID
      case ProductBooked(productUUID, _, _, _) => productUUID
    }
  }
}

class InventorySaga(val pc: PassivationConfig,
                    inventoryOffice: ActorPath) extends ProcessManager[InventoryStatus] {
  override def officeId = InventorySagaConfig

  startWhen {
    case created: ProductParentCreated => New
    case created: ProductChildCreated => New
  } andThen {
    case New => {
      case ProductParentCreated(productId, name, parent_name, designers, categories, gallery, price, currency, designer_ids, category_ids, color, clothe_type, detail, productSize, bust, waist, hips, length, sleeve, neck) =>
        deliverCommand(inventoryOffice, CreateChildProduct(productId, name, parent_name, designers, categories, gallery, price, currency, designer_ids, category_ids, color, clothe_type, detail, productSize, bust, waist, hips, length, sleeve, neck))
        stay()
    }
  }
}

Saga process:

fromStreams(['$ce-Product', '$ce-ProductGroup']).
when({
    'co.styletheory.context.inventory.domain.contracts.inventory.ProductParentCreated' : function(s,e) {
        linkTo('products', e);
    },
    'co.styletheory.context.inventory.domain.contracts.inventory.ProductChildCreated' : function(s,e) {
        linkTo('products', e);
    },
    'co.styletheory.context.inventory.domain.contracts.inventory.ProductBooked' : function(s,e) {
        linkTo('products', e);
    }
});

I'm using EventStore 3.4.0.

[write-front] generic schema of successful response

Hi Pavel, very good implementation indeed.
Maybe it is something I missed, but in the CommandHandler trait you return a "command processed" (thank you) message.

How do we return a custom response per command?

Snapshot fails to be read when starting the application

Because I thought this should be a separate issue:

Logs that I picked up:

at akka.persistence.eventstore.Helpers$RichConnection$.akka$persistence$eventstore$Helpers$RichConnection$$loop$1(Helpers.scala:52)
at akka.persistence.eventstore.Helpers$RichConnection$$anonfun$akka$persistence$eventstore$Helpers$RichConnection$$foldLeft$1$1.apply(Helpers.scala:57)
at akka.persistence.eventstore.Helpers$RichConnection$$anonfun$akka$persistence$eventstore$Helpers$RichConnection$$foldLeft$1$1.apply(Helpers.scala:56)
at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:253)
at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:251)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: Unimplemented deserialization of message with manifest [akka.cluster.sharding.ShardCoordinator$Internal$State] in [akka.cluster.sharding.protobuf.ClusterShardingMessageSerializer]
at akka.cluster.sharding.protobuf.ClusterShardingMessageSerializer.fromBinary(ClusterShardingMessageSerializer.scala:162)
at akka.serialization.SerializerWithStringManifest.fromBinary(Serializer.scala:131)
at akka.serialization.Serialization$$anonfun$deserialize$3.apply(Serialization.scala:193)
at scala.util.Try$.apply(Try.scala:192)
at akka.serialization.Serialization.deserialize(Serialization.scala:193)
at pl.newicom.eventstore.json.SnapshotJsonSerializer$$anonfun$deserialize$1.applyOrElse(JsonSerializerExtension.scala:118)
at pl.newicom.eventstore.json.SnapshotJsonSerializer$$anonfun$deserialize$1.applyOrElse(JsonSerializerExtension.scala:112)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at pl.newicom.eventstore.json.ScheduledEventSerializer$$anonfun$deserialize$2.applyOrElse(JsonSerializerExtension.scala:84)
at pl.newicom.eventstore.json.ScheduledEventSerializer$$anonfun$deserialize$2.applyOrElse(JsonSerializerExtension.scala:84)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at org.json4s.CustomSerializer$$anonfun$deserialize$1.applyOrElse(Formats.scala:393)
at org.json4s.CustomSerializer$$anonfun$deserialize$1.applyOrElse(Formats.scala:393)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at org.json4s.CustomSerializer$$anonfun$deserialize$1.applyOrElse(Formats.scala:393)
at org.json4s.CustomSerializer$$anonfun$deserialize$1.applyOrElse(Formats.scala:393)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
... 31 more
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
at scala.collection.AbstractMap.applyOrElse(Map.scala:59)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at org.json4s.Extraction$.org$json4s$Extraction$$customOrElse(Extraction.scala:593)
at org.json4s.Extraction$ClassInstanceBuilder.result(Extraction.scala:580)
at org.json4s.Extraction$.extract(Extraction.scala:389)
at org.json4s.Extraction$ClassInstanceBuilder.org$json4s$Extraction$ClassInstanceBuilder$$mkWithTypeHint(Extraction.scala:575)
at org.json4s.Extraction$ClassInstanceBuilder$$anonfun$result$8.apply(Extraction.scala:584)
at org.json4s.Extraction$ClassInstanceBuilder$$anonfun$result$8.apply(Extraction.scala:580)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at org.json4s.Extraction$ClassInstanceBuilder.result(Extraction.scala:580)
at org.json4s.Extraction$.extract(Extraction.scala:389)
at org.json4s.Extraction$.extract(Extraction.scala:39)
at org.json4s.Extraction$.org$json4s$Extraction$$customOrElse(Extraction.scala:594)
org.json4s.package$MappingException: unknown error
at org.json4s.Extraction$.extract(Extraction.scala:43)
at org.json4s.ExtractableJsonAstNode.extract(ExtractableJsonAstNode.scala:21)
at org.json4s.native.Serialization$.read(Serialization.scala:71)
at org.json4s.Serialization$class.read(Serialization.scala:30)
at org.json4s.native.Serialization$.read(Serialization.scala:32)
at pl.newicom.eventstore.json.JsonSerializerExtensionImpl.fromBinary(JsonSerializerExtension.scala:44)
at pl.newicom.eventstore.EventstoreSerializationSupport$class.deserialize(EventstoreSerializationSupport.scala:98)
at pl.newicom.eventstore.EventstoreSerializationSupport$class.fromEvent(EventstoreSerializationSupport.scala:61)
at pl.newicom.eventstore.plugin.EventStoreSerializer.fromEvent(EventStoreSerializer.scala:10)
at pl.newicom.eventstore.plugin.EventStoreSerializer.fromEvent(EventStoreSerializer.scala:27)
at akka.persistence.eventstore.EventStoreSerialization.deserialize(EventStoreSerialization.scala:13)
at akka.persistence.eventstore.snapshot.EventStoreSnapshotStore$$anonfun$loadAsync$1$$anonfun$apply$1.applyOrElse(EventStoreSnapshotStore.scala:35)
at akka.persistence.eventstore.snapshot.EventStoreSnapshotStore$$anonfun$loadAsync$1$$anonfun$apply$1.applyOrElse(EventStoreSnapshotStore.scala:35)
at akka.persistence.eventstore.snapshot.EventStoreSnapshotStore$$anonfun$loadAsync$1.akka$persistence$eventstore$snapshot$EventStoreSnapshotStore$$anonfun$$fold$1(EventStoreSnapshotStore.scala:24)
at scala.PartialFunction$Lifted.apply(PartialFunction.scala:223)
at scala.PartialFunction$Lifted.apply(PartialFunction.scala:219)
at akka.persistence.eventstore.Helpers$RichConnection$.akka$persistence$eventstore$Helpers$RichConnection$$loop$1(Helpers.scala:52)
at akka.persistence.eventstore.Helpers$RichConnection$$anonfun$akka$persistence$eventstore$Helpers$RichConnection$$foldLeft$1$1.apply(Helpers.scala:57)
at akka.persistence.eventstore.Helpers$RichConnection$$anonfun$akka$persistence$eventstore$Helpers$RichConnection$$foldLeft$1$1.apply(Helpers.scala:56)
at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:253)
at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:251)
at akka.serialization.Serialization$$anonfun$deserialize$3.apply(Serialization.scala:193)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: Unimplemented deserialization of message with manifest [akka.cluster.sharding.ShardCoordinator$Internal$State] in [akka.cluster.sharding.protobuf.ClusterShardingMessageSerializer]
at akka.cluster.sharding.protobuf.ClusterShardingMessageSerializer.fromBinary(ClusterShardingMessageSerializer.scala:162)
at akka.serialization.SerializerWithStringManifest.fromBinary(Serializer.scala:131)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at scala.util.Try$.apply(Try.scala:192)
at akka.serialization.Serialization.deserialize(Serialization.scala:193)
at pl.newicom.eventstore.json.SnapshotJsonSerializer$$anonfun$deserialize$1.applyOrElse(JsonSerializerExtension.scala:118)
at pl.newicom.eventstore.json.SnapshotJsonSerializer$$anonfun$deserialize$1.applyOrElse(JsonSerializerExtension.scala:112)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at pl.newicom.eventstore.json.ScheduledEventSerializer$$anonfun$deserialize$2.applyOrElse(JsonSerializerExtension.scala:84)
at pl.newicom.eventstore.json.ScheduledEventSerializer$$anonfun$deserialize$2.applyOrElse(JsonSerializerExtension.scala:84)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at org.json4s.CustomSerializer$$anonfun$deserialize$1.applyOrElse(Formats.scala:393)
at org.json4s.CustomSerializer$$anonfun$deserialize$1.applyOrElse(Formats.scala:393)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at org.json4s.CustomSerializer$$anonfun$deserialize$1.applyOrElse(Formats.scala:393)
at org.json4s.CustomSerializer$$anonfun$deserialize$1.applyOrElse(Formats.scala:393)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
... 31 more
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at org.json4s.ext.EnumSerializer$$anonfun$deserialize$1.applyOrElse(EnumSerializer.scala:34)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at org.json4s.Extraction$.org$json4s$Extraction$$customOrElse(Extraction.scala:593)
at org.json4s.Extraction$ClassInstanceBuilder.result(Extraction.scala:580)
at org.json4s.Extraction$.extract(Extraction.scala:389)
at org.json4s.Extraction$ClassInstanceBuilder.org$json4s$Extraction$ClassInstanceBuilder$$mkWithTypeHint(Extraction.scala:575)
at org.json4s.Extraction$ClassInstanceBuilder$$anonfun$result$8.apply(Extraction.scala:584)
at org.json4s.Extraction$ClassInstanceBuilder$$anonfun$result$8.apply(Extraction.scala:580)
at org.json4s.Extraction$.org$json4s$Extraction$$customOrElse(Extraction.scala:594)
at org.json4s.Extraction$ClassInstanceBuilder.result(Extraction.scala:580)
at org.json4s.Extraction$.extract(Extraction.scala:389)
at org.json4s.Extraction$.extract(Extraction.scala:39)
at scala.collection.AbstractMap.applyOrElse(Map.scala:59)

Events versioning

What should we do when an event changes over time? For example, today I am raising an event productCreated(id, name); in a month's time I decide to add an extra field, sku, so my events become productCreated(id, name, sku). Right now, unless I did something wrong, this is not possible, as the framework will complain loudly when it tries to deserialize the old events.

How would you solve this issue?
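One common approach, sketched below under the assumption that events are stored as json4s JSON (as the stack traces elsewhere in this tracker suggest): make the new field optional and register a custom serializer that tolerates both shapes. Class and field names are illustrative.

import org.json4s._

// Hypothetical event with the new optional field
case class ProductCreated(id: String, name: String, sku: Option[String])

// Accepts both the old JSON shape (id, name) and the new one (id, name, sku)
class ProductCreatedSerializer extends CustomSerializer[ProductCreated](implicit formats => (
  {
    case json: JObject =>
      ProductCreated(
        id   = (json \ "id").extract[String],
        name = (json \ "name").extract[String],
        sku  = (json \ "sku").extractOpt[String] // missing in events written before the change
      )
  },
  {
    case ProductCreated(id, name, sku) =>
      JObject(
        List("id" -> JString(id), "name" -> JString(name)) ++
          sku.map(s => "sku" -> JString(s)).toList
      )
  }
))

The serializer would then need to be registered with whatever Formats the event serialization uses, so both old and new events deserialize into the same case class.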
