
Spring Session for Apache Geode

Spring Session core provides an API along with several provider implementations to manage user sessions. It also simplifies the support for clustered session state management without being tied to an application container specific solution.

NOTICE

2023-January-17:

At the end of 2022, VMware announced the general availability of the Spring for VMware GemFire portfolio of projects.

While these Spring based projects for VMware GemFire are open source and a successor to the Spring for Apache Geode projects, they are not a drop-in replacement. VMware GemFire forked from the Apache Geode project and is not open source. Additionally, newer Apache Geode and VMware GemFire clients are not backwards compatible with older Apache Geode and VMware GemFire servers. You can begin the transition by starting here.

Alternatively, the Spring portfolio provides first-class integration with other comparable session caching providers. Also, see here.

Finally, keep in mind, the Spring for Apache Geode projects will still be maintained until OSS and commercial support ends. Maintenance will only include CVE and critical fixes. No new features or major enhancements will be made. The Spring Session for Apache Geode support timelines can be viewed here.

2022-October-24:

See the October 24th NOTICE on the Spring Data for Apache Geode GitHub project page for complete details.

Features

Out of the box, Spring Session provides integration with:

  • HttpSession - replaces the HttpSession supplied by the application container (e.g. Apache Tomcat) in a neutral way along with providing HTTP Session IDs in the HTTP Header to work with REST APIs.

  • WebSocket - keeps the HttpSession active when receiving WebSocket messages.

On top of the core Spring Session features, Spring Session for Apache Geode and VMware Tanzu GemFire (SSDG) positions either Apache Geode or VMware Tanzu GemFire as a session repository provider and adds additional capabilities required by enterprise class solutions:

  • Custom Expiration Policies - in addition to the default (and configurable) 30-minute session idle expiration timeout (TTI), SSDG also supports a fixed-duration expiration timeout (e.g. expire the session after 1 hour regardless of whether the session is active or inactive). Users may also define custom expiration policies using the SessionExpirationPolicy interface. See the documentation for more details.

  • Custom Data Serialization - in addition to the default Apache Geode PDX Serialization format, users may configure Apache Geode Data Serialization with full support for Delta Propagation. While race conditions between competing HTTP requests (accessing the same HTTP Session) cannot be completely avoided with any session provider, sending only the delta (or changes) minimizes the chance of lost updates, especially in a highly clustered Web environment. By using PDX Serialization, your HTTP Session state is immediately transferable across environments, from non-managed, standalone environments to managed environments, like Pivotal Cloud Foundry (PCF) using Pivotal Cloud Cache (PCC).

  • Custom Change Detection - while most session implementations consider the session to be dirty anytime anything is written to the session, even when your application domain objects stored in the session have not changed, SSDG will intelligently determine whether there is anything to send before writing it to the wire. OOTB, SSDG will look at any application domain objects that implement Apache Geode’s Delta interface and use that to determine if your application domain objects are indeed dirty before sending the delta. If your objects do not implement the Delta interface, or the object is not the same, then it functions like all other Spring Session providers. If you prefer, you may specify your own rules composed with the IsDirtyPredicate strategy interface.

  • Powerful Pub/Sub - Apache Geode and VMware Tanzu GemFire both provide a very powerful and robust client/server event distribution and handling sub-system leveraged by SSDG in order to reliably manage session state, especially in a distributed/clustered environment.
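The fixed-duration expiration described above can be sketched in isolation. This is a minimal, illustrative sketch, not SSDG's actual SessionExpirationPolicy implementation; SessionView and FixedDurationExpirationPolicy are hypothetical stand-ins defined here only to keep the example self-contained:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Optional;

// Hypothetical stand-in for the part of the Session API this sketch needs.
interface SessionView {
    Instant getCreationTime();
}

// Sketch of a fixed-duration policy: the session expires a fixed time after
// creation, regardless of activity (unlike the default idle/TTI timeout).
class FixedDurationExpirationPolicy {

    private final Duration fixedDuration;

    FixedDurationExpirationPolicy(Duration fixedDuration) {
        this.fixedDuration = fixedDuration;
    }

    // Returns the remaining time before expiration, or empty if already expired.
    Optional<Duration> determineExpirationTimeout(SessionView session) {
        Instant expiresAt = session.getCreationTime().plus(this.fixedDuration);
        Duration remaining = Duration.between(Instant.now(), expiresAt);
        return remaining.isNegative() ? Optional.empty() : Optional.of(remaining);
    }
}
```

A policy like this would expire even an actively used session once the fixed duration elapses, which is the distinguishing behavior versus the default idle timeout.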

These and many more Apache Geode or VMware Tanzu GemFire features may be leveraged in your application environment to achieve resilient, highly available (HA), durable, consistent, and even multi-clustered (WAN), persistent session state management.

Best of all, SSDG allows you to use either Apache Geode or VMware Tanzu GemFire interchangeably without having to change a single line of code. Simply change your dependency from org.springframework.session:spring-session-data-geode to org.springframework.session:spring-session-data-gemfire, or vice versa, and you can seamlessly move between Apache Geode, VMware Tanzu GemFire, or even PCC.
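The swap amounts to a one-line build change; a minimal Gradle sketch (version omitted and left to your build's dependency management):

```groovy
dependencies {
    // Apache Geode as the session repository provider:
    implementation "org.springframework.session:spring-session-data-geode"

    // ...or swap in VMware Tanzu GemFire instead, with no application code changes:
    // implementation "org.springframework.session:spring-session-data-gemfire"
}
```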

No other Spring Session provider offers you the same flexibility and power in one solution, especially as your requirements and use cases change (e.g. from simple session caching to a full-on System of Record with distributed compute and streaming capabilities).

Spring Session Project Site

You can find the documentation, issue management, support, samples, and guides for using Spring Session at https://projects.spring.io/spring-session/.

Additionally, you can find documentation, issue management, support, samples and guides using Spring Session for Apache Geode & VMware Tanzu GemFire at https://spring.io/projects/spring-session-data-geode.

Documentation

Documentation for Spring Session for Apache Geode and VMware Tanzu GemFire can be found here and Javadoc is available here.

Code of Conduct

Please see our code of conduct.

Reporting Security Vulnerabilities

Please see our Security policy.

License

Spring Session is Open Source Software released under the Apache 2.0 license.


spring-session-data-geode's Issues

Consider support for customizable IsDirty application domain object checking.

Currently, SSDG uses the [java.lang.Object.equals(:Object)](https://docs.oracle.com/javase/8/docs/api/java/lang/Object.html#equals-java.lang.Object-) method to determine whether the application's domain objects stored in the (HTTP) Session have changed or not.

The limitations of this approach are:

  1. The user's application domain objects do not override the Object.equals(:Object) method, or...
  2. The application domain objects' Object.equals(:Object) method does not detect changes in the object's persistent state, implementing equals only in terms of "logical" identity (e.g. a Customer's SSN), or...
  3. The user's application modifies the application domain objects in place rather than making defensive copies upon reading an attribute value from the Session.

If any one of these conditions holds, then problems, such as improper dirty detection, may occur in the user's application.

In addition, this problem only occurs when using Apache Geode or Pivotal GemFire's [DataSerialization](http://geode.apache.org/docs/guide/17/developing/data_serialization/gemfire_data_serialization.html) framework with Delta Propagation, since SSDG attempts to be efficient and prevent unnecessary data transmission for objects that have not changed, as determined by Object.equals(:Object).

This problem will not occur when using either PDX Serialization or Java Serialization since the entire Session and all of its contents (application domain objects included) are fully serialized each time the Session object is persisted back to Apache Geode or Pivotal GemFire.

As such, SSDG will potentially offer more support to allow users to specify exactly when and how an application domain object transitions from a non-dirty to a dirty state.

A few options include:

  1. Introduce an IsDirty strategy interface with default implementations provided by SSDG OOTB:
interface IsDirtyPredicate {

  default boolean isDirty(Object previousValue, Object newValue) {
     return newValue == null || !newValue.equals(previousValue);
  }
}

An always dirty object strategy:

class AlwaysDirtyPredicate implements IsDirtyPredicate {

  @Override
  public boolean isDirty(Object previousValue, Object newValue) {
    return true;
  }
}

A Delta interface aware dirty object implementation and strategy:

class DeltaAwareDirtyPredicate implements IsDirtyPredicate {

  @Override
  public boolean isDirty(Object previousValue, Object newValue) {
    return newValue instanceof Delta ? ((Delta) newValue).hasDelta()
      : IsDirtyPredicate.super.isDirty(previousValue, newValue);
  }
}
  2. Alternatively, SSDG could provide new @EnableGemFireHttpSession annotation attributes (e.g. copyOnRead and (optionally) useDeepCopy) to clone/copy the (HTTP) Session object and all of its contents.

For example, when the (HTTP) Session is read via SessionRepository.findById(sessionId) (here), a GemFireSession.copy(:Session) could be performed rather than GemFireSession.from(:Session), along with an option to perform a deep copy (i.e. including application domain objects).

But, then again, the user's application domain objects would still need to implement the java.lang.Cloneable interface.

Release Spring Session Data Geode 2.1.0

Spring Session Bean is scheduled for release on Oct 25th. For spring-session-data-geode 2.1.0 to be included, we need the GA release by Oct 24th. If it is not available, we will need to roll back to the previous GA of spring-session-data-geode in order to ensure Spring Session Bean gets into Spring Boot.

Consider adding support for writing delta of changes

Generally, session repository implementations should track changes to the session and write only the delta in order to reduce the probability of lost updates due to race conditions.

Additionally, if this is implemented also consider supporting SaveMode to allow flexibility.

All session repositories in Spring Session core modules support this and could be used for inspiration.
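The delta-tracking idea can be sketched with a small, self-contained class. This is an illustrative sketch of the technique, not Spring Session's or SSDG's implementation; the class and method names are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Sketch of delta tracking for session attributes: record only attributes
// that changed since the last save, so a save can write just the delta
// instead of the whole session.
class DeltaTrackingAttributes {

    private final Map<String, Object> attributes = new HashMap<>();
    private final Map<String, Object> delta = new HashMap<>();

    void setAttribute(String name, Object value) {
        Object previous = this.attributes.put(name, value);
        if (!Objects.equals(previous, value)) {
            this.delta.put(name, value); // only genuinely changed entries enter the delta
        }
    }

    // Returns the accumulated delta and resets it, as a save operation would.
    Map<String, Object> flushDelta() {
        Map<String, Object> changes = new HashMap<>(this.delta);
        this.delta.clear();
        return changes;
    }
}
```

Writing only the changed entries is what shrinks the window for lost updates between concurrent requests touching the same session.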

Modify PdxSerializableSessionSerializer to mark the identity field of the serialized Session object

To help the Apache Geode (PDX) serialization framework and mechanics resolve the identity (ID) field of a serialized object, use PdxWriter.markIdentityField(..) to mark the Session.id property accordingly.

This will help in cases, such as:

"If users are allowed to do queries that return whole session objects, some queries may require computing hashCode and equals on your whole session." E.g. SELECT DISTINCT session FROM /ClusterSpringSessions session
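The proposed change can be sketched with a stand-in for the two PdxWriter calls involved. PdxWriterView is a hypothetical interface defined here only to keep the example self-contained; the real API is org.apache.geode.pdx.PdxWriter, and the real serializer writes many more fields:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the PdxWriter calls this sketch needs.
interface PdxWriterView {
    PdxWriterView writeString(String fieldName, String value);
    PdxWriterView markIdentityField(String fieldName);
}

// Sketch: after writing the Session's fields, mark "id" as the identity
// field so PDX-level equals/hashCode only consider the session ID, rather
// than hashing the whole session in queries returning session objects.
class SessionPdxSketch {

    static void toData(String sessionId, PdxWriterView writer) {
        writer.writeString("id", sessionId);
        writer.markIdentityField("id"); // identity resolution now uses only "id"
    }
}
```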

Prevent SessionRepository.save(Session) on non-dirty Sessions.

Currently, Spring Session core performs a "commit" of a Session twice during the HTTP request/response processing cycle. This double commit ends up calling SessionRepository.save(Session) twice.

The commit/save, more often than not, results in a non-dirty Session update. In fact I have not seen a case where the Session has become dirty again after it was saved the first time during the HTTP request/response processing cycle. Of course, more research is needed to actually confirm an update to the Session would not occur in some other code path after the first save, such as a Servlet Filter.

Anyway, more details to follow...

@rwinch FYI, ^^^^

Enhance PDX serialization support to delegate back to Geode for Session attribute value serialization

Currently, Spring Session for Apache Geode/Pivotal GemFire (SSDG) supports delegation when using Data Serialization to serialize both the Session and the Session Attributes along with the corresponding attribute values.

This gives users the ability to apply GemFire serialization semantics to their own application domain object types. However, the same strategy is not applied when using PDX Serialization to de/serialize the Session and its contents.

This ticket sets out to enhance SSDG's PDX Serialization to support delegation.

Switch to SLF4J.

Currently, there is a mix of Apache Commons Logging (mostly Commons Logging) and some SLF4J. This ticket will standardize SSDG on SLF4J.

NoTransactionException while rolling back the transaction when using @Transactional

The client app uses multiple threads to modify the same object concurrently in the same region (a replicated region in this case) within a transaction. When the second transaction fails, the transaction rolls back (which is correct). However, in doing so, it throws the following exception:

org.springframework.transaction.NoTransactionException: No transaction is associated with the current thread; are multiple transaction managers present?; nested exception is java.lang.IllegalStateException: Thread does not have an active transaction
  at org.springframework.data.gemfire.transaction.GemfireTransactionManager.doRollback(GemfireTransactionManager.java:235)
  at org.springframework.transaction.support.AbstractPlatformTransactionManager.doRollbackOnCommitException(AbstractPlatformTransactionManager.java:893)
  at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:776)
  at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:712)
  at org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:631)
  at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:385)
  at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:99)
  at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
  at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:747)
  at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:689)
  at <our class>
  at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
  at java.util.concurrent.FutureTask.run(FutureTask.java)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalStateException: Thread does not have an active transaction
  at org.apache.geode.internal.cache.TXManagerImpl.rollback(TXManagerImpl.java:532)
  at org.springframework.data.gemfire.transaction.GemfireTransactionManager.doRollback(GemfireTransactionManager.java:232)
  ... 17 more

Add support to expose configuration as properties in the Spring Environment.

Currently, Spring Session for Apache Geode and Pivotal GemFire (SSDG) allows you to configure Spring Session with either Apache Geode or Pivotal GemFire using well-known and documented properties, such as in a Spring Boot application.properties file, as follows:

# Spring Boot with Spring Session application.properties
spring.session.data.gemfire.session.expiration.max-inactive-interval-seconds=300

However, if the user is using the @EnableGemFireHttpSession annotation attributes directly, or perhaps a SpringSessionGemFireConfigurer bean is used and registered in the Spring application context to configure Spring Session with either Apache Geode or Pivotal GemFire, then no such properties exist and it is more problematic to get access to the configuration at runtime.

It is a simple matter to inject the SpringSessionGemFireConfigurer bean definition into other application components, as follows:

@Configuration
@EnableGemFireHttpSession(maxInactiveIntervalInSeconds = 300)
class SpringSessionGemFireConfiguration {

  @Bean
  SpringSessionGemFireConfigurer customConfigurer() {

    return new SpringSessionGemFireConfigurer() {

      @Override
      public String getRegionName() {
        return "Sessions";
      }
    };
  }

  @Bean
  MyApplicationService appService(SpringSessionGemFireConfigurer configurer) {
    return new MyApplicationService(configurer.getRegionName());
  }
}

But getting access to, say, the name of the Region managing (HTTP) Session state when it is only specified via the @EnableGemFireHttpSession annotation's sessionRegionName attribute is more problematic.

For that, a new annotation attribute (i.e. exposeConfigurationAsProperties) will be added to expose the configuration of Spring Session with Apache Geode or Pivotal GemFire as properties in the Spring Environment, as follows:

@Configuration
@EnableGemFireHttpSession(regionName = "Sessions", maxInactiveIntervalInSeconds = 300,
    exposeConfigurationAsProperties = true)
class SpringSessionGemFireConfiguration {

  @Bean
  MyApplicationService appService(
      @Value("${spring.session.data.gemfire.session.region.name}") String regionName) {

    return new MyApplicationService(regionName);
  }
}
In this way, users can use Spring's @Value annotation to extract configuration, as properties, from the Environment, including configuration SSDG stores in the Environment when exposeConfigurationAsProperties is specifically set to true.

The normal precedence applies:

  1. A SpringSessionGemFireConfigurer bean overrides properties and @EnableGemFireHttpSession annotation attributes.
  2. Properties defined in Spring Boot application.properties (or another properties file loaded into the Spring Environment using a PropertySource) override @EnableGemFireHttpSession annotation attributes.
  3. And finally, @EnableGemFireHttpSession annotation attributes determine the value of the property if neither of the above approaches are used to configure Spring Session with Apache Geode or Pivotal GemFire.

The names of the properties will be the well-known, documented names specified in the SSDG documentation.
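The precedence order described above can be sketched as a simple first-non-empty resolution. This is an illustrative sketch, not SSDG's configuration code; ConfigPrecedence and its method are hypothetical names:

```java
import java.util.Optional;
import java.util.function.Supplier;

// Sketch of the documented precedence: higher-precedence sources (Configurer
// bean, then properties) are consulted first; the annotation attribute is
// the final fallback.
class ConfigPrecedence {

    @SafeVarargs
    static <T> T resolve(T annotationDefault, Supplier<Optional<T>>... higherPrecedenceSources) {
        // Sources are passed highest-precedence first.
        for (Supplier<Optional<T>> source : higherPrecedenceSources) {
            Optional<T> value = source.get();
            if (value.isPresent()) {
                return value.get();
            }
        }
        return annotationDefault; // annotation attribute wins only if nothing else is set
    }
}
```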

Harmonize naming of SessionRepositories

In Spring Session core modules we recently harmonized naming of session repositories - see spring-projects/spring-session#1455. This was triggered by addition of another Redis-backed SessionRepository implementation for 2.1 so we wanted that implementation name more closely reflect the concrete SessionRepository interface they implement.

To align with the above, Spring Session Data Geode should:

  • rename GemFireOperationsSessionRepository to GemFireIndexedSessionRepository (or even GeodeIndexedSessionRepository?)

See spring-projects/spring-session@8cc8fbb for inspiration and strategy taken for preserving backwards compatibility.

Consider PDX Serialization as a possible configuration option for SSDG in addition to the already existing use of GemFire/Geode's DataSerialization framework

Original description from Spring Session Issue #493...

"The initial implementation of Spring Session Data GemFire support made use of GemFire's PDX serialization framework. However, due to several limitations/problems when employing PDX to serialize HTTP Session data, particularly around adding new Session attributes (as first-class properties of the Session object itself) along with robust delta-propagation, the PDX effort failed.

Based on the existing object model representing the (HTTP) Session state (specifically GemFireSession and GemFireSessionAttributes), it may now be possible to utilize certain aspects of PDX, which has the distinct advantage of not having to modify the servers' CLASSPATH with application or Spring Session-specific types.

However, while PDX is more convenient and flexible, it is also slower than DataSerializable and does not necessarily prevent de-serialization on the server. It all depends on how the user has configured PDX on each individual server in the cluster along with the type of data access operations being performed by the application.

For instance, certain OQL queries and GemFire Functions can cause a PDX instance to be de-serialized even when read-serialized is set to true. Therefore, if the Spring Session / application types are not on the server CLASSPATH when a data access operation triggers a deserialization, then a ClassNotFoundException will still be thrown.

However, unlike DataSerializable, GemFire prefers to keep PDX data serialized as much as possible, whereas it seems GemFire always deserializes DataSerializable objects, as is evident from the GemFire Data Serialization section in the GemFire User Guide, which states...

"..while GemFire DataSerializable interface is generally more performant than GemFire's PdxSerializable, it requires full deserialization on the server and then reserialization to send the data back to the client."

The constant serialization/deserialization of data is a performance hit, and GemFire must always serialize data when replicating between peers in the cluster, replicating over WAN, transferring data between clients and servers, and when persisting and/or overflowing data to disk."

Consider support for "Attached" Sessions.

The idea behind "attached" Sessions is not unlike "attached entities" in a JPA context. That is, each update to the Session object is immediately written to the backend data store, which in this case is Apache Geode or Pivotal GemFire.

Add server-side configuration support for GemFire/Geode DataSerialization when SSDG is not used to configure Spring Session on the servers.

This support will be particularly useful for users/customers (mostly) configuring and bootstrapping their Apache Geode or Pivotal GemFire servers via Gfsh.

Unfortunately, GemFire/Geode's DataSerialization framework requires a fair amount of server-side configuration to properly handle Session objects as well as the application domain objects stored in the (HTTP) Session. For instance, the Spring (Session) JARs, the application domain model JARs (for objects stored in the Session) and all dependent JARs are required on the GemFire/Geode server's classpath. This is especially true when using GemFire/Geode's Delta handling capabilities, which require a "deserialization" on the server processing the data access operation (i.e. when the Session "delta" is applied). Therefore, the Session classes, application classes and any third-party libraries containing dependent classes must be on the classpath.

More details to follow.

Change SpringSessionGemFireConfigurer behavior to only apply configuration for the callback methods the user has actually implemented

Currently, the configuration applied by the SpringSessionGemFireConfigurer is all or nothing (as described in the docs), even if the user only implements a single callback method affecting the configuration of SSDG (e.g. getClientRegionShortcut()) via a SpringSessionGemFireConfigurer bean instance.

This makes the use of the SpringSessionGemFireConfigurer less flexible when only a single attribute/property needs custom, conditional logic that is better suited to code than to an annotation attribute or a property. This is especially true since a user could be combining annotation attributes or the well-known SSDG properties (e.g. spring.session.data.gemfire.cache.client.region.shortcut, as documented in the @EnableGemFireHttpSession annotation's clientRegionShortcut attribute) in Spring Boot's application.properties with the Configurer when configuring SSDG.

This issue intends to change the behavior to be additive/overriding, with precedence. The precedence will be...

  1. SpringSessionGemFireConfigurer (for only the callback methods defined in the SpringSessionGemFireConfigurer interface that the developer actually "implemented").
  2. Well-known SSDG properties (e.g. spring.session.data.gemfire.expiration.max-inactive-interval-seconds)
  3. And finally, the @EnableGemFireHttpSession attributes themselves.

In other words, if all 3 configuration approaches are used for a particular attribute, e.g. session expiration timeouts, then SpringSessionGemFireConfigurer.getMaxInactiveIntervalInSeconds(), followed by the spring.session.data.gemfire.session.expiration.max-inactive-interval-seconds property, followed by the @EnableGemFireHttpSession.maxInactiveIntervalInSeconds attribute, will be evaluated, in that order, until one of the approaches provides the configuration.

Fix bug in Spring Session (core) infrastructure component initialization

Currently, SSDG overrides the @PostConstruct annotated, SpringHttpSessionConfiguration.init() method in the GemFireHttpSessionConfiguration class (here) by extension.

However, the overridden @PostConstruct annotated init() method in the SSDG GemFireHttpSessionConfiguration class does not appropriately call the super.init() method in the Spring Session core SpringHttpSessionConfiguration base class, leaving (for example) the custom configuration of a Spring Session core CookieSerializer unrealized. Therefore, the configured CookieSerializer always defaults to the DefaultCookieSerializer being registered on the configured HttpSessionIdResolver.

This is a bug!

Consider adding support for flush mode

An immediate FlushMode can be used as a strategy for dealing with race conditions by writing changes to the data store as they happen, versus traditionally on invocation of #save on the session repository.

Note that this is an option only for Servlet-based implementations, as org.springframework.session.Session#setAttribute returns void (and not Mono<Void>).

All SessionRepository implementations in Spring Session core modules support this and could be used for inspiration.
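The two flush behaviors can be sketched side by side. This is an illustrative, self-contained sketch of the semantics, not Spring Session's implementation; FlushModeSketch and its in-memory "repository" list are hypothetical:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of FlushMode semantics: with IMMEDIATE, each setAttribute writes
// through to the repository as it happens; with ON_SAVE, writes are deferred
// until save() is invoked.
class FlushModeSketch {

    enum FlushMode { ON_SAVE, IMMEDIATE }

    private final FlushMode mode;
    private final Map<String, Object> pending = new HashMap<>();
    final List<String> repositoryWrites = new ArrayList<>(); // stands in for the backing store

    FlushModeSketch(FlushMode mode) { this.mode = mode; }

    void setAttribute(String name, Object value) {
        this.pending.put(name, value);
        if (this.mode == FlushMode.IMMEDIATE) {
            flush(); // write through as the change happens
        }
    }

    void save() { flush(); }

    private void flush() {
        this.pending.forEach((name, value) -> this.repositoryWrites.add(name + "=" + value));
        this.pending.clear();
    }
}
```

As the issue notes, immediate flushing fits blocking Servlet-based repositories; a reactive repository could not hide the write behind a void-returning setAttribute.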

Reset the Thread count (180) and Workload size (10,000) once the Apache Geode concurrency issues are resolved

This change must be made to the MultiThreadedHighlyConcurrentClientServerHttpSessionAccessIntegrationTests class.

When the Thread count is 180 and the Workload size is 10,000, the Apache Geode Server becomes non-responsive or unavailable (i.e. it fails)!

[FORK] - 2020-01-24 00:12:20,867  INFO .internal.cache.tier.sockets.BaseCommand: 442 - Error applying delta for key f501b046-5466-4e42-a83e-9fd8c2b6361d of region /Sessions: Cache encountered replay of event containing delta bytes for key f501b046-5466-4e42-a83e-9fd8c2b6361d
[FORK] - 2020-01-24 00:12:20,868  WARN .internal.cache.tier.sockets.BaseCommand: 334 - Server connection from [identity(192.168.99.1(SpringBasedCacheClientApplication:7292:loner):51190:358a9bd6:SpringBasedCacheClientApplication,connection=1; port=51421]: connection disconnect detected by EOF.
2020-01-24 00:12:21,072  WARN ode.cache.client.internal.OpExecutorImpl: 653 - Pool unexpected socket timed out on client connection=Pooled Connection to localhost:51177: Connection[localhost:51177]@1979076415)
2020-01-24 00:12:21,072  WARN ode.cache.client.internal.OpExecutorImpl: 653 - Pool unexpected socket timed out on client connection=Pooled Connection to localhost:51177: Connection[DESTROYED]). Server unreachable: could not connect after 1 attempts
2020-01-24 00:12:21,094  WARN ode.cache.client.internal.OpExecutorImpl: 653 - Pool unexpected socket timed out on client connection=Pooled Connection to localhost:51177: Connection[localhost:51177]@728216184)
2020-01-24 00:12:21,094  WARN ode.cache.client.internal.OpExecutorImpl: 653 - Pool unexpected socket timed out on client connection=Pooled Connection to localhost:51177: Connection[DESTROYED]). Server unreachable: could not connect after 1 attempts
2020-01-24 00:12:21,312  WARN ode.cache.client.internal.OpExecutorImpl: 653 - Pool unexpected socket timed out on client connection=Pooled Connection to localhost:51177: Connection[localhost:51177]@1624117850)
2020-01-24 00:12:21,312  WARN ode.cache.client.internal.OpExecutorImpl: 653 - Pool unexpected socket timed out on client connection=Pooled Connection to localhost:51177: Connection[DESTROYED]). Server unreachable: could not connect after 1 attempts
2020-01-24 00:12:21,329  WARN ode.cache.client.internal.OpExecutorImpl: 653 - Pool unexpected socket timed out on client connection=Pooled Connection to localhost:51177: Connection[localhost:51177]@1563217080)
2020-01-24 00:12:21,329  WARN ode.cache.client.internal.OpExecutorImpl: 653 - Pool unexpected socket timed out on client connection=Pooled Connection to localhost:51177: Connection[DESTROYED]). Server unreachable: could not connect after 1 attempts
2020-01-24 00:12:21,350  WARN ode.cache.client.internal.OpExecutorImpl: 653 - Pool unexpected socket timed out on client connection=Pooled Connection to localhost:51177: Connection[localhost:51177]@578482607)
2020-01-24 00:12:21,350  WARN ode.cache.client.internal.OpExecutorImpl: 653 - Pool unexpected socket timed out on client connection=Pooled Connection to localhost:51177: Connection[DESTROYED]). Server unreachable: could not connect after 1 attempts
2020-01-24 00:12:21,627  WARN ode.cache.client.internal.OpExecutorImpl: 653 - Pool unexpected socket timed out on client connection=Pooled Connection to localhost:51177: Connection[localhost:51177]@880633971)
2020-01-24 00:12:21,627  WARN ode.cache.client.internal.OpExecutorImpl: 653 - Pool unexpected socket timed out on client connection=Pooled Connection to localhost:51177: Connection[DESTROYED]). Server unreachable: could not connect after 1 attempts
[FORK] - 2020-01-24 00:12:21,655  INFO .internal.cache.tier.sockets.BaseCommand: 442 - Error applying delta for key f501b046-5466-4e42-a83e-9fd8c2b6361d of region /Sessions: Cache encountered replay of event containing delta bytes for key f501b046-5466-4e42-a83e-9fd8c2b6361d
[FORK] - 2020-01-24 00:12:21,656  WARN .internal.cache.tier.sockets.BaseCommand: 334 - Server connection from [identity(192.168.99.1(SpringBasedCacheClientApplication:7292:loner):51190:358a9bd6:SpringBasedCacheClientApplication,connection=1; port=51360]: connection disconnect detected by EOF.
[FORK] - 2020-01-24 00:12:21,669  INFO .internal.cache.tier.sockets.BaseCommand: 442 - Error applying delta for key f501b046-5466-4e42-a83e-9fd8c2b6361d of region /Sessions: Cache encountered replay of event containing delta bytes for key f501b046-5466-4e42-a83e-9fd8c2b6361d
[FORK] - 2020-01-24 00:12:21,670  WARN .internal.cache.tier.sockets.BaseCommand: 334 - Server connection from [identity(192.168.99.1(SpringBasedCacheClientApplication:7292:loner):51190:358a9bd6:SpringBasedCacheClientApplication,connection=1; port=51226]: connection disconnect detected by EOF.
[FORK] - 2020-01-24 00:12:21,672  INFO .internal.cache.tier.sockets.BaseCommand: 442 - Error applying delta for key f501b046-5466-4e42-a83e-9fd8c2b6361d of region /Sessions: Cache encountered replay of event containing delta bytes for key f501b046-5466-4e42-a83e-9fd8c2b6361d
[FORK] - 2020-01-24 00:12:21,673  WARN .internal.cache.tier.sockets.BaseCommand: 334 - Server connection from [identity(192.168.99.1(SpringBasedCacheClientApplication:7292:loner):51190:358a9bd6:SpringBasedCacheClientApplication,connection=1; port=51487]: connection disconnect detected by EOF.
[FORK] - 2020-01-24 00:12:21,686  WARN .internal.cache.tier.sockets.BaseCommand: 334 - Server connection from [identity(192.168.99.1(SpringBasedCacheClientApplication:7292:loner):51190:358a9bd6:SpringBasedCacheClientApplication,connection=1; port=51386]: connection disconnect detected by EOF.
[FORK] - 2020-01-24 00:12:21,718  WARN .internal.cache.tier.sockets.BaseCommand: 334 - Server connection from [identity(192.168.99.1(SpringBasedCacheClientApplication:7292:loner):51190:358a9bd6:SpringBasedCacheClientApplication,connection=1; port=51477]: connection disconnect detected by EOF.
[FORK] - 2020-01-24 00:12:21,731  INFO .internal.cache.tier.sockets.BaseCommand: 442 - Error applying delta for key f501b046-5466-4e42-a83e-9fd8c2b6361d of region /Sessions: Cache encountered replay of event containing delta bytes for key f501b046-5466-4e42-a83e-9fd8c2b6361d
[FORK] - 2020-01-24 00:12:21,738  WARN .internal.cache.tier.sockets.BaseCommand: 334 - Server connection from [identity(192.168.99.1(SpringBasedCacheClientApplication:7292:loner):51190:358a9bd6:SpringBasedCacheClientApplication,connection=1; port=51405]: connection disconnect detected by EOF.
[FORK] - 2020-01-24 00:12:21,768  INFO .internal.cache.tier.sockets.BaseCommand: 442 - Error applying delta for key f501b046-5466-4e42-a83e-9fd8c2b6361d of region /Sessions: Cache encountered replay of event containing delta bytes for key f501b046-5466-4e42-a83e-9fd8c2b6361d
[FORK] - 2020-01-24 00:12:21,769  WARN .internal.cache.tier.sockets.BaseCommand: 334 - Server connection from [identity(192.168.99.1(SpringBasedCacheClientApplication:7292:loner):51190:358a9bd6:SpringBasedCacheClientApplication,connection=1; port=51314]: connection disconnect detected by EOF.
[FORK] - 2020-01-24 00:12:21,778  INFO .internal.cache.tier.sockets.BaseCommand: 442 - Error applying delta for key f501b046-5466-4e42-a83e-9fd8c2b6361d of region /Sessions: Cache encountered replay of event containing delta bytes for key f501b046-5466-4e42-a83e-9fd8c2b6361d
[FORK] - 2020-01-24 00:12:21,779  WARN .internal.cache.tier.sockets.BaseCommand: 334 - Server connection from [identity(192.168.99.1(SpringBasedCacheClientApplication:7292:loner):51190:358a9bd6:SpringBasedCacheClientApplication,connection=1; port=51533]: connection disconnect detected by EOF.
[FORK] - 2020-01-24 00:12:21,791  INFO .internal.cache.tier.sockets.BaseCommand: 442 - Error applying delta for key f501b046-5466-4e42-a83e-9fd8c2b6361d of region /Sessions: Cache encountered replay of event containing delta bytes for key f501b046-5466-4e42-a83e-9fd8c2b6361d
[FORK] - 2020-01-24 00:12:21,791  WARN .internal.cache.tier.sockets.BaseCommand: 334 - Server connection from [identity(192.168.99.1(SpringBasedCacheClientApplication:7292:loner):51190:358a9bd6:SpringBasedCacheClientApplication,connection=1; port=51468]: connection disconnect detected by EOF.
[FORK] - 2020-01-24 00:12:21,811  WARN .internal.cache.tier.sockets.BaseCommand: 334 - Server connection from [identity(192.168.99.1(SpringBasedCacheClientApplication:7292:loner):51190:358a9bd6:SpringBasedCacheClientApplication,connection=1; port=51495]: connection disconnect detected by EOF.
[FORK] - 2020-01-24 00:12:21,892  INFO .internal.cache.tier.sockets.BaseCommand: 442 - Error applying delta for key f501b046-5466-4e42-a83e-9fd8c2b6361d of region /Sessions: Cache encountered replay of event containing delta bytes for key f501b046-5466-4e42-a83e-9fd8c2b6361d
[FORK] - 2020-01-24 00:12:21,893  WARN .internal.cache.tier.sockets.BaseCommand: 334 - Server connection from [identity(192.168.99.1(SpringBasedCacheClientApplication:7292:loner):51190:358a9bd6:SpringBasedCacheClientApplication,connection=1; port=51493]: connection disconnect detected by EOF.
[FORK] - 2020-01-24 00:12:22,002  INFO .internal.cache.tier.sockets.BaseCommand: 442 - Error applying delta for key f501b046-5466-4e42-a83e-9fd8c2b6361d of region /Sessions: Cache encountered replay of event containing delta bytes for key f501b046-5466-4e42-a83e-9fd8c2b6361d
[FORK] - 2020-01-24 00:12:22,009  INFO .internal.cache.tier.sockets.BaseCommand: 442 - Error applying delta for key f501b046-5466-4e42-a83e-9fd8c2b6361d of region /Sessions: Cache encountered replay of event containing delta bytes for key f501b046-5466-4e42-a83e-9fd8c2b6361d
[FORK] - 2020-01-24 00:12:22,010  WARN .internal.cache.tier.sockets.BaseCommand: 334 - Server connection from [identity(192.168.99.1(SpringBasedCacheClientApplication:7292:loner):51190:358a9bd6:SpringBasedCacheClientApplication,connection=1; port=51515]: connection disconnect detected by EOF.
java.lang.RuntimeException: Session Access Task Failed
	at org.springframework.session.data.gemfire.MultiThreadedHighlyConcurrentClientServerHttpSessionAccessIntegrationTests.safeFutureGet(MultiThreadedHighlyConcurrentClientServerHttpSessionAccessIntegrationTests.java:296)
	at java.util.stream.ReferencePipeline$4$1.accept(ReferencePipeline.java:210)
	at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
	at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.util.stream.IntPipeline.reduce(IntPipeline.java:456)
	at java.util.stream.IntPipeline.sum(IntPipeline.java:414)
	at org.springframework.session.data.gemfire.MultiThreadedHighlyConcurrentClientServerHttpSessionAccessIntegrationTests.runSessionWorkload(MultiThreadedHighlyConcurrentClientServerHttpSessionAccessIntegrationTests.java:311)
	at org.springframework.session.data.gemfire.MultiThreadedHighlyConcurrentClientServerHttpSessionAccessIntegrationTests.concurrentSessionAccessIsCorrect(MultiThreadedHighlyConcurrentClientServerHttpSessionAccessIntegrationTests.java:322)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.springframework.test.context.junit4.statements.RunBeforeTestExecutionCallbacks.evaluate(RunBeforeTestExecutionCallbacks.java:74)
	at org.springframework.test.context.junit4.statements.RunAfterTestExecutionCallbacks.evaluate(RunAfterTestExecutionCallbacks.java:84)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.springframework.test.context.junit4.statements.RunBeforeTestMethodCallbacks.evaluate(RunBeforeTestMethodCallbacks.java:75)
	at org.springframework.test.context.junit4.statements.RunAfterTestMethodCallbacks.evaluate(RunAfterTestMethodCallbacks.java:86)
	at org.springframework.test.context.junit4.statements.SpringRepeat.evaluate(SpringRepeat.java:84)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:251)
	at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:97)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:70)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:190)
	at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
	at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
	at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:230)
	at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:58)
Caused by: java.util.concurrent.ExecutionException: org.springframework.dao.DataAccessResourceFailureException: nested exception is org.apache.geode.cache.client.NoAvailableServersException
	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.util.concurrent.FutureTask.get(FutureTask.java:192)
	at org.springframework.session.data.gemfire.MultiThreadedHighlyConcurrentClientServerHttpSessionAccessIntegrationTests.safeFutureGet(MultiThreadedHighlyConcurrentClientServerHttpSessionAccessIntegrationTests.java:293)
	... 43 more
Caused by: org.springframework.dao.DataAccessResourceFailureException: nested exception is org.apache.geode.cache.client.NoAvailableServersException
	at org.springframework.data.gemfire.GemfireCacheUtils.convertGemfireAccessException(GemfireCacheUtils.java:235)
	at org.springframework.data.gemfire.GemfireAccessor.convertGemFireAccessException(GemfireAccessor.java:93)
	at org.springframework.data.gemfire.GemfireTemplate.get(GemfireTemplate.java:172)
	at org.springframework.session.data.gemfire.GemFireOperationsSessionRepository.findById(GemFireOperationsSessionRepository.java:95)
	at org.springframework.session.data.gemfire.AbstractGemFireIntegrationTests.get(AbstractGemFireIntegrationTests.java:405)
	at org.springframework.session.data.gemfire.MultiThreadedHighlyConcurrentClientServerHttpSessionAccessIntegrationTests.lambda$newAddSessionAttributeTask$2(MultiThreadedHighlyConcurrentClientServerHttpSessionAccessIntegrationTests.java:200)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.geode.cache.client.NoAvailableServersException
	at org.apache.geode.cache.client.internal.pooling.ConnectionManagerImpl.borrowConnection(ConnectionManagerImpl.java:277)
	at org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:125)
	at org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:108)
	at org.apache.geode.cache.client.internal.PoolImpl.execute(PoolImpl.java:772)
	at org.apache.geode.cache.client.internal.GetOp.execute(GetOp.java:91)
	at org.apache.geode.cache.client.internal.ServerRegionProxy.get(ServerRegionProxy.java:116)
	at org.apache.geode.internal.cache.LocalRegion.findObjectInSystem(LocalRegion.java:2793)
	at org.apache.geode.internal.cache.LocalRegion.getObject(LocalRegion.java:1470)
	at org.apache.geode.internal.cache.LocalRegion.nonTxnFindObject(LocalRegion.java:1443)
	at org.apache.geode.internal.cache.LocalRegionDataView.findObject(LocalRegionDataView.java:188)
	at org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1380)
	at org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1319)
	at org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1304)
	at org.apache.geode.internal.cache.AbstractRegion.get(AbstractRegion.java:436)
	at org.springframework.data.gemfire.GemfireTemplate.get(GemfireTemplate.java:169)
	... 7 more

I tried different combinations of Thread count and Workload size:

360 Threads / 3000 Ops - PASSES

500 Threads / 3000 Ops - FAILS

500 Threads / 2000 Ops - PASSES

700 Threads / 2000 Ops - FAILS

There appears to be a correlation between Thread count and Workload size, with a threshold beyond which the test fails.
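The shape of the stack trace (a fixed pool of worker threads, `FutureTask`s collected and summed via a stream, and a `safeFutureGet` that rethrows task failures as "Session Access Task Failed") suggests the workload pattern below. This is a hypothetical, self-contained sketch of that pattern using only `java.util.concurrent`; the class and constant names are illustrative, and the real test performs Session gets/puts against Apache Geode where this sketch uses a trivial task.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Hypothetical sketch of the test's workload pattern: a fixed thread pool runs
// WORKLOAD_SIZE tasks, and safeFutureGet() rethrows any task failure (such as
// the NoAvailableServersException above) as the "Session Access Task Failed"
// RuntimeException seen in the stack trace.
public class SessionWorkloadSketch {

    static final int THREAD_COUNT = 8;    // the real test used 180 to 700
    static final int WORKLOAD_SIZE = 100; // the real test used 2,000 to 10,000

    static int safeFutureGet(Future<Integer> future) {
        try {
            return future.get();
        } catch (Exception cause) {
            throw new RuntimeException("Session Access Task Failed", cause);
        }
    }

    static int runSessionWorkload() throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(THREAD_COUNT);
        try {
            // Submit all tasks first; each stands in for one Session access.
            List<Future<Integer>> futures = IntStream.range(0, WORKLOAD_SIZE)
                .mapToObj(i -> executor.submit(() -> 1))
                .collect(Collectors.toList());
            // Summing the results is where the test surfaced the failure.
            return futures.stream()
                .mapToInt(SessionWorkloadSketch::safeFutureGet)
                .sum();
        } finally {
            executor.shutdown();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runSessionWorkload()); // prints 100
    }
}
```

Raising THREAD_COUNT here only changes pool size; in the real test, each additional thread also holds a client Pool connection open, which is where the Thread count / Workload size threshold likely comes from.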

This problem only occurs with Apache Geode 1.11.0. The test passes with the original Thread count (180) and Workload size (10,000) on Apache Geode 1.9.2.
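For context, the `NoAvailableServersException` above is thrown when the client Pool cannot borrow or create a connection to any cache server. If the servers are simply being overwhelmed at higher Thread counts, the client Pool settings are one knob worth experimenting with. The following `cache.xml` fragment is purely illustrative (the pool name, values, and locator endpoint are assumptions, not a verified fix for this failure):

```xml
<!-- Hypothetical client-side Pool tuning; all values are illustrative. -->
<pool name="sessionPool"
      free-connection-timeout="30000"
      max-connections="500"
      retry-attempts="3"
      read-timeout="15000">
  <locator host="localhost" port="10334"/>
</pool>
```

Given that the failure appeared between Geode 1.9.2 and 1.11.0 under the same workload, the regression may well be server-side rather than anything a client Pool setting can fully compensate for.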
