
neonbee's Introduction

🐝 NeonBee Core


NeonBee is an open-source reactive dataflow engine and data stream processing framework built on Vert.x.

Description

NeonBee abstracts most of Vert.x's low-level functionality by adding an application layer for modeling dataflows, processing data streams and consolidating data, exposing everything via standardized RESTful APIs in the form of interfaces called endpoints.

Additionally, NeonBee takes care of all the execution and comes bundled with a full-blown application server offering all the core capabilities of Vert.x, but without you having to care about things like the boot sequence, command line parsing, configuration, monitoring, clustering, deployment, scaling, logging / log handling and much more. On top, we put a rich convenience layer for data handling, simplifying technical data exchange.

To achieve this simplification, NeonBee reduces the scope of Vert.x by picking the most appropriate core and extension components of Vert.x and providing them in a pre-configured, application-server-like fashion. For example, NeonBee comes with a default configuration of the SLF4J logging facade and the Logback logging backend, so you won't have to deal with choosing and configuring a logging environment in the first place. However, if you decide against the logging setup NeonBee provides, you are still free to change it.

NeonBee vs. Vert.x

After reading the introduction, you might ask yourself: when do I need NeonBee, and when should I choose NeonBee over plain Vert.x? Below we summarize a few of our main advantages to help you choose. If your reaction is "no, I would do that myself anyway", you might want to stay with Vert.x for now!

  • We choose, you use: With NeonBee we came to the conclusion that exposing every capability of a framework is often not the best choice, especially when it comes to operating the framework later on. Vert.x provides you with a lot of options to choose from: which cluster manager to take, which logging framework to choose, how many servers to start, which authentication plugins to use, and so on. This is great if you know exactly what you need, and so does everyone else in the area you are working in. Problems start when multiple projects should run on the same foundation and your components are required to interact. Suddenly, choosing a different cluster manager or event bus implementation, or every team creating its own server with its own API format to expose their API, gets troublesome! Thus, with NeonBee, we took many of those kinds of decisions for you and stuck to the industry / enterprise standard. Where there is debate, we made it configurable or provided hooks for you to integrate. You no longer have to care whether your application is able to run in a cluster. You need an API? You get an API. Logging framework – pre-configured, metrics endpoint – check... NeonBee provides you with a solid foundation to get started and for your team to build upon, without having to deal with every bit and piece of the lower-level setup of your project.

  • Configuration over code: Vert.x follows the concept of exposing everything via APIs to developers. It is a purely development-oriented framework. NeonBee is a development-oriented framework as well – verticles still need implementation effort – however, especially for application-server-side tasks, the idea is to configure a well-tested component rather than writing a new one from scratch every time you need one. Need a RESTful JSON-based CRUD endpoint? Have fun writing one in Vert.x, or simply add one line to a config file of NeonBee's ServerVerticle to get one. Want to configure your verticles with their own config files when they get deployed? Sure, go ahead and use Vert.x Config to make every verticle configurable, or use NeonBee and have every deployed verticle automatically read its config from a /config directory. We have built-in support for different endpoint types (like REST, OData and soon more) and different authentication schemes; you can define your own error template, configure if and how you cluster your Vert.x instances, which Micrometer registries to use, etc. With this approach we want to set a standard, assuming that it is easier to implement something generically once in a framework than to recreate it from scratch for every use case you come across. And if you still need to customize a given part of NeonBee via code, be our guest – you have plenty of hooks and Vert.x default methods to override!

  • Need an API? Get an API: As briefly mentioned above, Vert.x by default does not provide you with a standardized API endpoint. If you need one, build one, or use one of the Vert.x plugins to get one configured for you. How does it interact with other endpoints? Is it properly integrated, e.g. into the event bus, so that consuming thousands of requests from the endpoint remains scalable? That will depend heavily on your implementation of the server / endpoint. As a dataflow engine, we work under the assumption that data processing means nothing if you cannot expose your results. Thus, NeonBee comes with built-in endpoints and an extensible endpoint concept, meaning that you can get a REST and / or OData endpoint simply by configuring it. It is integrated with the concepts of DataVerticle and EntityVerticle, so everything you expose via a DataVerticle can also be exposed via a REST endpoint, without a single line of additional code, if you choose to do so. Just like back in the days of Servlets, when you did not have to care that a server existed to expose your Servlets to, NeonBee provides you with an even more extensive concept of structured and unstructured endpoints: you can expose a full data model whose data is provided by an independently scalable set of verticles "under the hood". Again, one less thing you have to worry about!

  • Scalable by design: Building a scalable application can be hard, especially when there are too many choices and one wrong decision or misuse of a certain functionality can have catastrophic effects on performance and scalability. Vert.x is a great and very performant framework; two big reasons for this are that it is internally based on Netty and its event-loop concept, and that it provides options to scale horizontally, such as the concept of verticles, the event bus and running a cluster of multiple Vert.x instances. However, connecting these dots can be hard... You missed using the event bus to communicate between your verticles? Tough luck using any clustering. You want to use clustering but are not familiar with the concept of cluster managers? That will be a chore to get into. You need to exchange data between your verticles efficiently, but do not want to bother with all the internals of the event bus, codecs, etc.? This is where NeonBee's built-in data processing / dataflow concept comes into full effect! If you stick to the implementation of our DataVerticle and EntityVerticle interfaces, scalability is simply a given that, again, you do not have to take care of! Verticles in NeonBee communicate with each other via the event bus by design. At the same time, this is completely transparent to you as a user: you state which data you require, and NeonBee takes care of the rest, requesting the data in advance of your verticle receiving it. All of this is fully horizontally scalable, using the support for clustering and cluster management built into NeonBee. Starting / operating your server as a single instance or as a horizontally scalable clustered one is essentially just two more command line arguments when starting NeonBee.
And this only works because we did not allow every verticle to "do whatever it wants": there are a couple of rules to stick to, which, we know, are good boundary conditions for development with Vert.x anyway.

  • Modularization is key: Developing applications as one monolith has long been considered an anti-pattern when it comes to scalable software development. In recent years microservices have been en vogue, but they come with their own set of disadvantages. Generally, microservices require full-stack experience in your developer base: as the only "loose coupling rule" usually is that microservices communicate via the same protocol (for instance HTTP/S), there is a whole host of things every team needs to take care of. On the other hand, putting everything into one huge repository and building one big Vert.x application out of it might also not be the best thing for software design and maintainability. This is where NeonBee also shines: it comes with a built-in approach to componentization. Remember the DataVerticles and EntityVerticles that Team A built? They can provide them to you in a fat/uber-JAR-like "module JAR" format that NeonBee defines (essentially a JAR that contains all dependencies of the verticles, but not shared dependencies, such as the dependency on NeonBee itself). NeonBee can then take this module and deploy it into a completely encapsulated class-loader instance. This essentially allows you to build a scalable server with as many verticles as you would like, from as many teams as you would like. And as with microservices, all verticles stay only loosely coupled via the event bus. Isn't that great!?

  • Trust us & don't worry, we got this!: We have been working with Vert.x and together with the Vert.x community for many years now. We know the internals of Vert.x and Netty and have learned a lot about how "the clock ticks". This means we sometimes purposefully limited certain options, or provided customizability only through development hooks, without the option to further influence a given component. In these cases, we did our homework to ensure NeonBee stays scalable and performant at any point in time. This is also why, for instance, the default interface of a DataVerticle is very easy to use, even for somebody who may not have any experience with Vert.x's futurized / promise-based interfaces so far. Give it a shot, we are sure you will like it! And if you see anything we could do better, tell us and we will be happy to help!
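To illustrate the "two more command line arguments" mentioned in the scalability point above, starting NeonBee in clustered mode could look roughly like the following sketch. The launcher JAR name and the flag names (-cl, -cc) are assumptions and may differ between NeonBee versions; check the CLI help of your version:

```shell
# Start a single, non-clustered NeonBee instance (JAR name is hypothetical)
java -jar neonbee.jar --working-directory ./working_dir

# Start the same application as part of a cluster: -cl enables clustered mode,
# -cc points to the cluster manager configuration (flag names are assumptions)
java -jar neonbee.jar --working-directory ./working_dir -cl -cc hazelcast-cf.xml
```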

Core Components

To live up to this promise, NeonBee features a certain set of core components, abstracting and thus simplifying the naturally broad spectrum of the underlying Vert.x framework components:

  • Server: boot sequence (read config, deploy verticles & data models, start instance), master / slave handling, etc.
  • Command Line: CLI argument parsing, using Vert.x CLI API
  • Configuration: YAML config, using Vert.x Config
  • Monitoring: Using Micrometer.io API and Prometheus
  • Clustering: Clustered operation using any Cluster Manager
  • Deployment: Automated deployment of verticles and data model elements
  • Supervisor & Scaling: Automated supervision and scaling of verticles
  • Logging: Using the Vert.x Logging and SLF4J facades and Logback as a back end
  • Authentication: Configurable authentication chain using Vert.x Auth
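To make the configuration-driven approach concrete, a ServerVerticle configuration could look roughly like the following YAML sketch. The file path, keys and values are illustrative assumptions, not the authoritative schema; consult the NeonBee configuration documentation for the exact format:

```yaml
# config/io.neonbee.internal.verticle.ServerVerticle.yaml (illustrative path)
config:
  port: 8080                # HTTP port the server listens on
  endpoints:
    odata:
      enabled: true         # expose EntityVerticles under /odata
    raw:
      enabled: true         # expose DataVerticles under /raw
  authenticationChain: []   # no authentication configured in this sketch
```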

Dataflow Processing

While you may use the NeonBee core components just to consume Vert.x functionality more easily, the main focus of NeonBee lies on data processing via its stream design. To this end, NeonBee adds a sophisticated, high-performance data processing interface that you can easily extend in a plug-and-play manner. The upcoming sections describe how to use NeonBee's data processing capabilities hands-on. In case you would like to understand the concept of NeonBee's dataflow processing in more detail, for instance how different resolution strategies can be utilized for a highly optimized traversal of the data tree, please have a look at this document explaining the theory behind NeonBee's data processing.

Data Verticles / Sources

The main component for data processing is essentially a specialization of Vert.x's verticle concept: NeonBee introduces a new AbstractVerticle implementation called DataVerticle. These verticles implement a very simple data processing interface and communicate with each other using the Vert.x Event Bus. Processing data using the DataVerticle becomes a piece of cake. Data retrieval is split into two phases or tasks:

  1. Require: Each verticle first announces the data it requires from other (even multiple) DataVerticles for processing the request. NeonBee will, depending on the resolution strategy (see below), attempt to pre-fetch all required data before invoking the next phase of data processing.
  2. Retrieve: In the retrieval phase, the verticle either processes all the data it requested in the previous require phase, or it performs arbitrary actions, such as database calls, plain data processing, mapping, etc.

Conveniently, the method signatures of DataVerticle are named exactly like that. So it is very easy to request / consolidate / process data from many different sources in a highly efficient manner.
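A minimal DataVerticle sketch following the two phases described above might look as follows. The method names mirror the require / retrieve phases; the exact signatures (DataQuery, DataMap, DataContext) and the "Customers" verticle name are assumptions based on our reading of the NeonBee API and may differ between versions:

```java
import io.neonbee.data.DataContext;
import io.neonbee.data.DataMap;
import io.neonbee.data.DataQuery;
import io.neonbee.data.DataRequest;
import io.neonbee.data.DataVerticle;
import io.vertx.core.Future;

import java.util.Collection;
import java.util.List;

// Illustrative sketch: consolidates data from a (hypothetical) "Customers" verticle
public class CustomerOverviewVerticle extends DataVerticle<String> {
    @Override
    public String getName() {
        return "CustomerOverview"; // name this verticle is addressed by
    }

    @Override
    public Future<Collection<DataRequest>> requireData(DataQuery query, DataContext context) {
        // Phase 1: announce the data this verticle requires; NeonBee pre-fetches it
        return Future.succeededFuture(List.of(new DataRequest("Customers", query)));
    }

    @Override
    public Future<String> retrieveData(DataQuery query, DataMap require, DataContext context) {
        // Phase 2: process the pre-fetched data from the require phase
        // (how results are read from the DataMap is omitted in this sketch)
        return Future.succeededFuture("Overview built from pre-fetched customer data");
    }
}
```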

During either phase, data can be requested from other verticles or arbitrary data sources. Note, however, that such requests span a new tree, so they can only be optimized according to the chosen resolution strategy within their own sub-tree. It is best to stick with one require / retrieve phase per request / data stream; however, it can become necessary to mix different strategies to achieve the optimal result, depending on the use case.

Entity Verticles & Data Model

Entity verticles are an even more specific abstraction of the data verticle concept. While data verticles can deal with any kind of data, for data processing it is often more convenient to know how the data is structured. To define the data structure, NeonBee utilizes the OASIS Open Data Protocol (OData) 4.0 standard, which is also internationally certified by ISO/IEC.

OData 4.0 defines the structure of data in so-called models. In NeonBee, models can be easily defined using the Core Data and Services (CDS) Definition Language. CDS provides a human-readable syntax for defining models. These models can then be interpreted by NeonBee to build valid OData 4.0 entities.
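A tiny CDS model of the kind NeonBee can interpret to build OData entities might look like this; the namespace and entity are purely illustrative:

```cds
namespace my.shop;

// A simple entity; a valid OData 4.0 entity type can be derived from it
entity Products {
  key ID    : Integer;
      name  : String;
      price : Decimal(9, 2);
}
```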

For processing entities, NeonBee uses the Apache Olingo™ library. Entity verticles are essentially data verticles dealing with Olingo entities. This gives NeonBee the ability to expose these entity verticles via a standardized OData endpoint, as well as to perform many optimizations when data is processed.

Data Endpoints

Endpoints are standardized interfaces provided by NeonBee. Endpoints call entry verticles (see dataflow processing) to fetch / write back data. Depending on the type of endpoint, different APIs like REST, OData, etc. are provided and a different set of verticles gets exposed. In the default configuration, NeonBee exposes two simplified HTTP endpoints:

  • A /raw HTTP REST endpoint that returns the data of any data verticle, preferably in a JSON format and
  • a standardized /odata HTTP endpoint to get a valid OData response from entity verticles.
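Assuming a NeonBee instance running on localhost port 8080 with the default configuration, fetching data could look like the following sketch. Host, port, the verticle name and the OData path layout are assumptions:

```shell
# Fetch the (JSON) data of a DataVerticle named "CustomerOverview" via the raw endpoint
curl http://localhost:8080/raw/CustomerOverview

# Fetch an OData entity set from entity verticles via the OData endpoint
curl 'http://localhost:8080/odata/my.shop/Products?$top=10'
```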

Getting Started

This Kickstart Guide is recommended for a first start with NeonBee.

Further NeonBee examples can be found in this repository.

Our Road Ahead

We have ambitious plans for NeonBee and would like to extend it to be able to provide a whole platform for dataflow / data stream processing. Generalizing endpoints, so further interfaces like OpenAPI can be provided or extending the data verticle concept by more optimized / non-deterministic resolution strategies are only two of them. Please have a look at our roadmap for an idea on what we are planning to do next.

Contributing

If you have suggestions how NeonBee could be improved, or want to report a bug, read up on our guidelines for contributing to learn about our submission process, coding rules and more.

We'd love all and any contributions.

neonbee's People

Contributors

ada89, carlspring, dakra, dependabot[bot], gedack, github-actions[bot], halber, kristian, kristian-sap, livkovacova, matthias-michel, pk-work, s4heid, sapdanilowork, tschuba, wowselim


neonbee's Issues

Default Set of active profiles is empty

Describe the bug
Default Set of active profiles is empty, which results in not deploying custom verticles.

To Reproduce
Start NeonBee with custom verticles on the classpath. These verticles won't get deployed.

Expected behavior
The custom verticles should be deployed.

Environment:

  • NeonBee version 0.9.1

[Feature]: Handle content types in RawEndpoint

Is there an existing issue for this?

  • I have searched the existing issues

The Problem

At the moment, the RawEndpoint only supports text/plain and application/json.

HttpServerResponse response = routingContext.response()
        .putHeader("Content-Type",
                Optional.ofNullable(context.responseData().get("Content-Type"))
                        .map(String.class::cast).orElse("application/json"));
if (result instanceof JsonObject) {
    result = ((JsonObject) result).toBuffer();
} else if (result instanceof JsonArray) {
    result = ((JsonArray) result).toBuffer();
} else if (!(result instanceof Buffer)) {
    // TODO add logic here, what kind of data is returned by the data verticle and what kind of
    // data is ACCEPTed by the client. For now just support JSON and always return
    // application/json.
    result = Json.encodeToBuffer(asyncResult.result());
} else {
    // fallback to text/plain, so that the browser tries to display it, instead of downloading it
    response.putHeader("Content-Type", "text/plain");
}

Ignoring the content type forces the user to return an encoded string and create a file from it.

Desired Solution

Create a concept to handle content types in the RawEndpoint and react to the given request content type headers.

Alternative Solutions

No response

Additional Context

No response

[Feature]: Define LogLevel via NeonBeeTestBase

Is there an existing issue for this?

  • I have searched the existing issues

The Problem

NeonBeeTestBase is always using the NeonBeeTestBase-Logback.xml from the class path. In this config the log level is set to debug.

It is not possible to change the log level, because there is no place between log config creation and boot up where I could modify the config.

Desired Solution

Move responsibility for the logback config from NeonBeeTestBase to WorkingDirectoryBuilder. Add some kind of option or method I can use to set the default log level.

Alternative Solutions

Add some kind of option or method I can use to set the default log level in NeonBeeTestBase.

Additional Context

No response

[Feature]: Generate relative location header in the `ODataEndpoint`, when primary key is defined by Entity

Is there an existing issue for this?

  • I have searched the existing issues

The Problem

Split from #326

As previously agreed, the behaviour of Create and Update should be as follows:

  • (1) When response contains the Location (via Response Hint)
    --> use Location as returned by EntityVerticle and set 201 (create) or 200 (update) HTTP status code
  • (2) When Response contains an Entity that defines the primary key completely
    --> generate relative location header in the ODataEndpoint, e.g. EntitySetName(PrimaryKey) (check if helper method is available in Olingo)
  • (3) Respond without an entity (e.g. empty future) or the returned entity does not contain the full primary key.
    --> for backward compatibility, continue as of now by not setting the location header and returning a 204 HTTP status code
  • (4) In the future, to be able to use absolute URLs in the Location header instead: provide a helper method in NeonBee to generate an absolute header based on the current RoutingContext and also add a new „absoluteURLPrefix“ configuration (e.g. /backend, as this is added by the AppRouter, not NeonBee)

(1) and (3) were already implemented as part of #326. This story is about implementing (2) and (4).

Desired Solution

No response

Alternative Solutions

No response

Additional Context

No response

Unregister HealthCheckVerticles from shared map

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

HealthCheckVerticles use a similar mechanism like EntityVerticles for data exchange. They register in NeonBee's shared map by their (unique) eventbus address, but never unregister. If a node dies, the entry remains in the shared map and will be tried to be contacted if data is requested. This means that the shared map fills up after some time with verticle addresses which are no longer reachable.

Expected Behavior

#159 addresses this issue for entity verticles. Once this issue is resolved, the unregistering mechanism should be reused for HealthCheckVerticles.

Steps To Reproduce

No response

Environment

all environments

Relevant log output

No response

Anything else?

No response

[Feature]: Update to Vert.x 4.4.5

Is there an existing issue for this?

  • I have searched the existing issues

The Problem

In Vert.x 4.4.5, all methods which expose Netty classes directly are deprecated. NeonBee uses such methods in two places:

  1. ImmutableBuffer [1], where the ByteBuf class is used
  2. EventLoopHealthCheck [2], where EventExecutor is used.

Desired Solution

ImmutableBuffer: Could the approach below work?

// Current
    ImmutableBuffer(Buffer buffer) {
        this(requireNonNull(buffer), buffer.getByteBuf());
    }

    /**
     * Small optimization, as calling {@link Buffer#getByteBuf} will duplicate the underlying buffer.
     *
     * @param buffer     the buffer to wrap
     * @param byteBuffer the associated Netty byte-buffer
     */
    private ImmutableBuffer(Buffer buffer, ByteBuf byteBuffer) {
        // if the underlying byte buffer is read-only already, there is no need to make it any more immutable
        this.buffer = byteBuffer.isReadOnly() ? buffer : Buffer.buffer(byteBuffer.asReadOnly());
    }

// new
    ImmutableBuffer(Buffer buffer) {
        this.buffer = buffer instanceof ImmutableBuffer ? buffer : Buffer.buffer(buffer);
    }

EventLoopHealthCheck: I'd suggest using SuppressWarnings here because:

  • The long-term solution is that the "pending tasks" metric will be part of Vert.x Metrics. I already talked with Julien about this.
  • In Vert.x 5 we can use VertxInternal to access the EventExecutor. Of course this is an internal API, but the unofficial agreement is that we won't touch it until the pending tasks metric is part of Vert.x Metrics.

[1]

this.buffer = byteBuffer.isReadOnly() ? buffer : Buffer.buffer(byteBuffer.asReadOnly());

[2]
for (EventExecutor elg : neonBee.getVertx().nettyEventLoopGroup()) {

Alternative Solutions

No response

Additional Context

No response

[Bug]: executeBlocking is executed with ordered=true by default

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

Two years ago executeBlocking got a new default value: ordered = true [1]. We now saw in one of our Quarkus apps that this causes massive performance issues. So if ordered = false is possible, we should use it.

[1] eclipse-vertx/vert.x@87004b7

Expected Behavior

My suggestion is to modify AsyncHelper in such a way that ordered = false is used, or at least becomes the default value.

Steps To Reproduce

No response

Environment

- OS:
- Java:
- NeonBee:

Relevant log output

No response

Anything else?

No response

[Bug]: NeonBee cannot use Logback > 1.3.x

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

When starting NeonBee with Logback > 1.3.x on the runtime classpath, NeonBee's LoggerConfiguration fails, because Logback 1.3.x removed org.slf4j.impl.StaticLoggerBinder. I would suggest either bumping the dependency altogether or stopping use of the impl package to make it compatible with the public API.

Expected Behavior

NeonBee to work with any Logback version on the runtime classpath.

Steps To Reproduce

No response

Environment

- OS:
- Java:
- NeonBee:

Relevant log output

No response

Anything else?

No response

[Bug]: un-/registering entity verticles in hazelcast cluster when not in safe state

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

It was noticed that sometimes, when we redeploy a node, some verticles may not get registered in the shared map.
This happens because, when a node leaves or joins the cluster, Hazelcast will rebalance the cluster.
If we register or unregister entity verticles while the Hazelcast cluster is not in a safe state, a PartitionMigratingException occurs.

Expected Behavior

Registering or unregistering entity verticles should work.

Steps To Reproduce

No response

Environment

all environments

Relevant log output

No response

Anything else?

No response

Execute global health checks only on the local instance

Is there an existing issue for this?

  • I have searched the existing issues

The Problem

When NeonBee runs clustered, health checks are divided into node-specific and global health checks. If health information is requested via the HealthCheckHandler, node-specific checks are executed on every single node and the results are consolidated in the HealthCheckRegistry. The current implementation, however, also executes the global checks in the same way, as there is no differentiation between the types of check. All results of a global check are compared in the consolidateResults(...) method of the HealthCheckRegistry, and only the first check is added to the list of consolidated checks (which is returned by the HealthCheckHandler).

This is a problem because, with a large cluster, global checks are executed on every single node in parallel. Depending on the type of check (e.g. a health request to an external service), this can cause high load on the external services.

Another issue with this implementation is that the NeonBee nodes do not necessarily share the same configuration and thus might not be able to perform the health check. In that case, the consolidateResults method holds redundant data. Therefore, we need to make it clear to the user where this check is performed, such that the required configuration can be set up.

Desired Solution

A better implementation would execute the global check only once. It would be sufficient if the check is executed on the local node which invokes the health check handler. I think - for now - we do not have to make it configurable on which node the check is executed, but this might be something we could keep in mind for the future in case there is demand.

Alternative Solutions

No response

Additional Context

To give more detail about the current implementation, here is some log output which would be generated in HealthCheckRegistry.sendDataRequests(...) when logging the data object returned by the invoked data request of each HealthCheckVerticle. Assuming there are 3 verticles in a cluster with a global check service.feature-flags.health:

Retrieved check from neonbee/_healthCheckVerticle-5404d541-7fe0-44a8-bed4-c60af882453b with data: [ {
"id" : "cluster.hazelcast",
"status" : "UP",
"data" : {
"clusterState" : "ACTIVE",
"clusterSize" : 3,
"lifecycleServiceState" : "ACTIVE"
}
}, {
"id" : "service.feature-flags.health",
"status" : "UP",
"data" : {
"statusCode" : 200,
"latencyMillis" : 23,
"statusMessage" : "UP"
}
} ]

Retrieved check from neonbee/_healthCheckVerticle-0ac2c75e-7e63-4e29-b919-1304b023521b with data: [ {
"id" : "cluster.hazelcast",
"status" : "UP",
"data" : {
"clusterState" : "ACTIVE",
"clusterSize" : 3,
"lifecycleServiceState" : "ACTIVE"
}
}, {
"id" : "service.feature-flags.health",
"status" : "UP",
"data" : {
"statusCode" : 200,
"latencyMillis" : 26,
"statusMessage" : "UP"
}
} ]

Retrieved check from neonbee/_healthCheckVerticle-69fc018f-eaaf-499f-acce-82a0752dc919 with data: [ {
"id" : "cluster.hazelcast",
"status" : "UP",
"data" : {
"clusterState" : "ACTIVE",
"clusterSize" : 3,
"lifecycleServiceState" : "ACTIVE"
}
}, {
"id" : "service.feature-flags.health",
"status" : "DOWN",
"data" : {
"cause" : "Could not fetch credentials for basic authentication"
}
} ]

Here, NeonBee would always report the status of the HealthCheckVerticle which registered first in the shared map. The other check results are discarded. Also, notice that the node which runs neonbee/_healthCheckVerticle-69fc018f-eaaf-499f-acce-82a0752dc919 is not set up to authenticate against the service. If this verticle registered first, this failing status would always be returned.

Unstable test

Describe the bug
There is an unstable unit test that I have disabled. Unfortunately, this test is fine when I run it on my computer.

To Reproduce
Enable the test again.

Expected behavior
Test is always green.

Environment:
NeonBee Test Java 11.0 on ubuntu-18.04 environment

Logs

ConfiguredDataVerticleMetricsTest > backend meter registry is null FAILED
    value of            : configureMetricsReporting(...)
    expected instance of: io.neonbee.data.internal.metrics.NoopDataVerticleMetrics
    but was instance of : io.neonbee.data.internal.metrics.DataVerticleMetricsImpl
    with value          : io.neonbee.data.internal.metrics.DataVerticleMetricsImpl@4ba9bd7c
        at app//io.neonbee.data.internal.metrics.ConfiguredDataVerticleMetricsTest.backendMeterRegistriesNull(ConfiguredDataVerticleMetricsTest.java:65)

Full build log: 5_Build and analyze.txt

Only start running job verticles if boot process fully finished

Is there an existing issue for this?

  • I have searched the existing issues

The Problem

Currently, if verticles are deployed during the boot process of NeonBee, it can happen that JobVerticles already start their work, even though NeonBee is still in a booting state. As verticles are deployed as a last step of the boot, this is often not critical, but especially if verticles depend on each other through the event loop, it can result in non-deterministic behavior, particularly if a job verticle calls the dependent verticle immediately.

Desired Solution

Delay starting the execution "clock" of all job verticles until the boot process of NeonBee signals full completion.

Alternative Solutions

No response

Additional Context

No response

AnnotationProcessing floods the terminal with warnings

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

With some of the recent changes, the annotationProcessing gradle task is flooding the console with warnings.

Expected Behavior

No warnings.

Steps To Reproduce

./gradlew annotationProcessing

Environment

- OS: Darwin 21.6.0 arm64
- Java: openjdk 11.0.14.1 2022-02-08 LTS
- NeonBee: 0.16.1

Relevant log output

> Task :annotationProcessing
warning: unknown enum constant Scopes.GLOBAL
  reason: class file for org.infinispan.factories.scopes.Scopes not found
warning: unknown enum constant DataType.TRAIT
  reason: class file for org.infinispan.jmx.annotations.DataType not found
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
Note: Loaded data_object_converters code generator
warning: Supported source version 'RELEASE_8' from annotation processor 'org.gradle.api.internal.tasks.compile.processing.TimeTrackingProcessor' less than -source '11'
Note: Generated model io.neonbee.config.AuthProviderConfig: io.neonbee.config.AuthProviderConfigConverter
Note: Generated model io.neonbee.config.MetricsConfig: io.neonbee.config.MetricsConfigConverter
Note: Generated model io.neonbee.config.MicrometerRegistryConfig: io.neonbee.config.MicrometerRegistryConfigConverter
Note: Generated model io.neonbee.config.EndpointConfig: io.neonbee.config.EndpointConfigConverter
Note: Generated model io.neonbee.config.ServerConfig: io.neonbee.config.ServerConfigConverter
Note: Generated model io.neonbee.config.AuthHandlerConfig: io.neonbee.config.AuthHandlerConfigConverter
Note: Generated model io.neonbee.config.NeonBeeConfig: io.neonbee.config.NeonBeeConfigConverter
Note: Generated model io.neonbee.config.HealthConfig: io.neonbee.config.HealthConfigConverter
warning: unknown enum constant Scopes.GLOBAL
  reason: class file for org.infinispan.factories.scopes.Scopes not found
warning: unknown enum constant DataType.TRAIT
  reason: class file for org.infinispan.jmx.annotations.DataType not found
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant Scopes.GLOBAL
  reason: class file for org.infinispan.factories.scopes.Scopes not found
warning: unknown enum constant DataType.TRAIT
  reason: class file for org.infinispan.jmx.annotations.DataType not found
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
warning: unknown enum constant DataType.TRAIT
49 warnings

Anything else?

No response

OData Expand is not working with $format=xml

Describe the bug
If you want to use the expand feature of the ODataEndpoint and set the format to XML, Olingo's ODataXmlSerializer will fail with an NPE, because Link.getRel() [1] returns null.

According to the OData spec [2], we MUST specify 'href', 'type', and 'rel'.

  • 'rel' is easy to implement, because it is an identifier of the schema: http://docs.oasis-open.org/odata/ns/related/ + FQN
  • 'type' is a little bit more difficult, because we need to determine the requested format.
  • 'href' is also a little bit more difficult.

But Olingo's ODataXmlSerializer only enforces 'rel', so we could start with that as a quick fix.
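A minimal sketch of the proposed quick fix, assuming 'rel' is derived as described above (the class and method names are illustrative; the real code would pass the result to Olingo's Link.setRel(...)):

```java
public class ODataRelHelper {
    // The OData "related" namespace from the spec cited above
    static final String RELATED_NS = "http://docs.oasis-open.org/odata/ns/related/";

    // Build the mandatory "rel" value from the namespace plus the fully
    // qualified name of the navigation property.
    static String relFor(String fullQualifiedName) {
        return RELATED_NS + fullQualifiedName;
    }
}
```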

[1] https://olingo.apache.org/javadoc/odata4/org/apache/olingo/commons/api/data/Link.html#getRel--
[2] http://docs.oasis-open.org/odata/odata-atom-format/v4.0/cs02/odata-atom-format-v4.0-cs02.html#_Toc372792727

[Feature]: Add a CodeQL Github Workflow

Is there an existing issue for this?

  • I have searched the existing issues

The Problem

We should enable GitHub Advanced Security and enable the Vert.x support for CodeQL via the carlspring/vertx-codeql-queries custom query pack in a new GitHub workflow.

Desired Solution

Use the Vert.x support for CodeQL via the carlspring/vertx-codeql-queries custom query pack in a new GitHub workflow.

Alternative Solutions

No response

Additional Context

No response

Resources

Task List

The following tasks will need to be carried out:

  • Get admin access to the repository.
  • Enable GHAS.
  • Add a GitHub workflow for CodeQL.
  • Test.

Task Relationships

This task is:

[Feature]: Replace Vert.x CLI with another CLI library

Is there an existing issue for this?

  • I have searched the existing issues

The Problem

There is a high chance that Vert.x CLI will be removed with Vert.x 5.0.

Desired Solution

We can simply use another CLI library.

Alternative Solutions

No response

Additional Context

This is not urgent, as there will be no Vert.x 5 release before the end of the year.

Versioning information of the NeonBee jar

Is there an existing issue for this?

  • I have searched the existing issues

The Problem

At the moment there is no (easy) way to determine the NeonBee release version from the jar.

Desired Solution

Other projects like Vert.x include a file (e.g. vertx-version.txt) containing the release version in the META-INF directory. This would solve the problem.
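A sketch of how such a version file could be read at runtime (the resource name and class are assumptions; the reader is written against an InputStream so it stays self-contained):

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class VersionReader {
    // Reads the first line of a version resource, analogous to Vert.x's
    // META-INF/vertx-version.txt. The real code would obtain the stream via
    // e.g. NeonBee.class.getResourceAsStream("/META-INF/neonbee-version.txt").
    static String readVersion(InputStream resource) throws IOException {
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(resource, StandardCharsets.UTF_8))) {
            return reader.readLine().trim();
        }
    }

    public static void main(String[] args) throws IOException {
        InputStream fake = new ByteArrayInputStream("0.16.1\n".getBytes(StandardCharsets.UTF_8));
        System.out.println(readVersion(fake)); // prints 0.16.1
    }
}
```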

Alternative Solutions

No response

Additional Context

In addition to the file-based solution, Vert.x offers a CLI command. This could be a nice-to-have extra, but the file-based approach is sufficient for the beginning.

[Discussion]: Increase timeouts for tests

Is there an existing issue for this?

  • I have searched the existing issues

The Problem

The annotation @Timeout defines the maximum time a test has to complete. In most tests we defined 2 seconds. On a slow machine or CI infrastructure there is a good chance of hitting this timeout. Unfortunately, it is not possible to configure a global timeout [1] that could be increased on slow machines.

[1] https://discord.com/channels/751380286071242794/751398169199378553/1045722403705212979

Desired Solution

We should agree on a new default timeout value and adapt the tests.

Alternative Solutions

We could write our own Timeout annotation.

Additional Context

No response

Make NeonBee work with OpenJDK 17

OpenJDK 17 is the latest LTS version of OpenJDK.

At the moment NeonBee is not compatible with OpenJDK 17. There are some reflection issues when executing tests.

NPE in EntityModelLoader when entity model is invalid

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

Given an invalid model, the EntityModelLoader does not properly validate the model. Instead of propagating the error and failing the deployment of the model, a NullPointerException is thrown, which is not caught.

The problematic code is in EntityModelLoader.buildModelMap, because serviceMetaData.getEdm().getEntityContainer() might be null.

Expected Behavior

The deployment of the invalid model should fail with an error message, describing the issue.

Example log output after futurizing buildModelMap and handling the error:

20:00:16.888 [vert.x-eventloop-thread-10] ERROR io.neonbee.internal.deploy.PendingDeployment - Deployment of Models(models/data-model.csn$models/Esrc.Service.edmx) failed
io.vertx.core.impl.NoStackTraceThrowable: Cannot build model map. Entity Container is empty.
20:00:16.903 [vert.x-eventloop-thread-6] INFO io.neonbee.internal.verticle.ServerVerticle - HTTP server started on port 8080
20:00:16.904 [vert.x-eventloop-thread-10] ERROR io.neonbee.internal.deploy.PendingDeployment - Deployment of Deployables(module:2.0.0-SNAPSHOT) failed
io.vertx.core.impl.NoStackTraceThrowable: Cannot build model map. Entity Container is empty.
20:00:16.904 [vert.x-eventloop-thread-10] ERROR io.neonbee.internal.verticle.DeployerVerticle - Unexpected error occurred during deployment of module from JAR file: /Users/d061008/workspace/neonbee/working_dir/modules/dummy-2.0.0-SNAPSHOT-models.jar
io.vertx.core.impl.NoStackTraceThrowable: Cannot build model map. Entity Container is empty.
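A plain-Java approximation of such a futurized check (the real code would return a Vert.x Future and collect the actual models; CompletableFuture and the placeholder types are stand-ins): validate the entity container up front and fail with a descriptive message instead of NPE-ing inside the stream collector.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;

public class ModelMapBuilder {
    // entityContainer stands in for serviceMetaData.getEdm().getEntityContainer()
    static CompletableFuture<Map<String, Object>> buildModelMap(Object entityContainer) {
        if (entityContainer == null) {
            // fail the future instead of throwing an uncaught NPE later
            return CompletableFuture.failedFuture(
                new IllegalStateException("Cannot build model map. Entity Container is empty."));
        }
        return CompletableFuture.completedFuture(Map.of()); // real code collects the models here
    }
}
```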

Steps To Reproduce

Clone neonbee from the main branch, create a folder working_dir/modules, and create an invalid cds / edmx file. Start neonbee with ./gradlew run and you should be able to reproduce the error.

Environment

- OS: Darwin / 21.6.0 / arm64
- Java: OpenJDK Runtime Environment SapMachine (build 11.0.14.1+1-LTS-sapmachine)
- NeonBee: 0.16.1

Relevant log output

java.lang.NullPointerException: Cannot invoke "org.apache.olingo.commons.api.edm.EdmEntityContainer.getNamespace()" because the return value of "org.apache.olingo.commons.api.edm.Edm.getEntityContainer()" is null
        at io.neonbee.entity.EntityModelLoader.lambda$buildModelMap$16(EntityModelLoader.java:205)
        at java.base/java.util.stream.Collectors.lambda$uniqKeysMapAccumulator$1(Collectors.java:177)
        at java.base/java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
        at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655)
        at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
        at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
        at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913)
        at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
        at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
        at io.neonbee.entity.EntityModelLoader.buildModelMap(EntityModelLoader.java:204)
        at io.neonbee.entity.EntityModelLoader.lambda$loadModel$11(EntityModelLoader.java:184)
        at io.vertx.core.impl.future.FutureImpl$1.onSuccess(FutureImpl.java:91)
        at io.vertx.core.impl.future.FutureImpl$ListenerArray.onSuccess(FutureImpl.java:262)
        at io.vertx.core.impl.future.FutureBase.emitSuccess(FutureBase.java:60)
        at io.vertx.core.impl.future.FutureImpl.tryComplete(FutureImpl.java:211)
        at io.vertx.core.impl.future.CompositeFutureImpl.trySucceed(CompositeFutureImpl.java:163)
        at io.vertx.core.impl.future.CompositeFutureImpl.lambda$all$0(CompositeFutureImpl.java:38)
        at io.vertx.core.impl.future.FutureImpl$3.onSuccess(FutureImpl.java:141)
        at io.vertx.core.impl.future.FutureBase.emitSuccess(FutureBase.java:60)
        at io.vertx.core.impl.future.FutureImpl.tryComplete(FutureImpl.java:211)
        at io.vertx.core.impl.future.Composition$1.onSuccess(Composition.java:62)
        at io.vertx.core.impl.future.FutureBase.emitSuccess(FutureBase.java:60)
        at io.vertx.core.impl.future.FutureImpl.tryComplete(FutureImpl.java:211)
        at io.vertx.core.impl.future.PromiseImpl.tryComplete(PromiseImpl.java:23)
        at io.vertx.core.impl.future.PromiseImpl.onSuccess(PromiseImpl.java:49)
        at io.vertx.core.impl.future.FutureBase.lambda$emitSuccess$0(FutureBase.java:54)
        at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)
        at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167)
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:569)
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        at java.base/java.lang.Thread.run(Thread.java:829)

Anything else?

No response

[Bug]: If a HealthCheck fails the check disappears in the results (non-clustered) or the whole request fails (clustered)

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

NeonBee can't handle failing HealthChecks.

Local
In non-clustered mode the method getLocalHealthCheckResults is called, which simply omits failed HealthChecks.

...
asyncCheckResults.stream().filter(Future::succeeded)
...

Cluster
In clustered mode the method collectHealthCheckResults is called, which sends requests to all HealthCheckVerticles. But as soon as one HealthCheck fails, the HealthCheckVerticle will respond with a failure, because an AsyncHelper.allComposite collector is used to collect the results [1].

[1]

return AsyncHelper.allComposite(checkList).map(v -> new JsonArray(

Expected Behavior

  1. If a HealthCheck fails, I should see this in the result.
  2. If a HealthCheck fails, I should see the results of other HealthChecks.
  3. HealthCheckVerticle.retrieveData(..) should re-use getLocalHealthCheckResults to remove code redundancy.
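A plain-Java sketch of expected behaviors 1 and 2 (CompletableFuture stands in for a Vert.x Future; names are illustrative): map a failed check to an error entry instead of dropping it, so one failing HealthCheck neither disappears nor fails the whole request.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class HealthResultCollector {
    // Assumes the individual check futures have already completed, as they have
    // in getLocalHealthCheckResults after awaiting all checks.
    static List<String> collect(List<CompletableFuture<String>> checkResults) {
        return checkResults.stream()
            // handle() sees either the result or the error, so failures are kept
            .map(f -> f.handle((result, error) ->
                    error == null ? result : "DOWN: " + error.getMessage()).join())
            .collect(Collectors.toList());
    }
}
```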

Steps To Reproduce

No response

Environment

- OS:
- Java:
- NeonBee:

Relevant log output

No response

Anything else?

No response

Discussion: Config vs Option

We should clarify what's the difference between an option and a config. Why is the server port an option, but the timezone a config?

DeployerVerticle should not fail if /verticles directory is not in workingDir, but rather watch the workingDir for a /verticles directory to be created

Describe the bug
Currently, when NeonBee is started, the DeployerVerticle fails to start with a...

java.nio.file.NoSuchFileException: <working_dir_path>\verticles
        at java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:85)
        at java.base/sun.nio.fs.WindowsException.asIOException(WindowsException.java:112)
        at java.base/sun.nio.fs.WindowsWatchService$Poller.implRegister(WindowsWatchService.java:366)
        at java.base/sun.nio.fs.AbstractPoller.processRequests(AbstractPoller.java:265)
        at java.base/sun.nio.fs.WindowsWatchService$Poller.run(WindowsWatchService.java:596)
        at java.base/java.lang.Thread.run(Thread.java:829)

... exception, if the working directory does not contain a verticles folder.

To Reproduce
Start NeonBee with a working directory without a verticles directory.

Expected behavior
NeonBee should start without an exception, with the DeployerVerticle watching the workingDir for a verticles directory to be created, which is then watched for new verticles.
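An illustrative sketch of the decision involved (names are assumptions, not NeonBee code): if workingDir/verticles does not exist yet, the verticle would register a watch on workingDir for ENTRY_CREATE events instead of failing its start.

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class WatchTargetResolver {
    // Returns the directory to watch: the verticles directory if it exists,
    // otherwise the working directory itself (waiting for "verticles" to appear).
    static Path watchTarget(Path workingDir) {
        Path verticles = workingDir.resolve("verticles");
        return Files.isDirectory(verticles) ? verticles : workingDir;
    }
}
```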

Build Docker Image

Is your feature request related to a problem? Please describe.

In order to deploy NeonBee in a containerized application, it would be great to have a docker image which contains the NeonBee jar and starts the process.

Describe the solution you'd like

Provide a docker image and integrate a docker-build step into the release pipeline that publishes the NeonBee docker image to a public registry (GitHub packages or DockerHub) whenever NeonBee is released.

HTTP Header fields should be case-insensitive

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

The getters for HTTP headers in DataQuery do not handle the names case-insensitively. However, they should, as stated in the HTTP RFC 2616, Section 4.2:

Field names are case-insensitive.

Expected Behavior

Field names are handled case-insensitively.
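One possible fix, sketched here with plain Java collections (the real DataQuery code may use Vert.x's own multimap types instead): back the header map with a case-insensitive comparator so lookups match regardless of field-name casing.

```java
import java.util.Map;
import java.util.TreeMap;

public class CaseInsensitiveHeaders {
    // TreeMap with CASE_INSENSITIVE_ORDER makes get/put ignore the key's casing
    final Map<String, String> headers = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);

    void set(String name, String value) {
        headers.put(name, value);
    }

    String get(String name) {
        return headers.get(name);
    }
}
```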

Steps To Reproduce

No response

Environment

No response

Relevant log output

No response

Anything else?

No response

[Feature]: Enable ODataEndpoint to handle already processed results

Is there an existing issue for this?

  • I have searched the existing issues

The Problem

  1. I forward an OData request with "skip" to an EntityVerticle.
  2. The EntityVerticle applies the "skip" option already on the database query, and the database returns only the "valid" entries.
  3. CountEntityCollectionProcessor still processes the "skip" option and makes the result incorrect.

The same applies to the "top" option. Filtering and ordering still work, because CountEntityCollectionProcessor just applies the filter or sorting again, but this is unnecessary and just produces load.

Desired Solution

We use hints to tell ODataEndpoint that some processing has already been applied to the result.

hint.odata.filterApplied
hint.odata.orderingApplied
hint.odata.skipApplied
hint.odata.topApplied
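A hypothetical sketch of this hint mechanism (plain Java; the hint keys are taken from the proposal above, the class and method names are assumptions): the EntityVerticle records which query options it already applied, and the endpoint skips re-applying those.

```java
import java.util.HashMap;
import java.util.Map;

public class ODataHints {
    final Map<String, Boolean> hints = new HashMap<>();

    // called by the EntityVerticle, e.g. markApplied("skip")
    void markApplied(String option) {
        hints.put("hint.odata." + option + "Applied", true);
    }

    // called by the endpoint processor before applying an option again
    boolean needsProcessing(String option) {
        return !Boolean.TRUE.equals(hints.get("hint.odata." + option + "Applied"));
    }
}
```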

Alternative Solutions

No response

Additional Context

No response

Consider switching to Apache Commons IO Monitor instead of using the NIO2 WatchService

Using the NIO2 WatchService in Java has turned out to be a huge pain, due to the JVM-internal implementation approach. We never achieved consistent behaviour across all operating systems, which e.g. led to us having to disable all WatchVerticle related tests in our containerized build on GitHub (see [1] for an explanation why).

After consultation with @gedack, he raised the idea of switching to the Apache Commons IO Monitor as an alternative to the WatchService. It shall be clarified in a PoC whether that is a more viable approach.

[1] https://blog.arkey.fr/2019/09/13/watchservice-and-bind-mount/

Improve discovery of DataVerticle metrics

Is there an existing issue for this?

  • I have searched the existing issues

The Problem

There are 4 different DataVerticle metrics that can be enabled. These metrics will be collected for every single DataVerticle if the feature is enabled.

This may be fine for small installations, but running NeonBee with a large number of DataVerticles will produce a huge amount of metrics and clutter the monitoring system.

Desired Solution

Two ideas to improve the discovery of the metrics:

  • Instead of making the verticle name part of the metric name, a Micrometer tag could be used. This would reduce the number of metrics to 4, but you would still be able to filter for a specific verticle (e.g. when creating a chart).
  • Add a neonbee. prefix to each metric. The monitoring system might contain quite a lot of metrics from different sources with overlapping names. The prefix would make it easier to search for the NeonBee metrics if you don't recall the exact name. This should be a convention for all metrics sent by NeonBee.
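A plain-Java sketch combining both ideas (no Micrometer types; the metric and tag names are illustrative): 4 fixed, "neonbee."-prefixed metric names, with the verticle name carried as a tag rather than embedded in the metric name itself.

```java
public class DataVerticleMetricId {
    // Renders a metric identifier in the common name{tag="value"} notation.
    // With this scheme the number of distinct metric names stays at 4,
    // while charts can still filter on the verticle tag.
    static String id(String metric, String verticleName) {
        return "neonbee.dataverticle." + metric + "{verticle=\"" + verticleName + "\"}";
    }
}
```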

Alternative Solutions

No response

Additional Context

No response

Load Hazelcast configuration from file

At the moment NeonBee only supports loading Hazelcast configurations from the classpath. It would be a benefit if NeonBee could also load Hazelcast configurations from a file path.

Currently the "resource" is loaded via the ClasspathXmlConfig class, but there is also a FileSystemXmlConfig class. In NeonBeeOptions we need to distinguish between these two approaches.

Maybe <config>.xml loads from the classpath and file:<[path]/config>.xml loads from the file path. Or we can first try to load the resource from the classpath and, if it is not found, fall back to the file path.
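The "file:" prefix convention could be sketched as follows (plain Java; the real implementation would instantiate Hazelcast's ClasspathXmlConfig or FileSystemXmlConfig based on the result):

```java
public class HazelcastConfigSource {
    // "file:<path>" selects the file-system loader, a bare name the classpath loader
    static boolean isFilePath(String resource) {
        return resource.startsWith("file:");
    }

    // strips the prefix so the remainder can be handed to the chosen loader
    static String resourceName(String resource) {
        return isFilePath(resource) ? resource.substring("file:".length()) : resource;
    }
}
```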

What do you think?

Unregistering Entities in cluster mode

Problem description

NeonBee uses the shared map to manage the registration of Entities to EntityVerticles, e.g.:
("Products" -> "ProductsVerticle")
("OrderTypes" -> "C4SOrderTypesVerticle")
During the boot up of an EntityVerticle "announceEntityVerticle" is used to register:
"Entity Products" is handled by "me" (EntityProductsVerticle)
Problem: No "stop()" method was implemented, and no fallback handling exists, so the map may contain entries of verticles that no longer exist.

Target state

When a node is shut down properly, the entities registered by that node should be removed. If a node is not shut down
properly, the entities that were registered by that node should be purged through a cleanup method.

Solution

The solution consists of two methods: the graceful shutdown and the cleanup method. The graceful shutdown method
deletes all entries from the shared data map that the node has added. After cleaning up the shared data, the node can be
shut down normally.

Graceful shutdown

When a node is shut down, the unregister method should be called. To call the unregister method, a hook must be
registered in NeonBee. The unregister method cleans up the shared map and removes all entities that were registered
by that node. After the shared map is cleaned up, an event to EVENT_BUS_MODELS_LOADED_ADDRESS is fired to update the
ODataV4Endpoint routes. The graceful shutdown method is intended to shorten the time frame until the cluster detects
that a node has left the cluster.

Cleanup method

If a node cannot perform the graceful shutdown method, the other nodes that are still alive must clean up the
registered entities. To detect that a node has left the cluster, a io.vertx.core.spi.cluster.NodeListener must be
implemented and registered via the io.vertx.core.spi.cluster.ClusterManager.nodeListener method. The
NodeListener#nodeLeft method is called when the cluster detects that a node has left the cluster. This method is also
called when a node leaves the cluster after executing the graceful shutdown method. To avoid that each cluster node
tries to change the shared map, the nodes decide which node will perform the cleanup. To decide which node executes
the cleanup, a simple implementation could use the Hazelcast leader:

public boolean isLeader() {
    HazelcastClusterManager clusterManager = ... // obtain the configured cluster manager
    // Hazelcast keeps a consistent member order across the cluster; by convention
    // the oldest member (first in the member set) acts as the leader
    Member oldestMember = clusterManager.getHazelcastInstance().getCluster().getMembers().iterator().next();
    return oldestMember.localMember();
}

The cleanup method is basically the same as the graceful shutdown method. The only difference is that it cleans up the
shared map for another node. After the cleanup is complete, an event is fired to the EVENT_BUS_MODELS_LOADED_ADDRESS.

Make cluster manager choosable

At the moment NeonBee is using Hazelcast as a cluster manager, because it is small, fast and can be embedded.

But when you want to use the following features, you need to pay [1]:

  • Authentication: To ensure that not everyone can add a Node to your cluster
  • Encryption: To ensure that not everyone can read the inter-node cluster communication.

If a project requires these features but doesn't want to buy a Hazelcast license, the project can't use NeonBee. In my opinion this is a problem.

There is an Infinispan Cluster Manager [2] that provides the same features [3] as Hazelcast, but in addition it also offers Authentication and Encryption. Infinispan is licensed under the Apache License 2.0.

I'd like to replace Hazelcast with Infinispan, or at least offer the possibility to choose another Cluster Manager.

Impact: When we introduced the HazelcastClusterHealthCheck [4], we added a hard dependency to Hazelcast in our productive code. We need to check how we can make this more abstract, or add also a InfinispanClusterHealthCheck.

[1] https://hazelcast.com/product-features/imdg-comparison/
[2] https://vertx.io/docs/vertx-infinispan/java/
[3] https://infinispan.org/features/
[4] https://github.com/SAP/neonbee/blob/main/src/main/java/io/neonbee/health/HazelcastClusterHealthCheck.java

[Bug]: Hazelcast metric descriptor reports too long values

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

Hazelcast complains that the names of the shared map entries, which are created as part of the EntityVerticle registration, exceed the limit of 255 chars.

Due to a bug in Hazelcast (hazelcast/hazelcast#17901), the name of the CP subsystem is concatenated, which makes this even worse.

We should try to come up with shorter names of the keys.
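One possible approach (an assumption on my side, not part of the issue): replace the long qualified registry key with a short, stable hash, so the 255-char descriptor limit cannot be exceeded regardless of the entity name length.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ShortRegistryKey {
    // Maps an arbitrarily long registry key to a fixed-length, stable identifier.
    // The "entityVerticles-" prefix and the 16 hex chars are illustrative choices.
    static String shorten(String longKey) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("SHA-256")
            .digest(longKey.getBytes(StandardCharsets.UTF_8));
        StringBuilder key = new StringBuilder("entityVerticles-");
        for (int i = 0; i < 8; i++) { // 8 bytes = 16 hex chars, plenty of uniqueness here
            key.append(String.format("%02x", digest[i] & 0xff));
        }
        return key.toString();
    }
}
```

The obvious trade-off is that the key is no longer human-readable, so the original name would have to be stored as a value or in a lookup map.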

Expected Behavior

Metric limit is not exceeded.

Steps To Reproduce

No response

Environment

- OS: Linux Ubuntu 22.04.1
- Java: 11
- NeonBee: 0.25.0

Relevant log output

14:10:30.391 [hz.sad_proskuriakova.cached.thread-9] DEBUG com.hazelcast.internal.metrics.managementcenter.ManagementCenterPublisher -- [10.130.94.71]:50000 [dev] [4.2.8] Too long value in the metric descriptor found, maximum is 255: __vertx.NeonBee-io.neonbee.internal.helper.SharedDataHelper#EntityVerticleRegistry-entityVerticles[frontend.Service.CountryCodes][###]10c24af8-667a-466e-a699-f599b57dddf1@__vertx.NeonBee-io.neonbee.internal.helper.SharedDataHelper#EntityVerticleRegistry-entityVerticles[frontend.Service.CountryCodes][###]10c24af8-667a-466e-a699-f599b57dddf1

Anything else?

No response

NeonBee OpenAPI Endpoint


Abstract

This RFC describes an OpenAPI endpoint that can be built by providing an OpenAPI endpoint specification and is able to:

  • automatically validate incoming requests based on the passed specification.
  • forward incoming requests via the eventbus to a service that handles the request

Motivation

Besides OData, NeonBee does not offer any endpoint to exchange data in a structured way via HTTP.
An OpenAPI endpoint would solve this issue by providing a structured endpoint based on an OpenAPI contract.

Technical Details

With Vert.x OpenAPI it is possible to automatically build a Vert.x
Web Router based on an OpenAPI contract. When the endpoint is loaded, Vert.x OpenAPI offers a routerBuilder
to implement logic for operations defined in the OpenAPI Contract.

routerBuilder
  .operation("awesomeOperation")
  .handler(routingContext -> {
    RequestParameters params =
      routingContext.get(ValidationHandler.REQUEST_CONTEXT_KEY);
    RequestParameter body = params.body();
    JsonObject jsonBody = body.getJsonObject();
    // Do something with body
  }).failureHandler(routingContext -> {
  // Handle failure
});

The problem with this approach is that it requires the implementation of the business logic inside the routerBuilder,
which would produce a lot of load on the web node that hosts the ServerVerticle. To avoid this, we should use
Vert.x API Service, which allows forwarding incoming HTTP requests
via the eventbus to a dedicated service implementation that contains the business logic. When Vert.x API Service is
applied, the example from above would look like this:

String serviceAddress = ... // EB address of dedicated service implementation
String methodName = ... // name of the service method that can handle the request

routerBuilder
  .operation("awesomeOperation")
  .handler(RouteToEBServiceHandler.build(eventBus, serviceAddress, methodName));

Obviously, the information for the serviceAddress and methodName must be provided in the OpenAPI contract by using
OpenAPI extensions.

Performance Considerations

There should be no impact on performance.

Impact on Existing Functionalities

There should be no impact on existing functionalities.

Open Questions

That this concept works in general was proven by Francesco, a Vert.x committer, who wrote an
article about it. But in
contrast to his solution, we want to do a lot of the code generation during runtime. So we have to figure out
best practices here.

We should also think about hiding the Vert.x API Services behind a facade, maybe a new kind of DataVerticle. For a while
we have been trying to find a way to improve the HTTP or REST capabilities of DataVerticles. Maybe there are synergies here.

Other open questions:

  • How does the NeonBee authorization concept fit with the authorizations defined in OpenAPI?

Load NeonBee options upfront (follow-up discussion)

Since the initial assumption of loading the neonbee config before initialization in order to set custom micrometer options is no longer valid (see comment), and because the code review of #94 showed that this topic has still some room for discussion, we decided to split things up:

  • #94 now only concerns adding micrometer registries via NeonBeeConfig, as the config loading change is not a requirement (Micrometer registries can be modified during runtime).
  • loading the config beforehand will be discussed in this issue, as there are multiple ways to do it. Some of them are detailed below.

Ideas

The following ideas came up so far:

1. Load the NeonBee config blocking

Pro ➕
  • simple, low-effort solution
  • loading the config is required anyways

Contra ➖
  • requires the introduction of additional loading methods which are blocking; throughout the entire project quite some effort was spent to minimize blocking code wherever possible

2. Start a temporary Vert.x instance

As proposed by @kristian (see comment), start a temporary Vert.x instance which will be shut down again after loading the config and before starting NeonBee's Vert.x instance.

As pointed out by @pk-work in another discussion, the Vert.x instance might be useful for other things as well such as the Pre-Processor which could also make use of the Vert.x instance.

Pro ➕
  • other Vert.x projects also use this pattern: e.g. https://vertx.io/docs/vertx-config/java/#_configuring_vert_x_itself
  • code would be more homogenous, as we could implement the file operation non-blocking

Contra ➖
  • unknown impact on the performance
  • feels weird to do this effort just because of a simple file read operation; in case we could use it for other (pre-processing?) operations as well, this doubt would be invalid

Maybe we should evaluate how big the actual overhead of starting a Vert.x instance is. Do we have any measurements?

Additional Context

Current state: The NeonBeeConfig is loaded non-blocking directly before booting up the NeonBee server. As the Vert.x instance has already been created and configured before this step, configuration set in the NeonBeeConfig cannot be used to configure the Vert.x options.

Desired state: Configurations from NeonBeeConfig should be loaded early, such that things like the Vert.x options can be modified with the configuration.

Passing null into constructor of EntityWrapper

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

After #318, it is no longer possible to pass null as an argument for the Entity into the constructor of the EntityWrapper. This is because we need to retrieve the created entity.

If this breaking change was not intended, I would like to propose adding a null check.

Expected Behavior

If the Entity is null, complete the promise and respond with No Content.
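A hypothetical sketch of the proposed null check (String stands in for the Olingo Entity type, and the class name is illustrative): a null entity maps to an empty entity list, which the endpoint can translate into a "No Content" response.

```java
import java.util.List;

public class NullTolerantWrapper {
    final String entityName;
    final List<String> entities; // String stands in for the Olingo Entity type

    NullTolerantWrapper(String entityName, String entity) {
        this.entityName = entityName;
        // tolerate null again: a null entity simply means "no content"
        this.entities = entity == null ? List.of() : List.of(entity);
    }

    boolean isNoContent() {
        return entities.isEmpty();
    }
}
```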

Steps To Reproduce

new EntityWrapper(ENTITY_NAME, (Entity) null)

Environment

- OS: ubuntu
- Java: 11
- NeonBee: 0.0.26

Relevant log output

No response

Anything else?

No response

Load Checker

In Vert.x 4.3.0 it is possible to customize the BlockedThreadChecker logic [1]. Maybe we could create a new hook in NeonBee that takes the blocked-thread threshold and the event loop queue size into account.

To be honest, if such a blocked thread is detected, I don't know how exactly we could fix it programmatically, but we could use it at least for monitoring.

[1] eclipse-vertx/vert.x#4283

RFC: NeonBee Multi Tenancy Authentication


Abstract

This RFC describes a multi tenancy authentication handler that is able to:

  • change tenant specific auth mechanisms during runtime
  • use latest tenant specific credentials during auth time

Motivation

I want to build a multi tenancy application based on NeonBee, which allows different tenants to configure (choose) their own auth mechanism and upload their own credentials during runtime.

This is currently not possible, because NeonBee only supports pre-configured auth mechanisms (incl. the required credentials). To change the auth mechanisms, a restart of NeonBee is required. In addition, these auth mechanisms (incl. the required credentials) would be used for every incoming request. There is no way to differentiate between requests of different tenants.

Technical Details

It is not possible to configure custom auth handlers, therefore a change in AuthHandlerConfig and AuthProviderConfig is required. I'd like to add a MultiTenancyAuthHandler and a MultiTenancyAuthProvider. At the moment I think the MultiTenancyAuthProvider is nothing you can configure (unlike, e.g., the JWTAuthProvider), therefore it should be abstract. The configuration of the MultiTenancyAuthProvider only requires the class of the concrete implementation.

Performance Considerations

The dynamic lookup of the tenant specific auth mechanisms and related credentials will take time and should be cached. The abstract MultiTenancyAuthProvider should already provide utils to do this.
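A sketch of what such a caching utility could look like, using only the stdlib (class and method names are assumptions for illustration, not NeonBee API): a per-tenant cache with a time-to-live avoids hitting the credential store on every request while still picking up changed credentials after expiry.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical per-tenant cache the abstract MultiTenancyAuthProvider could offer.
class TenantAuthCache<T> {
    private static final class Entry<V> {
        final V value;

        final long loadedAtMillis;

        Entry(V value, long loadedAtMillis) {
            this.value = value;
            this.loadedAtMillis = loadedAtMillis;
        }
    }

    private final Map<String, Entry<T>> cache = new ConcurrentHashMap<>();

    private final long ttlMillis;

    private final Function<String, T> loader; // loads the tenant-specific auth config

    TenantAuthCache(long ttlMillis, Function<String, T> loader) {
        this.ttlMillis = ttlMillis;
        this.loader = loader;
    }

    T get(String tenantId) {
        long now = System.currentTimeMillis();
        return cache.compute(tenantId, (id, cached) ->
                cached == null || now - cached.loadedAtMillis > ttlMillis
                        ? new Entry<>(loader.apply(id), now) // (re)load missing or expired entry
                        : cached).value;
    }
}
```

Using `ConcurrentHashMap.compute` keeps the lookup thread-safe without an explicit lock; the TTL bounds how stale cached credentials can get.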

Impact on Existing Functionalities

No impact is expected, because only new features will be added.

Open Questions

Should we add a MultiTenancyAuthHandler, or should we only add the possibility to configure custom auth handlers?

NeonBeeMockHelper throws error

Describe the bug

With f622f17, the static logger field, which is modified by the NeonBeeMockHelper, was removed. As a result, a NoSuchFieldException is thrown when using the mock helper's registerNeonBeeMock(...) method.

To Reproduce

Register a NeonBee mock with NeonBeeMockHelper.registerNeonBeeMock(...).

Expected behavior

The NeonBee mock is registered without a NoSuchFieldException being thrown.

Environment:

  • NeonBee version master
  • Java version: sapmachine 11.0.14.1
  • OS: macOS 12.3.1

Logs

java.lang.NoSuchFieldException: logger
	at java.base/java.lang.Class.getDeclaredField(Class.java:2411)
	at io.neonbee.test.helper.ReflectionHelper.getValueOfPrivateField(ReflectionHelper.java:49)
	at io.neonbee.test.helper.ReflectionHelper.getValueOfPrivateStaticField(ReflectionHelper.java:71)
	at io.neonbee.NeonBeeMockHelper.createLogger(NeonBeeMockHelper.java:253)
	at io.neonbee.NeonBeeMockHelper.registerNeonBeeMock(NeonBeeMockHelper.java:246)
	at io.neonbee.NeonBeeMockHelper.registerNeonBeeMock(NeonBeeMockHelper.java:215)
	at io.neonbee.health.AbstractHealthCheckTest.createDummyHealthCheck(AbstractHealthCheckTest.java:98)
	at io.neonbee.health.AbstractHealthCheckTest.setUp(AbstractHealthCheckTest.java:49)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
	at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
	at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:126)
	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptBeforeEachMethod(TimeoutExtension.java:76)
	at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
	at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeMethodInExtensionContext(ClassBasedTestDescriptor.java:506)
	at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$synthesizeBeforeEachMethodAdapter$21(ClassBasedTestDescriptor.java:491)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeEachMethods$3(TestMethodTestDescriptor.java:171)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeMethodsOrCallbacksUntilExceptionOccurs$6(TestMethodTestDescriptor.java:199)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeMethodsOrCallbacksUntilExceptionOccurs(TestMethodTestDescriptor.java:199)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeEachMethods(TestMethodTestDescriptor.java:168)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
	at org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
	at org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
	at com.intellij.junit5.JUnit5IdeaTestRunner.startRunnerWithArgs(JUnit5IdeaTestRunner.java:71)
	at com.intellij.rt.junit.IdeaTestRunner$Repeater$1.execute(IdeaTestRunner.java:38)
	at com.intellij.rt.execution.junit.TestsRepeater.repeat(TestsRepeater.java:11)
	at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:35)
	at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:235)
	at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:54)

Health Checks

Description

As a NeonBee user, I would like to have a /health endpoint which tells me whether NeonBee itself (e.g. the cluster building) and application-specific parts (e.g. a DB connection) are in a healthy state.

This information should be exposed in an easily consumable way, such as a health endpoint.

Solution

Hazelcast already provides information about the health of a cluster:

curl -s http://localhost:5701/hazelcast/health | jq -r .
{
  "nodeState": "ACTIVE",
  "clusterState": "ACTIVE",
  "clusterSafe": true,
  "migrationQueueSize": 0,
  "clusterSize": 11
}

This information could be used as a starting point for NeonBee health checks. With the vertx-health-check plugin we could provide an extensible health endpoint which, at the beginning, provides a few default checks such as checking the available memory. In case NeonBee was started in clustered mode, we may want to check the cluster status as well.

Example response of the /health endpoint:

root@web-7fc6cbf8f7-jcprt:/# curl -s http://localhost:8080/health | jq -r .
{
  "status": "UP",
  "checks": [
    {
      "id": "physical-memory-check",
      "status": "UP"
    },
    {
      "id": "cluster-health",
      "status": "UP"
    },
    {
      "id": "member-health",
      "status": "UP"
    },
    {
      "id": "database-health",
      "status": "UP",
      "data": {
        "connections": 32
      }
    }
  ],
  "outcome": "UP"
}

In order to provide custom (application-specific) health checks, we could use an SPI based approach.
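The aggregation shown in the example response above can be sketched with plain stdlib code (illustrative only, not the vertx-health-check API): the overall status is "UP" exactly when every individual check reports "UP".

```java
import java.util.List;
import java.util.Map;

final class HealthAggregator {
    /**
     * Derives the overall status from the individual checks, as in the
     * example /health response: a single non-"UP" check degrades the outcome.
     */
    static String overallStatus(List<Map<String, String>> checks) {
        return checks.stream()
                .allMatch(check -> "UP".equals(check.get("status"))) ? "UP" : "DOWN";
    }
}
```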

Web Server Information Disclosure

Is there an existing issue for this?

  • I have searched the existing issues

The Problem

The web server discloses the software used within the server header, which allows attackers to gather information about the application and potentially identify new attack surfaces.

The system reveals the software in use in

  • the 404 error page footer. E.g.:
    NeonBee, Correlation ID: b7251b9d-1db1-4ce9-4435-d191e8519720
    
  • the x-instance-info header. E.g.:
    x-instance-info: NeonBee-09051a07-9dc2-483e-8bd5-64d3e4cb9485
    

Desired Solution

The web server should provide an opt-in option to hide the application name from the header. The error template does not necessarily need to be adapted, since it is already configurable.
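Such an opt-in flag in the server configuration could look like the following; note that the key name is an assumption for illustration, not existing NeonBee configuration:

```yaml
# Hypothetical option, disabled by default to keep today's behavior
hideApplicationInfo: true # omit the application name from x-instance-info and the error page footer
```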

Alternative Solutions

No response

Additional Context

No response

Support creating entity collections in the EntityProcessor

Is your feature request related to a problem? Please describe.

With the current implementation of EntityProcessor.createEntity, a 204 (No Content) HTTP status code is always returned, even when an attempt is made to create an entity collection with a POST request.

Describe the solution you'd like

The entity collection should actually be created. This is not yet implemented, but should be similar to handleReadEntityResult(). Maybe that method can be reused.
