This project is forked from module-federation/aegis-host.


MicroLib is a lightweight API framework built on Node.js that uses module federation to host and integrate multiple, independently deployable "microservice libraries" running in the same process. The idea is to eliminate or ameliorate the "microservices premium" and the trade-off between deployment independence and operational complexity.

License: Apache License 2.0



MicroLib

MicroLib codename ÆGIS

Microservice Libraries

TL;DR

git clone https://github.com/module-federation/MicroLib
cd MicroLib
cp dotenv.example .env
npm ci
npm run build
npm start .
npm run demo

Note: you no longer need to run the MicroLib-Example project, as the host has been configured to stream the federated modules directly from GitHub. If you want to update any of the sample services, fork the MicroLib-Example repo and update the webpack/remoteEntries.js file on the host so entry.url points at your repo. Clone your fork, make your changes, build the webpack bundles, then commit and push. For the host to pick up your changes and hot-reload them, either configure a webhook in your repo that points at http://your-aegis-server:8707/microlib/reload or run the hot-reload script: npm run hot-reload.
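For orientation, here is a hedged sketch of what an entry in webpack/remoteEntries.js might look like; apart from the url property mentioned above, the property names are illustrative assumptions rather than the host's actual schema.

// webpack/remoteEntries.js - illustrative sketch; aside from `url`, the
// property names here are assumptions, not the host's actual schema
module.exports = [
  {
    name: "microlib-example",                                   // hypothetical label for the remote
    url: "https://github.com/<your-account>/MicroLib-Example",  // point this at your fork
    type: "model",                                              // hypothetical module category
  },
];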

Purpose

Stop paying the "microservices premium".

When evaluating microservices as a candidate architecture, the most important fact to consider is that you are building a distributed application. Microservices are the components of distributed applications - and distribution is what enables their chief virtue, deployment independence. Unfortunately, relative to the traditional alternative, monoliths, distributed apps are much harder to build and manage. So much so that many microservice implementations fail.

This trade-off, dealing with the increased scope, cost and risk that stems from distribution, is called paying the "microservices premium". Sometimes the premium is well worth it. But in many cases it does more harm than good, leading many experts to advise against starting with microservices, but to instead introduce them gradually as scope or demand increases.

That said, in cases where the implementation does succeed, organizations generally prefer microservices to monoliths because of the increased speed and agility that deployment independence brings. So one could make the argument that if the premium were somehow discounted, microservices would be appropriate for a much wider audience.

Consider, then, what would happen if we could eliminate the need for distribution and still allow for independent deployment.

Is there such an alternative? Fowler describes the implicit premise behind the distribution/deployment trade-off:

"One main reason for using services as components (rather than libraries) is that services are independently deployable. If you have an application that consists of multiple libraries in a single process, a change to any single component results in having to redeploy the entire application.”

While technologies that support hot deployment have been around for some time (OSGi, for example), it would appear that, until now anyway (perhaps due to complexity, labor intensity, or skills scarcity), they haven't been considered a viable option. Whatever the reason, with the advent of module federation, this is no longer the case.

Using module federation, it is possible to dynamically and efficiently import remote libraries, just as if they had been installed locally, with only a few, simple configuration steps. MicroLib exploits this technology to support a framework for building application components as independently deployable libraries, call them microservice libraries.

Using webpack dependency graphs, code splitting and code streaming, MicroLib supports hot deployment of federated modules, as well as any dependencies not present on the host, allowing development teams to deploy whenever they choose, without disrupting other components and without having to coordinate with other teams. To simplify integration, promote composability and ensure components remain decoupled, MicroLib implements the port-adapter paradigm from hexagonal architecture to standardize the way modules communicate, so intra- and interprocess communication is transparent: whether a module is deployed locally to the same MicroLib host instance or remotely, it's all the same to the module developer.

With MicroLib, then, you get the best of both worlds. You are no longer forced to choose between manageability and autonomy. Rather, you avoid the microservices premium altogether by building truly modular and independently deployable component libraries that run together in the same process (or cluster of processes), in what you might call a "polylith" - a monolith comprised of multiple (what would otherwise be) microservices.


Features

The goal of MicroLib is to provide an alternative to distributed systems and the performance and operational challenges that come with them, while preserving the benefits of deployment independence. To this end, MicroLib organizes components according to hexagonal architecture, such that the boundaries of, and relations between, federated components are clear and useful.

In addition to zero-install, hot deployment and local eventing, MicroLib promotes strong boundaries between, and prevents coupling of, collocated components through the formalism of the port-adapter paradigm and the use of code generation to automate boilerplate integration tasks. Features include:

  • Dynamic API generation for federated modules
  • Dynamic, independent persistence of federated modules
  • Dynamic port generation for federated modules
  • Dynamic port-adapter binding
  • Dynamic adapter-service binding
  • Hot deployment of federated modules
  • Configuration-based service integration
  • Configuration-based service orchestration
  • Built-in error handling (circuit breaker, undo)
  • Common broker for locally shared events
  • Persistence API for cached datasources
  • Datasource relations for federated schemas and objects
  • Object broker for retrieving external model instances
  • Dependency/control inversion (IoC)
  • Zero downtime, "zero install" deployment
  • Evergreen deployment and semantic versioning
  • Dynamic A/B testing
  • Vendor-agnostic serverless deployment (no vendor lock-in)
  • Faster serverless deployment
  • Configurable serialization for network and storage I/O
  • Clustering for availability and scalability
  • Cluster cache synchronization
  • Polyrepo code reuse (the answer to the shared code question)

Components


MicroLib uses a modified version of Webpack Module Federation to import remote modules over the network into the host framework at runtime. MicroLib modules fall into three categories: model, adapter and service.

A model is a domain entity/service - or in polylith architecture, a component - that implements all or part of the service’s core logic. It also implements the MicroLib ModelSpecification interface. The interface has many options but only a few simple requirements, so developers can use as much, or as little, of the framework's capabilities as they choose.
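As a rough illustration (not a definitive schema), a minimal ModelSpecification might look something like the sketch below; the property names and the factory signature are assumptions based on the description above.

// order.config.js - minimal, illustrative ModelSpecification sketch;
// property names and the factory signature are assumptions, not the real interface
export const Order = {
  modelName: "order",   // assumed: name of the domain model
  endpoint: "orders",   // assumed: path used for the generated CRUD REST API
  factory: (dependencies) => ({ customerId, items = [] }) => ({
    customerId,
    items,
    total: items.reduce((sum, item) => sum + item.price, 0), // simple core logic
  }),
};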

One such capability is port generation. In a hexagonal or port-adapter architecture, ports handle I/O between the application and domain layers. An adapter implements the port's interface, facilitating communication with the outside world. As a property of models, ports are configurable and can be hot-added, -replaced or -removed, in which case the framework automatically rebinds their adapters as needed. Adapters themselves can also be hot-replaced and rebound.

A service provides an optional layer of abstraction for adapters and usually implements a client library. When an adapter is written to satisfy a common integration pattern, a service implements a particular instance of that pattern, binding to the outside-facing end of the adapter. Like adapters to ports, the framework dynamically imports and binds services to adapters at runtime.
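To illustrate that layering (the names and signatures below are assumptions, not the framework's actual API), an adapter can be written as a function of the service it wraps, so the framework can inject whichever service instance is configured at runtime:

// Illustrative sketch of adapter-service binding; names and signatures are assumptions.
// The service implements a concrete client (say, a message broker client); the adapter
// implements the integration pattern and satisfies the port's interface.
const makePublishEventAdapter = (service) =>
  async function publishEvent({ model, event }) {
    await service.send(JSON.stringify(event)); // transport details live in the service
    return model;                              // control returns to the domain model
  };

// Hypothetical usage: the framework (or a test) binds a concrete service to the adapter.
const consoleService = { send: async (msg) => console.log("sent:", msg) };
const publishEvent = makePublishEventAdapter(consoleService);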


Persistence


The framework automatically persists domain models as JSON documents using the default adapter configured for the server. In-memory, filesystem, and MongoDB adapters are provided. Adapters can be extended and individualized per model. Additionally, de/serialization can be customized. Finally, every write operation generates an event that can be forwarded to an external event or data source.
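For example, serialization of a sensitive field might be customized per model. The serializers property and its shape below are assumptions for illustration, and the encode/decode helpers are stand-ins, not real cryptography.

// Illustrative sketch of customized de/serialization in a ModelSpec; the
// `serializers` property and its shape are assumptions, not a definitive schema.
const encode = (value) => Buffer.from(String(value)).toString("base64");   // stand-in for real encryption
const decode = (value) => Buffer.from(String(value), "base64").toString(); // stand-in for real decryption

export const Customer = {
  modelName: "customer",
  serializers: [
    { on: "serialize", key: "creditCardNumber", value: (key, value) => encode(value) },
    { on: "deserialize", key: "creditCardNumber", value: (key, value) => decode(value) },
  ],
};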

A common datasource factory manages adapters and provides access to each service’s individual datasource. The factory supports federated schemas (think GraphQL) through relations defined between datasources in the ModelSpec. With local caching, not only are data federated, but so are related domain models.

const customer = order.customer(); // relation `customer` defined in ModelSpec

const creditCard = customer.decrypt().creditCardNumber;
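The customer relation used above would be declared on the order's ModelSpec; the property names in this sketch (relations, modelName, foreignKey, type) are assumptions for illustration.

// Illustrative sketch of a datasource relation; property names are assumptions.
export const Order = {
  modelName: "order",
  relations: {
    customer: {
      modelName: "customer",    // the related federated model
      foreignKey: "customerId", // field on the order that references the customer
      type: "manyToOne",        // assumed cardinality label
    },
  },
};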

Access to data and objects requires explicit permission, otherwise services cannot access one another’s code or data. Queries execute against an in-memory copy of the data. Datasources leverage this cache by extending the in-memory adapter.



Integration

Ports & Adapters

When ports are configured in the ModelSpecification, the framework dynamically generates methods on the domain model to invoke them. Each port is assigned an adapter, which either invokes the port (inbound) or is invoked by it (outbound).

Ports can be instrumented for exceptions and timeouts to extend the framework’s circuit breaker, retry and compensation logic. They can also be piped together in control flows by specifying the output event of one port as the input or triggering event of another.

An adapter either implements an external interface or exposes an interface for external clients to consume. On the port end, an adapter always implements the port interface; never the other way around. Ports are a function of the domain logic, which is orthogonal to environmental concerns.

Ports optionally specify a callback to process data received on the port before control is returned to the caller. The callback is passed as an argument to the port function. Ports can be configured to run on receipt of an event or an API request, or they can be called directly from code.

Ports also have an undo callback for implementing compensating logic. The framework remembers the order in which ports are invoked and runs the undo callback of each port in reverse order, starting at the point of failure. This allows transactions across multiple services to be rolled back.
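Putting this together, a port declaration in the ModelSpec might look roughly like the sketch below; the property names (ports, type, adapter, timeout, callback, undo) are assumptions drawn from the behavior described above, not the framework's actual schema.

// Illustrative sketch of a port declaration; property names are assumptions.
export const Order = {
  modelName: "order",
  ports: {
    requestPayment: {
      type: "outbound",                  // invoked by the domain model
      adapter: "makePaymentAdapter",     // hypothetical adapter bound to this port
      timeout: 10000,                    // instrumented for circuit breaker / retry logic
      callback: (data, model) => model,  // optional: process data before control returns
      undo: async (model) => {
        // compensating logic; runs in reverse invocation order on failure
        return model;
      },
    },
  },
};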

Local & Remote Events

In addition to in-memory function calls, federated objects and ports, services can communicate with one another locally the same way they do remotely: by publishing and subscribing to events. Using local events, microservice libraries are virtually as decoupled as they would be running remotely.

The framework provides a common broker for local service events and injects pub/sub functions into each model:

ModelA.listen(event, callback);

ModelB.notify(event, data);

Local events can also be forwarded to remote event targets. Like any external integration, remote ports must be configured for external event sources/sinks. Adapters are provided for Kafka and WebSockets.



Orchestration

Service orchestration is built on the framework’s port-adapter implementation. As mentioned, ports both produce and consume events, allowing them to be piped together in control flows by specifying the output event of one port as the input event of another. Because events are shared internally and can be forwarded externally, this implementation works equally well whether services are local or remote.

Callbacks specified for ports in the ModelSpec can process data received on a port before its output event is fired and the next port runs. If not specified, the framework nevertheless saves the port output to the model. Of course, you can implement your own event handlers or adapter logic to customize the flow.
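As a hedged illustration of such a control flow (the event names and the consumesEvent/producesEvent property names are assumptions), ports can be chained simply by matching one port's output event to the next port's input event:

// Illustrative sketch of piping ports together via events; property and
// event names are assumptions for illustration only.
export const orderWorkflow = {
  ports: {
    validateOrder: { producesEvent: "orderValidated" },
    requestPayment: { consumesEvent: "orderValidated", producesEvent: "paymentReceived" },
    shipOrder: { consumesEvent: "paymentReceived" },
  },
};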


Running the Application

See the TL;DR section above for a simplified install. Get up and running in about 60 seconds.

To demonstrate that polyrepo code sharing is a reality, you will clone two repos. The first is MicroLib-Example, which shows how you might implement an Order service using MicroLib. It also mocks several services and shows how they might communicate over an event backbone (Kafka). In module-federation terms, this is the remote. The second is the MicroLib host, which streams federated modules exposed by the remote over the network and generates CRUD REST API endpoints for each one.

git clone https://github.com/module-federation/MicroLib-Example.git
cd *Example
npm ci
echo "KAFKA_GROUP_ID=remote" > .env
echo "ENCRYPTION_PWD=secret" >> .env
npm run build
git clone https://github.com/module-federation/MicroLib.git
cd MicroLib
npm ci
echo "KAFKA_GROUP_ID=host" > .env
echo "ENCRYPTION_PWD=secret" >> .env
echo "DATASOURCE_ADAPTER=DataSourceFile" >> .env
npm run build

Start the services:

# in MicroLib-Example dir
npm run start-all
# in MicroLib dir
npm start

Datasources

In the above configuration, MicroLib uses the local filesystem for default persistence. Alternatively, you can install MongoDB and update the .env accordingly to change the default to Mongo. You can also override an individual model's datasource in its ModelSpec, as sketched after the .env example below.

brew install mongodb-community
mongod

.env

DATASOURCE_ADAPTER=DataSourceMongoDb
MONGODB_URL=mongodb://localhost:27017
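As mentioned above, an individual model can also override the server default. In this hedged sketch the datasource property name is an assumption, while the adapter names come from the .env examples.

// Illustrative per-model datasource override in a ModelSpec; the `datasource`
// property name is an assumption - adapter names are from the .env examples above.
export const Order = {
  modelName: "order",
  datasource: "DataSourceMongoDb", // this model uses Mongo even if the server default is DataSourceFile
};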

Clustering

MicroLib supports clustering with automatic cache synchronization and rolling restarts for increased stability, scalability and efficiency with zero downtime. When you rebuild the example service, it will automatically update the cluster. To enable:

.env

CLUSTER_ENABLED=true

Authorization

MicroLib supports JSON Web Tokens for authorization of protected routes. To enable it, you must provide a JSON Web Key Set (JWKS) URI from which to retrieve the public key of the signer of the JSON Web Token. You can set up an account with Auth0 for testing purposes. Update the key set configuration in the auth directory.

auth/key-set.json

{
  "cache": true,
  "rateLimit": true,
  "jwksRequestsPerMinute": 5,
  "jwksUri": "https://dev-2fe2iar6.us.auth0.com/.well-known/jwks.json",
  "audience": "https://microlib.io/",
  "issuer": "https://dev-2fe2iar6.us.auth0.com/",
  "algorithms": ["RS256"]
}

.env

AUTH_ENABLED=true

HTTPS

To enable Transport Layer Security, you'll need to import and trust the certificate in the cert directory or provide your own cert and private key. Then update .env.

cert

-rw-r--r--  1 tysonrm  staff  1090 Mar 19 06:55 csr.pem
-rw-r--r--  1 tysonrm  staff  1314 Mar 19 06:30 domain.crt
-rw-r--r--  1 tysonrm  staff  1679 Mar 19 06:54 server.key

.env

SSL_ENABLED=true

Installation


Zero Downtime - Zero Install Deployment, API Generation


Reference Architecture

(demo video: MicroLib.mov)

MicroLib prevents vendor lock-in by providing a layer of abstraction on top of vendor serverless frameworks. A vendor's API gateway simply proxies requests to the MicroLib serverless function, which is the only function adapted to the vendor's platform. From that point on, MicroLib handles the "deployment" of functions as federated modules. Developers don't even need to know which cloud is hosting their software!
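As a hedged sketch of that abstraction (the handleServerlessRequest name, its module path and its signature are assumptions, not MicroLib's actual exports), the only vendor-specific code is a thin shim that forwards gateway events to a cloud-agnostic handler:

// Illustrative AWS Lambda shim; `handleServerlessRequest` and its module path
// are hypothetical - only this file would need to change per cloud vendor.
const { handleServerlessRequest } = require("./microlib-serverless"); // hypothetical module

exports.handler = async function (event) {
  // Translate the vendor's gateway event into a generic request and let the
  // framework route it to the appropriate federated module.
  const response = await handleServerlessRequest({
    path: event.path,
    method: event.httpMethod,
    body: event.body,
  });
  return { statusCode: response.status, body: JSON.stringify(response.body) };
};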

Further Reading

Stop Paying the Microservice Premium: Achieving Deployment Independence in a Monolithic Architecture

Clean Microservices: Building Composable Microservices with Module Federation

Webpack 5 Module Federation: A game-changer in JavaScript architecture

Microservice trade-offs

