Simple micro-service storing task durations and calculating the average

This is a simple project that demonstrates some features of the Twelve-Factor App paradigm for microservices. We started with a few independent features, and we are going to extend them until the paradigm is covered completely.

This service provides a simple feature for inserting task execution stats, such as a task id and the task duration in milliseconds. The service interface is composed of two end-points:
- POST /tasks/add - insert end-point, accepting a JSON body such as:
```json
{
  "taskId": "alphanumericId",
  "duration": 200
}
```
As answer, the user receives a plain string: ok or error.
- GET /tasks/stats/{taskId} - retrieves the average duration across all stored tasks with the same id. The available answers are:
1. [200] Statistic JSON:
```json
{
  "taskId": "alphanumericId",
  "average": 200.0
}
```
2. [400, 404, 500] Error JSON:
```json
{
  "code": "NOT_FOUND",
  "message": "Task id sdfasf not found",
  "name": "myexceptionclassname"
}
```
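As a rough illustration, the two response payloads above can be modeled as plain Java 8 classes. This is only a sketch; the class and field names here are assumptions, not necessarily the ones used in the project:

```java
// Hypothetical DTOs mirroring the two JSON payloads above (names are illustrative).
public class Payloads {

    /** Statistic response: {"taskId": "...", "average": 200.0} */
    public static class TaskStats {
        private final String taskId;
        private final double average;

        public TaskStats(String taskId, double average) {
            this.taskId = taskId;
            this.average = average;
        }

        public String getTaskId() { return taskId; }
        public double getAverage() { return average; }
    }

    /** Error response: {"code": "...", "message": "...", "name": "..."} */
    public static class ApiError {
        private final String code;
        private final String message;
        private final String name;

        public ApiError(String code, String message, String name) {
            this.code = code;
            this.message = message;
            this.name = name;
        }

        public String getCode() { return code; }
        public String getMessage() { return message; }
        public String getName() { return name; }
    }

    public static void main(String[] args) {
        TaskStats stats = new TaskStats("alphanumericId", 200.0);
        System.out.println(stats.getTaskId() + " -> " + stats.getAverage());
    }
}
```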
Technology stack:
- Spring Boot (many libraries)
- H2 Database
- PowerMock
- JUnit
- Jacoco
Features:
The microservice architecture is used to build resilient applications. All information is stored in an external, configurable resource. This service saves data about tasks executed in other services and collects that data; it then provides the average duration over multiple repetitions of the same task, identified by a unique id.
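The average computation itself is straightforward. A minimal Java 8 sketch (the method name and the in-memory list are assumptions for illustration; the real service reads the durations from the database):

```java
import java.util.Arrays;
import java.util.List;
import java.util.OptionalDouble;

public class AverageSketch {

    // Average of all recorded durations (in ms) for one task id.
    // An empty input yields an empty OptionalDouble, which the service
    // would translate into a NOT_FOUND error response.
    static OptionalDouble averageDuration(List<Long> durationsMs) {
        return durationsMs.stream().mapToLong(Long::longValue).average();
    }

    public static void main(String[] args) {
        // Three repetitions of the same task: (700 + 1100 + 900) / 3 = 900.0
        List<Long> samples = Arrays.asList(700L, 1100L, 900L);
        System.out.println(averageDuration(samples).getAsDouble()); // prints 900.0
    }
}
```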
This micro-service has been designed to be released in a cloud/private DC on the following platforms:
- Docker machines
- Docker-Swarm (Portainer.io)
- MESOS (Marathon)
- Kubernetes
- Spinnaker pipeline to the cloud/PDC
- Other resilient container managers
The available databases are:
- H2 or embedded databases with a similar SQL grammar
- MySQL Server
- Aurora
- Oracle
- SQL Server
- PostgreSQL
- And many other RDBMSs supporting auto-increment columns
We are defining a new feature for remote logging of events, just to demonstrate the effective power of microservices. With a similar architecture, we are also going to define a dashboard to monitor instances. Please leave a comment if you are interested in a demonstration of remote control and TelePort of docker images directly from dashboard commands.
The datasource SQL script is provided in the resources folder: schema-h2.sql

The schema is the same for many other databases. You can define a custom application.properties (the Spring Boot configuration file) and replace the original one with the following command-line argument in the java command: -Dspring.config.location=/location/to/your/configurationfile
You can also disable the auto-definition of the data structure if you decide to run custom commands and provision the database externally, as described in the Spring configuration guide; the common application properties reference is a good source of inspiration.
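For example, a custom application.properties pointing the service at an external MySQL instance might look like this (host, credentials and database name are placeholders, not the project's defaults):

```properties
# External datasource (placeholder values - adapt to your environment)
spring.datasource.url=jdbc:mysql://db-host:3306/tasksdb
spring.datasource.username=tasks_user
spring.datasource.password=change-me
# Disable automatic schema creation when the database is provisioned externally
spring.jpa.hibernate.ddl-auto=none
```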
This is a Maven project, written in Java 8, so you need:
- Oracle Java 8
- Maven
- An Internet connection to download the required artifacts.
Optionally, for DevOps (to run the resilient container, to be released soon):
Clone this project:
git clone https://github.com/hellgate75/track-your-tasks.git
cd track-your-tasks
Inside the project folder ...
To create the runnable jar binaries run :
mvn -U clean install
To create documentation jars run :
mvn javadoc:jar
This micro-service must first be built; then, in the project root folder, you can execute one of the following commands.
To execute java jar binaries run :
java -jar target/track-your-tasks-0.0.1-SNAPSHOT-spring-boot.jar
If you want to specify an environment variable named APP_ENV
(e.g. : DEV,INT,PROD) you can run, for example:
APP_ENV=DEV java -jar target/track-your-tasks-0.0.1-SNAPSHOT-spring-boot.jar
In this case, next to the jar you need to put a custom property file named application-{APP_ENV lower case}.properties
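For instance, with APP_ENV=DEV the service would pick up application-dev.properties. A minimal example (the values are placeholders; only the 8090 port matches the curl examples below):

```properties
# application-dev.properties (placeholder values)
server.port=8090
spring.datasource.url=jdbc:h2:mem:tasksdb
```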
Using docker: after you have built the sources, you can run a custom MySQL environment by means of a docker compose file, running:
docker-compose --file docker-compose-dev.yaml up -d
Then, when you have tested the app and/or want to delete the containers and images, you can run:
docker-compose --file docker-compose-dev.yaml down -v
docker rmi -f track_your_tasks_app
docker rmi -f track_your_tasks_db
You can send some requests via curl or any browser REST API plugin.
Using curl to insert task data :
curl -X POST -H 'Content-Type: application/json' -H 'Accept: text/plain' -d '{"taskId":"sdadaduut7564","duration":700}' http://localhost:8090/tasks/add
answer:
ok
curl -X POST -H 'Content-Type: application/json' -H 'Accept: text/plain' -d '{"taskId":"sdadaduut7564","duration":1100}' http://localhost:8090/tasks/add
answer:
ok
curl -X POST -H 'Content-Type: application/json' -H 'Accept: text/plain' -d '{"taskId":"sdadaduut7564","duration":900}' http://localhost:8090/tasks/add
answer:
ok
Using curl to read task average data :
curl -X GET -H 'Accept: application/json' http://localhost:8090/tasks/stats/sdadaduut7564
answer:
```json
{
  "taskId": "sdadaduut7564",
  "average": 900.0
}
```
The same can be done with a web browser REST plugin (e.g. Advanced REST Client):
- Insert task data
- Read task average data
The application is provided with unit and integration tests, which run multiple application instances and execute isolated test scenarios. Test reports are generated in the target folder.
We are preparing a full experience with docker machine and a container-based micro-service configuration. In the test sample we have connected a local H2 database.
In the plan there are demonstrations around:
- Security
- Remote configuration
- Remote control
- Alerts
- Auto-Restart for maintainability purposes
- Other interesting features
Please read the Twelve-Factor App methodology before engaging with the next releases.
We will soon release a docker-compose version with a MySQL database and a custom configuration injected into the application, in order to read and save data from a new datasource across a private docker network. The new database will not be accessible from the host machine. The next adventure will be the definition of the Kubernetes files to deploy the app into a cluster with a scaling policy, together with automated kubectl and helm scripts.