
Shedu

Shedu is a project for easy deployment of a KBEngine cluster in Docker on Linux.

Overview

The project builds, packages, and starts KBEngine and its environment in Docker containers.

The main goal of the project is to simplify KBEngine deployment. You don't need to know how to build a C++ project or which libraries to install. Moreover, the entire kbe infrastructure (database, SMTP server, etc.) can be built and started with a single command. You can choose a kbe commit for your kbe build and easily link your "assets" to the chosen kbe version. Change the variables in "configs/example.env" and save the file as a new one with your configuration.

You can run a KBEngine cluster for either KBEngine version 1.x or 2.x (any commit).

The project also deploys tools for convenient collection and viewing of logs based on the ELK stack (Elasticsearch + Logstash + Kibana).

The project can be used for convenient local development, quick MVP creation, and quick testing of game development business ideas.

Tested on Ubuntu 20.04, CentOS 7, Ubuntu 22.04

Docker environment

Table of contents

Glossary

Deploy

Configuration file

KBEngine logging (Elasticsearch + Logstash + Kibana)

Build activity

Assets normalization

The script "modify_kbe_config"

Cocos2D build example

Debug KBEngine in Docker

KBEngine server startup sequence diagrams

Stopping the game server

Glossary

  • Host - the server with Docker and Docker Compose installed

  • Shedu - this project. A utility that deploys a KBEngine cluster in a Docker environment and provides an API to manage the cluster

  • KBEngine - a server game engine written in C++

  • Assets - server game logic in Python plus user server settings (see the official KBEngine demo assets)

  • Game - KBEngine + Assets

  • Engine image - a docker image containing only built KBEngine

  • Game image - a docker image containing built KBEngine + Assets

  • KBEngine Component - a system process with a specific responsibility in the game's business process

    • Machine / Supervisor - stores information about running components (their id, address, etc.)
    • Logger - collects and writes log files of components
    • Interfaces - provides access to third-party billing, third-party accounts, and third-party databases
    • DBMgr - manages communication with the database. Creates and modifies the database schema according to the configuration files of the game (assets). Responsible for saving entities in the DB
    • BaseappMgr - manages load balancing for Baseapps. Responsible for fault tolerance of Baseapps
    • CellappMgr - keeps track of all Cellapps and their load, distributes the creation of game entities among Cellapps
    • Cellapp - processes game logic related to space or location: manages spatial data (position and direction of entities), adds geometric maps (navmesh), sets spatial triggers, creates and destroys levels
    • Baseapp - keeps the connection with the client after it is authenticated by Loginapp. Proxies calls to Cellapp. Used for game logic of non-spatial objects (chats, managers, clans)
    • Loginapp - the connection point for clients. Responsible for authentication of the game client. After successful authentication, passes the Baseapp address for the subsequent connection
  • KBEngine Environment - a set of services around the game engine components: database, log services, smtp server, web server (for Cocos2D, for example), etc.

  • KBEngine Cluster - KBEngine Component + KBEngine Environment

  • Client - a plugin supporting the KBEngine network protocol + a game engine (Cocos2D, Unity, Godot, UE, etc.)

Deploy

Download the project

git clone https://github.com/ve-i-uj/shedu
cd shedu
git submodule update --init --recursive

Install Docker and Compose

This project uses Docker, so you need to install Docker and Docker Compose V2 if they are not installed. You can install them according to the official Docker documentation, or [at your own risk] install them using the scripts that come with this project. If both Docker and Docker Compose are already installed on the host, you can skip this step.

bash scripts/prepare/install_docker.sh
bash scripts/prepare/install_compose_v2.sh

The user will be added to the "docker" group. You need to log out and log back in for the change to take effect.

Install Dependencies

# This script will install make git python3
./configure

The configuration file

The project reads the settings from the ".env" file located in the root directory. There are examples of .env files in the "configs" directory. If you use the example.env config file without changing the settings, the game server will be launched with kbengine_demos_assets. You can start the kbe server with your own "assets" by simply pointing the .env file at your "assets" directory. For more information, see the settings described here. Copy an example env file to the root directory and change it if you want to set custom settings.

cp configs/example.env .env
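Pointing the server at your own assets amounts to changing one variable in the copied file. A minimal self-contained sketch (it works on a throwaway file in /tmp, and the assets path is a placeholder, not a real path):

```shell
# Create a throwaway stand-in for .env (the real file comes from configs/).
cat > /tmp/shedu_example.env <<'EOF'
KBE_ASSETS_PATH=demo
GAME_NAME=mygame
EOF

# Point KBE_ASSETS_PATH at your own assets directory (placeholder path).
sed -i 's|^KBE_ASSETS_PATH=.*|KBE_ASSETS_PATH=/abs/path/to/my_assets|' /tmp/shedu_example.env

grep '^KBE_ASSETS_PATH=' /tmp/shedu_example.env
```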

Build KBEngine

There are several pre-built kbe images on Docker Hub, so building kbe might take just a few minutes (or only a few seconds if the Docker cache is used).

Build the engine

make build_kbe

Build the game

make build_game

Launch the game

make start_game

Delete game artifacts

make clean_game

View logs from the console (or view logs in ELK web interface)

make logs_console

Other operations

make help

To switch between games, you only need to stop the instance of the running game, change the ".env" configuration file (or copy an existing one), build the image (if not already built), and run an instance of the other game. If the image of the other game already exists, all you need to do is stop the services, change the config for the new game, and start its services. There is no need to delete images of the old game. By keeping the Docker images of the previous game, you can switch and run different game instances very quickly (in less than a minute).

Currently, only one game and one ELK instance can run at the same time on the same host. This is due to a port conflict. Theoretically, if you change the ports in the docker-compose.yml file, you can run several games at the same time, but this functionality has not been tested yet.
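For reference, such a port remap would be a change along the following lines in docker-compose.yml (a hypothetical sketch: the service name and port numbers are examples, not the project's actual compose file):

```yaml
# Hypothetical fragment: bump the host-side port so a second game
# instance does not conflict with the first one.
services:
  loginapp:
    ports:
      - "20014:20013"   # host:container - a different host port for game #2
```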

Configuration file

The configuration file must be placed in the project root and named ".env". There are some examples in the configs directory.

cp configs/kbe-v2.5.12-demo.env .env

Set the path to Assets in the KBE_ASSETS_PATH variable in the config (the ".env" file). The path must be an absolute path pointing to the "assets" folder on your host; this folder will be copied into the game image. If the KBE_ASSETS_PATH value is "demo", the latest version of kbengine_demos_assets will be downloaded and used (these assets are suitable for the KBEngine client demos). For demo purposes, just copy an example config file as in the command above.

Settings

Database build settings

  • MYSQL_ROOT_PASSWORD - [Mandatory] Database root password
  • MYSQL_DATABASE - [Mandatory] Database name
  • MYSQL_USER - [Mandatory] Database user
  • MYSQL_PASSWORD - [Mandatory] Database user password

KBE build settings

  • KBE_GIT_COMMIT - [Optional] KBEngine will be compiled from the source code at the given git commit. The latest commit of the kbe repository will be used if the variable is unset. Example: 7d379b9f
  • KBE_USER_TAG - [Optional] The compiled kbengine image will have this tag. For example: v2.5.12

Game assets settings

  • KBE_ASSETS_PATH - [Mandatory] The absolute path to the "assets" directory. If the value is "demo" then the kbe demo "assets" will be used
  • KBE_ASSETS_SHA - [Optional] You can set the "assets" git sha if the "assets" is a git project. Example: 81f7249b
  • KBE_ASSETS_VERSION - [Mandatory] The version of the "assets". This variable labels the final game image; it cannot be empty. Set any non-empty string if your project has no version.
  • KBE_KBENGINE_XML_ARGS - [Optional] With this field, you can change the values of the fields in kbengine.xml in the final image of the game. Example: KBE_KBENGINE_XML_ARGS=root.dbmgr.account_system.account_registration.loginAutoCreate=true;root.whatever=123
  • KBE_PUBLIC_HOST - [Mandatory] The external address of the server where the KBEngine Docker cluster will be deployed. For home development, when both client and server are on the same computer, you can use the default gateway address.
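The KBE_KBENGINE_XML_ARGS format can be illustrated with a short bash sketch (this is not Shedu's actual parser, just a demonstration of the semicolon-separated "dotted.path=value" structure):

```shell
# Example value taken from the documentation above.
KBE_KBENGINE_XML_ARGS='root.dbmgr.account_system.account_registration.loginAutoCreate=true;root.whatever=123'

# Split on ';' into "dotted.xml.path=value" pairs.
IFS=';' read -ra pairs <<< "$KBE_KBENGINE_XML_ARGS"
for pair in "${pairs[@]}"; do
    path="${pair%%=*}"    # XML element path, e.g. root.whatever
    value="${pair#*=}"    # value to set, e.g. 123
    echo "set <${path}> to <${value}>"
done
```

Each dotted path names an element in kbengine.xml, and the value after "=" is written into that element in the final game image.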

Global Settings

  • GAME_NAME - [Mandatory] For each instance of the game there is a separate kbe environment. The name of the game is a unique identifier for the kbe environments. It cannot be empty.

Other Settings

make print_vars_doc

KBEngine logging (Elasticsearch + Logstash + Kibana)

The Shedu project demonstrates the use of the ELK stack (Elasticsearch + Logstash + Kibana) to easily view KBEngine logs. You can conveniently store game server logs in Elasticsearch and view them through Kibana or Dejavu (frontends for Elasticsearch).

The ELK services run as Docker containers. The log documents are stored in a named volume. The ELK version is v8.5.3.

Before starting, you need to build the service images if they are not already built and then start the services.

make build_elk
make start_elk

The ELK stack takes some time to start, so after starting you may need to wait a few minutes for it to become available. Open the web interfaces to view logs:

# The "logs_kibana" rule exports some user settings to the Kibana view before opening the web page
make logs_kibana
make logs_dejavu

ELK

Dejavu page

Dejavu


For documentation on using Kibana or Dejavu see official sites.

The life cycles of game services and ELK services are independent. ELK will work without a running game; similarly with the game: the game works without ELK.

Logging in KBEngine (brief overview)

At the engine level, the log4cxx library (log4j for C++) is used for logging. The default log4cxx configuration files are located in the kbe/res/server/log4cxx_properties_defaults directory; it contains a separate file for each component. The logging settings can be overridden by defining custom log4cxx settings in the res/server/log4cxx_properties folder.

By default, logs are written to the assets/logs directory. If all KBEngine server components are located on the same host, all logs will be in this folder. By default, each component sends all its logs to the Logger component (using the KBEngine message protocol), and Logger writes the received logs to the assets/logs folder. The log file names follow the pattern logger_<component_name>. Some of the critical logs (errors and warnings) are also written by the components themselves to the assets/logs folder under their own names (for example, "machine.log"). These critical logs are not sent to Logger.

KBEngine logs + ELK for collecting logs

Operating procedure:

  • KBEngine default settings remain unchanged (logs are still written to the logs directory)
  • There is a separate volume created for the logs
  • The logs folder of each container is mounted in the separate log volume
  • This volume is also mounted to the Logstash container
  • Logstash collects all new records from the logs folder, normalizes them and sends them to Elasticsearch
  • Elasticsearch stores documents in its own volume (the ES volume and the log volume are different volumes)
  • To view the logs stored in Elasticsearch, you can use the web interfaces of Kibana or Dejavu services locally running in Docker
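The volume wiring described above can be sketched as a compose fragment (the names are illustrative; see the project's docker-compose.yml for the real definitions):

```yaml
# Hypothetical sketch of the volume layout described above.
services:
  logger:
    volumes:
      - kbe-logs:/opt/kbengine/assets/logs   # component writes logs here
  logstash:
    volumes:
      - kbe-logs:/logs:ro                    # Logstash reads the same volume
volumes:
  kbe-logs:   # shared log volume
  es-data:    # separate volume for Elasticsearch documents
```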

Logstash configuration settings are located in the shedu/data/logstash folder. To customize the logging fields, you can modify the grok pattern "LOG_MESSAGE". It is convenient to combine customization of this pattern with logging from the game scripts via the Python logging module. An example of setting up game script logging in Python can be found here.
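As an illustration only (the real "LOG_MESSAGE" pattern lives in shedu/data/logstash and may differ), a grok customization could look like this:

```
filter {
  grok {
    # Hypothetical pattern: split a log line into level and message fields.
    match => { "message" => "%{LOGLEVEL:level}%{SPACE}%{GREEDYDATA:log_message}" }
  }
}
```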

Stopping ELK services

make stop_elk

Cleaning up ELK services

make clean_elk

Build activity

Build KBEngine source

Build kbe


Build Assets

Build Assets


Cocos2D example

Below is an example of building a server cluster and running the demo client on Cocos2D.

To run the example, we need KBEngine version v1.3.5 (commit 26e95776) and assets version v1.3.5 (commit eb034a2e). To deploy a cluster of this version with Shedu, all you need to do is specify these commits in the configuration file.

This is what the config file will look like:
MYSQL_ROOT_PASSWORD=pwd123456
MYSQL_DATABASE=kbe
MYSQL_USER=kbe
MYSQL_PASSWORD=pwd123456

KBE_GIT_COMMIT=26e95776
KBE_USER_TAG=v1.3.5

KBE_ASSETS_PATH=demo
KBE_ASSETS_SHA=eb034a2e
KBE_ASSETS_VERSION=v1.3.5

GAME_NAME=cocos-demo

KBE_PUBLIC_HOST=0.0.0.0

The configs directory already has a ready-made config for a Cocos2D client. Just copy it, then build and run the cluster. If a cluster for another game is running, you must stop it first and only then copy the config.

Attention! If the client and the KBEngine cluster are running on different computers, the KBE_PUBLIC_HOST variable must be set to the address of the computer running KBEngine, otherwise the client will not be able to connect to the server. The address must be set before building the KBEngine cluster.

cp configs/kbe-v1.3.5-cocos-js-v1.3.13-demo-v1.3.5.env .env
make build_game
make start_game

And that's it, the server is running. Server logs can be viewed like this

make logs_console

Next, run the game client on Cocos2D. To run the client, you need a web server. To make the demonstration easier, I have added a Dockerfile for the Cocos2D client build. There are also several Makefile rules to build, start, stop, and clean up the client.

Dockerfile.cocos-demo
    FROM nginx:1.23.3 as demo_client
    LABEL maintainer="Aleksei Burov <[email protected]>"

    WORKDIR /opt
    RUN apt-get update && apt-get install git -y
    RUN git clone https://github.com/kbengine/kbengine_cocos2d_js_demo.git \
        && cd /opt/kbengine_cocos2d_js_demo \
        && git submodule update --init --remote

    # Replace with host address where Loginapp KBEngine is located
    ARG KBE_PUBLIC_HOST=0.0.0.0
    WORKDIR /opt/kbengine_cocos2d_js_demo
    RUN sed -i -- "s/args.ip = \"127.0.0.1\";/args.ip = \"$KBE_PUBLIC_HOST\";/g" cocos2d-js-client/main.js

    FROM nginx:1.23.3
    COPY --from=demo_client /opt/kbengine_cocos2d_js_demo/cocos2d-js-client /usr/share/nginx/html

Building the client image 1) pulls an Nginx image, 2) clones the demo client into the image, and 3) changes the server address in the client code.

make cocos_build
make cocos_start

Cocos2D

Debug KBEngine in Docker

The project has VSCode settings so that you can run KBEngine under a debugger. The debugger launches and connects to the components running in the Docker container. A debugger is the best answer to many questions.

image

You do not need to build KBEngine (it is already built in the container), but you do need the KBEngine source code, which you can download from the GitHub repository. The source code must be at the same commit as the KBEngine version in the Docker container (the KBE_GIT_COMMIT value in the .env file).

You need to add the KBEngine source code to the Shedu workspace in VSCode (File -> Add folder to workspace...) - this is necessary for navigating through the code. The mapping between KBEngine in the Docker container and VSCode is already configured.

To launch components under a debugger, you must first launch the containers themselves, but without the KBEngine components. To do this, set the GAME_IDLE_START=true variable in the .env config. After changing the .env file, you need to rebuild the project (make clean_game build_game). Then launch Shedu with the make start_game rule.

You can check that the KBEngine process is not running in a container (e.g. Logger) via the [Logger] ps aux task in VSCode. The output will look something like this:

USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           1  1.4  0.0  11704  2512 ?        Ss   09:50   0:00 bash /opt/shedu/scripts/deploy/start_component.sh
root           6  0.0  0.0   4416   680 ?        S    09:50   0:00 tail -f /dev/null
root          31  0.0  0.0  51748  3336 ?        Rs   09:50   0:00 ps aux

We can see that there is no KBEngine process; tail -f /dev/null keeps the container running idle so it does not terminate.

Next, set a breakpoint in the KBEngine source code and run Debugger from VSCode (F5). That's actually all. Then we catch any place that interests us.

Breakpoint in Logger

image


There is a separate configuration for each component in the launch.json file. Components need to be launched sequentially, one by one, by hand.

Run the Supervisor component under the debugger

This project does not have the Machine component; instead it runs a component written in Python called Supervisor. To run the Supervisor under the debugger, you need to set the DEBUG_SUPERVISOR=true variable in the .env file. After changing the .env file, you need to rebuild the project (make clean_game build_game). In VSCode, select the [Docker] Supervisor configuration, run it under the debugger, set a breakpoint in the right place, and catch the execution.

Breakpoint in Supervisor

image


Attention! The Supervisor ignores the GAME_IDLE_START=true variable because this variable is overridden in docker-compose.yml. Therefore, by the time you start debugging the Supervisor from VSCode, the Supervisor will already be running. But with DEBUG_SUPERVISOR=true it is run via debugpy, and debugpy is already waiting for a connection from the VSCode debugger. Thus, breakpoints in main.py will not work (because the application is already running by the time the debugger connects), but breakpoints anywhere else will work.

Run all components under the debugger

When launching all components under the debugger, you need to wait for each component to become ready, because healthcheck is disabled in this case. For example, DBMgr takes a long time to start. The database starts normally even with GAME_IDLE_START=true; this variable does not affect the database. Components must be launched sequentially in the order they are listed in launch.json (or in the debug configuration dropdown in VSCode). The exception is BaseappMgr and CellappMgr: they need to be run one right after the other without waiting. Accordingly, if you need to debug the Loginapp component, you will need to launch all the components one by one, starting from Supervisor. The easiest way to verify that a component is running normally is to look at the logs (make logs_kibana).

Debug all components

image

Possible problems

Most likely, the components of KBEngine v1.x will not start correctly under the debugger. This is because each component has its own cluster ID (uid) derived from the UID environment variable. Components use this identifier to determine which cluster they belong to when several game clusters run on the same computer under different users. But if UID=0, as is the case when running as root, each component generates a random uid and the components cannot find each other. In KBEngine v2.x it is possible to set the uid via the UUID environment variable, but KBEngine v1.x has no such option: the uid comes only from UID, i.e. the user ID.
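The uid behavior described above can be sketched in bash (an illustration, not engine code; bash reserves $UID, so KBE_UID stands in for the UID environment variable here):

```shell
# Sketch of the uid selection logic described above (not the actual engine code).
pick_cluster_uid() {
    if [ "${KBE_UID:-0}" -ne 0 ]; then
        echo "$KBE_UID"                        # stable uid: components find each other
    else
        echo $(( (RANDOM % 60000) + 1000 ))    # root (UID=0): random uid, components diverge
    fi
}

KBE_UID=1001
uid_a="$(pick_cluster_uid)"
uid_b="$(pick_cluster_uid)"    # same as uid_a: both components join one cluster

KBE_UID=0
uid_c="$(pick_cluster_uid)"    # random value, differs from component to component
```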

By default, Docker runs processes in a container as root (i.e. UID=0). In addition, there is the Logstash container, which also runs as root. The Logstash container and the KBEngine cluster containers share a log volume. If the KBEngine containers were run as a non-root user, there would be problems with access rights to the log volume. Therefore, the container entry script starts as root, changes the permissions of its folders, and only then launches the KBEngine component as a normal user. When running under a debugger this is not possible, because the compiled KBEngine component binary must be run directly from the debugger.

I added the UUID variable to docker-compose.yml, so when running under the debugger this variable is present in the container and the uid is created from it. With the same uid, the components find each other. But KBEngine v1.x does not create the uid from the UUID variable, so most likely DBMgr will not be able to find Interfaces and Logger, and nothing will work further.

I emphasize that this situation applies only to launching under the debugger. A regular cluster start (without GAME_IDLE_START=true) works for KBEngine v1.x as well.

KBEngine server startup sequence diagrams

Below are sequence diagrams of the server startup. The diagrams illustrate the messages, the order in which they are sent, and the associations between the components of a KBEngine cluster during initial startup. The server starts up without game entities, so the diagrams show only the main initialization links.

There are 4 basic actions that all components can perform:

  1. Ask Machine / Supervisor for the address of the component
  2. Establish a permanent TCP connection between two components (with querying the state of the component)
  3. Register yourself in Machine / Supervisor
  4. Connect to the Logger component and send it log records

First, these 4 main sequences will be illustrated, then the startup diagram of the first 6 components + DB will be given (Supervisor, Logger, Interfaces, DBMgr, BaseappMgr, CellappMgr). Next, there will be diagrams for Cellapp, Baseapp, Loginapp.

Common actions for all components

GetComponentAddress LoggerRegisterNewApp OpenPermanentTCPConnection RegisterComponent


Supervisor, Logger, Interfaces, DBMgr, BaseappMgr, CellappMgr

Cluster start (No Entities)


Baseapp

Start Baseapp (No Entities)


Cellapp

Start Cellapp (No Entities)


Loginapp

Start Loginapp (No Entities)


Stopping the game server

The approach to stopping a KBEngine cluster has been changed to integrate with Docker's approach to stopping a container. Below, the original approach to stopping a KBEngine cluster is described first, followed by the Shedu approach and what has been changed.

In the original KBEngine approach, a kbengine component was stopped by the Machine component. Machine receives the Machine::stopserver message and sends a ::reqCloseServer message to the component to stop it. The stopping component starts finalization and checks it for completion. In the kbengine.xml file, you can set some shutdown settings: root/shutdown_time (the delay before shutdown begins) and root/shutdown_waittick (the interval for checking that the component shutdown has completed). If root/shutdown_time is zero, shutdown begins immediately and synchronously, and the finalization of game logic is skipped (e.g. the onReadyForShutDown callback).

Stopping a component in the original KBEngine architecture

Stopping a component in the original KBEngine architecture
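The two shutdown settings can be sketched as a kbengine.xml fragment (the element nesting is assumed from the description above; check the engine's real config files for the exact structure):

```xml
<root>
    <!-- delay before component shutdown begins; 0 = immediate, synchronous -->
    <shutdown_time> 1.0 </shutdown_time>
    <!-- interval for checking that the shutdown has completed -->
    <shutdown_waittick> 1.0 </shutdown_waittick>
</root>
```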


In Shedu, the Docker client manages the services, and the Machine component has been replaced by the Supervisor component. Supervisor receives the Machine::stopserver message but does nothing. Stopping a component in Shedu is triggered when Docker sends SIGTERM to the process with PID=1. This process is the bash script start_component.sh. The script intercepts the signal via trap and sends the ::reqCloseServer message to the component. The script then waits for the server component process to stop. After the component has stopped, the script exits and the container stops. In the Shedu config you can set the maximum waiting time for the container to stop via the KBE_STOP_GRACE_PERIOD variable; after this timeout Docker sends SIGKILL to the process with PID=1. Shedu sets root/shutdown_time=1 and root/shutdown_waittick=1 so that component shutdown starts after 1 second and completion is checked every second.

Stopping a component in a Shedu cluster

Stopping a component in the Shedu cluster
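The trap-based stop sequence can be sketched in bash (a simplified stand-in: sleep replaces the component binary, a self-sent SIGTERM simulates docker stop, and the real start_component.sh sends ::reqCloseServer instead of a plain kill):

```shell
#!/usr/bin/env bash
# Simplified sketch of the SIGTERM handling described above.

sleep 300 &            # stand-in for the KBEngine component process
child=$!

stopped="no"
on_term() {
    kill "$child" 2>/dev/null || true   # real script: send ::reqCloseServer instead
    wait "$child" 2>/dev/null || true   # wait for the component to finish
    stopped="yes"
}
trap on_term TERM

# Simulate "docker stop": Docker sends SIGTERM to PID 1 of the container.
( sleep 1; kill -TERM $$ ) &

wait "$child" 2>/dev/null || true       # interrupted by the trapped signal
echo "component stopped: $stopped"
```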


When a component stops, its TCP connections with other components are broken. For example, Loginapp has a persistent TCP connection with BaseappMgr. When Loginapp is stopping, BaseappMgr cannot tell whether Loginapp is shutting down normally or has terminated abnormally. Therefore, BaseappMgr will log an error about the lost connection.

Components::removeComponentByChannel: loginapp : 9001, Abnormal exit(reason=disconnected)! Channel(timestamp=1253005611538929, lastReceivedTime=1252997482797767, inactivityExceptionPeriod=203836746624)

In the original KBEngine architecture, the stop message is sent to all components and they all start shutting down at the same time, so there is no TCP connection loss error. In Shedu, a stop message is sent sequentially to each component after the services on which the terminated component depends (based on depends_on in docker-compose.yml) have completed. Therefore, so that this error would not be confusing during component shutdown, I used Logstash to change the logging level of this message from ERROR to INFO. The message will still appear in Kibana, but with a different logging level. A lost connection means that the component has terminated; if it is a real error situation, it should be visible at the container management level.
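The level rewrite can be sketched as a Logstash filter fragment (hypothetical: the condition and field names are illustrative, not the actual Shedu configuration):

```
filter {
  # Downgrade the expected disconnect message so it does not look like a failure.
  if [message] =~ /Abnormal exit\(reason=disconnected\)/ {
    mutate { replace => { "level" => "INFO" } }
  }
}
```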

shedu's People

Contributors

ve-i-uj

