
cartridge-cli's Introduction

Tarantool


Tarantool is an in-memory computing platform consisting of a database and an application server.

It is distributed under BSD 2-Clause terms.

Key features of the database:

  • MessagePack data format and a MessagePack-based client-server protocol.
  • Two data engines: a 100% in-memory engine with complete WAL-based persistence, and its own LSM-tree implementation for use with large data sets.
  • Multiple index types: HASH, TREE, RTREE, BITSET.
  • Document-oriented JSON path indexes.
  • Asynchronous master-master replication.
  • Synchronous quorum-based replication.
  • RAFT-based automatic leader election for the single-leader configuration.
  • Authentication and access control.
  • ANSI SQL, including views, joins, referential and check constraints.
  • Connectors for many programming languages.
  • The database is a C extension of the application server and can be turned off.

Supported platforms are Linux (x86_64, aarch64), Mac OS X (x86_64, M1), FreeBSD (x86_64).

Tarantool is ideal for data-enriched components of scalable Web architecture: queue servers, caches, stateful Web applications.

To download and install Tarantool as a binary package for your OS or using Docker, please see the download instructions.

To build Tarantool from source, see detailed instructions in the Tarantool documentation.

To find modules, connectors and tools for Tarantool, check out our Awesome Tarantool list.

Please report bugs to our issue tracker. We also warmly welcome your feedback on the discussions page and questions on Stack Overflow.

We accept contributions via pull requests. Check out our contributing guide.

Thank you for your interest in Tarantool!

cartridge-cli's People

Contributors

0x501d, ananek, artembo, better0fdead, dependabot[bot], differentialorange, dokshina, grishnov, hustonmmmavr, kirill-churkin, knazarov, lenkis, leonidvas, lowitea, mmelentiev-mail, mrrvz, nickvolynkin, oleggator, olegrok, onvember, p7nov, patiencedaur, printercu, psergee, roopakv, rosik, runsfor, slemonide, vanyarock01, yngvar-antonsson


cartridge-cli's Issues

Change the way for starting "Getting Started" example

Currently, ./start.sh runs the Tarantool processes in the shell's background, and their stdout gets in the way of following the README. We should change this.

Why don't we just start them in background mode via cartridge-cli?

Pack application in docker

Application build flow

Application build comprises these steps:

  • cartridge.pre-build (or .cartridge.pre);
  • tarantoolctl rocks make;
  • cartridge.post-build;

After that, some additional actions described in the cartridge-cli source code are performed:

  • writing the VERSION file (it contains the Tarantool, SDK, application, and installed rocks module versions);
  • cleaning up the files mentioned in .cartridge.ignore.
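The stages above can be sketched in shell. This is only an illustration: the hook names are the real ones, but the project directory and hooks here are stubs, and `tarantoolctl rocks make` is replaced with a placeholder so the sketch is self-contained:

```shell
# Simplified sketch of the build flow, exercised against a throwaway
# project with stub hooks. NOT the real cartridge-cli implementation.
proj=$(mktemp -d)
cd "$proj"
printf '#!/bin/sh\necho pre-build\n'  > cartridge.pre-build
printf '#!/bin/sh\necho post-build\n' > cartridge.post-build
chmod +x cartridge.pre-build cartridge.post-build

# 1. run the pre-build hook (the deprecated .cartridge.pre is the fallback)
if [ -x ./cartridge.pre-build ]; then hook1=$(./cartridge.pre-build)
elif [ -x ./.cartridge.pre ];    then hook1=$(./.cartridge.pre)
fi
# 2. install rocks dependencies -- stubbed out here
rocks="rocks-make"   # would be: tarantoolctl rocks make
# 3. run the post-build hook
[ -x ./cartridge.post-build ] && hook2=$(./cartridge.post-build)
echo "$hook1 $rocks $hook2"
```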

What's the problem?

There are two classes of problems:

For all distribution types except docker

Rocks modules are built on the local machine. As a result, the package contains executables specific to the OS on which the cartridge pack command was run.

For docker distribution type:

Rocks are built inside a container.
This solves the problem of OS-specific executables, but on the other hand, writing the VERSION file and applying .cartridge.ignore can't be done in the container.

Possible solution

We discussed a few ways to solve this problem, and I suggest the following flow:

  • Application build stages are running in the docker container. The main purpose of this action is to get application distribution directory with the rocks modules specific for the target platform. (For packing to tar.gz user should have an opportunity to specify target system)

  • Then, application files with installed rocks modules are copied on the local machine to perform some actions like applying .cartridge.ignore and writing installed rocks versions to the VERSION file.

  • Next, systemd and tmpfiles directories are formed for rpm and deb.

  • Finally, the application files (with rocks modules and other stuff) are packed into the archive or copied to the resulting image.

This approach solves the problem of rocks built for the wrong platform.
Forming the distribution dir (/usr/share/tarantool/<appname>) is independent of the distribution type, and all cartridge-cli-specific actions are performed on the local machine.
Docker is used only for transforming the current directory's contents into the directory contents required for the target platform.

It will also help us implement the runtime.txt functionality (see #88) in the future.
It will also make it possible to use the deprecated build files (.cartridge.pre + .cartridge.ignore) with the pack docker command.

[Getting Started] Example for creating customer with account

Currently, the Getting Started guide has no example of an HTTP request that creates a customer with an account.
This is a problem because users can't use the update_balance API without an account.

Solution:

  • create a customer with one default account;
  • add a block describing the customer structure (pointing out the account field).

Misleading workdir value in the cluster template

During the Highload meetup, one of the visitors asked me about the workdir value in the template init.lua file. He said:

"This path looks like the path for smth not important (kind of garbage)".

This is misleading and should be changed.

[4pt] Ability to choose a version of Tarantool for packaging in a docker image

Currently, executing the cartridge pack docker command creates a docker image with the same version of Tarantool as on the host machine.

I propose giving users the opportunity to choose which version of Tarantool will be installed in the final image.

I think it would be convenient to do this with an additional flag; if it is not provided, the latest available version can be used.

Get warning about rocks manifest for template application

Error message after packing

warning: can't process rocks manifest file. Dependency information can't be shipped to the resulting package.

Reproducer:

tarantoolctl rocks install cartridge-cli
.rocks/bin/cartridge create
.rocks/bin/cartridge pack tgz myproject

README: add information

Please add the following information to the README:

  1. Mandatory dependencies: tarantool-devel, git, gcc, cmake.
  2. To start the shard correctly, you must click the Bootstrap button in the administrator interface.
  3. Add information about the admin password for the UI.
  4. api.lua is missing:
    local errors = require('errors')
    local err_vshard_router = errors.new_class("Vshard routing error")
    local err_httpd = errors.new_class("httpd error")

Rewrite in Go

I believe it's time to rewrite this utility in Go. It has great value for end users, and building it as a fully static binary will help make it relatively platform-independent and remove a lot of the quirks of keeping it compatible across the enterprise and community editions (by not requiring a runtime).

In addition, I'd like cartridge-cli to be used as a part of application and rock building and publishing flow.

Plus, rewriting it in Go will allow us to incorporate process management and basic monitoring.

rpm pack fails on git clean

docker run --name $tmp_name $image /bin/bash -c \
        "tarantoolctl rocks install cartridge-cli 1.3.1 && \
        /.rocks/bin/cartridge pack rpm /opt/project --version ${version} --name project"
Packing rpm file
Packing CPIO in: /tmp/CTtXas
Running `git clean`
/usr/libexec/git-core/git-submodule: line 509: 0: Bad file descriptor
ERROR: Failed to execute 'cd "/tmp/CTtXas/usr/share/tarantool/project" && /usr/bin/git submodule foreach --recursive /usr/bin/git clean -f -d -X'

v1.3.1
centos:7

Support cluster config options

Currently, I don't have any way to insert my own options into the cluster-wide config.
Some roles are designed to obtain their values from the cluster-wide config, and right now I can push my config only manually (via the WebUI or via an HTTP endpoint).

Hide packed files list on `cartridge pack`

Currently, when packing an application, the archive creation commands are run with the -v flag (e.g. tar -czvf); as a result, the output is flooded with the full list of project files.

We should run these commands without the -v flag.
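The difference is easy to demonstrate: without `-v`, `tar` prints nothing; with `-v`, it lists every archived file:

```shell
# Demonstration with two throwaway files (paths here are stand-ins):
dir=$(mktemp -d)
touch "$dir/a.lua" "$dir/b.lua"
quiet=$(tar -czf "$dir/app.tgz" -C "$dir" a.lua b.lua 2>&1)   # prints nothing
loud=$(tar -czvf "$dir/app2.tgz" -C "$dir" a.lua b.lua 2>&1)  # lists a.lua, b.lua
echo "quiet output: '$quiet'"
echo "verbose output: $loud"
```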

Get rid of npm in the getting-started-app

Since we publish cartridge releases that you don't have to build yourself, we don't need to force the user to install node and npm (which is a burden). Please check that it works without npm and remove that line from the getting-started-app README.

Make packing with TE more flexible

Problems

  • The bundle download URL isn't correct: it uses the tarantool user (and requires passing the download token as an argument).

  • Now pack docker builds the application in a docker container. As a result, the bundle is always downloaded onto the image. By default we should use the bundle that is used locally (as in a local build).

Packing in docker is used for pack docker and pack {rpm,tgz,deb} --use-docker.

Solution

In the Tarantool Enterprise case, both of these commands should deliver the Tarantool Enterprise bundle to the build image. The SDK can be downloaded using a download URL or copied from a local source.

Currently, packing in docker for TE requires the --download-token argument.
I suggest replacing it with --download-url (or --sdk-download-url, or --sdk-url) and --sdk-path arguments.

So:

  • by default: the current SDK is used (with a warning);

  • if --sdk-path is specified: the bundle is copied from the local machine into the container;

  • if --download-url is specified: the bundle archive is downloaded from that URL (env TARANTOOL_DOWNLOAD_URL).
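Under these assumptions, the selection logic could look like the sketch below (flag names are taken from the proposal above and are not final; `sdk_source` is a hypothetical helper):

```shell
# Sketch of the proposed SDK source selection for packing in docker.
sdk_source() {
  sdk_path=$1       # value of --sdk-path, may be empty
  download_url=$2   # value of --download-url (or TARANTOOL_DOWNLOAD_URL)
  if [ -n "$sdk_path" ]; then
    echo "copy bundle from $sdk_path into the container"
  elif [ -n "$download_url" ]; then
    echo "download bundle from $download_url"
  else
    echo "warning: using the current local SDK" >&2
    echo "use current SDK"
  fi
}
sdk_source "" "https://example.com/sdk.tar.gz"
```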

Maybe it would be better to replace the default behaviour with a flag (e.g. --local-sdk) to make it clearer for the end user.
I'm sure that when pack {rpm,tgz,deb} --use-docker is used, the user doesn't want to use their current SDK (otherwise they wouldn't specify the --use-docker option).
In this case, --local-sdk would be useful for making the Cartridge CLI behaviour clearer.

Questions

Now, in the TE case, the resulting package (or container) contains only the tarantool and tarantoolctl binaries from the SDK. Is that OK? Should we deliver the whole bundle?

False "All instances started!"

Something is missing: after performing all the getting-started-app steps, demo.yml is absent, and the script even falsely reports "All instances started!", although of course none of them are.

bash ./start.sh

init.lua:42: Can not open file: "demo.yml" No such file or directory
stack traceback:
	...g-started-app/.rocks/share/tarantool/cartridge/utils.lua:129: in function 'file_read'
	...tarted-app/.rocks/share/tarantool/cartridge/argparse.lua:159: in function 'load_file'
	...tarted-app/.rocks/share/tarantool/cartridge/argparse.lua:268: in function 'parse_file'
	...tarted-app/.rocks/share/tarantool/cartridge/argparse.lua:337: in function '_parse'
	...tarted-app/.rocks/share/tarantool/cartridge/argparse.lua:352: i
All instances started!

Can't launch the tests via tarantoolctl in "Getting Started"

Tarantool version:

Tarantool 2.2.1-1-g1208b4e

Output:

a.barsegyan@a getting-started-app $ pwd
/Users/a.barsegyan/Documents/getting-stated/cartridge-cli/examples/getting-started-app
a.barsegyan@a getting-started-app $ tarantoolctl rocks test

Error: could not detect test type -- no test suite for getting-started-app?
a.barsegyan@a getting-started-app $ 

What's the problem? If I start the tests via ./.rocks/bin/luatest, they run OK.

Consider an alternative to old-way .ignore

The main idea of .cartridge.ignore was to remove useless artifacts left over after building (e.g. node_modules or temporary documentation files).

However, .cartridge.ignore is now applied before the build, so the user can't control what gets packed after the build.

Let's consider an alternative to the old approach. It could be a ".cartridge.pack.ignore" or something else.

Suggest installing development dependencies when used interactively

When cartridge-cli is used interactively, we should keep in mind that the user may be new to Tarantool and not know that in order to build their app, they need cmake, git, tarantool-devel, msgpuck, and potentially other things.

I suggest detecting that we are being called from an interactive terminal and, if so, showing the user a message about what they need to install in order to proceed. We could even show the specific commands.

When called non-interactively, show a warning instead.
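A minimal sketch of the interactive/non-interactive split, assuming the dependency list from above (`[ -t 1 ]` tests whether stdout is a terminal):

```shell
# Pick the message style depending on whether stdout is attached to a terminal.
if [ -t 1 ]; then mode=interactive; else mode=batch; fi
deps="git gcc cmake tarantool-devel"   # assumed dependency list from the issue
case $mode in
  interactive) echo "To build your app, install: $deps" ;;
  batch)       echo "warning: build dependencies may be missing: $deps" >&2 ;;
esac
```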

Add formulas into popular package managers

The current installation process is awkward. I (meaning "any user") want a system-wide installation via my favorite package manager.

yum for CentOS users and apt for Ubuntu users seem to me the best way to start. brew for macOS is good too, but not the first priority.

Implement `cartridge build` command

The main idea of this feature is to simplify local development.
cartridge build command is designed to be used for building application locally.

It will make it possible to start applications this way:

$ cartridge create --name myapp
$ cd myapp
$ cartridge build
$ cartridge start

RFC

Application build comprises these steps:

  • Running cartridge.pre-build (or .cartridge.pre);
  • Running tarantoolctl rocks make to install required rocks modules;

It's important to remember that the main purpose of the cartridge.pre-build script is installing non-standard rocks modules, so it shouldn't contain commands like yum install, etc.

Command

cartridge build [<path>] - builds an application locally.

  • path - path to the application (default: '.')

This command runs the cartridge.pre-build script and then calls tarantoolctl rocks make directly in the project directory.

The tests from "Getting Started" fail

Currently, I see the following error when the tests fail:

Failed tests:
-------------
1) integration_api.test_transaction_chain
...amples/getting-started-app/test/integration/api_test.lua:24: expected: 201, actual: 404
stack traceback:
	...amples/getting-started-app/test/integration/api_test.lua:24: in function 'assert_http_json_request'
	...amples/getting-started-app/test/integration/api_test.lua:75: in function 'integration_api.test_transaction_chain'

Ran 2 tests in 0.004 seconds, 1 success, 1 failure

The reason is cluster.main_server, which doesn't actually point to the api node in the cluster.
After a little digging, I found out that the cartridge test helper sets as the main server any server that is the only one in its replica set (in my case it was the storage server).

In the general case, this is wrong.

start: check that the file exists before reading it

./cartridge start --cfg=test
.../cartridge-cli.lua:3585: Usage: yaml.decode(document, [{tag_only = boolean}])

File existence validation is missing. The CLI shouldn't show a raw YAML error.
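A minimal sketch of the missing validation, assuming a hypothetical `check_cfg` helper that runs before the YAML parser is ever invoked:

```shell
# Validate the --cfg argument first, so the user gets a clear message
# instead of a raw yaml.decode error.
check_cfg() {
  if [ ! -f "$1" ]; then
    echo "error: config file '$1' does not exist" >&2
    return 1
  fi
}
check_cfg ./no-such-config.yml 2>/dev/null || echo "validation failed as expected"
```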

RFC: cluster management CLI

This ticket is to prepare an RFC that describes our ideal cluster management CLI for Tarantool. Take a look at other tools like etcd, consul, and others, and come up with our own vision.

Extract process management to separate rock

Here is the proposed API for this feature:

-- Build an instance.
-- Actually it's a shortcut to set metatable.
-- 
-- To `:stop()` or `:kill()` process only pid_file is required 
-- (or options to generate it, see below).
-- 
-- All other options are related to starting new process.
-- PID of started process is always saved in pid_file.
-- 
-- pid_file, console_sock, notify_socket can be either set directly 
-- or can be generated automatically with run_dir, app_name, instance_name.
-- 
-- @string[opt] object.app_name Used to build default run-files
-- @string[opt] object.instance_name Used to build default run-files,
--    sets TARANTOOL_INSTANCE_NAME, prefixes logs in foreground
-- @string[opt] object.script path to main script
-- @table[opt] object.env extra envars to start process with
-- @array[opt] object.args extra args to start process with
-- @string[opt] object.chdir Directory to chdir into before starting process.
-- @string[opt] object.cfg path to configuration, sets TARANTOOL_CFG
-- @string[opt] object.pid_file default generated in run_dir
-- @string[opt] object.console_sock default generated in run_dir
-- @string[opt] object.notify_socket default generated in run_dir
-- @string[opt] object.run_dir path to create pid_file, console_sock, notify_socket at
function Process:new(object)

-- Calculate required fields, make object consistent.
-- It's used mostly for internal purposes, but also makes it easier to extend the class.
function Process:initialize()

-- Check that process is running.
-- @return bool 
function Process:is_running()

-- Check that pid file does not exist or it's stale.
function Process:ensure_not_running()

-- Replace current process with new tarantool.
function Process:start_in_foreground()

-- Perform fork.
-- @bool[default = false] keep_stdio Don't close std* FDs
function Process:fork(keep_stdio)

-- Fork new process and wait until it boots (`READY=1` in NOTIFY_SOCKET).
function Process:start_daemon()

-- Fork a new process and pipe its output to stdout.
-- Each output line is prefixed with instance_name; some lines may be colored.
function Process:start_with_decorated_output()

-- Send kill signal.
-- @int[default = 2] sig Signal to send.
--   Should this support signal names? Strings can be sent only with `os.execute('kill ...')`.
function Process:kill(sig)

-- Stop the process and ensure it has actually stopped.
-- It checks that the process being stopped is tarantool before killing it.
-- @bool[opt] force Use SIGKILL.
function Process:stop(force)

I'm not sure whether the following methods can or need to be implemented:

function Process:start_console()
function Process:connect_net_box()

cartridge-cli does not take run_dir from --cfg file

With this config:

app.router_server_1_0:
  listen: 3301
  instance_name: app.router_server_1_0
  log_format: json
  log_level: 5
  http_port: 8081
  memtx_memory: 1000000000
  advertise_uri: server_1:3301
  workdir: /home/vagrant/app/data/app.router_server_1_0
  run_dir: /home/vagrant/app/run
  data_dir: /home/vagrant/app/data
  log_dir: /home/vagrant/logs
  pid_file: /home/vagrant/app/run/app.router_server_1_0.pid
  log: /home/vagrant/logs/app.router_server_1_0.jsonl

Command:

./cartridge start app --cfg ./cartridge.cfg --verbose

Starting router_server_1_0...
.../share/tarantool/rocks/cartridge-cli/scm-1/bin/cartridge:2364: No such file or directory

But if I add --run_dir, it gets further:

./cartridge start app --cfg ./cartridge.cfg --run_dir /home/vagrant/app/run --verbose

Starting router_server_1_0...
execve failed: No such file or directory
.../share/tarantool/rocks/cartridge-cli/scm-1/bin/cartridge:2404: Child process exited unexpectedly

tarantoolctl rocks make failed

I'm trying to do https://github.com/tarantool/cartridge-cli/blob/master/examples/getting-started-app/README_RUS.md.

[root@esb-wso2-tst-01 getting-started-app]# tarantoolctl rocks make
Missing dependencies for getting-started-app scm-1:
cartridge == 1.0.0-1 (not installed)

getting-started-app scm-1 depends on cartridge == 1.0.0-1 (not installed)
Installing http://rocks.tarantool.org/cartridge-1.0.0-1.all.rock
Missing dependencies for cartridge 1.0.0-1:
http == 1.0.5-1 (not installed)
lulpeg == 0.1.2-1 (not installed)
errors == 2.1.1-1 (not installed)
vshard == 0.1.9-1 (not installed)
membership == 2.1.4-1 (not installed)
frontend-core == 6.0.1-1 (not installed)

cartridge 1.0.0-1 depends on http == 1.0.5-1 (not installed)
Installing http://rocks.tarantool.org/http-1.0.5-1.rockspec

Error: Failed installing dependency: http://rocks.tarantool.org/cartridge-1.0.0-1.all.rock - Failed installing dependency: http://rocks.tarantool.org/http-1.0.5-1.rockspec - Could not find header file for TARANTOOL
No file tarantool/module.h in /usr/local/include
No file tarantool/module.h in /usr/include
You may have to install TARANTOOL in your system and/or pass TARANTOOL_DIR or TARANTOOL_INCDIR to the luarocks command.
Example: luarocks install http TARANTOOL_DIR=/usr/local

But Tarantool is already installed following these instructions: https://www.tarantool.io/ru/download/os-installation/1.10/rhel-centos-6-7/

[root@esb-wso2-tst-01 getting-started-app]# tarantool -v
Tarantool 1.10.3-136-gc3c087d
Target: Linux-x86_64-RelWithDebInfo
Build options: cmake . -DCMAKE_INSTALL_PREFIX=/usr -DENABLE_BACKTRACE=ON
Compiler: /opt/rh/devtoolset-6/root/usr/bin/cc /opt/rh/devtoolset-6/root/usr/bin/c++
C_FLAGS: -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fexceptions -funwind-tables -fno-omit-frame-pointer -fno-stack-protector -fno-common -fopenmp -msse2 -std=c11 -Wall -Wextra -Wno-strict-aliasing -Wno-char-subscripts -Wno-format-truncation -fno-gnu89-inline -Wno-cast-function-type
CXX_FLAGS: -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fexceptions -funwind-tables -fno-omit-frame-pointer -fno-stack-protector -fno-common -fopenmp -msse2 -std=c++11 -Wall -Wextra -Wno-strict-aliasing -Wno-char-subscripts -Wno-format-truncation -Wno-invalid-offsetof -Wno-cast-function-type

OS info:

[root@esb-wso2-tst-01 getting-started-app]# cat /etc/centos-release
CentOS Linux release 7.6.1810 (Core)

drop "rocks pack" support

There are several problems in this area:

  • cartridge-cli is an additional layer on top of tarantoolctl rocks pack, and maintaining a matching interface is hard.
  • rocks pack supports several modes (".all.rock"/".src.rock"/...), and cartridge-cli knows nothing about them.
  • tarantoolctl rocks pack may depend on a Makefile/CMakeLists, and cartridge-cli can't verify the correctness of such files - cartridge pack simply has a different packaging flow.

So, we have a function that is already completely broken and can return unpredictable results after packaging. I suggest simply dropping it: if a user wants a ".rock", they should be able to call tarantoolctl rocks pack directly.

Split cartridge-cli into several files

This project uses CMake as its build system. This gives us great possibilities for manipulating the source files at the build stage. From a packaging and usage standpoint, it's quite good to have one file: one entrypoint, one script, etc.

However, it's awful for developers and reviewers. My suggestion is to implement a CMake script with the following logic: at the build stage, we read all the files and then merge them into one:

package.loaded["1"] = load([[
-- the source code of the first module
]])

...

package.loaded["N"] = load([[
-- the source code of the Nth module
]])

-- Code of cartridge-cli.lua
local module1 = require('1') -- it's already added to package.loaded
...

Add more information about packed artifact structure

TODO: Write

Example:

Deploy package
The first step is to install the application package on the deploy server. It creates the tarantool user with the tarantool group and some directories for our app:

/etc/tarantool/conf.d/ - directory for instances config (described below);
/var/lib/tarantool/ - directory to store instances snapshots;
/var/run/tarantool/ - directory to store PID-files and console sockets.
Application code is placed in the /usr/share/tarantool/${app-name} directory. If you use Tarantool Enterprise, the tarantool and tarantoolctl binaries are delivered with the package and placed there too. Otherwise, you need to install the Tarantool package manually.

The package also contains the /etc/systemd/system/${app-name}.service and /etc/systemd/system/${app-name}@.service systemd unit files.

Allow to specify SDK version to be installed on `pack docker` result image

How does it work now?

Currently, the SDK version is detected from the current SDK version (TARANTOOL_SDK in the VERSION file). For a macOS bundle, the -macos suffix is removed.

What's the problem?

It turns out that not every <version>-macos version has a corresponding non-macos <version>. In this case, docker pack fails when trying to download the non-existent <version> bundle onto the resulting image.

Possible solution

In the future, we plan to use a runtime.txt file to specify the Tarantool or SDK version in the resulting package. That will solve this problem, but it will be implemented after #98 (packing the application in a docker container).

I suggest adding an --sdk_version argument to the docker pack command. The SDK version will be determined from the current SDK version, the --sdk_version argument, and the TARANTOOL_SDK_VERSION environment variable, according to this priority:

  • --sdk_version command-line argument;
  • TARANTOOL_SDK_VERSION environment variable;
  • SDK version detected from the current SDK version.

I'm sure we should show an info message with the detected version, and warn if it was derived by removing the -macos suffix.
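The priority chain could be sketched as follows (flag and variable names are from the proposal above; `resolve_sdk_version` is a hypothetical helper):

```shell
# CLI argument beats the environment variable, which beats the version
# detected from the current SDK.
resolve_sdk_version() {
  cli_arg=$1   # value of --sdk_version, may be empty
  env_var=$2   # $TARANTOOL_SDK_VERSION, may be empty
  current=$3   # version from the VERSION file (TARANTOOL_SDK)
  version=${cli_arg:-${env_var:-$current}}
  case $version in
    *-macos)
      echo "warning: dropping the -macos suffix from $version" >&2
      version=${version%-macos} ;;
  esac
  echo "$version"
}
resolve_sdk_version "" "" "2.3.2-macos" 2>/dev/null   # -> 2.3.2
```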

Elaborate deploying stateboard.lua

The latest cartridge versions introduce a new entity: a state provider for stateful failover (aka stateboard.lua). We should work out how to deploy it as conveniently as the other cluster instances.

Speed up tests

The python tests currently take about 45 minutes to run. That's too long.

The main reason for the growing test time is that the tests are now run for both the original and the deprecated project structures.
I'm sure we can't stop testing the deprecated flow; moreover, perhaps we should start testing different cartridge versions.

There are still many cases not covered by tests, so the test count will have to grow even further.

Extend "version" with git commit

Now: cartridge pack rpm /opt/project --version 1.2.3-264-g8bce594e leads to project-1.2.3-0.rpm

Expected: project-1.2.3-264-g8bce594e.rpm
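One way to derive the expected name is to split the `git describe`-style version at the first dash (a sketch; note that a real RPM Release field may not contain dashes, so some escaping would still be needed):

```shell
# Split a `git describe`-style version into RPM Version and Release parts.
full="1.2.3-264-g8bce594e"
version=${full%%-*}    # everything before the first dash: 1.2.3
release=${full#*-}     # everything after it: 264-g8bce594e
echo "project-$version-$release.rpm"   # -> project-1.2.3-264-g8bce594e.rpm
```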
