atomkraft's People

Contributors

andrey-kuprianov, asalzmann, dalmirel, ebuchman, gatom22, hvanz, ivan-gavran, konnov, p-offtermatt, rnbguy, udit-gulati

atomkraft's Issues

Testnet setup throws `KeyError: 'laddr'`

The current testnet setup has a bug in how it updates chain configuration files.

It overwrites the values of nested keys instead of merging them recursively.

For example, if p2p.allow_duplicate_ip is updated, the whole [p2p] section is overwritten with only allow_duplicate_ip.

[p2p]
laddr = ""
allow_duplicate_ip = false
...

becomes

[p2p]
allow_duplicate_ip = true

but it should be

[p2p]
laddr = ""
allow_duplicate_ip = true
...
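A minimal sketch of a recursive merge that would avoid this (function name and signature are illustrative, not the actual Atomkraft code):

def merge_config(base: dict, overrides: dict) -> dict:
    # Recursively merge `overrides` into `base` without dropping sibling keys.
    result = dict(base)
    for key, value in overrides.items():
        if isinstance(result.get(key), dict) and isinstance(value, dict):
            result[key] = merge_config(result[key], value)
        else:
            result[key] = value
    return result

# merge_config({"p2p": {"laddr": "", "allow_duplicate_ip": False}},
#              {"p2p": {"allow_duplicate_ip": True}})
# -> {"p2p": {"laddr": "", "allow_duplicate_ip": True}}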

Add CWD reference to Atomkraft Pytest config

Pytest can fail inexplicably when invoked from the command line (via the pytest binary) or from Python code (via pytest.main()), compared to invoking it via python -m pytest. The difference is that in the latter case the current working directory is added to the system path; as a result, pytest fixtures are found when invoking via python -m pytest, but not in the other two cases.

The solution, as discussed with @rnbguy, is to add the following piece of code to the pyproject.toml of the Atomkraft project:

[tool.pytest.ini_options]
pythonpath = [
  "."
]

This can be done on atomkraft init using poetry config command.
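A possible way to add the entry programmatically during atomkraft init, sketched with tomlkit (the exact mechanism, e.g. going through poetry, may differ):

import tomlkit

def add_pytest_pythonpath(pyproject_path="pyproject.toml"):
    # Append [tool.pytest.ini_options] pythonpath = ["."] to the project config.
    with open(pyproject_path) as f:
        doc = tomlkit.load(f)
    if "tool" not in doc:
        doc["tool"] = tomlkit.table()
    if "pytest" not in doc["tool"]:
        doc["tool"]["pytest"] = tomlkit.table()
    if "ini_options" not in doc["tool"]["pytest"]:
        doc["tool"]["pytest"]["ini_options"] = tomlkit.table()
    doc["tool"]["pytest"]["ini_options"]["pythonpath"] = ["."]
    with open(pyproject_path, "w") as f:
        tomlkit.dump(doc, f)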

Sampling of models fails

I have done the following:

  1. copy HelloFull.tla and HelloFull.config.toml from modelator samples into tests/models dir of atomkraft.
  2. atomkraft model load tests/models/HelloFull.tla
  3. atomkraft model sample --config-path tests/models/HelloFull.config.toml

This fails with FileNotFoundError: [Errno 2] No such file or directory: '/Users/.../atomkraft/modelator/samples'

The problem is that the path modelator/samples is picked up from the configuration, and overwrites the loaded model.

Event based waiting

Currently, we use time.sleep to wait for an event to happen. Examples:

  1. Waiting for the testnet to complete its setup and be ready to serve.
  2. Waiting for a transaction to be included in the blockchain.

These should be handled by some event-based mechanism.
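A fully event-based approach could subscribe to the node's event stream; as an intermediate step, even a generic polling helper with a timeout would already be an improvement over fixed sleeps. A sketch (node_rpc_is_up and tx_in_block are hypothetical helpers):

import time

def wait_until(predicate, timeout=30.0, interval=0.25):
    # Poll `predicate` until it returns a truthy value or `timeout` seconds elapse.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return
        time.sleep(interval)
    raise TimeoutError("condition not met within timeout")

# wait_until(lambda: node_rpc_is_up("http://localhost:26657"))
# wait_until(lambda: tx_in_block(client, tx_hash))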

Implement `test model` subcommand

This is the follow-up issue for #27 (and PR #61), as well as for #65, which implement generation, execution, and reporting for single ITF traces. This issue depends on #54 and #55, which implement the necessary programmatic interface from the Model module.

The tasks of this issue are:

  • get the trace from the model via interface defined in #54 and #55
  • store the obtained trace in traces folder
  • do the same processing as implemented in #27 and #65, but using the trace obtained in the previous step
  • modify slightly the implementation in #61, and hook there the default trace obtained via #55.

This is the simplest and preferred route for the Atomkraft prototype.

Alternative route for Atomkraft MVP

The alternative route is to generate another kind of Pytest test using the @mbt decorator, and refer from it directly to the model being used. But this route:

  • is more complicated, because it will require modifying the @mbt decorator in modelator, in order to adapt the decorator to the changed modelator API with TOML configs.
  • will not lead to reproducible Python scripts, as the Pytest test will depend on the model, and not on an already generated trace.

Implementing the alternative route will require a deeper integration with Modelator, and implementing caching there, in order to achieve the speed and reproducibility of the first route. When implemented, though, this will be the main and preferred way of operating Atomkraft, because users will not need to concern themselves with the intermediate phase (ITF traces) if they don't want to. It will be a direct route from a model and a test assertion to the execution of multiple generated traces against the testnet.

Store and retrieve the last model trace

As documented in ADR-05 Test execution, there is a dependency on the Model module, for providing programmatic access to obtaining the last trace produced from the model.

Programmatically, the following function needs to be provided by the Model module:

get_trace(trace = None)

The trace parameter, when given, provides a filesystem path from which to retrieve the trace. When the parameter is omitted, the last trace produced by the atomkraft model check or atomkraft model sample commands should be retrieved from the Atomkraft configuration.

Errors: on any error, an exception should be raised, explaining the error reason (e.g. no trace has been sampled, or the provided trace can't be parsed).

Return value: on success, the trace represented as an ITF class from Modelator should be returned.
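A rough sketch of the expected shape (the ITF import path and the read_last_trace_path_from_config helper are assumptions, not existing APIs):

from modelator.itf import ITF  # assumed import path for Modelator's ITF representation

def get_trace(trace=None):
    # When no path is given, fall back to the last trace recorded in the
    # Atomkraft configuration (read_last_trace_path_from_config is hypothetical).
    if trace is None:
        trace = read_last_trace_path_from_config()
        if trace is None:
            raise RuntimeError("no trace has been sampled yet")
    try:
        return ITF.parse_file(trace)  # assumed parsing entry point
    except Exception as err:
        raise RuntimeError(f"cannot parse trace {trace}: {err}") from err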

Reactor fixture for `testnet` is not defined

In the reactor created via atomkraft reactor command:

  • there is an unresolved import, from cosmos_net.pytest import Testnet (there is no cosmos_net module)
  • while each action handler does have the testnet parameter, the fixture producing testnet is not defined.

The testnet fixture should spin up the testnet with the parameters defined in the project chain configuration.

Reactor file is overwritten silently without warnings

A reactor file generated via atomkraft reactor is silently overwritten, without any warning, on the next atomkraft reactor invocation. This is dangerous, as the user may have already started working on that file.

The user should be warned if the file exists already, and asked whether they want to overwrite it.
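Since the CLI is built with Typer, a minimal sketch of the guard could look like this (write_reactor_stub is an illustrative name, not the actual function):

from pathlib import Path

import typer

def write_reactor_stub(path: Path, content: str) -> None:
    # Ask before clobbering an existing reactor file.
    if path.exists() and not typer.confirm(f"{path} already exists. Overwrite?"):
        typer.echo("Aborted; the existing reactor is left untouched.")
        raise typer.Exit(code=1)
    path.write_text(content)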

Testnet creation fails non-deterministically

I have the Atomkraft project configured correctly, with all binaries in place; this is the CosmWasm counter example. Most of the time testing works fine with this example, but from time to time it fails like this (from a pytest execution):

_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../../../Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/modelator/pytest/decorators.py:78: in <dictcomp>
    arg: step[arg] if arg in step else request.getfixturevalue(arg)
../../../Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/_pytest/fixtures.py:554: in getfixturevalue
    fixturedef = self._get_active_fixturedef(argname)
../../../Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/_pytest/fixtures.py:573: in _get_active_fixturedef
    self._compute_fixture_value(fixturedef)
../../../Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/_pytest/fixtures.py:659: in _compute_fixture_value
    fixturedef.execute(request=subrequest)
../../../Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/_pytest/fixtures.py:1057: in execute
    result = ihook.pytest_fixture_setup(fixturedef=self, request=request)
../../../Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/pluggy/_hooks.py:265: in __call__
    return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
../../../Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/pluggy/_manager.py:80: in _hookexec
    return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
../../../Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/_pytest/fixtures.py:1111: in pytest_fixture_setup
    result = call_fixture_func(fixturefunc, request, kwargs)
../../../Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/_pytest/fixtures.py:883: in call_fixture_func
    fixture_result = next(generator)
../../../Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/atomkraft/chain/pytest.py:13: in testnet
    testnet.oneshot()
../../../Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/atomkraft/chain/testnet.py:214: in oneshot
    self.prepare()
../../../Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/atomkraft/chain/testnet.py:182: in prepare
    node.add_key(self.validators[i])
../../../Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/atomkraft/chain/node.py:112: in add_key
    stdout, stderr = self._execute(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <atomkraft.chain.node.Node object at 0x121a0f850>
args = ['keys', 'add', 'validator-0', '--recover', '--keyring-backend', 'test', ...]

    def _execute(self, args, *, stdin: bytes | None = None, stdout=None, stderr=None):
        final_args = f"{self.binary} --home {self.home_dir}".split() + args
        # print(" ".join(final_args))
        stdin_pipe = None if stdin is None else PIPE
        with Popen(final_args, stdin=stdin_pipe, stdout=stdout, stderr=stderr) as p:
            out, err = p.communicate(input=stdin)
            rt = p.wait()
            if rt != 0:
>               raise RuntimeError(f"Non-zero return code {rt}\n{err.decode()}")
E               RuntimeError: Non-zero return code 1
E               Error: aborted
E               Usage:
E                 junod keys add <name> [flags]
E               
E               Flags:
E                     --account uint32           Account number for HD derivation
E                     --algo string              Key signing algorithm to generate keys for (default "secp256k1")
E                     --coin-type uint32         coin type number for HD derivation (default 118)
E                     --dry-run                  Perform action, but don't add key to local keystore
E                     --hd-path string           Manual HD Path derivation (overrides BIP44 config)
E                 -h, --help                     help for add
E                     --index uint32             Address index number for HD derivation
E                 -i, --interactive              Interactively prompt user for BIP39 passphrase and mnemonic
E                     --ledger                   Store a local reference to a private key on a Ledger device
E                     --multisig strings         List of key names stored in keyring to construct a public legacy multisig key
E                     --multisig-threshold int   K out of N required signatures. For use in conjunction with --multisig (default 1)
E                     --no-backup                Don't print out seed phrase (if others are watching the terminal)
E                     --nosort                   Keys passed to --multisig are taken in the order they're supplied
E                     --pubkey string            Parse a public key in JSON format and saves key info to <name> file.
E                     --recover                  Provide seed phrase to recover existing key instead of creating
E               
E               Global Flags:
E                     --home string              The application home directory (default "/Users/andrey/.juno")
E                     --keyring-backend string   Select keyring's backend (os|file|test) (default "test")
E                     --keyring-dir string       The client Keyring directory; if omitted, the default 'home' directory will be used
E                     --log_format string        The logging format (json|plain) (default "plain")
E                     --log_level string         The logging level (trace|debug|info|warn|error|fatal|panic) (default "info")
E                     --output string            Output format (text|json) (default "text")
E                     --trace                    print out full stack trace on errors

../../../Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/atomkraft/chain/node.py:238: RuntimeError

Not sure what causes this, but it would be nice to investigate.

CLI: `model` command

Background/Motivation

A user has written a TLA+ model and wants to parse the TLA+ files, type check the spec, and set the model constants.
Once a model is loaded, the user can call the trace command to generate traces from the model.

This command is also an interface to the Model class in Modelator.

Linked documents: CLI ADR

Description: CLI commands

The command atomkraft model is essentially a wrapper around Modelator's Model, where each of its sub-commands would map almost one-to-one to the methods of Model:

atomkraft model load <model-path> # in Model it's the parse_file method
atomkraft model typecheck
atomkraft model instantiate <constant-name> <constant-value>
atomkraft model check [<invariant-list>] [--constants=<name>:<value>,...] # for now, checker is Apalache, and checker params are the default values
atomkraft model sample [<sample-list>] [--constants=<name>:<value>,...] 
atomkraft model last-sample
atomkraft model all-samples
atomkraft model monitor add markdown <monitor-file.md>
atomkraft model monitor add html <monitor-file.html>

Additionally, model will have the following sub-commands that require some extra logic not provided by Modelator:

atomkraft model info # will display filename(s), init, next, constants, invariants, ... 
atomkraft model monitor remove-all # will remove all initialized monitors

Not included in the first prototype:

atomkraft model config load <model-config-file> # will call the `ModelConfig` class in Modelator

Apalache does not require a cfg file with the model.

Artifacts

This module can load a model in memory that can be used by other modules.

Programmatic interface

This module does not expect any connection to other components.

Dependencies

None

Tasks

  • Implement the model command and the sub-commands that call Modelator directly.
  • Implement the model sub-commands that do not call Modelator directly.
  • Add unit tests for model.

ADR-01: Atomkraft principles & architecture

Moving towards a more user-friendly version of Atomkraft, we need to document its organizational principles and high-level architecture, to be implemented in the first prototype. The ADR will mostly ignore the inner workings of the tool and concentrate on its external interface and artifacts.

CLI: `trace` command

Background/Motivation

A user has written a TLA+ model and wants to generate traces from test assertions in the model, so that they can later execute some of the generated test traces on a testnet.

Linked documents:

Description

The atomkraft trace command generates ITF traces. If no model is given as a parameter, it will use a model already loaded in memory with the atomkraft model command.
Its format is:

atomkraft trace [--model=<model>] <config-path> <test-assertion> [<traces-path>]

where:

  • <config-path> is the (path to) TOML file with the model and model checker configuration;
  • <test-assertion> is the name of the model operator describing the desired test trace;
  • <traces-path> is the location for the trace files.

Upon successful command execution, the generated test trace in the ITF format should be persisted to disk.

A model config is a TOML file with the following format, located in the same directory as the model:

[Model]
name = "ModuleName"
init = "Init"
next = "Next"
spec = "Spec"
invariants = ["Inv1", "Inv2", ...]
tlc_config_file = "path/to/ModuleName.cfg"

[Constants]
constant_name_1 = "tla_constant_value_1"
...
constant_name_n = "tla_constant_value_n"

[Config]
check_deadlock = false
length = 10 # called depth in TLC

Related commands:

atomkraft config traces-dir <traces-path> # sets the location for the generated trace files
atomkraft config traces-dir # displays the current directory for the trace files

Technical details

Under the hood, the atomkraft trace command will call the following Modelator Shell commands:

model = ModelShell.parse_file(<model-path>)
model.typecheck()
config = ModelConfig.parse_file(<config-path>)
model.check(config, <test-assertion>, <traces-path>)

where ModelConfig would be a new class in Modelator, used as a common data structure for Apalache and TLC configurations.

Artifacts

This command generates ITF traces in the directory default_traces_dir provided by the Setup module.

Interface to other modules

  • Read the value of default_traces_dir provided by the Setup module.

Dependencies

  • model command #17

Tasks

  • Implement the trace command (making use of Modelator, as described above).
  • Add unit tests for trace.

CI jobs are failing

#50 added lints and tests in CI. But they are failing because the codebase has not been changed accordingly.

A separate PR is needed to improve code quality and make the lints and tests pass.

Small issues with Atomkraft tutorial

Testing the existing Atomkraft tutorial, I found the following two small issues:

  • upon installing (running pip install --upgrade atomkraft), I get the following error:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
copier 6.1.0 requires packaging>=21.0, but you have packaging 20.9 which is incompatible.

The installation process goes through, but it is unpleasant to see the error message.

  • copying from the .md files (when pressing the copy symbol) copies too much (the command output is included as well)

ADR: `atomkraft init`

We are figuring out how a user will use Atomkraft: produce tests, interact with the test setup, and execute the tests.

The idea is to provide an atomkraft CLI which creates a pytest project with the necessary configuration.

`atomkraft init` fails in CI

This fails because of my mistake of adding private submodules. They work on our machines because we have access to the repository.

Implement test command CLI

This is the implementation issue for the CLI of #13, constructing and executing tests against a testnet.

The CLI is to be implemented using Typer, for integration with the CLIs of the other subcommands.

Create an E2E test for Atomkraft

We need to add an E2E test in CI, which successfully

  • installs Atomkraft
  • initializes an Atomkraft project
  • generates traces from a model
  • creates a reactor
  • sets up a chain
  • executes traces on a live testnet

Cannot start node in example

jehan@Jehans-MBP cosmos-sdk % make
./setup.sh
[+] Building 1.4s (19/19) FINISHED                                 
 => [internal] load build definition from Dockerfile          0.0s
 => => transferring dockerfile: 37B                           0.0s
 => [internal] load .dockerignore                             0.0s
 => => transferring context: 2B                               0.0s
 => [internal] load metadata for docker.io/library/alpine:ed  1.3s
 => [internal] load metadata for docker.io/library/golang:al  1.3s
 => [internal] load build context                             0.0s
 => => transferring context: 321B                             0.0s
 => [build-env 1/6] FROM docker.io/library/golang:alpine@sha  0.0s
 => [stage-1 1/7] FROM docker.io/library/alpine:edge@sha256:  0.0s
 => CACHED [stage-1 2/7] RUN apk add --update ca-certificate  0.0s
 => CACHED [stage-1 3/7] WORKDIR /root                        0.0s
 => CACHED [build-env 2/6] RUN apk add --no-cache curl make   0.0s
 => CACHED [build-env 3/6] WORKDIR /go/src/github.com/cosmos  0.0s
 => CACHED [build-env 4/6] RUN git clone https://github.com/  0.0s
 => CACHED [build-env 5/6] RUN git checkout v0.44.3           0.0s
 => CACHED [build-env 6/6] RUN make clean && make build-linu  0.0s
 => CACHED [stage-1 4/7] COPY --from=build-env /go/src/githu  0.0s
 => CACHED [stage-1 5/7] ADD ./chain-setup /opt/chain         0.0s
 => CACHED [stage-1 6/7] WORKDIR /opt/chain/                  0.0s
 => CACHED [stage-1 7/7] RUN /opt/chain/init.sh               0.0s
 => exporting to image                                        0.0s
 => => exporting layers                                       0.0s
 => => writing image sha256:ad757d341b71c2f52eca04c895d880b3  0.0s
 => => naming to docker.io/library/cosmos-image               0.0s

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
jehan@Jehans-MBP cosmos-sdk % ./start-node.sh
7:11PM INF starting ABCI with Tendermint
7:11PM INF Starting multiAppConn service impl=multiAppConn module=proxy
7:11PM INF Starting localClient service connection=query impl=localClient module=abci-client
7:11PM INF Starting localClient service connection=snapshot impl=localClient module=abci-client
7:11PM INF Starting localClient service connection=mempool impl=localClient module=abci-client
7:11PM INF Starting localClient service connection=consensus impl=localClient module=abci-client
7:11PM INF Starting EventBus service impl=EventBus module=events
7:11PM INF Starting PubSub service impl=PubSub module=pubsub
7:11PM INF Starting IndexerService service impl=IndexerService module=txindex
7:11PM INF ABCI Handshake App Info hash= height=0 module=consensus protocol-version=0 software-version=0.44.3
7:11PM INF ABCI Replay Blocks appHeight=0 module=consensus stateHeight=0 storeHeight=0
7:11PM INF asserting crisis invariants inv=0/11 module=x/crisis name=bank/nonnegative-outstanding
7:11PM INF asserting crisis invariants inv=1/11 module=x/crisis name=bank/total-supply
7:11PM INF asserting crisis invariants inv=2/11 module=x/crisis name=distribution/nonnegative-outstanding
7:11PM INF asserting crisis invariants inv=3/11 module=x/crisis name=distribution/can-withdraw
7:11PM INF asserting crisis invariants inv=4/11 module=x/crisis name=distribution/reference-count
7:11PM INF asserting crisis invariants inv=5/11 module=x/crisis name=distribution/module-account
7:11PM INF asserting crisis invariants inv=6/11 module=x/crisis name=staking/module-accounts
7:11PM INF asserting crisis invariants inv=7/11 module=x/crisis name=staking/nonnegative-power
7:11PM INF asserting crisis invariants inv=8/11 module=x/crisis name=staking/positive-delegation
7:11PM INF asserting crisis invariants inv=9/11 module=x/crisis name=staking/delegator-shares
7:11PM INF asserting crisis invariants inv=10/11 module=x/crisis name=gov/module-account
7:11PM INF asserted all invariants duration=6.272542 height=0 module=x/crisis
Error: error during handshake: error on replay: validator set is nil in genesis and still empty after InitChain
Usage:
  simd start [flags]

Flags:

Fix utils/project_root() to be Atomkraft-specific

The current implementation of the project_root() function:

def project_root():
    cwd = Path(os.getcwd())
    while cwd != cwd.parent:
        if (cwd / "pyproject.toml").exists():
            return cwd
        cwd = cwd.parent
    return None

is dangerous: it traverses up to the first "pyproject.toml" it finds and returns that directory. There may be other Poetry projects up the tree which are not Atomkraft projects. As a result, arbitrary unrelated projects could be overwritten.

Instead, this function should search for Atomkraft-specific config files, e.g. the .atomkraft directory.
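A minimal sketch of the fix, assuming .atomkraft is the project marker:

import os
from pathlib import Path

def project_root():
    # Walk up from the current directory, accepting only a directory that
    # contains the Atomkraft-specific marker (assumed to be `.atomkraft`),
    # rather than any directory with a pyproject.toml.
    cwd = Path(os.getcwd())
    while cwd != cwd.parent:
        if (cwd / ".atomkraft").is_dir():
            return cwd
        cwd = cwd.parent
    return None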

Fix reactor/get_reactor

The current implementation of reactor/get_reactor():

def get_reactor() -> PathLike:
    if "PYTEST_CURRENT_TEST" in os.environ:
        root = "tests/project"
    else:
        root = project_root()

    internal_config_file_path = os.path.join(
        root,
        constants.ATOMKRAFT_INTERNAL_FOLDER,
        constants.ATOMKRAFT_INTERNAL_CONFIG,
    )
    with open(internal_config_file_path) as config_f:
        config_data = tomlkit.load(config_f)
        return config_data[constants.REACTOR_CONFIG_KEY]

fails when running a trace from inside Atomkraft via the atomkraft test trace command. The reason is that this function assumes it can be executed in a Pytest context only when testing the reactor code. But we also use Pytest to execute user tests, so Pytest is also a working environment when Atomkraft runs.
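One possible shape of the fix: always resolve the root from the filesystem marker instead of branching on the Pytest environment (a sketch; `constants` and `project_root` refer to the same modules as in the snippet above):

import os

import tomlkit

def get_reactor():
    # Do not assume that a Pytest context implies testing Atomkraft itself;
    # resolve the project root from the Atomkraft-specific marker instead.
    root = project_root()
    if root is None:
        raise RuntimeError("not inside an Atomkraft project")
    internal_config_file_path = os.path.join(
        root,
        constants.ATOMKRAFT_INTERNAL_FOLDER,
        constants.ATOMKRAFT_INTERNAL_CONFIG,
    )
    with open(internal_config_file_path) as config_f:
        config_data = tomlkit.load(config_f)
        return config_data[constants.REACTOR_CONFIG_KEY]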

Programmatic sampling of model traces

As documented in ADR-05 Test execution, there is a dependency on the Model module, for providing programmatic access to sampling the traces from the model.

The dependency needs to be implemented by re-exporting functionality of Modelator and combining it with the defaults stored in Atomkraft, in approximately the following form:

get_model_trace(model = None, config = None, samples = None)

where:

  • the first two parameters are file paths, the third one is a list of operator names
  • when not given explicitly, the parameters should be picked up from the current defaults stored in Atomkraft config, where they are set via atomkraft model subcommands.

Return value:

  • when any parameter is not given and is not available via the defaults, or the parameter cannot be parsed, the function should raise an exception explaining the reason for the error.
  • when the model can be sampled for traces given the provided parameters, a ModelResult from the Modelator API should be returned.
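A rough sketch of the expected behavior (default_from_config and sample_traces are placeholders for the Atomkraft config accessor and the re-exported Modelator sampling call; they are not existing APIs):

def get_model_trace(model=None, config=None, samples=None):
    # Fall back to the defaults recorded by the `atomkraft model` subcommands
    # whenever a parameter is not given explicitly.
    model = model or default_from_config("model.path")
    config = config or default_from_config("model.config")
    samples = samples or default_from_config("model.samples")
    if model is None or config is None:
        raise RuntimeError("model/config not given and no defaults are configured")
    return sample_traces(model, config, samples)  # expected to return a ModelResult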

CLI: Generate Reactor Stub

Context: Reactor

Once the user has a TLA+ model, they need to write a reactor:
a set of Python functions connecting the actions of the model to executions of the code.

The task is to generate a stub for the reactor.

Task

The command that needs to be implemented is
atomkraft reactor <action-list> <model> [<reactor-stub-file>]
where

  • action-list is a list of actions for which to generate stubs
  • model is the TLA+ model for which we are implementing a reactor
  • reactor-stub-file is a path at which the reactor file should be created. If omitted, a default path is used.

The stub should include:

  • a stub for the testnet initialization function
@pytest.fixture(scope="session")
def testnet():
  chain_id = "test-cw"
  binary = <binary> # as setup in the init command
  denom = "stake"
  prefix = "juno" #TODO: clarify
  coin_type = 118 # TODO: clarify

  genesis_config = {
      "app_state.gov.voting_params.voting_period": "600s",
      "app_state.mint.minter.inflation": "0.300000000000000000",
  }

  node_config = {
      "config/app.toml": {
          "api.enable": True,
          "api.swagger": True,
          "api.enabled-unsafe-cors": True,
          "minimum-gas-prices": f"0.10{denom}",
          "rosetta.enable": False,
      },
      "config/config.toml": {
          "instrumentation.prometheus": False,
          "p2p.addr_book_strict": False,
          "p2p.allow_duplicate_ip": True,
      },
  }

  testnet = Testnet(
      chain_id,
      n_validator=3,
      n_account=3,
      binary=binary,
      denom=denom,
      prefix=prefix,
      coin_type=coin_type,
      genesis_config=genesis_config,
      node_config=node_config,
      account_balance=10**26,
      validator_balance=10**16,
  )

  testnet.oneshot()
  time.sleep(10)
  yield testnet
  time.sleep(2)
  • a stub for the state function
  @pytest.fixture
  def state():
      pass
  • for each action act from action-list, a stub for the step function connecting the abstract action to the code execution.
  @step("act")
  def act_step(testnet, state, var1, var2,..., vark):
      pass

where var1, var2, ..., vark are all the variables of the model, state is the state provided by the state function, and testnet is the blockchain client provided by the testnet function.

Finally, the stub should contain comments with guidance on how to use the stub.

Create PR on Prototype feedback

Here is some preliminary feedback regarding the Atomkraft prototype.



Given that our focus is to build an e2e testing framework for Cosmos SDK projects, there are ways to be more specific on certain points that would ease usage and possibly development.
We can first think in terms of module testing, so the first thing to provide is a reactor per module:

  • A module has an “API” that defines transaction calls with parameters that are used in test suites to be run over testnets. Thus, we can have a reactor per module that constructs generic transaction calls in Python.
  • A trace expresses a certain logic of transaction calls with specific parameters. Thus, the data structure in the trace should be aligned with the reactor function calls/variables. Traces are constrained by reactors, so we are not writing reactors to fit the trace, which is a change in the logic described in the ADRs. This also constrains the model (maybe we can provide model templates, types, and variables).
  • Future points to evaluate: module interconnections.
  • The setup creates a pytest project with the necessary configuration and a project structure. It has been designed as if we were going to instantiate Atomkraft for each module separately, but maybe we will have one instance dealing with all modules. (To be confirmed.)
  • It is also more convenient to understand where the tests come from (traces and models), so putting them together under the same directory is better than having all the models in one repo and all the traces in another without understanding the links between them. (Same for reactors.)
  • In the initialization, it is also good to provide a testnet configuration in place (I have to play with the prototype).
  • It is important to port over the work done on the authz module to figure things out.

So from this project structure:
+- .atomkraft/
+- models/
+- traces/
+- reactors/
+- tests
| +- test_authz.py
| +- test_gov.py
+- reports/
+- testnet
| +- config
| | +- app.toml
| | +- config.toml
| | +- genesis.json
| +- run
| +- validator-1/
| +- validator-2/
| +- validator-3/
+- pyproject.toml

We would maybe have this:

+- .atomkraft/
+- modules/
| +- authz
| | +- reactors/
| | +- tests/
| | | +- models/
| | | +- traces/
| | | +- tests/
| | | | +- test_authz.py
| | +- reports/ (Scripts ?)
| +- gov
| | +- reactors/
| | +- tests/
| | | +- models/
| | | +- traces/
| | | +- tests/
| | | | +- test_gov.py
| | +- reports/ (Scripts ?)

And a single testnet:
+- testnet
| +- config
| | +- app.toml
| | +- config.toml
| | +- genesis.json
| +- run
| +- validator-1/
| +- validator-2/
| +- validator-3/
+- pyproject.toml

Improve user interactions of `atomkraft chain testnet` command

The atomkraft chain testnet command, besides not having any user help (to be addressed in #32), also has the following deficiencies:

  • no user output is shown; the command executes in absolute silence, and the user is left to guess what's happening
  • the directories node-0 ... node-3 are created at the top level of the user project. They should probably be located one level below (in chain or similar)
  • while the above directories contain stdout and stderr files, they are empty, and the directories are removed upon Ctrl+C. No logs or anything else are available to the user.

So user interaction as a whole needs to be thought through.

Configure testnet from reactor

Right now the testnet fixture is started before control reaches the reactors. But users may want to dynamically initialize the validator set, the number of genesis accounts, etc. from an Init reactor.

`atomkraft reactor` generates methods with same names

atomkraft reactor --actions "fizz,fuzz,fizzfuzz" --variables "x,y" generates the following code.

The name act_step should be different for each method.

import time
import pytest
from cosmos_net.pytest import Testnet
from modelator.pytest.decorators import step

    
keypath = 'action'



@pytest.fixture
def state():
    return {}


@step('fizz')
def act_step(testnet, state, x, y):
    print("Step: fizz")


@step('fuzz')
def act_step(testnet, state, x, y):
    print("Step: fuzz")


@step('fizzfuzz')
def act_step(testnet, state, x, y):
    print("Step: fizzfuzz")

ADR: Trace executor architecture

The trace executor is one of the top-level components of Atomkraft. It facilitates handling model specifications and generated traces, setting up testnet(s), and driving tests on them.

ADR-05: run trace against testnet

Background/Motivation

The user has generated a test trace in the ITF format, and wants to execute it against the testnet to make sure the testnet behaves as expected.

Linked documents: ADR

Description

The format of the proposed CLI command is:

atomkraft run <trace>

where:

  • <trace> is the (path to) ITF trace;

Upon successful execution the user is notified about it; no further action is necessary. Upon unsuccessful execution, the error should be presented to the user, and all the information needed to reproduce the error should be saved (details to be clarified).

Prerequisites

  • Atomkraft can communicate with the blockchain (init command has been successful)
  • The blockchain reactor is available (reactor command has been executed, and the user has filled the reactor stub with the method implementations)
  • The reactor has all methods implemented correctly

Technical details

  • The implementation should be able to efficiently check the above prerequisites.
    • E.g. for the first two, the project configuration should hold the entries, showing that the respective commands have been executed.
    • The pointers to the setup script, and to the reactor should be present in the configuration, such that they are picked up by this implementation.
  • Even if the above commands have been executed successfully, that doesn't guarantee that the pieces fit together. E.g. the errors below may happen, and should be differentiated from real errors (a mismatch between trace expectations and the behavior of the blockchain):
    • init might have been executed, but the blockchain binary was moved, or vanished from the PATH, or changed.
    • reactor has been executed, but the list of actions for which the stub was generated doesn't cover all actions present in the trace.
    • The reactor stub has been filled in, but the file doesn't compile, or some error in one of the action handlers occurs at runtime.
  • When a real error in the testnet occurs:
    • the error summary needs to be presented to the user
    • the error details need to be saved, for possible later inspection:
      • the trace
      • the blockchain configuration
      • the setup and reactor scripts
      • the complete output

Run authz module tests

Work has been conducted on the authz module using the previous version of Atomkraft.
It can be found in this repository:
https://github.com/rnbguy/Authz-Audit

In order to validate the prototype, we have to transpose that work to the new version.
It will also help prioritise the next developments in terms of bug fixes, enhancements, and new features.

First step: test execution
This part is crucial for getting user feedback.

  • Create a new repo
  • Configuration to set up the testnet (which could be used by default as a replacement for the Docker part)
  • Write reactors (authz and bank): we can start with the transactions we need and identify the missing parts, if any, regarding the module APIs.
  • Import the traces and execute them
  • Result processing

Second step: trace generation

  • Trace generation from the existing model
  • Optimising the model and the trace format: the existing model was originally written for unit testing, not end-to-end testing. It could be cleaned up to make the logic easier to understand and to generate traces containing only the information needed for testing.

Atomkraft creates new git inside a git project

atomkraft init creates a new git project inside an existing git project.

This is undesirable if someone wants to manage multiple test projects in a single git repo.

Also, this prevents us from maintaining some example test projects created by atomkraft.
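One way to avoid this is to skip repository creation when the target directory is already inside a git work tree. A sketch (maybe_git_init is an illustrative name):

import subprocess
from pathlib import Path

def maybe_git_init(project_dir: Path) -> None:
    # Initialize a new repository only if the project directory is not already
    # inside one; `git rev-parse --is-inside-work-tree` exits non-zero outside
    # a repository.
    inside = subprocess.run(
        ["git", "rev-parse", "--is-inside-work-tree"],
        cwd=project_dir,
        capture_output=True,
    ).returncode == 0
    if not inside:
        subprocess.run(["git", "init", str(project_dir)], check=True)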

Store and report results for `test trace`

This is the follow-up issue for #27 (and PR #61), which implement the basic functionality of producing and executing Pytest tests from traces.

The task of this issue is, during the execution of the current trace or upon finishing it, to retrieve and store in the reports folder, under a name that corresponds to the test being executed:

  • full transaction data for each submitted transaction
  • results of executing the transaction
  • node logs
  • full Pytest output

The user might be given the option to store the above results either for all tests, or only for failed ones. This is a nice-to-have, not a requirement.

`node-n` dirs are created at top-level; fail to be removed sometimes

It looks like, in the process of merge conflict resolution, the fix for #33 was applied only partially:

  • the directories node-n are removed
  • but they are still created at the top-level

Also, from time to time, some directories fail to be removed, which manifests itself like this:

tests/test_traces_example0_itf_json_2022_07_27T10_58_14_914.py::test_trace
  /Users/andrey/Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/_pytest/unraisableexception.py:78: PytestUnraisableExceptionWarning: Exception ignored in: <function Node.__del__ at 0x13397b370>
  
  Traceback (most recent call last):
    File "/Users/andrey/Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/atomkraft/chain/node.py", line 270, in __del__
      self.close()
    File "/Users/andrey/Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/atomkraft/chain/node.py", line 264, in close
      shutil.rmtree(self.home_dir)
    File "/opt/homebrew/Cellar/[email protected]/3.10.5/Frameworks/Python.framework/Versions/3.10/lib/python3.10/shutil.py", line 724, in rmtree
      _rmtree_safe_fd(fd, path, onerror)
    File "/opt/homebrew/Cellar/[email protected]/3.10.5/Frameworks/Python.framework/Versions/3.10/lib/python3.10/shutil.py", line 663, in _rmtree_safe_fd
      onerror(os.rmdir, fullname, sys.exc_info())
    File "/opt/homebrew/Cellar/[email protected]/3.10.5/Frameworks/Python.framework/Versions/3.10/lib/python3.10/shutil.py", line 661, in _rmtree_safe_fd
      os.rmdir(entry.name, dir_fd=topfd)
  OSError: [Errno 66] Directory not empty: 'config'
  
    warnings.warn(pytest.PytestUnraisableExceptionWarning(msg))

It would be nice to:

  • create node-n directories one level below;
  • keep the directories for the last executed test;
  • clean them up at the start of a new test, if they exist.
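A minimal sketch of that clean-up strategy (the `chain` directory name and the helper are illustrative, not the actual Atomkraft code):

import shutil
from pathlib import Path

def prepare_node_dirs(project_root: Path, n_nodes: int) -> Path:
    # Keep node homes one level below the project root and wipe leftovers from
    # the previous run at the start of a new test.
    chain_dir = project_root / "chain"
    if chain_dir.exists():
        shutil.rmtree(chain_dir, ignore_errors=True)
    for i in range(n_nodes):
        (chain_dir / f"node-{i}").mkdir(parents=True)
    return chain_dir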

Simplify and automate `atomkraft init` user interaction

Currently, atomkraft init operates by delegating many tasks to other programs, in particular poetry init, which asks the user lots of irrelevant questions. I believe the following should be done:

  • the internals of all those commands should be hidden from the user
  • all input/output should either come directly from us, or be translated. No interaction should happen directly with the other, subordinate program.
  • the process should be automated to the maximum degree possible. In particular, the user should not be forced to answer interactively whether they want to define dependencies/dev-dependencies.
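A sketch of how poetry could be driven non-interactively and quietly from atomkraft init (init_poetry_project is an illustrative name; the flags used are standard poetry options):

import subprocess

def init_poetry_project(name: str) -> None:
    # Run poetry without prompting the user and keep its output hidden;
    # --no-interaction suppresses questions, --quiet suppresses output.
    subprocess.run(
        ["poetry", "init", "--no-interaction", "--quiet", "--name", name],
        check=True,
        capture_output=True,
    )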
