informalsystems / atomkraft
Advanced fuzzing via Model Based Testing for Cosmos blockchains
License: Apache License 2.0
As implemented in informalsystems/modelator#226, Modelator now has a Typer CLI. We need to integrate this CLI into Atomkraft via the model command.
The current testnet setup introduces a bug while updating chain configuration files.
It overwrites the values of nested keys instead of merging them recursively.
If p2p.allow_duplicate_ip is updated, the whole p2p section is overwritten with only allow_duplicate_ip:
[p2p]
laddr = ""
allow_duplicate_ip = false
...
becomes
[p2p]
allow_duplicate_ip = true
but it should be
[p2p]
laddr = ""
allow_duplicate_ip = true
...
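The fix is to merge nested tables recursively instead of assigning at the top level. A minimal sketch of such a merge (the `deep_merge` name is illustrative, not from the codebase):

```python
def deep_merge(base: dict, update: dict) -> dict:
    """Recursively merge `update` into `base`, preserving sibling keys."""
    for key, value in update.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            deep_merge(base[key], value)
        else:
            base[key] = value
    return base

# Example: updating p2p.allow_duplicate_ip keeps p2p.laddr intact.
config = {"p2p": {"laddr": "", "allow_duplicate_ip": False}}
deep_merge(config, {"p2p": {"allow_duplicate_ip": True}})
# config is now {"p2p": {"laddr": "", "allow_duplicate_ip": True}}
```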
Currently the atomkraft init and atomkraft chain commands have a completely undocumented CLI. Every command and option should have a well-thought-out user help message.
Pytest can fail inexplicably when invoked from the command line (via the pytest binary) or from Python code (via pytest.main()), compared to invoking it via python -m pytest. The difference is that in the latter case the current working directory is added to the system path; as a result, pytest fixtures can be found in the latter case, but not in the former.
The solution, as discussed with @rnbguy, is to add the following piece of code to pyproject.toml
of Atomkraft project:
[tool.pytest.ini_options]
pythonpath = [
"."
]
This can be done during atomkraft init using the poetry config command.
I have done the following:
- Copied HelloFull.tla and HelloFull.config.toml from the modelator samples into the tests/models dir of atomkraft.
- atomkraft model load tests/models/HelloFull.tla
- atomkraft model sample --config-path tests/models/HelloFull.config.toml
This fails with FileNotFoundError: [Errno 2] No such file or directory: '/Users/.../atomkraft/modelator/samples'
The problem is that the path modelator/samples
is picked up from the configuration, and overwrites the loaded model.
Currently, we use time.sleep to wait for an event to happen. These waits should be handled by some event-based mechanism.
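Until a proper event-based mechanism is in place, one lightweight improvement over fixed sleeps is polling for the awaited condition with a timeout. A sketch (the helper name and timing values are illustrative, not from the codebase):

```python
import time

def wait_for(condition, timeout=30.0, interval=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout} seconds")

# Illustrative usage: wait until a node reports a nonzero block height
# (get_block_height is a hypothetical query function):
# wait_for(lambda: get_block_height() > 0)
```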
The CLI needs tests. We can use mdx, as the Apalache team did for their CLI: https://github.com/informalsystems/apalache/blob/main/test/tla/cli-integration-tests.md. Probably it would be a good idea to have multiple mdx files, instead of just one large file for the whole CLI. One file for each command, for instance.
This is the follow-up issue for #27 (and PR #61), as well as for #65, which implement generation, execution, and reporting for single ITF traces. This issue depends on #54 and #55, which implement the necessary programmatic interface from the Model module.
The tasks of this issue are:
- Execute multiple ITF traces from the traces folder. This is the simplest and preferable route for the Atomkraft prototype.
The alternative route is to generate another kind of Pytest using the @mbt decorator, and refer from it directly to the model being used. But this route requires updating the @mbt decorator in modelator, in order to adapt the decorator to the changed modelator API with TOML configs.

Implementation of the alternative route will require a deeper integration with Modelator, and the implementation of caching there, in order to achieve the speed and reproducibility of the first route. Once implemented, though, this will be the main and preferable way of Atomkraft operation, because users will not need to concern themselves with the intermediate phase (ITF traces) if they don't want to. It will be a direct route from a model and a test assertion to the execution of multiple generated traces against the testnet.
As documented in ADR-05 Test execution, there is a dependency on the Model module, for providing programmatic access to obtaining the last trace produced from the model.
Programmatically, the following function needs to be provided by the Model
module:
get_trace(trace = None)
The trace parameter, when given, provides a filesystem path from which to retrieve the trace. When the parameter is omitted, the last trace produced by the atomkraft model check or atomkraft model sample commands should be retrieved from the Atomkraft configuration.
Errors: on any error, an exception should be raised, explaining the error reason (e.g. no trace has been sampled, or the provided trace can't be parsed).
Return value: on success, the trace represented as an ITF
class from Modelator
should be returned.
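A possible shape for this function, sketched with stand-ins: the `last_trace_path` parameter replaces the Atomkraft-config lookup, and json.loads stands in for constructing Modelator's ITF class — both are assumptions for illustration, not the actual API.

```python
import json
from pathlib import Path

def get_trace(trace=None, last_trace_path=None):
    """Sketch of Model.get_trace.

    `last_trace_path` stands in for the value stored in the Atomkraft
    configuration (hypothetical parameter for illustration)."""
    if trace is None:
        trace = last_trace_path
        if trace is None:
            raise RuntimeError("no trace has been sampled yet")
    path = Path(trace)
    if not path.exists():
        raise FileNotFoundError(f"trace file not found: {path}")
    try:
        # Stand-in for constructing Modelator's ITF class from the file.
        return json.loads(path.read_text())
    except json.JSONDecodeError as e:
        raise RuntimeError(f"the provided trace can't be parsed: {e}") from e
```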
In the reactor created via atomkraft reactor
command:
- from cosmos_net.pytest import Testnet: there is no cosmos_net module.
- the testnet parameter: the fixture producing testnet is not defined.
The testnet fixture should spin up the testnet with the parameters defined in the project chain configuration.
The reactor file generated via atomkraft reactor once is silently overwritten, without any warning, on the next atomkraft reactor invocation. This is dangerous, as the user may already have started to work on that file.
The user should be warned if the file exists already, and asked whether they want to overwrite it.
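A guard along these lines would suffice (sketch; the `should_write` helper and its prompt wording are hypothetical — in a Typer-based CLI, typer.confirm could play the role of `ask`):

```python
from pathlib import Path

def should_write(path: Path, ask=input) -> bool:
    """Return True if it is safe to (over)write `path`.

    `ask` is injectable for testing; in the real CLI it could be
    typer.confirm (assumption: the CLI uses Typer)."""
    if not path.exists():
        return True
    answer = ask(f"{path} already exists. Overwrite? [y/N] ")
    return answer.strip().lower() in ("y", "yes")
```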
... as a result, the reactor is non-functional. This needs to be fixed.
I have the Atomkraft project configured correctly, with all binaries and everything. This is the CosmWasm counter example. Most of the time testing works fine using this example, but from time to time it fails like this (from a pytest execution):
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/modelator/pytest/decorators.py:78: in <dictcomp>
arg: step[arg] if arg in step else request.getfixturevalue(arg)
../../../Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/_pytest/fixtures.py:554: in getfixturevalue
fixturedef = self._get_active_fixturedef(argname)
../../../Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/_pytest/fixtures.py:573: in _get_active_fixturedef
self._compute_fixture_value(fixturedef)
../../../Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/_pytest/fixtures.py:659: in _compute_fixture_value
fixturedef.execute(request=subrequest)
../../../Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/_pytest/fixtures.py:1057: in execute
result = ihook.pytest_fixture_setup(fixturedef=self, request=request)
../../../Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/pluggy/_hooks.py:265: in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
../../../Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/pluggy/_manager.py:80: in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
../../../Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/_pytest/fixtures.py:1111: in pytest_fixture_setup
result = call_fixture_func(fixturefunc, request, kwargs)
../../../Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/_pytest/fixtures.py:883: in call_fixture_func
fixture_result = next(generator)
../../../Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/atomkraft/chain/pytest.py:13: in testnet
testnet.oneshot()
../../../Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/atomkraft/chain/testnet.py:214: in oneshot
self.prepare()
../../../Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/atomkraft/chain/testnet.py:182: in prepare
node.add_key(self.validators[i])
../../../Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/atomkraft/chain/node.py:112: in add_key
stdout, stderr = self._execute(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <atomkraft.chain.node.Node object at 0x121a0f850>
args = ['keys', 'add', 'validator-0', '--recover', '--keyring-backend', 'test', ...]
def _execute(self, args, *, stdin: bytes | None = None, stdout=None, stderr=None):
final_args = f"{self.binary} --home {self.home_dir}".split() + args
# print(" ".join(final_args))
stdin_pipe = None if stdin is None else PIPE
with Popen(final_args, stdin=stdin_pipe, stdout=stdout, stderr=stderr) as p:
out, err = p.communicate(input=stdin)
rt = p.wait()
if rt != 0:
> raise RuntimeError(f"Non-zero return code {rt}\n{err.decode()}")
E RuntimeError: Non-zero return code 1
E Error: aborted
E Usage:
E junod keys add <name> [flags]
E
E Flags:
E --account uint32 Account number for HD derivation
E --algo string Key signing algorithm to generate keys for (default "secp256k1")
E --coin-type uint32 coin type number for HD derivation (default 118)
E --dry-run Perform action, but don't add key to local keystore
E --hd-path string Manual HD Path derivation (overrides BIP44 config)
E -h, --help help for add
E --index uint32 Address index number for HD derivation
E -i, --interactive Interactively prompt user for BIP39 passphrase and mnemonic
E --ledger Store a local reference to a private key on a Ledger device
E --multisig strings List of key names stored in keyring to construct a public legacy multisig key
E --multisig-threshold int K out of N required signatures. For use in conjunction with --multisig (default 1)
E --no-backup Don't print out seed phrase (if others are watching the terminal)
E --nosort Keys passed to --multisig are taken in the order they're supplied
E --pubkey string Parse a public key in JSON format and saves key info to <name> file.
E --recover Provide seed phrase to recover existing key instead of creating
E
E Global Flags:
E --home string The application home directory (default "/Users/andrey/.juno")
E --keyring-backend string Select keyring's backend (os|file|test) (default "test")
E --keyring-dir string The client Keyring directory; if omitted, the default 'home' directory will be used
E --log_format string The logging format (json|plain) (default "plain")
E --log_level string The logging level (trace|debug|info|warn|error|fatal|panic) (default "info")
E --output string Output format (text|json) (default "text")
E --trace print out full stack trace on errors
../../../Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/atomkraft/chain/node.py:238: RuntimeError
Not sure what causes that, but it would be nice to investigate.
A user has written a TLA+ model and wants to parse the TLA+ files, type check the spec, and set the model constants.
Once a model is loaded, the user can call the trace
command to generate traces from the model.
This command is also an interface to the Model
class in Modelator.
Linked documents: CLI ADR
The command atomkraft model
is essentially a wrapper around Modelator's Model
, where each of its sub-commands would map almost one-to-one to the methods of Model
:
atomkraft model load <model-path> # in Model it's the parse_file method
atomkraft model typecheck
atomkraft model instantiate <constant-name> <constant-value>
atomkraft model check [<invariant-list>] [--constants=<name>:<value>,...] # for now, checker is Apalache, and checker params are the default values
atomkraft model sample [<sample-list>] [--constants=<name>:<value>,...]
atomkraft model last-sample
atomkraft model all-samples
atomkraft model monitor add markdown <monitor-file.md>
atomkraft model monitor add html <monitor-file.html>
Additionally, model
will have the following sub-commands that require some extra logic not provided by Modelator:
atomkraft model info # will display filename(s), init, next, constants, invariants, ...
atomkraft model monitor remove-all # will remove all initialized monitors
Not included in the first prototype:
atomkraft model config load <model-config-file> # will call the `ModelConfig` class in Modelator
Apalache does not require a cfg
file with the model.
This module can load a model in memory that can be used by other modules.
This module does not expect any connection to other components.
None
- the model command and the sub-commands that call Modelator directly;
- the model sub-commands that do not call Modelator directly;
- model.

Moving towards a more user-friendly version of Atomkraft, we need to document its organizational principles and high-level architecture, to be implemented in the first prototype. The ADR will mostly ignore the inner workings of the tool, and concentrate on its external interface and artifacts.
A user has written a TLA+ model and wants to generate some traces from test assertions from the model, so that they can use some of the generated test traces later for executing on a testnet.
Linked documents:
The atomkraft trace
command generates ITF traces. If no model is given as parameter, it will use a model already loaded in memory with the atomkraft model
command.
Its format is:
atomkraft trace [--model=<model>] <config-path> <test-assertion> [<traces-path>]
where:
- <config-path> is the (path to) TOML file with the model and model checker configuration;
- <test-assertion> is the name of the model operator describing the desired test trace;
- <traces-path> is the location for the trace files.
Upon successful command execution, the generated test trace in the ITF format should be persisted to disk.
A model config is a TOML file with the following format, and located in the same directory as the model:
[Model]
name = "ModuleName"
init = "Init"
next = "Next"
spec = "Spec"
invariants = ["Inv1", "Inv2", ...]
tlc_config_file = "path/to/ModuleName.cfg"
[Constants]
constant_name_1 = "tla_constant_value_1"
...
constant_name_n = "tla_constant_value_n"
[Config]
check_deadlock = false
length = 10  # called depth in TLC
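For illustration, the sections above could map onto a plain data structure like this (a sketch of the proposed ModelConfig class; field names mirror the TOML format, while the defaults and the `from_dict` helper are assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class ModelConfig:
    """Sketch of the proposed common config class for Apalache and TLC."""
    name: str
    init: str = "Init"
    next: str = "Next"
    invariants: list = field(default_factory=list)
    constants: dict = field(default_factory=dict)
    check_deadlock: bool = False
    length: int = 10  # called depth in TLC

    @classmethod
    def from_dict(cls, data: dict) -> "ModelConfig":
        """Build a config from a parsed TOML document (a plain dict)."""
        model = data.get("Model", {})
        cfg = data.get("Config", {})
        return cls(
            name=model["name"],
            init=model.get("init", "Init"),
            next=model.get("next", "Next"),
            invariants=model.get("invariants", []),
            constants=data.get("Constants", {}),
            check_deadlock=cfg.get("check_deadlock", False),
            length=cfg.get("length", 10),
        )
```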
atomkraft config traces-dir <traces-path> # sets the location for the generated trace files
atomkraft config traces-dir # displays the current directory for the trace files
Under the hood, the atomkraft trace
command will call the following Modelator Shell commands:
model = ModelShell.parse_file(<model-path>)
model.typecheck()
config = ModelConfig.parse_file(<config-path>)
model.check(config, <test-assertion>, <traces-path>)
where ModelConfig
would be a new class in Modelator, used as a common data structure for Apalache and TLC configurations.
This command generates ITF traces in the directory default_traces_dir
provided by the Setup module.
- default_traces_dir provided by the Setup module;
- the model command (#17);
- the trace command (making use of Modelator, as described above);
- trace.
#50 added lints and tests in CI, but they are failing because the codebase has not been updated accordingly.
A separate PR is needed to improve code quality to make the lints and tests happy.
Testing the existing Atomkraft tutorial, I found the following two small issues:
When upgrading (via pip install --upgrade atomkraft), I get the following error:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
copier 6.1.0 requires packaging>=21.0, but you have packaging 20.9 which is incompatible.
The installation process goes through, but it is disconcerting to see the error message.
We are figuring out how a user will use Atomkraft - produce tests, interact with test setup, and execute them.
The idea is to provide an atomkraft CLI which creates a pytest project with the necessary configurations.
Fails because of my mistake of adding private submodules. They work on our machines because we have access to the repository.
The cosmwasm-counter example is too complicated.
We need to add an E2E test in CI, which successfully
The latest versions of pylama (8.3.8) and pyflakes (2.5.0) produce an error message. Version 2.4.0 of pyflakes fixes it.
jehan@Jehans-MBP cosmos-sdk % make
./setup.sh
[+] Building 1.4s (19/19) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 37B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/alpine:ed 1.3s
=> [internal] load metadata for docker.io/library/golang:al 1.3s
=> [internal] load build context 0.0s
=> => transferring context: 321B 0.0s
=> [build-env 1/6] FROM docker.io/library/golang:alpine@sha 0.0s
=> [stage-1 1/7] FROM docker.io/library/alpine:edge@sha256: 0.0s
=> CACHED [stage-1 2/7] RUN apk add --update ca-certificate 0.0s
=> CACHED [stage-1 3/7] WORKDIR /root 0.0s
=> CACHED [build-env 2/6] RUN apk add --no-cache curl make 0.0s
=> CACHED [build-env 3/6] WORKDIR /go/src/github.com/cosmos 0.0s
=> CACHED [build-env 4/6] RUN git clone https://github.com/ 0.0s
=> CACHED [build-env 5/6] RUN git checkout v0.44.3 0.0s
=> CACHED [build-env 6/6] RUN make clean && make build-linu 0.0s
=> CACHED [stage-1 4/7] COPY --from=build-env /go/src/githu 0.0s
=> CACHED [stage-1 5/7] ADD ./chain-setup /opt/chain 0.0s
=> CACHED [stage-1 6/7] WORKDIR /opt/chain/ 0.0s
=> CACHED [stage-1 7/7] RUN /opt/chain/init.sh 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:ad757d341b71c2f52eca04c895d880b3 0.0s
=> => naming to docker.io/library/cosmos-image 0.0s
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
jehan@Jehans-MBP cosmos-sdk % ./start-node.sh
7:11PM INF starting ABCI with Tendermint
7:11PM INF Starting multiAppConn service impl=multiAppConn module=proxy
7:11PM INF Starting localClient service connection=query impl=localClient module=abci-client
7:11PM INF Starting localClient service connection=snapshot impl=localClient module=abci-client
7:11PM INF Starting localClient service connection=mempool impl=localClient module=abci-client
7:11PM INF Starting localClient service connection=consensus impl=localClient module=abci-client
7:11PM INF Starting EventBus service impl=EventBus module=events
7:11PM INF Starting PubSub service impl=PubSub module=pubsub
7:11PM INF Starting IndexerService service impl=IndexerService module=txindex
7:11PM INF ABCI Handshake App Info hash= height=0 module=consensus protocol-version=0 software-version=0.44.3
7:11PM INF ABCI Replay Blocks appHeight=0 module=consensus stateHeight=0 storeHeight=0
7:11PM INF asserting crisis invariants inv=0/11 module=x/crisis name=bank/nonnegative-outstanding
7:11PM INF asserting crisis invariants inv=1/11 module=x/crisis name=bank/total-supply
7:11PM INF asserting crisis invariants inv=2/11 module=x/crisis name=distribution/nonnegative-outstanding
7:11PM INF asserting crisis invariants inv=3/11 module=x/crisis name=distribution/can-withdraw
7:11PM INF asserting crisis invariants inv=4/11 module=x/crisis name=distribution/reference-count
7:11PM INF asserting crisis invariants inv=5/11 module=x/crisis name=distribution/module-account
7:11PM INF asserting crisis invariants inv=6/11 module=x/crisis name=staking/module-accounts
7:11PM INF asserting crisis invariants inv=7/11 module=x/crisis name=staking/nonnegative-power
7:11PM INF asserting crisis invariants inv=8/11 module=x/crisis name=staking/positive-delegation
7:11PM INF asserting crisis invariants inv=9/11 module=x/crisis name=staking/delegator-shares
7:11PM INF asserting crisis invariants inv=10/11 module=x/crisis name=gov/module-account
7:11PM INF asserted all invariants duration=6.272542 height=0 module=x/crisis
Error: error during handshake: error on replay: validator set is nil in genesis and still empty after InitChain
Usage:
simd start [flags]
Flags:
Current implementation of project_root() function:
def project_root():
cwd = Path(os.getcwd())
while cwd != cwd.parent:
if (cwd / "pyproject.toml").exists():
return cwd
cwd = cwd.parent
return None
is dangerous: it traverses up to the first pyproject.toml it finds, and returns that directory. There may be other Poetry projects up the tree which are not Atomkraft projects. As a result, arbitrary unrelated projects could be overwritten.
Instead, this function should search for Atomkraft-specific markers, e.g. the .atomkraft directory.
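A safer variant, following the suggestion above, stops only at directories carrying an Atomkraft-specific marker:

```python
import os
from pathlib import Path

def project_root():
    """Walk up from the CWD and return the first directory containing
    an Atomkraft marker (here: a .atomkraft directory), or None."""
    cwd = Path(os.getcwd())
    while cwd != cwd.parent:
        if (cwd / ".atomkraft").is_dir():
            return cwd
        cwd = cwd.parent
    return None
```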
Current implementation of reactor/get_reactor():
def get_reactor() -> PathLike:
if "PYTEST_CURRENT_TEST" in os.environ:
root = "tests/project"
else:
root = project_root()
internal_config_file_path = os.path.join(
root,
constants.ATOMKRAFT_INTERNAL_FOLDER,
constants.ATOMKRAFT_INTERNAL_CONFIG,
)
with open(internal_config_file_path) as config_f:
config_data = tomlkit.load(config_f)
return config_data[constants.REACTOR_CONFIG_KEY]
fails when running a trace from inside Atomkraft via the atomkraft test trace command. The reason is that this function assumes that it can be executed in a Pytest context only when testing the reactor code. But we also use Pytest to execute user tests, so Pytest is also a working environment for us when Atomkraft runs.
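One way to remove the ambiguity is to make the test-fixture path an explicit opt-in instead of inferring it from PYTEST_CURRENT_TEST. A sketch (the ATOMKRAFT_PROJECT_DIR variable and the helper name are hypothetical):

```python
import os

def resolve_project_dir(project_root):
    """Prefer an explicit override; otherwise use the detected project root.

    `project_root` is a callable returning the root directory or None;
    the ATOMKRAFT_PROJECT_DIR environment variable is a hypothetical
    opt-in used by Atomkraft's own test suite."""
    override = os.environ.get("ATOMKRAFT_PROJECT_DIR")
    if override:
        return override
    root = project_root()
    if root is None:
        raise RuntimeError("not inside an Atomkraft project")
    return root
```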
Implement the atomkraft reactor
command, as described in ADR04
As documented in ADR-05 Test execution, there is a dependency on the Model module, for providing programmatic access to sampling the traces from the model.
The dependency needs to be implemented via reexporting functionality of Modelator
, and combining it with the defaults stored in Atomkraft
, in approximately the following form:
get_model_trace(model = None, config = None, samples = None)
where:
- when a parameter is omitted, its default should be taken from the Atomkraft config, where the defaults are set via the atomkraft model subcommands.
Return value: on success, a ModelResult from the Modelator API should be returned.

Once the user has a TLA+ model, they need to write a reactor:
a set of Python functions connecting the actions of the model to executions of the code.
The task is to generate a stub for the reactor.
The command that needs to be implemented is
atomkraft reactor <action-list> <model> [<reactor-stub-file>]
where
- action-list is a list of actions for which to generate stubs;
- model is the TLA+ model for which we are implementing a reactor;
- reactor-stub-file is a path at which the reactor file should be created; if omitted, a default path is used.
The stub should include:
@pytest.fixture(scope="session")
def testnet():
chain_id = "test-cw"
binary = <binary> # as setup in the init command
denom = "stake"
prefix = "juno" #TODO: clarify
coin_type = 118 # TODO: clarify
genesis_config = {
"app_state.gov.voting_params.voting_period": "600s",
"app_state.mint.minter.inflation": "0.300000000000000000",
}
node_config = {
"config/app.toml": {
"api.enable": True,
"api.swagger": True,
"api.enabled-unsafe-cors": True,
"minimum-gas-prices": f"0.10{denom}",
"rosetta.enable": False,
},
"config/config.toml": {
"instrumentation.prometheus": False,
"p2p.addr_book_strict": False,
"p2p.allow_duplicate_ip": True,
},
}
testnet = Testnet(
chain_id,
n_validator=3,
n_account=3,
binary=binary,
denom=denom,
prefix=prefix,
coin_type=coin_type,
genesis_config=genesis_config,
node_config=node_config,
account_balance=10**26,
validator_balance=10**16,
)
testnet.oneshot()
time.sleep(10)
yield testnet
time.sleep(2)
@pytest.fixture
def state():
pass
For each action act from action-list, a stub for the step function connecting the abstract action to the code execution:
@step("act")
def act_step(testnet, state, var1, var2, ..., vark):
    pass
where var1, var2, ..., vark are all the variables of the model, state is the state provided by the state function, and testnet is the blockchain client provided by the testnet function.
Finally, the stub should contain comments with guidance on how to use the stub.
Here is some preliminary feedback regarding the Atomkraft prototype.
Given that our focus is to build an e2e testing framework for Cosmos SDK projects, there are ways to be more specific on certain points, which would ease usage and, possibly, development.
We can first think in terms of module testing, so the first thing to provide is a reactor per module.
So from this project structure:
+- .atomkraft/
+- models/
+- traces/
+- reactors/
+- tests
| +- test_authz.py
| +- test_gov.py
+- reports/
+- testnet
| +- config
| | +- app.toml
| | +- config.toml
| | +- genesis.json
| +- run
| +- validator-1/
| +- validator-2/
| +- validator-3/
+- pyproject.toml
We would maybe have this:
+- .atomkraft/
+- modules/
| +- authz
|
| | +- reactors/
| | +- tests/
| | | +- models/
| | | +- traces/
| | | +- tests/
| | | | +- test_authz.py
| | +- reports/ (Scripts ?)
| +- gov
|
| | +- reactors/
| | +- tests/
| | | +- models/
| | | +- traces/
| | | +- tests/
| | | | +- test_gov.py
| | +- reports/ (Scripts ?)
And a single testnet:
+- testnet
| +- config
| | +- app.toml
| | +- config.toml
| | +- genesis.json
| +- run
| +- validator-1/
| +- validator-2/
| +- validator-3/
+- pyproject.toml
... following up on Issue #14, the atomkraft chain testnet command, besides not having any user help (to be addressed in #32), also has the following deficiencies:
- The node-0 ... node-3 directories are created at the top level of the user project. They should probably be located one level below (in chain or similar).
- As for stdout and stderr, they are empty, and the dirs are removed upon Ctrl+C. No logs or anything else are available to the user.
So user interaction as a whole needs to be thought through.
Right now the testnet fixture is started before execution reaches the reactors. But users may want to dynamically initialize the validator set, the number of genesis accounts, etc. from an Init reactor.
they should be removed when the tests are finished
Inside cosmos.py, lines 66-68: this does not work if the cosmos command output is JSON. Parsing will fail, and call() will always return false, even if the command is successful.
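A more tolerant parse could try JSON first and fall back to plain-text handling. This is a sketch only; the actual cosmos.py logic is not reproduced here, and treating any non-empty plain-text output as success is an assumption:

```python
import json

def parse_command_output(raw: str):
    """Return (ok, payload): try JSON first, fall back to plain text.

    Illustrative sketch; the success criterion for plain-text output
    (any non-empty output) is an assumption, not the real cosmos.py rule."""
    try:
        return True, json.loads(raw)
    except json.JSONDecodeError:
        text = raw.strip()
        return bool(text), text
```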
atomkraft reactor --actions "fizz,fuzz,fizzfuzz" --variables "x,y"
generates the following code. The function name act_step should be different for each generated step.
import time
import pytest
from cosmos_net.pytest import Testnet
from modelator.pytest.decorators import step
keypath = 'action'
@pytest.fixture
def state():
return {}
@step('fizz')
def act_step(testnet, state, x, y):
print("Step: fizz")
@step('fuzz')
def act_step(testnet, state, x, y):
print("Step: fuzz")
@step('fizzfuzz')
def act_step(testnet, state, x, y):
print("Step: fizzfuzz")
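The generator needs to derive a unique, valid function name per action. A sketch of such a helper (hypothetical; the sanitization rule is an assumption):

```python
import re

def step_function_name(action: str) -> str:
    """Derive a unique, valid Python identifier for an action's step function.

    Hypothetical helper for the reactor generator: non-word characters are
    replaced with underscores, and a leading digit gets an underscore prefix."""
    safe = re.sub(r"\W|^(?=\d)", "_", action)
    return f"{safe}_step"

# The generator would then emit, e.g.:
#   @step('fizz')
#   def fizz_step(testnet, state, x, y): ...
```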
Trace executor is one of the top-level components of Atomkraft. It facilitates handling model specifications, generated traces, setting up a testnet(s), and driving a test on it.
The user has generated a test trace in the ITF format and wants to execute it against the testnet, so as to make sure the testnet behaves as expected.
Linked documents: ADR
The format of the proposed CLI command is:
atomkraft run <trace>
where:
- <trace> is the (path to) ITF trace.
Upon successful execution, the user is notified about it; no further action is necessary. Upon unsuccessful execution, the error should be presented to the user, and all the information needed to reproduce the error should be saved (details to be clarified).
Preconditions:
- the project has been initialized (the init command has been successful);
- a reactor exists (the reactor command has been executed, and the user has filled the reactor stub with the method implementations).
Possible failures:
- init might have been executed, but the blockchain binary was moved, vanished from the PATH, or changed;
- reactor has been executed, but the list of actions for which the stub was generated doesn't cover all actions present in the trace.

Work has been conducted on the authz module using the previous version of Atomkraft.
It can be found in this repository:
https://github.com/rnbguy/Authz-Audit
In order to validate the prototype, we have to transpose that work to the new version.
It will also allow us to prioritise the next developments in terms of bug fixing, enhancements, and new features.
First step: test execution. This part is crucial to get user feedback.
Second step: trace generation.
Currently the reactor stub produced via atomkraft reactor is completely undocumented. There should be inline documentation in the file explaining how to use it.
atomkraft init
creates a new git project inside an existing git project.
This is undesired if someone wants to manage multiple test projects in a single git repo.
Also, this prevents us from maintaining some example test projects created by atomkraft
.
This is the follow-up issue for #27 (and PR #61), which implement the basic functionality of producing and executing Pytest tests from traces.
The task of this issue is, during the execution of the current trace or upon finishing it, to retrieve and store in the reports folder, under the name that corresponds to the test being executed:
The user might be given an option to either store the above results for all, or only for failed tests. This is a nice-to-have, and not required.
It looks like, in the process of merge-conflict resolution, the fix for #33 has been applied only partially:
- the node-n directories are removed.
Also, from time to time, some directories fail to be removed, which manifests itself as follows:
tests/test_traces_example0_itf_json_2022_07_27T10_58_14_914.py::test_trace
/Users/andrey/Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/_pytest/unraisableexception.py:78: PytestUnraisableExceptionWarning: Exception ignored in: <function Node.__del__ at 0x13397b370>
Traceback (most recent call last):
File "/Users/andrey/Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/atomkraft/chain/node.py", line 270, in __del__
self.close()
File "/Users/andrey/Library/Caches/pypoetry/virtualenvs/atomkraft-9j0E0YDD-py3.10/lib/python3.10/site-packages/atomkraft/chain/node.py", line 264, in close
shutil.rmtree(self.home_dir)
File "/opt/homebrew/Cellar/[email protected]/3.10.5/Frameworks/Python.framework/Versions/3.10/lib/python3.10/shutil.py", line 724, in rmtree
_rmtree_safe_fd(fd, path, onerror)
File "/opt/homebrew/Cellar/[email protected]/3.10.5/Frameworks/Python.framework/Versions/3.10/lib/python3.10/shutil.py", line 663, in _rmtree_safe_fd
onerror(os.rmdir, fullname, sys.exc_info())
File "/opt/homebrew/Cellar/[email protected]/3.10.5/Frameworks/Python.framework/Versions/3.10/lib/python3.10/shutil.py", line 661, in _rmtree_safe_fd
os.rmdir(entry.name, dir_fd=topfd)
OSError: [Errno 66] Directory not empty: 'config'
warnings.warn(pytest.PytestUnraisableExceptionWarning(msg))
It would be nice to:
- move the node-n directories one level below.

Currently, atomkraft init operates by delegating many tasks to other programs, in particular poetry init, which asks the user lots of irrelevant questions. I believe the following should be done: