adelphi's People

Contributors

aboudreault, absurdfarce, grighetto, jdonenine, tlasica

adelphi's Issues

Allow for the read/write ratio for data integrity testing to be specified

Feature Request

Description

Add support for the read/write ratio of the data integrity testing executed by Adelphi to be controlled by the user.

Describe the solution you'd like

Allow for a read:write ratio to be provided via Helm values.

This could be specified as an integer from 0-100, or as a selection of enumerated values which might make sense, like the following (a small validation sketch follows the list):

  • 0 (writes only)
  • 20 (20% read : 80% write)
  • 50 (50% read : 50% write)
  • 80 (80% read : 20% write)
  • 100 (reads only)
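
For illustration only, a minimal sketch in Python (the helper name and the idea of validating against the enumerated set are assumptions, not existing Adelphi code) of how such a value could be checked and split into read/write percentages:

    # Hypothetical helper: validate an enumerated read percentage supplied via a
    # Helm value and derive the corresponding write percentage.
    ALLOWED_READ_PERCENTAGES = {0, 20, 50, 80, 100}

    def split_read_write(read_pct):
        """Return (read %, write %) for an allowed read percentage."""
        if read_pct not in ALLOWED_READ_PERCENTAGES:
            raise ValueError("read percentage must be one of %s" % sorted(ALLOWED_READ_PERCENTAGES))
        return read_pct, 100 - read_pct

    print(split_read_write(20))  # -> (20, 80), i.e. 20% reads / 80% writes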

Describe alternatives you've considered

N/A

Additional context

An initial look at Gemini indicates that this setting is not currently available on their side; to support it here we'd need to request/add support for it there.

Issue created in Gemini project: scylladb/gemini#262

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: AD-64

Start C* clusters in workflow steps

Currently, both source and target clusters are created as soon as the Helm chart is installed because we're using prebuilt C* images. But once we switch to custom C* images built from the source code, we have to wait for the image to be ready before trying to launch the cluster.
In other words, the installation of a CassandraCluster has to be a step in the Argo workflow.

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: AD-90

Pull schemas from repo and loop through them

At the moment, Adelphi operates on one schema at a time, as specified by the cql_schema prop in values.yaml.
But with the addition of the schemas repo in #40, we have to run through multiple schemas now.
One possible approach is to implement a loop in the Argo workflow, rather than resubmitting the Helm chart multiple times, in order to reduce overhead.

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: AD-99

Repository.create_fork fails on Python 2.x if user is in a Github org

Documenting this here mainly so that there's a record of it. Fix will likely be implemented as part of the ongoing work for #67.

Observed while attempting to run "adelphi contribute" using Python 2.x after switching code to use the repo at datastax/adelphi-schemas (which is in the "datastax" organization):

Traceback (most recent call last):
  File "/work/git/adelphi/python/setup/bin/adelphi", line 202, in <module>
    export(obj={}, auto_envvar_prefix="ADELPHI")
  File "/work/git/adelphi/python/setup/lib/python2.7/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/work/git/adelphi/python/setup/lib/python2.7/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/work/git/adelphi/python/setup/lib/python2.7/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/work/git/adelphi/python/setup/lib/python2.7/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/work/git/adelphi/python/setup/lib/python2.7/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/work/git/adelphi/python/setup/lib/python2.7/site-packages/click/decorators.py", line 21, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/work/git/adelphi/python/setup/bin/adelphi", line 196, in contribute
    fork_repo = origin_repo.create_fork()
  File "/work/git/adelphi/python/setup/lib/python2.7/site-packages/github/Repository.py", line 2088, in create_fork
    "POST", self.url + "/forks", input=post_parameters,
  File "/work/git/adelphi/python/setup/lib/python2.7/site-packages/github/Requester.py", line 322, in requestJsonAndCheck
    verb, url, parameters, headers, input, self.__customConnection(url)
  File "/work/git/adelphi/python/setup/lib/python2.7/site-packages/github/Requester.py", line 413, in requestJson
    return self.__requestEncode(cnx, verb, url, parameters, headers, input, encode)
  File "/work/git/adelphi/python/setup/lib/python2.7/site-packages/github/Requester.py", line 470, in __requestEncode
    requestHeaders["Content-Type"], encoded_input = encode(input)
  File "/work/git/adelphi/python/setup/lib/python2.7/site-packages/github/Requester.py", line 411, in encode
    return "application/json", json.dumps(input)
  File "/usr/lib64/python2.7/json/__init__.py", line 244, in dumps
    return _default_encoder.encode(obj)
  File "/usr/lib64/python2.7/json/encoder.py", line 207, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/usr/lib64/python2.7/json/encoder.py", line 270, in iterencode
    return _iterencode(o, 0)
  File "/usr/lib64/python2.7/json/encoder.py", line 184, in default
    raise TypeError(repr(o) + " is not JSON serializable")
TypeError: NotSet is not JSON serializable

The underlying problem here is the Repository.create_fork function. In v1.45 of PyGithub that function is as follows:

    def create_fork(self, organization=github.GithubObject.NotSet):
        """                                                                                                                                                                       
        :calls: `POST /repos/:owner/:repo/forks <http://developer.github.com/v3/repos/forks>`_                                                                                    
        :param organization: string or "none" or "*"                                                                                                                              
        :rtype: :class:`github.Repository.Repository`                                                                                                                             
        """
        assert organization is github.GithubObject.NotSet or isinstance(
            organization, (str, six.text_type)
        ), organization
        post_parameters = {}
        if organization is not None:
            post_parameters["organization"] = organization
        headers, data = self._requester.requestJsonAndCheck(
            "POST", self.url + "/forks", input=post_parameters,
        )
        return Repository(self._requester, headers, data, completed=True)

Note the disconnect: the assertion checks for NotSet (which is the default value), but the value is added to post_parameters whenever organization is not None. The result is that the non-serializable NotSet gets added to the outgoing HTTP request... which ultimately produces the TypeError shown above.

This bug was identified and fixed in PyGithub/PyGithub#1383. The problem is that (a) that fix wasn't rolled out until PyGithub 1.46 (see https://github.com/PyGithub/PyGithub/releases/tag/v1.46) and (b) the last version of PyGithub that supports Python 2.x is 1.45 (see https://github.com/PyGithub/PyGithub/releases/tag/v1.45).

The upshot is that if you use Python 2.x you'll get a version of PyGithub that includes this bug. If the user your Github token belongs to is in an organization this becomes very problematic, since you have to specify whether you want the fork created in the org or in your personal space. For extra fun, no value of the organization param in the function above will accomplish that goal: if you pass None you'll fail the assertion, but if you pass NotSet you'll hit the JSON serialization issue.
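
As a possible stopgap for Python 2.x users (a sketch only, not a supported PyGithub API: it reuses the same requester internals shown in the quoted function), the fork request can be issued directly so that no NotSet value ever reaches json.dumps:

    import github

    def create_fork_workaround(origin_repo):
        # Mirrors what create_fork does in PyGithub 1.45 but posts an empty
        # parameter dict, so the non-serializable NotSet is never included.
        # Note: this creates the fork under the personal account, since no
        # organization parameter is sent.
        headers, data = origin_repo._requester.requestJsonAndCheck(
            "POST", origin_repo.url + "/forks", input={}
        )
        return github.Repository.Repository(
            origin_repo._requester, headers, data, completed=True
        )

    # fork_repo = create_fork_workaround(origin_repo)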

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: AD-62

Add purpose & maturity options to the schema anonymizer

It's valuable to have some details about an anonymized schema that has been contributed to Adelphi, for example, purpose and maturity level.
We should add command-line options for these so the user can enter this information when running the script.
The values would be written as comment annotations at the beginning of the CQL file.

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: AD-86

Naming collision on schema files for generated PRs

The current naming convention for schema files is <github user>/<keyspace name>. After this was implemented we moved schema anonymization higher up in the process, which means the keyspace name is always "ks_0" (or something similar) at the time of file creation. As a result we're very prone to collisions; the following should make it clear why:

  • User contributes schema for keyspace foo, generates name someuser/ks_0
  • This PR is merged
  • User contributes schema for keyspace bar, generates name someuser/ks_0... but this time it's for an entirely different keyspace
  • New PR cannot be merged since it conflicts with what's already in the repo

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: AD-65

Create CircleCI config file

We have to create a config.yml for cassandra-quality to make it run on CircleCI.
The config file should have steps to install and start the necessary system dependencies (k3d and helm) before launching the Argo workflow.

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: AD-74

Provide sample values.yaml files that are env specific

Some settings change depending on the environment (for example, the default storageClass for k3d is local-path whereas for GKE it is standard).
We can provide some sample config files that are tuned for the environment the user is targeting (local or cloud).

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: AD-87

Write results in parallel and expose files through nginx

The storage volume for the results should be configurable such that the user can choose the most appropriate settings for their environment (cloud or local).

So far, we've had the following challenges:

  • ReadWriteMany volumes work on local k3d clusters but not on GKE;
  • Standalone NFS server works on GKE but doesn't work on k3d (due to DNS issues in kubelet).

What we should do instead is expose the relevant storage settings in values.yaml (we already expose storageClass) so that Adelphi can be configured to use AWS S3 or Google Cloud Storage.

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: AD-68

Consider automating merge of PRs for submission

The current process for merging PRs created by "adelphi contribute" is a manual squash merge, which does nothing to clean up the feature branches created on the user's fork of our repo. We should consider automating an entire "merge submission" process which would:

  • Do the merge of the feature branch
  • (Perhaps optionally) clean up the feature branch on the user's fork

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: AD-66

Report test results

At the moment, we can only tell whether the tests worked by checking that the workflow execution succeeded, but we should collect the results generated by cassandra-diff, Gemini and NoSQLBench and report them to the user.

  • cassandra-diff results are stored in the cassandra-diff keyspace of the target cluster (see the sketch after this list)
  • Gemini results are printed to the console in the pod where it executed
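
For example, a sketch using the Python cassandra-driver (the contact point and the exact diff keyspace name are assumptions; the issue only states that results live in the cassandra-diff keyspace on the target cluster) that dumps whatever cassandra-diff wrote:

    from cassandra.cluster import Cluster

    cluster = Cluster(["cassandra-target"])   # placeholder service name
    session = cluster.connect()

    diff_ks = cluster.metadata.keyspaces["cassandradiff"]  # assumed keyspace name
    for table_name in diff_ks.tables:
        print("== %s ==" % table_name)
        for row in session.execute("SELECT * FROM %s.%s" % (diff_ks.name, table_name)):
            print(row)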

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: AD-75

Test on GKE/EKS

So far, we've been running this in a dev environment locally with k3d.
But we need to start testing it on a real, cloud-based, Kubernetes cluster if we want to allow users to test their workloads at scale.

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: AD-103

Convert a given anonymized schema file to Gemini JSON

Per #14, we can output a JSON schema to use with Gemini directly from the anonymizer script while connected to a running cluster.
But we must also be able to convert an existing schema file that has been pre-generated (a rough skeleton follows).
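
A very rough skeleton of that file-based entry point (Python; the regex parsing and the Gemini JSON field names are assumptions and would need to be replaced by whatever the anonymizer already emits for the live-cluster path):

    import json
    import re

    def cql_file_to_gemini_json(path):
        # Skeleton only: a real conversion needs full CQL parsing (columns,
        # partition/clustering keys, types), not just these regexes.
        with open(path) as f:
            cql = f.read()
        keyspace = re.search(r"CREATE KEYSPACE\s+(\w+)", cql, re.IGNORECASE).group(1)
        tables = re.findall(r"CREATE TABLE\s+(?:\w+\.)?(\w+)", cql, re.IGNORECASE)
        # Field names below are illustrative, not Gemini's confirmed format.
        return json.dumps({"keyspace": {"name": keyspace},
                           "tables": [{"name": t} for t in tables]}, indent=2)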

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: AD-79

Build images for all dependencies before running the workflow

Up until now, we've been building all the dependencies that have no official Docker images (Gemini, Spark, cassandra-diff) on the fly, within the k8s cluster itself, at every execution of Adelphi. This keeps the experience consistent and simple locally and across cloud environments, but as we add more tools, the startup time adds up.

Instead, we should start by at least providing Dockerfiles such that one could build these images once and reuse them locally with k3d. Later, we could publish these images to a public registry.

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: AD-85

Allow for the concurrency level of data integrity testing to be specified

Feature Request

Description

Add support for the concurrency level of the data integrity testing executed by Adelphi to be controlled by the user.

Describe the solution you'd like

Allow for a number of concurrent threads to be provided via Helm values.

Describe alternatives you've considered

N/A

Additional context

An initial review of the Gemini command-line options suggests this can be controlled through the following flag:

-c, --concurrency uint                               Number of threads per table to run concurrently (default 10)
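
For illustration, a small Python sketch of threading a user-supplied value into the Gemini invocation (only the --concurrency flag is taken from the help output above; the surrounding names are hypothetical):

    # Hypothetical glue: build Gemini's argument list from a Helm-provided
    # concurrency value, defaulting to Gemini's documented default of 10.
    def gemini_args(concurrency=10, extra_args=()):
        if int(concurrency) < 1:
            raise ValueError("concurrency must be a positive integer")
        return ["gemini", "--concurrency", str(concurrency)] + list(extra_args)

    print(" ".join(gemini_args(25)))  # gemini --concurrency 25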

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: AD-63

Automate execution of the schema anonymizer script

Now that we have a way to use the output of the schema anonymizer per #8, we should automate the execution of the script itself. That is, given the address of a production C* cluster (as a Helm config value), we should run the script as the first step of the workflow.
If the user provides the schema manually in values.yaml, that should take precedence over running the script against production; in other words, it should be a conditional step in the workflow.

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: AD-71

Schedule Smoke Testing Integration

The goal of this issue is to include some smoke tests that verify the basic Adelphi features.
We want to catch things like incompatibilities when running Adelphi or the Cass-Operator on the latest version of k3d.

We can also leverage this work to produce a routine/scheduled cadence of tests against C* trunk to externally verify its continued reliability over time.

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: AD-88

Validate and/or fix handling of virtual keyspaces/tables in export operations

#40 provided some initial support for detecting and working with virtual keyspaces in the export process but this work won't be completed in that ticket. Specifically we want to make sure the following things are true:

  • Virtual keyspaces/schemas ARE NOT returned as "regular" keyspaces/tables in generated schemas
  • Virtual keyspaces/schemas ARE included in the correct way
    • I believe these two are true already
    • Recollection is that virtual keyspaces/schemas are displayed as the creation statement for them + the backing table... but this needs to be verified
  • Make sure virtual keyspaces/tables don't disrupt any cases of iteration over a collection of metadata objects in the code
    • An example: #40 likely will not exclude virtual keyspaces when building its list of keyspace objects to export... and it probably should (see the sketch after this list)
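
A minimal sketch of that exclusion, assuming the Python cassandra-driver (which flags virtual keyspaces with a virtual attribute on their metadata objects):

    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])  # placeholder contact point
    cluster.connect()

    # Skip virtual keyspaces when building the list of keyspaces to export;
    # getattr() keeps this safe on older driver versions without the flag.
    exportable = [ks for ks in cluster.metadata.keyspaces.values()
                  if not getattr(ks, "virtual", False)]
    print([ks.name for ks in exportable])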

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: AD-98

Consider adding a feature to "adelphi" app to allow a user to determine whether a schema has already been submitted

Recent changes have moved the anonymization process to an earlier point in the workflow of the "adelphi" app. As a result the actual files in the generated Github PRs are named after anonymized keyspaces... which leaves a user with no obvious way to determine whether a keyspace they want to submit to Adelphi has already been submitted. This ticket would address that deficiency.

We likely wouldn't need much more than (1) adding a SHA hash of the (anonymized) schema as metadata and (2) allowing some kind of ability to query metadata from the "adelphi" app.

One might also ask why we bother using the anonymized keyspace name for the file name in the repo if it doesn't provide any useful information. That question is entirely legitimate and is being explored elsewhere... it'll probably wind up in a ticket of its own. That work should be considered separate from this effort, however; the focus of this ticket is very much on the ability to determine whether a schema has already been submitted. They're certainly related, though: I can imagine the naming convention in the repo being changed to use the SHA hash of the schema, with that hash also serving as the basis for the query mechanism. A small sketch of the hashing idea follows.
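
A minimal sketch of (1), assuming we fingerprint the anonymized CQL text itself (the normalization step is an assumption, intended to keep cosmetic whitespace changes from altering the hash):

    import hashlib

    def schema_fingerprint(schema_cql):
        # Collapse whitespace so formatting-only differences hash identically.
        normalized = " ".join(schema_cql.split())
        return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

    print(schema_fingerprint("CREATE KEYSPACE ks_0 WITH replication = {...};"))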

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: AD-67

CI/CD integration of anonymizer script testing

Integrate testing of the anonymizer script into CI/CD pipelines.

To standardize with other community projects (k8ssandra), it would be ideal if we could achieve this using GH Actions.

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: AD-101

Validate Gemini for data integrity testing

One of the challenges we have right now with NoSQLBench is generating the workload YAML with CQL statements.
While that's still in the works, we can try Gemini, which may have a lower barrier to entry.

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: AD-106

Flesh out support for (some) custom indexes

Current work on anonymizer testing (#54) excludes custom indexes from anonymizer output under the assumption that custom index classes may or may not be present when the schema is recreated. This is certainly true as a general matter, but it is the case that some well-known custom index classes will always (or at least nearly always) be available in practice. This ticket intends to refine this logic a bit by allowing a defined set of well-known custom index classes to be included in the anonymized schema output.
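
A minimal sketch of that filter, assuming the Python cassandra-driver's IndexMetadata (the allow-list contents are illustrative; SASIIndex is a real Cassandra class, but the exact set would be decided in this ticket):

    # Keep non-custom indexes, plus custom indexes whose implementation class
    # is on a well-known allow-list.
    WELL_KNOWN_INDEX_CLASSES = {
        "org.apache.cassandra.index.sasi.SASIIndex",  # illustrative entry
    }

    def include_index(index_meta):
        if index_meta.kind != "CUSTOM":
            return True
        class_name = (index_meta.index_options or {}).get("class_name")
        return class_name in WELL_KNOWN_INDEX_CLASSES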

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: AD-104

Use a public C* image by default if git hash is not specified

Issue #9 introduced a way to build the C* image from source for use in the target cluster, and made that the default behavior. Instead, we should give preference to using a publicly available image from Docker Hub when possible.

The goal of this ticket is to add a conditional in the Argo workflow that checks the git_hash configuration and, if it is not provided, skips building the image.

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: AD-89

Add step to upgrade a single node of the target cluster

We rely on the C* Operator to handle the state of the clusters we start with cassandra-quality.
One of the requirements for this project is the following:

Automated cassandra-diff mixed version integration testing:

  • command-line based
  • k8s, ds operator + sidecar orchestration
  • populate data in 1st cluster w/3 node RF=3 with arbitrary schema and nosqlbench
  • spin up 2nd cluster on same version
  • >>> upgrade 1 of the 3 nodes <<<
  • populate data in 2nd cluster
  • cassandra-diff to ensure both clusters match

But the Operator doesn't allow upgrading a single node.
This is currently a BLOCKER.

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: AD-97

Convert integration test run script to Python, integrate with test frameworks for more granular feedback

Current work on the anonymizer test suite (see #54) is adding an integration test suite which spins up various versions of C* via Docker, loads them with a complex schema, and performs anonymization on the resulting keyspace(s). The driver for this effort is currently implemented as a shell script with results reported via the exit code... but this only allows a binary pass/fail determination to be made.

Ideally we'd like individual success/failure feedback for each C* version. A straightforward way to accomplish this would be to convert the runner to Python and bring in a well-known testing framework; a sketch follows.
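
A sketch of what that could look like with pytest (the script name and version list are placeholders; the point is that each C* version becomes its own reported test case):

    import subprocess

    import pytest

    CASSANDRA_VERSIONS = ["2.2", "3.0", "3.11", "4.0"]  # illustrative list

    @pytest.mark.parametrize("version", CASSANDRA_VERSIONS)
    def test_anonymizer_against_version(version):
        # Re-use the existing shell driver (name is a stand-in) per version so
        # failures are reported individually rather than as one exit code.
        result = subprocess.run(["./run-integration-test.sh", version])
        assert result.returncode == 0, "anonymizer failed against C* %s" % version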

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: AD-105

Enable a repository of anonymized schemas

This task is two-fold:

  • Create a folder/repository where people can contribute their anonymized schemas (through PRs).
  • Add a Helm config to let people choose which schema they want to use when running Adelphi.

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: AD-78
