
greenlight's Issues

Implement 'clean' command

Fill out the current stub for the clean command. This should load a collection of result files from the arguments and perform all of the cleanup steps. This is meant to be used to gracefully recover from an interrupted test run.

Generalize test function output

Instead of tests being responsible for returning an (un)modified context map, we can make this a bit more explicit. If our create-foo function returns a test step, we could have a few options for modifying the context:

(defn create-foo
  [params]
  #::step
  {:tenant/id (step/lookup :tenant/id)
   :test
   (fn [step] ; inputs or step?
     ...
     foo-id)
   ;; option 0
   ;; nothing
   ;; option 1
   :output :foo/id
   ;; option 2
   :output (fn [ctx foo-id] (i-dont-even-know ctx :foo/id foo-id))
   })

Option 0 would be no ::step/output, meaning return the context unmodified.
Option 1 would assoc :foo/id into the context with the value foo-id.
Option 2 gives the most flexibility: a function of the input context and the test step's return value foo-id.

Make test functions unaware of location of inputs in context

Test inputs are pulled directly from a ctx map that is passed along from test step to test step. Components are pulled into scope by explicit ::step/component entries. We can unify these two input mechanisms. For example, if our make-foo returns a step:

(make-foo 
  {:foo 123
   :bar "baz"
   ;; Adding a `:qux` to input map based on the context
   :qux (step/lookup :look-here)
   ;; Adding a client key based on test components
   ;; replacing ::step/component
   :client (step/component :foo/client)})

This simplifies test steps: they no longer need to care where their inputs from context come from, and can work on a single flat map of inputs.

Support splicing collections of steps into deftest

Would like to be able to write deftests like so:

(deftest my-test
  "My test"
  #::step{:name 'step-1}
  #::step{:name 'step-2}
  [#::step{:name 'step-3-a}
   [#::step{:name 'step-3-b-I}
    #::step{:name 'step-3-b-II}]]
  #::step{:name 'step-4})

And have Greenlight execute step-1, step-2, step-3-a, step-3-b-I, step-3-b-II, and step-4 in that order.

This will enable interesting use cases such as writing helper functions that return collections of steps.

(defn setup-foo
  []
  [#::step{:name 'create-foo}
   #::step{:name 'update-foo-draft}
   #::step{:name 'promote-foo-draft}])

(deftest foo-lifecycle-test
  "Testing foos"
  (setup-foo)
  ...)

Sequence of steps should be reusable as well

Right now there are two basic constructs: step and test. While step is reusable, IMO a collection of steps should also be easily reusable.

To illustrate with one concrete example: I'm using greenlight to write end-to-end tests, and I have a sequence of steps to create a user, validate their email, and save their data in the greenlight context. While I'm able to use (concat user-creation-steps [...]) to test most flows, there are places where I can't, such as when I want to create two different users and have them saved under different keys in the context.

Having an easy way of composing steps would be really helpful.
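A minimal sketch of what parameterized, composable step collections could look like (the `user-creation-steps` helper, its step names, and the `ctx-key` parameter are all hypothetical):

```clojure
(defn user-creation-steps
  "Returns the steps that create a user and validate their email,
  storing the resulting user id under `ctx-key` so that two
  invocations don't collide in the context."
  [ctx-key]
  [#::step{:name 'create-user
           :output ctx-key}
   #::step{:name 'validate-email
           :inputs {:user-id (step/lookup ctx-key)}}])

;; Creating two users under distinct context keys:
(concat (user-creation-steps :user/alice)
        (user-creation-steps :user/bob))
```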

deftest without docstring silently drops first step

The following test:

(deftest my-test
  #::step{:name 'step-1}
  #::step{:name 'step-2})

will only execute step-2, because step-1 is treated as the docstring.

I'd prefer docstrings to be optional in deftests, or for deftest to throw an error if the second argument isn't a string.
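One possible fix, sketched here purely to illustrate the argument handling (not greenlight's actual implementation), is for the macro to normalize its arguments so the docstring is optional:

```clojure
(defmacro deftest
  [test-name & body]
  ;; Only treat the first form as a docstring when it actually
  ;; is a string; otherwise every form in `body` is a step.
  (let [[docstring steps] (if (string? (first body))
                            [(first body) (rest body)]
                            [nil body])]
    ;; build the test from `docstring` and `steps`...
    ,,,))
```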

Improve test/step progress reporting

Once there's a more fleshed-out test suite available, work on the in-test reporting code to improve how it looks. Minimally, this should involve ANSI color coding (unless --no-color is set).

Throw error when step fails to return a context

I regularly write steps that accidentally neglect to return ctx when no context mutations occur in that step. When that happens, the test failure usually manifests in a later step and is very hard to track down.

I'd like Greenlight to throw an error if the return value of any step's :greenlight.step/test function is not a map.
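A sketch of the guard this would add inside the step runner; the surrounding names are illustrative, not greenlight's actual internals:

```clojure
(let [ctx' ((::step/test step) ctx)]
  ;; Fail fast at the offending step instead of letting a non-map
  ;; context propagate and break a later step.
  (when-not (map? ctx')
    (throw (ex-info "Step did not return a context map"
                    {:step (::step/name step)
                     :returned ctx'})))
  ctx')
```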

Repo defaults to development branch

Noob fail on my part, but the default branch on GitHub is development, which means the README describes unreleased features by default. This is misleading (I was trying to use the runner/ManagedSystem protocol, only to realise it hasn't been merged into master yet).

Improve output for unexpected exceptions in tests

When a test throws an unexpected exception, greenlight currently prints a big stack trace that starts with an ExecutionException pointing at a line in greenlight's own code. It would be a nice improvement to make the exception from the test case more front and center for debugging.

Example test code:

(ns amperity.user.main
  (:require
    [greenlight.runner :refer [main]]
    [greenlight.test :refer [deftest]]
    [greenlight.step :refer [defstep]]))


(defstep example-step
  "Throws an exception"
  :title "Example"
  :test (fn [_]
          (throw (ex-info "ouch" {}))))


(deftest example-test
  "Example test"
  [(example-step {})])


(defn -main
  [& args]
  (main
    (constantly nil)
    [(example-test)]
    args))

Example output:

$ lein run -m amperity.user.main
Starting test system...
Running 1 tests...

 * Testing example-test
 | amperity.user.main:15
 | Example test
 |
 +->> Example

ERROR in () (FutureTask.java:122)
Unhandled ExecutionException: clojure.lang.ExceptionInfo: ouch {}
expected: nil
  actual: java.util.concurrent.ExecutionException: clojure.lang.ExceptionInfo: ouch {}
 at java.util.concurrent.FutureTask.report (FutureTask.java:122)
    java.util.concurrent.FutureTask.get (FutureTask.java:205)
    clojure.core$deref_future.invokeStatic (core.clj:2302)
    clojure.core$future_call$reify__8454.deref (core.clj:6974)
    clojure.core$deref.invokeStatic (core.clj:2324)
    clojure.core$deref.invoke (core.clj:2306)
    greenlight.step$advance_BANG_.invokeStatic (step.clj:294)
    greenlight.step$advance_BANG_.invoke (step.clj:274)
    greenlight.test$run_steps_BANG_.invokeStatic (test.clj:164)
    greenlight.test$run_steps_BANG_.invoke (test.clj:151)
    greenlight.test$run_test_BANG_.invokeStatic (test.clj:206)
    greenlight.test$run_test_BANG_.invoke (test.clj:197)
    clojure.core$partial$fn__5841.invoke (core.clj:2631)
    clojure.core$mapv$fn__8445.invoke (core.clj:6912)
    clojure.lang.PersistentVector.reduce (PersistentVector.java:343)
    clojure.core$reduce.invokeStatic (core.clj:6827)
    clojure.core$mapv.invokeStatic (core.clj:6903)
    clojure.core$mapv.invoke (core.clj:6903)
    greenlight.runner$run_tests_BANG_.invokeStatic (runner.clj:208)
    greenlight.runner$run_tests_BANG_.invoke (runner.clj:190)
    greenlight.runner$main.invokeStatic (runner.clj:295)
    greenlight.runner$main.invoke (runner.clj:263)
    amperity.user.main$_main.invokeStatic (main.clj:22)
    amperity.user.main$_main.doInvoke (main.clj:20)
    clojure.lang.RestFn.invoke (RestFn.java:397)
    clojure.lang.Var.invoke (Var.java:380)
    user$eval140.invokeStatic (form-init17495549867506550820.clj:1)
    user$eval140.invoke (form-init17495549867506550820.clj:1)
    clojure.lang.Compiler.eval (Compiler.java:7177)
    clojure.lang.Compiler.eval (Compiler.java:7167)
    clojure.lang.Compiler.load (Compiler.java:7636)
    clojure.lang.Compiler.loadFile (Compiler.java:7574)
    clojure.main$load_script.invokeStatic (main.clj:475)
    clojure.main$init_opt.invokeStatic (main.clj:477)
    clojure.main$init_opt.invoke (main.clj:477)
    clojure.main$initialize.invokeStatic (main.clj:508)
    clojure.main$null_opt.invokeStatic (main.clj:542)
    clojure.main$null_opt.invoke (main.clj:539)
    clojure.main$main.invokeStatic (main.clj:664)
    clojure.main$main.doInvoke (main.clj:616)
    clojure.lang.RestFn.applyTo (RestFn.java:137)
    clojure.lang.Var.applyTo (Var.java:705)
    clojure.main.main (main.java:40)
Caused by: clojure.lang.ExceptionInfo: ouch
{}
 at amperity.user.main$example_step$fn__1830.invoke (main.clj:12)
    greenlight.step$advance_BANG_$fn__1524.invoke (step.clj:293)
    clojure.core$binding_conveyor_fn$fn__5754.invoke (core.clj:2030)
    clojure.lang.AFn.call (AFn.java:18)
    java.util.concurrent.FutureTask.run (FutureTask.java:264)
    java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1128)
    java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:628)
    java.lang.Thread.run (Thread.java:829)
 | Unhandled ExecutionException: clojure.lang.ExceptionInfo: ouch {}
 | [ERROR] (0.006 seconds)
 |
 |
 * ERROR (0.014 seconds)


Ran 1 tests containing 1 steps with 1 assertions:
* 1 error

Implement HTML reports

After test results are available, we can generate an HTML report to provide a more human-friendly format for consuming the results. This doesn't have to be fancy, but should be substantially easier to parse than the raw EDN result data.

  • Show an aggregate of number of tests, total elapsed time, aggregate assertions, etc.
  • Detailed info for each test (bonus: expand/collapse for more info).
  • Bonus: some kind of test-case timeline showing when things ran. (more useful for concurrent test runs)

Kaocha support

I just put something together to run greenlight with kaocha instead of the default runner: https://github.com/caioaao/kaocha-greenlight. Just wondering if you want to maintain this repository as well. It still needs CI/Clojars config, some implementations are missing, and there are no docs, but at least it's already working 😬 I can get to those in time, but I could use some help.

The biggest motivation for me is this (taken from this blog post):

If we have this many test runners already, why create another one? All of the mentioned projects are available for a subset of Clojure build tools (lein, boot, Clojure CLI), and implement a subset of all possible test runner features, but they don’t compose. I can’t take some features of one and a few of the other.

To illustrate this point, I'm already using greenlight with a watch mode and profiling in one project.

Make title derivable from context as a special case

It's nice to have the title of a test step be based on information in the context, for example something like:

{::step/name 'make-foo-in-bar
 ::step/title (fn [ctx] (format "Making foo in bar %s" (:bar/id ctx)))
 ,,,}

Output files contain non-readable artifacts

Output report files from Greenlight contain artifacts that aren't EDN-readable. One such instance is a passing test assertion like

{:actual (#<Fn@1bc9f6 clojure.core/odd_QMARK_> 3),
 :expected (odd? 3),
 :message nil,
 :type :pass}

since the :actual value can't be read using clojure.edn/read-string. The underlying issue is described in https://dev.clojure.org/jira/browse/CLJ-1379.

Before writing output result files, we should clean up these instances, as well as any other non-serializable values (such as exceptions), so that the output files can be read back.
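One way to do this, sketched under the assumption that stringifying unreadable values is acceptable, is to probe each value with a print/read round trip and fall back to its printed string form:

```clojure
(require '[clojure.edn :as edn]
         '[clojure.walk :as walk])

(defn edn-safe
  "Replace any value that can't round-trip through EDN
  (functions, exceptions, etc.) with its printed string form."
  [result]
  (walk/postwalk
    (fn [v]
      (try
        ;; Probe: if this throws, `v` can't be read back as EDN.
        (edn/read-string (pr-str v))
        v
        (catch Exception _
          (pr-str v))))
    result))
```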

Support optional step inputs

greenlight.step/lookup does not allow any inputs to be omitted, so the typical workaround is something like the following, which hurts test understandability (the step depends on more than just its :inputs) and increases test boilerplate.

#:greenlight.step
{:inputs {:some-required-data (step/lookup :some-required-data)}
 :test (fn [{:keys [some-required-data] :as ctx}]
         (let [some-optional-data (:some-optional-data ctx)]
           ,,,))}
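A possible API sketch; the extra default-value arity on step/lookup is hypothetical:

```clojure
#:greenlight.step
{:inputs {:some-required-data (step/lookup :some-required-data)
          ;; Hypothetical: a lookup that tolerates a missing key
          ;; by supplying a default instead of throwing.
          :some-optional-data (step/lookup :some-optional-data ::not-found)}
 :test (fn [{:keys [some-required-data some-optional-data]}]
         ,,,)}
```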

Include Exception in step error outcomes

Currently, if a step throws an exception, only a string is included in the output. This causes information to be lost. This would be especially useful when creating integrations with other runners, like kaocha-greenlight, so it could report the full stacktrace.

Alternate System Support (Integrant/mount/etc)

Request

Please provide support for using system management libraries other than com.stuartsierra/component, for example Integrant.

Justification

I understand I could use a bridge of sorts to handle starting/stopping an integrant system using component, but it'd be nice to choose my own way of managing stateful portions of integration tests. It'd probably also be for the best if the core greenlight project didn't depend on a particular version of component anyway.


Suggested design:

One potential pattern I've seen for handling this is to provide a multimethod/protocol hook for the relevant operations and a library for each "popular/supported" framework that includes the relevant polymorphic implementations of the hook for said framework.
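For illustration, a protocol-based hook might look like the following; the protocol and function names are hypothetical:

```clojure
(defprotocol TestSystem
  "Lifecycle hook that greenlight could call, so any system
  library (component, Integrant, mount, ...) can participate."
  (start-system! [this] "Start the system and return a running instance.")
  (stop-system! [this] "Stop the running system."))

;; A per-framework adapter library (e.g. for Integrant) would then
;; implement this protocol for its own system representation, and
;; the core greenlight project would only depend on the protocol.
```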

Discover tests

Right now it's on the user to inject the set of tests they want to run. Instead, the library should use a mechanism similar to clojure.test to discover deftest vars in the available namespaces on the classpath. Better would be an option to restrict namespaces loaded to a specific prefix or other pattern.

Implement JUnit XML reports

Many CI/CD systems have built-in integrations for parsing and displaying JUnit XML test results. We should be able to get a lot of value from generating even a basic report in XML to hook into these systems.

Implement 'report' command

Fill in the current stub for the report command. This should load a collection of result files from the arguments and generate a report as directed. This is mostly useful to aggregate results from (potentially many) separate test runs.

Custom test report function?

Hi all,

Is it possible to supply a custom test reporter when running a suite of tests via run-tests? I'd like to be able to generate my own output for each test, as certain build pipelines require specific output to be written to stdout in order to enable various UI features.

Reading the source, I found that run-tests! simply binds greenlight.test/report to a function that's called after each test, which sounded perfect. Unfortunately it isn't possible to bind this from outside; it is possible when running a single test with run-test!, however.

So it seems my only option as it stands is to discover tests using find-tests and run each test individually. Of course, this means I lose some information about how many tests have passed or failed unless I keep track of that myself, which I'd rather avoid.

Is there perhaps something I'm missing?

Thanks.

Validate configuration and result specs

The various data pieces in greenlight.step and greenlight.test are fairly well specified, so we should leverage this by actually validating the inputs and outputs in the relevant functions.

Implement configurable retry

Ideally every integration test would work every time, but I'm sure we've all written or seen tests that work most of the time and sometimes need to be retried. It would be great if tests and steps had retry configuration that let you specify the number of times to retry them before they're regarded as failed.
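A minimal sketch of the runner-side retry loop this implies; a hypothetical ::step/retries key would supply the attempt count:

```clojure
(defn run-with-retries
  "Call the zero-argument function f, retrying on exceptions
  up to `attempts` times before rethrowing the last error."
  [attempts f]
  (loop [n attempts]
    (let [result (try
                   {:value (f)}
                   (catch Exception e
                     {:error e}))]
      (cond
        ;; Success (even a nil/false return) ends the loop.
        (contains? result :value) (:value result)
        ;; Attempts remain: try again.
        (> n 1)                   (recur (dec n))
        ;; Out of attempts: surface the final failure.
        :else                     (throw (:error result))))))
```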

Support initialization of test context

Tests always start with an empty context, but occasionally you want some information available in the context from the start of a test. It's possible to create a dummy step to fill in these context values:

#::step{:name 'filling-context
        :title "Filling context values"
        :inputs {}
        :test (constantly "my-output-value")
        :output :foo/bar}

This makes a context of {:foo/bar "my-output-value"} available to all subsequent steps.
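A more direct alternative would be letting deftest (or the runner) accept an initial context map; something like the following, where the ::test/context option is hypothetical and not part of greenlight today:

```clojure
(deftest my-test
  "Test with a pre-populated context"
  ;; Hypothetical: seed the context instead of using a dummy step.
  {::test/context {:foo/bar "my-output-value"}}
  #::step{:name 'first-step
          ,,,})
```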

Support for distributed or parallelized execution of tests

We have a TODO in the README to support parallelized/distributed tests. Tests are isolated enough that simply partitioning the set of tests gives good distribution, but we should consider more first-class support for distributed tests.

Blocking this is the need for a method of rolling up reports from multiple test runs into a unified report (#5, #21).

Support fixture-style test wrappers

It would be great to support wrapper functions which can be called around tests and optionally around each step. This could be used similar to clojure.test fixtures, but we don't really need to do common setup/teardown since greenlight already has facilities for that. Instead, we want to use this to add observability aspects to the tests to integrate them with tracing tools like honeycomb.
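Sketching the shape of such a wrapper, under the assumption that a fixture receives the function that runs a step and returns a decorated version (all names here are hypothetical):

```clojure
(defn with-tracing
  "Wrap the step runner so every step executes inside a trace span."
  [run-step]
  (fn [ctx step]
    ;; `trace/with-span` stands in for a real tracing API,
    ;; e.g. a honeycomb client.
    (trace/with-span [{:name (::step/title step)}]
      (run-step ctx step))))
```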
