
marklogic-unit-test's Introduction


Easy testing of custom MarkLogic modules

marklogic-unit-test enables you to write and run automated tests for the custom JavaScript modules and XQuery modules that you can write and deploy to MarkLogic. Tests can be written in either JavaScript or XQuery and can test modules written in either language as well. By getting tests in place for your custom modules, you can speed up development of your MarkLogic applications and ensure that they can be changed as easily as possible as new requirements are introduced.

marklogic-unit-test includes the following components:

  1. A MarkLogic library module to help you write your test modules.
  2. A simple web interface for running tests.
  3. Java libraries for running your tests via JUnit5 and for integrating with other popular Java testing frameworks.
  4. A REST endpoint for integrating with testing frameworks in any language.

Please see the user guide to get started with adding marklogic-unit-test to your project.
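To give a flavor of what a test looks like, here is a minimal sketch of an SJS test module (the module path in the comment and the values being compared are purely illustrative). A test module simply builds and returns a sequence of assertion results from the test helper; hyphenated XQuery helper names such as assert-equal surface in SJS as camelCase:

// Hypothetical location: src/test/ml-modules/root/test/suites/my-suite/my-first-test.sjs
'use strict';

const test = require('/test/test-helper.xqy');

const assertions = [];
assertions.push(test.assertEqual(4, 2 + 2));
assertions.push(test.assertEqual('hello', 'hello'));

// The returned sequence of assertion results is what the test runner reports on.
assertions;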

marklogic-unit-test's People

Contributors

billfarber, cskeefer, daveegrant, dependabot[bot], dmcassel, hansenmc, jamesagardner, jonesyface, paxtonhare, peetkes, prestonmcgowan, rjrudin, ryanjdew, sameerapriyathamtadikonda, venuiyengar


marklogic-unit-test's Issues

Was not able to use the ml-unit-test framework

I see that the ml-unit-test framework creates two databases and app servers by default for the test cases, but I don't want it to work that way.

I want my application databases and app servers to be used for the test code, and to deploy the Roxy-related test files into my application-specific modules database.

I am commenting out the following properties in my application's gradle.properties:

#mlHost=localhost
#mlAppName=unit-test-example
#mlRestPort=8134

#mlTestRestPort=8135
#mlUsername=admin
#mlPassword=admin

Even with these properties commented out, I still see the app servers and databases getting created. I have not been able to integrate ml-unit-test. Can someone please help me with this?
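For reference, a heavily hedged sketch of the kind of configuration being asked about, using only property names that appear elsewhere in this project's issues (mlTestDbName, mlTestRestServerName, mlTestRestPort); whether a given ml-gradle or DHF version honors these properties to reuse existing resources instead of creating new ones is not confirmed here:

# Sketch only - pointing the test database/server properties at existing
# application resources; verify the behavior against your ml-gradle/DHF version.
mlTestDbName=unit-test-example-content
mlTestRestServerName=unit-test-example
mlTestRestPort=8134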

Document Using Test Data

Reference this suite as an example: https://github.com/marklogic-community/marklogic-unit-test/tree/master/marklogic-unit-test-client/src/test/ml-modules/root/test/suites/Test%20Suites

Note that test-data is a reserved directory for suite-specific test data.

Show examples of using the test:get-test-file("filename.txt") function. Note that it resolves files relative to that suite's test-data directory, so if called in a suite at /path/to/suite it should load /path/to/suite/test-data/filename.txt.
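A short sketch of what such an example might look like from an SJS test; the file name and the document URI under test are hypothetical, and the camelCase name assumes the same hyphen-to-camelCase mapping shown for assertEqual elsewhere in this project:

'use strict';
const test = require('/test/test-helper.xqy');

// For a test module in /test/suites/my-suite/, this should resolve to
// /test/suites/my-suite/test-data/expected.txt.
const expected = test.getTestFile('expected.txt');

// Hypothetical document produced by the code under test.
const actual = cts.doc('/output/result.txt');

[ test.assertEqual(expected, actual) ];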

Build and Test marklogic-junit5 using Travis CI

Travis CI appears to build only the marklogic-unit-test-client sub-module. We should configure Travis CI to also build and test the marklogic-junit5 module on each pull request so that issues are caught more easily.

Support Nested Test Suites

When you create a test suite that is nested in order to, for example, mirror your modules folder structure, the suites are not discovered. Only immediate child folders of /test/suites are found.

To reproduce, create a test suite under /test/suites/root/my-suite/.

I think I've found the problematic code here: fn:not(fn:contains($path, "/")). I don't know what the motivation for this requirement was.

assertEqual cannot compare JS/JSON arrays

'use strict';
const helper = require('/test/test-helper.xqy');
helper.assertEqual([1, 'b'],[1, 'b']);

fails. It would be great if assertEqual recognized array values and delegated to assertEqualJson.
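Until then, one workaround sketch is to serialize both arrays before comparing, so assertEqual sees two equal strings (this sidesteps the type issue rather than fixing it):

'use strict';
const helper = require('/test/test-helper.xqy');

// Workaround sketch: compare serialized forms instead of the arrays themselves.
const expected = [1, 'b'];
const actual = [1, 'b'];
[ helper.assertEqual(JSON.stringify(expected), JSON.stringify(actual)) ];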

TestManager does not run teardown or suiteTeardown

This can cause tests to fail when they should pass (I'm running into this). I can submit a PR that will pass the parameters necessary to get the teardowns running. Longer term, it would make sense to add options to the gradle task to control this (-PrunTeardown={true|false}, -PrunSuiteTeardown={true|false}).

Make assert-same-values work for JSON arrays

Looks like the following would do it. I'll test and submit a PR when I can.

declare variable $ARRAY-QNAME := fn:QName("http://marklogic.com/xdmp/json", "array");

declare function helper:assert-same-values($expected as item()*, $actual as item()*)
{
  if (xdmp:type(fn:head($expected)) eq $ARRAY-QNAME) then
    helper:assert-same-values(json:array-values($expected), json:array-values($actual))
  else
    let $expected-ordered :=
      for $e in $expected
      order by $e
      return $e
    let $actual-ordered :=
      for $a in $actual
      order by $a
      return $a
    return helper:assert-equal($expected-ordered, $actual-ordered)
};

Document how to include marklogic-unit-test and set up for a Data Hub

Set up a wiki page or similar that describes how to link marklogic-unit-test to a Data Hub build, where to put tests etc.

Data Hub uses a slightly altered/extended version of ml-gradle, so downloading a component from a maven repository and altering the modules paths needs some explanation.

Unable to use suite name in test name

In MarkLogic Workflow there is a pair of tests in /test/suites/inclusive-gateway called inclusive-gateway-01.xqy and inclusive-gateway-02.xqy.

They fail using 1.0.beta with the error XDMP-MODNOTFOUND: (err:XQST0059) Module /test/suites/inclusive-gateway/-01.xqy not found and XDMP-MODNOTFOUND: (err:XQST0059) Module /test/suites/inclusive-gateway/-02.xqy not found

Note that I will be changing the names, as they are not particularly descriptive.

ML10 gradle project has error with unit test

I have com.marklogic:marklogic-unit-test-client:1.0.beta and "com.marklogic.ml-gradle" version "3.16.0" in the build.gradle of my Gradle project.
See marklogic-community/marklogicworkflow#148
Running the unit tests produces the following error:
gradle mlUnitTest
Execution failed for task ‘:mlUnitTest’.
Local message: failed to read resource at resources/marklogic-unit-test: Internal Server Error. Server Message: RESTAPI-INVALIDREQ: (err:FOER0000) Invalid request: reason: Extension marklogic-unit-test or a dependency does not exist: XDMP-MODNOTFOUND: (err:XQST0059) Module /marklogic.rest.resource/marklogic-unit-test/assets/resource.xqy not found .
Can you please check whether this is a bug with ML10?

Use ml-unit-test without depending on rjrudin bintray repository

Currently, the ml-unit-test.zip artifact is only accessible from https://dl.bintray.com/rjrudin/maven/com/marklogic/ , and thus an ml-unit-test user must include this repository in their build.gradle file.

I think the better way to organize this project is to have two Gradle subprojects - ml-unit-test-modules for all of the code that's loaded into ML, and ml-unit-test-client for the Java code. Each of these can then be published as separate artifacts to bintray. In addition, each will have a "sources" jar, which means they can both be mirrored to the main jcenter repository, thus removing the need for the "rjrudin" bintray repository to be referenced.

I've prototyped this already locally and published the two artifacts with a 0.10.dev version; they're both showing up under the jcenter repo.

The one catch is this involves moving nearly every file in the repo, but that's a one-time headache and then we're good to go.

Getting started

After reading the docs site I still have no idea how to get started. I copied the ml-gradle/examples/unit-test-project directory, upgraded to Gradle 5, and updated marklogicUnitTestVersion=1.0.beta in gradle.properties. However, when I do a gradle build I get:

* Where:
Build file '/Users/jmakeig/Workspaces/test-yo/build.gradle' line: 20

* What went wrong:
A problem occurred evaluating root project 'test-yo'.
> Could not find method mlBundle() for arguments [com.marklogic:marklogic-unit-test-modules:1.0.beta] on object of type org.gradle.api.internal.artifacts.dsl.dependencies.DefaultDependencyHandler.

I have no idea what I’m doing. Any help would be much appreciated.

Unit testing with ml-data-hub:2.0.4

I am trying to integrate ml-unit-test into ml-data-hub:2.0.4.

  1. The test app server is not getting created. I followed marklogic/marklogic-data-hub#1090, but it is still not working. I tried adding mlTestRestServerName and mlTestDbName - is issue #26 a related ticket?

  2. Is there a preferred/conventional location for test modules? I am placing them under data-hub/test/...

  3. I am guessing that all the appropriate data hub directories get added to mlModulePaths and that I need to add the test directory as normal. Are the data hub paths added so that I just have to assign the test directory in the properties file, or do I need to delineate all the data hub paths?

assert-equal-json doesn't accept arrays

helper:assert-equal-json($expected as map:map, $actual as map:map)
This works for comparing JSON objects, but if you pass it a pair of JSON arrays, you get an error:

Invalid coercion: (...) as map:map

Proposed solution: modify assert-equal-json to accept map:map*, then start by checking whether the counts are the same.

Unable to run nested tests from gradle

I have a set of test suites, based on 1.0.beta, that are all nested quite deeply.

These work perfectly using REST and the UI, but return an error when attempting to use the gradle mlUnitTest task:

$ ./gradlew mlUnitTest

> Task :mlUnitTest
Constructing DatabaseClient based on settings in the databaseClientConfig task property
Run teardown scripts: true
Run suite teardown scripts: true
Run code coverage: false
Running all suites...
Done running all suites; time: 8069ms

24 tests completed, 0 failed

> Task :mlUnitTest FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':mlUnitTest'.
> java.nio.file.NoSuchFileException: build/test-results/marklogic-unit-test/TEST-entities/Complaint/harmonize/ComplaintHarmonizeCsv/content.xml

No directories or files are being generated lower than build/test-results/marklogic-unit-test/

Support testing of REST services

Support adding data and accessing it via REST. This would naturally include testing out of the box REST Services (/v1/...) as well as REST with transforms or options nodes, and REST Extensions (backed by code).

This differs from unit testing mostly in that it includes the rewriter, batching, URL formatting, headers, security, SSL and the network, load balancers and other non-MarkLogic infrastructure components.

Rename ml-unit-test-client?

I'm wondering if this should become ml-unit-test-java-client so that we can support multiple client libraries - e.g. ml-unit-test-node-client.

Remove Roxy from Namespaces

The following namespaces are in use as of this issue being created today:

  • http://marklogic.com/roxy/test
  • http://marklogic.com/roxy/test-helper
  • http://marklogic.com/roxy/test-coverage

Move guts of ml-gradle's UnitTestTask into marklogic-unit-test

ml-gradle's UnitTestTask has some code in it that supports running marklogic-unit-test tests and writing the results to the filesystem. Some of it is specific to ml-gradle:

  • How a DatabaseClient is created
  • The collection of inputs from Gradle properties

But the rest is generic and should be in this project, not ml-gradle:

  • Deleting an existing test results directory
  • Running all the test suites
  • Writing the test results with the JUnit formatter to a results path
  • To address marklogic/ml-gradle#482, the "raw" results should be written to a separate path as well

A "TestRunner" class in this project would handle all of this, including allowing the user to specify the paths for formatted results and for raw results.

This would have the benefit of making it easier to upgrade marklogic-unit-test without worrying about mlUnitTest in ml-gradle breaking.

Document Unit Testing Data Hub Framework (DHF) Flow

We need examples and documentation on a pattern for invoking and testing the content, header, and triple modules that would exist as a part of a DHF flow. Ideally the example would happen at a unit level, and exist entirely in MarkLogic.

SJS and XQuery examples would be great.
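As a starting point, here is a rough SJS sketch of the kind of unit-level example this documentation could show; the plugin path and the createContent(id, options) signature are assumptions modeled on a typical DHF 4 harmonization plugin and will differ by project:

'use strict';
const test = require('/test/test-helper.xqy');

// Assumed DHF content plugin path and exported function; adjust to your flow.
const content = require('/entities/Customer/harmonize/CustomerFlow/content.sjs');

// Invoke the plugin directly against a known input document.
const result = content.createContent('/raw/customer-1.json', {});

[
  // Hypothetical property on the returned content object.
  test.assertEqual('customer-1', result.id)
];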

Test Resource Deployment fails when entity-config present

There seems to be an issue when deploying a test database when there is a database JSON file present under entity-config. In my testing I have an entity, and when I try to deploy test resources it does not create the TEST database; the step afterwards that creates the app server then errors with HTTP 400 because the database doesn't exist. If I rename the database JSON file under entity-config, it all works fine.

So I guess there is a bigger issue: when we create the test database, we need to search all three possible locations (hub-internal-config, entity-config, and ml-config) and merge them, just as mlDeploy does. It looks to me like DeployHubDatabaseCommand cannot deal with the three locations. This was tested against a development version of marklogic-unit-test-modules (0.13.develop) built locally. I can also provoke this error in the simple-dhf4 example by adding a database JSON file under entity-config.

More details on this:
Here is the output of my deployment of test resources (the test db is STAGING).

Deploying a test database with name data-hub-STAGING-TEST based on configuration file named staging-database.json
Checking for existence of resource: data-hub-STAGING
Sending XML GET request as user 'admin' to path: /manage/v2/databases
Found database with name of data-hub-STAGING, so updating at path /manage/v2/databases/data-hub-STAGING/properties
Sending JSON PUT request as user 'admin' to path: /manage/v2/databases/data-hub-STAGING/properties
Updated database at /manage/v2/databases/data-hub-STAGING/properties
Sending XML GET request as user 'admin' to path: /manage/v2/hosts
Finding eligible hosts for forests for database: data-hub-STAGING
Sending XML GET request as user 'admin' to path: /manage/v2/hosts
Checking for existence of resource: data-hub-STAGING
Sending XML GET request as user 'admin' to path: /manage/v2/databases
Sending XML GET request as user 'admin' to path: /manage/v2/databases/data-hub-STAGING/properties
:hubDeployTestDatabase (Thread[Task worker for ':',5,main]) completed. Took 3.232 secs.

To work around the issue, I rename the staging-database.json under entity-config and rerun the deployment of test resources. It then runs through just fine; here is the output:

> Task :hubDeployTestDatabase
Task ':hubDeployTestDatabase' is not up-to-date because:
  Task has not declared any outputs despite executing actions.
Deploying a test database with name data-hub-STAGING-TEST based on configuration file named staging-database.json
Checking for existence of resource: data-hub-STAGING-TEST
Sending XML GET request as user 'admin' to path: /manage/v2/databases
Creating database: data-hub-STAGING-TEST
Sending JSON POST request as user 'admin' to path: /manage/v2/databases
Created database: data-hub-STAGING-TEST
Sending XML GET request as user 'admin' to path: /manage/v2/hosts
Finding eligible hosts for forests for database: data-hub-STAGING-TEST
Sending XML GET request as user 'admin' to path: /manage/v2/hosts
Checking for existence of resource: data-hub-STAGING-TEST
Sending XML GET request as user 'admin' to path: /manage/v2/databases
Sending XML GET request as user 'admin' to path: /manage/v2/databases/data-hub-STAGING-TEST/properties
Checking for existence of resource: data-hub-STAGING-TEST-1
Sending XML GET request as user 'admin' to path: /manage/v2/forests
Creating forest: data-hub-STAGING-TEST-1
Sending JSON POST request as user 'admin' to path: /manage/v2/forests
Created forest: data-hub-STAGING-TEST-1
:hubDeployTestDatabase (Thread[Task worker for ':',5,main]) completed. Took 5.648 secs.
:hubDeployTestServer (Thread[Task worker for ':',5,main]) started.

Add Failure Message to Assert Functions

Update all assert functions so that they accept an optional message argument. The argument should be a string. This message should be displayed whenever a test fails due to an assertion failure. The intention of this message is to provide a description that gives context around why a test exists and what it is attempting to test. This type of failure message can be very valuable in quickly diagnosing and addressing regression test failures.
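A sketch of how a test might read once such a parameter exists; the trailing message argument is the requested enhancement, not the current API, and the function under test is a stand-in:

'use strict';
const test = require('/test/test-helper.xqy');

// Stand-in for real application code.
function getCustomerStatus(id) {
  return 'ACTIVE';
}

// Proposed usage only: today's assert functions do not accept a message argument.
[
  test.assertEqual(
    'ACTIVE',
    getCustomerStatus('customer-1'),
    'A newly imported customer should start in the ACTIVE state'
  )
];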

Configure code coverage via REST endpoint

The ml-unit-test.xqy endpoint currently defaults code coverage to "false". We need to add support for a parameter to configure this, and the Java client then needs to be updated to support that parameter as well.

Difficult to run a unit test in qconsole

Workaround:

  • Isolate all the setup to the setupSuite.sjs (or xqy) module
    • That is - do not call load-test-module() and related functions directly from the test
  • Spawn the setup module so the path is correct at runtime:
    • xdmp.spawn("/test/suites/Looping/suiteSetup.sjs")

Issue:

If a test uses any of the load-test-data helper functions, it won't run in qconsole. This is because load-test-data uses the caller's module URI to figure out where to get the test data in the modules database, and when running from qconsole the calling module is the qconsole module.

It's a huge benefit to use tests as starting points for one's own work, and as guaranteed-working sample code, so it would be better if we could generally paste a test into qconsole and go (perhaps adding "suiteSetup.sjs" in another buffer and running that first).

A possible fix is to add an override function to set the test data directory explicitly, rather than deriving it from the calling code's location. For .sjs this is more difficult, because it does not look like "static" module variables persist across eval() calls from JavaScript to XQuery.
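A small sketch of the workaround described above; the suite name and the setup content are placeholders:

// /test/suites/Looping/suiteSetup.sjs - keep all suite setup here (including any
// load-test-module()/load-test-data calls) instead of inside the tests themselves.
'use strict';
declareUpdate();
xdmp.documentInsert('/test-input/looping-input.json', { input: true });

// Then, from qconsole, spawn the setup so caller-relative paths resolve inside
// the suite, and run the body of the test in another buffer:
//   xdmp.spawn('/test/suites/Looping/suiteSetup.sjs');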

JSON equals is inconsistent in comparisons

JSON equals isn't always correct. See the following example:

xquery version "1.0-ml";

import module namespace test = "http://marklogic.com/roxy/test-helper" at "/test/test-helper.xqy";

let $json-string1 := '{
    "options":
    {
        "matchOptions": "basic",
        "merging": [
        {
            "propertyName": "ssn",
            "algorithmRef": "user-defined",
            "sourceRef":
            {
                "documentUri": "docA"
            }
        },
        {
            "propertyName": "name",
            "maxValues": "1",
            "doubleMetaphone":
            {
                "distanceThreshold": "50"
            },
            "synonymsSupport": "true",
            "thesaurus": "/mdm/config/thesauri/first-name-synonyms.xml",
            "length":
            {
                "weight": "8"
            }
        },
        {
            "default": "true",
            "strategy": "default-standard"
        }],
        "propertyDefs":
        {
            "properties": [
            {
                "namespace": "",
                "localname": "IdentificationID",
                "name": "ssn"
            },
            {
                "namespace": "",
                "localname": "PersonName",
                "name": "name"
            },
            {
                "namespace": "",
                "localname": "Address",
                "name": "address"
            }]
        },
        "mergeStrategies": [
        {
            "name": "default-standard",
            "algorithmRef": "standard",
            "maxValues": "1",
            "sourceWeights":
            {
                "source":
                {
                    "name": "SOURCE1",
                    "weight": "10"
                }
            }
        }],
        "algorithms":
        {
            "stdAlgorithm":{"namespaces":{}, "timestamp":{"path": null}},
            "custom": []
        }
    }
}'
let $json-string2 := $json-string1
return (
"JSONs equal: " || (try { fn:exists(test:assert-equal-json(xdmp:unquote($json-string1)/object-node(),xdmp:unquote($json-string2)/object-node())) } catch ($e) {fn:false()}),
"Strings equal: " || (try { fn:exists(test:assert-equal($json-string1,$json-string2)) } catch ($e) {fn:false()})
)

Output:

JSONs equal: false
Strings equal: true

Extract get-test-data-path from get-test-file

get-test-file has the following logic in it:

fn:replace(
      fn:concat(
        cvt:basepath($test:__CALLER_FILE__), "/test-data/", $filename),
      "//", "/")

It can be useful to do a cts:directory-query on everything in a test data path, but building up that path requires duplicating the above code (without a "filename").

So this ticket is asking for the following function:

test:get-test-data-path()

Which will just do the following:

fn:replace(
    fn:concat(
      cvt:basepath($test:__CALLER_FILE__), "/test-data/"),
    "//", "/")

Support Testing of the MarkLogic Data Hub Framework (DHF)

Support running input flows, harmonization flows, and doing other hub-related things.

Some natural things to test include:

  • ingest data to an input flow and check the results, including
    • the envelope created with header metadata
    • any transforms to content
    • REST services on top of the raw data (not that common, but can be done)
    • SQL or SPARQL views projected via TDE
    • traces for errors created, or other error reports and audits
    • ingest a batch or single real-time item
  • harmonize data and check
    • the same items above

Create Node client for REST endpoint

ml-unit-test-client is a Java client for the ml-unit-test REST endpoint; we should probably have a Node client as well so that ml-unit-test tests can be easily integrated into Node-based test runners.
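A rough sketch of what such a client could look like from Node, calling the existing REST resource extension; the rs: parameters and the port are illustrative placeholders rather than a documented contract, and basic authentication is assumed:

// Sketch only: check the resource implementation for the real parameter names.
const http = require('http');

const options = {
  host: 'localhost',
  port: 8135,                                                   // placeholder test port
  path: '/v1/resources/marklogic-unit-test?rs:func=run&rs:suite=my-suite',
  auth: 'admin:admin',                                          // assumes basic auth
  headers: { Accept: 'application/xml' }
};

http.get(options, res => {
  let body = '';
  res.on('data', chunk => (body += chunk));
  res.on('end', () => console.log(res.statusCode, body));
});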

Make assert-same-values show what's different

The assert-same-values function tells you when the expected doesn't match the actual and shows both sequences. It would be much more helpful if it showed what was different between them.

sec:remove-credential is not supported on MarkLogic 8

Running ./gradlew mlDeploy on MarkLogic 8 gives the error:

FAILURE: Build failed with an exception.

* Where:
Build file 'C:\Git\marklogic-unit-test\marklogic-unit-test-client\build.gradle' line: 103

* What went wrong:
A problem occurred evaluating project ':marklogic-unit-test-client'.
> Local message: failed to apply resource at eval: Internal Server Error. Server Message: XDMP-UNDFUN: (err:XPST0017) Undefined function sec:remove-credential()

sec:remove-credential was introduced in MarkLogic 9.

assert-equal-json no longer reports the location of comparison differences

When you have large values that are being compared for equality, it can be difficult to identify the location of the error when looking at two large blocks of text.

The original assert-equal-json implementation provided the paths to the expected and actual nodes being compared. This made it easier to locate the source of the error.

Ideally, it would be useful if the differences were marked up like in a diff output and properly formatted so you could see what the error is, what the expected value is, and what the actual value is.

Document How To Identify Why a Test Failed

Document the general pattern:

  1. Run suite of tests, some fail
  2. Use UI to run individual test
    • be sure to uncheck teardown
  3. Use qconsole to inspect database state
  4. Use log statements and error logs to get insight into the run time
    • would be great to have the debugger working here

Getting feedback while updating tests:

  1. Run gradle mlWatch
  2. Make changes in editor of choice
  3. Run the test using the UI

Build and Test MarkLogic Version 10 using Travis CI

Configure Travis CI to build and test using MarkLogic version 10. Preferably we'd have at least one build job for each supported major version of MarkLogic. Bonus points if this can be done using Travis CI's matrix feature where we only need to manage a MarkLogic version property and the build script would "do the right thing."

Setup real JUnit tests for marklogic-unit-test-modules

We already have a test app that can be deployed locally on port 8090, and there are a few tests in place. We just need to expand this so that it's easy to write tests for new features, such as the PR that's open right now for supporting additional formats for test output.

We can use ml-junit for writing these tests.

Passing assertions do not increment counters in the UI when running SJS unit tests

I just added the ml-unit-test plugin to my DHF project and started writing unit tests.
This is an SJS project from start to finish, so I started writing my first unit tests in SJS and opened the unit test UI.
When all my assertions succeed, the count shown in the UI does not reflect the total number of assertions.
My code looks like this:

'use strict';

const test = require('/test/test-helper.xqy');

function testImportLookup() {
    return  [
        test.assertEqual(fn.true(), fn.true()),
        test.assertEqual(fn.false(), fn.false())
    ];
}

const assertions = [];

assertions.push(testImportLookup());

assertions

So the assertions look like this when I run it in qconsole

[
[
"<t:result type=\"success\" xmlns:t=\"http://marklogic.com/roxy/test\"/>", 
"<t:result type=\"success\" xmlns:t=\"http://marklogic.com/roxy/test\"/>"
]
]

Enhancement: nicer test failure output

When tests fail, it is not obvious which test failed without looking inside the /build/test-results files individually (or grepping via grep failures=.[1-9]). Also, the output is encoded XML rather than readable text.

This is the file structure I see. There is an error inside TEST-Looping.xml, but that is not obvious from the directory or file structure. The failure contents are not an XML structure, but rather an encoding of XML inside XML. It should ideally be nested, or the stack converted to a typical pure-text stack.

<testsuite errors="0" failures="2" hostname="localhost" name="Looping" tests="2" time="0.049026" timestamp=""><testcase classname="loop.sjs" name="loop.sjs" time="0.021005"><failure type="" message="Error running JavaScript request">&lt;error:error xsi:schemaLocation="http://marklogic.com/xdmp/error error.xsd" xmlns:t="http://marklogic.com/roxy/test" xmlns:error="http://marklogic.com/xdmp/error" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"&gt;
  &lt;error:code&gt;JS-JAVASCRIPT&lt;/error:code&gt;
  &lt;error:name/&gt;
  &lt;error:xquery-version&gt;javascript&lt;/error:xquery-version&gt;
  &lt;error:message&gt;Error running JavaScript request&lt;/error:message&gt;
  &lt;error:format-string&gt;JS-JAVASCRIPT: throw new CompileErrorListError(errorList); -- Error running JavaScript request: CompileErrorListError: Compile Errors&lt;/error:format-string&gt;
  &lt;error:retryable&gt;false&l


Allow substitutions during deployment

In order to test a REST extension, I need a username and password. Rather than including those in the code, I'd like to have a module deployed with the test modules that has constants I can use. Those constants would have values that are overwritten during deployment with configurable values. For instance, I could have a module that includes:

declare variable $TEST-USERNAME := "%%TEST-USERNAME%%";
declare variable $TEST-PASSWORD := "%%TEST-PASSWORD%%";

and a properties file that includes:

TEST-USERNAME=admin
TEST-PASSWORD=admin

Then my unit tests can include the library module and use those constants. (Maybe this is already possible, but I don't see such a module.)

Add SJS Unit Test Example to Landing Page

I think these are good to call out, but I'm thinking it'd be nice to show something ASAP on this page.

For example, after briefly describing marklogic-unit-test, hop into an example. Show a bare minimum library module (in SJS), like an "echo" function. Then show what the test module looks like. Then show a screenshot of running the test module via the UI. Then show a screenshot of running the test via IntelliJ (this will require that skeleton Java class, which may not be worth mentioning yet).

That covers the main use case - testing library functions. A secondary use case is using marklogic-junit to test endpoints. I wouldn't go into detail about that on the home page though - instead, provide a link to a Wiki page (or the README for marklogic-junit).

I think the screenshots of the UI and the IntelliJ (or Eclipse) test runner will carry the most value, because they immediately show the user how they'll be running the tests.

Originally posted by @rjrudin in #73
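For reference, a sketch of what that bare-minimum pair of modules might look like; the module paths and the echo function are illustrative only:

// Library module, e.g. src/main/ml-modules/root/lib/echo.sjs
'use strict';

// The function under test: returns whatever it is given.
function echo(message) {
  return message;
}

module.exports = { echo };

// Test module, e.g. src/test/ml-modules/root/test/suites/echo/echo-test.sjs
'use strict';

const test = require('/test/test-helper.xqy');
const lib = require('/lib/echo.sjs');

[ test.assertEqual('hello', lib.echo('hello')) ];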

Eliminate xdmp:eval()

xdmp:eval() is used in a number of places. As a best practice, eval should be avoided; xdmp:invoke-function() or xdmp:invoke() should be used instead.
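For illustration, the SJS counterpart of that pattern is xdmp.invokeFunction, which runs a function against a different database or transaction context without building an eval string; the database name below is a placeholder:

// Instead of constructing a string for xdmp.eval(), pass a function to
// xdmp.invokeFunction and control the evaluation context through its options.
const count = fn.head(xdmp.invokeFunction(
  () => cts.estimate(cts.collectionQuery('test-data')),
  { database: xdmp.database('unit-test-example-content') }     // placeholder database name
));

count;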

The following functions should be refactored, where possible, to eliminate xdmp:eval():

Provide example of testing a DHF 5 flow

I think this will ideally be done in the DHF project with a link to it from here, but I'm using this project to track this so it won't be lost in the shuffle.

How to test DHF project that uses multiple databases

This use case is easily explained in a DHF scenario but can be generalized too:

Situation:

  • I have two databases with different documents and configurations (especially range indexes), plus one set of modules in the same modules database (including my test cases)
  • the databases may or may not be attached to separate app servers
  • I want to run some tests exclusively against db1 and others exclusively against db2
  • I can run my tests by either:
    • running some code that calls the relevant testing APIs with parameters
    • using the web UI provided out of the box with marklogic-unit-test
  • The tests may be run against an empty database, but also against a database containing a large number of indexed contextual documents that would take too long to load relative to the actual test run time
  • It is important that the tests run in a few seconds at most

More generally (outside of DHF) - tracked as issue #77:

Let's consider that we have hundreds or thousands of tests (in a TDD-based project) and we just want to run a few of them (the current scope of changes only).

What can marklogic-unit-test provide to help in the situations above?
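One pattern that is available today, regardless of what the project adds, is to direct individual queries at a specific database from inside a test via xdmp.invokeFunction; the database and collection names below are placeholders:

'use strict';
const test = require('/test/test-helper.xqy');

// Evaluate a query against a named database from within one test module.
function countIn(databaseName, query) {
  return fn.head(xdmp.invokeFunction(
    () => cts.estimate(query),
    { database: xdmp.database(databaseName) }
  ));
}

[
  test.assertEqual(1, countIn('data-hub-STAGING', cts.collectionQuery('raw'))),
  test.assertEqual(1, countIn('data-hub-FINAL', cts.collectionQuery('harmonized')))
];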
