
Archivematica Automated User Acceptance Tests (AMAUAT)

This repository contains automated user acceptance tests for Archivematica (AM) written using the Python behave library and the Gherkin language. Using Gherkin to express tests makes them readable to a wide range of Archivematica users and stakeholders [1]. Consider the following snippet from the PREMIS events feature file (premis-events.feature):

Feature: PREMIS events are recorded correctly
  Users of Archivematica want to be sure that the steps taken by
  Archivematica are recorded correctly in the resulting METS file, according
  to the PREMIS specification.

  Scenario: Isla wants to confirm that standard PREMIS events are created
    Given that the user has ensured that the default processing config is in its default state
    When a transfer is initiated on directory ~/archivematica-sampledata/SampleTransfers/BagTransfer
    Then in the METS file there are/is 7 PREMIS event(s) of type ingestion

The Given, When and Then statements in the feature files allow us to put the system into a known state, perform user actions, and then make assertions about the expected outcomes, respectively. These steps are implemented by step functions in Python modules located in the features/steps/ directory, which, in turn, may interact with Archivematica GUIs and APIs by calling methods of an ArchivematicaUser instance as defined in the amuser package. For detailed guidance on adding feature files, implementing steps, or adding AM user abilities, please see the Developer documentation. For examples of using these tests to run (performance) experiments on Archivematica, see Running Experiments with the AMAUAT.
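
For illustration only, here is a minimal sketch of how a step such as the Then clause above could be implemented with behave; the context.premis_events attribute and the function name are hypothetical, and the real implementations in features/steps/ delegate to an ArchivematicaUser instance:

from behave import then


@then('in the METS file there are/is {count:d} PREMIS event(s) of type {event_type}')
def step_check_premis_events(context, count, event_type):
    # Assumes an earlier step parsed the METS file and stored the PREMIS
    # events on the behave context (hypothetical attribute).
    events = [e for e in context.premis_events if e['type'] == event_type]
    assert len(events) == count, 'Expected {} PREMIS {} event(s); found {}'.format(
        count, event_type, len(events))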

High-level overview

The AMAUAT are a completely separate application from Archivematica (AM) and the Archivematica Storage Service (SS). They require that you already have an Archivematica instance deployed somewhere that you can test against (see Installing Archivematica). The tests must be supplied with configuration details, including, crucially, the URLs of the AM and SS instances as well as valid usernames and passwords for authenticating to those instances. The AM instance being tested may be running locally on the same machine or remotely on an external server. Note that running all of the AMAUAT tests to completion will likely take more than one hour and will result in several transfers, SIPs, and AIPs being created in the AM instance under test.

The tests use the Selenium WebDriver to launch a web browser in order to interact with Archivematica's web interfaces. Therefore, you may need to install a web browser (Chrome or Firefox) and the appropriate Selenium drivers; see the Browsers, drivers and displays section for details.
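
As a rough illustration (not the AMAUAT code itself), this is the kind of Selenium call the tests make under the hood; the URL below is a placeholder for your own dashboard address:

from selenium import webdriver

driver = webdriver.Firefox()  # or webdriver.Chrome(); requires the matching driver binary
driver.get('http://127.0.0.1:62080/')  # placeholder Archivematica dashboard URL
print(driver.title)
driver.quit()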

Installation

This section describes how to install the AMAUAT. If you have done this before and just need a refresher, see the Installation quickstart. If you are installing manually for the first time, see the Detailed installation instructions. If you are testing a local Archivematica deploy created using deploy-pub (Vagrant/Ansible), then you can configure that system to install these tests for you: see the Install with deploy-pub section. If you are testing a local deploy created using am (Docker Compose), then the tests should be installed for you automatically.

Installation quickstart

The following list of commands illustrates the bare minimum required in order to install and run the tests. Note that a real-world invocation of the behave command will require the addition of flags that are particular to your environment and the details of the Archivematica instance that you are testing against (see Usage). If you have never run these tests before, please read the Detailed installation instructions first.

$ virtualenv -p python3 env
$ source env/bin/activate
$ git clone https://github.com/artefactual-labs/archivematica-acceptance-tests.git
$ cd archivematica-acceptance-tests
$ pip install -r requirements.txt
$ behave

Detailed installation instructions

To install these tests manually, first create a virtual environment using Python 3 and activate it:

$ virtualenv -p python3 env
$ source env/bin/activate

Then clone the source:

$ git clone https://github.com/artefactual-labs/archivematica-acceptance-tests.git

Since lxml is a dependency, you may need to install python3-dev. On Ubuntu 14.04 with Python 3 the following command should work:

$ sudo apt-get install python3-dev

Finally, install the Python dependencies:

$ pip install -r requirements.txt

Install with deploy-pub

Archivematica's public Vagrant/Ansible deployment tool deploy-pub allows you to install the AMAUAT when provisioning your virtual machine (VM). This simply requires setting the archivematica_src_install_acceptance_tests variable to "yes" in the Ansible playbook's vars file, e.g., vars-singlenode-qa.yml.

Browsers, drivers and displays

A web browser (Firefox or Chrome) must be installed on the system where the tests are being run. On a typical desktop computer this is usually not a problem. However, on a development or CI server, this may require extra installation steps. You will need to consult the appropriate documentation for installing Firefox or Chrome on your particular platform.

If you are using Chrome to run the tests, you will need to install the Selenium Chrome driver. Instructions for installing the Selenium Chrome driver on Ubuntu 14.04 are copied below:

wget -N http://chromedriver.storage.googleapis.com/2.26/chromedriver_linux64.zip
unzip chromedriver_linux64.zip
chmod +x chromedriver
sudo mv -f chromedriver /usr/local/share/chromedriver
sudo ln -s /usr/local/share/chromedriver /usr/local/bin/chromedriver
sudo ln -s /usr/local/share/chromedriver /usr/bin/chromedriver

When the tests are running, they will open and close several browser windows, which can be annoying if you are trying to use your computer for other tasks at the same time. On the other hand, if you are running the tests on a virtual machine or a server, chances are that machine will not have a display and you will need a virtual (headless) display. The recommended way to run the tests headless is with TightVNC. To install TightVNC on Ubuntu 14.04:

$ sudo apt-get update
$ sudo apt-get install -y tightvncserver

Before running the tests, start the VNC server on display :42 and tell the terminal session to use that display:

$ tightvncserver -geometry 1920x1080 :42
$ export DISPLAY=:42

Note that the first time you run this command, TightVNC server will ask you to provide a password so that you can connect to the server with a VNC viewer, if desired. If you do want to connect to the VNC session to see the tests running in real time, use a VNC viewer to connect to display 42 at the IP address of the machine that is running the tests. For example, if you are using the xtightvncviewer application on Ubuntu (sudo apt-get install xtightvncviewer), you could run the following command to view the tests running on a local machine at IP 192.168.168.192:

$ xtightvncviewer 192.168.168.192:42

Installing Archivematica

As mentioned previously, running the AMAUAT requires having an existing Archivematica instance installed. Describing how to do this is beyond the scope of this document, but there are several well-documented ways of installing Archivematica, with the Docker Compose strategy (the am repository mentioned above) being the recommended method for development.

Usage

Simply executing the behave command will run all of the tests and will use the default URLs and authentication strings as defined in features/environment.py. However, in the typical case you will need to provide Behave with some configuration details that are appropriate to your environment and which target a specific subset of tests (i.e., feature files or scenarios). The following command is a more realistic example of running the AMAUAT:

$ behave \
    --tags=icc \
    --no-skipped \
    -v \
    --stop \
    -D am_version=1.7 \
    -D home=archivematica \
    -D transfer_source_path=archivematica/archivematica-sampledata/TestTransfers/acceptance-tests \
    -D driver_name=Firefox \
    -D am_url=http://127.0.0.1:62080/ \
    -D am_username=test \
    -D am_password=test \
    -D ss_url=http://127.0.0.1:62081/ \
    -D ss_username=test \
    -D ss_password=test

The command given above is interpreted as follows.

  • The --tags=icc flag tells Behave that we only want to run the Ingest Conformance Check feature as defined in the features/core/ingest-mkv-conformance.feature file, which has the @icc tag.
  • The --no-skipped flag indicates that we do not want the output to be cluttered with information about the other tests (feature files) that we are skipping in this run.
  • The -v flag indicates that we want verbose output, i.e., that we want any print statements to appear in stdout.
  • The --stop flag tells Behave to stop running the tests as soon as there is a single failure.
  • The rest of the -D-style flags are Behave user data:
    • -D am_version=1.7 tells the tests that we are targeting an Archivematica version 1.7 instance.
    • The -D home=archivematica flag indicates that when the user clicks the Browse button in Archivematica's Transfer tab, the top-level folder for all ~/-prefixed transfer source paths in the feature files should be archivematica/.
    • The -D transfer_source_path=... flag indicates that when the user clicks the Browse button in Archivematica's Transfer tab, the top-level folder for all relative transfer source paths in the feature files should be archivematica/archivematica-sampledata/TestTransfers/acceptance-tests/.
    • The -D driver_name=Firefox flag tells Behave to use the Firefox browser.
    • Finally, the remaining user data flags provide Behave with the URLs and authentication details of particular AM and SS instances.

To see all of the Behave user data flags that the AMAUAT recognizes, inspect the get_am_user function of the features/environment.py module.
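
For orientation, behave exposes -D flags through context.config.userdata; the following is a simplified sketch of how such flags are typically read in an environment.py hook (the defaults shown are illustrative, and the real parsing is done by get_am_user):

def before_all(context):
    userdata = context.config.userdata
    context.am_url = userdata.get('am_url', 'http://127.0.0.1:62080/')
    context.am_username = userdata.get('am_username', 'test')
    context.driver_name = userdata.get('driver_name', 'Firefox')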

To run all tests that match any of a set of tags, separate the tags by commas. For example, the following will run all of the Ingest Conformance Check (icc) and Ingest Policy Check (ipc) tests:

$ behave --tags=icc,ipc

To run all tests that match all of a set of tags, use separate --tags flags for each tag. For example, the following will run only the preservation scenario of the Ingest Conformance Check feature:

$ behave --tags=icc --tags=preservation

In addition to the general guidance just provided, all of the feature files in the features/ directory should contain comments clearly indicating how they should be executed and whether they need any special configuration (flags).

Closing all units

There are two shell scripts that use the AMAUAT test functionality to close all units (i.e., transfers or ingests). These scripts call behave internally (targeting specific feature tags) and therefore accept the same flags as behave itself (e.g., for specifying the AM URL). The basic way to execute them is:

$ ./close_all_transfers.sh
$ ./close_all_ingests.sh

Troubleshooting

If the tests generate cannot allocate memory errors, there may be unclosed browser windows. Run the following command to list the processes using the most memory, identify any lingering Firefox or Chrome browser processes, and kill them:

$ ps --sort -rss -eo rss,pid,command | head

Logging

All log messages are written to a file named AMAUAT.log in the root directory. Passing the --no-logcapture flag to behave will cause all of the log messages to also be written to stdout.

Timeouts and attempt counters

At various points, these tests wait for fixed periods of time or attempt to perform some action a fixed number of times before giving up the attempt. The variables holding these wait and attempt values are listed with their defaults in features/environment.py, e.g., MAX_DOWNLOAD_AIP_ATTEMPTS. If you find that tests are failing because of timeouts being exceeded, or conversely that tests that should be failing are waiting too long for an event that will never happen, you can modify these wait and attempt values using behave user data flags, e.g., -D max_download_aip_attempts=200.
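
A simplified sketch of how such an override is typically wired up with behave user data (the default value shown is illustrative, not the project's actual setting):

MAX_DOWNLOAD_AIP_ATTEMPTS = 20  # illustrative default


def before_all(context):
    userdata = context.config.userdata
    context.max_download_aip_attempts = int(
        userdata.get('max_download_aip_attempts', MAX_DOWNLOAD_AIP_ATTEMPTS))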

[1] The Gherkin syntax and the approach of defining features by describing user behaviours came out of the behavior-driven development (BDD) process, which focuses on what a user wants a system to do, and not on how it does it. The Behave documentation provides a good overview of the key concepts and their origins in BDD.

archivematica-acceptance-tests's People

Contributors

dhwaniartefact, jhsimpson, jraddaoui, jrwdunham, mamedin, qubot, replaceafill, ross-spencer, scollazo, sevein, slange-mhath


archivematica-acceptance-tests's Issues

`scp`-reliant tests with Xenial need identity-file-based auth

With Xenial vagrant/ansible deploys, the vagrant/vagrant username/password assumption no longer holds; this breaks the scp-reliant tests, which require password-authenticated scp.

The solution would seem to be:

  1. add another option (and some documentation) to the acceptance tests, e.g., -D ssh_identity_file=/path/to/.vagrant/machines/am-local/virtualbox/private_key
  2. modify scp_server_file_to_local of archivematicaselenium.py to use ssh_identity_file if present instead of the current server_password
  3. add documentation explaining this

See artefactual/deploy-pub#42 for more details.

Problem: helper scripts broken

Commit 3777b1c changed the get_am_sel_cli() method in environment.py so that it now requires an input parameter. The method is called from two helper scripts (close_all_transfers.py and close_all_ingest.py), so those scripts no longer work.

Problem: need a basic feature for AM sanity checking

If only one feature can be run in a CI/CD system, this should be the one. It should do and test the following:

  • Create an AIP from a transfer source (using processing configuration or making decisions?)
    • normalize for preservation
  • Verify that at least some micro-service outputs are as expected, e.g., by clicking on the gear icon of file identification and verifying 0 exit codes and expected substrings in stdout
  • Wait for AIP to be created and inspect it and make assertions about it:
    • METS file is XSD-valid and Schematron-valid
    • preservation derivatives exist on disk
    • METS file has crucial state correct (PREMIS events, relationships, ...)
    • ...
  • Attempt a re-ingest? (Breaking re-ingest seems a common consequence of new AM dev work...)

Problem: starting Python and importing Django in many gearman workers is inefficient and unnecessary

Profiling of Archivematica's MCPClient indicates that it takes roughly 1.25 seconds for a (Python-based) client script to be started in a new Python process and for it to import Django (and other heavy modules) and create a database connection. Given a package creation event with very many files each requiring several Pythonic client script process forks, this can result in significant inefficiencies.

In many cases this "cold startup" cost is unnecessary when the following could be done instead:

  1. Python client scripts could be directly (dynamically) imported into the long-running Python (MCP Client) Gearman worker process instead of forking new Python processes for the short-lived client script.

  2. If the client script must run in a separate process, then we could avoid the Django import and db connection by having the long-running MCPClient process provide to the client scripts the state (e.g., database data, Django settings, unit variables) that they need. Once done, the client script could pass back (e.g., as JSON printed to stdout) directives for the MCPClient process to mutate the database appropriately.

(Attached image: flame graph analysis of normalization.)
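
A minimal sketch of option 1 above, assuming (hypothetically) that each client script exposes a callable entry point; the module path and function names are illustrative, not Archivematica's actual API:

import importlib


def run_client_script_in_process(module_name, job_arguments):
    # Import the client script into the already-running worker process; Django
    # and the database connection are loaded once, so the per-file process
    # startup cost (~1.25 seconds) is avoided.
    module = importlib.import_module(module_name)
    return module.call(job_arguments)  # hypothetical entry point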

Problem: logging is not configurable

At present, the ArchivematicaUser logs are written to amuser/amuser.log and the step file logs are written to features/steps/steps.log. The tests should be changed so that it is easier to control where the logs get written. Maybe the default should be to log to stdout.

@sevein I think you have been doing work that may have been hampered by this issue. Do you have suggestions or a PR?
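
One possible shape for this, sketched under the assumption that a single configuration helper would be called from test setup (names are illustrative):

import logging
import sys


def configure_logging(log_path=None, level=logging.INFO):
    # Default to stdout; write to a file only when a path is supplied,
    # e.g., via a behave user data flag.
    handler = (logging.FileHandler(log_path) if log_path
               else logging.StreamHandler(sys.stdout))
    handler.setFormatter(
        logging.Formatter('%(asctime)s %(name)s %(levelname)s %(message)s'))
    root = logging.getLogger()
    root.addHandler(handler)
    root.setLevel(level)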

Problem: aip-encrypt:transfer-backlog fails because cp can occur before transfer is actually in backlog

The "Then the transfer on disk is encrypted" step can fail because, even though the "And the user waits for the DIP to appear in transfer backlog" step has passed, the transfer is not actually yet stored at the expected path on disk:

File "/Users/user/.pyenv/versions/3.4.2/lib/python3.4/subprocess.py", line 620, in check_output
      raise CalledProcessError(retcode, process.args, output=output)
  subprocess.CalledProcessError: Command '['docker', 'cp', '2664c8ab5cc9f902a34db8155f41964cd5736b96434687aa0ce28751dfc0785b:/var/archivematica/sharedDirectory/www/AIPsStore/transferBacklogEncrypted/originals/BagTransfer_1525201451-ec84292f-7192-4cdc-93b0-53f4db26561d', '/Users/user/Documents/Artefactual/am/src/archivematica-acceptance-tests/.amsc-tmp/BagTransfer_1525201451-ec84292f-7192-4cdc-93b0-53f4db26561d']' returned non-zero exit status 1
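
One possible fix, sketched here with illustrative names, is to poll for the transfer at the expected path before copying it rather than copying immediately:

import subprocess
import time


def copy_when_present(container, src_path, dst_path, attempts=30, delay=2):
    # Poll until the transfer actually exists inside the container, then copy.
    for _ in range(attempts):
        present = subprocess.call(
            ['docker', 'exec', container, 'test', '-e', src_path]) == 0
        if present:
            subprocess.check_call(
                ['docker', 'cp', '{}:{}'.format(container, src_path), dst_path])
            return
        time.sleep(delay)
    raise RuntimeError('Transfer never appeared at {}'.format(src_path))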

Enhancement: create feature file comparing performance with/without stdout/err in db

Create a feature file that describes the anticipated behaviour when the same transfer is run through two different Archivematica deployments which are identical except that one has client scripts that pass stdout/stderr back to MCPServer to be stored in the database while the other simply discards such stdout/stderr. The feature file should express the expectation that the stdout/stderr-discarding system will show significant relative performance gains.

Problem: stale elements can break transfer approval

Cf.:

Given that the user has ensured that the default processing config is in its default state                                                                                       # features/steps/steps.py:43 270.139s
    And the reminder to add metadata is enabled                                                                                                                                      # features/steps/steps.py:49 18.912s
    When a transfer is initiated on directory ~/archivematica-sampledata/SampleTransfers/Images/pictures                                                                             # features/steps/steps.py:374 47.116s
    Traceback (most recent call last):
        File "/Users/jesus/archivematica-acceptance-tests/.env/lib/python3.6/site-packages/behave/model.py", line 1329, in run
        match.run(runner.context)
        File "/Users/jesus/archivematica-acceptance-tests/.env/lib/python3.6/site-packages/behave/matchers.py", line 98, in run
        self.func(context, *args, **kwargs)
        File "features/steps/steps.py", line 376, in step_impl
        utils.initiate_transfer(context, transfer_path)
        File "/Users/jesus/archivematica-acceptance-tests/features/steps/utils.py", line 143, in initiate_transfer
        accession_no=accession_no, transfer_type=transfer_type))
        File "/Users/jesus/archivematica-acceptance-tests/amuser/am_browser_transfer_ability.py", line 70, in start_transfer
        self.approve_transfer(transfer_div_elem, approve_option_uuid)
        File "/Users/jesus/archivematica-acceptance-tests/amuser/am_browser_transfer_ability.py", line 235, in approve_transfer
        approve_transfer_option.click()
        File "/Users/jesus/archivematica-acceptance-tests/.env/lib/python3.6/site-packages/selenium/webdriver/remote/webelement.py", line 80, in click
        self._execute(Command.CLICK_ELEMENT)
        File "/Users/jesus/archivematica-acceptance-tests/.env/lib/python3.6/site-packages/selenium/webdriver/remote/webelement.py", line 628, in _execute
        return self._parent.execute(command, params)
        File "/Users/jesus/archivematica-acceptance-tests/.env/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 312, in execute
        self.error_handler.check_response(response)
        File "/Users/jesus/archivematica-acceptance-tests/.env/lib/python3.6/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
        raise exception_class(message, screen, stacktrace)
    selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page document
        (Session info: chrome=66.0.3359.117)
        (Driver info: chromedriver=2.38.552522 (437e6fbedfa8762dec75e2c5b3ddb86763dc9dcb),platform=Linux 4.4.0-119-generic x86_64)
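
A common workaround for this class of failure, sketched with illustrative names rather than as the project's actual fix, is to re-locate the element and retry the click when Selenium reports a stale element reference:

import time

from selenium.common.exceptions import StaleElementReferenceException


def click_with_retry(locate_element, attempts=3, delay=1):
    # ``locate_element`` is a callable that re-finds the element on each try.
    for attempt in range(attempts):
        try:
            locate_element().click()
            return
        except StaleElementReferenceException:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)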

Problem: adding new feature files creates clutter

The feature files in this repo are not currently organized into namespaces or folders. It is hard to tell which features are 'core' Archivematica features and which are related to new development work.

Given clauses need assertions

E.g., this

Given directory <transfer_path> contains files that, when normalized, will all <do_files_conform> to <policy_file>

Needs a step implementation in steps.py. However, steps of this ilk will presumably require running a particular command locally/separately during test execution. How do we get the file? The easy way would be to hardcode the GitHub URL of an AM command into the steps.py file...

Problem: timeouts are inconsistent and not configurable

There are many places in these tests where timeouts are used in an ad hoc, de-centralized way. In other cases, the tests run in an infinite loop until some expected event happens in the Archivematica GUI. This is problematic especially in a CI context where we need the tests to fail without human intervention.

In some cases the timeout should be written right into the feature file since the time estimate for a task depends on the nature of the transfer being processed.

In other cases, a centralized set of timeout variables should be used and these should be made configurable from behave user data flags.

In cases where there are infinite loops without logical breakpoints, we need to introduce such a breakpoint and tie its timeout to one of the above two strategies.

@sevein if you have come across particular places in the tests where this type of issue has been a pain point, please note them here.

Problem: Need a feature file to describe a 'headless' archivematica

Columbia University Library is sponsoring development work in Archivematica to improve the performance of automated pipelines. The first phase of this work involves removing ElasticSearch from the components that are installed with Archivematica.

When Archivematica is run in an entirely automated configuration - with new content being ingested, processed and packaged into AIPs entirely without human intervention - using Elasticsearch does not provide any benefits. This configuration can be referred to as 'headless', in the sense that the dashboard component is not being used as a GUI, but only for the REST API to control the flow of content through the system.

Excluding Elasticsearch from the list of components installed and used by Archivematica will reduce the operational complexity and the required compute resources.

We need a feature file that describes the expected behaviour when installing or upgrading Archivematica, so that it is clear how the process should work to allow this new mode to operate without breaking existing installations. The feature file will also need to describe the expected behaviour when a headless system is running.

Problem: tests break if {am,ss}_url includes standard port

If a value such as http://1.2.3.4:80 is passed as am_url (where 1.2.3.4 is any network location and the standard port 80 is included rather than omitted), then the tests misbehave when URLs are compared, because the browser may drop standard ports such as 80 or 443 from the URL.

At least this function is misbehaving:

def navigate(self, url, reload=False):
    """Navigate to ``url``; login and try again, if redirected."""
    if self.driver.current_url == url and not reload:
        return
    self.driver.get(url)
    if self.driver.current_url == url:
        return
    if self.driver.current_url != url:
        if self.driver.current_url.endswith('/installer/welcome/'):
            self.setup_new_install()
        else:
            if url.startswith(self.ss_url):
                self.login_ss()
            else:
                self.login()
        self.driver.get(url)

Here self.driver.current_url and url fail to match solely because of the explicit standard port.

Workaround: pass -D am_url=http://1.2.3.4/ instead of -D am_url=http://1.2.3.4:80/.
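
A more robust fix, sketched here as an assumption rather than the project's actual approach, would be to normalize both URLs before comparing them so that an explicit standard port does not cause a mismatch:

from urllib.parse import urlparse, urlunparse


def normalize_url(url):
    parts = urlparse(url)
    if (parts.scheme, parts.port) in (('http', 80), ('https', 443)):
        parts = parts._replace(netloc=parts.hostname)  # drop the explicit standard port
    return urlunparse(parts)


# normalize_url('http://1.2.3.4:80/') == normalize_url('http://1.2.3.4/')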

Problem: allow-orphaned-key-delete fails because AIP deletion search input ID has changed

The When the AIP is deleted step fails because driver.find_element_by_id('DataTables_Table_0_filter') in am_browser_ss_ability.py now returns a <div> when previously it returned an <input> (I believe). This triggers an exception and the following stacktrace:

@allow-orphaned-key-delete
Scenario: Richard wants to ensure that GPG deletion is never permitted if the key is associated to a space or if it is needed to decrypt an existing package. However, if all space associations are destroyed and all dependent packages deleted, then deletion of the (orphaned) key should be permitted.  # features/core/aip-encryption.feature:89
  ...
  When the AIP is deleted                                                                                                                                                                                                                                                                                    # features/steps/steps.py:378 2.818s
    Traceback (most recent call last):
      File "features/steps/steps.py", line 382, in step_impl
        context.am_user.browser.approve_aip_delete_request(uuid_val)
      File "/Users/user/Documents/Artefactual/am/src/archivematica-acceptance-tests/amuser/am_browser_ss_ability.py", line 33, in approve_aip_delete_request
        self.driver.find_element_by_id('DataTables_Table_0_filter').send_keys(aip_uuid)
    selenium.common.exceptions.WebDriverException: Message: unknown error: cannot focus element
      (Session info: chrome=65.0.3325.181)
      (Driver info: chromedriver=2.38.552518 (183d19265345f54ce39cbb94cf81ba5f15905011),platform=Mac OS X 10.13.4 x86_64)
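
A plausible fix (a sketch, not a confirmed patch) is to target the <input> nested inside the DataTables filter <div> rather than assuming the element with that ID is itself focusable:

def search_for_aip(driver, aip_uuid):
    # Locate the search box nested inside the filter container and type the
    # AIP UUID into it.
    search_input = driver.find_element_by_css_selector(
        '#DataTables_Table_0_filter input')
    search_input.send_keys(aip_uuid)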

Problem: Gherkin tests are said to be living documentation but they are not animated

So animate them!

Solution: Write a script that runs the tests at human-comprehension speed, with the Gherkin statements added as subtitles in an .srt file.

  • implement a "human speed" switch in archivematicaselenium.py with minimal code intrusion and which has an intelligent metric for determining how long to pause where. HARD PART.
  • Extract time indices from behave output.
  • Generate .srt file.
  • Automate screen capture in tandem with CI (vnc2swf script http://www.unixuser.org/~euske/vnc2swf/). Other hard part.

Problem: METS file retrieval fails to account for network latency

The get_mets method in am_browser_ingest_ability.py assumes that when a METS file is clicked on, the new window with the XML in it will appear immediately. However, on production (remotely served) applications this is a false assumption and the "XML" extracted will simply be the HTML source of a not-yet-fully-loaded page.
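
One way to address this, sketched under the assumption that the METS opens in a new browser window (function name is illustrative), is to wait explicitly for the window and for the document to finish loading before reading the page source:

from selenium.webdriver.support.ui import WebDriverWait


def get_mets_source(driver, timeout=60):
    # Wait for the new window to open, switch to it, then wait for the page
    # to finish loading before reading its source.
    WebDriverWait(driver, timeout).until(lambda d: len(d.window_handles) > 1)
    driver.switch_to.window(driver.window_handles[-1])
    WebDriverWait(driver, timeout).until(
        lambda d: d.execute_script('return document.readyState') == 'complete')
    return driver.page_source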

Serialized Output of Dashboard Options Inquiry

Hello,

As part of the feature testing, we'd like to save settings that we have toggled on and off in the dashboard to ensure reproducibility between our environment and the test environments we're setting up. This would be to ensure that we're running the same pipeline that we would normally run on our data in prod.

The documentation under Processing Configuration indicates that these dashboard settings should be defined in a file named processingMCP.xml (whose file name is configurable in the MCPServer config file).

I was unable to find processingMCP.xml on our current prod server. The filename assigned to the processingXMLFile field under MCPServer in the MCPServer config does not seem to have been changed from the default, so if the config were written out somewhere based on the value of this parameter, it should be where it would normally be in a stock Archivematica installation.

There is a file located within the directory assigned to sharedDirectoryMounted in the MCPClient config that looks similar to what processingMCP.xml should look like: this file is found at ./sharedMicroServiceTasksConfigs/processingMCPConfigs/defaultProcessingMCP.xml under the directory defined by sharedDirectoryMounted. The chief difference between this file and what we'd expect from processingMCP.xml, however, is that whereas we'd expect processingMCP.xml to use prose-based names for the applies and goToChain fields (like here), it instead populates these fields with UUIDs.

This leads to the following questions on our end:

  • Is defaultProcessingMCP.xml the correct file we'd want to use to ensure our pipeline settings are preserved on the test hosts?
  • Are the UUIDs assigned to the microservices on our prod host (and used within the XML config) portable across hosts? Can we drop in defaultProcessingMCP.xml on the test hosts and have it work without any further configuration/tweaking?

Problem: new contributions are too difficult

There are a number of reasons for this:

  1. Lack of guidelines on how to contribute.

  2. Code organization is lacking, e.g.,

    • modules are too big
    • concerns are not separated, e.g., GUI/Selenium calls mixed with REST API calls
    • steps file too big, should be split into many

Problem: it is not clear how to run specific scenarios

We need to institute and implement standard methods/workflows for indicating how a specific feature (or feature file) is to be run. This should be in the documentation and ideally also in the feature files themselves as comments and also in a makefile or shell script that can run all features or individual ones.

Can AIPs be reviewed without indexing?

In Columbia/indexless.feature there is an ambiguity about whether review is possible without the Index (transfer) service: the decision point suggests it is, but the mechanics are not clear to us. Can the scenario be clarified as to whether review is actually possible?

Problem: common, high-level step sequences are not standardized or abstract enough

There are too many ways being used currently to do the following:

  1. Perform some action, typically initiating a transfer/ingest
  2. Make a series of decisions until something happens:
    • an AIP is created
    • an AIP is available for review
    • a particular micro-service has completed
  3. Perform some second action, given the system/AIP is in the state guaranteed by (2)

We need a clear and memorable way of expressing, with minimal specification, common steps. For example:

When the user makes standard decisions to go from decision point "<DECISION_POINT_A>" to "<DECISION_POINT_B>"

or:

Given a processing config with standard decisions to go from decision point "<DECISION_POINT_A>" to "<DECISION_POINT_B>"

Here "to go from" implicitly assumes a particular sub-chain of the workflow, and how are "standard decisions" defined?

Enhancement: Acceptance tests for new CCA scripts needed

We have a project for the Canadian Centre for Architecture that has two deliverables:

  • a script to create DIPs in a particular structure for CCA researchers
  • a script to upload those DIPs to an instance of AtoM.

Both of these scripts need acceptance tests to specify the desired functionality and support automated testing of that functionality.
