
utplsql's Introduction

utPLSQL v3 | Testing Framework for PL/SQL




utPLSQL version 3 is a complete rewrite of utPLSQL v2 from scratch. Version 2 still supports older versions of Oracle that are no longer supported by Oracle. The community that had developed on GitHub decided that a new internal architecture was needed, and from that decision version 3 was born.

Introduction

utPLSQL is a Unit Testing framework for Oracle PL/SQL and SQL. The framework follows industry standards and best patterns of modern Unit Testing frameworks like JUnit and RSpec.

Key features

  • multiple ways to compare data with matchers
  • native comparison of complex types (objects/collections/cursors)
  • in-depth and consistent reporting of failures and errors for tests
  • tests identified and configured by annotations
  • hierarchies of test suites configured with annotations
  • automatic (configurable) transaction control
  • built-in coverage reporting
  • integration with SonarQube, Coveralls, Jenkins and Teamcity via reporters
  • plugin architecture for reporters and matchers
  • flexible and simple test invocation
  • multi-format reporting from a single test run from the command line

Requirements

Download

Published releases are available for download on the utPLSQL GitHub Releases Page.

Documentation

Full documentation of the project is automatically published on the utPLSQL GitHub Pages.

Cheat-sheets

Installation

To install utPLSQL into a new database schema and grant it to PUBLIC, execute the script install_headless.sql. This will create a new user UT3, grant all required privileges to that user and create the needed PUBLIC synonyms.

For detailed instructions on other install options, see the Install Guide.

Running tests

To execute tests from a development IDE (TOAD, SQL Developer, PL/SQL Developer or other), use one of the following commands.

begin
  ut.run();
end;
/

exec ut.run();

select * from table(ut.run());

The above commands run all the test suites in the current schema and report results to dbms_output or as rows of the select statement.
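You can also run a chosen suite or test package instead of the whole schema by passing a suite path to ut.run. A minimal sketch (the schema name hr and package name test_betwnstr are placeholders):

begin
  -- run a single test package in a named schema
  ut.run('hr.test_betwnstr');
end;
/

The same path syntax can be narrowed further to a single test procedure, e.g. 'hr.test_betwnstr.normal_case'.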

Command line client

You can use the utPLSQL command line client utPLSQL-cli to run tests without the need for an Oracle Client or any IDE like SQL Developer/TOAD etc.

Amongst many benefits, it provides the ability to:

  • see the progress of test execution for long-running tests - real-time reporting
  • use many reporting formats simultaneously and save reports to files (publish)
  • map your project source files and test files into database objects

Download the latest client and you are good to go. See the project readme for details.

Example unit test packages

For examples of using Continuous Integration Server & SonarCloud with utPLSQL see the utPLSQL demo project.

The below test package is a fully-functional Unit Test package for testing a betwnstr function. The package specification is annotated with special comments. The annotations define that the package is a unit test suite, and they also allow defining a description for the suite as well as for each test. The package body consists of procedures containing the unit test code. To validate an expectation in a test, use the ut.expect( actual_data ).to_( ... ) syntax.

create or replace package test_between_string as

  -- %suite(Between string function)

  -- %test(Returns substring from start position to end position)
  procedure normal_case;

  -- %test(Returns substring when start position is zero)
  procedure zero_start_position;

  -- %test(Returns string until end if end position is greater than string length)
  procedure big_end_position;

  -- %test(Returns null for null input string value)
  procedure null_string;
end;
/

create or replace package body test_between_string as

  procedure normal_case is
  begin
    ut.expect( betwnstr( '1234567', 2, 5 ) ).to_( equal('2345') );
  end;

  procedure zero_start_position is
  begin
    ut.expect( betwnstr( '1234567', 0, 5 ) ).to_( equal('12345') );
  end;

  procedure big_end_position is
  begin
    ut.expect( betwnstr( '1234567', 0, 500 ) ).to_( equal('1234567') );
  end;

  procedure null_string is
  begin
    ut.expect( betwnstr( null, 2, 5 ) ).to_( be_null );
  end;

end;
/

Outputs from running the above tests

Between string function
  Returns substring from start position to end position
  Returns substring when start position is zero
  Returns string until end if end position is greater than string length
  Returns null for null input string value

Finished in .036027 seconds
4 tests, 0 failures

Contributing to the project

We welcome new developers to join our community and contribute to the utPLSQL project. If you are interested in helping, please read our guide to contributing. The best place to start is to read the documentation and get familiar with the existing code base. The Slack chat is the place to go if you want to talk with team members. To sign up for the chat, use this link.


Authors


Project Directories

  • .github - contains files needed for GitHub Actions integration
  • .travis - contains files needed for Travis CI integration
  • client_source - Sources to be used on the client side (developer workstation or CI platform) to run the tests.
  • development - Set of useful scripts and utilities for development and debugging of utPLSQL
  • docs - Documentation of the project
  • examples - Example source code and unit tests
  • source - The installation code for utPLSQL
  • tests - Tests for utPLSQL framework

If you have a great feature in mind that you would like to see in utPLSQL v3, please create an issue on GitHub or discuss it with us in the Slack chat rooms. Use the invite link to join the chat.

Version 2 to Version 3 Comparison


Supporters

The utPLSQL project is community-driven and is not commercially motivated. Nonetheless, donations and other contributions are always welcome, and are detailed below.

utPLSQL has been supported by Redgate in the form of sponsored stickers and t-shirts. Thank you for helping us spread the word!

utplsql's People

Contributors

abasharin, alexisgaldamez, barsema, felipebz, frunjik, gassenmj, github-actions[bot], goleztrol, jgebal, john-otoole, kukimik, lwasylow, makwanajigneshm, mickevrolijk-of, opendba, osblaineora, paytonrules, pazus, pesse, philippsalvisberg, proldan, rlove, shoelace, tschf, viniciusam


utplsql's Issues

v2 migration routines. Part 1

Implement routines to migrate the current v2 configuration to v3. In part one, let's introduce generation of test package specifications, generating annotations for tests/setups/teardowns.

The configuration tables of v2 also have to be taken into account.

In part 1 the contents of package bodies remain unchanged, so all assert calls have to be changed manually to use the v3 package.

Redefine reporters hooks

I'd like to propose redefining the reporter hooks a bit to:

  1. before_execution (before the execution of all the suites in the schema)
  2. before_suite_begin
  3. after_suite_begin
  4. before_test
  5. before_asserts_process
  6. on_assert
  7. after_asserts_process
  8. after_test
  9. before_suite_end
  10. after_suite_end
  11. after_execution

The more detailed the hooks are, the better the reporting solutions that can be built.

Create 2.3.1 Release

Create 2.3.1 Release.
Latest 2.3.1 code with all the documentation pointing to GitHub.

Implement Documentation Framework

  • Implement code to generate documentation from code comments.
  • Figure out how to integrate existing markdown documentation.
  • This code should be executed in the travis-ci build.

Publishing of documentation is not considered part of this issue.

Implement jUnit XML formatted report

Implement a jUnit XML report by creating a new reporter class.

The reporter should be able to print its content to serveroutput, or return it as a clob/xmltype variable. Most likely the reporter would be configured and processed by a CI server using SQL*Plus, so spooling might be a way it can work out.

Add to_raise matcher

We need a way to test a plsql_block to check whether it raises an exception.

Specification:
to_raise(a_error_no)
to_raise(a_error_message_mask)
not_to_raise() - test to check that no exception is raised
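A sketch of how the proposed matcher might read (the procedure my_proc and the string-block style are purely illustrative - this is a proposal, not a shipped API):

begin
  -- expect the block to raise ORA-01403 (NO_DATA_FOUND)
  ut.expect( 'begin my_proc; end;' ).to_( to_raise( -1403 ) );
  -- expect the block to run without raising any exception
  ut.expect( 'begin my_proc; end;' ).to_( not_to_raise() );
end;
/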

Running Unit Tests from different schema.

As part of this feature we need to:

  • include installation option to:
    • create synonyms (public/specific user) for public API elements
    • add grants (public/specific user) to public API objects
  • within Travis CI installation test the public/specific user option by:
    • creating the 'specific user' so that the user uses private synonyms
    • creating 'another user' so that the user uses public synonyms
    • installing a unit test package in 'specific user' schema and running it
    • installing a unit test package in 'another user' schema and running it

need counter argument for "missing" feature / feature request

OK,

so the team here at my new office want to start implementing unit tests (yay!)
the main guy who is pushing it wants to go with the unit testing from SQLDEVELOPER (which i havent used before), whereas i was suggesting utPLSQL (which i have used)

He's just shown me a demo and i dont seem to have a counter argument for one of its features, namely:

  • you dont have to have the tool installed on each database, only the underlying packages.

if you are not aware.. it basically works as follows.. you have a package with tests installed on the desired database. separately you have a unit-test repository installed on an instance, and the input, output and validation of each function/procedure is configured (via a gui) and stored in the repo. when the tests run it grabs the metadata from the repo, executes the package against the desired instance and then validates the results against the metadata and stores results back into the repo.

so.. my questions are

  • what is the counter argument for this feature? as it does seem pretty useful
    and
  • is this something we want to add for v3 (or is it just a different/incompatible design philosophy )

i guess i should point out.. that part of his goal for this is to use 'unit testing' as a way to monitor the database for anomalies. odd object grants, duplicate/missing data in core tables, etc..

How to handle outputs from several reporters?

With SQL*Plus and other SQL consoles it's really hard to spool data from several different streams into several output files.
We need a nice solution that would allow us to save reporter outcomes into different files on the client side.

Transaction control using framework

Implement transaction control in the framework, configurable using annotations both on suite and test level (test overrides suite config).

I propose the default state to be:

  1. savepoint before test (after setup) execution
  2. rollback to savepoint after test
  3. rollback to savepoint if the test raises exception

Introduce a new annotation to disable transaction control.
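A sketch of what such an annotation could look like on a suite specification (the %rollback(manual) name mirrors what utPLSQL v3 eventually shipped, but here it is only illustrative of the proposal):

create or replace package test_manual_tx as

  -- %suite(Suite that manages its own transactions)
  -- %rollback(manual)

  -- %test(Test with no automatic savepoint/rollback)
  procedure some_test;

end;
/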

Version 3 - Unit Test Specification

I have drafted a Version 3 Unit Test specification. It allows your current version 2 tests to run in version 3 but also gives many new features that are specific to version 3.

Key Features

  • Prefix is optional
  • a test package can have optional metadata.
  • Multiple Setup, Teardown, and Test procedures with any association
    • Example:
    • setup and teardown can be associated with test1 and test2 but not with test3 or test4
    • setup3 and teardown3 can be associated with test3
    • setup and teardown3 can be associated with test4
  • Auto test detection can be done at registration or test execution/run time.
  • Test Suite Registration
    • Methods can be registered explicitly (bypassing the detection algorithm)
    • Packages can be registered explicitly
    • Packages can be found by registering one or more prefixes.
    • Suites can contain suites to allow for grouping of smaller test sets into a larger suite.

Please review and let me know what you think.

Build Server?

Do we have any build servers? (Something that will build and test utPLSQL.)

Compatibility with 11.2

Hi all.

I've forked the version3 branch and unfortunately I've figured out that it is not compatible with 11.2, due to the use of record types in Native Dynamic SQL statements, which only appear to be supported in 12c. It might slightly change the idea to store all types in the ut_types package.

Do you think records with collections should be replaced with object types?

Also I think the definition of the dbms_id and dbms_quoted_id types should be made with conditional compilation to support both 11g and 12c.

ORA-02289: sequence does not exist

Our DBA installed the utPLSQL package (without using/allowing synonyms). I've tried to use it but I get an error.
The error comes whether I use the name of an existing ut_ package or not.

SQL> exec utp.utplsql.run('ut_xxxxxxxx');
begin utp.utplsql.run('ut_xxxxxxxx'); end;
ORA-02291: integrity constraint (UTP.UTR_ERROR_OUTCOME_FK) violated - parent key not found
ORA-06512: at "UTP.UTRERROR", line 149
ORA-06512: at "UTP.UTRERROR", line 324
ORA-06512: at "UTP.UTROUTCOME", line 146
ORA-01400: cannot insert NULL into ("UTP"."UTR_OUTCOME"."RUN_ID")
ORA-06512: at "UTP.UTRESULT2", line 72
ORA-06512: at "UTP.UTASSERT2", line 137
ORA-06512: at "UTP.UTASSERT", line 49
ORA-06512: at "UTP.UTPLSQL", line 1104
ORA-02289: sequence does not exist
ORA-06512: at "UTP.UTPLSQL", line 1379
ORA-06512: at line 1

I'll also try to get some help from the DBA next week to trace the code; now I can only see the traceback, but not which sequence is missing.

There are 10 sequences under the schema utp; almost every one of them starts with ut_. Just one doesn't: UTPLSQL_RUNNUM_SEQ.

Add Code coverage gathering and reporting

We need to add a reporter that will run a code coverage analysis using the PL/SQL profiler.
The reporter should provide a few different ways to generate reports: XML format for consumption by other tools (CI/Sonar) and/or HTML for viewing the coverage report.
The different output formats could be achieved with a decorator class.

Annotations as declarative configuration of tests

I'd like to propose a way to configure tests and suites declaratively, similar to modern OOP languages, using "annotations". Let's say we have a test package like this:

create or replace package test_pkg is

  -- %suite(Name of suite)
  -- %suitepackage(all.globaltests)
  -- %suitetype(critical)

  -- %suitesetup
  procedure globalsetup;

  -- %suiteteardown
  procedure global_teardown;

  /* Such comments are allowed */

  -- %test(Name of test1)
  -- %testtype(smoke)
  procedure test1;

  -- %test(Name of test2)
  -- %testsetup(setup_test1)
  -- %testteardown(teardown_test1)
  procedure test2;

  -- %test(Name of test3)
  -- %testtype(smoke)
  procedure test3;

  procedure setup_test1;

  procedure teardown_test1;

  -- %setup
  procedure setup;

  -- %teardown
  procedure teardown;

end test_pkg;

Using an annotation parsing engine, a Suite object has to be constructed.
Annotations meaning:

Annotation Meaning
%suite Marks a package as a suite with its procedures as tests. This way all testing packages can be found in the schema. The parameter of the annotation is the suite name.
%suitepackage Similar to a Java package. The example suite should be put as an element of the "globaltests" suite, which is an element of the "all" suite. This allows one to execute the "globaltests" suite, which would recursively run all the child suites including this one.
%suitetype A way for a suite to have something like a type. One might collect suites based on the subject of tests (a system module, for example). There might be critical tests to run every time, and more covering but slower tests. This technique allows configuring something like "fast" testing.
%setup Marks a procedure as the default setup procedure for the suite.
%teardown Marks a procedure as the default teardown procedure for the suite.
%test Marks a procedure as a test. The parameter is the name of the test.
%testtype Another way to tag tests to filter on afterwards.
%testsetup Marks that a special setup procedure has to be run before the test instead of the default one.
%testteardown Marks that a special teardown procedure has to be run after the test instead of the default one.
%suitesetup A procedure which executes once at the beginning of the suite and does not execute before each test.
%suiteteardown A procedure which executes once after the execution of the last test in the suite.

Annotations allow us to configure the test infrastructure in a declarative way without anything stored in tables or config files. The suite manager would scan the schema for all the suitable packages, automatically configure suites and execute them. This can be simplified to the situation where you just run the suite manager over a schema for the defined types of tests and reporters, and everything runs automatically. This is going to be convenient to execute from CI tools using standard reporting formats.

What do you think about such configuration approach?

Tags for tests and suites

Introduce annotations to tag tests and suites.
Implement filters on execution procedures to include/exclude suites/tests by tags.

We should discuss whether a "tag" should be assigned to a suite, or whether a "suite_type" with predetermined values fits better.

Update Sourceforge to refer people to GitHub

The project needs to be left on Sourceforge for historical purposes (it might even be impossible to remove it anyway), but it needs updating so people know to come here for it now. It will also lend credibility to this being the new official home of the project if the old home refers to it.

Version 3 Design Proposal

I took a couple of days thinking about how I might write version 3. It turned into something way bigger than I had planned.

Since it involves lots of files, I put it in a new repository as I thought it might be easier to communicate.

https://github.com/rlove/ut3

How to do discussions?

On Sourceforge, we had two discussion forums:

GitHub doesn't do forums. What do we do instead?

GitHub Issues

I think Issues might work OK, but they could be mixed up with other types of issues. Labels should help with this (i.e. filter on the discussion label). I'm also experimenting with waffle.io to help visualise Issues - discussions would probably work best in their own column. I like the idea that it's part of GitHub - no separate site, no separate logins, everything links and cross-references really well, etc. It's actually pretty close to being a forum, once you've done the initial filtering on a label. I want to carry on with this at least in the interim unless/until we find something better.

Slack

I've just had a quick look at Slack. I've never used it and know next to nothing about it, but it seems to be a popular tool. The main issue I've seen so far is that while the conversations can be public, people have to be invited before they can contribute. This doesn't necessarily rule it out (new people can always ask to be added), but it's not great.

Gitter

Gitter looks to integrate very nicely with GitHub. The conversation seems more like a chat room than threaded conversations, but I think we could set up different rooms to talk about the main subjects.

Google+ Communities

In commit e798cf8 @rlove suggested that other projects have used Google+ Communities.

What does anyone think?

Code Documentation

Similar to #11, I think we should define how the code is documented. The best thing I've found to produce nice output is NaturalDocs. It generates pretty web pages and the comments in the code are easily readable.

Has anyone found anything better?

Travis-CI automation of release

This will enable automation of the release. This is only for the version 3 code base.

  1. Run Build/Tests and make sure everything is ok.
    • This includes building of documentation.
  2. Create ZIP file :
    • Generated Documentation - \docs\ directory
    • Lib\Any_data directory
    • source\ Directory
    • Root blank for now, but will most likely include a release readme of some type in the future.
  3. Updates gh-pages with new "release specific" version documentation (as opposed to the documentation published on a push, which is addressed in Issue #59).
  4. Creates release with Zip File.

Notable documentation to help:

Add be_within 'fuzzy' matcher

For numbers we should allow a comparison with a percentage deviation.
The syntax:
ut.expect( a_actual =>1.23 ).to_( be_within( a_pct => 1, a_expected => 1.23456789 ) )
should give success
ut.expect( a_actual => 1.2 ).to_( be_within( a_pct => 1, a_expected => 1.23456789 ) )
should give failure (1.2 lies outside the 1% range around 1.23456789)

Calculation logic.
a_actual between (a_expected - ((a_expected * a_pct)/100) ) and (a_expected + ((a_expected * a_pct)/100) )
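Working the numbers (bounds = a_expected +/- a_expected*a_pct/100): with a_expected = 1.23456789 and a_pct = 1, the accepted range is roughly 1.2222222 to 1.2469136, so 1.23 falls inside it while 1.2 falls outside. A sketch of the check as a standalone function, using the parameter names from the specification above:

create or replace function is_within(
  a_actual number, a_pct number, a_expected number
) return boolean is
  l_delta number := abs( a_expected * a_pct / 100 );
begin
  -- success when actual lies inside expected +/- pct percent
  return a_actual between a_expected - l_delta and a_expected + l_delta;
end;
/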

Publish documentation on merge with version3 branch

  • Determine if you can securely publish the documentation that has been merged into version3 using travis-ci.
  • Implement it if possible, otherwise find another solution to automatically publish the latest documentation from the build.

Since multiple people can configure travis-ci with their own builds, care must be taken to interact with the correct repository.

Add be_between matcher

We need another matcher that would allow comparison of data in ranges.
Specification:

constructor description
be_between( number, number ) for all numeric data types
be_between( timestamp_tz_unconstrained, timestamp_tz_unconstrained ) for all date data types
be_between( interval_ds_unconstrained, interval_ds_unconstrained ) for interval day to second
be_between( interval_ym_unconstrained, interval_ym_unconstrained ) for interval year to month
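A usage sketch under the proposed specification (the values are illustrative):

begin
  ut.expect( 5 ).to_( be_between( 1, 10 ) );
  ut.expect( sysdate ).to_( be_between( sysdate - 1, sysdate + 1 ) );
end;
/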

Add <, >, <=, >= matchers

We need 4 additional matchers:
be_lt / be_less_than / "<"
be_gt / be_greater_than / ">"
be_le / be_less_equal / "<="
be_ge / be_greater_equal / ">="

Those matchers should operate on following data types:
number
date family
interval family

New Project Administrator?

It has become clear in recent weeks that there are other people with more time and energy to spend on this project than I have. I'm pleased to see a group of people working together to drive this project forward, but as things stand at the moment, there's more chance of me holding things up than adding much value.

With that in mind, if anyone would like to volunteer to take on running the project, I will be more than happy to take a back seat. Any such volunteer would of course have my full support and I will do what I can to help in the transition.

utplsql v3 - Add Unit Tests

We need thorough testing of the framework to make sure we provide a good quality solution and also to make sure that all the functionality that is added or changed works as expected.
Using a build server, we can easily run all the tests to have a project health-check.

There are a few things to be done:

  1. Build and establish a testing methodology that will allow us to test the utPLSQL framework using CI
  2. Build unit tests for functionality that is already there
  3. Establish a culture of contribution - all new functionality is fully covered with unit tests

Automate Deployment by Tagging Release

Create a new deployed_readme.md

  • Light on Markdown features so that it is easily read with a text editor
  • Brief introduction to product
  • Describes the directories and where to go in the documentation for getting started and installation.
  • links back to the project

Create a new release_notes.md

  • Will only contain notable changes since prior release.
  • Will be used as the text for the github release notes.
  • This will not be a complete release note history; we have GitHub Releases for that.

Create a script .travis\create_deploy_zip.sh that will generate ZIP file named $TRAVIS_TAG.zip with the following structure:

  • root directory contains:
  • readme.md which is really deployed_readme.md renamed
  • LICENSE from repo root
  • authors.md from repo root
  • CONTRIBUTING.md from repo root
  • release_notes.md from repo root
  • NOTE: Existing readme.md will not be deployed.
  • examples/ contains existing repo examples directory
  • docs/markdown/ contains markdown version of documentation in repo docs directory
  • docs/html/ contains the html version of the documentation which is in the repo's root/site/ directory after build_docs.sh is called
  • source/ contains current repo source directory
  • Create script .travis\deploy_release.sh
    • Setup to only execute on a tagged build.
    • Calls script .travis\create_deploy_zip.sh
    • Creates GitHub Release using:
      • ZIP File created
      • release_notes.md will be used for description.
      • Release name will match TAG Name

Realtime reporting using dbms_pipe

Implement execution of the test run in a separate session.
Implement a reporter that pushes messages to dbms_pipe and never fails.
Implement a pipelined function which produces results as they appear in the pipe. Selecting from the function will allow having realtime report results in the CI.

As a first approach, Teamcity realtime reporting can be used.
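A minimal sketch of the sending side using the standard dbms_pipe API (the pipe name parameter and plain-text message format are assumptions; errors are deliberately swallowed so the reporter never fails the test run):

create or replace procedure pipe_report( a_pipe_name varchar2, a_text varchar2 ) is
  l_status integer;
begin
  dbms_pipe.pack_message( a_text );
  -- send with a short timeout; a non-zero status is simply ignored
  l_status := dbms_pipe.send_message( a_pipe_name, timeout => 1 );
exception
  when others then null;  -- the reporter must never fail
end;
/

On the consuming side, a pipelined function would loop over dbms_pipe.receive_message and dbms_pipe.unpack_message, emitting each message with PIPE ROW so a client can select results as they arrive.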

How to do a collaborative document(s) for v3 design?

There's been some talk in recent months about working on a new major version of utPLSQL. This is likely to include dropping support for older versions of Oracle and giving the code base a thorough spring clean.

What's the best way for multiple people to work on a v3 design document? This will be very iterative and (hopefully!) have contributions from multiple people.

I'm far from an expert on GitHub, so I'm not sure of the best way to do this... 😕 Any suggestions gratefully received...
