cheshire137 / webapptestcasegenerators

Web scraper written in Ruby for generating test cases for the QMZ web application testing model, plus a second Ruby tool for generating test cases for the Atomic Section Model web application testing model for Ruby on Rails web applications.

License: GNU General Public License v3.0

Ruby 99.80% Shell 0.20%
ruby web-scraper test-cases school-project

webapptestcasegenerators's Introduction

Automatic Generation of Artifacts for Two Web Application Testing Models

Abstract

Web applications are prevalent, and it is important that they be of high quality, since businesses, schools, and public services rely upon them. Testing models designed specifically for web applications can be beneficial, since web applications differ greatly from desktop applications: they are accessed via a browser, and a user can manipulate a web application in ways not possible with desktop applications, such as modifying the URI or using the browser's Back button. Applying a given testing model to a web application can take a great deal of time, due to the size of the application as well as the many steps involved in applying a model. This project seeks to decrease the time necessary to apply two particular web application testing models: the Atomic Section Model (Jeff Offutt, Ye Wu) and the Qian, Miao, Zeng model (Zhongsheng Qian, Huaikou Miao, Hongwei Zeng). Two tools were written in the Ruby programming language, one for each model. The ASM tool takes as input the source code of a Ruby on Rails web application and produces test paths that can be traversed manually to ensure good coverage of the application; the QMZ tool takes as input the URI of a web application written with any framework and produces artifacts that can be further manipulated manually to produce test paths. Through the use of these tools, a web application developer can better see how to test his or her application, and can see all the paths through the application that a user might take.

The project

This was my Master's project at the University of Kentucky. I provide the source code here for anyone who might get some use out of it, either by using my tools to test their web applications, or by seeing how I accomplished some task with Ruby. See presentation.pdf for an overview of the whole project. The project is divided into two tools, one for applying the QMZ model to a web application, the other for applying the ASM to a web app.

Usage instructions

Several libraries are necessary to run either the QMZ or the ASM scripts. You will need to install Treetop, Nokogiri, and possibly others I am forgetting; there is not yet an installer that sets up all dependencies for you. To run the QMZ tool, run ruby qmz/scraper.rb and it will provide more help. To run the ASM tool against a single ERB file, run ruby asm/single_file_generator.rb; to run the ASM tool against an entire Rails application, run ruby asm/generator.rb. Both will provide further instructions.
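If you manage dependencies with Bundler, a minimal Gemfile along these lines should work. This is only a sketch: the repository does not necessarily ship a Gemfile, and there may be required gems beyond the two named above.

source 'https://rubygems.org'

gem 'treetop'   # parsing library mentioned above
gem 'nokogiri'  # HTML/XML parsing library mentioned above

With such a file, bundle install pulls in both gems; a plain gem install treetop nokogiri works just as well.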

Sample commands:

ruby qmz/scraper.rb -u "http://example.com/"
ruby asm/single_file_generator.rb app/views/test/_feedback.html.erb "http://example.com"
ruby asm/generator.rb myRailsApp "http://example.com"

Copyright

I release the source code of this project under the GNU General Public License v3.

webapptestcasegenerators's People

Contributors

cheshire137

Forkers

pesaply

webapptestcasegenerators's Issues

ASM - navigate Rails directory structure and parse all ERB files

Currently, only a given ERB file is parsed and its component expression calculated. For test paths to be created for the whole site's atomic sections, though, all the ERB files in the site must be parsed. The script needs to be altered so that it takes a directory instead of a file, expects that directory to follow Rails conventions, and then looks in app/views/ for ERB files to parse, as sketched below.
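A rough sketch of that directory walk, assuming the existing per-file logic can be invoked for each path found; the method names here are illustrative, not the project's actual API.

# Collect every ERB template under a Rails app's app/views/ directory.
def erb_files_in(rails_root)
  Dir.glob(File.join(rails_root, 'app', 'views', '**', '*.erb'))
end

erb_files_in('myRailsApp').each do |path|
  # Hand each template to the single-file generator here (hypothetical hook).
  puts path
end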

QMZ - support logging into a site for more scraping

Currently, the tool just scrapes the site without providing any credentials, so it gets the view of the site that a non-authenticated user would have. It should allow the user to provide credentials and the name of a form/page on which to log in, then scrape the entire site before logging in and scrape it again after logging in. Should the PTTs be merged at that point?
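One possible shape for the login step, sketched with the Mechanize gem (not currently a listed dependency); the URL, form layout, and field names are hypothetical.

require 'mechanize'

agent = Mechanize.new
login_page = agent.get('http://example.com/login')  # page named by the user
form = login_page.forms.first                       # or select the form by name
form['username'] = 'someuser'                       # credentials supplied by the user
form['password'] = 'secret'
agent.submit(form)

# Mechanize keeps the session cookie, so later agent.get calls see the
# authenticated view of the site and a second PTT can be built from it.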
