
DXA Analysis Automation

This script automates most of the sub-points of the DXA Analysis performed by G5. Typically done by hand, the DXA consists of tests that score, as objectively as possible, the online presence of a business. The results are typically used in a sales environment to compare the current online performance of a prospective client to their future online performance given a relationship with G5. The value of automating this tool comes from the two places where human evaluations fall short: speed and consistency. Not only does this tool grade faster than a human would, but it also removes almost all subjectivity from the pre-existing analysis.

Directions to Run:

  1. Download Ruby (follow the directions at https://www.ruby-lang.org/en/documentation/installation/). For more info on Rubygems, or what a gem even is, see http://guides.rubygems.org/.

  2. Figure out how to install gems on the operating system you are using. On macOS, the command to install a gem is “sudo gem install <gem_name>”. On Windows, gems can be installed using a utility called RubyInstaller (http://rubyinstaller.org/).

  3. Download each of the following:

[Rubygems](https://github.com/rubygems/rubygems)
[Selenium-WebDriver](https://rubygems.org/gems/selenium-webdriver)
[Nokogiri](https://rubygems.org/gems/nokogiri/)
[Open URI Redirections](https://rubygems.org/gems/open_uri_redirections)
[Algorithms](https://rubygems.org/gems/algorithms)
[JSON](https://rubygems.org/gems/json/versions/1.8.3)
  4. Register for the Google Developers Console and make two projects: one for the Google Page Speed Tool, and the other for the Google+ API. Then, for each project, go to the credentials section and copy the API key. Paste the PageSpeed API key into the file dxaAutomation/keys/gPageSpdAPIKey.txt, and the Google+ API key into the file dxaAutomation/keys/gPlusAPIKey.txt, as raw strings (no punctuation, newlines, or special characters).

  5. Clone the dxaAutomation BitBucket repo onto your local machine. Do not delete any files or directories once the repo is downloaded. Files that may seem non-essential probably are essential (some good examples are "removeList.txt", "testClass1.rb", and "driver1.rb"). If these files are deleted, the DXA won't work.

  6. Type the command “cd dxaAutomation/drivers”, followed by "ruby runDXA.rb", to run the automation. Answer the questions in the terminal when they arise.

  7. Once the test is done, open the desired results files (the ones ending in “.csv”, located in “dxaAutomation/results”) in Excel, and format the data as needed.

*If the test does not work due to errors with removing files, delete "removeList.txt", and the contents of the results directory. Once this is done, re-run the test.

Background:

Each test was written in one of three ways. Given the nature of the web (a place where no two web elements are contextually similar), a significant number of the tests rely on contextual data (“Navigation Bar Structure”, “Competitive Position”, etc. are examples of tests written this way). Another notable chunk of the tests are almost fully objective (“Website Load Speed”, “301 Redirect”, etc.). Finally, the tests that could not be tackled by the former means use user input (“Calls to Action: Prominent”, “SCAN Able Content”, etc.) to determine the status of some of the more subjective points. A further explanation of the rationale/strategies behind each test is below under the “Test Explanations” section.

The Future of the DXA:

From a Development Standpoint:

If future development does occur on this project, then in my opinion, the best item to address next would be a user interface. At the point where the DXA was left off, everything was still terminal based. Although the commands to run the script are relatively simple, a user interface would definitely be of benefit to the primary users of this tool—salespeople. From there, the best course of action to take would be to redefine potentially unreliable tests (or increase their reliability), and develop some sort of analytics tool (graphs, models, charts that are not already included in the DXA) to visually capture the value points that we are trying to show clients.

From an Investment Standpoint:

The automation of the DXA generates more credibility around this sales piece. Prospects are more likely to believe results that are generated by a procedural method than results produced by salespeople who might have varying knowledge of the DXA and the web in general. Additionally, this tool is a step forward into a domain that G5 does not fully own yet: web analysis. With Reputation Manager, the CMS, and our other analytics tools, we specialize in providing insight to our clients about our sites, but not to others about their sites. It is more impressive to tell clients, “we have analytical power over any site”, instead of “we have analytical power over our sites”. Credibility and analytical power are reason enough to invest in further iterations of the DXA.

From a Product Standpoint:

The DXA automation provides analytics that are reusable in both sales and development environments. If the insight this tool provides were given to the web content team, the SEO team, or even engineering, they could each do great things with it. A product like this might also be useful in the context of the CMS. Imagine if clients had access to a tool like this. They could run it and see the difference between the DXA that was shown to them at a G5 sales presentation and a DXA run after our employees have refactored their online presence. They would be extremely pleased to have a quantifiable metric for how their site has improved because of G5.

Extra Features:

Some of the notable features implemented in the project are: a debug file system, a file management system, and a competition finder. The debug file system is primarily a developer tool, which allows someone to see what is happening, and where, inside the DXA code. It allows variables, logic, and results to be visualized in a way that is more sophisticated than primitive debugging methods. The file management system keeps track of the results of the previous DXA run and deletes those files once the next iteration of the DXA is started, in order to keep the results folder manageable. Finally, the competition finder uses a utility called SpyFu to gather business competitors given a url that you input. It performs a recursive breadth-first search to find the nearest competitors until a big enough competitor list is generated. It then allows the user to run the DXA on any competitors found.

Notable (Potential) Features:

SpyFu SEO Tool: SpyFu is a distinctive marketing research tool that contains a wealth of analytics information (SEO, backlinks, competitors, ranking information). For more info, visit http://www.spyfu.com/seo/overview/domain?query=getg5.com and see the results generated. The notable features of this site, which could be valuable additions to the DXA, are the “Top Organic Competitors” graph and the competitor analysis tool (the section of the page where they present you with your top competitors). With these powerful analytics tools, a whole new depth could be added to the DXA in a visual sense. Even if the data is not pulled directly from this site, the way in which they visually format their information is compelling, and could provide a template for how we visually format the DXA in the future.

Web SEO Analytics: Although most of the tools on http://www.webseoanalytics.com/free/ are only available with a monthly subscription, some of them (if they do what they claim) could be extremely beneficial in a sales setting. Tools like WSA Spider, SERP Analysis, and Link Structure (which was used in one of the tests) could provide analysis and data beyond what we already provide to show prospects how much their websites need G5 refactoring.

Moz Local: Although the usefulness of some Moz Local features is not immediately apparent in the DXA, this tool offers citation data across a multitude of user review sites. Given that the DXA currently analyzes only Google Plus citations, being able to retrieve citation information from sites like Facebook, Bing, Yelp, and City Pages could benefit the sales process.

Developer Pick Up and Refactoring:

If a developer were to pick up on this project, as said before, the best course of action would be to develop a user interface, and improve test reliability.

The former would best be accomplished by using Rails, given the fact that all G5 products are mainly Rails-based. Not only would the next developer have access to G5 Rails resources/people that know Rails, but if this were handed off to someone in Engineering, that person wouldn’t have to learn a new framework in order to work with this tool.

The latter should be tackled in the following way. Reliability of each test should be established, and the least reliable should be refactored/rewritten first. That is not to say that any of the tests are unreliable. They were written to best serve the contexts they reside in. But anyone who has been on the web knows that contexts change. Therefore, tests may need to be adjusted in the future to improve reliability.

That being said, the actual process to better the tests would be accomplished by doing something along these lines: observe contexts in which the sub-point you are testing resides, and write tests based off that context.

For example, when the “Online Payment” test was written, it was observed that maintenance request forms were in one of three areas: a page directly linked to the home page, behind a login portal that was directly linked to the home page, or on an intermediate page between the home page and the login portal. Accordingly, the test searches for resident portal input fields along the login page path (i.e.: search www.coolhouses.com/home/ first, www.coolhouses.com/residents/ next, and www.logintoresidentportal.com/coolhouses/ last). Finally, it searches for the resident portal directly on the login page.

If there is a better way to tackle this than what I wrote, then all that needs to be done to alter this (or any test) is make needed changes to the test, and change the logic in the “each” loop of the “runDXA” function of “driver1.rb” to make all of the values passed between tests work.

Explanation of Dependencies:

In order to run the tests, a few dependencies need to be installed. Because I was using Ruby, the libraries (dependencies) required to run the program are called “gems”. The following are the gems you need (how to install them is covered in the “Directions to Run” section): Rubygems, Selenium WebDriver, Nokogiri, Open URI, Open URI Redirections, JSON, Google Plus, Timeout, Algorithms.
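
For reference, the scripts load these gems with ordinary require statements. Below is a minimal sketch of that list; the require names are assumed to match each gem's documentation, the exact list in the repo may differ slightly, and 'timeout' is part of the Ruby standard library rather than a separate gem.

```ruby
# Illustrative require list for the dependencies named above.
require 'rubygems'
require 'selenium-webdriver'       # browser automation
require 'nokogiri'                 # HTML parsing
require 'open-uri'                 # simple page fetching
require 'open_uri_redirections'    # lets open-uri follow redirects between http and https
require 'json'                     # parsing API responses
require 'google_plus'              # Google+ API wrapper gem
require 'timeout'                  # standard library, used for manual timeouts
require 'algorithms'               # extra data structures (queues, heaps, etc.)
```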

Explanation of “testClass1.rb”:

Each test is in an individual class. The rationale behind this is so Selenium drivers do not need to be re-declared for each test (Selenium is a browser automation framework that is used in the majority of the tests). There is no clean way to pass around drivers that are already initialized unless you use classes, and because driver initialization takes a long time, classes are needed to speed up the process.

Because each test is contained within a class, the tests have to share some common functions. For instance, the “write to a file”, initialization, “write to debug file”, and “get the average score of all pages” functions are all the same, because they use the same logic. The way this is tackled is with inheritance: if class B inherits from class A, then class B contains all of class A’s functions and variables, and you can build your own on top of the pre-existing contents.

In this project, each test class inherits from a common test class and then defines its own function called “performTest” (on top of the inherited functions) that actually runs the test. With this structure you can loop over the classes and call “performTest” on each one, since it is always defined. The beauty of this is that “performTest” does a different thing in each test context, while the inherited functions do the same thing in each test context. There are enough similarities to run the test classes in the same way and reuse logic, but enough differences that they can tackle different things.
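
As a rough illustration of that structure (the class and helper names here are simplified stand-ins, not the repo's actual definitions):

```ruby
# A common parent class holds the shared driver and the shared helpers...
class BaseTest
  def initialize(driver)
    @driver   = driver        # Selenium driver created once and reused by every test
    @testName = 'Unnamed Test'
    @coeffMod = 1.0           # coefficient modifier used to weight the score
  end

  # Shared helper: every test writes its weighted score the same way.
  def writeScore(score, file)
    file.puts "#{@testName},#{score * @coeffMod}"
  end
end

# ...and each test only has to define its own performTest.
class BrandedSearchTest < BaseTest
  def performTest
    @testName = 'No. 1 Rank: Branded Search'
    # test-specific Selenium/Nokogiri work happens here
    1
  end
end
```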

Explanation of “driver1.rb”:

The driver works by first setting everything up (getting all home page links, the home page url, @@bizName, the zip code, and the file names that the program will eventually write to, and initializing the results and debug files correctly), then initializing all of the test classes in a list, and finally looping through that list and calling “performTest” on each class. The loop also includes a “case” statement that allows values to be passed around amongst the tests. For example, if test 30 needs a value that test 26 has, the program checks whether the loop iteration is 26. If it is, it sets the value. If the loop iteration is 30, it then uses the previously set value. With this method tests can pass around values that they normally could not. If one were to add to the DXA automation, all they would need to do is add the test to the list and adjust the arguments in the “case” statement so that the values can be passed around correctly.
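
In outline, that loop looks something like the sketch below. The indices, class names, and method signatures are illustrative (they reuse the hypothetical classes from the previous sketch); the real driver1.rb decides which iterations exchange which values.

```ruby
require 'selenium-webdriver'

driver = Selenium::WebDriver.for :firefox
tests  = [BrandedSearchTest.new(driver), RedirectTest.new(driver)] # ...one entry per test class

saved_value = nil
tests.each_with_index do |test, i|
  case i
  when 26
    saved_value = test.performTest        # an earlier test stores a value...
  when 30
    test.performTest(saved_value)         # ...that a later test consumes
  else
    test.performTest
  end
end
driver.quit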

Explanation of Each Test:

For each test a unique test name, @testName, is defined. This is used to write the results to the file using testClass1.rb’s “writeScore” function, and to log debug errors using testClass1.rb’s “logDebug” function. In each test’s “performTest” (where the work is being done), the debug file is initialized using @testName, so information can then be written to that file in the correct format. Also, it is implied that after every test the score is written to the file, so no explanation will mention that step. Finally, each final score is multiplied by a coefficient modifier, called @coeffMod in each test, before it is written to the file. This gives each test an adjustable weight, so that the more important points count for more in the overall score.

For ease of explanation/understanding, a few terms will be used throughout the explanation:

  1. Globally defined: Available to all test classes and classes that are derived from test classes.
  2. @@url: A globally defined string that holds the home page url that is inputted in the beginning of driver1.rb.
  3. @@allLinks: A globally defined list of all internal pages linked from the home page; it is populated at the beginning of driver1.rb.
  4. @@bizName: A globally defined string that holds the business name that is inputted at the beginning of driver1.rb.
  5. @@fileName: A globally defined string that holds the name of the result file. This is hardcoded, and can be changed.
  6. @@logFile: A globally defined string that holds the name of the debug log file. This is hardcoded, and can be changed.
  7. @@zipCode: A globally defined integer that holds the zip code of the business. This is defined in the beginning of driver1.rb.

No. 1 Rank: Branded Search (): This test loads a Google search page, inputs @@bizName into the search bar, and loads the results. It tests whether the first result belongs to the business by comparing urls. If the first result belongs to the business, the test passes.

  1. Load Google.
  2. Input @@bizName into the search bar, and load results.
  3. Attempt to find the first element. If not found, test is failed. If found continue.
  4. Find the url of the first listing. If not found, test is failed. If found continue.
  5. Compare the found url in the listing to @@url. If there is a match, then pass the test. If there is not, fail the test.
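
A hedged Selenium sketch of those steps follows. The CSS selector for the first organic result is an assumption (Google's markup changes often, and the repo's selector may differ), and the placeholder strings stand in for @@bizName and @@url.

```ruby
require 'selenium-webdriver'

biz_name = 'Cool Houses Apartments'   # stands in for @@bizName
home_url = 'www.coolhouses.com'       # stands in for @@url

driver = Selenium::WebDriver.for :firefox
driver.get 'https://www.google.com'

box = driver.find_element(name: 'q')  # Google search box
box.send_keys biz_name
box.submit

# Grab the first organic result and compare its url against the business url.
first_result = driver.find_elements(css: '#search a').first
passed = !first_result.nil? && first_result.attribute('href').to_s.include?(home_url)
driver.quit
```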

301 Redirect (): This test loads @@url. It then sleeps the script for enough time for a 301 redirect to take place. It then compares the current driver url (the one which might have had a redirect) to @@url (the one loaded).

  1. Load @@url.
  2. Sleep script for enough time for a 301 redirect to take place.
  3. Compare the current driver url to the one loaded, @@url. If there is a match, fail the test. If there isn’t a match, pass the test (a 301 took place).
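
For example, a minimal version of that check might look like this (the sleep length is an arbitrary assumption, and the placeholder url stands in for @@url):

```ruby
require 'selenium-webdriver'

url = 'http://coolhouses.com/'        # stands in for @@url
driver = Selenium::WebDriver.for :firefox
driver.get url
sleep 5                               # give any 301 redirect time to complete

redirected = (driver.current_url != url)  # true => test passes, a redirect took place
driver.quit
```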

Content Per Page ():

This test loops through each of the pages in @@allLinks. It rips the text from each element on the page. This takes a while, so if the text rip takes too long, then the page score is set to 0. If the amount of distinct, valid text (text hasn’t been encountered before and the text is non-empty and has no tags inside of it) is greater than 250 words then the page score is set to 1. If there is not enough, set the page score as 0. Once the loop is done average the score of all the pages, and set the total score as the average.

  1. Loop through each page in @@allLinks:
    • Set a manual timeout to occur.
    • Loop through all elements:
      • Append valid element text (non-empty, not already inputted, and without html tags) to a “bulk” text string.
      • Test to see if the bulk string has over 250 words. If it has enough, append the score list with a 1 (page passed) and go to the next page.
    • If the timeout occurs or the element loop exits and the page was not passed, append the score list with a 0 (page failed) because ripping the text took too long or there was not enough text on the page.
    • Clear the bulk string, and reset Booleans and hashes.
  2. Once each page is read, set the score as the average of the score list.
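
A simplified sketch of that loop is below. It grabs the whole body text instead of walking every element, the 60-second timeout is an assumption, and the placeholder links stand in for @@allLinks.

```ruby
require 'selenium-webdriver'
require 'timeout'

all_links = ['http://coolhouses.com/', 'http://coolhouses.com/amenities/'] # stands in for @@allLinks
driver = Selenium::WebDriver.for :firefox
scores = []

all_links.each do |link|
  begin
    Timeout.timeout(60) do                            # manual per-page timeout
      driver.get link
      words = driver.find_element(tag_name: 'body').text.split
      scores << (words.size > 250 ? 1 : 0)            # 250-word threshold from the steps above
    end
  rescue Timeout::Error
    scores << 0                                       # ripping the text took too long
  end
end
driver.quit

score = scores.sum.to_f / scores.size                 # average of all page scores
```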

Linking Strategy ():

For this test, a tool called URL analyzer is used on a site called Web SEO Analytics. It is used to measure the number of links on your whole site (both valid and broken). The test involves opening the web page, inputting the url, and setting an explicit wait for the results to appear. If they do not appear it fails the test. If they do appear, then grab the results and see if there are enough links.

  1. Load Web SEO Analytics link structure analyzer.
  2. Find the input section and submit button. If any of these can’t be found, the test is failed.
  3. Input @@url and generate results.
  4. Wait for results to become visible. If they do not become visible then fail the test.
  5. Rip text from the results bar, and find the valid link number.
  6. Test to see if that number is bigger than 10. If so, pass the test. If not, fail the test.

Title Tag Strategy:

This test loops through each of the pages in @@allLinks and does the following. It resets the encoding of the page if it is incorrect. The keywords are then found, formatted, and moved into a list. The title tag is then found, and formatted. The test checks to see if any keywords, or @@bizName is in the title. If either of these is present, the page score is set to 0. If they are not, the page score is set to 1. Once all pages are looped through, the score is set as the average of all the pages.

  1. Loop through each page in @@allLinks:
    • Check if the page encoding is correctly set. If not, set it correctly.
    • Open the page source of the current link using Nokogiri.
    • Find the meta tag with keywords in the page source.
    • If the keyword meta tag exists, put the keywords from the meta tag into a list.
    • Find the title tag.
    • If the title tag is missing or empty, then the page fails.
    • Loop through the keywords:
      • If a keyword is contained within the title, fail the page and exit the keyword loop.
    • Test to see if @@bizName is included in the title; if so, the page fails.
    • If the page was not failed up to this point, append the score list with a 1 (no keywords or @@bizName were found within the title).
  2. Once each page is read, set the score as the average of all the page scores.
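
A Nokogiri sketch of the per-page check is below; the placeholder url and business name stand in for one entry of @@allLinks and for @@bizName, and the XPath/CSS selectors are assumptions rather than the repo's exact code.

```ruby
require 'nokogiri'
require 'open-uri'

page_url = 'http://coolhouses.com/'    # one entry of @@allLinks
biz_name = 'cool houses'               # @@bizName, downcased for comparison

doc      = Nokogiri::HTML(URI.open(page_url))
meta     = doc.at_xpath("//meta[@name='keywords']")
keywords = meta ? meta['content'].to_s.downcase.split(',').map(&:strip) : []
title    = doc.at_css('title') ? doc.at_css('title').text.downcase : ''

# Page fails (0) if the title is missing/empty or contains a keyword or the business name.
page_score =
  if title.empty? || keywords.any? { |k| title.include?(k) } || title.include?(biz_name)
    0
  else
    1
  end
```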

URL Structure Strategy:

This test loops through each of the pages in @@allLinks and tests each url on various criteria. It tests the length of the url and marks it down for every extra character the url has over 30 characters. It checks for the number of “bad” characters in the url and marks it down for every one it finds. Finally, it finds, formats, and puts the keywords into a list, and tests to see if any keywords are in the url, marking down the score for each keyword found. It then sets the page score as the average of these three criteria. Once each page is looped through, the final score is set as the average of all page scores.

  1. Loop through @@allLinks:
    • Check if the page encoding is correctly set. If not, set it correctly.
    • Get the page source of the current link.
    • Test to see if the length of the url is over 30. For any url whose length is over 30, mark down the score for that page.
    • Loop through a list of bad characters:
      • Mark down the url for each bad character found.
    • Search for a meta tag containing keywords.
    • If there are keywords in the page source, format them and read them into a list.
    • Loop through the keywords (if they exist):
      • For each keyword found, mark down the score.
    • Append the score list with the average of all three scores (length, bad characters, and keyword scores).
  2. Set the test score as the average of the score list.

Home Page Header Tag 1:

This test checks the encoding of the home page source, and resets it if it is incorrect. It then gets the keywords from the page source and puts them into a list (if there are any). It gets the page’s h1, and formats it. If it does not exist, then the test fails. It then tests to see if any keywords or @@bizName is in the header. If there are, then the test is failed. If not, it is passed.

  1. Check if page encoding is correctly set. If not, set it correctly.
  2. Find keywords:
    • Search for a meta tag that contains keywords.
    • If a keyword meta tag exists, put the keywords into a list.
  3. Get the h1 from the home page source.
  4. Using the keywords and the h1, score the header:
    • Loop through the keywords:
      • If a keyword is found, return a 0 as the test score.
    • Test to see if @@bizName is in the header. If so, return a 0 as the test score.
    • If a score wasn’t already returned, return a 1 because no keywords or @@bizName were found in the header (header passed).

Home Page Header Tag 2:

This test is a continuation of the former test. This test gets the page’s h2, and formats it. If it does not exist, then the test fails. It then tests to see if any keywords or the @@bizName is in the header. If there are, then the test is failed. If not, it is passed.

  1. Get h2 from the home page source.
  2. Using the keywords and the h2, score the header:
    • Loop through the keywords:
      • If a keyword is found, return a 0 as the test score.
    • Test to see if @@bizName is in the header. If so, return a 0 as the test score.
    • If a score wasn’t already returned, return a 1 because no keywords or @@bizName were found in the header (header passed).

Unique Header Tags:

This test loops through @@allLinks and tests whether the headers on each page are unique. First, it checks to make sure that the current page encoding is correct, and resets it if it is not. It then finds and formats headers until there are no more to be found (i.e.: find all h1’s, then h2’s, until no more types of headers can be found). If there are no headers, the page score is set to zero. As it finds headers, it checks to make sure the current header has not already been used. If it has, it sets the page score to 0. If it makes it all the way through the loop, that means there were no duplicates, and the page score is set to 1. After all pages are looped through, the score is set as the average of all page scores.

  1. Loop through @@allLinks:
    • Get page source using Nokogiri.
    • Find headers until there are no more to be found (h1, h2,…,h5) using a “while” loop:
      • Test to see if hash (used to hold previously found headers) contains the current header. If so, return a failing grade for page and set the page score as a 0.
    • If failing grade was not returned, return a passing grade (all headers are distinct), and set the page score as a 1.
  2. Set the score as the average grade of all pages.

Image ALT Text: This test loops through @@allLinks and loops through each image that is on the page. For each image that has a non-empty alt text field it increments a value. After all of the page’s images are looped through, a ratio is obtained by dividing the incremented number by the total number of images. If the ratio is above a certain threshold, the page score is set to 1. If it is not, it is set to 0. Once all pages are looped through, the score is set as the average of all pages.

  1. Loop through @@allLinks:
    • Load the link.
    • Loop through all images on the page:
      • Test to see if the alt text field of the image is non-empty. If so, increment a counter.
    • Divide the counter by the total number of page images to get a ratio. If the ratio is greater than 0.75, append the score list with a 1. If not, append the score list with a 0.
  2. Set the total score as the average of the score list.
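
For a single page, the ratio check could look roughly like the sketch below (the 0.75 threshold is the one quoted in the steps above; the placeholder url stands in for one entry of @@allLinks):

```ruby
require 'selenium-webdriver'

driver = Selenium::WebDriver.for :firefox
driver.get 'http://coolhouses.com/'            # one entry of @@allLinks

images   = driver.find_elements(tag_name: 'img')
with_alt = images.count { |img| !img.attribute('alt').to_s.strip.empty? }
ratio    = images.empty? ? 0.0 : with_alt.to_f / images.size

page_score = ratio > 0.75 ? 1 : 0              # appended to the score list
driver.quit
```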

Google+ Owner Verified: For this test the program performs a get request on a “person” (in this case a business) with the Google+ ID as a parameter, and tests to see if the person is verified given the hash that is returned. If they are, the test passes. If they are not, the test fails.

  1. Initialize ‘google_plus’ requirements (object initialization, reading and setting of the API key).
  2. Perform a get request using the Google+ ID to retrieve the person hash.
  3. Test if the person is verified. If so, pass the test. If not, fail the test.

Google+ Link to Website: *This is a continuation of the former test. For this test the program attempts to find the citation portion of the Google+ page. The program finds the container which holds the links to the home page. If the container does not exist, the test fails. If it does, the program loops through all the urls found within the container. If there is a match between @@url and any of the found urls, the test passes. If not, the test fails.

  1. Load the actual Google+ page.
  2. Find the page element where the listings are (phone, ZIP, website).
  3. Test to see if the link text is a shortened version of @@url. If so, pass the test. If not, fail the test.

Meta Location Data: This test checks whether a pattern that is suggestive of the presence of meta location data is being used. If the pattern is found, the test passes. If not, the test fails.

  1. Check the page source to see if the pattern ‘itemtype="https://schema.org/"’ is present. If it is, the test passes. If not, the test fails.
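
That check is essentially a substring search over the page source; a minimal sketch is below (the exact quoting of the pattern in the repo may differ, and the placeholder url stands in for @@url):

```ruby
require 'open-uri'

source = URI.open('http://coolhouses.com/').read          # @@url's page source
passes = source.include?('itemtype="https://schema.org/') # pattern suggesting meta location data
```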

Google+ Images/Video: *This is a continuation of the former test. This test finds the Google+ tab that leads to the photo page. It then attempts to find the message container that notifies the user if the current Google+ page has no photos. If it can’t find the container, the test passes. If it can find it, it tests to see whether the container has the words “no photos”. If it does, the test fails. If it doesn’t, the test passes.

  1. Click on the photos/video tab of the current Google+ page.
  2. Test to see if the tag reading “There are no photos for this place yet. Be the first to upload your photos here” is present. If so, fail the test. If not, pass the test.

Google+ Consistent Citations: This test loads the MOZ Local tool. It then attempts to locate the input elements on the loaded page, and input @@bizName and @@zipCode into the search fields. If these elements cannot be found, the test fails. The test then generates the results and explicitly waits for the results container to appear. If it doesn’t appear, the test fails. If it does appear, the test searches the results based off of @@bizName and whether or not the business is verified. If there is more than one business with the same name and verification status, it refines the results based off of the zip code given. If at that point there is more than one result, the program asks the user to pick the most likely listing. The program then attempts to load the results of the listing that is most likely to belong to the business. If the page loads incorrectly, the test fails. If it loads correctly, the test finds the main citation bar and rips the numbers from it. If that number is greater than a certain threshold, the test passes. If not, the test fails. It is also important to note that for this test, there are two coefficients passed in. This is because the Google+ score is actually the main citation score, for accuracy reasons. More than one coefficient allows a developer in the future to easily incorporate both the Google+ and main citation scores into the results.

  1. Get the MOZ Local page (has citation info).
  2. Initialize ‘google_plus’ requirements (object initialization, reading and setting of the API key). This will be used to distinguish between verified and non-verified listings.
  3. Load results from MOZ Local:
    1. Find both input fields and send @@bizName and the ZIP code to each one. If these two fields cannot be located, the test is failed.
    2. Submit the fields and explicitly wait for the search results to appear. If they don’t appear within the time allotted, the test is failed.
  4. Get a list of results with matching names:
    1. Using the previously used Google+ verified flag, get all listings whose verified value matches the Google+ verified value.
    2. If there are no listings found, return a :NO_RESULTS flag and fail the test. If not, return the list of results.
  5. Search the results for @@bizName:
    1. Loop through each result:
      1. If the names match, append the result to a refined result list.
    2. If the refined results list is empty, return a :NO_MATCH flag and fail the test.
    3. If the list has a single element in it, return the result and a :FOUND flag to signal that a single result was isolated, and continue the test. If there is more than one result, return the list and a :SEARCH_AGAIN flag to signal that the results need to be refined further.
  6. If another search is needed, search the returned list by zip code:
    1. Since @@bizName is already matched, attempt to match by @@zipCode. If there is a match, assume that result is the correct listing and return it.
    2. If there was still no match, return a :NO_RESULTS flag and fail the test.
  7. Load the citation results with the found listing (if the program is at this point it has isolated one listing):
    1. Explicitly wait until the total citation result bar is visible. If it does not become visible, fail the test.
    2. Return the number read.
  8. If the number is over 75, pass the test. If it is not, fail the test.

Mobile Site Design: This test loads a Google mobile compatibility site with @@url as a parameter (this way the results are generated once the page is fully loaded). The test then explicitly waits for the results to appear. If they do not appear, the test fails. If they do appear, the results box is found. If the results box cannot be found, the test fails. If it can be found, the result box text is grabbed, and the program tests whether the text has the string “not mobile-friendly” in it. If it does, the test fails. If it doesn’t, the test passes.

  1. Load the Google Mobile Friendly Test page (@@url is included in the url so no input is needed).
  2. Set an explicit wait for the result bar to appear. If it does not appear, the test is failed.
  3. When the results bar appears, search the result bar for the text “not mobile friendly”. If it has that text, fail the test. If it doesn’t, pass the test.

Designed to Engage Traffic: This test involves loading the home page and asking the user whether the site is designed to engage traffic. If the user says that it is, the test passes. If not, the test fails.

  1. Ask the user whether the home page, @@url, is designed to engage traffic, in a loop (in case of invalid input):
    1. If they answer “yes”, the test is passed. If they answer “no”, the test is failed. They can also enter a number from 0 to 10 to describe its ability to engage traffic. If the number is bigger than 10 it will be set to 10. The score is then set based off that number or whether they inputted “yes” or “no”.

Ad Copy Specific to Market: This test uses the page source of the home page to see if AdWords (or any similar scripts) are being used. If any of these strings are present in the page’s scripts, the test is passed. If not, the test is failed.

  1. Using Nokogiri, find all of the scripts in @@url’s page source.
  2. Test to see if the strings “googlesyndication”, “doubleclick.net”, or “bat.js” are in any of the scripts. If so, pass the test.
  3. If none of these strings are found, fail the test.
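
A Nokogiri sketch of that scan, using the marker strings listed in the steps (the placeholder url stands in for @@url, and checking both the script src and its inline content is an assumption):

```ruby
require 'nokogiri'
require 'open-uri'

doc     = Nokogiri::HTML(URI.open('http://coolhouses.com/'))  # @@url
markers = %w[googlesyndication doubleclick.net bat.js]

passed = doc.css('script').any? do |script|
  haystack = "#{script['src']} #{script.content}"             # check both src and inline code
  markers.any? { |m| haystack.include?(m) }
end
```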

Competitive Position: This test starts with a search for a business listing on a Google SERP. If it finds the listing (by comparing urls) in either the top ad container or the first three listings of the right-hand-side ad container, the test passes. If not, the test continues. The program then asks the user to input terms to generate ads for the business. The program searches the ad containers of each newly generated SERP. It does this until an ad is generated in a competitive position of the ad containers, or the user auto passes or fails the test. Once again, if it finds it, the test is passed. If the user auto passes/fails the test, the test passes/fails respectively.

  1. Get the search results for the business listing:
    1. Load the Google search page.
    2. Input @@bizName into the search bar and load the results.
    3. Get the text of the top and right-hand-side ad containers:
      1. If the top ad container is found, set a variable to it. If not found, set the variable to a :NOT_FOUND flag.
      2. If the rhs (right-hand-side) ad container is found, set a variable to it. If not found, set the variable to a :NOT_FOUND flag.
      3. Return both the top and rhs ad container variables.
    4. Search the top ads for the listing:
      1. Loop through the first three ads (considered the competitive range) and see if the listed url matches @@url. If so, pass the test.
      2. If no listings are matched, continue with rhs ad testing.
    5. Search the rhs ads for the listing:
      1. Loop through the first three ads (considered the competitive range) and see if the listed url matches @@url. If so, pass the test.
      2. If no listings are matched, continue with user-inputted tests.
  2. If the above search was not successful, move to user-entered testing:
    1. Ask the user for a search term to generate ads for the business, in a loop:
      1. If the user enters “auto” followed by “pass” or “fail”, return and pass/fail the test.
      2. Load the Google search page.
      3. Input the search terms into the bar and load the results.
      4. Get the text of the top and right-hand-side ad containers:
        1. If the top ad container is found, set a variable to it. If not found, set the variable to a :NOT_FOUND flag.
        2. If the rhs (right-hand-side) ad container is found, set a variable to it. If not found, set the variable to a :NOT_FOUND flag.
        3. Return both the top and rhs ad container variables.
      5. Search the top ads for the listing:
        1. Loop through the first three ads (considered the competitive range) and see if the listed url matches @@url. If so, pass the test.
        2. If no listings are matched, continue with rhs ad testing.
      6. Search the rhs ads for the listing:
        1. Loop through the first three ads (considered the competitive range) and see if the listed url matches @@url. If so, pass the test.

*This loop keeps going until ads are found or the user auto passes or fails the test.

Landing Page: Clear CTA’s: This test consists of asking the user whether @@url has clear CTA’s. If the user answers “yes”, the test passes. If the user answers “no”, the test fails. Additionally, the user may enter a number describing how clear the CTA’s are, and the score is set to that number (if the entered number is over 10, it is set to 10).

  1. Ask the user whether the home page, @@url, has clear CTA’s, in a loop (in case of invalid input):
    1. If they answer “yes”, the score is set to 10. If they answer “no”, the score is set to 0. They can also enter a number from 0 to 10 to describe the clarity of its CTA’s. If the number is bigger than 10 it will be set to 10. The score is then set based off that number or whether they inputted “yes” or “no”.

Ads on Multiple Search Engines: *This test reads a flag passed from the “Ad Copy Specific to Market” test. If the flag is true, it means that AdSense was found and that there are ads on multiple search engines (the test passes). If the flag is false, the test fails.

  1. Test to see whether the AdSense flag passed in is true. If it is, the test passes. If not, the test fails.

Google+: This test gets the source of a Reputation Manager API call (the key is coded into the loaded url) and rips the text from the page. It then converts the text to a JSON object and retrieves the Google+ score from that JSON object.

  1. Open the page source of a Reputation Manager API call (a url with a Reputation Manager ID entered) by inputting the Reputation Manager ID (read from a file) into the url before getting the page source.
  2. Rip and parse the text of the page and convert it into a hash.
  3. Set the score as the hash’s Google+ score divided by a hundred (to normalize it).

ApartmentRatings: *This is a continuation of the former test. This test uses the same object as the former test, but retrieves different info from it.

  1. Set the score as the hash’s ApartmentRatings score divided by a hundred (to normalize it).

Yelp: *This is a continuation of the former test. This test uses the same object as the former test, but retrieves different info from it.

  1. Set the score as the hash’s Yelp score divided by a hundred (to normalize it).

Website Load Speed: This test involves making a cURL call to the Google Page Speed API (with @@url dynamically entered as a url parameter in the cURL call), redirecting the output to a file, reading the file into a local variable, and then deleting the file. The local variable is then converted to a JSON object. Once converted, the score is retrieved from the object and divided by 100 to normalize it. The normalized score is used as the test score.

  1. Make a system cURL call that accesses the Google Page Speed API. This call has @@url coded into it, so the call emulates the score you get from the actual developers console. Write the output to a file (it will be in JSON format).
  2. Read the contents of the written-to file and convert it to JSON format.
  3. Access the page speed score from the JSON object and set the score as that number divided by a hundred (to normalize it).
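
A hedged sketch of that call is below. The API version, endpoint, and JSON path shown here are assumptions (the script was written against an earlier PageSpeed API whose score was a 0-100 number, hence the divide-by-100 normalization), and the placeholder url stands in for @@url.

```ruby
require 'json'

api_key = File.read('keys/gPageSpdAPIKey.txt').strip
url     = 'http://coolhouses.com/'                       # @@url

# Write the API response to a temporary file, as the test description says.
system("curl -s 'https://www.googleapis.com/pagespeedonline/v2/runPagespeed?url=#{url}&key=#{api_key}' > pagespeed.json")

result = JSON.parse(File.read('pagespeed.json'))
File.delete('pagespeed.json')

raw_score = result['ruleGroups']['SPEED']['score']       # 0-100 in the assumed v2 response
score     = raw_score / 100.0                            # normalized to 0-1
```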

Splash Page: This test starts with the declaration of a table that is designed to find splash elements (a 3D array that contains the most common terms of the most common html attributes of the most common html tags of typical splash elements). The driver then loads @@url, and the program loops through all page elements. If there is a match among tag name, attribute, and term, the element is appended to a potential splash element list. Once that list is returned, a test is performed to see whether it has any potential splash elements in it. If it does, the user is asked whether @@url is a splash page. If the user enters “yes”, the test passes. If not, the test fails. If the list has no elements in it, the test passes.

1. Create an html tag, attribute, and term table, which is designed to find splash elements.
2. Load @@url.
3. Scan page for splash elements:
  1. Loop through the tag types:
    1. Loop through the tag’s attributes:
      1. Loop through the terms:
        1. If the current element matches with the tag name, attribute and term fields than append it to a list.
  2. Return that list.
4. If the list contains more than one element, ask the user whether it is a splash page or not, in a loop:
  1. If the user answers “yes”, the test passes. If not, the test fails.
5. If the list is empty then the test passes.

Consistent Graphic Elements: This test consists of asking the user whether @@url has consistent graphic elements. If the user answers “yes”, the test passes. If the user answers “no”, the test fails. Additionally, the user may enter a number to describe the consistency of the graphic elements on the page. If the number is greater than 10, it will be set to 10. The score is then set to the entered number.

  1. Load @@url.
  2. Ask the user whether the current page has consistent graphic elements, in a loop (in case of invalid input):
    1. If they answer “yes”, the score is set to 10. If they answer “no”, the score is set to 0. They can also enter a number from 0 to 10 to describe the consistency of its graphic elements. If the number is bigger than 10 it will be set to 10. The score is then set based off that number or whether they inputted “yes” or “no”.

Navigation Location/Structure: This test starts with the loading of @@url and the declaration of a table that is designed to find navigation bars (a 3D array that contains the most common terms of the most common html attributes of the most common html tags of navigation bars in general). A primary search is then conducted, which loops through all page elements and searches for the physical navigation bar given the terms declared in the table. If there is a match between tag name, attribute, and term, the element is then tested to see whether it contains links. If it does, it is returned and assumed to be the navigation bar. If no match is found, or the found elements do not contain links, then a secondary search is performed. During this search, the program loops through page elements to try and locate a “button” inside of the navigation bar (if it exists). If a button is found (a tag name, attribute, and term match), the program appends a potential navigation bar array with the parent of the current button (likely the navigation bar). The list of potential navigation bars is then returned if non-empty. If empty, the test fails. If the primary search worked, a list of links is set to all of the links found within the navigation bar. If the secondary search returned a probable navigation bar list (the first search did not work), then the navigation bars are looped through and the main navigation bar is assumed to be the first one in the list that contains links. If none of them contain links, the test fails. If a navigation bar was isolated from the former loop (a navigation bar was found to contain links), then a list of links is set to all of the links found within that navigation bar. Finally, the list of links generated (by either the primary or secondary search) is tested to see whether it has more links than a certain threshold. If there are enough navigation bar links, the test passes. If not, the test fails.

  1. Load @@url.
  2. Create an html tag, attribute, and term table, which is designed to find navigation bar elements.
  3. Pass the table in to a function and perform a primary search to locate the navigation bar:
    1. Find and loop through all elements with a certain tag name:
      1. Loop through the term list:
        1. Loop through the attribute list:
          1. If there is a match with the current element’s tag name, attribute, and term, test to see if the element has links in it. If it does, return the element and assume it is the main navigation bar. If not, continue the search.
  4. If the navigation bar was not found, perform a secondary search for the buttons of the navigation bar, using terms that are likely to be in the text of navigation bar buttons:
    1. Find and loop through all elements with a certain tag name:
      1. Loop through the term list:
        1. Loop through the attribute list:
          1. If there is a match with the current element’s tag name, attribute, and term, test to see if the element’s parent has links in it. If it does, append the parent to a list of potential parents.
    2. Return the parent list. If the list is empty, the test fails.
  5. If the primary search was successful, get the links from the navigation bar:
    1. Loop through each element of the navigation bar:
      1. If the element has a non-empty href attribute, append a url list with the href.
    2. Return the list.
  6. If you have a list of potential navigation bars from the secondary search, get the links from the first link-containing element:
    1. Loop through the potential navigation bars:
      1. If the current navigation bar has any links, return those links. If not, raise an error and fail the test.
  7. Test if the number of links grabbed from the navigation bar (retrieved from either the primary or secondary search) is greater than or equal to six. If it is, pass the test. If not, fail the test.

Engaging Images: This test loops through @@allLinks and loops through each image that is on the page. It then tests to see if the image dimensions are big enough to be considered engaging. If the image has acceptable dimensions, a value is incremented. It then divides that value by the total number of images (to get a ratio of good images to total images), and sets that ratio as the page score. Once all pages are looped through, the score is set as the average of all the ratios.

  1. Loop through @@allLinks:
    1. Loop through all images on the page:
      1. Test to see if the image has acceptable dimensions:
        1. Test if the height and width are acceptable or have the “auto” setting. If they do, increment a counter for the number of good images. If not, don’t increment the counter.
    2. Append the score list with the ratio of good images to total images on the page (counter divided by total image number).
  2. Set the score as the average of the score list (the average ratio of all pages).

Clear and Informative Headings: This test loops through each page in @@allLinks. For each link it checks to make sure that the encoding is correct, and resets it if it is not. In a loop, it grabs all the header text from each page (unique or non-unique) until no more headers can be found, then returns the bulk header text of that page and appends it to another bulk string (used for the header text of all @@allLinks pages). Once all the header text is grabbed, if the bulk text string for all @@allLinks pages is less than 40 words, the test is failed. The test then loads an online grammar checker named Grammarly and inputs all of the header text into the text input box, instructing the user to read the header text and consider its level of clarity and its ability to inform. If the input/submit elements cannot be located, the test fails. The user is then instructed to generate the results and consider the number of errors/plagiarism issues. Finally, the user is asked to grade both the clarity and the ability to inform of the header text. The user is asked to describe the text as “clear” or “unclear”. If the user answers “clear”, a temp score is set to 10. If the user answers “unclear”, a temp score is set to 0. Additionally, the user may enter a number to describe the level of clarity of the header text. If the number is over 10, the temp score is set to 10. The user is then asked to describe the text as “informative” or “not informative”. If the user answers “informative”, a temp score is set to 10. If the user answers “not informative”, a temp score is set to 0. Additionally, the user may enter a number to describe the text’s ability to inform. If the number is over 10, the temp score is set to 10. The final score is then set as the normalized average of the two temp scores (the average of the clarity score and the informative score, divided by 10).

  1. Loop through @@allLinks:
    1. Get the headers and the number of headers from the current page:
      1. Redefine the encoding if it isn’t “utf-8”.
      2. Get the page source using Nokogiri.
      3. In a loop, get the headers (h1, h2,…, h7) until there are no more to be gotten:
        1. Append a bulk text string with the header text (periods added at the end if they are not already present).
      4. Return the bulk string.
    2. Add the header text to a bulk string in the former context, and increment the number of headers.
  2. Test to see if the bulk header text has 40 words. If it doesn’t, auto fail the test.
  3. Get a grammar score from Grammarly, and user input:
    1. Load the Grammarly home page. If there is an error while loading the page, the test is failed.
    2. Locate the input box and input the bulk text string. If the box cannot be located, the test is failed.
    3. Tell the user to use the text and the grammar results to score this test. Instruct the user to click the submit button.
    4. Once the result loads, ask the user whether the headers are clear. If they answer “clear”, a temporary score is set to 10. If they answer “unclear”, a temporary score is set to 0. They can also enter a number from 0 to 10 to describe the clarity of the headers. If the number is bigger than 10 it will be set to 10. A temporary score is then set based off that number or whether they inputted “clear” or “unclear”.
    5. Ask the user whether the headers are informative. If they answer “informative”, a temporary score is set to 10. If they answer “not informative”, a temporary score is set to 0. They can also enter a number from 0 to 10 to describe how informative the headers are. If the number is bigger than 10 it will be set to 10. A temporary score is then set based off that number or whether they inputted “informative” or “not informative”.
    6. The score is then set to the average of those two temporary numbers.

SCAN Able Content: This test loads @@url and asks the user whether the page content is SCAN Able. If the user answers “yes”, the test passes. If the user answers “no”, the test fails. Additionally, the user can input a number to describe the level of SCAN Ability the page has. If the number is greater than 10, it is set to 10. The score is then set to that number.

  1. Ask the user whether the home page, @@url, has SCAN Able Content:
    1. If they answer “yes”, the score is set to 10. If they answer “no”, the score is set to 0. They can also enter a number from 0 to 10 to describe the page’s “SCANability”. If the number is bigger than 10 it will be set to 10. The score is then set based off that number or whether they inputted “yes” or “no”.

Content Readability: This test loops through @@allLinks and grabs the text from each element. It appends a bulk text string with the found text and tests to see if the bulk string has more than 500 words (more than enough words to score content readability). If it has enough it exits the loop and continues. Upon exiting the loop, either the bulk text string contains at least 500 words, or all of the text from the website was grabbed (the only other reason for the loop to exit). The test then loads an online readability tool and attempts to input the text into the tool’s text area to be graded. If there is an error with loading the page, or with finding the input areas the test fails. Once the text is inputted, the results auto generate. An explicit wait for the results occurs. If the results container is not found, the test is failed. Once the results appear, the text is grabbed and the readability score is returned. If the number is between an upper and a lower bound of readability (according to the Flesch-Kincaid readability standards), the test passes. If not, the test fails.

1. Score all text for readability:
  1. Loop through @@allLinks:
    1. Load page.
    2. Loop through all page elements:
      1. Append a bulk text string with each element’s text (unless it is blank, or contains html tags).
      2. Once more than 500 words are found, the element loop and the page loop exit.
    3. Load a text readability website:
      1. Input text into the text box and let the utility automatically score the text. Explicitly wait for results.
      2. Return the results once found. If not found, the test is failed.
2. If the returned readability score is between 50 and 70 then pass the test. If not, then fail the test.

Elements of Flash: This test loops through @@allLinks and tests to see if the page source contains the flash extension “.swf”. If it does, the whole test fails and a true flash flag is returned for use by the next test. If all pages are looped through and the test has not failed, the test passes (no pages have flash), and a false flash flag is returned for use by the next test.

  1. Loop through @@allLinks:
    1. Redefine the page encoding if it isn’t “utf-8”.
    2. Test to see if the page source contains the string “.swf”, which is a flash extension. If it does, fail the test.
  2. If none of the links have flash, pass the test.
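
A compact sketch of that scan (the placeholder links stand in for @@allLinks):

```ruby
require 'open-uri'

all_links = ['http://coolhouses.com/', 'http://coolhouses.com/amenities/'] # @@allLinks
flash_found = all_links.any? { |link| URI.open(link).read.include?('.swf') }

# flash_found doubles as the flag handed to the "Automatic Audio/Video" test below;
# the overall test passes when it is false.
```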

Automatic Audio/Video: *This test uses a Boolean passed from the former test to indicate whether flash was found. If the flash flag was true, the test alerts the user that the presence of auto-play is highly likely. It then loops through @@allLinks and asks the user if there is auto-play on the current page. If the user answers “yes”, the test fails. If the user answers “no”, the loop continues. The user also has the option to auto pass/fail the test on every loop iteration. If the user auto passes/fails the test, the test passes/fails respectively. If the loop exits and the test was not failed, the test passes.

  1. If flash was found by the former test, alert the user that this site likely has auto-play.
  2. Loop through @@allLinks:
    1. Ask the user whether the current page has auto-play:
      1. The user is asked whether or not there is auto-play on this page. They are also given the choice to auto pass or fail the test. If they enter “auto pass”, the test is passed. If they enter “auto fail”, the test is failed. If the user enters “yes” regarding the auto-play question, the test is failed. If the user enters “no”, the program continues.
  3. If the test was not auto passed/failed, then the user did not find any auto-play features on any link. The test is passed.

Calls to Action: Prominent: This test loads @@url and asks the user whether the page’s CTA’s are prominent. If the user answers “yes”, the test passes. If the user answers “no”, the test fails. Additionally, the user can input a number to describe the prominence of @@url’s CTA’s. If the number entered is greater than 10, it is set to 10. The score is then set to that number.

  1. Ask the user whether the home page, @@url, has prominent CTA’s:
    1. If they answer “yes”, the score is set to 10. If they answer “no”, the score is set to 0. They can also enter a number from 0 to 10 to describe the prominence of the page’s CTA’s. If the number is bigger than 10 it will be set to 10. The score is then set based off that number or whether they inputted “yes” or “no”.

Calls to Action on Every Page: This test loops through @@allLinks and asks the user if there are CTA’s on the current page. If the user answers “yes”, the test continues. If the user answers “no”, the test fails. Additionally, the user can auto pass/fail the test on every loop iteration. If the user auto passes/fails the test, the test passes/fails respectively. If the test is not failed or passed by the end of the loop, the test passes (all pages have a CTA).

  1. Loop through @@allLinks:
    1. Ask the user whether the current page has a CTA:
      1. The user is asked whether or not there is a CTA on the current page. They are also given the choice to auto pass or fail the test. If they enter “auto pass”, the test is passed. If they enter “auto fail”, the test is failed. If the user enters “yes”, the test continues. If the user enters “no”, the test fails.
  2. If the test was not auto passed/failed, then the user found CTA’s on every page. The test is passed.

Multiple Channels for Engagement: This test loads @@url and asks the user if the current page has multiple channels for engagement. If the user answers “yes”, the test passes. If the user answers “no”, the test fails. Additionally, the user can input a number to describe how effective the current page’s multiple channels of engagement are (if they exist at all). If the number entered is greater than 10, it is set to 10. The score is then set to that number.

  1. Ask the user whether the home page, @@url, has multiple channels for engagement:
    1. If they answer “yes”, the score is set to 10. If they answer “no”, the score is set to 0. They can also enter a number from 0 to 10 to describe the effectiveness of the multiple engagement channels (if any). If the number is bigger than 10 it will be set to 10. The score is then set based off that number or whether they inputted “yes” or “no”.

Online Leasing: This test loads @@url and attempts a primary search to find an “apply” link (an anchor tag with the words “apply” or “application” in the text). If one is found, it is used as the main application link (returned as a single member of a list, which will make sense later). If there is no “apply” link, the program tries a secondary search to find “apply” links amongst @@allLinks. Any that contain certain specified terms are appended to a list of potential application links. Regardless of whether the list of “apply” links was found in the primary or secondary search, the program loops through the list of “apply” links and finds the type of submit scheme each has. It finds the number of buttons, inputs, bad buttons, selects, and pdf’s on the page. If the page has at least 3 valid buttons and at least 3 valid input fields, it returns a :INPUT_SUBMIT flag and appends the score list with a 1. If there is at least one valid button and at least one submit, it returns a :SELECT_SUBMIT flag and appends the score list with a 0.5. If there is more than one button but no submits, it returns a :BAD_INPUT flag and appends the score list with a 0.5. If there is more than one pdf found, it returns a :PDF flag and appends the score list with a 0.5. If the page met none of these cases, it returns a :NONE flag and appends the score list with a 0. Once all “apply” pages are looped, the score is set as the minimum of the score list (the worst application portal the user could possibly encounter). A sketch of the scheme classification follows the steps below.

1. Load @@url.
2. Find an “apply” link on the page:
  1. Loop through all anchor tags on the page:
    1. If the anchor text contains “apply” or “application”, return a list with that link as the single member (this step will make sense further on).
  2. If there is no link found, return a :NOT_FOUND flag to signify that no application link was found.
3. If there is no link found, determine the most probable application link among @@allLinks:
  1. Loop through @@allLinks:
    1. Loop through a list of common terms that are associated with application links:
      1. If the link text contains any of the terms, append that link to a list of probable links.
  2. Return the list if it is non-empty. If not, return a :NOT_FOUND flag and fail the test.
4. Loop through each potential application link:
  1. Identify the input scheme that the application link has:
    1. Find buttons on the page:
      1. Loop through each tag name in the table:
        1. Loop through all found elements on the page with that tag name:
          1. Loop through all attribute types in the table:
            1. Loop through all terms in the table:
              1. If the element has a good-button format and the tag name, attribute, and term match up, append it to a list. Increment a count as well.
      2. Return the button count.
  2. Find input elements on the page:
    1. Loop through all inputs:
      1. Increment a count if the input is non-hidden.
    2. Return the input count.
  3. Find the PDF elements on the page:
    1. Loop through all page elements:
      1. Test to see if the current element has an href with “.pdf” in it. If it does, increment a count.
  4. Find all bad input buttons:
    1. Loop through each tag name in the table:
      1. Loop through all found elements on the page with that tag name:
        1. Loop through all attribute types in the table:
          1. Loop through all terms in the table:
            1. If there is a match with tag name, attribute, and term, and it has a bad-button format, see if its “type” attribute is non-blank. If it is, append it to a list and increment a count.
    2. Return the bad button count.
  5. If the good button or bad button number is at least one and the number of good inputs is at least three, return a :INPUT_SUBMIT flag to identify the type of input scheme. Append the score list with a 1.
  6. If the good button or bad button number is at least one and the select number is greater than one, return a :SELECT_SUBMIT flag to identify the type of input scheme. Append the score list with a 0.5.
  7. If the good button or bad button number is at least one and the select number is not greater than one, return a :BAD_INPUT flag to identify the type of input scheme. Append the score list with a 0.5.
  8. If the number of pdf’s on the page is greater than one, return a :PDF flag to identify the type of input scheme. Append the score list with a 0.5.
  9. Return a :NONE flag to identify that there was no type of input scheme. Append the score list with a 0.
5. Set the score as the minimum of the score list (the worst submit scheme a user will possibly encounter).
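A sketch of the flag/score classification from step 4, assuming the button, input, select, and pdf counts for a single “apply” page have already been gathered (the counting helpers themselves are omitted here):

```ruby
# Sketch of the input-scheme classification. Flag names mirror the steps above;
# everything else (method name, keyword arguments) is illustrative.
def classify_input_scheme(good_buttons:, bad_buttons:, inputs:, selects:, pdfs:)
  any_button = good_buttons >= 1 || bad_buttons >= 1

  if any_button && inputs >= 3
    [:INPUT_SUBMIT, 1.0]     # full online application form
  elsif any_button && selects > 1
    [:SELECT_SUBMIT, 0.5]    # select-driven form
  elsif any_button
    [:BAD_INPUT, 0.5]        # buttons present but no usable inputs
  elsif pdfs > 1
    [:PDF, 0.5]              # printable application only
  else
    [:NONE, 0.0]             # no application scheme at all
  end
end

# scores = apply_page_counts.map { |counts| classify_input_scheme(**counts).last }
# score  = scores.min   # the worst portal a prospect could land on
```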

Guest Card Requirements: This test loads @@url and attempts a primary search to find a “contact” link (an anchor tag with the word “contact” in the text). If one is found, it is used as the main “contact” link (returned as a single member of a list, which will make sense later). If there is no “contact” link, the program tries a secondary search to find “contact” links amongst @@allLinks. Any that contain certain specified terms are appended to a list of potential contact links. Regardless of whether the list of “contact” links was found in the primary or secondary search, the program loops through the list of “contact” links and finds the type of submit scheme each has. It finds the number of buttons, inputs, bad buttons, selects, and pdf’s on the page. If the page has at least 3 valid buttons and at least 3 valid input fields, it returns a :INPUT_SUBMIT flag and appends the score list with a 1. If there is at least one valid button and at least one submit, it returns a :SELECT_SUBMIT flag and appends the score list with a 0.5. If there is more than one button but no submits, it returns a :BAD_INPUT flag and appends the score list with a 0.5. If there is more than one pdf found, it returns a :PDF flag and appends the score list with a 0.5. If the page met none of these cases, it returns a :NONE flag and appends the score list with a 0. Once all “contact” pages are looped, the score is set as the minimum of the score list (the worst guest card portal a user could possibly encounter). A sketch of the button-table search follows the steps below.

1. Load @@url.
2. Find a “contact” link:
  1. Loop through all anchor tags on the page:
    1. If the anchor text has the term “contact”, append the potential “contact” link to a list.
    2. If no anchor tag is returned then return a :NOT_FOUND flag.
  2. Return the list of potential “contact” links.
3. Determine which contact links are most likely to be the actual link:
  1. Loop through the potential links:
    1. If the actual link text has “contact” in it then append it to another list.
  2. If the list is empty, return a :NOT_FOUND flag and fail the test.
  3. Return the list.
4. Declare button attribute tables.
5. Loop through the refined links:
  1. Test to see what type of submit scheme the page has using button attribute tables:
    1. Find buttons on the page:
      1. Loop through each tag name in the table:
        1. Loop through all found elements on the page with that tag name:
          1. Loop through all attribute types in the table:
            1. Loop through all terms in the table:
              1. If the element has a good-button format and the tag name, attribute, and term match up then append it to a list. Increment a count as well.
      2. Return the button count.
  2. Find input elements in the page:
    1. Loop through all inputs:
      1. Increment a count if the input is non-hidden.
    2. Return the input count.
  3. Find the PDF elements in the page:
    1. Loop through all page elements:
      1. Test to see if the current element has an href with “.pdf” in it. If it does, increment a count.
  4. Find all bad input buttons:
    1. Loop through each tag name in the table:
      1. Loop through all found elements on the page with that tag name:
        1. Loop through all attribute types in the table:
          1. Loop through all terms in the table:
            1. If there is a match with tag name, attribute, and term, and if it has a bad-button format then see if its “type” attribute is non-blank. If it is, then append it to a list and increment a count.
    2. Return the bad button count.
  5. If the good button or bad button number is at least one and the number of good inputs is at least three then return a :INPUT_SUBMIT flag to identify the type of input scheme. Score the page as a 1.
  6. If the good button or bad button number is at least one and the select number is greater than one, then return a :SELECT_SUBMIT flag to identify the type of input scheme. Score the page as a 0.5.
  7. If the good button or bad button number is at least one and the select number is not greater than one then return a :BAD_INPUT flag to identify the type of input scheme. Score the page as a 0.5.
  8. If the number of pdf’s on the page is greater than one return a :PDF flag to identify the type of input scheme. Score the page as a 0.5.
  9. Return a :NONE flag to identify that there was no type of input scheme. Score the page as a 0.
6. Once all pages are scored set the score as the minimum of all page scores (the worst the user will possibly encounter).
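The nested tag/attribute/term search used to find buttons (step 5) can be sketched roughly as below, assuming a Selenium driver. The table contents here are illustrative, and the good-button format check is omitted for brevity; the script defines its own tables:

```ruby
# Sketch of the attribute-table button search: for each candidate tag, check
# each attribute for each search term and collect matching elements.
BUTTON_TABLE = {
  tags:       %w[button input a],
  attributes: %w[value id class name],
  terms:      %w[submit send contact]
}

def count_good_buttons(driver, table = BUTTON_TABLE)
  found = []
  table[:tags].each do |tag|
    driver.find_elements(:tag_name, tag).each do |element|
      table[:attributes].each do |attr|
        value = element.attribute(attr).to_s.downcase
        table[:terms].each do |term|
          found << element if value.include?(term)
        end
      end
    end
  end
  found.uniq.length
end
```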

Online Payments: This test loads @@url and attempts to find a resident portal link on the home page. If none is found, it fails the test. Once that link is located, the test conducts a primary search on that link to see if there are enough input and submit elements to consider that page a resident login page. If there are, it returns a :PRIMARY_SEARCH flag, the current link (the resident portal), and a nil value for the secondary link, to signal that the resident portal was found in the primary search. If not, the test continues. The program then searches for a “login” link on the current page. If it does not find one, it returns a :NOT_FOUND flag, the current link (for future searches), and a nil value to signal that there was no resident portal found but there was an intermediate link searched (the test then fails). If it finds a link, it performs a secondary search on that link. This search consists of finding all input/submit elements and seeing if there are enough of each to consider the new page a resident portal. If there are, the program returns a :SECONDARY_SEARCH flag, the intermediate link, and the current link to signal that the resident portal was found on the secondary search. If there aren’t adequate elements, it returns a :NONE flag, the intermediate link (for future searches), and the current link (for future searches) to signal that there was no resident link found, but there were two pages in which the program searched for a login page. A sketch of the portal search follows the steps below.

1. Load @@url.
2. Attempt to find a “resident” link:
  1. Loop through all anchor tags:
    1. If the anchor text has “resident” in it, return the link.
  2. If there are no resident links found, return a :NOT_FOUND flag and fail the test.
3. Perform a primary search with the resident link:
  1. Search the page for the number of input fields:
    1. Loop through all inputs on the page:
      1. If the input is visible, append it to a list.
    2. Return the input list.
  2. Search the page for the number of submit buttons:
    1. Loop through all buttons, inputs, and anchor tags:
      1. Test to see if the text contains typical login terms and has an action associated with it.
    2. Return the button list.
  3. If there is more than one input and at least one login button, return a :PRIMARY_SEARCH flag, the current link, and a nil value for the secondary link. This signals that the login portal was found on the primary search, and that there are no intermediate pages along the link path.
  4. If there were not enough inputs or buttons, search the page for a “login” link:
    1. Loop through all anchor tags:
      1. If the anchor text has “login” in it, return the link.
    2. If no such link is found, return a :NOT_FOUND flag.
  5. If no link was found, search the page for a “portal” link:
    1. Loop through all anchor tags:
      1. If the anchor text has “portal” in it, return the link.
    2. If no such link is found, return a :NOT_FOUND flag.
  6. If no link was found, return a :NONE flag, the current link, and a nil value for the second link, to signal that there were no links found that were indicative of a resident portal, and fail the test.
  7. Go to a secondary search using either the “login” or “portal” link:
    1. Load the new link.
    2. Search the page for the number of input/submit elements:
      1. Loop through all inputs on the page:
        1. If the input is visible, append it to a list.
      2. Return the input list.
    3. Search the page for the number of submit buttons:
      1. Loop through all buttons, inputs, and anchor tags:
        1. Test to see if the element text contains typical login terms and has an action associated with it.
      2. Return the button list.
    4. If there is more than one input field and at least one button, use the current link as the secondary link. If not, return a nil value to signal that this link does not contain a resident portal.
  8. If the newly returned link is nil, return a :NONE flag (to signal that no resident link was found), the current link, and the new nil value as the secondary link. If the newly returned link is non-nil, return a :SECONDARY_SEARCH flag (to signal that the resident portal was found during the secondary search), the intermediate link, and the new link found in the secondary search.
4. If a :NONE flag was returned, fail the test. If any other flag was returned, prompt the user to answer whether the login portal (the driver’s current page) has payment functionality:
  1. If the user answers “yes”, pass the test. If the user answers “no”, fail the test.
5. Return the search flag, primary link, and secondary link for use in the later tests.
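A condensed sketch of the resident portal search, assuming a Selenium driver. The login terms, the thresholds, and the omission of the “has an action associated with it” check are simplifications, and all helper names are illustrative:

```ruby
# Sketch of the primary/secondary resident portal search. Returns a triple of
# [flag, primary_link, secondary_link] in the spirit of the steps above.
LOGIN_TERMS = ['login', 'log in', 'sign in'].freeze

def find_link_containing(driver, term)
  driver.find_elements(:tag_name, 'a').find do |a|
    a.text.to_s.downcase.include?(term)
  end
end

def looks_like_login_page?(driver)
  inputs  = driver.find_elements(:tag_name, 'input').select(&:displayed?)
  buttons = driver.find_elements(:css, 'a, button, input').select do |el|
    LOGIN_TERMS.any? { |term| el.text.to_s.downcase.include?(term) }
  end
  inputs.length > 1 && buttons.length >= 1
end

def find_resident_portal(driver, home_url)
  driver.navigate.to(home_url)
  resident = find_link_containing(driver, 'resident')
  return [:NOT_FOUND, nil, nil] unless resident

  portal_url = resident.attribute('href')
  driver.navigate.to(portal_url)
  return [:PRIMARY_SEARCH, portal_url, nil] if looks_like_login_page?(driver)

  # No login form on the resident page: look for an intermediate login/portal link.
  secondary = find_link_containing(driver, 'login') || find_link_containing(driver, 'portal')
  return [:NONE, portal_url, nil] unless secondary

  secondary_url = secondary.attribute('href')
  driver.navigate.to(secondary_url)
  return [:SECONDARY_SEARCH, portal_url, secondary_url] if looks_like_login_page?(driver)

  [:NONE, portal_url, secondary_url]
end
```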

Mobile Device Payments: *This test uses values passed from the former test. It uses the flags and links passed from the former test to determine whether the resident portal the site uses (if any) is mobile compatible. If no resident portal was found, the test fails. The test then uses the Google Mobile Compatibility tester (loading the utility with a dynamically entered url as a parameter to auto-generate the results) to see if the resident portal login page is mobile friendly. If the test page does not load correctly, it tries one more time. If the portal is mobile friendly and internal, the user is then asked whether the resident portal has online payment functionality (if the page is not internal, the test passes). If the user answers “yes”, the test passes. If not, the test fails. Additionally, if the portal is not mobile friendly, the test fails. Finally, if the Mobile Compatibility test page fails to load during the second attempt, the test fails. A sketch of the compatibility check follows the steps below.

1. Test to see if the search flag is :NONE. If it is, fail the test because there is no online portal.
2. Configure the link to auto-generate mobile compatibility results for the resident portal (input the resident portal url as a link parameter).
3. Test whether the resident portal is mobile compatible:
  1. Open the Google Mobile Compatibility test site with the url being tested as a parameter (once the page loads, the results will be generated).
  2. Set an implicit wait. If a timeout occurs, try one more time to reopen the page and get the results. If the second attempt does not work, fail the test.
  3. Once the results are visible, test to see if the results bar contains the string “not mobile-friendly”. If it does, fail the test. If it doesn’t, return the resident link.
4. If the resident link is internal (on the business’s site), ask the user whether their site has payment functionality:
  1. If the user answers “yes”, pass the test. If the user answers “no”, fail the test.
5. If the resident link is non-internal, pass the test.
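A rough sketch of the compatibility check, assuming a Selenium driver. The test-tool URL and the result-bar selector below are assumptions for illustration only; the live tool’s address and markup may differ from what the script actually scrapes:

```ruby
# Sketch of the mobile-compatibility check with one retry. The URL template and
# the '.result-bar' selector are assumed, not taken from the script.
MOBILE_TEST_URL = 'https://www.google.com/webmasters/tools/mobile-friendly/?url=%s'

def mobile_friendly?(driver, portal_url, attempts = 2)
  driver.manage.timeouts.implicit_wait = 30          # give the tool time to finish
  driver.navigate.to(format(MOBILE_TEST_URL, portal_url))
  results = driver.find_element(:css, '.result-bar') # assumed results selector
  !results.text.downcase.include?('not mobile-friendly')
rescue Selenium::WebDriver::Error::NoSuchElementError
  attempts > 1 ? mobile_friendly?(driver, portal_url, attempts - 1) : false
end
```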

Maintenance Requests: *The former test and this test were tackled in a similar way. Test to see if the search flag is :NONE. If it is not, the test continues: test to see if the resident portal was found on the secondary search. If it was, perform a secondary maintenance request search on the parent link. In this search, the program attempts to find any maintenance request links. If no link is found, the test continues. If a link is found, load that url and test to see if the page has at least two inputs and one login button. If it does, the test passes. If it doesn’t, the test continues. The test then determines which link has the resident portal. If the search flag is :PRIMARY_SEARCH, the resident portal is assumed to be the parent link. If it is not, it is assumed to be the child link. The program then loads the assumed resident portal link and asks the user if the portal has maintenance request functionality. If the user answers “yes”, the test passes. If the user answers “no”, the test fails. The last search for maintenance requests is a secondary search on @@url if the search flag was :NONE. In this last search, the program attempts to find any maintenance request links on @@url. If there is no link, the test fails. If there is a link, load that url and test to see if the page has at least two inputs and one login button. If it does, the test passes. If it doesn’t, the test fails. A sketch of the secondary search follows the steps below.

1. Test to see if the search flag is :NONE. If it is not, the test continues.
2. If the search flag is :SECONDARY_SEARCH, it means that there is an intermediate link between the home page and the resident portal. Search the page for maintenance links or an input scheme:
  1. Test to see if there is a link to any maintenance requests:
    1. Loop through all anchor tags on the page:
      1. If the anchor text has “maintenance request” in it, return the link.
    2. If no such link is found, return a nil value to signal that no link was found. Stop the secondary search.
  2. Load the found maintenance link.
  3. Get all inputs on the page:
    1. Loop through each input on the page:
      1. If the input is displayed, append it to a list.
    2. Return the list.
  4. Get all login buttons on the page:
    1. Loop through all anchor, button, and input elements on the page:
      1. Loop through each term of a login terms list:
        1. If the element’s text has the current term in it, and it has an action associated with it, append it to a list.
    2. Return the list.
  5. Test if the number of input fields is more than one and the number of buttons is more than zero. If so, return and pass the test. If not, return and continue to the primary search.
3. Determine which link has the resident portal.
4. Perform a primary search on that link:
  1. Load the resident portal link.
  2. Ask the user about maintenance requests:
    1. Ask the user if the resident portal has maintenance requests. If they answer “yes”, pass the test. If they answer “no”, fail the test.
5. If the search flag was :NONE, search for “maintenance request” links on the home page or the parent link. If the parent link exists, search there. If it doesn’t, search @@url:
  1. Load the @@url page.
  2. Test to see if there is a link to any maintenance requests:
    1. Loop through all anchor tags on the page:
      1. If the anchor text has “maintenance request” in it, return the link.
    2. If no such link is found, return a nil value to signal that no link was found. Stop the secondary search.
  3. Load the found maintenance link.
  4. Get all inputs on the page:
    1. Loop through each input on the page:
      1. If the input is displayed, append it to a list.
    2. Return the list.
  5. Get all login buttons on the page:
    1. Loop through all anchor, button, and input elements on the page:
      1. Loop through each term of a login terms list:
        1. If the element’s text has the current term in it, and it has an action associated with it, append it to a list.
    2. Return the list.
  6. Test if the number of input fields is more than one and the number of buttons is more than zero. If so, return and pass the test. If not, return and fail the test.
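A sketch of the secondary maintenance-request search (step 2), assuming a Selenium driver. The helper name, the login terms, and the “more than one input / at least one button” threshold mirror the steps above but are illustrative, and the “action associated with it” check is omitted:

```ruby
# Sketch of the secondary search: follow a "maintenance request" link from the
# intermediate page and check whether it lands on a login-style form.
MAINT_LOGIN_TERMS = %w[login submit sign].freeze

def maintenance_portal_reachable?(driver, parent_link)
  driver.navigate.to(parent_link)
  link = driver.find_elements(:tag_name, 'a').find do |a|
    a.text.to_s.downcase.include?('maintenance request')
  end
  return false unless link            # no link found: fall through to the user prompt

  driver.navigate.to(link.attribute('href'))
  inputs  = driver.find_elements(:tag_name, 'input').select(&:displayed?)
  buttons = driver.find_elements(:css, 'a, button, input').select do |el|
    MAINT_LOGIN_TERMS.any? { |term| el.text.to_s.downcase.include?(term) }
  end
  inputs.length > 1 && buttons.length >= 1
end
```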

Community Calendar: *The former test and this test were tackled in a similar way. Test to see if the search flag is :NONE. If it is not, the test continues: test to see if the resident portal was found on the secondary search. If it was, perform a secondary community calendar search on the parent link. In this search, the program attempts to find any “calendar” links. If no link is found, the test continues. If a link is found, load that url and test to see if the page has at least one table element (the tag type that calendars are composed of). If it does, the test passes. If it doesn’t, the test continues. The test then determines which link has the resident portal. If the search flag is :PRIMARY_SEARCH, the resident portal is assumed to be the parent link. If it is not, it is assumed to be the child link. The program then loads the assumed resident portal link and asks the user if the portal has a community calendar. If the user answers “yes”, the test passes. If the user answers “no”, the test fails. The last search for a community calendar is a secondary search on @@url if the search flag was :NONE. In this last search, the program attempts to find any “calendar” links on @@url. If there is no link, the test fails. If there is a link, load that url and test to see if the page has at least one table element. If it does, the test passes. If it doesn’t, the test fails. A sketch of the calendar check follows the steps below.

1. Test to see if the search flag is :NONE. If it is not, the test continues.
2. If the search flag is :SECONDARY_SEARCH, it means that there is an intermediate link between the home page and the resident portal. Search the page for a community calendar (a table element):
  1. Test to see if there are any “calendar” links:
    1. Loop through all anchor tags on the page:
      1. If the anchor text has “calendar” in it, return the link.
    2. If no such link is found, return a nil value to signal that no link was found. Stop the secondary search.
  2. Load the found “calendar” link.
  3. Test to see if the number of “table” elements on the page is at least one. If so, return and pass the test. If not, return and continue.
3. Determine which of the parent and child links has the resident portal.
4. Perform a primary search on that link:
  1. Load the resident portal link.
  2. Ask the user about a community calendar:
    1. Ask the user if the resident portal has a community calendar. If they answer “yes”, pass the test. If they answer “no”, fail the test.
5. If the search flag was :NONE, search for “calendar” links on the home page or the parent link. If the parent link exists, search there. If it doesn’t, search @@url:
  1. Load either the parent link or @@url.
  2. Test to see if there are any “calendar” links:
    1. Loop through all anchor tags on the page:
      1. If the anchor text has “calendar” in it, return the link.
    2. If no such link is found, return a nil value to signal that no link was found. Stop the home page search and fail the test.
  3. Load the found calendar link.
  4. Test to see if the current page has any table elements on it. If it has any, pass the test. If it doesn’t, fail the test.
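A sketch of the calendar link-and-table check, assuming a Selenium driver; the helper name is illustrative:

```ruby
# Sketch of the calendar check: follow a "calendar" link and look for at least
# one <table> element, since calendars on these sites render as tables.
def calendar_found?(driver, page_url)
  driver.navigate.to(page_url)
  link = driver.find_elements(:tag_name, 'a').find do |a|
    a.text.to_s.downcase.include?('calendar')
  end
  return false unless link

  driver.navigate.to(link.attribute('href'))
  driver.find_elements(:tag_name, 'table').any?
end
```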

Website Analytics: For this test, page encoding is checked and reset if needed, and all of the scripts in @@url’s page source are looped through. The program then tests to see if a common pattern (that is found in Google Analytics scripts) is present in any of the scripts on the page. If one is found, the test passes (the page has a Google Analytics script). If none are found, the test fails (the page does not have a Google Analytics script).

1. Check if page encoding is correctly set. If not, set it correctly.
2. Loop through the scripts in the page source of the home page @@url:
  1. If a script contains a pattern associated with a Google Analytics script then pass the test.
3. If that pattern is not found in any of the scripts then fail the test.
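A sketch of the analytics check using Nokogiri; the fetch method and the analytics pattern below are illustrative stand-ins for whatever the script actually matches on:

```ruby
require 'net/http'
require 'nokogiri'

# Sketch of the Google Analytics detection: scan every <script> tag for a
# pattern commonly found in GA snippets. The regex is an assumption.
GA_PATTERN = /google-analytics\.com|ga\s*\(\s*['"]create['"]|UA-\d{4,}-\d+/

def google_analytics_present?(url)
  html = Net::HTTP.get(URI(url))
  html = html.force_encoding('UTF-8').scrub      # reset the encoding if needed (step 1)
  doc  = Nokogiri::HTML(html)
  doc.css('script').any? { |s| (s['src'].to_s + s.content) =~ GA_PATTERN }
end

# google_analytics_present?('http://www.example.com')  # => true or false
```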

Call Tracking/Recording: This test first loads @@url and attempts to find all the phone numbers on the page. If none are found, the test fails. The program then asks the user, in a loop, for a search term to try and generate the business as the first result on the Google SERP. If there is a url match with the first result (it belongs to the business), it reads the phone numbers from both phone number spots in the listing (if they exist). If the phone numbers are read and do not match any found on the site, the test passes (call tracking numbers were used). If the phone numbers match, no numbers are found, or the first result does not belong to the business, the program continues. The loop keeps going until the first result belongs to the business and no call tracking numbers are found, or the user auto passes/fails this step. If the user chooses the auto pass option, the test passes. If the program exits the loop due to auto failure, or due to not finding call tracking numbers in the listing, the user is asked whether they want to continue to the ad search portion of the program. If they answer “yes”, the test continues. If they answer “no”, the test fails. The program then asks the user, in a loop, for a search term to try and generate ads for the business on the SERP. For every ad search iteration, the user has the choice to auto pass/fail the test; if they do, the test passes/fails respectively. Once the terms are entered, the results are generated and the top and right-hand-side ad containers are grabbed and looped through. If there is no url match amongst any ad, the test continues. If there is a url match, the phone numbers are grabbed from the ad (if they exist). If there are no numbers, the test fails. If there are numbers, they are compared to the ones found on @@url. If there is a match between the numbers, the test fails (no call tracking numbers were used). If there isn’t, the test passes (a call tracking number was used). The loop continues until the user generates an ad, or auto passes/fails the test. A sketch of the phone number extraction and comparison follows the steps below.

1. Get all phone numbers off of the main page:
  1. Loop through all elements on the page:
    1. If a phone number pattern is found in the element’s text, append it to a list.
  2. Return that list. If the list is empty, the test fails. If not, the test continues.
2. Attempt to find call tracking numbers on a Google SERP:
  1. In a loop, ask the user for a search term to enter into Google to generate a results page where the first result belongs to the business:
    1. Test to see if the user auto passed or failed the test. If they entered “auto pass”, pass the test. If they entered “auto fail”, ask the user if they want to continue to ad testing:
      1. If the user answers “yes” when asked to continue, the test continues. If they enter “no”, the test is failed.
    2. Load the Google search page.
    3. Input the user search term and generate results.
    4. Test to see if the first result belongs to the business:
      1. Find the first element. If the first listing cannot be found, return and continue trying search terms.
      2. Test to see if the first listing belongs to the business. If it doesn’t, return and keep testing.
      3. Find the listed numbers in the listing. If they are non-existent, return and continue trying search terms.
      4. Test to see if any numbers on the page match the listed numbers. If they do, ask the user if they want to continue to ads testing:
        1. If the user answers “yes” when asked to continue, the test continues. If they enter “no”, the test is failed.
      5. If there is no match between the listed numbers and the numbers found on the page, the test is passed (a call tracking number is used).
3. At this point, if the test was not passed, then no SERP call tracking numbers could be found or the test was auto failed and the user chose to continue to ads testing. Test for call tracking numbers in ads:
  1. In a loop, ask the user for a search term to enter into Google to generate ads for the business:
    1. Test to see if the user auto passed or failed the test. If they entered “auto pass”, pass the test. If they entered “auto fail”, ask the user if they want to continue to ad testing:
      1. If the user answers “yes” when asked to continue, the test continues. If they enter “no”, the test is failed.
    2. Load the Google search page.
    3. Input the user search term and generate results.
    4. Get the text of the top and right-hand-side ad containers:
      1. If a top ad container is found, set a variable to it. If not found, set the variable to a :NOT_FOUND flag.
      2. If a rhs (right-hand-side) ad container is found, set a variable to it. If not found, set the variable to a :NOT_FOUND flag.
      3. Return both the top and rhs ad container variables.
    5. Attempt to find a match in the ads:
      1. If the top ad container exists, loop through each ad within it:
        1. If the listed ad url and @@url do not match, go to the next iteration of the loop.
        2. Attempt to get the first ad phone number from the ad:
          1. If the ad’s first phone number is available, get the phone number from it. If the element can’t be found, return a :NOT_FOUND flag and continue.
        3. Attempt to match the numbers found on the home page and the first found in the listing:
          1. Loop through all of the phone numbers found on the page:
            1. If there is a match, continue.
          2. If there is no match, pass the test (a call tracking number was used).
        4. Attempt to get the second ad phone number from the ad:
          1. If the ad’s second phone number is available, get the phone number from it. If the element can’t be found, return a :NOT_FOUND flag and continue.
        5. Attempt to match the numbers found on the home page and the second found in the listing:
          1. Loop through all of the phone numbers found on the page:
            1. If there is a match, fail the test.
          2. If there is no match, pass the test (a call tracking number was used).
      2. If the rhs ad container exists, loop through each ad within it:
        1. If the listed ad url and @@url do not match, go to the next iteration of the loop.
        2. Attempt to get the first ad phone number from the ad:
          1. If the ad’s first phone number is available, get the phone number from it. If the element can’t be found, return a :NOT_FOUND flag and continue.
        3. Attempt to match the numbers found on the home page and the first found in the listing:
          1. Loop through all of the phone numbers found on the page:
            1. If there is a match, continue.
          2. If there is no match, pass the test (a call tracking number was used).
        4. Attempt to get the second ad phone number from the ad:
          1. If the ad’s second phone number is available, get the phone number from it. If the element can’t be found, return a :NOT_FOUND flag and fail the test.
        5. Attempt to match the numbers found on the home page and the second found in the listing:
          1. Loop through all of the phone numbers found on the page:
            1. If there is a match, fail the test.
          2. If there is no match, pass the test (a call tracking number was used).
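A sketch of the phone number harvesting and comparison used throughout this test, assuming a Selenium driver; the regex covers common US formats and is illustrative rather than the script’s exact pattern:

```ruby
# Sketch of phone-number extraction and the "call tracking" comparison: if the
# number shown in a listing/ad does not appear anywhere on the site, a tracking
# number is presumably in use.
PHONE_PATTERN = /\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}/

def phone_numbers_in(text)
  text.to_s.scan(PHONE_PATTERN).map { |n| n.gsub(/\D/, '') }  # normalize to digits
end

def site_phone_numbers(driver, url)
  driver.navigate.to(url)
  phone_numbers_in(driver.find_element(:tag_name, 'body').text).uniq
end

def call_tracking_number?(listing_text, site_numbers)
  listed = phone_numbers_in(listing_text)
  return nil if listed.empty?                 # nothing to compare against
  (listed & site_numbers).empty?              # no overlap => tracking number in use
end
```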

Dynamic Phone Numbers: *This test only uses a Boolean flag passed from the former test. If the flag is true, it means that a call tracking number was used, and therefore dynamic phone numbers were used as well (the test passes). If the flag is false, dynamic phone numbers were not used (the test fails).

1. If the flag passed from the former test is true, pass the test (if call tracking numbers are used then dynamic phone numbers are used as well). If the flag is false, fail the test.
