BrowserStack Runner

A command line interface to run browser tests over BrowserStack.

Usage

Install globally:

npm -g install browserstack-runner

Then, after setting up the configuration, run tests with:

browserstack-runner

You can also install locally and run the local binary:

npm install browserstack-runner
node_modules/.bin/browserstack-runner

If you're getting an error EACCES open ... BrowserStackLocal, configure npm to install modules using something other than the default "nobody" user:

npm -g config set user [user]

Where [user] is replaced with a local user with enough permissions.

CLI options:

--path: Can be used if a test runner other than the one specified in the browserstack.json file is needed.

--pid: Path to a custom PID file that stores the PIDs of the BrowserStackLocal instances created.

--verbose or -v: For verbose logging.

--browsers or -b: Space-separated list of cli_key values as defined in the browserstack.json file. Tests will run only on the selected browsers. If not present, tests will run on all browsers in the configuration file.

Sample Usage: browserstack-runner --browsers 1 2 3 --path 'path/to/test/runner' --pid 'path/to/pid/file' -v

Usage as a module

browserstack-runner can also be used as a module. To run your tests, do the following inside your project -

var browserstackRunner = require("browserstack-runner");

var config = require("./browserstack.json");

browserstackRunner.run(config, function(error, report) {
  if (error) {
    console.log("Error:" + error);
    return;
  }
  console.log(JSON.stringify(report, null, 2));
  console.log("Test Finished");
});

The callback to browserstackRunner.run is called with two parameters -

  • error: This parameter is either null or an Error object (if test execution failed), with message set to the reason why executing the tests on BrowserStack failed.
  • report: An array that can be used to keep track of the executed tests and suites in a run. Each object in the array has the following keys -
    • browser: The name of the browser the test executed on.
    • tests: An array of Test objects. The Test objects are described here
    • suites: A global Suite object as described here

The structure of the report object is as follows -

[
  {
    "browser": "Windows 7, Firefox 47.0",
    "tests": [
      {
        "name": "isOdd()",
        "suiteName": "Odd Tests",
        "fullName": ["Odd Tests", "isOdd()"],
        "status": "passed",
        "runtime": 2,
        "errors": [],
        "assertions": [
          {
            "passed": true,
            "actual": true,
            "expected": true,
            "message": "One is an odd number"
          },
          {
            "passed": true,
            "actual": true,
            "expected": true,
            "message": "Three is an odd number"
          },
          {
            "passed": true,
            "actual": true,
            "expected": true,
            "message": "Zero is not odd number"
          }
        ]
      }
    ],
    "suites": {
      "fullName": [],
      "childSuites": [
        {
          "name": "Odd Tests",
          "fullName": ["Odd Tests"],
          "childSuites": [],
          "tests": [
            {
              "name": "isOdd()",
              "suiteName": "Odd Tests",
              "fullName": ["Odd Tests", "isOdd()"],
              "status": "passed",
              "runtime": 2,
              "errors": [],
              "assertions": [
                {
                  "passed": true,
                  "actual": true,
                  "expected": true,
                  "message": "One is an odd number"
                },
                {
                  "passed": true,
                  "actual": true,
                  "expected": true,
                  "message": "Three is an odd number"
                },
                {
                  "passed": true,
                  "actual": true,
                  "expected": true,
                  "message": "Zero is not odd number"
                }
              ]
            }
          ],
          "status": "passed",
          "testCounts": {
            "passed": 1,
            "failed": 0,
            "skipped": 0,
            "total": 1
          },
          "runtime": 2
        }
      ],
      "tests": [],
      "status": "passed",
      "testCounts": {
        "passed": 1,
        "failed": 0,
        "skipped": 0,
        "total": 1
      },
      "runtime": 2
    }
  }
]
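
For illustration, here is a minimal sketch (only run and the report fields shown above are taken from this documentation; everything else is plain Node) that walks the report and sets a non-zero exit code when any browser reports failures:

var browserstackRunner = require("browserstack-runner");

var config = require("./browserstack.json");

browserstackRunner.run(config, function(error, report) {
  if (error) {
    console.error("Error:" + error);
    process.exit(1);
  }
  var failed = 0;
  report.forEach(function(entry) {
    // testCounts on the top-level suite object aggregates results per browser
    var counts = entry.suites.testCounts;
    console.log(entry.browser + ": " + counts.passed + " passed, " + counts.failed + " failed");
    failed += counts.failed;
  });
  process.exit(failed > 0 ? 1 : 0);
});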

Configuration

To run browser tests on BrowserStack infrastructure, you need to create a browserstack.json file in the project's root directory (the directory from which tests are run), by running this command:

browserstack-runner init [preset] [path]

preset: Path of a custom preset file. Default: presets/default.json

path: Path to test file. Default: path/to/test/runner

Parameters for browserstack.json

  • username: BrowserStack username (Or BROWSERSTACK_USERNAME environment variable)
  • key: BrowserStack access key (Or BROWSERSTACK_KEY environment variable)
  • test_path: Path to the test page which will run the tests when opened in a browser.
  • test_framework: Specify the test framework which will run the tests. Currently supported: qunit, jasmine, jasmine1.3.1, jasmine2 and mocha.
  • test_server_port: Specify the test server port that will be opened from BrowserStack. If not set, the default port 8888 will be used. Find a list of all supported ports on browserstack.com.
  • timeout: Specify worker timeout with BrowserStack.
  • browsers: A list of browsers on which tests are to be run. Find a list of all supported browsers and platforms on browserstack.com.
  • build: A string to identify your test run in BrowserStack. In a Travis CI setup, TRAVIS_COMMIT will be the default identifier.
  • proxy: Specify a proxy to use for the local tunnel. An object with host, port, username and password properties.
  • exit_with_fail: If set to true, the CLI process will exit with a failure status if any of the tests failed. Useful for automated build systems.
  • tunnel_pid_file: Specify a path to a file to save the tunnel process ID into. Can also be specified using the --pid flag when launching browserstack-runner from the command line.

A sample configuration file:

{
  "username": "<username>",
  "key": "<access key>",
  "test_framework": "qunit|jasmine|jasmine2|mocha",
  "test_path": ["relative/path/to/test/page1", "relative/path/to/test/page2"],
  "test_server_port": "8899",
  "browsers": [
    {
      "browser": "ie",
      "browser_version": "10.0",
      "device": null,
      "os": "Windows",
      "os_version": "8",
      "cli_key": 1
    },
    {
      "os": "android",
      "os_version": "4.0",
      "device": "Samsung Galaxy Nexus",
      "cli_key": 2
    },
    {
      "os": "ios",
      "os_version": "7.0",
      "device": "iPhone 5S",
      "cli_key": 3
    }
  ]
}

browsers parameter

The browsers parameter is a list of objects, where each object contains the details of a browser on which you want to run your tests. This object differs for browsers on desktop platforms and browsers on mobile platforms. For browsers on desktop platforms, the browser, browser_version, os and os_version parameters are required. The cli_key parameter is optional and can be used on the command line when tests need to be run only on a chosen subset of the browsers defined in the browserstack.json file.

Example:

{
  "browser": "ie",
  "browser_version": "10.0",
  "os": "Windows",
  "os_version": "8",
  "cli_key": 1
}

For mobile platforms, os, os_version and device parameters are required.

Example:

[
  {
    "os": "ios",
    "os_version": "8.3",
    "device": "iPhone 6 Plus",
    "cli_key": 1
  },
  {
    "os": "android",
    "os_version": "4.0",
    "device": "Google Nexus",
    "cli_key": 2
  }
]

For a full list of supported browsers, platforms and other details, visit the BrowserStack site.

Compact browsers configuration

When os and os_version granularity is not desired, the following configuration can be used:

  • [browser]_current or [browser]_latest: will assign the latest version of the browser.
  • [browser]_previous: will assign the previous version of the browser.
  • [browser]_[version]: will assign the specified version of the browser. Minor versions can be concatenated with underscores.

This can also be mixed with fine-grained configuration.

Example:

{
  "browsers": [
    "chrome_previous",
    "chrome_latest",
    "firefox_previous",
    "firefox_latest",
    "ie_6",
    "ie_11",
    "opera_12_1",
    "safari_5_1",
    {
      "browser": "ie",
      "browser_version": "10.0",
      "device": null,
      "os": "Windows",
      "os_version": "8",
      "cli_key": 1
    }
  ]
}

Note: These shortcuts work only for browsers on desktop platforms supported by BrowserStack.

Proxy support for BrowserStack local

Add the following to browserstack.json:

{
  "proxy": {
    "host": "localhost",
    "port": 3128,
    "username": "foo",
    "password": "bar"
  }
}

Supported environment variables

To avoid duplication of system or user specific information across several configuration files, use these environment variables:

  • BROWSERSTACK_USERNAME: BrowserStack user name.
  • BROWSERSTACK_KEY: BrowserStack key.
  • TUNNEL_ID: Identifier for the current instance of the tunnel process. In a Travis CI setup, TRAVIS_JOB_ID will be the default identifier.
  • BROWSERSTACK_JSON: Path to the browserstack.json file. If null, browserstack.json in the root directory will be used.
  • BROWSERSTACK_LOCAL_BINARY_PATH: Path to the BrowserStack Local binary present on the system. If null, BrowserStackLocal in the lib/ directory will be used.
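
For example, these variables can be exported in the shell before invoking the runner (the values below are placeholders, not real credentials or paths):

export BROWSERSTACK_USERNAME="<username>"
export BROWSERSTACK_KEY="<access key>"
export BROWSERSTACK_JSON="config/browserstack.json"
browserstack-runner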

Secure Information

To avoid checking in the BrowserStack username and key in your source control system, the corresponding environment variables can be used.

These can also be provided by a build server, for example using secure environment variables on Travis CI.

Code Sample

Check out code sample here.

Running Tests

BrowserStack Runner is currently tested by running test cases defined in QUnit, Mocha, and Spine repositories.

To run tests:

npm test

To run a larger suite of tests ensuring compatibility with older versions of QUnit, etc.:

npm run test-ci

Tests are also run for every pull request, courtesy of Travis CI.

Timeout issue with Travis CI

You might face a build timeout on Travis CI if the runner takes more than 10 minutes to run your tests.

There are two possible ways to solve this problem:

  1. Run a script that calls console.log every 1-2 minutes. This keeps writing to the console and hence avoids the Travis build timeout.
  2. Use the travis_wait function provided by Travis CI. You can prefix the browserstack-runner command with travis_wait in your .travis.yml file, as shown in the sketch below.

We recommend using the travis_wait function. It also allows you to configure the wait time (e.g. travis_wait 20 browserstack-runner extends the wait time to 20 minutes). Read more about travis_wait here
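
For illustration, a minimal .travis.yml sketch (assuming browserstack-runner is already your test command; adjust the wait time to suit your suite):

script:
  - travis_wait 20 browserstack-runner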

browserstack-runner's Issues

What browser activity resets the timeout clock?

Hey,

I have a large suite of tests written in QUnit which take around 5 minutes to run on BrowserStack (EmberJS integration tests). Can you tell me what causes the timeout counter to reset? Is there some keepalive mechanism which tells browserstack-runner that your tests are still running, or is it simply a timer from the time your tests start until the tests are fully completed?

As my tests take around 5 minutes to run and my browserstack-runner timeout is 5 minutes, I get a lot of timeouts if they run slightly over. I've just updated my timeout to be 7 minutes so I don't get the timeouts anymore, but should browserstack-runner know that my tests are still in progress?

Thanks

Worker fails to load page, times out

I did a run against a single testsuite in jQuery UI, with this configuration:

{
    "test_framework": "qunit",
    "test_path": ["tests/unit/autocomplete/autocomplete.html"],
    "browsers": [
        "chrome_previous",
        "chrome_current",
        "firefox_previous",
        "firefox_current",
        "opera_previous",
        "opera_current",
        "safari_previous",
        "safari_current",
        "ie_8",
        "ie_9",
        "ie_10",
        "ie_11"
    ]
}

On the first try, seven browsers completed, then the runner died with this (useless) stacktrace:

events.js:72
        throw er; // Unhandled 'error' event
              ^
Error: getaddrinfo ENOTFOUND
    at errnoException (dns.js:37:11)
    at Object.onanswer [as oncomplete] (dns.js:124:16)

On the next run, one browser completed, the rest timed out with a message like this for each:

[Fri May 02 2014 18:00:20 GMT+0200 (CEST)] [Runner alert] Worker inactive for too long: Windows XP, Chrome 33.0

Looking at the screenshots at browserstack.com/automate, the workers failed to load the page, e.g. https://s3.amazonaws.com/testautomation/1fe1c46220e4d579ed7e80664f97d5cc5fc23614/js-screenshot-1399046509.png - all 11 failed workers show an error message like that.

Avoid dumping files in working dir

When running browserstack-runner, it currently seems to dump two files into the working directory: browserstack-run.pid and BrowserStackLocal. I don't think either should be there.

Persistent ETIMEDOUT error

This currently makes browserstack-runner completely unusable for me. The error is pretty useless:

~/dev/jquery-ui [git:optimize-testsuites?] $ browserstack-runner

events.js:72
        throw er; // Unhandled 'error' event
              ^
Error: connect ETIMEDOUT
    at errnoException (net.js:904:11)
    at Object.afterConnect [as oncomplete] (net.js:895:19)

To get some more output, I've installed the long-stack-traces module and required it in cli.js, then installed my local copy globally (npm install -g .). With that in place, I get a longer stacktrace:

~/dev/jquery-ui [git:optimize-testsuites?] $ browserstack-runner
Launching 2 worker(s) for 2 run(s).
Uncaught Error: connect ETIMEDOUT
    at errnoException (net.js:904:11)
    at Object.afterConnect [as oncomplete] (net.js:895:19)
----------------------------------------
    at EventEmitter.on
    at Socket.<anonymous> (_stream_readable.js:688:33)
    at http.js:1753:12
    at process._tickCallback (node.js:419:13)
----------------------------------------
    at process.nextTick
    at ClientRequest.onSocket (http.js:1730:11)
    at Agent.addRequest (http.js:1269:9)
    at new ClientRequest (http.js:1416:16)
    at Object.request (http.js:1843:10)
    at Version.request (/usr/local/lib/node_modules/browserstack-runner/node_modules/browserstack/lib/browserstack.js:163:18)
    at Version.getWorkers (/usr/local/lib/node_modules/browserstack-runner/node_modules/browserstack/lib/browserstack.js:95:8)
    at [object Object].<anonymous> (/usr/local/lib/node_modules/browserstack-runner/bin/cli.js:195:14)
----------------------------------------
    at Object.setInterval
    at Object.start (/usr/local/lib/node_modules/browserstack-runner/bin/cli.js:194:27)
    at /usr/local/lib/node_modules/browserstack-runner/bin/cli.js:278:22
    at [object Object].<anonymous> (/usr/local/lib/node_modules/browserstack-runner/lib/local.js:62:11)
    at Timer.listOnTimeout [as ontimeout] (timers.js:110:15)
----------------------------------------
    at Object.setTimeout
    at Socket.<anonymous> (/usr/local/lib/node_modules/browserstack-runner/lib/local.js:61:9)
    at Socket.emit (events.js:117:20)
    at Socket.<anonymous> (_stream_readable.js:745:14)
    at Socket.emit (events.js:92:17)
    at emitReadable_ (_stream_readable.js:407:10)
    at emitReadable (_stream_readable.js:403:5)
----------------------------------------
    at EventEmitter.on
    at tunnelLauncher (/usr/local/lib/node_modules/browserstack-runner/lib/local.js:51:23)
    at /usr/local/lib/node_modules/browserstack-runner/lib/local.js:87:7
    at Object.cb [as oncomplete] (fs.js:168:19)
Uncaught undefined

/usr/local/lib/node_modules/browserstack-runner/node_modules/long-stack-traces/lib/long-stack-traces.js:80
                    throw ""; // TODO: throw the original error, or undefined?
                    ^

That last part seems like an issue in the long-stack-traces module, which isn't very useful. Though at least there are multiple references to locations within this tool itself, which should help. I'm not familiar with that part of the codebase though.

Compact 'browsers' configuration

For jQuery projects, we'd like to be able to specify browsers like:

"browsers": [
    "chrome_previous",
    "chrome_current",
    "firefox_previous",
    "firefox_current",
    "ie_6",
    "ie_7",
    "ie_8",
    "ie_9",
    "ie_10",
    "ie_11",
    "opera_12_1",
    "opera_previous",
    "opera_current",
    "safari_5_1",
    "safari_previous",
    "safari_current"
]

"current" is already supported for the browser_version field, but we'd also need "previous", to implement "(Current - 1)".

We don't (want to) care about the platform/OS each browser runs on, but it would be good to be able to mix this compact format with the more verbose one, like this:

"browsers": [
    "chrome_previous",
    "chrome_current",
    {
      "browser": "firefox",
      "browser_version": "15.0",
      "device": null,
      "os": "OS X",
      "os_version": "Snow Leopard"
    },
    "ie_6"
]

Encoding issues with built-in server

This is an odd encoding issue that only appears when running tests through the runner. It happens for my jquery-validation project, specifically in this test. That contains a few lines like this:

    ok( method( "£9", "£"), "Symbol no decimal" );
    ok( method( "£9.9", "£"), "£, one decimal" );
    ok( method( "£9.99", "£"), "£, two decimal" );
    ok( method( "£9.90", "£"), "Valid currency" );

When the file is served by the built-in server (instead of my local nginx), it looks like this:

    ok( method( "Â£9", "Â£"), "Symbol no decimal" );
    ok( method( "Â£9.9", "Â£"), "Â£, one decimal" );
    ok( method( "Â£9.99", "Â£"), "Â£, two decimal" );
    ok( method( "Â£9.90", "Â£"), "Valid currency" );

That even has the correct character, but for some reason puts the Â in front of it.

The only difference I noticed is that nginx serves the js file with Content-Type: application/x-javascript, while the server here uses "text/javascript". I tried changing the content-type, but that didn't help; the result was the same.

I also tried to explicitly set the charset as part of the content-type, forcing it to utf-8. That shouldn't be necessary, since the index.html has a <meta charset="utf-8"> tag, and it didn't help either.

Hoping for some ideas what else could be causing this.

Here's my browserstack.json I used in a checkout of jquery-validation:

{
    "test_framework": "qunit",
    "test_path": ["test/index.html"],
    "browsers": [
        "chrome_latest"
    ]
}

npm install fails because of failed shasum checking

npm WARN package.json [email protected] No repository field.
npm ERR! Error: shasum check failed for /var/folders/gd/xzyzg3891lvgsgynzl72vlyc0000gn/T/npm-1106-xXXQklPd/registry.npmjs.org/browserstack-runner/-/browserstack-runner-0.2.2.tgz
npm ERR! Expected: 4a50ced875371dbd0906ffea60788abea4a2b5cf
npm ERR! Actual:   c84a458caca706d44407076c7ab5a3ef5cab6626
npm ERR! From:     https://registry.npmjs.org/browserstack-runner/-/browserstack-runner-0.2.2.tgz
npm ERR!     at /usr/local/lib/node_modules/npm/node_modules/sha/index.js:38:8
npm ERR!     at ReadStream.<anonymous> (/usr/local/lib/node_modules/npm/node_modules/sha/index.js:85:7)
npm ERR!     at ReadStream.emit (events.js:117:20)
npm ERR!     at _stream_readable.js:943:16
npm ERR!     at process._tickCallback (node.js:419:13)
npm ERR! If you need help, you may report this *entire* log,
npm ERR! including the npm and node versions, at:
npm ERR!     <http://github.com/npm/npm/issues>

npm ERR! System Darwin 14.0.0
npm ERR! command "node" "/usr/local/bin/npm" "install"
npm ERR! cwd /Users/yorkie/workspace/pixbi/duplo
npm ERR! node -v v0.10.32
npm ERR! npm -v 1.4.28
npm ERR! 
npm ERR! Additional logging details can be found in:
npm ERR!     /Users/yorkie/workspace/pixbi/duplo/npm-debug.log
npm ERR! not ok code 0

As far as I know, the owner of this repo may have published with --force? I suggest bumping the version to resolve this issue, thank you :-)

Do not kill process on timeouts

In runner.js, if a worker times out, alertBrowserStack is called, causing the process to exit. This prevents the rest of the workers/vms from attempting to run their tests. We should allow them all to run so we can potentially fix/validate tests manually per platform before running the suite again.

timeout settings is not intuitive

I set a timeout of 20 seconds, yet even though a test completes (in milliseconds), it says it timed out (I assume it's tracking based on VM queue time, and not true execution time). The timer should only begin once the VM is running.

Reuse workers? Was: Error: Cannot queue more than 200 workers

Here's an interesting issue I just ran into trying to use browserstack-runner to run all unit tests in jQuery UI. This gist has my configuration as well as the output I got: https://gist.github.com/jzaefferer/d79717435a5a537b2a67

This run failed pretty quickly, after it tried to launch 264 workers (12 browsers x 22 test suites), with "Error: Cannot queue more than 200 workers". I guess this is an area where the approach taken by TestSwarm, with all its overhead, is somewhat superior, since a run like this needs a minimum of 12 workers (maybe a few more if a worker gets killed in between or a test has to run again).

Maybe there is some way to reuse workers within browserstack-runner? Or a different approach to this problem?

Similar project

I've been working on a similar project for a while. I just tried to publish mine on NPM, but ran into a naming conflict, which is how I found out about this project. I've renamed my project to browserstack-workers (https://github.com/bramstein/browserstack-workers) and published it.

After looking through the code it seems we're taking a very similar approach. Would you be interested in merging the projects or some other form of collaboration? I wasn't really planning on creating and maintaining a project like this, but there was nothing like it around at the time I started working on it, and I really needed it to run my Mocha and Jasmine tests.

0.1.0 CLI doesn't work in OS X / Linux due to bad line endings

$ browserstack-runner
env: node\r: No such file or directory

When I manually convert the file to use UNIX line-endings, everything works:

$ browserstack-runner
Using config: /Users/ron/Projects/bmp-kernel.js/browserstack.json
Configuration file `browserstack.json` is missing.

Inject patches only in main testsuite html

Currently in server.js, any html file gets injected with the framework patch scripts. This approach causes issues when loading other html files, most likely as iframes.

This gist, running against the Sizzle test suite, shows what happens in that case: https://gist.github.com/jzaefferer/401dd88e83443143d351 - it's kind of obscure, but the lack of the window object led me to my conclusion above.

I've worked around this by only applying the patching to files called index.html. A proper solution should compare the filename to the one given in config.test_path. I'm not sure how to do that properly since test_path can be an array. Probably compare to each?

Missing license

There's currently no license information in the repository. This should be addressed. MIT would be good.

Participate in discussion about a shared Reporter interface?

We on the QUnit team have been discussing the possibility of working with other JS test frameworks, especially those that can be run client-side (e.g. Mocha, Jasmine, Intern, Buster, etc.), to agree upon a common Reporter interface so that we could hopefully share Reporter plugins between testing frameworks.

This would most likely come in the form of:

  • a common Reporter API/Interface, e.g.
    • an EventEmitter interface (.on(...)/.off(...)) OR an object with standard "hook" properties
    • maybe a standard-ish way to register a Reporter, e.g. MyLib.addReporter(x), MyLib.reporter = x;, etc.
  • a minimum viable set of standardly-named events
    • an associated standard set of data/details provided for each event
  • a minimum viable set of standard test status types (e.g. pass, fail, skip, todo, pending, etc.)
  • updating all participating test frameworks to support this new common Reporter interface

As a likely consumer of such a shared Reporter interface, we suspect you may be interested in participating in the discussions as well. Such a reporter should eliminate the need for any framework-specific shims/plugins/etc. that you may be employing right now to standardize results.

Would you guys be interested in discussing this further with us? If so, please let me know who I should invite to participate.

Centralized Discussions: https://github.com/js-reporters/js-reporters/issues/

Cross-reference issues:

Check for 304 Not Modified before downloading tunnel

Currently each run gets blocked for a few seconds with this:

Downloading BrowserStack Local to '/Users/jza/dev/browserstack-runner/lib/BrowserStackLocal'

I'm not sure if that actually downloads the file each time, but since it won't change inbetween each run, checking if the file actually changed before downloading it may save several seconds.

Timeouts when running more then one page per worker on Travis

I've been working on integrating browserstack-runner into QUnit builds on Travis, using simple obfuscation to embed credentials to allow runs against pull requests, without making it too easy to find the credentials. The configuration sets up 14 workers to run 4 test pages, which works fine locally. When running on Travis, each worker runs the first page, then fails to change the URL and eventually times out. I pushed another commit to the same branch to verify the result, and got the same failure - all 14 workers time out after running one page.

Since the same setup works fine locally I have no clue what the issue is. There are no errors displayed in the Travis output and the BrowserStack build page doesn't show anything either. Looking for some ideas how to debug or better yet, solve this.

Make verbose console output opt-in

Even with just eight browsers, the console output is pretty chaotic, since workers terminate in random(?) order. I've pasted a recent output into a gist, along with a more compact version, which skips pretty much all the extra logging and only outputs relevant information: https://gist.github.com/jzaefferer/3f4a9e516a02494140a7

For any debugging, of the tests or the tool, a --verbose option would help, outputting as much, or more, as right now. But the default should only log relevant information.

Either way, errors should always be logged. I don't need to know about "launching", "launched", "acknowledged", "terminated", as long as I'm informed if one of those steps goes wrong or doesn't happen.

development workflow is slow - for eg. no test cases

If you've followed the changes and my comments above, continue here: my changes here break the runner for anything that isn't QUnit. Addressing that requires a lot more work and testing with each supported framework. Since there are no unit tests, testing each change requires a rather slow run of browserstack-runner. Improving the development workflow should therefore have the highest priority, before anything else is addressed, since that will speed everything else up a lot. I have no idea how to do that, though.

Reference: #35

Completed, yet timed out

I got this from a run of the resizable test suite in jQuery UI:

~/dev/jquery-ui [git:drag-resize-relative-position+?] $ browserstack-runner
Launching 12 workers
[Windows 8, Chrome 32.0] Completed in 3890 milliseconds. 232 of 232 passed, 0 failed.
[Windows 8, Chrome 32.0] Screenshot: https://s3.amazonaws.com/testautomation/205cc92c471c3b479cc6788ab39bcc47b7dd9d3f/js-screenshot-1396366499.png
[Windows 8.1, Firefox 27.0] Completed in 2534 milliseconds. 232 of 232 passed, 0 failed.
[Windows 8.1, Firefox 27.0] Screenshot: https://s3.amazonaws.com/testautomation/411709057a8b4b6f73380570fe2f240965b4b91e/js-screenshot-1396366506.png
[Windows 8, Firefox 28.0] Completed in 2939 milliseconds. 232 of 232 passed, 0 failed.
[Windows 8, Firefox 28.0] Screenshot: https://s3.amazonaws.com/testautomation/4b9a29143a5b2c7741b89ec16f3c65f45742a443/js-screenshot-1396366588.png
[Windows 7, Opera 19.0] Completed in 14457 milliseconds. 232 of 232 passed, 0 failed.
[Windows 7, Opera 19.0] Screenshot: https://s3.amazonaws.com/testautomation/18fa0a004f56c3828e93df42ed1db9aa348ad5ac/js-screenshot-1396366597.png
[OS X Mavericks, Safari 7.0] Completed in 1255 milliseconds. 232 of 232 passed, 0 failed.
[Windows 7, Ie 9.0] Completed in 44 milliseconds. 0 of 0 passed, 0 failed.
[OS X Mavericks, Safari 7.0] Screenshot: https://s3.amazonaws.com/testautomation/c90a6de932909ffe308c0ae62502752a84632650/js-screenshot-1396366638.png
[Windows 7, Ie 9.0] Screenshot: https://s3.amazonaws.com/testautomation/2d29d9b8250688dcdf0c1069a8c0879d7404c697/js-screenshot-1396366641.png
[Windows 8.1, Opera 20.0] Completed in 3967 milliseconds. 232 of 232 passed, 0 failed.
[Windows 8.1, Opera 20.0] Screenshot: https://s3.amazonaws.com/testautomation/77c2599aa28752c52ff4c26100a566b5981c8eac/js-screenshot-1396366653.png
[Tue Apr 01 2014 17:39:19 GMT+0200 (CEST)] [Runner alert] Tests timed out on: Windows 8, Chrome 32.0
[Tue Apr 01 2014 17:39:25 GMT+0200 (CEST)] [Runner alert] Tests timed out on: Windows 8.1, Firefox 27.0
[Tue Apr 01 2014 17:39:32 GMT+0200 (CEST)] [Runner alert] Tests timed out on: Windows 8.1, Opera 20.0
[Tue Apr 01 2014 17:39:42 GMT+0200 (CEST)] [Runner alert] Tests timed out on: OS X Mountain Lion, Safari 6.1
[Tue Apr 01 2014 17:39:46 GMT+0200 (CEST)] [Runner alert] Tests timed out on: OS X Mavericks, Safari 7.0
[Tue Apr 01 2014 17:39:54 GMT+0200 (CEST)] [Runner alert] Tests timed out on: Windows 7, Ie 9.0
[Tue Apr 01 2014 17:40:00 GMT+0200 (CEST)] [Runner alert] Tests timed out on: Windows XP, Ie 8.0
[Tue Apr 01 2014 17:40:07 GMT+0200 (CEST)] [Runner alert] Worker inactive for too long: Windows 8, Ie 10.0
[Tue Apr 01 2014 17:40:15 GMT+0200 (CEST)] [Runner alert] Worker inactive for too long: Windows 7, Ie 11.0
[Tue Apr 01 2014 17:40:23 GMT+0200 (CEST)] [Runner alert] Worker inactive for too long: Windows 8, Chrome 33.0
All tests done, failures: 10.
Exiting

For example, Chrome 32 completed, but also timed out. I can't tell why that would happen. Let me know what other information I could provide to help debug this.

Cannot download behind proxy

The git:// type URL (in package.json) gets blocked by the corporate proxy when performing an npm install. It would be useful to have https URLs instead.

TIA,

Worker never terminated, but gets ignored

I did a run with multiple IE versions. In the output, IE 9 is "Launching", "Launched" and "Acknowledged", but never "Terminated". IE 10 is on "Launching" and "Launched", but doesn't get anywhere else.

The full output is here (see current.sh, the second file in the gist): https://gist.github.com/jzaefferer/3f4a9e516a02494140a7

My configuration looks like this:

{
    "username": "jzaefferer",
    "key": "...",
    "test_framework": "qunit",
    "test_path": ["tests/unit/resizable/resizable.html"],
    "browsers": [
        "chrome_previous",
        "chrome_current",
        "firefox_previous",
        "firefox_current",
        "ie_8",
        "ie_9",
        "ie_10",
        "ie_11"
    ]
}

Crashes due to undefined worker(?)

While working on #66 I ran into this, crashing the server with this stacktrace:

/Users/jza/dev/browserstack-runner/lib/server.js:150
          logger.info(chalk.red("[%s] Error:"), worker.string, formatTraceback
                                                      ^
TypeError: Cannot read property 'string' of undefined
    at /Users/jza/dev/browserstack-runner/lib/server.js:150:55
    at Array.forEach (native)
    at progressHandler (/Users/jza/dev/browserstack-runner/lib/server.js:149:26)
    at IncomingMessage.<anonymous> (/Users/jza/dev/browserstack-runner/lib/server.js:235:46)
    at IncomingMessage.EventEmitter.emit (events.js:92:17)
    at _stream_readable.js:920:16
    at process._tickCallback (node.js:415:13)

If it helps, here's the full output after I applied the workaround I described in #66: https://gist.github.com/jzaefferer/3a8d2ddd88a886c242d8

Create a history file during/after runs

Create a JSON file that can be loaded in a subsequent run to allow the user to re-run failed tests (could implement in CLI similar to how VIM behaves when it detects a swap file).

Improve console output for multiple testsuites

When specifying multiple files in test_path, the current output will just log Completed messages for each file, e.g. [Windows 8.1, Chrome 33.0] Completed... multiple times, once for each file. The number of tests run might vary, but that's not enough information.

This is less of a problem when something fails, where the test name and stack trace indicate which test suite is affected.

CLI filters for paths and browsers

Using browserstack-runner on a project with a dozen test_path entries and a dozen browsers, I find myself editing the config whenever I need to run a subset of those browsers against a subset of paths. I then have to remember to revert those changes to avoid accidentally committing them.

An idea to address that: Two extra CLI options to filter both paths and browsers, so the usage could look something like this:

browserstack-runner -p menu -b chrome

That would match all paths that contain "menu" and all browsers that contain "chrome", based on what's in the configuration.

Would be good to get some input from others, maybe there's a better way to achieve the same thing.

mocha: tests run, but session times out

I’m trying to run a mocha test suite with browserstack-runner, and the suite runs and passes (looking at the screenshots or in interactive sessions).

Nevertheless browserstack-runner does not pick up the results, and the session times out (message is “Timeout (Session timed out because the browser was idle for 300 seconds”).

My configuration file:

{
    "username": "",
    "key": "",
    "test_framework": "mocha",
    "test_path": "test.html",
    "browsers": [
        "firefox_latest"
    ]
}

On linux, EACCES open ... BrowserStackLocal error

On Linux, after installing with npm install -g browserstack-runner just fine, I ran into this issue.

$ browserstack-runner

events.js:72
        throw er; // Unhandled 'error' event
              ^
Error: EACCES, open '/usr/lib/node_modules/browserstack-runner/lib/BrowserStackLocal'

It happens that /usr/lib/node_modules/browserstack-runner is installed with nobody:<my_user_group> ownership. The above error is actually about not being able to open the file for writing (not for reading, as I initially thought).

I had to change ownership of the above directory (chown -R ...) to my user in order to make it work.

It would be good to state that in the documentation, or ask for user input, or make the directories group-writable to avoid such problems.

Thanks

Kill process properly when aborting

To reproduce, start browserstack-runner, then abort immediately using Ctrl+C. This is the output I got, trying to abort twice:

~/dev/qunit [git:master?] $ node_modules/.bin/browserstack-runner
Using config: /Users/jza/dev/qunit/browserstack.json
Launching server on port: 8888
^CExiting
Non existent tunnel
Downloading BrowserStack Local to `/Users/jza/dev/browserstack-runner/lib/BrowserStackLocal`
^CServer already closed
Exiting
Non existent tunnel
fs: missing callback Error: ENOENT, unlink '/Users/jza/dev/qunit/browserstack-run.pid'

Fix escaping in lib/utils escapeSpecialChars()

A leftover from #97. Needs a separate fix. I don't know what input it deals with. Currently running npm test produces this error:

lib/utils.js: line 11, col 22, Bad or unnecessary escaping.

A unit test for this method would help a lot. The fix itself should be trivial, probably removing one of the slashes. But that needs a way to verify the correctness of the method.

The purpose of the method:

The JSON response (stack trace etc.) we send back from the browser to the runner sometimes has newline and single-quote characters, which confuse JSON.parse(). So escapeSpecialChars handles that for us.

Failing to run on Travis environment

The browserstack-runner local executable runs fine on my local machine, but when I push a build to Travis I just get a notification that the command exited with a value of 1. I've set the credentials as secure environment variables, if that makes any difference. Any ideas as to what might be going on?
