
Testem

Got Scripts? Test’em!


Unit testing in JavaScript can be tedious and painful, but Testem makes it so easy that you will actually want to write tests.

Features

  • Test-framework agnostic, with support for Jasmine, QUnit, Mocha, and others
  • Run tests in all major browsers as well as Node and PhantomJS
  • Two distinct use-cases:
    • Test-driven development (TDD) — designed to streamline the TDD workflow
    • Continuous integration (CI) — designed to work well with popular CI servers like Jenkins or TeamCity
  • Cross-platform support
    • OS X
    • Windows
    • Linux
  • Preprocessor support
    • CoffeeScript
    • Browserify
    • JSHint/JSLint
    • everything else

Screencasts

Installation

You need Node version 0.10+ or io.js installed on your system. Node is extremely easy to install, has a small footprint, and is really useful otherwise too, so just do it.

Once you have Node installed:

npm install testem -g

This will install the testem executable globally on your system.

Usage

As stated before, Testem supports two use cases: test-driven development and continuous integration. Let's go over each one.

Development Mode

The simplest way to use Testem, in the TDD spirit, is to start in an empty directory and run the command

testem

You will see a terminal-based interface which looks like this

Initial interface

Now open your browser and go to the specified URL. You should now see

Zero of zero

We see 0/0 for tests because at this point we haven't written any. As we write code, Testem will pick up any .js files that are added, include them, and, if they contain tests, run them automatically. So let's first write hello_spec.js in the spirit of "test first" (written in Jasmine)

describe('hello', function(){
  it('should say hello', function(){
    expect(hello()).toBe('hello world');
  });
});

Save that file and now you should see

Red

Testem should automatically pick up the new files you've added, as well as any changes you make to them, and rerun the tests. The test fails, as we'd expect. Now we implement the spec in hello.js

function hello(){
  return "hello world";
}

So you should now see

Green

Using the Text User Interface

In development mode, Testem has a text-based user interface with keyboard controls. Here is a list of the control keys

  • ENTER : Run the tests
  • q : Quit
  • ← LEFT ARROW : Move to the next browser tab on the left
  • → RIGHT ARROW : Move to the next browser tab on the right
  • TAB : switch the target text panel between the top and bottom halves of the split panel (if a split is present)
  • ↑ UP ARROW : scroll up in the target text panel
  • ↓ DOWN ARROW : scroll down in the target text panel
  • SPACE : page down in the target text panel
  • b : page up in the target text panel
  • d : half a page down in the target text panel
  • u : half a page up in the target text panel

Command line options

To see all command line options

testem --help

Continuous Integration Mode

To use Testem for continuous integration

testem ci

In CI mode, Testem runs your tests on all the browsers that are available on the system one after another.

You can run multiple browsers in parallel in CI mode by specifying the --parallel (or -P) option to be the number of concurrent running browsers.

testem ci -P 5 # run 5 browsers in parallel

To find out which browsers are currently available (those that Testem knows about and can make use of), run

testem launchers

This prints them out. The output might look like

$ testem launchers
Browsers available on this system:
IE7
IE8
IE9
Chrome
Firefox
Safari
Safari Technology Preview
Opera
PhantomJS

Did you notice that this system has IE versions 7-9? Yes, actually it has only IE9 installed, but Testem uses IE's compatibility mode feature to emulate IE 7 and 8.

When you run testem ci to run tests, it outputs the results in the TAP format by default, which looks like

ok 1 Chrome 16.0 - hello should say hello.

1..1
# tests 1
# pass  1

# ok

TAP is a human-readable, language-agnostic test result format. TAP plugins exist for popular CI servers.

TAP Options

By default, the TAP reporter outputs all test results to the console, whether pass or fail. You can disable this behavior in order to make it easier to see which tests fail (i.e. only output failing tests) using:

{
  "tap_failed_tests_only": true
}

By default, the TAP reporter outputs console logs (distinct from pass/fail information) from all tests that emit logs to the console. You can disable this behavior and only emit logs for failed tests using:

{
  "tap_quiet_logs": true
}

For improved ergonomics, the TAP reporter does not strictly adhere to the TAP spec by default: it reports 'skip' as a possible status rather than as a directive. To strictly follow the spec, use:

{
  "tap_strict_spec_compliance": true
}

By default, the TAP reporter outputs the result of JSON.stringify() for any log content that is not a string. You can override this behavior by specifying a function for tap_log_processor.

{
  "tap_log_processor": function(log) { return log.toString(); }
}
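Since JSON cannot contain a function value, this option has to live in a JavaScript configuration file (testem.js) rather than testem.json. A minimal sketch, assuming a processor that pretty-prints non-string logs:

// testem.js -- a JavaScript config file, since JSON cannot hold functions
module.exports = {
  framework: 'jasmine',
  src_files: ['hello.js', 'hello_spec.js'],
  // illustrative processor: receives the raw log value, returns a string
  tap_log_processor: function(log) {
    return typeof log === 'string' ? log : JSON.stringify(log, null, 2);
  }
};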

Other Test Reporters

Testem has other test reporters besides TAP: dot, xunit, and teamcity. You can use the -R flag to specify one

testem ci -R dot

You can also add your own reporter.
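For instance, a reporter can be registered through the reporter option in a JavaScript config file. A rough sketch, assuming the report(prefix, data)/finish() interface that the built-in reporters implement (check the custom reporter docs for the exact contract):

// testem.js -- wiring up a hypothetical custom reporter
module.exports = {
  reporter: {
    total: 0,
    // called once per test result; prefix is the launcher name
    report: function(prefix, data) {
      this.total++;
      console.log((data.passed ? 'PASS ' : 'FAIL ') + prefix + ' - ' + data.name);
    },
    // called once after all results have been received
    finish: function() {
      console.log('ran ' + this.total + ' tests');
    }
  }
};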

Example xunit reporter output

Note that the real output is not pretty-printed.

<testsuite name="Testem Tests" tests="4" failures="1" timestamp="Wed Apr 01 2015 11:56:20 GMT+0100 (GMT Daylight Time)" time="9">
  <testcase classname="PhantomJS 1.9" name="myFunc returns true when input is valid" time="0"/>
  <testcase classname="PhantomJS 1.9" name="myFunc returns false when user tickles it" time="0"/>
  <testcase classname="Chrome" name="myFunc returns true when input is valid" time="0"/>
  <testcase classname="Chrome" name="myFunc returns false when user tickles it" time="0">
    <failure name="myFunc returns false when user tickles it" message="function is not ticklish">
      <![CDATA[
      Callstack...
      ]]>
    </failure>
  </testcase>
</testsuite>

Example teamcity reporter output

##teamcity[testStarted name='PhantomJS 1.9 - hello should say hello']
##teamcity[testFinished name='PhantomJS 1.9 - hello should say hello']
##teamcity[testStarted name='PhantomJS 1.9 - hello should say hello to person']
##teamcity[testFinished name='PhantomJS 1.9 - hello should say hello to person']
##teamcity[testStarted name='PhantomJS 1.9 - goodbye should say goodbye']
##teamcity[testFailed name='PhantomJS 1.9 - goodbye should say goodbye' message='expected |'hello world|' to equal |'goodbye world|'' details='AssertionError: expected |'hello world|' to equal |'goodbye world|'|n    at http://localhost:7357/testem/chai.js:873|n    at assertEqual (http://localhost:7357/testem/chai.js:1386)|n    at http://localhost:7357/testem/chai.js:3627|n    at http://localhost:7357/hello_spec.js:14|n    at callFn (http://localhost:7357/testem/mocha.js:4338)|n    at http://localhost:7357/testem/mocha.js:4331|n    at http://localhost:7357/testem/mocha.js:4728|n    at http://localhost:7357/testem/mocha.js:4819|n    at next (http://localhost:7357/testem/mocha.js:4653)|n    at http://localhost:7357/testem/mocha.js:4663|n    at next (http://localhost:7357/testem/mocha.js:4601)|n    at http://localhost:7357/testem/mocha.js:4630|n    at timeslice (http://localhost:7357/testem/mocha.js:5761)']
##teamcity[testFinished name='PhantomJS 1.9 - goodbye should say goodbye']

##teamcity[testSuiteFinished name='mocha.suite' duration='11091']

Command line options

To see all command line options for CI

testem ci --help

Configuration File

For the simplest JavaScript projects, the TDD workflow described above will work fine. But there are times when you want to structure your source files into separate directories, or want finer control over which files to include. This calls for the testem.json configuration file (alternatively, you can use the YAML format in a testem.yml file). It looks like

{
  "framework": "jasmine",
  "src_files": [
    "hello.js",
    "hello_spec.js"
  ]
}

The src_files entries can also be Unix glob patterns.

{
  "src_files": [
    "js/**/*.js",
    "spec/**/*.js"
  ]
}

You can also ignore certain files using src_files_ignore. Update: I've removed the ability to use a space-separated list of globs as a string in the src_files property because it disallowed matching files or directories with spaces in them.

{
  "src_files": [
    "js/**/*.js",
    "spec/**/*.js"
  ],
  "src_files_ignore": "js/toxic/*.js"
}

Read more details about the config options.

Custom Test Pages

You can also use a custom page for testing. To do this, first you need to specify test_page to point to your test page in the config file (framework and src_files are irrelevant in this case)

{
  "test_page": "tests.html",
  "launch_in_dev": [
    "Chrome"
  ]
}

Next, the test page needs to have the adapter code installed on it, as described in the next section.

Include Snippet

Include this snippet directly after your jasmine.js, qunit.js or mocha.js scripts to enable Testem with your test page.

<script src="/testem.js"></script>

Or if you are using require.js or another loader, just make sure you load /testem.js as the next script after the test framework.

The '/testem.js' served here is dynamically generated for client-side use; it should not be confused with the server-side 'testem.js'.

Dynamic Substitution

To enable dynamic substitutions within the JavaScript files in your custom test page, you must

  1. name your test page with the .mustache extension
  2. use {{#serve_files}} to loop over the set of JavaScript files to be served, and reference each file's src property to access its path (or {{#css_files}} for stylesheets)

Example:

{{#serve_files}}
<script src="{{src}}"></script>
{{/serve_files}}

{{#css_files}}
<link rel="stylesheet" href="{{src}}">
{{/css_files}}

Multiple Test Pages

You can also specify multiple test pages to run by passing an array to the test_page option.

{
  "test_page": [
    "unit-tests.html",
    "integration-tests.html"
  ]
}

This will cause Testem to run each test page in a separate launcher instance for each launcher you are using. This means that if you define 2 test pages and are using 3 launchers you will get 6 unique runs (2 per launcher).

Launchers

Testem has the ability to automatically launch browsers or processes for you. To see the list of launchers Testem knows about, you can use the command

testem launchers

This will display something like the following

Have 5 launchers available; auto-launch info displayed on the right.

Launcher      Type          CI  Dev
------------  ------------  --  ---
Chrome        browser       ✔
Firefox       browser       ✔
Safari        browser       ✔
Opera         browser       ✔
Mocha         process(TAP)  ✔

This displays the current list of launchers that are available. Launchers can launch either a browser or a custom process — as shown in the "Type" column. Custom launchers can be defined to launch custom processes. The "CI" column indicates the launchers which will be automatically launched in CI-mode. Similarly, the "Dev" column lists those that will automatically launch in dev-mode.

Customizing Browser Paths

You can add your own custom paths to browser binaries by including browser_paths and/or browser_exes options in your Testem configuration. For example:

"browser_paths": {
  "Chromium": "./node_modules/puppeteer/.local-chromium/mac-549031/chrome-mac/Chromium.app/Contents/MacOS/Chromium"
}
"browser_exes": {
  "Chromium": "chrome-custom-binary"
}

Adding a browser_path for a browser overrides all the default places Testem looks for that browser. So if the browser doesn't exist at the path you provided, the launch will fail.

Customizing Browser Arguments

Testem passes its own list of arguments to some of the browsers it launches. You can add your own custom arguments to these lists by including the browser_args option in your Testem configuration. For example:

"browser_args": {
  "Chrome": [
    "--auto-open-devtools-for-tabs"
  ]
}

You can supply arguments to any number of browsers Testem has available by using the launcher name as a key in browser_args. Values may be an array of string arguments, a single string, or an object of arguments by mode.
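For illustration, here is one browser taking a single string and another taking per-mode argument lists (the mode-keyed object form is an assumption based on the browser args docs; verify the exact schema there):

"browser_args": {
  "Firefox": "-headless",
  "Chrome": {
    "dev": ["--auto-open-devtools-for-tabs"],
    "ci": ["--headless", "--disable-gpu"]
  }
}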

Read more details about the browser argument options.

Running Tests in Node and Custom Process Launchers

To run tests in Node you need to create a custom launcher which launches a process which will run your tests. This is nice because it means you can use any test framework - or lack thereof. For example, to make a launcher that runs mocha tests, you would write the following in the config file testem.json

"launchers": {
  "Mocha": {
    "command": "mocha tests/*_tests.js"
  }
}

When you run testem, it will auto-launch the mocha process based on the specified command every time the tests are run. It will display the stdout as well as the stderr of the process inside the "Mocha" tab in the UI, and it will base the pass/fail status on the exit code of the process. In fact, because Testem can launch any arbitrary process for you, you could use it to run programs in other languages.
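Custom launchers can be wired into the set that starts automatically through the launch_in_dev and launch_in_ci options (the browser names here are illustrative):

"launchers": {
  "Mocha": {
    "command": "mocha tests/*_tests.js"
  }
},
"launch_in_dev": ["Mocha", "Chrome"],
"launch_in_ci": ["Mocha"]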

Processes with TAP Output

If your process outputs test results in TAP format, you can tell that to testem via the protocol property. For example

"launchers": {
  "Mocha": {
    "command": "mocha tests/*_tests.js -R tap",
    "protocol": "tap"
  }
}

When this is done, Testem will read in the process's stdout and parse it as TAP, and then display the test results in Testem's normal format. It will also hide the process's stdout output from the console log panel, although it will still display the stderr.

PhantomJS

PhantomJS is a WebKit-based headless browser. It's fast and it's awesome! Testem will pick it up if PhantomJS is installed on your system and the phantomjs executable is in your path. Run

testem launchers

And verify that it's in the list.

If you want to debug tests in PhantomJS, include the phantomjs_debug_port option in your testem configuration, referencing an available port number. Once testem has started PhantomJS, navigate (with a traditional browser) to http://localhost:<the port you chose> and attach to one of PhantomJS's browser tabs (probably the second one in the list). debugger statements will now break in the debugging console.
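For example (the port number is arbitrary):

"phantomjs_debug_port": 9000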

If you want to use any of the PhantomJS command line options, include the phantomjs_args option in your testem configuration. For example:

"phantomjs_args": [
  "--ignore-ssl-errors=true"
]

You can also customize the phantomjs launcher file by specifying the phantomjs_launch_script option. In this launcher you can change options like the viewPortSize. See assets/phantom.js for the default launcher.

Preprocessors (CoffeeScript, LESS, Sass, Browserify, etc)

If you need to run a preprocessor (or indeed any shell command before the start of the tests) use the before_tests option, such as

"before_tests": "coffee -c *.coffee"

And Testem will run it before each test run. For file watching, you may still use the src_files option

"src_files": [
  "*.coffee"
]

Since you want to serve the generated .js files rather than the .coffee sources, specify the serve_files option to tell Testem that

"serve_files": [
  "*.js"
]

Testem will throw up a big ol' error dialog if the preprocessor command exits with an error code, so code checkers like jshint can be used here as well.

If you need to run a command after your tests have completed (such as removing compiled .js files), use the after_tests option.

"after_tests": "rm *.js"

If you would prefer simply to clean up when Testem exits, you can use the on_exit option.
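Putting these options together, a minimal CoffeeScript setup might look like this (the jasmine framework choice is illustrative):

{
  "framework": "jasmine",
  "before_tests": "coffee -c *.coffee",
  "after_tests": "rm *.js",
  "src_files": ["*.coffee"],
  "serve_files": ["*.js"]
}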

Running browser code after tests complete

It is possible to send coverage reports or run other JavaScript in the browser by using the afterTests method.

Testem.afterTests(function(config, data, callback) {
  var coverage = window.__coverage__;
  var postBody = JSON.stringify(coverage);
  if (postBody) {
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function() {
      if (xhr.readyState === 4) {
        // report completion back to Testem once the POST finishes
        callback();
      }
    };
    xhr.open('POST', 'http://localhost:7358/', true);
    xhr.send(postBody);
  } else {
    // no coverage data collected; still signal completion
    callback();
  }
});

Custom Routes

Sometimes you may want to re-map a URL to a different directory on the file system. Maybe you have the following file structure:

+ src
  + hello.js
  + tests.js
+ css
  + styles.css
+ public
  + tests.html

Let's say you want to serve tests.html at the top-level URL /tests.html, all the JavaScript files under /js, and all the CSS under /css. You can use the "routes" option to do that

"routes": {
  "/tests.html": "public/tests.html",
  "/js": "src",
  "/css": "css"
}

DIY: Use Any Test Framework

If you want to use Testem with a test framework that's not supported out of the box, you can write your own custom test framework adapter. See customAdapter.js for an example of how to write a custom adapter.

Then, to use it, in your config file simply set

"framework": "custom"

And then make sure you include the adapter code in your test suite and you are ready to go. See here for the full example.
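For orientation, an adapter is just client-side code that listens to your framework's results and forwards them to Testem. A rough sketch of the shape (the event names and result fields follow the customAdapter.js example and should be treated as assumptions):

// hypothetical adapter -- see customAdapter.js for the real thing
function myAdapter(socket) {
  socket.emit('tests-start');
  // report each finished test as a result object
  socket.emit('test-result', {
    name: 'hello says hello',
    passed: 1,
    failed: 0,
    total: 1
  });
  socket.emit('all-test-results');
}
Testem.useCustomAdapter(myAdapter);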

Native notifications

If you'd prefer not to be looking at the terminal while developing, you can enable native notifications (e.g. Notification Center, Growl) using the -g option.
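For example, to run development mode with notifications enabled:

testem -g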

API Proxy

The proxy option allows you to transparently forward HTTP requests to an external endpoint.

Simply add a proxies section to the testem.json configuration file.

{
  "proxies": {
    "/api": {
      "target": "http://localhost:4200",
      "onlyContentTypes": ["xml", "json"]
    },
    "/xmlapi": {
      "target": "https://localhost:8000",
      "secure": false
    }
  }
}

This functionality is implemented as a transparent proxy: a request to http://localhost:7357/api/posts.json will be proxied to http://localhost:4200/api/posts.json without removing the /api prefix. Setting the secure option to false, as in the /xmlapi configuration block above, ignores TLS certificate validation and allows tests to reach that URL even if testem was launched over HTTP. Other available options can be found here: https://github.com/nodejitsu/node-http-proxy#options

To limit the functionality to only certain content types, use "onlyContentTypes".

Example Projects

I've created example projects for various setups.

Known Issues

  1. On Windows, Mocha fails to run under Testem due to an issue in Node core. Until that gets resolved, I've made a workaround for Mocha. To install this fork of Mocha, do

     npm install https://github.com/airportyh/mocha/tarball/windowsfix -g
    
  2. If you are using prototype.js version 1.6.3 or below, you will encounter issues.

Contributing

If you want to contribute to the project, I am going to do my best to stay out of your way.

Roadmap

  1. BrowserStack integration - following Bunyip's example
  2. Figure out a happy path for testing on mobile browsers (maybe BrowserStack).

Core Maintainer(s)

Community

Credits

Testem depends on the following great software

Contributors

airportyh, avdv, brettz9, dbrans, dcombslinkedin, dependabot[bot], ef4, endangeredmassa, getsnoopy, greenkeeper[bot], greenkeeperio-bot, igorlima, inossidabile, jasonkarns, jbryson3, jjnypr, johanneswuerbach, karlvr, mrchocolatine, raynos, rixth, rwjblue, scalvert, smottt, stefanpenner, step2yeung, techn1x, trentmwillis, wagenet, zigomir


testem's Issues

Support file based modularity

The workflow is

  • most of the time when I save a test file I want you to only run that test file.
  • sometimes when I save a test file I want to run the entire test suite.

It would be nice to have a clever file watcher that can run a subset of my test suite.

Render test_page as a template

I've got many javascript modules, each with their own qunit test file. So putting them in the test_page is rather cumbersome, and I'd like to use the src_files config to specify a glob that grabs them all, the same way the framework runners do. And then I'd like to run testem ci to run all the tests.

If the test_page were rendered as a template, in a similar fashion to how the framework runner templates are, then testem could better integrate with projects that use a dependency loader, such as StealJS or RequireJS.

Here's my test_page so far: https://gist.github.com/3991896

This page works with testem's watching/reloading feature nicely, but you can see the manual entries for 2 qunit tests.

You can see that with StealJS, dependencies (including testem.js) are loaded via a pattern that uses a JS call:

steal('lib/first.js', 'lib/second.js');

instead of HTML:

<script src="lib/first.js"></script>
<script src="lib/second.js"></script>

I'd like to have my src_files set to lib/**/*_test.js and then I would finally be able to do testem ci in a Jenkins job.

no launch or file watching with chrome on windows 7

Using the config file to auto-launch Chrome, some Chrome processes start in Task Manager and a Chrome tab appears in the CLI, but Chrome does not open. Also, if I open Chrome manually, testem seems to be watching files and running tests but does not send the changes to Chrome. If I run live-reload in Chrome, it auto-refreshes and picks up the changes. So it seems as if everything is working on the testem side but commands are not being passed to Chrome.

I started with Chrome v21 and then updated to v24. Firefox works perfectly without live-reload.

Could this be a security setting on my PC not allowing commands? I tried with 'run as administrator' and also tried disabling AV shields.

Thanks

show full stack traces

{
    "launchers": {
        "tap": {
            "command": "npm test"
            , "protocol": "tap"
        }
    }
    , "src_files": [
        "./**/*.js"
    ]
    , "launch_in_dev": [
        "tap"
    ]
}

This strips the stack traces off of my TAP output, thus making debugging a pain in the ass.

filewatcher and new files

It doesn't handle watching new files in an elegant fashion.

Ideally we want to add a watcher to each folder, detect newly added files, and then add them to the watcher.

Some questions

Hi there,

I've found testem and I'm a little confused about the project. So if you don't mind, here are my questions.

  • Is it possible to integrate testem with jenkins or hudson?
  • Is there a way to get coverage reports?
  • What is the main difference between testem and other distributed runners like js-test-driver?

Thanks, cheers
Miguel

Exit code zero on CI failure?

I'm sure I'm just missing a configuration option, but I'm seeing something odd.

I run testem with this command:

$ ./node_modules/testem/testem.js ci -f /config/spec.json

which is this config:

{
  "framework" : "jasmine",
  "launch_in_dev" : ["PhantomJS"],
  "launch_in_ci" : ["PhantomJS"],
  "src_files" : [
    "generated/js/app.js",
    "generated/js/spec.js"
  ]
}

And then I get this result:


1..76
# tests 76
# pass  75
# fail  1

So I immediately echo $? and find that it exited with code 0, which is causing my build to stay green when it should go red.

$ echo $?
0

NPM install: Cannot find module './lib/dev_mode_app'

$ npm install testem -g
$ testem

module.js:340
    throw err;
          ^
Error: Cannot find module './lib/dev_mode_app'
    at Function.Module._resolveFilename (module.js:338:15)
    at Function.Module._load (module.js:280:25)
    at Module.require (module.js:362:17)
    at require (module.js:378:17)
    at Object.<anonymous> (/usr/local/lib/node_modules/testem/testem.js:32:5)
    at Module._compile (module.js:449:26)
    at Object.Module._extensions..js (module.js:467:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
    at Module.runMain (module.js:487:10)

Can't traverse file paths or globs in subdirectories.

Hi, I was testing out testem and tried to configure it to run based on a complex file structure in an existing app, my config.yml looks like this:

framework: jasmine
src_files:
- health/public/ui/default/js/vendor/jquery.min.js
- health/public/ui/default/js/vendor/jquery.datatables.min.js
- health/public/ui/default/js/vendor/jquery.peity.min.js
- health/public/ui/default
- health/public/tests/spec/helpers/*.js
- health/public/tests/spec/*-spec.js

I can connect browsers but testem can't find any files other than the ones directly in the same path as config.yml. Am I doing something wrong?

Run testem with undefined launcher

If you set the config option

launch_in_dev:
- PhantomJS

but do not install Phantom, it fails:

C:\Users\ekaragodin\AppData\Roaming\npm\node_modules\testem\lib\ci_mode_app.js:67                
            console.log("# Launching " + launcher.name)                                          
                                                 ^                                               
TypeError: Cannot read property 'name' of undefined                                              
    at App.runAllTheTests (C:\Users\ekaragodin\AppData\Roaming\npm\node_modules\testem\lib\ci_mod
e_app.js:67:50)         
...                                                                         

A clearer error message is needed.

running headless on ubuntu (phantomjs)

hey, first off thanks for creating this.

I'm trying to run this on a VirtualBox VM (Ubuntu 10.04) with phantomjs installed. Now, if I start the test suite it just hangs at:

vagrant@lucid32:/vagrant_data$ testem ci
# Open the URL below in a browser to connect.
# http://*.*.*.*:3580
Waiting for 0 more browsers...

is there anything else I need to consider, like starting xvfb and launching a display or will phantomjs work this out by itself?

Error messages in TAP output are unquoted

Example output

# Open the URL below in a browser to connect.
# http://192.168.1.130:3580
# Ok! Starting tests with browsers: PhantomJS 1.5
# ....

ok 1 - PhantomJS 1.5  hello should say hello.
ok 2 - PhantomJS 1.5  hello level 2 should not say not hello.
ok 3 - PhantomJS 1.5  hello level 2 should throw another exception.
not ok 4 - PhantomJS 1.5  hello level 2 level 3 should throw exception.
  ---
    message: TypeError: 'undefined' is not an object
  ...

1..4
# tests 4
# pass  3
# fail  1

Expected output

# Open the URL below in a browser to connect.
# http://192.168.1.130:3580
# Ok! Starting tests with browsers: PhantomJS 1.5
# ....

ok 1 - PhantomJS 1.5  hello should say hello.
ok 2 - PhantomJS 1.5  hello level 2 should not say not hello.
ok 3 - PhantomJS 1.5  hello level 2 should throw another exception.
not ok 4 - PhantomJS 1.5  hello level 2 level 3 should throw exception.
  ---
    message: "TypeError: 'undefined' is not an object"
  ...

1..4
# tests 4
# pass  3
# fail  1

This is related to this isaacs/yamlish@dd495bb and isaacs/yamlish#2

exception

/home/raynos/Documents/chain-stream/node_modules/testem/lib/runners.js:245
        this.tapConsumer.removeAllListeners()
                         ^ test /home/raynos/Documents/chain-stream
TypeError: Cannot call method 'removeAllListeners' of null
    at Backbone.Model.extend.onTapEnd (/home/raynos/Documents/chain-stream/node_modules/testem/lib/runners.js:245:26)
    at TapConsumer.EventEmitter.emit (events.js:115:20)
    at TapConsumer.end (/home/raynos/Documents/chain-stream/node_modules/tap/lib/tap-consumer.js:98:8)
    at Socket.onend (stream.js:66:10)
    at Socket.EventEmitter.emit (events.js:115:20)
    at Pipe.onread (net.js:416:51)

Not sure what causes this. Haven't looked into it. It clearly happens when something weird happens with my tap process.

Can't set headers after they are sent.

Hi,

this is the error I get after I installed testem with npm and started it on the CLI: Can't set headers after they are sent.

Stacktrace:

http.js:644
    throw new Error('Can\'t set headers after they are sent.');
          ^
Error: Can't set headers after they are sent.
    at ServerResponse.OutgoingMessage.setHeader (http.js:644:11)
    at ServerResponse.res.setHeader (/usr/local/lib/node_modules/testem/node_modules/express/node_modules/connect/lib/patch.js:62:20)
    at exports.send (/usr/local/lib/node_modules/testem/node_modules/express/node_modules/connect/lib/middleware/static.js:168:11)
    at Object.oncomplete (fs.js:308:15)
    at process.startup.processMakeCallback.process._makeCallback (node.js:248:20)

Steps to reproduce:

  • sudo npm install testem -g
  • cd /tmp && mkdir test && cd test
  • testem -l firefox
  • => new tab is created, blank page, stacktrace is displayed
  • or:
  • testem
  • => then I open http://localhost:7357/ in the browser and the exception is immediately displayed in the console, and a blank page in the browser

Tested with testem 0.2.23 and 0.2.24.

I have no idea what's wrong.

Have Test'em run in JetBrains IDE's (phpStorm, RubyMine, pyCharm, WebStorm etc)

I was using another runner before testem called Testacular (http://vojtajina.github.com/testacular/) and recently switched to Test'em.

I love Test'em and thank you for all your hard work on it, but I wish it ran in the JetBrains IDEs properly. Here is a screencast of Testacular running in the JetBrains IDE, WebStorm: https://www.youtube.com/watch?v=MVw8N3hTfCI&#t=7m39s

So I figured I could probably do the same with Test'em, but I get errors. See below.

/usr/local/bin/node ../../../../usr/local/bin/testem
tty.setRawMode: Use `process.stdin.setRawMode()` instead.

tty.js:37
    throw new Error('can\'t set raw mode on non-tty');
          ^
Error: can't set raw mode on non-tty
    at Object.<anonymous> (tty.js:37:11)
    at Object.deprecated (util.js:75:15)
    at EventEmitter.exports.Charm (/usr/local/lib/node_modules/testem/node_modules/charm/index.js:43:13)
    at module.exports (/usr/local/lib/node_modules/testem/node_modules/charm/index.js:29:12)
    at initCharm (/usr/local/lib/node_modules/testem/lib/ui/screen.js:29:23)
    at Object.<anonymous> (/usr/local/lib/node_modules/testem/lib/ui/screen.js:43:18)
    at Module._compile (module.js:449:26)
    at Object.Module._extensions..js (module.js:467:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)

Process finished with exit code 1

The strange thing is, if I run the command on the first line from my Terminal

/usr/local/bin/node ../../../../usr/local/bin/testem

it works just fine. It only errors when it runs within the IDE. Any ideas?

including runner snippet causes syntaxError

whenever I add:

<script>
if (location.hash === '#testem')
  document.write('<script src="/testem.js"></'+'script>')
</script>

to my test page all I get is Uncaught SyntaxError: Unexpected token < when I run my test page in the browser.

The odd thing is that the error location is line 1, so I'm wondering what this could be.

Testem is not detecting Opera

I have opera installed on my system. Testem ci -l does not list opera as one of the valid browsers, but when I open opera and navigate to the testem runner page, opera is detected as a connected browser. Opera is installed on my windows box at C:\Program Files (x86)\Opera

npm install -g fails on mac

[email protected] install /usr/local/lib/node_modules/testem/node_modules/socket.io/node_modules/socket.io-client/node_modules/ws
node install.js

sh: node: command not found
npm http GET https://registry.npmjs.org/traverse
npm http GET https://registry.npmjs.org/uglify-js
npm ERR! [email protected] install: node install.js
npm ERR! sh "-c" "node install.js" failed with 127
npm ERR!
npm ERR! Failed at the [email protected] install script.
npm ERR! This is most likely a problem with the ws package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! node install.js
npm ERR! You can get their info via:
npm ERR! npm owner ls ws
npm ERR! There is likely additional logging output above.

npm ERR! System Darwin 11.4.2
npm ERR! command "node" "/usr/local/bin/npm" "install" "testem" "-g"
npm ERR! cwd /Users/mbenin
npm ERR! node -v v0.8.6
npm ERR! npm -v 1.1.48
npm ERR! code ELIFECYCLE
npm http 304 https://registry.npmjs.org/traverse
npm http 304 https://registry.npmjs.org/uglify-js
npm ERR! Error: ENOENT, lstat '/usr/local/lib/node_modules/testem/node_modules/socket.io/node_modules/socket.io-client/node_modules/active-x-obfuscator/node_modules/zeparser/benchmark.html'
npm ERR! If you need help, you may report this log at:
npm ERR! http://github.com/isaacs/npm/issues
npm ERR! or email it to:
npm ERR! [email protected]

npm ERR! System Darwin 11.4.2
npm ERR! command "node" "/usr/local/bin/npm" "install" "testem" "-g"
npm ERR! cwd /Users/mbenin
npm ERR! node -v v0.8.6
npm ERR! npm -v 1.1.48
npm ERR! path /usr/local/lib/node_modules/testem/node_modules/socket.io/node_modules/socket.io-client/node_modules/active-x-obfuscator/node_modules/zeparser/benchmark.html
npm ERR! fstream_path /usr/local/lib/node_modules/testem/node_modules/socket.io/node_modules/socket.io-client/node_modules/active-x-obfuscator/node_modules/zeparser/benchmark.html
npm ERR! fstream_type File
npm ERR! fstream_class FileWriter
npm ERR! code ENOENT
npm ERR! errno 34
npm ERR! fstream_stack Writer._finish.er.fstream_finish_call (/usr/local/lib/node_modules/npm/node_modules/fstream/lib/writer.js:284:26)
npm ERR! fstream_stack Object.oncomplete (fs.js:297:15)
npm ERR!
npm ERR! Additional logging details can be found in:
npm ERR! /Users/mbenin/npm-debug.log
npm ERR! not ok code 0

Add test_page as a command line parameter

My current testem.yml has just one line:

test_page: specs/runner.html

Currently we have all kinds of services requiring us to add config files to the root folder of the project repository: grunt, travis-ci, npm, etc, etc...

My root folder could be a little bit cleaner if I could run the following command line:

testem ci --launch phantomjs --test_page specs/runner.html

Hooking up existing tests

Not an issue per se; I just could not find in the docs a way to hook up my existing qunit tests to start using testem. Not sure this is the right forum for help, but I'll give it a shot...

I have a large application built using javascriptmvc and TDD. As part of the javascriptmvc framework we use stealjs, instead of requirejs for modular code. The HTML driver page loads stealjs, followed by the main qunit.js file (not qunit itself).

HTML file:

<script type='text/javascript' src='../steal/steal.js?corp/test/qunit'></script>

corp.test.qunit.qunit.js loaded by the statement above:

steal(  'funcunit/qunit' )
.then(
        'corp/common/test/qunit',
        'corp/subscription_manager/test/qunit',
        'corp/jquery/test/qunit',
        'corp/util/test/qunit'
);

The first statement in the snippet above loads in qunit itself.

  • my tests live in corp/test/qunit/qunit.js

  • i created a testem.yml in corp/testem.yml

  • in the yml file I entered:

    framework: qunit
    src_files:
    - test/qunit/qunit.js

  • when I run testem from corp/, it loads localhost:7357/testem/qunit.js, and not corp/test/qunit/qunit.js (which includes qunit itself, as described above)

What am I doing wrong? Any help/input appreciated...

Guilherme

Consider upnode

upnode has a more stable reconnecting system.

The upnode test-runner to testem-server communication API is also a lot better than your custom socket.io based events

Add a tab for the test results in node environment

Some unit tests are written in such a way that they can be run in either node or the browser (consider testling for example).

What would be the cleanest way to spawn a process that runs the tests in node and then prints their output in a testem tab?

Improvements to how CoffeeScript is Supported

In an ideal world I would rather not have to specify serve_files vs src_files inside the testem.yml/json configuration file; Testem would just be smart enough to figure out that it should auto-compile any coffee files it finds in src_files and serve those up automatically inside the generated test runners.

Any thoughts on making it easier to support simplifying this kind of workflow?

Allow additional configuration for express server

I modified testem to allow additional configuration of the express server application (in a separate node module), to make it possible to expose, for example, REST services needed for tests, or *.less compilation, or any other custom server stuff.

Have you thought about such a feature?

Implement command line flag to disable file watching

Using Test'em I run 5000+ unit tests, each of which requires 3 additional files which are loaded using XMLHttpRequest. After a couple of hundred tests, Node crashes with the EMFILE error.

Adding the following line to server.js prevents this crash:

fs.watch = function(){};

I don't care about automatically refreshing. In a continuous integration environment it doesn't even make sense. A flag to disable the watching of files would be greatly appreciated.

testem launchers does not find Chrome, Opera hangs in ci mode

I don't know what I am doing wrong, but running testem launchers does not find Chrome?
I am using the node installer (npm install testem -g).
I have testem running on a Windows 7 machine as well as a Windows 8 machine, and both are not finding Chrome.
Chrome version is Version 22.0.1229.79 m

Also, running testem in ci mode hangs Opera, even though connecting to the local testem server works fine.

Firefox doesn't remember settings

Using OSX 10.8.2 and firefox 16.0.1.

When the tests are run with either testem, testem -l Firefox or testem ci then firefox always asks to be set as default browser.

add after_tests/on_exit configuration

If you need a preprocessor (coffee), then it's also good to remove the compiled js files.
If not after the whole test suite has run, then at least when quitting testem.

on_exit: rm -rf src/*.js

*Update: I've changed the code example to on_exit so as to not confuse more people.

Can not specify port in config file (testem.yml)

If I try to specify
port: XXX
in testem.yml, it does not work because the port is taken from progOptions, which defaults to 7357.

Is it supposed behavior?
I understand that command line options should probably override static settings in config, but shouldn't settings in config override default options?

Alex

add a no runner

norunner.html would be a simple

<!doctype html>
<html>
<head>
{{#scripts}}<script src="{{.}}"></script>{{/scripts}}
</head>
<body>
</body>
</html>

Document (or fix) the EMFILE error when watching many files

I'm trying out Testem on a fairly large codebase (about 900 files incl tests) and I ran into the EMFILE error for too many open files pretty quickly (on OS X 10.7.5) when watching all those files.

I'm not very experienced with node, so I had to dig around quite a lot to fix it with this OS setting (found here):

sudo echo "limit maxfiles 4096 400000" >> /etc/launchd.conf

The default setting for the soft limit was 256 - far too low. The problem was that the testem crash was quite cryptic; I could only see fragments of an error message in my Terminal window, like this:

em/node_modules/rimraf/rimraf.js:67:409:5)ncher.js:82:17)  

I think it would be helpful for newcomers if you could document this somewhere obvious, or ideally work around the issue (I don't know if it's possible).

Maybe the existing issue #29 for folder watching would solve it?

My config looks something like this:

"src_files": [
    "src/**/*.js",
    "lib/**/*.js",
    "test-fixtures/**/*.html"
],
"serve_files": [
    "test/**/*.js"
]

Only the tests are served. They pull in their dependencies, so the dependencies don't need to be served but watched.

When running testem ci -b Firefox Tests don't load correctly

I was testing out the testem continuous integration feature, and if I try to test firefox, the prompt returns the following:

testem ci -b Firefox

Launching Firefox

1..0

tests 0

ok

but when I run from the browser all 17 tests I have in the folder display. If I run testem ci with all browsers, firefox works as intended.

CI Mode Hangs

I've updated to Testem 0.2.1 (and using Node 0.8.11 on Mac OSX 10.8.2) and whenever I run testem ci the process never completes. The tests execute properly in all launchers but the CI mode process just hangs.

I cloned the Testem repo and tested this with a few of the project configurations that you have inside of examples and it seems to hang all the time.

Can you reproduce this on your machine?

Running testem TDD mode with multiple browsers causes crash

Run testem with phantom JS. Connect to page with Opera, Chrome, Firefox, and IE. Rerun tests on testem screen with either saving a file or hitting enter. Testem stops and throws this error:
"C:\Users\AppData\Roaming\npm\node_modules\testem\lib\appview.js:547
.write(Array(this.appview.get('cols') - startCol + 1).join(Chars.h
^
RangeError: Invalid array length
at [object Object].renderLine (C:\Users\AppData\Roaming\npm\node_modules\testem\lib\appview.js:547:20)
at [object Object].render (C:\Users\AppData\Roaming\npm\node_modules\testem\lib\appview.js:541:18)
at [object Object]. (C:\Users\AppData\Roaming\npm\node_modules\testem\lib\appview.js:595:25)
at [object Object].trigger (C:\Users\AppData\Roaming\npm\node_modules\testem\node_modules\backbone\backbone.js:163:27)
at [object Object]._onModelEvent (C:\Users\AppData\Roaming\npm\node_modules\testem\node_modules\backbone\backbone.js:844:20)
at [object Object].trigger (C:\Users\AppData\Roaming\npm\node_modules\testem\node_modules\backbone\backbone.js:170:27)
at [object Object].add (C:\Users\AppData\Roaming\npm\node_modules\testem\node_modules\backbone\backbone.js:631:15)
at [object Object].push (C:\Users\AppData\Roaming\npm\node_modules\testem\node_modules\backbone\backbone.js:662:12)
at [object Object]. (C:\Users\AppData\Roaming\npm\node_modules\testem\lib\appview.js:585:25)
at [object Object].trigger (C:\Users\AppData\Roaming\npm\node_modules\testem\node_modules\backbone\backbone.js:163:27)"

Opera is unstable

The testem runner sometimes has multiple opera tabs because something is going wrong (probably websocket reconnects)

not sure where the bug is coming from or how to solve it though

Add "no_phantom" parameter

I would add some parameter that would exclude automatic phantom launch if its not needed, for example
no_phantom: true

What do you think?

support mocha's latest versions

When I use a "test_page" with mocha 1.4 or 1.6, the results are shown in the browser but not in the console.

if i include the file "public/testem/mocha.js" in my "test_page" everything works fine.

What version of mocha is public/testem/mocha.js?

Running different tests suites on one testem server

That is a kind of idea to think through.

As I described earlier in the issue (#35) I use testem to run tests suites depending on location hash. By test suite I mean a set of files including tests and sources. (for example browser with hash #test=models.my_model runs tests from test/models/my_model.test.js)

So, browsers (maybe different tabs in one browser) can attach to testem server to run different test suites.

The questions that rise when using such approach:

  1. How to identify browsers attached in a testem console (besides browsers name and version, maybe title?)
  2. FileWatch problem: To rerun tests (reload page) for particular test suite only if sources files that are included in that test suite have been changed. At the moment testem will reload all attached browsers if any of source files changed.

What do you think, or this is too sophisticated?

Running testem without a testem.json/yml

testem ci -t test/mocha-1.6.0/index.html -p 8080 -l Safari

Does everything it's supposed to as far as opening the browser and running the tests.

After it runs Safari it just hangs:

Launching Safari

.

TAP version 13
ok 1 - Safari Should return getsBored should return gets bored

I have to ctrl c to get out.

Also - I noticed that if I run:

testem ci -t test/mocha-1.6.0/index.html

Does the same with Chrome:

Launching Chrome

.

TAP version 13
ok 1 - Chrome Should return getsBored should return gets bored

I tried setting a --timeout 3:

.
timers.js:96
if (!process.listeners('uncaughtException').length) throw e;
^
ReferenceError: browser is not defined
at Object._onTimeout (/usr/local/lib/node_modules/testem/lib/ci_mode_app.js:132:53)
at Timer.ontimeout (timers.js:94:19)

It would also be nice to specify where you want your testem.log file to go when debugging.

I'm on Lion.

Can I execute testem for many test pages?

This is for the case of CI mode with Jenkins.
I want to execute many tests and output a single TAP result, like the JSON specified below.

{
'test_page' : [
'test/1.html',
'test/2.html',
'*/1.html'
]
}

For now, I have temporarily developed testem-multi, which wraps testem for this case.
https://github.com/sideroad/testem-multi

Will you be adding support for this case to testem in the future?

Make colors optional

Red in the error tab is annoying me. I want "no colors" so that my own colouring of logging output shines through.

Also, red is hard to read on my background, which is reddish.
