
sass-spec

A cross-implementation Sass test suite


sass-spec is the official Sass test suite. It's used by all major Sass implementations to ensure that they correctly implement the language.

Language Specs

The bulk of this repository is taken up by language specs, which test how accurately an implementation implements the Sass language. They live in the spec directory.

Running Language Specs

Before running specs, you'll need to install Node.js 14.14 or newer. Then, from the root of this repo, run npm install.

From there, it depends on which implementation you're testing:

Dart Sass

To run specs against Dart Sass, the reference implementation of Sass that's used for the sass package on npm, you'll first need to install Dart. Then run:

# If you already have a clone of the Dart Sass repo, you can use that instead.
git clone https://github.com/sass/dart-sass
(cd dart-sass; dart pub get)
export DART_SASS_PATH=`pwd`/dart-sass

npm run sass-spec -- --dart $DART_SASS_PATH

LibSass

As LibSass is approaching end-of-life and hasn't had new feature changes in years, this repository no longer supports running tests against it.

Spec Structure

Each spec is defined by a directory with an input.scss or input.sass file and either:

  • An output.css file, in which case the spec asserts that the Sass implementation compiles the input to the output. These specs are known as "success specs".
  • An error file, in which case the spec asserts that the Sass implementation prints the error message to standard error and exits with a non-zero status code when it compiles the input. These specs are known as "error specs".
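For example (directory and file contents hypothetical), a success spec and an error spec might be laid out on disk as:

```
spec/my-feature/success-case/input.scss   # Sass source to compile
spec/my-feature/success-case/output.css   # expected CSS output
spec/my-feature/error-case/input.scss     # Sass source that should fail
spec/my-feature/error-case/error          # expected standard-error output
```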

These files may also have variants that are specific to individual implementations.

The path to the spec serves as the spec's name, which should tersely describe what it's testing. Additional explanation, if necessary, is included in a silent comment in the input file. Specs may also contain additional files that are used by the input file, as well as various other features which are detailed below.

HRX

Most specs are stored in HRX files, which are human-readable plain-text archives that define a virtual filesystem. This format makes it easy for code reviewers to see the context of specs they're reviewing. The spec runner treats each HRX file as a directory with the same name as the file, minus .hrx. For example:

<===> input.scss
ul {
  margin-left: 1em;
  li {
    list-style-type: none;
  }
}

<===> output.css
ul {
  margin-left: 1em;
}
ul li {
  list-style-type: none;
}

HRX archives can also contain directories. This allows us to write multiple specs for the same feature in a single file rather than spreading them out across hundreds of separate tiny files. By convention, we include an HRX comment with 80 = characters between each spec to help keep them visually separate. For example:

<===> unbracketed/input.scss
a {b: is-bracketed(foo bar)}

<===> unbracketed/output.css
a {b: false}

<===>
================================================================================
<===> bracketed/input.scss
a {b: is-bracketed([foo bar])}

<===> bracketed/output.css
a {b: true}

Each HRX archive shouldn't be much longer than 500 lines. Once one gets too long, its subdirectories should be split out into separate archives beneath a physical directory. Conversely, if a given directory contains many small HRX archives, they should be merged together into one larger file. This helps ensure that the repo remains easy to navigate.

The only specs that aren't written in HRX format are those that include invalid UTF-8 byte sequences. The HRX format is itself written in UTF-8, so it's unable to represent the files in these specs.

Specifying Warnings

By default, Sass implementations are expected to emit nothing on standard error when executing a success spec. However, if a warning file is added to the spec directory, the spec will assert that the Sass implementation prints that warning message to standard error as well as compiling the output. This is used to test the behavior of the @debug and @warn rules, as well as various warnings (particularly deprecation warnings) emitted by the Sass implementation itself.
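As an illustrative sketch (the input and the exact warning text below are hypothetical; the real warning file must match whatever the implementation actually prints, including any stack trace), a warning spec in HRX format might look like:

```
<===> input.scss
@warn "don't do this";
a {b: c}

<===> output.css
a {
  b: c;
}

<===> warning
WARNING: don't do this
    input.scss 1:1  root stylesheet
```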

Warnings can't be specified for error specs, since everything an implementation emits on standard error is considered part of the error message that's validated against error.

Implementation-Specific Expectations

Sometimes different Sass implementations produce different but equally-valid CSS outputs or error messages for the same input. To accommodate this, implementation-specific output, error, and warning files may be created by adding -dart-sass after the file's name (but before its extension, in the case of output.css).

When a spec runs for an implementation with an implementation-specific expectation, the normal expectation is ignored completely in favor of the implementation-specific one. It's even possible (although rare) for one implementation to expect an input file to produce an error while another expects it to compile successfully.
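For example (spec contents hypothetical), a spec with a Dart Sass-specific output expectation would contain:

```
input.scss
output.css            # expectation for implementations without an override
output-dart-sass.css  # used instead of output.css when testing Dart Sass
```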

Options

Metadata for a spec and options for how it's run can be written in an options.yml file in the spec's directory. This file applies recursively to all specs within its directory, so it can be used to configure many specs at once. All options must begin with :.
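For instance (paths hypothetical), an options.yml placed partway down the tree configures every spec beneath it:

```
spec/my-feature/options.yml          # applies to both specs below
spec/my-feature/case-one/input.scss
spec/my-feature/case-two/input.scss
```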

All options that are supported for new specs are listed below. A few additional legacy options exist that are no longer considered good style and will eventually be removed.

:todo

---
:todo:
- sass/dart-sass#123456

This option indicates implementations that should add support for a spec, but haven't done so yet. When running specs for a given implementation, all specs marked as :todo for that implementation are skipped by default. This ensures that the build remains green while clearly marking which specs are expected to pass eventually.

Implementations can (and should) be specified as shorthand GitHub issue references rather than plain names. This makes it easy to track whether the implementation has fixed the issue, and to see which specs correspond to which issue. When marking a spec as :todo for an implementation, please either find an existing issue to reference or file a new one.

If the --run-todo flag is passed to npm run sass-spec, specs marked as :todo for the current implementation will be run, and their failures will be reported.

If the --probe-todo flag is passed to npm run sass-spec, specs marked as :todo for the current implementation will be run, but a failure will be reported only if those specs pass. This is used to determine which specs need to have :todo removed once a feature has been implemented. It can be used in combination with --interactive to automatically remove :todos for these specs.

:warning_todo
---
:warning_todo:
- sass/dart-sass#123456

This option works like :todo, except instead of skipping the entire test for listed implementations it only skips validating that spec's warnings. The rest of the spec is run and verified as normal. This should not be used for error specs.

:ignore_for
---
:ignore_for:
- dart-sass

This option indicates implementations that are never expected to be compatible with a given spec. It's used for specs for old features that some but not all implementations have dropped support for.

Spec Style

The specs in this repo accumulated haphazardly over the years from contributions from many different people, so there's not currently much by way of unified style or organization. However, all new specs should follow the style guide, and old specs should be migrated to be style-guide compliant whenever possible.

Interactive Mode

If you pass --interactive to npm run sass-spec, it will run in interactive mode. In this mode, whenever a spec would fail, the spec runner stops and provides the user with a prompt that allows them to inspect the failure and determine how to handle it. This makes it easy to add implementation-specific expectations or mark specs as :todo. For example:

In test case: spec/core_functions/color/hsla/four_args/alpha_percent
Output does not match expectation.
i. Show me the input.
d. show diff.
O. Update expected output and pass test.
I. Migrate copy of test to pass on dart-sass.
T. Mark spec as todo for dart-sass.
G. Ignore test for dart-sass FOREVER.
f. Mark as failed.
X. Exit testing.

Any option can also be applied to all future occurrences of that type of failure by adding ! after it. For example, if you want to mark all failing specs as :todo for the current implementation, you'd type T!.

Runner Tests

The unit tests for the spec runner are located in the test/ directory. To run these unit tests, run:

npm run test

JS API Specs

In addition to the Sass language itself, the Sass specification describes a JavaScript API that should be used when exposing a Sass implementation in JavaScript. This repository also contains tests for the JavaScript API, located in the js-api-spec directory.

Running JS API Specs

JS API specs are run using npm run js-api-spec. It takes two mandatory arguments:

  • --sassSassRepo: The path to a clone of the Sass language specification repository. This is used to load the type declarations for the JavaScript API, whose canonical form is written as part of the specification.

  • --sassPackage: The path to the npm package to test. This package should expose an implementation of the Sass JavaScript API.

The JS API specs are run using Jasmine.

Dart Sass

To run specs against Dart Sass, the reference implementation of Sass that's used for the sass package on npm, you'll first need to install Dart. Then run:

# If you already have a clone of the Sass language repo, you can use that
# instead.
git clone https://github.com/sass/sass
export SASS_SASS_PATH=`pwd`/sass

# If you already have a clone of the Dart Sass repo, you can use that instead.
git clone https://github.com/sass/dart-sass
(
  cd dart-sass
  dart pub get
  dart run grinder pkg-npm-dev
)
export DART_SASS_PATH=`pwd`/dart-sass

npm run js-api-spec -- --sassSassRepo $SASS_SASS_PATH --sassPackage $DART_SASS_PATH/build/npm

Whenever you modify Dart Sass, make sure to re-run dart run grinder pkg-npm-dev to rebuild the JavaScript output.

Browser Build

To run specs against Dart Sass compiled for a browser context, add the --browser flag to the above command:

npm run js-api-spec -- --sassSassRepo $SASS_SASS_PATH --sassPackage $DART_SASS_PATH/build/npm --browser

Embedded Host

To run specs against the Node Embedded Host, which embeds Dart Sass as a subprocess for increased performance and is available as the sass-embedded package on npm, you'll first need to install Dart. Then run:

# If you already have a clone of the Sass language repo, you can use that
# instead.
git clone https://github.com/sass/sass
export SASS_SASS_PATH=`pwd`/sass

# If you already have a clone of the Dart Sass repo, you can use that instead.
git clone https://github.com/sass/embedded-host-node
(
  cd embedded-host-node
  npm install
  npm run init
  npm run compile
)
export SASS_EMBEDDED_PATH=`pwd`/embedded-host-node

npm run js-api-spec -- --sassSassRepo $SASS_SASS_PATH --sassPackage $SASS_EMBEDDED_PATH

Whenever you modify the Sass embedded host, make sure to re-run npm run compile to rebuild the JavaScript output.

Implementation-Specific Expectations

The js-api-spec/utils.ts file exposes a sassImpl getter that returns the name of the implementation (at the time of writing, either 'dart-sass' or 'sass-embedded'). You can use this getter to give specs different behavior for different implementations if necessary.

The utils file also exposes a skipForImpl() function, which skips an entire block of specs for an implementation. This is typically used when testing behavior that isn't yet supported by all implementations.
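The snippet below is a self-contained sketch of that pattern. `sassImpl` and `skipForImpl` here are local stand-ins with assumed shapes, not imports of the real helpers (the real `skipForImpl` presumably hooks into the test framework's own skipping mechanism):

```typescript
// Stand-ins mirroring the helpers described above (shapes assumed).
type Impl = 'dart-sass' | 'sass-embedded';
const sassImpl: Impl = 'dart-sass'; // the real getter detects this at runtime

// Run a block of specs only when the current implementation is NOT
// the one being skipped.
function skipForImpl(impl: Impl, block: () => void): void {
  if (sassImpl !== impl) block();
}

let ran = false;
skipForImpl('sass-embedded', () => {
  ran = true; // runs for dart-sass, skipped when testing sass-embedded
});
console.log(ran);
```

With `sassImpl` set to `'dart-sass'`, the block runs and this prints `true`.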


sass-spec's Issues

[FR] Add automated way to create bug/todo spec tests

Extracting feature request from #845

As a user or dev I would like to easily

  1. create and submit new spec tests for bugs
  2. activate passing spec tests
  3. push these to personal github repo
  4. automatically create a pull request

I already took a first shot at this with gen_libsass_todo. There are quite a few problematic steps involved, mainly due to the interaction with git. The workflow I implemented in that batch file is:

  • take issue number from input
  • read content for test from %issue%.scss
  • fetch latest version from upstream git (i.e. origin)
  • create branch todo/issue_%ISSUE% from master
  • create folder in todo-tests and copy test case
  • run sass-spec to generate files for expected output
  • add files to git and commit with standard message
  • optionally push to remote repository (2nd argument)

I think this per se is pretty much the minimal workflow I would expect to support devs (and would fulfill point 1). The process is error prone, as it hinges on how we interact with git. One question is how fault-tolerant it must be. IMO it will never be 100% bulletproof; there are just too many variables between local and remote. I see most problems on the local side, since we need to create new branches (I guess that is a given), and that is only possible with a clean tree. There are a lot of abort conditions we need to check, and the question remains how far we want to go to help the user resolve any git problems.

I guess in a first shot we need to rely on the help of potential users (probably just "us") to progress from there. If you mess up, you may need to remove the remote branches that were created so you can push again, and so on. My only hope here is that we can probably use a Ruby git binding (I suspect such a thing exists?) to make our lives easier when interacting with git (therefore giving end users a better experience).

Ultimately we may want to directly create GitHub issues through automated bug reports (automatically attaching the offending code, if you want to go that far). But in the end this always has to do with authentication at some point, and for me that's where it definitely becomes infeasible.

The initial request/goal for me would be to get something up and running in the official spec runner similar to the gen_libsass_todo batch script. It has proven to work pretty well for me, so I personally am not even really in a hurry to get it baked into the official sass-spec runner. But the long-term use case for such a bug reporting system seems appealing, even though I'm not yet sure how this could play out (just think of how Firefox etc. include bug reporting tools out of the box nowadays).

[FR] Generate list of tests and status to markdown

I was thinking something like the following

| Test Name (or Path?) | 3.4 | 3.5 | 4.0 |
| --- | --- | --- | --- |
| Something Todo in 3.5+ | P | T | T |
| Something 4 specific | - | - | P |
| Something Failing in 3.5 | P | F | P |

Not sure if it makes sense on a per-suite/folder basis, or whether to also include a visualization of some of the options.

Add a flag to merge specs targeting different Sass versions

It's easy to split a spec into a version that targets an older version of Sass and one that targets a newer version, but if this split ends up being incorrect, it's difficult to merge them again. It would be nice to have a way of doing this automatically. Bonus points if --interactive mode detects that the actual output of a failing test is the same as the expected output of a different version and presents an option for merging.

Empty Extends Tests?

A lot of the extends tests seem to be totally empty and repetitively named.

209_test_pseudo_element_superselector
210_test_pseudo_element_superselector
211_test_pseudo_element_superselector
212_test_pseudo_element_superselector
213_test_pseudo_element_superselector
214_test_pseudo_element_superselector

What are these for? Can they be deleted?

/ping @michaek

Run todo tests by default for CI

IMO it would be nice to also check the todo tests on CI. We should not alter the overall error state for these, but the run should report the spec tests that are unexpectedly passing. We sometimes add todo tests for segfaults, which should not be included here, since a segfault would abort the whole test runner. Therefore I propose to put tests for segfaults in a separate folder that would be excluded from this proposed test run.

Errors may not be equal for all output styles

As I suspected, there are some errors which differ between output styles. Today the check I've added to the perl-libsass runner (I mentioned it in a Slack chat, AFAIR) caught a first sample:

foo {
  bar: (rgba(16,16,16,0.5) + #AAA);
}

results in

Error: Alpha channels must be equal: rgba(16,16,16,0.5) + #AAA // compressed
Error: Alpha channels must be equal: rgba(16, 16, 16, 0.5) + #AAA // all others

//CC @saper

Nuke is not idempotent

Discovered in #529.

If a spec is nuked multiple times, and one run produces an error, the error file is never cleaned up.
I expect the same applies in the inverse: if an error spec is nuked multiple times and one run mistakenly produces output, that output is not cleaned up (untested).

Report disabled tests

Directories which do not have valid input files (or output files before #494) are silently skipped.
This is how the not-really-magic disabled keyword works (renaming input.scss to input.disabled.scss).

> ruby sass-spec.rb -c ~/sw/sass/bin/sass -s spec/libsass-todo-tests/css_error_with_windows_newlines
Recursively searching under directory 'spec/libsass-todo-tests/css_error_with_windows_newlines' for test files to test '/home/saper/sw/sass/bin/sass' with.
Sass 3.4.14.2e0f33b (Selective Steve)
Run options: --seed 19577

# Running:



Finished in 0.001196s, 0.0000 runs/s, 0.0000 assertions/s.

0 runs, 0 assertions, 0 failures, 0 errors, 0 skips

This is confusing especially given there is always a chance for a typo. See also #501 and #505 for funny cases where it confuses people.

I think we should add those directories to "skips" to indicate this. That would be especially useful when running against a single directory or a small group of directories.

Warning: `compact` has been removed breaks node-sass coverage

My environment:

(libsass has been compiled separately and linked to the node-sass binding as a shared library)

When running npm test I get the following warning:

      ✓ bool
Warning: `compact` has been removed from libsass because it's not part of the Sass spec
Warning: `compact` has been removed from libsass because it's not part of the Sass spec
Warning: `compact` has been removed from libsass because it's not part of the Sass spec
Warning: `compact` has been removed from libsass because it's not part of the Sass spec
      ✓ bourbon (43ms)
      ✓ calc

but the tests pass. However, trying to run mocha in coverage testing mode fails immediately:

>  npm run coverage

> [email protected] coverage /home/saper/node_modules/node-sass
> node scripts/coverage.js

Warning: `compact` has been removed from libsass because it's not part of the Sass spec


npm ERR! FreeBSD 10.1-STABLE
npm ERR! argv "node" "/usr/local/bin/npm" "run" "coverage"
npm ERR! node v0.12.2
npm ERR! npm  v2.8.4
npm ERR! code ELIFECYCLE
npm ERR! [email protected] coverage: `node scripts/coverage.js`
npm ERR! Exit status 1
npm ERR! 
npm ERR! Failed at the [email protected] coverage script 'node scripts/coverage.js'.
npm ERR! This is most likely a problem with the node-sass package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR!     node scripts/coverage.js
npm ERR! You can get their info via:
npm ERR!     npm owner ls node-sass
npm ERR! There is likely additional logging output above.

npm ERR! Please include the following file with any support request:
npm ERR!     /home/saper/node_modules/node-sass/npm-debug.log

Maybe the warning should be relaxed or compact test files removed from the spec?

Ping sass/node-sass#904 sass/libsass#1055 #359

Todo-method

I know we keep discussing various meta-comment-style things to help track issues. But one of the most annoying things right now is that the "todo" flag only applies to libsass and isn't set up for multiple projects.

How about we drop a file in todo-tests called libsass.todo. So, your test folder /extends/basic might include a new file called rubysass.todo and that would indicate to the ruby people that it's still a known breaking test for them.

3.4 Support / Scoped Variables

The tests were originally imported from 3.2's test suite and haven't been updated since.

The biggest change required is the variable scoping changes. I have started a branch to work on this called scoped-variables

Test organization?

@xzyfer mentioned the need for this on sass/libsass#521 — and I thought it deserved an issue over here. The current organization of sass-spec tests is quite difficult to sort through, based partially on what feature is being tested and partially on where support already exists.

I think the only organization that makes sense long-term is feature-based. Is there something stopping us from moving completely in that direction? Is there anything we can use as a "full list of features" to work against? Any inherent structure in the code, or somewhere in the RubySass project?

EngineAdapters: inconsistent status value returned

When running a subprocess with -c, for example sassc or the Ruby Sass CLI, the value returned in the status variable is a Process::Status, which contains the PID, an exit code, and other information.
The Ruby Sass CLI returns with an exit code of 65 when it fails with a SyntaxError (in fact for any error occurring during Sass processing, including one coming from an @error directive). sassc fails with an exit code of 1 in that case.

When using Ruby Sass as a ruby module, we are getting a numeric output: 0 for OK, 1 for a Sass SyntaxError or 2 for some other exception.

This inconsistency prevents us from using the status code consistently in the test framework. For example, when written to a file, Process::Status will produce a string like pid 123123 exit 1, which is not a simple numeric exit code one can compare against.

Update readme with accurate documentation

@saper has been making a lot of great updates to sass-spec recently. These should be documented in the README. There is also information in the README that is no longer correct and should be removed.

Contribution Guide?

If there are bugs we submit on libsass that are supported in the Ruby version, would it be helpful to add tests here? Any chance of creating a quick CONTRIBUTING.md file to make sure that can be done how you’d like? Thanks!

Sass 3.4.7 compatibility

Regenerated all spec tests with Ruby Sass 3.4.9.
I got a few incompatibilities and thought I'd post them here:

#   Failed test 'sass-spec t/sass-spec/spec/basic/12_pseudo_classes_and_elements/input.scss'
#          got: 'a b {
#  color: red; }
# a b :first-child, a b :nth-of-type(-2n+1) {
#  blah: bloo; }
# a b :first-child .foo#bar:nth-child(even), a b :nth-of-type(-2n+1) .foo#bar:nth-child(even) {
#  hoo: goo; }
# a b :first-child ::after, a b :nth-of-type(-2n+1) ::after {
#  content: "glux"; color: green; }
# a b :first-child :not(.foo), a b :nth-of-type(-2n+1) :not(.foo) {
#  hoo: boo; }
# a b :first-child :not(:not(:not(.foo[bleeble="blabble"] > .hello, .gluxbux))), a b :nth-of-type(-2n+1) :not(:not(:not(.foo[bleeble="blabble"] > .hello, .gluxbux))) {
#  hoo: boo; }
# a b :first-child a, a b :nth-of-type(-2n+1) a {
#  b: c; }'
#     expected: 'a b {
#  color: red; }
# a b :first-child, a b :nth-of-type(-2n+1) {
#  blah: bloo; }
# a b :first-child .foo#bar:nth-child(even), a b :nth-of-type(-2n+1) .foo#bar:nth-child(even) {
#  hoo: goo; }
# a b :first-child ::after, a b :nth-of-type(-2n+1) ::after {
#  content: "glux"; color: green; }
# a b :first-child :not(.foo), a b :nth-of-type(-2n+1) :not(.foo) {
#  hoo: boo; }
# a b :first-child , a b :nth-of-type(-2n+1) {
#  hoo: boo; }
# a b :first-child a, a b :nth-of-type(-2n+1) a {
#  b: c; }'
#   Failed test 'sass-spec t/sass-spec/spec/basic/16_hex_arithmetic/input.scss'
#          got: 'div {
#  p01: #abc; p02: #aabbcc; p03: #aabbcchello; p04: #abbccd; p05: #aabbdd; p06: #0101ff; p07: blue; p08: cyan; p09: #000000; p10: black; p11: black; p12: yellow; p13: #020202; p14: black; p15: 10-#222222; p16: black; p17: magenta; p18: 10 #232323; p19: 10/#222222; p20: #0b0a0b; p21: white; }'
#     expected: 'div {
#  p01: #abc; p02: #aabbcc; p03: #abchello; p04: #abbccd; p05: #aabbdd; p06: #0101ff; p07: blue; p08: cyan; p09: #000000; p10: black; p11: black; p12: yellow; p13: #020202; p14: black; p15: 10-#222; p16: black; p17: magenta; p18: 10 #232323; p19: 10/#222; p20: #0b0a0b; p21: white; }'
#   Failed test 'sass-spec t/sass-spec/spec/basic/18_mixin_scope/input.scss'
#          got: 'div {
#  a: global x; b: global y; f-a: arg; f-b: global y; f-a: local x changed by foo; f-b: global y changed by foo; f-c: new local z; a: global x; b: global y changed by foo; }'
#     expected: 'div {
#  a: global x; b: global y; f-a: arg; f-b: global y; f-a: local x changed by foo; f-b: global y changed by foo; f-c: new local z; a: global x; b: global y; }'
#   Failed test 'sass-spec t/sass-spec/spec/basic/19_full_mixin_craziness/input.scss'
#          got: 'div {
#  /* begin foo */ margin: 1 2; /* end foo */ /* begin foo */ margin: 1 3; /* end foo */ margin: 1 2 zee; margin: 1 kwd-y kwd-z; }
# div blip {
#  hey: now; }
# div blip {
#  hey: now; }
# div {
#  /* begin hux */ color: global-y; /* begin foo */ margin: called-from-hux global-y; /* end foo */ /* end hux */ }
# div blip {
#  hey: now; }
# div {
#  /* begin hux */ color: calling-hux-again; /* begin foo */ margin: called-from-hux calling-hux-again; /* end foo */ /* end hux */ }
# div blip {
#  hey: now; }
# div {
#  blah: original-bung; }
# div {
#  blah: redefined-bung; }
# div {
#  /* calls to nullary mixins may omit the empty argument list */ blah: redefined-bung; }
# div {
#  /* begin foo */ margin: kwdarg1 kwdarg2; /* end foo */ }
# div blip {
#  hey: now; }
# hoo {
#  color: boo; }
# div {
#  blah: boogoo some other default; }
# div {
#  value: original; }
# div {
#  value: no longer original; }
# div {
#  arg: changed local x; blarg: changed global y; a: global-x; b: changed global y; }'
#     expected: 'div {
#  /* begin foo */ margin: 1 2; /* end foo */ /* begin foo */ margin: 1 3; /* end foo */ margin: 1 2 zee; margin: 1 kwd-y kwd-z; }
# div blip {
#  hey: now; }
# div blip {
#  hey: now; }
# div {
#  /* begin hux */ color: global-y; /* begin foo */ margin: called-from-hux global-y; /* end foo */ /* end hux */ }
# div blip {
#  hey: now; }
# div {
#  /* begin hux */ color: calling-hux-again; /* begin foo */ margin: called-from-hux calling-hux-again; /* end foo */ /* end hux */ }
# div blip {
#  hey: now; }
# div {
#  blah: original-bung; }
# div {
#  blah: redefined-bung; }
# div {
#  /* calls to nullary mixins may omit the empty argument list */ blah: redefined-bung; }
# div {
#  /* begin foo */ margin: kwdarg1 kwdarg2; /* end foo */ }
# div blip {
#  hey: now; }
# hoo {
#  color: boo; }
# div {
#  blah: boogoo some other default; }
# div {
#  value: original; }
# div {
#  value: no longer original; }
# div {
#  arg: changed local x; blarg: changed global y; a: global-x; b: different-global-y; }'
#   Failed test 'sass-spec t/sass-spec/spec/basic/25_basic_string_interpolation/input.scss'
#          got: 'div {
#  blah: "hello 4 world px bloo\n blah"; }'
#     expected: 'div {
#  blah: "hello 4 world px bloon blah"; }'
#   Failed test 'sass-spec t/sass-spec/spec/basic/30_if_in_function/input.scss'
#          got: 'div {
#  content: foo; content: bar; content: foo; content: bar; content: bar; }'
#     expected: 'div {
#  content: foo; content: foo; content: foo; content: foo; content: foo; }'
#   Failed test 'sass-spec t/sass-spec/spec/basic/31_if_in_mixin/input.scss'
#          got: 'div {
#  content: foo; content: bar; content: foo; content: foo; }'
#     expected: 'div {
#  content: foo; content: foo; content: foo; content: foo; }'
#   Failed test 'sass-spec t/sass-spec/spec/basic/38_expressions_in_at_directives/input.scss'
#          got: '@foo 1 2, hux {
#  bar {
#  whatever: whatever; }
#  }'
#     expected: '@foo $x $y, hux {
#  bar {
#  whatever: whatever; }
#  }'
#   Failed test 'sass-spec t/sass-spec/spec/basic/42_css_imports/input.scss'
#          got: '@import url(hux\ bux.css); @import url(foo.css); @import url(bar.css); div {
#  color: red; }
# span {
#  color: blue; }'
#     expected: '@import url(hux bux.css); @import url(foo.css); @import url(bar.css); div {
#  color: red; }
# span {
#  color: blue; }'
#   Failed test 'sass-spec t/sass-spec/spec/basic/48_case_conversion/input.scss'
#          got: 'div {
#  bar: "BLAH"; bar: "BLAH"; bar: "BLAH"; bar: "1232178942"; bar: "øáéíóúüñ¿éàŤDžǂɊɱʭʬѪ҈ݓ"; bar: BLAH; bar: BLAH; bar: BLAH; bar: ""; bar: "blah"; bar: "blah"; bar: "blah"; bar: "1232178942"; bar: "øáéíóúüñ¿éàŤDžǂɊɱʭʬѪ҈ݓ"; bar: blah; bar: blah; bar: blah; bar: ""; }'
#     expected: '@charset "UTF-8"; div {
#  bar: "BLAH"; bar: "BLAH"; bar: "BLAH"; bar: "1232178942"; bar: "├©├í├®├¡├│├║├╝├▒┬┐├®├á┼ñÃàÃé╔è╔▒╩¡╩¼Ð¬Êê¦ô"; bar: BLAH; bar: BLAH; bar: BLAH; bar: ""; bar: "blah"; bar: "blah"; bar: "blah"; bar: "1232178942"; bar: "├©├í├®├¡├│├║├╝├▒┬┐├®├á┼ñÃàÃé╔è╔▒╩¡╩¼Ð¬Êê¦ô"; bar: blah; bar: blah; bar: blah; bar: ""; }'
#   Failed test 'sass-spec t/sass-spec/spec/extend-tests/180_test_basic_extend_loop/input.scss'
#          got: '.bar, .foo {
#  a: b; }
# .foo, .bar {
#  c: d; }'
#     expected: '.foo, .bar {
#  a: b; }
# .bar, .foo {
#  c: d; }'
#   Failed test 'sass-spec t/sass-spec/spec/extend-tests/181_test_three_level_extend_loop/input.scss'
#          got: '.baz, .bar, .foo {
#  a: b; }
# .foo, .baz, .bar {
#  c: d; }
# .bar, .foo, .baz {
#  e: f; }'
#     expected: '.foo, .baz, .bar {
#  a: b; }
# .bar, .foo, .baz {
#  c: d; }
# .baz, .bar, .foo {
#  e: f; }'
#   Failed test 'sass-spec t/sass-spec/spec/libsass/list-evaluation/input.scss'
#          got: 'div {
#  content: red 2/3 blue; content: 2/3; content: number; content: color; /**** 4 ****/ content: 0.5 3/40.83333 7/8; content: 0.5 3/4, 0.83333 7/8; /**** ****/ foo: 1; bar: 2; foo: 2; bar: 3; foo: 0.75; bar: 1.75; /*** ***/ stuff: 1, 2 3/4 5, 6; stuff: 0.25; }'
#     expected: 'div {
#  content: red 2/3 blue; content: 0.66667; content: number; content: color; /**** 4 ****/ content: 0.5 3/40.83333 7/8; content: 0.5 3/4, 0.83333 7/8; /**** ****/ foo: 1; bar: 2; foo: 2; bar: 3; foo: 0.75; bar: 1.75; /*** ***/ stuff: 1, 2 3/4 5, 6; stuff: 0.25; }'
WARNING on line 7, column 2 of t/sass-spec/spec/libsass-closed-issues/issue_308/input.scss:
You probably don't mean to use the color value `orange' in interpolation here.
It may end up represented as #ffa500, which will likely produce invalid CSS.
Always quote color names when using them as strings (for example, "orange").
If you really want to use the color value here, use `"" + $var'.
Error: fetched
        on line 2 of t/sass-spec/spec/libsass-closed-issues/issue_799/input.scss
  Use --trace for backtrace.
#   Failed test 'sass-spec t/sass-spec/spec/libsass-closed-issues/issue_799/input.scss'
#          got: '.test {
#  content: "fetched"; }'
#     expected: ''
#   Failed test 'sass-spec t/sass-spec/spec/scss/default-args/input.scss'
#          got: 'div {
#  value: 1, 2; value: 2, 3; value: 1, 3; }
# div {
#  value: ho; }'
#     expected: 'div {
#  value: 1, 2; value: 2, 3; value: 1, 3; }
# div {
#  value: hey; }'
#   Failed test 'sass-spec t/sass-spec/spec/scss/each-in-function/input.scss'
#          got: 'div {
#  a: 0; b: global each 50% 50% type1 number type2 number each cover circle type1 string type2 string each red blue type1 color type2 color; c: a, b, color, d; }'
#     expected: 'div {
#  a: 0; b: global; c: a, b, color, d; }'
#   Failed test 'sass-spec t/sass-spec/spec/scss/env/input.scss'
#          got: 'div {
#  /* 0 */ font: 0; /* 1 */ font: 1; /* 2 */ font: 2; }
# div span {
#  /* 2 */ font: 2; }
# div p {
#  /* 2 */ font: 2; }
# div {
#  @foo {
#  font: 2; }
# @bar {
#  font: 3; }
#  }
# div {
#  content: "foo"; font: fudge; width: "block for foo!"; }'
#     expected: 'div {
#  /* 0 */ font: 0; /* 1 */ font: 1; /* 2 */ font: 2; }
# div span {
#  /* 2 */ font: 2; }
# div p {
#  /* 2 */ font: 2; }
# @foo {
#  div {
#  font: 2; }
#  }
# @bar {
#  div {
#  font: 3; }
#  }
# div {
#  content: "foo"; font: fudge; width: "block for foo!"; }'
#   Failed test 'sass-spec t/sass-spec/spec/scss/if-in-function/input.scss'
#          got: 'div {
#  content: foo; content: bar; content: foo; content: bar; content: bar; }'
#     expected: 'div {
#  content: foo; content: foo; content: foo; content: foo; content: foo; }'
#   Failed test 'sass-spec t/sass-spec/spec/scss/if-in-mixin/input.scss'
#          got: 'div {
#  content: foo; content: bar; content: foo; content: foo; }'
#     expected: 'div {
#  content: foo; content: foo; content: foo; content: foo; }'
#   Failed test 'sass-spec t/sass-spec/spec/scss/selectors/input.scss'
#          got: 'div span, div p, div span {
#  color: red; }
# div a.foo.bar.foo {
#  color: green; }
# div:nth(-3) {
#  color: blue; }
# @-webkit-keyframes {
#  from {
#  left: 0px; }
# from 10% {
#  whatever: hoo; }
# to {
#  left: 200px; }
#  }
# div {
#  @whatever {
#  blah: blah; stuff {
#  blah: bloh; }
#  }
#  }
# a, b {
#  color: red; }
# a c, a d, b c, b d {
#  height: 10px; }
# a c e, a c f, a d e, a d f, b c e, b c f, b d e, b d f {
#  width: 12px; }'
#     expected: 'div span, div p, div span {
#  color: red; }
# div a.foo.bar.foo {
#  color: green; }
# div:nth(-3) {
#  color: blue; }
# @-webkit-keyframes {
#  from {
#  left: 0px; 10% {
#  whatever: hoo; }
#  }
# to {
#  left: 200px; }
#  }
# @whatever {
#  div {
#  blah: blah; }
# div stuff {
#  blah: bloh; }
#  }
# a, b {
#  color: red; }
# a c, a d, b c, b d {
#  height: 10px; }
# a c e, a c f, a d e, a d f, b c e, b c f, b d e, b d f {
#  width: 12px; }'
#   Failed test 'sass-spec t/sass-spec/spec/scss/unquote/input.scss'
#          got: 'div {
#  a: foo; b: I'm a "fashion" "expert".; c: \"wha; d: column1\tcolumn2; e: 24; f: 37%; g: null; j: 1; k: 2; l: a b; m: a 1, b 2; n: 1; }'
#     expected: 'div {
#  a: foo; b: I'm a "fashion" "expert".; c: \"wha; d: column1tcolumn2; e: 24; f: 37%; g: null; j: 1; k: 2; l: a b; m: a 1, b 2; n: 1; }'
#   Failed test 'sass-spec t/sass-spec/spec/scss-tests/047_test_unknown_directive_bubbling/input.scss'
#          got: '.foo {
#  @fblthp {
#  .bar {
#  a: b; }
#  }
#  }'
#     expected: '@fblthp {
#  .foo .bar {
#  a: b; }
#  }'
#   Failed test 'sass-spec t/sass-spec/spec/scss-tests/135_test_simple_at_root/input.scss'
#          got: '.foo {
#  @at-root {
#  .bar {
#  a: b; }
#  }
#  }'
#     expected: '.bar {
#  a: b; }'
#   Failed test 'sass-spec t/sass-spec/spec/scss-tests/136_test_at_root_with_selector/input.scss'
#          got: '.foo {
#  @at-root .bar {
#  a: b; }
#  }'
#     expected: '.bar {
#  a: b; }'
#   Failed test 'sass-spec t/sass-spec/spec/scss-tests/137_test_at_root_in_mixin/input.scss'
#          got: '.foo {
#  @at-root .bar {
#  a: b; }
#  }'
#     expected: '.bar {
#  a: b; }'
#   Failed test 'sass-spec t/sass-spec/spec/scss-tests/138_test_at_root_in_media/input.scss'
#          got: '@media screen {
#  .foo {
#  @at-root .bar {
#  a: b; }
#  }
#  }'
#     expected: '@media screen {
#  .bar {
#  a: b; }
#  }'
#   Failed test 'sass-spec t/sass-spec/spec/scss-tests/139_test_at_root_in_bubbled_media/input.scss'
#          got: '@media screen {
#  .foo {
#  @at-root .bar {
#  a: b; }
#  }
#  }'
#     expected: '@media screen {
#  .bar {
#  a: b; }
#  }'
#   Failed test 'sass-spec t/sass-spec/spec/scss-tests/140_test_at_root_in_unknown_directive/input.scss'
#          got: '@fblthp {
#  .foo {
#  @at-root .bar {
#  a: b; }
#  }
#  }'
#     expected: '@fblthp {
#  .bar {
#  a: b; }
#  }'
# Looks like you failed 28 tests of 601.
Failed 28/601 subtests 

Interactive mode: don't continue if a test can't be migrated

Sometimes when I select the "Migrate copy of the test to pass current version" option in interactive mode, I get the error "Cannot migrate test. Test does not apply to an earlier version", which makes sense. But then the test runner just moves on to the next tests. I'd expect it to let me choose a different option for the test I tried to migrate, such as updating the expected output.

Change format of tests

Currently, every test requires its own directory with two specifically named files: input.scss and expected_output.css. Because this produces so many files, reading through the tests, and even grouping contextually related tests, is difficult.

I'd like to suggest some changes that would allow us to name files after their tests, and express multiple assertions per file. We could take our current extend-tests/001_test_basic and put the following in extend-tests/basic.scss:

/*
.foo, .bar {
  a: b; }
*/

.foo {a: b}
.bar {@extend .foo}

Because block comments are preserved in the output, we can compile the SCSS and compare its output to the contents of the comment block. It would be pretty easy to include multiple scss/comment pairs:

/*
.foo, .bar {
  a: b; }
*/

.foo {a: b;}
.bar {@extend .foo;}

/*
.foo, .bar {
  a: b; }
*/

.bar {@extend .foo;}
.foo {a: b;}

I think this would aid comprehensibility, and keep the parts of sass-spec that were derived from Ruby Sass's test cases closer to the structure of those tests. Thoughts?
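A runner for this format could recover the assertion pairs mechanically. Here is a minimal Ruby sketch, assuming each assertion is a `/* ... */` block immediately followed by the SCSS that should produce it (`split_spec` is a hypothetical helper, not part of sass-spec):

```ruby
# Hypothetical helper: split a combined spec file into
# [expected CSS, SCSS source] pairs. Assumes each expected output is a
# /* ... */ block followed by the SCSS that should compile to it.
def split_spec(source)
  source.scan(%r{/\*\n(.*?)\*/\n(.*?)(?=\n/\*|\z)}m).map do |expected, scss|
    [expected.strip, scss.strip]
  end
end
```

The runner would then compile each SCSS fragment and diff the result against the preceding comment's contents.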

Tests very flaky, am I running them incorrectly?

[anthony@anthony-VirtualBox sass-spec (master)]$ ./sass-spec.rb
Recursively searching under directory 'spec' for test files to test 'sass' with.
3.4.13 (Selective Steve)
Run options: --seed 61672

# Running:

.......................................................................................................................................................................................................Command `sass` did not complete:

[anthony@anthony-VirtualBox sass-spec (master)]$ ./sass-spec.rb
Recursively searching under directory 'spec' for test files to test 'sass' with.
3.4.13 (Selective Steve)
Run options: --seed 3929

# Running:

..................................................................................................................................................................................................Command `sass` did not complete:

[anthony@anthony-VirtualBox sass-spec (master)]$ ./sass-spec.rb
Recursively searching under directory 'spec' for test files to test 'sass' with.
3.4.13 (Selective Steve)
Run options: --seed 43638

# Running:

...Command `sass` did not complete:

[anthony@anthony-VirtualBox sass-spec (master)]$ ./sass-spec.rb
Recursively searching under directory 'spec' for test files to test 'sass' with.
3.4.13 (Selective Steve)
Run options: --seed 946

# Running:

...Command `sass` did not complete:

.[anthony@anthony-VirtualBox sass-spec (master)]$ ./sass-spec.rb
Recursively searching under directory 'spec' for test files to test 'sass' with.
3.4.13 (Selective Steve)
Run options: --seed 31792

# Running:

Command `sass` did not complete:

..[anthony@anthony-VirtualBox sass-spec (master)]$ ./sass-spec.rb
Recursively searching under directory 'spec' for test files to test 'sass' with.
3.4.13 (Selective Steve)
Run options: --seed 21729

# Running:

...........Command `sass` did not complete:

[anthony@anthony-VirtualBox sass-spec (master)]$ ./sass-spec.rb
Recursively searching under directory 'spec' for test files to test 'sass' with.
3.4.13 (Selective Steve)
Run options: --seed 18872

# Running:

..............Command `sass` did not complete:

selector functions type checks / error checks

While implementing the 3.4 selector functions, I know how sensitive they can be to bad input.
Currently our tests (https://github.com/sass/sass-spec/tree/master/spec/libsass-todo-issues/issue_963/selector-functions) are only being passed ideal input values.

Can we add checks to make sure the passed-in types are correct? (For example, passing "12" to some functions is considered bad.)

The Ruby Sass test suite has some methods for that, but I'm not sure what the format would be to translate those here.

For example:

    assert_error_message("$selectors: At least one selector must be passed for `selector-append'",
      "selector-append()")
    assert_error_message("Can't append \"> .bar\" to \".foo\" for `selector-append'",
      "selector-append('.foo', '> .bar')")
    assert_error_message("Can't append \"*.bar\" to \".foo\" for `selector-append'",
      "selector-append('.foo', '*.bar')")
    assert_error_message("Can't append \"ns|suffix\" to \".foo\" for `selector-append'",
      "selector-append('.foo', 'ns|suffix')")
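Translated into sass-spec's own layout, each of these would become an error spec: a directory whose input.scss triggers the failure and whose error file holds the expected message. A sketch for the first case (the directory name is illustrative; the message is quoted from the Ruby Sass assertions above):

```scss
// spec/selector-functions/append-no-args/input.scss
a {b: selector-append()}

// The accompanying `error` file (shown here as a comment) would contain:
// Error: $selectors: At least one selector must be passed for `selector-append'
```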

Use minitest as the test runner?

I wanted to make a website showing libsass's progress so people can see if they can use it yet.

To do this I need JSON output of the tests, with diffs of expected vs. actual output. Using minitest as sass-spec's test runner would allow me to write a custom reporter that shows the info I need.

I have a POC over at https://github.com/blopker/sass-spec
The changes are mostly in https://github.com/blopker/sass-spec/blob/master/lib/sass_spec/runner.rb and https://github.com/blopker/sass-spec/blob/master/lib/sass_spec/test.rb

I still need to port all the options over, but it works. Also, it's faster because minitest gives us parallelism for free.

Should I continue to port the options over?
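For context, a custom reporter along these lines could collect per-test results and emit JSON at the end of the run. This is my own minimal sketch (class and field names are illustrative, not code from the linked POC):

```ruby
require "json"
require "minitest"

# Hypothetical minimal JSON reporter. Minitest hands each finished test
# result to #record and calls #report once at the end of the run.
class JsonReporter < Minitest::AbstractReporter
  def initialize(io = $stdout)
    @io = io
    @results = []
  end

  def record(result)
    @results << {
      name:     result.name,
      passed:   result.passed?,
      failures: result.failures.map(&:message),
    }
  end

  def report
    @io.puts JSON.generate(@results)
  end
end

# Registration would happen from a minitest plugin hook; omitted here.
```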

Test regressions under Ruby 2.1.0

010_test_multiple_extends_with_multiple_extenders_and_single_target and 217_test_parent_and_sibling_extend both fail under Ruby 2.1.0, but not other Ruby versions. This isn't likely to be a significant problem, but we should have a way of documenting this.

RuntimeError should be catchable by error tests

(Strange behavior seen #463)

As a consequence, --nuke does not generate error files after #494

  7) Error:
SassSpec::Test#test__expanded_spec/libsass-todo-issues/issue_1418/static:
RuntimeError: Command `sass` did not complete:


    /home/saper/sw/libsass/sass-spec/lib/sass_spec/test.rb:17:in `run_spec_test'
    /home/saper/sw/libsass/sass-spec/lib/sass_spec/test.rb:50:in `block (2 levels) in create_tests'

Sass Version Compatibility

How should we manage different Sass versions? A few ideas come to mind:

  • Maintain several branches, one for each major/minor version of Sass. This means that new test cases would frequently have to be added to multiple branches.
  • Have a YAML/JSON file in each test's directory that specifies which versions it applies to, or make this an optional preamble in the input and output Sass file(s). Output is likely to change across versions for the same input, so a test would no longer have a single input or output file; instead there would be one per Sass version.

Maybe you have other ideas. We also need a way of selecting which versions of Sass an implementation implements.
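The per-test version metadata idea could be consumed by a predicate like the following Ruby sketch. The metadata key names (`start_version`, `end_version`) are hypothetical, not an existing sass-spec format:

```ruby
require "rubygems"

# Hypothetical sketch: decide whether a spec applies to the Sass version
# under test, given per-spec metadata such as
# {"start_version" => "3.3", "end_version" => "3.4"}.
def spec_applies?(options, sass_version)
  version = Gem::Version.new(sass_version)
  from = options["start_version"]
  to   = options["end_version"]
  return false if from && version < Gem::Version.new(from)
  return false if to && version > Gem::Version.new(to)
  true
end
```

A spec with no metadata would apply to every version, so existing tests would keep working unchanged.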

Include SourceMap tests

I was looking at cleaning up the node-sass test suite, but realized we need to keep our sourcemap tests for now.
What do you think of including sourcemaps for the expected.css? Also maybe some options.yml frontmatter to cover the other sourcemap options (inline, relative, etc.)?

[RFC] Further improvements for spec runner (and feedback)

Mostly targeted towards the recent work done by @chriseppstein.

  1. thanks for the work, I like that someone is taking care of sass-spec
  2. don't take this feedback too seriously, I just wanted to give my 2 cents
  3. also in regard that perl-libsass (and node-sass) have their own spec runners
  • I like the approach with the options.yml in general
  • I do not like the split of different out styles on the first glance
    • it increased size considerably for the perl dist (1.3M to 2.1M)
    • not sure if this is fully because of the split, but I guess so
    • counter argument: it's the most common "multiple" option; in the old implementation ...
    • ... each expected output was optional (equivalent to the directories now)
    • still, after rethinking it for some time, I think that design made sense.
  • --run-todo does not make much sense to me in the current state
    • I don't see a use case where runner should abort if todo test does not pass
    • The only thing that comes close would be with a very specific filter argument
    • I added the behavior I would expect under option --probe-todo
    • Disclaimer: I am not sure if exit code signals error for --run-todo
    • I propose replacing --run-todo with the new behavior.
  • Message for --skip confuses me (maybe option name is too generic?)
  • I have no real clue what the migrate option does (just glanced the code)
  • I like the interactive mode (although I just barely used it for adding --probe-todo)
  • A bit more documentation would be nice (IMO best with some real walk-throughs)
  • parallel mode would be nice (in perl-libsass I opted for distinct output files for each test)
    • I tried with stdin, stderr redirect, but it's doomed with fork; and threads are not so portable
    • so I just assign unique files for stdout and stderr and parallelize the sassc commands
    • this way I just need to know the exit status, which also works well on windows
    • pretty sure ruby will have the same problems, so this approach has proven to work in perl

Further ideas to improve here:

  • Add some workflow to create (and activate) todo spec tests (incl. commits and pushes). I already added a batch file under the tools folder that I used to create the missing spec tests. I added it so others may use it if it works for them (and also for myself), but no support is given. Having this workflow automated could help encourage users to submit their spec tests on their own; in the long run the spec runner could act as an official bug report tool. I guess I don't have to mention that this would also catch a lot of possible human errors.

Of course I don't really like that I'll probably have to update the spec runner in perl-libsass 😉 For now I just exclude offending directories by hardcoding them. I know it's rather optional to have this included in perl-libsass, but the nice thing is that it gets us automated coverage across a bigger set of systems than our CI does.

Really looking forward to seeing sass-spec evolve with Ruby Sass on board!
I marked three items above which I would consider the DoD for this issue.

Are all tests meant to pass with Ruby sass?

I'm getting an error from Sass on spec/basic/44_bem_selectors:

Syntax error: Invalid CSS after "  &": expected "{", was "_foo {"

On that syntax error, the test execution is aborted.

$variables should not be omitted in the error message

Having ported #324 to the #494 error-testing platform, I got an error message mismatch.

Normal execution from the shell:

m> ~/sw/sass/bin/sass spec/libsass-todo-issues/issue_1093/parameter/function/input.scss
Error: Invalid CSS after "$foo: foo(#{": expected expression (e.g. 1px, bold), was "});"
        on line 5 of spec/libsass-todo-issues/issue_1093/parameter/function/input.scss
  Use --trace for backtrace.
m> ~/sw/sassc/bin/sassc spec/libsass-todo-issues/issue_1093/parameter/function/input.scss 
Error: Invalid CSS after "$foo: foo(#{": expected expression (e.g. 1px, bold), was "});"
        on line 5 of spec/libsass-todo-issues/issue_1093/parameter/function/input.scss
>> $foo: foo(#{});
   ----------^

(the first lines seem to be identical)

But the test fails:

SassSpec::Test#test__compact_spec/libsass-todo-issues/issue_1093/parameter/mixin [/home/saper/sw/libsass/sass-spec/lib/sass_spec/test.rb:63]:
Expected did not match error.
--- expected
+++ actual
@@ -1 +1 @@
-"Error: Invalid CSS after \"  @include foo(\#{\": expected expression (e.g. 1px, bold), was \"});\""
+"Error: Invalid CSS after \"...@include foo(\#{\": expected expression (e.g. 1px, bold), was \"});\""

Something gets omitted (probably expanded?) here.

The message to match has been generated correctly via --nuke:

Error: Invalid CSS after "$foo: foo(#{": expected expression (e.g. 1px, bold), was "});"
        on line 5 of /home/saper/sw/libsass/sass-spec/spec/libsass-todo-issues/issue_1093/parameter/function/input.scss
  Use --trace for backtrace.

EngineAdapters inconsistently report exceptions

To demonstrate this, please use the https://github.com/saper/sass-spec/tree/repro_497 branch (it fixes #495 and #496, and includes #494 as well as sample tests for @error and @warn).

Ruby Sass version 3.4.18.4ef8e31 (Selective Steve).

SassC version is sassc: 3.3.0-beta1, libsass: 3.3.0-beta2-1-g1efd, sass2scss: 1.0.3

SassEngineAdapter:

Command to test:

ruby -I <path-to-ruby-sass> sass-spec.rb -s spec/misc/error-directive/

Results:

Standard error: empty
Effect on standard output: https://gist.github.com/a4432027a9f3545531a8
A full exception with a backtrace is returned formatted as a CSS comment

ExecutableEngineAdapter with Ruby Sass:

Command to test:

ruby sass-spec.rb -c <path-to-ruby-sass>/bin/sass -s spec/misc/error-directive/

Results:

Standard error: https://gist.github.com/3c6133b710852b965cb1

Error: Buckle your seatbelt Dorothy, 'cause Kansas is going bye-bye
        on line 1 of /home/saper/sw/libsass/sass-spec/spec/misc/error-directive/input.scss
  Use --trace for backtrace.

Standard output: empty

ExecutableEngineAdapter with SassC:

Command to test:

ruby sass-spec.rb -c <path-to-sassc>/bin/sassc -s spec/misc/error-directive/

Results:

Standard error: https://gist.github.com/5ac0b9e18252c4906593

Error: Buckle your seatbelt Dorothy, 'cause Kansas is going bye-bye
        on line 1 of spec/misc/error-directive/input.scss
>> @error "Buckle your seatbelt Dorothy, 'cause Kansas is going bye-bye"
   ^

Standard output: empty

Clearly a backtrace should not be used to test against errors. We need to make sure the exception is dumped in one place (probably standard output) and that the format is the same in all cases.

--unexpected-pass broken in latest refactor

As of the latest refactor, the --unexpected-pass flag produces a lot of errors:

534) Error:
SassSpec::Test#test__expanded_spec/libsass-todo-issues/issue_1029:
NoMethodError: undefined method `assert_not_equal' for #<SassSpec::Test:0x0000010e1e2670>
    /Users/michael/Projects/Sass/sass-spec/lib/sass_spec/test.rb:39:in `run_spec_test'
    /Users/michael/Projects/Sass/sass-spec/lib/sass_spec/test.rb:50:in `block (2 levels) in create_tests'

/cc @hcatlin
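The NoMethodError comes from calling a Test::Unit-style assertion under Minitest, which spells the negated assertion `refute_equal`. A minimal sketch of the rename (PassChecker is an illustrative stand-in, not actual sass-spec code):

```ruby
require "minitest/autorun"

# Minitest (unlike Test::Unit) has no assert_not_equal; the negated
# assertion is spelled refute_equal. PassChecker stands in for the
# runner's test class here.
class PassChecker
  include Minitest::Assertions
  attr_accessor :assertions

  def initialize
    @assertions = 0
  end
end

checker = PassChecker.new
# was: checker.assert_not_equal(expected, actual)  # => NoMethodError
checker.refute_equal(".foo { a: b; }", ".foo { a: c; }")
```

In sass-spec itself the fix would presumably be a one-line rename inside `run_spec_test`.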

Error output: SassEngineAdapter cannot be reliably parallelized

SassEngineAdapter attempts to redirect standard error output coming from the Sass module:

    begin
      captured_stderr = StringIO.new
      real_stderr, $stderr = $stderr, captured_stderr
      begin
        css_output = Sass.compile_file(sass_filename.to_s, :style => style.to_sym)
        [css_output, captured_stderr.string, 0]
      rescue Sass::SyntaxError => e
        ["", "Error: " + e.message.to_s, 1]
      rescue => e
        ["", e.to_s, 2]
      end
    ensure
      $stderr = real_stderr
    end

This unfortunately does not work reliably, since we ask Minitest to parallelize testing and the $stderr redirection is global.

When running tests very fast, one notices that sometimes stderr output is not captured, and sometimes one test captures the stderr of two tests. This is because by the time one test tries to redirect, stderr has already been redirected by some other test.

The solution is to:

  • abandon SassEngineAdapter (which was introduced in ba737f5 for speed)
  • abandon parallel tests
  • find a better way to capture errors from Sass.
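One shape the third option could take (a sketch, not the runner's actual code) is to shell out once per spec: a child process gets its own stderr pipe, so parallel workers cannot interleave their captures:

```ruby
require "open3"

# Sketch: capture output per subprocess instead of redirecting the global
# $stderr. The command and the [stdout, stderr, exit_code] return shape
# are illustrative.
def run_spec(command, input_path)
  stdout, stderr, status = Open3.capture3(*command, input_path)
  [stdout, stderr, status.exitstatus]
end
```

e.g. `run_spec(["sassc"], "spec/basic/input.scss")` would return the same output/error/status triple the adapter builds today, at the cost of one process spawn per test.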
