nette / tester
Tester: enjoyable unit testing in PHP with a code coverage reporter.
Home Page: https://tester.nette.org
License: Other
class Test {
/*
public function testSomething() // it will try to run this
{
///
}
*/
}
I wrote my first test with TestCase and forgot to run the test case ( (new SomethingTestCase())->run(); ). This might be a common mistake for former PHPUnit users.
Tester said that 1 test passed.
I think an error, or at least a warning, would be better when there is no assert in a test.
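The pitfall described above can be reproduced with a minimal sketch; the class below is a hypothetical, simplified stand-in for Tester\TestCase, not Tester's actual implementation:

```php
<?php
// Sketch of the pitfall: a TestCase-style class is defined, but the
// run() call is forgotten, so no test method ever executes and the
// file still "passes". (FakeTestCase is illustrative only.)
class FakeTestCase
{
    public $executed = 0;

    public function run()
    {
        // Tester's real runner reflects on test* methods; simplified here.
        foreach (get_class_methods($this) as $method) {
            if (strpos($method, 'test') === 0) {
                $this->$method();
                $this->executed++;
            }
        }
    }

    public function testSomething()
    {
        // assertions would go here
    }
}

$case = new FakeTestCase;
// (new FakeTestCase)->run();  // <- forgetting this line means 0 tests run
assert($case->executed === 0); // nothing ran, yet the file exits cleanly
```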
English would be better than Czech on http://tester.nette.org/
When running Nette Tester I got this error. The bug is somewhere in 0ab996a (previous commits work fine; this one and later ones do not).
Yes, php-cgi.exe is in PATH (otherwise the previous commits wouldn't work either).
Most current IDEs support PHPUnit as a test tool, with progress, a success/error list and other features.
My question is: how hard would it be to implement a CLI script simulating PHPUnit's behavior?
Just running the tests and producing the same output would be good enough.
(E.g. sendmail is the template for most contemporary mailers.)
After the commit "Fatal errors cannot be hidden by @shutup" (43a8b3d), fatal errors cannot be tested via register_shutdown_function(),
because the one registered in Helpers::setup() exits and the other is never executed.
This now happens in Nette with Debugger.shut-up.error.phpt (for example https://travis-ci.org/nette/nette/jobs/9130851).
@missing();
php test.php; echo $?
Wrong repo, sorry...
Bisected to b29932c, which introduced the problem; the previous commit was good.
Might be environment related, as Travis thinks it's OK. Trying under PHP 5.5.7 on Linux. Other tests pass.
So which one?
Scanning subdirectories for tests is sometimes a problem. Imagine this application structure:
application (current working dir)
|--- tests
|--- vendor
|--- |--- nette
|--- |--- |--- nette
|--- |--- |--- tester
When you run the Tester without args, the vendor dir is scanned, and the Nette and Tester tests are found and executed. This can be a surprise, mainly for newbies, and it can lead to problems with database tests.
In my opinion, the base problem is that git/composer always clones the whole repo, but that cannot be easily solved.
I would disable Tester's automatic directory scan when no directory is passed as an arg. In that case, the old behaviour could be enabled with a dot:
Tester\tester .
Things to be done:
If I run Tester on a whole directory, only the first file in the directory is tested.
I have a folder named exampleTests containing two files, ExampleTest.phpt and OtherTest.phpt. OtherTest.phpt contains one failing test.
I run C:\>php pathToTester\tester --tap -s -c php.ini pathToProject\exampleTests
The output is wrong: only ExampleTest.phpt is tested and all tests pass:
TAP version 13
ok tests\exampleTests\ExampleTest.phpt
1..1
I'm running on Win 7 64-bit, PHP 5.4.7, Nette Framework 2.0.12, Nette Tester v0.9.3.
Thanks
Is anybody writing any kind of documentation? I'll (slowly) prepare some, but I'd appreciate any other sources.
It seems to me that tag v0.9.2 points to the wrong commit after a force push.
Why does it need something that I don't have on my computer?
It is nice that we can write our own output handler, but there is no option to use it without editing tester.php.
CodeCoverage always appends data to the coverage.dat file. But this file may exist from a previous run and its data may be outdated. As a result, the coverage reports generate nonsense.
Maybe related to #103.
Is there a reason why the tester shows only the content of stdout after a test failure?
I just spent about an hour trying to figure out why my tests were suddenly failing with no information at all, because the causing exception was written to stderr.
It should be sufficient to append "2>&1" to the Job command at Job.php:77.
Or is there a reason for not doing so?
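A minimal sketch of the suggested fix, assuming the runner builds a shell command string (the command below is illustrative, not the actual Job.php code):

```php
<?php
// Sketch: merging stderr into stdout by appending "2>&1" to the
// command, so exceptions printed to stderr are not silently lost.
$command = 'php -r ' . escapeshellarg('fwrite(STDERR, "boom");') . ' 2>&1';
$output = shell_exec($command);
assert(strpos($output, 'boom') !== false); // stderr is now visible in the captured output
```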
Please, would it be possible to use absolute file paths in the TAP output (the diff line already uses them)? In NetBeans, it is not always possible to resolve them properly, like the first 2 files in the following output:
# in Tester/Framework/Assert.php(365)
# in Tester/Framework/Assert.php(57) Tester\Assert::fail()
# in nette-tester/tests/Greeting.test.phpt(16) Tester\Assert::same()
Thanks for considering it.
Discussed in issue #39:
(3) code coverage:
(3a) could we have a command line switch for it?
(3b) could we generate the report in XML (or any more parser-friendly) format?
See this comment #39 (comment) for more information why we need it.
Thanks.
There is an open RFC which should help users set up Tester correctly.
Please use something like:
$prefix = strlen(Nette\Strings::findPrefix($expected, $actual));
$start = max($prefix - 5, 0);
echo substr($expected, $start), " does not match ", substr($actual, $start);
By the way, awesome coloring etc. :-)
I would like to run only a specific method of my test case via the command line. I think this isn't possible now, because I can't get the CLI arguments in my bootstrap, where I auto-run my test cases.
Something like this:
> ./run-tests.sh -t testSomething Project/Test.phpt
... and in my bootstrap.php:
Tester\Environment::getArguments();
Of course, I can write my own tester.php or specify the method name in my bootstrap.php ((new TestCase)->runTest($name)), but this would be easier :)
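One way to approximate the requested behaviour today is a small argument parser in the bootstrap; everything below (the --method flag, the TestCase stand-in) is hypothetical, not Tester's API:

```php
<?php
// Sketch: pick a single test method from the command line in a
// bootstrap-style runner. SomethingTestCase is a simplified stand-in.
class SomethingTestCase
{
    public $ran = [];

    public function runTest($name)
    {
        $this->$name();          // invoke the requested test method
        $this->ran[] = $name;
    }

    public function testFoo() {}
    public function testBar() {}
}

$argv = ['script', '--method', 'testBar']; // simulated CLI input
$options = [];
for ($i = 1; $i < count($argv) - 1; $i++) {
    if ($argv[$i] === '--method') {
        $options['method'] = $argv[$i + 1];
    }
}

$case = new SomethingTestCase;
if (isset($options['method'])) {
    $case->runTest($options['method']);  // run only the requested method
}
assert($case->ran === ['testBar']);
```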
Discussed in issue #39:
(2) could we provide more parser-friendly output for TAP for failures? Something like:
# expected: foo
# actual: bar
I get 2 lines for a string comparison, but only one line if I compare a string and a number:
# Failed: 'Hello JohnX' should be
# ... 'Hello John'
# Failed: 'Hello JohnX' should be 10
Thanks.
Each test method should be counted as a test. Now Tester says you have run only 1 test when you run a TestCase with multiple tests.
It is not good that after a new test is written, the Tester output stays the same as before.
DOM query is a very cool tool. But I can't use it in other code (with PHPUnit-Selenium).
8021168 doesn't work after @milo's changes; neither does commit 84247e2.
vendor/nette/tester$ git co 8021168ed8c12d5b3f6547faf6ccb64e28ab0a16
Previous HEAD position was 84247e2... Runner: better SIGINT catching readability
HEAD is now at 8021168... Runner: invoke output handlers after received SIGINT (CTRL+C)
$ ./tests/run.sh -j 4 ./tests/
PHP 5.5.7 | 'php-cgi' -n -c 'tests/php.ini-unix' | 4 threads
FFF^C
$
I wrote a Nette presenter test where I tried to check whether the response was an instance of RedirectResponse. Unfortunately, I had a bug in my code and the response type was TextResponse.
I would expect some friendly error message, but Nette Tester tries to dump the asserted object into the error message. And when this object is a complex structure, like a Nette Template, the memory limit can be exhausted.
An error message like "Expected type ...\RedirectResponse but ...\TextResponse given" would be better in this case.
P.S.: I think similar bugs may occur in other assertions too.
Sometimes, when I implement new features, I need to fix tests. When there are many failed tests, it becomes quite messy, so I would like an option like fail-fast, which runs the tests and stops when one fails.
class SomeTestCase extends Tester\TestCase
{
public function test_issue_123()
{
Assert::fail('DIE motherfucker!!!');
}
}
(new SomeTestCase)->run('test_issue_123');
It exits with 0 instead of a failure; the problem is here.
This is just a note. Consider this subject's influence on Tester.
I am not sure whether this belongs to the Debugger, which causes the problem, or to the Tester, which runs the tests using the CGI version of PHP.
I enabled the Debugger in the tests bootstrap to get better error visualization and/or logging, and I was surprised by the HTML output in the console:
<!DOCTYPE html>
<meta charset="utf-8">
<meta name=robots content=noindex>
<meta name=generator content="Nette Framework">
...
<title>Server Error</title>
...
With the DEVELOPMENT mode enabled there is, of course, the whole red-screen/debugger bar...
I've found that the problem is in the isHtmlMode() method, where the CLI mode is not successfully detected because PHP_SAPI does not contain the string "cli" but "cgi-fcgi".
Am I missing something or doing something wrong?
If not, it would be cool to add this value ("cgi-fcgi") to the condition too.
As I am running PHP in FPM mode, I was wondering whether this value is also returned in server mode, and it is not. The "fpm-cgi" string is returned instead.
class SomeTest extends TestCase
{
public function testAaa()
{
Assert::same(2, 2);
}
public function testBbb()
{
Assert::same(3, []);
}
}
Both fail with the error array() should be 3 in testBbb(),
but only when run with the test runner, not directly.
The reason is that both test methods produce expected and actual files with the same name.
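A sketch of how unique dump filenames could avoid the collision, e.g. by including the test method name; the naming scheme and helper below are assumptions for illustration, not Tester's actual implementation:

```php
<?php
// Sketch: derive distinct .actual/.expected file names per test method,
// so two failing methods in one file do not overwrite each other's dumps.
function dumpFileName($testFile, $method, $suffix)
{
    $base = pathinfo($testFile, PATHINFO_FILENAME);
    return "$base.$method.$suffix"; // e.g. SomeTest.testBbb.actual
}

assert(dumpFileName('SomeTest.phpt', 'testAaa', 'actual')
    !== dumpFileName('SomeTest.phpt', 'testBbb', 'actual'));
assert(dumpFileName('SomeTest.phpt', 'testBbb', 'actual') === 'SomeTest.testBbb.actual');
```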
I just found out that there is no diff line for a simple assert:
Assert::same( 'Hello John', $h->say('Mary') );
(I am using the class from your README.)
The TAP output is:
not ok nette-tester/tests/Greeting.test_1.phpt
# Failed: 'Hello Mary' should be
# ... 'Hello John'
#
# in Tester/Framework/Assert.php(364)
# in Tester/Framework/Assert.php(56) Tester\Assert::fail()
# in nette-tester/tests/Greeting.test_1.phpt(11) Tester\Assert::same()
Using latest Tester sources (cloned repo).
Thanks.
Sometimes it can happen that no assertion is executed in a test file, e.g. because of an accidental die() before the first assertion, or just:
class MyTest extends TestCase {
function testFail() {
Assert::fail('FAIL');
}
}
# and there is missing (new MyTest)->run();
... and the test seems to pass.
The first idea is to count the Assert::method() calls, check the counter in the shutdown function, and fail or skip the test. As a bonus, it could also improve the otherwise meaningless statistics :)
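The counting idea could look roughly like this; the counter class and its hook are hypothetical, not Tester's API:

```php
<?php
// Sketch: count assertions and flag the test in a shutdown function
// when none were executed (e.g. because the run() call was forgotten).
class AssertCounter
{
    public static $count = 0;

    public static function bump()
    {
        self::$count++;
    }
}

register_shutdown_function(function () {
    if (AssertCounter::$count === 0) {
        // a real implementation would also set a non-zero exit code
        fwrite(STDERR, "Error: no assertions executed\n");
    }
});

// A test file that performs at least one assertion passes the check:
AssertCounter::bump();
assert(AssertCounter::$count === 1);
```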
cc @milo
Hi,
please, could anyone have a look at [1] and let me know whether it makes sense or not? I tend to close that issue as INVALID but would like to know your opinion.
Thanks!
[1] https://netbeans.org/bugzilla/show_bug.cgi?id=241996
In PHP 5.3.1 - 5.3.4 (maybe only on Windows) it is true that:
NAN == NAN
NAN === NAN
What to do:
It should be fixed in Assert::same, notSame, equal and notEqual, and perhaps elsewhere, but NAN is a very marginal thing.
The test for NAN in tests\Assert.equal.phpt should then be removed for PHP < 5.3.5.
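A guard for the buggy comparison could look like this sketch, independent of Tester's actual Assert internals (the helper name is hypothetical):

```php
<?php
// Sketch: explicit NAN handling so a broken ==/=== on old PHP builds
// cannot make NAN compare equal to NAN.
function sameValues($a, $b)
{
    if ((is_float($a) && is_nan($a)) || (is_float($b) && is_nan($b))) {
        return false; // NAN is never equal to anything, including itself
    }
    return $a === $b;
}

assert(sameValues(NAN, NAN) === false);
assert(sameValues(1.0, 1.0) === true);
```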
This issue is very similar to atoum/atoum#171.
We (NetBeans IDE) are currently thinking about nette/tester support [1] - but there are several things that would need to be done:
a porcelain mode
(or similar) which would ensure that the CLI output will never change and can be easily parsed; something like e.g. Git already has:
==> Suite MyApplicationSuite started
==> Test sayHello started
==> Test sayHello failure:
[information about failure - exception, error, failure]
==> Test sayHello finished: 5ms
==> Suite MyApplicationSuite finished: 25ms
==> Running duration: 0.15 second.
Thanks a lot.
[1] https://netbeans.org/bugzilla/show_bug.cgi?id=228679
Please add the possibility to specify an error message, like in PHPUnit:
Assert::same($expected, $actual, $message);
As an early adopter, I am very curious about what phase the development of this tool is in. Currently I'm using PHPUnit and Mockery for unit testing. If I were able to use the same tool as the framework, I would be very happy.
Note: currently the biggest pain for me is the final methods in the core framework classes. But that's another story.
Sometimes it is useful to test the previous exception too. Now it must be done by catching manually and walking through the previous exceptions. It would be nice to support:
Assert::throw(function() {
throw new \Exception(..., ..., $previous);
}, '\Exception', 'message', '\Previous\Exception', 'Previous message');
or
Assert::throw(function() {
throw new \Exception(..., ..., $previous);
},
array('\Exception', '\Previous\Exception'),
array('message', 'Previous message')
);
or
Assert::throw(function() {
throw new \Exception(..., ..., $previous);
}, new \Exception('message', ..., new \PreviousException('Previous message')));
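Until such an API exists, the manual catch-and-walk can be wrapped in a small helper; the function below is a hypothetical sketch of what the proposed assertion would automate:

```php
<?php
// Sketch: verify the previous exception by walking getPrevious()
// manually, which is what the proposed Assert API would do internally.
function checkPreviousException(callable $fn, $class, $message)
{
    try {
        $fn();
    } catch (\Exception $e) {
        $prev = $e->getPrevious();
        return $prev instanceof $class && $prev->getMessage() === $message;
    }
    return false; // nothing was thrown
}

$ok = checkPreviousException(function () {
    throw new \RuntimeException('outer', 0, new \LogicException('inner'));
}, 'LogicException', 'inner');

assert($ok === true);
```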
http://dev.theladders.com/2013/02/mutation-testing-with-pit-a-step-beyond-normal-code-coverage/
Time estimate: in a galaxy far, far away :)
Is there any option to tell Tester that some tests should not run in parallel?
I have some tests which fill the database in setUp and restore the original state of the DB in tearDown. But when two tests run in parallel and fill the same table, they fail because of unexpected data in the database.
It would be great if I could tell the runner that a test should run only when no other test with the same flag (group or something else) is active (currently running).
I know there is an option to tell the runner to use only 1 thread, but I need only some tests to not run in parallel. Others can run normally, e.g. if they use different tables.
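A common workaround is an exclusive file lock taken in setUp, so that database tests serialize among themselves while the rest of the suite runs in parallel; the lock file path below is an assumption:

```php
<?php
// Sketch: serialize tests that share a database by taking an exclusive
// advisory lock on a well-known file; parallel jobs block on flock()
// until the current holder releases it.
$lock = fopen(sys_get_temp_dir() . '/db-tests.lock', 'c');
flock($lock, LOCK_EX);   // setUp: wait until no other DB test holds the lock
// ... fill the database and run assertions here ...
flock($lock, LOCK_UN);   // tearDown: let the next DB test proceed
fclose($lock);
assert(true); // reached only after the lock was acquired and released
```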
Please add an option to change the destination of the .actual/.expected files. For us, it would be better to have these files, for example, in a logs/tests/failed directory than to search for them in the tests directory, which has many subdirectories.
class SomeTest extends TestCase
{
private function testAaa()
{
Assert::fail('aaa');
}
}
It should notify the user instead.
This happens only when run with the test runner, not directly.