
os-autoinst Introduction


The OS-autoinst project aims to provide a means to run fully automated tests, especially tests of basic and low-level operating system components such as the bootloader, kernel, installer and upgrade, which cannot easily and safely be tested with other automated testing frameworks. However, it can just as well be used to test Firefox or OpenOffice running on top of a newly installed OS.

os-autoinst can be executed on its own, but it is currently designed to be executed together with openQA, the web user interface that allows running more than one os-autoinst instance at the same time.

More information on os-autoinst and openQA can be found on http://os-autoinst.github.io/openQA/

Getting started

Under openSUSE the os-autoinst package can be installed from the official repository or from our devel repository. For further details, have a look at the openQA documentation.

To build os-autoinst manually, check out the build instructions below.

The main executable isotovideo can read test parameters from the command line or from a file named vars.json. This file stores the values of the variables that configure the behavior of the test execution.

A container image is provided; pulling it and starting the main execution can be done in one step. For example, using the podman container engine to run tests defined in the current directory on x86_64, assuming your environment supports KVM virtualization acceleration:

podman run --rm -it -v .:/tests registry.opensuse.org/devel/openqa/containers/isotovideo:qemu-kvm casedir=/tests

Use the image variant ending with qemu-x86 on x86_64 if no KVM support is available.
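For example, assuming the tag is exactly qemu-x86 and the registry path is the same as above, the corresponding invocation without KVM acceleration might look like this:

podman run --rm -it -v .:/tests registry.opensuse.org/devel/openqa/containers/isotovideo:qemu-x86 casedir=/tests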

Additional test variables can be supplied on the command line. Some variables are used by os-autoinst itself and others are used by the tests. A minimal command line can look like this:

isotovideo distri=opensuse casedir=/full/path/for/tests iso=/full/path/for/iso

As an alternative, or in addition, a corresponding vars.json with additional parameters could look like this:

{
   "DISTRI" :      "opensuse",
   "CASEDIR" :     "/full/path/for/tests",
   "NAME" :        "test-name",
   "ISO" :         "/full/path/for/iso",
   "VNC" :         "91",
   "BACKEND" :     "qemu",
   "DESKTOP" :     "kde"
}

Be advised that vars.json is also modified by os-autoinst, so make sure to back up handcrafted versions of this file.
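For example, one simple way to keep a handcrafted configuration around is to copy it before each run (the name of the copy is arbitrary):

cp vars.json vars.json.backup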

For more concrete instructions read on in the "How to run test cases" section below. Find sections about "How to contribute" or "Build instructions" further below.

How to run test cases

The following instructions show how to run test cases. First, one needs to clone the test distribution. Check out os-autoinst-distri-example for an example of a minimal test distribution.

Example for openSUSE’s tests:

mkdir distri && cd distri
git clone git@github.com:os-autoinst/os-autoinst-distri-opensuse.git opensuse
cd opensuse/products/opensuse
git clone git@github.com:os-autoinst/os-autoinst-needles-opensuse.git needles

Example for openQA’s self-tests ("openQA-in-openQA" test):

mkdir distri && cd distri
git clone git@github.com:os-autoinst/os-autoinst-distri-openQA.git openqa
cd openqa
git clone git@github.com:os-autoinst/os-autoinst-needles-openQA.git needles

Then create a working directory for the test execution, e.g.:

mkdir /tmp/os-autoinst-run && cd /tmp/os-autoinst-run

Create a minimal vars.json config file within that directory, e.g.:

vars.json
{
   "ARCH" : "x86_64",
   "BACKEND" : "qemu",
   "CASEDIR" : "/path/to/os-autoinst-distri-opensuse",
   "DESKTOP" : "gnome",
   "DISTRI" : "opensuse",
   "ISO" : "/path/to/openSUSE-Tumbleweed-DVD-x86_64-Snapshot20160715-Media.iso",
   "PRODUCTDIR" : "/path/to/os-autoinst-distri-opensuse/products/opensuse",
   "VNC" : 90,
}

You will need to correct the file paths to point to real locations. Some of the variables you can use are listed in the documentation within this repository. Test-case-specific variables are listed in the distri directories, e.g. os-autoinst-distri-opensuse/variables.

Then you can run the isotovideo script within the created working directory. When doing a manual build, that script can be found at the top-level of the os-autoinst Git checkout.

All of these examples were using the QEMU backend which is usually the easiest backend to handle and therefore recommended. If you need to develop and test other backends, have a look at the backend-specific documentation.

When using the QEMU backend it is possible to access the system under test via VNC:

vncviewer localhost:91 -ViewOnly -Shared

Run isotovideo with the environment variable RUN_VNCVIEWER set to autostart a VNC viewer on the right port.

Run isotovideo with the environment variable RUN_DEBUGVIEWER set to start the internal debug screenshot viewer, which is continuously updated with a recent screenshot of the test run.
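For example, assuming any non-empty value enables these helpers, a run with both viewers could be started like this:

RUN_VNCVIEWER=1 RUN_DEBUGVIEWER=1 isotovideo distri=opensuse casedir=/full/path/for/tests iso=/full/path/for/iso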

Develop test modules

Individual test modules are written in Perl, one test module per file, using the test API. Experimental support for writing test modules in Python is also provided.

Find more details about how to write tests on http://open.qa/docs/#_how_to_write_tests
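For orientation, a minimal sketch of what such a Perl test module can look like (the needle tag, commands and flags are made-up examples, not part of an actual test distribution):

use base 'basetest';
use strict;
use warnings;
use testapi;

sub run {
    # wait until the (hypothetical) login prompt needle matches, then log in
    assert_screen 'login-prompt';
    type_string "root\n";
    type_password;
    send_key 'ret';
    # run a command and fail this module if its exit code is non-zero
    assert_script_run 'uname -a';
}

sub test_flags {
    # do not abort the whole test run if this module fails
    return {fatal => 0};
}

1;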

Verifying a runtime environment

To check whether your hardware is able to successfully execute os-autoinst based tests, one can run openQA tests, run all the development tests, or simply call something like

podman run --pull=always --rm -it --entrypoint '' registry.opensuse.org/devel/openqa/containers/os-autoinst_dev:latest /bin/sh -c 'git -C /opt clone --depth 1 https://github.com/os-autoinst/os-autoinst && make -C /opt/os-autoinst/ test-perl-testsuite TESTS=t/99-full-stack.t'

which only requires the "podman" container runtime and runs a container-based os-autoinst full-stack test, here without KVM hardware-accelerated virtualization support.

How to contribute

If you want to contribute to this project, please clone and send pull requests via https://github.com/os-autoinst/os-autoinst.

More information on contributing can also be found at http://os-autoinst.github.io/openQA/contact/.

For an overview of the architecture, see doc/architecture.md.

Rules for commits

  • Every commit is checked by our CI system as soon as you create a pull request, but you should also run the os-autoinst tests locally. Check out the build instructions for further details.

  • For git commit messages, use the rules stated in "How to Write a Git Commit Message" as a reference.

  • Every pull request is reviewed in a peer review to give feedback on possible implications and on how we can help each other improve.

If this is too much hassle for you, feel free to provide incomplete pull requests for consideration or to create an issue with a code change proposal.

Deprecation approach

In case you want to deprecate functionality consider the use of the function backend::baseclass::handle_deprecate_backend.

Build instructions

Installing dependencies

On openSUSE one can install the package os-autoinst-devel, which provides all the dependencies to build and run os-autoinst for the corresponding version of the sources. To build a current version of os-autoinst it is recommended to install os-autoinst-devel from devel:openQA, as the distribution-provided packages might be too old or lack dependencies. This is particularly true for openSUSE Leap. Also see the openQA docs.

The required dependencies are also declared in dependencies.yaml. (The names listed within that file are specific to openSUSE but can be easily transferred to other distributions.)

Conducting the build

Simply call

make

in the top-level folder; this automatically creates a build directory and builds the complete project.

Call

make help

to list all available targets.

The above commands use a convenience Makefile that calls CMake. For packaging, when using an IDE, or to conduct the steps manually, it is suggested to use CMake directly and do the following: create a build directory outside of the source directory. The following commands need to be invoked within that directory.

Configure build:

cmake $path_to_os_autoinst_checkout

You can specify any of the standard CMake variables, e.g. -DCMAKE_BUILD_TYPE=Debug and -DCMAKE_INSTALL_PREFIX=/custom/install/prefix.

The following examples assume that GNU Make is used. It is possible to generate for a different build tool by adding e.g. -G Ninja to the CMake arguments.
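For example, a debug build using the Ninja generator and a custom installation prefix could be configured like this:

cmake -G Ninja -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX=/custom/install/prefix $path_to_os_autoinst_checkout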

Build executables and libraries:

make symlinks

This target also creates symlinks of the built executables and libraries within the source directory so isotovideo can find them.

Run all tests:

make check

By default CTest is invoked in verbose mode because prove already provides condensed output. Add -DVERBOSE_CTEST=OFF to the CMake arguments to avoid that.

Run all Perl tests (*.t files found within the t and xt directories):

make test-perl-testsuite

Run individual tests by specifying them explicitly:

make test-perl-testsuite TESTS="t/15-logging.t t/28-signalblocker.t"

Run perl author tests:

make test-local-author-perl

Run all author tests:

make test-local

Notice that the user needs to include the test directory for each test (either t for normal or xt for developer-centric tests) when specifying individual tests.

Add additional arguments to the prove invocation, e.g. enable verbose output:

make test-perl-testsuite PROVE_ARGS=-v

Gather coverage data while running tests:

make test-perl-testsuite WITH_COVER_OPTIONS=1

Generate a coverage report from the gathered coverage data:

make coverage

If no coverage data has been gathered so far the coverage target will invoke the testsuite automatically.

Reset gathered coverage data:

make coverage-reset

Install files for packaging:

make install DESTDIR=…

Automatically tidy all perl files:

tools/tidyall

Tidy all changed perl files:

tools/tidyall --git

Further notes:

  • When using the test-perl-testsuite target, ctest is not used (and therefore ctest specific tweaks have no effect).

  • One can always run Perl tests manually via prove after the build has been conducted with make symlinks. Note that some tests need to be invoked within the t directory. An invocation like prove -vI.. -I../external/os-autoinst-common/lib 28-signalblocker.t is supposed to work.

  • It is also possible to run ctest within the build directory directly instead of using the mentioned targets.

  • All mentioned variables to influence the test execution (TESTS, WITH_COVER_OPTIONS, …) can be combined and can also be used with the coverage target; see the example below.
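For example, a combined invocation along those lines could look like this:

make coverage TESTS="t/28-signalblocker.t" WITH_COVER_OPTIONS=1 PROVE_ARGS=-v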

Running isotovideo as CI check

We provide a container to run isotovideo which can be used to run QEMU-based tests directly in a CI runner. Check out this example workflow for how it can be used. The README of the example test distribution also contains further details.

The script imgsearch in the repository’s script folder makes it possible to use the fuzzy image comparison independently of the normal test execution. Invoke the script with no parameters to show its usage. There is also an example file showing what output you can expect: there is one key for each file to be searched, the best matching image shows up under match, and the other images appear under candidates. If no image matches well enough, match will be null.
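For illustration only, the output is shaped roughly like this (file and image names are made up; consult the example file in the repository for the real format):

{
   "file-to-search-1.png" : {
      "match" : "best-matching-image.png",
      "candidates" : [ "another-image.png" ]
   },
   "file-to-search-2.png" : {
      "match" : null,
      "candidates" : [ "another-image.png" ]
   }
}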

To use the script the previously shown build instructions need to be executed (including the invocation of the symlinks target).

History of os-autoinst

At the time, Bernhard M. Wiedemann (who later joined SUSE) was on the openSUSE testing team and was assigned the task of testing the installer. This meant the tedious and dull work of waiting for 4GB ISO files to download when it was not even clear whether those things would even boot. And as the Perl founder Larry Wall states, important traits of programmers are laziness, impatience and hubris, which quickly led to developing os-autoinst to automate installations ;) See https://lizards.opensuse.org/2010/04/29/making-of-the-opensuse-install-video/ and https://lizards.opensuse.org/2010/05/25/automated-opensuse-testing/ for Bernhard’s blog posts.

Further notes

When using the QEMU backend, also ensure that the user running os-autoinst has access to /dev/kvm.

modprobe kvm-intel || modprobe kvm-amd
chgrp kvm /dev/kvm ; chmod g+rw /dev/kvm # maybe redundant
# optionally use a new user; just to keep things separate
useradd -m USERNAME -G kvm
passwd USERNAME # and/or add ~USERNAME/.ssh/authorized_keys


openQA Issues

Web front end can't find files for tests

I've just set up a test deployment of Fedora's openQA on a local system. It's mostly working, but all the features which rely on openQA being able to find files in the testresults directory don't seem to be working:

  • I don't get a live log or live view while the tests are running
  • I don't get a list of result files for a finished test
  • I don't see thumbnails for each test step on a finished test (but I do see screenshots when I enter an individual test step)

Comparing to our production deployment where all this stuff works, I see what looks to be a significant difference. On our production deployment, the breadcrumb trail looks something like this:

openQA > Test results > fedora-rawhide-server-x86_64-Build22_Alpha_TC7-server_simple_encrypted

(i.e., whatever's considered the 'test name' for that display does not include the test number). But on my deployment it looks like this:

openQA > Test results > 00000011-fedora-rawhide-server-x86_64-Build22_Alpha_TC7-server_firewall_disabled

The test number is included, there.

however, for both deployments, the testresults subdirectory names have the test number in them. On my deployment I have:

/var/lib/openqa/testresults/00000011-fedora-rawhide-server-x86_64-Build22_Alpha_TC7-server_firewall_disabled

and on the working deployment there is:

/var/lib/openqa/testresults/00000081-fedora-rawhide-server-x86_64-Build22_Alpha_TC7-server_simple_encrypted

So I think there's probably some kind of mismatch somewhere and the server's probably looking for the files in /var/lib/openqa/testresults/00000011-00000011-fedora-rawhide-server-x86_64-Build22_Alpha_TC7-server_firewall_disabled or something like that, and that's why it can't find them? But I'm no perl expert and have no experience with Mojolicious, so I haven't been able to pin down exactly where the problem is, unfortunately.

Add Apparmor in the doc for TAP

Following the documentation for TAP networking in openQA ("Networking in openQA"),

you could encounter the following issue:

QEMU: qemu-system-x86_64: -netdev tap,id=qanet0,ifname=tap0,script=no,downscript=no: could not open /dev/net/tun: Permission denied

This can be caused by:
1. the owner of the tun/tap device (this is well documented in the doc, so OK), or
2. AppArmor blocking access, which is not documented.

So we have to add the following to the doc:

vim /etc/apparmor.d/usr.share.openqa.script.worker

  /dev/net/tun rw,

systemctl restart apparmor.service

needle diff mockup

We have multiple problems with the needle diff as it is. Mostly, the fact that we blend images over each other gives the wrong impression that we compare full screens and not areas. So IMO we should have two modes: one improved area comparison and one full-screen view.

To improve the area comparison I have this mockup (attached in the original issue); perhaps this is better than what we have.

Next to this there would be a full-screen comparison not marking areas at all.

`Can't call method "websockets"` error message when displaying workers in admin

I've just installed openQA from the repository, and when I try to list all workers in the admin area, it shows the dinosaur error page, and /var/log/openqa contains:

[Fri Oct  2 13:26:21 2015] [error] Can't call method "websockets" on an undefined value at /usr/share/openqa/script/../lib/OpenQA/Schema/Result/Workers.pm line 123.
118: }
119: 
120: sub connected {
121:     my ($self) = @_;
122:     my $ipc = OpenQA::IPC->ipc;
123:     return $ipc->websockets('ws_is_worker_connected', $self->id) ? 1 : 0;
124: }
125: 
126: sub info {
127:     my ($self) = @_;
128: 

I have openQA-4.1443110314.cc82053-697.4.noarch and openQA-worker-4.1443110314.cc82053-697.4.noarch packages installed.

unable to login with unicode chars in 'firstname' , 'lastname' fields in openid

I try to log in to http://openqa.opensuse.org with my openSUSE OpenID account 'mimi.vx',
and as a result I get the rainbow-vomiting raptor with a horn...

Full redirect URL:

https://openqa.opensuse.org//response?oic.time=1415959398-60a2c43c53c9f3c50ace&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.op_endpoint=https%3A%2F%2Fwww.opensuse.org%2Fopenid%2Fprovider&openid.claimed_id=https%3A%2F%2Fwww.opensuse.org%2Fopenid%2Fuser%2Fmimi_vx&openid.response_nonce=2014-11-14T10%3A03%3A31Z0&openid.mode=id_res&openid.identity=https%3A%2F%2Fwww.opensuse.org%2Fopenid%2Fuser%2Fmimi_vx&openid.return_to=https%3A%2F%2Fopenqa.opensuse.org%2F%2Fresponse%3Foic.time%3D1415959398-60a2c43c53c9f3c50ace&openid.assoc_handle=1408580992698-3628&openid.signed=op_endpoint%2Cclaimed_id%2Cidentity%2Creturn_to%2Cresponse_nonce%2Cassoc_handle%2Cns.ext1%2Cns.ext2%2Cext1.email%2Cext1.fullname%2Cext2.mode%2Cext2.type.email%2Cext2.value.email%2Cext2.type.fullname%2Cext2.value.fullname%2Cext2.type.nickname%2Cext2.value.nickname%2Cext2.type.firstname%2Cext2.value.firstname%2Cext2.type.lastname%2Cext2.value.lastname&openid.sig=3ZxLuRj8lz2Z5T9N3p2iJOzWry3D5kLoprgUD7bFYNM%3D&openid.ns.ext1=http%3A%2F%2Fopenid.net%2Fextensions%2Fsreg%2F1.1&openid.ext1.email=mimi.vx%40gmail.com&openid.ext1.fullname=Ond%C5%99ej+S%C3%BAkup&openid.ns.ext2=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ext2.mode=fetch_response&openid.ext2.type.email=http%3A%2F%2Fschema.openid.net%2Fcontact%2Femail&openid.ext2.value.email=mimi.vx%40gmail.com&openid.ext2.type.fullname=http%3A%2F%2Faxschema.org%2FnamePerson&openid.ext2.value.fullname=Ond%C5%99ej+S%C3%BAkup&openid.ext2.type.nickname=http%3A%2F%2Faxschema.org%2FnamePerson%2Ffriendly&openid.ext2.value.nickname=mimi_vx&openid.ext2.type.firstname=http%3A%2F%2Faxschema.org%2FnamePerson%2Ffirst&openid.ext2.value.firstname=Ond%C5%99ej&openid.ext2.type.lastname=http%3A%2F%2Faxschema.org%2FnamePerson%2Flast&openid.ext2.value.lastname=S%C3%BAkup

Showing more than 500 test results not working with sqlite

When listing all test results (by going to http://openqa/tests) with 500 results or more, openQA crashes when using an SQLite database. The log contains:

[Thu Aug  6 08:23:23 2015] [error] DBIx::Class::Storage::DBI::_prepare_sth(): DBI Exception: 
 DBD::SQLite::db prepare_cached failed: too many SQL variables [for Statement "SELECT 
 me.child_job_id, me.parent_job_id, me.dependency FROM job_dependencies me WHERE ( ( 
 child_job_id IN ( ?, ...,  ? ) OR parent_job_id IN ( ?, ...,  ? ) ) )"]
at /usr/share/openqa/script/../lib/OpenQA/Controller/Test.pm line 122

It's actually caused by the code linked here and here (in the original issue): 500 results are queried from the db, but the list is used twice in the SQL query, and according to the SQLite documentation the maximum number of host parameters in a single SQL statement is 999. Editing those two lines to limit the query to 499 results works.
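For illustration, a generic way to stay below that limit is to chunk the ID list and query in batches; this is only a sketch, not the actual openQA code, and the resultset and column names are taken from the error message above:

my @ids = @job_ids;    # the job IDs collected for the page
my @dependencies;
while (my @chunk = splice(@ids, 0, 400)) {
    # 400 IDs used twice per statement stays well below SQLite's limit of 999 host parameters
    push @dependencies, $schema->resultset('JobDependencies')->search({
        -or => [
            {child_job_id  => {-in => \@chunk}},
            {parent_job_id => {-in => \@chunk}},
        ],
    })->all;
}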

Every '#'-reference links to test if it's unknown

So we have implemented redirects for 'poo#' and 'bsc#' tags in comments that link to progress.opensuse.org and Bugzilla, and also 't#' for a link to a specific test.

I've tried to use 'gh#' for a GitHub pull request, but this was also linked to a test, which most likely doesn't exist.

I would suggest disabling links for unknown tags (and implementing 'gh#' for GitHub ;))


script/fetchneedles got "No such file or directory"

I got "/usr/share/openqa/script/fetchneedles: line 58: cd: /var/lib/openqa/tests/opensuse: No such file or directory" when running /usr/share/openqa/script/fetchneedles.
I added the following line after line 57 to avoid this problem:

[ -d "$dir/$dist" ] || mkdir "$dir/$dist"

TAP with Open vSwitch

The section "TAP with Open vSwitch" on the networking documentation does not seem to work as described.
Apart from being slightly misleading (even with ovs tunctl is still needed), there are two issues that should be mentioned:

  • At least on Leap 42.1, workers started by systemd fail with a "Permission denied" message for tapX. Launching them manually with "sudo -u _openqa-worker ..." works.
  • Before the tapX interfaces can be used by QEMU, they need to be up: "ip link set tap0 up"

And last but not least: As the tapX interfaces are tagged, the traffic reaching br0 is tagged as well.
This means that routing won't work as described, as br0 needs to be on the same VLAN.

`scan_old_jobs()` stuck on infinite repeat due to malformed JSON file

So I just noticed GRU was causing high load on our production instance, but not our staging one. After a bit of digging, I've found out why.

GRU was stuck running scan_old_jobs() task over and over (when a GRU job fails, it doesn't sleep or remove it from the queue, it just keeps retrying it on an infinite loop). Turns out that one of our test result JSON files, for some reason, got malformed / corrupted. Thus the problem happens when scan_old_jobs() does my $details = $module->details();. That calls the details() sub in JobModules.pm, which tries to parse the JSON file, without guarding against failure: my $ret = JSON::decode_json(<$fh>);. If in fact the input is invalid, decode_json() completely errors out; the whole process dies at that point, thus GRU immediately tries to do it again.

The University of Stack Overflow suggests either sticking the decode_json() call in an eval ... or ... block, or using Try::Tiny. I did check to see if the JSON lib has some kind of function that would just validate the input and not flat out explode if it was malformed, but I couldn't find one; there are other JSON libs that do this, though, so using one of those could also be an option.
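For illustration, a minimal sketch of the eval-based guard suggested above (log_warning stands in for whatever logging mechanism the surrounding code actually uses):

my $ret = eval { JSON::decode_json(<$fh>) };
if (my $error = $@) {
    # skip the corrupted file instead of letting the whole GRU process die
    log_warning("ignoring malformed job module details: $error");
    return [];
}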

What do you guys think?

consoletest_setup failure on AArch64

https://openqa.opensuse.org/tests/136073/modules/consoletest_setup/steps/8

failure is:

ps axf > /tmp/psaxf_consoletest_setup.log ; echo 0FJJQ > /dev/ttyAMA0)

-bash: syntax error near unexpected token ')'

I cannot seem to figure out where this extra token comes from. I fear that this is another side effect of a USB keyboard buffer overflow:

09:31:26.4729 Debug: /var/lib/openqa/share/tests/opensuse/tests/console/consoletest_setup.pm:32 called testapi::script_run
09:31:26.4732 <<< type_string(string=' ; echo 0FJJQ > /dev/ttyAMA0
', max_interval=250)
09:31:28.6399 QEMU: usb-kbd: warning: key event queue full

How to begin a test on openQA WebUI?

I have done all the installation work as the README guide describes and added the test suites to job templates. Then I start the worker manually, but it seems nothing happens.

sudo /usr/share/openqa/script/worker --instance 1 --apikey D80EDAB499F5XXXX --apisecret 26D316CD06XXXXXX --verbose

output:

## adding timer register_worker 0
## removing timer register_worker
registering worker ...
new worker id is 1...
## adding timer setup_websocket 0
## removing timer setup_websocket
WEBSOCKET ws://localhost/api/v1/workers/1/ws
## adding timer ws_keepalive 5
## adding timer check_job 0
## removing timer check_job
checking for job ...
POST http://localhost/api/v1/workers/1/grab_job?backend=qemu&cpu_arch=x86_64&mem_max=3847&cpu_opmode=32-bit,+64-bit&instance=1&host=choldrim-pc

I added the job in the openQA WebUI following:
https://github.com/os-autoinst/openQA/blob/master/docs/GettingStarted.asciidoc#using-job-templates-to-automate-jobs-creation

It seems the backend can't find the jobs defined in the WebUI. I have been stuck on this for a long time ;( Could someone do me a favor? Thanks.

Unable to log in after installation

I don't know whether it's related to #72 (but I don't have any Unicode character in my name), but after I installed openQA, I cannot log in the first time: after I approve the login from OpenID, it shows the uniraptor vomiting a rainbow.

/var/log/openqa shows:

[Wed Apr  1 12:29:43 2015] [error] Can't locate object method "params" via package "Mojo::Parameters" at /usr/share/openqa/script/../lib/OpenQA/Controller/Session.pm line 127. 

I am using Fedora as OpenID provider (https://id.fedoraproject.org/). Complete redirect url is:

https://localhost/response?oic.time=1427885467-58af4a2a435d16ecca1d&openid.assoc_handle={HMAC-SHA1}{551bcda7}{qGmiRA%3D%3D}&openid.ax.count.email=1&openid.ax.count.firstname=0&openid.ax.count.fullname=1&openid.ax.count.lastname=0&openid.ax.count.nickname=1&openid.ax.mode=fetch_response&openid.ax.type.email=http%3A%2F%2Fschema.openid.net%2Fcontact%2Femail&openid.ax.type.firstname=http%3A%2F%2Faxschema.org%2FnamePerson%2Ffirst&openid.ax.type.fullname=http%3A%2F%2Faxschema.org%2FnamePerson&openid.ax.type.lastname=http%3A%2F%2Faxschema.org%2FnamePerson%2Flast&openid.ax.type.nickname=http%3A%2F%2Faxschema.org%2FnamePerson%2Ffriendly&openid.ax.value.email.1=rajcze%40gmail.com&openid.ax.value.fullname.1=Project+Coconut&openid.ax.value.nickname.1=coconut&openid.claimed_id=http%3A%2F%2Fcoconut.id.fedoraproject.org%2F&openid.identity=http%3A%2F%2Fcoconut.id.fedoraproject.org%2F&openid.mode=id_res&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.ns.ax=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ns.pape=http%3A%2F%2Fspecs.openid.net%2Fextensions%2Fpape%2F1.0&openid.ns.sreg=http%3A%2F%2Fopenid.net%2Fextensions%2Fsreg%2F1.1&openid.op_endpoint=https%3A%2F%2Fid.fedoraproject.org%2Fopenid%2F&openid.pape.auth_level.nist=2&openid.pape.auth_level.ns.nist=http%3A%2F%2Fcsrc.nist.gov%2Fpublications%2Fnistpubs%2F800-63%2FSP800-63V1_0_2.pdf&openid.pape.auth_policies=http%3A%2F%2Fschemas.openid.net%2Fpape%2Fpolicies%2F2007%2F06%2Fnone&openid.pape.auth_time=2015-04-01T10%3A51%3A16Z&openid.response_nonce=2015-04-01T10%3A51%3A19Z0YncuX&openid.return_to=https%3A%2F%2Flocalhost%2Fresponse%3Foic.time%3D1427885467-58af4a2a435d16ecca1d&openid.sig=%2Bnm8zJ8MHyA%2FWDrVw93IWUJa13M%3D&openid.signed=assoc_handle%2Cax.count.email%2Cax.count.firstname%2Cax.count.fullname%2Cax.count.lastname%2Cax.count.nickname%2Cax.mode%2Cax.type.email%2Cax.type.firstname%2Cax.type.fullname%2Cax.type.lastname%2Cax.type.nickname%2Cax.value.email.1%2Cax.value.fullname.1%2Cax.value.nickname.1%2Cclaimed_id%2Cidentity%2Cmode%2Cns%2Cns.ax%2Cns.pape%2Cns.sreg%2Cop_endpoint%2Cpape.auth_level.nist%2Cpape.auth_level.ns.nist%2Cpape.auth_policies%2Cpape.auth_time%2Cresponse_nonce%2Creturn_to%2Csigned%2Csreg.email%2Csreg.fullname%2Csreg.nickname&openid.sreg.email=rajcze%40gmail.com&openid.sreg.fullname=Project+Coconut&openid.sreg.nickname=coconut

script/client does not accept variables containing '='

When running a test from the command line, e.g.

/usr/share/openqa/script/client jobs post DISTRI=sle VERSION=12 ISO=SLE-12-Server-DVD-x86_64-GM-DVD1.iso ... MYPARAM='name=value'

it returns:
ERROR: 403 - Forbidden
{ error => "Not authorized" }

Somehow, '=' is incorrectly parsed and causes a misleading error message (all is well if '=' is not present).

Do database init / upgrade in openQA, not RPM spec

Currently, both openSUSE and Fedora have database init / upgrade happening in RPM %post. I kinda don't like this; it doesn't really 'belong' to packaging. In my opinion openQA itself should do this.

I'm willing to work on this, but I wanted to file a ticket first to get other folks' thoughts on whether it's a good idea and if so how/where to do it. My initial thought was to put it into connect_db in Schema.pm (or make it a separate sub which connect_db calls).

The initdb / upgradedb scripts would still exist for the purpose of doing --prepare_init / --prepare_upgrades, but init_database and upgrade_database would be done in openQA itself.

Thoughts? Thanks!

Test suite settings should take priority over settings passed by POST

Fedora for ARM is distributed not by installation ISO, but by already preinstalled disk image. When I want to schedule our ARM tests, I'm doing so while setting HDD_1_URL, but not ISO. I have one test that boots from this disk, creates user/sets password with our "initial-setup" utility and then saves disk by setting STORE_HDD_1. Then I have another ARM test where I want to use that saved disk, but I cannot, because HDD_1 set on test gets overwritten by parameter passed during test scheduling.

I tried to overcome this by setting HDD_ARM=http://url/to/hdd during POST, HDD_1_URL=%HDD_ARM% on first test and correctly setting HDD_1 on second test, but _URL gets resolved only for parameters set by POST (so HDD_1 wasn't created and disk wasn't downloaded). I also tried to set ASSET_1_URL=http://url/to/hdd during POST, HDD_1=%ASSET_1% on first test and correctly setting HDD_1 on second test and that almost worked (ASSET_1 was created and disk was downloaded), but then disk was placed in other/ directory, not in hdd/ directory (so it couldn't find it).

I cannot come up with a way to solve this problem (resolving _URL for all parameters would be enough), but I think that parameters set on a specific test should take precedence over "generic" parameters passed during POST for all tests.

The rationale is that when I'm passing parameters that are the same for all tests planned by one POST, I should still be able to override parameters for specific tests.

ikvm: no message action for 0x80/128 unsupported message type received at /usr/lib/os-autoinst/consoles/VNC.pm

I'm just doing the test on a Supermicro machine via iKVM.
After setting up the IPMI environment, I got the error below.
I printed the message_type; it is 128.

http://147.2.212.197/tests/254/file/autoinst-log.txt

/usr/lib/os-autoinst/consoles/vnc_base.pm:49:{
'ikvm' => 1,
'username' => 'ADMIN',
'password' => 'ADMIN',
'port' => 5900,
'hostname' => '147.2.208.125'
}
Session info: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Security Result: 0
IKVM specifics: 1129866464 1 1 1 1
IKVM Session Message: 1 1 1129866464 ADMIN
09:30:28.6266 capture loop failed Can't close(GLOB(0x625f7e8)) filehandle: 'No child processes' at /usr/lib/os-autoinst/backend/baseclass.pm line 267

received magic close
discarding 20 bytes for message 4
DIE 128 unsupported message type received at /usr/lib/os-autoinst/consoles/VNC.pm line 786.

at /usr/lib/os-autoinst/backend/baseclass.pm line 73 thread 1.
backend::baseclass::die_handler('128 unsupported message type received at /usr/lib/os-autoinst...') called at /usr/lib/os-autoinst/consoles/VNC.pm line 786 thread 1
consoles::VNC::_receive_message('consoles::VNC=HASH(0x7efde0112220)') called at /usr/lib/os-autoinst/consoles/VNC.pm line 740 thread 1
consoles::VNC::update_framebuffer('consoles::VNC=HASH(0x7efde0112220)') called at /usr/lib/os-autoinst/consoles/vnc_base.pm line 80 thread 1
consoles::vnc_base::request_screen_update('consoles::vnc_base=HASH(0x625f8c0)', undef) called at /usr/lib/os-autoinst/backend/baseclass.pm line 522 thread 1
backend::baseclass::bouncer('backend::ipmi=HASH(0x5e91f88)', 'request_screen_update', undef) called at /usr/lib/os-autoinst/backend/baseclass.pm line 505 thread 1
backend::baseclass::request_screen_update('backend::ipmi=HASH(0x5e91f88)') called at /usr/lib/os-autoinst/backend/baseclass.pm line 180 thread 1
eval {...} called at /usr/lib/os-autoinst/backend/baseclass.pm line 164 thread 1
backend::baseclass::run_capture_loop('backend::ipmi=HASH(0x5e91f88)', 'IO::Select=ARRAY(0x55582c8)') called at /usr/lib/os-autoinst/backend/baseclass.pm line 113 thread 1
backend::baseclass::run('backend::ipmi=HASH(0x5e91f88)', 13, 16) called at /usr/lib/os-autoinst/backend/driver.pm line 82 thread 1
backend::driver::_run('backend::ipmi=HASH(0x5e91f88)', 13, 16) called at /usr/lib/os-autoinst/backend/driver.pm line 69 thread 1
eval {...} called at /usr/lib/os-autoinst/backend/driver.pm line 69 thread 1
09:30:29.1834 IPMI cmd : ipmitool -H 147.2.208.125 -U ADMIN -P ADMIN chassis power off
IPMI stdout: Chassis Power Control: Down/Off

is it possible to have iso file name in "Download iso" url ?

Hello there,
I have a question regarding the "Download iso" URL (1) that is presented on a test result page (2).
Would it be possible for the URL to contain the name of the ISO file (Tumbleweed-BE-DVD-ppc64-Snapshot20150516-Media.iso) rather than a number (3498)?
This would make it easier to copy/paste the URL and then wget it for testing on another machine. Today I have to manually find the ISO file name and then rename the file after the wget operation.

(1) https://openqa.opensuse.org/tests/63224/asset/3498
(2) https://openqa.opensuse.org/tests/63224

[Blocking issue] Physical machine console related tests fail because the ipmiconsole process becomes a zombie.

Description:
The Beijing side is now doing physical machine (IPMI) tests via openQA. I know that coolo helped jerry with the host installation and made it succeed, but I am afraid that the code was not completely put into the official git repo, because when I check out the official openQA git code and make the same attempt, the installation fails.
The root cause is that the [ipmiconsole] process, which is started when do_start_vm calls start_serial_grab, becomes a zombie process soon after it is created. Also, the serial0 file of the IPMI worker pool contains errors.

Severity:
Serious and blocking all physical machine tests in openqa

Logs:
http://147.2.212.158/tests/17

alice-openqa:/var/lib/openqa/pool/2 # cat vars.json
{
"ARCH" : "x86_64",
"ASSETDIR" : "/var/lib/openqa/share/factory",
"BACKEND" : "ipmi",
"BETA" : "1",
"CASEDIR" : "/var/lib/openqa/share/tests/sle-12-SP2",
"DESKTOP" : "gnome",
"DISTRI" : "sle",
"DVD" : 1,
"FLAVOR" : "Server-DVD",
"GNOME" : 1,
"HASLICENSE" : 1,
"HOST" : "localhost",
"HOST_IMG_URL" : "loader/sles-12-sp2-alpha2-x86_64-linux console=ttyS1,115200 console=tty initrd=loader/sles-12-sp2-alpha2-x86_64-initrd install=http://147.2.207.1/dist/install/SLP/SLE-12-SP2-Server-LATEST/x86_64/DVD1/",
"INSTLANG" : "en_US",
"IPMI_HOSTNAME" : "147.2.208.124",
"IPMI_PASSWORD" : "ADMIN",
"IPMI_USER" : "ADMIN",
"ISO_MAXSIZE" : "4700372992",
"JOBTOKEN" : "5PXmiZoxigNiCswP",
"MACHINE" : "64bit-ipmi",
"MAX_JOB_TIME" : "32000",
"NAME" : "00000015-sle-12-SP2-Server-DVD-x86_64-prj1_guest_installation_on_sles_12_sp2_kvm",
"NOAUTOLOGIN" : 1,
"NOIMAGES" : 1,
"OPENQA_HOSTNAME" : "localhost",
"OPENQA_URL" : "localhost",
"PACKAGETOINSTALL" : "x3270",
"PATTERNS" : "base,minimal,apparmor,32bit,help,gnome,x,print,wbem,kvm,file,kvmserve",
"PRODUCTDIR" : "/var/lib/openqa/share/tests/sle-12-SP2/products/sle",
"QA_SERVER_REPO" : "http://dist.nue.suse.com/ibs/QA:/Head/SLE-12-SP2/",
"QA_VIRTTEST_GI" : "1",
"QEMUPORT" : "20022",
"SHUTDOWN_NEEDS_AUTH" : 1,
"TEST" : "prj1_guest_installation_on_sles_12_sp2_kvm",
"VERSION" : "12-SP2",
"VNC" : "92",
"WALLPAPER" : "/usr/share/wallpapers/SLEdefault/contents/images/1280x1024.jpg",
"WORKER_CLASS" : "64bit-ipmi",
"WORKER_HOSTNAME" : "147.2.212.158",
"WORKER_ID" : "4",
"WORKER_INSTANCE" : "2"
}
alice-openqa:/var/lib/openqa/pool/2 # lsls
If 'lsls' is not a typo you can use command-not-found to lookup the package that contains it, like this:
cnf lsls
alice-openqa:/var/lib/openqa/pool/2 # ls
autoinst-log.txt backend.run job.json .locked os-autoinst.pid qemuscreenshot serial0 testresults tmp vars.json video.ogv
alice-openqa:/var/lib/openqa/pool/2 # cat serial0
Config File Error: configuration file cannot be opened
alice-openqa:/var/lib/openqa/pool/2 # cat /etc/openqa/workers.ini
[1]
BACKEND = qemu

[2]
WORKER_CLASS=64bit-ipmi
IPMI_HOSTNAME=147.2.208.124
IPMI_PASSWORD=ADMIN
IPMI_USER=ADMIN
MAX_JOB_TIME=32000
WORKER_HOSTNAME=147.2.212.158

[3]
BACKEND = qemu
alice-openqa:/usr/lib/os-autoinst/backend # pstree -pal 29342
worker,29342 /usr/share/openqa/script/worker --instance 2
└─isotovideo,29563 -w /usr/bin/isotovideo -d
├─(ipmiconsole,29577)
├─videoencoder,29567 /var/lib/openqa/pool/2/video.ogv
│ └─{videoencoder},29569
├─{isotovideo},29565
├─{isotovideo},29566
└─{isotovideo},29578
alice-openqa:/usr/lib/os-autoinst/backend # ps aux | grep 29577
_openqa+ 29577 0.0 0.0 0 0 ? Z 17:14 0:00 [ipmiconsole]
root 29911 0.0 0.0 10492 932 pts/4 S+ 17:34 0:00 grep --color=auto 29577
alice-openqa:/usr/lib/os-autoinst/backend #

Jobs do not run if perl::EV is installed

While getting openQA running on Fedora, I ran into a problem. Jobs will not run if perl::EV is installed. They'll be scheduled, but while the worker is starting up you get this message: EV does not work with ithreads. Per Mojo FAQ, it seems that the EV reactor and interpreter threads are basically incompatible. openQA doesn't use threads, but os-autoinst does.

I tried sticking $ENV{"MOJO_REACTOR"} = "Mojo::Reactor::Poll"; at the top of isotovideo, but that didn't seem to do the trick. Perhaps it needs to go in openQA somewhere, maybe Worker.pm?
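For what it's worth, Mojolicious also honors MOJO_REACTOR when it is set in the environment before the process starts, so one hedged thing to try is exporting it for the worker invocation; whether it takes effect early enough for os-autoinst is exactly the open question here:

MOJO_REACTOR=Mojo::Reactor::Poll /usr/share/openqa/script/worker --instance 1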

I can deal with this somewhat downstream by dropping the Fedora Mojolicious package's (bogus) hard requirement on EV and making the openQA package conflict with it, but it seems like something that might also (or better) be addressed upstream. Mojo in SUSE doesn't depend on EV, but it doesn't conflict with it either, and any time you happen to install it, your openQA is going to stop working.

gru's "infinite repeat on failure" design is awful

So, yeah, I really hate how Gru is set up to work when a task fails.

It just leaves it in the queue and loops back around. So until a higher priority task appears, it just tries the failed task over and over again. If the failure isn't transient, it'll just keep failing over and over and over and over. It never goes to sleep. It never decides "this just isn't working out" and puts the task off to the side and warns the admin or anything. Nope. It just loops around eternally, failing again and again and again. When a higher priority task appears it'll do that, but then go right back to looping on the broken task. Lower priority tasks will never get run until the failing task is cleared out somehow.

I would like to fix this; I hope I'll get some spare time to work on it. Here is my initial idea: the Gru task schema should get a new column, 'failure_count' or somesuch. It's an integer. Every time Gru ran a task and it failed, it would increment the integer. Gru's search for 'what task should I do next' should exclude tasks whose failure_count is higher than, say, 5. There would be a page or something in the admin interface which listed tasks in this state and let you manually reset their failure count to get them run again (so you could figure out what was wrong with them). Maybe Gru would have a one-time code block which searched for all tasks with failure_count > 5 and logged their IDs on startup (as just another place where the admin could notice broken tasks).
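As a rough illustration of that idea (the resultset and column names are hypothetical, not existing schema), the task picker could then look something like:

# pick the next task, skipping anything that has already failed too often
my $next_task = $schema->resultset('GruTasks')->search(
    {failure_count => {'<' => 5}},
    {order_by => {-asc => 'priority'}, rows => 1})->single;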

Thoughts?

all thumbnails / thumbnail folders are generated with wrong permissions

tail /var/log/apache2/error_log:

[Tue Mar 08 00:08:45.653634 2016] [core:error] [pid 792] (13)Permission denied: [client 10.163.1.6:44696] AH00035: access to /image/02/.thumbs/19cdfe46a17fa96d1f1cd446bb0e60.png denied (filesystem path '/var/lib/openqa/images/02/.thumbs') because search permissions are missing on a component of the path, referer: http://argus.suse.cz/tests/3/modules/start_install/steps/1
argus:/var/lib/openqa/images/02 # ls -al 
celkem 136
drwxr-x---   3 geekotest nogroup   105  7. bře 23.47 .
drwxr-xr-x 131 geekotest www      4096  8. bře 00.11 ..
drwxr-x---   2 geekotest nogroup    90  7. bře 23.47 .thumbs
-rw-r-----   1 geekotest nogroup 70122  7. bře 23.47 19cdfe46a17fa96d1f1cd446bb0e60.png
-rw-r-----   1 geekotest nogroup 59423  7. bře 23.46 9675c7d94c791f252d9d4dd3f1b0e4.png
argus:/var/lib/openqa/images/02/.thumbs # ls -al 
celkem 8
drwxr-x--- 2 geekotest nogroup   90  7. bře 23.47 .
drwxr-x--- 3 geekotest nogroup  105  7. bře 23.47 ..
-rw-r----- 1 geekotest nogroup 3551  7. bře 23.47 19cdfe46a17fa96d1f1cd446bb0e60.png
-rw-r----- 1 geekotest nogroup 3400  7. bře 23.46 9675c7d94c791f252d9d4dd3f1b0e4.png

favicon is SUSE-specific

A Fedora person pointed out to me that the openQA favicon is the openSUSE geeko. Perhaps we could make the favicon part of the 'branded' bits, and/or get a distro-neutral logo and favicon made by a friendly design team?

Perl crash when starting openqa-webui, sqlite missing ?

Following the installation guide, systemctl start openqa-webui (as root) fails with the following error

2014-11-06T14:50:01.445917+01:00 x220 openqa[20035]: Uncaught exception from user code:
2014-11-06T14:50:01.446211+01:00 x220 openqa[20035]:
DBIx::Class::Storage::DBI::catch {...} (): DBI Connection failed: DBI
connect('dbname=/var/lib/openqa/db/db.sqlite','',...) failed: unable
to open database file at
/usr/lib/perl5/vendor_perl/5.20.1/DBIx/Class/Storage/DBI.pm line 1483.
at /usr/share/openqa/script/../lib/OpenQA.pm line 123

File /var/lib/openqa/db/db.sqlite exists as -rw-r----- 1 root root 0 Nov 5 14:24 db.sqlite

Move 'find latest jobs for build' logic out of `overview()`, make it available for API query

There's this thing the overview() function in lib/OpenQA/WebAPI/Controller/Test.pm does:

my %seen;
while (my $job = $jobs->next) {
    my $settings = $job->settings_hash;
    my $testname = $settings->{NAME};
    my $test     = $job->test;
    my $flavor   = $settings->{FLAVOR} || 'sweet';
    my $arch     = $settings->{ARCH} || 'noarch';
    my $key      = "$test-$flavor-$arch-" . $settings->{MACHINE};
    next if $seen{$key}++;

That logic is not useful only right there; in fact we need to apply the exact same filter when generating a report for the tests run on a given compose in Fedora. So I had to effectively duplicate it in openQA-python-client: os-autoinst/openQA-python-client@0bffa10

Obviously it's bad to have two bits of code for the same thing. What would be good is if this logic could be moved somewhere else and made available via the query API, as well as being used for this one specific web UI view.

I tried to do this by moving the filtering into query_jobs(), but then realized that doesn't really work at all because query_jobs() returns a DBIC ResultSet and you can't really filter those like this. It doesn't seem like a good idea to suddenly switch query_jobs() to return an array of Results, or something. So, I figure the way to go would be to add a little function that you can feed the query_jobs() ResultSet to and it will give you the filtered array of Results, and then the overview() function could use it and we could also wire it into the query API somehow.

I'm willing to work on this, but I'm just not entirely sure where the helper function should go, or whether there's a better design for this. Any thoughts?
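One possible shape for such a helper, reusing the filter from the quoted snippet above (the function name and placement are just a suggestion, not an existing API):

# takes the ResultSet returned by query_jobs() and returns only the first
# job seen per test/flavor/arch/machine combination
sub filter_latest_jobs {
    my ($jobs) = @_;
    my (%seen, @latest);
    while (my $job = $jobs->next) {
        my $settings = $job->settings_hash;
        my $key = join('-',
            $job->test,
            $settings->{FLAVOR} || 'sweet',
            $settings->{ARCH}   || 'noarch',
            $settings->{MACHINE} // '');
        push @latest, $job unless $seen{$key}++;
    }
    return \@latest;
}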

os-autoinst/consoles/VNC.pm

How to reproduce:

QEMU-Version:

Repository: openSUSE-Leap-42.1-Update
Name: qemu
Version: 2.3.1-9.1

Arch: x86_64

The job ran locally. System: openSUSE Leap.
Repos are the standard ones from the openQA Leap repository.
URI: http://download.opensuse.org/repositories/devel:/openQA/openSUSE_Leap_42.1/

openQA: up to date
Job posted on the web interface: OK

When running it from the web interface (or with isotovideo), I got the message:

DIE Undefined subroutine &tinycv::new_vncinfo called at /usr/lib/os-autoinst/consoles/VNC.pm line 919.

I could not find the definition of this function anywhere. I looked in past commits and this function wasn't there; the last valid commit is 7d7ea0b, after which it was introduced.

No test runs; os-autoinst stops with a SIGTERM to QEMU. If I don't reset back to the valid commit, I can't run tests.

For further details I can paste more logs.

Interactive mode and "Stop waitforneedle" doesn't work

I have updated to openQA-4.1423751463.9d5dfbc-50.1.noarch and the "Stop waitforneedle" button stopped working (openQA doesn't stop when I click it). Also, although I am in interactive mode, it runs post_fail_hook immediately after assert_screen fails.

When I searched through logs, I have noticed that clicking this button sends POST with wrong workerid:

"POST /api/v1/workers/TODO/commands HTTP/1.1" 404 1032 "https://localhost/tests/10"

Searching through commits, I have found this: f0226ce#diff-89341987f897939f67f1f3120f797566L74, so this seems like a regression.

Can't clone a job

$ docker exec openqaplayground_webui_1 /usr/share/openqa/script/clone_job.pl --from localhost --host localhost 142 MAKETESTSNAPSHOTS=1 KEEPHDDS=1
142 failed: 403 Forbidden
downloading
http://localhost/tests/142/asset/iso/openSUSE-13.2-DVD-x86_64.iso
to
/var/lib/openqa/factory/iso/openSUSE-13.2-DVD-x86_64.iso

you can use those images to reproduce it:
https://github.com/actionless/openqa-docker

Non-ASCII characters displayed incorrectly in comments

As the title already says, non-ASCII characters are displayed incorrectly in comments. Both the group overview and the comments for particular tests are affected. I noticed the bug when working on editable comments. Maybe I'll fix it along the way.

VDE vlan assignment doesn't work sometimes

Sometimes (roughly 2/10 times) VLANs aren't allocated and assigned properly.
Although the worker executed

15:59:27.1744 running vdecmd -s /run/openqa/vde.mgmt port/remove 16
15:59:27.1805 running vdecmd -s /run/openqa/vde.mgmt port/create 16
15:59:27.1856 running vdecmd -s /run/openqa/vde.mgmt vlan/create 2
15:59:27.1904 running vdecmd -s /run/openqa/vde.mgmt port/setvlan 16 2
15:59:27.1973 slirpvde --dhcp -s /run/openqa/vde.ctl --port 17 started with pid 4911
15:59:27.1973 running vdecmd -s /run/openqa/vde.mgmt port/setvlan 17 2
Starting slirpvde: virtual_host=10.0.2.2/24
                   DNS         =10.0.2.3
                   dhcp_start  =10.0.2.15
                   vde switch  =/run/openqa/vde.ctl

It shows

# vdecmd -s /run/openqa/vde.mgmt vlan/allprint
VLAN 0000
 -- Port 0017 tagged=0 active=1 status=Forwarding
VLAN 0001
 -- Port 0004 tagged=0 active=0 status=Learning
 -- Port 0005 tagged=0 active=0 status=Learning
 -- Port 0006 tagged=0 active=0 status=Learning
 -- Port 0010 tagged=0 active=0 status=Learning
 -- Port 0012 tagged=0 active=0 status=Learning
 -- Port 0013 tagged=0 active=0 status=Learning
 -- Port 0018 tagged=0 active=0 status=Learning
VLAN 0002
 -- Port 0014 tagged=0 active=1 status=Forwarding
 -- Port 0016 tagged=0 active=1 status=Forwarding

emit count or list of remaining jobs with `job_done` signal?

so I'm not sure if you want to do this or not, thus filing as an issue not a PR.

For the fedmsg plugin I wrote, I did this:

# find count of pending jobs for the same build
# this is so we can tell when all tests for a build are done
my $build = $app->db->resultset('Jobs')->find({'id' => $event_data->{id}})->settings_hash->{BUILD};
$event_data->{remaining} = $app->db->resultset('Jobs')->search(
    {
        'settings.key'   => 'BUILD',
        'settings.value' => $build,
        'state'          => [OpenQA::Schema::Result::Jobs::PENDING_STATES],
    },
    {join => qw/settings/})->count;

i.e. I took the raw event and added another property to the data, which is a count of remaining jobs for the same build that are in one of the PENDING_STATES.

The reason for this is so you can tell when all tests for a build are done: when you get a message with the remaining value 0. (For my purposes it could just as well be a boolean, but meh, emitting the count was easy).

I wondered if this might be useful to others, in which case we could do it at the point of sending the original event - at least for job_done and any other event for which it might make sense?

(Now I think about it I should probably also add a build property to the message, for my needs...)

Extended tests fail with PhantomJS 2.1.1 (work fine with 1.9.2)

See the subject. I do not have anywhere near sufficient expertise to fix this; I am just reporting the problem.

With 2.1.1 a test run hangs indefinitely like so:

~/openQA$ MOJO_TMPDIR=/dev/shm/oqa/ make test
... many "sucess" lines...
./t/rand.t ................................ ok    
./t/ui/01-list.t .......................... 29/? 
#   Failed test 'only two links (icon, name, no restart)'
#   at ./t/ui/01-list.t line 118.
#          got: '0'
#     expected: '2'
Uncaught exception from user code:
    An element could not be located on the page using the given search parameters: #results #job_99926 .test .status.result_incomplete,css selector at ./t/ui/01-list.t line 121.
    Selenium::Remote::Driver::find_element(Selenium::Remote::Driver=HASH(0x6617bc8), "#results #job_99926 .test .status.result_incomplete", "css") called at ./t/ui/01-list.t line 121
# Tests were run but no plan was declared and done_testing() was not seen.
# Looks like your test exited with 255 just after 31.

^^ everything stops there until a CTRL+C

`jobs/[job_id]` endpoint gives less information than `jobs?ids=[job_id]` for no obvious reason

The 'softfail' state seems to be really only visible via the web UI. Compare: web UI 'softfailed' test, API 'softfailed' test - note our staging deployment is running close to git master. In the UI the overall state shows as 'passed' but you can clearly see the soft failed test. From the API result there is absolutely no indication that the test soft failed, AFAICS the API data is entirely indistinguishable from a regular 'passed' job.

In general, I guess, the API could stand to expose more information on the test steps in the job.

Cannot search for jobs by HDD (API / query_jobs)

We're looking to start running some ARM disk image tests for Fedora. These tests would have no ISO: they'd only have an HDD_1.

We have a check in our scheduler which looks for jobs for the same ISO and FLAVOR and refuses to create new jobs if there are any, unless a force option is set (this is a safety to prevent us unintentionally automatically triggering multiple runs for the same image). Obviously for these tests we'd want to do the same, only look for tests for the same HDD_1 and FLAVOR - only you can't actually query for jobs by HDD_1. The API doesn't allow for it, and even behind the scenes, query_jobs does not, I don't think. We think it should.

We may send a patch for this, I'm just filing an issue to keep track.

/usr/share/openqa/script/client does not accept a full path in a variable

How to reproduce

/usr/share/openqa/script/client --params example.json jobs post

with

 "HDD_1" : "/var/lib/openqa/share/factory/hdd/MY-IMAGE.mg",
 "ISO" : "/var/lib/openqa/share/factory/iso/MY_ISO_EXAMPLE.iso",

the workaround is to set only

HDD_1 :"MY-IMAGE.img"  ...

Why this is needed:

When adding new test cases to an existing one, as a test developer I do a clone_job from the existing one.

Since sometimes a new variable to control the tests has to be added, I have to modify the vars.json file that clone_job has created for me via the web API.

This vars.json contains mixed values, from the engine and from the testing workflow itself.

So this issue is a sub-issue of the vars.json problem, but it could be fixed first.
