
Linaro's Automated Validation Architecture (LAVA) Docker Container

Introduction

The goal of lava-docker is to simplify the install and maintenance of a LAVA lab in order to participate in distributed test efforts such as kernelCI.org.

With lava-docker, you describe the devices under test (DUT) in a simple YAML file, and then a custom script will generate the necessary LAVA configuration files automatically.

Similarly, LAVA users and authentication tokens are described in (another) YAML file, and the LAVA configurations are automatically generated.

This enables the setup of a LAVA lab with minimal knowledge of the underlying LAVA configuration steps necessary.

Prerequisites

lava-docker has currently been tested primarily on Debian stable (buster). The following packages are necessary on the host machine:

  • docker
  • docker-compose
  • pyyaml

If you plan to run docker/fastboot tests, you will probably also need to install lava-dispatcher-host on the host.

Quickstart

Example of using lava-docker with a single QEMU device:

  • Checkout the lava-docker repository
  • Generate configuration files for LAVA, udev, serial ports, etc. from boards.yaml via
./lavalab-gen.py
  • Go to output/local directory
  • Build docker images via
docker-compose build
  • Start all images via
docker-compose up -d
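Before running lavalab-gen.py, a boards.yaml must exist. A minimal sketch for the single-QEMU setup above might look like this (names and password are illustrative; see boards.yaml.example and boards.yaml.minimal in the repository for authoritative examples):

```yaml
masters:
  - name: lava-master
    host: local
    users:
    - name: admin
      password: mysecret
      superuser: yes
slaves:
  - name: lab-slave-0
    host: local
boards:
  - name: qemu-01
    type: qemu
```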

Adding your first board:

device-type

To add a board, you first need to find its device-type. The standard naming follows the official kernel device-tree (DT) name, although a few DUTs differ from that convention.

You can check https://github.com/Linaro/lava/tree/master/etc/dispatcher-config/device-types to see whether yours is already listed.

Example: for a beagleboneblack, the device-type is beaglebone-black (even though the official DT name is am335x-boneblack). So you need to add the following in the boards section:

    - name: beagleboneblack-01
      type: beaglebone-black

UART

The next step is to gather information about the UART wired to the DUT.
If you have an FTDI adapter, simply get its serial number (visible in lsusb -v or, on major distributions, in dmesg).

For other UART types (or old FTDI adapters without a serial number), you need to get the devpath attribute via:

udevadm info -a -n /dev/ttyUSBx |grep ATTRS|grep devpath | head -n1

Example with a FTDI UART:

[    6.616707] usb 4-1.4.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[    6.704305] usb 4-1.4.2: SerialNumber: AK04TU1X
The serial is AK04TU1X

So you have now:

    - name: beagleboneblack-01
      type: beaglebone-black
      uart:
        idvendor: 0x0403
        idproduct: 0x6001
        serial: AK04TU1X

Example with a FTDI without serial:

[2428401.256860] ftdi_sio 1-1.4:1.0: FTDI USB Serial Device converter detected
[2428401.256916] usb 1-1.4: Detected FT232BM
[2428401.257752] usb 1-1.4: FTDI USB Serial Device converter now attached to ttyUSB1
udevadm info -a -n /dev/ttyUSB1 |grep devpath | head -n1
    ATTRS{devpath}=="1.5"

So you have now:

    - name: beagleboneblack-01
      type: beaglebone-black
      uart:
        idvendor: 0x0403
        idproduct: 0x6001
        devpath: "1.5"

PDU (Power Distribution Unit)

The final step is to manage powering of the board.
Many PDU switches can be handled by a command-line tool that controls the PDU.
You need to fill boards.yaml with the command lines to be run.

Example with an ACME board: if the beagleboneblack is wired to port 3 and the ACME board has IP 192.168.66.2:

      pdu_generic:
        hard_reset_command: /usr/local/bin/acme-cli -s 192.168.66.2 reset 3
        power_off_command: /usr/local/bin/acme-cli -s 192.168.66.2 power_off 3
        power_on_command: /usr/local/bin/acme-cli -s 192.168.66.2 power_on 3

Example:

A beagleboneblack, with an FTDI UART (serial 1234567), connected to port 5 of an ACME board:

    - name: beagleboneblack-01
      type: beaglebone-black
      pdu_generic:
        hard_reset_command: /usr/local/bin/acme-cli -s 192.168.66.2 reset 5
        power_off_command: /usr/local/bin/acme-cli -s 192.168.66.2 power_off 5
        power_on_command: /usr/local/bin/acme-cli -s 192.168.66.2 power_on 5
      uart:
        idvendor: 0x0403
        idproduct: 0x6001
        serial: 1234567

Architecture

The basic setup is composed of a host which runs the following docker images, plus the DUTs to be tested.

  • lava-master: runs lava-server along with the web interface
  • lava-slave: runs lava-dispatcher, the component which sends jobs to the DUTs

The host and DUTs must share a common LAN.
The host IP on this LAN must be set as dispatcher_ip in boards.yaml.

Since most DUTs are booted using TFTP, they need DHCP to gain network connectivity.
So, on the LAN shared with the DUTs, a running DHCPD is necessary. (See DHCPD below)

Multi-host architectures

lava-docker supports multi-host architectures: the master and slaves can run on different hosts.

lava-docker supports multiple slaves, but with a maximum of one slave per host. This is because each slave needs its TFTP port to be accessible from outside.

Power supply

You need a PDU to power your DUTs. Managing PDUs is done via pdu_generic.

Network ports

By default, the following ports are used by lava-docker and are proxied on the host:

  • 69/UDP proxied to the slave for TFTP
  • 80 proxied to the slave for transfer overlays
  • 5500 proxied to the slave for notifications
  • 5555 proxied to the master (LAVA logger)
  • 5556 proxied to the master (LAVA master)
  • 10080 proxied to the master (web interface)
  • 55950-56000 proxied to the slave for NBD

DHCPD

A DHCPD service is necessary to give network access to the DUTs.

The DHCPD server can live anywhere, as long as it is reachable by the DUTs: on the host, in a docker container on the host, or even the ISP box on the same LAN.

Examples

Example 1: Basic LAB with home router

Router: 192.168.1.1, which handles DHCP for 192.168.1.10-192.168.1.254
Lab: 192.168.1.2

So the dispatcher_ip is set to 192.168.1.2

Example 2: Basic LAB without home router

Lab: 192.168.1.2, which handles DHCP for 192.168.1.10-192.168.1.254

So the dispatcher_ip is set to 192.168.1.2

Example 3: LAB with dedicated LAN for DUTs

A dedicated LAN (192.168.66.0/24) is used for the DUTs. The host has two NICs:

  • eth0: (192.168.1.0/24) on the home LAN. (The address can be static or assigned via DHCP)
  • eth1: (192.168.66.0/24) with the address set to 192.168.66.1

On the host, a DHCPD gives out addresses in the range 192.168.66.3-192.168.66.200.

So the dispatcher_ip is set to 192.168.66.1

DHCPD examples:

isc-dhcpd-server

A sample isc-dhcpd-server DHCPD config file is available in the dhcpd directory.

dnsmasq

Simply set interface=interfacename, where interfacename is your shared LAN interface.
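For Example 3 above, a minimal dnsmasq configuration might look like this (a sketch; the interface name and address range are assumptions matching that example):

```
# /etc/dnsmasq.conf (illustrative)
interface=eth1                               # the LAN shared with the DUTs
dhcp-range=192.168.66.3,192.168.66.200,12h   # addresses handed out to DUTs
```

Note that TFTP itself is served by the lava-docker slave (port 69 is proxied to it), so dnsmasq only needs to provide DHCP here.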

Generating files

Helper script

You can use the lavalab-gen.sh helper script which will do all the above actions for you.

boards.yaml

This file describes how the DUTs are connected and powered.

masters:
  - name: lava-master     name of the master
    host: name            name of the host running lava-master (defaults to "local")
    version: "202x.xx"    LAVA version for the master
    webadmin_https:       Whether the LAVA webadmin is accessed via https
    listen_address:       Address where webinterface_port will listen (defaults to 0.0.0.0)
    webinterface_port:    Port number to use for the LAVA web interface (defaults to "10080")
    lava-coordinator:     Whether the master should run a lava-coordinator and export its port
    persistent_db: true/false   (default false) Whether the postgresql DB is persistent across reboots.
                                WARNING: this works across reboots with the same LAVA version, but not across a postgresql major update.
                                This is not recommended.
    pg_lava_password:     The postgresql LAVA server password to set
    http_fqdn:            The FQDN used to access the LAVA web interface. This is necessary if you use https, otherwise you will get CSRF errors.
    healthcheck_url:      Override the healthcheck hosting URL. See hosting healthchecks below
    build_args:
      - line1             A list of lines setting docker build-time variables
      - line2
    allowed_hosts:        A list of FQDNs used to access the LAVA master
    - "fqdn1"
    - "fqdn2"
    loglevel:
      lava-logs: DEBUG/INFO/WARN/ERROR              (optional) loglevel of lava-logs (defaults to DEBUG)
      lava-slave: DEBUG/INFO/WARN/ERROR             (optional) loglevel of lava-slave (defaults to DEBUG)
      lava-master: DEBUG/INFO/WARN/ERROR            (optional) loglevel of lava-master (defaults to DEBUG)
      lava-server-gunicorn: DEBUG/INFO/WARN/ERROR   (optional) loglevel of lava-server-gunicorn (defaults to DEBUG)
    users:
    - name: LAVA username
      token: The token of this user (optional)
      password: Password for this user (generated if not provided)
      email: email of the user (optional)
      superuser: yes/no (default no)
      staff: yes/no (default no)
      groups:
      - name:             Name of the group this user should join
    groups:
    - name:               LAVA group name
      submitter: True/False   Whether this group can submit jobs
    tokens:
    - username: The LAVA user owning the token below. (This user should be created via users:)
      token: The token for this callback
      description: The description of this token. This string can be used with LAVA-CI.
    smtp:                 WARNING: using an SMTP server makes it mandatory for each user to have an email address
      email_host:         The host to use for sending email
      email_host_user:    Username to use for the SMTP server
      email_host_password:  Password to use for the SMTP server
      email_port:         Port to use for the SMTP server (default: 25)
      email_use_tls:      Whether to use a TLS (secure) connection when talking to the SMTP server
      email_use_ssl:      Whether to use an implicit TLS (secure) connection when talking to the SMTP server
      email_backend:      The backend to use for sending emails (default: 'django.core.mail.backends.smtp.EmailBackend')
    slaveenv:             A list of environments to pass to slaves
      - name: slavename   The name of the slave (mandatory)
        env:
        - line1           A list of lines to set as environment
        - line2
    event_notifications:
      event_notification_topic:   A string which event receivers can use for filtering (defaults to the name of the master)
      event_notification_port:    Port to use for event notifications (defaults to "5500")
      event_notification_enabled: Set to true to enable event notifications (defaults to "false")
slaves:
  - name: lab-slave-XX    The name of the slave (where XX is a number)
    host: name            name of the host running lava-slave-XX (defaults to "local")
    version: "202x.xx"    LAVA version for the worker
    dispatcher_ip:        the IP where the slave can be contacted. In lava-docker it is the host IP, since docker proxies TFTP from the host to the slave.
    remote_master:        the name of the master to connect to
    remote_address:       the FQDN or IP address of the master (if different from remote_master)
    remote_rpc_port:      the port used by the LAVA RPC2 (default 80)
    remote_user:          the user used for connecting to the master
    remote_user_token:    The remote_user's token. This option is necessary only if no master node exists in boards.yaml. Otherwise lavalab-gen.py will get it from there.
    remote_proto:         http (default) or https
    lava_worker_token:    token to authenticate the worker to the master/scheduler (LAVA 2020.09+)
    default_slave:        Whether this slave is the default slave to which boards are added (default: lab-slave-0)
    bind_dev:             Bind /dev from the host to the slave. This is needed when using some HID PDUs
    use_tftp:             Whether LAVA needs a TFTP server (default True)
    use_nbd:              Whether LAVA needs an NBD server (default True)
    use_overlay_server:   Whether LAVA needs an overlay server (default True)
    use_nfs:              Whether the LAVA dispatcher will run NFS jobs
    use_tap:              Whether TAP netdevices can be used
    use_docker:           Permit the use of docker commands in the slave
    arch:                 The arch of the worker (if not x86_64); only arm64 is accepted
    host_healthcheck:     If true, enable the optional healthcheck container. See hosting healthchecks below
    lava-coordinator:     Whether the slave should run a lava-coordinator
    expose_ser2net:       Whether ser2net ports need to be available on the host
    custom_volumes:
      - "name:path"       Add a custom volume
    expose_ports:         Expose port p1 on the host to p2 on the worker slave.
      - p1:p2
    extra_actions:        An optional list of actions to run at the end of the docker build
    - "apt-get install package"
    build_args:
      - line1             A list of lines setting docker build-time variables
      - line2
    env:
      - line1             A list of lines to set as environment (See /etc/lava-server/env.yaml for examples)
      - line2
    tags:                 (optional) List of tags to set on all devices attached to this slave
    - tag1
    - tag2
    devices:              A list of devices which need UDEV rules
      - name:             The name of the device
        vendorid:         The VID of the UART (formatted as 0xXXXX)
        productid:        the PID of the UART (formatted as 0xXXXX)
        serial:           The serial number of the device, if the device has one
        devpath:          The UDEV devpath to this device, if more than one is present

boards:
  - name: devicename  Each board must be named by its device-type as "device-type-XX" (where XX is a number)
    type: the LAVA device-type of this device
    slave:            (optional) Name of the slave managing this device. Defaults to the first slave found, or default_slave if set.
    kvm:              (For qemu only) Whether qemu can use KVM (default: no)
    uboot_ipaddr:     (optional) a static IP to set in U-Boot
    uboot_macaddr:    (optional) the MAC address to set in U-Boot
    custom_option:    (optional) All following strings will be directly appended to the device file, inside {% %}
    - "set x=1"
    raw_custom_option:  (optional) All following strings will be directly appended to the device file
    - "{% set x=1 %}"
    tags:             (optional) List of tags to set on this device
    - tag1
    - tag2
    aliases:          (optional) List of aliases to set on the DEVICE TYPE.
    - alias1
    - alias2
    user:             (optional) Name of the user owning the board (LAVA default is admin); user is exclusive with group
    group:            (optional) Name of the group owning the board (no LAVA default); group is exclusive with user
# One of uart or connection_command must be chosen
    uart:
      idvendor: The VID of the UART (formatted as 0xXXXX)
      idproduct: the PID of the UART (formatted as 0xXXXX)
      serial: The serial number, in the case of an FTDI uart
      baud:             (optional) Change the baud rate of this uart (default is 115200)
      devpath: the UDEV devpath to this uart, for UARTs without a serial number
      interfacenum:     (optional) The interface number of the serial port. (Used with two serial ports in one device)
      use_ser2net:      True/False (Deprecated, ser2net is the default uart handler)
      worker:           (optional) a host/IP where ser2net is running
      ser2net_keepopen: True/False (optional) Use the recent ser2net keepopen
      ser2net_options:  (optional) A list of ser2net options to add
        - option1
        - option2
    connection_command: A command to be run to get a serial console
    pdu_generic:
      hard_reset_command: command line to reset the board
      power_off_command: command line to power off the board
      power_on_command: command line to power on the board

Notes on UART:

  • Only one of devpath/serial is necessary.
  • To find the right devpath, you can use
udevadm info -a -n /dev/ttyUSBx |grep devpath | head -n1
  • The VID and PID can be found in lsusb. If a leading zero is present, the value must be given between double quotes (and the leading zero must be kept). Example:
Bus 001 Device 054: ID 0403:6001 Future Technology Devices International, Ltd FT232 Serial (UART) IC

This device must use "0403" for idvendor and 6001 for idproduct.

  • Some boards reset the serial port on power-on. This can cause ser2net/telnet to disconnect, leaving the LAVA worker unable to program the board. This may be mitigated by passing LOCAL as an option to ser2net in boards.yaml. Example:
      ser2net_options:
        - LOCAL

Note on connection_command: connection_command is for people who want to use a custom way other than ser2net to handle the console.

Examples: see boards.yaml.example or boards.yaml.minimal

Generate

lavalab-gen.py

This script will generate all necessary files in the following locations:

output/host/lava-master/tokens/			This is where the callback tokens will be generated
output/host/lava-master/users/			This is where the users will be generated
output/host/lab-slave-XX/devices/		All LAVA devices files
output/host/udev/99-lavaworker-udev.rules 	udev rules for host
output/host/docker-compose.yml			Generated from docker-compose.template

All those files (except the udev rules) will be handled by docker.

You can still hand-edit the generated files afterwards.

udev rules

Note that the udev rules are generated for the host and must be placed in /etc/udev/rules.d/. They are used to give a proper /dev/xxx name to tty devices (where xxx is the board name). (lavalab-gen.sh will do this for you.)
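As an illustration of what the generated rules do (the exact generated contents may differ), a rule for the beagleboneblack-01 example above would look roughly like:

```
# 99-lavaworker-udev.rules (illustrative)
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", \
    ATTRS{serial}=="AK04TU1X", SYMLINK+="beagleboneblack-01"
```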

Building

To build all docker images, execute the following from the directory you cloned the repo:

docker-compose build

Running

To run all images, simply run:

docker-compose up -d

Proxy cache (Work in progress)

A squid docker is provided for caching all LAVA downloads (images, dtbs, rootfs, etc.).
For the moment it is unsupported and not built. To use an external squid server, see "How to make a LAVA slave use a proxy" below.

Backporting LAVA patches

All upstream LAVA patches can be backported by placing them in lava-master/lava-patch/

Backups / restore

To back up a running docker, the "backup.sh" script can be used. It will store boards.yaml, a postgresql database backup, and the job outputs.

To restore a backup, the postgresql database backup and job outputs must be copied into the master backup directory before building.

Example: ./backup.sh produces a backup-20180704_1206 directory. To restore this backup, simply cp backup-20180704_1206/* output/local/master/backup/

Upgrading from a previous lava-docker

To upgrade between two LAVA versions, the only method is:

  • backup data by running ./backup.sh on the host running the master (See Backups / restore)
  • checkout the new lava-docker and update your boards.yaml
  • Move the old output directory away
  • run lavalab-gen.sh
  • copy your backup data in output/yourhost/master/backup directory
  • build via docker-compose build
  • Stop the old docker via docker-compose down
  • Run the new version via docker-compose up -d
  • Check everything is ok via docker-compose logs -f

Security

Note that this container provides defaults which are insecure. If you plan on deploying this in a production environment, please consider the following items:

  • Changing the default admin password (in tokens.yaml)
  • Using HTTPS
  • Re-enable CSRF cookie (disabled in lava-master/Dockerfile)

Non amd64 build

Since LAVA upstream provides only amd64 and arm64 debian packages, lava-docker supports only those architectures. To build an arm64 lava-docker, a few small tricks are necessary:

  • replace "baylibre/lava-xxxx-base" with "baylibre/lava-xxxx-base-arm64" in the lava-master and lava-slave dockerfiles

To build the lava-xxx-base images:

  • replace "bitnami/minideb" with "arm64v8/debian" in the lava-master-base/lava-slave-base dockerfiles.

How to run NFS jobs

You need to set use_nfs: True on the slave that will run NFS jobs. An NFS server must be running on the host. Furthermore, you must create a /var/lib/lava/dispatcher/tmp directory on the host and export it like: /var/lib/lava/dispatcher/tmp 192.168.66.0/24(no_root_squash,rw,no_subtree_check)
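The export line above belongs in /etc/exports on the host; this is standard NFS administration, not something lava-docker configures for you:

```
# /etc/exports on the host (illustrative; adjust the subnet to your DUT LAN)
/var/lib/lava/dispatcher/tmp 192.168.66.0/24(no_root_squash,rw,no_subtree_check)
```

After editing /etc/exports, run exportfs -ra to re-export.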

How to add custom LAVA patches

You can add custom or backported LAVA patches in lava-master/lava-patch. Doing the same for lava-slave will be done later.

How to add/modify custom devices type

There are two ways to add custom device types.

  • Copy a device-type file directly into lava-master/device-types/. If you have a brand new device-type, this is the simplest way.
  • Copy a patch adding/modifying a device-type into lava-master/device-types-patch/. If you are modifying an already present (upstream) device-type, this is the best way.

How to make a LAVA slave use a proxy

Add env entries to a slave like:

    slave:
      env:
        - "http_proxy: http://dns:port"

Or on the master via slaveenv:

    slaveenv:
      - name: lab
        env:
        - "http_proxy: http://squid_IP_address:3128"
        - "https_proxy: http://squid_IP_address:3128"

How to use a board which uses PXE ?

Any board which uses PXE can be used with LAVA via GRUB, but you need to add a configuration in your DHCP server for that board. This configuration tells the PXE firmware to fetch GRUB from the dispatcher's TFTP server. Example for an upsquare board and a dispatcher available at 192.168.66.1:

  	host upsquare {
		hardware ethernet 00:07:32:54:41:bb;
		filename "/boot/grub/x86_64-efi/core.efi";
		next-server 192.168.66.1;
	}

How to host healthchecks

Healthcheck jobs need external resources (rootfs, images, etc.). By default, lava-docker healthchecks use the ones hosted on our github, but this implies use of external networks and some bandwidth. To host the healthcheck files locally, you can set healthcheck_host on a slave. Note that doing so brings some constraints:

  • Since healthcheck jobs are hosted by the master, the healthcheck hostname must be the same across all slaves.
  • You need to set the base URL on the master via healthcheck_url.
  • If you have qemu devices, since they run inside docker, which provides an internal DNS, you probably must use the container name ("healthcheck") as the hostname.
  • For a simple setup, you can use the slave IP as healthcheck_url.
  • For a more complex setup (slaves spread over different sites with different network subnets), you need to set up a DNS server so that the same name resolves on all sites.

To set up a DNS server, the easiest way is to use dnsmasq and add "healthcheck ipaddressoftheslave" to /etc/hosts.

Example: one master and one slave in DC A, and one slave in DC B. Both slaves need healthcheck_host set to true, and the master will have healthcheck_url set to http://healthcheck:8080. You have to add a DNS server for both slaves with a healthcheck entry.
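For instance, the /etc/hosts entry read by dnsmasq could look like this (the IP is illustrative; use the IP of the slave hosting the healthcheck files):

```
# /etc/hosts on the DNS host (dnsmasq serves entries from here by default)
192.168.66.1   healthcheck
```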

Bugs, Contact

The preferred way to submit bugs is via the github issue tracker. You can also contact us on #lava-docker on the Libera.chat IRC network.

lava-docker's People

Contributors

aliceinwire, bearrito, broonie, embeddedandroid, gylstorffq, khilman, montjoie, nuclearcat, patersonc, philm06, qinshulei, rahix, slawr


lava-docker's Issues

Healthcheck is failing

Hi guys,

I've added a bcm2837-rpi-3-b device to the lava scheduler. Whenever the health check runs, it fails. I think it has something to do with networking, so please help me resolve the issue. Below I've attached screenshots of the test log.
Also, can anyone help me with the device dictionary for my raspberrypi3 board?
Thank you.


new udev device discovery for docker test jobs fails to work

Summary

LAVA MR 1238 reworked the device handling in docker test shell jobs. The improvements include better support for udev device discovery of transient devices, e.g. fastboot device on reboot, so they are discovered and shared with the docker container specified in the job.

The support uses the utility lava-dispatcher-host which to work must be installed on the host. Currently lava-docker does not take account of this requirement and so such jobs will fail in some way.

This may also be the case for jobs using LXC but has not been investigated.

I'm happy to send a PR to fix the issue in lava-docker but I have some questions about how lava-docker is working below before I can finalise it.

Example failure

The following fastboot test case shows the issue in docker test jobs:

          - fastboot reboot-bootloader
          - fastboot devices
          - fastboot --set-active=a

The first host fastboot command will reboot the DUT. The third command passing --set-active will cause fastboot on the host to wait for the DUT to reappear. In the failing case LAVA fails to discover and share the DUT fastboot device after reboot into the test container.

Debugging by manually entering the container will show that the DUT has successfully rebooted and host and DUT can communicate via fastboot. So the issue appears to be limited to LAVA.

Investigation

With the code changes, rather than sharing a list of devices with the container when it is started, LAVA shares them dynamically via lava-dispatcher-host. lava-dispatcher-host relies on a udev rule for udev discovery. In a current lava-docker installation, lava-dispatcher-host is installed by the base Linaro dispatcher Dockerfile and therefore into the worker docker. The udev database shared into the worker docker is insufficient for this to work (at least in my testing), and of course udev events do not appear in the worker docker. This leads to the udev rule not being run.

I have discussed this with the LAVA devs on the lava-users mailing list. The developer of the changes has confirmed that lava-dispatcher-host needs to run on the host: https://lists.lavasoftware.org/pipermail/lava-users/2020-August/002593.html

The question then is how best to implement that in lava-docker?

The lava-dispatcher-host package, being new, only appears to be in debian in sid unstable:
https://packages.debian.org/search?suite=all&arch=any&mode=filename&searchon=names&keywords=lava-dispatcher-host

It of course is also in the LAVA source itself.

If lava-dispatcher-host is installed via the package, then no other configuration is required; the udev rule is installed automatically. If it is installed from source, then the rule needs to be installed manually.

Personally, I installed the debian package on the host and found this worked after restarting udev and the worker container: successful test job.

One noted downside is that as lava-dispatcher-host is running on the host the rediscovery of the device is not logged in the job log. Not so good for debugging, but an acceptable compromise I think. At least until better solutions appear.

So returning to the question of what to do in lava-docker. This could be a simple change to the readme.md to add lava-dispatcher-host to the list of required packages. However deploy.sh removes all udev rules with lava in the name from the host:

$BEROOT rm /etc/udev/rules.d/*lava*rules

I could see you removing potentially old lava-docker installed rules but I wonder why you are deleting all rules with lava in the name. Could you explain why that is so I can figure out what the best change to make is?

backup.sh script does not work if master name does not contain "master"

In the boards.yml file, a "name" needs to be assigned to each master node. This name does not need to contain the "master" string in it to work. However, the backup.sh script actually expects the docker container name to contain the "master" string: https://github.com/kernelci/lava-docker/blob/master/backup.sh#L13

One could use yq to parse the boards.yaml file and return the name used for each master node. e.g.:

podman run --rm -i -v "${PWD}":/workdir:rw,Z mikefarah/yq '.masters[].name' < boards.yaml

Would that be something acceptable to run a container inside backup.sh to parse the yaml file?

Ideally we could use docker-compose to run commands for one of the master containers running but I still think we need the name of the container anyways.

Also, this wouldn't work with multiple master nodes but that's a different topic.
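As an alternative to running a container, a small stdlib-only Python sketch (hypothetical, not part of lava-docker) could extract the master names directly, assuming the flat indentation used in lava-docker's boards.yaml examples:

```python
import re

def master_names(text: str) -> list:
    """Extract 'name' values of entries directly under the top-level
    'masters:' key. Assumes masters list items are indented by at most
    two spaces, as in boards.yaml examples; deeper '- name:' entries
    (e.g. under users:) are ignored."""
    names, in_masters = [], False
    for line in text.splitlines():
        if line.startswith("masters:"):
            in_masters = True
            continue
        if in_masters and re.match(r"^\S", line):
            break  # reached the next top-level key (e.g. slaves:)
        if in_masters:
            m = re.match(r"^ {0,2}-\s*name:\s*(\S+)", line)
            if m:
                names.append(m.group(1))
    return names

example = """masters:
  - name: lava-master
    users:
    - name: admin
slaves:
  - name: lab-slave-0
"""
print(master_names(example))  # ['lava-master']
```

This avoids pulling in an extra container just to read one key, at the cost of not handling arbitrary YAML.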

boards.yaml should handle upper-case names for masters/slaves

the 'name' field of the slaves (or masters) node gets converted to a docker service name, which is not allowed to be upper case.

e.g.

    slaves:
      - name: lab-BayLibre-Seattle

results in

$ docker-compose build
ERROR: Service 'lab-BayLibre-Seattle' contains uppercase characters which are not valid as part of an image name. Either use a lowercase service name or use the `image\
` field to set a custom name for the service image.
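A pre-flight check in the spirit of what lavalab-gen.py could do (a hypothetical sketch, not the actual implementation) is to validate names against docker's image-name rules before generating docker-compose.yml:

```python
import re

def valid_service_name(name: str) -> bool:
    """Conservative check: docker image names must be lowercase
    (letters, digits, '.', '_', '-')."""
    return re.fullmatch(r"[a-z0-9][a-z0-9._-]*", name) is not None

for name in ("lab-slave-0", "lab-BayLibre-Seattle"):
    if not valid_service_name(name):
        print(f"invalid slave name for docker: {name} (use lowercase)")
```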

Checks for x86_64 host as amd64 with uname -m

When checking to see if we are running on an x86_64 host the lava-slave Dockerfile does

if [ "$(uname -m)" != "amd64" ]; then

but uname -m returns the kernel's idea of the architecture which is x86_64 not amd64 causing the check to fail unexpectedly.

This currently fails since the LAVA version is 2021.03 which predates the last Debian release, causing apt to complain about the rename from stable to oldstable when doing dpkg --add-architecture.
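One possible fix, sketched here in Python for clarity (the actual Dockerfile check is shell), is to translate the kernel's machine name to the Debian architecture name before comparing; only the two architectures lava-docker supports are mapped:

```python
DEB_ARCH = {"x86_64": "amd64", "aarch64": "arm64"}

def debian_arch(machine: str) -> str:
    """Translate `uname -m` output (the kernel's machine name) into the
    Debian architecture name that apt/dpkg expect."""
    return DEB_ARCH.get(machine, machine)

print(debian_arch("x86_64"))  # amd64
```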

Compilations taking more time on lava-docker worker boards

I am running some debian upstream automated tests on my target machine, which pull sources via wget or apt, then unpack and compile them using make, then run tests.
I have observed source downloads taking less time but compilation taking more time in lava-docker compared to a standalone lava worker.

Here the same-architecture board is connected to a lava worker using a lava-docker instance and a standalone lava-dispatcher.
Summarizing the time taken on the same x86 arch board, details below:

| Task | kernelCI lava-docker | standalone lava-dispatcher |
|---|---|---|
| Source download and make (core-utils) | 3 hours | 43 min |
| test-execution (core-utils) | 4 min | 4 min |
| Source download and make (util-linux) | 40 min | 17 min |
| test-execution (util-linux) | 57 min | 6 min |

Lava link : https://lava.ciplatform.org/scheduler/job/103027

Please let me know if any more information is required.

Thank you!!!
Smita

Adding devices to lava

I'm a little confused about adding devices to lava. As described, I've edited boards.yaml to add a raspberrypi3 board. But when I'm loading docker images, I'm getting the following error:
local_master1_1 is up-to-date
Recreating local_lab-slave-0_1 ... error

ERROR: for local_lab-slave-0_1 Cannot start service lab-slave-0: linux runtime spec devices: error gathering device information while adding custom device "/dev/bcm2837-rpi-3-b-01": lstat /dev/bcm2837-rpi-3-b-01: no such file or directory

ERROR: for lab-slave-0 Cannot start service lab-slave-0: linux runtime spec devices: error gathering device information while adding custom device "/dev/bcm2837-rpi-3-b-01": lstat /dev/bcm2837-rpi-3-b-01: no such file or directory
ERROR: Encountered errors while bringing up the project.
Help me resolve the issue. I've attached the boards.yaml file contents below.

boardfile.txt

device-type template

Where can I find the device-type templates in lava-docker?
I'm using bcm2837-rpi-3-b-32 and I want to remove the 'earlycon' kernel arg which is specified in the device-type template.

Device registration from remote worker does not work

Hi,
If your boards.yml has both a local and a remote worker, and devices are attached to the remote worker, setting the device dictionary from the remote worker will fail.
The following command from setup.sh on the slave

lavacli --uri devices dict /root/devices/lab-slave-1/.jinja2 triggers the following error:
Unable to call 'devices.dict': <ProtocolError for ... ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',))>

How to configure token of notify block in order that the kernelci token is invisible as well as the test result can be submitted to kernelci backend successfully?

Hi, I have some trouble and need your help. I would be grateful for any assistance.
I used lava-docker to deploy lava and kernelci-docker to deploy kernelci.
I need to submit the test result of a lava job to the kernelci backend via the notify block in the lava job. The kernelci token was stored under Remote artifact tokens on the profile page and named kernelci-token. I configured the token of the notify block to kernelci-token and executed the lava job. The kernelci backend prompted "Token not authorized for IP address 210.120.168.110".
How should I configure the token of the notify block so that the kernelci token stays invisible and the test result can still be submitted to the kernelci backend successfully?

lava job notify block

notify:
  criteria:
    status: finished
  callbacks:
    - url: http://210.120.168.110:8081/callback/lava/test?lab_name=lab-01-nantong&status={STATUS}&status_string={STATUS_STRING}
      method: POST
      dataset: all
      token: kernelci-token
      content-type: json
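For reference, LAVA resolves a named token from Remote artifact tokens to its secret value and sends it in the Authorization header of the callback POST, so the secret never appears in the job definition — the "not authorized for IP address" message is the kernelci backend rejecting the caller's IP, which is configured on the backend side, not in LAVA. A sketch of the request shape LAVA produces (URL from the issue; the token value is a placeholder, not a real credential):

```python
import json
from urllib.request import Request

# Placeholder values mirroring the notify block above; LAVA substitutes
# the real secret stored under the token named 'kernelci-token'.
callback_url = "http://210.120.168.110:8081/callback/lava/test?lab_name=lab-01-nantong"
secret = "<value stored under Remote artifact tokens>"

req = Request(
    callback_url,
    data=json.dumps({"status": 2, "status_string": "complete"}).encode(),
    headers={"Authorization": secret, "Content-Type": "application/json"},
    method="POST",
)

# Nothing is sent here; this only shows where the token value ends up.
print(req.get_header("Authorization"))
```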

kernelci response

[ WARNING/ Thread-26] Token not authorized for IP address 210.120.168.110
[ WARNING/MainThread] 403 POST /callback/lava/test?lab_name=lab-01-nantong&status=2&status_string=complete (180.120.167.208) 1.65ms

pdu_generic doesn't work on a RISC-V board

Hi, I'm having some trouble and need your help: pdu_generic doesn't work on my RISC-V board. Could you have a look?
This is my boards.yaml
(screenshot attached)

This is test job.
(screenshot attached)

This is the result of test job.
(screenshot attached)

kernelci-v2's README seems outdated

"Where LAVA_SERVER_IP is the IP of your Docker host.": there is no LAVA_SERVER_IP on the command line. Only a LAVA_API_TOKEN variable, and it is not clear what to do with that one either.

EDIT: "kernelci/lava-docker-v2:latest" should be "lava:latest"

Job cannot be executed completely on a RISC-V device

Hi, I have some trouble and need your help. I would be grateful for any assistance.
I want to use LAVA to test a RISC-V kernel. I used lava-docker to deploy LAVA and executed a job that tests a HiFive Unmatched. The job cannot be executed completely: the bootloader-interrupt case fails even though interrupt_prompt is correct. The device_type of this job is hifive-unleashed-a00.
This is my boards.yaml

boards:
  - name: qemu-test
    type: qemu
    slave: lab-slave-1
    # kvm: True
  - name: hifive-unmatched
    type: hifive-unleashed-a00
    slave: lab-slave-1
    pdu_generic:
      hard_reset_command: /usr/local/bin/acme-cli -s 192.168.11.73 reset 1
      power_off_command: /usr/local/bin/acme-cli -s 192.168.11.73 switch_off 1
      power_on_command: /usr/local/bin/acme-cli -s 192.168.11.73 switch_on 1
    uart:
      idvendor: 0x067b
      idproduct: 2731
      serial: ABCDEF0123456789AB

This is test job.

device_type: hifive-unleashed-a00
job_name: hifive-unmatched-test
timeouts:
  job:
    minutes: 20
  action:
    minutes: 10
  actions:
    power-off:
      seconds: 30
priority: medium
visibility: public
actions:
- deploy:
    timeout:
      minutes: 3
    to: tftp
    kernel:
      url: file:///home/inlinepath/Image
      type: image
    dtb:
      url: file:///home/inlinepath/hifive-unmatched-a00.dtb
    nfsrootfs:
      url: file:///home/inlinepath/rootfs.tar
- boot:
    timeout:
      minutes: 10
    # failure_retry: 5
    method: u-boot
    commands: nfs
    prompts:
      - '/ #'
    auto_login:
      login_prompt: "buildroot login:"
      username: root

- test:
    timeout:
      minutes: 5
    definitions:
    - repository:
        metadata:
          format: Lava-Test Test Definition 1.0
          name: smoke-tests-basic
          description: "Basic system test command for oerv images"
        run:
          steps:
          - printenv
      from: inline
      name: env-dut-inline
      path: inline/env-dut.yaml
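Since pyyaml is already a prerequisite of lava-docker, a quick way to rule out job-definition syntax problems before submitting is to parse the definition locally and check a few required top-level keys. A small sketch (the inline snippet and the key set are illustrative, not LAVA's full schema validation):

```python
import yaml

# Minimal set of top-level keys a job definition needs; LAVA's own
# validation is stricter, this only catches obvious omissions.
REQUIRED_KEYS = {"device_type", "job_name", "timeouts", "actions"}

def check_job(text: str) -> dict:
    job = yaml.safe_load(text)
    missing = REQUIRED_KEYS - job.keys()
    if missing:
        raise ValueError(f"job definition is missing: {sorted(missing)}")
    return job

snippet = """
device_type: hifive-unleashed-a00
job_name: hifive-unmatched-test
timeouts: {job: {minutes: 20}}
actions: []
"""
print(check_job(snippet)["device_type"])  # → hifive-unleashed-a00
```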

This is job log.

- {"dt": "2022-12-02T07:33:57.860323", "lvl": "info", "msg": "lava-dispatcher, installed at version: 2022.03"}
- {"dt": "2022-12-02T07:33:57.860496", "lvl": "info", "msg": "start: 0 validate"}
- {"dt": "2022-12-02T07:33:57.860658", "lvl": "info", "msg": "Start time: 2022-12-02 07:33:57.860648+00:00 (UTC)"}
- {"dt": "2022-12-02T07:33:57.860799", "lvl": "debug", "msg": "Validating that file:///home/inlinepath/Image exists"}
- {"dt": "2022-12-02T07:33:57.860921", "lvl": "debug", "msg": "Validating that file:///home/inlinepath/hifive-unmatched-a00.dtb exists"}
- {"dt": "2022-12-02T07:33:57.861035", "lvl": "debug", "msg": "Validating that file:///home/inlinepath/rootfs.tar exists"}
- {"dt": "2022-12-02T07:33:57.862223", "lvl": "info", "msg": "validate duration: 0.00"}
- {"dt": "2022-12-02T07:33:57.862313", "lvl": "results", "msg": {"case": "validate", "definition": "lava", "result": "pass"}}
- {"dt": "2022-12-02T07:33:57.862467", "lvl": "info", "msg": "start: 1 tftp-deploy (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:57.862577", "lvl": "debug", "msg": "start: 1.1 download-retry (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:57.862676", "lvl": "debug", "msg": "start: 1.1.1 file-download (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:57.862832", "lvl": "info", "msg": "downloading file:///home/inlinepath/Image"}
- {"dt": "2022-12-02T07:33:57.862905", "lvl": "debug", "msg": "saving as /var/lib/lava/dispatcher/tmp/64/tftp-deploy-9kxyu512/kernel/Image"}
- {"dt": "2022-12-02T07:33:57.862984", "lvl": "debug", "msg": "total size: 19813376 (18MB)"}
- {"dt": "2022-12-02T07:33:57.863064", "lvl": "debug", "msg": "No compression specified"}
- {"dt": "2022-12-02T07:33:57.863188", "lvl": "debug", "msg": "progress   0% (0MB)"}
- {"dt": "2022-12-02T07:33:57.867700", "lvl": "debug", "msg": "progress   5% (0MB)"}
- {"dt": "2022-12-02T07:33:57.872040", "lvl": "debug", "msg": "progress  10% (1MB)"}
- {"dt": "2022-12-02T07:33:57.876731", "lvl": "debug", "msg": "progress  15% (2MB)"}
- {"dt": "2022-12-02T07:33:57.881192", "lvl": "debug", "msg": "progress  20% (3MB)"}
- {"dt": "2022-12-02T07:33:57.885977", "lvl": "debug", "msg": "progress  25% (4MB)"}
- {"dt": "2022-12-02T07:33:57.890382", "lvl": "debug", "msg": "progress  30% (5MB)"}
- {"dt": "2022-12-02T07:33:57.894772", "lvl": "debug", "msg": "progress  35% (6MB)"}
- {"dt": "2022-12-02T07:33:57.899122", "lvl": "debug", "msg": "progress  40% (7MB)"}
- {"dt": "2022-12-02T07:33:57.903709", "lvl": "debug", "msg": "progress  45% (8MB)"}
- {"dt": "2022-12-02T07:33:57.908097", "lvl": "debug", "msg": "progress  50% (9MB)"}
- {"dt": "2022-12-02T07:33:57.912437", "lvl": "debug", "msg": "progress  55% (10MB)"}
- {"dt": "2022-12-02T07:33:57.916774", "lvl": "debug", "msg": "progress  60% (11MB)"}
- {"dt": "2022-12-02T07:33:57.921282", "lvl": "debug", "msg": "progress  65% (12MB)"}
- {"dt": "2022-12-02T07:33:57.925625", "lvl": "debug", "msg": "progress  70% (13MB)"}
- {"dt": "2022-12-02T07:33:57.929955", "lvl": "debug", "msg": "progress  75% (14MB)"}
- {"dt": "2022-12-02T07:33:57.934291", "lvl": "debug", "msg": "progress  80% (15MB)"}
- {"dt": "2022-12-02T07:33:57.938631", "lvl": "debug", "msg": "progress  85% (16MB)"}
- {"dt": "2022-12-02T07:33:57.943144", "lvl": "debug", "msg": "progress  90% (17MB)"}
- {"dt": "2022-12-02T07:33:57.947495", "lvl": "debug", "msg": "progress  95% (17MB)"}
- {"dt": "2022-12-02T07:33:57.951846", "lvl": "debug", "msg": "progress 100% (18MB)"}
- {"dt": "2022-12-02T07:33:57.952036", "lvl": "info", "msg": "18MB downloaded in 0.09s (212.19MB/s)"}
- {"dt": "2022-12-02T07:33:57.952204", "lvl": "debug", "msg": "end: 1.1.1 file-download (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:57.952285", "lvl": "results", "msg": {"case": "file-download", "definition": "lava", "duration": "0.09", "extra": {"label": "kernel", "md5sum": "77a16e120ddffa2ff38e20f408a2e904", "sha256sum": "2dc12b2852c234a34455850c8516732495a1dec5f5c0dbf6f6ddbd3c72a2c259", "sha512sum": "cd649e53e653f3e8dd5d8d494c93b826ab14db5c98c6db3b10348da043b28c67692c4b97bd9a3b81e7795dea0924ec7860ebb3242923933a7b6638df264c7a5e", "size": ! "19813376"}, "level": "1.1.1", "namespace": "common", "result": "pass"}}
- {"dt": "2022-12-02T07:33:57.952456", "lvl": "debug", "msg": "end: 1.1 download-retry (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:57.952560", "lvl": "debug", "msg": "start: 1.2 download-retry (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:57.952656", "lvl": "debug", "msg": "start: 1.2.1 file-download (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:57.952821", "lvl": "info", "msg": "downloading file:///home/inlinepath/hifive-unmatched-a00.dtb"}
- {"dt": "2022-12-02T07:33:57.952893", "lvl": "debug", "msg": "saving as /var/lib/lava/dispatcher/tmp/64/tftp-deploy-9kxyu512/dtb/hifive-unmatched-a00.dtb"}
- {"dt": "2022-12-02T07:33:57.952971", "lvl": "debug", "msg": "total size: 10525 (0MB)"}
- {"dt": "2022-12-02T07:33:57.953047", "lvl": "debug", "msg": "No compression specified"}
- {"dt": "2022-12-02T07:33:57.953165", "lvl": "debug", "msg": "progress 100% (0MB)"}
- {"dt": "2022-12-02T07:33:57.953306", "lvl": "info", "msg": "0MB downloaded in 0.00s (30.29MB/s)"}
- {"dt": "2022-12-02T07:33:57.953436", "lvl": "debug", "msg": "end: 1.2.1 file-download (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:57.953518", "lvl": "results", "msg": {"case": "file-download", "definition": "lava", "duration": "0.00", "extra": {"label": "dtb", "md5sum": "e77dcbfa5aecaa84c85820a75a9cfb06", "sha256sum": "9c5449cfd95b04674512e77e644b8143db7b322a429150d5f429ef944df5fd04", "sha512sum": "bc943cef447aa72f25b1881c99a0561ed330f96acf92b73fd8956f5fa42496220eebca44d9382319dbf8a4ed11533a9f38b88f207392f66117622b99a2dd7a01", "size": ! "10525"}, "level": "1.2.1", "namespace": "common", "result": "pass"}}
- {"dt": "2022-12-02T07:33:57.953684", "lvl": "debug", "msg": "end: 1.2 download-retry (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:57.953781", "lvl": "debug", "msg": "start: 1.3 download-retry (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:57.953875", "lvl": "debug", "msg": "start: 1.3.1 file-download (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:57.954006", "lvl": "info", "msg": "downloading file:///home/inlinepath/rootfs.tar"}
- {"dt": "2022-12-02T07:33:57.954079", "lvl": "debug", "msg": "saving as /var/lib/lava/dispatcher/tmp/64/tftp-deploy-9kxyu512/nfsrootfs/rootfs.tar"}
- {"dt": "2022-12-02T07:33:57.954156", "lvl": "debug", "msg": "total size: 21862400 (20MB)"}
- {"dt": "2022-12-02T07:33:57.954233", "lvl": "debug", "msg": "No compression specified"}
- {"dt": "2022-12-02T07:33:57.954350", "lvl": "debug", "msg": "progress   0% (0MB)"}
- {"dt": "2022-12-02T07:33:57.959110", "lvl": "debug", "msg": "progress   5% (1MB)"}
- {"dt": "2022-12-02T07:33:57.963891", "lvl": "debug", "msg": "progress  10% (2MB)"}
- {"dt": "2022-12-02T07:33:57.968799", "lvl": "debug", "msg": "progress  15% (3MB)"}
- {"dt": "2022-12-02T07:33:57.973560", "lvl": "debug", "msg": "progress  20% (4MB)"}
- {"dt": "2022-12-02T07:33:57.978358", "lvl": "debug", "msg": "progress  25% (5MB)"}
- {"dt": "2022-12-02T07:33:57.983267", "lvl": "debug", "msg": "progress  30% (6MB)"}
- {"dt": "2022-12-02T07:33:57.988045", "lvl": "debug", "msg": "progress  35% (7MB)"}
- {"dt": "2022-12-02T07:33:57.992951", "lvl": "debug", "msg": "progress  40% (8MB)"}
- {"dt": "2022-12-02T07:33:57.998031", "lvl": "debug", "msg": "progress  45% (9MB)"}
- {"dt": "2022-12-02T07:33:58.002870", "lvl": "debug", "msg": "progress  50% (10MB)"}
- {"dt": "2022-12-02T07:33:58.007663", "lvl": "debug", "msg": "progress  55% (11MB)"}
- {"dt": "2022-12-02T07:33:58.012597", "lvl": "debug", "msg": "progress  60% (12MB)"}
- {"dt": "2022-12-02T07:33:58.017418", "lvl": "debug", "msg": "progress  65% (13MB)"}
- {"dt": "2022-12-02T07:33:58.022327", "lvl": "debug", "msg": "progress  70% (14MB)"}
- {"dt": "2022-12-02T07:33:58.027080", "lvl": "debug", "msg": "progress  75% (15MB)"}
- {"dt": "2022-12-02T07:33:58.031879", "lvl": "debug", "msg": "progress  80% (16MB)"}
- {"dt": "2022-12-02T07:33:58.036791", "lvl": "debug", "msg": "progress  85% (17MB)"}
- {"dt": "2022-12-02T07:33:58.041551", "lvl": "debug", "msg": "progress  90% (18MB)"}
- {"dt": "2022-12-02T07:33:58.046303", "lvl": "debug", "msg": "progress  95% (19MB)"}
- {"dt": "2022-12-02T07:33:58.051195", "lvl": "debug", "msg": "progress 100% (20MB)"}
- {"dt": "2022-12-02T07:33:58.051325", "lvl": "info", "msg": "20MB downloaded in 0.10s (214.58MB/s)"}
- {"dt": "2022-12-02T07:33:58.051482", "lvl": "debug", "msg": "end: 1.3.1 file-download (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:58.051561", "lvl": "results", "msg": {"case": "file-download", "definition": "lava", "duration": "0.10", "extra": {"label": "nfsrootfs", "md5sum": "9636b6b1df1af8d301ba3ae4ccd85f55", "sha256sum": "d98837edbd17ec8bc20dca9fdeebaa9dd02f275bdcb313fa6c634cc013fbad69", "sha512sum": "1ed738aee26a1999155f7059c9b811c84736a3d578eac6a4620c105d1c3c9a8efd47c34378180f8c026cbccbb76fc1c2051bf9818cd49e31be3f299d2ee495d4", "size": ! "21862400"}, "level": "1.3.1", "namespace": "common", "result": "pass"}}
- {"dt": "2022-12-02T07:33:58.051740", "lvl": "debug", "msg": "end: 1.3 download-retry (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:58.051841", "lvl": "debug", "msg": "start: 1.4 prepare-tftp-overlay (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:58.051939", "lvl": "debug", "msg": "start: 1.4.1 extract-nfsrootfs (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:58.106767", "lvl": "debug", "msg": "Extracted nfsroot to /var/lib/lava/dispatcher/tmp/64/extract-nfsrootfs-05a2f3wk"}
- {"dt": "2022-12-02T07:33:58.106970", "lvl": "debug", "msg": "end: 1.4.1 extract-nfsrootfs (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:58.107088", "lvl": "debug", "msg": "start: 1.4.2 lava-overlay (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:58.107253", "lvl": "debug", "msg": "[common] Preparing overlay tarball in /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh"}
- {"dt": "2022-12-02T07:33:58.107403", "lvl": "debug", "msg": "makedir: /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/bin"}
- {"dt": "2022-12-02T07:33:58.107525", "lvl": "debug", "msg": "makedir: /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/tests"}
- {"dt": "2022-12-02T07:33:58.107645", "lvl": "debug", "msg": "makedir: /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/results"}
- {"dt": "2022-12-02T07:33:58.107757", "lvl": "debug", "msg": "Creating /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/bin/lava-add-keys"}
- {"dt": "2022-12-02T07:33:58.107906", "lvl": "debug", "msg": "Creating /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/bin/lava-add-sources"}
- {"dt": "2022-12-02T07:33:58.108049", "lvl": "debug", "msg": "Creating /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/bin/lava-background-process-start"}
- {"dt": "2022-12-02T07:33:58.108193", "lvl": "debug", "msg": "Creating /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/bin/lava-background-process-stop"}
- {"dt": "2022-12-02T07:33:58.108331", "lvl": "debug", "msg": "Creating /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/bin/lava-common-functions"}
- {"dt": "2022-12-02T07:33:58.108470", "lvl": "debug", "msg": "Creating /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/bin/lava-echo-ipv4"}
- {"dt": "2022-12-02T07:33:58.108609", "lvl": "debug", "msg": "Creating /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/bin/lava-install-packages"}
- {"dt": "2022-12-02T07:33:58.108747", "lvl": "debug", "msg": "Creating /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/bin/lava-installed-packages"}
- {"dt": "2022-12-02T07:33:58.108884", "lvl": "debug", "msg": "Creating /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/bin/lava-os-build"}
- {"dt": "2022-12-02T07:33:58.109023", "lvl": "debug", "msg": "Creating /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/bin/lava-probe-channel"}
- {"dt": "2022-12-02T07:33:58.109160", "lvl": "debug", "msg": "Creating /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/bin/lava-probe-ip"}
- {"dt": "2022-12-02T07:33:58.109297", "lvl": "debug", "msg": "Creating /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/bin/lava-target-ip"}
- {"dt": "2022-12-02T07:33:58.109435", "lvl": "debug", "msg": "Creating /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/bin/lava-target-mac"}
- {"dt": "2022-12-02T07:33:58.109581", "lvl": "debug", "msg": "Creating /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/bin/lava-target-storage"}
- {"dt": "2022-12-02T07:33:58.109722", "lvl": "debug", "msg": "Creating /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/bin/lava-test-case"}
- {"dt": "2022-12-02T07:33:58.109860", "lvl": "debug", "msg": "Creating /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/bin/lava-test-event"}
- {"dt": "2022-12-02T07:33:58.109999", "lvl": "debug", "msg": "Creating /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/bin/lava-test-feedback"}
- {"dt": "2022-12-02T07:33:58.110139", "lvl": "debug", "msg": "Creating /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/bin/lava-test-raise"}
- {"dt": "2022-12-02T07:33:58.110278", "lvl": "debug", "msg": "Creating /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/bin/lava-test-reference"}
- {"dt": "2022-12-02T07:33:58.110417", "lvl": "debug", "msg": "Creating /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/bin/lava-test-runner"}
- {"dt": "2022-12-02T07:33:58.110557", "lvl": "debug", "msg": "Creating /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/bin/lava-test-set"}
- {"dt": "2022-12-02T07:33:58.110697", "lvl": "debug", "msg": "Creating /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/bin/lava-test-shell"}
- {"dt": "2022-12-02T07:33:58.110820", "lvl": "debug", "msg": "Creating /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/environment"}
- {"dt": "2022-12-02T07:33:58.110929", "lvl": "debug", "msg": "LAVA metadata"}
- {"dt": "2022-12-02T07:33:58.111011", "lvl": "debug", "msg": "- LAVA_JOB_ID=64"}
- {"dt": "2022-12-02T07:33:58.111129", "lvl": "debug", "msg": "start: 1.4.2.1 ssh-authorize (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:58.111363", "lvl": "debug", "msg": "end: 1.4.2.1 ssh-authorize (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:58.111463", "lvl": "debug", "msg": "start: 1.4.2.2 lava-vland-overlay (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:58.111536", "lvl": "debug", "msg": "skipped lava-vland-overlay"}
- {"dt": "2022-12-02T07:33:58.111630", "lvl": "debug", "msg": "end: 1.4.2.2 lava-vland-overlay (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:58.111724", "lvl": "debug", "msg": "start: 1.4.2.3 lava-multinode-overlay (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:58.111794", "lvl": "debug", "msg": "skipped lava-multinode-overlay"}
- {"dt": "2022-12-02T07:33:58.111886", "lvl": "debug", "msg": "end: 1.4.2.3 lava-multinode-overlay (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:58.111980", "lvl": "debug", "msg": "start: 1.4.2.4 test-definition (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:58.112062", "lvl": "info", "msg": "Loading test definitions"}
- {"dt": "2022-12-02T07:33:58.112167", "lvl": "debug", "msg": "start: 1.4.2.4.1 inline-repo-action (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:58.112250", "lvl": "debug", "msg": "Using /lava-64 at stage 0"}
- {"dt": "2022-12-02T07:33:58.112581", "lvl": "debug", "msg": "uuid=64_1.4.2.4.1 testdef=None"}
- {"dt": "2022-12-02T07:33:58.112677", "lvl": "debug", "msg": "end: 1.4.2.4.1 inline-repo-action (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:58.112775", "lvl": "debug", "msg": "start: 1.4.2.4.2 test-overlay (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:58.113224", "lvl": "debug", "msg": "end: 1.4.2.4.2 test-overlay (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:58.113303", "lvl": "results", "msg": {"case": "test-overlay", "definition": "lava", "duration": "0.00", "extra": {"from": "inline", "name": "env-dut-inline", "path": "inline/env-dut.yaml", "uuid": "64_1.4.2.4.1"}, "level": "1.4.2.4.2", "namespace": "common", "result": "pass"}}
- {"dt": "2022-12-02T07:33:58.113467", "lvl": "debug", "msg": "start: 1.4.2.4.3 test-install-overlay (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:58.114022", "lvl": "debug", "msg": "end: 1.4.2.4.3 test-install-overlay (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:58.114105", "lvl": "results", "msg": {"case": "test-install-overlay", "definition": "lava", "duration": "0.00", "extra": {"from": "inline", "name": "env-dut-inline", "path": "inline/env-dut.yaml", "skipped test-install-overlay": "64_1.4.2.4.1", "uuid": "64_1.4.2.4.1"}, "level": "1.4.2.4.3", "namespace": "common", "result": "pass"}}
- {"dt": "2022-12-02T07:33:58.114274", "lvl": "debug", "msg": "start: 1.4.2.4.4 test-runscript-overlay (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:58.115439", "lvl": "debug", "msg": "runner path: /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/0/tests/0_env-dut-inline test_uuid 64_1.4.2.4.1"}
- {"dt": "2022-12-02T07:33:58.115606", "lvl": "debug", "msg": "end: 1.4.2.4.4 test-runscript-overlay (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:58.115684", "lvl": "results", "msg": {"case": "test-runscript-overlay", "definition": "lava", "duration": "0.00", "extra": {"filename": "/var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/0/tests/0_env-dut-inline/run.sh", "from": "inline", "name": "env-dut-inline", "path": "inline/env-dut.yaml", "uuid": "64_1.4.2.4.1"}, "level": "1.4.2.4.4", "namespace": "common", "result": "pass"}}
- {"dt": "2022-12-02T07:33:58.115825", "lvl": "info", "msg": "Creating lava-test-runner.conf files"}
- {"dt": "2022-12-02T07:33:58.115903", "lvl": "debug", "msg": "Using lava-test-runner path: /var/lib/lava/dispatcher/tmp/64/lava-overlay-cynwj7rh/lava-64/0 for stage 0"}
- {"dt": "2022-12-02T07:33:58.116014", "lvl": "debug", "msg": "- 0_env-dut-inline"}
- {"dt": "2022-12-02T07:33:58.116121", "lvl": "debug", "msg": "end: 1.4.2.4 test-definition (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:58.116218", "lvl": "debug", "msg": "start: 1.4.2.5 compress-overlay (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:58.121976", "lvl": "debug", "msg": "end: 1.4.2.5 compress-overlay (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:58.122081", "lvl": "debug", "msg": "start: 1.4.2.6 persistent-nfs-overlay (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:58.122177", "lvl": "debug", "msg": "end: 1.4.2.6 persistent-nfs-overlay (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:58.122273", "lvl": "debug", "msg": "end: 1.4.2 lava-overlay (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:58.122370", "lvl": "debug", "msg": "start: 1.4.3 extract-overlay-ramdisk (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:58.122462", "lvl": "debug", "msg": "end: 1.4.3 extract-overlay-ramdisk (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:58.122558", "lvl": "debug", "msg": "start: 1.4.4 extract-modules (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:58.122649", "lvl": "debug", "msg": "end: 1.4.4 extract-modules (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:58.122742", "lvl": "debug", "msg": "start: 1.4.5 apply-overlay-tftp (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:58.122822", "lvl": "info", "msg": "[common] Applying overlay to NFS"}
- {"dt": "2022-12-02T07:33:58.122894", "lvl": "debug", "msg": "[common] Applying overlay /var/lib/lava/dispatcher/tmp/64/compress-overlay-wt_xnfma/overlay-1.4.2.5.tar.gz to directory /var/lib/lava/dispatcher/tmp/64/extract-nfsrootfs-05a2f3wk"}
- {"dt": "2022-12-02T07:33:58.127914", "lvl": "debug", "msg": "end: 1.4.5 apply-overlay-tftp (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:58.128020", "lvl": "debug", "msg": "start: 1.4.6 prepare-kernel (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:58.128121", "lvl": "debug", "msg": "start: 1.4.6.1 uboot-prepare-kernel (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:58.128222", "lvl": "debug", "msg": "end: 1.4.6.1 uboot-prepare-kernel (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:58.128312", "lvl": "debug", "msg": "end: 1.4.6 prepare-kernel (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:58.128409", "lvl": "debug", "msg": "start: 1.4.7 configure-preseed-file (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:58.128499", "lvl": "debug", "msg": "end: 1.4.7 configure-preseed-file (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:58.128592", "lvl": "debug", "msg": "start: 1.4.8 compress-ramdisk (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:58.128680", "lvl": "debug", "msg": "end: 1.4.8 compress-ramdisk (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:58.128774", "lvl": "debug", "msg": "end: 1.4 prepare-tftp-overlay (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:58.128867", "lvl": "debug", "msg": "start: 1.5 lxc-create-udev-rule-action (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:58.128940", "lvl": "debug", "msg": "No LXC device requested"}
- {"dt": "2022-12-02T07:33:58.129033", "lvl": "debug", "msg": "end: 1.5 lxc-create-udev-rule-action (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:58.129126", "lvl": "debug", "msg": "start: 1.6 deploy-device-env (timeout 00:03:00) [common]"}
- {"dt": "2022-12-02T07:33:58.129213", "lvl": "debug", "msg": "end: 1.6 deploy-device-env (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:58.129284", "lvl": "debug", "msg": "Checking files for TFTP limit of 4294967296 bytes."}
- {"dt": "2022-12-02T07:33:58.129640", "lvl": "info", "msg": "end: 1 tftp-deploy (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:58.129744", "lvl": "info", "msg": "start: 2 uboot-action (timeout 00:10:00) [common]"}
- {"dt": "2022-12-02T07:33:58.129846", "lvl": "debug", "msg": "start: 2.1 uboot-from-media (timeout 00:10:00) [common]"}
- {"dt": "2022-12-02T07:33:58.129937", "lvl": "debug", "msg": "end: 2.1 uboot-from-media (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:58.130033", "lvl": "debug", "msg": "start: 2.2 bootloader-overlay (timeout 00:10:00) [common]"}
- {"dt": "2022-12-02T07:33:58.130162", "lvl": "debug", "msg": "substitutions:"}
- {"dt": "2022-12-02T07:33:58.130235", "lvl": "debug", "msg": "- {BOOTX}: booti 0x80200000 - 0x83500000"}
- {"dt": "2022-12-02T07:33:58.130311", "lvl": "debug", "msg": "- {DTB_ADDR}: 0x83500000"}
- {"dt": "2022-12-02T07:33:58.130385", "lvl": "debug", "msg": "- {DTB}: 64/tftp-deploy-9kxyu512/dtb/hifive-unmatched-a00.dtb"}
- {"dt": "2022-12-02T07:33:58.130459", "lvl": "debug", "msg": "- {INITRD}: None"}
- {"dt": "2022-12-02T07:33:58.130532", "lvl": "debug", "msg": "- {KERNEL_ADDR}: 0x80200000"}
- {"dt": "2022-12-02T07:33:58.130603", "lvl": "debug", "msg": "- {KERNEL}: 64/tftp-deploy-9kxyu512/kernel/Image"}
- {"dt": "2022-12-02T07:33:58.130674", "lvl": "debug", "msg": "- {LAVA_MAC}: None"}
- {"dt": "2022-12-02T07:33:58.130745", "lvl": "debug", "msg": "- {NFSROOTFS}: /var/lib/lava/dispatcher/tmp/64/extract-nfsrootfs-05a2f3wk"}
- {"dt": "2022-12-02T07:33:58.130815", "lvl": "debug", "msg": "- {NFS_SERVER_IP}: 192.168.11.73"}
- {"dt": "2022-12-02T07:33:58.130885", "lvl": "debug", "msg": "- {PRESEED_CONFIG}: None"}
- {"dt": "2022-12-02T07:33:58.130955", "lvl": "debug", "msg": "- {PRESEED_LOCAL}: None"}
- {"dt": "2022-12-02T07:33:58.131025", "lvl": "debug", "msg": "- {RAMDISK_ADDR}: -"}
- {"dt": "2022-12-02T07:33:58.131094", "lvl": "debug", "msg": "- {RAMDISK}: None"}
- {"dt": "2022-12-02T07:33:58.131164", "lvl": "debug", "msg": "- {ROOT_PART}: None"}
- {"dt": "2022-12-02T07:33:58.131234", "lvl": "debug", "msg": "- {ROOT}: None"}
- {"dt": "2022-12-02T07:33:58.131304", "lvl": "debug", "msg": "- {SERVER_IP}: 192.168.11.73"}
- {"dt": "2022-12-02T07:33:58.131373", "lvl": "debug", "msg": "- {TEE_ADDR}: 0x83000000"}
- {"dt": "2022-12-02T07:33:58.131443", "lvl": "debug", "msg": "- {TEE}: None"}
- {"dt": "2022-12-02T07:33:58.131512", "lvl": "info", "msg": "Parsed boot commands:"}
- {"dt": "2022-12-02T07:33:58.131581", "lvl": "info", "msg": "- setenv autoload no"}
- {"dt": "2022-12-02T07:33:58.131650", "lvl": "info", "msg": "- setenv initrd_high 0xffffffffffffffff"}
- {"dt": "2022-12-02T07:33:58.131719", "lvl": "info", "msg": "- setenv fdt_high 0xffffffffffffffff"}
- {"dt": "2022-12-02T07:33:58.131788", "lvl": "info", "msg": "- dhcp"}
- {"dt": "2022-12-02T07:33:58.131857", "lvl": "info", "msg": "- setenv serverip 192.168.11.73"}
- {"dt": "2022-12-02T07:33:58.131926", "lvl": "info", "msg": "- tftp 0x80200000 64/tftp-deploy-9kxyu512/kernel/Image"}
- {"dt": "2022-12-02T07:33:58.131995", "lvl": "info", "msg": "- setenv initrd_size ${filesize}"}
- {"dt": "2022-12-02T07:33:58.132064", "lvl": "info", "msg": "- tftp 0x83500000 64/tftp-deploy-9kxyu512/dtb/hifive-unmatched-a00.dtb"}
- {"dt": "2022-12-02T07:33:58.132133", "lvl": "info", "msg": "- setenv bootargs 'console=ttySIF0,115200n8 root=/dev/nfs rw nfsroot=192.168.11.73:/var/lib/lava/dispatcher/tmp/64/extract-nfsrootfs-05a2f3wk,tcp,hard  ip=dhcp'"}
- {"dt": "2022-12-02T07:33:58.132203", "lvl": "info", "msg": "- booti 0x80200000 - 0x83500000"}
- {"dt": "2022-12-02T07:33:58.132296", "lvl": "debug", "msg": "end: 2.2 bootloader-overlay (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:58.132366", "lvl": "results", "msg": {"case": "bootloader-overlay", "definition": "lava", "duration": "0.00", "extra": {"dtb_addr": "0x83500000", "kernel_addr": "0x80200000", "ramdisk_addr": "-", "tee_addr": "0x83000000"}, "level": "2.2", "namespace": "common", "result": "pass"}}
- {"dt": "2022-12-02T07:33:58.132521", "lvl": "debug", "msg": "start: 2.3 connect-device (timeout 00:10:00) [common]"}
- {"dt": "2022-12-02T07:33:58.132597", "lvl": "info", "msg": "[common] connect-device Connecting to device using 'telnet 127.0.0.1 63001'"}
- {"dt": "2022-12-02T07:33:58.196960", "lvl": "debug", "msg": "Setting prompt string to ['lava-test: # ']"}
- {"dt": "2022-12-02T07:33:58.197350", "lvl": "debug", "msg": "end: 2.3 connect-device (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:33:58.197515", "lvl": "debug", "msg": "start: 2.4 uboot-commands (timeout 00:10:00) [common]"}
- {"dt": "2022-12-02T07:33:58.197644", "lvl": "debug", "msg": "start: 2.4.1 reset-device (timeout 00:10:00) [common]"}
- {"dt": "2022-12-02T07:33:58.197755", "lvl": "debug", "msg": "start: 2.4.1.1 pdu-reboot (timeout 00:10:00) [common]"}
- {"dt": "2022-12-02T07:33:58.197953", "lvl": "debug", "msg": "Calling: 'nice' '/usr/local/bin/acme-cli' '-s' '192.168.11.73' 'reset' '1'"}
- {"dt": "2022-12-02T07:33:58.292374", "lvl": "debug", "msg": ">> Switch_off Successfully\r"}
- {"dt": "2022-12-02T07:34:02.317168", "lvl": "debug", "msg": ">> Switch_on Successfully\r"}
- {"dt": "2022-12-02T07:34:02.325489", "lvl": "debug", "msg": "Returned 0 in 4 seconds"}
- {"dt": "2022-12-02T07:34:02.426446", "lvl": "debug", "msg": "end: 2.4.1.1 pdu-reboot (duration 00:00:04) [common]"}
- {"dt": "2022-12-02T07:34:02.426703", "lvl": "results", "msg": {"case": "pdu-reboot", "definition": "lava", "duration": "4.23", "extra": {"status": "success"}, "level": "2.4.1.1", "namespace": "common", "result": "pass"}}
- {"dt": "2022-12-02T07:34:02.426939", "lvl": "debug", "msg": "end: 2.4.1 reset-device (duration 00:00:04) [common]"}
- {"dt": "2022-12-02T07:34:02.427064", "lvl": "debug", "msg": "start: 2.4.2 bootloader-interrupt (timeout 00:09:56) [common]"}
- {"dt": "2022-12-02T07:34:02.427176", "lvl": "debug", "msg": "Setting prompt string to ['Hit any key to stop autoboot']"}
- {"dt": "2022-12-02T07:34:02.427269", "lvl": "debug", "msg": "bootloader-interrupt: Wait for prompt ['Hit any key to stop autoboot'] (timeout 00:10:00)"}
- {"dt": "2022-12-02T07:34:02.427602", "lvl": "target", "msg": "Trying 127.0.0.1..."}
- {"dt": "2022-12-02T07:34:02.427701", "lvl": "target", "msg": "Connected to 127.0.0.1."}
- {"dt": "2022-12-02T07:34:02.427785", "lvl": "target", "msg": "Escape character is '^]'."}
- {"dt": "2022-12-02T07:34:02.427870", "lvl": "target", "msg": "Device open failure: Value or file not found"}
- {"dt": "2022-12-02T07:34:02.427953", "lvl": "target", "msg": "Connection closed by foreign host."}
- {"dt": "2022-12-02T07:34:02.428385", "lvl": "exception", "msg": "Connection closed"}
- {"dt": "2022-12-02T07:34:02.428494", "lvl": "debug", "msg": "end: 2.4.2 bootloader-interrupt (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:34:02.428622", "lvl": "results", "msg": {"case": "bootloader-interrupt", "definition": "lava", "duration": "0.00", "extra": {"fail": "Connection closed"}, "level": "2.4.2", "namespace": "common", "result": "fail"}}
- {"dt": "2022-12-02T07:34:02.428831", "lvl": "exception", "msg": "Connection closed"}
- {"dt": "2022-12-02T07:34:02.428963", "lvl": "debug", "msg": "end: 2.4 uboot-commands (duration 00:00:04) [common]"}
- {"dt": "2022-12-02T07:34:02.429079", "lvl": "results", "msg": {"case": "uboot-commands", "definition": "lava", "duration": "4.23", "extra": {"fail": "Connection closed"}, "level": "2.4", "namespace": "common", "result": "fail"}}
- {"dt": "2022-12-02T07:34:02.429247", "lvl": "error", "msg": "uboot-action failed: 1 of 1 attempts. 'Connection closed'"}
- {"dt": "2022-12-02T07:34:02.429357", "lvl": "exception", "msg": "Connection closed"}
- {"dt": "2022-12-02T07:34:02.429455", "lvl": "info", "msg": "end: 2 uboot-action (duration 00:00:04) [common]"}
- {"dt": "2022-12-02T07:34:02.429558", "lvl": "results", "msg": {"case": "uboot-action", "definition": "lava", "duration": "4.30", "extra": {"fail": "Connection closed"}, "level": "2", "namespace": "common", "result": "fail"}}
- {"dt": "2022-12-02T07:34:02.429745", "lvl": "info", "msg": "Cleaning after the job"}
- {"dt": "2022-12-02T07:34:02.429864", "lvl": "debug", "msg": "Cleaning up download directory: /var/lib/lava/dispatcher/tmp/64/tftp-deploy-9kxyu512/kernel"}
- {"dt": "2022-12-02T07:34:02.433775", "lvl": "debug", "msg": "Cleaning up download directory: /var/lib/lava/dispatcher/tmp/64/tftp-deploy-9kxyu512/dtb"}
- {"dt": "2022-12-02T07:34:02.433975", "lvl": "debug", "msg": "Cleaning up download directory: /var/lib/lava/dispatcher/tmp/64/tftp-deploy-9kxyu512/nfsrootfs"}
- {"dt": "2022-12-02T07:34:02.437927", "lvl": "debug", "msg": "start: 4.1 power-off (timeout 00:00:30) [common]"}
- {"dt": "2022-12-02T07:34:02.438140", "lvl": "debug", "msg": "Calling: 'nice' '/usr/local/bin/acme-cli' '-s' '192.168.11.73' 'switch_off' '1'"}
- {"dt": "2022-12-02T07:34:02.535740", "lvl": "debug", "msg": ">> Switch_off Successfully\r"}
- {"dt": "2022-12-02T07:34:02.539629", "lvl": "debug", "msg": "Returned 0 in 0 seconds"}
- {"dt": "2022-12-02T07:34:02.640436", "lvl": "debug", "msg": "end: 4.1 power-off (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:34:02.640620", "lvl": "results", "msg": {"case": "power-off", "definition": "lava", "duration": "0.20", "extra": {"status": "success"}, "level": "4.1", "namespace": "common", "result": "pass"}}
- {"dt": "2022-12-02T07:34:02.640786", "lvl": "debug", "msg": "start: 4.2 read-feedback (timeout 00:10:00) [common]"}
- {"dt": "2022-12-02T07:34:02.641054", "lvl": "info", "msg": "Finalising connection for namespace 'common'"}
- {"dt": "2022-12-02T07:34:02.641148", "lvl": "debug", "msg": "Already disconnected"}
- {"dt": "2022-12-02T07:34:02.741883", "lvl": "debug", "msg": "end: 4.2 read-feedback (duration 00:00:00) [common]"}
- {"dt": "2022-12-02T07:34:02.742521", "lvl": "info", "msg": "Override tmp directory removed at /var/lib/lava/dispatcher/tmp/64"}
- {"dt": "2022-12-02T07:34:02.782435", "lvl": "info", "msg": "Root tmp directory removed at /var/lib/lava/dispatcher/tmp/64"}
- {"dt": "2022-12-02T07:34:02.783023", "lvl": "error", "msg": "InfrastructureError: The Infrastructure is not working correctly. Please report this error to LAVA admins."}
- {"dt": "2022-12-02T07:34:02.783327", "lvl": "results", "msg": {"case": "job", "definition": "lava", "error_msg": "Connection closed", "error_type": "Infrastructure", "result": "fail"}}


backup.sh doesn't find the running system

backup.sh looks for the container running the master server by grepping the output of docker ps for "master". This does not work, because containers are named after the hostnames configured for them, so no backup is generated; worse, the script prints no error and only exits with a non-zero code.
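A more robust lookup would key off the docker-compose service label rather than the container name. The sketch below illustrates the idea on sample docker ps output; the helper name and the --format string are illustrative, not the actual backup.sh code.

```python
def find_master_container(ps_output):
    """Pick the LAVA master container from `docker ps` output produced with
    --format '{{.Names}}\t{{.Label "com.docker.compose.service"}}'.
    Matching on the compose service label survives hostname-based container
    names, unlike grepping for "master" in the name column."""
    for line in ps_output.strip().splitlines():
        name, service = line.split("\t")
        if service.startswith("master"):
            return name
    # Fail loudly instead of silently exiting with an error code.
    raise SystemExit("backup.sh: no master container found")

# A container renamed after its hostname is still found via its service label:
print(find_master_container("lab-slave-0_1\tlab-slave-0\nlava-server_1\tmaster1"))
# prints lava-server_1
```

In shell, the equivalent one-liner would filter with `docker ps --filter "label=com.docker.compose.service"` instead of piping through grep.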

Building lab-slave-0 is extremely slow at Step 1/32 : FROM baylibre/lava-slave-base:latest

Hi all. I am installing LAVA for the first time.

During docker-compose build, the step below is extremely slow:

Building lab-slave-0
Step 1/32 : FROM baylibre/lava-slave-base:latest
latest: Pulling from baylibre/lava-slave-base

I have waited several days and restarted this command many times; it always stops there.
Maybe it is due to my network conditions, which are out of my control.
Does anyone have a better idea for avoiding this? (Building master1 succeeded.)
Thanks.
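If the pull from Docker Hub stalls like this, one common workaround is to configure a registry mirror on the host in /etc/docker/daemon.json and restart the Docker daemon; the mirror URL below is a placeholder to replace with a mirror reachable from your network:

```json
{
  "registry-mirrors": ["https://registry.example.com"]
}
```

Pre-pulling the base image on a machine with better connectivity and transferring it with docker save / docker load is another option.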

lxc-apt-install error

I'm trying to run fastboot with lxc.
I added
RUN apt-get -y install lavacli
RUN apt-get -y install lava-lxc-mocker
to the Dockerfile, because without them I can't work with lxc.

And I received this error after submitting the job:
/var/lib/dpkg/lock problem
output: E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)
E: Unable to lock the administration directory (/var/lib/dpkg/)

LAVA doesn't accept proxy env.yaml file

Hello,

When I add my proxy environment to the boards.yaml file, I get the following error when LAVA is to run a healthcheck on the qemu device:
"Infrastructure error: Cannot open '/etc/lava-server/dispatcher.d/lab-slave-0/env.yaml': Not a valid YAML file".
Looking inside the master container, the generated env.yaml file looks like:
overrides:
{'http_proxy': 'http://XXX:YYY'}
{'https_proxy': 'http://XXX:YYY'}
{'ftp_proxy': 'http://XXX:YYY'}

as opposed to the expected
overrides:
http_proxy: http://XXX:YYY
https_proxy: http://XXX:YYY
ftp_proxy: http://XXX:YYY

I'm new to python so I'm not sure exactly how to change the lavalab_gen.py file in order to get the correct output.
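The generator is emitting Python dict reprs instead of YAML mapping lines. A minimal sketch of the needed change (the function name is hypothetical, not the actual lavalab_gen.py code): format each override as a plain key: value line under the overrides section.

```python
def render_env_yaml(overrides):
    """Render the LAVA env.yaml 'overrides' section as a real YAML mapping
    instead of one Python dict repr per line (hypothetical helper,
    illustrating the fix needed in lavalab_gen.py)."""
    lines = ["overrides:"]
    for key, value in sorted(overrides.items()):
        # Plain "key: value" with two-space indent is valid YAML here.
        lines.append("  %s: %s" % (key, value))
    return "\n".join(lines) + "\n"

print(render_env_yaml({"http_proxy": "http://XXX:YYY",
                       "https_proxy": "http://XXX:YYY"}))
```

Since pyyaml is already a lava-docker prerequisite, yaml.dump({"overrides": overrides}, default_flow_style=False) would achieve the same result with less code.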

Error while docker-compose up with a backup restore

Hello,

I got the following error while trying to restore a backup on both #bacc75a4 and on #cac7cdd9.

master1_1      | Restore database from backup
master1_1      | SET
master1_1      | SET
master1_1      | SET
master1_1      | SET
master1_1      | SET
master1_1      |  set_config 
master1_1      | ------------
master1_1      |  
master1_1      | (1 row)
.
.
.
lab-slave-0_1  | Wait for master.... (1180s remains)
master1_1      | Operations to perform:
master1_1      |   Apply all migrations: admin, auth, authtoken, contenttypes, lava_results_app, lava_scheduler_app, linaro_django_xmlrpc, sessions, sites
master1_1      | Running migrations:
master1_1      |   Applying lava_results_app.0019_auto_20230512_1042...Traceback (most recent call last):
master1_1      |   File "/usr/lib/python3/dist-packages/django/db/backends/utils.py", line 84, in _execute
master1_1      |     return self.cursor.execute(sql, params)
master1_1      | psycopg2.errors.UndefinedTable: table "lava_results_app_actiondata" does not exist
master1_1      | 
master1_1      | 
master1_1      | The above exception was the direct cause of the following exception:
master1_1      | 
master1_1      | Traceback (most recent call last):
master1_1      |   File "/usr/bin/lava-server", line 68, in <module>
master1_1      |     main()
master1_1      |   File "/usr/bin/lava-server", line 64, in main
master1_1      |     execute_from_command_line([sys.argv[0]] + options.command)
master1_1      |   File "/usr/lib/python3/dist-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
master1_1      |     utility.execute()
master1_1      |   File "/usr/lib/python3/dist-packages/django/core/management/__init__.py", line 375, in execute
master1_1      |     self.fetch_command(subcommand).run_from_argv(self.argv)
master1_1      |   File "/usr/lib/python3/dist-packages/django/core/management/base.py", line 323, in run_from_argv
master1_1      |     self.execute(*args, **cmd_options)
master1_1      |   File "/usr/lib/python3/dist-packages/django/core/management/base.py", line 364, in execute
master1_1      |     output = self.handle(*args, **options)
master1_1      |   File "/usr/lib/python3/dist-packages/django/core/management/base.py", line 83, in wrapped
master1_1      |     res = handle_func(*args, **kwargs)
master1_1      |   File "/usr/lib/python3/dist-packages/django/core/management/commands/migrate.py", line 232, in handle
master1_1      |     post_migrate_state = executor.migrate(
master1_1      |   File "/usr/lib/python3/dist-packages/django/db/migrations/executor.py", line 117, in migrate
master1_1      |     state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial)
master1_1      |   File "/usr/lib/python3/dist-packages/django/db/migrations/executor.py", line 147, in _migrate_all_forwards
master1_1      |     state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
master1_1      |   File "/usr/lib/python3/dist-packages/django/db/migrations/executor.py", line 245, in apply_migration
master1_1      |     state = migration.apply(state, schema_editor)
master1_1      |   File "/usr/lib/python3/dist-packages/django/db/migrations/migration.py", line 124, in apply
master1_1      |     operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
master1_1      |   File "/usr/lib/python3/dist-packages/django/db/migrations/operations/models.py", line 261, in database_forwards
master1_1      |     schema_editor.delete_model(model)
master1_1      |   File "/usr/lib/python3/dist-packages/django/db/backends/base/schema.py", line 325, in delete_model
master1_1      |     self.execute(self.sql_delete_table % {
master1_1      |   File "/usr/lib/python3/dist-packages/django/db/backends/base/schema.py", line 137, in execute
master1_1      |     cursor.execute(sql, params)
master1_1      |   File "/usr/lib/python3/dist-packages/django/db/backends/utils.py", line 67, in execute
master1_1      |     return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
master1_1      |   File "/usr/lib/python3/dist-packages/django/db/backends/utils.py", line 76, in _execute_with_wrappers
master1_1      |     return executor(sql, params, many, context)
master1_1      |   File "/usr/lib/python3/dist-packages/django/db/backends/utils.py", line 84, in _execute
master1_1      |     return self.cursor.execute(sql, params)
master1_1      |   File "/usr/lib/python3/dist-packages/django/db/utils.py", line 89, in __exit__
master1_1      |     raise dj_exc_value.with_traceback(traceback) from exc_value
master1_1      |   File "/usr/lib/python3/dist-packages/django/db/backends/utils.py", line 84, in _execute
master1_1      |     return self.cursor.execute(sql, params)
master1_1      | django.db.utils.ProgrammingError: table "lava_results_app_actiondata" does not exist
master1_1      | 
.
.
.

Steps to reproduce:

$./backup.sh
$cp ./backup-latest/* output/local/master/backup/
$cd output/local
$docker-compose build
$docker-compose down
$docker-compose up 

This problem is also confirmed by @montjoie at https://github.com/montjoie/lava-docker/actions/runs/4958288353/jobs/8870970915.

Thanks

Upgrade to 2022.03 fails because of postgresql error during `docker-compose up`

I cannot update from 2021.03 to 2022.03 because my master node (named lava-server) fails to boot with the following error message:

lava-server_1     | Starting postgresql
lava-server_1     | Starting PostgreSQL 13 database server: mainError: /var/lib/postgresql/13/main is not accessible or does not exist ... failed!

@montjoie reported to have had the same issue when persistent_db is set to True in the master node and removing it "fixes" the issue.

Updating with a persistent database is therefore currently not supported, as far as I understand.

RISC-V qemu cannot be executed

RISC-V QEMU can be executed with the following command in the lava-slave docker container:

$ qemu-system-riscv64 -M virt -m 256M -nographic -kernel inlinepath/Image -drive file=inlinepath/rootfs.img,format=raw,id=hd0  -device virtio-blk-device,drive=hd0 -append "root=/dev/vda rw console=ttyS0"

but it cannot be executed in a LAVA job, which fails with the message "machine type does not support if=ide bus=0 unit=0".
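In a LAVA job, the -drive arguments come from the job definition rather than a hand-typed command line, and the default drive interface does not match the virt machine. If your LAVA version supports per-image image_arg overrides in the qemu deploy action, a deploy fragment along these lines (paths are placeholders; check the qemu device-type template of your LAVA version) lets the job pass the virtio options explicitly:

```yaml
- deploy:
    to: tmpfs
    images:
      kernel:
        image_arg: -kernel {kernel}
      rootfs:
        image_arg: -drive file={rootfs},format=raw,id=hd0 -device virtio-blk-device,drive=hd0
```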


Building lab-slave-0 fails @ apt-get update

When I try the default build of the slave container I get the following error during apt-get update:

E: Could not open file /var/lib/apt/lists/deb.debian.org_debian_dists_stretch-backports_main_binary-amd64_Packages.diff_Index - open (2: No such file or directory)
ERROR: Service 'lab-slave-0' failed to build: The command '/bin/sh -c apt-get update' returned a non-zero code: 100

I do not know exactly why this fails, since the file is present on the FTP mirror; maybe the redirect fails? Anyway, I managed to fix it by adding this command to the slave Dockerfile, but it would probably be better to fix this in the base container.

RUN echo "deb https://deb.debian.org/debian/ stretch main contrib non-free \
  deb https://deb.debian.org/debian/ stretch-updates main contrib non-free \
  deb https://deb.debian.org/debian-security/ stretch/updates main contrib non-free \
  deb https://deb.debian.org/debian/ stretch-backports main contrib non-free" > /etc/apt/sources.list

Updates are then downloaded from https://cdn-aws.deb.debian.org/debian instead of http://deb.debian.org/debian.

Python2.7 -> Python3 for lava-slave

The update to the 2019.07 LAVA containers has dropped Python 2.7 support as a side effect. The scripts that still use Python 2.7 imports need to be changed, as the lava-slave container does not work at the moment.

I have found and fixed problems in start.sh and setdispatcherip.py. I can open a PR for this, if you want.

Can NFS service run normally in the slave docker container?

Hi, I have run into some trouble and need your help; I would be grateful for any assistance.
I am trying to run a job via TFTP and NFS, but the file system cannot be mounted via NFS.
It seems that the NFS daemon is not running:

root@lab-slave-1:/# service nfs-kernel-server start
Exporting directories for NFS kernel daemon....
Starting NFS kernel daemon: nfsd mountd.
root@lab-slave-1:/# service nfs-kernel-server status
nfsd not running
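Running nfs-kernel-server inside a container typically also requires the host kernel's nfsd support (e.g. modprobe nfsd on the host) and an adequately privileged container. A docker-compose override sketch, assuming the slave service is named lab-slave-0 (not verified against the compose file lava-docker generates):

```yaml
services:
  lab-slave-0:
    privileged: true
    volumes:
      - /lib/modules:/lib/modules:ro
```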

This is job log:

=> 0000 6/tftp-deploy-l2_u3wly/dtb/hifive-unmatched-a00.dtb
tftp 0x88000000 6/tftp-deploy-l2_u3wly/dtb/hifive-unmatched-a00.dtb
ethernet@10090000: PHY present at 0
ethernet@10090000: Starting autonegotiation...
sethernet@10090000: Autonegotiation complete
ethernet@10090000: link up, 100Mbps full-duplex (lpa: 0x4de1)
Using ethernet@10090000 device
TFTP from server 192.168.10.20; our IP address is 192.168.10.19
Filename '6/tftp-deploy-l2_u3wly/dtb/hifive-unmatched-a00.dtb'.
Load address: 0x88000000
Loading: *�#
	 1.1 MiB/s
done
Bytes transferred = 10525 (291d hex)
=> etenv bootargs 'console=ttySIF0,115200n8 root=/dev/nfs rw nfsroot=192.168.10.20:/var/lib/lava/dispatcher/tmp/6/extract-nfsrootfs-apzgyarh,tcp,hard  ip=dhcp'
setenv bootargs 'console=ttySIF0,115200n8 root=/dev/nfs rw nfsroot=192.168.10.20:/var/lib/lava/dispatcher/tmp/6/extract-nfsrootfs-apzgyarh,tcp,hard  ip=dhcp'
=> 
=> booti 0x84000000 - 0x88000000
booti 0x84000000 - 0x88000000
Moving Image from 0x84000000 to 0x80200000, end=81767000
## Flattened Device Tree blob at 88000000
   Booting using the fdt blob at 0x88000000
Working FDT set to 88000000
   Using Device Tree in place at 0000000088000000, end 000000008800591c
Working FDT set to 88000000
Starting kernel ...
end: 2.4.3 bootloader-commands (duration 00:07:39) [common]
start: 2.4.4 auto-login-action (timeout 00:02:06) [common]
Setting prompt string to ['Linux version [0-9]']
Setting prompt string to ['Linux version [0-9]', 'Resetting CPU', 'Must RESET board to recover', 'TIMEOUT', 'Retry count exceeded', 'Retry time exceeded; starting again', 'ERROR: The remote end did not respond in time.', 'File not found', 'Bad Linux ARM64 Image magic!', 'Wrong Ramdisk Image Format', 'Ramdisk image is corrupt or invalid', 'ERROR: Failed to allocate', 'TFTP error: trying to overwrite reserved memory', 'Bad Linux RISCV Image magic!', 'Wrong Image Format for boot', 'ERROR: Did not find a cmdline Flattened Device Tree', 'ERROR: RD image overlaps OS image']
auto-login-action: Wait for prompt ['Linux version [0-9]', 'Resetting CPU', 'Must RESET board to recover', 'TIMEOUT', 'Retry count exceeded', 'Retry time exceeded; starting again', 'ERROR: The remote end did not respond in time.', 'File not found', 'Bad Linux ARM64 Image magic!', 'Wrong Ramdisk Image Format', 'Ramdisk image is corrupt or invalid', 'ERROR: Failed to allocate', 'TFTP error: trying to overwrite reserved memory', 'Bad Linux RISCV Image magic!', 'Wrong Image Format for boot', 'ERROR: Did not find a cmdline Flattened Device Tree', 'ERROR: RD image overlaps OS image'] (timeout 00:10:00)
start: 2.4.4.1 login-action (timeout 00:02:03) [common]
The string 'root@openEuler-riscv64' does not look like a typical prompt and could match status messages instead. Please check the job log files and use a prompt string which matches the actual prompt string more closely.
Setting prompt string to ['-\\[ cut here \\]', 'Unhandled fault', 'BUG: KCSAN:', 'BUG: KASAN:', 'BUG: KFENCE:', 'Oops(?: -|:)', 'WARNING:', '(kernel BUG at|BUG:)', 'invalid opcode:', 'Kernel panic - not syncing']
Using line separator: #'\n'#
Waiting for the login prompt
Parsing kernel messages
-\[ cut here \],Unhandled fault,BUG: KCSAN:,BUG: KASAN:,BUG: KFENCE:,Oops(?: -|:),WARNING:,(kernel BUG at|BUG:),invalid opcode:,Kernel panic - not syncing,root@openEuler-riscv64,openEuler-riscv64 login:,Login incorrect
[login-action] Waiting for messages, (timeout 00:02:03)
[    0.000000] Linux version 6.3.0-g1a5304fecee5-dirty (runner@fv-az478-300) (riscv64-linux-gnu-gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0, GNU ld (GNU Binutils for Ubuntu) 2.38) #1 SMP Thu May  4 07:15:08 UTC 2023
[    0.000000] Machine model: SiFive HiFive Unmatched
[    0.000000] efi: UEFI not found.
[    0.000000] OF: reserved mem: 0x0000000080000000..0x000000008007ffff (512 KiB) map non-reusable mmode_resv0@80000000
[    0.000000] Zone ranges:
[    0.000000]   DMA32    [mem 0x0000000080000000-0x00000000ffffffff]
[    0.000000]   Normal   [mem 0x0000000100000000-0x000000047fffffff]
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x0000000080000000-0x000000047fffffff]
[    0.000000] Initmem setup node 0 [mem 0x0000000080000000-0x000000047fffffff]
[    0.000000] SBI specification v1.0 detected
[    0.000000] SBI implementation ID=0x1 Version=0x10002
[    0.000000] SBI TIME extension detected
[    0.000000] SBI IPI extension detected
[    0.000000] SBI RFENCE extension detected
[    0.000000] SBI SRST extension detected
[    0.000000] SBI HSM extension detected
[    0.000000] CPU with hartid=0 is not available
[    0.000000] CPU with hartid=0 is not available
[    0.000000] CPU with hartid=0 is not available
[    0.000000] CPU with hartid=0 is not available
[    0.000000] riscv: base ISA extensions acdfim
[    0.000000] riscv: ELF capabilities acdfim
[    0.000000] percpu: Embedded 19 pages/cpu s40504 r8192 d29128 u77824
[    0.000000] Kernel command line: console=ttySIF0,115200n8 root=/dev/nfs rw nfsroot=192.168.10.20:/var/lib/lava/dispatcher/tmp/6/extract-nfsrootfs-apzgyarh,tcp,hard  ip=dhcp
[    0.000000] Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
[    0.000000] Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
[    0.000000] Built 1 zonelists, mobility grouping on.  Total pages: 4128768
[    0.000000] mem auto-init: stack:off, heap alloc:off, heap free:off
[    0.000000] software IO TLB: area num 4.
[    0.000000] software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
[    0.000000] Virtual kernel memory layout:
[    0.000000]       fixmap : 0xffffffc6fea00000 - 0xffffffc6ff000000   (6144 kB)
[    0.000000]       pci io : 0xffffffc6ff000000 - 0xffffffc700000000   (  16 MB)
[    0.000000]      vmemmap : 0xffffffc700000000 - 0xffffffc800000000   (4096 MB)
[    0.000000]      vmalloc : 0xffffffc800000000 - 0xffffffd800000000   (  64 GB)
[    0.000000]      modules : 0xffffffff01567000 - 0xffffffff80000000   (2026 MB)
[    0.000000]       lowmem : 0xffffffd800000000 - 0xffffffdc00000000   (  16 GB)
[    0.000000]       kernel : 0xffffffff80000000 - 0xffffffffffffffff   (2047 MB)
[    0.000000] Memory: 16383848K/16777216K available (8185K kernel code, 4919K rwdata, 4096K rodata, 2182K init, 472K bss, 393368K reserved, 0K cma-reserved)
[    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
[    0.000000] rcu: Hierarchical RCU implementation.
[    0.000000] rcu: 	RCU restricting CPUs from NR_CPUS=64 to nr_cpu_ids=4.
[    0.000000] rcu: 	RCU debug extended QS entry/exit.
[    0.000000] 	Tracing variant of Tasks RCU enabled.
[    0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
[    0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
[    0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
[    0.000000] CPU with hartid=0 is not available
[    0.000000] riscv-intc: unable to find hart id for /cpus/cpu@0/interrupt-controller
[    0.000000] riscv-intc: 64 local interrupts mapped
[    0.000000] plic: interrupt-controller@c000000: mapped 69 interrupts with 4 handlers for 9 contexts.
[    0.000000] riscv: providing IPIs using SBI IPI extension
[    0.000000] rcu: srcu_init: Setting srcu_struct sizes based on contention.
[    0.000000] riscv-timer: riscv_timer_init_dt: Registering clocksource cpuid [0] hartid [3]
[    0.000000] clocksource: riscv_clocksource: mask: 0xffffffffffffffff max_cycles: 0x1d854df40, max_idle_ns: 3526361616960 ns
[    0.000002] sched_clock: 64 bits at 1000kHz, resolution 1000ns, wraps every 2199023255500ns
[    0.000223] Console: colour dummy device 80x25
[    0.000291] Calibrating delay loop (skipped), value calculated using timer frequency.. 2.00 BogoMIPS (lpj=4000)
[    0.000305] pid_max: default: 32768 minimum: 301
[    0.000409] LSM: initializing lsm=capability,integrity
[    0.001117] Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
[    0.001777] Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
[    0.004992] cblist_init_generic: Setting adjustable number of callback queues.
[    0.005005] cblist_init_generic: Setting shift to 2 and lim to 1.
[    0.005150] riscv: ELF compat mode unsupported
[    0.005158] ASID allocator disabled (0 bits)
[    0.005317] rcu: Hierarchical SRCU implementation.
[    0.005322] rcu: 	Max phase no-delay instances is 1000.
[    0.005717] EFI services will not be available.
[    0.006250] smp: Bringing up secondary CPUs ...
[    0.009214] smp: Brought up 1 node, 4 CPUs
[    0.015329] devtmpfs: initialized
[    0.018487] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
[    0.018508] futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
[    0.018892] pinctrl core: initialized pinctrl subsystem
[    0.019878] NET: Registered PF_NETLINK/PF_ROUTE protocol family
[    0.020407] DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
[    0.020558] DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
[    0.020610] audit: initializing netlink subsys (disabled)
[    0.020846] audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
[    0.021200] cpuidle: using governor menu
[    0.026835] HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
[    0.026844] HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
[    0.027512] iommu: Default domain type: Translated 
[    0.027518] iommu: DMA domain TLB invalidation policy: strict mode 
[    0.028073] SCSI subsystem initialized
[    0.028580] usbcore: registered new interface driver usbfs
[    0.028613] usbcore: registered new interface driver hub
[    0.028648] usbcore: registered new device driver usb
[    0.029828] vgaarb: loaded
[    0.029981] clocksource: Switched to clocksource riscv_clocksource
[    0.039719] NET: Registered PF_INET protocol family
[    0.046798] IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
[    0.067938] tcp_listen_portaddr_hash hash table entries: 8192 (order: 6, 262144 bytes, linear)
[    0.068947] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
[    0.069006] TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear)
[    0.073626] TCP bind hash table entries: 65536 (order: 10, 4194304 bytes, linear)
[    0.091244] TCP: Hash tables configured (established 131072 bind 65536)
[    0.091984] UDP hash table entries: 8192 (order: 7, 786432 bytes, linear)
[    0.095227] UDP-Lite hash table entries: 8192 (order: 7, 786432 bytes, linear)
[    0.098702] NET: Registered PF_UNIX/PF_LOCAL protocol family
[    0.099983] RPC: Registered named UNIX socket transport module.
[    0.099992] RPC: Registered udp transport module.
[    0.099996] RPC: Registered tcp transport module.
[    0.100000] RPC: Registered tcp NFSv4.1 backchannel transport module.
[    0.100019] PCI: CLS 0 bytes, default 64
[    0.101284] workingset: timestamp_bits=46 max_order=22 bucket_order=0
[    0.102166] NFS: Registering the id_resolver key type
[    0.102208] Key type id_resolver registered
[    0.102213] Key type id_legacy registered
[    0.102241] nfs4filelayout_init: NFSv4 File Layout Driver Registering...
[    0.102249] nfs4flexfilelayout_init: NFSv4 Flexfile Layout Driver Registering...
[    0.102450] 9p: Installing v9fs 9p2000 file system support
[    0.102794] NET: Registered PF_ALG protocol family
[    0.102867] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 247)
[    0.102880] io scheduler mq-deadline registered
[    0.102886] io scheduler kyber registered
[    0.102916] io scheduler bfq registered
[    0.104812] fu740-pcie e00000000.pcie: host bridge /soc/pcie@e00000000 ranges:
[    0.104858] fu740-pcie e00000000.pcie:       IO 0x0060080000..0x006008ffff -> 0x0060080000
[    0.104893] fu740-pcie e00000000.pcie:      MEM 0x0060090000..0x0070ffffff -> 0x0060090000
[    0.104908] fu740-pcie e00000000.pcie:      MEM 0x2000000000..0x3fffffffff -> 0x2000000000
[    0.210993] fu740-pcie e00000000.pcie: iATU: unroll T, 8 ob, 8 ib, align 4K, limit 4096G
[    0.311064] fu740-pcie e00000000.pcie: PCIe Gen.1 x8 link up
[    0.411088] fu740-pcie e00000000.pcie: PCIe Gen.3 x8 link up
[    0.411096] fu740-pcie e00000000.pcie: PCIe Gen.3 x8 link up
[    0.411302] fu740-pcie e00000000.pcie: PCI host bridge to bus 0000:00
[    0.411314] pci_bus 0000:00: root bus resource [bus 00-ff]
[    0.411327] pci_bus 0000:00: root bus resource [io  0x0000-0xffff] (bus address [0x60080000-0x6008ffff])
[    0.411334] pci_bus 0000:00: root bus resource [mem 0x60090000-0x70ffffff]
[    0.411342] pci_bus 0000:00: root bus resource [mem 0x2000000000-0x3fffffffff pref]
[    0.411385] pci 0000:00:00.0: [f15e:0000] type 01 class 0x060400
[    0.411403] pci 0000:00:00.0: reg 0x10: [mem 0x00000000-0x000fffff]
[    0.411415] pci 0000:00:00.0: reg 0x38: [mem 0x00000000-0x0000ffff pref]
[    0.411481] pci 0000:00:00.0: supports D1
[    0.411487] pci 0000:00:00.0: PME# supported from D0 D1 D3hot
[    0.412266] pci 0000:01:00.0: [1b21:2824] type 01 class 0x060400
[    0.412342] pci 0000:01:00.0: enabling Extended Tags
[    0.412483] pci 0000:01:00.0: PME# supported from D0 D3hot D3cold
[    0.422036] pci 0000:01:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[    0.422205] pci 0000:02:00.0: [1b21:2824] type 01 class 0x060400
[    0.422281] pci 0000:02:00.0: enabling Extended Tags
[    0.422407] pci 0000:02:00.0: PME# supported from D0 D3hot D3cold
[    0.422718] pci 0000:02:02.0: [1b21:2824] type 01 class 0x060400
[    0.422795] pci 0000:02:02.0: enabling Extended Tags
[    0.422920] pci 0000:02:02.0: PME# supported from D0 D3hot D3cold
[    0.423190] pci 0000:02:03.0: [1b21:2824] type 01 class 0x060400
[    0.423267] pci 0000:02:03.0: enabling Extended Tags
[    0.423391] pci 0000:02:03.0: PME# supported from D0 D3hot D3cold
[    0.423674] pci 0000:02:04.0: [1b21:2824] type 01 class 0x060400
[    0.423751] pci 0000:02:04.0: enabling Extended Tags
[    0.423876] pci 0000:02:04.0: PME# supported from D0 D3hot D3cold
[    0.424210] pci 0000:02:08.0: [1b21:2824] type 01 class 0x060400
[    0.424286] pci 0000:02:08.0: enabling Extended Tags
[    0.424410] pci 0000:02:08.0: PME# supported from D0 D3hot D3cold
[    0.425120] pci 0000:02:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[    0.425138] pci 0000:02:02.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[    0.425153] pci 0000:02:03.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[    0.425169] pci 0000:02:04.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[    0.425184] pci 0000:02:08.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[    0.425343] pci_bus 0000:03: busn_res: [bus 03-ff] end is updated to 03
[    0.425510] pci 0000:04:00.0: [1b21:1142] type 00 class 0x0c0330
[    0.425557] pci 0000:04:00.0: reg 0x10: [mem 0x00000000-0x00007fff 64bit]
[    0.425758] pci 0000:04:00.0: PME# supported from D3cold
[    0.438019] pci_bus 0000:04: busn_res: [bus 04-ff] end is updated to 04
[    0.438180] pci_bus 0000:05: busn_res: [bus 05-ff] end is updated to 05
[    0.438338] pci_bus 0000:06: busn_res: [bus 06-ff] end is updated to 06
[    0.438517] pci 0000:07:00.0: [1002:68f9] type 00 class 0x030000
[    0.438562] pci 0000:07:00.0: reg 0x10: [mem 0x00000000-0x0fffffff 64bit pref]
[    0.438591] pci 0000:07:00.0: reg 0x18: [mem 0x00000000-0x0001ffff 64bit]
[    0.438609] pci 0000:07:00.0: reg 0x20: initial BAR value 0x00000000 invalid
[    0.438615] pci 0000:07:00.0: reg 0x20: [io  size 0x0100]
[    0.438645] pci 0000:07:00.0: reg 0x30: [mem 0x00000000-0x0001ffff pref]
[    0.438666] pci 0000:07:00.0: enabling Extended Tags
[    0.438783] pci 0000:07:00.0: supports D1 D2
[    0.438834] pci 0000:07:00.0: 16.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x8 link at 0000:02:08.0 (capable of 32.000 Gb/s with 2.5 GT/s PCIe x16 link)
[    0.438997] pci 0000:07:00.0: vgaarb: setting as boot VGA device
[    0.439005] pci 0000:07:00.0: vgaarb: bridge control possible
[    0.439010] pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none
[    0.439082] pci 0000:07:00.1: [1002:aa68] type 00 class 0x040300
[    0.439126] pci 0000:07:00.1: reg 0x10: [mem 0x00000000-0x00003fff 64bit]
[    0.439206] pci 0000:07:00.1: enabling Extended Tags
[    0.439323] pci 0000:07:00.1: supports D1 D2
[    0.450032] pci_bus 0000:07: busn_res: [bus 07-ff] end is updated to 07
[    0.450048] pci_bus 0000:02: busn_res: [bus 02-ff] end is updated to 07
[    0.450060] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 07
[    0.450105] pci 0000:00:00.0: BAR 9: assigned [mem 0x2000000000-0x200fffffff 64bit pref]
[    0.450115] pci 0000:00:00.0: BAR 0: assigned [mem 0x60100000-0x601fffff]
[    0.450124] pci 0000:00:00.0: BAR 8: assigned [mem 0x60200000-0x603fffff]
[    0.450132] pci 0000:00:00.0: BAR 6: assigned [mem 0x60090000-0x6009ffff pref]
[    0.450140] pci 0000:00:00.0: BAR 7: assigned [io  0x1000-0x1fff]
[    0.450151] pci 0000:01:00.0: BAR 9: assigned [mem 0x2000000000-0x200fffffff 64bit pref]
[    0.450160] pci 0000:01:00.0: BAR 8: assigned [mem 0x60200000-0x603fffff]
[    0.450167] pci 0000:01:00.0: BAR 7: assigned [io  0x1000-0x1fff]
[    0.450179] pci 0000:02:08.0: BAR 9: assigned [mem 0x2000000000-0x200fffffff 64bit pref]
[    0.450188] pci 0000:02:02.0: BAR 8: assigned [mem 0x60200000-0x602fffff]
[    0.450195] pci 0000:02:08.0: BAR 8: assigned [mem 0x60300000-0x603fffff]
[    0.450202] pci 0000:02:08.0: BAR 7: assigned [io  0x1000-0x1fff]
[    0.450211] pci 0000:02:00.0: PCI bridge to [bus 03]
[    0.450237] pci 0000:04:00.0: BAR 0: assigned [mem 0x60200000-0x60207fff 64bit]
[    0.450263] pci 0000:02:02.0: PCI bridge to [bus 04]
[    0.450274] pci 0000:02:02.0:   bridge window [mem 0x60200000-0x602fffff]
[    0.450291] pci 0000:02:03.0: PCI bridge to [bus 05]
[    0.450312] pci 0000:02:04.0: PCI bridge to [bus 06]
[    0.450338] pci 0000:07:00.0: BAR 0: assigned [mem 0x2000000000-0x200fffffff 64bit pref]
[    0.450363] pci 0000:07:00.0: BAR 2: assigned [mem 0x60300000-0x6031ffff 64bit]
[    0.450388] pci 0000:07:00.0: BAR 6: assigned [mem 0x60320000-0x6033ffff pref]
[    0.450397] pci 0000:07:00.1: BAR 0: assigned [mem 0x60340000-0x60343fff 64bit]
[    0.450421] pci 0000:07:00.0: BAR 4: assigned [io  0x1000-0x10ff]
[    0.450434] pci 0000:02:08.0: PCI bridge to [bus 07]
[    0.450442] pci 0000:02:08.0:   bridge window [io  0x1000-0x1fff]
[    0.450453] pci 0000:02:08.0:   bridge window [mem 0x60300000-0x603fffff]
[    0.450463] pci 0000:02:08.0:   bridge window [mem 0x2000000000-0x200fffffff 64bit pref]
[    0.450476] pci 0000:01:00.0: PCI bridge to [bus 02-07]
[    0.450483] pci 0000:01:00.0:   bridge window [io  0x1000-0x1fff]
[    0.450494] pci 0000:01:00.0:   bridge window [mem 0x60200000-0x603fffff]
[    0.450504] pci 0000:01:00.0:   bridge window [mem 0x2000000000-0x200fffffff 64bit pref]
[    0.450517] pci 0000:00:00.0: PCI bridge to [bus 01-07]
[    0.450523] pci 0000:00:00.0:   bridge window [io  0x1000-0x1fff]
[    0.450530] pci 0000:00:00.0:   bridge window [mem 0x60200000-0x603fffff]
[    0.450536] pci 0000:00:00.0:   bridge window [mem 0x2000000000-0x200fffffff 64bit pref]
[    0.450876] pcieport 0000:00:00.0: PME: Signaling with IRQ 30
[    0.450998] pcieport 0000:01:00.0: enabling device (0000 -> 0003)
[    0.451498] pcieport 0000:02:02.0: enabling device (0000 -> 0002)
[    0.452438] pcieport 0000:02:08.0: enabling device (0000 -> 0003)
[    0.452725] pci 0000:04:00.0: enabling device (0000 -> 0002)
[    0.452877] pci 0000:07:00.1: D0 power state depends on 0000:07:00.0
[    0.510791] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
[    0.512073] SuperH (H)SCI(F) driver initialized
[    0.512458] 10010000.serial: ttySIF0 at MMIO 0x10010000 (irq = 38, base_baud = 8125000) is a SiFive UART v0
[    0.512497] printk: console [ttySIF0] enabled
[    1.977868] 10011000.serial: ttySIF1 at MMIO 0x10011000 (irq = 39, base_baud = 8125000) is a SiFive UART v0
[    1.997135] loop: module loaded
[    2.000285] sifive_spi 10040000.spi: mapped; irq=40, cs=1
[    2.005699] sifive_spi 10050000.spi: mapped; irq=41, cs=1
[    2.012206] macb 10090000.ethernet: Registered clk switch 'sifive-gemgxl-mgmt'
[    2.022850] macb 10090000.ethernet eth0: Cadence GEM rev 0x10070109 at 0x10090000 irq 42 (70:b3:d5:92:f9:93)
[    2.032048] e1000e: Intel(R) PRO/1000 Network Driver
[    2.036867] e1000e: Copyright(c) 1999 - 2015 Intel Corporation.
[    2.043279] xhci_hcd 0000:04:00.0: xHCI Host Controller
[    2.048006] xhci_hcd 0000:04:00.0: new USB bus registered, assigned bus number 1
[    2.196112] xhci_hcd 0000:04:00.0: hcc params 0x0200e080 hci version 0x100 quirks 0x0000000010800410
[    2.205229] xhci_hcd 0000:04:00.0: xHCI Host Controller
[    2.209706] xhci_hcd 0000:04:00.0: new USB bus registered, assigned bus number 2
[    2.217089] xhci_hcd 0000:04:00.0: Host supports USB 3.0 SuperSpeed
[    2.224885] hub 1-0:1.0: USB hub found
[    2.227929] hub 1-0:1.0: 2 ports detected
[    2.232284] usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
[    2.240424] hub 2-0:1.0: USB hub found
[    2.243706] hub 2-0:1.0: 2 ports detected
[    2.248460] usbcore: registered new interface driver uas
[    2.253087] usbcore: registered new interface driver usb-storage
[    2.259174] mousedev: PS/2 mouse device common for all mice
[    2.265064] sdhci: Secure Digital Host Controller Interface driver
[    2.270724] sdhci: Copyright(c) Pierre Ossman
[    2.300448] mmc_spi spi1.0: SD/MMC host mmc0, no DMA, no WP, no poweroff, cd polling
[    2.307487] sdhci-pltfm: SDHCI platform and OF driver helper
[    2.313251] usbcore: registered new interface driver usbhid
[    2.318632] usbhid: USB HID core driver
[    2.322569] riscv-pmu-sbi: SBI PMU extension is available
[    2.327863] riscv-pmu-sbi: 16 firmware and 4 hardware counters
[    2.333656] riscv-pmu-sbi: Perf sampling/filtering is not supported as sscof extension is not available
[    2.343720] NET: Registered PF_INET6 protocol family
[    2.349387] Segment Routing with IPv6
[    2.352348] In-situ OAM (IOAM) with IPv6
[    2.356275] sit: IPv6, IPv4 and MPLS over IPv4 tunneling driver
[    2.362719] NET: Registered PF_PACKET protocol family
[    2.367283] 9pnet: Installing 9P2000 support
[    2.371459] Key type dns_resolver registered
[    2.393091] debug_vm_pgtable: [debug_vm_pgtable         ]: Validating architecture page table helpers
[    2.459882] mmc0: host does not support reading read-only switch, assuming write-enable
[    2.467135] mmc0: new SDHC card on SPI
[    2.472424] mmcblk0: mmc0:0000 SD32G 29.7 GiB 
[    2.505772] macb 10090000.ethernet eth0: PHY [10090000.ethernet-ffffffff:00] driver [Microsemi VSC8541 SyncE] (irq=POLL)
[    2.508092] GPT:Primary header thinks Alt. header is not at the end of the disk.
[    2.515891] macb 10090000.ethernet eth0: configuring for phy/gmii link mode
[    2.523268] GPT:40959 != 62333951
[    2.533505] GPT:Alternate GPT header not at the end of the disk.
[    2.539518] GPT:40959 != 62333951
[    2.542809] GPT: Use GNU Parted to correct GPT errors.
[    2.547964]  mmcblk0: p1 p2 p3
[    2.609994] usb 1-2: new high-speed USB device number 2 using xhci_hcd
[    3.133197] hub 1-2:1.0: USB hub found
[    3.136359] hub 1-2:1.0: 4 ports detected
[    3.224891] usb 2-2: new SuperSpeed USB device number 2 using xhci_hcd
[    3.326653] hub 2-2:1.0: USB hub found
[    3.330352] hub 2-2:1.0: 4 ports detected
[    4.575721] macb 10090000.ethernet eth0: Link is Up - 100Mbps/Full - flow control tx
[    4.582727] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[    4.605980] Sending DHCP requests ., OK
[    4.625072] IP-Config: Got DHCP answer from 192.168.10.1, my address is 192.168.10.19
[    4.632880] IP-Config: Complete:
[    4.636078]      device=eth0, hwaddr=70:b3:d5:92:f9:93, ipaddr=192.168.10.19, mask=255.255.255.0, gw=192.168.10.1
[    4.646330]      host=192.168.10.19, domain=, nis-domain=(none)
[    4.652233]      bootserver=0.0.0.0, rootserver=192.168.10.20, rootpath=
[    4.652241]      nameserver0=192.168.1.1, nameserver1=192.168.10.1
[    4.665260] clk: Disabling unused clocks
[  103.390885] VFS: Unable to mount root fs via NFS.
[  103.394889] devtmpfs: mounted
[  103.405427] Freeing unused kernel image (initmem) memory: 2180K
[  103.418031] Run /sbin/init as init process
[  103.421782] Run /etc/init as init process
[  103.425470] Run /bin/init as init process
[  103.429450] Run /bin/sh as init process
Matched prompt #9: Kernel panic - not syncing
Setting prompt string to ['end Kernel panic[^\\r]*\\r', 'root@openEuler-riscv64', 'openEuler-riscv64 login:', 'Login incorrect']
[  103.433269] Kernel panic - not syncing: No working init found.  Try passing init= option to kernel. See Linux Documentation/admin-guide/init.rst for guidance.
[  103.447318] CPU: 2 PID: 1 Comm: swapper/0 Not tainted 6.3.0-g1a5304fecee5-dirty #1
[  103.454873] Hardware name: SiFive HiFive Unmatched (DT)
[  103.460085] Call Trace:
[  103.462518] [<ffffffff80005368>] dump_backtrace+0x1c/0x24
[  103.467901] [<ffffffff807ecc54>] show_stack+0x2c/0x38
[  103.472939] [<ffffffff807f778a>] dump_stack_lvl+0x3c/0x54
[  103.478325] [<ffffffff807f77b6>] dump_stack+0x14/0x1c
[  103.483362] [<ffffffff807ecf2a>] panic+0x102/0x29e
[  103.488139] [<ffffffff807f8f5e>] _cpu_down.constprop.0+0x0/0x40c
[  103.494133] [<ffffffff800036c6>] ret_from_fork+0xa/0x1c
[  103.499345] SMP: stopping secondary CPUs
[  103.503268] ---[ end Kernel panic - not syncing: No working init found.  Try passing init= option to kernel. See Linux Documentation/admin-guide/init.rst for guidance. ]---
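Reading the log back, the IP-Config lines show `rootpath=` empty: the kernel got an address and a root server via DHCP, but no NFS export path, so the root mount fails ("VFS: Unable to mount root fs via NFS") and the panic follows. A hedged sketch of the command-line parameters an NFS root normally needs — the server address is taken from the log above, the export path `/srv/nfs/rootfs` is a placeholder:

```
# Kernel command line for an NFS rootfs (nfsroot path is an assumption):
root=/dev/nfs rw nfsroot=192.168.10.20:/srv/nfs/rootfs,v3,tcp ip=dhcp
```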

Can the NFS service run normally in the slave docker container?

ImportError: No module named 'yaml'

This may be a documentation bug.

I'm not sure:

kuzetsa@da7728a3 ~/kernelci/lava-docker $ pip show pyyaml
Name: PyYAML
Version: 3.13
Summary: YAML parser and emitter for Python
Home-page: http://pyyaml.org/wiki/PyYAML
Author: Kirill Simonov
Author-email: [email protected]
License: MIT
Location: /home/kuzetsa/.local/lib/python2.7/site-packages
Requires: 

pyyaml-3.13 is installed, and yet:

kuzetsa@da7728a3 ~/kernelci/lava-docker $ ./lavalab-gen.py 
Traceback (most recent call last):
  File "./lavalab-gen.py", line 6, in <module>
    import yaml
ImportError: No module named 'yaml'

Host da7728a3 is running a recent install of Ubuntu 18.04 LTS.

The PyYAML version which was installed via pip:

https://files.pythonhosted.org/packages/9e/a3/1d13970c3f36777c583f136c136f804d70f500168edc1edea6daa7200769/PyYAML-3.13.tar.gz (270kB)


updated to add:

kuzetsa@da7728a3 ~ $ env python --version
Python 3.5.5

kuzetsa@da7728a3 ~ $ pip --version
pip 9.0.1 from /usr/lib/python2.7/dist-packages (python 2.7)

After installing the python3 version:

sudo apt-get install python3-pip && pip3 install pyyaml && pip3 show pyyaml

Name: PyYAML
Version: 3.13
Summary: YAML parser and emitter for Python
Home-page: http://pyyaml.org/wiki/PyYAML
Author: Kirill Simonov
Author-email: [email protected]
License: MIT
Location: /home/kuzetsa/.local/lib/python3.6/site-packages
Requires: 

... the error persists
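The `Location:` lines above point at the likely cause: pip installed PyYAML into the python2.7 site-packages, while `./lavalab-gen.py` runs under python3. A small diagnostic sketch (not part of lava-docker) that asks the python3 interpreter itself whether it can see the module, and uses `python3 -m pip` so any install lands in the same interpreter:

```shell
# Check yaml visibility from the interpreter that runs the script.
python3 - <<'EOF'
import sys
print("interpreter:", sys.executable)
try:
    import yaml
    print("pyyaml OK:", yaml.__version__)
except ImportError:
    print("pyyaml missing; run:", sys.executable, "-m pip install --user pyyaml")
EOF
```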

Quickstart flow doesn't work 100%

Trying out the quickstart flow: after bringing the containers up, going to http://localhost:10080 gives me a 400 error every time (the same happens if I try to access it from another machine). However, going to http://127.0.0.1:10080 works just fine. This is on a fresh installation of Ubuntu 18.04 with no settings tweaked at all.

Am I missing some obvious setting needed for this to work, or is it some bad default in my OS install?
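One way to narrow this down without guessing (a diagnostic sketch; the port comes from the report above): hit the same TCP endpoint twice, varying only the Host header. If only the `localhost` variant returns 400, the server is rejecting the host name rather than the connection, which points at the web front end's allowed-hosts configuration rather than networking.

```shell
# Same endpoint, two Host headers; compare the status codes.
curl -s -o /dev/null -w 'Host 127.0.0.1 -> %{http_code}\n' \
  -H 'Host: 127.0.0.1' http://127.0.0.1:10080/
curl -s -o /dev/null -w 'Host localhost -> %{http_code}\n' \
  -H 'Host: localhost' http://127.0.0.1:10080/
```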

Is there a way to post job in lava-docker to kernelci-docker?

I defined a job in lava-docker as follows:

metadata:
  device.type: qemu_x86_64
  git.branch: linux-5.4.y
  git.commit: f015b86259a520ad886523d9ec6fdb0ed80edc38
  job.arch: x86_64
  job.build_environment: gcc-8
  kernel.defconfig: x86_64_defconfig
  kernel.defconfig_full: x86_64_defconfig
  kernel.tree: stable
  test.plan: baseline
  git.describe: v5.4.40
  git.url: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  image.url: https://kgithub.com/montjoie/lava-healthchecks-binary/blob/master/stable/linux-5.4.y/v5.4.40/x86_64/x86_64_defconfig/gcc-8/
  job.dtb_url: None
  job.file_server_resource: stable/linux-5.4.y/v5.4.40/x86_64/x86_64_defconfig/gcc-8
  job.initrd_url: https://kgithub.com/montjoie/lava-healthchecks-binary/blob/master/images/rootfs/buildroot/kci-2019.02-9-g25091c539382/x86/baseline/rootfs.cpio.gz?raw=true
  job.kernel_image: bzImage
  job.kernel_url: https://kgithub.com/montjoie/lava-healthchecks-binary/blob/master/stable/linux-5.4.y/v5.4.40/x86_64/x86_64_defconfig/gcc-8/bzImage?raw=true
  job.modules_url: https://kgithub.com/montjoie/lava-healthchecks-binary/blob/master/stable/linux-5.4.y/v5.4.40/x86_64/x86_64_defconfig/gcc-8/modules.tar.xz?raw=true
  job.name: stable-linux-5.4.y-v5.4.40-x86_64-x86_64_defconfig-gcc-8-no-dtb-qemu_x86_64-baseline_qemu
  job.nfsrootfs_url: None
  job.original: None
  kernel.arch_defconfig: x86_64-x86_64_defconfig
  kernel.endian: little
  kernel.version: v5.4.40
  platform.dtb: None
  platform.dtb_short: None
  platform.fastboot: false
  platform.mach: qemu
  test.plan_variant: baseline_qemu

notify:
  criteria:
    status: complete
  callbacks:
  - url: http://192.168.122.208:8000/api/v0.2/jobs/{LAVA_JOB_ID}/tests/?format=json
    method: GET
  - url: http://192.168.122.208:8081/callback/lava/test?lab_name=lab-01&status={STATUS}&status_string={STATUS_STRING}
    method: POST
    dataset: all
    token: kernelci-token
    content-type: json

device_type: qemu

context:
  arch: x86_64
  cpu: qemu64
  guestfs_interface: ide
  extra_kernel_args: "no_timer_check"

job_name: "lava-kernelci(kgithub) complete demo"
timeouts:
  job:
    minutes: 20
  action:
    minutes: 18
  actions:
    power-off:
      seconds: 30
priority: medium
visibility: public

actions:
- deploy:
    timeout:
      minutes: 10
    to: tmpfs
    os: oe
    images:
      kernel:
        image_arg: '-kernel {kernel} -append "console=ttyS0,115200 root=/dev/ram0 debug verbose console_msg_format=syslog"'
        url: http://192.168.122.208:9900/bzImage
      ramdisk:
        image_arg: '-initrd {ramdisk}'
        url: http://192.168.122.208:9900/rootfs.cpio.gz

- boot:
    timeout:
      minutes: 5
    method: qemu
    media: tmpfs
    prompts:
    - '/ #'

- test:
    timeout:
      minutes: 5
    definitions:
    - repository:
        metadata:
          format: Lava-Test Test Definition 1.0
          name: baseline
          description: "baseline test plan"
          os:
          - debian
          scope:
          - functional
          environment:
          - lava-test-shell
        run:
          steps:
          - >
            for level in warn err; do
            dmesg --level=$level --notime -x -k > dmesg.$level;
            done
          - >
            for level in crit alert emerg; do
            dmesg --level=$level --notime -x -k > dmesg.$level;
            test -s dmesg.$level && res=fail || res=pass;
            count=$(cat dmesg.$level | wc -l);
            lava-test-case $level
            --result $res
            --measurement $count
            --units lines;
            done
          - cat dmesg.emerg dmesg.alert dmesg.crit dmesg.err dmesg.warn
      from: inline
      name: dmesg
      path: inline/dmesg.yaml

- test:
    timeout:
      minutes: 5
    definitions:
    - repository:
        metadata:
          format: Lava-Test Test Definition 1.0
          name: baseline
          description: "baseline test plan"
          os:
          - debian
          scope:
          - functional
          environment:
          - lava-test-shell
        run:
          steps:
          - export PATH=/opt/bootrr/helpers:$PATH
          - cd /opt/bootrr && sh helpers/bootrr-auto
      lava-signal: kmsg
      from: inline
      name: bootrr
      path: inline/bootrr.yaml
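A definition like the one above is usually submitted with lavacli; a minimal sketch, assuming the job is saved as `job.yaml` (a hypothetical local file) and a lavacli identity has already been configured with `lavacli identities add`. Validating the YAML locally first catches indentation mistakes before the scheduler sees them:

```shell
# Check that the definition parses as YAML, then submit it.
python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1])); print("job.yaml parses")' job.yaml
lavacli jobs submit job.yaml    # requires a configured lavacli identity
```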

Its test results can be posted to kernelci-docker and displayed in kernelci-frontend.

But kernelci-frontend will not list the job itself.

Is there any way to do that?

Is NFS supported in the slave docker image?

I am trying to run a test job using NFS: https://lava.ciplatform.org/scheduler/job/59 . The file system isn't mounting, though.
Having a look in the docker container:

  1. It doesn't seem that the NFS daemon is running:
root@lab-cip-renesas:/# service nfs-kernel-server status
nfsd not running
  2. /var/lib/lava/dispatcher/tmp does not support NFS export:
root@lab-cip-renesas:/# service nfs-kernel-server start
mount: permission denied
Exporting directories for NFS kernel daemon...exportfs: /var/lib/lava/dispatcher/tmp
 failed!

Has NFS support been added to the slave docker image?
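Two quick checks (generic Linux diagnostics, not lava-docker specifics) help separate the failure modes seen above: the host kernel must provide nfsd, and the nfsd filesystem must be mountable inside the container, which normally requires running the slave container privileged. The `mount: permission denied` in the output is the classic symptom of an unprivileged container.

```shell
# Run inside the slave container.
grep -q nfsd /proc/filesystems \
  && echo "kernel provides nfsd" \
  || echo "no nfsd support in the host kernel"
[ -e /proc/fs/nfsd/exports ] \
  && echo "nfsd filesystem is mounted" \
  || echo "nfsd filesystem not mounted (container likely not privileged)"
```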
