rapido's People

Contributors

aaptel, ddiss, dmulder, frankenmichl, igaw, luis-henrix, morbidrsa, mwilck, obnoxxx, pevik, scabrero, werkov

rapido's Issues

remove hardcoded library paths

Rapido cut_ scripts that install binaries with dlopen()'ed runtime library dependencies need to explicitly direct Dracut to install those dependencies. This is currently done using an absolute path (based on the openSUSE install path), which works on SUSE based platforms but fails on others (Debian, etc.).

Instead of using an absolute path, dlopen()'ed library dependencies should be looked up based on the /etc/ld.so.conf configuration. This could potentially be done with ldconfig -p.
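
A minimal sketch of such a lookup, assuming the cut_ script knows the library name it needs (libfoo.so.0 here is purely illustrative):

# resolve a dlopen()'ed dependency via the ldconfig cache instead of a
# hardcoded distro path; the library name is only an example
lib_path="$(ldconfig -p | awk '$1 == "libfoo.so.0" { print $NF; exit }')"
[ -n "$lib_path" ] || _fail "libfoo.so.0 not found in ldconfig cache"
# then append "$lib_path" to the dracut --install list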

Dracut throws an error for builtin modules

I recently upgraded to dracut-049.1 and found that the samba-local rapido image no longer works. This appears to be due to Dracut no longer gracefully handling --add-drivers entries for builtin kernel modules. I think this should probably be an upstream Dracut report, but I'd like to do some debugging here first.

Common function to create /etc/hosts file

Several autorun scripts currently create /etc/hosts themselves. Add a common function for doing that. While at it, create the file following the recommendations from hosts(5) (a minimal sketch follows the list), namely:

  • use 127.0.0.1 for localhost
  • use 127.0.1.1 for the FQDN
  • add IPv6 entries
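
A minimal sketch of such a helper, assuming the caller can pass the FQDN and short hostname (the function name and its parameters are illustrative):

_rt_create_etc_hosts() {
	local fqdn="$1"
	local hostname="$2"

	# hosts(5) recommended layout: localhost, FQDN on 127.0.1.1, IPv6 entries
	cat > /etc/hosts <<EOF
127.0.0.1	localhost
127.0.1.1	$fqdn $hostname
::1		localhost ip6-localhost ip6-loopback
ff02::1		ip6-allnodes
ff02::2		ip6-allrouters
EOF
}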

use type -P instead of which

As raised by Dave Chinner on the fstests list:

https://lore.kernel.org/fstests/[email protected]/T/#u

The latest debian unstable release is now causing a bunch of new
test failures because they have deprecated the which command.

Rather than make everyone jump through hoops chasing problems with
'which' while debian decides how to package at least three
variants of 'which' - each with different semantics, behaviour
and support - as alternatives users then have to opt into, let's
just remove the remaining uses of the shell independent 'which'
command and replace them with bash builtin 'type -P' operations.

I wasn't aware of the builtin, but it looks like something that we should also do.
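
For reference, the substitution is mechanical; mkfs.xfs below is just an example binary:

# external 'which' command - behaviour now varies across distros
mkfs_bin="$(which mkfs.xfs)"
# bash builtin equivalent, no external dependency
mkfs_bin="$(type -P mkfs.xfs)" || _fail "mkfs.xfs not found"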

trap scripts called multiple times

We have a whole bunch of trap "$script" 0 1 2 3 15 callers, where $script is unintentionally invoked multiple times. Either the trap handler should be cleared, or the signal list should be trimmed to e.g. only 0, so that a SIGINT invocation won't be followed by a second for EXIT.
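
A minimal sketch of the first option (clearing the handler before it runs), not the final fix:

_script_trap() {
	# clear the handler first, so the EXIT that follows a SIGINT
	# doesn't invoke "$script" a second time
	trap - 0 1 2 3 15
	"$script"
}
trap _script_trap 0 1 2 3 15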

Add support for tcmu-runner + rbd testing

open-iscsi/tcmu-runner#78 added an RBD backend for tcmu-runner, which used librbd to translate SCSI I/O requests into Ceph OSD requests.
I'm working on a Rapido VM image generator and autorun script that allows for instant testing of these features, using LIO's loopback and iSCSI transport modules.

Provide configs for other archs

All kernel configs are for x86_64 only. I sometimes need configs for other architectures, so it'd be great to have more.

nit: I'd rename kernel directory to configs.

Use common set of applications/libraries in cut_* scripts

Instead of duplicating a huge number of applications in the 'dracut --install' list (e.g. ps, rmdir, resize, dd, ...), each cut_ script could simply add the applications/libs/... required for that specific script.
In the long run, I guess a single cut_ script would suffice, with different configuration files simply setting specific sets of applications, libs, kernel modules, ...

ceph-conf mon host address parsing is broken

With a ceph octopus vstart cluster deployment, I see the following monitor host config:

> ./bin/ceph-conf -c ceph.conf -s "global"  "mon host"
...
[v2:192.168.124.1:40741,v1:192.168.124.1:40742]

However, _rt_write_ceph_config() appears to fail when parsing this output:

> git log -1 --pretty=oneline 
0cd4237de03b7cdfc3852eb2e0f79d83aaa287a9 (HEAD -> master, origin/master, origin/HEAD) Merge pull request #128 from ddiss/zram_hot_add_one_dev
> ./rapido cut -B cephfs
...
dracut: *** Creating initramfs image file 'rapido/initrds/myinitrd' done ***
> lsinitrd -f /vm_ceph.env initrds/myinitrd 
...
CEPH_MON_ADDRESS_V1=2

I'm able to work around this with the following change:

diff --git a/runtime.vars b/runtime.vars
index a59daf8..1d9df53 100644
--- a/runtime.vars
+++ b/runtime.vars
@@ -113,7 +113,7 @@ _rt_write_ceph_config() {
                value=($(_ceph_get_conf "mon.${CEPH_MON_NAME}" "mon addr"))
        fi
        if [ -z "$value" ]; then
-               value=($(_ceph_get_conf "global" "mon host"))
+               value=$(_ceph_get_conf "global" "mon host")
        fi
        [ -n "$value" ] || _fail "Can't find mon address"
        # get both msgr v1 and v2 monitor addresses
> ./rapido cut -B cephfs
...
dracut: *** Creating initramfs image file 'rapido/initrds/myinitrd' done ***
> lsinitrd -f /vm_ceph.env initrds/myinitrd 
...
CEPH_MON_ADDRESS_V1=192.168.124.1:40742
CEPH_MON_ADDRESS_V2=192.168.124.1:40741

Strange thing is, I'm certain that this used to work. I'm guessing something might have changed in the ceph output, but I don't know. @luis-henrix have you seen this issue?

fstests: use attached QEMU block devices for scratch/test if provided

It'd be helpful to be able to use the fstests-X runners against block devices aside from just zram.
Unlike lio-local, which uses /dev/vd[ab] if present, I think it'd be cleaner and safer to filter based on device serial, i.e. if a /sys/block/*/serial starts with fstests then it'll be used as a test or scratch device (sketched below).
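
A sketch of what the guest-side detection could look like (the exact serial prefix and array name are assumptions):

# pick up QEMU-attached block devices whose serial starts with "fstests"
fstests_devs=()
for serial_file in /sys/block/*/serial; do
	[ -f "$serial_file" ] || continue
	[[ "$(cat "$serial_file")" == fstests* ]] || continue
	fstests_devs+=( "/dev/$(basename "$(dirname "$serial_file")")" )
done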

setup br0 failure

CentOS 7 reports this issue:

 $ ip link add br0 type bridge;ip addr add 192.168.155.1/24 dev br0;ip tuntap add dev tap0 
Command line is not complete. Try option "help"

The error is from the last command.
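
The ip tuntap subcommand in CentOS 7's iproute2 appears to require an explicit mode argument; assuming a tap device is intended (rapido uses tap interfaces), the complete command would be:

ip tuntap add dev tap0 mode tap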

fstests: support tests calling "su"

xfstests has a bunch of tests which run commands as other users via su -c ..., for quotas, etc. su calls currently fail on fstests images due to lack of the binary and pam configuration. I hope we can address this issue without pulling in all of the pam bells and whistles.

add hooks for auto-saving test results on shared folder

Some autorun scripts, like the fstests and blktests family of autorun scripts, produce a folder with test results.

If a shared folder is configured, sync these results back to the host for later inspection before shutdown.

This could be done by hooking into the shutdown function that is implemented in rapido.

Support out-of-source kernel builds

Disclaimer: the out-of-source builds work just fine for me in the recent version (e0d44a2).

It is only slightly misleading that I set KERNEL_SRC=$BUILD_DIR.
From quickly skimming the rapido cut scripts, KERNEL_SRC is used for build artifacts most of the time, but some cases would need review.

Proposal: introduce a new config variable pointing explicitly to kernel build artifacts.

  • In-source builds (nothing changes, user configures a single path)
    • KERNEL_BUILD=KERNEL_SRC
  • Out-of-source builds (review if separate path is necessary)
    • KERNEL_BUILD=
    • KERNEL_SRC= -- is it needed?

add shortcut to boot test with one command

Currently to run a test the user has to run both a 'rapido cut' and a 'rapido boot' command.

Let's create a shortcut that combines both. This is useful for single VM tests like fstests on a local file-system.

Example:
For my BTRFS smoke testing I have to do:

 ./rapido cut fstests-btrfs && ./rapido boot

Proposal:

  ./rapido run fstests-btrfs

Optionally, a -s or -g command line parameter for starting a gdb server even if it's not specified in rapido.conf would be good.
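
Functionally, run would just chain the two existing subcommands, e.g. (a sketch of the intended behaviour, not the actual implementation):

# hypothetical "./rapido run fstests-btrfs" behaviour
cut_name="fstests-btrfs"
./rapido cut "$cut_name" && exec ./rapido boot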

future: remove /vm_autorun.env reminder

With 3112ea3 the rapido VM boot sequence changed such that autorun scripts no longer need to source /vm_autorun.env themselves. As a reminder for out-of-tree runners, the following FYI was left:

# The following can be removed when we expect all out-of-tree runners to have
# been converted to the new boot sequence of:
# dracut -> 00-rapido-init.sh -> .profile (vm_autorun.env) -> /rapido_autorun/*
        cat > /vm_autorun.env <<EOF
echo vm_autorun.env: autorun scripts no longer need to source this file. It is \
sourced via .profile automatically on boot prior to autorun invocation.
EOF

The purpose of this ticket is to track the removal of this FYI notice in the future.

RBD mapping should use add_single_major

The /sys/bus/rbd/add path no longer works with mainline, while the add_single_major interface isn't available in old (e.g. SLE) kernels without an explicit module parameter, so the shell code should have a fallback mechanism.
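
A sketch of the fallback, using the documented rbd sysfs interfaces (the pool/image/secret variables are illustrative):

rbd_spec="${CEPH_MON_ADDRESS_V1} name=admin,secret=${rbd_secret} ${rbd_pool} ${rbd_image}"
if [ -e /sys/bus/rbd/add_single_major ]; then
	# newer interface, avoids exhausting block major numbers
	echo "$rbd_spec" > /sys/bus/rbd/add_single_major
else
	# legacy path for old kernels without single-major support
	echo "$rbd_spec" > /sys/bus/rbd/add
fi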

4.17 kernel VMs hang in random number generation

Raising a separate issue following discussion in #42 ...

While testing current mainline with XFS mkfs and CephFS mount workloads, I've observed significant hangs in the kernel getrandom code path. E.g.

./vm.sh
...
+ mount -t ceph 192.168.155.1:40174:/ /mnt/cephfs -o name=admin,secret=...
[   26.976920] random: fast init done
^C
rapido1:/# ps -elf|grep mount
4 S root       206     1  0  80   0 -  1569 wait_f 15:41 ttyS0    00:00:00 /sbin/mount.ceph 192.168.155.1:40174:/ /mnt/cephfs -o rw name admin secret ...
rapido1:/# date
Fri May 11 15:43:31 UTC 2018
rapido1:/# cat /proc/206/stack 
[<0>] wait_for_random_bytes+0x57/0x60
[<0>] ceph_create_client+0x11/0x130
[<0>] ceph_mount+0x1b1/0xad1
...

Given that rapido VMs don't currently have access to a good source of entropy, I think it probably makes sense to encourage use of https://wiki.qemu.org/Features/VirtIORNG, by enabling CONFIG_HW_RANDOM_VIRTIO=y and setting QEMU_EXTRA_ARGS="-nographic -device virtio-rng-pci" by default, which sees the guest make use of /dev/random on the hypervisor for random number generation.
CONFIG_HW_RANDOM_VIRTIO=y only adds ~4K to the size of bzImage. However, frustratingly, the (simple_example) boot time on my laptop increases by ~175ms when booting with "-device virtio-rng-pci".

With these options enabled, I don't see any getrandom() based stalls.
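
For reference, the two settings described above:

# kernel config
CONFIG_HW_RANDOM_VIRTIO=y

# rapido.conf
QEMU_EXTRA_ARGS="-nographic -device virtio-rng-pci"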

distribution packages

Now that we've started moving away from magic paths in the rapido directory for rapido.conf and initramfs image output, we can probably consider providing rapido as a distribution package (while retaining all run-from-git-dir functionality). There are still a few things which I think should be cleaned up before we do go down that path though. First things that come to mind:

  • rapido.conf default search paths should probably include a path under $HOME (and perhaps /etc too)
  • initramfs images should be output in the current working directory by default(?)
  • rapido boot should accept an initramfs image path parameter

reduce global variables by passing reference to _rt_require_* helpers

A common cut script pattern that we currently have is something like:

_rt_require_foo
_rt_require_bar
...
$DRACUT --install "... $FOO_BINS $BAR_BINS"

With each new component tested via cut / autorun, we tend to gather new _rt_require_X() helpers, and corresponding global variables. To avoid these global variables getting too out of control, I think it makes sense to use Bash pass-by-reference functionality instead, i.e.:

req_bins=()
_rt_require_foo req_bins
_rt_require_bar req_bins
...
$DRACUT --install "... ${req_bins[*]}"

The corresponding helper functions then look something like:

_rt_require_foo() {
        declare -n req_paths_ref="$1"

        req_paths_ref+=( $(type -P mkfs.foo fooer) ) \
                || _fail "couldn't locate the foo binaries"
}

I think it's much cleaner, as it's clearer from the caller where the binary paths will end up, and it reduces the number of variables needed - the req_bins pass-by-ref variable above is appended with both the foo and bar paths.

We're already using Bash pass-by-reference functionality elsewhere, so it shouldn't change our (recent) version dependency.

move VM network configuration into separate config file?

(brainstorming ways to add support for >2 network attached VMs)

Including VM network configuration in rapido.conf means that users need to learn yet another network configuration syntax. It also adds clutter to rapido.conf, particularly if we wish to support deployment of more than two VMs in future (#51 added three, but it turned into a bit of a mess).

It should be possible to have br_setup.sh run dnsmasq with a pregenerated config file that offers sensible defaults for all VMs (tap MAC addresses are used for corresponding VM nics). Advanced users could provide their own settings or network environment, allowing for more flexibility while keeping the complexity out of rapido.

As a trade off, such an architecture would perhaps mean dropping support for static VM IP assignment.
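
A rough idea of what a pregenerated dnsmasq config could look like (the addresses and MACs here are only illustrative, keyed to the tap MAC convention mentioned above):

# dnsmasq config served on the rapido bridge
interface=br0
bind-interfaces
dhcp-range=192.168.155.100,192.168.155.199,12h
# fixed leases keyed on the tap MAC addresses used for the VM nics
dhcp-host=b8:ac:24:45:c5:01,192.168.155.101
dhcp-host=b8:ac:24:45:c5:02,192.168.155.102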

dracut throws an error and doesn't package kernel modules listed after builtins

I'm seeing the following strange behaviour on SLE15-SP1 with Dracut 044-18.15.1. It doesn't occur on Leap 42.3 (Dracut 044.1-32.1)...

I'm building an image via cut/samba_local.sh , which requires the following kernel modules:

~/rapido> grep add-drivers cut/samba_local.sh
        --add-drivers "zram lzo xfs btrfs" \

My SLE15-SP1 kernel is built with xfs built-in and the rest as modules:

david@kerbus:~/rapido> . rapido.conf
david@kerbus:~/rapido> cat ${KERNEL_SRC}/include/config/kernel.release
4.12.14+
~/rapido> grep -e zram.ko -e lzo.ko -e xfs.ko -e btrfs.ko ${KERNEL_INSTALL_MOD_PATH}/lib/modules/4.12.14+/modules.builtin
kernel/fs/xfs/xfs.ko
~/rapido> find ${KERNEL_INSTALL_MOD_PATH}/lib/modules/4.12.14+/| grep -e zram.ko -e lzo.ko -e xfs.ko -e btrfs.ko
/home/david/kernel/mods/lib/modules/4.12.14+/kernel/crypto/lzo.ko
/home/david/kernel/mods/lib/modules/4.12.14+/kernel/drivers/block/zram/zram.ko
/home/david/kernel/mods/lib/modules/4.12.14+/kernel/fs/btrfs/btrfs.ko

Dracut throws an error about xfs missing, but it doesn't result in a non-zero exit status:

dracut: Executing: /home/david/dracut/dracut.sh --install "tail blockdev ps rmdir resize dd vim grep find df sha256sum             strace mkfs mkfs.btrfs mkfs.xfs           d
dracut: *** Including module: bash ***
dracut: *** Including module: network-legacy ***
dracut: *** Including module: network ***
dracut: *** Including module: ifcfg ***
dracut: *** Including module: kernel-network-modules ***
dracut: *** Including module: udev-rules ***
dracut: Skipping udev rule: 40-redhat.rules
dracut: Skipping udev rule: 50-firmware.rules
dracut: Skipping udev rule: 50-udev.rules
dracut: Skipping udev rule: 91-permissions.rules
dracut: Skipping udev rule: 80-drivers-modprobe.rules
dracut: *** Including module: base ***
dracut: *** Including modules done ***
dracut-install: ERROR: installing 'xfs'
dracut: FAILED:  /home/david/dracut/dracut-install -D /home/david/rapido/initrds/dracut.UWmOn6/initramfs --kerneldir /home/david/kernel/mods/lib/modules/4.12.14+ -m zram lzos
...
dracut: *** Creating initramfs image file '/home/david/rapido/initrds/myinitrd' done ***
dracut: dracut: warning: could not fsfreeze /home/david/rapido/initrds

~/rapido> echo $?
0

The resulting initramfs image is missing the btrfs kernel module, which is listed after the "missing" xfs module:

~/rapido> lsinitrd initrds/myinitrd| grep -e zram.ko -e lzo.ko -e xfs.ko -e btrfs.ko
-rw-r--r--   1 root     root         8464 Feb 14 16:35 lib/modules/4.12.14+/kernel/crypto/lzo.ko
-rw-r--r--   1 root     root        44096 Feb 14 16:35 lib/modules/4.12.14+/kernel/drivers/block/zram/zram.ko

flexible locations for rapido.conf input and initramfs output paths

Rapido currently always assumes that rapido.conf (and any associated net-conf) is placed in the rapido parent directory (RAPIDO_DIR). Similarly, the Dracut output image is placed at $RAPIDO_DIR/initrds/myinitrd (with dracut --tmpdir and qemu -pidfile using the same directory).
These hardcoded locations are a problem for CI systems, which often run many rapido test jobs simultaneously.
The current workaround for this is to use a separate rapido source directory for each test job, but it's not ideal.

if libostree is installed, dracut pulls in systemd

After an upgrade/pkg install of my system, libostree was installed and added a /etc/dracut.conf.d/ostree.conf file with the following content:

add_dracutmodules+=" ostree systemd "

resulting in dracut picking it up automatically by default, which in turn prevented my VM from booting, as systemd has some expectations about the system that weren't met.

The fix was to force-omit ostree and systemd using dracut -o ostree -o systemd on the dracut command line.

drop unnecessary network and ifcfg dracut modules

We use the kernel ip=X functionality for Rapido VM network configuration, so we don't need wicked, etc. in VM images. Simple "ip" and "ping" binary dependencies should suffice.
Dropping the network and ifcfg dracut modules saves 5MB from my cifs.ko test image.
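
For reference, the kernel handles address assignment itself via the ip= parameter on the kernel command line, e.g. (addresses illustrative):

ip=192.168.155.101:::255.255.255.0:rapido1:eth0:none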

zram now uses lzo-rle by default, which isn't included by cut scripts

The zram default recently changed via:

commit ce82f19fd5809f0cf87ea9f753c5cc65ca0673d6
Author: Dave Rodgman <[email protected]>
Date:   Wed Mar 13 11:44:26 2019 -0700

    zram: default to lzo-rle instead of lzo
    
    lzo-rle gives higher performance and similar compression ratios to lzo.
    
    Link: http://lkml.kernel.org/r/[email protected]
    Signed-off-by: Dave Rodgman <[email protected]>
    Signed-off-by: Andrew Morton <[email protected]>
    Signed-off-by: Linus Torvalds <[email protected]>

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 04ca65912638..e7a5f1d1c314 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -41,7 +41,7 @@ static DEFINE_IDR(zram_index_idr);
 static DEFINE_MUTEX(zram_index_mutex);
 
 static int zram_major;
-static const char *default_compressor = "lzo";
+static const char *default_compressor = "lzo-rle";

To work with this change, we should add "lzo-rle" to all cut scripts, and/or build with the module built-in.
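
E.g. for the samba_local cut script quoted in the earlier builtin-modules issue, the driver list would become something like:

	--add-drivers "zram lzo lzo-rle xfs btrfs" \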

add support for kernel crashdump

Sometimes it is helpful to analyze a kernel problem post-mortem by debugging it with crash; for that, a kdump kernel needs to be loaded and set up.

Rapido currently lacks the infrastructure needed to do this.

add support for debugging with drgn

Drgn (pronounced dragon) is a scriptable runtime kernel debugger (drgn documentation, drgn lwn.net article) written in Python.

When debugging kernel problems on test kernels in rapido it is sometimes helpful to inspect kernel data structures at runtime, without the overhead of attaching gdb to the VM, creating breakpoints and so on.

Unfortunately it is a bit cumbersome to just add a local --include or --install for drgn in a cut script:

diff --git a/cut/fstests_btrfs_zoned.sh b/cut/fstests_btrfs_zoned.sh
index a4c1bb134616..ef7f86fe04b9 100755
--- a/cut/fstests_btrfs_zoned.sh
+++ b/cut/fstests_btrfs_zoned.sh
@@ -19,6 +19,10 @@ _rt_require_dracut_args "$RAPIDO_DIR/autorun/fstests_btrfs_zoned.sh" "$@"
 _rt_require_fstests
 _rt_require_btrfs_progs

+DRGN_BIN_DEPS="python3 drgn"
+PYTHON_LIBS="/usr/lib/python3.8/"
+PYTHON_LIBS64="/usr/lib64/python3.8/"
+
 # wipefs mount
 "$DRACUT" --install "tail blockdev ps rmdir resize dd vim grep find df sha256sum \
                   strace mkfs  free \
@@ -36,9 +40,14 @@ _rt_require_btrfs_progs
                   ${FSTESTS_SRC}/ltp/* ${FSTESTS_SRC}/src/* \
                   ${FSTESTS_SRC}/src/log-writes/* \
                   ${FSTESTS_SRC}/src/aio-dio-regress/*
+                  $DRGN_BIN_DEPS \
                   $BTRFS_PROGS_BINS" \
        --include "$FSTESTS_SRC" "$FSTESTS_SRC" \
+       --include "/usr/lib64/libgomp.so.1" "/usr/lib64/libgomp.so.1" \
+       --include "/usr/lib64/libexpat.so.1" "/usr/lib64/libexpat.so.1" \
        $DRACUT_RAPIDO_INCLUDES \
+       --include "$PYTHON_LIBS" "$PYTHON_LIBS" \
+       --include "$PYTHON_LIBS64" "$PYTHON_LIBS64" \
        --include "$RAPIDO_DIR/wipefs" "/usr/sbin/wipefs" \
        --include "$RAPIDO_DIR/mount" "/usr/sbin/mount" \
        --add-drivers "lzo lzo-rle dm-snapshot dm-flakey btrfs raid6_pq \

so we'd need to create a dracut module and include it if needed.

UML support

I've been playing around with User Mode Linux, which runs the kernel as a user-space process. It can be run alongside a Rapido initramfs image with very few changes:

  • build kernel with CONFIG_UML=y, etc.
  • boot via ./linux initrd=initrds/myinitrd eth0=tuntap,tap...

My wip vm.sh equivalent and example kernel config can be found at https://github.com/ddiss/rapido-utils/tree/master/uml .

It seems to work pretty well, including tap networking between multiple VMs and quick boot times. However, there are a few limitations:

  • no SMP, UML only emulates a single CPU
  • kernel rebuild needed for CONFIG_UML=y
  • /dev/random reads stall
    • #44 all over again
    • CONFIG_UML_RANDOM doesn't work nicely like virtio-rng-pci
  • x86 only
    • unlike qemu, UML hasn't been ported to many platforms

I'm curious whether others are interested in seeing this in core rapido - it might be useful for running CI jobs on cloud systems, etc. which don't offer qemu.

add kexec support

I'm working on xfstests CI jobs for SUSE and openSUSE kernels and plan on using the Open Build Service for kernel compilation. Given that obs compile jobs can be run within a VM, my plan is to boot the kernel directly after compilation via kexec.

stale xattrs affect new initramfs images

Reproducer:

./rapido cut simple-example
./rapido cut lio-local

expected:
The lio-local VM is booted with a network device attached

observed:
The lio-local VM is booted without network

xattrs on rapido initramfs images can be used to control CPU, memory and network resource parameters passed to qemu.
These xattrs are not removed when a new rapido initramfs image is generated. In the example above, the user.rapido.vm_networkless xattr is still carried by initrds/myinitrd after both cut simple-example and cut lio-local have run.
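
The stale state can be inspected and cleared manually with the standard xattr tools, e.g.:

# show the rapido specific xattrs carried by the current image
getfattr -d -m '^user\.rapido\.' initrds/myinitrd
# drop the stale networkless flag left behind by cut simple-example
setfattr -x user.rapido.vm_networkless initrds/myinitrd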

Enable ftrace in default kernel configs

ftrace is a useful debugging utility, and can be configured with little impact on performance, with extra per-function NOPs added and used as tracing trampolines.

Building the SLE15SP2 kernel (I'd expect similar for mainline) with ftrace enabled results in the following size increase:
before: 4239920 ./arch/x86/boot/bzImage
after: 4870704 ./arch/x86/boot/bzImage
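
The exact set of options is an assumption on my part, but the "per-function NOPs used as tracing trampolines" behaviour described above corresponds roughly to:

CONFIG_FTRACE=y
CONFIG_FUNCTION_TRACER=y
CONFIG_FUNCTION_GRAPH_TRACER=y
CONFIG_DYNAMIC_FTRACE=y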

avoid raid6 benchmarks to shave ~500ms off the boot time

On boot, I see a large chunk of time spent in raid6_pq.ko performing benchmarks:
[ 0.257224] PCI: Using configuration type 1 for base access
[ 0.344009] raid6: sse2x1 gen() 9888 MB/s
[ 0.412008] raid6: sse2x1 xor() 7373 MB/s
[ 0.480004] raid6: sse2x2 gen() 11862 MB/s
[ 0.548005] raid6: sse2x2 xor() 8179 MB/s
[ 0.616003] raid6: sse2x4 gen() 13935 MB/s
[ 0.684004] raid6: sse2x4 xor() 8772 MB/s
[ 0.684486] raid6: using algorithm sse2x4 gen() 13935 MB/s
[ 0.685028] raid6: .... xor() 8772 MB/s, rmw enabled
[ 0.685569] raid6: using intx1 recovery algorithm
[ 0.686178] ACPI: Added _OSI(Module Device)

Looking at why these are running, I found that CONFIG_RAID6_PQ=y is responsible. This module is built into the kernel as a dependency of CONFIG_BTRFS_FS=y. Switching to CONFIG_RAID6_PQ=m and CONFIG_BTRFS_FS=m reduced my boot time by around 500ms.

Add qemu -drive for fstests

Currently all of our fstests test cases rely on zram for the test and scratch devs.

Wouldn't it be a good idea to have a variable in rapido.conf selecting either zram or an image from qemu?

Another option would be to duplicate the fstests cut and autorun scripts but this sounds like a lot of duplicated work for no reason.

Update:
Having the ability to specify more than one external disk would be great, i.e. one for TEST_DEV and one for SCRATCH_DEV, or even multiple for btrfs' SCRATCH_DEV_POOL, which is needed for the raid tests.
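
Combined with the serial-based detection proposed in the fstests block device issue above, the host side could look something like this (paths and serials are illustrative):

QEMU_EXTRA_ARGS="-drive file=/var/tmp/fstests-test.img,if=none,format=raw,id=d0 \
 -device virtio-blk-pci,drive=d0,serial=fstests-test \
 -drive file=/var/tmp/fstests-scratch.img,if=none,format=raw,id=d1 \
 -device virtio-blk-pci,drive=d1,serial=fstests-scratch"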

document and use a consistent coding style

I propose that we follow blktests coding style:

  • Indent with tabs.
  • Don't add a space before the parentheses or a newline before the curly brace
    in function definitions.
  • Variables set and used by the testing framework are in caps with underscores.
    E.g., TEST_NAME and GROUPS. Variables local to the test are lowercase
    with underscores.
  • Functions defined by the testing framework or group scripts, including
    helpers, have a leading underscore. E.g., _have_scsi_debug. Functions local
    to the test should not have a leading underscore.
  • Use the bash [[ ]] form of tests instead of [ ].
  • Always quote variable expansions unless the variable is a number or inside of
    a [[ ]] test.
  • Use the $() form of command substitution instead of backticks.
  • Use bash for loops instead of seq. E.g., for ((i = 0; i < 10; i++)), not
    for i in $(seq 0 9).

New ceph.conf format parsing

Nautilus seems to have slightly changed the ceph.conf format, which causes the _ini_parse function to fail to parse it. The change in question is in the monitors section, which can now have a list of IP addresses:

 [mon.a]
        mon host =  [192.168.155.1:40880]

The '[:]' seems to be confusing the parsing code. The same occurs for the global section:

[global]
        mon host =  [v2:192.168.155.1:40879,v1:192.168.155.1:40880]

I've hacked some code to parse this new config using awk, which should be a bit easier to maintain IMO. Here's what I currently have:

function _ini_parse() {
	local ini_file=$1
	local ini_section=$2
	shift 2
	local ini_key="$*"

	awk -v section="^\\\[$ini_section\\\]" \
		-v key="$ini_key" \
		'$1 ~ section {
			got_section = 1;
			next
		}
		/^\[.*\]/{
			got_section = 0;
			got_key = 0;
		}
		$0 ~ key {
			got_key = 1;
		}
		got_section && got_key && NF {
			print substr($0, index($0, "=") + 1)
			exit 0;
		}' "$ini_file"
}

Would something like this be acceptable? The idea is to actually wrap _ini_parse() with some more high-level functions (such as ceph_get_key, ceph_get_mon_addr, ...).

explicit DRACUT_SRC results in dracut-init.sh: -D: command not found

The rapido.conf DRACUT_SRC parameter should allow rapido to be run using Dracut from a specified source tree. When I tried using it with current Dracut HEAD (9e68789d66a6a383e5c46f687350897705c7994f) I encountered:

> ./rapido cut lio-local
/home/ddiss/isms/dracut/dracut-init.sh: line 212: -D: command not found
dracut: FAILED: -D .../rapido/initrds/dracut.ebhcfN/initramfs /bin/sh
dracut: Executing: /home/ddiss/isms/dracut/dracut.sh --install "tail blockdev ps rmdir resize dd vim grep find df sha256sum                strace mkfs.xfs truncate losetup dd
dracut: *** Including module: bash ***
/home/ddiss/isms/dracut/dracut-init.sh: line 212: -D: command not found
dracut: FAILED: -D .../rapido/initrds/dracut.ebhcfN/initramfs -l /bin/bash
dracut: *** Including module: udev-rules ***
/home/ddiss/isms/dracut/dracut-init.sh: line 242: -D: command not found
dracut: FAILED: -D .../rapido/initrds/dracut.ebhcfN/initramfs -a -l udevadm cat uname blkid
/home/ddiss/isms/dracut/dracut-init.sh: line 201: -D: command not found
dracut: FAILED: -D .../rapido/initrds/dracut.ebhcfN/initramfs -d /etc/udev

@mwilck are you still using this dracut-from-source functionality and have you seen this issue?

Unable to boot VMs with Leap 15.0 dracut

When using rapido on openSUSE Leap 15.0, I can't boot the VMs; they get stuck after multi-user.target:

Welcome to openSUSE Leap 15.0 Beta dracut-044-29.1 (Initramfs)!

[ 0.383121] systemd[1]: No hostname configured.
[ 0.383743] systemd[1]: Set hostname to .
[ 0.384377] systemd[1]: Initializing machine ID from random generator.
[ 0.394213] systemd[1]: Listening on udev Kernel Socket.
[ OK ] Listening on udev Kernel Socket.
[ 0.395758] systemd[1]: Listening on udev Control Socket.
[ OK ] Listening on udev Control Socket.
[ 0.397718] systemd[1]: Reached target Swap.
[ OK ] Reached target Swap.
[ OK ] Created slice System Slice.
[ OK ] Reached target Slices.
[ OK ] Started Dispatch Password Requests to Console Directory Watch.
[ OK ] Reached target Paths.
[ OK ] Listening on Journal Socket (/dev/log).
[ OK ] Reached target Local File Systems.
[ OK ] Listening on Journal Socket.
Starting udev Coldplug all Devices...
[ OK ] Reached target Sockets.
Starting Create list of required st…ce nodes for the current kernel...
Starting Journal Service...
Starting Load Kernel Modules...
[ OK ] Reached target Timers.
[ OK ] Started Create list of required sta…vice nodes for the current kernel.
Starting Create Static Device Nodes in /dev...
[ OK ] Started Journal Service.
[FAILED] Failed to start Load Kernel Modules.
See 'systemctl status systemd-modules-load.service' for details.
[ OK ] Started Create Static Device Nodes in /dev.
Starting udev Kernel Device Manager...
Starting Apply Kernel Variables...
[ OK ] Started Apply Kernel Variables.
[ OK ] Started udev Kernel Device Manager.
Mounting Kernel Configuration File System...
[ OK ] Mounted Kernel Configuration File System.
[ OK ] Started udev Coldplug all Devices.
[ OK ] Reached target System Initialization.
[ OK ] Reached target Basic System.
[ OK ] Reached target Multi-User System.
[ 1.331688] tsc: Refined TSC clocksource calibration: 2594.085 MHz
[ 1.334155] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x25646885a38, max_idle_ns: 440795311131 s
[ 1.338581] clocksource: Switched to clocksource tsc

Allow VMs to run without network

vm.sh currently unconditionally boots VMs with a network device configured. Certain tests, e.g. local xfstests, don't require a network device on the provisioned test VM, so they should be able to run without one.

Similar to the way $qemu_cut_args are handled, I think it'd make sense to have cut scripts add an extended attribute to the initramfs image which signifies whether or not the image needs to be deployed on a network attached VM.

systemd runners no longer working

The lrbd and mpath-local runners haven't been working for me since upgrading to Leap 15.1. It seems that the "rd.systemd.unit=emergency" hook doesn't have any effect. I need to take a closer look at this one.
