
zfs-hetzner-vm


Scripts to install Debian 10, 11 or 12, or Ubuntu 18.04, 20.04 or 22.04 LTS, with a ZFS root filesystem on Hetzner servers (both cloud and dedicated root servers).
WARNING: all data on the disk will be destroyed.

How to use:

  • Log in to the Hetzner cloud server console.
  • Choose the "Rescue" menu.
  • Set the OS to linux64, add your SSH key to the rescue console, then press the "Enable rescue & power cycle" button.
  • Connect to the rescue console via SSH and run the appropriate script from this repo.

Debian 10 minimal setup with SSH server

wget -qO- https://raw.githubusercontent.com/terem42/zfs-hetzner-vm/master/hetzner-debian10-zfs-setup.sh | bash -

Debian 11 minimal setup with SSH server

wget -qO- https://raw.githubusercontent.com/terem42/zfs-hetzner-vm/master/hetzner-debian11-zfs-setup.sh | bash -

Debian 12 minimal setup with SSH server

wget -qO- https://raw.githubusercontent.com/terem42/zfs-hetzner-vm/master/hetzner-debian12-zfs-setup.sh | bash -

Ubuntu 18.04 LTS minimal setup with SSH server

wget -qO- https://raw.githubusercontent.com/terem42/zfs-hetzner-vm/master/hetzner-ubuntu18-zfs-setup.sh | bash -

Ubuntu 20.04 LTS minimal setup with SSH server

wget -qO- https://raw.githubusercontent.com/terem42/zfs-hetzner-vm/master/hetzner-ubuntu20-zfs-setup.sh | bash -

Ubuntu 22.04 LTS minimal setup with SSH server

wget -qO- https://raw.githubusercontent.com/terem42/zfs-hetzner-vm/master/hetzner-ubuntu22-zfs-setup.sh | bash -

Answer the script's questions about the desired hostname and ZFS ARC cache size.
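If you later want to change the ARC limit on the installed system, a common generic approach (not part of these scripts) is a modprobe option; the 2 GiB value and the file name below are only examples:

# example: cap the ZFS ARC at 2 GiB (value is in bytes); pick a size that fits your RAM
echo "options zfs zfs_arc_max=2147483648" > /etc/modprobe.d/zfs-arc.conf
update-initramfs -u   # rebuild the initramfs so the limit also applies at early boot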

To cope with network failures, it is highly recommended to run the commands above inside a screen session (type man screen for more info). Example of screen usage:

export LC_ALL=en_US.UTF-8 && screen -S zfs

To detach from the screen session, press Ctrl-a then d. To reattach, type screen -r zfs.

Upon a successful run, the script will reboot the system, and you will be able to log in to it using the same SSH key you used in the rescue console.

Please note that the drives you intend to format must not be in use; you can execute mdadm --stop --scan before running the script to stop any software RAID arrays the rescue system has auto-assembled, as in the example below.
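For example, from the rescue shell (device names are placeholders; wipefs permanently destroys existing signatures, so double-check the targets):

mdadm --stop --scan                      # stop any auto-assembled md arrays
cat /proc/mdstat                         # verify nothing is still assembled
wipefs -a /dev/nvme0n1 /dev/nvme1n1      # optional and destructive: clear old RAID/filesystem signatures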

zfs-hetzner-vm's People

Contributors

bitcrush, congzhangzh, corny, crpb, digidr, driops, jlsjonas, joshua2504, marcoboers, mdbraber, terem42, westfeld


zfs-hetzner-vm's Issues

only mirror pool is supported

The mirror pool layout is hardcoded whenever more than one disk is used.
When more than two disks are available, a raidz1 layout would be preferable; see the sketch below.
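As a rough illustration of the difference (not taken from the script itself; the pool name, ashift value and by-id device paths are placeholders):

# current behaviour, roughly: two or more selected disks always become a mirror
zpool create -o ashift=12 rpool mirror /dev/disk/by-id/DISK1-part3 /dev/disk/by-id/DISK2-part3
# suggested alternative for more than two disks: single-parity raidz1
zpool create -o ashift=12 rpool raidz1 /dev/disk/by-id/DISK1-part3 /dev/disk/by-id/DISK2-part3 /dev/disk/by-id/DISK3-part3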

ubuntu installation on dedicated servers will auto-suspend

Hi,
your Ubuntu installation script almost works OK.
One major issue has been that the server automatically suspends after about 20 minutes (and the Hetzner Robot power button is required to bring it back online).

[ 1213.500825] PM: suspend entry (deep)

After investigating the issue, I fixed it with this:

sudo apt-get remove xserver-xorg-core

thanks for your work btw :) it's well done
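An alternative workaround sometimes used on always-on servers (not verified against this particular setup) is to mask the systemd sleep targets instead of removing packages:

# stop systemd from ever suspending or hibernating the machine
sudo systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target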

IPv4 configuration incorrect

Reviewing the scripts, it looks like DHCP is configured for IPv4. After running the script, I'm unable to connect to my dedicated server over IPv4. I haven't yet tried connecting over IPv6, but the IPv4 setup seems incorrect. At least for dedicated servers, I don't think Hetzner runs DHCP.
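If that is the case, a static ifupdown configuration roughly like the following might be needed on dedicated servers (the Debian scripts write /etc/network/interfaces; the interface name, address and gateway below are placeholders to be replaced with the values from Hetzner Robot):

# /etc/network/interfaces sketch for a typical Hetzner dedicated server
auto eth0
iface eth0 inet static
  address 203.0.113.10
  netmask 255.255.255.255
  gateway 203.0.113.1
  pointopoint 203.0.113.1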

How do I select the disks?

Hello,

I would like to create a pool with all 4 disks that are in my server. However, I only see the label of 1 disk while running the script. I'm not super experienced with Linux - could you point me to documentation that explains which disk to select?

When I selected the ones with the word "Samsung", the process completed but gave me a message that one or more disks were busy.

Disks screenshot: [image attached in the original issue, "Screenshot 2021-09-16 at 3 43 43 AM"]

The partitions after running the script:

root@rescue ~ # sudo fdisk -l
Disk /dev/ram0: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

(/dev/ram1 through /dev/ram15: identical 64 MiB ram disk entries omitted)

Disk /dev/loop0: 2.86 GiB, 3068773888 bytes, 5993699 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/nvme1n1: 3.49 TiB, 3840755982336 bytes, 7501476528 sectors
Disk model: SAMSUNG MZQL23T8HCLS-00A07
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes
Disklabel type: gpt
Disk identifier: EDA7164D-C134-4A01-8C32-497F9068585F

Device Start End Sectors Size Type
/dev/nvme1n1p1 48 2047 2000 1000K BIOS boot
/dev/nvme1n1p2 2048 1050623 1048576 512M Solaris /usr & Apple ZFS
/dev/nvme1n1p3 1050624 7501476494 7500425871 3.5T Solaris /usr & Apple ZFS

Partition 1 does not start on physical sector boundary.

Disk /dev/nvme0n1: 3.49 TiB, 3840755982336 bytes, 7501476528 sectors
Disk model: SAMSUNG MZQL23T8HCLS-00A07
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes
Disklabel type: gpt
Disk identifier: 661B0DE6-BF26-4B19-B0EE-A726058F9156

Device Start End Sectors Size Type
/dev/nvme0n1p1 48 2047 2000 1000K BIOS boot
/dev/nvme0n1p2 2048 1050623 1048576 512M Solaris /usr & Apple ZFS
/dev/nvme0n1p3 1050624 7501476494 7500425871 3.5T Solaris /usr & Apple ZFS

Partition 1 does not start on physical sector boundary.

Disk /dev/nvme3n1: 3.49 TiB, 3840755982336 bytes, 7501476528 sectors
Disk model: SAMSUNG MZQLB3T8HALS-00007
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 4B5EFE90-8690-4A6A-904A-A0DDB34C58BC

Device Start End Sectors Size Type
/dev/nvme3n1p1 48 2047 2000 1000K BIOS boot
/dev/nvme3n1p2 2048 1050623 1048576 512M Solaris /usr & Apple ZFS
/dev/nvme3n1p3 1050624 7501476494 7500425871 3.5T Solaris /usr & Apple ZFS

Disk /dev/nvme2n1: 3.49 TiB, 3840755982336 bytes, 7501476528 sectors
Disk model: SAMSUNG MZQLB3T8HALS-00007
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: F49A38F8-0590-412A-8350-33744554CB28

Device Start End Sectors Size Type
/dev/nvme2n1p1 48 2047 2000 1000K BIOS boot
/dev/nvme2n1p2 2048 1050623 1048576 512M Solaris /usr & Apple ZFS
/dev/nvme2n1p3 1050624 7501476494 7500425871 3.5T Solaris /usr & Apple ZFS

Disk /dev/md0: 15.98 GiB, 17162043392 bytes, 33519616 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes

Also, unable to SSH in after the installation was completed.

Appreciate the script you've created. Thank you!
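To see which physical disks are actually available before running the script, something like this from the rescue shell can help (generic commands, not part of the script; the disk selection is typically based on the by-id names):

lsblk -o NAME,SIZE,TYPE,MODEL            # overview of disks, partitions and md arrays
ls -l /dev/disk/by-id/ | grep -v part    # stable by-id disk names
mdadm --stop --scan                      # stop the rescue system's md arrays so the disks are no longer "busy"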

How to install the key from this error?

Are there more details for this error:

SSH pubkey file is absent, please add it to the rescue system setting, then reboot into rescue system and run the script?

Nothing in the Hetzner console makes this clear, and I don't see a key in this repo...
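For what it's worth, the rescue system normally places the key you selected when enabling rescue mode into root's authorized_keys, which appears to be what the script looks for (assumption, not verified against the script source):

# check whether the rescue system actually received a public key
cat /root/.ssh/authorized_keys
# if the file is missing or empty, paste your own public key manually and re-run the script
echo "ssh-ed25519 AAAA...your-key... you@example.com" >> /root/.ssh/authorized_keys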

zfs: command not found

No matter which version of the script I try, it keeps failing here:

checking whether inode_owner_or_capable() takes user_ns... configure: error:
*** None of the expected "capability" interfaces were detected.
*** This may be because your kernel version is newer than what is
*** supported, or you are using a patched custom kernel with
*** incompatible modifications.
***
*** ZFS Version: zfs-2.1.11-1
*** Compatible Kernels: 3.10 - 6.2

Install failed, please fix manually!
bash: line 499: zfs: command not found

IPv6 only hosts fail

Currently this script fails for IPv6 only hosts. Any thoughts on getting around this?

The first error is "gpg: keyserver receive failed: no keyserver available", caused by:

gpg --keyid-format long --keyserver hkp://keyserver.ubuntu.com --recv-keys 0x871920D1991BC93C

keyserver.ubuntu.com does not have an IPv6 address. Got a workaround?

Debian12 Script fails

Hello,
I just tried using the Debian 12 script, but it fails with this message:

Err:17 http://deb.debian.org/debian bookworm/main amd64 linux-kbuild-6.1 amd64 6.1.55-1
  404  Not Found [IP: 2a01:4ff:ff00::3:3 80]
Get:19 http://deb.debian.org/debian bookworm/main amd64 linux-headers-amd64 amd64 6.1.55-1 [1,420 B]
Fetched 17.0 MB in 0s (54.3 MB/s)
E: Failed to fetch http://deb.debian.org/debian/pool/main/l/linux/linux-kbuild-6.1_6.1.55-1_amd64.deb  404  Not Found [IP: 2a01:4ff:ff00::3:3 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

Stuck at blinking cursor after reboot (Ubuntu 20 LTS)

Hi,

It seems that the system fails to boot even into GRUB after running the Ubuntu 20 LTS script. As there was already a system installed, I had to disable the already-active RAID using mdadm --stop --scan (perhaps that's where it's going wrong?).


Follow-up of #9 (comment)

grub-install: error: unknown filesystem

Hi there,

first of all - thanks for the great script! Unfortunately, I stumbled across a problem I seem to be unable to fix myself: After the majority of the installation finished without issues, it crashes while installing grub with:

Creating config file /etc/default/grub with new version
Processing triggers for man-db (2.11.2-2) ...
Installing for i386-pc platform.
grub-install: error: unknown filesystem.

I tried it with both Ubuntu 22.04 and Debian 12, but got the same result. The rescue system on Hetzner itself is currently Debian 12, too.

I switched to the chrooted environment manually and ran the command again with verbose output (see the attached file).

log.txt

Additional info: the server is a new AX52 with two NVMe drives, and as far as I can tell it boots in UEFI mode. Maybe that is the issue?
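If the machine really boots in UEFI mode, the i386-pc (legacy BIOS) target the script installs would indeed be the wrong one. A rough, untested sketch of what a UEFI install inside the chroot would involve (the ESP device is an assumption; judging by the partition layouts elsewhere on this page, the scripts do not appear to create an EFI System Partition):

# inside the chrooted environment, assuming /dev/nvme0n1p1 is (or can become) an EFI System Partition
apt install --yes grub-efi-amd64 efibootmgr
mkfs.vfat -F32 /dev/nvme0n1p1
mkdir -p /boot/efi && mount /dev/nvme0n1p1 /boot/efi
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=debian --recheck
update-grub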

need a hint how to unlock

I unlock my LUKS servers remotely, but I am clueless how to do it with ZFS on my new Hetzner bare metal server. Could you enlighten me on how to unlock ZFS, or are the Ubuntu scripts maybe broken?

End of output from the install:

Setting up networkd-dispatcher (2.1-2~ubuntu20.04.1) ...
Setting up dropbear-initramfs (2019.78-2build1) ...
update-initramfs: deferring update (trigger activated)
Dropbear has been added to the initramfs. Don't forget to check
your "ip=" kernel bootparameter to match your desired initramfs
ip configuration.

Setting up systemd (245.4-4ubuntu3.13) ...
Installing new version of config file /etc/dhcp/dhclient-enter-hooks.d/resolved ...
Installing new version of config file /etc/systemd/resolved.conf ...
Setting up netplan.io (0.103-0ubuntu520.04.5) ...
Setting up systemd-timesyncd (245.4-4ubuntu3.13) ...
Setting up systemd-sysv (245.4-4ubuntu3.13) ...
Setting up ubuntu-minimal (1.450.2) ...
Setting up libnss-systemd:amd64 (245.4-4ubuntu3.13) ...
Setting up libpam-systemd:amd64 (245.4-4ubuntu3.13) ...
Processing triggers for mime-support (3.64ubuntu1) ...
Processing triggers for libc-bin (2.31-0ubuntu9.2) ...
Processing triggers for man-db (2.9.1-1) ...
Processing triggers for plymouth-theme-ubuntu-text (0.9.4git20200323-0ubuntu6.2) ...
update-initramfs: deferring update (trigger activated)
Processing triggers for dbus (1.12.16-2ubuntu2.1) ...
Processing triggers for initramfs-tools (0.136ubuntu6.6) ...
update-initramfs: Generating /boot/initrd.img-5.4.0-91-generic
cryptsetup: ERROR: Couldn't resolve device rpool/ROOT/ubuntu
cryptsetup: WARNING: Couldn't determine root device
Processing triggers for ca-certificates (20210119~20.04.2) ...
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
===========add static route to initramfs via hook to add default routes due to Ubuntu initramfs DHCP bug =========
======= update initramfs ==========
update-initramfs: Generating /boot/initrd.img-5.4.0-91-generic
cryptsetup: ERROR: Couldn't resolve device rpool/ROOT/ubuntu
cryptsetup: WARNING: Couldn't determine root device
======= update grub ==========
Sourcing file `/etc/default/grub'
Sourcing file `/etc/default/grub.d/init-select.cfg'
Generating grub configuration file ...
Warning: Failed to find a valid directory 'etc' for dataset 'rpool'. Ignoring
Warning: Ignoring rpool
Found linux image: vmlinuz-5.4.0-91-generic in rpool/ROOT/ubuntu
Found initrd image: initrd.img-5.4.0-91-generic in rpool/ROOT/ubuntu
done
======= setting up zed ==========
======= setting mountpoints ==========
========= add swap, if defined
======= unmounting filesystems and zfs pools ==========

###############################################################################

unmount_and_export_fs

###############################################################################

Waiting for virtual filesystems to unmount
===========exporting zfs pools=============
all zfs pools were succesfully exported
======== setup complete, rebooting ===============
root@rescue ~ # Connection to xxx.xxx.xxx.xxx closed by remote host.
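Since the log shows dropbear-initramfs being set up, one commonly described approach for remote unlocking (not verified against these scripts; rpool matches the pool name in the log above) is to SSH into the dropbear shell in the initramfs and load the key by hand:

# from your workstation, connect to the dropbear shell running inside the initramfs
ssh root@YOUR_SERVER_IP
# inside the initramfs shell:
zpool import -N rpool 2>/dev/null || true   # import the pool without mounting, if not already imported
zfs load-key rpool                          # prompts for the encryption passphrase
# depending on the initramfs scripts, you may then need to let the waiting console passphrase prompt
# finish; newer zfs-initramfs versions ship a zfsunlock helper for exactly this step
exit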

zfs_vdev_scheduler, scrub bpool hangs

Hey, I've been playing around for the last few days with ZFS on a Hetzner bare metal server running Ubuntu 20.04, with no important data, just trying out commands and getting a feel for ZFS.
I did not touch the server for a few days; when I came back, scrubbing the bpool made my terminal hang. I opened another SSH session and saw some errors in kernel.log.
Rebooting made it go away and scrubbing worked again.
There seem to be more reports on this in the ZFS project, sometimes suggesting zfs_vdev_scheduler=none or noop, but that setting is deprecated:

zfs_vdev_scheduler (charp)
DEPRECATED: This option exists for compatibility with older user configurations. It does nothing except print a warning to the kernel log if set.

A fresh install with the script seems to have:

cat /sys/module/zfs/parameters/zfs_vdev_scheduler
unused

From the similar issues I found, it does not seem to be an easily reproducible bug; I just found it weird that this would happen right out of the gate on a fresh Hetzner install with mirrored 240 GB SATA datacenter SSDs.

Dec 24 12:12:42 pen kernel: [613923.985544] WARNING: Pool 'bpool' has encountered an uncorrectable I/O failure and has been suspended.
Dec 24 12:12:42 pen kernel: [613923.985544] 
Dec 24 12:15:11 pen kernel: [614073.132561] INFO: task txg_sync:1341 blocked for more than 120 seconds.
Dec 24 12:15:11 pen kernel: [614073.132569]       Tainted: P           OE     5.4.0-91-generic #102-Ubuntu
Dec 24 12:15:11 pen kernel: [614073.132572] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 24 12:15:11 pen kernel: [614073.132576] txg_sync        D    0  1341      2 0x80004000
Dec 24 12:15:11 pen kernel: [614073.132579] Call Trace:
Dec 24 12:15:11 pen kernel: [614073.132590]  __schedule+0x2e3/0x740
Dec 24 12:15:11 pen kernel: [614073.132594]  ? __internal_add_timer+0x2d/0x40
Dec 24 12:15:11 pen kernel: [614073.132597]  schedule+0x42/0xb0
Dec 24 12:15:11 pen kernel: [614073.132600]  schedule_timeout+0x8a/0x160
Dec 24 12:15:11 pen kernel: [614073.132603]  ? _cond_resched+0x19/0x30
Dec 24 12:15:11 pen kernel: [614073.132606]  ? __next_timer_interrupt+0xe0/0xe0
Dec 24 12:15:11 pen kernel: [614073.132609]  io_schedule_timeout+0x1e/0x50
Dec 24 12:15:11 pen kernel: [614073.132617]  __cv_timedwait_common+0x137/0x160 [spl]
Dec 24 12:15:11 pen kernel: [614073.132620]  ? wait_woken+0x80/0x80
Dec 24 12:15:11 pen kernel: [614073.132625]  __cv_timedwait_io+0x19/0x20 [spl]
Dec 24 12:15:11 pen kernel: [614073.132716]  zio_wait+0x137/0x280 [zfs]
Dec 24 12:15:11 pen kernel: [614073.132765]  dbuf_read+0x2a0/0x580 [zfs]
Dec 24 12:15:11 pen kernel: [614073.132816]  ? dmu_buf_hold_array_by_dnode+0x192/0x510 [zfs]
Dec 24 12:15:11 pen kernel: [614073.132865]  dmu_buf_will_dirty_impl+0xb6/0x170 [zfs]
Dec 24 12:15:11 pen kernel: [614073.132913]  dmu_buf_will_dirty+0x16/0x20 [zfs]
Dec 24 12:15:11 pen kernel: [614073.132962]  dmu_write_impl+0x42/0xd0 [zfs]
Dec 24 12:15:11 pen kernel: [614073.133012]  dmu_write.part.0+0x65/0xc0 [zfs]
Dec 24 12:15:11 pen kernel: [614073.133061]  dmu_write+0x14/0x20 [zfs]
Dec 24 12:15:11 pen kernel: [614073.133132]  spa_history_write+0x194/0x1e0 [zfs]
Dec 24 12:15:11 pen kernel: [614073.133203]  spa_history_log_sync+0x18f/0x7d0 [zfs]
Dec 24 12:15:11 pen kernel: [614073.133273]  log_internal+0xfb/0x130 [zfs]
Dec 24 12:15:11 pen kernel: [614073.133342]  spa_history_log_internal+0x75/0x110 [zfs]
Dec 24 12:15:11 pen kernel: [614073.133406]  ? dsl_scan_sync_state+0xf5/0x320 [zfs]
Dec 24 12:15:11 pen kernel: [614073.133468]  dsl_scan_setup_sync+0x200/0x3a0 [zfs]
Dec 24 12:15:11 pen kernel: [614073.133531]  dsl_sync_task_sync+0xb6/0x100 [zfs]
Dec 24 12:15:11 pen kernel: [614073.133591]  dsl_pool_sync+0x3d6/0x4f0 [zfs]
Dec 24 12:15:11 pen kernel: [614073.133660]  spa_sync+0x562/0xff0 [zfs]
Dec 24 12:15:11 pen kernel: [614073.133663]  ? mutex_lock+0x13/0x40
Dec 24 12:15:11 pen kernel: [614073.133735]  ? spa_txg_history_init_io+0x106/0x110 [zfs]
Dec 24 12:15:11 pen kernel: [614073.133806]  txg_sync_thread+0x2c6/0x460 [zfs]
Dec 24 12:15:11 pen kernel: [614073.133877]  ? txg_thread_exit.isra.0+0x60/0x60 [zfs]
Dec 24 12:15:11 pen kernel: [614073.133884]  thread_generic_wrapper+0x79/0x90 [spl]
Dec 24 12:15:11 pen kernel: [614073.133888]  kthread+0x104/0x140
Dec 24 12:15:11 pen kernel: [614073.133894]  ? __thread_exit+0x20/0x20 [spl]
Dec 24 12:15:11 pen kernel: [614073.133896]  ? kthread_park+0x90/0x90
Dec 24 12:15:11 pen kernel: [614073.133899]  ret_from_fork+0x35/0x40
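For reference, when a pool gets suspended like this, the usual first steps (generic ZFS commands, not specific to these scripts) are:

zpool status -v bpool     # shows the suspended state and any device errors
zpool clear bpool         # attempt to clear the errors and resume I/O on the pool
dmesg | tail -n 50        # check whether the underlying disks reported I/O errors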

`Ubuntu 22 LTS` script fails with `bullseye-backports is invalid`

Hello.

The Ubuntu 22 script fails during install with the following:

Setting up python3-gi (3.38.0-2) ...
Setting up packagekit (1.2.2-2) ...
Created symlink /etc/systemd/user/sockets.target.wants/pk-debconf-helper.socket → /usr/lib/systemd/user/pk-debconf-helper.socket.
Setting up packagekit-tools (1.2.2-2) ...
Setting up software-properties-common (0.96.20.2-2.1) ...
Processing triggers for libc-bin (2.31-13+deb11u6) ...
Processing triggers for man-db (2.9.4-2) ...
Processing triggers for dbus (1.12.24-0+deb11u1) ...
gpg: key 871920D1991BC93C: public key "Ubuntu Archive Automatic Signing Key (2018) <[email protected]>" imported
gpg: Total number processed: 1
gpg:               imported: 1
Reading package lists... Done
E: The value 'bullseye-backports' is invalid for APT::Default-Release as such a release is not available in the sources

Ubuntu 20 script is okay.

Error with zfs packages during installation

I got the same error that was already mentioned in a previous merge. Even in experimental ZFS mode I still get the same error. I also tried it with the Debian 10 script and still got the same error. I would really appreciate some help.

OS: Hetzner rescue Linux
Script: Debian 11
Hardware: Root Server
Options: Experimental ZFS, encrypted

Warning: Unable to find an initial ram disk that I know how to handle.
Will not try to make an initrd.

DKMS: install completed.
.
Setting up libzpool5linux (2.1.6-0york1~20.04) ...
Setting up linux-headers-amd64 (5.10.162-1) ...

dpkg: dependency problems prevent configuration of zfs-zed:
zfs-zed depends on zfs-modules | zfs-dkms; however:
Package zfs-modules is not installed.
Package zfs-dkms which provides zfs-modules is not configured yet.
Package zfs-dkms is not configured yet.

dpkg: error processing package zfs-zed (--configure):
dependency problems - leaving unconfigured
Setting up zfsutils-linux (2.1.6-0york1~20.04) ...
modprobe: FATAL: Module zfs not found in directory /lib/modules/6.2.1
Created symlink /etc/systemd/system/zfs-import.target.wants/zfs-import-cache.service → /lib/systemd/system/zfs-import-cache.service.
Created symlink /etc/systemd/system/zfs.target.wants/zfs-import.target → /lib/systemd/system/zfs-import.target.
Created symlink /etc/systemd/system/zfs-mount.service.wants/zfs-load-module.service → /lib/systemd/system/zfs-load-module.service.
Created symlink /etc/systemd/system/zfs.target.wants/zfs-load-module.service → /lib/systemd/system/zfs-load-module.service.
Created symlink /etc/systemd/system/zfs.target.wants/zfs-mount.service → /lib/systemd/system/zfs-mount.service.
Created symlink /etc/systemd/system/zfs.target.wants/zfs-share.service → /lib/systemd/system/zfs-share.service.
Created symlink /etc/systemd/system/zfs-volumes.target.wants/zfs-volume-wait.service → /lib/systemd/system/zfs-volume-wait.service.
Created symlink /etc/systemd/system/zfs.target.wants/zfs-volumes.target → /lib/systemd/system/zfs-volumes.target.
Created symlink /etc/systemd/system/multi-user.target.wants/zfs.target → /lib/systemd/system/zfs.target.
zfs-import-scan.service is a disabled or a static unit, not starting it.
Processing triggers for libc-bin (2.31-13+deb11u5) ...
Processing triggers for man-db (2.9.4-2) ...
Processing triggers for initramfs-tools (0.140) ...
Errors were encountered while processing:
zfs-dkms
zfs-zed
E: Sub-process /usr/bin/dpkg returned an error code (1)

Originally posted by @cyni0s in #15 (comment)

Can't build on hetzner Cloud

Hello,

I think Hetzner Cloud updated the rescue system. I can't install ZFS in the rescue system.

===========remove unused kernels in rescue system=========
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages will be REMOVED:
  linux-headers-6.1.4*
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
After this operation, 51.4 MB disk space will be freed.
(Reading database ... 63642 files and directories currently installed.)
Removing linux-headers-6.1.4 (6.1.4-1) ...
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages will be REMOVED:
  linux-image-6.1.4*
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
After this operation, 82.0 MB disk space will be freed.
(Reading database ... 53835 files and directories currently installed.)
Removing linux-image-6.1.4 (6.1.4-1) ...
(Reading database ... 52540 files and directories currently installed.)
Purging configuration files for linux-image-6.1.4 (6.1.4-1) ...
dpkg: warning: while removing linux-image-6.1.4, directory '/lib/modules/6.1.4' not empty so not removed
======= installing zfs on rescue system ==========
.....
Setting up dctrl-tools (2.24-3+b1) ...
Setting up libzfs4linux (2.1.6-0york1~20.04) ...
Setting up dkms (2.8.4-3) ...
Setting up zfs-dkms (2.1.6-0york1~20.04) ...
Loading new zfs-2.1.6 DKMS files...
Building for 6.2.1
Building initial module for 6.2.1
configure: error: 
	*** None of the expected "iops->get_acl()" interfaces were detected.
	*** This may be because your kernel version is newer than what is
	*** supported, or you are using a patched custom kernel with
	*** incompatible modifications.
	***
	*** ZFS Version: zfs-2.1.6-0york1~20.04
	*** Compatible Kernels: 3.10 - 5.19
	
Error! Bad return status for module build on kernel: 6.2.1 (x86_64)
Consult /var/lib/dkms/zfs/2.1.6/build/make.log for more information.
dpkg: error processing package zfs-dkms (--configure):
 installed zfs-dkms package post-installation script subprocess returned error exit status 10
Setting up linux-headers-5.10.0-21-amd64 (5.10.162-1) ...
/etc/kernel/header_postinst.d/dkms:
dkms: running auto installation service for kernel 5.10.0-21-amd64:
Kernel preparation unnecessary for this kernel.  Skipping...

How can I fix this?

Cheers,
Michael
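The failure is the version skew visible in the log: this zfs-dkms build only supports kernels up to 5.19, while the rescue system now boots 6.2.1. A quick way to confirm the mismatch and check whether a newer zfs-dkms is available (generic commands, not taken from the script):

uname -r                      # running rescue kernel (6.2.1 here)
dkms status                   # per-kernel build state of the zfs module
apt-cache policy zfs-dkms     # which zfs-dkms versions the configured repositories offer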

Not booting on uefi bios

I have a new dedicated root server with 2x NVMe and 2x HDD.
I can properly set up RAID1 on the 2x NVMe using the installimage script.
But it doesn't work with the scripts from this repo: I have tried Ubuntu 20, 22 and Debian 11 and get the same result: the script finishes successfully, but the machine does not start up after the reboot.

After getting a KVM console, I see that the BIOS automatically opens after the reboot. More importantly, I see no entries in the boot order section. I have a UEFI BIOS.
On the other hand, when I use installimage, I see one entry in the BIOS boot section: ubuntu.

No longer working: can't build zfs-dkms for 5.13.1

Hiya,

See the following logs from a fresh system. ZFS can't be built against 5.13.1. Not sure what the fix is:

Building for 5.13.1
Building initial module for 5.13.1
configure: error:
	*** None of the expected "capability" interfaces were detected.
	*** This may be because your kernel version is newer than what is
	*** supported, or you are using a patched custom kernel with
	*** incompatible modifications.
	***
	*** ZFS Version: zfs-2.0.3-8~bpo10+1
	*** Compatible Kernels: 3.10 - 5.10

Error! Bad return status for module build on kernel: 5.13.1 (x86_64)

[debian-log.txt](https://github.com/terem42/zfs-hetzner-vm/files/6810761/debian-log.txt)

Default bpool size too small

Kernel updates cause:

update-initramfs: Generating /boot/initrd.img-5.4.0-100-generic
Error 24 : Write error : cannot write compressed block

Q: decrypt rpool

I successfully installed Ubuntu 22.04 with the script while encrypting rpool.
However, the server boots into busybox and I'm not sure how to decrypt the root pool. Any hints?
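A sequence often suggested for this situation at the (initramfs) busybox prompt (untested here; rpool is the default pool name seen in other logs on this page, so substitute yours if different):

zpool import -N rpool    # import the pool without mounting, if it is not already imported
zfs load-key rpool       # enter the encryption passphrase (or: zfs load-key -a)
exit                     # leave busybox so the normal boot sequence continues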

Valid hostnames deemed invalid

When entering my desired hostname, I continually get told it's invalid unless I use a string of alphabetic characters similar to the default. The regex doesn't appear to match all valid hostnames.
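For comparison, a hostname check that follows the RFC 1123 label rules (letters, digits and hyphens, 1-63 characters, no leading or trailing hyphen) could look like this; the variable name is only illustrative and not necessarily what the script uses:

# illustrative validation, not the script's actual check
v_hostname="my-host-01"
if [[ $v_hostname =~ ^[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?$ ]]; then
  echo "hostname looks valid"
else
  echo "hostname rejected"
fi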

chroot_execute: command not found (Debian 11 and Debian 12)

I booted into Hetzner's rescue mode, ran the openzfs_install command (in a different iteration I tried zfs), then apt update && apt upgrade and finally ran this script. I did that for both Debian 11 and Debian 12, and I got the chroot_execute error in both cases. Unfortunately the script proceeds despite that failure, and finally goes on to reboot.

root@rescue / #
root@rescue / # chmod 755 "$c_zfs_mount_dir/usr/share/initramfs-tools/scripts/init-premount/static-route"
root@rescue / #
root@rescue / # chmod 755 "$c_zfs_mount_dir/etc/network/interfaces"
root@rescue / #
root@rescue / # echo "======= update initramfs =========="
======= update initramfs ==========
root@rescue / # chroot_execute "update-initramfs -u -k all"
bash: chroot_execute: command not found
root@rescue / #
root@rescue / # echo "======= update grub =========="
======= update grub ==========
root@rescue / # chroot_execute "update-grub"
bash: chroot_execute: command not found
root@rescue / #
root@rescue / # echo "======= setting up zed =========="
======= setting up zed ==========
root@rescue / # if [[ $v_zfs_experimental == "1" ]]; then
>   chroot_execute "zfs set canmount=noauto $v_rpool_name"
> else
>   initial_load_debian_zed_cache
> fi
bash: initial_load_debian_zed_cache: command not found
root@rescue / #
root@rescue / # echo "======= setting mountpoints =========="
======= setting mountpoints ==========
root@rescue / # chroot_execute "zfs set mountpoint=legacy $v_bpool_name/BOOT/debian"
bash: chroot_execute: command not found
root@rescue / # chroot_execute "echo $v_bpool_name/BOOT/debian /boot zfs nodev,relatime,x-systemd.requires=zfs-mount.service,x-systemd.device-timeout=10 0 0 > /etc/fstab"
bash: chroot_execute: command not found
root@rescue / # chroot_execute "zfs set mountpoint=legacy $v_rpool_name/var/log"
bash: chroot_execute: command not found
root@rescue / # chroot_execute "echo $v_rpool_name/var/log /var/log zfs nodev,relatime 0 0 >> /etc/fstab"
bash: chroot_execute: command not found
root@rescue / # chroot_execute "zfs set mountpoint=legacy $v_rpool_name/var/spool"
bash: chroot_execute: command not found
root@rescue / # chroot_execute "echo $v_rpool_name/var/spool /var/spool zfs nodev,relatime 0 0 >> /etc/fstab"
bash: chroot_execute: command not found
root@rescue / # chroot_execute "zfs set mountpoint=legacy $v_rpool_name/var/tmp"
bash: chroot_execute: command not found
root@rescue / # chroot_execute "echo $v_rpool_name/var/tmp /var/tmp zfs nodev,relatime 0 0 >> /etc/fstab"
bash: chroot_execute: command not found
root@rescue / # chroot_execute "zfs set mountpoint=legacy $v_rpool_name/tmp"
bash: chroot_execute: command not found
root@rescue / # chroot_execute "echo $v_rpool_name/tmp /tmp zfs nodev,relatime 0 0 >> /etc/fstab"
bash: chroot_execute: command not found
root@rescue / #
root@rescue / # echo "========= add swap, if defined"
========= add swap, if defined
root@rescue / # if [[ $v_swap_size -gt 0 ]]; then
>   chroot_execute "echo /dev/zvol/$v_rpool_name/swap none swap discard 0 0 >> /etc/fstab"
> fi
root@rescue / #
root@rescue / # chroot_execute "echo RESUME=none > /etc/initramfs-tools/conf.d/resume"
bash: chroot_execute: command not found
root@rescue / #
root@rescue / # echo "======= unmounting filesystems and zfs pools =========="
======= unmounting filesystems and zfs pools ==========
root@rescue / # unmount_and_export_fs
bash: unmount_and_export_fs: command not found
root@rescue / #
root@rescue / # echo "======== setup complete, rebooting ==============="
======== setup complete, rebooting ===============
root@rescue / # reboot

Broadcast message from root@rescue on pts/1 (Sat 2024-03-30 22:19:34 CET):

The system will reboot now!

root@rescue / # exit

Server unreachable after using debian 11 script with raidz1

I'm trying to install a Hetzner server with 10x10TB disks using raidz1 ZFS. I figured out how to modify the script to do this, and the script runs without error, but after the server reboots it is unreachable via SSH and I also can't ping it. Any suggestions on how I could fix this? Thanks.

ERROR 404: Not Found.

Processing triggers for dbus (1.12.20-0+deb10u1) ...
======= installing zfs packages ==========
--2020-12-22 00:23:30-- https://andrey42.github.io/zfs-debian/apt_pub.gpg
Resolving andrey42.github.io (andrey42.github.io)... 185.199.111.153, 185.199.108.153, 185.199.109.153, ...
Connecting to andrey42.github.io (andrey42.github.io)|185.199.111.153|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2020-12-22 00:23:30 ERROR 404: Not Found.

gpg: no valid OpenPGP data found.

no connection after ubuntu script

======= setup OpenSSH ==========
Creating SSH2 RSA key; this may take some time ...
2048 SHA256:5S2VVntYfZA+cVa1NPj/NbFm2lszVkrJXhb06fSdV68 root@rescue (RSA)
Creating SSH2 ECDSA key; this may take some time ...
256 SHA256:XExQhAQ2sIZ4sDFgdgaWEAJ7EEtauQyfrTfZfxLpBEc root@rescue (ECDSA)
Creating SSH2 ED25519 key; this may take some time ...
256 SHA256:nTPTa82V/v+PAZmAAkPYU/0tjBT9UfQ2XvZltr/DbDs root@rescue (ED25519)
Running in chroot, ignoring request.
======= set root password ==========

ssh: connect to host xxxx port 22: Connection refused

default settings break root on zfs after kernel updates

See openzfs/zfs#10355. REMAKE_INITRD='yes' needs to be added to /etc/dkms/zfs.conf to prevent breakage upon kernel updates, which otherwise makes the system impossible to boot; see the example below.
It took a lot of time to debug and fix, which was annoying.
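Concretely, the fix described above amounts to a one-line DKMS configuration change (create the file if it does not exist):

# make DKMS regenerate the initramfs whenever the zfs module is rebuilt for a new kernel
echo "REMAKE_INITRD='yes'" >> /etc/dkms/zfs.conf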

Also, the GRUB changes make it impossible to boot into Hetzner's rescue mode, requiring a support ticket to change the boot order and make it work again. Please fix this.

Unrelated, but there's a stray 18.04 variable in the script:

chroot_execute "DEBIAN_FRONTEND=noninteractive apt install --yes linux-headers${v_kernel_variant}-hwe-18.04 linux-image${v_kernel_variant}-hwe-18.04"

Add support for Debian 12

Debian 12 is already available in Hetzner's rescue system (although it is not yet available through their Robot web interface as a one-click installer, nor during the server ordering process).

It would be great to have Debian 12 supported.
