kvdo's Introduction

kvdo

The kernel module component of VDO which provides pools of deduplicated and/or compressed block storage.

Background

VDO is a device-mapper target that provides inline block-level deduplication, compression, and thin provisioning capabilities for primary storage. VDO is managed through LVM and can be integrated into any existing storage stack.
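
For illustration, an LVM-managed VDO volume can be created with lvcreate; a minimal sketch, in which the volume group name (vg0), pool name (vdopool0), logical volume name (vdo0), and sizes are placeholders:

    # Back a 10T thin logical volume with a 1T deduplicated/compressed pool
    lvcreate --type vdo --name vdo0 --size 1T --virtualsize 10T vg0/vdopool0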

Deduplication is a technique for reducing the consumption of storage resources by eliminating multiple copies of duplicate blocks. Compression takes the individual unique blocks and shrinks them with coding algorithms; these reduced blocks are then efficiently packed together into physical blocks. Thin provisioning manages the mapping from logical block addresses presented by VDO to where the data has actually been stored, and also eliminates any blocks of all zeroes.

With deduplication, instead of writing the same data more than once, each duplicate block is detected and recorded as a reference to the original block. VDO maintains a mapping from logical block addresses (presented to the storage layer above VDO) to physical block addresses on the storage layer under VDO. After deduplication, multiple logical block addresses may be mapped to the same physical block address; these are called shared blocks and are reference-counted by the software.

With VDO's compression, blocks are compressed with the fast LZ4 algorithm, and collected together where possible so that multiple compressed blocks fit within a single 4 KB block on the underlying storage. Each logical block address is mapped to a physical block address and an index within it for the desired compressed data. All compressed blocks are individually reference-counted for correctness.

Block sharing and block compression are invisible to applications using the storage, which read and write blocks as they would if VDO were not present. When a shared block is overwritten, a new physical block is allocated for storing the new block data to ensure that other logical block addresses that are mapped to the shared physical block are not modified.

This repository includes the kvdo module, which can be built and loaded as an out-of-tree kernel module. This module implements fine-grained storage virtualization, thin provisioning, block sharing, compression, and memory-efficient duplicate identification.
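
Once the module has been built and loaded (see Building below), the presence of the device-mapper target can be checked with standard tools; a minimal sketch (output varies by version):

    modinfo kvdo | head -n 3
    dmsetup targets | grep vdo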

History

VDO was originally developed by Permabit Technology Corp. as a proprietary set of kernel modules and userspace tools. This software and technology have been acquired by Red Hat and relicensed under the GPL (v2 or later). The kernel module has been merged into the upstream Linux kernel as the dm-vdo device-mapper target. The source for this module can be found in drivers/md/dm-vdo/.

Documentation

Releases

This repository is no longer being updated for newer kernels.

The most recent version of this project can be found in the upstream Linux kernel, as the dm-vdo module. Each existing branch of this repository is intended to work with a specific release of Enterprise Linux (Red Hat Enterprise Linux, CentOS, etc.).

Version   Intended Enterprise Linux Release
6.1.x.x   EL7 (3.10.0-*.el7)
6.2.x.x   EL8 (4.18.0-*.el8)
8.2.x.x   EL9 (5.14.0-*.el9)
  • Pre-built versions with the required modifications for older Fedora releases can be found here and can be used by running dnf copr enable rhawalsh/dm-vdo.
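
For example, after enabling the copr, the pre-built packages can typically be installed with dnf; the package names below (kmod-kvdo and vdo) are the ones used by those builds, but verify with dnf search:

    dnf copr enable rhawalsh/dm-vdo
    dnf install kmod-kvdo vdo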

Building

In order to build the kernel modules, invoke the following command from the top directory of this tree:

    make -C /usr/src/kernels/`uname -r` M=`pwd`

To install the compiled module:

    make -C /usr/src/kernels/`uname -r` M=`pwd` modules_install
  • There is a dkms.conf template, referenced by the kvdo.spec file, which lets DKMS take care of rebuilding and installing the kernel module any time a new kernel is booted (a sketch appears below).

  • Patched sources that work with certain older upstream kernels can be found here.
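
Once the modules have been installed, they can be loaded in the usual way; a minimal sketch (kvdo depends on uds, so modprobe pulls both in once depmod has run):

    depmod -a
    modprobe kvdo

As for the dkms.conf template mentioned above, a rough sketch of what such a file might contain follows; the version string, module locations, and make command are illustrative placeholders, and the authoritative template is the one referenced by kvdo.spec:

    PACKAGE_NAME="kvdo"
    PACKAGE_VERSION="8.2.0.0"   # placeholder; use the version of the checked-out branch
    AUTOINSTALL="yes"
    MAKE[0]="make -C /usr/src/kernels/${kernelver} M=${dkms_tree}/${PACKAGE_NAME}/${PACKAGE_VERSION}/build"
    BUILT_MODULE_NAME[0]="kvdo"
    BUILT_MODULE_LOCATION[0]="vdo"
    DEST_MODULE_LOCATION[0]="/kernel/drivers/block"
    BUILT_MODULE_NAME[1]="uds"
    BUILT_MODULE_LOCATION[1]="uds"
    DEST_MODULE_LOCATION[1]="/kernel/drivers/block"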

Communication Channels and Contributions

Community feedback, participation, and patches are welcome at the vdo-devel repository, which is the parent of this one. This repository does not accept pull requests.

Licensing

GPL v2.0 or later. All contributions retain ownership by their original author, but must also be licensed under the GPL 2.0 or later to be merged.

kvdo's People

Contributors

corwin, lorelei-sakai, rhawalsh

kvdo's Issues

High tail latency on top of VDO

In kvdo 6.2.5.41,
it seems that when I run kvdo on top of an SSD, the tail latency sometimes becomes very high compared to the raw I/O latency.
Is there an acknowledgement issue, or something else (maybe lock contention)?

If there is a recent version that resolves this problem, please let me know.

VDO is not compressing some (easily compressible) files.

I have a strange problem. VDO doesn't want to compress some of my CSV files
(which are easily compressible, I think). The command-line lz4 tool has no problem with them.

# du /1/test2.csv 
102400  /1/test2.csv

# lz4 -1 /1/test2.csv /1/test2.csv.lz4
Compressed 104857600 bytes into 42259380 bytes ==> 40.30%                      

# vdo status -n vdobackup|grep -i Compression
    Compression: enabled

# vdostats --verbose vdobackup|egrep 'used|compress'
  data blocks used                    : 13056414
  overhead blocks used                : 1292638
  logical blocks used                 : 17262451
  1K-blocks used                      : 57396208
  used percent                        : 27
  compressed fragments written        : 773855
  compressed blocks written           : 305048
  compressed fragments in packer      : 0
  KVDO module bytes used              : 1016029328
  KVDO module peak bytes used         : 1016031648
  KVDO module bios used               : 74572

# cp /1/test2.csv /mnt/backup/1/; sync

# vdostats --verbose vdobackup|egrep 'used|compress'
  data blocks used                    : 13082012
  overhead blocks used                : 1292670
  logical blocks used                 : 17288052
  1K-blocks used                      : 57498728
  used percent                        : 27
  compressed fragments written        : 773855
  compressed blocks written           : 305048
  compressed fragments in packer      : 0
  KVDO module bytes used              : 1016029328
  KVDO module peak bytes used         : 1016031648
  KVDO module bios used               : 74572

As you can see:
"data blocks used" increased by 25598;
"compressed fragments/blocks" stayed the same.

There was no other activity at the same time, and this is not the only file with this problem.
I have a few GB of files with similar content and none of them is compressed on the vdo volume.
Compression on other types of files works fine.

My VDO is created on top of a standard disk partition. No LVM, no encryption.
kvdo is from RHEL/CentOS:

# uname -a
Linux .... 3.10.0-957.10.1.el7.x86_64 #1 SMP Mon Mar 18 15:06:45 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
# rpm -qa|grep -i vdo
vdo-6.1.1.125-3.el7.x86_64
kmod-kvdo-6.1.1.125-5.el7.x86_64

Test file attached to the issue. test2.csv.gz

Any ideas?

Can I set Block Size to 8K?

I see that the VDO block size is 4K:

[root@hdz1 hdzhang]# cat /sys/block/dm-2/queue/physical_block_size
4096

Can I set the block size to 8K?

Kernel 4.17.14 and KVDO 6.1.0.181, failed build

@corwin, do you have any ideas for fixing this issue?
CentOS 7.5, latest kernel-ml and kernel-ml-devel from ELRepo.

make -C /usr/src/kernels/`uname -r` M=`pwd`
make: Entering directory `/usr/src/kernels/4.17.14-1.el7.elrepo.x86_64'
  AR      /home/kvdo-6.1.0.181/uds/built-in.a
  CC [M]  /home/kvdo-6.1.0.181/uds/stringLinuxKernel.o
  CC [M]  /home/kvdo-6.1.0.181/uds/threadSemaphoreLinuxKernel.o
/home/kvdo-6.1.0.181/uds/threadSemaphoreLinuxKernel.c: In function ‘acquireSemaphore’:
/home/kvdo-6.1.0.181/uds/threadSemaphoreLinuxKernel.c:99:7: error: implicit declaration of function ‘__set_task_state’ [-Werror=implicit-function-declaration]
       __set_task_state(task, TASK_INTERRUPTIBLE);
       ^
cc1: all warnings being treated as errors
make[2]: *** [/home/kvdo-6.1.0.181/uds/threadSemaphoreLinuxKernel.o] Error 1
make[1]: *** [/home/kvdo-6.1.0.181/uds] Error 2
make: *** [_module_/home/kvdo-6.1.0.181] Error 2
make: Leaving directory `/usr/src/kernels/4.17.14-1.el7.elrepo.x86_64'

vdo status command may trigger kernel panic

Description

Item     Version
OS       rhel-7.8
kernel   3.10.0-1127.19.1.el7.x86_64
kvdo     6.1.1.8
vdo      6.1.1.125

While using fio to put pressure on VDO, I added a monitoring script that obtained information through "vdo status" every minute. After a period of time, the system panicked.
Later I found a commit record showing the bug was fixed in version 6.1.1.8 ("Fixed General Protection Fault unlocking the UDS callback mutex"), but the problem persisted after I upgraded to 6.1.1.8.

[84607.724514] BUG: unable to handle kernel paging request at 000000000001b8f0
[84607.726005] IP: [<ffffffff9d517fd0>] native_queued_spin_lock_slowpath+0x110/0x200
[84607.727588] PGD 1ecad06067 PUD 1c63a5c067 PMD 0
[84607.728867] Oops: 0002 [#1] SMP
[84607.729571] Modules linked in: mpt3sas mptctl mptbase nvmet_rdma(OE) nvmet(OE) dell_rbu scsi_transport_iscsi drbg ansi_cprng bonding usdm_drv(OE) rdma_ucm(OE) ib_ucm(OE) rdma_cm(OE) iw_cm(OE) ib_ipoib(OE) ib_cm(OE) ib_umad(OE) mlx5_fpga_tools(OE) kvdo(OE) sha512_ssse3 sha512_generic qat_api(OE) uds(E) mlx4_ib(OE) mlx4_en(OE) mlx4_core(OE) iTCO_wdt iTCO_vendor_support dell_smbios dell_wmi_descriptor dcdbas cas_cache(OE) cas_disk(OE) skx_edac intel_powerclamp coretemp intel_rapl iosf_mbi kvm irqbypass crc32_pclmul ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd pcspkr sg i2c_i801 lpc_ich mei_me mei wmi ipmi_si ipmi_devintf ipmi_msghandler acpi_power_meter acpi_pad knem(OE) binfmt_misc ip_tables xfs libcrc32c mlx5_ib(OE) ib_uverbs(OE) ib_core(OE) mgag200 drm_kms_helper syscopyarea
[84607.736604]  sysfillrect sysimgblt fb_sys_fops ttm drm crc32c_intel mlx5_core(OE) mlxfw(OE) vfio_mdev(OE) igb vfio_iommu_type1 qat_c62x(OE) vfio intel_qat(OE) authenc mdev(OE) uio devlink nvme(OE) nvme_core(OE) mlx_compat(OE) ptp pps_core dca i2c_algo_bit drm_panel_orientation_quirks nfit libnvdimm sd_mod crc_t10dif crct10dif_generic crct10dif_pclmul crct10dif_common ahci libahci libata mpt2sas raid_class scsi_transport_sas megaraid_sas dm_mirror dm_region_hash dm_log dm_mod
[84607.743088] CPU: 1 PID: 60128 Comm: kvdo69:callback Kdump: loaded Tainted: G           OE  ------------   3.10.0-1127.19.1.el7.x86_64 #1
[84607.745664] Hardware name: Dell Inc. PowerEdge R740xd/06WXJT, BIOS 2.11.2 004/21/2021
[84607.747010] task: ffff9e174048c1c0 ti: ffff9e1703648000 task.ti: ffff9e1703648000
[84607.748373] RIP: 0010:[<ffffffff9d517fd0>]  [<ffffffff9d517fd0>] native_queued_spin_lock_slowpath+0x110/0x200
[84607.749576] RSP: 0018:ffff9e170364bd78  EFLAGS: 00010206
[84607.750785] RAX: 0000000000001fff RBX: ffff9e11e6cebba0 RCX: 0000000000090000
[84607.751774] RDX: 000000000001b8f0 RSI: 00000000ffff9e12 RDI: ffff9e11e6cebb9c
[84607.752726] RBP: ffff9e170364bd78 R08: ffff9e17eba5b8c0 R09: 0000000000000000
[84607.753713] R10: 000000000406c000 R11: 0000000000000800 R12: ffff9e11e6cebb9c
[84607.754717] R13: ffff9e170364bda8 R14: 0000000000000000 R15: 0000000000000000
[84607.755895] FS:  0000000000000000(0000) GS:ffff9e17eba40000(0000) knlGS:0000000000000000
[84607.757233] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[84607.758664] CR2: 000000000001b8f0 CR3: 0000001c8e3aa000 CR4: 00000000007607e0
[84607.759984] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[84607.761290] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[84607.762548] PKRU: 00000000
[84607.763859] Call Trace:
[84607.765226]  [<ffffffff9db7a024>] queued_spin_lock_slowpath+0xb/0xf
[84607.766552]  [<ffffffff9db886d0>] _raw_spin_lock+0x20/0x30
[84607.767547]  [<ffffffff9db84b4a>] __mutex_unlock_slowpath+0x4a/0x90
[84607.768943]  [<ffffffff9db83fcb>] mutex_unlock+0x1b/0x20
[84607.770367]  [<ffffffffc0cf360c>] enterCallbackStage+0x6c/0xc0 [uds]
[84607.771631]  [<ffffffffc0cea908>] handleCallbacks+0x108/0x120 [uds]
[84607.773151]  [<ffffffffc0cfb6a7>] ? eventCountWait+0xb7/0xe0 [uds]
[84607.774639]  [<ffffffffc0cde12a>] ? pollQueues+0x3a/0x50 [uds]
[84607.775717]  [<ffffffffc0cde29c>] requestQueueWorker+0x15c/0x1b0 [uds]
[84607.776639]  [<ffffffffc0cdbcb0>] ? lookupThread+0x60/0x60 [uds]
[84607.777539]  [<ffffffffc0cdbd4a>] threadStarter+0x9a/0xd0 [uds]
[84607.778465]  [<ffffffff9d4c6691>] kthread+0xd1/0xe0
[84607.779396]  [<ffffffff9d4c65c0>] ? insert_kthread_work+0x40/0x40
[84607.780312]  [<ffffffff9db92d1d>] ret_from_fork_nospec_begin+0x7/0x21
[84607.781457]  [<ffffffff9d4c65c0>] ? insert_kthread_work+0x40/0x40
[84607.782783] Code: 87 47 02 c1 e0 10 45 31 c9 85 c0 74 44 48 89 c2 c1 e8 13 48 c1 ea 0d 48 98 83 e2 30 48 81 c2 c0 b8 01 00 48 03 14 c5 a0 10 15 9e <4c> 89 02 41 8b 40 08 85 c0 75 0f 0f 1f 44 00 00 f3 90 41 8b 40
[84607.785653] RIP  [<ffffffff9d517fd0>] native_queued_spin_lock_slowpath+0x110/0x200
[84607.786743]  RSP <ffff9e170364bd78>
[84607.788095] CR2: 000000000001b8f0

Reproducible

  1. Use fio (randwrite) to put pressure on the VDO volume.
  2. Run vdo status or cat /proc/vdo/$volname/kernel_stats once per second.
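
A rough reproduction sketch based on the steps above (device path, volume name, and runtime are placeholders):

    # Step 1: random-write pressure on the VDO volume
    fio --name=vdo-stress --filename=/dev/mapper/vdo0 --rw=randwrite --bs=4k \
        --ioengine=libaio --iodepth=32 --numjobs=4 --direct=1 \
        --time_based --runtime=3600 &

    # Step 2: poll the statistics once per second while fio runs
    while true; do
        vdo status > /dev/null
        cat /proc/vdo/vdo0/kernel_stats > /dev/null
        sleep 1
    done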

Make failing to compile kvdo & uds modules on Debian

Hi There,

I am trying to compile the kvdo and uds modules on Debian 11.2 running kernel "5.10.0-12-amd64"

I haven't updated in a wee while and now it seems that I can't compile the modules anymore under the latest Debian version.

Steps to reproduce:

  1. Fully up to date - fresh Debian install
  2. Install all pre-reqs like linux-headers, make etc...
  3. After a git clone, under the kvdo directory I run "make -C /usr/src/linux-headers-`uname -r` M=`pwd`" and get the following error:
    CC [M] /root/modules/vdo/kvdo/vdo/allocatingVIO.o
    CC [M] /root/modules/vdo/kvdo/vdo/allocationSelector.o
    CC [M] /root/modules/vdo/kvdo/vdo/batchProcessor.o
    CC [M] /root/modules/vdo/kvdo/vdo/bio.o
    /root/modules/vdo/kvdo/vdo/bio.c: In function ‘vdo_bio_copy_data_in’:
    /root/modules/vdo/kvdo/vdo/bio.c:45:3: error: implicit declaration of function ‘memcpy_from_bvec’; did you mean ‘memcpy_fromio’? [-Werror=implicit-function-declaration]
    45 | memcpy_from_bvec(data_ptr, &biovec);
    | ^~~~~~~~~~~~~~~~
    | memcpy_fromio
    /root/modules/vdo/kvdo/vdo/bio.c: In function ‘vdo_bio_copy_data_out’:
    /root/modules/vdo/kvdo/vdo/bio.c:57:3: error: implicit declaration of function ‘memcpy_to_bvec’; did you mean ‘memcpy_toio’? [-Werror=implicit-function-declaration]
    57 | memcpy_to_bvec(&biovec, data_ptr);
    | ^~~~~~~~~~~~~~
    | memcpy_toio
    cc1: all warnings being treated as errors
    make[3]: *** [/usr/src/linux-headers-5.10.0-12-common/scripts/Makefile.build:285: /root/modules/vdo/kvdo/vdo/bio.o] Error 1
    make[2]: *** [/usr/src/linux-headers-5.10.0-12-common/scripts/Makefile.build:502: /root/modules/vdo/kvdo/vdo] Error 2
    make[1]: *** [/usr/src/linux-headers-5.10.0-12-common/Makefile:1846: /root/modules/vdo/kvdo] Error 2
    make: *** [/usr/src/linux-headers-5.10.0-12-common/Makefile:185: __sub-make] Error 2
    make: Leaving directory '/usr/src/linux-headers-5.10.0-12-amd64'
    [root@hawea]~/modules/vdo/kvdo

Not sure if I am doing something wrong? I followed my notes and history to retrace my steps, but I don't recall running into this issue on an earlier version of Debian.

Discards not working

OS: Debian 9
Kernel: 4.17.0-0.bpo.1-amd64
VDO: 6.2.0.187

Test Setup 1 (no VDO): disks->ZFS->ZVOL->ext4
Steps:

  • create ext4 on the zvol, mount with discard option
  • zfs list shows ~0G referenced
  • create 2G file
  • zfs list shows ~2.1G referenced
  • delete 2G file
  • zfs list shows ~0G referenced
    This is expected behavior.

Test Setup 2 (with VDO): disks->ZFS->ZVOL->VDO->ext4
Steps:

  • create ext4 on VDO, mount with discard option
  • zfs list shows ~0G referenced
  • create 2G file
  • zfs list shows ~2.1G referenced
  • delete 2G file
  • zfs list shows unchanged ~2.1G referenced

Looks like discard did not work. Let's discard manually:

  • fstrim on ext4
  • zfs list shows unchanged ~2.1G referenced
  • unmount ext4
  • blkdiscard on vdo
  • zfs list shows unchanged ~2.1G referenced

One more thing I noticed is that blkdiscard takes just about 10 seconds directly on zvol but minutes on VDO.
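
For reference, the steps above correspond roughly to the following commands (device paths, mountpoint, and file size are placeholders; blkdiscard destroys the data on the device):

    mkfs.ext4 /dev/mapper/vdo0
    mount -o discard /dev/mapper/vdo0 /mnt/test
    zfs list                                        # baseline referenced space
    dd if=/dev/urandom of=/mnt/test/big.bin bs=1M count=2048
    sync; zfs list                                  # ~2.1G referenced
    rm /mnt/test/big.bin; sync; zfs list            # stays at ~2.1G with VDO in the stack
    fstrim /mnt/test; zfs list                      # manual discard, still unchanged
    umount /mnt/test
    blkdiscard /dev/mapper/vdo0; zfs list           # whole-device discard, still unchanged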

bcache and vdo do not work together

When I create bcache over a VDO device and the VDO logical size is bigger than the original drive, I get this message:

[ 4966.723860] bcache: run_cache_set() invalidating existing data
[ 4966.728316] bcache: register_cache() registered cache device sdf
[ 4966.763060] bcache: bcache_device_init() nr_stripes too large or invalid: 2415919102 (start sector beyond end of disk?)
[ 4966.763101] bcache: register_bdev() error dm-0: cannot allocate memory
[ 4966.763166] bcache: bcache_device_free() (null) stopped

Is this a bug in VDO or in bcache?

Parallel write compression is inefficient

System information

Type                Version/Name
Distribution Name   Redhat-7.8
Kernel Version      3.10.0-1127.19.1.el7.x86_64
Architecture        x86_64
vdo Version         6.1.3.4
kmod-kvdo Version   6.1.3.7-5

Describe the problem you're observing

I used fio to test VDO's compression and found that the compression ratio was very low when multiple processes wrote in parallel, whether sequentially or randomly, while a single process was normal. What is the reason for this?

fio workload                        saving%
sequential (numjobs=1, iodepth=8)   66%
sequential (numjobs=4, iodepth=8)   less than 30%
random (numjobs=1, iodepth=8)       66%
random (numjobs=4, iodepth=8)       less than 43%

Describe how to reproduce the problem

  • sequential write fio
#cat fio.w
[global]
ioengine=libaio
direct=1
size=100%

numjobs=1 #1 or 4
iodepth=8

bs=1M
rw=write

scramble_buffers=1
buffer_compress_percentage=70
buffer_compress_chunk=4K

group_reporting
[job]
filename=/dev/mapper/vdo1
  • random write fio
#cat fio.rw
[global]
ioengine=libaio
direct=1
size=100%

numjobs=1 #1 or 4
iodepth=8

bs=4K
rw=randwrite

scramble_buffers=1
buffer_compress_percentage=70
buffer_compress_chunk=4K

group_reporting
[job]
filename=/dev/mapper/vdo2
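
For reference, the savings figures above can be checked after each run with vdostats; the device path matches the fio job file and the grep pattern is just a convenience:

    fio fio.w
    vdostats --verbose /dev/mapper/vdo1 | grep -E 'saving percent|compressed'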

kvdo-6.2.6.3 does not build against 5.17.4-200.fc35

I'm on Fedora 35 on x86_64 and I'm using kmod-kvdo-6.2.6.3-2.fc35 from copr:copr.fedorainfracloud.org:rhawalsh:dm-vdo .

Upgrading from kernel 5.16.20-200.fc35 to 5.17.4-200.fc35 results in build errors:

DKMS make.log for kvdo-6.2.6.3 for kernel 5.17.4-200.fc35.x86_64 (x86_64)
Thu 28 Apr 13:50:15 CEST 2022
make: Entering directory '/usr/src/kernels/5.17.4-200.fc35.x86_64'
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/bits.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/buffer.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/actionManager.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/bufferedReader.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/adminCompletion.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/bufferedWriter.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/adminState.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/cacheCounters.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/cachedChapterIndex.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/allocatingVIO.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/chapterIndex.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/allocationSelector.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/chapterWriter.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/config.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/blockAllocator.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/deltaIndex.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/blockMap.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/deltaMemory.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/blockMapPage.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/errors.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/blockMapRecovery.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/geometry.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/blockMapTree.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/hashUtils.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/completion.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/index.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/indexCheckpoint.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/compressedBlock.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/indexComponent.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/compressionState.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/constants.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/indexConfig.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/dataVIO.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/indexInternals.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/indexLayout.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/dirtyLists.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/extent.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/indexLayoutLinuxKernel.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/indexLayoutParser.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/fixedLayout.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/indexPageMap.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/flush.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/forest.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/hashLock.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/indexRouter.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/indexSession.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/hashZone.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/header.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/indexState.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/indexStateData.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/heap.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/intMap.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/indexVersion.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/lockCounter.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/indexZone.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/logicalZone.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/ioFactoryLinuxKernel.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/lz4.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/packer.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/loadType.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/partitionCopy.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/logger.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/pbnLock.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/loggerLinuxKernel.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/pbnLockPool.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/masterIndex005.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/masterIndex006.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/physicalLayer.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/physicalZone.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/pointerMap.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/masterIndexOps.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/memoryAlloc.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/memoryLinuxKernel.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/nonce.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/numeric.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/priorityTable.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/openChapter.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/openChapterZone.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/pageCache.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/readOnlyNotifier.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/permassert.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/permassertLinuxKernel.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/random.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/recordPage.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/readOnlyRebuild.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/request.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/requestQueueKernel.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/recoveryJournal.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/recoveryJournalBlock.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/searchList.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/recoveryUtils.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/sparseCache.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/refCounts.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/referenceCountRebuild.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/referenceOperation.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/stringLinuxKernel.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/slab.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/stringUtils.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/slabDepot.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/sysfs.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/slabJournal.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/threadCondVarLinuxKernel.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/slabJournalEraser.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/threadOnce.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/threadRegistry.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/threadsLinuxKernel.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/slabScrubber.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/slabSummary.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/timeUtils.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/udsMain.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/statusCodes.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/superBlock.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/udsModule.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/threadConfig.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/volume.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/volumeStore.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/trace.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/zone.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/upgrade.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/vdo.o
/var/lib/dkms/kvdo/6.2.6.3/build/uds/threadsLinuxKernel.c: In function ‘exitThread’:
/var/lib/dkms/kvdo/6.2.6.3/build/uds/threadsLinuxKernel.c:153:3: error: implicit declaration of function ‘complete_and_exit’ [-Werror=implicit-function-declaration]
  153 |   complete_and_exit(completion, 1);
      |   ^~~~~~~~~~~~~~~~~
/var/lib/dkms/kvdo/6.2.6.3/build/uds/threadsLinuxKernel.c:154:1: error: ‘noreturn’ function does return [-Werror]
  154 | }
      | ^
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/vdoDebug.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/murmur/MurmurHash3.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/vdoLayout.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/vdoLoad.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/util/eventCount.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/uds/util/funnelQueue.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/vdoPageCache.o
cc1: all warnings being treated as errors
make[2]: *** [scripts/Makefile.build:288: /var/lib/dkms/kvdo/6.2.6.3/build/uds/threadsLinuxKernel.o] Error 1
make[2]: *** Waiting for unfinished jobs....
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/vdoRecovery.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/vdoResize.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/vdoResizeLogical.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/vdoResume.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/vdoState.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/vdoSuspend.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/vio.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/vioPool.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/vioRead.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/vioWrite.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/volumeGeometry.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/base/waitQueue.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/batchProcessor.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/bio.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/bufferPool.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/dataKVIO.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/deadlockQueue.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/dedupeIndex.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/deviceConfig.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/deviceRegistry.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/dmvdo.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/dump.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/errors.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/histogram.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/instanceNumber.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/ioSubmitter.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/kernelLayer.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/kernelVDO.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/ktrace.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/kvdoFlush.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/kvio.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/limiter.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/logger.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/memoryUsage.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/poolSysfs.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/poolSysfsStats.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/statusProcfs.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/sysfs.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/threadDevice.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/threadRegistry.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/threads.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/udsIndex.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/vdoStringUtils.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/verify.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/workItemStats.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/workQueue.o
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/workQueueHandle.o
/var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/batchProcessor.c: In function ‘makeBatchProcessor’:
/var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/batchProcessor.c:172:17: error: cast between incompatible function types from ‘BatchProcessorCallback’ {aka ‘void (*)(struct batchProcessor *, void *)’} to ‘void (*)(KvdoWorkItem *)’ {aka ‘void (*)(struct kvdoWorkItem *)’} [-Werror=cast-function-type]
  172 |                 (KvdoWorkFunction) callback, CPU_Q_ACTION_COMPLETE_KVIO);
      |                 ^
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/workQueueStats.o
cc1: all warnings being treated as errors
  CC [M]  /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/workQueueSysfs.o
make[2]: *** [scripts/Makefile.build:288: /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/batchProcessor.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [scripts/Makefile.build:550: /var/lib/dkms/kvdo/6.2.6.3/build/uds] Error 2
make[1]: *** Waiting for unfinished jobs....
/var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/statusProcfs.c: In function ‘statusDedupeOpen’:
/var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/statusProcfs.c:79:46: error: implicit declaration of function ‘PDE_DATA’; did you mean ‘NODE_DATA’? [-Werror=implicit-function-declaration]
   79 |   return single_open(file, statusDedupeShow, PDE_DATA(inode));
      |                                              ^~~~~~~~
      |                                              NODE_DATA
/var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/statusProcfs.c:79:46: error: passing argument 3 of ‘single_open’ makes pointer from integer without a cast [-Werror=int-conversion]
   79 |   return single_open(file, statusDedupeShow, PDE_DATA(inode));
      |                                              ^~~~~~~~~~~~~~~
      |                                              |
      |                                              int
In file included from /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/statusProcfs.h:27,
                 from /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/statusProcfs.c:35:
./include/linux/seq_file.h:165:68: note: expected ‘void *’ but argument is of type ‘int’
  165 | int single_open(struct file *, int (*)(struct seq_file *, void *), void *);
      |                                                                    ^~~~~~
/var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/statusProcfs.c: In function ‘statusKernelOpen’:
/var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/statusProcfs.c:181:46: error: passing argument 3 of ‘single_open’ makes pointer from integer without a cast [-Werror=int-conversion]
  181 |   return single_open(file, statusKernelShow, PDE_DATA(inode));
      |                                              ^~~~~~~~~~~~~~~
      |                                              |
      |                                              int
In file included from /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/statusProcfs.h:27,
                 from /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/statusProcfs.c:35:
./include/linux/seq_file.h:165:68: note: expected ‘void *’ but argument is of type ‘int’
  165 | int single_open(struct file *, int (*)(struct seq_file *, void *), void *);
      |                                                                    ^~~~~~
cc1: all warnings being treated as errors
make[2]: *** [scripts/Makefile.build:288: /var/lib/dkms/kvdo/6.2.6.3/build/vdo/kernel/statusProcfs.o] Error 1
make[1]: *** [scripts/Makefile.build:550: /var/lib/dkms/kvdo/6.2.6.3/build/vdo] Error 2
make: *** [Makefile:1841: /var/lib/dkms/kvdo/6.2.6.3/build] Error 2
make: Leaving directory '/usr/src/kernels/5.17.4-200.fc35.x86_64'

Booting with kernel 5.16 restores functionality, but it would be appreciated if the module could be loaded on 5.17 as well.

Thanks

Failed to compile kvdo 6.2.1.48 with GCC 9

I am compiling kvdo-6.2.1.48 for kernel 4.19.55-2-lts. With GCC 8.3.0 everything is OK but with GCC 9.1.0 I get the following error:

/var/lib/dkms/kvdo/6.2.1.48/build/vdo/base/recoveryJournal.c: In function ‘decodeRecoveryJournalState_7_0’:
/var/lib/dkms/kvdo/6.2.1.48/build/vdo/base/recoveryJournal.c:622:46: error: taking address of packed member of ‘struct <anonymous>’ may result in an unaligned pointer value [-Werror=address-of-packed-member]
  622 |   int result = getUInt64LEFromBuffer(buffer, &state->journalStart);
      |                                              ^~~~~~~~~~~~~~~~~~~~
/var/lib/dkms/kvdo/6.2.1.48/build/vdo/base/recoveryJournal.c:627:42: error: taking address of packed member of ‘struct <anonymous>’ may result in an unaligned pointer value [-Werror=address-of-packed-member]
  627 |   result = getUInt64LEFromBuffer(buffer, &state->logicalBlocksUsed);
      |                                          ^~~~~~~~~~~~~~~~~~~~~~~~~
/var/lib/dkms/kvdo/6.2.1.48/build/vdo/base/recoveryJournal.c:632:42: error: taking address of packed member of ‘struct <anonymous>’ may result in an unaligned pointer value [-Werror=address-of-packed-member]
  632 |   result = getUInt64LEFromBuffer(buffer, &state->blockMapDataBlocks);
      |                                          ^~~~~~~~~~~~~~~~~~~~~~~~~~

kvdo does not compile on Fedora 27

I realize Fedora is not properly supported. I have tested on CentOS and everything works as expected.

My system is fully up-to-date using dnf.

$ cat /etc/fedora-release
Fedora release 27 (Twenty Seven)

$ rpm -q kernel kernel-core kernel-devel kernel-headers gcc
kernel-4.14.14-300.fc27.x86_64
kernel-core-4.14.14-300.fc27.x86_64
kernel-devel-4.14.14-300.fc27.x86_64
kernel-headers-4.14.14-300.fc27.x86_64
gcc-7.2.1-2.fc27.x86_64

$ uname -a
Linux linux.local.lan 4.14.14-300.fc27.x86_64 #1 SMP Fri Jan 19 13:19:54 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Following the instructions:

$ git clone https://github.com/dm-vdo/kvdo.git
Cloning into 'kvdo'...
remote: Counting objects: 573, done.
remote: Total 573 (delta 0), reused 0 (delta 0), pack-reused 573
Receiving objects: 100% (573/573), 1.06 MiB | 3.77 MiB/s, done.
Resolving deltas: 100% (179/179), done.
$ cd kvdo
$ make -C /usr/src/kernels/`uname -r`/ M=`pwd`
make: Entering directory '/usr/src/kernels/4.14.14-300.fc27.x86_64'
  AR      /home/user/devel/vdo/kvdo/uds/built-in.o
  CC [M]  /home/user/devel/vdo/kvdo/uds/stringLinuxKernel.o
  CC [M]  /home/user/devel/vdo/kvdo/uds/threadSemaphoreLinuxKernel.o
/home/user/devel/vdo/kvdo/uds/threadSemaphoreLinuxKernel.c: In function ‘acquireSemaphore’:
/home/user/devel/vdo/kvdo/uds/threadSemaphoreLinuxKernel.c:99:7: error: implicit declaration of function ‘__set_task_state’; did you mean ‘__get_task_state’? [-Werror=implicit-function-declaration]
       __set_task_state(task, TASK_INTERRUPTIBLE);
       ^~~~~~~~~~~~~~~~
       __get_task_state
cc1: all warnings being treated as errors
make[2]: *** [scripts/Makefile.build:315: /home/user/devel/vdo/kvdo/uds/threadSemaphoreLinuxKernel.o] Error 1
make[1]: *** [scripts/Makefile.build:573: /home/user/devel/vdo/kvdo/uds] Error 2
make: *** [Makefile:1511: _module_/home/user/devel/vdo/kvdo] Error 2
make: Leaving directory '/usr/src/kernels/4.14.14-300.fc27.x86_64'

Installation steps of kvdo on Debian 11(.3)

Hello everyone,

I wrote this little script to install kvdo on Debian 11(.3) (tested with 5.10.0-13 Linux kernel)

DEPENDANCIES="git make linux-headers-$(uname -r)"
WORK_DIRECTORY="/tmp/kvdo/"

apt update && \
apt install -y $DEPENDANCIES && \
#git clone https://github.com/dm-vdo/kvdo.git $WORK_DIRECTORY && \
git clone https://github.com/tigerblue77/kvdo.git $WORK_DIRECTORY && \
cd $WORK_DIRECTORY && \
make -j $(nproc) -C /usr/src/linux-headers-`uname -r` M=`pwd` && \
cd - && \
cp ${WORK_DIRECTORY}vdo/kvdo.ko /lib/modules/$(uname -r) && \
cp ${WORK_DIRECTORY}uds/uds.ko /lib/modules/$(uname -r) && \
apt autoremove --purge -y $DEPENDANCIES && \
rm -Rf $WORK_DIRECTORY

depmod
#update-initramfs -u
#echo uds >>/etc/modules && \
#echo kvdo >> /etc/modules
modprobe kvdo
modprobe uds
  • Line 6 is commented while waiting for my pull request #52 to be merged
  • Can anyone tell me if lines 17-19 are needed?
  • Is there a way to make lines 13 and 14 run in parallel while keeping the "&&" on line 12? (solved in comments)
  • Don't hesitate to suggest any improvement!
  • Once improved, maybe we could add this script to the repository's README.md?
  • This script is part of my full kvdo + vdo installation script; don't hesitate to check the issue I opened on vdo's GitHub repository.

I wrote this script with the help of these forum topics:

Doesn't compile with 5.15.10-1.el8.elrepo.x86_64

Many errors about __builtin_va*:

In file included from ./include/linux/kernel.h:5,
                 from /var/lib/dkms/kvdo/8.1.0.316/build/uds/stringUtils.h:26,
                 from /var/lib/dkms/kvdo/8.1.0.316/build/uds/common.h:25,
                 from /var/lib/dkms/kvdo/8.1.0.316/build/uds/buffer.h:25,
                 from /var/lib/dkms/kvdo/8.1.0.316/build/uds/indexLayout.h:25,
                 from /var/lib/dkms/kvdo/8.1.0.316/build/uds/indexLayout.c:22:
./include/linux/stdarg.h:6: error: "va_start" redefined [-Werror]
 #define va_start(v, l) __builtin_va_start(v, l)

and the switch statement does not handle a new enum value:

/var/lib/dkms/kvdo/8.1.0.316/build/vdo/dmvdo.c: In function 'vdo_status':
/var/lib/dkms/kvdo/8.1.0.316/build/vdo/dmvdo.c:177:2: error: enumeration value 'STATUSTYPE_IMA' not handled in switch [-Werror=switch]
  switch (status_type) {
  ^~~~~~
cc1: all warnings being treated as errors

Running out of slabs

A VDO volume over LVM was created with the default slabSize (2G), but it grew bigger than expected; therefore, when trying to extend the physical size, it complains about too many slabs:

Mar 18 14:23:59 backup2-co.conexcol.net kernel: kvdo0:dmsetup: mapToSystemError: mapping internal status code 2072 (kvdo: VDO_TOO_MANY_SLABS: kvdo: Exceeds maximum number of slabs supported) to EIO
Mar 18 14:23:59 backup2-co.conexcol.net kernel: device-mapper: table: 253:1: vdo: Device prepareToGrowPhysical failed (specified physical size too big based on formatted slab size)
Mar 18 14:23:59 backup2-co.conexcol.net kernel: device-mapper: ioctl: error adding target to table
Mar 18 14:23:59 backup2-co.conexcol.net vdo[1637]: ERROR - Device vdo-backup could not be changed; device-mapper: reload ioctl on vdo-backup failed: Input/output error
Mar 18 14:23:59 backup2-co.conexcol.net vdo[1637]: ERROR - device-mapper: reload ioctl on vdo-backup failed: Input/output error

Is it possible to change the slab size on the fly, or is that volume hopelessly stuck at its current size? The modify option has no argument for slabSize; only the "create" option does. Modifying "/etc/vdoconf.yml" to change slabSize from 2G to 32G didn't work (I assume that's because all the slabs are already created using the 2G size).
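
Since the slab size is fixed when the volume is formatted, the usual way out appears to be recreating the volume with a larger slab size (after migrating the data elsewhere); an illustrative create command using the vdo manager's slab-size option, with placeholder names and sizes:

    vdo create --name=vdo-backup-new --device=/dev/mapper/vg_backup-lv_backup \
        --vdoSlabSize=32G --vdoLogicalSize=20T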

lsblk

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 40G 0 disk
└─sda1 8:1 0 40G 0 part /
sdb 8:16 0 2T 0 disk
└─vg_backup-lv_backup 253:0 0 18T 0 lvm
└─vdo-backup 253:1 0 20T 0 vdo /backup
sdc 8:32 0 2T 0 disk
└─vg_backup-lv_backup 253:0 0 18T 0 lvm
└─vdo-backup 253:1 0 20T 0 vdo /backup
sdd 8:48 0 2T 0 disk
└─vg_backup-lv_backup 253:0 0 18T 0 lvm
└─vdo-backup 253:1 0 20T 0 vdo /backup
sde 8:64 0 2T 0 disk
└─vg_backup-lv_backup 253:0 0 18T 0 lvm
└─vdo-backup 253:1 0 20T 0 vdo /backup
sdf 8:80 0 2T 0 disk
└─vg_backup-lv_backup 253:0 0 18T 0 lvm
└─vdo-backup 253:1 0 20T 0 vdo /backup
sdg 8:96 0 2T 0 disk
└─vg_backup-lv_backup 253:0 0 18T 0 lvm
└─vdo-backup 253:1 0 20T 0 vdo /backup
sdh 8:112 0 1T 0 disk
└─vg_backup-lv_backup 253:0 0 18T 0 lvm
└─vdo-backup 253:1 0 20T 0 vdo /backup
sdi 8:128 0 1T 0 disk
└─vg_backup-lv_backup 253:0 0 18T 0 lvm
└─vdo-backup 253:1 0 20T 0 vdo /backup
sdj 8:144 0 2T 0 disk
└─vg_backup-lv_backup 253:0 0 18T 0 lvm
└─vdo-backup 253:1 0 20T 0 vdo /backup
sdk 8:160 0 2T 0 disk
└─vg_backup-lv_backup 253:0 0 18T 0 lvm
└─vdo-backup 253:1 0 20T 0 vdo /backup
sr0 11:0 1 1024M 0 rom

vdostats --verbose

/dev/mapper/vdo-backup :
version : 31
release version : 133524
data blocks used : 3586432605
overhead blocks used : 9662628
logical blocks used : 4933717551
physical blocks : 3758088192
logical blocks : 5368709120
1K-blocks : 15032352768
1K-blocks used : 14384380932
1K-blocks available : 647971836
used percent : 95
saving percent : 27
block map cache size : 134217728
write policy : sync
block size : 4096
completed recovery count : 6
read-only recovery count : 0
operating mode : normal
recovery progress (%) : N/A
compressed fragments written : 504
compressed blocks written : 36
compressed fragments in packer : 8
slab count : 7166
slabs opened : 7166
slabs reopened : 1
journal disk full count : 0
journal commits requested count : 0
journal entries batching : 0
journal entries started : 2032
journal entries writing : 0
journal entries written : 2032
journal entries committed : 2032
journal blocks batching : 0
journal blocks started : 15
journal blocks writing : 0
journal blocks written : 15
journal blocks committed : 15
slab journal disk full count : 0
slab journal flush count : 0
slab journal blocked count : 0
slab journal blocks written : 1
slab journal tail busy count : 0
slab summary blocks written : 1
reference blocks written : 0
block map dirty pages : 2
block map clean pages : 32
block map free pages : 32734
block map failed pages : 0
block map incoming pages : 0
block map outgoing pages : 0
block map cache pressure : 0
block map read count : 1345
block map write count : 1016
block map failed reads : 0
block map failed writes : 0
block map reclaimed : 0
block map read outgoing : 0
block map found in cache : 2327
block map discard required : 0
block map wait for page : 0
block map fetch required : 34
block map pages loaded : 34
block map pages saved : 0
block map flush count : 0
dedupe advice valid : 0
dedupe advice stale : 0
concurrent data matches : 0
concurrent hash collisions : 0
invalid advice PBN count : 0
no space error count : 0
read only error count : 0
instance : 0
512 byte emulation : off
current VDO IO requests in progress : 8
maximum VDO IO requests in progress : 514
dedupe advice timeouts : 0
flush out : 0
write amplification ratio : 1.0
bios in read : 963
bios in write : 512
bios in discard : 0
bios in flush : 0
bios in fua : 0
bios in partial read : 0
bios in partial write : 0
bios in partial discard : 0
bios in partial flush : 0
bios in partial fua : 0
bios out read : 833
bios out write : 512
bios out discard : 0
bios out flush : 0
bios out fua : 0
bios meta read : 939191
bios meta write : 118
bios meta discard : 0
bios meta flush : 17
bios meta fua : 16
bios journal read : 0
bios journal write : 15
bios journal discard : 0
bios journal flush : 15
bios journal fua : 15
bios page cache read : 34
bios page cache write : 0
bios page cache discard : 0
bios page cache flush : 0
bios page cache fua : 0
bios out completed read : 833
bios out completed write : 512
bios out completed discard : 0
bios out completed flush : 0
bios out completed fua : 0
bios meta completed read : 939191
bios meta completed write : 118
bios meta completed discard : 0
bios meta completed flush : 0
bios meta completed fua : 0
bios journal completed read : 0
bios journal completed write : 15
bios journal completed discard : 0
bios journal completed flush : 0
bios journal completed fua : 0
bios page cache completed read : 34
bios page cache completed write : 0
bios page cache completed discard : 0
bios page cache completed flush : 0
bios page cache completed fua : 0
bios acknowledged read : 963
bios acknowledged write : 512
bios acknowledged discard : 0
bios acknowledged flush : 0
bios acknowledged fua : 0
bios acknowledged partial read : 0
bios acknowledged partial write : 0
bios acknowledged partial discard : 0
bios acknowledged partial flush : 0
bios acknowledged partial fua : 0
bios in progress read : 0
bios in progress write : 0
bios in progress discard : 0
bios in progress flush : 0
bios in progress fua : 0
KVDO module bytes used : 4435102384
KVDO module peak bytes used : 4444035840
entries indexed : 65471652
posts found : 0
posts not found : 0
queries found : 0
queries not found : 0
updates found : 0
updates not found : 0
current dedupe queries : 0
maximum dedupe queries : 0

KVDO build fails in kernel 4.17.0+

Hi
I'm trying to build kvdo for Linux kernel 4.17.0-3 and it fails; the same happens with 4.18.0-2.
There is no problem with kernel 4.16.0-trunk-amd64.
I don't think this is a duplicate of issue 13.

I'm on Version 6.2.0.4
gcc version 8.2.0 (Debian 8.2.0-7)

Thanks

dclavijo@testing4:~/code/kvdo$ make -C /usr/src/linux-headers-4.17.0-3-amd64/ M=$(pwd) -j 16
make: Entering directory '/usr/src/linux-headers-4.17.0-3-amd64'
  CC [M]  /home/dclavijo/code/kvdo/uds/indexLayoutLinuxKernel.o
  CC [M]  /home/dclavijo/code/kvdo/uds/indexZone.o
  CC [M]  /home/dclavijo/code/kvdo/uds/permassertLinuxKernel.o
  CC [M]  /home/dclavijo/code/kvdo/uds/stringUtils.o
  CC [M]  /home/dclavijo/code/kvdo/uds/regionIndexState.o
  CC [M]  /home/dclavijo/code/kvdo/uds/indexConfig.o
  CC [M]  /home/dclavijo/code/kvdo/uds/deltaMemory.o
  CC [M]  /home/dclavijo/code/kvdo/uds/sparseCache.o
  CC [M]  /home/dclavijo/code/kvdo/uds/stringLinuxKernel.o
  CC [M]  /home/dclavijo/code/kvdo/uds/threadOnce.o
  CC [M]  /home/dclavijo/code/kvdo/uds/localIndexRouter.o
  CC [M]  /home/dclavijo/code/kvdo/uds/udsModule.o
  CC [M]  /home/dclavijo/code/kvdo/uds/threadCondVarLinuxKernel.o
  CC [M]  /home/dclavijo/code/kvdo/uds/cacheCounters.o
  CC [M]  /home/dclavijo/code/kvdo/uds/masterIndexOps.o
  CC [M]  /home/dclavijo/code/kvdo/uds/buffer.o
  CC [M]  /home/dclavijo/code/kvdo/uds/zone.o
  CC [M]  /home/dclavijo/code/kvdo/uds/threadRegistry.o
  CC [M]  /home/dclavijo/code/kvdo/uds/threadsLinuxKernel.o
  CC [M]  /home/dclavijo/code/kvdo/uds/regionIndexComponent.o
  CC [M]  /home/dclavijo/code/kvdo/uds/indexSession.o
  CC [M]  /home/dclavijo/code/kvdo/uds/indexComponent.o
  CC [M]  /home/dclavijo/code/kvdo/uds/requestQueue.o
  CC [M]  /home/dclavijo/code/kvdo/uds/chapterWriter.o
  CC [M]  /home/dclavijo/code/kvdo/uds/volume.o
  CC [M]  /home/dclavijo/code/kvdo/uds/openChapter.o
  CC [M]  /home/dclavijo/code/kvdo/uds/indexInternals.o
  CC [M]  /home/dclavijo/code/kvdo/uds/indexCheckpoint.o
  CC [M]  /home/dclavijo/code/kvdo/uds/parameter.o
  CC [M]  /home/dclavijo/code/kvdo/uds/udsMain.o
  CC [M]  /home/dclavijo/code/kvdo/uds/singleFileLayout.o
  CC [M]  /home/dclavijo/code/kvdo/uds/logger.o
  CC [M]  /home/dclavijo/code/kvdo/uds/uds.mod.o
  CC [M]  /home/dclavijo/code/kvdo/uds/pageCache.o
  CC [M]  /home/dclavijo/code/kvdo/uds/threadSemaphoreLinuxKernel.o
  CC [M]  /home/dclavijo/code/kvdo/uds/timeUtils.o
  CC [M]  /home/dclavijo/code/kvdo/uds/sysfs.o
  CC [M]  /home/dclavijo/code/kvdo/uds/openChapterZone.o
  CC [M]  /home/dclavijo/code/kvdo/uds/cachedChapterIndex.o
  CC [M]  /home/dclavijo/code/kvdo/uds/memoryLinuxKernel.o
  CC [M]  /home/dclavijo/code/kvdo/uds/searchList.o
  CC [M]  /home/dclavijo/code/kvdo/uds/linuxIORegion.o
  CC [M]  /home/dclavijo/code/kvdo/uds/block.o
  CC [M]  /home/dclavijo/code/kvdo/uds/session.o
  CC [M]  /home/dclavijo/code/kvdo/uds/udsState.o
  CC [M]  /home/dclavijo/code/kvdo/uds/deltaIndex.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/slabScrubber.o
  CC [M]  /home/dclavijo/code/kvdo/uds/errors.o
  CC [M]  /home/dclavijo/code/kvdo/uds/bufferedReader.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/blockMapPage.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/vio.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/flush.o
  CC [M]  /home/dclavijo/code/kvdo/uds/geometry.o
  CC [M]  /home/dclavijo/code/kvdo/uds/indexStateData.o
  CC [M]  /home/dclavijo/code/kvdo/uds/context.o
  CC [M]  /home/dclavijo/code/kvdo/uds/hashUtils.o
  CC [M]  /home/dclavijo/code/kvdo/uds/permassert.o
  CC [M]  /home/dclavijo/code/kvdo/uds/indexRouter.o
In file included from /home/dclavijo/code/kvdo/uds/linuxIORegion.c:25:0:
/usr/src/linux-headers-4.17.0-3-common/include/linux/blkdev.h:1037:22: error: expected identifier before numeric constant
 #define SECTOR_SHIFT 9
                      ^
/home/dclavijo/code/kvdo/uds/linuxIORegion.c:46:8: note: in expansion of macro ‘SECTOR_SHIFT’
 enum { SECTOR_SHIFT = 9 };
        ^~~~~~~~~~~~
/usr/src/linux-headers-4.17.0-3-common/include/linux/blkdev.h:1040:21: error: expected identifier before ‘(’ token
 #define SECTOR_SIZE (1 << SECTOR_SHIFT)
                     ^
/home/dclavijo/code/kvdo/uds/linuxIORegion.c:47:8: note: in expansion of macro ‘SECTOR_SIZE’
 enum { SECTOR_SIZE  = 1 << SECTOR_SHIFT };
        ^~~~~~~~~~~
make[4]: *** [/usr/src/linux-headers-4.17.0-3-common/scripts/Makefile.build:317: /home/dclavijo/code/kvdo/uds/linuxIORegion.o] Error 1
make[4]: *** Waiting for unfinished jobs....
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/partitionCopy.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/adminCompletion.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/vioWrite.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/vioRead.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/extent.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/readOnlyRebuild.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/threadConfig.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/referenceOperation.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/blockMapRecovery.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/superBlock.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/vdoRecovery.o
make[3]: *** [/usr/src/linux-headers-4.17.0-3-common/scripts/Makefile.build:564: /home/dclavijo/code/kvdo/uds] Error 2
make[3]: *** Waiting for unfinished jobs....
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/lockCounter.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/lz4.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/physicalZone.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/readOnlyModeContext.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/vioPool.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/upgrade.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/dataVIO.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/pbnLock.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/slabJournalEraser.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/slabCompletion.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/completion.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/vdoResizeLogical.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/logicalZone.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/trace.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/blockAllocator.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/vdoLayout.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/vdoResize.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/vdoDebug.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/packer.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/physicalLayer.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/pbnLockPool.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/compressedBlock.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/refCounts.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/allocatingVIO.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/recoveryUtils.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/priorityTable.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/slabJournal.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/objectPool.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/slabSummary.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/intMap.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/slabRebuild.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/vdoClose.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/hashZone.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/volumeGeometry.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/vdoPageCache.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/statusCodes.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/threadData.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/blockMap.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/referenceCountRebuild.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/waitQueue.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/forest.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/vdo.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/slabDepot.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/hashLock.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/vdoLoad.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/blockMapTree.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/slab.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/pointerMap.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/header.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/heap.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/compressionState.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/dirtyLists.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/fixedLayout.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/recoveryJournal.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/base/constants.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/workQueueHandle.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/poolSysfs.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/kvdoFlush.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/kernelVDO.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/workItemStats.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/dump.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/memoryUsage.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/threads.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/bio.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/udsIndex.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/ktrace.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/readCache.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/bufferPool.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/kvio.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/deadlockQueue.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/sysfs.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/deviceConfig.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/errors.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/workQueueStats.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/kernelLayer.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/limiter.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/dmvdo.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/poolSysfsStats.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/vdoStringUtils.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/verify.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/workQueue.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/ioSubmitter.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/dedupeIndex.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/dataKVIO.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/histogram.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/logger.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/workQueueSysfs.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/batchProcessor.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/deviceRegistry.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/statusProcfs.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/threadDevice.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/funnelQueue.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/threadRegistry.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/kernel/instanceNumber.o
  CC [M]  /home/dclavijo/code/kvdo/vdo/../uds/murmur/MurmurHash3.o
  AR      /home/dclavijo/code/kvdo/vdo/built-in.a
  LD [M]  /home/dclavijo/code/kvdo/vdo/kvdo.o
make[2]: *** [/usr/src/linux-headers-4.17.0-3-common/Makefile:1585: _module_/home/dclavijo/code/kvdo] Error 2
make[1]: *** [Makefile:146: sub-make] Error 2
make: *** [Makefile:8: all] Error 2
make: Leaving directory '/usr/src/linux-headers-4.17.0-3-amd64'

Moving from kvdo 6.2.3.114 to 6.1.3.23

I would like to move from CentOS 8 to CentOS 7.
After starting the vdo service on CentOS 7 with a configuration that was running under CentOS 8, I am getting the following errors:

systemd[1]: Starting VDO volume services...
vdo[20804]: ERROR - 'instancemethod' object has no attribute '__getitem__'
vdo[20804]: vdo: ERROR - 'instancemethod' object has no attribute '__getitem__'
systemd[1]: vdo.service: main process exited, code=exited, status=1/FAILURE
systemd[1]: Failed to start VDO volume services.
systemd[1]: Unit vdo.service entered failed state.
systemd[1]: vdo.service failed.

Running vdodumpconfig results in:

vdodumpconfig: allocateVDO failed for '/dev/md254' with VDO Status: Unsupported component version
vdodumpconfig: Could not load VDO from '/dev/md254'

Is it somehow possible to get a VDO volume up and running while moving from CentOS 8 (kmod-kvdo 6.2.3.114) to CentOS 7 (kmod-kvdo 6.1.3.23)?

High CPU load caused by indexW kernel threads while vdo volume is unused

Hello team,

I have noticed that the indexW kernel threads consume a lot of CPU time while the VDO volume is idle (for example, when it has just been started and has been unused since then):

CPU time increase, as reported by 'top':

$ top -bw | grep indexW
6608 root      20   0       0      0      0 S   6.2   0.0   0:01.69 kvdo0:indexW
6609 root      20   0       0      0      0 S   6.2   0.0   0:01.69 kvdo0:indexW
6611 root      20   0       0      0      0 S   6.2   0.0   0:01.69 kvdo0:indexW
6612 root      20   0       0      0      0 S   6.2   0.0   0:01.66 kvdo0:indexW
6613 root      20   0       0      0      0 S   6.2   0.0   0:01.68 kvdo0:indexW
6610 root      20   0       0      0      0 S   0.0   0.0   0:01.69 kvdo0:indexW
6611 root      20   0       0      0      0 S   4.6   0.0   0:01.83 kvdo0:indexW
6608 root      20   0       0      0      0 S   4.3   0.0   0:01.82 kvdo0:indexW
6609 root      20   0       0      0      0 S   4.3   0.0   0:01.82 kvdo0:indexW
6610 root      20   0       0      0      0 S   4.3   0.0   0:01.82 kvdo0:indexW
6612 root      20   0       0      0      0 S   4.3   0.0   0:01.79 kvdo0:indexW
6613 root      20   0       0      0      0 S   4.3   0.0   0:01.81 kvdo0:indexW
6610 root      20   0       0      0      0 S   4.6   0.0   0:01.96 kvdo0:indexW
6613 root      20   0       0      0      0 S   4.6   0.0   0:01.95 kvdo0:indexW
6608 root      20   0       0      0      0 S   4.3   0.0   0:01.95 kvdo0:indexW
6609 root      20   0       0      0      0 S   4.3   0.0   0:01.95 kvdo0:indexW
6611 root      20   0       0      0      0 S   4.3   0.0   0:01.96 kvdo0:indexW
6612 root      20   0       0      0      0 S   4.0   0.0   0:01.91 kvdo0:indexW
6613 root      20   0       0      0      0 S   4.6   0.0   0:02.09 kvdo0:indexW
6608 root      20   0       0      0      0 S   4.3   0.0   0:02.08 kvdo0:indexW
6609 root      20   0       0      0      0 S   4.3   0.0   0:02.08 kvdo0:indexW
6610 root      20   0       0      0      0 S   4.3   0.0   0:02.09 kvdo0:indexW
6611 root      20   0       0      0      0 S   4.3   0.0   0:02.09 kvdo0:indexW
6612 root      20   0       0      0      0 S   4.3   0.0   0:02.04 kvdo0:indexW

VDO statistics after collecting the usage above:

# vdostats --verbose | grep -e 'bios in\|bios out'
  bios in read                        : 0
  bios in write                       : 0
  bios in discard                     : 0
  bios in flush                       : 0
  bios in fua                         : 0
  bios in partial read                : 0
  bios in partial write               : 0
  bios in partial discard             : 0
  bios in partial flush               : 0
  bios in partial fua                 : 0
  bios out read                       : 0
  bios out write                      : 0
  bios out discard                    : 0
  bios out flush                      : 0
  bios out fua                        : 0
  bios out completed read             : 0
  bios out completed write            : 0
  bios out completed discard          : 0
  bios out completed flush            : 0
  bios out completed fua              : 0
  bios in progress read               : 0
  bios in progress write              : 0
  bios in progress discard            : 0
  bios in progress flush              : 0
  bios in progress fua                : 0

System information:
GNU/Gentoo Linux (amd64)
kvdo version: 6.2.3.114 on 5.8.0 kernel (amd64) (kvdo-corp)
GCC: 10.2

The VDO device is started on top of a dm-crypt encrypted partition, i.e.:

    vdo_storage: !VDOService
      _operationState: finished
      ackThreads: 1
      activated: enabled
      bioRotationInterval: 64
      bioThreads: 4
      blockMapCacheSize: 128M
      blockMapPeriod: 16380
      compression: enabled
      cpuThreads: 2
      deduplication: enabled
      device: /dev/mapper/cryptstorage
      hashZoneThreads: 1
      indexCfreq: 0
      indexMemory: 0.25
      indexSparse: disabled
      indexThreads: 0
      logicalBlockSize: 4096
      logicalSize: 1T
      logicalThreads: 1
      maxDiscardSize: 4K
      name: vdo_storage
      physicalSize: 306742596K
      physicalThreads: 1
      slabSize: 2G
      uuid: null
      writePolicy: async

Hardware specs of the machine:
Intel(R) Core(TM) i7-9850H CPU @ 2.60GHz
32GB memory
disk partition on NVME disk

There is currently no virtualization in use, though KVM and linux-containers are compiled in as modules.

Anything else you may need to know, do let me know.

Compile fails on Ubuntu 18.04 (kernel 4.15)

Hello there,

I am trying to compile this module on Ubuntu 18.04, but I get this error:

The headers are installed. Any ideas?

root@pure:~/kvdo# make -C /usr/src/linux-headers-`uname -r` M=`pwd`
make: Entering directory '/usr/src/linux-headers-4.15.0-106-generic'
  CC [M]  /root/kvdo/uds/openChapter.o
In file included from /root/kvdo/uds/indexLayout.h:28:0,
                 from /root/kvdo/uds/index.h:26,
                 from /root/kvdo/uds/openChapter.h:27,
                 from /root/kvdo/uds/openChapter.c:22:
/root/kvdo/uds/ioFactory.h:28:10: fatal error: linux/dm-bufio.h: No such file or directory
 #include <linux/dm-bufio.h>
          ^~~~~~~~~~~~~~~~~~
compilation terminated.
scripts/Makefile.build:330: recipe for target '/root/kvdo/uds/openChapter.o' failed
make[2]: *** [/root/kvdo/uds/openChapter.o] Error 1
scripts/Makefile.build:604: recipe for target '/root/kvdo/uds' failed
make[1]: *** [/root/kvdo/uds] Error 2
Makefile:1577: recipe for target '_module_/root/kvdo' failed
make: *** [_module_/root/kvdo] Error 2
make: Leaving directory '/usr/src/linux-headers-4.15.0-106-generic'

optimized configuration

I have got a server with
12/24 cpus
48 GB RAM
110 TB Raid 6 Disk (/dev/sda)

and created a vdo store with

vdo create --force --vdoCpuThreads=12 --name=vdo1 --device=/dev/sda --writePolicy=async --vdoLogicalSize=256T --vdoSlabSize=32G --sparseIndex=enabled --indexMem=2

and an xfs filesystem on top of it. Its (only) purpose is as a backup server for disk images.
(A first test with dedup worked fine.)

I don't know exactly where to tune the config above to get the most out of the hardware.
(I tried blockMapCacheSize=32G, which crashed the machine; I couldn't even log in and had to reboot.)

So what would be a good configuration for this hardware?
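For illustration only (these values are untested on this hardware and not a recommendation), the thread and cache knobs in question are exposed as vdo create flags, for example:

    # illustrative values only; flag names as in the vdo manager, values unvalidated
    vdo create --force --name=vdo1 --device=/dev/sda \
        --vdoLogicalSize=256T --vdoSlabSize=32G \
        --sparseIndex=enabled --indexMem=2 --writePolicy=async \
        --vdoCpuThreads=6 --vdoLogicalThreads=4 --vdoPhysicalThreads=4 \
        --vdoHashZoneThreads=4 --vdoBioThreads=8 --vdoAckThreads=2 \
        --blockMapCacheSize=2G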

How to reduce CPU utilization on journal thread?

It seems that the journal can only work on a single thread.
When I use fio for a performance test, the CPU utilization of kvdo0:journalQ is about 90% when the storage device is an NVMe SSD (measured with top) and about 50% on a SATA SSD. I'm afraid the journal could be the bottleneck of this configuration because of the high CPU utilization.
Does that actually affect performance, and how can CPU utilization on the journal thread be reduced?
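For context, a typical fio job for this kind of test might look like the following (illustrative only; the exact job used is not shown above, and the device name is a placeholder):

    # 4 KiB random-write test against the VDO device
    fio --name=vdo-randwrite --filename=/dev/mapper/vdo0 \
        --rw=randwrite --bs=4k --direct=1 --ioengine=libaio \
        --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting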

Errors and random vdo volume failures

Hi Everyone,
I have a very big issue with all the configured drives randomly getting into errors. The environment is as follows:

  • 6 virtual machines running on top of Red Hat virtualisation (rhhci)
  • each VM is running CentOS 7.5.1804
  • vdo version is 6.1.1.125 on all the VMs
  • each of the VMs has 10 x enterprise-grade, spinning, 12 Gb/s, 3.6TB drives in passthrough mode from the physical hosts
  • on each of the VMs, the vdo drives were configured as: logical volume 35TB, index size 4GB, sync mode
  • the underlying block device is actually a raid0 volume configured with cache and write-back
  • if I use the VM drives without vdo, everything works fine - so it is not a passthrough issue
  • each of the vdo volumes was formatted with xfs with discard, and also mounted by using systemd services as per the given example (see the sketch after this list)
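For reference, the formatting and systemd-ordered mounting mentioned in the last item typically look like the sketch below (device and mount point names are illustrative; the x-systemd.requires mount option is the documented way to order the mount after vdo.service):

    # format the VDO volume with XFS; -K skips the discard pass at mkfs time
    mkfs.xfs -K /dev/mapper/vsdh
    # mount through fstab so systemd starts vdo.service before mounting
    echo '/dev/mapper/vsdh /data/vsdh xfs defaults,discard,x-systemd.requires=vdo.service 0 0' >> /etc/fstab
    mkdir -p /data/vsdh && mount /data/vsdh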

But at some point they randomly die....
Below are some example errors regarding one volume ("vsdh" in this case):

vdostats

Device 1K-blocks Used Available Use% Space saving%
/dev/mapper/vsda 3906249728 48084728 3858165000 1% 96%
/dev/mapper/vsdb 3906249728 48178260 3858071468 1% 92%
/dev/mapper/vsdc 3906249728 48165820 3858083908 1% 93%
/dev/mapper/vsdd 3906249728 48316504 3857933224 1% 87%
/dev/mapper/vsde 3906249728 48304600 3857945128 1% 87%
/dev/mapper/vsdf 3906249728 48176624 3858073104 1% 92%
/dev/mapper/vsdg 3906249728 48211872 3858037856 1% 90%
/dev/mapper/vsdh 3906249728 N/A N/A N/A N/A
/dev/mapper/vsdi 3906249728 48192572 3858057156 1% 91%
/dev/mapper/vsdj 3906249728 48133656 3858116072 1% 94%

cat /var/log/messages | grep vdo

Jul 25 00:37:22 hdp-datanode-5 kernel: kvdo8:logQ0: Expected page 11058242 but got page 11058240 instead: kvdo: Corrupt or incorrect page (2057)
Jul 25 00:37:23 hdp-datanode-5 kernel: uds: kvdo7:dedupeQ: read index page map, last update 0
Jul 25 00:37:23 hdp-datanode-5 kernel: uds: kvdo7:dedupeQ: index_0: loaded index from chapter 0 through chapter 0
Jul 25 00:37:24 hdp-datanode-5 kernel: uds: kvdo8:dedupeQ: read index page map, last update 0
Jul 25 00:37:24 hdp-datanode-5 kernel: uds: kvdo8:dedupeQ: index_0: loaded index from chapter 0 through chapter 0
Jul 25 00:37:25 hdp-datanode-5 kernel: uds: kvdo9:dedupeQ: read index page map, last update 0
Jul 25 00:37:25 hdp-datanode-5 kernel: uds: kvdo9:dedupeQ: index_0: loaded index from chapter 0 through chapter 0
Jul 25 06:19:34 hdp-datanode-5 kernel: kvdo4:logQ0: Expected page 11058137 but got page 11058019 instead: kvdo: Corrupt or incorrect page (2057)
Jul 25 06:22:36 hdp-datanode-5 kernel: kvdo8:logQ0: Expected page 11058933 but got page 11058982 instead: kvdo: Corrupt or incorrect page (2057)
Jul 25 06:22:36 hdp-datanode-5 kernel: kvdo8:logQ0: Expected page 11058940 but got page 11058934 instead: kvdo: Corrupt or incorrect page (2057)
Jul 25 06:22:36 hdp-datanode-5 kernel: kvdo8:logQ0: Completing read VIO for LBN 2415919097 with error after getMappedBlock: kvdo: Corrupt or incorrect page (2057)
Jul 25 06:22:36 hdp-datanode-5 kernel: kvdo8:cpuQ0:
Jul 25 06:22:36 hdp-datanode-5 kernel: kvdo8:journalQ:
Jul 25 06:22:36 hdp-datanode-5 kernel: : kvdo: Corrupt or incorrect page (2057)
Jul 25 06:22:36 hdp-datanode-5 kernel: mapToSystemError: mapping internal status code 2057 (kvdo: VDO_BAD_PAGE: kvdo: Corrupt or incorrect page) to EIO
Jul 25 06:23:10 hdp-datanode-5 kernel: kvdo2:logQ0: Expected page 11058124 but got page 11058120 instead: kvdo: Corrupt or incorrect page (2057)
Jul 25 06:24:41 hdp-datanode-5 kernel: kvdo7:logQ0: Expected page 11058095 but got page 11058090 instead: kvdo: Corrupt or incorrect page (2057)
Jul 25 06:25:19 hdp-datanode-5 kernel: kvdo0:logQ0: Expected page 11058216 but got page 11058208 instead: kvdo: Corrupt or incorrect page (2057)

vdo status -n vsdh

VDO status:
Date: '2019-07-25 07:23:29+00:00'
Node: hdp-datanode-5.mptjo.internal
Kernel module:
Loaded: true
Name: kvdo
Version information:
kvdo version: 6.1.1.125
Configuration:
File: /etc/vdoconf.yml
Last modified: '2019-07-24 17:53:44'
VDOs:
vsdh:
Acknowledgement threads: 1
Activate: enabled
Bio rotation interval: 64
Bio submission threads: 4
Block map cache size: 128M
Block map period: 16380
Block size: 4096
CPU-work threads: 2
Compression: enabled
Configured write policy: auto
Deduplication: enabled
Device mapper status: 0 75161927680 vdo /dev/sdh read-only - online online 12001210 976562432
Emulate 512 byte: disabled
Hash zone threads: 1
Index checkpoint frequency: 0
Index memory setting: 4
Index parallel factor: 0
Index sparse: disabled
Index status: online
Logical size: 35T
Logical threads: 1
Physical size: 3814697M
Physical threads: 1
Read cache: disabled
Read cache size: 0M
Slab size: 2G
Storage device: /dev/disk/by-id/scsi-3600062b20314e1c024c6fa21cdf899cc
VDO statistics:
/dev/mapper/vsdh:
1K-blocks: 3906249728
1K-blocks available: N/A
1K-blocks used: N/A
512 byte emulation: false
KVDO module bios used: 372860
KVDO module bytes used: 51372628896
KVDO module peak bio count: 373148
KVDO module peak bytes used: 51372817808
bios acknowledged discard: 0
bios acknowledged flush: 0
bios acknowledged fua: 0
bios acknowledged partial discard: 0
bios acknowledged partial flush: 0
bios acknowledged partial fua: 0
bios acknowledged partial read: 0
bios acknowledged partial write: 0
bios acknowledged read: 780
bios acknowledged write: 1016
bios in discard: 0
bios in flush: 0
bios in fua: 0
bios in partial discard: 0
bios in partial flush: 0
bios in partial fua: 0
bios in partial read: 0
bios in partial write: 0
bios in progress discard: 0
bios in progress flush: 0
bios in progress fua: 0
bios in progress read: 0
bios in progress write: 0
bios in read: 780
bios in write: 1016
bios journal completed discard: 0
bios journal completed flush: 0
bios journal completed fua: 0
bios journal completed read: 0
bios journal completed write: 278
bios journal discard: 0
bios journal flush: 278
bios journal fua: 278
bios journal read: 0
bios journal write: 278
bios meta completed discard: 0
bios meta completed flush: 0
bios meta completed fua: 0
bios meta completed read: 293
bios meta completed write: 403
bios meta discard: 0
bios meta flush: 282
bios meta fua: 280
bios meta read: 293
bios meta write: 403
bios out completed discard: 0
bios out completed flush: 0
bios out completed fua: 0
bios out completed read: 219
bios out completed write: 1012
bios out discard: 0
bios out flush: 0
bios out fua: 0
bios out read: 219
bios out write: 1012
bios page cache completed discard: 0
bios page cache completed flush: 0
bios page cache completed fua: 0
bios page cache completed read: 21
bios page cache completed write: 0
bios page cache discard: 0
bios page cache flush: 0
bios page cache fua: 0
bios page cache read: 21
bios page cache write: 0
block map cache pressure: 0
block map cache size: 134217728
block map clean pages: 11
block map dirty pages: 9
block map discard required: 0
block map failed pages: 0
block map failed reads: 1
block map failed writes: 0
block map fetch required: 21
block map flush count: 0
block map found in cache: 3468
block map free pages: 32748
block map incoming pages: 0
block map outgoing pages: 0
block map pages loaded: 21
block map pages saved: 0
block map read count: 1778
block map read outgoing: 0
block map reclaimed: 0
block map wait for page: 0
block map write count: 1711
block size: 4096
completed recovery count: 0
compressed blocks written: 55
compressed fragments in packer: 30
compressed fragments written: 662
concurrent data matches: 0
concurrent hash collisions: 0
current VDO IO requests in progress: 30
current dedupe queries: 0
data blocks used: N/A
dedupe advice stale: 0
dedupe advice timeouts: 0
dedupe advice valid: 37
entries indexed: 1209
flush out: 0
instance: 8
invalid advice PBN count: 0
journal blocks batching: 0
journal blocks committed: 278
journal blocks started: 278
journal blocks writing: 0
journal blocks written: 278
journal commits requested count: 0
journal disk full count: 0
journal entries batching: 0
journal entries committed: 3422
journal entries started: 3422
journal entries writing: 0
journal entries written: 3422
logical blocks: 9395240960
logical blocks used: 522409
maximum VDO IO requests in progress: 512
maximum dedupe queries: 160
no space error count: 0
operating mode: read-only
overhead blocks used: 12000831
physical blocks: 976562432
posts found: 37
posts not found: 463
queries found: 0
queries not found: 0
read cache accesses: 0
read cache data hits: 0
read cache hits: 0
read only error count: 4
read-only recovery count: 0
recovery progress (%): N/A
reference blocks written: 0
release version: 131337
saving percent: N/A
slab count: 1841
slab journal blocked count: 0
slab journal blocks written: 2
slab journal disk full count: 0
slab journal flush count: 0
slab journal tail busy count: 0
slab summary blocks written: 2
slabs opened: 1
slabs reopened: 1
updates found: 150
updates not found: 8
used percent: N/A
version: 28
write amplification ratio: 1.39
write policy: sync

Could you please help me sort out this issue? It has already become a very big blocker in moving forward with other deployments.
Thank you very much !
Leo

Kernel 5.x

Hi, based on the table info in the README, it would be interesting to know when an official vdo version will make it into the repos for newer kernels.

Version    Intended Enterprise Linux Release    Supported With Modifications
6.1.x.x    EL7 (3.10.0-*.el7)
6.2.x.x    EL8 (4.18.0-*.el8)                   Fedora 28, Fedora 29, Fedora 30, Rawhide

Is '# -c <warn_pct>' right? I thought '# -c <crit_pct>' was right.

Hello, corwin.

I was looking at the monitoring Perl code.
Even though it is only a comment, I feel like there is something wrong.

31 # -c <warn_pct>: critical threshold equal to or greater than
32 #                <warn_pct> percent.
33 #
34 # -w <crit_pct>: warning threshold equal to or greater than
35 #                <crit_pct> percent.

Shouldn't 'warn_pct' be changed to 'crit_pct' on line 31,
and shouldn't 'crit_pct' be changed to 'warn_pct' on line 34?

  • monitor_check_vdostats_physicalSpace.pl
  • monitor_check_vdostats_logicalSpace.pl
  • monitor_check_vdostats_savingPercent.pl

All three files are the same.
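For clarity, the corrected comment (with the two placeholders swapped as suggested) would read:

    31 # -c <crit_pct>: critical threshold equal to or greater than
    32 #                <crit_pct> percent.
    33 #
    34 # -w <warn_pct>: warning threshold equal to or greater than
    35 #                <warn_pct> percent.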

Can I partition on a vdo volume

I have a disk with a capacity of 8T and used it to create a 32T vdo volume. I want to create partitions on this vdo volume; is there any problem with that?
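For reference, partitioning the VDO device would presumably work like partitioning any other block device; a sketch, with the device name illustrative, and noting that partitions on a device-mapper device usually need kpartx to be mapped:

    # create a GPT label and one partition spanning the whole 32T logical size
    parted -s /dev/mapper/vdo1 mklabel gpt
    parted -s -a optimal /dev/mapper/vdo1 mkpart primary 0% 100%
    # map the new partition as /dev/mapper/vdo1p1
    kpartx -a /dev/mapper/vdo1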

Where is the dm-vdo `dmsetup create foo --table ...` documentation?

Hello,

I would like to understand the low-level behavior of VDO at the target level. Where is the vdo target's dmsetup create and dmsetup message documentation?

I've looked around; usually there is a dm-<target>.txt or similar document (like Documentation/device-mapper/cache.txt in the kernel tree) that defines all of the device-mapper table options, but I couldn't find anything informative when grepping these "vdo" and "kvdo" git trees for dmsetup.*create and the like.

Is there such a document? If not, where does kvdo parse the table arguments in your kernel module?

Thanks for your help!

-Eric

vdo 6.2.6.3 crash on kernel 4.19

Hi team,
VDO crashed the system on kernel 4.19.
Here is the crash info:
[135468.438992] CPU: 28 PID: 0 Comm: swapper/28 Kdump: loaded Tainted: G W OE 4.19.36

[135468.438996] x25: 0000000000000000
[135468.438998] pstate: 40400009 (nZcv daif +PAN -UAO)
[135468.439000] pc : __list_del_entry_valid+0x60/0xc8
[135468.439002] x24: 0000000000000105
[135468.439004] lr : __list_del_entry_valid+0x60/0xc8
[135468.439006] sp : ffff0000815dbd40
[135468.439009] x29: ffff0000815dbd40 x28: ffffa02f6be1eb80
[135468.439012] x23: ffff0000811d9000
[135468.439017] x22: ffff00000126ad68
[135468.439018] x27: dead000000000100 x26: ffff0000811d9000
[135468.439022] x21: ffffa02fab0cf400
[135468.439025] x25: dead000000000200
[135468.439028] x20: ffff0000811d9000
[135468.439030] x24: ffffa02f69eb6000
[135468.439036] x19: ffffa02f69c4a380
[135468.439038] x23: dead000000000100 x22: ffffa02fab0cf400
[135468.439041] x18: ffffffffffffffff
[135468.439047] x17: 0000000000000000
[135468.439049] x21: dead000000000200 x20: ffff0000815dbdc8
[135468.439054] x16: ffff0000804e0508
[135468.439061] x19: ffffa02f69eb6498 x18: ffffffffffffffff
[135468.439066] x15: ffff0000811d9708 x14: 6461656420736177
[135468.439069] x17: 0000000000000000 x16: ffff0000804e0508
[135468.439076] x13: 20747562202c3839
[135468.439077] x15: ffff0000811d9708 x14: 3030303030303064
[135468.439081] x12: 3461346339366632
[135468.439086] x13: 6165642820314e4f x12: 53494f505f545349
[135468.439089] x11: 3061666666662065
[135468.439093] x10: 0000000000000000
[135468.439095] x11: 4c20736920747865 x10: 6e3e2d3839343662
[135468.439102] x9 : ffff0000811dbaf0
[135468.439104] x9 : 6539366632306166 x8 : ffff0000813d9a54
[135468.439111] x8 : 00000000001a91e0
[135468.439113] x7 : 0000000000000000
[135468.439116] x7 : ffff0000813b6000
[135468.439118] x6 : 00000c066ce23796
[135468.439121] x6 : 0000000000000001
[135468.439123] x5 : 0000000000000001 x4 : ffff802fbfb9a400
[135468.439131] x5 : 0000000000000001
[135468.439133] x3 : ffff802fbfb9a400 x2 : 0000000000000007
[135468.439139] x4 : ffffa02fbf9e0400
[135468.439144] x1 : 45680c97a7e6cd00 x0 : 0000000000000000
[135468.439147] x3 : ffffa02fbf9e0400
[135468.439150] Call trace:
[135468.439152] x2 : 0000000000000007
[135468.439155] __list_del_entry_valid+0x60/0xc8
[135468.439176] timeoutIndexOperations+0x14c/0x268 [kvdo]
[135468.439180] call_timer_fn+0x34/0x178
[135468.439183] x1 : 45680c97a7e6cd00
[135468.439185] expire_timers+0xec/0x158
[135468.439189] run_timer_softirq+0xc0/0x1f8
[135468.439193] __do_softirq+0x120/0x324
[135468.439194] x0 : 0000000000000000
[135468.439199] irq_exit+0x11c/0x140
[135468.439202] __handle_domain_irq+0x6c/0xc0
[135468.439205] Call trace:
[135468.439207] gic_handle_irq+0x6c/0x150
[135468.439211] __list_del_entry_valid+0xa4/0xc8
[135468.439213] el1_irq+0xb8/0x140
[135468.439217] arch_cpu_idle+0x38/0x1c0
[135468.439221] default_idle_call+0x30/0x4c
[135468.439238] finishIndexOperation+0x150/0x208 [kvdo]
[135468.439241] do_idle+0x22c/0x300
[135468.439250] handleCallbacks+0x48/0x80 [uds]
[135468.439260] requestQueueWorker+0x74/0x400 [uds]
[135468.439263] cpu_startup_entry+0x28/0x30
[135468.439273] threadStarter+0xa0/0xd8 [uds]
[135468.439277] secondary_start_kernel+0x17c/0x1c8
[135468.439282] ---[ end trace 488bf3b169748bff ]---
[135468.439285] kthread+0x134/0x138

Below is the vmcore-dmesg file:
20220314vdolog.zip

Upgrade of kmod-kvdo fails

I just tried to upgrade the kmod-kvdo-6.1.1.12-1.fc27.x86_64 rpm package, and I got:

  Running scriptlet: kmod-kvdo-6.1.1.12-1.fc27.x86_64                                                         644/1005
modprobe: FATAL: Module kvdo is in use.
modprobe: FATAL: Module uds is in use.
error: executing %preun(kmod-kvdo-6.1.1.12-1.fc27.x86_64) scriptlet failed, return code: 1
Error in PREUN scriptlet in rpm package kmod-kvdo
Error in PREUN scriptlet in rpm package kmod-kvdo
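One possible workaround (untested here, and assuming the VDO volumes can be taken offline first) is to stop the volumes and unload the modules before upgrading:

    umount /mnt/vdo-volume       # unmount anything on top of the VDO volumes (path illustrative)
    vdo stop --all               # stop all VDO volumes so the modules are no longer in use
    modprobe -r kvdo uds         # unload kvdo first, then uds
    dnf upgrade kmod-kvdo        # retry the upgrade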

Fail to run 'make': "error: field ‘threadDone’ has incomplete type"

I can't get make to finish successfully. My environment:

$ cat /etc/redhat-release 
Fedora release 26 (Twenty Six)

$ uname -r
4.13.9-200.fc26.x86_64

$ rpm -q kernel-core kernel-devel
kernel-core-4.13.9-200.fc26.x86_64
kernel-devel-4.13.9-200.fc26.x86_64

$ rpm -q gcc
gcc-7.2.1-2.fc26.x86_64

I'm issuing:

$ git clone https://github.com/dm-vdo/kvdo.git
...
$ cd kvdo
$ sudo make -C /usr/src/kernels/$(uname -r) M=$(pwd)

The error I see:

make: Entering directory '/usr/src/kernels/4.13.9-200.fc26.x86_64'
  AR      /kvdo/uds/built-in.o
  CC [M]  /kvdo/uds/grid.o
  CC [M]  /kvdo/uds/threadsLinuxKernel.o
/kvdo/uds/threadsLinuxKernel.c:40:21: error: field ‘threadDone’ has incomplete type
   struct completion threadDone;
                     ^~~~~~~~~~
/kvdo/uds/threadsLinuxKernel.c: In function ‘threadStarter’:
/kvdo/uds/threadsLinuxKernel.c:62:3: error: implicit declaration of function ‘complete’ [-Werror=implicit-function-declaration]
   complete(&kt->threadDone);
   ^~~~~~~~
/kvdo/uds/threadsLinuxKernel.c: In function ‘createThread’:
/kvdo/uds/threadsLinuxKernel.c:80:3: error: implicit declaration of function ‘init_completion’; did you mean ‘get_option’? [-Werror=implicit-function-declaration]
   init_completion(&kt->threadDone);
   ^~~~~~~~~~~~~~~
   get_option
/kvdo/uds/threadsLinuxKernel.c: In function ‘joinThreads’:
/kvdo/uds/threadsLinuxKernel.c:92:10: error: implicit declaration of function ‘wait_for_completion_interruptible’; did you mean ‘schedule_timeout_uninterruptible’? [-Werror=implicit-function-declaration]
   while (wait_for_completion_interruptible(&kt->threadDone) != 0) {
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
          schedule_timeout_uninterruptible
cc1: all warnings being treated as errors
make[2]: *** [scripts/Makefile.build:303: /kvdo/uds/threadsLinuxKernel.o] Error 1
make[1]: *** [scripts/Makefile.build:561: /kvdo/uds] Error 2
make: *** [Makefile:1516: _module_/kvdo] Error 2
make: Leaving directory '/usr/src/kernels/4.13.9-200.fc26.x86_64'

Build failed, variable length array

I am not sure if this is something strange with my build flags or if kernel 4.20.1 is a bit more strict. It doesn't feel like something I should just ignore with compiler flags.

DKMS make.log for kvdo-6.2.0.293 for kernel 4.20.1-arch1-1-ARCH (x86_64)
Sun Jan 13 20:53:40 CET 2019
make: Entering directory '/usr/lib/modules/4.20.1-arch1-1-ARCH/build'
  AR      /var/lib/dkms/kvdo/6.2.0.293/build/uds/built-in.a
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/uds/loggerLinuxKernel.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/uds/openChapterZone.o
  AR      /var/lib/dkms/kvdo/6.2.0.293/build/vdo/built-in.a
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/uds/stringLinuxKernel.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/allocatingVIO.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/uds/threadSemaphoreLinuxKernel.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/recoveryJournal.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/uds/bufferedWriter.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/uds/cachedChapterIndex.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/uds/context.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/uds/threadRegistry.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/uds/blockIORegion.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/uds/indexPageMap.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/uds/memoryLinuxKernel.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/constants.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/uds/indexState.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/slabScrubber.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/uds/memoryAlloc.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/uds/random.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/uds/zone.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/pbnLockPool.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/uds/linuxIORegion.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/uds/searchList.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/uds/regionIndexState.o
/var/lib/dkms/kvdo/6.2.0.293/build/uds/searchList.c: In function 'purgeSearchList':
/var/lib/dkms/kvdo/6.2.0.293/build/uds/searchList.c:80:3: error: ISO C90 forbids variable length array 'alive' [-Werror=vla]
   uint8_t alive[searchList->firstDeadEntry];
   ^~~~~~~
/var/lib/dkms/kvdo/6.2.0.293/build/uds/searchList.c:81:3: error: ISO C90 forbids variable length array 'skipped' [-Werror=vla]
   uint8_t skipped[searchList->firstDeadEntry];
   ^~~~~~~
/var/lib/dkms/kvdo/6.2.0.293/build/uds/searchList.c:82:3: error: ISO C90 forbids variable length array 'dead' [-Werror=vla]
   uint8_t dead[searchList->firstDeadEntry];
   ^~~~~~~
cc1: all warnings being treated as errors
make[2]: *** [scripts/Makefile.build:291: /var/lib/dkms/kvdo/6.2.0.293/build/uds/searchList.o] Error 1
make[2]: *** Waiting for unfinished jobs....
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/slabSummary.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/logicalZone.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/vio.o
make[1]: *** [scripts/Makefile.build:516: /var/lib/dkms/kvdo/6.2.0.293/build/uds] Error 2
make[1]: *** Waiting for unfinished jobs....
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/partitionCopy.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/pointerMap.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/header.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/slabJournal.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/compressedBlock.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/extent.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/dirtyLists.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/intMap.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/vioRead.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/objectPool.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/threadConfig.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/vdoResize.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/dataVIO.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/adminCompletion.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/referenceOperation.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/readOnlyModeContext.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/vdoClose.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/blockMapRecovery.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/vdoRecovery.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/lockCounter.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/blockMapTree.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/lz4.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/hashZone.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/physicalZone.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/vioPool.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/upgrade.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/statusCodes.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/threadData.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/pbnLock.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/vdoPageCache.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/blockMap.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/blockMapPage.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/readOnlyRebuild.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/waitQueue.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/slabJournalEraser.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/vdoLoad.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/slabCompletion.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/slab.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/blockAllocator.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/vdoLayout.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/forest.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/vdo.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/slabDepot.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/hashLock.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/physicalLayer.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/recoveryUtils.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/trace.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/vdoDebug.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/priorityTable.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/compressionState.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/volumeGeometry.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/completion.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/vioWrite.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/recoveryJournalBlock.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/vdoResizeLogical.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/referenceCountRebuild.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/heap.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/flush.o
/var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/heap.c: In function 'swapElements':
/var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/heap.c:52:3: error: ISO C90 forbids variable length array 'temp' [-Werror=vla]
   byte temp[heap->elementSize];
   ^~~~
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/packer.o
  CC [M]  /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/slabRebuild.o
cc1: all warnings being treated as errors
make[2]: *** [scripts/Makefile.build:291: /var/lib/dkms/kvdo/6.2.0.293/build/vdo/base/heap.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [scripts/Makefile.build:516: /var/lib/dkms/kvdo/6.2.0.293/build/vdo] Error 2
make: *** [Makefile:1563: _module_/var/lib/dkms/kvdo/6.2.0.293/build] Error 2
make: Leaving directory '/usr/lib/modules/4.20.1-arch1-1-ARCH/build'

Cannot insmod vdo drivers!

I built the driver, but I cannot insmod it:
[root@localhost vdo]# insmod kvdo.ko
insmod: ERROR: could not insert module kvdo.ko: Unknown symbol in module
[root@localhost vdo]# dmesg
[16324.089963] kvdo: Unknown symbol getByte (err 0)
[16324.090009] kvdo: Unknown symbol udsFreeConfiguration (err 0)
[16324.090034] kvdo: Unknown symbol funnelQueuePoll (err 0)
[16324.090053] kvdo: Unknown symbol getUInt64LEFromBuffer (err 0)
[16324.090078] kvdo: Unknown symbol putBytes (err 0)
[16324.090174] kvdo: Unknown symbol MurmurHash3_x64_128 (err 0)
[16324.090199] kvdo: Unknown symbol skipForward (err 0)
[16324.090215] kvdo: Unknown symbol udsConfigurationSetCheckpointFrequency (err 0)
[16324.090231] kvdo: Unknown symbol putBuffer (err 0)
[16324.090249] kvdo: Unknown symbol getBoolean (err 0)
[16324.090291] kvdo: Unknown symbol assertionFailedLogOnly (err 0)
[16324.090315] kvdo: Unknown symbol putUInt32LEIntoBuffer (err 0)
[16324.090335] kvdo: Unknown symbol reportMemoryUsage (err 0)
[16324.090357] kvdo: Unknown symbol udsFlushBlockContext (err 0)
[16324.090374] kvdo: Unknown symbol freeBuffer (err 0)
[16324.090394] kvdo: Unknown symbol getMemoryStats (err 0)
[16324.090423] kvdo: Unknown symbol udsOpenBlockContext (err 0)
[16324.090441] kvdo: Unknown symbol udsStartChunkOperation (err 0)
[16324.090460] kvdo: Unknown symbol wrapBuffer (err 0)
[16324.090477] kvdo: Unknown symbol ensureAvailableSpace (err 0)
[16324.090494] kvdo: Unknown symbol clearBuffer (err 0)
[16324.090513] kvdo: Unknown symbol makeFunnelQueue (err 0)
[16324.090537] kvdo: Unknown symbol duplicateString (err 0)
[16324.090556] kvdo: Unknown symbol putUInt64LEIntoBuffer (err 0)
[16324.090573] kvdo: Unknown symbol freeMemory (err 0)
[16324.090592] kvdo: Unknown symbol recordBioFree (err 0)
[16324.090608] kvdo: Unknown symbol udsGetBlockContextStats (err 0)
[16324.090624] kvdo: Unknown symbol udsRebuildLocalIndex (err 0)
[16324.090643] kvdo: Unknown symbol unregisterAllocatingThread (err 0)
[16324.090663] kvdo: Unknown symbol udsCloseBlockContext (err 0)
[16324.090723] kvdo: Unknown symbol putBoolean (err 0)
[16324.090739] kvdo: Unknown symbol udsConfigurationSetSparse (err 0)
[16324.090761] kvdo: Unknown symbol makeBuffer (err 0)
[16324.090776] kvdo: Unknown symbol udsConfigurationSetNonce (err 0)
[16324.090810] kvdo: Unknown symbol udsGetBlockContextIndexStats (err 0)
[16324.090825] kvdo: Unknown symbol udsComputeIndexSize (err 0)
[16324.090840] kvdo: Unknown symbol udsConfigurationGetNonce (err 0)
[16324.090857] kvdo: Unknown symbol udsGetIndexConfiguration (err 0)
[16324.090875] kvdo: Unknown symbol isFunnelQueueEmpty (err 0)
[16324.090895] kvdo: Unknown symbol putByte (err 0)
[16324.090914] kvdo: Unknown symbol uncompactedAmount (err 0)
[16324.090932] kvdo: Unknown symbol allocateMemoryNowait (err 0)
[16324.090948] kvdo: Unknown symbol udsSaveIndex (err 0)
[16324.090963] kvdo: Unknown symbol udsCreateLocalIndex (err 0)
[16324.090984] kvdo: Unknown symbol assertionFailed (err 0)
[16324.091001] kvdo: Unknown symbol getBufferContents (err 0)
[16324.091019] kvdo: Unknown symbol freeFunnelQueue (err 0)
[16324.091035] kvdo: Unknown symbol reallocateMemory (err 0)
[16324.091071] kvdo: Unknown symbol getBytesFromBuffer (err 0)
[16324.091089] kvdo: Unknown symbol contentLength (err 0)
[16324.091107] kvdo: Unknown symbol nowUsec (err 0)
[16324.091124] kvdo: Unknown symbol recordBioAlloc (err 0)
[16324.091159] kvdo: Unknown symbol registerAllocatingThread (err 0)
[16324.091177] kvdo: Unknown symbol hasSameBytes (err 0)
[16324.091219] kvdo: Unknown symbol udsCloseIndexSession (err 0)
[16324.091247] kvdo: Unknown symbol allocateMemory (err 0)
[16324.091278] kvdo: Unknown symbol udsInitializeConfiguration (err 0)
[16324.091298] kvdo: Unknown symbol allocSprintf (err 0)
[16324.091341] kvdo: Unknown symbol copyBytes (err 0)
[16324.091380] kvdo: Unknown symbol currentTime (err 0)
[16324.091399] kvdo: Unknown symbol resetBufferEnd (err 0)
[16324.091427] kvdo: Unknown symbol getUInt32LEFromBuffer (err 0)
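The symbols listed above look like ones exported by the companion uds module, so a likely cause is inserting kvdo.ko before uds.ko; a minimal sketch, with the module paths assumed to be the build outputs of this tree:

    # load the UDS module first, then kvdo (paths assumed)
    insmod uds/uds.ko
    insmod vdo/kvdo.ko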

kvdo compile fails on Debian (kernel 4.9)

There seems to be some API change (e.g. https://gitlab.com/cip-project/cip-kernel/linux-cip/commit/1eff9d322a444245c67515edb52bc0eb68374aa8), so kvdo does not compile on the current kernel without some changes (like in akiradeveloper/dm-writeboost#164).

CC [M] /usr/local/src/kvdo-master/vdo/kernel/kernelLayer.o
In file included from /usr/local/src/kvdo-master/vdo/kernel/deadlockQueue.h:27:0,
from /usr/local/src/kvdo-master/vdo/kernel/kernelLayer.h:37,
from /usr/local/src/kvdo-master/vdo/kernel/kernelLayer.c:22:
/usr/local/src/kvdo-master/vdo/kernel/bio.h: In function ‘clearBioOperationAndFlags’:
/usr/local/src/kvdo-master/vdo/kernel/bio.h:88:6: error: ‘BIO {aka struct bio}’ has no member named ‘bi_rw’; did you mean ‘bi_opf’?
bio->bi_rw = 0;
^~
/usr/local/src/kvdo-master/vdo/kernel/bio.h: In function ‘copyBioOperationAndFlags’:
/usr/local/src/kvdo-master/vdo/kernel/bio.h:98:5: error: ‘BIO {aka struct bio}’ has no member named ‘bi_rw’; did you mean ‘bi_opf’?
to->bi_rw = from->bi_rw;
^~
/usr/local/src/kvdo-master/vdo/kernel/bio.h:98:19: error: ‘BIO {aka struct bio}’ has no member named ‘bi_rw’; did you mean ‘bi_opf’?
to->bi_rw = from->bi_rw;
^~
/usr/local/src/kvdo-master/vdo/kernel/bio.h: In function ‘setBioOperationFlag’:
/usr/local/src/kvdo-master/vdo/kernel/bio.h:108:6: error: ‘BIO {aka struct bio}’ has no member named ‘bi_rw’; did you mean ‘bi_opf’?
bio->bi_rw |= flag;
^~
/usr/local/src/kvdo-master/vdo/kernel/bio.h: In function ‘clearBioOperationFlag’:
/usr/local/src/kvdo-master/vdo/kernel/bio.h:118:6: error: ‘BIO {aka struct bio}’ has no member named ‘bi_rw’; did you mean ‘bi_opf’?
bio->bi_rw &= flag;
^

/usr/local/src/kvdo-master/vdo/kernel/bio.h: In function ‘isDiscardBio’:
/usr/local/src/kvdo-master/vdo/kernel/bio.h:181:32: error: ‘BIO {aka struct bio}’ has no member named ‘bi_rw’; did you mean ‘bi_opf’?
return (bio != NULL) && ((bio->bi_rw & REQ_DISCARD) != 0);
^~
/usr/local/src/kvdo-master/vdo/kernel/bio.h:181:42: error: ‘REQ_DISCARD’ undeclared (first use in this function)
return (bio != NULL) && ((bio->bi_rw & REQ_DISCARD) != 0);
^~~~~~~~~~~
/usr/local/src/kvdo-master/vdo/kernel/bio.h:181:42: note: each undeclared identifier is reported only once for each function it appears in
/usr/local/src/kvdo-master/vdo/kernel/bio.h: In function ‘isFlushBio’:
/usr/local/src/kvdo-master/vdo/kernel/bio.h:193:14: error: ‘BIO {aka struct bio}’ has no member named ‘bi_rw’; did you mean ‘bi_opf’?
return (bio->bi_rw & REQ_FLUSH) != 0;
^~
/usr/local/src/kvdo-master/vdo/kernel/bio.h:193:24: error: ‘REQ_FLUSH’ undeclared (first use in this function)
return (bio->bi_rw & REQ_FLUSH) != 0;
^~~~~~~~~
/usr/local/src/kvdo-master/vdo/kernel/bio.h: In function ‘isFUABio’:
/usr/local/src/kvdo-master/vdo/kernel/bio.h:205:14: error: ‘BIO {aka struct bio}’ has no member named ‘bi_rw’; did you mean ‘bi_opf’?
return (bio->bi_rw & REQ_FUA) != 0;
^~
/usr/local/src/kvdo-master/vdo/kernel/bio.h: In function ‘isEmptyFlush’:
/usr/local/src/kvdo-master/vdo/kernel/bio.h:227:14: error: ‘BIO {aka struct bio}’ has no member named ‘bi_rw’; did you mean ‘bi_opf’?
return (bio->bi_rw & REQ_FLUSH) != 0;
^~
/usr/local/src/kvdo-master/vdo/kernel/bio.h:227:24: error: ‘REQ_FLUSH’ undeclared (first use in this function)
return (bio->bi_rw & REQ_FLUSH) != 0;
^~~~~~~~~
In file included from /usr/local/src/kvdo-master/vdo/kernel/deadlockQueue.h:27:0,
from /usr/local/src/kvdo-master/vdo/kernel/kernelLayer.h:37,
from /usr/local/src/kvdo-master/vdo/kernel/kernelLayer.c:22:
/usr/local/src/kvdo-master/vdo/kernel/bio.h:229:1: error: control reaches end of non-void function [-Werror=return-type]
}
^
/usr/local/src/kvdo-master/vdo/kernel/bio.h: In function ‘isDiscardBio’:
/usr/local/src/kvdo-master/vdo/kernel/bio.h:183:1: error: control reaches end of non-void function [-Werror=return-type]
}
^
cc1: all warnings being treated as errors
/usr/src/linux-headers-4.9.0-6-common/scripts/Makefile.build:298: recipe for target '/usr/local/src/kvdo-master/vdo/kernel/kernelLayer.o' failed

kvdo failed build on openSUSE Leap 42.3

I get an error when I try to build kvdo (vdo builds successfully) on openSUSE 42.3 with the latest updates.

Kernel:

  • kernel-default-4.4.114-42.1.x86_64
  • kernel-devel-4.4.114-42.1.noarch
  • kernel-source-4.4.114-42.1.noarch
make -C /usr/src/linux M=`pwd`
make: Entering directory '/usr/src/linux-4.4.114-42'

  WARNING: Symbol version dump ./Module.symvers
           is missing; modules will have no dependencies and modversions.

  CC [M]  /home/sw/dm-vdo/kvdo/uds/linuxIORegion.o
/home/sw/dm-vdo/kvdo/uds/linuxIORegion.c: In function ‘lior_bio_submit’:
/home/sw/dm-vdo/kvdo/uds/linuxIORegion.c:192:3: error: passing argument 1 of ‘submit_bio’ makes pointer from integer without a cast [-Werror]
   submit_bio(rw, bio);
   ^
In file included from /home/sw/dm-vdo/kvdo/uds/linuxIORegion.c:24:0:
include/linux/bio.h:411:17: note: expected ‘struct bio *’ but argument is of type ‘int’
 extern blk_qc_t submit_bio(struct bio *);
                 ^
/home/sw/dm-vdo/kvdo/uds/linuxIORegion.c:192:3: error: too many arguments to function ‘submit_bio’
   submit_bio(rw, bio);
   ^
In file included from /home/sw/dm-vdo/kvdo/uds/linuxIORegion.c:24:0:
include/linux/bio.h:411:17: note: declared here
 extern blk_qc_t submit_bio(struct bio *);
                 ^
cc1: all warnings being treated as errors
scripts/Makefile.build:270: recipe for target '/home/sw/dm-vdo/kvdo/uds/linuxIORegion.o' failed
make[2]: *** [/home/sw/dm-vdo/kvdo/uds/linuxIORegion.o] Error 1
scripts/Makefile.build:491: recipe for target '/home/sw/dm-vdo/kvdo/uds' failed
make[1]: *** [/home/sw/dm-vdo/kvdo/uds] Error 2
Makefile:1434: recipe for target '_module_/home/sw/dm-vdo/kvdo' failed
make: *** [_module_/home/sw/dm-vdo/kvdo] Error 2
make: Leaving directory '/usr/src/linux-4.4.114-42'

Please help to resolve this issue.

Deduplication and compression can't be disabled depending on the workload

Hello there,
I observed some unexpected behavior when I use the VDO flags --deduplication=disabled or --compression=disabled, or both. It seems that deduplication and/or compression still remain enabled for some workloads. I ran multiple workloads where vdo status still reported space savings although both stages were disabled. I ensured that the workloads do not contain any zero blocks. To me it seems that this behavior only occurs in workloads that contain duplicates, and the temporal distance between them played a critical role.

The following workloads(WL) were tested:

  1. Compressible only, which worked as expected. Generated by
  mkdir compOnly; cd compOnly
  for i in $(seq 0 1023); do python -c "import os; X=''.join([(os.urandom(1536)+'\xde\xad\xbe\xef'*640) for i in range(256)])[0:-1]; print X" > randComp$i; done
  cat randComp* > compOnly
  2. Dedupable only, contiguous random data duplicated on 1Gb boundary (pattern like ABCD...ABCD...). This also worked as expected. Generated by
  mkdir dedupCont; cd dedupCont
  for i in $(seq 0 1023); do dd if=/dev/urandom of=random_$i bs=1M count=1 ; done
  cat random_* > dedupOnly_contiguous; cat random_* >> dedupOnly_contiguous
  3. Dedupable only, interlaced random data duplicated per 1Mb boundary (pattern like AABBCCDD...). Here, savings were reported even if deduplication and/or compression were disabled. Generated by
  mkdir dedupInterlaced; cd dedupInterlaced
  for i in $(seq 0 1023); do dd if=/dev/urandom of=random_$i bs=1M count=1 ; done
  for i in random*; do cat $i >> dedupOnly_interlaced; cat $i >> dedupOnly_interlaced; done
  4. Compressible and dedupable, contiguous compressible data duplicated on 1 Gb boundary (ABCD...ABCD...). Like the dedupable contiguous workload, it worked as expected. Generated by
  mkdir dedupCompCont; cd dedupCompCont
  for i in $(seq 0 1023); do python -c "import os; X=''.join([(os.urandom(1536)+'\xde\xad\xbe\xef'*640) for i in range(256)])[0:-1]; print X" > randComp$i; done
  cat randComp* > dedup_comp_contiguous; cat randComp* >> dedup_comp_contiguous
  5. Compressible and dedupable, interlaced compressible data duplicated per 1Mb boundary (AABBCCDD...). Like for the interlaced dedupable only workload, savings can be observed in all dedupe/comp=on/off combinations. Generated by
  mkdir dedupCompInterlaced; cd dedupCompInterlaced
  for i in $(seq 0 1023); do python -c "import os; X=''.join([(os.urandom(1536)+'\xde\xad\xbe\xef'*640) for i in range(256)])[0:-1]; print X" > randComp$i; done
  for i in randComp*; do cat $i >> dedup_comp_interlaced; cat $i >> dedup_comp_interlaced; done

Test setup

For each test, a fresh VDO volume was created using vdo & kvdo version 6.1.3.23 on Centos 7.9.2009. However, short tests under Centos 8 revealed similar behavior. The workloads were tested using dd, but the issue was also observable using fio.

vdo create --name=<vol_name> --device=<dev-name> --compression=enabled --deduplication=enabled
dd if=<WL-file> of=/dev/mapper/<vdo_vol> oflag=direct status=progress
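For reference, the saving percentage can be read with vdostats (volume name is a placeholder):

    # report usage and space savings for the volume
    vdostats --human-readable /dev/mapper/<vdo_vol>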

The following savings were reported:

                                  dedupe,comp=on   Only comp=on   Only dedupe=on   dedupe,comp=off
1. compression only WL                 49%             49%              0%               0%
2. dedupe only WL, contiguously        50%              0%             50%               0%
3. dedupe only WL, interlaced          50%             46%             50%              43%
4. dedupe+comp WL, contiguously        74%             49%             50%               0%
5. dedupe+comp WL, interlaced          66%             68%             50%              45%

I assume the behavior is somehow triggered by the deduplication stage, which seems to still deduplicate blocks that appear within a certain window of locality (here 1Mb). However, whether deduplication is turned on or off also has some effect on the savings, which may decrease (as in WL 3) or increase (as in WL 5).

i/o hang on vdo device

Version 6.1.1.24, rhel 7.4,

3.10.0-693.21.1.el7.x86_64

Got this on a vdo volume:

[239977.562141] INFO: task xfsaild/dm-3:12083 blocked for more than 120 seconds.
[239977.562176] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[239977.562216] xfsaild/dm-3 D ffff88086ea00fd0 0 12083 2 0x00000080
[239977.562220] Call Trace:
[239977.562226] [] schedule+0x29/0x70
[239977.562262] [] _xfs_log_force+0x1c6/0x2c0 [xfs]
[239977.562266] [] ? wake_up_state+0x20/0x20
[239977.562281] [] ? xfsaild+0x16c/0x6f0 [xfs]
[239977.562309] [] xfs_log_force+0x2c/0x70 [xfs]
[239977.562325] [] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[239977.562340] [] xfsaild+0x16c/0x6f0 [xfs]
[239977.562355] [] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[239977.562361] [] kthread+0xd1/0xe0
[239977.562363] [] ? insert_kthread_work+0x40/0x40
[239977.562367] [] ret_from_fork+0x77/0xb0
[239977.562369] [] ? insert_kthread_work+0x40/0x40
[240457.516304] INFO: task xfsaild/dm-3:12083 blocked for more than 120 seconds.
[240457.516342] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[240457.516373] xfsaild/dm-3 D ffff88086ea00fd0 0 12083 2 0x00000080
[240457.516377] Call Trace:
[240457.516385] [] schedule+0x29/0x70
[240457.516426] [] _xfs_log_force+0x1c6/0x2c0 [xfs]
[240457.516430] [] ? wake_up_state+0x20/0x20
[240457.516446] [] ? xfsaild+0x16c/0x6f0 [xfs]
[240457.516462] [] xfs_log_force+0x2c/0x70 [xfs]
[240457.516481] [] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[240457.516501] [] xfsaild+0x16c/0x6f0 [xfs]
[240457.516523] [] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[240457.516526] [] kthread+0xd1/0xe0
[240457.516529] [] ? insert_kthread_work+0x40/0x40
[240457.516534] [] ret_from_fork+0x77/0xb0
[240457.516537] [] ? insert_kthread_work+0x40/0x40
[394784.811235] FS-Cache: Loaded

Thank you!

Compilation error error: ISO C90 forbids array ‘buf’ whose size can’t be evaluated [-Werror=vla]

Hello!
I'd like to try kvdo on Gentoo but I'm getting a compilation error:

# make -C /usr/src/linux-bcache M="/tmp/kvdo" -k
make: Entering directory '/usr/src/linux-bcache'
  CC [M]  /tmp/kvdo/uds/util/pathBuffer.o
/tmp/kvdo/uds/util/pathBuffer.c: In function ‘initializePathBufferSprintf’:
/tmp/kvdo/uds/util/pathBuffer.c:91:3: error: ISO C90 forbids array ‘buf’ whose size can’t be evaluated [-Werror=vla]
   char buf[DEFAULT_PATH_BUFFER_SIZE];
   ^~~~
cc1: all warnings being treated as errors
make[2]: *** [scripts/Makefile.build:279: /tmp/kvdo/uds/util/pathBuffer.o] Error 1
make[2]: Target '__build' not remade because of errors.
make[1]: *** [scripts/Makefile.build:489: /tmp/kvdo/uds] Error 2
  CC [M]  /tmp/kvdo/vdo/base/heap.o
/tmp/kvdo/vdo/base/heap.c: In function ‘swapElements’:
/tmp/kvdo/vdo/base/heap.c:52:3: error: ISO C90 forbids variable length array ‘temp’ [-Werror=vla]
   byte temp[heap->elementSize];
   ^~~~
cc1: all warnings being treated as errors
make[2]: *** [scripts/Makefile.build:279: /tmp/kvdo/vdo/base/heap.o] Error 1
make[2]: Target '__build' not remade because of errors.
make[1]: *** [scripts/Makefile.build:489: /tmp/kvdo/vdo] Error 2
make[1]: Target '__build' not remade because of errors.
make: *** [Makefile:1595: _module_/tmp/kvdo] Error 2
make: Target '_all' not remade because of errors.
make: Leaving directory '/usr/src/linux-bcache'
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-pc-linux-gnu/8.3.0/lto-wrapper
Target: x86_64-pc-linux-gnu
Configured with: /var/tmp/portage/sys-devel/gcc-8.3.0-r1/work/gcc-8.3.0/configure --host=x86_64-pc-linux-gnu --build=x86_64-pc-linux-gnu --prefix=/usr --bindir=/usr/x86_64-pc-linux-gnu/gcc-bin/8.3.0 --includedir=/usr/lib/gcc/x86_64-pc-linux-gnu/8.3.0/include --datadir=/usr/share/gcc-data/x86_64-pc-linux-gnu/8.3.0 --mandir=/usr/share/gcc-data/x86_64-pc-linux-gnu/8.3.0/man --infodir=/usr/share/gcc-data/x86_64-pc-linux-gnu/8.3.0/info --with-gxx-include-dir=/usr/lib/gcc/x86_64-pc-linux-gnu/8.3.0/include/g++-v8 --with-python-dir=/share/gcc-data/x86_64-pc-linux-gnu/8.3.0/python --enable-languages=c,c++ --enable-obsolete --enable-secureplt --disable-werror --with-system-zlib --enable-nls --without-included-gettext --enable-checking=release --with-bugurl=https://bugs.gentoo.org/ --with-pkgversion='Gentoo Hardened 8.3.0-r1 p1.1' --enable-esp --enable-libstdcxx-time --disable-libstdcxx-pch --enable-shared --enable-threads=posix --enable-__cxa_atexit --enable-clocale=gnu --disable-multilib --with-multilib-list=m64 --disable-altivec --disable-fixed-point --enable-targets=all --enable-libgomp --disable-libmudflap --disable-libssp --disable-libmpx --disable-systemtap --enable-vtable-verify --disable-libquadmath --enable-lto --with-isl --disable-isl-version-check --enable-default-pie --enable-default-ssp
Thread model: posix

kernel-5.2.x

wishlist: Encrypted blocks and HMAC-driven deduplication domains to provide volume snapshots.

It has been my goal to have a dm-target (or stack of targets) that provides the following features, and it seems that VDO is close. VDO already supports deduplication, compression, and thin provisioning: It would be nice to see per-volume encryption and snapshots as well because that could provide encrypted thin volumes with snapshots that deduplicate and compress. Unfortunately compression, de-duplication, and encryption are at odds with each other:

  • The problem with encrypted de-duplication of, let's say, a dm-thinpool backing device (tdata) is that even if the same encryption key is used across logical volumes, the data offset seeds the encryption, so the on-disk representation is different and cannot be deduplicated.
  • Relatedly, compression must happen before encryption because encrypted data has too much entropy to compress.

To implement encrypted thin volumes (with different keys per volume) that are compressed throughout the lifecycle of their snapshot history, the possible existing topologies and their shortcomings are as follows:

  1. SCSI => dm-crypt -> dm-vdo -> dm-thin
    • Compresses and deduplicates, but has a shared dm-crypt key at the bottom, and we want per-volume encryption to deactivate volumes while still getting deduplication against other volumes with the same key (see the command-level sketch after this list).
  2. SCSI => dm-vdo -> dm-thin -> dm-crypt
    • Provides thin provisioning and encryption but the value of VDO is nullified by the encryption above dm-thin
  3. SCSI => dm-thin -> encryption -> dm-vdo
    • This is nearly the best option because it supports thin provisioning snapshots with per-volume encryption, deduplication and compression. Unfortunately deduplicated content from thin-snapshot divergence over time is lost because VDO is at the bottom. That is, if you:
      1. Write A
      2. Snapshot
      3. Delete A
      4. Snapshot
      5. Write A
    • Thus you will have two copies of A (when "A" is identical content) because VDO never sees the first and last A as being the same (indeed, no single instance of VDO is "seeing" them).
  4. SCSI => dm-crypt+dm-vdo -> dm-thin
    • If encryption+VDO were implemented above LVM's tdata volume such that each dm-thin pool was compressed and deduplicated by VDO, but LVM could still manage the per-customer pool, then you could have per-customer encryption with per-customer deduplication, but there is an issue that creates a disk usage inefficiency: dm-thin pools are allocated statically; multiple dm-thin pools themselves are not thin, they are statically allocated.
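As referenced under topology 1 above, a command-level sketch of that stack; device names, sizes, and volume names are illustrative and nothing here is a recommendation:

    # SCSI => dm-crypt -> dm-vdo -> dm-thin (topology 1)
    cryptsetup luksFormat /dev/sdb                       # shared encryption key at the bottom
    cryptsetup open /dev/sdb cryptbase
    vdo create --name=vdo0 --device=/dev/mapper/cryptbase --vdoLogicalSize=10T
    pvcreate /dev/mapper/vdo0                            # carve the thin pool out of the VDO device
    vgcreate vg_thin /dev/mapper/vdo0
    lvcreate --type thin-pool -l 100%FREE -n pool0 vg_thin
    lvcreate --thin -V 1T -n vol0 vg_thin/pool0          # thin volumes and snapshots on top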

One way to solve this would be to implement each feature in a target that has snapshotting and encryption as additional features, maybe within VDO. This might be added as follows:

  1. Incorporate thin provisioning using a btree or similar structure:
    • Similar to dm-thin, snapshots generate a snapshot ID
    • The snapshot ID references a logical-block to deduplicated-block mapping.
    • VDO target activation selects some snapshot ID from the same VDO pool
  2. Add encryption around compressed+hashed blocks:
    • Store the hash as an HMAC (or in the clear for higher performance/lower security)
    • Encrypt the data: use the hash as the CBC IV since it is never duplicated (or use some other means of extending a block cipher). Things like ESSIV are not needed because the data is de-duplicated so the mapping index may place the de-duplicated block in multiple volume locations anyway.
  3. All snapshots sharing the same HMAC deduplicate to the same deduplicated-blocks, thus all thin provisioned VDO volumes share the compressed+encrypted block.
  4. Theoretically, two different thin volumes could have the same HMAC for deduplication but different encryption keys. However, both keys would need to be active, or one volume might fail to decrypt a block that was encrypted with another volume's key. Practically speaking, each HMAC needs to be matched with its own cipher key.
  5. Different HMAC keys provide different deduplication domains: one customer might have multiple volumes with one shared key that all deduplicate against each other, whereas another customer would have a different HMAC key, keeping customer #1's data cryptographically isolated from customer #2's data.
  6. This may sound equivalent to "SCSI => dm-crypt -> dm-vdo -> dm-thin" above, but the additional benefit here is that the encrypted+deduplicated block store is a shared resource: we get isolation between organizations via deduplication domains, still share the same pool space without static pool allocation, and keep all the advantages of deduplication with compression.
  7. Additional caching layers (e.g., dm-cache, dm-writeboost, bcache) can be placed above dm-vdo to hide the processing latency of the lower levels if so desired.
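
As a rough user-space illustration of item 2 above (not VDO code: openssl and lz4 stand in for the in-kernel primitives, and DEDUP_KEY / VOLUME_KEY are hypothetical keys, with VOLUME_KEY given as the hex string that openssl enc -K expects):

    # Per-block sketch: keyed hash -> compress -> encrypt, with the hash as the CBC IV
    BLOCK=block.bin                                     # one logical block (hypothetical file)
    HMAC=$(openssl dgst -sha256 -hmac "$DEDUP_KEY" -binary "$BLOCK" | xxd -p -c 64)
    IV=${HMAC:0:32}                                     # first 16 bytes of the HMAC as the IV
    lz4 -q -c "$BLOCK" | openssl enc -aes-256-cbc -K "$VOLUME_KEY" -iv "$IV" -out block.enc
    echo "$HMAC  block.enc"                             # the HMAC is the deduplication lookup key

Volumes that share the same DEDUP_KEY (and, per item 4, the same VOLUME_KEY) land in one deduplication domain and store identical blocks once; a different key pair gives another customer a cryptographically isolated domain, as in items 5 and 6.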

Anyway, this has been on my mind for literally years and I'm just now writing it up. It could be a neat project if someone is interested in extending VDO to support this.

-Eric

kvdo 6.1.3.23 hangs the operating system

I created 3 VDO logical volumes as backup storage. After about 50 days of running, my system hung and became unresponsive; I can't log in by ssh.

My system's kernel is 3.10.0-957.
[screenshot]

Below is the info from the messages file:
[screenshot]

After some repeated messages, the last message is:
[screenshot]

UDS on separate device

From reading the documentation, it looks like the UDS index is created on the same storage device where the data is stored. The documentation doesn't show a parameter to specify a different device for the UDS index.

Is it correct to assume that at this point the UDS index and the VDO volume have to be on the same block device?

If so, can we please file a feature request to allow specifying a different block device for the UDS index of a VDO volume (or perhaps even of a set of VDO volumes)?

The use case is to have the UDS index on an SSD while the VDO volume is on an HDD. While this is still likely not optimal, since I suspect the block layout on the HDD will end up more random than sequential, it seems like there should be at least some improvement from reduced head contention on the HDD as it accesses the UDS index and the VDO data during normal operations.

Big problem deleting big files on VDO: it stops all I/O requests

I have a big problem.

I use GlusterFS over VDO (with XFS) in this case.
My VDO version:
Kernel module:
Loaded: true
Name: kvdo
Version information:
kvdo version: 6.2.0.219

In my case we have many big files over 500 GB; some are 1 TB or 2 TB.
When one of them is deleted, VDO hangs all I/O requests.
It also happens with smaller files like 20 GB or 50 GB, but once the size is over 100 GB the problem is very frequent.
Dedupe is off in my setup; only compression is on.
I also set discard_limit to 100 from the default 1500 (see the note below).
Here is some information from my system; I used iostat to record it.
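
(Regarding the discard_limit setting mentioned above, this is roughly how that knob is adjusted; the sysfs attribute name and path are assumptions based on this report, and vdo0 is a placeholder for the volume name:)

    # Check and lower the per-volume discard limit (default 1500 according to the report above)
    cat /sys/kvdo/vdo0/discards_limit
    echo 100 > /sys/kvdo/vdo0/discards_limit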

You will see that the device util is 100% and it hangs all I/O requests.

iostat (dm-3 is the VDO device):
avg-cpu: %user %nice %system %iowait %steal %idle
0.38 0.00 28.95 0.64 0.00 70.03

Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdb 0.00 218.00 0.00 15055.00 0.00 61152.00 8.12 0.54 0.04 0.00 0.04 0.03 49.00
sda 0.00 205.00 1.00 16104.00 8.00 65908.00 8.19 0.78 0.05 0.00 0.05 0.04 70.00
sdc 0.00 4.00 0.00 125.00 0.00 644.00 10.30 0.06 0.50 0.00 0.50 0.01 0.10
dm-0 0.00 0.00 0.00 129.00 0.00 644.00 9.98 0.06 0.50 0.00 0.50 0.01 0.10
dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
dm-2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
bcache0 0.00 0.00 0.00 15980.00 0.00 61144.00 7.65 0.87 0.05 0.00 0.05 0.05 75.40
dm-3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 100.00

avg-cpu: %user %nice %system %iowait %steal %idle
0.51 0.00 29.62 0.76 0.00 69.11

Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdb 0.00 193.00 0.00 15245.00 0.00 61764.00 8.10 0.53 0.03 0.00 0.03 0.03 49.90
sda 0.00 192.00 1.00 16268.00 4.00 66512.00 8.18 0.77 0.05 0.00 0.05 0.04 69.60
sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
dm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
dm-2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
bcache0 0.00 0.00 0.00 16137.00 0.00 61756.00 7.65 0.88 0.05 0.00 0.05 0.05 74.60
dm-3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 100.00

avg-cpu: %user %nice %system %iowait %steal %idle
0.39 0.00 28.35 0.64 0.00 70.62

Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdb 0.00 143.00 0.00 15014.00 0.00 60648.00 8.08 0.54 0.04 0.00 0.04 0.03 50.20
sda 0.00 142.00 1.00 15996.00 4.00 65224.00 8.16 0.74 0.05 0.00 0.05 0.04 67.30
sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
dm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
dm-2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
bcache0 0.00 0.00 0.00 15830.00 0.00 60648.00 7.66 0.87 0.05 0.00 0.05 0.05 73.30
dm-3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 100.00

top info
Tasks: 1929 total, 2 running, 140 sleeping, 0 stopped, 0 zombie
%Cpu(s): 2.8 us, 17.0 sy, 0.0 ni, 71.7 id, 8.0 wa, 0.0 hi, 0.4 si, 0.0 st
KiB Mem : 13158505+total, 7327428 free, 10939488+used, 14862756 buff/cache
KiB Swap: 4194300 total, 4194300 free, 0 used. 8015328 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2144 root 20 0 0 0 0 S 19.8 0.0 3767:26 kvdo0:journalQ
2150 root 20 0 0 0 0 S 15.8 0.0 4701:00 kvdo0:physQ0
18560 root 20 0 2429296 40120 9004 S 12.2 0.0 1585:55 glusterfsd
2161 root 20 0 0 0 0 R 11.2 0.0 3579:51 kvdo0:cpuQ0
2162 root 20 0 0 0 0 S 11.2 0.0 3572:51 kvdo0:cpuQ1
17807 root 0 -20 0 0 0 D 10.2 0.0 109:02.36 kworker/5:0H
2146 root 20 0 0 0 0 S 7.6 0.0 1678:45 kvdo0:logQ0
2147 root 20 0 0 0 0 S 7.3 0.0 1674:34 kvdo0:logQ1
2148 root 20 0 0 0 0 S 7.3 0.0 1674:09 kvdo0:logQ2
2149 root 20 0 0 0 0 S 7.3 0.0 1672:51 kvdo0:logQ3
18567 root 20 0 2369988 31936 9068 S 5.9 0.0 483:05.83 glusterfsd
2145 root 20 0 0 0 0 S 4.0 0.0 1572:49 kvdo0:packerQ
2151 root 20 0 0 0 0 S 4.0 0.0 1446:38 kvdo0:hashQ0
2152 root 20 0 0 0 0 S 4.0 0.0 1442:42 kvdo0:hashQ1
2156 root 20 0 0 0 0 S 2.6 0.0 798:50.26 kvdo0:bioQ0
2157 root 20 0 0 0 0 S 2.6 0.0 779:48.42 kvdo0:bioQ1
2158 root 20 0 0 0 0 S 2.6 0.0 778:43.52 kvdo0:bioQ2
2159 root 20 0 0 0 0 S 2.6 0.0 776:37.81 kvdo0:bioQ3
2160 root 20 0 0 0 0 S 2.6 0.0 974:01.15 kvdo0:ackQ

Latest kvdo push (6.2.2.117) does not build on 5.6 kernel

Currently the build is failing due to the time_t symbol being removed in the 5.6 kernel.

make: Entering directory '/usr/src/kernels/5.6.0-0.rc3.git0.1.fc32.x86_64'
  AR      /var/lib/dkms/kvdo/6.2.2.117-6.2.2.117/build/uds/built-in.a
  CC [M]  /var/lib/dkms/kvdo/6.2.2.117-6.2.2.117/build/uds/indexInternals.o
In file included from /var/lib/dkms/kvdo/6.2.2.117-6.2.2.117/build/uds/common.h:27,
                 from /var/lib/dkms/kvdo/6.2.2.117-6.2.2.117/build/uds/openChapterZone.h:25,
                 from /var/lib/dkms/kvdo/6.2.2.117-6.2.2.117/build/uds/chapterWriter.h:27,
                 from /var/lib/dkms/kvdo/6.2.2.117-6.2.2.117/build/uds/index.h:25,
                 from /var/lib/dkms/kvdo/6.2.2.117-6.2.2.117/build/uds/indexInternals.h:25,
                 from /var/lib/dkms/kvdo/6.2.2.117-6.2.2.117/build/uds/indexInternals.c:22:
/var/lib/dkms/kvdo/6.2.2.117-6.2.2.117/build/uds/uds.h:182:3: error: unknown type name ‘time_t’
  182 |   time_t   currentTime;
      |   ^~~~~~
In file included from /var/lib/dkms/kvdo/6.2.2.117-6.2.2.117/build/uds/threads.h:27,
                 from /var/lib/dkms/kvdo/6.2.2.117-6.2.2.117/build/uds/indexSession.h:29,
                 from /var/lib/dkms/kvdo/6.2.2.117-6.2.2.117/build/uds/index.h:27,
                 from /var/lib/dkms/kvdo/6.2.2.117-6.2.2.117/build/uds/indexInternals.h:25,
                 from /var/lib/dkms/kvdo/6.2.2.117-6.2.2.117/build/uds/indexInternals.c:22:
/var/lib/dkms/kvdo/6.2.2.117-6.2.2.117/build/uds/timeUtils.h:225:15: error: unknown type name ‘time_t’
  225 | static INLINE time_t asTimeT(AbsTime time)
      |               ^~~~~~
/var/lib/dkms/kvdo/6.2.2.117-6.2.2.117/build/uds/timeUtils.h:241:33: error: unknown type name ‘time_t’; did you mean ‘ktime_t’?
  241 | static INLINE AbsTime fromTimeT(time_t time)
      |                                 ^~~~~~
      |                                 ktime_t
make[2]: *** [scripts/Makefile.build:268: /var/lib/dkms/kvdo/6.2.2.117-6.2.2.117/build/uds/indexInternals.o] Error 1
make[1]: *** [scripts/Makefile.build:505: /var/lib/dkms/kvdo/6.2.2.117-6.2.2.117/build/uds] Error 2
make: *** [Makefile:1681: /var/lib/dkms/kvdo/6.2.2.117-6.2.2.117/build] Error 2
make: Leaving directory '/usr/src/kernels/5.6.0-0.rc3.git0.1.fc32.x86_64'
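
A hedged workaround sketch, on the assumption that the 6.2.2.x sources compile once their time_t uses are switched to the kernel's time64_t; this is not an official fix, and individual call sites may still need hand editing:

    # Kernels >= 5.6 no longer define time_t, so rewrite the UDS sources to use time64_t
    grep -rlw time_t uds/ | xargs sed -i 's/\btime_t\b/time64_t/g'
    make -C /usr/src/kernels/`uname -r` M=`pwd`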


Hello,

The VDO Integration Guide (link is to the RHEL 7 documentation, but many of the concepts still apply to VDO in general) has a lot of information on how to set up VDO. There is also a subsection called Tuning VDO, with information on fine-tuning VDO settings.

For the index of this VDO volume, you chose "2 GB memory" and "Sparse mode"; this will configure a deduplication window of 20 TB. How big is your base backup set? If the backup set is much smaller than 20 TB, it may be possible to use a smaller index memory setting.

In terms of memory, a VDO volume requires about 370 MB of memory, plus an additional 268 MB per 1 TB of physical storage. In the case of your VDO volume, if the default 128 MB block map cache size is added, this is about 29.3 GB of memory to start the VDO volume (not including the index memory).
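
As a rough back-of-the-envelope check of those figures (the 100 TB below is only a placeholder, not the size from the original report):

    # Estimate VDO start-up memory: 370 MB base + 268 MB per TB of physical storage + block map cache
    vdo_mem_mb() {
        local phys_tb=$1 block_map_cache_mb=${2:-128}
        echo $(( 370 + 268 * phys_tb + block_map_cache_mb ))
    }
    vdo_mem_mb 100    # -> 27298 MB (about 27 GB) for a hypothetical 100 TB volume, index memory not included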

There's a mailing list for VDO development, vdo-devel; please feel free to send an email to the list with questions like these.

Originally posted by @bgurney-rh in #21 (comment)

Build failure of 6.2.1.102 against 4.19.10 kernel

I am getting the following compilation error using tag 6.2.1.102 against kernel version 4.19.10:

In file included from /kvdo/vdo/../uds/memoryAlloc.h:30,
                 from /kvdo/vdo/kernel/histogram.c:24:
/kvdo/vdo/kernel/histogram.c: In function 'makeLogarithmicJiffiesHistogram':
/kvdo/vdo/../uds/permassert.h:114:5: error: duplicate case value
     case expr:              \
     ^~~~
/kvdo/vdo/kernel/histogram.c:600:3: note: in expansion of macro 'STATIC_ASSERT'
   STATIC_ASSERT((MSEC_PER_SEC % HZ) == 0);
   ^~~~~~~~~~~~~
/kvdo/vdo/../uds/permassert.h:113:5: note: previously used here
     case 0:                 \
     ^~~~
/kvdo/vdo/kernel/histogram.c:600:3: note: in expansion of macro 'STATIC_ASSERT'
   STATIC_ASSERT((MSEC_PER_SEC % HZ) == 0);
   ^~~~~~~~~~~~~
make[2]: *** [scripts/Makefile.build:303: /kvdo/vdo/kernel/histogram.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [scripts/Makefile.build:544: /kvdo/vdo] Error 2
make: *** [Makefile:1517: _module_/kvdo] Error 2
make: Leaving directory '/usr/src/linux-4.19.10'
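
For what it's worth, the failing STATIC_ASSERT expands to a switch statement with case 0: and case (expr):, so a "duplicate case value" error means the asserted expression evaluated to 0, i.e. MSEC_PER_SEC (1000) is not an exact multiple of this kernel's HZ. A quick way to confirm that on the build host (the config path is an assumption):

    # e.g. CONFIG_HZ=300 would make (1000 % HZ) != 0 and trip the assertion
    grep 'CONFIG_HZ=' /usr/src/linux-4.19.10/.config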

It may be possible to support the reflink syscall

https://www.winehq.org/pipermail/wine-devel/2021-July/191267.html

This change made me think: ext4 does not support the reflink syscalls. What VDO does by deduplicating does save disk space, yes, but it is CPU-expensive because of the comparisons. Suppose ext4 sitting on VDO resulted in the reflink syscalls being functional: when a reflink syscall is used, you know straight away that the data is a duplicate, without having to perform a compare to find out.

This would allow supporting user-space deduplication on demand, so that a VDO volume with deduplication turned off would not have to be 100 percent without deduplication. I don't think there is a clean syscall for "compress this file"; I could be wrong.
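
As a user-space illustration of what the suggestion would enable (this works today on filesystems that implement FICLONE, such as XFS and Btrfs; it is not something VDO or ext4 currently offers):

    # A reflink copy declares the duplicate up front, so no hash-and-compare is needed
    cp --reflink=always big-image.raw clone.raw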
