
linbit / linstor-server

High-performance software-defined block storage for containers, cloud, and virtualization. Fully integrated with Docker, Kubernetes, OpenStack, Proxmox, etc.

Home Page: https://docs.linbit.com/docs/linstor-guide/

License: GNU General Public License v3.0

Makefile 0.05% Shell 0.03% Java 99.72% C 0.08% Python 0.12%
sds drbd linstor zfs software-defined-storage open-source block-storage storage high-availability cloud-native

linstor-server's Introduction


LINSTOR is a SODA ECO Project

What is LINSTOR®

LINSTOR®, developed by LINBIT, is software that manages replicated volumes across a group of machines. With native integration with Kubernetes, LINSTOR makes building, running, and controlling block storage simple. LINSTOR is open-source software designed to manage block storage devices for large Linux server clusters. It is used to provide persistent Linux block storage for cloud-native and hypervisor environments.

Historically, LINSTOR started as a resource-file generator for DRBD® that also conveniently created LVM/ZFS volumes. Over time, LINSTOR steadily grew, gaining new features and drivers in both directions: south-bound features such as snapshots, LUKS, dm-cache, dm-writecache, and NVMe, and north-bound drivers for Kubernetes, OpenStack, OpenNebula, OpenShift, and VMware.

How it works

The LINSTOR system consists of multiple server and client components. A LINSTOR controller manages the configuration of the LINSTOR cluster and all of its managed storage resources. The LINSTOR satellite component manages the creation, modification, and deletion of storage resources on each node that provides or uses storage resources managed by LINSTOR.

The storage system can be managed directly, using a command-line utility to interact with the active LINSTOR controller. Alternatively, users may integrate LINSTOR into the storage architecture of other software systems, such as Kubernetes. All communication between LINSTOR components uses LINSTOR's own network protocol, based on TCP/IP network connections.
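
For illustration, here is a minimal controller-side CLI session (a sketch only: the node names, addresses, and the LVM volume group vg0 are placeholders, and the exact syntax assumes a recent linstor-client):

# register two satellite nodes with the controller
linstor node create alpha 192.168.0.10
linstor node create bravo 192.168.0.11

# back each node with an LVM storage pool named "pool0" on volume group vg0
linstor storage-pool create lvm alpha pool0 vg0
linstor storage-pool create lvm bravo pool0 vg0

# define a resource with one 10 GiB volume and let LINSTOR place it on 2 nodes
linstor resource-definition create demo
linstor volume-definition create demo 10G
linstor resource create demo --auto-place 2 -s pool0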

Features

  • Open Source

  • Main Features

    • Provides replicated block storage and persistent container storage
    • Separation of Data & Control plane
    • Online live migration of backend storage
    • Compatible with high I/O workloads like databases
    • Storage tiering (multiple storage pools)
    • Choose your own Linux filesystem
    • Rich set of plugins
  • Storage Related Features

    • Network replication through DRBD integration
    • LVM Snapshot Support
    • LVM Thin Provisioning Support
    • RDMA
    • Management of persistent memory (PMEM)
    • ZFS support
    • NVMe over Fabrics
  • Network Related Features

    • Replicate via multiple network cards
    • Automatic management of TCP/IP port ranges, minor number ranges, and similar settings, keeping resource configuration consistent
    • Scale-up and scale-out
    • Rest API
    • LDAP Authentication

User Guide

If you want to use the full LINSTOR feature set (such as quorum, DRBD replication, etc.), you will need at least three nodes. The linstor-controller and linstor-client roles should be installed on one node, and all nodes should run linstor-satellite.

LINSTOR can also perform disk operations without using DRBD. However, if replication with DRBD is desired, DRBD 9 must be installed on all servers. For DRBD installation, please follow this link.
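
As a rough sketch of the initial setup, assuming the packaged systemd units, with placeholder hostnames and addresses:

# on the node chosen as controller
systemctl enable --now linstor-controller

# on every node that provides or uses storage
systemctl enable --now linstor-satellite

# then register all nodes from the node that has the linstor client
linstor node create node-a 10.0.0.1
linstor node create node-b 10.0.0.2
linstor node create node-c 10.0.0.3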

For a more detailed installation guide, please follow the link below.

LINSTOR GUIDE

Plugins

LINSTOR is currently extended with the following plugins. Instructions on how to use them in your own application are linked below.

Plugin More Information
iSCSI https://github.com/LINBIT/linstor-iscsi
VSAN https://www.linbit.com/linstor-vsan-software-defined-storage-for-vmware%e2%80%8b/
OpenShift https://www.linbit.com/openshift-persistent-container-storage-support/
OpenNebula https://www.linbit.com/drbd-user-guide/linstor-guide-1_0-en/#ch-opennebula-linstor
Kubernetes https://www.linbit.com/drbd-user-guide/linstor-guide-1_0-en/#ch-kubernetes
OpenStack https://www.linbit.com/drbd-user-guide/linstor-guide-1_0-en/#ch-openstack-linstor

Support

LINSTOR is open source software. You can use our Slack channel (linked above) to get support for individual and development use. If you are going to use it in enterprise and mission-critical environments, please contact us via the link below for professional support.

LINSTOR Support

Releases

Releases generated by git tags on GitHub are snapshots of the git repository at the given time. They might lack things such as generated man pages, the configure script, and other generated files. If you want to build from a tarball, use the ones we provide.

As an alternative, see the "Building" section below.

Building

Gradle is used for building LINSTOR. On a fresh git clone, some protobuf Java files need to be generated, which requires a matching proto compiler. Before building, you therefore need to run:

$ ./gradlew getProtoc

After the correct proto compiler is installed in the ./tools directory you can build with:

$ ./gradlew assemble
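
The assemble task produces a distribution tarball under build/distributions/ that can be unpacked and run directly; a sketch (the version number will vary, and note that the controller needs a valid database configuration to start, as one of the issues below illustrates):

tar xf build/distributions/linstor-server-0.9.5.tar
cd linstor-server-0.9.5
./bin/Controller    # or ./bin/Satellite on a storage node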

Development

Please check the development documentation for details.

LINSTOR Development

Free Software, Hell Yeah!

LINSTOR Powered by LINBIT

linstor-server's People

Contributors

beornf, bernardgut, boomanaiden154, chrboe, franciosi, ghernadi, joelcolledge, johannesthoma, jokucera, kermat, philipp-reisner, rainerlein, raltnoeder, rck, rp-, tbykowsk, wanzenbug, yusufyildiz


linstor-server's Issues

After (hard) reboot of storage node state of (disk-)resource (attached to a VM) is unknown on rebooted node

Hi,
I am currently testing "what can go wrong" using OpenNebula and the new linstor add-on. So far I could create new VMs, live-migrate VMs, and delete VMs.
My current test: I switched off the passive storage node (resource state=unused) and switched it on again to see whether replication starts again.
See the detailed history below (FYI: that the text is partly crossed out is unintended).

SETUP:
srv485 -> linstor controller & OpenNebula frontend
srv484 & srv483 -> linstor satellite & storage node & OpenNebula KVM host
datastore policy -> LINSTOR_AUTO_PLACE = 2
OS: CentOS 7.5 (same error behaviour on Ubuntu 18.04, btw)

Is there conceptually something I am missing here or is this unexpected behavior?

oneadmin@srv485:~$ linstor resource list
╭────────────────────────────────────────────────────────╮
┊ ResourceName ┊ Node ┊ Port ┊ Usage ┊ State ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ OpenNebula-Image-0 ┊ srv483 ┊ 7000 ┊ Unused ┊ UpToDate ┊
┊ OpenNebula-Image-0 ┊ srv484 ┊ 7000 ┊ Unused ┊ UpToDate ┊
╰────────────────────────────────────────────────────────╯

#CREATE VM in OpenNebula
oneadmin@srv485:~$ onetemplate instantiate 0
VM ID: 0

oneadmin@srv485:~$ linstor resource list
╭──────────────────────────────────────────────────────────────────╮
┊ ResourceName ┊ Node ┊ Port ┊ Usage ┊ State ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ OpenNebula-Image-0 ┊ srv483 ┊ 7000 ┊ Unused ┊ UpToDate ┊
┊ OpenNebula-Image-0 ┊ srv484 ┊ 7000 ┊ Unused ┊ UpToDate ┊
┊ OpenNebula-Image-0-vm0-disk0 ┊ srv483 ┊ 7001 ┊ Unused ┊ UpToDate ┊
┊ OpenNebula-Image-0-vm0-disk0 ┊ srv484 ┊ 7001 ┊ InUse ┊ UpToDate ┊
╰──────────────────────────────────────────────────────────────────╯

#SWITCHING OFF NODE srv483 at this point

oneadmin@srv485:~$ linstor resource list
╭──────────────────────────────────────────────────────────────────╮
┊ ResourceName ┊ Node ┊ Port ┊ Usage ┊ State ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ OpenNebula-Image-0 ┊ srv483 ┊ 7000 ┊ ┊ Unknown ┊
┊ OpenNebula-Image-0 ┊ srv484 ┊ 7000 ┊ Unused ┊ UpToDate ┊
┊ OpenNebula-Image-0-vm0-disk0 ┊ srv483 ┊ 7001 ┊ ┊ Unknown ┊
┊ OpenNebula-Image-0-vm0-disk0 ┊ srv484 ┊ 7001 ┊ InUse ┊ UpToDate ┊
╰──────────────────────────────────────────────────────────────────╯

oneadmin@srv485:~$ linstor node list
╭─────────────────────────────────────────────────────────────────────╮
┊ Node ┊ NodeType ┊ Addresses ┊ State ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ srv483 ┊ SATELLITE ┊ 100.80.0.15,100.80.0.15:3366 (PLAIN) ┊ OFFLINE ┊
┊ srv484 ┊ SATELLITE ┊ 100.80.0.16,100.80.0.16:3366 (PLAIN) ┊ Online ┊
╰─────────────────────────────────────────────────────────────────────╯

#SWITCHING ON node srv483 again and waiting until the node comes back online

oneadmin@srv485:~$ linstor node list
╭────────────────────────────────────────────────────────────────────╮
┊ Node ┊ NodeType ┊ Addresses ┊ State ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ srv483 ┊ SATELLITE ┊ 100.80.0.15,100.80.0.15:3366 (PLAIN) ┊ Online ┊
┊ srv484 ┊ SATELLITE ┊ 100.80.0.16,100.80.0.16:3366 (PLAIN) ┊ Online ┊
╰────────────────────────────────────────────────────────────────────╯
oneadmin@srv485:~$ linstor resource list
╭──────────────────────────────────────────────────────────────────╮
┊ ResourceName ┊ Node ┊ Port ┊ Usage ┊ State ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ OpenNebula-Image-0 ┊ srv483 ┊ 7000 ┊ Unused ┊ UpToDate ┊
┊ OpenNebula-Image-0 ┊ srv484 ┊ 7000 ┊ Unused ┊ UpToDate ┊
┊ OpenNebula-Image-0-vm0-disk0 ┊ srv483 ┊ 7001 ┊ ┊ Unknown ┊
┊ OpenNebula-Image-0-vm0-disk0 ┊ srv484 ┊ 7001 ┊ InUse ┊ UpToDate ┊
╰──────────────────────────────────────────────────────────────────╯

oneadmin@srv485:~$ linstor error-reports list
╭────────────────────────────────────────────────────────────╮
┊ Nr. ┊ Id ┊ Datetime ┊ Node ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ 1 ┊ 5BFE9E8A-90340-000000 ┊ 2018-11-28 14:56:47 ┊ srv483 ┊
╰────────────────────────────────────────────────────────────╯
oneadmin@srv485:~$ linstor error-reports show 5BFE9E8A-90340-000000
ERROR REPORT 5BFE9E8A-90340-000000

============================================================

Application: LINBIT® LINSTOR
Module: Satellite
Version: 0.7.3
Build ID: 6e47dd2
Build time: 2018-11-22T10:55:51+00:00
Error time: 2018-11-28 14:56:47
Node: srv483

============================================================

Reported error:

Description:
Operations on resource 'OpenNebula-Image-0-vm0-disk0' were aborted
Cause:
Meta data creation for volume 0 failed because the execution of an external command failed
Correction:
- Check whether the required software is installed
- Check whether the application's search path includes the location
of the external software
- Check whether the application has execute permission for the external command

Category: LinStorException
Class name: ResourceException
Class canonical name: com.linbit.linstor.core.DrbdDeviceHandler.ResourceException
Generated at: Method 'createResourceMetaData', Source file 'DrbdDeviceHandler.java', Line #1524

Error message: Meta data creation for resource 'OpenNebula-Image-0-vm0-disk0' volume 0 failed

Error context:
Meta data creation for resource 'OpenNebula-Image-0-vm0-disk0' volume 0 failed

Call backtrace:

Method                                   Native Class:Line number
createResourceMetaData                   N      com.linbit.linstor.core.DrbdDeviceHandler:1524
createResource                           N      com.linbit.linstor.core.DrbdDeviceHandler:1138
dispatchResource                         N      com.linbit.linstor.core.DrbdDeviceHandler:364
run                                      N      com.linbit.linstor.core.DeviceManagerImpl$DeviceHandlerInvocation:1225
run                                      N      com.linbit.WorkerPool$WorkerThread:179

Caused by:

Description:
Execution of the external command 'drbdadm' failed.
Cause:
The external command exited with error code 20.
Correction:
- Check whether the external program is operating properly.
- Check whether the command line is correct.
Contact a system administrator or a developer if the command line is no longer valid
for the installed version of the external program.
Additional information:
The full command line executed was:
drbdadm -vvv --max-peers 7 -- --force create-md OpenNebula-Image-0-vm0-disk0/0

The external command sent the following output data:
drbdmeta 1001 v09 /dev/vg-drbdpool/OpenNebula-Image-0-vm0-disk0_00000 internal create-md 7 --force 


The external command sent the following error information:
open(/dev/vg-drbdpool/OpenNebula-Image-0-vm0-disk0_00000) failed: No such file or directory
open(/dev/vg-drbdpool/OpenNebula-Image-0-vm0-disk0_00000) failed: No such file or directory
Command 'drbdmeta 1001 v09 /dev/vg-drbdpool/OpenNebula-Image-0-vm0-disk0_00000 internal create-md 7 --force' terminated with exit code 20

Category: LinStorException
Class name: ExtCmdFailedException
Class canonical name: com.linbit.extproc.ExtCmdFailedException
Generated at: Method 'execute', Source file 'DrbdAdm.java', Line #437

Error message: The external command 'drbdadm' exited with error code 20

Call backtrace:

Method                                   Native Class:Line number
execute                                  N      com.linbit.drbd.DrbdAdm:437
simpleAdmCommand                         N      com.linbit.drbd.DrbdAdm:388
createMd                                 N      com.linbit.drbd.DrbdAdm:217
createVolumeMetaData                     N      com.linbit.linstor.core.DrbdDeviceHandler:1060
createResourceMetaData                   N      com.linbit.linstor.core.DrbdDeviceHandler:1489
createResource                           N      com.linbit.linstor.core.DrbdDeviceHandler:1138
dispatchResource                         N      com.linbit.linstor.core.DrbdDeviceHandler:364
run                                      N      com.linbit.linstor.core.DeviceManagerImpl$DeviceHandlerInvocation:1225
run                                      N      com.linbit.WorkerPool$WorkerThread:179

END OF ERROR REPORT.


Maybe some additional information from the storage node that is switched off and on again:
BEFORE:
root@srv483:~# ls /dev/vg-drbdpool/ -la
total 0
drwxr-xr-x 2 root root 80 Nov 29 09:20 .
drwxr-xr-x 21 root root 6460 Nov 29 09:20 ..
lrwxrwxrwx 1 root root 7 Nov 29 09:20 OpenNebula-Image-0_00000 -> ../dm-4
lrwxrwxrwx 1 root root 7 Nov 29 09:20 OpenNebula-Image-0-vm1-disk0_00000 -> ../dm-5
AFTER:
root@srv483:~# ls /dev/vg-drbdpool/ -la
total 0
drwxr-xr-x 2 root root 60 Nov 29 09:25 .
drwxr-xr-x 21 root root 6420 Nov 29 09:25 ..
lrwxrwxrwx 1 root root 7 Nov 29 09:25 OpenNebula-Image-0_00000 -> ../dm-4

For some reason /dev/dm-5 vanished...

Thanks
Uli

P.S.: I also opened an issue at the OpenNebula add-on repository and was advised to come here. For reference: OpenNebula/addon-linstor#2

Autoplacing not working for Diskless pools

# linstor r c test --auto-place 1 -s DfltDisklessStorPool
ERROR:
Description:
    Not enough available nodes
Details:
    Not enough nodes fulfilling the following auto-place criteria:
     * has a deployed storage pool named 'DfltDisklessStorPool'
     * the storage pool 'DfltDisklessStorPool' has to have at least '1048576' free space
     * the current access context has enough privileges to use the node and the storage pool
     * the node is online
    Auto-placing resource: test
Show reports:
    linstor error-reports show 5BB67EB3-00000-000004

Can't upgrade with Postgres backend

LINSTOR, Module Controller
Version:            0.7.1 (b4dab7399d24dc10917d221bd25ffcf0ed8f9712)
Build time:         2018-10-31T12:27:59+00:00
Java Version:       10
Java VM:            Oracle Corporation, Version 10.0.2+13-Ubuntu-1ubuntu0.18.04.3
Operating system:   Linux, Version 4.15.18-7-pve
Environment:        amd64, 1 processors, 7756 MiB memory reserved for allocations

System components initialization in progress

16:45:20.626 [main] INFO  LINSTOR/Controller - Log directory set to: '/logs'
16:45:20.628 [Main] INFO  LINSTOR/Controller - Loading API classes started.
16:45:20.771 [Main] INFO  LINSTOR/Controller - API classes loading finished: 143ms
16:45:20.772 [Main] INFO  LINSTOR/Controller - Dependency injection started.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.google.inject.internal.cglib.core.$ReflectUtils$1 (file:/usr/share/linstor-server/lib/guice-4.2.0.jar) to method java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain)
WARNING: Please consider reporting this to the maintainers of com.google.inject.internal.cglib.core.$ReflectUtils$1
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
16:45:21.322 [Main] INFO  LINSTOR/Controller - Dependency injection finished: 550ms
16:45:21.399 [Main] INFO  LINSTOR/Controller - Initializing the database connection pool
16:45:21.501 [Main] INFO  org.flywaydb.core.internal.license.VersionPrinter - Flyway Community Edition 5.2.0 by Boxfuse
16:45:21.571 [Main] INFO  org.flywaydb.core.internal.database.DatabaseFactory - Database: jdbc:postgresql://linstor-db-stolon-proxy/linstor1 (PostgreSQL 10.4)
16:45:21.615 [Main] INFO  org.flywaydb.core.internal.command.DbValidate - Successfully validated 11 migrations (execution time 00:00.013s)
16:45:21.623 [Main] INFO  org.flywaydb.core.internal.command.DbMigrate - Current version of schema "LINSTOR": 2018.09.03.14.30
16:45:21.623 [Main] WARN  org.flywaydb.core.internal.command.DbMigrate - outOfOrder mode is active. Migration of schema "LINSTOR" may not be reproducible.
16:45:21.623 [Main] INFO  org.flywaydb.core.internal.command.DbMigrate - Migrating schema "LINSTOR" to version 2018.10.08.13.00 - Add FLAGS column to RESOURCE_CONNECTIONS
Error: 
CREATE TABLE RESOURCE_CONNECTIONS_TMP AS SELECT * FROM RESOURCE_CONNECTIONS
16:45:21.642 [Main] ERROR org.flywaydb.core.internal.command.DbMigrate - Migration of schema "LINSTOR" to version 2018.10.08.13.00 - Add FLAGS column to RESOURCE_CONNECTIONS failed! Changes successfully rolled back.
16:45:21.661 [Main] ERROR LINSTOR/Controller - SQL State  : 42P01
Error Code : 0
Message    : ERROR: relation "resource_connections" does not exist
  Position: 57
 [Report number 5BE1C520-00000-000000]

16:45:21.662 [Thread-0] INFO  LINSTOR/Controller - Shutdown in progress
16:45:21.662 [Thread-0] INFO  LINSTOR/Controller - Shutting down service instance 'DatabaseService' of type DatabaseService
16:45:21.663 [Thread-0] INFO  LINSTOR/Controller - Waiting for service instance 'DatabaseService' to complete shutdown
16:45:21.663 [Thread-0] INFO  LINSTOR/Controller - Shutting down service instance 'TaskScheduleService' of type TaskScheduleService
16:45:21.663 [Thread-0] INFO  LINSTOR/Controller - Waiting for service instance 'TaskScheduleService' to complete shutdown
16:45:21.664 [Thread-0] INFO  LINSTOR/Controller - Shutting down service instance 'TimerEventService' of type TimerEventService
16:45:21.664 [Thread-0] INFO  LINSTOR/Controller - Waiting for service instance 'TimerEventService' to complete shutdown
16:45:21.664 [Thread-0] INFO  LINSTOR/Controller - Shutdown complete

ErrorReport-5BE1C520-00000-000000.log

missing linstor_common.conf

There is an error in "linstor resource create".
If I create /var/lib/linstor.d/linstor_common.conf with

# This file was generated by linstor(0.6.3), do not edit manually.

common
{
}

then there is no problem.

cat ErrorReport-5B9A395F-000000.log

ERROR REPORT 5B9A395F-000000

============================================================

Application:                        LINBIT® LINSTOR
Module:                             Satellite
Version:                            0.6.3
Build ID:                           d608e73ee81aba96ced64e6358244994192341d1
Build time:                         2018-09-10T12:33:20+00:00
Error time:                         2018-09-13 13:22:20
Node:                               eserver

============================================================

Description:
    Unable to move temporary common Linstor DRBD configuration file 'linstor_common.conf' failed due to an I/O error
Cause:
    Creation of the DRBD configuration file failed due to an I/O error
Correction:
    - Check whether enough free space is available for the creation of the file
    - Check whether the application has write access to the target directory
    - Check whether the storage is operating flawlessly
Additional information:
    The error reported by the runtime environment or operating system is:
    /tmp/linstor-common.2811572917602003663.tmp -> /var/lib/linstor.d/linstor_common.conf: Invalid cross-device link

Error context:
    /tmp/linstor-common.2811572917602003663.tmp -> /var/lib/linstor.d/linstor_common.conf: Invalid cross-device link

Access context information

Identity:                           SYSTEM
Role:                               SYSTEM
Domain:                             SYSTEM

Caused by:
==========

Description:
    /tmp/linstor-common.2811572917602003663.tmp -> /var/lib/linstor.d/linstor_common.conf: Invalid cross-device link

Category:                           Exception
Class name:                         AtomicMoveNotSupportedException
Class canonical name:               java.nio.file.AtomicMoveNotSupportedException
Generated at:                       Method 'move', Source file 'UnixCopyFile.java', Line #394

Error message:                      /tmp/linstor-common.2811572917602003663.tmp -> /var/lib/linstor.d/linstor_common.conf: Invalid cross-device link

Error context:
    /tmp/linstor-common.2811572917602003663.tmp -> /var/lib/linstor.d/linstor_common.conf: Invalid cross-device link

Call backtrace:

    Method                                   Native Class:Line number
    move                                     N      sun.nio.fs.UnixCopyFile:394
    move                                     N      sun.nio.fs.UnixFileSystemProvider:262
    move                                     N      java.nio.file.Files:1395
    doApplyControllerChanges                 N      com.linbit.linstor.core.apicallhandler.satellite.StltApiCallHandler:539
    applyFullSync                            N      com.linbit.linstor.core.apicallhandler.satellite.StltApiCallHandler:283
    execute                                  N      com.linbit.linstor.api.protobuf.satellite.FullSync:84
    executeNonReactive                       N      com.linbit.linstor.proto.CommonMessageProcessor:481
    lambda$execute$13                        N      com.linbit.linstor.proto.CommonMessageProcessor:458
    doInScope                                N      com.linbit.linstor.core.apicallhandler.ScopeRunner:110
    lambda$null$2                            N      com.linbit.linstor.core.apicallhandler.ScopeRunner:74
    call                                     N      reactor.core.publisher.MonoCallable:92
    trySubscribeScalarMap                    N      reactor.core.publisher.FluxFlatMap:126
    subscribe                                N      reactor.core.publisher.MonoFlatMapMany:46
    subscribe                                N      reactor.core.publisher.Flux:6877
    onNext                                   N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:184
    request                                  N      reactor.core.publisher.Operators$ScalarSubscription:1640
    onSubscribe                              N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:131
    subscribe                                N      reactor.core.publisher.MonoCurrentContext:35
    subscribe                                N      reactor.core.publisher.MonoFlatMapMany:49
    subscribe                                N      reactor.core.publisher.Flux:6877
    onNext                                   N      reactor.core.publisher.FluxFlatMap$FlatMapMain:372
    onNext                                   N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner:238
    request                                  N      reactor.core.publisher.Operators$ScalarSubscription:1640
    onSubscribeInner                         N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:140
    onSubscribe                              N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner:233
    trySubscribeScalarMap                    N      reactor.core.publisher.FluxFlatMap:161
    subscribe                                N      reactor.core.publisher.MonoFlatMapMany:46
    subscribe                                N      reactor.core.publisher.Flux:6877
    onNext                                   N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:184
    request                                  N      reactor.core.publisher.Operators$ScalarSubscription:1640
    onSubscribe                              N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:131
    subscribe                                N      reactor.core.publisher.MonoCurrentContext:35
    subscribe                                N      reactor.core.publisher.MonoFlatMapMany:49
    subscribe                                N      reactor.core.publisher.FluxFlatMap:97
    subscribe                                N      reactor.core.publisher.FluxOnErrorResume:47
    subscribe                                N      reactor.core.publisher.FluxOnErrorResume:47
    subscribe                                N      reactor.core.publisher.FluxOnErrorResume:47
    subscribe                                N      reactor.core.publisher.FluxContextStart:49
    subscribe                                N      reactor.core.publisher.FluxPeekFuseable:86
    subscribe                                N      reactor.core.publisher.FluxPeekFuseable:86
    subscribe                                N      reactor.core.publisher.FluxDefer:55
    subscribe                                N      reactor.core.publisher.Flux:6877
    onNext                                   N      reactor.core.publisher.FluxFlatMap$FlatMapMain:372
    drainAsync                               N      reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:391
    drain                                    N      reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:633
    onNext                                   N      reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:238
    onNext                                   N      reactor.core.publisher.FluxHandleFuseable$HandleFuseableSubscriber:143
    drainFused                               N      reactor.core.publisher.UnicastProcessor:234
    drain                                    N      reactor.core.publisher.UnicastProcessor:267
    onNext                                   N      reactor.core.publisher.UnicastProcessor:343
    next                                     N      reactor.core.publisher.FluxCreate$IgnoreSink:573
    next                                     N      reactor.core.publisher.FluxCreate$SerializedSink:151
    processInOrder                           N      com.linbit.linstor.netcom.TcpConnectorPeer:351
    doProcessMessage                         N      com.linbit.linstor.proto.CommonMessageProcessor:200
    lambda$processMessage$2                  N      com.linbit.linstor.proto.CommonMessageProcessor:146
    onNext                                   N      reactor.core.publisher.FluxPeek$PeekSubscriber:177
    runAsync                                 N      reactor.core.publisher.FluxPublishOn$PublishOnSubscriber:396
    run                                      N      reactor.core.publisher.FluxPublishOn$PublishOnSubscriber:480
    call                                     N      reactor.core.scheduler.WorkerTask:84
    call                                     N      reactor.core.scheduler.WorkerTask:37
    run                                      N      java.util.concurrent.FutureTask:266
    access$201                               N      java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask:180
    run                                      N      java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask:293
    runWorker                                N      java.util.concurrent.ThreadPoolExecutor:1149
    run                                      N      java.util.concurrent.ThreadPoolExecutor$Worker:624
    run                                      N      java.lang.Thread:748


END OF ERROR REPORT.
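
The "Invalid cross-device link" above is what Java reports when an atomic rename crosses filesystem boundaries, for example when /tmp is a tmpfs while /var/lib/linstor.d lives on the root filesystem. A quick check (a hedged diagnostic, not an official fix):

# if the two paths report different filesystems, an atomic move between them cannot work
df /tmp /var/lib/linstor.d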

Volume creating exception

Bug

Versions

  • linstor 0.6.0
  • linstor-proxmox plugin 2.9.0
  • proxmox-ve 5.2
  • drbd kernel 9.0.15
  • drbdadm 9.5.0
  • driver: lvm

Details

Hi, I'm testing the linstor-proxmox plugin, and I have some problems:

When I try to create an LXC container, I get the following output:

SUCCESS:
Description:
    New resource definition 'vm-555-disk-1' created.
Details:
    Resource definition 'vm-555-disk-1' UUID is: e82d64c5-3a83-48d1-b9e7-6bb342320f0b
SUCCESS:
Description:
    Resource definition 'vm-555-disk-1' modified.
Details:
    Resource definition 'vm-555-disk-1' UUID is: e82d64c5-3a83-48d1-b9e7-6bb342320f0b
SUCCESS:
    New volume definition with number '0' of resource definition 'vm-555-disk-1' created.
SUCCESS:
Description:
    Resource 'vm-555-disk-1' successfully autoplaced on 2 nodes
Details:
    Used storage pool: 'data'
    Used nodes: 'pve1-2', 'pve1-3'
mke2fs 1.43.4 (31-Jan-2017)
Could not open /dev/drbd/by-res/vm-555-disk-1/0: Wrong medium type
WARNING: Satellite connection lost
error with cfs lock 'storage-drbdstorage': Could not remove vm-555-disk-1: exit code 11
TASK ERROR: command 'mkfs.ext4 -O mmp -E 'root_owner=0:0' /dev/drbd/by-res/vm-555-disk-1/0' failed: exit code 1

Through experimentation, I found out that at the moment the filesystem is created, both devices are in the Secondary role.
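
One way to confirm this at the time of failure is to query DRBD directly on both nodes (a hedged check; the resource name is taken from the output above):

# show role (Primary/Secondary) and disk state of the resource on this node
drbdadm status vm-555-disk-1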

After that I can't create any new devices, because autoplacing no longer works:

Reported error:
===============

Category:                           RuntimeException
Class name:                         AccessToDeletedDataException
Class canonical name:               com.linbit.linstor.AccessToDeletedDataException
Generated at:                       Method 'checkDeleted', Source file 'VolumeData.java', Line #416

Error message:                      Access to deleted volume

Error context:
    Registration of auto-placing resource: 'vm-333-disk-1' failed due to an unknown exception.

Full stacktrace: ErrorReport-5B89B429-000001.log

I have a three-node cluster and the following config:

drbd: drbdstorage
   content images,rootdir
   redundancy 2
   controller 10.28.36.172
   controllervm 103

linstor-controller info not updated when cluster is upgraded

A cluster of 3 combined nodes is set up; storage-1 is the active controller.
I upgraded and restarted the linstor services on storage-1:

root@storage-1:~# linstor node list
╭───────────────────────────────────────────────────────────────────────────────╮
┊ Node ┊ NodeType ┊ Addresses ┊ State ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ storage-1 ┊ COMBINED ┊ 192.168.220.2:3366 (PLAIN) ┊ Online ┊
┊ storage-2 ┊ COMBINED ┊ 192.168.220.3:3366 (PLAIN) ┊ OFFLINE(VERSION MISMATCH) ┊
┊ storage-3 ┊ COMBINED ┊ 192.168.220.4:3366 (PLAIN) ┊ OFFLINE(VERSION MISMATCH) ┊
╰───────────────────────────────────────────────────────────────────────────────╯

After upgrading and restarting storage-2 and storage-3, the node status persists until the linstor-controller service is restarted a second time on storage-1.

error in Linstor Server compiled from sources: Failed to load database configuration

I have built the Linstor Server from linstor-server-0.9.5.tar.gz with gradle getProtoc; gradle assemble. This produces a tarball build/distributions/linstor-server-0.9.5.tar.

When I run the Linstor Server from this tarball with tar xf linstor-server-0.9.5.tar; cd linstor-server-0.9.5; ./bin/Controller it immediately exits with the following error:

[Main] ERROR LINSTOR/Controller - Failed to load database configuration

Isn't an external database an optional choice for the Linstor Server? The entry points in the provided Dockerfiles suggest controllers (and satellites) normally receive a command line argument --config-directory. What should the content of such a directory be such that this error message goes away and the Linstor Server can start up normally?

Resource delete + recreate on DRBD sometimes results in out-of-sync data

Every now and then, I've needed to move a DRBD resource to another pool with LINSTOR, and while doing so, I've encountered multiple occasions where out-of-sync data has unexpectedly appeared.

Say I have a resource resource-name on POOL_HDD on node-a and node-b. I want to migrate it to POOL_SSD. What I've done:

# linstor r delete node-b resource-name
     (...no errors, all seemingly OK...)
# linstor r create -s POOL_SSD node-b resource-name
     (...no errors, all seemingly OK...)

If I've understood these commands correctly, I'd expect the newly created resource-name on node-b to be in an "inconsistent" state and to start pulling data from node-a immediately. At times it does, but often drbdtop shows a disconnected state for the resource, sometimes refusing to connect (citing split brain) except when I do a "discard my data" connect on node-b. Even after discarding data on node-b, the resource often seems to go into an "uptodate" state immediately, and only running drbdadm verify on the resource then reveals out-of-sync data that needs to be fixed by drbdadm invalidate resource-name on node-b.
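
A defensive sequence for such a move, built only from the commands already mentioned here (a sketch, not an official procedure):

# drop the replica on node-b and recreate it in the target pool
linstor r delete node-b resource-name
linstor r create -s POOL_SSD node-b resource-name

# once DRBD reports UpToDate, run an online verify from one node
drbdadm verify resource-name

# if out-of-sync blocks show up, discard node-b's data and force a full resync
drbdadm invalidate resource-name    # on node-b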


POOL_HDD is on thin LVM on bcache at node-a, and on thin LVM on mdadm raid 0 on node-b.
POOL_SSD is on thin LVM on SSD at node-a, and plain LVM on SSD at node-b.

For DRBD options, I have: linstor controller drbd-options --sndbuf-size 0 --rcvbuf-size 0 --c-plan-ahead 15 --c-fill-target 40960 --c-min-rate 30720 --c-max-rate 614400 --max-epoch-size 20000 --max-buffers 20000 --verify-alg sha1 --csums-alg sha1.

Versions:

linstor 0.6.2; GIT-hash: 4dcc5834c8bfa6abbfde9f55cdc30114de7630b1

DRBD:
version: 9.0.15-1 (api:2/proto:86-114)
GIT-hash: c46d27900f471ea0f5ba587592439a9ddde1d08b build by root@mox-b, 2018-09-29 22:58:26
Transports (api:16): tcp (9.0.15-1)

Linux 4.17.0-0.bpo.3-amd64 #1 SMP Debian 4.17.17-1~bpo9+1 (2018-08-27) x86_64 GNU/Linux

(Shouldn't matter:)
Proxmox: pve-manager/5.2-9/4b30e8f9 (running kernel: 4.17.0-0.bpo.3-amd64)
linstor-proxmox                      3.0.1-1

Handle reserved size for ZFS volumes

Bug report

Hello, there is something wrong with volume sizes in ZFS:

# linstor vd l
+---------------------------------------------------------+
| ResourceName | VolumeNr | VolumeMinor | Size    | State |
|---------------------------------------------------------|
| nfs100       | 0        | 1000        | 900 GiB | ok    |
+---------------------------------------------------------+
# linstor sp l
+-----------------------------------------------------------------------------+
| StoragePool | Node  | Driver    | PoolName |       Free | SupportsSnapshots |
|-----------------------------------------------------------------------------|
| data        | m8c2  | ZfsDriver | data     | 920.90 GiB | true              |
| data        | m8c3  | ZfsDriver | data     | 920.90 GiB | true              |
+-----------------------------------------------------------------------------+
# linstor r c m8c2 m8c3 nfs100 -s data
SUCCESS:
Description:
    New resource 'nfs100' on node 'm8c2' created.
Details:
    Resource 'nfs100' on node 'm8c2' UUID is: d9155893-622d-4810-92eb-e705ab2cb5ef
SUCCESS:
Description:
    Volume with number '0' on resource 'nfs100' on node 'm8c2' successfully created
Details:
    Volume UUID is: 4f17154e-f805-49b8-9398-ce9c0246b95c
SUCCESS:
Description:
    New resource 'nfs100' on node 'm8c3' created.
Details:
    Resource 'nfs100' on node 'm8c3' UUID is: 5f48ab1f-39b3-4635-a680-0c6f3287115a
SUCCESS:
Description:
    Volume with number '0' on resource 'nfs100' on node 'm8c3' successfully created
Details:
    Volume UUID is: 395a2294-a422-45ef-8fa6-f9a6746ea5fe
ERROR:
Description:
    (m8c2) Initialization of storage for resource 'nfs100' volume 0 failed
Cause:
    Storage volume creation failed for resource 'nfs100' volume 0
# linstor r lv
+------------------------------------------------------------------------------+
| Node | Resource | StoragePool | VolumeNr | MinorNr | DeviceName    |   State |
|------------------------------------------------------------------------------|
| m8c2 | nfs100   | data        | 0        | 1000    | /dev/drbd1000 | Unknown |
| m8c3 | nfs100   | data        | 0        | 1000    | /dev/drbd1000 | Unknown |
+------------------------------------------------------------------------------+

From the machine log

Category:                           LinStorException
Class name:                         StorageException
Class canonical name:               com.linbit.linstor.storage.StorageException
Generated at:                       Method 'checkExitCode', Source file 'AbsStorageDriver.java', Line #694

Error message:                      Command 'zfs create -V 943920128KB data/nfs100_00000' returned with exitcode 1. 

Standard out: 


Error message: 
cannot create 'data/nfs100_00000': out of space

As you can see, linstor sends a command to create a 943920128KB volume, but ZFS tries to create a larger one:

Example:

# zfs create -V 800G data/test1 
# zfs create -V 5G data/test2
# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
data/test1     25G   921G    12K  -
data/test2   5.16G  95.9G    12K  -

It seems ZFS adds some extra space for the volume. Unfortunately I don't know for what purpose, but it can be changed via the refreservation parameter, for example:

# zfs create -V 800G data/test1 -o refreservation=800G
# zfs create -V 5G data/test2 -o refreservation=5G
# zfs list 
NAME         USED  AVAIL  REFER  MOUNTPOINT
data/test1   825G   911G    12K  -
data/test2  5.16G  90.7G    12K  -

I'm not sure: is it safe?

Anyway, I think linstor-controller should handle this behavior and report an error if the resulting volume is too large.
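
To see how much extra space ZFS actually reserves for a zvol, the reservation properties can be inspected directly; creating the zvol sparse with -s skips the reservation entirely (a hedged sketch using the dataset names from the examples above; whether a sparse zvol is appropriate under DRBD is a separate question):

# inspect the reservation ZFS added on top of the nominal volume size
zfs get volsize,refreservation,usedbyrefreservation data/nfs100_00000

# a sparse (thin) zvol is created without the refreservation
zfs create -s -V 800G data/test1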

Can't upgrade 0.7.1 --> 0.7.5 with Postgres backend

16:07:22.457 [Main] INFO  LINSTOR/Controller - Dependency injection finished: 548ms
16:07:22.510 [Main] INFO  LINSTOR/Controller - Initializing authentication subsystem
16:07:22.557 [Main] INFO  LINSTOR/Controller - Initializing the database connection pool
16:07:22.684 [Main] ERROR LINSTOR/Controller - org.flywaydb.core.Flyway.configure()Lorg/flywaydb/core/api/configuration/FluentConfiguration; [Report number 5C4F28B9-00000-000000]
ERROR REPORT 5C4F28B9-00000-000000

============================================================

Application:                        LINBIT® LINSTOR
Module:                             Controller
Version:                            0.7.5
Build ID:                           d74305b420fdc878182afa162378a317e6a4a3b9
Build time:                         2018-12-21T09:05:14+00:00
Error time:                         2019-01-28 16:07:22
Node:                               linstor2-controller-5b479dcb58-p694m

============================================================

Reported error:
===============

Category:                           Error
Class name:                         NoSuchMethodError
Class canonical name:               java.lang.NoSuchMethodError
Generated at:                       Method 'migrate', Source file 'DbConnectionPool.java', Line #181

Error message:                      org.flywaydb.core.Flyway.configure()Lorg/flywaydb/core/api/configuration/FluentConfiguration;

Call backtrace:

    Method                                   Native Class:Line number
    migrate                                  N      com.linbit.linstor.dbcp.DbConnectionPool:181
    initialize                               N      com.linbit.linstor.dbcp.DbConnectionPoolInitializer:68
    start                                    N      com.linbit.linstor.core.Controller:195
    main                                     N      com.linbit.linstor.core.Controller:385


END OF ERROR REPORT.

No way to list pools when a controller node is among the nodes

Bug

Version

# linstor --version
linstor 0.6.0; GIT-hash: e64d3971f852d49c1aac0d0f42be06823e2f6bcd

Steps for reproduce

# linstor node create --node-type Controller linstor 10.28.36.172
# linstor sp l
ERROR:
Description:
    (Node: 'linstor') The requested function call cannot be executed.
Cause:
    Common causes of this error are:
       - The function call name specified by the caller
         (client side) is incorrect
       - The requested function call was not loaded into
         the system (server side)
Details:
    The requested function call name was 'RequestFreeSpace'.

After removing the controller node, storage-pools start to show up again.

Error when run "linstor node list" linstor-server 0.7.1

I'm getting the following error when I try to list my nodes on linstor.

Traceback (most recent call last):
File "/usr/bin/linstor", line 21, in <module>
import linstor_client_main
File "/usr/lib/python3.4/site-packages/linstor_client-0.7.0-py3.4.egg/linstor_client_main.py", line 30, in <module>
import linstor
File "/usr/lib/python3.4/site-packages/python_linstor-0.7.0-py3.4.egg/linstor/__init__.py", line 1, in <module>
File "/usr/lib/python3.4/site-packages/python_linstor-0.7.0-py3.4.egg/linstor/linstorapi.py", line 23, in <module>
ImportError: No module named 'linstor.proto.MsgHeader_pb2'

When I downloaded linstor-server, my big question was "what to do with these Java files?".
Reading the makefile and build.gradle, I downloaded protoc, running:
make getprotoc

then I ran:
make genJava
gradle assemble
make copytolibs

Every command was run from the root of linstor-server.

Strange autoplacing bug

I have three nodes in the LocalM8 pool:

# linstor sp l | grep LocalM8
| LocalM8     | m8c3  | LvmDriver      | data     |   418.75 GiB |    953.87 GiB | false             |
| LocalM8     | m8c4  | LvmDriver      | data     |   418.75 GiB |    953.87 GiB | false             |
| LocalM8     | m8c5  | LvmDriver      | data     |   918.86 GiB |    953.87 GiB | false             |

And one resource which I want to create:

# linstor vd l | grep pv-drbd-00003
| pv-drbd-00003 | 0        | 1002        | 500 GiB | ok    |

Then I run autoplacing, and it reports an error:

# linstor r c --auto-place 1 -s LocalM8 pv-drbd-00003
ERROR:
Description:
    Not enough available nodes
Details:
    Not enough nodes fulfilling the following auto-place criteria:
     * has a deployed storage pool named 'LocalM8'
     * the storage pool 'LocalM8' has to have at least '524288000' free space
     * the current access context has enough privileges to use the node and the storage pool
     * the node is online
    Auto-placing resource: pv-drbd-00003
Show reports:
    linstor error-reports show 5BBF2CFE-00000-000169

But if I assign the resource manually, it works fine:

# linstor r c m8c5 pv-drbd-00003 -s LocalM8       
SUCCESS:
Description:
    New resource 'pv-drbd-00003' on node 'm8c5' registered.
Details:
    Resource 'pv-drbd-00003' on node 'm8c5' UUID is: 523995af-2c33-40df-ac1c-4381962c9e21
SUCCESS:
Description:
    Volume with number '0' on resource 'pv-drbd-00003' on node 'm8c5' successfully registered
Details:
    Volume UUID is: 2d25b533-3efb-467b-9b87-6a80cd7af6ed
SUCCESS:
    Created resource on 'm8c5'
SUCCESS:
Description:
    Resource ready
Details:
    Node: m8c5, Resource: pv-drbd-00003

Now I can delete resource:

# linstor r d m8c5 pv-drbd-00003
SUCCESS:
Description:
    Node: m8c5, Resource: pv-drbd-00003 marked for deletion.
Details:
    Node: m8c5, Resource: pv-drbd-00003 UUID is: 523995af-2c33-40df-ac1c-4381962c9e21
SUCCESS:
    Deleted 'pv-drbd-00003' on 'm8c5'
SUCCESS:
Description:
    Node: m8c5, Resource: pv-drbd-00003 deletion complete.
Details:
    Node: m8c5, Resource: pv-drbd-00003 UUID was: 523995af-2c33-40df-ac1c-4381962c9e21

And try autoplacing again:

# linstor r c --auto-place 1 -s LocalM8 pv-drbd-00003
SUCCESS:
Description:
    Resource 'pv-drbd-00003' successfully autoplaced on 1 nodes
Details:
    Used storage pool: 'LocalM8'
    Used nodes: 'm8c5'

Linstor-controller version: 0.6.5
ErrorReport-5BBF2CFE-00000-000169.log

Snapshot stuck on DELETING

Similar to #19 (which is about a Resource), I have a snapshot that's stuck on DELETING with a NullPointerException:

╭───────────────────────────────────────────────────────────────────────────────╮
┊ ResourceName  ┊ SnapshotName               ┊ NodeNames ┊ Volumes   ┊ State    ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ vm-105-disk-1 ┊ snap_vm-105-disk-1_vanilla ┊ mox-a     ┊ 0: 32 GiB ┊ DELETING ┊
╰───────────────────────────────────────────────────────────────────────────────╯

Satellite error report:

ERROR REPORT 5BDC4670-2E931-000001

============================================================

Application:                        LINBIT® LINSTOR
Module:                             Satellite
Version:                            0.7.1
Build ID:                           b4dab7399d24dc10917d221bd25ffcf0ed8f9712
Build time:                         2018-10-31T12:18:38+00:00
Error time:                         2018-11-02 14:43:52
Node:                               mox-a

============================================================

Reported error:
===============

Category:                           RuntimeException
Class name:                         NullPointerException
Class canonical name:               java.lang.NullPointerException
Generated at:                       Method 'computeVlmName', Source file 'DrbdDeviceHandler.java', Line #252


Error context:
    NullPointerException

Call backtrace:

    Method                                   Native Class:Line number
    computeVlmName                           N      com.linbit.linstor.core.DrbdDeviceHandler:252
    initializeResourceState                  N      com.linbit.linstor.core.DrbdDeviceHandler:177
    dispatchResource                         N      com.linbit.linstor.core.DrbdDeviceHandler:340
    run                                      N      com.linbit.linstor.core.DeviceManagerImpl$DeviceHandlerInvocation:1262
    run                                      N      com.linbit.WorkerPool$WorkerThread:179


END OF ERROR REPORT.

Feature Request: Import / Export configuration

Feature request

It would be nice to have a way to export the current controller configuration in some machine-readable format like JSON, YAML, or just plain commands.

Yes, I know that I can use the -m flag to save some resources partially, but there is no way to export the whole configuration.

There is also no way to upload it back, either fully or partially.

Goals

  • Simple backup / restore of configuration.
  • Simple migration of configuration from one server to another.
  • Independence from the database type.
  • Simple upload of an initial configuration to the server during deployment.
  • Better automation and integration with 3rd-party software.

Interface changes

We could add a new command like linstor configuration:

Something like this:

usage: linstor configuration [-h]
                    {export, import} ...

Configuration subcommands

optional arguments:
  -h, --help            show this help message and exit

Node commands:
   - export (e)
   - import (i)

  {export, import}
usage: linstor configuration export [-h]
                           [--format {commands,json}]
                           [--scope {controller, encryption, node, resource,
                                   resource-definition, snapshot, storage-pool,
                                   storage-pool-definition, volume-definition} ...]

Exports configuration from linstor cluster.

optional arguments:
  -h, --help            show this help message and exit
  --format {commands,json}
                        Export format (default: json)
  --scope   {controller, encryption, node, resource,
            resource-definition, snapshot, storage-pool,
            storage-pool-definition, volume-definition} ...
                        Scope for export (default: all)
usage: linstor configuration import [-h]
                           [--format {commands,json}]
                           [--overwrite {yes,no}]
                           filename

Imports configuration to linstor cluster.

optional arguments:
  -h, --help            show this help message and exit
  --format {json}
                        Import format (default: json)
  --overwrite {yes,no}
                        Overwrite existing parameters (default: yes)

positional arguments:
  filename              File containing the exported configuration.

Maybe you already have something like this in the controller debug console?
It is a really wanted feature for me.

Feature request: High availability on Linstor changes

Feature request

Problem

When a satellite is offline and we want to apply changes that depend on that satellite, such as deleting a node for a resource or expanding a volume, that change is queued and will only be executed once the connections to all satellites have been restored.

This means that when a node is dead, we can no longer expand or delete resources on other nodes, so there is a high-availability problem.

Idea of solution

It would therefore be interesting for Linstor to act differently when a satellite is offline.
All DRBD connections to the offline nodes would be interrupted, and the changes saved to be executed when the satellite connection is re-established. This would ensure that changes are applied immediately to online satellites even though one satellite is offline.

No way to migrate manually created resources to LINSTOR if the storage pool has no additional space

Some option is needed to temporarily disable the free-disk-space check when declaring resources (maybe a debug option?). Otherwise there is no way to declare already-created resources if the storage pool does not have enough additional space to create a similar resource:

ERROR:
Description:
    Not enough free space available for volume 0 of resource 'res100'.
Details:
    Node(s): 'm10c45', Resource: 'res100'
Show reports:
    linstor error-reports show 5C22A3E6-00000-000002

LINSTORPlugin.pm line 139

After update:

Start-Date: 2018-09-17  08:16:21
Commandline: apt-get upgrade
Upgrade: linstor-common:amd64 (0.6.3-1, 0.6.4-1), linstor-satellite:amd64 (0.6.3-1, 0.6.4-1)
End-Date: 2018-09-17  08:16:24

Starting any VM with a disk on drbdpool produces errors:

TASK ERROR: malformed JSON string, neither tag, array, object, number, string or atom, at character offset 0 (before "(end of string)") at /usr/share/perl5/PVE/Storage/Custom/LINSTORPlugin.pm line 139.

Feature: Possibility for rename resources, snapshots and storage pools

As described in #17, I've used two different storage pools and called the auto-placing command twice, to place data on two different racks.

But a short while ago the linstor CSI plugin was released, which allows setting the AUX properties ReplicasOnSame and ReplicasOnDifferent; this made my setup less optimal than just using AUX properties.

I need to rename my current storage pools to allow using the same pool name but with different AUX properties.

Cannot migrate resource from LVM to ZFS, disk size mismatch

While trying to migrate a 32GB resource from a (thick) LVM to a (thick) ZFS storage pool, I got the following error in dmesg, after which the connection to the ZFS-backed DRBD node was immediately dropped:

drbd vm-121-www-data-2/0 drbd1011 mox-a: The peer's disk size is too small! (67108864 < 67110832 sectors)

My idea was to recreate & resync the resource on ZFS one node at a time, until no more LVM-backed resources remain. The command was linstor r create -s pool_ssd mox-a vm-121-www-data-2, which got stuck for a long time and then finally printed: Error: Socket timeout, no data received since 300192ms.

Satellite and Server binding ipv6 only

Both linstor-server and linstor-satellite 0.7.3 bind to IPv6 only. They should bind to both IPv4 and IPv6. Tested on Debian stretch.
Interestingly, if --bind-address is set to a v4 address, the protocol used is still v6 when examining the binding with netstat.
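
For reference, a way to inspect the bindings, plus a possible workaround (the JVM property is a standard Java switch; where to inject it into the LINSTOR services is deployment-specific and an assumption on my part):

# a dual-stack socket often shows up as tcp6 yet still accepts IPv4 connections
netstat -tlnp | grep 3366

# standard JVM property that forces IPv4-only sockets
JAVA_OPTS='-Djava.net.preferIPv4Stack=true'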

Stuck operations and controller dead after restarting linstor-satellite

Today there was an interesting issue:
LINSTOR was creating new resources but wasn't updating their config on one node.
After restarting linstor-satellite on this node, linstor-controller was dead:
Controller[3868]: 16:16:35.219 [PlainConnector] INFO  LINSTOR/Controller - Remote satellite peer /10.28.36.161:3366 has closed the connection.
Controller[3868]: 16:16:35.220 [PlainConnector] ERROR LINSTOR/Controller - Problem of type 'java.util.ConcurrentModificationException' logged to report number 5BA284EF-00000-012789
ERROR REPORT 5BA284EF-00000-012789

============================================================

Application:                        LINBIT® LINSTOR
Module:                             Controller
Version:                            0.6.4
Build ID:                           464f6e5b198002e0d82122d84c27e1b06ec0400c
Build time:                         2018-09-14T09:11:03+00:00
Error time:                         2018-10-01 16:16:35
Node:                               linstor-controller

============================================================

Reported error:
===============

Category:                           RuntimeException
Class name:                         ConcurrentModificationException
Class canonical name:               java.util.ConcurrentModificationException
Generated at:                       Method 'nextEntry', Source file 'TreeMap.java', Line #1211


Call backtrace:

    Method                                   Native Class:Line number
    nextEntry                                N      java.util.TreeMap$PrivateEntryIterator:1211
    next                                     N      java.util.TreeMap$ValueIterator:1256
    connectionClosing                        N      com.linbit.linstor.netcom.TcpConnectorPeer:454
    closeConnection                          N      com.linbit.linstor.netcom.TcpConnectorService:966
    closeConnection                          N      com.linbit.linstor.netcom.TcpConnectorService:955
    run                                      N      com.linbit.linstor.netcom.TcpConnectorService:532
    run                                      N      java.lang.Thread:748


END OF ERROR REPORT.

After restarting linstor-controller as well, it started working as it should.

gradle assemble fails during compileJava task

Hi, I'm just trying to build the linstor-server from scratch so I can get an understanding of that process (for my own information).
gradle getProtoc works great:

+ gradle getProtoc
Starting a Gradle Daemon, 2 incompatible and 3 stopped Daemons could not be reused, use --status for details

> Task :downloadProtoc
downloading protoc...

Deprecated Gradle features were used in this build, making it incompatible with Gradle 5.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/4.9/userguide/command_line_interface.html#sec:command_line_warnings

BUILD SUCCESSFUL in 4s
2 actionable tasks: 2 executed

However, gradle assemble does not... It starts off okay (Task :generateJava completes successfully), but Task :compileJava fails.

> Task :generateJava
mkdir ../server/generated-src
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgApiVersion.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgCrtSnapshot.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/NodeConn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgDelStorPool.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgModVlmDfn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgAutoPlaceRsc.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgReqErrorReport.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgModRsc.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgHostname.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgCrtStorPoolDfn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/LinStorMapEntry.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgLstSnapshotDfn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgCrtCryptPassphrase.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgModVlmConn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/Vlm.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgSignIn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgLstRscDfn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgLstStorPoolDfn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/VlmState.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgRestoreSnapshotRsc.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgRestoreSnapshotVlmDfn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgLstStorPool.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/StorPoolFreeSpace.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/AutoSelectFilter.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgEnterCryptPassphrase.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgRspMaxVlmSizes.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgDelNodeConn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgDelCtrlCfgProp.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgDelRscConn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgHeader.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/Node.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgCancelWatch.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgLstNode.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgModNode.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgQryMaxVlmSizes.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgDelSnapshot.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgCrtNodeConn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgDelRscDfn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/StorPool.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgEvent.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgCrtRscConn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgModNetInterface.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/RscDfn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgDelRsc.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgDelStorPoolDfn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/Filter.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgModRscConn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/VlmDfn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgLstCtrlCfgProps.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgModNodeConn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgCrtStorPool.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgModStorPool.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgDelNode.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/Rsc.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgCrtRsc.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgDelVlmConn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgCrtNetInterface.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgCrtNode.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/VlmConn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/NetInterface.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgLstRsc.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgDelWatch.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgCrtRscDfn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgCrtVlmConn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgModCryptPassphrase.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/RscState.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgControlCtrl.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgModStorPoolDfn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgModRscDfn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgCrtWatch.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgErrorReport.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgDelVlmDfn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/RscConn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgSetCtrlCfgProp.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgDelNetInterface.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/SnapshotDfn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgApiCallResponse.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/StorPoolDfn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/MsgCrtVlmDfn.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/eventdata/EventRscState.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/eventdata/EventVlmDiskState.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/eventdata/EventRscDfnReady.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/eventdata/EventSnapshotDeployment.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/eventdata/EventRscDeploymentState.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/javainternal/MsgIntFullSyncSuccess.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/javainternal/MsgIntObjectId.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/javainternal/MsgIntNodeData.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/javainternal/MsgIntApplyStorPoolSuccess.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/javainternal/MsgIntResizedVlm.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/javainternal/MsgIntAuth.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/javainternal/MsgDebugCommand.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/javainternal/MsgIntDelVlm.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/javainternal/MsgIntStorPoolData.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/javainternal/MsgIntSnapshotEndedData.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/javainternal/MsgIntFreeSpace.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/javainternal/MsgIntRscData.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/javainternal/EventInProgressSnapshot.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/javainternal/MsgDebugReply.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/javainternal/MsgIntNodeDeletedData.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/javainternal/MsgIntStorPoolDeletedData.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/javainternal/MsgIntDelRsc.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/javainternal/MsgIntSetMasterKey.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/javainternal/MsgIntPrimary.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/javainternal/MsgIntFullSync.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/javainternal/MsgIntSnapshotData.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/javainternal/MsgIntRscDeletedData.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/javainternal/MsgIntAuthSuccess.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/javainternal/MsgIntCryptKey.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/javainternal/MsgIntControllerData.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/javainternal/MsgIntResizedDrbdVlm.proto
/builddir/build/BUILD/linstor-server-0.2.5/tools/protoc-3.2.0/bin/protoc -I=. --java_out=../server/generated-src linstor/proto/javainternal/MsgIntApplyRscSuccess.proto
mkdir -p ../server/generated-src/com/linbit/linstor/api
./genconsts.py java > ../server/generated-src/com/linbit/linstor/api/ApiConsts.java
./gendrbdoptions.py drbdoptions.json
Unable to execute drbdsetup: /usr/sbin/drbdsetup xml-help
Using local file drbdsetup.xml
mkdir -p ../server/generated-src/com/linbit/linstor/api/prop
./genproperties.py java properties.json drbdoptions.json > ../server/generated-src/com/linbit/linstor/api/prop/GeneratedPropertyRules.java

> Task :compileJava
/builddir/build/BUILD/linstor-server-0.2.5/src/com/linbit/linstor/netcom/Peer.java:9: error: cannot find symbol
import com.linbit.linstor.api.ApiConsts;
                             ^
  symbol:   class ApiConsts
  location: package com.linbit.linstor.api
/builddir/build/BUILD/linstor-server-0.2.5/src/com/linbit/linstor/core/StltConfigAccessor.java:4: error: cannot find symbol
import com.linbit.linstor.api.ApiConsts;
                             ^
  symbol:   class ApiConsts
  location: package com.linbit.linstor.api
/builddir/build/BUILD/linstor-server-0.2.5/src/com/linbit/linstor/storage/StorageDriverKind.java:4: error: cannot find symbol
import com.linbit.linstor.api.ApiConsts;
                             ^
  symbol:   class ApiConsts
  location: package com.linbit.linstor.api
/builddir/build/BUILD/linstor-server-0.2.5/src/com/linbit/linstor/ConfFileBuilder.java:6: error: cannot find symbol
import com.linbit.linstor.api.ApiConsts;
                             ^
  symbol:   class ApiConsts
  location: package com.linbit.linstor.api
/builddir/build/BUILD/linstor-server-0.2.5/src/com/linbit/linstor/core/apicallhandler/AbsApiCallHandler.java:41: error: cannot find symbol
import com.linbit.linstor.api.ApiConsts;
                             ^
  symbol:   class ApiConsts
  location: package com.linbit.linstor.api
/builddir/build/BUILD/linstor-server-0.2.5/src/com/linbit/linstor/core/apicallhandler/controller/CtrlRscDfnApiCallHandler.java:30: error: cannot find symbol
import com.linbit.linstor.api.ApiConsts;
                             ^
  symbol:   class ApiConsts
  location: package com.linbit.linstor.api
/builddir/build/BUILD/linstor-server-0.2.5/src/com/linbit/linstor/core/apicallhandler/controller/CtrlStorPoolApiCallHandler.java:19: error: cannot find symbol
import com.linbit.linstor.api.ApiConsts;
                             ^
  symbol:   class ApiConsts
  location: package com.linbit.linstor.api
/builddir/build/BUILD/linstor-server-0.2.5/src/com/linbit/linstor/NodeData.java:7: error: cannot find symbol
import com.linbit.linstor.api.ApiConsts;
                             ^
  symbol:   class ApiConsts
  location: package com.linbit.linstor.api
/builddir/build/BUILD/linstor-server-0.2.5/src/com/linbit/linstor/ResourceData.java:40: error: cannot find symbol
import static com.linbit.linstor.api.ApiConsts.KEY_STOR_POOL_NAME;
                                    ^
  symbol:   class ApiConsts
  location: package com.linbit.linstor.api
/builddir/build/BUILD/linstor-server-0.2.5/src/com/linbit/linstor/ResourceData.java:40: error: static import only from classes and interfaces
import static com.linbit.linstor.api.ApiConsts.KEY_STOR_POOL_NAME;
^
/builddir/build/BUILD/linstor-server-0.2.5/src/com/linbit/linstor/StorPoolData.java:3: error: cannot find symbol
import static com.linbit.linstor.api.ApiConsts.KEY_STOR_POOL_SUPPORTS_SNAPSHOTS;
                                    ^
....
... (I'll refrain from spamming too much, as there are several pages of failures)
...
Note: /builddir/build/BUILD/linstor-server-0.2.5/src/com/linbit/linstor/dbcp/DbConnectionPool.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
100 errors

> Task :compileJava FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':compileJava'.
> Compilation failed; see the compiler error output for details.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 5.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/4.9/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 4s
5 actionable tasks: 5 executed

With respect to the very first failure (of many), I can confirm the file is there:

find -name ApiConsts.java
./builddir/build/BUILD/linstor-server-0.2.5/server/generated-src/com/linbit/linstor/api/ApiConsts.java

I'm not much of a Java guy, but I took a guess that perhaps the bug is that ApiConsts.java should be created in
src/com/linbit/linstor/api/ApiConsts.java instead of
server/generated-src/com/linbit/linstor/api/ApiConsts.java. So I copied it over and tried gradle assemble again, and it just failed further along:

...
> Task :compileJava
/builddir/build/BUILD/linstor-server-0.2.5/src/com/linbit/linstor/api/protobuf/ApiCallAnswerer.java:9: error: package com.linbit.linstor.proto.MsgHeaderOuterClass does not exist
import com.linbit.linstor.proto.MsgHeaderOuterClass.MsgHeader;
                                                   ^
/builddir/build/BUILD/linstor-server-0.2.5/src/com/linbit/linstor/api/protobuf/ProtoMapUtils.java:3: error: cannot find symbol
import com.linbit.linstor.proto.LinStorMapEntryOuterClass;
                               ^
  symbol:   class LinStorMapEntryOuterClass
  location: package com.linbit.linstor.proto
/builddir/build/BUILD/linstor-server-0.2.5/src/com/linbit/linstor/api/protobuf/ProtoMapUtils.java:12: error: package LinStorMapEntryOuterClass does not exist
    public static Map<String, String> asMap(List<LinStorMapEntryOuterClass.LinStorMapEntry> list)
                                                                          ^
/builddir/build/BUILD/linstor-server-0.2.5/src/com/linbit/linstor/api/protobuf/ProtoMapUtils.java:22: error: package LinStorMapEntryOuterClass does not exist
    public static List<LinStorMapEntryOuterClass.LinStorMapEntry> fromMap(Map<String, String> map)
....

Any advice would be great! :)

MySQL connector isn't working

I've created a MySQL database named linstor and gave the linstor user access to it.

Here is my database.cfg file:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
  <comment>LinStor database configuration</comment>
  <entry key="user">linstor</entry>
  <entry key="password">papae8eihohf3Rie</entry>
  <entry key="connection-url">jdbc:mysql://linstordb-pxc/linstor</entry>
</properties>

Here is the output of my controller:

LINSTOR, Module Controller
Version:            0.6.4 (464f6e5b198002e0d82122d84c27e1b06ec0400c)
Build time:         2018-09-14T10:20:12+00:00
Java Version:       1.8
Java VM:            Oracle Corporation, Version 25.181-b13
Operating system:   Linux, Version 4.15.18-3-pve
Environment:        amd64, 8 processors, 7131 MiB memory reserved for allocations

System components initialization in progress

12:01:09.059 [main] INFO  LINSTOR/Controller - Log directory set to: '/logs'
12:01:09.061 [Main] INFO  LINSTOR/Controller - Loading API classes started.
12:01:09.270 [Main] INFO  LINSTOR/Controller - API classes loading finished: 209ms
12:01:09.270 [Main] INFO  LINSTOR/Controller - Dependency injection started.
12:01:09.850 [Main] INFO  LINSTOR/Controller - Dependency injection finished: 580ms
12:01:09.932 [Main] INFO  LINSTOR/Controller - Initializing the database connection pool
12:01:10.016 [Main] INFO  org.flywaydb.core.internal.util.VersionPrinter - Flyway Community Edition 5.0.7 by Boxfuse
Fri Sep 21 12:01:10 UTC 2018 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
12:01:10.271 [Main] INFO  org.flywaydb.core.internal.database.DatabaseFactory - Database: jdbc:mysql://linstordb-pxc/linstor (MySQL 5.7)
12:01:10.309 [Main] INFO  org.flywaydb.core.internal.command.DbSchemas - Creating schema `LINSTOR` ...
12:01:11.325 [Main] INFO  org.flywaydb.core.internal.command.DbSchemas - Creating schema `LINSTOR` ...
12:01:12.335 [Main] INFO  org.flywaydb.core.internal.command.DbSchemas - Creating schema `LINSTOR` ...
12:01:13.345 [Main] INFO  org.flywaydb.core.internal.command.DbSchemas - Creating schema `LINSTOR` ...
12:01:14.355 [Main] INFO  org.flywaydb.core.internal.command.DbSchemas - Creating schema `LINSTOR` ...
12:01:15.363 [Main] INFO  org.flywaydb.core.internal.command.DbSchemas - Creating schema `LINSTOR` ...
12:01:16.373 [Main] INFO  org.flywaydb.core.internal.command.DbSchemas - Creating schema `LINSTOR` ...
12:01:17.382 [Main] INFO  org.flywaydb.core.internal.command.DbSchemas - Creating schema `LINSTOR` ...
12:01:18.391 [Main] INFO  org.flywaydb.core.internal.command.DbSchemas - Creating schema `LINSTOR` ...
12:01:19.400 [Main] INFO  org.flywaydb.core.internal.command.DbSchemas - Creating schema `LINSTOR` ...
12:01:19.413 [Main] ERROR LINSTOR/Controller - 
Unable to create schema `LINSTOR`
---------------------------------
SQL State  : 42000
Error Code : 1044
Message    : Access denied for user 'linstor'@'%' to database 'LINSTOR'
 [Report number 5BA4DD84-00000-000000]

12:01:19.414 [Thread-1] INFO  LINSTOR/Controller - Shutdown in progress
12:01:19.414 [Thread-1] INFO  LINSTOR/Controller - Shutting down service instance 'DatabaseService' of type DatabaseService
12:01:19.415 [Thread-1] INFO  LINSTOR/Controller - Waiting for service instance 'DatabaseService' to complete shutdown
12:01:19.415 [Thread-1] INFO  LINSTOR/Controller - Shutting down service instance 'TaskScheduleService' of type TaskScheduleService
12:01:19.415 [Thread-1] INFO  LINSTOR/Controller - Waiting for service instance 'TaskScheduleService' to complete shutdown
12:01:19.415 [Thread-1] INFO  LINSTOR/Controller - Shutting down service instance 'TimerEventService' of type TimerEventService
12:01:19.415 [Thread-1] INFO  LINSTOR/Controller - Waiting for service instance 'TimerEventService' to complete shutdown
12:01:19.415 [Thread-1] INFO  LINSTOR/Controller - Shutdown complete

trace log and error report in attachment:
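
For what it's worth, the Flyway log above shows repeated attempts to create a schema named LINSTOR (upper case), while the database created by hand is the lower-case linstor; on Linux, MySQL schema names are case-sensitive, so a grant on linstor most likely does not cover LINSTOR. A minimal JDBC sketch, assuming mysql-connector-java is on the classpath, to reproduce the failing step outside of LINSTOR:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

// Re-runs the schema-creation step from the Flyway log, using the
// connection URL from database.cfg; fill in the real password.
public class SchemaGrantCheck {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://linstordb-pxc/linstor";
        try (Connection conn = DriverManager.getConnection(url, "linstor", "<password>");
             Statement stmt = conn.createStatement()) {
            // This statement fails with error code 1044 if the grant
            // only covers the lower-case database name:
            stmt.executeUpdate("CREATE SCHEMA IF NOT EXISTS `LINSTOR`");
            System.out.println("Schema created; grants are sufficient.");
        }
    }
}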

Resource creation fails due to removed drbdadm option

Resource creation fails because drbdadm no longer supports the --config-to-exclude switch:

ii drbd-utils 9.5.0-1ppa1~bionic2 amd64 RAID 1 over TCP/IP for Linux (user utilities)
ii linstor-client 0.6.2-1ppa1~bionic1 all Linstor client command line tool
ii linstor-common 0.6.5-1ppa1~bionic1 all DRBD distributed resource management utility
ii linstor-controller 0.6.5-1ppa1~bionic1 all DRBD distributed resource management utility
ii linstor-satellite 0.6.5-1ppa1~bionic1 all DRBD distributed resource management utility

ERROR REPORT 5BBDEFE7-0DE37-000004

============================================================

Application: LINBIT® LINSTOR
Module: Satellite
Version: 0.6.5
Build ID: d1bd204
Build time: 2018-10-02T07:10:01+00:00
Error time: 2018-10-11 11:26:01
Node: storage-2

============================================================

Reported error:

Description:
Operations on resource 'kubernetes-rd' were aborted
Cause:
Verification of resource file failed
Additional information:
The error reported by the runtime environment or operating system is:
The external command 'drbdadm' exited with error code 1

Category: LinStorException
Class name: ResourceException
Class canonical name: com.linbit.linstor.core.DrbdDeviceHandler.ResourceException
Generated at: Method 'createResourceConfiguration', Source file 'DrbdDeviceHandler.java', Line #1371

Error message: Generated resource file for resource 'kubernetes-rd' is invalid.

Error context:
Generated resource file for resource 'kubernetes-rd' is invalid.

Call backtrace:

Method                                   Native Class:Line number                                                                    
createResourceConfiguration              N      com.linbit.linstor.core.DrbdDeviceHandler:1371                                       
createResource                           N      com.linbit.linstor.core.DrbdDeviceHandler:1069                                       
dispatchResource                         N      com.linbit.linstor.core.DrbdDeviceHandler:311                                        
run                                      N      com.linbit.linstor.core.DeviceManagerImpl$DeviceHandlerInvocation:1276               
run                                      N      com.linbit.WorkerPool$WorkerThread:179

Caused by:

Description:
Execution of the external command 'drbdadm' failed.
Cause:
The external command exited with error code 1.
Correction:
- Check whether the external program is operating properly.
- Check whether the command line is correct.
Contact a system administrator or a developer if the command line is no longer valid
for the installed version of the external program.
Additional information:
The full command line executed was:
drbdadm --config-to-test /var/lib/linstor.d/kubernetes-rd.res_tmp --config-to-exclude /var/lib/linstor.d/kubernetes-rd.res sh-nop

The external command sent the following output data:


The external command sent the follwing error information:
drbdadm: unrecognized option '--config-to-exclude'

Category: LinStorException
Class name: ExtCmdFailedException
Class canonical name: com.linbit.extproc.ExtCmdFailedException
Generated at: Method 'execute', Source file 'DrbdAdm.java', Line #445

Error message: The external command 'drbdadm' exited with error code 1

Call backtrace:

Method                                   Native Class:Line number
execute                                  N      com.linbit.drbd.DrbdAdm:445
execute                                  N      com.linbit.drbd.DrbdAdm:431
checkResFile                             N      com.linbit.drbd.DrbdAdm:334
createResourceConfiguration              N      com.linbit.linstor.core.DrbdDeviceHandler:1364
createResource                           N      com.linbit.linstor.core.DrbdDeviceHandler:1069
dispatchResource                         N      com.linbit.linstor.core.DrbdDeviceHandler:311
run                                      N      com.linbit.linstor.core.DeviceManagerImpl$DeviceHandlerInvocation:1276
run                                      N      com.linbit.WorkerPool$WorkerThread:179

END OF ERROR REPORT.
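
To confirm this independently of LINSTOR, the failing verification step can be re-run by hand. A small sketch that executes the exact command line from the report (paths copied verbatim) and prints the exit code; with a drbd-utils build that dropped the switch, it exits with code 1 and the same 'unrecognized option' message:

import java.io.IOException;

// Re-runs the resource-file verification command from the report.
public class DrbdadmVerify {
    public static void main(String[] args) throws IOException, InterruptedException {
        Process proc = new ProcessBuilder(
                "drbdadm",
                "--config-to-test", "/var/lib/linstor.d/kubernetes-rd.res_tmp",
                "--config-to-exclude", "/var/lib/linstor.d/kubernetes-rd.res",
                "sh-nop")
                .inheritIO() // pass drbdadm's stdout/stderr through
                .start();
        System.out.println("drbdadm exited with code " + proc.waitFor());
    }
}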

uppercase character in hostname

Is it possible that uppercase characters confuse LINSTOR?

[MainWorkerPool-1] ERROR LINSTOR/Satellite - Satellite node name 'LNQHSOpensT006' doesn't match nodes hostname 'LNQHSOpensT006'
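
As a hedged guess at the failure class (the satellite's actual check may differ), here is a tiny sketch with hypothetical values showing how a node name and an OS-reported hostname that differ only in case fail a case-sensitive comparison, even though a case-normalizing display can make them look identical:

// Hypothetical values for illustration; not taken from the log above.
public class NodeNameCheck {
    public static void main(String[] args) {
        String nodeName = "LNQHSOpensT006";   // name registered in LINSTOR
        String osHostname = "lnqhsopenst006"; // hostname as the OS might report it

        System.out.println(nodeName.equals(osHostname));           // false
        System.out.println(nodeName.equalsIgnoreCase(osHostname)); // true
    }
}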

FeatureRequest: Zones

Hi, I want the ability to place resources per zone; e.g., I have two chassis with 45 cartridges each,
and I need to autoplace resources onto both of them.

Linstor resources ignore global_common.conf

LINSTOR resources ignore all settings from the global common section.

I did some investigation and found out that when a resource uses the template-file directive and the template file includes a common section, it replaces the global common section altogether.
LINSTOR uses
template-file "linstor_common.conf";
in every resource file.

This would not be a problem if LINSTOR allowed all DRBD settings to be set via the controller's drbd-options, but this is not the case (e.g. handlers).

I think this should get some thought. IMO this renders LINSTOR unusable for anyone who wants to use advanced DRBD features.

Resources stuck on DELETING

I have a problem removing my resources.
I presume it may be the same problem connected with long LVM output (#8).
Anyway, I can't delete my resources even after correcting the global_filter in my lvm.conf.

satellite's log:

23:49:44.854 [MainWorkerPool-7] ERROR LINSTOR/Satellite - Problem of type 'java.lang.NullPointerException' logged to report number 5BAFAC32-CDA0B-000000

23:49:44.863 [MainWorkerPool-7] ERROR LINSTOR/Satellite - Access to deleted resource [Report number 5BAFAC32-CDA0B-000001]

ErrorReport-5BAFAC32-CDA0B-000000.log
ErrorReport-5BAFAC32-CDA0B-000001.log

This problem occurs quite often in my installation.

linstor error-report show xxx fails if output is piped/redirected

linstor error-reports show 5B9A4EE9-000002 | tee /home/test/5B9A4EE9-000002
Traceback (most recent call last):
File "/usr/bin/linstor", line 24, in
linstor_client_main.main()
File "/usr/lib/python2.7/dist-packages/linstor_client_main.py", line 555, in main
LinStorCLI().run()
File "/usr/lib/python2.7/dist-packages/linstor_client_main.py", line 521, in run
sys.exit(self.parse_and_execute(sys.argv[1:]))
File "/usr/lib/python2.7/dist-packages/linstor_client_main.py", line 264, in parse_and_execute
rc = args.func(args)
File "/usr/lib/python2.7/dist-packages/linstor_client/commands/commands.py", line 759, in cmd_error_report
return self.output_list(args, lstmsg, self.show_error_report, single_item=False)
File "/usr/lib/python2.7/dist-packages/linstor_client/commands/commands.py", line 226, in output_list
output_func(args, replies[0].proto_msg if single_item else replies)
File "/usr/lib/python2.7/dist-packages/linstor_client/commands/commands.py", line 755, in show_error_report
print(error.text)
UnicodeEncodeError: 'ascii' codec can't encode character u'\xae' in position 134: ordinal not in range(128)
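
The traceback comes from the Python 2 client: when stdout is a pipe rather than a terminal, Python 2 falls back to the ASCII codec, which cannot encode the '®' (U+00AE) in the 'LINBIT® LINSTOR' header of the report text. The same pitfall sketched in Java for consistency with the other examples here (an illustration of the principle, not the client's code): choose an explicit output encoding instead of relying on the inherited one.

import java.io.PrintStream;
import java.io.UnsupportedEncodingException;

// Writing through a PrintStream with an explicit encoding keeps the
// output independent of whether stdout is a TTY or a pipe.
public class ExplicitEncoding {
    public static void main(String[] args) throws UnsupportedEncodingException {
        PrintStream out = new PrintStream(System.out, true, "UTF-8");
        out.println("LINBIT\u00ae LINSTOR"); // '\u00ae' is the '®' character
    }
}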

Allow specifying nested ZFS datasets

Feature Request

Description

Please allow specifying nested ZFS datasets:

linstor storage-pool create node1 zfspool zfs rpool/linstor

It is very useful for having many pools on the same devices.
Also, you can specify different options for different ZFS datasets.
In any case, always using the root pool for placing volumes is usually not good practice.

Use case

For example, I have one root ZFS pool, rpool, and I want to store docker images and DRBD drives there, plus some data, and have them separated from each other. I can create a few datasets for this purpose:

rpool/data (some filesystem)
rpool/docker (docker zvols)
rpool/linstor (linstor zvols)

Then I need to configure LINSTOR to use rpool/linstor (not rpool).

Actual Behavior

Currently this command returns:

ERROR:
Description:
    Invalid property value
Cause:
    The value 'rpool/linstor' is not valid for the key 'StorDriver/ZPool'
Details:
    The value must match '[a-zA-Z0-9_-]+'
    Node: node1, Storage pool name: zfspool
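
The rejection follows directly from the pattern quoted in the error message: '[a-zA-Z0-9_-]+' contains no '/', so any nested dataset name fails to match. A short sketch demonstrating this, together with a hypothetical relaxed pattern (illustration only, not LINSTOR's actual rule) that would also accept nested datasets:

import java.util.regex.Pattern;

public class ZpoolNameCheck {
    public static void main(String[] args) {
        // The pattern from the error message above rejects '/':
        Pattern current = Pattern.compile("[a-zA-Z0-9_-]+");
        System.out.println(current.matcher("rpool").matches());         // true
        System.out.println(current.matcher("rpool/linstor").matches()); // false

        // Hypothetical relaxed pattern that also accepts nested datasets:
        Pattern nested = Pattern.compile("[a-zA-Z0-9_-]+(/[a-zA-Z0-9_-]+)*");
        System.out.println(nested.matcher("rpool/linstor").matches());  // true
    }
}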

Error creating resources

I get the following error when I try to create a resource on a storage node. drbdadm is installed on the storage nodes. The error report indicates that it is a timeout issue.

sudo linstor resource create stor02a stor02 --storage-pool pool_stor02

SUCCESS:
Description:
    New resource 'stor02' on node 'stor02a' registered.
Details:
    Resource 'stor02' on node 'stor02a' UUID is: 392a2f54-3d21-4b2b-adf5-269dc0f4ee8c
SUCCESS:
Description:
    Volume with number '0' on resource 'stor02' on node 'stor02a' successfully registered
Details:
    Volume UUID is: b58a14e8-8a7c-4b4f-8c0b-cc2d0effd385
ERROR:
Description:
    (Node: 'stor02a') Adjusting the DRBD state of resource 'stor02' failed
Cause:
    The external command for adjusting the DRBD state of the resource failed
Correction:
    - Check whether the required software is installed
    - Check whether the application's search path includes the location of the external software
    - Check whether the application has execute permission for the external command
Show reports:
    linstor error-reports show 5C2CFB48-BB581-000002

linstor error-reports show 5C2CFB48-BB581-000002

ERROR REPORT 5C2CFB48-BB581-000002

============================================================

Application: LINBIT® LINSTOR
Module: Satellite
Version: 0.7.5
Build ID: d74305b
Build time: 2018-12-21T09:05:14+00:00
Error time: 2019-01-02 13:02:21
Node: stor02a

============================================================

Reported error:

Description:
Operations on resource 'stor02' were aborted
Cause:
The external command for adjusting the DRBD state of the resource failed
Correction:
- Check whether the required software is installed
- Check whether the application's search path includes the location
of the external software
- Check whether the application has execute permission for the external command

Category: LinStorException
Class name: ResourceException
Class canonical name: com.linbit.linstor.core.DrbdDeviceHandler.ResourceException
Generated at: Method 'adjustResource', Source file 'DrbdDeviceHandler.java', Line #1559

Error message: Adjusting the DRBD state of resource 'stor02' failed

Error context:
Adjusting the DRBD state of resource 'stor02' failed

Call backtrace:

Method                                   Native Class:Line number
adjustResource                           N      com.linbit.linstor.core.DrbdDeviceHandler:1559
createResource                           N      com.linbit.linstor.core.DrbdDeviceHandler:1141
dispatchResource                         N      com.linbit.linstor.core.DrbdDeviceHandler:364
run                                      N      com.linbit.linstor.core.DeviceManagerImpl$DeviceHandlerInvocation:1229
run                                      N      com.linbit.WorkerPool$WorkerThread:179

Caused by:

Description:
Execution of the external command 'drbdadm' failed.
Cause:
The external command did not complete within the timeout.
Possible causes include:
- The system load may be too high to ensure completion of external commands in a timely manner.
- The program implementing the external command may not be operating properly.
- The operating system may have entered an errorneous state.
Correction:
Check whether the external program and the operating system are still operating properly.
Check whether the system's load is within normal parameters.
Additional information:
The full command line executed was:
drbdadm -vvv adjust stor02

Category: LinStorException
Class name: ExtCmdFailedException
Class canonical name: com.linbit.extproc.ExtCmdFailedException
Generated at: Method 'execute', Source file 'DrbdAdm.java', Line #442

Error message: The external command 'drbdadm' did not complete within the timeout

Call backtrace:

Method                                   Native Class:Line number
execute                                  N      com.linbit.drbd.DrbdAdm:442
adjust                                   N      com.linbit.drbd.DrbdAdm:95
adjustResource                           N      com.linbit.linstor.core.DrbdDeviceHandler:1553
createResource                           N      com.linbit.linstor.core.DrbdDeviceHandler:1141
dispatchResource                         N      com.linbit.linstor.core.DrbdDeviceHandler:364
run                                      N      com.linbit.linstor.core.DeviceManagerImpl$DeviceHandlerInvocation:1229
run                                      N      com.linbit.WorkerPool$WorkerThread:179

Caused by:

Category: Exception
Class name: ChildProcessTimeoutException
Class canonical name: com.linbit.ChildProcessTimeoutException
Generated at: Method 'waitFor', Source file 'ChildProcessHandler.java', Line #133

Call backtrace:

Method                                   Native Class:Line number
waitFor                                  N      com.linbit.extproc.ChildProcessHandler:133
syncProcess                              N      com.linbit.extproc.ExtCmd:92
pipeExec                                 N      com.linbit.extproc.ExtCmd:63
execute                                  N      com.linbit.drbd.DrbdAdm:434
adjust                                   N      com.linbit.drbd.DrbdAdm:95
adjustResource                           N      com.linbit.linstor.core.DrbdDeviceHandler:1553
createResource                           N      com.linbit.linstor.core.DrbdDeviceHandler:1141
dispatchResource                         N      com.linbit.linstor.core.DrbdDeviceHandler:364
run                                      N      com.linbit.linstor.core.DeviceManagerImpl$DeviceHandlerInvocation:1229
run                                      N      com.linbit.WorkerPool$WorkerThread:179

END OF ERROR REPORT.
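
Since the root cause here is a ChildProcessTimeoutException rather than a missing binary, a useful first step is to run the same command by hand and see whether it completes at all. A minimal sketch (not LINSTOR's ChildProcessHandler) that runs the command line from the report with an explicit, arbitrarily chosen timeout:

import java.io.IOException;
import java.util.concurrent.TimeUnit;

// Re-runs the adjust command from the report, killing it if it hangs.
public class AdjustWithTimeout {
    public static void main(String[] args) throws IOException, InterruptedException {
        Process proc = new ProcessBuilder("drbdadm", "-vvv", "adjust", "stor02")
                .inheritIO()
                .start();
        if (!proc.waitFor(45, TimeUnit.SECONDS)) { // hypothetical timeout
            proc.destroyForcibly();
            System.err.println("drbdadm did not complete within the timeout");
        } else {
            System.out.println("drbdadm exited with code " + proc.exitValue());
        }
    }
}

If the command also hangs when run manually, the problem is in drbdadm or the DRBD kernel state rather than in LINSTOR's timeout handling.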

Parameters for `DfltDisklessStorPool` not showable

I can set the NIC for DfltDisklessStorPool:

linstor storage-pool set-property $node DfltDisklessStorPool PrefNic data

but I can't determine whether it is already set or not.

Both linstor -m sp l and linstor sp lp $node DfltDisklessStorPool say nothing about it.

Resources not updating after removing via linstor node lost

After the linstor node lost command, the resource on the surviving nodes still has a configured connection to the dead node.
There is no way to remove those connections from the surviving nodes without touching the resource.

exampleres role:Primary
  disk:UpToDate
  node1 role:Secondary
    peer-disk:UpToDate
  node2 connection:Connecting
  node3 connection:Connecting

Initialization of the com.linbit.linstor.netcom.TcpConnectorService service instance 'PlainConnector' failed.

Hi, today I deployed a new LINSTOR with a PostgreSQL database.

I added only 4 nodes, then restarted the controller; now I see really weird behavior.

It seems that the controller is working and listening on the right port, but the client shows something strange:

Error: Unable connecting to linstor://localhost:3376: [Errno 99] Cannot assign requested address

The logs show that the controller can't connect to my nodes:

LINSTOR, Module Controller
Version:            0.6.5 (d1bd204b9d673f9938226f8c18853e8aae8dd00c)
Build time:         2018-10-02T07:10:01+00:00
Java Version:       10
Java VM:            Oracle Corporation, Version 10.0.2+13-Ubuntu-1ubuntu0.18.04.2
Operating system:   Linux, Version 4.15.18-7-pve
Environment:        amd64, 1 processors, 7756 MiB memory reserved for allocations

System components initialization in progress

18:43:59.015 [main] INFO  LINSTOR/Controller - Log directory set to: '/logs'
18:43:59.016 [Main] INFO  LINSTOR/Controller - Loading API classes started.
18:43:59.251 [Main] INFO  LINSTOR/Controller - API classes loading finished: 235ms
18:43:59.252 [Main] INFO  LINSTOR/Controller - Dependency injection started.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.google.inject.internal.cglib.core.$ReflectUtils$1 (file:/usr/share/linstor-server/lib/guice-4.2.0.jar) to method java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain)
WARNING: Please consider reporting this to the maintainers of com.google.inject.internal.cglib.core.$ReflectUtils$1
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
18:43:59.886 [Main] INFO  LINSTOR/Controller - Dependency injection finished: 634ms
18:43:59.960 [Main] INFO  LINSTOR/Controller - Initializing the database connection pool
18:44:00.061 [Main] INFO  org.flywaydb.core.internal.util.VersionPrinter - Flyway Community Edition 5.0.7 by Boxfuse
18:44:00.114 [Main] INFO  org.flywaydb.core.internal.database.DatabaseFactory - Database: jdbc:postgresql://10.102.210.159/linstor1 (PostgreSQL 10.4)
18:44:00.151 [Main] INFO  org.flywaydb.core.internal.command.DbValidate - Successfully validated 9 migrations (execution time 00:00.013s)
18:44:00.160 [Main] INFO  org.flywaydb.core.internal.command.DbMigrate - Current version of schema "LINSTOR": 2018.09.03.14.30
18:44:00.161 [Main] WARN  org.flywaydb.core.internal.command.DbMigrate - outOfOrder mode is active. Migration of schema "LINSTOR" may not be reproducible.
18:44:00.161 [Main] INFO  org.flywaydb.core.internal.command.DbMigrate - Schema "LINSTOR" is up to date. No migration necessary.
18:44:00.164 [Main] INFO  LINSTOR/Controller - Starting service instance 'DatabaseService' of type DatabaseService
18:44:00.164 [Main] INFO  LINSTOR/Controller - Starting service instance 'TaskScheduleService' of type TaskScheduleService
18:44:00.164 [Main] INFO  LINSTOR/Controller - Starting service instance 'TimerEventService' of type TimerEventService
18:44:00.193 [Main] INFO  LINSTOR/Controller - Loading security objects
18:44:00.200 [Main] INFO  LINSTOR/Controller - Current security level is MAC
18:44:00.211 [Main] INFO  LINSTOR/Controller - Core objects load from database is in progress
18:44:00.260 [Main] INFO  LINSTOR/Controller - Core objects load from database completed
18:44:00.262 [Main] INFO  LINSTOR/Controller - Initializing network communications services
18:44:00.263 [Main] WARN  LINSTOR/Controller - The SSL network communication service 'DebugSslConnector' could not be started because the keyStore file (ssl/keystore.jks) is missing
18:44:00.276 [Main] ERROR LINSTOR/Controller - Initialization of the com.linbit.linstor.netcom.TcpConnectorService service instance 'PlainConnector' failed. [Report number 5BD0BD6E-00000-000000]

18:44:00.276 [Main] WARN  LINSTOR/Controller - The SSL network communication service 'SslConnector' could not be started because the keyStore file (ssl/keystore.jks) is missing
18:44:00.277 [Main] INFO  LINSTOR/Controller - Reconnecting to previously known nodes
18:44:00.282 [Main] ERROR LINSTOR/Controller - Cannot connect to satellite [Report number 5BD0BD6E-00000-000001]

18:44:00.287 [Main] ERROR LINSTOR/Controller - Cannot connect to satellite [Report number 5BD0BD6E-00000-000002]

18:44:00.291 [Main] ERROR LINSTOR/Controller - Cannot connect to satellite [Report number 5BD0BD6E-00000-000003]

18:44:00.295 [Main] ERROR LINSTOR/Controller - Cannot connect to satellite [Report number 5BD0BD6E-00000-000004]

18:44:00.295 [Main] INFO  LINSTOR/Controller - Reconnect requests sent
18:44:00.295 [Main] INFO  LINSTOR/Controller - Controller initialized

ErrorReport-5BD0BD6E-00000-000004.log
ErrorReport-5BD0BD6E-00000-000003.log
ErrorReport-5BD0BD6E-00000-000002.log
ErrorReport-5BD0BD6E-00000-000001.log
ErrorReport-5BD0BD6E-00000-000000.log

How to install linstor-server

Do you have any documentation on how to install linstor-server (satellite and controller) on a Red Hat-based system?
Your help would be greatly appreciated.
Thanks in advance.

Identity 'PUBLIC' using role: 'PUBLIC' is not authorized to set a controller config property

linstor controller drbd-options --after-sb-0pri=discard-zero-changes --after-sb-1pri=discard-secondary --after-sb-2pri=disconnect
ERROR:
    Identity 'PUBLIC' using role: 'PUBLIC' is not authorized to set a controller config property.
ERROR:
    Identity 'PUBLIC' using role: 'PUBLIC' is not authorized to set a controller config property.
ERROR:
    Identity 'PUBLIC' using role: 'PUBLIC' is not authorized to set a controller config property.

cat ErrorReport-5B9B4BAF-000002.log

ERROR REPORT 5B9B4BAF-000002

============================================================

Application:                        LINBIT® LINSTOR
Module:                             Controller
Version:                            0.6.3
Build ID:                           d608e73ee81aba96ced64e6358244994192341d1
Build time:                         2018-09-10T12:33:20+00:00
Error time:                         2018-09-14 08:49:10
Node:                               slava8

============================================================

Reported error:
===============

Description:
    Access was denied by mandatory access control rules
Cause:
    No rule is present that allows access of type CHANGE by a subject in security domain PUBLIC to an object of security type SYSTEM
Correction:
    A rule defining the allowed level of access from the subject domain to the object type must be defined.
    Mandatory access control rules can only be defined by a role with administrative privileges.

Category:                           LinStorException
Class name:                         AccessDeniedException
Class canonical name:               com.linbit.linstor.security.AccessDeniedException
Generated at:                       Method 'requireAccess', Source file 'SecurityType.java', Line #269

Error message:                      Access of type 'CHANGE' not allowed by mandatory access control rules

Error context:
    Identity 'PUBLIC' using role: 'PUBLIC' is not authorized to set a controller config property.

Call backtrace:

    Method                                   Native Class:Line number
    requireAccess                            N      com.linbit.linstor.security.SecurityType:269
    requireAccess                            N      com.linbit.linstor.security.ObjectProtection:170
    setStltProp                              N      com.linbit.linstor.SystemConfProtectionRepository:77
    setProp                                  N      com.linbit.linstor.core.apicallhandler.controller.CtrlConfApiCallHandler:173
    setCtrlCfgProp                           N      com.linbit.linstor.core.apicallhandler.controller.CtrlApiCallHandler:1376
    execute                                  N      com.linbit.linstor.api.protobuf.controller.SetCtrlCfgProp:36
    executeNonReactive                       N      com.linbit.linstor.proto.CommonMessageProcessor:481
    lambda$execute$13                        N      com.linbit.linstor.proto.CommonMessageProcessor:458
    doInScope                                N      com.linbit.linstor.core.apicallhandler.ScopeRunner:110
    lambda$null$2                            N      com.linbit.linstor.core.apicallhandler.ScopeRunner:74
    call                                     N      reactor.core.publisher.MonoCallable:92
    trySubscribeScalarMap                    N      reactor.core.publisher.FluxFlatMap:126
    subscribe                                N      reactor.core.publisher.MonoFlatMapMany:46
    subscribe                                N      reactor.core.publisher.Flux:6877
    onNext                                   N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:184
    request                                  N      reactor.core.publisher.Operators$ScalarSubscription:1640
    onSubscribe                              N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:131
    subscribe                                N      reactor.core.publisher.MonoCurrentContext:35
    subscribe                                N      reactor.core.publisher.MonoFlatMapMany:49
    subscribe                                N      reactor.core.publisher.Flux:6877
    onNext                                   N      reactor.core.publisher.FluxFlatMap$FlatMapMain:372
    onNext                                   N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner:238
    request                                  N      reactor.core.publisher.Operators$ScalarSubscription:1640
    onSubscribeInner                         N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:140
    onSubscribe                              N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner:233
    trySubscribeScalarMap                    N      reactor.core.publisher.FluxFlatMap:161
    subscribe                                N      reactor.core.publisher.MonoFlatMapMany:46
    subscribe                                N      reactor.core.publisher.Flux:6877
    onNext                                   N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:184
    request                                  N      reactor.core.publisher.Operators$ScalarSubscription:1640
    onSubscribe                              N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:131
    subscribe                                N      reactor.core.publisher.MonoCurrentContext:35
    subscribe                                N      reactor.core.publisher.MonoFlatMapMany:49
    subscribe                                N      reactor.core.publisher.FluxFlatMap:97
    subscribe                                N      reactor.core.publisher.FluxOnErrorResume:47
    subscribe                                N      reactor.core.publisher.FluxOnErrorResume:47
    subscribe                                N      reactor.core.publisher.FluxOnErrorResume:47
    subscribe                                N      reactor.core.publisher.FluxContextStart:49
    subscribe                                N      reactor.core.publisher.Flux:6877
    onNext                                   N      reactor.core.publisher.FluxFlatMap$FlatMapMain:372
    slowPath                                 N      reactor.core.publisher.FluxArray$ArraySubscription:126
    request                                  N      reactor.core.publisher.FluxArray$ArraySubscription:99
    onSubscribe                              N      reactor.core.publisher.FluxFlatMap$FlatMapMain:332
    subscribe                                N      reactor.core.publisher.FluxMerge:70
    subscribe                                N      reactor.core.publisher.FluxOnErrorResume:47
    subscribe                                N      reactor.core.publisher.Flux:6877
    onComplete                               N      reactor.core.publisher.FluxConcatArray$ConcatArraySubscriber:208
    subscribe                                N      reactor.core.publisher.FluxConcatArray:81
    subscribe                                N      reactor.core.publisher.FluxPeek:83
    subscribe                                N      reactor.core.publisher.FluxPeek:83
    subscribe                                N      reactor.core.publisher.FluxDefer:55
    subscribe                                N      reactor.core.publisher.Flux:6877
    onNext                                   N      reactor.core.publisher.FluxFlatMap$FlatMapMain:372
    drainAsync                               N      reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:391
    drain                                    N      reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:633
    onNext                                   N      reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:238
    onNext                                   N      reactor.core.publisher.FluxHandleFuseable$HandleFuseableSubscriber:143
    drainFused                               N      reactor.core.publisher.UnicastProcessor:234
    drain                                    N      reactor.core.publisher.UnicastProcessor:267
    onNext                                   N      reactor.core.publisher.UnicastProcessor:343
    next                                     N      reactor.core.publisher.FluxCreate$IgnoreSink:573
    next                                     N      reactor.core.publisher.FluxCreate$SerializedSink:151
    processInOrder                           N      com.linbit.linstor.netcom.TcpConnectorPeer:351
    doProcessMessage                         N      com.linbit.linstor.proto.CommonMessageProcessor:200
    lambda$processMessage$2                  N      com.linbit.linstor.proto.CommonMessageProcessor:146
    onNext                                   N      reactor.core.publisher.FluxPeek$PeekSubscriber:177
    runAsync                                 N      reactor.core.publisher.FluxPublishOn$PublishOnSubscriber:396
    run                                      N      reactor.core.publisher.FluxPublishOn$PublishOnSubscriber:480
    call                                     N      reactor.core.scheduler.WorkerTask:84
    call                                     N      reactor.core.scheduler.WorkerTask:37
    run                                      N      java.util.concurrent.FutureTask:266
    access$201                               N      java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask:180
    run                                      N      java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask:293
    runWorker                                N      java.util.concurrent.ThreadPoolExecutor:1149
    run                                      N      java.util.concurrent.ThreadPoolExecutor$Worker:624
    run                                      N      java.lang.Thread:748


END OF ERROR REPORT.

Linstor sometimes double-books DRBD ports

While deleting and recreating resources on different nodes, Linstor somehow got into a state where it tries to reuse another resource's network port every time, and fails on drbdadm. No idea how to reproduce this, unfortunately. Here's the error report:

ERROR REPORT 5BBCF6F0-2E931-000000

============================================================

Application:                        LINBIT® LINSTOR
Module:                             Satellite
Version:                            0.6.4
Build ID:                           464f6e5b198002e0d82122d84c27e1b06ec0400c
Build time:                         2018-09-14T09:11:03+00:00
Error time:                         2018-10-09 21:44:05
Node:                               mox-a

============================================================

Reported error:
===============

Description:
    Operations on resource 'vm-103-disk-1' were aborted
Cause:
    Verification of resource file failed
Additional information:
    The error reported by the runtime environment or operating system is:
    The external command 'drbdadm' exited with error code 10

Category:                           LinStorException
Class name:                         ResourceException
Class canonical name:               com.linbit.linstor.core.DrbdDeviceHandler.ResourceException
Generated at:                       Method 'createResourceConfiguration', Source file 'DrbdDeviceHandler.java', Line #1349

Error message:                      Generated resource file for resource 'vm-103-disk-1' is invalid.

Error context:
    Generated resource file for resource 'vm-103-disk-1' is invalid.

Call backtrace:

    Method                                   Native Class:Line number
    createResourceConfiguration              N      com.linbit.linstor.core.DrbdDeviceHandler:1349
    createResource                           N      com.linbit.linstor.core.DrbdDeviceHandler:1058
    dispatchResource                         N      com.linbit.linstor.core.DrbdDeviceHandler:313
    run                                      N      com.linbit.linstor.core.DeviceManagerImpl$DeviceHandlerInvocation:1337
    run                                      N      com.linbit.WorkerPool$WorkerThread:179

Caused by:
==========

Description:
    Execution of the external command 'drbdadm' failed.
Cause:
    The external command exited with error code 10.
Correction:
    - Check whether the external program is operating properly.
    - Check whether the command line is correct.
      Contact a system administrator or a developer if the command line is no longer valid
      for the installed version of the external program.
Additional information:
    The full command line executed was:
    drbdadm --config-to-test /var/lib/linstor.d/vm-103-disk-1.res_tmp --config-to-exclude /var/lib/linstor.d/vm-103-disk-1.res sh-nop

    The external command sent the following output data:


    The external command sent the following error information:
    /var/lib/linstor.d/vm-103-disk-1.res_tmp:44: in resource vm-103-disk-1
        ipv4:172.17.0.1:7002 is also used /var/lib/drbd.d/drbdmanage_vm-201-disk-1.res:0 (resource vm-201-disk-1)
    command sh-nop exited with code 10


Category:                           LinStorException
Class name:                         ExtCmdFailedException
Class canonical name:               com.linbit.extproc.ExtCmdFailedException
Generated at:                       Method 'execute', Source file 'DrbdAdm.java', Line #445

Error message:                      The external command 'drbdadm' exited with error code 10


Call backtrace:

    Method                                   Native Class:Line number
    execute                                  N      com.linbit.drbd.DrbdAdm:445
    execute                                  N      com.linbit.drbd.DrbdAdm:431
    checkResFile                             N      com.linbit.drbd.DrbdAdm:334
    createResourceConfiguration              N      com.linbit.linstor.core.DrbdDeviceHandler:1342
    createResource                           N      com.linbit.linstor.core.DrbdDeviceHandler:1058
    dispatchResource                         N      com.linbit.linstor.core.DrbdDeviceHandler:313
    run                                      N      com.linbit.linstor.core.DeviceManagerImpl$DeviceHandlerInvocation:1337
    run                                      N      com.linbit.WorkerPool$WorkerThread:179


END OF ERROR REPORT.
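
Worth noting: the conflicting port in the report above comes from a leftover drbdmanage file under /var/lib/drbd.d, not from a second LINSTOR resource. A hedged diagnostic sketch for spotting such collisions, using only the paths shown in the report:

    # extract every IP:port endpoint from the LINSTOR-generated and the legacy
    # drbdmanage resource files, then print only the duplicated endpoints
    grep -h 'address' /var/lib/linstor.d/*.res /var/lib/drbd.d/*.res \
        | grep -Eo '[0-9.]+:[0-9]+' | sort | uniq -d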

Resource stuck on DELETING after timeout operation

At one point I got an error while removing a resource from one node:

Description:
    Failed to delete volume [hosting-vol-data-web-hc1-wd11-0_00000]
Cause:
    External command timed out
Additional information:
    External command: lvremove -f data/hosting-vol-data-web-hc1-wd11-0_00000

A while later I checked LVM on the node: the volume was fully removed, but LINSTOR still showed the resource as DELETING.

After restarting the linstor-satellite service, it disappeared completely.
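
For reference, the recovery steps described above as shell commands (a sketch, assuming the volume group data from the error message and a systemd-managed satellite):

    lvs data                              # confirm the logical volume is really gone
    systemctl restart linstor-satellite   # make the satellite re-read its state
    linstor resource list                 # the DELETING entry should no longer appear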

volume-definition set-size runs drbdadm resize on diskless nodes

Hi.

I have a 3-node cluster, one of the nodes being diskless, like so:

LINSTOR ==> storage-pool list
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool ┊ Node                  ┊ Driver         ┊ PoolName        ┊ FreeCapacity ┊ TotalCapacity ┊ SupportsSnapshots ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ drbdpool    ┊ arbiter1.hosting90.cz ┊ DisklessDriver ┊                 ┊              ┊               ┊ false             ┊
┊ drbdpool    ┊ drbd1.hosting90.cz    ┊ LvmThinDriver  ┊ centos/drbdpool ┊    18.00 TiB ┊     18.00 TiB ┊ true              ┊
┊ drbdpool    ┊ drbd2.hosting90.cz    ┊ LvmThinDriver  ┊ centos/drbdpool ┊    18.00 TiB ┊     18.00 TiB ┊ true              ┊
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

When I tried to resize a volume:

linstor volume-definition set-size target123459 0 200GiB

LINSTOR tried to run drbdadm resize on the diskless node:

drbdadm -vvv resize target123459/0
The resize command requires a local disk, but the configuration gives none

The volume is now stuck in the resizing state:
┊ target123459 ┊ 0 ┊ 1002 ┊ 200 GiB ┊ resizing ┊
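
A hedged sketch of how to confirm where the call went wrong; a diskless replica has no backing disk to grow, so the resize only makes sense on the diskful nodes:

    drbdadm status target123459           # the diskless node reports disk:Diskless
    # on a node that holds a backing disk, the same command should succeed:
    drbdadm -vvv resize target123459/0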

String index out of range: -3

Hi, after upgrading my testing cluster, LINSTOR doesn't work anymore:

Nodes show as Connected:

╭─────────────────────────────────────────────────────────────────────────╮
┊ Node   ┊ NodeType  ┊ Addresses                              ┊ State     ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ pve1-1 ┊ SATELLITE ┊ 10.29.36.159,10.28.36.159:3366 (PLAIN) ┊ Connected ┊
┊ pve1-2 ┊ SATELLITE ┊ 10.29.36.160,10.28.36.160:3366 (PLAIN) ┊ Connected ┊
┊ pve1-3 ┊ SATELLITE ┊ 10.29.36.161,10.28.36.161:3366 (PLAIN) ┊ Connected ┊
╰─────────────────────────────────────────────────────────────────────────╯

There are no errors on the linstor-controller.

Logs from linstor-satellite:

[MainWorkerPool-1] INFO  LINSTOR/Satellite - Controller connected and authenticated (10.28.36.172:48704)
[MainWorkerPool-2] INFO  LINSTOR/Satellite - Node 'pve1-1' created.
[MainWorkerPool-2] INFO  LINSTOR/Satellite - Node 'pve1-2' created.
[MainWorkerPool-2] INFO  LINSTOR/Satellite - Node 'pve1-3' created.
[MainWorkerPool-2] INFO  LINSTOR/Satellite - Storage pool 'DfltDisklessStorPool' created.
[MainWorkerPool-2] INFO  LINSTOR/Satellite - Storage pool 'drbdpool' created.
[MainWorkerPool-2] INFO  LINSTOR/Satellite - Resource 'test' created for node 'pve1-1'.
[MainWorkerPool-2] INFO  LINSTOR/Satellite - Resource 'vm-100-disk-1' created for node 'pve1-1'.
[MainWorkerPool-2] INFO  LINSTOR/Satellite - Resource 'vm-100-disk-1' created for node 'pve1-2'.
[MainWorkerPool-2] INFO  LINSTOR/Satellite - Resource 'vm-100-disk-1' created for node 'pve1-3'.
[MainWorkerPool-2] INFO  LINSTOR/Satellite - Resource 'vm-1012-disk-1' created for node 'pve1-1'.
[MainWorkerPool-2] INFO  LINSTOR/Satellite - Resource 'vm-1012-disk-1' created for node 'pve1-2'.
[MainWorkerPool-2] INFO  LINSTOR/Satellite - Resource 'vm-1012-disk-1' created for node 'pve1-3'.
[MainWorkerPool-2] INFO  LINSTOR/Satellite - Resource 'vm-187-disk-4' created for node 'pve1-1'.
[MainWorkerPool-2] INFO  LINSTOR/Satellite - Resource 'vm-187-disk-4' created for node 'pve1-2'.
[MainWorkerPool-2] INFO  LINSTOR/Satellite - Resource 'vm-187-disk-4' created for node 'pve1-3'.
[MainWorkerPool-2] ERROR LINSTOR/Satellite - String index out of range: -3 [Report number 5BC5EE87-83FDA-000000]
[StltWorkerPool_0002] ERROR LINSTOR/Satellite - String index out of range: -3 [Report number 5BC5EE87-83FDA-000001]
[StltWorkerPool_0003] ERROR LINSTOR/Satellite - String index out of range: -3 [Report number 5BC5EE87-83FDA-000002]
[StltWorkerPool_0001] ERROR LINSTOR/Satellite - String index out of range: -3 [Report number 5BC5EE87-83FDA-000003]
[StltWorkerPool_0000] ERROR LINSTOR/Satellite - String index out of range: -3 [Report number 5BC5EE87-83FDA-000004]
[MainWorkerPool-3] DEBUG reactor.core.publisher.Operators - onNextDropped: [2,FluxDefer]

ErrorReport-5BC5EE87-83FDA-000004.log
ErrorReport-5BC5EE87-83FDA-000003.log
ErrorReport-5BC5EE87-83FDA-000002.log
ErrorReport-5BC5EE87-83FDA-000001.log
ErrorReport-5BC5EE87-83FDA-000000.log

Upgrade from DRBD Manage to LINSTOR

I do understand that the initial set-up for LINSTOR is more difficult, but it's the successor to DRBD Manage and is in every way better.

Is it possible to move from drbdmanage to LINSTOR without losing already configured resources?
I have a two-node Kubernetes setup with drbd-flex-provision on both nodes and would love to switch from drbdmanage to LINSTOR.
Are there any guides on how to "upgrade" from drbdmanage to LINSTOR?

Best Regards
Andreas

Originally posted by @adoerler in https://github.com/LINBIT/drbd-flex-provision/issues/8#issuecomment-436426273

Read-only file system in Proxmox disk operations

In the Proxmox web interface, disk operations such as "Move Volume", restore from backup, clone from template, etc. fail with an error.

The new disk is created, but it is read-only.

drbdadm status

vm-102-disk-1 role:Secondary
  disk:Inconsistent
  fesxi8 role:Secondary
    peer-disk:Inconsistent

Proxmox log

Virtual Environment 5.2-8
Storage 'mirrorNFS' on node 'eserver'
restore vma archive: zcat /mnt/pve/mirrorNFS/dump/vzdump-qemu-101-2018_09_14-09_19_37.vma.gz | vma extract -v -r /var/tmp/vzdumptmp18234.fifo - /var/tmp/vzdumptmp18234
CFG: size: 369 name: qemu-server.conf
DEV: dev_id=1 size: 5371715584 devname: drive-scsi0
CTIME: Fri Sep 14 09:19:42 2018
SUCCESS:
Description:
    New resource definition 'vm-102-disk-1' created.
Details:
    Resource definition 'vm-102-disk-1' UUID is: 90591fb3-65c2-4b58-a831-9c155b765993
SUCCESS:
Description:
    Resource definition 'vm-102-disk-1' modified.
Details:
    Resource definition 'vm-102-disk-1' UUID is: 90591fb3-65c2-4b58-a831-9c155b765993
SUCCESS:
    New volume definition with number '0' of resource definition 'vm-102-disk-1' created.
SUCCESS:
Description:
    Resource 'vm-102-disk-1' successfully autoplaced on 2 nodes
Details:
    Used storage pool: 'drbdpool'
    Used nodes: 'eserver', 'fesxi8'
new volume ID is 'drbdpool:vm-102-disk-1'
map 'drive-scsi0' to '/dev/drbd/by-res/vm-102-disk-1/0' (write zeros = 1)

** (process:18237): ERROR **: can't open file /dev/drbd/by-res/vm-102-disk-1/0 - Could not open '/dev/drbd/by-res/vm-102-disk-1/0': Read-only file system
/bin/bash: line 1: 18236 Broken pipe             zcat /mnt/pve/mirrorNFS/dump/vzdump-qemu-101-2018_09_14-09_19_37.vma.gz
     18237 Trace/breakpoint trap   | vma extract -v -r /var/tmp/vzdumptmp18234.fifo - /var/tmp/vzdumptmp18234
SUCCESS:
Description:
    Resource definition 'vm-102-disk-1' marked for deletion.
Details:
    Resource definition 'vm-102-disk-1' UUID is: 90591fb3-65c2-4b58-a831-9c155b765993
temporary volume 'drbdpool:vm-102-disk-1' successfully removed
no lock found trying to remove 'create'  lock
TASK ERROR: command 'set -o pipefail && zcat /mnt/pve/mirrorNFS/dump/vzdump-qemu-101-2018_09_14-09_19_37.vma.gz | vma extract -v -r /var/tmp/vzdumptmp18234.fifo - /var/tmp/vzdumptmp18234' failed: exit code 133
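
For what it's worth, the drbdadm status output above already hints at the cause: a DRBD 9 device is only writable while some node holds the Primary role, and with both replicas still Inconsistent (initial sync not finished) no node can be promoted, so opening the device for writing fails with a read-only file system error. A hedged check on the node running the restore:

    drbdadm status vm-102-disk-1          # look for role:Secondary disk:Inconsistent
    # promoting by hand (diagnostic only) should fail for the same reason:
    drbdadm primary vm-102-disk-1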

Connection error on one node after upgrade (NullPtr on buildSnapshotDataMsg)

After upgrading the LINSTOR packages on my Proxmox hosts, one of the nodes is stuck in the "Connected" state and never becomes "Online":

╭───────────────────────────────────────────────────────────────────╮
┊ Node         ┊ NodeType   ┊ Addresses                 ┊ State     ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ linstor-ctl  ┊ CONTROLLER ┊ 172.17.0.100:3376 (PLAIN) ┊ Unknown   ┊
┊ mox-a        ┊ SATELLITE  ┊ 172.17.0.1:3366 (PLAIN)   ┊ Connected ┊
┊ mox-b        ┊ SATELLITE  ┊ 172.17.0.2:3366 (PLAIN)   ┊ Online    ┊
┊ mox-c        ┊ SATELLITE  ┊ 172.17.0.5:3366 (PLAIN)   ┊ Online    ┊
┊ mox-d        ┊ SATELLITE  ┊ 172.17.0.6:3366 (PLAIN)   ┊ Online    ┊
┊ serv-archive ┊ SATELLITE  ┊ 172.17.0.4:3366 (PLAIN)   ┊ Online    ┊
╰───────────────────────────────────────────────────────────────────╯

Controller shows the following error:

# cat /var/log/linstor-controller/ErrorReport-5BEEA0C5-00000-000000.log
ERROR REPORT 5BEEA0C5-00000-000000

============================================================

Application:                        LINBIT® LINSTOR
Module:                             Controller
Version:                            0.7.2
Build ID:                           1e04fe9baa0ee8d046fdc108ace9f2e460201e06
Build time:                         2018-11-15T09:53:01+00:00
Error time:                         2018-11-16 12:49:44
Node:                               serv-linstor-ctl

============================================================

Reported error:
===============

Category:                           RuntimeException
Class name:                         NullPointerException
Class canonical name:               java.lang.NullPointerException
Generated at:                       Method 'buildSnapshotDataMsg', Source file 'ProtoCtrlStltSerializerBuilder.java', Line #1040


Error context:
    Uncaught exception in processor for peer 'Node: 'mox-a''

Call backtrace:

    Method                                   Native Class:Line number
    buildSnapshotDataMsg                     N      com.linbit.linstor.api.protobuf.serializer.ProtoCtrlStltSerializerBuilder$SnapshotSerializerHelper:1040
    access$600                               N      com.linbit.linstor.api.protobuf.serializer.ProtoCtrlStltSerializerBuilder$SnapshotSerializerHelper:1009
    fullSync                                 N      com.linbit.linstor.api.protobuf.serializer.ProtoCtrlStltSerializerBuilder:424
    fullSync                                 N      com.linbit.linstor.api.protobuf.serializer.ProtoCtrlStltSerializerBuilder:71
    sendFullSyncInScope                      N      com.linbit.linstor.core.apicallhandler.controller.CtrlFullSyncApiCallHandler:129
    lambda$sendFullSync$0                    N      com.linbit.linstor.core.apicallhandler.controller.CtrlFullSyncApiCallHandler:76
    doInScope                                N      com.linbit.linstor.core.apicallhandler.ScopeRunner:100
    lambda$null$0                            N      com.linbit.linstor.core.apicallhandler.ScopeRunner:64
    call                                     N      reactor.core.publisher.MonoCallable:92
    trySubscribeScalarMap                    N      reactor.core.publisher.FluxFlatMap:126
    subscribe                                N      reactor.core.publisher.MonoFlatMapMany:46
    subscribe                                N      reactor.core.publisher.Flux:6877
    onNext                                   N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:184
    request                                  N      reactor.core.publisher.Operators$ScalarSubscription:1640
    onSubscribe                              N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:131
    subscribe                                N      reactor.core.publisher.MonoCurrentContext:35
    subscribe                                N      reactor.core.publisher.MonoFlatMapMany:49
    subscribe                                N      reactor.core.publisher.MonoIgnoreElements:37
    subscribe                                N      reactor.core.publisher.Mono:3080
    onComplete                               N      reactor.core.publisher.FluxConcatArray$ConcatArraySubscriber:208
    subscribe                                N      reactor.core.publisher.FluxConcatArray:81
    subscribe                                N      reactor.core.publisher.Flux:6877
    trySubscribeScalarMap                    N      reactor.core.publisher.FluxFlatMap:169
    subscribe                                N      reactor.core.publisher.MonoFlatMapMany:46
    subscribe                                N      reactor.core.publisher.Flux:6877
    onNext                                   N      reactor.core.publisher.FluxFlatMap$FlatMapMain:372
    onNext                                   N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner:238
    request                                  N      reactor.core.publisher.Operators$ScalarSubscription:1640
    onSubscribeInner                         N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:140
    onSubscribe                              N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner:233
    trySubscribeScalarMap                    N      reactor.core.publisher.FluxFlatMap:161
    subscribe                                N      reactor.core.publisher.MonoFlatMapMany:46
    subscribe                                N      reactor.core.publisher.Flux:6877
    onNext                                   N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:184
    request                                  N      reactor.core.publisher.Operators$ScalarSubscription:1640
    onSubscribe                              N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:131
    subscribe                                N      reactor.core.publisher.MonoCurrentContext:35
    subscribe                                N      reactor.core.publisher.MonoFlatMapMany:49
    subscribe                                N      reactor.core.publisher.FluxFlatMap:97
    subscribe                                N      reactor.core.publisher.FluxOnErrorResume:47
    subscribe                                N      reactor.core.publisher.FluxOnErrorResume:47
    subscribe                                N      reactor.core.publisher.FluxOnErrorResume:47
    subscribe                                N      reactor.core.publisher.FluxContextStart:49
    subscribe                                N      reactor.core.publisher.FluxPeekFuseable:86
    subscribe                                N      reactor.core.publisher.FluxPeekFuseable:86
    subscribe                                N      reactor.core.publisher.FluxDefer:55
    subscribe                                N      reactor.core.publisher.Flux:6877
    onNext                                   N      reactor.core.publisher.FluxFlatMap$FlatMapMain:372
    drainAsync                               N      reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:391
    drain                                    N      reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:633
    onNext                                   N      reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:238
    onNext                                   N      reactor.core.publisher.FluxHandleFuseable$HandleFuseableSubscriber:143
    drainFused                               N      reactor.core.publisher.UnicastProcessor:234
    drain                                    N      reactor.core.publisher.UnicastProcessor:267
    onNext                                   N      reactor.core.publisher.UnicastProcessor:343
    next                                     N      reactor.core.publisher.FluxCreate$IgnoreSink:573
    next                                     N      reactor.core.publisher.FluxCreate$SerializedSink:151
    processInOrder                           N      com.linbit.linstor.netcom.TcpConnectorPeer:361
    doProcessMessage                         N      com.linbit.linstor.proto.CommonMessageProcessor:224
    lambda$processMessage$4                  N      com.linbit.linstor.proto.CommonMessageProcessor:170
    onNext                                   N      reactor.core.publisher.FluxPeek$PeekSubscriber:177
    runAsync                                 N      reactor.core.publisher.FluxPublishOn$PublishOnSubscriber:396
    run                                      N      reactor.core.publisher.FluxPublishOn$PublishOnSubscriber:480
    call                                     N      reactor.core.scheduler.WorkerTask:84
    call                                     N      reactor.core.scheduler.WorkerTask:37
    run                                      N      java.util.concurrent.FutureTask:266
    access$201                               N      java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask:180
    run                                      N      java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask:293
    runWorker                                N      java.util.concurrent.ThreadPoolExecutor:1149
    run                                      N      java.util.concurrent.ThreadPoolExecutor$Worker:624
    run                                      N      java.lang.Thread:748


END OF ERROR REPORT.

Package versions match across the nodes:

=== linstor-ctl ===
ii  linstor-client                0.7.1-1                        all          Linstor client command line tool
ii  linstor-common                0.7.2-2                        all          DRBD distributed resource management utility
ii  linstor-controller            0.7.2-2                        all          DRBD distributed resource management utility
ii  python-linstor                0.7.1-1                        all          Linstor python api library

=== mox-a ===
ii  linstor-client                       0.7.1-1                                  all          Linstor client command line tool
ii  linstor-common                       0.7.2-2                                  all          DRBD distributed resource management utility
ii  linstor-proxmox                      3.0.2-1                                  all          DRBD distributed resource management utility
ii  linstor-satellite                    0.7.2-2                                  all          DRBD distributed resource management utility
ii  python-linstor                       0.7.1-1                                  all          Linstor python api library

=== mox-b ===
ii  linstor-client                       0.7.1-1                        all          Linstor client command line tool
ii  linstor-common                       0.7.2-2                        all          DRBD distributed resource management utility
ii  linstor-proxmox                      3.0.2-1                        all          DRBD distributed resource management utility
ii  linstor-satellite                    0.7.2-2                        all          DRBD distributed resource management utility
ii  python-linstor                       0.7.1-1                        all          Linstor python api library

=== mox-c ===
ii  linstor-client                       0.7.1-1                        all          Linstor client command line tool
ii  linstor-common                       0.7.2-2                        all          DRBD distributed resource management utility
ii  linstor-proxmox                      3.0.2-1                        all          DRBD distributed resource management utility
ii  linstor-satellite                    0.7.2-2                        all          DRBD distributed resource management utility
ii  python-linstor                       0.7.1-1                        all          Linstor python api library

=== mox-d ===
ii  linstor-client                       0.7.1-1                        all          Linstor client command line tool
ii  linstor-common                       0.7.2-2                        all          DRBD distributed resource management utility
ii  linstor-proxmox                      3.0.2-1                        all          DRBD distributed resource management utility
ii  linstor-satellite                    0.7.2-2                        all          DRBD distributed resource management utility
ii  python-linstor                       0.7.1-1                        all          Linstor python api library

=== serv-archive ===
ii  linstor-client                         0.7.1-1                                 all          Linstor client command line tool
ii  linstor-common                         0.7.2-2                                 all          DRBD distributed resource management utility
ii  linstor-satellite                      0.7.2-2                                 all          DRBD distributed resource management utility
ii  python-linstor                         0.7.1-1                                 all          Linstor python api library

Cannot set global disk options

Bug

According to this mail:
http://lists.linbit.com/pipermail/drbd-user/2018-July/024228.html

Global disk options can be set, but they are not applied.

Example: the net options can be set and they work fine:

max-buffers, protocol, rcvbuf-size, sndbuf-size

But the disk options, although they can be set, do not take effect afterwards:

c-fill-target, c-max-rate, c-plan-ahead, c-min-rate

Steps to reproduce

  • Run
    linstor c drbd-options --c-plan-ahead=10 --c-min-rate=$((20*1024)) --c-max-rate=$((720*1024)) --c-fill-target=$((10*1024))
    
  • Check your /var/lib/drbd.d/linstor_common.conf
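
For comparison, after running the drbd-options command above, one would expect the generated common section to carry the disk options along these lines (a sketch; the numbers are the KiB values computed by the shell arithmetic in the reproduce step):

    cat /var/lib/drbd.d/linstor_common.conf

    common
    {
        disk
        {
            c-plan-ahead     10;
            c-fill-target    10240;
            c-min-rate       20480;
            c-max-rate       737280;
        }
    }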

Cannot change listen/bind address

# /usr/share/linstor-server/bin/linstor-config set-plain-listen /var/lib/linstor/linstordb.mv.db 127.0.0.1
Exception in thread "main" picocli.CommandLine$ExecutionException: Error while calling command (com.linbit.linstor.core.LinstorConfig$CmdSetPlainListen@1c4af82c): java.util.InvalidPropertiesFormatException: org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 1; Content is not allowed in prolog.
        at picocli.CommandLine.execute(CommandLine.java:544)
        at picocli.CommandLine.access$600(CommandLine.java:143)
        at picocli.CommandLine$RunLast.handleParseResult(CommandLine.java:624)
        at picocli.CommandLine.parseWithHandlers(CommandLine.java:742)
        at picocli.CommandLine.parseWithHandler(CommandLine.java:695)
        at com.linbit.linstor.core.LinstorConfig.main(LinstorConfig.java:158)
Caused by: java.util.InvalidPropertiesFormatException: org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 1; Content is not allowed in prolog.
        at sun.util.xml.PlatformXmlPropertiesProvider.load(PlatformXmlPropertiesProvider.java:80)
        at java.util.Properties$XmlSupport.load(Properties.java:1201)
        at java.util.Properties.loadFromXML(Properties.java:881)
        at com.linbit.linstor.core.LinstorConfig.initConnectionProviderFromCfg(LinstorConfig.java:192)
        at com.linbit.linstor.core.LinstorConfig.access$200(LinstorConfig.java:26)
        at com.linbit.linstor.core.LinstorConfig$CmdSetPlainListen.call(LinstorConfig.java:141)
        at picocli.CommandLine.execute(CommandLine.java:542)
        ... 5 more
Caused by: org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 1; Content is not allowed in prolog.
        at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:203)
        at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:177)
        at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:400)
        at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:327)
        at com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(XMLScanner.java:1472)
        at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(XMLDocumentScannerImpl.java:994)
        at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:602)
        at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:505)
        at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:842)
        at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:771)
        at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
        at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:243)
        at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:339)
        at sun.util.xml.PlatformXmlPropertiesProvider.getLoadingDoc(PlatformXmlPropertiesProvider.java:106)
        at sun.util.xml.PlatformXmlPropertiesProvider.load(PlatformXmlPropertiesProvider.java:78)
        ... 11 more
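
Judging from the backtrace (Properties.loadFromXML called from initConnectionProviderFromCfg), linstor-config expects the XML database configuration file as its first argument, not the H2 database itself, so passing linstordb.mv.db makes the XML parser fail on binary content. A hedged guess at the intended invocation, assuming the default configuration path /etc/linstor/database.cfg:

    /usr/share/linstor-server/bin/linstor-config set-plain-listen /etc/linstor/database.cfg 127.0.0.1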

ZFS Thin Provisioning

Feature Request

Description

Hello, do you plan to support ZFS thin volumes?
Basically it is just a matter of specifying the -s flag when creating the ZFS volume, for example:

zfs create -s -V 500G data/thinvolume_00000

Use case

It would be nice to save space on the nodes.

Interface Changes

I think this option may be set via

linstor storage-pool set-property node1 myzfspool sparse true
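
Once implemented, a sketch of how the result could be verified (names taken from the examples above): a sparse zvol carries no refreservation, while a thick one reserves its full volsize:

    zfs get -H refreservation,volsize data/thinvolume_00000
    # a thin volume prints refreservation "none"; a thick volume reserves 500G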
