
longhorn's Introduction

Longhorn

Build Status

  • Engine: Build Status Go Report Card
  • Instance Manager: Build Status Go Report Card
  • Manager: Build Status Go Report Card
  • UI: Build Status
  • Test: Build Status

Overview

Longhorn is a distributed block storage system for Kubernetes.

Longhorn is lightweight, reliable, and powerful. You can install Longhorn on an existing Kubernetes cluster with one kubectl apply command or using Helm charts. Once Longhorn is installed, it adds persistent volume support to the Kubernetes cluster.

Longhorn implements distributed block storage using containers and microservices. Longhorn creates a dedicated storage controller for each block device volume and synchronously replicates the volume across multiple replicas stored on multiple nodes. The storage controller and replicas are themselves orchestrated using Kubernetes. Here are some notable features of Longhorn:

  1. Enterprise-grade distributed storage with no single point of failure
  2. Incremental snapshot of block storage
  3. Backup to secondary storage (NFS or S3-compatible object storage) built on efficient change block detection
  4. Recurring snapshot and backup
  5. Automated non-disruptive upgrade. You can upgrade the entire Longhorn software stack without disrupting running volumes!
  6. Intuitive GUI dashboard

You can read more technical details of Longhorn here.
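
To make the per-volume controller and replica model described above more concrete, here is a minimal, hedged sketch that drives Longhorn through its Python client (the same `longhorn.Client` that appears in the test tracebacks further down this page). The endpoint URL, volume name, size, node name, and method usage follow the patterns in the Longhorn test suite and should be treated as illustrative assumptions, not a supported API contract.

```python
# Illustrative sketch (not an official example): create a replicated volume,
# attach it to a node, and take a snapshot with the Longhorn Python client.
# The URL, names, and sizes below are placeholders.
import longhorn

# The Longhorn manager API is normally reachable via the longhorn-frontend
# service inside the cluster (assumption: default install).
client = longhorn.Client(url="http://longhorn-frontend.longhorn-system/v1")

# Ask the manager for a volume with three replicas; Longhorn creates a
# dedicated engine (storage controller) for it and places the replicas on
# different nodes.
client.create_volume(name="demo-vol",
                     size=str(2 * 1024 ** 3),   # 2 GiB, expressed in bytes
                     numberOfReplicas=3)

# Attaching starts the engine on the chosen node and exposes a block device.
volume = client.by_id_volume("demo-vol")
volume.attach(hostId="worker-node-1")

# In practice you would wait for the volume to become healthy and attached
# before using it; snapshots are incremental at the block level.
volume = client.by_id_volume("demo-vol")
snapshot = volume.snapshotCreate()
print(volume.state, snapshot.name)
```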

Current status

Longhorn is beta-quality software. We appreciate your willingness to deploy Longhorn and provide feedback.

The latest release of Longhorn is v0.8.1.

Source code

Longhorn is 100% open source software. Project source code is spread across a number of repos:

  1. Longhorn engine -- Core controller/replica logic https://github.com/longhorn/longhorn-engine
  2. Longhorn manager -- Longhorn orchestration https://github.com/longhorn/longhorn-manager
  3. Longhorn UI -- Dashboard https://github.com/longhorn/longhorn-ui

Longhorn UI

Requirements

For the installation requirements, refer to the Longhorn documentation.

Install

Longhorn can be installed on a Kubernetes cluster in several ways: by applying the deployment manifest with a single kubectl apply command, by installing the Helm chart, or as an app from the Rancher catalog. Refer to the Longhorn documentation for step-by-step instructions for each method; a quick post-install check is sketched below.
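
After installation, the Longhorn components normally run as pods in the longhorn-system namespace. As a quick sanity check, the hedged sketch below lists those pods with the official Kubernetes Python client (the same kubernetes package that shows up in the test tracebacks later on this page); the namespace and the simple phase check are assumptions for a default install.

```python
# Hedged post-install check: list the Longhorn pods and report any that are
# not yet Running. Assumes a default install into the longhorn-system
# namespace and a working kubeconfig.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
core_v1 = client.CoreV1Api()

pods = core_v1.list_namespaced_pod(namespace="longhorn-system")
for pod in pods.items:
    print(f"{pod.metadata.name}: {pod.status.phase}")

not_running = [p.metadata.name for p in pods.items
               if p.status.phase != "Running"]
if not_running:
    print("Still waiting on:", ", ".join(not_running))
```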

Documentation

The official Longhorn documentation is here.

Community

Longhorn is open source software, so contributions are greatly welcome. Please read the Code of Conduct and Contributing Guidelines before contributing.

Contributing code is not the only way to contribute. We value feedback very much, and many Longhorn features originated from user feedback. If you have any feedback, feel free to file an issue and talk to the developers in the CNCF #longhorn Slack channel.

License

Copyright (c) 2014-2020 The Longhorn Authors

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Longhorn is a CNCF Sandbox Project

longhorn's People

Contributors

yasker, sheng-liang, shuo-wu, tedder, wusphinx, rbq, lucperkins, aioue, runningman84, peron, meldafrawi, ttpcodes, jaciechao, kaskavalci, catherineluse, andyjeffries, lnikell, oskapt, aspettl

Watchers

James Cloos

longhorn's Issues

Nightly test test_engine_live_upgrade failed

```
client = <longhorn.Client object at 0x7f089dc60850>
core_api = <kubernetes.client.apis.core_v1_api.CoreV1Api object at 0x7f089d8b4090>
volume_name = 'longhorn-testvol-o29sgt'

    @pytest.mark.coretest  # NOQA
    def test_engine_live_upgrade(client, core_api, volume_name):  # NOQA
>       engine_live_upgrade_test(client, core_api, volume_name)

test_engine_upgrade.py:188:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
test_engine_upgrade.py:256: in engine_live_upgrade_test
    check_volume_data(volume, data)
common.py:1533: in check_volume_data
    dev = get_volume_endpoint(volume)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

v = {'backupStatus': [], 'baseImage': '', 'conditions': {'scheduled': {'lastProbeTime': '', 'lastTransitionTime': '2020-04...77216', 'staleReplicaTimeout': 0, 'standby': False, 'state': 'attached', 'timestamp': '2020-04-02T09:40:09.756578542Z'}

    def get_volume_endpoint(v):
        engine = get_volume_engine(v)
        endpoint = engine.endpoint
>       assert endpoint != ""
E       AssertionError

common.py:1894: AssertionError
```

Nightly test test_salvage_auto_crash_replicas_long_wait failed

```
client = <longhorn.Client object at 0x7f089e08d350>
core_api = <kubernetes.client.apis.core_v1_api.CoreV1Api object at 0x7f089dc42a50>
volume_name = 'longhorn-testvol-bijl6e'
pod_make = <function pod_make.<locals>.make_pod at 0x7f089df94d40>

    def test_salvage_auto_crash_replicas_long_wait(client, core_api, volume_name, pod_make):  # NOQA
        pod_name = volume_name + "-pod"
        pv_name = volume_name + "-pv"
        pvc_name = volume_name + "-pvc"

        pod = pod_make(name=pod_name)

        pod_liveness_probe_spec = get_liveness_probe_spec(initial_delay=1,
                                                          period=1)

        pod['spec']['containers'][0]['livenessProbe'] = pod_liveness_probe_spec

        volume = create_and_check_volume(client, volume_name, num_of_replicas=2)

        create_pv_for_volume(client, core_api, volume, pv_name)
        create_pvc_for_volume(client, core_api, volume, pvc_name)
        pod['spec']['volumes'] = [create_pvc_spec(pvc_name)]
        create_and_wait_pod(core_api, pod)

        test_data = generate_random_data(VOLUME_RWTEST_SIZE)

        write_pod_volume_data(core_api, pod_name, test_data)

        stream(core_api.connect_get_namespaced_pod_exec,
               pod_name,
               'default',
               command="sync",
               stderr=True, stdin=True,
               stdout=True, tty=True,
               _preload_content=False)

        volume = client.by_id_volume(volume_name)
        replica0 = volume.replicas[0]

        crash_replica_processes(client, core_api, volume_name, [replica0])

        time.sleep(60)

        volume = client.by_id_volume(volume_name)

        replicas = []
        for r in volume.replicas:
            if r.running is True:
                replicas.append(r)

        crash_replica_processes(client, core_api, volume_name, replicas)

        volume = common.wait_for_volume_faulted(client, volume_name)

        volume = common.wait_for_volume_detached_unknown(client, volume_name)
        assert len(volume.replicas) == 3

        volume = wait_for_volume_healthy(client, volume_name)

        wait_for_pod_remount(core_api, pod_name)

        resp = read_volume_data(core_api, pod_name)

>       assert test_data == resp
E       assert 'jy5jyi4mycx6...9gexsjpe6wlbt' == "cat: can't o...utput error\n"
E       - jy5jyi4mycx6vae6qx6fxn66gxnxmk3x22wjr2814epdo8jp48gsfixg77njycuge8k8tzlep0pbta8snf9yfu6umid7go9tu4nmivtrlqrxfqa76m45lr4id1v6a4f9u65fz7rejp07t4deayvjqxa1i013j1if64gp4zk64rkyvz1co3fscsckdoixljssuuoq2196fu4ag4p16wfa8e3t2kt4kor3d399hdaaqw3fawzlescoqxhe6f2ojzu37dlykr7zs9c2rid8jjwt9o1ek6ba4hr1ixcx177ypkkw6u1brncs9e1fuhf2v70hr3k7w6zh92c0c9dq0af40ubgfkc0cm7ur7ucg1ad4lqxs5fwgoyor24oe8mqkxdl6pbzj9ie70b3pirifpw9z5ptozo6ybi75iqnoau8a0154fly7iwdxh1e55paj9gmqfd4e9me1xpuwtnfq6ccojwirkl2zktoqdmpbtyacmdzl3iv5pr9gexsjpe6wlbt
E       + cat: can't open '/data/test': Input/output error

test_ha.py:522: AssertionError
```

[BUG] Nightly Test: test_volume_basic sometimes failed

Describe the bug
The test_volume_basic test sometimes fails.

To Reproduce
Install Longhorn master
Run test_volume_basic

Expected behavior
test should pass

Log

clients = {'longhorn-tests-01': <longhorn.Client object at 0x7f3ecbba9690>, 'longhorn-tests-02': <longhorn.Client object at 0x7f3ecbc84dd0>, 'longhorn-tests-03': <longhorn.Client object at 0x7f3ecb8d8350>}
volume_name = 'longhorn-testvol-hexj3f'

    @pytest.mark.coretest   # NOQA
    def test_volume_basic(clients, volume_name):  # NOQA
        """
        Test basic volume operations:
    
        1. Check volume name and parameter
        2. Create a volume and attach to the current node, then check volume states
        3. Check soft anti-affinity rule
        4. Write then read back to check volume data
        """
>       volume_basic_test(clients, volume_name)

test_basic.py:185: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
test_basic.py:263: in volume_basic_test
    cleanup_volume(client, volume)
common.py:201: in cleanup_volume
    volume.detach()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = {'backupStatus': [], 'baseImage': '', 'conditions': {'scheduled': {'lastProbeTime': '', 'lastTransitionTime': '2020-04...77216', 'staleReplicaTimeout': 0, 'standby': False, 'state': 'detached', 'timestamp': '2020-04-13T08:48:23.845772852Z'}
k = 'detach'

    def __getattr__(self, k):
        if self._is_list() and k in LIST_METHODS:
            return getattr(self.data, k)
>       return getattr(self.__dict__, k)
E       AttributeError: 'dict' object has no attribute 'detach'

longhorn.py:134: AttributeError

Environment:

  • Longhorn version: master
  • Kubernetes version: v1.17
  • Node OS type and version: Ubuntu 18.04

Additional context
Jenkins build num: longhorn-tests/373
