
zstack's People

Contributors

ads6ads6, alanjager, alpha0312, alvin-lau, bustezero, camilesing, heathhose, hhjuliet, kefeng-wang, lemeiyu, liningone, littleya, live4thee, luchukun, majin1996, mathematrix, mingjian2049, njuguoyi, pandawuu, quarkonics, ruansteve, taogan21, winger007, youyk, zhanyonm, zqydaodao, zstack-robot, zstackio2, zsyzsyhao, zxwing


zstack's Issues

VIP deletion may fail when the VIP is associated with the EIP service, even though no EIP is actually assigned

The log is like:

2015-06-17 16:30:18,907 TRACE CloudBusImpl2 [msg received]: {"org.zstack.network.service.vip.VipDeletionReply":{"success":false,"error":{"code":"SYS.1000","description":"An internal error happened in system","details":"unhandled exception happened when calling public void org.zstack.network.service.eip.EipManagerImpl.releaseServicesOnVip(org.zstack.network.service.vip.VipInventory, org.zstack.header.core.Completion), null"},"headers":{"isReply":"true","correlationId":"e42eae30d53c4679800ba7b29a25d6b7","schema":{}},"id":"10fc850832c541e69e20d61073d0ec67","serviceId":"zstack.message.cloudbus.5e6455b975de4f5abda79cb5f221c743","creatingTime":1434529818913}}
2015-06-17 16:30:18,907 WARN VipCascadeExtension failed to delete vip[uuid:b6c6f08f37df4a29abd87d89a910ce38, ip: 10.10.4.66, name:vip-e47e95b17bdd4461836d14e3a4dba059], ErrorCode [code = SYS.1000, description = An internal error happened in system, details = unhandled exception happened when calling public void org.zstack.network.service.eip.EipManagerImpl.releaseServicesOnVip(org.zstack.network.service.vip.VipInventory, org.zstack.header.core.Completion), null]

Choice of default SNAT rule in the EIP case

In the EIP case, the VR enables a default SNAT rule for VMs that are not covered by a floating IP. However, the default SNAT rule should be disableable in some scenarios, e.g. for billing or in a private cloud.
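For illustration only (this is not zstack's actual implementation): on a Linux-based VR, a default SNAT rule is conceptually an iptables rule of this shape, and disabling it would amount to deleting or never installing the rule. The subnet, interface name, and source address below are invented.

```shell
# Hypothetical shape of a VR's default SNAT rule (all addresses invented):
# masquerade traffic from the guest subnet out of the public interface.
iptables -t nat -A POSTROUTING -s 10.0.1.0/24 -o eth0 -j SNAT --to-source 172.16.0.100

# "Disabling default SNAT" would then mean removing that rule:
iptables -t nat -D POSTROUTING -s 10.0.1.0/24 -o eth0 -j SNAT --to-source 172.16.0.100
```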

iscsiPrimaryStorage: can't determine annotations of missing type org.springframework.util.MimeType

Environment

zstack version: 0bfddb9
os: ubuntu 14.04
java: java version "1.7.0_75"

Steps

build command: mvn -DskipTests clean install

Failure

Error information is as below:
[INFO] iscsiPrimaryStorage ............................... FAILURE [0.429s]
[INFO] mediator .......................................... SKIPPED
[INFO] test .............................................. SKIPPED
[INFO] build ............................................. SKIPPED
[INFO] tool .............................................. SKIPPED
[INFO] doclet ............................................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 6:01.567s
[INFO] Finished at: Wed Oct 07 04:02:55 CST 2015
[INFO] Final Memory: 96M/684M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.codehaus.mojo:aspectj-maven-plugin:1.4:compile (default) on project iscsiPrimaryStorage: Compiler errors:
[ERROR] error can't determine annotations of missing type org.springframework.util.MimeType
[ERROR] when processing declare parents MediaType
[ERROR] when weaving intertype declarations MediaType
[ERROR] when processing compilation unit /home/shuang/zstack/zstack/plugin/iscsiPrimaryStorage/src/main/java/org/zstack/storage/primary/iscsi/IscsiBtrfsPrimaryStorageSimulator.java
[ERROR] when batch building BuildConfig[null] #Files=20 AopXmls=#0
[ERROR] [Xlint:cantFindType]
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command

[ERROR] mvn -rf :iscsiPrimaryStorage

Analysis

Adding the following content to the file zstack/zstack/plugin/iscsiPrimaryStorage/target/classes/META-INF/aop.xml makes the build succeed:

<?xml version="1.0"?>
<aspectj>
        <weaver options="-showWeaveInfo">
                <include within="org.zstack.storage.primary.iscsi"/>
        </weaver>
</aspectj>

zstack management node start failure due to rabbitmq: Attempt to use closed channel

2015-06-14 23:39:37,885 WARN DispatchQueueImpl unhandled exception happened when calling sync task[name:run-anisble-for-host-10.0.101.20, class:org.zstack.core.ansible.AnsibleFacadeImpl$1]
com.rabbitmq.client.AlreadyClosedException: clean connection shutdown; reason: Attempt to use closed channel
at com.rabbitmq.client.impl.AMQChannel.ensureIsOpen(AMQChannel.java:190) ~[amqp-client-3.2.1.jar:?]
at com.rabbitmq.client.impl.AMQChannel.transmit(AMQChannel.java:291) ~[amqp-client-3.2.1.jar:?]
at com.rabbitmq.client.impl.ChannelN.basicPublish(ChannelN.java:636) ~[amqp-client-3.2.1.jar:?]
at com.rabbitmq.client.impl.ChannelN.basicPublish(ChannelN.java:619) ~[amqp-client-3.2.1.jar:?]
at org.zstack.core.cloudbus.CloudBusImpl2$Wire.send(CloudBusImpl2.java:427) ~[core-0.7.0.jar:?]

All-in-one installer can't start the management server

......................

  1. Start ZStack Server:
  ----------------
    Start ZStack management node:successfully stopped management node
    successfully started Tomcat container; now it's waiting for the management node ready for serving APIs, which may take a few seconds
    ERROR: no management-node-ready message received within 120 seconds, please check error in log file ar/log/zstack/management-server.log
    restart zstack service .... FAILED
    FAIL

    Reason: failed to start zstack

The detailed installation log could be found in /tmp/zstack_installation.log

2015-04-23 13:26:02,059 WARN DispatchQueueImpl unhandled exception happened when calling sync task[name:run-anisble-for-host-9.110.85.55, class:org.zstack.core.ansible.AnsibleFacadeImpl$1]
com.rabbitmq.client.AlreadyClosedException: clean connection shutdown; reason: Attempt to use closed channel
at com.rabbitmq.client.impl.AMQChannel.ensureIsOpen(AMQChannel.java:190) ~[amqp-client-3.2.1.jar:?]
at com.rabbitmq.client.impl.AMQChannel.transmit(AMQChannel.java:291) ~[amqp-client-3.2.1.jar:?]
at com.rabbitmq.client.impl.ChannelN.basicPublish(ChannelN.java:636) ~[amqp-client-3.2.1.jar:?]
at com.rabbitmq.client.impl.ChannelN.basicPublish(ChannelN.java:619) ~[amqp-client-3.2.1.jar:?]
at org.zstack.core.cloudbus.CloudBusImpl2$Wire.send(CloudBusImpl2.java:427) ~[core-0.6.0.jar:?]
at org.zstack.core.cloudbus.CloudBusImpl2.reply(CloudBusImpl2.java:1576) ~[core-0.6.0.jar:?]
at org.zstack.core.ansible.AnsibleFacadeImpl$1$1.fail_aroundBody4(AnsibleFacadeImpl.java:170) ~[core-0.6.0.jar:?]
at org.zstack.core.ansible.AnsibleFacadeImpl$1$1$AjcClosure5.run(AnsibleFacadeImpl.java:1) ~[core-0.6.0.jar:?]
at org.zstack.core.aspect.CompletionSingleCallAspect.ajc$around$org_zstack_core_aspect_CompletionSingleCallAspect$2$4b6fbdf7proceed(CompletionSingleCallAspect.aj:1) ~[core-0.6.0.jar:?]
at org.zstack.core.aspect.CompletionSingleCallAspect.ajc$around$org_zstack_core_aspect_CompletionSingleCallAspect$2$4b6fbdf7(CompletionSingleCallAspect.aj:24) ~[core-0.6.0.jar:?]
at org.zstack.core.ansible.AnsibleFacadeImpl$1$1.fail(AnsibleFacadeImpl.java:168) ~[core-0.6.0.jar:?]
at org.zstack.core.aspect.AsyncSafeAspect$1.call(AsyncSafeAspect.aj:44) ~[core-0.6.0.jar:?]
at org.zstack.core.aspect.AsyncSafeAspect.ajc$around$org_zstack_core_aspect_AsyncSafeAspect$1$c4af5fac(AsyncSafeAspect.aj:104) ~[core-0.6.0.jar:?]
at org.zstack.core.ansible.AnsibleFacadeImpl$1.run(AnsibleFacadeImpl.java:125) ~[core-0.6.0.jar:?]
at org.zstack.core.ansible.AnsibleFacadeImpl$1.call(AnsibleFacadeImpl.java:161) ~[core-0.6.0.jar:?]
at org.zstack.core.thread.DispatchQueueImpl$SyncTaskFuture.run(DispatchQueueImpl.java:59) [core-0.6.0.jar:?]
at org.zstack.core.thread.DispatchQueueImpl$SyncTaskQueueWrapper$1.run(DispatchQueueImpl.java:114) [core-0.6.0.jar:?]
at org.zstack.core.thread.DispatchQueueImpl$SyncTaskQueueWrapper$1.call(DispatchQueueImpl.java:132) [core-0.6.0.jar:?]
at org.zstack.core.thread.DispatchQueueImpl$SyncTaskQueueWrapper$1.call(DispatchQueueImpl.java:1) [core-0.6.0.jar:?]
at org.zstack.core.thread.ThreadFacadeImpl$Worker.call(ThreadFacadeImpl.java:111) [core-0.6.0.jar:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_45]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178) [?:1.7.0_45]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292) [?:1.7.0_45]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_45]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_45]
at java.lang.Thread.run(Thread.java:744) [?:1.7.0_45]

/etc/init.d/rabbitmq-server status
..................
{running_applications,[{rabbit,"RabbitMQ","3.3.4"},

Deleting an iSCSI volume snapshot fails

2015-06-15 08:31:44,784 DEBUG SimpleFlowChain [FlowChain: delete-snapshot-03377a3090074616a53f9da1de99e5ab] start executing flow[merge-volume-snapshots-to-volume]
2015-06-15 08:31:44,785 WARN SimpleFlowChain [FlowChain: delete-snapshot-03377a3090074616a53f9da1de99e5ab] unhandled exception when executing flow[org.zstack.storage.snapshot.VolumeSnapshotTreeBase$4], start to rollback
org.springframework.dao.InvalidDataAccessApiUsageException: id to load is required for loading; nested exception is java.lang.IllegalArgumentException: id to load is required for loading
at org.springframework.orm.jpa.EntityManagerFactoryUtils.convertJpaAccessExceptionIfPossible(EntityManagerFactoryUtils.java:384) ~[spring-orm-4.0.3.RELEASE.jar:4.0.3.RELEASE]
at org.springframework.orm.jpa.aspectj.JpaExceptionTranslatorAspect.ajc$afterThrowing$org_springframework_orm_jpa_aspectj_JpaExceptionTranslatorAspect$1$18a1ac9(JpaExceptionTranslatorAspect.aj:33) ~[spring-aspects-4.0.3.RELEASE.jar:4.0.3.RELEASE]
at org.zstack.core.db.DatabaseFacadeImpl.findByUuid_aroundBody2(DatabaseFacadeImpl.java:474) ~[core-0.7.0.jar:?]
at org.zstack.core.db.DatabaseFacadeImpl$AjcClosure3.run(DatabaseFacadeImpl.java:1) ~[core-0.7.0.jar:?]
at org.springframework.transaction.aspectj.AbstractTransactionAspect.ajc$around$org_springframework_transaction_aspectj_AbstractTransactionAspect$1$2a73e96cproceed(AbstractTransactionAspect.aj:59) ~[spring-aspects-4.0.3.RELEASE.jar:4.0.3.RELEASE]
at org.springframework.transaction.aspectj.AbstractTransactionAspect$AbstractTransactionAspect$1.proceedWithInvocation(AbstractTransactionAspect.aj:65) ~[spring-aspects-4.0.3.RELEASE.jar:4.0.3.RELEASE]
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:262) ~[spring-tx-4.0.3.RELEASE.jar:4.0.3.RELEASE]
at org.springframework.transaction.aspectj.AbstractTransactionAspect.ajc$around$org_springframework_transaction_aspectj_AbstractTransactionAspect$1$2a73e96c(AbstractTransactionAspect.aj:63) ~[spring-aspects-4.0.3.RELEASE.jar:4.0.3.RELEASE]
at org.zstack.core.db.DatabaseFacadeImpl.findByUuid(DatabaseFacadeImpl.java:473) ~[core-0.7.0.jar:?]
at org.zstack.storage.snapshot.VolumeSnapshotTreeBase$4.run(VolumeSnapshotTreeBase.java:276) ~[storage-0.7.0.jar:?]
at org.zstack.core.workflow.SimpleFlowChain.runFlow(SimpleFlowChain.java:159) [core-0.7.0.jar:?]
at org.zstack.core.workflow.SimpleFlowChain.next(SimpleFlowChain.java:304) [core-0.7.0.jar:?]
at org.zstack.storage.snapshot.VolumeSnapshotTreeBase$3$1.success_aroundBody0(VolumeSnapshotTreeBase.java:233) [storage-0.7.0.jar:?]
at org.zstack.storage.snapshot.VolumeSnapshotTreeBase$3$1$AjcClosure1.run(VolumeSnapshotTreeBase.java:1) [storage-0.7.0.jar:?]
at org.zstack.core.aspect.CompletionSingleCallAspect.ajc$around$org_zstack_core_aspect_CompletionSingleCallAspect$1$cecd1872proceed(CompletionSingleCallAspect.aj:1) [core-0.7.0.jar:?]
at org.zstack.core.aspect.CompletionSingleCallAspect.ajc$around$org_zstack_core_aspect_CompletionSingleCallAspect$1$cecd1872(CompletionSingleCallAspect.aj:15) [core-0.7.0.jar:?]
at org.zstack.storage.snapshot.VolumeSnapshotTreeBase$3$1.success_aroundBody2(VolumeSnapshotTreeBase.java:232) [storage-0.7.0.jar:?]
at org.zstack.storage.snapshot.VolumeSnapshotTreeBase$3$1$AjcClosure3.run(VolumeSnapshotTreeBase.java:1) [storage-0.7.0.jar:?]
at org.zstack.core.aspect.AsyncBackupAspect.ajc$around$org_zstack_core_aspect_AsyncBackupAspect$1$cecd1872proceed(AsyncBackupAspect.aj:1) [core-0.7.0.jar:?]
at org.zstack.core.aspect.AsyncBackupAspect.ajc$around$org_zstack_core_aspect_AsyncBackupAspect$1$cecd1872(AsyncBackupAspect.aj:86) [core-0.7.0.jar:?]
at org.zstack.storage.snapshot.VolumeSnapshotTreeBase$3$1.success(VolumeSnapshotTreeBase.java:232) [storage-0.7.0.jar:?]
at org.zstack.storage.snapshot.VolumeSnapshotTreeBase$13.run_aroundBody0(VolumeSnapshotTreeBase.java:646) [storage-0.7.0.jar:?]
at org.zstack.storage.snapshot.VolumeSnapshotTreeBase$13$AjcClosure1.run(VolumeSnapshotTreeBase.java:1) [storage-0.7.0.jar:?]
at org.zstack.core.aspect.AsyncBackupAspect.ajc$around$org_zstack_core_aspect_AsyncBackupAspect$6$545faac6proceed(AsyncBackupAspect.aj:1) [core-0.7.0.jar:?]
at org.zstack.core.aspect.AsyncBackupAspect.ajc$around$org_zstack_core_aspect_AsyncBackupAspect$6$545faac6(AsyncBackupAspect.aj:126) [core-0.7.0.jar:?]
at org.zstack.storage.snapshot.VolumeSnapshotTreeBase$13.run(VolumeSnapshotTreeBase.java:627) [storage-0.7.0.jar:?]
at org.zstack.core.cloudbus.CloudBusImpl2$4.ack(CloudBusImpl2.java:1328) [core-0.7.0.jar:?]
at org.zstack.core.cloudbus.CloudBusImpl2$1.handle_aroundBody0(CloudBusImpl2.java:317) [core-0.7.0.jar:?]

[0.7-preview] Can not create guest vm from ISO image with iSCSI on Btrfs

I just tested the zstack 0.7-preview version. Creating a guest VM from an ISO image fails:

{"org.zstack.header.vm.APICreateVmInstanceMsg":{"name":"VM1","description":null,"instanceOfferingUuid":"8dc33e6ae39a45f3968b3170185cfea5","imageUuid":"50952352b1fe483b9b9ad07370739da2","l3NetworkUuids":["eeb9feff031243f2ae4928c3f3882ad6"],"rootDiskOfferingUuid":"aa995c8e171b4e39aa3e7b7b85ccd18e","dataDiskOfferingUuids":[],"zoneUuid":null,"clusterUuid":null,"hostUuid":null,"resourceUuid":"b1d10ea8b02b40818686768abe42e196","defaultL3NetworkUuid":"eeb9feff031243f2ae4928c3f3882ad6","systemTags":["hostname::vm1"],"session":{"uuid":"b3000cee427549d494ed0b706a3ea219"}}}

The result is:
{"org.zstack.header.vm.APICreateVmInstanceEvent":{"success":false,"apiId":"1394b85890964031aca6d7e6a084b8e5","headers":{"schema":{}},"error":{"code":"SYS.1006","cause":{"code":"SYS.1000","description":"An internal error happened in system","details":"unhandled exception happened when calling private void org.zstack.kvm.KVMHost.startVm(org.zstack.header.vm.VmInstanceSpec, org.zstack.header.message.NeedReplyMessage, org.zstack.header.core.NoErrorCompletion), null"},"description":"An operation failed","details":"unhandled exception happened when calling private void org.zstack.kvm.KVMHost.startVm(org.zstack.header.vm.VmInstanceSpec, org.zstack.header.message.NeedReplyMessage, org.zstack.header.core.NoErrorCompletion), null"},"id":"062aa8552c7b481aa088cb33d6369aeb","creatingTime":1430834985652}}

NPE when allocating host capacity if hosts are in Connecting status

2015-04-24 00:24:46,305 DEBUG HostAllocatorChain [Host Allocation]: flow[org.zstack.compute.allocator.AttachedL2NetworkAllocatorFlow] successfully found 2 candidate hosts for vm[uuid:86c683c84ebe4f0f9a310ec86a001523, name:test_vm_default_name]
2015-04-24 00:24:46,306 WARN HostAllocatorChain unhandled throwable
java.lang.NullPointerException
at org.zstack.compute.allocator.HostCapacityAllocatorFlow.allocate(HostCapacityAllocatorFlow.java:43) ~[compute-0.6.0.jar:?]
at org.zstack.compute.allocator.HostCapacityAllocatorFlow.allocate(HostCapacityAllocatorFlow.java:56) ~[compute-0.6.0.jar:?]
at org.zstack.compute.allocator.HostAllocatorChain.runFlow(HostAllocatorChain.java:172) [compute-0.6.0.jar:?]
at org.zstack.compute.allocator.HostAllocatorChain.next(HostAllocatorChain.java:226) [compute-0.6.0.jar:?]
at org.zstack.header.allocator.AbstractHostAllocatorFlow.next(AbstractHostAllocatorFlow.java:56) [header-0.6.0.jar:?]
at org.zstack.compute.allocator.AttachedL2NetworkAllocatorFlow.allocate(AttachedL2NetworkAllocatorFlow.java:113) [compute-0.6.0.jar:?]
at org.zstack.compute.allocator.HostAllocatorChain.runFlow(HostAllocatorChain.java:172) [compute-0.6.0.jar:?]
at org.zstack.compute.allocator.HostAllocatorChain.start(HostAllocatorChain.java:195) [compute-0.6.0.jar:?]
at org.zstack.compute.allocator.HostAllocatorChain.allocate_aroundBody2(HostAllocatorChain.java:201) [compute-0.6.0.jar:?]
at org.zstack.compute.allocator.HostAllocatorChain$AjcClosure3.run(HostAllocatorChain.java:1) [compute-0.6.0.jar:?]
at org.zstack.core.aspect.AsyncSafeAspect.ajc$around$org_zstack_core_aspect_AsyncSafeAspect$1$c4af5facproceed(AsyncSafeAspect.aj:1) [core-0.6.0.jar:?]
at org.zstack.core.aspect.AsyncSafeAspect.ajc$around$org_zstack_core_aspect_AsyncSafeAspect$1$c4af5fac(AsyncSafeAspect.aj:84) [core-0.6.0.jar:?]
at org.zstack.compute.allocator.HostAllocatorChain.allocate(HostAllocatorChain.java:198) [compute-0.6.0.jar:?]
at org.zstack.compute.allocator.HostAllocatorChain.allocate_aroundBody6(HostAllocatorChain.java:254) [compute-0.6.0.jar:?]
at org.zstack.compute.allocator.HostAllocatorChain$AjcClosure7.run(HostAllocatorChain.java:1) [compute-0.6.0.jar:?]
at org.zstack.core.aspect.AsyncSafeAspect.ajc$around$org_zstack_core_aspect_AsyncSafeAspect$1$c4af5facproceed(AsyncSafeAspect.aj:1) [core-0.6.0.jar:?]
at org.zstack.core.aspect.AsyncSafeAspect.ajc$around$org_zstack_core_aspect_AsyncSafeAspect$1$c4af5fac(AsyncSafeAspect.aj:84) [core-0.6.0.jar:?]
at org.zstack.compute.allocator.HostAllocatorChain.allocate(HostAllocatorChain.java:252) [compute-0.6.0.jar:?]
at org.zstack.compute.allocator.HostAllocatorManagerImpl.handle(HostAllocatorManagerImpl.java:118) [compute-0.6.0.jar:?]
at org.zstack.compute.allocator.HostAllocatorManagerImpl.handleLocalMessage(HostAllocatorManagerImpl.java:57) [compute-0.6.0.jar:?]
at org.zstack.compute.allocator.HostAllocatorManagerImpl.handleMessage(HostAllocatorManagerImpl.java:51) [compute-0.6.0.jar:?]
at org.zstack.core.cloudbus.CloudBusImpl2$11$1$1.call(CloudBusImpl2.java:1818) [core-0.6.0.jar:?]
at org.zstack.core.cloudbus.CloudBusImpl2$11$1$1.call(CloudBusImpl2.java:1) [core-0.6.0.jar:?]
at org.zstack.core.thread.ThreadFacadeImpl$Worker.call(ThreadFacadeImpl.java:111) [core-0.6.0.jar:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_75]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178) [?:1.7.0_75]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292) [?:1.7.0_75]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_75]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_75]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_75]

zstack management node shutdown due to: web listener issued context destroy event

The latest zstack management node shuts itself down:

2015-06-14 23:50:17,366 TRACE CloudBusImpl2 [msg received]: {"org.zstack.core.ansible.RunAnsibleMsg":{"targetIp":"10.0.101.20","privateKeyFile":"/usr/local/zstack/apache-tomcat-7.0.35/webapps/zstack/WEB-INF/classes/ansible/rsaKeys/id_rsa","playBookName":"consoleproxy.yaml","arguments":{"pkg_consoleproxy":"consoleproxy-0.7.tar.gz"},"timeout":1800000,"headers":{"correlationId":"19e3c8bb2b27435dbc8f53f2beefa249","replyTo":"zstack.message.cloudbus.b006db3039d94e6eb33438ab0697cbd1","noReply":"false","schema":{}},"id":"19e3c8bb2b27435dbc8f53f2beefa249","serviceId":"ansible.b006db3039d94e6eb33438ab0697cbd1","creatingTime":1434297017362}}
2015-06-14 23:50:17,387 DEBUG AnsibleFacadeImpl start running ansible for playbook[consoleproxy.yaml]
2015-06-14 23:50:17,403 DEBUG ShellUtils exec shell command[sudo ansible-playbook /usr/local/zstack/ansible/consoleproxy.yaml -i /usr/local/zstack/ansible/hosts --private-key /usr/local/zstack/apache-tomcat-7.0.35/webapps/zstack/WEB-INF/classes/ansible/rsaKeys/id_rsa -e '{"host":"10.0.101.20","zstack_root":"/var/lib/zstack/","pypi_url":"https://pypi.mirrors.ustc.edu.cn/simple","pkg_zstacklib":"zstacklib-0.7.tar.gz","pkg_consoleproxy":"consoleproxy-0.7.tar.gz"}']
2015-06-14 23:50:17,884 WARN ComponentLoaderWebListener web listener issued context destroy event, start stropping process
2015-06-14 23:50:17,889 WARN CloudBusImpl2 cannot find endpoint for service[api.portal]
2015-06-14 23:50:17,917 DEBUG ConsistentHash after removing, consistent hash circle has 0 virtual nodes now
2015-06-14 23:50:17,917 WARN CloudBusImpl2 management node[uuid:b006db3039d94e6eb33438ab0697cbd1] becomes unavailable, reply ErrorCode [code = SYS.1012, description = The management node is unavailable, details = management node[uuid:b006db3039d94e6eb33438ab0697cbd1] is unavailable] to message[org.zstack.core.ansible.RunAnsibleMsg]. Message metadata dump: {"replyTo":"zstack.message.cloudbus.b006db3039d94e6eb33438ab0697cbd1","timeout":1800000,"msgId":"19e3c8bb2b27435dbc8f53f2beefa249","needApiEvent":false,"className":"org.zstack.core.cloudbus.CloudBusImpl2$RequestMessageMetaData","serviceId":"ansible.b006db3039d94e6eb33438ab0697cbd1","messageName":"org.zstack.core.ansible.RunAnsibleMsg"}

Ubuntu installation fails - 2 of the params to deploydb.sh are empty/blank

Trying to install on Ubuntu 14.04 and after executing:

$ sudo bash install-zstack.sh -a

The log file indicates that 2 of the parameters passed to deploydb.sh are blank (''). From looking at the script, that causes it to return a failure if ANY parameter is blank:

Deploy ZStack Database: ERROR: failed to execute shell command: sh /usr/local/zstack/apache-tomcat/webapps/zstack/WEB-INF/classes/deploydb.sh root '' 192.168.2.102 3306 ''
return code: 1
stdout: /usr/local/zstack/apache-tomcat/webapps/zstack/WEB-INF/classes/deploydb.sh root 192.168.2.102 3306

Any suggestions on what to try next?
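A minimal sketch (not the actual installer code) of the kind of up-front argument guard that would make this failure mode diagnosable: deploydb.sh could name the blank argument instead of exiting with a bare return code 1. The function name below is hypothetical.

```shell
# Hypothetical guard sketch: fail fast and name the first blank argument,
# rather than silently returning 1 somewhere inside the script.
check_args() {
    i=1
    for arg in "$@"; do
        if [ -z "$arg" ]; then
            echo "deploydb.sh: argument $i is blank" >&2
            return 1
        fi
        i=$((i + 1))
    done
    return 0
}
```

With such a guard, the failing invocation shown above (two blank '' parameters) would report exactly which positional argument was missing.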

Attaching an iSCSI volume fails after creating a snapshot

2015-06-12 00:09:09,667 TRACE CloudBusImpl2 [msg received]: {"org.zstack.header.vm.AttachDataVolumeToVmMsg":{"volume":{"uuid":"fe037288083d48dfbd11bb754c83010b","name":"data volume created by sp: dd8065b57bf3414ba525392f00592b2e","primaryStorageUuid":"49a39d25817348ebbeb756bbcc6ac16a","vmInstanceUuid":"f465d817ea2542ff8cd08b0b8e968e74","installPath":"/home/btrfs/dataVolumes/acct-36c27e8ff05c4780bf6d2fa65700f22e/vol-fe037288083d48dfbd11bb754c83010b/fe037288083d48dfbd11bb754c83010b.img","type":"Data","size":10485760,"state":"Enabled","status":"Ready","createDate":"Jun 12, 2015 12:09:05 AM","lastOpDate":"Jun 12, 2015 12:09:05 AM"},"vmInstanceUuid":"f465d817ea2542ff8cd08b0b8e968e74","timeout":1800000,"headers":{"correlationId":"2459ae77a7f04e4a8b43b218b0ff5479","replyTo":"zstack.message.cloudbus.9ba08e94954e45ad817fe81ac7674f97","noReply":"false","schema":{"org.zstack.header.volume.VolumeInventory":["volume"]}},"id":"2459ae77a7f04e4a8b43b218b0ff5479","serviceId":"vmInstance.9ba08e94954e45ad817fe81ac7674f97","creatingTime":1434038949666}}
2015-06-12 00:09:09,669 WARN AsyncSafeAspect unhandled exception happened when calling protected void org.zstack.compute.vm.VmInstanceBase.attachVolume(org.zstack.header.vm.AttachDataVolumeToVmMsg, org.zstack.header.core.NoErrorCompletion), 1
java.lang.ArrayIndexOutOfBoundsException: 1
at org.zstack.storage.primary.iscsi.IscsiVolumePath.disassemble(IscsiVolumePath.java:45) ~[iscsiPrimaryStorage-0.6.0.jar:?]
at org.zstack.storage.primary.iscsi.IscsiFileSystemPrimaryStorageTargetExtension.createTarget(IscsiFileSystemPrimaryStorageTargetExtension.java:128) ~[iscsiPrimaryStorage-0.6.0.jar:?]
at org.zstack.storage.primary.iscsi.IscsiFileSystemPrimaryStorageTargetExtension.preAttachVolume(IscsiFileSystemPrimaryStorageTargetExtension.java:189) ~[iscsiPrimaryStorage-0.6.0.jar:?]
at org.zstack.compute.vm.VmInstanceExtensionPointEmitter.preAttachVolume(VmInstanceExtensionPointEmitter.java:303) ~[compute-0.6.0.jar:?]
at org.zstack.compute.vm.VmInstanceBase.attachVolume_aroundBody10(VmInstanceBase.java:1130) ~[compute-0.6.0.jar:?]
at org.zstack.compute.vm.VmInstanceBase$AjcClosure11.run(VmInstanceBase.java:1) ~[compute-0.6.0.jar:?]
at org.zstack.core.aspect.AsyncSafeAspect.ajc$around$org_zstack_core_aspect_AsyncSafeAspect$1$c4af5facproceed(AsyncSafeAspect.aj:1) ~[core-0.6.0.jar:?]
at org.zstack.core.aspect.AsyncSafeAspect.ajc$around$org_zstack_core_aspect_AsyncSafeAspect$1$c4af5fac(AsyncSafeAspect.aj:84) [core-0.6.0.jar:?]
at org.zstack.compute.vm.VmInstanceBase.attachVolume(VmInstanceBase.java:1118) [compute-0.6.0.jar:?]
at org.zstack.compute.vm.VmInstanceBase$8.run_aroundBody0(VmInstanceBase.java:418) [compute-0.6.0.jar:?]
at org.zstack.compute.vm.VmInstanceBase$8$AjcClosure1.run(VmInstanceBase.java:1) [compute-0.6.0.jar:?]
at org.zstack.core.aspect.AsyncBackupAspect.ajc$around$org_zstack_core_aspect_AsyncBackupAspect$7$fb2da26bproceed(AsyncBackupAspect.aj:1) [core-0.6.0.jar:?]
at org.zstack.core.aspect.AsyncBackupAspect.ajc$around$org_zstack_core_aspect_AsyncBackupAspect$7$fb2da26b(AsyncBackupAspect.aj:134) [core-0.6.0.jar:?]
at org.zstack.compute.vm.VmInstanceBase$8.run(VmInstanceBase.java:417) [compute-0.6.0.jar:?]
at org.zstack.core.thread.DispatchQueueImpl$ChainFuture.run(DispatchQueueImpl.java:199) [core-0.6.0.jar:?]
at org.zstack.core.thread.DispatchQueueImpl$ChainTaskQueueWrapper$1.runQueue_aroundBody0(DispatchQueueImpl.java:271) [core-0.6.0.jar:?]
at org.zstack.core.thread.DispatchQueueImpl$ChainTaskQueueWrapper$1$AjcClosure1.run(DispatchQueueImpl.java:1) [core-0.6.0.jar:?]
at org.zstack.core.aspect.ThreadAspect.ajc$around$org_zstack_core_aspect_ThreadAspect$4$de40e327proceed(ThreadAspect.aj:1) [core-0.6.0.jar:?]
at org.zstack.core.aspect.ThreadAspect$4.call(ThreadAspect.aj:145) [core-0.6.0.jar:?]
at org.zstack.core.aspect.ThreadAspect$4.call(ThreadAspect.aj:1) [core-0.6.0.jar:?]
at org.zstack.core.thread.ThreadFacadeImpl$Worker.call(ThreadFacadeImpl.java:111) [core-0.6.0.jar:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_75]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178) [?:1.7.0_75]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292) [?:1.7.0_75]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_75]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_75]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_75]

iSCSI volume doesn't support backing up volume snapshots

2015-06-11 23:56:35,322 TRACE CloudBusImpl2 [event publish]: {"org.zstack.header.storage.snapshot.APIBackupVolumeSnapshotEvent":{"apiId":"c2d3947873b841898534e26c48d0006b","success":false,"error":{"code":"SYS.1008","description":"Unknown message, no service can deal with the message","details":"No service deals with message: {"org.zstack.header.storage.primary.BackupVolumeSnapshotFromPrimaryStorageToBackupStorageMsg"

VM with an iSCSI volume can't be migrated on CentOS 6

The failure log:

MigrateVm hostUuid=d19ed9c89e35412db974c3c04874dd87 vmInstanceUuid=2877fc87bfbb404f81519002772d58d0
2015-06-16 11:48:32,187 DEBUG [apibinding.api] async call[url: http://localhost:8080/zstack/api/, request: {"org.zstack.header.vm.APIMigrateVmMsg": {"vmInstanceUuid": "2877fc87bfbb404f81519002772d58d0", "session": {"uuid": "a1054f72e83646ce9e714ccbf04236ab"}, "hostUuid": "d19ed9c89e35412db974c3c04874dd87"}}]
2015-06-16 11:48:32,697 DEBUG [apibinding.api] async call[url: http://localhost:8080/zstack/api/, response: {"org.zstack.header.vm.APIMigrateVmEvent":{"success":false,"error":{"code":"HOST.1009","description":"Failed to migrate vm on hypervisor","details":"failed to migrate vm[uuid:2877fc87bfbb404f81519002772d58d0] from kvm host[uuid:de0e1d924837468f8f1599c62d6b44d9, ip:10.0.101.20] to dest host[ip:10.1.101.21], unable to migrate vm[uuid:2877fc87bfbb404f81519002772d58d0] to qemu+tcp://10.1.101.21/system, Failed to open file \u0027/dev/disk/by-path/ip-10.0.101.1:3260-iscsi-iqn.2015-06.org.zstack:b6471918d5ce42f6a745857bb4dcdb7e-lun-1\u0027: No such file or directory"}}}] after 500ms
API call[org.zstack.header.vm.APIMigrateVmEvent] failed because [code: HOST.1009, description: Failed to migrate vm on hypervisor, details: failed to migrate vm[uuid:2877fc87bfbb404f81519002772d58d0] from kvm host[uuid:de0e1d924837468f8f1599c62d6b44d9, ip:10.0.101.20] to dest host[ip:10.1.101.21], unable to migrate vm[uuid:2877fc87bfbb404f81519002772d58d0] to qemu+tcp://10.1.101.21/system, Failed to open file '/dev/disk/by-path/ip-10.0.101.1:3260-iscsi-iqn.2015-06.org.zstack:b6471918d5ce42f6a745857bb4dcdb7e-lun-1': No such file or directory]

Primary storage capacity allocation and accounting bug

When primary storage does not have enough space and multiple VMs are being created at the same time, some VMs are created successfully and some fail. The failure log is:

API call[org.zstack.header.vm.APICreateVmInstanceEvent] failed because [code: SYS.1006, description: An operation failed, details: after subtracting reserved capacity[1G], there is no primary storage having required size[209715200 bytes]]

The issues are:

  1. the PS capacity is not changed at this moment
  2. we can retry creating multiple VMs again; the same number of VMs succeed as the first time, and the failed VMs show the same failure log. This parallel creation pattern (a fixed number of VMs succeed and the rest fail) can be repeated many times.

The following screenshot shows that the PS capacity has not changed.

screen shot 2015-05-01 at 23 59 56

The execution script is like:

cat test.sh

!/bin/bash

zstack-cli LogInByAccount accountName=admin password=password
max_times=20
times=0
while [ $times -lt $max_times ]; do
zstack-cli CreateVmInstance name=VM+$times instanceOfferingUuid=af36372ace294399a6c56eb62af95020 imageUuid=1aa945dfdc004cdca7cb0a1ffd8b1acd l3NetworkUuids=096a7c294ec146febb1c645205ea99db &
times=$(expr $times + 1)
done

In every execution cycle, 8 VMs (each with a 200 MB root volume) are created successfully and 12 fail.
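The accounting bug is consistent with the capacity check and the subtraction not being atomic across parallel allocations. A minimal sketch of a serialized reservation, with hypothetical names (not ZStack's actual implementation):

```python
import threading

class PrimaryStorage:
    """Serialized capacity reservation sketch (hypothetical, not ZStack's code)."""

    def __init__(self, available_bytes, reserved_bytes=0):
        self.available = available_bytes
        self.reserved = reserved_bytes   # capacity held back, e.g. the 1G reserve
        self._lock = threading.Lock()

    def reserve(self, size):
        # The check and the subtraction must happen under one lock; otherwise
        # parallel requests all read the same stale capacity, a fixed number of
        # them succeed, and the stored capacity is never decremented correctly.
        with self._lock:
            if self.available - self.reserved < size:
                return False
            self.available -= size
            return True
```

With 1000 bytes available and 20 parallel reservations of 100 bytes each, exactly 10 succeed and the recorded capacity drops to 0, matching what correct accounting should show.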

iscsi volume revert snapshot failed

It looks like reverting the same volume from a snapshot twice fails.

2015-06-12 17:28:00,270 WARN VolumeSnapshotTreeBase failed to restore volume[uuid:1047d20a70fc4567b2f614ec81edb7f7] to snapshot[uuid:4bab7dfa69fd445cba53840444301c53, name:create_snapshot1], ErrorCode [code = SYS.1006, description = An operation failed, details = subvolume[/home/btrfs/rootVolumes/acct-36c27e8ff05c4780bf6d2fa65700f22e/vol-by-snapshot-1047d20a70fc4567b2f614ec81edb7f7] existing]
2015-06-12 17:28:00,270 TRACE CloudBusImpl2 [event publish]: {"org.zstack.header.storage.snapshot.APIRevertVolumeFromSnapshotEvent":{"apiId":"ed2d472c505e42c8ab51242742d15419","success":false,"error":{"code":"SYS.1006","description":"An operation failed","details":"subvolume[/home/btrfs/rootVolumes/acct-36c27e8ff05c4780bf6d2fa65700f22e/vol-by-snapshot-1047d20a70fc4567b2f614ec81edb7f7] existing"},"headers":{"schema":{}},"id":"adc22cb23ebf4b259439e23abd98a100","creatingTime":1434101280253}}
2015-06-12 17:28:00,270 TRACE CloudBusImpl2 [event received]: {"org.zstack.header.storage.snapshot.APIRevertVolumeFromSnapshotEvent":{"type":{"_name":"key.event.API.API_EVENT"},"apiId":"ed2d472c505e42c8ab51242742d15419","success":false,"error":{"code":"SYS.1006","description":"An operation failed","details":"subvolume[/home/btrfs/rootVolumes/acct-36c27e8ff05c4780bf6d2fa65700f22e/vol-by-snapshot-1047d20a70fc4567b2f614ec81edb7f7] existing"},"headers":{"schema":{}},"id":"adc22cb23ebf4b259439e23abd98a100","creatingTime":1434101280253}}
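The path in the error message (vol-by-snapshot-&lt;volume uuid&gt;) suggests the revert target subvolume is named only after the volume uuid, so a second revert tries to create a btrfs subvolume that already exists. A minimal sketch of that hypothesis, with hypothetical helper names (not ZStack's code):

```python
import uuid

def revert_path_buggy(volume_dir, volume_uuid):
    # Path derived only from the volume uuid: a second revert of the same
    # volume computes the identical path, and btrfs reports it as existing.
    return f"{volume_dir}/vol-by-snapshot-{volume_uuid}"

def revert_path_fixed(volume_dir, volume_uuid):
    # A per-operation suffix (or deleting the stale subvolume before the
    # revert) would avoid the collision.
    return f"{volume_dir}/vol-by-snapshot-{volume_uuid}-{uuid.uuid4().hex}"
```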

iscsi volume snapshot revert to volume might fail

The failure log looks like:
ApiError: API call[org.zstack.header.storage.snapshot.APIRevertVolumeFromSnapshotEvent] failed because [code: SYS.1006, description: An operation failed, details: subvolume[/home/btrfs/rootVolumes/acct-36c27e8ff05c4780bf6d2fa65700f22e/vol-0c63876f658741d4841b99bdf5493ed6-by-snapshot/snapshot-d31631687b644bb98810ca585fe39d99] existing]

Steps to reproduce:

  1. Create a data volume and attach it to a VM
  2. Create data volume snapshots sp1, sp2, and sp3
  3. Revert the data volume to sp1
  4. Create snapshots sp1.1, sp1.1.2, and sp1.1.3
  5. Revert the data volume to sp1 again
  6. Creating a new snapshot sp1.2 triggers the error

Zstack requirement to change sshd_config for PermitRootLogin to "yes" seems a security issue

During my initial attempts to install zstack I eventually found the documentation reference:

       http://zstack.org/tutorials/flat-network-ui.html

where it says:
= = = = = = = = =
Configure root user
The KVM host will need root user credentials of SSH, to allow Ansible to install necessary packages and to give the KVM agent full control of the host. As this tutorial use a single machine for both ZStack management node and KVM host, you will need to configure credentials for the root user.

CentOS:
sudo su
passwd root

Ubuntu:
You need to also enable root user in SSHD configuration.
1. sudo su
2. passwd root
3. edit /etc/ssh/sshd_config
4. comment out 'PermitRootLogin without-password'
5. add 'PermitRootLogin yes'
6. restart SSH: 'service ssh restart'
= = = = = = = = =

I know it's stated that the above is required for Ansible... but in my mind, and from everything I've read anywhere, permitting "root" login via SSH is NOT considered a security best practice.

This also seems a very odd configuration requirement for an IaaS platform, for obvious reasons.

Isn't that the whole reason "sudo" exists?

Is there no way to resolve this by keeping PermitRootLogin set to deny "root" login and using sudo instead?

Brian

unable to boot virtual router

I followed the tutorial: http://zstack.org/tutorials/ec2-ui.html
When I create a guest VM, it waits for a long time and then returns the response below:

{"org.zstack.header.vm.APICreateVmInstanceEvent":{"success":false,"apiId":"54e2cf17dd3a48d98cb0f01f43882a48","headers":{"schema":{}},"error":{"code":"SYS.1006","cause":{"code":"APPLIANCE_VM.1001","description":"Unable to start appliance VM for some reason","details":"\nssh command failed\ncommand: null\nreturn code: 1\nstdout: null\nstderr: null\nexitErrorMessage: Exhausted available authentication methods"},"description":"An operation failed","details":"\nssh command failed\ncommand: null\nreturn code: 1\nstdout: null\nstderr: null\nexitErrorMessage: Exhausted available authentication methods"},"id":"fe36effe71e84098a94935289fc72d7d","creatingTime

add host failed

I entered the correct SSH username/password to add a host, but got the error below:

REQUEST

{
  "org.zstack.kvm.APIAddKVMHostMsg": {
    "username": "root",
    "password": "kelimeng",
    "name": "host-tw7v",
    "description": null,
    "clusterUuid": "0f486da62f764c589515c5f6a8ce40a1",
    "managementIp": "9.5.127.191",
    "session": {
      "uuid": "5b5752638fad4d20b813615c69621a9c"
    }
  }
}

RESPONSE

{
"org.zstack.header.host.APIAddHostEvent": {
"success": false,
"apiId": "fcd4767ae4914b9abe65edcb3e666a09",
"headers": {
"schema": {}
},
"error": {
"code": "HOST.1000",
"cause": {
"code": "SYS.1000",
"description": "An internal error happened in system",
"details": "unhandled exception happened when calling private void org.zstack.core.ansible.AnsibleFacadeImpl.1.run(org.zstack.header.core.Completion), \nshell command[sudo ansible-playbook /usr/local/zstack/ansible/kvm.yaml -i /usr/local/zstack/ansible/hosts --private-key /usr/local/zstack/apache-tomcat-7.0.35/webapps/zstack/WEB-INF/classes/ansible/rsaKeys/id_rsa -e '{"host":"9.5.127.191","zstack_root":"/var/lib/zstack/","init":"true","hostname":"9-5-127-191.zstack.org","pkg_zstacklib":"zstacklib-0.6.tar.gz","pkg_kvmagent":"kvmagent-0.6.tar.gz"}'] failed\nret code: 3\nstderr: \nstdout: \nPLAY [9.5.127.191] ************************************************************ \n\nGATHERING FACTS *************************************************************** \nfatal: [9.5.127.191] => SSH Error: Permission denied (password,hostbased).\n while connecting to 9.5.127.191:22\nIt is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.\n\nTASK: [install dependency packages for RedHat based OS] *********************** \nFATAL: no hosts matched or all hosts have already failed -- aborting\n\n\nPLAY RECAP ******************************************************************** \n to retry, use: --limit @/home/ubuntu/kvm.yaml.retry\n\n9.5.127.191 : ok=0 changed=0 unreachable=1 failed=0 \n\n"
},
"description": "Unable to add host",
"details": "unhandled exception happened when calling private void org.zstack.core.ansible.AnsibleFacadeImpl.1.run(org.zstack.header.core.Completion), \nshell command[sudo ansible-playbook /usr/local/zstack/ansible/kvm.yaml -i /usr/local/zstack/ansible/hosts --private-key /usr/local/zstack/apache-tomcat-7.0.35/webapps/zstack/WEB-INF/classes/ansible/rsaKeys/id_rsa -e '{"host":"9.5.127.191","zstack_root":"/var/lib/zstack/","init":"true","hostname":"9-5-127-191.zstack.org","pkg_zstacklib":"zstacklib-0.6.tar.gz","pkg_kvmagent":"kvmagent-0.6.tar.gz"}'] failed\nret code: 3\nstderr: \nstdout: \nPLAY [9.5.127.191] ************************************************************ \n\nGATHERING FACTS *************************************************************** \nfatal: [9.5.127.191] => SSH Error: Permission denied (password,hostbased).\n while connecting to 9.5.127.191:22\nIt is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.\n\nTASK: [install dependency packages for RedHat based OS] *********************** \nFATAL: no hosts matched or all hosts have already failed -- aborting\n\n\nPLAY RECAP ******************************************************************** \n to retry, use: --limit @/home/ubuntu/kvm.yaml.retry\n\n9.5.127.191 : ok=0 changed=0 unreachable=1 failed=0 \n\n"
},
"id": "31334fa1e0f34f34afdefa6540d034a6",
"creatingTime": 1433764430289
}
}

backed-up iscsi volume snapshot doesn't support CreateDataVolumeFromVolumeSnapshot

This issue seems to happen only when creating a data volume from a backed-up iSCSI volume snapshot.

2015-06-13 10:26:39,758 TRACE CloudBusImpl2 [msg send]: {"org.zstack.header.message.MessageReply":{"success":false,"error":{"code":"SYS.1006","description":"An operation failed","details":"Storage volume snapshot has not supported yet"},"headers":{"isReply":"true","correlationId":"b602a785dd9642f88a313302ce9489af","schema":{}},"id":"45ea8091173b4c23882853e5fcd95f2f","serviceId":"zstack.message.cloudbus.b92a9a12332149c9a041ef40a243193f","creatingTime":1434162399757}}

UI Bug:it does not work when attach data volume to new instance

When a data volume is detached from instance1 and then attached to instance2, the operation does not work from the UI (it works with zstack-cli); the UI always attaches it back to instance1.

Also, in the UI, when attaching a volume from Instances (double-click the instance name, open the Volume tab, and select Attach Volume from the Actions menu), no data volume is listed in the dialog.

vncconsole doesn't have timeout

The VNC console URL has no timeout, and the same URL can be reused in different browsers. This is a potential security issue.

It would be better to add a timeout and allow only one connection per URL.
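The proposal (timeout plus one URL per connection) amounts to issuing one-time tokens with a TTL. A minimal sketch, with hypothetical names (not ZStack's console proxy):

```python
import secrets
import time

class ConsoleTokenStore:
    """One-time, expiring console URLs: a sketch of the proposed fix."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._tokens = {}  # token -> expiry timestamp

    def issue(self):
        # The token would be embedded in the VNC console URL handed to the user.
        token = secrets.token_urlsafe(16)
        self._tokens[token] = time.time() + self.ttl
        return token

    def redeem(self, token):
        # pop() makes the token single-use; the expiry check enforces the timeout.
        expiry = self._tokens.pop(token, None)
        return expiry is not None and time.time() <= expiry
```

A redeemed or expired token is rejected, so opening the same console URL in a second browser would fail.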

Support Ceph/RBD

Hi, developers

I am an IaaS engineer and I also focus on Ceph/RBD. Does ZStack support Ceph/RBD as backend storage? Thanks!
I have also created a QQ group: 410185063, named "ZStack Community"; anyone is welcome to join. Thanks.

Best Regards,
Star Guo

Host status tracing has a bug which might wrongly change host status from disconnected to connected

Description:

When a host crashes and reboots, some network devices (e.g., a manually created VLAN device such as eth0.10) might be lost. When ZStack then tries to reconnect the host, the reconnection fails and the host status is marked Disconnected. But when the host tracer runs, it finds the host pingable and changes the status back to Connected. VMs will then fail to be created on this host, since the network bridges were not initialized successfully.

For example:
The Reconnection Event failed:
2015-05-02 16:24:03,376 TRACE CloudBusImpl2 [event received]: {"org.zstack.header.host.APIReconnectHostEvent":{"type":{"_name":"key.event.API.API_EVENT"},"apiId":"d8cc9b03833844968983923740814226","success":false,"error":{"code":"HOST.1002","description":"Unable to reconnect host","details":"connection error for KVM host[uuid:4de0cd34fe3741ac80d38c05a9d4a7a1, ip:10.1.101.22] when calling org.zstack.kvm.KVMConnectExtensionForL2Network, because org.zstack.header.errorcode.OperationFailureException: ErrorCode [code \u003d SYS.1006, description \u003d An operation failed, details \u003d , failed to check physical network interfaces[names : eth0.10] on kvm host[uuid:4de0cd34fe3741ac80d38c05a9d4a7a1, ip:10.1.101.22]]","cause":{"code":"HOST.1002","description":"Unable to reconnect host","details":"connection error for KVM host[uuid:4de0cd34fe3741ac80d38c05a9d4a7a1, ip:10.1.101.22] when calling org.zstack.kvm.KVMConnectExtensionForL2Network, because org.zstack.header.errorcode.OperationFailureException: ErrorCode [code \u003d SYS.1006, description \u003d An operation failed, details \u003d , failed to check physical network interfaces[names : eth0.10] on kvm host[uuid:4de0cd34fe3741ac80d38c05a9d4a7a1, ip:10.1.101.22]]","cause":{"code":"HOST.1003","description":"An error happened when connecting to host","details":"connection error for KVM host[uuid:4de0cd34fe3741ac80d38c05a9d4a7a1, ip:10.1.101.22] when calling org.zstack.kvm.KVMConnectExtensionForL2Network, because org.zstack.header.errorcode.OperationFailureException: ErrorCode [code \u003d SYS.1006, description \u003d An operation failed, details \u003d , failed to check physical network interfaces[names : eth0.10] on kvm host[uuid:4de0cd34fe3741ac80d38c05a9d4a7a1, ip:10.1.101.22]]"}}},"headers":{"schema":{}},"id":"ad5bed1c47514e969ba65a1571cd4a51","creatingTime":1430555043376}}

The host tracer will wrongly change the status to Connected:
2015-05-02 16:24:06,431 TRACE CloudBusImpl2 [msg send]: {"org.zstack.header.host.ChangeHostConnectionStateMsg":{"hostUuid":"4de0cd34fe3741ac80d38c05a9d4a7a1","connectionStateEvent":"connected","timeout":1800000,"headers":{"correlationId":"7e30138cdcd94fe1b675825d80a2b36e","replyTo":"zstack.message.cloudbus.c6a0f3cab5764e12b87bdb24cdcd0f4f","noReply":"false","schema":{}},"id":"7e30138cdcd94fe1b675825d80a2b36e","serviceId":"host.c6a0f3cab5764e12b87bdb24cdcd0f4f","creatingTime":1430555046431}}

The possible fixes might be:

  1. When reconnection fails, change the host state to Disabled.
  2. When the host tracer finds the host pingable, call ReconnectHost to re-trigger the full reconnection instead of marking the host Connected directly.

But considering the environment needs manual recovery, option 1 is preferred.
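Option 2 above can be sketched as follows (hypothetical names; ZStack's actual tracer differs):

```python
class HostTracker:
    """Sketch of fix option 2: a ping alone never flips status to Connected."""

    def __init__(self, ping, reconnect):
        self.ping = ping            # returns True when the host answers ping
        self.reconnect = reconnect  # full handshake, incl. bridge/NIC checks
        self.status = "Disconnected"

    def track(self):
        if self.status == "Disconnected" and self.ping():
            # The reported bug marks the host Connected here, on ping success
            # alone. Instead, require the whole reconnection sequence to pass.
            if self.reconnect():
                self.status = "Connected"
        return self.status
```

A host that answers ping but fails the reconnection (e.g., a missing VLAN device) stays Disconnected, so VMs are not scheduled onto it.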

Add local disk support

In some cases, local disk can be a more common and lower-cost solution. I hope ZStack will support local disks, with a choice of qcow2 or raw format on the KVM hypervisor.
