ansibleplaybookbundle / import-vm-apb
Import a Virtual Machine
License: Apache License 2.0
Every so often I see the error Token used before issued
for the PVC.
It might be related to heketi/heketi#646.
But can we do anything from this APB's side to avoid it?
It starts working fine once I delete the heketi pod (and it gets recreated).
The following scenario will remove an existing PVC because of the "does the PVC exist" check (it would do the same without the check, since it would fail when binding the PVC).
This removes the PVC from an existing VM, potentially while it is running, which cannot be good.
Passing a VMware login ID that contains @vsphere.local results in an error where virsh is not able to parse it.
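If the failure comes from how the login is embedded in the libvirt vpx connection URI, a likely workaround is to URL-encode the @ in the user name as %40 before the URI is built. A minimal sketch, using illustrative variable names rather than the APB's actual parameters:

```yaml
# Illustrative only: escape the '@' in a vCenter user name such as
# administrator@vsphere.local before it is embedded in the vpx:// URI.
- name: Escape the vCenter user name for the vpx URI
  set_fact:
    vcenter_user_escaped: "{{ vcenter_user | regex_replace('@', '%40') }}"

- name: Build the libvirt vpx connection URI
  set_fact:
    vpx_uri: "vpx://{{ vcenter_user_escaped }}@{{ vcenter_host }}/{{ vcenter_datacenter_path }}?no_verify=1"
```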
When importing from a URL we create a PVC to contain the imported disk image. In order to estimate the amount of space needed we check the content-length header of the given URL. This works for most raw files but will not work for compressed images.
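A minimal sketch of that estimation step, with illustrative variable names rather than the role's actual ones; for compressed images the Content-Length understates the uncompressed size, so this estimate alone is not enough:

```yaml
# Illustrative sketch: size the PVC from the Content-Length header of the
# image URL. The uri module exposes response headers as lower-cased
# top-level fields such as content_length.
- name: Get content metadata
  uri:
    url: "{{ image_url }}"
    method: HEAD
  register: image_head

- name: Calculate PVC size with ~10% headroom for filesystem overhead
  set_fact:
    pvc_size_bytes: "{{ ((image_head.content_length | int) * 1.1) | int }}"
```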
After filling in the APB form to import a VM, no VM-related pod details are shown in the Provisioned Services UI. When importing multiple VMs, one simply cannot tell which pod belongs to which VM in order to check its logs.
Create a virtual machine from a downloaded disk image
Import a virtual machine from VMware vCenter
In the import VM form, there should be an option to add labels for that VM.
Currently we are supposed to be staying on CDI version 0.5, which looks for annotations without the cdi prefix, here: https://github.com/ansibleplaybookbundle/import-vm-apb/blob/master/roles/import-from-url/templates/cdi-pvc.yml#L6
The changes in kubevirt/containerized-data-importer#210 are correct, but there should also be a separate branch that remains compatible with CDI 0.5.
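For reference, the difference is only the annotation prefix on the import PVC; the endpoint URL below is illustrative:

```yaml
# CDI 0.5 style (no cdi. prefix), as used by the current cdi-pvc.yml template:
metadata:
  annotations:
    kubevirt.io/storage.import.endpoint: "https://example.com/disk.img"
---
# Newer CDI style, after kubevirt/containerized-data-importer#210:
metadata:
  annotations:
    cdi.kubevirt.io/storage.import.endpoint: "https://example.com/disk.img"
```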
Found two issues while running a v2v import:
1.
The PVCs were bound:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
temp Bound pvc-df95f098-a14f-11e8-8e1d-fa163eda3874 2Gi RWO glusterfs-storage 54m
test-v2v-rhel-7-2-automation-vmware-disk-01 Bound pvc-df928f86-a14f-11e8-8e1d-fa163eda3874 8Gi RWO glusterfs-storage 54m
but the v2v pods gave an error:
v2v-test-v2v-rhel-7-2-automation-vmware-7nphq 0/1 Error 0 52m
v2v-test-v2v-rhel-7-2-automation-vmware-nw682 0/1 Error 0 55m
In the pod logs, I see:
qemu-system-x86_64: -drive file=/usr/lib64/guestfs/appliance/root,snapshot=on,id=appliance,cache=unsafe,if=none,format=raw: Could not read L1 table: No such file or directory
libguestfs: child_cleanup: 0x116bcf0: child process died
libguestfs: sending SIGTERM to process 30
libguestfs: trace: v2v: launch = -1 (error)
2.
Even though no VM object was created and the pod errored, the APB showed the provision as successful in the console UI.
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
When following the "Windows Virtual Machine" APB wizard from the catalog, it fails to generate the VM from the template with the following error (see the template sketch after the environment details below):
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "oc process -f /tmp/vmtemplate.yaml -pNAME=win2012 -pMEMORY=4096Mi -pCPU_CORES=2 -pPVC_NAME=disk-windows > /tmp/vm.yaml", "delta": "0:00:00.510032", "end": "2018-08-07 12:16:21.193470", "msg": "non-zero return code", "rc": 1, "start": "2018-08-07 12:16:20.683438", "stderr": "error: unknown parameter name "PVC_NAME"", "stderr_lines": ["error: unknown parameter name "PVC_NAME""], "stdout": "", "stdout_lines": []}
What you expected to happen:
VM is created successfully
How to reproduce it (as minimally and precisely as possible):
Follow the "Windows Virtual Machine" APB wizard from the catalog and input an existing PVC name under the "Pvc to use" field.
Environment:
KubeVirt version (use virtctl version):
Client Version: version.Info{GitVersion:"v0.7.0", GitCommit:"b5b91243f540739eb5db61af89b2f1e5ba449dfa", GitTreeState:"clean", BuildDate:"2018-07-04T14:16:40Z", GoVersion:"go1.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: &version.Info{GitVersion:"v0.7.0", GitCommit:"b5b91243f540739eb5db61af89b2f1e5ba449dfa", GitTreeState:"clean", BuildDate:"2018-07-04T14:16:40Z", GoVersion:"go1.10", Compiler:"gc", Platform:"linux/amd64"}
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.0+b81c8f8", GitCommit:"b81c8f8", GitTreeState:"clean", BuildDate:"2018-07-10T22:32:34Z", GoVersion:"go1.9.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.0+b81c8f8", GitCommit:"b81c8f8", GitTreeState:"clean", BuildDate:"2018-07-10T22:32:34Z", GoVersion:"go1.9.4", Compiler:"gc", Platform:"linux/amd64"}
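The "unknown parameter name PVC_NAME" error suggests that the template written to /tmp/vmtemplate.yaml does not declare PVC_NAME in its parameter list. A hedged sketch of the parameters that oc process call expects the template to declare (names taken from the failing command, descriptions illustrative):

```yaml
# Illustrative fragment of the OpenShift template's parameter list; the
# failing `oc process -pNAME=... -pMEMORY=... -pCPU_CORES=... -pPVC_NAME=...`
# call requires every one of these to be declared here.
parameters:
- name: NAME
  description: Name of the virtual machine
- name: MEMORY
  description: Amount of memory for the VM
  value: 4096Mi
- name: CPU_CORES
  description: Number of CPU cores
  value: "2"
- name: PVC_NAME
  description: Existing PVC to attach as the VM disk
```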
This issue is opened to track https://bugzilla.redhat.com/1584172
In the current APB, when importing a VM via URL, the form shows two options: ovm and template. With the latest changes in KubeVirt, ovm should be renamed to virtual machine.
Logs from the provisioning pod:
[cloud-user@cnv-executor-vatsal-master1 ~]$ oc logs -n dockerhub-import-vm-apb-prov-zbhhx bundle-42d8e21e-5f0b-4286-a5b4-1cae590f0e78
DEPRECATED: APB playbooks should be stored at /opt/apb/project
cp: omitting directory ‘/opt/apb/actions/roles’
cp: omitting directory ‘/opt/apb/actions/templates’
ERROR! the role 'import-from-url' was not found in /opt/apb/project/roles:/etc/ansible/roles:/opt/ansible/roles:/opt/apb/project
The error appears to have been in '/opt/apb/project/provision.yml': line 9, column 5, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- role: ansibleplaybookbundle.asb-modules
- role: import-from-url
^ here
Logs from the deprovision pod:
[cloud-user@cnv-executor-vatsal-master1 ~]$ oc logs -n dockerhub-import-vm-apb-depr-qmxs2 bundle-a41816a6-3f18-4614-820d-db1e9bf34161
DEPRECATED: APB playbooks should be stored at /opt/apb/project
cp: omitting directory ‘/opt/apb/actions/roles’
cp: omitting directory ‘/opt/apb/actions/templates’
ERROR! the role 'import-from-url' was not found in /opt/apb/project/roles:/etc/ansible/roles:/opt/ansible/roles:/opt/apb/project
The error appears to have been in '/opt/apb/project/deprovision.yml': line 9, column 5, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- role: ansibleplaybookbundle.asb-modules
- role: import-from-url
^ here
Kubernetes only requires VM names to be unique within a namespace; similarly, in VMware names are unique within a folder, but not across the cluster or data center. So if someone brings multiple VMs with the same name into CNV, the import will fail on the name collision.
Suggestion: can we append the UUID of the VM being imported to the VM name? UUIDs are unique across VMware. A sketch of that follows.
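A minimal sketch of that suggestion, with illustrative variable names rather than the APB's actual ones:

```yaml
# Illustrative only: build a collision-free Kubernetes name by appending the
# source VM's VMware UUID. The name is lower-cased, sanitized to RFC 1123
# characters, and kept within 63 characters to stay safe for labels and pods.
- name: Build a unique target VM name
  set_fact:
    target_vm_name: "{{ ((source_vm_name | lower | regex_replace('[^a-z0-9-]', '-')) ~ '-' ~ source_vm_uuid)[:63] }}"
```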
Do we have an icon for this catalog item @xsgordon?
Seeing the log below when importing a VM image via URL:
[WARNING]: Found variable using reserved name: action
PLAY [Playbook to import the virtual machine disk] *****************************
TASK [ansible.kubernetes-modules : Install latest openshift client] ************
skipping: [localhost]
TASK [ansibleplaybookbundle.asb-modules : debug] *******************************
skipping: [localhost]
TASK [import-from-url : include] ***********************************************
included: /opt/ansible/roles/import-from-url/tasks/provision.yml for localhost
TASK [import-from-url : Change project to default] *****************************
changed: [localhost]
TASK [import-from-url : Get content metadata] **********************************
ok: [localhost]
TASK [import-from-url : Calculate PVC size] ************************************
skipping: [localhost]
TASK [import-from-url : Calculate PVC size] ************************************
ok: [localhost]
TASK [import-from-url : Build PVC] *********************************************
changed: [localhost]
TASK [import-from-url : Debug generated pvc] ***********************************
changed: [localhost]
TASK [import-from-url : Show generated pvc] ************************************
ok: [localhost] => {
"msg": "apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: \"cirros-demo-disk-01\"\n annotations:\n kubevirt.io/storage.import.endpoint: \"https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-i386-disk.img\"\nspec:\n storageClassName: kubevirt\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: \"1116691496\""
}
TASK [import-from-url : Provision PVC] *****************************************
changed: [localhost]
TASK [import-from-url : Set pvc_name] ******************************************
ok: [localhost]
TASK [import-from-url : Set disk_bus] ******************************************
ok: [localhost]
TASK [import-from-url : Set disk_size_bytes] ***********************************
ok: [localhost]
TASK [import-from-url : Build VM resources] ************************************
changed: [localhost]
TASK [import-from-url : Provision VM resources] ********************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["oc", "create", "-f", "/tmp/vm-resources.yml"], "delta": "0:00:00.438359", "end": "2018-06-11 09:45:50.614037", "msg": "non-zero return code", "rc": 1, "start": "2018-06-11 09:45:50.175678", "stderr": "Error from server (Forbidden): error when creating \"/tmp/vm-resources.yml\": offlinevirtualmachines.kubevirt.io is forbidden: User \"system:serviceaccount:dh-import-vm-apb-prov-xd64x:apb-d27f4009-f265-4797-b2a5-0d45e155dc27\" cannot create offlinevirtualmachines.kubevirt.io in the namespace \"default\": User \"system:serviceaccount:dh-import-vm-apb-prov-xd64x:apb-d27f4009-f265-4797-b2a5-0d45e155dc27\" cannot create offlinevirtualmachines.kubevirt.io in project \"default\"", "stderr_lines": ["Error from server (Forbidden): error when creating \"/tmp/vm-resources.yml\": offlinevirtualmachines.kubevirt.io is forbidden: User \"system:serviceaccount:dh-import-vm-apb-prov-xd64x:apb-d27f4009-f265-4797-b2a5-0d45e155dc27\" cannot create offlinevirtualmachines.kubevirt.io in the namespace \"default\": User \"system:serviceaccount:dh-import-vm-apb-prov-xd64x:apb-d27f4009-f265-4797-b2a5-0d45e155dc27\" cannot create offlinevirtualmachines.kubevirt.io in project \"default\""], "stdout": "", "stdout_lines": []}
PLAY RECAP *********************************************************************
localhost : ok=12 changed=5 unreachable=0 failed=1
Need to finalize the catalog name and description. @aglitke Can you add a more detailed description? Do we need separate descriptions for the upstream and downstream versions? Are we okay with "Import Virtual Machine" as the catalog item name?
@serenamarie125 FYI
The current APB only lets users import a single VM, which becomes impractical when migrating a whole cluster or multiple VMs.
ansibleplaybookbundle/apb-base#36
Changes:
playbooks need to be moved from /opt/apb/actions to /opt/apb/project
the inventory needs to be copied to /opt/apb/inventory/hosts
This issue tracks BZ 1576573, which is needed to reduce the size of the v2v-job image.
The text box for entering the Storage Class should be a dropdown that lets the user select from the available storage classes.
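Until the console can populate a fully dynamic list, the APB parameter could at least use an enum, which the web console already renders as a dropdown; the class names below are illustrative:

```yaml
# Illustrative apb.yml fragment: an enum parameter is shown as a dropdown in
# the console, though its values are fixed at build time rather than read
# from the cluster's actual StorageClasses.
- name: storage_class
  title: Storage Class
  type: enum
  enum: ['glusterfs-storage', 'kubevirt']
  default: glusterfs-storage
```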
Currently the APB doesn't show any version info in the console about which APB we are about to run.
The description should include version info.
As per the current design, we use the temp PVC for all v2v imports.
This becomes a problem when importing more than one VM at the same time (would they both use the same PVC?).
Currently there is no way for the user to specify which storage class the imported image should go into. I assume it goes into the default storage class, but since we now allow users to specify the storage class on the import-from-URL path, I think we need to add the same option to the VMware import as well. A sketch of what that PVC could look like follows.
The import-from-URL PR that implements specifying the storage class: #29
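On the VMware path this would amount to the same kind of templated storageClassName already used for URL imports; a sketch, with illustrative variable names:

```yaml
# Illustrative destination PVC for the converted disk, taking the storage
# class from a user-supplied parameter instead of the cluster default.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "{{ vm_name }}-disk-01"
spec:
  storageClassName: "{{ storage_class }}"
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: "{{ disk_size_bytes }}"
```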
If a v2v pod goes into Error, new pods keep being created to retry. Retrying the import may be reasonable, but there should be some limit on the number of attempts; I literally see 100+ pods for a single import (see the Job sketch further below, after the events output).
All the pods seem to have similar logs:
+ echo /v2v-dst libvirt test_v2v 'vpx://[email protected]/Folder1/Folder2/Compute3/Folder4/Cluster5/10.35.92.10?no_verify=1' linux ovm '${size}]'
+ DSTD=/v2v-dst
+ SRCTYPE=libvirt
+ SRC=test_v2v
+ SRCURI='vpx://[email protected]/Folder1/Folder2/Compute3/Folder4/Cluster5/10.35.92.10?no_verify=1'
+ OS=linux
+ TYPE=ovm
+ PVC_SIZE='${size}]'
/v2v-dst libvirt test_v2v vpx://[email protected]/Folder1/Folder2/Compute3/Folder4/Cluster5/10.35.92.10?no_verify=1 linux ovm ${size}]
++ echo Heslo123
+ SRCPASS=Heslo123
++++ readlink -f /v2v.d/job
+++ dirname /v2v.d/job
++ readlink -f /v2v.d/..
+ BASEDIR=/
+ main
+ [[ -n /v2v-dst ]]
+ [[ -n libvirt ]]
+ [[ -n test_v2v ]]
+ [[ libvirt = \l\i\b\v\i\r\t ]]
+ [[ -z vpx://[email protected]/Folder1/Folder2/Compute3/Folder4/Cluster5/10.35.92.10?no_verify=1 ]]
+ [[ libvirt = \l\i\b\v\i\r\t ]]
+ [[ test_v2v =~ ^vpx ]]
+ transformVM libvirt test_v2v 'vpx://[email protected]/Folder1/Folder2/Compute3/Folder4/Cluster5/10.35.92.10?no_verify=1' Heslo123 linux ovm
+ local SRCTYPE=libvirt
+ local SRC=test_v2v
+ local 'SRCURI=vpx://[email protected]/Folder1/Folder2/Compute3/Folder4/Cluster5/10.35.92.10?no_verify=1'
+ local SRCPASS=Heslo123
+ local OS=linux
+ local TYPE=ovm
++ basename test_v2v
+ local WD=test_v2v.d
+ echo ' Converting source: test_v2v'
+ mkdir -p test_v2v.d
Converting source: test_v2v
+ ls -shal test_v2v.d
total 0
0 drwxr-xr-x. 2 root root 6 Jun 6 06:08 .
0 drwxr-xr-x. 1 root root 24 Jun 6 06:08 ..
++ basename test_v2v
++ sed 's/[^a-zA-Z0-9-]/-/g'
++ tr '[:upper:]' '[:lower:]'
+ local NAME=test-v2v
+ [[ -n vpx://[email protected]/Folder1/Folder2/Compute3/Folder4/Cluster5/10.35.92.10?no_verify=1 ]]
+ SRCURI='-ic vpx://[email protected]/Folder1/Folder2/Compute3/Folder4/Cluster5/10.35.92.10?no_verify=1'
+ [[ -n SRCPASS ]]
+ echo Heslo123
+ SRCPASS='--password-file pass'
+ [[ -n 1 ]]
+ DEBUG_OPTS='-v -x'
+ virt-v2v -v -x -i libvirt --password-file pass -ic 'vpx://[email protected]/Folder1/Folder2/Compute3/Folder4/Cluster5/10.35.92.10?no_verify=1' test_v2v -o local -on test-v2v -oa sparse -of raw -os /v2v-dst --machine-readable
virt-v2v: libguestfs 1.36.13fedora=26,release=1.fc26,libvirt (x86_64)
libvirt version: 3.2.1
libguestfs: trace: set_verbose true
libguestfs: trace: set_verbose = 0
libguestfs: trace: get_backend
libguestfs: trace: get_backend = "direct"
input_libvirt_vcenter_https: source: scheme vpx server 10.35.5.21
[ 0.0] Opening the source -i libvirt -ic vpx://[email protected]/Folder1/Folder2/Compute3/Folder4/Cluster5/10.35.92.10?no_verify=1 test_v2v
libvirt xml is:
<domain type='vmware' xmlns:vmware='http://libvirt.org/schemas/domain/vmware/1.0'>
<name>test_v2v</name>
<uuid>42352ec7-97c8-12a8-a8bc-39a1cd603cd0</uuid>
<memory unit='KiB'>2097152</memory>
<currentMemory unit='KiB'>2097152</currentMemory>
<vcpu placement='static'>2</vcpu>
<cputune>
<shares>2000</shares>
</cputune>
<os>
<type arch='x86_64'>hvm</type>
</os>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<disk type='file' device='disk'>
<source file='[nsimsolo_vmware_nfs] test_v2v/test_v2v.vmdk'/>
<target dev='sda' bus='scsi'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<controller type='scsi' index='0' model='vmpvscsi'/>
<interface type='bridge'>
<mac address='00:12:34:56:78:9a'/>
<source bridge='VM Network'/>
<model type='vmxnet3'/>
</interface>
<video>
<model type='vmvga' vram='8192' primary='yes'/>
</video>
</devices>
<vmware:datacenterpath>Folder1/Folder2/Compute3</vmware:datacenterpath>
</domain>
vcenter: using <vmware:datacenterpath> from libvirt: Folder1/Folder2/Compute3
'curl' -q --max-redirs '5' --globoff --head --silent --url 'https://10.35.5.21/folder/test%5fv2v/test%5fv2v-flat.vmdk?dcPath=Folder1/Folder2/Compute3&dsName=nsimsolo%5fvmware%5fnfs' --user <hidden> --insecure
HTTP/1.1 200 OK
Date: Wed, 6 Jun 2018 06:08:36 GMT
Set-Cookie: vmware_soap_session="8d8b56f212e1a0f3531eb9dea99fbe2047c1db0c"; Path=/; HttpOnly; Secure;
Accept-Ranges: bytes
Connection: Keep-Alive
Content-Type: application/octet-stream
Content-Length: 5368709120
vcenter: json parameters: { "file.cookie": "vmware_soap_session=\"8d8b56f212e1a0f3531eb9dea99fbe2047c1db0c\"", "file.sslverify": "off", "file.driver": "https", "file.url": "https://10.35.5.21/folder/test%5fv2v/test%5fv2v-flat.vmdk?dcPath=Folder1/Folder2/Compute3&dsName=nsimsolo%5fvmware%5fnfs", "file.timeout": 2000 }
source name: test_v2v
hypervisor type: vmware
memory: 2147483648 (bytes)
nr vCPUs: 2
CPU features:
firmware: unknown
display:
video: vmvga
sound:
disks:
json: { "file.cookie": "vmware_soap_session=\"8d8b56f212e1a0f3531eb9dea99fbe2047c1db0c\"", "file.sslverify": "off", "file.driver": "https", "file.url": "https://10.35.5.21/folder/test%5fv2v/test%5fv2v-flat.vmdk?dcPath=Folder1/Folder2/Compute3&dsName=nsimsolo%5fvmware%5fnfs", "file.timeout": 2000 } (raw) [scsi]
removable media:
NICs:
Bridge "VM Network" mac: 00:12:34:56:78:9a [vmxnet3]
check_host_free_space: overlay_dir=/var/tmp free_space=1721368576
[ 6.6] Creating an overlay to protect the source from being modified
qemu-img 'create' '-q' '-f' 'qcow2' '-b' 'json: { "file.cookie": "vmware_soap_session=\"8d8b56f212e1a0f3531eb9dea99fbe2047c1db0c\"", "file.sslverify": "off", "file.driver": "https", "file.url": "https://10.35.5.21/folder/test%5fv2v/test%5fv2v-flat.vmdk?dcPath=Folder1/Folder2/Compute3&dsName=nsimsolo%5fvmware%5fnfs", "file.timeout": 2000 }' '-o' 'compat=1.1,backing_fmt=raw' '/var/tmp/v2vovl91a109.qcow2'
libguestfs: trace: set_verbose true
libguestfs: trace: set_verbose = 0
libguestfs: trace: disk_has_backing_file "/var/tmp/v2vovl91a109.qcow2"
libguestfs: command: run: qemu-img
libguestfs: command: run: \ info
libguestfs: command: run: \ --output json
libguestfs: command: run: \ /var/tmp/v2vovl91a109.qcow2
libguestfs: parse_json: qemu-img info JSON output:\n{\n "backing-filename-format": "raw",\n "virtual-size": 5368709120,\n "filename": "/var/tmp/v2vovl91a109.qcow2",\n "cluster-size": 65536,\n "format": "qcow2",\n "actual-size": 197120,\n "format-specific": {\n "type": "qcow2",\n "data": {\n "compat": "1.1",\n "lazy-refcounts": false,\n "refcount-bits": 16,\n "corrupt": false\n }\n },\n "full-backing-filename": "json: { \"file.cookie\": \"vmware_soap_session=\\\"8d8b56f212e1a0f3531eb9dea99fbe2047c1db0c\\\"\", \"file.sslverify\": \"off\", \"file.driver\": \"https\", \"file.url\": \"https://10.35.5.21/folder/test%5fv2v/test%5fv2v-flat.vmdk?dcPath=Folder1/Folder2/Compute3&dsName=nsimsolo%5fvmware%5fnfs\", \"file.timeout\": 2000 }",\n "backing-filename": "json: { \"file.cookie\": \"vmware_soap_session=\\\"8d8b56f212e1a0f3531eb9dea99fbe2047c1db0c\\\"\", \"file.sslverify\": \"off\", \"file.driver\": \"https\", \"file.url\": \"https://10.35.5.21/folder/test%5fv2v/test%5fv2v-flat.vmdk?dcPath=Folder1/Folder2/Compute3&dsName=nsimsolo%5fvmware%5fnfs\", \"file.timeout\": 2000 }",\n "dirty-flag": false\n}\n\n
libguestfs: trace: disk_has_backing_file = 1
libguestfs: trace: set_verbose true
libguestfs: trace: set_verbose = 0
libguestfs: trace: disk_virtual_size "/var/tmp/v2vovl91a109.qcow2"
libguestfs: command: run: qemu-img
libguestfs: command: run: \ info
libguestfs: command: run: \ --output json
libguestfs: command: run: \ /var/tmp/v2vovl91a109.qcow2
libguestfs: parse_json: qemu-img info JSON output:\n{\n "backing-filename-format": "raw",\n "virtual-size": 5368709120,\n "filename": "/var/tmp/v2vovl91a109.qcow2",\n "cluster-size": 65536,\n "format": "qcow2",\n "actual-size": 197120,\n "format-specific": {\n "type": "qcow2",\n "data": {\n "compat": "1.1",\n "lazy-refcounts": false,\n "refcount-bits": 16,\n "corrupt": false\n }\n },\n "full-backing-filename": "json: { \"file.cookie\": \"vmware_soap_session=\\\"8d8b56f212e1a0f3531eb9dea99fbe2047c1db0c\\\"\", \"file.sslverify\": \"off\", \"file.driver\": \"https\", \"file.url\": \"https://10.35.5.21/folder/test%5fv2v/test%5fv2v-flat.vmdk?dcPath=Folder1/Folder2/Compute3&dsName=nsimsolo%5fvmware%5fnfs\", \"file.timeout\": 2000 }",\n "backing-filename": "json: { \"file.cookie\": \"vmware_soap_session=\\\"8d8b56f212e1a0f3531eb9dea99fbe2047c1db0c\\\"\", \"file.sslverify\": \"off\", \"file.driver\": \"https\", \"file.url\": \"https://10.35.5.21/folder/test%5fv2v/test%5fv2v-flat.vmdk?dcPath=Folder1/Folder2/Compute3&dsName=nsimsolo%5fvmware%5fnfs\", \"file.timeout\": 2000 }",\n "dirty-flag": false\n}\n\n
libguestfs: trace: disk_virtual_size = 5368709120
[ 9.1] Initializing the target -o local -os /v2v-dst
[ 9.1] Opening the overlay
libguestfs: trace: set_verbose true
libguestfs: trace: set_verbose = 0
libguestfs: trace: set_identifier "v2v"
libguestfs: trace: v2v: set_identifier = 0
libguestfs: trace: v2v: get_memsize
libguestfs: trace: v2v: get_memsize = 500
libguestfs: trace: v2v: set_memsize 2000
libguestfs: trace: v2v: set_memsize = 0
libguestfs: trace: v2v: set_network true
libguestfs: trace: v2v: set_network = 0
libguestfs: trace: v2v: add_drive "/var/tmp/v2vovl91a109.qcow2" "format:qcow2" "cachemode:unsafe" "discard:besteffort" "copyonread:true"
libguestfs: trace: v2v: add_drive = 0
libguestfs: trace: v2v: launch
libguestfs: trace: v2v: get_tmpdir
libguestfs: trace: v2v: get_tmpdir = "/tmp"
libguestfs: trace: v2v: version
libguestfs: trace: v2v: version = <struct guestfs_version = major: 1, minor: 36, release: 13, extra: fedora=26,release=1.fc26,libvirt, >
libguestfs: trace: v2v: get_backend
libguestfs: trace: v2v: get_backend = "direct"
libguestfs: launch: program=virt-v2v
libguestfs: launch: identifier=v2v
libguestfs: launch: version=1.36.13fedora=26,release=1.fc26,libvirt
libguestfs: launch: backend registered: unix
libguestfs: launch: backend registered: uml
libguestfs: launch: backend registered: libvirt
libguestfs: launch: backend registered: direct
libguestfs: launch: backend=direct
libguestfs: launch: tmpdir=/tmp/libguestfsti5Rg6
libguestfs: launch: umask=0022
libguestfs: launch: euid=0
libguestfs: trace: v2v: get_backend_setting "force_tcg"
libguestfs: trace: v2v: get_backend_setting = NULL (error)
libguestfs: begin testing qemu features
libguestfs: trace: v2v: get_cachedir
libguestfs: trace: v2v: get_cachedir = "/var/tmp"
libguestfs: checking for previously cached test results of /usr/bin/qemu-kvm, in /var/tmp/.guestfs-0
libguestfs: loading previously cached test results
libguestfs: qemu version: 2.9
libguestfs: qemu mandatory locking: no
libguestfs: trace: v2v: get_sockdir
libguestfs: trace: v2v: get_sockdir = "/tmp"
libguestfs: finished testing qemu features
libguestfs: trace: v2v: get_backend_setting "gdb"
libguestfs: trace: v2v: get_backend_setting = NULL (error)
[00028ms] /usr/bin/qemu-kvm \
-global virtio-blk-pci.scsi=off \
-nodefconfig \
-enable-fips \
-nodefaults \
-display none \
-machine accel=kvm:tcg \
-cpu host \
-m 2000 \
-no-reboot \
-rtc driftfix=slew \
-no-hpet \
-global kvm-pit.lost_tick_policy=discard \
-kernel /usr/lib64/guestfs/appliance/kernel \
-initrd /usr/lib64/guestfs/appliance/initrd \
-object rng-random,filename=/dev/urandom,id=rng0 \
-device virtio-rng-pci,rng=rng0 \
-device virtio-scsi-pci,id=scsi \
-drive file=/var/tmp/v2vovl91a109.qcow2,cache=unsafe,discard=unmap,format=qcow2,copy-on-read=on,id=hd0,if=none \
-device scsi-hd,drive=hd0 \
-drive file=/usr/lib64/guestfs/appliance/root,snapshot=on,id=appliance,cache=unsafe,if=none,format=raw \
-device scsi-hd,drive=appliance \
-device virtio-serial-pci \
-serial stdio \
-device sga \
-chardev socket,path=/tmp/libguestfsqrHI5b/guestfsd.sock,id=channel0 \
-device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
-netdev user,id=usernet,net=169.254.0.0/16 \
-device virtio-net-pci,netdev=usernet \
-append 'panic=1 console=ttyS0 edd=off udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable 8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1 guestfs_network=1 TERM=linux guestfs_identifier=v2v'
qemu-system-x86_64: -drive file=/usr/lib64/guestfs/appliance/root,snapshot=on,id=appliance,cache=unsafe,if=none,format=raw: Could not read L1 table: No such file or directory
libguestfs: child_cleanup: 0x1310c10: child process died
libguestfs: sending SIGTERM to process 27
libguestfs: trace: v2v: launch = -1 (error)
virt-v2v: error: libguestfs error: guestfs_launch failed, see earlier error
messages
rm -rf '/var/tmp/null.pnlXt0'
libguestfs: trace: v2v: close
libguestfs: closing guestfs handle 0x1310c10 (state 0)
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsti5Rg6
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsqrHI5b
libguestfs: trace: close
libguestfs: closing guestfs handle 0x13108e0 (state 0)
libguestfs: trace: close
libguestfs: closing guestfs handle 0x13104d0 (state 0)
libguestfs: trace: close
libguestfs: closing guestfs handle 0x121fc70 (state 0)
And once a pod goes into Error, I see a new pod in ContainerCreating:
NAME READY STATUS RESTARTS AGE
docker-registry-1-gnhbf 1/1 Running 0 6m
registry-console-1-q22b5 1/1 Running 0 6m
router-1-j6px4 1/1 Running 0 6m
router-1-t8qkq 1/1 Running 0 6m
router-1-tblpm 1/1 Running 0 6m
v2v-test-v2v-2dp8w 0/1 Error 0 2m
v2v-test-v2v-5942d 0/1 Error 0 3m
v2v-test-v2v-5lb98 0/1 Error 0 3m
v2v-test-v2v-6c65f 0/1 Error 0 5m
v2v-test-v2v-6pn7z 0/1 Error 0 5m
v2v-test-v2v-7xq8w 0/1 Error 0 47s
v2v-test-v2v-9f9c4 0/1 Terminating 0 3s
v2v-test-v2v-bv5zt 0/1 Error 0 3m
v2v-test-v2v-cqdx7 0/1 Error 0 2m
v2v-test-v2v-d8x7z 0/1 Error 0 6m
v2v-test-v2v-dw2h6 0/1 Error 0 4m
v2v-test-v2v-k4cw9 0/1 Error 0 2m
v2v-test-v2v-k8gd9 0/1 Error 0 1m
v2v-test-v2v-lkmbq 0/1 Error 0 6m
v2v-test-v2v-m8pgn 0/1 ContainerCreating 0 3s
v2v-test-v2v-mrd6r 0/1 Error 0 1m
v2v-test-v2v-n6kjg 0/1 Error 0 33s
v2v-test-v2v-nj4ch 0/1 Error 0 5m
v2v-test-v2v-pntzz 0/1 Error 0 1m
v2v-test-v2v-ptd65 0/1 Error 0 1m
v2v-test-v2v-pzsbk 0/1 Error 0 5m
v2v-test-v2v-q7xmp 0/1 Error 0 4m
v2v-test-v2v-qsrgn 0/1 Error 0 1m
v2v-test-v2v-sfsr4 0/1 Error 0 4m
v2v-test-v2v-t59vm 0/1 Error 0 3m
v2v-test-v2v-t9xfk 0/1 Error 0 4m
v2v-test-v2v-txchz 0/1 Error 0 6m
v2v-test-v2v-vkbwh 0/1 Error 0 19s
v2v-test-v2v-vxq6t 0/1 Error 0 2m
v2v-test-v2v-vzgf5 0/1 Error 0 6m
v2v-test-v2v-xs6xh 0/1 Error 0 5m
and the events show:
1h 1h 1 v2v-test-v2v-zfjmz.1535794a26e24bae Pod Warning FailedMount kubelet, cnv-executor-vatsal-master1.example.com Unable to mount volumes for pod "v2v-test-v2v-zfjmz_default(bb2de02c-6945-11e8-b102-fa163ee75a3c)": timeout expired waiting for volumes to attach or mount for pod "default"/"v2v-test-v2v-zfjmz". list of unmounted volumes=[kvm volume-1 volume-2 kubevirt-privileged-token-4r52g]. list of unattached volumes=[kvm volume-1 volume-2 kubevirt-privileged-token-4r52g]
1h 1h 1 v2v-test-v2v-zgcb7.153578d84a81c5b8 Pod Normal Scheduled default-scheduler Successfully assigned v2v-test-v2v-zgcb7 to cnv-executor-vatsal-node1.example.com
1h 1h 1 v2v-test-v2v-zgcb7.153578f4ee662b90 Pod Warning FailedMount kubelet, cnv-executor-vatsal-node1.example.com Unable to mount volumes for pod "v2v-test-v2v-zgcb7_default(e10bec3d-6944-11e8-b102-fa163ee75a3c)": timeout expired waiting for volumes to attach or mount for pod "default"/"v2v-test-v2v-zgcb7". list of unmounted volumes=[kvm volume-1 volume-2 kubevirt-privileged-token-4r52g]. list of unattached volumes=[kvm volume-1 volume-2 kubevirt-privileged-token-4r52g]
1h 1h 1 v2v-test-v2v-zh5dm.15357a071e80a777 Pod Normal Scheduled default-scheduler Successfully assigned v2v-test-v2v-zh5dm to cnv-executor-vatsal-node1.example.com
1h 1h 1 v2v-test-v2v-zh5dm.15357a07c9bf41b1 Pod spec.containers{v2v} Normal Pulling kubelet, cnv-executor-vatsal-node1.example.com pulling image "quay.io/kubevirt/v2v-job"
1h 1h 1 v2v-test-v2v-zh5dm.15357a07e116c96d Pod spec.containers{v2v} Normal Pulled kubelet, cnv-executor-vatsal-node1.example.com Successfully pulled image "quay.io/kubevirt/v2v-job"
1h 1h 1 v2v-test-v2v-zh5dm.15357a07e3369c16 Pod spec.containers{v2v} Normal Created kubelet, cnv-executor-vatsal-node1.example.com Created container
1h 1h 1 v2v-test-v2v-zh5dm.15357a07e94c5aa2 Pod spec.containers{v2v} Normal Started kubelet, cnv-executor-vatsal-node1.example.com Started container
17m 17m 1 v2v-test-v2v-zj7mt.15357d0f81d6b74d Pod Normal Scheduled default-scheduler Successfully assigned v2v-test-v2v-zj7mt to cnv-executor-vatsal-master1.example.com
15m 15m 1 v2v-test-v2v-zj7mt.15357d2c2659aa56 Pod Warning FailedMount kubelet, cnv-executor-vatsal-master1.example.com Unable to mount volumes for pod "v2v-test-v2v-zj7mt_default(abd6a537-694f-11e8-b102-fa163ee75a3c)": timeout expired waiting for volumes to attach or mount for pod "default"/"v2v-test-v2v-zj7mt". list of unmounted volumes=[kvm volume-1 volume-2 kubevirt-privileged-token-4r52g]. list of unattached volumes=[kvm volume-1 volume-2 kubevirt-privileged-token-4r52g]
1h 1h 1 v2v-test-v2v-zmwzt.15357a2188898501 Pod Normal Scheduled default-scheduler Successfully assigned v2v-test-v2v-zmwzt to cnv-executor-vatsal-master1.example.com
1h 1h 1 v2v-test-v2v-zmwzt.15357a3e304304c8 Pod Warning FailedMount kubelet, cnv-executor-vatsal-master1.example.com Unable to mount volumes for pod "v2v-test-v2v-zmwzt_default(2be81065-6948-11e8-b102-fa163ee75a3c)": timeout expired waiting for volumes to attach or mount for pod "default"/"v2v-test-v2v-zmwzt". list of unmounted volumes=[kvm volume-1 volume-2 kubevirt-privileged-token-4r52g]. list of unattached volumes=[kvm volume-1 volume-2 kubevirt-privileged-token-4r52g]
13m 13m 1 v2v-test-v2v-zqwc2.15357d497c3d0961 Pod Normal Scheduled default-scheduler Successfully assigned v2v-test-v2v-zqwc2 to cnv-executor-vatsal-node1.example.com
11m 11m 1 v2v-test-v2v-zqwc2.15357d661ff9feca Pod Warning FailedMount kubelet, cnv-executor-vatsal-node1.example.com Unable to mount volumes for pod "v2v-test-v2v-zqwc2_default(40438524-6950-11e8-b102-fa163ee75a3c)": timeout expired waiting for volumes to attach or mount for pod "default"/"v2v-test-v2v-zqwc2". list of unmounted volumes=[kvm volume-1 volume-2 kubevirt-privileged-token-4r52g]. list of unattached volumes=[kvm volume-1 volume-2 kubevirt-privileged-token-4r52g]
1h 1h 1 v2v-test-v2v-zrf9t.153579f010d7b6d7 Pod Normal Scheduled default-scheduler Successfully assigned v2v-test-v2v-zrf9t to cnv-executor-vatsal-master1.example.com
1h 1h 1 v2v-test-v2v-zrf9t.15357a0cb768a329 Pod Warning FailedMount kubelet, cnv-executor-vatsal-master1.example.com Unable to mount volumes for pod "v2v-test-v2v-zrf9t_default(ad44d4cd-6947-11e8-b102-fa163ee75a3c)": timeout expired waiting for volumes to attach or mount for pod "default"/"v2v-test-v2v-zrf9t". list of unmounted volumes=[kvm volume-1 volume-2 kubevirt-privileged-token-4r52g]. list of unattached volumes=[kvm volume-1 volume-2 kubevirt-privileged-token-4r52g]
To stop this continuous creation of pods, I had to delete the job.
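Assuming the v2v conversion runs as a Kubernetes Job (the v2v-test-v2v-* pod names above suggest so), a backoffLimit would cap the retries instead of requiring the job to be deleted by hand; a minimal sketch, where everything beyond the image name is an assumption:

```yaml
# Illustrative sketch: cap retries of the v2v conversion Job so a persistent
# failure leaves a handful of failed pods rather than 100+.
apiVersion: batch/v1
kind: Job
metadata:
  name: v2v-test-v2v
spec:
  backoffLimit: 4              # give up after 4 failed pods
  activeDeadlineSeconds: 7200  # and never run longer than two hours
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: v2v
        image: quay.io/kubevirt/v2v-job
```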
When importing a VM via the APB from the console, we should check whether KubeVirt is installed, and maybe require installing it first, because without it we know the import will fail. A sketch of such a pre-flight check follows.
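A minimal sketch of that check as an Ansible task in the provision role; the exact CRD name depends on the KubeVirt version (offlinevirtualmachines.kubevirt.io before the rename, virtualmachines.kubevirt.io after):

```yaml
# Illustrative pre-flight check: abort the provision early when the KubeVirt
# CRD is not present, instead of failing later when the VM object is created.
- name: Check whether KubeVirt is installed
  command: oc get crd virtualmachines.kubevirt.io
  register: kubevirt_crd
  failed_when: false
  changed_when: false

- name: Abort if KubeVirt is missing
  fail:
    msg: "KubeVirt does not appear to be installed. Install KubeVirt before importing a VM."
  when: kubevirt_crd.rc != 0
```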