
import-vm-apb's People

Contributors

aglitke, awels, pkliczewski, rawagner


import-vm-apb's Issues

[Tracker] Token used before issued

Occasionally I see the error Token used before issued for the PVC.
It might be an issue related to heketi/heketi#646,
but is there anything we can do on the APB side to avoid it?
It starts working again once I delete the heketi pod (and it gets recreated).

Importing with same name as existing PVC will remove it.

The following scenario will remove an existing PVC, due to the "does the PVC exist" check (the same thing would happen without the check, since provisioning would fail when binding the PVC).

  1. Someone has a running VM with name: test, and thus has a PVC with name test-disk-01
  2. Now I come in and try to create a VM called 'test' using the APB. I fill in all the fields and hit create.
  3. The provision step will fail (due to the above mentioned reasons), and thus the de-provision step will fire. This will attempt to delete the VM resources (and ignore not found), and then attempt to delete the PVC with the name test-disk-01, because the names match.

This removes the PVC from the existing VM, potentially while it is running, which cannot be good.
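
A possible mitigation, sketched below under the assumption that the APB labels the PVCs it creates; the label key, the `apb_instance_id` variable, and the task wording are hypothetical, not the APB's actual code. Deprovision would check the label before deleting, so a pre-existing PVC that merely shares the name is left alone.

# Hedged sketch only: tag the PVC at provision time and let deprovision
# delete it only when the tag matches this service instance.
- name: Check whether the disk PVC was created by this APB instance
  command: oc get pvc "{{ vm_name }}-disk-01" -o jsonpath='{.metadata.labels.created-by}'
  register: pvc_owner
  failed_when: false    # the PVC may not exist at all

- name: Delete the disk PVC only if this APB instance created it
  command: oc delete pvc "{{ vm_name }}-disk-01" --ignore-not-found
  when: pvc_owner.stdout == apb_instance_id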

PVC size detection does not work for compressed images

When importing from a URL we create a PVC to contain the imported disk image. To estimate the amount of space needed, we check the Content-Length header of the given URL. This works for most raw files but not for compressed images, whose uncompressed size is larger than the download size.
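
A rough sketch of one way to handle this, modelled on the existing "Get content metadata" / "Calculate PVC size" steps seen in the provision logs; `image_url` and `expansion_factor` are illustrative variable names and the default multiplier is a guess, not a measured ratio.

# Sketch, not the APB's actual tasks: read Content-Length via a HEAD request
# and pad the requested size for compressed formats (qcow2, .gz, .xz).
- name: Get content metadata for the source image
  uri:
    url: "{{ image_url }}"
    method: HEAD
  register: image_head

- name: Calculate PVC size with headroom for compressed images
  set_fact:
    pvc_size_bytes: "{{ ((image_head.content_length | int) * (expansion_factor | default(3))) | int }}"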

options in console goes away when provisioning again

After provisioning an APB (mostly when the provisioning was unsuccessful), if you go to provision the same APB again, all the options are gone; an explicit browser refresh is needed to see them again.
Not sure if this is an APB issue or something with the console.
[screenshot from 2018-08-16 18:14:08 attached]

VM Pod details in the Provisioned Services UI

After filling in the APB form to import a VM, no VM-related pod details are shown in the Provisioned Services UI. When importing multiple VMs, there is no way to tell which pod belongs to which VM in order to check its logs.

Update the Plans step

  • Plan titles should be "Import from URL" and "Import from VMware" (the "w" in "ware" should not be capitalized: VMware, not VMWare).
  • Plan descriptions should be:

Create a virtual machine from a downloaded disk image
Import a virtual machine from VMware vCenter

[v2v] Could not read L1 table: No such file or directory

Found two issues while running a v2v import.
1.
The PVCs were bound:

NAME                                          STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
temp                                          Bound     pvc-df95f098-a14f-11e8-8e1d-fa163eda3874   2Gi        RWO            glusterfs-storage   54m
test-v2v-rhel-7-2-automation-vmware-disk-01   Bound     pvc-df928f86-a14f-11e8-8e1d-fa163eda3874   8Gi        RWO            glusterfs-storage   54m

But the v2v pods failed with errors:

v2v-test-v2v-rhel-7-2-automation-vmware-7nphq   0/1       Error     0          52m
v2v-test-v2v-rhel-7-2-automation-vmware-nw682   0/1       Error     0          55m

In the pod logs, I see

qemu-system-x86_64: -drive file=/usr/lib64/guestfs/appliance/root,snapshot=on,id=appliance,cache=unsafe,if=none,format=raw: Could not read L1 table: No such file or directory
libguestfs: child_cleanup: 0x116bcf0: child process died
libguestfs: sending SIGTERM to process 30
libguestfs: trace: v2v: launch = -1 (error)

2.
Even though no VM object was created and the pod errored, the APB showed as provisioned successfully in the console UI.

"Windows Virtual Machine" APB fails with unknown parameter PVC_NAME

Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug

What happened:
When following the "Windows Virtual Machine" APB wizard from the catalog, it fails to generate the VM from the template with the following error:

fatal: [localhost]: FAILED! => {"changed": true, "cmd": "oc process -f /tmp/vmtemplate.yaml -pNAME=win2012 -pMEMORY=4096Mi -pCPU_CORES=2 -pPVC_NAME=disk-windows > /tmp/vm.yaml", "delta": "0:00:00.510032", "end": "2018-08-07 12:16:21.193470", "msg": "non-zero return code", "rc": 1, "start": "2018-08-07 12:16:20.683438", "stderr": "error: unknown parameter name "PVC_NAME"", "stderr_lines": ["error: unknown parameter name "PVC_NAME""], "stdout": "", "stdout_lines": []}

What you expected to happen:
The VM is created successfully.

How to reproduce it (as minimally and precisely as possible):
Follow the "Windows Virtual Machine" APB wizard from the catalog and input an existing PVC name under the "Pvc to use" field.

Environment:
KubeVirt version (use virtctl version):
Client Version: version.Info{GitVersion:"v0.7.0", GitCommit:"b5b91243f540739eb5db61af89b2f1e5ba449dfa", GitTreeState:"clean", BuildDate:"2018-07-04T14:16:40Z", GoVersion:"go1.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: &version.Info{GitVersion:"v0.7.0", GitCommit:"b5b91243f540739eb5db61af89b2f1e5ba449dfa", GitTreeState:"clean", BuildDate:"2018-07-04T14:16:40Z", GoVersion:"go1.10", Compiler:"gc", Platform:"linux/amd64"}
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.0+b81c8f8", GitCommit:"b81c8f8", GitTreeState:"clean", BuildDate:"2018-07-10T22:32:34Z", GoVersion:"go1.9.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.0+b81c8f8", GitCommit:"b81c8f8", GitTreeState:"clean", BuildDate:"2018-07-10T22:32:34Z", GoVersion:"go1.9.4", Compiler:"gc", Platform:"linux/amd64"}
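
The error suggests the template rendered by the APB does not declare a PVC_NAME parameter, so `oc process -p PVC_NAME=...` rejects it. A minimal, hypothetical fragment of what the template's parameters section would need to contain; only NAME, MEMORY, CPU_CORES and PVC_NAME come from the error above, everything else is assumed.

# Hypothetical fragment of /tmp/vmtemplate.yaml, not the template's real content.
kind: Template
apiVersion: v1
metadata:
  name: windows-vm-template
parameters:
  - name: NAME
    required: true
  - name: MEMORY
    value: 4096Mi
  - name: CPU_CORES
    value: "2"
  - name: PVC_NAME            # the parameter oc process reports as unknown
    description: Existing PVC that holds the Windows disk image
    required: true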

name change: ovm to virtual machine

In the current APB, when importing a VM via URL, the form shows two options: ovm and template.
With the latest changes in KubeVirt, ovm should be renamed to virtual machine.

v2v fails with ansible error

Logs from the provisioning pod:

[cloud-user@cnv-executor-vatsal-master1 ~]$ oc logs -n dockerhub-import-vm-apb-prov-zbhhx   bundle-42d8e21e-5f0b-4286-a5b4-1cae590f0e78 
DEPRECATED: APB playbooks should be stored at /opt/apb/project
cp: omitting directory ‘/opt/apb/actions/roles’
cp: omitting directory ‘/opt/apb/actions/templates’
ERROR! the role 'import-from-url' was not found in /opt/apb/project/roles:/etc/ansible/roles:/opt/ansible/roles:/opt/apb/project

The error appears to have been in '/opt/apb/project/provision.yml': line 9, column 5, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

  - role: ansibleplaybookbundle.asb-modules
  - role: import-from-url
    ^ here

Logs from the deprovision pod:

[cloud-user@cnv-executor-vatsal-master1 ~]$ oc logs -n dockerhub-import-vm-apb-depr-qmxs2   bundle-a41816a6-3f18-4614-820d-db1e9bf34161  
DEPRECATED: APB playbooks should be stored at /opt/apb/project
cp: omitting directory ‘/opt/apb/actions/roles’
cp: omitting directory ‘/opt/apb/actions/templates’
ERROR! the role 'import-from-url' was not found in /opt/apb/project/roles:/etc/ansible/roles:/opt/ansible/roles:/opt/apb/project

The error appears to have been in '/opt/apb/project/deprovision.yml': line 9, column 5, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

  - role: ansibleplaybookbundle.asb-modules
  - role: import-from-url
    ^ here
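
Based only on the error above, the play references a role that is not on the roles search path after the playbook copy step skips the roles directory. A sketch of the play and the path it expects; the paths are the ones printed in the log, the layout comment is an assumption.

# provision.yml / deprovision.yml reference the role like this; for the lookup
# to succeed, the role must end up under one of the searched directories, e.g.
# /opt/apb/project/roles/import-from-url/ (assumption based on the error text).
- name: import-vm-apb action
  hosts: localhost
  gather_facts: false
  roles:
    - role: ansibleplaybookbundle.asb-modules
    - role: import-from-url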

AlreadyExists error due to same names

Kubernetes only allows unique names for a VM within the same namespace. In VMware, names are unique within a folder but not across the cluster or datacenter, so someone bringing multiple VMs into CNV will hit this error.
Suggestion: could we append the UUID of the imported VM to the VM name? UUIDs are unique across VMware.
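
A minimal sketch of the suggestion, assuming the VMware UUID is available to the role; `vm_name` and `source_vm_uuid` are illustrative variable names, not the APB's actual ones.

# Illustrative only: build a cluster-unique, DNS-safe name by appending a
# slice of the source VM's VMware UUID to the requested name.
- name: Build a unique VM name
  set_fact:
    unique_vm_name: "{{ vm_name | lower }}-{{ (source_vm_uuid | replace('-', ''))[:12] }}"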

fatal error in importing image via url

Seeing the log below when importing a VM image via URL:

 [WARNING]: Found variable using reserved name: action

PLAY [Playbook to import the virtual machine disk] *****************************

TASK [ansible.kubernetes-modules : Install latest openshift client] ************
skipping: [localhost]

TASK [ansibleplaybookbundle.asb-modules : debug] *******************************
skipping: [localhost]

TASK [import-from-url : include] ***********************************************
included: /opt/ansible/roles/import-from-url/tasks/provision.yml for localhost

TASK [import-from-url : Change project to default] *****************************
changed: [localhost]

TASK [import-from-url : Get content metadata] **********************************
ok: [localhost]

TASK [import-from-url : Calculate PVC size] ************************************
skipping: [localhost]

TASK [import-from-url : Calculate PVC size] ************************************
ok: [localhost]

TASK [import-from-url : Build PVC] *********************************************
changed: [localhost]

TASK [import-from-url : Debug generated pvc] ***********************************
changed: [localhost]

TASK [import-from-url : Show generated pvc] ************************************
ok: [localhost] => {
    "msg": "apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: \"cirros-demo-disk-01\"\n  annotations:\n    kubevirt.io/storage.import.endpoint: \"https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-i386-disk.img\"\nspec:\n  storageClassName: kubevirt\n  accessModes:\n  - ReadWriteOnce\n  resources:\n    requests:\n      storage: \"1116691496\""
}

TASK [import-from-url : Provision PVC] *****************************************
changed: [localhost]

TASK [import-from-url : Set pvc_name] ******************************************
ok: [localhost]

TASK [import-from-url : Set disk_bus] ******************************************
ok: [localhost]

TASK [import-from-url : Set disk_size_bytes] ***********************************
ok: [localhost]

TASK [import-from-url : Build VM resources] ************************************
changed: [localhost]

TASK [import-from-url : Provision VM resources] ********************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["oc", "create", "-f", "/tmp/vm-resources.yml"], "delta": "0:00:00.438359", "end": "2018-06-11 09:45:50.614037", "msg": "non-zero return code", "rc": 1, "start": "2018-06-11 09:45:50.175678", "stderr": "Error from server (Forbidden): error when creating \"/tmp/vm-resources.yml\": offlinevirtualmachines.kubevirt.io is forbidden: User \"system:serviceaccount:dh-import-vm-apb-prov-xd64x:apb-d27f4009-f265-4797-b2a5-0d45e155dc27\" cannot create offlinevirtualmachines.kubevirt.io in the namespace \"default\": User \"system:serviceaccount:dh-import-vm-apb-prov-xd64x:apb-d27f4009-f265-4797-b2a5-0d45e155dc27\" cannot create offlinevirtualmachines.kubevirt.io in project \"default\"", "stderr_lines": ["Error from server (Forbidden): error when creating \"/tmp/vm-resources.yml\": offlinevirtualmachines.kubevirt.io is forbidden: User \"system:serviceaccount:dh-import-vm-apb-prov-xd64x:apb-d27f4009-f265-4797-b2a5-0d45e155dc27\" cannot create offlinevirtualmachines.kubevirt.io in the namespace \"default\": User \"system:serviceaccount:dh-import-vm-apb-prov-xd64x:apb-d27f4009-f265-4797-b2a5-0d45e155dc27\" cannot create offlinevirtualmachines.kubevirt.io in project \"default\""], "stdout": "", "stdout_lines": []}

PLAY RECAP *********************************************************************
localhost                  : ok=12   changed=5    unreachable=0    failed=1   
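
The failure is an RBAC problem: the provision pod's service account is not allowed to create offlinevirtualmachines.kubevirt.io in the default namespace. A sketch of the kind of grant that would unblock it follows; the resource, service account, and namespaces are taken from the error message, while the role name and verb list are assumptions, and in practice the broker's sandbox role is likely what needs adjusting.

# Illustrative RBAC sketch, not the APB's actual configuration.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: import-vm-apb-kubevirt
rules:
  - apiGroups: ["kubevirt.io"]
    resources: ["offlinevirtualmachines"]
    verbs: ["create", "get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: import-vm-apb-kubevirt
  namespace: default
subjects:
  - kind: ServiceAccount
    name: apb-d27f4009-f265-4797-b2a5-0d45e155dc27   # service account from the error above
    namespace: dh-import-vm-apb-prov-xd64x
roleRef:
  kind: ClusterRole
  name: import-vm-apb-kubevirt
  apiGroup: rbac.authorization.k8s.io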

Initial UX review

Please review the UX of the current implementation. So far only Import from URL is implemented.

[screenshots 01, 02, 3a, 3b attached]

multiple import at the same time fails

As per the current design, we use the temp PVC for every v2v import.
This becomes a problem when more than one VM is imported at the same time, since both imports end up using the same PVC.
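
One possible direction, sketched as an assumption rather than the current design: derive the temp PVC name from the VM being imported so parallel imports do not collide. `vm_name` and `temp_pvc_size` are illustrative template variables.

# Sketch only: per-import temp PVC instead of a single shared "temp" claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "{{ vm_name }}-v2v-temp"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: "{{ temp_pvc_size | default('2Gi') }}"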

Allow VMware import to specify target storage class

Currently there is no way for the user to specify which storage class the imported image should go into. I assume it goes into the default storage class, but since we now let users specify a storage class in the import-from-URL path, I think we need to add the same option to the VMware import.

The import from URL path PR that implements specifying the storage class: #29
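
For reference, a hypothetical fragment of what the VMware-import PVC could look like with the same kind of parameter the URL path uses; the `storage_class` variable and its default are assumptions, not the APB's current template.

# Illustrative fragment only.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "{{ vm_name }}-disk-01"
spec:
  storageClassName: "{{ storage_class | default('glusterfs-storage') }}"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: "{{ disk_size }}"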

v2v failure keeps creating new pods

If a v2v pod ends up in Error, new pods keep being created to retry the import. Retrying might be a reasonable approach, but there should be some limit on the number of attempts; I literally see 100+ pods for a single import.

All the pods show similar logs:

+ echo /v2v-dst libvirt test_v2v 'vpx://[email protected]/Folder1/Folder2/Compute3/Folder4/Cluster5/10.35.92.10?no_verify=1' linux ovm '${size}]'
+ DSTD=/v2v-dst
+ SRCTYPE=libvirt
+ SRC=test_v2v
+ SRCURI='vpx://[email protected]/Folder1/Folder2/Compute3/Folder4/Cluster5/10.35.92.10?no_verify=1'
+ OS=linux
+ TYPE=ovm
+ PVC_SIZE='${size}]'
/v2v-dst libvirt test_v2v vpx://[email protected]/Folder1/Folder2/Compute3/Folder4/Cluster5/10.35.92.10?no_verify=1 linux ovm ${size}]
++ echo Heslo123
+ SRCPASS=Heslo123
++++ readlink -f /v2v.d/job
+++ dirname /v2v.d/job
++ readlink -f /v2v.d/..
+ BASEDIR=/
+ main
+ [[ -n /v2v-dst ]]
+ [[ -n libvirt ]]
+ [[ -n test_v2v ]]
+ [[ libvirt = \l\i\b\v\i\r\t ]]
+ [[ -z vpx://[email protected]/Folder1/Folder2/Compute3/Folder4/Cluster5/10.35.92.10?no_verify=1 ]]
+ [[ libvirt = \l\i\b\v\i\r\t ]]
+ [[ test_v2v =~ ^vpx ]]
+ transformVM libvirt test_v2v 'vpx://[email protected]/Folder1/Folder2/Compute3/Folder4/Cluster5/10.35.92.10?no_verify=1' Heslo123 linux ovm
+ local SRCTYPE=libvirt
+ local SRC=test_v2v
+ local 'SRCURI=vpx://[email protected]/Folder1/Folder2/Compute3/Folder4/Cluster5/10.35.92.10?no_verify=1'
+ local SRCPASS=Heslo123
+ local OS=linux
+ local TYPE=ovm
++ basename test_v2v
+ local WD=test_v2v.d
+ echo '  Converting source: test_v2v'
+ mkdir -p test_v2v.d
  Converting source: test_v2v
+ ls -shal test_v2v.d
total 0
0 drwxr-xr-x. 2 root root  6 Jun  6 06:08 .
0 drwxr-xr-x. 1 root root 24 Jun  6 06:08 ..
++ basename test_v2v
++ sed 's/[^a-zA-Z0-9-]/-/g'
++ tr '[:upper:]' '[:lower:]'
+ local NAME=test-v2v
+ [[ -n vpx://[email protected]/Folder1/Folder2/Compute3/Folder4/Cluster5/10.35.92.10?no_verify=1 ]]
+ SRCURI='-ic vpx://[email protected]/Folder1/Folder2/Compute3/Folder4/Cluster5/10.35.92.10?no_verify=1'
+ [[ -n SRCPASS ]]
+ echo Heslo123
+ SRCPASS='--password-file pass'
+ [[ -n 1 ]]
+ DEBUG_OPTS='-v -x'
+ virt-v2v -v -x -i libvirt --password-file pass -ic 'vpx://[email protected]/Folder1/Folder2/Compute3/Folder4/Cluster5/10.35.92.10?no_verify=1' test_v2v -o local -on test-v2v -oa sparse -of raw -os /v2v-dst --machine-readable
virt-v2v: libguestfs 1.36.13fedora=26,release=1.fc26,libvirt (x86_64)
libvirt version: 3.2.1
libguestfs: trace: set_verbose true
libguestfs: trace: set_verbose = 0
libguestfs: trace: get_backend
libguestfs: trace: get_backend = "direct"
input_libvirt_vcenter_https: source: scheme vpx server 10.35.5.21
[   0.0] Opening the source -i libvirt -ic vpx://[email protected]/Folder1/Folder2/Compute3/Folder4/Cluster5/10.35.92.10?no_verify=1 test_v2v
libvirt xml is:
<domain type='vmware' xmlns:vmware='http://libvirt.org/schemas/domain/vmware/1.0'>
  <name>test_v2v</name>
  <uuid>42352ec7-97c8-12a8-a8bc-39a1cd603cd0</uuid>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <shares>2000</shares>
  </cputune>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <disk type='file' device='disk'>
      <source file='[nsimsolo_vmware_nfs] test_v2v/test_v2v.vmdk'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='scsi' index='0' model='vmpvscsi'/>
    <interface type='bridge'>
      <mac address='00:12:34:56:78:9a'/>
      <source bridge='VM Network'/>
      <model type='vmxnet3'/>
    </interface>
    <video>
      <model type='vmvga' vram='8192' primary='yes'/>
    </video>
  </devices>
  <vmware:datacenterpath>Folder1/Folder2/Compute3</vmware:datacenterpath>
</domain>

vcenter: using <vmware:datacenterpath> from libvirt: Folder1/Folder2/Compute3
'curl' -q --max-redirs '5' --globoff --head --silent --url 'https://10.35.5.21/folder/test%5fv2v/test%5fv2v-flat.vmdk?dcPath=Folder1/Folder2/Compute3&dsName=nsimsolo%5fvmware%5fnfs' --user <hidden> --insecure
HTTP/1.1 200 OK
Date: Wed, 6 Jun 2018 06:08:36 GMT
Set-Cookie: vmware_soap_session="8d8b56f212e1a0f3531eb9dea99fbe2047c1db0c"; Path=/; HttpOnly; Secure; 
Accept-Ranges: bytes
Connection: Keep-Alive
Content-Type: application/octet-stream
Content-Length: 5368709120

vcenter: json parameters: { "file.cookie": "vmware_soap_session=\"8d8b56f212e1a0f3531eb9dea99fbe2047c1db0c\"", "file.sslverify": "off", "file.driver": "https", "file.url": "https://10.35.5.21/folder/test%5fv2v/test%5fv2v-flat.vmdk?dcPath=Folder1/Folder2/Compute3&dsName=nsimsolo%5fvmware%5fnfs", "file.timeout": 2000 }
    source name: test_v2v
hypervisor type: vmware
         memory: 2147483648 (bytes)
       nr vCPUs: 2
   CPU features: 
       firmware: unknown
        display: 
          video: vmvga
          sound: 
disks:
	json: { "file.cookie": "vmware_soap_session=\"8d8b56f212e1a0f3531eb9dea99fbe2047c1db0c\"", "file.sslverify": "off", "file.driver": "https", "file.url": "https://10.35.5.21/folder/test%5fv2v/test%5fv2v-flat.vmdk?dcPath=Folder1/Folder2/Compute3&dsName=nsimsolo%5fvmware%5fnfs", "file.timeout": 2000 } (raw) [scsi]
removable media:

NICs:
	Bridge "VM Network" mac: 00:12:34:56:78:9a [vmxnet3]

check_host_free_space: overlay_dir=/var/tmp free_space=1721368576
[   6.6] Creating an overlay to protect the source from being modified
qemu-img 'create' '-q' '-f' 'qcow2' '-b' 'json: { "file.cookie": "vmware_soap_session=\"8d8b56f212e1a0f3531eb9dea99fbe2047c1db0c\"", "file.sslverify": "off", "file.driver": "https", "file.url": "https://10.35.5.21/folder/test%5fv2v/test%5fv2v-flat.vmdk?dcPath=Folder1/Folder2/Compute3&dsName=nsimsolo%5fvmware%5fnfs", "file.timeout": 2000 }' '-o' 'compat=1.1,backing_fmt=raw' '/var/tmp/v2vovl91a109.qcow2'
libguestfs: trace: set_verbose true
libguestfs: trace: set_verbose = 0
libguestfs: trace: disk_has_backing_file "/var/tmp/v2vovl91a109.qcow2"
libguestfs: command: run: qemu-img
libguestfs: command: run: \ info
libguestfs: command: run: \ --output json
libguestfs: command: run: \ /var/tmp/v2vovl91a109.qcow2
libguestfs: parse_json: qemu-img info JSON output:\n{\n    "backing-filename-format": "raw",\n    "virtual-size": 5368709120,\n    "filename": "/var/tmp/v2vovl91a109.qcow2",\n    "cluster-size": 65536,\n    "format": "qcow2",\n    "actual-size": 197120,\n    "format-specific": {\n        "type": "qcow2",\n        "data": {\n            "compat": "1.1",\n            "lazy-refcounts": false,\n            "refcount-bits": 16,\n            "corrupt": false\n        }\n    },\n    "full-backing-filename": "json: { \"file.cookie\": \"vmware_soap_session=\\\"8d8b56f212e1a0f3531eb9dea99fbe2047c1db0c\\\"\", \"file.sslverify\": \"off\", \"file.driver\": \"https\", \"file.url\": \"https://10.35.5.21/folder/test%5fv2v/test%5fv2v-flat.vmdk?dcPath=Folder1/Folder2/Compute3&dsName=nsimsolo%5fvmware%5fnfs\", \"file.timeout\": 2000 }",\n    "backing-filename": "json: { \"file.cookie\": \"vmware_soap_session=\\\"8d8b56f212e1a0f3531eb9dea99fbe2047c1db0c\\\"\", \"file.sslverify\": \"off\", \"file.driver\": \"https\", \"file.url\": \"https://10.35.5.21/folder/test%5fv2v/test%5fv2v-flat.vmdk?dcPath=Folder1/Folder2/Compute3&dsName=nsimsolo%5fvmware%5fnfs\", \"file.timeout\": 2000 }",\n    "dirty-flag": false\n}\n\n
libguestfs: trace: disk_has_backing_file = 1
libguestfs: trace: set_verbose true
libguestfs: trace: set_verbose = 0
libguestfs: trace: disk_virtual_size "/var/tmp/v2vovl91a109.qcow2"
libguestfs: command: run: qemu-img
libguestfs: command: run: \ info
libguestfs: command: run: \ --output json
libguestfs: command: run: \ /var/tmp/v2vovl91a109.qcow2
libguestfs: parse_json: qemu-img info JSON output:\n{\n    "backing-filename-format": "raw",\n    "virtual-size": 5368709120,\n    "filename": "/var/tmp/v2vovl91a109.qcow2",\n    "cluster-size": 65536,\n    "format": "qcow2",\n    "actual-size": 197120,\n    "format-specific": {\n        "type": "qcow2",\n        "data": {\n            "compat": "1.1",\n            "lazy-refcounts": false,\n            "refcount-bits": 16,\n            "corrupt": false\n        }\n    },\n    "full-backing-filename": "json: { \"file.cookie\": \"vmware_soap_session=\\\"8d8b56f212e1a0f3531eb9dea99fbe2047c1db0c\\\"\", \"file.sslverify\": \"off\", \"file.driver\": \"https\", \"file.url\": \"https://10.35.5.21/folder/test%5fv2v/test%5fv2v-flat.vmdk?dcPath=Folder1/Folder2/Compute3&dsName=nsimsolo%5fvmware%5fnfs\", \"file.timeout\": 2000 }",\n    "backing-filename": "json: { \"file.cookie\": \"vmware_soap_session=\\\"8d8b56f212e1a0f3531eb9dea99fbe2047c1db0c\\\"\", \"file.sslverify\": \"off\", \"file.driver\": \"https\", \"file.url\": \"https://10.35.5.21/folder/test%5fv2v/test%5fv2v-flat.vmdk?dcPath=Folder1/Folder2/Compute3&dsName=nsimsolo%5fvmware%5fnfs\", \"file.timeout\": 2000 }",\n    "dirty-flag": false\n}\n\n
libguestfs: trace: disk_virtual_size = 5368709120
[   9.1] Initializing the target -o local -os /v2v-dst
[   9.1] Opening the overlay
libguestfs: trace: set_verbose true
libguestfs: trace: set_verbose = 0
libguestfs: trace: set_identifier "v2v"
libguestfs: trace: v2v: set_identifier = 0
libguestfs: trace: v2v: get_memsize
libguestfs: trace: v2v: get_memsize = 500
libguestfs: trace: v2v: set_memsize 2000
libguestfs: trace: v2v: set_memsize = 0
libguestfs: trace: v2v: set_network true
libguestfs: trace: v2v: set_network = 0
libguestfs: trace: v2v: add_drive "/var/tmp/v2vovl91a109.qcow2" "format:qcow2" "cachemode:unsafe" "discard:besteffort" "copyonread:true"
libguestfs: trace: v2v: add_drive = 0
libguestfs: trace: v2v: launch
libguestfs: trace: v2v: get_tmpdir
libguestfs: trace: v2v: get_tmpdir = "/tmp"
libguestfs: trace: v2v: version
libguestfs: trace: v2v: version = <struct guestfs_version = major: 1, minor: 36, release: 13, extra: fedora=26,release=1.fc26,libvirt, >
libguestfs: trace: v2v: get_backend
libguestfs: trace: v2v: get_backend = "direct"
libguestfs: launch: program=virt-v2v
libguestfs: launch: identifier=v2v
libguestfs: launch: version=1.36.13fedora=26,release=1.fc26,libvirt
libguestfs: launch: backend registered: unix
libguestfs: launch: backend registered: uml
libguestfs: launch: backend registered: libvirt
libguestfs: launch: backend registered: direct
libguestfs: launch: backend=direct
libguestfs: launch: tmpdir=/tmp/libguestfsti5Rg6
libguestfs: launch: umask=0022
libguestfs: launch: euid=0
libguestfs: trace: v2v: get_backend_setting "force_tcg"
libguestfs: trace: v2v: get_backend_setting = NULL (error)
libguestfs: begin testing qemu features
libguestfs: trace: v2v: get_cachedir
libguestfs: trace: v2v: get_cachedir = "/var/tmp"
libguestfs: checking for previously cached test results of /usr/bin/qemu-kvm, in /var/tmp/.guestfs-0
libguestfs: loading previously cached test results
libguestfs: qemu version: 2.9
libguestfs: qemu mandatory locking: no
libguestfs: trace: v2v: get_sockdir
libguestfs: trace: v2v: get_sockdir = "/tmp"
libguestfs: finished testing qemu features
libguestfs: trace: v2v: get_backend_setting "gdb"
libguestfs: trace: v2v: get_backend_setting = NULL (error)
[00028ms] /usr/bin/qemu-kvm \
    -global virtio-blk-pci.scsi=off \
    -nodefconfig \
    -enable-fips \
    -nodefaults \
    -display none \
    -machine accel=kvm:tcg \
    -cpu host \
    -m 2000 \
    -no-reboot \
    -rtc driftfix=slew \
    -no-hpet \
    -global kvm-pit.lost_tick_policy=discard \
    -kernel /usr/lib64/guestfs/appliance/kernel \
    -initrd /usr/lib64/guestfs/appliance/initrd \
    -object rng-random,filename=/dev/urandom,id=rng0 \
    -device virtio-rng-pci,rng=rng0 \
    -device virtio-scsi-pci,id=scsi \
    -drive file=/var/tmp/v2vovl91a109.qcow2,cache=unsafe,discard=unmap,format=qcow2,copy-on-read=on,id=hd0,if=none \
    -device scsi-hd,drive=hd0 \
    -drive file=/usr/lib64/guestfs/appliance/root,snapshot=on,id=appliance,cache=unsafe,if=none,format=raw \
    -device scsi-hd,drive=appliance \
    -device virtio-serial-pci \
    -serial stdio \
    -device sga \
    -chardev socket,path=/tmp/libguestfsqrHI5b/guestfsd.sock,id=channel0 \
    -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
    -netdev user,id=usernet,net=169.254.0.0/16 \
    -device virtio-net-pci,netdev=usernet \
    -append 'panic=1 console=ttyS0 edd=off udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable 8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1 guestfs_network=1 TERM=linux guestfs_identifier=v2v'
qemu-system-x86_64: -drive file=/usr/lib64/guestfs/appliance/root,snapshot=on,id=appliance,cache=unsafe,if=none,format=raw: Could not read L1 table: No such file or directory
libguestfs: child_cleanup: 0x1310c10: child process died
libguestfs: sending SIGTERM to process 27
libguestfs: trace: v2v: launch = -1 (error)
virt-v2v: error: libguestfs error: guestfs_launch failed, see earlier error 
messages
rm -rf '/var/tmp/null.pnlXt0'
libguestfs: trace: v2v: close
libguestfs: closing guestfs handle 0x1310c10 (state 0)
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsti5Rg6
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsqrHI5b
libguestfs: trace: close
libguestfs: closing guestfs handle 0x13108e0 (state 0)
libguestfs: trace: close
libguestfs: closing guestfs handle 0x13104d0 (state 0)
libguestfs: trace: close
libguestfs: closing guestfs handle 0x121fc70 (state 0)

And once a pod goes into Error, a new pod appears in ContainerCreating:

NAME                       READY     STATUS              RESTARTS   AGE
docker-registry-1-gnhbf    1/1       Running             0          6m
registry-console-1-q22b5   1/1       Running             0          6m
router-1-j6px4             1/1       Running             0          6m
router-1-t8qkq             1/1       Running             0          6m
router-1-tblpm             1/1       Running             0          6m
v2v-test-v2v-2dp8w         0/1       Error               0          2m
v2v-test-v2v-5942d         0/1       Error               0          3m
v2v-test-v2v-5lb98         0/1       Error               0          3m
v2v-test-v2v-6c65f         0/1       Error               0          5m
v2v-test-v2v-6pn7z         0/1       Error               0          5m
v2v-test-v2v-7xq8w         0/1       Error               0          47s
v2v-test-v2v-9f9c4         0/1       Terminating         0          3s
v2v-test-v2v-bv5zt         0/1       Error               0          3m
v2v-test-v2v-cqdx7         0/1       Error               0          2m
v2v-test-v2v-d8x7z         0/1       Error               0          6m
v2v-test-v2v-dw2h6         0/1       Error               0          4m
v2v-test-v2v-k4cw9         0/1       Error               0          2m
v2v-test-v2v-k8gd9         0/1       Error               0          1m
v2v-test-v2v-lkmbq         0/1       Error               0          6m
v2v-test-v2v-m8pgn         0/1       ContainerCreating   0          3s
v2v-test-v2v-mrd6r         0/1       Error               0          1m
v2v-test-v2v-n6kjg         0/1       Error               0          33s
v2v-test-v2v-nj4ch         0/1       Error               0          5m
v2v-test-v2v-pntzz         0/1       Error               0          1m
v2v-test-v2v-ptd65         0/1       Error               0          1m
v2v-test-v2v-pzsbk         0/1       Error               0          5m
v2v-test-v2v-q7xmp         0/1       Error               0          4m
v2v-test-v2v-qsrgn         0/1       Error               0          1m
v2v-test-v2v-sfsr4         0/1       Error               0          4m
v2v-test-v2v-t59vm         0/1       Error               0          3m
v2v-test-v2v-t9xfk         0/1       Error               0          4m
v2v-test-v2v-txchz         0/1       Error               0          6m
v2v-test-v2v-vkbwh         0/1       Error               0          19s
v2v-test-v2v-vxq6t         0/1       Error               0          2m
v2v-test-v2v-vzgf5         0/1       Error               0          6m
v2v-test-v2v-xs6xh         0/1       Error               0          5m

and the events show:

1h        1h        1         v2v-test-v2v-zfjmz.1535794a26e24bae             Pod                              Warning   FailedMount            kubelet, cnv-executor-vatsal-master1.example.com   Unable to mount volumes for pod "v2v-test-v2v-zfjmz_default(bb2de02c-6945-11e8-b102-fa163ee75a3c)": timeout expired waiting for volumes to attach or mount for pod "default"/"v2v-test-v2v-zfjmz". list of unmounted volumes=[kvm volume-1 volume-2 kubevirt-privileged-token-4r52g]. list of unattached volumes=[kvm volume-1 volume-2 kubevirt-privileged-token-4r52g]
1h        1h        1         v2v-test-v2v-zgcb7.153578d84a81c5b8             Pod                              Normal    Scheduled              default-scheduler                                  Successfully assigned v2v-test-v2v-zgcb7 to cnv-executor-vatsal-node1.example.com
1h        1h        1         v2v-test-v2v-zgcb7.153578f4ee662b90             Pod                              Warning   FailedMount            kubelet, cnv-executor-vatsal-node1.example.com     Unable to mount volumes for pod "v2v-test-v2v-zgcb7_default(e10bec3d-6944-11e8-b102-fa163ee75a3c)": timeout expired waiting for volumes to attach or mount for pod "default"/"v2v-test-v2v-zgcb7". list of unmounted volumes=[kvm volume-1 volume-2 kubevirt-privileged-token-4r52g]. list of unattached volumes=[kvm volume-1 volume-2 kubevirt-privileged-token-4r52g]
1h        1h        1         v2v-test-v2v-zh5dm.15357a071e80a777             Pod                              Normal    Scheduled              default-scheduler                                  Successfully assigned v2v-test-v2v-zh5dm to cnv-executor-vatsal-node1.example.com
1h        1h        1         v2v-test-v2v-zh5dm.15357a07c9bf41b1             Pod       spec.containers{v2v}   Normal    Pulling                kubelet, cnv-executor-vatsal-node1.example.com     pulling image "quay.io/kubevirt/v2v-job"
1h        1h        1         v2v-test-v2v-zh5dm.15357a07e116c96d             Pod       spec.containers{v2v}   Normal    Pulled                 kubelet, cnv-executor-vatsal-node1.example.com     Successfully pulled image "quay.io/kubevirt/v2v-job"
1h        1h        1         v2v-test-v2v-zh5dm.15357a07e3369c16             Pod       spec.containers{v2v}   Normal    Created                kubelet, cnv-executor-vatsal-node1.example.com     Created container
1h        1h        1         v2v-test-v2v-zh5dm.15357a07e94c5aa2             Pod       spec.containers{v2v}   Normal    Started                kubelet, cnv-executor-vatsal-node1.example.com     Started container
17m       17m       1         v2v-test-v2v-zj7mt.15357d0f81d6b74d             Pod                              Normal    Scheduled              default-scheduler                                  Successfully assigned v2v-test-v2v-zj7mt to cnv-executor-vatsal-master1.example.com
15m       15m       1         v2v-test-v2v-zj7mt.15357d2c2659aa56             Pod                              Warning   FailedMount            kubelet, cnv-executor-vatsal-master1.example.com   Unable to mount volumes for pod "v2v-test-v2v-zj7mt_default(abd6a537-694f-11e8-b102-fa163ee75a3c)": timeout expired waiting for volumes to attach or mount for pod "default"/"v2v-test-v2v-zj7mt". list of unmounted volumes=[kvm volume-1 volume-2 kubevirt-privileged-token-4r52g]. list of unattached volumes=[kvm volume-1 volume-2 kubevirt-privileged-token-4r52g]
1h        1h        1         v2v-test-v2v-zmwzt.15357a2188898501             Pod                              Normal    Scheduled              default-scheduler                                  Successfully assigned v2v-test-v2v-zmwzt to cnv-executor-vatsal-master1.example.com
1h        1h        1         v2v-test-v2v-zmwzt.15357a3e304304c8             Pod                              Warning   FailedMount            kubelet, cnv-executor-vatsal-master1.example.com   Unable to mount volumes for pod "v2v-test-v2v-zmwzt_default(2be81065-6948-11e8-b102-fa163ee75a3c)": timeout expired waiting for volumes to attach or mount for pod "default"/"v2v-test-v2v-zmwzt". list of unmounted volumes=[kvm volume-1 volume-2 kubevirt-privileged-token-4r52g]. list of unattached volumes=[kvm volume-1 volume-2 kubevirt-privileged-token-4r52g]
13m       13m       1         v2v-test-v2v-zqwc2.15357d497c3d0961             Pod                              Normal    Scheduled              default-scheduler                                  Successfully assigned v2v-test-v2v-zqwc2 to cnv-executor-vatsal-node1.example.com
11m       11m       1         v2v-test-v2v-zqwc2.15357d661ff9feca             Pod                              Warning   FailedMount            kubelet, cnv-executor-vatsal-node1.example.com     Unable to mount volumes for pod "v2v-test-v2v-zqwc2_default(40438524-6950-11e8-b102-fa163ee75a3c)": timeout expired waiting for volumes to attach or mount for pod "default"/"v2v-test-v2v-zqwc2". list of unmounted volumes=[kvm volume-1 volume-2 kubevirt-privileged-token-4r52g]. list of unattached volumes=[kvm volume-1 volume-2 kubevirt-privileged-token-4r52g]
1h        1h        1         v2v-test-v2v-zrf9t.153579f010d7b6d7             Pod                              Normal    Scheduled              default-scheduler                                  Successfully assigned v2v-test-v2v-zrf9t to cnv-executor-vatsal-master1.example.com
1h        1h        1         v2v-test-v2v-zrf9t.15357a0cb768a329             Pod                              Warning   FailedMount            kubelet, cnv-executor-vatsal-master1.example.com   Unable to mount volumes for pod "v2v-test-v2v-zrf9t_default(ad44d4cd-6947-11e8-b102-fa163ee75a3c)": timeout expired waiting for volumes to attach or mount for pod "default"/"v2v-test-v2v-zrf9t". list of unmounted volumes=[kvm volume-1 volume-2 kubevirt-privileged-token-4r52g]. list of unattached volumes=[kvm volume-1 volume-2 kubevirt-privileged-token-4r52g]

To stop this continuous creation of pods, I had to delete the job.
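
If the v2v pods are managed by a Kubernetes Job (which having to delete the job suggests), one way to cap the retries is the Job's backoffLimit. The manifest below is only a sketch with assumed values; only the image name comes from the events above, everything else is illustrative.

# Sketch: limit retries and overall runtime of the v2v job.
apiVersion: batch/v1
kind: Job
metadata:
  name: v2v-test-v2v
spec:
  backoffLimit: 4               # give up after 4 failed pods instead of retrying indefinitely
  activeDeadlineSeconds: 7200   # optional hard cap on total runtime
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: v2v
          image: quay.io/kubevirt/v2v-job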
