
ceph-iscsi-cli's Introduction

ceph-iscsi-cli installs a targetcli-like interface that allows multiple Ceph/iSCSI
gateways to be managed from a single interface. The CLI interacts with the REST API
exposed by the rbd-target-api daemon running on each gateway.

Here's an example of the shell interface the gwcli tool provides:

/> ls
o- / .................................................................................. [...]
  o- clusters ................................................................. [Clusters: 1]
  | o- ceph ..................................................................... [HEALTH_OK]
  |   o- pools ................................................................... [Pools: 3]
  |   | o- ec ........................................ [(2+1), Commit: 0b/40G (0%), Used: 0b]
  |   | o- iscsi ..................................... [(x3), Commit: 0b/20G (0%), Used: 18b]
  |   | o- rbd ....................................... [(x3), Commit: 8G/20G (40%), Used: 5K]
  |   o- topology ......................................................... [OSDs: 3,MONs: 3]
  o- disks ................................................................... [8G, Disks: 5]
  | o- rbd.disk_1 ............................................................. [disk_1 (1G)]
  | o- rbd.disk_2 ............................................................. [disk_2 (2G)]
  | o- rbd.disk_3 ............................................................. [disk_3 (2G)]
  | o- rbd.disk_4 ............................................................. [disk_4 (1G)]
  | o- rbd.disk_5 ............................................................. [disk_5 (2G)]
  o- iscsi-target .............................................................. [Targets: 1]
    o- iqn.2003-01.com.redhat.iscsi-gw:ceph-gw ................................ [Gateways: 2]
      o- gateways ..................................................... [Up: 2/2, Portals: 2]
      | o- rh7-gw1 .................................................... [192.168.122.69 (UP)]
      | o- rh7-gw2 ................................................... [192.168.122.104 (UP)]
      o- hosts ................................................................... [Hosts: 2]
        o- iqn.1994-05.com.redhat:myhost1 ........................ [Auth: None, Disks: 1(1G)]
        | o- lun 0 ......................................... [rbd.disk_1(1G), Owner: rh7-gw2]
        o- iqn.1994-05.com.redhat:rh7-client .......... [LOGGED-IN, Auth: CHAP, Disks: 1(2G)]
          o- lun 0 ......................................... [rbd.disk_5(2G), Owner: rh7-gw2]
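
The tree above is built with a handful of create commands. Here's a rough sketch of the
sequence, reusing the names from the listing above (prompts abbreviated; your IQNs,
hostnames, IPs and disk sizes will differ):

/> cd /iscsi-target
/iscsi-target> create iqn.2003-01.com.redhat.iscsi-gw:ceph-gw
/iscsi-target> cd iqn.2003-01.com.redhat.iscsi-gw:ceph-gw/gateways
/iscsi-target...-gw/gateways> create rh7-gw1 192.168.122.69
/iscsi-target...-gw/gateways> create rh7-gw2 192.168.122.104
/iscsi-target...-gw/gateways> /disks/ create pool=rbd image=disk_1 size=1G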


The rbd-target-api daemon uses Flask's internal development server to provide the
REST API. That server is not normally used in a production context, but for this
specific use case it provides a simple way to expose an admin interface - at least
for the first release!

The API has been tested with the Firefox RESTClient add-on over https (using a common
self-signed certificate). With the certificate in place on each gateway, you can add
basic auth credentials matching the local API configuration to RESTClient and use the
client as follows:

- Add a Header content type for application/x-www-form-urlencoded
- METHOD: PUT        URL: https://192.168.122.69:5000/api/gateway/rh7-gw1
- Select the urlencoded content type and the basic auth credentials
- Add the required variables to the body section in the client UI,
  e.g. ip_address=192.168.122.69
- Click 'SEND'


Curl Examples
If the UI is not your thing, curl probably is! Here's an example of using
curl to create a gateway node.

curl --insecure --user admin:admin -d "ip_address=192.168.122.104" \
     -X PUT https://192.168.122.69:5000/api/gateway/rh7-gw2
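
Reading the current configuration back works the same way; the GET /api/config endpoint
(visible in the gateway logs further down this page) returns the full JSON configuration
object. A sketch, assuming the same admin credentials and certificate setup:

curl --insecure --user admin:admin \
     -X GET https://192.168.122.69:5000/api/config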


## Installation
### Via RPM
Simply install the provided rpm with
```rpm -ivh ceph-iscsi-cli-<ver>.el7.noarch.rpm```

### Manually

The following packages are required by ceph-iscsi-cli and must be
installed before starting the rbd-target-api service:

python-rtslib
ceph-iscsi-config
python-requests
python-configshell
python-flask
pyOpenSSL
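
On a CentOS/RHEL 7 gateway these can typically be pulled in with yum; a sketch, assuming
the package names above are available in your configured repositories:

```
yum install -y python-rtslib ceph-iscsi-config python-requests \
               python-configshell python-flask pyOpenSSL
```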

To install the python package that provides the application logic, run the provided setup.py script
i.e. ```> python setup.py install```

For the management daemon (rbd-target-api), simply copy the following files into their equivalent places on each gateway:
- <archive_root>/usr/lib/systemd/system/rbd-target-api.service --> /lib/systemd/system
- <archive_root>/usr/bin/rbd-target-api --> /usr/bin
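
A sketch of those copy steps plus enabling the daemon (assuming a systemd host;
<archive_root> is the extracted source tree):

```
cp <archive_root>/usr/lib/systemd/system/rbd-target-api.service /lib/systemd/system/
cp <archive_root>/usr/bin/rbd-target-api /usr/bin/
systemctl daemon-reload
systemctl enable rbd-target-api
systemctl start rbd-target-api
```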

## Configuration

Once the package is installed, the ceph-iscsi-cli instructions found here:

http://docs.ceph.com/docs/master/rbd/iscsi-target-cli/

can be used to create an iscsi-gateway.cfg and create a target.
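
As a rough illustration only, an iscsi-gateway.cfg might look something like the sketch
below. The file path and the cluster_name/gateway_keyring option names are assumptions
based on the linked guide; api_secure and trusted_ip_list are the settings referenced in
the issues further down this page.

```
cat > /etc/ceph/iscsi-gateway.cfg <<'EOF'
[config]
cluster_name = ceph
gateway_keyring = ceph.client.admin.keyring
api_secure = false
trusted_ip_list = 192.168.122.69,192.168.122.104
EOF
```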

ceph-iscsi-cli's People

Contributors

alexander-bauer, gangbiao, mikechristie, pcuzner, rjerk, smithfarm, vatelzh, vshankar


ceph-iscsi-cli's Issues

performance issue

In my environment I created 46 disks and 37 hosts, and I found that gwcli starts really slowly:
time gwcli ls > /dev/null

real 0m7.003s
user 0m2.570s
sys 0m0.551s

time ceph -s > /dev/null

real 0m0.448s
user 0m0.390s
sys 0m0.038s

vSphere Client: can not find iscsi target by using dynamic discovery

Steps:

  1. Create a host using gwcli named iqn.2016-04.com.test.sn1.
  2. Configure the iSCSI adapter and rename the wwn to iqn.2016-04.com.test.sn1.
  3. Fill in the CHAP credentials and IP (192.168.0.20) using dynamic discovery and click 'OK' - no device, no path is found...
  4. If using static discovery, the device and path can be found again.

Trouble Adding Second Gateway Server to GWCLI

Please help as this is stopping me cold.

I'm having a problem adding a second gateway server. Both the first and second gateways are on dedicated servers running CentOS 7.

The 10.2.51.0/24 subnet is the primary subnet (ceph public network) and is routable.
The 10.2.41.0/24 subnet is only used by iSCSI initiators to connect to the gateways and is not routable.

I created the first gateway as cephrgw01 with an ip address of 10.2.51.41. That same first gateway server has a secondary ip address of 10.2.41.41 (seen in the error below).

I created the second gateway as cephrgw02 with an ip address of 10.2.51.43. That same second gateway server has a secondary ip address of 10.2.41.43 (was originally seen in the error message but disappeared once I rebooted the server). Rebooting the first server did not remove this error.

I initially ran gwcli on cephrgw01 and was able to create the gateway cephrgw01 without any issues. I then tried to add a disk but got the message about needing a second gateway, so I tried to add a second gateway.

On the first server, when I try to create the second gateway after running gwcli as root, I get this:

/iscsi-target...test/gateways> create skipchecks=true cephrgw02 10.2.51.43
CMD: ../gateways/ create cephrgw02 10.2.51.43 nosync=False skipchecks=true
OS version/package checks have been bypassed
Adding gateway, sync'ing 0 disk(s) and 0 client(s)
Failed : Gateway creation failed, gateway(s) unavailable:10.2.41.41(UNKNOWN state)
/iscsi-target...test/gateways>

On the second gateway, when I try to run gwcli, I only get this:

[root@cephrgw02 ~]# gwcli
KeyError: 403
[root@cephrgw02 ~]#

When I run gwcli -d I get this:

Adding ceph cluster 'ceph' to the UI
Fetching ceph osd information
Querying ceph for state information
Refreshing disk information from the config object
Refreshing gateway & client information

checking iSCSI/API ports on cephrgw01
Traceback (most recent call last):
  File "/usr/bin/gwcli", line 187, in <module>
    main()
  File "/usr/bin/gwcli", line 99, in main
    root_node.refresh()
  File "/usr/lib/python2.7/site-packages/gwcli/gateway.py", line 78, in refresh
    self.target.refresh()
  File "/usr/lib/python2.7/site-packages/gwcli/gateway.py", line 341, in refresh
    tgt.gateway_group.load(self.gateway_group)
  File "/usr/lib/python2.7/site-packages/gwcli/gateway.py", line 406, in load
    Gateway(self, gateway_name, gateway_group[gateway_name])
  File "/usr/lib/python2.7/site-packages/gwcli/gateway.py", line 604, in __init__
    self.refresh()
  File "/usr/lib/python2.7/site-packages/gwcli/gateway.py", line 620, in refresh
    self._get_state()
  File "/usr/lib/python2.7/site-packages/gwcli/gateway.py", line 649, in _get_state
    self.state = lookup[rc].get('status')
KeyError: 403

From running journalctl | grep target on the second machine I get this:

Dec 11 13:25:54 cephrgw02.wl.internal rbd-target-api[1462]: Checking for config object changes every 1s
Dec 11 13:25:54 cephrgw02.wl.internal rbd-target-api[1462]: * Running on http://0.0.0.0:5000/
Dec 11 13:25:55 cephrgw02.wl.internal rbd-target-gw[1459]: No OSD blacklist entries found
Dec 11 13:25:55 cephrgw02.wl.internal rbd-target-gw[1459]: Reading the configuration object to update local LIO configuration
Dec 11 13:25:55 cephrgw02.wl.internal rbd-target-gw[1459]: Configuration does not have an entry for this host(cephrgw02) - nothing to define to LIO
Dec 11 13:26:05 cephrgw02.wl.internal rbd-target-api[1462]: 10.2.51.41 - - [11/Dec/2017 13:26:05] "GET /api/_ping HTTP/1.1" 200 -
Dec 11 13:26:05 cephrgw02.wl.internal rbd-target-api[1462]: 10.2.51.41 - - [11/Dec/2017 13:26:05] "GET /api/_ping HTTP/1.1" 200 -
Dec 11 13:26:05 cephrgw02.wl.internal rbd-target-api[1462]: 10.2.51.41 - - [11/Dec/2017 13:26:05] "GET /api/_ping HTTP/1.1" 200 -
Dec 11 13:26:05 cephrgw02.wl.internal rbd-target-api[1462]: 10.2.51.41 - - [11/Dec/2017 13:26:05] "GET /api/_ping HTTP/1.1" 200 -
Dec 11 13:26:05 cephrgw02.wl.internal rbd-target-api[1462]: 10.2.51.41 - - [11/Dec/2017 13:26:05] "GET /api/_ping HTTP/1.1" 200 -
Dec 11 13:26:05 cephrgw02.wl.internal rbd-target-api[1462]: 10.2.51.41 - - [11/Dec/2017 13:26:05] "GET /api/_ping HTTP/1.1" 200 -
Dec 11 13:27:46 cephrgw02.wl.internal rbd-target-api[1462]: Change detected - internal 3 / xattr 4 refreshing
Dec 11 13:32:10 cephrgw02.wl.internal rbd-target-api[1462]: 10.2.41.41 - - [11/Dec/2017 13:32:10] "GET /api/_ping HTTP/1.1" 403 -
Dec 11 14:44:53 cephrgw02.wl.internal rbd-target-api[1462]: 10.2.41.41 - - [11/Dec/2017 14:44:53] "GET /api/_ping HTTP/1.1" 403 -
Dec 11 14:44:53 cephrgw02.wl.internal rbd-target-api[1462]: 10.2.41.41 - - [11/Dec/2017 14:44:53] "GET /api/_ping HTTP/1.1" 403 -
Dec 11 14:53:58 cephrgw02.wl.internal rbd-target-api[1462]: 127.0.0.1 - - [11/Dec/2017 14:53:58] "GET /api/config HTTP/1.1" 200 -
Dec 11 14:54:07 cephrgw02.wl.internal rbd-target-api[1462]: 127.0.0.1 - - [11/Dec/2017 14:54:07] "GET /api/config HTTP/1.1" 200 -
Dec 11 14:54:17 cephrgw02.wl.internal rbd-target-api[1462]: 127.0.0.1 - - [11/Dec/2017 14:54:17] "GET /api/config HTTP/1.1" 200 -
Dec 11 14:54:39 cephrgw02.wl.internal rbd-target-api[1462]: 127.0.0.1 - - [11/Dec/2017 14:54:39] "GET /api/config HTTP/1.1" 200 -
Dec 11 15:49:30 cephrgw02.wl.internal rbd-target-api[1462]: 127.0.0.1 - - [11/Dec/2017 15:49:30] "GET /api/config HTTP/1.1" 200 -
Dec 11 15:49:38 cephrgw02.wl.internal rbd-target-api[1462]: 127.0.0.1 - - [11/Dec/2017 15:49:38] "GET /api/config HTTP/1.1" 200 -

All the requests from 10.2.41.41 get 403 errors, while the rest get 200 responses.

From the first server, when I run journalctl | grep target, I get this:

Dec 11 14:45:05 cephrgw01.wl.internal rbd-target-api[1498]: Gateway request received, with validity checks disabled
Dec 11 14:45:05 cephrgw01.wl.internal rbd-target-api[1498]: 10.2.41.41 - - [11/Dec/2017 14:45:05] "GET /api/_ping HTTP/1.1" 403 -
Dec 11 14:45:05 cephrgw01.wl.internal rbd-target-api[1498]: 127.0.0.1 - - [11/Dec/2017 14:45:05] "PUT /api/gateway/cephrgw02 HTTP/1.1" 503 -
Dec 11 14:45:06 cephrgw01.wl.internal rbd-target-api[1498]: 10.2.51.41 - - [11/Dec/2017 14:45:06] "GET /api/_ping HTTP/1.1" 200 -
Dec 11 14:45:16 cephrgw01.wl.internal rbd-target-api[1498]: 10.2.51.41 - - [11/Dec/2017 14:45:16] "GET /api/_ping HTTP/1.1" 200 -
Dec 11 14:45:26 cephrgw01.wl.internal rbd-target-api[1498]: 10.2.51.41 - - [11/Dec/2017 14:45:26] "GET /api/_ping HTTP/1.1" 200 -
Dec 11 14:45:36 cephrgw01.wl.internal rbd-target-api[1498]: 10.2.51.41 - - [11/Dec/2017 14:45:36] "GET /api/_ping HTTP/1.1" 200 -
Dec 11 14:45:46 cephrgw01.wl.internal rbd-target-api[1498]: 10.2.51.41 - - [11/Dec/2017 14:45:46] "GET /api/_ping HTTP/1.1" 200 -
Dec 11 14:45:56 cephrgw01.wl.internal rbd-target-api[1498]: 10.2.51.41 - - [11/Dec/2017 14:45:56] "GET /api/_ping HTTP/1.1" 200 -
Dec 11 14:46:06 cephrgw01.wl.internal rbd-target-api[1498]: 10.2.51.41 - - [11/Dec/2017 14:46:06] "GET /api/_ping HTTP/1.1" 200 -
Dec 11 14:46:16 cephrgw01.wl.internal rbd-target-api[1498]: 10.2.51.41 - - [11/Dec/2017 14:46:16] "GET /api/_ping HTTP/1.1" 200 -
Dec 11 14:46:26 cephrgw01.wl.internal rbd-target-api[1498]: 10.2.51.41 - - [11/Dec/2017 14:46:26] "GET /api/_ping HTTP/1.1" 200 -
Dec 11 14:54:11 cephrgw01.wl.internal rbd-target-api[1498]: 10.2.51.43 - - [11/Dec/2017 14:54:11] "GET /api/_ping HTTP/1.1" 403 -
Dec 11 14:54:19 cephrgw01.wl.internal rbd-target-api[1498]: 10.2.51.43 - - [11/Dec/2017 14:54:19] "GET /api/_ping HTTP/1.1" 403 -
Dec 11 14:54:30 cephrgw01.wl.internal rbd-target-api[1498]: 10.2.51.43 - - [11/Dec/2017 14:54:30] "GET /api/_ping HTTP/1.1" 403 -
Dec 11 14:54:51 cephrgw01.wl.internal rbd-target-api[1498]: 10.2.51.43 - - [11/Dec/2017 14:54:51] "GET /api/_ping HTTP/1.1" 403 -
Dec 11 15:49:42 cephrgw01.wl.internal rbd-target-api[1498]: 10.2.51.43 - - [11/Dec/2017 15:49:42] "GET /api/_ping HTTP/1.1" 403 -
Dec 11 15:49:51 cephrgw01.wl.internal rbd-target-api[1498]: 10.2.51.43 - - [11/Dec/2017 15:49:51] "GET /api/_ping HTTP/1.1" 403 -

Why is the performance of my lio iscsi gateway so poor?

Hi, I have a three-node cluster (kernel 4.16.3, updated via yum) with 19 OSDs and no SSDs. I tested the performance of the LIO gateway with fio and compared it against the tgt gateway. The results:

1. Ceph image tested directly with the rbd ioengine:
   4M: write 398MB/s, randwrite 402MB/s, read 603MB/s, randread 709MB/s
   1M: write 227MB/s, randwrite 213MB/s, read 377MB/s, randread 646MB/s
2. LIO gateway tested with the libaio ioengine:
   4M: write 45MB/s, randwrite 42MB/s, read 128MB/s, randread 95MB/s
   1M: write 45MB/s, randwrite 37MB/s, read 118MB/s, randread 73MB/s
3. tgt gateway tested with the libaio ioengine:
   4M: write 106MB/s, randwrite 117MB/s, read 267MB/s, randread 411MB/s
   1M: write 106MB/s, randwrite 94MB/s, read 285MB/s, randread 545MB/s

I also tested the SUSE iSCSI gateway (https://www.suse.com/documentation/ses-4/book_storage_admin/data/ceph_iscsi_install.html); its write performance is better than tgt and its read performance is a little worse than tgt.
The LIO gateway performs the worst of all. Are there any optimization methods?
Thanks!

rbd-target-api IOError on adding second gateway

I just wiped my configuration (deleted gateway.conf from the pool and rebooted the hosts) after already having the iSCSI Gateway working without any issues.

I followed the same steps as I did before - on my first node (node1) I created a target, added the node itself as a gateway and then tried adding a second node (node2) as gateway.
It however now errors out every time (even after multiple reboots and deleting gateway.conf from the pool) with the following message:

/iscsi-target...sigw/gateways> create node1 192.168.44.205
Adding gateway, sync'ing 0 disk(s) and 0 client(s)
ok
/iscsi-target...sigw/gateways> create node2 192.168.44.206
Adding gateway, sync'ing 0 disk(s) and 0 client(s)
ValueError: No JSON object could be decoded

rbd-target-api on node2 shows the following:

Aug 24 03:39:44 node2.ceph.xxxx.com rbd-target-api[20832]: Started the configuration object watcher
Aug 24 03:39:44 node2.ceph.xxxx.com rbd-target-api[20832]: Checking for config object changes every 1s
Aug 24 03:39:44 node2.ceph.xxxx.com rbd-target-api[20832]:  * Running on http://0.0.0.0:5000/
Aug 24 03:39:57 node2.ceph.xxxx.com rbd-target-api[20832]: Change detected - internal 1 / xattr 2 refreshing
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: 192.168.44.205 - - [24/Aug/2017 03:40:02] "GET /api/sysinfo/ipv4_addresses HTTP/1.1" 200 -
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: 192.168.44.205 - - [24/Aug/2017 03:40:02] "GET /api/sysinfo/checkconf HTTP/1.1" 200 -
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: 192.168.44.205 - - [24/Aug/2017 03:40:02] "GET /api/sysinfo/checkversions HTTP/1.1" 200 -
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: 192.168.44.205 - - [24/Aug/2017 03:40:02] "PUT /api/_gateway/node2 HTTP/1.1" 500 -
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: Traceback (most recent call last):
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: File "/usr/lib/python2.7/site-packages/flask/app.py", line 1836, in __call__
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: return self.wsgi_app(environ, start_response)
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: File "/usr/lib/python2.7/site-packages/flask/app.py", line 1820, in wsgi_app
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: response = self.make_response(self.handle_exception(e))
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: File "/usr/lib/python2.7/site-packages/flask/app.py", line 1403, in handle_exception
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: reraise(exc_type, exc_value, tb)
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: File "/usr/lib/python2.7/site-packages/flask/app.py", line 1817, in wsgi_app
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: response = self.full_dispatch_request()
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: File "/usr/lib/python2.7/site-packages/flask/app.py", line 1477, in full_dispatch_request
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: rv = self.handle_user_exception(e)
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: File "/usr/lib/python2.7/site-packages/flask/app.py", line 1381, in handle_user_exception
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: reraise(exc_type, exc_value, tb)
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: File "/usr/lib/python2.7/site-packages/flask/app.py", line 1475, in full_dispatch_request
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: rv = self.dispatch_request()
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: File "/usr/lib/python2.7/site-packages/flask/app.py", line 1461, in dispatch_request
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: return self.view_functions[rule.endpoint](**req.view_args)
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: File "/usr/bin/rbd-target-api", line 94, in decorated
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: return f(*args, **kwargs)
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: File "/usr/bin/rbd-target-api", line 475, in _gateway
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: gateway.manage(target_mode)
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: File "/usr/lib/python2.7/site-packages/ceph_iscsi_config/gateway.py", line 420, in manage
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: self.load_config()
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: File "/usr/lib/python2.7/site-packages/ceph_iscsi_config/gateway.py", line 256, in load_config
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: lio_root = root.RTSRoot()
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: File "/usr/lib/python2.7/site-packages/rtslib_fb/root.py", line 85, in __init__
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: self._set_dbroot_if_needed()
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: File "/usr/lib/python2.7/site-packages/rtslib_fb/root.py", line 167, in _set_dbroot_if_needed
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: fwrite(dbroot_path, self._preferred_dbroot+"\n")
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: File "/usr/lib/python2.7/site-packages/rtslib_fb/utils.py", line 79, in fwrite
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: file_fd.write(str(string))
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: IOError: [Errno 22] Invalid argument
Aug 24 03:40:02 node2.ceph.xxxx.com rbd-target-api[20832]: Change detected - internal 2 / xattr 3 refreshing

gwcli error - ZeroDivisionError: float division by zero

Hi,
I am scratching my head to figure out what could be the issue:

[root@iscsi-gw-test1 RPMS]# gwcli -d
Adding ceph cluster 'ceph' to the UI
Refreshing disk information from the config object
Refreshing gateway & client information
Gathering pool stats for 'ceph'
Traceback (most recent call last):
  File "/usr/bin/gwcli", line 185, in <module>
    main()
  File "/usr/bin/gwcli", line 99, in main
    root_node.refresh()
  File "/usr/lib/python2.7/site-packages/gwcli/gateway.py", line 88, in refresh
    self.ceph.refresh()
  File "/usr/lib/python2.7/site-packages/gwcli/ceph.py", line 125, in refresh
    cluster.pools.refresh()
  File "/usr/lib/python2.7/site-packages/gwcli/ceph.py", line 277, in refresh
    self.pool_lookup[pool_name].update(pool_data)
  File "/usr/lib/python2.7/site-packages/gwcli/ceph.py", line 306, in update
    self._calc_overcommit()
  File "/usr/lib/python2.7/site-packages/gwcli/ceph.py", line 299, in _calc_overcommit
    (potential_demand / float(self.max_bytes)) * 100)
ZeroDivisionError: float division by zero

This is a fresh iscsi-gw install; the "rbd" pool was cleaned of all objects beforehand.
The gateway.conf object is created in pool "rbd", but I just can't get gwcli working...

Deletion of disk returns OK but is not removed

After fixing the previous issue this new one occurred: the disk and client are deleted, but reappear after reopening gwcli.

/iscsi-target...stiscsi/hosts> ls
o- hosts .......................................................................... [Hosts: 1]
  o- iqn.1998-01.com.vmware:internal-appliances-hv-01-4b798644 .... [Auth: CHAP, Disks: 0(0b)]
/iscsi-target...stiscsi/hosts> delete iqn.1998-01.com.vmware:internal-appliances-hv-01-4b798644
ok
/iscsi-target...stiscsi/hosts> ls
o- hosts .......................................................................... [Hosts: 0]
/iscsi-target...stiscsi/hosts> cd /disks
/disks> delete rbd.testimage123
rbd rbd/testimage123 is not defined to the configuration
/disks> delete rbd.imagetest123
ok
/disks> ls /
o- / ................................................................................... [...]
  o- clusters .................................................................. [Clusters: 1]
  | o- ceph .................................................................... [HEALTH_WARN]
  |   o- pools .................................................................... [Pools: 1]
  |   | o- rbd ....................................... [(x3), Commit: 0b/10.2T (0%), Used: 4K]
  |   o- topology .......................................................... [OSDs: 6,MONs: 3]
  o- disks .................................................................... [0b, Disks: 0]
  o- iscsi-target ............................................................... [Targets: 1]
    o- iqn.2018-01.com.test.iscsi-gw:testiscsi ................................. [Gateways: 3]
      o- gateways ...................................................... [Up: 3/3, Portals: 3]
      | o- iscsigw01 ....................................................... [10.0.36.21 (UP)]
      | o- iscsitest ............................................................. [10.0.36.21 (UP)]
      | o- iscsitest2 ......................................................... [10.0.36.22 (UP)]
      o- host-groups ............................................................ [Groups : 0]
      o- hosts .................................................................... [Hosts: 0]
/disks> cd /
/> cd iscsi-target/
/iscsi-target> clearconfig confirm=true
Clients(1) and Disks(1) must be removed first before clearing the gateway configuration
/iscsi-target> exit
[zxcs@testrbd root]$ sudo gwcli
/iscsi-target> ls /
o- / ................................................................................... [...]
  o- clusters .................................................................. [Clusters: 1]
  | o- ceph .................................................................... [HEALTH_WARN]
  |   o- pools .................................................................... [Pools: 1]
  |   | o- rbd ....................................... [(x3), Commit: 5G/10.2T (0%), Used: 4K]
  |   o- topology .......................................................... [OSDs: 6,MONs: 3]
  o- disks .................................................................... [5G, Disks: 1]
  | o- rbd.imagetest123 .................................................. [imagetest123 (5G)]
  o- iscsi-target ............................................................... [Targets: 1]
    o- iqn.2018-01.com.test.iscsi-gw:testiscsi ................................. [Gateways: 3]
      o- gateways ...................................................... [Up: 3/3, Portals: 3]
      | o- iscsigw01 ....................................................... [10.0.36.21 (UP)]
      | o- iscsitest ............................................................. [10.0.36.21 (UP)]
      | o- iscsitest2 ......................................................... [10.0.36.22 (UP)]
      o- host-groups ............................................................ [Groups : 0]
      o- hosts .................................................................... [Hosts: 1]
        o- iqn.1998-01.com.vmware:internal-appliances-hv-01-4b798644  [Auth: CHAP, Disks: 0(0b)]
/iscsi-target>

add gateway issue

Test steps:

  1. Create two gateways.
  2. Create one host and one disk mapped to the host.
  3. Create a third gateway.
  4. Discover from vSphere: three targets are found on the static discovery page, but only two paths show in the adapter details.
  5. Repeat step 2 with another host and disk; vSphere will then display three paths.

Does it work as designed?

It is not possible to create a target

Hello, I cannot create a target; Python throws an exception and gwcli crashes out.

[root@ceph1 ~]# gwcli 
/iscsi-target> create iqn.1998-01.com.mycompany:myserver
IOError: [Errno 22] Invalid argument
[root@ceph1 ~]# gwcli 
/iscsi-target> /iscsi-target create iqn.2003-01.com.redhat.iscsi-gw:hosting
IOError: [Errno 22] Invalid argument

Traceback:

/> /iscsi-target create iqn.2003-01.com.redhat.iscsi-gw:hosting
CMD: /iscsi create iqn.2003-01.com.redhat.iscsi-gw:hosting
Traceback (most recent call last):
  File "/usr/bin/gwcli", line 187, in <module>
    main()
  File "/usr/bin/gwcli", line 119, in main
    shell.run_interactive()
  File "/usr/lib/python2.7/site-packages/configshell_fb/shell.py", line 905, in run_interactive
    self._cli_loop()
  File "/usr/lib/python2.7/site-packages/configshell_fb/shell.py", line 734, in _cli_loop
    self.run_cmdline(cmdline)
  File "/usr/lib/python2.7/site-packages/configshell_fb/shell.py", line 848, in run_cmdline
    self._execute_command(path, command, pparams, kparams)
  File "/usr/lib/python2.7/site-packages/configshell_fb/shell.py", line 823, in _execute_command
    result = target.execute_command(command, pparams, kparams)
  File "/usr/lib/python2.7/site-packages/configshell_fb/node.py", line 1406, in execute_command
    return method(*pparams, **kparams)
  File "/usr/lib/python2.7/site-packages/gwcli/gateway.py", line 222, in ui_command_create
    local_lio = root.RTSRoot()
  File "/usr/lib/python2.7/site-packages/rtslib_fb/root.py", line 85, in __init__
    self._set_dbroot_if_needed()
  File "/usr/lib/python2.7/site-packages/rtslib_fb/root.py", line 167, in _set_dbroot_if_needed
    fwrite(dbroot_path, self._preferred_dbroot+"\n")
  File "/usr/lib/python2.7/site-packages/rtslib_fb/utils.py", line 79, in fwrite
    file_fd.write(str(string))
IOError: [Errno 22] Invalid argument

Package versions:
targetcli-2.1.fb47-0.1.20170815.git5bf3517.el7.noarch
python-rtslib-2.1.fb64-2.el7.noarch
tcmu-runner-v1.3.0-0.4.23.g68fee8c.x86_64
ceph-iscsi-config-2.3-33.gae18773.el7.noarch
ceph-iscsi-cli-2.5-75.g8181a92.el7.noarch

kernel: Linux ceph1 4.14.13-1.el7.elrepo.x86_64

In this case the target is created with kernel-3.10.0-693.11.6.el7.x86_64, but that kernel does not pass the version check, so disks cannot be added.

If you add a target on the old kernel and boot into the new one, then rbd-target-gw and rbd-target-cli crash.

rbd-target-api daemon raises KeyError: 'TERM'

After installing ceph-iscsi-cli and running systemctl daemon-reload, systemctl enable rbd-target-api and systemctl start rbd-target-api, I checked the status of the rbd-target-api daemon and got the following error:

11月 08 17:08:56 localhost.localdomain rbd-target-api[4770]:   File "/usr/lib/python2.7/site-packages/configshell_fb-1.1.23-py2.7.egg/con
11月 08 17:08:56 localhost.localdomain rbd-target-api[4770]:     from .shell import ConfigShell
11月 08 17:08:56 localhost.localdomain rbd-target-api[4770]:   File "/usr/lib/python2.7/site-packages/configshell_fb-1.1.23-py2.7.egg/con
11月 08 17:08:56 localhost.localdomain rbd-target-api[4770]:     oldTerm = os.environ['TERM']
11月 08 17:08:56 localhost.localdomain rbd-target-api[4770]:   File "/usr/lib64/python2.7/UserDict.py", line 40, in __getitem__
11月 08 17:08:56 localhost.localdomain rbd-target-api[4770]:     raise KeyError(key)
11月 08 17:08:56 localhost.localdomain rbd-target-api[4770]: KeyError: 'TERM'
11月 08 17:08:56 localhost.localdomain systemd[1]: rbd-target-api.service: Main process exited, code=exited, status=1/FAILURE
11月 08 17:08:56 localhost.localdomain systemd[1]: rbd-target-api.service: Unit entered failed state.
11月 08 17:08:56 localhost.localdomain systemd[1]: rbd-target-api.service: Failed with result 'exit-code'.

What is this? And how can I correct it?
Thanks!

health checks cause error messages in gateway syslogs

Current health checks open a socket on the target gateway's iSCSI port to determine whether the port is up/open. However, this is interpreted as a login sequence, which obviously fails, generating errors in the syslog.

Can't open gwcli after a pool is deleted

Hello,

I found that gwcli fails to open after I have removed a pool.

[user@server ~]# gwcli
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner
self.run()
File "/usr/lib64/python2.7/threading.py", line 765, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/python2.7/site-packages/gwcli/storage.py", line 74, in _get_disk_meta
with cluster_ioctx.open_ioctx(pool) as ioctx:
File "rados.pyx", line 498, in rados.requires.wrapper.validate_func (/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.5/rpm/el7/BUILD/ceph-12.2.5/build/src/pybind/rados/pyrex/rados.c:4651)
File "rados.pyx", line 1193, in rados.Rados.open_ioctx (/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.5/rpm/el7/BUILD/ceph-12.2.5/build/src/pybind/rados/pyrex/rados.c:12602)
ObjectNotFound: [errno 2] error opening pool 'THEPOOLNAME'

rbd-target-api disk create/update failed when create image

Creating an image failed after creating two gateways via gwcli!

Environment:
two gateways: node0 (172.16.121.130), node1 (172.16.121.131)
both kernels: 4.13.0-0.rc6.el7.elrepo.x86_64

both OS: CentOS 7.3.1611
Installed from github:
rtslib-fb 2.1.64
ceph-iscsi-cli (gwcli) 2.5
ceph-iscsi-config 2.3
tcmu-runner

Services:
node1: started rbd-target-api, rbd-target-gw, tcmu-runner; stopped target.service
node2: started rbd-target-api, rbd-target-gw, tcmu-runner, ceph cluster; stopped target.service

Steps:

  1. check ceph pool:
    ceph osd lspools
    1 rbd

  2. create two gateways
    start gwcli on node1
    /iscsi-target...i-gw:ceph-igw> ls
    o- iqn.2003-01.com.redhat.iscsi-gw:ceph-igw ............ [Gateways: 2]
    o- gateways .................................. [Up: 2/2, Portals: 2]
    | o- node0 ................................... [172.16.121.130 (UP)]
    | o- node1 ................................... [172.16.121.131 (UP)]
    o- host-groups ........................................ [Groups : 0]
    o- hosts ................................................ [Hosts: 0]

  3. create image on node1
    start gwcli on node1
    /iscsi-target...i-gw:ceph-igw> /disks/ create pool=rbd image=disk_1 size=128m
    Failed : disk create/update failed on node1. LUN allocation failure

// rbd has an image: disk_1
rbd info rbd/disk_1
rbd image 'disk_1':
size 128 MB in 32 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.103351ead36b
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
flags:
create_timestamp: Mon Aug 28 12:02:06 2017

/var/log/rbd-target-api.log on node1:
2017-08-28 12:10:22,212 DEBUG [rbd-target-api:547:disk()] - this host is node1
2017-08-28 12:10:22,212 DEBUG [rbd-target-api:563:disk()] - All gateways: [u'node1', u'node0']
2017-08-28 12:10:22,213 DEBUG [rbd-target-api:575:disk()] - Other gateways: [u'node0']
2017-08-28 12:10:22,220 INFO [_internal.py:87:_log()] - 127.0.0.1 - - [28/Aug/2017 12:10:22] "GET /api/config HTTP/1.1" 200 -
2017-08-28 12:10:22,232 DEBUG [rbd-target-api:1254:call_api()] - gateway update order is 127.0.0.1,node0
2017-08-28 12:10:22,233 DEBUG [rbd-target-api:1257:call_api()] - processing GW '127.0.0.1'
2017-08-28 12:10:22,243 DEBUG [common.py:130:_get_ceph_config()] - (_get_rbd_config) Opening connection to rbd pool
2017-08-28 12:10:22,245 DEBUG [common.py:138:_get_ceph_config()] - (_get_rbd_config) connection opened
2017-08-28 12:10:22,248 DEBUG [common.py:106:_read_config_object()] - _read_config_object reading the config object
2017-08-28 12:10:22,250 DEBUG [common.py:154:_get_ceph_config()] - (_get_rbd_config) config object contains '{
"clients": {},
"created": "2017/08/28 03:26:42",
"disks": {},
"epoch": 6,
"gateways": {
"created": "2017/08/28 03:31:58",
"ip_list": [
"172.16.121.131",
"172.16.121.130"
],
"iqn": "iqn.2003-01.com.redhat.iscsi-gw:ceph-igw",
"node0": {
"active_luns": 0,
"created": "2017/08/28 04:00:44",
"gateway_ip_list": [
"172.16.121.131",
"172.16.121.130"
],
"inactive_portal_ips": [
"172.16.121.131"
],
"iqn": "iqn.2003-01.com.redhat.iscsi-gw:ceph-igw",
"portal_ip_address": "172.16.121.130",
"tpgs": 2,
"updated": "2017/08/28 04:00:44"
},
"node1": {
"active_luns": 0,
"created": "2017/08/28 03:33:26",
"gateway_ip_list": [
"172.16.121.131",
"172.16.121.130"
],
"inactive_portal_ips": [
"172.16.121.130"
],
"iqn": "iqn.2003-01.com.redhat.iscsi-gw:ceph-igw",
"portal_ip_address": "172.16.121.131",
"tpgs": 2,
"updated": "2017/08/28 04:00:43"
}
},
"groups": {},
"updated": "2017/08/28 04:00:45",
"version": 3
}'
2017-08-28 12:10:22,261 DEBUG [lun.py:320:allocate()] - LUN.allocate starting, listing rbd devices
2017-08-28 12:10:22,280 DEBUG [lun.py:323:allocate()] - rados pool 'rbd' contains the following - [u'disk_1']
2017-08-28 12:10:22,280 DEBUG [lun.py:328:allocate()] - Hostname Check - this host is node1, target host for allocations is node1
2017-08-28 12:10:22,328 DEBUG [common.py:256:add_item()] - (Config.add_item) config updated to {u'updated': u'2017/08/28 04:00:45', u'disks': {'rbd.disk_1': {'created': '2017/08/28 04:10:22'}}, u'created': u'2017/08/28 03:26:42', u'clients': {}, u'epoch': 6, u'version': 3, u'gateways': {u'node1': {u'gateway_ip_list': [u'172.16.121.131', u'172.16.121.130'], u'active_luns': 0, u'created': u'2017/08/28 03:33:26', u'updated': u'2017/08/28 04:00:43', u'iqn': u'iqn.2003-01.com.redhat.iscsi-gw:ceph-igw', u'inactive_portal_ips': [u'172.16.121.130'], u'portal_ip_address': u'172.16.121.131', u'tpgs': 2}, u'node0': {u'gateway_ip_list': [u'172.16.121.131', u'172.16.121.130'], u'active_luns': 0, u'created': u'2017/08/28 04:00:44', u'updated': u'2017/08/28 04:00:44', u'iqn': u'iqn.2003-01.com.redhat.iscsi-gw:ceph-igw', u'inactive_portal_ips': [u'172.16.121.131'], u'portal_ip_address': u'172.16.121.130', u'tpgs': 2}, u'iqn': u'iqn.2003-01.com.redhat.iscsi-gw:ceph-igw', u'ip_list': [u'172.16.121.131', u'172.16.121.130'], u'created': u'2017/08/28 03:31:58'}, u'groups': {}}
2017-08-28 12:10:22,328 DEBUG [lun.py:384:allocate()] - Check the rbd image size matches the request
2017-08-28 12:10:22,357 DEBUG [lun.py:405:allocate()] - rbd image rbd.disk_1 size matches the configuration file request
2017-08-28 12:10:22,357 DEBUG [lun.py:407:allocate()] - Begin processing LIO mapping
2017-08-28 12:10:22,358 INFO [lun.py:598:add_dev_to_lio()] - (LUN.add_dev_to_lio) Adding image 'rbd.disk_1' to LIO

2017-08-28 12:10:22,392 ERROR [lun.py:631:add_dev_to_lio()] - Could not set LIO device attribute cmd_time_out/qfull_time_out for device: rbd.disk_1. Kernel not supported. - error(nike Cannot find attribute: qfull_time_out[/sys/kernel/config/target/core/user_0/rbd.disk_1/attrib/qfull_time_out])

2017-08-28 12:10:22,395 ERROR [rbd-target-api:677:_disk()] - LUN alloc problem - Could not set LIO device attribute cmd_time_out/qfull_time_out for device: rbd.disk_1. Kernel not supported. - error(nike Cannot find attribute: qfull_time_out[/sys/kernel/config/target/core/user_0/rbd.disk_1/attrib/qfull_time_out])
2017-08-28 12:10:22,395 INFO [_internal.py:87:_log()] - 127.0.0.1 - - [28/Aug/2017 12:10:22] "PUT /api/_disk/rbd.disk_1 HTTP/1.1" 500 -
2017-08-28 12:10:22,397 ERROR [rbd-target-api:1278:call_api()] - _disk change on 127.0.0.1 failed with 500
2017-08-28 12:10:22,397 DEBUG [rbd-target-api:1295:call_api()] - failed on node1. LUN allocation failure
2017-08-28 12:10:22,397 INFO [_internal.py:87:_log()] - 127.0.0.1 - - [28/Aug/2017 12:10:22] "PUT /api/disk/rbd.disk_1 HTTP/1.1" 500 -

Exception if "api_secure" mismatch between gateways

This likely also occurs if the public/private keys differ between hosts.

Traceback (most recent call last):
  File "/usr/bin/gwcli", line 187, in <module>
    main()
  File "/usr/bin/gwcli", line 119, in main
    shell.run_interactive()
  File "/usr/lib/python2.7/site-packages/configshell_fb/shell.py", line 905, in run_interactive
    self._cli_loop()
  File "/usr/lib/python2.7/site-packages/configshell_fb/shell.py", line 734, in _cli_loop
    self.run_cmdline(cmdline)
  File "/usr/lib/python2.7/site-packages/configshell_fb/shell.py", line 848, in run_cmdline
    self._execute_command(path, command, pparams, kparams)
  File "/usr/lib/python2.7/site-packages/configshell_fb/shell.py", line 823, in _execute_command
    result = target.execute_command(command, pparams, kparams)
  File "/usr/lib/python2.7/site-packages/configshell_fb/node.py", line 1406, in execute_command
    return method(*pparams, **kparams)
  File "/usr/lib/python2.7/site-packages/gwcli/gateway.py", line 519, in ui_command_create
    msg = api.response.json()['message']
  File "/usr/lib/python2.7/site-packages/requests/models.py", line 802, in json
    return json.loads(self.text, **kwargs)
  File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
    return _default_decoder.decode(s)
  File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib64/python2.7/json/decoder.py", line 384, in raw_decode
    raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded

Error on removing hosts/disks

on command:

curl --insecure --user admin:admin -d "ip_address=10.0.36.21" -X DELETE http://127.0.0.1:5000/api/client/iqn.1998-01.com.vmware:internal-appliances-hv-01-4b798644 > test.html

Array
(
    [language] => pytb
    [code] => Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/flask/app.py", line 1836, in __call__
    return self.wsgi_app(environ, start_response)
  File "/usr/lib/python2.7/site-packages/flask/app.py", line 1820, in wsgi_app
    response = self.make_response(self.handle_exception(e))
  File "/usr/lib/python2.7/site-packages/flask/app.py", line 1403, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/lib/python2.7/site-packages/flask/app.py", line 1817, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/lib/python2.7/site-packages/flask/app.py", line 1477, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/lib/python2.7/site-packages/flask/app.py", line 1381, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/lib/python2.7/site-packages/flask/app.py", line 1475, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/lib/python2.7/site-packages/flask/app.py", line 1461, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/usr/bin/rbd-target-api", line 94, in decorated
    return f(*args, **kwargs)
  File "/usr/bin/rbd-target-api", line 961, in client
    gateways.remove(local_gw)
ValueError: list.remove(x): x not in list
)
failed 2

I had problems with the server being messed up after multiple compiles and tests, so now I want to remove all the old gateways that no longer exist. Before I can do that I have to remove those disks and hosts, but then this error appears.

gwcli ERROR: REST API failure, code : 500

from shell:

[root@centos2 ~]# gwcli
Unable to access the configuration object : REST API failure, code : 500
GatewayError:
[root@centos2 ~]#

from log file:
...
2017-09-30 16:14:00,791 DEBUG [ceph.py:51:init()] Adding ceph cluster 'ceph' to the UI
2017-09-30 16:14:01,031 DEBUG [ceph.py:289:populate()] Fetching ceph osd information
2017-09-30 16:14:01,045 DEBUG [ceph.py:199:update_state()] Querying ceph for state information
2017-09-30 16:14:01,068 CRITICAL [gateway.py:85:refresh()] Unable to access the configuration object : REST API failure, code : 500

I can't start gwcli.
Could you help with this problem?
Thanks, best regards

using split networks causes gateway addition to fail

If you're adding a gateway whose name resolves to one subnet but whose iSCSI IP is on a different subnet, the gateway create may be denied with a 403 error. The workaround (sketched below) is to add the gateway IPs to trusted_ip_list in iscsi-gateway.cfg, but that setting is intended to extend the list of clients authorised to connect to the API, rather than to become a list of gateway IPs.
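
A sketch of the workaround, assuming the config file lives at /etc/ceph/iscsi-gateway.cfg
and that rbd-target-api is restarted on each gateway afterwards:

# add every gateway IP (both subnets) to trusted_ip_list, e.g. for the split-network report above:
#   trusted_ip_list = 10.2.51.41,10.2.51.43,10.2.41.41,10.2.41.43
grep trusted_ip_list /etc/ceph/iscsi-gateway.cfg
systemctl restart rbd-target-api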

iSCSI login failed due to authorization failure

Logging in to iqn.2003-01.com.redhat.iscsi-gw:ceph-igw failed.
I don't understand how gwcli sets CHAP under the client name.

  1. Does iscsiadm need to specify the name of the client when it logs in to the target?
  2. Does the client name have to follow some rule, such as target-name:client-name?

Login result:
iscsiadm -m node -T iqn.2003-01.com.redhat.iscsi-gw:ceph-igw -l
Logging in to [iface: default, target: iqn.2003-01.com.redhat.iscsi-gw:ceph-igw, portal: 172.16.121.131,3260] (multiple)
Logging in to [iface: default, target: iqn.2003-01.com.redhat.iscsi-gw:ceph-igw, portal: 172.16.121.130,3260] (multiple)
iscsiadm: Could not login to [iface: default, target: iqn.2003-01.com.redhat.iscsi-gw:ceph-igw, portal: 172.16.121.131,3260].
iscsiadm: initiator reported error (24 - iSCSI login failed due to authorization failure)
iscsiadm: Could not login to [iface: default, target: iqn.2003-01.com.redhat.iscsi-gw:ceph-igw, portal: 172.16.121.130,3260].
iscsiadm: initiator reported error (24 - iSCSI login failed due to authorization failure)
iscsiadm: Could not log into all portals

Prepare for login:
vi /etc/iscsi/iscsid.conf
node.session.auth.authmethod = CHAP
node.session.auth.username = consumer
node.session.auth.password = 012345678912

restart iscsid
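
Here is what I am checking on the initiator side; I assume the initiator's IQN has to
match one of the client entries defined under hosts in gwcli (iqn.1994-05.com.redhat:xx
in my case), but I am not sure:

cat /etc/iscsi/initiatorname.iscsi
# expecting InitiatorName=iqn.1994-05.com.redhat:xx
iscsiadm -m node -T iqn.2003-01.com.redhat.iscsi-gw:ceph-igw -l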

/> ls
o- / .............................................................. [...]
o- clusters ............................................. [Clusters: 1]
| o- c1 ................................................. [HEALTH_WARN]
| o- pools ............................................... [Pools: 1]
| | o- rbd ................... [(x3), Commit: 384M/6G (6%), Used: 3K]
| o- topology ..................................... [OSDs: 1,MONs: 1]
o- disks ............................................. [384M, Disks: 3]
| o- rbd.disk_1 ....................................... [disk_1 (128M)]
| o- rbd.disk_2 ....................................... [disk_2 (128M)]
| o- rbd.disk_3 ....................................... [disk_3 (128M)]
o- iscsi-target .......................................... [Targets: 1]
o- iqn.2003-01.com.redhat.iscsi-gw:ceph-igw ........... [Gateways: 2]
o- gateways ................................. [Up: 2/2, Portals: 2]
| o- node0 .................................. [172.16.121.130 (UP)]
| o- node1 .................................. [172.16.121.131 (UP)]
o- host-groups ....................................... [Groups : 0]
o- hosts ............................................... [Hosts: 3]
o- iqn.1994-05.com.redhat:rh7-client . [Auth: None, Disks: 0(0b)]
o- iqn.1994-05.com.redhat:xx ....... [Auth: CHAP, Disks: 1(128M)]
| o- lun 0 ..................... [rbd.disk_1(128M), Owner: node1]
o- iqn.2003-01.com.redhat.iscsi-gw:nike [Auth: CHAP, Disks: 1(128M)]
o- lun 0 ..................... [rbd.disk_2(128M), Owner: node0]

rbd-target-api.log:

  • (_get_rbd_config) config object contains '{
    "clients": {
    "iqn.1994-05.com.redhat:rh7-client": {
    "auth": {
    "chap": ""
    },
    "created": "2017/08/28 06:08:53",
    "group_name": "",
    "luns": {},
    "updated": "2017/08/28 06:08:53"
    },
    "iqn.1994-05.com.redhat:xx": {
    "auth": {
    "chap": "consumer/012345678912"
    },
    "created": "2017/08/29 03:05:00",
    "group_name": "",
    "luns": {
    "rbd.disk_1": {
    "lun_id": 0
    }
    },
    "updated": "2017/08/29 06:20:45"
    },
    "iqn.2003-01.com.redhat.iscsi-gw:nike": {
    "auth": {
    "chap": "consumer/012345678912"
    },
    "created": "2017/08/29 07:07:28",
    "group_name": "",
    "luns": {
    "rbd.disk_2": {
    "lun_id": 0
    }
    },
    "updated": "2017/08/29 07:11:57"
    }
    },
    "created": "2017/08/28 03:26:42",
    "disks": {
    "rbd.disk_1": {
    "created": "2017/08/29 02:41:52",
    "image": "disk_1",
    "owner": "node1",
    "pool": "rbd",
    "pool_id": 1,
    "updated": "2017/08/29 02:41:52",
    "wwn": "95c0d3a9-96cf-4158-9341-88020e2cc8d7"
    },
    "rbd.disk_2": {
    "created": "2017/08/29 03:02:49",
    "image": "disk_2",
    "owner": "node0",
    "pool": "rbd",
    "pool_id": 1,
    "updated": "2017/08/29 03:02:49",
    "wwn": "8383bdfa-1e94-4a94-ae59-c33a0dd7a737"
    },
    "rbd.disk_3": {
    "created": "2017/08/29 03:03:51",
    "image": "disk_3",
    "owner": "node1",
    "pool": "rbd",
    "pool_id": 1,
    "updated": "2017/08/29 03:03:51",
    "wwn": "d80cc287-4b16-408b-932a-b5ede4e5bc96"
    }
    },
    "epoch": 20,
    "gateways": {
    "created": "2017/08/28 03:31:58",
    "ip_list": [
    "172.16.121.131",
    "172.16.121.130"
    ],
    "iqn": "iqn.2003-01.com.redhat.iscsi-gw:ceph-igw",
    "node0": {
    "active_luns": 1,
    "created": "2017/08/28 04:00:44",
    "gateway_ip_list": [
    "172.16.121.131",
    "172.16.121.130"
    ],
    "inactive_portal_ips": [
    "172.16.121.131"
    ],
    "iqn": "iqn.2003-01.com.redhat.iscsi-gw:ceph-igw",
    "portal_ip_address": "172.16.121.130",
    "tpgs": 2,
    "updated": "2017/08/29 03:02:49"
    },
    "node1": {
    "active_luns": 2,
    "created": "2017/08/28 03:33:26",
    "gateway_ip_list": [
    "172.16.121.131",
    "172.16.121.130"
    ],
    "inactive_portal_ips": [
    "172.16.121.130"
    ],
    "iqn": "iqn.2003-01.com.redhat.iscsi-gw:ceph-igw",
    "portal_ip_address": "172.16.121.131",
    "tpgs": 2,
    "updated": "2017/08/29 03:03:51"
    }
    },
    "groups": {},
    "updated": "2017/08/29 07:11:57",
    "version": 3
    }'

"gateway is inaccessible - updates will be disabled"

Hi,
I tried to create a second iSCSI gateway but it failed. When I use gwcli on the second gateway this message shows up. Even after cleaning everything (configs, settings, packages) and going back to square one, still no luck.

Node1

/iscsi-target...-igw/gateways> ls
o- gateways ...................................................................... [Up: 1/1, Portals: 1]
o- cosd1 ...................................................................... [192.168.128.209 (UP)]

Node2

/iscsi-target...-igw/gateways> ls
o- gateways ...................................................................... [Up: 0/1, Portals: 1]
o- cosd1 ................................................................. [192.168.128.209 (UNKNOWN)]

Any hint?

update the Manual Installation Guide

Hi everyone,
I deployed a Ceph cluster and have been using LIO as the iSCSI target tool for a few days. Today I found an issue with the deployment tutorial.
I deployed the Ceph cluster manually, so I did not have a pool named 'rbd', which resulted in an error when I ran 'rbd-target-gw'.

Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/bin/rbd-target-gw", line 33, in exception_handler
    clearconfig()
  File "/usr/bin/rbd-target-gw", line 82, in clearconfig
    gw.drop_target(local_gw)
  File "/usr/lib/python2.7/site-packages/ceph_iscsi_config/lio.py", line 58, in drop_target
    if this_host in self.config.config['gateways']:
KeyError: 'gateways'

Original exception was:
Traceback (most recent call last):
  File "/usr/bin/rbd-target-gw", line 462, in <module>
    halt("Unable to open/read the configuration object")
  File "/usr/bin/rbd-target-gw", line 210, in halt
    clearconfig()
  File "/usr/bin/rbd-target-gw", line 82, in clearconfig
    gw.drop_target(local_gw)
  File "/usr/lib/python2.7/site-packages/ceph_iscsi_config/lio.py", line 58, in drop_target
    if this_host in self.config.config['gateways']:
KeyError: 'gateways'

So I suggest adding a tip to the deployment tutorial reminding users to check that they have an rbd pool named 'rbd' in their cluster (a sketch follows the links below). The Manual Installation Guide URLs that need updating are:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/block_device_guide/using_an_iscsi_gateway_technology_preview#configuring_the_iscsi_target_using_the_command_line_interface
http://docs.ceph.com/docs/master/rbd/iscsi-target-cli/
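
A sketch of the check/fix I have in mind (command forms assumed for a Luminous-era cluster; the PG count is just an example):

ceph osd lspools
ceph osd pool create rbd 64
rbd pool init rbd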

Thanks!!

How do I create multiple targets?

I have 10 gateway nodes, and I want to select three of them to provide service per target, but right now I can only create one target and all 10 nodes provide the service.

OS not supported

I am using:
CentOS Linux release 7.4.1708
Kernel: 4.13.4-1.el7.elrepo.x86_64

When trying to create a GW it says:
Failed : iscsi0 failed package validation checks - OS is unsupported

Why could that be, and what OS do I need?
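
Elsewhere on this page the same validation is bypassed with skipchecks=true on the gateway create; a sketch of that form of the command (the gateway name and IP are placeholders):

/iscsi-target...:<target>/gateways> create iscsi0 <gateway-ip> skipchecks=true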

On the issue of log processing

I found that we use the logging module to process logs, but I didn't find any configuration for the logs themselves, such as an /etc/logrotate.d entry in the spec file.

Adding RBD disk with ValueError: No JSON object could be decoded

After creating the iSCSI gateways successfully, I added an RBD disk using

create pool=rbd image=iscsi-test-16T size=16T

Then, I got the following error

ValueError: No JSON object could be decoded

The rbd-target-api daemon log shows:

[root@ceph-gw-1 ~]# journalctl -u rbd-target-api
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]: (LUN.allocate) created rbd/iscsi-test-16T successfully
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]: (LUN.add_dev_to_lio) Adding image 'rbd.iscsi-test-16T' to LIO
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]: 127.0.0.1 - - [10/Nov/2017 12:00:18] "PUT /api/_disk/rbd.iscsi-test-16T HTTP/1.1" 500 -
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]: Traceback (most recent call last):
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1997, in __call__
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     return self.wsgi_app(environ, start_response)
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1985, in wsgi_app
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     response = self.handle_exception(e)
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1540, in handle_exception
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     reraise(exc_type, exc_value, tb)
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1982, in wsgi_app
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     response = self.full_dispatch_request()
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1614, in full_dispatch_req
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     rv = self.handle_user_exception(e)
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1517, in handle_user_excep
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     reraise(exc_type, exc_value, tb)
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1612, in full_dispatch_req
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     rv = self.dispatch_request()
11月 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1598, in dispatch_request
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     return self.view_functions[rule.endpoint](**req.view_args)
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib/python2.7/site-packages/gwcli-2.5-py2.7.egg/EGG-INFO/scripts/rbd-target-
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib/python2.7/site-packages/gwcli-2.5-py2.7.egg/EGG-INFO/scripts/rbd-target-
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "build/bdist.linux-x86_64/egg/ceph_iscsi_config/lun.py", line 426, in allocate
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     lun = self.add_dev_to_lio()
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "build/bdist.linux-x86_64/egg/ceph_iscsi_config/lun.py", line 612, in add_dev_to_l
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     wwn=in_wwn)
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib/python2.7/site-packages/rtslib_fb/tcm.py", line 815, in __init__
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     self._configure(config, size, wwn, hw_max_sectors)
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib/python2.7/site-packages/rtslib_fb/tcm.py", line 831, in _configure
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     self._enable()
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib/python2.7/site-packages/rtslib_fb/tcm.py", line 172, in _enable
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     fwrite(path, "1\n")
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib/python2.7/site-packages/rtslib_fb/utils.py", line 79, in fwrite
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     file_fd.write(str(string))
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]: IOError: [Errno 2] No such file or directory
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]: _disk change on 127.0.0.1 failed with 500
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]: 127.0.0.1 - - [10/Nov/2017 12:00:18] "PUT /api/disk/rbd.iscsi-test-16T HTTP/1.1" 500 -
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]: Traceback (most recent call last):
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1997, in __call__
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     return self.wsgi_app(environ, start_response)
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1985, in wsgi_app
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     response = self.handle_exception(e)
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1540, in handle_exception
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     reraise(exc_type, exc_value, tb)
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1982, in wsgi_app
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     response = self.full_dispatch_request()
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1614, in full_dispatch_req
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     rv = self.handle_user_exception(e)
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1517, in handle_user_excep
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     reraise(exc_type, exc_value, tb)
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1612, in full_dispatch_req
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     rv = self.dispatch_request()
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/site-packages/flask/app.py", line 1598, in dispatch_request
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     return self.view_functions[rule.endpoint](**req.view_args)
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib/python2.7/site-packages/gwcli-2.5-py2.7.egg/EGG-INFO/scripts/rbd-target-
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib/python2.7/site-packages/gwcli-2.5-py2.7.egg/EGG-INFO/scripts/rbd-target-
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib/python2.7/site-packages/gwcli-2.5-py2.7.egg/EGG-INFO/scripts/rbd-target-
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib/python2.7/site-packages/requests/models.py", line 866, in json
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     return complexjson.loads(self.text, **kwargs)
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/json/__init__.py", line 339, in loads
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     return _default_decoder.decode(s)
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/json/decoder.py", line 364, in decode
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     obj, end = self.raw_decode(s, idx=_w(s, 0).end())
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:   File "/usr/lib64/python2.7/json/decoder.py", line 382, in raw_decode
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]:     raise ValueError("No JSON object could be decoded")
Nov 10 12:00:18 ceph-gw-1 rbd-target-api[779]: ValueError: No JSON object could be decoded

And the log file: /var/log/rbd-target-api.log

[root@ceph-gw-1 log]# tail rbd-target-api.log
2017-11-10 12:00:18,133    DEBUG [lun.py:323:allocate()] - rados pool 'rbd' contains the following - [u'iscsi-test-8T']
2017-11-10 12:00:18,133    DEBUG [lun.py:328:allocate()] - Hostname Check - this host is ceph-gw-1, target host for allocations is ceph-gw-1
2017-11-10 12:00:18,594    DEBUG [common.py:256:add_item()] - (Config.add_item) config updated to {u'updated': u'2017/11/10 01:50:09', u'disks': {'rbd.iscsi-test-16T': {'created': '2017/11/10 04:00:18'}}, u'created': u'2017/11/08 07:36:52', u'clients': {}, u'epoch': 4, u'version': 3, u'gateways': {u'iqn': u'iqn.2017-11.com.ctcloud.iscsi-gw:ceph-igw', u'created': u'2017/11/09 08:56:41', u'ceph-gw-1': {u'gateway_ip_list': [u'192.168.100.248', u'192.168.100.246'], u'active_luns': 0, u'created': u'2017/11/10 00:30:31', u'updated': u'2017/11/10 01:20:25', u'iqn': u'iqn.2017-11.com.ctcloud.iscsi-gw:ceph-igw', u'inactive_portal_ips': [u'192.168.100.246'], u'portal_ip_address': u'192.168.100.248', u'tpgs': 2}, u'ip_list': [u'192.168.100.248', u'192.168.100.246'], u'ceph-gw-2': {u'gateway_ip_list': [u'192.168.100.248', u'192.168.100.246'], u'active_luns': 0, u'created': u'2017/11/10 01:50:09', u'updated': u'2017/11/10 01:50:09', u'iqn': u'iqn.2017-11.com.ctcloud.iscsi-gw:ceph-igw', u'inactive_portal_ips': [u'192.168.100.248'], u'portal_ip_address': u'192.168.100.246', u'tpgs': 2}}, u'groups': {}}
2017-11-10 12:00:18,594     INFO [lun.py:344:allocate()] - (LUN.allocate) created rbd/iscsi-test-16T successfully
2017-11-10 12:00:18,595    DEBUG [lun.py:384:allocate()] - Check the rbd image size matches the request
2017-11-10 12:00:18,595    DEBUG [lun.py:407:allocate()] - Begin processing LIO mapping
2017-11-10 12:00:18,595     INFO [lun.py:598:add_dev_to_lio()] - (LUN.add_dev_to_lio) Adding image 'rbd.iscsi-test-16T' to LIO
2017-11-10 12:00:18,633     INFO [_internal.py:87:_log()] - 127.0.0.1 - - [10/Nov/2017 12:00:18] "PUT /api/_disk/rbd.iscsi-test-16T HTTP/1.1" 500 -
2017-11-10 12:00:18,640    ERROR [rbd-target-api:1266:call_api()] - _disk change on 127.0.0.1 failed with 500
2017-11-10 12:00:18,651     INFO [_internal.py:87:_log()] - 127.0.0.1 - - [10/Nov/2017 12:00:18] "PUT /api/disk/rbd.iscsi-test-16T HTTP/1.1" 500 -

What is causing this, and how can I solve the problem? Thanks!

health state is reported incorrectly on Luminous-backed clusters

Luminous changes the output of the ceph health command, which is not accounted for in the UI. In addition, when the cluster is in a warn or error state, the admin cannot see why without opening another session and running additional ceph commands, which breaks the workflow.
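
For illustration only (this is not the project's code), here is a minimal sketch of how both health formats can be handled by asking the cluster for JSON output rather than scraping the human-readable text; the key names assumed here are 'overall_status'/'summary' for pre-Luminous and 'status'/'checks' for Luminous:

import json
import subprocess

# Ask the cluster for machine-readable health rather than parsing the text output.
raw = subprocess.check_output(['ceph', 'health', '--format', 'json'])
health = json.loads(raw.decode('utf-8'))

if 'status' in health:
    # Luminous and later: overall state plus a 'checks' map explaining any WARN/ERR
    state = health['status']
    reasons = sorted(health.get('checks', {}).keys())
else:
    # pre-Luminous: a flat 'overall_status' plus a 'summary' list of messages
    state = health.get('overall_status', 'UNKNOWN')
    reasons = [item.get('summary', '') for item in health.get('summary', [])]

print('%s: %s' % (state, '; '.join(reasons)))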

create snapshot of rbd image failed with errno 30

When an RBD image is mapped as a LUN, tcmu-runner holds the image's exclusive-lock.
Creating a snapshot of that image then fails with errno 30 (EROFS: read-only file system).
Is there any plan or suggestion for supporting RBD snapshots without unbinding the LUN? We may be running a VM on this LUN, and having the VM stopped or crash in order to take a snapshot is unacceptable.
#70 is related to this issue.
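
To make the failing operation concrete, here is a minimal sketch using the python-rbd bindings; the pool name 'rbd', image name 'disk_1' and snapshot name are placeholders, and create_snap() is the call the reporter sees fail with errno 30 while tcmu-runner holds the exclusive-lock:

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')                  # placeholder pool name
try:
    with rbd.Image(ioctx, 'disk_1') as image:      # placeholder image name
        # While the image is exported as a LUN and tcmu-runner owns the
        # exclusive-lock, this is the call that comes back with errno 30 (EROFS).
        image.create_snap('manual-snap')
except rbd.ReadOnlyImage as err:
    print('snapshot refused while the image is locked: %s' % err)
finally:
    ioctx.close()
    cluster.shutdown()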

gwcli TypeError: 'NoneType' object is not iterable

Hello,
I configured and started the rbd-target-api service, but when I try to run gwcli I get an error. Below is the gwcli -d output:

Adding ceph cluster 'ceph' to the UI
Fetching ceph osd information
Querying ceph for state information
Traceback (most recent call last):
  File "/bin/gwcli", line 5, in <module>
    pkg_resources.run_script('gwcli==2.5', 'gwcli')
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 540, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 1462, in run_script
    exec_(script_code, namespace, namespace)
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 41, in exec_
    exec("""exec code in globs, locs""")
  File "<string>", line 1, in <module>
  File "/usr/lib/python2.7/site-packages/gwcli-2.5-py2.7.egg/EGG-INFO/scripts/gwcli", line 187, in <module>
    
  File "/usr/lib/python2.7/site-packages/gwcli-2.5-py2.7.egg/EGG-INFO/scripts/gwcli", line 93, in main
    
  File "build/bdist.linux-x86_64/egg/gwcli/gateway.py", line 55, in __init__
  File "build/bdist.linux-x86_64/egg/gwcli/ceph.py", line 55, in __init__
  File "build/bdist.linux-x86_64/egg/gwcli/ceph.py", line 141, in __init__
  File "build/bdist.linux-x86_64/egg/gwcli/ceph.py", line 211, in update_state
TypeError: 'NoneType' object is not iterable

hostgroup issue

o- host-groups ......................................................................... [Groups : 1]
| o- linux ..................................................................... [Hosts: 1, Disks: 1]
| | o- iqn.2016-04.com.test.sn2 ................................................................ [host]
| | o- rbd.disk9 ............................................................................. [disk]
o- hosts ................................................................................. [Hosts: 2]
  o- iqn.2016-04.com.test.sn1 ............................................. [Auth: CHAP, Disks: 1(16G)]
  | o- lun 0 ........................................................... [rbd.disk9(16G), Owner: gw2]
  o- iqn.2016-04.com.test.sn2 ............................................. [Auth: None, Disks: 1(16G)]
    o- lun 0 ........................................................... [rbd.disk9(16G), Owner: gw2]

I added two hosts and a disk to the host group 'linux', and then removed one host from the group.
The mapping between that host and the LUN still exists, and I can't add the host back to the group. Is this a bug?

RBD Cache support

Hello

I have just started using this on CentOS 7.5 and was wondering whether RBD cache is supported. I have tried the usual ceph.conf settings but can't see any improvement.

AttributeError: 'RadosPool' object has no attribute 'commit'

I set up gwcli following http://docs.ceph.com/docs/master/rbd/iscsi-target-cli/.
When I executed the "gwcli ls" command at the top-level directory, this error occurred. Is this a bug?

[root@gw1 iscsi]# gwcli ls
'RadosPool' object has no attribute 'commit'

[root@gw1 iscsi]# gwcli ls iscsi-target
o- iscsi-target .......................................................................... [Targets: 1]
  o- iqn.2017-11.test.com:tgt1 ....................................................... [Gateways: 1]
    o- gateways ................................................................. [Up: 1/1, Portals: 1]
    | o- gw1 ....................................................................... [10.16.0.198 (UP)]
    o- host-groups ....................................................................... [Groups : 0]
    o- hosts ............................................................................... [Hosts: 0]

clearconfig failed

[root@node3 ~]# gwcli
/iscsi-target> ls
o- iscsi-target .......................................................................... [Targets: 1]
  o- iqn.2017-11.com.redhat.aaaaa ....................................................... [Gateways: 0]
    o- gateways ................................................................. [Up: 0/0, Portals: 0]
    o- host-groups ....................................................................... [Groups : 0]
    o- hosts ............................................................................... [Hosts: 0]
/iscsi-target> clearconfig confirm=true
ValueError: list.remove(x): x not in list

and there is no error message in gwcli.log, rbd-target-api.log, or rbd-target-gw.log.

rbd-target-gw failed to start

Traceback (most recent call last):
  File "/usr/bin/rbd-target-gw", line 468, in <module>
    main()
  File "/usr/bin/rbd-target-gw", line 419, in main
    apply_config()
  File "/usr/bin/rbd-target-gw", line 388, in apply_config
    gateway = define_gateway()
  File "/usr/bin/rbd-target-gw", line 259, in define_gateway
    gateway.manage('target')
  File "/usr/lib/python2.7/site-packages/ceph_iscsi_config/gateway.py", line 426, in manage
    self.create_target()
  File "/usr/lib/python2.7/site-packages/ceph_iscsi_config/gateway.py", line 230, in create_target
    for ip in self.gateway_ip_list:
AttributeError: 'GWTarget' object has no attribute 'gateway_ip_list'

I debugged the code and found an inconsistency between the gateways.
On the other two gateways:
1 gateway is inaccessible - updates will be disabled
/iscsi-target...aaaa/gateways> ls
o- gateways ..................................................................... [Up: 2/3, Portals: 3]
  o- node1 ...................................................................... [192.168.15.201 (UP)]
  o- node2 ................................................................. [192.168.15.202 (UNKNOWN)] <-----node2 is here
  o- node3 ...................................................................... [192.168.15.203 (UP)]

and on node2, I started the gateway under the debugger:

> /usr/lib/python2.7/site-packages/ceph_iscsi_config/gateway.py(49)__init__()
-> matching_ip = set(gateway_ip_list).intersection(ipv4_addresses())
(Pdb) l
 44         import pdb
 45         pdb.set_trace()
 46         # if the ip list provided doesn't match any ip of this host, abort
 47         # the assumption here is that we'll only have one matching ip in
 48         # the list!
 49  ->     matching_ip = set(gateway_ip_list).intersection(ipv4_addresses())
 50         if len(list(matching_ip)) == 0:
 51             self.error = True
 52             self.error_msg = ("gateway IP addresses provided do not match"
 53                               " any ip on this host")
 54             return
(Pdb) p set(gateway_ip_list).intersection(ipv4_addresses())
set([])
(Pdb) p gateway_ip_list
[u'192.168.15.201', u'192.168.15.203'] <------node2 is not in the list
(Pdb) p ipv4_addresses()
['192.168.15.202']

I checked the gateway.conf object in the rbd pool, and node2 is not in the gateway_ip_list...
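
For reference, the stored configuration can be read straight out of the cluster with the python-rados bindings; a minimal sketch, assuming the ceph-iscsi-cli defaults of pool 'rbd' and object name 'gateway.conf':

import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')                      # pool holding the config object
try:
    raw = ioctx.read('gateway.conf', 1024 * 1024)      # the JSON config is small; 1 MiB is plenty
    config = json.loads(raw.decode('utf-8'))
    # the ip_list under 'gateways' is what each gateway checks against its own IPs
    print(config.get('gateways', {}).get('ip_list'))
finally:
    ioctx.close()
    cluster.shutdown()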

How can I get my environment back to normal? Should we add an API to delete unavailable gateways?

vmware multipath restore issue

I'm testing this scenario:
prerequisite:
1. create a target and two gateways
2. create a host and a disk, and map the disk to the host.
test steps:

  1. discover the target and log in, and check that the target has one device and two paths.
  2. stop one path (stop rbd-target-api/gw and tcmu-runner), and refresh the iSCSI adapter in the vSphere client.
  3. when the path turns to failed, start the services again (tcmu-runner, rbd-target-gw/api) and check that the gateway is OK using gwcli.
  4. wait for the path to come back to normal.

The path does not come back to normal even if I refresh or rescan, unless I remove the target and discover it again.

Issue creating gateway on patched Ceph kernel

I'm trying to create a single gateway on a Linux machine running the patched Ceph kernel. I am under the impression that it shouldn't be necessary to set skipchecks=true, and even if so, it should produce a cleaner error message.

/iscsi-target...-igw/gateways> create ceph-disk-1 10.0.142.101
Adding gateway, sync'ing 0 disk(s) and 0 client(s)
Failed : Malformed REST API response

The logs of rbd-target-api show this stack trace:

Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: 127.0.0.1 - - [15/Mar/2018 12:52:53] "GET /api/config HTTP/1.1" 200 -
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: gateway validation needed for ceph-disk-1
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: 10.0.142.101 - - [15/Mar/2018 12:52:53] "GET /api/sysinfo/ipv4_addresses HTTP/1.1" 200 -
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: 10.0.142.101 - - [15/Mar/2018 12:52:53] "GET /api/sysinfo/checkconf HTTP/1.1" 200 -
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: 10.0.142.101 - - [15/Mar/2018 12:52:53] "GET /api/sysinfo/checkversions HTTP/1.1" 500 -
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: Traceback (most recent call last):
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: File "/usr/lib/python2.7/site-packages/flask/app.py", line 1836, in __call__
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: return self.wsgi_app(environ, start_response)
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: File "/usr/lib/python2.7/site-packages/flask/app.py", line 1820, in wsgi_app
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: response = self.make_response(self.handle_exception(e))
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: File "/usr/lib/python2.7/site-packages/flask/app.py", line 1403, in handle_exception
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: reraise(exc_type, exc_value, tb)
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: File "/usr/lib/python2.7/site-packages/flask/app.py", line 1817, in wsgi_app
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: response = self.full_dispatch_request()
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: File "/usr/lib/python2.7/site-packages/flask/app.py", line 1477, in full_dispatch_request
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: rv = self.handle_user_exception(e)
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: File "/usr/lib/python2.7/site-packages/flask/app.py", line 1381, in handle_user_exception
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: reraise(exc_type, exc_value, tb)
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: File "/usr/lib/python2.7/site-packages/flask/app.py", line 1475, in full_dispatch_request
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: rv = self.dispatch_request()
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: File "/usr/lib/python2.7/site-packages/flask/app.py", line 1461, in dispatch_request
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: return self.view_functions[rule.endpoint](**req.view_args)
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: File "/usr/bin/rbd-target-api", line 59, in decorated
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: return f(*args, **kwargs)
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: File "/usr/bin/rbd-target-api", line 167, in get_sys_info
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: config_errors = pre_reqs_errors()
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: File "/usr/bin/rbd-target-api", line 1499, in pre_reqs_errors
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: this_ver, this_rel = this_kernel.split('-')
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: ValueError: too many values to unpack
Mar 15 12:52:53 ceph-disk-1.storage rbd-target-api[13444]: 127.0.0.1 - - [15/Mar/2018 12:52:53] "PUT /api/gateway/ceph-disk-1 HTTP/1.1" 400 -

I suspect this is because the kernel version is this:

[root@ceph-disk-1 ~]# uname -a
Linux ceph-disk-1.storage 4.15.0-ceph-g1c778f43da52 #1 SMP Wed Feb 28 13:56:41 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

The call this_kernel.split('-'), where this_kernel is 4.15.0-ceph-g1c778f43da52, yields a three-element list, so the two-name unpack fails. The call should be this_kernel.split('-', 1), so that the result is ['4.15.0', 'ceph-g1c778f43da52'].
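
A small standalone illustration of the difference:

this_kernel = '4.15.0-ceph-g1c778f43da52'

# current code: the extra hyphen in the patched kernel's release string gives three
# fields, so unpacking into two names raises "ValueError: too many values to unpack"
print(this_kernel.split('-'))        # ['4.15.0', 'ceph', 'g1c778f43da52']

# proposed fix: split only on the first '-'
this_ver, this_rel = this_kernel.split('-', 1)
print(this_ver, this_rel)            # 4.15.0 ceph-g1c778f43da52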

resize issue

I used gwcli to create rbd.disk1 at 10GB, and then wanted to resize it to 101GB, so I resized it as below:
/disks resize rbd.disk1 101G

I checked the image info with rbd info disk1, and the size is 101GB.
I also checked under /sys/kernel/config/target/iscsi/:
cat info
Status: ACTIVATED Max Queue Depth: 0 SectorSize: 512 HwMaxSectors: 128
Config: rbd/rbd/disk1;osd_op_timeout=30 Size: 108447924224

The problem is that when I log in from the iSCSI initiator, the LUN size is still reported as 10GB.
Is this a bug?
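
One way to narrow this down would be to compare the size LIO advertises for the tcmu backstore against what rbd info reports; a minimal sketch that parses the same configfs info files quoted above (the user_* path layout is the standard LIO configfs location for tcmu backstores):

import glob
import re

# Print the size each user-backed (tcmu) backstore currently advertises.
# If this already shows the new size, the stale 10GB value is most likely being
# cached on the initiator side and a rescan is needed; if not, LIO never picked
# up the resize.
for info_path in glob.glob('/sys/kernel/config/target/core/user_*/*/info'):
    with open(info_path) as f:
        info = f.read()
    config = re.search(r'Config:\s*(\S+)', info)
    size = re.search(r'Size:\s*(\d+)', info)
    print('%s  %s  %s' % (info_path,
                          config.group(1) if config else '?',
                          size.group(1) if size else '?'))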
