Comments (10)

ddiss commented on August 17, 2024

I'm happy to work on this, but would appreciate some input on how best to expose these options to the user. At the moment, TPGs appear to be hidden, with configuration instead taking place at a portal level.

from ceph-iscsi.

mikechristie commented on August 17, 2024
  1. I am not sure what you mean by explicit LUN 1. I mean I have no idea what that is wrt scsi, lio, and targetcli :) Can you maybe point me to the targetcli/rtslib code that implements it?

Is it SUSE specific by any chance?

Why LUN 1 and not LUN 0 btw? Are you using LUN 0 as the special LUN only?

  2. Just to make sure I got this one: you have a target with multiple TPGs, but you want each TPG to have different CHAP settings? What is the use case for this?

I think the reason the TPGs were hidden from the user was that for ESX you needed them to be in a specific order across all gateways, so that when the ALUA info is reported it looks the same through any port (some versions of ESX look at all the info in the RTPG response rather than just the local port's, as defined in newer specs and as Linux does). When going from targetcli to gwcli we did not allow the user to configure TPGs, because they kept messing things up when we were doing the targetcli-based deployments.

There will also be some issues with tcmu-runner if we ever finish support for commands that need to know the I_T nexus info. For you guys that does not matter, so we can add some checks for tcmu-runner users.

I think it might be some major rework because of how we do TPGs today, so unless you have a strong use case, I would not bother. If you are going to do it, you would need to take a look at the code where, when a gateway is created, the TPG is automatically created. Maybe start with rbd-target-api's _gateway function.

mikechristie commented on August 17, 2024

Just to add to #2, the goal of gwcli was not to replicate all of targetcli's functionality, and it was not meant to give really low-level access the way targetcli does, so if something is missing it's because no user has asked for it. It was just trying to be as easy as possible for the user.

We are open to adding whatever users need though.

ddiss commented on August 17, 2024

Thanks for the feedback, Mike.

I am not sure what you mean by explicit LUN 1. I mean I have no idea what that is wrt scsi, lio, and targetcli :) Can you maybe point me to the targetcli/rtslib code that implements it?

Without an explicit client mapping under target/iscsi/$TARGET_IQN/$TPG/acls/$INITIATOR_IQN/lun_%d, the LUN id is derived from the configfs path at:
target/iscsi/$TARGET_IQN/$TPG/lun/lun_%d

It's passed via:

/* drivers/target/target_core_fabric_configfs.c: the TPG LUN number is
 * parsed straight out of the "lun_$N" configfs directory name. */
static struct config_group *target_fabric_make_lun(
        struct config_group *group,
        const char *name)
{
...
        if (strstr(name, "lun_") != name) {
                pr_err("Unable to locate \'_\" in"
                                " \"lun_$LUN_NUMBER\"\n");
                return ERR_PTR(-EINVAL);
        }
        /* everything after the "lun_" prefix becomes the LUN number */
        errno = kstrtoull(name + 4, 0, &unpacked_lun);
        if (errno)
                return ERR_PTR(errno);

        lun = core_tpg_alloc_lun(se_tpg, unpacked_lun);
        if (IS_ERR(lun))
                return ERR_CAST(lun);

Is it SUSE specific by any chance?

Nope :-)

Why LUN 1 and not LUN 0 btw? Are you using LUN 0 as the special LUN only?

LUN 1 was just used as an example, but consider a configuration where two LUNs are configured without explicit client mappings, so that the TPG lun_%d id is used. In this case they are created as lun_0 and lun_1. The user then wishes to delete the image exposed as lun_0, along with the corresponding ceph-iscsi export. Unless I'm missing something, this ceph-iscsi config change will change the LUN id of the second image from lun_1 to lun_0, which may be problematic/confusing for initiators that track LUNs by LUN# instead of by the SCSI unit serial number.

Using an explicit client mapping would ensure that the above LUN id change doesn't happen, but some users don't want to go to the effort of gathering and entering the IQNs for all initiators that may connect to the iSCSI target.
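
As a purely illustrative sketch of that renumbering (none of this is ceph-iscsi code; it just assumes LUN ids are taken from the lowest free TPG slot on each deploy):

# Illustration only: LUN ids taken from the lowest free slot are not stable
# across config changes.
def next_free_lun_id(used):
    lun_id = 0
    while lun_id in used:
        lun_id += 1
    return lun_id

# Initial deploy: two images, no explicit client mappings.
exports = {}
for image in ("rbd/image_a", "rbd/image_b"):
    exports[image] = next_free_lun_id(set(exports.values()))
print(exports)      # {'rbd/image_a': 0, 'rbd/image_b': 1}

# Delete image_a's export and redeploy the remaining images.
del exports["rbd/image_a"]
redeployed = {}
for image in exports:
    redeployed[image] = next_free_lun_id(set(redeployed.values()))
print(redeployed)   # {'rbd/image_b': 0}  <- the surviving image changed LUN id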

Just to make sure I got this one: you have a target with multiple TPGs, but you want each TPG to have different CHAP settings? What is the use case for this?

Like with the LUN settings, some users don't want to gather IQNs and configure per-initiator CHAP settings. The ceph-iscsi ability to group initiators does make this more user-friendly, but allowing users to configure CHAP settings at the TPG level is still much quicker.

I think the reason the TPGs were hidden from the user was that for ESX you needed them to be in a specific order across all gateways, so that when the ALUA info is reported it looks the same through any port (some versions of ESX look at all the info in the RTPG response rather than just the local port's, as defined in newer specs and as Linux does). When going from targetcli to gwcli we did not allow the user to configure TPGs, because they kept messing things up when we were doing the targetcli-based deployments.

Okay, understood.

There will also be some issues with tcmu-runner if we ever finish support for commands that need to know the I_T nexus info. For you guys that does not matter, so we can add some checks for tcmu-runner users.

I think it might be some major rework because of how we do TPGs today, so unless you have a strong use case, I would not bother. If you are going to do it, you would need to take a look at the code where, when a gateway is created, the TPG is automatically created. Maybe start with rbd-target-api's _gateway function.

Just to add to #2, the goal of gwcli was not to replicate all of targetcli's functionality, and it was not meant to give really low-level access the way targetcli does, so if something is missing it's because no user has asked for it. It was just trying to be as easy as possible for the user.

We are open to adding whatever users need though.

Thanks. IMO allowing users to configure LUN# and CHAP settings without needing to gather initiator IQNs is enough of a reason to proceed with support for this.

mikechristie commented on August 17, 2024

Adding @ricardoasmarques because he added support for a lot of what you are asking about.

LUN 1 was just used as an example, but consider a configuration where two LUNs are configured without explicit client mappings, so that the TPG lun_%d id is used. In this case they are created as lun_0 and lun_1. The user then wishes to delete the image exposed as lun_0, along with the corresponding ceph-iscsi export. Unless I'm missing something, this ceph-iscsi config change will change the LUN id of the second image from lun_1 to lun_0, which may be problematic/confusing for initiators that track LUNs by LUN# instead of by the SCSI unit serial number.

I think I see what you are saying; yeah, that is a bug that was introduced when support for clientless exports (acl_enabled=false) got added.

Originally we only supported exporting a LUN if it was mapped to an initiator/client, so the TPG LUN value was never exposed. When the client LUN mapping happens, the LUN for that mapping is stored in the lun_id field in gateway.conf, so the image will always be exported as the same LUN value.

@ricardoasmarques could we support something like this by making the target's disk array in the gateway.conf an array of {pool/image, lun_id} ?
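
To illustrate the proposal only (field names hypothetical, not the current gateway.conf format), the target's disks entry would pin each image to an explicit lun_id instead of being a plain list:

# Hypothetical gateway.conf fragment for the proposed target-level disks array.
proposed_target_disks = [
    {"pool": "rbd", "image": "image_a", "lun_id": 0},
    {"pool": "rbd", "image": "image_b", "lun_id": 1},
]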

Just to make sure I got this one: you have a target with multiple TPGs, but you want each TPG to have different CHAP settings? What is the use case for this?

Like with the LUN settings, some users don't want to gather IQNs and configure per-initiator CHAP settings. The ceph-iscsi ability to group initiators does make this more user-friendly, but allowing users to configure CHAP settings at the TPG level is still much quicker.

I am more asking whether you would need to configure different CHAP values for different TPGs, or whether all TPGs under a target would have the same values. I could not think of a reason for per-TPG values, but having the same values for all TPGs under a target makes sense to me and would be simple to implement and simple for the user to set up. It would just be some target-level settings.

[edit: adding some more detail]
If you just want to be able to set CHAP values at the TPG level, and it's OK for all TPGs under a target to share the same values, then we could put the settings at the target level in gateway.conf like we do for the other TPG values. We would not need to add/modify the interface to support per-TPG values. Check out TPG_SETTINGS in ceph_iscsi_config/target.py to start.
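
A rough sketch of that target-level fan-out (hypothetical names only, not the actual ceph-iscsi or TPG_SETTINGS code): one set of CHAP values stored per target is applied to every TPG the gateways auto-create.

# Hypothetical sketch: a single target-level CHAP setting fanned out to all TPGs.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Tpg:
    tag: int
    chap_userid: str = ""
    chap_password: str = ""

@dataclass
class Target:
    iqn: str
    auth: Dict[str, str] = field(default_factory=dict)   # stored once per target
    tpgs: List[Tpg] = field(default_factory=list)

def apply_target_auth(target: Target) -> None:
    # Fan the target-level CHAP values out to every auto-created TPG.
    for tpg in target.tpgs:
        tpg.chap_userid = target.auth.get("username", "")
        tpg.chap_password = target.auth.get("password", "")

target = Target("iqn.2003-01.com.redhat.iscsi-gw:ceph-igw",
                auth={"username": "myuser", "password": "mypassword"},
                tpgs=[Tpg(1), Tpg(2)])
apply_target_auth(target)   # every TPG now carries the same CHAP credentials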

If we need per-TPG values then it will be more difficult, like I mentioned before, due to how the TPGs are auto-created and not exposed today.

ddiss commented on August 17, 2024

I am more asking whether you would need to configure different CHAP values for different TPGs, or whether all TPGs under a target would have the same values. I could not think of a reason for per-TPG values, but having the same values for all TPGs under a target makes sense to me and would be simple to implement and simple for the user to set up. It would just be some target-level settings.

Differing CHAP settings for each TPG should indeed be unlikely, but if configuration syntax / API changes are already needed then I think my preference would be to add explicit TPG support, even if it's not exposed via the RESTful API / Dashboard initially.

IMO adding extra mapping / abstraction layers in front of the underlying LIO config representation will eventually cause headaches as support for more and more tunables, etc. is requested.

Full disclosure though: we have support for TPG level settings in lrbd and having them also supported in ceph-iscsi will make our life easier when migrating users over to ceph-iscsi.

ricardoasmarques commented on August 17, 2024

@ricardoasmarques could we support something like this by making the target's disk array in the gateway.conf an array of {pool/image, lun_id} ?

@mikechristie will we specify the lun_id during disk creation:

/disks> create [pool] [image] [size] [backstore] [lun_id] [count]

or when assigning the disk to the target?

/iscsi-target...5601081/disks> add disk [lun_id]

If we choose the former, then we have to add the new lun_id field inside the /disks/<disk> section instead of the /targets/<target>/disks section of "gateway.conf".
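
Schematically, the former option would keep the lun_id with the disk definition itself rather than in the target's disks array sketched earlier (names hypothetical):

# Hypothetical /disks/<disk> entry carrying the lun_id chosen at creation time.
proposed_disk_entry = {
    "rbd/image_a": {"backstore": "user:rbd", "lun_id": 0},
}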

mikechristie commented on August 17, 2024

We definitely need it to be persistent across reboots, and not based on the first available value in rtslib, but I cannot think of any case where the user actually cares about the value as long as it's not changing every time LUNs are mapped/unmapped due to restarts.

David, for your use case does the user need to pass in the LUN/lun_id? Will you need it for what you did in lrbd?

mikechristie commented on August 17, 2024

Differing CHAP settings for each TPG should indeed be unlikely, but if configuration syntax / API changes are already needed then I think my preference would be to add explicit TPG support, even if it's not exposed via the RESTful API / Dashboard initially.

Go for it! It would be really nice to get that code cleaned up.

You might want to start by looking at how we do target settings today and how to properly separate them so they can be set at the matching LIO level. For example, we have target and LUN settings. The target settings are actually LIO/rtslib TPG and NodeACL settings that get distributed over all TPGs/ACLs for a target. So I would start with the NodeACL stuff first: it would be useful to have per-ACL values (maybe less so now, since with 3.0 you can just do multiple targets with different ACLs), it's all already there, and it might make clearer how the flow goes and how you need to clean it up; then do the same for the TPG code. Or maybe that idea/plan won't help, in which case do whatever you want :)

ddiss commented on August 17, 2024

David, for your use case does the user need to pass in the LUN/lun_id? Will you need it for what you did in lrbd?

Yes, lrbd did allow for explicit lun_ids, independent of per-client mapping. Thanks again for the pointers :)
