
Comments (8)

tazend commented on July 21, 2024

@dun

Yeah a mention in the documentation would be good. I opened a bug report for this: https://bugs.schedmd.com/show_bug.cgi?id=16035
Let's see what they say :)


dun commented on July 21, 2024

This error message indicates the MUNGE credential was encoded with a MUNGE_OPT_UID_RESTRICTION and/or MUNGE_OPT_GID_RESTRICTION option which allows the decoding of that credential to be restricted to a specific UID and/or GID. It implies the same MUNGE key was used on both the encoding and decoding nodes since credential decryption was successful.
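
For reference, this is roughly what the encoding side looks like with the libmunge C API. This is only a minimal sketch: the UID/GID value 987 is a placeholder standing in for the slurm account, and error handling is trimmed.

    #include <munge.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>

    int main(void)
    {
        munge_ctx_t ctx = munge_ctx_create();
        char *cred = NULL;
        munge_err_t err;

        /* Restrict decoding of this credential to UID/GID 987
         * (placeholder values standing in for the slurm account). */
        munge_ctx_set(ctx, MUNGE_OPT_UID_RESTRICTION, (uid_t) 987);
        munge_ctx_set(ctx, MUNGE_OPT_GID_RESTRICTION, (gid_t) 987);

        err = munge_encode(&cred, ctx, NULL, 0);
        if (err != EMUNGE_SUCCESS) {
            fprintf(stderr, "munge_encode: %s\n", munge_strerror(err));
            return 1;
        }
        printf("%s\n", cred);

        free(cred);
        munge_ctx_destroy(ctx);
        return 0;
    }

Only a process running with the matching UID/GID can then successfully decode the resulting credential.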

This error message also notes the client process attempting to decode the credential is running as root (UID=0 and GID=0). My guess as to the problem here is the SlurmUser (in slurm.conf, see here) differs between the node encoding the credential and the node decoding the credential. This is typically set to the slurm user (a non-privileged system account), but it looks like the node generating the above error message is running with SlurmUser set to root. If that's not the problem, you'll need to follow-up with the Slurm community for an answer.


ZXRobotum commented on July 21, 2024

Thank you very much for your help.
In slurm.conf I have these:
SlurmUser=slurm
SlurmdUser=root

Well, I created the new "munge.key" with the following command, as described on your page:
sudo -u munge ${sbindir}/mungekey --verbose

On all my systems the UID & GID of slurm & munge are the same....

As I wrote before, my small test cluster works fine with the same settings, compile steps, etc. Only the large cluster no longer works... and I cannot find the mistake.


dun commented on July 21, 2024

The munge.key appears to be fine since the credential is successfully decoded. But the credential has been encoded with a uid restriction for a non-root user (presumably the slurm user), and the process attempting to decode it is running as root (hence the authorization error).

The only advice I have to offer here is to double-check that the slurm account (with the same uid) exists on all nodes in your large cluster, that SlurmUser=slurm is set, and perhaps restart the slurm service on all nodes in case the configuration changed after the service was initially started.
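
If it helps, here is a tiny C helper around getpwnam() that prints the local UID/GID of the slurm account so the values can be compared node by node. Running "id slurm" on each node gives the same information; the account name "slurm" is just an assumption matching SlurmUser=slurm.

    #include <pwd.h>
    #include <stdio.h>

    int main(void)
    {
        /* Look up the local "slurm" account (assumed name, matching
         * SlurmUser=slurm in slurm.conf). */
        struct passwd *pw = getpwnam("slurm");
        if (pw == NULL) {
            fprintf(stderr, "no local account named 'slurm'\n");
            return 1;
        }
        printf("slurm uid=%u gid=%u\n",
               (unsigned) pw->pw_uid, (unsigned) pw->pw_gid);
        return 0;
    }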


tazend commented on July 21, 2024

Hi @ZXRobotum

I just came across this error too in our production cluster and wondered why it happens.
Do your production and test clusters run the same Slurm version?

Because in Slurm 22.05 (presumably) the Slurm devs added this check in the init function of their auth_munge plugin.

Now, whenever a job starts, a new slurmstepd process is spawned and this init function is called. If you have verbose mode on, you should see something like this in your syslog every time a job step is launched on a node:

slurmstepd[16784]: cred/munge: init: Munge credential signature plugin loaded

As the comments in their source code say, they only check whether munge is configured to allow the root user to decode any incoming credential. They create a pseudo credential encoded with a different uid and try to decode it as the root user. If this succeeds, you are shown an error message telling you to disable the munge setting that allows root to decode any credential, since only the SlurmUser should be allowed to decode the credentials from slurmctld.

In short: the "Unauthorized credential for client UID=0 GID=0" message is just a byproduct of their safety check to see whether root is able to decode any credential and can, according to my (hopefully correct) interpretation, be safely ignored.
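
For anyone who wants to reproduce the check by hand against a running munged, here is a rough C sketch of the logic described above (this is not the actual Slurm code, and the restricted UID of 12345 is an arbitrary non-root placeholder). Run as root, EMUNGE_CRED_UNAUTHORIZED is the expected, healthy outcome.

    #include <munge.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>

    int main(void)
    {
        munge_ctx_t ctx = munge_ctx_create();
        char *cred = NULL;
        munge_err_t err;

        /* Encode a throwaway credential that only UID 12345 may decode
         * (placeholder for "some uid other than the current one"). */
        munge_ctx_set(ctx, MUNGE_OPT_UID_RESTRICTION, (uid_t) 12345);
        err = munge_encode(&cred, ctx, NULL, 0);
        if (err != EMUNGE_SUCCESS) {
            fprintf(stderr, "munge_encode: %s\n", munge_strerror(err));
            return 1;
        }

        /* Try to decode it as the current user (root, when run as root). */
        err = munge_decode(cred, NULL, NULL, NULL, NULL, NULL);
        if (err == EMUNGE_CRED_UNAUTHORIZED)
            printf("OK: root cannot decode credentials restricted to other UIDs\n");
        else if (err == EMUNGE_SUCCESS)
            printf("WARNING: root was able to decode a UID-restricted credential\n");
        else
            printf("munge_decode: %s\n", munge_strerror(err));

        free(cred);
        munge_ctx_destroy(ctx);
        return 0;
    }

The decode attempt on the encoding node also produces the "Unauthorized credential for client UID=0 GID=0" log entry, which is exactly the byproduct mentioned above.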


dun commented on July 21, 2024

@tazend, thanks for looking into this! ⭐

It's unfortunate that their safety check is causing confusion. Ideally this would be documented in their installation FAQ.


ZXRobotum commented on July 21, 2024

@tazend, many thanks for this and the ingenious bug-finding....

First of all, my test and CORE clusters are set up completely identically. The only difference between the two is that the test cluster consists of real machines and the CORE cluster of cloud instances.

Whether one can really ignore this is the question here, which I am still trying to answer through error analysis. I see these error messages from Slurm on the CORE system:

SlurmCTLD:
[2023-02-13T14:28:34.802] JobId=370421 nhosts:1 ncpus:1 node_req:64000 nodes=CompNode01
[2023-02-13T14:28:34.802] Node[0]:
[2023-02-13T14:28:34.802] Mem(MB):15998:0 Sockets:1 Cores:6 CPUs:6:0
[2023-02-13T14:28:34.802] Socket[0] Core[0] is allocated
[2023-02-13T14:28:34.802] Socket[0] Core[1] is allocated
[2023-02-13T14:28:34.802] Socket[0] Core[2] is allocated
[2023-02-13T14:28:34.802] Socket[0] Core[3] is allocated
[2023-02-13T14:28:34.802] Socket[0] Core[4] is allocated
[2023-02-13T14:28:34.802] Socket[0] Core[5] is allocated
[2023-02-13T14:28:34.802] --------------------
[2023-02-13T14:28:34.802] cpu_array_value[0]:6 reps:1
[2023-02-13T14:28:34.802] ====================
[2023-02-13T14:28:34.803] sched/backfill: _start_job: Started JobId=370421 in Artificial on CompNode01
[2023-02-13T14:28:34.910] _slurm_rpc_requeue: Requeue of JobId=370421 returned an error: Only batch jobs are accepted or processed
[2023-02-13T14:28:34.914] _slurm_rpc_kill_job: REQUEST_KILL_JOB JobId=370421 uid 0
[2023-02-13T14:28:34.915] job_signal: 9 of running JobId=370421 successful 0x8004
[2023-02-13T14:28:35.917] _slurm_rpc_complete_job_allocation: JobId=370421 error Job/step already completing or completed

SlurmD:
[370420.extern] fatal: Could not create domain socket: Operation not permitted
[2023-02-13T14:13:12.412] error: _forkexec_slurmstepd: slurmstepd failed to send return code got 0: Resource temporarily unavailable
[2023-02-13T14:13:12.417] Could not launch job 370420 and not able to requeue it, cancelling job

With this, the slurmd process aborts processing and reports back to the slurmctld that the job cannot be executed. I find absolutely no explanation for this; the only thing I see on both sides, slurmctld and slurmd, is the "unauthorised credential for client ....." message. How did you solve the problem in the end? With this flag on the MUNGE side, or rather on the Slurm side?
Best regards from Berlin.....

Z. Matthias


tazend commented on July 21, 2024

Hi @ZXRobotum

We can continue the discussion here in this issue, if you want (just so we don't further hijack this issue with discussion of perhaps unrelated Slurm errors).

