samba-in-kubernetes / samba-operator
An operator for Samba-as-a-service on PVCs in Kubernetes.
License: Apache License 2.0
Looking at the code, the operator creates the resources (PVC, Deployment, Service) in the operator's namespace and then sets the SmbShare resource as the owner of these resources (so that K8s can garbage-collect them).
But K8s does not allow cross-namespace ownership, so if the SmbShare is created in a different namespace this operation will fail. Please note that the return value is not checked:
samba-operator/internal/resources/smbshare.go
Line 311 in 878ac25
In the demo shown at SambaXP, the context is set to the operator's namespace right after the operator is provisioned. As a result, all the resources created in the demo land in that namespace and this issue does not occur.
Is the intention to force the users to create SMBShares only in the operator's namespace?
This issue is more of a stopgap until proper documentation is written for people searching for that information. I intend to write this up properly.
OpenShift's DNS operator does not allow editing the coredns configmap while it is in the managed state. It does support changing the CRD, though. The following file does the same on OpenShift as the file in tests/files/coredns-snippet.template.
The file can be applied with
oc patch dns.operator/default --type merge --patch-file /path/to/file
(Of course AD_SERVER_IP has to be the actual IP.)
spec:
servers:
- name: ad-zone
zones:
- ad.schaeffer-ag.de
forwardPlugin:
upstreams:
- AD_SERVER_IP
Support for additional configuration that will allow access to a share from outside the k8s cluster was added a few months back but not documented. In fact, all the basic docs are a bit out of date. These should be updated to reflect the current state of the operator.
Wondering if any design docs have been made for a DC CRD.
For users it would be very convenient to have a Service that can be used to connect to the SmbShare by name.
For example, using tests/files/smbshare1.yaml as the SmbShare, and adding the following Service:
apiVersion: v1
kind: Service
metadata:
name: tshare1
namespace: samba-operator-system
spec:
selector:
samba-operator.samba.org/service: tshare1
ports:
- port: 445
protocol: TCP
This makes it possible to use //tshare1/My Share to connect, independent of the IP address that the Pod currently has:
$ kubectl -n samba-operator-system exec -ti centos -- /bin/bash
[root@centos /]# smbclient -U sambauser '//tshare1/My Share'
Enter SAMBA\sambauser's password:
Try "help" to get a list of possible commands.
smb: \>
Or, if the consumer of the SmbShare runs in a different namespace, it can use //tshare1.samba-operator-system.svc.cluster.local/My Share:
$ kubectl exec -ti centos -- /bin/bash
[root@centos /]# smbclient -U sambauser '//tshare1.samba-operator-system.svc.cluster.local/My Share'
Enter SAMBA\sambauser's password:
Try "help" to get a list of possible commands.
smb: \>
After deploying the operator with make deploy on OpenShift, the pod restarts constantly:
$ kubectl -n samba-operator-system get pods
NAME READY STATUS RESTARTS AGE
samba-operator-controller-manager-5c4766cfc-hnp2k 1/2 OOMKilled 2 5m58s
$ kubectl -n samba-operator-system describe pods
Name: samba-operator-controller-manager-5c4766cfc-hnp2k
Namespace: samba-operator-system
Priority: 0
Node: ip-10-0-153-215.ec2.internal/10.0.153.215
Start Time: Fri, 30 Apr 2021 10:18:13 +0200
Labels: control-plane=controller-manager
pod-template-hash=5c4766cfc
Annotations: k8s.v1.cni.cncf.io/network-status:
[{
"name": "",
"interface": "eth0",
"ips": [
"10.131.0.62"
],
"default": true,
"dns": {}
}]
k8s.v1.cni.cncf.io/networks-status:
[{
"name": "",
"interface": "eth0",
"ips": [
"10.131.0.62"
],
"default": true,
"dns": {}
}]
openshift.io/scc: restricted
Status: Running
IP: 10.131.0.62
IPs:
IP: 10.131.0.62
Controlled By: ReplicaSet/samba-operator-controller-manager-5c4766cfc
Containers:
kube-rbac-proxy:
Container ID: cri-o://9dcea85dea85669f5d32af871cc81f45dd0ab2a98f762d8d3763afd3d9c5d5f2
Image: gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0
Image ID: gcr.io/kubebuilder/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b
Port: 8443/TCP
Host Port: 0/TCP
Args:
--secure-listen-address=0.0.0.0:8443
--upstream=http://127.0.0.1:8080/
--logtostderr=true
--v=10
State: Running
Started: Fri, 30 Apr 2021 10:18:18 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-tnfbb (ro)
manager:
Container ID: cri-o://99320227bde1457d9fc019ecf479ce75fba630a428c80109f875023bae8a831e
Image: quay.io/samba.org/samba-operator:latest
Image ID: quay.io/samba.org/samba-operator@sha256:d1dbcea58e9800d17c40064d238ded061700eb5fb9e643c7b2884b834e6dd812
Port: <none>
Host Port: <none>
Command:
/manager
Args:
--metrics-addr=127.0.0.1:8080
--enable-leader-election
State: Terminated
Reason: OOMKilled
Exit Code: 137
Started: Fri, 30 Apr 2021 10:25:40 +0200
Finished: Fri, 30 Apr 2021 10:26:06 +0200
Last State: Terminated
Reason: OOMKilled
Exit Code: 137
Started: Fri, 30 Apr 2021 10:24:26 +0200
Finished: Fri, 30 Apr 2021 10:24:51 +0200
Ready: False
Restart Count: 4
Limits:
cpu: 100m
memory: 30Mi
Requests:
cpu: 100m
memory: 20Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-tnfbb (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-tnfbb:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-tnfbb
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8m default-scheduler Successfully assigned samba-operator-system/samba-operator-controller-manager-5c4766cfc-hnp2k to ip-10-0-153-215.ec2.internal
Normal AddedInterface 7m59s multus Add eth0 [10.131.0.62/23]
Normal Pulling 7m58s kubelet Pulling image "gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0"
Normal Pulled 7m56s kubelet Successfully pulled image "gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0" in 2.107868137s
Normal Created 7m56s kubelet Created container kube-rbac-proxy
Normal Started 7m56s kubelet Started container kube-rbac-proxy
Normal Pulled 7m52s kubelet Successfully pulled image "quay.io/samba.org/samba-operator:latest" in 3.780498563s
Normal Pulled 3m20s kubelet Successfully pulled image "quay.io/samba.org/samba-operator:latest" in 139.668453ms
Normal Pulled 2m40s kubelet Successfully pulled image "quay.io/samba.org/samba-operator:latest" in 141.670246ms
Normal Pulling 108s (x4 over 7m56s) kubelet Pulling image "quay.io/samba.org/samba-operator:latest"
Normal Created 108s (x4 over 7m52s) kubelet Created container manager
Normal Started 108s (x4 over 7m52s) kubelet Started container manager
Normal Pulled 108s kubelet Successfully pulled image "quay.io/samba.org/samba-operator:latest" in 130.90248ms
Warning BackOff 57s (x6 over 2m54s) kubelet Back-off restarting failed container
This seems to be related to the configuration in the Deployment/samba-operator-controller-manager. Increasing the memory limit from 30Mi to 100Mi seems to make the Pod run (without any workloads, that is):
resources:
limits:
cpu: 100m
memory: 100Mi
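Applying the same change declaratively, a kustomize strategic-merge patch over the manager Deployment might look like the following sketch; only the deployment and container names visible in the output above are assumed:

```yaml
# patch-manager-memory.yaml -- raise the manager container's memory limit
apiVersion: apps/v1
kind: Deployment
metadata:
  name: samba-operator-controller-manager
  namespace: samba-operator-system
spec:
  template:
    spec:
      containers:
        - name: manager
          resources:
            limits:
              cpu: 100m
              memory: 100Mi
```

Referencing the file via `patches:` in a kustomization keeps the fix reproducible instead of editing the live Deployment.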
No rush on this one, but we're testing our PRs on k8s 1.23 but 1.24 was released earlier this month. I think that we should target 1.24 (only) for a while and then consider widening the number of kube versions we test against later this year. Feel free to discuss.
There are calls to smbclient.CommandOutput & smbclient.Command that have no retry loops like the one added recently for share_access_test.go. We should ensure we can simply log in at all before doing more complex uses of smbclient.
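The project's Go retry helper isn't shown here; as an illustrative sketch of the pattern in shell form (the `retry` function and its parameters are hypothetical, not the project's API):

```shell
# retry: run a command up to ATTEMPTS times, sleeping DELAY seconds
# between tries; succeed as soon as the command succeeds.
# Sketch of the retry-until-login-works idea, not the repo's helper.
retry() {
    attempts=$1
    delay=$2
    shift 2
    i=1
    while [ "$i" -le "$attempts" ]; do
        "$@" && return 0
        i=$((i + 1))
        sleep "$delay"
    done
    return 1
}

# Example: confirm a plain login works before more complex smbclient uses.
# retry 10 3 smbclient -U sambauser%password -c 'ls' '//tshare1/My Share'
```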
Tests should assert that all resources created for a SmbShare are deleted when said SmbShare is deleted.
Basically we want to check that things are cleaned up properly. There's currently some breakage here due to the namespace/SetControllerReference problem reported in #87 but I suspect there's more. By adding to the tests we can be confident that this stuff gets fixed and prevents problems in the future.
The minimum size of the cluster in the SmbShare should be reflected in the size of the stateful set.
We can start with scaling up the stateful set. Later we can try and handle scaling it down too (possibly as a new issue?)
In the future we will probably want to fully reconcile certain changes to the SmbShare, such as changing from a non-clustered to a clustered instance. However in the short term it should be enough to recognize the situation and refuse to do anything (too) destructive.
The following error was addressed with #71:
Traceback (most recent call last):
File "/usr/local/bin/samba-container", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.9/site-packages/sambacc/main.py", line 447, in main
cfunc(cli, config)
File "/usr/local/lib/python3.9/site-packages/sambacc/main.py", line 97, in run_container
init_container(cli, config)
File "/usr/local/lib/python3.9/site-packages/sambacc/main.py", line 84, in init_container
import_config(cli, config)
File "/usr/local/lib/python3.9/site-packages/sambacc/main.py", line 49, in import_config
paths.ensure_samba_dirs()
File "/usr/local/lib/python3.9/site-packages/sambacc/paths.py", line 37, in ensure_samba_dirs
_mkdir(wb_sockets_dir)
File "/usr/local/lib/python3.9/site-packages/sambacc/paths.py", line 43, in _mkdir
os.mkdir(path)
PermissionError: [Errno 13] Permission denied: '/run/samba/winbindd'
Unfortunately, it is not sufficient and the next problem looks like this:
$ oc -n samba-operator-system logs pvc-0c6867c2-5875-405a-a4da-6f11d11c9e12-b58654f9-bmgn5
Failed to initialize the registry: WERR_ACCESS_DENIED
Failed to initialize the registry: WERR_ACCESS_DENIED
Can't load /etc/samba/smb.conf - run testparm to debug it
Traceback (most recent call last):
File "/usr/local/bin/samba-container", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.9/site-packages/sambacc/main.py", line 447, in main
cfunc(cli, config)
File "/usr/local/lib/python3.9/site-packages/sambacc/main.py", line 97, in run_container
init_container(cli, config)
File "/usr/local/lib/python3.9/site-packages/sambacc/main.py", line 84, in init_container
import_config(cli, config)
File "/usr/local/lib/python3.9/site-packages/sambacc/main.py", line 54, in import_config
loader.import_config(iconfig)
File "/usr/local/lib/python3.9/site-packages/sambacc/netcmd_loader.py", line 59, in import_config
self._check(cli, proc)
File "/usr/local/lib/python3.9/site-packages/sambacc/netcmd_loader.py", line 52, in _check
raise LoaderError("failed to run {}".format(cli))
sambacc.netcmd_loader.LoaderError: failed to run ['net', 'conf', 'import', '/dev/stdin']
Currently the tests for clustered SmbShares are not run in the CI. This is because the CI uses a single node k8s cluster with no ability to provision RWX PVCs.
At the bare minimum the test cluster must support Read-Write-Many. Ideally it would also support >=3 k8s nodes.
I don't know if the github CI is sufficient for this.
The operator will need integration level tests (aka e2e tests), tests that run in a proper (although probably minimal) cluster, sooner rather than later.
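For reference, the bare-minimum storage requirement above corresponds to a PVC such as the following; the storage class name is an assumption, and any RWX-capable class works:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany            # RWX: needed for clustered SmbShares
  storageClassName: rook-cephfs  # assumption; substitute any RWX class
  resources:
    requests:
      storage: 1Gi
```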
I used csi-driver-smb and a PV. If that's the intended way, I can write some documentation.
In the future it would, of course, be great to allow consuming SmbShares with pods in the same namespace without the administrator having to manually create a PV.
We need to add test cases to verify that the metrics container is being created when it should and works (for some value of works).
I don't think we need to run a full blown Prometheus but it may be good to at least verify that the http endpoint is valid.
If it works right now, it's largely luck. This needs to be fully implemented and tested.
We can set annotations that help exec/log commands function without extra flags for common kubectl subcommands like "logs" and "exec"
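For example, the standard kubectl.kubernetes.io/default-container annotation makes kubectl logs and kubectl exec pick a container without -c. A sketch on a pod template (pod name and container layout are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-smbshare-pod   # illustrative name
  annotations:
    # kubectl logs/exec use this container when -c is not given
    kubectl.kubernetes.io/default-container: samba
spec:
  containers:
    - name: samba
      image: quay.io/samba.org/samba-server:latest
    - name: wb
      image: quay.io/samba.org/samba-server:latest
```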
The contents of config/developer/kustomization.yaml should also include a resources: entry for it to work properly.
Without this, the command make DEVELOPER=1 deploy will fail with the output
/home/sprabhu/go/bin/controller-gen "crd:trivialVersions=true,crdVersions=v1" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
cd config/developer && /home/sprabhu/go/bin/kustomize edit set image controller=quay.io/spuiuk/smbshare_devel:test
/home/sprabhu/go/bin/kustomize build config/developer | kubectl apply -f -
Error: merging from generator &{0xc0004b6360 { map[] map[]} {{system controller-cfg merge {[SAMBA_OP_SAMBA_DEBUG_LEVEL=10 SAMBA_OP_CLUSTER_SUPPORT=ctdb-is-experimental] [] [] } }}}: id resid.ResId{Gvk:resid.Gvk{Group:"", Version:"v1", Kind:"ConfigMap", isClusterScoped:false}, Name:"controller-cfg", Namespace:"system"} does not exist; cannot merge or replace
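A config/developer/kustomization.yaml that avoids this error would pull in the base resources so the controller-cfg ConfigMap exists to merge into. The entries below are assumptions reconstructed from the error output; the ../default path in particular is a guess:

```yaml
# Sketch: the generator can only merge into controller-cfg if the base
# that defines that ConfigMap is referenced via `resources:`.
resources:
  - ../default              # assumption: base kustomization path
configMapGenerator:
  - name: controller-cfg
    namespace: system
    behavior: merge
    literals:
      - SAMBA_OP_SAMBA_DEBUG_LEVEL=10
      - SAMBA_OP_CLUSTER_SUPPORT=ctdb-is-experimental
```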
It would be very useful to see the underlying software versions on the metrics page.
The software versions I am interested in are:
When testing against the smb-operator Samba server using a Rook-supplied CephFS PVC, we see a failure when running the smb2.rw.invalid test.
[root@smbclient samba-integration]# /bin/smbtorture --fullname --target=samba3 --user=sambauser%samba //10.244.2.14/smbshare3 smb2.rw.invalid
smbtorture 4.15.5
Using seed 1651410917
time: 2022-05-01 13:15:17.325545
test: smb2.rw.invalid
time: 2022-05-01 13:15:17.327756
dos charset 'CP850' unavailable - using ASCII
time: 2022-05-01 13:15:17.444662
failure: smb2.rw.invalid [
../../source4/torture/smb2/read_write.c:331: status was NT_STATUS_DISK_FULL, expected NT_STATUS_OK: Incorrect status
]
The part of the code with the failure is at
w.in.file.handle = h;
w.in.offset = 0xfffffff0000 - 1; /* MAXFILESIZE - 1 */
w.in.data.data = buf;
w.in.data.length = 1;
status = smb2_write(tree, &w);
if (TARGET_IS_SAMBA3(torture) || TARGET_IS_SAMBA4(torture)) {
CHECK_STATUS(status, NT_STATUS_OK);
CHECK_VALUE(w.out.nwritten, 1);
} else {
CHECK_STATUS(status, NT_STATUS_DISK_FULL);
}
This is not seen with an ext4 underlying filesystem.
Versions:
samba-4.15.6-0.fc35.x86_64
mount point:
10.111.173.90:6789,10.110.224.62:6789,10.101.104.103:6789:/volumes/csi/csi-vol-0cb59f87-c54e-11ec-ad3d-1e1dd7acb57d/ae264282-34b6-4255-a25c-6d8f60d9fc5e /mnt/dc189f61-d413-4b76-bb99-4b86beb30c0a ceph rw,relatime,name=csi-cephfs-node,secret=,acl,mds_namespace=myfs 0 0
apiVersion: samba-operator.samba.org/v1alpha1
kind: SmbShare
metadata:
name: smbshare3
spec:
scaling:
availabilityMode: clustered
minClusterSize: 2
storage:
pvc:
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
readOnly: false
shareName: "smbshare3"
[sprabhu@fedora samba-operator]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
samba-ad-server-86b7dd9856-zkptq 1/1 Running 0 5h22m
smbshare3-0 3/3 Running 0 6m31s
smbshare3-1 0/3 Init:0/4 0 6m10s
Checking with kubectl describe pod smbshare3-1, we see:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m20s default-scheduler Successfully assigned default/smbshare3-1 to minikube-m02
Warning FailedAttachVolume 3m20s attachdetach-controller Multi-Attach error for volume "pvc-10d65634-15ab-4db1-ad35-243e9589d861" Volume is already used by pod(s) smbshare3-0
Normal SuccessfulAttachVolume 3m19s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-7d070fb6-76c0-48d7-a629-a99e3d4ee2a6"
Warning FailedMount 77s kubelet Unable to attach or mount volumes: unmounted volumes=[smbshare3-pvc-smb], unattached volumes=[smbshare3-state-ctdb ctdb-config ctdb-volatile samba-container-config ctdb-sockets samba-state-dir kube-api-access-8kcnn ctdb-persistent smbshare3-pvc-smb]: timed out waiting for the condition
[sprabhu@fedora samba-operator]$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
smbshare3-pvc Bound pvc-10d65634-15ab-4db1-ad35-243e9589d861 1Gi RWO rook-cephfs 7m41s
smbshare3-state Bound pvc-7d070fb6-76c0-48d7-a629-a99e3d4ee2a6 1Gi RWX rook-cephfs 7m41s
I'm using Traefik as the ingress controller.
Is it possible to ask the operator to create the Service that exposes the share only as a ClusterIP, so that an IngressRouteTCP resource can be used to expose the service through the ingress controller?
This should be preventable by a simple retry after delay.
Currently we use SetControllerReference all over the place, but this may not be the best choice.
oc edit deployment/samba-operator-controller-manager
and then:
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 100m
memory: 20Mi
the result is Out-of-Memory failures. So I changed it manually and it works. The default values are too small (0.1 cores and 20Mi of memory).
Tools like smbclient and net are generating dos charset warnings when run.
Looks somewhat like:
# smbclient -U foo -L //localhost
lp_load_ex: changing to config backend registry
Password for [WORKGROUP\foo]:
dos charset 'CP850' unavailable - using ASCII <-------- HERE
session setup failed: NT_STATUS_LOGON_FAILURE
It's minor but unnecessary, as it can be eliminated by setting an smb.conf option (dos charset, AFAICT).
Let's set that option and eliminate an annoyance.
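A sketch of the smb.conf fragment; whether forcing ASCII or enabling a charset that is actually available in the container is the right choice would need checking:

```
[global]
        # assumption: silence the fallback warning by matching what
        # the tools fall back to anyway
        dos charset = ASCII
```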
Awesome project so far. However, there is a community of users that will use the "user" authentication method (i.e. no AD) but still want to expose samba shares external to the k8s cluster.
Since there is no AD, AD-DNS isn't really an option. However, K8s does provide ExternalDNS as an option for exposing services externally via integration with the external DNS infrastructure.
It may be interesting to have an option to expose via either AD-DNS or ExternalDNS.
Sachin wants to do research about this (will get in touch with John about it).
See also PR #60 which adds the ability to configure this behavior to the CR design doc.
Steps:
[a@dhcp47-98 files]$ k apply -f client-test-pod.yaml
pod/smbclient created
[a@dhcp47]$ k get pods
NAME READY STATUS RESTARTS AGE
samba-ad-server-86b7dd9856-shvxp 1/1 Running 0 46m
samba-operator-controller-manager-844d976b7b-nlgqb 2/2 Running 0 19h
smbclient 0/1 ContainerCreating 0 30m -->> pod is not coming up
events:
35m Normal Scheduled pod/smbclient Successfully assigned samba-operator-system/smbclient to minikube
34m Warning FailedMount pod/smbclient MountVolume.SetUp failed for volume "data" : configmap "sample-data1" not found
27m Warning FailedMount pod/smbclient MountVolume.SetUp failed for volume "data" : configmap "sample-data1" not found
25m Warning FailedMount pod/smbclient Unable to attach or mount volumes: unmounted volumes=[kube-api-access-45s5r data], unattached volumes=[kube-api-access-45s5r data]: timed out waiting for the condition
4m36s Normal Scheduled pod/smbclient Successfully assigned samba-operator-system/smbclient to minikube
2m28s Warning FailedMount pod/smbclient MountVolume.SetUp failed for volume "data" : configmap "sample-data1" not found
2m33s Warning FailedMount pod/smbclient Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data kube-api-access-88s6g]: timed out waiting for the condition
87s Normal Scheduled pod/smbclient Successfully assigned samba-operator-system/smbclient to minikube
24s Warning FailedMount pod/smbclient MountVolume.SetUp failed for volume "data" : configmap "sample-data1" not found
16s Warning FailedMount pod/smbclient Unable to attach or mount volumes: unmounted volumes=[data kube-api-access-88s6g], unattached volumes=[data kube-api-access-88s6g]: timed out waiting for the condition
Please let me know if I missed any steps.
I've been researching how best to expose SMB shares to systems outside the k8s cluster. One thing that stands out to me is that we really don't need port 139. It's pretty old, obsolete, and having it as part of the container spec is just going to be confusing. I think we should focus our efforts on "modern" SMB until we have strong demand otherwise.
OpenShift's extra security stuff seems to cause the samba container to fail. We'll need to properly research this in order to either code up or write up how to run on openshift.
I installed the Samba Operator 0.2 on an OpenShift 4.8 bare-metal cluster. I created some AD shares.
winbindd version 4.15.7 started.
Copyright Andrew Tridgell and the Samba Team 1992-2021
initialize_winbindd_cache: clearing cache and re-creating with version number 2
Could not fetch our SID - did we join?
unable to initialize domain list
apiVersion: v1
kind: Secret
metadata:
name: join1
namespace: samba-shares
type: Opaque
stringData:
join.json: |
{"username": "samba-container-join", "password": ":-)"}
---
apiVersion: samba-operator.samba.org/v1alpha1
kind: SmbSecurityConfig
metadata:
name: addomain
namespace: samba-shares
spec:
mode: active-directory
realm: ad.domain.com
joinSources:
- userJoin:
secret: join1
key: join.json
---
apiVersion: samba-operator.samba.org/v1alpha1
kind: SmbCommonConfig
metadata:
name: freigabe
namespace: samba-shares
spec:
network:
publish: external
---
apiVersion: samba-operator.samba.org/v1alpha1
kind: SmbShare
metadata:
name: testshare
namespace: samba-shares
spec:
commonConfig: freigabe
securityConfig: addomain
readOnly: false
storage:
pvc:
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
# samba-tool computer show TESTSHARE
dn: CN=TESTSHARE,OU=Containers,OU=Domain Computers,DC=ad,DC=domain,DC=com
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
objectClass: computer
cn: TESTSHARE
instanceType: 4
whenCreated: 20220615103058.0Z
uSNCreated: 144306
name: TESTSHARE
objectGUID: 3adabc17-a938-47fa-843c-1e864b86e19e
badPwdCount: 0
codePage: 0
countryCode: 0
badPasswordTime: 0
lastLogoff: 0
primaryGroupID: 515
objectSid: S-1-5-21-2358220382-4025805735-3930986455-1375
accountExpires: 9223372036854775807
sAMAccountName: TESTSHARE$
sAMAccountType: 805306369
servicePrincipalName: HOST/TESTSHARE.ad.domain.com
servicePrincipalName: RestrictedKrbHost/TESTSHARE.ad.domain.com
servicePrincipalName: HOST/TESTSHARE
servicePrincipalName: RestrictedKrbHost/TESTSHARE
objectCategory: CN=Computer,CN=Schema,CN=Configuration,DC=ad,DC=domain,DC=com
isCriticalSystemObject: FALSE
dNSHostName: testshare.ad.domain.com
lastLogonTimestamp: 132997626582395210
msDS-SupportedEncryptionTypes: 31
pwdLastSet: 132997630161230470
userAccountControl: 4096
lastLogon: 132997630162023640
logonCount: 6
whenChanged: 20220615104727.0Z
uSNChanged: 144314
distinguishedName: CN=TESTSHARE,OU=Containers,OU=Domain Computers,DC=ad,DC=domain,DC=com
# oc get pods
NAME READY STATUS RESTARTS AGE
testshare-testshare-5986c96565-92gx9 1/2 CrashLoopBackOff 12 41m
# oc logs testshare-testshare-5986c96565-92gx9 -c wb
winbindd version 4.15.7 started.
Copyright Andrew Tridgell and the Samba Team 1992-2021
initialize_winbindd_cache: clearing cache and re-creating with version number 2
Could not fetch our SID - did we join?
unable to initialize domain list
sh-5.1# samba-container
[global]
disable spoolss = yes
fileid:algorithm = fsid
load printers = no
printcap name = /dev/null
printing = bsd
smb ports = 445
vfs objects = fileid
idmap config * : backend = autorid
idmap config * : range = 2000-9999999
realm = AD.DOMAIN.COM
security = ads
workgroup = AD
netbios name = testshare
[testshare]
path = /mnt/75067755-fe82-4f3c-841f-1ad7df34b5c8
read only = no
and the same when I start debugging:
[root@testshare-5986c96565-92gx9-debug /]# samba-container run winbindd
winbindd version 4.15.7 started.
Copyright Andrew Tridgell and the Samba Team 1992-2021
initialize_winbindd_cache: clearing cache and re-creating with version number 2
Could not fetch our SID - did we join?
unable to initialize domain list
So there is a SID, AD says welcome, and yet the Pod could not fetch its own SID.
This is on version v0.2.
> oc logs -n samba-operator-system deploy/samba-operator-controller-manager
2022-06-15T09:33:03.716Z INFO setup loaded configuration successfully {"config": {"SmbdContainerImage":"quay.io/samba.org/samba-server:v0.2","SmbdMetricsContainerImage":"qu
ay.io/samba.org/samba-metrics:v0.2","SvcWatchContainerImage":"quay.io/samba.org/svcwatch:v0.2","SmbdContainerName":"samba","WinbindContainerName":"wb","WorkingNamespace":"samba-operator-syst
em","SambaDebugLevel":"","StatePVCSize":"1Gi","ClusterSupport":"","SmbServicePort":445,"SmbdPort":445,"ServiceAccountName":"samba","MetricsExporterMode":"disabled","PodName":"samba-operator-
controller-manager-7486c6dcf5-zq7w7","PodNamespace":"samba-operator-system","PodIP":"172.20.11.77"}}
[...]
2022-06-15T09:33:26.516Z INFO controllers.SmbShare Updating state for SmbShare {"smbshare": "namespace/my-share", "SmbShare.Namespace": "namespace", "SmbShare.Name": "myshare", "Smb
Share.UID": "f69ba631-de0b-4856-b3fb-b0cb9f2a4ca1"}
2022-06-15T09:33:31.718Z INFO controllers.SmbShare Done updating SmbShare resources {"smbshare": "namespace/my-share"}
But:
> oc get po -n namespace myshare | grep -E '(serviceAccountName|image):'
image: quay.io/samba.org/samba-server:latest
serviceAccountName: default
image: quay.io/samba.org/samba-server:latest
Group multiple shares which export the same PVC into a single cluster instance.
(Possibly optional)
A headless service will create a DNS record with all the IPs for pods in the stateful set.
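A headless Service is an ordinary Service with clusterIP set to None; a sketch reusing the label selector from the Service example earlier in this document (the Service name is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tshare1-pods           # illustrative name
  namespace: samba-operator-system
spec:
  clusterIP: None              # headless: DNS resolves to all pod IPs
  selector:
    samba-operator.samba.org/service: tshare1
  ports:
    - port: 445
      protocol: TCP
```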
The non-clustered SmbShare instances are able to host additional containers that watch for changes to a Service's (public) IP addresses and register those in AD DNS. This should be supported and tested in clustered (ctdb) mode too.
I think this is fallout from #131
When I run commands such as make manifests, many of the YAML files show as changed to git. However, I've not touched the sources. I think we should do one of the following: commit the regenerated files, or fix the generation so that the tree stays clean after running make manifests and the like.

I was trying out this operator and following along with the readme but couldn't figure out the justification for the two different CRDs. It seems like the SmbPvc is meant to create a SmbService and a PVC and glue them together. If I'm right, I'm not quite sure what the advantage is.
If I may, my first thought is that the CRDs should be oriented more toward the tasks the users (cluster/storage admins) are going to be performing. In that way, SmbService makes some sense, but you could perhaps go even more granular and create SmbShare CRDs instead.
IMO, if you want to bind the lifecycle of a PV/PVC from a lower part of the storage stack, either by name (for an existing PVC) or by directly specifying the PVC parameters, I'd go with something like:
source:
pvc:
name: "mypvcname"
and, as an alternate form:
source:
pvc:
spec:
#...embedded pvc spec...
It is possible to set the KUBECONFIG env var to a list of multiple kubeconfig files. The kubectl command knows how to parse this and uses the first kubeconfig file that is available in that list.
I don't know if it is possible to do something similar in bash scripts. Maybe it is possible to provide similar behavior by using the client library from https://pkg.go.dev/k8s.io/cli-runtime. Just wanted to put it out there.
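As an illustrative sketch, a bash script could approximate the behavior described above by picking the first existing file from a colon-separated KUBECONFIG. Note that kubectl itself actually merges all the listed files, so this is only an approximation; the helper name is hypothetical:

```shell
# first_kubeconfig: print the first existing file in a colon-separated
# list (sketch only -- kubectl merges all files rather than picking one).
first_kubeconfig() {
    old_ifs=$IFS
    IFS=':'
    for f in $1; do
        if [ -f "$f" ]; then
            IFS=$old_ifs
            printf '%s\n' "$f"
            return 0
        fi
    done
    IFS=$old_ifs
    return 1
}

# Example: cfg=$(first_kubeconfig "${KUBECONFIG:-$HOME/.kube/config}")
```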
Tests need to verify that they've cleaned the resources at teardown.
Tests should use unique names for resources. This will help avoid collisions, identify problems, and allow parallel runs in the future.
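For example, a tiny helper that derives a collision-resistant name per test run; the naming scheme is illustrative, not the project's convention:

```shell
# unique_name: append a random hex suffix to a base resource name so
# parallel test runs do not collide.  Keeps to lowercase hex plus
# dashes, which is valid in k8s resource names.
unique_name() {
    printf '%s-%s\n' "$1" "$(od -An -N4 -tx4 /dev/urandom | tr -d ' \n')"
}

# Example: kubectl create namespace "$(unique_name smbtest)"
```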
Running make deploy emits numerous warnings, including:
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
I think any/all uses of this ClusterRole kind are from kubebuilder/operator-sdk and its tool chain, but we should probably figure out how to update anyway, since we don't even need to be backwards compatible with pre-1.17 k8s.
Ideally, the tests run like they do in the CI: once per cluster. But when developing and/or testing locally there are good reasons to reuse a k8s cluster. The tests don't tear down certain resources like smbclient. Unfortunately, using smbclient to connect to one DNS name places a cache record in gencache.tdb that seems to outlive the TTL of the record as served up by CoreDNS in the k8s cluster.
This issue is more a reminder that the problem lurks than something that needs to be resolved immediately.
It would be great to automatically manage the container image updates. On OpenShift, it could use imagestreams and triggers. But that's hardly a general option.
It would be great to allow adding sidecar containers to the pods serving the samba shares.
Our use case is automatically reacting to inotify events.
Either clone the annotations from the SmbShare resource or have them defined in the CRD
If the mountPath test happens to run first, or is run alone, it fails because the smbclient test pod is not present in the k8s cluster. The following error is from one such occurrence in the CentOS CI runs:
=== RUN TestIntegration
=== RUN TestIntegration/deploy
=== RUN TestIntegration/deploy/default
=== RUN TestIntegration/deploy/default/TestImageAndTag
=== RUN TestIntegration/deploy/default/TestOperatorReady
=== RUN TestIntegration/smbShares
=== RUN TestIntegration/smbShares/mountPath
mount_path_test.go:70:
Error Trace: mount_path_test.go:70
suite.go:118
integration_test.go:15
Error: Received unexpected error:
failed to flush cache: ['rm' '-f' '/var/lib/samba/lock/gencache.tdb']: failed executing command (pod:samba-operator-system/smbclient container:client): pods "smbclient" not found [exit: 1; stdout: ; stderr: ]
Test: TestIntegration/smbShares/mountPath
We may have to create the smbclient pod, if missing in the cluster, as part of SetupSuite().
Sometimes it is not right to do that, especially when the KUBECONFIG env var is of the form file1:file2:file3.
I think the solution in this case is probably to skip providing the arg to kubectl if the env var is set.
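A sketch of that suggestion in shell form; the function name and default config path are illustrative:

```shell
# kubectl_args: emit a --kubeconfig flag only when KUBECONFIG is unset,
# so colon-separated lists in KUBECONFIG keep working as kubectl intends.
kubectl_args() {
    if [ -z "${KUBECONFIG:-}" ]; then
        printf -- '--kubeconfig=%s\n' "$HOME/.kube/config"
    fi
}

# Example: kubectl $(kubectl_args) get pods
```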