
ibm-common-service-operator's People

Contributors

adamdyszy, aniruddhj, arturobrzut, ashank07, bitscuit, bluzarraga, chenzhiwei, cwangvt, daniel-fan, dependabot[bot], emmahumber, ericabr, giacomoch, horis233, ibm-ci-bot, imgbot[bot], imgbotapp, jamstah, liqlin2015, mikekaczmarski, nemivant, pawelkopel, pgodowski, posriniv, ppyt-pl, qpdpq, sgrube, shrivastava-varsha, ycshen1010, zhuoxili


ibm-common-service-operator's Issues

Using GitHub Actions to automate release publishing and version bumps

Manually trigger the GitHub Action to do the following (a workflow sketch is shown after the list):

  1. Publish a release for a specific branch
    • Input: the existing branch name to publish a release from
  2. Create a new branch for an existing version
    • The new branch should be named after a minor version, for example release-3.19 or release-4.0
  3. Create a PR to bump the version
    • Input: whether it is a major, minor, or patch version update.
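A minimal sketch of such a manually triggered workflow, assuming a hypothetical workflow file name, input names, and placeholder steps (the actual release tooling is not defined in this issue):

# .github/workflows/release.yaml (hypothetical file name)
name: Publish release
on:
  workflow_dispatch:
    inputs:
      branch:
        description: "Existing branch to publish a release from"
        required: true
      bump:
        description: "Version bump type: major, minor, or patch"
        required: false
        default: "patch"
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{ github.event.inputs.branch }}
      # The actual publish, branch-creation, and version-bump steps would go here.
      - run: echo "Publishing a release from ${{ github.event.inputs.branch }} (${{ github.event.inputs.bump }} bump)"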

Move the CRD/RBAC from the CSV to Go code

The CRDs and RBACs are cold data, and we usually don't update them.

So let's move them into the Go code (constant/yaml.go) to reduce the size of the CSV file.

The only resources left in the CSV file would be:

  • CS subscription
  • ODLM subscription
  • secretshare operator deployment
  • webhook operator deployment
  • namespace scope operator deployment

3.5.5 clusterserviceversion.yaml

Jiaming asked me to raise this issue:

Can the 3.5.5 equivalent of this file be created please? https://github.com/IBM/ibm-common-service-operator/blob/master/deploy/olm-catalog/ibm-common-service-operator/3.5.4/ibm-common-service-operator.v3.5.4.clusterserviceversion.yaml

We would like to add the Events Operator entry to the csOperandRegistry.spec.operators stanza, but it's not due out in 3.5.4, so I can't add it until the 3.5.5 file exists:

        - name: ibm-events-operator
          namespace: ibm-common-services
          channel: beta
          packageName: ibm-events-operator
          scope: public
          sourceName: opencloud-operators
          sourceNamespace: openshift-marketplace

Maintain extra resources inside the CSV file

/kind enhancement

Currently, the extra Kubernetes resource files are maintained inside the Docker image, under the build/resources directory.

When we create a new release, we need to:

  1. Create a new git branch for the release
  2. Update the YAML code to use the image digest and build the image
  3. Update the CSV file to use the image built in the step above

If we put the YAML files into the CSV file, we only need to update the CSV file for a new release. This can significantly reduce the maintenance effort for new releases.

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  annotations:
    csNamespace: |-
      yaml
    csOperandRegistry: |-
      yaml
    csOperandConfig: |-
      yaml
    odlmSubscription: |-
      yaml
    extraResourceList: webhookOperator,secretShareOperator,rbac
    webhookOperator: |-
      yaml
    secretShareOperator: |-
      yaml
    rbac: |-
      yaml

cc @mikekaczmarski

relatedImages contains incorrect image path

Background
While testing catalog mirroring, we found that the common services catalog contains an incorrect reference to an operand image in relatedImages. Xinchun Liu indicated that the image belongs to the install operator and asked us to open an issue here:

- image: icr.io/cpopen/multi-instance-conversion@sha256:a9ed199056ae20a28c4949e1fe4b5e851bba1f50446897b069382cb7e5156619
  name: CS_MULTI_INSTANCE_CONVERSION_IMAGE

should be:

- image: icr.io/cpopen/cpfs/multi-instance-conversion@sha256:a9ed199056ae20a28c4949e1fe4b5e851bba1f50446897b069382cb7e5156619
  name: CS_MULTI_INSTANCE_CONVERSION_IMAGE

Operator installation fails when installing in a custom numeric namespace

Hi team,

Installation of the operator kept failing when I tried to install it in a custom numeric namespace (e.g. 123); it seems the namespace is being recognised as empty.

Logs from the pod:

I1118 15:06:01.284823 1 request.go:621] Throttling request took 1.045048932s, request: GET:https://172.21.0.1:443/apis/apps/v1?timeout=32s
I1118 15:06:05.307624 1 init.go:184] Single Deployment Status: true, MultiInstance Deployment status: false, SaaS Depolyment Status: false
I1118 15:06:05.317230 1 main.go:194] Creating CommonService CR in the namespace 987
E1118 15:06:05.334289 1 main.go:196] Failed to create CommonService CR: an empty namespace may not be set when a resource name is provided

And my common-service-maps ConfigMap data:

namespaceMapping:
  - requested-from-namespace:
      - "987"
    map-to-common-service-namespace: "987"
defaultCsNs: ibm-common-services

We found a workaround for now: create the CommonService CR manually and restart the pod.

Please can you have a look? Thank you.

Create 3.6 clusterserviceversion.yaml

Similarly to #254, can the 3.6 clusterserviceversion.yaml be created please?

We would like to add the following Events Operator stanza to it:

        - name: ibm-events-operator
          namespace: ibm-common-services
          channel: beta
          packageName: ibm-events-operator
          scope: public
          sourceName: opencloud-operators
          sourceNamespace: openshift-marketplace

After upgrading operator-sdk to v1.0.1, the serviceAccountName becomes default

As a Cloud Pak I want to request a CommonService CR through an OperandRequest

Details

As a Cloud Pak that has adopted the use of ODLM to manage its resources, I want to configure and deploy a local CommonService CR through ODLM, to be reconciled by ibm-common-services against the base resource (which resides in the ibm-common-services namespace by default).

The Cloud Pak has a Lead operator and a handful of Component operators. There is no Capability operator layer at this time.
As such, the Lead operator could also be thought of as a Capability operator itself. In order to provide a design that allows for a seamless transition between the various Lead and Capability topologies and entry points, we would like to use ODLM to manage the request for the local CommonService resource.
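A minimal sketch of what such a request might look like, assuming the standard OperandRequest shape; the request name, requesting namespace, and operand entry are illustrative and not taken from this issue:

apiVersion: operator.ibm.com/v1alpha1
kind: OperandRequest
metadata:
  name: cloudpak-common-service    # illustrative name
  namespace: my-cloudpak           # the Cloud Pak's own namespace (illustrative)
spec:
  requests:
    - registry: common-service
      registryNamespace: ibm-common-services
      operands:
        - name: ibm-common-service-operator   # illustrative operand entry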

@gyliu513 @horis233

Many other operators being installed when installing ODLM

Hello!

I installed the latest version 3.5.4 of common services from OperatorHub. Previously ODLM was installed automatically, but now that does not seem to happen. When I do install ODLM from OperatorHub explicitly, 10 other operators are installed (like MongoDB, Helm, UI, etc.). Is this a bug or is this expected? I personally don't need all these operators installed, so it would be great if only the required ODLM could be installed.

Thank you,
Srutha K

Keep both storageClass and size info

When the storageClass is added to a CommonService CR before the OperandConfig is created, the OperandConfig will only have the storageClass info but no size info.
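For reference, a minimal sketch of a CommonService CR carrying both fields; the size value and storage class name below are assumptions for illustration, not values from this issue:

apiVersion: operator.ibm.com/v3
kind: CommonService
metadata:
  name: common-service
  namespace: ibm-common-services
spec:
  size: small                     # sizing profile (assumed value)
  storageClass: rook-ceph-block   # assumed storage class name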

Add git commit hash to Docker images

Add a relationship between the git repo commit and the Docker image. When we encounter errors, we can use the commit hash inside the Docker image to find the corresponding source code and debug.

Dockerfile:

ARG GIT_COMMIT_SHA=""
LABEL git-commit-sha=${GIT_COMMIT_SHA}

Command:

docker build --build-arg GIT_COMMIT_SHA=$(git rev-parse HEAD) .

Update Install document to use CatalogSource

The OperatorSource is deprecated in OpenShift v4.4, so we need to use CatalogSource instead.

We also need to add a notice that the docs in this repo are intended for developers and early adopters; customers should use the official documentation in the IBM Knowledge Center.
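A sketch of the CatalogSource the document could point to, assuming the same catalog image and polling interval that appear in the CatalogSource spec quoted later on this page; the metadata name is the commonly used one but is an assumption here:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: opencloud-operators          # assumed catalog name
  namespace: openshift-marketplace
spec:
  displayName: IBMCS Operators
  publisher: IBM
  sourceType: grpc
  image: docker.io/ibmcom/ibm-common-service-catalog:latest
  updateStrategy:
    registryPoll:
      interval: 45m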

Permission error when webhook tries to fetch configmap in kube-public

admission webhook "ibm-cloudpak-operandrequest.operator.ibm.com" denied the request: failed to fetch configmap kube-public/common-service-maps: configmaps "common-service-maps" is forbidden: User "system:serviceaccount:common-service-installed:ibm-common-service-webhook" cannot get resource "configmaps" in API group "" in the namespace "kube-public"

I will submit a PR to add the corresponding items.
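For context, a minimal sketch (not the actual PR) of the RBAC that would let the webhook service account read that ConfigMap; the Role and RoleBinding names are illustrative, while the service account and namespaces come from the error message above:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: common-service-maps-reader   # illustrative name
  namespace: kube-public
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["common-service-maps"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: common-service-maps-reader   # illustrative name
  namespace: kube-public
subjects:
  - kind: ServiceAccount
    name: ibm-common-service-webhook
    namespace: common-service-installed
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: common-service-maps-reader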

/cc @horis233

Allow users of ODLM to specify OLM dependencies for the APIs

Summary

Today the ibm-common-service-operator bootstraps the operand-deployment-lifecycle-manager (ODLM) as well as a supporting NamespaceScope operator, among other bootstrapping tasks.

Users of ODLM are told that they must include ibm-common-service-operator, and must not include ODLM itself, in their dependency definitions.

This request is to allow the users of ODLM to declare their API and package requirements through OLM.

Details

There are at least two other operators who have adopted ODLM for their deployment architecture:

  • ibm-management-orchestrator
  • ibm-aiops-orchestrator

These operators include the ODLM Go library and interface directly with the ODLM APIs in Kubernetes.
The deployment architecture for the operands of these operators is based on ODLM as a fundamental component.
If the ODLM APIs do not exist, or are not within the supported range, the operators will panic.

Further, these operators are not yet offered through a bundle format in OLM. This means they only specify required APIs in the CSV and do not use the bundle dependencies.yaml for GVK and package dependency specifications.
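For illustration, a sketch of what a bundle metadata/dependencies.yaml declaring ODLM could look like; the package name, version range, and GVK entry are assumptions, not values confirmed in this issue:

dependencies:
  - type: olm.package
    value:
      packageName: ibm-odlm          # assumed ODLM package name
      version: ">=1.4.0"             # assumed version range
  - type: olm.gvk
    value:
      group: operator.ibm.com
      kind: OperandRequest
      version: v1alpha1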

When the IAM operator isn't enabled, the checking message repeats forever

When IBM Common Services has been installed correctly but the IAM operator is not enabled, there is a message that repeats forever, every 2 minutes, in the log:

[root@jordaxbastion ~]# oc logs -f ibm-common-service-operator-bccf584bf-nqbnc -n common-service
I0625 18:57:24.842757       1 main.go:49] Operator Version: 3.4.1
I0625 18:57:24.842948       1 main.go:50] Go Version: go1.13.8
I0625 18:57:24.842955       1 main.go:51] Go OS/Arch: linux/amd64
I0625 18:57:24.842961       1 main.go:52] Version of operator-sdk: v0.16.0
I0625 18:57:39.022043       1 main.go:115] Registering Components.
I0625 18:57:39.022246       1 main.go:123] check Helm based IBM Common Services installation
I0625 18:57:39.028465       1 main.go:134] start installing ODLM operator and initialize IBM Common Services
I0625 18:57:39.053070       1 init.go:68] create namespace for common services
I0625 18:57:39.053149       1 init.go:105] create resource: csNamespace
I0625 18:57:39.060118       1 init.go:73] check existing ODLM operator
I0625 18:57:39.089185       1 init.go:78] create ODLM operator
I0625 18:57:39.089235       1 init.go:105] create resource: odlmSubscription
I0625 18:57:39.094632       1 init.go:199] create resource with name: operand-deployment-lifecycle-manager-app, namespace: openshift-operators, kind: Subscription, apiversion: operators.coreos.com/v1alpha1
I0625 18:57:39.108057       1 init.go:83] create extra resources for common services
I0625 18:57:39.108108       1 init.go:105] create resource: webhookCRD
I0625 18:57:39.138212       1 init.go:105] create resource: webhookOperator
I0625 18:57:39.196357       1 init.go:105] create resource: secretShareOperator
I0625 18:57:39.239278       1 init.go:105] create resource: rbac
I0625 18:57:39.262014       1 init.go:88] create ODLM  OperandRegistry and OperandConfig CR resources
I0625 18:57:45.268291       1 init.go:105] create resource: csOperandRegistry
I0625 18:57:45.355419       1 init.go:199] create resource with name: common-service, namespace: ibm-common-services, kind: OperandRegistry, apiversion: operator.ibm.com/v1alpha1
I0625 18:57:45.378411       1 init.go:105] create resource: csOperandConfig
I0625 18:57:45.457123       1 init.go:199] create resource with name: common-service, namespace: ibm-common-services, kind: OperandConfig, apiversion: operator.ibm.com/v1alpha1
I0625 18:57:45.470294       1 main.go:139] finish installing ODLM operator and initialize IBM Common Services
I0625 18:57:45.470342       1 main.go:141] check IAM pods status
I0625 18:57:45.475009       1 check_iam.go:103] IAM status is NoReady, waiting some minutes...
E0625 18:57:45.492500       1 check_iam.go:45] create or update configmap failed
I0625 18:59:45.506445       1 check_iam.go:103] IAM status is NoReady, waiting some minutes...
I0625 19:01:45.533394       1 check_iam.go:103] IAM status is NoReady, waiting some minutes...
I0625 19:03:45.562996       1 check_iam.go:103] IAM status is NoReady, waiting some minutes...
I0625 19:05:45.592411       1 check_iam.go:103] IAM status is NoReady, waiting some minutes...
I0625 19:07:45.615526       1 check_iam.go:103] IAM status is NoReady, waiting some minutes...
I0625 19:09:45.638640       1 check_iam.go:103] IAM status is NoReady, waiting some minutes...
I0625 19:11:45.658923       1 check_iam.go:103] IAM status is NoReady, waiting some minutes...
I0625 19:13:45.682919       1 check_iam.go:103] IAM status is NoReady, waiting some minutes...
I0625 19:15:45.707237       1 check_iam.go:103] IAM status is NoReady, waiting some minutes...
I0625 19:17:45.734384       1 check_iam.go:103] IAM status is NoReady, waiting some minutes...
I0625 19:19:45.758511       1 check_iam.go:103] IAM status is NoReady, waiting some minutes...
I0625 19:21:45.794558       1 check_iam.go:103] IAM status is NoReady, waiting some minutes...
I0625 19:23:45.818470       1 check_iam.go:103] IAM status is NoReady, waiting some minutes...
I0625 19:25:45.840129       1 check_iam.go:103] IAM status is NoReady, waiting some minutes...

This gives users the impression that something is wrong. We need to improve this logging.

Convert to Multi-Instance Script - Bug with yq

Following documentation here: https://www.ibm.com/docs/en/cloud-paks/1.0?topic=icpfsimn-converting-single-namespace-installation-multiple-namespaces-installation-tech-preview#example

The convert-to-multi-instance script fails due to an old version of yq. The script does not adequately check the user's yq version before proceeding and gives a poor error message as a result. In my case, I got this error with yq version 3.3.0. The issue was resolved after upgrading to the latest version of yq.

Line causing the issue:

return_value=$("${OC}" get configmap -n kube-public -o yaml ${cm_name} | yq '.data' | grep controlNamespace: > /dev/null || echo failed)

Error message: 2023/09/01 18:32:58 unknown command ".data" for "yq"

Upgrade of IBM Common Service operator and related dependencies

Hi folks! What would be the process to take on new updates of the IBM Common Services operator and other related dependencies with their respective containers? For example, for the operator list below, what would be the approach to take on new versions of the operator and/or the containers pulled automatically by the operator?

oc get csv -n ibm-common-services
NAME                                          DISPLAY                                VERSION   REPLACES                                      PHASE
ibm-common-service-operator.v3.6.5            IBM Cloud Platform Common Services     3.6.5     ibm-common-service-operator.v3.6.4            Succeeded
ibm-namespace-scope-operator.v1.0.2           IBM NamespaceScope Operator            1.0.2     ibm-namespace-scope-operator.v1.0.1           Succeeded
operand-deployment-lifecycle-manager.v1.4.3   Operand Deployment Lifecycle Manager   1.4.3     operand-deployment-lifecycle-manager.v1.4.2   Succeeded

The following CatalogSource spec indicates an automatic catalog poll every 45 minutes. But is there a manual task to take on new containers, e.g., when a newly patched container image has been published to the registry?

spec:
  displayName: IBMCS Operators
  image: docker.io/ibmcom/ibm-common-service-catalog:latest
  publisher: IBM
  sourceType: grpc
  updateStrategy:
    registryPoll:
      interval: 45m

Thanks!

setup_singleton.sh script is missing the `--yq` option

https://github.com/IBM/ibm-common-service-operator/blob/scripts/cp3pt0-deployment/setup_singleton.sh#L66
When running the setup_singleton.sh script with --yq, it does not work, even though the help message includes the --yq option:

# setup_singleton.sh -h
Usage: setup_singleton.sh --license-accept [OPTIONS]...

Install Cloud Pak 3 pre-reqs if they do not already exist: ibm-cert-manager-operator and optionally ibm-licensing-operator
The ibm-cert-manager-operator will be installed in namespace ibm-cert-manager
The ibm-licensing-operator will be installed in namespace ibm-licensing
The --license-accept must be provided.
See https://www.ibm.com/docs/en/cloud-paks/foundational-services/4.0?topic=manager-installing-cert-licensing-by-script for more information.

Options:
   --oc string                                    Optional. File path to oc CLI. Default uses oc in your PATH
   --yq string                                    Optional. File path to yq CLI. Default uses yq in your PATH

CreateContainerConfigError: message: secret "icp-serviceid-apikey-secret" not found

Hi,

I was trying to install WebSphere Automation using the small profile on IBM ROKS (OpenShift):
https://www.ibm.com/docs/en/ws-automation?topic=installation-small-profile-configuration

About 50% of the time, I'm facing this issue.

Two pods have a CreateContainerConfigError with secret "icp-serviceid-apikey-secret" not found:

  • secret-watcher-*
  • security-onboarding-*

The iam-onboarding-* job has these errors:

ERROR (:0016) (0.000000 elapsed): {
    "request_stat": {
        "url": "https://platform-identity-provider:4300/v1/auth/identitytoken",
        "retries": 15,
        "elapsed": "933.50 seconds",
        "method": "POST",
        "result": 400
    }
}

I have tried deleting the pods and rerunning the iam-onboarding job, but the issue still persists:

oc get job iam-onboarding -o json | jq 'del(.spec.selector)' | jq 'del(.spec.template.metadata.labels)' | oc replace --force -f -
$ oc get configmap -n kube-public ibm-common-services-status -o yaml
apiVersion: v1
data:
  iamstatus: NotReady
  ibm-common-services-iamstatus: NotReady
  openshift-operators-iamstatus: NotReady
  websphere-automation-iamstatus: NotReady
kind: ConfigMap
metadata:
  creationTimestamp: "2021-07-26T02:39:55Z"
  name: ibm-common-services-status
  namespace: kube-public
  resourceVersion: "902731"
  selfLink: /api/v1/namespaces/kube-public/configmaps/ibm-common-services-status
  uid: a124e38f-8f0e-4d53-b8e5-1dbb56abc026
$ oc get pod -A | grep icp-
ibm-common-services                                icp-mongodb-0                                                     2/2     Running                      0          25h
ibm-common-services                                icp-mongodb-1                                                     2/2     Running                      0          25h
ibm-common-services                                icp-mongodb-2                                                     2/2     Running                      0          25h
$ oc get Operandrequest -A
NAMESPACE              NAME                           AGE   PHASE     CREATED AT
ibm-common-services    ibm-commonui-request           25h   Running   2021-07-26T02:38:08Z
ibm-common-services    ibm-iam-request                25h   Running   2021-07-26T02:39:15Z
ibm-common-services    ibm-mongodb-request            25h   Running   2021-07-26T02:48:46Z
ibm-common-services    management-ingress             25h   Running   2021-07-26T02:39:13Z
ibm-common-services    platform-api-request           25h   Running   2021-07-26T02:39:13Z
openshift-operators    iaf-ai-operator                25h   Running   2021-07-26T02:34:58Z
openshift-operators    iaf-core-operator              25h   Running   2021-07-26T02:35:01Z
openshift-operators    iaf-eventprocessing-operator   25h   Running   2021-07-26T02:35:02Z
openshift-operators    iaf-operator                   25h   Running   2021-07-26T02:34:56Z
openshift-operators    ibm-elastic-operator           25h   Running   2021-07-26T02:35:06Z
websphere-automation   iaf-system-common-service      25h   Running   2021-07-26T02:36:37Z
websphere-automation   iaf-system-events              25h   Running   2021-07-26T02:37:54Z
websphere-automation   ibm-iam-service                23h   Running   2021-07-26T04:16:46Z
apiVersion: zen.cpd.ibm.com/v1
kind: ZenService
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"zen.cpd.ibm.com/v1","kind":"ZenService","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"automation-ui-config","app.kubernetes.io/instance":"iaf-system","app.kubernetes.io/managed-by":"iaf-core-operator","app.kubernetes.io/name":"automation-foun-9361"},"name":"iaf-zen-cpdservice","namespace":"websphere-automation"},"spec":{"iamIntegration":true,"license":{"accept":true},"storageClass":"ibmc-file-gold-gid"}}
  selfLink: >-
    /apis/zen.cpd.ibm.com/v1/namespaces/websphere-automation/zenservices/iaf-zen-cpdservice
  resourceVersion: '1104540'
  name: iaf-zen-cpdservice
  uid: 635ae4a7-0135-4b95-9f3f-ba5c2bb9c3ac
  creationTimestamp: '2021-07-26T03:58:09Z'
  generation: 1
  managedFields:
    - apiVersion: zen.cpd.ibm.com/v1
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            .: {}
            'f:kubectl.kubernetes.io/last-applied-configuration': {}
          'f:labels':
            .: {}
            'f:app.kubernetes.io/component': {}
            'f:app.kubernetes.io/instance': {}
            'f:app.kubernetes.io/managed-by': {}
            'f:app.kubernetes.io/name': {}
        'f:spec':
          .: {}
          'f:iamIntegration': {}
          'f:license': {}
          'f:storageClass': {}
      manager: oc
      operation: Update
      time: '2021-07-26T03:58:09Z'
    - apiVersion: zen.cpd.ibm.com/v1
      fieldsType: FieldsV1
      fieldsV1:
        'f:status':
          .: {}
          'f:zenOperatorBuildNumber': {}
          'f:zenStatus': {}
      manager: OpenAPI-Generator
      operation: Update
      time: '2021-07-27T04:04:47Z'
    - apiVersion: zen.cpd.ibm.com/v1
      fieldsType: FieldsV1
      fieldsV1:
        'f:status':
          'f:conditions': {}
      manager: ansible-operator
      operation: Update
      time: '2021-07-27T04:04:48Z'
  namespace: websphere-automation
  labels:
    app.kubernetes.io/component: automation-ui-config
    app.kubernetes.io/instance: iaf-system
    app.kubernetes.io/managed-by: iaf-core-operator
    app.kubernetes.io/name: automation-foun-9361
spec:
  iamIntegration: true
  license:
    accept: true
  storageClass: ibmc-file-gold-gid
status:
  conditions:
    - lastTransitionTime: '2021-07-27T03:56:53Z'
      message: Running reconciliation
      reason: Running
      status: 'False'
      type: Running
    - ansibleResult:
        changed: 8
        completion: '2021-07-27T04:04:48.434606'
        failures: 1
        ok: 185
        skipped: 305
      lastTransitionTime: '2021-07-27T04:04:48Z'
      message: |-
        unknown playbook failure
        unknown playbook failure
      reason: Failed
      status: 'True'
      type: Failure
  zenOperatorBuildNumber: zen operator build 64
  zenStatus: Failed

These secrets exist as well:

  • zen-serviceid-apikey-secret
  • ibm-iam-bindinfo-zen-serviceid-apikey-secret

I can provide the cluster if you want to check and debug it.

One of the pods icp-mongodb-2 is stuck in Init:1/2 state.

This is happening in a CP4I 2020.3.1 instance set up on ROKS 4.4 for a partner in IBM Cloud. It was running fine until the certificates expired. After regeneration of the certificates, the icp-mongodb-2 pod is stuck in this state. The bootstrap (init) container is still Running, but there are no obvious errors in its log. The PV, PVC, and Pod do not have any error events, and the PV is attached and accessible from the bootstrap container.

bootstrap container logs:
Sachins-MacBook-Pro-2:~ sachinkumarjha$ oc logs icp-mongodb-2 -c bootstrap
2021/01/28 06:05:38 Determined Domain to be ibm-common-services.svc.cluster.local
2021/01/28 06:05:38 Peer list updated
was []
now [icp-mongodb-0.icp-mongodb.ibm-common-services.svc.cluster.local icp-mongodb-1.icp-mongodb.ibm-common-services.svc.cluster.local icp-mongodb-2.icp-mongodb.ibm-common-services.svc.cluster.local]
2021/01/28 06:05:38 execing: /init/on-start.sh with stdin: icp-mongodb-0.icp-mongodb.ibm-common-services.svc.cluster.local
icp-mongodb-1.icp-mongodb.ibm-common-services.svc.cluster.local
icp-mongodb-2.icp-mongodb.ibm-common-services.svc.cluster.local

Describe Pod:

Sachins-MacBook-Pro-2:~ sachinkumarjha$ oc describe pod icp-mongodb-2
Name: icp-mongodb-2
Namespace: ibm-common-services
Priority: 0
Node: 10.73.236.67/10.73.236.67
Start Time: Thu, 28 Jan 2021 11:35:25 +0530
Labels: app=icp-mongodb
app.kubernetes.io/instance=common-mongodb
controller-revision-hash=icp-mongodb-86bb8c788
release=mongodb
statefulset.kubernetes.io/pod-name=icp-mongodb-2
Annotations: clusterhealth.ibm.com/dependencies: ibm-common-services.cert-manager
cni.projectcalico.org/podIP: 172.30.209.174/32
cni.projectcalico.org/podIPs: 172.30.209.174/32
cs-podpreset.operator.ibm.com/podpreset-ibm-common-service-webhook: 83674
k8s.v1.cni.cncf.io/networks-status:
[{
"name": "k8s-pod-network",
"ips": [
"172.30.209.174"
],
"dns": {}
}]
openshift.io/scc: restricted
productID: 068a62892a1e4db39641342e592daa25
productMetric: FREE
productName: IBM Cloud Platform Common Services
prometheus.io/path: /metrics
prometheus.io/port: 9216
prometheus.io/scrape: true
Status: Pending
IP: 172.30.209.174
IPs:
IP: 172.30.209.174
Controlled By: StatefulSet/icp-mongodb
Init Containers:
install:
Container ID: cri-o://9e7886c77c716bc1de2acc9a785a7d0f6684202f747acbd0869963f6ddce90ff
Image: quay.io/opencloudio/ibm-mongodb-install@sha256:1211ee8ece2791b91fadd5b1294749e46cb88cef5f3a37e1d9dd6890038f1043
Image ID: quay.io/opencloudio/ibm-mongodb-install@sha256:1211ee8ece2791b91fadd5b1294749e46cb88cef5f3a37e1d9dd6890038f1043
Port:
Host Port:
Command:
/install/install.sh
Args:
--work-dir=/work-dir
--config-dir=/data/configdb
State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 28 Jan 2021 11:35:36 +0530
Finished: Thu, 28 Jan 2021 11:35:36 +0530
Ready: True
Restart Count: 0
Limits:
cpu: 2
memory: 2Gi
Requests:
cpu: 2
memory: 2Gi
Environment:
Mounts:
/ca-readonly from ca (rw)
/configdb-readonly from config (rw)
/data/configdb from configdir (rw)
/data/db from mongodbdir (rw,path="datadir")
/install from install (rw)
/keydir-readonly from keydir (rw)
/tmp from tmp-mongodb (rw)
/var/run/secrets/kubernetes.io/serviceaccount from ibm-mongodb-operand-token-4887s (ro)
/work-dir from mongodbdir (rw,path="workdir")
bootstrap:
Container ID: cri-o://b41088c8b31092d3d7e4273537dd4511cd2735d45b2b8b3a8274038d1849599b
Image: quay.io/opencloudio/ibm-mongodb@sha256:5004b6073efd2df5eae51431e866123d386495aea1b4baa2dcac9fcbaaf7eb83
Image ID: quay.io/opencloudio/ibm-mongodb@sha256:5004b6073efd2df5eae51431e866123d386495aea1b4baa2dcac9fcbaaf7eb83
Port:
Host Port:
Command:
/work-dir/peer-finder
Args:
-on-start=/init/on-start.sh
-service=icp-mongodb
State: Running
Started: Thu, 28 Jan 2021 11:35:38 +0530
Ready: False
Restart Count: 0
Limits:
cpu: 2
memory: 2Gi
Requests:
cpu: 2
memory: 2Gi
Environment:
POD_NAMESPACE: ibm-common-services (v1:metadata.namespace)
REPLICA_SET: rs0
AUTH: true
ADMIN_USER: <set to the key 'user' in secret 'icp-mongodb-admin'> Optional: false
ADMIN_PASSWORD: <set to the key 'password' in secret 'icp-mongodb-admin'> Optional: false
METRICS: true
METRICS_USER: <set to the key 'user' in secret 'icp-mongodb-metrics'> Optional: false
METRICS_PASSWORD: <set to the key 'password' in secret 'icp-mongodb-metrics'> Optional: false
NETWORK_IP_VERSION: ipv4
Mounts:
/data/configdb from configdir (rw)
/data/db from mongodbdir (rw,path="datadir")
/init from init (rw)
/tmp from tmp-mongodb (rw)
/var/run/secrets/kubernetes.io/serviceaccount from ibm-mongodb-operand-token-4887s (ro)
/work-dir from mongodbdir (rw,path="workdir")
Containers:
icp-mongodb:
Container ID:
Image: quay.io/opencloudio/ibm-mongodb@sha256:5004b6073efd2df5eae51431e866123d386495aea1b4baa2dcac9fcbaaf7eb83
Image ID:
Port: 27017/TCP
Host Port: 0/TCP
Command:
mongod
--config=/data/configdb/mongod.conf
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Limits:
cpu: 2
memory: 2Gi
Requests:
cpu: 2
memory: 2Gi
Liveness: exec [mongo --ssl --sslCAFile=/data/configdb/tls.crt --sslPEMKeyFile=/work-dir/mongo.pem --eval db.adminCommand('ping')] delay=30s timeout=10s period=30s #success=1 #failure=5
Readiness: exec [mongo --ssl --sslCAFile=/data/configdb/tls.crt --sslPEMKeyFile=/work-dir/mongo.pem --eval db.adminCommand('ping')] delay=5s timeout=1s period=10s #success=1 #failure=3
Environment:
AUTH: true
ADMIN_USER: <set to the key 'user' in secret 'icp-mongodb-admin'> Optional: false
ADMIN_PASSWORD: <set to the key 'password' in secret 'icp-mongodb-admin'> Optional: false
Mounts:
/data/configdb from configdir (rw)
/data/db from mongodbdir (rw,path="datadir")
/tmp from tmp-mongodb (rw)
/var/run/secrets/kubernetes.io/serviceaccount from ibm-mongodb-operand-token-4887s (ro)
/work-dir from mongodbdir (rw,path="workdir")
metrics:
Container ID:
Image: quay.io/opencloudio/ibm-mongodb-exporter@sha256:d919b68e11254c38bef86b036ce5636e62699045d22c04c3c7ce616d95f341ec
Image ID:
Port: 9216/TCP
Host Port: 0/TCP
Command:
sh
-ec
/bin/mongodb_exporter --mongodb.uri mongodb://$METRICS_USER:$METRICS_PASSWORD@localhost:27017 --mongodb.tls --mongodb.tls-ca=/data/configdb/tls.crt --mongodb.tls-cert=/work-dir/mongo.pem --mongodb.socket-timeout=3s --mongodb.sync-timeout=1m --web.telemetry-path=/metrics --web.listen-address=:9216
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Limits:
cpu: 1
memory: 350Mi
Requests:
cpu: 100m
memory: 300Mi
Liveness: exec [sh -ec /bin/mongodb_exporter --mongodb.uri mongodb://$METRICS_USER:$METRICS_PASSWORD@localhost:27017 --mongodb.tls --mongodb.tls-ca=/data/configdb/tls.crt --mongodb.tls-cert=/work-dir/mongo.pem --test] delay=30s timeout=10s period=30s #success=1 #failure=10
Readiness: exec [sh -ec /bin/mongodb_exporter --mongodb.uri mongodb://$METRICS_USER:$METRICS_PASSWORD@localhost:27017 --mongodb.tls --mongodb.tls-ca=/data/configdb/tls.crt --mongodb.tls-cert=/work-dir/mongo.pem --test] delay=30s timeout=1s period=10s #success=1 #failure=3
Environment:
METRICS_USER: <set to the key 'user' in secret 'icp-mongodb-metrics'> Optional: false
METRICS_PASSWORD: <set to the key 'password' in secret 'icp-mongodb-metrics'> Optional: false
Mounts:
/data/configdb from configdir (rw)
/tmp from tmp-metrics (rw)
/var/run/secrets/kubernetes.io/serviceaccount from ibm-mongodb-operand-token-4887s (ro)
/work-dir from mongodbdir (rw,path="workdir")
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
mongodbdir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mongodbdir-icp-mongodb-2
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: icp-mongodb
Optional: false
init:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: icp-mongodb-init
Optional: false
install:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: icp-mongodb-install
Optional: false
ca:
Type: Secret (a volume populated by a Secret)
SecretName: mongodb-root-ca-cert
Optional: false
keydir:
Type: Secret (a volume populated by a Secret)
SecretName: icp-mongodb-keyfile
Optional: false
configdir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit:
tmp-mongodb:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit:
tmp-metrics:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit:
ibm-mongodb-operand-token-4887s:
Type: Secret (a volume populated by a Secret)
SecretName: ibm-mongodb-operand-token-4887s
Optional: false
QoS Class: Burstable
Node-Selectors:
Tolerations: CriticalAddonsOnly
dedicated:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events:
Type Reason Age From Message


Normal Scheduled default-scheduler Successfully assigned ibm-common-services/icp-mongodb-2 to 10.73.236.67
Normal Pulled 75m kubelet, 10.73.236.67 Container image "quay.io/opencloudio/ibm-mongodb-install@sha256:1211ee8ece2791b91fadd5b1294749e46cb88cef5f3a37e1d9dd6890038f1043" already present on machine
Normal Created 75m kubelet, 10.73.236.67 Created container install
Normal Started 75m kubelet, 10.73.236.67 Started container install
Normal Pulled 75m kubelet, 10.73.236.67 Container image "quay.io/opencloudio/ibm-mongodb@sha256:5004b6073efd2df5eae51431e866123d386495aea1b4baa2dcac9fcbaaf7eb83" already present on machine
Normal Created 75m kubelet, 10.73.236.67 Created container bootstrap
Normal Started 75m kubelet, 10.73.236.67 Started container bootstrap
Sachins-MacBook-Pro-2:~ sachinkumarjha$

./migrate_tenant.sh hangs waiting for ibm-common-service-operator to be upgraded

[[email protected] cp3pt0-deployment]# ./migrate_tenant.sh --operator-namespace auto-ucvvr --services-namespace auto-ucvvr --cert-manager-source ibm-cert-manager-catalog --enable-licensing false -v 1
wildcard
[✔] oc command available
[✔] yq command available
[✔] oc command logged in as kube:admin
[INFO] v3.23 is less than v4.0
[INFO] catalogsource opencloud-operators is the same as opencloud-operators
[INFO] is ready for scaling down.
deployment.apps/ibm-common-service-operator scaled
Deleting operand-deployment-lifecycle-manager-app in namesapce auto-ucvvr...
[1] Removing the subscription of operand-deployment-lifecycle-manager-app in namesapce auto-ucvvr ...
subscription.operators.coreos.com "operand-deployment-lifecycle-manager-app" deleted
[2] Removing the csv of operand-deployment-lifecycle-manager-app in namesapce auto-ucvvr ...
clusterserviceversion.operators.coreos.com "operand-deployment-lifecycle-manager.v1.21.2" deleted
[✔] Remove operand-deployment-lifecycle-manager-app successfully.
operandregistry.operator.ibm.com "common-service" deleted
operandconfig.operator.ibm.com "common-service" deleted
[✔] oc command available
[✔] oc command logged in as kube:admin
De-activating IBM Cloud Pak 2.0 Cert Manager in auto-cs-control...
[INFO] Configuring Common Services Cert Manager..
configmap/ibm-cpp-config created
[INFO] Deleting existing Cert Manager CR...
certmanager.operator.ibm.com "default" deleted
[INFO] Restarting IBM Cloud Pak 2.0 Cert Manager to provide cert-rotation only...
pod "ibm-cert-manager-operator-84fb46ff9d-gjxhs" deleted
[INFO] Waiting for pod cert-manager-cainjector in namespace auto-cs-control to be deleting
[✔] Pod cert-manager-cainjector in namespace auto-cs-control is deleted
[INFO] Waiting for pod cert-manager-controller in namespace auto-cs-control to be deleting
[✔] Pod cert-manager-controller in namespace auto-cs-control is deleted
[INFO] Waiting for pod cert-manager-webhook in namespace auto-cs-control to be deleting
[✔] Pod cert-manager-webhook in namespace auto-cs-control is deleted
[INFO] Waiting for pod ibm-cert-manager-operator in namespace auto-cs-control to be running
[✔] Pod ibm-cert-manager-operator in namespace auto-cs-control is running
[✔] oc command available
[✔] oc command logged in as kube:admin
Installing cert-manager
[✗] There is a cert-manager Subscription already
[INFO] Namespace ibm-cert-manager already exists. Skip creating
[INFO] Checking existing OperatorGroup in ibm-cert-manager:
[INFO] OperatorGroup already exists in ibm-cert-manager. Skip creating
[INFO] Creating following Subscription:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: ibm-cert-manager-operator
namespace: ibm-cert-manager
spec:
channel: v4.0
installPlanApproval: Automatic
name: ibm-cert-manager-operator
source: ibm-cert-manager-catalog
sourceNamespace: openshift-marketplace
subscription.operators.coreos.com/ibm-cert-manager-operator configured
[INFO] Waiting for operator ibm-cert-manager-operator in namespace ibm-cert-manager to be made available
[✔] Operator ibm-cert-manager-operator in namespace ibm-cert-manager is available
[✔] Migration is completed for Cloud Pak 3.0 Foundational singleton services.
[INFO] Configuring CommonService CR common-service in auto-ucvvr
Warning: resource commonservices/common-service is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by oc apply. oc apply should only be used on resources created declaratively by either oc create --save-config or oc apply. The missing annotation will be patched automatically.
commonservice.operator.ibm.com/common-service configured
[INFO] v3.23 is less than v4.0
[INFO] catalogsource opencloud-operators is the same as opencloud-operators
[INFO] ibm-common-service-operator is ready for updaing the subscription.
Warning: resource subscriptions/ibm-common-service-operator-v3.23-opencloud-operators-openshift-marketplace is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by oc apply. oc apply should only be used on resources created declaratively by either oc create --save-config or oc apply. The missing annotation will be patched automatically.
subscription.operators.coreos.com/ibm-common-service-operator-v3.23-opencloud-operators-openshift-marketplace configured
[INFO] Waiting for operator ibm-common-service-operator to be upgraded
[INFO] RETRYING: Waiting for operator ibm-common-service-operator to be upgraded (10 left)
[INFO] RETRYING: Waiting for operator ibm-common-service-operator to be upgraded (9 left)
[INFO] RETRYING: Waiting for operator ibm-common-service-operator to be upgraded (8 left)
[INFO] RETRYING: Waiting for operator ibm-common-service-operator to be upgraded (7 left)
[INFO] RETRYING: Waiting for operator ibm-common-service-operator to be upgraded (6 left)
[INFO] RETRYING: Waiting for operator ibm-common-service-operator to be upgraded (5 left)
[INFO] RETRYING: Waiting for operator ibm-common-service-operator to be upgraded (4 left)
[INFO] RETRYING: Waiting for operator ibm-common-service-operator to be upgraded (3 left)
[INFO] RETRYING: Waiting for operator ibm-common-service-operator to be upgraded (2 left)
[INFO] RETRYING: Waiting for operator ibm-common-service-operator to be upgraded (1 left)
[✘] Timeout after 5 minutes waiting for operator ibm-common-service-operator to be upgraded
