
Comments (6)

andtos90 commented on August 19, 2024

You're right! It works! I had some issues related to escaping the command and I didn't think the error could be related to this. Many thanks!


tico24 commented on August 19, 2024

We have a CI test for (almost) exactly what you're doing, and it didn't fail this morning when I pushed the latest release. https://github.com/sorry-cypress/charts/blob/main/charts/sorry-cypress/ci/mongo-values.yaml

While this isn't instantly helpful for you, it should at least give you a bit of hope :)

Maybe start from scratch, copy just the values changes from that particular CI job, and build from there?

If you're still struggling after that, showing us the output of helm template for the director deployment would be useful.
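
(For reference, a minimal sketch of how to produce that output; it reuses the release name, namespace and --set override that appear later in this thread, and --show-only limits the render to the director manifest:)

helm template sorry-cypress-v2 sorry-cypress/sorry-cypress \
  --namespace redacted \
  --set director.environmentVariables.executionDriver=\"../execution/mongo/driver\" \
  --show-only templates/deployment-director.yml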


andtos90 commented on August 19, 2024

I tried with:

> helm install  --namespace redacted sorry-cypress-v2 sorry-cypress/sorry-cypress --set director.environmentVariables.executionDriver=\"../execution/mongo/driver\"
NAME: sorry-cypress-v2
LAST DEPLOYED: Tue Jul 26 10:46:25 2022
NAMESPACE: redacted
STATUS: deployed
REVISION: 1

and I've got the same error:

MongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017
    at Timeout._onTimeout (/app/node_modules/mongodb/lib/core/sdam/topology.js:438:30)
    at listOnTimeout (internal/timers.js:557:17)
    at processTimers (internal/timers.js:500:7) {
  reason: TopologyDescription {
    type: 'Single',
    setName: null,
    maxSetVersion: null,
    maxElectionId: null,
    servers: Map(1) { 'localhost:27017' => [ServerDescription] },
    stale: false,
    compatible: true,
    compatibilityError: null,
    logicalSessionTimeoutMinutes: null,
    heartbeatFrequencyMS: 10000,
    localThresholdMS: 15,
    commonWireVersion: null
  }
}

And if I access the dashboard I see this (it also happens if I remove the mongo driver):

[Screenshot: dashboard view, 2022-07-26 at 11:02:28]

I think it could be related to a custom nginx instance that we're using just for sorry-cypress, and that's why I'm using this configuration:

helm install --namespace redacted sorry-cypress-v2 sorry-cypress/sorry-cypress \
  --set api.ingress.ingressClassName=nginx-redacted-cypress \
  --set dashboard.ingress.ingressClassName=nginx-redacted-cypress \
  --set director.ingress.ingressClassName=nginx-redacted-cypress \
  --set dashboard.ingress.hosts\[0\].host=dashboard.cypress.dev.redacted.net \
  --set api.ingress.hosts\[0\].host=api.cypress.dev.redacted.net \
  --set director.ingress.hosts\[0\].host=director.cypress.dev.redacted.net \
  --set dashboard.environmentVariables.graphQlSchemaUrl=http://api.cypress.dev.redacted.net/graphql \
  --set director.environmentVariables.dashboardUrl=http://dashboard.cypress.dev.redacted.net
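
(As an aside, the same overrides can be kept in a values file and applied with -f; this is only a sketch, with the hostnames and ingress class copied from the command above and path: / carried over from the chart defaults shown later in this thread:)

# redacted-values.yaml (hypothetical file name); apply with:
#   helm install --namespace redacted sorry-cypress-v2 sorry-cypress/sorry-cypress -f redacted-values.yaml
api:
  ingress:
    ingressClassName: nginx-redacted-cypress
    hosts:
      - host: api.cypress.dev.redacted.net
        path: /
dashboard:
  ingress:
    ingressClassName: nginx-redacted-cypress
    hosts:
      - host: dashboard.cypress.dev.redacted.net
        path: /
  environmentVariables:
    graphQlSchemaUrl: http://api.cypress.dev.redacted.net/graphql
director:
  ingress:
    ingressClassName: nginx-redacted-cypress
    hosts:
      - host: director.cypress.dev.redacted.net
        path: /
  environmentVariables:
    dashboardUrl: http://dashboard.cypress.dev.redacted.net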

Do I need any specific configuration for the MongoDB instance? Is there any reason why the API works but the director doesn't when the MongoDB driver is enabled?


tico24 commented on August 19, 2024

Your screenshots imply that the API is also not accessible. So yes, given the information provided, I'd point the blame at your ingress controller.
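
(A quick way to sanity-check that from the cluster side; the namespace comes from the install command above and the ingress name from the rendered manifest later in the thread, so treat this as a sketch:)

# list the ingress classes the cluster actually knows about
kubectl get ingressclass
# confirm the sorry-cypress ingresses exist and picked up the expected class and hosts
kubectl -n redacted get ingress
kubectl -n redacted describe ingress sorry-cypress-v2-api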

The output of helm template, as previously requested, would still be useful.


andtos90 commented on August 19, 2024

This one? I'm new to k8s/helm; if this is wrong, can you tell me the command I should use to get the output?

NAME: sorry-cypress-v2
LAST DEPLOYED: Tue Jul 26 11:17:28 2022
NAMESPACE: pep
STATUS: pending-install
REVISION: 1
USER-SUPPLIED VALUES:
director:
  environmentVariables:
    executionDriver: '"../execution/mongo/driver"'

COMPUTED VALUES:
api:
  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: {{ include "sorry-cypress-helm.fullname" . }}-api
          topologyKey: kubernetes.io/hostname
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                app: {{ include "sorry-cypress-helm.fullname" . }}-api
            topologyKey: failure-domain.beta.kubernetes.io/zone
  enabled: true
  image:
    pullPolicy: Always
    repository: agoldis/sorry-cypress-api
  ingress:
    annotations: {}
    hosts:
    - host: api.chart-example.local
      path: /
    ingressClassName: nginx
    labels: {}
    tls: []
  initContainers: []
  nodeSelector: {}
  podAnnotations: {}
  podLabels: {}
  readinessProbe:
    failureThreshold: 5
    periodSeconds: 5
    successThreshold: 2
    timeoutSeconds: 3
  replicas: 1
  resources: {}
  service:
    port: 4000
  tolerations: []
dashboard:
  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: {{ include "sorry-cypress-helm.fullname" . }}-dashboard
          topologyKey: kubernetes.io/hostname
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                app: {{ include "sorry-cypress-helm.fullname" . }}-dashboard
            topologyKey: failure-domain.beta.kubernetes.io/zone
  enabled: true
  environmentVariables:
    ciUrl: ""
    graphQlClientCredentials: ""
    graphQlSchemaUrl: ""
  image:
    pullPolicy: Always
    repository: agoldis/sorry-cypress-dashboard
  ingress:
    annotations: {}
    enabled: true
    hosts:
    - host: dashboard.chart-example.local
      path: /
    ingressClassName: nginx
    labels: {}
    tls: []
  initContainers: []
  nodeSelector: {}
  podAnnotations: {}
  podLabels: {}
  replicas: 1
  resources: {}
  service:
    port: 8080
  tolerations: []
director:
  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: {{ include "sorry-cypress-helm.fullname" . }}-director
          topologyKey: kubernetes.io/hostname
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                app: {{ include "sorry-cypress-helm.fullname" . }}-director
            topologyKey: failure-domain.beta.kubernetes.io/zone
  environmentVariables:
    allowedKeys: ""
    dashboardUrl: ""
    executionDriver: '"../execution/mongo/driver"'
    inactivityTimeoutSeconds: ""
    screenshotsDriver: ../screenshots/dummy.driver
  image:
    pullPolicy: Always
    repository: agoldis/sorry-cypress-director
  ingress:
    annotations: {}
    enabled: true
    hosts:
    - host: director.chart-example.local
      path: /
    ingressClassName: nginx
    labels: {}
    tls: []
  initContainers: []
  nodeSelector: {}
  podAnnotations: {}
  podLabels: {}
  readinessProbe:
    failureThreshold: 5
    periodSeconds: 5
    successThreshold: 2
    timeoutSeconds: 3
  replicas: 1
  resources: {}
  service:
    port: 1234
  serviceAccount:
    annotations: []
    create: false
    name: null
  tolerations: []
imagePullSecrets: []
minio:
  DeploymentUpdate:
    maxSurge: 100%
    maxUnavailable: 0
    type: RollingUpdate
  StatefulSetUpdate:
    updateStrategy: RollingUpdate
  accessKey: ""
  affinity: {}
  azuregateway:
    enabled: false
    replicas: 4
  bucketRoot: ""
  buckets: []
  certsPath: /etc/minio/certs/
  clusterDomain: cluster.local
  configPathmc: /etc/minio/mc/
  defaultBucket:
    enabled: true
    name: sorry-cypress
    policy: none
    purge: false
  drivesPerNode: 1
  enabled: false
  endpoint: storage.yourdomain.com
  environment: {}
  etcd:
    clientCert: ""
    clientCertKey: ""
    corednsPathPrefix: ""
    endpoints: []
    pathPrefix: ""
  existingSecret: ""
  extraArgs: []
  fullnameOverride: ""
  gcsgateway:
    enabled: false
    gcsKeyJson: ""
    projectId: ""
    replicas: 4
  global: {}
  helmKubectlJqImage:
    pullPolicy: IfNotPresent
    repository: bskim45/helm-kubectl-jq
    tag: 3.1.0
  image:
    pullPolicy: IfNotPresent
    repository: minio/minio
    tag: RELEASE.2020-12-03T05-49-24Z
  imagePullSecrets: []
  ingress:
    annotations: {}
    enabled: false
    hosts:
    - chart-example.local
    labels: {}
    path: /
    tls: []
  makeBucketJob:
    annotations: null
    podAnnotations: null
    resources:
      requests:
        memory: 128Mi
    securityContext:
      enabled: false
      fsGroup: 1000
      runAsGroup: 1000
      runAsUser: 1000
  mcImage:
    pullPolicy: IfNotPresent
    repository: minio/mc
    tag: RELEASE.2020-11-25T23-04-07Z
  metrics:
    serviceMonitor:
      additionalLabels: {}
      enabled: false
      relabelConfigs: {}
  mode: standalone
  mountPath: /export
  nameOverride: ""
  nasgateway:
    enabled: false
    pv: null
    replicas: 4
  networkPolicy:
    allowExternal: true
    enabled: false
  nodeSelector: {}
  persistence:
    VolumeName: ""
    accessMode: ReadWriteOnce
    enabled: true
    existingClaim: ""
    size: 10Gi
    storageClass: ""
    subPath: ""
  podAnnotations: {}
  podDisruptionBudget:
    enabled: false
    maxUnavailable: 1
  podLabels: {}
  priorityClassName: ""
  replicas: 4
  resources:
    requests:
      memory: 4Gi
  s3gateway:
    accessKey: ""
    enabled: false
    replicas: 4
    secretKey: ""
    serviceEndpoint: ""
  secretKey: ""
  securityContext:
    enabled: true
    fsGroup: 1000
    runAsGroup: 1000
    runAsUser: 1000
  service:
    annotations: {}
    clusterIP: null
    externalIPs: []
    nodePort: 32000
    port: 9000
    type: ClusterIP
  serviceAccount:
    create: true
    name: null
  tls:
    certSecret: ""
    enabled: false
    privateKey: private.key
    publicCrt: public.crt
  tolerations: []
  trustedCertsSecret: ""
  updatePrometheusJob:
    annotations: null
    podAnnotations: null
    securityContext:
      enabled: false
      fsGroup: 1000
      runAsGroup: 1000
      runAsUser: 1000
  url: http://storage.yourdomain.com
  zones: 1
mongodb:
  affinity: {}
  annotations: {}
  arbiter:
    affinity: {}
    annotations: {}
    configuration: ""
    containerSecurityContext:
      enabled: true
      runAsUser: 1001
    customLivenessProbe: {}
    customReadinessProbe: {}
    enabled: true
    extraEnvVars: []
    extraFlags: []
    extraVolumeMounts: []
    extraVolumes: []
    hostAliases: []
    initContainers: {}
    labels: {}
    livenessProbe:
      enabled: true
      failureThreshold: 6
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    nodeAffinityPreset:
      key: ""
      type: ""
      values: []
    nodeSelector: {}
    pdb:
      create: false
      minAvailable: 1
    podAffinityPreset: ""
    podAnnotations: {}
    podAntiAffinityPreset: soft
    podLabels: {}
    podSecurityContext:
      enabled: true
      fsGroup: 1001
      sysctls: []
    readinessProbe:
      enabled: true
      failureThreshold: 6
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    resources:
      limits: {}
      requests: {}
    service:
      nameOverride: ""
    sidecars: {}
    tolerations: []
  architecture: replicaset
  auth:
    enabled: false
    replicaSetKey: ""
    rootPassword: ""
  clusterDomain: cluster.local
  common:
    exampleValue: common-chart
    global: {}
  commonAnnotations: {}
  configuration: ""
  containerSecurityContext:
    enabled: true
    runAsNonRoot: true
    runAsUser: 1001
  customLivenessProbe: {}
  customReadinessProbe: {}
  customStartupProbe: {}
  directoryPerDB: false
  disableJavascript: false
  disableSystemLog: false
  enableIPv6: false
  enableJournal: true
  external_db:
    enabled: false
    mongoServer: ""
  externalAccess:
    autoDiscovery:
      enabled: false
      image:
        pullPolicy: IfNotPresent
        pullSecrets: []
        registry: docker.io
        repository: bitnami/kubectl
        tag: 1.19.11-debian-10-r8
      resources:
        limits: {}
        requests: {}
    enabled: true
    hidden:
      enabled: false
      service:
        annotations: {}
        loadBalancerIPs: []
        loadBalancerSourceRanges: []
        nodePorts: []
        port: 27017
        type: LoadBalancer
    service:
      annotations: {}
      loadBalancerIPs: []
      loadBalancerSourceRanges: []
      nodePorts: []
      port: 27017
      type: ClusterIP
  extraDeploy: []
  extraEnvVars: []
  extraFlags: []
  extraVolumeMounts: []
  extraVolumes: []
  global: {}
  hidden:
    affinity: {}
    annotations: {}
    configuration: ""
    customLivenessProbe: {}
    customReadinessProbe: {}
    enabled: false
    extraEnvVars: []
    extraFlags: []
    extraVolumeMounts: []
    extraVolumes: []
    initContainers: {}
    labels: {}
    livenessProbe:
      enabled: true
      failureThreshold: 6
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    nodeAffinityPreset:
      key: ""
      type: ""
      values: []
    nodeSelector: {}
    pdb:
      create: false
      minAvailable: 1
    persistence:
      accessModes:
      - ReadWriteOnce
      annotations: {}
      enabled: true
      mountPath: /bitnami/mongodb
      size: 8Gi
      subPath: ""
      volumeClaimTemplates: {}
    podAffinityPreset: ""
    podAnnotations: {}
    podAntiAffinityPreset: soft
    podLabels: {}
    podManagementPolicy: OrderedReady
    readinessProbe:
      enabled: true
      failureThreshold: 6
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    replicaCount: 1
    resources:
      limits: {}
      requests: {}
    sidecars: {}
    strategyType: RollingUpdate
    tolerations: []
  hostAliases: []
  image:
    debug: false
    pullPolicy: IfNotPresent
    registry: docker.io
    repository: bitnami/mongodb
    tag: 4.4.6-debian-10-r8
  initContainers: {}
  initdbScripts: {}
  internal_db:
    enabled: true
  labels: {}
  livenessProbe:
    enabled: true
    failureThreshold: 6
    initialDelaySeconds: 30
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  metrics:
    containerPort: 9216
    enabled: false
    extraFlags: ""
    extraUri: ""
    image:
      pullPolicy: IfNotPresent
      registry: docker.io
      repository: bitnami/mongodb-exporter
      tag: 0.11.2-debian-10-r178
    livenessProbe:
      enabled: true
      failureThreshold: 3
      initialDelaySeconds: 15
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 5
    prometheusRule:
      additionalLabels: {}
      enabled: false
      rules: {}
    readinessProbe:
      enabled: true
      failureThreshold: 3
      initialDelaySeconds: 5
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 1
    resources:
      limits: {}
      requests: {}
    service:
      annotations:
        prometheus.io/path: /metrics
        prometheus.io/port: '{{ .Values.metrics.service.port }}'
        prometheus.io/scrape: "true"
      port: 9216
      type: ClusterIP
    serviceMonitor:
      additionalLabels: {}
      enabled: false
      interval: 30s
  mongoConnectionString: ""
  mongoDatabase: sorry-cypress
  mongoSecretConnectionString:
    enableCustomSecret: false
    enableSecret: false
  nodeAffinityPreset:
    key: ""
    type: ""
    values: []
  nodeSelector: {}
  pdb:
    create: false
    minAvailable: 1
  persistence:
    accessModes:
    - ReadWriteOnce
    annotations: {}
    enabled: false
    mountPath: /bitnami/mongodb
    size: 1Gi
    subPath: ""
    volumeClaimTemplates: {}
  podAffinityPreset: ""
  podAnnotations: {}
  podAntiAffinityPreset: soft
  podLabels: {}
  podManagementPolicy: OrderedReady
  podSecurityContext:
    enabled: true
    fsGroup: 1001
    sysctls: []
  podSecurityPolicy:
    allowPrivilegeEscalation: false
    create: false
    privileged: false
    spec: {}
  rbac:
    create: false
    role:
      rules: []
  readinessProbe:
    enabled: true
    failureThreshold: 6
    initialDelaySeconds: 5
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  replicaCount: 2
  replicaSetHostnames: true
  replicaSetName: rs0
  resources:
    limits: {}
    requests:
      cpu: 25m
      memory: 90Mi
  service:
    annotations: {}
    externalIPs: []
    loadBalancerSourceRanges: []
    nameOverride: ""
    nodePort: ""
    port: 27017
    portName: mongodb
    type: ClusterIP
  serviceAccount:
    annotations: {}
    create: true
  sidecars: {}
  startupProbe:
    enabled: false
    failureThreshold: 30
    initialDelaySeconds: 5
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  strategyType: RollingUpdate
  systemLogVerbosity: 0
  tls:
    enabled: false
    image:
      pullPolicy: IfNotPresent
      registry: docker.io
      repository: bitnami/nginx
      tag: 1.19.10-debian-10-r39
  tolerations: []
  useStatefulSet: false
  volumePermissions:
    enabled: false
    image:
      pullPolicy: Always
      pullSecrets: []
      registry: docker.io
      repository: bitnami/bitnami-shell
      tag: 10-debian-10-r91
    resources:
      limits: {}
      requests: {}
    securityContext:
      runAsUser: 0
runCleaner:
  clusterDomain: cluster.local
  daysToKeep: 200
  enabled: false
  image:
    repository: ghcr.io/sendible-labs/sorry-cypress-run-cleaner
    tag: stable
  schedule: 0 1 * * *
s3:
  accessKeyId: ""
  acl: public-read
  bucketName: example-bucket
  ingress:
    annotations: {}
    enabled: false
    hosts:
    - host: static.chart-example.local
      path: /
    ingressClassName: nginx
    labels: {}
    tls: []
  readUrlPrefix: ""
  region: us-east-1
  secretAccessKey: ""

HOOKS:
---
# Source: sorry-cypress/templates/test/test-connections.yaml
# A very basic set of tests to query that the appropriate services work and connect to a pod as long as they are enabled in the Values.yaml file
apiVersion: v1
kind: Pod
metadata:
  name: "sorry-cypress-v2-test-dashboard-connection"
  labels:
    app.kubernetes.io/name: "sorry-cypress-v2-test-dashboard-connection"
  annotations:
    "helm.sh/hook": test-success
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args:  ['sorry-cypress-v2-dashboard:8080']
  restartPolicy: Never
---
# Source: sorry-cypress/templates/test/test-connections.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "sorry-cypress-v2-test-director-connection"
  labels:
    app.kubernetes.io/name: "sorry-cypress-v2-test-director-connection"
  annotations:
    "helm.sh/hook": test-success
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args:  ['sorry-cypress-v2-director:1234']
  restartPolicy: Never
---
# Source: sorry-cypress/templates/test/test-connections.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "sorry-cypress-v2-test-mongodb-connection"
  labels:
    app.kubernetes.io/name: "sorry-cypress-v2-test-mongodb-connection"
  annotations:
    "helm.sh/hook": test-success
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args:  ['sorry-cypress-v2-mongodb-headless:27017']
  restartPolicy: Never
---
# Source: sorry-cypress/templates/test/test-connections.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: "sorry-cypress-v2-test-api-connection"
  labels:
    app.kubernetes.io/name: "sorry-cypress-v2-test-api-connection"
  annotations:
    "helm.sh/hook": test-success
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  containers:
    - name: wget
      image: busybox
      command:
      - wget
      - 'sorry-cypress-v2-api:4000/.well-known/apollo/server-health'
  restartPolicy: Never
MANIFEST:
---
# Source: sorry-cypress/charts/mongodb/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sorry-cypress-v2-mongodb
  namespace: pep
  labels:
    app.kubernetes.io/name: mongodb
    helm.sh/chart: mongodb-10.19.0
    app.kubernetes.io/instance: sorry-cypress-v2
    app.kubernetes.io/managed-by: Helm
secrets:
  - name: sorry-cypress-v2-mongodb
---
# Source: sorry-cypress/charts/mongodb/templates/replicaset/scripts-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: sorry-cypress-v2-mongodb-scripts
  namespace: pep
  labels:
    app.kubernetes.io/name: mongodb
    helm.sh/chart: mongodb-10.19.0
    app.kubernetes.io/instance: sorry-cypress-v2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: mongodb
data:
  setup.sh: |-
    #!/bin/bash

    . /opt/bitnami/scripts/mongodb-env.sh

    echo "Advertised Hostname: $MONGODB_ADVERTISED_HOSTNAME"

    if [[ "$MY_POD_NAME" = "sorry-cypress-v2-mongodb-0" ]]; then
        echo "Pod name matches initial primary pod name, configuring node as a primary"
        export MONGODB_REPLICA_SET_MODE="primary"
    else
        echo "Pod name doesn't match initial primary pod name, configuring node as a secondary"
        export MONGODB_REPLICA_SET_MODE="secondary"
        export MONGODB_INITIAL_PRIMARY_ROOT_PASSWORD="$MONGODB_ROOT_PASSWORD"
        export MONGODB_INITIAL_PRIMARY_PORT_NUMBER="$MONGODB_PORT_NUMBER"
        export MONGODB_ROOT_PASSWORD="" MONGODB_USERNAME="" MONGODB_DATABASE="" MONGODB_PASSWORD=""
        export MONGODB_ROOT_PASSWORD_FILE="" MONGODB_USERNAME_FILE="" MONGODB_DATABASE_FILE="" MONGODB_PASSWORD_FILE=""
    fi

    exec /opt/bitnami/scripts/mongodb/entrypoint.sh /opt/bitnami/scripts/mongodb/run.sh
  setup-hidden.sh: |-
    #!/bin/bash

    . /opt/bitnami/scripts/mongodb-env.sh
    echo "Advertised Hostname: $MONGODB_ADVERTISED_HOSTNAME"
    echo "Configuring node as a hidden node"
    export MONGODB_REPLICA_SET_MODE="hidden"
    export MONGODB_INITIAL_PRIMARY_ROOT_PASSWORD="$MONGODB_ROOT_PASSWORD"
    export MONGODB_INITIAL_PRIMARY_PORT_NUMBER="$MONGODB_PORT_NUMBER"
    export MONGODB_ROOT_PASSWORD="" MONGODB_USERNAME="" MONGODB_DATABASE="" MONGODB_PASSWORD=""
    export MONGODB_ROOT_PASSWORD_FILE="" MONGODB_USERNAME_FILE="" MONGODB_DATABASE_FILE="" MONGODB_PASSWORD_FILE=""
    exec /opt/bitnami/scripts/mongodb/entrypoint.sh /opt/bitnami/scripts/mongodb/run.sh
---
# Source: sorry-cypress/charts/mongodb/templates/arbiter/headless-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: sorry-cypress-v2-mongodb-arbiter-headless
  namespace: pep
  labels:
    app.kubernetes.io/name: mongodb
    helm.sh/chart: mongodb-10.19.0
    app.kubernetes.io/instance: sorry-cypress-v2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: arbiter
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - name: tcp-mongodb
      port: 27017
      targetPort: mongodb
  selector:
    app.kubernetes.io/name: mongodb
    app.kubernetes.io/instance: sorry-cypress-v2
    app.kubernetes.io/component: arbiter
---
# Source: sorry-cypress/charts/mongodb/templates/replicaset/headless-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: sorry-cypress-v2-mongodb-headless
  namespace: pep
  labels:
    app.kubernetes.io/name: mongodb
    helm.sh/chart: mongodb-10.19.0
    app.kubernetes.io/instance: sorry-cypress-v2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: mongodb
spec:
  type: ClusterIP
  clusterIP: None
  publishNotReadyAddresses: true
  ports:
    - name: mongodb
      port: 27017
      targetPort: mongodb
  selector:
    app.kubernetes.io/name: mongodb
    app.kubernetes.io/instance: sorry-cypress-v2
    app.kubernetes.io/component: mongodb
---
# Source: sorry-cypress/charts/mongodb/templates/replicaset/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: sorry-cypress-v2-mongodb-0
  namespace: pep
  labels:
    app.kubernetes.io/name: mongodb
    helm.sh/chart: mongodb-10.19.0
    app.kubernetes.io/instance: sorry-cypress-v2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: mongodb
spec:
  type: ClusterIP
  ports:
    - name: mongodb
      port: 27017
      targetPort: mongodb
  selector:
    app.kubernetes.io/name: mongodb
    app.kubernetes.io/instance: sorry-cypress-v2
    app.kubernetes.io/component: mongodb
    statefulset.kubernetes.io/pod-name: sorry-cypress-v2-mongodb-0
---
# Source: sorry-cypress/charts/mongodb/templates/replicaset/svc.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: sorry-cypress-v2-mongodb-1
  namespace: pep
  labels:
    app.kubernetes.io/name: mongodb
    helm.sh/chart: mongodb-10.19.0
    app.kubernetes.io/instance: sorry-cypress-v2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: mongodb
spec:
  type: ClusterIP
  ports:
    - name: mongodb
      port: 27017
      targetPort: mongodb
  selector:
    app.kubernetes.io/name: mongodb
    app.kubernetes.io/instance: sorry-cypress-v2
    app.kubernetes.io/component: mongodb
    statefulset.kubernetes.io/pod-name: sorry-cypress-v2-mongodb-1
---
# Source: sorry-cypress/templates/service-api.yml
apiVersion: v1
kind: Service
metadata:
  name: sorry-cypress-v2-api
spec:
  ports:
  - name: "4000"
    port: 4000
    targetPort: 4000
  selector:
    app: sorry-cypress-v2-api
  type: ClusterIP
---
# Source: sorry-cypress/templates/service-dashboard.yml
apiVersion: v1
kind: Service
metadata:
  name: sorry-cypress-v2-dashboard
spec:
  ports:
  - name: "8080"
    port: 8080
    targetPort: 8080
  selector:
    app: sorry-cypress-v2-dashboard
  type: ClusterIP
---
# Source: sorry-cypress/templates/service-director.yml
apiVersion: v1
kind: Service
metadata:
  name: sorry-cypress-v2-director
spec:
  ports:
  - name: "1234"
    port: 1234
    targetPort: 1234
  selector:
    app: sorry-cypress-v2-director
  type: ClusterIP
---
# Source: sorry-cypress/templates/deployment-api.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sorry-cypress-v2-api
  labels:
    helm.sh/chart: sorry-cypress-1.6.2
    app.kubernetes.io/name: sorry-cypress
    app.kubernetes.io/instance: sorry-cypress-v2
    app.kubernetes.io/version: "2.1.7"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sorry-cypress-v2-api
  template:
    metadata:
      name: sorry-cypress-v2-api
      labels:
        app: sorry-cypress-v2-api
    spec:
      nodeSelector:
        
        {}
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: sorry-cypress-v2-api
              topologyKey: kubernetes.io/hostname
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: sorry-cypress-v2-api
                topologyKey: failure-domain.beta.kubernetes.io/zone
        
      containers:
      - env:
        - name: MONGODB_DATABASE
          value: sorry-cypress
        - name: MONGODB_URI
          value: "mongodb://sorry-cypress-v2-mongodb-0:27017"
        image: "agoldis/sorry-cypress-api:2.1.7"
        imagePullPolicy: Always
        name: sorry-cypress-v2-api
        ports:
        - containerPort: 4000
        resources:
          {}
        readinessProbe:
          httpGet:
            path: /.well-known/apollo/server-health
            port: 4000
          periodSeconds: 5
          timeoutSeconds: 3
          successThreshold: 2
          failureThreshold: 5
      restartPolicy: Always
      serviceAccountName: ""
      volumes: null
---
# Source: sorry-cypress/templates/deployment-dashboard.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sorry-cypress-v2-dashboard
  labels:
    helm.sh/chart: sorry-cypress-1.6.2
    app.kubernetes.io/name: sorry-cypress
    app.kubernetes.io/instance: sorry-cypress-v2
    app.kubernetes.io/version: "2.1.7"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sorry-cypress-v2-dashboard
  template:
    metadata:
      name: sorry-cypress-v2-dashboard
      labels:
        app: sorry-cypress-v2-dashboard
    spec:
      nodeSelector:
        
        {}
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: sorry-cypress-v2-dashboard
              topologyKey: kubernetes.io/hostname
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: sorry-cypress-v2-dashboard
                topologyKey: failure-domain.beta.kubernetes.io/zone
        
      containers:
      - env:
        - name: GRAPHQL_SCHEMA_URL
          value: ""
        - name: PORT
          value: "8080"
        image: "agoldis/sorry-cypress-dashboard:2.1.7"
        imagePullPolicy: Always
        name: sorry-cypress-v2-dashboard
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          periodSeconds: 10
          timeoutSeconds: 5
          successThreshold: 2
          failureThreshold: 5
        resources:
          {}
      restartPolicy: Always
      serviceAccountName: ""
      volumes: null
---
# Source: sorry-cypress/templates/deployment-director.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sorry-cypress-v2-director
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sorry-cypress-v2-director
  template:
    metadata:
      name: sorry-cypress-v2-director
      labels:
        app: sorry-cypress-v2-director
    spec:
      nodeSelector:
        
        {}
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: sorry-cypress-v2-director
              topologyKey: kubernetes.io/hostname
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: sorry-cypress-v2-director
                topologyKey: failure-domain.beta.kubernetes.io/zone
        
      containers:
      - env:
        - name: DASHBOARD_URL
          value: ""
        - name: ALLOWED_KEYS
          value: 
        - name: PORT
          value: "1234"
        - name: EXECUTION_DRIVER
          value: "../execution/mongo/driver"
        - name: SCREENSHOTS_DRIVER
          value: ../screenshots/dummy.driver
        - name: INACTIVITY_TIMEOUT_SECONDS
          value: ""
        image: "agoldis/sorry-cypress-director:2.1.7"
        imagePullPolicy: Always
        name: sorry-cypress-v2-director
        ports:
        - containerPort: 1234
        resources:
          {}
        readinessProbe:
          httpGet:
            path: /health-check-db
            port: 1234
          periodSeconds: 5
          timeoutSeconds: 3
          successThreshold: 2
          failureThreshold: 5
      restartPolicy: Always
      volumes: null
---
# Source: sorry-cypress/charts/mongodb/templates/arbiter/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sorry-cypress-v2-mongodb-arbiter
  namespace: pep
  labels:
    app.kubernetes.io/name: mongodb
    helm.sh/chart: mongodb-10.19.0
    app.kubernetes.io/instance: sorry-cypress-v2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: arbiter
spec:
  serviceName: sorry-cypress-v2-mongodb-arbiter-headless
  selector:
    matchLabels:
      app.kubernetes.io/name: mongodb
      app.kubernetes.io/instance: sorry-cypress-v2
      app.kubernetes.io/component: arbiter
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mongodb
        helm.sh/chart: mongodb-10.19.0
        app.kubernetes.io/instance: sorry-cypress-v2
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: arbiter
    spec:
      
      serviceAccountName: sorry-cypress-v2-mongodb
      affinity:
        podAffinity:
          
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: mongodb
                    app.kubernetes.io/instance: sorry-cypress-v2
                    app.kubernetes.io/component: arbiter
                namespaces:
                  - "pep"
                topologyKey: kubernetes.io/hostname
              weight: 1
        nodeAffinity:
          
      securityContext:
        fsGroup: 1001
        sysctls: []
      initContainers:
      containers:
        - name: mongodb-arbiter
          image: docker.io/bitnami/mongodb:4.4.6-debian-10-r8
          imagePullPolicy: "IfNotPresent"
          securityContext:
            runAsUser: 1001
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: K8S_SERVICE_NAME
              value: "sorry-cypress-v2-mongodb-arbiter-headless"
            - name: MONGODB_REPLICA_SET_MODE
              value: "arbiter"
            - name: MONGODB_INITIAL_PRIMARY_HOST
              value: "sorry-cypress-v2-mongodb-0.sorry-cypress-v2-mongodb-headless.$(MY_POD_NAMESPACE).svc.cluster.local"
            - name: MONGODB_REPLICA_SET_NAME
              value: "rs0"
            - name: MONGODB_ADVERTISED_HOSTNAME
              value: "$(MY_POD_NAME).$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local"
            - name: ALLOW_EMPTY_PASSWORD
              value: "yes"
          ports:
            - containerPort: 27017
              name: mongodb
          livenessProbe:
            tcpSocket:
              port: mongodb
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          readinessProbe:
            tcpSocket:
              port: mongodb
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          resources:
            limits: {}
            requests: {}
---
# Source: sorry-cypress/charts/mongodb/templates/replicaset/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sorry-cypress-v2-mongodb
  namespace: pep
  labels:
    app.kubernetes.io/name: mongodb
    helm.sh/chart: mongodb-10.19.0
    app.kubernetes.io/instance: sorry-cypress-v2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: mongodb
spec:
  serviceName: sorry-cypress-v2-mongodb-headless
  podManagementPolicy: OrderedReady
  replicas: 2
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app.kubernetes.io/name: mongodb
      app.kubernetes.io/instance: sorry-cypress-v2
      app.kubernetes.io/component: mongodb
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mongodb
        helm.sh/chart: mongodb-10.19.0
        app.kubernetes.io/instance: sorry-cypress-v2
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: mongodb
    spec:
      
      serviceAccountName: sorry-cypress-v2-mongodb
      affinity:
        podAffinity:
          
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: mongodb
                    app.kubernetes.io/instance: sorry-cypress-v2
                    app.kubernetes.io/component: mongodb
                namespaces:
                  - "pep"
                topologyKey: kubernetes.io/hostname
              weight: 1
        nodeAffinity:
          
      securityContext:
        fsGroup: 1001
        sysctls: []
      containers:
        - name: mongodb
          image: docker.io/bitnami/mongodb:4.4.6-debian-10-r8
          imagePullPolicy: "IfNotPresent"
          securityContext:
            runAsNonRoot: true
            runAsUser: 1001
          command:
            - /scripts/setup.sh
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: K8S_SERVICE_NAME
              value: "sorry-cypress-v2-mongodb-headless"
            - name: MONGODB_INITIAL_PRIMARY_HOST
              value: "sorry-cypress-v2-mongodb-0.$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local"
            - name: MONGODB_REPLICA_SET_NAME
              value: "rs0"
            - name: ALLOW_EMPTY_PASSWORD
              value: "yes"
            - name: MONGODB_SYSTEM_LOG_VERBOSITY
              value: "0"
            - name: MONGODB_DISABLE_SYSTEM_LOG
              value: "no"
            - name: MONGODB_DISABLE_JAVASCRIPT
              value: "no"
            - name: MONGODB_ENABLE_JOURNAL
              value: "yes"
            - name: MONGODB_ENABLE_IPV6
              value: "no"
            - name: MONGODB_ENABLE_DIRECTORY_PER_DB
              value: "no"
          ports:
            - containerPort: 27017
              name: mongodb
          livenessProbe:
            exec:
              command:
                - mongo
                - --disableImplicitSessions
                - --eval
                - "db.adminCommand('ping')"
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
                - bash
                - -ec
                - |
                  # Run the proper check depending on the version
                  [[ $(mongo --version | grep "MongoDB shell") =~ ([0-9]+\.[0-9]+\.[0-9]+) ]] && VERSION=${BASH_REMATCH[1]}
                  . /opt/bitnami/scripts/libversion.sh
                  VERSION_MAJOR="$(get_sematic_version "$VERSION" 1)"
                  VERSION_MINOR="$(get_sematic_version "$VERSION" 2)"
                  VERSION_PATCH="$(get_sematic_version "$VERSION" 3)"
                  if [[ "$VERSION_MAJOR" -ge 4 ]] && [[ "$VERSION_MINOR" -ge 4 ]] && [[ "$VERSION_PATCH" -ge 2 ]]; then
                      mongo --disableImplicitSessions $TLS_OPTIONS --eval 'db.hello().isWritablePrimary || db.hello().secondary' | grep -q 'true'
                  else
                      mongo --disableImplicitSessions $TLS_OPTIONS --eval 'db.isMaster().ismaster || db.isMaster().secondary' | grep -q 'true'
                  fi
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          resources:
            limits: {}
            requests:
              cpu: 25m
              memory: 90Mi
          volumeMounts:
            - name: datadir
              mountPath: /bitnami/mongodb
              subPath: 
            - name: scripts
              mountPath: /scripts/setup.sh
              subPath: setup.sh
      volumes:
        - name: scripts
          configMap:
            name: sorry-cypress-v2-mongodb-scripts
            defaultMode: 0755
        - name: datadir
          emptyDir: {}
---
# Source: sorry-cypress/templates/ingress-api.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sorry-cypress-v2-api
  labels:
    helm.sh/chart: sorry-cypress-1.6.2
    app.kubernetes.io/name: sorry-cypress
    app.kubernetes.io/instance: sorry-cypress-v2
    app.kubernetes.io/version: "2.1.7"
    app.kubernetes.io/managed-by: Helm
spec:
  ingressClassName: nginx
  rules:
    - host: "api.chart-example.local"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: sorry-cypress-v2-api
                port:
                  number: 4000
---
# Source: sorry-cypress/templates/ingress-dashboard.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sorry-cypress-v2-dashboard
  labels:
    helm.sh/chart: sorry-cypress-1.6.2
    app.kubernetes.io/name: sorry-cypress
    app.kubernetes.io/instance: sorry-cypress-v2
    app.kubernetes.io/version: "2.1.7"
    app.kubernetes.io/managed-by: Helm
spec:
  ingressClassName: nginx
  rules:
    - host: "dashboard.chart-example.local"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: sorry-cypress-v2-dashboard
                port:
                  number: 8080
---
# Source: sorry-cypress/templates/ingress-director.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sorry-cypress-v2-director
  labels:
    helm.sh/chart: sorry-cypress-1.6.2
    app.kubernetes.io/name: sorry-cypress
    app.kubernetes.io/instance: sorry-cypress-v2
    app.kubernetes.io/version: "2.1.7"
    app.kubernetes.io/managed-by: Helm
spec:
  ingressClassName: nginx
  rules:
    - host: "director.chart-example.local"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: sorry-cypress-v2-director
                port:
                  number: 1234


tico24 commented on August 19, 2024

Not exactly what I was after, but close enough. It looks as though the database environment variable isn't being set for the director deployment. My suspicion is your use of quoted quotes.

Can you please try with just one set of double quotes as in this test: https://github.com/sorry-cypress/charts/blob/main/charts/sorry-cypress/ci/mongo-values.yaml
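
(Concretely, that means dropping the backslash-escaped quotes so the shell hands Helm a plain string; a sketch based on the install command quoted earlier in the thread:)

helm install --namespace redacted sorry-cypress-v2 sorry-cypress/sorry-cypress \
  --set director.environmentVariables.executionDriver="../execution/mongo/driver"

(Or, equivalently, keep the override in a values file applied with -f, presumably along the lines of the linked mongo-values.yaml:)

director:
  environmentVariables:
    executionDriver: "../execution/mongo/driver"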

