sorry-cypress / charts
A Kubernetes Helm Chart for Sorry Cypress, an open-source, on-premise, self-hosted alternative to the Cypress dashboard.
Home Page: https://sorry-cypress.dev
License: MIT License
cypress-api and cypress-director get the wrong MongoDB URL: mongodb://cypress-sorry-cypress-mongodb:27017, while the actual service name is "cypress-mongodb".
1.0.3
When running on a minikube cluster and following the README, the default published Helm chart displays an HTML error message in the dashboard's project list:
"Error: Something went wrong while loading the project list. Unexpected token '<', "<html> <h"... is not valid JSON"
and an error message when trying to create a project:
Error: Unexpected token '<', "<html> <h"... is not valid JSON
Kubernetes pod logs show 405 errors on the GraphQL requests; it appears POST is not allowed:
172.17.0.1 - - [23/Nov/2022:16:48:47 +0000] "GET /projects HTTP/1.1" 200 796 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" "-"
172.17.0.1 - - [23/Nov/2022:16:48:47 +0000] "POST /graphql HTTP/1.1" 405 571 "http://127.0.0.1:51178/projects" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" "-"
172.17.0.1 - - [23/Nov/2022:16:48:48 +0000] "POST /graphql HTTP/1.1" 405 571 "http://127.0.0.1:51178/--create-new-project--/edit" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" "-"
minikube delete
minikube start
k create ns cypress-dashboard
helm repo add sorry-cypress https://sorry-cypress.github.io/charts
helm install -n cypress-dashboard sorry-cypress sorry-cypress/sorry-cypress
minikube service -n cypress-dashboard --url sorry-cypress-dashboard
We are using the Terraform helm provider. When specifying the following abbreviated configuration:
yamlencode({
  director = {
    service = {
      port = 4002
    }
  }
})
The director service always starts on port 1234. The same syntax lets the API and Dashboard services use the correct port.
I suspect this line:
should be changed to:
- containerPort: {{ .Values.director.service.port }}
I am happy to raise a PR if my suspicion is correct.
2.1.1
N/A
1.4.4
This feature is about making the S3_ACL and S3_READ_URL_PREFIX environment variables used by sorry-cypress configurable in the chart.
The default value of S3_ACL is public-read, which forces the bucket to be public. If we allow overriding it in the chart, other setups become possible.
As for S3_READ_URL_PREFIX, the bucket's read host may differ from the bucket endpoint (e.g. static website hosting, Route53, nginx, caching layers, etc.).
We could add values.director.s3.acl and values.director.s3.readUrlPrefix.
As these are somewhat advanced features, the default values could be "", falling back to the application's default behavior.
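A minimal values sketch of what this could look like (acl and readUrlPrefix are proposed keys, not existing chart values):

```yaml
director:
  s3:
    # Proposed keys; an empty string falls back to application defaults
    acl: ""             # e.g. "private" to keep the bucket non-public
    readUrlPrefix: ""   # e.g. "https://screenshots.example.com"
```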
Doesn't this line wipe the Dashboard URL during GitHub Actions execution?
charts/charts/sorry-cypress/values.yaml
Line 245 in cc92d0d
The sorry-cypress Helm chart bundles MinIO v8.0.9, which does not support the networking.k8s.io/v1 Ingress API on Kubernetes 1.24+.
This is caused by the following block in the MinIO _helpers.tpl:
{{/*
Return the appropriate apiVersion for ingress.
*/}}
{{- define "minio.ingress.apiVersion" -}}
{{- if semverCompare "<1.14-0" .Capabilities.KubeVersion.GitVersion -}}
{{- print "extensions/v1beta1" -}}
{{- else -}}
{{- print "networking.k8s.io/v1beta1" -}}
{{- end -}}
{{- end -}}
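A sketch of how the helper could be extended to cover current clusters (hypothetical; upgrading to a newer MinIO chart that already does this is the simpler fix):

```yaml
{{/*
Return the appropriate apiVersion for ingress (sketch).
*/}}
{{- define "minio.ingress.apiVersion" -}}
{{- if semverCompare "<1.14-0" .Capabilities.KubeVersion.GitVersion -}}
{{- print "extensions/v1beta1" -}}
{{- else if semverCompare "<1.19-0" .Capabilities.KubeVersion.GitVersion -}}
{{- print "networking.k8s.io/v1beta1" -}}
{{- else -}}
{{- print "networking.k8s.io/v1" -}}
{{- end -}}
{{- end -}}
```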
2.4.2
various
1.7.10
Sorry-cypress has at least two env vars the chart doesn't support: S3_IMAGE_KEY_PREFIX and S3_VIDEO_KEY_PREFIX.
There also doesn't seem to be a mechanism for setting arbitrary env vars without editing the chart.
See
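A common chart pattern that would cover both cases is a pass-through list of env vars; a sketch, assuming a proposed extraEnv value that does not currently exist in the chart:

```yaml
# values.yaml (proposed)
director:
  extraEnv:
    - name: S3_IMAGE_KEY_PREFIX
      value: "images"
    - name: S3_VIDEO_KEY_PREFIX
      value: "videos"
```

The deployment template would then render the list verbatim into the container env, e.g. with {{- with .Values.director.extraEnv }}{{ toYaml . | nindent 12 }}{{- end }}.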
The sorry-cypress dashboard doesn't deploy properly; I'm getting Error: Unexpected token < in JSON at position 0
helm install sorry-cypress sorry-cypress/sorry-cypress
kubectl port-forward svc/sorry-cypress-dashboard 8080
kubectl port-forward svc/sorry-cypress-director 1234
Then I just get Error: Unexpected token < in JSON at position 0
in localhost:8080
1.1.1
9.1.0
Support the NodePort service type in this Helm chart.
We would then be able to expose each Service on every node's IP at a static port.
All the services inside the Helm chart should support the NodePort type.
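A values sketch of what this could look like (service.type and service.nodePort are proposed keys, not necessarily current chart values):

```yaml
dashboard:
  service:
    type: NodePort
    nodePort: 30080   # static port exposed on every node's IP
```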
Hey guys. I'm trying to deploy the helm chart with default values.yaml and getting the following error:
Failed to evaluate the MongoDB readiness probe script, ending with EL1012E: Cannot index into a null value. The probe script in question (reconstructed, with the stripped $ characters restored):
# Run the proper check depending on the version
[[ $(mongo --version | grep "MongoDB shell") =~ ([0-9]+.[0-9]+.[0-9]+) ]] && VERSION=${BASH_REMATCH[1]}
. /opt/bitnami/scripts/libversion.sh
VERSION_MAJOR="$(get_sematic_version "$VERSION" 1)"
VERSION_MINOR="$(get_sematic_version "$VERSION" 2)"
VERSION_PATCH="$(get_sematic_version "$VERSION" 3)"
if [[ "$VERSION_MAJOR" -ge 4 ]] && [[ "$VERSION_MINOR" -ge 4 ]] && [[ "$VERSION_PATCH" -ge 2 ]]; then
    mongo --disableImplicitSessions $TLS_OPTIONS --eval 'db.hello().isWritablePrimary || db.hello().secondary' | grep -q 'true'
else
    mongo --disableImplicitSessions $TLS_OPTIONS --eval 'db.isMaster().ismaster || db.isMaster().secondary' | grep -q 'true'
fi
Deploy failed: Error from server (NotFound): error when creating "STDIN": namespaces "spinnaker" not found
Looks like the director is trying to use MongoDB even though executionDriver: "../execution/in-memory" is set in values.yaml.
Deploy the helm chart using the default values.yaml
1.4.10
-
3
We will have to follow suit at some point.
Run cleaner doesn't correctly find the API's service DNS entry
The connection string for a MongoDB replicaset is wrongly constructed, so the director and API cannot connect to MongoDB.
The most urgent problem is the hardcoded port appended to the end of the connection string.
I want to use a pre-installed MongoDB cluster to get a more reliable MongoDB service, because I stumbled over the unsolved issue: sorry-cypress/sorry-cypress#90
Therefore I installed a MongoDB cluster with the Bitnami chart (https://github.com/bitnami/charts/tree/master/bitnami/mongodb), setting the architecture to "replicaset" with 3 nodes (pods).
To gain high availability, the connection string for the nodejs mongodb client in my scenario needs to be:
mongodb://k8s-sorry-cypress-mongo-mongodb-0.k8s-sorry-cypress-mongo-mongodb-headless.sorry-cypress.svc.cluster.local:27017,k8s-sorry-cypress-mongo-mongodb-1.k8s-sorry-cypress-mongo-mongodb-headless.sorry-cypress.svc.cluster.local:27017,k8s-sorry-cypress-mongo-mongodb-2.k8s-sorry-cypress-mongo-mongodb-headless.sorry-cypress.svc.cluster.local:27017/sorry-cypress?replicaSet=rs0
I put this string in the mongoServer parameter (with mongo.enabled=false) provided by the sorry-cypress chart.
But the MONGODB_URI in the deployments of api as well director is constructed as follows:
mongodb://mongodb://k8s-sorry-cypress-mongo-mongodb-0.k8s-sorry-cypress-mongo-mongodb-headless.sorry-cypress.svc.cluster.local:27017,k8s-sorry-cypress-mongo-mongodb-1.k8s-sorry-cypress-mongo-mongodb-headless.sorry-cypress.svc.cluster.local:27017,k8s-sorry-cypress-mongo-mongodb-2.k8s-sorry-cypress-mongo-mongodb-headless.sorry-cypress.svc.cluster.local:27017/sorry-cypress?replicaSet=rs0:27017
This isn't working; at the very least, the hardcoded port appended to the end of the string breaks it.
I edited the deployment manually within the cluster to the proper format, and it works.
So we need an option to pass a proper replicaset connection string.
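The fix amounts to passing full connection strings through untouched instead of wrapping them again with a scheme and port. A minimal sketch of the intended logic (illustrative Python, not chart code; the function name is mine):

```python
def build_mongodb_uri(mongo_server: str, mongo_port: int = 27017) -> str:
    """Build the MONGODB_URI value from the chart's mongoServer/mongoPort inputs.

    If mongoServer is already a full connection string (scheme present),
    pass it through unchanged; otherwise compose scheme://host:port as before.
    """
    if mongo_server.startswith(("mongodb://", "mongodb+srv://")):
        return mongo_server
    return f"mongodb://{mongo_server}:{mongo_port}"


# A plain hostname still gets the scheme and port appended:
print(build_mongodb_uri("cypress-mongodb"))
# prints mongodb://cypress-mongodb:27017

# A full replicaset string is passed through untouched (no double
# "mongodb://" prefix, no ":27017" appended after the query string):
print(build_mongodb_uri(
    "mongodb://host-0:27017,host-1:27017/sorry-cypress?replicaSet=rs0"))
```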
0.1.36
Doesn't support nginx-ingress v1.0.0: https://github.com/kubernetes/ingress-nginx/blob/main/Changelog.md
Be able to use an already-installed MinIO.
The best way of installing MinIO is through the operator; if you already have MinIO installed, you wouldn't want a new instance. It would be great to be able to reuse it.
The if statement that decides whether to set the MINIO_USESSL env var is always false, regardless of whether the value is
minio.service.port: 443
or minio.service.port: "443"
The relevant template is deployment-director.yml.
I had sent an email to [email protected] but haven't heard back, so I'm raising my question here instead.
We're currently implementing the sorry-cypress Helm chart in our Kubernetes cluster. One cause for concern we've come across is the ALLOWED_KEYS environment variable. I figured these keys might be sensitive, but that doesn't seem to be the case. Are these keys intended to be in plaintext?
charts/charts/sorry-cypress/values.yaml
Line 273 in 1d802ac
Setting the allowedKeys value with plaintext keys.
Sorry-cypress does not apply the Helm values from values.yaml correctly. The environment values (see output from deploy/sorry-cypress-api and deploy/sorry-cypress-director) do not inherit any values defined in values.yaml.
kubectl create namespace cypress
helm install -f values.yaml sorry sorry-cypress/sorry-cypress --namespace cypress
1.0.3
7.5.0
1.0.0
director.environmentVariables.executionDriver: "../execution/mongo/driver"
director.environmentVariables.screenshotsDriver: "../screenshots/minio.driver"
minio.enabled: true
MONGODB_URI: "mongodb://sorry-mongodb-01:27017"
Name: sorry-sorry-cypress-api
Namespace: cypress
CreationTimestamp: Wed, 16 Jun 2021 14:38:09 +0800
Labels: app.kubernetes.io/instance=sorry
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=sorry-cypress
app.kubernetes.io/version=1.0.3
helm.sh/chart=sorry-cypress-1.0.0
Annotations: deployment.kubernetes.io/revision: 1
meta.helm.sh/release-name: sorry
meta.helm.sh/release-namespace: cypress
Selector: app=sorry-sorry-cypress-api
Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=sorry-sorry-cypress-api
Containers:
sorry-sorry-cypress-api:
Image: agoldis/sorry-cypress-api:1.0.3
Port: 4000/TCP
Host Port: 0/TCP
Readiness: http-get http://:4000/.well-known/apollo/server-health delay=0s timeout=3s period=5s #success=2 #failure=5
Environment:
MONGODB_DATABASE: sorry-cypress
MONGODB_URI: mongodb://sorry-sorry-cypress-mongodb-0:27017
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available False MinimumReplicasUnavailable
Progressing True ReplicaSetUpdated
OldReplicaSets: <none>
NewReplicaSet: sorry-sorry-cypress-api-858b59648b (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 5m5s deployment-controller Scaled up replica set sorry-sorry-cypress-api-858b59648b to 1
Name: sorry-sorry-cypress-director
Namespace: cypress
CreationTimestamp: Wed, 16 Jun 2021 14:38:09 +0800
Labels: app.kubernetes.io/managed-by=Helm
Annotations: deployment.kubernetes.io/revision: 1
meta.helm.sh/release-name: sorry
meta.helm.sh/release-namespace: cypress
Selector: app=sorry-sorry-cypress-director
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=sorry-sorry-cypress-director
Containers:
sorry-sorry-cypress-director:
Image: agoldis/sorry-cypress-director:1.0.3
Port: 1234/TCP
Host Port: 0/TCP
Readiness: http-get http://:1234/ delay=0s timeout=5s period=10s #success=2 #failure=5
Environment:
DASHBOARD_URL:
ALLOWED_KEYS:
EXECUTION_DRIVER: ../execution/in-memory
SCREENSHOTS_DRIVER: ../screenshots/dummy.driver
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: sorry-sorry-cypress-director-5cccf65ddf (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 2m deployment-controller Scaled up replica set sorry-sorry-cypress-director-5cccf65ddf to 1
After restarting the director once the api is "ready" that error is gone, but I'm not sure why this happens.
mongodb:
persistence:
enabled: true
size: 10Gi
Fails with:
Director service is ready at http://0.0.0.0:1234/...
(node:1) UnhandledPromiseRejectionWarning: MongoServerSelectionError: connect ECONNREFUSED 10.254.255.17:27017
at Timeout._onTimeout (/app/node_modules/mongodb/lib/core/sdam/topology.js:438:30)
at listOnTimeout (internal/timers.js:557:17)
at processTimers (internal/timers.js:500:7)
(Use `node --trace-warnings ...` to show where the warning was created)
(node:1) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:1) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
When the Helm chart value mongodb.enabled == false, the conditional on line 38 prevents the MONGODB_DATABASE and MONGODB_URI env variables from being set on the sorry-cypress-api deployment object, which blocks chart usage for users wanting to configure an external DB.
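A sketch of decoupling these env vars from the bundled MongoDB (hypothetical template fragment; mongoConnectionString and mongoDatabase are used here as the external-DB values, and the key names are assumptions):

```yaml
# deployment-api.yml (sketch): set the vars whenever a connection string
# is available, whether from the subchart or an external DB
{{- if or .Values.mongodb.enabled .Values.mongodb.mongoConnectionString }}
- name: MONGODB_DATABASE
  value: {{ .Values.mongodb.mongoDatabase | quote }}
- name: MONGODB_URI
  value: {{ .Values.mongodb.mongoConnectionString | quote }}
{{- end }}
```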
It seems like there haven't been any updates since October, and a few PRs are waiting for review.
Currently there is no way to configure the cluster domain for the cleanup run. Hence the cleanup job fails if it runs in a cluster whose cluster domain is not cluster.local.
Could not resolve host: sorry-cypress-2-api.sorry-cypress-2.svc.cluster.local
Deploy the sorry-cypress charts on a cluster with a different cluster domain.
From what I can see this is due to this line within the cronjob-run-cleaner.yml
- name: sorry_cypress_api_url
value: "http://{{ include "sorry-cypress-helm.fullname" . }}-api.{{ .Release.Namespace }}.svc.cluster.local:4000"
Instead it should be possible to set the cluster domain as a value:
- name: sorry_cypress_api_url
value: "http://{{ include "sorry-cypress-helm.fullname" . }}-api.{{ .Release.Namespace }}.svc.{{ .Values.runCleaner.clusterDomain }}:4000"
We should have issue and PR templates to make contributing easier.
I'm deploying MinIO with the sorry-cypress chart, and I realized my screenshots are not being saved to the bucket. I think it's because port 9000 is hardcoded in the director, and my MinIO runs through an ingress on port 80. There does not seem to be a way to override MINIO_PORT in values.yaml.
Deploy using helm on a kubernetes cluster with metallb as the load balancer.
latest
8.7.0
2.5.2
Sorry-Cypress is very popular in air-gapped networks that don't have access to AWS S3.
We need to store our media in a different storage option.
The second option that sorry-cypress supports is MinIO.
We would be happy to see this supported in the Helm chart.
I am using the following for my values.yaml:
fullnameOverride: "cypress-dashboard"
api:
image:
repository: agoldis/sorry-cypress-api
pullPolicy: Always
enabled: true
podLabels:
label: cypress-api
ingress:
labels: {}
annotations: {}
hosts:
- host: cydashapi.local.com
path: /
dashboard:
image:
repository: agoldis/sorry-cypress-dashboard
pullPolicy: Always
enabled: true
environmentVariables:
graphQlSchemaUrl: "http://cydashapi.local.com/"
podLabels:
labels: cypress-dashboard
ingress:
enabled: true
labels: {}
annotations: {}
hosts:
- host: cydashboard.local.com
path: /
director:
image:
repository: agoldis/sorry-cypress-director
pullPolicy: Always
environmentVariables:
executionDriver: "../execution/mongo/driver"
screenshotsDriver: "../screenshots/minio.driver"
podLabels:
label: cypress-director
ingress:
enabled: true
labels: {}
annotations: {}
hosts:
- host: cydirector.local.com
path: /
mongodb:
enable: true
internal_db:
enabled: true
persistence:
# set this to a higher number in deployed envs
size: 1Gi
minio:
enabled: true
service:
port: "443"
Error Text
Error: Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects " or n, but found 4, error found in #10 byte of ...|,"value":443},{"name|..., bigger context ...|ge.yourdomain.com"},{"name":"MINIO_PORT","value":443},{"name":"MINIO_USESSL","value":"true"}],"image|...
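The error occurs because Kubernetes env var values must be strings, while yamlencode (or an unquoted values entry) can emit 443 as a number. A sketch of keeping the rendered value a string in the template (quote is a standard Helm/Sprig function):

```yaml
# deployment-director.yml (sketch)
- name: MINIO_PORT
  value: {{ .Values.minio.service.port | quote }}
```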
Add the possibility to install sorry-cypress in a namespace other than default.
The sorry-cypress chart README docs mention that if you are using an S3 storage setup, you should use the minio screenshot driver, but this seems to be a typo: the s3 screenshot driver should be recommended instead, unless I'm mistaken.
The readme docs in question can be found here.
2.4.2
12.5.0
1.9.0
At the moment, we use a manually-coded mongo pod (one that's also quite an old version). It would be nicer to use a mongo subchart in the same way we have with minio.
It should make longer-term support easier, plus the mongo chart will inherently come with a lot of extra configuration options.
We probably need to consider the upgrade path. If there isn't one, we need to make the breaking change quite clear, and probably increase a major version on the chart.
Although referenced in the docker-compose and kubernetes-full examples, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY aren't referenced anywhere in the chart, as AWS is expected to be configured via a credentials file.
@tico24 I've just released https://github.com/sorry-cypress/sorry-cypress/releases/tag/v1.0.0-beta.12
Should I change anything here?
Add a nodeSelector to the runCleaner section to enable targeting Linux nodes only; nodeSelectors are already available elsewhere in the chart.
We're using the helm terraform provider, and are attempting to disable the mongodb service. We're also using the api, dashboard, and director services.
A cut down version of our configuration is:
yamlencode({
  director = {
    environmentVariables = {
      executionDriver = "../execution/in-memory"
    }
  }
  mongodb = {
    internal_db = {
      enabled = false
    }
  }
})
The deployment fails:
module.sorry-cypress.helm_release.sorry_cypress: Still creating... [4m50s elapsed]
module.sorry-cypress.helm_release.sorry_cypress: Still creating... [5m0s elapsed]
Warning: Helm release "sorry-cypress" was created but has a failed status. Use the `helm` command to investigate the error, correct it, then run Terraform again.
with module.sorry-cypress.helm_release.sorry_cypress,
on modules/sorry-cypress/chart.tf line 11, in resource "helm_release" "sorry_cypress":
11: resource "helm_release" "sorry_cypress" {
Error: timed out waiting for the condition
with module.sorry-cypress.helm_release.sorry_cypress,
on modules/sorry-cypress/chart.tf line 11, in resource "helm_release" "sorry_cypress":
11: resource "helm_release" "sorry_cypress" {
ERROR: 1
make: *** [deploy] Error 1
helm list -n sorry-cypress
yields:
kubectl get deployment -n sorry-cypress
yields:
The docs seem to indicate it is possible to run without MongoDB (not that we want to do that long-term), so I'm wondering if you can provide any guidance as to where I should be looking to resolve this issue.
2.1.1
N/A
1.4.4
When installing/upgrading the Helm chart, it updates the mongo deployment, which scales up a new pod and scales down the old one once the new one is healthy. On launch, the new pod tries to bind the persistent volume, but at that moment it is still bound to the old pod, and therefore the deployment fails.
Reinstalling/redeploying the helm chart.
0.6.1
6.2.1
0.1.14
Warning ProbeWarning 4m18s (x481 over 84m) kubelet Readiness probe warning: Found. Redirecting to https://github.com/agoldis/sorry-cypress
Ran the install using the minio subchart. The director shows CreateContainerConfigError trying to mount a secret:
Error: secret "sorry-cypress" not found
Being able to configure more than 1 replica for the director, dashboard, and API Deployments.
High availability: it would prevent issues during pod rescheduling.
Not sure if there is any limitation preventing the director, dashboard, and API from being highly available?
As there are several ways to provide AWS authentication (credential files, environment variables, or even abstractions like kiam), we could make director.s3.accessKeyId and director.s3.secretAccessKey optional by giving them an empty default value and handling their presence in the deployment environment variables.
Being able to only run a subset of the three services (director, API and dashboard) based on what you'll actually use.
We're only using sorry-cypress for the parallelisation offered by the director. Being able to exclude the other extraneous services means less to think about / manage, and less resource used in our cluster.
We could add an 'enabled' value (defaulting to true) for the api and dashboard services, and use it to switch off their deployments when set to false. I'm happy to have a go at a PR for this if it sounds sensible!
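A sketch of the proposed values (the enabled keys are proposed, not current chart values):

```yaml
# values.yaml (proposed)
api:
  enabled: true
dashboard:
  enabled: false   # skip rendering the dashboard deployment/service
```

Each template would then be wrapped in a guard such as {{- if .Values.dashboard.enabled }} ... {{- end }}.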
This feature is about allowing the use of an Ingress resource, combined with a Service of type ExternalName, to expose read access to an S3 bucket through a custom URL accessed by the user's browser (see more about S3_READ_URL_PREFIX here).
Assuming you want to define a custom URL (based on S3_READ_URL_PREFIX) allowing read access to the S3 bucket through a private endpoint (VPC network only, for example), you should be able to define it using native K8s resources such as Ingress and Service. The approach of using a Service of type ExternalName is described in detail in this Google Cloud Blog post. It is a good way to use a feature provided by sorry-cypress without requiring new external infrastructure components.
Raw manifests example (using nginx as ingress-controller):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/upstream-vhost: "<your_bucket_name>.s3-website-<aws_region>.amazonaws.com"
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: "POST, GET, PUT, DELETE, HEAD"
kubernetes.io/ingress.class: nginx-aws
name: sorry-cypress-static
namespace: sorry-cypress
spec:
rules:
- host: <your_bucket_name>.s3-website-<aws_region>.amazonaws.com
http:
paths:
- backend:
serviceName: sorry-cypress-static
servicePort: http
path: /
---
apiVersion: v1
kind: Service
metadata:
name: sorry-cypress-static
namespace: sorry-cypress
spec:
externalName: <your_bucket_name>.s3-website-<aws_region>.amazonaws.com
ports:
- name: http
port: 80
protocol: TCP
targetPort: 80
type: ExternalName
NOTE: If you want to restrict access to your S3 bucket to only your AWS VPC network, it is recommended to create a Bucket Policy with the aws:SourceVpc or aws:SourceVpce condition keys. More info in Amazon S3 Condition Keys. Using these requires creating an S3 VPC Endpoint inside your VPC.
Provide the ability to pull containers from a private registry.
Some organisations prevent downloading images from public registries and require authenticated access to private registries.
Adding the standard imagePullSecrets section as per:
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
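A minimal sketch of the values addition (my-registry-secret is a placeholder for a pre-created docker-registry Secret):

```yaml
# values.yaml (proposed)
imagePullSecrets:
  - name: my-registry-secret
```

The pod templates would render it via {{- with .Values.imagePullSecrets }}imagePullSecrets: {{ toYaml . | nindent 8 }}{{- end }}.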
Add an option to disable the director when not needed.
We use Sorry Cypress and chose to deploy mongo on a different cluster than the director/api/dashboard. So we use the same chart twice, but there is no option to disable the director.
Just add an enabled: true/false value for the director deployment.
Link to pull request: #160
Currently, if using the internal mongo DB, the deployment works fine. However, as soon as a pod of the replicaset restarts for whatever reason, the replicaset never manages to become functional again.
I've looked into it and it seems to be due to the externalAccess being set to true for the mongo deployment (https://github.com/sorry-cypress/charts/blob/main/charts/sorry-cypress/values.yaml#L350).
Due to this setting MONGODB_ADVERTISED_HOSTNAME is not set and from my understanding this is required for a replicaset to recover (https://github.com/bitnami/charts/blob/main/bitnami/mongodb/templates/replicaset/statefulset.yaml#L237-L240).
I've now come up with the following workaround:
mongodb:
internal_db:
enabled: true
externalAccess:
enabled: false
mongoConnectionString: "mongodb://sorry-cypress-2-mongodb-headless:27017/sorry-cypress?replicaSet=rs0"
As disabling external access causes some sorry-cypress pods to fail to start, I also had to adjust the mongo connection string.
What I don't yet understand is why externalAccess is required in the first place. As said, I have now disabled it, and with the adjusted connection string everything seems to work fine. Are there any consequences that I am currently missing?
Otherwise I would propose adjusting the values accordingly.
Deploy sorry-cypress with an internal mongo db and restart any of the mongo db pods.
When enabling MinIO, the deployment of sorry-cypress fails with:
Error: template: sorry-cypress/templates/deployment-director.yml:100:15: executing "sorry-cypress/templates/deployment-director.yml" at <eq .Values.minio.service.port "443">: error calling eq: incompatible types for comparison
Enable MinIO and deploy sorry-cypress.
From what I see this is coming from deployment-director.yml:
{{- if eq .Values.minio.service.port "443" }}
- name: MINIO_USESSL
value: "true"
Instead it should coerce the value to a string before comparing:
{{- if eq (.Values.minio.service.port | toString) "443" }}
- name: MINIO_USESSL
  value: "true"
(toString, unlike quote, does not add literal quote characters, so the comparison works whether the value is 443 or "443".)
When using an external DB (i.e. AWS DocumentDB), mongodb.mongoConnectionString should have an option to use secret values.
Our organization, the Department of Veterans Affairs, is strict about not committing secrets or credentials to a git repository, and it likely isn't the only one that would benefit from this feature.
There doesn't appear to be a straightforward way to provide DB credentials for an external DB without committing them to a repository in values.yaml. I think the correct pattern to hook up the director and API to AWS DocumentDB is to use the provided application connection string. The difficulty is that it contains the username and password. Additionally, I see that I can use values from the upstream Helm chart, but setting auth.rootuser and auth.rootpassword exposes these secrets in values.yaml.
Allowing the respective deployment objects to pull from a secretKeyRef would make this more secure.
We're using ArgoCD and committing Helm charts and values to a shared application manifest repository. It's possible I'm simply unaware of how to deliver these credentials securely to Sorry-Cypress without a feature request.
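A sketch of what the deployment env could look like (the Secret name and key are placeholders for a pre-created Secret):

```yaml
# deployment-api.yml / deployment-director.yml (sketch)
env:
  - name: MONGODB_URI
    valueFrom:
      secretKeyRef:
        name: sorry-cypress-mongodb   # placeholder Secret name
        key: connection-string        # placeholder key
```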
Sorry-cypress does not start the sorry-cypress-dashboard pod, but it should.
[root@openshift ~]# helm install my-release sorry-cypress/sorry-cypress -n xrow
NAME: my-release
LAST DEPLOYED: Thu Jul 1 19:31:33 2021
NAMESPACE: xrow
STATUS: deployed
REVISION: 1
You now see this in the logs:
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: can not modify /etc/nginx/conf.d/default.conf (read-only file system?)
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
20-envsubst-on-templates.sh: ERROR: /etc/nginx/templates exists, but /etc/nginx/conf.d is not writable
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2021/07/01 17:31:55 [warn] 1#1: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2
nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2
2021/07/01 17:31:55 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
[root@openshift ~]# oc version
oc v3.11.0+62803d0-1
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://openshift.06.xrow.net:8443
openshift v3.11.0+9caa622-494
kubernetes v1.11.0+d4cacc0
This means warnings are thrown from k8s 1.19+
From k8s 1.22 (when it is released) the chart will stop working altogether.
Not sure if this is a bug or me not using the correct configuration. The issue is I can run tests using the director, and I can access the dashboard, but I can't see any test runs on the dashboard.
This is how I install the charts:
helm install --namespace redacted sorry-cypress-v2 sorry-cypress/sorry-cypress \
  --set api.ingress.ingressClassName=nginx-redacted-cypress \
  --set dashboard.ingress.ingressClassName=nginx-redacted-cypress \
  --set director.ingress.ingressClassName=nginx-redacted-cypress \
  --set dashboard.ingress.hosts\[0\].host=dashboard.cypress.dev.redacted.net \
  --set api.ingress.hosts\[0\].host=api.cypress.dev.redacted.net \
  --set director.ingress.hosts\[0\].host=director.cypress.dev.redacted.net \
  --set dashboard.environmentVariables.graphQlSchemaUrl=http://api.cypress.dev.redacted.net/graphql \
  --set director.environmentVariables.dashboardUrl=http://dashboard.cypress.dev.redacted.net
I tried to change the director execution driver by adding:
--set director.environmentVariables.executionDriver=\"../execution/mongo/driver\"
But on the director pod I get this error:
MongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017
    at Timeout._onTimeout (/app/node_modules/mongodb/lib/core/sdam/topology.js:438:30)
    at listOnTimeout (internal/timers.js:557:17)
    at processTimers (internal/timers.js:500:7) {
  reason: TopologyDescription {
    type: 'Single',
    setName: null,
    maxSetVersion: null,
    maxElectionId: null,
    servers: Map(1) { 'localhost:27017' => [ServerDescription] },
    stale: false,
    compatible: true,
    compatibilityError: null,
    logicalSessionTimeoutMinutes: null,
    heartbeatFrequencyMS: 10000,
    localThresholdMS: 15,
    commonWireVersion: null
  }
I'm not sure how to set the MongoDB URI because:
Am I doing something wrong?
2.1.7
10.3.0
1.6.2