
wire-server-deploy's Introduction

Wire™


This repository is part of the source code of Wire. You can find more information at wire.com or by contacting [email protected].

You can find the published source code at github.com/wireapp/wire.

For licensing information, see the attached LICENSE file and the list of third-party licenses at wire.com/legal/licenses/.

No license is granted to the Wire trademark and its associated logos, all of which will continue to be owned exclusively by Wire Swiss GmbH. Any use of the Wire trademark and/or its associated logos is expressly prohibited without the express prior written consent of Wire Swiss GmbH.

Introduction

This repository contains the code and configuration to deploy wire-server and wire-webapp, as well as dependent components such as Cassandra databases. To allow maximum flexibility with respect to where wire-server can be deployed (e.g., with cloud providers like AWS, or on bare-metal servers), we chose Kubernetes as the target platform.

Documentation

All documentation on how to make use of this repository is hosted on https://docs.wire.com; refer to the Administrator's Guide.

Contents

  • ansible/ contains Ansible roles and playbooks to install Kubernetes, Cassandra, etc. See the Administrator's Guide for more info.
  • charts/ contains Helm charts that can be installed on Kubernetes. The charts are mirrored to S3 and can be used with helm repo add wire https://s3-eu-west-1.amazonaws.com/public.wire.com/charts (see the example after this list). See the Administrator's Guide for more info.
  • terraform/ contains some examples for provisioning servers. See the Administrator's Guide for more info.
  • bin/ contains some helper bash scripts. Some are used in the Administrator's Guide when installing wire-server, and some are used for developers/maintainers of this repository.
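For example, the chart repository mentioned above can be added and browsed like this (Helm 3 syntax; on Helm 2 the last command would be helm search wire/):

helm repo add wire https://s3-eu-west-1.amazonaws.com/public.wire.com/charts
helm repo update
helm search repo wire/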

wire-server-deploy's People

Contributors

akshaymankar, amitsagtani97, arianvp, arthurwolf, chrispenner, e-lisa, fisx, flokli, franziskuskiefer, jenskasemets, jmatsushita, jschaul, jschumacher-wire, julialongtin, kvaps, lucendio, mheinzel, orandev, pcapriotti, q3k, smatting, supersven, sysvinit, tiago-loureiro, tjanson, tofutim, veki301, zebot


wire-server-deploy's Issues

redis VM and etcd VM in production setup

Hi,
The assumptions mention a redis VM, but do not mention an etcd VM.

The hosts.ini file does not mention redis, yet the helm external Ansible playbook instructions refer to a redis network interface; how does that work? No Ansible playbooks for redis are provided in the wire-server-deploy repo.

The documentation is a bit confusing regarding the redis and etcd VMs and their playbooks.

Please help me understand that.

Getting an unreachable-host error for kubenode01

My question: I followed the steps in the provided documentation. I want to install the server offline using a Docker container and Kubernetes, but when I run the step poetry run ansible-playbook -i hosts.ini kubernetes.yml -vv I get an unreachable-host error.

Context:

Please provide sufficient context about your problem:

How did you install wire-server?

On kubernetes / With docker-compose

How many servers are involved? => 1 server, which is the Docker installation on my system along with Kubernetes

What is installed on which servers? => Docker and kubernetes

E.g. Server A has component X and server B has component Y.

Provide details about networking

We don't need to know any specific IP address, but it helps if you indicate whether an IP is IPv4 or IPv6, whether it is publicly reachable from the global internet, and, if you installed any component of wire-server, which network interfaces the processes are listening on.

How did you configure wire-server?

Note: we only support the configuration from the helm charts in wire-server-deploy, like these defaults applied here in the case of brig. If you have used other configuration files, please post them (or the relevant parts of them).
Did you use the helm charts from wire-server-deploy?
Did you use and adapt configuration files from wire-server? If so, which ones?
Are there any overrides?

(screenshot attached)

sync.sh is broken on macOS

The mapfile built-in doesn't exist in the (old) version of Bash that macOS ships by default.

Until we get rid of mapfile, the sync.sh script can be run by doing

$ brew install bash
$ /usr/local/Cellar/bash/<version here>/bin/bash bin/sync.sh
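As a sketch of what getting rid of mapfile could look like: a call such as mapfile -t items < <(some_command) can usually be replaced with a portable while-read loop (the names here are illustrative, not the actual variables used in sync.sh):

items=()
while IFS= read -r line; do
  items+=("$line")
done < <(some_command)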

SSL Issue with wireapp

Hi all,

I have to deploy wire-server on Kubernetes but without an ingress controller, because our infrastructure uses NodePort services to expose the apps inside Kubernetes, with a dedicated HAProxy node pointing at the NodePort service.

I am able to terminate SSL for these domains:

  • bare-s3
  • bare-https
  • bare-ssl

But when I want to open bare-webapp I get an error in the browser like:

(screenshots attached)

I tried to curl the webapp NodePort service from HAProxy, but why does the webapp use SSL and redirect to https?

This configuration says SSL is terminated at the ingress controller (or, in my case, the HAProxy node): https://github.com/wireapp/wire-server-deploy/blob/master/docs/configuration.md#load-balancer-on-bare-metal-servers

root@haproxyweb1:~# curl 172.27.254.11:30091
Moved Permanently. Redirecting to https://172.27.254.11:30091/

root@haproxyweb1:~# curl https://172.27.254.11:30091
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number

Anyone know how to fix this?

galley error crashloopbackoff - teamSearchVisibility

Update. A clue via kubectl logs - service: user error (AesonException "Error in $.settings.featureFlags: key "teamSearchVisibility" not found")


Hi, in attempting to

bash-4.4# helm upgrade --install wire-server wire/wire-server -f values.yaml -f secrets.yaml --wait

I am seeing pods in Error and CrashLoopBackOff states. How do I debug this? I am targeting an Ubuntu 18.04 setup using the Kubernetes guide in the docs. I have tried helm delete --purge wire-server and restarting, to no avail. The target VM has 10 GB RAM and 5 CPUs allocated. It finally ends with Error: release wire-server failed: timed out waiting for the condition.

bash-4.4# kubectl get pods -w
NAME                                       READY   STATUS              RESTARTS   AGE
brig-67cbfdc4d9-k5cdm                      0/1     ContainerCreating   0          1s
cannon-0                                   0/1     Pending             0          1s
cargohold-ccd7c6d5f-rvbh2                  0/1     ContainerCreating   0          1s
cassandra-ephemeral-0                      1/1     Running             0          3h25m
cassandra-migrations-kz2vx                 0/1     Completed           0          17s
demo-smtp-84b7b85ff6-7jh74                 1/1     Running             0          3h42m
elasticsearch-ephemeral-8545b66bcc-2vk42   1/1     Running             0          3h25m
elasticsearch-index-create-6tkpv           0/1     Completed           0          5s
fake-aws-dynamodb-84f87cd86b-6n7rc         2/2     Running             0          3h54m
fake-aws-s3-5c846cb5d8-2sxl6               1/1     Running             0          3h54m
fake-aws-s3-reaper-7c6d9cddd6-vxk5g        1/1     Running             0          3h54m
fake-aws-sns-5c56774d95-l6mgk              2/2     Running             0          3h54m
fake-aws-sqs-554bbc684d-8zv48              2/2     Running             0          3h54m
galley-5c645c7d46-vrtrc                    0/1     ContainerCreating   0          1s
gundeck-7b99d8cb79-knjkq                   0/1     Pending             0          1s
nginz-7d7f7c57b8-8x2vs                     0/2     Pending             0          1s
redis-ephemeral-69bb4885bb-n9jxz           1/1     Running             0          3h25m
webapp-b56ccd6b9-djwcp                     0/1     Pending             0          1s
gundeck-7b99d8cb79-knjkq                   0/1     ContainerCreating   0          1s
cargohold-ccd7c6d5f-rvbh2                  0/1     Running             0          2s
brig-67cbfdc4d9-k5cdm                      0/1     Running             0          2s
gundeck-7b99d8cb79-knjkq                   0/1     Running             0          3s
galley-5c645c7d46-vrtrc                    0/1     Error               0          3s
galley-5c645c7d46-vrtrc                    0/1     Error               1          4s
galley-5c645c7d46-vrtrc                    0/1     CrashLoopBackOff    1          5s
cargohold-ccd7c6d5f-rvbh2                  1/1     Running             0          8s
gundeck-7b99d8cb79-knjkq                   1/1     Running             0          9s
brig-67cbfdc4d9-k5cdm                      1/1     Running             0          17s
galley-5c645c7d46-vrtrc                    0/1     Error               2          22s
galley-5c645c7d46-vrtrc                    0/1     CrashLoopBackOff    2          27s
galley-5c645c7d46-vrtrc                    0/1     Running             3          54s
galley-5c645c7d46-vrtrc                    0/1     Error               3          55s
galley-5c645c7d46-vrtrc                    0/1     CrashLoopBackOff    3          57s
galley-5c645c7d46-vrtrc                    0/1     Error               4          98s
galley-5c645c7d46-vrtrc                    0/1     CrashLoopBackOff    4          107s

I am launching from the provided Docker image with

bash-4.4# kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:23:09Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:14:56Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
bash-4.4# helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}

Also, I am trying to do the demo. Here is the output of describe pod:

bash-4.4# kubectl describe pod galley-5c645c7d46-vrtrc
Name:               galley-5c645c7d46-vrtrc
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               kubenode01/172.16.186.11
Start Time:         Tue, 12 May 2020 00:32:07 +0000
Labels:             pod-template-hash=5c645c7d46
                    release=wire-server
                    wireService=galley
Annotations:        checksum/configmap: d0f8235d0d693c58437884b3a6cb0c75f4a40394b3de2434fa786984ba259ad0
                    checksum/secret: eafab498d9de3a986c40c06a9db01d5fa59111180f0b787fd3f16144e1b1cb2e
Status:             Running
IP:                 10.233.64.47
Controlled By:      ReplicaSet/galley-5c645c7d46
Containers:
  galley:
    Container ID:   docker://925aebb69ec79a4765e53ef783bac69a43f690bb3bf8a2f23a3ab14a82134bb6
    Image:          quay.io/wire/galley:2.82.0
    Image ID:       docker-pullable://quay.io/wire/galley@sha256:5cdea32d01ce88a9e74ff328d25af696396e7d4ae165a3b6d0a66b9eb3317f72
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 12 May 2020 00:37:57 +0000
      Finished:     Tue, 12 May 2020 00:37:57 +0000
    Ready:          False
    Restart Count:  6
    Limits:
      cpu:     500m
      memory:  512Mi
    Requests:
      cpu:      100m
      memory:   256Mi
    Liveness:   http-get http://:8080/i/status delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get http://:8080/i/status delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      AWS_ACCESS_KEY_ID:      <set to the key 'awsKeyId' in secret 'galley'>      Optional: false
      AWS_SECRET_ACCESS_KEY:  <set to the key 'awsSecretKey' in secret 'galley'>  Optional: false
      AWS_REGION:             eu-west-1
    Mounts:
      /etc/wire/galley/conf from galley-config (rw)
      /etc/wire/galley/secrets from galley-secrets (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-sqltv (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  galley-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      galley
    Optional:  false
  galley-secrets:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  galley
    Optional:    false
  default-token-sqltv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-sqltv
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                  From                 Message
  ----     ------     ----                 ----                 -------
  Normal   Scheduled  10m                  default-scheduler    Successfully assigned default/galley-5c645c7d46-vrtrc to kubenode01
  Normal   Pulled     9m13s (x5 over 10m)  kubelet, kubenode01  Container image "quay.io/wire/galley:2.82.0" already present on machine
  Normal   Created    9m13s (x5 over 10m)  kubelet, kubenode01  Created container galley
  Normal   Started    9m13s (x5 over 10m)  kubelet, kubenode01  Started container galley
  Warning  BackOff    44s (x50 over 10m)   kubelet, kubenode01  Back-off restarting failed container
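Based on the AesonException above, galley's configuration is missing the settings.featureFlags.teamSearchVisibility key. A hedged guess at a fix is to set it explicitly in values.yaml and re-run the helm upgrade; the key path comes from the error message, while the value shown is an assumption that should be checked against charts/galley/values.yaml:

galley:
  config:
    settings:
      featureFlags:
        teamSearchVisibility: disabled-by-default   # assumed value, verify against the chart defaults

Upgrading to a chart version whose defaults already include this flag may also resolve it.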

Having issue while configuring real AWS services

root@kubenode01:~# kubectl -n production logs -f brig-7c5547f7db-dmg6q {"logger":"cassandra.brig","msgs":["I","Known hosts: [datacenter1:rack1:ip1:9042,datacenter1:rack1:ip2:9042,datacenter1:rack1:ip3:9042]"]} {"logger":"cassandra.brig","msgs":["I","New control connection: datacenter1:rack1:ip3:9042#<socket: 11>"]} service: GeneralError (TransportError (HttpExceptionRequest Request { host = "sqs.us-east-1.amazonaws.com" port = 443 secure = True requestHeaders = [("Host","sqs.us-east-1.amazonaws.com"),("X-Amz-Date","20200418T154336Z"),("X-Amz-Content-SHA256","a1bccad704dcd70c26894d153950204cc9abd1acac42904739730e1de32c5458"),("Content-Type","application/x-www-form-urlencoded; charset=utf-8"),("Authorization","<REDACTED>")] path = "/" queryString = "" method = "POST" proxy = Nothing rawBody = False redirectCount = 0 responseTimeout = ResponseTimeoutMicro 70000000 requestVersion = HTTP/1.1 } (ConnectionFailure Network.Socket.connect: <socket: 16>: does not exist (Host is unreachable))))
The brig and gundeck pods are failing.

Is something wrong with the configuration?
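The ConnectionFailure ... Host is unreachable part of the log suggests the pod has no route to sqs.us-east-1.amazonaws.com at all. A quick, hedged connectivity check from inside the same namespace (the image name is only an example; busybox's nc flags vary slightly between builds):

kubectl -n production run net-test --rm -it --restart=Never --image=busybox -- \
  sh -c 'nslookup sqs.us-east-1.amazonaws.com && nc -vz -w 5 sqs.us-east-1.amazonaws.com 443'

If DNS or the TCP connection fails here too, the issue is cluster egress (NAT, security groups, proxy) rather than the wire-server configuration.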

default Kubespray config claims kubelets will load-balance access to apiserver but it doesn't seem to be the case

From the Kubespray documentation (emphasis mine):

https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ha-mode.md#kube-apiserver

K8s components require a loadbalancer to access the apiservers via a reverse proxy. Kubespray includes support for an nginx-based proxy that resides on each non-master Kubernetes node. This is referred to as localhost loadbalancing. It is less efficient than a dedicated load balancer because it creates extra health checks on the Kubernetes apiserver, but is more practical for scenarios where an external LB or virtual IP management is inconvenient. This option is configured by the variable loadbalancer_apiserver_localhost (defaults to True. Or False, if there is an external loadbalancer_apiserver defined). You may also define the port the local internal loadbalancer uses by changing, loadbalancer_apiserver_port. This defaults to the value of kube_apiserver_port. It is also important to note that Kubespray will only configure kubelet and kube-proxy on non-master nodes to use the local internal loadbalancer.

So this would mean that there is nginx running on each node that runs kubelet and that nginx will forward the kubelet traffic to any of the three apiservers.

However, the documentation seems to vaguely suggest this is only installed on non-master nodes. Not sure what that means. Are our worker nodes (that are also masters) considered non-master nodes by Kubespray? We should probably have a look at their playbooks. It seems that, for us, the kubelets are not configured by default to talk to the apiserver in a highly available way. I hope this is just a small config change in Ansible to kick this thing in the right direction.

On the kubesprayed installation that @julialongtin sprayed today, there is no nginx running locally, and kubelet.conf seems to be configured to talk directly to the apiserver on the same node as the kubelet, not to an nginx running on port :443 as described in their documentation. So this seems to suggest that this load-balancing component of Kubespray's installation procedure was not installed for some reason.
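A hedged way to check whether the local apiserver proxy ever got deployed on a node, and what the kubelet actually points at (the static pod is usually called nginx-proxy in Kubespray, though the name may vary by version):

docker ps | grep -i nginx-proxy        # or: crictl ps | grep -i nginx-proxy
grep 'server:' /etc/kubernetes/kubelet.conf

If nothing is running, explicitly setting loadbalancer_apiserver_localhost: true in the Kubespray group_vars (the variable quoted above) and re-running the cluster playbook might be the small Ansible change hoped for; untested.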

root@kubenode01:~# cat /etc/kubernetes/kubelet.conf 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data:  "<redacted>"
    server: https://10.0.0.39:6443
  name: cluster.local

Note that for components that talk to the apiserver inside the cluster kube-proxy takes care of the load-balancing and everything is fine. It's just the kubelets that only talk to their local apiserver.

root@kubenode01:~# kubectl describe svc kubernetes
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                10.233.0.1
Port:              https  443/TCP
TargetPort:        6443/TCP
Endpoints:         10.0.0.39:6443,10.0.0.40:6443,10.0.0.41:6443
Session Affinity:  None
Events:            <none>

missing possibility to add CA certs to brig and galley pods

Hello,
I want to add a bot to my platform. The bot is hosted behind an https URL with a certificate generated by an internal CA.
I cannot add the bot in the conversation (PinInvalidCert error)
After debugging, it's because the brig and galley pods only accept bot URLs with self-signed certs or certs generated by a trusted CA.
I was able to install our internal CA cert on the pods like this:
vi /usr/local/share/ca-certificates/internal_ca.crt
paste the crt content
update-ca-certificates
But it's a bit complicated to do that on each pod recreation.
Could you modify brig and galley charts so that we can specify somewhere one or multiple internal CA crt files?
Maybe method3 from this URL: https://medium.com/@paraspatidar/add-self-signed-or-ca-root-certificate-in-kubernetes-pod-ca-root-certificate-store-cb7863cb3f87
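Until the charts support this, a hedged workaround sketch is to put the CA cert into a ConfigMap and mount it into the pods' certificate directory. Everything below is illustrative: the container name and the Debian-style path are assumptions, update-ca-certificates would still have to run inside the container (e.g. via a postStart hook or an exec after each restart), and a later helm upgrade would overwrite the patch:

kubectl create configmap internal-ca --from-file=internal_ca.crt

kubectl patch deployment brig --patch '
spec:
  template:
    spec:
      containers:
      - name: brig                      # assumed container name
        volumeMounts:
        - name: internal-ca
          mountPath: /usr/local/share/ca-certificates/internal_ca.crt
          subPath: internal_ca.crt
      volumes:
      - name: internal-ca
        configMap:
          name: internal-ca
'

A chart-level option (extra volumes/volumeMounts plus a step to refresh the trust store) would indeed be the cleaner solution.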

Error `Unexpected Error` when registering a user in the webapp

What happened

The error Unexpected Error appears when trying to register a user at bare-webapp.example.com.

What you expected to happen

I can register, and my email address receives mail from wire-server.

Step to reproduce

I documented the step-by-step installation at https://gist.github.com/zufardhiyaulhaq/ef9aa383292b34bf68198016c5fe1c49

Anything else we need to know?:

Environment

  • Kubernetes baremetal v1.13.3 (the hard way, 3 etcd, 3 master, 3 worker)
  • Helm v.2.13.1
  • Self generated wildcard certificate (*.example.com)

self troubleshooting

Based on the wire-server-deploy documentation, I can open the following sites:

  • webapp : bare-webapp.example.com
  • fakes3 : bare-s3.example.com
  • https: bare-https.example.com
  • ssl: bare-ssl.example.com

I can log in to fakes3 (minio) with user dummykey and password dummysecret. But when I try to register an account in the webapp, I get the error Unexpected Error.

(screenshot)

I try to login with user dummykey and password dummysecret

(screenshot)

I think SMTP cannot send email properly. I am using a Gmail account with this configuration in values/wire-server/demo-values.yaml:

    emailSMS:
      general:
        emailSender: [email protected]
        smsSender: "insert-sms-sender-for-twilio" # change this if SMS support is desired
    smtp:
      host: smtp.gmail.com
      port: 587
      connType: tls

and plain user password in values/wire-server/demo-secrets.yaml

brig:
  secrets:
    smtpPassword: "plainpassword for user [email protected]"

wireapp pod

root@zu-master1:~/wire-server-deploy# kubectl -n demo get pod
NAME                                                              READY   STATUS      RESTARTS   AGE
brig-5dcd4848d8-78lsk                                             1/1     Running     0          104s
cannon-0                                                          1/1     Running     0          104s
cargohold-557b7d89b4-6t9m4                                        1/1     Running     0          104s
cassandra-ephemeral-0                                             1/1     Running     0          15m
cassandra-migrations-4nkvw                                        0/1     Completed   0          2m19s
demo-nginx-lb-ingress-nginx-ingress-controller-65f5c4cf8f-59s8c   1/1     Running     0          45s
demo-nginx-lb-ingress-nginx-ingress-default-backend-cfb85f46ddj   1/1     Running     0          45s
elasticsearch-ephemeral-c8f779df8-7rjvw                           1/1     Running     0          15m
elasticsearch-index-9hmw2                                         0/1     Completed   0          110s
fake-aws-dynamodb-6779d5f867-fgkb7                                2/2     Running     0          3m8s
fake-aws-s3-857f769967-zk7cw                                      1/1     Running     0          3m8s
fake-aws-sns-5cbfc979c7-wxggr                                     2/2     Running     0          3m8s
fake-aws-sqs-94c7dc958-c2ntq                                      2/2     Running     0          3m8s
galley-6ffd58ff7-692wz                                            1/1     Running     0          104s
gundeck-69f9cc8556-6qpxc                                          1/1     Running     0          104s
nginz-854b5574b4-24wmg                                            2/2     Running     0          104s
redis-ephemeral-646b8c65bf-9qfrl                                  1/1     Running     0          15m
webapp-8489fd85ff-f2vmc                                           1/1     Running     0          104s

brig logs

root@zu-master1:~/wire-server-deploy# kubectl -n demo logs brig-5dcd4848d8-78lsk
1:I,6:logger,1:=,14:cassandra.brig,50:Known hosts: [datacenter1:rack1:10.244.0.180:9042],
1:I,6:logger,1:=,14:cassandra.brig,62:New control connection: datacenter1:rack1:10.244.0.180:9042#11,
1:I,25:Listening on 0.0.0.0:8080,
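If the suspicion is that SMTP itself is failing, it may help to verify that the cluster can actually reach the mail server on the configured port. A hedged sketch (pod and image names are illustrative; this assumes outbound port 587 is allowed):

kubectl -n demo run smtp-test --rm -it --restart=Never --image=alpine -- \
  sh -c 'apk add --no-cache openssl >/dev/null && openssl s_client -starttls smtp -connect smtp.gmail.com:587 </dev/null'

The brig logs can also be checked for an SMTP-related error right after a registration attempt.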

Webapp stuck

@lucendio, I am facing a stuck webapp, shown in the screenshot below.

(screenshot)

I am not able to click on "Accept" or "No thanks". I tried all browsers. When I inspect the element, it shows this error:

Request URL:https://nginz-https.domain.com/self/consent
Request Method:PUT
Remote Address:52.201.26.92:443
Status Code:
404
Version:HTTP/2
Referrer Policy:same-origin

I also checked the nginz pod logs:
kubectl logs -f nginz-85cdfdb8df-rsvc8 -c nginz

172.31.0.81 - "25/May/2020:03:45:58 +0000" "OPTIONS /self/consent HTTP/1.1" 204 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36" 172.31.8.250 290 0.000 - - - - fbac564e77dc4b29f0717f355f565680 
172.31.8.250 - "25/May/2020:03:45:58 +0000" "PUT /self/consent HTTP/1.1" 404 106 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36" 172.31.8.250 290 0.001 0.000 - 46a60720-96c0-40af-ba79-6866db2c94fc 1842137572099248510 e160fdac8f7290e75d4bd9c8067009ec 

Can you please help me understand why I am facing this issue? I tried removing and setting everything up again, but without success.

Thank you

Bump image versions in charts to latest

  • Edit concourse (kubernetes_dev) pipeline to bump image versions inside wire-server-deploy on successful integration tests; this will cause a chart version bump with the new latest-greatest-images.

  • This means we can remove the docker image overrides within Cailleach entirely and rely on the chart's values. This is better because we know everything works in the default charts (it passed CI); and it means that Cailleach uses the same image versions that an open source user would have.

  • Bump wire-server-deploy patch on merge to develop on wire-server

  • Bump wire-server-deploy patch on merge to master on wire-server-deploy

  • Bump wire-server-deploy minor on merge to master on wire-server

We should version secrets and configmaps

If you update a secret (which causes a new deployment rollout, because we have the hash of the secret in our deployments) and this secret contains a syntax error, the rollout of the new ReplicaSet fails. However, if any pods of your old ReplicaSet now restart, they also refer to the wrong secret and crash too, and then everything crashes.

Instead, if we update a deployment from software-v1 to software-v2, then software-v2 should point to software-v2-secret and software-v1 should point to software-v1-secret. Only after successfully rolling out software-v2 should we delete software-v1-secret.

I discovered this the hard way when I accidentally uploaded the encrypted version of a secret to my test cluster (forgot to sops decrypt) and then things went bad. Helm also didn't allow me to rollback the Secret for some reason no matter what I did, and then both the old and new replicaset crashed.

So there are two issues here:

  1. The old replicaset can refer to the new secret which can lead to strange things

  2. If you upload a new secret with helm upgrade and you forgot to sops decrypt it, then helm refuses to update the secret even after subsequent deploys that do sops decrypt the secret, and you're stuck with a broken deploy.

Point 1 we can fix; point 2 is some bug, and I can't figure out why it happens.

Related issue: kubernetes/kubernetes#22368
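A hedged sketch of the proposal, deriving the Secret name from a hash of its contents in the chart templates so that an old ReplicaSet keeps referencing the Secret it was rolled out with (paths and value keys are illustrative, not the actual chart layout):

# templates/secret.yaml
metadata:
  name: brig-{{ .Values.secrets | toYaml | sha256sum | trunc 8 }}

# templates/deployment.yaml, volume referencing the secret
volumes:
  - name: brig-secrets
    secret:
      secretName: brig-{{ .Values.secrets | toYaml | sha256sum | trunc 8 }}

Old, no-longer-referenced Secrets would then need to be garbage-collected after a successful rollout.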

Curate Grafana Dashboards for Wire Services

The metrics chart currently has several kubernetes health dashboards built in; but there are no dashboards built-in for monitoring wire services. The metrics for wire services are being scraped by Prometheus, so it should just be a matter of building some nice dashboards (we can do so on the staging environment) then exporting the JSON definitions of the dashboards and including them in the chart.

Unfortunately it seems the dashboard importing in the Grafana chart is broken ATM, but has apparently been fixed upstream and should propagate to prometheus-operator soon:

See the following issues:

See for a head start: https://github.com/wireapp/wire-server-deploy/blob/cp/prometheus-operator/charts/wire-server-metrics/values.yaml#L41-L58

Emails not received after register

Hello,
I am facing an issue after completely setting up the demo on a virtual server. Everything installed properly, but after registering a user I do not receive an email in my mailbox. While debugging the logs of the SMTP pod, I see the errors below. Can you please help resolve this?

kubectl logs -f demo-smtp-84b7b85ff6-p62q6

Size of off_t: 8
289 delivering 1jZD6i-00004e-Oj
289 R: dnslookup for rahul******@hotmail.com
290 T: remote_smtp for rahul*****@hotmail.com
288 LOG: smtp_connection MAIN
288 SMTP connection from 10-233-64-34.brig.default.svc.cluster.local (brig-bf8d7fcd5-5pbsx) [10.233.64.34] lost D=5s
287 Connecting to hotmail-com.olc.protection.outlook.com [104.47.14.33]:25 ... failed: Connection timed out (timeout=5m)
287 LOG: MAIN
287 H=hotmail-com.olc.protection.outlook.com [104.47.14.33] Connection timed out
290 Connecting to hotmail-com.olc.protection.outlook.com [104.47.59.161]:25 ... failed: Connection timed out (timeout=5m)
290 LOG: MAIN
290 H=hotmail-com.olc.protection.outlook.com [104.47.59.161] Connection timed out

I appreciate your help.
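The repeated Connection timed out on port 25 points at outbound SMTP being blocked, which many hosting providers do by default. A quick, hedged check from the node running the SMTP pod:

nc -vz -w 5 hotmail-com.olc.protection.outlook.com 25

If that also times out, direct delivery on port 25 will not work from this network, and an authenticated relay (e.g. on port 587) is needed instead.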

Can't install wire-server demo through helm

I followed the installation instructions at https://docs.wire.com/how-to/install/helm.html#helm, and after running:
helm upgrade --install wire-server wire/wire-server -f values.yaml -f secrets.yaml --wait
I get error:
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(StatefulSet.spec.template.spec): unknown field "podManagementPolicy" in io.k8s.api.core.v1.PodSpec
AFAIK from https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies, the podManagementPolicy field is supported by Kubernetes 1.7 and later, and I have version 1.14.2.
What can be the reason for this error?

Ensure that we have a way to check a client's IP addresses

On our cloud solution we rely on some ELB specifics; we need to revise this with respect to on-prem installations, where ELBs or other load balancers are in place. We should be able to see a client's IP address at the ingress/nginz/backend-services level, for rate limiting or logging purposes.

Something relevant from the nginx-ingress for instance: https://github.com/helm/charts/tree/master/stable/nginx-ingress

Parameter: controller.service.externalTrafficPolicy
Description: If controller.service.type is NodePort or LoadBalancer, set this to Local to enable source IP preservation
Default value: "Cluster"
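Assuming the stock nginx-ingress chart is in use, source IP preservation could then be enabled with something like the following (untested sketch; release and chart names are illustrative):

helm upgrade --install nginx-ingress stable/nginx-ingress \
  --set controller.service.externalTrafficPolicy=Local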

LoadBalancer for external databases?

Hi, I've deployed wire-server with external databases using official installation guide, then I created endpoints for these external services:

---
kind: "Service"
apiVersion: "v1"
metadata:
  name: "cassandra-external"
spec:
  clusterIP: None
  ports:
    - name: "cassandra"
      protocol: "TCP"
      port: 9042
      targetPort: 9042
---
kind: "Endpoints"
apiVersion: "v1"
metadata:
  name: "cassandra-external"
subsets:
  - addresses:
      - ip: "10.9.8.11"
    ports:
      - port: 9042
        name: "cassandra"
  - addresses:
      - ip: "10.9.8.12"
    ports:
      - port: 9042
        name: "cassandra"
  - addresses:
      - ip: "10.9.8.13"
    ports:
      - port: 9042
        name: "cassandra"
---
kind: "Service"
apiVersion: "v1"
metadata:
  name: "elasticsearch-external"
spec:
  clusterIP: None
  ports:
    - name: "elasticsearch"
      protocol: "TCP"
      port: 9200
      targetPort: 9200
---
kind: "Endpoints"
apiVersion: "v1"
metadata:
  name: "elasticsearch-external"
subsets:
  - addresses:
      - ip: "10.9.8.21"
    ports:
      - port: 9200
        name: "cassandra"
  - addresses:
      - ip: "10.9.8.22"
    ports:
      - port: 9200
        name: "cassandra"
  - addresses:
      - ip: "10.9.8.23"
    ports:
      - port: 9200
        name: "cassandra"
---
kind: "Service"
apiVersion: "v1"
metadata:
  name: "minio-external"
spec:
  clusterIP: None
  ports:
    - name: "minio"
      protocol: "TCP"
      port: 9000
      targetPort: 9000
---
kind: "Endpoints"
apiVersion: "v1"
metadata:
  name: "minio-external"
subsets:
  - addresses:
      - ip: "10.9.8.31"
    ports:
      - port: 9000
        name: "minio"
  - addresses:
      - ip: "10.9.8.32"
    ports:
      - port: 9000
        name: "minio"
  - addresses:
      - ip: "10.9.8.33"
    ports:
      - port: 9000
        name: "minio"
---
kind: "Service"
apiVersion: "v1"
metadata:
  name: "redis-external"
spec:
  clusterIP: None
  ports:
    - name: "elasticsearch"
      protocol: "TCP"
      port: 6379
      targetPort: 6379
---
kind: "Endpoints"
apiVersion: "v1"
metadata:
  name: "redis-external"
subsets:
  - addresses:
      - ip: "10.9.8.41"
    ports:
      - port: 6379
        name: "redis"
  - addresses:
      - ip: "10.9.8.42"
    ports:
      - port: 6379
        name: "redis"
  - addresses:
      - ip: "10.9.8.43"
    ports:
      - port: 6379
        name: "redis"

Afterwards, I found out that the official helm charts also use the same logic.

The problem is that manually created Endpoints have no health-check mechanism and are not highly available by default, so some logic has to be implemented either on the client side or in another load balancer.

My question: does wire-server have this logic implemented, or should I use an external load balancer, e.g. HAProxy, to route the traffic to these services?

AWS'less deployment

Hi, I'm trying to deploy wire-server for a production environment. I've finished deploying the databases using Ansible, created external Endpoints for the external databases in Kubernetes, and deployed wire-server using the helm chart with production values.

Everything seems to work fine except for a few external dependencies that are not described in the production deployment guide.

The official guide describes the fake-aws installation, but if I understood correctly, those components are not acceptable for production use.

My questions:

  • Is it possible to deploy these services on-premise without using AWS?
  • How critical are they; what happens if I use fake-aws in production?
  • Are there any other alternatives?

Thank you!

AWS SES error (SSL routines:tls_process_server_certificate:certificate verify failed)

Hello,

Thanks for looking into the issue from #266. As per the suggestion by @lucendio, I configured:

(1) the SES service with an https endpoint,
sesEndpoint: https://email-smtp.ap-south-1.amazonaws.com, and I am facing this error:

{"request":"2d663fe7784ba090cd11a52e7f5f0a3e","msgs":["E","GeneralError (TransportError (HttpExceptionRequest Request {\n host = \"email-smtp.ap-south-1.amazonaws.com\"\n port = 443\n secure = True\n requestHeaders = [(\"Host\",\"email-smtp.ap-south-1.amazonaws.com\"),(\"X-Amz-Date\",\"20200519T130041Z\"),(\"X-Amz-Content-SHA256\",\"27ce160be8d5759f047cd34af883551891efd64ea91bff430e28352b2c6da41d\"),(\"Content-Type\",\"application/x-www-form-urlencoded; charset=utf-8\"),(\"Authorization\",\"<REDACTED>\")]\n path = \"/\"\n queryString = \"\"\n method = \"POST\"\n proxy = Nothing\n rawBody = False\n redirectCount = 0\n responseTimeout = ResponseTimeoutMicro 70000000\n requestVersion = HTTP/1.1\n}\n (InternalException ProtocolError \"error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed\")))"]}

(2) I configured a mail client CLI on my Kubernetes node and sent an email. I am able to successfully send email using the following command:

openssl s_client -crlf -quiet -connect email-smtp.ap-south-1.amazonaws.com:465 < test.txt

Also, I verified the base64 values for the AWS access key and secret key in the brig secret against the ones I used manually with the AWS CLI mail tool; both are correct.
It seems to be a separate issue when calling the AWS API over https from wire-server.

Lot of certificate failed errors in pod logs for aws services end points

Hi,

I have a wildcard certificate for the setup and it is a valid one, but a lot of certificate-failure errors are showing up in the pod logs for the AWS service endpoints.

1:E,6:logger,1:=,11:aws.gundeck,5:error,1:=,891:GeneralError (TransportError (HttpExceptionRequest Request { host = "sqs.us-east-1.amazonaws.com" port = 443 secure = True requestHeaders = [("Host","sqs.us-east-1.amazonaws.com"),("X-Amz-Date","20200617T111732Z"),("X-Amz-Content-SHA256","8d19ceea0c2609868a134d22880a0d1d8ce6ff6b22a4203a12540f23cf9b6f70"),("Content-Type","application/x-www-form-urlencoded; charset=utf-8"),("Authorization","<REDACTED>")] path = "/" queryString = "" method = "POST" proxy = Nothing rawBody = False redirectCount = 0 responseTimeout = ResponseTimeoutMicro 70000000 requestVersion = HTTP/1.1 } (InternalException (HandshakeFailed (Error_Protocol ("certificate rejected: [NameMismatch \"sqs.us-east-1.amazonaws.com\"]",True,CertificateUnknown)))))),23:Failed to read from SQS,

{"error":"GeneralError (TransportError (HttpExceptionRequest Request {\n host = \"sqs.us-east-1.amazonaws.com\"\n port = 443\n secure = True\n requestHeaders = [(\"Host\",\"sqs.us-east-1.amazonaws.com\"),(\"X-Amz-Date\",\"20200617T080625Z\"),(\"X-Amz-Content-SHA256\",\"1bcb22493cf3e61a8a93aaf06da833295b32d77fb7ccb06cfa3bbd8cc04968e3\"),(\"Content-Type\",\"application/x-www-form-urlencoded; charset=utf-8\"),(\"Authorization\",\"<REDACTED>\")]\n path = \"/\"\n queryString = \"\"\n method = \"POST\"\n proxy = Nothing\n rawBody = False\n redirectCount = 0\n responseTimeout = ResponseTimeoutMicro 70000000\n requestVersion = HTTP/1.1\n}\n ResponseTimeout))","logger":"aws.brig","msgs":["E","Failed to read from SQS"]}

1:E,7:request,1:=,32:cbf32839bf38138892019bcbc5013f35,679:HttpExceptionRequest Request { host = "sqs.us-east-1.amazonaws.com" port = 443 secure = True requestHeaders = [("Date","Wed, 17 Jun 2020 11:16:57 GMT"),("Authorization","<REDACTED>")] path = "/assets/v3/eternal/d3d776ae-fac1-44bd-918d-beed0b86c920" queryString = "" method = "HEAD" proxy = Nothing rawBody = False redirectCount = 10 responseTimeout = ResponseTimeoutDefault requestVersion = HTTP/1.1 } (InternalException ProtocolError "error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed"),

The pods are getting filled with the logs above. Please help me fix this issue.
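The NameMismatch part means the certificate actually presented for sqs.us-east-1.amazonaws.com is not the one Amazon serves. A hedged way to inspect which certificate is really being returned, from a node or any pod that has openssl:

echo | openssl s_client -connect sqs.us-east-1.amazonaws.com:443 -servername sqs.us-east-1.amazonaws.com 2>/dev/null \
  | openssl x509 -noout -subject -issuer

If the subject turns out to be your own wildcard certificate, then DNS or a proxy inside the cluster is redirecting AWS traffic to your ingress rather than to AWS.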

Unable to install wire-server

Just stuck during wire-server deployment:
helm install wire/team-settings Error: render error in "team-settings/templates/secret.yaml": template: team-settings/templates/secret.yaml:12:23: executing "team-settings/templates/secret.yaml" at <required "No .secrets found in configuration. Did you forget to helm -f path/to/secrets.yaml ?" .Values.secrets>: error calling required: No .secrets found in configuration. Did you forget to helm -f path/to/secrets.yaml ?
Did I miss something? Do I have to compile/build/recreate the pod or chart for team-settings? Any help would be appreciated.
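The error is saying that no secrets file was passed to the chart. A hedged example of what the install command usually looks like (file paths are illustrative):

helm upgrade --install team-settings wire/team-settings \
  -f values/team-settings/values.yaml \
  -f values/team-settings/secrets.yaml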

Unable to install wire-server with helm

When I try to install the server demo, I have the following error:

helm upgrade --install databases-ephemeral wire/databases-ephemeral --wait
Release "databases-ephemeral" does not exist. Installing it now.
Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"

I'm very new to Kubernetes, so I have no idea how to troubleshoot this.

Thanks.
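extensions/v1beta1 Deployments were removed in Kubernetes 1.16, so this error usually means the cluster is newer than the chart expects. A quick, hedged check of what the cluster still serves:

kubectl version --short
kubectl api-versions | grep -i extensions

If extensions/v1beta1 is not listed, either use a newer chart version that targets apps/v1 or install onto a cluster older than 1.16.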

Audio/Video call issue.

Hello guys,

I have set up one TURN server, turn01.example.io, on a separate VM. I also made the corresponding configuration in wire-server/value.yml for the TURN server:

brig:
    config:
      - some configuration
    turnStatic:
      v1:
      - turn:turn01.example.io.io:3478
      v2:
      - "turn:turn01.example.io:3478"
      - "turn:turn01.example.io::80?transport=tcp"
      - "turns:turn01.example.io:443?transport=tcp"

After the configuration succeeded, I ran the test for the restund server using the Kubernetes proxy with localhost and my system UUID; it shows localhost rather than my TURN server endpoints. Please see the screenshot.

(screenshot)

Can you help me understand why I am facing this issue?

Thanks

What exactly is the list of domains for wire?

I set up the wire server following your guide. All steps finished, with no errors!
I currently set the CIDR address in MetalLB to 10.15.15.30 (for internal demo testing).
And I pointed the following list of domains to the IP above:
In nginx-ingress-service/demo-values.yaml

  • nginz-https.mydomain.com
  • nginz-ssl.mydomain.com
  • webapp.mydomain.com
  • s3.mydomain.com
  • team.mydomain.com
  • account.mydomain.com

In wire-server/demo-values.yaml

  • bare-https.mydomain.com
  • bare-ssl.mydomain.com
  • bare-s3.mydomain.com
  • bare-webapp.mydomain.com
  • bare-team.mydomain.com
  • api.mydomain.com
  • mydomain.com

But when I access https://webapp.mydomain.com, I get an "unknown route" error for the domain.

I tried to access the URL https://bare-https.mydomain.com/self and got default backend - 404.

So what exactly is the list of domains you provide? And regarding bare-https.mydomain.com (originally bare-https.example.com): which IP do I need to point the domains bare-*.mydomain.com, mydomain.com and api.mydomain.com at? Is it the CIDR provided to MetalLB? (All domains are listed in wire-server/demo.values.yaml.)

Where can I see the list of backend APIs, e.g. /swagger-ui/?

P.S.: Please also update your "Load balancer on bare metal servers" document. There is no folder called values/nginx-lb-ingress; I used the folder called nginx-ingress-services instead. Is that right?

Thanks!
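All of those hostnames normally point at the external IP that MetalLB assigns to the ingress controller's LoadBalancer Service. A hedged way to find that IP and verify the DNS entries (the hostname shown is just an example):

kubectl get svc --all-namespaces | grep LoadBalancer    # the EXTERNAL-IP column should show the MetalLB address
dig +short webapp.mydomain.com                          # should return the same address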

brig error on deploy wire-server

Hi there,

thanks again for all the help and assistance.

I am currently trying to deploy wire-server using helm; everything is working fine except that the brig pods are stuck in CrashLoopBackOff.

When I pull the logs, this is all I get:

wireadmin@wire-controller:~/wire-server-deploy/ansible$ kubectl logs brig-8674744bc7-ccbtf
{"logger":"cassandra.brig","msgs":["I","Known hosts: [datacenter1:rack1:172.16.32.31:9042,datacenter1:rack1:172.16.32.32:9042,datacenter1:rack1:172.16.32.33:9042]"]}
{"logger":"cassandra.brig","msgs":["I","New control connection: datacenter1:rack1:172.16.32.33:9042#<socket: 11>"]}
NAME                                  READY   STATUS             RESTARTS   AGE
brig-8674744bc7-ccbtf                 0/1     CrashLoopBackOff   6          7m58s
brig-8674744bc7-jlpgn                 0/1     CrashLoopBackOff   7          7m58s
brig-8674744bc7-mbh5m                 0/1     CrashLoopBackOff   7          7m58s
cannon-0                              1/1     Running            0          7m58s
cannon-1                              1/1     Running            0          7m58s
cannon-2                              1/1     Running            0          7m58s
cargohold-d474c7847-mpj7w             1/1     Running            0          7m58s
cargohold-d474c7847-phms7             1/1     Running            0          7m58s
cargohold-d474c7847-r4j8b             1/1     Running            0          7m58s
cassandra-migrations-g667z            0/1     Completed          0          8m7s
demo-smtp-84b7b85ff6-k2djh            1/1     Running            0          9h
elasticsearch-index-create-xnzwm      0/1     Completed          0          8m1s
fake-aws-dynamodb-84f87cd86b-dsz2v    2/2     Running            0          9h
fake-aws-s3-5468cdf989-fccm9          1/1     Running            0          9h
fake-aws-s3-reaper-7c6d9cddd6-ff8fn   1/1     Running            0          9h
fake-aws-sns-5c56774d95-dwcsw         2/2     Running            0          9h
fake-aws-sqs-554bbc684d-cqxzl         2/2     Running            0          9h
galley-87df7b65f-kp588                1/1     Running            0          7m58s
galley-87df7b65f-t7wtd                1/1     Running            0          7m58s
galley-87df7b65f-vhzpg                1/1     Running            0          7m58s
gundeck-f9bf469f9-b9rxt               1/1     Running            0          7m58s
gundeck-f9bf469f9-clff6               1/1     Running            0          7m58s
gundeck-f9bf469f9-gx8d4               1/1     Running            0          7m57s
nginz-77f7ff6f5d-5m94p                2/2     Running            1          7m58s
nginz-77f7ff6f5d-h7w5n                2/2     Running            1          7m58s
nginz-77f7ff6f5d-pwbzl                2/2     Running            1          7m58s
redis-ephemeral-69bb4885bb-qbmdw      1/1     Running            0          8h
spar-59fd5db594-gbsbz                 1/1     Running            0          7m58s
spar-59fd5db594-jclmh                 1/1     Running            0          7m58s
spar-59fd5db594-zvbl6                 1/1     Running            0          7m58s
webapp-6cb84759d9-wfhc9               1/1     Running            0          7m58s
wireadmin@wire-controller:~/wire-server-deploy/ansible$

Those are the correct IPs for my three Cassandra nodes, and they seem to be up fine. I'm using cassandra-external to point to them.

Any guidance as to what I should upload to help debug this would be much appreciated too.

Thanks!
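Since brig exits before logging anything beyond the Cassandra control connection, the previous container's output and the recorded exit state may hold more detail. A couple of hedged commands to collect that:

kubectl logs --previous brig-8674744bc7-ccbtf
kubectl describe pod brig-8674744bc7-ccbtf | grep -A 6 'Last State'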

Shared Images not being displayed on the chat

Hi,

I am on a production setup with 3 Minio VMs. Some of the images shared in the chat are not displayed or downloadable.

Let's say I have shared three images at once in my chat: out of the 3, only 1 image is visible in the chat, and the remaining 2 show up as blank frames, as shown below.

(screenshot)

Below are the network logs.

Request URL: https://assets.example.com/assets/v3/eternal/4899444b-3032-4851-95cd-102baa0d05d3?Expires=1591297623&AWSAccessKeyId=koUSmGaX6YumkLTnIN&SignatureMethod=HmacSHA256&Signature=IQO5HUkZP8YMNpB%2F8z9EpEL4GQo%3D
Request Method: GET
Status Code: 404 
Remote Address: xx.xx.xx.xx:443
Referrer Policy: same-origin

<Error><Code>NoSuchKey</Code><Message>The specified key does not exist.</Message><Key>v3/eternal/4899444b-3032-4851-95cd-102baa0d05d3</Key><BucketName>assets</BucketName><Resource>/assets/v3/eternal/4899444b-3032-4851-95cd-102baa0d05d3</Resource><RequestId>16156C7C127B2C2B</RequestId><HostId>a9e6c92a-1d42-419a-aa5b-038d619a6b23</HostId></Error>

Request URL: https://assets.example.com/assets/v3/eternal/f928f49d-8a7e-4967-a4a8-03af5ac9e2cb?Expires=1591297628&AWSAccessKeyId=koUSmGaX6YumkLTnIN&SignatureMethod=HmacSHA256&Signature=F7rNGF%2BKJNo1El%2BXMYKxCAVQU0A%3D
Request Method: GET
Status Code: 404 
Remote Address: xx.xx.xx.xx:443
Referrer Policy: same-origin

<Error><Code>NoSuchKey</Code><Message>The specified key does not exist.</Message><Key>v3/eternal/f928f49d-8a7e-4967-a4a8-03af5ac9e2cb</Key><BucketName>assets</BucketName><Resource>/assets/v3/eternal/f928f49d-8a7e-4967-a4a8-03af5ac9e2cb</Resource><RequestId>16156C7D7C826400</RequestId><HostId>d2bb6d4f-ce77-4a4f-a9df-f5c9e696872b</HostId></Error>

Please see the bucket screenshot below, which shows that the assets are present inside the bucket.

(screenshot)

Sometimes the files are stored randomly across all 3 Minio buckets, a few in one bucket and a few in the others. In that case, too, not all images or files are displayed in the chat.

Please help me resolve this issue.
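With three separate Minio VMs, the NoSuchKey responses are consistent with uploads landing on one instance while downloads are served by another. A hedged way to check where a given asset actually lives, assuming the Minio client (mc) is installed and aliases minio1, minio2 and minio3 have been configured for the three VMs:

mc ls minio1/assets/v3/eternal/ | grep 4899444b-3032-4851-95cd-102baa0d05d3
mc ls minio2/assets/v3/eternal/ | grep 4899444b-3032-4851-95cd-102baa0d05d3
mc ls minio3/assets/v3/eternal/ | grep 4899444b-3032-4851-95cd-102baa0d05d3

If each asset only shows up on one of the three, the instances are not acting as a single distributed cluster, and either Minio's distributed mode or consistent routing to a single endpoint is needed.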

Images or files unable to be downloaded or viewed

Hello,

I just deployed a fresh wire-server demo on GCP. After it installed successfully, I am able to log in and send chat messages, and that works fine as expected. But when I upload a file or image, it is not visible in the chat on either side. It shows up like the screenshot below.

(screenshot)

I also see some console errors, as in the screenshot below:

(screenshot)

Can you please help me understand why I am facing this issue with a fresh demo and the dummy bucket?

FYI, I am fine with disclosing the domain name here, as this is a test server that I will tear down once everything works as expected.

Thank you

Getting error while running "make download".

While running make or make download inside the "wire-server-deploy/ansible" directory, I'm getting this error:

TASK [ansible-kubectl : Download kubernetes-client archive] **********************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to connect to dl.k8s.io at port 443: [Errno 110] Connection timed out"}

PLAY RECAP ***********************************************************************************************************************************************************************************************
localhost                  : ok=3    changed=1    unreachable=0    failed=1   

Makefile:13: recipe for target 'download-cli-binaries' failed
make: *** [download-cli-binaries] Error 2
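The failing task is a plain HTTPS download, so the timeout points at the machine running Ansible not being able to reach dl.k8s.io. A couple of hedged checks (the proxy value is purely illustrative and only relevant if your network requires one):

curl -v --max-time 10 https://dl.k8s.io/ -o /dev/null
export https_proxy=http://your-proxy:3128   # illustrative; only if a proxy is required
make download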

getting external smtp to work - smtpUsername to username?

Hi, I have been having a hard time getting email working on the demo server (https://github.com/wireapp/wire-docs/issues/56). In particular, I am attempting to use authenticated smtp. In wire-server-deploy/charts/brig/templates/configmap.yaml, I see

        smtpCredentials:
          username: {{ .smtp.username }}
          password: {{ .smtp.passwordFile }}

yet in wire-server/services/brig/src/Brig/Options.hs

data EmailSMTPCredentials = EmailSMTPCredentials
  { -- | Username to authenticate
    --   against the SMTP server
    smtpUsername :: !Text,
    -- | File containing password to
    --   authenticate against the SMTP server
    smtpPassword :: !FilePathSecrets
  }
  deriving (Show, Generic)

instance FromJSON EmailSMTPCredentials

it gets read in App.hs

      smtpCredentials <- case Opt.smtpCredentials s of
        Just (Opt.EmailSMTPCredentials u p) -> do
          pass <- initCredentials p
          return $ Just (SMTP.Username u, SMTP.Password pass)
        _ -> return Nothing

where I think there will be a mismatch between smtpUsername and username, and between smtpPassword and password, which may be why I have the error

service: user error (AesonException "Error in $.emailSMS.email.smtpCredentials: parsing Brig.Options.EmailSMTPCredentials(EmailSMTPCredentials) failed, key \"smtpUsername\" not found")

for my deploy values.txt, I am now using

    smtp:
      host: smtp.gmail.com # change this if you want to use your own SMTP server
      port: 587        # change this
      connType: tls # change this. Possible values: plain|ssl|tls
      username: [email protected]
      passwordFile: /etc/wire/brig/secrets/smtp-password.txt

I'd like to test out my theory, but haven't yet explored compiling brig on this end. Any thoughts much appreciated. I think perhaps I can make the changes in configmap.yaml and

./bin/update.sh ./charts/<chart-name> # this will clean and re-package subcharts
helm install charts/<chart-name> # specify a local file path
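For reference, a hedged sketch of the configmap.yaml change this theory implies; the key names are taken from the AesonException above and the indentation is illustrative:

        smtpCredentials:
          smtpUsername: {{ .smtp.username }}
          smtpPassword: {{ .smtp.passwordFile }}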

Update. I tried changing the configmap and then

helm dependency update charts

from /mnt/wire-server-deploy, then

helm upgrade --install wire-server charts/wire-server -f wire-server/values.yaml -f wire-server/secrets.yaml

This time, a new error:

  error, called at src/Brig/SMTP.hs:59:5 in brig-1.35.0-75AXQWXBSmK9R80VH6rqDH:Brig.SMTP

but I think this time username and password were read in.

Update 2. This time the error was from Gmail!
(screenshot of the Gmail error)
After making this change, email works...

Can't deploy cassandra on ubuntu18.04

Hi, the ansible-role-java Ansible role installs openjdk-11-jdk by default:

https://github.com/geerlingguy/ansible-role-java/blob/master/vars/Ubuntu-18.yml

But cassandra role requires openjdk-8-jdk instead.

Solved by specifying the exact Java version for the Cassandra and Elasticsearch VMs in the inventory:

cassandra1 ansible_host=X.X.X.X java_packages='["openjdk-8-jdk"]'
cassandra2 ansible_host=X.X.X.X java_packages='["openjdk-8-jdk"]'
cassandra3 ansible_host=X.X.X.X java_packages='["openjdk-8-jdk"]'
elasticsearch1 ansible_host=X.X.X.X java_packages='["openjdk-8-jdk"]'
elasticsearch2 ansible_host=X.X.X.X java_packages='["openjdk-8-jdk"]'
elasticsearch3 ansible_host=X.X.X.X java_packages='["openjdk-8-jdk"]'

Unable to deploy wire-server on Kubernetes due to the cassandra-migrations pod failing

Hi,

I am trying to deploy the wire-server demo on a Kubernetes cluster, but the cassandra-migrations pod is failing due to gundeck-schema. Please check the details below and help me sort this out.

Name: cassandra-migrations-r752j
Namespace: demo
Priority: 0
PriorityClassName:
Node: node1/IP
Start Time: Mon, 14 Oct 2019 07:38:13 +0000
Labels: controller-uid=93baf997-ee55-11e9-859d-38d547b2cf63
job-name=cassandra-migrations
release=wire-server
wireService=cassandra-migrations
Annotations:
Status: Pending
IP: 10.233.67.70
Controlled By: Job/cassandra-migrations
Init Containers:
gundeck-schema:
Container ID: docker://9b4906e2881d69d345b629eb0fda78453b7efdd51df8fc241384416f99fc9921
Image: quay.io/wire/gundeck-schema:2.63.0
Image ID: docker-pullable://quay.io/wire/gundeck-schema@sha256:ebb21d55f0ef6efb58d9f73505cfdc57195115cf6e745c80e52e694143c994e9
Port:
Host Port:
Command:
gundeck-schema
--host
cassandra-ephemeral
--port
9042
--keyspace
gundeck
--replication-factor
3
State: Terminated
Reason: Error
Exit Code: 1
Started: Mon, 14 Oct 2019 07:38:26 +0000
Finished: Mon, 14 Oct 2019 07:38:31 +0000
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Mon, 14 Oct 2019 07:38:19 +0000
Finished: Mon, 14 Oct 2019 07:38:24 +0000
Ready: False
Restart Count: 1
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-svgmk (ro)
brig-schema:
Container ID:
Image: quay.io/wire/brig-schema:2.63.0
Image ID:
Port:
Host Port:
Command:
brig-schema
--host
cassandra-ephemeral
--port
9042
--keyspace
brig
--replication-factor
3
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-svgmk (ro)
galley-schema:
Container ID:
Image: quay.io/wire/galley-schema:2.63.0
Image ID:
Port:
Host Port:
Command:
galley-schema
--host
cassandra-ephemeral
--port
9042
--keyspace
galley
--replication-factor
3
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-svgmk (ro)
spar-schema:
Container ID:
Image: quay.io/wire/spar-schema:2.63.0
Image ID:
Port:
Host Port:
Command:
spar-schema
--host
cassandra-ephemeral
--port
9042
--keyspace
spar
--replication-factor
3
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-svgmk (ro)
Containers:
job-done:
Container ID:
Image: busybox
Image ID:
Port:
Host Port:
Command:
sh
-c
echo "gundeck, brig, galley, spar schema cassandra-migrations completed. See initContainers for details with e.g. kubectl logs ... -c gundeck-schema"
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-svgmk (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-svgmk:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-svgmk
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message


Normal Scheduled 23s default-scheduler Successfully assigned demo/cassandra-migrations-r752j to node1
Normal Pulling 21s kubelet, node1 Pulling image "quay.io/wire/gundeck-schema:2.63.0"
Normal Pulled 17s kubelet, node1 Successfully pulled image "quay.io/wire/gundeck-schema:2.63.0"
Normal Created 10s (x2 over 16s) kubelet, node1 Created container gundeck-schema
Normal Pulled 10s kubelet, node1 Container image "quay.io/wire/gundeck-schema:2.63.0" already present on machine
Normal Started 9s (x2 over 16s) kubelet, node1 Started container gundeck-schema
Warning BackOff 4s kubelet, node1 Back-off restarting failed container
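As the job-done container's message suggests, the actual failure reason should appear in the gundeck-schema init container's logs. A hedged first step, using the pod name and namespace from the output above:

kubectl -n demo logs cassandra-migrations-r752j -c gundeck-schema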

SMS configuration issues with wire server deploy script

Hi,

I am trying to deploy wire-server using the SMS configuration below and am getting the error below when I set smsSender to a number in the values.yaml file.

    emailSMS:
      general:
        emailSender: [email protected] # change this
        smsSender: "+18123456789" # change this if SMS support is desired

I am getting the error below for the brig pod:

service: user error (AesonException "Error in $.emailSMS.general.smsSender: parsing Text failed, expected String, but encountered Number")

Please help me.

Image or file sharing not working with minio

Hi,

I am unable to share any files or images in the webapp. I configured Minio for the server setup and am getting the error below in the cargohold pod:

{ host = "minio-external"
port = 9000
secure = False
requestHeaders = [("Date","Thu, 30 Apr 2020 14:52:37 GMT"),("Content-Type","application/octet-stream"),("Content-MD5","FQq4x/fX/F7FUwQBY4wkYQ=="),("Authorization","<REDACTED>"),("x-amz-meta-user","c64da7b6-71bc-4df4-94bf-ca22c80248d9"),("Expect","100-continue")]
path = "/assets/v3/eternal/71d4d79e-c703-4b14-ba29-f969d6df3be8"
queryString = ""
method = "PUT"
proxy = Nothing
rawBody = False
redirectCount = 10
responseTimeout = ResponseTimeoutDefault
requestVersion = HTTP/1.1 } ResponseTimeout

Please help me resolve this error.
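A ResponseTimeout against minio-external:9000 usually means cargohold cannot reach the Minio instances behind that headless Service. A hedged reachability check, run in the same namespace as cargohold (the image name is illustrative; /minio/health/live is Minio's liveness endpoint):

kubectl run minio-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -sv --max-time 5 http://minio-external:9000/minio/health/live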

Ingress per deployment?

So we recently refactored nginx-ingress into nginx-ingress-controller and nginx-ingress-services. I think we can take it a step further, which will make the code a bit nicer in my opinion.

Currently nginx-ingress-services contains a single Ingress, referring to the Service of each Deployment that is exposed.

The ingress controller, however, can actually support more than one Ingress at the same time; it listens for changes in all of them and merges them together. This would allow us to move the Service and Ingress files to their respective charts, meaning we don't need to specify externalPorts in multiple places (e.g. both in the nginz chart and in the nginx-ingress-services chart).

It also means people can deploy the wire-server chart with an existing ingress-controller and things should 'just work', instead of having to configure the nginx-ingress-services chart with the same values that were already in wire-server.

e.g. we would have

nginz
├── templates
│   ├── configmap.yaml
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── ingress.yaml
│   └── secret.yaml
webapp
├── templates
│   ├── configmap.yaml
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── ingress.yaml
│   └── secret.yaml

Instead of

nginz
├── templates
│   ├── configmap.yaml
│   ├── deployment.yaml
│   └── secret.yaml
webapp
├── templates
│   ├── configmap.yaml
│   ├── deployment.yaml
│   └── secret.yaml
nginx-ingress-services/
├── Chart.yaml
├── README.md
├── templates
│   ├── ingress.yaml  <-- contains duplicate port also present in `nginz` chart, contains duplicate port also present in `webapp` chart
│   ├── secret.yaml
│   └── service.yaml <-- contains duplicate port also present in `nginz` chart, contains duplicate port also present in `webapp` chart
└── values.yaml

This makes the configs a bit less spread out and reduces code duplication, which removes room for error when, for example, changing the port numbers of a service.
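For illustration, a per-chart ingress could be as small as the sketch below (a hypothetical templates/ingress.yaml for the webapp chart; the host and port value keys are placeholders, not the charts' actual values):

# charts/webapp/templates/ingress.yaml (sketch)
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: webapp
spec:
  rules:
    - host: {{ .Values.host }}
      http:
        paths:
          - backend:
              serviceName: webapp
              # The port is defined once, next to the Service in the same
              # chart, instead of being repeated in nginx-ingress-services.
              servicePort: {{ .Values.service.externalPort }}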

Thoughts?

Error in $.settings.featureFlags.teamSearchVisibility: FeatureSearchVisibility: null

Hello,

I am deploying the demo server on an EC2 instance. Until three days ago everything was working fine, including fresh deployments. Now, when I deploy a new demo on another EC2 instance, the galley pod crashes every time. The same thing happens on Google Cloud as well.

kubectl logs -f galley-7cbc886974-lxrn9

ERROR:
service: user error (AesonException "Error in $.settings.featureFlags.teamSearchVisibility: FeatureSearchVisibility: null")
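The error suggests galley's config no longer tolerates a missing value for this (newer) feature flag. A minimal sketch of the values.yaml override, assuming the wire-server chart nests galley's settings as shown below (the key path comes from the error message itself; the disabled-by-default value is an assumption to check against the galley chart's documentation):

galley:
  config:
    settings:
      featureFlags:
        # A missing (null) teamSearchVisibility makes galley's config
        # parser fail at startup; set it explicitly.
        teamSearchVisibility: disabled-by-default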

helm error recovery from Errors w/o delete and reinstall

@akshaymankar I was able to get galley running with your fix. However, helm still did not complete:

bash-4.4# helm upgrade --install wire-server wire/wire-server -f values.yaml -f secrets.yaml --wait
Release "wire-server" does not exist. Installing it now.
Error: release wire-server failed: timed out waiting for the condition

and I see

NAME                                       READY   STATUS      RESTARTS   AGE
brig-6b69fbfcc5-x4ggf                      1/1     Running     0          9m4s
cannon-0                                   0/1     Pending     0          9m4s
cargohold-9bdc54d87-ggppg                  1/1     Running     0          9m4s
cassandra-ephemeral-0                      1/1     Running     1          17h
cassandra-migrations-tc6xl                 0/1     Completed   0          10m
demo-smtp-84b7b85ff6-7jh74                 1/1     Running     1          18h
elasticsearch-ephemeral-8545b66bcc-2vk42   1/1     Running     1          17h
elasticsearch-index-create-j9pj8           0/1     Completed   0          9m22s
fake-aws-dynamodb-84f87cd86b-6n7rc         2/2     Running     2          18h
fake-aws-s3-5c846cb5d8-2sxl6               1/1     Running     27         18h
fake-aws-s3-reaper-7c6d9cddd6-vxk5g        1/1     Running     1          18h
fake-aws-sns-5c56774d95-l6mgk              2/2     Running     2          18h
fake-aws-sqs-554bbc684d-8zv48              2/2     Running     2          18h
galley-6f5c67db5-l27j5                     1/1     Running     0          9m4s
gundeck-79b5d5885c-77g7k                   1/1     Running     0          9m4s
nginz-6564c65c78-wgcng                     0/2     Pending     0          9m4s
redis-ephemeral-69bb4885bb-n9jxz           1/1     Running     1          17h
webapp-b56ccd6b9-vmlp4                     0/1     Pending     0          9m4s

So it looks like it got stuck on cannon and maybe webapp. Besides deleting everything and re-running helm upgrade --install, is there a way I can just retry the ones that failed?

Update: I decided to delete and retry with --debug; maybe it will shed more light.
Update 2: Not so useful.

bash-4.4# helm upgrade --install wire-server wire/wire-server -f values.yaml -f secrets.yaml --wait --debug
[debug] Created tunnel using local port: '43073'

[debug] SERVER: "127.0.0.1:43073"

[debug] Fetched wire/wire-server to /root/.helm/cache/archive/wire-server-0.103.0.tgz

Release "wire-server" does not exist. Installing it now.
[debug] CHART PATH: /root/.helm/cache/archive/wire-server-0.103.0.tgz

Error: release wire-server failed: timed out waiting for the condition

though it seems from get pods -w that the problem is perhaps in webapp, cannon, or nginz?

bash-4.4# kubectl get pods -o wide
NAME                                       READY   STATUS      RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
brig-5c7f4cd4c8-cfv5f                      1/1     Running     0          11m   10.233.64.96   kubenode01   <none>           <none>
cannon-0                                   0/1     Pending     0          11m   <none>         <none>       <none>           <none>
cargohold-9bdc54d87-l9nlr                  1/1     Running     0          11m   10.233.64.97   kubenode01   <none>           <none>
cassandra-ephemeral-0                      1/1     Running     1          19h   10.233.64.65   kubenode01   <none>           <none>
cassandra-migrations-2mm8x                 0/1     Completed   0          11m   10.233.64.93   kubenode01   <none>           <none>
demo-smtp-84b7b85ff6-7jh74                 1/1     Running     1          19h   10.233.64.77   kubenode01   <none>           <none>
elasticsearch-ephemeral-8545b66bcc-2vk42   1/1     Running     1          19h   10.233.64.79   kubenode01   <none>           <none>
elasticsearch-index-create-wgtxt           0/1     Completed   0          11m   10.233.64.94   kubenode01   <none>           <none>
fake-aws-dynamodb-84f87cd86b-6n7rc         2/2     Running     2          19h   10.233.64.70   kubenode01   <none>           <none>
fake-aws-s3-5c846cb5d8-2sxl6               1/1     Running     27         19h   10.233.64.73   kubenode01   <none>           <none>
fake-aws-s3-reaper-7c6d9cddd6-vxk5g        1/1     Running     1          19h   10.233.64.80   kubenode01   <none>           <none>
fake-aws-sns-5c56774d95-l6mgk              2/2     Running     2          19h   10.233.64.69   kubenode01   <none>           <none>
fake-aws-sqs-554bbc684d-8zv48              2/2     Running     2          19h   10.233.64.71   kubenode01   <none>           <none>
galley-6f5c67db5-9kllh                     1/1     Running     0          11m   10.233.64.98   kubenode01   <none>           <none>
gundeck-79b5d5885c-qs2dh                   1/1     Running     0          11m   10.233.64.99   kubenode01   <none>           <none>
nginz-6564c65c78-t2t22                     0/2     Pending     0          11m   <none>         <none>       <none>           <none>
redis-ephemeral-69bb4885bb-n9jxz           1/1     Running     1          19h   10.233.64.74   kubenode01   <none>           <none>
webapp-b56ccd6b9-8vjzx                     0/1     Pending     0          11m   <none>         <none>       <none>           <none>

@maaaaaaav did you run into anything like this?
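In case it helps narrow things down: pods stuck in Pending on a single-node demo cluster are often unschedulable because the node cannot satisfy their CPU/memory requests (kubectl describe pod cannon-0 shows the scheduler's reason under Events). A hedged sketch of values.yaml overrides that lower the requests; the exact keys are assumptions and need to be checked against each chart's own values.yaml:

# Hypothetical overrides -- verify the key names against the cannon,
# nginz and webapp charts before using.
cannon:
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
nginz:
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
webapp:
  resources:
    requests:
      cpu: 100m
      memory: 256Mi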

nginz upstreams sometimes get a wrong config

Under certain circumstances, I have witnessed this in our logs:

Log from nginz:

Config file update detected
2019/07/09 06:44:50 [emerg] 12745#0: invalid number of arguments in "server" directive in /etc/wire/nginz/upstreams/upstreams.conf:30
nginx: [emerg] invalid number of arguments in "server" directive in /etc/wire/nginz/upstreams/upstreams.conf:30
nginx: configuration file /etc/wire/nginz/conf/nginx.conf test failed
ERROR: New configuration is invalid!!
Config file update detected
nginx: the configuration file /etc/wire/nginz/conf/nginx.conf syntax is ok
nginx: configuration file /etc/wire/nginz/conf/nginx.conf test is successful
New configuration is valid, reloading nginx
2019/07/09 06:44:52 [notice] 12747#0: signal process started
Config file update detected
2019/07/09 06:45:15 [emerg] 12756#0: invalid number of arguments in "server" directive in /etc/wire/nginz/upstreams/upstreams.conf:40
nginx: [emerg] invalid number of arguments in "server" directive in /etc/wire/nginz/upstreams/upstreams.conf:40
nginx: configuration file /etc/wire/nginz/conf/nginx.conf test failed
ERROR: New configuration is invalid!!
Config file update detected
nginx: the configuration file /etc/wire/nginz/conf/nginx.conf syntax is ok
nginx: configuration file /etc/wire/nginz/conf/nginx.conf test is successful
New configuration is valid, reloading nginx
2019/07/09 06:45:17 [notice] 12758#0: signal process started

This is what one of these problematic configs looks like:

upstream brig {
         least_conn;
         keepalive 32;
         server 10.233.X.X:8080 max_fails=3 weight=100;
}
upstream cannon {
         least_conn;
         keepalive 32;
         server 10.233.X.X:8080 max_fails=3 weight=100;
}
upstream cargohold {
         least_conn;
         keepalive 32;
         server 10.233.X.X:8080 max_fails=3 weight=100;
}
upstream galley {
         least_conn;
         keepalive 32;
         server 10.233.X.X:8080 max_fails=3 weight=100;
}
upstream gundeck {
         least_conn;
         keepalive 32;
         server 10.233.X.X:8080 max_fails=3 weight=100;
}
upstream ibis {
         least_conn;
         keepalive 32;
         server localhost:8080 down;
}
upstream proxy {
         least_conn;
         keepalive 32;
         server ;;:8080 max_fails=3 weight=100;
         server expected:8080 max_fails=3 weight=100;
         server opt:8080 max_fails=3 weight=100;
         server record:8080 max_fails=3 weight=100;
         server in:8080 max_fails=3 weight=100;
         server response:8080 max_fails=3 weight=100;
}
upstream spar {
         least_conn;
         keepalive 32;
         server localhost:8080 down;

Not really sure how we end up in such a state, but it's worth recording here since it has been seen multiple times.

Frequent logs "Failed to parse SQS event"

I configured the SNS, SQS, and SES services using terraform for a production deployment. I see the following error in my logs very frequently (approx. 20 per minute):

k logs -f brig-6954f59478-lxbfp

{"logger":"aws.brig","msgs":["E","Failed to parse SQS event: Message' {_mMessageAttributes = Just (Map {toMap = fromList []}), _mMD5OfBody = Just "e4f71afcf60ec43c61ec2977c803bb75", _mBody = Just "{\"notificationType\":\"AmazonSnsSubscriptionSucceeded\",\"message\":\"You have successfully subscribed your Amazon SNS topic 'arn:aws:sns:us-east-1:118123628683:dev-brig-email-events' to receive 'Bounce' notifications from Amazon SES for identity '[email protected]'.\"}\n", _mAttributes = Just (Map {toMap = fromList []}), _mReceiptHandle = Just "AQEBiKEFRZsaPSh2v81+laHDCGFxkdirFWAYHzTDxxYTWV3qCG/rqgSQWn5X4E/KaE3D+NEcn6yDXe8LhbK3ZDr97Q3zXGttuwhwvkzSyGhVwt2RlsAQ/arTEKFj14VwU3h9rOPKxyBt231IwC6Vxd1xlshFx5IcWmx6LC9ALf4CeJOWOSmpla5ECCECei2Lf25Oy1jBAa31LSo0n2/8YFXhsG3h5ArgNG7Pq+Cnq4S2DJFcfEPbABy63aSNkqfHbajHV4VuoKUAMUg2arEP3uN+PtQHPaqYjjoMaf7+pg0lj9CsOlEi/8Iue+85P9YwJnsapnI4zrjJfHVs8T+Sd0ceeAbjj2ae4Gf/tXbV9dm+yLQKbgTPRppNG6VYI2VjEtXZSiFwmty9zPGPICueKGSJOA==", _mMessageId = Just "7f2747ee-7bf8-4a55-b908-b795d37d5636", _mMD5OfMessageAttributes = Nothing}"]}

{"logger":"aws.brig","msgs":["T","[Version 4 Metadata] {\n time = 2020-06-20 13:59:19.658076839 UTC\n endpoint = sqs.us-east-1.amazonaws.com\n credential = AKIARXAFXRCFVHKL3OFL/20200620/us-east-1/sqs/aws4_request\n signed headers = content-type;host;x-amz-content-sha256;x-amz-date\n signature = 5f46c31a16d09fbc40d0fef24f5bf180eca943ff49eb28648f6ac54a8d586d32\n string to sign = {\nAWS4-HMAC-SHA256\n20200620T135919Z\n20200620/us-east-1/sqs/aws4_request\nd28835086c46a843cf8f3d4b7381bc9683471c205dc9a1510628eedc7669980e\n}\n canonical request = {\nPOST\n/\n\ncontent-type:application/x-www-form-urlencoded; charset=utf-8\nhost:sqs.us-east-1.amazonaws.com\nx-amz-content-sha256:34a4d2c18c4af86e1587b99dc40d5acbd7fce10cde0cf0363c05362d298fc6d3\nx-amz-date:20200620T135919Z\n\ncontent-type;host;x-amz-content-sha256;x-amz-date\n34a4d2c18c4af86e1587b99dc40d5acbd7fce10cde0cf0363c05362d298fc6d3\n }\n}"]}

{"logger":"aws.brig","msgs":["T","[Client Request] {\n host = sqs.us-east-1.amazonaws.com:443\n secure = True\n method = POST\n target = Nothing\n timeout = ResponseTimeoutMicro 70000000\n redirects = 0\n path = /\n query = \n headers = host: sqs.us-east-1.amazonaws.com; x-amz-date: 20200620T135919Z; x-amz-content-sha256: 34a4d2c18c4af86e1587b99dc40d5acbd7fce10cde0cf0363c05362d298fc6d3; content-type: application/x-www-form-urlencoded; charset=utf-8; authorization: AWS4-HMAC-SHA256 Credential=AKIARXAFXRCFVHKL3OFL/20200620/us-east-1/sqs/aws4_request, SignedHeaders=content-type;host;x-amz-content-sha256;x-amz-date, Signature=5f46c31a16d09fbc40d0fef24f5bf180eca943ff49eb28648f6ac54a8d586d32\n body = Action=ReceiveMessage&MaxNumberOfMessages=10&QueueUrl=https%3A%2F%2Fsqs.us-east-1.amazonaws.com%2F118123628683%2Fdev-brig-email-events&Version=2012-11-05&WaitTimeSeconds=20\n}"]}

All configuration and IAM policies were created by the terraform deployment. Can you please help me figure out this issue?

error on helm upgrade

Dear devs, I'm getting these errors when trying to run

helm upgrade --install databases-ephemeral wire/databases-ephemeral --wait

UPGRADE FAILED ROLLING BACK Error: "databases-ephemeral" has no deployed releases

same with

helm upgrade --install fake-aws wire/fake-aws --wait

UPGRADE FAILED ROLLING BACK Error: "fake-aws" has no deployed releases Error: UPGRADE FAILED: "fake-aws" has no deployed releases

some debug info

helm upgrade --install fake-aws wire/fake-aws --wait --timeout 600 --debug
[debug] Created tunnel using local port: '33003'

[debug] SERVER: "127.0.0.1:33003"

[debug] Fetched wire/fake-aws to /root/.helm/cache/archive/fake-aws-0.74.0.tgz

UPGRADE FAILED
ROLLING BACK

Thanks

brig pod errors: tls_process_server_certificate:certificate verify failed

Hi,

I am getting the errors below from brig, for AWS and for login with a phone number.

{"error":"GeneralError (TransportError (HttpExceptionRequest Request
{\n host = \"sqs.us-east-1.amazonaws.com\"\n
port = 443\n
secure = True\n
requestHeaders = [(\"Host\",\"sqs.us-east-1.amazonaws.com\"),(\"X-Amz-Date\",\"20200430T013331Z\"),(\"X-Amz-Content-SHA256\",\"1bcb22493cf3e61a8a93aaf06da833295b32d77fb7ccb06cfa3bbd8cc04968e3\"),(\"Content-Type\",\"application/x-www-form-urlencoded; charset=utf-8\"),(\"Authorization\",\"<REDACTED>\")]\n path = \"/\"\n
queryString = \"\"\n
method = \"POST\"\n
proxy = Nothing\n
rawBody = False\n
redirectCount = 0\n
responseTimeout = ResponseTimeoutMicro 70000000\n
requestVersion = HTTP/1.1\n}\n (InternalException ProtocolError \"error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed\")))","logger":"aws.brig","msgs":["E","Failed to read from SQS"]}

{\n
host = \"lookups.twilio.com\"\n
port = 443\n
secure = True\n
requestHeaders = [(\"Authorization\",\"<REDACTED>\")]\n
path = \"/v1/PhoneNumbers/+1xxxxxxxxxx\"\n
queryString = \"\"\n method = \"GET\"\n proxy = Nothing\n rawBody = False\n redirectCount = 10\n responseTimeout = ResponseTimeoutDefault\n requestVersion = HTTP/1.1\n}\n (InternalException ProtocolError \"error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed\")"]}

Please help me resolve these errors.
