
tyk-kubernetes's Introduction

Tyk + Kubernetes integration

This project is deprecated and our Tyk Operator and Helm charts should be used instead.

tyk-kubernetes's People

Contributors

asoorm, buger, christtyk, davegarvey, excieve, hustshawn, joshblakeley, matiasinsaurralde


tyk-kubernetes's Issues

error creating redis masters

When running the following command as described in the README:

$ redis-trib.py create `dig +short redis-1.redis.svc.cluster.local`:6379 \
    `dig +short redis-2.redis.svc.cluster.local`:6379 \
    `dig +short redis-3.redis.svc.cluster.local`:6379

I get this error:

Traceback (most recent call last):
  File "/usr/local/bin/redis-trib.py", line 7, in <module>
    from redistrib.console import main
  File "/usr/local/lib/python2.7/dist-packages/redistrib/console.py", line 67
    command.shutdown_cluster(*_parse_host_port(addr), ignore_failed)
SyntaxError: only named arguments may follow *expression

I was able to make this error (and a subsequent one) go away by making the following changes to
/usr/local/lib/python2.7/dist-packages/redistrib/console.py:

67c67
<     command.shutdown_cluster(*_parse_host_port(addr), ignore_failed)
---
>     command.shutdown_cluster(*_parse_host_port(addr), ignore_failed=ignore_failed)
90c90
<     command.rescue_cluster(host, port, *_parse_host_port(new_addr), max_slots)
---
>     command.rescue_cluster(host, port, *_parse_host_port(new_addr), max_slots=max_slots)
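The `SyntaxError` comes from a Python 2 restriction: a positional argument may not follow a `*expression` in a call, but a keyword argument may, which is exactly what the diff above changes. A minimal stand-alone demonstration (the function here is a stand-in for `redistrib.command.shutdown_cluster`, its signature assumed for illustration):

```python
def shutdown_cluster(host, port, ignore_failed=False):
    # Stand-in for redistrib.command.shutdown_cluster (signature assumed).
    return host, port, ignore_failed

addr = ("10.110.136.80", 6379)

# Python 2 rejects `shutdown_cluster(*addr, True)` with
# "SyntaxError: only named arguments may follow *expression".
# Passing the flag by keyword is accepted by both Python 2 and 3:
print(shutdown_cluster(*addr, ignore_failed=True))
```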

However, when running redis-trib.py again, I now get this:

Redis-trib 0.6.0 Copyright (c) HunanTV Platform developers
Traceback (most recent call last):
  File "/usr/local/bin/redis-trib.py", line 11, in <module>
    sys.exit(main())
  File "/usr/local/lib/python2.7/dist-packages/redistrib/console.py", line 177, in main
    cli()
  File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/redistrib/console.py", line 28, in create
    command.create([_parse_host_port(hp) for hp in addrs], max_slots)
  File "/usr/local/lib/python2.7/dist-packages/redistrib/command.py", line 91, in create
    t = Connection(host, port)
  File "/usr/local/lib/python2.7/dist-packages/redistrib/connection.py", line 83, in __init__
    self._conn()
  File "/usr/local/lib/python2.7/dist-packages/redistrib/connection.py", line 68, in g
    raise RedisIOError(e, conn.host, conn.port)
redistrib.exceptions.RedisIOError: 10.110.136.80:6379 - [Errno 111] Connection refused
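The `Connection refused` on the last line suggests nothing is listening on that address yet (e.g. the redis pods are not ready, or the DNS names resolved to stale IPs). A quick reachability check, independent of redis-trib (the IP is the one from the traceback; substitute your own pod addresses):

```python
import socket

def port_open(host, port, timeout=2.0):
    # Returns True if a TCP connection to host:port succeeds, else False.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# IP taken from the traceback above; replace with each redis pod's address.
print(port_open("10.110.136.80", 6379))
```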

Any ideas?
Thanks...

"tyk-gateway" ReadWriteOnce volume prevents scaling of gateway nodes

The tyk-gateway volume is created as a 10GB gcePersistentDisk and mounted as a volume in the gateway container at /apps.

Because it's mounted as a ReadWriteOnce volume, the tyk-gateway deployment can't be scaled to more than one replica:

The Deployment "tyk-gateway" is invalid: spec.template.spec.volumes[0].gcePersistentDisk.readOnly: Invalid value: false: must be true for replicated pods > 1; GCE PD can only be mounted on multiple machines if it is read-only

What is the purpose of this volume? I can only see a single app_sample.json file on the volume in my setup. Can it be mounted as ReadOnlyMany to allow multiple nodes to access it, or is a StatefulSet perhaps more appropriate?

Deployments are designed for stateless applications and therefore all replicas of a Deployment share the same Persistent Volume Claim. Since the replica Pods created will be identical to each other, only Volumes with modes ReadOnlyMany or ReadWriteMany can work in this setting.

Even Deployments with one replica using a ReadWriteOnce Volume are not recommended.

...
StatefulSets are the recommended method of deploying stateful applications that require a unique volume per replica. By using StatefulSets with Persistent Volume Claim Templates you can have applications that can scale up automatically with unique Persistent Volume Claims associated to each replica Pod.

https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#deployments_vs_statefulsets
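If each replica really does need its own writable copy of /apps, the GKE guidance quoted above points at a StatefulSet with volume claim templates. A minimal sketch, assuming the image and mount path used in this repo (names, replica count, and size are illustrative; the config volume is omitted):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: tyk-gateway
  namespace: tyk
spec:
  serviceName: tyk-gateway
  replicas: 3
  selector:
    matchLabels:
      app: tyk-gateway
  template:
    metadata:
      labels:
        app: tyk-gateway
    spec:
      containers:
      - name: tyk-gateway
        image: tykio/tyk-gateway:latest
        volumeMounts:
        - name: apps
          mountPath: /apps
  # Each replica gets its own PersistentVolumeClaim,
  # so ReadWriteOnce access is sufficient.
  volumeClaimTemplates:
  - metadata:
      name: apps
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```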

Container command arguments incorrect for tyk-pump

The current tyk-pump.yaml causes the pump to crash and the following error is logged:

➜  tyk-kubernetes git:(master) ✗ kubectl logs deployment/tyk-pump --namespace tyk
tyk-pump: error: unknown long flag '--c', try --help

Changing the flag in tyk-pump.yaml to a single dash:

command: ["/opt/tyk-pump/tyk-pump", "-c=/etc/tyk-pump/pump.conf"]

causes another error:

➜  tyk-kubernetes git:(master) ✗ kubectl logs deployment/tyk-pump --namespace tyk
time="Oct 23 13:31:43" level=info msg="## Tyk Analytics Pump, v0.5.4 ##"
time="Oct 23 13:31:43" level=fatal msg="Couldn't load configuration file: open =/etc/tyk-pump/pump.conf: no such file or directory"

The fix is to pass the config file location as a separate args entry rather than a single `-c=...` token:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tyk-pump
  namespace: tyk
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: tyk-pump
    spec:
      containers:
      - image: tykio/tyk-pump-docker-pub:latest
        imagePullPolicy: Always
        name: tyk-pump
        workingDir: "/opt/tyk-pump"
        env:
          - name: REDIGOCLUSTER_SHARDCOUNT
            value: "128"
        command: ["/opt/tyk-pump/tyk-pump"]
        args: ["-c", "/etc/tyk-pump/pump.conf"]
        volumeMounts:
          - name: tyk-pump-conf
            mountPath: /etc/tyk-pump
      volumes:
        - name: tyk-pump-conf
          configMap:
            name: tyk-pump-conf
            items:
              - key: pump.conf
                path: pump.conf

Configure Tyk Identity Broker

How can I configure Tyk Identity Broker in the Kubernetes Tyk deployment? Is there a deployment manifest for it? Are you planning to dockerize that component?

Portal Not Found

I had to configure the CNAME with a reverse proxy to do it. I use Kubernetes with a NodePort configuration, so it's impossible to reserve port 3000 or 80. I modified bootstrap.sh to point to port 32000; the dashboard and gateway are working fine, but the portal gives us this message. Why might the portal return "Portal Not Found"?
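For context, the dashboard resolves the portal by the Host header of the incoming request, so "Portal Not Found" typically means no configured portal domain matched. A hedged sketch of the relevant tyk_analytics.conf section (the domain and org id are placeholders to substitute):

```json
{
  "host_config": {
    "enable_host_names": true,
    "portal_domains": {
      "portal.example.com": "<your-org-id>"
    },
    "portal_root_path": "/portal"
  }
}
```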

Tyk dashboard and gateway "No nodes available"

Getting level=error msg="No nodes available" for both dashboard and gateway.

Note:
I have Redis running in standalone mode.

Tyk analytics config:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tykanalytics-conf
  namespace: default
data:
  tyk_analytics.conf: |-
    {
      "listen_port": 3000,
      "tyk_api_config": {
        "Host": "http://tyk-gateway.default.svc.kubernetes.local",
        "Port": "80",
        "Secret": "352d20ee67be67f6340b4c0605b044b7"
      },
      "mongo_url": "mongodb://mongodb.default.svc.kubernetes.local:27017/tyk_analytics",
      "license_key": "",
      "page_size": 10,
      "admin_secret": "12345",
      "shared_node_secret": "352d20ee67be67f6340b4c0605b044b7",
      "redis_port": 6379,
      "redis_host": "redis.default.svc.kubernetes.local",
      "redis_password": "",
      "enable_cluster": false,
      "force_api_defaults": false,
      "notify_on_change": true,
      "redis_database": 0,
      "hash_keys": true,
      "email_backend": {
        "enable_email_notifications": false,
        "code": "",
        "settings": null,
        "default_from_email": "",
        "default_from_name": ""
      },
      "hide_listen_path": false,
      "sentry_code": "",
      "sentry_js_code": "",
      "use_sentry": false,
      "enable_master_keys": false,
      "enable_duplicate_slugs": true,
      "show_org_id": true,
      "host_config": {
        "enable_host_names": false,
        "disable_org_slug_prefix": true,
        "hostname": "",
        "override_hostname": "www.tyk-portal-test.com",
        "portal_domains": {},
        "portal_root_path": "/portal"
      },
      "http_server_options": {
        "use_ssl": false,
        "certificates": [
          {
            "domain_name": "",
            "cert_file": "",
            "key_file": ""
          }
        ],
        "min_version": 0
      },
      "ui": {
        "login_page": {},
        "nav": {},
        "uptime": {},
        "portal_section": null,
        "designer": {},
        "dont_show_admin_sockets": false,
        "dont_allow_license_management": false,
        "dont_allow_license_management_view": false
      },
      "home_dir": "/opt/tyk-dashboard",
      "identity_broker": {
        "enabled": false,
        "host": {
          "connection_string": "",
          "secret": ""
        }
      },
      "tagging_options": {
        "tag_all_apis_by_org": false
      }
    }

Tyk gateway config:

apiVersion: v1
kind: ReplicationController
metadata:
  name: tyk-gateway
  namespace: default
spec:
  replicas: 1
  # selector identifies the set of Pods that this
  # replication controller is responsible for managing
  selector:
    app: tyk-gateway
  template:
    metadata:
      labels:
        # Important: these labels need to match the selector above.
        # The API server enforces this constraint.
        app: tyk-gateway
    spec:
      containers:
      - name: tyk-gateway
        image: "{{ tyk_gateway_image }}"
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        env:
        - name: REDIGOCLUSTER_SHARDCOUNT
          value: "128"
        command: ["/opt/tyk-gateway/tyk", "--conf=/etc/tyk-gateway/tyk.conf"]
        workingDir: /opt/tyk-gateway
        ports:
        - containerPort: 8080
          hostPort: 9093
        volumeMounts:
        - mountPath: /etc/tyk-gateway
          name: gateway-conf
      imagePullSecrets:
      - name: docker_repo
      volumes:
      - name: gateway-conf
        configMap:
          name: tykgateway-conf
---
kind: Service
apiVersion: v1
metadata:
  name: tyk-gateway
  namespace: {{ kube_ns }}
  labels:
    app: tyk-gateway
spec:
  ports:
  - name: primary-port
    port: 80
    targetPort: 8080
  - name: secondary-port
    port: 8081
    targetPort: 8080
  selector:
    app: tyk-gateway

Any ideas?
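"No nodes available" on the dashboard side usually means no gateway has registered against it. In a Pro setup the gateway's tyk.conf must point at the dashboard and carry a node secret matching the dashboard's shared_node_secret. A hedged sketch of the relevant keys (key names from Tyk's gateway configuration; hostnames and the secret are taken from the config above and should be verified against your deployment):

```json
{
  "use_db_app_configs": true,
  "db_app_conf_options": {
    "connection_string": "http://tyk-dashboard.default.svc.kubernetes.local:3000",
    "node_is_segmented": false
  },
  "node_secret": "352d20ee67be67f6340b4c0605b044b7",
  "policies": {
    "policy_source": "service",
    "policy_connection_string": "http://tyk-dashboard.default.svc.kubernetes.local:3000"
  }
}
```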

Tyk-dashboard not connecting properly

I'm just following the README, which works perfectly until testing the LoadBalancer service.
I'm getting an External-IP, but when I try to open the dashboard UI, it just fails to connect.

Any idea what I should do? Below is what I investigated so far.

The latest log in dashboard pod is

time="Dec 22 08:28:02" level=info msg="Using /etc/tyk-dashboard/tyk_analytics.conf for configuration" 

As for tyk_analytics.conf, I only changed the license_key to the key I received in the email and didn't touch anything else.

I tried to debug using nmap; the port seems to be closed.

3000/tcp closed ppp

Describing the tyk-dashboard service, I can see port 3000 specified.

Name:                     tyk-dashboard
Namespace:                tyk
Labels:                   <none>
Annotations:              kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"tyk-dashboard","namespace":"tyk"},"spec":{"ports":[{"port":3000,"protocol":"TC...
Selector:                 app=tyk-dashboard
Type:                     LoadBalancer
IP:                       10.55.246.138
LoadBalancer Ingress:     35.198.236.135
Port:                     <unset>  3000/TCP
TargetPort:               3000/TCP
NodePort:                 <unset>  30774/TCP
Endpoints:                10.52.0.19:3000
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age   From                Message
  ----    ------                ----  ----                -------
  Normal  EnsuringLoadBalancer  47m   service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   47m   service-controller  Ensured load balancer

Pod also specified port 3000.

Name:           tyk-dashboard-5b984dfd57-qclzn
Namespace:      tyk
Node:           gke-linkerd-default-pool-c98ee3ca-hk37/10.148.0.4
Start Time:     Fri, 22 Dec 2017 15:26:28 +0700
Labels:         app=tyk-dashboard
                pod-template-hash=1654089813
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"tyk","name":"tyk-dashboard-5b984dfd57","uid":"ce82de40-e6f1-11e7-bb36-42010a94006...
Status:         Running
IP:             10.52.0.19
Created By:     ReplicaSet/tyk-dashboard-5b984dfd57
Controlled By:  ReplicaSet/tyk-dashboard-5b984dfd57
Containers:
  tyk-dashboard:
    Container ID:  docker://8e6048699e54457d793f77e77f887d769296bf94872f3228f9d59df1b5891a60
    Image:         tykio/tyk-dashboard:latest
    Image ID:      docker-pullable://tykio/tyk-dashboard@sha256:4291408dff57abc005aeb8e2a1787c073af9030aee4a4f8ae52f2a307cc4caf7
    Port:          3000/TCP
    Command:
      /opt/tyk-dashboard/tyk-analytics
      --conf=/etc/tyk-dashboard/tyk_analytics.conf
    State:          Running
      Started:      Fri, 22 Dec 2017 15:28:02 +0700
    Ready:          True
    Restart Count:  0
    Environment:
      REDIGOCLUSTER_SHARDCOUNT:  128
    Mounts:
      /etc/tyk-dashboard from tyk-dashboard-conf (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-s5sh6 (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  tyk-dashboard-volume:
    Type:       GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
    PDName:     tyk-dashboard
    FSType:     ext4
    Partition:  0
    ReadOnly:   false
  tyk-dashboard-conf:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      tyk-dashboard-conf
    Optional:  false
  default-token-s5sh6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-s5sh6
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

Configure developer portal

Hello,
How can I configure the developer portal in the Kubernetes Tyk deployment?
Which portal domain should be used for that?
Do I need a CNAME? I'm working with IPs and NodePort in Kubernetes, so I don't have any CNAME available.

Thanks in advance

Hi, I got stuck at the "kubectl create -f namespaces"

I created a new project in my GCP account, opened Cloud Shell, and used these commands:
$ cd ~
$ git clone https://github.com/TykTechnologies/tyk-kubernetes.git
$ cd tyk-kubernetes

and hit a roadblock with this command:

$ kubectl create -f namespaces

jiachengg73@cloudshell:~/tyk-kubernetes/redis (tyk-in-the-cloud)$ kubectl create -f namespaces/redis.yaml
The connection to the server localhost:8080 was refused - did you specify the right host or port?
jiachengg73@cloudshell:~/tyk-kubernetes/redis (tyk-in-the-cloud)$

thanks in advance

Configure port 5000

Hello,
I'm seeing these logs in the dashboard (when I press F12 to open the console).

vendors.5d3412e608d096c6bd82.js:424 GET http://MY_IP:5000/socket.io/?chan=ui_notifications.5a981b863797ec0001f393f9&EIO=3&transport=polling&t=M9DG-F4 net::ERR_CONNECTION_REFUSED

I don't see anywhere to change the 5000 port or expose it through NodePort.
Is it configurable, or can I at least disable it, and what consequences would that have for my Tyk deployment? Best regards
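The socket.io traffic comes from the dashboard's notification server, which listens on a port separate from the UI. In tyk_analytics.conf this appears to be governed by notifications_listen_port (key name hedged; verify against your dashboard version's configuration reference):

```json
{
  "notifications_listen_port": 5000
}
```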

Update for Mongo

Starting mongo with --smallfiles seems to be deprecated for MongoDB 4.x.x.
