
kubernetai's Introduction

CoreDNS


CoreDNS is a DNS server/forwarder, written in Go, that chains plugins. Each plugin performs a (DNS) function.

CoreDNS is a Cloud Native Computing Foundation graduated project.

CoreDNS is a fast and flexible DNS server. The key word here is flexible: with CoreDNS you are able to do what you want with your DNS data by utilizing plugins. If some functionality is not provided out of the box you can add it by writing a plugin.

CoreDNS can listen for DNS requests coming in over:

  • UDP/TCP (plain old DNS)
  • TLS (DoT, RFC 7858)
  • DNS over HTTP/2 (DoH, RFC 8484)
  • DNS over QUIC (DoQ, RFC 9250)
  • gRPC (not a standard)

Currently CoreDNS is able to:

  • Serve zone data from a file; both DNSSEC (NSEC only) and DNS are supported (file and auto).
  • Retrieve zone data from primaries, i.e., act as a secondary server (AXFR only) (secondary).
  • Sign zone data on-the-fly (dnssec).
  • Load balance responses (loadbalance).
  • Allow for zone transfers, i.e., act as a primary server (file + transfer).
  • Automatically load zone files from disk (auto).
  • Cache DNS responses (cache).
  • Use etcd as a backend (replacing SkyDNS) (etcd).
  • Use k8s (kubernetes) as a backend (kubernetes).
  • Serve as a proxy to forward queries to some other (recursive) nameserver (forward).
  • Provide metrics (by using Prometheus) (prometheus).
  • Provide query (log) and error (errors) logging.
  • Integrate with cloud providers (route53).
  • Support the CH class: version.bind and friends (chaos).
  • Support the RFC 5001 DNS name server identifier (NSID) option (nsid).
  • Provide profiling support (pprof).
  • Rewrite queries (qtype, qclass and qname) (rewrite and template).
  • Block ANY queries (any).
  • Provide DNS64 IPv6 Translation (dns64).

And more. Each of the plugins is documented. See coredns.io/plugins for all in-tree plugins, and coredns.io/explugins for all out-of-tree plugins.
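As a sketch of how several of the plugins listed above combine in a single server block (the upstream address is illustrative), a Corefile enabling logging, metrics, caching and forwarding could look like:

.:53 {
    errors
    log
    prometheus :9153
    cache 30
    forward . 8.8.8.8:53
}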

Compilation from Source

To compile CoreDNS, we assume you have a working Go setup. See the various tutorials available if you don’t already have one configured.

First, make sure your Go version is 1.21 or higher, since Go module support and other recent APIs are required. See the Go modules documentation for details. Then, check out the project and run make to compile the binary:

$ git clone https://github.com/coredns/coredns
$ cd coredns
$ make

This should yield a coredns binary.

Compilation with Docker

CoreDNS requires Go to compile. However, if you already have Docker installed and prefer not to set up a Go environment, you can easily build CoreDNS with:

docker run --rm -i -t \
    -v $PWD:/go/src/github.com/coredns/coredns -w /go/src/github.com/coredns/coredns \
        golang:1.21 sh -c 'GOFLAGS="-buildvcs=false" make gen && GOFLAGS="-buildvcs=false" make'

The above command produces a coredns binary in the working directory.

Examples

When starting CoreDNS without any configuration, it loads the whoami and log plugins and starts listening on port 53 (override with -dns.port). It should show the following:

.:53
CoreDNS-1.6.6
linux/amd64, go1.16.10, aa8c32

You can query the now-running CoreDNS server with:

dig @127.0.0.1 -p 53 www.example.com

Any query sent to port 53 should return some information about the client: the source address, port, and protocol used. Each query should also be logged to standard output.

The configuration of CoreDNS is done through a file named Corefile. When CoreDNS starts, it looks for the Corefile in the current working directory. A Corefile for a CoreDNS server that listens on port 53 and enables the whoami plugin is:

.:53 {
    whoami
}

Sometimes port 53 is already occupied by a system process. In that case, modify the Corefile as shown below so that CoreDNS listens on port 1053 instead.

.:1053 {
    whoami
}

If you have a Corefile without a port number specified, it will use port 53 by default, but you can override the port with the -dns.port flag: coredns -dns.port 1053 runs the server on port 1053.

You may import other text files into the Corefile using the import directive. You can use globs to match multiple files with a single import directive.

.:53 {
    import example1.txt
}
import example2.txt

You can use environment variables in the Corefile with {$VARIABLE}. Note that each environment variable is inserted into the Corefile as a single token. For example, an environment variable with a space in it will be treated as a single token, not as two separate tokens.

.:53 {
    {$ENV_VAR}
}

A Corefile for a CoreDNS server that forwards all queries to an upstream DNS server (e.g., 8.8.8.8) is as follows:

.:53 {
    forward . 8.8.8.8:53
    log
}

Start CoreDNS and then query it on that port (53). The query should be forwarded to 8.8.8.8 and the response returned. Each query should also show up in the log, which is printed to standard output.
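To reduce load on the upstream resolver, the same forwarding setup can be combined with the cache plugin; a sketch (the 30-second cache TTL here is an arbitrary choice):

.:53 {
    cache 30
    forward . 8.8.8.8:53
    log
}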

The following serves the DNSSEC-signed (NSEC) example.org on port 1053, with errors and logging sent to standard output. It allows zone transfers to everybody, but explicitly lists one IP address so that CoreDNS can send notifies to it.

example.org:1053 {
    file /var/lib/coredns/example.org.signed
    transfer {
        to * 2001:500:8f::53
    }
    errors
    log
}

Serve example.org on port 1053, but forward everything that does not match example.org to a recursive nameserver, and rewrite ANY queries to HINFO:

example.org:1053 {
    file /var/lib/coredns/example.org.signed
    transfer {
        to * 2001:500:8f::53
    }
    errors
    log
}

. {
    any
    forward . 8.8.8.8:53
    errors
    log
}

IP addresses are also allowed. They are automatically converted to reverse zones:

10.0.0.0/24 {
    whoami
}

This means you are authoritative for 0.0.10.in-addr.arpa.

This also works for IPv6 addresses. If for some reason you want to serve a zone literally named 10.0.0.0/24, add the closing dot (10.0.0.0/24.), which stops the conversion.

This even works for non-octet-aligned CIDR notation (see RFC 1518 and RFC 1519), e.g. 10.0.0.0/25; CoreDNS will then check whether the in-addr.arpa request falls within that range.
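For example, the following sketch makes CoreDNS authoritative only for the reverse names of the lower half of 10.0.0.0/24, i.e. 10.0.0.0 through 10.0.0.127:

10.0.0.0/25 {
    whoami
}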

Listening on TLS (DoT) and for gRPC? Use:

tls://example.org grpc://example.org {
    whoami
}

Similarly, for QUIC (DoQ):

quic://example.org {
    whoami
    tls mycert mykey
}

And for DNS over HTTP/2 (DoH) use:

https://example.org {
    whoami
    tls mycert mykey
}

In this setup, CoreDNS is responsible for TLS termination.

You can also start a DNS server serving DoH without TLS termination (plain HTTP), but beware that in such a scenario some kind of TLS-terminating proxy must sit in front of the CoreDNS instance and forward the DNS requests to it; otherwise clients will not be able to communicate with the server via DoH.

https://example.org {
    whoami
}

Specifying ports works in the same way:

grpc://example.org:1443 https://example.org:1444 {
    # ...
}

When no transport protocol is specified the default dns:// is assumed.
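In other words, the following two server blocks are equivalent (a minimal illustration):

example.org {
    whoami
}

dns://example.org {
    whoami
}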

Community

We're most active on GitHub (and Slack):

More resources can be found:

Contribution guidelines

If you want to contribute to CoreDNS, be sure to review the contribution guidelines.

Deployment

Examples for deployment via systemd and other use cases can be found in the deployment repository.

Deprecation Policy

When there is a backwards incompatible change in CoreDNS the following process is followed:

  • Release x.y.z: Announce that in the next release we will make backward incompatible changes.
  • Release x.y+1.0: Increase the minor version and set the patch version to 0. Make the changes, but allow the old configuration to be parsed; i.e., CoreDNS will still start from an unchanged Corefile.
  • Release x.y+1.1: Increase the patch version to 1. Remove the lenient parsing, so CoreDNS will not start if those features are still used.

For example: 1.3.1 announces a change; 1.4.0 is a new release with the change but backward-compatible configuration; and finally 1.4.1 removes the configuration workarounds.

Security

Security Audits

Third party security audits have been performed by:

Reporting security vulnerabilities

If you find a security vulnerability or any security-related issue, please DO NOT file a public issue; instead, send your report privately to [email protected]. Security reports are greatly appreciated, and we will publicly thank you for them.

Please consult the security vulnerability disclosures and the security fix and release process document.

kubernetai's People

Contributors

chrisohaver, miekg, nyodas, rajansandeep, t0rr3sp3dr0


kubernetai's Issues

why kubernetai?

Hi, so far I can't see why I should use kubernetai rather than forward. If I just want to resolve names from another Kubernetes cluster, I can simply configure my cluster's CoreDNS to forward to the CoreDNS in the other cluster, whereas with kubernetai I also need to connect to that cluster's kube-apiserver. Is kubernetai mainly useful for the scenario where only one of several clusters runs CoreDNS?

Can you please expand the README a little?

I have two k8s clusters in the same AWS VPC, using the AWS VPC CNI plugin, and I want the service discovery piece, but it's not clear from the docs whether I have to enable this on both clusters, or only on the one that needs to discover resources in the other.

Also, do the HTTP endpoints given in your sample config refer to the IPs of the k8s API servers?

Option to serve external IPs

What do you all think about the idea of serving up the load balancer VIP instead of the ClusterIP, at least if some option is set?

So, the config for the connection to the local API server would still serve up clusterIPs, but the configs for the other clusters would have some bit to tell it to serve up the load balancer VIP. Of course it doesn't help for pure NodePort or ClusterIP services, but would allow cross-cluster discovery for LB services.

This might be a bad idea. Another option is to somehow combine the k8s_external and kubernetai to do a similar thing, either directly or through zone transfer or something. Anyway, just a thought to get some discussion going.

Cannot compile with coredns v1.2.5

I tried to compile with the latest coredns version (v1.2.5) and got the following error:

# github.com/coredns/kubernetai/plugin/kubernetai
../kubernetai/plugin/kubernetai/kubernetai.go:70:9: assignment mismatch: 2 variables but 1 values

The coredns PR that introduces the change is: coredns/coredns#2225

coredns build failed with kubernetai

I tried using the master/v1.8.0 versions of coredns/coredns with kubernetai enabled; make failed with both.
Output of make for master and v1.8.0:

# go.etcd.io/etcd/clientv3/balancer/picker
/data/pkg/mod/go.etcd.io/[email protected]/clientv3/balancer/picker/err.go:25:9: cannot use &errPicker literal (type *errPicker) as type Picker in return argument:
	*errPicker does not implement Picker (wrong type for Pick method)
		have Pick(context.Context, balancer.PickInfo) (balancer.SubConn, func(balancer.DoneInfo), error)
		want Pick(balancer.PickInfo) (balancer.PickResult, error)
/data/pkg/mod/go.etcd.io/[email protected]/clientv3/balancer/picker/roundrobin_balanced.go:33:9: cannot use &rrBalanced literal (type *rrBalanced) as type Picker in return argument:
	*rrBalanced does not implement Picker (wrong type for Pick method)
		have Pick(context.Context, balancer.PickInfo) (balancer.SubConn, func(balancer.DoneInfo), error)
		want Pick(balancer.PickInfo) (balancer.PickResult, error)
# github.com/coredns/kubernetai/plugin/kubernetai
/data/pkg/mod/github.com/coredns/[email protected]/plugin/kubernetai/setup.go:28:24: assignment mismatch: 3 variables but k.InitKubeCache returns 1 values

Extend documentation for new users

Hi,
I'd like to try kubernetai to mirror some services from a remote cluster (cluster-2) to another one (cluster-1) on AKS.

But I can't work out the steps and the implications from the documentation here.

Of course, for this to work, you will also need to ensure that Service Endpoint IPs are routable across the clusters, which is possible to do, but not necessarily the case by default.

Could you add some commands to verify this?

After cloning the project and running make, what should we do next?

Is there a way to back up the current CoreDNS before deploying this?

Thanks!

Problem with compiling older versions 1.6.6, 1.7.0

Hello, I'm having problems compiling coredns+kubernetai for versions 1.6.6 and 1.7.0.

1.8 builds fine; for coredns 1.6.6 I get:

# github.com/coredns/coredns/plugin/trace
plugin/trace/trace.go:68:20: undefined: zipkintracer.NewHTTPCollector
plugin/trace/trace.go:73:14: undefined: zipkintracer.NewRecorder
plugin/trace/trace.go:74:18: undefined: zipkintracer.NewTracer
plugin/trace/trace.go:74:45: undefined: zipkintracer.ClientServerSameSpan
# github.com/coredns/coredns/plugin/kubernetes/object
plugin/kubernetes/object/object.go:92:36: undefined: "k8s.io/apimachinery/pkg/apis/meta/v1".Initializers
plugin/kubernetes/object/object.go:95:47: undefined: "k8s.io/apimachinery/pkg/apis/meta/v1".Initializers

for coredns 1.7.0:

plugin/trace/trace.go:66:20: undefined: zipkintracer.NewHTTPCollector
plugin/trace/trace.go:71:14: undefined: zipkintracer.NewRecorder
plugin/trace/trace.go:72:18: undefined: zipkintracer.NewTracer
plugin/trace/trace.go:72:45: undefined: zipkintracer.ClientServerSameSpan
# github.com/coredns/kubernetai/plugin/kubernetai
/go/pkg/mod/github.com/coredns/[email protected]/plugin/kubernetai/setup.go:30:22: cannot use c (type *"github.com/coredns/caddy".Controller) as type *"github.com/caddyserver/caddy".Controller in argument to k.RegisterKubeCache
/go/pkg/mod/github.com/coredns/[email protected]/plugin/kubernetai/setup.go:33:21: cannot use c (type *"github.com/coredns/caddy".Controller) as type *"github.com/caddyserver/caddy".Controller in argument to dnsserver.GetConfig
/go/pkg/mod/github.com/coredns/[email protected]/plugin/kubernetai/setup.go:50:36: cannot use c (type *"github.com/coredns/caddy".Controller) as type *"github.com/caddyserver/caddy".Controller in argument to kubernetes.ParseStanza

Images are built as follows:

FROM golang:alpine AS builder

ARG COREDNS_VERSION=v1.7.0

ENV GO111MODULE=on \
    CGO_ENABLED=0 \
    GOOS=linux \
    GOARCH=amd64

RUN apk update && apk add --no-cache git 

WORKDIR $GOPATH/src/github.com/coredns

RUN git clone --depth 1 --branch $COREDNS_VERSION https://github.com/coredns/coredns.git && \
    cd coredns && \
    go get github.com/coredns/kubernetai && \
    sed -i 's/kubernetes:kubernetes/kubernetes:kubernetes\nkubernetai:github.com\/coredns\/kubernetai\/plugin\/kubernetai/' plugin.cfg && \
    go generate && \
    go build -o /go/bin

If I replace go get github.com/coredns/kubernetai with go get github.com/coredns/kubernetai@196a693161c7230a8c4a29bb7c4db1838311547f (the "bump to coredns" 1.6.5 commit), then 1.6.6 seems to build fine (though this pairs the kubernetai version for coredns 1.6.5 with coredns 1.6.6 itself, and I'm not sure whether that is correct).

for 1.7.0 I get:

/go/pkg/mod/github.com/coredns/[email protected]/plugin/kubernetai/setup.go:24:24: not enough arguments in call to k.InitKubeCache
        have ()
        want (context.Context)

Since AWS recommends using coredns 1.6.6 and 1.7.0 for cluster versions 1.17 and 1.18 respectively, I would like to be able to do so.

Could you please suggest the correct way to build older versions 1.6.6 and 1.7.0?

Remote k8s dns entries are failing intermittently (Host not found: 3(NXDOMAIN))

I have kubernetai running in one of my k8s clusters, pulling the DNS entries from another k8s cluster. Everything works fine for some time, but it starts failing later in the day. Everything then starts working again when we delete the coredns pods and let the coredns deployment spawn new ones.

These two k8s clusters are in separate aws subnets in the same vpc.

cb-test1 and nginxd are running in remote k8s cluster.

This is on k8s cluster running kubernetai:
You can see that remote entries are not available.
As the next step I will delete the coredns pods.

$ k run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools
If you don't see a command prompt, try pressing enter.
dnstools# 
dnstools# host cb-test1
Host cb-test1 not found: 3(NXDOMAIN)
dnstools# 
dnstools# host nginxd
Host nginxd not found: 3(NXDOMAIN)
dnstools# 
dnstools# host google.com
google.com has address 172.217.4.174
google.com has IPv6 address 2607:f8b0:4007:801::200e
google.com mail is handled by 40 alt3.aspmx.l.google.com.
google.com mail is handled by 20 alt1.aspmx.l.google.com.
google.com mail is handled by 30 alt2.aspmx.l.google.com.
google.com mail is handled by 50 alt4.aspmx.l.google.com.
google.com mail is handled by 10 aspmx.l.google.com.
dnstools# 
dnstools# host kubernetes
kubernetes.default.svc.cluster.local has address 10.233.0.1
dnstools# 

Note: alias ks='kubectl -n kube-system'

$ ks get pods -l k8s-app=kube-dns
NAME                       READY   STATUS    RESTARTS   AGE
coredns-5d88d59d69-7hjrr   1/1     Running   0          9h
coredns-5d88d59d69-s8b48   1/1     Running   0          9h

$ ks delete pods coredns-5d88d59d69-7hjrr coredns-5d88d59d69-s8b48
pod "coredns-5d88d59d69-7hjrr" deleted
pod "coredns-5d88d59d69-s8b48" deleted

$ ks get pods -l k8s-app=kube-dns
NAME                       READY   STATUS    RESTARTS   AGE
coredns-5d88d59d69-pjcrx   1/1     Running   0          35s
coredns-5d88d59d69-sf6nt   1/1     Running   0          35s

And now the remote entries start showing up again (with no other change in local and remote k8s clusters)

$ k run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools
If you don't see a command prompt, try pressing enter.
dnstools# 
dnstools# host cb-test1
Host cb-test1 not found: 3(NXDOMAIN)
dnstools# 
dnstools# 
dnstools# host nginxd
nginxd.default.svc.cluster.local has address 10.233.251.220
dnstools# 
dnstools# 
dnstools# host cb-test1
cb-test1.default.svc.cluster.local has address 10.223.33.149
cb-test1.default.svc.cluster.local has address 10.223.36.104
Host cb-test1.default.svc.cluster.local not found: 3(NXDOMAIN)
dnstools# 
dnstools# host cb-test1
Host cb-test1 not found: 3(NXDOMAIN)
dnstools# 
dnstools# 
dnstools# host cb-test1
cb-test1.default.svc.cluster.local has address 10.223.33.149
cb-test1.default.svc.cluster.local has address 10.223.36.104
dnstools# 
dnstools# 
dnstools# host google.com
google.com has address 172.217.11.174
google.com has IPv6 address 2607:f8b0:4007:804::200e
google.com mail is handled by 40 alt3.aspmx.l.google.com.
google.com mail is handled by 50 alt4.aspmx.l.google.com.
google.com mail is handled by 10 aspmx.l.google.com.
google.com mail is handled by 30 alt2.aspmx.l.google.com.
google.com mail is handled by 20 alt1.aspmx.l.google.com.
dnstools# 
dnstools# host kubernetes
kubernetes.default.svc.cluster.local has address 10.233.0.1
dnstools# 

And here's what my coredns ConfigMap looks like:

apiVersion: v1
data:
  Corefile: |
    # Kubernetes Services (cluster.local domain)
    10.233.0.0/16 10.240.0.0/12 cluster.local {
      prometheus :9153
      errors
      log
      cache 10
      template IN ANY net.svc.cluster.local com.svc.cluster.local org.svc.cluster.local internal.svc.cluster.local {
        rcode NXDOMAIN
        authority "{{ .Zone }} 60 IN SOA ns.coredns.cluster.local coredns.cluster.local (1 60 60 60 60)"
      }
      kubernetai {
        fallthrough
      }
      kubernetai {
        endpoint https://10.223.32.35:443
        tls /etc/k8sflashcerts/client.crt /etc/k8sflashcerts/client.key /etc/k8sflashcerts/ca.crt
        upstream
      }
    }

    # AWS DNS Hosts
    10.223.0.0/19 compute.internal {
      prometheus :9153
      errors
      log
      cache 10
      template IN ANY {
        match "^([^i]|i[^p]|ip[^-])[a-z0-9\-\.]+(\.us-west-2)?\.compute\.internal"
        rcode NXDOMAIN
        authority "{{ .Zone }} 60 IN SOA ns.coredns.cluster.local coredns.cluster.local (1 60 60 60 60)"
        fallthrough
      }
      forward .  dns://10.223.13.143 dns://10.223.23.8 dns://10.223.15.8
    }

    . {
        reload 10s
        health
        prometheus :9153
        errors
        log
        cache 10

        forward .  dns://10.223.13.143 dns://10.223.23.8 dns://10.223.15.8
    }
kind: ConfigMap

Health reporting

Need to evaluate what needs to be done for health reporting for multiple kubernetes connections.

Metrics/Prometheus

Need to evaluate what needs to be done (if anything) for prometheus metrics reporting for multiple kubernetes connections.

Unable to configure ignoring SERVFAIL

Hey folks!

First off, thanks for having already solved the issue of wanting to serve multiple Kubernetes clusters with a single coredns (very handy for integrating Kubernetes's DNS into our existing DNS infra).

One problem we have is that, due to waves hands reasons, the kubelets of a cluster have non-contiguous CIDRs, but a set of clusters shares a large contiguous block.

So our config looks something like:

kubernetai <cluster zone> 10.0.0.0/16
...
kubernetai <cluster zone> 10.0.0.0/16

This always works fine for forward DNS, but if any cluster is down, then it breaks reverse dns for any clusters after it.

Would an additional flag (e.g. ignore_servfail) be an acceptable solution to y'all? We're essentially using kubernetai to express an authoritative whole, so we need each cluster to be consulted before declaring SERVFAIL.

SERVFAIL on fallthrough to forward

When trying to use kubernetai with the following configuration

.:53 {
  errors

  kubernetai cluster-a.local in-addr.arpa ip6.arpa {
    fallthrough in-addr.arpa ip6.arpa
    kubeconfig /Users/user/.kube/config arn:aws:eks:us-east-1:000000000000:cluster/a
    pods insecure
  }

  kubernetai cluster-b.local in-addr.arpa ip6.arpa {
    fallthrough in-addr.arpa ip6.arpa
    kubeconfig /Users/user/.kube/config arn:aws:eks:us-east-1:000000000000:cluster/b
    pods insecure
  }

  forward . /etc/resolv.conf {
    max_concurrent 1000
  }
}

and using dig to query PTR records of an IP address

dig @127.0.0.1 -x 1.1.1.1

it raises an error saying that the next plugin was not found

.:53
CoreDNS-1.8.3
darwin/amd64, go1.16.3, 4293992b-dirty
[ERROR] plugin/errors: 2 1.1.1.1.in-addr.arpa. PTR: plugin/kubernetes: no next plugin found specified

and answers the request with SERVFAIL

; <<>> DiG 9.10.6 <<>> -x 1.1.1.1 @127.0.0.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 57076
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;1.1.1.1.in-addr.arpa.          IN      PTR

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Thu May 06 17:11:41 -03 2021
;; MSG SIZE  rcvd: 49

instead of using the forward plugin configuration.

EKS compatible?

I just tried to make it work with a remote EKS cluster, with no success. I am getting an unauthorized error response from the k8s API server.

E0319 07:09:59.530761       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: Failed to list *v1.Namespace: Unauthorized

This is my coredns kubernetai config (it replaces the default kubernetes block); the rest is CoreDNS defaults:

kubernetai cluster.local {
      kubeconfig /tmp/coredns/kubeconfig.yaml remote
      namespaces eu-central-1-b
      fallthrough
}
kubernetai cluster.local {
      pods insecure
      upstream
      fallthrough in-addr.arpa ip6.arpa
}

kubeconfig.yaml

apiVersion: v1
clusters:
- cluster:
    # I was getting another error when setting EKS cluster's CA, so I enabled insecure mode to test
    insecure-skip-tls-verify: true
    server: https://bla.sk1.eu-central-1.eks.amazonaws.com
  name: remote
contexts:
- context:
    cluster: remote
    namespace: default
    user: remote
  name: remote
current-context: ""
kind: Config
preferences: {}
users:
- name: remote
  user:
    client-certificate: cert.pem
    client-key: key.pem

Client key and certificate are self-signed and added to the user in AWS with permissions to operate the cluster.

Am I missing anything? Or is the plugin not compatible with EKS yet?

Thank you :)

Error during parsing: Unknown directive 'kubernetai'

I got this error when I replaced kubernetes with kubernetai in the coredns ConfigMap in my k8s cluster.

ks edit cm coredns
apiVersion: v1
data:
  Corefile: |-
    .:53 {
        errors
        health
        kubernetai cluster.local. in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
Error during parsing: Unknown directive 'kubernetai'
ks get pods
NAME                                                                  READY   STATUS             RESTARTS   AGE
coredns-784bfc9fbd-9f7sz                                              0/1     CrashLoopBackOff   3          1m
coredns-784bfc9fbd-svw98                                              0/1     CrashLoopBackOff   3          1m
dns-controller-7c49b9b6d5-6ljlz                                       1/1     Running            0          41m

Unable to build coredns image with kubernetai

I was following the steps given in the README file.
Last time Chris gave me a Docker image to test, but I managed to build one with CoreDNS 1.3.1 myself.
However, as a starter I thought of trying the steps for CoreDNS 1.4.0.

gist:

cannot find package "github.com/coredns/coredns/core/plugin"
cannot find package "github.com/coredns/coredns/coremain"

details:

> git clone https://github.com/coredns/coredns
> cd coredns	
> git checkout tags/v1.4.0 -b 1.4.0
make -f Makefile.release DOCKER=bjethwan/coredns:1.4.0-kubernetai release
go get github.com/estesp/manifest-tool
Cleaning old builds
Building: darwin/amd64 - 1.4.0
mkdir -p build/darwin/amd64 && /Library/Developer/CommandLineTools/usr/bin/make coredns BINARY=build/darwin/amd64/coredns SYSTEM="GOOS=darwin GOARCH=amd64" CHECKS="godeps" BUILDOPTS=""
(cd /Users/bj151v/go/src/github.com/mholt/caddy 2>/dev/null              && git checkout -q master 2>/dev/null || true)
(cd /Users/bj151v/go/src/github.com/miekg/dns 2>/dev/null                && git checkout -q master 2>/dev/null || true)
(cd /Users/bj151v/go/src/github.com/prometheus/client_golang 2>/dev/null && git checkout -q master 2>/dev/null || true)
go get -u github.com/mholt/caddy
go get -u github.com/miekg/dns
go get -u github.com/prometheus/client_golang/prometheus/promhttp
go get -u github.com/prometheus/client_golang/prometheus
(cd /Users/bj151v/go/src/github.com/mholt/caddy              && git checkout -q v0.11.4)
(cd /Users/bj151v/go/src/github.com/miekg/dns                && git checkout -q v1.1.4)
(cd /Users/bj151v/go/src/github.com/prometheus/client_golang && git checkout -q v0.9.1)
CGO_ENABLED=0 GOOS=darwin GOARCH=amd64 go build  -ldflags="-s -w -X github.com/coredns/coredns/coremain.GitCommit=8dcc7fcc" -o build/darwin/amd64/coredns
coredns.go:9:2: cannot find package "github.com/coredns/coredns/core/plugin" in any of:
	/usr/local/Cellar/go/1.12.6/libexec/src/github.com/coredns/coredns/core/plugin (from $GOROOT)
	/Users/bj151v/go/src/github.com/coredns/coredns/core/plugin (from $GOPATH)
coredns.go:6:2: cannot find package "github.com/coredns/coredns/coremain" in any of:
	/usr/local/Cellar/go/1.12.6/libexec/src/github.com/coredns/coredns/coremain (from $GOROOT)
	/Users/bj151v/go/src/github.com/coredns/coredns/coremain (from $GOPATH)
make[1]: *** [coredns] Error 1
make: *** [build] Error 2

Under syntax in README.md is a dead link

Just came across kubernetai and was taking a look. The link in README.md is dead. I'd submit a PR to fix the problem, but I'm not sure what the proper link should be. I'd like to do an eval, so if someone could provide the proper link it would be appreciated.

Cheers
-steve

coredns build failed with kubernetai

I tried using the master/v1.7.0/v1.6.9/v1.6.8 versions of coredns/coredns with kubernetai enabled; make failed with all of these versions.
Output of make for master and v1.7.0:

../../../../pkg/mod/github.com/coredns/[email protected]/plugin/kubernetai/setup.go:24:24: not enough arguments in call to k.InitKubeCache
	have ()
	want (context.Context)

while the output of make for 1.6.9 and 1.6.8 is:

../../../../pkg/mod/github.com/!azure/[email protected]+incompatible/services/dns/mgmt/2018-05-01/dns/client.go:24:2: ambiguous import: found package github.com/Azure/go-autorest/autorest in multiple modules:
	github.com/Azure/go-autorest v11.1.2+incompatible (/Users/forrestchen/code/go/pkg/mod/github.com/!azure/[email protected]+incompatible/autorest)
	github.com/Azure/go-autorest/autorest v0.9.4 (/Users/forrestchen/code/go/pkg/mod/github.com/!azure/go-autorest/[email protected])
../../../../pkg/mod/github.com/!azure/[email protected]+incompatible/services/dns/mgmt/2018-05-01/dns/models.go:24:2: ambiguous import: found package github.com/Azure/go-autorest/autorest/azure in multiple modules:
	github.com/Azure/go-autorest v11.1.2+incompatible (/Users/forrestchen/code/go/pkg/mod/github.com/!azure/[email protected]+incompatible/autorest/azure)
	github.com/Azure/go-autorest/autorest v0.9.4 (/Users/forrestchen/code/go/pkg/mod/github.com/!azure/go-autorest/[email protected]/azure)
plugin/azure/setup.go:14:2: ambiguous import: found package github.com/Azure/go-autorest/autorest/azure/auth in multiple modules:
	github.com/Azure/go-autorest v11.1.2+incompatible (/Users/forrestchen/code/go/pkg/mod/github.com/!azure/[email protected]+incompatible/autorest/azure/auth)
	github.com/Azure/go-autorest/autorest/azure/auth v0.4.2 (/Users/forrestchen/code/go/pkg/mod/github.com/!azure/go-autorest/autorest/azure/[email protected])
../../../../pkg/mod/github.com/!azure/[email protected]+incompatible/services/dns/mgmt/2018-05-01/dns/models.go:25:2: ambiguous import: found package github.com/Azure/go-autorest/autorest/to in multiple modules:
	github.com/Azure/go-autorest v11.1.2+incompatible (/Users/forrestchen/code/go/pkg/mod/github.com/!azure/[email protected]+incompatible/autorest/to)
	github.com/Azure/go-autorest/autorest/to v0.2.0 (/Users/forrestchen/code/go/pkg/mod/github.com/!azure/go-autorest/autorest/[email protected])
