
nack's Introduction


NATS Controllers for Kubernetes (NACK)

JetStream Controller

The JetStream controller allows you to manage NATS JetStream Streams and Consumers via Kubernetes CRDs.

Getting started

First install the JetStream CRDs:

$ kubectl apply -f https://github.com/nats-io/nack/releases/latest/download/crds.yml

Now install with Helm:

helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm install nats nats/nats --set=config.jetstream.enabled=true
helm install nack nats/nack --set jetstream.nats.url=nats://nats:4222
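
If you want a quick sanity check that the install registered everything, you can list the CRDs and pods (output will vary with your cluster and release names):

# Confirm the JetStream CRDs are registered
kubectl get crds | grep jetstream.nats.io

# Confirm the NATS server and NACK controller pods are running
kubectl get pods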

Creating Streams and Consumers

Let's create a stream and a couple of consumers:

---
apiVersion: jetstream.nats.io/v1beta2
kind: Stream
metadata:
  name: mystream
spec:
  name: mystream
  subjects: ["orders.*"]
  storage: memory
  maxAge: 1h
---
apiVersion: jetstream.nats.io/v1beta2
kind: Consumer
metadata:
  name: my-push-consumer
spec:
  streamName: mystream
  durableName: my-push-consumer
  deliverSubject: my-push-consumer.orders
  deliverPolicy: last
  ackPolicy: none
  replayPolicy: instant
---
apiVersion: jetstream.nats.io/v1beta2
kind: Consumer
metadata:
  name: my-pull-consumer
spec:
  streamName: mystream
  durableName: my-pull-consumer
  deliverPolicy: all
  filterSubject: orders.received
  maxDeliver: 20
  ackPolicy: explicit

# Create a stream.
$ kubectl apply -f https://raw.githubusercontent.com/nats-io/nack/main/deploy/examples/stream.yml

# Check if it was successfully created.
$ kubectl get streams
NAME       STATE     STREAM NAME   SUBJECTS
mystream   Created   mystream      [orders.*]

# Create a push-based consumer
$ kubectl apply -f https://raw.githubusercontent.com/nats-io/nack/main/deploy/examples/consumer_push.yml

# Create a pull based consumer
$ kubectl apply -f https://raw.githubusercontent.com/nats-io/nack/main/deploy/examples/consumer_pull.yml

# Check if they were successfully created.
$ kubectl get consumers
NAME               STATE     STREAM     CONSUMER           ACK POLICY
my-pull-consumer   Created   mystream   my-pull-consumer   explicit
my-push-consumer   Created   mystream   my-push-consumer   none

# If you end up in an Errored state, run kubectl describe for more info.
#     kubectl describe streams mystream
#     kubectl describe consumers my-pull-consumer

Now we're ready to use Streams and Consumers. Let's start off with writing some data into mystream.

# Run nats-box that includes the NATS management utilities, and exec into it.
$ kubectl apply -f https://nats-io.github.io/k8s/tools/nats-box.yml
$ kubectl exec -it nats-box -- /bin/sh -l

# Publish a couple of messages from nats-box
nats-box:~$ nats context save jetstream -s nats://nats:4222
nats-box:~$ nats context select jetstream

nats-box:~$ nats pub orders.received "order 1"
nats-box:~$ nats pub orders.received "order 2"

First, we'll read the data using a pull-based consumer.

In the my-pull-consumer Consumer CRD above, we set filterSubject to orders.received. You can double-check with the following command:

$ kubectl get consumer my-pull-consumer -o jsonpath={.spec.filterSubject}
orders.received

So that's the subject my-pull-consumer will pull messages from.

# Pull first message.
nats-box:~$ nats consumer next mystream my-pull-consumer
--- subject: orders.received / delivered: 1 / stream seq: 1 / consumer seq: 1

order 1

Acknowledged message

# Pull next message.
nats-box:~$ nats consumer next mystream my-pull-consumer
--- subject: orders.received / delivered: 1 / stream seq: 2 / consumer seq: 2

order 2

Acknowledged message

Next, let's read data using a push-based consumer.

In the my-push-consumer Consumer CRD above, we set deliverSubject to my-push-consumer.orders, as you can confirm with the following command:

$ kubectl get consumer my-push-consumer -o jsonpath={.spec.deliverSubject}
my-push-consumer.orders

So pushed messages will arrive on that subject. This time all messages arrive automatically.

nats-box:~$ nats sub my-push-consumer.orders
17:57:24 Subscribing on my-push-consumer.orders
[#1] Received JetStream message: consumer: mystream > my-push-consumer / subject: orders.received /
delivered: 1 / consumer seq: 1 / stream seq: 1 / ack: false
order 1

[#2] Received JetStream message: consumer: mystream > my-push-consumer / subject: orders.received /
delivered: 1 / consumer seq: 2 / stream seq: 2 / ack: false
order 2

Getting Started with Accounts

You can create an Account resource with the following CRD. The Account resource can be used to specify server and TLS information.

---
apiVersion: jetstream.nats.io/v1beta2
kind: Account
metadata:
  name: a
spec:
  name: a
  servers:
  - nats://nats:4222
  tls:
    secret:
      name: nack-a-tls
    ca: "ca.crt"
    cert: "tls.crt"
    key: "tls.key"

You can then link an Account to a Stream so that the Stream uses the Account information for its creation.

---
apiVersion: jetstream.nats.io/v1beta2
kind: Stream
metadata:
  name: foo
spec:
  name: foo
  subjects: ["foo", "foo.>"]
  storage: file
  replicas: 1
  account: a # <-- Create stream using account A information
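
A Consumer can reference the Account in the same way. A minimal sketch, assuming the Consumer spec accepts the same account field (the nats-consumer-bar-a.yaml example later in this section does the equivalent):

---
apiVersion: jetstream.nats.io/v1beta2
kind: Consumer
metadata:
  name: bar
spec:
  streamName: foo
  durableName: bar
  deliverPolicy: all
  ackPolicy: explicit
  account: a # <-- Create consumer using account A information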

The following is an example of how to get Accounts working with a custom NATS Server URL and TLS certificates.

# Install cert-manager
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.6.0/cert-manager.yaml

# Install TLS certs
cd examples/secure
# Install certificate issuer
kubectl apply -f issuer.yaml
# Install account A cert
kubectl apply -f nack-a-client-tls.yaml
# Install server cert
kubectl apply -f server-tls.yaml
# Install nats-box cert
kubectl apply -f client-tls.yaml

# Install NATS cluster
helm install -f nats-helm.yaml nats nats/nats
# Verify pods are healthy
kubectl get pods

# Install nats-box to run nats cli later
kubectl apply -f nats-client-box.yaml

# Install JetStream Controller from nack
helm install --set jetstream.enabled=true jetstream-controller nats/nack
# Install CRDs
kubectl apply -f ../../deploy/crds.yml
# Verify pods are healthy
kubectl get pods

# Create account A resource
kubectl apply -f nack/nats-account-a.yaml

# Create stream using account A
kubectl apply -f nack/nats-stream-foo-a.yaml
# Create consumer using account A
kubectl apply -f nack/nats-consumer-bar-a.yaml

After Accounts, Streams, and Consumers are created, let's log into the nats-box container to run the management CLI.

# Get container shell
kubectl exec -it nats-client-box-abc-123 -- sh
# Change to TLS directory
cd /etc/nats-certs/clients/nack-a-tls

There should now be some Streams available; verify with the nats command.

# List streams
nats --tlscert tls.crt --tlskey tls.key --tlsca ca.crt -s tls://nats.default.svc.cluster.local stream ls

You can now publish messages on a Stream.

# Push message
nats --tlscert tls.crt --tlskey tls.key --tlsca ca.crt -s tls://nats.default.svc.cluster.local pub foo hi

And pull messages from a Consumer.

# Pull message
nats --tlscert tls.crt --tlskey tls.key --tlsca ca.crt -s tls://nats.default.svc.cluster.local consumer next foo bar

Local Development

# First, build the jetstream controller.
make jetstream-controller

# Next, run the controller like this
./jetstream-controller -kubeconfig ~/.kube/config -s nats://localhost:4222

# Pro tip: jetstream-controller uses klog just like kubectl or kube-apiserver.
# This means you can change the verbosity of logs with the -v flag.
#
# For example, this prints raw HTTP requests and responses.
#     ./jetstream-controller -v=10

# You'll probably want to start a local Jetstream-enabled NATS server, unless
# you use a public one.
nats-server -DV -js

Build Docker image

make jetstream-controller-docker ver=1.2.3

NATS Server Config Reloader

This is a sidecar that you can use to automatically reload your NATS Server configuration file.

Installing with Helm

For more information see the Chart repo.

helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm install my-nats nats/nats

Configuring

reloader:
  enabled: true
  image: natsio/nats-server-config-reloader:0.6.0
  pullPolicy: IfNotPresent

Local Development

# First, build the config reloader.
make nats-server-config-reloader

# Next, run the reloader like this
./nats-server-config-reloader

Build Docker image

make nats-server-config-reloader-docker ver=1.2.3

NATS Boot Config

Installing with Helm

For more information see the Chart repo.

helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm install my-nats nats/nats

Configuring

bootconfig:
  image: natsio/nats-boot-config:0.5.2
  pullPolicy: IfNotPresent

Local Development

# First, build the project.
make nats-boot-config

# Next, run the project like this
./nats-boot-config

Build Docker image

make nats-boot-config-docker ver=1.2.3

nack's People

Contributors

1995parham, bruth, caleblloyd, danielcibrao-form3, ddseapy, dependabot[bot], haisum, jackzxj, jarema, jkralik, liam-verta, ludusrusso, marjakm, martin31821, mfuhol-weka, nsurfer, pacoguzman, philpennock, piotrpio, rytswd, samirmarin, samuelattwood, sboulkour, scottf, squat, stuartcrichton, thomasbabtist, treksler, variadico, wallyqs


nack's Issues

Refine installation step to allow easier getting started

Enhancement Idea

Goal

I believe the installation can be simplified much more, even without Helm.

Motivation

To allow smooth onboarding and an easy getting-started experience with NATS and JetStream, how easy it is to install and get our hands dirty is key. The current installation steps require some understanding of how the NATS Server cluster needs to be running, that a JetStream-enabled NATS node also needs to be added, etc.

It could be handled by a Helm Chart, perhaps. But that may also mean that Helm is the default, and some familiarity with Helm would then be required for managing this controller.

Background

There are a few good examples, such as cert-manager and Argo projects. They have a single YAML file for installation, which bundles all CRDs, RBAC, etc.

Ref:
https://cert-manager.io/docs/installation/kubernetes/
https://argoproj.github.io/argo/quick-start/

You could achieve a similar setup with a Helm Chart, but when you are introducing a controller to the cluster, the installation of the controller itself is usually simple and straightforward (like the above examples).

Even in a complex scenario such as Istio, they have their own installation CLI, istioctl, which simply generates YAML, applies it to the cluster, and ensures a clean installation. The CLI has many more features for debugging, management, etc., but the installation itself is straightforward.
You can find the installation guide using istioctl at https://istio.io/latest/docs/setup/install/istioctl/, but there is also a way to generate the entire YAML instead: istioctl manifest generate > istio.yaml. After that, you can run kubectl apply -f istio.yaml to deploy all components.

Implementation Ideas

I think there are roughly three approaches:

  • Like cert-manager, each release could generate a bundled YAML as a build artifact. This would be a part of release CI job.
  • Like argo/argo-cd, bundled YAML can be generated ahead of the release, and point the release tag to it.
  • Adopt Helm only installation, and do not support individual YAML as installation path.

Given that the controller currently assumes a NATS Server cluster is already installed in the Kubernetes cluster, using Helm seems to be the only viable option at the moment. I think it would make sense for the controller to support generating NATS Server clusters in the future, so it may be better to allow the controller to run by itself. This would allow having multiple NATS Server clusters in a single Kubernetes cluster, with a single controller managing all CRDs (each CRD would then be able to target the NATS Server cluster of its choice). With a ValidatingWebhook, we could also ensure that any JetStream CRD is rejected if created without NATS running in the cluster.
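
A minimal sketch of what such an admission hook could look like (the names, service, and path here are hypothetical):

---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: jetstream-validation # hypothetical
webhooks:
- name: streams.jetstream.nats.io
  admissionReviewVersions: ["v1"]
  sideEffects: None
  rules:
  - apiGroups: ["jetstream.nats.io"]
    apiVersions: ["v1beta2"]
    operations: ["CREATE"]
    resources: ["streams", "consumers"]
  clientConfig:
    service:
      name: jetstream-controller # hypothetical service that would check NATS is reachable
      namespace: default
      path: /validate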

Other Notes

I don't mean to self-promote, but I have started putting together a JetStream getting-started doc, mainly for myself and my teammates to learn from. You can see how the installation step carries so many implementation details that you may not need to know when you just want to play with it.

Config Reloader v0.6.x Crash Loop on "Too Many Open Files"

Posting this here as the code seems to live here in this repo:
nats-io/k8s#488

Hello!

We're seeing strange issues with our config-reloader using the latest build from February (v0.6.3) - config reloader pods will not boot with logs stopping at:

2022/04/18 19:37:26 Starting NATS Server Reloader v0.6.3
Error: too many open files
Is anyone else seeing this? I'm able to reproduce it in multiple clusters (k3s), unfortunately, although sometimes the container launches just fine.

I have attempted older versions as well (0.6.1 and 0.6.2), with the same issue happening, unfortunately.

State of stream and actual state might be different

A result of doing kubectl get streams can show:

kubectl get streams
NAME   STATE     STREAM NAME   SUBJECTS
foo    Errored   foo           ["foo","foo.\u003e"]
foo2   Errored   foo2          ["foo2","foo2.\u003e"]
foo3   Created   foo3          ["foo3","foo3.\u003e"]

Although there is no issue with the streams themselves:

nats stream ls -s nats://nats:4222 --tlsca nack105163165/default/a/ca.c...

Streams:

        foo
        foo2
        foo3

Resource Deletion

I am trying to understand the history of #56 with respect to this comment:

With finalizers gone, the Kubernetes Informers no longer send us an event when a resource is deleted

Why are finalizers gone? They are still documented as supported for CRDs:

https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#finalizers

Managed resource deletion in k8s is typically handled with finalizers, not a list/cleanup loop like the one implemented in #56. Or at least it was 1-2 years ago; it has been a while since I worked on an Operator, so have things changed? Or should we go back to using finalizers here?
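
For context, the finalizer-based flow described above would look roughly like this on a managed resource (the finalizer name below is hypothetical):

---
apiVersion: jetstream.nats.io/v1beta2
kind: Stream
metadata:
  name: mystream
  finalizers:
  - jetstream.nats.io/finalizer # hypothetical name
spec:
  name: mystream
  subjects: ["orders.*"]

With a finalizer present, kubectl delete only sets a deletionTimestamp on the object; the controller observes the update, deletes the backing JetStream asset, removes the finalizer, and only then does Kubernetes actually remove the object.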

error when deploying push consumer. consumer with flow control also needs heartbeats (10108)

Following the README to do a test deploy,

when deploying the push consumer

kubectl apply -f https://raw.githubusercontent.com/nats-io/nack/main/deploy/examples/consumer_push.yml

it ends up in an error state with

failed to create consumer "my-push-consumer" on stream "mystream": consumer with flow control also needs heartbeats (10108)

I am new to NATS and JetStream, so correct me if I am way off here, but should the push consumer example be updated to something like:

---
apiVersion: jetstream.nats.io/v1beta2
kind: Consumer
metadata:
  name: my-push-consumer
spec:
  streamName: mystream
  durableName: my-push-consumer
  deliverSubject: my-push-consumer.orders
  deliverPolicy: last
  ackPolicy: none
  replayPolicy: instant
  description: my consumer description
  flowControl: true
  heartbeatInterval: 1s

Cannot create consumer with field deliverGroup

I got this problem when creating a consumer via NACK: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Consumer.spec): unknown field "deliverGroup" in io.nats.jetstream.v1beta2.Consumer.spec

the server could not find the requested resource

Hi,

I've deployed NACK according to the README, with one controller and 3 servers. All pods are in the running state.
I established 1 stream, 1 push consumer, and 1 pull consumer. However, when I monitor the setup in Grafana, I see neither streams nor consumers, nor any activity.
If I check the logs from the controller, I only see a bunch of errors (see below). What am I missing?

Thanks,
Harald

2022-04-29 11:28:39
E0429 09:28:39.789385 1 reflector.go:127] nack/pkg/jetstream/generated/informers/externalversions/factory.go:114: Failed to watch *v1beta1.Consumer: failed to list *v1beta1.Consumer: the server could not find the requested resource (get consumers.jetstream.nats.io)
2022-04-29 11:28:29
E0429 09:28:29.342957 1 reflector.go:127] nack/pkg/jetstream/generated/informers/externalversions/factory.go:114: Failed to watch *v1beta1.Stream: failed to list *v1beta1.Stream: the server could not find the requested resource (get streams.jetstream.nats.io)
2022-04-29 11:28:03
E0429 09:28:03.904655 1 reflector.go:127] nack/pkg/jetstream/generated/informers/externalversions/factory.go:114: Failed to watch *v1beta1.StreamTemplate: failed to list *v1beta1.StreamTemplate: the server could not find the requested resource (get streamtemplates.jetstream.nats.io)
2022-04-29 11:27:48
E0429 09:27:48.541547 1 reflector.go:127] nack/pkg/jetstream/generated/informers/externalversions/factory.go:114: Failed to watch *v1beta1.Consumer: failed to list *v1beta1.Consumer: the server could not find the requested resource (get consumers.jetstream.nats.io)
2022-04-29 11:27:45
E0429 09:27:45.306433 1 reflector.go:127] nack/pkg/jetstream/generated/informers/externalversions/factory.go:114: Failed to watch *v1beta1.Stream: failed to list *v1beta1.Stream: the server could not find the requested resource (get streams.jetstream.nats.io)
2022-04-29 11:27:08
E0429 09:27:08.782693 1 reflector.go:127] nack/pkg/jetstream/generated/informers/externalversions/factory.go:114: Failed to watch *v1beta1.StreamTemplate: failed to list *v1beta1.StreamTemplate: the server could not find the requested resource (get streamtemplates.jetstream.nats.io)
2022-04-29 11:27:04
E0429 09:27:04.566932 1 reflector.go:127] nack/pkg/jetstream/generated/informers/externalversions/factory.go:114: Failed to watch *v1beta1.Stream: failed to list *v1beta1.Stream: the server could not find the requested resource (get streams.jetstream.nats.io)
2022-04-29 11:27:03
E0429 09:27:03.103242 1 reflector.go:127] nack/pkg/jetstream/generated/informers/externalversions/factory.go:114: Failed to watch *v1beta1.Consumer: failed to list *v1beta1.Consumer: the server could not find the requested resource (get consumers.jetstream.nats.io)
2022-04-29 11:26:30
E0429 09:26:30.407416 1 reflector.go:127] nack/pkg/jetstream/generated/informers/externalversions/factory.go:114: Failed to watch *v1beta1.Stream: failed to list *v1beta1.Stream: the server could not find the requested resource (get streams.jetstream.nats.io)
2022-04-29 11:26:26
E0429 09:26:26.260237 1 reflector.go:127] nack/pkg/jetstream/generated/informers/externalversions/factory.go:114: Failed to watch *v1beta1.StreamTemplate: failed to list *v1beta1.StreamTemplate: the server could not find the requested resource (get streamtemplates.jetstream.nats.io)

Authorization Violation connection issue with management through accounts

Hello everyone, there is a problem with creating resources using NACK.

versions:
nats: nats:2.9.8-alpine, deployed by helm chart 0.19.1
nack: jetstream-controller:0.8.0, deployed by helm chart 0.19.0 and CRD v0.8.0

NACK config:

jetstream:
  enabled: true

resources:
  limits:
    cpu: 500m
    memory: 1024Mi
  requests:
    cpu: 100m
    memory: 256Mi

After deployment, NACK works well and there are no errors in the logs.
I want to manage many resources in different accounts, so after creating an account in NATS, I added the credentials of this account to a k8s secret and created an Account resource:

---
apiVersion: jetstream.nats.io/v1beta2
kind: Account
metadata:
  name: test
spec:
  name: test
  servers:
  - nats://nats.${URL}:4222
  creds:
    secret:
      name: nats-nack-account-test
    file: nats-nack-account-test.creds

Next, I tried to create a Stream using this account:

---
apiVersion: jetstream.nats.io/v1beta2
kind: Stream
metadata:
  name: test-nack
spec:
  name: test-nack
  subjects: ["foo", "foo.>"]
  storage: file
  replicas: 1
  account: test

After that, errors appeared in the NACK logs:

 failed to process stream: failed to connect to nats-servers(nats://nats.${URL}:4222): nats: Authorization Violation

I rechecked the URL and account secrets; everything is correct and there are no errors.

It's important to note that if I specify this account directly in the NACK config, then everything works without errors and the Stream is created:

jetstream:
  enabled: true

  nats:
     url: nats://nats.${URL}:4222
     credentials:
        secret:
          name: nats-nack-account-test
          key: "nats-nack-account-test.creds"

resources:
  limits:
    cpu: 500m
    memory: 1024Mi
  requests:
    cpu: 100m
    memory: 256Mi

But in this configuration I can only manage one account, so it's not suitable.

Could you tell me what the error may be and how to solve it? klogLevel: 10 does not add clarity.

jetstream.yml - [ is not respecting the storage value on the helm chart ]

Hey Folks,

When I create a stream product for NATS on Kubernetes, it is not respecting the file parameter in the Helm chart.

Here is my deployment:
https://gist.github.com/randomk/6456ef3db59127765697607db06ea488


apiVersion: jetstream.nats.io/v1beta1
kind: Stream
metadata:
  name: product
spec:
  name: product
  subjects: ["product.*"]
  storage: file
  maxAge: 1h
  replicas: 3
  noAck: true
  maxMsgs: -1
  maxBytes: -1
  retention: limits
  discard: old
  maxMsgSize: -1

I am checking via the nats-toolbox and it's not showing correctly; it shows as a memory stream.

Fun fact: when I create a stream via the nats-toolbox, it is created correctly as a file stream.
nats str add ORDERS --subjects "ORDERS.*" --ack --max-msgs=-1 --max-bytes=-1 --max-age=1y --storage file --retention limits --max-msg-size=-1 --discard=old

Add metrics based horizontal autoscaling support for NATS cluster

As we already have nats-operator to scale the number of instances of a NATS cluster up and down in k8s, it would be great to add metrics-based horizontal autoscaling support, so the NATS cluster can increase/decrease the number of instances horizontally on its own based on load.

Something like:

apiVersion: nats.io/v1alpha2
kind: NatsCluster
metadata:
  name: nats-cluster
spec:
  version: "2.1.8"
  size: 3 # default cluster instance
  max: 100 # max cluster instance
  min: 3 # min cluster instance
  targetCPUUtilizationPercentage: 75 # will spin up new instance if pod hitting this threshold
  pod:
    resources:
      limits:
        cpu: "200m"
        memory: "500Mi"
      requests:
        cpu: "100m"
        memory: "100Mi"

Thank you to the team for the work so far on NATS. Great product!

cc @wallyqs

installation of nack controller via Helm chart - Error

While installing nack, I am getting this error.

Are any additional roles needed?

panic: mkdir ./nack1490200196: permission denied

goroutine 1 [running]:
github.com/nats-io/nack/controllers/jetstream.NewController({{0x198de78, 0xc000508180}, {0x19d17f0, 0xc00051a160}, {0x197aaa8, 0xc00050a300}, {0x16d5009, 0x14}, {0x0, 0x0}, ...})
/go/src/nack/controllers/jetstream/controller.go:150 +0xd89
main.run()
/go/src/nack/cmd/jetstream-controller/main.go:103 +0x725
main.main()
/go/src/nack/cmd/jetstream-controller/main.go:44 +0x19

In the values.yaml file, the following security context was used for the installation.

securityContext:
  fsGroup: 65534
  runAsUser: 65534
  runAsNonRoot: true

containerSecurityContext:
  allowPrivilegeEscalation: true
  readOnlyRootFilesystem: false

reloader: ignore chmod events, ensure the same config state does not cause a config reload

2021/11/02 14:23:18 Event: "/etc/nats-config/..2021_11_02_13_04_19.318044751": CHMOD
2021/11/02 14:23:18 Sending signal to server to reload configuration
2021/11/02 14:23:18 Event: "/etc/nats-config": CHMOD
2021/11/02 14:23:18 Sending signal to server to reload configuration
2021/11/02 14:24:36 Event: "/etc/nats-config/..2021_11_02_13_04_19.318044751": CHMOD
2021/11/02 14:24:36 Sending signal to server to reload configuration
2021/11/02 14:24:36 Event: "/etc/nats-config": CHMOD
2021/11/02 14:24:36 Sending signal to server to reload configuration
2021/11/02 14:25:44 Event: "/etc/nats-config/..2021_11_02_13_04_19.318044751": CHMOD
2021/11/02 14:25:44 Sending signal to server to reload configuration
2021/11/02 14:25:44 Event: "/etc/nats-config": CHMOD
2021/11/02 14:25:44 Sending signal to server to reload configuration
2021/11/02 14:26:58 Event: "/etc/nats-config/..2021_11_02_13_04_19.318044751": CHMOD
2021/11/02 14:26:58 Sending signal to server to reload configuration
2021/11/02 14:26:58 Event: "/etc/nats-config": CHMOD
2021/11/02 14:26:58 Sending signal to server to reload configuration
2021/11/02 14:26:58 Event: "/etc/nats-config": CHMOD
2021/11/02 14:26:58 Sending signal to server to reload configuration
2021/11/02 14:28:27 Event: "/etc/nats-config/..2021_11_02_13_04_19.318044751": CHMOD
2021/11/02 14:28:27 Sending signal to server to reload configuration
2021/11/02 14:28:27 Event: "/etc/nats-config": CHMOD
2021/11/02 14:28:27 Sending signal to server to reload configuration
2021/11/02 14:29:55 Event: "/etc/nats-config/..2021_11_02_13_04_19.318044751": CHMOD
2021/11/02 14:29:55 Sending signal to server to reload configuration
2021/11/02 14:29:55 Event: "/etc/nats-config": CHMOD
2021/11/02 14:29:55 Sending signal to server to reload configuration
2021/11/02 14:29:55 Event: "/etc/nats-config": CHMOD
2021/11/02 14:29:55 Sending signal to server to reload configuration 
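
One way to express the requested behavior: signal the server only when the configuration content actually changes, regardless of the filesystem event type. A rough shell sketch of the idea, run on each filesystem event (the paths and pid handling are illustrative, not the reloader's actual implementation):

# Hash the config and compare against the last digest seen.
new_digest=$(sha256sum /etc/nats-config/nats.conf | cut -d' ' -f1)
if [ "$new_digest" != "$last_digest" ]; then
  # nats-server reloads its configuration on SIGHUP
  kill -HUP "$(cat /var/run/nats.pid)"
  last_digest="$new_digest"
fi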

Stream Template crd not served

Hi,
While working on a project that was supposed to use stream templates for dynamic stream creation, we found out that stream templates are declared in the CRDs file but marked as not served. Are they not served due to some issue directly in NACK, or are they just missing version v1beta2? As far as I can see, all other resources have versions v1beta1 and v1beta2, but stream templates only have v1beta1.

nack/deploy/crds.yml

Lines 677 to 692 in 552472d

---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: streamtemplates.jetstream.nats.io
spec:
  group: jetstream.nats.io
  scope: Namespaced
  names:
    kind: StreamTemplate
    singular: streamtemplate
    plural: streamtemplates
  versions:
  - name: v1beta1
    served: false
    storage: true

JetStream Controller does not support updating the Stream CRD's "sources" field

  1. Create a Stream CRD, for example mystream.
  2. mystream has a "sources" field.
  3. The JetStream Controller creates a stream named "mystream" with the "sources" info in the NATS cluster.
  4. Edit the mystream CRD with kubectl and add another source to the "sources" array.
  5. The JetStream Controller then just "deletes" the "sources" info from "mystream" in the NATS cluster.

[Screenshots in the original issue show the stream's sources before the edit, the edited CRD, and the stream after the edit with the sources removed.]
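
For reference, a Stream spec with sources looks roughly like this (the source stream names are illustrative):

---
apiVersion: jetstream.nats.io/v1beta2
kind: Stream
metadata:
  name: mystream
spec:
  name: mystream
  storage: file
  sources:
  - name: upstream-a # illustrative source streams
  - name: upstream-b

The report above is that adding another entry to that array causes the controller to drop the sources from the server-side stream instead of updating them.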

Support read-only root file-system in jetstream-controller

The jetstream-controller v0.6.0 attempts to create a temp directory for caching purposes in the current working directory:

https://github.com/nats-io/nack/blob/main/controllers/jetstream/controller.go#L146

In the Dockerfile, the working directory defaults to /. For security reasons, containers may run in a constrained environment with a read-only root file-system. In this case, creating the temp directory will fail.

It would be great if the base directory in which the temp directory is created were either the default OS location (by using os.MkdirTemp("", "nack")) or made configurable.
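
For reference, the constrained environment in question is typically configured with standard Kubernetes security-context settings like the following (a sketch, matching the Helm values style used elsewhere on this page):

containerSecurityContext:
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false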

Jetstream Stream CRD requires mirror, sources and placement

When these changes were added to the Stream CRD, mirror, sources, and placement became required fields. As a result, previously working streams, as well as the ones suggested by the docs, now fail to be created.

NACK logs these errors when previously existing streams are attempted to be created:

E1026 22:30:10.276732       1 controller.go:339] failed to process stream: failed to set "<name>" stream finalizers: Stream.jetstream.nats.io "ingestion-monitoring-attempts-queue" is invalid: [spec.sources: Invalid value: "null": spec.sources in body must be of type array: "null", spec.placement: Invalid value: "null": spec.placement in body must be of type object: "null", spec.mirror: Invalid value: "null": spec.mirror in body must be of type object: "null"]

gRPC debug Output

Using charts:
nack-0.19.0 0.8.0
nats-0.19.1 2.9.8

The controller works without TLS, authentication, and accounts. After enabling TLS and auth and using accounts, the controller can start and connect to the NATS cluster successfully using its defined creds for the SYS account, but creating a consumer CRD simply fails with:
controller.go:416] failed to process consumer: failed to check if consumer exists: context deadline exceeded

I created the stream 'test' in account 'SANDBOX' using nats.exe from my dev machine.

What I tried so far:

  • I tried to enable debug output of the controller with the -v=10 parameter, but it prints only requests/responses to the kube API
  • Removed the SYS creds from the controller Helm chart; it then starts with an auth error
  • Created an Account CRD 'SANDBOX' with and without the server-url, tls, and creds options
  • Added the tester.creds of account 'SANDBOX' to the secret of the nack Helm chart
  • Added server-url, tls, and creds-NAME set to 'tester.creds' to the Consumer CRD

Nothing worked, and the only error logged by the controller is:
controller.go:416] failed to process consumer: failed to check if consumer exists: context deadline exceeded

I can't find any example that uses simple creds files, and I don't know what the controller needs in order to connect to the proper NATS account. The lack of debug output from the controller to the NATS server does not help.

Need help

crd storage type mismatch

When I create a Stream CRD and set the storage to "file", I find the stream storage is "Memory".

Pull-based consumer stuck on the first message

I followed the README steps and got a bit stuck with the pull-based consumer.

Below is what I got with the pull step (there is one additional message I tested with):

nats-box:~# nats pub orders.received "order 1"
22:38:30 Published 7 bytes to "orders.received"
nats-box:~# nats pub orders.received "order 2"
22:38:32 Published 7 bytes to "orders.received"
nats-box:~# nats pub orders.other "other order ABCDEF"
22:38:37 Published 18 bytes to "orders.other"
nats-box:~# nats consumer next mystream my-pull-consumer
--- subject: _INBOX.HCW9GpLC65HZhTydfegh0A

order 1

Acknowledged message
nats-box:~# nats consumer next mystream my-pull-consumer
--- subject: _INBOX.ciwMPLGB5fuE6PfyMTmaC0

order 1

Acknowledged message
nats-box:~# nats consumer next mystream my-pull-consumer
--- subject: _INBOX.pkfpec14mJGxXOn1jf4Bl7

order 1

Acknowledged message
nats-box:~# nats consumer next mystream my-pull-consumer
--- subject: _INBOX.6r2GUcY9rHARpBbcCPzN9h

order 1

Acknowledged message
nats-box:~# nats consumer next mystream my-pull-consumer
--- subject: _INBOX.pTHpbcMJV2qouexDevGrkA

order 1

Acknowledged message
nats-box:~# nats consumer next mystream my-pull-consumer
--- subject: _INBOX.JF2Rn72a8GRlxNMCTHHXBz

order 1

Acknowledged message

As you can see, I tried to get the next message with nats consumer next, but it didn't move on to the next message. It looks like consumer next is not responding to the pull correctly?

nats-box:~# nats consumer info 
? Select a Stream mystream
? Select a Consumer my-pull-consumer
Information for Consumer mystream > my-pull-consumer

Configuration:

        Durable Name: my-pull-consumer
           Pull Mode: true
      Filter Subject: orders.received
         Deliver All: true
          Ack Policy: Explicit
            Ack Wait: 1ns
       Replay Policy: Instant
  Maximum Deliveries: 20

State:

  Last Delivered Message: Consumer sequence: 6 Stream sequence: 1
    Acknowledgment floor: Consumer sequence: 0 Stream sequence: 0
        Pending Messages: 1
    Redelivered Messages: 1

(Btw, I have seen this in the past, but the Consumer sequence being greater than the Stream sequence was rather surprising to me. I thought the Consumer sequence would stay at 1.)


Just for reference, the push-based consumer works as expected:

nats-box:~# nats sub my-push-consumer.orders
22:50:01 Subscribing on my-push-consumer.orders
22:50:01 [#1] Received on "my-push-consumer.orders"
order 1

22:50:01 [#2] Received on "my-push-consumer.orders"
order 2

22:50:01 [#3] Received on "my-push-consumer.orders"
other order ABCDEF

Deleting a nats cluster pod results in peer log: error sending snapshot to follower [xyz]: raft: no snapshot available

Issue:

When killing the stream leader of a NATS JetStream cluster, it seems like RAFT is not synchronizing perfectly. After the deleted peer (which is typically a statefulset pod like nats-0, nats-1, nats-2, nats-n) is recovered, it throws the warning JetStream cluster consumer '$G > data > some_durable-connection' has NO quorum, stalled., while the peers that were not deleted log the error Error sending snapshot to follower [xyz]: raft: no snapshot available.

This issue results in a stream that does not work correctly. It seems like the clients are not connected to JetStream in a correct manner.

When requesting consumer info in nats-box, the following is printed out:
error: could not load Consumer consumer-xyz > consumer_durable-connection: JetStream system temporarily unavailable (10008)

╭─────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                   Stream Report                                                 │
├──────────┬─────────┬───────────┬──────────┬─────────┬──────┬─────────┬──────────────────────────┤
│ Stream   │ Storage │ Consumers │ Messages │ Bytes   │ Lost │ Deleted │ Replicas                 │
├──────────┼─────────┼───────────┼──────────┼─────────┼──────┼─────────┼──────────────────────────┤
│ stream-1 │ Memory  │ 2         │ 23,753   │ 48 MiB  │ 0    │ 0       │ nats-0, nats-1, nats-2*  │
│ stream-2 │ Memory  │ 1         │ 91,459   │ 238 MiB │ 0    │ 0       │ nats-0, nats-1!, nats-2* │
│ stream-3 │ Memory  │ 4         │ 201,431  │ 477 MiB │ 0    │ 0       │ nats-0, nats-1, nats-2*  │
╰──────────┴─────────┴───────────┴──────────┴─────────┴──────┴─────────┴──────────────────────────╯
Experienced with:

Kubernetes 1.21.2 with a NATS JetStream cluster (3 and 5 statefulset pods) and 3 streams (3 replicas each), NATS JetStream 2.7.2, Java client with jnats 2.13.2.
NATS JetStream was installed via the official Helm chart.

Reproduce:

Kill the peer pod which is marked as the current leader for a stream (kubectl delete pod), then check the peer pod logs.

Add periodic checking of stream/consumers

When a stream/consumer ends up in the Error state due to an API call failing, the actual JetStream asset might actually be fine, so we need to retry when that happens to clear the Error state from the CRD.

NACK fails to create a stream with kustomize and namespacing

I've installed the CRDs and NACK using the Helm chart, and created a stream in the same namespace as NACK and the NATS server.

In the logs, I see that the NACK client opened a connection to the NATS server.
Then nothing changes. I don't see any stream in NATS using its command-line tool.

# NACK log
I1222 17:45:47.333603       1 main.go:117] Starting /jetstream-controller v...
# NATS log
[6] 2021/12/22 17:45:46.411918 [DBG] 10.233.122.229:39318 - cid:24 - "v1.12.1:go:jetstream-controller" - Client connection closed: Client Closed
[6] 2021/12/22 17:45:52.336792 [DBG] 10.233.68.1:56700 - cid:25 - Client connection created
[6] 2021/12/22 17:45:54.605762 [DBG] 10.233.68.1:56700 - cid:25 - "v1.12.1:go:jetstream-controller" - Client Ping Timer
# stream state is empty
kubectl get streams -n testing01
NAME                            STATE   STREAM NAME   SUBJECTS
nats-back-back-reliable-strem           reliable      ["scope.*.*"]
# port forwarding
nats str list -s localhost:59286
No Streams defined
apiVersion: jetstream.nats.io/v1beta2
kind: Stream
metadata:
  name: nats-back-back-reliable-strem
spec:
  name: reliable
  subjects:
  - "scope.*.*"
  storage: memory
  retention: limits
  discard: old
  maxAge: 1h
  maxBytes: 1048576
  replicas: 1

Definition of the stream.

jetstream:
  enabled: true
  nats:
    url: nats://nats-back-back-internal-lb:4222

namespaced: true

NACK config.

nats:
  image: nats:2.6.5-alpine3.14
  resources:
    limits:
      cpu: 0.1
      memory: 300Mi
    requests:
      cpu: 0.1
      memory: 300Mi
  logging:
    debug: true

  jetstream:
      enabled: true

      memStorage:
        enabled: true
        size: 1Gi

exporter:
  enabled: true
  image: natsio/prometheus-nats-exporter:0.9.0
  serviceMonitor:
    enabled: true

cluster:
  enabled: false

natsbox:
  enabled: false

auth:
  enabled: false

NATS server config.

Consumer delete and create does not create a new consumer

nats-0.9.2
nack-0.9.2

  1. Create consumer with CRD.
    master [~/k8s/helm/nats/consumer]$ kubectl apply -f consumer-billing.yaml
    consumer.jetstream.nats.io/csl-billing-txn-billing-prepare created
    consumer.jetstream.nats.io/csl-billing-txn-billing-cancel created
    consumer.jetstream.nats.io/csl-billing-txn-billing-complete created
    master [~/k8s/helm/nats/consumer]$ kubectl get consumer
    NAME STATE STREAM CONSUMER ACK POLICY
    csl-billing-txn-billing-cancel Created crp-billing-stream csl-billing-txn-billing-cancel all
    csl-billing-txn-billing-complete Created crp-billing-stream csl-billing-txn-billing-complete all
    csl-billing-txn-billing-prepare Created crp-billing-stream csl-billing-txn-billing-prepare all

  2. Delete the above to change a consumer option.
    master [~/k8s/helm/nats/consumer]$ kubectl delete -f consumer-billing.yaml
    consumer.jetstream.nats.io "csl-billing-txn-billing-prepare" deleted
    consumer.jetstream.nats.io "csl-billing-txn-billing-cancel" deleted
    consumer.jetstream.nats.io "csl-billing-txn-billing-complete" deleted

  3. Modify some consumer option, and recreate with the CRD.
    master [~/k8s/helm/nats/consumer]$ kubectl apply -f consumer-billing.yaml
    consumer.jetstream.nats.io/csl-billing-txn-billing-prepare created
    consumer.jetstream.nats.io/csl-billing-txn-billing-cancel created
    consumer.jetstream.nats.io/csl-billing-txn-billing-complete created

  4. But this time the consumers weren't created.
    master [~/k8s/helm/nats/consumer]$ kubectl get consumer
    NAME STATE STREAM CONSUMER ACK POLICY
    csl-billing-txn-billing-cancel crp-billing-stream csl-billing-txn-billing-cancel all
    csl-billing-txn-billing-complete crp-billing-stream csl-billing-txn-billing-complete all
    csl-billing-txn-billing-prepare crp-billing-stream csl-billing-txn-billing-prepare all

  5. NACK error message:
    Event(v1.ObjectReference{Kind:"Consumer", Namespace:"default", Name:"csl-odr-txn-complete-order", UID:"b1b60a3a-5b95-4c54-a39e-573082a56143", APIVersion:"jetstream.nats.io/v1beta2", ResourceVersion:"315784971", FieldPath:""}): type: 'Warning' reason: 'Updating' Consumer updates ("csl-odr-txn-complete-order" on "crp-order-stream") are not allowed, recreate to update

How can I delete the old consumer and create it again to update the option?
The same problem on StackOverflow: https://stackoverflow.com/questions/69403789/nack-jetstream-controller-failed-to-update-consumer

NACK Configuration issue

Hi,

I have installed NATS in K8s using Helm charts and configured the account using this link: https://github.com/nats-io/k8s/blob/main/setup/nsc-setup.sh.

After I installed NACK, I am not able to create a stream using YAML. It produces the error below (NACK is using the sys.creds via a K8s secret):

E0301 09:56:15.775121 1 controller.go:416] failed to process stream: failed to check if stream exists: context deadline exceeded

For the sys account, it is not even listing the streams:

nats stream ls  --creds ./nsc/nkeys/creds/DEMO/SYS/sys.creds
nats: error: could not list streams: context deadline exceeded, try --help. 

For other accounts, it is able to list the streams.

How do I resolve this system-account NACK issue?

leafnodes unable to look up host: no such host

Hi, can anyone help me? I am facing a "no such host" error when I follow the setup in the README with the command kubectl apply -f https://raw.githubusercontent.com/nats-io/k8s/master/nats-server/nats-js-leaf.yml

[8] 2021/08/19 11:14:52.225665 [INF] Starting nats-server
[8] 2021/08/19 11:14:52.225707 [INF] Version: 2.3.5-beta.2
[8] 2021/08/19 11:14:52.225711 [INF] Git: [a5afa867]
[8] 2021/08/19 11:14:52.225723 [INF] Name: NBLIEZQN6UJEWD6Y7BJILKGCXSMW7YB7O72MM5VH3UTABMRDQGK3QECF
[8] 2021/08/19 11:14:52.225729 [INF] Node: ciFh09UR
[8] 2021/08/19 11:14:52.225732 [INF] ID: NBLIEZQN6UJEWD6Y7BJILKGCXSMW7YB7O72MM5VH3UTABMRDQGK3QECF
[8] 2021/08/19 11:14:52.225742 [INF] Using configuration file: /etc/nats-config/nats.conf
[8] 2021/08/19 11:14:52.227071 [INF] Starting JetStream
[8] 2021/08/19 11:14:52.229306 [INF] (JetStream ASCII art banner)
[8] 2021/08/19 11:14:52.229329 [INF] https://docs.nats.io/jetstream
[8] 2021/08/19 11:14:52.229332 [INF]
[8] 2021/08/19 11:14:52.229334 [INF] ---------------- JETSTREAM ----------------
[8] 2021/08/19 11:14:52.229343 [INF] Max Memory: 5.46 GB
[8] 2021/08/19 11:14:52.229347 [INF] Max Storage: 953.67 MB
[8] 2021/08/19 11:14:52.229350 [INF] Store Directory: "/data/jetstream/store/jetstream"
[8] 2021/08/19 11:14:52.229353 [INF] -------------------------------------------
[8] 2021/08/19 11:14:52.230023 [INF] Starting http monitor on 0.0.0.0:8222
[8] 2021/08/19 11:14:52.230096 [INF] Listening for leafnode connections on 0.0.0.0:7422
[8] 2021/08/19 11:14:52.230808 [INF] Listening for client connections on 0.0.0.0:4222
[8] 2021/08/19 11:14:52.230943 [INF] Server is ready
[8] 2021/08/19 11:14:52.244034 [ERR] Error trying to connect as leafnode to remote server "nats-js:7422" (attempt 1): lookup for host "nats-js": lookup nats-js on 10.0.0.10:53: no such host
