lightkube's Issues

Need simpler way to remove labels.

The current method of removing a label is somewhat convoluted:

# '/' inside the label key must be escaped as '~1' (JSON Pointer, RFC 6901)
label_to_remove = label.replace('/', '~1')
patch = [{"op": "remove", "path": label_to_remove}]

client.patch(
    res=ServiceAccount,
    name=resource_name,
    namespace=namespace,
    obj=patch,
    patch_type=PatchType.JSON,
)

The request is to add a cleaner, dedicated API to remove a label from a resource, something akin to:

client.remove_label(
    res=ServiceAccount,
    name=resource_name,
    namespace=namespace,
    label=label_full_path_unmodified,
)

Along the same lines, a dedicated API to set a label on a resource would also be useful, e.g. for labeling a service account:

client.set_label(
    res=ServiceAccount,
    name=resource_name,
    namespace=namespace,
    label=label_full_path_unmodified,
)
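For reference, the escaping rules come from JSON Pointer (RFC 6901): `~` becomes `~0` and `/` becomes `~1`, in that order. A small helper sketching what such a dedicated API might wrap (the helper name is hypothetical, not part of lightkube):

```python
def remove_label_patch(label_key: str) -> list:
    """Build a JSON-patch list that removes `label_key` from .metadata.labels.

    Per RFC 6901, '~' must be escaped first (as '~0'), then '/' (as '~1'),
    so a key like 'example.com/role' becomes 'example.com~1role'.
    """
    escaped = label_key.replace("~", "~0").replace("/", "~1")
    return [{"op": "remove", "path": f"/metadata/labels/{escaped}"}]
```

The returned list can then be passed as `obj=` together with `patch_type=PatchType.JSON`, as in the snippet above.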

Apply-like behaviour that can create or patch existing objects

It would be nice to have a function similar to kubectl apply that lets you both create and (if something already exists) patch objects via client.apply(objs). I don't think there's anything that does this natively - do you have this implemented or planned? If not, do you have interest and an opinion on how it should function?
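One way such a `client.apply` could behave is "create, and fall back to patch on conflict". A rough sketch, independent of lightkube's actual API (the 409 detection via a `status_code` attribute is a simplification; lightkube raises `ApiError` with the status embedded):

```python
def create_or_patch(client, obj, patch_type=None):
    """Create `obj`; if the server reports it already exists (HTTP 409),
    patch the existing object instead, mimicking `kubectl apply`."""
    try:
        return client.create(obj)
    except Exception as exc:
        # Simplified conflict check; a real implementation would inspect
        # the API error's status code properly.
        if getattr(exc, "status_code", None) != 409:
            raise
        return client.patch(type(obj), obj.metadata.name, obj, patch_type=patch_type)
```

Note that server-side apply (a single PATCH with an apply patch type) would be the more faithful equivalent of kubectl apply, since create-then-patch is not atomic.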

Getting TypeError: send() got an unexpected keyword argument 'timeout' trying simple tasks

I get send() got an unexpected keyword argument 'timeout' when trying to patch or list resources; other operations are likely affected as well.

Possible cause

This behaviour started today and might be related to the latest httpx release 0.20.0.

Steps to reproduce

This happens after installing the lightkube pypi package, letting pip solve dependencies.

$ pip install lightkube --no-cache

Then try listing nodes

Python 3.8.10 (default, Sep 28 2021, 16:10:42) 
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from lightkube import Client
>>> from lightkube.resources.core_v1 import Node
>>> client = Client()
>>> for node in client.list(Node):
...     print(node.metadata.name)
... 
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.8/dist-packages/lightkube/core/generic_client.py", line 239, in list
    resp = self.send(req)
  File "/usr/local/lib/python3.8/dist-packages/lightkube/core/generic_client.py", line 204, in send
    return self._client.send(req, stream=stream, timeout=self._watch_timeout if stream else None)
TypeError: send() got an unexpected keyword argument 'timeout'

Workaround

I work around this issue by downgrading httpx to 0.18.1.

My setup

Ubuntu 20.04
Python 3.8.10
pip 21.3
microk8s 1.21/stable

Fix syntax warnings in Python 3.12

$ uv pip install lightkube
Resolved 10 packages in 520ms
Downloaded 2 packages in 89ms
Installed 2 packages in 3ms
 + lightkube==0.15.2
 + lightkube-models==1.30.0.7

$ ipython
Python 3.12.3 (main, Apr 10 2024, 05:33:47) [GCC 13.2.0]
Type 'copyright', 'credits' or 'license' for more information
IPython 8.23.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import lightkube
.../.venv/lib/python3.12/site-packages/lightkube/models/core_v1.py:3702: SyntaxWarning: invalid escape sequence '\S'
  """PodSpec is a description of a pod.
.../.venv/lib/python3.12/site-packages/lightkube/models/apiextensions_v1.py:363: SyntaxWarning: invalid escape sequence '\d'
  """JSONSchemaProps is a JSON-Schema following Specification Draft 4

Better integration of Custom Resources

When working with Custom Resources, it is sometimes useful to have the full power of a defined model, just like when working with "regular" resources. From what I can tell, I can define my custom resources by creating a model and a resource in the same way that the regular resources are created:

# in my models module
@dataclass
class OpenSearch(DataclassDictMixIn):
    apiVersion: 'str' = None
    kind: 'str' = None
    metadata: 'meta_v1.ObjectMeta' = None
    spec: 'OpenSearchSpec' = None
    status: 'OpenSearchStatus' = None


# in my resources module (hence the `models.` prefix below)
class OpenSearch(res.NamespacedResourceG, models.OpenSearch):
    _api_info = res.ApiInfo(
        resource=res.ResourceDef('aiven.io', 'v1alpha1', 'OpenSearch'),
        plural='opensearches',
        verbs=['delete', 'deletecollection', 'get', 'global_list', 'global_watch', 'list', 'patch', 'post', 'put', 'watch'],
    )

However, while this allows working with them directly, it fails when trying to load them from YAML, because the load mechanism assumes it's either a generic resource (with no proper model), or a resource defined in a submodule of lightkube.resources.

Have you considered making it possible to find a resource definition by looking at subclasses1 of NamespacedResourceG?
That way, I could load my resources from YAML (as long as I have imported my resource/model definitions before calling load_from_yaml) and get fully typed objects from the loader.

Footnotes

  1. https://docs.python.org/3/library/stdtypes.html#class.__subclasses__ ↩
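A sketch of the proposed lookup, using stand-in classes (none of these names are lightkube's actual internals): walk subclasses recursively and match on a (group, version, kind) triple.

```python
def iter_subclasses(cls):
    """Yield all (transitive) subclasses of `cls`."""
    for sub in cls.__subclasses__():
        yield sub
        yield from iter_subclasses(sub)

class NamespacedResourceG:
    """Stand-in for lightkube's generic namespaced-resource base class."""
    _gvk = None  # (group, version, kind), set by concrete resources

def find_resource(base, group, version, kind):
    """Locate a registered resource class by its group/version/kind."""
    for cls in iter_subclasses(base):
        if cls._gvk == (group, version, kind):
            return cls
    return None

class OpenSearch(NamespacedResourceG):
    _gvk = ("aiven.io", "v1alpha1", "OpenSearch")
```

With this, `find_resource(NamespacedResourceG, "aiven.io", "v1alpha1", "OpenSearch")` returns the OpenSearch class, provided the module defining it was imported first, which is exactly the import-before-load caveat described above.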

`codecs.load_all_yaml(...)` doesn't handle objects that are of kind `*List`

Given a file like this, defined by a ServiceAccountList:

apiVersion: v1
kind: ServiceAccountList
metadata: {}
items:
- apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: test-sa
    namespace: kube-system

it can't be handled by codecs.load_all_yaml(...).

I think I could patch something fairly simple to read through the items of the list so they'd be parsed just fine. I'll likely raise a PR for this.
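The patch could be as simple as unwrapping any *List document into its items before resource lookup; a sketch on plain dicts (helper name hypothetical):

```python
def flatten_list_kinds(docs):
    """Expand any `kind: *List` document into its `items`, leaving other
    documents untouched."""
    out = []
    for doc in docs:
        if doc.get("kind", "").endswith("List"):
            out.extend(doc.get("items", []))
        else:
            out.append(doc)
    return out
```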

Error: could not determine a constructor for the tag 'tag:yaml.org,2002:python/tuple'

I get this error from the code below:

config = KubeConfig.from_file("/tmp/kubeconfig")

The kubeconfig:

{'apiVersion': ('v1',), 'clusters': [{'cluster': {'server': '', 'certificate-authority-data': ''}, 'name': 'kubernetes'}], 'contexts': [{'context': {'cluster': 'kubernetes', 'user': 'aws'}, 'name': 'aws'}], 'current-context': 'aws', 'Kind': 'config', 'users': [{'name': 'aws', 'user': 'lambda'}]}

Note that apiVersion here is a Python tuple: the file was presumably written with yaml.dump on a dict containing a tuple, which produces a !!python/tuple tag that the safe YAML loader used by KubeConfig.from_file cannot construct. Regenerating the kubeconfig with plain scalars avoids the error.

Feedback on comparison with kr8s

Hey there 👋. I'm the author of kr8s, another Python library for Kubernetes.

The goal of kr8s is a little different from lightkube's: it's intended to be a batteries-included, kubectl-inspired Python library that has a very shallow learning curve and reduces boilerplate. I built it to solve some specific challenges we were having in dask-kubernetes with the official kubernetes library (and especially kubernetes_asyncio).

We are at the point in development where dask-kubernetes has been fully migrated over to it and the API is getting pretty stable, so I decided to take a step back and assess how well we are doing in terms of hitting our design goals. In doing so I wrote up a blog post comparing the kr8s API with other Python Kubernetes libraries including lightkube.

The comparison is intended to check how well kr8s is meeting its goals and to discuss the tradeoffs we make to achieve them compared with other libraries, rather than to show which is "better".

I wanted to stop by here and open an issue to ask for feedback on the lightkube description and examples I've included in the post. I want to make sure I'm highlighting the strengths of lightkube accurately so that if folks do use the post to choose between the libraries they can make an informed decision based on their own requirements.

If you do have any feedback feel free to reply here, send me an email to [email protected] or open a PR directly on the blog post.

Thank you!

Unable to gather a list of daemonsets

from lightkube import Client
from lightkube.models.apps_v1 import DaemonSet

client = Client(namespace="default", field_manager="lightkube")
fetched = list(client.list(DaemonSet, namespace="kube-system"))

this yields a Traceback

Traceback (most recent call last):
  File "/home/user/.tox/unit/lib/python3.9/site-packages/IPython/core/interactiveshell.py", line 3340, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-6-f1919e43688d>", line 1, in <cell line: 1>
    list(client.list(
  File "/home/user/.tox/unit/lib/python3.9/site-packages/lightkube/core/client.py", line 134, in list
    br = self._client.prepare_request(
  File "/home/user/.tox/unit/lib/python3.9/site-packages/lightkube/core/generic_client.py", line 118, in prepare_request
    api_info = r.api_info(res)
  File "/home/user/.tox/unit/lib/python3.9/site-packages/lightkube/core/resource.py", line 25, in api_info
    return res._api_info
AttributeError: type object 'DaemonSet' has no attribute '_api_info'

Surely this is an obvious mistake on my part?

Getting "TypeError: 'NoneType' object is not iterable"

Hi,
I followed the example usage documentation (https://lightkube.readthedocs.io/en/stable/#usage) and tried to check active deployments using:

client.get(Deployment, name='deployment-name', namespace='devtest')

I get the following error:

File ~/anaconda3/envs/py38/lib/python3.8/site-packages/lightkube/core/client.py:108, in Client.get(self, res, name, namespace)
     99 def get(self, res, name, *, namespace=None):
    100     """Return an object
    101
    102     **parameters**
   (...)
    106     * **namespace** - *(optional)* Name of the namespace containing the object (Only for namespaced resources).
    107     """
--> 108     return self._client.request("get", res=res, name=name, namespace=namespace)

File ~/anaconda3/envs/py38/lib/python3.8/site-packages/lightkube/core/generic_client.py:244, in GenericSyncClient.request(self, method, res, obj, name, namespace, watch, headers, params)
    242 br = self.prepare_request(method, res, obj, name, namespace, watch, headers=headers, params=params)
    243 req = self.build_adapter_request(br)
--> 244 resp = self.send(req)
    245 return self.handle_response(method, resp, br)

File ~/anaconda3/envs/py38/lib/python3.8/site-packages/lightkube/core/generic_client.py:216, in GenericSyncClient.send(self, req, stream)
    215 def send(self, req, stream=False):
--> 216     return self._client.send(req, stream=stream)

File ~/anaconda3/envs/py38/lib/python3.8/site-packages/httpx/_client.py:902, in Client.send(self, request, stream, auth, follow_redirects)
    894 follow_redirects = (
    895     self.follow_redirects
    896     if isinstance(follow_redirects, UseClientDefault)
    897     else follow_redirects
    898 )
    900 auth = self._build_request_auth(request, auth)
--> 902 response = self._send_handling_auth(
    903     request,
    904     auth=auth,
    905     follow_redirects=follow_redirects,
    906     history=[],
    907 )
    908 try:
    909     if not stream:

File ~/anaconda3/envs/py38/lib/python3.8/site-packages/httpx/_client.py:927, in Client._send_handling_auth(self, request, auth, follow_redirects, history)
    925 auth_flow = auth.sync_auth_flow(request)
    926 try:
--> 927     request = next(auth_flow)
    929     while True:
    930         response = self._send_handling_redirects(
    931             request,
    932             follow_redirects=follow_redirects,
    933             history=history,
    934         )

File ~/anaconda3/envs/py38/lib/python3.8/site-packages/lightkube/config/client_adapter.py:82, in ExecAuth.sync_auth_flow(self, request)
     79     if response.status_code != 401:
     80         return
---> 82 command, env = self._prepare()
     83 output = sync_check_output(command, env=env)
     84 token = json.loads(output)["status"]["token"]

File ~/anaconda3/envs/py38/lib/python3.8/site-packages/lightkube/config/client_adapter.py:69, in ExecAuth._prepare(self)
     67     raise ConfigError(f"auth exec api version {exec.apiVersion} not implemented")
     68 cmd_env_vars = dict(os.environ)
---> 69 cmd_env_vars.update((var.name, var.value) for var in exec.env)
     70 # TODO: add support for passing KUBERNETES_EXEC_INFO env var
     71 # https://github.com/kubernetes/community/blob/master/contributors/design-proposals/auth/kubectl-exec-plugins.md
     72 args = exec.args if exec.args else []

TypeError: 'NoneType' object is not iterable

Is there some example where it mentions how to set required environment variables?

Cannot set Client trust_env=False

The httpx client created here doesn't allow overriding the trust_env keyword option.

Please consider accepting some generic **kwargs for configuring the client; httpx.Client will probably continue to expand its constructor here.

`apply`ing to an aggregated clusterrole fails with 409 conflict on `.rules`

Is there a way in lightkube to apply an aggregated ClusterRole that includes an empty rules: [] without forcing it? Applying to an aggregated ClusterRole with anything in rules results in a 409 conflict, because the control plane maintains the rules list. I'd rather avoid using force so I don't suppress other errors, but I can't think of anything else, apart from adding some custom logic before calling .apply to remove the rules entirely.

This python snippet demonstrates the issue, generating a 409 conflict on rules when a change is applied without force=True:

import time

import lightkube
from lightkube.codecs import load_all_yaml

# Note: Running this will leave two clusterroles (aggregated-clusterrole and aggregate-clusterrole)
# in your cluster.  Running it a second time will fail faster (on the first apply of
# aggregate-clusterrole, because the aggregate-clusterrole will already exist).

c = lightkube.Client(field_manager="myself")

aggregated_clusterrole_yaml = """
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: aggregated-clusterrole
  labels:
    test.com/aggregate-to-view: "true"
rules: 
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
"""

aggregated_clusterrole = load_all_yaml(aggregated_clusterrole_yaml)[0]
c.apply(aggregated_clusterrole)


aggregate_clusterrole_yaml = """
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: aggregate-clusterrole
aggregationRule:
  clusterRoleSelectors:
    - matchLabels:
        test.com/aggregate-to-view: "true"
rules: [] # Rules are automatically filled in by the controller manager.
"""

aggregate_clusterrole = load_all_yaml(aggregate_clusterrole_yaml)[0]

# Creates aggregate_clusterrole
c.apply(aggregate_clusterrole)
# let the aggregation manager catch up.  if we don't do this, we can sometimes do the below applies before the aggregation is completed
time.sleep(1)  

# Force apply onto existing aggregate_clusterrole (works, as it ignores the 409 conflict error)
c.apply(aggregate_clusterrole, force=True)

# Applies onto existing aggregate_clusterrole (and fails due to a 409 conflict on .rules)
c.apply(aggregate_clusterrole)
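The pre-.apply custom logic mentioned above can be kept small: drop rules from any ClusterRole that carries an aggregationRule, since the control plane owns that field. A sketch on the plain-dict form of the manifest (helper name hypothetical):

```python
def strip_aggregated_rules(manifest: dict) -> dict:
    """Return a copy of `manifest` without `rules` if it is an aggregated
    ClusterRole (i.e. it has an aggregationRule); otherwise return it as-is."""
    if manifest.get("kind") == "ClusterRole" and "aggregationRule" in manifest:
        return {k: v for k, v in manifest.items() if k != "rules"}
    return manifest
```

Applied to each document before load_all_yaml/apply, this keeps the field manager from ever claiming .rules, so no 409 arises and force stays off.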

Inconsistent `_lazy_values` dict once the dataclass's attribute is set

When trying to update a Deployment's topologySpreadConstraints, I unearthed a situation where an object of type lightkube.models.core_v1.PodSpec had a value set for its topologySpreadConstraints attribute and, at the same time, a value in _lazy_values set as

{"topologySpreadConstraints": None}

What's worse, it wasn't consistent, and debugging the issue might be hiding it. Whatever it is, it seems that when setting a value for an attribute, the _lazy_values shadow value should be removed but isn't. As a result, serializing the Deployment into the JSON blob sent to the API didn't include these topologySpreadConstraints.

As a workaround i'm doing this:

# pop the stale shadow value before assigning
obj.spec.template.spec._lazy_values.pop("topologySpreadConstraints", None)
obj.spec.template.spec.topologySpreadConstraints = [
    # ... whatever I'd like here ...
]

but it feels pretty gross

Using codecs.load_all_yaml with generic resources

I added the following generic resource for a CRD that is present in my cluster:

SparkApp = create_namespaced_resource(
    group="sparkoperator.k8s.io",
    version="v1beta2",
    kind="SparkApplication",
    plural="sparkapplications",
)

and then attempted to create an object by loading from this yaml file using:

with open('../../examples/spark-pi.yaml') as f:
    for obj in codecs.load_all_yaml(f):
        client.create(obj, namespace="spark")

But I get an error:

lightkube.core.exceptions.LoadResourceError: No module named 'lightkube.resources.sparkoperator_v1beta2'

However, I can create it by reading the YAML into a dict, creating a SparkApp object from that dict, and then using client.create:

with open('../../examples/spark-pi.yaml', 'r') as stream:
    try:
        parsed_yaml = yaml.safe_load(stream)
        app = SparkApp(parsed_yaml)
        client.create(app, namespace="spark")
    except yaml.YAMLError as exc:
        print(exc)

Should I be able to create this using load_all_yaml? It seems that load_all_yaml is not aware of my generic resource.

Problems with the list method and custom resources

I'm experiencing some problems using the list method with custom resources. Below is a minimal working example using the CronTab example from the k8s docs.

First create the CRD defined here

kubectl apply -f resourcedefinition.yaml

Then create a custom object defined here

kubectl apply -f my-crontab.yaml

Verify it exists:

 kubectl get crontab

I can then use the get method on this newly created object just fine:

from lightkube.generic_resource import create_namespaced_resource

cron_resource = create_namespaced_resource(
    group="stable.example.com",
    version="v1",
    kind="CronTab",
    plural="crontabs",
    verbs=None,
)
cron = client.get(cron_resource, name="my-new-cron-object", namespace="default")
print(cron)

However if I try to list objects of this resource:

cron_list = client.list(cron_resource, namespace="default")
for obj in cron_list:
    print(obj)

Then I end up getting an infinite loop, when there should just be one object.

Another issue is when no objects of the requested resource exist:

cron_list = client.list(cron_resource, namespace="nonexistent-namespace")
for obj in cron_list:
    print(obj)

Then the loop just hangs.

The list method works fine whenever its used with a lightkube resource:

cm_list = client.list(ConfigMap, namespace="default")
for obj in cm_list:
    print(obj)

so it seems to be specific to custom resources. Let me know if you need any other information from me. Being able to list custom resources is very important to me right now, and I'd love to help get to the bottom of it.

Add means to compare quantities

Background

K8s converts user input quantities to "canonical form":

Before serializing, Quantity will be put in "canonical form". This means that Exponent/suffix will be adjusted up or down (with a corresponding increase or decrease in Mantissa) such that: a. No precision is lost b. No fractional digits will be emitted c. The exponent (or suffix) is as large as possible. The sign will be omitted unless the number is negative.

Examples: 1.5 will be serialized as "1500m"; 1.5Gi will be serialized as "1536Mi".

Additional examples:

User input                        K8s representation
{"memory": "0.9Gi"}               {"memory": "966367641600m"}
{"cpu": "0.30000000000000004"}    {"cpu": "301m"}

Use case

When patching a statefulset's resource limits, it would be handy to be able to compare a quantity's setpoint

statefulset.spec.template.spec.containers[i].resources

to the active podspec

pod.spec.containers[i].resources

Currently, two equivalent ResourceRequirements instances could differ under == comparison and require manual interconversion.

Additional thoughts

  • The kubernetes Python client has a parse_quantity function, but having both lightkube and kubernetes as deps is not ideal.
  • bitmath doesn't support millis and is awkward to "abuse" for CPU values.
  • Implementing comparisons for class ResourceRequirements might be awkward.
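For what it's worth, normalizing quantities before comparison only needs a suffix table; a minimal sketch (not lightkube API) using Decimal to avoid float precision loss:

```python
from decimal import Decimal

# Decimal (SI) and binary (IEC) suffixes from the K8s quantity grammar.
_SUFFIXES = {
    "m": Decimal("0.001"),
    "k": Decimal(10) ** 3, "M": Decimal(10) ** 6, "G": Decimal(10) ** 9,
    "T": Decimal(10) ** 12, "P": Decimal(10) ** 15, "E": Decimal(10) ** 18,
    "Ki": Decimal(2) ** 10, "Mi": Decimal(2) ** 20, "Gi": Decimal(2) ** 30,
    "Ti": Decimal(2) ** 40, "Pi": Decimal(2) ** 50, "Ei": Decimal(2) ** 60,
}

def parse_quantity(quantity: str) -> Decimal:
    """Convert a K8s quantity string ('1500m', '0.9Gi', '2') to a Decimal."""
    # Try two-character suffixes before one-character ones.
    for suffix in sorted(_SUFFIXES, key=len, reverse=True):
        if quantity.endswith(suffix):
            return Decimal(quantity[: -len(suffix)]) * _SUFFIXES[suffix]
    return Decimal(quantity)
```

Two ResourceRequirements setpoints can then be compared by mapping parse_quantity over their limits/requests dicts, so "0.9Gi" and "966367641600m" compare equal.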

API change in k8s 1.23 breaking StatefulSetStatus?

I might have this wrong, as I don't fully understand how lightkube and lightkube-models interact, but...

I've recently found this code that worked with lightkube-models==1.21.0.4 (and I believe, 1.22.X) now fails:

import lightkube
from lightkube.resources.apps_v1 import StatefulSet

client = lightkube.Client()

found_resources = list(client.list(StatefulSet))
# Will raise with: {TypeError}__init__() missing 1 required positional argument: 'availableReplicas'
found_resources[0].status

I think it is because, for class StatefulSetStatus, availableReplicas has changed from an optional kwarg (in 1.21.0.4) to a required positional arg (in 1.23.X). Am I configuring something wrong between lightkube and lightkube-models, or is this an actual bug?

Opinion on Alternative Public Forks

@gtsystem I wanted to start this issue by thanking you for the maintenance of an amazing package that helps those of us in @charmed-kubernetes to create easier-to-maintain projects

As more projects in @charmed-kubernetes make use of lightkube, we hope to safeguard our dependence on it by publicly forking the project and keeping the fork in sync with your master branch. In the unfortunate event that we must patch lightkube quickly and rebuild one of our projects before your upstream merge, we need a repository from which to build hot-patches.

I wanted to get your blessing for this use case, and let you know we have no plans for creating an alternative project based on your work. I'll encourage my team to use this new fork to raise PRs into this upstream source. We may also create release branches in this fork in the event we need to backport a hot-patch into an older release of lightkube.

Thank you so much, please let me know if this will be an issue.

Unnecessary redundancy in generic resource api?

I wonder if I'm misunderstanding something in the example below.

The following creates a generic resource using a CRD that has been deployed in k8s:

client = lightkube.Client()
Profile = create_global_resource(group="kubeflow.org", version="v1", kind="Profile", plural="profiles")

profile_name = "newuser"
user_name = "[email protected]"
profile_metadata = ObjectMeta(name=profile_name)
profile_spec = {
    "owner": {
        "kind": "User",
        "name": user_name,
    }
}
profile = Profile(apiVersion="kubeflow.org/v1", metadata=profile_metadata, spec=profile_spec, kind="Profile")
print(profile)
client.create(profile, profile_name)

It seemed odd that profile = Profile(...) needs apiVersion and kind specified; shouldn't these be apparent from the definition of Profile?

How to delete a pod immediately?

The standard library has this functionality:

api_ = client.CoreV1Api()
api_.delete_namespaced_pod('name', 'default', grace_period_seconds=0)

The kubectl analog: kubectl delete pod ... --grace-period=0
