kubetest's Introduction

kubetest

Kubetest is a pytest plugin that makes it easier to manage a Kubernetes cluster within your integration tests. While you can use the Kubernetes Python client directly, this plugin provides some cluster and object management on top of that so you can spend less time setting up and tearing down tests and more time actually writing your tests. In particular, this plugin is useful for testing your Kubernetes infrastructure (e.g., ensure it deploys and behaves correctly) and for testing disaster recovery scenarios (e.g. delete a pod or deployment and inspect the aftermath).

Features:

  • Simple API for common cluster interactions.
  • Uses the Kubernetes Python client as the backend, so more complex custom actions are possible.
  • Load Kubernetes manifest YAMLs into their Kubernetes models.
  • Each test is run in its own namespace and the namespace is created and deleted automatically.
  • Detailed logging to help debug error cases.
  • Wait functions for object readiness and for object deletion.
  • Get container logs and search for expected logging output.
  • Plugin-managed RBAC permissions at test-case granularity using pytest markers.

For more information, see the kubetest documentation.

Installation

This plugin can be installed with pip:

pip install kubetest

Note that the kubetest package has entrypoint hooks defined in its setup.py which allow it to be automatically made available to pytest. This means that it will run whenever pytest is run. Since kubetest expects a cluster to be set up and to be given configuration for that cluster, pytest will fail if those are not present. It is therefore recommended to only install kubetest in a virtual environment or other managed environment, such as a CI pipeline, where you can assure that cluster access and configuration are available.

Documentation

See the kubetest documentation page for details on command line usage, available fixtures and markers, and general pytest integration.

Feedback

Feedback for kubetest is greatly appreciated! If you experience any issues, find the documentation unclear, have feature requests, or just have questions about it, we'd love to know. Feel free to open an issue for any feedback you may have.

License

kubetest is released under the GPL-3.0 license.

kubetest's Issues

support "apply" functionality

Following #66, we could have an apply() method that works similarly to kubectl apply: it would load all of the manifests in a directory and create them on the cluster at once. This would simplify test patterns such as

    bb_secret = kube.load_secret(manifest_path('blackbox.secret.yaml'))
    bb_configmap = kube.load_configmap(manifest_path('blackbox.configmap.yaml'))
    bb_deployment = kube.load_deployment(manifest_path('blackbox.deployment.yaml'))

    kube.create(bb_secret)
    kube.create(bb_configmap)
    kube.create(bb_deployment)

to

    kube.apply('./manifests')
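
A rough sketch of what such a helper could do is below. It is hypothetical, not part of the kubetest API today: load_path() is the existing kubetest.manifest helper that loads every YAML manifest in a directory, and we assume here that kube.create() accepts the objects it returns.

    # Hypothetical sketch of kube.apply(); not the actual kubetest implementation.
    from kubetest.manifest import load_path

    def apply_directory(kube, manifest_dir):
        """Load every manifest in manifest_dir and create the objects on the cluster."""
        objs = load_path(manifest_dir)
        for obj in objs:
            kube.create(obj)  # assumes kube.create() accepts the loaded objects
        return objs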

Error when trying to use markers

I just tried to use markers:

import pytest

@pytest.mark.applymanifests('service-account.yaml')
def test_deployment(kube):
    """Example test case for creating and deleting a deployment."""
    
    d = kube.load_deployment('deployment.yaml')
    
    d.create()
    d.wait_until_ready(timeout=30)
    
    pods = d.get_pods()
    print(pods)

    print("hello")

And this code results in the following exception:

============================================= test session starts =============================================
platform darwin -- Python 3.7.2, pytest-4.3.0, py-1.8.0, pluggy-0.9.0
kubetest config file: default
rootdir: <>/kamus/tests/crd-controller, inifile:
plugins: kubetest-0.0.3
collected 0 items / 1 errors                                                                                  
2019-03-17 09:37:50,633 WARNING Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x111807ba8>: Failed to establish a new connection: [Errno 61] Connection refused')': /api/v1/namespaces
2019-03-17 09:37:50,635 WARNING Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x111807b38>: Failed to establish a new connection: [Errno 61] Connection refused')': /api/v1/namespaces
2019-03-17 09:37:50,636 WARNING Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x111807d68>: Failed to establish a new connection: [Errno 61] Connection refused')': /api/v1/namespaces
Failed to clean up kubetest artifacts from cluster on keyboard interrupt. You may need to manually remove items from your cluster. Check for namespaces with the "kubetest-" prefix and cluster role bindings with the "kubetest:" prefix. (HTTPSConnectionPool(host='localhost', port=443): Max retries exceeded with url: /api/v1/namespaces (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x111807278>: Failed to establish a new connection: [Errno 61] Connection refused')))

=================================================== ERRORS ====================================================
__________________________________________ ERROR collecting test.py ___________________________________________
test.py:1: in <module>
    @pytest.mark.applymanifests('service-account.yaml')
E   NameError: name 'pytest' is not defined
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
=========================================== 1 error in 0.51 seconds ===========================================
(env) omerl1-mac:crd-controller omerl$ pytest test.py 
============================================= test session starts =============================================
platform darwin -- Python 3.7.2, pytest-4.3.0, py-1.8.0, pluggy-0.9.0
kubetest config file: default
rootdir: /Users/omerl/dev/kamus/tests/crd-controller, inifile:
plugins: kubetest-0.0.3
collected 1 item                                                                                              

test.py EE                                                                                              [100%]

=================================================== ERRORS ====================================================
______________________________________ ERROR at setup of test_deployment ______________________________________

item = <Function test_deployment>

    def pytest_runtest_setup(item):
        """Run setup actions to prepare the test case.
    
        See Also:
            https://docs.pytest.org/en/latest/reference.html#_pytest.hookspec.pytest_runtest_setup
        """
        # Register a new test case with the manager and setup the test case state.
        test_case = manager.new_test(
            node_id=item.nodeid,
            test_name=item.name,
        )
    
        # FIXME (etd) - not sure this is really what we want to do. does it make sense
        # to entirely disable the plugin just be specifying the disable flag? probably..
        # but there must be a better way than adding this check (perhaps unregistering the
        # plugin in the pytest_configure hook?)
        disabled = item.config.getoption('kube_disable')
        if not disabled:
    
            # Register test case state based on markers on the test case
            test_case.register_rolebindings(
                *markers.rolebindings_from_marker(item, test_case.ns)
            )
            test_case.register_clusterrolebindings(
                *markers.clusterrolebindings_from_marker(item, test_case.ns)
            )
    
            # Apply manifests for the test case, if any are specified.
>           markers.apply_manifest_from_marker(item, test_case)

env/lib/python3.7/site-packages/kubetest/plugin.py:197: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
env/lib/python3.7/site-packages/kubetest/markers.py:94: in apply_manifest_from_marker
    objs = load_path(dir_path)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

path = '/Users/omerl/dev/kamus/tests/crd-controller/service-account.yaml'

    def load_path(path):
        """Load all of the Kubernetes YAML manifest files found in the
        specified directory path.
    
        Args:
            path (str): The path to the directory of manifest files.
    
        Returns:
            list: A list of all the Kubernetes objects loaded from
                manifest file.
    
        Raises:
            ValueError: The provided path is not a directory.
        """
        if not os.path.isdir(path):
>           raise ValueError('{} is not a directory'.format(path))
E           ValueError: /Users/omerl/dev/kamus/tests/crd-controller/service-account.yaml is not a directory

env/lib/python3.7/site-packages/kubetest/manifest.py:53: ValueError
____________________________________ ERROR at teardown of test_deployment _____________________________________

self = <urllib3.connection.VerifiedHTTPSConnection object at 0x112239f98>

    def _new_conn(self):
        """ Establish a socket connection and set nodelay settings on it.
    
        :return: New socket connection.
        """
        extra_kw = {}
        if self.source_address:
            extra_kw['source_address'] = self.source_address
    
        if self.socket_options:
            extra_kw['socket_options'] = self.socket_options
    
        try:
            conn = connection.create_connection(
>               (self._dns_host, self.port), self.timeout, **extra_kw)

env/lib/python3.7/site-packages/urllib3/connection.py:159: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('localhost', 443), timeout = None, source_address = None, socket_options = [(6, 1, 1)]

    def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
                          source_address=None, socket_options=None):
        """Connect to *address* and return the socket object.
    
        Convenience function.  Connect to *address* (a 2-tuple ``(host,
        port)``) and return the socket object.  Passing the optional
        *timeout* parameter will set the timeout on the socket instance
        before attempting to connect.  If no *timeout* is supplied, the
        global default timeout setting returned by :func:`getdefaulttimeout`
        is used.  If *source_address* is set it must be a tuple of (host, port)
        for the socket to bind as a source address before making the connection.
        An host of '' or port 0 tells the OS to use the default.
        """
    
        host, port = address
        if host.startswith('['):
            host = host.strip('[]')
        err = None
    
        # Using the value from allowed_gai_family() in the context of getaddrinfo lets
        # us select whether to work with IPv4 DNS records, IPv6 records, or both.
        # The original create_connection function always returns all records.
        family = allowed_gai_family()
    
        for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
            af, socktype, proto, canonname, sa = res
            sock = None
            try:
                sock = socket.socket(af, socktype, proto)
    
                # If provided, set socket level options before connecting.
                _set_socket_options(sock, socket_options)
    
                if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
                    sock.settimeout(timeout)
                if source_address:
                    sock.bind(source_address)
                sock.connect(sa)
                return sock
    
            except socket.error as e:
                err = e
                if sock is not None:
                    sock.close()
                    sock = None
    
        if err is not None:
>           raise err

env/lib/python3.7/site-packages/urllib3/util/connection.py:80: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('localhost', 443), timeout = None, source_address = None, socket_options = [(6, 1, 1)]

    def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
                          source_address=None, socket_options=None):
        """Connect to *address* and return the socket object.
    
        Convenience function.  Connect to *address* (a 2-tuple ``(host,
        port)``) and return the socket object.  Passing the optional
        *timeout* parameter will set the timeout on the socket instance
        before attempting to connect.  If no *timeout* is supplied, the
        global default timeout setting returned by :func:`getdefaulttimeout`
        is used.  If *source_address* is set it must be a tuple of (host, port)
        for the socket to bind as a source address before making the connection.
        An host of '' or port 0 tells the OS to use the default.
        """
    
        host, port = address
        if host.startswith('['):
            host = host.strip('[]')
        err = None
    
        # Using the value from allowed_gai_family() in the context of getaddrinfo lets
        # us select whether to work with IPv4 DNS records, IPv6 records, or both.
        # The original create_connection function always returns all records.
        family = allowed_gai_family()
    
        for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
            af, socktype, proto, canonname, sa = res
            sock = None
            try:
                sock = socket.socket(af, socktype, proto)
    
                # If provided, set socket level options before connecting.
                _set_socket_options(sock, socket_options)
    
                if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
                    sock.settimeout(timeout)
                if source_address:
                    sock.bind(source_address)
>               sock.connect(sa)
E               ConnectionRefusedError: [Errno 61] Connection refused

env/lib/python3.7/site-packages/urllib3/util/connection.py:70: ConnectionRefusedError

During handling of the above exception, another exception occurred:

self = <urllib3.connectionpool.HTTPSConnectionPool object at 0x112239668>, method = 'DELETE'
url = '/api/v1/namespaces/kubetest-test-deployment-1552808334', body = '{}'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'Swagger-Codegen/8.0.1/python'}
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None), redirect = False
assert_same_host = False, timeout = None, pool_timeout = None, release_conn = True, chunked = False
body_pos = None
response_kw = {'preload_content': True, 'request_url': 'https://localhost/api/v1/namespaces/kubetest-test-deployment-1552808334'}
conn = None, release_this_conn = True, err = None, clean_exit = False
timeout_obj = <urllib3.util.timeout.Timeout object at 0x112239cf8>, is_new_proxy_conn = False

    def urlopen(self, method, url, body=None, headers=None, retries=None,
                redirect=True, assert_same_host=True, timeout=_Default,
                pool_timeout=None, release_conn=None, chunked=False,
                body_pos=None, **response_kw):
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.
    
        .. note::
    
           More commonly, it's appropriate to use a convenience method provided
           by :class:`.RequestMethods`, such as :meth:`request`.
    
        .. note::
    
           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.
    
        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)
    
        :param body:
            Data to send in the request body (useful for creating
            POST requests, see HTTPConnectionPool.post_url for
            more convenience).
    
        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.
    
        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.
    
            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.
    
            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.
    
        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
    
        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.
    
        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When False, you can
            use the pool on an HTTP proxy and request foreign hosts.
    
        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.
    
        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.
    
        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of
            ``response_kw.get('preload_content', True)``.
    
        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.
    
        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
    
        :param \\**response_kw:
            Additional parameters are passed to
            :meth:`urllib3.response.HTTPResponse.from_httplib`
        """
        if headers is None:
            headers = self.headers
    
        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
    
        if release_conn is None:
            release_conn = response_kw.get('preload_content', True)
    
        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)
    
        conn = None
    
        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1] <https://github.com/shazow/urllib3/issues/651>
        release_this_conn = release_conn
    
        # Merge the proxy headers. Only do this in HTTP. We have to copy the
        # headers dict so we can safely change it without those changes being
        # reflected in anyone else's copy.
        if self.scheme == 'http':
            headers = headers.copy()
            headers.update(self.proxy_headers)
    
        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None
    
        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False
    
        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)
    
        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)
    
            conn.timeout = timeout_obj.connect_timeout
    
            is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
            if is_new_proxy_conn:
                self._prepare_proxy(conn)
    
            # Make the request on the httplib connection object.
            httplib_response = self._make_request(conn, method, url,
                                                  timeout=timeout_obj,
                                                  body=body, headers=headers,
>                                                 chunked=chunked)

env/lib/python3.7/site-packages/urllib3/connectionpool.py:600: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <urllib3.connectionpool.HTTPSConnectionPool object at 0x112239668>
conn = <urllib3.connection.VerifiedHTTPSConnection object at 0x112239f98>, method = 'DELETE'
url = '/api/v1/namespaces/kubetest-test-deployment-1552808334'
timeout = <urllib3.util.timeout.Timeout object at 0x112239cf8>, chunked = False
httplib_request_kw = {'body': '{}', 'headers': {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'Swagger-Codegen/8.0.1/python'}}
timeout_obj = <urllib3.util.timeout.Timeout object at 0x112239c18>

    def _make_request(self, conn, method, url, timeout=_Default, chunked=False,
                      **httplib_request_kw):
        """
        Perform a request on a given urllib connection object taken from our
        pool.
    
        :param conn:
            a connection from one of our connection pools
    
        :param timeout:
            Socket timeout in seconds for the request. This can be a
            float or integer, which will set the same timeout value for
            the socket connect and the socket read, or an instance of
            :class:`urllib3.util.Timeout`, which gives you more fine-grained
            control over your timeouts.
        """
        self.num_requests += 1
    
        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = timeout_obj.connect_timeout
    
        # Trigger any extra validation we need to do.
        try:
>           self._validate_conn(conn)

env/lib/python3.7/site-packages/urllib3/connectionpool.py:343: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <urllib3.connectionpool.HTTPSConnectionPool object at 0x112239668>
conn = <urllib3.connection.VerifiedHTTPSConnection object at 0x112239f98>

    def _validate_conn(self, conn):
        """
        Called right before a request is made, after the socket is created.
        """
        super(HTTPSConnectionPool, self)._validate_conn(conn)
    
        # Force connect early to allow us to validate the connection.
        if not getattr(conn, 'sock', None):  # AppEngine might not have  `.sock`
>           conn.connect()

env/lib/python3.7/site-packages/urllib3/connectionpool.py:839: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <urllib3.connection.VerifiedHTTPSConnection object at 0x112239f98>

    def connect(self):
        # Add certificate verification
>       conn = self._new_conn()

env/lib/python3.7/site-packages/urllib3/connection.py:301: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <urllib3.connection.VerifiedHTTPSConnection object at 0x112239f98>

    def _new_conn(self):
        """ Establish a socket connection and set nodelay settings on it.
    
        :return: New socket connection.
        """
        extra_kw = {}
        if self.source_address:
            extra_kw['source_address'] = self.source_address
    
        if self.socket_options:
            extra_kw['socket_options'] = self.socket_options
    
        try:
            conn = connection.create_connection(
                (self._dns_host, self.port), self.timeout, **extra_kw)
    
        except SocketTimeout as e:
            raise ConnectTimeoutError(
                self, "Connection to %s timed out. (connect timeout=%s)" %
                (self.host, self.timeout))
    
        except SocketError as e:
            raise NewConnectionError(
>               self, "Failed to establish a new connection: %s" % e)
E           urllib3.exceptions.NewConnectionError: <urllib3.connection.VerifiedHTTPSConnection object at 0x112239f98>: Failed to establish a new connection: [Errno 61] Connection refused

env/lib/python3.7/site-packages/urllib3/connection.py:168: NewConnectionError

During handling of the above exception, another exception occurred:

item = <Function test_deployment>

    def pytest_runtest_teardown(item):
        """Run teardown actions to clean up the test client.
    
        See Also:
            https://docs.pytest.org/en/latest/reference.html#_pytest.hookspec.pytest_runtest_teardown
        """
        disabled = item.config.getoption('kube_disable')
        if not disabled:
>           manager.teardown(item.nodeid)

env/lib/python3.7/site-packages/kubetest/plugin.py:208: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
env/lib/python3.7/site-packages/kubetest/manager.py:343: in teardown
    test_case.teardown()
env/lib/python3.7/site-packages/kubetest/manager.py:185: in teardown
    self.namespace.delete()
env/lib/python3.7/site-packages/kubetest/objects/namespace.py:91: in delete
    body=options,
env/lib/python3.7/site-packages/kubernetes/client/apis/core_v1_api.py:9084: in delete_namespace
    (data) = self.delete_namespace_with_http_info(name, body, **kwargs)
env/lib/python3.7/site-packages/kubernetes/client/apis/core_v1_api.py:9181: in delete_namespace_with_http_info
    collection_formats=collection_formats)
env/lib/python3.7/site-packages/kubernetes/client/api_client.py:321: in call_api
    _return_http_data_only, collection_formats, _preload_content, _request_timeout)
env/lib/python3.7/site-packages/kubernetes/client/api_client.py:155: in __call_api
    _request_timeout=_request_timeout)
env/lib/python3.7/site-packages/kubernetes/client/api_client.py:387: in request
    body=body)
env/lib/python3.7/site-packages/kubernetes/client/rest.py:256: in DELETE
    body=body)
env/lib/python3.7/site-packages/kubernetes/client/rest.py:166: in request
    headers=headers)
env/lib/python3.7/site-packages/urllib3/request.py:68: in request
    **urlopen_kw)
env/lib/python3.7/site-packages/urllib3/request.py:89: in request_encode_url
    return self.urlopen(method, url, **extra_kw)
env/lib/python3.7/site-packages/urllib3/poolmanager.py:323: in urlopen
    response = conn.urlopen(method, u.request_uri, **kw)
env/lib/python3.7/site-packages/urllib3/connectionpool.py:667: in urlopen
    **response_kw)
env/lib/python3.7/site-packages/urllib3/connectionpool.py:667: in urlopen
    **response_kw)
env/lib/python3.7/site-packages/urllib3/connectionpool.py:667: in urlopen
    **response_kw)
env/lib/python3.7/site-packages/urllib3/connectionpool.py:638: in urlopen
    _stacktrace=sys.exc_info()[2])
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Retry(total=0, connect=None, read=None, redirect=None, status=None), method = 'DELETE'
url = '/api/v1/namespaces/kubetest-test-deployment-1552808334', response = None
error = NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x112239f98>: Failed to establish a new connection: [Errno 61] Connection refused')
_pool = <urllib3.connectionpool.HTTPSConnectionPool object at 0x112239668>
_stacktrace = <traceback object at 0x11222b8c8>

    def increment(self, method=None, url=None, response=None, error=None,
                  _pool=None, _stacktrace=None):
        """ Return a new Retry object with incremented retry counters.
    
        :param response: A response object, or None, if the server did not
            return a response.
        :type response: :class:`~urllib3.response.HTTPResponse`
        :param Exception error: An error encountered during the request, or
            None if the response was received successfully.
    
        :return: A new ``Retry`` object.
        """
        if self.total is False and error:
            # Disabled, indicate to re-raise the error.
            raise six.reraise(type(error), error, _stacktrace)
    
        total = self.total
        if total is not None:
            total -= 1
    
        connect = self.connect
        read = self.read
        redirect = self.redirect
        status_count = self.status
        cause = 'unknown'
        status = None
        redirect_location = None
    
        if error and self._is_connection_error(error):
            # Connect retry?
            if connect is False:
                raise six.reraise(type(error), error, _stacktrace)
            elif connect is not None:
                connect -= 1
    
        elif error and self._is_read_error(error):
            # Read retry?
            if read is False or not self._is_method_retryable(method):
                raise six.reraise(type(error), error, _stacktrace)
            elif read is not None:
                read -= 1
    
        elif response and response.get_redirect_location():
            # Redirect retry?
            if redirect is not None:
                redirect -= 1
            cause = 'too many redirects'
            redirect_location = response.get_redirect_location()
            status = response.status
    
        else:
            # Incrementing because of a server error like a 500 in
            # status_forcelist and a the given method is in the whitelist
            cause = ResponseError.GENERIC_ERROR
            if response and response.status:
                if status_count is not None:
                    status_count -= 1
                cause = ResponseError.SPECIFIC_ERROR.format(
                    status_code=response.status)
                status = response.status
    
        history = self.history + (RequestHistory(method, url, error, status, redirect_location),)
    
        new_retry = self.new(
            total=total,
            connect=connect, read=read, redirect=redirect, status=status_count,
            history=history)
    
        if new_retry.is_exhausted():
>           raise MaxRetryError(_pool, url, error or ResponseError(cause))
E           urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='localhost', port=443): Max retries exceeded with url: /api/v1/namespaces/kubetest-test-deployment-1552808334 (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x112239f98>: Failed to establish a new connection: [Errno 61] Connection refused'))

env/lib/python3.7/site-packages/urllib3/util/retry.py:398: MaxRetryError
------------------------------------------ Captured stderr teardown -------------------------------------------
2019-03-17 09:38:54,770 WARNING Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x112239a58>: Failed to establish a new connection: [Errno 61] Connection refused')': /api/v1/namespaces/kubetest-test-deployment-1552808334
2019-03-17 09:38:54,773 WARNING Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x112239f60>: Failed to establish a new connection: [Errno 61] Connection refused')': /api/v1/namespaces/kubetest-test-deployment-1552808334
2019-03-17 09:38:54,776 WARNING Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x112239e10>: Failed to establish a new connection: [Errno 61] Connection refused')': /api/v1/namespaces/kubetest-test-deployment-1552808334
-------------------------------------------- Captured log teardown --------------------------------------------
api_object.py              105 WARNING  unknown version (None), falling back to preferred version
connectionpool.py          662 WARNING  Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x112239a58>: Failed to establish a new connection: [Errno 61] Connection refused')': /api/v1/namespaces/kubetest-test-deployment-1552808334
connectionpool.py          662 WARNING  Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x112239f60>: Failed to establish a new connection: [Errno 61] Connection refused')': /api/v1/namespaces/kubetest-test-deployment-1552808334
connectionpool.py          662 WARNING  Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x112239e10>: Failed to establish a new connection: [Errno 61] Connection refused')': /api/v1/namespaces/kubetest-test-deployment-1552808334
=========================================== 2 error in 1.37 seconds ===========================================
Exception ignored in: <function ApiClient.__del__ at 0x111480c80>
Traceback (most recent call last):
  File "<>/kamus/tests/crd-controller/env/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 78, in __del__
  File "/usr/local/Cellar/python/3.7.2_2/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py", line 556, in join
  File "/usr/local/Cellar/python/3.7.2_2/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py", line 1028, in join
TypeError: 'NoneType' object is not callable
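
Two separate problems show up here: the collection error suggests the test file that was actually run is missing the import pytest line, and the setup error shows that the applymanifests marker expects a directory of manifests rather than a single file (see load_path above). The connection-refused warnings are a third, unrelated issue: no API server is reachable at the configured address. A usage that avoids the first two problems might look like the following, assuming the YAML is moved into a manifests/ directory (the directory name is just an example):

    import pytest

    @pytest.mark.applymanifests('manifests')  # a directory containing service-account.yaml
    def test_deployment(kube):
        """Example test case for creating a deployment."""
        d = kube.load_deployment('deployment.yaml')
        d.create()
        d.wait_until_ready(timeout=30)

        pods = d.get_pods()
        print(pods)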

deployment version management causes failure on responses with no returned apiVersion

For example, if we refresh() a Deployment object, the response that comes back has no apiVersion or kind specified, but we take that response and update the internal obj with it, so we lose the original kind/apiVersion.

The kind is easy enough to recover, but for objects which can have multiple versions (e.g. a deployment can be apps/v1, apps/v1beta1, extensions/v1beta1, ...), we should be choosing the correct Kubernetes API client to interface with them.

To fix this, we'll probably need to:

  • store the apiVersion as an object field and fall back to it if the underlying k8s object doesn't have one
  • specify a 'default' or 'preferred' version, so it will use that if it can't determine a version
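
A rough sketch of the first option, capturing the version at load/create time and falling back to it (illustrative only, not the actual kubetest implementation):

    # Illustrative sketch: remember the apiVersion from the original manifest and fall
    # back to it (or a preferred default) when an API response omits it.
    class ApiObject:
        preferred_version = 'apps/v1'  # hypothetical per-kind default

        def __init__(self, api_object):
            self.obj = api_object
            self._api_version = api_object.api_version  # captured at load/create time

        @property
        def version(self):
            return self.obj.api_version or self._api_version or self.preferred_version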

plan out framework usage

We have a rough idea of the things we want to support; the tricky part is figuring out how to provide them in a meaningful, useful way as a pytest plugin. We will need to think about the actions a user will want to take, how objects are managed internally, etc.

My initial thought is that it's all mediated through a fixture, something along the lines of

def test_something(k8s):
    c = k8s.new_client()
    d = k8s.load_deployment('deployment.yaml')
    c.create_deployment(d)
    ...

update readme

we'll do this later once we have an initial working version

Open Source?

Make this un-private so everyone can bask in the glory?

Create generic objects from manifest files

Currently the create method supports creating resources from Python objects, but how can I create objects from manifest files? There are wrappers for some objects, but not for everything (service account, cluster role, CRD, etc.). How can I create them?
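
One possible workaround until generic support lands, assuming a kubernetes Python client recent enough to ship kubernetes.utils.create_from_yaml, is to create the objects directly from the manifest file:

    # Workaround sketch using the raw kubernetes client (not a kubetest API).
    from kubernetes import client, config, utils

    config.load_kube_config()
    utils.create_from_yaml(client.ApiClient(), 'service-account.yaml')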

setup CI config

not pressing, but once we have some stuff to lint/build/test/publish, we'll want CI

add --kube-context flag

Add a --kube-context flag to allow specifying the context to use from the kubeconfig if you don't want to use the current active context.
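
A sketch of how the plugin might register such a flag via pytest's standard option hook (the option and group names here are illustrative, not the final implementation):

    def pytest_addoption(parser):
        group = parser.getgroup('kubetest')
        group.addoption(
            '--kube-context',
            action='store',
            default=None,
            help='name of the kubeconfig context to use (defaults to the current context)',
        )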

Integrating with a orchestrated fixture

Hi, thank you for this fine project.

We use Terraform to bootstrap temporary clusters to run tests, generating a kubeconfig on-the-fly.

Here is some pseudo-code to illustrate:

class Cluster:
    @property
    def kubeconfig(self):
        """Path to a generated kubeconfig"""

@pytest.fixture
def cluster():
    # Terraform a cluster and generate the kubeconfig
    return Cluster()

This implies that I would like to use kubetest, but I can't feed it the --kube-config option. What would be the way to achieve this?
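
One possible direction, assuming you are willing to drive the kubernetes client yourself for these tests, is to load the generated kubeconfig inside a fixture (a sketch, not a kubetest-supported mechanism):

    import pytest
    from kubernetes import client, config

    @pytest.fixture
    def k8s_api(cluster):
        # Load the kubeconfig generated by the Terraform-backed cluster fixture.
        config.load_kube_config(config_file=cluster.kubeconfig)
        return client.CoreV1Api()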

option to disable warnings?

If a test fails, pytest captures and displays a bunch of warnings that I believe are coming from the Kubernetes API client, e.g.

source:21943: DeprecationWarning: 'async' and 'await' will become reserved keywords in Python 3.7
source:22057: DeprecationWarning: 'async' and 'await' will become reserved keywords in Python 3.7
source:22171: DeprecationWarning: 'async' and 'await' will become reserved keywords in Python 3.7
source:22285: DeprecationWarning: 'async' and 'await' will become reserved keywords in Python 3.7
source:22399: DeprecationWarning: 'async' and 'await' will become reserved keywords in Python 3.7
source:22506: DeprecationWarning: 'async' and 'await' will become reserved keywords in Python 3.7
source:22613: DeprecationWarning: 'async' and 'await' will become reserved keywords in Python 3.7
source:22720: DeprecationWarning: 'async' and 'await' will become reserved keywords in Python 3.7
source:22827: DeprecationWarning: 'async' and 'await' will become reserved keywords in Python 3.7
source:252: DeprecationWarning: invalid escape sequence \[
source:257: DeprecationWarning: invalid escape sequence \(
source:281: DeprecationWarning: 'async' and 'await' will become reserved keywords in Python 3.7

There can be a lot of these. They aren't harmful, but when there are 800 of them they get annoying. Perhaps we should have an option to squelch these warnings?
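
Until kubetest has its own option, pytest's built-in filterwarnings marker (or the equivalent filterwarnings ini setting) can silence them per test; this is standard pytest, not kubetest-specific:

    import pytest

    @pytest.mark.filterwarnings('ignore::DeprecationWarning')
    def test_deployment(kube):
        d = kube.load_deployment('deployment.yaml')
        d.create()
        d.wait_until_ready(timeout=30)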

add support for secrets

For private Docker Hub repos, we'll need the correct image pull secrets for a deployment, so we should add support for creating that secret.

My initial thought is that it will be similar to namespaces in that, if configured, it will be managed by the plugin.

clean up cluster if tests are manually terminated

Right now, the cleanup of a test case only happens after the test completes (whether it passed or failed). If we manually bail out (e.g. ctrl+C), the test does not get cleaned up and any test artifacts (namespace, cluster role bindings, etc.) are left hanging around. We'll need to force the cleanup of any cluster role binding with the kubetest: prefix and any namespace with the kubetest- prefix if we detect a manual termination.
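
A sketch of what that forced cleanup could look like using the kubernetes client directly (illustrative only; error handling omitted):

    from kubernetes import client

    def cleanup_kubetest_artifacts():
        """Delete any leftover kubetest namespaces and cluster role bindings."""
        core = client.CoreV1Api()
        rbac = client.RbacAuthorizationV1Api()
        for ns in core.list_namespace().items:
            if ns.metadata.name.startswith('kubetest-'):
                core.delete_namespace(name=ns.metadata.name, body=client.V1DeleteOptions())
        for crb in rbac.list_cluster_role_binding().items:
            if crb.metadata.name.startswith('kubetest:'):
                rbac.delete_cluster_role_binding(name=crb.metadata.name, body=client.V1DeleteOptions())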

logging integration with pytest

See if there is a native way to integrate logging messages into pytest, or if we should just use a standard logging.Logger instance.

This would make it easier to see what's going on internally without having to drop print statements in all of the test code, as the example tests currently do.

I believe pytest also allows different levels of verbosity (-v, -vv), so we may be able to use those levels for our logging as well (e.g. normal verbosity gives standard messages like "creating deployment X", and higher verbosity could also display the manifest for the deployment).

finish setting up documentation for project

Sphinx was set up for the project and configured with autodoc to pick up the docstrings, but we'll still need some more docs written up about usage, config, etc. It'll also need to be set up on Read the Docs at some point. Once that's done, the readme link needs to be fixed.

loading stale kubeconfig for GKE will result in error

If you're using a GKE cluster whose kubeconfig has been sitting for a while and then run a test using kubetest, you may get:

INTERNALERROR>     if self._load_auth_provider_token():
INTERNALERROR>   File "/Users/edaniszewski/dev/vaporio/kubetest/.tox/py36/lib/python3.6/site-packages/kubernetes/config/kube_config.py", line 196, in _load_auth_provider_token
INTERNALERROR>     return self._load_gcp_token(provider)
INTERNALERROR>   File "/Users/edaniszewski/dev/vaporio/kubetest/.tox/py36/lib/python3.6/site-packages/kubernetes/config/kube_config.py", line 236, in _load_gcp_token
INTERNALERROR>     self._refresh_gcp_token()
INTERNALERROR>   File "/Users/edaniszewski/dev/vaporio/kubetest/.tox/py36/lib/python3.6/site-packages/kubernetes/config/kube_config.py", line 245, in _refresh_gcp_token
INTERNALERROR>     credentials = self._get_google_credentials()
INTERNALERROR>   File "/Users/edaniszewski/dev/vaporio/kubetest/.tox/py36/lib/python3.6/site-packages/kubernetes/config/kube_config.py", line 139, in _refresh_credentials
INTERNALERROR>     scopes=['https://www.googleapis.com/auth/cloud-platform']
INTERNALERROR>   File "/Users/edaniszewski/dev/vaporio/kubetest/.tox/py36/lib/python3.6/site-packages/google/auth/_default.py", line 306, in default
INTERNALERROR>     raise exceptions.DefaultCredentialsError(_HELP_MESSAGE)
INTERNALERROR> google.auth.exceptions.DefaultCredentialsError: Could not automatically determine credentials. Please set GOOGLE_APPLICATION_CREDENTIALS or explicitly create credentials and re-run the application. For more information, please see https://developers.google.com/accounts/docs/application-default-credentials.

this is fixed by running any kubectl command, presumably to refresh the token, but that's pretty hacky. we'll need to look into refreshing automatically if possible.

add helper methods around services

related #7

This should include:

  • create
  • delete
  • get
  • refresh (update the service state locally)
  • get status
  • get endpoints - in particular, we would at least want to know the ip of things (so we can match to svc discovery in logs, if desired)

More could be added later, but the above is all we should need for this issue.
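
An illustrative sketch of the shape such a wrapper could take (not the final API); the method names mirror the bullets above, and the cluster config is assumed to be loaded already:

    from kubernetes import client

    class Service:
        def __init__(self, api_object, namespace):
            self.obj = api_object
            self.namespace = namespace
            self.api = client.CoreV1Api()

        def create(self):
            self.obj = self.api.create_namespaced_service(namespace=self.namespace, body=self.obj)

        def delete(self):
            return self.api.delete_namespaced_service(
                name=self.obj.metadata.name, namespace=self.namespace, body=client.V1DeleteOptions())

        def refresh(self):
            self.obj = self.api.read_namespaced_service(
                name=self.obj.metadata.name, namespace=self.namespace)

        def status(self):
            self.refresh()
            return self.obj.status

        def get_endpoints(self):
            return self.api.read_namespaced_endpoints(
                name=self.obj.metadata.name, namespace=self.namespace)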

Error when trying to get pods

I'm getting the following exception when trying to get pods:

env/lib/python3.7/site-packages/kubetest/objects/deployment.py:143: in get_pods
    label_selector=selector_string(self.obj.metadata.labels),
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

selectors = None

    def selector_string(selectors):
        """Create a selector string from the given dictionary of selectors.
    
        Args:
            selectors (dict): The selectors to stringify.
    
        Returns:
            str: The selector string for the given dictionary.
        """
>       return ','.join(['{}={}'.format(k, v) for k, v in selectors.items()])
E       AttributeError: 'NoneType' object has no attribute 'items'

env/lib/python3.7/site-packages/kubetest/utils.py:42: AttributeError

This is my code:

    d = kube.load_deployment('deployment.yaml')
    
    d.create()
    d.wait_until_ready(timeout=30)
    
    pods = d.get_pods()
    print(pods)

    print("hello")

The line pods = d.get_pods() is what's causing the error.

This is the deployment file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kamus-crd-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kamus
      component: crd-controller
  template:
    metadata:
      labels:
        app: kamus
        component: crd-controller
    spec:
      containers:
      - name: controller
        image: crd-controller
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /api/v1/monitoring/isAlice
            port: 9999

Can you help me figure it out?
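
The traceback suggests the Deployment's metadata has no labels (the manifest above only sets labels under spec.selector and the pod template), so selector_string() receives None. Adding labels to metadata in the manifest should work around it; on the kubetest side, a defensive fallback to the selector's matchLabels might look like this (a sketch, not the actual fix):

    def deployment_selector_string(deployment):
        """Build a label selector string for a V1Deployment."""
        # Prefer metadata labels (the current behaviour), falling back to the
        # spec.selector.matchLabels defined in the manifest.
        labels = deployment.metadata.labels or (
            deployment.spec.selector.match_labels if deployment.spec.selector else None
        )
        return ','.join('{}={}'.format(k, v) for k, v in (labels or {}).items())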

add helper for 'wait with timeout' behavior

We have the same or similar wait-with-timeout block in a few places now, and will likely have it in more places as we add functionality. It would be good to break that out into its own helper to make it easier to add new wait-based functionality.
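
A possible shape for the shared helper (a sketch): poll a zero-argument condition callable until it returns True or the timeout elapses.

    import time

    def wait_for(condition, timeout=None, interval=1):
        """Block until condition() returns True, or raise if the timeout is exceeded."""
        start = time.time()
        while not condition():
            if timeout is not None and (time.time() - start) > timeout:
                raise TimeoutError('timed out waiting for condition')
            time.sleep(interval)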

dump logs on error

it would be nice to be able to dump container logs on error. this would be great if it were done in an automated fashion, but at a minimum, we should have a way of telling kubetest to get the container logs and print them out prior to cleaning up the container namespace.

perhaps all of the logs would be too much, so being able to limit the amount logged could be helpful.

also to consider: there could be multiple containers in a test case -- how do we know which one to log out? or do we just log out all of them?

reorganize api object wrappers in project

Now that there are a bunch of API object wrappers and they are all stuffed into the same file, the file has grown and can be hard to navigate. We should instead make objects a package, with each object defined in its own file.

lifecycle management helpers

A test would take a manifest file, e.g. for a deployment; it would then need to load that manifest, create the k8s object, run some tests against it, and delete it. We could also list objects and get objects via some filter for test validation. Those lifecycle actions should be supported via a simple interface, covering at least the following:

Actions: create, delete, list, get, update
Objects: deployment, service, configmap, namespace, pod, node

other things can be added later.
note: update capabilities are lowest priority here, so if we don't get to them now, we can come back to them later.

capability to get/match container logs

in particular, this is useful for matching an expected log output to the actual logs. we could expose a "get_logs" to get all the logs, a simpler "match_logs", or both.

Update how pods are retrieved for Deployments

see: #88

Instead of assuming that the deployment will have the same labels as the pods, or that the deployment will have labels at all, we should collect all the pods in the namespace and traverse their owner_references until we either

  • get to the specified deployment, in which case the pod is part of the deployment
  • do not get to the specified deployment, in which case the pod is not part of the deployment

This will require traversing multiple objects.

Example of pod metadata

 'metadata': {'annotations': None,
              'cluster_name': None,
              'creation_timestamp': datetime.datetime(2019, 3, 14, 13, 55, 32, tzinfo=tzutc()),
              'deletion_grace_period_seconds': None,
              'deletion_timestamp': None,
              'finalizers': None,
              'generate_name': 'redis-master-7898448-',
              'generation': None,
              'initializers': None,
              'labels': {'app': 'redis',
                         'pod-template-hash': '3454004',
                         'role': 'master',
                         'tier': 'backend'},
              'name': 'redis-master-7898448-brd8m',
              'namespace': 'kubetest-test-pods-1552571732',
              'owner_references': [{'api_version': 'extensions/v1beta1',
                                    'block_owner_deletion': True,
                                    'controller': True,
                                    'kind': 'ReplicaSet',
                                    'name': 'redis-master-7898448',
                                    'uid': 'd5cb88aa-4660-11e9-ada4-025000000001'}],
              'resource_version': '534397',
              'self_link': '/api/v1/namespaces/kubetest-test-pods-1552571732/pods/redis-master-7898448-brd8m',
              'uid': 'd5cd3b0c-4660-11e9-ada4-025000000001'},

So from this pod, we'd need to traverse up to the ReplicaSet, and from there to the Deployment (or through any other intermediary objects).
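
A rough sketch of that traversal (illustrative; a real implementation would need to handle arbitrary intermediate owners rather than assuming a single ReplicaSet hop):

    from kubernetes import client

    def pod_belongs_to_deployment(pod, deployment_name, namespace):
        """Walk the pod's owner references up to a Deployment, if there is one."""
        apps = client.AppsV1Api()
        for ref in pod.metadata.owner_references or []:
            if ref.kind == 'ReplicaSet':
                rs = apps.read_namespaced_replica_set(name=ref.name, namespace=namespace)
                for rs_ref in rs.metadata.owner_references or []:
                    if rs_ref.kind == 'Deployment' and rs_ref.name == deployment_name:
                        return True
        return False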

build assertions into kube client helper functions?

Not sure if this would make sense, but if we build assertions into the client helpers, it could make the test code easier to manage.

e.g.

    # Get the container from the pod.
    containers = bb_pod.get_containers()
    assert len(containers) == 2, 'blackbox pod should have two containers'

    elect_container = bb_pod.get_container('blackbox-elector')
    assert elect_container is not None

    bb_container = bb_pod.get_container('blackbox')
    assert bb_container is not None

could become

    # Get the container from the pod.
    containers = bb_pod.get_containers(expected=2)

    elect_container = bb_pod.get_container('blackbox-elector')
    bb_container = bb_pod.get_container('blackbox')

Assertions could be optional if we have them set as kwargs or something... not sure if this is a great idea or not, just putting it here for more thought.

teardown logic should only execute when setup is successful

see: #90

If setup fails (e.g. when using markers/fixtures), then we may not have a connection to the API server, and the error traceback could be mostly errors from teardown trying to clean up. If we keep an internal flag as to whether setup succeeded, we can limit teardown logic to run only when there is actually something to clean up.

move setup/teardown functions to the client manager

Since the client is something we expose to the user, it doesn't really make sense to also expose the internal setup/teardown stuff to the user. We should just move that functionality to the manager, since, ya know, the job of the manager is to manage things.

allow loading manifests from directory

currently we have to load in manifests one at a time, e.g.

    bb_secret = kube.load_secret(manifest_path('blackbox.secret.yaml'))
    bb_configmap = kube.load_configmap(manifest_path('blackbox.configmap.yaml'))
    bb_deployment = kube.load_deployment(manifest_path('blackbox.deployment.yaml'))

We could add support for just specifying the directory and loading all the manifests from within it, e.g.

    manifests = kube.load_dir('./manifests')

add helper methods around deployments

related #7

This should include:

  • create
  • delete
  • get
  • refresh (update the deployment state locally)
  • get status
  • get deployment pods

More could be added later, but the above is all we should need for this issue.

improvements to condition checking

I've been running into some intermittent issues with condition checking in the integration test I'm writing using this, and it appears to be timing related.

The tests basically check the logs of blackbox to verify that it attempts a connection, ultimately fails, and restarts. I added some print statements with timing info:

[2018-09-19 15:48:35.491327] condition "connect attempt 1" - check status: True
[2018-09-19 15:48:36.115306] condition "connect attempt 2" - check status: True
[2018-09-19 15:48:36.444556] condition "connect attempt 3" - check status: True
[2018-09-19 15:48:36.678976] condition "connect attempt 4" - check status: True
[2018-09-19 15:48:37.097374] condition "connect attempt 5" - check status: True
[2018-09-19 15:48:37.304388] condition "connect attempt 6" - check status: False
[2018-09-19 15:48:37.510073] condition "connect failed" - check status: False
[2018-09-19 15:48:37.720688] condition "container restarted" - check status: False

The time difference between condition checks is non-trivial (~2.23 seconds from start to finish for the snippet above), but the time can be variable. The timing is likely due to all the separate API calls to get logs.

One way to improve this is to allow the log matching to take multiple things to match against - then it's only one request for the logs, and all the matching happens against that single response (fewer network calls).

Another thing we could do is run the checks in parallel.

There may be other improvements as well - I'll add comments if I think of any. While this isn't high priority, some improvement should be made here eventually.
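
A sketch of the "one log fetch, many matches" idea, assuming a container wrapper with the get_logs() method mentioned elsewhere in these issues:

    def check_log_conditions(container, expected):
        """Fetch the logs once and report which expected strings were found."""
        logs = container.get_logs()  # single API call
        return {text: (text in logs) for text in expected}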

pytest integration

this issue is a high-level bucket. some things we want:

  • project setup to integrate w/ pytest on pip install
  • automatically load kubernetes configs (.kube/config by default)
  • automatic namespace management (creation, cleanup)
  • automatic object management (where possible - e.g. cleaning things up after a test completes)
  • provide fixture to access the framework

async/parallel support

this feature is longer term -- it would be nice to be able to run tests in parallel. generally, this should be fine since each test is run in its own namespace. there's a bit of research that will need to go into this, and undoubtedly some rework/redesign to accommodate. this is likely a post-v1 feature.

add support for watching/events

Listening for events could help with learning when something is created, updated, or deleted. We already have helpers to determine create/delete (e.g. if I try to get it from the cluster, is it there?), but perhaps that isn't the best way of doing things. Watching for events could be useful, but it also seems like integrating it into the current flow would be tricky.
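
For reference, the kubernetes Python client already exposes a watch API that this could build on, e.g. (standard client usage, not kubetest-specific):

    from kubernetes import client, config, watch

    config.load_kube_config()
    v1 = client.CoreV1Api()
    w = watch.Watch()
    # Stream ADDED/MODIFIED/DELETED events for pods in a namespace for 30 seconds.
    for event in w.stream(v1.list_namespaced_pod, namespace='default', timeout_seconds=30):
        print(event['type'], event['object'].metadata.name)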

add helper methods around pods

related #7

This should include:

  • create
  • delete
  • get
  • refresh (update the pod state locally)
  • get status
  • get containers

More could be added later, but the above is all we should need for this issue.

set up role bindings/cluster role bindings

If certain components, like the elector sidecar, require cluster access (e.g. to the Kubernetes etcd instance for elections), we'll need to be able to set this up in tests. The role is applied to a namespace, so this is a contender for something that could be automatically managed alongside the namespace?

templating for test manifests

Right now there are two somewhat inconvenient ways of approaching tests that use similar, but different, manifests (e.g. only a few values are changed).

  1. Make separate copies for each test
  2. Make a basic manifest and then modify it once it has been loaded

Option 2 prevents you from using the applymanifest marker, since you would need to manually update the config.

To improve this flow, we could support templating (e.g. some kind of simple Jinja) for the manifests. TBD how it would work with the applymanifests marker, but worst case we could just pass a dict of values to the marker for templating.
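
One possible shape for the "pass a dict of values" idea, rendering the manifest with Jinja2 before it is loaded (a sketch; Jinja2 would be a new dependency and the file name below is illustrative):

    import yaml
    from jinja2 import Template

    def render_manifest(path, **values):
        """Render a templated manifest file and return it as a dict."""
        with open(path) as f:
            rendered = Template(f.read()).render(**values)
        return yaml.safe_load(rendered)

    # e.g. render_manifest('deployment.yaml.j2', replicas=3, image='crd-controller:latest')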

add helper methods around containers

it would be useful to have helper methods around containers - this would make it easier to get logs for a specific container, kill a specific container within a pod, etc.

option to output container logs to file on error

If there is a test that has many pods and the test fails, kubetest will try to get the logs for all the containers, which could end up being a lot of output and clutter the test results, making it harder to determine what actually went wrong.

We could provide an option to write container logs to a file on error so they can be inspected later without cluttering the console output.

add "applymanifest" marker for single manifest loading

see: #90

the applymanifests marker is useful for applying a directory of manifests, but its usage can be confusing if you only need to apply a single manifest for a test. we can add an applymanifest marker for such a case, which would only apply the single specified manifest.

'wait for condition' feature

I've started to template out some test cases, and something that seems useful is a built-in mechanism for waiting for some condition to be met.

We already have 'wait until ready' and 'wait until deleted', but there are other conditions we may want to wait for. For example:

  • 'wait until the pod has restarted' - this could be used to verify that a certain action triggers an expected pod restart
  • 'wait until X shows up in the logs' - if matching against the logs, we can't say when something will happen, so we may want to wait until it does.

add 'wait for node count'

If the tests are running on an auto-scaling cluster, the cluster could start with fewer nodes than it needs for the test. After the 'create' phase of a test, we should be able to call 'wait for nodes' to ensure that we have the number of nodes we need available for everything to deploy and test correctly.

issue with pod restart while waiting for condition

if we are waiting for a condition, e.g. 'check that container X has Z in its logs' and the pod for that container gets reset during the wait block, the test could fail with an error similar to:

E           kubernetes.client.rest.ApiException: (400)
E           Reason: Bad Request
E           HTTP response headers: HTTPHeaderDict({'Audit-Id': '3f90ef9f-b1b6-46e8-97e7-87854e36801e', 'Content-Type': 'application/json', 'Date': 'Mon, 24 Sep 2018 19:07:59 GMT', 'Content-Length': '213'})
E           HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"container \"blackbox\" in pod \"blackbox-666cffd4c9-8lt4b\" is waiting to start: ContainerCreating","reason":"BadRequest","code":400}

this happens because while we are re-checking for the condition, the pod restarted and is temporarily unavailable.

I think the solution to this is to add a param to the wait function that, when set, will prevent these failures from erroring out of the check loop.

state assurance helpers

we'll want things like:

  • wait until X is created
  • wait until X is ready
  • wait until X is terminated

for each of these, there should also be an optional timeout, where it would fail if the timeout is exceeded.

there should also be a corresponding non-wait option as well, e.g. something like

  • create_deployment
  • create_deployment_and_wait

unable to read logs from container


    def get_logs(self):
        """Get up-to-date stream logs of a given container.
    
            Returns:
                str: String of logs.
            """
>       return self.obj.read_namespaced_pod_log(
            name=self.pod.name,
            namespace=self.pod.namespace,
            container=self.obj.name,
        )
E       AttributeError: 'V1Container' object has no attribute 'read_namespaced_pod_log'

../../../../../dev/vaporio/kubetest/kubetest/objects.py:346: AttributeError

I think instead of self.obj, we'll want client.CoreV1Api or something?
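
For reference, the log read lives on the core API client rather than on the container model, so something along these lines should work (standard kubernetes client call; the wrapper shape is illustrative):

    from kubernetes import client

    def get_container_logs(pod_name, namespace, container_name):
        return client.CoreV1Api().read_namespaced_pod_log(
            name=pod_name,
            namespace=namespace,
            container=container_name,
        )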

cluster proxy capability

We want to be able to test service endpoints, so we'll need a way to access them, which means proxying into the cluster.

We could have a generic cluster_proxy function, or higher level proxy_http_get, proxy_http_post, etc. Either approach would probably be fine for now.
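
A sketch of what a higher-level proxy_http_get could look like, assuming the kubernetes client's generated service-proxy methods fit the need (illustrative only):

    from kubernetes import client

    def proxy_http_get(service_name, namespace, path, port='http'):
        """GET a path on a service via the API server's proxy endpoint."""
        return client.CoreV1Api().connect_get_namespaced_service_proxy_with_path(
            name='{}:{}'.format(service_name, port),
            namespace=namespace,
            path=path,
        )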
