
freeipa-pr-ci's Introduction

FreeIPA Server

FreeIPA allows Linux administrators to centrally manage the identity, authentication and access control aspects of Linux and UNIX systems by providing command line and web-based management tools that are simple to install and use.

FreeIPA is built on top of well-known Open Source components and standard protocols, with a very strong focus on ease of management and on automation of installation and configuration tasks.

FreeIPA can seamlessly integrate into an Active Directory environment via cross-realm Kerberos trust or user synchronization.

Benefits

FreeIPA:

  • Allows all your users to access all the machines with the same credentials and security settings
  • Allows users to access personal files transparently from any machine in an authenticated and secure way
  • Uses an advanced grouping mechanism to restrict network access to services and files only to specific users
  • Allows central management of security mechanisms like passwords, SSH Public Keys, SUDO rules, Keytabs, Access Control Rules
  • Enables delegation of selected administrative tasks to other power users
  • Integrates into Active Directory environments

Components

The FreeIPA project provides unified installation and management tools for the following components:

Project Website

Releases, announcements and other information can be found on the IPA server project page at http://www.freeipa.org/ .

Documentation

The most up-to-date documentation can be found at http://freeipa.org/page/Documentation .

Quick Start

To get started quickly, start here: http://www.freeipa.org/page/Quick_Start_Guide

For developers

Licensing

Please see the file called COPYING.

Contacts

freeipa-pr-ci's People

Contributors

abbra, amore17, antoniotorresm, bhavikbhavsar, carma12, dhnunes, f-trivino, fcami, fdvorak256, felipevolpone, flo-renaud, frasertweedale, menonsudhir, miskopo, netoarmando, nicki-krizek, pavelpicka, pvoborni, rcritten, rezney, slaykovsky, sorlov-rh, ssidhaye, stlaz, t-woerner, tiboris, tiran


freeipa-pr-ci's Issues

artifacts: failover remote storage

If fedorapeople.org goes down or is slow, we're unable to publish build and test artifacts, which effectively disables our CI. A failover mechanism that can be used in such cases should be implemented: it would publish to an alternative server. The alternative only has to follow the same directory structure; it can have a more limited data retention period and feature set (e.g. no in-browser support for *.gz files).
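
A minimal sketch of what the failover could look like, assuming artifacts are pushed with plain rsync; the fallback host and directory are placeholders, not an existing server:

import logging
import subprocess

# Hypothetical remote targets; the fallback location is an assumption.
PRIMARY = 'fedorapeople.org:/srv/groups/freeipa/prci/'
FALLBACK = 'backup.example.org:/srv/prci/'


def upload_artifacts(local_dir, remote):
    """Push a local artifact directory to an rsync target."""
    subprocess.run(['rsync', '-a', local_dir, remote], check=True)


def publish_with_failover(local_dir):
    """Try the primary storage first, fall back to the alternative server."""
    try:
        upload_artifacts(local_dir, PRIMARY)
        return PRIMARY
    except subprocess.CalledProcessError:
        logging.warning('primary artifact storage unreachable, using fallback')
        upload_artifacts(local_dir, FALLBACK)
        return FALLBACK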

runner deployment: extra configuration

The deployment template was tested with the Fedora Cloud Base image and works out of the box for that distro. However, if additional features are turned on, they need to be configured:

  • If firewall is configured on the runner machine, enable the required services (e.g. NFS).
  • Enable NAT routing for nested VMs.

Migrate Python test suites from Travis CI to PR CI

Travis CI is currently overloaded, because it executes the following tests:

  • build, lint
  • Python2 unit tests
  • Python3 unit tests
  • tox tests
  • web unit tests (soon)

It would make sense to move the Python 2 and 3 unit tests to PR CI: they take quite a bit of time (~1h30m per PR in Travis CI), which makes Travis a bottleneck since we are using the free tier.

Handle provisioning issues

Occasionally, a one-time provisioning issue is encountered during vagrant up, such as:

  • libvirt: Call to virConnectOpen failed: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
  • nfs: /usr/bin/mv: cannot create regular file '/etc/exports': File exists
  • hostnamectl: Could not set property: Failed to activate service 'org.freedesktop.hostname1': timed out (service_start_timeout=25000ms)

These problems aren't persistent. When vagrant up or vagrant provision fails, cleanup should be performed and one or two further attempts should be made to provision the machines. Only if they all fail should an error be reported to GitHub, as happens now.
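
A rough sketch of the proposed retry logic, assuming provisioning is driven by plain vagrant commands; the cleanup command (vagrant destroy -f), the number of attempts and the delay are assumptions:

import subprocess
import time


def provision_with_retries(attempts=3, delay=30):
    """Run 'vagrant up' a few times, cleaning up between failed attempts."""
    for attempt in range(1, attempts + 1):
        try:
            subprocess.run(['vagrant', 'up'], check=True)
            return
        except subprocess.CalledProcessError:
            if attempt == attempts:
                # Out of retries: let the error propagate and be reported
                # to GitHub as it is now.
                raise
            # Clean up the half-provisioned machines and try again.
            subprocess.run(['vagrant', 'destroy', '-f'], check=False)
            time.sleep(delay)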

add runner id to runner.log

When an infrastructure issue occurs, it is very useful to have the runner id in runner.log so that we can find out on which runner the issue happened.
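
One way to do this with the standard logging module is a LoggerAdapter that injects the runner id into every record; the id value and log format below are only illustrative:

import logging

logging.basicConfig(
    filename='runner.log',
    format='%(asctime)s %(levelname)s [runner=%(runner_id)s] %(message)s',
    level=logging.INFO,
)

# Every record emitted through this adapter carries the runner id.
log = logging.LoggerAdapter(logging.getLogger('prci'),
                            extra={'runner_id': 'runner-01'})

log.info('starting task')  # -> ... INFO [runner=runner-01] starting task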

Change the use of REST API to GraphQL to improve scalability

One of the possible bottlenecks in the project is heavy use of the GitHub API: its rate limit can prevent us from adding more runners, as already explained in the documentation. The problem is that we currently need to make 4 or 5 REST API calls to GitHub to gather all the information we need about the PRs and their statuses.

To fix this bottleneck, we can use the GraphQL API: with it, a single call can return all the info we need.

To try it out, run this query in the GraphQL explorer page:

{
  repository(owner: "freeipa", name: "freeipa") {
    pullRequests(last: 50, states: [OPEN]) {
      edges {
        node {
          number
        }
      }
    }
  }
  rateLimit {
    cost
    limit
    nodeCount
    remaining
    resetAt
  }
}
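
For reference, a minimal sketch of issuing that query programmatically with requests against the GitHub GraphQL endpoint; the query file name and the personal access token are assumptions:

import requests

# 'query' holds the GraphQL document shown above.
query = open('open_prs.graphql').read()

resp = requests.post(
    'https://api.github.com/graphql',
    json={'query': query},
    headers={'Authorization': 'bearer <GITHUB_TOKEN>'},  # token is assumed
)
resp.raise_for_status()
numbers = [edge['node']['number']
           for edge in resp.json()['data']['repository']['pullRequests']['edges']]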

template: enable updates repo

During build/testing, the Fedora updates repository should be enabled in the template, so that new packages can be pulled in when they are bumped in the spec file.

Setup PR-CI runner without access to shared team resources - for private runner

Currently, PR-CI runners use the FedoraPeopleUpload subtask for artifact uploads.

The setup for this task is hardcoded in constants.py as:

FEDORAPEOPLE_KEY_PATH = '/root/.ssh/freeipa_pr_ci'
FEDORAPEOPLE_DIR = '[email protected]:/srv/groups/freeipa/prci/{path}'
FEDORAPEOPLE_BASE_URL = 'https://fedorapeople.org/groups/freeipa/prci/'
FEDORAPEOPLE_JOBS_URL = urllib.parse.urljoin(FEDORAPEOPLE_BASE_URL, 'jobs/')

Making this part configurable would allow people without access to the FreeIPA team's official freeipa_pr_ci private key to use PR-CI for their private runners.

Thinking about it further: if we changed the FedoraPeopleUpload class into, e.g., a generic RsyncUpload and defined what the remote location has to support, we could then configure the CI against any server.
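
A sketch of what such a generic, configurable upload class could look like; the class name, constructor arguments and rsync invocation are made up for illustration:

import subprocess


class RsyncUpload:
    """Hypothetical replacement for FedoraPeopleUpload.

    The remote destination, base URL and SSH key come from configuration
    instead of hardcoded constants, so private runners can point the CI
    at any rsync-capable server.
    """

    def __init__(self, remote_dir, base_url, key_path):
        self.remote_dir = remote_dir  # e.g. 'user@host:/srv/prci/{path}'
        self.base_url = base_url      # e.g. 'https://host/prci/'
        self.key_path = key_path

    def upload(self, src, path):
        dest = self.remote_dir.format(path=path)
        subprocess.run(
            ['rsync', '-a', '-e', 'ssh -i {}'.format(self.key_path), src, dest],
            check=True)
        return self.base_url + path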

Deprecation warning on 'include'

I caught a deprecation warning:

$ ansible-playbook -i ansible/inventory ansible/prepare_devel_test_runners.yml
[DEPRECATION WARNING]: The use of 'include' for tasks has been deprecated. Use 'import_tasks' for static inclusions or 'include_tasks' for dynamic inclusions. This feature will be removed in a future release. 
Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
[DEPRECATION WARNING]: include is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a 
future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
Owner of monitored GitHub repo:

attempt to restart the prci service if it fails

If api.github.com is temporarily inaccessible and a runner happens to start the service at that time, the service fails (complete traceback attached below). Incidentally, this also disables automatic reboots, which can make the machine go stale and become inaccessible over ssh.

We can either let systemd restart the service when it fails, or handle the exceptions raised during HTTPSConnectionPool initialization. I'm in favor of the systemd restart, since it also covers other failures.
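
If we go the systemd route, a drop-in along these lines (e.g. /etc/systemd/system/prci.service.d/restart.conf) should be enough; the delay and rate-limit values are assumptions:

[Unit]
# Do not give up after a few quick failures in a row.
StartLimitIntervalSec=0

[Service]
Restart=on-failure
RestartSec=60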

Traceback:

Traceback (most recent call last):
   File "/usr/lib/python3.6/site-packages/requests/packages/urllib3/connection.py", line 141, in _new_conn
     (self.host, self.port), self.timeout, **extra_kw)
   File "/usr/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py", line 60, in create_connection
     for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
   File "/usr/lib64/python3.6/socket.py", line 745, in getaddrinfo
     for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
 socket.gaierror: [Errno -2] Name or service not known
 During handling of the above exception, another exception occurred:
 Traceback (most recent call last):
   File "/usr/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py", line 600, in urlopen
     chunked=chunked)
   File "/usr/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py", line 345, in _make_request
     self._validate_conn(conn)
   File "/usr/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py", line 844, in _validate_conn
     conn.connect()
   File "/usr/lib/python3.6/site-packages/requests/packages/urllib3/connection.py", line 284, in connect
     conn = self._new_conn()
   File "/usr/lib/python3.6/site-packages/requests/packages/urllib3/connection.py", line 150, in _new_conn
     self, "Failed to establish a new connection: %s" % e)
 requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7f16f7a08940>: Failed to establish a new connection: [Errno -2] Name or service not known
 During handling of the above exception, another exception occurred:
 Traceback (most recent call last):
   File "/usr/lib/python3.6/site-packages/requests/adapters.py", line 423, in send
     timeout=timeout
   File "/usr/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py", line 649, in urlopen
     _stacktrace=sys.exc_info()[2])
   File "/usr/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py", line 376, in increment
     raise MaxRetryError(_pool, url, error or ResponseError(cause))
 requests.packages.urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='api.github.com', port=443): Max retries exceeded with url: /repos/freeipa/freeipa (Caused by NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7f16f7a08940>: Failed to establish a new connection: [Errno -2] Name or service not known',))
 During handling of the above exception, another exception occurred:
 Traceback (most recent call last):
   File "/root/freeipa-pr-ci/github/prci.py", line 295, in <module>
     main()
   File "/root/freeipa-pr-ci/github/prci.py", line 262, in main
     repo = github.repository(repo['owner'], repo['name'])
   File "/usr/lib/python3.6/site-packages/github3/github.py", line 1138, in repository
     json = self._json(self._get(url), 200)
   File "/usr/lib/python3.6/site-packages/github3/models.py", line 185, in _get
     return self.session.get(url, **kwargs)
   File "/usr/lib/python3.6/site-packages/requests/sessions.py", line 501, in get
     return self.request('GET', url, **kwargs)
   File "/usr/lib/python3.6/site-packages/github3/session.py", line 88, in request
     response = super(GitHubSession, self).request(*args, **kwargs)
   File "/usr/lib/python3.6/site-packages/requests/sessions.py", line 488, in request
     resp = self.send(prep, **send_kwargs)
   File "/usr/lib/python3.6/site-packages/requests/sessions.py", line 609, in send
     r = adapter.send(request, **kwargs)
   File "/root/freeipa-pr-ci/github/prci_github/adapter.py", line 43, in send
     request, *args, **kwargs)
   File "/usr/lib/python3.6/site-packages/cachecontrol/adapter.py", line 50, in send
     resp = super(CacheControlAdapter, self).send(request, **kw)
   File "/usr/lib/python3.6/site-packages/requests/adapters.py", line 487, in send
     raise ConnectionError(e, request=request)
 requests.exceptions.ConnectionError: HTTPSConnectionPool(host='api.github.com', port=443): Max retries exceeded with url: /repos/freeipa/freeipa (Caused by NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7f16f7a08940>: Failed to establish a new connection: [Errno -2] Name or service not known',))
 prci.service: Main process exited, code=exited, status=1/FAILURE
 prci.service: Unit entered failed state.
 prci.service: Failed with result 'exit-code'.

github: handle http connection errors

When PR CI attempts to communicate with the GitHub API, it can occasionally fail with:

  • ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',)) or
  • ServerError 500

This is particularly annoying when the runner fails to publish test results: on GitHub the task then appears to be stuck in the executing state indefinitely.

These errors could be handled by retrying after a specified timeout.
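
A minimal sketch of such a retry wrapper around the GitHub calls; the retried call in the comment, the number of attempts and the timeout are placeholders:

import time

import requests


def with_retries(func, attempts=3, delay=60):
    """Call func(), retrying transient HTTP/connection failures a few times."""
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except (requests.exceptions.ConnectionError,
                requests.exceptions.HTTPError):
            if attempt == attempts:
                raise
            time.sleep(delay)


# e.g. publishing a result (hypothetical call):
# with_retries(lambda: task.set_status('success'))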

New PR CI Architecture to support more runners

PR CI, as it stands today, was not designed to support a large number of runners (60+). With that in mind, I would like to propose a new architecture for the project.

(Diagram: new PR CI architecture)

This is the workflow described in detail:

  1. GitHub (GH) triggers the web server (via webhooks) when a PR is created or modified.

  2. With the PR information, the web server reads the .freeipa-pr-ci.yaml spec file and creates a Task for each test described in it. A Task is basically an object carrying info about the test, its priority, whether it depends on another Task, etc.

    1. One important detail: when a PR is created, the build has to run first, and only then can the other tests run. On the first pass, the web server reads the PR info, sees that no Tasks have been created for it yet (in its task list), and creates only the build Task, which it submits to RabbitMQ.
      When the build is done, the Publisher updates GH to say that the build succeeded. Thanks to this GH webhook (it is triggered whenever the status of a commit changes), the web server can detect that the build is done and then create the other Tasks.
  3. The web server adds the Tasks to the TASKS_QUEUE in RabbitMQ. This queue holds only the tasks that the runners will consume and run. Note that we have different kinds of tests to run: some consume fewer resources, while others demand bigger runners (VMs with more RAM and vCPUs). So we will need a separate queue for each kind of test: tests that consume fewer resources go to a "medium tasks queue" and the bigger ones to a "big tasks queue". In practice there will not be a single TASKS_QUEUE, but BIG_TASKS_QUEUE, SMALL_TASKS_QUEUE and so on (see the sketch after this list).

  4. The runners consume the TASKS_QUEUE and each runner picks up a task to run. Note that RabbitMQ has its own mechanisms to prevent a message in a queue from being consumed by two consumers. With the Task in hand, the runner knows what to do.

  5. After running the tests, the runner creates a Result object, which is added to the RESULTS_QUEUE in RabbitMQ.

  6. A Publisher (this could be a service running on some machine) consumes the RESULTS_QUEUE, takes each Result object, publishes the logs on FedoraPeople and updates the GH task list, showing whether the tests succeeded or not (as we already do today).
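
A rough sketch of steps 3-5 using pika and plain JSON payloads; the RabbitMQ host, queue names and Task/Result fields are illustrative only:

import json

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('rabbitmq.example'))
channel = connection.channel()
channel.queue_declare(queue='TASKS_QUEUE', durable=True)
channel.queue_declare(queue='RESULTS_QUEUE', durable=True)

# Web server side: publish a Task as JSON (step 3).
task = {'pr': 1234, 'job': 'fedora/simple_replication', 'priority': 1}
channel.basic_publish(exchange='', routing_key='TASKS_QUEUE',
                      body=json.dumps(task))


# Runner side: consume Tasks and push Results (steps 4 and 5).
def handle_task(ch, method, properties, body):
    task = json.loads(body)
    result = {'pr': task['pr'], 'job': task['job'], 'state': 'success'}
    ch.basic_publish(exchange='', routing_key='RESULTS_QUEUE',
                     body=json.dumps(result))
    ch.basic_ack(delivery_tag=method.delivery_tag)


channel.basic_consume(queue='TASKS_QUEUE', on_message_callback=handle_task)
channel.start_consuming()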

Advantages:

  • This way we will never hit the API rate limit, since we will be using webhooks
  • It's fully decoupled from GH. In the new architecture the tasks are created by the web server based on the info received from GH; if at some point in the future we move away from GH, we would only need to change the code that talks to the PR list.

Disadvantages:

  • It's more complex and has more points of failure
  • It's harder to deploy than the current setup. Today we only need to deploy the runners; with this proposal we would need to deploy RabbitMQ, the web server, the runners and the publisher. All of this can be automated.
  • It's harder to debug, given that there are more moving pieces.

Challenges:

  • Serialization of the Task and Result objects. It is not trivial to serialize and deserialize complex objects such as class instances, so a common "protocol" could/should be used, e.g. transferring only JSON payloads.

PS:

  • Items II, III, IV and VI are shown in blue in the diagram to indicate that they belong to our infrastructure.
  • Items II and IV could be deployed on the same machine.
  • Item IV could run as a service.

Use git tag when releasing a new box version

I think it would be a good idea to create a new git tag every time we release a new Vagrant box version; this way it would be easy to know what features/code each box version contains.

Support more test suites in PR CI

Enable the PR CI infrastructure to execute more resource-consuming integration tests.

Benefits

  • QE can use development runners to run more test cases (as in celestian/freeipa#2 )
  • Test suite can be extended with more tests (especially for nightly testing)
  • All resources of "beefy" runners can be utilized (useful in beaker and for future scaling)

Tasks

  • support multiple topologies #70
  • review&adjust PR to support job difficulty definition #54, #72
  • add topologies and difficulty definition in config file (all branches)
  • update docs: add section about topologies and job complexity
  • raise minimum requirements for runner (docs, recreate all runners)

  • add external_ca test into the test suite
  • analyze the available resources and capacity for adding more tier0 test suites
  • identify candidate test suites for tier0 testing

Only run failed jobs on re-run

It does not make sense to build and run all jobs when only one job out of many failed. On a re-run request, run only the failed job; all jobs are run on a rebase anyway.

runner: allow execution of multiple jobs in parallel

A runner should be capable of executing multiple jobs. The initial implementation could have a predefined maximum number of concurrent jobs. In the future, each job could have its required resources associated with it; the runner could then fully utilize its capacity by executing as many jobs as it has resources for.
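
The initial version could be as simple as a fixed-size thread pool; the run_job helper and the limit of two concurrent jobs are hypothetical:

from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENT_JOBS = 2  # predefined limit for the initial implementation


def run_job(task):
    """Hypothetical helper that provisions the VMs and executes one job."""
    ...


def run_jobs(tasks):
    # Execute up to MAX_CONCURRENT_JOBS jobs at the same time.
    with ThreadPoolExecutor(max_workers=MAX_CONCURRENT_JOBS) as pool:
        return list(pool.map(run_job, tasks))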

github: adjust PR priority (older first)

When the PR task queue is constructed, older PRs should be handled before newly opened ones. This should prevent the situation where an old PR waits a long time for tests because many new PRs are being opened (e.g. backport PRs after pushing).

github: job configuration from the target branch should be used by default

Currently, we load the job configuration from the PR. This doesn't guarantee that all the tests we want to execute will actually run, because contributors' branches may be out of date with the target branch: the job config can be missing entirely, or missing some jobs. It is easy to overlook that some tests weren't executed, and it is annoying for contributors to have to rebase every time we modify the config file.

I think we should detect the target branch of the PR and use that branch's config file by default. This ensures all the selected tests are executed on all PRs, even without a rebase. However, if the config file was modified in the PR itself, the PR's version should be used instead; this allows testing of new test suites and templates.

This would also solve the future issue of bumping template versions in the config file.
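
With github3.py this could look roughly as follows; the helper name is made up, and the config path matches the .freeipa-pr-ci.yaml spec file mentioned elsewhere in this document:

import yaml

CONFIG_PATH = '.freeipa-pr-ci.yaml'


def load_job_config(repo, pull):
    """Prefer the target branch's config unless the PR itself modifies it."""
    changed = any(f.filename == CONFIG_PATH for f in pull.files())
    ref = pull.head.sha if changed else pull.base.ref
    contents = repo.file_contents(CONFIG_PATH, ref=ref)
    return yaml.safe_load(contents.decoded)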

Runner disk cleanup

Once a new template is generated and uploaded to Vagrant Cloud, there should be a (default) option to also revoke an older version on Vagrant Cloud and to delete it from all the runners via Ansible.

Two versions per template should be kept: the current stable one and an updated one.

CONTRIBUTING guide

It would be nice to have a CONTRIBUTING guide to help new contributors. The guide could cover:

  • Forking freeipa; forking freeipa-pr-ci
  • Setting up a dev environment: runner, PRs on your own fork, using pr_ci_test_control to re-run tasks, ...
  • Submitting a patch

Move lists of installed packages from runner.log to separate file(s)

runner.log.gz is polluted with the list of installed packages for each machine (machine/provision : get all packages), and the list is even displayed twice (machine/provision : display installed packages).

This list can be useful, but it should go into a separate file (or files) so that runner.log stays readable.

vagrantfiles: unify replica numbering

In the FreeIPA tests, replicas are numbered from 0, while the vagrantfile in PR CI uses numbering from 1. This can be confusing when inspecting logs.

py.test: log output of called subprocesses

The runner.log is missing the output of commands like ipa-server-install, which makes debugging some issues much harder or impossible. Output of subcommands that are called by py.test should be logged either directly in the runner.log or in a separate file.

systemd unit should wait for libvirt/qemu after boot

When the prci service is started right after boot, an attempt to spin up a virtual machine sometimes ends with:

Call to virDomainCreateWithFlags failed: monitor socket did not show up: No such file or directory

This only happens on the first run after a reboot, so some required service probably hasn't started yet. We should add a dependency to the systemd unit to fix this.
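
The fix would presumably be an ordering dependency in the prci unit along these lines; the exact service to wait for (libvirtd) is an assumption:

[Unit]
# Do not start the runner until libvirt is up.
Wants=libvirtd.service
After=libvirtd.service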

runner monitoring

Currently, there is no good way to monitor the status and health of individual runners. A monitoring solution using, for example, Zabbix would be ideal, so that we can track the activity and state of the production runners.

tox testing

Add support for running tox tests in PR CI.

runner: support multiple topologies

Currently, only the master/replica topology is supported for testing. It would be ideal if we could define multiple topologies and specify which one should be used for a given job. The topologies could also define how many resources (CPU/RAM) they require, and this information could be used to fully utilize the runner's capacity (see #54).

Run the new job for all PRs once it gets added

Just as all jobs are run on a new version of a template (e.g. when we did the F25->F26 move), a newly added job (like the recently added external CA one) should probably also be run for all open PRs once it is added, so that we know that no PR breaks it.

Run tests on rawhide fedora

It often happens that IPA hits errors when running on Fedora Rawhide. We could run nightly tests on Rawhide and compare the results with the previous run.

Benefits

  • Figure out problems before releases
  • Discover when a new version of some dependency breaks IPA

Tasks

  • Add an option to disable COPR during template creation (PR #138)
  • Add option to always update pkgs (both in mock and system) during provisioning (PR #139)
  • Automate the creation of PRs at night (PR #136)
  • Create a fedora rawhide template (ci-master-frawhide)
  • Create a tool to compare results and pkgs with previous test run and generate diff in the PR comment (see nicki-krizek/freeipa#35 (comment) for example)
  • Create a tool to open and close PRs (PR #136)

Depends on

  • Use PR CI infrastructure to run nightly tests, #108

runner crashes while watching PR queue

Traceback (most recent call last):
  File "./prci.py", line 193, in <module>
    task.take(runner_id)
  File "/home/sharp/git/freeipa-pr-ci/github/prci_github/internals.py", line 219, in take
    status = Status(self.repo, self.pull, self.name)
  File "/home/sharp/git/freeipa-pr-ci/github/prci_github/internals.py", line 84, in __init__
    raise ValueError('No status with context {}'.format(context))

create-template-box CLI is confusing

The create-template-box script asks for inputs, however the CLI is confusing.
For example:
When git branch (e.g. master): is printed, it looks as if pressing Enter (leaving the field empty) would use the master branch, but that is not the case; the validation of the empty field only happens later in the script.

So, we could:

  • Use the example as the default (see the sketch below); or
  • Check whether the field is empty and ask the user to enter a valid value
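
The first option could look roughly like this; the prompt text and default value are illustrative:

def ask(prompt, default):
    """Prompt the user, falling back to a default when the answer is empty."""
    answer = input('{} [{}]: '.format(prompt, default)).strip()
    return answer or default


git_branch = ask('git branch', 'master')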

Use PR CI infrastructure to run nightly tests

Even with a global team, there are time windows when the PR CI infrastructure is not used. We could use these hours to run long tests that require a lot of machinery and resources.

Benefits

  • Broader test coverage
  • Utilize PR CI hardware resources

Tasks

  • Automate the creation of a PR at night to have the tests run

Depends on:

Green mark for skipped tests in PR CI is confusing.

When tests are skipped, this is reflected in the HTML report, but the green mark for such runs is a bit confusing.
Not everyone will actually check all the logs and reports, and a green mark reads as a green light that everything is fine. This needs improvement.

Allow runner deployment and development without github

  1. Modify the runner deployment playbook to allow deploying a runner without GitHub communication.
  2. Provide steps / a script to deploy a runner without the GitHub part.
  3. Update the build / run-pytest scripts to enable local builds and test runs.

runner crashes while watching PR queue #2

Traceback (most recent call last):
  File "/usr/lib/python3.5/site-packages/requests/packages/urllib3/connectionpool.py", line 385, in _make_request
    httplib_response = conn.getresponse(buffering=True)
TypeError: getresponse() got an unexpected keyword argument 'buffering'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.5/site-packages/requests/packages/urllib3/connectionpool.py", line 578, in urlopen
    chunked=chunked)
  File "/usr/lib/python3.5/site-packages/requests/packages/urllib3/connectionpool.py", line 387, in _make_request
    httplib_response = conn.getresponse()
  File "/usr/lib64/python3.5/http/client.py", line 1198, in getresponse
    response.begin()
  File "/usr/lib64/python3.5/http/client.py", line 297, in begin
    version, status, reason = self._read_status()
  File "/usr/lib64/python3.5/http/client.py", line 266, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.5/site-packages/requests/adapters.py", line 403, in send
    timeout=timeout
  File "/usr/lib/python3.5/site-packages/requests/packages/urllib3/connectionpool.py", line 623, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/usr/lib/python3.5/site-packages/requests/packages/urllib3/util/retry.py", line 255, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/usr/lib/python3.5/site-packages/requests/packages/urllib3/packages/six.py", line 685, in reraise
    raise value.with_traceback(tb)
  File "/usr/lib/python3.5/site-packages/requests/packages/urllib3/connectionpool.py", line 578, in urlopen
    chunked=chunked)
  File "/usr/lib/python3.5/site-packages/requests/packages/urllib3/connectionpool.py", line 387, in _make_request
    httplib_response = conn.getresponse()
  File "/usr/lib64/python3.5/http/client.py", line 1198, in getresponse
    response.begin()
  File "/usr/lib64/python3.5/http/client.py", line 297, in begin
    version, status, reason = self._read_status()
  File "/usr/lib64/python3.5/http/client.py", line 266, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
requests.packages.urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./prci.py", line 193, in <module>
    task.take(runner_id)
  File "/home/sharp/git/freeipa-pr-ci/github/prci_github/internals.py", line 219, in take
    status = Status(self.repo, self.pull, self.name)
  File "/home/sharp/git/freeipa-pr-ci/github/prci_github/internals.py", line 80, in __init__
    for status in repo.commit(pull.pull.head.sha).statuses():
  File "/usr/lib/python3.5/site-packages/github3/repos/repo.py", line 487, in commit
    json = self._json(self._get(url), 200)
  File "/usr/lib/python3.5/site-packages/github3/models.py", line 185, in _get
    return self.session.get(url, **kwargs)
  File "/usr/lib/python3.5/site-packages/requests/sessions.py", line 487, in get
    return self.request('GET', url, **kwargs)
  File "/usr/lib/python3.5/site-packages/github3/session.py", line 88, in request
    response = super(GitHubSession, self).request(*args, **kwargs)
  File "/usr/lib/python3.5/site-packages/requests/sessions.py", line 475, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python3.5/site-packages/requests/sessions.py", line 585, in send
    r = adapter.send(request, **kwargs)
  File "/home/sharp/git/freeipa-pr-ci/github/prci_github/adapter.py", line 36, in send
    request, *args, **kwargs)
  File "/usr/lib/python3.5/site-packages/cachecontrol/adapter.py", line 50, in send
    resp = super(CacheControlAdapter, self).send(request, **kw)
  File "/usr/lib/python3.5/site-packages/requests/adapters.py", line 453, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',))

Improve usability and sustainability of PR CI

Make the lives of PR CI users and developers easier by addressing some of the minor issues.

Benefits

  • More usable, stable and robust interface to deal with
  • Easier to use and maintain

Tasks

Ordered by priority; not everything has to be done, and some of the minor items may be too difficult to implement to be worth it.

  • change re-run behavior to only re-run tasks in failed/error state #81, #83
  • bring back logging of py.test's output #76, https://pagure.io/freeipa/issue/7186
  • do not require reboot during runner deployment #113
  • update developer documentation (re-run) #93
  • update developer documentation (ansible and vagrant) #97
  • update README.md #107
  • unify replica numbering for logs and tests #74
  • properly support hypervisor machines (vmx flag needed?) and test the updated Beaker file #89
  • improve help of prci_test_control.py #88
  • limit the size of systemd journal #53
  • move package list to a separate file #106
  • improve cli for create-template-box #98

  • runner disk cleanup #27
  • add a failover storage for logs #71
  • add informative messages to job status from py.test #17

Automate provisioning runner machines from OpenStack

Most of our runners are in OpenStack and they need to be recreated occasionally. A tool could automate the provisioning of the machines used as runners (see the sketch after this list). This would include:

  • specifying which openstack instance to use,
  • selecting image, flavor, keypair, ...,
  • assigning a floating IP,
  • configuring authorized_keys to be able to connect as root with the freeipa_pr_ci key.
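
With openstacksdk this could look roughly as follows; the cloud, image, flavor and key names are placeholders:

import openstack

conn = openstack.connect(cloud='prci')  # cloud name from clouds.yaml, assumed

server = conn.create_server(
    name='pr-ci-runner-01',
    image='Fedora-Cloud-Base',
    flavor='ci.medium',
    key_name='freeipa_pr_ci',
    wait=True,
    auto_ip=True,  # assign a floating IP automatically
)
print(server.public_v4)
# The freeipa_pr_ci public key still has to end up in root's authorized_keys,
# e.g. via cloud-init user data or a follow-up Ansible play.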

freeipa-pr-ci does not support symlinks in FreeIPA repository

When symlinks are added to the FreeIPA repository, freeipa-pr-ci fails while creating the tarball.

Error from log:

Error when writing tar.gz archive at /root/rpmbuild/SOURCES/freeipa-4.5.90test.tar.gz: [Errno 2] No such file or directory: '/tmp/freeipa-4.5.90test/install/ui/js/plugins'
2017-07-31 11:42:39,764 DEBUG to retry, use: --limit @/root/freeipa-pr-ci/ansible/build.retry

Link to the log with error: runner.log.gz
