
Fabric8 Pipeline Library

This git repository contains a library of reusable Jenkins Pipeline steps and functions that can be used in your Jenkinsfile to help improve your Continuous Delivery pipeline.


The idea is to promote sharing of scripts across projects where it makes sense.

How to use this library

This library is intended to be used with fabric8's Jenkins image that is deployed as part of the fabric8 platform.

To use the functions in this library just add the following to the top of your Jenkinsfile:

@Library('github.com/fabric8io/fabric8-pipeline-library@master') _

That will use the master branch of this library. You can also pin to a specific tag or commit SHA of this repository.
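For example, to pin the library to a specific tag, or to point at your own fork (the tag and organisation names below are illustrative, not real releases):

```groovy
// Pin to a specific tag of the shared library (tag name is illustrative)
@Library('github.com/fabric8io/fabric8-pipeline-library@v2.2.8') _

// Or reference your own fork (organisation name is illustrative)
@Library('github.com/myorg/fabric8-pipeline-library@master') _
```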

Making changes

Feel free to reuse a version of this library as is. However if you want to make changes, please fork this repository and change it in your own fork!

Then just refer to your fork in the @Library() annotation as shown above.

If you do make local changes, we'd love a pull request back; contributions are always welcome!

Requirements

These flows make use of the Fabric8 DevOps Pipeline Steps and the kubernetes-plugin, which help when working with Fabric8 DevOps, in particular for clean integration with the Hubot chat bot and human approval of staging, promotion and releasing.

Functions from the Jenkins global library

Approve

  • requests approval in a pipeline
  • hubot integration prompts the chat room for approvals, including links to environments
  • sends events to elasticsearch if running in a configured namespace, enabling out-of-the-box charting of approval wait times

Example:

    approve {
      version = '0.0.1'
      console = 'http://fabric8.kubernetes.fabric8.io'
      environment = 'staging'
    }

Deploy Project

  • applies a kubernetes json resource to the host OpenShift / Kubernetes cluster
  • lazily creates the environment if it doesn't already exist

    deployProject {
      stagedProject = 'my-project'
      resourceLocation = 'target/classes/kubernetes.json'
      environment = 'staging'
    }

Drop Project

Used when an approval is aborted:

  • drops the OSS sonatype staged repository
  • closes any pull requests that have been created based on the release
  • deletes the branch relating to the PR mentioned above

    dropProject {
      stagedProject = project
      pullRequestId = '1234'
    }

Get Deployment Resources

  • returns a default OpenShift or Kubernetes YAML that can be used by kubernetes-workflow apply step
  • returns a service, deployment / deployment config YAML using sensible defaults
  • can be used in conjunction with kubernetesApply

    node {
        def resources = getDeploymentResources {
          port = 8080
          label = 'node'
          icon = 'https://cdn.rawgit.com/fabric8io/fabric8/dc05040/website/src/images/logos/nodejs.svg'
          version = '0.0.1'
        }

        kubernetesApply(file: resources, environment: 'my-cool-app-staging', registry: 'myexternalregistry.io:5000')
    }

Get Kubernetes JSON

WARNING: this function is deprecated. Please switch to getDeploymentResources{}.

  • returns a default OpenShift template that is translated into a Kubernetes List by the kubernetes-workflow apply step when running on Kubernetes
  • returns a service and replication controller JSON using sensible defaults
  • can be used in conjunction with kubernetesApply

    node {
        def rc = getKubernetesJson {
          port = 8080
          label = 'node'
          icon = 'https://cdn.rawgit.com/fabric8io/fabric8/dc05040/website/src/images/logos/nodejs.svg'
          version = '0.0.1'
        }

        kubernetesApply(file: rc, environment: 'my-cool-app-staging', registry: 'myexternalregistry.io:5000')
    }

Get New Version

  • returns the short git sha for the current project to be used as a version

    def newVersion = getNewVersion{}
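The returned version is typically passed on to other steps; for example, feeding it into mavenCanaryRelease (described below):

```groovy
def canaryVersion = getNewVersion {}
mavenCanaryRelease {
    version = canaryVersion
}
```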

Maven Canary Release

  • creates a release branch

  • sets the maven pom versions using versions-maven-plugin

    • version can be overridden as follows

        mavenCanaryRelease{
          version = canaryVersion
        }
  • runs mvn deploy docker:build

  • the default goal of "install" can be overridden

    mavenCanaryRelease{
      goal = "deploy"     // executes `mvn deploy` instead of `mvn install`
    }
  • the default profile of "openshift" can be overridden

    mavenCanaryRelease{
      profile = "osio"    // executes `mvn install -P osio`
    }
  • the mvn command can be overridden by setting the cmd variable

    mavenCanaryRelease{
      cmd = "mvn clean -B -e -U deploy -Dmaven.test.skip=true -P profile"
    }

NOTE: if cmd is set, goal, profile and skipTests will have no effect.

  • automatically updates the fabric8 maven plugin in the application's pom.xml; true by default

    mavenCanaryRelease{
      autoUpdateFabric8Plugin = false // disables patching pom.xml
    }

Maven Integration Test

  • lazily creates a test environment in kubernetes
  • runs maven integration tests in the test environment

    mavenIntegrationTest {
      environment = 'Testing'
      failIfNoTests = 'false'
      itestPattern = '*KT'
    }
  • pass the cmd parameter to override the mvn command executed for the integration test

    mavenIntegrationTest {
      cmd = 'mvn -P openshift-it org.apache.maven.plugins:maven-failsafe-plugin:verify'
    }

Note: all other flags are ignored if mavenIntegrationTest is given the cmd parameter.

Merge and Wait for Pull Request

  • adds a [merge] comment to a github pull request
  • waits for the GitHub pull request to be merged by an external CI system

    mergeAndWaitForPullRequest {
      project = 'fabric8/fabric8'
      pullRequestId = prId
    }

Perform Canary Release

  • generic function used by non-Java projects
  • gets a new version based on the short git sha
  • builds a docker image using a Dockerfile in the root of the project
  • tags the image with the release version, prefixed with the private fabric8 docker registry for the current namespace
  • if running in a multi-node cluster, performs a docker push; this is not needed in a single-node setup as the image is built and cached locally

    stage 'Canary release'
    echo 'NOTE: running pipelines for the first time will take longer as build and base docker images are pulled onto the node'
    if (!fileExists ('Dockerfile')) {
      writeFile file: 'Dockerfile', text: 'FROM django:onbuild'
    }

    def newVersion = performCanaryRelease {}

REST Get URL

  • utility function returning the JSON contents of a REST GET request

    def apiUrl = new URL("https://api.github.com/repos/${config.name}/pulls/${id}")
    def rs = restGetURL{
      authString = githubToken
      url = apiUrl
    }
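The return value is the parsed JSON, so fields can be read directly. A sketch continuing the example above (the field names follow the GitHub pulls API):

```groovy
def rs = restGetURL {
    authString = githubToken
    url = apiUrl
}
// read fields from the parsed JSON response
echo "pull request '${rs.title}' is ${rs.state}"
```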

Update Maven Property Version

During a release involving multiple java projects we often need to update downstream maven poms with new versions of a dependency. In a release pipeline we want to automate this, set up a pull request and let CI run to make sure there's no conflicts.

  • performs a search and replace in the maven pom
  • finds the latest version available in maven central (repo is configurable)
  • if newer version exists pom is updated
  • pull request submitted
  • pipeline will wait until this is merged before continuing

If CI fails and updates are required as a result of the dependency upgrade then

  • pipeline will notify a chat room (we use Slack)
  • informs the team of the git commands needed to clone, switch to the version update branch and command to push back once fixed
  • pipeline will wait until the CI passes before continuing

Automating this has saved us a lot of time during the release pipeline.

    def properties = []
    properties << ['<fabric8.version>','io/fabric8/kubernetes-api']
    properties << ['<docker.maven.plugin.version>','io/fabric8/docker-maven-plugin']

    updatePropertyVersion {
      updates = properties
      repository = source // if null defaults to http://central.maven.org/maven2/
      project = 'fabric8io/ipaas-quickstarts'
    }

Wait Until Artifact Synced With Maven Central

When working with open source java projects we need to stage artifacts with OSS Sonatype in order to promote them into maven central. This can take 10-30 mins depending on the size of the artifacts being synced.

It is useful to be notified in chat when artifacts are available in maven central, and to block the pipeline until we're sure the promotion has worked.

  • polls waiting for artifacts to be available in maven central

    waitUntilArtifactSyncedWithCentral {
      repo = 'http://central.maven.org/maven2/'
      groupId = 'io.fabric8.archetypes'
      artifactId = 'archetypes-catalog'
      version = '0.0.1'
      ext = 'jar'
    }

Wait Until Pull Request Merged

During a CD pipeline we often need to wait for external events to complete before continuing. One of the most common events on the fabric8 project is waiting for CI jobs, or for manual review and approval of github pull requests. We don't want to fail a pipeline; rather we wait patiently for the pull requests to merge so we can continue.

  • pull request submitted
  • pipeline will wait until this is merged before continuing

If CI fails and updates are required as a result of the dependency upgrade then

  • pipeline will notify a chat room (we use Slack)
  • informs the team of the git commands needed to clone, switch to the version update branch and command to push back once fixed
  • pipeline will wait until the CI passes before continuing

    waitUntilPullRequestMerged {
      name = 'fabric8io/fabric8'
      prId = '1234'
    }

fabric8 release

These functions are focused specifically on the fabric8 release itself, however they could be used as examples or extended in users' own setups.

The core fabric8 release consists of multiple Java projects that generate Java artifacts, docker images and kubernetes resources. These projects are built and staged together, automatically deployed into a test environment and after approval promoted together ready for the community to use.

When a project is staged, an array is returned and passed to functions further down the pipeline. The structure of this stagedProject array is [config.project, releaseVersion, repoId]:

  • config.project the name of the github project being released e.g. 'fabric8io/fabric8'
  • releaseVersion the new version e.g. '0.0.1'
  • repoId the OSS Sonatype staging repository Id used to interact with Sonatype later on

    def stagedProject = stageProject {
      project = 'fabric8io/ipaas-quickstarts'
      useGitTagForNextVersion = true
    }
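The elements of the returned array can then be read back individually, for example:

```groovy
// stagedProject has the form [config.project, releaseVersion, repoId]
def projectName = stagedProject[0]
def releaseVersion = stagedProject[1]
def repoId = stagedProject[2]
echo "staged ${projectName} ${releaseVersion} in staging repo ${repoId}"
```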

One other important note: on the fabric8 project we don't use the maven release plugin or update to the next SNAPSHOT version, as that causes unwanted noise and commits to our many github repos. Instead we use a fixed x.x-SNAPSHOT development version so we can easily work in development on multiple projects that have maven dependencies on each other.

Now that we don't store the next release version in the poms, we need to figure it out during the release. Rather than store the version number in the repo, which involves a commit and isn't very CD friendly (it would trigger another release just for the version update), we use the git tag. From this we can get the previous release version, increment it and push the new tag back without triggering another release. This may seem a bit strange, but it has held up well and has significantly reduced unwanted SCM commits related to maven releases.
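The tag-based scheme can be sketched roughly as follows (this is an illustration of the idea, not the library's actual implementation):

```groovy
node {
    // find the most recent release tag, e.g. 1.2.3
    def lastTag = sh(script: 'git describe --tags --abbrev=0', returnStdout: true).trim()
    def parts = lastTag.tokenize('.')
    // increment the micro version to get the next release version
    def nextVersion = "${parts[0]}.${parts[1]}.${(parts[2] as int) + 1}"
    echo "next release version is ${nextVersion}"
    // after releasing, tag and push without triggering another release build
    sh "git tag ${nextVersion} && git push origin ${nextVersion}"
}
```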

Promote Artifacts

  • releases OSS sonatype staging repository so that artifacts are synced with maven central
  • commits generated Helm charts to the fabric8 Helm repo
  • if useGitTagForNextVersion is set (true by default) then the next snapshot development version PR is committed

    String pullRequestId = promoteArtifacts {
      projectStagingDetails = config.stagedProject
      project = 'fabric8io/fabric8'
      useGitTagForNextVersion = true
      helmPush = false
    }

Release Project

  • promotes artifacts from OSS sonatype staging repo to maven central
  • promotes images from internal docker registry to dockerhub
  • waits for github pull request to merge if updating next snapshot version (not used by default)
  • waits for artifacts to be synced and available in maven central
  • sends chat notification when artifacts appear in maven central

    releaseProject {
      stagedProject = project
      useGitTagForNextVersion = true
      helmPush = false
      groupId = 'io.fabric8.archetypes'
      githubOrganisation = 'fabric8io'
      artifactIdToWatchInCentral = 'archetypes-catalog'
      artifactExtensionToWatchInCentral = 'jar'
    }

Stage Extra Images

  • takes a list of external images not built by the CD pipeline which need tagging in dockerhub with the new release version
  • pulls the latest images from dockerhub
  • tags them with the new fabric8 release
  • stages them in the internal docker registry

    stageExtraImages {
      images = ['gogs','jenkins','taiga']
      tag = releaseVersion
    }

Stage Project

  • builds and stages a fabric8 java project with OSS sonatype
  • builds docker images and stages them in the internal docker registry
  • stages extra images not built by docker-maven-plugin in the internal docker registry

    def stagedProject = stageProject {
      project = 'fabric8io/ipaas-quickstarts'
      useGitTagForNextVersion = true
    }

Tag Images

  • pulls external images which have been staged in the fabric8 docker registry and pushes the new tag to dockerhub

    tagImages {
      images = ['gogs','jenkins','taiga']
      tag = releaseVersion
    }

Git Tag

  • tags the current git repo with the provided version
  • pushes the tag to the remote repository

    gitTag {
      releaseVersion = '0.0.1'
    }

Deploy Remote OpenShift

Deploys the staged fabric8 release to a remote OpenShift cluster

NOTE: in order for images to be found by the remote OpenShift instance, it must be able to pull images from the staging docker registry. Watch out for private networks and insecure-registry flags.

    node {
      deployRemoteOpenShift {
        url = openshiftUrl
        domain = 'staging'
        stagingDockerRegistry = openshiftStagingDockerRegistryUrl
      }
    }

Deploy Remote Kubernetes

Deploys the staged fabric8 release to a remote Kubernetes cluster

NOTE: in order for images to be found by the remote Kubernetes instance, it must be able to pull images from the staging docker registry. Watch out for private networks and insecure-registry flags.

    node {
      deployRemoteKubernetes {
        url = kubernetesUrl
        defaultNamespace = 'default'
        stagingDockerRegistry = kubernetesStagingDockerRegistryUrl
      }
    }

Add Annotation To Build

Add an annotation to the matching openshift build

    @Library('github.com/fabric8io/fabric8-pipeline-library@master')
    def dummy
    node {
        def utils = new io.fabric8.Utils()
        utils.addAnnotationToBuild('fabric8.io/foo', 'bar')
    }

Understanding how it works

Most of the functions provided by this library are meant to run inside a Kubernetes or OpenShift pod. Those pods are managed by the kubernetes plugin. This library abstracts the pipeline capabilities of the kubernetes plugin to make it easier to use. For example, when you need a pod with maven capabilities, instead of defining something like:

podTemplate(label: 'maven-node', containers: [
    containerTemplate(name: 'maven', image: 'maven:3.3.9-jdk-8-alpine', ttyEnabled: true, command: 'cat')
  ],
  volumes: [secretVolume(secretName: 'shared-secrets', mountPath: '/etc/shared-secrets')]) {

    node('maven-node') {
        container(name: 'maven') {
            ...
        }
    }
  }

You can just use the mavenTemplate provided by this library:

mavenTemplate(label: 'mylabel') {
    node('mylabel') {
        container(name: 'maven') {
          ...
        }
    }
}

or for ease of use you can directly reference the mavenNode:

mavenNode {
    container(name: 'maven') {
        ...
    }
}

Template vs Node

A template defines what the jenkins slave pod will look like, but the pod is not created until a node is requested. When a node is requested, the matching template is selected and a pod is created from it.

The library provides shortcut functions for both nodes and templates. In most cases you will just need the node. The only exception is when you need to mix and match (see mixing and matching).

The provided node / template pairs are the following:

  • maven Provides maven capabilities.
  • docker Provides access to the docker client and socket.
  • release Mounts release related secrets (e.g. gpg keys, ssh keys etc).
  • clients Provides access to the kubernetes and openshift binaries.

Maven Node

Provides maven capabilities by adding a container with the maven image. The container mounts the following volumes:

  • Secret jenkins-maven-settings Add your maven configuration here.
  • PersistentVolumeClaim jenkins-mvn-local-repo The maven local repository to use.

The maven node and template support limited customization through the following properties:

  • mavenImage Select the maven docker image to use.

Example:

mavenNode(mavenImage: 'maven:3.3.9-jdk-7') {
    container(name: 'maven') {
        sh 'mvn clean install'
    }
}

Docker Node

Provides docker capabilities by adding a container with the docker binary. The container mounts the following volumes:

  • HostPathVolume /var/run/docker.sock The docker socket.

Host path mounts are not allowed everywhere, so use them with caution. Also note that the volume is mounted into all containers in the pod, so if we add a maven container to the pod, it will also have docker capabilities.

The docker node and template support limited customization through the following properties:

  • dockerImage Select the docker image to use.

Example:

dockerNode(dockerImage: 'docker:1.11.2') {
    container(name: 'docker') {
        sh 'docker build -t myorg/myimage .'
    }
}

Clients Node

Provides access to the kubectl and oc binaries by adding a container to the pod that provides them. The container is configured exactly as the docker container provided by the dockerTemplate.

Example:

clientsNode(clientsImage: 'fabric8/builder-clients:latest') {
    container(name: 'clients') {
        sh 'kubectl create -f ./target/classes/META-INF/kubernetes/kubernetes.yml'
    }
}

Release Node

Provides release capabilities by enriching the jenkins slave pod with the proper environment variables and volumes.

  • Secret jenkins-release-gpg Mounts the GPG keys used for signing releases.

Also the following environment variables will be available to all containers:

  • SONATYPE_USERNAME
  • SONATYPE_PASSWORD
  • GPG_PASSPHRASE
  • NEXUS_USERNAME
  • NEXUS_PASSWORD

These variables obtain their values from the jenkins container (they are copied across).

Example:

releaseTemplate {
    mavenNode {
        container(name: 'docker') {
            sh 'docker build -t myorg/myimage .'
        }
    }
}
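A sketch of reading the injected credentials inside a release pod (the variable names come from the list above; the commands are illustrative):

```groovy
releaseTemplate {
    mavenNode {
        container(name: 'maven') {
            // the release template copies SONATYPE_USERNAME etc. into every container
            sh 'echo "deploying as $SONATYPE_USERNAME"'
            sh 'mvn deploy'
        }
    }
}
```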

Mixing and matching

There are cases where we might need a more complex setup that may require more than a single template. (e.g. a maven container that can run docker builds).

For this case you can combine the docker template and the maven template together:

dockerTemplate {
    mavenTemplate(label: 'maven-and-docker') {
        node('maven-and-docker') {
             container(name: 'maven') {
                sh 'mvn clean package fabric8:build fabric8:push'
             }
        }
    }
}

The above is equivalent to:

dockerTemplate {
    mavenNode(label: 'maven-and-docker') {
        container(name: 'maven') {
            sh 'mvn clean package fabric8:build fabric8:push'
        }
    }
}

In the example above we can add release capabilities too, by adding the releaseTemplate:

        dockerTemplate {
            releaseTemplate {
                mavenNode(label: 'maven-and-docker') {
                    container(name: 'maven') {
                        sh """
                            mvn release:clean release:prepare
                            mvn clean release:perform
                        """
                    }
                }
            }
        }

Creating and using your own templates

If the existing selection of templates is limiting, you can also create your own. Templates can be created either by using the Jenkins administration console or by using groovy.

Using the Jenkins Administration Console

In the console choose Manage Jenkins -> Configure System and scroll down to the section Cloud -> Kubernetes. There you can click Add Pod Template to create your own using the wizard.

Then you can instantiate the template by creating a node that references the template's label:

        node('my-custom-template') {
        }

Note: You can use this template to mix and match too. For example you can combine your custom template with an existing one:

        mavenNode(inheritFrom: 'my-custom-template') {
        }

Contributors

bartoszmajsak, chmouel, dgutride, edewit, fusesource-ci, hectorj2f, hrishin, iocanel, jaseemabid, jerboaa, jetersen, jimmidyson, joshuawilson, jstrachan, kadel, kameshsampath, kishansagathiya, ladicek, ldimaggi, msrb, piyush-garg, rajdavies, rawlingsj, rhuss, rupalibehera, sbose78, sthaha, vpavlin, yzainee, yzainee-zz


fabric8-pipeline-library's Issues

Adding support for Documentation

The pipeline library steps should allow generating gh-pages based on the maven profiles -Pdoc-html and -Pdoc-pdf, typically replicating what is done as part of the fabric8 tools/ci-docs.sh utility.

We can then add a method to release.groovy like:

def documentation(project) {
  Model m = readMavenPom file: 'pom.xml'
  generateWebsiteDocs {
    project = project[0]
    releaseVersion = project[1]
    artifactId = m.artifactId
  }
}

Which will generate the documentation and push it to the gh-pages branch of the repo.

Let's add the ability to use the current branch name to decide if we do a full release, a CI job or a developer build

we could use branch names and naming conventions/patterns to decide which branches are

  • production release branches (where each release creates a new real release version number, artifacts and docker image)
  • CI or PR jobs (where unit tests are run, and maybe a local snapshot image is built and tested locally, but nothing is pushed for real)
  • a developer editing branch, where a developer image is updated and used in a developer namespace (so each change is running quickly in a user's namespace) to give a kind of RAD editing environment

We could use variables to define the patterns used to differentiate between the kinds of builds. e.g. branches called master or starting with release could be the default production releases; branches starting with editing- could be developer editing branches and anything else assumed to be CI / PR branches?

Then if folks fork a master branch, they get a new CI build for the changes they push
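The branch-name dispatch described above could be sketched like this (the patterns and echoed build types are assumptions from the proposal, not part of the library):

```groovy
// hypothetical sketch: choose the build type from the branch name
def branch = env.BRANCH_NAME
if (branch == 'master' || branch.startsWith('release')) {
    echo 'production release build'
} else if (branch.startsWith('editing-')) {
    echo 'developer editing build'
} else {
    echo 'CI / PR build'
}
```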

error starting build pod java.io.IOException: Pipe not connected

Seems to happen when there's low resources or multiple jobs running. This has been seen on OSO and GKE.

Executing shell script inside container [maven] of pod [kubernetes-137ebf2065f949d4acac4e019ed07af7-1e96524904d1e]
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline

GitHub has been notified of this commit's build result

java.io.IOException: Pipe not connected
	at java.io.PipedOutputStream.write(PipedOutputStream.java:140)
	at java.io.OutputStream.write(OutputStream.java:75)
	at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.launch(ContainerExecDecorator.java:125)
	at hudson.Launcher$ProcStarter.start(Launcher.java:384)
	at org.jenkinsci.plugins.durabletask.BourneShellScript.launchWithCookie(BourneShellScript.java:157)
	at org.jenkinsci.plugins.durabletask.FileMonitoringTask.launch(FileMonitoringTask.java:63)
	at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.start(DurableTaskStep.java:172)
	at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:184)
	at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:126)
	at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:108)
	at groovy.lang.GroovyObject$invokeMethod.call(Unknown Source)
	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
	at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:151)
	at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:21)
	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:115)
	at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:149)
	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:146)
	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:123)
	at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:16)
	at stageProject.call(/var/jenkins_home/jobs/fabric8-cd/jobs/fabric8-maven-plugin/branches/master/builds/16/libs/github.com/fabric8io/fabric8-pipeline-library/vars/stageProject.groovy:18)
	at ___cps.transform___(Native Method)
	at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:57)
	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:109)
	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82)
	at sun.reflect.GeneratedMethodAccessor239.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
	at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
	at com.cloudbees.groovy.cps.Next.step(Next.java:74)
	at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:154)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:33)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:30)
	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:108)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:30)
	at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:165)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:328)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$100(CpsThreadGroup.java:80)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:240)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:228)
	at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:64)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:112)
	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Finished: FAILURE

approve step does not seem to resume after jenkins restart

I just got this after a pipeline was in the approve state for a while. Am guessing jenkins pod got killed:

Proceed or Abort
Resuming build at Tue Apr 18 18:50:27 UTC 2017 after Jenkins restart
Waiting to resume Unknown Pipeline node step: jenkins-slave-rcklt-5vn7r is offline
Waiting to resume Unknown Pipeline node step: jenkins-slave-rcklt-5vn7r is offline
... (message repeated) ...
Waiting to resume Unknown Pipeline node step: Jenkins doesn't have label jenkins-slave-rcklt-5vn7r
Waiting to resume Unknown Pipeline node step: Jenkins doesn't have label jenkins-slave-rcklt-5vn7r
... (message repeated) ...

commit generated non-Java deployment / deployment config YAMLs to source code repo

Folks want to customise the generated deployment / deployment config YAMLs. In Java this is done with the help of the fabric8-maven-plugin. For non-Java pipelines we use the shared function https://github.com/fabric8io/fabric8-pipeline-library/blob/master/vars/getDeploymentResources.groovy; we should raise a PR to merge the parameterised YAML so folks can customise it in their repo, and when the pipeline runs it will still replace the version number, project name, labels etc.

if planner (work item tracker) is running we should POST an update to the REST API when we have promoted a build

for background see this issue:
fabric8-services/fabric8-wit#726

Essentially, if we can detect that the planner / work item tracker is running (e.g. via a Kubernetes Service being present, or via configuration as per this issue: #74), and once we have the new REST API as per fabric8-services/fabric8-wit#726, then when a kubernetesApply() is done and the deployment has completed we should POST the necessary JSON to the REST API so that the work item tracker can update the issue with a comment that something is ready for test etc.
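A rough sketch of what such a step could look like. Everything here is an assumption: the Service name, the REST path and the JSON shape all depend on the fabric8-wit API being defined in issue #726.

```groovy
// Hypothetical sketch - service name 'planner', the /api/deployments path and
// the JSON fields are placeholders until the fabric8-wit API is finalised.
def notifyPlanner(String environment, String version) {
    def flow = new io.fabric8.Fabric8Commands()
    if (!flow.hasService('planner')) {
        echo 'no planner service found, skipping work item update'
        return
    }
    def payload = groovy.json.JsonOutput.toJson([
        environment: environment,
        version    : version,
        message    : "version ${version} is ready for test in ${environment}"
    ])
    sh "curl -s -X POST -H 'Content-Type: application/json' -d '${payload}' http://planner/api/deployments"
}
```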

PodTemplates should be optionally named

The next release of the kubernetes-plugin will allow the user to name the build pod, based on the name set on the PodTemplate (currently pods are named kubernetes-xxx-yyy-zzz, which is meaningless).

So it would be great if we were able to optionally pass a name.

Since for the biggest part we are composing PodTemplates, I am not sure it makes sense to name them by type (e.g. maven, go, nodejs), though this could possibly be a default value.

Another approach would be to name templates in the same manner as we label them (by job name and build number). This would allow us to easily correlate a build pod with a specific Jenkins build. For example, the pod would be named something like myproject-12-xxx-yyy-zzz.
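A hedged sketch of the job-name/build-number naming approach, assuming the new kubernetes-plugin accepts a name on the PodTemplate (the helper name is hypothetical):

```groovy
// Hypothetical helper: derive an optional pod name from the job name and
// build number, falling back to the template type. Sanitising is needed
// because pod names must be valid DNS-1123 labels.
def podName(String type = 'maven') {
    def jobName = env.JOB_NAME ?: type
    def name = "${jobName}-${env.BUILD_NUMBER}".toLowerCase().replaceAll('[^a-z0-9-]', '-')
    // DNS labels are limited to 63 characters
    return name.take(63)
}
```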

Use semver to work out maven project release versions

Currently our maven Jenkinsfile library works out the release version using the Jenkins build number, which is bad if the Jenkins job is ever recreated.

We could extract the semver code that Java fabric8 projects use themselves to work out the next version.

https://github.com/fabric8io/fabric8-pipeline-library/blob/master/src/io/fabric8/Fabric8Commands.groovy#L180-L209

Then call this new function from the Jenkinsfiles here https://github.com/fabric8io/fabric8-jenkinsfile-library/blob/master/maven/CanaryReleaseStageAndApprovePromote/Jenkinsfile#L25

Bonus points for adding the first unit test for the library too ;)
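A sketch of what the extracted function could look like (this is not the library's actual implementation, just the idea of deriving the next version from the latest semver git tag instead of env.BUILD_NUMBER):

```groovy
// Hypothetical sketch: increment the patch part of the newest semver git tag.
def nextSemverFromTags() {
    // list tags sorted newest-first by version; empty if the repo has no tags
    def latest = sh(script: 'git tag --sort=-v:refname | head -n 1', returnStdout: true).trim()
    if (!latest) {
        return '1.0.0'
    }
    def parts = latest.replaceFirst(/^v/, '').tokenize('.')
    def (major, minor, patch) = [parts[0].toInteger(), parts[1].toInteger(), parts[2].toInteger()]
    return "${major}.${minor}.${patch + 1}"
}
```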

Error in provisioning; slave=KubernetesSlave

Without any changes to our Jenkinsfile, our build started to fail. In the Jenkins log we see:

Feb 27, 2017 10:49:12 AM org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback call
SEVERE: Error in provisioning; slave=KubernetesSlave name: kubernetes-8ca638566b324447ad9fed48eccf8a81-33a3f3091084a, template=org.csanchez.jenkins.plugins.kubernetes.PodTemplate@6db9af81
java.lang.NullPointerException
	at org.csanchez.jenkins.plugins.kubernetes.PodTemplateUtils.combine(PodTemplateUtils.java:59)
	at org.csanchez.jenkins.plugins.kubernetes.PodTemplateUtils.lambda$combine$14(PodTemplateUtils.java:118)
	at java.util.stream.Collectors.lambda$toMap$58(Collectors.java:1321)
	at java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
	at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
	at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
	at org.csanchez.jenkins.plugins.kubernetes.PodTemplateUtils.combine(PodTemplateUtils.java:118)
	at org.csanchez.jenkins.plugins.kubernetes.PodTemplateUtils.unwrap(PodTemplateUtils.java:164)
	at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud.getPodTemplate(KubernetesCloud.java:375)
	at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud.access$000(KubernetesCloud.java:87)
	at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback.call(KubernetesCloud.java:555)
	at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback.call(KubernetesCloud.java:532)
	at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)

This started to happen when 088d36f was committed.

The Jenkinsfile is:

#!/usr/bin/groovy
@Library('github.com/fabric8io/fabric8-pipeline-library@master')

def localFailIfNoTests = ""
try {
  localFailIfNoTests = ITEST_FAIL_IF_NO_TEST
} catch (Throwable e) {
  localFailIfNoTests = "false"
}

def localItestPattern = ""
try {
  localItestPattern = ITEST_PATTERN
} catch (Throwable e) {
  localItestPattern = "*KT"
}


def versionPrefix = ""
try {
  versionPrefix = VERSION_PREFIX
} catch (Throwable e) {
  versionPrefix = "1.0"
}


def utils = new io.fabric8.Utils()
def canaryVersion = "${versionPrefix}.${env.BUILD_NUMBER}"
def label = "buildpod.${env.JOB_NAME}.${env.BUILD_NUMBER}".replace('-', '_').replace('/', '_')

mavenNode{
    checkout scm

  echo 'NOTE: running pipelines for the first time will take longer as build and base docker images are pulled onto the node'
  container(name: 'maven') {

    stage 'Build Release'
    mavenCanaryRelease {
      version = canaryVersion
    }

    stage 'Integration Test'
    mavenIntegrationTest {
      environment = 'Testing'
      failIfNoTests = localFailIfNoTests
      itestPattern = localItestPattern
    }
	
  }
}

Workaround

I found that by using @Library('github.com/fabric8io/fabric8-pipeline-library@versionUpdate3f1bf454-700d-4274-9e22-3b6bab4361bc') it does build.

It would be good if there is a stable branch and perhaps more user friendly branch names.

Missing script approvals configuration in Jenkins CI

Using fabric8-pipeline-library@master (commit 3f84b0b).
Missing script approvals configuration in Jenkins CI.

I've added approvals manually in the "In-process Script Approval" page, but is there a way to configure the "Signatures already approved" list at fabric8 CI/CD creation time?

org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: Scripts not permitted to use staticMethod jenkins.model.Jenkins getInstance
 		at org.jenkinsci.plugins.scriptsecurity.sandbox.whitelists.StaticWhitelist.rejectStaticMethod(StaticWhitelist.java:192)
 		at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onStaticCall(SandboxInterceptor.java:142)
 		at org.kohsuke.groovy.sandbox.impl.Checker$2.call(Checker.java:180)
 		at org.kohsuke.groovy.sandbox.impl.Checker.checkedStaticCall(Checker.java:177)
 		at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:91)
 		at org.kohsuke.groovy.sandbox.impl.Checker$checkedCall$0.callStatic(Unknown Source)
 		at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallStatic(CallSiteArray.java:56)
 		at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callStatic(AbstractCallSite.java:194)
 		at io.fabric8.Fabric8Commands.getCloudConfig(Fabric8Commands.groovy:711)
 		at io.fabric8.Fabric8Commands$getCloudConfig$0.call(Unknown Source)
 		at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
 		at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
 		at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:151)
 		at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:21)
 		at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:115)
 		at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:149)
 		at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:146)
 		at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:16)
 		at mavenTemplate.call(/var/jenkins_home/jobs/spring-petclinic-fabric8/builds/19/libs/github.com/fabric8io/fabric8-pipeline-library/vars/mavenTemplate.groovy:14)
 		at mavenNode.call(/var/jenkins_home/jobs/spring-petclinic-fabric8/builds/19/libs/github.com/fabric8io/fabric8-pipeline-library/vars/mavenNode.groovy:8)
 		at WorkflowScript.run(WorkflowScript:30)
org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: Scripts not permitted to use method jenkins.model.Jenkins getCloud java.lang.String
 		at org.jenkinsci.plugins.scriptsecurity.sandbox.whitelists.StaticWhitelist.rejectMethod(StaticWhitelist.java:178)
 		at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:119)
 		at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:149)
 		at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:146)
 		at org.kohsuke.groovy.sandbox.impl.Checker$checkedCall$0.callStatic(Unknown Source)
 		at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallStatic(CallSiteArray.java:56)
 		at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callStatic(AbstractCallSite.java:194)
 		at io.fabric8.Fabric8Commands.getCloudConfig(Fabric8Commands.groovy:711)
 		at io.fabric8.Fabric8Commands$getCloudConfig$0.call(Unknown Source)
 		at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
 		at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
 		at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:151)
 		at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:21)
 		at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:115)
 		at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:149)
 		at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:146)
 		at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:16)
 		at mavenTemplate.call(/var/jenkins_home/jobs/spring-petclinic-fabric8/builds/20/libs/github.com/fabric8io/fabric8-pipeline-library/vars/mavenTemplate.groovy:14)
 		at mavenNode.call(/var/jenkins_home/jobs/spring-petclinic-fabric8/builds/20/libs/github.com/fabric8io/fabric8-pipeline-library/vars/mavenNode.groovy:8)
 		at WorkflowScript.run(WorkflowScript:30)

One last:

approval: method io.fabric8.kubernetes.client.KubernetesClient services
callers:
at io.fabric8.Fabric8Commands.hasService(Fabric8Commands.groovy:637)
at io.fabric8.Fabric8Commands$hasService$1.call(Unknown Source)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:115)
at sonarQubeScanner.call(/var/jenkins_home/jobs/spring-petclinic-fabric8/builds/21/libs/github.com/fabric8io/fabric8-pipeline-library/vars/sonarQubeScanner.groovy:15)
at mavenCanaryRelease.call(/var/jenkins_home/jobs/spring-petclinic-fabric8/builds/21/libs/github.com/fabric8io/fabric8-pipeline-library/vars/mavenCanaryRelease.groovy:62)

lets make it easier to configure some aspects of pipelines via a ConfigMap in Kubernetes/OpenShift

e.g. things like which branches are CD release branches versus CI branches/PRs versus developer branches (run tests + re-run apps fast) - see #3

We also should make it easy to enable/disable various features like:

  • generate maven site report
  • generate changelog report
  • run sonarqube
  • run Bayesian reports
  • run selenium tests

I'm not sure of the perfect approach; do we use the fabric8.yml file to enable/disable those features, or use a ConfigMap?

Either way we should come up with a standard function to wrap that up, so that we can make the pipelines configurable to enable/disable feature flags from a nice UI or CLI tool without users having to hack Groovy source etc.
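If the ConfigMap route were chosen, the wrapper function could look something like this. The ConfigMap name and flag keys here are hypothetical; the client calls are the standard fabric8 kubernetes-client API.

```groovy
// Hypothetical sketch, assuming a ConfigMap named 'fabric8-pipelines' holding
// simple string flags like 'run-sonarqube: "true"'.
def isFeatureEnabled(String flag, boolean defaultValue = false) {
    def kubernetes = new io.fabric8.kubernetes.client.DefaultKubernetesClient()
    def cm = kubernetes.configMaps()
            .inNamespace(kubernetes.getNamespace())
            .withName('fabric8-pipelines')
            .get()
    def value = cm?.data?.get(flag)
    return value != null ? value.toBoolean() : defaultValue
}

// usage in a pipeline:
// if (isFeatureEnabled('run-sonarqube')) { sonarQubeScanner {} }
```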

lets use the new API to query if a job name / branch name / gitUrl is a CI / CD / Developer pipeline

The current isCI() and isCD() functions should now delegate to a getPipeline() helper method, which should lazily invoke this code: https://github.com/fabric8io/fabric8/blob/master/components/kubernetes-api/src/main/java/io/fabric8/kubernetes/api/pipelines/Pipelines.java#L34

and cache the Pipeline object (transiently!) for the lifetime of a job, lazily re-querying if it's null.

something kinda like...

def isCI(){
  return getPipeline().isCI()
}
def isCD(){
  return getPipeline().isCD()
}

// cache this value for later; @Field keeps it alive for the lifetime of the
// script, and transient stops it being serialized with the pipeline state
@groovy.transform.Field
transient io.fabric8.kubernetes.api.pipelines.Pipeline _pipeline = null

def getPipeline() {
  if (_pipeline == null) {
    def kubernetes = new DefaultKubernetesClient()
    def namespace = kubernetes.getNamespace()
    // TODO ensure that BRANCH_NAME and GIT_URL are populated!
    _pipeline = io.fabric8.kubernetes.api.pipelines.Pipelines.getPipeline(kubernetes, namespace, env)
  }
  return _pipeline
}

use different location for local maven repo based on OSO versus OSD

When using OSO we're going to restrict builds to 1 concurrent build per user, in which case it's safe to have a write-once PV for the local Maven repo for doing CD releases or snapshot builds.

However, when using OSD we probably want to use the job workspace as the local Maven repository, so that parallel builds can run without overwriting each other or causing inconsistencies.

So maybe we need a configuration to know whether to use a single read/write-once volume, a read/write-many volume, or workspace-based persistence for builds. Some folks may want to disable persistence entirely too.

Maybe we need a ConfigMap we load to configure these kinds of things?
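The choice could be wrapped in a small helper like the following sketch. The flag is hypothetical; -Dmaven.repo.local is the standard Maven property for relocating the local repository.

```groovy
// Hypothetical sketch: pick the local Maven repository location depending on
// whether a shared repo is safe (one concurrent build per user).
def mavenRepoArg(boolean sharedRepoSafe) {
    if (sharedRepoSafe) {
        // single concurrent build per user (OSO): a shared PV is safe
        return '-Dmaven.repo.local=/home/jenkins/.mvnrepository'
    }
    // parallel builds (OSD): keep the repo inside the job workspace
    return "-Dmaven.repo.local=${env.WORKSPACE}/.m2/repository"
}

// sh "mvn ${mavenRepoArg(false)} clean deploy"
```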

support a PR comment to skip automatic release

There are occasions, for example a README update or a CI change such as a Jenkins plugin bump, where a developer doesn't want a full release automatically triggered.

CD pipelines should check the PR comments to see if we have a @fabric8cd skip release comment or something similar.
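The check could be a small helper along these lines. The GitHub issue-comments endpoint is real; the coordinates are placeholders and a real version would need to handle authentication and rate limits.

```groovy
// Hypothetical sketch: scan PR comments for a skip-release marker.
def shouldSkipRelease(String org, String repo, String prNumber) {
    def url = "https://api.github.com/repos/${org}/${repo}/issues/${prNumber}/comments"
    def comments = new groovy.json.JsonSlurper().parseText(new URL(url).text)
    return comments.any { it.body?.contains('@fabric8cd skip release') }
}

// if (shouldSkipRelease('fabric8io', 'fabric8-pipeline-library', env.CHANGE_ID)) {
//     echo 'skip release requested via PR comment'
//     return
// }
```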

Jenkinsfiles should assert that the rollout to Staging/Production succeeds

Right now a pipeline could fail to get a new version of a pod running in an environment (e.g. the pod never becomes ready, maybe due to quota issues or a missing environment-specific Service, Secret or ConfigMap).

Currently, once the apply is done, kubernetesApply() just assumes everything's great and carries on.

It would be nice to have a better flavour of this which does the Arquillian equivalent of this line:

             assertThat(kubernetesClient).deployments().pods().isPodReadyForPeriod();

Then the pipeline would wait for the pods to go green and be ready (readiness checks + liveness checks kick in); if things don't work it'd fail the build.

Maybe extra bonus points would be to automatically rollback the Deployment change if the new version doesn't startup correctly?
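Outside the Arquillian approach, a simple sketch of the same assertion (not the library's implementation) is to block on the rollout after kubernetesApply(), assuming kubectl is available in the build container:

```groovy
// Sketch: 'kubectl rollout status' blocks until the Deployment is ready or
// the timeout expires, failing the build on error.
def waitForRollout(String deployment, String namespace, String timeout = '300s') {
    sh "kubectl rollout status deployment/${deployment} -n ${namespace} --timeout=${timeout}"
}

// bonus points: roll back automatically if the new version never becomes ready
// try { waitForRollout('myapp', 'staging') }
// catch (err) { sh 'kubectl rollout undo deployment/myapp -n staging'; throw err }
```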

Utils.groovy findTagSha logic

The following assumption in the code is not true in any of the OpenShift deployments I have tried:

def findTagSha(OpenShiftClient client, String imageStreamName, String namespace) {

...

// latest tag is the first
TAG_EVENT_LIST:
for (def list : tags) {

The order of the tags in an ImageStream seems to be random, so picking the first tag found does not work reliably.

eg.

status:
dockerImageRepository: 172.30.209.124:5000/mta/simontest123
tags:

  • items:
    • created: 2017-05-09T23:58:58Z
      dockerImageReference: 172.30.209.124:5000/mta/simontest123@sha256:7b92ede95898259a8976fbd0013f81309c330b7a0a4d4b794f98bb08174e62a3
      generation: 1
      image: sha256:7b92ede95898259a8976fbd0013f81309c330b7a0a4d4b794f98bb08174e62a3
      tag: 6ea89bb
  • items:
    • created: 2017-05-10T01:18:03Z
      dockerImageReference: 172.30.209.124:5000/mta/simontest123@sha256:59e235aeabc89a3038cc16275c8d3cd7d70a16cfee1f45a1484a890acaae51db
      generation: 1
      image: sha256:59e235aeabc89a3038cc16275c8d3cd7d70a16cfee1f45a1484a890acaae51db
      tag: 7d0ef5a
  • items:
    • created: 2017-05-10T01:02:03Z
      dockerImageReference: 172.30.209.124:5000/mta/simontest123@sha256:4bbf3a31a5474d2455b3c005f55c1d94b23c41324089e4bca710b8f3e86cc037
      generation: 1
      image: sha256:4bbf3a31a5474d2455b3c005f55c1d94b23c41324089e4bca710b8f3e86cc037
      tag: e2b3b93
  • items:
    • created: 2017-05-10T00:57:41Z
      dockerImageReference: 172.30.209.124:5000/mta/simontest123@sha256:2ef9f96201fe7b349ba0fb3afcb9f630d4662c4c59896803cb4e4bd7e732c1b9
      generation: 1
      image: sha256:2ef9f96201fe7b349ba0fb3afcb9f630d4662c4c59896803cb4e4bd7e732c1b9
      tag: e5ad8f0

Fabric8 always picks up old image 7b92ede95898259a8976fbd0013f81309c330b7a0a4d4b794f98bb08174e62a3 and deploys it to staging and production, when it should have used the newer image 59e235aeabc89a3038cc16275c8d3cd7d70a16cfee1f45a1484a890acaae51db
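A sketch of a fix (not the library's current code): rather than taking the first tag, flatten the tag events and pick the one with the newest created timestamp. Since the timestamps are ISO-8601 strings, they compare correctly as strings.

```groovy
// Hypothetical sketch: find the sha of the most recently created tag event
// in an ImageStream's status, regardless of tag ordering.
def findLatestTagSha(imageStream) {
    def allItems = imageStream.status.tags.collectMany { it.items ?: [] }
    def newest = allItems.max { it.created }   // ISO-8601 timestamps sort lexicographically
    return newest?.image
}
```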

Light / modular template

The pod templates that we are currently using (e.g. the maven template) refer to resources managed by gofabric8 and are used to store things like settings, SSH keys, GnuPG keys and more.

It would be nice if we had a flavour of the templates without all these fixed resources, or with those resources being optional.

Something like this would allow the user to get started regardless of how they set up the environment or which Jenkins image they use. Of course, they wouldn't be able to enjoy Fabric8 to its full extent, but they could easily hack a pipeline that does a Maven build, runs the integration/system tests and even updates internal environments, then gradually add more things to the mix. I think that a step-by-step approach is really important, as it gives the user time to digest and better understand how to use our stuff. It also gives us more flexibility.

The implementation is the tricky part....

What I'd like to avoid is an endless chain of if then else.
What I'd also like to avoid is having tons of different templates for the same thing.

What could possibly make sense here, is to leverage template nesting / composition.
So we could have something like a light maven template called withMaven and additional templates that attach the secrets or the rest of the resources (e.g. to define the ssh keys: withSsh). We could then bind them together:

withMaven(mavenImage: 'maven:3.3.9') {
    withSsh('jenkins-ssh') {
        withGpg('jenkins-gpg') {
            //do stuff
        }
    }
}

And if this starts to get verbose, we could hack a withFabric8 that adds the things we need with a simple declaration.

Exception: "Scripts not permitted to use new io.fabric8.openshift.client.DefaultOpenShiftClient"

Aloha,

currently the Fabric8 Jenkins build job runs into a weird exception when creating a new project using v2.2.192.

I suppose it's somehow related to the following line, but I'm not sure:

return new DefaultOpenShiftClient().isAdaptable(OpenShiftClient.class)

The original stack trace can be found here:
https://gist.github.com/anonymous/8b6b08236331677d24e42ff62edf571b

Thanks for any hint,
Qaiser

apps fail if short git sha starts with a zero

I created a .NET app and the version used is the short git SHA 0517806. When the application deployment config YAML was applied, the version changed to 517806.0, which means the image stream isn't found.
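A likely cause (a guess, not confirmed in the issue): an unquoted all-digit SHA with a leading zero matches the YAML float grammar rather than the integer one, so it round-trips as the number 517806.0. Quoting the value in the generated YAML would keep it a string:

```yaml
# unquoted: parsed as a YAML numeric scalar, 0517806 becomes 517806.0
version: 0517806
# quoted: stays the literal string
version: "0517806"
```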

reduce logging of hubot

When I have a pipeline without any hubot configured I still see:

[Pipeline] hubotApprove
Hubot sending to room fabric8_default => Would you like to promote version 2.0.1 to the next environment?

    to Proceed reply:  fabric8 jenkins proceed job maxandersen/mirror/master build 1
    to Abort reply:    fabric8 jenkins abort job maxandersen/mirror/master build 1

No service hubot  is running!!!
No service found!

Two things come to mind:

  1. Why does it even tell me about hubot when I don't have it running? I assume for many this would be the default, and thus 9 lines of output is wasteful. If it must say something, maybe only print 1 line like Service hubot is not running!

  2. Shouldn't enabling/disabling of features in the shared Jenkins pipeline be something controlled by the user? i.e. by the user enabling/disabling extensions, rather than it being defined globally in github.com/fabric8io/fabric8-pipeline-library@master?

new function to promote the release to all the environments in order with optional human approval

we define the environments for a team in the fabric8-environments ConfigMap

Rather than having lots of different jobs that include Staging and/or Production, it might be nice to have a PromoteAll job and a function that promotes a release to all environments defined in the ConfigMap.

Possibly adding an include/exclude list too? E.g. you typically want to exclude Test.

So something like

promoteToEnvironments()

which would default to something like

promoteToEnvironments(excludes = ['Test'], includes = ['*'])

Or something like that?
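A sketch of the proposed function follows. The function name (spelling corrected) and the fabric8-environments ConfigMap come from this issue; how environments are keyed in the ConfigMap and the promotion step itself are assumptions.

```groovy
// Hypothetical sketch: promote a release to every environment listed in the
// team's fabric8-environments ConfigMap, honouring an exclude list.
def promoteToEnvironments(Map args = [:]) {
    def excludes = args.get('excludes', ['Test'])
    def kubernetes = new io.fabric8.kubernetes.client.DefaultKubernetesClient()
    def cm = kubernetes.configMaps()
            .inNamespace(kubernetes.getNamespace())
            .withName('fabric8-environments')
            .get()
    def environments = cm?.data?.keySet() ?: []
    for (envName in environments) {
        if (excludes.contains(envName)) {
            continue
        }
        echo "promoting to ${envName}"
        // promotion + optional human approval would go here, e.g. approve {...}
    }
}
```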

Incorrect namespace created

Steps to reproduce

  1. Add def envStage = utils.environmentNamespace('my-project') to a Jenkinsfile
  2. Run in Jenkins

Expected

We would expect a new namespace of 'my-project' to be created.

Actual
'default-my-project' is created.

Approve step outside of a mavenNode{...} definition doesn't terminate build pod

Testing using a Jenkinsfile of:

#!/usr/bin/groovy
@Library('github.com/fabric8io/fabric8-pipeline-library@master')
def dummy
mavenNode{
  container('maven'){
    echo 'inside build pod'
  }
}
node{
    approve {
      room = null
      version = '1.0.0'
      console = null
      environment = 'Stage'
    }
}

The build pod is kept running until the job has finished rather than being terminated at the closing brace of the mavenNode block. This means build pods stick around during the approve step, which is a waste of resources.

@iocanel suggested trying fabric8io/kubernetes-plugin@2b4f6d8, which works great. I wonder however if instead we need to mark the build pod as complete, like the OpenShift S2I build pods?

No clear upgrade path

The following code from a jenkinsfile used to work:

kubernetes.pod('buildpod').withImage('<ip address>:80/shiftwork/jhipster-build')
      .withPrivileged(true)
      .withHostPathMount('/var/run/docker.sock','/var/run/docker.sock')
      .withEnvVar('DOCKER_CONFIG','/home/jenkins/.docker/')
      .withSecret('jenkins-docker-cfg','/home/jenkins/.docker')
      .withSecret('jenkins-maven-settings','/root/.m2')
      .withServiceAccount('jenkins')
      .inside {

Now however it results in an error.

hudson.remoting.ProxyException: groovy.lang.MissingMethodException: No signature of method: static io.fabric8.kubernetes.pipeline.Kubernetes.withPrivileged() is applicable for argument types: (java.lang.Boolean) values: [true]
	at groovy.lang.MetaClassImpl.invokeStaticMissingMethod(MetaClassImpl.java:1503)
	at groovy.lang.MetaClassImpl.invokeStaticMethod(MetaClassImpl.java:1489)
	at org.codehaus.groovy.runtime.InvokerHelper.invokeMethod(InvokerHelper.java:897)
	at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodN(ScriptBytecodeAdapter.java:168)
	at io.fabric8.kubernetes.pipeline.Kubernetes$Pod.methodMissing(Kubernetes.groovy)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
	at groovy.lang.MetaClassImpl.invokeMissingMethod(MetaClassImpl.java:941)
	at groovy.lang.MetaClassImpl.invokePropertyOrMissing(MetaClassImpl.java:1264)
	at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1217)
	at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1024)
	at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:812)
	at io.fabric8.kubernetes.pipeline.Kubernetes$Pod.invokeMethod(Kubernetes.groovy)
	at groovy.lang.GroovyObject$invokeMethod.call(Unknown Source)
	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
	at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:151)
	at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:21)
	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:115)
	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:103)
	at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:149)
	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:146)
	at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:16)
	at WorkflowScript.run(WorkflowScript:33)
	at ___cps.transform___(Native Method)
	at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:57)
	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:109)
	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82)
	at sun.reflect.GeneratedMethodAccessor240.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
	at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
	at com.cloudbees.groovy.cps.Next.step(Next.java:58)
	at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:154)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:33)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:30)
	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:108)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:30)
	at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:163)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:328)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$100(CpsThreadGroup.java:80)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:240)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:228)
	at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:63)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:112)
	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)

Perhaps I missed a blog post, but I am unaware of any upgrade path from a previous version.

It would seem the withPrivileged() method has not been deprecated; it has just been removed.

https://github.com/fabric8io/fabric8-jenkinsfile-library/search?utf8=%E2%9C%93&q=withPrivileged
https://github.com/fabric8io/fabric8-pipeline-library/search?utf8=%E2%9C%93&q=withPrivileged

It would be good to know how to upgrade.
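For what it's worth, a rough sketch of the equivalent using the current kubernetes-plugin podTemplate DSL is below. This is not an official migration guide; the container name is made up and the image/mounts are copied from the old snippet.

```groovy
// Hypothetical sketch: the old kubernetes.pod(...).with*(...) chain expressed
// with the kubernetes-plugin podTemplate/containerTemplate DSL.
podTemplate(label: 'buildpod', serviceAccount: 'jenkins',
    containers: [
        containerTemplate(name: 'build', image: '<ip address>:80/shiftwork/jhipster-build',
            privileged: true, ttyEnabled: true, command: 'cat',
            envVars: [envVar(key: 'DOCKER_CONFIG', value: '/home/jenkins/.docker/')])
    ],
    volumes: [
        hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock'),
        secretVolume(secretName: 'jenkins-docker-cfg', mountPath: '/home/jenkins/.docker'),
        secretVolume(secretName: 'jenkins-maven-settings', mountPath: '/root/.m2')
    ]) {
    node('buildpod') {
        container('build') {
            // build steps here
        }
    }
}
```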

Installed relevant Jenkins plugins

Below are the relevant plugins we have installed (see screenshot of the plugin manager).
These two plugins appear to be child plugins of https://github.com/jenkinsci/kubernetes-pipeline-plugin.

Analysis

It seems there are several related projects. In order to get more clarity I've compiled the table below.

| Name | Active | Has withPrivileged() | Comments |
|------|--------|----------------------|----------|
| jenkins-pipeline-library | No - deprecated | Yes | |
| fabric8-pipeline-library | Yes | No | |
| kubernetes-plugin | Yes | Yes | Extends Jenkins Pipeline to allow building and testing inside Kubernetes pods, reusing Kubernetes features like pods, build images, service accounts, volumes and secrets while providing an elastic slave pool (each build runs in new pods). |
| fabric8-jenkinsfile-library | Yes | No | |
| kubernetes-pipeline-plugin 1.4-SNAPSHOT | Yes | Yes | Uses the io.fabric8.kubernetes.pipeline package name, yet is not in the fabric8 GitHub project. Extends Jenkins Pipeline to provide native support for using Kubernetes pods, secrets and volumes to perform builds. |

java pipelines should support using next pom or git tag version rather than using jenkins build number

This is just a quick thought whilst it's on my mind..

To get the next pom version we could use something like this in mavenCanaryRelease.groovy.

What's missing is the PR to update the next pom version number. This isn't a great approach; we could instead do what fabric8 itself does and base the next version on incrementing the latest git tag. That way no code changes are needed for the next version.

#!/usr/bin/groovy
def call(body) {
    // evaluate the body block, and collect configuration into the object
    def config = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = config
    body()

    def flow = new io.fabric8.Fabric8Commands()
    def s2iMode = flow.isOpenShiftS2I()
    echo "s2i mode: ${s2iMode}"
    def m = readMavenPom file: 'pom.xml'
    def version

    sh "git checkout -b ${env.JOB_NAME}-${env.BUILD_NUMBER}"

    if (config.version){
        version = config.version
        sh "mvn org.codehaus.mojo:versions-maven-plugin:2.2:set -U -DnewVersion=${version}"
    } else {
        sh 'mvn build-helper:parse-version versions:set -DnewVersion=\\\${parsedVersion.majorVersion}.\\\${parsedVersion.minorVersion}.\\\${parsedVersion.nextIncrementalVersion} '
        m = readMavenPom file: 'pom.xml'
        version = m.version
    }

    sh "mvn clean -e -U deploy"

    if (flow.isSingleNode()){
        echo 'Running on a single node, skipping docker push as not needed'

        def groupId = m.groupId.split( '\\.' )
        def user = groupId[groupId.size()-1].trim()
        def artifactId = m.artifactId

       if (!s2iMode) {
           sh "docker tag ${user}/${artifactId}:${version} ${env.FABRIC8_DOCKER_REGISTRY_SERVICE_HOST}:${env.FABRIC8_DOCKER_REGISTRY_SERVICE_PORT}/${user}/${artifactId}:${version}"
       }
    } else {
      if (!s2iMode) {
        retry(3){
          sh "mvn fabric8:push -Ddocker.push.registry=${env.FABRIC8_DOCKER_REGISTRY_SERVICE_HOST}:${env.FABRIC8_DOCKER_REGISTRY_SERVICE_PORT}"
        }
      }
    }

    if (flow.hasService("content-repository")) {
      try {
        sh 'mvn site site:deploy'
      } catch (err) {
        // lets carry on as maven site isn't critical
        echo 'unable to generate maven site'
      }
    } else {
      echo 'no content-repository service so not deploying the maven site report'
    }
  }

automate github release notes for non-npm projects too if using Conventional Commits?

So I'm loving the github release notes we generate for npm projects:
https://github.com/fabric8-ui/fabric8-runtime-console/releases

e.g. if a project is using the Conventional Commits (http://conventionalcommits.org/) format for commit messages then we can generate nice release notes for the project.

I wonder if we could start to enable this on all java & go projects too if they opt in to using Conventional Commits? Maybe it could be a flag we enable in the Jenkinsfile or something?
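As an illustration of how little is needed to generate such notes, here is a minimal Python sketch that groups Conventional Commit subjects into release-note sections. The commit types and `type(scope): description` format come from the conventionalcommits.org spec; the helper name and section titles are assumptions, not the library's actual API.

```python
import re
from collections import defaultdict

# Matches Conventional Commit subjects such as "feat(ui): add login page"
COMMIT_RE = re.compile(r'^(?P<type>\w+)(\((?P<scope>[^)]+)\))?!?: (?P<desc>.+)$')

# Illustrative mapping of commit types to release-note sections
SECTIONS = {'feat': 'Features', 'fix': 'Bug Fixes', 'perf': 'Performance'}

def release_notes(subjects):
    """Return a markdown fragment grouping commit subjects by type."""
    grouped = defaultdict(list)
    for subject in subjects:
        m = COMMIT_RE.match(subject)
        if not m or m.group('type') not in SECTIONS:
            continue  # skip chore/docs/unparseable commits
        scope = m.group('scope')
        prefix = f"**{scope}**: " if scope else ""
        grouped[SECTIONS[m.group('type')]].append(prefix + m.group('desc'))
    lines = []
    for section, entries in grouped.items():
        lines.append(f"### {section}")
        lines.extend(f"- {e}" for e in entries)
    return "\n".join(lines)
```

A pipeline step could feed this the output of `git log --format=%s lastTag..HEAD` and post the result to the GitHub releases API.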

don't fail the pipeline when updating project dependencies

we noticed today that if we get an error in the pipeline when updating downstream projects the entire build fails. Perhaps we should instead catch the error, log it and continue to the next project?

The error in this case was no permissions to create the updateVersion branch in the downstream project.
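One way to make the downstream updates non-fatal is to catch and log per project. This is a sketch only: `downstreamProjects` and `updateDependencyVersion` are hypothetical names, not steps from this library.

```groovy
// Hypothetical list of downstream repos; the real step names may differ.
def downstreamProjects = ['fabric8io/exampleA', 'fabric8io/exampleB']
def failed = []

for (project in downstreamProjects) {
    try {
        updateDependencyVersion(project, version) // e.g. pushes an updateVersion branch
    } catch (err) {
        // e.g. no permission to create the updateVersion branch - log and move on
        echo "failed to update ${project}: ${err}"
        failed << project
    }
}
if (failed) {
    echo "could not update: ${failed.join(', ')}"
}
```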

lets generate release notes HTML reports in pipelines

there are a number of maven plugins and tools out there for generating release notes based on git commit history and fixed issues on GitHub etc.

it'd be nice to include this OOTB in our release pipelines. So I guess we need to try some of these tools and see which ones generate nice HTML output and integrate well with GitHub issues etc.

Then if we can package it up in a docker image we can start to include it OOTB in our release pipelines (maybe making it optional via an environment variable or something) so folks can disable it if they wish?

Add a mechanism to share binaries between containers of the same pod.

In some cases we may need to add something extra to a container without having to recreate an image for it. Since the pod templates already leverage multiple containers, it would be nice to have a tool that lets us ask a container to share something found in its path by copying it into the workspace. Other containers could then use it if needed.
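With the kubernetes-plugin, containers in the same pod template already share the Jenkins workspace volume, so a low-tech sketch is to copy a binary into the workspace from one container and invoke it from another. The container names here are assumptions, and this only works cleanly for static binaries:

```groovy
// assumes a pod template declaring 'clients' and 'maven' containers,
// which share the workspace volume mounted in both
container('clients') {
    sh 'cp $(which kubectl) .'   // kubectl is a static binary so it travels well
}
container('maven') {
    sh './kubectl version --client'
}
```

For binaries with runtime dependencies a dedicated tool would need to bundle those too, which is presumably what this issue is asking for.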

create a github milestone and tag all closed issues (which have no milestone) with the current milestone

it'd be awesome to create milestones every time we do a release where there are closed issues which are not associated with a milestone

Then we can easily see which issues were fixed in which release - all done mostly automatically. (Folks can always update the milestone on the issue after the release.)

So how about a function, githubCreateMilestone(String version) which would:

  • find all closed issues with no milestone
  • if there are any such issues, create a GitHub milestone and associate those issues with it
  • we may want to avoid creating a milestone for releases with no fixed issues? I guess that could be a flag?
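The filtering part of `githubCreateMilestone` is easy to sketch. Below is a minimal Python version of just that step; the issue dicts mimic the shape of the GitHub REST API `/repos/{owner}/{repo}/issues` payload, and the remaining API wiring is outlined in comments only, since it needs a token and a live repo.

```python
def issues_needing_milestone(issues):
    """Return closed issues that have no milestone assigned.

    `issues` is a list of dicts shaped like the GitHub REST API's
    issues payload (only the fields used here).
    """
    return [i for i in issues
            if i.get('state') == 'closed' and i.get('milestone') is None]

# Remaining steps, using the documented GitHub REST API endpoints
# (untested outline, not runnable as-is):
#   POST  /repos/{owner}/{repo}/milestones       {"title": version}
#   PATCH /repos/{owner}/{repo}/issues/{number}  {"milestone": milestone_number}
```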

long running approval jobs seem to get locked up?

I had a job waiting on the approval step that I left for 11 hours on DevTools OSO, and the Proceed & Abort links in the Jenkins console no longer seemed to do anything. I eventually had to kill the build.

I wonder if the build pods go unresponsive after a while?

PodTemplate should accept claimNames as parameters.

Bugs, quotas and provisioning delays (it often takes a while until a PVC is bound) sometimes make working with PVCs a PITA.

I should be able to pass different claim names as parameters, and if none is passed the podTemplate's storage should be ephemeral.
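A sketch of what that could look like using the kubernetes-plugin's `persistentVolumeClaim` and `emptyDirVolume` volume types; the wrapper name, parameter name and mount path are assumptions:

```groovy
// Hypothetical wrapper: use a PVC for the maven repo if a claim name is
// given, otherwise fall back to an ephemeral emptyDir volume.
def mavenTemplate(Map params = [:], Closure body) {
    def repoVolume = params.mavenRepoClaimName ?
        persistentVolumeClaim(claimName: params.mavenRepoClaimName, mountPath: '/root/.m2') :
        emptyDirVolume(mountPath: '/root/.m2')

    podTemplate(label: 'maven', volumes: [repoVolume]) {
        body()
    }
}
```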
