Deploying Applications to OpenShift v2 through Jenkins

There are at least three different ways to pursue Jenkins integration with OpenShift Enterprise version 2.

1. Use OpenShift’s built-in Jenkins capability. You can push source code to the OpenShift gear directly and allow an embedded Jenkins client to handle compilation and deployment of the code. For more details on this, see the Continuous Integration section on the OpenShift Developer Portal.

2. Binary deployment via OpenShift API and Git push. Create a script job in Jenkins that uses the OpenShift REST API or CLI to create the gear, then use the Git URL of the created gear returned by the REST call to push a binary deployment.

3. Jenkins OpenShift Deployer Plugin. The open source community Jenkins OpenShift Deployer plugin allows a Jenkins job to create OpenShift containers and deploy applications to them. The Jenkins job is configured through the Jenkins UI rather than through a custom script.
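Option 2 can be sketched as a shell build step. The broker URL, credentials, domain, and cartridge names below are placeholders rather than values from this article, and the actual REST and Git calls are shown commented out; only the response-parsing helper is concrete:

```shell
#!/bin/sh
# Sketch of option 2: create a gear via the OpenShift v2 REST API,
# then push a prebuilt binary to the gear's Git remote.
BROKER="https://broker.example.com"   # placeholder broker URL
APP="myapp"

# Create the gear and capture the JSON response (commented out here):
# response=$(curl -sk -u "$OSE_USER:$OSE_PASS" \
#   -d "name=$APP&cartridge=jbossews-2.0" \
#   "$BROKER/broker/rest/domains/mydomain/applications")

# The REST response carries the new gear's Git URL in data.git_url;
# this helper pulls it out of the JSON on stdin.
extract_git_url() {
  sed -n 's/.*"git_url" *: *"\([^"]*\)".*/\1/p'
}

# Clone the gear's repo, drop the prebuilt binary in, and push:
# git clone "$(printf '%s' "$response" | extract_git_url)" gear-repo
# cp target/myapp.war gear-repo/webapps/ && cd gear-repo
# git add -A && git commit -m "binary deploy" && git push
```

The parsing helper avoids a dependency on a JSON tool being installed on the Jenkins node; with `jq` available, `jq -r .data.git_url` would do the same job.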

Most enterprises will want to run Jenkins in a centralized server/cluster external to the PaaS containers that host the applications. OpenShift itself could be the hosting environment for that Jenkins cluster, but it would likely run in a separate OpenShift Enterprise environment and feed releases to the DEV, TEST, and PROD OpenShift Enterprise environments that host and autoscale the applications.

That means either handling your own integration between Jenkins and OpenShift Enterprise using the OpenShift REST API and Git, or using a third-party tool like the open source community Jenkins OpenShift Deployer.

The OpenShift Deployer worked well for the use case I’ve been demoing. It’s simply a matter of configuring the connection and SSH keys for the OpenShift environments that Jenkins will communicate with, and then tying that OpenShift broker into a build’s deployment job. Here is a screenshot of the configuration of OpenShift servers in the Manage Jenkins > Configure System screen:


As you can see, the plugin allows you to test your credentials against the broker for the OpenShift Enterprise environment you’re configuring, and even to upload the Jenkins server’s public SSH key to the broker. The plugin will log in with your credentials and upload the SSH key so that Jenkins can automatically drive deployments to your OpenShift environment with no human intervention.

With that in place, you can configure a build job to deploy to OpenShift. Here is the snippet from the build job (in this case the one that corresponds to the Acceptance stage of our deployment pipeline) that deploys the application on OpenShift:

Jenkins Job Configuration with the OpenShift Deployer Plugin

The Deployment Type is GIT, which tells the plugin that this is a Git-based deployment rather than a deployable zipped as a .tar.gz archive. The deployment package is listed as the current directory, which is where an earlier step moved the package. In a real enterprise environment, this field would contain a URL to the package’s location in an artifact repository like Nexus.
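As a sketch of what such an earlier step might look like, a build step could compose the standard Maven-layout download URL and fetch the package into the workspace. The repository URL and Maven coordinates below are hypothetical, and the network call is left commented out:

```shell
#!/bin/sh
# Hypothetical build step: fetch the versioned artifact from a Nexus
# repository into the job workspace so the Deployer step can find it.
NEXUS="https://nexus.example.com/repository/releases"   # placeholder

# Compose a standard Maven-layout download URL from coordinates.
artifact_url() {
  # $1=repo base, $2=groupId path, $3=artifactId, $4=version
  printf '%s/%s/%s/%s/%s-%s.war' "$1" "$2" "$3" "$4" "$3" "$4"
}

# Download the artifact published by the Commit stage:
# curl -sfO "$(artifact_url "$NEXUS" com/example myapp "1.0.$BUILD_NUMBER")"
```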

With Jenkins/OpenShift integration, you have two powerful tools of automation in a deployment pipeline working hand-in-hand.

Jenkins and Deployment Pipelines

In the previous post, I described deployment pipelines and mentioned how you need a controller to drive software releases through each stage of the pipeline. This component is typically a Jenkins CI/CD server, although Jenkins’ built-in capabilities for capturing pipelines of deployment activity are rather limited. The open source Build Pipeline plugin gets the job done, for the most part. Commercial tools in this area include Atlassian Bamboo, ThoughtWorks Go, and XebiaLabs XLRelease.

This is a screenshot from the Jenkins Build Pipeline plugin that shows various pipelines in progress.


It helpfully color-codes the status of each stage of each pipeline and, combined with Jenkins itself, lets you use role-based access control to govern how and when jobs are launched and restarted. Most importantly, it allows you to capture a flow of build jobs from stage to stage. Stages can be configured to begin automatically on successful completion of the preceding stage, or to require manual push-button approval to proceed.

The Build Pipeline tool enables us to capture each of the deployment stages shown below:

Commit Stage


The Commit Stage is a continuous integration step, used to validate a particular code commit from a compilation, unit test, and code quality standpoint. It does not involve any deployment to an environment, but does include export of a compiled binary (in the case of a Java project) to an enterprise artifact repository such as Nexus.
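A Commit-stage build step might look like the following sketch; the Maven goals and version-stamping scheme are illustrative assumptions, not details taken from the pipeline described here:

```shell
#!/bin/sh
# Sketch of a Commit-stage build step for a Java project: compile, run
# unit tests, and publish the binary to Nexus.

# Stamp the Jenkins build number into the version so every pipeline
# run publishes a uniquely identifiable artifact.
release_version() {
  printf '%s-%s' "$1" "$2"   # $1=base version, $2=build number
}

# VERSION=$(release_version 1.0 "$BUILD_NUMBER")
# mvn -B versions:set -DnewVersion="$VERSION"
# mvn -B clean deploy        # compile, unit test, upload to Nexus
```

Tying the artifact version to the build number gives later stages an unambiguous handle on exactly which binary passed the Commit stage.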

Acceptance Stage


The Acceptance stage is where automated functional acceptance and integration tests are performed. To do this, Jenkins will deploy the code to an environment. PaaS makes this trivial, as I’ll show in a later post.
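After the deployment step pushes the build, the Acceptance job still has to wait for the gear to come up before running the test suite. A generic retry helper along these lines could handle that; the application URL and test command are placeholders:

```shell
#!/bin/sh
# Hypothetical Acceptance-stage helper: retry a command until it
# succeeds or the attempt budget runs out.
wait_for() {
  # $1=max attempts; remaining args=command to retry.
  n=$1; shift
  while [ "$n" -gt 0 ]; do
    "$@" && return 0           # stop as soon as the command succeeds
    n=$((n - 1))
    [ "$n" -gt 0 ] && sleep 5  # back off between attempts
  done
  return 1
}

# Poll the deployed app, then kick off the functional test suite:
# wait_for 24 curl -sfo /dev/null "http://myapp-dom.example.com/" \
#   && mvn -B verify -Pacceptance-tests
```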

UAT Stage


In this deployment pipeline example, a separate UAT stage is available for manual user testing, important for confirming application usability. Advancement to the UAT Stage must be approved by the QA team, who log in to Jenkins to perform a push button deployment.

Production Stage


Release to production is the final stage, controlled by the Operations team. Like the two stages before it, it involves CI/CD server deployment to the PaaS environment.

So that captures the deployment pipeline’s integration with the Jenkins CI/CD server. The next post will profile how the CI/CD server leverages PaaS to make environment creation, configuration, and application deployment an easy process.