In previous posts, we’ve built a Maven foundation that allows us to standardize our Mule projects and deployment processes. Next, we’ll work on automating the build and deployment processes. The goal is to make it easier to support organization-wide processes and procedures with developer-focused automations during code development. The automated process should provide a consistent implementation of standard processes, methods, and control points that leads to the implementation of high-quality code.
Automating the development process
There are two aspects of the development lifecycle that can be automated: the developer pipeline and release deployment pipeline.
A developer’s interaction with the source code repository and the Mule runtime environments (the sandbox or dev environments) is an iterative development process.
This developer process is supported with an automated tool called the developer pipeline.
The release process takes committed code and progresses it through test stages, finally to production. The associated automation that deploys the finished code to successively higher test environments, and finally to a production environment, is called the release deployment pipeline.
The developer pipeline is generally not reused as the release pipeline, since the developer pipeline works at the source code level while the release pipeline works with deployable artifacts (the binaries). This blog will focus on the developer pipeline; the release pipeline will be the subject of a later blog.
The developer pipeline
We’ll build a developer pipeline in Jenkins with source code for the Mule application stored in a Git repository (GitHub specifically). This is a common configuration, but if you do not use these specific tools, the logic of the developer pipeline is still the same and should be easily translated to the tools you do have at hand.
The developer pipeline prepares the source code for deployment: creating deployable artifacts (binaries), facilitating the testing of each artifact, and publishing the artifact when it is ready to be moved to the release process.
Standardized developer pipeline
The developer pipeline is standardized: the same pipeline is used for every Mule project. The pipeline is included as a file in the root folder of the Mule project's source code.
Git repository requirements
Git repository setup is standardized. Here are the two requirements for using the developer pipeline:
- Each Mule application will be stored in its own Git repository. This lets us detect changes within a single Mule application and trigger the appropriate pipeline.
- Four branches are used with a strict promotion hierarchy: feature -> develop -> release -> master. Only the master, release, and develop branches need to be stored on the Git (remote) server; feature branches may remain local. The pipeline performs specific functions for each branch:
- Feature/developer: Changes will be unit tested.
- Develop: Changes will be deployed to the dev CloudHub Runtime environment after being built and unit tested.
- Release: Changes are unit tested and built as a deployable artifact. Then published to an artifact repository. A Git tag is created for the pom.xml version.
- Master: Changes will be unit tested.
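The branch-to-action mapping above can be sketched as a declarative Jenkinsfile. This is a simplified illustration, not the exact pipeline from the sample project: the stage names, Maven invocations, settings file path, and version variable are placeholders to adapt to your own setup.

```groovy
// Sketch only: maps each branch to the actions described above.
// The mvn commands, settings path, and POM_VERSION are placeholders.
pipeline {
    agent any
    stages {
        stage('Unit test') {
            // Every branch (feature, develop, release, master) runs MUnit tests
            steps { sh 'mvn clean test' }
        }
        stage('Deploy to dev') {
            when { branch 'develop' }
            // Build and deploy to the dev CloudHub runtime
            steps { sh 'mvn clean deploy -DmuleDeploy -s np-settings.xml' }
        }
        stage('Publish and tag') {
            when { branch 'release' }
            steps {
                // Publish the deployable artifact to the artifact repository
                sh 'mvn clean deploy -s np-settings.xml'
                // Tag the repository with the pom.xml version (POM_VERSION is hypothetical)
                sh 'git tag "v${POM_VERSION}" && git push origin --tags'
            }
        }
    }
}
```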
Jenkins installation requirements
The Jenkins server will run the pipeline which uses the standardized Maven foundation we built earlier. This means that software packages need to be installed on the Jenkins server and the server configured to use the Maven foundation components:
- Maven must be installed on the Jenkins server.
- Java must be installed on the Jenkins server.
- Create the /..x../settings/np-settings.xml file containing the non-production version of settings.xml, replacing /..x../ with the actual path to the file. Create a similar prd-settings.xml file containing the production version of settings.xml. This keeps production settings separate from non-production settings.
- Install Jenkins Plug-ins (use the suggested plugins during Jenkins install, but some may need to be installed manually):
- Artifact Repository Plugin
- Credentials Plugin
- Credentials Binding Plugin
- Git Parameter Plugin
- Pipeline Utility Steps Plugin
- Create a Jenkins Global Credential ID named ANYPOINT_PLATFORM_CREDENTIALS containing your Anypoint Platform native user credentials for performing the build and runtime deployments.
- Create a Jenkins Global Credential ID named GITXXX_SERVICE_ACCOUNT. The example Jenkinsfile uses GITHUB_SERVICE_ACCOUNT; this should be changed to reflect the credential you created.
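A minimal np-settings.xml might look like the following sketch. The `anypoint-exchange` server id and the environment variable names are assumptions; the id must match the repository id in your pom.xml's distribution management, and the variables must match your Jenkins credential bindings:

```xml
<!-- Sketch of np-settings.xml; the "anypoint-exchange" server id is illustrative -->
<settings>
  <servers>
    <server>
      <id>anypoint-exchange</id>
      <!-- Injected at build time, e.g. by the Credentials Binding plugin -->
      <username>${env.ANYPOINT_USERNAME}</username>
      <password>${env.ANYPOINT_PASSWORD}</password>
    </server>
  </servers>
</settings>
```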
What is the Jenkins pipeline anatomy?
The pipeline has several parts as described below:
- Pipeline job: Each Mule project Git repository has a corresponding pipeline item defined in Jenkins. The pipeline item defines where the Mule project is located, where the pipeline script is located, and how the script is to be executed. A pipeline job is an executed instance of the pipeline item; there will be multiple pipeline jobs for each pipeline item. The pipeline item and its jobs are loosely called the "pipeline job."
- Trigger: How will the developer pipeline job be triggered to run? In our example, the multibranch pipeline item defines the trigger; jobs are started automatically when a branch changes, or can be run manually.
- Pipeline script: Consists of multiple steps associated with one or more Git branches that triggered the pipeline:
- Checkout source: Pulls the branch’s code from Git.
- Build and test: Compile and package the Mule application into a deployable artifact. Then run the unit tests (MUnits).
- Deployment: Deploy the artifact created in the build and test step to Mule Runtime Manager. The deployment is made to a development environment using a standardized Maven command.
- Publish artifact: Tag the project in Git with the project version. Then upload the deployable artifact to an artifact repository. The example pipeline uses the standard Maven command to publish to Anypoint Exchange.
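The deployment step above boils down to a single standardized Maven command. The property names in this sketch follow common mule-maven-plugin CloudHub setups, and the settings path and environment name are placeholders; verify them against your own pom.xml and Maven foundation:

```
# Sketch of the standardized deploy command (placeholders throughout)
mvn clean deploy -DmuleDeploy \
  -s /path/to/np-settings.xml \
  -Denv=dev \
  -Danypoint.username="$ANYPOINT_USERNAME" \
  -Danypoint.password="$ANYPOINT_PASSWORD"
```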
The trigger
The Jenkins developer pipeline script execution is triggered whenever a merge is committed on any of the branches defined in Git. A multibranch pipeline is used to create this trigger. The multibranch pipeline is displayed in Jenkins as a folder that contains a pipeline item for each named branch in the Mule application's Git repository.
Every one to two minutes, the multibranch pipeline scans the Mule application's Git repository for changes (merges) in any of the branches on the Git server. The pipeline job under the multibranch item whose name matches the Git branch that changed is triggered. The multibranch trigger passes the name of the branch (environment variable BRANCH_NAME) to the pipeline script so that the pipeline steps can be conditionally executed based on the branch name.
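Inside the pipeline script, the branch name is available as env.BRANCH_NAME, so steps can be guarded on it. This scripted-style fragment is purely illustrative:

```groovy
// env.BRANCH_NAME is set by the multibranch pipeline for each job
script {
    if (env.BRANCH_NAME == 'develop') {
        echo "Deploying ${env.BRANCH_NAME} build to the dev environment"
        // sh 'mvn clean deploy -DmuleDeploy ...'
    } else if (env.BRANCH_NAME == 'release') {
        echo 'Publishing the artifact and tagging the repository'
    }
}
```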
The pipeline script
The pipeline script file is named Jenkinsfile and is stored in every Mule application.
An example Mule application with a pipeline script is provided in GitHub. The Jenkinsfile is standardized, so it can be included in any project using GitHub, Jenkins, and standardized Maven commands. View the sample project with Jenkins Pipeline for details.
The pipeline item/job
A Jenkins multibranch pipeline item is created for each Mule application inside a folder structure. Use the same name for the Mule artifactId, Git repository name, and Jenkins multibranch pipeline item name. The example project is named “Mule-Maven-HowTo-pipeline” so create a Jenkins multibranch pipeline item named “Mule-Maven-HowTo-pipeline.”
Create a multibranch pipeline:
- Login to Jenkins
- Navigate to the folder where the item is to be created
- Select “New Item” from the menu
- Choose “multibranch pipeline” and give it the Mule project name.
Refer to this picture to see the Jenkins configuration page filled in.
What do we see in Jenkins?
After saving the configuration, we see every Git branch matched with a pipeline job in Jenkins under the multibranch name. The multibranch pipeline scans for new branches, which are added to the job list; deleted branches are removed; and changes in branches trigger the pipeline job to run.
What’s in GitHub?
In the GitHub repository, we see the matching branch names.
Several merges were made to the GitHub develop branch and then merged to the release branch. Each merge triggered the release pipeline job and the following tags were added to the GitHub repository by those pipeline jobs.
Want to see the code that is associated with version 1.0.1 of the Mule application? Check out the tag from GitHub (e.g., git checkout v1.0.1).
What’s in Exchange?
The pipeline published the deployable artifact to Anypoint Exchange. We can use these artifacts to deploy the “binary” artifact to any of the higher-level Mule runtime environments. This is what we see in Exchange:
As we saw with GitHub tags, version 1.0.1 is stored in Exchange, and we can access that deployable archive using Exchange's Maven facade interface.
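To pull a published artifact into another build, the Exchange Maven facade can be declared as a repository. The URL pattern below is a sketch to verify against the current Anypoint documentation, and ORG_ID is a placeholder for your Anypoint organization id:

```xml
<!-- Sketch: consuming a published artifact via the Exchange Maven facade -->
<repositories>
  <repository>
    <id>anypoint-exchange</id>
    <name>Anypoint Exchange</name>
    <!-- ORG_ID is a placeholder; confirm the facade URL for your org -->
    <url>https://maven.anypoint.mulesoft.com/api/v2/organizations/ORG_ID/maven</url>
  </repository>
</repositories>
```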
Things to consider
Perhaps you noticed that the pipeline design discussed here makes several decisions that may not be consistent with what you need to implement. Some of these decisions might be worth further consideration, with possible changes to the pipeline:
- The develop branch does not publish a snapshot artifact each time the project is built and deployed to the dev environment. Is it important to have these snapshots available?
- Should we always build and test a branch?
- Those familiar with deployment to Runtime Fabric will notice there is no automatic increment of the artifact version.
- Using the release pipeline as the deployable artifact generation point supports the policy of using the same binary for all testing and production. However, because the artifact was published from the merge to the release branch, it is never rebuilt from the master branch, so the master branch source code is "technically" untested.
- Process control points for code review, release management, product feature management, etc. have not been discussed.
What about Hotfixes?
For hotfixes, we can use the same pipeline triggers and branch merge processes as a standard development cycle. In many cases, creating a hotfix branch from the master branch's release tag yields source code that can be changed to apply the fix, then merged to the develop branch for testing.
When the hotfix is ready to release, merge the develop branch to the release branch to publish the new deployable artifact (with subsequent use of a release pipeline).
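The hotfix flow above can be walked through with plain git in a throwaway local repository. The tag and branch names here (v1.0.1, hotfix/npe-fix) are illustrative, and the merges are shown locally where a real flow would go through pull requests:

```shell
# Sketch of the hotfix flow in a throwaway local repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/master
git -c user.email=ci@example.com -c user.name=ci commit --allow-empty -q -m "release 1.0.1"
git tag v1.0.1
# 1. Create the hotfix branch from the released tag on master
git checkout -q -b hotfix/npe-fix v1.0.1
git -c user.email=ci@example.com -c user.name=ci commit --allow-empty -q -m "apply hotfix"
# 2. Merge into develop for testing, then into release for publishing
git checkout -q -b develop master
git merge -q --no-edit hotfix/npe-fix
git checkout -q -b release master
git merge -q --no-edit develop
# The release branch now carries the fix on top of the released code
git log --format=%s release
```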
This leverages the same processes we’ve already built so the only difference is the scope of testing. But the process is less clear if there is already a release in process. Do we hotfix the release and accelerate its testing? Do we rename the current release branch and create a new release branch for the hotfix process? How and when do we merge the fix into the release in process?
What’s next?
The next blog in the series will look at release deployment pipelines and how to create a standard release deployment process.