
What is blue green deployment?



Blue green deployment is an application release model that gradually transfers user traffic from a previous version of an app or microservice to a nearly identical new release—both of which are running in production. 

The old version can be called the blue environment, while the new version can be known as the green environment. Once production traffic is fully transferred from blue to green, blue can stand by in case a rollback is needed, or it can be pulled from production and updated to become the template for the next release.
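For example, on a container platform such as Kubernetes, the blue and green environments are often two near-identical Deployments that differ only in a version label and an image tag. Here's a minimal sketch, with hypothetical names, labels, and image references:

```yaml
# Blue: the version currently serving production traffic (all names are hypothetical).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: blue
  template:
    metadata:
      labels:
        app: myapp
        version: blue
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.0
---
# Green: the new release, running alongside blue and identical except for its
# version label and image tag.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: green
  template:
    metadata:
      labels:
        app: myapp
        version: green
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.1
```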

There are downsides to this continuous deployment model. Not all environments have the same uptime requirements, or the resources to properly perform CI/CD processes like blue green deployment. But many apps evolve to support this kind of continuous delivery as the enterprises behind them digitally transform.

Blue green deployment model

Think about it like this. You’ve developed a simple cloud-native app—a mobile game where users earn points tapping multicolored balloons that fly across the screen. The game’s back end is supported by multiple container-based microservices that handle game achievements, scoring, mechanics, communication, and player identification.

Hundreds of users start playing the game after its initial release. They’re logging thousands of transactions every minute. Your DevOps team has encouraged you to release early and often, which is why you’re about to release a minor update to the mechanics microservice that increases the size and speed of the red balloon.

Instead of waiting until midnight to push the update to the production environment (when the fewest users are active), you're using a blue green deployment model to update the app during peak use. And you're going to do it with zero downtime.

You’re able to do this because you take the mechanics microservice in the production environment (blue) and copy it to an identical but separate container (green). After you increase the size and speed of the red balloons in the green environment, the update passes through QA and staging (perhaps automated with an open source CI/CD tool like Jenkins) before being pushed to the production environment alongside the active blue environment.

The ops team can use a load balancer to redirect each user’s next transaction from blue to green, and once all production traffic flows through the green environment, the blue environment is taken offline. Blue can either stand by as a disaster recovery option, or it can become the container for the next update.
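In a Kubernetes cluster, that load balancer role is commonly played by a Service: both Deployments keep running, and the cutover is just a change to the Service's label selector. A minimal sketch, reusing the hypothetical labels from the earlier example:

```yaml
# Hypothetical Service acting as the blue/green traffic switch. While the selector
# matches version: blue, users reach the old release; changing it to version: green
# routes every new request to the new release without taking either Deployment down.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
    version: blue   # flip this to "green" to cut production traffic over
  ports:
  - port: 80
    targetPort: 8080
```

Rolling back is the same edit in reverse: point the selector back at blue while the problem in green is investigated.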

 

Kubernetes is a natural fit for the elements associated with the blue green deployment process: cloud-native apps, microservices, containers, continuous integration, continuous delivery, continuous deployment, SRE, and DevOps. As an open source platform that automates Linux® container operations, Kubernetes not only helps orchestrate the containers that package a cloud-native app's microservices, but it's also supported by a collection of architectural patterns that developers can reuse instead of creating application architectures from scratch.

One of those Kubernetes patterns is known as the Declarative Deployment pattern. Since microservices are inherently small, they can multiply in number very quickly. The Declarative Deployment pattern reduces the manual effort needed to deploy new pods—the smallest and simplest unit in the Kubernetes architecture.
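In practice, "declarative" means you describe the desired end state, such as the image, replica count, and update strategy, in a manifest and let the Deployment controller reconcile the running pods toward it. Kubernetes ships RollingUpdate and Recreate as built-in strategies; blue green is layered on top, as in the earlier sketches, with two Deployments plus a traffic switch. A hypothetical manifest showing the declarative strategy section:

```yaml
# Hypothetical Deployment for the game's scoring microservice. Applying a manifest
# with a new image tag is all that's required; the controller replaces pods to match
# the declared state rather than an operator deploying each pod by hand.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scoring
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod during an update
      maxUnavailable: 0    # never drop below the declared replica count
  selector:
    matchLabels:
      app: scoring
  template:
    metadata:
      labels:
        app: scoring
    spec:
      containers:
      - name: scoring
        image: registry.example.com/scoring:2.3
```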

We’ve hardened the leading enterprise Kubernetes platform, Red Hat® OpenShift, with CI/CD capabilities at its core, and we’ve already documented step-by-step command line prompts and arguments to roll out blue green deployments within your Red Hat OpenShift environment.
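As one illustration of what those steps accomplish, OpenShift can also perform the cutover at the Route level rather than at the Service selector, switching the Route's backend from the blue Service to the green one or splitting traffic by weight during the transition. The names below are hypothetical; see the documented oc steps for the exact commands:

```yaml
# Hypothetical OpenShift Route shifting traffic toward the green release. Raising the
# green weight to 100 and removing the alternate backend completes the cutover;
# swapping the two services back rolls it back.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp
spec:
  to:
    kind: Service
    name: myapp-green
    weight: 90
  alternateBackends:
  - kind: Service
    name: myapp-blue
    weight: 10
```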

And when you keep your enterprise Kubernetes platform open source, you retain control over the entire platform and everything that relies on it, allowing your applications and services to just work—regardless of where they are or what’s supporting them.

So go ahead: Inspect, modify, and enhance the source code behind our technologies. With products trusted by more than 90% of Fortune 500 companies,* there’s very little you can’t do with an infrastructure built on Red Hat products and technologies.
