Do you know what is meant by the phrase 'cloud bursting'?
The latest feature article in our series examining the topics and themes discussed in Raconteur's 2023 Cloud for Business report looks at Jon Axworthy's piece on cloud bursting, considering what it is, what it means and the impact it can have on businesses.
Cloud bursting – whereby firms shift some of their processing workload to a public cloud when demand is rocketing – is becoming a popular way to ensure service continuity at optimal cost.
For many digital businesses, the ability to handle huge increases in demand – from the rush to a retailer’s site on Black Friday to the Saturday night stampede for meal delivery services – is key to their ongoing competitiveness.
To cope with the extra burden, these firms often look to divert some of the data processing workload from their own systems to a public cloud service. But the fact that the spikes in demand are temporary means that they won’t need that additional capacity permanently – and they definitely won’t want to pay for it. This is where cloud bursting, an application deployment method first proposed by Jeff Barr, chief evangelist at Amazon Web Services, comes into play.
Cloud bursting is an adaptation of the hybrid approach, using both public and private clouds, that enables IT teams to set workload thresholds for their own systems and applications. When such a threshold is reached, the cloud bursting configuration will trigger an application to start working in a public cloud, where it can more easily cope with the increase in traffic coming its way.
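As a concrete illustration, the threshold check described above might look like the following sketch in Python. The metric names and threshold values here are hypothetical assumptions for illustration, not any provider's actual API:

```python
from dataclasses import dataclass

@dataclass
class WorkloadMetrics:
    cpu_utilisation: float  # fraction of private capacity in use, 0.0-1.0
    queue_depth: int        # requests waiting to be processed

# Illustrative thresholds; in practice these come from capacity planning
BURST_CPU_THRESHOLD = 0.85
BURST_QUEUE_THRESHOLD = 500

def should_burst(metrics: WorkloadMetrics) -> bool:
    """Trigger a burst to the public cloud when either threshold is breached."""
    return (metrics.cpu_utilisation >= BURST_CPU_THRESHOLD
            or metrics.queue_depth >= BURST_QUEUE_THRESHOLD)
```

In a real configuration the trigger would be wired to the provider's autoscaling tooling rather than evaluated by hand; the point is simply that the burst decision reduces to a threshold comparison on observed load.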
“Cloud bursting offers evident advantages to businesses in terms of cost, flexibility and service continuity,” says Ravi Mayuram, CTO at cloud database platform Couchbase. “First, you pay only for what capacity you use, avoiding fixed overheads. Second, resourcing can be much more flexible: you scale back once the need goes away. And third, cloud bursting means that applications and services can continue operating during demand peaks (or at other times) without negatively affecting the user experience.”
Although dealing with workload spikes is its main application, cloud bursting can be utilised for processor-hungry modelling tasks such as 3D rendering, or for software engineering, where running virtual machines can become prohibitively costly. Bursting into a public cloud also gives users access to tech that’s often optimised for big-data analytics and AI tasks.
Sounds attractive, doesn’t it? Especially as there are established container environments that natively handle cloud bursting. But there are some caveats, and preparation is needed before adopting this approach.
First, a company must look closely at each application to determine whether bursting it would be feasible in its current state. This will often boil down to how an application has been designed, notes Steve Judd, senior solutions architect at the Jetstack consultancy. “The ideal architecture is loosely coupled and independent,” he says. “This means that the components communicating between the private data centre and the public cloud don’t need to transfer large amounts of data between them. They can also tolerate unpredictable latency.”
Once an application’s suitability has been established, the CIO will need to determine the most suitable bursting mechanism. There are three options available with the big cloud service providers.
- Manual: where an IT administrator must decide when to instigate the burst and when to bring that workload back
- Automated: where the tech manages cloud resources and shifts workloads as per the instructions given to it
- Distributed load balancing: where a small standby capacity provisioned in the public cloud absorbs excess traffic automatically
Judd explains: “You have a small capacity of standby servers provisioned and ready in the cloud. This mitigates the risk of having your own servers overwhelmed when there’s a steep increase in traffic.” The balancing system allocates the workload between the two environments automatically. The manual option is the most accessible of the approaches and it’s a good way for organisations to test cloud bursting projects, but it’s also most prone to inefficiency and error, given that it relies on human judgement.
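The overflow logic Judd describes can be reduced to a few lines: serve traffic from the private data centre up to its capacity, and spill the remainder to the standby cloud servers. This is an illustrative sketch, not any real load balancer's interface:

```python
def split_load(total_rps: float, private_capacity_rps: float) -> tuple[float, float]:
    """Split incoming requests-per-second between the private data centre
    and the standby public cloud capacity.

    Returns (on_prem_rps, cloud_overflow_rps).
    """
    on_prem = min(total_rps, private_capacity_rps)  # fill private capacity first
    overflow = total_rps - on_prem                  # remainder bursts to the cloud
    return on_prem, overflow
```

For example, 1,200 requests per second against a 1,000 rps private capacity would leave 200 rps to be served from the cloud, and nothing bursts while demand stays under capacity.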
“Automation is key,” says Greg Adams, vice-president of Dynatrace’s operation in the UK and Ireland. “The most effective way to support this is by using service-level objectives (SLOs) to set thresholds for an acceptable user experience. For instance, SLOs for application response times can enforce an automated process that invokes cloud bursting if the user experience falls below that threshold.”
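A minimal sketch of the SLO check Adams describes, using Python's standard library: if the 95th-percentile response time drifts past the target, the automated process would invoke the burst. The 300ms target and the percentile chosen are illustrative assumptions:

```python
from statistics import quantiles

SLO_P95_MS = 300.0  # illustrative SLO: 95th-percentile response time target

def slo_breached(response_times_ms: list[float]) -> bool:
    """Return True when p95 latency exceeds the SLO, which would invoke
    the (hypothetical) automated cloud-bursting workflow."""
    p95 = quantiles(response_times_ms, n=20)[-1]  # last cut point = 95th percentile
    return p95 > SLO_P95_MS
```

In production this check would run continuously against live telemetry, with the burst (and the later scale-back) driven by the same SLO.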
Mayuram notes that network capacity problems can sometimes stymie a cloud burst because such problems tend not to reveal themselves until it’s too late. If there isn’t enough bandwidth, he says, “then all the goodness of cloud bursting is only a theory; it will never materialise. The challenge is to plan for adequate bandwidth between private data centres and the public cloud, so that bursting actually happens effectively and meets your SLOs.”
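Mayuram's point lends itself to a back-of-envelope calculation: how fast a link is needed to move a given amount of state to the public cloud within the burst window. The figures in the example are purely illustrative:

```python
def required_bandwidth_mbps(data_gb: float, window_minutes: float) -> float:
    """Bandwidth (in megabits per second) needed to transfer `data_gb`
    of workload state within the burst window. Uses decimal units
    (1 GB = 8,000 megabits)."""
    megabits = data_gb * 8 * 1000
    return megabits / (window_minutes * 60)
```

Moving 45 GB of state within an hour needs a sustained 100 Mbps, for instance, before allowing any headroom for the live traffic sharing the same link, which is exactly the planning Mayuram urges.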
No matter which mechanism is chosen, security and regulatory compliance must remain a priority when bursting is enabled. “The data that will be sent has to be monitored and protected,” Mayuram stresses. “If there is material which is protected by compliance requirements or industry-specific governance standards, companies need to take adequate precautions to ensure their security procedures are tight enough.”
To safeguard the data being transferred in bursts, businesses should set up encrypted routes between their systems and the public cloud, Judd advises. “Also, the dynamic nature of cloud bursting creates an influx of machine identities,” he says. “Companies must deploy a control plane to automate the management of these identities. This gives their teams the observability, consistency and reliability to manage their machine identities.” Ultimately, a firm needs to monitor its cloud bursting constantly to check that its performance keeps within the tolerances and to verify that the method remains cost-effective. If doubts were to arise on either count, the CIOs would need to review their workflow models to determine whether their bursting strategy is still viable. Otherwise, it can be all too easy to get caught out in the rain.
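On Judd's first point, an encrypted route to the public cloud in practice means TLS. As a sketch, Python's standard library can produce a certificate-verifying client context with a modern minimum protocol version; the mutual-TLS and machine-identity specifics of any given provider are beyond this snippet:

```python
import ssl
from typing import Optional

def make_client_context(ca_file: Optional[str] = None) -> ssl.SSLContext:
    """Build a TLS client context for the private-to-public-cloud link.

    Verifies the peer certificate (against `ca_file` if supplied, else the
    system trust store) and refuses anything older than TLS 1.2.
    """
    ctx = ssl.create_default_context(cafile=ca_file)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx
```

The defaults from `create_default_context` already require certificate verification and hostname checking, which is the baseline any bursting link between environments should meet.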
The flexibility and scalability benefits of the cloud have long been widely known and reported, and businesses have been keen to leverage cloud-first solutions to add capacity and functionality to their business systems. But 'bursting', as the name suggests, enables businesses already utilising the cloud to build flexibility into their in-house systems.
As the article references, businesses with finite capacity can struggle during periods of high demand. In fact, ‘struggle’ may be significantly underplaying the potential fallout of systems being overwhelmed. You only have to consider the reputational damage suffered by Ticketmaster in the wake of the Taylor Swift tour ticket fiasco to see why being able to add capacity, quickly, is crucial. This is a company that’s responsible for 70% of all ticket sales in the US, yet Ticketmaster now finds itself facing a class-action lawsuit filed by more than 300 plaintiffs because its systems couldn’t handle a huge surge in traffic.
Cloud bursting is a simple principle that can have a huge impact on a business’s ability to function during both unforeseen and predictable spikes in demand. Being able to add capacity to resources, and only pay for what, when and how much you use, makes good financial and operational sense. But leveraging public cloud on an ad hoc basis will also throw up some challenges, especially for those businesses dealing with sensitive data and those with regulatory obligations to consider.
Choosing the right mechanisms, connectivity and security tools from the right provider will be a crucial consideration for businesses looking to add ‘bursting’ to their systems capabilities, but done well, it could provide the perfect, cost-effective solution to capacity problems.
To download your complete copy of the 2023 Cloud for Business report and read more articles like this, click here.