Companies across industries rely on complex systems built years, or even decades, ago. As they attempt to modernise, they must take care when moving key systems to more innovative and adaptable environments.
In the penultimate Article of the Week from Raconteur’s 2023 Cloud for Business Report, as distributed in The Sunday Times, we look at an article from LzLabs, which argues in support of businesses taking a methodical approach to legacy modernisation.
What is new in tech today will be old tomorrow. As businesses attempt to become more agile, the adaptability of their key technology, applications and hardware is often in question. Over recent decades, this has certainly been the case with mainframes, on which companies base many of their key applications. Similar change has affected even the most modern enterprise resource planning platforms and cloud-based apps.
But modernising existing setups should not necessarily involve complete replacement. In fact, many aspects of an organisation’s technology architecture are there for good reason and remain critical to core processes. Elsewhere, however, parts of an IT infrastructure might not be as adaptable to current demands and shifting business objectives. Rather than looking for a silver bullet to solve these issues at once, a more nuanced approach is necessary.
“Many companies are moving away from running data centres and mainframes due to the costs and complexity involved,” explains Thilo Rockmann, chief executive of the mainframe transformation and modernisation company LzLabs. “Executives at a car manufacturer or a retailer, for example, might ask themselves: ‘Why would I want to spend resources running a data centre when I have more pressing business priorities to focus on?’ While this question is understandable, there needs to be a more nuanced view of what needs modernising and how.”
At the same time, businesses are facing a growing skills problem. In many cases, the personnel who implemented these older systems or wrote the source code have not only quit the company, but left the workforce entirely. Their retirement means organisations lose a degree of control over core systems.
Worsening the situation is the fact that data centres typically rely on multiple technologies, from operating systems such as UNIX and Linux, to different processor architectures such as ARM and x86, as well as multiple coding languages including COBOL and Python.
When tackling these problems, companies are increasingly attempting to ‘lift and shift’ all their IT and data to the cloud, hoping that centralised technology will solve their operational concerns.
“The pressure around this ultimately comes from business leaders, who want tech change and business model adaptation to happen more quickly and for systems to meet multiple emergent demands,” Rockmann warns. “It’s a bit like having hundreds of people working on a spreadsheet, and expecting it to be switched to offer the capabilities of a database – sometimes a system just can’t keep pace with requirements.”
Among some executives, the idea that the grass is greener with a different setup typically leads to a strategy of shifting to a single system or service. In practice, when businesses implement a central off-the-shelf platform, many later regret being tied to one vendor and unable to customise the technology. So-called 'big bang' shifts also end up taking so long to prepare that the requirements have often changed by the go-live date. Alternatively, businesses may hand over all systems to a cloud host, perhaps mistakenly allowing key knowledge to leave the company.
By contrast, other businesses attempt to rewrite large tranches of their software. However, this quickly becomes incredibly complex and takes longer than leaders usually expect. Developers may discover that the source code is missing entirely, or that the documentation they have does not reflect the system in live production.
Methodical and iterative improvement
A more effective approach to legacy modernisation is to work step-by-step towards an innovative and adaptable setup. This means tech leaders recognising nuances in their company’s changing needs, and in how their systems serve these requirements.
“Modernisation is really like solving a complex mathematical equation: you can’t jump to the answer,” Rockmann explains. “The only way is to work step-by-step in analysing system strengths and then making the right changes in sequence, without too many concurrent moving parts.”
Businesses should start by recognising the value in the systems they have with a view to preserving what works. They then make changes only where necessary, focusing on introducing a fluid pace of modernisation that delivers steady results from early in the process. This approach also means ensuring interoperability and the use of open source software where possible, to avoid being tied into a particular vendor.
“Sometimes businesses need to make a leap in aspects of their modernisation, but as a rule it’s far better to be cautious and considered when it comes to introducing technological change,” Rockmann says.
The benefits of a more measured approach are twofold. First, the risk of technology failure is minimised because businesses move away from big bang thinking towards a continuum of change, supported by an agile culture and an ability to course-correct. Second, IT staff and the organisation’s wider workforce have the chance to adjust to and develop with the new ways of operating.
Success in practice
Companies worldwide are working to adopt this effective approach and deliver legacy modernisation that meets current and future business needs.
As broader economic conditions evolve and business leaders push for more rapid operational and business model transformations to meet these new realities, IT departments are under immense pressure to maximise the value of their systems and of any changes made. The smartest companies are undertaking legacy modernisation carefully, retaining essential parts of their IT infrastructure and placing them in improved environments. Those that act assertively, but through considered and methodical change, position themselves well for long-term success.
Whether it’s cloud migration, cloud repatriation or legacy modernisation, there’s a lot of talk at the moment about the need for businesses to take a considered, methodical and individualised approach to implementing technological change. It makes sense. Businesses are operating in increasingly complex and varied external environments, and it’s imperative that their internal environments remain flexible to the shifting demands of the business and its workforce. What works for one business won’t work for another. This is as true of legacy technology as it is of the cloud environment, and businesses will be well served by avoiding the trap of ‘shiny new object’ syndrome. The newest technology isn’t always the best – not for every business. In fact, introducing new tech without proper forethought and consideration of the rest of the environment can cause significant issues and disruption down the line.
Some element of modernisation will undoubtedly bolster the capabilities of most businesses that still run legacy applications and software in legacy environments but, as this article says, there really is no silver bullet. Legacy environments are often complex, specialist and sprawling – they have been built, tailored and tweaked over sometimes decades, and this can’t be easily or quickly replicated. There is deep interoperability between systems, with delicately intertwined processes and workflows, and the only way to preserve the functionality of the environment is to take a methodical, fine-grained approach to modernisation.
With the right approach and the right support, however, businesses can harness the best of modern technology to support a legacy environment that has served the business well for decades, to ensure it will continue to do so for years into the future.
To download your complete copy of the 2023 Cloud for Business report and read more articles like this, click here