From high maintenance costs and siloed data to security threats and non-compliance, legacy IT systems are what can politely be described as “challenging.” And for financial services organisations specifically, that challenge is now starting to spill beyond the confines of the technology function and into other areas of the business.
For instance, an article in Computer Weekly reported that legacy tech meant fewer than 40% of banks could meet consumer expectations around the speed of opening an account. Another source suggests that the average financial services organisation with six legacy applications wastes as much as $150,000 per year on managing them.
Old systems causing trouble is hardly news, and it’s something we have covered before. The problem is that the answer to the legacy question – system upgrades – comes with considerations of its own.
“Upgrades are often seen as costly, time-consuming, and something that can require significant change,” says Jon Jenkins, Lendscape’s Head of Engineering. “It’s not uncommon for clients to feel like they’re just exchanging one problem for another,” he continues.
That’s not an unreasonable belief, of course. In days gone by (and, in some cases, even now) system upgrades were momentous and often risky initiatives. But, like the platforms they are designed to replace, that view is ultimately behind the times.
“There’s been a fundamental shift in the way we develop and deploy software,” says Xavier Lang-Claes, Product Owner at Lendscape. “It’s an agile approach now. So, when we perform an upgrade, it’s much easier for a client to track and understand what has changed and everything that is being done. It’s a much more transparent process,” he adds.
That additional transparency is crucial, particularly from the point of view of benefits realisation. In our previous piece highlighting the issues posed by legacy systems, maintenance costs were only part of the equation: the other, arguably bigger, issue is that organisations are missing out on a wealth of opportunities by failing to upgrade.
Broadly, that lost potential can be divided into four categories:
- Cost: upkeep is not the only factor here; outdated systems can have a demonstrable impact on productivity and customer service too, as evidenced above.
- Continuity: as well as posing a greater overall failure risk, legacy systems can be much more vulnerable to attack. With cybersecurity threats in financial services increasing by 238% in a single quarter of 2020, that is more of a problem than ever.
- Capability: system upgrades are not just in service of stability. Often, platform upgrades will introduce a range of new features and functionality, increasingly crucial in our application-centric world.
- Compatibility: finally, many systems now operate as part of an integrated technology stack. Keeping a platform current can help to ensure that it remains compatible with other systems in that setup.
The benefits of keeping a system up to date are clear, then. But what does the actual process of upgrading look like in financial services today?
“A lot of software out there now is designed to automatically update itself,” continues Lang-Claes. “You’ll be asked if you’re happy to upgrade, click a button, and suddenly you’re on the next version. That’s great, but not particularly suitable for a financial services environment where things are much more complex.
“Instead, what we’re focusing on at Lendscape is frequency. We believe that it’s better to make smaller changes more often; doing so reduces the risk of failure and ensures that we can get valuable improvements into our customers’ hands much sooner.”
Jenkins echoes that philosophy. “Upgrades used to be a very manual, multistep process,” he says. “We’ve consciously moved away from that because we want to make things smoother, safer, and faster for our customers. It means less downtime, lower costs, and a more seamless experience.”
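To make the contrast with the old manual, multistep approach concrete, here is a minimal, purely illustrative sketch of how an incremental upgrade can be structured: many small, individually verified steps, each of which can be rolled back on its own. The step structure, names, and rollback logic are assumptions for the sake of the example, not a description of Lendscape’s actual tooling.

```python
# Illustrative sketch only: apply small upgrade steps one at a time,
# verify each, and roll back if a step fails. All names are hypothetical.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class UpgradeStep:
    name: str
    apply: Callable[[], None]      # performs the change
    verify: Callable[[], bool]     # confirms the system is still healthy
    rollback: Callable[[], None]   # undoes the change if verification fails


def run_incremental_upgrade(steps: List[UpgradeStep]) -> bool:
    """Apply small steps in order; stop and roll back on the first failure."""
    applied: List[UpgradeStep] = []
    for step in steps:
        step.apply()
        if not step.verify():
            # Undo everything applied so far, most recent first.
            for done in reversed(applied + [step]):
                done.rollback()
            print(f"Upgrade halted at step '{step.name}'; changes rolled back.")
            return False
        applied.append(step)
        print(f"Step '{step.name}' applied and verified.")
    return True


if __name__ == "__main__":
    # Dummy usage: a single no-op step that always verifies successfully.
    run_incremental_upgrade([
        UpgradeStep("add-index", apply=lambda: None,
                    verify=lambda: True, rollback=lambda: None),
    ])
```

The point of structuring upgrades this way is that a failure affects one small, reversible change rather than an entire release, which is what keeps downtime and risk low.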
That user-centric approach carries over into the (often thorny) issue of regression testing. Implementing a system upgrade is one thing; finding out that it has upended something that was working perfectly well before is quite another. The disruption and cost of testing are also perennial concerns.
“One of the risks with an upgrade is that you end up overwriting a specific configuration which was working very well before,” says Lang-Claes. “We’ve reshaped our engineering approach to make the risk of that happening much, much lower. We also have an automated testing regime that checks around 80% to 90% of the most frequently used functionalities and transactions, and which can be tailored with clients ahead of go-live.”
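As a rough illustration of what automated regression checks over frequently used transactions can look like, here is a minimal Python sketch. The `post_invoice` function, its parameters, and the client-specific expected values are hypothetical placeholders chosen for the example, not Lendscape’s actual test suite.

```python
# Illustrative regression test sketch: exercise a frequently used transaction
# against client-tailored expectations after an upgrade. The transaction
# logic and expected values below are hypothetical placeholders.

import pytest


def post_invoice(amount: float, advance_rate: float) -> float:
    """Stand-in for a frequently used transaction: funds advanced against an invoice."""
    return round(amount * advance_rate, 2)


# Client-specific configurations agreed before go-live (hypothetical values).
CLIENT_CASES = [
    ("client_a", 1_000.00, 0.80, 800.00),
    ("client_b", 1_000.00, 0.85, 850.00),
]


@pytest.mark.parametrize("client, amount, advance_rate, expected", CLIENT_CASES)
def test_invoice_funding_unchanged_after_upgrade(client, amount, advance_rate, expected):
    # A failure here would indicate the upgrade has altered behaviour the client relies on.
    assert post_invoice(amount, advance_rate) == expected
```

Running checks like these automatically after every small upgrade is what allows configuration regressions to be caught before clients ever see them.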
Are all these practices designed to help financial services organisations start to think differently about the risk/reward dynamic of system upgrades, then?
“Absolutely,” says Jenkins. “I think we’re moving from a time in which an upgrade was seen as this big, opaque challenge towards one where it’s much more streamlined and less intrusive.”
“That’s the aim, at least. We don’t want our customers to upgrade just because we say they should. We want them to upgrade – and stay current – because they understand what that’s going to do for them in terms of performance and functionality,” Jenkins concludes.
Aside from the direct impact that unexpected downtime or outages can have on a business, decision-makers should also weigh the quieter, cumulative cost of not upgrading: the efficiency gains from performance improvements and enhanced functionality that are simply never realised.
Upgrading and maintaining your installed software is critical to ensuring your business gets the most from its technology investment, from the efficiency and productivity needed to meet the demands of the business through to business continuity itself.