Start small, and gather momentum with early successes
The insatiable demand for data and analytics from businesses and regulators means that traditional data-processing solutions are rarely up to the task. The result is large, slow databases, or ‘data islands,’ throughout the enterprise, which decrease data utility and increase the cost and complexity of deriving meaningful, actionable insights.
Due in part to the architectural flux brought about by regulatory demands, many IT solutions have evolved their architectures in a knee-jerk fashion, rather than with a strategic, long-term view. This application of tape and string has, over time, resulted in architectures that are easy to break, difficult to test and expensive to change.
Architectural brittleness and complexity make software increasingly difficult to change, a problem which results in hugely inflated change budgets at a time when regulatory mandates require change to happen quickly and correctly. This is often exacerbated by poor software testing.
With such complex systems, brittle architectures and patchy test coverage, it is no surprise that firms suffer from production outages, performance dips or other incidents. Often the response is to commit more resources to production support teams, which fails to address the problem at the source and further increases the total cost of ownership.
Legacy platforms are both hardware-hungry and inefficient in their use of compute resources, with millions of CPU cycles wasted in data centers. Server infrastructure can also become increasingly difficult to manage due to configuration drift, and non-production environments often suffer contention between projects, which creates further sluggishness and cost when implementing change.
While most technology owners understand the problems and have a good idea where the solutions reside, they often battle to secure buy-in and convince business stakeholders of the need for change.
The perceived cost of replacing or upgrading legacy technology is difficult to quantify and estimate and, with benefits that are often difficult to articulate, making a case for change initiatives becomes problematic in budget-constrained environments. Budget cycles and the myopic nature of change management often mean that the case for architectural improvement is deprioritised in favour of work that, at least superficially, appears to yield more immediate benefit.
Defining the scope and reach of change initiatives can often be difficult. Large-scale initiatives are often thought of as ‘too big to tackle’ by budget holders and implementers alike, which means that value-adding initiatives to modernise IT platforms are sometimes abandoned. This typically results in non-strategic change being required down the line, which usually exacerbates architectural brittleness, complexity and cost and ends up being benefit-neutral at best, rather than improving the agility, cost footprint, and reliability of the IT estate.
Large organisations typically find embracing change, especially transformational change, challenging. This can be caused by legacy organisational structures, red tape, vested interests, skills gaps, or simply fear of the unknown. Tackling the problems of legacy technology platforms is often fraught with this kind of organisational inertia as, to a degree, a level of cultural change is also required to successfully effect change.
While most IT leaders understand the benefits that modern approaches such as Cloud or DevOps bring, a primary objection to many of these approaches is that security policies, or data protection concerns, preclude their adoption. Additional regulatory concerns also surface, which introduce further barriers to modernisation.
The Technology Accelerator is designed to help banks and vendors modernise their technology stacks and processes and adopt best practices in IT. This holistic suite of services combines decades of technology experience with an expert grasp of modern IT approaches to deliver cutting-edge solutions specifically tailored to wholesale finance clients and their most pressing needs.
“Banks and many of their technology vendors continue to rely on outdated server infrastructure, brittle software architectures and legacy IT processes in an environment where businesses and regulators demand more data, more insightful analytics, more reliable performance, better security, faster change and less risk. And this, in an environment where IT cost reduction remains a priority.”
— Bradley Wood, Partner, and Head of the Technology Accelerator offering
Tech Accelerator Service Offering
- Adopting Agile best practice
- Designing and Implementing CI frameworks
- Code branching and management techniques
- Toolchain selection
- Behaviour-driven development (BDD) and requirements engineering
- Velocity reporting and management tools
- API design and development
- Monolith migration planning (“strangulation”)
- CQRS and service design
- Transactional integrity and eventual consistency techniques
- Microservices-based big data pipelines
- Tool/vendor selection
- Infrastructure-as-code / Immutable infrastructure
- CI/CD pipeline design for hardware, OS, and app release management
- Containerisation and container clustering
- Elastic scalability
- Zero down-time deployment
- DR & BCP design and implementation
- Cloud migration
- BDD and requirements engineering
- Unit, system and load test automation
- Stub and harness development
- Destructive testing and chaos engineering
- UAT management
Achieving compliance with:
- PCI DSS
- Cloud Security Alliance framework
- ISO 27000-series standards
- Security as part of CI/CD
- Static vulnerability analysis
- Security policy development and implementation
Many of the legacy systems that operate today’s enterprise platforms are monolithic, so adopting a modern microservices architecture ultimately requires decomposing these monoliths into smaller, discrete services. Doing this in a wholesale, big-bang fashion is a massive undertaking, so the best approach is to start small: build an API in front of the monolithic system, then apply the strangler pattern over time to slowly deconstruct the monolith.
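The routing logic at the heart of the strangler pattern can be sketched as follows. This is a minimal illustration, not any specific system: the class and route names (`LegacyMonolith`, `PaymentsService`, `/payments`) are all hypothetical.

```python
# Minimal sketch of the strangler pattern: an API facade routes each
# request to a new microservice once that capability has been migrated,
# falling back to the legacy monolith otherwise. All names here are
# illustrative, not taken from a real system.

class LegacyMonolith:
    def handle(self, route, payload=None):
        return f"monolith handled {route}"

class PaymentsService:
    """A capability already carved out of the monolith."""
    def handle(self, route, payload=None):
        return f"payments service handled {route}"

class StranglerFacade:
    def __init__(self, legacy):
        self.legacy = legacy
        self.migrated = {}  # route prefix -> new service

    def migrate(self, prefix, service):
        """Strangle one more capability: route its traffic to the new service."""
        self.migrated[prefix] = service

    def handle(self, route, payload=None):
        for prefix, service in self.migrated.items():
            if route.startswith(prefix):
                return service.handle(route, payload)
        return self.legacy.handle(route, payload)

facade = StranglerFacade(LegacyMonolith())
facade.migrate("/payments", PaymentsService())

print(facade.handle("/payments/transfer"))  # routed to the new service
print(facade.handle("/reports/daily"))      # still served by the monolith
```

Because callers only ever see the facade's API, each capability can be migrated and retired independently, without a big-bang cutover.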
To produce products efficiently, you need a factory that works well. This means adopting Agile approaches like continuous integration, test automation and sprint- or Kanban-based development cycles. Toolchain selection to support the entire SDLC is also critical, with significant engineering required in environment provisioning, test infrastructure, test data management, code profiling, testing and deployment. Once this is addressed, the efficiency with which value-adding functionality can be produced improves massively.
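Test automation is the backbone of such a code factory. As a minimal sketch, the unit tests below are the kind a CI server would run on every commit; the pricing helper and its figures are purely illustrative, not from any system described here.

```python
# Minimal sketch of automated unit tests run on every commit by a CI
# server. The pricing helper and its figures are illustrative only.
import unittest

def net_price(gross, fee_bps):
    """Deduct a fee quoted in basis points from a gross amount."""
    return gross * (1 - fee_bps / 10_000)

class NetPriceTests(unittest.TestCase):
    def test_fee_is_deducted(self):
        self.assertAlmostEqual(net_price(100.0, 25), 99.75)

    def test_zero_fee_is_identity(self):
        self.assertEqual(net_price(100.0, 0), 100.0)

# Run programmatically so the result can be inspected; a CI job would
# simply invoke the test runner and fail the build on any error.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(NetPriceTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Wiring such a suite into the pipeline means every change is verified before it reaches an environment, which is what makes frequent, low-risk releases possible.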
In order to optimise the code factory, automation must be relentlessly pursued. This DevOps philosophy is central to achieving agility and, with legacy release environments, it can appear an insurmountable task. The approach here, too, is to start small and automate those aspects of the SDLC and its supporting infrastructure that give the biggest benefit in the shortest time. Tests are the obvious candidate and their automation should be actively pursued but, beyond that, the automation of environment provisioning, test data refreshes and deployment should all be addressed within a CI/CD pipeline, as well as infrastructure tasks like firewall configuration, DR invocation, and the like.
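The stages named above can be modelled as an ordered pipeline that stops at the first failing step. This is a deliberately abstract sketch: the step names are illustrative, and in a real pipeline each step would shell out to the relevant provisioning, data or deployment tooling.

```python
# Sketch of a CI/CD pipeline automating the stages discussed above:
# environment provisioning, test-data refresh, tests, then deployment.
# Each step here just returns True/False; real steps would invoke the
# relevant tooling. Step names are illustrative.

def run_pipeline(steps):
    """Run steps in order; stop at the first failure.

    Returns (completed_step_names, failed_step_name_or_None).
    """
    completed = []
    for name, step in steps:
        if not step():
            return completed, name
        completed.append(name)
    return completed, None

steps = [
    ("provision-environment", lambda: True),
    ("refresh-test-data",     lambda: True),
    ("run-automated-tests",   lambda: True),
    ("deploy",                lambda: True),
]

completed, failed = run_pipeline(steps)
print(completed, failed)  # all four steps complete, nothing failed

# A failing test stage halts the pipeline before deployment.
broken = steps[:2] + [("run-automated-tests", lambda: False), steps[3]]
completed, failed = run_pipeline(broken)
print(completed, failed)
```

The point of the sketch is the ordering guarantee: deployment is unreachable unless every earlier automated gate has passed.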
Bake in Security
Security should not be applied as an afterthought in production environments, but systematically baked into the SDLC and production process. By applying DevOps principles to security considerations, much of the more mundane security assurance work can be applied in an auditable fashion. Authentication, entitlement, encryption, static vulnerability scanning and OS patching, for example, should be part of the software lifecycle, rather than a production management BAU function.
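One concrete form of this is a static check that runs as a CI/CD gate and fails the build when source text appears to contain hard-coded credentials. The patterns below are an illustrative sketch; a real pipeline would use a dedicated secret scanner or static-analysis tool.

```python
# Sketch of a security gate "baked into" the CI/CD pipeline: a static
# scan that flags source text containing what look like hard-coded
# credentials. Patterns are illustrative; real pipelines would use a
# dedicated scanner.
import re

SUSPICIOUS = [
    re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
    re.compile(r"api[_-]?key\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
]

def scan_source(text):
    """Return a list of suspicious matches found in the source text."""
    findings = []
    for pattern in SUSPICIOUS:
        findings.extend(pattern.findall(text))
    return findings

clean = "password = read_secret_from_vault('db')"  # fetched at runtime
dirty = "password = 'hunter2'"                     # hard-coded literal

print(scan_source(clean))  # no findings: build proceeds
print(scan_source(dirty))  # one finding: fail the build, auditable in CI
```

Running such checks on every commit makes security assurance routine and auditable, rather than a one-off production-management task.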