The Problem Statement
Enterprise financial services firms are increasingly bogged down by ageing technology platforms that grow ever more complex over their lifetimes through architectural entropy and tactical, knee-jerk change. The result is sluggish, difficult-to-change platforms that become increasingly expensive to operate.
Firms that maintain such legacy systems typically experience the following challenges:
The insatiable demand for data and analytics by financial services firms and regulators means that traditional data processing solutions are rarely up to the task. The result is large, slow ‘data islands’ throughout an enterprise, which ultimately decreases data utility and increases the cost and complexity of deriving meaningful, actionable insights.
Due in part to the architectural flux brought about by regulatory demands, many IT solutions have evolved tactically rather than strategically. The resulting complexity leaves architectures that are fragile, difficult to test and expensive to change.
Architectural brittleness and complexity make software increasingly difficult to change, which results in hugely inflated change budgets at a time when regulatory mandates require change to happen quickly and correctly. Poor software testing practices that rely on manual test execution – a practice that is labour-intensive, expensive and rarely sufficient in its coverage – exacerbate the problem.
With such complex systems, brittle architectures and patchy test coverage, it is no surprise that firms suffer from production outages, performance dips and other incidents. Often, the response is to commit more resources to production support teams, which fails to address the problem at the source and only further increases the total cost of ownership.
Legacy platforms are both hardware-hungry and inefficient in their use of compute resources, with millions of CPU cycles wasted in datacentres. Server infrastructure can also become increasingly difficult to manage due to configuration drift, while non-production environments often suffer contention between projects, which creates further sluggishness and cost when implementing change.
The Technology Accelerator comprises five services, as detailed below:
Assists clients with:
- Adopting Agile best practice
- Designing and Implementing CI frameworks
- Code branching and management techniques
- Toolchain selection
- Behaviour-driven development (BDD) and requirements engineering
- Velocity reporting and management tools
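Behaviour-driven requirements pair naturally with automated testing in a CI framework: each Given/When/Then scenario becomes an executable check. As a minimal sketch (the trade-settlement domain and the `settle` function are hypothetical, for illustration only):

```python
# BDD-style sketch: the settlement domain below is hypothetical.

def settle(balance, trade_amount):
    """Settle a trade if funds are sufficient; return (new_balance, status)."""
    if trade_amount > balance:
        return balance, "REJECTED"
    return balance - trade_amount, "SETTLED"

def test_trade_settles_when_funds_are_sufficient():
    # Given an account with a balance of 1000
    balance = 1000
    # When a trade of 400 is settled
    new_balance, status = settle(balance, 400)
    # Then the trade is accepted and the balance reduced
    assert status == "SETTLED"
    assert new_balance == 600

def test_trade_is_rejected_when_funds_are_insufficient():
    # Given an account with a balance of 100
    balance = 100
    # When a trade of 400 is attempted
    new_balance, status = settle(balance, 400)
    # Then the trade is rejected and the balance left unchanged
    assert status == "REJECTED"
    assert new_balance == 100
```

Run under a test runner such as pytest on every commit, scenarios like these give the CI pipeline an executable record of the agreed requirements.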
Helps clients adopt modern architecture practices including:
- API design and development
- Monolith migration planning (“strangulation”)
- CQRS and service design
- Transactional integrity and eventual consistency techniques
- Microservices-based big data pipelines
- Tool/vendor selection
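CQRS separates the write path (commands, which validate input and emit events) from the read path (queries served from a denormalised view that is updated asynchronously, hence eventually consistent). A minimal in-memory sketch of the idea; all class and event names here are hypothetical:

```python
# Minimal CQRS sketch: commands mutate a write model and emit events;
# queries are served from a separately maintained read model.

class AccountWriteModel:
    """Write side: validates commands and appends events."""
    def __init__(self):
        self.events = []

    def handle_deposit(self, account_id, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        event = {"type": "Deposited", "account": account_id, "amount": amount}
        self.events.append(event)
        return event

class AccountReadModel:
    """Read side: a denormalised view rebuilt from events (eventually consistent)."""
    def __init__(self):
        self.balances = {}

    def apply(self, event):
        if event["type"] == "Deposited":
            acct = event["account"]
            self.balances[acct] = self.balances.get(acct, 0) + event["amount"]

    def balance_of(self, account_id):
        return self.balances.get(account_id, 0)

# Wiring: in production the events would flow over a message bus;
# here we apply them to the read model directly.
write_side = AccountWriteModel()
read_side = AccountReadModel()
read_side.apply(write_side.handle_deposit("ACC-1", 250))
read_side.apply(write_side.handle_deposit("ACC-1", 100))
```

The same separation is what lets read and write sides be scaled, stored and deployed independently in a microservices architecture.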
Helps clients ‘automate everything’, specifically:
- Infrastructure-as-code / Immutable infrastructure
- CI/CD pipeline design for hardware, OS, and app release management
- Containerisation and container clustering
- Elastic scalability
- Zero down-time deployment
- DR & BCP design and implementation
- Cloud migration
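Infrastructure-as-code treats environment configuration as declarative, version-controlled data, so configuration drift can be detected mechanically by diffing desired state against what a server actually reports. A minimal sketch of that diffing step; the state dictionaries are hypothetical examples, not a real tool's schema:

```python
# Infrastructure-as-code sketch: desired state is declared as data, and
# configuration drift is detected by diffing it against observed state.

def detect_drift(desired, actual):
    """Return the settings whose actual value differs from the desired value."""
    drift = {}
    for key, want in desired.items():
        have = actual.get(key)
        if have != want:
            drift[key] = {"desired": want, "actual": have}
    return drift

desired_state = {"nginx_version": "1.24", "tls": "enabled", "max_connections": 1024}
observed_state = {"nginx_version": "1.18", "tls": "enabled", "max_connections": 512}

drift = detect_drift(desired_state, observed_state)
# drift now names exactly the settings a provisioning run must correct
```

With immutable infrastructure the remediation is simpler still: rather than patching the drifted server, a fresh instance is built from the declared state and the old one is retired.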
Automates tests as part of the SDLC, specifically:
- BDD and requirements engineering
- Unit, system and load test automation
- Stub and harness development
- Destructive testing and chaos engineering
- UAT management
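Stubs and harnesses let automated tests exercise business logic without touching slow or unavailable external systems. A short sketch using Python's `unittest.mock`; the pricing-service interface and `portfolio_value` function are hypothetical:

```python
# Stub sketch: a hypothetical external pricing service is replaced with a
# stub so the valuation logic can be tested quickly and deterministically.
from unittest.mock import Mock

def portfolio_value(positions, pricing_service):
    """Value a portfolio of {symbol: quantity} using an injected price source."""
    return sum(qty * pricing_service.price_of(sym) for sym, qty in positions.items())

# Harness: stub out the external service with canned prices.
stub = Mock()
stub.price_of = Mock(side_effect=lambda sym: {"AAA": 10.0, "BBB": 5.0}[sym])

value = portfolio_value({"AAA": 3, "BBB": 2}, stub)  # 3*10.0 + 2*5.0
```

Because the dependency is injected, the same valuation code runs unchanged against the real service in production and against the stub in the test harness.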
Integrates security practices from the outset:
- Achieving compliance with PCI DSS, Cloud Security Alliance framework & ISO 27000-series standards
- Security as part of CI/CD
- Static vulnerability analysis
- Security policy development and implementation
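Static vulnerability analysis inspects source code without executing it, and can run as a gating step in the CI/CD pipeline. Real pipelines would use a dedicated scanner; as a toy sketch of the idea, the standard-library `ast` module can flag calls to `eval`, a common injection risk:

```python
# Toy static-analysis sketch: walk a Python AST and flag eval() calls.
# Illustrates the CI-integration idea only; not a substitute for a real scanner.
import ast

def find_eval_calls(source):
    """Return the line numbers of any eval() calls in the given source code."""
    tree = ast.parse(source)
    findings = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(node.lineno)
    return findings

sample = "x = 1\ny = eval(user_input)\n"
issues = find_eval_calls(sample)  # a non-empty result would fail the CI gate
```

Wiring checks like this into the pipeline is what 'security as part of CI/CD' means in practice: insecure code is rejected before it reaches an environment.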