Paths to Mainframe modernization
Mainframes have been around forever, and so has the idea of mainframe modernization. Experts have been predicting the death of the Mainframe for quite some time now, but it is very much alive:
- 67 of the Fortune 100;
- 45 of the top 50 Banks;
- 8 of the top 10 Insurers;
- 8 of the top 10 Telcos;
- 7 of the top 10 Retailers;
- 4 of the top 5 Airlines.
Folks who haven't worked on Mainframes cannot comprehend why they are so hard to get rid of, and folks who have worked on them extensively question why they need to go at all.
Most large-scale mainframe modernization efforts end up as multi-year projects with a lot of disappointment at the end. Mainframe modernization is complex on its own, and it is often made worse by leaders who do not understand the full set of challenges involved in migrating off the Mainframe.
Replacing the Mainframe is as much a cultural shift for your teams as it is a technological one. The mainframe ecosystem is heavily fenced: you will see players like CA, BMC, and of course IBM, with deep proprietary integrations with the platform.
There's a mainframe way of doing everything, even though the underlying computer science is the same. Architectural problems are solved at the subsystem level, for example, CICS-managed concurrency, or Sysplex for horizontal clustering.
The engineering team that supports these subsystems is usually separate from the development team, and developers typically have no experience solving problems that are common in the distributed world.
Most conversion activities start with a focus on the programs, typically COBOL, the most popular language on the Mainframe. That initial optimism soon disappears when teams realize that the business rules are sprinkled across the programs, the JCL (a popular DSL for batch processing), schedulers, and an endless list of parameter files (parmlibs).
Depending on who you ask, the Mainframe is either a real-time transaction processing system, a database, an ETL engine, or all three. Mainframes operate as the data of record for many business domains, and that data needs to flow to internal and external systems. For large enterprises, this means thousands of files and integration touchpoints, often exchanged using proprietary formats (e.g., EBCDIC, packed decimal). These proprietary interfaces mean you may need to refactor downstream consumers when you replace the Mainframe. Of course, it is possible to use a legacy adapter pattern that transforms information from the replacement system back into the legacy format to reduce downstream impact.
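As a sketch of what such an adapter has to deal with, the snippet below decodes two of the formats mentioned above: an EBCDIC text field and a packed-decimal (COMP-3) number. It is a minimal illustration only, assuming the common cp037 code page and the standard COMP-3 sign-nibble convention; real copybook layouts vary.

```python
import codecs

def decode_packed_decimal(raw: bytes, scale: int = 0) -> float:
    """Decode an IBM packed-decimal (COMP-3) field.

    Each byte holds two decimal digits (one per nibble); the final
    nibble is the sign: 0xD means negative, anything else positive.
    """
    nibbles = []
    for byte in raw:
        nibbles.append(byte >> 4)
        nibbles.append(byte & 0x0F)
    sign = nibbles.pop()                      # last nibble is the sign
    value = int("".join(str(d) for d in nibbles))
    if sign == 0xD:
        value = -value
    return value / (10 ** scale)              # scale = implied decimal places

def decode_ebcdic(raw: bytes) -> str:
    # cp037 is a common US/Canada EBCDIC code page shipped with Python
    return codecs.decode(raw, "cp037")
```

For example, the bytes `12 34 5C` with two implied decimal places decode to 123.45, and the EBCDIC bytes `C8 C5 D3 D3 D6` decode to "HELLO".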
If you haven't already guessed, testing is one of the most critical aspects of your migration effort. Even if your migration approach relies on significant automation, a good test strategy reduces risk.
- Production Parallel: This approach is slightly different from blue-green deployment in that your persistent store is not production but a near-production region. Parallel runs are most effective when replacing entire systems that have cleaner boundaries.
- Automated Testing: If you have a mature automated test suite, it will certainly reduce your testing risk.
- Replay Tests: You could consider this a variant of production parallel. The idea is to regenerate workloads from history (transaction logs or file archives) and replay them to see how the replacement system behaves. This approach is critical for interfaces/inputs that are infrequent or ad hoc (e.g., monthly or yearly file feeds, ad hoc interfaces) and cannot generate enough activity during the production parallel.
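The replay idea can be sketched in a few lines. Here `legacy_archive` and `new_system` are hypothetical stand-ins for your captured history of (input, output) pairs and for a callable wrapping the replacement service:

```python
def replay(legacy_archive, new_system):
    """Replay archived transactions against the replacement system.

    legacy_archive: iterable of (txn_input, legacy_output) pairs
    new_system:     callable that processes one transaction input
    Returns the list of mismatches for investigation.
    """
    mismatches = []
    for txn_input, legacy_output in legacy_archive:
        new_output = new_system(txn_input)
        if new_output != legacy_output:
            mismatches.append((txn_input, legacy_output, new_output))
    return mismatches
```

An empty mismatch list over a year of archived activity is a far stronger signal than any hand-written test suite, precisely because it exercises the rare monthly and yearly inputs.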
The team that runs a Mainframe migration quickly learns that their performance models are pretty skewed. The classic MIPS-to-core mapping is a poor model for capacity planning.
The Mainframe is a well-engineered piece of hardware: a CISC instruction set, I/O coprocessors that offload I/O, crypto cards that offload encryption, blazing-fast multi-gigabyte inter-process communication channels, and a shared memory model make it a beast. The IBM COBOL compiler and the CICS application server are optimized for the Mainframe, which allows programmers to tap into the hardware and avoid, or simply ignore, problems that are common in the distributed world. Database access is a case in point: the Mainframe database is co-located and accessible at very high speed from the application program's runtime. When these programs are converted to Java and deployed in a three-tier architecture, the application experiences significantly increased latency.
These latency differences become obvious in large batch processes, where you might have to cache data or parallelize the batch to meet your SLAs. The ability to foresee such performance issues and plan the refactoring is an essential part of conversion planning.
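As a sketch of the kind of refactoring involved, the snippet below parallelizes a record-at-a-time batch step so that the new per-record network round trips overlap instead of accumulating serially. `process_record` is a hypothetical stand-in for a lookup-and-transform step that now crosses the network:

```python
from concurrent.futures import ThreadPoolExecutor

def process_record(record):
    # placeholder for a lookup + transform that now incurs a remote
    # database round trip instead of a co-located in-memory access
    return record * 2

def run_batch(records, workers=8):
    # overlap the I/O waits of many records instead of paying them serially;
    # with 10 ms of added latency per record, a serial 1M-record batch gains
    # nearly 3 hours of wall-clock time that parallelism can claw back
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_record, records))
```

Thread-based fan-out helps only when the step is I/O-bound; order-sensitive batch logic (running totals, restart checkpoints) usually needs restructuring before it can be parallelized at all, which is exactly the planning work described above.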
A transpiler, or source-to-source compiler, converts the COBOL source to a more "modern" programming language like Java or .NET. The output often gets criticized as JOBOL: Java that looks like COBOL, an inherent disadvantage of converting a procedural language to an object-oriented one.
However, this approach provides the quickest path to migration and allows your development team to move to a modern programming language. Migrating to a modern language enables enterprises to tap into a broader pool of developers, libraries, and tooling.
An emulator emulates a Mainframe (z Series processor) on Intel x86. IBM's zDT (https://www.ibm.com/docs/en/zdt/12.0.0?topic=personal-edition) allows enterprises to take their test workloads to x86 (you cannot run production workloads, of course). I have seen folks running zDT on AWS to deploy non-production workloads. While IBM zDT is a full-blown mainframe emulator, other vendors in this space emulate specific platforms/subsystems (e.g., https://www.raincode.com/).
One of the critical challenges in the emulator space is that most emulator products could attract IBM Legal's attention, since emulators can potentially infringe on IBM's patents.
With an emulator, you remain largely in the same development ecosystem: your teams are still developing in COBOL. Depending on the depth of your COBOL team, this might be an advantage, since the same team that already has a deep understanding of your business domain keeps supporting the application.
Lastly, you are moving from one implementation of a proprietary technology to another. You become dependent on the emulator vendor rather than moving to an open platform or ecosystem.
Reimagine business needs and develop new solutions. "New," in this context, could be a combination of COTS/SaaS products and custom applications. Rewrite, which can sound super ambitious, is often the best approach, especially if the applications are critical to your business.
A rewrite is a complex activity; however, you have better tools to handle the problem, and you have the liberty to drop legacy features that are no longer required while building out the features that matter to your enterprise. A rewrite is also your opportunity to transform your architecture: from batch-based workflows to real-time APIs, from monoliths to smaller apps and services, or whatever your architecture goals are.
Rewrite offers the best opportunity for cultural transformation and re-organization.
Keep the Mainframe
Migrating off the Mainframe is not for everyone; your organization might find that the fully accounted cost of migration is much higher than the cost of keeping the Mainframe. Many organizations run the Mainframe in "keep the lights on" mode and put new workloads on the cloud. The "lights on" approach is attractive if the Mainframe supports a line of business that your organization plans to shut down, reduce investment in, or sell off.
Even though cost is one of the key reasons organizations take on mainframe migration, speed of delivery, lack of personnel, and the inability to leverage modern development techniques are equally strong reasons to consider it.
If you are keeping the Mainframe, one option is to adopt modern Mainframe development practices. Migrate your teams out of ISPF and use IBM IDz or a similar modern IDE to improve productivity. Use the new z/OS APIs and management console to administer the platform. A Mainframe that you ignore always ends up being much more costly than one you maintain.
Cloud vs. Mainframe
Mainframe migration and modernization have seen increased interest due to cloud adoption. This is because:
- Mainframes act as anchors that tie enterprises to their data centers (which enterprises want to get out of);
- Mainframe datastores act as a constant choke point for data access from cloud workloads (APIs running on the cloud have to reach back to the data center for the latest information);
- Cloud platforms have reduced the risk of mainframe migration.
The first two are obvious, but what is particularly interesting is how cloud platforms reduce the risk of mainframe migration efforts.
Mainframes run mission-critical workloads. When migrating away from the Mainframe, an organization's key challenge is developing the technical maturity to run distributed infrastructure at similar service levels. A mature cloud platform like AWS or Azure solves this by providing these capabilities out of the box; for example, teams can get five-nines uptime without building a database engineering team by using Amazon Aurora. The cloud makes it possible for organizations to reduce their technology risk post-migration. The elastic nature of the cloud also means that workloads can scale with real-world demand post-migration, which makes the TCO models for cloud migration look very attractive.
There is no universal approach to migrating or modernizing mainframe assets; your business strategy and technology strategy should drive the path you choose. Identifying natural seams in the application and pulling smaller parts out as a pilot project often helps the team validate the conversion strategy before going all in.
Lastly, be mindful of the cultural challenges and the strengths and weaknesses of your teams.