Many large firms, especially banks and financial institutions, are moving to a mainframe plus multi-cloud model of computing. This is a long-term effort, still under way, and it includes the following stages:
- multiple address spaces in one OS
- virtual instances (LPARs)
- clustering (Parallel Sysplex)
- cloud
- containers (Docker, Kubernetes)
The mainframe can run multiple applications concurrently, each in its own area of virtual storage called an address space (system tasks run as started tasks, or STCs; batch work runs as JOBs). For example, a Db2 or CICS instance can run in an independent address space with its own isolated real and virtual memory.
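Loosely speaking, an address space plays a role similar to an OS process with its own private virtual memory. The Python sketch below illustrates that isolation only by analogy, using multiprocessing; the names CICS1 and DB2A are just labels, not real subsystems.

```python
# Analogy only: a z/OS address space is loosely like an OS process with its
# own private virtual memory. Two processes can use the same variable name
# without interfering, just as two address spaces do not see each other's storage.
from multiprocessing import Process, Queue

def workload(name: str, results: Queue) -> None:
    # Each process gets its own private copy of 'buffer'.
    buffer = f"data private to {name}"
    results.put((name, buffer))

if __name__ == "__main__":
    results: Queue = Queue()
    jobs = [Process(target=workload, args=(n, results)) for n in ("CICS1", "DB2A")]
    for j in jobs:
        j.start()
    for j in jobs:
        j.join()
    while not results.empty():
        print(results.get())
```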
VM
The original virtualization was not a built-in hardware hypervisor but the OS-level virtual machine CP/CMS (later branded z/VM), developed in the 1960s. PR/SM (Processor Resource/Systems Manager), the commonly known type-1 hypervisor of the mainframe, was only introduced in the late 1980s, which is probably when people started using LPARs (logical partitions) to virtualise a mainframe system. Today there are two main types of hypervisor: type 1 (bare metal), which runs directly on the host hardware, and type 2, which runs as software on a host OS.
PR/SM ships with the mainframe machine and acts as a type-1 hypervisor; z/VM can be purchased and installed in one LPAR to act as a type-2 hypervisor and host thousands of guest VMs, such as Linux. Compare that with the cloud, where we can now assign 10 GB of persistent storage to a VM from a web console.
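The same kind of provisioning can also be scripted against a cloud API rather than clicked through a web console. The sketch below assumes AWS and its boto3 SDK purely for illustration; the region, availability zone and instance ID are placeholders.

```python
# Sketch: attach 10 GB of persistent block storage to a cloud VM via an SDK
# instead of the web console. AWS/boto3 is assumed only as an example; the
# zone and instance ID are placeholders, and valid credentials are required.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Create a 10 GiB persistent volume in the same availability zone as the VM.
volume = ec2.create_volume(AvailabilityZone="eu-west-1a", Size=10, VolumeType="gp3")

# Wait until the volume is ready, then attach it to the instance.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    Device="/dev/sdf",
)
```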
Clustering
Clustering is also called horizontal scaling. Every piece of hardware has a ceiling on its specifications, even a mainframe, which in its latest z15 model can be configured with up to 190 CPU cores and 40 TB of memory. When physical capacity is reached, another machine can be added to form a cluster, known on the mainframe as a Parallel Sysplex. Each node in the cluster can sit in a different data centre yet still present a single endpoint to users, much as cloud applications are deployed across zones and regions for high availability.
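To make the single-endpoint idea concrete, here is a minimal client-side failover sketch in Python. The host names are hypothetical, and a real sysplex or cloud deployment would rely on dynamic VIPA, DNS or a load balancer rather than application code like this.

```python
# Minimal sketch of one logical service spread over nodes in different data
# centres: try each node in turn and use the first that answers. Hostnames
# are placeholders; production setups use DNS, VIPA or a load balancer.
import socket

NODES = ["node-dc1.example.com", "node-dc2.example.com"]  # placeholder hosts
PORT = 8080

def connect_to_cluster(timeout: float = 2.0) -> socket.socket:
    last_error = None
    for host in NODES:
        try:
            return socket.create_connection((host, PORT), timeout=timeout)
        except OSError as exc:  # node or data centre unavailable; try the next one
            last_error = exc
    raise ConnectionError(f"no cluster node reachable: {last_error}")
```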
Container
There are obvious advantages to moving applications and infrastructure to a container and microservice model, including faster development, visibility, isolation, scaling and efficiency. A container makes software more portable and its runtime lightweight, and it enables infrastructure with the key SRE characteristics of self-healing, automation and CI/CD. This target model assumes the use of APIs.
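As an illustration of the microservice half of that model, here is a minimal HTTP service in Python (Flask is used purely as an example framework) of the kind that would typically be packaged into a container image and fronted by an API; the endpoints and names are hypothetical.

```python
# Minimal example of a stateless, API-driven microservice of the kind that
# gets packaged into a container image. Flask is used only as an example.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/healthz")
def health():
    # Liveness endpoint an orchestrator can probe for self-healing.
    return jsonify(status="ok"), 200

@app.route("/api/v1/balance/<account_id>")
def balance(account_id: str):
    # Placeholder business endpoint; a real service would call the system of record.
    return jsonify(account=account_id, balance=0)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```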
IBM supports Docker on System z with z/OS Container Extensions (zCX), provisioned through a z/OSMF workflow. The Docker engine runs as an address space in z/OS, and Linux images can be hosted within that address space. The greatest selling point is that, in theory, critical data never leaves the core mainframe platform, which can be important for customers facing tight compliance regulations. This design also puts the API and its attendant plugins on the mainframe, reducing latency, round trips and security exposure.
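Once a zCX instance is provisioned and logged into, day-to-day usage looks like standard Docker CLI work anywhere else. The sketch below simply drives those commands from Python; the image name and registry are placeholders.

```python
# Sketch: driving the standard Docker CLI (as exposed inside a provisioned
# zCX instance) from Python. The registry and image names are placeholders.
import subprocess

def docker(*args: str) -> str:
    result = subprocess.run(["docker", *args], capture_output=True, text=True, check=True)
    return result.stdout

# Pull and start a Linux container image inside the zCX address space.
docker("pull", "registry.example.com/team/payments-api:latest")  # placeholder image
docker("run", "-d", "--name", "payments-api", "-p", "8080:8080",
       "registry.example.com/team/payments-api:latest")
print(docker("ps"))
```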
IBM also supports a container runtime for IBM z/OS that handles Open Container Initiative (OCI) compliant images containing z/OS software, as well as Kubernetes orchestration for containers on z/OS. There are a few ways to implement Kubernetes, such as OpenShift on Linux guests under z/VM or Red Hat KVM, hosted on the System z and LinuxONE families.
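As a sketch of what that orchestration looks like from the outside, the official Kubernetes Python client can query such a cluster, including OpenShift running on Linux under z/VM or KVM, assuming a valid kubeconfig is available wherever the script runs.

```python
# Sketch: listing the pods an orchestrator is scheduling across a cluster,
# using the official Kubernetes Python client. Assumes a valid kubeconfig.
from kubernetes import client, config

config.load_kube_config()   # use the local kubeconfig context
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```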