
Not if, but when: The case for mainframe modernization

Last edited on July 6, 2023


    The first idea for a mainframe came from a Harvard researcher who took the concept to IBM in the 1930s. After roughly a decade of development, the 5-ton machine – which filled an entire room and would cost around $3M to build today – was ready for use in 1943.

    Industries such as banking, retail, insurance, utilities, healthcare, and government still rely on their mainframes to handle their most sensitive, large-scale transactional data. Mainframes excel at security and reliability, which is why many organizations trust them with their mission-critical workloads.

    While the mainframe has evolved quite a bit over the last 80 years, it’s still not designed to take advantage of the global scale, access, and efficiency offered by the public cloud. For many organizations, a transition to the cloud is not so much a matter of if as a matter of when. And as part of this modernization effort, the mainframe needs to be left behind.

    At Cockroach Labs, we’ve worked with customers across industries on several “mainframe-offload” projects with the end goal of transitioning all their workloads to the cloud and increasing efficiencies. Yes, it’s a time-consuming project that can introduce risk, but the benefits can greatly outweigh the costs.

    Here are a few reasons why our customers believe leaving the mainframe behind for a distributed SQL solution is the right decision.

    Keep pace with your customers

    Companies need to meet customers where they are or they risk losing market share. For example, banks need to offer a great mobile experience, retailers need a performant ecommerce site, and hospitals need to give patients digital access to their data. New companies form specifically to meet the emerging demands of customers, and if legacy enterprises want to keep pace, they must have an infrastructure that allows them to innovate.

    As the foundation for your infrastructure, the database can quickly become a bottleneck when it comes to scale and performance. And, if you are scaling a traditional relational database management system (RDBMS), there’s a lot of manual work required to serve a global audience while guaranteeing an always-on experience.

    Organizations are rethinking their monolithic tech stack and transitioning all their apps and services to the cloud. Distributed systems (like microservices) are an increasingly popular choice when it comes to building and deploying software. Distributed applications – built on a distributed SQL database – can scale up or down quickly by adding or removing services or instances. This flexibility allows organizations to react to customer demands and get new applications to market faster.

    Infrastructure management and costs

    The software, hardware, and other third-party costs associated with mainframes are high. While mainframes no longer weigh 5 tons, they still can have a substantial footprint. Many organizations want to get rid of their data centers entirely. They want to consolidate workloads, improve utilization levels, and outsource management.


    RELATED

    Discover the total cost of ownership of distributed SQL


    The management of legacy databases can create a ton of complications for teams, including inconsistent customer records, heavy operational complexity associated with managing millions of transactions, scaling to accommodate new growth, and much more. Plus, these outdated systems can put organizations at risk of outages.

    With the cloud, you can outsource infrastructure management. With a cloud-native database, you can avoid vendor lock-in. And with a managed solution, you don’t have to spend resources on a DBA team and you can get support straight from the experts.

    Availability of skills

    Do you think Gen Z is learning how to operate a mainframe?

    There’s already a shortage of trained mainframe workers, and the skills gap is widening. As the mainframe operations workforce ages, these skills may simply become unavailable.

    This is where automation comes into play. Automation should be built into the solution: it’s a key feature that lets your teams do more without thinking about, or spending time on, operations. Time to innovate is the thing DevOps teams and SREs both say they want more of. So when these expensive, difficult-to-hire-for roles spend their cycles on non-strategic work (i.e., maintaining mainframes and legacy systems), those resources are wasted and you incur an opportunity cost.

    With a distributed SQL database, you can automate scale and replication. You can perform online rolling upgrades and make schema changes in production. These types of features give development teams their time back and reduce total cost of ownership (TCO).
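    As a sketch of what this automation looks like in practice, the statements below use CockroachDB’s SQL syntax; the table and column names are illustrative. Both replication and an online schema change are expressed declaratively, and the database handles the operational work in the background:

```sql
-- Raise the replication factor; CockroachDB re-replicates
-- data automatically, with no manual resharding.
ALTER RANGE default CONFIGURE ZONE USING num_replicas = 5;

-- Online schema change: the column is added in the background
-- while the table keeps serving reads and writes.
ALTER TABLE accounts ADD COLUMN email STRING;
```

    There’s no maintenance window to schedule and no downtime to coordinate – the kind of routine operational work that consumes specialist time on legacy systems.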

    In summary

    Our customers use CockroachDB to modernize their applications because it delivers resilient, highly available infrastructure, guarantees consistent transactions, provides the flexibility to deploy across multiple regions, allows you to pin data to a location, works with other modern technologies (like Kubernetes), and can scale to accommodate heavy transactional workloads.
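    For example, the multi-region deployment and data-pinning capabilities mentioned above map to a few declarative SQL statements in CockroachDB (the database, region, and table names here are illustrative):

```sql
-- Declare the regions the database spans.
ALTER DATABASE bank SET PRIMARY REGION "us-east1";
ALTER DATABASE bank ADD REGION "europe-west1";

-- Pin each row to the region named in its hidden
-- crdb_region column, keeping data close to its users.
ALTER TABLE bank.users SET LOCALITY REGIONAL BY ROW;
```

    Pinning rows by region in this way keeps latency low for local users and can help satisfy data-domiciling requirements.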

    Ultimately, all these features allow them to achieve their business goals – which, in this instance, is to get to the cloud.

    If your organization is undergoing a large modernization effort, or has mandated “no data centers, no mainframes”, get in touch to see how CockroachDB can help.

    distributed SQL
    distributed database
    distributed
    database cost