When it comes to cloud computing, the image that first comes to mind for many is a huge data center like Google's, where hundreds of thousands of low-cost Intel-based servers make up the hardware platform.
But can every company have data centers like these? Clearly not. Even a large bank cannot create and maintain multiple data centers with over 500,000 distributed servers. Corporate data centers operate differently from public clouds because they must maintain certain internal controls and processes, whether for regulatory reasons or to comply with auditing standards. On the other hand, they need to build a dynamic infrastructure based on the concepts of cloud computing. Private clouds offer many of the same facilities as public clouds, but they operate inside the company firewall: the cloud is provisioned and accessed only internally.
And on which hardware platform should a company build its cloud?
Large corporations such as big banks already use mainframes. So why not use them as a platform for their clouds?
Let’s think a little about it.
The new mainframes run not only legacy Cobol applications, but also efficiently process Java programs and host Linux systems. Two practical examples are CMMA (Collaborative Memory Management Assist) and DCSS (Discontiguous Saved Segments). CMMA extends the paging coordination between Linux and z/VM down to the level of individual pages, optimizing memory usage. With DCSS, portions of memory can be shared by multiple virtual machines: programs used in many or all Linux virtual machines can be placed in a DCSS so that they all share the same pages. Another issue that affects clouds built on distributed servers is the latency that occurs when programs run on machines remote from one another. A single mainframe can host thousands of virtual servers connected by memory-to-memory communication, eliminating this problem.
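To make the DCSS benefit concrete, here is a minimal back-of-the-envelope model of the memory saved when common code is shared in one segment instead of duplicated in every guest. The guest count and the megabyte figures are invented for illustration, not measurements from any real system.

```python
# Rough model of the memory saved when shared code lives in a DCSS
# (Discontiguous Saved Segment) instead of being duplicated per guest.
# All numbers below are illustrative assumptions, not measurements.

def resident_memory_mb(guests: int, shared_mb: int, private_mb: int,
                       use_dcss: bool) -> int:
    """Total resident memory for `guests` Linux virtual machines.

    shared_mb  -- common code (kernel, libraries) every guest needs
    private_mb -- per-guest working data that can never be shared
    """
    if use_dcss:
        # One copy of the shared pages serves every guest.
        return shared_mb + guests * private_mb
    # Without a DCSS, each guest keeps its own copy of the shared code.
    return guests * (shared_mb + private_mb)

without = resident_memory_mb(600, shared_mb=256, private_mb=128, use_dcss=False)
with_dcss = resident_memory_mb(600, shared_mb=256, private_mb=128, use_dcss=True)
print(without, with_dcss)  # 230400 vs 77056 MB for 600 guests
```

The savings grow linearly with the number of guests, which is why the technique matters most in exactly the dense-consolidation scenario this article describes.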
Mainframes naturally incorporate many of the attributes needed in a cloud: scalable capacity, elasticity (virtual machines can be created and destroyed without acquiring new hardware), resilience, and security. Not to mention virtualization, which has been part of mainframes since 1967!
Automatic resource management is already built into much of the mainframe software stack. In fact, the System z Integrated Systems Management Firmware seamlessly manages resources, workloads, availability, virtual images, and energy consumption across different mainframes.
Let's now look at load distribution. A mainframe can handle many more virtual servers per square foot than an environment of Intel servers can. The floor space occupied by a mainframe hosting a cloud of thousands of virtual servers can be around 1/25 of what is needed with Intel servers. Furthermore, each mainframe processor can host, depending on the load, dozens of virtual servers. Another consequence is that energy consumption can be around 1/20 of what thousands of physical servers would consume.
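The consolidation ratios above translate into simple arithmetic. The sketch below applies the quoted 1/25 floor-space and 1/20 energy ratios to a hypothetical distributed server farm; the starting figures (5,000 sq ft, 400 kW) are assumptions chosen only to make the scaling visible.

```python
# Back-of-the-envelope consolidation figures using the ratios quoted
# in the article: ~1/25 of the floor space and ~1/20 of the energy.
# The distributed-farm inputs are hypothetical.

def consolidated(distributed_value: float, ratio: float) -> float:
    """Scale a distributed-farm figure by a consolidation ratio."""
    return distributed_value * ratio

distributed_sqft = 5000.0  # assumed footprint of an Intel server farm
distributed_kw = 400.0     # assumed power draw of that farm

print(consolidated(distributed_sqft, 1 / 25))  # 200.0 square feet
print(consolidated(distributed_kw, 1 / 20))    # 20.0 kW
```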
A practical example: the cloud created by Marist College in the U.S. runs more than 600 virtual machines on a four-processor mainframe.
On the economic side, zEconomics, the economics of the mainframe (System z), can yield an extremely advantageous total cost of ownership. Java applications (which run on a specialty processor called the zAAP) and Linux (which runs on IFL processors) use engines that cost much less than the standard processors that run z/OS and legacy applications.
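The cost argument can be sketched numerically. In the model below, a workload served entirely by general-purpose engines is compared with one where half the capacity is offloaded to specialty engines (zAAP/IFL). The engine counts and the relative prices (100 vs. 30 units) are invented purely for illustration; real pricing varies by machine and contract.

```python
# Sketch of why offloading Java and Linux work to specialty engines
# (zAAP for Java, IFL for Linux) lowers processor cost: specialty
# engines are priced well below general-purpose z/OS processors.
# All prices and engine counts here are invented for illustration.

def engine_cost(general: int, specialty: int,
                general_price: float, specialty_price: float) -> float:
    """Total processor cost for a mix of general and specialty engines."""
    return general * general_price + specialty * specialty_price

# 8 general-purpose engines vs. 4 general + 4 specialty engines,
# assumed to deliver comparable capacity for this mixed workload.
all_general = engine_cost(8, 0, general_price=100.0, specialty_price=30.0)
offloaded = engine_cost(4, 4, general_price=100.0, specialty_price=30.0)
print(all_general, offloaded)  # 800.0 vs 520.0 (illustrative units)
```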
A final thought: since automatic controls are already built into the mainframe, and since there are fewer physical components to manage, the demand for cloud-management staff may be around 1/5 of what is needed in physically distributed systems.