Why are organisations not opting for full-scale virtualisation?
Going by Gartner or IDC estimates, about 50% of workloads are virtualised today. Though virtualisation technology has been around for some time, the question is why the other 50% of workloads are not virtualised yet. We believe there are a couple of things happening with that portion of the workloads. Some workloads are not virtualised simply because of their size: they are so large that virtualisation doesn't make sense.
Virtualisation tends to be a great consolidation play. It's great when you want to thin-provision or over-provision. But it's not so good for predictable performance, error containment or security. In most cases of virtualisation, you give up something in exchange for thin provisioning, and that's acceptable in the context of that workload. However, we have found that other workloads have a mission-critical or performance-predictability aspect to them, which makes virtualisation not the best fit. Hence they haven't been virtualised yet.
I also think some workloads have not been virtualised because of complexity or cost. When an organisation decides not to go for virtualisation, it is also about control. People who are responsible for the workload are simply unwilling to give up control because of perceived or real security problems.
Often, customers are unclear whether they want a combination of workloads to move to the public cloud or out of the building, because of the nature of the application, the kind of data and the regulatory issues attached to it. So I think there's room for all of these. The challenge for us is to build a system that can deliver all of these: on premise, off premise, virtualisation and provisioning. It is up to the customer to decide how to plan the infrastructure.
From a risk point of view, what are the areas or elements that come along with virtualisation?
There is a set of characteristics and trade-offs that come along with virtualisation. When you start doing thin provisioning in virtualisation, be prepared to tolerate unpredictable performance in the application. Also, be prepared to deal with a system where the failure of one VM (virtual machine) may lead to the failure of multiple VMs. That is something the application needs to tolerate, or the infrastructure has to be designed to cope with. Then there are security concerns and issues linked with the system when the infrastructure, particularly storage, is shared. There have been cases where control of some part of the virtualisation software has been compromised.
So I think the next frontier for security is actually the storage and virtualisation layer. We have put a lot of effort into security capability around stealth technology. That's not only about dealing with data in motion but also with data at rest.
What’s your vision for Forward!? What role is it going to play for enterprises?
Forward! plays in the business-critical space. Our initial use cases are around SAP ERP, migration of UNIX workloads and other business-critical areas. Then there is the part of that space called non-mainframe mission critical, which is a euphemism for UNIX: IBM AIX, HP-UX and Solaris. Each of these environments has its own problems today. Generally, those people want to move to Linux.
They look at Linux environments, be it SUSE or Red Hat, because they are less expensive and meet their opex and capex needs. Linux environments offer a fairly open architecture, where they don't have to deal with a single vendor. The inhibitor to this move has been the business-critical workloads that tend to run on those systems. People are attracted by all the attributes of open systems and pursue them because of the platform's longevity. But they can't just leap to the Linux environment with those workload sets without looking for something that has the security and resiliency characteristics they have today. So we think Forward! fits exactly that space. That's why it's one of the use cases.