Why modern enterprises should move from a virtual-first to a virtual-only strategy

Virtualization has changed how modern enterprises are run. Most companies should by now have completed, or be currently finishing, a virtualization program in which all legacy physical servers are migrated to a virtualized infrastructure, both increasing efficiency and lowering operational costs.

Once the virtualization program has been completed and the people and processes are mature and reliable, most companies switch to a virtual-first policy: all new services and applications are delivered on virtual machines by default, and physical machines are only offered if a request matches certain exception criteria. These exception criteria often include, but are not limited to, physical boundaries such as specialized hardware, monster VMs, and huge amounts of storage. The argument usually goes: if an application consumes an entire host's resources, what is the point of virtualizing it? Doesn't it get expensive to purchase virtualization licenses for just a single VM?

However, I see issues with physical servers still being issued under exception criteria, especially if the enterprise has reached more than around 95% virtualization. Yes, the initial cost of virtualizing monster VMs will be higher from day one, but if a proper TCO calculation is run you will begin to see why physical servers end up costing more towards the middle and end of the hardware's lifecycle.
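
As a rough illustration, here is a minimal sketch of the kind of TCO comparison I have in mind. Every figure in it (hardware cost, support renewals, licensing, yearly operational overhead) is a made-up placeholder, so treat it as a template for your own numbers rather than a real cost model.

```python
# Rough, illustrative TCO comparison over a five-year lifecycle.
# Every figure below is a placeholder -- substitute your own pricing,
# support, licensing, and operational-overhead numbers.

YEARS = 5

def physical_tco(hw_cost=30_000, support_per_year=3_000, ops_overhead_per_year=8_000):
    """Dedicated physical server: hardware, support renewals, and the extra
    operational cost of running a separate, non-standardized silo
    (separate patching, backup, failover, and deployment routines)."""
    return hw_cost + YEARS * (support_per_year + ops_overhead_per_year)

def virtual_tco(host_share=20_000, hypervisor_license=7_000, ops_overhead_per_year=2_000):
    """Monster VM on shared virtual infrastructure: a share of a cluster host,
    hypervisor licensing, and a much smaller operational overhead because the
    workload follows the standardized processes everything else already uses."""
    return host_share + hypervisor_license + YEARS * ops_overhead_per_year

print(f"Physical over {YEARS} years: {physical_tco():>7,}")
print(f"Virtual over {YEARS} years:  {virtual_tco():>7,}")
```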

Here are some examples of the operational boundaries and limitations introduced by bringing physical servers back into a highly virtualized infrastructure, along with reasons why virtualization should be the default for all workloads.

  • Responsibility for the physical hardware lifecycle is pushed back onto service and application owners. The application lifecycle becomes re-coupled to the hardware lifecycle, so any delay in an application upgrade or replacement forces you either to keep renewing hardware support or to take the risk of running hardware without support.
  • Hardware patching and driver updates become complex. Driver packages, firmware updates, and the routines around them must be maintained and managed separately from the virtual environment. Since you cannot vMotion a physical server, maintenance windows must be re-negotiated between operational teams and service owners, often involving overtime work.
  • Failover procedures must be maintained separately. Any datacenter failover must involve separate procedures and tests, independent of the rest of the virtualized environment. High availability must be 100% solid and handled separately from the rest of your standardized infrastructure: with physical servers, VMware HA is no longer available as a fallback if application-level high availability fails.
  • Backup and restore procedures must be maintained and operated separately. Backup agents need to be installed and managed on physical servers, with separate backup schedules and policies, and restore procedures become complex if the entire server fails.
  • Different server deployment procedures must be maintained for the physical and virtual environments. Many companies deploy VMs from templates while deploying physical servers over PXE, which means both deployment methods must continue to be managed separately, sometimes even by different teams.
  • The monster VMs of today will not be the monster VMs of tomorrow. The performance of modern x86 CPUs continues to grow in line with Moore's law. Five years ago, a typical large SQL database server ran a dual-socket, quad-core configuration with 64-128 GB of RAM; you wouldn't think twice about virtualizing that kind of workload today.
  • Virtualization enables a faster hardware refresh cycle. Once application decoupling has been completed, many enterprises move to a much faster hardware refresh cycle in their virtual environment. Production virtualization hosts are moved to test environments sooner, and VMs are migrated without application owners even noticing. Applications see an increase in performance during their normal lifecycle, which does not happen on physical hardware.
  • Everything can be virtualized with proper design. Claims that virtualization causes performance problems have no real technical basis today if the application and underlying hardware are properly sized and tuned. The overhead imposed by hypervisors, especially with paravirtualized SCSI and network adapters, is negligible. Low-latency voice applications can be virtualized using the latency-sensitivity feature in vSphere 5.5 (a minimal sketch of enabling this setting follows the list). If an application somehow requires performance beyond the limits of modern hypervisors, consider scaling the application out instead of up, and consider hiring expert consultants to analyze your most demanding applications before deciding to run them on physical hardware.
  • Have applications that require huge amounts of storage, such as MS Exchange 2013? Consider smarter storage solutions that enable compression and/or deduplication. You can see considerable savings in required capacity and datacenter space when this functionality is moved to the array level. Properly evaluate the TCO, risks, and operational overhead of maintaining cheap storage in DAS cabinets versus enterprise storage with lower failure rates.
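
To make the latency-sensitivity point above more concrete, here is a minimal pyVmomi sketch of enabling the vSphere 5.5 latency-sensitivity setting on a VM. The vCenter address, credentials, and VM name are placeholders, certificate validation is skipped for brevity, and the snippet is an illustration of where the setting lives rather than a production script.

```python
# Minimal pyVmomi sketch: set a VM's latency sensitivity to "high"
# (the vSphere 5.5 feature aimed at low-latency workloads such as voice).
# The vCenter host, credentials, and VM name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "voice-app-01")
    view.Destroy()

    spec = vim.vm.ConfigSpec()
    spec.latencySensitivity = vim.LatencySensitivity(level="high")
    vm.ReconfigVM_Task(spec=spec)  # returns a task object; poll it if you need the result
finally:
    Disconnect(si)
```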

As with everything, a proper TCO calculation must be run early in the project phase to determine the true cost of introducing physical servers into a highly virtualized environment. Make sure all stakeholders are involved and are aware of the extra operational cost of maintaining a separate, non-standardized physical silo of infrastructure.

Eliminating RDM complexity with Storage Replica in the next version of Windows Server

Recently, the new features in the next version of Windows Server were announced along with a public preview. One hot feature that caught my attention was Storage Replica, which enables block-level synchronous or asynchronous replication between two storage-agnostic volumes over SMB3.

If synchronous replication is used, you can create metro clusters using Windows Server Failover Clustering. You select two volumes that support SCSI-3 persistent reservations, create the replica, and the volume appears as a standard clustered disk resource in Failover Cluster Manager that can be failed over to other nodes in the cluster.

Asynchronous replication can be used for scenarios such as data migration, as you can create replication partnerships between servers or even between volumes on the same server. Since the replication is block based rather than file based, open files such as SQL databases are not a problem.
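
As a rough idea of what creating a replication partnership looks like, here is a minimal sketch that drives the Storage Replica PowerShell cmdlet from Python. The New-SRPartnership parameter names shown are assumptions based on the Storage Replica documentation for the shipping module and may differ in the Technical Preview, and the server names, resource groups, volumes, and log size are placeholders; check the guide linked at the end of this post before relying on any of it.

```python
# Minimal sketch: create a Storage Replica partnership by shelling out to
# PowerShell. The New-SRPartnership parameter names are assumptions based on
# the Storage Replica documentation and may differ in the Technical Preview.
# Server names, resource group names, volumes, and log size are placeholders.
import subprocess

ps_command = (
    "New-SRPartnership "
    "-SourceComputerName 'SR-NODE01' -SourceRGName 'RG01' "
    "-SourceVolumeName 'D:' -SourceLogVolumeName 'L:' "
    "-DestinationComputerName 'SR-NODE02' -DestinationRGName 'RG02' "
    "-DestinationVolumeName 'D:' -DestinationLogVolumeName 'L:' "
    "-ReplicationMode Synchronous -LogSizeInBytes 8GB"
)

result = subprocess.run(
    ["powershell.exe", "-NoProfile", "-Command", ps_command],
    capture_output=True, text=True)
print(result.stdout or result.stderr)
```

Switching -ReplicationMode to Asynchronous would cover the data-migration scenario described above.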

Many VMware customers, including myself, use in-guest virtualized metro clusters to provide high availability across two or more datacenters for mission-critical tier-1 applications. These applications require four or more nines of availability, which cannot depend on a single VM for HA.

Unfortunately, not all applications that require high availability support application-based replication; some depend on shared clustered disk for this functionality. Designs are therefore based on SAN disk that is virtualized and replicated between two geographic locations at the back end by products such as EMC VPLEX, and then presented to the guest as an RDM device.

You can create a cluster-in-a-box scenario with a single shared VMDK, but unless the multi-writer flag is enabled you cannot run the two cluster VMs across more than a single host. Windows failover clustering requires SCSI persistent reservations to lock access to the disk, which multi-writer sharing does not provide, so unfortunately this solution, commonly used for Oracle RAC, won't work for Microsoft clusters either.

So, as it stands, the only way to create virtualized Windows-based metro clusters that require shared cluster disk is to use RDM devices across two or more guests.

I have the following issues with RDMs used for in-guest clustering:

  • They create operational dependencies between your virtualization and storage departments. Resizing an RDM requires the virtualization administrator to coordinate with the storage administrator to resize the backend LUN, which is difficult to automate without third-party products.
  • They create operational dependencies between application owners, OS administrators, and virtualization teams. RDMs shared across hosts require the virtual SCSI adapter to be configured in physical bus-sharing mode, and with physical bus sharing enabled, live vMotion is disabled. Any maintenance on the VMware hosts therefore requires coordination between all of these teams to agree on downtime while cluster resources are failed over. Unfortunately, Storage Replica in synchronous mode still requires SCSI reservations; one way around this limitation is to use the Windows in-guest iSCSI initiator and target in loopback mode. Hopefully future VMware versions will allow vMotion with physical bus sharing enabled.
  • SAN migrations become more complex. Yes, with solutions like VPLEX you can present another SAN behind the VPLEX controller and migrate data on the fly, but what if you want to move to another vendor's mirroring product entirely? This means potential downtime as data is manually copied in-guest from one array to another, unless third-party block-level replication software is used. Clusters demand high uptime by design, so getting approval for these outage windows can take weeks of negotiation.
  • The 256-LUN limit per VMware host reduces how many VMs you can consolidate onto a host, and clustered RDMs make you reach that limit faster, especially if you use products like Veritas Storage Foundation with in-guest mirroring, which requires a minimum of two RDMs per logical volume (see the quick calculation after this list).
  • RDMs are complex to manage. Unless provisioning is orchestrated in some way, it can be difficult to identify and select the correct RDM when adding disks to a new VM.
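
To put a number on the LUN-limit point in the list above, here is a quick back-of-the-envelope calculation; the per-cluster figures are purely illustrative.

```python
# Back-of-the-envelope: how quickly clustered RDMs consume the
# 256-LUN-per-host limit. The per-cluster figures are illustrative.
LUN_LIMIT_PER_HOST = 256

logical_volumes_per_cluster = 10   # data, log, and quorum volumes, for example
rdms_per_volume = 2                # in-guest mirroring doubles the RDM count

luns_per_cluster = logical_volumes_per_cluster * rdms_per_volume
max_clusters_per_host = LUN_LIMIT_PER_HOST // luns_per_cluster

print(f"{luns_per_cluster} LUNs per cluster -> at most {max_clusters_per_host} "
      "clusters' worth of RDMs per host, before counting any VMFS datastore LUNs")
```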

With Storage Replica, managing virtualized metro clusters becomes simpler, as we can use VMDKs just like every other virtual machine. The replication dependency moves away from the underlying hardware and closer to the application level, where it belongs. I have demoed and automated the creation of virtualized metro clusters running on VMware in my lab, and I will share these guides in upcoming blog posts. If you want to get started yourself, the following Microsoft resources have good information.

Windows Server Technical Preview Storage Replica Guide –

http://go.microsoft.com/fwlink/?LinkID=514902

What's New in Storage Services in Windows Server Technical Preview

http://technet.microsoft.com/en-us/library/dn765475.aspx