Hyper-V and VMware storage: which is better, block- or file-based access?


The rate of adoption of server virtualisation has increased in recent years, and virtual server workloads now include many production applications, including Tier 1 applications such as databases.
Here, we discuss the basic storage requirements for Hyper-V and VMware environments and examine the key question of block vs file storage in such deployments.
Basic requirements for storage in virtual server environments

When choosing storage for virtual server environments, a basic set of requirements must be met, no matter the hypervisor or the storage protocol. They are:

• Shared access. Storage connected to hypervisors usually must provide access shared among hypervisor hosts. This allows redundant, high-availability configurations. Where shared storage serves multiple hypervisors, guests can be load-balanced across the servers for performance and for availability in the event of a server failure.

• Scalability. Virtual server environments can comprise many virtual machines. This means any storage solution must be scalable to cater for the large volume of data virtual guests produce. Moreover, scalability is needed for shared connectivity, providing for multiple hosts with multiple redundant connections.

• High availability. Virtual server environments can contain many virtual servers or desktops. This represents a degree of risk that requires high availability from the storage array. Availability is measured in terms of array uptime but also of the components that connect the server to the array, such as network or Fibre Channel switches.

• Performance. Virtual environments produce a different I/O performance profile from that of individual servers. Typically, I/O is random in nature, but certain tasks, such as backup and guest cloning, can result in heavy sequential I/O demands (see the sketch below).
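To make the distinction concrete, the following Python sketch times 4 KiB sequential reads against random reads over the same scratch file. It is purely illustrative: the file name and sizes are arbitrary, and the OS page cache will flatter both numbers, but the gap between the two access patterns is what storage designers must plan for.

```python
import os
import random
import time

PATH = "scratch.bin"            # hypothetical scratch file
SIZE = 64 * 1024 * 1024         # 64 MiB test file
BLOCK = 4096                    # 4 KiB I/O size, typical of guest workloads

# Build a scratch file to read against.
with open(PATH, "wb") as f:
    f.write(os.urandom(SIZE))

def sequential_read() -> None:
    # Read the file front to back, as a backup or clone would.
    with open(PATH, "rb") as f:
        while f.read(BLOCK):
            pass

def random_read() -> None:
    # Read the same number of blocks at random offsets, as a mix of
    # consolidated guests tends to do.
    with open(PATH, "rb") as f:
        for _ in range(SIZE // BLOCK):
            f.seek(random.randrange(0, SIZE - BLOCK))
            f.read(BLOCK)

for name, fn in (("sequential", sequential_read), ("random", random_read)):
    start = time.time()
    fn()
    print(f"{name} read: {time.time() - start:.2f}s")

os.remove(PATH)
```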

Block vs file?

Virtual servers are deployed either to direct-attached storage (DAS) or to networked storage (NAS or SAN). DAS doesn't provide the shared access required by highly available virtual clusters because it is physically attached to a single virtual server. Enterprise-class solutions therefore use networked storage, and this implies protocols such as NFS, CIFS, iSCSI, Fibre Channel and Fibre Channel over Ethernet (FCoE).

File-level access: NAS

Network-attached storage encompasses the NFS and CIFS protocols and refers specifically to the use of file-based storage to store virtual guests. VMware ESXi supports only NFS for file-level access; Hyper-V supports only CIFS. This distinction is perhaps explained by the fact that CIFS was developed by Microsoft from Server Message Block (SMB), while NFS was originally developed by Sun Microsystems for its Solaris OS; both Solaris and ESXi are Unix variants.
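As a minimal sketch of how an NFS datastore is attached, the wrapper below shells out to esxcfg-nas, the classic ESX/ESXi command for NFS datastores. The filer hostname, export path and datastore label are hypothetical; on later ESXi releases the equivalent command is esxcli storage nfs add.

```python
import subprocess

# Hypothetical filer, export and datastore label.
NFS_HOST = "filer01.example.com"
NFS_SHARE = "/vols/vmware_ds1"
DATASTORE = "nfs_ds1"

# esxcfg-nas manages NFS datastores on classic ESX/ESXi:
# -a adds a datastore, -o names the NFS server, -s the exported path,
# and the final argument is the datastore label.
subprocess.run(
    ["esxcfg-nas", "-a", "-o", NFS_HOST, "-s", NFS_SHARE, DATASTORE],
    check=True,
)

# List configured NFS datastores to confirm the mount.
subprocess.run(["esxcfg-nas", "-l"], check=True)
```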

For VMware, NFS is a good choice of protocol because it provides a number of distinct benefits.

• Virtual machines are stored in directories on NFS shares, making them easy to access without using the hypervisor. This is helpful for taking virtual machine backups or restoring an individual virtual guest. VMware configuration files can even be directly created or amended.

• Virtual storage can easily be shared among multiple virtual servers; VMware uses a lock file on the share to ensure integrity in a clustered environment.

• No additional server hardware is needed to access NFS shares, which can be reached over standard network interface cards (NICs).

• Virtual guests can be thinly provisioned, if the underlying storage hardware supports it (see the sketch after this list).

• Network shares can be expanded dynamically, if the storage filer supports it, without any impact on ESXi.
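To illustrate the thin-provisioning point above: on a Unix-like filesystem that supports sparse files, a flat virtual disk can present a large logical size while consuming almost no physical space until the guest writes data. The file name and size below are arbitrary; this is a conceptual sketch, not VMware's own disk format.

```python
import os

VDISK = "guest_disk.img"        # hypothetical flat virtual disk file
LOGICAL_SIZE = 40 * 1024**3     # 40 GiB presented to the guest

# Truncating to the logical size creates a sparse file: blocks are only
# consumed on disk as data is actually written, which is the essence of
# thin provisioning.
with open(VDISK, "wb") as f:
    f.truncate(LOGICAL_SIZE)

st = os.stat(VDISK)
print(f"logical size: {st.st_size / 1024**3:.1f} GiB")
# st_blocks counts 512-byte units and is available on Unix-like systems.
print(f"space used  : {st.st_blocks * 512 / 1024**2:.1f} MiB")
```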
There are, however, some disadvantages when using NFS with VMware.

• Scalability is limited to eight NFS shares per VMware host (this can be extended to 64 but also requires the TCP/IP heap size to be increased; see the sketch after this list).

• Although NFS shares can scale to the maximum size permitted by the storage filer, a share is usually created from a single group of disks with a single performance characteristic; therefore, all guests on the share will experience the same I/O performance profile.

• NFS doesn't support multipathing, so high availability must be managed at the physical network layer, with bonded networks on ESXi and virtual interfaces on the storage array, if it supports them.
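As a sketch of the share-limit tuning mentioned above, the snippet below raises the NFS share limit and the TCP/IP heap via esxcfg-advcfg, the ESX/ESXi advanced-settings command. The heap values shown are illustrative only; check VMware's guidance for the values appropriate to your release.

```python
import subprocess

# esxcfg-advcfg sets ESX/ESXi advanced configuration options:
# -s sets a value, -g reads one back. Values below are illustrative.
settings = {
    "/NFS/MaxVolumes": "64",       # raise the default limit of 8 shares
    "/Net/TcpipHeapSize": "32",    # initial TCP/IP heap (MB)
    "/Net/TcpipHeapMax": "128",    # maximum TCP/IP heap (MB)
}

for option, value in settings.items():
    subprocess.run(["esxcfg-advcfg", "-s", value, option], check=True)
    subprocess.run(["esxcfg-advcfg", "-g", option], check=True)
```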

For Hyper-V, CIFS allows virtual machines (stored as virtual hard disk, or VHD, files) to be stored and accessed on CIFS shares, specified by a Universal Naming Convention (UNC) path or a share mapped to a drive letter. While this provides a certain degree of flexibility in storing virtual machines on Windows file servers, CIFS is an inefficient protocol for the block-based access needed by Hyper-V and not a realistic alternative. It is disappointing that Microsoft currently doesn't support Hyper-V guests on NFS shares; this seems a glaring omission.
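Because VHDs on a CIFS share are ordinary files, standard tooling can enumerate them, which is the flexibility referred to above. A minimal Python sketch, run on a Windows host against a hypothetical share:

```python
from pathlib import Path

# Hypothetical UNC path to a CIFS share holding Hyper-V VHD files.
share = Path(r"\\fileserver01\vhd-store")

# The VHDs are plain files, so ordinary tooling can enumerate them.
for vhd in share.glob("*.vhd"):
    size_mib = vhd.stat().st_size // (1024 * 1024)
    print(f"{vhd.name}: {size_mib} MiB")
```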

Block-level access: Fibre Channel and iSCSI

Block protocols include iSCSI, Fibre Channel and FCoE. Fibre Channel and FCoE are delivered over dedicated host adapter cards (HBAs and CNAs, respectively). iSCSI is delivered over standard NICs or via dedicated TOE (TCP/IP Offload Engine) HBAs. For both VMware and Hyper-V, the use of Fibre Channel or FCoE means additional cost for dedicated storage networking hardware. iSCSI doesn't explicitly require additional hardware, but customers may find dedicated hardware necessary to achieve higher performance.

VMware supports all three block storage protocols. In every case, storage is presented to the VMware host as a LUN. Block storage has the following benefits.

• Each LUN is configured with the Virtual Machine File System, or VMFS, which is specifically designed for storing virtual machines.

• VMware provides multipath I/O for iSCSI and Fibre Channel/FCoE (see the sketch after this list).

• Block protocols support hardware acceleration through the vStorage APIs for Array Integration (VAAI). These hardware-based instructions improve the performance of data migration and locking to increase throughput and scalability.

• ESXi 4.x supports "boot from SAN" for all protocols, enabling stateless deployments.

• SAN environments can use RDM (Raw Device Mapping), which enables virtual guests to write non-standard SCSI commands to LUNs on the storage array. This feature is useful on management servers.
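The multipathing benefit listed above amounts to spreading I/O over redundant paths and failing over when one dies. The toy model below is a conceptual sketch only (the path names are illustrative), not VMware's actual path-selection code:

```python
import itertools

class MultipathLun:
    """Toy round-robin path selector with failover; a conceptual
    sketch, not VMware's path-selection policy."""

    def __init__(self, paths):
        self.paths = list(paths)
        self.failed = set()
        self._cycle = itertools.cycle(self.paths)

    def fail_path(self, path):
        self.failed.add(path)

    def next_path(self):
        # Skip failed paths; give up once every path has been tried.
        for _ in range(len(self.paths)):
            path = next(self._cycle)
            if path not in self.failed:
                return path
        raise IOError("all paths to the LUN are down")

lun = MultipathLun(["vmhba1:C0:T0:L5", "vmhba2:C0:T0:L5"])
print(lun.next_path())            # I/O goes down the first path
lun.fail_path("vmhba1:C0:T0:L5")  # simulate an HBA or switch failure
print(lun.next_path())            # traffic fails over to the second path
```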
For VMware, there are some disadvantages to using block storage.

• VMFS is proprietary to VMware, so data on a VMFS LUN can be accessed only through the hypervisor. This process is cumbersome and slow.

• Replication of SAN storage typically happens at the LUN level; therefore, replicating a single VMware guest is more complex and wasteful of resources where multiple guests exist on the same VMFS LUN.

• iSCSI traffic can't be encrypted and so passes across the network in plain text.

• iSCSI security is limited to CHAP (Challenge Handshake Authentication Protocol), which isn't centralised and needs to be managed through the storage array and/or the VMware host. In large deployments this can be a significant management overhead.
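For reference, CHAP as defined in RFC 1994 proves knowledge of a shared secret without sending the secret itself: the responder returns MD5(identifier || secret || challenge). A minimal sketch follows; the variable names and values are illustrative, and this is not a full iSCSI login exchange.

```python
import hashlib
import os

# CHAP (RFC 1994): the responder proves it knows the shared secret by
# returning MD5(identifier || secret || challenge); the secret itself
# never crosses the wire.
def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

secret = b"per-initiator-secret"   # configured on both array and host
challenge = os.urandom(16)         # issued by the authenticator
identifier = 1                     # sequence number for this challenge

response = chap_response(identifier, secret, challenge)
# The authenticator computes the same digest and compares. Note that the
# iSCSI payload following a successful login is still unencrypted.
print(response.hex())
```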

Hyper-V is deployed either as part of Windows Server 2008 or as Windows Hyper-V Server 2008, both of which are Windows Server variants. Virtual guests therefore gain all the advantages of the underlying OS, including multipathing support. Individual virtual machines are stored as VHD files on LUNs mapped to drive letters or Windows mount points, making them easy to copy or clone.
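Because each guest is an ordinary file on an NTFS-formatted LUN, a cold clone can be as simple as a file copy. A minimal sketch with hypothetical paths (the guest should be shut down first; this is not Hyper-V's export feature):

```python
import shutil
from pathlib import Path

# Hypothetical paths: a LUN mounted at V:\ holding Hyper-V VHD files.
src = Path(r"V:\VMs\guest01.vhd")
dst = Path(r"V:\VMs\guest01-clone.vhd")

# Each guest is an ordinary file, so a cold clone can be a simple copy;
# shut the source guest down first for a consistent image.
shutil.copy2(src, dst)
print(f"cloned {src.name} -> {dst.name}")
```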

Summary

NFS storage is suitable only for VMware deployments and isn't supported by Hyper-V. Typically, NAS filers are cheaper to deploy than Fibre Channel arrays, and NFS provides better out-of-band access to guest files without the need to use the hypervisor. In the past, NFS was widely used for supporting data such as ISO installation files, but today it sees wider deployment where the array architecture supports the random I/O nature of virtual workloads.

CIFS storage is supported by Hyper-V but is probably best avoided in favour of iSCSI, even in test environments; Microsoft has now made its iSCSI Software Target freely available.

Block-based storage works well on both virtualisation platforms but may require extra hardware. Direct access to data is a problem with iSCSI/Fibre Channel/FCoE, making data cloning and backup more complex.

Overall, the choice must be based on requirements. There are pros and cons to both the file- and block-based approaches, and the two can coexist in the same infrastructure.
