Virtual Servers


DTI Data Recovery receives several requests for data recovery quotes every day. Often the quotes are self-explanatory, and we can offer an accurate solution and an upfront price for almost any recovery. There are times, however, when a more complex solution is necessary and additional information is needed to offer an accurate quote and estimate the likelihood of recovery. One of those instances is the deletion of data, any data. The possibility of recovery hinges on so many factors that a phone conversation is normally necessary to gather more information and confirm that a deleted VMware VMDK can be recovered.

Some of the most complex deleted-data situations involve virtual technology. Although this technology has been a real lifesaver for optimizing hardware resources, it is a real nightmare when it comes to unraveling the mystery of a deleted VMware VMDK. That said, I recently received a request for a deleted VMware VMDK recovery quote. The person inquiring could not be reached by phone, so I could not speak to them. The description of the problem was so vague that I decided to send a list of questions that would enable me to make an educated, informed assessment of the recovery as well as its pricing. The following are those questions and the reasons they were asked. The questions were designed for a VMware server.

1. What version of ESXi are you running on the server?
This is important inasmuch as some older versions of ESXi had file size limitations. In addition, there are utility functions in more recent versions that do not exist in older versions of the operating system. These version discrepancies can and do affect the possibility of recovery.
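
If shell or API access to the host is still available, the version is easy to confirm before anything is shipped off for recovery. Here is a minimal sketch using the pyVmomi library; the hostname and credentials are hypothetical placeholders.

```python
# Query an ESXi host's product version with pyVmomi (pip install pyvmomi).
# Host, user, and password are placeholders for illustration only.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
si = SmartConnect(host="esxi01.example.com", user="root",
                  pwd="secret", sslContext=ctx)
about = si.content.about
print(about.fullName)    # e.g. "VMware ESXi 6.7.0 build-XXXXXXX"
print(about.apiVersion)
Disconnect(si)
```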

Purpose-built virtual machine backup tools are widely available, but many shops ignore the technology until their application performance suffers.

Increasing your use of server virtualization is one of the easiest ways to expose an antiquated backup process. In other words:

If 20% of your server infrastructure is virtualized, you may be able to use any approach to virtual machine (VM) backups that you want, including agents inside each VM and a backup server running software that is a few years old and presumed to be "good enough."

But when you get to 50%, 75% or 90% virtualized, legacy backup will significantly hinder the performance of your highly-virtualized, converged or hyper-converged infrastructure.

The problem is twofold:

Old backup software: In many enterprises, the backup team intentionally stays a version or two behind, so that they can be assured the backup software is fully patched and known bugs have already been mitigated. Unfortunately, with some legacy backup software, two versions can mean three or more years behind, so it doesn't make use of modern APIs such as Microsoft VSS for Hyper-V or the VMware vStorage APIs for Data Protection (VADP). When those legacy approaches are applied to a highly virtualized environment, their mechanisms degrade the performance of production VMs and the underlying hosts.
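
On the Hyper-V side, one quick sanity check is whether the host's VSS integration is even registered. A minimal sketch, assuming a Windows host and an elevated prompt:

```python
# Check whether the Hyper-V VSS writer is registered on a Windows host.
# Uses the standard vssadmin utility; requires elevation to list writers.
import subprocess

result = subprocess.run(["vssadmin", "list", "writers"],
                        capture_output=True, text=True, check=True)
if "Microsoft Hyper-V VSS Writer" in result.stdout:
    print("Hyper-V VSS writer present; VSS-based VM backups are possible.")
else:
    print("Hyper-V VSS writer not found.")
```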

Backup technology for virtualized environments has become increasingly advanced. Many organizations have implemented backup applications that are specifically designed to efficiently back up data in a virtualized environment without disrupting application performance. In addition, some backup applications, such as Veeam, now allow data residing on a disk-based backup target to be used as a boot device to support instant VM recoveries.

Boot From Backup

Generically referred to as “recovery-in-place”, this feature gives administrators the option to point a VM at the backup data residing on a disk-based target (typically a backup appliance) so that a failed VM can be recovered more quickly. The idea is to use the backup data as a temporary boot area until a full restore can be completed onto a primary storage resource.
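
As a rough illustration of the mechanics, the sketch below registers a VM directly from a datastore exported by the backup appliance and notes the follow-up steps. It uses pyVmomi; the inventory navigation is simplified and all names (datastore, VM, paths) are hypothetical. `si` is a connected ServiceInstance as in the earlier version-check sketch.

```python
# Recovery-in-place sketch with pyVmomi: boot a failed VM straight from the
# backup target until a full restore to primary storage completes.
dc = si.content.rootFolder.childEntity[0]          # first datacenter (sketch)
cluster = dc.hostFolder.childEntity[0]             # first compute resource
host = cluster.host[0]                             # a host to run the VM
pool = cluster.resourcePool                        # its root resource pool

# The .vmx lives on a datastore exported by the backup appliance.
task = dc.vmFolder.RegisterVM_Task(
    path="[backup_appliance_ds] restoredvm/restoredvm.vmx",
    name="restoredvm-instant", asTemplate=False, pool=pool, host=host)
# After the task completes: power the VM on (PowerOnVM_Task), then Storage
# vMotion it back to primary storage while it continues to run.
```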

With cloud services becoming ubiquitous for server operations, it’s a natural progression for servers themselves to be backed up in the cloud.

It’s been estimated that server downtime can cost a company upwards of $50,000 per minute, so a server backup solution you can trust is imperative.

IBM has a very robust, efficient, and reliable solution to ensure that server downtime is kept to a minimum. Why IBM? They are an innovator in the field of disaster recovery, with more than 50 years of experience in business continuity and resiliency.

At the heart of the IBM Cloud Virtualized Server Recovery (VSR) system, which keeps up-to-the-second backup copies of your protected servers in the IBM Resiliency Cloud, are bootable disk images of your servers. If an outage occurs, your servers can be recovered from these images via a web-based management portal, and the process can be fully automated.

To back up Virtual Server, you can:

  • Back up by using software that supports the Volume Shadow Copy Service.
  • Manually back up its various configuration files and resource files by using standard file backup software; a minimal copy sketch follows this list. To back up a virtual machine in this manner, the virtual machine must be turned off.
  • Back up a running virtual machine by using live backup software on the guest operating system.
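
For the manual approach, the copy itself is ordinary file handling. A minimal sketch, assuming the VM is powered off; the paths are hypothetical, and the extensions reflect Virtual Server's layout (.vmc configuration, .vhd disks, .vsv saved state):

```python
# Manual file-level backup of a powered-off Virtual Server virtual machine.
# Source and destination paths are placeholders for illustration only.
import shutil
from pathlib import Path

src = Path(r"D:\Virtual Machines\vm01")
dst = Path(r"E:\Backups\vm01")
dst.mkdir(parents=True, exist_ok=True)

for f in src.iterdir():
    if f.suffix.lower() in {".vmc", ".vhd", ".vsv"}:  # config, disks, saved state
        shutil.copy2(f, dst / f.name)                 # copy2 preserves timestamps
        print("copied", f.name)
```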

Restoring Virtual Server involves reinstalling Virtual Server and copying the backed-up files into the appropriate locations in the file system.

Back up by using software that supports the Volume Shadow Copy Service

When you use backup software that works with the new Volume Shadow Copy Service writer for Virtual Server to back up your host operating system, you can back up Virtual Server and its running virtual machines without needing to install backup agents inside the guest operating system of the virtual machines.

Often overlooked, a VM's BIOS is more limited than a physical machine's, but it still allows you to adjust boot settings.

Virtual machines are logical representations that are designed to mimic physical hardware. Consequently, VMs have many of the same attributes as a physical machine, including things like network ports, memory, and virtualized CPU instances. One aspect of a virtual server that is often completely overlooked, however, is its BIOS.

In a way, this is completely understandable. After all, it's usually possible to get a virtualized workload up and running without ever touching the VM BIOS. Besides, it's easy to assume that virtual servers either don't have a BIOS or that they simply use a pass-through BIOS that leverages the physical server's BIOS. However, there are actually some BIOS settings that can be configured for VMs.

These settings depend on two main factors. The first of these factors is the hypervisor. Each hypervisor vendor chooses for itself the settings it wants to expose.

The second factor is the VM generation. Different generations of VMs interact with the hardware in different ways, and the hypervisor therefore exposes different settings based on the VM generation.
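
On Hyper-V, for example, the exposed settings split along exactly this line: Generation 1 VMs surface a legacy BIOS, while Generation 2 VMs surface UEFI firmware. A minimal sketch that shells out to the Hyper-V PowerShell cmdlets; the VM name is a hypothetical placeholder:

```python
# Inspect firmware/BIOS settings of a Hyper-V VM from Python via PowerShell.
# Requires the Hyper-V PowerShell module; 'demo-vm' is a placeholder name.
import subprocess

def ps(command: str) -> str:
    return subprocess.run(["powershell", "-NoProfile", "-Command", command],
                          capture_output=True, text=True).stdout

gen = ps("(Get-VM -Name 'demo-vm').Generation").strip()
if gen == "2":
    print(ps("Get-VMFirmware -VMName 'demo-vm'"))  # UEFI: boot order, Secure Boot
else:
    print(ps("Get-VMBios -VMName 'demo-vm'"))      # legacy BIOS: startup order
```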

Over the years, VMware has contributed much to server virtualization and made an impact on the IT industry. A great competitor, VMware has, in my view, made Microsoft a stronger and better IT solution provider. Both have been trying hard to help enterprise IT deliver more with less. And the good news is that competition and open dialogue benefit our customers and the IT industry tremendously. The two companies, in my view, have fundamentally different perspectives on addressing cloud computing challenges. Let me be clear: this blog post is not about feature parity. It presents my personal view on important considerations for assessing a cloud computing platform, and it is intended to help IT technical leadership and C-level decision makers look into the fundamental principles that will ultimately have a profound, long-term impact on the bottom line of adopting cloud computing. The presented criteria apply to Microsoft as much as to any other solution provider under consideration.

Virtualization vs. Fabric

In cloud computing, resources are presented for consumption via abstraction, without the need to reveal the underlying physical complexities. In the current state of cloud computing, one approach is to deliver consumable resources through a form of virtualization. In this way, a server is in fact a virtual machine (VM), an IP address space can in reality be logically defined through a virtual network layer, and a disk drive that appears as continuous storage space is in fact an aggregate of the storage provided by a bunch of disks, or JBOD. All cloud computing artifacts ultimately consist of resources categorized into three pools: compute, networking, and storage. The three resource pools are logically simple to understand. Compute is the ability to execute code and run instances. Networking is how instances and resources are connected or isolated. And storage is where the resources and instances are stored. These three resource pools, via server virtualization, network virtualization, and storage virtualization, collectively form an abstraction, the so-called fabric, as detailed in “Resource Pooling, Virtualization, Fabric, and Cloud.” Fabric signifies the ability to discover and manage datacenter resources. Sometimes we refer to the owner of the fabric, which is essentially a datacenter management solution, as the fabric controller, since it manages, and essentially owns, all the datacenter resources, physical or virtual.

Cloud computing is about providing and consuming resources on demand. It is about enabling consumption via the management of resources, which happen to be virtualized in this case. In cloud computing, we must go beyond virtualization and envision fabric as the architectural layer of abstraction. Fabric management needs to be architected into a solution as a whole, such that the translation between fabric management and the virtualization operations in each resource pool can be standardized, automated, and optimized.
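
To make the distinction concrete, here is a purely illustrative sketch, not any real product API, of a fabric controller that owns the three pools and exposes consumption without revealing the virtualization operations underneath. All class and method names are invented for this sketch:

```python
# Illustrative only: a toy fabric controller owning compute, network, and
# storage pools; consumers request capacity, not specific physical resources.
from dataclasses import dataclass, field

@dataclass
class FabricController:
    compute: list = field(default_factory=list)   # hypervisor hosts
    network: list = field(default_factory=list)   # virtual networks / IP spaces
    storage: list = field(default_factory=list)   # aggregated disks (JBOD, SAN)

    def provision_server(self, cpus: int, gb: int) -> dict:
        # The controller translates the request into server, network, and
        # storage virtualization operations; the consumer never sees them.
        return {"vm": f"{cpus}vcpu-{gb}gb", "net": self.network[0],
                "disk": f"{gb * 4}GB volume on {self.storage[0]}"}

fc = FabricController(compute=["host1"], network=["vnet-10.0.0.0/16"],
                      storage=["jbod1"])
print(fc.provision_server(cpus=4, gb=16))
```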

So, at an architectural level, look for a holistic approach to fabric management, i.e., a comprehensive view of how the three resource pools integrate with and complement one another. Let me recognize here that virtualization is very important, but fabric is critical.

Introduction

Businesses thinking about deploying new virtualization solutions would do well to begin by comparing the available features of different virtualization platforms before deciding which platform to implement. Microsoft's Hyper-V virtualization technology, which is built into the Windows Server operating system, together with the VMware platform and its line of products, represent the two most popular virtualization solutions used by enterprises today. Many features of the Hyper-V platform have close or near parallels in the VMware world, and likewise many VMware capabilities are mirrored almost exactly in the Hyper-V universe.

Unfortunately, the overlap between these two technologies is obscured to some degree by how features are named on each platform. If you were able, however, to translate the name of a Hyper-V (or VMware) feature into its most closely corresponding VMware (or Hyper-V) feature, you would gain some immunity from the ocean of spin that attends each of these virtualization platforms. The net effect would be to let you more rationally compare and assess the capabilities of the two platforms instead of being swayed and tossed to and fro by the waves of hype emanating from their marketing departments.

The purpose of this article is to do just that. In other words, to provide you with a way of translating Hyper-V terminology into VMware terminology and vice versa. Using this cross-reference will then enable you to determine which virtualization technology has the capabilities you need to solve your business problem.


Comparing terminology for virtual machines

At the heart of it, the function of both VMware and Hyper-V is to run virtual machines so you can virtualize workloads, desktops, applications and services. The first table below compares the terminology that VMware uses for describing its virtual machines with that used by Microsoft for a similar purpose.

| VMware terminology | Microsoft terminology |
| --- | --- |
| Service Console | Parent Partition |
| Templates | Templates |
| Standard/Distributed Switch | Virtual Switch |
| Standard/Distributed Switch | VM IDE Boot |
| Hot Add Disks & Storage | Hot Add Disks & Storage |
| Distributed Resource Scheduler (DRS) | Performance and Resource Optimization (PRO) |
| Distributed Power Management | Core Parking & Dynamic Optimization |
| VMware Tools | Integration Component |
| Converter | SCVMM P2V / V2V |

Table 1
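
For readers who like to keep the mapping at hand, the same cross-reference can be expressed as a simple lookup. This just restates Table 1 (the ambiguous duplicate switch row is omitted):

```python
# Table 1 as a lookup: VMware term -> closest Hyper-V / Microsoft term.
VMWARE_TO_MICROSOFT = {
    "Service Console": "Parent Partition",
    "Templates": "Templates",
    "Standard/Distributed Switch": "Virtual Switch",
    "Hot Add Disks & Storage": "Hot Add Disks & Storage",
    "Distributed Resource Scheduler (DRS)":
        "Performance and Resource Optimization (PRO)",
    "Distributed Power Management": "Core Parking & Dynamic Optimization",
    "VMware Tools": "Integration Component",
    "Converter": "SCVMM P2V / V2V",
}
print(VMWARE_TO_MICROSOFT["VMware Tools"])  # Integration Component
```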

In many customer discussions, one question comes up often: how does the Microsoft virtualization stack (Hyper-V and System Center 2012 R2) compare with the VMware virtualization stack (vSphere 5.5 Enterprise Plus + vCenter Server 5.5)? I have tried to focus on a real-world perspective based on my experience implementing both solutions in the field throughout my career. In this article, I’ll provide a summarized comparison of the feature sets provided by each of these latest releases, using the publicly available information from both Microsoft and VMware as of this article’s publication date.

How to compare?

Rather than simply comparing feature-by-feature using just simple check-marks in each category, I’ll try to provide as much detail as possible for you to intelligently compare each area. As I’m sure you’ve heard before, sometimes the “devil is in the details”.

For each comparison area, I’ll rate the related capabilities with the following color-coded rankings:

  • Supported – Fully supported without any additional products or licenses
  • Limited Support – Significant limitations when using related feature, or limitations in comparison to the competing solution represented
  • Not Supported – Not supported at all, or supported only with additional product licensing costs

In this article, I’ve organized the comparison into the following sections:

  • Licensing
  • Virtualization Scalability
  • VM Portability, High Availability and Disaster Recovery
  • Storage
  • Networking
  • Guest Operating Systems

Are you keeping score at home?

Of course, not all of the features and capabilities presented in the summary below may be important to you. As you review the comparison summary of each section, just make a note of the particular features that you're likely to use in your environment. When you're done, tally up the Green ratings in each column to determine which platform achieves a better score in meeting the needs of your organization.
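
The tallying itself is trivial. A sketch, assuming you have noted one rating per feature you care about; the sample ratings below are placeholders, not this article's verdicts:

```python
# Tally 'Supported' (green) ratings per platform for the features you marked.
ratings = {
    "Hyper-V": ["Supported", "Limited Support", "Supported"],
    "vSphere": ["Supported", "Supported", "Not Supported"],
}
scores = {platform: r.count("Supported") for platform, r in ratings.items()}
print(scores)  # e.g. {'Hyper-V': 2, 'vSphere': 2}
```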

Here we go…

If there's one technology that can greatly improve computing environments of any size, it's virtualization. By using a single physical server to run many virtual servers, you can decrease operational costs and get far more bang for your buck. Whether your company is a 2-server or 2000-server shop, you can benefit from server virtualization in a variety of ways. The best part? You can do it cheaply and easily.

The reasons to virtualize even a small infrastructure come down to ease of administration and cost reductions. Cost reductions come from cutting down the number of physical servers, thus reducing power and cooling requirements, but they also come in the form of greatly reduced expansion costs. Rather than having to purchase new hardware to support a new business application, all you need to do is add a new virtual server.

If your business has only a single server, virtualization isn't likely to buy you much, but if you have more than two servers or if you plan on expanding anytime soon, virtualization can likely make a difference.
