Data Recovery Expert

Viktor S., Ph.D. (Electrical/Computer Engineering), joined DataRecoup, the international data recovery corporation, in 2012. He was promoted to Engineering Senior Manager and then, in 2014, to his current position as C.I.O. of DataRecoup. He is responsible for managing critical, high-priority RAID data recovery cases and for applying his comprehensive expertise to database data retrieval, as well as for planning and implementing SEO/SEM and other internet-based marketing strategies. Currently, Viktor S., Ph.D., is focused on further developing and expanding DataRecoup’s major internet marketing campaign for its already successful proprietary software application “Data Recovery for Windows,” an application he developed himself.

Virtual machine backups remedy outdated legacy backups

Virtual machine backups are widely available, but many shops are ignoring the technology until their application performance suffers.

Increasing your use of server virtualization is one of the quickest ways to expose an antiquated backup process. In other words:

If 20% of your server infrastructure is virtualized, you may be able to use any approach to virtual machine (VM) backups that you want, including agents inside each VM and a backup server running software that is a few years old and presumed to be "good enough."

But when you get to 50%, 75% or 90% virtualized, legacy backup will significantly hinder the performance of your highly virtualized, converged or hyper-converged infrastructure.

The problem is twofold:

Old backup software: In many enterprises, the backup team intentionally stays a version or two behind so that the backup software is fully patched and known bugs have already been mitigated. Unfortunately, with some legacy backup software, two versions can mean three or more years behind, so the software doesn't make use of modern APIs such as Microsoft VSS for Hyper-V or the VMware vStorage APIs for Data Protection (VADP). When those legacy approaches are applied to a highly virtualized environment, their mechanisms decrease the performance of production VMs and the underlying hosts.

Data protection techniques for object storage systems

Techniques such as replication and erasure coding protect data on object storage systems and other high-capacity primary storage systems when traditional backup is difficult.

Object storage systems are designed to cost-effectively store a lot of data for a very long period of time. However, that makes traditional backup difficult, if not impossible. To ensure data is protected from both disk failure and corruption, vendors use replication or erasure coding (or a combination of the two).

Even if you are not considering object storage, understanding the differences between these data protection techniques is important since many primary storage arrays are beginning to use them. We explore the pros and cons of each approach so you can determine which method of data protection is best for your data center.

Scale-out basics

Most object storage systems, as well as converged systems, rely on scale-out storage architectures. These architectures are built around a cluster of servers that provide storage capacity and performance. Each time another node is added to the cluster, the performance and capacity of the overall cluster increase.

These systems require redundancy across multiple storage nodes so that if one node fails, data can still be accessed. Typical RAID levels such as RAID 5 and RAID 6 are particularly ill-suited for this multi-node data distribution because of their slow rebuild times.

Replication pros and cons

Replication was the most prevalent form of data protection in early object storage systems and is becoming a common data protection technique in converged infrastructures, which are also node-based.

In this protection scheme, each unique object is copied a given number of times to a specified number of nodes, where the number of copies and how they're distributed (how many nodes receive a copy) are set manually or by policy. Many of these products also have the ability to control the location of the nodes that will receive the copies. They can be in different racks, different rows and, of course, different data centers.
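
As a rough illustration of that kind of placement policy, the sketch below (written in Python, with entirely hypothetical node and rack names rather than any particular product's API) chooses a requested number of replica targets while spreading them across as many distinct racks as possible:

    import random
    from collections import defaultdict

    # Hypothetical cluster inventory: node name -> rack it lives in.
    NODES = {
        "node-01": "rack-a", "node-02": "rack-a",
        "node-03": "rack-b", "node-04": "rack-b",
        "node-05": "rack-c", "node-06": "rack-c",
    }

    def pick_replica_targets(copies):
        """Choose `copies` nodes, spreading replicas across as many racks as possible."""
        by_rack = defaultdict(list)
        for node, rack in NODES.items():
            by_rack[rack].append(node)
        for nodes in by_rack.values():
            random.shuffle(nodes)

        targets = []
        # Round-robin over racks: take at most one node from each rack per pass.
        while len(targets) < copies:
            progressed = False
            for nodes in by_rack.values():
                if nodes and len(targets) < copies:
                    targets.append(nodes.pop())
                    progressed = True
            if not progressed:  # fewer nodes available than requested copies
                break
        return targets

    print(pick_replica_targets(3))  # e.g. ['node-02', 'node-04', 'node-05']

A real system would also weigh free capacity and node health, but a rack-aware round robin like this captures the basic idea of policy-driven copy placement.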

The advantage of replication is that it is a relatively lightweight process, in that no complex calculations have to be made (compared with erasure coding). Also, it creates fully usable, standalone copies that are not dependent on any other data set for access. In converged or hyperconverged architectures, replication also allows for better virtual machine performance since all data can be served up locally.

The obvious downside to replication is that full, complete copies are made, and each redundant copy consumes that much more storage capacity. For smaller environments, this can be a minor detail. For environments with multiple petabytes of information, it can be a real problem. For example, a 5 PB environment could require 15 PB of total capacity, assuming a relatively common three-copy strategy.
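
To make the capacity math concrete, the short sketch below compares the raw capacity needed for the same 5 PB of usable data under three-copy replication and under an illustrative 10-data/6-parity erasure-coding layout (the 10+6 split is an assumption for this example, not any vendor's default):

    def replication_raw_pb(usable_pb, copies):
        """Raw capacity needed when every object is stored `copies` times."""
        return usable_pb * copies

    def erasure_coding_raw_pb(usable_pb, data_frags, parity_frags):
        """Raw capacity needed when each object is split into data + parity fragments."""
        return usable_pb * (data_frags + parity_frags) / data_frags

    usable = 5.0  # PB of unique data, as in the example above
    print(replication_raw_pb(usable, 3))          # 15.0 PB raw
    print(erasure_coding_raw_pb(usable, 10, 6))   # 8.0 PB raw

The same data set that triples raw capacity under replication carries only 60% overhead in this erasure-coded example, which is why capacity-sensitive object stores lean toward erasure coding.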

Cloud disaster recovery offers data protection at low cost

Cloud disaster recovery and backup offer high availability at low cost. There are plenty of cloud backup options -- and some snags -- to consider.

Cloud disaster recovery and backup options have become more common, and some users say they provide a higher level of protection than traditional solutions -- at lower cost.

Disaster recovery (DR) means different things to different people. One IT pro's definition of DR might be simple file backup, while another might be referring to full standby server farms ready to take over production duties at a moment's notice.

At its most basic level, disaster recovery means storing backup data off-site, which increasingly means the public cloud.

For Prellwitz Chilinski Associates, Inc. (PCA), an architecture firm in Cambridge, Mass., nightly backups are the cornerstone of its disaster recovery plan. The data being backed up are large files architects generate using tools from vendors such as Autodesk Inc. and Adobe Systems Inc.

For many years, PCA replicated its data between a pair of storage appliances provided by a local integrator -- one on-site, the other off-site.

"It was a great solution -- we had the local backups for quick access, plus the safety net of off-site backup," said Dan Carp, systems administrator for the firm.

Then, about two years ago, Carp began exploring whether cloud backup could be more cost-effective. Using the replicated data appliances, PCA paid about $14,000 annually to protect 700 GB of data. After a few fits and starts with various cloud data protection providers, it settled on Zetta.net, which stores almost twice the data it had on the data appliances, for about half as much money.

Data protection methods: Mounting backups vs. recovering from backups

Mounting a virtual machine image can save time, but there's a penalty to pay for the convenience. We list the questions to ask when deciding which data protection method to employ.

Backup software today is not the file backup software of yesteryear. It has evolved significantly to back up virtual machines (VMs); containers; cloud applications; edge devices such as laptops, tablets and smartphones; websites and more. But one of the data protection methods generating a lot of excitement is the ability of backup software products to mount a virtual or physical machine image directly on the backup or media server as a VM and put it into production. This capability is a game-changer when it comes to system-level recoveries because it enables server systems to be brought back in minutes -- a system no longer has to be recovered on new hardware and then put back into production.

So when does it make sense to mount a backup versus recover from a backup?

Challenges with mounting backups

Mounting backups and running them on media or backup servers decreases recovery time objectives (RTOs) by orders of magnitude, but the technology is subject to real limitations. For example, backups mounted on a media or backup server will often run in a degraded state, although exceptions exist when there is a separate physical host for those mounts. Media or backup servers are sized for backups, not application production and ongoing backups, so organizations should ask and answer the following questions while planning their data protection methods:

  • How often do physical hosts fail?
  • Will the budget allow for overprovisioning media or backup servers?
  • If not, can applications, application owners or users tolerate running in a degraded mode if there is a physical host failure?
  • How much degradation can users tolerate?

When answering these questions, many data protection professionals will envision a single VM running concurrently with the backups on that media or backup server. But that's an unrealistic scenario today. Should a physical host die, far more than a single VM will have to be brought up on media or backup servers. Additionally, most media or backup servers back up more than one hypervisor-equipped host plus other nonhypervisor-equipped hosts. Should there be a multiple-host outage, each media or backup server will likely have to mount several dozen VMs concurrently.
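
That sizing concern can be sanity-checked with back-of-the-envelope math. The sketch below uses purely hypothetical resource figures (8 spare cores and 64 GB of spare RAM on the media server, 1 vCPU and 4 GB of RAM per recovered VM) to show how quickly the headroom runs out:

    def max_concurrent_vms(spare_cores, spare_ram_gb, cores_per_vm, ram_gb_per_vm):
        """How many mounted VMs fit in the headroom left after normal backup duties."""
        return int(min(spare_cores / cores_per_vm, spare_ram_gb / ram_gb_per_vm))

    # Hypothetical media server with 8 cores and 64 GB of RAM to spare.
    print(max_concurrent_vms(8, 64, 1, 4))  # -> 8 VMs, well short of "several dozen"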

Q-Cloud Protect for AWS expands Quantum's cloud services

Quantum preannounced its cloud disaster recovery and cloud backup services last year. Q-Cloud Protect is now available, along with Q-Cloud Archive and Q-Cloud Vault.

Quantum has made its Q-Cloud backup for Amazon Web Services available, nearly a year after officially announcing it and more than six months behind its original schedule.

Quantum Q-Cloud Protect for AWS is one of three Q-Cloud services the vendor preannounced in January 2015. The others were Q-Cloud Archive and Q-Cloud Vault.

Protect is a companion to Quantum's DXi disk backup library family, using DXi's data deduplication and replication software. Archive and Vault are part of Quantum's StorNext file management platform. Archive, which uses Amazon Simple Storage Service to store data that needs to be accessed occasionally, became available last spring. Vault, which puts rarely accessed cold data in Amazon Glacier, followed in November 2015.

Q-Cloud Protect was originally scheduled to launch around June 2015, but tweaking the deduplication to move data off to the cloud proved trickier than expected.

Solid State Drives and Data Recovery

If you followed this year’s Consumer Electronics Show, you might have learned that solid-state drives (SSDs) are the latest hot topic taking the data storage industry by storm. No, hard drives aren’t going away any time soon, but the increase in production and decrease in cost of SSDs is definitely bringing them to the forefront. SSDs are now creeping into the hands of consumers as notebook manufacturers include them as options in their top-of-the-line notebooks.

SSDs are great for several reasons:

  • Low voltage = Less power consumption = Less heat = Longer battery life
  • No noise
  • No moving parts = Less prone to failure
  • Very fast read speed – 20 to 33X faster than hard disk drives

SMART Testing Water Damaged Hard Drives

We received a particularly interesting call from a prospective client a few weeks ago. They are a hard drive reseller that had sold about 1,200 brand new 2.5-inch drives to a customer. The customer put about 200 of these hard drives into use, where they started to fail rapidly, much sooner than a ‘new’ hard drive should. The client inspected several of the drives and found an abnormality with almost all of them: there appeared to be corrosion under the drive’s printed circuit board (PCB).

The hard drives were purchased as new O.E.M. overstock sold by a brokerage reseller. There was no tampering with the hard drives, and they remained in their factory-sealed anti-static bags until opened by the customer. The reseller was looking for a theory explaining the cause of the corrosion and wanted to employ a company to document the damage for insurance compensation.

After some hypothesizing, we were contracted to catalogue all of the hard drives, evaluate their Self-Monitoring, Analysis and Reporting Technology (SMART) logs, inspect the drives for damage and document any damage found. This required us to run our SMART test on each hard drive, which checks and records the number of power-on hours (POH) a drive has accumulated. Next we removed the PCB, photographed any abnormalities and put the PCB back on the drive.
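
For readers who want to check the same attribute on their own drives, here is a minimal sketch; it assumes a Linux host with the open source smartmontools package installed (version 7.0 or later for JSON output) and a drive visible at a path such as /dev/sda:

    import json
    import subprocess

    def power_on_hours(device):
        """Read the drive's power-on hours from smartctl's JSON output."""
        result = subprocess.run(
            ["smartctl", "-A", "--json", device],
            capture_output=True, text=True, check=True,
        )
        data = json.loads(result.stdout)
        # smartctl reports power_on_time.hours for both ATA and NVMe devices.
        return data.get("power_on_time", {}).get("hours")

    print(power_on_hours("/dev/sda"))  # e.g. a low number for a genuinely new drive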

Deleted VMware VMDK – What should you do?

DTI Data Recovery receives several requests for data recovery quotes each and every day. Many times the quotes are self-explanatory, and we can offer an accurate solution as well as an upfront price for almost any recovery. That being said, there are times when a more complex solution is necessary and additional information is needed in order to offer an accurate quote and estimate the possibility of recovery. One of these instances is the deletion of data, any data. The possibility of recovery hinges on so many factors that a phone conversation is normally necessary in order to gather more information and make sure that the deleted VMware VMDK can be recovered.

Some of the most complex situations when dealing with deleted data involve virtual technology. Although this technology has been a real lifesaver when it comes to optimizing hardware resources, it is a real nightmare when it comes to unraveling the mystery of a deleted VMware VMDK. That being said, I received a request for a deleted VMware VMDK recovery quote recently. The person inquiring could not be contacted by phone, so I could not speak with them. The description of the problem was so vague that I decided to send a list of questions that would enable me to make an educated and informed assessment of the recovery as well as the pricing for that recovery. The following are those questions and the reasons why they were asked. The questions were designed for a VMware server.

1. What version of ESXi are you running on the server?
This is important inasmuch as some of the older versions of ESXi had file size limitations. In addition, there are some utility functions in more recent versions that do not exist in older versions of the operating system. These version discrepancies can and do affect the possibility of recovery.

2. Was the VM set up with thick or thin provisioning?
When a VMDK file is deleted, the storage map for that particular file is destroyed. There are times when a low-level forensic scan is necessary to try to piece together the file. If the original VMDK is thin provisioned, the file only consumes space as data is written, growing on an as-needed basis. Thin provisioning can be difficult to recover from if the actual block map for the file has been destroyed by the deletion, whereas thick provisioning allocates the full size of the VMDK up front, so the file can be treated like any other operating system partition.
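
One quick way to see whether a copy of a flat VMDK is thin (sparse) is to compare its logical size with the blocks actually allocated on disk. The sketch below is only an illustration of that idea; it assumes the file is accessible from a Linux host, for example on an NFS datastore or after a sparse-aware copy off the ESXi box, and the path shown is hypothetical:

    import os

    def looks_thin_provisioned(path):
        """A sparse, thin-style file has fewer allocated blocks than its logical size implies."""
        st = os.stat(path)
        allocated_bytes = st.st_blocks * 512  # st_blocks is counted in 512-byte units
        return allocated_bytes < st.st_size

    print(looks_thin_provisioned("/mnt/datastore/vm01/vm01-flat.vmdk"))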

The ‘R’ in RAID stands for what?

Hint: R does not stand for Re-build or Re-initialize, or even Re-Format or Re-arrange.

All joking aside, most people in the IT field understand what a RAID array is and how it can help protect your data if used correctly, but many people do not understand what to do in the event of a system or drive failure.

Data loss inevitable, Brits say

The majority of workers in the UK agree that the loss or theft of their digital data is inevitable at some point.

This is according to a survey of 2,000 Brits conducted by Citrix, which found 71 per cent of respondents have accepted the fact they will fall victim to this problem sooner or later.

Younger individuals were found to be more alert to the risks, with a third of 16 to 25-year-olds saying they felt more vulnerable to attacks than in the past, compared with just 15 per cent of over-65s.

However, despite this, a large number of people are still relying on outdated solutions when it comes to backing up their most valuable data.

Nearly a third of respondents (30 per cent) said they still used USB sticks to back up key information, while just nine per cent have turned to a cloud-based service to do this.

Chris Mayers, chief security architect at Citrix, said: "Many [workers] are still reliant on dated practices such as using USB sticks to store and protect their information, when more advanced and robust measures are available."

Reference: http://www.krollontrack.co.uk/company/press-room/data-recovery-news/data-loss-inevitable,-brits-say531.aspx
