Data Recovery


Legacy applications are often hardware-dependent, so when we recover the data we must provide a bootable, cloned duplicate of the original hard drive.

Recovering a bootable hard drive

Data loss is a common problem for many CT applications, and with medical imaging systems now storing upwards of a million images, this can prove disastrous for any nuclear medicine department. This is exactly what happened when Philips Medical Solutions approached Data Recovery Specialists with a critical predicament: St Vincent's University Hospital in Dublin had suffered a failure with its CT scanner.

Working in partnership with Data Recovery Specialists, Philips engineers responded immediately, narrowing the problem down to the hard disk drive. The drive was part of a Pegasus Workstation supporting the scanner, which stored the operating system, application and images. Brendan Cummins of Philips Medical Solutions stated: “Problems arose because of the age of the system, complicated by local patches that were not supported by Philips. The customer wanted all these implemented, so the only solution was a full data recovery.”

CHKDSK is a great tool for checking the status of your disk, but beware if you are using it to attempt a DIY data recovery!

CHKDSK data recovery...

CHKDSK, short for ‘check disk’, is a Windows utility which verifies the file system integrity of a volume on your hard drive, fixing logical errors and repairing bad sectors. CHKDSK will search for errors but will only fix them if instructed to do so by the user. In Windows 7 there have been reported problems whereby the CHKDSK /R command can cause a system crash, although we have not been able to replicate this at Data Recovery Specialists. But is CHKDSK good enough to test a failed hard drive, and should you use it?

If the file system has become corrupted, there is a chance that CHKDSK may recover your lost data. There are options available to ‘automatically fix file system errors’ and ‘scan for and attempt the recovery of bad sectors’. However, CHKDSK can only run if it is the only application using that hard disk drive, and you may have to ‘force a dismount’, so make sure nothing else is inadvertently using the drive. CHKDSK cannot repair the system volume while Windows is running, so the check must run before Windows is loaded; this can be forced by scheduling the disk check and restarting the system. On reboot, CHKDSK will start verifying files before Windows loads. To see the results of a scheduled CHKDSK, open the Event Viewer by clicking ‘Start’, then ‘Run’, entering ‘eventvwr’ and clicking ‘OK’. Look for ‘Wininit’ under Source in the Windows Application logs.
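As an illustration – assuming the system volume is C: – the check can be run from an elevated Command Prompt:

    chkdsk C: /F    (fix file system errors)
    chkdsk C: /R    (locate bad sectors and recover readable information; implies /F)

If the volume is in use, CHKDSK will report that it cannot lock the drive and offer to schedule the check for the next restart; answer ‘Y’ and reboot.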

Even opening the chassis of a hard drive can cause misalignment of the heads. Make sure you engage the most skilled data recovery technicians!

One attempt at data recovery…

Depending on the type of problem with your hard drive, you may well have only one attempt at data recovery, so make sure you choose the right company! If your hard drive has failed mechanically, a skilled professional will have to rebuild the disk using donor parts. Data recovery after a logical failure, by contrast, can often be attempted many times without affecting the media at all, because the technician does not have to open the chassis for a mechanical or electronic repair.

Hard drives are precision instruments and are not designed to be disassembled. Similarly, donor parts are often incompatible. To provide a quote for a data recovery and a file listing of recoverable files, our experts will already have rebuilt the hard drive and recovered the data. Where quotes are declined, we will always return the media to its original condition and return it to the client. Yet the mere fact that the drive has been disassembled means a subsequent attempt at recovery will be more complicated.

Many companies will face the situation whereby their IT systems fail and they need to act fast in order to recover any lost data and get back up and running as quickly as possible.

However, in some cases employees might be tempted to resort to DIY data recovery practices. This is invariably a bad idea and could lead to individuals doing more harm than good in the long run.

This is the view of Richard Cuthbertson, head technician at Xytron, who argued that all disk drives are assembled in clean room environments and to exacting tolerances. It can therefore be extremely risky to try to operate on faulty or unresponsive IT infrastructure if individuals do not have the expertise or training to do so without causing further headaches down the line.

Tape back-up has traditionally been viewed as a cost-effective and reliable platform for storing corporate data off-line, safely and securely.

However, the huge amount of legacy data that companies now need to store on tape, together with legal, regulatory and governance requirements to produce data in an accurate and timely fashion, mean that companies are looking for different approaches to managing and restoring data from tape back-ups.

When we discuss the issue with IT administrators, who are ultimately responsible for identifying and producing data required by the business when responding to compliance, legal or regulatory enquiries, they say their biggest challenge is the inability to identify the specific location of the records required.

We undertook a global study with 720 IT administrators earlier this year to understand these challenges in more detail. We found that almost a third (30%) said they do not have clear insight into what information is stored within their archives. Since most organisations are required by law to keep and maintain access to regulated data for a designated period of time, this is a potentially massive problem.

The need for reliable tape restores

We recently worked with a Spanish IT services provider that needed to ensure access to legacy backup tapes of a new end-customer in the insurance services business. The provider needed to guarantee reliable restores from legacy back-up tapes without the costs of migration or the maintenance of retired infrastructures.

The end-customer needed access to data held on a large number of 3592 and 3592/JA tapes over a period of five years, due to strict data retention laws and general good governance. The backup tapes had been created using Tivoli Storage Manager.

Understandably, neither the service provider nor the insurance business wanted to incur the costs of maintaining the Tivoli Storage Manager environment for a five-year period for infrequent, ad hoc backup tape restore requests.

Together with the IT service provider, we engineered a cost-effective and efficient solution to allow access to the end-customer’s backup tapes.

When it comes to their channel partners' ability to deliver backup as a service, backup vendors have plenty of misconceptions.

Having spent 25 years in data protection, with several early years in channel and field-sales roles, I've noticed three common things that backup vendors don't seem to realize about channel partners.

Not every VAR/SI should become a backup service provider

Some vendors assume that they can help every one of their partners become a backup as a service (BaaS) provider. The partners know the products, so with a little extra marketing for "BaaS in a box," they can presumably jump on the cloud bandwagon. There are two problems with this assumption:

  • Just because a reseller is outstanding at technology deployment, it does not necessarily mean that it can or should run whole infrastructures as your "data of last resort." Operational management isn't the same as deployment or integration expertise -- and not all resellers maintain enough staff to manage an "as a service" set of platforms adequately.
  • Not every reseller is effective with the marketing and business-development model of a cloud-based business. Sure, they could offer services to their existing customers who are considering "the cloud," but they could also resell those services with far less effort, unless it really is core to their business model.

What most successful VARs/SIs really offer is "expertise" combined with relationships and situational awareness of customer environments. After all, they probably installed most of the IT systems and will assuredly be the first called when something breaks.

But, when ESG asked IT decision-makers who they wanted to purchase backup as a service from, local VARs/SIs were near the bottom of the list, as were telco providers. Telco providers typically do know how to run an "as a service" infrastructure and market to a cloud consumer, but telcos don't know customers' IT environments, nor do most have the depth to help during an actual disaster, other than ensuring the BaaS platform is available for data restores upon request.

In many cases, for a VAR/SI to successfully enable cloud services, it should partner with a telco or MSP that understands backup as a service and wants to work with local resellers. The reseller stays involved with its customer (bringing expertise and environmental experience), the MSP/telco delivers a reliable service, and the customer benefits from cloud-based data protection. In other words, the VAR/SI should be involved in the selection, deployment and management of BaaS, but not the actual delivery of the service.

Resellers are not always a good proxy for vendors looking for customer opinions

Vendors assume that any VAR/SI with 50 customers must have nearly 50 times (or even just 10 times) the insight into what customers are looking for. Some vendors will solicit their partners as proxies for what customers want in sales initiatives, marketing messages or even product development. Yes, partners definitely have insights and can see things at a macro level, but they are biased in two distinct ways.

First, partners often have a long history with their primary vendors, which can affect the objectivity of their feedback. And second, partners are looking for profitability, which means that the products they want to sell may not (in many ways) resemble what customers want to buy.

Partners have unique insights that absolutely have to be listened to and considered as vendors try to outperform in the congested data protection marketplace, but customers' viewpoints are unique and are very difficult to quantify. If the vendor's favorite long-time partner is answering on its customers' behalf, it won't provide any insight into the opinions of prospective customers who aren't currently aligned to that partner's services.

Techniques such as replication and erasure coding protect data on object storage systems and other high-capacity primary storage systems when traditional backup is difficult.

Object storage systems are designed to cost-effectively store a lot of data for a very long period of time. However, that also makes traditional backup difficult, if not impossible. To ensure data is protected from both disk failure and corruption, vendors use replication or erasure coding (or a combination of the two).

Even if you are not considering object storage, understanding the differences between these data protection techniques is important since many primary storage arrays are beginning to use them. We explore the pros and cons of each approach so you can determine which method of data protection is best for your data center.

Scale-out basics

Most object storage systems, as well as converged systems, rely on scale-out storage architectures. These architectures are built around a cluster of servers that provide storage capacity and performance. Each time another node is added to the cluster, the performance and capacity of the overall cluster is increased.

These systems require redundancy across multiple storage nodes so that if one node fails, data can still be accessed. Typical RAID levels such as RAID 5 and RAID 6 are particularly ill-suited for this multi-node data distribution because of their slow rebuild times.
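As a rough illustration of why rebuild time matters (the figures here are assumptions, not vendor specifications): rebuilding a single 10 TB drive at a sustained write rate of 150 MB/s takes about 10,000,000 MB ÷ 150 MB/s ≈ 66,000 seconds, or around 18 hours – and usually longer under production load. In a large multi-node cluster that is an unacceptably long window of reduced redundancy, which is why these systems distribute protection across nodes instead.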

Replication pros and cons

Replication was the most prevalent form of data protection in early object storage systems and is becoming a common data protection technique in converged infrastructures, which are also node-based.

In this protection scheme, each unique object is copied a given number of times to a specified number of nodes, where the number of copies and how they're distributed (how many nodes receive a copy) is set manually or by policy. Many of these products also have the ability to control the location of the nodes that will receive the copies. They can be in different racks, different rows and, of course, different data centers.
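As a loose illustration of such a placement policy, here is a minimal Python sketch that picks one node per rack so that no two copies share a rack (the cluster layout, names and policy below are invented for the example; real products implement far richer logic):

    import random

    def place_replicas(nodes_by_rack, copies):
        # nodes_by_rack maps a rack id to the list of node ids in that rack.
        if copies > len(nodes_by_rack):
            raise ValueError("not enough racks for rack-level fault isolation")
        racks = random.sample(list(nodes_by_rack), copies)  # distinct racks
        return [random.choice(nodes_by_rack[rack]) for rack in racks]

    cluster = {"rack-1": ["n1", "n2"], "rack-2": ["n3", "n4"], "rack-3": ["n5"]}
    print(place_replicas(cluster, 3))  # e.g. ['n2', 'n3', 'n5']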

The advantage of replication is that it is a relatively lightweight process, in that no complex calculations have to be made (compared with erasure coding). Also, it creates fully usable, standalone copies that are not dependent on any other data set for access. In converged or hyperconverged architectures, replication also allows for better virtual machine performance since all data can be served up locally.

The obvious downside to replication is that full, complete copies are made, and each redundant copy consumes that much more storage capacity. For smaller environments, this can be a minor detail. For environments with multiple petabytes of information, it can be a real problem. For example, a 5 PB environment could require 15 PB of total capacity, assuming a relatively common three-copy strategy.
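To make the capacity arithmetic concrete, here is a minimal Python sketch comparing the raw capacity consumed by three-way replication with that of an erasure coding scheme (the 10+4 split below is an assumed, though common, configuration):

    data_pb = 5                          # usable data, in petabytes

    # Replication: every object is stored 'copies' times in full.
    copies = 3
    replication_raw = data_pb * copies   # 15 PB, matching the example above

    # Erasure coding: data is split into k data fragments plus m parity
    # fragments, so raw usage grows by only (k + m) / k.
    k, m = 10, 4
    erasure_raw = data_pb * (k + m) / k  # 7 PB

    print(replication_raw, erasure_raw)  # 15 PB versus 7.0 PB of raw capacity

The trade-off, as noted above, is that erasure coding buys this efficiency with extra computation on every read and write.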

Mounting a virtual machine image can save time, but there's a penalty to pay for the convenience. We list the questions to ask when deciding which data protection method to employ.

Backup software today is not the file backup software of yesteryear. It has evolved significantly to back up virtual machines (VMs); containers; cloud applications; edge devices such as laptops, tablets and smartphones; websites and more. But one of the data protection methods generating a lot of excitement is the ability of backup software products to mount a virtual or physical machine image directly on the backup or media server as a VM and put it into production. This capability is a game-changer when it comes to system-level recoveries because it enables server systems to be brought back in minutes -- a system no longer has to be recovered on new hardware and then put back into production.
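As a rough worked example (the sizes and throughput are assumptions for illustration): restoring a 2 TB server image at an effective 200 MB/s takes about 2,000,000 MB ÷ 200 MB/s = 10,000 seconds – close to three hours before the system is usable again – whereas mounting the same image as a VM on the backup server can make it available in minutes.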

So when does it make sense to mount a backup versus recover from a backup?

Challenges with mounting backups

Mounting backups and running them on media or backup servers decreases recovery time objectives (RTOs) by orders of magnitude, but the technology is subject to real limitations. For example, backups mounted on a media or backup server will often run in a degraded state, although exceptions exist when there is a separate physical host for those mounts. Media or backup servers are sized for backups, not application production and ongoing backups, so organizations should ask and answer the following questions while planning their data protection methods:

  • How often do physical hosts fail?
  • Will the budget allow for overprovisioning media or backup servers?
  • If not, can applications, application owners or users tolerate running in a degraded mode if there is a physical host failure?
  • How much degradation can users tolerate?

When answering these questions, many data protection professionals will envision a single VM running concurrently with the backups on that media or backup server. But that's an unrealistic scenario today. Should a physical host die, there will be far more than a single VM that will have to be brought up on media or backup servers. Additionally, most media or backup servers back up more than one hypervisor-equipped host plus other nonhypervisor-equipped hosts. Should there be a multiple host outage, each media or backup server will likely have to mount several dozen VMs concurrently.

The majority of workers in the UK agree that the loss or theft of their digital data is inevitable at some point.

This is according to a survey of 2,000 Brits conducted by Citrix, which found 71 per cent of respondents have accepted the fact they will fall victim to this problem sooner or later.

Younger individuals were found to be more alert to the risks, with a third of 16 to 25-year-olds saying they felt more vulnerable to attacks than in the past, compared with just 15 per cent of over-65s.

However, despite this, a large number of people are still relying on outdated solutions when it comes to backing up their most valuable data.

Nearly a third of respondents (30 per cent) said they still used USB sticks to back up key information, while just nine per cent have turned to a cloud-based service to do this.

Chris Mayers, chief security architect at Citrix, said: "Many [workers] are still reliant on dated practices such as using USB sticks to store and protect their information, when more advanced and robust measures are available."

Reference: http://www.krollontrack.co.uk/company/press-room/data-recovery-news/data-loss-inevitable,-brits-say531.aspx

It goes without saying that the amount of data we create is continually increasing. Methods of storing it are numerous and hard drive manufacturers are tempting us with greater capacity, speed, and – for a change – lower prices. So what storage medium should you choose?

What are the Choices?

  1. Optical Storage
  2. Hard Disk and Solid State Drives
  3. Flash Memory
  4. Cloud Services
  5. Magnetic Tape

Optical Storage

Just a few years ago, CDs and DVDs were among the most popular methods for storing large amounts of data – particularly among home users. This was a consequence of the relatively high prices of HDDs and particularly SSDs, as well as their limited capacity. In comparison, optical discs were competitive both in price and in the capacity they offered. Manufacturers declared relatively long lifespans, although these claims were quickly put to the test in practice. Depending on the manufacturer, a disc should last anywhere from 25 to 200 years, but that claim depends on many factors. You should be prepared for the disc to become unreadable at any time – and sooner rather than later.

Our own behavior is the main reason for the shortened lifespan of a disc. Keeping discs in the wrong conditions, scratching them and covering them in grease (all of us have left full sets of fingerprints on CDs) are common issues. In addition, lifespan can also be reduced by the methods manufacturers use to cut costs: low-quality materials used to produce and protect the disc reduce the layer thickness, accelerating oxidation of the reflective layer.

All that aside, the hardware we use to read those discs may also turn out to be a problem. New computers often don’t have a disc drive, and even if they do, not all formats are supported (CD-R, CD-RW, DVD-RAM, DVD-R, DVD+R, DVD-RW, DVD+/–R DL). Each of these formats indicates a separate technology and an individual method of data reading, so it may turn out that soon you won’t be able to get hold of an appropriate reader for your disc.
