Data Recovery Expert

Viktor S., Ph.D. (Electrical/Computer Engineering), was hired by DataRecoup, the international data recovery corporation, in 2012. He was promoted to Engineering Senior Manager in 2010 and then, in 2014, to his current position as C.I.O. of DataRecoup. He is responsible for managing critical, high-priority RAID data recovery cases and for applying his comprehensive expert knowledge of database data retrieval, as well as for planning and implementing SEO/SEM and other internet-based marketing strategies. Viktor is currently focused on the further development and expansion of DataRecoup’s major internet marketing campaign for its already successful proprietary software application “Data Recovery for Windows”, an application he developed himself.

Your Windows Server 2003 Migration Checklist

After a 12-year run, Microsoft is finally set to withdraw extended support for the Windows Server 2003 family of operating systems. After 14 July 2015, Microsoft will not provide any patches, updates or support to businesses using Server 2003 unless they take out an extremely expensive custom support agreement.

If your business does not have a spare $9 million (or access to public funds), the only cost-effective option is to take the leap and upgrade to one of the two more recent versions of Windows Server – 2008 or 2012. But with only a few days until the deadline, what do you need to do to be ready? This checklist will take you through the most important steps required to complete a successful migration.

1. Assess your upgrade options

Chances are that your direct upgrade options are severely limited if you are still using Windows Server 2003, because the hardware simply will not handle the demands of a later version. Even if you can coax an upgrade to Server 2008 out of your existing hardware, you must decide whether the effort, set against the relatively short remaining lifespan of that OS, outweighs the benefit of buying an upgraded system capable of running Server 2012.

Use this handy guide to find out more about the different upgrade options you have available.

2. Assess your migration options

In all likelihood you will need to purchase a brand new server and migrate data and settings from the original machine. It is important to note that Microsoft does not provide an automated upgrade tool for moving from Server 2003 to Server 2012.

Instead, you will need to perform a double upgrade (2003 to 2008, followed by 2008 to 2012) or, more likely, a manual migration between the two versions. For the rest of this guide we will assume you are migrating software, services and files to a completely new system running Windows Server 2012 R2.

Who Just Paid Microsoft Millions of Dollars for Continued Windows XP Support?

Windows XP has long been an OS Microsoft begrudgingly kept alive far longer than it could have ever imagined. Initially released in 2001, official support for Windows XP persisted all the way through April 2014. At the time, Microsoft noted that after 13 years of support, it was time for the company to look forward, unencumbered by outdated software.

But Microsoft is more than willing to make an exception if you’re willing to pony up some big bucks. Case in point: the US Navy’s Space and Naval Warfare Systems Command recently agreed to pay Microsoft $9 million in exchange for ongoing support of its Windows XP systems. The contract, signed in early June, also contains a number of options which, if exercised, “would bring the cumulative value” of the contract to nearly $31 million.

According to the US Navy, it has approximately 100,000 workstations still in use running legacy Windows XP applications. “Support for this software can no longer be obtained under existing agreements with Microsoft because the software has reached the end of maintenance period,” Navy officials explained. In addition to support for Windows XP, the contract also calls for ongoing support for Microsoft Office 2003, Exchange 2003, and Server 2003.
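To put the headline figures in perspective, a quick back-of-the-envelope calculation (assuming the $9 million covers all of those roughly 100,000 workstations) gives a surprisingly modest per-seat price:

```python
# Rough per-seat cost of the Navy's custom support agreement.
# Assumption: the $9M figure covers all ~100,000 legacy XP workstations.
contract_value = 9_000_000   # initial contract value, USD
workstations = 100_000       # XP machines still in use

per_seat = contract_value / workstations
print(f"${per_seat:.2f} per workstation")  # → $90.00 per workstation
```

Of course, the contract covers Office 2003, Exchange 2003 and Server 2003 as well, so the true figure attributable to XP alone would be lower still.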

Corrupted Exchange database

The Exchange database can become corrupted in different ways, for example, if the Exchange server is shut down incorrectly or if the hard disk is defective. Because it may be difficult to correct a corrupted Exchange database, Microsoft recommends that you perform regular backups of the Exchange database. This article describes how to troubleshoot a corrupted Exchange database.

Trying to repair an Exchange database should be a last resort, as such attempts can often lead to irretrievable data loss. Corruption most commonly occurs in the information store, typically involving one of the public or private EDB files. When an Exchange database is corrupted, you may not see any warning signs, or you may experience the following symptoms:

  • You may not be able to access the Global Address List.
  • The Microsoft Exchange Server Information Store does not start or you cannot stop it.
  • Users cannot send or receive emails.
  • Event 1018 is logged in the application event log, and hardware failures are logged in the system log.
  • The client computer may appear to stop responding (hang) for a while.
  • You may receive a "failure to connect to the Exchange server" message.
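Before reaching for repair tools, it can help to confirm that a recovered file really is an ESE database at all. The sketch below is a minimal check, assuming the widely documented ESE signature value 0x89ABCDEF at byte offset 4 of the file header; a matching signature does not mean the database is consistent, and Microsoft's own tools (such as eseutil) remain the proper way to inspect and repair an Exchange store.

```python
import struct

# Documented ESE database signature (assumed at offset 4, stored little-endian).
ESE_MAGIC = 0x89ABCDEF

def looks_like_ese_db(header: bytes) -> bool:
    """Sanity check: does this buffer start like an ESE (.edb) database?

    A matching signature only means the header bytes are intact; it says
    nothing about logical consistency. Use eseutil for a real check.
    """
    if len(header) < 8:
        return False
    (magic,) = struct.unpack_from("<I", header, 4)
    return magic == ESE_MAGIC
```

Feed it the first few hundred bytes of the file in question; a False result on a file that should be an EDB suggests header-level damage.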

SQL FAQ – Preparing For The Retirement Of Microsoft SQL Server 2005

Extended support for Microsoft SQL Server 2005 ends in half a year, in April 2016. This gives CTOs and DBAs plenty of time to prepare a Microsoft SQL Server upgrade or migration plan, but what are the options open to them?

The platform

At this point in time, the smartest move would be to skip Microsoft SQL Server 2008 and 2012, and move straight on to SQL Server 2014. Choosing to use the latest version of the database engine will help to maximise the lifespan of the platform and lengthen the time between upgrades.

Your business will also be able to take advantage of new features, such as in-memory OLTP and updateable columnstore indexes, that are not available in intermediate versions of Microsoft SQL Server.

Upgrade option #1 – In-place upgrade

Simultaneously the easiest and potentially the most risky option, an SQL Server in-place upgrade involves installing the new software, preferably SQL Server 2014, over the top of the existing system. The database engine is then upgraded, as are the tables and any other “moving parts”.

However, anything but the most basic of SQL Server environments (think single instance) is unlikely to be quite so straightforward. The in-place upgrade route is probably not the correct upgrade path for enterprise databases.

The other potential problem with in-place upgrades is the lack of a simple rollback in the event of a problem. The database server(s) will need to be taken offline whilst a full SQL Server recovery procedure is performed using the last full backup.
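Because there is no simple rollback, the one non-negotiable preparation step is a verified full backup taken immediately before the upgrade. As a minimal illustration (the file paths shown are hypothetical), you can record a checksum of the backup file and confirm that the off-box copy matches before touching the server:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large .bak files never load into RAM."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            digest.update(block)
    return digest.hexdigest()

# Hypothetical paths, for illustration only:
# local = sha256_of(Path(r"D:\backups\sales_full.bak"))
# copy  = sha256_of(Path(r"\\nas\backups\sales_full.bak"))
# assert local == copy, "off-box copy is corrupt; do not start the upgrade"
```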

SSD vs. HDD – Minimizing the Risk of Data Loss

This post is a continuation of the series on Solid State Drives (SSDs) and their role in enterprise storage.  In the first post, I discussed the differences between traditional HDDs and SSDs.  In the second post, I looked at the challenges associated with data destruction and asset disposal.

In this post, I will discuss:

  • How to minimize risks associated with data loss
  • How to minimize risks associated with data destruction.

There are a couple of areas that can make or break an organization in the event of a data disaster, for both HDDs and SSDs. Studies have suggested that data loss costs companies more than $18 billion per year, and that 50 percent of companies that suffer an outage lasting 10 days or more will go out of business within 5 years, with 70 percent of those closures happening within the first 12 months. Understanding your storage and taking steps to minimize the impact of a data disaster can greatly improve your organization’s chances of surviving.

A Reliable Approach To Managing And Restoring Legacy Tapes

Tape back-up has traditionally been viewed as a cost-effective and reliable platform for storing corporate data off-line, safely and securely.

However, the huge amount of legacy data that companies now need to store on tape, together with legal, regulatory and governance requirements to produce data in an accurate and timely fashion, mean that companies are looking for different approaches to managing and restoring data from tape back-ups.

When we discuss the issue with IT administrators, who are ultimately responsible for identifying and producing data required by the business when responding to compliance, legal or regulatory enquiries, they say their biggest challenge is the inability to identify the specific location of the records required.

Earlier this year we undertook a global study with 720 IT administrators to understand these challenges in more detail. We found that almost a third (30%) said that they do not have clear insight into what information is stored within their archives. Since most organisations are required by law to keep and maintain access to regulated data for a designated period of time, this is a potentially massive problem.

The need for reliable tape restores

We recently worked with a Spanish IT services provider that needed to ensure access to legacy backup tapes of a new end-customer in the insurance services business. The provider needed to guarantee reliable restores from legacy back-up tapes without the costs of migration or the maintenance of retired infrastructures.

The end-customer needed access to data held on a large number of 3592 and 3592/JA tapes over a period of five years, due to strict data retention laws and general good governance. The backup tapes had been created using Tivoli Storage Manager.

Understandably, neither the service provider nor the insurance business wanted to incur the cost of maintaining the Tivoli Storage Manager environment for a five-year period to serve infrequent, ad hoc backup tape restore requests.

Together with the IT service provider, we engineered a cost-effective and efficient solution to give the end customer access to its backup tapes.

SSD vs. HDD – Data Destruction and Asset Disposal

Why care about data destruction and asset disposal? According to the US Department of Commerce, data security breaches cost US companies more than $250 billion per year! A few examples will help illustrate the importance of proper data erasure and asset disposal practices.

A Loyola University (Chicago) computer with the Social Security numbers of 5,800 students was discarded before its hard drive was erased, forcing the school to warn students about potential identity theft. 

A survey by data forensics experts Garfinkel and Shelat found that over 40 percent of hard drives collected from eBay and other sources had recoverable data, and that over 30 percent contained sensitive information, including credit card numbers.

A BBC documentary revealed in 2006 that the bank account details of potentially thousands of UK residents were being sold in West Africa for less than £20, recovered from PC hard drives that had been exported to Nigeria.

Data is stored magnetically on traditional hard disk drives (HDDs). As the read/write heads pass over the magnetic substrate, bits of data are magnetically aligned and oriented so that they can be interpreted as 0s and 1s (binary data). These bits are grouped into bytes, which are in turn grouped into what is traditionally referred to as a sector (usually 512 bytes of data).
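The upshot of this fixed layout is simple arithmetic: a logical block address (LBA) maps directly to a byte offset on the platter, which is exactly why data on an HDD stays findable. A quick sketch, assuming classic 512-byte sectors rather than newer 4K-native drives:

```python
SECTOR_SIZE = 512  # bytes per sector on a traditional (non-4K-native) HDD

def lba_to_byte_offset(lba: int) -> int:
    """Map a logical block address to its absolute byte offset on the disk."""
    return lba * SECTOR_SIZE

def byte_to_lba(offset: int) -> tuple:
    """Map a byte offset back to (sector number, offset within that sector)."""
    return divmod(offset, SECTOR_SIZE)

print(lba_to_byte_offset(2048))   # → 1048576, the common 1 MiB alignment point
print(byte_to_lba(1048586))       # → (2048, 10)
```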

SSD vs. HDD – The Missing Considerations

Recently, there has been a lot written about Solid State Drives (SSDs) and their role in enterprise storage. These articles include several comparisons of solid state drives and mechanical drives in RAID arrays for enterprise applications. While most of these articles address several very key areas of comparison (cost, performance, capacity, power, cooling, reliability), they often neglect to consider data recovery and data destruction/asset disposal.  In the first part of this three-part series, I will examine data loss and recovery. In the second part, I will examine asset disposal, and in the third article, I will discuss steps that can be taken to minimize the risks.

Let’s tackle data recovery first. To understand how your choice of storage can affect the recoverability of data in the event something happens to your storage (and your backups), we need to take a closer look at how the data from a RAID array is written to the media.

With solid state disks, the data passes through the RAID controller to the individual SSDs that make up the array. As the data reaches the individual drives, it is passed to another specialized controller called a wear-leveling controller. The wear-leveling controller then determines to which NAND chip and block inside that chip the data is electronically written. The location of the data on the NAND chips changes constantly to help protect the NAND chips from wearing out.

With mechanical disks, the data is passed from the RAID controller to the individual disks. The data is then magnetically written by the read/write head to the platters in the drives as bits. It is important to note that the data is written in a very specific pattern on the platters. Specific bits of data are stored in consistent locations. As an example, when block 10 is written to the platter, barring any defects, the block stays in the same location on the disk platters. The data can then be read from the platter by going back to the same location on the platter and reading the magnetic orientation of the bit stored there. When changes are made to the data, the orientation of the bit may change, but its location on the platter does not change.
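The contrast between the two mappings can be sketched in a few lines. The toy model below is an illustration only (real flash translation layers are vastly more sophisticated): the HDD-style mapping always returns the same physical location for a logical block, while the SSD-style mapping steers every rewrite to the least-worn block, so the data's physical home keeps moving, which is precisely what complicates raw recovery from SSDs.

```python
class FixedMapping:
    """HDD-style: logical block N always lives at physical location N."""
    def physical_for(self, logical: int) -> int:
        return logical

class WearLevelled:
    """Toy SSD-style mapping: every write of a logical block is steered to
    the least-worn physical block, so its physical location keeps moving.
    (Real flash translation layers are far more complex than this.)"""
    def __init__(self, blocks: int):
        self.wear = [0] * blocks  # erase/program counts per physical block
        self.map = {}             # logical block -> current physical block

    def write(self, logical: int) -> int:
        dest = min(range(len(self.wear)), key=self.wear.__getitem__)
        self.wear[dest] += 1
        self.map[logical] = dest
        return dest

ssd = WearLevelled(blocks=4)
print([ssd.write(10) for _ in range(4)])  # → [0, 1, 2, 3]: one block, four homes
```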

Revive Your PC Or Mac With An SSD

SSD technology offers extremely fast read speeds. An SSD can either optimise a PC dedicated to video games with premium features, such as 4K or 3D, or revitalise an old PC or Mac. Most importantly, an SSD can be fitted in place of a standard 3.5″ desktop or 2.5″ laptop hard drive.

What is the procedure to be followed?

Changing a hard drive requires several steps, and it can be done by yourself or by a professional. There are several things you should know before having a go at this task. In particular, you will need software that can copy from one hard drive to another if you want to keep your regular working environment, and you may need more specific software if you intend to copy a Macintosh hard drive (a PC can copy Macintosh hard drives), because the point of a drive-to-drive copy is that the software knows how to copy the partitions correctly.

In the case of a Macintosh, these are HFS+ partitions, while Windows partitions are NTFS. Generally, it is best to have a desktop PC running Windows (7, 8 or 10) to perform the operation, because you can connect the original drive and use the copying software on the PC to copy it to the replacement drive, or to copy the laptop drive to the SSD that will replace it.
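If you are unsure what kind of partitions a drive carries, the partition table itself will tell you. As a small illustration (covering classic MBR disks only; GPT disks, common on newer Macs, need different parsing), each of the four MBR entries starting at byte offset 446 carries a type byte, where 0x07 conventionally indicates NTFS and 0xAF indicates Apple HFS+:

```python
PARTITION_TYPES = {0x07: "NTFS", 0xAF: "HFS+"}  # small subset of MBR type codes

def partition_types(mbr: bytes) -> list:
    """List the type of each populated entry in a 512-byte MBR sector.

    The partition table starts at offset 446; each 16-byte entry keeps its
    type byte at offset 4. Covers classic MBR only, not GPT.
    """
    assert len(mbr) >= 512 and mbr[510:512] == b"\x55\xaa", "not a valid MBR"
    found = []
    for i in range(4):
        t = mbr[446 + 16 * i + 4]
        if t:  # a type byte of 0x00 marks an empty slot
            found.append(PARTITION_TYPES.get(t, f"type 0x{t:02x}"))
    return found
```

Reading the first 512 bytes of the source drive through a tool of your choice and passing them to this function will show whether you are dealing with NTFS, HFS+ or something else entirely.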

The choice of copying software is important

There are several copying packages to choose from. In practice, to replace the internal drive of a PC or a PC-type laptop, you can find solid state drives sold as kits that include the right software and the appropriate cable for the operation, though these are generally a little more expensive. To copy a Macintosh drive, after several attempts by professionals, Paragon Drive Copy Professional and Acronis True Image turned out to be the only packages able to copy the partition successfully. Indeed, they allowed the Mac to start with the new hard drive in place of the old one without any problem at all, with no step other than swapping one drive for the other after copying.

What is left now is to choose the appropriate hard drive and install it in place of the old one.

Backup as a service isn't for everyone

When it comes to their channel partners' ability to deliver backup as a service, backup vendors have plenty of misconceptions.

Having spent 25 years in data protection, with several early years in channel and field-sales roles, I've noticed three common things that backup vendors don't seem to realize about channel partners.

Not every VAR/SI should become a backup service provider

Some vendors assume that they can help every one of their partners become backup as a service (BaaS) providers. The partners know the products, so with a little extra marketing for "BaaS in a box," they can presumably jump on the cloud bandwagon. There are two problems with this assumption:

  • Just because a reseller is outstanding at technology deployment, it does not necessarily mean that it can or should run whole infrastructures as your "data of last resort." Operational management isn't the same as deployment or integration expertise -- and not all resellers maintain enough staff to manage an "as a service" set of platforms adequately.
  • Not every reseller is effective with the marketing and business-development model of a cloud-based business. Sure, they could offer services to their existing customers who are considering "the cloud," but they could also resell those services with far less effort, unless it really is core to their business model.

What most successful VARs/SIs really offer is "expertise" combined with relationships and situational awareness of customer environments. After all, they probably installed most of the IT systems and will assuredly be the first called when something breaks.

But, when ESG asked IT decision-makers who they wanted to purchase backup as a service from, local VARs/SIs were near the bottom of the list, as were telco providers. Telco providers typically do know how to run an "as a service" infrastructure and market to a cloud consumer, but telcos don't know customers' IT environments, nor do most have the depth to help during an actual disaster, other than ensuring the BaaS platform is available for data restores upon request.

In many cases, for a VAR/SI to succeed in offering cloud services, it should partner with a telco or MSP that understands backup as a service and wants to work with local resellers. The reseller stays involved with its customer (bringing expertise and knowledge of the environment), the MSP/telco delivers a reliable service, and the customer benefits from cloud-based data protection. In other words, the VAR/SI should be involved in the selection, deployment and management of BaaS, but not in the actual delivery of the service.

Resellers are not always a good proxy for vendors looking for customer opinions

Vendors assume that any VAR/SI with 50 customers must have nearly 50 times (or even just 10 times) the insight into what customers are looking for. Some vendors will solicit their partners as proxies for what customers want in sales initiatives, marketing messages or even product development. Yes, partners definitely have insights and can see things at a macro level, but they are biased in two distinct ways.

First, partners often have a long history with their primary vendors, which can affect the objectivity of the feedback they pass along. And second, partners are looking for profitability, which means that the products they want to sell may not resemble what customers actually want to buy.

Partners have unique insights that absolutely have to be listened to and considered as vendors try to outperform in the congested data protection marketplace, but customers' viewpoints are unique and are very difficult to quantify. If the vendor's favorite long-time partner is answering on its customers' behalf, it won't provide any insight into the opinions of prospective customers who aren't currently aligned to that partner's services.
