Data Recovery Expert

Viktor S., Ph.D. (Electrical/Computer Engineering), was hired by DataRecoup, the international data recovery corporation, in 2012. He was promoted to Engineering Senior Manager in 2010 and then, in 2014, to his current position as C.I.O. of DataRecoup. He is responsible for managing critical, high-priority RAID data recovery cases and for applying his comprehensive expert knowledge of database data retrieval, as well as for planning and implementing SEO/SEM and other internet-based marketing strategies. Currently, he is focusing on the further development and expansion of DataRecoup’s major internet marketing campaign for its already successful proprietary software application “Data Recovery for Windows”, which he developed.

SQL FAQ – How Does A Page Level Restore Improve SQL Server Recovery Provisions?

For very large Microsoft SQL Server databases, a complete restore operation can take many hours. During this time the database cannot be used, since any data entered mid-restore would be lost as information is copied back from tape or disk. Obviously, in a high-availability environment any downtime is costly, so keeping it to a minimum is essential.

Fortunately, page level restore techniques can keep recovery times to a minimum by reducing the amount of data that needs to be copied back from the backup media. Since the release of Microsoft SQL Server 2005, DBAs have had the option of carrying out a ‘page level restore’, which allows them to recover a handful of pages rather than having to restore entire datasets and copy the information back into the original database.

The page level restore operation is perfect for situations where data becomes corrupted during writes by a faulty disk controller, misconfigured antivirus software or a failing I/O subsystem. Better still, page level restore operations can be performed online in the Enterprise edition of Microsoft SQL Server.
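
A sensible first step is to identify exactly which pages are damaged. SQL Server records pages that fail checksum, torn-page or 823/824 read checks in the msdb.dbo.suspect_pages table, so a simple query along the following lines lists the candidates for a page level restore:

-- List pages currently flagged as corrupt; event types 1, 2 and 3 indicate
-- unrepaired errors, while 4, 5 and 7 mean restored, repaired or deallocated
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages
WHERE event_type IN (1, 2, 3)
GO

The file_id and page_id values returned here are exactly what the PAGE clause of the restore command expects, as shown later in this article.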

Prerequisites

As with any database recovery operation, page level restores rely on having a complete backup to work from. If such a backup is not available, you will need to investigate an alternative method of recovering data directly from the server disks.
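
If you are unsure whether an existing backup is actually usable, SQL Server can check it without restoring anything. A minimal sketch, assuming an illustrative backup path:

-- Confirm that the backup set is complete and readable before relying on it
RESTORE VERIFYONLY
FROM DISK = N'X:\SQLBackups\DBName_Full.bak'
GO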

Although you can carry out the page level restore with the database online, you may decide to play it safe by switching to single user mode for the duration, using:

-- Force all other connections out, rolling back any open transactions
-- after a 10 second grace period
ALTER DATABASE DBName SET SINGLE_USER
WITH ROLLBACK AFTER 10 SECONDS
GO

This command ensures that everyone is out of the system and cannot get back in until you change the mode back again. You will also want to back up the tail of the log file so that all transactions are fully accounted for and no further data is lost:

-- Back up the tail of the log so that no committed transactions are lost
BACKUP LOG DBName
TO DISK = N'X:\SQLBackups\DBName_TailEnd.trn'
GO
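
With the tail of the log safely backed up, the restore sequence itself is short. The following is a minimal sketch, assuming the damaged page is page 57 in file 1 and that the full backup sits at X:\SQLBackups\DBName_Full.bak (the page ID and file names are illustrative); any differential or log backups taken since the full backup would be applied in between, with the tail-log backup restored last:

-- Restore just the damaged page from the last full backup, leaving the
-- database ready to accept log restores
RESTORE DATABASE DBName
PAGE = '1:57'
FROM DISK = N'X:\SQLBackups\DBName_Full.bak'
WITH NORECOVERY
GO

-- Apply the tail-log backup taken above and bring the database back online
RESTORE LOG DBName
FROM DISK = N'X:\SQLBackups\DBName_TailEnd.trn'
WITH RECOVERY
GO

Once the restore completes, remember to return the database to normal operation with ALTER DATABASE DBName SET MULTI_USER.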

Making Virtual Server Recovery-In-Place Viable

Backup technology for virtualized environments has become increasingly advanced. Many organizations have implemented backup applications specifically designed to back up data in a virtualized environment efficiently, without disrupting application performance. In addition, some backup applications, such as Veeam, now allow data residing on a disk-based backup target to be used as a boot device to support instant VM recoveries.

Boot From Backup

Generically referred to as “recovery-in-place”, this feature gives administrators the option of pointing a VM at the backup data residing on a disk partition (typically on a backup appliance) so that a failed VM can be recovered more quickly. The idea is to use the backup data as a temporary boot area until a full restore can be completed onto a primary storage resource.

3 Permanent Ways To Erase Your Data

In our last blog, we discussed the importance of being able to securely and permanently erase end-of-life data. Whether you’re working for a company that has legal obligations to destroy customers’ personal information after a certain timeframe, or you’re looking to sell an old smartphone on eBay and want to make sure nobody digs up your selfies, it pays to know how to do the job properly. And yet this is often a source of confusion – many consumers and businesses hold misconceptions about what constitutes secure data destruction and what doesn’t.

Formatting a disk, for example, won’t actually wipe it – it just removes the existing file system and generates a new one, which is analogous to throwing out a library catalogue when you really want to clear out the library itself. What’s more, taking a hammer to your hard drives is no guarantee that someone with enough time on their hands won’t be able – however unlikely it may seem – to reassemble the platters and transcribe the data.

So, how can consumers and businesses achieve peace of mind that their sensitive information won’t be coming back to haunt them after it’s been deleted? There are actually a few different fail-safe data destruction methods that have the approval of international governments and standards agencies, which vary wildly in cost and come with their own particular advantages and disadvantages. Here are three of the most important.

Method 1: Data erasure software

One of the simplest ways to permanently erase data is to use software. Hard drives, flash storage devices and virtual environments can all be wiped without specialist hardware, and the software required ranges from free – such as the ‘shred’ command bundled with most Unix-like operating systems – to commercial products.

While different data destruction applications use different techniques, they all adhere to a single principle: overwrite the information stored on the medium with something else. A program might go over a hard drive sector by sector, replacing every bit with a zero or with randomly generated data. To ensure that no trace of the original magnetic pattern remains, this is typically done multiple times – common algorithms include the Schneier seven-pass method, as well as the even more rigorous 35-pass Gutmann method.

Unfortunately, there are a few drawbacks to software-based data erasure. For one, it’s fairly time-consuming. Then, perhaps more significantly, there’s the fact that if certain sectors of the hard drive become inaccessible via normal means, the application won’t be able to write to them. As a result, someone with the right tools may still be able to recover data from a bad sector.

Obviously, software-based data erasure also hits a snag when you want to destroy information stored on media that can only be written to once, such as most optical discs.

IBM Cloud Virtualized Server Recovery (VSR) – Service Introduction

With cloud services becoming ubiquitous for server operations, it’s a natural progression for servers themselves to be backed up in the cloud.

It’s been estimated that server downtime can cost a company upwards of $50,000 per minute, so a server backup solution you can trust is imperative.

IBM has a very robust, efficient, and reliable solution to ensure that server downtime is kept to a minimum. Why IBM? They are an innovator in the field of disaster recovery, with more than 50 years of experience in business continuity and resiliency.

At the heart of the IBM Cloud Virtualized Server Recovery (VSR) system, which keeps up-to-the-second backup copies of your protected servers in the IBM Resiliency Cloud, are bootable disk images of your servers. If an outage occurs, your servers can be brought back up from these images through a web-based management portal, and the recovery process can be fully automated.

Should You Review Your Tape Archives?

If your only exposure to the world of data storage has been in the context of a small to medium-sized business or a startup, you’d be forgiven for thinking that magnetic tape is a relic from another era of enterprise computing. Once the de facto standard for long-term data retention, the format no longer gets much airtime in an age of cloud backups and tumbling HDD prices.

Nonetheless, rumours of magnetic tape’s demise have been greatly exaggerated. According to an Information Age article from September 2014, all ten of the world’s biggest banks and telecoms firms, as well as eight of the world’s ten biggest pharmaceutical companies, are tape users. And as trends like big data pick up steam, organisations have more incentive than ever to invest in low-cost, high-volume storage for offline data.

For all their advantages, though, tape archives need to be looked after. It can be tempting to treat business records as out of sight, out of mind once they’re filed away in a format proven to last for decades, but this is a mistake. The reasons for creating a tape archive aren’t trivial – regulatory compliance, mainly, and disaster recovery – and you don’t want to discover at the critical moment that your records are patchy.

Planning to back up and restore data

Deploying your operating system inside a virtual machine does not eliminate the need to archive your server data for long-term storage and disaster recovery. The primary reasons to continue archiving server data are to meet business, legal, or financial archiving requirements, to recover from a hardware or software failure, and to recover from a site disaster.

You should consider a number of backup strategies. You can back up the data directly from inside the virtual machine using existing technologies and in the same way you back up your servers currently. Additionally, with Virtual Server 2005 R2 Service Pack 1, you can use backup software that uses the Volume Shadow Copy Service (VSS) writer to back up each of your physical computers running Virtual Server as well as all attached virtual machines without needing to install backup agents inside the guest operating system.

Best practices for designing a backup and archive strategy include:

  • Back up only what is necessary.
  • Schedule backups carefully.
  • Choose the appropriate tools for backing up and restoring data.

Secure Data Destruction: Why It Matters

It can be hard to comprehend the scale of the average company’s data footprint. Not only do firms today have local hard drives and tape backups to contend with, but also mobile devices, memory cards and even virtual environments provided through the cloud. Every bit of that data needs to be managed securely and compliantly – not just in storage and transit, but also at the end of its lifecycle.

Everyone ought to understand the importance of erasing data. If you’re selling a smartphone on eBay, the chances are you’ll want to make sure the buyer, regardless of intent, can’t dig up your old photographs and text messages. Similarly, most companies have legal obligations to destroy any sensitive information they’re no longer using.

Nonetheless, some consumers and businesses exhibit a surprising degree of negligence in this respect. According to a 2012 study from the Information Commissioner’s Office (ICO), the UK regulator responsible for enforcing the Data Protection Act, as many as one in ten second-hand hard drives sold online contain personal information. In the same year, the ICO fined one NHS trust £325,000 for selling old hardware on eBay that still held confidential records on thousands of patients and staff members.

Note that when the Data Protection Act is swapped for the more stringent EU General Data Protection Regulation next year, fines for equivalent acts of non-compliance will skyrocket – the new rules stipulate penalties of up to five per cent of a company’s annual turnover, or €100,000,000 (£80,000,000).

What makes data destruction secure?

As the above cautionary tale demonstrates, not taking pains to permanently erase data can lead to catastrophe. In an age of increasingly smart, interconnected technology, it bears remembering that every byte of electronic information exists in physical form – no matter what it looks like on screen, there’s a hard drive platter or memory chip somewhere that’s ripe for the taking.

So, businesses – and privacy-conscious consumers – need to keep track of data assets that have reached the end of their lifecycle, and then destroy them at source. This might not sound like too complex a job – even someone with a rudimentary knowledge of technology might be familiar, in theory if not in practice, with concepts like a disk format or factory reset. Failing that, it might still occur to them to toss an old laptop into a skip rather than risk its unauthorised reuse.

Unfortunately, secure data destruction isn’t actually that simple. None of the above methods guarantee that the information stored on those devices won’t be recoverable – in fact, it might take little more than a few minutes with a free software package to retrieve it.

Backing up and restoring Virtual Server

To back up Virtual Server, you can:

  • Back up by using software that supports the Volume Shadow Copy Service.
  • Manually back up its various configuration files and resource files by using standard file backup software. To back up a virtual machine in this manner, the virtual machine must be turned off.
  • Back up a running virtual machine by using live backup software on the guest operating system.

Restoring Virtual Server involves reinstalling Virtual Server and copying the backed up files into the appropriate locations in the file system.

Back up by using software that supports the Volume Shadow Copy Service

When you use backup software that works with the new Volume Shadow Copy Service writer for Virtual Server to back up your host operating system, you can back up Virtual Server and its running virtual machines without needing to install backup agents inside the guest operating system of the virtual machines.

Optimal BIOS settings for server virtualization

The ideal mix of performance and reliability starts with learning which server BIOS settings you should configure for a virtualized system.

Simply installing processors with virtualization extensions does not always guarantee optimum performance or best host server stability; you must enable a variety of BIOS settings to manage details of the processors' virtualization behavior. The options allow IT professionals to configure host platforms to use virtualization features that best support the computing needs of virtual machines and to disable unneeded features to ensure stability. Let's consider some of the most common virtualization options found in server BIOS.

Enable virtualization

BIOS settings allow technicians to enable or disable the processors' virtualization extensions. You should never assume that capabilities such as Intel VT or AMD-V are enabled by default. Motherboards like Intel's S2600GZ/GL server board disable Intel VT by default, and the option to enable virtualization extensions is only available if all of the installed processors support them. With the option disabled, the server will continue to run in the traditional nonvirtualized mode (one server, one workload). Remember that changing this option is a major change to the hardware configuration, and you may need to power cycle the server before the change takes effect.

Enable I/O virtualization

While ordinary virtualization extensions allow the virtualization of processor and memory resources, additional virtualization features are typically needed to virtualize I/O activities such as DMA transfers and device interrupts. By virtualizing I/O resources, the server can potentially improve the way the I/O resources are secured and allocated to virtual machines (VMs). Intel calls these I/O virtualization extensions VT-d. I/O virtualization is also a part of AMD-V extensions.

Since some I/O devices or subsystems do not fully support I/O virtualization, the BIOS may disable this feature – and typically does by default – to ensure the best system stability and device interoperability. However, if every server component or subsystem is capable of supporting virtualized I/O, a technician can enable I/O virtualization features such as Intel VT for Directed I/O (VT-d). This option normally appears underneath the overall virtualization setting and is only available if all of the processors support virtualization extensions.

Enable interrupt remapping

I/O virtualization changes the way I/O resources are assigned to workloads. When I/O virtualization is enabled, the system's interrupt table is abstracted before being reported to the hypervisor. This allows more flexibility and control over the way system interrupts are allocated and dealt with at the hardware level. Interrupt remapping is often enabled by default once I/O virtualization is enabled, but you should specifically verify that the interrupt remapping feature is available and active. In some cases, interrupt remapping may be forced on, and you may have to disable this feature by disabling I/O virtualization entirely.

Microsoft private cloud backup challenges

There are two major challenges that must be addressed when backing up a Microsoft private cloud: figuring out what needs protection and backing up virtual machines.

One of the big trends in IT is the move from relatively simple virtual server environments to private or hybrid clouds. As organizations contemplate such a transition, they must consider how a private cloud implementation will impact their backup process.

A Microsoft private cloud is built from the same basic components as a typical Hyper-V deployment: Hyper-V servers, System Center Virtual Machine Manager (SCVMM), and one or more Cluster Shared Volumes. If your organization uses Microsoft Hyper-V, you probably know how to back up these components.

When it comes to backing up a Microsoft private cloud environment, there are two challenges that must be addressed:

  • Ensuring that everything necessary to rebuild the private cloud in the event of a failure is backed up.
  • Backing up virtual machines (VMs) that reside on inaccessible virtual network segments.