Tuesday, 29 September 2015 00:00

Faster SQL Server backups in 10 steps


Faster SQL Server backups are just 10 steps away with this expert advice. These days, a 100 GB database is considered the norm, and many databases are beginning to surpass the terabyte range. Contributor Greg Robidoux looks at 10 ways to achieve faster SQL Server backups.

Running backups is one SQL Server process that can take a long time, and as databases continue to grow, so does the time it takes to back them up. At one point, a 100 GB database was considered very large; today it's the norm, and many databases are beginning to surpass the terabyte range. Here are 10 ways to achieve faster SQL Server backups.

1. Back up to disk, archive to tape

Backing up to disk is considerably faster than backing up to tape, and most experienced DBAs prefer this method. In addition to the faster I/O rates, you also have the latest backup on hand in case you need to do a restore. Once the backup to disk has completed, you should then archive to tape for long-term storage.
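As a rough sketch -- assuming a database named SalesDB and a local backup folder D:\SQLBackups, both hypothetical names -- a basic backup to disk looks like this:

    -- Full backup to a locally attached disk; archive the resulting file to tape afterwards
    BACKUP DATABASE SalesDB
    TO DISK = N'D:\SQLBackups\SalesDB_full.bak'
    WITH INIT,        -- overwrite any existing file of the same name
         STATS = 10;  -- report progress every 10 percent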

2. Back up during off hours

It's best to do your full backup during off hours when there is little activity on the server. But keep in mind that some batch jobs that are run during off hours can put more strain on your systems than jobs run during the day. It's important to monitor what's happening on your servers, and to schedule your full backups at the right time.
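If your backups run through SQL Server Agent, the off-hours window is just a job schedule. As a hedged sketch, assuming an Agent backup job named 'Nightly full backup' already exists (a hypothetical name), the following attaches a daily 2 a.m. schedule to it:

    -- Attach a daily 2:00 AM schedule to an existing (hypothetical) Agent backup job
    EXEC msdb.dbo.sp_add_jobschedule
        @job_name          = N'Nightly full backup',
        @name              = N'Daily 2 AM',
        @enabled           = 1,
        @freq_type         = 4,       -- daily
        @freq_interval     = 1,       -- every 1 day
        @active_start_time = 020000;  -- HHMMSS, i.e. 02:00:00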

3. Use compression software

The best approach to SQL Server backups is to back up to disk and then archive to tape. The downside is that backup files are generally about the same size as the data files, so a 100 GB database needs roughly 100 GB of disk space for its backup file. At the time this was written, SQL Server had no built-in way to compress a backup file (native backup compression arrived with SQL Server 2008 and was extended to Standard edition in SQL Server 2008 R2). You can use a Zip product on the finished file, but that adds extra processing time. There are also products on the market -- from Idera, Quest Software Inc. and Red Gate Software Ltd. -- that allow you to create compressed backups on the fly, and the benefits you receive from this type of product far outweigh the cost.
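If you are on SQL Server 2008 or later, the native option is a one-keyword change. A minimal sketch, reusing the hypothetical SalesDB database from above:

    -- Compressed full backup using the native WITH COMPRESSION option (SQL Server 2008+)
    BACKUP DATABASE SalesDB
    TO DISK = N'D:\SQLBackups\SalesDB_full_compressed.bak'
    WITH COMPRESSION, INIT, STATS = 10;

    -- Optionally make compression the server-wide default for all backups
    EXEC sp_configure 'backup compression default', 1;
    RECONFIGURE;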

4. Write to multiple files

Another approach is to write your backup to multiple files so the backup runs on multiple threads. This is the approach taken by both the tape vendors and the three companies mentioned above. Writing several files in parallel creates the backup much faster than writing a single file. This approach won't compress the backup files, but it should greatly reduce the time it takes to complete the backup.
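A minimal sketch of a striped backup, again assuming the hypothetical SalesDB database and D:\SQLBackups folder:

    -- Stripe the backup across four files so SQL Server writes them in parallel
    BACKUP DATABASE SalesDB
    TO DISK = N'D:\SQLBackups\SalesDB_full_1.bak',
       DISK = N'D:\SQLBackups\SalesDB_full_2.bak',
       DISK = N'D:\SQLBackups\SalesDB_full_3.bak',
       DISK = N'D:\SQLBackups\SalesDB_full_4.bak'
    WITH INIT, STATS = 10;

Keep in mind that all of the stripe files are needed together to restore this backup.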

5. Write to multiple physical disk drives

Creating full backups is a very I/O-intensive process: every data page in the database must be read and then written to the backup file. Spreading the work across multiple physical disks lets you achieve higher I/O rates and complete the process more quickly. In addition to writing to multiple files as mentioned above, writing those files to different physical disks helps alleviate the I/O bottleneck.
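The syntax is the same striping shown above; the difference is simply where the files land. A sketch assuming three hypothetical drive letters, each backed by separate physical storage:

    -- Each target file sits on a different physical drive to spread the write I/O
    BACKUP DATABASE SalesDB
    TO DISK = N'E:\SQLBackups\SalesDB_full_1.bak',
       DISK = N'F:\SQLBackups\SalesDB_full_2.bak',
       DISK = N'G:\SQLBackups\SalesDB_full_3.bak'
    WITH INIT, STATS = 10;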

6. Run file or filegroup backups

Another backup option that SQL Server offers is file or filegroup backups. Whether this method is available is determined by how the database was originally set up. If you created multiple files and/or filegroups when you created your database, you can back up portions of the database instead of the entire thing. This approach can get complicated and also introduces other risks, so make sure you plan the backup and restore process before you try it.
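As a hedged sketch, assuming the hypothetical SalesDB database was created with a filegroup named FG_Archive, backing up just that filegroup looks like this:

    -- Back up a single filegroup instead of the whole database
    BACKUP DATABASE SalesDB
       FILEGROUP = N'FG_Archive'
    TO DISK = N'D:\SQLBackups\SalesDB_FG_Archive.bak'
    WITH INIT, STATS = 10;

Note that under the full recovery model you still need transaction log backups to bring file or filegroup backups to a consistent point during a restore.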

7. Create snapshots

SQL Server also offers another backup approach called snapshots. This does exactly what the name implies -- creates a snapshot of the database at a particular point in time. This option is supported by third-party software and hardware and can get quite expensive. The advantage is that you can create a backup of your database in seconds versus hours.
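The storage-level snapshot backups described here depend on third-party tools, but SQL Server also ships a related native feature, database snapshots, which give you a read-only, point-in-time view of a database in seconds. A native snapshot is not a substitute for a backup -- it depends on the source database files -- but a minimal sketch, assuming hypothetical logical file names and paths, looks like this:

    -- Create a point-in-time, read-only snapshot of SalesDB
    -- (check sys.master_files for the real logical data file names)
    CREATE DATABASE SalesDB_snapshot_20150929
    ON ( NAME = SalesDB_Data,
         FILENAME = N'D:\SQLSnapshots\SalesDB_Data_20150929.ss' )
    AS SNAPSHOT OF SalesDB;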

8. Back up to local disk versus across the network

Creating backups across your network brings network I/O into the equation. Just as with disk I/O, you will hit throughput limits when pushing a large file across the network, and the time it takes to create the backup can vary greatly from day to day depending on what other data is moving across the network at the same time. The best approach is to back up to disks that are locally attached to the server, and once the backup has completed, copy the file to your backup server for tape archival.
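As a sketch -- assuming the hypothetical local D:\SQLBackups folder and a hypothetical \\BackupServer\SQLArchive share -- back up locally first and push the file across the network afterwards:

    -- Back up to locally attached disk first
    BACKUP DATABASE SalesDB
    TO DISK = N'D:\SQLBackups\SalesDB_full.bak'
    WITH INIT, STATS = 10;

    -- Then copy the finished file to the backup server from a scheduled task
    -- or an Agent CmdExec step, for example:
    --   robocopy D:\SQLBackups \\BackupServer\SQLArchive SalesDB_full.bak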

9. Use continuous data protection (CDP)

A new approach to backups is continuous data protection or CDP. This approach makes a copy of the transactions as they occur and allows you to reassemble the .mdf and .ldf files on another server for failover, reporting or any need that you may have. This eliminates the need to do full backups on your primary server. One product now available is from a company called TimeSpring Software Corp.

10. Run differential backups

This option backs up only the changes made since the last full backup. SQL Server does this by reading only the extents that have changed since the last full backup and writing just those changes. A full backup can be run once per week, with differentials run throughout the week. This produces faster backups during the week, but it will not help much when the weekly full backup runs. Also, depending on how much data changes within your database, your differential backups could grow to be nearly as large as your full backups.
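A minimal sketch of the weekly full plus daily differential pattern, again using the hypothetical SalesDB database:

    -- Weekly full backup
    BACKUP DATABASE SalesDB
    TO DISK = N'D:\SQLBackups\SalesDB_full.bak'
    WITH INIT, STATS = 10;

    -- Daily differential: only the extents changed since the last full backup
    BACKUP DATABASE SalesDB
    TO DISK = N'D:\SQLBackups\SalesDB_diff.bak'
    WITH DIFFERENTIAL, INIT, STATS = 10;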

Summary

As you can see, there are several different techniques to make your backups run faster. I am a strong believer that you should always back up to disk first and then archive to tape. Based on this approach (and keeping things as simple as possible), adding backup compression to your backup procedures is one of the simplest and most cost-effective changes. Take a look at your options, and then determine what works best for your environment.

Reference : http://searchsqlserver.techtarget.com/tip/Faster-SQL-Server-backups-in-10-steps

