Thursday, 21 May 2015 00:00

HDD selection criteria. Part 2 – Other specifications. Data transfer rate


In a device specification, capacity and interface are usually followed by the data transfer rate. There are two kinds of transfer rate: the interface transfer rate and the storage-medium transfer rate. The interface transfer rate (buffer to host), which we already mentioned in the previous article, is the maximum theoretically achievable rate at which data can move from the drive's buffer into the system. It is usually specified to clarify the interface mode used when that is not stated in the 'interface' field (for example, when the field says only 'UDMA').

The medium transfer rate (buffer to disk) indicates how fast data can be transferred to and from the storage medium itself. As a rule, the rate of writing to the medium is not equal to the rate of reading from it, which is why write and read speeds are usually listed separately (read is normally the faster of the two). Sometimes only one figure for the maximum exchange rate with the disk is given; in that case it should be understood as the read speed, with the write speed somewhat lower. It is easy to see that the higher the value, the better.

The buffer mentioned above is memory used to store data temporarily during disk operations. In the simplest case, it matches the pace of a fast device to a slow one. For example, the write speed of the platters is lower than the transfer rate of the interface, so when a write is requested, the data is moved at full speed into the buffer and then written to the disk at whatever speed the platters allow. Or, when data read from the drive arrives before the system is ready to receive it, the data is parked in the buffer and retrieved from there when possible. Without a buffer, this process would be substantially more complicated and slower. This is the simplest case, however, and a buffer in this bare form is not used in modern drives.
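To make the simple write case above concrete, here is a toy calculation in Python. The speeds and buffer size are hypothetical round numbers chosen for illustration, not figures from any real drive, and the model deliberately keeps the fill and drain steps sequential (no concurrent draining of the buffer).

```python
# Toy model: a buffer decouples a fast interface from a slower medium.
# All figures below are made-up round numbers, not real drive specs.

INTERFACE_MBPS = 100.0   # host -> buffer transfer rate (MB/s)
MEDIUM_MBPS = 40.0       # buffer -> platter write rate (MB/s)
BUFFER_MB = 8.0          # buffer capacity

def host_wait_ms(data_mb):
    """Return (time until the host is free, time until data is on the platter)."""
    assert data_mb <= BUFFER_MB, "toy model only covers writes that fit the buffer"
    # The host only waits for the fast transfer into the buffer...
    host_busy = data_mb / INTERFACE_MBPS * 1000
    # ...while the slow write to the platters finishes afterwards.
    on_platter = host_busy + data_mb / MEDIUM_MBPS * 1000
    return host_busy, on_platter

host, platter = host_wait_ms(4.0)
print(f"4 MB write: host busy {host:.0f} ms, on platter after {platter:.0f} ms")
# -> 4 MB write: host busy 40 ms, on platter after 140 ms
```

Without the buffer, the host would be tied up for the full 140 ms instead of 40 ms.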

In reality, the system's access to data is not random: you can predict with some confidence which data will be required next and prepare it in advance, or keep the most frequently accessed data in the buffer for a while, and so on. This technique is called caching, and such a buffer a cache. If the required data is already in the cache, no mechanical access is needed and the I/O completes very quickly: the data is delivered to the system at the maximum speed the interface allows. All modern drives are equipped with a buffer of exactly this kind. Elementary logic suggests that as the cache volume grows, so does the chance of unneeded data ending up in it.
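The "keep frequently accessed data around" idea can be sketched as a tiny least-recently-used (LRU) cache. This is purely an illustrative model, not how any particular drive's firmware works; the block numbers and capacity are made up.

```python
from collections import OrderedDict

class ToyCache:
    """Minimal LRU read cache: a hit skips the slow mechanical access."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()          # block number -> cached data
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.blocks:
            self.hits += 1                   # served from cache at interface speed
            self.blocks.move_to_end(block)   # mark as most recently used
            return self.blocks[block]
        self.misses += 1                     # would trigger a slow platter read
        self.blocks[block] = f"data-{block}"
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict the least recently used block
        return self.blocks[block]

cache = ToyCache(capacity=4)
for block in [1, 2, 3, 1, 2, 5, 1]:          # a hypothetical access pattern
    cache.read(block)
print(cache.hits, cache.misses)              # -> 3 4
```

Three of the seven reads are served without touching the platters; a larger cache raises the hit rate but, as noted above, also holds more data that may never be asked for again.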

Hard drives sold these days have buffers in the 512 KB - 16 MB range. Current models typically have 2 MB, the very newest 8 MB, while older ones vary from 512 KB to 1 MB. The size of the cache buffer does not affect the price much, so give preference to drives with a larger cache, preferably no less than 2 MB. A smaller cache is not fatal in itself, but a drive equipped with one most likely also has worse mechanics and considerably lower performance (you are most likely to run into such drives on the used market).

From the specification you can also extract the average seek time, which consists of the track-seek time and the head-settle time. The first is the period during which the head moves from its current position to the position given by a new command; the second is the time required to settle the head over the required cylinder and confirm that the correct track has been identified. Rotational latency is the time it takes for a specific block of data on a track to rotate around to the read/write head; on average it equals the time of half a revolution.

It is easy to see that for random access, which dominates in modern systems, the lower these three parameters, the better. Accordingly, give preference to a drive with a higher spindle speed (mass-produced models run at 5400 and 7200 RPM, high-end ones at 10,000 and 15,000 RPM), which determines how quickly blocks of data pass under the read/write head and hence the I/O speed, and with a better positioning mechanism (reflected in the average seek time).
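The half-revolution rule makes the rotational-latency figures easy to derive yourself. The seek times paired with each spindle speed below are assumed, typical-looking values for illustration only, not figures from any specific drive.

```python
def avg_rotational_latency_ms(rpm):
    # One full revolution takes 60/rpm seconds; on average the target
    # sector is half a revolution away from the head.
    return 60.0 / rpm / 2 * 1000

# (rpm, assumed average seek time in ms) - illustrative pairs only
for rpm, seek_ms in [(5400, 12.0), (7200, 8.5), (10000, 4.7), (15000, 3.8)]:
    latency = avg_rotational_latency_ms(rpm)
    print(f"{rpm:>5} RPM: rotational latency {latency:.2f} ms, "
          f"access ~{seek_ms + latency:.1f} ms (seek {seek_ms} ms assumed)")
```

Doubling the spindle speed halves the rotational latency (5.56 ms at 5400 RPM versus 2.00 ms at 15,000 RPM), which is exactly why faster spindles pay off under random access.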

Today the price difference between 5400 and 7200 RPM hard drives is only a few dollars, so there is no reason to buy the former. Production volumes of 5400 RPM drives keep shrinking, and some companies have already stopped manufacturing them for good.

From time to time I hear that 7200 RPM drives are too noisy and overheat quickly. The second complaint has some basis, although I have never seen a single case of one reaching a critical temperature in a home environment, and you can always add cooling if needed. The first complaint is groundless as far as modern drives go: they are all quiet, and my ear has never caught a difference between them. The same cannot be said of old drive models.

You can find and compare noise figures in the drive's specification (the 'Acoustics' field, given for two modes: idle and seek). Previously, authors of many reviews assessed noise with various tricks, such as recording it with a microphone placed at a set distance from the drive and comparing signal levels; nobody does that anymore, as there is simply no need. Returning to heating: you can compare drives by their rated current draw, since the more a drive eats, the more it heats.

Recording density

Some articles on HDD selection recommend paying attention to the recording density. This parameter is sometimes listed in the specification, or, given the time and the inclination, you can calculate it yourself. If the specification says nothing about density but does state the number of heads (i.e. the number of surfaces in use), then the capacity per platter equals the drive's capacity divided by the number of heads and multiplied by two. The higher it is, they say, the better the drive's performance. That is often fair, though not always: it depends on how the recording density was achieved. We will discuss that in a separate article devoted to hard drive platter structure.
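The per-platter calculation described above is trivial to script. The drive figures used here (160 GB, 4 heads, i.e. 2 platters) are hypothetical, chosen only to show the arithmetic.

```python
def per_platter_gb(capacity_gb, heads):
    # Each platter has two surfaces, with one head per surface in use,
    # so capacity per platter = total capacity / heads * 2.
    return capacity_gb / heads * 2

# Hypothetical drive: 160 GB total, 4 heads -> 2 platters of 80 GB each.
print(per_platter_gb(160, 4))  # -> 80.0
```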

If the specification contains neither the density nor the data needed to calculate it, there is no reason to be upset. This parameter is usually the last thing I look at. Drives released around the same time have approximately the same recording densities, and even a 1.5 - 2x increase gives no great leap forward if it is cancelled out by slower mechanics; conversely, a lower density with faster mechanics produces much the same result.

Shock resistance

The hard drive is the most vulnerable device in a computer: it is sensitive to jolts and shocks. A variety of technologies has been developed to increase shock resistance, such as SPS (Shock Protection System) and SPS II. In principle, the name of the technology implemented in a drive does not matter to the user; what matters is the result of its implementation, namely the drive's sensitivity to shock. This information is available in the specification, in the 'Shock' field, for both the powered-off state and read/write operation. The higher the value, the better. If you are going to carry the drive from place to place, be sure to compare these figures across drives; in that case they can matter more than other parameters. If the drive will be used in a stable environment, this parameter is not crucial, since the drive will not be subjected to severe shocks.

There are also technologies that warn of impending drive failure. The ubiquitous one is SMART (Self-Monitoring, Analysis and Reporting Technology). It is built into all modern drives and into the majority of drives manufactured in the last five or six years. It tracks a number of drive parameters and issues a warning when any of them crosses a threshold value. There are add-on tools on top of it that allow deeper analysis of the drive's condition; having them is no bad thing, though not essential. In itself I consider SMART a sufficiently sophisticated technology.

Do not skimp on the warranty. Buy the drive from companies that give the same warranty period as the manufacturer does. And, of course, it is better to buy from a well-known, reputable company even if you have to pay a bit more: if anything happens, you will be spared unpleasant excuses and explanations.

Possible issues

Hard drive problems divide into mechanical and electronic failures. The most common mechanical failure is bad spots on the drive's surface; electronic failures are usually connected with the controller.

The appearance of bad spots on the surface indicates either premature wear of the medium or a manufacturing defect. Discuss with the seller the minimum total number of bad spots required for the warranty to apply; the majority of companies treat the very appearance of bad spots as a warranty case. Modern drives have reserve areas used to transparently replace failed sectors, so if a diagnostic program reports bad sectors, it means that this reserve has already been exhausted. At that point there is cause for serious concern.

The drive's controller is poorly protected against electrical damage and can burn out for many reasons: a loose data cable, an ungrounded enclosure, a faulty power supply, and so on. There is a chance the controller is not covered by the warranty, so you are advised to clarify this point when purchasing the device.

Despite the common perception, buying a used drive is not necessarily suicide. Any purchase is a lottery, and under normal operating conditions the chances of failure are about the same for old and new devices. The only real advantage of a new device is the warranty period. If your funds are limited, buying used is fair enough.

There is nothing definite to say about manufacturers in terms of recommendations: each has its ups and downs. To help with the choice, browse some dedicated forums and seek advice from other users.

I think I have managed to show you what to pay attention to. One last piece of advice: do not forget to back up your data. Farewell!

Last modified on Thursday, 21 May 2015 19:42
Data Recovery Expert

Viktor S., Ph.D. (Electrical/Computer Engineering), was hired by DataRecoup, the international data recovery corporation, in 2012. Promoted to Engineering Senior Manager in 2010 and then to his current position, as C.I.O. of DataRecoup, in 2014. Responsible for the management of critical, high-priority RAID data recovery cases and the application of his expert, comprehensive knowledge in database data retrieval. He is also responsible for planning and implementing SEO/SEM and other internet-based marketing strategies. Currently, Viktor S., Ph.D., is focusing on the further development and expansion of DataRecoup’s major internet marketing campaign for their already successful proprietary software application “Data Recovery for Windows” (an application which he developed).
