
Hard disk cache memory: what it is and what it is for

If you want to know what hard disk cache is and how it works, this article is for you. You will learn what it is, what functions it performs and how it affects the operation of the device, as well as the advantages and disadvantages of the cache.

Hard disk cache concept

The hard drive itself is a fairly slow device: compared to RAM, it is orders of magnitude slower. This is also why computer performance drops when RAM runs short, since the shortage is compensated for by swapping to the hard disk.

So, the hard disk cache is a kind of random access memory. It is built into the hard drive and serves as a buffer for read information and its subsequent transfer to the system, and also contains the most frequently used data.

Let's take a look at what a hard disk cache is for.

As noted above, reading information from a hard disk is quite slow, since moving the head and finding the required sector take time.

It is necessary to clarify that the word "slow" means milliseconds. And for modern technologies, a millisecond is a lot.

This is why the hard disk has a cache: it stores data physically read from the disk surface, and also reads ahead and stores sectors that are likely to be requested later.

This reduces the number of physical accesses to the drive and increases performance. The hard drive can keep working even when the host bus is busy, and for repeated requests of the same data the transfer speed can increase hundreds of times.

How hard drive cache works

Let's dwell on this in more detail. You already have a rough idea of what the hard disk cache is for. Now let's find out how it works.

Let's imagine that the hard disk receives a request to read 512 KB of information from one block. The necessary information is taken from the disk and placed in the cache, and along with the requested data several neighboring blocks are read at the same time. This is called prefetching. When a new request arrives at the disk, the drive's microcontroller first checks whether this information is already in the cache and, if it finds it, instantly transfers it to the system without accessing the physical surface.

Since the cache memory is limited, the oldest blocks of information are replaced with new ones. This scheme is known as a circular cache, or circular buffer.
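This read path can be modelled in a few lines of code. The sketch below is a deliberately simplified, hypothetical model (all names and sizes are invented, and real drive firmware is far more elaborate): a small buffer keeps recently read blocks plus a couple of prefetched neighbours, and evicts the oldest block when it is full.

    from collections import OrderedDict

    # Simplified model of a drive's read cache: recently read blocks plus a few
    # prefetched neighbours are kept; when the buffer is full, the oldest block
    # is evicted (the "circular buffer" behaviour described above).
    # All names and sizes here are illustrative, not taken from real firmware.

    CACHE_BLOCKS = 8      # capacity of the cache, in blocks
    PREFETCH = 2          # how many neighbouring blocks to read ahead

    class DiskReadCache:
        def __init__(self):
            self.cache = OrderedDict()          # block number -> data

        def read(self, block, read_from_platter):
            if block in self.cache:             # cache hit: no physical access
                return self.cache[block]
            # cache miss: read the requested block plus PREFETCH neighbours
            for b in range(block, block + PREFETCH + 1):
                self._store(b, read_from_platter(b))
            return self.cache[block]

        def _store(self, block, data):
            if len(self.cache) >= CACHE_BLOCKS: # buffer full: drop the oldest block
                self.cache.popitem(last=False)
            self.cache[block] = data

    # Usage: the second read is served from the cache without a physical access.
    cache = DiskReadCache()
    cache.read(10, lambda b: f"data-{b}")       # physical read of blocks 10..12
    cache.read(11, lambda b: f"data-{b}")       # served from cache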

Methods to increase the speed of the hard disk by means of buffer memory

  • Adaptive segmentation. The cache memory consists of segments of equal size. Since requests are not always the same size, many cache segments would be used wastefully, so manufacturers began to make caches that can vary the size and number of their segments (a simplified illustration follows this list).
  • Prefetching. The hard drive's processor analyzes the data requested previously and currently and, based on that analysis, reads ahead from the physical surface the data most likely to be requested next.
  • User control. More advanced hard drive models give the user control over cache operation: for example, disabling the cache, setting the segment size, toggling adaptive segmentation, or disabling prefetching.
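Here is a rough, hypothetical illustration of what adaptive segmentation buys. The sizes and the allocation rule are invented purely for clarity; they are not taken from any real firmware.

    import math

    # Hypothetical comparison of fixed vs. adaptive cache segmentation.
    # The numbers and the allocation rule are invented for illustration only.

    CACHE_KB = 8 * 1024                      # an 8 MB cache, expressed in KB
    requests_kb = [64, 512, 4, 128, 2048]    # sizes of incoming requests

    # Fixed segmentation: the cache is split into 32 equal segments,
    # and every request occupies a whole number of segments.
    fixed_segment_kb = CACHE_KB // 32        # 256 KB per segment
    fixed_used = sum(math.ceil(size / fixed_segment_kb) * fixed_segment_kb
                     for size in requests_kb)

    # Adaptive segmentation: each request gets a segment of exactly its own size.
    adaptive_used = sum(requests_kb)

    print(f"fixed segments occupy    {fixed_used} KB of cache")
    print(f"adaptive segments occupy {adaptive_used} KB of cache")
    # Small requests waste most of a fixed segment; adaptive segmentation
    # leaves that space free for other data.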

What a larger cache gives the device

Now let's find out what cache sizes drives ship with and what a larger cache actually gives you.

Most often you can find hard drives with a cache size of 32 or 64 MB, although models with 8 and 16 MB are still around; recently only 32 and 64 MB drives have been released. A significant jump in performance came when 16 MB replaced 8 MB. Between 16 and 32 MB caches the difference is no longer very noticeable, and the same goes for 32 versus 64 MB.

The average computer user will not notice the difference in performance between hard drives with 32 and 64 MB caches. Still, the cache memory periodically comes under significant load, so it is better to buy a drive with the larger cache if finances allow.

The main advantages of cache memory

Cache has many benefits. We will consider only the main ones:

  • Fewer physical accesses to the platters, so frequently used data is returned almost instantly.
  • The drive can keep responding even when the host bus is busy.
  • The processor is offloaded, since requests served from the cache need no physical access.

Disadvantages of cache memory

  1. The speed of the hard drive does not increase if data is written to the disk in a scattered, random way, because this makes prefetching pointless. The problem can be partially mitigated by periodic defragmentation.
  2. The buffer is of little use when reading files larger than the cache can hold. For example, when accessing a 100 MB file, a 64 MB cache helps little, since most of the data must still be read physically (see the quick calculation below).
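A quick back-of-the-envelope calculation, using the numbers quoted in this article, shows how little of a large file the buffer can cover at any one time:

    # Fraction of a file that fits into the drive's buffer at once.
    # The remainder must still be read physically from the platters,
    # so the transfer speed is limited by the mechanics, not the cache.
    for cache_mb, file_mb in [(64, 100), (16, 80)]:
        print(f"{cache_mb} MB cache, {file_mb} MB file: "
              f"{cache_mb / file_mb:.0%} of the file fits in the cache")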

Additional Information

You now know what the hard disk cache is and what it affects. What else should you know? There is now a newer type of storage: the SSD (solid-state drive). Instead of disk platters, it uses flash memory, like a USB flash drive. Such drives are tens of times faster than conventional hard drives, so a cache of this kind matters far less for them. But even these drives have their drawbacks. First, their price grows in proportion to capacity. Second, their memory cells have a limited number of rewrite cycles.

There are also hybrid drives: a small solid-state drive combined with a conventional hard drive. Their advantage is a combination of high speed and large storage capacity at a relatively low cost.

The hard disk cache is a temporary storage area for data.
If you have a modern hard drive, the cache is not as important as it used to be.
The role of the cache in hard drives, and how much cache is enough for fast computer operation, is covered in more detail later in the article.

What is the cache for

The hard disk cache stores frequently used data in a dedicated location, so the size of the cache determines how much data can be kept there. Thanks to a large cache, hard disk performance can increase significantly, because frequently used data can be held in the cache and served without a physical read.
Physical reading is direct access to the hard disk sectors. It takes a noticeable amount of time, measured in milliseconds, whereas the cache delivers requested information roughly 100 times faster than a physical access would. The cache also allows the hard drive to keep working even when the host bus is busy.
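The effect of the cache on average access time can be estimated with a standard weighted-average formula. The latencies below are assumed round numbers for illustration, not measurements of any particular drive.

    # Estimated average access time as a function of the cache hit rate.
    # The latencies are assumed, illustrative round numbers.
    disk_access_ms = 10.0      # physical access: head movement + rotation
    cache_access_ms = 0.1      # serving the request from the buffer (~100x faster)

    def average_access_ms(hit_rate):
        # Weighted average: hits are served from the cache,
        # misses require a physical access to the platters.
        return hit_rate * cache_access_ms + (1 - hit_rate) * disk_access_ms

    for hit_rate in (0.0, 0.5, 0.9):
        print(f"hit rate {hit_rate:.0%}: ~{average_access_ms(hit_rate):.2f} ms per request")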

Along with the importance of the cache, do not forget the other characteristics of a hard disk; sometimes the cache size can be neglected. Comparing two hard drives of the same capacity with different cache sizes, say 8 and 16 MB, the larger cache is worth choosing only if the price difference is around $7-12. Otherwise it makes no sense to overpay for a larger cache.

It is worth paying attention to the cache if you are buying a gaming computer and no detail is too small for you; in that case you should also look at the spindle speed.

Summarizing all of the above

The advantage of the cache is that data processing does not take long, whereas a physical access to a sector requires waiting while the disk head finds the desired piece of information and starts reading. In addition, hard drives with a large cache can noticeably offload the computer's processor, since requests served from the cache need no physical access, so the processor's involvement is minimal.

The hard disk cache can rightly be called an accelerator: its buffering really does let the drive work much faster and more efficiently. However, with the rapid development of high technology the cache no longer has its former importance, since most modern models use an 8 or 16 MB cache, which is quite enough for the drive to work well.

Today there are hard drives with an even larger 32 MB cache, but, as we said, you should pay extra only if the difference in price matches the difference in performance.

The hard disk (hard drive, HDD) is one of the most important parts of a computer. After all, if a processor, video card or other component breaks down, you only lose the money for a replacement; if the hard drive breaks down, you risk losing important data for good. The speed of the computer as a whole also depends on the hard disk. Let's figure out how to choose the right hard drive.

Hard disk tasks

The job of a hard drive inside a computer is to store and retrieve information very quickly. The hard drive is a remarkable invention of the computer industry: using the laws of physics, this small device stores a huge amount of information.

Hard disk type

IDE - an outdated interface; such drives are used with older motherboards.

SATA - replaced IDE drives and offers a higher data transfer rate.

SATA interfaces come in several versions; they differ in data transfer speed and in the technologies they support:

  • SATA - transfer rate up to 150 MB/s.
  • SATA II - transfer rate up to 300 MB/s.
  • SATA III - transfer rate up to 600 MB/s.

SATA III drives began to appear relatively recently, at the beginning of 2010. When buying such a drive, check what your (non-upgraded) motherboard supports: thanks to backward compatibility the drive will still connect to an older SATA port, but it will run at that older interface's speed. SATA and SATA II drives use the same connectors and are compatible with each other.

Hard disk space

The most common hard drives that users have at home are 250, 320 and 500 gigabytes. Smaller drives of 80 and 120 gigabytes still exist, but they are becoming rare and are no longer on sale. For storing very large amounts of data there are hard drives of 1, 2 and 4 terabytes.

Hard disk speed and cache memory

When choosing a hard disk, it is important to pay attention to its operating speed (spindle speed), since the speed of the entire computer depends on it. Typical spindle speeds are 5400 and 7200 rpm.

Buffer memory (cache memory) is the hard disk's own physical memory. It comes in several sizes: 8, 16, 32 and 64 megabytes. The larger and faster this memory is, the higher the effective data transfer rate.

In conclusion

Before buying, check which hard drive suits your motherboard: IDE, SATA or SATA III. Then look at the spindle speed and the amount of buffer memory: these are the main indicators to pay attention to. Also consider the manufacturer and the capacity that suits you.

We wish you happy shopping!

Share your choice in the comments, this will help other users make the right choice!




Using the cache increases the performance of any hard disk, reducing the number of physical disk accesses, and also allows the hard drive to work even when the host bus is busy. Most modern drives have a cache size of 8 to 64 megabytes. This is even more than the hard disk capacity of an average computer in the nineties of the last century.

Although the cache increases the drive's speed in the system, it also has its drawbacks. To begin with, the cache does not speed up the drive at all for random requests to information located at different ends of the platter, since with such requests there is no point in prefetching. The cache also does not help when reading large amounts of data, because it is usually quite small: when copying an 80 megabyte file with today's typical 16 megabyte buffer, only about 20% of the file fits into the cache.


In recent years, hard drive manufacturers have significantly increased the cache capacity of their products. In the late 90s, 256 kilobytes was the standard for all drives, and only high-end devices had 512 kilobytes of cache. Today an 8 megabyte cache has become the de facto standard, while the most productive models carry 32 or even 64 megabytes. There are two reasons the drive buffer has grown so quickly. One is the sharp fall in prices for the memory chips used. The other is users' belief that doubling or quadrupling the cache size will greatly affect drive speed.

The size of the hard disk cache does, of course, affect drive speed in the operating system, but not as much as users imagine. Manufacturers exploit this belief, making loud claims in advertising brochures about a quadrupled cache compared with the standard model. Yet comparing the same hard drive with 16 and 64 megabyte buffers shows that the speedup amounts to a few percent. What does this mean? That only a very large difference in cache size (say, between 512 kilobytes and 64 megabytes) noticeably affects drive speed. It should also be remembered that the drive buffer is small compared with the computer's memory, and the "soft" cache, the intermediate buffer the operating system keeps in RAM for file system operations, often contributes more to the drive's performance.


Hard disk cache

05.09.2005

All modern drives have a built-in cache, also called a buffer. Its purpose differs from that of a processor cache: it acts as a buffer between a fast device and a slow one. In the case of hard drives, the cache temporarily stores the results of recent reads from the disk and prefetches information that may be requested shortly, for example the sectors following the one currently requested.

Using the cache increases the performance of any hard disk by reducing the number of physical disk accesses, and also allows the drive to keep working even when the host bus is busy. Most modern drives have a cache of 2 to 8 megabytes, while the most advanced SCSI drives have caches of up to 16 megabytes, which is more than the entire memory of an average computer of the nineties.

It should be noted that when someone talks about the disk cache, they most often mean not the hard disk's own cache but a buffer allocated by the operating system to speed up read and write operations in that particular operating system.

The reason the hard disk cache is so important is the large difference between the speed of the drive mechanics and the speed of its interface. Finding the needed sector takes whole milliseconds: time is spent moving the head and waiting for the desired sector to rotate under it. In modern personal computers, even one millisecond is a lot. On a typical IDE/ATA drive, transferring a 16 KB block of data from the cache to the computer is roughly a hundred times faster than finding it and reading it from the surface. This is why all hard drives have an internal cache.

Writing data to disk is a different situation. Suppose we need to write the same 16 KB block with caching enabled. The drive instantly places the block in its internal cache and reports to the system that it is ready for new requests, while writing the data to the magnetic platters in the background. For sequential reading of sectors from the surface, the cache no longer plays a big role, since sequential read speed and interface speed are about the same in that case.

General concepts of hard disk cache operation

The simplest principle of cache operation is to store not only the requested sector but also several sectors after it. As a rule, reading from the hard disk is performed not in 512-byte blocks but in 4096-byte blocks (a cluster, although cluster size can vary). The cache is divided into segments, each of which can hold one block of data. When a request reaches the hard disk, the drive's controller first checks whether the requested data is already in the cache and, if so, returns it to the computer instantly without physically accessing the surface. If the data is not in the cache, it is first read and placed into the cache, and only then transferred to the computer. Because the cache size is limited, its contents are constantly refreshed; typically the oldest piece is replaced with a new one. This is called a circular buffer, or circular cache.

To improve the speed of the drive, manufacturers have come up with several methods to increase the speed of work due to the cache:

  1. Adaptive segmentation. Usually the cache is divided into segments of equal size. Since requests can be of different sizes, this wastes cache space, because a request gets split across fixed-length segments. Many drives today dynamically resize segments, matching the segment size (and varying the number of segments) to the specific request, which increases efficiency. This is more complicated than working with fixed-length segments and can lead to data fragmentation within the cache, increasing the load on the hard disk's microprocessor.
  2. Prefetching. Based on an analysis of the data requested now and at previous moments, the drive's microprocessor loads into the cache data that has not been requested yet but is likely to be. The simplest case of prefetching is loading the data that lies just beyond the currently requested data, since statistically it is more likely to be requested next. If the prefetching algorithm is implemented well in the drive's firmware, it improves performance across different file systems and data types (a simplified sketch of this idea follows the list).
  3. User control. High-end hard drives have a set of commands that let the user precisely control cache operation: enabling and disabling the cache, controlling segment size, enabling and disabling adaptive segmentation and prefetching, and so on.
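A minimal sketch of the prefetching idea follows. The detection rule here is deliberately naive and hypothetical; real firmware uses far more elaborate (and undisclosed) heuristics.

    # Naive, illustrative prefetch heuristic: if recent requests look sequential,
    # read a few extra sectors into the cache.  Real drive firmware uses far more
    # sophisticated algorithms.

    READ_AHEAD = 8                       # extra sectors to prefetch

    def sectors_to_read(request_sector, recent_sectors):
        """Return the list of sectors to fetch from the platters."""
        sequential = (
            len(recent_sectors) >= 2
            and recent_sectors[-1] == request_sector - 1
            and recent_sectors[-2] == request_sector - 2
        )
        if sequential:
            # Sequential access detected: prefetch the next READ_AHEAD sectors too.
            return list(range(request_sector, request_sector + READ_AHEAD + 1))
        return [request_sector]          # random access: no point in prefetching

    print(sectors_to_read(102, [100, 101]))   # sequential -> sectors 102..110
    print(sectors_to_read(5000, [100, 101]))  # random     -> [5000]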

Although the cache increases the drive's speed in the system, it also has its drawbacks. To begin with, the cache does not speed up the drive at all for random requests to information located at different ends of the platter, since with such requests there is no point in prefetching. The cache also does not help when reading large amounts of data, because it is usually quite small: when copying a 10 megabyte file with today's typical 2 megabyte buffer, only about 20% of the file fits into the cache.

Because of these and other peculiarities, the cache does not speed up the drive as much as we would like. The gain it gives depends not only on buffer size but also on the algorithms the drive's microprocessor uses to manage the cache, and on the type of files being worked with at the moment. As a rule, it is very difficult to find out which caching algorithms a particular drive uses.

The figure shows the cache chip of a Seagate Barracuda drive; its capacity is 4 megabits, i.e. 512 kilobytes.

Read-write caching


In recent years, hard drive manufacturers have significantly increased the cache capacity of their products. In the late 90s, 256 kilobytes was the standard for all drives, and only high-end devices had 512 kilobytes of cache. At present a 2 megabyte cache has become the de facto standard, while the most productive models carry 8 or even 16 megabytes; 16 megabytes is typically found only on SCSI drives. There are two reasons the drive buffer has grown so quickly. One is the sharp fall in prices for the memory chips used. The other is users' belief that doubling or quadrupling the cache size will greatly affect drive speed.

The size of the hard disk cache does, of course, affect drive speed in the operating system, but not as much as users imagine. Manufacturers exploit this belief, making loud claims in advertising brochures about a quadrupled cache compared with the standard model. Yet comparing the same hard drive with 2 and 8 megabyte buffers shows that the speedup amounts to a few percent. What does this mean? That only a very large difference in cache size (say, between 512 kilobytes and 8 megabytes) noticeably affects drive speed. It should also be remembered that the drive buffer is small compared with the computer's memory, and the "soft" cache, the intermediate buffer the operating system keeps in RAM for file system operations, often contributes more to the drive's performance.

Read caching and write caching are somewhat similar, but they also have many differences. Both of these operations are aimed at increasing the overall speed of the drive: they are buffers between the fast computer and the slow mechanics of the drive. The main difference between these operations is that one of them does not change the data in the drive, while the other does.

Without caching, each write operation would mean an agonizing wait while the heads move to the right place and the data is written to the surface. Working with a computer would be unbearable: as mentioned earlier, this operation takes at least 10 milliseconds on most hard drives, which is a lot from the computer's point of view, since the processor would have to wait those 10 milliseconds every time information is written to the hard drive. Curiously, exactly such a caching mode exists: data is written simultaneously to the cache and to the surface, and the system waits for both operations to complete. This is called write-through caching. It speeds things up only when the newly written data soon needs to be read back, and the write itself takes much longer than the time after which the computer will need that data.

Fortunately, there is a faster option: the computer writes data to the drive, the data goes into the cache, and the drive immediately tells the system that the write has been made; the computer carries on working, believing that the drive wrote the data very quickly, while in fact the drive has "tricked" the computer, having only written the data to the cache before later writing it to the disk. This technology is called write-back caching.
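The difference between the two policies can be condensed into a short sketch. The timings, function names and return values are illustrative assumptions, a model of the idea rather than of any real drive.

    import time

    # Illustrative model of write-through vs. write-back caching.
    # The ~10 ms "platter write" is an assumed round number.

    def write_to_platters(block):
        time.sleep(0.010)                 # pretend a physical write takes ~10 ms

    def write_through(cache, block):
        cache.append(block)
        write_to_platters(block)          # caller waits for the physical write
        return "GOOD"                     # acknowledged only after both writes

    def write_back(cache, block, pending):
        cache.append(block)
        pending.append(block)             # physical write happens later, in background
        return "GOOD"                     # acknowledged immediately, host carries on

    # With write-back the host gets the GOOD status right away, but any data still
    # sitting in `pending` is lost if power fails before it reaches the platters.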

Of course, write-back caching increases performance, but it also has a drawback. The drive tells the computer that the write is complete while the data is still only in the cache, and only then starts writing it to the surface, which takes a while. This is not a problem as long as the computer has power. But cache memory is volatile, so the moment the power goes off, the entire contents of the cache are irretrievably lost. If data was sitting in the cache waiting to be written to the surface when the power was cut, that data is lost forever. Worse, the system does not know whether the data actually reached the disk, because the drive already reported that it did. So we not only lose the data itself, we do not even know which data failed to be written, or that a failure occurred at all. As a result, part of a file may be lost, breaking its integrity, corrupting the operating system, and so on. Read caching, of course, is not affected by this problem.

Because of this risk, some workstations do not use write caching at all; modern drives allow the write caching mode to be turned off, which is especially important in applications where data integrity is critical. Since this type of caching greatly increases drive speed, however, other methods are usually used to reduce the risk of data loss from a power outage. The most common is connecting the computer to an uninterruptible power supply. In addition, all modern drives have a "flush write cache" command, which forces the drive to write the data from the cache to the surface; the system has to issue this command blindly, since it does not know whether there is anything in the cache or not. Every time the power is turned off, modern operating systems send this command to the hard drive, then the command to park the heads (although the latter could be skipped, since every modern drive automatically parks its heads when the supply voltage drops below the permissible level), and only after that does the computer power down. This guarantees the safety of user data and a correct shutdown of the hard drive.
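Summarised as pseudocode, the shutdown sequence described above looks like this; the drive object and its methods are hypothetical placeholders for the real ATA/SCSI commands issued by the operating system.

    # Illustrative shutdown sequence, as described above.  The `drive` object and
    # its methods are hypothetical placeholders for the real ATA/SCSI commands.
    def safe_shutdown(drive):
        drive.flush_write_cache()   # force any data still in the cache onto the platters
        drive.park_heads()          # usually redundant: drives park heads on power loss
        drive.power_off()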


What is a hard disk buffer and why is it needed

Today the most common storage device is the magnetic hard disk. It has a certain amount of memory dedicated to storing the main data, and it also has buffer memory for storing intermediate data. Professionals call the hard disk buffer "cache memory", or simply "cache". Let's figure out what the HDD buffer is needed for, what it affects and how big it is.

The hard disk buffer helps the operating system by temporarily storing data that has been read from the drive's main storage but not yet passed on for processing. This transit storage is needed because the read speed of the HDD and the throughput of the operating system differ significantly; the computer therefore temporarily keeps data in the "cache" and only then uses it for its intended purpose.

The hard disk buffer itself is not a separate sector, as some inexperienced users believe. It is a dedicated memory chip located on the drive's internal circuit board. Such chips work much faster than the drive itself, which is why they yield a noticeable (a few percent) increase in overall performance during operation.

It should be noted that the size of the "cache memory" depends on the specific disk model. Previously, it was about 8 megabytes, and this figure was considered satisfactory. However, with the advancement of technology, manufacturers have been able to produce chips with more memory. Therefore, most modern hard drives have a buffer, the size of which varies from 32 to 128 megabytes. Of course, the largest "cache" is installed in expensive models.

How hard disk buffer affects performance

Now let's tell you why the size of the hard drive buffer affects the performance of the computer. Theoretically, the more information will be in the "cache memory", the less often the operating system will access the hard drive. This is especially true for a work scenario when a potential user is processing a large number of small files. They simply move to the hard disk buffer and wait for their turn there.

However, if the PC is used to process large files, the "cache" loses its relevance: the information simply cannot fit into chips of such a small size. In that case the user will not notice any performance gain, because the buffer is barely used. This happens, for example, when the system runs programs for editing video files and the like.

Thus, when buying a new hard drive, it is worth paying attention to the size of the "cache" only if you plan to work constantly with large numbers of small files; then you really can notice an increase in the performance of your personal computer. If the PC will be used for ordinary everyday tasks or for processing large files, you need not attach much importance to the buffer size.

Choosing a hard drive for a PC is a task that deserves care: after all, it is the main repository of both system and personal information. In this article we will discuss the key characteristics of an HDD that you should pay attention to when buying a magnetic drive.

Introduction

When buying a computer, many users focus on the characteristics of components such as the monitor, processor and video card. Yet such an integral component of any PC as the hard drive (in computer slang, a "winchester") is often chosen by capacity alone, with other important parameters practically ignored. It should be remembered that a competent approach to choosing a hard disk is one of the guarantees of comfortable work at the computer later on, as well as of saving money, which we are so often short of.

A hard disk, or hard disk drive (HDD), is the main data storage device in most modern computers. It stores not only the information the user needs, including movies, games, photos and music, but also the operating system and all installed programs. That is why the choice of a hard disk deserves due attention. Remember that if any other element of the PC fails, it can be replaced; the only downside is the extra cost of repair or a new part. A hard disk failure, besides the unforeseen expense, can mean the loss of all your information and the need to reinstall the operating system and all required programs. The main purpose of this article is to help novice PC users choose a hard disk model that best meets their specific requirements.

First of all, you should clearly define in which computer device the hard drive will be installed and for what purposes you plan to use this device. Based on the most common tasks, we can conditionally divide them into several groups:

  • Mobile computer for general tasks (work with documents, "surfing" the world wide web, data processing and work with programs).
  • A powerful mobile computer for gaming and resource-intensive tasks.
  • Desktop computer for office tasks;
  • A productive desktop computer (work with multimedia, games, processing of audio, video and images);
  • Multimedia player and data storage.
  • For assembling an external (portable) drive.

In accordance with one of the listed options for using your computer, you can begin to select a suitable model of hard disk according to its characteristics.

Form Factor

Form factor is the physical size of the hard drive. Today, most drives for home computers are 2.5 or 3.5 inches wide. The first, which are smaller, are intended for installation in laptops, the second - in stationary system units. Of course, a 2.5-inch drive can also be installed in a desktop PC if desired.

There are also smaller magnetic drives with sizes of 1.8 ", 1" and even 0.85 ". But these hard drives are much less common and are focused on specific devices, such as ultra-compact computers (UMPC), digital cameras, PDAs and other equipment, where small dimensions and weight of components are very important. We will not talk about them in this article.

The smaller the disk, the lighter it is and the less power it needs. That is why 2.5" hard drives have almost completely replaced 3.5" models in external drives: large external drives need additional power from a wall outlet, while their smaller sibling is content with power from the USB ports alone. So if you decide to assemble a portable drive yourself, it is better to use a 2.5-inch HDD: the result will be lighter and more compact, and you will not have to carry a power supply around.

As for the installation of 2.5-inch disks in a stationary system unit, this decision looks ambiguous. Why? Read on.

Capacity

One of the main characteristics of any drive (and the hard drive is no exception) is its capacity (or volume), which today reaches four terabytes for some models (one terabyte is 1024 GB). Even some 5 years ago such a volume would have seemed fantastic, but current OS builds, modern software, high-resolution video and photos, and three-dimensional computer games, all quite "heavy", need a lot of hard drive capacity. Some modern games need 12 or more gigabytes of free disk space to run properly, and a 90-minute HD-quality movie may require more than 20 GB of storage.

Today the capacities of 2.5-inch magnetic drives range from 160 GB to 1.5 TB (the most common sizes are 250 GB, 320 GB, 500 GB, 750 GB and 1 TB). 3.5-inch desktop drives are larger and can store from 160 GB to 4 TB of data (the most common sizes are 320 GB, 500 GB, 1 TB, 2 TB and 3 TB).

When choosing HDD capacity, keep in mind one important detail: the larger the hard disk, the lower the price per gigabyte of storage. For example, a 320 GB desktop hard drive costs 1600 rubles, a 500 GB one 1650 rubles, and a 1 TB one 1950 rubles. Let's do the math: in the first case a gigabyte of storage costs 5 rubles (1600/320 = 5), in the second 3.3 rubles, and in the third 1.95 rubles. Of course, these figures do not mean you must buy a very large disk, but the example makes it very clear that buying a 320 GB disk is a poor deal.
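The same arithmetic in a few lines, using the prices and capacities from the example above:

    # Cost per gigabyte for the drives from the example above (prices in rubles).
    drives = {320: 1600, 500: 1650, 1000: 1950}   # capacity (GB) -> price
    for capacity_gb, price in drives.items():
        print(f"{capacity_gb:>5} GB: {price / capacity_gb:.2f} rubles per GB")
    # 320 GB: 5.00, 500 GB: 3.30, 1000 GB: 1.95 - the larger the drive,
    # the cheaper each gigabyte of storage.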

If you plan to use your computer mainly for office tasks, then a hard drive of 250-320 GB is more than enough, or even less, unless of course you need to store huge archives of documents. At the same time, as noted above, buying a hard drive smaller than 500 GB is unprofitable: by saving 50 to 200 rubles, you end up with a very high cost per gigabyte of storage. This applies to drives of both form factors.

Do you want to build a gaming or multimedia PC for working with graphics and video, are you planning to download new films and music albums in large quantities to your hard drive? Then it is better to choose a hard drive with a volume of at least 1 TB for a desktop PC and at least 750 GB for a mobile one. But, of course, the final calculation of the capacity of the hard drive must correspond to the specific needs of the user, and in this case we only give recommendations.

We should also mention the now-popular network storage systems (NAS) and multimedia players. As a rule, large 3.5-inch disks are installed in such equipment, preferably with a capacity of at least 2 TB. These devices are built for storing large amounts of data, so the hard drives inside them should be capacious, with the lowest possible cost per gigabyte of storage.

Disc geometry, platters and recording density

When choosing a hard disk, one should not rely blindly on total capacity alone, on the principle of "the more, the better". There are other important characteristics, among them the recording density and the number of platters used. Not only the capacity of the drive but also its read and write speed depends directly on these factors.

Let's make a small digression and say a few words about the design of modern hard disk drives. Data is recorded on aluminum or glass disks called platters, which are coated with a ferromagnetic film. Reading and writing data on the thousands of concentric tracks on the platter surface is handled by read/write heads mounted on rotary positioner arms, sometimes called "rocker arms". This happens without direct (mechanical) contact between the platter and the head (they are separated by about 7-10 nm), which protects against damage and gives the device a long service life. Each platter has two working surfaces and is served by two heads (one per side).

To create an address space, the surface of magnetic disks is divided into many circular regions called tracks. In turn, the tracks are divided into equal segments - sectors. Due to such a ring structure, the geometry of the plates, or rather their diameter, affects the speed of reading and writing information.

Closer to the outer edge of the platter, the tracks have a larger radius (they are longer) and contain more sectors, which means more data can be read in one revolution. The data transfer rate is therefore higher on the outer tracks, since in a given time interval the head passes over a greater length of track there than on the inner tracks closer to the center. For the same reason, 3.5-inch platters offer higher throughput than 2.5-inch ones.
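The effect of track length on transfer rate is easy to estimate: in one revolution the head reads every sector on the current track, so the sustained rate is roughly sectors-per-track times sector size times revolutions per second. The sector counts below are assumed, illustrative values.

    # Rough sequential transfer rate: sectors_per_track * 512 bytes * revolutions/sec.
    # The sector counts are assumed, illustrative values for an outer and an inner track.
    rpm = 7200
    rev_per_sec = rpm / 60

    for zone, sectors_per_track in [("outer track", 2500), ("inner track", 1200)]:
        rate_mb_s = sectors_per_track * 512 * rev_per_sec / 1_000_000
        print(f"{zone}: ~{rate_mb_s:.0f} MB/s")
    # The longer outer tracks hold more sectors, so more data passes under
    # the head per revolution and the transfer rate is higher.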

Several platters can sit inside the hard disk at once, each able to store a certain maximum amount of data. This is what determines the recording density, measured in gigabits per square inch (Gbit/in²) or in gigabytes per platter. The larger this value, the more information fits on one track of the platter, and the faster data is written and subsequently read back (regardless of the spindle speed).

The total capacity of the hard drive is the sum of the capacities of the platters inside it. For example, the first commercial 1000 GB (1 TB) drive, introduced in 2007, had as many as 5 platters with a density of 200 GB each. But technological progress does not stand still, and in 2011, thanks to improvements in perpendicular recording technology, Hitachi introduced the first 1 TB platter; such platters are widely used in modern large-capacity hard drives.

Reducing the number of platters in hard drives has a number of important advantages:

  • Decrease in data reading time;
  • Reducing power consumption and heat dissipation;
  • Increased reliability and resiliency;
  • Reduced weight and thickness;
  • Reduced cost.

Today the market simultaneously offers hard disk models that use platters with different recording densities, which means drives of the same capacity can have a completely different number of platters. If you are looking for the most efficient solution, choose an HDD with the fewest platters and the highest recording density. The problem is that you will hardly find these parameters listed in the specifications in any computer store; the information is often missing even from manufacturers' official websites. As a result, for ordinary users these characteristics are not always decisive when choosing a hard disk, simply because they are hard to find. Nevertheless, before buying we recommend looking up these values, which will let you choose a drive with the most modern characteristics.

Spindle speed

The performance of a hard disk depends directly not only on the recording density but also on the rotation speed of the magnetic platters inside it. All the platters are rigidly attached to an internal axis called the spindle and rotate with it as a unit. The faster the platters spin, the sooner the needed sector arrives under the head.

In stationary home computers, hard drive models with spindle speeds of 5400, 5900, 7200 or 10,000 rpm are used. Drives at 5400 rpm are usually quieter than their faster "competitors" and generate less heat, while drives with higher rpm offer better performance but consume more energy.
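Spindle speed translates directly into rotational latency: on average the drive waits half a revolution for the needed sector to arrive under the head, which is easy to compute for the speeds mentioned above.

    # Average rotational latency: half a revolution, in milliseconds.
    for rpm in (5400, 5900, 7200, 10000):
        half_revolution_ms = 60_000 / rpm / 2
        print(f"{rpm:>6} rpm: ~{half_revolution_ms:.2f} ms average rotational latency")
    # 5400 rpm -> ~5.56 ms, 7200 rpm -> ~4.17 ms, 10000 rpm -> ~3.00 ms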

For an ordinary office PC, a drive with a spindle speed of 5400 rpm will suffice. Also, such disks are well suited for installation in multimedia players or data storages, where the important role is played not so much by the speed of information transfer, but by the reduced power consumption and heat dissipation.

In other cases, in the overwhelming majority, discs with a rotational speed of the plates of 7200 rpm are used. This applies to both mid-range and top-end computers. The use of HDD with a rotation speed of 10,000 rpm is relatively rare, since such models of hard drives are very noisy and have a rather high cost of storing one gigabyte of information. Moreover, recently, users increasingly prefer to use solid-state drives instead of productive magnetic disks.

In the mobile sector, dominated by 2.5-inch drives, the most common spindle speed is 5400 rpm. This is not surprising, since low power consumption and low heating of parts are important for portable devices. But they did not forget about the owners of productive laptops - there is a large selection of models with a rotation speed of 7200 rpm on the market and even several representatives of the VelociRaptor family with a rotation speed of 10,000 rpm. Although the feasibility of using the latter, even in the most powerful mobile PCs, is in great doubt. In our opinion, if you need to install a very fast disk subsystem, here it is better to pay attention to solid-state drives.

Connection interface

Almost all modern models, both small and large hard drives, are connected to the motherboards of personal computers using the SATA (Serial ATA) serial interface. If you have a very old computer, then the option of connecting using the parallel interface PATA (IDE) is possible. But keep in mind that the range of such hard drives in stores today is very scarce, since their production has almost completely ceased.

As for the SATA interface, two variants of disks are on the market: those connected via the SATA II bus and those using SATA III. In the first case the maximum data transfer rate between the disk and RAM is 300 MB/s (bus bandwidth up to 3 Gbit/s), in the second 600 MB/s (up to 6 Gbit/s). The SATA III interface also has slightly improved power management.
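The relationship between the gigabit line rate and the megabyte-per-second figure is simple: SATA uses 8b/10b encoding, so every data byte occupies 10 bits on the wire.

    # SATA line rate vs. usable throughput: 8b/10b encoding spends 10 bits per data byte.
    for name, line_rate_gbit in [("SATA I", 1.5), ("SATA II", 3.0), ("SATA III", 6.0)]:
        throughput_mb_s = line_rate_gbit * 1_000_000_000 / 10 / 1_000_000
        print(f"{name}: {throughput_mb_s:.0f} MB/s")
    # SATA I: 150 MB/s, SATA II: 300 MB/s, SATA III: 600 MB/s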

In practice, the bandwidth of the SATA II interface is enough for any classic hard drive: even the most productive HDD models read data from the platters at just over 200 MB/s. Solid-state drives are another matter: there the data is stored not on magnetic platters but in flash memory, whose read speed is many times higher and can exceed 500 MB/s.

It should be noted that all versions of the SATA interface remain compatible with each other at the level of protocols, connectors and cables. That is, a hard drive with a SATA III interface can safely be connected to a motherboard's SATA I connector, although the maximum throughput will then be limited by the older revision to 150 MB/s.

Buffer memory (Cache)

Buffer memory is fast intermediate memory (usually ordinary RAM chips) used to smooth out the difference between the speed of reading or writing data on the platters and the speed of transferring it over the interface while the disk is running. The hard drive cache can hold the most recently read data that has not yet been passed on for processing, as well as data that is likely to be requested again.

In the previous section, we already noted the difference between hard disk performance and interface bandwidth. It is this fact that determines the need for transit storage in modern hard drives. Thus, while data is being written or read from magnetic platters, the system can use the information stored in the cache for its own needs without being idle.

The buffer size of modern hard drives in the 2.5-inch form factor can be 8, 16, 32 or 64 MB. Their larger 3.5-inch counterparts go up to 128 MB of buffer memory. In the mobile segment, disks with an 8 or 16 MB cache are the most common; among desktop hard drives, the most common buffer sizes are 32 and 64 MB.

In theory, a larger cache should give better disk performance, but in practice this is not always the case. There are disk operations in which the buffer has almost no effect on performance, for example sequential reads from the platter surface or work with large files. In addition, cache efficiency depends on the algorithms that manage the buffer and guard against errors in its use, so a disk with a smaller cache but better caching algorithms may prove faster than a competitor with a larger buffer.

Thus, it is not worth chasing the maximum amount of buffer memory, especially if a large cache carries a hefty premium. Besides, manufacturers try to equip each product with the most sensible cache size for the class and characteristics of that disk model.

Other characteristics

In conclusion, let's take a quick look at some of the remaining characteristics that you may come across in the descriptions of hard drives.

Reliability, or mean time between failures (MTBF) - the average operating time of the hard drive before its first failure or need for repair, usually measured in hours. This parameter is very important for disks used in servers and file storage, as well as in RAID arrays. Specialized magnetic drives typically have an MTBF of 800,000 to 1,000,000 hours (for example, WD's RED series or Seagate's Constellation series).

Noise level - noise generated by the elements of the hard disk during its operation. Measured in decibels (dB). It mainly consists of the noise arising from the positioning of the heads (crackling) and the noise from the spindle rotation (rustling). As a rule, the lower the spindle speed, the quieter the hard drive works. A hard drive can be called quiet if its noise level is below 26 dB.

Power consumption - an important parameter for drives installed in mobile devices, where long battery life is appreciated. The heat dissipation of the hard drive also directly depends on energy consumption, which is also important for portable PCs. As a rule, the level of energy consumption is indicated by the manufacturer on the disc cover, but you should not blindly trust these figures. Very often they are far from reality, so if you really want to find out the power consumption of a particular disk model, then it is better to search the Internet for independent test results.

Random access time - the average time during which the positioning of the disk read head over an arbitrary section of the magnetic plate is performed, measured in milliseconds. A very important parameter that affects the performance of the hard drive as a whole. The shorter the positioning time, the faster data will be written to or read from the disk. Can range from 2.5ms (on some server disk models) to 14ms. On average, for modern disks for personal computers, this parameter ranges from 7 to 11 ms. Although there are also very fast models, for example, the WD Velociraptor with an average random access time of 3.6 ms.

Conclusion

In conclusion, I would like to say a few words about the increasingly popular hybrid magnetic drives (SSHD). Devices of this type combine a conventional hard disk drive (HDD) and a small solid state drive (SSD) that acts as additional cache memory. Thus, the developers are trying to combine the main advantages of the two technologies - the large capacity of magnetic plates and the speed of flash memory. At the same time, the cost of hybrid drives is much lower than that of newfangled SSDs, and slightly higher than that of conventional HDDs.

Despite the promise of this technology, SSHDs are so far poorly represented on the hard drive market, with only a small number of models in the 2.5-inch form factor. Seagate is the most active in this segment, although competitors Western Digital (WD) and Toshiba have already presented their own hybrid solutions. All this gives hope that the SSHD market will develop and that new models of such devices will soon appear on sale not only for mobile computers but also for desktops.

This concludes our review, where we examined all the main characteristics of computer hard drives. We hope that based on this material, you will be able to choose a hard drive for any purpose with the optimal parameters corresponding to them.

Let me remind you that the Seagate SeaTools Enterprise utility lets the user manage the caching policy and, in particular, switch the latest Seagate SCSI drives between two different caching models: Desktop Mode and Server Mode. The corresponding item in the SeaTools menu is called Performance Mode (PM) and can take two values: On (Desktop Mode) and Off (Server Mode). The difference between the two modes is purely software: in Desktop Mode the hard disk cache is divided into a fixed number of equal-sized segments, which are then used to cache read and write accesses. Moreover, in a separate menu item the user can even set the number of segments himself (manage cache segmentation): for example, instead of the default 32 segments, set a different value (in which case the size of each segment shrinks or grows proportionally).

In Server Mode, the buffer (disk cache) segments can be reassigned dynamically, changing their size and number: the disk's own microprocessor (and firmware) dynamically optimizes the number and capacity of cache segments depending on the commands arriving for execution.

Earlier we found that using the new Seagate Cheetah drives in Desktop mode (with the default fixed segmentation of 32 segments) instead of the default Server mode with dynamic segmentation can noticeably improve performance in a number of tasks more typical of a desktop computer or media server. This gain can sometimes reach 30-100% (!) depending on the task and drive model, although on average it is around 30%, which is not bad either. Such tasks include routine desktop PC work (the WinBench, PCMark and H2bench tests), reading and copying files, and defragmentation. At the same time, in purely server workloads drive performance hardly falls at all (and where it does, not significantly). However, we saw a noticeable gain from Desktop Mode only on the Cheetah 10K.7; its older sister, the Cheetah 15K.4, did not care which mode it ran desktop applications in.

To understand further how segmentation of the cache memory of these drives affects performance in various applications, and which segmentation settings (how many segments) are more beneficial for particular tasks, I investigated the effect of the number of cache segments on the performance of the Seagate Cheetah 15K.4 over a wide range, from 4 to 128 segments (4, 8, 16, 32, 64 and 128). The results of that investigation are presented in this part of the review. Let me stress that these results are interesting not only for this disk model (or Seagate SCSI disks in general): cache segmentation and the choice of the number of segments is one of the main avenues of firmware optimization, including for desktop ATA drives, which now also mostly ship with an 8 MB buffer. So the performance results described here as a function of cache segmentation are also relevant to desktop ATA drives. And since the test methodology was described in the first part, let's go straight to the results.

However, before discussing the results, let's look more closely at the structure and operation of the cache segments of the Seagate Cheetah 15K.4 to better understand what is at stake. Of the eight megabytes, 7077 KB are available for actual caching (the rest is a service area). This area is divided into logical segments (Mode Select Page 08h, byte 13), which are used for reading and writing data (read-ahead from the platters and deferred writing to the surface). The segments address data on the platters using the drive's logical block addressing. Drives in this series support at most 64 cache segments, the length of each segment being an integer number of sectors. The available cache memory is apparently divided equally among the segments: with, say, 32 segments, each segment is roughly 220 KB. With dynamic segmentation (PM = Off), the drive can change the number of segments automatically, depending on the command stream from the host.
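With 7077 KB of usable cache shared equally among the segments, the segment size falls quickly as the segment count grows; the quick calculation below uses the figures quoted in the text.

    # Usable cache of the Cheetah 15K.4 divided equally among the segments
    # (7077 KB is the figure quoted above; the drive supports at most 64 segments).
    usable_kb = 7077
    for segments in (4, 8, 16, 32, 64):
        print(f"{segments:>3} segments: ~{usable_kb / segments:.0f} KB per segment")
    # 32 segments -> ~221 KB each, matching the ~220 KB mentioned in the text.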

Server and desktop applications require different caching behaviour for optimal performance, so it is difficult to provide a single configuration that suits both. According to Seagate, "desktop" applications need the cache configured to respond quickly to repeated requests for many small blocks of data without delays from read-ahead of adjacent segments. Server workloads, by contrast, need the cache configured to hold large volumes of sequential data from non-repeating requests; here the ability of the cache to hold more data from contiguous segments during read-ahead matters more. For Desktop Mode the manufacturer therefore recommends 32 segments (early Cheetah models used 16), while in Server Mode the adaptive segment count starts from just three for the whole cache, although it can grow during operation. In our experiments on how the number of segments affects performance in various applications, we will limit ourselves to the range from 4 to 64 segments, and as a check we will also "run" the disk with 128 segments set in the SeaTools Enterprise program (the program does not report that this number of segments is invalid for this drive).

Physical test results

There is no point in showing linear read speed graphs for different numbers of cache segments: they are identical. The Ultra320 SCSI interface speed measured by the tests, however, paints a curious picture: with 64 segments some programs start to misreport the interface speed, understating it by more than an order of magnitude.

The measured average access time shows the differences between segment counts more clearly: as segmentation decreases, the average read access time measured under Windows increases slightly, and noticeably better readings are observed in PM = off mode, although it is difficult to tell from these data alone whether the number of segments in that mode is very small or, on the contrary, very large. It is possible that the disk simply ignores read prefetch in this case to avoid additional delays.

The efficiency of the firmware's lazy-write algorithms and of caching written data in the drive's buffer can be judged by how much the average write access time measured by the operating system drops relative to reading when the drive's write-back caching is enabled (it was always enabled in our tests). For this we usually use the results of the c't H2BenchW test, but this time we supplement the picture with an IOmeter test whose read and write patterns use 100% random access in 512-byte blocks at a request queue depth of one. (Of course, you should not think that the average write access time in the two diagrams below really reflects a physical characteristic of the drives! It is just a parameter, measured programmatically by a test, from which one can judge the efficiency of write caching in the disk buffer. The actual average write access time declared by the manufacturer for the Cheetah 15K.4 is 4.0 + 2.0 = 6.0 ms.) Anticipating questions, I note that in this case (that is, with lazy writing enabled on the disk) the drive reports successful completion of a write command (GOOD status) to the host as soon as the data reaches the cache memory, not when it reaches the magnetic media. This is why the externally measured average write access time is lower than the analogous figure for reading.
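To make the mechanism clearer, here is a deliberately crude Python model. All numbers except the 6.0 ms figure quoted above are assumptions for illustration: with write-back caching on, the host sees a command complete as soon as the data lands in a cache segment, so the externally measured "write access time" reflects buffer insertion rather than the roughly 6 ms cost of reaching the platters.

```python
import random

# Crude illustration (all numbers except the 6.0 ms are assumptions): with
# write-back caching the drive acknowledges a write once the data is in the
# buffer, so the host measures buffer-insertion time, not media time.

MEDIA_WRITE_MS = 6.0        # average seek + rotational latency quoted by Seagate
BUFFER_INSERT_MS = 0.05     # assumed cost of parking data in a cache segment
STALL_PROBABILITY = 0.01    # assumed chance that every segment is already dirty

def avg_measured_latency(write_back: bool, n: int = 100_000) -> float:
    total = 0.0
    for _ in range(n):
        if write_back and random.random() > STALL_PROBABILITY:
            total += BUFFER_INSERT_MS      # GOOD status straight from the cache
        else:
            total += MEDIA_WRITE_MS        # must wait for the platters
    return total / n

print(f"write-through: ~{avg_measured_latency(False):.2f} ms per command")
print(f"write-back:    ~{avg_measured_latency(True):.2f} ms per command")
```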

These tests show a clear dependence of the efficiency of caching random writes of small blocks on the number of cache segments: the more segments, the better. With four segments the efficiency drops sharply and the average write access time rises almost to the read values. In the "server mode" the number of segments in this case is evidently close to 32. The 64-segment and "128"-segment cases are completely identical, confirming that the firmware caps the number of segments at 64.

Interestingly, in the simplest 512-byte random-access patterns the IOmeter test gives exactly the same write values as c't H2BenchW (to within literally hundredths of a millisecond), while for reads IOmeter showed a slightly higher result across the whole segmentation range; the 0.1-0.19 ms difference from the other random read access time tests is possibly due to some "internal" overhead of IOmeter (or to the 512-byte block size instead of the zero-byte requests ideally required for such measurements). In any case, IOmeter's read results practically coincide with those of the AIDA32 disk test.

Performance in applications

Let's move on to benchmarks of drive performance in applications. First of all, let's find out how well the disks are optimized for multithreaded operation. For this I traditionally use tests in the NBench 2.4 program, where 100 MB files are written to and read from the disk by several simultaneous threads.
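For reference, below is a minimal Python sketch of the kind of load NBench generates, not the benchmark itself: several threads each write a 100 MB file in large chunks and the aggregate throughput is reported. The file names, chunk size and thread count are arbitrary choices for illustration.

```python
import os
import tempfile
import threading
import time

# Rough sketch of a multi-threaded sequential-write load in the spirit of the
# NBench file tests: several threads each write a 100 MB file, then the
# aggregate MB/s is reported. Chunk size and thread count are arbitrary.

FILE_SIZE_MB = 100
CHUNK = b"\0" * (1 << 20)     # 1 MB per write call
THREADS = 4

def write_file(path: str) -> None:
    with open(path, "wb") as f:
        for _ in range(FILE_SIZE_MB):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually leaves the OS cache

def main() -> None:
    with tempfile.TemporaryDirectory() as tmp:
        paths = [os.path.join(tmp, f"stream_{i}.bin") for i in range(THREADS)]
        workers = [threading.Thread(target=write_file, args=(p,)) for p in paths]
        start = time.perf_counter()
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        elapsed = time.perf_counter() - start
        print(f"{THREADS} threads x {FILE_SIZE_MB} MB: "
              f"{THREADS * FILE_SIZE_MB / elapsed:.1f} MB/s aggregate")

if __name__ == "__main__":
    main()
```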

This diagram lets us judge the efficiency of the hard disks' multi-threaded lazy-write algorithms under real conditions (not synthetic ones, as in the average access time diagram), when the operating system works with files. The leadership of both Maxtor SCSI disks in multi-stream writing is beyond doubt, but for the Cheetah we already observe a certain optimum between 8 and 16 segments, with the disk slowing down in these tasks at both higher and lower values. For Server Mode the number of segments here is evidently 32 (to good accuracy :)), and "128" segments is in fact 64.

In multi-threaded reading the Seagate drive clearly improves relative to the Maxtor drives. As for the influence of segmentation, here, as with writing, we observe an optimum, this time closer to 8 segments (for writing it was closer to 16), and with very high segmentation (64) the disk slows down significantly, just as it does when writing. It is gratifying that Server Mode here keeps track of what the host is doing and shifts the segmentation from about 32 when writing to about 8 when reading.

Now let's see how the disks behave in the "old" but still popular Disk WinMark 99 tests from the WinBench 99 package. Let me remind you that we run these tests not only at the "beginning" but also in the "middle" (by volume) of the physical media, for two file systems, and the diagrams show the averaged results. Of course, these tests are not a "profile" workload for SCSI drives, and we present their results here more as a tribute to the test itself and to those accustomed to judging disk speed by WinBench 99. As a "consolation", note that these tests show, with some degree of confidence, how these enterprise drives perform on tasks more typical of a desktop computer.

Evidently there is a segmentation optimum here as well: with a small number of segments the disk looks unimpressive, and with 32 segments it looks best of all (perhaps this is why the Seagate developers moved the default Desktop Mode setting from 16 to 32 segments). For Server Mode, however, the segmentation is not entirely optimal in office (Business) tasks, while in professional (High-End) tasks the adaptive segmentation is more than successful, noticeably outperforming even the best "constant" segmentation. Apparently the segmentation changes during the test run in response to the command flow, and this yields a gain in overall performance.

Unfortunately, no such "on-the-fly" optimization is observed in the more recent "track"-based complex tests of "desktop" disk performance, PCMark04 and c't H2BenchW.

On both (or rather, on all 10 different) "activity tracks" the Server Mode intelligence is noticeably inferior to the optimal constant segmentation, which for PCMark04 is about 8 segments and for H2BenchW 16 segments.

For both of these tests, 4 cache segments turn out to be highly undesirable, as do 64, and it is hard to say which way Server Mode gravitates in this case.

In contrast to these tests, which are still synthetic (albeit quite close to real life), stands a completely "real" test of disk speed with the Adobe Photoshop scratch file. Here the situation is much simpler: the more segments, the better! And Server Mode almost "caught" this, settling on 32 segments for its work (although 64 would have been slightly better still).

Intel IOmeter benchmarks

Let's move on to tasks more typical of SCSI-drive usage profiles - the operation of various servers (DataBase, File Server, Web Server) and of a workstation (Workstation), using the corresponding patterns in Intel IOmeter version 2003.5.10.

In the database-server simulation Maxtor is the best, while for the Seagate drive Server Mode is the most advantageous, although in fact it comes very close to 32 constant segments (about 220 KB each); either less or more segmentation is worse here. However, this pattern is rather simple in terms of request types, so let's see what happens in more complex patterns.

In the file-server simulation adaptive segmentation again leads, although 16 constant segments lag behind it only negligibly (32 segments are slightly worse here, though still quite adequate). With small segmentation, performance deteriorates at large command queues, and with too large a segmentation (64) any queue depth is contraindicated: apparently the cache segments then become too small (under 111 KB, that is, only about 220 sectors each) to cache reasonably sized chunks of data effectively.

Finally, for the Web server we see an even more amusing picture: at command queue depths greater than one, Server Mode is equivalent to any segmentation level except 64, while at a queue depth of one it is slightly better than all of them.

Geometric averaging of the server loads above across patterns and request queues (without weights) shows that adaptive segmentation is best for such tasks, although 32 constant segments lag only slightly behind and 16 segments also look decent overall. In general, Seagate's choice is quite understandable.
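For those who want to reproduce this kind of averaging, here is a short Python sketch of an unweighted geometric mean over per-pattern, per-queue-depth results; the numbers in the example dictionary are invented purely to show the calculation, not measured values.

```python
from math import prod

# Unweighted geometric mean across server patterns and queue depths, as used
# for the combined rating above. The IO/s figures below are made up solely to
# demonstrate the arithmetic.

results = {
    ("DataBase", 1): 150.0, ("DataBase", 16): 320.0,
    ("FileServer", 1): 140.0, ("FileServer", 16): 300.0,
    ("WebServer", 1): 160.0, ("WebServer", 16): 340.0,
}

def geometric_mean(values) -> float:
    values = list(values)
    return prod(values) ** (1.0 / len(values))

print(f"combined server rating: {geometric_mean(results.values()):.1f}")
```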

As for the "workstation" pattern, Server Mode is clearly the best here.

And the optimum for constant segmentation is 16 segments.

Now for our own IOmeter patterns, which are closer in purpose to desktop PCs, although they are also indicative for enterprise-class storage, since even in "deeply professional" systems hard drives spend the lion's share of their time reading and writing large and small files, and occasionally copying them. And since the nature of accesses in these patterns (random addresses across the entire disk volume) is more typical of server-class systems, their relevance for the disks under study is all the higher.

Reading large files again goes better in Server Mode, apart from an inexplicable dip at QD = 4. For these operations, however, a small number of large segments is clearly preferable for the disk, which in principle is predictable and agrees well with the multi-threaded file reading results above.

Random writing of large files, on the other hand, is still too much for the Server Mode intelligence, and here constant segmentation at the level of 8-16 segments is more advantageous, just as with multi-threaded file writing (see above). Separately, note that heavy segmentation of the cache, at the level of 64 segments, is extremely harmful for these operations. It does turn out to be useful, however, for reading small files at large request queues:

I think this is exactly what Server Mode targets when choosing its adaptive setting: their graphs are very similar.

At the same time, when writing small files at random addresses, 64 segments fail again, and Server Mode is inferior here to constant segmentation at the level of 8-16 segments per cache, although its efforts to find the optimal setting are visible (only with 32-64 segments at a queue depth of 64 did things go wrong ;)).

Copying large files is a clear failure for Server Mode! Here segmentation at the level of 16 is clearly more advantageous (this is the optimum, since 8 and 32 are worse at a queue depth of 4).

As for copying small files, 8, 16 and 32 segments are practically equal here, outperforming 64 segments (oddly enough), while Server Mode misbehaves a little.

Geometric averaging of the results for random reading, writing and copying of large and small files shows that the best result on average is given by constant segmentation with only 4 segments per cache (that is, segments larger than 1.5 MB!), while 8 and 16 segments are roughly equal and barely lag behind 4, and 64 segments are clearly contraindicated. On average, adaptive Server Mode yielded only slightly to constant segmentation; a loss of one percent can hardly be considered noticeable.

It remains to note that in the defragmentation simulation we observe approximate parity among all levels of constant segmentation and a slight advantage (the same 1%) for Server Mode.

And in the streaming read-write pattern with large and small blocks, a small number of segments is slightly more profitable, although here too, oddly enough, the differences between cache configurations are vanishingly small.

Conclusions

Having carried out, in this second part of the review, a more detailed study of the effect of cache segmentation on the performance of the Seagate Cheetah 15K.4 in various tasks, I would like to note that the developers did not name the caching modes as they did for nothing: in Server Mode the segmentation of the cache memory is frequently adapted to the task at hand, and this sometimes brings very good results, especially in "heavy" tasks such as the server patterns in Intel IOmeter, the High-End Disk WinMark 99 test, and random reading of small blocks across the entire disk. At the same time, the segmentation level chosen in Server Mode often turns out to be suboptimal (and the criteria for analyzing the host command flow clearly need further refinement), and then Desktop Mode, with its fixed segmentation of 8, 16 or 32 segments per cache, comes out ahead. Moreover, depending on the type of task it is sometimes more profitable to use 16 or 32, and sometimes 8 or only 4 memory segments! Among the latter are multi-threaded reads and writes (both random and sequential), track tests like PCMark04, and streaming tasks with simultaneous reading and writing. Yet the random-write "synthetics" clearly show that the efficiency of lazy writing (at arbitrary addresses) falls noticeably as the number of segments decreases. There is thus a struggle between two tendencies, and that is why, on average, it is more efficient to use 16 or 32 segments per 8 MB buffer.

If the buffer size is doubled, one can predict that it will still be more profitable to keep the number of segments at 16-32, but thanks to the proportional increase in the capacity of each segment the average performance of the drive may rise noticeably. Apparently even 64-segment caching, which is ineffective in most tasks today, may prove quite useful with a doubled buffer, whereas 4 or even 8 segments would then become inefficient. These conclusions, however, also depend strongly on which block sizes the operating system and applications prefer when talking to the drive and which files are involved; it is quite possible that in a different environment the optimal cache segmentation will shift one way or the other. Well, we wish Seagate success in refining the Server Mode "intelligence", which could to some extent smooth out this system and task dependence by learning to select the optimal segmentation from the host command flow.