
SSD disks for the 1C 8.3 file version. Gilev's test. Transferring the configuration to managed locks

The main purpose of writing this article is to avoid having to repeat the obvious nuances over and over to administrators (and programmers) who have not yet gained experience with 1C.

A secondary goal: if I have made any mistakes, Infostart will point them out to me faster than anyone else.

V. Gilev's test has already become a de facto standard. The author gave quite understandable recommendations on his website, so I will simply present some results and comment on the most likely errors. Naturally, the results on your equipment may differ; this is just a guideline for what the numbers should be and what you can strive for. I want to note right away that changes must be made step by step, and after each step you should check what result it gave.

There are similar articles on Infostart; in the relevant sections I will put links to them (if I miss something, please tell me in the comments and I will add it). So, suppose your 1C is slow. How do you diagnose the problem, and how do you understand who is to blame, the administrator or the programmer?

Initial data:

Test computer, the main guinea pig: HP DL180 G6, 2x Xeon 5650, 32 GB, Intel 362i, Win 2008 R2. For comparison, a Core i3-2100 shows comparable results in the single-threaded test. The hardware was deliberately not the newest; on modern hardware the results are noticeably better.

For testing remote 1C and SQL servers, the SQL server: IBM System 3650 x4, 2x Xeon E5-2630, 32 GB, Intel 350, Win 2008 R2.

To test the 10 Gbit network, Intel X520-DA2 adapters were used.

File version (the database resides in a shared folder on the server, clients connect over the network via the CIFS/SMB protocol). Step-by-step algorithm:

0. Add the Gilev test database to the file server in the same folder as the main databases. We connect from the client computer, run the test. We remember the result.

It is understood that even for old computers from 10 years ago (a Pentium on socket 775), the time from clicking the 1C:Enterprise shortcut to the database window appearing should be less than a minute. (Celeron = slow work.)

If your computer is worse than a Pentium on 775 socket with 1 GB of RAM, then I sympathize with you, and it will be difficult for you to achieve comfortable work on 1C 8.2 in the file version. Consider either upgrading (long overdue) or switching to a terminal (or web, in the case of thin clients and managed forms) server.

If the computer is not worse, then you can kick the administrator. At a minimum, check the operation of the network, antivirus, and HASP protection driver.

If Gilev's test at this stage showed 30 "parrots" and more, but the 1C working base still works slowly - the questions are already for the programmer.

1. As a guideline for how much the client computer itself can "squeeze out", we check the operation of this computer alone, without the network. We put the test database on the local computer (on a very fast disk). If the client computer does not have a decent SSD, a ramdisk is created. For now, the simplest free option is Ramdisk enterprise.

For testing version 8.2, 256 MB of ramdisk is enough, and, most importantly: after restarting the computer with the ramdisk running, 100-200 MB of memory should remain free. Accordingly, without a ramdisk, there should be 300-400 MB of free memory for normal operation.

For testing version 8.3, a 256 MB ramdisk is enough, but more free RAM is needed.

When testing, you need to watch the processor load. In a case close to ideal (ramdisk), the local file-mode 1C loads one processor core while it works. Accordingly, if during testing your processor core is not fully loaded, look for weak points. The influence of the processor on 1C performance has been described elsewhere, a little emotionally but generally correctly. Just for reference: even on a modern Core i3 with a high clock frequency, numbers of 70-80 are quite realistic.

The most common mistakes at this stage:

a) An incorrectly configured antivirus. There are many antiviruses and the settings differ for each; I can only say that with proper configuration neither Dr.Web nor Kaspersky interferes with 1C. With the default settings, about 3-5 parrots (10-15%) can be lost.

b) Performance mode. For some reason few people pay attention to this, yet the effect is the most significant. If you need speed, you must enable it on both client and server computers. (Gilev has a good description. The only caveat: on some motherboards, if Intel SpeedStep is turned off, Turbo Boost cannot be turned on.)

In short, during 1C operation there is a lot of waiting for responses from other devices (disk, network, etc.). While waiting for a response, if the power plan is Balanced, the processor lowers its frequency. The response arrives, 1C (the processor) needs to work, but the first cycles run at the reduced frequency, then the frequency rises, and then 1C again waits for a response from a device. And so on, many hundreds of times per second.

You can (and preferably) enable performance mode in two places:

Through the BIOS. Disable the C1, C1E, Intel C-state (C2, C3, C4) modes. They are named differently in different BIOSes, but the meaning is the same. It takes a while to find, and a reboot is required, but once done you can forget about it. If everything is done correctly in the BIOS, speed will be gained. On some motherboards the BIOS settings can be configured so that the Windows power plan no longer plays a role. (Examples of BIOS setup are given by Gilev.) These settings mainly concern server processors or "advanced" BIOSes; if you have not found them in your system and you do not have a Xeon, that is fine.

Control Panel - Power Options - High performance. The downside: if the computer has not been serviced for a long time, the fans will be louder, it will heat up more and consume more energy. This is the price of performance.
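The same power-plan switch can be done from the command line, which is convenient for applying it to many machines via a script. A minimal sketch (SCHEME_MIN is the standard alias of the built-in High performance plan on Win 7 / 2008 R2; verify the plan list on your system first):

```powershell
# Select and verify the High performance power plan without opening the Control Panel.
powercfg /list                     # show the available power plans and their GUIDs
powercfg /setactive SCHEME_MIN     # switch to the built-in High performance plan
powercfg /getactivescheme          # confirm which plan is now active
```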

How to check that the mode is enabled: run Task Manager - Performance - Resource Monitor - CPU and wait until the processor is idle.

Screenshots of the CPU frequency in Resource Monitor for the three cases:

- Default settings: BIOS C-state enabled, balanced power mode.
- BIOS C-state enabled, high performance mode. For Pentium and Core you can stop here; a Xeon can still be squeezed for a few more "parrots".
- BIOS C-state off, high performance mode. If you do not use Turbo Boost, this is how a server tuned for performance should look.
And now the numbers. Let me remind you: Intel Xeon 5650, ramdisk. In the first case the test shows 23.26, in the last one 49.5. The difference is almost twofold. The numbers may vary, but the ratio stays roughly the same for Intel Core.

Dear administrators, you can scold 1C as you like, but if end users need speed, you must enable high performance mode.

c) Turbo Boost. First you need to find out whether your processor supports this feature. If it does, then you can still quite legitimately get some extra performance. (I do not want to touch on overclocking, especially of servers; do that at your own risk. But I agree that raising the bus speed from 133 to 166 gives a very noticeable increase in both speed and heat dissipation.)

How to turn on Turbo Boost has been written about elsewhere. But! For 1C there are some nuances (not the most obvious ones). The difficulty is that the maximum effect of Turbo Boost appears when C-state is turned on. And you get something like this picture:

Note that the multiplier is at maximum, the core speed looks great, the performance is high. But what will happen in 1C as a result?

Table columns: multiplier; core speed (frequency), GHz; CPU-Z single thread; Gilev ramdisk test, file version; Gilev ramdisk test, client-server.

- Without turbo boost: Gilev file version 49.5 (see above).
- C-state off, turbo boost: Gilev file version 53.19, client-server 40.32.
- C-state on, turbo boost: CPU-Z single thread 1080, Gilev file version 53.13, client-server 23.04.

But in the end it turns out that according to CPU performance tests the variant with a multiplier of 23 is ahead, according to Gilev's test in the file version the performance with multipliers 22 and 23 is the same, but in the client-server version the variant with a multiplier of 23 is dramatically worse (even if C-state is set to level 7, it is still slower than with C-state turned off). Hence the recommendation: check both options for yourself and choose the better one. In any case, the difference between 49.5 and 53 parrots is quite noticeable, especially since it comes without much effort.

Conclusion: Turbo Boost must be enabled. Let me remind you that it is not enough to enable the Turbo Boost item in the BIOS; you also need to check the other settings (BIOS: QPI L0s, L1 - disable; demand scrubbing - disable; Intel SpeedStep - enable; Turbo Boost - enable. Control Panel - Power Options - High performance). And I would still (even for the file version) settle on the option where C-state is turned off, even though the multiplier is lower there. You get something like this...

A rather controversial point is the memory frequency. For example, the memory frequency is shown as very influential. My tests did not reveal such dependence. I will not compare DDR 2/3/4, I will show the results of changing the frequency within the same line. The memory is the same, but in the BIOS we force lower frequencies.




And test results. 1C 8.2.19.83, for the file version local ramdisk, for client-server 1C and SQL on one computer, Shared memory. Turbo boost is disabled in both options. 8.3 shows comparable results.

The difference is within the measurement error. I specifically pulled out the CPU-Z screenshots to show that other parameters change with the frequency change, the same CAS Latency and RAS to CAS Delay, which levels out the frequency change. The difference will be when the memory modules physically change, from slower to faster, but even there the numbers are not very significant.

2. Once we have sorted out the processor and memory of the client computer, we move on to the next very important place: the network. Many volumes have been written about network tuning and there are articles on Infostart, so I will not focus on this topic here. Before starting to test 1C, please make sure that iperf between the two computers shows the full bandwidth (for 1 Gbit cards: at least 850 Mbit, but preferably 950-980) and that Gilev's advice has been followed. Then the simplest test of operation is, oddly enough, copying one large file (5-10 GB) over the network. An indirect sign of normal operation on a 1 Gbit network is an average copy speed of 100 MB/s; good operation is 120 MB/s. I want to draw your attention to the fact that the processor load can also be a weak point. The SMB protocol on Linux is rather poorly parallelized; during operation it can quite easily eat up one processor core and go no further.
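A minimal sketch of such a check from a client machine, assuming iperf3 is installed on both sides and the server share is already reachable (the host name, share name and file name below are placeholders):

```powershell
# On the server side, start the listener first:  iperf3 -s
# On the client: measure raw TCP bandwidth to the file server.
iperf3 -c fileserver -t 30 -P 4        # 30 seconds, 4 parallel streams

# Then measure real file-copy throughput of a large file over SMB.
$copy = Measure-Command { Copy-Item C:\temp\big10gb.bin \\fileserver\share\big10gb.bin }
"{0:N0} MB/s" -f (10240 / $copy.TotalSeconds)   # 10 GB file converted to MB/s
```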

And one more thing. With default settings a Windows client works best with a Windows server (or even a Windows workstation) over SMB/CIFS, while a Linux client (Debian, Ubuntu; I did not look at others) works best with Linux and NFS (it also works over SMB, but NFS gives more parrots). The fact that a linear copy from Windows to a Linux NFS server runs faster in a single stream does not mean anything. Tuning Debian for 1C is a topic for a separate article and I am not ready for it yet, although I can say that in the file version I even got slightly better performance than the Windows version on the same hardware, but with Postgres and more than 50 users everything is still very bad for me.

The most important thing, which "seasoned" administrators know but beginners overlook: there are many ways to set the path to the 1C database. You can use \\server\share, you can use \\192.168.0.1\share, you can do net use z: \\192.168.0.1\share (and in some cases this method will also work, but not always) and then specify drive Z. All these paths seem to point to the same place, but for 1C there is only one way that gives reasonably stable performance. So here is what you need to do:

On the command line (or in group policy, or however it suits you), run net use DriveLetter: \\server\share. Example: net use m: \\server\bases. I specifically emphasize: NOT the IP address, but the server NAME. If the server is not visible by name, add it to DNS on the server or to the local hosts file; the access must be by name. Accordingly, in the path to the database, use this drive letter (see the picture).
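A short sketch of the same mapping, with a quick name-resolution check first (the drive letter, server and share names are placeholders):

```powershell
ping -n 1 server                           # make sure the NAME resolves (DNS or hosts file)
net use M: \\server\bases /persistent:yes  # map the share by name, keep it after reboots
net use                                    # verify the mapping; then point the 1C base at M:\
```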

And now I will show in numbers why this advice matters. Initial data: Intel X520-DA2, Intel 362, Intel 350, Realtek 8169 cards. OS: Win 2008 R2, Win 7, Debian 8. Latest drivers, updates applied. Before testing, I made sure that iperf gives the full bandwidth (except for the 10 Gbit cards, where I could squeeze out only 7.2 Gbit; I will look into why later, the test server is not yet configured properly). The disks are different, but everywhere it is an SSD (a single disk was inserted specifically for the test, nothing else loads it) or a RAID of SSDs. The 100 Mbit speed was obtained by limiting the Intel 362 adapter in its settings. There was no difference between 1 Gbit copper (Intel 350) and 1 Gbit optics (Intel X520-DA2 with the adapter speed limited). Maximum performance power plan, Turbo Boost off (purely for comparability of results; Turbo Boost adds a little less than 10% to good results and may not affect bad ones at all). 1C versions 8.2.19.86 and 8.3.6.2076. I do not give all the numbers, only the most interesting ones, so that there is something to compare with.

Columns (client OS - server OS, how the path to the database is specified):
1. Win 2008 - Win 2008, by IP address
2. Win 2008 - Win 2008, by name
3. Win 2008 - Win 2008, by IP address
4. Win 2008 - Win 2008, by name
5. Win 2008 - Win 7, by name
6. Win 2008 - Debian, by name
7. Win 2008 - Win 2008, by IP address
8. Win 2008 - Win 2008, by name

1C 8.2 (8.2.19.83), three runs:
11.20  26.18  15.20  43.86  40.65  37.04  16.23  44.64
11.29  26.18  15.29  43.10  40.65  36.76  15.11  44.10
12.15  25.77  15.15  43.10  -      -      14.97  42.74

1C 8.3 (8.3.6.2076), three runs:
6.13   34.25  14.98  43.10  39.37  37.59  15.53  42.74
6.61   33.33  15.58  43.86  40.00  37.88  16.23  42.74
-      33.78  15.53  43.48  39.37  37.59  -      42.74

Conclusions (from the table, and from personal experience. Applies only to the file version):

Over the network you can get quite decent numbers if the network is configured properly and the path is written correctly in 1C. Even the first Core i3s can easily give 40+ parrots, which is quite good, and these are not just parrots: in real work the difference is also noticeable. But! With several (more than 10) users the limitation will no longer be the network (1 Gbit is still enough there) but locking during multi-user work (see Gilev).

The 1C 8.3 platform is many times more demanding about competent network setup. Basic settings: see Gilev, but keep in mind that anything can have an influence. I have seen speedups from uninstalling (not just disabling) the antivirus, from removing protocols like FCoE, from changing drivers to an older but Microsoft-certified version (especially for cheap cards like ASUS and D-Link), and from removing the second network card from the server. There are a lot of options; configure the network thoughtfully. There may well be a situation where platform 8.2 gives acceptable numbers and 8.3 gives two or more times less. Try playing with 8.3 platform versions; sometimes you get a very big effect.

1C 8.3.6.2076 (maybe later builds too, I have not looked for the exact version) is still easier to set up over the network than 8.3.7.2008. I managed to achieve normal network operation with 8.3.7.2008 (in comparable parrots) only a few times and could not reproduce it for the general case. I did not dig very deep, but judging by the long traces from Process Explorer, writes there do not happen the way they do in 8.3.6.

Even though the load graph of a 100 Mbit network stays low during work (you could say the network is idle), the speed of work is still much lower than on 1 Gbit. The reason is network latency.

Other things being equal (a well-functioning network), for 1C 8.2 an Intel-Realtek link is 10% slower than Intel-Intel. But Realtek-Realtek can give sharp slowdowns out of the blue. Therefore, if there is money, it is better to have Intel network cards everywhere; if not, put Intel at least on the server (hello, Captain Obvious). There are also many more instructions available for tuning Intel network cards.

Default antivirus settings (for example, Dr.Web version 10) take away about 8-10% of the parrots. If configured properly (allow the 1cv8 process to do everything, although this is not safe), the speed is the same as without the antivirus.

Do NOT blindly follow Linux gurus. A server with Samba is great and free, but if you put Win XP or Win 7 (or better, a server OS) on the server, the 1C file version will work faster. Yes, Samba, the protocol stack, the network settings and much more in Debian/Ubuntu can be tuned well, but that is a job for specialists. There is no point in installing Linux with default settings and then saying it is slow.

It is a good idea to test disks connected via net use with fio (see the sketch after these conclusions). At least it will be clear whether the problems are with the 1C platform or with the network/disk.

For a single-user variant, I can’t think of tests (or a situation) where the difference between 1Gb and 10 Gb would be visible. The only place where 10Gbps for the file version gave better results was connecting disks via iSCSI, but this is a topic for a separate article. Still, I think that 1 Gbit cards are enough for the file version.

Why, with a 100 Mbit network, 8.3 works noticeably faster than 8.2 - I don’t understand, but the fact took place. All other equipment, all other settings are exactly the same, just in one case 8.2 is tested, and in the other - 8.3.

Untuned NFS (Windows-Windows or Windows-Linux) gives 6 parrots; I did not include it in the table. After tuning I got 25, but it is unstable (the spread between measurements is more than 2 units). For now I cannot give recommendations on using Windows with the NFS protocol.
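The fio check mentioned above, as a minimal sketch: a small mixed random-I/O job against a file on the mapped network drive (the drive letter, file name, block size and mix are placeholders; adjust them to your setup):

```powershell
# fio for Windows, run against the drive mapped with "net use M: \\server\bases".
# A 70/30 random read/write mix with 4 KB blocks roughly resembles file-mode 1C access.
fio --name=netdisk --filename=M\:\fio_test.bin --size=1G `
    --ioengine=windowsaio --direct=1 `
    --rw=randrw --rwmixread=70 --bs=4k --iodepth=8 `
    --runtime=60 --time_based --group_reporting
```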

After all the settings and checks, we run the test again from the client computer and rejoice at the improved result (if there is one). If the result has improved to more than 30 parrots (and especially more than 40), fewer than 10 users work simultaneously, and the working database still slows down, it is almost certainly the programmer's problem (or you have already hit the ceiling of the file version).

Terminal server (the database resides on the server, clients connect over the network via the RDP protocol). Step-by-step algorithm:

0. Add the Gilev test database to the server in the same folder as the main databases. We connect from the same server and run the test. We remember the result.

1. In the same way as in the file version, we set up the work. In the case of a terminal server, the processor generally plays the main role (it is understood that there are no obvious weaknesses, such as lack of memory or a huge amount of unnecessary software).

2. Setting up network cards has practically no effect on 1C operation in the case of a terminal server. For "extra" comfort, if your server gives out more than 50 parrots, you can play with newer versions of the RDP protocol, purely for user comfort, faster response and smoother scrolling.

3. With a large number of actively working users (and here you can already try to connect 30 people to one database), installing an SSD is highly desirable. For some reason it is believed that the disk does not particularly affect 1C, but all such tests are carried out with the controller write cache enabled, which is wrong. The test database is small, it fits into the cache, hence the high numbers. On real (large) databases everything will be completely different, so the cache is disabled for these tests.

For example, I ran the Gilev test with different disk options. I used the disks that were at hand, just to show the tendency. The difference between 8.3.6.2076 and 8.3.7.2008 is small (in the ramdisk + Turbo Boost variant 8.3.6 gives 56.18 and 8.3.7.2008 gives 55.56; in other tests the difference is even smaller). Power plan: maximum performance, Turbo Boost disabled (unless stated otherwise).

Columns:
1. RAID 10, 4x SATA 7200 (ATA ST31500341AS)
2. RAID 10, 4x SAS 10k
3. RAID 10, 4x SAS 15k
4. Single SSD
5. Ramdisk
6. Ramdisk, turbo boost
7. RAID controller with write cache enabled

1C 8.2 (8.2.19.83), three runs:
21.74  28.09  32.47  49.02  50.51  53.76  49.02
21.65  28.57  32.05  48.54  49.02  53.19  -
21.65  28.41  31.45  48.54  49.50  53.19  -

1C 8.3 (8.3.7.2008), three runs:
33.33  42.74  45.05  51.55  52.08  55.56  51.55
33.46  42.02  45.05  51.02  52.08  54.95  -
35.46  43.01  44.64  51.55  52.08  56.18  -

The enabled RAID controller cache eliminates all the difference between the disks; the numbers are the same for SATA and SAS. Testing with it on a small amount of data is useless and not representative.

For the 8.2 platform, the performance difference between the SATA and SSD options is more than twofold. This is not a typo. If you look at the performance monitor during the test on SATA drives, "Active disk time (%)" of 80-95 is clearly visible. Yes, if you enable the write cache of the disks themselves the speed rises to 35, and with the RAID controller cache up to 49 (regardless of which disks are being tested at the moment). But these are synthetic cache parrots; in real work with large databases there will never be a 100% write cache hit ratio.

The speed of even cheap SSDs (I tested on an Agility 3) is enough for the file version. Write endurance is another matter; you need to look at each specific case. Obviously an Intel 3700 will have an order of magnitude more endurance, but the price is correspondingly higher. And yes, I understand that when testing an SSD I am also largely testing its cache; real results will be lower.

The most correct (from my point of view) solution would be to allocate 2 SSD disks to a mirror raid for the file base (or several file bases), and not put anything else there. Yes, with a mirror, SSDs wear out the same way, and this is a minus, but at least they are somehow insured against errors in the controller electronics.

The main advantages of SSD disks for the file version will appear when there are many databases, and each with several users. If there are 1-2 bases, and users in the region of 10, then SAS disks will be enough. (but in any case - look at the loading of these disks, at least through perfmon).

The main advantages of a terminal server are that it can have very weak clients, and network settings affect the terminal server much less (hello again, Captain Obvious).

Conclusions: if you run the Gilev test on the terminal server (from the same disk where the working databases are located) at the moments when the working database is slow, and the Gilev test shows a good result (above 30), then the slow operation of the main working database is most likely the programmer's fault.

If the Gilev test shows small numbers, and you have both a processor with a high frequency and fast disks, then here the administrator needs to take at least perfmon, and record all the results somewhere, and watch, observe, draw conclusions. There will be no definitive advice.

Client-server option.

Tests were carried out only on 8.2, because on 8.3 everything depends quite heavily on the version.

For testing, I chose different server options and networks between them to show the main trends.

Configurations tested: SQL server on a Xeon E5-2630 with Fibre Channel SSD, Fibre Channel SAS or a local SSD; 1C server on a Xeon 5650; including a variant with 1C and SQL on one server (shared memory).

1C 8.2, three runs (parrots):
16.78  18.23  16.84  28.57  27.78  32.05  34.72  36.50  23.26  40.65  39.37
17.12  17.06  14.53  29.41  28.41  31.45  34.97  36.23  23.81  40.32  39.06
16.72  16.89  13.44  29.76  28.57  32.05  34.97  36.23  23.26  40.32  39.06

It seems that I have considered all the interesting options, if you are interested in something else - write in the comments, I will try to do it.

SAS on the external storage system is slower than local SSDs, even though the storage system has large caches. SSDs, both local and in the storage system, work at comparable speeds in the Gilev test. I do not know of a standard multi-threaded test (one that loads not just writes but all the hardware) other than the 1C load test from the MCC.

Changing the 1C server from 5520 to 5650 gave almost a doubling of performance. Yes, the server configurations do not match completely, but it shows a trend (nothing surprising).

Increasing the frequency on the SQL server, of course, gives an effect, but not the same as on the 1C server, MS SQL Server is perfectly able (if you ask it) to use multi-core and free memory.

Changing the network between 1C and SQL from 1 Gbps to 10 Gbps gives about 10% of parrots. Expected more.

Enabling shared memory still gives an effect, although not the 15% described elsewhere. Be sure to do it, it is quick and easy. If the SQL server was given a named instance during installation, then for 1C to work the server name must be specified not by FQDN (tcp/ip will work then), not through localhost or just ServerName, but as ServerName\InstanceName, for example zz-test\zztest. (Otherwise you get a DBMS error: Microsoft SQL Server Native Client 10.0: Shared Memory Provider: The shared memory library used to connect to SQL Server 2000 was not found. HRESULT=80004005, HRESULT=80004005, HRESULT=80004005, SQLSrvr: SQLSTATE=08001, state=1, Severity=10, native=126, line=0.)
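To confirm that connections really go over shared memory rather than TCP/IP, you can query the connection DMV with sqlcmd; a minimal sketch (the server\instance name below reuses the example above and is a placeholder):

```powershell
# Shows the transport of the current connection: expect "Shared memory"
# when sqlcmd and SQL Server run on the same machine and SHM is enabled.
sqlcmd -S zz-test\zztest -E -Q "SELECT net_transport FROM sys.dm_exec_connections WHERE session_id = @@SPID;"
```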

With fewer than 100 users, the only reason to split 1C and SQL onto two separate servers is a Win 2008 Std license (or older versions), which supports only 32 GB of RAM. In all other cases 1C and SQL should definitely be installed on the same server and given more memory (at least 64 GB). Giving MS SQL less than 24-28 GB of RAM is unjustified greed (if you think it has enough memory and everything works fine, maybe the 1C file version would have been enough for you?).

How much worse a bunch of 1C and SQL works in a virtual machine is the topic of a separate article (hint - noticeably worse). Even in Hyper-V, things are not so clear...

Balanced performance mode is bad. The results are in good agreement with the file version.

Many sources say that debug mode (ragent.exe -debug) strongly reduces performance. Well, it does reduce it, but I would not call 2-3% a significant effect.

Designing a server for the needs of "1C:Enterprise 8" for medium and large businesses

The material is intended for technical specialists designing server solutions for the needs of 1C:Enterprise 8 with a load of 25-250 users or more. The issues of assessing the required performance by server components are considered, taking into account extreme workload cases, the impact of virtualization. The issues of building a fault-tolerant corporate infrastructure for large enterprises will be discussed in the following material.

Estimation of the required equipment performance.

To select equipment, at least a preliminary assessment of the need for CPU, RAM, disk subsystem and network interface resources is required.
There are two ways to consider here:
a) Experimental, which allows you to obtain objective data on the load on current equipment and identify bottlenecks;
b) Estimated, which allows you to make an assessment based on empirically obtained averaged data.
The most effective is the joint use of both methodologies.

  1. Load monitoring, evaluating results, finding bottlenecks and generating requirements

Why is it important to do a load analysis if you already have a running system?
Here it would be most correct to compare with medicine. When a patient comes to the doctor, first an examination is carried out, tests are prescribed, then the whole complex of available information is evaluated and treatment is prescribed. The situation is exactly the same when designing a server.
Having made efforts to measure the load parameters and analyze the results, we will receive the best match of the designed server to our tasks as a reward. The final result will be - significant savings, both initial costs and operating costs in the future.

We will evaluate server performance within the framework of the main subsystems: central processors, RAM, disk I / O subsystem and network interfaces. In the Windows environment, there is a standard Windows Performance Monitor (perfmon) toolkit for evaluating the computational load. Other systems have their own equivalent evaluation tools.
In general, the load on each subsystem is highly dependent on the applications and data types they work with. For the block of applications related to 1C, the most critical are the CPU, RAM, and for the SQL server also the disk subsystem. When spread across multiple servers, the network interface is also critical. We will work only with those parameters that are important to us from the point of view of the applied task.
Data for analysis must be collected over at least 24 hours of a typical working day. Ideally, accumulate data for three typical working days. To find bottlenecks, it is desirable to take data from the day with the greatest load.
Everything described below will come in handy, both at the stage of preparing for the design of a new server (for setting a task for the supplier), and then during operation, for an objective assessment of changes in equipment parameters and possible further “tuning” of the software and hardware complex under “1C: Enterprise 8” generally.

CPU. We are most interested in one counter: "Processor: % Processor Time". Microsoft says the following about it: "This counter tracks the time that the CPU spends executing a thread while it is running. Constant CPU utilization levels between 80% and 90% may indicate the need to upgrade the CPU or to add processors." Thus, if the CPU load is on average at 70-80%, this is the optimal balance between efficient use of CPU resources and headroom for peak periods. Less means the system is underloaded. More than 80% is a risk zone; at 90% the system is overloaded, and you need to either distribute the load to other hosts or move to a new, more powerful server.

CPU analysis. For modern processors it makes sense first to find out how many cores you need. Windows itself distributes the load between cores quite effectively, and except for rare cases where there is an explicit core affinity at the software level, all processor cores will be loaded more or less evenly. In general, if "% Processor Time" is within 50-70%, everything is fine and there is headroom. If it is below 50%, your system already has an excess of cores; you can reduce their number or load the server with other tasks. An average load of 80% or more means your system needs more cores.

RAM. It makes sense to track two counters here:
"Memory: Available MBytes". On a normally operating system this counter should be at least 10% of the physical memory installed in the server. If the amount of available memory is too low, the system is forced to push active processes into the page file. The result is noticeable delays, up to the effect of the system "freezing".
"Memory: % Committed Bytes In Use". A high value of this counter indicates a heavy load on RAM. It is highly desirable to keep this parameter below 90%, because at 95% there is a chance of an OutOfMemory error.

RAM analysis. The key parameter is the amount of available RAM on the server, which the counters above let you monitor quite effectively.
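A minimal perfmon-style sketch in PowerShell for sampling the CPU and RAM counters discussed above over a short interval (the counter paths are the standard English ones; on a localized Windows they may differ, and the target folder is assumed to exist):

```powershell
# Sample the key CPU and RAM counters every 15 seconds, 20 samples (about 5 minutes),
# and save them to a .blg file that can be opened later in perfmon.
$counters = '\Processor(_Total)\% Processor Time',
            '\Memory\Available MBytes',
            '\Memory\% Committed Bytes In Use'
Get-Counter -Counter $counters -SampleInterval 15 -MaxSamples 20 |
    Export-Counter -Path C:\perflogs\baseline.blg -FileFormat blg
```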

disk subsystem. Very often, questions about the performance of 1C:Enterprise 8 are related to insufficient performance of the disk subsystem. And it is here that we have quite a lot of opportunities to optimize equipment for the task. Therefore, we will pay maximum attention to the analysis of disk subsystem counters.

  1. "% Free Space" is the percentage of free space on the logical drive. If less than 15% of the disk is free, it is considered overloaded, and further analysis of it will most likely not be entirely correct: it will be strongly affected by data fragmentation on the disk. The recommended amount of free space on a server disk is at least 20%, preferably more for SSDs.
  2. "Avg. Disk sec/Transfer" is the average disk access time. The counter shows the average time in milliseconds required for one disk I/O operation. For lightly loaded systems (for example, file storage, VM storage) it is desirable to keep its value within 25-30 ms. For heavily loaded servers (SQL), it should not exceed 10 ms. Large counter values indicate that the disk subsystem is overloaded. This is an aggregate indicator that needs more detailed analysis: which operations (reads or writes) dominate, and in what proportion, is shown by the counters Avg. Disk sec/Read (average disk read time) and Avg. Disk sec/Write (average disk write time).
    The aggregate Avg. Disk sec/Transfer on RAID5/RAID6, with a significant predominance of reads, may be within the normal range while writes are performed with significant delays.
    3. Avg. Disk Queue Length (average disk queue length) is also an aggregate indicator, composed of Avg. Disk Read Queue Length (average read queue length) and Avg. Disk Write Queue Length (average write queue length). It tells how many I/O operations are, on average, waiting for the disk to become available. This is not a measured value but one calculated by Little's law from queuing theory as N = A * Sr, where N is the number of pending requests in the system, A is the request arrival rate, and Sr is the response time. For a normally working disk subsystem this indicator should not exceed the number of disks in the RAID group by more than 1. In SQL Server class applications it is desirable to keep its average value below 0.2.
    4. Current Disk Queue Length (current disk queue length) shows the number of outstanding and pending requests addressed to the selected disk. This is a momentary value, not an average over a period of time. The delay in processing requests to the disk subsystem is proportional to the queue length. For comfortable operation in steady state, the number of pending requests should not exceed the number of physical disks in the array by more than 1.5-2 times (assuming that in an array of several disks each disk can pick one request from the queue at a time).
    5. Disk Transfers/sec (disk accesses per second) is the number of individual disk I/O requests completed in one second. It shows the real needs of applications for random reads and writes to the disk subsystem. As an indicator summarizing several individual counters, it allows a quick assessment of the overall situation.
    6. Disk Reads/sec is the number of read accesses per second, that is, the frequency of disk read operations. The most important parameter for SQL Server class applications, which determines the actual performance of the disk subsystem.
    In normal steady-state operation, the access rate should not exceed the physical capabilities of the disks: their individual limits multiplied by the number of disks in the array.

100-120 IOPS per SATA or NL SAS drive;

200-240 IOPS per SAS 15000 rpm disk;

65,000 IOPS per Intel SSD class s3500 series (SATA);

    7. Disk Writes/sec is the number of write accesses per second, that is, the frequency of disk write operations. An extremely important parameter for SQL Server class applications. In normal operation the access rate should not exceed the physical limits of the disks multiplied by the number of disks in the array, taking into account the write penalty of the selected RAID type (see the sketch after the reference numbers below).

80-100 IOPS per SATA or NL SAS drive;

180-220 IOPS per SAS disk;
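As an illustration of how these reference numbers combine with the RAID write penalty, here is a small sketch estimating what IOPS an array can sustain at a given read/write mix (the disk count, per-disk figure and the 70/30 mix are example assumptions, not measurements):

```powershell
# Example: RAID10 of 8 SAS 15k disks, 70% reads / 30% writes.
$disks        = 8
$diskIops     = 200      # rough random IOPS of one SAS 15k disk (see the reference numbers above)
$writePenalty = 2        # RAID10/RAID1 write penalty; RAID5 = 4, RAID6 = 6
$readShare    = 0.7

# Standard sizing approximation: every front-end write turns into
# $writePenalty back-end operations, every read into one.
$frontEndIops = ($disks * $diskIops) / ($readShare + (1 - $readShare) * $writePenalty)
"Approximate sustainable front-end IOPS at a 70/30 mix: {0:N0}" -f $frontEndIops
```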

Table 1 - CPU parameters for working with RAM (fragment): a 2.20 GHz CPU supports DDR4 1600/1866/2133; a 3.50 GHz CPU supports DDR4 1600/1866/2133/2400.

RAM. The type of memory installed affects the performance of the whole server. For example, LR DIMM, due to its architecture, always has higher latency than regular DDR4 RDIMM memory, especially on the relatively short queries typical of SQL when working with 1C. Given the higher latency and significantly higher price, it makes sense to install LR DIMMs only when the required amount of RAM cannot be reached with RDIMMs.
Similarly, DDR4 2400 will run slightly faster than DDR4 2133 - if the CPU supports high frequencies.

Network interface. Here it is advisable to follow simple rules:
a) The server must have at least three network interfaces of 1Gb Ethernet or faster (10Gb, 40Gb), at least two of them on server-grade network chips. Other things being equal, preference should be given to a 10Gb Ethernet infrastructure, especially given the vanishingly small difference in equipment price (10Gb network cards and 10Gb ports on 1Gb/10Gb switches).
b) The server must support some KVM-over-IP technology for remote management.
Among the subtleties, one can note the very good support for virtualization by all Intel server network chips and their ability to distribute the load effectively between CPU cores on 10Gb+ links.

Disk subsystem :

The disk subsystem consists of two components:
- input/output subsystem in the form of SAS HBA controllers and RAID controllers;
- storage devices, or in our case - SSD and HDD disks.

RAID.
For OS and database storage tasks, as a rule, RAID 1 or RAID 10 is used, as well as their various software counterparts.

1. Fully software RAID (soft RAID) by means of Windows Server cannot be used for the boot drive, but it is quite suitable for storing the DB, tempDB and the SQL log. Windows Storage Spaces technology provides fairly high reliability and performance, and also offers a number of additional features, the most interesting of which, as applied to 1C tasks, is tiered storage. Its advantage is that the system automatically places the most frequently requested data on the SSD tier.
With regard to 1C tasks, they usually use either an All-Flash array of SSDs, or for very large (1TB and above) and multi-year databases - Tiered storage.
One of the benefits of Windows Storage Spaces technology is its ability to create RAID on NVMe drives.

2. For hosting the OS, a hardware-software RAID1 built on the Intel chipset and Intel Rapid Storage Technology (Intel RST) is effective.
In it, I/O operations are performed at the hardware level by the motherboard chipset, practically without using CPU resources, while the array is managed at the software level by Windows drivers.
Like any compromise solution, Intel RST has some drawbacks.
a) The operation of Intel RST depends on drivers loaded into the operating system. This carries a potential risk that after a driver or OS update the RAID volume may become unavailable. It is extremely unlikely, because Intel and Microsoft test their software very well, but it cannot be ruled out.
b) Based on experiments, indirect evidence suggests that the Intel RST driver model uses RAM for write caching. This gives a performance boost, but it also carries some risk of data loss if the server loses power unexpectedly.
This solution also has advantages.
One of them is consistently very high performance, on par with, and sometimes better than, fully hardware RAID controllers.
The second is support for hardware-software RAID1 on NVMe drives (though not for boot drives at the time of writing). And here lies an interesting feature for those with heavily loaded disk subsystems: unlike Windows Storage Spaces, which loads the core occupied with I/O to almost 100%, Intel RST connects the next core to the I/O process when the current core's load reaches roughly 70%. The result is a more even load across CPU cores and slightly better performance under high load.

Figure 4 - CPU Utilization Windows Storage Spaces vs. Intel RST

3. A fully hardware RAID for a server with 2-6 SSDs in RAID 1 can be built with a SAS HBA on the LSI SAS 3008 chipset, with a special "IR" firmware installed in it. An excellent choice here is the Intel RAID Controller RS3WC080, which ships with the required firmware, supports the SAS 3.0 standard (12 Gb/s) and costs about $300.
The point of this solution is that server SSDs do not need a write cache; moreover, more advanced RAID controllers also disable their onboard write cache when working with SSDs. Thus an HBA without a cache, working in RAID mode, copes quite successfully with high-speed writes and reads directly to and from the SSDs and provides quite decent performance.

4. For high-load servers with a large number of SAS or SATA SSD drives, it is desirable to install a full-fledged RAID controller of the Intel® RAID Controller RS3MC044 or Adaptec RAID 8805 class. They have more efficient I / O processors and advanced algorithms for working with HDD and SSD disks, including those that allow you to speed up the assembly of the array after replacing a failed disk.

Storage devices (SSD and HDD).
a) SSD and HDD reliability.
The theoretical reliability of disks is usually evaluated by the parameter "Non-recoverable Read Errors per Bits Read", which can be translated as the probability of an unrecoverable read error per number of bits read. It shows after how much data read from the disk, statistically, an unrecoverable error should be expected.
Another important parameter, AFR (annualized failure rate), shows the probability of disk failure per year.
The table below shows data for typical drives: SATA Enterprise HDD 7200 rpm (SATA RAID Edition), SAS HDD Enterprise 15,000 rpm, and SATA SSD Enterprise.

Table 2 - Theoretical reliability of HDD and SSD. Columns: Enterprise SATA/SAS NL 7200 rpm; Enterprise SAS 15,000 rpm (10,000 rpm); Enterprise SATA SSD. Rows: Non-recoverable Read Errors per Bits Read; the volume of reads that is statistically expected to produce one unrecoverable error.

For an Enterprise SATA SSD of the Intel SSD DC S3510 Series class, this error probability is 10 times lower than for an Enterprise SAS 15,000 rpm HDD and 100 times lower than for an Enterprise SATA 7200 rpm HDD. Thus, Enterprise-class SSDs are, in theory, more reliable than any HDD.

b) SSD and HDD performance.
From the point of view of a database, which is what 1C essentially is, only three disk parameters matter most:
- latency, or disk response time, measured in microseconds (less is better);
- the number of read operations per second (Disk Reads/sec), measured in IOPS (more is better);
- the number of write operations per second (Disk Writes/sec), measured in IOPS.
Let's summarize these three parameters in one table:

Table 3 - HDD and SSD performance. Columns: Enterprise SATA/SAS NL 7200 rpm; Enterprise SAS 15,000 rpm (10,000 rpm); Enterprise SATA SSD; Enterprise NVMe SSD. Rows: Latency (disk read/write response time), microseconds; Disk Reads/sec (number of read operations per second), IOPS; Disk Writes/sec (number of write operations per second), IOPS.

As the table clearly shows, an NVMe SSD (for example, the Intel SSD DC P3600 Series) outperforms an Enterprise SAS HDD roughly 100-fold in latency, and in I/O operations per second roughly 200-fold for writes and 1500-fold for reads.
Is it reasonable to use HDD technology for hosting databases at all?

c) Overwrite volume per day for server SSDs.
In addition to perks such as a supercapacitor to survive a power outage and hardware encryption modules, server SSDs have a most important parameter: the rated amount of rewriting per day, expressed as a fraction of the total SSD capacity. For Intel server SSDs this means rewriting that volume daily for 5 years, which is covered by the warranty. This parameter lets you sort SSDs into "read-oriented", "mixed read-write" and "heavy-write-oriented" models. In tabular form it looks like this:

Table 4 - SSD overwrite volume per day. Columns: Intel SSD drive model; overwrite per day (as a fraction of capacity).

Accordingly, you can correctly select disks specifically for the task in the server.
For example, Intel SSD s3510 is enough to store OS and SQL log.
For DB and tempDB storage, Intel SSD s3610 or Intel SSD s3710 are more suitable.
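A small sketch of how this "overwrites per day" figure translates into total write endurance, assuming the usual definition of drive-writes-per-day over a 5-year warranty (the capacity and DWPD values below are examples, not quotes from the table):

```powershell
# Total bytes written over the warranty = DWPD * capacity * 365 days * warranty years.
$capacityGB    = 400     # example drive capacity
$dwpd          = 3       # example: a "heavy write" class drive
$warrantyYears = 5
$tbw = $dwpd * $capacityGB * 365 * $warrantyYears / 1000   # terabytes written
"Endurance: about {0:N0} TB written over {1} years" -f $tbw, $warrantyYears
```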

Examples of designing disk subsystems.
Armed with the above, let's assemble several disk subsystems for various requirements.
a) Server for 45 users, DB - 15 GB, growth per year - 4 GB, tempDB - 0.5 GB, SQL log - 2 GB.
Here it is economically justified to install a RAID1 of two Intel SSD s3510 240 GB disks for the OS and SQL log, and a RAID1 of two Intel SSD s3510 120 GB disks for the DB and tempDB. The on-board Intel Rapid Storage Technology (Intel RST) is suitable as the RAID controller.
b) Server for 100 users, DB - 55 GB, growth per year - 15 GB, tempDB - 4 GB, SQL log - 8 GB.
For such a server, you can offer RAID1 of two Intel SSD s3510 240 GB disks for OS and SQL Log needs, and RAID1 of two Intel SSD s3610 200 GB disks for DB and tempDB needs. As a RAID controller, the Intel® RAID Controller RS3WC080 (simple hardware, no cache) is optimal.
c) Server for 200 users, DB - 360 GB, growth per year - 70 GB, tempDB - 24 GB, SQL log - 17 GB.
This server is already quite busy. For OS, we still take RAID1 of two Intel SSD s3510 240 GB disks. SQL Log and tempDB can be hosted on a dedicated RAID1 of two Intel SSD s3510 120 GB drives. And for DB tables, collect RAID10 from four Intel SSD s3610 400 GB disks. As a RAID controller, it is appropriate to use the "advanced" Intel® RAID Controller RS3MC044.

Virtualization
The performance of modern servers often allows a number of virtual servers to be hosted on one physical machine. To place them optimally, it helps to keep in mind how virtualization affects each server component.
CPU and RAM are the areas that lose the least performance in a virtual environment. Accordingly, the software components that mainly use them can be placed in a virtual machine (VM) painlessly: 1C:Enterprise 8 Application Server x64, the Remote Desktop service, and IIS.
I/O subsystems suffer noticeably greater losses under virtualization: 5-15% for the network interface and up to 25% for the disk subsystem. Since SQL Server is sensitive to disk subsystem performance, it is quite logical to place it not in a VM but on the physical hardware.
Usually they do this with separate servers or a group of servers under 1C:
- OS Windows and MS SQL Server are installed on the hardware;
- 1C:Enterprise 8. Application Server x64 is launched in the VM and Licensing Server in the same VM;
- in a separate VM service Remote Desktop or IIS.
When using several software components on one server, incl. on different VMs, it is necessary to provide additional space for their placement at the disk subsystem level. As a rule, these are system disks with OS - they are increased to 480 GB or more.

Backup
A fairly common practice is to install two large-capacity HDDs (4-8 TB) in RAID1 in the server to store local copies of the databases and to serve as file storage. Such storage does not have high requirements for random access speed, and the linear read and write speed is quite sufficient for saving daily backups and user files. You can build such a volume on any available RAID controller, and even on Intel RST it will work quite quickly.

And please do not forget that a dedicated server for critical tasks must have redundant power.

For many years, there have been discussions on the forums about what can speed up the work of file 1C.

Of course, there are many recipes, including some I share on the course.

But no matter what anyone says, for file 1C, bottleneck number 1 is of course the disk subsystem!

Actually "File".

Multiple disk accesses can really "slow down" all work in 1C Enterprise.

And if we are talking about multi-user access, then this is obvious here.

How can this problem be solved?

Of course, by switching to faster HDDs, SAS drives, RAID, SSDs, or even, for the "extreme" crowd, by placing the database on a RAM disk, that is, in the RAM of the PC or server.

Actually in this article we will touch on all the methods, but we will pay special attention, of course, to the last one.

Since there are no adequate articles on the web that reveal the many nuances of using 1C with RAM disks, nor sensible tests of all the other disk subsystems that take everyday work in 1C into account.

But there are many questions here:

Who can use it and when?

In what situations?

Reliability?

Application area?

Real speed tests in various operations on 1C?

Let's start, perhaps, from ordinary HDDs.

Of course, the essence of the problem lies in the mechanics of the HDD, which does not provide the necessary speeds for file work in 1C (especially multi-user access).

The only way to speed up the HDD is to replace the 5400 rpm HDD with a 7200 rpm one.

Yes, rotational speed matters and 7200 rpm is certainly faster than 5400.

This is if we want to feel the difference. (But it's worth noting that virtually all HDDs today run at 7200 speeds.)

Drives at 7200 rpm show approximately the same result whether they are connected over SATA 2 or SATA 3.

SATA (eng. Serial ATA) is a serial data exchange interface with information storage devices.

If you chase the SATA III interface for an HDD, there will be no tangible speed gain, only a very small difference in the numbers (later we will test HDD speed with SATA II and SATA III support).

By the way, you can find out which interface your disk is currently running on (and which it supports) using the CrystalDiskInfo program.

SATA/300MB/s - SATA 2

SATA/600MB/s - SATA 3

SATA/300 | SATA/300 (see Figure 1): the first value is the disk's current operating mode, the second is the supported mode. (On older disks the first value is sometimes not displayed.)

In the second figure we see that the HDD both runs on and supports SATA 3, that is, 600 MB/s of interface bandwidth.

(We will return to the issue of interfaces later).

Another thing is if we put ordinary HDDs in RAID - 0 (Stripe).

With two or four disks, RAID 0 gives a noticeable gain in data transfer speed, but it does not provide reliability at all. Any cheap and even software RAID controller is suitable for its construction. Suitable for those who need to squeeze the maximum performance from the file system on conventional HDDs at minimal cost.

The speed is comparable even to some old SSDs, but alas, here we pay for speed with reliability. If at least one disk fails, all information on both disks is lost!

So, frequent backups of 1C databases with such a RAID are required.

Due to what speed?

Data in RAID - 0 is evenly distributed across the disks of the array, the disks are combined into one, which can be divided into several. Distributed read and write operations allow you to significantly increase the speed of work, since several disks simultaneously read / write their portion of data.

In other words, RAID 0 simply skillfully bypasses the mechanics, due to this speed.

There is also RAID 10, which adds mirroring to striping.

But the minimum number of disks required to build it is four.

Of course, in this article we are talking about a simple file 1C, so we only analyze budget solutions for small companies where entry-level servers are not available at all.

For the same reason, faster and more expensive SAS drives, iSCSI protocols will not be analyzed.

Only SSD drives are faster than server SAS.

A few years ago, I would not advise buying "solid-state" for work in 1C.

But this opinion has changed in the wake of today's reliable and relatively cheap SSD drives.

For example, SAMSUNG today gives a 10-year warranty on some of its discs!

Intel, SanDisk, Corsair and others 5 years warranty on SSD!

SSDs began to work much more reliably, faster, and controllers have become noticeably smarter, hence such guarantees.

About prices

Of course, enterprise-level SSD drives from Intel will cost us a pretty penny.

But there are also good budget alternatives.

For example, "solid-state" from SanDisk X400 256 GB will cost us only $95!

Actually, we will also test it in 1C, in the next part of the article.

The SanDisk X400 drive is good, reliable (5 year warranty), fast (read/write up to 540/520 MB/s).

And since we are talking about speeds, here we should take into account such a moment as SATA 3.

The SATA III (version 3.x) interface, officially known as SATA 6 Gb/s, is the third generation of SATA interfaces running at 6.0 Gb/s. The bandwidth supported by the interface is 600 MB/s. This interface is backwards compatible with the SATA II -3Gb/s interface.

The bandwidth of SATA II is only 300 MB / s, which is quite enough for an HDD, but absolutely not for today's SSDs.

To unleash the potential of an SSD, you need an interface with at least 600 MB / s bandwidth, that is, SATA III.

But don't worry, if you bought a PC or server after 2010, then you most likely have it in stock. (Otherwise, you need to change the motherboard).

By the way, I want to draw your attention to SATA III controllers from different manufacturers on the same motherboard, for example Intel and Marvell, where the former can noticeably win in speed. (In fact, I saw this for myself just the other day: Intel turned out to be faster by as much as 35 percent.)

Of course, SATA III is not the only interface for communicating with an SSD drive.

The developers of "solid-state" rested on the throughput of SATA III - 600 MB / s, and launched new devices on the market with SATA Express, M.2, mSATA, PCI Express connection interfaces.

There are already completely different speeds:

PCI Express x2 2.0 8Gb/s (800MB/s)

SATA Express 10 Gb/s (1000 MB/s)

PCI Express x4 2.0 16 Gb/s (1600 MB/s)

PCI Express x4 3.0 32Gb/s (3200MB/s)

Unfortunately, now these devices cost a lot of money, and it is difficult to call such a solution a budget one.

To further speed up your SSD, you can create a RAID 0 of two drives, which will even double the speed of the SSD.

But what can be faster than an SSD?

Of course RAM!

Here the speeds are not comparable with HDD, RAID or SSD.

There are ways (special software) with which you can take part of the RAM and create a disk from it.

Now the "RAM" is much cheaper than 5 years ago, and many on the "board" already have 8-16 or even more GB of RAM.

The whole trick is to allocate the required size and place the 1C database and its temporary files on this disk (and, if the size allows, the entire platform as well).

I already said that this is a method for "extreme people", and it is not hard to see why.

If suddenly there is a failure in the system, you will immediately lose the database, as well as everything that will be on this disk!

Of course, in order to really work in 1C, which is located on a RAM disk, you need server hardware, server RAM, uninterruptible power supplies and reliable hardware. (motherboard, processor, etc.).

+ frequent backups.

Then, of course, you can work in this way in 1C.

But what if there is no such “iron”, we are interested in budget solutions?

Why then disassemble the work of 1C on a RAM disk at all?

There is a benefit, friends! Of course, not for users' constant work in 1C, but rather for performing various routine operations.

Month-end closing, re-posting of documents, deletion, database trimming, and any other similar work involving a large number of documents, catalogs and everything else.

Many of these operations can take days, whereas in RAM they take a few hours!

If, for example, your users work in 1C through a web browser, then it can be completely placed in RAM, this will significantly speed up the user's work in 1C via the web.

In other words, you can temporarily use the RAM disk to perform various heavy operations in 1C to speed up the process, and then return the base back to the SSD or HDD.

This is a good trick, you can use it!

In order to start real testing of file 1C on the above disk systems, almost everything is ready, except for the RAM disk.

Let's create it!

The free program Dataram RAMDisk will help us here.

Its free version is enough to create a disk of up to 4 GB (larger sizes require the paid version, about $21).
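Once the disk is created, a quick sanity check does not hurt. Below is a crude sequential throughput test, a sketch only: the drive letter R: is an assumption, the tested paths need write access, and the read figure can be inflated by the operating system's file cache, so treat the numbers as a rough comparison rather than a proper benchmark.

```python
# Crude sequential write/read timing for a freshly created RAM disk,
# compared against the system drive. Not a substitute for a real benchmark.
import os, time

def measure(path: str, size_mb: int = 512) -> None:
    chunk = os.urandom(1024 * 1024)           # 1 MB of random data
    t0 = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())                  # force the data onto the "disk"
    write_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1024 * 1024):
            pass                              # note: may be served from OS cache
    read_s = time.perf_counter() - t0
    os.remove(path)
    print(f"{path}: write ~{size_mb / write_s:.0f} MB/s, read ~{size_mb / read_s:.0f} MB/s")

measure(r"R:\ramdisk_test.bin")               # the RAM disk (assumed letter)
measure(r"C:\ramdisk_test.bin")               # system drive, for comparison
```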

The LinkedIn group "Storage Professionals" (by the way, I recommend taking a look at LinkedIn discussion groups, they can be interesting) has been discussing the following topic for a week now:
SSD drives failure rates
Here are some quotes from there, given verbatim (each paragraph is a fragment of a post by a different participant in the thread):
I’m working as a contractor at a bank in the midwest and we have SSD’s in EMC VMAX’s for about 9 months. We haven't seen any failures yet
I once ran a multi week attempt to burn out various vendors' SSDs. I ran them flat out 100% random writes for about a month. Fusion IOs at something like 30k IOPs per drive, STECs / Intels around 7k. Never was able to get any of them to fail.
The Fusion IO did as many writes that month as a single SAS drive could do in over a decade.

We have approximately 150 SSD drives and have seen 1 failure during the past 12 months.
I've been using SSDs in a cx4-960 clariion for just under 12 months with no failures (covering large ms sql tempdb).
From my own experience (first shipped SSD systems 2 and half years ago), SLC SSD failure rate is in the same range as rotating drives.

That's it. Something to think about for those who still believe that SSD write endurance is terribly limited, that SSDs are unreliable, and that under load an Enterprise Flash Drive dies like a burnt-out cheap Chinese Kingston USB flash drive.

The issue of 1C performance in file mode is quite acute, especially for small firms that cannot afford significant investment in hardware. Meanwhile, the application's "appetite" only grows from release to release, and the task of increasing performance at a moderate cost becomes ever more pressing. In this case, buying an SSD and placing the bases on it is a good solution.

One of our clients, a small accounting firm, started complaining about the slow performance of 1C:Enterprise. The application, already not particularly fast, became downright sluggish after the transition from Accounting 2.0 to Accounting 3.0.

There was a simple terminal server based on a Core i3-2120 with 8 GB of RAM and a RAID 1 array of two Western Digital RE4 drives, serving three to six users, each of whom worked with two or three databases at the same time.

Performance analysis immediately revealed a bottleneck - the disk subsystem (the screenshot was taken after installing the SSD, so the RAID array includes logical drives C: and E:).

Simple calculations showed that launching even one infobase almost completely consumes the performance of the array: about 150 IOPS at the current read/write ratio is the practical limit for a mirror of two not-the-fastest disks, which the disk queue length indirectly confirms.
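For reference, the back-of-the-envelope estimate looks roughly like this; the per-spindle IOPS figure and the read share are assumptions typical of 7200 rpm SATA drives, not measurements of this particular server.

```python
# Rough usable-IOPS estimate for a RAID 1 of two 7200 rpm SATA disks.
# Every logical write costs two back-end I/Os on a mirror (write penalty = 2).

def raid1_usable_iops(n_disks: int, disk_iops: float, read_share: float) -> float:
    raw = n_disks * disk_iops
    return raw / (read_share + (1 - read_share) * 2)

print(raid1_usable_iops(2, 80, 0.7))   # ~123 IOPS at a 70/30 read/write mix
print(raid1_usable_iops(2, 80, 0.9))   # ~145 IOPS with a read-heavy mix
```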

Launching several databases simultaneously at the start of the working day slowed the server down considerably and reduced its responsiveness. Unpleasant pauses were also noticeable when working with document journals, generating reports, and so on.

The array performance test also showed a low result, more suitable for portable drives by today's standards.

It became clear that the disk subsystem needed an upgrade. Even preliminary estimates showed that building a fast array out of mass-market HDDs was constrained both by the available budget and by the hardware itself, which simply did not have the required number of SATA ports and drive bays in the case. Therefore, the decision was made to buy an SSD.

Since high disk loads were not expected, the choice was driven primarily by price. Speed characteristics also took a back seat, since the bottleneck was the SATA II interface. As a result, a 128 GB Corsair Neutron (with an LAMD controller) was purchased, which, once installed in the server, showed the following speed characteristics:

As you can see, serial access operations expectedly ran into interface bandwidth, but in our case this is of secondary importance. The main attention should be paid to random access operations, which are an order of magnitude superior to those of traditional HDDs.

The next question was whether to mirror the SSD and sacrifice TRIM for the sake of fault tolerance, or keep it as a single drive and choose speed over fault tolerance. It should be noted that modern SSDs, in addition to the TRIM command, use their own anti-degradation techniques such as garbage collection, which let them work quite efficiently even on systems without TRIM support. The LAMD (Link_A_Media Devices) controller used in this series has particularly efficient garbage collection, on the level of enterprise-class drives, which is not surprising, since its developers have long worked in the enterprise segment.
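Whether TRIM is actually enabled can be checked on Windows with the standard fsutil command; below is a small wrapper sketch (running fsutil directly in an elevated command prompt works just as well).

```python
# Query the Windows TRIM (delete notification) setting.
# "DisableDeleteNotify = 0" in the output means TRIM is enabled.
import subprocess

result = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())
```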

Since the volume of documents entered daily is small, we limited ourselves to a single SSD with mandatory daily backups. Indirectly, the effect of using a solid state drive can be assessed by the performance monitor:

The number of I/O operations has increased significantly, as has the transfer rate, while the queue length no longer exceeds one. These are very good indicators; it remains to check how much our actions have sped up work in 1C:Enterprise itself.
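The same counters the performance monitor graphs can also be sampled programmatically, for example with the third-party psutil package (an assumption: it has to be installed separately). A minimal sketch:

```python
# Sample system-wide disk counters over a short interval and report IOPS
# and throughput, roughly what the performance monitor shows.
import time
import psutil   # third-party package: pip install psutil

INTERVAL = 5  # seconds

before = psutil.disk_io_counters()
time.sleep(INTERVAL)
after = psutil.disk_io_counters()

iops = ((after.read_count - before.read_count) +
        (after.write_count - before.write_count)) / INTERVAL
mb_s = ((after.read_bytes - before.read_bytes) +
        (after.write_bytes - before.write_bytes)) / INTERVAL / 1024 / 1024

print(f"~{iops:.0f} IOPS, ~{mb_s:.1f} MB/s over the last {INTERVAL} s")
```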

To check this, we ran a small express test, measuring the infobase loading time and the time of group re-posting of a set of documents for a certain period. The test used the 1C:Accounting 3.0.27.7 configuration on platform 8.3.3.721.

During the performance analysis we also noticed that 1C:Enterprise actively uses temporary folders, which in our case were located on the hard drive. To achieve maximum performance they should be moved to the SSD as well; however, for those who prefer to conserve the drive's write resource, we included both options in the test: databases on the SSD with the temporary folder on the HDD, and everything on the SSD.

As you can see, moving the infobases to the SSD immediately cut their loading time by more than half, and re-posting sped up by roughly 30%. At the same time, the performance drop during concurrent work disappeared completely.

Moving the temporary folders to the SSD as well reduces loading time by more than three times and speeds up document posting roughly twofold. Something to think about even for staunch proponents of conserving the drive's write resource. Our opinion on this is simple: if you bought an SSD, use it to the fullest.
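One common way to relocate those temporary folders is to repoint the current user's TEMP and TMP environment variables, which is where 1C's temporary files typically end up. A sketch using the standard Windows setx command; the new path E:\Temp on the SSD is only an assumed example, and the change takes effect for newly started sessions.

```python
# Hypothetical sketch: repoint the current user's TEMP/TMP variables to a
# folder on the SSD (E:\Temp is an example path). Uses the standard Windows
# "setx" command; new processes pick the change up after re-logon.
import os
import subprocess

NEW_TEMP = r"E:\Temp"
os.makedirs(NEW_TEMP, exist_ok=True)
subprocess.run(["setx", "TEMP", NEW_TEMP], check=True)
subprocess.run(["setx", "TMP", NEW_TEMP], check=True)
```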

Let's make a small digression. The Corsair Neutron drive we use is rated for 2-3K erase/write cycles. Simple calculations show that even if the entire disk capacity were completely rewritten every day, it would take 5-8 years to exhaust that resource. Moreover, statistics show that the main cause of SSD failure during the warranty period is not resource exhaustion but manufacturing defects or firmware errors.
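Those simple calculations, spelled out (the daily write volume of one full capacity per day is a deliberately pessimistic assumption):

```python
# Years to wear out a 128 GB drive rated for 2000-3000 P/E cycles, assuming
# the entire capacity is rewritten every single day (a pessimistic scenario).
CAPACITY_GB    = 128
DAILY_WRITE_GB = 128                      # one full drive write per day

for pe_cycles in (2000, 3000):
    years = CAPACITY_GB * pe_cycles / DAILY_WRITE_GB / 365
    print(f"{pe_cycles} cycles -> ~{years:.1f} years")   # ~5.5 and ~8.2 years
```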

In conclusion, I would like to say that using an SSD is today perhaps the only effective way to significantly increase the performance of 1C:Enterprise in file mode, and, most importantly, one that is affordable even for small businesses.