
Slowdowns in a file-based infobase and how to avoid them (from recent experience). Tips for when 1C runs slowly over the network

Is your 1C slowing down again? Losing time while a report is being generated? Tired of drinking tea while you wait for a data exchange to finish?

Slow 1C performance is not uncommon. You can put up with it, or you can optimize the 1C settings and the hardware, which will significantly speed up your work.

Our services will help you get more done in your working day! We know how to speed up 1C so that you never again have to say "1C freezes".

Why "1C" can freeze or slow down?

The problem may be in the hardware. Lack of memory on the 1C server, an unstable local network, problems with the hard drive or the protection keys - all of this can make 1C slow down and make you nervous. In addition, 1C may hang due to:

  • poor compatibility between the platform and the configuration,
  • blunders by novice 1C programmers,
  • a huge database,
  • a large number of users.

Even mistakes made during routine work with 1C can lead to slow operation.

How to speed up 1C?

We operate like this:

  • We check the equipment for compliance with the 1C system requirements. Perhaps you need to add RAM, reconfigure the 1C server, replace a disk, or check the speed of the local network. In other words, we run a comprehensive check of all equipment involved in the process.
  • We check the settings of other services involved in the work of 1C. For example, an incorrectly configured SQL database or unreliable terminal access can slow 1C down considerably.
  • We check the configuration code behind the operations that cause problems. It is no secret that the same task can be solved in different ways, and non-optimal code often causes 1C to freeze.
  • We check how users actually work with 1C. Sometimes users slow 1C down themselves without being aware of it.

The phrase "1C slows down" has probably been heard by everyone working with products on the 1C:Enterprise platform: some complained about it, others received the complaints. In this article, we will consider the most common causes of this problem and options for solving it.

Let's turn to the metaphor: before finding out why a person did not go somewhere, it is worth making sure that he has legs to walk. So, let's start with the hardware and network requirements.

If Windows 7 is installed: [the minimum system requirements table was shown here as an image].

If Windows 8 or 10 is installed: [the minimum system requirements table was shown here as an image].

Also remember that there must be at least 2 GB of free disk space, and the network connection must run at no less than 100 Mbit/s.

It does not make much sense to consider the characteristics of servers in the client-server version, because in this case everything depends on the number of users and the specifics of the tasks that they solve in 1C.

When choosing a configuration for a server, keep the following in mind:

  • One worker process of the 1C server consumes 4 GB on average (not to be confused with a user connection: one worker process can serve as many connections as you specify in the server settings);
  • Running 1C and the DBMS (especially MS SQL) on the same physical server gives a gain when processing large data arrays (for example, closing a month or calculating a budget by a model), but noticeably reduces performance for lightweight operations (for example, creating and posting a sales document);
  • Remember that the 1C server and the DBMS must be connected by a channel of at least 1 Gbit/s;
  • Use high-performance disks and do not combine the 1C server and DBMS roles with other roles (file server, AD domain controller, etc.).

If, after checking the equipment, 1C still “slows down”

We have a small company of 7 people, and 1C "slows down". We turned to specialists, and they said that only the client-server option would save us. But such a solution is not acceptable for us - it is too expensive!

Carry out routine maintenance on the database*:

1. Start the database in configurator mode.


2. Select the "Administration" item in the main menu, and in it - "Testing and fixing".


3. Set all the checkboxes as shown in the picture and click "Run".

*This procedure may take from 15 minutes to an hour, depending on the size of the database and the characteristics of your PC.
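The same maintenance can also be launched unattended from the command line using the designer's batch mode. A hedged sketch - the paths and the exact set of switches are illustrative, so check them against your platform version's documentation:

1cv8.exe DESIGNER /F "C:\Bases\Accounting" /IBCheckAndRepair -ReIndex -LogIntegrity -RecalcTotals -IBCompression /Out "C:\Logs\repair.log" /DisableStartupMessages

A command like this is convenient to put into the Windows Task Scheduler for a night run, while no users are connected to the base.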

If this does not help, then we make a client-server connection, but without additional investments in hardware and software:

1. Choose the least loaded desktop computer in the office (not a laptop): it must have at least 4 GB of RAM and a network connection of at least 100 Mbit/s.

2. Activate IIS (Internet Information Services) on it. To do this:
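If you prefer the command line to clicking through the Control Panel, IIS with the components the 1C web publication needs can be enabled roughly like this (run in an elevated command prompt; treat this as a sketch, since feature names can differ slightly between Windows versions):

dism /online /enable-feature /featurename:IIS-WebServerRole /featurename:IIS-WebServer /featurename:IIS-ISAPIExtensions /featurename:IIS-ISAPIFilter /all

The ISAPI components are needed because the 1C web server extension is an ISAPI module.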





3. Publish your database on this computer. Material on this topic is available on ITS, or contact a support specialist.
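For reference, the platform ships a webinst utility (in its bin folder) that performs the publication from the command line. A hedged example, with hypothetical paths and names:

"C:\Program Files\1cv8\8.3.5.1443\bin\webinst.exe" -publish -iis -wsdir accounting -dir "C:\inetpub\wwwroot\accounting" -connstr "File=C:\Bases\Accounting;"

Here accounting is the name under which the base will be visible (http://server/accounting), and -connstr points to the file base; check the exact quoting rules for -connstr against your platform version's documentation.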

4. On user computers, configure access to the database through the thin client. To do this:


Open the 1C launch window.


Choose your working database in the list and click "Edit". Set the switch to the "On the web server" position, in the line below it specify the name or IP address of the server on which IIS was activated and the name under which the database was published. Click "Next".


Set the "Main Startup Mode" switch to "Thin Client" mode. Click "Finish".

We are a fairly big company - though not huge either - 50-60 people. We use the client-server option, but 1C is terribly slow.

In this case, it is recommended to split the 1C server and the DBMS server onto two different servers. When splitting, be sure to remember: if they remain on the same physical host that was simply virtualized, the disks of these servers must be different - physically different! Also, be sure to set up scheduled maintenance tasks on the DBMS server if it is MS SQL (this is described in more detail on the ITS website).
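For MS SQL, "scheduled maintenance" usually means regular index rebuilds and statistics updates. A minimal hedged sketch via sqlcmd - the server and database names are placeholders, and a production setup would rather use SQL Server Agent maintenance plans as described on ITS:

sqlcmd -S SQLSRV -E -d base1c -Q "EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ? REBUILD'"
sqlcmd -S SQLSRV -E -d base1c -Q "EXEC sp_updatestats"

Run this off-hours: index rebuilds are heavy and take locks on the tables being rebuilt.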

We are a fairly big company, more than 100 users. Everything is set up according to the 1C recommendations for this option, but when posting some documents 1C "slows down" badly, and sometimes a lock error occurs. Maybe we should collapse (trim) the database?

A similar situation usually arises because of the size of one very specific accumulation register (or, less often, an accounting register): either the register is never "closed", i.e. there are incoming movements but no outgoing ones, or the number of dimensions by which the register balances are calculated is very large. There may even be a mix of these two causes. How do we determine which register spoils everything?

We note the time when documents are posted slowly, or the time and the user for whom the lock error occurs.

Open the event log (the registration log).



We find the document we need, at the right time, for the right user, with the document-posting event type ("Data.Post").



We look through the entire transaction block up to the moment the transaction was cancelled (if there was a lock error), or we look for the longest step (where the time since the previous record is more than a minute).

After that, we make a decision, bearing in mind that collapsing this particular register is in any case cheaper than collapsing the entire database.

We are a very large company: more than 1000 users, thousands of documents a day, our own IT department, a huge fleet of servers; we have optimized queries several times, but 1C still slows down. Apparently, we have outgrown 1C and need something more powerful.

In the vast majority of such cases, it is not 1C that "slows down" but the architecture of the solution in use. When choosing a new business program, remember that it is cheaper and easier to implement your business processes in a program than to reshape them to fit some other, especially very expensive, program. Only 1C provides such an opportunity. Therefore, it is better to ask: "How do we fix the situation? How do we make 1C 'fly' on such volumes?" Let's look at a few treatment options:

  • Use the parallel and asynchronous programming technologies that 1C supports (background jobs and requests in a loop).
  • When designing the solution architecture, avoid accumulation registers and accounting registers in the most "narrow" (bottleneck) places.
  • When designing the data structure (accumulation and/or information registers), follow the rule: "the fastest table to write and read is a table with one column". What this means becomes clearer if you look at a typical RAUZ (advanced cost accounting) mechanism.
  • To process large amounts of data, use auxiliary clusters connected to the same database (but never do this for interactive work!). This bypasses the standard 1C locks, making it possible to work with the database at almost the same speed as directly with SQL tools.

It is worth noting that 1C optimization for holdings and large companies is a topic for a separate, large article, so stay tuned for updates to the materials on our website.

Recently, users and administrators have increasingly complained that the new 1C configurations built on the managed application are slow, in some cases unacceptably slow. It is clear that the new configurations contain new functions and capabilities and are therefore more demanding of resources, but most users have no understanding of what primarily affects the operation of 1C in file mode. Let's try to fill this gap.

In a previous article we already touched on the impact of disk subsystem performance on the speed of 1C; however, that study concerned local use of the application on a separate PC or a terminal server. Meanwhile, most small deployments involve working with a file base over the network, where one of the users' PCs acts as the server, or a dedicated file server based on a regular, most often also inexpensive, computer.

A small survey of Russian-language resources on 1C showed that this issue is diligently avoided; in case of problems, the usual advice is to switch to the client-server or terminal mode. It has also become almost generally accepted that configurations on the managed application work much slower than the ordinary ones. As a rule, "iron-clad" arguments are given: "Accounting 2.0 just flew, and the 'three' barely moves". There is some truth in these words, so let's try to figure it out.

Resource consumption at a glance

Before starting this study, we set ourselves two goals: to find out whether managed-application configurations are actually slower than conventional ones, and which resources have the greatest impact on performance.

For testing, we took two virtual machines running Windows Server 2012 R2 and Windows 8.1 respectively, each with 2 cores of the host Core i5-4670 and 2 GB of RAM, which corresponds to an average office machine. The server was placed on a RAID 0 array of two disks, and the client on a similar array of general-purpose disks.

As experimental bases, we chose several Accounting 2.0 configurations, release 2.0.64.12, which were then updated to 3.0.38.52; all configurations were run on platform 8.3.5.1443.

The first thing that attracts attention is the increased size of the "three's" infobase - it has grown significantly - as well as its much greater appetite for RAM:

We are ready to hear the usual "what on earth did they add to this 'three'", but let's not rush. Unlike client-server users, who are served by a more or less qualified administrator, file-version users rarely think about database maintenance. Nor do the employees of the specialized firms that service (read: update) these bases.

Meanwhile, the 1C infobase is a full-fledged DBMS of its own format, which also requires maintenance, and for this there is even a tool called Testing and fixing the infobase. Perhaps the name played a cruel joke: it seems to imply a troubleshooting tool, but poor performance is also a problem, and restructuring, reindexing, and table compression are database optimization tools well known to any DBMS administrator. Shall we check?

After applying the selected actions, the database dramatically "lost weight", becoming even smaller than the "two" (which no one had ever optimized either), and RAM consumption also decreased slightly.

Later, after loading new classifiers and directories, creating indexes, etc., the size of the base will grow; overall, the "three's" bases are larger than the "two's". However, that is not what matters most: while the second edition was content with 150-200 MB of RAM, the new edition needs half a gigabyte, and this value should be taken into account when planning the resources needed to work with the program.

Network

Network bandwidth is one of the most important parameters for network applications, especially for 1C in file mode, which moves significant amounts of data over the network. Most small-business networks are built on inexpensive 100 Mbit/s equipment, so we started testing by comparing 1C performance in 100 Mbit/s and 1 Gbit/s networks.

What happens when you launch a 1C file base over the network? The client downloads a fairly large amount of data into temporary folders, especially on the first, "cold", launch. At 100 Mbit/s we expectedly run into the bandwidth, and loading can take a significant amount of time, in our case about 40 seconds (one graph division is 4 seconds).

The second launch is faster, since some of the data is kept in the cache and stays there until a reboot. Switching to a gigabit network significantly speeds up program loading, both "cold" and "hot", and the ratio between the values is preserved. Therefore, we decided to express the result in relative terms, taking the largest value of each measurement as 100%:

As the graphs show, Accounting 2.0 loads twice as fast at any network speed, and moving from 100 Mbit/s to 1 Gbit/s cuts loading time fourfold. There is no difference between the optimized and non-optimized "three" databases in this mode.

We also checked the impact of network speed on heavy operations, for example group reposting of documents. The result is also expressed in relative terms:

Here it is more interesting: the optimized base of the "three" in a 100 Mbit/s network works at the same speed as the "two", while the non-optimized one shows a result twice as bad. On gigabit the ratios hold: the non-optimized "three" is also twice as slow as the "two", and the optimized one lags behind by a third. Also, moving to 1 Gbit/s cuts the execution time threefold for version 2.0 and twofold for 3.0.

In order to evaluate the impact of network speed on daily work, we used performance measurement by performing a sequence of predefined actions in each database.

For everyday tasks, network bandwidth is actually not a bottleneck: the non-optimized "three" is only 20% slower than the "two", and after optimization it turns out to be about as much faster - here the advantages of thin-client mode show. Moving to 1 Gbit/s gives the optimized base no advantage, while the non-optimized one and the "two" start to work faster, showing a small difference between them.

From these tests it is clear that the network is not a bottleneck for the new configurations, and the managed application even works faster than the ordinary one. You can recommend moving to 1 Gbit/s if heavy tasks and database loading speed are critical for you; in other cases, the new configurations let you work effectively even in slow 100 Mbit/s networks.

So why does 1C slow down? We will investigate further.

Server disk subsystem and SSD

In the previous article, we achieved an increase in 1C performance by placing the databases on an SSD. Perhaps the performance of the server's disk subsystem is insufficient? We measured server disk performance during group reposting in two databases at once and got a rather optimistic result.

Despite the relatively high number of input/output operations per second (IOPS) - 913 - the queue length did not exceed 1.84, which is a very good result for a two-disk array. Based on this, we can assume that a mirror of ordinary disks is enough for the normal operation of 8-10 network clients in heavy modes.

So, is an SSD needed on the server? The best way to answer this is testing, which we conducted using the same methodology: the network connection is 1 Gbit/s everywhere, and the result is expressed in relative values.

Let's start with the database loading speed.

It may surprise some, but an SSD on the server does not affect database loading speed. The main limiting factor here, as the previous test showed, is network throughput and client performance.

Let's move on to reposting:

We have already noted above that disk performance is quite sufficient even for heavy operations, so the SSD has no effect here either, except for the non-optimized base, which on the SSD caught up with the optimized one. Actually, this once again confirms that optimization operations organize the information in the database, reducing the number of random I/O operations and increasing the speed of access to it.

On everyday tasks, the picture is similar:

Only the non-optimized base benefits from the SSD. You can, of course, buy an SSD, but it would be much better to think about timely maintenance of the bases. Also, don't forget to defragment the infobase partition on the server.

Client disk subsystem and SSD

We analyzed the influence of an SSD on the speed of a locally installed 1C in a previous article; much of what was said there is also true for network mode. 1C uses disk resources quite actively, including for background and scheduled tasks. In the figure below you can see how Accounting 3.0 accesses the disk quite actively for about 40 seconds after loading.

At the same time, you should realize that for a workstation actively working with one or two infobases, the performance of an ordinary mass-market HDD is quite enough. Buying an SSD can speed up some processes, but you will not notice a radical acceleration in everyday work, since, for example, loading will be limited by network bandwidth.

A slow hard drive can slow down some operations, but it cannot by itself cause a program to slow down.

RAM

Even though RAM is now indecently cheap, many workstations keep working with the amount of memory installed when they were bought. This is where the first problems lie in wait. Given that the average "three" needs about 500 MB of memory, we can assume that a total of 1 GB of RAM will not be enough to work with the program.

We reduced the system memory to 1 GB and launched two infobases.

At first glance, everything is not so bad: the program has moderated its appetite and fit completely into the available memory. But let's not forget that the need for working data has not changed, so where did it go? It was flushed to disk: cache, swap, etc. The essence of the operation is that data not needed at the moment is sent from fast RAM, of which there is not enough, to slow disk storage.

Where does this lead? Let's see how system resources are used in heavy operations; for example, let's start group reposting in two databases at once. First on a system with 2 GB of RAM:

As you can see, the system actively uses the network to receive data and the processor to process it; disk activity is insignificant - it occasionally grows during processing, but it is not a limiting factor.

Now let's reduce the memory to 1 GB:

The situation changes radically: the main load now falls on the hard disk, while the processor and network are idle, waiting for the system to read the necessary data from disk into memory and push the unnecessary data out there.

At the same time, even subjectively, working with two open databases on a system with 1 GB of memory turned out to be extremely uncomfortable: directories and journals opened with a significant delay and heavy disk access. For example, opening the Sales of goods and services journal took about 20 seconds, accompanied all this time by high disk activity (highlighted with a red line).

To objectively assess the impact of RAM on the performance of configurations based on the managed application, we made three measurements: the loading speed of the first base, the loading speed of the second base, and group reposting in one of the bases. Both bases are completely identical and were created by copying the optimized base. The result is expressed in relative units.

The result speaks for itself: if loading time grows by about a third, which is still quite tolerable, the time for performing operations in the database grows threefold - no comfortable work is possible in such conditions. By the way, this is the case where buying an SSD can improve the situation, but it is much easier (and cheaper) to deal with the cause rather than the consequences and simply buy the right amount of RAM.

A lack of RAM is the main reason why working with the new 1C configurations is uncomfortable. Configurations with 2 GB of memory on board should be considered the minimum suitable. And keep in mind that in our case "greenhouse" conditions were created: a clean system with only 1C and Task Manager running. In real life a browser, an office suite, an antivirus, etc. are usually open on a working computer, so budget 500 MB per database plus some margin, so that during heavy operations you do not run into a lack of memory and a drastic drop in performance.

CPU

The central processor can, without exaggeration, be called the heart of the computer, since it ultimately performs all the calculations. To evaluate its role, we ran another set of tests, the same as for RAM, reducing the number of cores available to the virtual machine from two to one; the test was run twice, with 1 GB and 2 GB of memory.

The result turned out to be quite interesting and unexpected: the more powerful processor quite effectively took over the load when resources were lacking, but otherwise gave no tangible benefit. 1C:Enterprise in file mode can hardly be called a processor-intensive application; it is rather undemanding. And in difficult conditions the processor is burdened not so much by calculating the application's own data as by servicing overhead: additional I/O operations and the like.

Conclusions

So, why does 1C slow down? First of all, it is a lack of RAM; the main load in that case falls on the hard drive and processor. And if they do not shine with performance, as is usually the case in office configurations, we get the situation described at the beginning of the article: the "two" worked fine, and the "three" shamelessly slows down.

Second place goes to network performance: a slow 100 Mbit/s channel can become a real bottleneck, although the thin-client mode can maintain a fairly comfortable level of work even on slow channels.

Then you should look at the disk subsystem. Buying an SSD is unlikely to be a good investment, but replacing the disk with a more modern one will not hurt. The difference between generations of hard drives can be estimated from our earlier material on the subject.

And finally, the processor. A faster model will of course not be superfluous, but there is little point in increasing its performance unless this PC is used for heavy operations: batch processing, heavy reports, month closing, and so on.

We hope this material helps you quickly sort out the question "why does 1C slow down" and solve it most effectively and at no extra cost.


This article covers the main mistakes that novice 1C administrators make and shows how to fix them, using the Gilev test as an example.

The main purpose of the article is to lay out, in one place, the obvious nuances for administrators (and programmers) who have not yet gained experience with 1C, so they do not have to be repeated over and over.

A secondary goal: if I have made any mistakes, Infostart will point them out to me faster than anyone.

V. Gilev's test has already become a de facto standard. The author gave quite understandable recommendations on his website; I will simply present some results and comment on the most likely errors. Naturally, the test results on your equipment may differ; they are just a guideline for what the result should be and what you can strive for. I want to note right away that changes must be made step by step, checking after each step what result it gave.

There are similar articles on Infostart; in the relevant sections I will put links to them (if I miss something, please point it out in the comments and I will add it). So, suppose your 1C is slow. How do you diagnose the problem, and how do you understand who is to blame, the administrator or the programmer?

Initial data:

The test computer, the main guinea pig: HP DL180G6, 2x Xeon 5650, 32 GB, Intel 362i, Win 2008 R2. For comparison, a Core i3-2100 shows comparable results in the single-threaded test. The equipment was deliberately not the newest; on modern hardware the results are noticeably better.

For testing remote 1C and SQL servers, the SQL server: IBM System 3650 x4, 2x Xeon E5-2630, 32 GB, Intel 350, Win 2008 R2.

To test the 10 Gbit network, Intel 520-DA2 adapters were used.

File version (the base lies on the server in a shared folder; clients connect over the network via the CIFS/SMB protocol). Step-by-step algorithm:

0. Add the Gilev test database to the file server in the same folder as the main databases. We connect from the client computer, run the test. We remember the result.

It is assumed that even on old computers of 10 years ago (a Pentium on socket 775), the time from clicking the 1C:Enterprise shortcut to the database window appearing should be less than a minute (Celeron = slow work).

If your computer is worse than a socket-775 Pentium with 1 GB of RAM, then I sympathize: comfortable work in 1C 8.2 in the file version will be hard to achieve. Consider either an upgrade (long overdue) or switching to a terminal (or web, in the case of thin clients and managed forms) server.

If the computer is not worse, you can kick the administrator. At a minimum, check the operation of the network, the antivirus, and the HASP protection driver.

If Gilev's test at this stage shows 30 "parrots" (the test's arbitrary performance units) or more, but the 1C working base still works slowly, the questions go to the programmer.

1. As a guideline for how much the client computer itself can "squeeze out", we check the operation of this computer alone, without the network. We put the test base on the local computer (on a very fast disk). If the client computer has no decent SSD, a ramdisk is created. For now, the simplest free option is Ramdisk enterprise.

To test version 8.2, 256 MB of ramdisk is enough. And most importantly: after restarting the computer with the ramdisk running, 100-200 MB of memory must remain free. Accordingly, without a ramdisk, 300-400 MB of free memory is needed for normal operation.

For testing version 8.3, a 256 MB ramdisk is enough, but more free RAM is needed.

When testing, watch the processor load. In a case close to ideal (a ramdisk), local file-mode 1C fully loads one processor core during operation. Accordingly, if during testing your processor core is not fully loaded, look for weak spots. The influence of the processor on 1C's operation has been described elsewhere, somewhat emotionally but generally correctly. Just for reference: even on a modern Core i3 with a high clock frequency, figures of 70-80 are quite realistic.

The most common mistakes at this stage:

  • Incorrectly configured antivirus. There are many antiviruses and the settings for each are different; I can only say that, properly configured, neither Dr.Web nor Kaspersky interferes with 1C. With the "default" settings, about 3-5 parrots (10-15%) can be lost.
  • Performance mode. For some reason few people pay attention to this, yet the effect is the most significant. If you need speed, you must enable it on both client and server computers. (Gilev has a good description. The only caveat: on some motherboards, if Intel SpeedStep is turned off, Turbo Boost cannot be turned on.)
In short, during 1C operation there is a lot of waiting for responses from other devices (disk, network, etc.). While waiting for a response, if the balanced performance mode is enabled, the processor lowers its frequency. A response arrives, 1C (the processor) needs to work, but the first cycles run at the reduced frequency; then the frequency rises - and 1C is again waiting for a device. And so on, many hundreds of times per second.

You can (and preferably should) enable performance mode in two places:

  • Through the BIOS: disable the C1, C1E, Intel C-state (C2, C3, C4) modes. They are called differently in different BIOSes, but the meaning is the same. It takes a while to find and requires a reboot, but once done you can forget about it. If everything is done correctly in the BIOS, speed will be gained. On some motherboards the BIOS can be set up so that the Windows performance mode no longer plays a role (examples of BIOS setup are at Gilev's). These settings mainly concern server processors or "advanced" BIOSes; if you have not found them on your system and you do not have a Xeon, that's okay.

  • Through Control Panel - Power - High performance. The downside: if the computer has not been serviced for a long time, the fan will buzz louder, it will heat up more and consume more energy. That is the price of performance.
How to check that the mode is enabled: run Task Manager - Performance - Resource Monitor - CPU and wait until the processor is not busy with anything.
These are the default settings.

BIOS C-state enabled,

balanced power mode


BIOS C-state enabled, high performance mode

For Pentium and Core, you can stop there,

you can still squeeze some "parrots" out of Xeon


In BIOS, C-states are off, high performance mode.

If you do not use Turbo boost - this is how it should look

server tuned for performance


And now the numbers. Let me remind you: Intel Xeon 5650, ramdisk. In the first case the test shows 23.26, in the last - 49.5. The difference is almost twofold. The numbers may vary, but the ratio remains roughly the same for Intel Core.

Dear administrators, you can scold 1C as much as you like, but if end users need speed, you must enable the high performance mode.

  • Turbo Boost. First, check whether your processor supports this function (for example, on the manufacturer's specification page). If it does, you can still quite legally get some more performance. (I do not want to touch on overclocking, especially of servers; do that at your own risk. But I agree that raising the bus speed from 133 to 166 gives a very noticeable increase in both speed and heat dissipation.)

How to enable Turbo Boost has been written about elsewhere. But! For 1C there are some nuances (not the most obvious ones). The difficulty is that the maximum effect of Turbo Boost shows itself when C-state is turned on. And the picture turns out something like this:

Note that the multiplier is at maximum, the core clock is gorgeous, the performance is high. But what will this mean for 1C in the end?

In the end it turns out that, by CPU performance tests, the variant with the multiplier of 23 is ahead; by Gilev's tests in the file version the performance with multipliers 22 and 23 is the same; but in the client-server version the variant with the multiplier of 23 is a horror (even with C-state set to level 7 it is still slower than with C-state turned off). Hence the recommendation: check both options for yourself and choose the better one. In any case, the difference between 49.5 and 53 parrots is quite noticeable, especially since it comes without much effort.

Conclusion: Turbo Boost must be enabled. Let me remind you that it is not enough to enable the Turbo Boost item in the BIOS; you also need to look at other settings (BIOS: QPI L0s, L1 - disable; demand scrubbing - disable; Intel SpeedStep - enable; Turbo Boost - enable. Control Panel - Power - High performance). And I would still (even for the file version) settle on the option with C-state turned off, even though the multiplier is lower there. We get something like this...

A rather controversial point is the memory frequency. For example, the memory frequency is sometimes presented as very influential. My tests did not reveal such a dependence. I will not compare DDR2/3/4; I will show the results of changing the frequency within the same line. The modules are the same, but in the BIOS we force lower frequencies.




And the test results: 1C 8.2.19.83; for the file version, a local ramdisk; for client-server, 1C and SQL on one computer, Shared Memory. Turbo Boost is disabled in both variants. 8.3 shows comparable results.

The difference is within measurement error. I specifically pulled out the CPU-Z screenshots to show that other parameters change along with the frequency - the same CAS Latency and RAS to CAS Delay - which levels out the frequency change. A difference will appear when memory modules are physically replaced, from slower to faster, but even there the numbers are not very significant.

2. Having dealt with the processor and memory of the client computer, we move on to the next very important place: the network. Many volumes have been written about network tuning and there are articles on Infostart, so I will not focus on this topic here. Before starting 1C testing, please make sure that iperf between the two computers shows the full bandwidth (for 1 Gbit cards - well, at least 850 Mbit, preferably 950-980) and that Gilev's advice has been followed. Then the simplest test of operation is, oddly enough, copying one large file (5-10 GB) over the network. An indirect sign of normal operation on a 1 Gbit/s network is an average copy speed of 100 MB/s; good operation - 120 MB/s. Note that the processor load can also be a weak point here. The SMB protocol on Linux parallelizes rather poorly: during operation it can quite easily "eat" one processor core and go no further.
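A hedged example of such a bandwidth check with iperf3 (the host name is a placeholder). On the server side:

iperf3 -s

On the client side:

iperf3 -c fileserver -t 30

On a healthy 1 Gbit/s link the report should show on the order of 930-950 Mbit/s; significantly lower numbers mean the network needs attention before any 1C testing makes sense.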

And further: with default settings, a Windows client works best with a Windows server (or even a Windows workstation) over the SMB/CIFS protocol, while a Linux client (Debian, Ubuntu; I did not look at the rest) works best with Linux over NFS (it works with SMB too, but NFS gives more parrots). The fact that a Win-Linux server on NFS copies a single file linearly faster does not mean anything. Tuning Debian for 1C is a topic for a separate article that I am not yet ready to write, although I can say that in the file version I even got slightly better performance than the Win version on the same equipment, but with postgres and over 50 users everything is still very bad for me.

The most important thing - something "burnt" administrators know but beginners do not take into account. There are many ways to set the path to the 1C database: you can use \\server\share, you can use \\192.168.0.1\share, you can do net use z: \\192.168.0.1\share (in some cases this method will also work, but not always) and then refer to drive Z. All these paths seem to point to the same place, but for 1C there is only one way that gives reasonably stable performance. So here is what you need to do right:

On the command line (or in policies, or however suits you), do net use DriveLetter: \\server\share. Example: net use m: \\server\bases. I specifically emphasize: NOT the IP address, but the server NAME. If the server is not visible by name, add it to DNS on the server, or locally to the hosts file. The path to the database must then refer to this drive (see the picture and the sketch below).
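A sketch of the full sequence (the server, share and address are placeholders):

rem map the share by NAME, persistently
net use m: \\server\bases /persistent:yes

rem if the name does not resolve, register it in DNS, or add a line
rem like this to C:\Windows\System32\drivers\etc\hosts:
rem 192.168.0.1    server

Then, in the infobase settings, point 1C at a folder on drive M:.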

And now I will show with numbers why this advice. Initial data: Intel X520-DA2, Intel 362, Intel 350 and Realtek 8169 cards; OS Win 2008 R2, Win 7, Debian 8; latest drivers and updates applied. Before testing, I made sure that iperf gives the full bandwidth (except the 10 Gbit cards, where I managed to squeeze out only 7.2 Gbit; I will find out why later, as the test server is not yet configured properly). The disks differ, but everywhere there is an SSD (a single disk specially inserted for testing, with nothing else loaded) or a RAID of SSDs. The 100 Mbit speed was obtained by limiting the settings of the Intel 362 adapter. There was no difference between 1 Gbit copper Intel 350 and 1 Gbit optics Intel X520-DA2 (obtained by limiting the adapter speed). Maximum performance, Turbo Boost disabled (just for comparability of results; Turbo Boost adds a little under 10% to good results and may not affect bad ones at all). Versions 1C 8.2.19.86 and 8.3.6.2076. I do not give all the numbers, only the most interesting ones, so that there is something to compare with.

Gilev test results in "parrots" (three runs per platform; "—" marks runs missing in the source):

Configuration (CIFS)                     | 1С 8.2 (8.2.19.83)    | 1C 8.3 (8.3.6.2076)
100 Mbit, Win 2008 - Win 2008, by IP     | 11.20 / 11.29 / 12.15 | 6.13 / 6.61 / —
100 Mbit, Win 2008 - Win 2008, by name   | 26.18 / 26.18 / 25.77 | 34.25 / 33.33 / 33.78
1 Gbit, Win 2008 - Win 2008, by IP       | 15.20 / 15.29 / 15.15 | 14.98 / 15.58 / 15.53
1 Gbit, Win 2008 - Win 2008, by name     | 43.86 / 43.10 / 43.10 | 43.10 / 43.86 / 43.48
1 Gbit, Win 2008 - Win 7, by name        | 40.65 / 40.65 / —     | 39.37 / 40.00 / 39.37
1 Gbit, Win 2008 - Debian, by name       | 37.04 / 36.76 / —     | 37.59 / 37.88 / 37.59
10 Gbit, Win 2008 - Win 2008, by IP      | 16.23 / 15.11 / 14.97 | 15.53 / 16.23 / —
10 Gbit, Win 2008 - Win 2008, by name    | 44.64 / 44.10 / 42.74 | 42.74 / 42.74 / 42.74

Conclusions (from the table and from personal experience; applies only to the file version):

  • Over the network, you can get quite decent numbers if the network is properly configured and the path is written correctly in 1C. Even the first Core i3s can give 40+ parrots, which is quite good, and these are not just parrots - in real work the difference is noticeable too. But! With several (more than 10) users the limitation will no longer be the network - 1 Gbit is still enough there - but locking during multi-user work (see Gilev).
  • Platform 1C 8.3 is many times more demanding of competent network setup. Basic settings - see Gilev, but keep in mind that anything can have an effect. I have seen speedups from uninstalling (not just disabling) the antivirus, from removing protocols like FCoE, from changing drivers to an older but Microsoft-certified version (especially relevant for cheap cards like Asus and D-Link), and from removing the second network card from the server. Lots of options; configure the network thoughtfully. There may well be a situation where platform 8.2 gives acceptable numbers and 8.3 gives two or more times fewer. Try playing with 8.3 platform versions; sometimes it gives a very big effect.
  • 1C 8.3.6.2076 (maybe a later one too; I have not pinned down the exact version) is still easier to set up over the network than 8.3.7.2008. With 8.3.7.2008 I managed to achieve normal network operation (in comparable parrots) only a few times and could not reproduce it for the general case. I did not dig deeply, but judging by the long dumps from Process Explorer, writes there do not go the way they do in 8.3.6.
  • Even though the load graph of a 100 Mbit/s network is small (one might say the network is free), the speed of work is still much lower than on 1 Gbit/s. The reason is network latency.
  • All else being equal (a well-functioning network), for 1C 8.2 an Intel-Realtek connection is 10% slower than Intel-Intel, while Realtek-Realtek can give sharp slumps out of nowhere. So if there is money, it is better to have Intel network cards everywhere; if there is none, put Intel at least on the server (your Captain Obvious). Besides, there are many times more instructions for tuning Intel network cards.
  • Default antivirus settings (drweb version 10, for example) take away about 8-10% of parrots. If configured properly (allowing the 1cv8 process to do everything, although that is not safe), the speed is the same as without an antivirus.
  • Do NOT blindly follow Linux gurus. A server with Samba is great and free, but if you put Win XP or Win 7 (or better, a server OS) on the server, then the file version of 1C will work faster. Yes, Samba, the protocol stack, the network settings and much more can be tuned well in Debian/Ubuntu, but that is recommended for specialists. It makes no sense to install Linux with default settings and then claim it is slow.
  • It is a good idea to test disks connected via net use with fio - see the sketch after this list. At least it will be clear whether the problems are with the 1C platform or with the network/disk.
  • For the single-user variant, I cannot think of tests (or a situation) where the difference between 1 Gbit and 10 Gbit would be visible. The only place where 10 Gbit gave better results for the file version was connecting disks via iSCSI, but that is a topic for a separate article. Still, I think 1 Gbit cards are enough for the file version.
  • Why 8.3 works noticeably faster than 8.2 on a 100 Mbit network, I do not understand, but it is a fact. All other equipment and settings are exactly the same; just in one case 8.2 is tested and in the other 8.3.
  • Untuned NFS win-win or win-lin gives 6 parrots; I did not include it in the table. After tuning I got 25, but it is unstable (the spread between measurements is more than 2 units). So far I cannot give recommendations on using Windows with the NFS protocol.
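The fio sketch mentioned in the list above, for a drive mapped via net use (note that fio on Windows requires escaping the colon in paths; the 4k random read/write mix roughly resembles file-mode 1C access, and all numbers here are illustrative):

fio --name=1c-like --filename=m\:\fio.tmp --rw=randrw --rwmixread=70 --bs=4k --size=512M --direct=1 --time_based --runtime=60

Compare the result with the same run against a local disk to separate network problems from disk problems.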
After all the settings and checks, run the test again from the client computer and rejoice at the improved result (if it worked out). If the result has improved - more than 30 parrots (and especially more than 40) - fewer than 10 users work simultaneously, and the working database still slows down, it is almost certainly a programmer's problem (or you have already reached the limit of the file version's capabilities).

Terminal server (the base lies on the server; clients connect over the network via the RDP protocol). Step-by-step algorithm:

  • We add the Gilev test database to the server in the same folder as the main databases. We connect from the same server and run the test. We remember the result.
  • In the same way as in the file version, we tune the processor. In the case of a terminal server, the processor generally plays the main role (assuming there are no obvious weak spots, such as lack of memory or a huge amount of unnecessary software).
  • Configuring network cards in the case of a terminal server has practically no effect on the operation of 1C. For "special" comfort, if your server gives out more than 50 parrots, you can play with newer versions of the RDP protocol, simply for user comfort: faster response and scrolling.
  • With a large number of actively working users (and here you can already try to connect 30 people to one base), it is very desirable to install an SSD drive. For some reason it is believed that the disk does not particularly affect 1C's operation, but all such tests are done with the controller's write cache enabled, which is wrong. The test base is small, it fits entirely in the cache - hence the high numbers. On real (large) databases everything will be completely different, so the cache was disabled for the tests.
For example, I checked the Gilev test with different disk options. I installed disks from what was at hand, just to show the tendency. The difference between 8.3.6.2076 and 8.3.7.2008 is small (in the ramdisk + Turbo Boost variant 8.3.6 gives 56.18 and 8.3.7.2008 gives 55.56; in other tests the difference is even smaller). Power scheme - maximum performance, Turbo Boost disabled (unless stated otherwise).
Gilev test results in "parrots" on the terminal server with different disk options (three runs per platform; "—" marks runs missing in the source; the second ramdisk column is the Turbo Boost variant mentioned above):

Disk variant                              | 1С 8.2 (8.2.19.83)    | 1C 8.3 (8.3.7.2008)
Raid 10, 4x SATA 7200 (ATA ST31500341AS)  | 21.74 / 21.65 / 21.65 | 33.33 / 33.46 / 35.46
Raid 10, 4x SAS 10k                       | 28.09 / 28.57 / 28.41 | 42.74 / 42.02 / 43.01
Raid 10, 4x SAS 15k                       | 32.47 / 32.05 / 31.45 | 45.05 / 45.05 / 44.64
Single SSD                                | 49.02 / 48.54 / 48.54 | 51.55 / 51.02 / 51.55
ramdisk                                   | 50.51 / 49.02 / 49.50 | 52.08 / 52.08 / 52.08
ramdisk (Turbo Boost)                     | 53.76 / 53.19 / 53.19 | 55.56 / 54.95 / 56.18
Cache enabled on the RAID controller      | 49.02 / — / —         | 51.55 / — / —
  • The enabled RAID controller cache eliminates all the difference between the disks; the numbers are the same for both SATA and SAS. Testing with it on a small amount of data is useless and not indicative.
  • For the 8.2 platform, the performance difference between the SATA and SSD options is more than twofold. This is not a typo. If you watch the performance monitor during the test on SATA drives, "Active disk time (%)" is clearly at 80-95. Yes, if you enable the write cache of the disks themselves the speed rises to 35, and with the RAID controller cache - to 49 (regardless of which disks are being tested at the moment). But these are synthetic cache parrots; in real work with large databases there will never be a 100% write-cache hit ratio.
  • The speed of even cheap SSDs (I tested an Agility 3) is enough for the file version. Write endurance is another matter and has to be assessed in each specific case; an Intel 3700 will obviously have it an order of magnitude higher, but the price is corresponding. And yes, I realize that when testing an SSD I am also largely testing its cache; the real results will be lower.
  • The most correct (from my point of view) solution is to allocate 2 SSDs in a mirrored RAID for the file base (or several file bases) and not put anything else there. Yes, in a mirror the SSDs wear out equally, and that is a minus, but at least you are somehow insured against errors in the controller electronics.
  • The main advantages of SSDs for the file version appear when there are many bases, each with several users. If there are 1-2 bases and about 10 users, SAS disks will be enough. (But in any case, watch the load on these disks, at least through perfmon.)
  • The main advantages of a terminal server: it can have very weak clients, and network settings affect a terminal server much less (your Captain Obvious again).
Conclusions: if you run the Gilev test on the terminal server (from the same disk where the working databases are) at the moments when the working database slows down, and the Gilev test shows a good result (above 30), then the slow operation of the main working database is most likely the programmer's fault.

If the Gilev test shows small numbers even though you have a high-frequency processor and fast disks, then the administrator needs to take at least perfmon, record all the results somewhere, and watch, observe, and draw conclusions. There is no universal advice here.

Client-server option.

Tests were carried out only on 8.2, because on 8.3 everything depends quite seriously on the version.

For testing, I chose different server options and networks between them to show the main trends.

Gilev test results in "parrots" for the client-server option, 1С 8.2, three runs per configuration (FC = Fiber channel; the last five configurations were labeled only "1C: Xeon 5650 =" in the source, so their individual variants could not be recovered and are given as variant 1-5):

Configuration                                  | Run 1 / Run 2 / Run 3
1C: Xeon 5520, SQL: Xeon E5-2630               | 16.78 / 17.12 / 16.72
1C: Xeon 5520, SQL: Xeon E5-2630, FC - SSD     | 18.23 / 17.06 / 16.89
1C: Xeon 5520, SQL: Xeon E5-2630, FC - SAS     | 16.84 / 14.53 / 13.44
1C: Xeon 5650, SQL: Xeon E5-2630               | 28.57 / 29.41 / 29.76
1C: Xeon 5650, SQL: Xeon E5-2630, FC - SSD     | 27.78 / 28.41 / 28.57
1C: Xeon 5650, SQL: Xeon E5-2630               | 32.05 / 31.45 / 32.05
1C: Xeon 5650 = (variant 1)                    | 34.72 / 34.97 / 34.97
1C: Xeon 5650 = (variant 2)                    | 36.50 / 36.23 / 36.23
1C: Xeon 5650 = (variant 3)                    | 23.26 / 23.81 / 23.26
1C: Xeon 5650 = (variant 4)                    | 40.65 / 40.32 / 40.32
1C: Xeon 5650 = (variant 5)                    | 39.37 / 39.06 / 39.06

It seems I have covered all the interesting options; if something else interests you, write in the comments and I will try to do it.

  • SAS on a storage system is slower than local SSDs, even though the storage system has large caches. SSDs, both local and on the storage system, work at comparable speeds in the Gilev test. I do not know of any standard multi-threaded test (testing not just writes but all the equipment) apart from the 1C load test from the MCC.
  • Changing the 1C server from a 5520 to a 5650 gave almost a doubling of performance. Yes, the server configurations do not match completely, but it shows the trend (nothing surprising).
  • Increasing the frequency on the SQL server certainly gives an effect, but not the same as on the 1C server; the MS SQL server is perfectly able (if asked) to use multiple cores and free memory.
  • Changing the network between 1C and SQL from 1 Gbit/s to 10 Gbit/s gives about 10% more parrots. I expected more.
  • Enabling Shared Memory still gives an effect, although not the 15% described in the article. Be sure to do it; it is quick and easy. If the SQL server was given a named instance during installation, then for 1C to work the server name must be specified not by FQDN (tcp/ip will work that way), not via localhost or just ServerName, but as ServerName\InstanceName, for example zz-test\zztest. (Otherwise there will be a DBMS error: Microsoft SQL Server Native Client 10.0: Shared Memory Provider: The shared memory library used to connect to SQL Server 2000 was not found. HRESULT=80004005, HRESULT=80004005, HRESULT=80004005, SQLSrvr: SQLSTATE=08001, state=1, Severity=10, native=126, line=0.)
  • With fewer than 100 users, the only point of splitting into two separate servers is the Win 2008 Std license (and older versions), which supports only 32 GB of RAM. In all other cases, 1C and SQL should definitely be installed on one server and given more memory (at least 64 GB). Giving MS SQL less than 24-28 GB of RAM is unjustified greed (if you think it has enough memory and everything works fine, maybe the file version of 1C would have been enough for you?).
  • How much worse the 1C + SQL bundle works in a virtual machine is a topic for a separate article (hint: noticeably worse). Even in Hyper-V things are not so clear-cut...
  • The balanced performance mode is bad. The results agree well with the file version.
  • Many sources say that debug mode (ragent.exe -debug) gives a strong drop in performance. Well, it does lower it, but I would not call 2-3% a significant effect.
There will be the least amount of advice for specific cases here, because slowdowns in the client-server mode are the most difficult case, and everything is tuned very individually. The easiest thing to say is that for normal operation you need a separate server ONLY for 1C and MS SQL, processors with the maximum frequency (above 3 GHz), SSD drives for the base, and plenty of memory (128+ GB), with no virtualization. If that helped - excellent, you are lucky (and there will be many such lucky ones; more than half of the problems are solved by an adequate upgrade). If not, any other options already require separate analysis and tuning.

I often get asked questions like:

  • why does the 1C server slow down?
  • the computer with 1C works very slowly
  • the 1C client is terribly slow

What to do and how to beat it - in order:

Clients work very slowly with the server version of 1C

In addition to slow 1C, there is also slow work with network files. The problem occurs both during normal operation and over RDP.

To solve this, after every installation of Windows 7 or Server 2008, I always run:

netsh int tcp set global autotuning=disabled

netsh int tcp set global autotuninglevel=disabled

netsh int tcp set global rss=disabled chimney=disabled

and the network works without problems

sometimes the following works better:

netsh interface tcp set global autotuning= HighlyRestricted

here is what the setup looks like

Configure Antivirus or Windows Firewall

How to configure the antivirus or Windows Firewall for the 1C server to work (for example, a bundle of 1C:Enterprise server and MS SQL 2008).

Add rules:

  • If the SQL server accepts connections on the standard TCP port 1433, allow it.
  • If the SQL port is dynamic, allow connections for the application %ProgramFiles%\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\sqlservr.exe.
  • The 1C server works on port 1541, the cluster on 1540, and the range 1560-1591. For completely mystical reasons, sometimes such a list of open ports still does not allow connections to the server. To make it work for sure, allow the range 1540-1591 (see the example commands below).
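A hedged example of the same rules from an elevated command prompt (the rule names are arbitrary):

netsh advfirewall firewall add rule name="MS SQL" dir=in action=allow protocol=TCP localport=1433
netsh advfirewall firewall add rule name="1C cluster" dir=in action=allow protocol=TCP localport=1540-1591

If SQL uses a dynamic port, create a program rule for sqlservr.exe instead of the port rule.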

Server / Computer Performance Tuning

For the computer to work at maximum performance, it must be configured accordingly:

1. BIOS settings

  • In the server BIOS, disable all processor power-saving options.
  • If there is "C1E" - be sure to DISABLE it!!
  • For some poorly parallelized tasks it is also recommended to disable Hyper-Threading in the BIOS.
  • In some cases (especially with HP!) you need to go into the server's BIOS and turn OFF all items whose names mention EIST, Intel SpeedStep or C1E.
  • Instead, find the processor-related items whose names mention Turbo Boost, and ENABLE them.
  • If the BIOS has a general power-saving mode setting, set it to the maximum performance mode (it may also be called "aggressive").

2. Scheme settings in the operating system - High performance

Servers with Intel Sandy Bridge architecture can dynamically change processor frequencies.
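The High performance scheme can also be set from the command line, which is convenient to script for a whole fleet of machines; SCHEME_MIN is the built-in alias of the High performance power plan:

powercfg /setactive SCHEME_MIN

rem verify which scheme is now active
powercfg /getactivescheme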