Why is 1C slower on the new server? Automation tips

This article discusses the main reasons why 1C slows down, freezes, or works slowly. The material is based on SoftPoint's many years of experience optimizing large IT systems built on the 1C + MS SQL combination.

To begin with, it is worth dispelling the myth that 1C is not intended for simultaneous work by a large number of users, a myth actively supported by forum users who find in such posts reassurance and a reason to leave everything as it is. With enough patience and knowledge, the system can be scaled to almost any number of users, and slow operation and freezing of 1C will no longer be a problem.

From practice: 1C v7.7 is the easiest to optimize (optimizing 1C 8.1, 8.2, or 8.3 is harder, since the application consists of three tiers). Bringing it up to 400 simultaneous users is a fairly typical project; up to 1500 is already difficult and requires serious work.

The second myth: to improve 1C performance and get rid of freezes, you need to install a more powerful server. In practice, in 95% of optimization projects acceptable performance can be achieved either without any upgrade at all or by updating a minor part of the equipment, for example by adding RAM. That said, the hardware must still be server-grade, especially the disk subsystem; an outdated disk subsystem is just one of the reasons why 1C works slowly.

The main limitation in multi-user work with 1C is the locking mechanism. It is locking in 1C, not the server hardware, that usually prevents a large number of people from working in the database. To overcome this problem, you have to work hard and change the locking logic in 1C: lower the lock granularity from table-level to row-level. Then, for example, posting a document will lock only that one document rather than all documents in the system.

Figure 1. The 1C lock queue in the PerfExpert monitoring system, with information about 1C users, the configuration module, and the specific line of code in that module.

Changing the 1C locking mechanism is a very complex technique. Not everyone can pull it off, and for the rest there is only one path left: optimizing the data structure and speeding up operations. The fact is that locking in 1C and the execution time of operations are closely related. For example, if posting a document takes 15 seconds, then with a large number of users there is a high probability that during that posting someone else will try to post a document and will wait on a lock. If you reduce the execution time to about 1 second, locking for this operation will drop significantly.
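The relationship can be illustrated with a back-of-the-envelope model (our own sketch, not a SoftPoint formula): the longer a posting transaction holds its locks, the more other postings will, on average, collide with it.

```python
# Rough sketch (assumed figures): if N users each post documents at some
# average rate, the expected number of other postings that overlap a
# T-second transaction grows roughly linearly with T.

def expected_overlaps(users: int, posts_per_user_per_hour: float,
                      posting_seconds: float) -> float:
    """Expected number of other postings that start while ours holds its locks."""
    rate_per_second = (users - 1) * posts_per_user_per_hour / 3600.0
    return rate_per_second * posting_seconds

# 50 users, 12 postings per user per hour (hypothetical numbers)
slow = expected_overlaps(50, 12, 15.0)  # 15-second posting
fast = expected_overlaps(50, 12, 1.0)   # 1-second posting
print(round(slow, 2), round(fast, 2))
```

With these assumed numbers, a 15-second posting collides with other postings roughly fifteen times as often as a 1-second one, which is exactly why shortening operations reduces lock contention.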

Even more dangerous from the locking point of view are group processings, which can take a long time to complete while holding 1C locks. Any processing that changes data, for example restoring a sequence or batch reposting of documents, locks tables and prevents other users from posting documents. Naturally, the faster these processings complete, the shorter the locks are held and the easier it is for users to work.

Heavy reports that perform read-only operations can also be dangerous in terms of locking, even though they would seem not to lock data. Such reports affect the intensity of locking in 1C by slowing down other operations in the system. That is, if a report is very heavy and takes up the bulk of the server's resources, it may turn out that operations that completed in 1 second before the report was launched take 15 seconds while it is running. Naturally, as the execution time of operations grows, so does the intensity of locking.

Figure 2. Load on the production server broken down by configuration module, across all users. Each module has its own color. There is a clear imbalance in the load generated by 1C.

The basic optimization rule is that document posting should take minimal time and perform only the necessary operations. For example, register queries are often used in posting code without specifying filter conditions. In such cases you need to specify filters that give the best selectivity, without forgetting that the register must have appropriate indexes for those filter conditions.
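As a toy illustration of why selectivity matters (hypothetical data, not the actual 1C register API): an unfiltered read touches every row of a register, while a filter on an indexed dimension touches only the matching rows.

```python
# Illustrative sketch: a register of 10,000 rows across 100 items.
# Without a filter the posting routine examines every row; with an
# "index" on the filtering dimension it touches only one item's rows.

from collections import defaultdict

rows = [{"item": f"item{i % 100}", "qty": 1} for i in range(10_000)]

# Index on the filtering dimension, built once
index = defaultdict(list)
for row in rows:
    index[row["item"]].append(row)

unfiltered_reads = len(rows)            # full scan: every row examined
filtered_reads = len(index["item7"])    # only the matching rows
print(unfiltered_reads, filtered_reads)
```

The same hundredfold difference is what a selective, indexed filter buys inside a posting transaction, and shorter reads mean shorter locks.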

Besides heavy reports, suboptimal MS SQL and MS Windows settings can also slow down operations and therefore increase the intensity of 1C locking. This problem occurs at 95% of clients, and note that these are servers of serious organizations whose support and configuration are handled by entire departments of highly qualified administrators.

The main reason for incorrect server settings is administrators' fear of changing anything on a running server, following the rule that the best is the enemy of the good. If an administrator changes server settings and problems begin, all the wrath of management will pour down on the careless administrator. So it is safer for him to leave everything as it is and not take a single step without orders from above than to experiment on his own responsibility.

The second reason is the lack of clear information on server optimization. There are plenty of opinions, often flatly contradicting one another, and every opinion on optimization has its opponents and its fanatics ready to defend it. As a result, the Internet and forums are more likely to confuse server configuration than to help. In such uncertainty, an administrator has even less desire to change anything on a server that is at least somehow working.

At first glance the picture is clear: you need to optimize everything that slows the 1C server down. But let's put ourselves in the place of such an optimizer. Say we have 1C 8.1/8.2/8.3 UPP with 50 users working simultaneously. One terrible day users begin to complain that 1C is slow, and we need to solve this problem.

First of all, we look at what is happening on the server: what if some particularly independent antivirus is running a full system scan? Inspection shows that everything is "fine": the server is loaded at 100%, and only by the sqlservr process.

From practice: a junior administrator turned on auto-update on the server on his own initiative; Windows and SQL Server happily updated, and after the update 1C users experienced massive slowdowns, or 1C simply froze.

The next step is to check which programs are loading MS SQL. Inspection shows that the load is generated by roughly 20 application server connections.

From practice: a program that kept data on a website up to date went into a loop, and instead of updating once every 4 hours it did so continuously, without pauses, heavily loading the server and locking the data.

Further analysis runs into great difficulties. We have already established that the load comes directly from 1C, but how do we understand what exactly users are doing, or at least who they are? If an organization has 10 1C users, you can simply walk around and ask what each of them is doing right now, but in our case there are fifty of them, scattered across several buildings.

In the example under consideration the situation is not yet that complex. But imagine that the slowdown happened not today but yesterday. Today everything is fine, yet you need to figure out why the operators could not work yesterday (naturally, they complained only on their way home, since they enjoy chatting all day about nothing working more than actually working). This case underlines the need for a server logging system that always keeps a history of the server's main operating parameters and from which the sequence of events can be reconstructed.

A logging system is simply an indispensable tool in system optimization. Add the ability to view the current status online and you get a server monitoring system. Every optimization project begins with collecting server state statistics to identify bottlenecks.

When we started working in optimization, we tried many server monitoring systems but, unfortunately, could not find one that solved this problem at the proper level, so we had to build our own. The result was a unique product, PerfExpert, which made it possible to automate and streamline IT system optimization. The program is distinguished by tight integration with 1C, the absence of any noticeable extra load, and repeatedly proven suitability for use in production conditions.

Returning to our example, the most likely outcome is this. The administrator says, "It's the programmers who wrote the configuration who are to blame." The programmers respond, "Everything is written well on our side; it's the server that isn't working well." And the cart, as the saying goes, is still there. As a result, 1C slows down, freezes, or works slowly.

In any case, to solve 1C performance problems we recommend first purchasing and using PerfExpert performance monitoring; this will let you make the right management decision and save money. The product suits both small 1C:Enterprise systems of up to 50 users and systems of 1000 users and more. Since July 2015, PerfExpert has held a 1C:Compatible certificate, has passed testing at Microsoft, and helps solve performance problems not only for 1C systems but also for other information systems based on MS SQL Server (Axapta, CRM Dynamics, DocVision, and others).

If you found this information useful, here are the recommended next steps:

- If you want to tackle the technical problems of 1C performance (1C 7.7, 1C 8.1, 1C 8.2, 1C 8.3) and other information systems on your own, there is a unique list of technical articles in our Almanac (locks and deadlocks, heavy CPU and disk load, database maintenance and index tuning are just a small part of the technical materials you will find there).

- If you would like to discuss performance issues with our expert or order the PerfExpert performance monitoring solution, leave a request and we will contact you as soon as possible.

We often receive questions about what makes 1C slow, especially after switching to version 8.3. Thanks to our colleagues from Interface LLC, here is a detailed account:

In our previous publications we have already touched on the impact of the disk subsystem's performance on 1C speed; however, that study concerned local use of the application on a single PC or terminal server. Meanwhile, most small deployments involve working with a file database over a network, where one of the users' PCs acts as the server, or a dedicated file server based on an ordinary, most often also inexpensive, computer.

A brief survey of Russian-language 1C resources showed that this topic is diligently avoided; when problems arise, the usual advice is to switch to client-server or terminal mode. It has also become almost conventional wisdom that configurations on the managed application work much slower than ordinary ones. As a rule, "iron-clad" arguments are given: "Accounting 2.0 just flew, but the troika barely crawls." There is some truth in these words, so let's try to figure it out.

Resource consumption: a first glance

Before starting this study, we set ourselves two goals: to find out whether managed-application configurations really are slower than ordinary ones, and which specific resources have the greatest impact on performance.

For testing we took two virtual machines running Windows Server 2012 R2 and Windows 8.1 respectively, allocating each 2 cores of the host Core i5-4670 and 2 GB of RAM, which roughly corresponds to an average office machine. The server was placed on a RAID 0 array of two WD Se drives, and the client on a similar array of general-purpose disks.

As test databases we took several Accounting 2.0 configurations, release 2.0.64.12, which we then updated to 3.0.38.52; all configurations were run on platform 8.3.5.1443.

The first thing that draws attention is the significantly increased size of the "troika's" infobase, as well as its much greater appetite for RAM:

We are ready to hear the usual "why did they cram that into the three," but let's not rush. Unlike users of client-server versions, which require a more or less qualified administrator, users of file versions rarely think about database maintenance. Nor do employees of the specialized companies that service (read: update) these databases think about it often.

Meanwhile, the 1C infobase is a full-fledged DBMS of its own format, and it too requires maintenance; there is even a tool for this called Testing and Correcting the Infobase. Perhaps the name played a cruel joke by implying that this is a tool for fixing problems, but low performance is also a problem, and restructuring and reindexing, along with table compression, are well-known database optimization techniques. Shall we check?

After applying the selected actions, the database sharply “lost weight”, becoming even smaller than the “two”, which no one had ever optimized, and RAM consumption also decreased slightly.

Later, after loading new classifiers and catalogs, creating indexes, and so on, the database size will grow; in general, "three" databases are larger than "two" databases. But that is not what matters most: if the second edition was content with 150-200 MB of RAM, the new edition needs half a gigabyte, and this figure should be taken into account when planning the resources needed to work with the program.

Network

Network bandwidth is one of the most important parameters for network applications, especially ones like 1C in file mode that move significant amounts of data across the network. Most small enterprise networks are built on inexpensive 100 Mbit/s equipment, so we began testing by comparing 1C performance on 100 Mbit/s and 1 Gbit/s networks.

What happens when a 1C file database is launched over the network? The client downloads a fairly large amount of information into temporary folders, especially on the first, "cold" start. At 100 Mbit/s we can expect to hit the channel limit, and loading can take considerable time, in our case about 40 seconds (one chart gridline equals 4 seconds).

The second launch is faster, since some of the data is cached and remains there until a reboot. Switching to a gigabit network significantly speeds up program loading, both "cold" and "hot," and the ratio between the values holds. We therefore decided to express the result in relative terms, taking the largest value of each measurement as the baseline:
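For readers who want to reproduce the charts' scale: conversion to relative values is simply dividing each measurement by the largest value in its series. A minimal sketch with made-up numbers:

```python
# Normalize a series of timings to the largest value in the series
# (illustrative figures, not the article's raw data).

def to_relative(seconds: list[float]) -> list[float]:
    longest = max(seconds)
    return [round(s / longest, 2) for s in seconds]

# e.g. cold start at 100 Mbit/s vs 1 Gbit/s
print(to_relative([40.0, 10.0]))  # [1.0, 0.25]
```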

As the graphs show, Accounting 2.0 loads twice as fast at any network speed, and moving from 100 Mbit/s to 1 Gbit/s speeds up loading roughly fourfold. In this mode there is no difference between the optimized and non-optimized "troika" databases.

We also checked the influence of network speed on heavy modes of operation, for example during group repostings. The result is likewise expressed in relative terms:

Here it gets more interesting: on a 100 Mbit/s network the optimized "three" database works at the same speed as the "two," while the non-optimized one performs twice as badly. On gigabit the ratios hold: the unoptimized "three" is likewise twice as slow as the "two," while the optimized one lags by a third. Moving to 1 Gbit/s cuts execution time threefold for edition 2.0 and in half for edition 3.0.

To evaluate the impact of network speed on everyday work, we used Performance Measurement, running a sequence of predetermined actions in each database.

It turns out that for everyday tasks network throughput is not a bottleneck: an unoptimized "three" is only 20% slower than a "two," and after optimization it is about the same amount faster; the advantages of thin-client mode are evident. Moving to 1 Gbit/s gives the optimized database no advantage, while the unoptimized one and the "two" start working faster, showing only a small difference between themselves.

The tests make it clear that the network is not a bottleneck for the new configurations, and the managed application can even run faster than the ordinary one. Moving to 1 Gbit/s is worth recommending if heavy tasks and database loading speed are critical for you; in other cases the new configurations let you work effectively even on slow 100 Mbit/s networks.

So why is 1C slow? We'll look into it further.

Server disk subsystem and SSD

In the previous article we achieved a 1C performance boost by placing the databases on an SSD. Perhaps the server's disk subsystem performance is insufficient? We measured the server's disk performance during a group reposting in two databases at once and got a rather optimistic result.

Despite the relatively large number of input/output operations per second (IOPS), 913, the queue length never exceeded 1.84, which is a very good result for a two-disk array. From this we can assume that a mirror of ordinary disks is enough for 8-10 network clients to work normally even in heavy modes.
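The figures are internally consistent: by Little's law, average queue length equals throughput times the average time each request spends in the disk subsystem, so these numbers imply roughly 2 ms per request. A quick check:

```python
# Sanity check of the disk numbers via Little's law (our interpretation):
# queue_length = IOPS x average time in the disk subsystem,
# so average latency = queue_length / IOPS.

iops = 913
queue_length = 1.84

avg_latency_ms = queue_length / iops * 1000
print(round(avg_latency_ms, 1))  # about 2 ms per I/O request
```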

So does a server need an SSD? The best way to answer is by testing, which we carried out using the same method; the network connection is 1 Gbit/s everywhere, and the result is again expressed in relative terms.

Let's start with the loading speed of the database.

It may seem surprising to some, but the SSD on the server does not affect the loading speed of the database. The main limiting factor here, as the previous test showed, is network throughput and client performance.

Let's move on to group reposting:

We have already noted that disk performance is quite sufficient even for heavy modes, so the SSD has no effect on speed here either, except for the unoptimized database, which on the SSD caught up with the optimized one. This once again confirms that optimization operations organize the information in the database, reducing the number of random I/O operations and increasing access speed.

In everyday tasks the picture is similar:

Only the non-optimized database benefits from the SSD. You can, of course, buy an SSD, but it would be far better to think about timely database maintenance. And do not forget to defragment the partition holding the infobases on the server.

Client disk subsystem and SSD

We discussed the influence of an SSD on the speed of locally installed 1C in the previous material; much of what was said there also applies to network mode. 1C does use disk resources quite actively, including for background and routine tasks. In the figure below you can see Accounting 3.0 accessing the disk quite actively for about 40 seconds after loading.

At the same time, be aware that for a workstation actively working with one or two infobases, the performance of an ordinary mass-market HDD is quite sufficient. Buying an SSD can speed up some processes, but you will not notice a radical acceleration in everyday work, since, for example, loading will be limited by network throughput.

A slow hard drive can slow down some operations, but by itself it cannot make the program slow.

RAM

Although RAM is now obscenely cheap, many workstations keep running with the amount of memory installed at purchase, and this is where the first problems lie in wait. Given that the average "troika" needs about 500 MB of memory, we can assume that a total of 1 GB of RAM will not be enough to work with the program.

We reduced the system memory to 1 GB and launched two information databases.

At first glance things are not so bad: the program has curbed its appetite and fit into the available memory. But let's not forget that the need for working data has not changed, so where did it go? It was pushed out to disk: cache, swap, and so on. The essence of this operation is that data not needed at the moment is moved from fast RAM, of which there is not enough, to slow disk memory.

What will this lead to? Let's see how system resources are used in heavy operations; for example, let's launch a group reposting in two databases at once. First on a system with 2 GB of RAM:

As we can see, the system actively uses the network to receive data and the processor to process it; disk activity is insignificant; during processing it increases occasionally, but is not a limiting factor.

Now let's reduce the memory to 1 GB:

The situation changes radically: the main load now falls on the hard drive, while the processor and network sit idle waiting for the system to read the necessary data from disk into memory and push unneeded data back out.

Even subjectively, working with two open databases on a system with 1 GB of memory proved extremely uncomfortable: catalogs and journals opened with significant delays and heavy disk access. For example, opening the Sales of Goods and Services journal took about 20 seconds, accompanied the whole time by high disk activity (highlighted with a red line).

To objectively evaluate the impact of RAM on the performance of managed-application configurations, we carried out three measurements: the loading speed of the first database, the loading speed of the second database, and a group reposting in one of the databases. Both databases are completely identical, created by copying the optimized database. The result is expressed in relative units.

The result speaks for itself: while loading time grows by about a third, which is still quite tolerable, the time to perform operations in the database triples; there can be no talk of comfortable work in such conditions. Incidentally, this is a case where buying an SSD can improve the situation, but it is much easier (and cheaper) to address the cause rather than the consequences and simply buy the right amount of RAM.

A lack of RAM is the main reason working with the new 1C configurations proves uncomfortable. Configurations with 2 GB of memory should be considered the minimum. Keep in mind that our case was a "greenhouse": a clean system running only 1C and the task manager. In real life a work computer usually also has a browser and an office suite open, an antivirus running, and so on, so budget 500 MB per database plus some reserve to avoid running out of memory and suffering a sharp drop in performance during heavy operations.
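The sizing advice above can be wrapped into a trivial helper; the 500 MB per database comes from the measurements above, while the reserve for the OS and other software is our assumption:

```python
# Back-of-the-envelope RAM sizing: about 500 MB per open infobase
# plus a reserve for the OS, browser, antivirus, etc. (the 1 GB
# reserve figure is an assumption, not a measured value).

def required_ram_mb(open_databases: int, per_db_mb: int = 500,
                    system_reserve_mb: int = 1024) -> int:
    return open_databases * per_db_mb + system_reserve_mb

print(required_ram_mb(2))  # 2024 MB: two open databases barely fit in 2 GB
```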

CPU

The central processor can without exaggeration be called the heart of the computer, since it ultimately performs all calculations. To evaluate its role, we ran another set of tests, the same as for RAM, reducing the number of cores available to the virtual machine from two to one; the test was performed twice, with 1 GB and 2 GB of memory.

The result was quite interesting and unexpected: the more powerful processor quite effectively took on the load when resources were scarce but gave no tangible benefit the rest of the time. 1C:Enterprise can hardly be called a CPU-hungry application; it is rather undemanding. And in difficult conditions the processor is burdened not so much by calculating the application's own data as by servicing overhead: extra input/output operations and the like.

Conclusions

So, why is 1C slow? First of all, a lack of RAM, in which case the main load falls on the hard drive and processor. And if those do not shine with performance either, as is usually true of office machines, we get the situation described at the start of the article: the "two" worked fine, but the "three" is ungodly slow.

In second place is network performance; a slow 100 Mbit/s channel can become a real bottleneck, but at the same time, the thin client mode is able to maintain a fairly comfortable level of operation even on slow channels.

Then pay attention to the disk drive. Buying an SSD is unlikely to be a good investment, but replacing the drive with a more modern one would be a good idea. The difference between hard drive generations can be judged from the following material: a review of two inexpensive Western Digital Blue drives, 500 GB and 1 TB.

And finally, the processor. A faster model certainly won't hurt, but there is little point in chasing higher performance unless this PC is used for heavy operations: group processings, heavy reports, month-end closing, and so on.

We hope this material helps you quickly answer the question "why is 1C slow" and solve it effectively and without extra costs.

Everyone who works with products on the 1C:Enterprise platform has probably heard the phrase "1C is slow": some complained about it, others fielded the complaints. In this article we will look at the most common causes of this problem and ways to solve it.

Let's use a metaphor: before finding out why someone failed to arrive somewhere, you should make sure they have legs to walk on. So let's start with the hardware and network requirements.

If Windows 7 is installed:

If you have Windows 8 or 10 installed:



Also remember that the disk must have at least 2 GB of free space, and the network connection must have a speed of at least 100 Mbit/s.

There is little point in listing server specifications for the client-server option, because in that case everything depends on the number of users and the specifics of the tasks they solve in 1C.

When choosing a server configuration, keep the following in mind:

  • One 1C server worker process consumes 4 GB on average (not to be confused with a user connection, since one worker process can host as many connections as you specify in the server settings);
  • Running 1C and the DBMS (especially MS SQL) on one physical server helps when processing large volumes of data (for example, month-end closing, calculating a budget from a model, etc.), but noticeably reduces performance for lightweight operations (for example, creating and posting a sales document);
  • Remember that the 1C server and the DBMS must be connected by a channel of at least 1 Gbit/s;
  • Use high-performance disks and do not combine the 1C server and DBMS roles with other roles (file server, AD domain controller, etc.).
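A rough sizing sketch based on the first rule above (the way users are split across worker processes and the RAM allowance for the DBMS are simplifying assumptions, not 1C recommendations):

```python
# Estimate server RAM from the "4 GB per 1C worker process" rule.
# users_per_worker and dbms_gb are assumed planning figures.

def server_ram_gb(users: int, users_per_worker: int,
                  gb_per_worker: int = 4, dbms_gb: int = 16) -> int:
    workers = -(-users // users_per_worker)  # ceiling division
    return workers * gb_per_worker + dbms_gb

print(server_ram_gb(100, 25))  # 4 workers * 4 GB + 16 GB = 32
```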

If 1C still "slows down" after checking the hardware

We are a small company of 7 people, and 1C is slow. We contacted specialists, and they said only the client-server option would save us. But such a solution is unacceptable for us; it is too expensive!

Carry out routine maintenance on the database*:

1. Launch the database in configurator mode.


2. Select “Administration” in the main menu, and in it – “Testing and correction”.


3. Check all the boxes as in the picture. Click Run.

*This procedure may take from 15 minutes to an hour, depending on the size of the database and the specifications of your PC.

If that does not help, set up a client-server connection, but without additional investment in hardware and software:

1. Choose the least loaded desktop computer in the office (not a laptop): it must have at least 4 GB of RAM and a network connection of at least 100 Mbit/s.

2. Activate IIS (Internet Information Services) on it. To do this:





3. Publish your database on this computer. There is available material on this topic on ITS, or contact a support specialist.

4. On user computers, configure access to the database through a thin client. To do this:


Open the 1C launch window.


Select your working database (here it is "Your Base") and click "Edit". Set the switch to "On a web server", enter in the line below the name or IP address of the server where IIS was activated and the name under which the database was published, then click "Next".


Set the "Basic startup mode" switch to "Thin Client" mode. Click "Done".

We are a fairly large company, though not a huge one, about 50-60 people. We use the client-server option, but 1C is terribly slow.

In this case, it is recommended to split the 1C server and the DBMS server onto two different servers. When separating them, be sure to remember: if they remain on the same physical server that was simply virtualized, the disks of these servers must be different, physically different! Also be sure to set up routine maintenance tasks on the DBMS server if it is MS SQL (details are described on the ITS website).

We are a fairly large company, more than 100 users. Everything is configured according to 1C's recommendations for this option, but when posting some documents 1C is very slow, and sometimes a locking error occurs. Maybe we should collapse the base?

A similar situation arises because of the size of one very specific accumulation or accounting register (more often accumulation): either the register never "closes", i.e., there are incoming movements but no outgoing ones, or the number of dimensions over which register balances are calculated is very large; there may even be a mix of the two. How do we determine which register is ruining everything?

We note the time when documents post slowly, or the time and the user who hit a locking error.

Open the registration log.



We find the document we need, at the right time, for the right user with the event type “Data.Post”.



We examine the entire execution block up to the transaction cancellation, if there was a locking error, or look for the longest step (where the time since the previous record exceeds a minute).
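The "longest step" search can be automated if the log is exported; the sketch below uses a hypothetical record format and simply flags consecutive entries more than a minute apart:

```python
# Find suspiciously long gaps between consecutive registration-log records.
# The timestamp format and event names here are illustrative, not the
# actual 1C registration log export format.

from datetime import datetime

events = [
    ("2015-07-01 10:00:05", "Data.Post start"),
    ("2015-07-01 10:00:06", "register write"),
    ("2015-07-01 10:02:40", "register write"),   # 154 s after the previous one
    ("2015-07-01 10:02:41", "Data.Post end"),
]

def long_gaps(records, threshold_seconds=60):
    parsed = [(datetime.strptime(t, "%Y-%m-%d %H:%M:%S"), what)
              for t, what in records]
    gaps = []
    for (t1, _), (t2, what) in zip(parsed, parsed[1:]):
        delta = (t2 - t1).total_seconds()
        if delta > threshold_seconds:
            gaps.append((what, delta))
    return gaps

print(long_gaps(events))  # [('register write', 154.0)]
```

The record whose gap stands out points at the register (or other table) whose write is dragging the whole transaction.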

After that we make a decision, keeping in mind that collapsing this particular register is in any case cheaper than collapsing the entire database.

We are a very large company, more than 1000 users and thousands of documents a day, with our own IT department and a huge fleet of servers; we have optimized queries several times, but 1C is still slow. We have apparently outgrown 1C and need something more powerful.

In the vast majority of such cases it is not 1C that is slow but the architecture of the solution. When choosing a new business program, remember that writing your business processes into the program is cheaper and easier than reshaping them to fit some other, even very expensive, program; and only 1C provides this opportunity. So it is better to ask: "How do we fix the situation? How do we make 1C 'fly' at such volumes?" Let's briefly look at several remedies:

  • Use the parallel and asynchronous programming technologies supported by 1C (background jobs, queries in a loop).
  • When designing the solution architecture, avoid using accumulation and accounting registers in the worst bottleneck areas.
  • When designing the data structure (accumulation and/or information registers), follow the rule: "the fastest table for writing and reading is a table with one column." What this means becomes clearer if you look at the standard RAUZ mechanism.
  • To process large volumes of data, use auxiliary clusters connected to the same database (but never during interactive work!). This lets you bypass standard 1C locks, making it possible to work with the database at almost the same speed as directly with SQL tools.

It is worth noting that 1C optimization for holdings and large companies is a topic for a separate great article, so stay tuned for updated materials on our website.

2. Features of the program. Often, even with optimal settings, 1C works very slowly. Performance drops especially sharply when the number of users working simultaneously with the database exceeds 4-5.

Who are you in the company?

How to solve the problem of slow 1C operation depends on who you are in the company. If you are a techie, just read on. If you are a director or an accountant, follow the special link ↓

Network Bandwidth

As a rule, not one but several users work with a single infobase. Meanwhile, data is constantly exchanged between the computer where the 1C client is installed and the computer hosting the infobase. The volume of this data is significant. A situation often arises where a local network running at 100 Mbit/s, the most common speed, simply cannot cope with the load. And again the user complains that the program is slow.

Each of these factors individually already significantly reduces the speed of the program, but the most unpleasant thing is that usually these things add up.

Now let's look at several solutions to the problem of slow 1C operation, and their cost, using the example of a local network of 10 average computers.

Solution one. Infrastructure modernization

This is perhaps the most obvious solution. Let's calculate its minimum cost.

At a minimum, each computer needs a 2 GB RAM module, which costs on average 1,500 rubles, and a network card supporting 1 Gbit/s, about 700 rubles. You will also need at least one router supporting 1 Gbit/s, approximately 4,000 rubles. Total: 26,000 rubles for equipment, excluding labor.
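The arithmetic behind the 26,000-ruble figure is easy to verify:

```python
computers = 10
ram_module = 1_500   # 2 GB RAM stick, rubles
nic = 700            # 1 Gbit/s network card, rubles
router = 4_000       # one 1 Gbit/s router, rubles

total = computers * (ram_module + nic) + router
print(total)  # 26000
```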

The speed can indeed increase significantly; however, it will no longer be possible to buy inexpensive computers for the office. Besides, this solution is not applicable for those who use Wi-Fi or want to work over the Internet: in their case the network speed can be tens of times lower. The thought arises: "Is it possible to run the entire program on one powerful server, so that the user's computer does not participate in complex calculations but simply transfers the image?" Then you could work even on very weak computers, even over low-bandwidth networks. Naturally, such solutions exist.

Solution two. Terminal Server

It gained great popularity back in the days of 1C 7. It runs on server versions of Windows and copes with our task perfectly. However, it has its pitfalls, namely the cost of licenses.

The operating system itself costs about 40,000 rubles. On top of that, for everyone who plans to work in 1C we will need a Windows Server CAL license, about 1,700 rubles, and a Windows Remote Desktop Services CAL license, about 5,900 rubles.

For a network of 10 computers this comes to 116,000 rubles for licenses alone. Add the cost of the server itself (at least 40,000 rubles) and the implementation work; even without them, the price of the licenses is impressive.
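Again, the license math checks out:

```python
users = 10
windows_server = 40_000   # Windows Server license, rubles
server_cal = 1_700        # Windows Server CAL, per user
rds_cal = 5_900           # Remote Desktop Services CAL, per user

licenses_total = windows_server + users * (server_cal + rds_cal)
print(licenses_total)  # 116000
```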

Solution three. 1C Enterprise Server

1C has developed its own solution to this problem, which can significantly increase the speed of the program. But there is a nuance here too.

The fact is that such a solution costs from 50,000 to 80,000 rubles, depending on the edition, which is quite expensive for a company with up to 15 users. Great hopes were placed on the "1C Enterprise mini-server", which, according to 1C, is aimed at small businesses and costs around 10,000-15,000 rubles.

However, when it went on sale, the product proved a big disappointment: the maximum number of users the mini-server could serve was only 5.

As one 1C programmer wrote on a forum: "It is still unclear why 1C chose exactly 5 connections! The problems only begin with 4 users, and with five it all ends. If you want to connect a sixth person, pay another 50 thousand. They could have allowed at least 10 connections..."

Of course, the mini-server found its customers too. However, for companies where 5 or more people work with 1C, a simple and inexpensive solution had still not appeared.

In addition to the acceleration methods described above, there is another one that is ideal for the 5-15 user segment, namely web access to 1C in file mode.

Solution four. Web access for 1C in file mode

The principle of operation is as follows: the web server role is additionally installed on the computer, and the infobase is published on it.

Naturally, this must be either the most powerful computer on the network or a separate machine dedicated to this role. After that, you can work with 1C in web server mode. All heavy operations are performed on the server side, while the traffic transmitted over the network, like the load on the client's computer, is minimal.

Thus, even very weak machines can be used to work in 1C, and network bandwidth does not become critical. Our tests have shown that you can work comfortably through mobile internet on a cheap tablet without experiencing any discomfort.

This option is inferior to the 1C Enterprise server in operating speed, but the difference is practically invisible up to 15-20 users. By the way, the web server can be IIS (for Windows) or Apache (for Linux), and both of these solutions are free!

Despite its obvious advantages, this method of optimizing 1C has not gained much popularity.

I can’t say for sure, but most likely this is due to two reasons:

  • Rather weak coverage in the technical documentation
  • It sits at the intersection of the system administrator's and the 1C programmer's areas of responsibility

Usually, when a system administrator is approached about low speed, he suggests upgrading the infrastructure or a terminal server; when a 1C specialist is asked, he offers a 1C Enterprise server. So if the specialist responsible for infrastructure and the specialist responsible for 1C in your company work hand in hand, you can safely use a web-server-based solution.

Let's speed up 1C. Remotely, quickly and without your participation

We know how to speed up 1C without disrupting the customer's work. We delve into the problem, do our job and leave. If you want the program to just work normally, contact us. We'll figure it out.

Leave a request and receive a free consultation on accelerating the program.

Lately, users and administrators have increasingly been complaining that new 1C configurations built on the managed application are slow, in some cases unacceptably slow. Clearly, the new configurations contain new functions and capabilities and are therefore more resource-hungry, but most users do not understand what primarily affects 1C operation in file mode. Let's try to fill this gap.

In a previous article we already touched on the impact of disk subsystem performance on 1C speed, but that study concerned local use of the application on a separate PC or a terminal server. Meanwhile, most small deployments involve working with a file database over a network, where one of the users' PCs, or a dedicated file server based on a regular, most often also inexpensive, computer, acts as the server.

A quick survey of Russian-language resources on 1C showed that this issue is diligently avoided; when problems arise, the usual advice is to switch to client-server or terminal mode. It has also become almost commonly accepted that configurations on the managed application work much slower than conventional ones. As a rule, the argument is "ironclad": "Accounting 2.0 just flew, and the 'three' barely crawls." There is some truth in these words, so let's try to figure it out.

Resource consumption, first glance

Before starting this study, we set two goals: to find out whether managed-application configurations are actually slower than conventional ones, and which specific resources have the primary impact on performance.

For testing we took two virtual machines running Windows Server 2012 R2 and Windows 8.1 respectively, giving each 2 cores of the host Core i5-4670 and 2 GB of RAM, which roughly corresponds to an average office machine. The server was placed on a two-drive RAID 0 array, and the client on a similar array of general-purpose disks.

As experimental databases we selected several Accounting 2.0 configurations, release 2.0.64.12, which were then updated to 3.0.38.52; all configurations were run on platform 8.3.5.1443.

The first thing that catches the eye is the significantly increased size of the "three's" infobase, as well as its much greater appetite for RAM:

We are ready to hear the usual "what on earth did they stuff into this 'three'", but let's not rush. Unlike users of client-server versions, which require a more or less qualified administrator, users of file versions rarely think about database maintenance. Nor do employees of the specialized companies that service (read: update) these databases.

Meanwhile, a 1C infobase is a full-fledged DBMS of its own format, which also requires maintenance; there is even a tool for this called Testing and Correction of the Infobase. Perhaps the name played a cruel joke, implying it is a tool for troubleshooting problems, but low performance is also a problem, and restructuring and reindexing, along with table compression, are well-known database optimization techniques. Shall we check?

After applying the selected actions, the database sharply “lost weight”, becoming even smaller than the “two”, which no one had ever optimized, and RAM consumption also decreased slightly.

Later, after loading new classifiers and reference books, creating indexes, etc., the size of the database will grow; in general, "three" databases are larger than "two" databases. But that is not the main point: if the second edition was content with 150-200 MB of RAM, the new edition needs half a gigabyte, and this value should be taken into account when planning resources for working with the program.

Network

Network bandwidth is one of the most important parameters for network applications, especially for 1C in file mode, which moves significant amounts of data across the network. Most small-business networks are built on inexpensive 100 Mbit/s equipment, so we began testing by comparing 1C performance in 100 Mbit/s and 1 Gbit/s networks.

What happens when you launch a 1C file database over the network? The client downloads a fairly large amount of information into temporary folders, especially on the first, "cold" start. At 100 Mbit/s we predictably run into channel bandwidth, and loading can take considerable time, in our case about 40 seconds (one graph division equals 4 seconds).

The second launch is faster, since some of the data is cached and remains there until reboot. Switching to a gigabit network significantly speeds up program loading, both "cold" and "hot", and the ratio of values is preserved. Therefore we decided to express the result in relative values, taking the largest value of each measurement as 100%:
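The normalization used for the graphs is straightforward: every measurement in a series is divided by the series maximum. A small Python sketch, with illustrative numbers rather than our actual measurements:

```python
def to_relative(measurements):
    """Express each value as a percentage of the largest one in the series."""
    peak = max(measurements.values())
    return {name: round(100 * value / peak, 1)
            for name, value in measurements.items()}

# Illustrative cold-start times in seconds (not the real test data)
cold_start = {"100 Mbit/s": 40.0, "1 Gbit/s": 10.0}
print(to_relative(cold_start))  # {'100 Mbit/s': 100.0, '1 Gbit/s': 25.0}
```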

As the graphs show, Accounting 2.0 loads twice as fast at any network speed, and moving from 100 Mbit/s to 1 Gbit/s speeds up loading by a factor of four. There is no difference between the optimized and non-optimized "three" databases in this mode.

We also checked the influence of network speed on heavy modes, for example, group re-posting. The result is likewise expressed in relative values:

Here it gets more interesting: the optimized "three" database on a 100 Mbit/s network works at the same speed as the "two", while the non-optimized one is twice as slow. On gigabit the ratios hold: the unoptimized "three" is again twice as slow as the "two", while the optimized one lags by a third. Moving to 1 Gbit/s cuts execution time threefold for edition 2.0 and in half for edition 3.0.

To evaluate the impact of network speed on everyday work, we used Performance Measurement, performing a predefined sequence of actions in each database.

As it turns out, for everyday tasks network throughput is not a bottleneck: the unoptimized "three" is only 20% slower than the "two", and after optimization it turns out to be about as much faster; the advantages of thin-client mode are evident. Moving to 1 Gbit/s gives the optimized database no advantage, while the unoptimized one and the "two" start working faster, showing a small difference between themselves.

The tests make it clear that the network is not a bottleneck for the new configurations, and the managed application even runs faster than the conventional one. Moving to 1 Gbit/s is recommended if heavy tasks and database loading speed are critical for you; in other cases the new configurations let you work effectively even on slow 100 Mbit/s networks.

So why is 1C slow? We'll look into it further.

Server disk subsystem and SSD

In the previous article we achieved an increase in 1C performance by placing databases on an SSD. Perhaps the performance of the server's disk subsystem is insufficient? We measured the server's disk performance during a group re-posting in two databases at once and got a rather optimistic result.

Despite the relatively large number of input/output operations per second (913 IOPS), the queue length never exceeded 1.84, a very good result for a two-disk array. From this we can assume that a mirror of ordinary disks will suffice for the normal operation of 8-10 network clients in heavy modes.

So is an SSD needed on the server? The best way to answer is by testing, which we carried out using the same method; the network connection is 1 Gbit/s everywhere, and the result is again expressed in relative values.

Let's start with the loading speed of the database.

It may seem surprising to some, but the SSD on the server does not affect the loading speed of the database. The main limiting factor here, as the previous test showed, is network throughput and client performance.

Let's move on to group re-posting:

We have already noted that disk performance is quite sufficient even for heavy modes, so SSD speed has no effect here either, except for the unoptimized database, which on the SSD caught up with the optimized one. This once again confirms that optimization operations organize the information in the database, reducing the number of random I/O operations and increasing access speed.

In everyday tasks the picture is similar:

Only the non-optimized database benefits from the SSD. You can, of course, buy an SSD, but it would be far better to think about timely database maintenance. Also, do not forget to defragment the partition holding the infobases on the server.

Client disk subsystem and SSD

We analyzed the influence of an SSD on the speed of locally installed 1C in a previous article; much of what was said there also holds for network mode. 1C uses disk resources quite actively, including for background and routine tasks. In the figure below you can see Accounting 3.0 accessing the disk quite actively for about 40 seconds after loading.

At the same time, you should realize that for a workstation actively working with one or two infobases, the performance of a regular mass-market HDD is quite sufficient. Buying an SSD can speed up some processes, but you won't notice a radical speed-up in everyday work, since, for example, loading will be limited by network bandwidth.

A slow hard drive can slow down some operations, but in itself cannot cause a program to slow down.

RAM

Although RAM is now obscenely cheap, many workstations still run with the amount of memory installed at purchase. This is where the first problems lie in wait. Given that the average "three" requires about 500 MB of memory, 1 GB of total RAM will clearly not be enough to work with the program.

We reduced the system memory to 1 GB and launched two information databases.

At first glance things are not so bad: the program has curbed its appetite and fit into the available memory. But let's not forget that the need for working data has not changed, so where did it go? To disk: cache, swap file, etc. The essence of the operation is that data not needed at the moment is moved from fast RAM, of which there is not enough, to slow disk memory.

What will this lead to? Let's see how system resources are used in heavy operations; for example, let's start a group re-posting in two databases at once. First on a system with 2 GB of RAM:

As we can see, the system actively uses the network to receive data and the processor to process it; disk activity is insignificant, rising occasionally during processing but never becoming a limiting factor.

Now let's reduce the memory to 1 GB:

The situation changes radically: the main load now falls on the hard drive, while the processor and network idle, waiting for the system to read the necessary data from disk into memory and push the unneeded data back.

At the same time, even subjectively, working with two open databases on a system with 1 GB of memory proved extremely uncomfortable: catalogs and document journals opened with significant delay and heavy disk access. For example, opening the Sales of Goods and Services journal took about 20 seconds, accompanied the whole time by high disk activity (highlighted with a red line).

To objectively evaluate the impact of RAM on the performance of managed-application configurations, we carried out three measurements: the loading speed of the first database, the loading speed of the second database, and a group re-posting in one of the databases. Both databases are completely identical, created by copying the optimized database. The result is expressed in relative units.

The result speaks for itself: if the loading time grows by about a third, which is still quite tolerable, the time to perform operations in the database triples; there can be no talk of comfortable work in such conditions. This, by the way, is a case where buying an SSD can improve the situation, but it is much easier (and cheaper) to deal with the cause rather than the consequences and simply buy the right amount of RAM.

Lack of RAM is the main reason working with the new 1C configurations is uncomfortable. Systems with 2 GB of memory on board should be considered the minimum suitable. Also keep in mind that our case created "greenhouse" conditions: a clean system with only 1C and the task manager running. In real life a work computer usually also has a browser and an office suite open, an antivirus running, and so on, so budget 500 MB per database plus some reserve, so that heavy operations do not run into a memory shortage and a sharp drop in performance.
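The sizing rule from this paragraph can be written as a tiny estimator. The per-database figure comes from the article; the OS/background-software and reserve numbers are illustrative assumptions that you should adjust for your environment:

```python
def ram_needed_mb(open_infobases,
                  os_and_apps_mb=1024,  # assumption: OS, browser, office suite, antivirus
                  per_base_mb=500,      # the article's figure per "three" database
                  reserve_mb=512):      # assumption: headroom for heavy operations
    """Rough minimum RAM for a workstation running managed-application 1C."""
    return os_and_apps_mb + open_infobases * per_base_mb + reserve_mb

print(ram_needed_mb(2))  # 2536
```

Even with a single open database this estimate lands above 2 GB, consistent with the article's conclusion that 2 GB systems are the bare minimum.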

CPU

Without exaggeration, the central processor can be called the heart of the computer, since it ultimately performs all the calculations. To evaluate its role, we ran another set of tests, the same as for RAM, reducing the number of cores available to the virtual machine from two to one; the test was performed twice, with 1 GB and 2 GB of memory.

The result was quite interesting and unexpected: the more powerful processor quite effectively took on the load when resources were scarce, while the rest of the time it gave no tangible benefit. 1C Enterprise in file mode can hardly be called a processor-hungry application; it is rather undemanding. And in difficult conditions the processor is burdened not so much by the application's own calculations as by servicing overhead: additional input/output operations and so on.

Conclusions

So, why is 1C slow? First of all, lack of RAM: the main load then falls on the hard drive and processor. And if those do not shine with performance, as is usually the case in office configurations, we get the situation described at the beginning of the article: the "two" worked fine, but the "three" is ungodly slow.

In second place is network performance; a slow 100 Mbit/s channel can become a real bottleneck, but at the same time, the thin client mode is able to maintain a fairly comfortable level of operation even on slow channels.

Then pay attention to the disk drive: buying an SSD is unlikely to be a good investment, but replacing the drive with a more modern one would be a good idea. The difference between generations of hard drives was examined in a separate material.

And finally the processor. A faster model, of course, will not be superfluous, but there is little point in increasing its performance unless this PC is used for heavy operations: group processing, heavy reports, month-end closing, etc.

We hope this material will help you quickly understand the question “why 1C is slow” and solve it most effectively and without extra costs.
