1C 8.3 client freezes when searching. Automation Tips

Users often complain that “1C 8.3 is slow”: document forms open slowly, documents take a long time to post, the program starts slowly, reports take a long time to generate, and so on.

Moreover, such “glitches” can occur in different programs.

The reasons vary: a neglected database, a weak computer or server, or an incorrectly configured 1C server.

In this article I want to look at one of the simplest and most common reasons for slow program operation: scheduled tasks. This instruction is relevant for users of file databases with 1-2 users, where there is no competition for resources.

If you are interested in more serious optimization of client-server system operation, see the corresponding section of the site.

Where are the scheduled tasks in 1C 8.3?

I had barely loaded the program when 1C had already executed a pile of background jobs. You can view them in the “Administration” menu, under “Support and Maintenance”:


This is what the window with completed tasks looks like:

And here is the full list of all the scheduled tasks that get launched:

Among these tasks are things like loading currency rates, loading various classifiers, checking whether the program version is current, and so on. I have almost no use for any of these tasks: I don’t keep currency records, I control versions myself, and I load classifiers as needed.

Accordingly, it is in my interest (and in most cases yours) to disable the unnecessary tasks.

Disabling scheduled and background tasks in 1C 8.3

1) Look at the amount of memory used by rphost on the 1C server. If you have the x32 version of the server, the process can use at most 1.75 GB of RAM.
If there is not enough memory, the server cannot accept new connections or hangs when the current session requires additional memory.
www.viva64.com/ru/k/0036
2) Check the “Working server” settings; they may be incorrect. I had this problem and the server kept freezing. My settings are attached. The server is allocated 11 GB.
3) There may be problems in the PostgreSQL setup.

Provide the characteristics of your server, the database sizes, and the PostgreSQL configs. It's hard to say anything without that information.

My PostgreSQL config: https://drive.google.com/file/d/0B2qGCc-vzEVDMERVW...
This config is selected for the available amount of RAM.
PostgreSQL installed on Linux, 3 GB RAM, 3 CPU cores.
Server 1C8: 11 GB RAM, 5 CPU cores
4 databases, approximately 1 GB each (as uploaded to .dt)
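For reference, here is a hedged sketch of what a config tuned for roughly 3 GB of RAM might set. The specific values are my assumptions for illustration, not taken from the linked file; ALTER SYSTEM is standard PostgreSQL:

-- Sketch of PostgreSQL memory settings for a server with ~3 GB RAM.
-- The values are assumptions; size them to your actual workload.
ALTER SYSTEM SET shared_buffers = '768MB';        -- ~25% of RAM; needs a restart
ALTER SYSTEM SET effective_cache_size = '2GB';    -- planner hint, not an allocation
ALTER SYSTEM SET work_mem = '16MB';               -- per sort/hash operation
ALTER SYSTEM SET maintenance_work_mem = '128MB';  -- VACUUM, REINDEX
SELECT pg_reload_conf();                          -- applies the reloadable settings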

Provide all the characteristics of your servers: the 1C8 server and the database server, physical or virtual, the operating system, the amount of RAM on each server, the CPU type, how much RAM the rphost processes take up and how many there are. Are you using a RAID array?

I used PostgreSQL myself previously, but I ran into some problems running the database on it and recently switched to MS SQL.

Your server is not bad for these databases. To use PostgreSQL, you need a very good understanding of its configuration. When the databases are small, many configuration errors are forgiven. When we first started implementing 1C + PostgreSQL, we also had very frequent problems with database operation (frequent freezes, slow work). PostgreSQL is better used on Linux rather than Windows. I am not a database specialist myself; to set up the database server, we hired a specialist from 1Sbit, he set it up for us, and there were no problems in operation after that.

Advice:
You have large databases; don't skimp, hire a database specialist who can set things up for you. One person cannot be an expert in everything.

1) When did you last check and reindex the database itself (VACUUM and REINDEX)? See the sketch after this list.
2) When did you last test and correct the database using the 1C tools?
3) Is the database log file placed on a separate HDD?
4) Is the HDD heavily loaded?
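For point 1, a minimal PostgreSQL sketch (the database name is a placeholder; run it in a maintenance window, since REINDEX locks the tables it rebuilds):

-- Reclaim dead rows and refresh planner statistics in the current database.
VACUUM (VERBOSE, ANALYZE);
-- Rebuild all indexes of the database; "base1c" is a placeholder name.
REINDEX DATABASE base1c;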

Consider switching to MS SQL; it often requires “virtually” no configuration and is easier to use. Unlike PostgreSQL, MS SQL is ready to work out of the box, while PostgreSQL needs to be configured.

If you have any questions, write; maybe I can help with something. Skype: tisartisar

Hire a database setup specialist

Why we switched to MS SQL:
We use the UT configuration, and when closing the month, errors sometimes arose that could not be resolved. If we transferred the database to file mode and ran the month-end closing, everything closed normally; when the same database was loaded into the PostgreSQL server, errors occurred while calculating cost. At that time we were half a year behind on month-end closings because of these floating errors. We created a test database on MS SQL, and the month that could not be closed on PostgreSQL closed on MS SQL. Price rounding in the price list also does not work correctly on PostgreSQL. In fact, running 1C on PostgreSQL is supported, but using MS SQL is still recommended.
Because of this, it was decided to switch to MS SQL, because stable operation of 1C is worth more.

I'm glad I could help, please contact me if you have any questions or problems.

1) How much memory is allocated to the MS SQL server? This is configured in MS SQL Server itself; see the sketch after this list.
2) Test and correct the database with the 1C tools regularly.
3) An article on how to set up backups and maintenance: this is important and needs to be done regularly. I do it every day. Check out all 3 parts of the guide.
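On point 1, memory for MS SQL Server is capped with sp_configure; a minimal sketch (the 4096 MB value is an assumption, leave enough RAM for the OS and rphost):

-- Cap SQL Server memory so the OS and 1C processes keep enough RAM.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;  -- example value
RECONFIGURE;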

Lately, users and administrators have increasingly begun to complain that new 1C configurations developed on the basis of the managed application are slow, in some cases unacceptably slow. It is clear that new configurations contain new functions and capabilities, and are therefore more resource-demanding, but most users do not understand what primarily affects the operation of 1C in file mode. Let's try to fill this gap.

In a previous article we already touched on the impact of disk subsystem performance on 1C speed; however, that study concerned local use of the application on a separate PC or terminal server. Meanwhile, most small deployments involve working with a file database over a network, where one of the users' PCs serves as the server, or a dedicated file server based on a regular, most often also inexpensive, computer.

A small survey of Russian-language 1C resources showed that this question is diligently avoided; if problems arise, the usual advice is to switch to client-server or terminal mode. It has also become almost generally accepted that configurations on the managed application work much slower than ordinary ones. As a rule, the arguments are “iron-clad”: “Accounting 2.0 just flew, and the ‘troika’ barely moves.” Of course, there is some truth in these words, so let's try to figure it out.

Resource consumption, first glance

Before we began this study, we set ourselves two goals: to find out whether managed application-based configurations are actually slower than conventional configurations, and which specific resources have the primary impact on performance.

For testing, we took two virtual machines running Windows Server 2012 R2 and Windows 8.1 respectively, allocating each 2 cores of the host Core i5-4670 and 2 GB of RAM, which corresponds to roughly an average office machine. The server was placed on a RAID 0 array of two disks, and the client on a similar array of general-purpose disks.

As experimental databases, we selected several Accounting configurations: release 2.0.64.12, which was then updated to 3.0.38.52; all of them were run on platform 8.3.5.1443.

The first thing that attracts attention is the size of the “troika’s” infobase, which has grown significantly, as well as its much greater appetite for RAM:

We are ready to hear the usual “what did they stuff into that ‘three’”, but let's not rush. Unlike users of client-server versions, which require a more or less qualified administrator, users of file versions rarely think about database maintenance. Employees of specialized companies servicing (read: updating) these databases rarely think about it either.

Meanwhile, the 1C infobase is a full-fledged DBMS of its own format, which also requires maintenance, and there is even a tool for this called Testing and Correcting the Infobase. Perhaps the name has played a cruel joke on it, implying that it is a tool for troubleshooting problems; but low performance is also a problem, and restructuring and reindexing, along with table compression, are well-known database optimization tools. Shall we check?

After applying the selected actions, the database sharply “lost weight”, becoming even smaller than the “two”, which no one had ever optimized, and RAM consumption also decreased slightly.

Subsequently, after loading new classifiers and directories, creating indexes, etc., the size of the database will grow; in general, “three” databases are larger than “two” databases. But that is not the most important thing: if the second edition was content with 150-200 MB of RAM, the new edition needs half a gigabyte, and this value should be taken into account when planning the resources needed to work with the program.

Network

Network bandwidth is one of the most important parameters for network applications, especially for 1C in file mode, which moves significant amounts of data across the network. Most small-business networks are built on inexpensive 100 Mbit/s equipment, so we began testing by comparing 1C performance in 100 Mbit/s and 1 Gbit/s networks.

What happens when a 1C file database is started over the network? The client downloads a fairly large amount of information into temporary folders, especially on the first, “cold” start. At 100 Mbit/s we predictably run into the channel width, and the download can take considerable time, in our case about 40 seconds (one graph division is 4 seconds).
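As a rough back-of-the-envelope check (our own estimate, not a measured figure): 100 Mbit/s is about 12.5 MB/s, so a 40-second cold start corresponds to roughly 12.5 MB/s × 40 s ≈ 500 MB pulled over the network.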

The second launch is faster, since some of the data is stored in the cache and remains there until a reboot. Switching to a gigabit network significantly speeds up program loading, both “cold” and “hot”, and the ratio of values is preserved. We therefore decided to express the result in relative values, taking the largest value of each measurement as the base:

As you can see from the graphs, Accounting 2.0 loads twice as fast at any network speed, and the transition from 100 Mbit/s to 1 Gbit/s speeds up loading by four times. There is no difference between the optimized and non-optimized “troika” databases in this mode.

We also checked the influence of network speed on operation in heavy modes, for example, during group re-posting. The result is also expressed in relative values:

Here it is more interesting: the optimized “three” database in a 100 Mbit/s network works at the same speed as the “two”, while the non-optimized one shows results twice as bad. On gigabit the ratios hold: the non-optimized “three” is also half as fast as the “two”, and the optimized one lags behind by a third. Also, the transition to 1 Gbit/s cuts the execution time threefold for edition 2.0 and in half for edition 3.0.

To evaluate the impact of network speed on everyday work, we used Performance Measurement, performing a sequence of predetermined actions in each database.

As it turns out, for everyday tasks network throughput is not a bottleneck: a non-optimized “three” is only 20% slower than a “two”, and after optimization it is about the same amount faster; the advantages of working in thin client mode are evident. The transition to 1 Gbit/s gives the optimized database no advantages, while the non-optimized one and the “two” begin to work faster, showing a small difference between themselves.

From the tests performed, it is clear that the network is not a bottleneck for the new configurations, and the managed application even runs faster than the ordinary one. You can also recommend switching to 1 Gbit/s if heavy tasks and database loading speed are critical for you; in other cases, the new configurations allow you to work effectively even in slow 100 Mbit/s networks.

So why is 1C slow? We'll look into it further.

Server disk subsystem and SSD

In the previous article we achieved an increase in 1C performance by placing databases on an SSD. Perhaps the performance of the server's disk subsystem is insufficient? We measured the server's disk performance during a group re-posting in two databases at once and got a rather optimistic result.

Despite a relatively large number of input/output operations per second (913 IOPS), the queue length did not exceed 1.84, which is a very good result for a two-disk array. Based on this, we can assume that a mirror of ordinary disks is enough for the normal operation of 8-10 network clients in heavy modes.

So is an SSD needed on the server? The best way to answer this question is through testing, which we carried out using the same method, with a 1 Gbit/s network connection everywhere; the result is again expressed in relative values.

Let's start with the loading speed of the database.

It may seem surprising to some, but the SSD on the server does not affect the loading speed of the database. The main limiting factor here, as the previous test showed, is network throughput and client performance.

Let's move on to group re-posting:

We have already noted above that disk performance is quite sufficient even for heavy modes, so SSD speed has no effect here either, except for the non-optimized database, which on the SSD caught up with the optimized one. Actually, this once again confirms that optimization operations organize the information in the database, reducing the number of random I/O operations and increasing the speed of access to it.

In everyday tasks the picture is similar:

Only the non-optimized database benefits from the SSD. You can, of course, purchase an SSD, but it would be much better to think about timely database maintenance. Also, do not forget to defragment the partition holding the infobases on the server.

Client disk subsystem and SSD

We analyzed the influence of an SSD on the speed of locally installed 1C in the previous article; much of what was said there is also true for working in network mode. 1C does use disk resources quite actively, including for background and scheduled tasks. In the figure below you can see Accounting 3.0 accessing the disk quite actively for about 40 seconds after loading.

At the same time, you should be aware that for a workstation where one or two infobases are actively used, the performance of a regular mass-market HDD is quite sufficient. Buying an SSD can speed up some processes, but you will not notice a radical speedup in everyday work, since, for example, loading will be limited by network bandwidth.

A slow hard drive can slow down some operations, but it cannot by itself make the program slow.

RAM

Although RAM is now obscenely cheap, many workstations continue to run with the amount of memory installed at purchase, and this is where the first problems lie in wait. Based on the fact that the average “troika” needs about 500 MB of memory, we can assume that a total of 1 GB of RAM will not be enough to work with the program.

We reduced the system memory to 1 GB and launched two information databases.

At first glance everything is not so bad: the program has curbed its appetite and fit into the available memory. But let's not forget that the need for working data has not changed, so where did it go? It was flushed to disk: cache, swap, etc. The essence of this operation is that data not needed at the moment is sent from fast RAM, of which there is not enough, to slow disk memory.

What will this lead to? Let's see how system resources are used in heavy operations, for example, launching a group re-posting in two databases at once. First on a system with 2 GB of RAM:

As we can see, the system actively uses the network to receive data and the processor to process it; disk activity is insignificant; during processing it increases occasionally, but is not a limiting factor.

Now let's reduce the memory to 1 GB:

The situation changes radically: the main load now falls on the hard drive, while the processor and network are idle, waiting for the system to read the necessary data from disk into memory and push unneeded data out.

At the same time, even subjectively, working with two open databases on a system with 1 GB of memory turned out to be extremely uncomfortable: directories and journals opened with significant delays and active disk access. For example, opening the Sales of Goods and Services journal took about 20 seconds, accompanied all that time by high disk activity (highlighted with the red line).

To objectively evaluate the impact of RAM on the performance of configurations based on the managed application, we carried out three measurements: the loading speed of the first database, the loading speed of the second database, and group re-posting in one of the databases. Both databases are completely identical and were created by copying the optimized database. The result is expressed in relative units.

The result speaks for itself: if the loading time grows by about a third, which is still quite tolerable, the time for performing operations in the database triples; there is no talking about comfortable work in such conditions. By the way, this is a case where buying an SSD can improve the situation, but it is much easier (and cheaper) to address the cause rather than the consequences and simply buy the right amount of RAM.

A lack of RAM is the main reason why working with new 1C configurations turns out to be uncomfortable. Systems with 2 GB of memory on board should be considered the minimum suitable. At the same time, keep in mind that in our case “greenhouse” conditions were created: a clean system with only 1C and the task manager running. In real life, on a work computer, a browser and an office suite are usually open, an antivirus is running, and so on; therefore, budget for 500 MB per database plus some reserve, so that during heavy operations you do not run into a lack of memory and a sharp drop in performance.

CPU

Without exaggeration, the central processor can be called the heart of the computer, since it is ultimately what performs all the computation. To evaluate its role, we conducted another set of tests, the same as for RAM, reducing the number of cores available to the virtual machine from two to one; the test was performed twice, with 1 GB and with 2 GB of memory.

The result turned out to be quite interesting and unexpected: the more powerful processor quite effectively took up the load when resources were lacking, the rest of the time giving no tangible benefit. 1C Enterprise in file mode can hardly be called an application that actively uses processor resources; it is rather undemanding. And in difficult conditions, the processor is burdened not so much by calculating the application's own data as by servicing overhead: additional input/output operations, etc.

Conclusions

So, why is 1C slow? First of all, a lack of RAM: the main load in this case falls on the hard drive and the processor. And if they do not shine with performance, as is usually the case in office setups, then we get the situation described at the beginning of the article: the “two” worked fine, but the “three” is ungodly slow.

In second place is network performance; a slow 100 Mbit/s channel can become a real bottleneck, but at the same time, the thin client mode is able to maintain a fairly comfortable level of operation even on slow channels.

Then you should pay attention to the disk subsystem: buying an SSD is unlikely to be a good investment, but replacing the drive with a more modern one would be a good idea. The difference between generations of hard drives can be assessed from our earlier material.

And finally, the processor. A faster model will certainly not be superfluous, but there is little sense in chasing its performance unless this PC is used for heavy operations: group processing, heavy reports, month-end closing, and so on.

We hope this material helps you quickly get to the bottom of the question “why is 1C slow” and solve it most effectively and without extra costs.



The impact of locks on the performance of 1C:Enterprise 8

The Gilev team has been working on performance issues for many years and has successfully solved, among other things, the problems of eliminating lock waits and deadlocks.

Below we will describe our experience in solving these problems.

Detecting locking problems in 1C

Performance issues in multi-user mode are not necessarily related to bad code or bad hardware. First, we need to answer the question: what performance problems exist, and what causes them?

It is impossible to manually track the activities of hundreds of users; you need a tool that automates the collection of such information.

There are many tools, but almost all of them have one very significant drawback - price.

But there is a way out: we choose the Gilev team's set of services.

We will investigate the problem on MS SQL Server, so we will need the following services from this set:

1. Monitoring and analysis of long queries (read more about setting it up here): needed to assess the presence of long-running operations at the DBMS level.

Actually, the very fact of their presence lets us say that there are performance problems, and that the problems lie in specific lines of 1C configuration code, which the service will rank by importance. Problems at the top of the list need to be addressed first: fixing those problem lines will bring the greatest effect, i.e., the most benefit to the users of the system.

2. Analysis of waits on locks (read more here): lets us evaluate whether the time of long queries is actually caused by waiting on locks, or whether there are other reasons (non-optimal code, overloaded hardware, etc.). The service will show the reason for each query's wait, namely the resource that was locked and who locked it. That is, we will understand whether lock problems exist and what causes them.

3. Analysis of deadlocks in 1C and MS SQL Server (read more about the setup here): lets us evaluate the more difficult situations with waiting for resources, when several participants have already managed to “capture” some resources with locks and are now forced to wait for each other, because they cannot release the resources they hold until they capture other resources locked by their neighbors.

In general, such a tangled situation cannot be sorted out manually; a service like this is needed.

4. Monitoring hardware load (read more about the setup here): helps us answer the questions of how many users are in the system, whether they have lock waits, how many there are, and whether the hardware can cope with the load.
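Even without such services, SQL Server's standard dynamic management views give a quick first look at who is waiting on locks right now; a minimal sketch:

-- Sessions currently waiting on locks, how long, and who blocks them.
SELECT wt.session_id,
       wt.wait_duration_ms,
       wt.wait_type,               -- e.g. LCK_M_X, LCK_M_U
       wt.blocking_session_id,
       wt.resource_description
FROM sys.dm_os_waiting_tasks AS wt
WHERE wt.wait_type LIKE 'LCK%'     -- lock waits only
ORDER BY wt.wait_duration_ms DESC;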

The services are very easy to set up, but if you still have questions, help is available!

Using the tools listed above, we have objective information about system performance. This allows us to correctly assess the situation and propose adequate measures.

In fact, we receive information about all performance problems and can accurately answer questions like “how many problems are in the system”, “where exactly do they occur”, “how often does each of them occur”, “which problems are significant and which are minor”. That is, we see all the prerequisites that formed the cause of the problem.

Services allow you to significantly improve your understanding of the conditions under which problems arise, without forcing you to manually delve into such things as the data storage structure of the information base at the DBMS level, the locking mechanism, etc.

As a result, we get a picture of performance measured by:

— query time (naturally ranking problem queries by weight: query time multiplied by the number of calls to that query);

— time spent waiting on locks;

So, we launched the Analysis of waits on locks service.

In the top table, the service shows a list of “victims” of locking, weighted by the total “severity” of the waits.

In the lower table, for each victim, one or more participants in the “struggle for the highly contended resource” on which the lock wait arose are shown.

In the lower table, open the details for one of the “timeout” events, as in the picture, for example.

By highlighting the line with the “culprit”, we see that the bottleneck was the _Reference64 table, and that the problem arose on the clustered index with an “unknown” area. Perhaps in the future we will rename it to “table”, since this behavior is in fact typical of a growing, escalating lock area.

The line with the “victim” shows which code became a hostage of the situation while locking not everything, but only the row “by key” (the minimum data lock area in this table).

This problem can be solved the “right” way or the “easy” way.

The right way is harder: in essence, you need to rewrite the code, minimizing the likelihood of such situations arising.

One of the levers, strange as it may sound, is reducing transaction duration.

You can reduce transaction duration by:

1. rewriting the algorithm;

2. rewriting the query (a faster query reduces the likelihood of locks in complex transactions on tables that sometimes may not even appear in the query!);

2.1 adding a missing covering index (sometimes an index not only speeds up the query but also shrinks the area of data read, which reduces the likelihood of locking); see the sketch after this list;

3. reducing the amount of data processed in the transaction (besides linear speed, remember lock escalation);

4. increasing hardware performance within each thread.
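As an illustration of point 2.1, a hedged sketch; the table and column names are invented for the example, not taken from a real 1C database:

-- A covering index lets this hypothetical query be answered from the
-- index alone: fewer pages are read, so fewer rows are touched and locked.
CREATE NONCLUSTERED INDEX IX_Sales_Period_Warehouse
    ON Sales (Period, WarehouseRef)
    INCLUDE (Amount, Quantity);    -- covered columns: no key lookups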

Query execution time

1) different users can work in parallel with different data
2) different users must work strictly sequentially with the same data

However, it is possible to optimize the use of locks, thereby reducing the overall wait time.

How locking works (you don't have to read this section)

A special SQL Server module, the Lock Manager, handles locks. Its tasks include:

  • creating and setting locks;
  • releasing locks;
  • lock escalation;
  • determining lock compatibility;
  • eliminating deadlocks, and much more.

When a user makes a request to update or read data, the DBMS transaction manager passes control to the DBMS lock manager to determine whether the requested resources have been locked, and, if so, whether the requested lock is compatible with the current one. If locks are incompatible, execution of the current transaction is delayed until the data is unlocked. Once the data is available, the lock manager acquires the requested lock and returns control to the transaction manager.
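To make the mechanics tangible, here is a minimal two-session T-SQL experiment (the Goods table is invented for the example; SET LOCK_TIMEOUT and the blocking behavior are standard SQL Server):

-- Session 1: take an exclusive row lock and hold it.
BEGIN TRANSACTION;
UPDATE Goods SET Price = Price * 1.1 WHERE GoodsID = 42;
-- the transaction is left open, so the X lock on the row is held

-- Session 2: an incompatible request on the same row has to wait.
SET LOCK_TIMEOUT 20000;  -- give up after 20 s, like the 1C default timeout
UPDATE Goods SET Price = 100 WHERE GoodsID = 42;
-- waits until session 1 commits or rolls back; error 1222 on timeout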

The main reason for reduced performance is lock waits

Lock waits are a major performance issue in multi-user mode. And this is understandable, because they increase the waiting time for operations, and therefore the response time. Can we say that waits on locks are incorrect, a mistake in a multi-user system? No, because the lock mechanism itself ensures data integrity: with it, concurrent data is WRITTEN SEQUENTIALLY.

Difference between necessary and unnecessary locks

When a user reports an error waiting on a lock, from his point of view it is always an error, because it interferes with his work: the time it takes to complete his tasks increases.

Experience suggests a simple rule: if more than half of a query's execution time is actually spent waiting on a locked resource, then you need to look into it: perhaps some of the locks can be optimized and the resource lock time reduced.
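This rule of thumb can be checked directly against a standard DMV; a sketch (wait_time here is the current wait of an in-flight request, so treat it as a snapshot, not a cumulative statistic):

-- Requests that right now have spent over half their elapsed time waiting on locks.
SELECT r.session_id,
       r.total_elapsed_time,   -- ms since the request started
       r.wait_time,            -- ms of the current wait
       r.wait_type,
       r.blocking_session_id
FROM sys.dm_exec_requests AS r
WHERE r.wait_time > r.total_elapsed_time / 2
  AND r.wait_type LIKE 'LCK%';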

Here, as if in passing, let me introduce a definition:

A wait on a lock is a situation that arises when two users try to capture the same data at the same time. One of these users becomes blocked, that is, he must wait until the first user's transaction completes.

A transaction is a set of calculations and operations with data (the most vivid example is posting a document) performed as a single whole. Failure of any of the transaction's operations causes the entire transaction to be cancelled.

So, users in multi-user infobases often complain that it is impossible to work because of these locks, while the code may actually contain locks that are not needed in that place (redundant ones).
They may also not be present in the configuration code itself; you can read about those, for example, here: http://kb.1c.ru/articleView.jsp?id=30 (the article is a fragment of the book by P.S. Belousov and A.V. Ostroverh, “1C:Enterprise: from 8.0 to 8.1”). I offer a simplified way to explain the difference between locks using a simple example, like this:

In your configuration, in 1C:Enterprise mode, create two identical invoices with the same goods, but be sure to specify different receiving warehouses.
In the posting-processing code, add a line that displays a message on the screen (or other code that can delay posting processing by 21 seconds; with default parameters, the lock timeout occurs after 20 seconds).
Post both documents.
If a timeout occurs while, logically, the goods arrive at different warehouses, the application has redundant locks: by the business logic (read: common sense), there should be no locking here.
If we now set identical warehouses in these two invoices, the lock created by the attempt at simultaneous posting will be a NECESSARY lock, and that is GOOD!

That is, while one invoice is changing the warehouse balances, the other must wait.

Of course, even this simple example leaves many questions. For example, what if the documents are from one supplier and the payable to that supplier “moves”? And what if it is not just warehouse balances that move, but several registers, and the documents are of different types?
But the most important question is: by WHAT BUSINESS LOGIC SHOULD THERE BE NO LOCKS? Who prescribes this business logic, and where, in the context of locks? Let's talk about everything in order.

Redundant locks are locks that are not needed for ensuring data integrity and at the same time reduce overall system performance, increasing the total downtime spent waiting on locks.
A necessary lock occurs when two users capture the same resources (data objects). If users work with non-overlapping resources but still wait on a lock, the lock is considered redundant.

The most understandable criteria for locking redundancy are:

1. Deadlocks;

2. A lock level (area) higher than necessary (a special case of raising the lock level is the so-called escalation);

3. A lock time longer than the time of “real” use of the locked object.

Having received information about groupings of problems in the context of 1C:Enterprise metadata, I recommend paying attention first of all to the following objects:

  • Constants
  • Sequences
  • Accounting registers
  • Accumulation registers
  • Information registers
  • Calculation registers

1) Until recently, there was a well-known recommendation not to write anything to constants. As a last resort, do it under a single user, and remember that while that user “writes” one constant, other users will “wait” not only on it but on every other constant too. Using constant writes in transaction processing is therefore especially dangerous. The values of all constants are stored in one resource.

The figure shows the physical placement of SCP configuration constants in a MS SQL Server 2005 database table.

This means that locking one constant locks all the constants: the DBMS places a lock on the ENTIRE single ROW of the table, i.e., on all constants at once.
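A hedged illustration of why this hurt (the single-row layout is as described above; the table and column names are simplified for the example):

-- Old storage format: all constants are columns of one single-row table.
-- Session 1 updates one constant and keeps the transaction open:
BEGIN TRANSACTION;
UPDATE _Consts SET _Const123 = 1;   -- X lock on the table's only row

-- Session 2 updates a DIFFERENT constant, but it is the same row:
UPDATE _Consts SET _Const456 = 0;   -- waits on session 1 anyway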

However, in recent platform releases, the storage of constants has been changed: now each constant is a separate table. Don't get too carried away, though; if you create thousands of tables, you can get a lock on the master database.

Note: if your configuration has existed for a long time, you can change the storage format by running “restructuring” in the configurator's Testing and Correction.

2) Avoid using the Sequence metadata object, at least for movements during operational posting; perform it in non-operational (follow-up) procedures. See how this is implemented in the latest versions of UPP.

3) If the system carries out online recording of movements in the accounting register in multi-user mode, then it is recommended:

  • enable the totals separation mode for this register;
  • Do not use register balance control during operational work.

4) In the accumulation register, in cases where there is no need to obtain “operational” data, you can enable totals splitting, which will increase the parallelism of data recording and speed up work in general. Watch the dimensions carefully, so that “balances” can be obtained with maximum detail across the dimensions.

5) Some of the redundant locks created by the platform can only be eliminated by switching to managed locking mode. In the automatic mode, the platform “takes upon itself” the locking of resources. The price of the carefree automatic mode is possible locks on the boundaries of index ranges, locks on an empty table, and lock escalation.

These locks on the data disappear completely in the transaction; that is, such lock waits will not be possible when operating in managed mode.

I have already said “managed locks” and “managed mode” several times. You need to understand that there are two types of locks:
DBMS locks, which are set automatically at the DBMS level when queries are executed;
1C:Enterprise locks, which are set automatically when writing (modifying) data and always manually when reading data.

A meticulous reader will note that 1C also divides locks into object and non-object ones, but we will not touch on that approach now.

But I note that the managed mode places higher demands on the qualifications and experience of a 1C specialist.

6) Missing indexes (especially in complex queries) are, in general, the main factor behind a higher lock level than necessary. A paradox: on the one hand, I said earlier that before optimizing a query you first need to look at the locks, and now I say that to optimize the locks you need to optimize the query. I have an excuse: switching a configuration to managed locks reduces redundant locks even on a non-optimal query. This happens because the transaction isolation level is lowered, which in turn gives the DBMS lock manager fewer reasons to impose an excessive lock.

The main causes of redundant locks (summarizing the above):

— design errors:
(the degree of parallelism is determined by “how finely the data is chopped”: parallel work with two table rows is possible, while work with one row can only be sequential)
(errors in using metadata: writing to constants, sequences, operational accounting on accounting registers)
— redundant locking due to the automatic mode (the platform-DBMS combination)
— non-optimal query performance
(for example, when scanning a table, the entire table is locked: a redundant area,
and the lock time grows: redundant time; the extra number of locks increases the likelihood of lock escalation)

As you can see, the task of optimizing locks is “multifaceted”. You need to understand as clearly as possible the “context” in which the problem arose: on what resources and in what code, and how necessary the lock there really is, or whether it is redundant.

A child and an adult have a sore throat. When the doctor asks “What's wrong?”, the child will look at the doctor and scream (trust me, I know), while the adult will point out the symptoms of the illness. These apparent differences lead the doctor to different methods of identifying the problem.
With a child, the doctor must perform many tests, collect data, combine it, carry out the analysis, and only then make recommendations. With an adult, he will ask several questions and, since the amount of initial data is small, the time for analysis and diagnosis will be considerably less; recommendations will be issued much earlier.

Use our services and you will have more opportunities to analyze the problem and find a solution for free!

The user complaint “1C hangs”, well known to IT specialists, has many causes. A correct “diagnosis”, that is, identifying and analyzing the problem, requires reproducing it, because a problem that cannot be reproduced is, as a rule, almost impossible to solve. Having understood the symptoms of a 1C freeze, we take the first step towards an efficiently working system.

Very long system startup

A long first launch of a heavy configuration under one user, right after the infobase is added to the list of databases on the computer, is normal: during the first launch, the configuration is cached. The second and subsequent launches should be faster.

A system startup that takes a long time may point to problems in the configuration's architectural implementation. Most of the configuration is read by the platform only when the corresponding metadata object is first accessed, so a long startup suggests that a large number of metadata objects are used at startup (many calls to various common modules, data processors, etc.).

Keep in mind that the first time any module's text is accessed, it is compiled. This also takes time, which is especially noticeable when there are many modules. Thus, the problem of a slow startup is solved by modifying (optimizing) the configuration, with the goal of disabling all optional algorithms executed at system startup.

It is also possible that the configuration tries to read data from the Internet at startup. This, too, increases system startup time.

Very long opening of forms

Long opening of forms may be due to:

  1. A large number of controls on the form: time is spent creating the form and laying out the form elements;
  2. Execution of algorithms during form initialization. It is possible that when the form is created, some conditions are checked and/or related objects are read from the database.

The first problem is “treated” by simplifying the form. For example, some controls can be moved to separate forms, which may even be more convenient for the user: if the form has the address fields “City”, “Street”, “Building”, etc., the address is better edited in a separate form.

The second problem is solved by analyzing the actions performed when creating and opening a form, and optimizing these algorithms. Perhaps some of the algorithms are already outdated, while others can be simplified and optimized, for example, eliminating or minimizing access to data in the database.

As an interactive action, consider the user attempting to select a value in a form field, in response to which the system “thinks about something”. This may happen for the following reasons:

  1. The algorithms that run in this action examine or calculate associated data that influences the value selection behavior;
  2. The select form that opens to select this value reads all objects from the database when initialized.

To solve the first problem, you should use the “Performance Measurement”, find resource-intensive algorithms and optimize them.


The second problem can often be solved by simply analyzing the implementation of the choice form. For example, you should make sure that the “Dynamic data reading” property is set for the dynamic list, that the “Main table” property is set correctly, and that the list implementation uses no obviously resource-intensive algorithms.

There are also situations when, on opening the choice form, some related data is read from the database (for example, when opening the “Items” choice form, the goods balances in warehouses are read). This is usually not the best solution. It is better to read related data asynchronously, after the form has opened. This causes less discomfort for the user because, after the form is displayed, the user spends some time taking it in, and that time can be spent loading the related data.

Very long response to updates

One of the trivial symptoms that can nevertheless point to system problems: the 1C update freezes when starting the backup. This mainly happens when updating via the Internet and most likely indicates that the configuration has not been updated for a long time and the releases, rolling in one after another, caused the freeze. You can prevent such a problem by installing updates in a timely manner; if you do encounter it, you can simply interrupt the backup process. After starting the configurator, the database will start in normal mode with the changes applied.

It should be noted that 1C 8.3 freezes during updates most often also because it demands more of the hardware than previous versions of the platform. It is worth paying attention to the amount of RAM and increasing it if necessary; in principle, this should help solve the problem of “1C freezes when updating the configuration”.

Long process of writing objects / posting documents

In this case, “treatment from a photograph” is practically ruled out, since the causes can be very diverse, from a large amount of data in the object to waits on locks.

But even in THIS case, it is possible to outline a direction for analysis.

If the write time shows no significant changes with the time of day or the number of users (as a rough, subjective estimate), the problem is in the code or in the object's data volume. For analysis, it makes sense to use the “Performance Measurement” tool.

A dramatic change in write time with unclear dependencies calls for a statistical analysis of the problem's occurrence, i.e., performance analysis. The easiest way is to analyze the use of the event log. An additional advantage here is that the 1C:Enterprise 8 platform supports saving event log data to a file in SQLite format, which lets you use SQL queries to analyze the log data. Obtaining the object write time from the log data is quite feasible, given that each object write is performed in a transaction and each transaction has its own identification number.
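A hedged sketch of such an analysis: the table and column names below are assumptions for illustration; check them against the actual schema of your exported log file before use:

-- SQLite: estimate per-transaction duration from event log records.
-- Hypothetical schema EventLog(date, transactionID, ...); verify the
-- real layout of the exported log in your installation first.
SELECT transactionID,
       MIN(date) AS startTime,
       MAX(date) AS endTime,
       MAX(date) - MIN(date) AS durationUnits  -- platform time units
FROM EventLog
WHERE transactionID IS NOT NULL
GROUP BY transactionID
ORDER BY durationUnits DESC
LIMIT 20;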


If the statistical analysis shows that the object write time depends on the time of day rather than on the number of users, analyze the load on the 1C server and the database server: the server may be running scheduled processes that consume excessive resources.

If the object write time depends on the number of users, the problem is most likely in the code (possibly waits on locks) or in the hardware's throughput. To solve it, you should engage a specialist with the “1C:Expert on technological issues” competency, since there are no unified rules for solving such problems.