1C is stuck: what to do. How to close a frozen program

The impact of locking on the performance of 1C:Enterprise 8

The Gilev team has been working on performance issues for many years and has successfully solved, among other things, problems of lock waits and deadlocks.

Below we will describe our experience in solving these problems.

Detection of blocking problems in 1C

Performance issues in multi-user mode are not necessarily caused by bad code or bad hardware. First we need to answer the question: what performance problems exist, and what causes them?

It is impossible to manually track the activities of hundreds of users; you need a tool that automates the collection of such information.

There are many tools, but almost all of them have one very significant drawback - price.

But there is a way out: we choose our own set of services.

We will investigate the problem on MS SQL Server, so we will need the following services from this set:

1. Monitoring and analysis of long queries (read more about the setup here) - needed to assess whether there are long-running operations at the DBMS level.

The very fact of their presence tells us that there are performance problems, and that the problems lie in specific lines of 1C configuration code, which the service will rank by importance. Problems at the top of the list need to be addressed first: fixing those lines will bring the greatest effect, i.e. the most benefit for the system's users.

2. Analysis of lock waits (read more here) - will allow us to evaluate whether the time of long queries is actually caused by waiting on locks, or whether there are other reasons (non-optimal code, overloaded hardware, etc.). The service will show the reason for the wait for each query, namely which resource was locked and who locked it. That is, we will understand whether locking problems exist and what causes them.

3. Analysis of deadlocks in 1C and MS SQL Server (read more about the setup here) - will allow us to evaluate more difficult situations of waiting for resources, when several participants have already managed to "capture" some resources with locks and are now forced to wait for each other, because none of them can release its resources until it captures the resources held by its neighbors.

Such a tangled situation generally cannot be sorted out manually; this is exactly what the service is for.

4. Hardware load monitoring (read more about the setup here) - helps us answer the questions: how many users are in the system, do they have locks, how many locks are there, and can the hardware cope with the load?

The services are very easy to set up, but if you still have questions, we are ready to help!

Using the tools listed above, we have objective information about system performance. This allows us to correctly assess the situation and propose adequate measures.

In effect, we receive information about all performance problems and can accurately answer questions like "how many problems are in the system", "where exactly do they occur", "how often does each problem occur", "which problems are significant and which are minor". That is, we see all the preconditions that formed the cause of the problem.

Services allow you to significantly improve your understanding of the conditions under which problems arise, without forcing you to manually delve into such things as the data storage structure of the information base at the DBMS level, the locking mechanism, etc.

As a result, we get a picture of performance measured in:

— query time (naturally, ranking problematic queries by weight: query time multiplied by the number of calls to that query);

— lock wait time.

So, we launch the Lock Wait Analysis service.

In the top table, the service shows the list of lock "victims", ranked by their total "wait weight".

In the lower table, for each victim, it shows one or more participants in the "struggle for a highly contended resource" on which the lock wait arose.

In the lower table, open the details for one of the "timeout" events, as in the picture, for example.

Highlighting the line with the "culprit", we see that the bottleneck was the _Reference64 table, and that the problem arose on the clustered index with an "unknown" lock area. We may later rename this area to "table", since this behavior is typical of lock-area escalation.

The line with the "victim" shows which code was held hostage by the situation: it was not trying to lock everything, just one row "by key" (the minimum lock area in this table).

This problem can be solved "correctly" or "simply".

The correct way is harder: you actually need to rewrite the code to minimize the likelihood of such situations occurring.

One of the factors, strange as it may sound, is reducing transaction duration.

You can reduce transaction duration by:

1. rewriting the algorithm;

2. rewriting the query (a faster query reduces the likelihood of locks in complex transactions, sometimes on tables that are not even in the query!);

2.1 adding a missing covering index (sometimes an index not only speeds up the query but also narrows the area of data read, which reduces the likelihood of locking);

3. reducing the amount of data processed in the transaction (besides the linear speedup, remember lock escalation);

4. increasing hardware throughput per thread.
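Point 3 above can be sketched in code. The sketch below is a generic illustration, not 1C code: it uses SQLite only to make it runnable, and the chunk size and table are invented for the example. The idea is that MS SQL Server may escalate to a table lock once a single statement holds on the order of 5,000 row locks, so committing in smaller chunks keeps the locked area small.

```python
# Sketch: processing a large batch in several short transactions instead
# of one long one. In MS SQL Server, holding thousands of row locks in a
# single transaction can trigger lock escalation to a table lock;
# committing in chunks releases locks early. SQLite stands in for the
# real DBMS here only to make the sketch runnable.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, posted INTEGER DEFAULT 0)")
con.executemany("INSERT INTO docs (id) VALUES (?)", [(i,) for i in range(12000)])
con.commit()

CHUNK = 4000  # keep each transaction well under the escalation threshold
ids = [row[0] for row in con.execute("SELECT id FROM docs WHERE posted = 0")]
for start in range(0, len(ids), CHUNK):
    chunk = ids[start:start + CHUNK]
    con.executemany("UPDATE docs SET posted = 1 WHERE id = ?",
                    [(i,) for i in chunk])
    con.commit()  # locks (if any) are released here, every CHUNK rows

remaining = con.execute("SELECT COUNT(*) FROM docs WHERE posted = 0").fetchone()[0]
print(remaining)  # 0
```

The trade-off: chunked commits lose all-or-nothing atomicity for the batch as a whole, so this only fits processing where partial completion is acceptable and restartable.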

Query execution time

1) different users can work in parallel with different data
2) different users must work strictly sequentially with the same data

However, it is possible to optimize the use of locks, thereby reducing the overall wait time.

How locking works (you can skip this section)

A special SQL Server module, the Lock Manager, handles locks. Its tasks include:

  • creating and acquiring locks;
  • releasing locks;
  • lock escalation;
  • determining lock compatibility;
  • resolving deadlocks, and much more.

When a user makes a request to update or read data, the DBMS transaction manager passes control to the DBMS lock manager to determine whether the requested resources have been locked, and, if so, whether the requested lock is compatible with the current one. If locks are incompatible, execution of the current transaction is delayed until the data is unlocked. Once the data is available, the lock manager acquires the requested lock and returns control to the transaction manager.
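The compatibility check described above can be sketched roughly. This is a simplified model for illustration only: real SQL Server has many more lock modes (intent, update, schema locks, etc.), and none of the names below come from any actual API.

```python
# Simplified sketch of a lock manager's compatibility check.
# Only Shared (S) and Exclusive (X) modes are modeled; real SQL Server
# supports many more (IS, IX, SIX, U, ...) and escalation rules.

COMPATIBLE = {
    ("S", "S"): True,   # two readers may share a resource
    ("S", "X"): False,  # a writer must wait for readers
    ("X", "S"): False,  # a reader must wait for a writer
    ("X", "X"): False,  # two writers are always incompatible
}

def request_lock(held_locks, resource, mode):
    """Return True if the requested lock can be granted immediately,
    False if the requesting transaction must wait."""
    for res, held_mode in held_locks:
        if res == resource and not COMPATIBLE[(held_mode, mode)]:
            return False
    return True

held = [("row:42", "S")]                  # another transaction reads row 42
print(request_lock(held, "row:42", "S"))  # True: shared locks are compatible
print(request_lock(held, "row:42", "X"))  # False: the writer waits
print(request_lock(held, "row:99", "X"))  # True: a different resource
```

The last call shows the key point of the whole article: conflicts only arise on the *same* resource, which is why lock granularity matters so much.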

Lock waits are the main cause of reduced performance

Lock waits are a major performance problem in multi-user mode. This is understandable: they increase the wait time of operations, and therefore response time. Can we say that lock waits are incorrect, a mistake, in a multi-user system? No, because the resource locking mechanism itself ensures data integrity: thanks to locking, concurrently modified data is written SEQUENTIALLY.

Difference between necessary and unnecessary locks

When a user reports an error about waiting on a lock, from his point of view it is always an error: it interferes with his work and increases the time it takes to finish it.

Experience suggests a simple rule: if more than half of a query's execution time is actually spent waiting for a locked resource, you need to look at whether some of the locking can be optimized and the resource lock time reduced.

Here, as if in passing, let me introduce a definition:

A lock wait is a situation that occurs when two users try to capture the same data at the same time. One of the users becomes blocked, that is, he must wait until the first user's transaction completes.

A transaction is a set of calculations and data operations (the most obvious example is posting a document) performed as a single whole. Failure of any operation in the transaction cancels the entire transaction.

So, users of multi-user infobases often complain that it is impossible to work because of these locks, while the code may actually contain locks that are not needed in that place (redundant ones).
They may also not be visible in the configuration code at all; you can read about such locks, for example, here: http://kb.1c.ru/articleView.jsp?id=30 (the article is a fragment of the book by P.S. Belousov and A.V. Ostroverkh, "1C:Enterprise: from 8.0 to 8.1"). Here is a simplified way to explain the difference between locks, using a simple example:

In your configuration, in 1C:Enterprise mode, create two identical invoices with the same list of goods, but be sure to specify different receiving warehouses.
In the posting code, add a line that displays a message on screen (or other code that delays the posting by 21 seconds; with default parameters, the lock timeout fires after 20 seconds).
Post the two documents.
If a timeout occurs even though the goods logically arrive at different warehouses, the application has redundant locks: by business logic (read: common sense), there should be no lock here.
Now set the same warehouse in both invoices. The lock created by the attempt to post them simultaneously is a NECESSARY lock, and that is GOOD!

That is, while one invoice is changing the warehouse balances, the other must wait.
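The invoice example above is really a question of lock granularity, and it can be modeled outside of 1C. The sketch below uses plain Python threads (nothing here is 1C platform code; the warehouse names and functions are invented for the illustration): one coarse lock over all warehouses makes documents for different warehouses wait on each other (the redundant case), while one lock per warehouse serializes only documents touching the same warehouse (the necessary case).

```python
# Sketch: redundant vs. necessary locking as a question of granularity.
import threading

balances = {"warehouse_A": 100, "warehouse_B": 100}

coarse_lock = threading.Lock()                           # locks EVERYTHING
per_warehouse = {w: threading.Lock() for w in balances}  # locks one "row"

def post_invoice_coarse(warehouse, qty):
    with coarse_lock:                # blocks documents for ANY warehouse:
        balances[warehouse] += qty   # a redundant lock for other warehouses

def post_invoice_fine(warehouse, qty):
    with per_warehouse[warehouse]:   # blocks only the same warehouse:
        balances[warehouse] += qty   # the necessary lock, nothing more

# Two invoices for DIFFERENT warehouses run fully in parallel:
threads = [
    threading.Thread(target=post_invoice_fine, args=("warehouse_A", 10)),
    threading.Thread(target=post_invoice_fine, args=("warehouse_B", 10)),
]
for t in threads: t.start()
for t in threads: t.join()
print(balances)  # {'warehouse_A': 110, 'warehouse_B': 110}
```

With `post_invoice_coarse` the result would be the same, but the two postings would execute strictly one after the other, which is exactly the redundant wait the timeout experiment exposes.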

Of course, even this simple example leaves many questions. For example, what if the documents are from the same supplier and the debt to that supplier "moves"? And what if not only warehouse balances move, but several registers, and the documents are of different types?
But the most important question is: BY WHAT BUSINESS LOGIC SHOULD THERE BE NO LOCKS? Who prescribes this business logic, and where, in the context of locking? Let's take it in order.

Redundant locks are locks that are not needed from the point of view of data integrity and that at the same time reduce overall system performance by increasing total downtime in lock waits.
A necessary lock occurs when two users capture the same resources (data objects). If users work with non-overlapping resources but still wait on a lock, the lock is considered redundant.

The clearest criteria of lock redundancy are:

1. Deadlocks;

2. A lock level (area) higher than necessary (a special case of raising the lock level is so-called escalation);

3. A lock held longer than the time the locked object is "really" used.

Having received information about problem groupings in the context of 1C:Enterprise metadata, I recommend paying attention first of all to the following objects:

  • Constants
  • Sequences
  • Accounting registers
  • Accumulation registers
  • Information registers
  • Calculation registers

1) Until recently, there was a well-known recommendation not to write anything into constants, or in extreme cases to do so under a single user, remembering that while one user "writes" one constant, other users will "wait" not only on that constant but on any other. Using constants in posting transactions is therefore especially dangerous. The values of all constants are stored in one resource.

The figure shows the physical placement of SCP configuration constants in a MS SQL Server 2005 database table.

This means that locking one constant locks all constants: the DBMS places a lock on the ENTIRE single ROW of the table, i.e. on all constants at once.

However, in recent platform releases the storage of constants has changed: each constant is now a separate table. Don't get carried away, though; if you create thousands of tables, you can get a lock on the master database.

Note: if your configuration has existed for a long time, you can change the storage format by "restructuring" it via Testing and Correction in the Configurator.

2) Avoid the Sequence metadata object. At the very least, exclude it from real-time posting and restore sequences in deferred (additional) procedures. See how this is implemented in the latest versions of UPP.

3) If the system writes accounting register movements in real time in multi-user mode, it is recommended to:

  • enable totals separation mode for this register;
  • not use register balance control during real-time work.

4) For an accumulation register, when there is no need for "real-time" data, you can enable totals separation, which increases the parallelism of data writes and speeds up work in general. Watch the dimensions carefully, so that balances can still be obtained with the required level of detail.

5) Some of the redundant locks imposed by the platform can be removed only by switching to managed locks. In the automatic lock mode, the platform "takes over" locking resources. The price of the worry-free automatic mode: possible locks on index-range boundaries, locks on an empty table, and lock escalation.

In managed mode these kinds of locks on the transaction's data simply do not occur.

I have already mentioned "managed locks" and "managed mode" several times. You need to understand that there are two kinds of locks:
DBMS locks, set automatically at the DBMS level when queries are executed;
1C:Enterprise locks, set automatically when writing (modifying) data and always manually when reading data.

A meticulous reader will say that 1C also distinguishes object and non-object locks, but we will not touch on that here.

But I will note that managed mode places higher demands on the qualifications and experience of a 1C specialist.

6) Missing indexes (especially in complex queries) are generally the main factor behind a higher lock level than necessary. A paradox: earlier I said that before optimizing a query you first need to look at the locks, and now I say that to optimize locks you need to optimize the query. My excuse: switching a configuration to managed locks reduces redundant locks even with a non-optimal query. This happens because the transaction isolation level drops, which gives the DBMS lock manager fewer reasons to impose an excessive lock.
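The scan-vs-seek difference behind point 6 can be seen with any query planner. The sketch below uses SQLite only because it is easy to run; MS SQL Server's locking behavior differs, but the principle is the same: without an index the whole table is read (and, in a locking DBMS, a far larger area is locked), while an index narrows the read to the matching rows. The table and index names are invented for the example.

```python
# Sketch: the effect of a missing index on how much data a query reads
# (and therefore how much a locking DBMS would lock).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE goods (id INTEGER, warehouse TEXT, qty INTEGER)")
con.executemany("INSERT INTO goods VALUES (?, ?, ?)",
                [(i, f"wh{i % 10}", i) for i in range(1000)])

def plan(sql):
    """Return the query planner's description of how the query will run."""
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT qty FROM goods WHERE warehouse = 'wh3'"
print("SCAN" in plan(query))         # True: no index, full table scan

con.execute("CREATE INDEX idx_wh ON goods(warehouse)")
print("USING INDEX" in plan(query))  # True: only matching rows are read
```

The same check on SQL Server would be done with the execution plan (table scan vs. index seek); the point of the article stands either way: a narrower read area means fewer and smaller locks.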

The main causes of redundant locks (summarizing the above):

— design errors
(the degree of parallelism is determined by "how finely the data is chopped": parallel work with two table rows is possible, work with one row can only be sequential);
(errors in using metadata: writing to constants, sequences, real-time accounting on accounting registers);
— redundant locks caused by the automatic mode (the platform-DBMS combination);
— non-optimal query execution
(for example, a table scan locks the whole table, a redundant area,
and lengthens the lock, redundant time; the extra locks also increase the likelihood of lock escalation).

As you can see, the task of optimizing locks is “multifaceted”. You need to be as clear as possible about the “context” that caused the problem. On what resources, what code. How much is this blocking really necessary, or is it redundant?

A child and an adult have a sore throat. When the doctor asks the question, “What's wrong?”, the child will look at the doctor and scream (trust me, I know), while the adult will point out the symptoms of the disease. These apparent differences direct the doctor to different methods of identifying the problem.
With a child, the doctor must perform many tests, collect data, combine it, perform analysis and only then make recommendations. Whereas with an adult, he will ask several questions and, since the number of initial data is small, the time for analysis and determination of the problem will be significantly less. As a result, recommendations will be issued much earlier.

Use our services and you will have more opportunities to analyze the problem for free and find a solution!

The user complaint "1C hangs", well known to IT specialists, has many causes. A correct "diagnosis", identifying and analyzing the problem, requires reproducing it: a problem that cannot be reproduced is, as a rule, almost impossible to solve. Understanding the symptoms of a 1C freeze is the first step toward an efficiently working system.

Very long system startup

A long first start of a heavy configuration under one user, right after the infobase has been added to the list of databases on the computer, is normal: during the first launch the configuration is cached. The second and subsequent launches should be faster.

A system start that takes a long time may indicate problems in the architectural implementation of the configuration. Most of the configuration is read by the platform only on first access to the relevant metadata object, so a long start suggests that a large number of metadata objects are used at startup (many calls to various common modules, data processors, etc.).

It should be taken into account that the first time the text of any module is accessed, it is compiled. This process also takes time, which is especially noticeable if there are many modules. Thus, the problem of slow startup is solved by modifying (optimizing) the configuration, the purpose of which is to disable the execution of all optional algorithms that are executed at system startup.

There is a possibility that the configuration is trying to read data from the Internet when it starts. This also increases system startup time.

Very long opening of forms

Long opening of forms may be due to:

  1. A large number of controls on the form - time is spent on creating the form and interconnecting the arrangement of form elements;
  2. Execution of algorithms during form initialization. It is possible that when the form is created, some conditions are checked and/or related objects are read from the database.

The first problem is “treated” by simplifying the form. For example, some controls can be placed in separate forms, which may even be more convenient for the user. For example, if the form has an address field “City”, “Street”, “House”, etc., then it is better to edit the address in a separate form.

The second problem is solved by analyzing the actions performed when creating and opening a form, and optimizing these algorithms. Perhaps some of the algorithms are already outdated, while others can be simplified and optimized, for example, eliminating or minimizing access to data in the database.

As an interactive action, consider the user attempting to select a value on a form element. In response to it, the system “thinks about something.” This may happen for the following reasons:

  1. The algorithms that run in this action examine or calculate associated data that influences the value selection behavior;
  2. The select form that opens to select this value reads all objects from the database when initialized.

To solve the first problem, you should use the “Performance Measurement”, find resource-intensive algorithms and optimize them.


The second problem can often be solved by simply analyzing the implementation of the choice form. For example, you should make sure that the “Dynamic data reading” property is set for a dynamic list, that the “Main Table” property is set correctly, and that the list implementation does not use obviously resource-intensive algorithms.

There are also situations when opening a selection form reads related data from the database (for example, opening the "Item" selection form reads the goods balances in warehouses). Typically this is not the best solution. It is better to read related data asynchronously, after the form opens: this causes less discomfort for the user, because after the form is displayed the user spends some time taking it in, and that time can be spent loading the related data.

Very long response to updates

One trivial symptom can nevertheless point to system problems: the 1C update freezes when the backup starts. This mainly happens when updating over the Internet and most likely means the configuration has not been updated for a long time, and the releases, applied one after another, caused the freeze. You can prevent the problem by installing updates promptly; if you have already run into it, simply interrupt the backup process. After the Configurator starts, the database will start in normal mode with the changes applied.

Note also that 1C 8.3 freezes during updates most often because it demands more of the hardware than previous versions of the platform. Pay attention to the amount of RAM and increase it if necessary; in principle, this should solve the problem of "1C freezes when updating the configuration".

Long process of recording objects/carrying out documents

In this case, "diagnosis from a photograph" is practically impossible, since the causes can be very diverse, from a large amount of data in the object to lock waits.

But even in THIS case, it is possible to outline a direction for analysis.

If the write time does not change significantly with the time of day or the number of users (as a rough, subjective estimate), the problem is in the code or in the amount of object data. For analysis, it makes sense to use the "Performance Measurement" tool.

A dramatic change in write time with unclear dependencies requires statistical analysis of when the problem occurs, i.e. performance analysis. The easiest way is to analyze the event log. As an added advantage, the 1C:Enterprise 8 platform supports saving event log data to a file in SQLite format, which lets you use SQL queries to analyze it. Object write times can quite readily be obtained from this data, given that each object write is performed in a transaction and each transaction has its own identification number.
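The transaction-duration query described above might look like the sketch below. Note the schema here is ILLUSTRATIVE: the real structure of a 1C event log exported to SQLite differs, and the table and column names (`events`, `txn_id`, `ts`) are invented for the example; only the query pattern (group by transaction id, take max minus min of timestamps) carries over.

```python
# Sketch: computing per-transaction write duration from an event log
# stored in SQLite, using an invented, simplified schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (txn_id INTEGER, ts REAL, event TEXT)")
con.executemany("INSERT INTO events VALUES (?, ?, ?)", [
    (1, 100.0, "begin"), (1, 100.4, "write"), (1, 100.9, "commit"),
    (2, 101.0, "begin"), (2, 115.2, "commit"),  # a suspiciously long write
])

rows = con.execute("""
    SELECT txn_id, MAX(ts) - MIN(ts) AS duration_sec
    FROM events
    GROUP BY txn_id
    ORDER BY duration_sec DESC
""").fetchall()

for txn_id, dur in rows:
    print(f"transaction {txn_id}: {dur:.1f} s")
# transaction 2 tops the list with 14.2 s
```

Sorting by duration immediately surfaces the problem transactions, which can then be correlated with the time of day and user count as the article suggests.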


If the result of statistical analysis showed that the recording time of an object depends on the time of day, and not on the number of users, it is necessary to analyze the load on the 1C server and the database server. It is possible that the server is running routine processes that are taking up unnecessary resources.

If the object write time depends on the number of users, the problem is most likely in the code (possibly lock waits) or in hardware throughput. To solve it, involve a specialist with the "1C: Expert in technological issues" certification, since there are no universal rules for solving this kind of problem.

This article discusses the main factors: when 1C slows down, 1C freezes and 1C works slowly. The data was prepared based on SoftPoint's many years of experience in optimizing large IT systems built on the 1C + MS SQL combination.

To begin with, it is worth dispelling the myth that 1C is not intended for simultaneous work by a large number of users, a myth actively supported by forum users who find in such posts reassurance and a reason to leave everything as it is. With enough patience and knowledge, a system can be brought to any number of users, and slow performance and 1C freezes will no longer be a problem.

From practice: 1C v7.7 is the easiest to optimize (optimizing 1C 8.1, 8.2, or 8.3 is harder, since the application consists of three tiers). Bringing it up to 400 simultaneous users is a fairly typical project; up to 1,500 is already difficult and requires hard work.

The second myth: to improve 1C performance and get rid of 1C freezes, you need to install a more powerful server. As a rule, in 95% of optimization projects acceptable performance can be achieved either with no upgrade at all or by updating a minor part of the equipment, for example by adding RAM. The hardware must still be server-grade, though, especially the disk subsystem; an outdated disk subsystem is just one of the reasons why 1C works slowly.

The main limitation on multi-user work in 1C is the locking mechanism. It is locking in 1C, not server hardware, that usually prevents a large number of people from working in the database. To overcome this, you have to work hard and change the locking logic in 1C, lowering the locks from table level to row level. Then, for example, posting a document will lock only that one document, not all documents in the system.

Figure 1. 1C blocking queue in the PerfExpert monitoring system, with information about 1C users, a configuration module and a specific line of code in this module.

Changing the 1C locking mechanism is a very complex technique. Not everyone can pull it off, and for the rest only one path remains: optimizing the structure and speeding up operations. The fact is that locking in 1C and operation execution time are highly interrelated. For example, if posting a document takes 15 seconds, then with a large number of users there is a high probability that during the transaction someone else will try to post a document and end up waiting on a lock. If you reduce the execution time to about 1 second, locking on this operation will drop significantly.
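The 15-seconds-versus-1-second claim can be backed with rough arithmetic. The model below is a deliberate simplification invented for this illustration (it assumes independent users, a worst case where every posting touches the same resource, and steady posting rates), but it shows how strongly collision probability depends on transaction duration.

```python
# Back-of-the-envelope sketch: chance that a posting collides with another
# user's in-flight transaction. Model: `users` people, each posting once
# every `period_sec` seconds on average, each posting holding locks for
# `duration_sec` seconds, all postings contending for the same resource.
def collision_probability(users, period_sec, duration_sec):
    p_busy = duration_sec / period_sec       # one user's "inside a txn" fraction
    # chance that at least one of the OTHER users is mid-transaction
    # at a random moment (independence assumed)
    return 1 - (1 - p_busy) ** (users - 1)

# 50 users, one posting every 5 minutes each:
print(f"{collision_probability(50, 300, 15):.0%}")  # 15 s postings -> ~92%
print(f"{collision_probability(50, 300, 1):.0%}")   # 1 s postings  -> ~15%
```

Even in this crude model, cutting the transaction from 15 seconds to 1 second takes the chance of hitting a lock wait from near-certainty down to a minor nuisance, which matches the practical observation in the text.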

Even more dangerous from the locking point of view are batch processings, which can run for a long time while holding 1C locks. Any processing that changes data, such as restoring a sequence or batch posting of documents, locks tables and prevents other users from posting documents. Naturally, the faster these processings run, the shorter the locks and the easier it is for users to work.

Heavy reports that only read data can also be dangerous for locking, even though they seemingly lock nothing. Such reports affect lock intensity in 1C by slowing down other operations in the system: if a report is very heavy and takes the bulk of the server's resources, operations that ran in 1 second before the report started may take 15 seconds while it executes. Naturally, as operation times grow, so does lock intensity.

Figure 2. Load on the working server in terms of configuration modules, from all users. Each module has its own color. There is a clear imbalance in the load created from 1C.

The basic optimization rule is that document posting should take minimal time and perform only the necessary operations. For example, register queries without filtering conditions are often used in posting code. In such cases you need to specify filters that give the best selectivity, remembering that the register must have indexes matching the filter conditions.

In addition to heavy reports, non-optimal MS SQL and MS Windows settings can slow operations and thereby increase 1C lock intensity. This problem occurs with 95% of clients, and note that these are servers of serious organizations, supported and configured by entire departments of highly qualified administrators.

The main reason for incorrect server settings is administrators' fear of changing anything on a running server, plus the rule "the best is the enemy of the good". If an administrator changes server settings and problems begin, all the bosses' anger will pour onto the careless administrator. It is therefore more profitable for him to leave everything as it is, and not take a single step without orders from above, than to experiment on his own responsibility.

The second reason is the lack of clear information on network optimization problems. There are a lot of opinions that often completely contradict each other. Every opinion dedicated to optimization has its opponents and fanatics who will defend it. As a result, the Internet and forums are more likely to confuse server settings than to help. In a situation of such uncertainty, the administrator has even less desire to change anything on a server that is somehow working.

At first glance the picture is clear: you need to optimize everything that slows down the 1C server. But let's put ourselves in the optimizer's place. Say we have UPP on 1C 8.1/8.2/8.3 with 50 concurrent users, and one terrible day users start complaining that 1C is slow; we need to solve this problem.

First of all, we look at what is happening on the server - what if some particularly independent antivirus is conducting a full scan of the system. An inspection shows that everything is fine - the server is loaded at 100%, and only by the sqlservr process.

From practice: one of the junior administrators, on his own initiative, turned on auto-update on the server, Windows and SQL happily updated, and after the update, a massive slowdown in the work of 1C users began, or 1C simply froze.

The next step is to check which programs load MS SQL. Inspection shows that the load is generated by approximately 20 application server connections.

From practice: a program that promptly updates data on a website went into a loop, and instead of updating once every 4 hours, it did it continuously, without pauses, heavily loading the server and blocking the data.

Further analysis of the situation faces great difficulties. We have already found out that the load comes directly from 1C, but how can we understand what exactly users are doing? Or at least who they are. It’s good if there are 10 1C users in an organization, then you can just go through them and find out what they are doing now, but in our case there are fifty of them, and they are scattered across several buildings.

In our example the situation is not yet complex. But imagine the slowdown happened not today but yesterday; today everything is fine, yet you need to figure out why the operators could not work yesterday (naturally, they complained only on their way home, since they enjoy chatting all day about nothing working more than actually working). This case underscores the need for a server logging system that always keeps a history of the server's main parameters, from which the sequence of events can be reconstructed.

A logging system is simply an indispensable tool in system optimization. If you add to it the ability to view the current status online, you will get a server status monitoring system. Every optimization project begins by collecting server state statistics to identify bottlenecks.

When we started working in the field of optimization, we tried many server monitoring systems but, unfortunately, could not find one that solved the problem at the proper level, so we had to build our own. The result was a unique product, PerfExpert, which automates and streamlines the optimization of IT systems. The program is distinguished by tight integration with 1C, the absence of any noticeable additional load, and repeatedly proven suitability for practical use in combat conditions.

Returning to our example, the most likely outcome is: The administrator says, “It’s the programmers who wrote the configuration that are to blame.” The programmers respond, “Everything is written well for us - it’s the server that’s not working well.” And the cart, as they say, is still there. As a result, 1C slows down, freezes or works slowly.

In any case, to solve 1C performance problems we recommend first deploying PerfExpert performance monitoring; it will let you make the right management decisions and save money. The product suits both small 1C:Enterprise systems of up to 50 users and systems with 1000 users or more. In July 2015, PerfExpert received the 1C:Compatible certificate, passed Microsoft testing, and helps solve performance problems not only in 1C systems but also in other information systems built on MS SQL Server (Axapta, CRM Dynamics, DocVision and others).

If you found this information useful, here are the recommended next steps:

- If you want to tackle the technical performance problems of 1C (1C 7.7, 1C 8.1, 1C 8.2, 1C 8.3) and other information systems on your own, our Almanac offers a unique collection of technical articles (locks and deadlocks, heavy CPU and disk load, database maintenance and index tuning are just a small part of what you will find there).
- If you would like to discuss performance issues with our expert or order the PerfExpert performance monitoring solution, leave a request and we will contact you as soon as possible.


This article will help you get rid of freezing programs. In it I will describe how to terminate a frozen program correctly. When a program hangs, people often fall back on the methods they know: frantically pressing Alt+F4, or just Esc, and in most cases this gets no result. Then they reach for the one button that will definitely "help" - the power or reset button on the system unit or laptop. Doing that, you risk losing data not only from the frozen program but also from every other open one.

There may be several reasons why the program freezes:

  • If you are running a program built for a different architecture than your system (for example, mixing 32-bit and 64-bit software), then at best the program simply won't start, at worst it will freeze. There is a nuance here: sometimes such programs do run, but either incorrectly or they hang after a while.
  • You have too little RAM to run the program.
  • You have too many programs and processes running that are already loading the system.
  • You have programs running in the background that consume a lot of system resources.
  • Viruses
  • Hardware problems (the thermal paste on the processor has dried out, the case is clogged with dust, the hardware is simply too "weak", etc.)

    So you have launched the program and are waiting for it to start, but it stalls during loading and goes "silent". It helps if background music is playing (typical for games): a looping sound is a hint that the program has hung. You can, of course, wait a few minutes (no more than five) hoping for a "miracle" and that the program will recover, but if you don't want to wait and you are sure the program has frozen, it is time to start closing it.

    To terminate a program that is not responding (which is what freezing is also called), you need to open the Task Manager. You can use Ctrl+Shift+Esc, but I recommend the better-known and equally effective shortcut Ctrl+Alt+Del.

    In Windows 7, pressing these keys opens a screen with five options; choose the last one, Start Task Manager.


    On the Applications tab, find the frozen program (its status will usually be Not Responding), right-click it and choose Go To Process from the menu:


    The Processes tab will open with the hung process highlighted. Here we simply click End Process


    and confirm the system warning.

    Note:
    You can, of course, choose End Task on the Applications tab instead of Go To Process - a "gentler" method, but sometimes it does not help. And I am somehow used to solving such problems decisively.

    This is how you can "remove" a frozen program without restarting the computer, while keeping the other running programs intact.
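
    The same thing can be done from the Windows command line instead of the Task Manager. A minimal sketch using standard Windows tools (frozen_app.exe is a placeholder name, not a real program):

```
:: List programs whose windows are marked "Not Responding"
tasklist /FI "STATUS eq NOT RESPONDING"

:: Force-terminate one specific program by its image name
:: (use with care - any unsaved data in it is lost)
taskkill /F /IM frozen_app.exe
```

    taskkill accepts the same /FI filter as tasklist, so taskkill /F /FI "STATUS eq NOT RESPONDING" will end every hung program at once.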

    It also happens that Explorer stops responding. By this I mean that, for example, you opened a folder, or even just My Computer, and the system froze (it starts "thinking" for a long time). This has happened to me myself.
    In this case, the Task Manager and the method described above can also help.

    But here it is important to remember one detail: the Explorer process is called explorer.exe, and when you end it, every open folder window on your computer closes. And that is only half the trouble: once you have "killed" Explorer, the taskbar and the Start menu disappear as well. That is why you should not close the Task Manager right away! To bring back what is missing (except the open folders), click File -> New Task (Run...)


    and type explorer.exe in the box


    Then click OK, and everything will return to its place.
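
    The whole Explorer restart can also be done without the mouse, straight from a command prompt (standard Windows commands, shown as a sketch):

```
:: End the shell - open folder windows, the taskbar and the Start menu disappear
taskkill /F /IM explorer.exe

:: Start the shell again; the taskbar and Start menu come back
start explorer.exe
```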

    That is a simple way to fix the problem when a program does not respond or freezes.

  • 1) Look at the amount of memory allocated to rphost on the 1C server. If you have the 32-bit version of the server, a process can use at most about 1.75 GB of RAM.
    If there is not enough memory, the server cannot accept new connections, or it hangs when the current session requires additional memory.
    www.viva64.com/ru/k/0036
    2) Check the "Working server settings"; they may be incorrect. I had this problem and the server kept freezing. My settings are attached; the server is allocated 11 GB.
    3) There may be problems in the PostgreSQL configuration.
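
    To check point 1, you can list the rphost worker processes and their memory consumption with a standard Windows command, run on the 1C server:

```
:: Show every rphost worker and its current memory usage
tasklist /FI "IMAGENAME eq rphost.exe"
```

    If a 32-bit worker is approaching the ~1.75 GB ceiling, that is a strong hint the server is memory-starved.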

    Provide your server characteristics, database sizes, and PostgreSQL configs. It is hard to say anything without that information.

    My PostgreSQL config: https://drive.google.com/file/d/0B2qGCc-vzEVDMERVW...
    This config is selected for the available amount of RAM.
    PostgreSQL runs on Linux, 3 GB RAM, 3 CPU cores.
    1C8 server: 11 GB RAM, 5 CPU cores.
    4 databases, approximately 1 GB each (as a .dt upload).

    Provide all the characteristics of your servers: the 1C8 server and the database server, physical or virtual, the operating system, the amount of RAM on each server, what CPU, how much RAM the rphost processes take up and how many there are. Are you using a RAID array?

    I used PostgreSQL myself previously, but ran into problems operating a database on it and recently switched to MS SQL.

    Your server is decent for databases of this size. To use PostgreSQL you need a very good understanding of its configuration. While the databases are small, many configuration mistakes are forgiven. When we first started running 1C on PostgreSQL, we also had frequent problems with the database (frequent freezes, slow operation). PostgreSQL is better run on Linux than on Windows. I am not a database specialist myself; to set up the database server we hired a specialist from 1Sbit, he configured it for us, and there were no operational problems after that.

    Advice:
    You have large databases - don't skimp; hire a database specialist who can configure it for you. One person cannot be an expert in everything.

    1) When did you last vacuum and reindex the database (VACUUM and REINDEX)?
    2) When did you last test and repair the database using the 1C tools?
    3) Is the database log file placed on a separate HDD?
    4) Is the HDD heavily loaded?
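
    For point 1, the PostgreSQL side can be covered with two standard maintenance commands. A sketch - run them during a maintenance window, since REINDEX takes exclusive locks, and note that mydb is a placeholder database name:

```sql
-- Reclaim dead rows and refresh the planner statistics
VACUUM ANALYZE;

-- Rebuild all indexes of the current database
REINDEX DATABASE mydb;
```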

    Consider switching to MS SQL; it often requires virtually no configuration and is easier to use. Unlike PostgreSQL, which has to be tuned, MS SQL is ready to work out of the box.

    If you have any questions, write, maybe I can help with something on Skype: tisartisar

    Hire a database setup specialist

    Why we switched to MS SQL:
    We use the UT configuration, and when closing the month errors sometimes arose that could not be resolved. If we moved the database to file mode and ran the month-end close, everything closed normally; when the same database was loaded back into the PostgreSQL server, errors occurred while calculating cost. At that point we were half a year behind on month-end closes because of these floating errors. We created a test database on MS SQL, and the month that would not close on PostgreSQL closed on MS SQL. Also, price rounding in the price list does not work correctly on PostgreSQL. Running 1C on PostgreSQL is supported, but MS SQL is still the recommended option.
    Because of this we decided to switch to MS SQL: stable operation of 1C is worth more.

    I'm glad I could help, please contact me if you have any questions or problems.

    1) How much memory is allocated to the MS SQL server? This is configured in MS SQL Server itself.
    2) Test and repair the database with the 1C tools regularly.
    3) See the article on setting up backups and maintenance. This is important and must be done regularly - I do it every day. Check out all 3 parts of the guide.
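
    Point 1 is set with sp_configure. A sketch - 8192 MB is an example value; size it so that MS SQL leaves enough RAM for the OS and the 1C server:

```sql
-- Allow access to advanced settings
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap the memory MS SQL Server may take
EXEC sp_configure 'max server memory (MB)', 8192;
RECONFIGURE;
```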