1C 8: Lock conflict when executing a transaction. How to diagnose blocking problems.

When working in 1C, it is not uncommon to receive the error “Lock conflict when executing a transaction: The maximum waiting time for granting a lock has been exceeded.” The essence of the error is that several sessions are trying to perform similar actions at the same time and are affecting the same resource. Today we will figure out how to fix it.

A large number of operations performed

The first step in looking for the cause is to find out how many concurrent users are working in the infobase that produces this error. As we know, their number can be quite large: a thousand, or even five thousand.

The mechanism of locks and transactions is described in the developer's guide. Locks are applied when multiple sessions access the same data simultaneously: it is only logical that the same data cannot be changed by different users at the same time.

You should also check whether any of the users are running a mass data-change operation, such as month-end closing. In that case, the error will disappear on its own once the processing is finished.

Scheduled tasks

It is not uncommon for the cause of the error to be a scheduled job that processes large amounts of data. It is recommended to run such jobs at night: set a schedule so that these routine tasks are performed outside working hours.

In this way, users will work in a stable system, and the routine tasks themselves will be completed successfully, since the likelihood of conflicts with user sessions will be reduced.

"Hung sessions"

The problem of user “hung sessions” is familiar to almost everyone who has maintained 1C. The user may have left the program or closed a document long ago, but their session still remains in the system. Most often the problem is a one-off, and it is enough to end such a session through the administration console. The same problems can arise with background jobs.

According to numerous comments on the Internet, such situations occur more often when network protection (license) keys are used. If hung sessions recur systematically, this is a reason to thoroughly check and maintain the system and the servers (if the database runs in client-server mode).

Errors in the configuration code

All standard configurations are developed by qualified specialists and experts. Each system is thoroughly tested and optimized for faster and more correct operation.

Consequently, the cause of the error may be suboptimal code written by a third-party developer: for example, a “heavy” query that locks data for a long time. Poorly performing algorithms and broken processing logic are also frequent causes.

If the lock conflict appeared right after a program update, it is very likely caused by developer errors. To check, you can simply “roll back” the modifications, or refactor the code.

I could not write changes for transfer to the distributed infobase, so I contacted 1C support, and they suggested the following. I solved it simply by restarting the application server and the SQL server. Alternatively, you can check the “Scheduled jobs lock is enabled” box; that also helped without a reboot.

Routine operations at the DBMS level for MS SQL Server

Instructions for performing routine operations at the DBMS level.

The information applies to the client-server version of 1C:Enterprise 8 when using the MS SQL Server DBMS.

General information

One of the most common reasons for suboptimal system operation is incorrect or untimely execution of routine operations at the DBMS level. It is especially important to carry out these routine procedures in large information systems that operate under significant load and serve a large number of users simultaneously. The specific feature of such systems is that the actions the DBMS performs automatically (based on its settings) are not enough for effective operation.

If your running system shows any symptoms of performance problems, you should check that the system is configured correctly and is running all recommended routine operations at the DBMS level.

The execution of routine procedures should be automated. It is recommended to use the built-in MS SQL Server tool for this: the Maintenance Plan. There are other ways to automate these procedures as well. In this article, an example of configuring each routine procedure with a Maintenance Plan in MS SQL Server 2005 is given.

It is recommended to regularly monitor the timeliness and correctness of these routine procedures.

Update statistics

MS SQL Server builds a query plan based on statistical information about the distribution of values in indexes and tables. Statistical information is collected based on a part (sample) of the data and is automatically updated when this data changes. Sometimes this is not enough for MS SQL Server to consistently build the most optimal plan for executing all queries.

In this case, query performance problems may arise, and the query plans show characteristic signs of suboptimal behavior (suboptimal operators).

In order to guarantee the most correct operation of the MS SQL Server optimizer, it is recommended to regularly update the MS SQL database statistics.

To update statistics for all database tables, you must run the following SQL query:

exec sp_msforeachtable N'UPDATE STATISTICS ? WITH FULLSCAN'

Updating statistics does not lead to table locking and will not interfere with the work of other users. Statistics can be updated as often as necessary. It should be taken into account that the load on the DBMS server will increase while updating statistics, which may negatively affect the overall performance of the system.

The optimal frequency for updating statistics depends on the size and nature of the load on the system and is determined experimentally. It is recommended to update statistics at least once a day.

The above query updates statistics for all tables in the database. In a real-life system, different tables require different statistics update rates. By analyzing query plans, you can determine which tables are most in need of frequent statistics updates, and set up two (or more) different routine procedures: for frequently updated tables and for all other tables. This approach will significantly reduce the statistics update time and the impact of the statistics update process on the operation of the system as a whole.
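
As an illustration of this two-procedure approach, here is a hedged sketch (the table names are hypothetical placeholders, not names taken from a real 1C database): a frequent FULLSCAN update for the most volatile tables and a less frequent full pass over everything else.

-- Frequent procedure: update statistics only for the most volatile tables (hypothetical names)
UPDATE STATISTICS dbo._Document123 WITH FULLSCAN
UPDATE STATISTICS dbo._AccumRg456 WITH FULLSCAN

-- Less frequent procedure: update statistics for all tables
exec sp_msforeachtable N'UPDATE STATISTICS ? WITH FULLSCAN'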

Setting up automatic statistics updates (MS SQL 2005)

Launch MS SQL Server Management Studio and connect to the DBMS server. Open the Management folder and create a new Maintenance Plan:

Create a subplan (Add Subplan) and name it “Updating statistics”. Add the Update Statistics Task to it from the task pane:

Set up a statistics update schedule. It is recommended to update statistics at least once a day. If necessary, the frequency of statistics updates can be increased.

Configure task settings. To do this, double-click on the task in the lower right corner of the window. In the form that appears, specify the name of the database (or several databases) for which statistics will be updated. In addition, you can specify for which tables statistics should be updated (if you don’t know exactly which tables you need to specify, then set the value to All).

Statistics must be updated with the Full Scan option enabled.

Save the created plan. When the time specified in the schedule arrives, the statistics update will start automatically.

Clearing the procedural cache

The MS SQL Server optimizer caches query plans for re-execution. This is done in order to save time spent on query compilation if the same query has already been executed and its plan is known.

It is possible that MS SQL Server, based on outdated statistical information, will build a non-optimal query plan. This plan will be stored in the procedural cache and used when the same query is called again. If you update statistics but do not clear the procedure cache, SQL Server may choose an old (suboptimal) query plan from the cache instead of building a new (more optimal) plan.

To clear the MS SQL Server procedural cache, you need to run the following SQL query:

DBCC FREEPROCCACHE

This query should be run immediately after updating the statistics. Accordingly, the frequency of its execution should coincide with the frequency of statistics updates.
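
A minimal sketch of how the two steps can be combined in a single scheduled script (the order matters: statistics first, then the cache):

-- Update statistics for all tables, then clear the procedure cache
exec sp_msforeachtable N'UPDATE STATISTICS ? WITH FULLSCAN'
DBCC FREEPROCCACHE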

Setting up procedural cache clearing (MS SQL 2005)

Since the procedural cache must be cleared every time statistics are updated, it is recommended to add this operation to the already created subplan “Updating statistics”. To do this, open the subplan and add the Execute T-SQL Statement Task to its schema. Then you should connect the Update Statistics Task with an arrow to the new task.

In the text of the created Execute T-SQL Statement Task, you should specify the query “DBCC FREEPROCCACHE”:

Defragmentation of indexes

When database tables are used intensively, their indexes become fragmented, which can lead to reduced query performance. To defragment the indexes of all database tables, you need to run the following SQL query:

exec sp_msforeachtable N'DBCC INDEXDEFRAG (<database name>, ''?'')'

Defragmenting indexes does not lock tables and will not interfere with the work of other users, but it does create additional load on SQL Server. The optimal frequency of performing this routine procedure should be selected in accordance with the load on the system and the effect obtained from defragmentation. It is recommended that you defragment indexes at least once a week.

You can defragment one or more tables, rather than all tables in the database.
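
A hedged sketch of this selective approach (the database and table names are hypothetical placeholders): first check how fragmented the table's indexes are, then defragment only that table.

-- Check the fragmentation of one table's indexes (SQL Server 2005 and later)
SELECT index_id, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(N'MyBase'), OBJECT_ID(N'dbo._Document123'), NULL, NULL, 'LIMITED')

-- Defragment the indexes of that single table only
DBCC INDEXDEFRAG (MyBase, '_Document123')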

Setting up index defragmentation (MS SQL 2005)

In the previously created maintenance plan, create a new subplan named “Index defragmentation”. Add the Reorganize Index Task to it:

Set the execution schedule for the index defragmentation task. It is recommended to perform the task at least once a week, and if the data in the database is highly variable, even more often - up to once a day.

Reindexing database tables

Table reindexing involves a complete rebuild of the database table indexes, which leads to significant optimization of their performance. It is recommended to regularly reindex your database tables. To reindex all database tables, you need to run the following SQL query:

exec sp_msforeachtable N'DBCC DBREINDEX (''?'')'

Reindexing locks the tables for the entire duration of the operation, which can significantly affect users' work. For this reason, it is recommended to perform reindexing during periods of minimal system load.

After reindexing is completed, there is no need to defragment the indexes.
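
If rebuilding every table does not fit into the maintenance window, one possible compromise (a sketch with a hypothetical table name and an assumed fill factor of 90) is to rebuild only the largest, most heavily used tables:

-- Rebuild all indexes of a single large table with a 90 percent fill factor
DBCC DBREINDEX ('_AccumRg456', '', 90)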

Setting up table reindexing (MS SQL 2005)

In the previously created maintenance plan, create a new subplan named “Reindexing”. Add the Rebuild Index Task to it:

Set the execution schedule for the table reindexing task. It is recommended to perform the task during minimal system load, at least once a week.

Set up the task by specifying a database (or several databases) and selecting the required tables. If you don't know exactly which tables should be specified, then set the value to All.

“Lock conflict when executing a transaction: The maximum waiting time for granting a lock has been exceeded” is a fairly common error in 1C 8.3 and 8.2 associated with competition for the use of resources in the system.

The 1C system allows a large number of users to work in parallel: as load testing shows, today this number is not limited to five thousand users simultaneously working in the system. However, in order for the 1C 8 database to be able to simultaneously support a large number of users, the configuration must be properly designed.


Performing a large number of operations

It is likely that some user has launched a mass operation covering a long period in a single transaction. The architecture of 1C 8.3 is such that the system does not allow data that is being used in one transaction to be changed by another user, and locks it.

This may be a temporary error that will stop occurring as soon as the other user finishes the operation. If the error appears frequently, the problem is most likely something else.

Configuration error

In addition to errors in the code, there are often methodologically incorrect decisions. For example, batch cost accounting in itself implies sequential posting of documents; it can be replaced with RAUZ (advanced cost accounting analytics), which will seriously increase system throughput.

How to fix this error in 1C 8.3?

In any case, the appearance of the error “Lock conflict when executing a transaction” indicates the need to inspect the system, especially for medium and large information systems in client-server mode (MS SQL, PostgreSQL, etc.). If this is ignored at an early stage, irreversible consequences are possible later, when the operation of the system is especially important (during the reporting period).

For an audit and error correction, it is best to choose a reliable partner: just call us and we will solve your problem as soon as possible. Details are on the page.

When hundreds of users simultaneously work with programs and data, problems arise that are characteristic only of large-scale solutions. We are talking about problems caused by data blocking.

Sometimes users learn about locks from messages saying that data cannot be written or some other operation cannot be performed; sometimes from a very significant drop in program performance (for example, when the time required to perform an operation increases tens or hundreds of times).

Problems caused by blocking do not have a general solution. Therefore, we will try to analyze the causes of such problems and systematize options for solving them.

REASONS FOR TRANSACTION LOCKS

Let's first recall what locks are and, at the same time, figure out whether they are necessary at all. Let's look at a couple of classic examples of locks that we encounter in everyday life.

Example 1: buying a plane or train ticket. Suppose we tell the cashier what we want. The cashier tells us which seats are available, and we choose the one we like best (if there are several, of course). While we are choosing and confirming the proposed option, these seats cannot be sold to anyone else, i.e. they are temporarily “locked”. If they were not locked, then by the time we confirmed, the seats we had selected might already have been sold, and the selection cycle could repeat an unpredictable number of times: while we are choosing seats, they have already been sold; while we are choosing others, those are gone too...

Example 2: buying something in a shop or at a market. We walk up to the counter and choose the most beautiful apple out of the hundreds on display. We pick it and reach into our pocket for the money. How would it look if, at that moment, while we are counting out the money, the apple we chose were sold to a buyer who came up after us?

Thus, locking in itself is a necessary and useful mechanism. It is thanks to locks that we can guarantee that actions are completed in one step. Most often, it is an unsuccessful software implementation that causes the negative effects, when, for example:

  • an excessive number of objects (tickets, apples) is locked;
  • the locking time is unreasonably long.

EXCESSIVE BLOCKING IN TYPICAL 1C CONFIGURATIONS

On large projects we, as a rule, use 1C:Enterprise. That is why we will try to describe practical recommendations for solving locking problems using the 1C:Enterprise + MS SQL combination as an example.

The 8th generation of 1C:Enterprise provides very good concurrency. With ordinary servers and communication channels, a large number of users can work simultaneously with one configuration (that is, in one database). For example, hundreds of storekeepers process the issue or receipt of goods, economists simultaneously calculate labor costs for various departments, and accountants run payroll calculations, etc.

But there is a reason why the opposite opinion exists: the myth that, under intensive simultaneous use, working with solutions based on 1C:Enterprise is uncomfortable or impossible. As soon as standard 1C:Enterprise solutions begin to be used by hundreds of users on an industrial scale, a window appears on the screen more and more often with the proud inscription: “Error when calling the context method (Write): Lock conflict when executing a transaction: ...” and then, depending on the type of SQL server used, something like “Microsoft OLE DB Provider for SQL Server: Lock request time out period exceeded. ...”.

Almost all standard solutions, as delivered out of the box, are configured for automatic lock management. “Automatic” here can be read as “paranoid”: just in case, when any document is processed, the system locks everything that might be in any way connected with it. So it turns out that while one user is posting something (and sometimes merely writing it), the rest can only wait.

Let me offer my opinion on why 1C decided not to tune its standard solutions for highly concurrent use. The labor cost of such a modification is not high: a few man-months, which is insignificant at 1C's scale. It seems to me that the reason lies elsewhere.

Firstly, such a modification would significantly complicate the processing of all documents. This means that for customers who use 1C for small tasks there would be no gain, only a drawback: modifying the standard configuration would become harder. Statistics, meanwhile, suggest which category of clients is 1C's main source of revenue...

The second reason is buried in the typical default settings of SQL servers, for example MS SQL, which is still used more often than the others. It just so happens that those settings prioritize saving server RAM over reducing locking. As a result, when it needs to lock a number of rows, the SQL server makes the “memory- and processor-saving” decision: lock the entire table at once!..

These shortcomings of the standard solutions, or the specifics of how the database server is set up, are often identified with locking problems as such. As a result, technical deficiencies lead to very significant organizational problems: if an employee is given a reason to be distracted from work, or an excuse for why the work could not be done, only a minority will work effectively. And a message about a transaction lock, or a program that crawls, is an ideal excuse for why something was not done.

RECOMMENDATIONS FOR ELIMINATING EXCESSIVE BLOCKINGS FOR 1C:ENTERPRISE

So what should be done, given how important it is to solve the problem of excessive locking?

At the final stage of any large implementation, it is necessary to carry out fine-tuning to eliminate unnecessary transaction locks. It is critical to resolve the problems that may arise from insufficiently thought-out locking conditions or implementation techniques.

Because this operation is extremely important, it must be performed continuously. To simplify such modifications, we have developed a number of basic recommendations that we try to adhere to. They have been gathered and tested through the experience of a significant number of large-scale implementations.

  1. If the DBMS or development system you are using (for example, 1C:Enterprise) uses automatic lock management by default, switch away from automatic lock management. Configure the locking rules yourself: describe the criteria for locking entire tables or individual rows.
  2. When developing a program, whenever possible, access the tables in the same order.
  3. Try not to write to the same table multiple times within the same transaction. If this is difficult, then at least minimize the time interval between the first and last write operation.
  4. Analyze the possibility of disabling lock escalation at the SQL server level (see the sketch after this list).
  5. Clearly inform users about the reasons why they cannot perform any actions if they are due to blocking. Give accessible and understandable recommendations on what to do next.
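
As a hedged illustration of point 4, here is what restricting lock escalation can look like in MS SQL Server; which of these options is appropriate (if any) must be decided for the specific system, and the table name is a hypothetical placeholder:

-- SQL Server 2005: trace flags that restrict lock escalation
-- 1224 disables escalation based on the number of locks (memory pressure can still trigger it),
-- 1211 disables escalation completely (use with care: it can exhaust lock memory)
DBCC TRACEON (1224, -1)

-- SQL Server 2008 and later: escalation can be disabled per table
ALTER TABLE dbo._Document123 SET (LOCK_ESCALATION = DISABLE)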

If you look carefully at the recommendations, it becomes clear that such fine-tuning is appropriate not only for 1C:Enterprise but for any system, no matter what language it is written in or which database server it works with. Most of the recommendations are universal in nature and are therefore equally valid for 1C:Enterprise, for home-grown programs, and for other “boxed” ERP systems.

P.S. Did you know that we offer professional assistance with updating 1C at the best price?

Tags to search:
  • Transaction locks
  • Removing blockages
  • 1C locks
  • Lock
  • Lock conflict
  • Lock contention during transaction

What are locks in 1C, why are they needed and how to avoid problems when working with them

Surely many of you, when using 1C Enterprise information systems (1C 7.7, 1C 8.1, 1C 8.2, 1C 8.3), have encountered such a phenomenon as blocking. Moreover, as a rule, everyone calls this phenomenon differently: “1C blocking”, “1C blocking conflict”, “1C blocking errors”, “1C transaction blocking” and other names. Let's take a quick look at what locks (not deadlocks) are, why they are needed, and how to avoid problems when working with them.


Locking itself (in 1C and in other systems) is a useful tool that makes it possible to work with shared resources sequentially. The concept of a “shared resource” surrounds us throughout life: for example, while you are driving a car, no one else can drive it, so the car is a shared resource, and the second driver, say your wife or husband, waits until you arrive. You are both competing for a common resource, the car. Who drives the car at any given moment you decide between yourselves, but how should automated systems decide? This is exactly why the locking mechanism was invented: locks organize access to a shared resource and define a queue. As a rule, in life, as in information systems (1C 7.7, 1C 8.1, 1C 8.2, 1C 8.3), there are a lot of shared resources, so there are also a lot of locks. Now the second important point: how long will your wife or husband wait for the car to be free? It is logical to assume that not forever. That is why locks are given a waiting limit, otherwise known as a timeout. The timeout is the maximum time a competing participant (your wife or husband) waits for the shared resource to be freed; after that, they either keep waiting for the same amount of time again, or walk away. In 1C information systems, the expiration of the timeout ends with the messages “1C lock conflict”, “1C lock errors”, “1C transaction locks”, “Timeout while locking”.
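
A minimal sketch of what such a timeout looks like at the MS SQL Server level (the 20-second value and the table name are illustrative assumptions only): a session that waits on someone else's lock longer than the limit receives the very “Lock request time out period exceeded” error quoted earlier.

-- Limit lock waits for the current session to 20 seconds (20000 ms)
SET LOCK_TIMEOUT 20000

-- If this statement has to wait on another session's lock for longer than that,
-- it fails with error 1222: "Lock request time out period exceeded"
SELECT * FROM dbo._Document123 WITH (UPDLOCK)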

An important detail to keep in mind is that locks (in 1C in particular) can be explicit (set by the user) and implicit (set by the SQL platform). In this article we are talking about explicit locks; since they are always used inside a transaction, “1C locks” and “1C transaction locks” turn out to be synonyms.

We have established that if the timeout is exceeded, the user receives an error message; to the user, the waiting itself looks like a frozen 1C screen. The likelihood of a timeout message (an error visible to the 1C user) appearing is affected by the following factors:

  • a large number of 1C locks within a transaction;
  • the duration of the transaction.

To minimize messages associated with locking errors, it is necessary to either reduce the number of locks (optimize selectivity) or reduce the duration of transactions.
Now let’s determine how these indicators can be influenced in a real 1C information system.

To reduce the number of locks:

In 1C:Enterprise 7.7:

The 1C 7.7 information system uses table-level locks, which paralyze users' work. As a rule, more than 50 people cannot work in one database without errors, and problems can also appear in databases with 20 users.
Solution:

  • Flexible 1C locks from the Softpoint company. With their help, you will not only optimize the locks themselves (replacing table locks with custom ones), but also speed up selections, searches and reports.
In 1C:Enterprise 8.x:
The 1C 8.1, 1C 8.2 and 1C 8.3 information systems in automatic lock management mode use redundant lock types (REPEATABLE READ, SERIALIZABLE). This noticeably degrades users' experience once 100 or more of them work simultaneously (a DBMS-level illustration follows after this block).
Solution:
  • Managed 1C locks: a built-in tool of the 1C platform for more selective configuration of locks. To use it, the programmer must write special statements in the right places in the code to lock the records (the ones he considers necessary!) in the information system tables;
  • Flexible locks 1C - Softpoint technology for replacing standard locks with custom ones.
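
As a hedged DBMS-level illustration of why the lock types mentioned above are redundant for concurrency (the table and column names are hypothetical placeholders): under REPEATABLE READ, the shared locks taken by a read are held until the end of the transaction, so other sessions that try to change the same rows have to wait.

-- Session 1: read under REPEATABLE READ inside a transaction
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRAN
SELECT * FROM dbo._AccumRg456 WHERE _Period >= '20240101'
-- the shared locks on the rows just read are held until COMMIT TRAN / ROLLBACK TRAN

-- Session 2: an UPDATE of any of those rows will now block (and may time out)
-- until Session 1 commits or rolls back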

To reduce transaction times:

For 1C information systems (1C 7.7, 1C 8.1, 1C 8.2, 1C 8.3), as for any other information systems, similar approaches are used:

    Checking and correctly setting up routine database maintenance (maintenance of files, indexes, statistics and tempdb, plus the configuration of Windows and SQL Server);

    Analysis and optimization of heavy 1C and SQL queries (index tuning, query rewriting);

    Checking transactions for redundancy. In many cases, operations are included in a transaction for no good reason, without realizing how this affects its duration, and with it the overall performance.

  1. If you want to deal with the technical performance problems of 1C (1C 7.7, 1C 8.1, 1C 8.2, 1C 8.3) and other information systems on your own, there is a unique list of technical articles in our Almanac (locks and deadlocks, heavy CPU and disk load, database maintenance and index tuning are just a small part of the technical material you will find there).
  2. If you want to discuss performance issues with our expert or order a PerfExpert performance monitoring solution, then leave a request and we will contact you as soon as possible.