Tuesday, March 15, 2011

Tips for Reducing SQL Server Locks


If your users are complaining that they have to wait for their
transactions to complete, you may want to find out if object locking on
the server is contributing to the problem. To do this, use the SQL
Server Locks object's Average Wait Time (ms) counter in Performance
Monitor. You can use this counter to measure the average wait time for a
variety of lock types, including database, extent, key, page, RID, and
table locks.
If you identify one or more types of locks causing transaction delays,
then you will want to investigate further to see if you can identify
what specific transactions are causing the locking. The Profiler is the
best tool for this detailed analysis. [7.0, 2000] Updated 9-18-2006
*****
Use sp_who and sp_who2 (the sp_who2 stored procedure is not documented
in the SQL Server Books Online, but offers more detail than sp_who) to
identify which processes may be blocking other processes. While blocking
can also be identified using Enterprise Manager, these two commands work
much faster and more efficiently. [6.5, 7.0, 2000] Updated 9-18-2006
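As a quick sketch of how you might use these (the SPID shown is hypothetical, for illustration only):

```sql
-- List all current processes; in the sp_who2 output, the BlkBy column
-- shows the SPID (if any) that is blocking each process.
EXEC sp_who2

-- Once you have a suspect, narrow sp_who to that single SPID:
EXEC sp_who 55   -- 55 is a hypothetical SPID
```

Look for rows where BlkBy is non-blank; following the chain of BlkBy values leads you to the process at the head of the blocking chain.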
*****
On tables that change little, if at all, such as lookup tables, consider
altering the default lock level for the table. By default, SQL Server
uses row-level locking for all tables, unless the query optimizer
determines that a coarser locking level, such as page or table locks, is
more appropriate. For most lookup tables that aren't huge, SQL Server
will automatically use row-level locking. Because row locking is done at
the row level, SQL Server has to work harder to maintain row locks than
it does for either page or table locks. Since lookup tables aren't being
changed by users, it would be more efficient to use a single table lock
instead of many individual row locks. How do you accomplish this?
You can override how SQL Server performs locking on a table by using the
sp_indexoption stored procedure. Below is an example of code you can run
to tell SQL Server to use table locking, not row or page locks, for a
specific table:
EXEC sp_indexoption 'table_name', 'AllowRowLocks', FALSE
GO
EXEC sp_indexoption 'table_name', 'AllowPageLocks', FALSE
GO
This code turns off both row and page locking for the table, thus only
table locking is available. [7.0, 2000] Updated 9-18-2006
*****
Keep all Transact-SQL transactions as short as possible. This helps to
reduce the number of locks (of all types), which in turn speeds up the
overall performance of your SQL Server applications. If practical, you
may want to break down long transactions into groups of smaller
transactions. In addition, only include those Transact-SQL commands
within a transaction that are necessary for the transaction. Leave all
other code outside of the transaction. [6.5, 7.0, 2000, 2005] Updated
8-21-2005
*****
An often-overlooked cause of locking is an I/O bottleneck. When your
server experiences an I/O bottleneck, users' transactions take longer to
complete. And the longer they take to complete, the longer locks must be
held, which can lead to other transactions having to wait for previous
locks to be released.
If your server is experiencing excessive locking problems, be sure to
check if you are also running into an I/O bottleneck. If you do find
that you have an I/O bottleneck, then resolving it will help to resolve
your locking problem, speeding up the performance of your server. [6.5,
7.0, 2000, 2005] Updated 8-21-2005
*****
To help reduce the amount of time tables are locked, which hurts
concurrency and performance, avoid interleaving reads and database
changes within the same transaction. Instead, try to do all your reads
first, then perform all of the database changes (UPDATES, INSERTS,
DELETES) near the end of the transaction. This helps to minimize the
amount of time that exclusive locks are held. [6.5, 7.0, 2000, 2005]
Updated 8-21-2005
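As an illustrative sketch of this ordering (the table and variable names are hypothetical), do the reads up front, then hold exclusive locks only for as long as the changes themselves take:

```sql
-- Read what you need first, outside the transaction
SELECT @UnitPrice = UnitPrice
FROM Products
WHERE ProductID = @ProductID

-- Then group the changes at the end, inside a short transaction
BEGIN TRANSACTION
    UPDATE Inventory SET Quantity = Quantity - @Qty
    WHERE ProductID = @ProductID

    INSERT INTO Orders (ProductID, Qty, Price)
    VALUES (@ProductID, @Qty, @UnitPrice)
COMMIT TRANSACTION
```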
*****
Any conditional logic, variable assignment, and other related
preliminary setup should be done outside of transactions, not inside
them. Don't ever pause a transaction to wait for user input. User input
should always be done outside of a transaction. Otherwise, you will be
contributing to locking, hurting SQL Server's overall performance. [6.5,
7.0, 2000, 2005] Updated 8-21-2005
*****
Encapsulate all transactions within stored procedures, including both
the BEGIN TRANSACTION and COMMIT TRANSACTION statements in the
procedure. This provides two benefits that help to reduce blocking
locks.
First, it limits communication between the client application and SQL
Server to before and after the transaction, forcing any messages between
the client and the server to occur outside the time the transaction is
running (reducing transaction time).
Second, it prevents the user from leaving an open transaction (holding
locks open), because the stored procedure forces any transactions that it
starts to complete or abort. [6.5, 7.0, 2000, 2005] Updated 8-21-2005
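A minimal sketch of this pattern (the procedure, table, and column names are hypothetical; the error handling uses @@ERROR, which is available on all the versions listed):

```sql
CREATE PROCEDURE usp_TransferFunds
    @FromAccount int, @ToAccount int, @Amount money
AS
BEGIN TRANSACTION
    UPDATE Accounts SET Balance = Balance - @Amount
    WHERE AccountID = @FromAccount
    IF @@ERROR <> 0
    BEGIN
        ROLLBACK TRANSACTION
        RETURN
    END

    UPDATE Accounts SET Balance = Balance + @Amount
    WHERE AccountID = @ToAccount
    IF @@ERROR <> 0
    BEGIN
        ROLLBACK TRANSACTION
        RETURN
    END
COMMIT TRANSACTION
GO
```

Because both BEGIN TRANSACTION and COMMIT/ROLLBACK live inside the procedure, the transaction can never be left open waiting on the client.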
*****
If you have a client application that needs to "check out" data for
a while, then perhaps update it later, or maybe not, you don't want the
records locked during the entire time the record is being viewed.
Assuming "viewing" the data is much more common than "updating" the
data, then one way to handle this particular circumstance is to have the
application select the record with a plain SELECT (no locking hints such
as UPDLOCK or HOLDLOCK, which would hold locks on the record) and send
it to the client.
If the user just "views" the record and never updates it, then nothing
has to be done. But if the user decides to update the record, then the
application can perform an UPDATE by adding a WHERE clause that checks
to see whether the values in the current data are the same as those that
were retrieved.
Similarly, you can check a timestamp column in the record, if it exists.
If the data is the same, then the UPDATE can be made. If the record has
changed, then the application must include code to notify the user so he
or she can decide how to proceed. While this requires extra coding, it
reduces locking and can increase overall application performance. [6.5,
7.0, 2000] Updated 10-16-2005
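A sketch of this optimistic check (the table and column names are hypothetical); the WHERE clause compares the row's current values, or its timestamp column, against what was originally read:

```sql
UPDATE Customers
SET Phone = @NewPhone
WHERE CustomerID = @CustomerID
  AND Phone = @OriginalPhone    -- value unchanged since it was read
  -- or, if the table has a timestamp column:
  -- AND TStamp = @OriginalTStamp

IF @@ROWCOUNT = 0
    PRINT 'Record was changed by another user; notify the user.'
```

If @@ROWCOUNT is 0, the row changed (or was deleted) between the read and the update, and the application should let the user decide how to proceed.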
*****
Use the least restrictive transaction isolation level possible for your
user connection, instead of always using the default READ COMMITTED. To
do this without causing other problems, you must carefully analyze the
nature of the transaction to determine what effect a different isolation
level will have.
One example of where the default READ COMMITTED isolation level may not
be needed is when running queries to produce reports. In most cases,
using an isolation level of READ UNCOMMITTED prevents the query from
taking shared locks, speeding up both the query itself and other queries
hitting the same tables. [6.5, 7.0, 2000] Updated 10-16-2005
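For example (a hedged sketch; the Orders table is hypothetical, and READ UNCOMMITTED permits dirty reads, so use it only where approximate results are acceptable, as in many reports):

```sql
-- Set it for the whole connection:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT COUNT(*) FROM Orders

-- Or apply it per table with a hint:
SELECT COUNT(*) FROM Orders WITH (NOLOCK)
```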
*****
Try one or more of the following suggestions to help avoid blocking
locks:
1) Use clustered indexes on heavily used tables;
2) Make appropriate use of non-clustered indexes;
3) Try to avoid Transact-SQL statements that affect large numbers of
rows at once, especially the INSERT and UPDATE statements;
4) Try to have your UPDATE and DELETE statements use an index; and
5) When using nested transactions, avoid commit and rollback conflicts.
[6.5, 7.0, 2000] Updated 10-16-2005
*****
If there is a lot of contention for a particular table in your database,
consider turning off page locking for that table, requiring SQL Server
to use row level locking instead. This will help to reduce the
contention for rows located on the same page. It will also cause SQL
Server to work a little harder in order to track all of the row locks.
How well this option will work for you depends on the tradeoff in
performance between the contention and the extra work SQL Server has to
perform. Testing will be needed to determine what is best for your
particular environment. Use the SP_INDEXOPTION stored procedure to turn
off page locking for any particular table. [7.0, 2000] Updated
10-16-2005
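A sketch of turning off page locking only (the table name is a placeholder); with page locks disabled and row locks left enabled, SQL Server uses row-level locking for this table:

```sql
EXEC sp_indexoption 'table_name', 'AllowPageLocks', FALSE
GO
```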
*****
If table scans are used regularly to access data in a table, and your
table doesn't have any useful indexes to prevent this, then consider
turning off both row locking and page locking for that table. This in
effect tells SQL Server to only use table locking when accessing this
table. This can speed access to the table because SQL Server will not
have to escalate from row locking to page locking to table locking each
time a table lock is needed on the table to perform the table scan.
On the negative side, doing this can increase contention for the table.
Assuming the data in the table is mostly read only, then this should not
be too much of a problem. Testing will be needed to determine what is
best for your particular environment. [7.0, 2000] Updated 10-16-2005
*****
Do not create temporary tables from within a stored procedure that is
invoked by an INSERT INTO ... EXECUTE statement. If you do, locks will
be created on the syscolumns, sysobjects, and sysindexes tables in the
tempdb database, blocking others from using tempdb, which can
significantly affect performance. [6.5, 7.0, 2000] Updated 10-16-2005
*****
To help reduce the amount of time it takes to complete a transaction
(and thus reducing how long records are locked) try to avoid using the
WHILE statement or Data Definition Language (DDL) within a transaction.
In addition, do not open a transaction while browsing data and don't
SELECT more data than you need for the transaction at hand. For best
performance, you always want to keep transactions as short as possible.
[6.5, 7.0, 2000] Updated 10-16-2005
*****
While nesting transactions is perfectly legal, it is not recommended
because of its many pitfalls. If you nest transactions and your code
fails to commit or roll back a transaction properly, it can hold locks
open indefinitely, significantly impacting performance. [6.5, 7.0, 2000]
Updated 10-16-2005
*****
By default in SQL Server, a transaction will wait indefinitely for a
lock to be removed before continuing. If you want, you can assign a
locking timeout value to SQL Server so that long running locks won't
cause other transactions to wait long periods of time. To assign a
locking timeout value to SQL Server, run this command, "SET LOCK_TIMEOUT
length_of_time_in_milliseconds" from Query Analyzer. [7.0, 2000] Updated
10-16-2005
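A brief sketch of how this behaves (error 1222 is the number SQL Server returns when a lock request times out):

```sql
SET LOCK_TIMEOUT 5000   -- wait at most 5 seconds for a lock

-- A statement that times out fails with error 1222,
-- "Lock request time out period exceeded". The application must
-- be prepared to handle this error and retry or abort.

SET LOCK_TIMEOUT -1     -- restore the default (wait indefinitely)
SET LOCK_TIMEOUT 0      -- do not wait at all
```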
*****
Sometimes you need to perform a mass INSERT or UPDATE of thousands, if
not millions, of rows. Depending on what you are doing, this could take
some time. Unfortunately, performing such an operation could cause
locking problems for your other users. If you know users could be
affected by your long-running operation, consider breaking up the job
into smaller batches, perhaps even with a WAITFOR statement, in order to
allow others to "sneak" in and get some of their work done. [6.5, 7.0,
2000] Updated 10-16-2005
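One hedged sketch of batching on SQL Server 7.0/2000, using SET ROWCOUNT to limit each batch (the table and column names are hypothetical):

```sql
SET ROWCOUNT 1000           -- limit each DELETE to 1000 rows
WHILE 1 = 1
BEGIN
    DELETE FROM BigTable WHERE Processed = 1
    IF @@ROWCOUNT = 0 BREAK -- nothing left to delete
    WAITFOR DELAY '00:00:02'   -- let other users sneak in
END
SET ROWCOUNT 0              -- restore normal behavior
```

Each short batch commits and releases its locks before the next begins, so other users' transactions get a chance to run between batches.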
*****
In SQL Server 7.0, when a replication snapshot is generated, SQL Server
puts shared locks on all of the tables that are being published for
replication. As you can imagine, this can affect users who are trying to
update records in the locked tables. Because of this, you may want to
schedule snapshots to be created during less busy times of the day. This
is especially true if there are a lot of tables, or if the tables are
very large.
In SQL Server 2000, this behavior has changed. Assuming that all
subscribers will be either SQL Server 7.0 or 2000 servers, SQL Server
2000 will use what is called concurrent snapshot processing, which does
not put a shared lock on the affected tables, helping to boost
concurrency. [2000] Updated 10-16-2005
*****
One way to help reduce locking issues is to identify those transactions
that are taking a long time to run. The longer they take to run, the
longer their locks will block other processes, causing contention and
reducing performance. The following script can be run to identify
current, long-running transactions. This will provide you with a clue as
to what transactions are taking a long time, allowing you to investigate
and resolve the cause.
SELECT spid, cmd, status, loginame, open_tran,
    DATEDIFF(s, last_batch, GETDATE()) AS [WaitTime(s)]
FROM master..sysprocesses p
WHERE open_tran > 0
    AND spid > 50
    AND DATEDIFF(s, last_batch, GETDATE()) > 30
    AND EXISTS (SELECT * FROM master..syslockinfo l
                WHERE l.req_spid = p.spid AND l.rsc_type <> 2)
This query provides results based on the instant it runs, and they will
vary each time you run it. The goal is to review the results and look
for transactions that have been open a long time. This generally
indicates some problem that should be investigated. [7.0, 2000] Updated
10-16-2005
*****
In order to reduce blocking locks in an application, you must first
identify them. Once they are identified, you can then evaluate what is
going on, and perhaps be able to take the necessary action to prevent
them. The following script can be run to identify processes that have
blocking locks that occur longer than a time you specify. You can set
the value used to identify blocking locks to any value you want. In the
example below, it is for 10 seconds.
SELECT spid, waittime, lastwaittype, waitresource
FROM master..sysprocesses
WHERE waittime > 10000   -- wait time is measured in milliseconds
    AND spid > 50        -- use > 50 for SQL Server 2000, > 12 for 7.0
This query measures blocking locks in real time, which means you will
only get results if a blocking lock fitting your time criteria exists at
the moment you run the query. If you like, you can add some additional
code that runs the above query in a loop periodically in order to more
easily catch locking issues. Or, you can just run the query during times
when you think that locking is a problem. [7.0, 2000] Updated 10-16-2005
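A simple sketch of such a polling loop (the 10-second interval is an arbitrary choice; cancel the batch to stop it):

```sql
WHILE 1 = 1
BEGIN
    SELECT GETDATE() AS CheckTime,
        spid, waittime, lastwaittype, waitresource
    FROM master..sysprocesses
    WHERE waittime > 10000 AND spid > 50

    WAITFOR DELAY '00:00:10'   -- check again every 10 seconds
END
```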





(c)2000 - 2007 vDerivatives Limited All Rights Reserved.