SQL Performance Troubleshooting (Part 2)
In this post we continue looking at ways to identify performance issues on SQL Server, focusing mainly on queries you can run to diagnose different problems.
Current Running Queries
In some instances you may have queries that seem to be running for a long time, or even locking a whole bunch of tables, so you need to find out what is currently executing. To do this, you can use the query below:
SELECT
[DatabaseName] = db_name(rq.database_id),
s.session_id,
rq.status,
[SqlStatement] = SUBSTRING (qt.text, rq.statement_start_offset/2 + 1,
(CASE WHEN rq.statement_end_offset = -1 THEN LEN(CONVERT(NVARCHAR(MAX),
qt.text)) * 2 ELSE rq.statement_end_offset END - rq.statement_start_offset)/2 + 1),
[ClientHost] = s.host_name,
[ClientProgram] = s.program_name,
[ClientProcessId] = s.host_process_id,
[SqlLoginUser] = s.login_name,
[DurationInSeconds] = datediff(s,rq.start_time,getdate()),
rq.start_time,
rq.cpu_time,
rq.logical_reads,
rq.writes,
[ParentStatement] = qt.text, --Full text of the batch or stored procedure containing the statement (if applicable)
p.query_plan,
rq.wait_type,
[BlockingSessionId] = bs.session_id,
[BlockingHostname] = bs.host_name,
[BlockingProgram] = bs.program_name,
[BlockingClientProcessId] = bs.host_process_id,
[BlockingSql] = SUBSTRING (bt.text, brq.statement_start_offset/2 + 1,
(CASE WHEN brq.statement_end_offset = -1 THEN LEN(CONVERT(NVARCHAR(MAX),
bt.text)) * 2 ELSE brq.statement_end_offset END - brq.statement_start_offset)/2 + 1)
FROM sys.dm_exec_sessions s
INNER JOIN sys.dm_exec_requests rq
ON s.session_id = rq.session_id
CROSS APPLY sys.dm_exec_sql_text(rq.sql_handle) as qt
OUTER APPLY sys.dm_exec_query_plan(rq.plan_handle) p
LEFT OUTER JOIN sys.dm_exec_sessions bs
ON rq.blocking_session_id = bs.session_id
LEFT OUTER JOIN sys.dm_exec_requests brq
ON rq.blocking_session_id = brq.session_id
OUTER APPLY sys.dm_exec_sql_text(brq.sql_handle) as bt
WHERE s.is_user_process =1
AND s.session_id <> @@spid
AND rq.database_id = DB_ID() -- Comment out to look at all databases
ORDER BY rq.start_time ASC;
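If the results show a single runaway session blocking everything else and you decide its work can safely be abandoned, you can terminate that session. The session id below is purely illustrative; substitute the BlockingSessionId from the results, and bear in mind that the session's open transaction will be rolled back:
KILL 53; --terminate the blocking session (illustrative id); its open transaction is rolled back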
Longest Running Queries
To find which of the currently running queries have been executing the longest, we can reuse the query from the previous section: it is ordered by start time, so the longest-running requests appear at the top (or you can sort by [DurationInSeconds] descending instead).
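That query only shows a point-in-time snapshot of what is running right now. To see which statements have accumulated the most execution time, and how often they run, one option is to query the plan cache via sys.dm_exec_query_stats. The sketch below is indicative rather than definitive: the statistics only cover plans that are still in the cache and they reset when the instance restarts.
SELECT TOP 20
[SqlStatement] = SUBSTRING (qt.text, qs.statement_start_offset/2 + 1,
(CASE WHEN qs.statement_end_offset = -1 THEN LEN(CONVERT(NVARCHAR(MAX),
qt.text)) * 2 ELSE qs.statement_end_offset END - qs.statement_start_offset)/2 + 1),
qs.execution_count, --Number of times the statement has run since it was cached
[TotalDurationInSeconds] = qs.total_elapsed_time / 1000000.0,
[AvgDurationInSeconds] = (qs.total_elapsed_time / 1000000.0) / qs.execution_count,
[TotalCpuInSeconds] = qs.total_worker_time / 1000000.0,
qs.total_logical_reads,
qs.total_logical_writes,
qs.last_execution_time
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
ORDER BY qs.total_elapsed_time DESC;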
What we are looking for is the following:
- Queries that take a long time to run. As a general rule of thumb, anything taking more than 500ms is worth a closer look.
- Queries that have a high CPU and/or IO usage, when compared to the other queries.
- Queries that have a high number of executions. This could mean that the application is querying more than necessary and the query is a possible candidate for caching.
Missing Indexes
SQL Server can also give us suggestions on which indexes should be created. However, these are only suggestions and you shouldn't implement them blindly. Instead, you need to look for patterns and implement the indexes that make sense for your workload.
SELECT
TableName = d.statement, --Table the index should be created on
d.equality_columns, --Columns to be part of the index
d.inequality_columns, --Columns to be part of the index
d.included_columns, --Columns to include in the index
s.user_scans, --Number of times the index could have been used for a scan
s.user_seeks, --Number of times the index could have been used for a seek
s.avg_total_user_cost, --Average cost of statements that have run against this table that would benefit from the index
s.avg_user_impact, --Cost reduced when creating the index in %
AverageCostSavings = ROUND(s.avg_total_user_cost * (s.avg_user_impact/100.0), 3),
TotalCostSavings = ROUND(s.avg_total_user_cost * (s.avg_user_impact/100.0) * (s.user_seeks + s.user_scans),3)
FROM sys.dm_db_missing_index_groups g
INNER JOIN sys.dm_db_missing_index_group_stats s
ON s.group_handle = g.index_group_handle
INNER JOIN sys.dm_db_missing_index_details d
ON d.index_handle = g.index_handle
WHERE d.database_id = db_id()
ORDER BY TableName, TotalCostSavings DESC;
Once you have run this query you should look for the following:
- If there is a large number of user seeks, the index is probably worth creating (see the example after this list).
- See whether you can modify an existing index (or the query itself) instead of creating a new index.
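As an illustration, suppose the missing index query returned dbo.Orders as the table, CustomerId as an equality column, OrderDate as an inequality column and TotalDue as an included column (all hypothetical names). The suggested index would then look something like this, with equality columns first, inequality columns after them, and included columns in the INCLUDE clause:
--Hypothetical index built from the missing index suggestion above
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_OrderDate
ON dbo.Orders (CustomerId, OrderDate)
INCLUDE (TotalDue);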
Index Usage Statistics
As your database changes and grows over time, you may find that indexes you once created are no longer being used and may even be slowing the database down. To find indexes that are no longer used, you can run the following query:
SELECT
[DatabaseName] = DB_Name(db_id()),
[Schema] = sc.name,
[TableName] = OBJECT_NAME(i.object_id),
[IndexName] = i.name,
[IndexType] = i.type_desc,
[TotalUsage] = IsNull(user_seeks, 0) + IsNull(user_scans, 0) + IsNull(user_lookups, 0),
[UserSeeks] = IsNull(user_seeks, 0),
[UserScans] = IsNull(user_scans, 0),
[UserLookups] = IsNull(user_lookups, 0),
[UserUpdates] = IsNull(user_updates, 0) --Number of times this index has needed to be updated
FROM sys.indexes i
INNER JOIN sys.objects o
ON i.object_id = o.object_id
INNER JOIN sys.schemas sc
ON sc.schema_id = o.schema_id
LEFT OUTER JOIN sys.dm_db_index_usage_stats s
ON s.object_id = i.object_id
AND s.index_id = i.index_id
AND s.database_id = DB_ID() --usage stats are server-wide, so restrict them to the current database
WHERE
(OBJECTPROPERTY(i.object_id, 'IsMsShipped') = 0)
ORDER BY [Schema], [TableName], [IndexName];
In the result set you should look for the following:
- Indexes with zero, or very low, user seeks. This indicates that SQL Server is probably not using the index as intended and you should investigate why. It could be that the table structure has changed, or that the queries the index was built for are no longer selective. In these situations you must decide whether to update the queries or remove the index completely (see the example below).
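If you do decide an index is dead weight, dropping it is straightforward. The names below are only placeholders; make sure you have checked the usage stats over a representative period (they reset when the instance restarts) before removing anything:
--Hypothetical example: remove an index that is never read but still has to be maintained on every write
DROP INDEX IX_Orders_CustomerId_OrderDate ON dbo.Orders;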
Well that wraps up this series of posts on SQL Performance. This information has helped me diagnose & correct various issues, so I hope it does the same for you.
Until next time...keep learning!