Sunday, November 10, 2019

Possible Bad Non-Clustered Indexes

Identifying Unused Indexes in Your SQL Database

Are you looking to optimize your SQL database performance? One common area for improvement is identifying and addressing **unused indexes**. Indexes are crucial for speeding up data retrieval, but having too many of them, or indexes that are never used, can actually hurt performance because every INSERT, UPDATE, and DELETE has to maintain them.

Here's a handy SQL query that can help you pinpoint those underperforming or completely unused non-clustered indexes in your SQL Server database. This query focuses on indexes that have significantly more writes (updates) than reads (seeks, scans, lookups), suggesting they might not be pulling their weight.

SELECT
    SCHEMA_NAME(o.[schema_id]) AS [Schema Name],
    OBJECT_NAME(s.[object_id]) AS [Table Name],
    i.name AS [Index Name],
    i.index_id,
    i.is_disabled,
    i.is_hypothetical,
    i.has_filter,
    i.fill_factor,
    s.user_updates AS [Total Writes],
    s.user_seeks + s.user_scans + s.user_lookups AS [Total Reads],
    s.user_updates - (s.user_seeks + s.user_scans + s.user_lookups) AS [Difference]
FROM
    sys.dm_db_index_usage_stats AS s WITH (NOLOCK)
INNER JOIN
    sys.indexes AS i WITH (NOLOCK) ON s.[object_id] = i.[object_id] AND i.index_id = s.index_id
INNER JOIN
    sys.objects AS o WITH (NOLOCK) ON i.[object_id] = o.[object_id]
WHERE
    OBJECTPROPERTY(s.[object_id], 'IsUserTable') = 1
    AND s.database_id = DB_ID()
    AND s.user_updates > (s.user_seeks + s.user_scans + s.user_lookups)
    AND i.index_id > 1
    AND i.[type_desc] = N'NONCLUSTERED'
    AND i.is_primary_key = 0
    AND i.is_unique_constraint = 0
    AND i.is_unique = 0
ORDER BY
    [Difference] DESC,
    [Total Writes] DESC,
    [Total Reads] ASC
OPTION (RECOMPILE);
  

Understanding the Query:

  • This query joins **`sys.dm_db_index_usage_stats`** (which tracks index usage), **`sys.indexes`** (for index metadata), and **`sys.objects`** (for table names).
  • We filter for **user tables** (`OBJECTPROPERTY(s.[object_id], 'IsUserTable') = 1`) within the **current database** (`s.database_id = DB_ID()`).
  • The core of the logic is `s.user_updates > (s.user_seeks + s.user_scans + s.user_lookups)`. This condition looks for indexes where the number of writes exceeds the number of reads, indicating a potential candidate for removal or modification.
  • We specifically target **non-clustered indexes** (`i.[type_desc] = N'NONCLUSTERED'`) and exclude primary keys, unique constraints, and unique indexes, as these often serve other purposes beyond just speeding up reads.
  • The results are ordered to show indexes with the largest difference between writes and reads first, followed by those with higher total writes.

Running this query can give you valuable insights into your database's indexing strategy. Keep in mind that sys.dm_db_index_usage_stats is cleared when the SQL Server instance restarts, so make sure the server has been running long enough to capture a representative workload. Before making any changes, always thoroughly analyze the results and consider the specific needs of your application; removing an index without proper evaluation could lead to performance regressions in other areas. A cautious disable-then-drop workflow is sketched below.
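If an index really does turn out to be dead weight, one careful approach is to disable it first and only drop it once the workload has run cleanly for a while. This is just a sketch; the table and index names are placeholders, not objects returned by the query above:

-- Placeholder names; substitute an index flagged by the query above.
-- Disabling keeps the index definition in the metadata, so it can be
-- brought back with ALTER INDEX ... REBUILD if anything regresses.
ALTER INDEX IX_MyTable_UnusedColumn ON dbo.MyTable DISABLE;

-- Only after you are confident the index is truly not needed:
-- DROP INDEX IX_MyTable_UnusedColumn ON dbo.MyTable;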

Thursday, November 7, 2019

How to read the SQL log file using TSQL

SQL Server provides an undocumented system stored procedure called sp_readerrorlog, which is a wrapper around the extended stored procedure xp_readerrorlog. It lets you view the SQL Server error log files directly from a query window and even search for specific keywords within the logs.

The parameters accepted by xp_readerrorlog are as follows:

  • Log File Value: 0 = current, 1 = Archive #1, 2 = Archive #2, and so on...
  • Log File Type: 1 or NULL = error log, 2 = SQL Agent log
  • Search String 1: The first keyword you want to search for.
  • Search String 2: An optional second keyword to refine the search results.
  • Start Time: The starting point for the search.
  • End Time: The end time for the search.
  • Sort Order: Use N'asc' for ascending order or N'desc' for descending order of results.

For example, to search for the word 'backup' in the current error log, you would execute the following SQL query:

EXEC master.dbo.xp_readerrorlog 0, 1, N'backup'
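To narrow the results further, you can supply the remaining parameters as well. The call below is only an illustrative sketch; the second keyword, the date range, and the sort order are placeholders you would adjust for your own search:

EXEC master.dbo.xp_readerrorlog
    0,             -- current error log
    1,             -- 1 = SQL Server error log (2 = SQL Agent log)
    N'backup',     -- first search string
    N'database',   -- optional second search string (placeholder)
    '2019-11-01',  -- start time (placeholder)
    '2019-11-07',  -- end time (placeholder)
    N'desc';       -- sort order: newest entries first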

With this approach, you can easily retrieve specific log entries and monitor SQL Server activities in detail.

Sunday, November 3, 2019

Top Procedures By Total Physical Reads

This post provides a helpful SQL query to identify your most I/O-intensive stored procedures. By analyzing physical and logical reads, you can pinpoint performance bottlenecks and optimize your database.


SELECT TOP (25) p.name AS [SP Name]
 ,qs.total_physical_reads AS [TotalPhysicalReads]
 ,qs.total_physical_reads / qs.execution_count AS [AvgPhysicalReads]
 ,qs.execution_count
 ,qs.total_logical_reads
 ,qs.total_elapsed_time
 ,qs.total_elapsed_time / qs.execution_count AS [avg_elapsed_time]
 ,CASE WHEN CONVERT(NVARCHAR(max), qp.query_plan) COLLATE Latin1_General_BIN2 LIKE N'%<MissingIndexes>%' THEN 1 ELSE 0 END AS [Has Missing Index]
 ,FORMAT(qs.last_execution_time, 'yyyy-MM-dd HH:mm:ss', 'en-US') AS [Last Execution Time]
 ,FORMAT(qs.cached_time, 'yyyy-MM-dd HH:mm:ss', 'en-US') AS [Plan Cached Time]
-- ,qp.query_plan AS [Query Plan] -- Uncomment if you want the Query Plan 
FROM sys.procedures AS p WITH (NOLOCK)
INNER JOIN sys.dm_exec_procedure_stats AS qs WITH (NOLOCK) ON p.[object_id] = qs.[object_id]
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE qs.database_id = DB_ID()
 AND qs.total_physical_reads > 0
ORDER BY qs.total_physical_reads DESC
 ,qs.total_logical_reads DESC
OPTION (RECOMPILE);


This query targets the top 25 stored procedures in your current database that have generated physical reads. It provides key metrics like **Total Physical Reads**, **Average Physical Reads per execution**, **Execution Count**, **Total Logical Reads**, and **Total Elapsed Time**. Additionally, it checks for **missing indexes** within the query plan, which can be a significant indicator of performance issues. You'll also see the **Last Execution Time** and **Plan Cached Time** for better context.
Feel free to uncomment the `qp.query_plan` line if you want to inspect the actual query plan for deeper analysis. This script is a great starting point for identifying stored procedures that might be causing high disk I/O and slowing down your database.
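Once a particular procedure stands out, you can pull just its cached plan for closer inspection. A minimal sketch, assuming a procedure name of your own; dbo.usp_MyProcedure below is only a placeholder:

SELECT qp.query_plan
FROM sys.dm_exec_procedure_stats AS qs WITH (NOLOCK)
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE qs.database_id = DB_ID()
 AND qs.[object_id] = OBJECT_ID(N'dbo.usp_MyProcedure') -- placeholder procedure name
OPTION (RECOMPILE);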
