SQL SERVER – Top Reasons for Slow Performance

Yesterday, I asked a very simple question on Twitter about the top reasons for slow SQL Server performance. I received a lot of interesting answers and perspectives on the topic. Quite a few people also sent me detailed emails, as they felt there was not enough space on Twitter to answer the question. Here is the blog post I built from the various comments on Twitter and, in particular, three emails I received.

Slow Performance of SQL Server – Poor Architecture

This seems to be the unanimous top reason for a poorly performing SQL Server. However, architecture is a very wide topic, and it is impossible to discuss every single architectural inefficiency. The subject spans everything from indexes to query rewriting, and from Always On to read-only replicas; there are infinite things we could include under it. However, here are a few topics that were highlighted during the conversation.

It is not easy for anyone to build a future-proof schema and code. When we start a business, its requirements are very different from those of the same business after it has been running for many years. Business success also brings expansion into different geographies and growth of data, all of which leads to lots of custom code and temporary fixes that become permanent. Here is a quick list of things that lead to poor SQL Server performance.

Reason #1: Bad Schema Designing

You may find it surprising, but lots of people start building a schema without fully grasping the requirements. That leads to poor design at the beginning, which in turn leads to very complex queries and terrible server performance.

Here are the common mistakes:

  • Poor normalization – flat, wide tables or over-normalization
  • Redundant data in databases
  • Bad referential integrity (missing or disabled foreign keys and constraints)
  • Wide composite primary keys (and clustered indexes)
  • No stress testing of schema robustness against expected growth patterns

There are many more bad schema design practices; however, many times we only come across them after the business has been successful, and by then it is difficult to change the schema. If you are starting a new project, it is always recommended to get the schema design right from the beginning and follow all the necessary steps of the health check.
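To illustrate the wide-composite-key point from the list above, here is a minimal, hypothetical T-SQL sketch (table and column names are invented for illustration). Every nonclustered index on a table inherits the clustered key columns, so a wide clustered key silently bloats every other index on the table.

```sql
-- Hypothetical example: a wide composite clustered primary key means
-- every nonclustered index also carries all four key columns.
CREATE TABLE dbo.OrderDetailWide
(
    CustomerCode VARCHAR(40) NOT NULL,
    OrderDate    DATETIME2   NOT NULL,
    ProductCode  VARCHAR(40) NOT NULL,
    LineNumber   INT         NOT NULL,
    Quantity     INT         NOT NULL,
    CONSTRAINT PK_OrderDetailWide
        PRIMARY KEY CLUSTERED (CustomerCode, OrderDate, ProductCode, LineNumber)
);

-- A narrow surrogate key keeps the clustered index (and everything that
-- references it) small; the natural key is still enforced separately.
CREATE TABLE dbo.OrderDetail
(
    OrderDetailID BIGINT IDENTITY(1,1) NOT NULL,
    CustomerCode  VARCHAR(40) NOT NULL,
    OrderDate     DATETIME2   NOT NULL,
    ProductCode   VARCHAR(40) NOT NULL,
    LineNumber    INT         NOT NULL,
    Quantity      INT         NOT NULL,
    CONSTRAINT PK_OrderDetail PRIMARY KEY CLUSTERED (OrderDetailID),
    CONSTRAINT UQ_OrderDetail_Natural
        UNIQUE (CustomerCode, OrderDate, ProductCode, LineNumber)
);
```

The second design is not automatically "right" – it is simply the trade-off most workloads end up preferring once nonclustered indexes start multiplying.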

Reason #2: Inefficient T-SQL Queries / Code

There are many reasons for poor coding, and most of the time it is because the developer writing the code is not 100% familiar with the schema or with newer T-SQL features. I have seen lots of users using cursors and WHILE loops to insert data where they could simply use INSERT…SELECT or SELECT…INTO.

Here are the common mistakes:

  • Using NOT IN or IN instead of NOT EXISTS or EXISTS
  • Using cursors or a WHILE loop instead of INSERT…SELECT or SELECT…INTO
  • Using SELECT * instead of only necessary column names
  • Forgetting to use parenthesis when using logical OR or AND operators
  • Nesting of subqueries creating a complex execution plan
  • Using functions on indexed columns in the WHERE clause
  • Overuse of scalar user-defined functions
  • Datatype mismatch in predicates (where condition or joins)
  • Interchanging UNION and UNION ALL (UNION adds a duplicate-removal step)
  • Using DISTINCT unnecessarily everywhere
  • Dynamic SQL
  • … this can be a month-long series

While I discuss this in detail during my Comprehensive Database Performance Health Check, you can always watch my free videos discussing this topic here.
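As a quick illustration of the cursor/WHILE-loop point above (table and column names are hypothetical), here is the row-by-row pattern next to its set-based equivalent:

```sql
-- Row-by-row: one INSERT per iteration, one plan execution per row.
DECLARE @ID INT = (SELECT MIN(OrderID) FROM dbo.Orders);
WHILE @ID IS NOT NULL
BEGIN
    INSERT INTO dbo.OrderArchive (OrderID, OrderDate, Amount)
    SELECT OrderID, OrderDate, Amount
    FROM dbo.Orders
    WHERE OrderID = @ID;

    SET @ID = (SELECT MIN(OrderID) FROM dbo.Orders WHERE OrderID > @ID);
END;

-- Set-based: the same work in a single statement.
INSERT INTO dbo.OrderArchive (OrderID, OrderDate, Amount)
SELECT OrderID, OrderDate, Amount
FROM dbo.Orders;

-- Related pitfall from the list: NOT IN returns no rows at all if the
-- subquery produces a NULL, while NOT EXISTS behaves as expected.
SELECT o.OrderID
FROM dbo.Orders AS o
WHERE NOT EXISTS
      (SELECT 1 FROM dbo.OrderArchive AS a WHERE a.OrderID = o.OrderID);
```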

Reason #3: Poor Indexing Strategies

It is so common to see people use indexing as a get-out-of-jail-free card, and quite often they end up back in jail for playing it. Indexing is often considered a silver bullet for performance problems, but honestly, the story is very different. Indexing can work absolutely against you if you use it incorrectly. There are many horror stories around indexing. I have blogged a lot about this topic, so I will keep the discussion very short.

Here are the common mistakes:

  • Indexing every single foreign key
  • Indexing every column in the table
  • Many single-column indexes
  • Preferring a heap table over a clustered index
  • Under-indexing your table
  • Not maintaining your indexes

Here are two of my most popular scripts for indexing, which can help you to balance your index strategies.

Additionally, I suggest you watch my free video here where I demonstrate how an index can reduce the performance of SELECT statements.

Here is the follow-up blog post: Poor Indexing Strategies – 10 Don’ts for Indexes.
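One way to start balancing an index strategy is to compare reads against writes per index. This sketch uses the sys.dm_db_index_usage_stats DMV; its counters reset on instance restart, so treat the numbers as indicative only, not as proof that an index is useless.

```sql
-- Nonclustered indexes with many writes but few reads are candidates
-- for review (run in the target database; counters reset on restart).
SELECT OBJECT_NAME(s.object_id) AS TableName,
       i.name                   AS IndexName,
       s.user_seeks + s.user_scans + s.user_lookups AS Reads,
       s.user_updates           AS Writes
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id
 AND i.index_id  = s.index_id
WHERE s.database_id = DB_ID()
  AND i.index_id > 1          -- skip heaps and clustered indexes
ORDER BY s.user_updates DESC;
```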

Reason #4: Incorrect Server Deployment

This is my favorite topic, and as a matter of fact, I can keep on talking about it for hours. If you have attended any of my training sessions, you must be aware of the list of incorrect server deployment practices I discuss during the session. The biggest complaint I hear when people upgrade their SQL Server is that despite newer software and hardware, they are getting poor performance. There are many different parameters involved in slow SQL Server performance, and the biggest one is proper deployment.

Here are the common mistakes:

  • Keeping Maximum Degree of Parallelism at 0
  • Not setting an appropriate fill factor at the index level
  • Low file autogrowth settings
  • A single TempDB data file
  • Hosting SQL Server data and log files on the same drive
  • Running antivirus scans against SQL Server database files
  • An incorrect max server memory configuration
  • High latency for your log files
  • … and many more I often discuss during the health check.
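Several of the settings above can be checked in seconds. This sketch queries sys.configurations and counts TempDB data files; the "correct" values depend entirely on the workload and hardware, so treat it as a starting point, not a verdict:

```sql
-- Current MAXDOP and max server memory settings
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN (N'max degree of parallelism',
               N'max server memory (MB)');

-- Number of TempDB data files (a single file is a common bottleneck)
SELECT COUNT(*) AS TempdbDataFiles
FROM tempdb.sys.database_files
WHERE type_desc = N'ROWS';
```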

Reason #5: Under Provisioned Hardware

I really wish this were the major problem in the industry. Most of the time, I see over-provisioned hardware but an incorrectly deployed SQL Server. Let us look at some of the sample questions I often receive.

  • Our CPU is running at 100%; should we provision more CPU?
  • Our memory is 95% consumed by SQL Server; how do we keep some of it free?
  • Our disk queue is always very high; should we add more disks and separate the workload?

Honestly, you can always throw more hardware at a problem and improve performance, but that should never be the starting point. More hardware may temporarily solve the problem, but it is rarely a permanent solution. One should follow a more scientific approach: look at how your resources are actually consumed and base hardware decisions on that.

I have recommended upgrading the hardware in only 3 out of 100 performance tuning cases. Most of the time, we can solve slow performance issues related to CPU, IO, and memory through the systematic and scientific process of a server health check.
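A more scientific first step than buying hardware is to look at wait statistics. This sketch over sys.dm_os_wait_stats shows where the server actually spends its time (the excluded wait types are a small, illustrative subset of the benign idle waits); broadly, heavy PAGEIOLATCH_* waits point at disk, SOS_SCHEDULER_YIELD at CPU, and RESOURCE_SEMAPHORE at memory grants.

```sql
-- Top waits since the last restart, excluding a few benign idle waits
SELECT TOP (10)
       wait_type,
       wait_time_ms / 1000.0 AS wait_time_sec,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'XE_TIMER_EVENT',
                        N'REQUEST_FOR_DEADLOCK_SEARCH')
ORDER BY wait_time_ms DESC;
```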

Slow Performance and Knowledge

The solution to all the problems is very simple – Knowledge. If you are a professional in the data platform area, you must keep on learning and developing yourself to stay ahead of all the new upcoming challenges.

If your server is running slow and you would like to have a discussion about it, please do not hesitate to reach out to me about the Comprehensive Database Performance Health Check. I will be happy to talk to you about your server’s health and help you set it up for success.

You can always connect with me on Twitter and LinkedIn.

Reference: Pinal Dave (https://blog.sqlauthority.com)

4 Comments

  • Actually, last week I had a massive performance problem which also impacted Windows itself. The whole VM was lagging at everything. I tried adding RAM and CPU, but nothing helped. So I moved it to a different host, which solved the problem, and THEN one of the other admins told me that there was a problem with the specific NIC the VM had used on the original server. So just assigning a different NIC to the VM on its original server would also have solved the problem.

  • boo for over-normalization. there is no such thing, is there? one can only break things down until they can be broken down no more – and that is the goal of “proper” data design is it not?

  • These are interesting thoughts: Redundant data in databases
    Bad referential integrity (foreign keys and constraints)
    Wide composite primary keys (and clustered indexes)

    In the end, there are three things that will help performance: indexing, indexing, and indexing. Properly performed, these are the keys to performance.

    It has been true for years and remains true today.

    Sure, schema plays a role; and yes, there is the question of when to use a cursor (NEVER, unless it's absolutely necessary). SQL is a SET BASED LANGUAGE: you return a result (SET). So avoid the programmer's mindset and stop the looping unless it's absolutely necessary.

    Design the schema to be efficient and return result sets efficiently, give that query engine every opportunity to be successful in choosing an optimised plan quickly and efficiently, and, most important, create the best indexes for the use of your database (transactional or reporting).

    There are differences and thus the need for a warehouse to optimize analytical queries using a slightly different approach and indexing scheme than the transactional.

    There is a happy medium and it does not mandate huge costs or expenses, only a correct architecture and approach.

  • Very interesting article. In just about every high-severity incident I have found myself in, these items are always the contributing causes. Especially, not many DBAs understand “Maximum Degree of Parallelism to 0”; it's still heavily debated among infrastructure engineers, developers, and DBAs. I wish MS would actually change the default setting.

