SQL SERVER – Database Worst Practices – New Town and New Job and New Disasters

This blog post is written in response to the T-SQL Tuesday hosted by Amit Banerjee. I know Amit personally and am a very big fan of his community activities. I read his blog, read his comments, follow his tweets, and, most importantly, reach out to him when I feel like talking SQL.

Amit has selected a very interesting subject – Best Practices. When I read the subject, I could directly relate it to my real-world interactions. I have seen so many developers and DBAs make fatal mistakes in the early days of their careers. Most of the time, they get another chance because they are new to the job or to that particular task, which they are doing for the first time. However, such excuses cannot be made for experts and senior professionals. They are expected to perform and to carry out the right practices at all times.

Instead of writing about best practices, I am going to write about a few of the worst practices which I have quite often seen in the real world. Some of them are so bad that I often wonder how those who still use them have managed to stay on the job so far. Let us go over a few of the worst practices I have observed in the industry.

My log file is growing too big – I truncated the log without taking a backup.

This has to be the numero uno of worst practices. Every time I come across such a statement, I almost miss a heartbeat. Before I continue, let me confess that at the beginning of my career, I practiced the same. After a while, I learned it the hard way. This is never a good practice; truncating the log file is not an option. The reason why I rate this as the worst practice: this one mistake can make the database impossible to recover. When someone truncates the log file without taking a backup, there is no chance of recovery.
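Instead of truncating, the safe way to keep the log under control in the FULL recovery model is to back it up regularly so the space inside it can be reused. A minimal sketch, where the database name and backup path are placeholders:

-- Regular log backups mark the inactive portion of the log as reusable,
-- so the file stops growing without breaking the recovery chain.
BACKUP LOG MyDB
TO DISK = 'G:\Backup\MyDB_log.trn';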

Here are a couple of articles which I have written on how to prevent the log file from growing too big.

How to Stop Growing Log File Too Big

Log File Growing for Model Database – model Database Log File Grew Too Big

master Database Log File Grew Too Big

I shrink my database daily to regain the space.

This is one of the most popular worst practices. I have seen administrators shrink the database at the end of the day to regain space, only to lose it again the very next day. Shrinking is a VERY BAD operation. It increases fragmentation, reduces performance, and wastes resources. I strongly advise against it.
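If you want to see the damage for yourself, compare index fragmentation before and after a shrink. A minimal sketch of such a check (run in the database in question; this is an illustration, not a tuned production query):

-- Report average fragmentation for every index in the current database.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
INNER JOIN sys.indexes AS i
        ON i.object_id = ips.object_id
       AND i.index_id = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;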

Here are a few articles I have written earlier on this subject.

Shrinking Database is Bad – Increases Fragmentation – Reduces Performance

SHRINKDATABASE For Every Database in the SQL Server

Shrinking NDF and MDF Files – Readers’ Opinion

SHRINKFILE and TRUNCATE Log File in SQL Server 2008

Clustered Index makes the table sort every time. I do not have a clustered index on any table.

For an OLTP system, indexes are very important, and the clustered index is the most important index (in my opinion). A clustered index imposes order on the table and removes the ‘forwarding records’ problem from the database. Personally, I consider the performance of tables without a clustered index to be unacceptable. In an OLTP system, I always recommend that every table have a clustered index.

Here is a quick script that can help in identifying tables without a clustered index in the database.
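As a minimal sketch of such a check (the linked articles below contain the full scripts): every heap has a row in sys.indexes with index_id = 0, so listing tables without a clustered index can be as simple as this:

-- List user tables that are heaps, i.e. have no clustered index.
SELECT s.name AS schema_name,
       t.name AS table_name
FROM sys.tables AS t
INNER JOIN sys.schemas AS s ON s.schema_id = t.schema_id
INNER JOIN sys.indexes AS i ON i.object_id = t.object_id
WHERE i.index_id = 0
ORDER BY s.name, t.name;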

Find Table without Clustered Index – Find Table with no Primary Key

Clustered Index on Separate Drive From Table Location

Observation – Effect of Clustered Index over Nonclustered Index

TempDB is not important, so I will keep it on my slow drive.

Personally, I have tremendous respect for TempDB. Even though it is recreated every time the server restarts, it is the single most important database, shared among all the other databases in the system. This database is used for sorting, temporary objects, triggers, row versioning, and other operations. Keeping it on a slow drive is not a solution; in reality, it will just create many performance-related problems across the whole system. If your TempDB is becoming full, move it to another drive.
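Moving TempDB is a matter of pointing its files at the new location and restarting the SQL Server service. A minimal sketch, assuming the default logical file names tempdev and templog (the target paths are placeholders):

-- The new locations take effect only after the SQL Server service is restarted.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'F:\TempDB\tempdb.mdf');

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'F:\TempDB\templog.ldf');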

Here are a few blog posts I have written on TempDB.

T-SQL Script to Find Details About TempDB

TempDB is Full. Move TempDB from one drive to another drive

Reducing Page Contention on TempDB

Improvements in TempDB

TempDB Restrictions – Temp Database Restrictions

Ideal TempDBFileGrowth Value

I am confused between Full, Differential, and Log Backup

Inability to understand the proper recovery model is another worst practice. I have seen people restore many differential backups while restoring a database, when only the latest differential backup is needed. I quite often see log backup intervals so long that they exceed the differential backup interval. There are so many factors which can lead to disaster and data loss, leading people to look for a new job in a new town at times. If you are confused about what a tail-log backup is, then stop and learn from Books Online before implementing your backup strategy. Even if you are not responsible for implementing the backup strategy, I would still suggest you read up on how to carry out proper backups, as you never know when it will land heavily on your job card!
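For a bare-bones picture of how the three backup types fit together, here is a minimal sketch; the database name and paths are placeholders, and a real strategy also needs schedules, verification, and tail-log handling:

-- Take the backups: one full, then differentials, then frequent log backups.
BACKUP DATABASE MyDB TO DISK = 'G:\Backup\MyDB_full.bak';
BACKUP DATABASE MyDB TO DISK = 'G:\Backup\MyDB_diff.bak' WITH DIFFERENTIAL;
BACKUP LOG MyDB TO DISK = 'G:\Backup\MyDB_log.trn';

-- Restore: the full and only the LATEST differential WITH NORECOVERY,
-- then the log backups in sequence, recovering on the last one.
RESTORE DATABASE MyDB FROM DISK = 'G:\Backup\MyDB_full.bak' WITH NORECOVERY;
RESTORE DATABASE MyDB FROM DISK = 'G:\Backup\MyDB_diff.bak' WITH NORECOVERY;
RESTORE LOG MyDB FROM DISK = 'G:\Backup\MyDB_log.trn' WITH RECOVERY;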


Here are a few interesting write-ups on this subject on this blog.

Backup Timeline and Understanding of Database Restore Process in Full Recovery Model

Restore Sequence and Understanding NORECOVERY and RECOVERY

Mirrored Backup and Restore and Split File Backup

Restore Database Without or With Backup – Everything About Restore and Backup

Restore Database Backup using SQL Script (T-SQL)

There are many more practices I could write about, but I think these five are the top worst practices. Feel free to post your opinions and suggestions.

Reference: Pinal Dave (https://blog.sqlauthority.com)


19 Comments

  • Here’s another one:

    Deleting your current full backup before the new one is taken. If the new one fails, you have nowhere to go.

  • And another:

    Placing your backups on the same drive as the mdf file, or not moving the backups to a secondary drive that is also backed up by some other means.

  • Avinash Lewis
    July 12, 2011 12:49 pm

    Liked your comment about “a new job in a new town” – I have personally seen many doing this…!

  • Hi Pinal,

    I am getting an error while loading data from an Excel sheet into SQL Server, i.e.:
    OLE DB provider ‘Microsoft.JET.OLEDB.4.0’ cannot be used for distributed queries because the provider is configured to run in single-threaded apartment mode.
    Can you please tell me the solution?

  • Mark S. Rasmussen
    July 12, 2011 11:52 pm

    Note that while a clustered index will force the order of the physical records on disk, the only way your query result order is guaranteed is by using an ORDER BY clause. Certain scanning techniques may/will cause your clustered index query to return out of order – unless you have an ORDER BY clause.

  • robert matthew cook
    July 13, 2011 8:22 am

    Nice list, Dave. Quick and to the point, and it addresses some of the more serious and dangerous mistakes. Thanks for posting.

  • Peter Szegedi
    July 13, 2011 9:06 pm

    A bit hard to believe the last one. Are there really DBAs out there who can’t tell the difference between backup types? o.O

  • Love the one about the database shrinking (guilty!),
    but I am going to stop doing so.

  • Dirk Hondong
    July 14, 2011 6:43 pm

    Hi Pinal,
    It was already mentioned: placing the backup files on the same drive as the data file(s).

    Another bad practice is to place a second instance on a production server without first checking the workload to see if the server is able to handle it and, even worse, leaving the memory usage at the default settings for both instances.
    I would try to avoid multiple instances altogether, but sometimes you can’t.

    Regards
    Dirk

  • Hi Pinal,

    I have a question. I have just started using SQL Server 2008 R2, and I ran into this difficulty.

    There is a table with a column named “Item” in which I have stored varchar data. One of the items is “Watch Video-How to take weight on weight scale?”. Now, if I want to retrieve all items starting with “Watch Video”, I have to use the query:

    SELECT * FROM Table WHERE Item LIKE 'Watch Video%';

    and I get all items which start with “Watch Video”, but when I used

    SELECT * FROM Table WHERE Item = 'Watch Video%';

    it returned nothing, meaning no records were returned.

    So, my question is: what is the difference between the LIKE and = operators?

    Thank you,
    Dev

  • “Like” means “like this pattern”.

    = means “equal to this exact value”, but you have included a wildcard in there, so the % is treated as a literal character and no stored value is equal to the string 'Watch Video%'.

    Use LIKE when doing pattern-based string comparisons, as in the sketch below.
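
    A tiny illustration of the difference (the temporary table and data here are made up for the example):

    -- Hypothetical example data:
    CREATE TABLE #Items (Item varchar(100));
    INSERT INTO #Items VALUES ('Watch Video-How to take weight on weight scale?');

    -- LIKE treats % as a wildcard, so this matches the row:
    SELECT * FROM #Items WHERE Item LIKE 'Watch Video%';

    -- = compares the whole string literally (% is just another character), so this matches nothing:
    SELECT * FROM #Items WHERE Item = 'Watch Video%';

    DROP TABLE #Items;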

  • Very useful site in the form of blogs…

  • Shirish Kulkarni
    August 4, 2011 11:06 am

    Hello,

    My problem is that I have one table, CSZMST, in SQL Server 2008 R2, which stores 9 to 10 lakh (roughly a million) records of ZIP, State and Area Code.

    I am importing bulk customer records from a CSV file into my database, and for each record I want to check whether the ZIP value is present in that table of about a million records.

    For performance and memory utilization reasons I chose to write the table to an XML file and use LINQ to search for the value. Please suggest better options to achieve this goal. My code is:

    Imports System.Data
    Imports System.Data.SqlClient
    Imports System.Linq
    Imports System.Xml.Linq

    Public Class ClsZIPLayer
        Dim strFileName As String = String.Empty

        ' Dumps the CSZ_MST table to an XML file that can later be searched with LINQ to XML.
        Public Sub CreateXMLForCSZ()
            Dim oDataSet As DataSet = Nothing
            Dim oDataAdapter As SqlDataAdapter = Nothing
            Try
                Console.WriteLine(DateTime.Now)
                oDataAdapter = New SqlDataAdapter("SELECT DISTINCT REPLACE(ZIP, ' ', '') ZIP_NEW, ZIP FROM CSZ_MST", "Data Source=testServer;Initial Catalog=TestDataBase;Integrated Security=True")
                oDataSet = New DataSet()
                oDataAdapter.Fill(oDataSet, "Csz_mst")
                ' Note: capital MM is the month and HH the 24-hour clock; lowercase mm would be minutes.
                strFileName = "CSZ" + DateTime.Now.ToString("yyyyMMdd HHmmss")
                strFileName = System.IO.Path.Combine("C:\Phone Tree\DemoXML", strFileName + ".XML")
                oDataSet.WriteXml(strFileName)
                Console.WriteLine(DateTime.Now)
            Catch ex As Exception
                Console.WriteLine(ex.ToString())
            Finally
                If Not IsNothing(oDataSet) Then
                    oDataSet.Dispose()
                    oDataSet = Nothing
                End If
                If Not IsNothing(oDataAdapter) Then
                    oDataAdapter.Dispose()
                    oDataAdapter = Nothing
                End If
            End Try
        End Sub

        ' Looks up a ZIP value in the XML file written by CreateXMLForCSZ.
        ' The element names assume the default DataSet serialization:
        ' one <Csz_mst> element per row with <ZIP_NEW> and <ZIP> children.
        Public Function GetZipValue(ByVal strValueToSearch As String) As String
            Dim strResult As String = String.Empty
            Dim doc = XDocument.Load(strFileName)
            Dim res = From z In doc.Root.Elements("Csz_mst")
                      Where z.Element("ZIP_NEW").Value = strValueToSearch
                      Select z.Element("ZIP").Value
            For Each Val As String In res
                strResult = Val
            Next
            Return strResult
        End Function

    End Class

    Thanks in advance

  • Satish Sharma (@satishdixit)
    August 8, 2011 11:49 am

    Awesome article

  • My Cashing table is increasing significantly in size; is there any suggestion for this?

  • Wade (@newclique)
    September 15, 2011 12:39 am

    I would disagree with the daily shrink tip unless you can conclusively demonstrate hard data to support your conclusions. A very large data file is, by default, fragmented. No, it may not appear fragmented on the disk but the data inside of it most certainly is. (Yes, I know about table fill percentages, etc. but data still gets fragmented. If de-fragmentation was a performance panacea, vendors would build it into their application logic. Give the hardware a little credit.)

    Also, the operating system and SQL Server cache frequently used data so unless the older data is being read as frequently as the newer data, I doubt there is much of a disk hit.

    Next, what about backup time? If a 10 GB file can be shrunk each day to 50% the size, backups are sure to take less time as well as less space in the archive. The archive area is most likely a SAN or other shared space that cannot afford to run out of space, either, so trimming out useless data can be very beneficial to the organization as a whole.

    I’ve also seen situations such as a network monitoring product that produces thousands of transactions per minute. After just a day or two, the transaction log can grow to a very large size, and the database size grows quickly as well. There’s a nightly script that purges records more than 90 days old, but purging simply marks records as deleted and does not shrink the files. I suspect that because the server is so busy it cannot get around to doing any sort of autoshrink, and this eventually (after a few weeks) causes the server to run out of disk space. So, purging and shrinking is a necessity.

    CPU cycles are rarely in such short supply that shrinking to conserve disk space is a burden. If the nightly space gains are significant, I see no reason why this would be a ‘worst’ practice. With your other ones, though, I agree. Those are some fantastically bad ideas!

    Thanks for the article. Very informative.

    • 1) If you click on the link regarding Shrinking Database above, you will see my proof.
      2) Try the backup part yourself – the final size will be almost the same.

      It is indeed a worst practice.

  • I have an MDF file of 30 GB and the LDF is 74 GB!!!! The database is in the full recovery model. I have scheduled a complete backup every day at 22:00 and a backup of the transaction log every hour from 08:00 to 21:00. The LDF file grows by more than 1 GB every day.

    Looking for a solution, I’ve found the following script:

    USE mydatabase;
    GO

    ALTER DATABASE mydatabase
    SET RECOVERY SIMPLE;
    GO

    DBCC SHRINKFILE (mydatabase_Log, 1);
    GO

    ALTER DATABASE mydatabase
    SET RECOVERY FULL;
    GO

    But it doesn’t work. I obtain the message: “Cannot shrink log file %d (%s) because of minimum log space required.”
    I still have a lot of free space on the disk that contains the LDF file, but I need a solution that works.

    Thanks a lot

  • I had forgotten to mention that I have tried
    DBCC SHRINKFILE (mydatabase_Log, 1)
    with 4096, 2048… checkpoints, and with Management Studio… I’m really desperate.

