SQL SERVER – Automated Index Defragmentation Script

Index Defragmentation is one of the key processes for significantly improving the performance of any database. Index fragmentation occurs as transactions modify a table; it typically builds up through insert, update, and delete operations. Fragmented data can force unnecessary reads, reducing the performance of queries against heavily fragmented tables.
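
If you want to see how fragmented your own indexes are before automating anything, the sys.dm_db_index_physical_stats DMV (SQL Server 2005 and later) reports fragmentation per index. A minimal sketch; run it while connected to the database you want to inspect, and treat the 10% / 100-page thresholds as illustrative rather than hard rules:

-- List indexes in the current database, most fragmented first.
SELECT  OBJECT_NAME(ps.object_id)           AS table_name,
        i.name                              AS index_name,
        ps.avg_fragmentation_in_percent,
        ps.page_count
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN    sys.indexes AS i
        ON  i.object_id = ps.object_id
        AND i.index_id  = ps.index_id
WHERE   ps.avg_fragmentation_in_percent > 10
  AND   ps.page_count > 100   -- very small indexes rarely benefit from defragmentation
ORDER BY ps.avg_fragmentation_in_percent DESC;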

I have often been asked to share my personal Index Defragmentation Script. Well, I use the Automated Index Defragmentation Script created by my friend – a SQL expert – Michelle Ufford (a.k.a. SQLFool). Michelle is a SQL Server Developer, DBA, a humble blogger, and an absolute geek! She is also the President of the Eastern Iowa PASS Chapter, known as the I380 Corridor Professional Association of SQL Server. Currently, she works with large, high-volume, high-performance SQL Server databases at GoDaddy.com. Her nickname might be SQLFool, but honestly speaking, she is one of the best gurus of SQL. You can read her blog here.

Michelle has written an excellent article about her Automated Index Defragmentation Script. I recommend this wonderful write-up to all database developers who are searching for a good solution to improve database performance. Read her article, Automated Index Defragmentation Script, to gain a better understanding of how to improve database performance.

Here, I would like to mention an interesting question taken from the above-mentioned article, which clearly shows how powerful this Automated Index Defragmentation Script is.

How long will this Automated Index Defragmentation Script take to run?
Well, it depends. I don’t necessarily recommend running it without specifying a database; at least, not unmonitored. You *can* do that, but it could take a while. For example, running sys.dm_db_index_physical_stats for all databases and tables, totaling 2 TB, took me 4.5 hours; that doesn’t even count the actual defrags.
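
The scope passed to sys.dm_db_index_physical_stats is what makes the difference Michelle describes. A quick sketch of the two extremes; the database and table names below are placeholders, not anything from her script:

-- Whole server: every index in every database (the slow, multi-hour case above).
SELECT * FROM sys.dm_db_index_physical_stats(NULL, NULL, NULL, NULL, 'LIMITED');

-- Narrow scope: a single table in a single database.
SELECT * FROM sys.dm_db_index_physical_stats(
    DB_ID('YourDatabase'),                            -- placeholder database name
    OBJECT_ID('YourDatabase.dbo.YourTable'),          -- placeholder table name
    NULL, NULL, 'LIMITED');

The 'LIMITED' mode reads the least data of the three scan modes; 'SAMPLED' and 'DETAILED' read progressively more.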

You can download the script here.

Please feel free to contact Michelle or me if you have any questions or doubts regarding this script.

Reference: Pinal Dave (https://blog.sqlauthority.com)

16 Comments

  • Imran Mohammed
    April 4, 2009 12:55 pm

    Hello,

    The script is very cleverly written.

    I looked at this script from a coding perspective and checked the logic in depth… I assume it is close to the best it can be…

    I do have a few suggestions, which can be ignored if they do not make sense.

    1. To check for existing tables, it selects Object_ID from systables. To drop procedures, we did not use sys.procedures but took the long path, OBJECTPROPERTY(OBJECT_ID(proc_name))…. Any specific reason? Just curious to know.

    2. Online rebuild is good. But what if I do not want to use this feature because it would take a long time and I do not want to spend that much time, or for some other reason? Even though I have Enterprise Edition, I may still want to rebuild my indexes offline. The stored procedure accepts a parameter @editionCheck (1 = rebuild online, 0 = rebuild offline), but even if I pass @editionCheck = 0, if I have Enterprise Edition the script automatically updates @editionCheck from 0 to 1. (I wanted it to stay 0, but the script forcefully updated it to 1.)

    3. We have two while loops: one to store object names from the database, the other to execute SQL statements using those object names.

    a) Is there a specific reason why we defrag the index with the highest fragmentation first? If there is no reason, then why do we want to use SELECT TOP 1? From the performance side I don’t think this is good. (I know very little about performance tuning, please correct me if I am wrong.)

    b) Since we are using TOP, we have to use a DELETE statement, which is again a logged operation, increasing overhead… Why not just use ID = @Var and increment @Var to @Var + 1, and later drop the #temp table?

    I agree there may be only a few databases, but there can be many tables in a database and many more tables throughout the server.

    Note :
    I have written a script to rebuild indexes, but it is not even comparable with this one (this script is way better than mine).

    ~IM.

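A minimal sketch of the kind of edition check being discussed in point 2 above (this is not necessarily the exact logic in Michelle's script): Enterprise, Developer, and Evaluation editions all report EngineEdition = 3, and those are the editions that support online index rebuilds.

DECLARE @editionCheck BIT;

-- SERVERPROPERTY('EngineEdition') = 3 means an Enterprise-class engine,
-- so ALTER INDEX ... REBUILD WITH (ONLINE = ON) is available.
IF CAST(SERVERPROPERTY('EngineEdition') AS INT) = 3
    SET @editionCheck = 1;
ELSE
    SET @editionCheck = 0;

SELECT @editionCheck AS online_rebuild_supported;
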
  • Michelle Ufford
    April 4, 2009 6:30 pm

    Wow, thank you, Pinal! I’m glad you’ve found the script useful. :)

  • Michelle Ufford
    April 4, 2009 6:39 pm

    Hi Imran,

    You ask some good questions! Let me see if I can help explain:

    1. This would require users to be on SQL 2005 SP2. The informal poll I did showed that this would exclude many people, so I went the long way instead.

    2. You need to pass @onlineRebuild = 0 to the stored procedure to turn off online rebuilds. @editionCheck is internal to the stored procedure and does not need to be configured.

    3.a. Yes, there’s a reason for it. I list indexes in order of most fragmented because, if you’re just returning the list and building it manually, you probably want to work on your most fragmented first. Also, I have occasionally had to kill my defrag script during execution; I’d rather know that it had worked on the most heavily fragmented tables and left the less fragmented ones.

    3.b. I could look at doing that, but relatively speaking, the amount of deleting, and the frequency of it, is pretty small. I’ve never had any performance issues with it.

    Thanks for your questions! I hope that helps.

    Regards,

    Michelle

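For readers following points 3.a and 3.b, the iteration pattern under discussion looks roughly like this. It is a simplified sketch with made-up table and column names, not the actual script (the real script chooses between REBUILD and REORGANIZE; REORGANIZE keeps this sketch short):

-- #work is assumed to already hold the fragmentation list gathered earlier.
CREATE TABLE #work
(
    schema_name   SYSNAME,
    table_name    SYSNAME,
    index_name    SYSNAME,
    fragmentation FLOAT
);

DECLARE @schemaName SYSNAME,
        @tableName  SYSNAME,
        @indexName  SYSNAME,
        @sql        NVARCHAR(4000);

WHILE EXISTS (SELECT 1 FROM #work)
BEGIN
    -- Most fragmented index first, so an interrupted run has already
    -- handled the worst offenders (point 3.a).
    SELECT TOP 1
           @schemaName = schema_name,
           @tableName  = table_name,
           @indexName  = index_name
    FROM   #work
    ORDER BY fragmentation DESC;

    SET @sql = N'ALTER INDEX ' + QUOTENAME(@indexName)
             + N' ON ' + QUOTENAME(@schemaName) + N'.' + QUOTENAME(@tableName)
             + N' REORGANIZE;';
    EXEC sp_executesql @sql;

    -- Remove the row just processed (the logged DELETE asked about in 3.b).
    DELETE FROM #work
    WHERE  schema_name = @schemaName
      AND  table_name  = @tableName
      AND  index_name  = @indexName;
END;

DROP TABLE #work;

Imran's alternative in 3.b would assign a sequential ID when the list is built (sorted by fragmentation) and walk it with a counter instead of deleting rows.
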
  • Imran Mohammed
    April 4, 2009 9:29 pm

    @ Michelle.

    Thanks for your reply. Yes, your explanations make sense.

    Sorry about my point no. 2. You are correct. I guess I did not check the script properly; I did not see the IF and SET statements you used later in the script. Sorry for the confusion.

    Thanks for the script Michelle.

    ~ IM.

  • Hi Pinal

    Can you tell me which index is better from a performance point of view, clustered or non-clustered?

    Thanks
    Vivek

  • Hi Pinal,

    I have a large table that receives inserts and updates every 15 minutes (about 2,000 records). From your point of view, is it OK if I run the automated index defragmentation on this large table on a daily basis, or should I run it every 30 minutes?

    My aim is for the index defragmentation to finish before the next insert or update, i.e., within the 15-minute window.

    thanks
    beh

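One way to answer a question like this for a specific table is to measure how quickly fragmentation actually builds up between loads. A small sketch, with placeholder database and table names:

-- Run this just before and just after a 15-minute load cycle and compare:
-- if fragmentation climbs only slowly, a daily defrag is usually enough.
SELECT  i.name                          AS index_name,
        ps.avg_fragmentation_in_percent,
        ps.page_count
FROM    sys.dm_db_index_physical_stats(
            DB_ID('YourDatabase'),                           -- placeholder
            OBJECT_ID('YourDatabase.dbo.YourLargeTable'),    -- placeholder
            NULL, NULL, 'LIMITED') AS ps
JOIN    sys.indexes AS i
        ON i.object_id = ps.object_id AND i.index_id = ps.index_id;
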
  • Hi Pinal

    I am a little confused about which is faster, CASE or INNER JOIN?

    I have a large table with around 75 lakh (7.5 million) records; there are three integer columns, and I use CASE to output them as varchar.

    Should I use INNER JOIN or CASE? There is a base table for nd_code and status that holds the descriptions the CASE outputs.

    I have mentioned the query below:

    SELECT autoid, cycle, mdn, ACC, custcode, Booking_Date, barcode, Name, Place, remarks, associate, area, catagory, sheet_no, initial, relation, ph_no,
    delivery_date, status_date,
    CASE Nd_Code WHEN 1 THEN 'Shifted' WHEN 2 THEN 'Incomplete Address' WHEN 3 THEN 'Wrong Address' WHEN 4 THEN 'No such Person' WHEN 5
    THEN 'Refused To Accept' WHEN 6 THEN 'Premises Locked' WHEN 7 THEN 'Consignee Not Available' WHEN 8 THEN 'No Service' WHEN 9 THEN 'Missroute'
    WHEN 10 THEN 'Under Construction' WHEN 11 THEN 'Multi Address' WHEN 12 THEN 'No Response' ELSE ' ' END AS Nd_code,
    CASE status WHEN 2 THEN 'New' WHEN 3 THEN 'Distributed' WHEN 4 THEN 'Telecalling' WHEN 5 THEN 'Redirect' WHEN 6 THEN 'Missroute' WHEN 7
    THEN 'Undelivered' WHEN 8 THEN 'Delivered' WHEN 9 THEN 'Dlv' WHEN 10 THEN 'Insert' WHEN 16 THEN 'Feeded ' WHEN 11 THEN 'Rebook' WHEN
    12 THEN 'RtoRec' WHEN 13 THEN 'Booked' WHEN 14 THEN 'PodHandedOver' WHEN 15 THEN 'RtoHandedOver' ELSE ' ' END AS Status,
    initial_status, address1, address2, address3, City, PinCode, alt_no, state, user_name, user_date, calling_status, scan_user, scan_date,
    AGENCYCODE, DISP_Date, DUE_Date, change_address, category, billing_date, amount, REMARK, CALLING_DATE, CRITICAL_CUST, nname,
    naddress1, naddress2, naddress3, nplace, ncity, npincode, nstate, client, contact_no1, contact_no2, contact_no3, ref_no, cnt_associate, cn_total,
    last_new_address, urgent, MTNLREM, MOBREM
    FROM dbo.RELIANCE

    Regards

    Jayant

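For readers weighing the two approaches in the query above, the JOIN version would move the code descriptions into small lookup tables. A sketch with hypothetical lookup tables (dbo.NdCodeLookup and dbo.StatusLookup, which are not part of Jayant's schema) and an abbreviated column list:

SELECT  r.autoid, r.cycle, r.mdn,                 -- remaining columns omitted for brevity
        COALESCE(n.description, ' ') AS Nd_code,
        COALESCE(s.description, ' ') AS Status
FROM    dbo.RELIANCE AS r
LEFT JOIN dbo.NdCodeLookup AS n ON n.nd_code     = r.Nd_Code
LEFT JOIN dbo.StatusLookup AS s ON s.status_code = r.status;

LEFT JOIN (rather than INNER JOIN) plus COALESCE keeps rows with unknown codes, matching the ELSE ' ' branch of the CASE. For a dozen fixed code values, CASE and a join to tiny lookup tables usually perform about the same; the dominant cost is reading the 7.5 million rows either way.
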
  • SQLFool is a really nice blog to read; there are many good articles there.
    One of them, on filtered indexes in SQL Server 2008, is a very nice article.

  • Can I use it in a SQL2K environment?

  • I get the below message when I run the script. Could you please help?
    Msg 102, Level 15, State 1, Procedure dba_indexDefrag_sp, Line 226
    Incorrect syntax near ‘OBJECT_ID’.

  • Hi,

    Great Script…

    I have a few questions about this script…
    In this script,
    it checks whether @allowPageLocks = 0 (a count of indexes that have ALLOW_PAGE_LOCKS = 0), and whether the object has one or more LOB objects; if so, even though the index fragmentation value exceeds the rebuild threshold, it does not do a rebuild but does a reorganization instead.

    I am a little confused about this and need to know the reason behind this choice when we find one or more LOB objects and ALLOW_PAGE_LOCKS = 0.

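Without speaking for the script's exact logic, two SQL Server restrictions usually drive a decision like the one described above: ALTER INDEX ... REBUILD WITH (ONLINE = ON) is not allowed on indexes that contain LOB columns (in SQL Server 2005/2008), and ALTER INDEX ... REORGANIZE is not allowed when the index has ALLOW_PAGE_LOCKS set to OFF. A small sketch of how such a decision could look; the variables and their values are made up for illustration:

DECLARE @containsLOB     BIT,
        @allowPageLocks  BIT;

SET @containsLOB    = 1;   -- example: the index includes LOB columns
SET @allowPageLocks = 1;   -- example: ALLOW_PAGE_LOCKS is ON for the index

IF @allowPageLocks = 0
    PRINT 'Must REBUILD: REORGANIZE fails when ALLOW_PAGE_LOCKS is OFF.';
ELSE IF @containsLOB = 1
    PRINT 'REORGANIZE (or rebuild offline): an ONLINE rebuild is not allowed on LOB indexes here.';
ELSE
    PRINT 'REBUILD WITH (ONLINE = ON) is an option.';
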
  • Good script

  • The script failed when I executed it last night against a test database. Can it be modified to skip indexes it cannot lock?

  • Ironically, sqlfool.com gives the error message “Error establishing a database connection” currently.

