SQL SERVER – ReIndexing Database Tables and Update Statistics on Tables

SQL Server 2005 uses the ALTER INDEX syntax to reindex a database. SQL Server 2005 still supports DBCC DBREINDEX, but it is deprecated and will be removed in a future version. Let us learn how to reindex database tables and update statistics on tables.


When data modification operations (INSERT, UPDATE, or DELETE statements) run against a table, fragmentation can occur. The DBCC DBREINDEX statement can be used to rebuild all the indexes on all the tables in a database. DBCC DBREINDEX is more efficient than dropping and recreating the indexes.

Executing the stored procedure sp_updatestats at the end of the reindexing process ensures that the statistics of the database are updated.

Method 1: My Preference

USE MyDatabase
GO
EXEC sp_MSforeachtable @command1="print '?' DBCC DBREINDEX ('?', '', 80)"
GO
EXEC sp_updatestats
GO
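
Since DBCC DBREINDEX is deprecated, the same maintenance can be expressed with the supported ALTER INDEX syntax. A minimal sketch, assuming the same MyDatabase and an 80 fill factor (sp_MSforeachtable is an undocumented procedure, so treat this as illustrative rather than guaranteed):

```sql
USE MyDatabase
GO
-- Rebuild every index on every table via the supported ALTER INDEX syntax
EXEC sp_MSforeachtable @command1 = "PRINT '?' ALTER INDEX ALL ON ? REBUILD WITH (FILLFACTOR = 80)"
GO
EXEC sp_updatestats
GO
```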

Method 2:

USE MyDatabase
GO
CREATE PROCEDURE spUtil_ReIndexDatabase_UpdateStats
AS
DECLARE @MyTable VARCHAR(255)
DECLARE myCursor CURSOR FOR
SELECT '[' + TABLE_SCHEMA + '].[' + TABLE_NAME + ']'
FROM information_schema.tables
WHERE table_type = 'base table'
OPEN myCursor
FETCH NEXT FROM myCursor INTO @MyTable
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT 'Reindexing Table: ' + @MyTable
    DBCC DBREINDEX(@MyTable, '', 80)
    FETCH NEXT FROM myCursor INTO @MyTable
END
CLOSE myCursor
DEALLOCATE myCursor
EXEC sp_updatestats
GO

Though this is an old trick, it works pretty well on most systems and gives very good results. I strongly encourage you to try it out if you have performance issues with SQL Server.

Reference: Pinal Dave (http://www.SQLAuthority.com)


111 Comments

  • Robert Carnegie
    October 5, 2009 8:46 pm

    Hey Sylvain: If your database isn’t broken (test with DBCC CHECKDB), maybe it just got bigger and fell over a cliff of performance, for instance in how the tempdb database is used in this process. Or maybe tempdb is large and inefficient now and you need to fix that (be very careful, but restarting SQL Server may be all that you need). Or maybe someone added some statistics or indexes that you don’t know about. You could consider whether you need all your statistics, since they don’t come for free. I assume that we are supposed to choose one or more recommendations from the Tuning Advisor tool, not -all- of the recommendations.

    I also wonder why the time taken is – apart from the step up – the same every day. Perhaps you are repeating the same task every day on all of your data? You could change your database design to avoid that. For instance, use a partitioned view or partitioned table. Then you can have old data that you don’t run UPDATE STATISTICS on so often, only the new data. That may not be a good solution for you, but I want to say that you have many different possible remedies.

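
The idea of updating statistics only where data actually changes can be sketched as follows. The table names here are hypothetical, purely for illustration:

```sql
-- Update statistics frequently only on the hot, recently modified table
UPDATE STATISTICS dbo.Orders_Current WITH FULLSCAN;

-- The archived data is left alone; its statistics rarely go stale,
-- so this can be run far less often (hypothetical table name)
-- UPDATE STATISTICS dbo.Orders_Archive;
```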
  • Ross McLoughlin
    October 14, 2009 4:22 am

    Thank you so much. Worked like a charm. My database is zipping along now!

  • Hi,

    I am working on a database with more than 80 million records. We are using SQL Server 2005 and have applied full-text indexes to enable searching through the database.

    The problem we are facing is that when new records come into the database, every week we have to reindex the database and also recreate the full-text indexes. We have set up the full-text index catalog to update automatically when new records come in, but reindexing the tables of the database is still too expensive for us. Is there a way around this so that we can avoid reindexing the tables?

    I do not know how to make a new post with a question here on this website. I found this post related to my problem about reindexing, so I thought it would be good to post it here. Hope to get some solution.

    Regards,
    Qasim

  • Fact is, the more often you reindex, the less fragmented your indexes will be, and the less fragmented they are, the faster performance you’ll see. I reindex nightly and can afford a little downtime for my bigger tables in the middle of the night. This keeps my users happy in the daytime.

    If an outage is an issue, then seriously look into the SQL Server Enterprise license; it allows online reindexing. It’s pricey, though, at around $20k/proc.

  • I just wondered whether there is more of a performance or space advantage in creating the indexes from scratch versus doing a re-index.

    Because if I’m doing a database re-index, the logs can grow fast, and some filegroups too… so

    what is the advantage of dropping all the indexes and recreating them all again, versus doing a DBCC DBREINDEX?

    I have to do DB maintenance, but all the databases are HUGE! They are between 600 GB and 1.5 TB.

    Your answers can be very helpful.

    Sending regards…

  • DBREINDEX is like running T-SQL statements dropping indexes and then creating them… so I don’t see any advantage.

    Question to Dave… why is there a need to run sp_updatestats when DBREINDEX updates the stats?

  • Thank you sirji, for your help, as always

  • Hi,

    Thanks for the above code; re-indexing works great.

    We are using .NET 1.1 and SQL Server 2005, and managing 40-50 different websites. But we are using only one database serving records for all the sites.

    The issue is that after we shifted our queries from code to stored procedures, we are getting transaction deadlocks, and when our websites hit heavy traffic, all the sites go down. Could using one database for more than 50 sites cause this issue?

    Also, when I checked sp_who on SQL Server, we are getting 750-800 concurrent processes. Do you have any suggestions? Why are we getting transaction deadlocks and so many sp_who processes on SQL Server?

    Appreciate a quick reply.
    Thanks in advance

  • Hi Pinal,

    I read your posts regularly.

    As written in the first lines regarding data modification operations (INSERT, UPDATE, or DELETE statements), it’s better than dropping and recreating the index. Agreed.

    Now my question is:
    what I usually do is drop the index, perform the insert or update, and create the index again.

    As you described: insert or update, then run the script. But with an index on the table, the insert or update will be slow.

    Am I explaining it correctly?

    Thanks in advance.

  • Hitesh Raghvani
    February 19, 2010 6:28 pm

    Thanks for the information.
    It has helped me to resolve a daily trouble in a few seconds.

    Great work…

  • Pinal,

    Not sure why you are updating stats after REINDEX, as a reindex does an update of the stats on the index with a full table scan.

    I would be very, very wary of updating database statistics after rebuilding indexes. According to Paul Randal, “Index rebuilds automatically update statistics with a full scan.” [1] He adds, “If you manually update statistics after an index rebuild, it’s possible to end up with less accurate statistics! This can happen if a sampled scan from the manual update overwrites the full scan generated by the index rebuild.” In my experience, Paul is correct. Accordingly, I wouldn’t waste cycles updating statistics after an index rebuild.

    [1] Top Tips for Effective Database Maintenance

  • If you find that your transaction log grows to an unacceptable size when you run DBCC DBREINDEX, you can minimize this growth by switching from the full recovery model to the bulk-logged recovery model before you reindex, and switching back when done. This will significantly reduce the transaction log growth.

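
The recovery-model switch described above can be sketched as follows, assuming a database called MyDatabase that normally runs in the full recovery model:

```sql
-- Take a log backup first, then switch to bulk-logged for the rebuild
ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED;
GO

-- ... run the reindexing here (DBCC DBREINDEX or ALTER INDEX ... REBUILD) ...

-- Switch back when done
ALTER DATABASE MyDatabase SET RECOVERY FULL;
GO
-- Take a fresh log backup afterward so point-in-time recovery is possible again
```

Note that while in bulk-logged mode, point-in-time restore within a log backup containing bulk operations is not possible, which is why the surrounding log backups matter.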
  • Imran Mohammed
    March 19, 2010 7:29 am

    @Milthan

    I agree with you, Updating Statistics after rebuilding indexes is of no use (SQL Server 2005). But Rebuilding Indexes after Performing a Shrink File or Shrink DB Operation is a MUST.

    ~ IM.

  • Phil@financetutor
    May 5, 2010 4:05 pm

    I had been trying to figure out why my queries were slow when I upgraded to SQL Server 2008. Thanks for helping me figure out the issue. It’s working perfectly fine now and faster than ever!

  • thanks

  • Method 2 fails when you use schemas; it’s probably safer to do something like
    SELECT table_schema + '.' + table_name

  • Hi, I ran the Method 1 query, where the DB size is 100 GB. It is taking a huge amount of time, more than 1.5 hours.

    Normally, how much time is required to reindex a single 100 GB database? On what basis does the time vary while running the reindexing query?

  • I have to manage databases of > 1 TB too.
    Unfortunately, the reindex process takes 24 hours, but we can be offline only outside business hours.
    (Some people work Sat + Sun too, and yes, I know about online reindex possibilities.)
    I wonder if it makes sense to rebuild just the “bad ones”,
    while keeping the “good ones”.
    the query

    SELECT * FROM sys.dm_db_index_physical_stats
    (DB_ID(N'mydatabase'), NULL, NULL, NULL, 'DETAILED');

    should deliver the necessary information (avg_fragmentation_in_percent).

    But I think I can’t detect “out of order” pages with that function?

    The disadvantage is that the function has to scan the whole DB once, which is also very time-consuming :(
    In my short test, rebuilding all indexes took less time than analysing them all…
    I have to run more tests to come to a final result…

    What do you think of rebuilding an index based only on its fragmentation value?
    Are there other indicators I should consider as well?
    How to detect “out of order” pages?

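
The selective-rebuild strategy discussed in the comments can be sketched like this. The 30% threshold is a commonly cited rule of thumb, not a hard rule, and 'LIMITED' mode is used because it scans far less than 'DETAILED', which addresses the analysis-time concern above:

```sql
-- List indexes above a fragmentation threshold as candidates for rebuild
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(N'mydatabase'), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30  -- rule-of-thumb threshold
  AND ips.index_id > 0                       -- skip heaps
ORDER BY ips.avg_fragmentation_in_percent DESC;
```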
  • Hello, Thomas again.
    I have found an article from Tom Mills which uses the strategy described above:

