SQL SERVER – SSMS: Transaction Log Shipping Status Report

History has its own way of defining how civilizations thrived. Many cities flourished on riversides, and transporting lumber was one of their key activities. Cities like Seattle and many others went through this boom-and-bust life. The idea was to cut timber upstream and use the natural flow of the river to carry it to the factories downstream. This is a classic and wonderful parallel to how we work with Log Shipping in SQL Server too. This blog is about the Log Shipping Status report.

Ensuring the availability of databases, meeting SLAs and performance tuning are some of the top priorities for today’s database administrators (DBAs). One of the important tasks of a DBA is to monitor the database servers and make sure the application is working fine. The monitoring might involve automatic alerts, running scripts or looking at a dashboard. Even for high availability solutions, we need some kind of monitoring mechanism. One of the traditional high availability solutions is Log Shipping.

As the name suggests, log shipping is based on transaction log backups being shipped from one server to one or more servers on the other side. To understand this, you need to know the basics of transaction log backups. First, log backups can be taken only from a database in the full or bulk-logged recovery model. In the simple recovery model, transaction log backups are not allowed because every checkpoint truncates the transaction log; in the other two recovery models it is the log backup that truncates it. Another basic principle of log shipping is that all log backups form a chain: T1, T2 and T3 must be restored in sequence, and missing any one file causes an error during restore. In log shipping, the backup, copy and restore are done automatically by SQL Agent jobs. Since we can ship to multiple servers, the backup location is a shared folder so that the other servers can pick up a copy of the file and perform the restore. The source server is technically called the primary server, and all the servers at the receiving end are called secondary servers. You would also hear about a monitor server, which is responsible for checking the health of the backup, copy and restore jobs. If the jobs are not running properly, the secondaries fall behind the primary and defeat the purpose of high availability. Based on the thresholds defined, the monitor server can raise alerts so that corrective action can be taken.
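
To make the backup-chain idea concrete, here is a minimal sketch of taking and restoring log backups in sequence. The database name SalesDB and the backup share path are made up for illustration, and in a real log-shipping setup the SQL Agent jobs run these steps for you.

BACKUP LOG SalesDB TO DISK = N'\\BackupShare\SalesDB\SalesDB_T1.trn'
BACKUP LOG SalesDB TO DISK = N'\\BackupShare\SalesDB\SalesDB_T2.trn'

-- On the secondary, the chain must be restored in the same order,
-- keeping the database in the restoring state until the last file.
RESTORE LOG SalesDB FROM DISK = N'\\BackupShare\SalesDB\SalesDB_T1.trn' WITH NORECOVERY
RESTORE LOG SalesDB FROM DISK = N'\\BackupShare\SalesDB\SalesDB_T2.trn' WITH NORECOVERY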

This is the last report in the list under the server node. Based on its name, you might have already guessed that it can be used to “see” the status of log shipping.

The important thing to note about this report is that the data shown in the columns depends on the server where we launch the report. Here is the report when launched from the primary server.

Notice that only the backup-related section is populated. This is because the report doesn’t make a remote connection to check the secondary server’s status. If the report is launched from a secondary server, the output is as below:

The copy- and restore-related information is populated because that information is available on the secondary server.

If we configure a monitor server in log shipping (which I have not done) and launch the report there, we can see information about all three steps (i.e. backup, copy and restore).

The good part about the report is that it shows the alarming pairs in red. To demonstrate, I have configured log shipping for two databases, and for one of them I have disabled the backup, copy and restore jobs so that alerts are raised and we can see the impact on the report.

You may wonder how this information is fetched. It has the simplest possible query behind the scenes.

EXEC sp_help_log_shipping_monitor

As per Books online – “Returns a result set containing status and other information for registered primary and secondary databases on a primary, secondary, or monitor server.”

If you see anything in red, you need to investigate further to find the cause of the delay. What is the most common cause of delay in log shipping you have observed: networking, disk slowness or something else? Please comment and let me know.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Finding Object Dependencies in SSMS – SQL in Sixty Seconds #071

While doing development, we create and drop objects and build new things, so we need to understand the relationships between database objects as we carry out various activities in SQL Server. It is indeed very hard to know all the relationships between the various objects in SQL Server. However, with the help of SQL Server 2014 Management Studio, you can for sure do this task very easily.

Go to the object whose dependencies you want to see and right-click on it.

Now click over the option “View Dependencies”. It will bring up a screen listing various dependencies.
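
If you prefer T-SQL over the dialog, a similar picture can be pulled from the dependency DMFs. This is just a sketch; the object names dbo.Orders and dbo.OrderReport are hypothetical.

-- Objects that reference (depend on) dbo.Orders
SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities(N'dbo.Orders', N'OBJECT')

-- Objects that dbo.OrderReport itself references
SELECT referenced_schema_name, referenced_entity_name
FROM sys.dm_sql_referenced_entities(N'dbo.OrderReport', N'OBJECT')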

I hope this is clear enough. If not, I strongly suggest you watch the following quick video where I explain this concept in extremely simple words.

Action Item

Here are the blog posts I have previously written on SSMS. You can read them over here:

You can subscribe to my YouTube Channel for frequent updates.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Activity – All Blocking Transactions

Working out of India has its own challenges, and I enjoy it here despite the challenges thrown at me. One of the biggest advantages of working with Pluralsight is that I can still get my job done by working from home occasionally, and this is a perk I wish more companies gave their employees. You might be thinking why I mention this; the answer depends on how the previous day went. If it rained heavily, which it does in Bengaluru in July, then the chances are the roads will have a build-up of traffic the next morning. Taking traffic out of your life is never easy, but with technology improvements like maps on the phone, I still manage to find an alternate route to reach my destination. This is what makes life interesting, and exploring new places is always fun.

I just wish SQL Server had some way of achieving the same. Blocking and Locking are fundamental to keeping databases in sync and consistent. This blog is all about Blocking Transactions report from the instance level.

To access the report, get to Server node -> Reports -> Standard Reports -> Activity – All Blocked Transactions.

From this node, if there is no apparent blocking happening in the system at the point the report is run, we will be presented with a “blank” output as shown below.

The ideal situation is to be in this state, but even for a transactional system this will rarely be the case in reality. For a highly transactional system where sessions try to modify or insert data in the same table, SQL Server respects the order in which the requests came and does not allow incompatible locks to coexist at the same time. This behaviour automatically creates a queue, and that is what we call blocking.

This brings us to the next output, where we have multiple transactions running. To show some data in the report from my non-production workload system, I have simulated a blocking scenario using two statements. In this scenario there are two regions to look at: session IDs 52, 53 and 54. From the hierarchy, we know that 52 is blocking both 53 and 54. We can also see there are 2 “# Directly Blocked Transactions” in the system currently, from the top row for SPID 52. If additional transactions try to insert or delete, this will show the complete chain of transactions currently blocked.

We also get to see the type of statement that is waiting in this blocking scenario. In the diagram below we see the two statements involved are – INSERT and DELETE.

The DMVs used to get this information are sys.dm_tran_locks, sys.dm_tran_active_transactions, sys.dm_tran_session_transactions, sys.dm_tran_database_transactions and sys.dm_exec_requests. Along with the above, the report also uses the DMF sys.dm_exec_sql_text to convert the SQL handle to more meaningful text.
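
As a rough sketch (not the exact query behind the report), the same blocking picture can be pulled directly from the DMVs:

SELECT r.session_id          AS blocked_session,
       r.blocking_session_id AS blocking_session,
       r.wait_type,
       r.wait_time,
       t.text                AS blocked_statement
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.blocking_session_id <> 0
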
If that was not enough, we can also head to Activity Monitor and expand the Processes tab to get similar information. It is evident that the head of the blocking chain is 52, whereas 53 and 54 are waiting on 52. It is completely up to us to decide what to do next; we can kill process 52 and the other transactions will go through.

As a small note, the Task States can give us vital information of what is happening in the system. Some of the states are worth mentioning:

  • Sleeping – the SPID is waiting for a command; nothing is currently executing.
  • Running – the SPID is currently running.
  • Suspended – the SPID is waiting for locks or a latch.
  • Rollback – the connection is in the rollback state of a transaction.

You can use the state information to take an informed decision of killing a process if required.

At this moment, yet another blog post worth a mention is the Blocked Process Threshold post. This option makes sure a profiler event is raised when a request is blocked beyond a predefined period of time. Do take a look at that too if you are interested in that behaviour.
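
If you want to try that option, a minimal sketch is below; the 20-second value is just an example threshold.

-- 'blocked process threshold (s)' is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'blocked process threshold (s)', 20
RECONFIGURE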

The reports series is catching up, and the learnings are multi-fold for me personally. In subsequent posts I will get into the other reports and share my learnings.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Activity Reports – Dormant Sessions

With school starting for my daughter, I rarely get involved in her daily routine. But there is one thing that I try not to miss – the parent-teacher meetings. Sometimes it is not about the report card and how my daughter fared against the rest of the class; it is more than that. I am curious to understand how she behaves in class, how she makes friends, how her extra-curricular activities are going, whether she is disciplined overall, etc. Those are the key attributes and traits I am looking to get as feedback from the teachers in that hour of interaction.

Along the same lines, there are tons of other parameters one needs to be aware of when working with SQL Server. A lot of times, DBAs doing maintenance or monitoring of servers need help around who is currently accessing the server, what the inactive sessions are, which connections consume the most resources, all the active sessions on the server and more. This blog will answer these questions. Here are the reports we will talk about:

  1. Activity – All Cursors
  2. Activity – Top Cursors
  3. Activity – All Sessions
  4. Activity – Top Sessions
  5. Activity – Dormant Sessions
  6. Activity – Top Connections

Activity – All Cursors

This report shows information about the cursors used in SQL Server. Cursors are a looping construct in the T-SQL world. I have heard many times, from different sources, the best-practice advice to avoid using T-SQL cursors. In my opinion, there can be situations where cursors out-perform other looping constructs. For example, a cursor would be a good candidate for row-by-row processing that can’t be performed by set-based operations. We also get flexibility via a cursor, as it provides a subset of data and allows manipulation of the data in different ways. Having said that, do perform your own performance tests before using one – these recommendations have to be taken with a pinch of salt rather than as written in stone.

The heart of this report is the DMV sys.dm_exec_cursors, which has a lot of information about the cursors that are open in various databases. The report also uses the DMVs below.

  • sys.dm_exec_sessions – to get the login name
  • sys.dm_exec_sql_text – to get the text of the statement via sql_handle

To see some sample data in the report, we can run the query below:

DECLARE cur CURSOR FOR
    SELECT name FROM sys.objects

DECLARE @temp SYSNAME

OPEN cur
FETCH NEXT FROM cur INTO @temp
WHILE @@FETCH_STATUS >= 0
BEGIN
    FETCH NEXT FROM cur INTO @temp
    WAITFOR DELAY '00:00:01'
END
CLOSE cur
DEALLOCATE cur

All the values shown are explained in documentation of sys.dm_exec_cursors.
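
For reference, here is a sketch of pulling a similar result set yourself from the DMVs the report uses; the exact columns chosen are illustrative.

SELECT c.session_id,
       s.login_name,
       c.name          AS cursor_name,
       c.creation_time,
       c.is_open,
       c.fetch_status,
       t.text          AS statement_text
FROM sys.dm_exec_cursors(0) c
JOIN sys.dm_exec_sessions s
    ON s.session_id = c.session_id
CROSS APPLY sys.dm_exec_sql_text(c.sql_handle) t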

Activity – Top Cursors

This report is the same as the earlier report; the only difference is that the cursors are categorized as below.

  1. Top 10 Oldest Cursors – shows the oldest cursors on the instance (ordered by creation_time).
  2. Top 10 Dormant Cursors – shows cursors sitting idle since their last query (open or fetch) (ordered by dormant_duration).
  3. Top 10 IO Intensive Cursors – shows cursors that are consuming the most IO resources (ordered by reads + writes).
  4. Top 10 CPU Intensive Cursors – shows cursors that are consuming the most CPU resources (ordered by worker_time).

All four sections run exactly the same query against the DMV sys.dm_exec_cursors, with a different ORDER BY clause per section (as mentioned in the definitions above); see the sketch below.
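
For example, here is a sketch of two of those sections reproduced by hand; the column list is trimmed for readability.

-- Top 10 Oldest Cursors
SELECT TOP 10 session_id, name, creation_time, worker_time, reads, writes, dormant_duration
FROM sys.dm_exec_cursors(0)
ORDER BY creation_time ASC

-- Top 10 CPU Intensive Cursors
SELECT TOP 10 session_id, name, creation_time, worker_time, reads, writes, dormant_duration
FROM sys.dm_exec_cursors(0)
ORDER BY worker_time DESC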

Activity – All Sessions

As the name says, this report shows the details of all sessions, connections, requests and statements currently active on the server.

This report provides details on all active user sessions on the instance, organized by login. Since I have connected with two different logins, “SlowIO” and “sa”, the report shows two groups (highlighted). We can drill down within each group to the statement level. Under the hood it uses the sys.dm_exec_sessions, sys.dm_exec_connections and sys.dm_exec_requests DMVs.
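
A simplified sketch of that join (not the report’s exact query) looks like this:

SELECT s.login_name,
       s.session_id,
       s.host_name,
       s.program_name,
       c.connect_time,
       r.status  AS request_status,
       r.command
FROM sys.dm_exec_sessions s
LEFT OUTER JOIN sys.dm_exec_connections c ON c.session_id = s.session_id
LEFT OUTER JOIN sys.dm_exec_requests    r ON r.session_id = s.session_id
WHERE s.is_user_process = 1
ORDER BY s.login_name, s.session_id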

Activity – Top Sessions

Here is the base query used by the report:

SELECT TOP 10 s.session_id,
s.login_time,
s.HOST_NAME,
s.program_name,
s.cpu_time             AS cpu_time,
s.memory_usage * 8     AS memory_usage,
s.total_scheduled_time AS total_scheduled_time,
s.total_elapsed_time   AS total_elapsed_time,
s.last_request_end_time,
s.reads,
s.writes,
COUNT(c.connection_id) AS conn_count
FROM   sys.dm_exec_sessions s
LEFT OUTER JOIN sys.dm_exec_connections c
ON ( s.session_id = c.session_id)
LEFT OUTER JOIN sys.dm_exec_requests r
ON ( r.session_id = c.session_id)
WHERE  ( s.is_user_process = 1)
GROUP  BY s.session_id,
s.login_time,
s.HOST_NAME,
s.cpu_time,
s.memory_usage,
s.total_scheduled_time,
s.total_elapsed_time,
s.last_request_end_time,
s.reads,
s.writes,
s.program_name

Here are the various ORDER BY clauses added in each section. You can try them yourself as well.

  1. Top Oldest Sessions (order by s.login_time asc)
  2. Top CPU Consuming Sessions (order by s.cpu_time desc)
  3. Top Memory Consuming Sessions (order by s.memory_usage desc)
  4. Top Sessions By # Reads (order by s.reads  desc)
  5. Top Sessions By # Writes (order by s.writes desc)

Activity – Dormant Sessions

This is an interesting report that shows dormant sessions in SQL Server. A dormant session is one that connected earlier, ran some query and is now sitting idle. The report provides details on sessions that have been inactive for more than an hour. Behind the scenes, the report uses sys.dm_exec_sessions and applies the filter datediff(mi, last_request_end_time, @d1) >= 60 to find dormant sessions.
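
A sketch of that filter, runnable on its own, would look something like the query below (using GETDATE() in place of the report’s @d1 parameter):

SELECT session_id,
       login_name,
       host_name,
       program_name,
       last_request_end_time
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
  AND DATEDIFF(MINUTE, last_request_end_time, GETDATE()) >= 60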

As shown above, there are three sections in the report. In the first section (1), we can see the number of all sessions, the number of dormant sessions that have been idle for more than 1 hour, and the number of users with dormant sessions. The user count might differ from the session count, because a single login might have more than one session open at a point in time. The second section (2) shows the Top 10 Dormant Sessions; all of the columns are self-explanatory. The third section (3) shows the top 10 dormant sessions by user name, which is useful on development servers where we use the user name to find who is connected.

Activity – Top Connections

This is the last Activity report in the list. The earlier reports are based on sessions, whereas this report is based on connections. Since the report is similar, I will not explain it in much detail.

Here is the base query used by the report:

SELECT TOP 10
       ( ROW_NUMBER() OVER (ORDER BY c.connect_time) ) % 2 AS l1,
       CONVERT(CHAR(100), c.connection_id)                 AS connection_id,
       c.session_id,
       c.connect_time,
       c.num_reads,
       c.num_writes,
       c.last_read,
       c.last_write,
       c.client_net_address,
       c.client_tcp_port,
       ( SELECT COUNT(*)
         FROM   sys.dm_exec_requests r
         WHERE  r.connection_id = c.connection_id )        AS request_count,
       s.login_time,
       s.HOST_NAME,
       s.program_name,
       s.login_name,
       s.is_user_process
FROM   sys.dm_exec_connections c
LEFT OUTER JOIN sys.dm_exec_sessions s
       ON ( s.session_id = c.session_id )

There are three sections. They show similar information but with different order by clauses.

  • 10 Oldest Connections – order by c.connect_time
  • Top Ten Connections By # Reads – order by c.num_reads desc
  • Top Ten Connections By # Writes – order by c.num_writes desc

Well, that was quite a few reports in one go today. I am sure you will play with them; do let me know if you find anything interesting or have used these reports in any interesting ways.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Free intellisense add-in for SSMS

This article shows how to use ApexSQL Complete, a free SQL Server intellisense add-in. You can download ApexSQL Complete, and play along through the article.

ApexSQL Complete is a free SQL Server Management Studio and Visual Studio add-in that speeds up SQL coding. In this article, we will explain ApexSQL Complete through its features. To start using ApexSQL Complete, enable it from the ApexSQL menu under the main menu in SSMS:

Hint list – complete your SQL code

This is a key feature of ApexSQL Complete. It helps you find the wanted object or SQL keyword and complete the SQL statement faster, instead of typing the entire object name or keyword. After you start typing, for example “SE”, the SQL intellisense list will appear with all keywords and database objects that contain “SE”, listed by importance:

You can uncheck the box for particular object types in the add-in options, under the Hints tab, and prevent them from appearing in the SQL intellisense list. This decreases the number of hints in the list and speeds up the coding process:

Another useful property of the SQL intellisense in ApexSQL Complete is its multiple sections, which allow you to navigate through the database schema to a table and select specific columns by checking the appropriate boxes. You can also select the columns and hit the Enter key to insert them:

If the SQL script you are working with is large, at some point you will need to look at the SQL code before you continue. ApexSQL Complete lets you accomplish this without a break: press and hold the CTRL key and the SQL intellisense list becomes transparent, so you can see through it and review the SQL code. Releasing the CTRL key brings you back to the previous state, and you can continue typing:

Tab navigation – monitors all opened SSMS tabs

This feature allows you to track opened and recently closed tabs, or to restore a previously saved session after a crash. These operations can be managed from the add-in options, under the Tab navigation tab. Here you can set the period for keeping the tabs saved and the interval for auto-save.

This could be useful if, for any reason, SSMS crashes. The Tab navigation feature saves your time and gets you back to the point before the crash.

The Tab navigation pane consists of two parts, Opened tabs and Recently closed tabs. In the Opened tabs section, all opened tabs from the current session are shown, and you can easily switch from one to another. You can search the content of the opened and closed queries and open the query that contains the searched results. Double-clicking a query from the Recently closed tabs list opens it in a new query window. For both opened and closed tabs, if you select a query from the list, its complete content is shown in the preview section to the right:

At any point, you can save your workspace or opened tabs, and restore to the saved state later, if SSMS crashes.

Code structure – view and find SQL code blocks

This feature provides a tree-like view of the SQL code, presented in a separate SSMS window to the left of the query window. When you enable the Code structure feature from the add-in options, it allows you to see all the important parts of the SQL code used in the query. SQL code blocks in the Code structure window can be expanded so you can navigate to a specific part of a block. Selecting any item in the Code structure window highlights the corresponding SQL code block in the query window:

This way you can move through SQL code in blocks and find the part you are looking for instead of scrolling down the query.

Executed queries – track executed queries

Using this feature allows you to track all executed queries in a defined period. To enable the feature, select the Log execute queries option under the Executed queries tab in the add-in options. You can set the folder for storing the executed queries; the queries are saved as .xml files. You can also define the maximum number of lines of SQL code that will be stored. The Default period option allows you to show the queries executed in a defined period.

When activated, the Executed queries form shows all the queries executed in the defined time range. If you select a query from the list, its content appears in the preview section. You can search through the queries executed in the defined period. Double-clicking any of the executed queries in the list opens it in a new query window in SSMS, so you can change the SQL code further.

Snippets library – insert frequently used SQL statements

With this feature you can insert frequently used SQL statements, even a whole procedure or blocks of SQL code. You can create a snippet from the ApexSQL Complete options, or from the SSMS query window:

1) To create a snippet from the ApexSQL Complete options, navigate to the add-in options, and click Add new snippet option, under the Snippets tab:

Here you can edit any of the predefined snippets from the library, and export/import them for use on another machine.

2) To create a snippet from the SSMS query window, type SQL code you want to be defined as a snippet, select it, and right click on it. From the context menu, choose the New snippet option:

This will open the Create a new snippet window, with the selected code already inserted in the Code section. You just need to define a name for the new snippet, and optionally a description:

To use an already created snippet from the Snippet library, click the Insert snippet option from the context menu in the SSMS query window, and double-click a snippet from the list to use it in the query.

Navigate to object – locate an object in the Object Explorer

This allows you to locate the selected object in the Object Explorer pane. In the query window, select the object you want to locate, and right-click on it. From the context menu, choose the Navigate to object option, and the selected object will be located and highlighted in the Object Explorer pane to the left.

Test mode – execute queries without impact on the database

The Test mode feature allows you to execute a query as a test, without impact on or consequences for the database. To use it, select the Test mode option from the toolbar and highlight the SQL code in the query window that you want to execute.

The Test mode feature wraps the highlighted code in BEGIN TRANSACTION and ROLLBACK TRANSACTION statements, so the changes are rolled back after execution:
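
Conceptually, the executed batch ends up looking something like this sketch; the UPDATE statement and table name are made up for illustration.

BEGIN TRANSACTION

-- The highlighted code runs here; its effects are visible only inside the transaction.
UPDATE dbo.Orders
SET    Status = 'Cancelled'
WHERE  OrderID = 42

-- Everything is then undone.
ROLLBACK TRANSACTION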

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Scheduler Health Report

Have you ever been to a music concert? It is one of the most humbling experiences a music lover can have. Live music with hundreds of musicians on one stage brings goosebumps at the very thought. I have always been fascinated by this, have wanted to experience it once in my lifetime, and am sure that day is not far off. What strikes me big time is the conductor, standing alone with a small stick, orchestrating these fabulous musicians to give all of us a delight and an experience of a lifetime. This brings me to today’s topic of discussion, the Scheduler Health report. In a way, the conductor inside SQL Server is our scheduler – the one who makes sure all the activities and all the parts get their share of time to execute. It looks like a dream job, but trust me, there is a lot of effort in understanding how each component works, just like a conductor really knows when to introduce a new instrument into the concert. Before I start explaining this report, it’s important to go through the basics of the SQL scheduler, which will help in understanding the report.

SQL Server has a mini operating system which manages resources on its own; that’s why you would hear the term SQLOS. By resources we mean the CPU, memory and IO available on the machine.

Whenever any request is received by SQL Server, it is assigned to a thread, and that thread is scheduled on a scheduler (it might go to multiple schedulers in the case of parallelism). Threads that are ready to run are scheduled and sent to the operating system for execution. Imagine a blocking situation where a blocked thread can’t do anything until the resource is available. In such a situation, does it make sense to send this request to the operating system for scheduling? Of course not! That’s why this mini operating system does better scheduling, and SQL Server can scale up very well as the workload increases. Another advantage of the SQLOS layer is that it reduces thread context switching in the operating system, because it only sends out threads which can do some meaningful work.

To summarize, SQLOS is a mini operating system within the sqlservr.exe process which takes care of managing CPU, memory, locks, IO and a lot more. In general terms, a scheduler is a logical CPU on the SQL Server side, and each scheduler is mapped to one logical processor exposed by the operating system. There are hidden and visible schedulers in SQL Server; they can be looked at via the DMV sys.dm_os_schedulers.
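
Here is a sketch of looking at the schedulers yourself; the columns picked are only a subset of what the DMV exposes.

SELECT scheduler_id,
       cpu_id,
       status,                -- e.g. VISIBLE ONLINE, HIDDEN ONLINE
       is_idle,
       current_tasks_count,
       runnable_tasks_count,
       work_queue_count,
       yield_count
FROM sys.dm_os_schedulers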

To know more about our schedulers, here is how to get to this report: right-click on the Server node, choose Reports > Standard Reports > Scheduler Health.

The complete report revolves around SQLOS. SQLOS uses non-preemptive scheduling (also known as cooperative scheduling), which is different from the scheduling done by the operating system. The Windows operating system does preemptive scheduling, where a thread gets a fixed amount of time to run on the CPU; once the time slice is completed, it is taken off the CPU and put into the queue for its next chance to run. This is a fair game because all threads get a chance to run. On the other hand, in SQLOS a thread does its work and comes back to the scheduler on its own; no one takes it off the scheduler. This is called yielding. If a thread went out from the SQLOS layer and didn’t come back, it’s called a non-yielding situation. If all schedulers have the same non-yielding problem, then you can imagine that SQL Server would go into a “hung” state. A scheduler can be in three states – Idle (when work_queue_count <> 0), Hung (when yield_count is not changing) or Active (when it’s not in the other two states). Hence the first part of our report shows which state each scheduler is currently in. In our case, the scheduler is in the Idle state.

The second part of the report shows details about the workers (also accessible via the DMV sys.dm_os_workers), tasks (via the DMV sys.dm_os_tasks) and processes running under each scheduler. Let us understand these terms in a little more detail, as it will help you understand this section of the report better.

Task – represents the work that needs to be performed; it can also be called the unit of work scheduled by SQL Server. Examples of tasks include pre-login, login, query execution, logout and many more. A task can be in various states (PENDING, RUNNABLE, RUNNING, SUSPENDED, DONE or SPINLOOP). Please refer to the documentation for more details.

Worker – the threads that do the tasks handed to them by the scheduler.
Request – the logical representation of a request made from the client application (or work done by system threads) to SQL Server. The request is assigned to a task that the scheduler hands off to a worker to process.

Now that our fundamentals have been sorted, let us have a look at the second report section:

My machine currently has 8 logical processors, hence we are seeing Scheduler IDs 0 to 7. The other schedulers have a status of “HIDDEN ONLINE” in sys.dm_os_schedulers. Each scheduler has various workers associated with it; we can see that in the #Workers column for each scheduler row. Once we click (+) for a scheduler, we can see details about each worker, and clicking (+) for a worker shows the work done by that worker.

I hope that this blog has helped you understand the basic functionality of SQLOS and how the Scheduler Health report drills into the fine print.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – SSMS: Schema Change History Report

The heat is picking up, and I am glad you are liking this series so far. This particular report is close to my heart and the one I recommend the most. On my recent trip to Delhi and the user group out there, I had the opportunity to meet a number of young DBAs who were getting into their professional careers at various organizations. I always try to engage such groups with interesting questions to make them inquisitive about learning new concepts in SQL Server.

At this user-group session I wanted people to answer two simple questions:

  1. How can I know, who created/dropped/altered the database?
  2. How can I know, who created/dropped/altered the objects?

This caught the attention of the group, and I got various answers, from DDL Triggers, Auditing, Error Logs and Extended Events to many more innovative responses which I will refrain from disclosing here because they were really funny. All these answers were correct in a way, and I had to counter them with yet another question to make them think.

Though your answers are correct in a way, what is the easiest / simplest way to find this without writing a single line of code? That twist turned the responses into something simpler, and one attendee answered – “why not use Profiler?”

This response stumped me totally, and I said, let me achieve the same with fewer clicks for you. My idea was to show them the SQL Server Management Studio report – Schema Changes History. It has interesting dimensions to examine, so let me take a moment to walk you through the same.

Where to start?

The report location can be found from Server node -> Right Click -> Reports -> Standard Reports -> “Schema Changes History”.

One important piece of information worth noting here is that the report fetches its data from the default trace. We have talked about the default trace and how to enable it in our previous post on the “Configuration Changes History” report.
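
As a quick refresher, the default trace can be checked and turned on with sp_configure (a sketch; it is an advanced option):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'default trace enabled', 1
RECONFIGURE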

If the default trace is not enabled, then this report is smart enough to look at each database and find objects which were created or altered in the last 7 days. I was not aware of this until I disabled the default trace to see the error in the report; to my surprise, the report still came up in a different format. Let us look at the output with both options.

With default trace enabled

The report with the default trace enabled is as shown below:

To catch the query that populates this report, I ran Profiler and here is the basic query:

SELECT * FROM ::fn_trace_gettable(@base_tracefilename, DEFAULT)
WHERE EventClass IN (46, 47, 164) AND EventSubclass = 0 AND DatabaseID <> 2

To elaborate a bit, the EventClass values 46, 47 and 164 correspond to Object:Created, Object:Deleted and Object:Altered respectively (refer to sys.trace_events on MSDN for more info).
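
You can confirm the mapping yourself with a quick lookup against sys.trace_events:

SELECT trace_event_id, name
FROM sys.trace_events
WHERE trace_event_id IN (46, 47, 164)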

With default trace disabled

Now here is the twist: when the default trace is disabled, the query executed in each database is shown below for reference:

SELECT o.name AS OBJECT_NAME,
o.type_desc,
o.create_date,
s.name AS schema_name
FROM   sys.all_objects o
LEFT OUTER JOIN sys.schemas s
ON ( o.schema_id = s.schema_id)
WHERE  create_date > ( GETDATE() - 7);  

And below is the report generated by the query. It is evident from the “Note” section at the end of the report that our default trace is not enabled.

There are two sections in the report. They are based on a query similar to the one shown above, with the minor difference of using the “modify_date” column instead of “create_date”, as below.

SELECT o.name AS OBJECT_NAME,
o.type_desc,
o.create_date,
s.name AS schema_name
FROM   sys.all_objects o
LEFT OUTER JOIN sys.schemas s
ON ( o.schema_id = s.schema_id)
WHERE  modify_date > ( GETDATE() - 7);  

The disadvantage of disabling the default trace is that we would not be able to see any information if a database was dropped. I have generally found this trace to be non-intrusive on most systems, but I would love to hear from you and learn if you have faced any problems with it.

Caveats with Schema Change History Report

One problem with the report is that even if one database is inaccessible, it gives an error and fails to report anything for the remaining databases. For illustration purposes, I put a database into the NORECOVERY state and refreshed the report to get the error below:

If you ever have such situation, you can run the T-SQL query mentioned above manually on the database under question to view the changes.

Has anyone reading this post ever disabled the default trace? Have you used these reports in your environment? Let me know your learnings.

Reference: Pinal Dave (http://blog.sqlauthority.com)