SQL SERVER – How to Increase Number of Errorlog Files

A long time back I had written a blog post on – SQL SERVER – Recycle Error Log – Create New Log file without Server Restart. Recently one of my blog readers emailed me this question:

Hi Pinal,
In our SQL Server, we are logging successful logins to the ERRORLOG and due to this the file size keeps on increasing. I have read your blog about recycling the error log. I have put a job in SQL Agent to recycle the logs every midnight. No problem so far. In the near future, we will have an audit in our company and as per the auditors' requirement, they want to have error logs worth 30 days (to check login successes/failures). Since I have put in a midnight job and we have only six archive errorlogs (ERRORLOG.1 to ERRORLOG.6), that would cover the last 6 days only.

To solve the problem, I can put in a copy step to move the file somewhere else before the midnight recycle. But then I also need logic to delete files older than 30 days. Is there any way to achieve this in SQL Server without much effort?

Regards,
<Name Hidden>

I thought I had a blog post covering this answer but surprisingly, I couldn’t find anything on my blog. So, here are various ways to achieve the same.

Using SSMS

Once we connect to the SQL instance via SSMS, we can go to the “Management” node, right click on “SQL Server Logs” and choose “Configure” as shown below.

Once we click on Configure, the checkbox shown below would be unchecked by default and the value would be shown as 6. That’s the reason we have files only up to ERRORLOG.6.

We can check the box and put the desired value in the field. Based on the daily recycle of the errorlog which my blog reader had in place and the 30-day requirement, the value can be set to 30 to keep his auditors happy.

Using T-SQL

If you are not a fan of the UI, then below is the T-SQL which can be used to achieve the same change.

USE [master]
GO
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'NumErrorLogs', REG_DWORD, 30
GO

The above essentially changes the registry key called NumErrorLogs for this instance. Notice that we are invoking xp_instance_regwrite, which doesn’t need any instance-related details in the path because it detects the exact key internally. For my named instance “SQL2014”, the exact key would be

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL12.SQL2014\MSSQLServer as shown below
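If you want to verify the value that is currently stored, the companion procedure xp_instance_regread can read it back. Like xp_instance_regwrite it is undocumented, so treat the below as a quick sketch:

USE [master]
GO
DECLARE @NumErrorLogs INT
-- Returns NULL if the value has never been changed from the default of 6
EXEC xp_instance_regread N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'NumErrorLogs', @NumErrorLogs OUTPUT
SELECT @NumErrorLogs AS NumErrorLogs
GO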

Hope this helps you in the real world to keep more error logs than the default if that is a requirement from your DBA/network or auditor teams.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Visibility into the Overall Health of Your SQL Server Environment – You Can’t Fix What You Can’t See

I am sure most of us have experienced driving a four wheeler. The first time I got behind the wheel, it was an amazing experience. My excitement was multiplied by the tension on my father’s face. After a while, I could see a sigh of relief and confidence even in his eyes. This is a classic case of us being pampered, guided and always shadowed by our loved ones. I loved the learning experience then. The real challenge came much later. One day while I was driving to the office, the car stopped in the middle of the road. I was clueless about what the problem could be. Human instinct was to look at the dashboard for some signals. I checked the fuel level, the oil level indicator and any other indicator that could guide me in finding the problem. My conclusion was that it was time to call the experts. A lot of times, we are faced with situations where the health of the components cannot be diagnosed easily – what can we do?

The above analogy is just an example of what happens inside SQL Server too. If we don’t know or cannot see the problem at hand, how can we fix it – or even worse, how do we know a problem exists in the first place? In the recent past, I was quizzed with a similar problem and didn’t know what course of action had to be taken.

During one of my chat sessions with a DBA friend, he said – “most of the testing and integration environments are managed by junior DBAs and seniors handle only the production environments.” He explained there was a recent incident that caught his attention and he wanted to know if there was a way to solve the problem at hand.

This got me curious about what had really happened. He said one of the test environments had crashed just a couple of days before a major release cycle. The testing team creates databases for a release cycle and loads tons of relevant data into them. After the crash, the DBAs didn’t have any log backups because the log backup job was failing and nobody was monitoring it. The DBA said the maintenance plans were in place, yet they had failed.

The junior DBA got curious and asked him this question – there must be something fundamentally wrong here.

FULL recovery acts like SIMPLE

Sometimes we need to understand the basics first. Even though a database is set to the FULL recovery model, it acts like the SIMPLE recovery model till the first full backup is taken. This is a well-known concept. To illustrate this, let me take a log backup of a newly created database. The T-SQL looks like:

BACKUP LOG [Bigdata] TO DISK = N'C:\Backup\Bigdata.bak'
WITH NOFORMAT, NOINIT,
NAME = N'Bigdata-Log Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO

This will throw error 4214 shown below. This is the exact error they were getting too.

Msg 4214, Level 16, State 1, Line 1

BACKUP LOG cannot be performed because there is no current database backup.

Msg 3013, Level 16, State 1, Line 1

BACKUP LOG is terminating abnormally.

The error is self-explanatory: we don’t have a database backup yet, and taking one will solve the problem. The GUI steps yield the following error message dialog -
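In their case, the fix was simply to take the first full backup; once it exists, the log backup chain can start. A minimal sketch reusing the same Bigdata database (the Bigdata_Full.bak file name is just an example) would be:

BACKUP DATABASE [Bigdata] TO DISK = N'C:\Backup\Bigdata_Full.bak'
WITH NOFORMAT, NOINIT,
NAME = N'Bigdata-Full Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO
-- The log backup that failed earlier now succeeds
BACKUP LOG [Bigdata] TO DISK = N'C:\Backup\Bigdata.bak'
WITH NOFORMAT, NOINIT,
NAME = N'Bigdata-Log Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO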

Now that my friend understood this concept, he wanted to be vigilant and proactive in solving this problem. There is no way he can track all the databases created in the test and integration environments. He wanted to know if there is a way to know this ahead of time.
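Before we get to the tooling, here is a rough T-SQL sketch of that kind of check: it lists databases in the FULL recovery model that have no full backup recorded in msdb. The column and table names below are the standard ones, but treat the query as a starting point rather than a production script.

SELECT d.name AS database_name, d.recovery_model_desc
FROM sys.databases AS d
WHERE d.recovery_model_desc = 'FULL'
AND d.name NOT IN ('master', 'model', 'msdb', 'tempdb')
AND NOT EXISTS (SELECT 1 FROM msdb.dbo.backupset AS b
                WHERE b.database_name = d.name AND b.type = 'D')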

Overall health using Spotlight for SQL Server

There is no way one can keep track of databases that are created randomly by the application / testing teams every single day. If you have a monitoring tool like Spotlight, this task becomes easy for sure. Yes, we can make a deployment standard that says everyone needs to take a full backup as soon as a database is created. But that requires constant monitoring and discipline from users / script writers.

In the example below, I have gone ahead and pointed Spotlight to a newly built server. As soon as this happens, the heatmap view shows us some critical data which needs attention and acknowledgement. Here you can see the tooltip suggests – “Full backups are missing for few databases”. I felt this was a nifty suggestion that can be a lifesaver for many.

On double clicking the same, we are taken to the server home page where we can look at the databases node (marked in red) and get detailed information about which databases are pending a backup.

The learnings here are twofold. First, we understood that a log backup is not possible till the first full backup happens. Second, the FULL recovery model acts like the SIMPLE recovery model till that first full backup is taken. To mitigate the risks in business critical environments, it is important to have some monitoring capability like Spotlight for SQL Server.

Try Spotlight on SQL Server free for 30 days!

Spotlight on SQL Server continues to break new ground when it comes to providing DBAs with the ultimate in flexibility to suit their specific needs. Building on the success of the Wait Stat analysis workflow, Spotlight on SQL Server version 11.0 now includes multi-dimensional workload analysis to provide maximum insight into performance issues, an enhanced mobile capability to remotely diagnose issues directly from mobile phones and the introduction of SNMP traps. A fresh new look provides simple to use, powerful visualization to effectively monitor the health of your SQL Server environment.

If you’re looking for additional help on how to optimize your SQL Servers, visit Dell Software’s SQL Server Resource Center for videos, guides and other useful content.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Finding Tables Created Last Week – DBA Tip Part II

When I wrote my previous blog post on SQL SERVER – Finding Tables Created Last Week – DBA Tip, little did I know that something as simple as this could become such an interesting topic of conversation. I have been overwhelmed by the number of emails I have received in the past week wanting to know more about such real life scenarios. Having said that, a number of you also asked if there is a much easier way to do the same inside SQL Server Management Studio.

As I was thinking about possible options, one thing that struck me is – I haven’t told you how this can be done using the UI inside SQL Server Management Studio.

SSMS is a powerful tool and a number of options exist to address the same scenario. Let me talk about two other easier ways to get the answer to our question.

Object Explorer

When using Object Explorer, go to the Tables node -> Right Click and select Filter -> Filter Settings. Once here, you will get a “Creation Date” property which can be used to filter. Select the operator and the value and we are all done.

This will filter the “Tables” node in Object Explorer to the objects which were created based on the date condition we specified in the filter.
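For reference, the T-SQL route from the earlier post boils down to a simple query against sys.tables; a minimal sketch (the 7-day window is just an example) looks like this:

-- Tables created in the last 7 days, newest first
SELECT name, create_date
FROM sys.tables
WHERE create_date >= DATEADD(DAY, -7, GETDATE())
ORDER BY create_date DESC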

Object Explorer Details Pane

The next option is to use the Object Explorer Details pane (F7 shortcut). Here, on the header, select the “Create Date” column and then sort by it. Now the tables that were created most recently will bubble to the top.

Awesome right? These are a couple more ways to find the latest tables that were created in the database. This is quick and handy if you don’t want to write any T-SQL or use any other methods. Do let me know if you have any other way to find the same. I look forward to learning from you folks too.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #002 – Script to Find Byte Size of a Row for All the Tables in Database

In recent times I was part of an interview panel where I was stunned to see the following question asked of the interviewee. I personally believed that this question is no longer relevant as we are in an age where databases are capable of handling large amounts of data and the row size is no longer significant. However, the question was still asked and the interviewee still had to answer it with regards to the latest version of SQL Server. Here it goes:

Question: How to find total row size in bytes for any table in SQL Server?

Answer: In SQL Server 2000 and earlier versions it was crucial to know the byte size of a row, as there was a limit to how big a row could be. This limit has been relaxed in recent versions of SQL Server, so the question is not as relevant now.

Here is the script for recent versions of SQL Server. I have added an additional column which indicates whether a table contains any column with a MAX datatype.

SELECT CASE WHEN (GROUPING(sob.name) = 1) THEN 'All_Tables'
            ELSE ISNULL(sob.name, 'unknown') END AS Table_name,
       SUM(scol.max_length) AS Approx_Max_Length,
       CASE WHEN MIN(scol.max_length) = -1 THEN 'Yes'
            ELSE 'No' END AS Fieldwith_MAX_Datatype
FROM sys.objects sob
INNER JOIN sys.columns scol ON scol.object_id = sob.object_id
WHERE sob.type = 'U'
GROUP BY sob.name
WITH CUBE

If you are using SQL Server 2000, here is the blog post which contains the answer: SQL SERVER – Query to Find ByteSize of All the Tables in Database.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – What is the query used in sp_cursorfetch and FETCH API_CURSOR?

In a recent debugging exercise for performance, one of my friends posed a question to me:

“When I view activity monitor, under the recent expensive queries and under the query column, I see a “fetch api_cursor0000000000000003”. What is this? How can I know what was running behind this query?”

Obviously, the hint in that question was the keywords “Fetch” and “Cursor”. So I knew what the starting point would be. How many times have you been in a situation where you want to track a query, but when you capture a Profiler trace or use conventional methods like sys.sysprocesses or DBCC INPUTBUFFER, you see something like this:
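For example, running DBCC INPUTBUFFER against the session (the SPID 53 below is only a placeholder) typically returns nothing more useful than the cursor name:

-- Replace 53 with the session_id (SPID) you are investigating
DBCC INPUTBUFFER(53)
-- The EventInfo column shows only something like: FETCH API_CURSOR0000000000000001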

The same thing in Profiler would look like below:

EXEC sp_cursorfetch 180150003,16,8,1

OR

FETCH API_CURSOR0000000000000001

Note: the numbers in the query might differ in your environment. Here is VBScript code to simulate the problem.

Dim strConnection
Dim MyConnection
Dim MyRecordSet
Dim strSQL
' Change Data Source to match your SQL Server instance
strConnection = "Provider=SQLOLEDB;Data Source=.\SQL2014;Initial Catalog=Master;Integrated Security=SSPI;"
Set MyConnection = CreateObject("ADODB.Connection")
MyConnection.Open strConnection
Set MyRecordSet = CreateObject("ADODB.recordset")
strSQL = "SELECT TOP 3 * FROM sysobjects"
' 2 = adUseServer: a server-side cursor, which is what generates the sp_cursoropen / sp_cursorfetch calls
MyRecordSet.Cursorlocation = 2
MyRecordSet.Open strSQL, MyConnection, 3, 3
MyRecordSet.MoveFirst
WHILE NOT MyRecordSet.EOF
MsgBox(MyRecordSet("name").value)
MyRecordSet.MoveNext
WEND
MyRecordSet.Close
Set MyRecordSet = Nothing

To reproduce the issue, you can save the above code in a file with a .vbs extension and then run it from a command prompt. You may need to change the value of Data Source in the connection string to match your environment. Once we run the VBScript, we get a popup with the table name. If the Profiler trace is captured from the beginning, we should see sp_cursoropen, which can tell us the query. If the query is already running, then we can use the below query:

SELECT c.creation_time,
       c.cursor_id,
       c.session_id,
       c.properties,
       c.is_open,
       SUBSTRING(st.text, (c.statement_start_offset / 2) + 1,
           ((CASE c.statement_end_offset
                 WHEN -1 THEN DATALENGTH(st.text)
                 ELSE c.statement_end_offset
             END - c.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_cursors(0) AS c
JOIN sys.dm_exec_sessions AS s
    ON c.session_id = s.session_id
CROSS APPLY sys.dm_exec_sql_text(c.sql_handle) AS st
GO

Here is a sample execution, and as we can see, we can get the query text which opened the cursor.

 

Note: The cursor_id is the same as what we saw in sp_cursorfetch. Additionally, if we capture a complete Profiler trace, we can see the below output:

Moral of the story: if you are seeing sp_cursorfetch or FETCH API_CURSOR0000000000000001, then either capture a complete Profiler trace from the start of the query, OR use sys.dm_exec_cursors to get the exact query.

Do let me know if you have ever faced this problem in your environment before. How did you debug the same?

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – What is is_not_trusted in sys.foreign_keys?

My greatest source of learning is my blog. Many of you take the time to ask your toughest questions, and sometimes basic questions, that make me think. This interaction creates a source of exploration for me and ultimately translates into my knowledge sharing. Not all questions asked by DBAs are straightforward, and sometimes I also have to do some research before providing an answer. Here is another interesting question that I received via email recently -

Hi Pinal, What is the meaning of is_not_trusted column in sys.foreign_keys?

I would have told him to check Books Online, but if we look at Books Online for sys.foreign_keys, the column description says “FOREIGN KEY constraint has not been verified by the system.” This is not very clear or easy to understand. No wonder he asked me this question. Let us learn this using an example:

For simplicity, let us assume we have three tables: EmpMaster, AddressMaster and EmpAddress, using the below schema:

CREATE DATABASE SQLAuthDB
GO
USE SQLAuthDB
GO
CREATE TABLE EmpMaster(Emp_id INT PRIMARY KEY, FName VARCHAR(10), LName VARCHAR(10))
GO
CREATE TABLE AddressMaster(AddressId INT PRIMARY KEY, Address VARCHAR(100), Status CHAR(1))
GO
CREATE TABLE EmpAddress(Emp_id INT, AddressId INT)
GO

From the names you might have guessed that EmpAddress is the mapping table, which can have only those values which are in the master tables. So, to enforce referential integrity, we should create foreign keys to avoid violation of the parent-child relationship.

ALTER TABLE dbo.EmpAddress ADD CONSTRAINT
FK_EmpLocation_EmpMaster FOREIGN KEY
(
Emp_id
) REFERENCES dbo.EmpMaster
GO
ALTER TABLE dbo.EmpAddress ADD CONSTRAINT
FK_EmpLocation_AddressMaster FOREIGN KEY
(
AddressId
) REFERENCES dbo.AddressMaster
GO

We have set up the basic structure. Now, let us insert one row in each master table as below.

INSERT INTO EmpMaster VALUES (1, 'Pinal','Dave')
GO
INSERT INTO AddressMaster VALUES (1, 'Bangalore, India', 'A')
GO

Now, if we try to insert the value (1, 2) in the child table, we would get an error:

INSERT INTO EmpAddress (Emp_id, AddressId) VALUES (1, 2);

Msg 547, Level 16, State 0, Line 33

The INSERT statement conflicted with the FOREIGN KEY constraint “FK_EmpLocation_AddressMaster”. The conflict occurred in database “SQLAuthDB”, table “dbo.AddressMaster”, column ‘AddressId’.

The statement has been terminated.

This error is expected because “2” is not a valid value for AddressId, as it’s not available in the parent table, i.e. AddressMaster.

Let’s look at the metadata for the foreign keys in the catalog view sys.foreign_keys:

SELECT name, is_not_trusted FROM sys.foreign_keys

is_not_trusted is set to zero by default, which means the child table does not have any data that is missing from the parent. Now, let’s use the below to disable the constraint.

ALTER TABLE [dbo].[EmpAddress] NOCHECK CONSTRAINT [FK_EmpLocation_AddressMaster]

And now insert the value.

INSERT INTO EmpAddress (Emp_id,AddressId) VALUES (1, 2)

This means we now have an inconsistent value in the child table. Executing the query on sys.foreign_keys again will confirm this:

SELECT name, is_not_trusted FROM sys.foreign_keys

As highlighted above, the value 1 in the column means that the system has not verified the consistency of the data. Now, let’s try to enable the constraint again.

ALTER TABLE [dbo].[EmpAddress] WITH CHECK CHECK CONSTRAINT [FK_EmpLocation_AddressMaster]
GO

You might say that the above looks incorrect, but “CHECK” twice is intended. That’s the correct syntax to enable the constraint and make SQL Server verify the existing data. Since the inconsistent row is still present, here is the error one would receive while enabling it:

Msg 547, Level 16, State 0, Line 59

The ALTER TABLE statement conflicted with the FOREIGN KEY constraint “FK_EmpLocation_AddressMaster”. The conflict occurred in database “SQLAuthDB”, table “dbo.AddressMaster”, column ‘AddressId’.

Modify the value as below.

UPDATE EmpAddress
SET AddressId = 1
WHERE AddressId = 2

And now we can re-execute the ALTER TABLE command and it should work.
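Putting it together, the re-run and a final check would look like this:

ALTER TABLE [dbo].[EmpAddress] WITH CHECK CHECK CONSTRAINT [FK_EmpLocation_AddressMaster]
GO
-- is_not_trusted should now be back to 0 for this constraint
SELECT name, is_not_trusted FROM sys.foreign_keys
GO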

Once you are done with the above script, you can clean up the database by executing following script:

USE MASTER;
DROP DATABASE SQLAuthDB;

So this was a quick overview of the is_not_trusted column in sys.foreign_keys and how it works. Do let me know if you have ever used this feature anywhere.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Iterating over Metadata with BimlScript – Notes from the Field #061

[Note from Pinal]: This is the 36th episode of the Notes from the Field series. One of the most common questions I receive every other day is – I have learned BI but I feel that there is no automation in the BI field, what should I do? How do I go beyond learning BI? How can I fill the gap between BIDS and SSDT? If you have been reading this blog, you know that when it is about BI, I always take help from LinchPin People, who are BI experts. I requested Reeves from LinchPin People to help me answer this unique question.

In this episode of the Notes from the Field series, database expert Reeves Smith explains what Biml is and how it can help you with SQL Server BI. Read the experience of Reeves in his own words.


In previous articles (What is Biml and The Why of Biml), I’ve discussed Business Intelligence Markup Language (Biml) and given some examples of projects that would benefit from its use. I also demonstrated a quick way to get introduced to Biml without learning a lot of C#. As you grow in your understanding of Biml and BimlScript you will want to make your Biml files more dynamic with a programmatic metadata-driven approach. This article will build on the previous articles and add looping with metadata to build more dynamic Biml projects.

Overview

Starting where the previous article left off, I’ll demonstrate how to add a little C# to create a more automated method of metadata retrieval. If you did not walk through the other articles, I would suggest that you review them before you read this one. This article builds on the Biml in Listing 1 and lets you create multiple Integration Services packages from one Biml file.

Listing 1 – Semi-Automated Common Source Package Biml

Example – Automated Source Packages

This example shows you how to connect to a database and select the metadata to iterate over. Then you can create multiple SQL Server Integration Services (SSIS) packages from one Biml file. The example in the previous article manually updated the metadata and reran each Biml file. This is an effective approach, but you can build on it to create a more robust solution. Take a look at the C# code in Listing 2.

Listing 2 – C# Replacement Code

Replace the first line of code from Listing 1 with the code in Listing 2, which is an excerpt from Listing 3.

  • The first line will add the namespace to the Biml file and enable the calling of functions without a fully qualified name.
  • The second line creates a string variable that contains a connection string of the database with the metadata.
  • The third line creates a connection object (actually an AstOleDbConnection node) that will be used by the ImportDB function.
  • The fourth line creates a result set with the metadata from the connection object. This object contains all of the metadata from the AdventureWorks2012 database. The ImportDB function has three parameters. The first parameter is a schema filter, and in the example the filter is on the Person schema. The second parameter is a table filter and is blank in the example, so it will be ignored. The last parameter is a list of ImportOptions, and the views have been excluded. To add more import options, use the “|” followed by additional options. An example of multiple options:

ImportOptions.ExcludeForeignKey | ImportOptions.ExcludeViews

The example includes an object that contains all of the metadata that you will loop over. You have to place the loop within the <Packages> node because a Biml file can only have one Packages collection. The foreach loop will create multiple <Package> nodes.

A tableName string was built below the foreach loop to reduce the code that you have to alter from the example in the previous article.

Listing 3 – Complete Code

Note: The code in listing 3 has the ConnectionString truncated to save space.

If you receive the error shown in Figure 1, it means that the tables do not exist in the stage database. To check this, add the Person filter to the table parameter in the ImportDB function and return metadata for only one table. The prior articles used this table, so it should exist.

Figure 1 – Error Message

Summary

This article added some more C# to help you refine your Biml files. I hope you can start to see the power of Biml and how it can help remove the repetitive work from your next Integration Services project.

If you want to get started with Biml with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)