SQL SERVER – How to Find Weak Passwords Using T-SQL?

Recently one of my blog readers emailed me the below question.

Hi Pinal,
Need your urgent help. In the recent past, we were attacked by a hacker who was able to get into our SQL Server via a sysadmin account and did significant damage to our data. To make sure it doesn’t happen in the future, I have taken on the task of finding out which SQL Server passwords are weak.

Do you have any suggestions for me?

This is one of the areas which always haunts SQL DBAs. The recommendation is to use Windows Authentication to connect to SQL Server, which would save us from all such problems. But it is not always feasible to use Windows Authentication. Now, if we decide to choose SQL Authentication, there is a setting called “Enforce Password Policy” which would ensure that you are choosing a strong password.
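Before hunting for specific weak passwords, it is worth checking which logins were created with that policy check turned off. Here is a minimal sketch of such a check using sys.sql_logins:

-- A sketch: SQL logins created without password policy or
-- expiration enforcement (prime candidates for weak passwords).
SELECT name, is_policy_checked, is_expiration_checked
FROM   sys.sql_logins
WHERE  is_policy_checked = 0
   OR  is_expiration_checked = 0
GO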

If these recommendations are not followed, you might end up in a situation where SQL logins have weak and basic passwords. SQL Server provides a function, PWDCOMPARE, which can be very useful to find known weak passwords. Below are a few example uses of this out-of-the-box function:

SELECT name,
       name AS 'password'
FROM   sys.sql_logins
WHERE  PWDCOMPARE(name, password_hash) = 1
UNION
SELECT name,
       '<blank>' AS 'password'
FROM   sys.sql_logins
WHERE  PWDCOMPARE('', password_hash) = 1
UNION
SELECT name,
       'password123' AS 'password'
FROM   sys.sql_logins
WHERE  PWDCOMPARE('password123', password_hash) = 1

In the above query, we are trying to find:

  • Password same as user name – first query
  • Blank password – second query
  • Password = password123 – third query

These are some of the most common passwords used in the industry. I am sure you can extend this further by adding more weak passwords, as in the sketch below.
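For instance, here is a hedged sketch that probes every login against a whole list of candidate passwords in one pass, instead of adding one UNION branch per guess; the candidate list below is purely illustrative:

-- A sketch: test each login against a list of common passwords.
-- Extend the VALUES list with any candidates you want to check.
SELECT l.name, c.candidate AS 'password'
FROM   sys.sql_logins AS l
CROSS JOIN (VALUES ('password123'), ('P@ssw0rd'), ('admin'),
                   ('sa'), ('12345678')) AS c(candidate)
WHERE  PWDCOMPARE(c.candidate, l.password_hash) = 1
GO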


Hope this blog helps you find weak passwords and make them more complex. Have you ever come across such passwords in your environments? Do let me know.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #003 – How to Write Script for Database Cursor?

Just the other day I heard a senior DBA discussing the advantages and disadvantages of cursors. Well, personally I am a firm believer in set-based operations. However, there are a few places where cursors are the only solution. It is quite possible that different experts have different opinions about cursors; some love them and some hate them, but users definitely can’t ignore them.

If you are ever asked to write a cursor, you can just write one based on the following script.

USE AdventureWorks2014
GO
DECLARE @ProductID INT
DECLARE @getProductID CURSOR
SET @getProductID = CURSOR FOR
    SELECT ProductID
    FROM Production.Product
OPEN @getProductID
FETCH NEXT FROM @getProductID INTO @ProductID
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @ProductID
    FETCH NEXT FROM @getProductID INTO @ProductID
END
CLOSE @getProductID
DEALLOCATE @getProductID
GO

You can read about cursor examples over here: Simple Example of Cursor – Sample Cursor Part 2.

Here are a few additional examples related to cursors:

A cursor uses the WHILE keyword to loop over rows. If you are working with local variables, you can also use a WHILE loop on its own. Here is an example of a WHILE loop where the variable is incremented by 1 at every iteration.

DECLARE @intFlag INT
SET @intFlag = 1
WHILE (@intFlag <= 5)
BEGIN
    PRINT @intFlag
    SET @intFlag = @intFlag + 1
END
GO

You can read about this over at Simple Example of WHILE Loop With CONTINUE and BREAK Keywords.

This is just a simple example of a cursor; in reality, there are quite a few different cursor options as well (a quick teaser follows below). We will discuss these in future blog posts.
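As a teaser, here is a hedged sketch of the same loop with a named cursor declared using two common options: LOCAL scopes the cursor to the batch, and FAST_FORWARD makes it forward-only and read-only, which is usually the cheapest choice.

-- A sketch: the same iteration with a LOCAL FAST_FORWARD cursor.
DECLARE @ProductID INT
DECLARE getProductID CURSOR LOCAL FAST_FORWARD FOR
    SELECT ProductID
    FROM Production.Product
OPEN getProductID
FETCH NEXT FROM getProductID INTO @ProductID
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @ProductID
    FETCH NEXT FROM getProductID INTO @ProductID
END
CLOSE getProductID
DEALLOCATE getProductID
GO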

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Watching Table Variable Data in TempDB

I cannot get enough of working with TempDB, and the learning never stops. Previously, when I wrote about SQL SERVER – Is tempDB behaving like a Normal DB?, SQL SERVER – Inside Temp Table Object Creation and Difference TempTable and Table Variable – TempTable in Memory a Myth, many did ask me what the difference is when working with table variables. In one of the email interactions, one of my readers asked: “Is there a way to know when table variables are created? Are the objects created stored as part of TempDB? Are they written to the T-Log?”

I thought these questions were interesting. All of us work with table variables and assume they are just in-memory and don’t utilize anything from tempDB. Recently, during one of my post-session meetings, a DBA said their applications were built completely on table variables and their production servers were experiencing huge usage of TempDB. These questions made me write this blog post, which I thought was worth sharing. Let us look at each step one after another, as I explained to my friend:

View Objects Creation and TLog

The first stage for me would be to demystify the object creation process. The thought that table variables are just memory-bound objects and nothing gets written to the transaction log needs to be clarified first.

Here are the simple steps to clear the doubts. 1) Clear the T-Log of TempDB. 2) Create a table variable and then check the number of rows added to the T-Log of TempDB. 3) Add rows next. 4) Check the T-Log again and see if it is any different from just the table creation. This is my development box, and I can assure you no other process was using tempdb in the meantime on my system.

USE tempdb
GO
CHECKPOINT
GO
-- Output of this is 3
SELECT COUNT(*) [RowNums]
FROM sys.fn_dblog(NULL, NULL)
GO
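We will only count rows in these experiments, but fn_dblog also exposes descriptive columns that make each log record readable. A quick sketch, purely for exploration:

-- A sketch: peek at what each log record actually represents.
SELECT [Current LSN], Operation, Context, AllocUnitName
FROM sys.fn_dblog(NULL, NULL)
GO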

Next is to create our dummy table variable.

DECLARE @TblVariable TABLE(
    id INT NOT NULL IDENTITY(1,1),
    Name CHAR(30) NOT NULL,
    DOJ DATE NOT NULL,
    Company CHAR(400) NULL DEFAULT 'SQLAuth'
)
-- Output of this is 110+ on my machine
SELECT COUNT(*) [RowNums]
FROM sys.fn_dblog(NULL, NULL)
GO

This small experiment confirms that something is written to the T-Log for table variable creation. I went ahead and cleared the log again (with a CHECKPOINT, as above) and then did a small experiment to see if additional rows get added when I insert rows. So the second script was changed as below.

DECLARE @TblVariable TABLE(
    id INT NOT NULL IDENTITY(1,1),
    Name CHAR(30) NOT NULL,
    DOJ DATE NOT NULL,
    Company CHAR(400) NULL DEFAULT 'SQLAuth'
)
INSERT @TblVariable (Name, DOJ)
SELECT 'SQLAuth', GETDATE()
-- Output of this is 130+ on my machine
SELECT COUNT(*) [RowNums]
FROM sys.fn_dblog(NULL, NULL)
GO

As you can see, additional rows get added to the T-Log, showing there are writes even here. Feel free to look at the description fields of fn_dblog (as in the earlier sketch) and learn more about the fine print. My next experiment was to catch the table reference somehow using sysobjects.

SELECT * FROM sysobjects WHERE TYPE = 'U'

This showed no rows in my runs, even when executed right after creating the table variable. If you are wondering why, don’t worry; the next steps will demystify it.

Seeing Table Variable data

I wanted to try something different so that I could capture the table variable data somehow while it persists in the metadata tables. To mimic this, I used the below script. Simple, yet powerful.

DECLARE @TblVariable TABLE(
    id INT NOT NULL IDENTITY(1,1),
    Name CHAR(30) NOT NULL,
    DOJ DATE NOT NULL,
    Company CHAR(400) NULL DEFAULT 'SQLAuth'
)
INSERT @TblVariable (Name, DOJ)
SELECT 'SQLAuth', GETDATE()
SELECT * FROM @TblVariable
WAITFOR DELAY '00:02:00'

Now this script will wait for 2 minutes. Since the batch doesn’t finish during the delay, the table variable is not destroyed immediately as in the previous case. While the script runs, we can query sysobjects from another session:

SELECT * FROM sysobjects WHERE TYPE = 'U'

Now here is our table variable; make sure to note its Object ID. We will next grab the page which holds the data and examine it.

SELECT allocated_page_file_id, allocated_page_page_id, *
FROM sys.dm_db_database_page_allocations(2, NULL, 1, NULL, 'DETAILED')
WHERE object_id NOT IN (SELECT id FROM sysobjects WHERE type IN ('S','IT','SQ'))
  AND page_type = 1
  AND object_id = -1098578155

In my run, the output of the first page allocation query showed Page 306 allocated for data, and the next step is to examine it. For this task, we will use DBCC PAGE and print the output.

DBCC TRACEON(3604)
GO
DBCC PAGE(2,1,306,3)
GO
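DBCC PAGE is undocumented, so treat this as a sketch: adding WITH TABLERESULTS returns the same dump as a result set (and does not require trace flag 3604), which is handy if you want to filter or save the output.

-- A sketch: the same page dump as a queryable result set.
DBCC PAGE(2, 1, 306, 3) WITH TABLERESULTS
GO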

This confirms our understanding that table variables are also written to TempDB and can very much be viewed during their existence, just like any normal table. When I showed this to my friend, he was pleasantly surprised. Do let me know if you have ever had a chance to view and debug table variables like this.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Configuration and Performance of SQL Server is Now Easy to Master

This is one of the toughest topics to address, and I am always looking at new tools that can help me learn and understand SQL Server better. In many cases, I fall back on MSDN documentation and official statements from the Microsoft site, because a lot of R&D has generally been done by them already. Though I learn about these recommendations, I make it a point to try them out within my limited capacity for testing. I am always of the opinion: “Accept, yet validate.” This reinforces our understanding, and we never forget the concepts thereafter.

Coming back to the topic: what is the tool that you use to check the configuration settings on your SQL Server box? In the past I have talked about SQL Server Management Studio standard reports and a number of reports that can help. Today, let me tell you about a tool that has helped me learn something new every now and then – it is called Microsoft SQL Server 2012 Best Practices Analyzer. As I said before, since it comes from Microsoft directly, I use the recommendations and try to learn from the tool. The documentation is crisp and neat too.

As the name suggests, it is a guideline to best practices recommended by Microsoft. This tool can help analyze the configuration settings of your SQL Server box for:

  • Analysis Services
  • Database Engine
  • Integration Services
  • Replication Settings
  • Reporting Services
  • Setup

Let me talk about some of the nice rules available for the Database Engine. These are just a subset of the hundreds of rules available in the tool. I am calling out a few of them here for reference:

  • It is a security best practice to know if there is a DDoS attack on the server by auditing failed logins. Failed login auditing is enabled by default, but the rule makes sure there is a check in case it was disabled by an administrator.
  • I have not been a fan of auto-growth happening on the server. Auto-growth is a safety net, but administrators need to make sure the files are grown ahead of time during a maintenance window. The rule checks if auto-growth failed for some reason or if it took longer than expected. The answer to this is enabling Instant File Initialization.
  • Placing data and log files on the same volume is never a great idea. It is recommended to place these on different physical drives.
  • If a database is not enabled with CHECKSUM, it will be difficult to identify IO-related inconsistencies. Read more in the KB. The rule makes sure this is enabled at the DB level (a manual T-SQL check is sketched after this list).
  • Having too many VLFs can be a problem. SQL Server 2012 does put some data into the Error Log, but this rule does a check and lets us know.
  • It is a security risk to grant more than read permissions on the BINN folder inside SQL Server. We don’t want the binaries to vanish or get corrupted by malicious users. The same is the case for the DATA folder where we place our system files.
  • If password policies are not enforced in the system, the rule raises an alert. It is a good practice to have password expiration policies enabled on the server.
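Taking the CHECKSUM rule above as an example, here is a minimal T-SQL sketch of my own (not part of the tool) that flags databases whose page verification option is not CHECKSUM:

-- A sketch: databases where PAGE_VERIFY is not set to CHECKSUM.
SELECT name, page_verify_option_desc
FROM sys.databases
WHERE page_verify_option_desc <> 'CHECKSUM'
GO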

The list of rules could keep going on. As I discussed earlier, these are great learning resources for us. We can read up on each of the rules and then do our own R&D to learn more.

Monitoring performance using the Database Performance Analyzer

Moving now from standard configuration settings to the topic of performance. There is no single starting point when it comes to troubleshooting performance inside any database. When it comes to performance tuning, you need to think differently. With complex systems built on top of databases, you need something that is built radically differently. SQL Server introduced a concept called Wait Stats, and it has been a popular mechanism for many to get a macro, high-level understanding of the various waits inside SQL Server. In fact, I have written a book on Wait Stats based on the 28-day series on Wait Stats on my blog.
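If you have never looked at raw wait statistics, here is a minimal sketch of where the data comes from; tools in this space build their analysis on top of this DMV:

-- A sketch: top accumulated waits since the last restart
-- (or since the wait stats were last cleared).
SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC
GO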

From what I have seen in Database Performance Analyzer from SolarWinds (you might also know it by its former name, Confio Ignite), it makes extensive use of these waits. Using waits as an indicator of server health is something unique I saw in this tool. If you are a DBA and want to tune your systems from the inside out, then it is worth taking note of this tool. Wait-based reports are embedded almost everywhere. Let me start by looking at the home dashboard. It is crisp and to the point for high-level information like CPU, Memory, Disk, and Waits. If there is something to be worried about, a warning symbol indicates the areas of concern. We will look at some of them as we start exploring the tool.

From the main dashboard, if we select the server that we are monitoring, we will be presented with the above screen. It is a great way to look at the trends of various statements and the waits happening inside the system. What surprised me is the way waits are also tracked for Oracle, Sybase, Oracle (RAC), and DB2 servers from the same console. The enterprises of today are heterogeneous environments, and tools like these that give a single-pane view of waits are great to have.

The Trends graph shows the waits and the maximum time taken by batches of queries. The Advisor pane at the bottom was the icing on the cake; it shows the query that consumed the maximum CPU or memory on the server. Clicking the “more…” button, we can view the query and take corrective measures to reduce its impact. Let us look at one such recommendation below:

Here in the details pane, we can see the query that was executed, and possible recommendations for a better plan are presented. In this case, we get information about possible “Missing Indexes” for the given query. As a DBA, it is almost impossible to go to each query and find what the missing indexes are. Or worse, in SQL Server we do get a list of missing indexes, but it fails to tell us which queries will benefit. This screen is a classic example of both coming together. A nifty addition for sure. The bottom pane shows the various waits that affect the query, or we can get a day view to see what the top waits are.

In the above view, I have taken a historical trend of a typical day to see what is causing possible waits inside our systems. The fact that we have some sort of CPU contention is quite evident from the consistently red bar.

Who doesn’t like a little help from the system? Above is a typical example where we see high WRITELOG waits; a junior DBA looking at the system may not know where to start. On clicking each of the waits, we are presented with simple documentation steps on what one can do to mitigate these waits. Though these recommendations are given, I highly encourage you to consult your senior DBA before taking any steps.

The storage trends were a nice visualization for me. The snapshot shows the top IO users and orders them accordingly. Here you can see the tempDB ldf file seems to be under contention. The data shows a sparkline trend of read/write latency.

DPA also has information on standard performance counters, grouped by resource. This is great basic information, and for a seasoned DBA, it can be an awesome view to have from a single interface.

These are standard performance counters viewed from the comfort of a web browser, so a DBA can watch what is happening inside the server. One can set thresholds for the various groupings and also configure alerts. These, in my opinion, make a DBA significantly more proactive rather than reactive when a problem occurs.

Conclusion

If you are looking for detailed performance analysis but lack the time and/or knowledge to decipher wait stats and server resource metrics, this tool will not disappoint you. It gives you pointers to work with the essential performance characteristics. Do let me know your experience working with Database Performance Analyzer; I would surely like to learn your perspective on using this tool.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – The Easy Functions of SQL Server – Notes from the Field #062

[Note from Pinal]: In this episode of the Notes from the Field series, database expert Kathi Kellenberger explains the easy and amazing functions of SQL Server. Kathi is an amazing instructor; she was the SQL author I read early in my career. The reason I love SQL Server is that her writing instilled a love for this technology in me. Today she brings up a topic relatively unknown to database experts. Read the experience of Kathi in her own words.


Have you ever tried to format a datetime value in T-SQL? I have seen some pretty crazy code that looked something like this:

DECLARE @date DATETIME = GETDATE();
SELECT CAST(YEAR(@date) AS CHAR(4)) + '/' +
RIGHT('0' + CAST(MONTH(@date) AS VARCHAR(2)),2) + '/' +
RIGHT('0' + CAST(DAY(@date) AS VARCHAR(2)),2) + ' ' +
RIGHT('0' + CAST(DATEPART(HOUR,@date) AS VARCHAR(2)),2) + ':' +
RIGHT('0' + CAST(DATEPART(MINUTE,@date) AS VARCHAR(2)),2);

Starting with SQL Server 2012, there is an even easier method: the FORMAT function. The FORMAT function can do the same thing as the previous code, but it is so much easier to write. It is one of five functions added with SQL Server 2012 that I like to call The Easy Functions, listed in the table below.

The Easy Functions

Category | Function | Definition
Logical  | CHOOSE   | Select one expression from a list
Logical  | IIF      | Inline IF. Evaluate an expression and substitute another expression for true or false
String   | CONCAT   | Add strings together
String   | FORMAT   | Format a string including “C” for currency and “P” for percent
Date     | EOMONTH  | Returns the last day of the month
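Before looking at each function in detail, here is how the datetime example above collapses with FORMAT. A quick sketch; the format string is my assumption, chosen to match the output of the original code:

DECLARE @date DATETIME = GETDATE();
-- One call replaces all the CAST/RIGHT string surgery above.
SELECT FORMAT(@date, 'yyyy/MM/dd HH:mm');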

The following script demonstrates the logical functions.

USE AdventureWorks2014; --Or 2012
GO
SELECT  CustomerID, COUNT(*) AS OrderCount,
CHOOSE(COUNT(*), 'Bronze','Silver','Gold','Platinum') AS MemberLevel,
AVG(TotalDue) AS AvgOrder,
IIF(AVG(TotalDue) > 1000, 'High','Low') AS Priority
FROM Sales.SalesOrderHeader
WHERE CustomerID IN (11060, 11062, 11065, 11068, 11070, 12165)
GROUP BY CustomerID;

The query returns the count of orders for several customers. A MemberLevel is assigned based on the number of orders the customer has placed, and a Priority is assigned based on the average order. If the average order is over $1000, then the customer has a high priority.

Take a look at the CHOOSE expression. The first argument is an integer, the count of the orders. After the order count, there is a list of values. The list of values is actually a 1-based array, and the first argument is an index into the array.

The IIF function requires three arguments. The first argument is an expression to check, the average order. The second argument is a value to return if the expression is TRUE. The third argument is a value to return for FALSE.

The next example demonstrates the string functions.

SELECT C.CustomerID, P.FirstName, P.MiddleName, P.LastName,
CONCAT(P.FirstName, ' ' + P.MiddleName, ' ', P.LastName) AS FullName,
FORMAT(SUM(TotalDue),'C') AS TotalSales,
FORMAT(SUM(TotalDue)/SUM(SUM(TotalDue)) OVER(),'P') AS PercentOfSales,
FORMAT(MIN(OrderDate),'yyyy.MM.dd') AS FirstOrderDate
FROM Sales.SalesOrderHeader AS SOH
JOIN Sales.Customer AS C ON SOH.CustomerID = C.CustomerID
JOIN Person.Person AS P ON P.BusinessEntityID = C.PersonID
WHERE C.CustomerID IN (11060, 11062, 11065, 11068, 11070, 12165)
GROUP BY C.CustomerID, P.FirstName, P.MiddleName, P.LastName;

The query uses the CONCAT function to build a FullName column. The CONCAT function ignores NULL values. Notice that rows 3 and 4 returned the FullName even though the MiddleName values are NULL. To make sure an extra space doesn’t show up when the MiddleName is missing, the space is combined with MiddleName as one of the arguments. NULL added to a space is NULL, which will be ignored.
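A tiny sketch of that behavior in isolation: the middle argument evaluates to NULL, CONCAT skips it, and no stray space appears.

-- Returns 'John Smith', because ' ' + NULL is NULL and is ignored.
SELECT CONCAT('John', ' ' + NULL, ' ', 'Smith');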

There are three examples of FORMAT. The first example formats TotalSales as currency, in this case USD. The next example, PercentOfSales, formats a complex expression comparing each customer’s sales to the overall total. It uses a window aggregate function to calculate the total sales for the query. (Window aggregates are a great topic for another post!) The final example formats the FirstOrderDate in a custom format.

Here is one last easy function: EOMONTH.

SELECT CustomerID, MIN(OrderDate) AS FirstOrderDate,
EOMONTH(MIN(OrderDate)) AS LastDayOfMonth,
EOMONTH(MIN(OrderDate),1) AS LastDayOfNextMonth
FROM Sales.SalesOrderHeader
WHERE CustomerID IN (11060, 11062, 11065, 11068, 11070, 12165)
GROUP BY CustomerID;

The EOMONTH function returns the last day of the month for the supplied date argument. The second example, LastDayOfNextMonth, uses the optional parameter that adds another month.

I hope that using The Easy Functions will make your life easier!

If you want to get started with BIML with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to Increase Number of Errorlog Files

A while back, I had written a blog on – SQL SERVER – Recycle Error Log – Create New Log file without Server Restart. Recently one of my blog readers emailed me this question:

Hi Pinal,
In our SQL Server, we are logging successful logins to the ERRORLOG, and due to this the file size keeps increasing. I have read your blog about recycling the error log. I have put a job in SQL Agent to recycle the logs every midnight. No problem so far. In the near future, we will have an audit in our company, and as per the auditors’ requirement, they want to have ERRORLOGs worth 30 days (to check login successes/failures). Since I have the midnight job and we have six archived errorlogs (ERRORLOG.1 to ERRORLOG.6), that would cover the last 6 days only.

To solve the problem, I can put in a copy step to move the file somewhere else before the midnight recycle. But then I also need logic to delete files older than 30 days. Is there any way to achieve this in SQL Server without much effort?

Regards,
<Name Hidden>

I thought I had a blog covering this answer, but surprisingly, I couldn’t find anything on my blog. So here are various ways to achieve it.

Using SSMS

Once we connect to the SQL instance via SSMS, we can go to the “Management” node, right-click on “SQL Server Logs”, and choose “Configure” as shown below.

Once we click on Configure, the checkbox shown below is unchecked by default and the value is shown as 6. That’s the reason we have files up to ERRORLOG.6.

We can check the box and put the desired value in the field. Based on the daily errorlog recycle my blog reader had and the 30-day requirement, the value can be set to 30 to keep his auditors happy.

Using T-SQL

If you are not a fan of the UI, then below is the T-SQL which can be used to make the same change.

USE [master]
GO
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'NumErrorLogs', REG_DWORD, 30
GO

The above essentially changes the registry key called NumErrorLogs for this instance. Notice that we are invoking xp_instance_regwrite, which doesn’t take instance-related details, as it detects the exact key internally. For my named instance “SQL2014”, the exact key would be

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL12.SQL2014\MSSQLServer as shown below
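To verify the change without opening the registry, the companion procedure xp_instance_regread (also undocumented, so treat this as a hedged sketch) can read the value back:

USE [master]
GO
-- A sketch: read NumErrorLogs back to confirm the new value.
DECLARE @NumErrorLogs INT
EXEC xp_instance_regread N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'NumErrorLogs', @NumErrorLogs OUTPUT
SELECT @NumErrorLogs AS [NumErrorLogs]
GO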

Hope this helps you in the real world to keep more errorlogs than the default, if that is a requirement from your DBA, network, or auditor teams.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Visibility into the Overall Health of Your SQL Server Environment – You Can’t Fix What You Can’t See

I am sure most of us have experienced driving a four-wheeler. The first time I went behind the wheel, it was an amazing experience. My excitement was multiplied by the tension on my father’s face. After a while, I could see a sigh of relief and confidence in his eyes. This is a classic case of us being pampered, guided and always shadowed by our loved ones. I loved the learning experience then. The real challenge came much later. One day, while I was driving to the office, the car stopped in the middle of the road. I was clueless as to what the problem could be. The human instinct was to look at the dashboard for some signals. I checked the fuel level and oil level indicators, and looked for any indicator that could guide me in finding the problem. It was time to call the experts, I decided. A lot of times, we are faced with situations where the health of the components cannot be diagnosed easily – what can we do?

The above analogy is just an example of what happens inside SQL Server too. If we don’t know or cannot see the problem at hand, how can we fix it, or even worse, how do we know a problem exists in the first place? In the recent past, I was quizzed with a similar problem and didn’t know what course of action to take.

During one of my chat sessions with a DBA friend, he said – “most of the testing and integration environments are managed by junior DBAs, and seniors handle only the production environments.” He explained that there was a recent incident that caught his attention, and he wanted to know if there is a way to solve the problem at hand.

This got me curious about what had really happened. He said one of the test environments crashed just a couple of days before a major release cycle. The testing team goes ahead and creates databases for a release cycle and loads tons of relevant data. After the crash, the DBAs didn’t have any log backup, because the backups were failing and nobody monitored them. The DBA said the maintenance plans were in place, yet they had failed.

The junior DBA got curious and asked him this question: there must be something fundamentally wrong here.

FULL recovery acts like SIMPLE

Sometimes we need to understand the basics first. Even though a database is set to the FULL recovery model, it acts like the SIMPLE recovery model until the first full backup is taken. This is a well-known concept. To illustrate it, let me take a log backup of a newly created database. The T-SQL looks like:

BACKUP LOG [Bigdata] TO DISK = N'C:\Backup\Bigdata.bak'
WITH NOFORMAT, NOINIT,
NAME = N'Bigdata-Log Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO

This will throw error 4214, shown below. This is the exact error they were getting too.

Msg 4214, Level 16, State 1, Line 1
BACKUP LOG cannot be performed because there is no current database backup.
Msg 3013, Level 16, State 1, Line 1
BACKUP LOG is terminating abnormally.

The error is self-explanatory: we don’t have a database backup yet, and taking one will solve the problem. Following the GUI steps yields the same error in a message dialog.
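The fix is straightforward: take a full backup first, after which the log backup above succeeds. A sketch (the backup path here is just an assumption):

-- A sketch: the first full backup, after which log backups work.
BACKUP DATABASE [Bigdata] TO DISK = N'C:\Backup\Bigdata_Full.bak'
WITH NAME = N'Bigdata-Full Backup', STATS = 10
GO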

Now that my friend understood this concept, he wanted to be vigilant and proactive in solving this problem. There is no way he can track, by hand, all the databases created in the test and integration environments. He wanted to know if there is a way to know this ahead of time; one manual option is sketched below, before we look at tooling.
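Here is a hedged T-SQL sketch of such a manual check, assuming the backup history in msdb is intact: it lists databases in FULL recovery that have never had a full backup recorded, i.e., the ones still acting like SIMPLE.

-- A sketch: FULL-recovery databases with no full backup in msdb.
SELECT d.name, d.recovery_model_desc
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
       ON b.database_name = d.name
      AND b.type = 'D' -- 'D' = full database backup
WHERE d.recovery_model_desc = 'FULL'
GROUP BY d.name, d.recovery_model_desc
HAVING COUNT(b.backup_set_id) = 0
GO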

Overall health using Spotlight for SQL Server

There is no way one can keep track of databases that are created randomly by the application / testing team every single day. If you have a monitoring tool like Spotlight, this task becomes easy for sure. Yes, we can make it a deployment standard that everyone needs to take a full backup as soon as a database is created. But this requires constant monitoring and discipline from users and script writers.

In the example below, I have gone ahead and pointed Spotlight at a newly built server. As soon as this happens, the heatmap view shows us some critical data which needs attention and acknowledgement. Here you can see the tooltip suggests – “Full backups are missing for few databases”. I felt this was a nifty suggestion that can be a lifesaver for many.

On double-clicking it, we are taken to the server home page, where we can look at the databases node (marked in red) and get detailed information on which databases are pending backup.

The learnings here are twofold. First, we understood the principle that a log backup is not available until the first full backup happens. Second, the FULL recovery model acts like the SIMPLE recovery model until that first full backup is taken. To mitigate the risks in business-critical environments, it is important to have some monitoring capability like Spotlight for SQL Server.

Try Spotlight on SQL Server free for 30 days!

Spotlight on SQL Server continues to break new ground when it comes to providing DBAs with the ultimate in flexibility to suit their specific needs. Building on the success of the Wait Stat analysis workflow, Spotlight on SQL Server version 11.0 now includes multi-dimensional workload analysis to provide maximum insight into performance issues, an enhanced mobile capability to remotely diagnose issues directly from mobile phones and the introduction of SNMP traps. A fresh new look provides simple to use, powerful visualization to effectively monitor the health of your SQL Server environment.

If you’re looking for additional help on how to optimize your SQL Servers, visit Dell Software’s SQL Server Resource Center for videos, guides and other useful content.

Reference: Pinal Dave (http://blog.sqlauthority.com)