SQL SERVER – Can Database Primary File Have Any Other Extension Than MDF

Here is an interesting question I received yesterday. I personally think it is one of the most interesting questions I have received this month.

“Can Database Primary File Have Any Other Extension Than MDF?”

Before you read the answer – think for a moment.

Now read the answer in the next line.

Yes, of course. The MDF extension for the primary data file of the database is just a common convention; essentially, you can use any other file extension as well.

Here is an example where I am able to attach a file with the PDF extension to the database. Please note that this file was created by SQL Server earlier and was always used as a primary data file.

USE [master];
CREATE DATABASE [tests] ON
( FILENAME = N'D:\data\tests.pdf' ),
( FILENAME = N'D:\data\tests_log.ldf' )
FOR ATTACH;

As you can see, the file with the other extension was successfully attached to the database. If you want to practice this example, you can download the sample files by clicking over here.
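To see the same behavior end to end, you can also create a brand new database whose primary data file uses a non-MDF extension. This is a minimal sketch I am adding for illustration; the database name, paths and sizes are hypothetical, not from the original post:

```sql
-- Hypothetical example: a primary data file with a .pdf extension
CREATE DATABASE [ExtensionDemo]
ON PRIMARY
( NAME = N'ExtensionDemo_data', FILENAME = N'D:\data\ExtensionDemo.pdf', SIZE = 8MB )
LOG ON
( NAME = N'ExtensionDemo_log', FILENAME = N'D:\data\ExtensionDemo_log.ldf', SIZE = 8MB );

-- Verify the physical file name SQL Server recorded
SELECT name, physical_name
FROM sys.master_files
WHERE database_id = DB_ID('ExtensionDemo');
```

SQL Server identifies the primary file by its position in the file list, not by its extension, which is why this works.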

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Download RML Utilities for SQL Server

Microsoft RML Utilities are very helpful when analysing performance. They help answer questions like:

  • Do you know which databases are using the most resources on your server?
  • Do you know which applications are using the most resources on your server?
  • How will a service pack upgrade, configuration change or application change affect your production SQL Server?
  • Which application, database or login is using the most resources?
  • Which queries are responsible for high resource utilization?
  • What queries are running slower in today’s workload?

I believe it is a good tool and worth giving a try.

Download RML Utilities for SQL Server

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Monitoring SQL Server Never Got This Easy

In administering SQL Server databases, the DBA is often presented with the task of identifying resource bottlenecks on the system and the sources of those bottlenecks. The DBA is also challenged with identifying why specific queries take significant time to execute or affect the performance of other queries on the system. This allows the DBA to find opportunities to tune queries and improve their performance and their overall impact on the system. In this blog, we will explore how to identify whether any disk I/O bottlenecks are present that impact query performance on the system. Such findings would allow a DBA to deal further with the I/O bottleneck.

Download Diagnostic Manager if you want to follow along with the example demonstrated in this blog post.

Introducing Extended Events for IO monitoring

The application users have worked with the System Administrator on slow-performing application issues. They suspect that slow disk I/O is impacting the performance of database queries. You may be asked to verify if this is the case, and you can use SQL Server Extended Events to find out if any queries are experiencing disk I/O waits.

CREATE EVENT SESSION [XEvent_IO_Tracker] ON SERVER
ADD EVENT sqlos.wait_info(ACTION(sqlserver.session_id,sqlserver.sql_text)
WHERE (([package0].[equal_uint64]([wait_type],(68)) OR [package0].[equal_uint64]([wait_type],(66))
OR [package0].[equal_uint64]([wait_type],(67)) OR [package0].[equal_uint64]([wait_type],(182)))
AND [package0].[greater_than_uint64]([duration],(0)) AND [package0].[equal_uint64]([opcode],(1)) AND [sqlserver].[is_system]=(0)))
ADD TARGET package0.event_file(SET filename=N'C:\Temp\XeventIObottleneck.xel');
GO
ALTER EVENT SESSION [XEvent_IO_Tracker] ON SERVER STATE = START;
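The numeric wait_type keys in the predicate above (66, 67, 68, 182) can be decoded with the sys.dm_xe_map_values DMV. On typical builds these keys map to I/O-related waits, but the exact mapping varies by SQL Server version, so it is worth checking on your own instance:

```sql
-- Decode the numeric wait_type keys used in the event session predicate
SELECT map_key, map_value
FROM sys.dm_xe_map_values
WHERE name = 'wait_types'
  AND map_key IN (66, 67, 68, 182);
```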

Let’s verify that the session is actively running and capturing data by executing the following DMV query:

SELECT * FROM sys.dm_xe_sessions WHERE name = 'XEvent_IO_Tracker';

The next step is to induce an IO operation and capture using our XEvent. Open a New Query window and copy and paste the following T-SQL command.

USE AdventureWorks2012
SELECT COUNT_BIG(*) FROM [Production].[ProductInventory] a
CROSS JOIN [Production].[ProductInventory] b
CROSS JOIN [Production].[ProductInventory] c
CROSS JOIN [Production].[ProductInventory] d
WHERE a.Quantity > 300

Once the query completes, go back to the other query window and execute the following command to stop the Extended Event session:

ALTER EVENT SESSION [XEvent_IO_Tracker] ON SERVER STATE = STOP;

In Object Explorer, expand the plus next to XEvent_IO_Tracker and right-click on the package0.event_file and select View Target Data:

In the above example, I used the “View Target Data” feature of Extended Events to look at the captured data. We can see the type of wait and the actual query too.
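If you prefer T-SQL over the UI, the same event file target can also be read programmatically. A minimal sketch, assuming the event file path used when the session was created:

```sql
-- Read the captured events back from the event_file target as XML
SELECT CAST(event_data AS XML) AS event_xml
FROM sys.fn_xe_file_target_read_file(
         N'C:\Temp\XeventIObottleneck*.xel', NULL, NULL, NULL);
```

From here you can shred the XML with the value() method to pull out the wait type, duration and captured SQL text.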

This is a great start, but resource constraints inside SQL Server can be of various types, and IO is just one of those resources.

Diagnostic Manager does more than the eye can see

There are a number of standard tools, like PAL, that can be used for monitoring SQL Server. I took a look at the new Diagnostic Manager from Idera to understand what it does. It has been like a Pandora's box from a monitoring point of view. Some of the things that struck my eye are:


When it comes to active server monitoring, it is important to understand that things can go wrong. We are always dealing with limited resources: CPU, memory, IO, throughput, network, etc. As intelligent DBAs, it is critical for us to know when the CPU is peaking, when the backups happen, when the services stop abruptly, when log files expand and more. I found most of these are already configured, readily available and easy to extend using configuration and templates.

I quite like that starter out-of-the-box templates are readily available to get started with this tool.

Standard OOB Reporting

The next standout I found was a set of standard reports available inside SQL Server Management Studio: more than 30 reports neatly categorized under “Server”, “Virtualization” and “Activity” headings.

I will not get into each of these reports, but features like forecasting based on existing and historical data caught my eye and are worth a mention. With limited disk capacity, it is critical to know when to increase the hardware configuration based on usage patterns. With SQL Server 2014, it would be valuable to add the capability to account for In-Memory table sizing and how memory needs to be budgeted. If these get added, this can be a great asset for DBAs running on the latest versions.
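Even without a third-party tool, you can get a rough growth trend from backup history in msdb. A minimal sketch I am adding for illustration; it assumes regular full backups are taken, so backup sizes approximate database size over time:

```sql
-- Approximate database growth trend from full backup sizes in msdb
SELECT database_name,
       CONVERT(date, backup_start_date) AS backup_date,
       backup_size / 1048576.0          AS backup_size_mb
FROM msdb.dbo.backupset
WHERE type = 'D' -- full backups only
ORDER BY database_name, backup_start_date;
```

Plotting backup_size_mb over backup_date gives a crude but serviceable capacity forecast input.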

Rich Dashboard

I think this is the strength of this tool. It provides preconfigured values for known standard resources and gives us the flexibility to create additional counters and values to monitor. This is like opening a can of worms for many. But I am pleasantly surprised that the standard templates are exhaustive enough to start with.

I wish these standard templates had some additional recommendations for monitoring more complex environments, like SQL Server AlwaysOn with multiple AGs' health, Database Mirroring status, Cluster Resources health and basic activities like backup operations on databases. Though simple tasks like disk monitoring, PLE, memory, CPU, network, waits etc. are quite essential, the servers of today are doing way more work than meets the eye.

Monitor with guidance

Who doesn’t require some help when it comes to monitoring workloads like SQL Server? I personally liked the concept of reference range values when working with SQL Server performance counters.

Instead of trying to monitor, track and look at each performance counter myself, I would love to see how I can bring my own counters and ranges from tools like PAL in the future. The ability to import, and if required export, can be of great help.

Some of the salient points that caught my eye and are worth a mention in the new version are:

  • Monitoring of both physical and virtual environments of SQL Server.
  • Ability to monitor queries from a specific application.
  • Ability to look at historical data to troubleshoot past issues on the server.
  • Tracking of database growth so that we can plan server capacity proactively.
  • Ability to view real-time and historical SQL Server performance data on mobile devices like iPhone, Android, Blackberry or tablets.
  • Management pack available for Microsoft's System Center Operations Manager (SCOM).
  • TempDB monitoring, CPU, memory, wait stats and more.

You can download Diagnostic Manager from here.


This blog post is to show you that much of the SQL Server work that we do as DBAs is finding a needle in a haystack. Trust me, with tools like Idera's Diagnostic Manager, we have the information we need at hand, yet there is more to explore, enhance and extend beyond what the tool gives out of the box. The best learning experience one can get is by knowing all that the tool does and trying to understand how some of these capabilities could be implemented ourselves.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Hey DBA – Baselines and Performance Monitoring – Why? – Notes from the Field #058

[Note from Pinal]: This is the 58th episode of the Notes from the Field series. Mike Walsh is an amazing person to meet and know. I am fortunate to be his friend. Every single time I meet him, I learn something as a human. He is always looking for an opportunity to improve the life of DBAs and Developers.

In a recent communication with Mike, I asked him: what actually are baselines and performance monitoring? How do we define what is normal and what is not? Where do we draw the lines when we talk about performance? Database expert Mike Walsh decided to help us with the answer to this question.

Read the entire story in his own words.

The last time I posted here on SQL Authority, I suggested that DBAs make a proactive investment in their environments.

I can’t stress this enough – procrastination is our enemy as technologists. Especially as DBAs. We have things we should do on a regular basis to keep our environments moving along. I gave a few suggestions of places to start in that post three months ago.  If you’ve not started down the road on those things, stop reading this and do those things first.

If you’ve started being proactive in those areas already? Keep reading and let’s talk about being proactive in another area:

Baselines and Performance Monitoring – Why?

It’s really important to know what “normal” looks like for a lot of reasons when it comes to performance in your environment.

The answers to these questions all require you to know what normal looks like:

  • When will this environment exhaust the resources available in our current setup?
  • Things are slow – what’s the cause?
  • Can we consolidate onto fewer servers?

Without having a sense of what normal looks like for server performance – you can’t accurately answer those questions. For some of them you need to watch what normal looks like over time. And normal can look different on different servers.

Regularly looking at your performance metrics and comparing them on a regular basis is an important step in being proactive. The process will teach you about your environment and let you see trends develop over time. It will also teach you about SQL Server performance more as you dig into the various counters and understand what they mean.

Baselines and Performance Monitoring – An Approach

There are a lot of approaches to take here. You can run your favorite perfmon counters on a regular basis and look at the results in Excel. You can use a performance and event monitoring tool like SQL Sentry – and look at the data it collects over time. Or any number of approaches in between.
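If you want a lightweight, tool-free starting point before adopting PAL or a commercial monitor, you can periodically snapshot SQL Server's own performance counters into a table. This is a sketch I am adding, not one of Mike's methods; the table name and counter choices are illustrative:

```sql
-- One-time setup: a table to hold baseline snapshots
CREATE TABLE dbo.PerfBaseline (
    capture_time   DATETIME      NOT NULL DEFAULT GETDATE(),
    object_name    NVARCHAR(128) NOT NULL,
    counter_name   NVARCHAR(128) NOT NULL,
    cntr_value     BIGINT        NOT NULL
);

-- Run this on a schedule (e.g., via a SQL Agent job) to build up a baseline
INSERT INTO dbo.PerfBaseline (object_name, counter_name, cntr_value)
SELECT RTRIM(object_name), RTRIM(counter_name), cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Page life expectancy',
                       N'Batch Requests/sec',
                       N'User Connections');
```

Note that counters like Batch Requests/sec are cumulative in this DMV, so deriving a rate requires differencing consecutive snapshots.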

While I often advise my consulting clients to use several of these approaches – a good place for you to start learning and doing is with a free tool called PAL – Performance Analysis of Logs. This tool is available on Codeplex and I describe how to use this tool in a couple blog posts and videos on the Linchpin People blog (http://www.linchpinpeople.com/sql-server-performance-baseline-presentation-seacoast-sql/ is a presentation I give on baselines and baseline metric collection; http://www.linchpinpeople.com/how-create-sql-server-baseline-using-pal/ shows a couple videos on how to use the PAL tool and create a perfmon template).

Rather than regurgitate what those resources, and the posts I link to in them, describe, I'll suggest a few action items that you can follow right now. Those links will help you on your journey here. Even if your environment has no issues, the baseline can prove valuable to you in the future, and the process is guaranteed to teach you something about SQL Server performance you didn't know before you started. I've been working with SQL Server for 16 years and I still learn something every time I engage in a serious baseline study.

Baselines and Performance Monitoring – Action Plan

Some suggested next steps you can take to get on the road to being proactive about performance:

  • Learn about the PAL tool – use the links here to start.
  • Collect “Fake” Performance Data – Start with a test environment if you aren’t sure, watch the impact and get comfortable with the process.
  • Collect Real Performance Data – Once you are comfortable with the approach, do the steps in production. Track data for a day, a couple of days or a week, depending on the sample interval you go with.
  • Analyze The Data – Run the performance metrics through the PAL tool and look for alerts. Keep in mind that not all alerts require immediate action, but look at the alerts and learn about them. Read the tips in the PAL tool, look at blogs like this one to understand what is going on and see if you have issues.
  • Rinse and Repeat – Try this again in a month or a few months. What’s changed? Are you doing more activity (maybe batches/second or transactions/sec or user count) and seeing your resources still performing well? That’s great. Do you see performance slowing down but activity not really that much higher? Dig in and see why you are falling off your baseline.

The point is – get started looking at your baselines. You may not have an urgent and immediate need for a good set of baselines today – but when that day comes, you'll wish you had taken the time to collect that data. I can't tell you how often we get called in to help a customer experiencing performance concerns where a baseline would have helped us get a sense of what changed from when things were good. We can still help get them out of their mess all the same – but if we had a few baselines over time, we could sometimes more quickly figure out where the most beneficial changes would come from.

If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Fixing Backup Error – Operating system error 5(Access is denied.) – SQL in Sixty Seconds #077

This error has to be one of the most famous errors. I have installed SQL Server quite a few times, but I keep on getting this error once in a while. This is also one of the most searched errors online. I have previously blogged about this, but today I planned to create a small video on the same subject. In this video I demonstrate how one can resolve the permissions issue behind this error.

Here are a few other blog posts related to this error, which I have written earlier.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – List the Name of the Months Between Date Ranges

Here is another interesting question I received the other day.

“How to list the name of the months between two date ranges?”

Very interesting question. I had no script ready for it, so I asked my friend, who used to be my co-worker, and he sent me the script listed below.

DECLARE @StartDate DATETIME
       ,@EndDate   DATETIME;
SELECT @StartDate = '20140301' -- March
      ,@EndDate   = '20140901'; -- September
SELECT DATENAME(MONTH, DATEADD(MONTH, nos.monthnos, @StartDate)-1) AS MonthName
FROM (SELECT 1 AS monthnos UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6
      UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9 UNION ALL SELECT 10 UNION ALL SELECT 11 UNION ALL SELECT 12) nos
WHERE nos.monthnos <= DATEDIFF(MONTH, @StartDate, @EndDate)+1;

The above script returns the names of the months between the two dates. Let me know if there is any other way to achieve the same.
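Since the question invites alternatives, here is one other way: a recursive CTE. This is a sketch I am adding, not from the original post; the MAXRECURSION hint guards against runaway date ranges:

```sql
-- List month names between two dates using a recursive CTE
DECLARE @StartDate DATETIME = '20140301',
        @EndDate   DATETIME = '20140901';
WITH months AS (
    SELECT @StartDate AS dt
    UNION ALL
    SELECT DATEADD(MONTH, 1, dt) FROM months WHERE dt < @EndDate
)
SELECT DATENAME(MONTH, dt) AS MonthName
FROM months
OPTION (MAXRECURSION 120);
```

For the sample dates, this returns March through September, matching the output of the script above.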

Please note: This blog post is modified based on the feedback of SQL Expert Sanjay Monpara. Thank you Sanjay!

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Fix Error – Cannot open backup device. Operating system error 5(Access is denied.)

This has to be the most popular error in taking database backups. There are many reasons for this error and there are different ways to resolve it. I recently faced this error while taking a backup and found a very interesting solution to the problem. The error was as mentioned below:

Msg 3201, Level 16, State 1, Line 1
Cannot open backup device ‘d:\Log\aw.bak’. Operating system error 5(Access is denied.).
Msg 3013, Level 16, State 1, Line 1
BACKUP DATABASE is terminating abnormally.

Personally, I know this error happens when:

  1. The directory d:\Log does not exist
  2. There is not enough space on drive d:\
  3. There are not enough permissions on the d:\Log folder

In my case, none of the above was the reason for the error. The error occurred because the file was read-only. There was already another backup file in the folder with the name aw.bak, and it was marked as read-only. For this reason, my backup was failing. Once I cleared the read-only checkbox, the backup was successful.
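If you prefer to fix this from within SSMS rather than Windows Explorer, the read-only attribute can also be cleared via xp_cmdshell before retrying the backup. A hypothetical sketch: it assumes xp_cmdshell is enabled (it is disabled by default), and the database name [aw] is my guess based on the aw.bak file name in the error message:

```sql
-- Clear the read-only attribute on the existing backup file, then retry
EXEC master..xp_cmdshell 'attrib -R "d:\Log\aw.bak"';
BACKUP DATABASE [aw] TO DISK = N'd:\Log\aw.bak' WITH INIT;
```

WITH INIT overwrites the existing backup sets in the file; without clearing the attribute first, the overwrite would still fail because Windows blocks writes to read-only files.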

Here are a few other blog posts related to this error, which I have written earlier.

Reference: Pinal Dave (http://blog.sqlauthority.com)