SQL SERVER – Why Haven’t You Disabled CPU Power Saving Yet?! – Notes from the Field #087

[Note from Pinal]: This is the 87th episode of the Notes from the Field series. Every week, I personally wait for the notes from the field from Mike because they contain life lessons that directly impact DBAs' and developers' lives in a positive way. Mike Walsh is an amazing person to meet and know. I am fortunate to be his friend. Every single time I meet him, I learn something as a human being. He is always looking for an opportunity to improve the lives of DBAs and developers.

Mike normally talks about something that impacts a client, impacts DBAs, or a professional development topic. However, in today's blog post Mike has something very interesting and very helpful to every DBA in the world.

Read the entire story in his own words.


Today's post is going to be a very short public service announcement that I shouldn't have to make, but part of my job as a consultant at Linchpin People is going to customers and performing WellDBA exams. A WellDBA exam is basically an extensive health check of a SQL Server environment. We look for small and simple changes to help performance, reliability, or manageability. We look for deeper changes that could be made to help in these areas, and we help clients see what sort of architecture and foundational decisions should be addressed to scale into the future.

We have a free checklist you can use to check your own servers as well. It describes some of our most common findings and helps you get on the road to having a healthy and scalable SQL Server environment. You can download it and review your own environment to make sure you are following best practices.

Power Saving?! Hasn’t the SQL Family Talked About That A Lot?!?

Yes! There are posts everywhere about it and why it isn’t great for SQL Servers. Even still – it is something we find on at least 70% of the WellDBA exams we provide to customers. These are customers of all sorts and sizes with all manner of DBA teams or accidental DBAs at the helm.

So… Yes. A reminder about this setting.

What’s This All About?

Processors today have power-saving settings. They can basically be told to operate at a lower performance profile to cut down on power costs. When the CPU gets really busy, performance may be ramped up, and then it ramps back down.

This sounds great in theory, but SQL Server isn't normally just pegging the CPU, so CPU performance may never go all the way up; and when it does, it often ramps up just as the demand period ends and the boost is no longer needed.

The default on most Windows Server installations is "Balanced" mode. This means that, out of the box, you are allowing Windows to underclock your CPUs and only increase their speed as/when/if needed.

Another way of saying it: you've taken your SQL Server instance, which you've licensed by CPU core nowadays, and forced it to run on lower-powered CPUs than you paid for when buying the server and when licensing SQL Server.

What can you do?

Four simple steps really:

  1. See if you are running in Balanced mode – go to Start, then Run, and type powercfg.cpl (or browse to Control Panel and then Power Options), and see whether you are running in Balanced or High Performance mode. You can also check from inside SQL Server, as shown in the sketch after this list.
  2. If running in Balanced mode, change it to High Performance.
  3. Consider making a group policy for your SQL Server machines to make sure they always run in High Performance mode.
  4. Celebrate that you are getting the performance you expected and paid for.
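
If you prefer to verify the active plan from inside SQL Server, here is a minimal sketch using the undocumented xp_regread procedure; treat it as a convenience check, not a supported API, and verify the GUIDs (the Windows defaults for Balanced and High Performance are shown in the comments) on your own builds.

-- Read the active Windows power scheme GUID from the registry (undocumented procedure)
DECLARE @value VARCHAR(64);
EXEC master.dbo.xp_regread
    @rootkey    = 'HKEY_LOCAL_MACHINE',
    @key        = 'SYSTEM\CurrentControlSet\Control\Power\User\PowerSchemes',
    @value_name = 'ActivePowerScheme',
    @value      = @value OUTPUT;
SELECT @value AS active_power_scheme;
-- 381b4222-f694-41f0-9685-ff5bb260df2e = Balanced (the Windows default)
-- 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c = High Performance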

That's it. I could pile on more. Over at the Linchpin blog, I put up a blog post a year ago that shows the impact of this setting and how to check it. There are more details there, but the important truth is right here in the article you are reading.

My plea is that you just check this setting yourself and fix it for yourself. Your queries won't suddenly finish in no time at all, but you could get a real boost in performance, sometimes even a 10-30% increase.

If you want to get started with performance analytics and Database Healthcheck of SQL Servers with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Steps to Generate Windows Cluster Log?

Every now and then I sit next to a DBA to learn some of their tricks, and one such recent learning is what I plan to share here. It is tough to learn everything all by ourselves, so learning is best done by sharing. There are many incidents with a SQL Server cluster where we need to know what is happening, and the best way to know the history of cluster-related events is to generate a cluster.log file. The event log has events related to failover clustering, but it is not very verbose. Typically, when a SQL resource doesn't come online, I look at the cluster log.

The command to generate the cluster log depends on the operating system. I am not an expert on clusters, but I generally use these commands to generate the log and see if there is any glaring error message. In Windows Server 2003, the cluster log was written automatically, but from Windows Server 2008 onwards it needs to be generated on demand. I think it was a good decision by Microsoft not to write this log all the time. Generating the cluster log is done using the cluster.exe command. Here are the steps:

  • From one of the nodes of the cluster, open a Command Prompt with Administrator rights (Right click > Run As Administrator)
  • The easiest command to generate the log is to type cluster log /g and hit Enter. The /g parameter stands for generate.
  • A file with name cluster.log will be generated and stored in the %windir%\Cluster\Reports directory on each node of the cluster.

Here is the output on my lab machine, which has a 4-node cluster. If any node is not reachable, the command continues with the remaining nodes.

[Screenshot: cluster log generation output on a 4-node cluster]

Here are the files generated in C:\Windows\Cluster\Reports

[Screenshot: cluster.log files in C:\Windows\Cluster\Reports]

In later versions, cluster.exe is not available (unless you install the legacy components), and you have to use the PowerShell method of generating the cluster log. The required command is Get-ClusterLog.

PS C:\Windows\system32> Get-ClusterLog
The term 'get-clusterlog' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:15
+ get-clusterlog <<<<
+ CategoryInfo          : ObjectNotFound: (get-clusterlog:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException


If you get this error, then you need to import the failover cluster module using the "Import-Module FailoverClusters" command. Here is the screenshot. Since we have a 4-node cluster, we see cluster.log four times.

[Screenshot: Get-ClusterLog output after Import-Module FailoverClusters]

Here is the syntax of the command, from Get-Help Get-ClusterLog:

SYNTAX
Get-ClusterLog [-InputObject <psobject>] [[-Node] <StringCollection>] [-Cluster <string>] [-Destination <string>] [-TimeSpan <UInt32>] [<CommonParameters>]

You can customize the output file based on your needs, but to be very honest, I have never used the additional parameters.

Have you come across a situation where the cluster log has helped you? I am sure more of these will get added in the future, and your experience in using them would be of great help to others too. So do let us know if you did something similar on your servers.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Knowing the Source Application Using APP_NAME() Function

The APP_NAME() function in SQL Server returns the application name of the current session in which it is called.

When you execute the following code

SELECT APP_NAME() AS application_name

You get the following result


application_name
———————————————-
Microsoft SQL Server Management Studio – Query

This function can be effectively used to audit where the data comes from. If you call this function from a .NET application, it returns the application name specified in the connection string (or the driver's default name if none is set).

If you want to perform some operation based on where the call came from, you can make use of this function. If you want to audit the source of your data, you can create a table with a column that has a default value of APP_NAME(), which will automatically store the name of the application the data came from.
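
Here is a minimal sketch of that idea (the table and column names are my own, made up for the example):

-- Audit table that captures the calling application automatically
CREATE TABLE dbo.OrdersAudit (
    OrderID   INT           NOT NULL,
    Amount    DECIMAL(10,2) NOT NULL,
    SourceApp SYSNAME       NOT NULL DEFAULT (APP_NAME())
);

INSERT INTO dbo.OrdersAudit (OrderID, Amount)
VALUES (1, 100.00); -- SourceApp is filled with the current session's application name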

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Getting started and creating JSON using SQL Server 2016

With the SQL Server 2016 CTP out, I know many have started to play around with it and are exploring some of the new capabilities. One of the new capabilities that has caught my attention is the introduction of JSON support. In this blog, we will get started by creating a JSON document using our SQL Server instance. Let me first introduce what JSON is and then we will move on.

SQL Server 2016 will support JSON (JavaScript Object Notation). JSON is an open, text-based exchange format based on JavaScript's object literal notation. It is a popular data-interchange format used in modern web and mobile applications, as well as for storing unstructured data. It is supported in several DB/NoSQL engines, and this improvement will enable developers to put some JSON processing logic in SQL Server to parse, query, analyze, and update JSON data.

In this blog we will query a standard table and then convert it into JSON format. For this example, we are querying the standard AdventureWorks database:

SELECT TOP(2) PP.[PersonType], PP.[FirstName], PP.[MiddleName], PP.[LastName], EA.[EmailAddress]
FROM Person.Person PP
INNER JOIN [Person].[EmailAddress] EA
ON EA.[BusinessEntityID] = PP.[BusinessEntityID]

[Screenshot: query results in SQL Server Management Studio]

The sample output for the rows inside SQL Server Management Studio looks like the above. Now we will add FOR JSON AUTO to get the same output in JSON format.

SELECT TOP(2) PP.[PersonType], PP.[FirstName], PP.[MiddleName], PP.[LastName], EA.[EmailAddress]
FROM Person.Person PP
INNER JOIN [Person].[EmailAddress] EA
ON EA.[BusinessEntityID] = PP.[BusinessEntityID]
FOR JSON AUTO

The output looks like the following:

[
{
"PersonType":"EM",
"FirstName":"Ken",
"MiddleName":"J",
"LastName":"Sánchez",
"EA":[{"EmailAddress":"ken0@adventure-works.com"}]
},
{"PersonType":"EM",
"FirstName":"Terri",
"MiddleName":"Lee",
"LastName":"Duffy",
"EA":[{"EmailAddress":"terri0@adventure-works.com"}]
}
]

I will save nested queries for a different time, but we can also generate JSON using the PATH option:

SELECT TOP(2) PP.[PersonType], PP.[FirstName], PP.[MiddleName], PP.[LastName], EA.[EmailAddress]
FROM Person.Person PP
INNER JOIN [Person].[EmailAddress] EA
ON EA.[BusinessEntityID] = PP.[BusinessEntityID]
FOR JSON PATH

The output for this query looks a little different:

[
{"PersonType":"EM","FirstName":"Ken","MiddleName":"J","LastName":"Sánchez","EmailAddress":"ken0@adventure-works.com"},
{"PersonType":"EM","FirstName":"Terri","MiddleName":"Lee","LastName":"Duffy","EmailAddress":"terri0@adventure-works.com"}
]
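
If you want the array wrapped in a named object, you can also add the ROOT option (a minimal sketch; the root name 'People' is my own choice for the example):

SELECT TOP(2) PP.[PersonType], PP.[FirstName], PP.[MiddleName], PP.[LastName], EA.[EmailAddress]
FROM Person.Person PP
INNER JOIN [Person].[EmailAddress] EA
ON EA.[BusinessEntityID] = PP.[BusinessEntityID]
FOR JSON PATH, ROOT('People') -- wraps the array as {"People":[...]}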

Now that we are able to generate a few simple outputs using the FOR JSON clause, we will cover the other options in future blogs.

I would be interested to know if you will ever use JSON inside SQL Server in your environment. What would be your use case? Let me know and I would be more than happy to blog about some of them here.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #026 – 64 Bit Vs 32 Bit Confusion

I often see people getting lost in conversations about 64-bit and 32-bit operating systems and applications. There are plenty of individuals who are still not sure what the difference between them is, or what runs on what platform. Here are a few very simple questions related to the Windows OS and SQL Server.


Question: Can you run SQL SERVER 32 bit on Windows 32 bit OS?

Answer: Yes

Question: Can you run SQL SERVER 64 bit on Windows 32 bit OS?

Answer: No

Question: Can you run SQL SERVER 32 bit on Windows 64 bit OS?

Answer: Yes

Question: Can you run SQL SERVER 64 bit on Windows 64 bit OS?

Answer: Yes

I guess there you go. In simple words, you can run pretty much everything on a 64-bit Windows OS. However, a 32-bit Windows OS can't run 64-bit SQL Server.
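
If you are not sure which platform a given instance is running, a quick check from T-SQL is the version string, which embeds the bitness (a convenience sketch; 64-bit builds show (X64) in the string and 32-bit builds show (Intel X86)):

-- The platform appears in the version string, e.g. "... (X64) ..." on 64-bit builds
SELECT @@VERSION AS version_string;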

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL Server – 2016 – New Feature: Dynamic Data Masking

As most of you know, the latest update from Microsoft about SQL Server 2016 was the release of the CTP 2.1 version. There is a long list of new features being introduced in SQL Server 2016. In this blog, I will talk about one of the security features, called Dynamic Data Masking.

Whenever you access your account on your bank's site, would you be comfortable seeing your credit card or bank account number in clear text on the web page? There are multiple ways to handle this at the application level, but human nature leaves room for error. One small mistake from a developer can leak sensitive data and cause a huge loss. Wouldn't it be great if a credit card number were returned with only its last 4 digits visible – XXXX-XXXX-XXXX-1234 – with no additional coding? Sounds interesting? Read on!

Before experimenting with this feature, please remember that if you are using CTP 2.0 you need to turn on trace flags using the command below.

DBCC TRACEON(209,219,-1)

If you don't enable them, here is the error you would receive when trying my sample script given later.

Msg 102, Level 15, State 1, Line 14
Incorrect syntax near 'masked'.

Not to forget, this is a SQL Server 2016 feature. Running the script on an earlier version of SQL Server would cause the errors below:

Msg 102, Level 15, State 1, Line 14
Incorrect syntax near 'MASKED'.
Msg 319, Level 15, State 1, Line 14
Incorrect syntax near the keyword 'with'. If this statement is a common table expression, an xmlnamespaces clause or a change tracking context clause, the previous statement must be terminated with a semicolon.

Here is the script to create the objects (database, table, masked columns):

SET NOCOUNT ON
GO
-- drop database MaskingDemo - if it already exists
USE [master]
GO
IF DB_ID('MaskingDemo') IS NOT NULL
BEGIN
    ALTER DATABASE [MaskingDemo] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE [MaskingDemo]
END
-- create new database called MaskingDemo
CREATE DATABASE MaskingDemo
GO
USE MaskingDemo
GO
-- Create table with different data type columns
CREATE TABLE MyContacts (
    ID INT IDENTITY(1, 1) PRIMARY KEY,
    fName NVARCHAR(30) NOT NULL,
    lName NVARCHAR(30) NOT NULL,
    CreditCard VARCHAR(20) NULL,
    SalaryINR INT NULL,
    OfficeEmail NVARCHAR(60) NULL,
    PersonalEmail NVARCHAR(60) NULL,
    SomeDate DATETIME NULL
)
-- insert a row
INSERT INTO [dbo].[MyContacts]
    ([fName], [lName], [CreditCard], [SalaryINR], [OfficeEmail], [PersonalEmail], [SomeDate])
VALUES ('Pinal', 'Dave', '1234-5678-1234-5678', 999999, 'pinal@SQLAuthority.com', 'pinal@foobar.in', '31-March-2013')
GO
-- apply masking
ALTER TABLE MyContacts
ALTER COLUMN CreditCard ADD MASKED
WITH (FUNCTION = 'partial(2,"XX-XXXX-XXXX-XX",2)')
ALTER TABLE MyContacts
ALTER COLUMN SalaryINR ADD MASKED
WITH (FUNCTION = 'default()')      -- default on int
ALTER TABLE MyContacts
ALTER COLUMN SomeDate ADD MASKED
WITH (FUNCTION = 'default()')      -- default on date
ALTER TABLE MyContacts
ALTER COLUMN fName ADD MASKED
WITH (FUNCTION = 'default()')      -- default on varchar
ALTER TABLE MyContacts
ALTER COLUMN OfficeEmail ADD MASKED
WITH (FUNCTION = 'email()')
GO
-- create a new user and grant select permissions
USE MaskingDemo
GO
CREATE USER WhoAmI WITHOUT LOGIN;
GRANT SELECT ON MyContacts TO WhoAmI;

I have added comments in the script so that it's easy to understand. The script below shows the power of masking.

USE MaskingDemo
GO
SELECT * FROM MyContacts; -- this would show clear data
GO
EXECUTE AS USER = 'WhoAmI';
SELECT * FROM MyContacts;     -- this should show masked data
REVERT;
GO

[Screenshot: clear data for dbo, masked data for the WhoAmI user]

As we can see above, the masked columns show obfuscated data based on their masking rules.
Note: In versions after the CTP 2.0 release, the trace flags are not needed. If you still add them, you would start getting the "Incorrect syntax" error.
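
If you want to see which columns carry a mask, you can query the sys.masked_columns catalog view (a minimal sketch; this view ships with the SQL Server 2016 builds that include the feature):

-- List masked columns and their masking functions in the current database
SELECT OBJECT_NAME(object_id) AS table_name,
       name AS column_name,
       masking_function
FROM sys.masked_columns;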

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Tips working with Files inside SQL Server

When two SQL enthusiasts meet, the conversation often drifts to questioning each other's understanding and implementation. When two DBAs meet, it is always great to see how the conversation goes, because each carries a ton of experience that can sometimes be contradictory too. One of the common rules of thumb has been placing log and data files on different drives.

Recently, a friend said there is not much to do apart from the placement of the drives. In one such conversation, I said: "Hey, it is not that simple. Isn't there more that you, as a DBA, take care of when working with your database files (data/log)?" I got a blank look and was asked: "Pinal, I don't think you have ever written anything like this on the blog." Though these points are spread across multiple blog posts, I thought this blog would aggregate some of what I shared in the conversation.

#1 Tip

Physically isolate Data files from Log files. Combining heterogeneous workloads (workloads with very different I/O and latency requirements) can have negative effects on overall performance. Here is a KB article that talks about it in detail: Separate database and transaction log files on different drives for optimal performance and disaster recovery

#2 Tip

Secondly, it is highly recommended to place the database files and backups on separate physical devices. Otherwise, if the device that contains the database fails, your backups will be unavailable as well. Furthermore, placing the data (meaning Data and Log files) and backups on isolated devices optimizes the I/O performance for both production use of the database and writing backups.

#3 Tip

Similarly, it is strongly recommended to place the database files and FILESTREAM data on separate devices. Placing the data (meaning data and log files) and FILESTREAM data on isolated devices optimizes I/O performance and allows better management of FILESTREAM data, namely regularly defragmenting such volumes. Just in case you want to learn more about FILESTREAM, I recommend you read these older posts.

#4 Tip

Data files should be of equal size within each filegroup, as SQL Server uses a proportional fill algorithm that favors allocations in files with more free space. You may recognize this recommendation as directed at TempDB, but it applies to all user databases as well. Read more about this on MSDN under the File and Filegroup Fill Strategy section.
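
To illustrate, here is a minimal sketch of a database whose user filegroup has two equally sized files (all names, sizes, and paths are made up for the example):

-- Two equally sized files in one filegroup, so proportional fill spreads allocations evenly
CREATE DATABASE SalesDB
ON PRIMARY
    (NAME = SalesDB_Sys,   FILENAME = 'D:\Data\SalesDB.mdf',   SIZE = 4GB, FILEGROWTH = 1GB),
FILEGROUP UserData
    (NAME = SalesDB_Data1, FILENAME = 'D:\Data\SalesDB_1.ndf', SIZE = 8GB, FILEGROWTH = 2GB),
    (NAME = SalesDB_Data2, FILENAME = 'E:\Data\SalesDB_2.ndf', SIZE = 8GB, FILEGROWTH = 2GB)
LOG ON
    (NAME = SalesDB_Log,   FILENAME = 'L:\Log\SalesDB.ldf',    SIZE = 4GB, FILEGROWTH = 1GB);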

#5 Tip

Do not change the default file extensions (mdf, ldf, and ndf) for data and log files. When talking about this, I would like you to take a look at the post SQL SERVER – Antivirus Exclusions Best Practices With SQL Server, because if any of these extensions are changed, the antivirus exclusions need to be adjusted accordingly. I am generally of the opinion not to disturb defaults unless we have a valid reason to do so.

#6 Tip

Use the Instant File Initialization feature, as it gives a boost in performance: when a data file is created or extended (this does not apply to log files), it is not zeroed right away, saving time and I/O cycles. This can be accomplished by granting the "Perform volume maintenance tasks" permission to the SQL Server service account, which is set in the security policy console (secpol.msc). In this context I am reminded of a post by my good friend Tim Radney, which I attach here for reference.

This permission does come with a small security risk: because existing space is not zeroed out, when data is deleted, for example, there is a possibility that it could still be read even though it has been "deleted", until other data writes over that specific area of the data file. However, the performance benefits outweigh the security risk, and hence the recommendation.
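
If you want to verify whether instant file initialization is actually in effect, one common trick (a sketch, relying on the undocumented trace flags 3004 and 3605) is to log zeroing operations to the error log and create a throwaway database: if you see "Zeroing" messages for the data file, and not just the log file, IFI is not enabled.

-- Log file-zeroing operations to the SQL Server error log (undocumented trace flags)
DBCC TRACEON(3004, 3605, -1);
CREATE DATABASE IFITest;
EXEC sys.xp_readerrorlog 0, 1, N'Zeroing'; -- look for entries against the IFITest data file
DROP DATABASE IFITest;
DBCC TRACEOFF(3004, 3605, -1);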

As I wrap up, I think I have captured the conversation points nicely in this blog. I would love to hear, via your comments, about recommendations you have implemented in your environments that could be a great addition to this blog.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How Do We Find Deadlocks? – Notes from the Field #086

[Note from Pinal]: This is the 86th episode of the Notes from the Field series. Deadlocks are very annoying when they happen in our databases. In my early career, my biggest frustration was that I had no idea why the deadlocks were happening, and because I did not know the root cause, I was never able to solve them. As time passed by, I gained a better understanding of this situation.


In this episode of the Notes from the Field series database expert John Sterrett (Group Principal at Linchpin People) explains a very interesting subject of how to find deadlocks in SQL Server. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of John in his own words.


For one reason or another, I tend to find that DBAs in the field have a hard time detecting deadlocks and understanding how they occur. A deadlock occurs when two or more sessions are blocked on each other and one statement has to be terminated to prevent the sessions from being hung on each other.

Let's take a quick look at locking, blocking, and deadlocks with a real-world example that is included in the video below. In the simple example provided in the video, there are two sessions, each with statements that update the same tables concurrently inside explicit transactions that are held open until the statements are committed or rolled back. For example, session one updates all the records in table one while session two updates all the records in table two. At this point we have locking: session one has a lock on table one and session two has a lock on table two, but neither session is blocking the other from doing work. Next, while the statements are not committed in either session, session one updates the same records being updated by session two. This is known as blocking, because session two is blocking session one's update on table two. Then session two attempts to update table one, which is currently locked by session one. Now we have a deadlock, because session one is blocked by session two and session two is blocked by session one. To prevent both sessions from being hung, one of them becomes the deadlock victim while the other gets to continue its work.

How Do We Find Deadlocks?

Before SQL Server 2008, we had to rely on configuring trace flags, event notifications (via Service Broker), or server-side traces to capture deadlock information so we could be proactive and take action to prevent the deadlocks from reoccurring. In SQL Server 2008 we were given Extended Events and could leverage the default system_health Extended Events session, which automatically runs in the background, to identify deadlocks. This was a great addition because we didn't have to manually configure anything to catch the deadlocks. In SQL Server 2012 this got easier still, because we could rely on the graphical user interface inside SQL Server Management Studio to filter the system_health session and focus straight on our deadlocks.
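
For example, a query along these lines extracts the captured deadlock graphs from the system_health ring buffer (a sketch; on a busy server the ring buffer may have already recycled older events):

-- Extract xml_deadlock_report events from the system_health ring buffer
SELECT XEvent.query('(data/value/deadlock)[1]') AS deadlock_graph
FROM (
    SELECT CAST(st.target_data AS XML) AS target_data
    FROM sys.dm_xe_sessions s
    INNER JOIN sys.dm_xe_session_targets st ON s.address = st.event_session_address
    WHERE s.name = 'system_health' AND st.target_name = 'ring_buffer'
) AS tab
CROSS APPLY target_data.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS q(XEvent);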

In the video below you will be able to create deadlocks and proactively monitor them with the native tools provided with SQL Server Management Studio in SQL Server 2012 or 2014.

Are your servers running at optimal speed or are you facing any SQL Server Performance Problems? If you want to get started with the help of experts read more over here: Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – FIX – Export Error – 'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine

Working with Excel files is always of great interest. I have not seen a reporting solution in the world that doesn't give an option to export to Excel. And if you are talking about two separate solutions where you want to export and import, such options can be of great use. In this blog, I was trying to import data from an Excel sheet into a SQL Server instance by using the Import Wizard available in SQL Server Management Studio. I am sure all of us have had this requirement once in a while and would have tried to get this working.

[Screenshot: launching the SQL Server Import and Export Wizard]

After selecting the option, I chose "Microsoft Excel" as the Data Source and selected the file name I wanted to import. The Excel file was created with the latest version of Excel, so I selected "Microsoft Excel 2007".

[Screenshot: choosing the Microsoft Excel data source]

As soon as I hit Next, I got the error below:

TITLE: SQL Server Import and Export Wizard
——————————
The operation could not be completed.
——————————
ADDITIONAL INFORMATION:
The 'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine. (System.Data)
——————————
BUTTONS:
OK
——————————


I checked Management Studio and, as the message says, the provider was not listed, as we can see below. We can also use the query below to find the details of the providers installed on the machine:

EXECUTE MASTER.dbo.xp_enum_oledb_providers

[Screenshot: xp_enum_oledb_providers output without the ACE provider]

A few blogs mention downloading https://www.microsoft.com/en-us/download/details.aspx?id=23734 (2007 Office System Driver: Data Connectivity Components), but that didn't work. So I tried installing http://www.microsoft.com/en-us/download/details.aspx?id=13255 (Microsoft Access Database Engine 2010 Redistributable), but it also failed, because I am using the 32-bit version of Office on my 64-bit PC.

Note: If you are running a 32-bit version of SQL Server, then your issue will be solved by installing the above. On my PC, I had to take the approach below.

  1. Uninstall the 32-bit version of Microsoft Office.
  2. Install the 64-bit version of MS Office (make sure setup is run from the "x64" folder; run setup.exe there to install the 64-bit version of MS Office).
  3. Install the 64-bit version of "Microsoft.ACE.OLEDB.12.0" (download AccessDatabaseEngine_x64.exe from the second link provided earlier).

Here is the location of the DLL "ACEOLEDB.DLL":

  1. On a 32-bit operating system: C:\Program Files\Common Files\Microsoft Shared\OFFICE14\ACEOLEDB.DLL
  2. On a 64-bit operating system:
    1. If 32-bit Office is installed: C:\Program Files (x86)\Common Files\Microsoft Shared\OFFICE14\ACEOLEDB.DLL
    2. If 64-bit Office is installed: C:\Program Files\Common Files\Microsoft Shared\OFFICE14\ACEOLEDB.DLL

A production server would not have the complete Office suite installed, so it is not a big deal to install the correct version of the driver there.
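
Once the right provider is registered, a quick way to test it from T-SQL is an OPENROWSET query (a sketch; the file path and sheet name here are hypothetical, and the 'Ad Hoc Distributed Queries' option must be enabled on the instance):

-- Read a worksheet through the ACE provider; path and sheet name are examples only
SELECT *
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'Excel 12.0;Database=C:\Temp\Sample.xlsx;HDR=YES',
                'SELECT * FROM [Sheet1$]');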

Have you ever encountered this error in your environment? What did you do to solve it? Let me know.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – PowerShell to Count Number of VLFs in SQL Server

If you are a seasoned DBA, then monitoring VLFs is something you are already doing. To learn the basics of VLFs, check my earlier blog: SQL SERVER – Detect Virtual Log Files (VLF) in LDF. Having a large number of VLFs has a performance impact, and there are a number of blogs that discuss this. In this blog I thought of showing a simple script that helps identify the number of VLFs inside a SQL Server instance, or across a number of instances.
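
The building block here is the undocumented but long-standing DBCC LOGINFO command, which returns one row per VLF for the current database, so its row count is the VLF count:

-- One row per virtual log file; the number of rows = VLF count for this database
DBCC LOGINFO;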

As part of my PowerShell learning, here is a simple script that runs this check across your environment so you can take corrective action if you have a large number of VLFs on your servers.

<#
EXAMPLE
   Get-VLFCount -SQLInstance InstanceName
EXAMPLE
   Get-Content D:\ServerList.txt | Get-VLFCount

   If you are using Windows Authentication and have a list of instances, you can use this.
   Get-VLFCount -SQLInstance InstanceName1, InstanceName2
#>
function Get-VLFCount
{
    [CmdletBinding()]
    Param
    (
        # SQLInstance is the name(s) of the SQL Instances to scan
        [Parameter(Mandatory=$true,
                   ValueFromPipeline=$True,
                   ValueFromPipelineByPropertyName=$true,
                   Position=0)]
        $SQLInstance
    )
    Begin
    {
        #Load the Assembly to connect to the SQL Instance.
        [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null
        [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.ConnectionInfo') | out-null

    }
    Process
    {
        ForEach ($Instance in $SQLInstance)
        {
            $SrvConn = new-object Microsoft.SqlServer.Management.Common.ServerConnection
            $SrvConn.ServerInstance=$Instance
            #Use Integrated Authentication
            $SrvConn.LoginSecure = $true
            $SrvConn.ConnectTimeout = 5
            Write-Debug "Attempting to check $Instance for VLF Counts"
            $srv = new-object Microsoft.SqlServer.Management.SMO.Server($SrvConn)
            $dbs = $srv.Databases
            try
            {
                ForEach ($db in $dbs)
                {
                    Write-Debug "Getting VLFInfo for $db"
                    if ($db.IsAccessible)
                    {

                        $VLFs = $db.ExecuteWithResults("DBCC LOGINFO")
                        $NumVLFs = $VLFs.Tables[0].Rows.count

                        $VLFinfo = $db | Select @{Name='Instance Name'; Expression={$Instance}}, @{Name='Database Name'; Expression={$_.Name}} `
                        , @{Name='VLFCount'; Expression={$NumVLFs}}

                    }
                    else
                    {
                        $VLFInfo = New-Object psobject
                        $VLFInfo | Add-Member -Type NoteProperty -Name InstanceName -Value $Instance
                        $VLFInfo | Add-Member -Type NoteProperty -Name DBName -Value "$DB is Inaccessible"
                        $VLFInfo | Add-Member -Type NoteProperty -Name VLFCount -Value 0
                    }
                    Write-Output $VLFinfo
                }

            }
            catch
            {
                $ex = $_.Exception
                Write-Debug "$ex.Message"
                Write-Error "Could not pull SQL DB Info on $Instance"
            }

        }
    }
}

#Run against a single instance; see the comment-based help above for pipeline and multi-instance usage
Get-VLFCount 'localhost'

The above script when run on my server returns the following:

[Screenshot: Get-VLFCount output showing VLF counts per database]

As you can see in the example output, our AdventureWorks database has a large number of VLFs. In case you have hundreds of VLFs in your environment, have a look at SQL SERVER – Reduce the Virtual Log Files (VLFs) from LDF file to reduce the number of VLFs in your databases.

Just out of curiosity, I would love to know: what is the highest number of VLFs in your databases? How long have those systems been running? It would be great if you could share this in the comments, so it would be a great learning for me too.

Reference: Pinal Dave (http://blog.SQLAuthority.com)