SQL SERVER – Only Formatted Files on Which the Cluster Resource of the Server has a Dependency can be Used

Getting a little help and learning from each other always helps. When I don't know something and need to explore it, I check with my SQL friends who can guide me in the right direction. This blog is a classic example of me trying to learn something and help the community. Recently I got an email from a blog reader as below:

<EMAIL>

Hi Pinal,
My name is Aysha and I have been a regular follower of your blog. Just to let you know, I am a hard-core developer but also an accidental DBA, so please pardon me if this is a really simple issue for you. Also, I don't have much knowledge about clusters. Here is the problem: I needed to create a new database on a SQL Server 2012 two-node failover cluster to host the database for a new application under development. Following a few internet articles, I added new drives (M and N) into the SQL group in the cluster, but I am unable to create a database on the newly added drives.

Here is the command I was trying

CREATE DATABASE [TestDatabase]
ON PRIMARY
(NAME = N'TestDatabase', FILENAME = N'M:\SQLDATA\TestDatabase.mdf'  )
LOG ON
(NAME = N'TestDatabase_log', FILENAME = N'N:\SQLLOG\TestDatabase_log.ldf')

But here is the error I am receiving

Msg 5184, Level 16, State 2, Line 1
Cannot use file ‘M:\SQLDATA\Testdatabase.mdf’ for clustered server. Only formatted files on which the cluster resource of the server has a dependency can be used. Either the disk resource containing the file is not present in the cluster group or the cluster resource of the Sql Server does not have a dependency on it.
Msg 1802, Level 16, State 1, Line 1
CREATE DATABASE failed. Some file names listed could not be created. Check related errors.

Do you think I did something wrong? I have restarted SQL multiple times, but I still get the same error. Can you please help?

Thanks
Aysha

<EMAIL>

Here is my reply.

<Reply>

Hi Aysha.

Thanks for contacting me and giving me all the details, which I always expect from anyone who contacts me. Since you are new to clustering, you need to read a little more about the dependencies of cluster resources. A dependency decides the order in which resources can come online: the child resource must come online before an attempt is made to bring the parent resource online in a cluster.

  • The IP Address and the Disk Resource do not depend on anything.
  • SQL Network Name depends on the IP Address.
  • SQL Server depends on the Network Name and a Disk Resource.
  • SQL Agent depends on SQL Server.

Coming back to the problem you are facing (i.e., the error below), here is a breakup of the error:

Only formatted files on which the cluster resource of the server has a dependency can be used.
Either the disk resource containing the file is not present in the cluster group or the cluster resource of the Sql Server does not have a dependency on it.

Notice the dependency terms in the error message?

So, all you need to do is add a dependency on disks M and N, which you added post-installation, to the SQL Server resource. SQL Server prevents us from creating databases on drives that aren't a dependency of the SQL Server resource. This makes sure the disk is online before SQL Server tries to access the database files.

Here are the steps

  • Open Failover Cluster Manager from Start Menu OR you can also go to Start > Run > Cluadmin
  • Select the clustered application, i.e. the group which has SQL Server.
  • Right-click on your “SQL Server” resource then choose Properties.
  • Go to Dependencies tab.
  • Click the Insert button to add an additional row, then select the shared disk (M) that is going to contain your SQL databases. Repeat in the next row to add disk N.
  • Ensure that all resources are required (there is an AND next to subsequent resources) then click OK.

That's it. Depending on the version of SQL Server, you might need to take the SQL Server resource offline and bring it back online from the right-click menu.

Hope this helps!

</Reply>

Later she responded and informed me that she was able to fix the issue after following the steps.

Homework: Can you help me out with this – which SQL Server version does not need a restart after modifying the dependencies of a disk? If you already have a working SQL cluster, please test and let me know by commenting. This was a great learning experience for me too, and I felt I would quiz you with some research.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Adding Reference Data to Master Data Services – Notes from the Field #081

[Note from Pinal]: This is the 81st episode of the Notes from the Field series. Master Data Services is one of the most important, but least explored, features of SQL Server. If you have been reading this blog, you know that when it is about BI, I always take help from Linchpin People, who are BI experts. I requested Reeves from Linchpin People to help me answer this unique question.

In this episode of the Notes from the Field series, database expert Reeves Smith explains why one should add reference data in Master Data Services to the enterprise. Read the experience of Reeves in his own words.



In a previous post, I talked about Master Data Services and how it can enhance your Enterprise Data Warehouse. Through the Master Data Services add-in for Excel, the business can access and manage reference data without much involvement with IT. This might sound like a bad idea, but the subject matter experts (SMEs) for the data often do not reside in IT. SMEs should have the ability to manage reference data, and with the Master Data Services add-in for Excel they can.

In this post, I’m going to outline the three steps needed to get your data governance people up and running after Master Data Services has been installed. The three are:

  • Create a model to store the reference data (completed on the MDS website)
  • Format the reference data in Excel
  • Create an Entity within the Master Data Services add-in for Excel

IT will only need to perform the first step of the process. The first step in managing reference data is creating a container for it. This step must take place within the Master Data Services website.

Step 1 – Create a model to store the reference data

  1. Open the Master Data Services website and select System Administration, as you see in Figure 1.


Figure 1 – Master Data Services System Administration

  2. After you select System Administration, the website will list three menus: Model View, Manage, and System. Select Manage and then select Models from the pop-out menu.
  3. Click on the green plus sign to create a new model.
  4. In the Add Model dialog, which you see in Figure 2, add a model name that would be relevant to the appropriate reference data. In this example I will use Demo. Unselect all of the check boxes under the model.


Figure 2 – Add Model

  5. Test your model by connecting to it from the Master Data Services add-in for Excel.
    1. Open a blank Excel workbook and select the Master Data Services tab.
    2. Click the Connect button to create a new connection to your Master Data Services instance.
    3. In the Master Data Explorer, which you see in Figure 3, select your newly created model.

Figure 3 – Master Data Explorer


Step 2 – Format the reference data in Excel

  1. Create an Excel worksheet with the formatted reference data and provide a header for each of the columns.

Note: Master Data Services requires a Code column and suggests the use of a Name column. These columns do not need to be literally named Name and Code, but the reference data should contain equivalent columns. The Code column must contain unique values.
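Before creating the entity, it is worth sanity-checking the worksheet data against that rule. A small hypothetical Python sketch (the column names and sample rows are just examples, not anything MDS-specific) that verifies the Code column is non-empty and unique:

```python
# Minimal pre-flight check for MDS-style reference data:
# every row needs a non-empty, unique Code value.
def validate_reference_data(rows, code_column="Code"):
    """Return a list of problems found; an empty list means the data is clean."""
    problems = []
    seen = set()
    for i, row in enumerate(rows, start=1):
        code = (row.get(code_column) or "").strip()
        if not code:
            problems.append(f"row {i}: missing {code_column}")
        elif code in seen:
            problems.append(f"row {i}: duplicate {code_column} '{code}'")
        seen.add(code)
    return problems

rows = [
    {"Code": "C001", "Name": "Contoso"},
    {"Code": "C002", "Name": "Fabrikam"},
    {"Code": "C001", "Name": "Duplicate of Contoso"},
]
print(validate_reference_data(rows))  # flags the duplicate C001
```

Running a check like this before clicking Create Entity saves a round trip when the add-in rejects duplicate codes.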


Step 3 – Create an Entity within the Master Data Services add-in for Excel

For this example, I will create a list of customers from the Adventure Works database to enable the discussion of enhancements to the data set in a later post. Your reference data should be less complex than the customer list.

  1. With an open connection to Master Data Services and the worksheet with your reference data, click the Create Entity button, as you see in Figure 4.

notes81 4 SQL SERVER   Adding Reference Data to Master Data Services   Notes from the Field #081

Figure 4 – Master Data Services tab with the Create Entity highlighted

  2. Click the red arrow to the right of the Range text box and select the range of your reference data. Be sure to include the column headers and select the My data has headers check box, as shown in Figure 5.
  3. Select the newly created model; in my case that would be Demo.
  4. Add a new entity name and select the appropriate Code and Name columns.

Note: Master Data Services 2012 and later enables the use of identity values for the Code column if you do not want to manage unique keys.


Figure 5 – Create Entity Dialog

  5. Click OK.


At this point you should have a working model with a new entity that is fully managed within Master Data Services.

Summary

In this post, I walked through the steps required to move reference data from an Excel workbook into a model within Master Data Services. Under MDS you get all of the benefits of a managed master data management solution, like auditability and security.

What’s Next

In a coming post, we will walk through enhancing our reference data. One enhancement we will demonstrate is adding domain values within the model. This helps business users select the correct data elements and increases the data quality of your master data.

Hope this helps.

If you want to get started with BIML with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Puzzle with MONTH Function – Win USD 50 Amazon Gift Card

It has been a while since we ran a contest. I reached out to the kind team at Embarcadero and they agreed to support the next contest. The contest has two steps and they are very simple. It took me a while to build this contest, but it is a really fun one. I am very confident that once you try the contest, you will love it.

Two Giveaways:


(Global) USD 50 Amazon Gift Card to 1 Individual

(India) INR 2500 Flipkart Gift Card to 1 Individual

Contest 1: T-SQL

Run following T-SQL script and observe the answer:

SELECT MONTH(18/200), MONTH(200/18)


When we look at the answer, it displays 1 as the answer to both expressions. The question is: why does the above script return 1 instead of throwing an error?

Contest 2: Download and Install DBArtisan

This is the easy part of the contest. You just have to download and install DBArtisan. You can download DBArtisan from here.

How to Participate?

  • Leave an answer for contest 1 in the comment section of the blog.
  • Leave a comment with the same email address which you have used to download DBArtisan.
  • The contest is open till June 2nd, 2015 Midnight PST.
  • The winners will be announced on June 4th, 2015.
  • There will be two winners 1) Global 2) India.
  • All the correct answers to the contest will be kept hidden till the winners are announced.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Antivirus Exclusions Best Practices With SQL Server

One of the classic best practices is to exclude SQL Server files and folders from the antivirus programs running on the server. We all know this statement, yet I often see DBAs not paying any extra care to this recommendation. There are not many blogs that call the exclusions out explicitly.

Although antivirus software is a very important part of security planning, it is important to understand the effect it has on SQL Server files. After rebooting a server, if the antivirus software locks a SQL Server file before SQL Server can gain access to it, issues ranging from SQL Server not being able to access those particular files to possible database corruption may occur. In a number of cases I have seen SQL Server refusing to start, or the CPU / I/O seeming to stall. As you can see, there are a number of random errors one might get by not implementing a simple best practice. Therefore, it is recommended that the following files be excluded from all antivirus scans:

  • SQL Server data files (typical extensions are .mdf, .ndf, and .ldf)
  • SQL Server backup files (typical extensions are .bak and .trn)
  • Full-text catalog files
  • Trace files (typical extension is .trc)
  • SQL Audit files
  • SQL query files (typical extension is .sql)
  • Directories holding Analysis Services data and partitions
  • The directory that holds Analysis Services temporary files used during processing
  • Analysis Services backup files

In addition to excluding SQL Server and Analysis Services files, it is recommended to exclude the following list of processes from antivirus scans:

  • SQLServr.exe
  • ReportingServicesService.exe
  • MSMDSrv.exe

For environments where SQL Server is clustered, exclude the C:\Windows\Cluster directory and the Quorum drive.

Though this list is not exhaustive, I am sure there might be a few more that I have missed. This was a humble shot at the topic, and I am sure that as seasoned DBAs you might have a little more to include. Do let me know in the comments section, and I am sure this can be a great source of reference for others searching on this topic too.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL Authority News – SQL Server 2014 Service Pack 1 Available for Download

Last month, Microsoft announced SQL Server 2014 Service Pack 1, and it had issues with the SSIS catalog. Microsoft has once again released SQL Server 2014 Service Pack 1 earlier today, and it is now available to download. I hope all the issues with the earlier release have been taken care of and this version is reliable enough to apply to production servers. I strongly suggest that one should wait for a while before going ahead and applying the service pack. It is a good idea to wait and see if the version we are installing is stable and there are no bugs. I always install service packs on my development server before I install them on production servers.

Here are the links to download SQL Server 2014 Service Pack 1.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #020 – What is the Difference Between DISTINCT and GROUP BY?

Question: What is the Difference Between DISTINCT and GROUP BY?

Answer: DISTINCT and GROUP BY usually generate the same query plan, so performance should be the same across both query constructs. GROUP BY should be used to apply aggregate operators to each group. If all you need is to remove duplicates, then use DISTINCT. If you are using sub-queries, the execution plan for the query can vary, so in that case you need to check the execution plan before deciding which is faster.

USE AdventureWorks2014
GO
-- Example of DISTINCT:
SELECT DISTINCT JobTitle, Gender
FROM [HumanResources].[Employee]
GO


USE AdventureWorks2014
GO
-- Example of GROUP BY:
SELECT JobTitle, Gender
FROM [HumanResources].[Employee]
GROUP BY JobTitle, Gender
GO


USE AdventureWorks2014
GO
-- Example of GROUP BY with aggregate function:
SELECT JobTitle, Gender, COUNT(*) EmployeeCount
FROM [HumanResources].[Employee]
GROUP BY JobTitle, Gender
GO

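The same equivalence can be seen outside SQL. As a hedged analogy (a toy Python sketch, not how the query optimizer works), deduplicating the pairs and grouping by the pair produce the same set of keys, and grouping only adds value once you aggregate:

```python
from collections import Counter

# Sample (JobTitle, Gender) rows, loosely mirroring the T-SQL examples above
employees = [
    ("Design Engineer", "M"),
    ("Design Engineer", "F"),
    ("Design Engineer", "M"),
    ("Buyer", "F"),
]

# "SELECT DISTINCT JobTitle, Gender" -> the unique pairs
distinct_pairs = sorted(set(employees))

# "GROUP BY JobTitle, Gender" with no aggregate -> the same unique pairs
employee_count = Counter(employees)
grouped_keys = sorted(employee_count.keys())
assert distinct_pairs == grouped_keys

# GROUP BY earns its keep with an aggregate, like COUNT(*)
print(employee_count[("Design Engineer", "M")])  # → 2
```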

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Service Pack Error – The User Data directory in the registry is not valid. Verify DefaultData key under the instance hive points to a valid directory

As a good SQL Server DBA, I always keep my SQL Server updated on all my VMs with the latest service pack available. During my last installation of Service Pack 2 on SQL Server 2012, I got the below error.

The User Data directory in the registry is not valid. Verify DefaultData key under the instance hive points to a valid directory.

I remembered that I had played with some registry keys, and now I realized that they are important keys. I looked into the logs based on MSDN articles and found the below in the Summary.txt file.

Detailed results:
Feature: Database Engine Services
Status: Failed: see logs for details
Reason for failure: An error occurred during the setup process of the feature.
Next Step: Use the following information to resolve the error, and then try the setup process again.
Component name: SQL Server Database Engine Services Instance Features
Component error code: 0x851A0043
Error description: The User Data directory in the registry is not valid. Verify DefaultData key under the instance hive points to a valid directory.

Then I looked in the Detail.txt and found below:

(01) 2015-05-13 09:19:09 SQLEngine: --SqlEngineSetupPrivate: Validating path:E:\MSSQL\Data
(01) 2015-05-13 09:19:09 Slp: Configuration action failed for feature SQL_Engine_Core_Inst during timing Validation and scenario Validation.
(01) 2015-05-13 09:19:09 Slp: The User Data directory in the registry is not valid. Verify DefaultData key under the instance hive points to a valid directory.
(01) 2015-05-13 09:19:09 Slp: The configuration failure category of current exception is ConfigurationValidationFailure
(01) 2015-05-13 09:19:09 Slp: Configuration action failed for feature SQL_Engine_Core_Inst during timing Validation and scenario Validation.
(01) 2015-05-13 09:19:09 Slp: Microsoft.SqlServer.Configuration.SqlEngine.ValidationException: The User Data directory in the registry is not valid. Verify DefaultData key under the instance hive points to a valid directory.
(01) 2015-05-13 09:19:09 Slp: at Microsoft.SqlServer.Configuration.SqlEngine.SqlEngineSetupPrivate.CheckIfDirectoryExistsGeneric(String sqlPath, SqlEngineErrorCodes invalidError)
(01) 2015-05-13 09:19:09 Slp: at Microsoft.SqlServer.Configuration.SqlEngine.SqlEngineSetupPrivate.ValidateDataRegistryKeys(EffectiveProperties properties)
(01) 2015-05-13 09:19:09 Slp: at Microsoft.SqlServer.Configuration.SqlEngine.SqlEngineSetupPrivate.Patch(ConfigActionTiming timing, Dictionary`2 actionData, PublicConfigurationBase spcb)
(01) 2015-05-13 09:19:09 Slp: at Microsoft.SqlServer.Configuration.SqlConfigBase.SlpConfigAction.ExecuteAction(String actionId)
(01) 2015-05-13 09:19:09 Slp: at Microsoft.SqlServer.Configuration.SqlConfigBase.SlpConfigAction.Execute(String actionId, TextWriter errorStream)
(01) 2015-05-13 09:19:09 Slp: The following is an exception stack listing the exceptions in outermost to innermost order
(01) 2015-05-13 09:19:09 Slp: Inner exceptions are being indented
(01) 2015-05-13 09:19:09 Slp:
(01) 2015-05-13 09:19:09 Slp: Exception type: Microsoft.SqlServer.Configuration.SqlEngine.ValidationException
(01) 2015-05-13 09:19:09 Slp: Message:
(01) 2015-05-13 09:19:09 Slp: The User Data directory in the registry is not valid. Verify DefaultData key under the instance hive points to a valid directory.

I searched for the DefaultData key in the registry and found it at the below location:

HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQLServer


As we can see, it was pointing to E:\MSSQL\Data, which was not valid. Later I played with various other keys and found the below possible errors.

Invalid registry value and the resulting error message:

  • DefaultData – The User Data directory in the registry is not valid. Verify DefaultData key under the instance hive points to a valid directory.
  • DefaultLog – The User Log directory in the registry is not valid. Verify DefaultLog key under the instance hive points to a valid directory.
  • SQLDataRoot – The Database Engine system data directory in the registry is not valid.

SQLDataRoot is located under HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL11.MSSQLSERVER\Setup

The cause of all the errors was an invalid path, which I had messed up. Later I searched on the Microsoft site and found that there is a Connect item filed by someone:

https://connect.microsoft.com/SQLServer/feedback/details/778275/sql-server-2012-sp1-installation-issue

The moral of the story: never play directly with the registry unless you are sure what the keys are for. I was able to fix the error by correcting the value in the DefaultData registry key.

Have you encountered such errors, and were you able to look at the logs?

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – WARNING – SQL Server Has Encountered N Occurrence(s) of I/O Requests Taking Longer Than 15 Seconds

This is one of the most commonly asked questions via blog comments and emails sent across to me. Here is the sample error message which has been reported in SQL Server ERRORLOG.

SQL Server has encountered 201 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [T:\MSSQL\DATA\tempdb.mdf] in database id tempdb [2].  The OS file handle is 0x0000000000000770.  The offset of the latest long I/O is: 0x0000000008c000

If you are not a SQL expert and don't know what the ERRORLOG is, no need to worry. Here is the blog to check the location of the SQL Server ERRORLOG:

SQL SERVER – Where is ERRORLOG? Various Ways to Find its Location

Coming back to the error message: the variable parts (file path, database name and ID, occurrence count) change based on the problem. The sample error message is for the tempdb database, which has database ID = 2, and the MDF file path is on the T drive. Some DBAs also refer to this as stalled I/O. In the sample message we see 201 occurrences, but I have seen the value go to five digits as well.

So, what's the cause of the error message? Is there a problem with SQL Server? Most of the time the answer is NO, because SQL Server reports this warning in the ERRORLOG when it is not able to complete an I/O even in 15 seconds. If you have read any blog or book covering SQL Server internals, you might know that SQL Server does asynchronous I/O, using Win32 APIs like ReadFile, WriteFile, ReadFileScatter and WriteFileGather. Once an I/O is posted, the thread returns and does meaningful work rather than waiting for the I/O to complete. The threads keep checking the pending I/Os and report any that have taken longer.
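To illustrate the reporting mechanism (purely a toy Python sketch, not how SQL Server is implemented): post I/Os asynchronously, remember when each was posted, and periodically flag any still pending past the threshold.

```python
class PendingIOTracker:
    """Toy model: record when each I/O was posted and flag slow ones."""
    def __init__(self, threshold_seconds=15.0):
        self.threshold = threshold_seconds
        self.pending = {}  # io_id -> timestamp when the I/O was posted

    def post(self, io_id, now):
        # The "thread" posts the I/O and immediately moves on
        self.pending[io_id] = now

    def complete(self, io_id):
        # The I/O finished; stop tracking it
        self.pending.pop(io_id, None)

    def slow_ios(self, now):
        """Return I/Os that have been pending longer than the threshold."""
        return [io for io, t in self.pending.items()
                if now - t > self.threshold]

tracker = PendingIOTracker(threshold_seconds=15.0)
tracker.post("read:tempdb.mdf", now=0.0)
tracker.post("write:log.ldf", now=10.0)
tracker.complete("write:log.ldf")          # finished quickly
print(tracker.slow_ios(now=16.0))          # → ['read:tempdb.mdf']
```

The warning in the ERRORLOG is exactly this kind of periodic check reporting I/Os that the storage stack has not completed in time.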

Here are the things a DBA should do before going to the hardware team.

1. Note the pattern of the messages. Are they logged at any specific time? Check whether any scheduled maintenance job is running at the same time. Index rebuilds and other I/O-intensive tasks perform a lot of I/O, and the disk may report slowness at that time.

2. Check the Event log (System and Application) for any disk-related error or warning messages.

3. Keep monitoring sysprocesses and check the waits which are happening for SPIDs. If the disk is slow all the time, then you should see PAGEIOLATCH, WRITELOG or LOGBUFFER waits with high wait times. Refer to my earlier blogs.

LOGBUFFER Explanation

WRITELOG Explanation


4. Capture disk-related data using Performance Monitor (Start > Run > perfmon) for an extended amount of time and have a look at the counters. You should look at the performance counters related to storage to evaluate whether the numbers you are seeing are expected and within reasonable bounds.

Avg. Disk sec/Transfer: This counter is available under the Logical Disk and Physical Disk objects. The value of this counter should be less than 0.020 (= 20 ms). The same counter is available separately for reads and writes, and can be used to find performance differences between them.

Disk Bytes/sec: SANs generally start from 200-250 MB/s these days (note that the counter value is in bytes, so divide by 1,048,576 – that is, 1024 × 1024 – to get MB/s). The same counter is available for reads and writes as well.

Disk Transfers/sec: This counter represents the number of read and write operations per second, also known as IOPS (I/Os per second). It can be compared against the capacity of the storage subsystem. Read and write counters are also available.

5. Make sure the latest Windows patches have been applied to avoid any known issues with Windows itself.

Once the basics have been verified and we have confirmed that we are seeing symptoms of slow response from the disk, the hardware team needs to be engaged for further investigation.
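The unit conversions in step 4 are easy to get wrong. As a hedged illustration (a hypothetical Python sketch; the thresholds are the rough guidelines mentioned above, not hard limits), here is how raw counter values map to the numbers discussed:

```python
# Evaluate raw Performance Monitor values against rough storage guidelines.
def evaluate_disk_counters(avg_sec_per_transfer, disk_bytes_per_sec):
    """Return human-readable findings for two key disk counters."""
    findings = []
    latency_ms = avg_sec_per_transfer * 1000          # seconds -> milliseconds
    mb_per_sec = disk_bytes_per_sec / (1024 * 1024)   # bytes -> megabytes
    if latency_ms > 20:
        findings.append(f"latency {latency_ms:.0f} ms exceeds 20 ms guideline")
    findings.append(f"throughput {mb_per_sec:.1f} MB/s")
    return findings

# Example: 35 ms per transfer, ~100 MB/s throughput
print(evaluate_disk_counters(0.035, 104857600))
```

Note that Avg. Disk sec/Transfer is reported in seconds (so 0.020 is 20 ms), and Disk Bytes/sec needs division by 1024 twice, not once, to reach MB/s.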

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSIS and How to Load Binary Large Objects, or Blobs – Notes from the Field #080

[Note from Pinal]: This is a new episode of the Notes from the Field series. Every time I give an introductory note; this time, however, there is no need for one. This note is from Andy, and as we all know, he is an amazing person when we have to understand the fundamentals. He has written this blog post in such an interesting way that you must read it to understand the very basic fundamentals of SSIS.



I still remember my first experience using SSIS to load binary large objects, or blobs. I was doing a gig for a large retailer, moving data for a new in-store application. Part of the data was product images, mostly 3kb-5kb files. During testing, I noticed the load ran much faster if I did not load the images. I wondered why, so I searched the interwebz for an answer.

I did not find an answer. I found lots of comments and posts telling me I could not, in fact, use SSIS (2005) to integrate blob data. At least not the way I was doing it. My response was, "Huh," as I was already using SSIS to (apparently) do the impossible. I knew right away this represented an opportunity. I learned everything I could about blobs and now, just nine short years later, I'm here to share.

When I searched the interwebz this time, I found an excellent blog post by my friend, John Welch (blog | @john_welch), titled Importing Files Using SSIS. John’s idea is straightforward and very “SSIS-y,” as Kevin Hazzard (blog | @KevinHazzard) says. With John’s permission, I am modeling the example for this post on John’s post.

How Does SQL Server Store Blobs?

Before we dive into a demo project, it’s important to know more about how SQL Server stores blob data. The short answer is: it depends. I can hear you asking, “What does it depend on, Andy?” It depends on the size of the blob data. For larger binary large objects, a pointer to a file location is stored in the row. When the row is read, the pointer points to the file location containing the binary data, and the binary data is streamed to the output.

In this example, we’ll take a look at how we use SSIS to move data from the file system into a SQL Server table. I changed a few things but, again, this example was inspired by John Welch’s post titled Importing Files Using SSIS.

Part 1 – Import Column

The Import Column transformation streams file binaries – the contents of a file – into a binary large object (Blob) “column” in a Data Flow Path. From the Data Flow path, these data can be streamed into a database table Blob field. Let’s demonstrate:

In a default instance of SQL Server 2014, I created a database named ImportPics. Then I created a table named PicFile using this statement:

CREATE TABLE PicFile
(
  ID INT IDENTITY(1,1),
  FilePath VARCHAR(255),
  FileContent IMAGE -- IMAGE is deprecated; VARBINARY(MAX) is the recommended type on current versions
)

Next I created an SSIS 2014 project named ImportPicFiles and renamed Package.dtsx to ImportPicFiles.dtsx. I added a Data Flow Task and created a package parameter named ImportFilesDir to hold the path to a directory filled with Snagit screenshots:


I add a script component as a Source adapter, then configure it to consume the $Package::ImportFilesDir package parameter as a ReadOnlyVariable:


I add an output column named fileName (DT_STR, 255):


In the Script Editor, I add the following code:

using System;
using System.Data;
using Microsoft.SqlServer.Dts.Pipeline.Wrapper;
using Microsoft.SqlServer.Dts.Runtime.Wrapper;
using System.IO;

[Microsoft.SqlServer.Dts.Pipeline.SSISScriptComponentEntryPointAttribute]
public class ScriptMain : UserComponent
{
    public override void CreateNewOutputRows()
    {
        // Enumerate the files in the directory supplied by the package parameter
        DirectoryInfo dir = new DirectoryInfo(Variables.ImportFilesDir.ToString());
        foreach (var file in dir.GetFiles())
        {
            // Emit one row per file, carrying its full path downstream
            Output0Buffer.AddRow();
            Output0Buffer.fileName = file.FullName;
        }
        Output0Buffer.SetEndOfRowset();
    }
}

(Yes, that’s C#. I’m learning new things. Yay me! :{>)

Next, I add and configure an Import Columns transformation. On the Input Columns page, I select the fileName field:

On the Input and Output Properties tab, I expand the Import Column Output node of the treeview, select Output Columns, and click the Add Column button. I name the column “FileContents” and set the DataType property to DT_IMAGE:


This next part is a little tricky. You can learn more about configuring this – and other tricky SSIS transformation properties – here.

Select the LineageID property value for the FileContents column and copy it to the clipboard:

Next, expand the Import Column Input treeview node, then expand Input Columns, and then select the fileName column. In the FileDataColumnID property, paste the value you just copied to the clipboard:


Add an OLE DB Destination adapter, connect it to the database where you created the dbo.PicFile table earlier, and configure the OLE DB Destination adapter to load dbo.PicFile:


A successful test execution of the data flow task will appear as shown:


The Import Column transformation is a powerful way to load files into database blob columns.
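The file-to-blob pattern itself is not SSIS-specific. As a hedged illustration of the same idea in miniature (using Python's sqlite3; the table mirrors dbo.PicFile, and the file content here is made-up sample bytes):

```python
import sqlite3
import tempfile
import os

# Create a throwaway database with a PicFile-like table.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE PicFile (
    ID INTEGER PRIMARY KEY AUTOINCREMENT,
    FilePath TEXT,
    FileContent BLOB)""")

# Write a small fake "image" file, then stream its bytes into the blob column.
with tempfile.NamedTemporaryFile(delete=False, suffix=".png") as f:
    f.write(b"\x89PNG fake image bytes")
    path = f.name

with open(path, "rb") as src:
    conn.execute("INSERT INTO PicFile (FilePath, FileContent) VALUES (?, ?)",
                 (path, src.read()))

blob = conn.execute("SELECT FileContent FROM PicFile").fetchone()[0]
print(len(blob))   # size in bytes of the stored file content
os.unlink(path)
```

Conceptually this is what the Import Column transformation does for each row: take a path column, read the file's bytes, and hand them to the destination as a blob column.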

Read SSIS and Blobs, Part 2 to learn even more.

If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – PowerShell Script – When Was SQL Server Last Restarted?

I have always wanted to learn a bit of scripting, and I was always searching for a good reason to learn something new. As I said, this is my learning, and I am open to learning some scripting tricks from you too. So do let me know in the comments how the below script can be optimized. Now, what was my reason to learn this script?

Someone asked me, "How can I find out when SQL Server was started?" There are multiple ways to find out, but I took the simple route and said – "Why don't you check the SQL Server Error logs?" I know the SQL Server error logs can get recycled and this data may not be available, but in most cases this gives us some indication.

So the other question was: how can we script this requirement? The below script reads the SQL Server ERRORLOG, finds a specific text, and reports the number of times the string was found, along with the earliest and most recent occurrences.

param
(
    $serverinstance = ".",   # Change this to your instance name
    # The doubled single quotes escape the quotes inside the LIKE filter below
    $stringToSearchFor = "Starting up database ''master''"
)

# Load the SMO assembly
[Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.Smo') | Out-Null

# Connect to the server instance
$server = New-Object -TypeName "Microsoft.SqlServer.Management.Smo.Server" -ArgumentList $serverinstance

$searchCriteria = "Text like '%{0}%'" -f $stringToSearchFor

# Read every error log archive and collect the matching entries
$SQLErrorLogs = $server.EnumErrorLogs()
$SearchEntry = @()
ForEach ($SQLErrorLog in $SQLErrorLogs)
{
    $SearchResult = $server.ReadErrorLog($SQLErrorLog.ArchiveNo).Select($searchCriteria) |
        Select-Object -Property LogDate, Text
    $SearchEntry = $SearchEntry + $SearchResult
}

# Summarize: occurrence count plus the earliest and latest occurrences
$MeasureOccurences = $SearchEntry | Measure-Object -Property LogDate -Minimum -Maximum
$SQLSearchInfo = New-Object psobject -Property @{
    SearchText      = $stringToSearchFor
    Occurrences     = $MeasureOccurences.Count
    MinDateOccurred = $MeasureOccurences.Minimum
    MaxDateOccurred = $MeasureOccurences.Maximum
}
Write-Output $SQLSearchInfo | Format-Table -AutoSize

Do let me know if you have ever used such scripting capability to search your error logs. You can change the search string and the default instance to suit your environment.
I wrote this generically because I was planning to use it to find error messages inside the ERRORLOG, like "I/O requests taking longer than 15 seconds to complete". The script can be modified to your needs. Do share your scripts too in the comments so that others can benefit.
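For readers who prefer Python, the same idea can be sketched for any plain-text log (a hedged example: the timestamp format below is an assumption for illustration, not the real ERRORLOG layout):

```python
from datetime import datetime

def search_log(lines, needle):
    """Count lines containing needle and report first/last timestamps."""
    hits = []
    for line in lines:
        if needle in line:
            # assume each matching line starts with 'YYYY-MM-DD HH:MM:SS'
            stamp = datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")
            hits.append(stamp)
    return {
        "SearchText": needle,
        "Occurrences": len(hits),
        "MinDateOccurred": min(hits) if hits else None,
        "MaxDateOccurred": max(hits) if hits else None,
    }

log = [
    "2015-05-01 08:00:01 Server Starting up database 'master'",
    "2015-05-02 09:30:00 spid7 Some other message",
    "2015-05-03 07:15:42 Server Starting up database 'master'",
]
result = search_log(log, "Starting up database 'master'")
print(result["Occurrences"])   # → 2
```

As with the PowerShell version, the earliest occurrence gives a hint about the oldest retained log, and the latest one approximates the most recent restart.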

Reference: Pinal Dave (http://blog.sqlauthority.com)