SQL SERVER – System procedures to know SQL Server Version

There are several ways to find the version of SQL Server. But did you know that there are two system procedures through which you can get it? They are SP_SERVER_INFO and XP_MSVER.

EXEC sp_SERVER_INFO

If you execute the above, you get a result set with information about the server. The second row shows the version number.

Result is

attribute_id  attribute_name  attribute_value
------------  --------------  ---------------------------------------
1             DBMS_NAME       Microsoft SQL Server
2             DBMS_VER        Microsoft SQL Server 2012 – 11.0.3000.0
...
EXEC xp_MSVER

This is very similar to SP_SERVER_INFO and also shows the version number (in the second row of the result set).

Result is

Index  Name            Internal_Value  Character_Value
-----  --------------  --------------  --------------------
1      ProductName     NULL            Microsoft SQL Server
2      ProductVersion  720896          11.0.3153.0
...
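
Beyond these two procedures, the same details are available from T-SQL built-ins as well; both of the following are documented and have been around for many versions:

-- Full version string, similar to the DBMS_VER attribute above
SELECT @@VERSION;
-- Individual pieces via SERVERPROPERTY
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
SERVERPROPERTY('ProductLevel') AS ProductLevel,
SERVERPROPERTY('Edition') AS Edition;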

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Find the Growth Size for All files in All Databases

I am a big-time advocate of some common best practices that I see in the industry. These have been ingrained into every session that I do, be it SQL Server, MySQL or any other database. Though I work extensively on SQL Server and from time to time on other databases, the concepts and basics have remained the same across all these variants. One of the practices that I always advocate is the proper use of the "Auto Growth" settings inside SQL Server. The setting is good, but I always say it is there as a safety net. In reality this setting should never be triggered, and as a DBA it is important that we know when the files are getting expanded. As a DBA, you need to build capacity in your environment before these settings hit the server. For example, a 10% growth doesn't look alarming at first but can be dangerous: 10% of a 1GB database file is a 100MB growth, while a 100GB database will grow by 10GB. As a DBA, we are required to be cognizant of the setting and know when the growth happens.
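
If a file is still configured with percent growth, switching it to a fixed increment is a one-line change with ALTER DATABASE ... MODIFY FILE. A minimal sketch, assuming a hypothetical database YourDB with a logical data file named YourDB_Data:

-- Switch from percent growth to a fixed 256MB increment
ALTER DATABASE YourDB
MODIFY FILE (NAME = N'YourDB_Data', FILEGROWTH = 256MB);
GO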

Not very long ago, one of my blog readers sent me the email below.

Hi Pinal,
My apologies for sending you a direct email, but this was really urgent. Do you know a quick way to find the growth size set for all the databases on a SQL instance? I was able to figure out that I can use sp_helpfile. Unfortunately, it only gives details about the files of the database in which I run the stored procedure.

Any quick reply would be appreciated. Thanks in advance!

As always, I searched for it at search.sqlauthority.com and was surprised to see that there were no results. Here is the query which I came up with.

SELECT 'Database Name' = DB_NAME(database_id)
,'FileName' = name
,file_id
,'size' = CONVERT(NVARCHAR(15), CONVERT(BIGINT, size) * 8) + N' KB'
,'maxsize' = (
CASE max_size
WHEN -1 THEN N'Unlimited'
ELSE CONVERT(NVARCHAR(15), CONVERT(BIGINT, max_size) * 8) + N' KB'
END
)
,'growth' = (
CASE is_percent_growth
WHEN 1 THEN CONVERT(NVARCHAR(15), growth) + N'%'
ELSE CONVERT(NVARCHAR(15), CONVERT(BIGINT, growth) * 8) + N' KB'
END
)
,'type_desc' = type_desc
FROM sys.master_files
ORDER BY database_id
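
To flag only the files that still use the risky percentage setting discussed above, you can filter the same catalog view on is_percent_growth:

-- List only files whose growth is still set as a percentage
SELECT DB_NAME(database_id) AS [Database Name]
,name AS [FileName]
,CONVERT(NVARCHAR(15), growth) + N'%' AS [growth]
FROM sys.master_files
WHERE is_percent_growth = 1
ORDER BY database_id;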


If you find a better way, please share it via comments. The best way to learn is to learn from each other. Have you ever encountered this situation yourself? Do let me know.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Mistake to Avoid: Duplicate and Overlapping Indexes – Notes from the Field #073

[Note from Pinal]: In this episode of the Notes from the Field series, database expert Kathi Kellenberger explains indexes and their impact. We often believe that indexes will improve the performance of a query, but that is not always true. There are cases when indexes can reduce performance as well. Read the experience of Kathi in her own words.


Having the correct indexes in place can improve the performance of your queries immensely. Knowing just what the correct indexes are, however, is not an easy task. SQL Server provides tools to help you create indexes based on a specific query or the workload that has run since the last restart of the instance. While these are handy tools, you must take the guidance they give with the time-honored grain of salt.

Having done my share of index tuning over the past few years, I have seen the consequences of blindly creating indexes based on the suggestions of the Database Engine Tuning Advisor, the missing index DMVs, or the missing index recommendation found in the execution plan. It's easy to copy a slow-running query into the Query window of SSMS and view the execution plan, only to find that an index can be created to drastically improve the query performance. The problem is that this recommendation does not consider indexes that are already in place. By creating every index that these tools suggest, you will end up with too many indexes and many that overlap. At first you may not think this is a problem: more is better, right?

Having too many nonclustered indexes can cause numerous problems. First, unneeded indexes take up space. This impacts storage costs, backup and recovery times, and index maintenance times. Indexes must be kept up to date whenever data changes, so the performance of inserts, updates, and deletes suffers as nonclustered indexes pile up. Have you ever heard of a SELECT query that runs more slowly because there are too many indexes on a table? I have seen it happen. When the optimizer comes up with a plan for a query, it must consider the available indexes, the three physical join types, the order of joins, and so on; the number of plan choices increases exponentially. The optimizer won't take long to come up with a plan, however, and will sometimes stop with a "good enough" plan. It's possible that the optimizer didn't have enough time to figure out the best index because there were too many to consider.

To avoid creating unnecessary indexes, always take a look at the existing indexes on the table when you think you should add a new one. Instead of creating a brand new index, maybe you can just add key or included columns to an existing index. Following this practice will help keep the number of indexes from spiraling out of control. Another thing to watch out for is the cluster key. The cluster key is included in every nonclustered index; it's there to locate rows in the clustered index, but you won't see it in the nonclustered index definition. The index tools will often tell you to add the cluster key as an included column, but that is not necessary.

By following all of the index recommendations from the tools without considering other indexes, you will end up with tables that resemble the following:

CREATE TABLE dbo.TestIndexes (Col1 INT, Col2 VARCHAR(10), Col3 DATE, Col4 BIT);
CREATE INDEX ix_TestIndexes_Col1 ON dbo.TestIndexes (Col1);
CREATE INDEX ix_TestIndexes_Col1_Col2 ON dbo.TestIndexes (Col1, Col2);
CREATE INDEX ix_TestIndexes_Col1_Col2_Col3 ON dbo.TestIndexes (Col1, Col2, Col3);
CREATE INDEX ix_TestIndexes_Col2 ON dbo.TestIndexes (Col2);
CREATE INDEX ix_TestIndexes_Col1_includes1 ON dbo.TestIndexes (Col1) INCLUDE (Col4);
CREATE INDEX ix_TestIndexes_Col1_includes2 ON dbo.TestIndexes (Col1) INCLUDE (Col2);

You may think that this is a contrived example, but I see this pattern all the time. How do you find the overlapping indexes that need to be cleaned up? There are many scripts available, but here is a simple script that just looks at the first two index keys for duplicates:

WITH IndexColumns AS (
SELECT '[' + s.name + '].[' + t.name + ']' AS TableName,
i.name AS IndexName, c.name AS ColumnName, i.index_id, ic.index_column_id,
COUNT(*) OVER (PARTITION BY t.object_id, i.index_id) AS ColCount
FROM sys.schemas AS s
JOIN sys.tables AS t ON t.schema_id = s.schema_id
JOIN sys.indexes AS i ON i.object_id = t.object_id
JOIN sys.index_columns AS ic ON ic.object_id = i.object_id
AND ic.index_id = i.index_id
JOIN sys.columns AS c ON c.object_id = ic.object_id
AND c.column_id = ic.column_id
WHERE ic.is_included_column = 0
)
SELECT DISTINCT a.TableName, a.IndexName AS Index1, b.IndexName AS Index2
FROM IndexColumns AS a
JOIN IndexColumns AS b ON b.TableName = a.TableName
AND b.IndexName <> a.IndexName
AND b.index_column_id = a.index_column_id
AND b.ColumnName = a.ColumnName
AND a.index_column_id < 3
AND a.index_id < b.index_id
AND a.ColCount <= b.ColCount
ORDER BY a.TableName, a.IndexName;
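
Because the script above deliberately ignores included columns (is_included_column = 0), it helps to also see what each index INCLUDEs before deciding what to consolidate. Here is a small companion query along the same lines; it is my addition, not part of the original script:

-- List the included columns of each index
SELECT '[' + s.name + '].[' + t.name + ']' AS TableName,
i.name AS IndexName, c.name AS IncludedColumn
FROM sys.schemas AS s
JOIN sys.tables AS t ON t.schema_id = s.schema_id
JOIN sys.indexes AS i ON i.object_id = t.object_id
JOIN sys.index_columns AS ic ON ic.object_id = i.object_id
AND ic.index_id = i.index_id
JOIN sys.columns AS c ON c.object_id = ic.object_id
AND c.column_id = ic.column_id
WHERE ic.is_included_column = 1
ORDER BY TableName, IndexName;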

Once you find indexes that are subsets or even exact duplicates of other indexes, you should manually review each match and figure out what can be deleted or consolidated. Just be sure to test your changes before implementing them in production. After reviewing the information, I came up with the following changes:

DROP INDEX ix_TestIndexes_Col1 ON TestIndexes;
DROP INDEX ix_TestIndexes_Col1_Col2 ON TestIndexes;
DROP INDEX ix_TestIndexes_Col1_Col2_Col3 ON TestIndexes;
DROP INDEX ix_TestIndexes_Col1_includes1 ON TestIndexes;
DROP INDEX ix_TestIndexes_Col1_includes2 ON TestIndexes;
CREATE INDEX ix_TestIndexes_Col1_Col2_Col3_includes ON TestIndexes
(Col1, Col2, Col3) INCLUDE (Col4);

Now there are two indexes, they do not overlap, and I haven't lost anything that was in place before.

I always say that index tuning is both a science and an art. Now when you use the science of missing index information you will know how to apply the art of avoiding duplicates.

If you want to get started with BIML with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Knowing Nested Transactions Behavior with SQL Server

SQL Server is an interesting database system, and it amazes me that every single day I get to learn something new. This blog has been my way to share these learnings. Every time I write something new, my blog readers have been kind enough to push me with interesting comments, from which I get to learn a lot more. I am a firm believer that learning gets enhanced every time I share something.

This blog is an extension of what I wrote a couple of weeks back: SQL SERVER – Error – Msg 6401, Level 16: Cannot roll back Transaction. One of my blog readers asked me: doesn't SQL Server create or start a new transaction whenever we do these nested transactions? This was a valid and understandable question. I was thinking of a way to show that SQL Server will not create any new transaction even in a nested format. Finally, I came up with this script to show the behavior.

We will create the script in the following sequence:

  • Create our database and a dummy table
  • Create a transaction and enter some value into the table
  • Create a nested transaction and enter some value into the table
  • Look at the Transaction Log and Transaction ID to see if they are any different
  • Create a ROLLBACK operation and show what is entered in TLogs

-- Create our database for testing
USE MASTER;
GO
CREATE DATABASE TransactionsDemo;
GO
USE TransactionsDemo;
GO
-- Create a table for testing
CREATE TABLE tbl_SQLAuth (Col1 INT);
GO
SET NOCOUNT ON;
GO
ALTER DATABASE [TransactionsDemo] SET RECOVERY SIMPLE WITH NO_WAIT
GO
CHECKPOINT

Let us now insert data into our table using transactions.

-- Create an explicit transaction and insert single row
BEGIN TRAN OuterTransaction;
GO
INSERT INTO tbl_SQLAuth VALUES (1);
GO
-- Create an explicit, so-called nested transaction and insert single row
BEGIN TRAN InnerTransaction;
GO
INSERT INTO tbl_SQLAuth VALUES (2);
GO

At this point, if you check the transaction log records, they show something interesting. The log can be queried using the undocumented function fn_dblog().

-- Look at the TLogs for entries now.
SELECT Operation, [Transaction ID], Description FROM fn_dblog(NULL, NULL)

If you check the Transaction ID column, you can see that the values for both records point to the same transaction ID. So there is no separate transaction created for the nested transaction.

To extend our understanding, let us try to roll back the transaction. This will roll back both rows, writing a compensating record in the TLog for each. So let us see what happens.

-- Rollback the OuterTransaction
ROLLBACK TRAN OuterTransaction;
GO

Let us look at the TLog entries now.
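
The same fn_dblog() query from earlier works here:

-- Look at the TLogs again after the rollback
SELECT Operation, [Transaction ID], Description FROM fn_dblog(NULL, NULL)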

As you can see, both rows have been rolled back using a COMPENSATION record for each of the rows that were inserted in the transaction log. The final ABORT_XACT record confirms the completion of our transaction.
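
Once done, the demo database can be dropped to leave the instance clean (a small addition to the original script; it assumes no other connections to the database):

-- Clean up
USE MASTER;
GO
DROP DATABASE TransactionsDemo;
GO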

Have you ever learnt about transaction context like this? It is sometimes fun when someone pushes us to learn the basics the hard way. I hope you liked this blog and learnt something new today. Let us learn together.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Where is ERRORLOG? Various Ways to Find its Location

Whenever someone reports some weird error in my blog comments or sends an email about it, I always ask them to share the SQL Server ERRORLOG file. There have been many occasions where I need to guide them to find the location of the ERRORLOG file generated by SQL Server. Most DBAs are intelligent and know some of these, but this is my attempt to share my learnings.

I decided to write this blog so that I can reuse it rather than sending the steps every time. At this point I must point out that even if the name says ERRORLOG, it contains not only errors but informational messages as well. Here are various ways to find the location of the SQL Server Errorlog:

A) If SQL Server is running and we are able to connect to it, then we can do various things. We can simply connect to SQL Server and run xp_readerrorlog.

USE MASTER
GO
EXEC xp_readerrorlog 0, 1, N'Logging SQL Server messages in file'
GO

If you can't remember the above command, just run xp_readerrorlog and find the line which says "Logging SQL Server messages".
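
While we are still connected, there is one more shortcut worth noting: on recent versions (SQL Server 2012 and later, to the best of my knowledge), the documented SERVERPROPERTY function returns the path directly. Do verify availability on your build:

-- Returns the full path of the current Errorlog file
SELECT SERVERPROPERTY('ErrorLogFileName') AS ErrorLogLocation
GO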

B) If we are not able to connect to SQL Server, then we should use SQL Server Configuration Manager. We need to find the startup parameter starting with -e. Below is the place in SQL Server Configuration Manager (SQL 2012 onwards) where we can see them.

C) If you don't want to use either of those ways, then here is a little-known secret. The ERRORLOG location is one of the startup parameters, and its value is stored in a registry key; here is the key on my server. SQLArg1 shows the parameter starting with -e, which points to the Errorlog file.

Here is the key which I highlighted in image: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL12.SQL2014\MSSQLServer\Parameters\

Note that "MSSQL12.SQL2014" would vary based on the SQL Server version and instance name that is installed. Here is a quick table with version references:

SQL Server Version    Key Name
SQL Server 2008       MSSQL10
SQL Server 2008 R2    MSSQL10_50
SQL Server 2012       MSSQL11
SQL Server 2014       MSSQL12

In SQL Server 2005, we would see a key name in the format of MSSQL.n (like MSSQL.1); the number n would vary based on the instance ID.
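
If you would rather not open the registry by hand, the undocumented xp_instance_regread procedure can read the instance-relative Parameters key for you; it resolves the MSSQLxx.InstanceName portion automatically. A sketch (undocumented, so treat it as a troubleshooting aid; SQLArg1 usually holds -e, but the argument order can vary per server):

-- Read a startup parameter from this instance's Parameters key
DECLARE @SQLArg NVARCHAR(255)
EXEC master.dbo.xp_instance_regread
N'HKEY_LOCAL_MACHINE',
N'Software\Microsoft\MSSQLServer\MSSQLServer\Parameters',
N'SQLArg1',
@SQLArg OUTPUT
SELECT @SQLArg AS StartupParameter
GO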

Here is the key where we can get the mapping of instance ID and directory: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL.

In the above image, you can see that this computer has a default instance (Instance Name MSSQLSERVER) of SQL Server 2012 and a named instance (Instance Name SQL2014) of SQL Server 2014.

In case you are contacting me for any error, get the Errorlog using this blog.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Backup and Restore Behavior of ReadOnly Filegroup Databases

Last week I wrote about SQL SERVER – Marking Filegroup as ReadOnly with SQL Server, and it got me interested in a few key questions. One of the questions someone asked was: how will backups behave? Is there anything I need to know about ReadOnly filegroups? Will these databases, when restored, carry this setting along? So in this blog post, let me take a few simple steps through the learning journey I had when working with ReadOnly filegroups.

TSQL Scripts

Let us start out by creating the database first.

CREATE DATABASE [ReadOnlyDB]
CONTAINMENT = NONE
ON PRIMARY
( NAME = N'ReadOnlyDB', FILENAME = N'C:\Temp\ReadOnlyDB.mdf' , SIZE = 4024KB , FILEGROWTH = 1024KB ),
FILEGROUP [ReadOnlyDB_FG]
( NAME = N'ReadOnlyDB_FG', FILENAME = N'C:\Temp\ReadOnlyDB_FG.ndf' , SIZE = 4096KB , FILEGROWTH = 1024KB )
LOG ON
( NAME = N'ReadOnlyDB_log', FILENAME = N'C:\Temp\ReadOnlyDB_log.ldf' , SIZE = 20480KB , FILEGROWTH = 10%)
GO
-- Mark the filegroup read-only
ALTER DATABASE ReadOnlyDB MODIFY FILEGROUP ReadOnlyDB_FG READ_ONLY;
GO

I have marked the filegroup as Read_Only. Next, let us take a FULL backup.

BACKUP DATABASE [ReadOnlyDB]
TO  DISK = N'C:\Temp\ReadOnlyDB.bak'
WITH NOFORMAT, INIT,
NAME = N'ReadOnlyDB-Full Database Backup',
SKIP, NOREWIND, NOUNLOAD,  STATS = 10, CHECKSUM
GO

Next we will drop the database and restore the same.

-- Clean up time
USE MASTER
GO
DROP DATABASE ReadOnlyDB
GO
USE [master]
RESTORE DATABASE [ReadOnlyDB] FROM  DISK = N'C:\Temp\ReadOnlyDB.bak' WITH  FILE = 1,  NOUNLOAD,  STATS = 5
GO

In the above command, we restored the database back to the same location. Now let us go ahead and check the filegroup's read_only attribute.

USE ReadOnlyDB
GO
-- Check the status
SELECT type_desc, physical_name, is_read_only
FROM sys.database_files
GO

You can see our secondary filegroup is still marked as read_only.
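
As an extra check (my addition, not in the original post), you can try to allocate a table on the restored filegroup; it should fail because the filegroup came back read-only. Once done, the demo database can be dropped:

-- This should fail: the filegroup is still read-only after the restore
CREATE TABLE dbo.tbl_OnReadOnlyFG (Col1 INT) ON [ReadOnlyDB_FG];
GO
-- Final clean up
USE MASTER
GO
DROP DATABASE ReadOnlyDB
GO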

There is one important learning that I got from this experiment. Database filegroups that are marked as read_only are retained as part of a full backup, and when we restore such backups, these settings get carried forward into the restored database.

I hope you got some learning as part of this experiment. Are you using these concepts in your environments? Do let us know via the comments below.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #012 – Steps to Restore Bak File to Database

Here is a question I was asked the other day at a conference.

Question: If I have a .bak file and I want to restore it to a database, what are the various steps involved? Assume that I do not know much about that backup file.

Answer: I love this question, as it has a clear need, but there is ambiguity in the process.

Step 1: Retrieve the logical file names from the backup file.

Step 2: Bring the database to single user mode.

Step 3: Restore the database with the help of the logical file names retrieved earlier.

Step 4: Bring the database back to multi user mode.

Here are all four steps together:

----Retrieve logical file names
RESTORE FILELISTONLY
FROM DISK = 'D:\BackUpYourBackUpFile.bak'
GO
----Set database to single user mode
ALTER DATABASE YourDB
SET SINGLE_USER WITH
ROLLBACK IMMEDIATE
GO
----Restore database
RESTORE DATABASE YourDB
FROM DISK = 'D:\BackUpYourBackUpFile.bak'
WITH MOVE 'YourMDFLogicalName' TO 'D:\DataYourMDFFile.mdf',
MOVE 'YourLDFLogicalName' TO 'D:\DataYourLDFFile.ldf'
GO
----Set database back to multi user mode
ALTER DATABASE YourDB SET MULTI_USER
GO
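
Since the question assumes you know little about the backup file, RESTORE HEADERONLY is also worth keeping handy; it lists the backup sets inside the file (database name, backup type, dates) before you commit to a restore:

----Inspect the backup sets contained in the file
RESTORE HEADERONLY
FROM DISK = 'D:\BackUpYourBackUpFile.bak'
GO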

Here is the blog post I wrote earlier, which describes the same process: SQL SERVER – Restore Database Backup using SQL Script (T-SQL).

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Trivia – Days in a Year

Every time I write a blog, I tend to get back to most of the older ones via search. Most of you ask me questions, and I do a few quick searches to make sure I am able to address them. Yesterday, as I was searching, an interesting question came my way: a friend said he was asked in an interview how to efficiently find the number of days in a given year.

The question was interesting, but what baffled me was: do people still ask such questions? It was a challenge that I wanted to share with you. I took the challenge and immediately got to my SQL Server machine to solve it. The instinctive behavior was to use some basic methods that involved the DATEADD and DATEPART options.

The first solution that I got was:

--This method will work on SQL Server 2005 and above.
--Date 0 is 1900-01-01; adding (@year - 1899) years lands on 1 Jan of the
--following year, and subtracting a day gives 31 Dec of @year, whose
--day-of-year number is the total days in the year.
DECLARE @year AS INT
SET @year = 2012
SELECT DATEPART(dy, (DATEADD(YEAR, @year - 1899, 0) - 1))
AS [TOTAL NO OF DAYS]
GO

Immediately I sent the above option. Then he quizzed me, asking: can you use some of the new date and time functions that were introduced with SQL Server 2012? Now, I was not prepared for the googly that came my way. But after getting my head around it, I came up with the below query.

--This technique will work in SQL Server 2012 and above.
DECLARE @year AS INT
SET @year = 2012
SELECT DATEPART(dy, DATEFROMPARTS(@year, 12, 31))
AS [TOTAL NO OF DAYS]

Woot!!! That was not as simple, as I had to search my blog for ideas.

Quiz: Can you come up with a solution that involves a smaller number of functions? Can you use some of the new SQL Server 2012 date functions in your solution? Let me know via comments.
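
To get the ball rolling, here is a sketch of my own (not from the original post) that uses no date functions at all, just the Gregorian leap-year rule in plain arithmetic:

--No date functions: 365 plus 1 for leap years
--(divisible by 4, except century years not divisible by 400)
DECLARE @year AS INT
SET @year = 2012
SELECT 365 + CASE
WHEN (@year % 4 = 0 AND @year % 100 <> 0) OR @year % 400 = 0 THEN 1
ELSE 0
END AS [TOTAL NO OF DAYS]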

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Add Node in Cluster – Rule “Node and cluster edition match” failed

Every day I get many emails and blog comments, with at least one new problem daily. Recently, one of my blog readers sent an email asking for assistance. She already had a two-node Windows cluster; let us call the nodes Node1 and Node2. A SQL Server clustered instance was successfully installed on Node1. From SQL 2008 onwards, we need to run setup on Node2 and perform the "AddNode" action by choosing the option below.

While performing AddNode, there was a Rule Failure.

—————————
Rule Check Result
—————————

Rule “Node and cluster edition match” failed.

The edition of the new node does not match the edition of the cluster.

—————————
OK
—————————

I asked her to share the setup logs from Node2, and while looking at them, I found the below.

2015-03-10 12:00:22 Slp: Initializing rule      : Node and cluster edition match

2015-03-10 12:00:22 Slp: Rule applied features  : ALL

2015-03-10 12:00:22 Slp: Rule is will be executed  : True

2015-03-10 12:00:22 Slp: Init rule target object: Microsoft.SqlServer.Configuration.SetupExtension.AddNodeEditionBlock

2015-03-10 12:00:22 Slp: The edition user specified is: ENTERPRISE

2015-03-10 12:00:22 Slp: The edition of the cluster instance: STANDARD

2015-03-10 12:00:22 Slp: Evaluating rule        : AddNodeEditionBlock

2015-03-10 12:00:22 Slp: Rule running on machine: Node2

2015-03-10 12:00:22 Slp: Rule evaluation done   : Failed

2015-03-10 12:00:22 Slp: Rule evaluation message: The edition of the new node does not match the edition of the cluster.

If you look closely at the highlighted lines, you should be able to understand the issue. In case you didn't, here it is.

The installed SQL Server 2008 R2 instance is of "Standard" edition (shown as the edition of the cluster instance), while the media being used for the AddNode operation was of "Enterprise" edition (shown as the edition the user specified).

The solution to the issue is very straightforward. She was supposed to use media of the same edition as what was installed on Node1. Here is the screen where we can see the edition of the media.

The edition of the media is decided by the product key provided in the previous screen.
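
If you are ever unsure which edition is already installed on the cluster, a quick T-SQL check (my addition) saves digging through setup logs:

-- Edition and version of the installed instance
SELECT SERVERPROPERTY('Edition') AS Edition,
SERVERPROPERTY('ProductVersion') AS ProductVersion;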

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Adding a Master Data Services to your Enterprise – Notes from the Field #072

[Note from Pinal]: This is the 72nd episode of the Notes from the Field series. Master Data Services is one of the most important but least explored features of SQL Server. If you have been reading this blog, you know that when it is about BI, I always take help from LinchPin People, who are BI experts. I requested Reeves from LinchPin People to help me answer this unique question.

In this episode of the Notes from the Field series database expert Reeves Smith explains why one should add Master Data services to our enterprise. Read the experience of Reeves in his own words.


With all of the focus on big data, I want to remind everyone not to forget about data governance and master data management in general. While working on various data warehouse projects, I have found this area to be often overlooked. I want to stress that if you are building a data warehouse or a reporting system, then data governance should be one of the key components of almost any project.

I wanted to start off with a couple of posts to show how you could implement SQL Server Master Data Services in your enterprise. Tyler Gramm wrote a white paper, "Implementing a Phased Approach to Master Data Management" (https://msdn.microsoft.com/en-us/library/ff626496.aspx), that outlines taking small steps to start your first master data management project. In this series I will outline some steps to help bring reference data into a SQL Server Master Data Services instance, to start you off on your way.

Let’s first define data governance to better understand where master data management fits within that methodology. Data governance can be defined as a complete process with the ability to manage the data assets within the enterprise. Data governance also defines how you manage the assets that have the greatest business value for your organization’s day-to-day operations.

Where does master data management fall within a data governance approach? Master data management is one area within the data governance process that focuses on the management of reference and master data. Master data management is combined with a fair amount of business process, and I will leave the process aspect to another author. An example of a business process within master data management would be the decision on which data source is considered the truth when a discrepancy in master data arises between two source systems.

In this post I will discuss why you would want to consider Microsoft SQL Server Master Data Services. If Microsoft SQL Server is in your environment, integration will be easy, and Excel provides a great interface for the data stewards.

Note: SQL Server 2012 Master Data Services and SQL Server 2014 Master Data Services are nearly identical.

Master data management is as much a business function as a technical one. Master data is managed within SQL Server and often integrated within Extract-Transform-Load (ETL) processes, but the actual management and oversight of the data should be performed by the business. This involvement is provided with help from roles like a data steward. A data steward can be seen as a representative of the data, while the technical resources serve as the caretakers of the data. Accepting these roles is one of the first hurdles to overcome with any master data management project. The business will control most of the direction within the project, and the technical team will be in charge of system integration and support.

Setting up a Master Data Services instance within SQL Server is a fairly straightforward process after you’ve installed all of the prerequisites with Microsoft Internet Information Services (IIS). Figure 1 shows the SQL Server 2012 Master Data Services Configuration Manager. See https://msdn.microsoft.com/en-us/library/ee633744.aspx for more details on how to install and configure the server and backend processes used within the Master Data Services process.

Figure 1 – Master Data Configuration Manager

Note: You may configure the web services portion of the Master Data Services install on another machine. It does not need to be on the same machine as the SQL Server database. Multiple licenses could apply with this configuration, so check with a licensing professional.

After setting up Master Data Services, you will want to give data access to the business to enable the actual management of the data. This is best provided by the installation of the Master Data Services add-in for Excel, which you see in Figure 2. This add-in will give the business users with appropriate knowledge of the data direct access to work with and support master data from their favorite data tool, Excel.

Figure 2 – Master Data Services Add-in for Excel

Note: The add-in for Excel does not support some functions, such as model creation and hierarchy management. You will have to manage the unsupported functions from the web site.

Getting Started

One of the best projects for beginning a master data management implementation is to start with managing reference data. Reference data exists in every environment and is frequently managed within Excel spreadsheets without database durability and/or version control. Moving this data into Master Data Services from an Excel spreadsheet is a fairly straightforward task that can be accomplished with little risk.

What’s Next

In the coming post, we will walk through moving reference data from an Excel worksheet to an entity within a Master Data Services model.

At a high level the steps that we will discuss in the following post will be:

  • Create a model to store the reference data (completed on the MDS website)
  • Format the reference data in Excel
  • Click the Create Entity button on the Master Data tab within Excel and select your data

Hope this helps you get started.

If you want to get started with BIML with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)