SQL SERVER – Secure by Default – The Mantra

Generally, the talks I give at conferences revolve around performance and tuning. These are always the most sought-after topics, and I don’t mind the requests because they give me an opportunity to learn something new too. Last month, when I visited Sri Lanka for the first ever SQLSaturday event there, the response was the same. We had a fantastic Q&A discussion which involved some random questions from attendees on a variety of topics. I think every conference should have something like this “Ask the Experts” concept. It gets everyone together, and we all have a great time to say the least.

One of the topics an attendee asked about was security. “Hey Pinal, can you recollect the top 10 things we need to take care of as security practices when working with SQL Server?” It was surely a loaded question, and I was not prepared, to say the least. I said, “Give me a breather, buddy. I can actually do a complete end-to-end session when I come next time if you are interested. (This gave me a few seconds to gather my thoughts.) Now that you asked, let me throw out some of them for your reference.”

After the session ended, the gentleman who had asked the question sent me a great summary of the conversation from the Q&A session. I thought I would post it as a blog post so that we can elaborate on these points in the future. They are great starters for reference.

  • Application Layer Security: SQL Injection is a powerful method to damage or compromise a SQL Server. SQL Injection is the result of failing to validate user input at the application layer. So please validate all the user inputs every single time.
  • Be strict about what represents valid input and reject everything else.
  • Use parameterized queries and avoid string concatenation to build queries.
  • Do not divulge too much information to the end user. The end user may be an attacker. Your error messages should not reveal the exact problem.
  • Connect to the server using a least privilege account. Do not connect with System Admin privileges. I know many of you are doing this today, but please refrain henceforth.
  • Secure the communication channel between the client and SQL Server. SSL encryption should be mandated for client-server communication if the data is critical. SSL is provided through certificates issued by a Certificate Authority (CA). SSL encryption ensures the packets between the client and server are encrypted, and mitigates the detection of user credentials or data by a network sniffer.
  • Protect the SQL Server files on the physical level. The directory structure for SQL Server should be restricted to the SQL Server service and relevant accounts.
  • Protect the permissions for the registry entries for SQL Server. They should be restricted to Administrators and the SQL Server account.
  • Audit Logins by default. Logins should be audited at the Windows and SQL Server level. At the least, failed login attempts should be logged as a flag to possible intrusion attempts.
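The advice above about parameterized queries can be illustrated in T-SQL itself with sp_executesql; this is a minimal sketch, and the table dbo.Users and the parameter names are hypothetical:

```sql
-- Unsafe pattern: concatenating user input into the SQL text invites injection
-- EXEC ('SELECT * FROM dbo.Users WHERE name = ''' + @userInput + '''');

-- Safer pattern: user input travels as a typed parameter, never as SQL text
DECLARE @userInput NVARCHAR(50) = N'O''Brien';
EXEC sp_executesql
    N'SELECT * FROM dbo.Users WHERE name = @name',
    N'@name NVARCHAR(50)',
    @name = @userInput;
```

Because the value is bound as a parameter, even input containing quotes or SQL keywords is treated as data, not as part of the query.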

Tips shared about SQL Account Management

  • Make the SQL Server accounts least privileged.
  • Delete or disable any unused user accounts.
  • Disable the Windows guest account (this should be automatic for a server).
  • Enforce strong password policies. This is normally controlled via Group Policy.
  • Restrict remote logins to the server.
  • Rename the administrator account to deter brute-force attacks against well-known account names. Accounts like “sa” must be disabled/renamed at a minimum.
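The last point can be applied with two short statements; this is a sketch of one common approach, and the new name shown is, of course, hypothetical:

```sql
-- Disable the well-known sa login outright...
ALTER LOGIN sa DISABLE;

-- ...or rename it so attackers cannot target it by name
ALTER LOGIN sa WITH NAME = [svc_dba_admin];
```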

Seeing the mail, I thought this is a great start to talk when it comes to security and SQL Server. I thought this blog will be one of many to follow later on this topic because I am sure a number of you will also be interested in this topic. Do let me know some of the things you do in your environments so that we can share and learn from each other.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Basic Statistics Maintenance – Notes from the Field #083

[Note from Pinal]: This is the 83rd episode of the Notes from the Field series. Maintenance of the database is a very critical activity, and I have always seen DBAs take it very seriously. There is only one problem – there is no single solution or guidance for how to maintain the database. Everybody has their own opinion and way of doing different tasks. Statistics are one of the most important aspects of the database. The performance of an entire application can depend on statistics, as they give the SQL engine the intelligence to execute an optimal plan.

In this episode of the Notes from the Field series database expert Stuart Ainsworth explains about basic statistics maintenance.


Statistics maintenance is an important (but often overlooked) aspect of performance tuning for SQL Server.  The query optimizer relies on distribution statistics to determine how a query will be executed, with a particular emphasis on seeks vs. scans as well as estimates of the effort needed (which feed into the cost threshold for parallelism).  Out-of-date stats can impact performance significantly.

Luckily, the default settings for most databases cover most database performance scenarios; SQL Server offers three basic settings for statistics maintenance:

  • Auto Create Statistics – SQL Server will create statistics during an index creation script, or when deemed necessary to satisfy a query; enabled by default.
  • Auto Update Statistics – SQL Server will update statistics when it deems them to be outdated; enabled by default.
  • Auto Update Statistics Asynchronously – When enabled, SQL Server will update statistics after the execution of a query if it determines that an update is necessary; if disabled, the statistics will be updated first, and then the query executed. This setting is disabled by default, and there’s some controversy over whether or not it’s useful.
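These three options can be inspected and reset with plain T-SQL; a quick sketch, assuming a database named AppDB:

```sql
-- Inspect the current statistics settings
SELECT name,
       is_auto_create_stats_on,
       is_auto_update_stats_on,
       is_auto_update_stats_async_on
FROM sys.databases
WHERE name = 'AppDB';

-- Restore the defaults if they have been changed
ALTER DATABASE AppDB SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE AppDB SET AUTO_UPDATE_STATISTICS ON;
ALTER DATABASE AppDB SET AUTO_UPDATE_STATISTICS_ASYNC OFF;
```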


For the most part, SQL Server does a good job of maintaining statistics if these options are left with the defaults; however, statistics can still become stale over time if the data is updated at a slow rate of change.  If your statistics are more than a year old in your database, it’s probably time to do a refresh.

But how do you tell when your stats are out of date? There’s a catalog view in SQL Server called sys.stats that will give you a lot of information about statistics, but it’s very detailed; data is collected down to the column level, which may be overwhelming if you have lots of databases.  I suggest starting at a higher level, by taking a look at how out of date statistics are across all of your databases.   I use the following query to help me quickly identify troublesome databases:

/* Checks the last updated stats date for all databases and calculates the
percentage of stats updated within the last 24 hours. Useful to determine how
out-of-date statistics are. Also identifies whether auto update stats is on
and whether asynchronous updates are enabled. */
CREATE TABLE #dbs
( database_name VARCHAR(100)
, oldest_stats DATETIME
, newest_stats DATETIME
, percent_update_within_last_day DECIMAL(5, 2)
, is_auto_update_stats_on BIT
, is_auto_update_stats_async_on BIT
)
DECLARE @dynsql NVARCHAR(4000) = 'use ?
if db_id() > 4  -- skip the system databases
begin
WITH    x AS ( SELECT   STATS_DATE(s.object_id, stats_id) date_updated
               FROM     sys.stats s
               JOIN sys.tables t ON t.object_id = s.object_id
               WHERE    t.is_ms_shipped = 0
             ) ,
        y AS ( SELECT   MIN(date_updated) AS oldest_stats
               ,        MAX(date_updated) AS newest_stats
               FROM     x
             )
SELECT  DB_NAME() database_name
,       oldest_stats
,       newest_stats
,       SUM(CASE WHEN DATEDIFF(d, date_updated, newest_stats) <= 1 THEN 100.00 ELSE 0.00 END) / COUNT(*) "percent_update_within_last_day"
,       d.is_auto_update_stats_on
,       d.is_auto_update_stats_async_on
FROM    x
CROSS JOIN y
CROSS JOIN sys.databases d
WHERE   d.database_id = db_id()
GROUP BY oldest_stats
,       newest_stats
,       d.is_auto_update_stats_on
,       d.is_auto_update_stats_async_on
end'
INSERT INTO #dbs
( database_name
, oldest_stats
, newest_stats
, percent_update_within_last_day
, is_auto_update_stats_on
, is_auto_update_stats_async_on
)
EXEC sp_MSforeachdb @dynsql
SELECT  *
FROM    #dbs d
DROP TABLE #dbs

Results will look similar to the following:


Looking at the results, if I see that the oldest database stats are more than a year old, it’s a pretty good indicator that statistics are not being maintained by some form of ongoing maintenance operation.  If the defaults are not being used, that’s also something that needs to be investigated.

The percent_update_within_last_day is also a good trigger for me to investigate a potential issue; if less than 20% of the statistics in a database were updated in the last 24 hours, the odds of a less-than-optimal execution plan increase significantly.  If the tables in the database are large, it may take several million rows of changed data to trigger a refresh.
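When a refresh is warranted, it can be performed directly; the table name in the second statement is hypothetical:

```sql
-- Refresh all statistics in the current database using the default sampling
EXEC sp_updatestats;

-- Or target a single large table with a full scan for maximum accuracy
UPDATE STATISTICS dbo.SalesOrders WITH FULLSCAN;
```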

To refresh outdated statistics, I recommend Ola Hallengren’s maintenance scripts; they offer a lot of flexibility for developing a customized index and statistics maintenance plan, including the ability to update statistics on a scheduled basis, and focus on updating statistics that need to be refreshed.  Basic maintenance (such as this) can help prevent performance problems from occurring.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Adding Reference Data to Master Data Services – Notes from the Field #081

[Note from Pinal]: This is the 81st episode of the Notes from the Field series. Master Data Services is one of the most important, but least explored, features of SQL Server. If you have been reading this blog, you know that when it is about BI, I always take help from LinchPin People, who are BI experts. I requested Reeves from LinchPin People to help me answer this unique question.

In this episode of the Notes from the Field series, database expert Reeves Smith explains how to add reference data to Master Data Services in your enterprise. Read the experience of Reeves in his own words.


In a previous post, I talked about Master Data Services and how it can enhance your Enterprise Data Warehouse. Through the Master Data Services add-in for Excel, the business can access and manage reference data without much involvement with IT. This might sound like a bad idea, but the subject matter experts (SME) for the data often do not reside in IT. SMEs should have the ability to manage reference data, and with the Master Data Services add-in for Excel they can.

In this post, I’m going to outline the three steps needed to get your data governance people up and running after Master Data Services has been installed. The three are:

  • Create a model to store the reference data (completed on the MDS website)
  • Format the reference data in Excel
  • Create an Entity within the Master Data Services add-in for Excel

IT will only need to perform the first step of the process. The first step to manage reference data is creating a container for the reference data. This step must take place within the Master Data Services website.

Step 1 – Create a model to store the reference data

  1. Open the Master Data Services website and select System Administration, as you see in Figure 1.


Figure 1 – Master Data Services System Administration

  2. After you select System Administration, the website will list three menus: Model View, Manage, and System. Select Manage and then select Models from the pop-out menu.
  3. Click on the green plus sign to create a new model.
  4. In the Add Model dialog, which you see in Figure 2, add a model name that would be relevant to the appropriate reference data. In this example I will use Demo. Unselect all of the check boxes under the model.


Figure 2 – Add Model

  5. Test your model by connecting to it from the Master Data Services add-in for Excel.
    1. Open a blank Excel workbook and select the Master Data Services tab.
    2. Click the Connect button to create a new connection to your Master Date Services Instance.
    3. In the Master Data Explorer, which you see in Figure 3, select your newly created model.

Figure 3 – Master Data Explorer


Step 2 – Format the reference data in Excel

  1. Create an Excel worksheet with the formatted reference data and provide a header for each of the columns.

Note: Master Data Services requires a Code column and suggests the use of a Name column. These columns do not need to be literally named Code and Name, but the reference data should contain columns that serve those roles. The Code column must contain unique values.


Step 3 – Create an Entity within the Master Data Services add-in for Excel

For this example, I will create a list of customers from the Adventure Works database to enable a discussion of enhancements to the data set in a later post. Your reference data should be less complex than the customers list.

  1. With an open connection to Master Data Services and the worksheet with your reference data, click the Create Entity button, as you see in Figure 4.


Figure 4 – Master Data Services tab with the Create Entity highlighted

  2. Click the red arrow to the right of the Range text box and select the range of your reference data. Be sure to include the column headers and select the My data has headers check box, as shown in Figure 5.
  3. Select the newly created model; in my case that would be Demo.
  4. Add a new entity name and select the appropriate Code and Name columns.

Note: Master Data Services 2012 and later enables the use of identity values for the Code column if you do not want to manage unique keys.


Figure 5 – Create Entity Dialog

  5. Click OK.


At this point you should have a working model with a new entity that is fully managed within Master Data Services.


In this post, I have walked through the steps required to move reference data from an Excel workbook into a model within Master Data Services. Under MDS you get all of the benefits of a managed master data management solution, like auditability and security.

What’s Next

In a coming post, we will walk through enhancing our reference data. One enhancement we will demonstrate is adding domain values within the model. This helps business users select the correct data elements and increases the data quality of your master data.

Hope this helps.

If you want to get started with Master Data Services with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Create Login with SID – Way to Synchronize Logins on Secondary Server

How many times have you deployed a high availability solution, only to find that after a failover the application is unable to use the database and gets “login failed for user” errors? Note that this issue happens only with SQL logins, not Windows logins. You will understand the reason shortly. Here are the situations where you might face login failures.

  1. Log Shipping (reading from stand-by secondary database)
  2. AlwaysOn Availability Group. (reading from read-only secondary replica)
  3. Database Mirroring (after failover)

Let’s assume that we have SQLServerA and SQLServerB, each with one database, AppDB. As per the architecture of users and logins in SQL Server, there would be a user in the database mapped to a login. For simplicity, let’s say both the login and the user are named AppUser, and the user is in AppDB.

USE AppDB
GO
SELECT name, sid FROM sys.sysusers WHERE name = 'AppUser'
GO
USE MASTER
GO
SELECT name, sid FROM sys.sql_logins WHERE name = 'AppUser'
GO


As we can see, the SIDs match; that is why the user is mapped to the login.

Now, if we create an AlwaysOn Availability Group or configure database mirroring or log shipping, we would not be able to map the user using sp_change_users_login, because the secondary database is not writeable; it is in read-only mode.

Here is what we would see on the secondary server if the login is created using the UI.


The solution is to drop the login and re-create it with the known SID, which is stored in sys.sysusers in the database.

CREATE Login AppUser WITH password = 'password@123', SID = 0x59B662112A43D24585BFE2BF80D9BE19

Once this is done, the application can connect to the secondary database. Hopefully this helps you fix the issue when there are orphaned users in a database.
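To avoid copying the SID by hand, a small helper query on the primary can generate the statement for you; this is a sketch that assumes the login is named AppUser, and the password placeholder must be replaced with the real one:

```sql
-- Run on the primary: build a CREATE LOGIN statement that preserves the SID
SELECT 'CREATE LOGIN [' + name + '] WITH PASSWORD = ''<password here>'', SID = '
       + CONVERT(VARCHAR(514), sid, 1) AS create_login_script
FROM sys.sql_logins
WHERE name = 'AppUser';
```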

Have you ever used this syntax? Leave a comment and let me know.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – 3 Common Mistakes of Agile Development – Notes from the Field #074

[Note from Pinal]: This is the 74th episode of the Notes from the Field series.  Agile is a time-boxed, iterative approach to software delivery that builds software incrementally from the start of the project, instead of trying to deliver it all at once near the end. Storytelling is an art very few have mastered. When I requested Stuart to share his ideas about agile, he shared a very interesting insight on the subject, along with an interesting story about 3 common mistakes of agile development. I very much enjoyed his notes from the field, and I am confident that you will like it too.

In this episode of the Notes from the Field series database expert Stuart Ainsworth explains about 3 Common Mistakes of Agile Development.


I’m a developer by history, but a project manager at heart.  I’ve started becoming much more interested in helping teams improve their workflow, not just helping them write better code.  To that end, most of the development shops that I’ve worked with have struggled with the ongoing battle to get features built and shipped on a schedule that satisfies business requirements.  Good developers are proud of their craft, and want maximum time to create; business needs features to go out the door quickly in order to compete.  These goals are often in conflict with each other.

Agile methodologies (such as scrum) try to help balance this tension by encouraging active conversation between business and development, and continuously delivering working software that is flexible and adaptable to change.  In the shops where I’ve seen agile development fail to deliver, I’ve noticed the following 3 bad habits:

  1. We have a failure to communicate.

Communication sounds easy, but it’s really, really hard.  Well-defined requirements help, but they’re no substitute for ongoing mutual conversations between developers and business people.  If a requirements document is the only method of explaining what to build and when to build it, you lose the ability to forecast what comes next in terms of building blocks.

Several of the principles of the Agile Manifesto deal with communication between business and development, but my favorite one is “business people and developers must work together daily throughout the project.”  If you want to be an agile development shop, you have to be an agile business.   Business needs to understand the challenges faced by developers, and developers need to be involved in understanding the changing needs of business.

  2. Code releases are always a feature release or a bug fix.

Bug fixes are good, and features are what make money; however, if the only time your shop is releasing code is to implement a feature or fix a bug, then you’re not continually improving your product.  The temptation is to fall back into a waterfall-like methodology; deployments become huge stacks of code that are not added to the product (or operational environment) until the day that a feature is supposed to be released.  The larger the stack of code, the harder it is to test, and the greater the risk of failure.

What agile principles suggest is that you should “deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.”  The goal is to release code frequently, even if that code is only establishing groundwork for future development efforts; smaller code is easier to test, and ultimately, a feature release becomes the final step in a series of releases.

  3. Software release dates depend on the quality and completeness of the code.

This is similar to the second issue, but a slight variation; a shop that constantly changes the length of their iteration will ultimately experience burnout.  It becomes harder and harder to stay on schedule and feature releases get pushed further and further back.

I prefer to have a fixed iteration period, either every four weeks or once a month.  There’s something about a cadence that motivates people to focus and get things done.  If a developer is working on a bit of code that’s supposed to ship in a month, it’s easy to evaluate how likely that’s going to happen within a couple of weeks; if it’s not going to be complete, build the working software, and release it.  With each iteration, it becomes easier to define what can be done in a fixed-length sprint.


Agile software development lends itself to creative implementations, but it’s important to stay true to the principles of good communication, continuous improvement, and maintaining a constant pace for development.  Avoiding some basic pitfalls can help your team stay productive in the ongoing race to get features out the door.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Adding a Master Data Services to your Enterprise – Notes from the Field #072

[Note from Pinal]: This is the 72nd episode of the Notes from the Field series. Master Data Services is one of the most important, but least explored, features of SQL Server. If you have been reading this blog, you know that when it is about BI, I always take help from LinchPin People, who are BI experts. I requested Reeves from LinchPin People to help me answer this unique question.

In this episode of the Notes from the Field series database expert Reeves Smith explains why one should add Master Data services to our enterprise. Read the experience of Reeves in his own words.


With all of the focus on big data, I want to remind everyone not to forget about data governance and master data management in general. While working on various data warehouse projects, I have found this area to be often overlooked. I want to stress that if you are building a data warehouse or a reporting system, then data governance should be one of the key components of almost any project.

I wanted to start off with a couple of posts to show how you could implement SQL Server Master Data Services in your enterprise. Tyler Gramm wrote a white paper “Implementing a Phased Approach to Master Data Management” (https://msdn.microsoft.com/en-us/library/ff626496.aspx) that outlines taking small steps to start your first master data management project. In this series I will outline some steps to bring reference data into a SQL Server Master Data Services instance to start you on your way.

Let’s first define data governance to better understand where master data management fits within that methodology. Data governance can be defined as a complete process with the ability to manage the data assets within the enterprise. Data governance also defines how you manage the assets that have the greatest business value for your organization’s day-to-day operations.

Where does master data management fall within a data governance approach?  Master data management is one area within the data governance process that focuses on management of reference and master data. Master data management is combined with a fair amount of business process and I will leave the process aspect for another author. An example of a business process within master data management would be the decision on which data source is considered the truth when a discrepancy in master data arises between two source systems.

In this post I will discuss why you would want to consider Microsoft SQL Server Master Data Services. If Microsoft SQL Server is already in your environment, integration will be easy, and Excel provides a great interface for the data stewards.

Note: SQL Server 2012 Master Data Services and SQL Server 2014 Master Data Services are nearly identical.

Master data management is as much a business function as a technical one. Master data is managed within SQL Server and often integrated within Extract-Transform-Load (ETL) processes, but the actual management and oversight of the data should be performed by the business. This involvement is provided with help from roles like a data steward. A data steward can be seen as a representative of the data, while the technical resources serve as the caretakers of the data. Accepting these roles is one of the first hurdles to overcome with any master data management project. The business will control most of the direction within the project, and the technical team will be in charge of system integration and support.

Setting up a Master Data Services instance within SQL Server is a fairly straightforward process after you’ve installed all of the prerequisites with Microsoft Internet Information Services (IIS). Figure 1 shows the SQL Server 2012 Master Data Services Configuration Manager. See https://msdn.microsoft.com/en-us/library/ee633744.aspx for more details on how to install and configure the server and backend processes used within the Master Data Services process.


Figure 1 – Master Data Configuration Manager

Note: You may configure the web services portion of Master Data Services install on another machine. It does not need to be on the same machine as the SQL Server database. Multiple licenses could apply with this configuration, so check with a licensing professional.

After setting up Master Data Services, you will want to give data access to the business to enable the actual management of the data. This is best provided by the installation of the Master Data Services add-in for Excel, which you see in Figure 2. This add-in will give the business users with appropriate knowledge of the data direct access to work with and support master data from their favorite data tool, Excel.


Figure 2 – Master Data Services Add-in for Excel

Note: The add-in for Excel does not support some functions, such as model creation and hierarchy management. You will have to manage the unsupported functions from the website.

Getting Started

One of the best projects for beginning a master data management implementation is to start with managing reference data. Reference data exists in every environment and is frequently managed within Excel spreadsheets without database durability and/or version control.  Moving this data into Master Data Services from an Excel spreadsheet is a fairly straightforward task that can be accomplished with little risk.

What’s Next

In the coming post, we will walk through moving reference data from an Excel worksheet to an entity within a Master Data Services model.

At a high level the steps that we will discuss in the following post will be:

  • Create a model to store the reference data (completed on the MDS website)
  • Format the reference data in Excel
  • Click Create Entity button on the Master Data tab within Excel and select your data

Hope this helps you get started.

If you want to get started with Master Data Services with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Marking Filegroup as ReadOnly with SQL Server

My love for writing about ReadOnly databases has pushed me to explore more of this topic. After my previous blogs, a few readers asked me about marking filegroups as ReadOnly and wanted to know how this can be done. This blog post continues that journey, and in it we will learn about the ReadOnly attribute for filegroups.

TSQL Script to mark ReadOnly Filegroup

Let us create the database first.

CREATE DATABASE ReadOnlyDB ON PRIMARY
( NAME = N'ReadOnlyDB', FILENAME = N'C:\Temp\ReadOnlyDB.mdf' , SIZE = 4024KB , FILEGROWTH = 1024KB )
LOG ON
( NAME = N'ReadOnlyDB_log', FILENAME = N'C:\Temp\ReadOnlyDB_log.ldf' , SIZE = 20480KB , FILEGROWTH = 10%)
GO

Let us next create a filegroup, which will later be marked as ReadOnly.

ALTER DATABASE ReadOnlyDB ADD FILEGROUP ReadOnlyDB_FG
GO
ALTER DATABASE ReadOnlyDB ADD FILE (
name = ReadOnlyDB_FG,
FILENAME = 'c:\temp\ReadOnlyDB_FG')
TO FILEGROUP ReadOnlyDB_FG
GO

Next we would like to create a table on this specific filegroup. This will form the base for our experiment.

USE ReadOnlyDB
-- Creating our table
CREATE TABLE tbl_SQLAuth (id INT, Longname CHAR(8000))
ON ReadOnlyDB_FG

Let us start our experiment of marking filegroups as ReadOnly. One important lesson here is that we cannot mark our PRIMARY filegroup as ReadOnly.

-- Mark the PRIMARY filegroup read-only
ALTER DATABASE ReadOnlyDB MODIFY FILEGROUP [PRIMARY] READ_ONLY

Msg 5047, Level 16, State 1, Line 29
Cannot change the READONLY property of the PRIMARY filegroup.

We will be presented with the above error. Now, let us mark the other filegroup next.

-- Mark the Other filegroup read-only
ALTER DATABASE ReadOnlyDB MODIFY FILEGROUP ReadOnlyDB_FG READ_ONLY

The filegroup property ‘READ_ONLY’ has been set.

Once this succeeds, we cannot modify anything on the filegroup. Let us try a few tasks to see the errors.

-- Create a table on the Read_Only Filegroup
-- Explicitly call the FG where the table will be created.
CREATE TABLE tbl_SQLAuth_1 (id INT, Longname CHAR(8000))
ON ReadOnlyDB_FG

Msg 1924, Level 16, State 2, Line 3
Filegroup ‘ReadOnlyDB_FG’ is read-only.

As you can see, we cannot create any tables on this filegroup. If we try to insert any values into a table which resides on this filegroup, we will be presented with an error too.

-- Insert into the table attached to the Read_Only filegroup
INSERT INTO tbl_SQLAuth VALUES (1, 'SQLAuthority');

Msg 652, Level 16, State 1, Line 4
The index “” for table “dbo.tbl_SQLAuth” (RowsetId 72057594040549376) resides on a read-only filegroup (“ReadOnlyDB_FG”), which cannot be modified.

As you can see, Read_Only filegroups function just like the ReadOnly databases explained in previous blogs. The best part of Read_Only filegroups is the flexibility to lock just a part of the database files, instead of marking the whole database ReadOnly.
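You can confirm the state of each filegroup at any time from the catalog views; a quick check, run inside the ReadOnlyDB database:

```sql
-- is_read_only = 1 indicates the filegroup is marked READ_ONLY
SELECT name, is_read_only
FROM sys.filegroups;
```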

-- Clean up time
USE MASTER
GO
DROP DATABASE ReadOnlyDB
GO

As I wrap up this blog, I would want to know how many of us out there have used Read_Only filegroups in our environments. What are those scenarios you made these changes? I would like to learn from your experience. Let me know via your comments.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SCRIPT – SQL Listening on Dynamic Port?

With every release of SQL Server, I sometimes need to revisit old blogs to see how things have been enhanced. This learning experience is an ongoing process, and the best way I learn is by doing a search on the blog from time to time. Long ago I wrote a blog post about finding the port number on which SQL Server is listening: SQL SERVER – Find Port SQL Server is Listening – Port SQL Server is Running

Recently one of my blog readers sent an email asking the following.

I read your blog which talks about reading ERRORLOG and finding the port. But is there any way to find if that port is fixed port or dynamic port?

I know that the port value is stored in registry hive of SQL Server and this is instance specific. Here is the registry key for SQL Server 2014 instance running on my laptop.

(Screenshot: registry key showing the TCP port values for the SQL Server 2014 instance)

If SQL Server is configured to use dynamic ports, then TcpDynamicPorts holds the port value. In the case of a fixed port (which I have on my SQL instance), we would instead see the value under TcpPort.

To get the same information using T-SQL, I have written a small script which would help you. It detects both dynamic and static ports.

-- Detect whether the instance uses a dynamic or a static TCP port
-- (reads the same registry values shown above via xp_regread)
DECLARE @ServerName NVARCHAR(128)
	,@InstanceID NVARCHAR(128)
	,@InstanceName NVARCHAR(128)
	,@tcp_port NVARCHAR(10)
	,@InstanceKey NVARCHAR(255)

SELECT @ServerName = CAST(SERVERPROPERTY('MachineName') AS NVARCHAR(128))
	,@InstanceName = ISNULL(CAST(SERVERPROPERTY('InstanceName') AS NVARCHAR(128)), 'MSSQLSERVER')

-- Translate the instance name into the instance ID used in the registry path
EXECUTE xp_regread @rootkey = 'HKEY_LOCAL_MACHINE'
	,@key = 'SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL'
	,@value_name = @InstanceName
	,@value = @InstanceID OUTPUT

SELECT @InstanceKey = 'SOFTWARE\Microsoft\Microsoft SQL Server\' + @InstanceID + '\MSSQLServer\SuperSocketNetLib\Tcp\IPAll'

-- A value under TcpDynamicPorts means the instance is using a dynamic port
EXECUTE xp_regread @rootkey = 'HKEY_LOCAL_MACHINE'
	,@key = @InstanceKey
	,@value_name = 'TcpDynamicPorts'
	,@value = @tcp_port OUTPUT

IF @tcp_port IS NOT NULL
	SELECT 'SQL Server (' + @InstanceName + ') uses dynamic tcp port: ' + @tcp_port
ELSE
BEGIN
	-- Otherwise the port is fixed and stored under TcpPort
	EXECUTE xp_regread @rootkey = 'HKEY_LOCAL_MACHINE'
		,@key = @InstanceKey
		,@value_name = 'TcpPort'
		,@value = @tcp_port OUTPUT

	SELECT 'SQL Server (' + @InstanceName + ') on ' + @ServerName + ' uses static tcp port: ' + @tcp_port
END
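As a quick cross-check, the port a live TCP connection is actually using can also be read from the dynamic management views; this is an alternative sanity check, not part of the registry-based script above:

```sql
-- Show the TCP port the current connection is using
-- (returns NULL for non-TCP connections such as shared memory)
SELECT local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```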

Generating such scripts takes time and the best way to learn these is by sharing. So do you have any other script similar to this that is worth a share for other readers? Please share them via the comments section.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to Add User Across Databases

In this era of standards, many organizations want to use some of the cutting-edge technologies out there. In many of these interactions, I was lucky enough to meet a few DBAs who gave me interesting challenges from within their organizations. I still vividly remember sharing a script with a DBA who reached out to me to learn how to add a user to all the databases within their SQL Server instance.

On first look, I asked "why?" The answer was simple. Their organization's data gets audited once a year, and the auditors need to be given db_datareader privileges during the time of the audit. So they wanted to make sure this was done in an automated fashion. I immediately recommended my Pluralsight course, but they were not on the latest version and wanted something for previous versions.

So here is the script that I gave them:

-- Note: assumes a server-level login named SQLAuth_Auditor already exists
EXEC sp_MSforeachdb '
IF ''?'' NOT IN (''tempdb'')
BEGIN
	USE [?]
	-- Create the user only if it does not already exist in this database
	IF NOT EXISTS (SELECT * FROM sys.sysusers WHERE name = ''SQLAuth_Auditor'')
	BEGIN
		CREATE USER [SQLAuth_Auditor] FOR LOGIN [SQLAuth_Auditor]
		PRINT ''Added User for ?''
	END
	EXEC sp_addrolemember ''db_datareader'', ''SQLAuth_Auditor''
	PRINT ''Added db_datareader for ?''
END
ELSE
	PRINT ''SKIPPED the ? database''
'
The script uses an undocumented procedure, sp_MSforeachdb, which I have used in earlier blogs too to achieve some interesting requirements.

Here in the script, we check for the existence of the SQLAuth_Auditor user in each database. If it is not available, we create the user, and then add it to the db_datareader role in that database.
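To verify the result after running the script, a quick check across all databases can use the same procedure; a sketch, assuming the SQLAuth_Auditor user name from above:

```sql
-- List every database where the SQLAuth_Auditor user now exists
EXEC sp_MSforeachdb '
SELECT ''?'' AS database_name, name
FROM [?].sys.database_principals
WHERE name = ''SQLAuth_Auditor''
'
```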

My ask: How many of you have used the sp_MSforeachdb procedure in your environment? What are the use cases where you have successfully used it?

Quick Quiz:

To achieve the above task, can't we just add the user to the model database, so that all databases created afterwards automatically get the user? Is there any catch, provided there are no user databases currently in the instance? Do we still need a script like this? Let me know your thoughts.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Security Conversations and Notes with a DBA

One of the great things about working at Pluralsight is that I get to meet a lot of people (trust me on this). I am at a loss for words when it comes to how my day job helps me achieve some of the coolest things, like presentations at a number of conferences. The Pluralsight booth is always a place where we see some of the great minds from the industry swing by. So in case you get to a conference and see a Pluralsight booth, feel free to swing by and take a moment to talk to someone out there; I guarantee your time will be well spent.

At one conference, I vividly remember a conversation with an enthusiast who came from the banking vertical. He attended all my sessions and made sure he caught me at the booth, as I spend most of my time there when not delivering sessions. So it was in between sessions that this gentleman caught me and started asking me a number of questions around security. The whole discussion took close to an hour, but I was glad someone was serious enough about trying to understand these basics.

I was super excited and was talking about some of the latest and greatest security capabilities of SQL Server 2014, which I explained in my Pluralsight course. But the individual said they were running SQL Server 2008 R2. That got me into a fix, because I had to rewind my mind to give him recommendations. I kept thinking hard as I started giving each of the recommendations.

In this blog, let me recollect some of the conversations I had and pen them down for everyone’s reference and most importantly for my reference.

About User Accounts

SQL Server executes as a set of Windows services. SQL Server is required to run under a domain account to interact with network services, to access domain resources such as file shares, or if it uses linked server connections to other SQL Server instances running on other computers in the domain.
When choosing the service account, consider an account with the least amount of privileges needed to do the job and no more.

Using a domain user that is not a member of the Local Administrator group or of the Domain Administrators group will be the best choice for the SQL Server service account.
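To see which accounts the services are actually running under, one quick check from T-SQL (the sys.dm_server_services DMV is available from SQL Server 2008 R2 SP1 onwards) is:

```sql
-- List the SQL Server services on this instance and their service accounts
SELECT servicename, service_account, startup_type_desc
FROM sys.dm_server_services;
```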

Some especially sensitive accounts are detailed below:

  • SQL Server Browser is a name resolution service that provides SQL Server instance connection information to clients. If the SQL Server Browser service account is compromised, an attacker can use the permissions associated with the account to expand their control over the Windows environment. It is recommended that the SQL Server Browser service be run under the NETWORK SERVICE account that has the least privileges.
  • Some other security considerations: do not use LOCAL SYSTEM, LOCAL SERVICE, or NETWORK SERVICE as the service account to run MSFTESQL. This could provide an elevation of privileges for the Full-Text Search service.

Prefer Kerberos over NTLM authentication. Kerberos as the default authentication mode for windows connections to SQL Server is possible if client and server are joined in the same domain, or client and server are in different domains, but these two domains are configured as two-way trusts. Furthermore, Kerberos is available by setting the appropriate SPNs. Verify if SQL Server Name (WSFC solution) or Hostname matches the SQL Server service accounts. If so, this could prevent you from defining an SPN on the correct object, because you will get a User and Computer objects with the same name. Refer to Understanding Kerberos and NTLM authentication in SQL Server Connections for further information on setting Kerberos authentication.
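To confirm whether a given connection actually negotiated Kerberos rather than falling back to NTLM, the authentication scheme can be checked per connection:

```sql
-- Returns KERBEROS, NTLM, or SQL for the current connection
SELECT auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```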

As for securing SQL Server roles and permissions, consider the following:

  • The SA account is a well-known and frequent target of malicious users. Disable the account by using the ALTER LOGIN statement. If this is not an option, consider renaming the account by using the ALTER LOGIN statement, as this action can help protect the account. Refer to my blog on disabling the SA account.
  • Restrict the membership of the sysadmin fixed server role to logins that use Windows Authentication, assuring a greater protection of SQL Server.
  • If users without sysadmin rights need to perform certain tasks, consider creating proxy accounts. There are two types of proxy accounts, and each relates to a specific set of tasks: the xp_cmdshell proxy and the SQL Agent job proxy.
  • The configuration required by each type of proxy is different, and the way the authorities they grant are used is different. If there is a requirement for users without sysadmin rights to run xp_cmdshell, then the xp_cmdshell proxy must be created with sp_xp_cmdshell_proxy_account. Similarly, if there is a requirement for users without sysadmin rights to own SQL Agent jobs, then one or more SQL Agent job proxy accounts must be created.
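The SA hardening and proxy steps above can be sketched as follows (the replacement login name and the Windows account/password are placeholders for illustration):

```sql
-- Disable the well-known SA login
ALTER LOGIN sa DISABLE;

-- Or, if disabling is not an option, rename it
-- (the new name here is just an example)
ALTER LOGIN sa WITH NAME = [sqlauth_admin];

-- Create the xp_cmdshell proxy credential for non-sysadmin users
-- (the Windows account and password below are placeholders)
EXEC sp_xp_cmdshell_proxy_account 'DOMAIN\ProxyUser', 'StrongPasswordHere';
```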

The one hour of conversation spanned a lot of topics, with demonstrations on my laptop using MSA accounts and more. I think this brain dump of the conversation will become a great reference for me if someone asks me these questions again. Do let me know if you have come across these before in your interactions. Have you implemented these in your environments?

Reference: Pinal Dave (http://blog.sqlauthority.com)