SQL SERVER – Transparent Data Encryption and Frequently Asked Questions

In the recent past, I have been traveling to Delhi to meet customers and attend a few conferences my company has been sponsoring. These travels give me an opportunity to meet folks who read my blog regularly, face-to-face. Most of these interactions mean I get to spend time with really smart people whose questions send me back to my blog for answers. Last week, when I was at a conference, a DBA walked up to me and said – “Hey Pinal, I am a DBA from a reputed banking company. We are in the process of deploying TDE for one of our databases, which is on SQL Server 2008 R2. I had a few doubts, can I ask you?” As simple as this opening was, the whole conversation spanned close to 30 minutes, and I am summarizing it in this blog for your reference.

When does TDE encrypt the DB?

Once Transparent Data Encryption is enabled by issuing the “ALTER DATABASE” command, SQL Server performs basic checks such as the edition check, the presence of read-only filegroups, and the presence of a DEK. Once the checks are complete, the command returns immediately with success. This does not mean that the database encryption is complete. Since encryption is done in the I/O path, all the data pages that were already written to disk before encryption was enabled have to be read into memory and then written back to disk after encrypting each page. This process is referred to as the “encryption scan” and is carried out by background processes (system SPIDs). The encryption scan, which runs asynchronously to the DDL, takes a shared lock on the database; all normal operations that do not conflict with this lock can proceed without being blocked.
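For reference, here is a minimal sketch of enabling TDE end-to-end and then watching the encryption scan progress. The database name, certificate name, and password below are hypothetical, and the sketch assumes no master key or certificate exists yet:

USE master;
GO
-- One-time server-side setup (hypothetical password and certificate name)
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongPassword@123';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE Certificate';
GO
USE AppDB;  -- hypothetical database name
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE TDECert;
GO
ALTER DATABASE AppDB SET ENCRYPTION ON;
GO
-- Watch the background encryption scan (encryption_state: 2 = in progress, 3 = encrypted)
SELECT DB_NAME(database_id) AS database_name, encryption_state, percent_complete
FROM sys.dm_database_encryption_keys;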

You mentioned READONLY DB, tell me more?

Transparent Data Encryption cannot be enabled on a database that contains any filegroups marked read-only or any files marked read-only. ALTER DATABASE fails with error 33118, clearly indicating the reason for the failure. Users can re-enable the read-only property on the filegroups once the encryption scan is complete; however, no TDE-related DDL (such as a DEK change) can be run on the database until the read-only property is removed.

What happens to TLog files?

Encryption works differently on the transaction log, and it is more complicated. Since the transaction log is designed to be write-once and fail-safe, TDE does not attempt to encrypt the contents of the log file that were already written to disk. Similarly, the log header cannot be rewritten because of this write-once principle, so there is no guarantee that log records written after TDE is enabled will be encrypted immediately. The smallest unit of encryption for log files is a virtual log file (VLF): either an entire VLF is encrypted or it is not, and the entire VLF is encrypted with the same key. When encryption is turned on for a database, the next time the log manager moves to a new VLF, that new VLF will be encrypted. So there is no deterministic ordering between when data pages are encrypted by the scan and when the log becomes encrypted.

These were some of the interesting conversations I have had in the recent past that I thought were worth sharing. If you have used TDE in your environment, do let me know – I would love to hear about your experiences working with it. Is there any catch you want me to explain next time? Write it in the comments and I would love to explore it further.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SPID is KILLED/ROLLBACK state. What to Do Next?

I always recall my fun days after talking to my friends and blog readers. Recently, the KILLED/ROLLBACK discussion came back when a close DBA friend called me for help, and our discussion was worth blogging about.

If you are a SQL DBA or developer, I am sure you must have seen something like below:

(Screenshot: a session displayed in the KILLED/ROLLBACK state)

The first question which comes to mind is, “what is the killed/rollback state?” The answer is simple – sometimes a DBA or developer uses the “KILL” command to end a process that appears to be hung. Once that happens, if SQL Server knows how to roll back that work, it starts a “real” rollback process and undoes the work done. There are situations, however, where SQL Server does not know about, and has no control over, the work done, so it cannot roll that work back. These are the cases where the session stays in the KILLED/ROLLBACK state forever.

How would you decide which action to take? It completely depends on what the killed SPID was doing. There are only a few things you can do:

  1. Have patience and wait for the rollback to finish. Some large operations may take a long time to roll back. Such a SPID should show a change in the CPU, Memory, and IO columns in sys.sysprocesses (query in the next step).
  2. If the SPID is not showing any change in the CPU, Memory, and IO columns, then it is doing nothing. Here is the query which can tell us more about the SPID and where the connection might have originated from.

SELECT spid
,kpid
,login_time
,last_batch
,status
,hostname
,nt_username
,loginame
,hostprocess
,cpu
,memusage
,physical_io
FROM sys.sysprocesses
WHERE cmd = 'KILLED/ROLLBACK'

If the process originated from a different machine, then using Task Manager to end the process on that machine might help. This decision has to be taken very carefully, as it might be a process of a 3rd-party application.

After figuring out the SPID, we can run DBCC INPUTBUFFER (<SPID>) to see which batch is running under that session.
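For example, assuming the killed SPID returned by the query above was 72 (a hypothetical number):

-- Returns the last batch submitted on that session, e.g. the statement that is rolling back
DBCC INPUTBUFFER (72)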

  3. Restart the SQL Server service if the SPID was killed and is doing nothing.

Can you reproduce it? The simplest way is to use xp_cmdshell to open notepad.exe with the command below. DON’T DO THIS ON PRODUCTION!

xp_cmdshell 'notepad.exe'
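Note that xp_cmdshell is disabled by default; on a test instance you would first have to enable it via sp_configure, roughly as below:

-- Enable xp_cmdshell on a TEST instance only
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;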

Now, if you kill this SPID, it will go into the KILLED/ROLLBACK state.

In summary, it is important to understand the cause. Killed/Rollback SPIDs are waiting for an event that will never happen. It could be a real “rollback” situation where SPID is rolling back and trying to get the database into a consistent state. By restarting the SQL Server service, you are postponing that rollback (or roll forward) work to the recovery step that runs before the database becomes available.

Have you ever come across such situation?

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – FIX – Server principal ‘Login Name’ has granted one or more permission(s). Revoke the permission(s) before dropping the server principal

There are parts of SQL Server where I always like to take expert advice. Sometimes a quick call to a friend can save you tons of time. When I see topics around AlwaysOn, though I know the basic steps of building and working with AlwaysOn, the hardcore troubleshooting is not my forte per se. But when I encounter questions around it, I put on my learning hat to explore the possible reasons. There was a mail from one of my readers which had been sitting in my Inbox for a couple of weeks, and I wanted to find the answer. His question goes like this:

While working with an AlwaysOn availability group and cleaning it up, I am encountering the below error while dropping a login.

DROP LOGIN Pinal
GO

Msg 15173, Level 16, State 1, Line 1
Server principal ‘Pinal’ has granted one or more permission(s). Revoke the permission(s) before dropping the server principal.


Trying from the UI also gives the same error.


The next task was to find what was causing the error, so I wrote the queries below, which identify the exact objects that depend on the login and cause our DROP command to fail.

SELECT class_desc, *
FROM sys.server_permissions
WHERE grantor_principal_id = (
    SELECT principal_id
    FROM sys.server_principals
    WHERE name = N'Pinal')

SELECT name, type_desc
FROM sys.server_principals
WHERE principal_id IN (
    SELECT grantee_principal_id
    FROM sys.server_permissions
    WHERE grantor_principal_id = (
        SELECT principal_id
        FROM sys.server_principals
        WHERE name = N'Pinal'))

(Screenshot: query results showing the endpoint permission granted by Pinal to AppUser)

Based on the output, there is an endpoint permission that Pinal has granted. The second result set shows the grantee: I had used my account, Pinal, to create the endpoint and grant CONNECT permission to the AppUser account, using the commands below:

CREATE ENDPOINT [hadr_endpoint]
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5023, LISTENER_IP = ALL)
    FOR DATABASE_MIRRORING (ROLE = ALL,
        AUTHENTICATION = WINDOWS NEGOTIATE,
        ENCRYPTION = REQUIRED ALGORITHM AES)
GO
GRANT CONNECT ON ENDPOINT::[hadr_endpoint] TO AppUser
GO

As the error message suggested, I revoked permission using the below command:

REVOKE CONNECT ON ENDPOINT::[hadr_endpoint] TO AppUser
GO

After the above step, I was able to drop the login. Yes, this was a great learning for me today. Do let me know if you have ever encountered similar errors in your environments.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Secure by Default – The Mantra

Generally, the talks I give at conferences revolve around performance and tuning. These are always the most sought-after topics, and I don’t disappoint, because such requests give me an opportunity to learn something new too. Last month, when I visited Sri Lanka for its first ever SQLSaturday event, the response was the same. We had a fantastic Q&A discussion involving random questions from attendees on a variety of topics. I think every conference should have something like this “Ask the Experts” concept – it gets everyone together, and we have a great time, to say the least.

One topic an attendee asked about was security: “Hey Pinal, can you recollect the top 10 security practices we need to take care of when working with SQL Server?” It was surely a loaded question, and I was not prepared, to say the least. I said, “Give me a breather, buddy. I can actually do a complete end-to-end session when I come next time if you are interested. (This gave me a few seconds to gather my thoughts.) Now that you asked, let me throw some of them out for your reference.”

After the session was over, the gentleman who had asked the question sent me a great summary of the conversation from the Q&A session. I thought to post it as a blog post so that we can elaborate on these points in the future. They are great starters for reference.

  • Application Layer Security: SQL Injection is a powerful method to damage or compromise a SQL Server. SQL Injection is the result of failing to validate user input at the application layer. So please validate all the user inputs every single time.
  • Be strict about what represents valid input and reject everything else.
  • Use parameterized queries and avoid string concatenation to build queries (see the sketch after this list).
  • Do not divulge too much information to the end user. The end user may be an attacker. Let your error message not reveal the exact problem.
  • Connect to the server using a least privilege account. Do not connect with System Admin privileges. I know many of you are doing this today, but please refrain henceforth.
  • Secure the communication channel between the client and SQL Server. SSL encryption should be mandated for client-server communication if the data is critical. SSL is provided through certificates issued by a Certificate Authority (CA). SSL encryption ensures the packets between the client and server are encrypted, and mitigates the detection of user credentials or data by a network sniffer.
  • Protect the SQL Server files on the physical level. The directory structure for SQL Server should be restricted to the SQL Server service and relevant accounts.
  • Protect the permissions for the registry entries for SQL Server. They should be restricted to Administrators and the SQL Server account.
  • Audit Logins by default. Logins should be audited at the Windows and SQL Server level. At the least, failed login attempts should be logged as a flag to possible intrusion attempts.
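As a minimal illustration of the parameterized-query advice above, here is a sketch using sp_executesql; the login-name lookup is just a hypothetical example:

-- User input travels as a typed parameter, never concatenated into the SQL string
DECLARE @sql NVARCHAR(200) = N'SELECT name, create_date
FROM sys.server_principals
WHERE name = @LoginName;';
EXEC sp_executesql @sql, N'@LoginName SYSNAME', @LoginName = N'Pinal';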

Tips shared about SQL Account Management

  • Make the SQL Server accounts least privileged.
  • Delete or disable any unused user accounts.
  • Disable the Windows guest account (this should be automatic for a server).
  • Enforce strong password policies. This is normally controlled via Group Policy.
  • Restrict remote logins to the server.
  • Rename the administrator account to deter brute-force attacks against well-known account names. Accounts like “sa” must be disabled or renamed at a minimum (see the sketch below).
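A minimal sketch of the “sa” hardening step; the replacement name is a hypothetical example:

-- Disable the well-known sa login, or rename it so attacks targeting "sa" fail outright
ALTER LOGIN sa DISABLE;
ALTER LOGIN sa WITH NAME = [sa_disabled_do_not_use];  -- hypothetical new name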

Seeing the mail, I thought this is a great start to talk when it comes to security and SQL Server. I thought this blog will be one of many to follow later on this topic because I am sure a number of you will also be interested in this topic. Do let me know some of the things you do in your environments so that we can share and learn from each other.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Basic Statistics Maintenance – Notes from the Field #083

[Note from Pinal]: This is the 83rd episode of the Notes from the Field series. Maintenance of the database is a very critical activity, and I have always seen DBAs take it very seriously. There is only one problem – there is no single solution or guidance for how to maintain the database. Everybody has their own opinion and their own way of doing different tasks. Statistics are one of the most important aspects of the database. The performance of an entire application can depend on statistics, as they give the SQL engine the intelligence to pick an optimal execution plan.

In this episode of the Notes from the Field series database expert Stuart Ainsworth explains about basic statistics maintenance.



Statistics maintenance is an important (but often overlooked) aspect of performance tuning for SQL Server.  The query optimizer relies on distribution statistics to determine how a query will be executed, with a particular emphasis on seeks vs. scans as well as estimates of the effort needed (the cost threshold for parallelism).  Out-of-date stats can impact performance significantly.

Luckily, the default setting for most databases covers most database performance scenarios; SQL Server offers three basic settings for statistics maintenance:

  • Auto Create Statistics – SQL Server will create statistics during an index creation script, or when deemed necessary to satisfy a query; enabled by default.
  • Auto Update Statistics – SQL Server will update statistics when it deems them to be outdated; enabled by default.
  • Auto Update Statistics Asynchronously – When enabled, SQL Server will update statistics after the execution of a query if it determines that an update is necessary; if disabled, the statistics are updated first, and then the query executes. This setting is disabled by default, and there’s some controversy over whether or not it’s useful.
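If you prefer to set these options explicitly via T-SQL rather than through the UI, a minimal sketch follows (AdventureWorks is a stand-in database name):

-- Set the three statistics options to their default values explicitly
ALTER DATABASE AdventureWorks SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE AdventureWorks SET AUTO_UPDATE_STATISTICS ON;
ALTER DATABASE AdventureWorks SET AUTO_UPDATE_STATISTICS_ASYNC OFF;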


For the most part, SQL Server does a good job of maintaining statistics if these options are left with the defaults; however, statistics can still become stale over time if the data is updated at a slow rate of change.  If your statistics are more than a year old in your database, it’s probably time to do a refresh.

But how do you tell when your stats are out of date? There’s a catalog view in SQL Server called sys.stats that will give you a lot of information about statistics, but it’s very detailed; data is collected down to the column level, which may be overwhelming if you have lots of databases.  I suggest starting at a higher level, by taking a look at how out of date statistics are across all of your databases.   I use the following query to help me quickly identify troublesome databases:

/* Checks the last updated stats date for all databases and calculates the
   percentage of stats updated within the last 24 hours. Useful to determine
   how out-of-date statistics are. Also identifies whether auto update stats
   and asynchronous updates are enabled. */
CREATE TABLE #dbs
(
  database_name VARCHAR(100)
, oldest_stats DATETIME
, newest_stats DATETIME
, percent_update_within_last_day DECIMAL(5, 2)
, is_auto_update_stats_on BIT
, is_auto_update_stats_async_on BIT
)

DECLARE @dynsql NVARCHAR(4000) = 'USE ?;
IF DB_ID() > 4
BEGIN
    WITH x AS ( SELECT STATS_DATE(s.object_id, s.stats_id) AS date_updated
                FROM   sys.stats s
                JOIN   sys.tables t ON t.object_id = s.object_id
                WHERE  t.is_ms_shipped = 0 ),
    x1 AS ( SELECT MIN(date_updated) AS oldest_stats
                 , MAX(date_updated) AS newest_stats
            FROM   x )
    SELECT DB_NAME() AS database_name
         , oldest_stats
         , newest_stats
         , SUM(CASE WHEN DATEDIFF(d, date_updated, newest_stats) <= 1
                    THEN 100.00 ELSE 0.00 END) / COUNT(*) AS percent_update_within_last_day
         , d.is_auto_update_stats_on
         , d.is_auto_update_stats_async_on
    FROM   x
    CROSS JOIN x1
    CROSS JOIN sys.databases d
    WHERE  d.database_id = DB_ID()
    GROUP BY oldest_stats
           , newest_stats
           , d.is_auto_update_stats_on
           , d.is_auto_update_stats_async_on
END'

INSERT INTO #dbs
( database_name
, oldest_stats
, newest_stats
, percent_update_within_last_day
, is_auto_update_stats_on
, is_auto_update_stats_async_on
)
EXEC sp_MSforeachdb @dynsql

SELECT *
FROM   #dbs

DROP TABLE #dbs

Results will look similar to the following:

(Screenshot: sample query results listing each database with its oldest and newest stats dates)

Looking at the results, if I see that the oldest database stats are more than a year old, it’s a pretty good indicator that statistics are not being maintained by some form of ongoing maintenance operation.  If the defaults are not being used, that’s also something that needs to be investigated.

The percent_update_within_last_day is also a good trigger for me to investigate a potential issue; if less than 20% of the statistics in a database were updated in the last 24 hours, the odds of a less-than-optimal execution plan increase significantly.  If the tables in the database are large, it may take several million rows of changed data to trigger a refresh.

To refresh outdated statistics, I recommend Ola Hallengren’s maintenance scripts; they offer a lot of flexibility for developing a customized index and statistics maintenance plan, including the ability to update statistics on a scheduled basis, and focus on updating statistics that need to be refreshed.  Basic maintenance (such as this) can help prevent performance problems from occurring.
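For a quick one-off refresh, rather than a scheduled maintenance plan, the built-in options below are a minimal sketch; the table name is hypothetical:

-- Update statistics across the current database for tables with row modifications
EXEC sp_updatestats;

-- Or rebuild statistics for a single table with a full scan for best accuracy
UPDATE STATISTICS dbo.MyLargeTable WITH FULLSCAN;  -- hypothetical table name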

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Adding Reference Data to Master Data Services – Notes from the Field #081

[Note from Pinal]: This is the 81st episode of the Notes from the Field series. Master Data Services is one of the most important, but least explored, features of SQL Server. If you have been reading this blog, you know that when it is about BI, I always take help from LinchPin People, who are BI experts. I requested Reeves from LinchPin People to help me answer this unique question.

In this episode of the Notes from the Field series, database expert Reeves Smith explains how to add reference data to Master Data Services in your enterprise. Read the experience of Reeves in his own words.



In a previous post, I talked about Master Data Services and how it can enhance your Enterprise Data Warehouse. Through the Master Data Services add-in for Excel, the business can access and manage reference data without much involvement with IT. This might sound like a bad idea, but the subject matter experts (SME) for the data often do not reside in IT. SMEs should have the ability to manage reference data, and with the Master Data Services add-in for Excel they can.

In this post, I’m going to outline the three steps needed to get your data governance people up and running after Master Data Services has been installed. The three are:

  • Create a model to store the reference data (completed on the MDS website)
  • Format the reference data in Excel
  • Create an Entity within the Master Data Services add-in for Excel

IT will only need to perform the first step of the process. The first step to manage reference data is creating a container for the reference data. This step must take place within the Master Data Services website.

Step 1 – Create a model to store the reference data

  1. Open the Master Data Services website and select System Administration, as you see in Figure 1.


Figure 1 – Master Data Services System Administration

  2. After you select System Administration, the website will list three menus: Model View, Manage, and System. Select Manage and then select Models from the pop-out menu.
  3. Click on the green plus sign to create a new model.
  4. In the Add Model dialog, which you see in Figure 2, add a model name relevant to the reference data. In this example I will use Demo. Unselect all of the check boxes under the model.

 

Figure 2 – Add Model

  5. Test your model by connecting to it from the Master Data Services add-in for Excel.
    1. Open a blank Excel workbook and select the Master Data Services tab.
    2. Click the Connect button to create a new connection to your Master Data Services instance.
    3. In the Master Data Explorer, which you see in Figure 3, select your newly created model.

Figure 3 – Master Data Explorer


Step 2 – Format the reference data in Excel

  1. Create an Excel worksheet with the formatted reference data and provide a header for each of the columns.

Note: Master Data Services requires a Code column and suggests the use of a Name column. These columns do not need to be literally named Code and Name, but the reference data should contain columns that serve those roles. The Code column must contain unique values.


Step 3 – Create an Entity within the Master Data Services add-in for Excel

For this example, I will create a list of customers from the Adventure Works database to enable a discussion of enhancements to the data set in a later post. Your reference data should be less complex than the customer list.

  1. With an open connection to Master Data Services and the worksheet with your reference data, click the Create Entity button, as you see in Figure 4.


Figure 4 – Master Data Services tab with the Create Entity highlighted

  2. Click the red arrow to the right of the Range text box and select the range of your reference data. Be sure to include the column headers and select the My data has headers check box, as shown in Figure 5.
  3. Select the newly created model; in my case that would be Demo.
  4. Add a new entity name and select the appropriate Code and Name columns.

Note: Master Data Services 2012 and later enables the use of identity values for the Code column if you do not want to manage unique keys.


Figure 5 – Create Entity Dialog

  5. Click OK.


At this point you should have a working model with a new entity that is fully managed within Master Data Services.

Summary

In this post, I have walked through the steps required to move reference data from an Excel workbook into a model within Master Data Services. Under MDS you get all of the benefits of a managed master data management solution, like auditability and security.

What’s Next

In a coming post, we will walk through enhancing our reference data. One enhancement we will demonstrate is adding domain values within the model. This helps the business users select the correct data elements and increases the data quality of your master data.

Hope this helps.

If you want to get started with BIML with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Create Login with SID – Way to Synchronize Logins on Secondary Server

How many times have you had a problem where you deployed a high availability solution, but after a failover the application cannot use the database and gets “login failed for user”? Note that this issue happens only with SQL logins, not Windows logins; you will understand the reason shortly. Here are the situations where you might face login failure:

  1. Log Shipping (reading from stand-by secondary database)
  2. AlwaysOn Availability Group. (reading from read-only secondary replica)
  3. Database Mirroring (after failover)

Let’s assume that we have SQLServerA and SQLServerB, each hosting the database AppDB. As per the architecture of users and logins in SQL Server, a user in the database is mapped to a login. For simplicity, let’s say the login and user are both named AppUser, and the user lives in AppDB.

USE AppDB
GO
SELECT name, sid FROM sys.sysusers WHERE name = 'AppUser'
GO
USE MASTER
GO
SELECT name, sid FROM sys.sql_logins WHERE name = 'AppUser'
GO

(Screenshot: query output showing the same SID for the database user and the server login)

As we can see, the SIDs match, which is why the user is mapped to that login.

Now, if we create an AlwaysOn availability group or configure database mirroring or log shipping, we are not able to remap the user on the secondary using sp_change_user_login, because the secondary database is not writable; it is in read-only mode.
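For contrast, on a writable database the orphaned user could simply be remapped in place; a minimal sketch (sp_change_user_login is deprecated in favor of ALTER USER):

USE AppDB
GO
-- Classic (deprecated) approach
EXEC sp_change_user_login 'Update_One', 'AppUser', 'AppUser';
-- Modern equivalent
ALTER USER AppUser WITH LOGIN = AppUser;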

Here is what we would see on the secondary server if the login is created using the UI (without specifying the SID).

(Screenshot: the SID of the newly created login on the secondary does not match the database user, leaving the user orphaned)

The solution is to drop and re-create the login with the known SID, which is stored in sys.sysusers in the database.

CREATE LOGIN AppUser WITH PASSWORD = 'password@123', SID = 0x59B662112A43D24585BFE2BF80D9BE19
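If you have many logins to synchronize, you can script them from the primary. Here is a minimal sketch that generates the CREATE LOGIN statement with the original SID; the password shown is a placeholder, since sys.sql_logins does not expose a reusable plain-text password:

-- Run on the primary to generate a CREATE LOGIN statement preserving the SID
SELECT 'CREATE LOGIN [' + name + '] WITH PASSWORD = ''password@123'', SID = '
       + CONVERT(VARCHAR(514), sid, 1) + ';'
FROM sys.sql_logins
WHERE name = 'AppUser';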

Once this is done, the application can connect to the secondary database. Hopefully this helps you fix the issue when there are orphaned users in the database.

Have you ever used this syntax? Leave a comment and let me know.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – 3 Common Mistakes of Agile Development – Notes from the Field #074

[Note from Pinal]: This is the 74th episode of the Notes from the Field series. Agile is a time-boxed, iterative approach to software delivery that builds software incrementally from the start of the project, instead of trying to deliver it all at once near the end. Storytelling is an art very few have mastered in their life. When I requested Stuart to share his ideas about agile, he shared a very interesting insight on this subject: a story about 3 common mistakes of agile development. I very much enjoyed his notes from the field and I am confident that you will like it too.

In this episode of the Notes from the Field series database expert Stuart Ainsworth explains about 3 Common Mistakes of Agile Development.



I’m a developer by history, but a project manager at heart.  I’ve become much more interested in helping teams improve their workflow, not just helping them write better code.  To that end, most of the development shops that I’ve worked in have struggled with the ongoing battle to get features built and shipped on a schedule that satisfies business requirements.  Good developers are proud of their craft and want maximum time to create; business needs features to go out the door quickly in order to compete.  These goals are often in conflict with each other.

Agile methodologies (such as scrum) try to help balance this tension by encouraging active conversation between business and development, and continuously delivering working software that is flexible and adaptable to change.  In the shops where I’ve seen agile development fail to deliver, I’ve noticed the following 3 bad habits:

  1. We have a failure to communicate.

Communication sounds easy, but it’s really, really hard.  Well-defined requirements help, but they’re no substitute for ongoing mutual conversations between developers and business people.  If a requirements document is the only method of explaining what to build and when to build it, you lose the ability to forecast what comes next in terms of building blocks.

Several of the principles of the Agile Manifesto deal with communication between business and development, but my favorite one is “business people and developers must work together daily throughout the project.”  If you want to be an agile development shop, you have to be an agile business.   Business needs to understand the challenges faced by developers, and developers need to be involved in understanding the changing needs of business.

  2. Code releases are always a feature release or a bug fix.

Bug fixes are good, and features are what make money; however, if the only time your shop is releasing code is to implement a feature or fix a bug, then you’re not continually improving your product.  The temptation is to fall back into a waterfall-like methodology; deployments become huge stacks of code that are not added to the product (or operational environment) until the day that a feature is supposed to be released.  The larger the stack of code, the harder it is to test, and the greater the risk of failure.

What agile principles suggest is that you should “deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.”  The goal is to release code frequently, even if that code is only establishing groundwork for future development efforts; smaller code is easier to test, and ultimately, a feature release becomes the final step in a series of releases.

  3. Software release dates depend on the quality and completeness of the code.

This is similar to the second issue, but a slight variation; a shop that constantly changes the length of their iteration will ultimately experience burnout.  It becomes harder and harder to stay on schedule and feature releases get pushed further and further back.

I prefer to have a fixed iteration period, either every four weeks or once a month.  There’s something about a cadence that motivates people to focus and get things done.  If a developer is working on a bit of code that’s supposed to ship in a month, it’s easy to evaluate how likely that’s going to happen within a couple of weeks; if it’s not going to be complete, build the working software, and release it.  With each iteration, it becomes easier to define what can be done in a fixed-length sprint.

Summary

Agile software development lends itself to creative implementations, but it’s important to stay true to the principles of good communication, continuous improvement, and maintaining a constant pace for development.  Avoiding some basic pitfalls can help your team stay productive in the ongoing race to get features out the door.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Adding a Master Data Services to your Enterprise – Notes from the Field #072

[Note from Pinal]: This is the 72nd episode of the Notes from the Field series. Master Data Services is one of the most important, but least explored, features of SQL Server. If you have been reading this blog, you know that when it is about BI, I always take help from LinchPin People, who are BI experts. I requested Reeves from LinchPin People to help me answer this unique question.

In this episode of the Notes from the Field series database expert Reeves Smith explains why one should add Master Data services to our enterprise. Read the experience of Reeves in his own words.



With all of the focus on big data, I want to remind everyone not to forget about data governance and master data management in general. While working on various data warehouse projects, I have found this area to be often overlooked. I want to stress that if you are building a data warehouse or a reporting system, then data governance should be one of the key components of almost any project.

I want to start off with a couple of posts showing how you could implement SQL Server Master Data Services in your enterprise. Tyler Gramm wrote a white paper, “Implementing a Phased Approach to Master Data Management” (https://msdn.microsoft.com/en-us/library/ff626496.aspx), that outlines taking small steps to start your first master data management project. In this series I will outline the steps to bring reference data into a SQL Server Master Data Services instance and start you off on your way.

Let’s first define data governance to better understand where master data management fits within that methodology. Data governance can be defined as a complete process with the ability to manage the data assets within the enterprise. Data governance also defines how you manage the assets that have the greatest business value for your organization’s day-to-day operations.

Where does master data management fall within a data governance approach?  Master data management is one area within the data governance process that focuses on management of reference and master data. Master data management is combined with a fair amount of business process and I will leave the process aspect for another author. An example of a business process within master data management would be the decision on which data source is considered the truth when a discrepancy in master data arises between two source systems.

In this post I will discuss why you would want to consider Microsoft SQL Server Master Data Services. If Microsoft SQL Server is in your environment, integration will be easy, and Excel provides a great interface for the data stewards.

Note: SQL Server 2012 Master Data Services and SQL Server 2014 Master Data Services are nearly identical.

Master data management is as much a business function as a technical one. Master data is managed within SQL Server and often integrated within Extract-Transform-Load (ETL) processes, but the actual management and oversight of the data should be performed by the business. This involvement is provided with help from roles like a data steward. A data steward can be seen as a representative of the data, while the technical resources serve as the caretakers of the data. Accepting these roles is one of the first hurdles to overcome with any master data management project. The business will control most of the direction within the project, and the technical team will be in charge of system integration and support.

Setting up a Master Data Services instance within SQL Server is a fairly straightforward process after you’ve installed all of the prerequisites with Microsoft Internet Information Services (IIS). Figure 1 shows the SQL Server 2012 Master Data Services Configuration Manager. See https://msdn.microsoft.com/en-us/library/ee633744.aspx for more details on how to install and configure the server and backend processes used within the Master Data Services process.


Figure 1 – Master Data Configuration Manager

Note: You may configure the web services portion of Master Data Services install on another machine. It does not need to be on the same machine as the SQL Server database. Multiple licenses could apply with this configuration, so check with a licensing professional.

After setting up Master Data Services, you will want to give data access to the business to enable the actual management of the data. This is best provided by the installation of the Master Data Services add-in for Excel, which you see in Figure 2. This add-in will give the business users with appropriate knowledge of the data direct access to work with and support master data from their favorite data tool, Excel.


Figure 2 – Master Data Services Add-in for Excel

Note: The add-in for Excel does not support some functions, such as model creation and hierarchy. You will have to manage the unsupported functions from the web site.

Getting Started

One of the best projects for beginning a master data management implementation is to start with managing reference data. Reference data exists in every environment and is frequently managed within Excel spreadsheets without database durability and/or version control.  Moving this data into Master Data Services from an Excel spreadsheet is a fairly straightforward task that can be accomplished with little risk.

What’s Next

In the coming post, we will walk through moving reference data from an Excel worksheet to an entity within a Master Data Services model.

At a high level the steps that we will discuss in the following post will be:

  • Create a model to store the reference data (completed on the MDS website)
  • Format the reference data in Excel
  • Click Create Entity button on the Master Data tab within Excel and select your data

Hope this helps you get started.

If you want to get started with BIML with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Marking Filegroup as ReadOnly with SQL Server

My love for writing about ReadOnly databases has pushed me to explore more on this topic. Based on previous blogs, a few readers asked me about marking filegroups as ReadOnly and wanted to know how this can be done. This blog post continues that journey, wherein we will learn about the read-only attribute.

TSQL Script to mark ReadOnly Filegroup

Let us create the database first.

CREATE DATABASE [ReadOnlyDB]
CONTAINMENT = NONE
ON PRIMARY
( NAME = N'ReadOnlyDB', FILENAME = N'C:\Temp\ReadOnlyDB.mdf', SIZE = 4024KB, FILEGROWTH = 1024KB )
LOG ON
( NAME = N'ReadOnlyDB_log', FILENAME = N'C:\Temp\ReadOnlyDB_log.ldf', SIZE = 20480KB, FILEGROWTH = 10%)
GO

Let us next create a filegroup which will be marked as ReadOnly.

ALTER DATABASE ReadOnlyDB ADD FILEGROUP ReadOnlyDB_FG;
GO
ALTER DATABASE ReadOnlyDB ADD FILE
( NAME = ReadOnlyDB_FG,
  FILENAME = 'c:\temp\ReadOnlyDB_FG' )
TO FILEGROUP ReadOnlyDB_FG
GO

Next we would like to create a table on this specific filegroup. This will form the base for our experiment.

USE ReadOnlyDB
GO
-- Creating our table
CREATE TABLE tbl_SQLAuth (id INT, Longname CHAR(8000))
ON ReadOnlyDB_FG
GO

Let us start our experiment in marking filegroups as ReadOnly. An important point to understand here is that we cannot mark the PRIMARY filegroup as read-only.

-- Mark the PRIMARY filegroup read-only
ALTER DATABASE ReadOnlyDB MODIFY FILEGROUP [PRIMARY] READ_ONLY;
GO

Msg 5047, Level 16, State 1, Line 29
Cannot change the READONLY property of the PRIMARY filegroup.

We are presented with the above error. Now, let us mark the other filegroup instead.

-- Mark the Other filegroup read-only
ALTER DATABASE ReadOnlyDB MODIFY FILEGROUP ReadOnlyDB_FG READ_ONLY;
GO

The filegroup property ‘READ_ONLY’ has been set.

Once this succeeds, we cannot modify anything inside the filegroup. Let us try a few tasks to see the errors.

-- Create a table on the Read_Only Filegroup
-- Explicitly call the FG where the table will be created.
CREATE TABLE tbl_SQLAuth_1 (id INT, Longname CHAR(8000))
ON ReadOnlyDB_FG
GO

Msg 1924, Level 16, State 2, Line 3
Filegroup ‘ReadOnlyDB_FG’ is read-only.

As you can see, we cannot create any tables on this filegroup. If we try to insert any values into a table which resides on this filegroup, we are presented with an error too.

INSERT INTO tbl_SQLAuth VALUES (1, 'SQLAuth');

Msg 652, Level 16, State 1, Line 4
The index “” for table “dbo.tbl_SQLAuth” (RowsetId 72057594040549376) resides on a read-only filegroup (“ReadOnlyDB_FG”), which cannot be modified.

As you can see, read-only filegroups function just like the ReadOnly databases explained in our previous blogs. The best part of read-only filegroups is the flexibility to lock just a part of the database files, as compared to marking the whole database read-only.
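If you later need to modify the data again, the attribute is reversible; a minimal sketch:

-- Make the filegroup writable again
ALTER DATABASE ReadOnlyDB MODIFY FILEGROUP ReadOnlyDB_FG READ_WRITE;
GO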

-- Clean up time
USE MASTER
GO
DROP DATABASE ReadOnlyDB
GO

As I wrap up this blog, I want to know how many of you out there have used read-only filegroups in your environments. In what scenarios did you make these changes? I would like to learn from your experience. Let me know via your comments.

Reference: Pinal Dave (http://blog.sqlauthority.com)