SQL SERVER – Create Login with SID – Way to Synchronize Logins on Secondary Server

How many times have you deployed a high availability solution, only to find that after a failover the application cannot use the database and gets a “login failed for user” error? Note that this issue happens only with SQL logins, not Windows logins; you will understand the reason shortly. Here are the situations where you might face login failures.

  1. Log Shipping (reading from a stand-by secondary database)
  2. AlwaysOn Availability Groups (reading from a read-only secondary replica)
  3. Database Mirroring (after failover)

Let’s assume that we have SQLServerA and SQLServerB, each of which hosts the database AppDB. As per the architecture of users and logins in SQL Server, there would be a user in the database mapped to a login. For simplicity, let’s say both the login and the user are named AppUser, and the user is available in AppDB.

USE AppDB
GO
SELECT name, sid FROM sys.sysusers WHERE name = 'AppUser'
GO
USE MASTER
GO
SELECT name, sid FROM sys.sql_logins WHERE name = 'AppUser'
GO

As we can see, the SIDs match, which is why the user is mapped to the login.

Now, if we create an AlwaysOn Availability Group or configure database mirroring or log shipping, we would not be able to map the user using sp_change_users_login, because the secondary database is not writeable; it is in read-only mode.
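
Before fixing anything, it helps to confirm which users are orphaned on the secondary. A minimal check is the documented (if deprecated) sp_change_users_login procedure in report mode; it only reads metadata, so it should work even against a read-only database:

USE AppDB
GO
-- Lists database users whose SID has no matching server login
EXEC sp_change_users_login 'Report'
GO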

Here is what we would see on the secondary server if the login is created using the UI.

The solution is to drop the login and re-create it with the known SID, which is stored in sys.sysusers in the database.

CREATE LOGIN AppUser WITH PASSWORD = 'password@123', SID = 0x59B662112A43D24585BFE2BF80D9BE19

Once this is done, the application can connect to the secondary database. Hopefully this will help you fix the issue when there are orphaned users in a database.
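
To avoid hard-coding the SID and password, you can generate the statement from the primary server itself. Here is a minimal sketch, assuming SQL Server 2008 or later (for the binary-to-string CONVERT style); it uses LOGINPROPERTY to pull the existing password hash, so the generated CREATE LOGIN preserves both the password and the SID. Run it on the primary and execute its output on the secondary.

-- Generates a CREATE LOGIN statement that carries the password hash and SID
SELECT 'CREATE LOGIN ' + QUOTENAME(name)
     + ' WITH PASSWORD = '
     + CONVERT(VARCHAR(MAX), CONVERT(VARBINARY(256), LOGINPROPERTY(name, 'PasswordHash')), 1)
     + ' HASHED, SID = '
     + CONVERT(VARCHAR(MAX), sid, 1) AS create_login_statement
FROM sys.sql_logins
WHERE name = 'AppUser'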

Have you ever used this syntax? Leave a comment and let me know.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – 3 Common Mistakes of Agile Development – Notes from the Field #074

[Note from Pinal]: This is the 74th episode of the Notes from the Field series. Agile is a time-boxed, iterative approach to software delivery that builds software incrementally from the start of the project, instead of trying to deliver it all at once near the end. Storytelling is an art very few have mastered in their life. When I requested Stuart to share his ideas about agile, he shared a very interesting insight on this subject. He had a very interesting story to share about 3 common mistakes of agile development. I very much enjoyed his notes from the field and I am confident that you will like them too.

In this episode of the Notes from the Field series database expert Stuart Ainsworth explains about 3 Common Mistakes of Agile Development.


I’m a developer by history, but a project manager at heart.  I’ve started becoming much more interested in helping teams improve their workflow, not just helping them write better code.  To that end, most of the development shops that I’ve worked in have struggled with the ongoing battle to get features built and shipped on a schedule that satisfies business requirements.  Good developers are proud of their craft and want maximum time to create; the business needs features to go out the door quickly in order to compete.  These goals are often in conflict with each other.

Agile methodologies (such as scrum) try to help balance this tension by encouraging active conversation between business and development, and continuously delivering working software that is flexible and adaptable to change.  In the shops where I’ve seen agile development fail to deliver, I’ve noticed the following 3 bad habits:

  1. We have a failure to communicate.

Communication sounds easy, but it’s really, really hard.  Well-defined requirements help, but they’re no substitute for ongoing mutual conversations between developers and business people.  If a requirements document is the only method of explaining what to build and when to build it, you lose the ability to forecast what comes next in terms of building blocks.

Several of the principles of the Agile Manifesto deal with communication between business and development, but my favorite one is “business people and developers must work together daily throughout the project.”  If you want to be an agile development shop, you have to be an agile business.  The business needs to understand the challenges faced by developers, and developers need to be involved in understanding the changing needs of the business.

  2. Code releases are always a feature release or a bug fix.

Bug fixes are good, and features are what make money; however, if the only time your shop is releasing code is to implement a feature or fix a bug, then you’re not continually improving your product.  The temptation is to fall back into a waterfall-like methodology; deployments become huge stacks of code that are not added to the product (or operational environment) until the day that a feature is supposed to be released.  The larger the stack of code, the harder it is to test, and the greater the risk of failure.

What agile principles suggest is that you should “deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.”  The goal is to release code frequently, even if that code is only establishing groundwork for future development efforts; smaller code is easier to test, and ultimately, a feature release becomes the final step in a series of releases.

  3. Software release dates depend on the quality and completeness of the code.

This is similar to the second issue, but a slight variation; a shop that constantly changes the length of their iteration will ultimately experience burnout.  It becomes harder and harder to stay on schedule and feature releases get pushed further and further back.

I prefer to have a fixed iteration period, either every four weeks or once a month.  There’s something about a cadence that motivates people to focus and get things done.  If a developer is working on a bit of code that’s supposed to ship in a month, it’s easy to evaluate how likely that’s going to happen within a couple of weeks; if it’s not going to be complete, build the working software, and release it.  With each iteration, it becomes easier to define what can be done in a fixed-length sprint.

Summary

Agile software development lends itself to creative implementations, but it’s important to stay true to the principles of good communication, continuous improvement, and maintaining a constant pace for development.  Avoiding some basic pitfalls can help your team stay productive in the ongoing race to get features out the door.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Adding a Master Data Services to your Enterprise – Notes from the Field #072

[Note from Pinal]: This is the 72nd episode of the Notes from the Field series. Master Data Services is one of the most important but least explored features of SQL Server. If you have been reading this blog, you know that when it is about BI, I always take help from LinchPin People, who are BI experts. I requested Reeves from LinchPin People to help me answer this unique question.

In this episode of the Notes from the Field series, database expert Reeves Smith explains why one should add Master Data Services to the enterprise. Read the experience of Reeves in his own words.


With all of the focus on big data, I want to remind everyone not to forget about data governance and master data management in general. While working on various data warehouse projects, I have found this area to be often overlooked. I want to stress that if you are building a data warehouse or a reporting system, then data governance should be one of the key components of almost any project.

I wanted to start off with a couple of posts to show how you could implement SQL Server Master Data Services in your enterprise. Tyler Gramm wrote a white paper, “Implementing a Phased Approach to Master Data Management” (https://msdn.microsoft.com/en-us/library/ff626496.aspx), that outlines taking small steps to start your first master data management project. In this series I will outline some steps to help bring reference data into a SQL Server Master Data Services instance to start you off on your way.

Let’s first define data governance to better understand where master data management fits within that methodology. Data governance can be defined as a complete process with the ability to manage the data assets within the enterprise. Data governance also defines how you manage the assets that have the greatest business value for your organization’s day-to-day operations.

Where does master data management fall within a data governance approach?  Master data management is one area within the data governance process that focuses on management of reference and master data. Master data management is combined with a fair amount of business process and I will leave the process aspect for another author. An example of a business process within master data management would be the decision on which data source is considered the truth when a discrepancy in master data arises between two source systems.

In this post I will discuss why you would want to consider Microsoft SQL Server Master Data Services. If Microsoft SQL Server is in your environment, integration will be easy, and Excel provides a great interface for the data stewards.

Note: SQL Server 2012 Master Data Services and SQL Server 2014 Master Data Services are nearly identical.

Master data management is as much a business function as a technical one. Master data is managed within SQL Server and often integrated within Extract-Transform-Load (ETL) processes, but the actual management and oversight of the data should be performed by the business. This involvement is provided with help from roles like a data steward. A data steward can be seen as a representative of the data, while the technical resources serve as the caretakers of the data. Accepting these roles is one of the first hurdles to overcome with any master data management project. The business will control most of the direction within the project, and the technical team will be in charge of system integration and support.

Setting up a Master Data Services instance within SQL Server is a fairly straightforward process after you’ve installed all of the prerequisites with Microsoft Internet Information Services (IIS). Figure 1 shows the SQL Server 2012 Master Data Services Configuration Manager. See https://msdn.microsoft.com/en-us/library/ee633744.aspx for more details on how to install and configure the server and backend processes used within the Master Data Services process.

Figure 1 – Master Data Configuration Manager

Note: You may configure the web services portion of the Master Data Services install on another machine; it does not need to be on the same machine as the SQL Server database. Multiple licenses could apply with this configuration, so check with a licensing professional.

After setting up Master Data Services, you will want to give data access to the business to enable the actual management of the data. This is best provided by installing the Master Data Services add-in for Excel, which you see in Figure 2. This add-in gives business users with appropriate knowledge of the data direct access to work with and support master data from their favorite data tool, Excel.

Figure 2 – Master Data Services Add-in for Excel

Note: The add-in for Excel does not support some functions, such as model creation and hierarchy management. You will have to manage the unsupported functions from the web site.

Getting Started

One of the best projects for beginning a master data management implementation is to start with managing reference data. Reference data exists in every environment and is frequently managed within Excel spreadsheets without database durability and/or version control.  Moving this data into Master Data Services from an Excel spreadsheet is a fairly straightforward task that can be accomplished with little risk.

What’s Next

In the coming post, we will walk through moving reference data from an Excel worksheet to an entity within a Master Data Services model.

At a high level, the steps that we will discuss in the following post are:

  • Create a model to store the reference data (completed on the MDS website)
  • Format the reference data in Excel
  • Click the Create Entity button on the Master Data tab within Excel and select your data

Hope this helps you get started.

If you want to get started with BIML with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Marking Filegroup as ReadOnly with SQL Server

My love for writing about ReadOnly databases has pushed me to explore more about this topic. Based on previous blogs, a few readers asked how they could mark filegroups as ReadOnly, having heard that it is possible. This blog post continues that journey, wherein we will learn about the ReadOnly attribute.

T-SQL Script to Mark a Filegroup as ReadOnly

Let us create the database first.

CREATE DATABASE [ReadOnlyDB]
CONTAINMENT
= NONE
ON  PRIMARY
( NAME = N'ReadOnlyDB', FILENAME = N'C:\Temp\ReadOnlyDB.mdf' , SIZE = 4024KB , FILEGROWTH = 1024KB )
LOG ON
( NAME = N'ReadOnlyDB_log', FILENAME = N'C:\Temp\ReadOnlyDB_log.ldf' , SIZE = 20480KB , FILEGROWTH = 10%)
GO

Let us next create a filegroup which will get marked as ReadOnly.

ALTER DATABASE ReadOnlyDB ADD FILEGROUP ReadOnlyDB_FG;
GO
ALTER DATABASE ReadOnlyDB ADD FILE (
name = ReadOnlyDB_FG,
FILENAME = 'c:\temp\ReadOnlyDB_FG')
TO FILEGROUP ReadOnlyDB_FG
GO

Next we would like to create a table on this specific filegroup. This will form the base for our experiment.

USE ReadOnlyDB
GO
-- Creating our table
CREATE TABLE tbl_SQLAuth (id INT, Longname CHAR(8000))
ON ReadOnlyDB_FG
GO

Let us start our experiment in marking filegroups as ReadOnly. One important learning here: we cannot mark the PRIMARY filegroup as read-only.

-- Mark the PRIMARY filegroup read-only
ALTER DATABASE ReadOnlyDB MODIFY FILEGROUP [PRIMARY] READ_ONLY;
GO

Msg 5047, Level 16, State 1, Line 29
Cannot change the READONLY property of the PRIMARY filegroup.

We will be presented with the above error. Now, let us mark the other filegroup.

-- Mark the Other filegroup read-only
ALTER DATABASE ReadOnlyDB MODIFY FILEGROUP ReadOnlyDB_FG READ_ONLY;
GO

The filegroup property ‘READ_ONLY’ has been set.
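
Once the command succeeds, a quick way to verify the state is to query sys.filegroups, where is_read_only reports 1 for a read-only filegroup:

USE ReadOnlyDB
GO
SELECT name, is_read_only FROM sys.filegroups
GO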

Once this succeeds, we cannot change anything in the filegroup. Let us try a few tasks to check the errors.

-- Create a table on the Read_Only Filegroup
-- Explicitly call the FG where the table will be created.
CREATE TABLE tbl_SQLAuth_1 (id INT, Longname CHAR(8000))
ON ReadOnlyDB_FG
GO

Msg 1924, Level 16, State 2, Line 3
Filegroup ‘ReadOnlyDB_FG’ is read-only.

As you can see, we cannot create any tables on this filegroup. If we try to insert any values into a table which is attached to this filegroup, we will be presented with an error too.

INSERT INTO tbl_SQLAuth VALUES (1, 'SQLAuth');

Msg 652, Level 16, State 1, Line 4
The index “” for table “dbo.tbl_SQLAuth” (RowsetId 72057594040549376) resides on a read-only filegroup (“ReadOnlyDB_FG”), which cannot be modified.

As you can see, Read_Only filegroups function just like the ReadOnly databases explained in our previous blogs. The best part of Read_Only filegroups is that we get the flexibility to lock just a part of the database files, compared to marking the whole database.
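
If you later need to make the filegroup writable again, the change is symmetric; a quick sketch (note that SQL Server may require exclusive access to the database to change this property):

ALTER DATABASE ReadOnlyDB MODIFY FILEGROUP ReadOnlyDB_FG READ_WRITE;
GO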

-- Clean up time
USE MASTER
GO
DROP DATABASE ReadOnlyDB
GO

As I wrap up this blog, I want to know how many of you out there have used Read_Only filegroups in your environments. In what scenarios did you make these changes? I would like to learn from your experience. Let me know via your comments.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SCRIPT – SQL Listening on Dynamic Port?

With every release of SQL Server, I sometimes need to revisit old blogs to see what has been enhanced. This learning experience is an ongoing process, and the best way I learn is by searching the blog from time to time. Long ago I wrote a blog post about finding the port number on which SQL Server is listening: SQL SERVER – Find Port SQL Server is Listening – Port SQL Server is Running

Recently one of my blog readers sent an email asking the below.

I read your blog which talks about reading ERRORLOG and finding the port. But is there any way to find if that port is fixed port or dynamic port?

I know that the port value is stored in the registry hive of SQL Server, and it is instance specific. Here is the registry key for the SQL Server 2014 instance running on my laptop.

If SQL Server is configured to use dynamic ports, then TcpDynamicPorts holds the port value. In the case of a fixed port (which I have on my SQL instance), we would see the TcpPort value instead.

To get the same information using T-SQL, I have written a small script which would help you. It detects both dynamic and static ports.

DECLARE @ServerName SYSNAME
    ,@InstanceID NVARCHAR(128)
    ,@InstanceName NVARCHAR(128)
    ,@tcp_port NVARCHAR(10)
    ,@InstanceKey NVARCHAR(255)
SELECT @ServerName = @@SERVERNAME
SELECT @InstanceName = ISNULL((CAST(SERVERPROPERTY('InstanceName') AS NVARCHAR(128))), 'MSSQLSERVER')
EXECUTE xp_regread @rootkey = 'HKEY_LOCAL_MACHINE'
    ,@key = 'SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL'
    ,@value_name = @InstanceName
    ,@value = @InstanceID OUTPUT
SELECT @InstanceKey = 'SOFTWARE\MICROSOFT\Microsoft SQL Server\' + @InstanceID + '\MSSQLServer\SuperSocketNetLib\Tcp\IPAll'
EXEC xp_regread @rootkey = 'HKEY_LOCAL_MACHINE'
    ,@key = @InstanceKey
    ,@value_name = 'TcpDynamicPorts'
    ,@value = @tcp_port OUTPUT
IF @tcp_port IS NOT NULL
    SELECT 'SQL Server (' + @InstanceName + ') uses dynamic tcp port: ' + CAST(@tcp_port AS NVARCHAR(128))
ELSE
BEGIN
    EXEC xp_regread @rootkey = 'HKEY_LOCAL_MACHINE'
        ,@key = @InstanceKey
        ,@value_name = 'TcpPort'
        ,@value = @tcp_port OUTPUT
    SELECT 'SQL Server (' + @InstanceName + ') on ' + @ServerName + ' uses static tcp port: ' + CAST(@tcp_port AS NVARCHAR(128))
END
GO
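
On SQL Server 2012 and later there is also a DMV-based alternative that avoids registry access entirely; a minimal sketch (requires VIEW SERVER STATE permission):

SELECT listener_id, ip_address, port, is_ipv4, state_desc
FROM sys.dm_tcp_listener_states
WHERE type_desc = 'TSQL'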

Generating such scripts takes time, and the best way to learn them is by sharing. So do you have any other script similar to this that is worth sharing with other readers? Please share via the comments section.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to Add User Across Databases

In this era of standards, many organizations want to use some of the cutting-edge technologies available. In many of these interactions, I have been lucky enough to meet a few DBAs who gave me interesting challenges to solve within their organizations. I still vividly remember sharing a script with a DBA who reached out to me to learn how to add a user to all the databases within a SQL Server instance.

On first look, I asked “why?” The answer was simple. Their organization’s data gets audited once a year, and the auditors need to be given db_datareader privileges during the audit. They wanted to make sure this is done in an automated fashion. I immediately recommended my Pluralsight course, but they were not on the latest version and wanted something for earlier versions.

So here is the script that I gave them:

EXEC sp_MSforeachdb '
DECLARE @name VARCHAR(500)
SELECT @name = ''?''
IF ''?'' NOT IN (''tempdb'')
BEGIN
    USE [?]
    IF DATABASEPROPERTYEX(''?'', ''Status'') = ''ONLINE'' AND DATABASEPROPERTYEX(''?'', ''Updateability'') = ''READ_WRITE''
    BEGIN
        IF NOT EXISTS (SELECT * FROM sys.sysusers WHERE name = ''SQLAuth_Auditor'')
        BEGIN
            CREATE USER [SQLAuth_Auditor] FOR LOGIN [SQLAuth_Auditor]
            PRINT ''Added User for ?''
        END
        EXEC sp_addrolemember ''db_datareader'', ''SQLAuth_Auditor''
        PRINT ''Added db_datareader for ?''
    END
    ELSE
        PRINT ''SKIPPED the ? database''
END '

The script uses an undocumented procedure, sp_MSforeachdb, which I have used in earlier blogs to achieve some interesting requirements.

Here in the script, we check for the existence of the SQLAuth_Auditor user in each database. If the user is not available, we create it; we then add the user to the db_datareader role. A modern alternative to sp_addrolemember is sketched below.
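
For what it is worth, sp_addrolemember is deprecated; on SQL Server 2012 and later the same per-database step can be written with ALTER ROLE. A minimal sketch for a single database (the database name is a hypothetical placeholder):

USE [AppDB] -- hypothetical database name
GO
IF NOT EXISTS (SELECT * FROM sys.database_principals WHERE name = 'SQLAuth_Auditor')
    CREATE USER [SQLAuth_Auditor] FOR LOGIN [SQLAuth_Auditor]
GO
ALTER ROLE db_datareader ADD MEMBER [SQLAuth_Auditor]
GO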

My ask: How many of you have used the sp_MSforeachdb procedure in your environment? What are the use cases where you have successfully used it?

Quick Quiz:

To achieve the above task, can’t we just add the user to the model database so that all databases created afterwards automatically get the user? Is there any catch, provided there are no user databases currently on the instance? Do we still need a script like this? Let me know your thoughts.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Security Conversations and Notes with a DBA

One of the great things about working at Pluralsight is that I get to meet a lot of people (trust me on this). I am lost for words when it comes to how my day job helps me achieve some of the coolest things, like presenting at a number of conferences. The Pluralsight booth is always a place where some of the great minds from the industry swing by. So in case you get to a conference and see a Pluralsight booth, feel free to swing by and take a moment to talk to someone there; I guarantee your time will be well spent.

At one conference, I vividly remember a conversation with an enthusiast from the banking vertical. He attended all my sessions and made sure he caught me at the booth, as I spend most of my time there when not delivering sessions. It was in between sessions that this gentleman caught me and started asking me a number of questions around security. The whole discussion took close to an hour, but I was glad someone was serious enough about trying to understand these basics.

I was super excited and was talking about some of the latest and greatest security capabilities of SQL Server 2014, which I explained in my Pluralsight course. But the individual said they were running SQL Server 2008 R2. That got me into a fix, because I had to rewind my mind to give him recommendations. I kept thinking hard as I started giving each of the recommendations.

In this blog, let me recollect some of the conversations I had and pen them down for everyone’s reference and most importantly for my reference.

About User Accounts

SQL Server executes as a set of Windows services. SQL Server is required to run under a domain account to interact with network services, to access domain resources such as file shares, or if it uses linked server connections to other SQL Server instances running on other computers in the domain.
When choosing the service account, consider an account with the least amount of privileges needed to do the job and no more.

Using a domain user that is not a member of the Local Administrator group or of the Domain Administrators group will be the best choice for the SQL Server service account.

Some especially sensitive accounts are detailed below:

  • SQL Server Browser is a name resolution service that provides SQL Server instance connection information to clients. If the SQL Server Browser service account is compromised, an attacker can use the permissions associated with the account to expand their control over the Windows environment. It is recommended that the SQL Server Browser service be run under the NETWORK SERVICE account that has the least privileges.
  • Some other security considerations: do not use LOCAL SYSTEM, LOCAL SERVICE, or NETWORK SERVICE as the service account to run MSFTESQL. This could provide an elevation of privileges for the Full-Text Search service.

Prefer Kerberos over NTLM authentication. Kerberos as the default authentication mode for Windows connections to SQL Server is possible if the client and server are joined in the same domain, or if they are in different domains configured with a two-way trust. Furthermore, Kerberos requires setting the appropriate SPNs. Verify whether the SQL Server name (in a WSFC solution) or hostname matches the SQL Server service account; if so, this could prevent you from defining an SPN on the correct object, because you will get User and Computer objects with the same name. Refer to Understanding Kerberos and NTLM authentication in SQL Server Connections for further information on setting up Kerberos authentication.
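
A quick way to confirm which scheme a given connection actually negotiated is the sys.dm_exec_connections DMV; a minimal check (auth_scheme reports SQL, NTLM, or KERBEROS):

SELECT session_id, auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID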

As for securing SQL Server roles and permissions, consider the following:

  • The SA account is a well-known and frequent target of malicious users. Disable the account by using the ALTER LOGIN statement. If this is not an option, consider renaming the account by using the ALTER LOGIN statement, as this action can help protect the account (see the sketch after this list). Refer to the disabling SA blog.
  • Restrict the membership of the sysadmin fixed server role to logins that use Windows Authentication, assuring a greater protection of SQL Server.
  • If users without sysadmin rights need to perform certain tasks, consider creating proxy accounts. There are two types of proxy accounts, and each relates to a specific set of tasks: the xp_cmdshell proxy and the SQL Agent job proxy.
  • The configuration required by each type of proxy is different, and the way the authorities they grant are used is different. If there is a requirement for users without sysadmin rights to run xp_cmdshell, then the xp_cmdshell proxy must be created with the sp_xp_cmdshell_proxy_account procedure. Similarly, if there is a requirement for users without sysadmin rights to own SQL Agent jobs, then one or more SQL Agent job proxy accounts must be created.
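
As an illustration of the first bullet above, disabling or renaming the SA account is a one-line change each; a minimal sketch (the replacement name is a hypothetical example):

ALTER LOGIN sa DISABLE;
GO
-- If disabling is not an option, rename the account instead (new name is hypothetical)
ALTER LOGIN sa WITH NAME = [sqlauth_admin];
GO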

Though the hour of conversation spanned many more topics, including demonstrations on my laptop using MSA accounts, I think this brain dump of the conversation will become a great reference for me if someone asks me about these again. Have you read about these before in your interactions? Have you implemented them in your environments?

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Invoking a Stored Procedure from Azure Mobile Services – Notes from the Field #066

[Note from Pinal]: This is the 66th episode of the Notes from the Field series. Azure Mobile Services is a very critical topic, and not many people know about it. When I read this article, I found it humorous at points, and some of the examples also reminded me of my past experience. If you are in data security, you will have a great time reading these notes, but if you are not, you will still love it.

In this episode of the Notes from the Field series database expert Kevin Hazzard explains Invoking a Stored Procedure from Azure Mobile Services. Read the experience of Kevin in his own words.


One of the questions I’m often asked is whether it’s possible to call SQL stored procedures from Azure Mobile Services. The answer is yes and it’s probably easier than you think. In case you don’t know, Azure Mobile Services is a way to very simply expose an HTTP service over data stored in an Azure SQL database. By default, Azure Mobile Services exposes the SQL tables directly as resources. So the HTTP methods GET, POST, PUT and DELETE will essentially be mapped to SQL operations on the underlying tables.

While this simple mapping mechanism is good for resource-oriented access to the tables, the logic to produce a usable Web API is often a bit more complex than that. Stored procedures can provide an interesting abstraction layer that allows us to use the efficiencies of the SQL Server query engine to reduce round trips to and from Internet clients, for example. Or perhaps stored procedures might be used to hide normalization peculiarities from clients or perhaps to use advanced parameter handling logic. Whatever the case may be, it would be helpful from time to time to be able to invoke stored procedures from the HTTP API that Azure Mobile Services provides.

Let’s start by assuming that an Azure Mobile Service exists with some data that we would like to expose via a stored procedure. For the purposes of this example, my service is called MobileWeatherAlert which contains a backing table in an Azure SQL Database named [MobileWeatherAlert_db]. It’s really helpful that Azure Mobile Services uses schema separation in the underlying database to manage all of its data. That schema separation allows us to expose many separate middle-tier services from one common database if needed. So, in my weather database, there’s a schema called [MobileWeatherAlert] corresponding perfectly to the name of the service that it supports. For the purposes of this example, that schema contains a table called [Observation] which is used to collect weather data by [City].

Figure 1 shows a very simple stored procedure called [GetObservationsForCity] that I’d like to be able to call from the service API.
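
The figure itself is not reproduced here, but a minimal sketch of what such a procedure might look like follows. The schema and procedure names come from the example; the parameter and body are assumptions for illustration only:

-- Sketch only: the body is hypothetical, inferred from the example's description
CREATE PROCEDURE [MobileWeatherAlert].[GetObservationsForCity]
    @city NVARCHAR(100)
AS
BEGIN
    SET NOCOUNT ON;
    SELECT *
    FROM [MobileWeatherAlert].[Observation]
    WHERE [City] = @city;
END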

There are a number of places where this procedure might be invoked. For this example, I’ll implement a custom API in the mobile service called observation.

Figure 2 shows the dialog in the Azure management console where the custom API will be created.

For this simple example, I’ll only implement the HTTP GET method in the API to invoke the stored procedure. For simplicity of the example, I’ll open up access to everyone to avoid having to pass any sort of credentials. Now I can add a bit of JavaScript to the API to make the stored procedure call.

Figure 3 demonstrates adding that JavaScript to the API via the Azure management console.

Lines 1 through 9 in the script encompass the get function that will be invoked when the HTTP GET method is used to call the service. The parameters passed to the JavaScript function are the request and response objects. From the request object, line 2 shows how to obtain a reference to the mssql object which exposes a query function for making calls into the database. Line 3 demonstrates how to call the query function to execute the [GetObservationsForCity] stored procedure, passing a single parameter for the City by which to filter. It’s important to note here that the schema in which the stored procedure resides is not named in the EXEC call. This is counter-intuitive, in my opinion, and is likely to trip up novices as they experiment with this functionality. Since we are invoking the GET method for the MobileWeatherAlert service, there’s an implicit assumption used in the preparation of the SQL statement that objects will reside in a similarly-named database schema.

Notice also on Line 3 that the request object passed into the JavaScript function exposes a query property that conveniently contains an object named city which will be parsed directly from the URL.

Figure 4 shows how that URL might be passed from Postman, a really excellent Google Chrome plug-in that allows the invocation of nearly any sort of HTTP-oriented web service or API.

Finally, in lines 4 through 6 of the JavaScript method, the success function that processes the results of the SQL query logs the results and returns them to the caller with an HTTP 200 (OK) response.  I’ve included a call to the console.log() function to show how easy it is to log just about anything when you’re debugging your JavaScript code in Azure Mobile Services.  After invoking an API or custom resource method that logs something, check out the logs tab of the mobile service in the management console to see what got saved.  Of course, you’ll want to do minimal logging in production, but while you’re testing and debugging, the log is a valuable resource.

In studying the URL and its output in Figure 4, remember that the JavaScript for the observation API didn’t have to do any special parsing of the row set returned by SQL Server to produce this result. Simply returning that data from SQL Server caused the API to emit JavaScript Object Notation (JSON) which has arguably become the lingua franca of the Internet for expressing data.

In closing, I’ll share a couple of thoughts.  If you’re interested in building a simple query interface on top of a mobile service, you don’t have to use stored procedures as shown here.  Azure Mobile Services implements fairly rich OData support directly on table resources.  With OData, filtering, sorting and pagination of SQL data are built in, so to speak.  Also, the web way of doing services (sometimes called RESTful, based on Dr. Roy Fielding’s dissertation and the HTTP standards that flowed from it) assumes that we’ll use HTTP in the way it was intended: accessing and linking resources at a more basic level, using the HTTP methods GET, POST, PUT, and DELETE as a complete, fully-functional language for accessing those resources.  Database people inherently understand and respect this access pattern better than many programmers working in traditional programming languages like C# and Java.  After all, we’re accustomed to using four basic methods to manipulate data in our databases: SELECT, INSERT, UPDATE, and DELETE.  Yet, as database people, we also know that giving software developers strict table-level access can cause all sorts of performance problems.  For those situations, where you know that some complex database operation could be performed much more efficiently with a bit of T-SQL code, a stored procedure or a view may be just the prescription your developers need.  Hopefully, this article has helped you understand how to invoke a programmatic resource in a SQL Azure database, and perhaps it will help you along the way to making the correct architectural choices in the design of your modern, data-driven web applications.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – The DBA goes to MARS – Maintenance, Architecture, Research and Support – Notes from the Field #063

[Note from Pinal]: This is the 63rd episode of the Notes from the Field series. Storytelling is an art very few have mastered in their life. When I requested Stuart to share his ideas about his famous blog about MARS, I really had not expected the response which I received in this episode of Notes from the Field. It is very critical for any DBA to be proactive in their operations; however, are we proactive enough? Well, you need to read this entire episode of Notes from the Field to grasp the concept.

In this episode of the Notes from the Field series database expert Stuart Ainsworth explains about MARS in this very captivating story.


Database administration can be a complex job, particularly in large-scale environments; DBAs are responsible for a variety of tasks, and there’s a wealth of knowledge on how to administer and support SQL Server.  For this post, however, I wanted to focus not on a specific code-related or configuration tip, but rather on a broader perspective of work in general.  As a manager and a consultant, I’ve found that most SQL Server shops have a common problem: we focus too much on reacting to problems and not enough on preventing them.

In the past, I’ve blogged about what I call the MARS paradigm: Maintenance, Architecture, Research and Support.  Most system administration tasks can be broadly categorized into one of these four areas, and each of these four areas can be conceptually defined in terms of Actions (Proactive/Reactive) and Results (Delayed/Immediate).

Think about any given day, and how much time you spend working in each of the 4 areas; which ones are given priority?  If you’re like most shops, you probably spend the most amount of time working support cases (“my query is slow”, or “can you show me where to find this information in the database?”) instead of focusing on maintenance or architecture.   A healthy IT shop will find ways to balance the need for immediate reaction to the need for proactive long-term results; however, the question that needs to be answered is: why do we react, and how do we become more proactive?

Why Database Administration Is Usually Reactive:

In part, the reactive nature of most IT departments is a cultural thing.  Fighting fires can be fun; people who solve problems are usually lauded as heroes, while people who prevent problems often go unnoticed.  Think about it this way: if a database is corrupted and a restore is required, who gets the credit for saving the day?  The DBA who did the restore or the DBA who set up the backup plan years ago (assuming they’re not the same DBA)?  Administrators who are quick to act are viewed as being responsive, even though their quick fixes usually rest on a foundation that was established invisibly some time ago.

Furthermore, most database administrators are thorough in their work; support issues are usually an opportunity for them to dig into a problem and find the best solution.  Maintenance issues, on the other hand, are addressing problems that don’t yet exist, and therefore don’t yet need a solution.  However, constantly providing support pulls resources away from building a better foundation.  If a DBA is spending time constantly fighting fires, they’re not spending time on monitoring the environment to avoid future problems.   So how do you move from a reactive environment to a proactive environment?

Moving From a Reactive Approach to a Proactive Approach:

First, track your work efforts for some period of time (30 days).  Find a way to categorize your work into one of the four broad areas defined above:

  • Maintenance: efforts needed to maintain system performance and stability
  • Architecture: efforts to add new features, functionality, or capability
  • Research: efforts to learn “something new”
  • Support: efforts to sustain existing functionality or handle customer requests

Ideally, your work is being tracked in some form of service desk software, but if not, then I’d suggest you start with something simple (like a whiteboard with sticky notes).

Second, figure out a metric for work effort; it could be time spent on an issue, or the number of issues, or some other measure of complexity.  The goal is to have a reasonable method of comparison between categories.  Based on your collection efforts, you should be able to figure out where you’re spending most of your time; if more than 50% of your time is being spent in Support or Research, it’s time to consider becoming more proactive.

Becoming more proactive can be a never-ending journey, but I have a few suggestions on where to start:

  1. Prioritize accordingly. Not every support case deserves to be treated as your top priority; make sure that maintenance and architecture issues are included in your escalation plans.
  2. Set it, but don’t forget it. SQL Server offers a lot of tools for automating maintenance; make sure that you allocate time to review successful executions as well as failures.  Understanding when a system is performing outside of spec is nearly as critical as being notified when it fails.
  3. Communicate with interested parties. Let your manager and your peers know about your maintenance and architecture efforts; the best metric would be a reduction in support calls and cases.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to Find Weak Passwords Using T-SQL?

Recently one of my blog readers emailed me the question below.

Hi Pinal,
Need your urgent help. In the recent past, we were attacked by a hacker who was able to get into our SQL Server via a sysadmin account and did big damage to our data. To make sure it doesn’t happen in the future, I have taken up the task of finding SQL Server passwords which are weak.

Do you have any suggestions for me?

This is one of the areas which always haunts SQL DBAs. The recommendation is to use Windows Authentication to connect to SQL Server, which would save us from all such problems, but that is not always feasible. Now, if we decide to choose SQL Authentication, there is a setting, “Enforce Password Policy”, which would ensure that you are choosing a strong password.

If the recommendations are not followed, you might end up in a situation where SQL logins have weak, basic passwords. SQL Server provides a function, PWDCOMPARE, which can be very useful for finding known passwords. Below are a few example uses of this out-of-the-box function:

SELECT name,
       name AS 'password'
FROM   sys.sql_logins
WHERE  PWDCOMPARE(name, password_hash) = 1
UNION
SELECT name,
       '<blank>' AS 'password'
FROM   sys.sql_logins
WHERE  PWDCOMPARE('', password_hash) = 1
UNION
SELECT name,
       'password123' AS 'password'
FROM   sys.sql_logins
WHERE  PWDCOMPARE('password123', password_hash) = 1

In the above query, we are trying to find:

  • Password same as user name – first query
  • Blank password – second query
  • Password = password123 – third query

These are some of the most common passwords used in the industry. I am sure you can extend this further by modifying it and adding more weak passwords, as sketched below.
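
One way to extend it without stacking UNION clauses is to drive PWDCOMPARE from a table of candidate passwords; a minimal sketch (the candidate list is illustrative only):

DECLARE @Candidates TABLE (candidate NVARCHAR(128));
-- Illustrative list only; substitute a real dictionary of weak passwords
INSERT INTO @Candidates VALUES ('password'), ('P@ssw0rd'), ('123456'), ('letmein');
SELECT l.name, c.candidate AS 'password'
FROM sys.sql_logins AS l
INNER JOIN @Candidates AS c
        ON PWDCOMPARE(c.candidate, l.password_hash) = 1;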

Here is the sample output for the above:

Hope this blog helps you find weak passwords and make them more complex. Have you ever had a need to run such checks in your environments? Do let me know.

Reference: Pinal Dave (http://blog.sqlauthority.com)