SQL SERVER – Adding File to Database in AlwaysOn Availability Group

Writing typical interview questions has always been on my list. I look for new ways of testing a candidate, and most importantly, some interview questions are designed to verify that the person has strong fundamentals. This blog is one such attempt to understand the basics.

This is one of the common interview questions – what would happen if you add a new file (data file MDF/NDF or transaction log file LDF) to a database which is part of a high availability solution? As you might know, there are various high availability solutions like failover clustering, database mirroring, and log shipping, and the effect would be different in each solution. Out of the many high availability solutions, in this blog we will talk about the availability group feature, which was introduced in SQL Server 2012.

Let us first talk about the basics of an availability group. Every modification made to the database is written to the transaction log file as a log record. Since the database is part of an availability group, the same log records are sent to all secondary replicas. This means that adding a data or log file to the primary database essentially roll-forwards the same command on all secondary replicas: the ADD FILE operation will be executed on the secondary servers as well. Using the below code, I have added a file on drive E on the primary instance.

USE [master]
GO
ALTER DATABASE [MyConDB]
ADD FILE (NAME = N'SQLAuthority_2',
FILENAME = N'E:\SQLAuthority.ndf' ,
SIZE = 4096KB, FILEGROWTH = 1024KB) TO FILEGROUP [PRIMARY]
GO

If this drive does not exist on the secondary, the operation will fail there, resulting in a suspended secondary replica. Here is the information seen in the ERRORLOG (SQL SERVER – Where is ERRORLOG? Various Ways to Find its Location):

2015-01-01 17:52:14.19 spid45s     Error: 5123, Severity: 16, State: 1.
2015-01-01 17:52:14.19 spid45s     CREATE FILE encountered operating system error 3(The system cannot find the path specified.) while attempting to open or create the physical file ‘E:\SQLAuthority.ndf’.

2015-01-01 17:52:14.27 spid45s     Error: 5183, Severity: 16, State: 1.
2015-01-01 17:52:14.27 spid45s     Cannot create the file “SQLAuthority_2”. Use WITH MOVE to specify a usable physical file name. Use WITH REPLACE to overwrite an existing file.

2015-01-01 17:52:14.30 spid45s     AlwaysOn Availability Groups data movement for database ‘SQLAuthority’ has been suspended for the following reason: “system” (Source ID 2; Source string: ‘SUSPEND_FROM_REDO’). To resume data movement on the database, you will need to resume the database manually. For information about how to resume an availability database, see SQL Server Books Online.

2015-01-01 17:52:14.33 spid45s     Error: 3313, Severity: 21, State: 2.
2015-01-01 17:52:14.33 spid45s     During redoing of a logged operation in database ‘SQLAuthority’, an error occurred at log record ID (34:358:1). Typically, the specific failure is previously logged as an error in the Windows Event Log service. Restore the database from a full backup, or repair the database.

If it is possible to create the same path on the secondary replica, then after doing so we need to resume the data movement using the below command.

ALTER DATABASE [SQLAuthority] SET HADR RESUME;
GO

If it is not possible to recreate the path on the secondary, then we need to reinitialize this secondary again, which means restoring a full backup and a log backup from the primary.
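Reinitializing the secondary looks roughly like the sketch below. The backup file paths and the availability group name [AGName] are hypothetical placeholders; the MOVE clause lets you place the new file on a drive that actually exists on the secondary.

```sql
-- On the secondary replica (paths and AG name are illustrative placeholders)
RESTORE DATABASE [SQLAuthority]
    FROM DISK = N'\\BackupShare\SQLAuthority_Full.bak'
    WITH NORECOVERY,
         MOVE N'SQLAuthority_2' TO N'D:\Data\SQLAuthority.ndf';
RESTORE LOG [SQLAuthority]
    FROM DISK = N'\\BackupShare\SQLAuthority_Log.trn'
    WITH NORECOVERY;
-- Rejoin the database to the availability group
ALTER DATABASE [SQLAuthority] SET HADR AVAILABILITY GROUP = [AGName];
```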

Have you encountered the same problem in Log shipping or Clustering? What steps did you take to solve it? Do let me know via the comments below, I am sure we can learn from each other this way.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to do IF…THEN in SQL SERVER?

Just the other day I received the following question –

“Pinal,

I am from DotNet expertise and I do not know how to convert the logic of IF…THEN into SQL statement, would you please help?

~Rajesh”

I love simple questions as they have simple answers. In SQL Server, we can use a CASE statement to implement IF…THEN. Here is an example of the CASE statement.

Example 1:

If your logic is as follows:

IF -1 < 1 THEN 'TRUE'
 ELSE 'FALSE'

You can just use CASE statement as follows:

-- SQL Server 2008 and earlier version solution
SELECT CASE
WHEN -1 < 1 THEN 'TRUE'
ELSE 'FALSE' END AS Result
GO

-- SQL Server 2012 solution
SELECT IIF ( -1 < 1, 'TRUE', 'FALSE' ) AS Result;
GO

If you are further interested in how IIF works in SQL Server 2012, read the blog post.

Well, in our example the condition we have used is pretty simple, but in the real world the logic can be very complex.
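For instance, CASE conditions can be chained and can reference table columns. The table and values below are made up purely for illustration:

```sql
-- Hypothetical example: grading scores with a searched CASE expression
DECLARE @Scores TABLE (Student VARCHAR(20), Score INT);
INSERT INTO @Scores VALUES ('Rajesh', 92), ('Meera', 74), ('Anil', 55);

SELECT Student,
       Score,
       CASE
           WHEN Score >= 90 THEN 'A'
           WHEN Score >= 70 THEN 'B'
           WHEN Score >= 50 THEN 'C'
           ELSE 'F'
       END AS Grade
FROM @Scores;
```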

You can read more about this subject in the blog post over here: SQL SERVER – Implementing IF … THEN in SQL SERVER with CASE Statements.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – A Stored Procedure for Executing SSIS Packages in the SSIS Catalog – Notes from the Field #092

[Note from Pinal]: This is a new episode of the Notes from the Field series. SQL Server Integration Services (SSIS) is one of the most essential parts of the entire Business Intelligence (BI) story. It is a platform for data integration and workflow applications.


In this episode of the Notes from the Field series I asked SSIS Expert Andy Leonard a very crucial question – how do you create a stored procedure for executing an SSIS Package in the SSIS Catalog? Andy was very kind to answer the question and provide plenty of information about how a novice developer can learn SSIS from the beginning and become an expert in the technology.


The following is a snippet from Chapter 2 of the book SSIS Design Patterns co-written by Matt Masson (Blog | @mattmasson), Tim Mitchell (Blog | @Tim_Mitchell), Jessica Moss (Blog | @jessicammoss), and Michelle Ufford (Blog | @sqlfool). Earlier in the chapter there are demos that describe a simple SSIS package named Chapter2.dtsx which is part of an SSIS project named Chapter2, and which is deployed to an instance of the SSIS Catalog in a Folder named “Chapter 2”. But you can use this stored procedure to execute any SSIS package in the SSIS Catalog. That’s the whole point!

SQL Server 2014 provides a new way to manage and execute Integration Services packages: Integration Server Catalogs. We explore this method next.

Integration Server Catalogs

You can only manage SSIS projects that use the Project Deployment Model in Integration Services Catalogs. To execute a package in the catalog, use SSMS to connect to the instance of SQL Server hosting the SSISDB database. Expand the Integration Services Catalogs node, and then expand the SSISDB node. Drill into the folder containing the SSIS project and package(s). Right-click the package you wish to execute and click Execute, as shown in Figure 2-1.


Figure 2-1. Executing an SSIS Package deployed to the SSIS Catalog

The Execute Package Window displays, as shown in Figure 2-2. It allows you to override Parameter values, ConnectionString properties of Connection Managers built at design-time, or any other externalize-able property accessible from a Package Path (via the Advanced tab) for this execution instance of the SSIS package stored in the SSIS Catalog.


Figure 2-2. Execute Package Window

Integration Server Catalog Stored Procedures

Please note the Script button above the Parameters tab in Figure 2-2. This button allows you to generate Transact-SQL statements that will execute the SSIS package. For the Chapter2.dtsx package stored in the SSIS Catalog, the scripts will appear similar to that in Listing 2-1.

Listing 2-1. Transact-SQL Script Generated From the Execute Package Window

Declare @execution_id bigint
EXEC [SSISDB].[catalog].[create_execution]
   @package_name=N'Chapter2.dtsx'
  ,@execution_id=@execution_id OUTPUT
  ,@folder_name=N'Chapter2'
  ,@project_name=N'Chapter2'
  ,@use32bitruntime=False
  ,@reference_id=Null
Select @execution_id
DECLARE @var0 smallint = 1
EXEC [SSISDB].[catalog].[set_execution_parameter_value]
   @execution_id
  ,@object_type=50
  ,@parameter_name=N'LOGGING_LEVEL'
  ,@parameter_value=@var0
EXEC [SSISDB].[catalog].[start_execution] @execution_id
GO

You can use these same stored procedures to execute SSIS Packages in the SSIS Catalog! In fact, I designed a script to create a wrapper stored procedure that will call the Transact-SQL statements executed when an SSIS Package is executed in the SSIS Catalog. You can see that script in Listing 2-2.

Listing 2-2. Script to Build a Wrapper Stored Procedure for Executing SSIS Packages in the SSIS Catalog

 /* Select the SSISDB database */
Use SSISDB
Go

 /* Create a parameter (variable) named @Sql */
Declare @Sql varchar(2000)

 /* Create the Custom schema if it does not already exist */
print 'Custom Schema'
If Not Exists(Select name 
              From sys.schemas 
                Where name = 'custom')
 begin
   /* Create Schema statements must occur first in a batch */
  print ' - Creating custom schema'
  Set @Sql = 'Create Schema custom'
  Exec(@Sql)
  print ' - Custom schema created'
 end
Else
 print ' - Custom Schema already exists.'
print ''

 /* Drop the Custom.execute_catalog_package Stored Procedure if it already exists */
print 'Custom.execute_catalog_package Stored Procedure'
  If Exists(Select s.name + '.' +  p.name
            From sys.procedures p
            Join sys.schemas s
                On s.schema_id = p.schema_id
         Where s.name = 'custom'
           And p.name = 'execute_catalog_package')
   begin
    print ' - Dropping custom.execute_catalog_package'
    Drop Procedure custom.execute_catalog_package
    print ' - Custom.execute_catalog_package dropped'
   end

   /* Create the Custom.execute_catalog_package Stored Procedure */
  print ' - Creating custom.execute_catalog_package'
go

/*

     Stored Procedure: custom.execute_catalog_package
     Author: Andy Leonard
     Date: 4 Mar 2012
     Description: Creates a wrapper around the SSISDB Catalog procedures
                  used to start executing an SSIS Package. Packages in the
                SSIS Catalog are referenced by a multi-part identifier
                 - or path - that consists of the following hierarchy:
        Catalog Name: Implied by the database name in Integration Server 2014
        |-Folder Name: A folder created before or at Deployment to contain the SSIS project
        |-Project Name: The name of the SSIS Project deployed
        |-Package Name: The name(s) of the SSIS Package(s) deployed

        Parameters:
        @FolderName [nvarchar(128)] {No default} – 
         contains the name of the Folder that holds the SSIS Project
        @ProjectName [nvarchar(128)] {No default} – 
         contains the name of the SSIS Project that holds the SSIS Package
        @PackageName [nvarchar(260)] {No default} – 
         contains the name of the SSIS Package to be executed
        @ExecutionID [bigint] {Output} – 
         Output parameter (variable) passed back to the caller
        @LoggingLevel [varchar(16)] {Default} – 
         contains the (case-insensitive) name of the logging level
         to apply to this execution instance
        @Use32BitRunTime [bit] {Default} –
         1 == Use 32-bit run-time
         0 == Use 64-bit run-time
        @ReferenceID [bigint] {Default} –
         contains a reference to an Execution Environment
        @ObjectType [smallint] –
         contains an identifier that appears to be related to the
         SSIS PackageType property
        Guessing: @ObjectType == PackageType.ordinal (1-based-array) * 10
         Must be 20, 30, or 50 for catalog.set_execution_parameter_value
         stored procedure

        Test: 
        1. Create and deploy an SSIS Package to the SSIS Catalog.
        2. Exec custom.execute_catalog_package and pass it the 
          following parameters: @FolderName, @ProjectName, @PackageName, @ExecutionID Output 
        @LoggingLevel, @Use32BitRunTime, @ReferenceID, and @ObjectType are optional and 
        defaulted parameters.

         Example:
           Declare @ExecId bigint
           Exec custom.execute_catalog_package
         'Chapter2'
        ,'Chapter2'
        ,'Chapter2.dtsx'
        ,@ExecId Output
        3. When execution completes, an Execution_Id value should be returned.
        View the SSIS Catalog Reports to determine the status of the execution 
        instance and the test.

*/
Create Procedure custom.execute_catalog_package
  @FolderName nvarchar(128)
 ,@ProjectName nvarchar(128)
 ,@PackageName nvarchar(260)
 ,@ExecutionID bigint Output
 ,@LoggingLevel varchar(16) = 'Basic'
 ,@Use32BitRunTime bit = 0
 ,@ReferenceID bigint = NULL
 ,@ObjectType smallint = 50
As

 begin
  
  Set NoCount ON
  
   /* Call the catalog.create_execution stored procedure
      to initialize execution location and parameters */
  Exec catalog.create_execution
   @package_name = @PackageName
  ,@execution_id = @ExecutionID Output
  ,@folder_name = @FolderName
  ,@project_name = @ProjectName
  ,@use32bitruntime = @Use32BitRunTime
  ,@reference_id = @ReferenceID

   /* Populate the @ExecutionID parameter for OUTPUT */
  Select @ExecutionID As Execution_Id

   /* Create a parameter (variable) named @Sql */
  Declare @logging_level smallint
   /* Decode the Logging Level */
  Select @logging_level = Case 
                           When Upper(@LoggingLevel) = 'BASIC'
                           Then 1
                           When Upper(@LoggingLevel) = 'PERFORMANCE'
                           Then 2
                            When Upper(@LoggingLevel) = 'VERBOSE'
                           Then 3
                           Else 0 /* 'None' */
                          End 
   /* Call the catalog.set_execution_parameter_value stored
      procedure to update the LOGGING_LEVEL parameter */
  Exec catalog.set_execution_parameter_value
    @ExecutionID
   ,@object_type = @ObjectType
   ,@parameter_name = N'LOGGING_LEVEL'
   ,@parameter_value = @logging_level

   /* Call the catalog.start_execution (self-explanatory) */
  Exec catalog.start_execution
    @ExecutionID

 end

GO

If you execute this script to create the custom schema and stored procedure in your instance of the SSISDB database, you can test it using the statement in Listing 2-3.

Listing 2-3. Testing the SSISDB.custom.execute_catalog_package Stored Procedure

Declare @ExecId bigint
Exec SSISDB.custom.execute_catalog_package 'Chapter2','Chapter2','Chapter2.dtsx',
@ExecId Output
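After starting an execution, you can check its outcome in the catalog. The query below is a sketch against the SSISDB.catalog.executions view; run it in the same batch as Listing 2-3 so the @ExecId variable is still in scope, and verify the status code values against the documentation for your version.

```sql
-- Check the status of the execution started above (@ExecId from Listing 2-3)
SELECT execution_id,
       folder_name,
       project_name,
       package_name,
       status   -- e.g. 2 = running, 4 = failed, 7 = succeeded
FROM SSISDB.catalog.executions
WHERE execution_id = @ExecId;
```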

Conclusion

This custom.execute_catalog_package stored procedure can be used to execute an SSIS package from any folder and project in the SSIS Catalog.


If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Knowing Which Database is Consuming My Memory

I have been fortunate enough to attend conferences around the world, and it is always refreshing to see people come up with really common yet never-answered questions from time to time. The classic questions I have been asked ever since I started working with databases and SQL Server are – why does SQL Server take all the memory and not return it? Even more interesting is the question – can I know how much memory my databases are using?

I always tell them memory is a big topic, and we need to use a number of commands like DBCC MEMORYSTATUS to understand the internal workings. A much more interesting way is to find out which pages from our various databases are in the buffer pool. This can be obtained using DMVs as shown below:

--List the Number of pages in the buffer pool by database and page type
SELECT DB_NAME(database_id) AS database_name,
page_type,
COUNT(page_id) AS number_pages
FROM sys.dm_os_buffer_descriptors
WHERE database_id != 32767
GROUP BY database_id, page_type
ORDER BY number_pages DESC
GO
--List the number of pages in the buffer pool by database
SELECT DB_NAME(database_id) AS database_name,
COUNT(page_id) AS number_pages
FROM sys.dm_os_buffer_descriptors
WHERE database_id != 32767
GROUP BY database_id
ORDER BY database_id
GO

(Output: number of pages in the buffer pool, broken down by database and by page type.)

As you can see in the above output, we get the number of data pages and index pages that are loaded into SQL Server memory for each database.

A small variation of the above query can be to scan the buffer pool based on the type of pages that are loaded into memory. Below is a typical query fired against the same DMV.

--List the number of pages in the buffer pool by page type
SELECT page_type, COUNT(page_id) AS number_pages
FROM sys.dm_os_buffer_descriptors
GROUP BY page_type
ORDER BY number_pages DESC
GO
--List the number of dirty pages in the buffer pool
SELECT COUNT(page_id) AS number_pages
FROM sys.dm_os_buffer_descriptors
WHERE is_modified = 1
GO

(Output: page counts in the buffer pool by page type, and the count of dirty pages.)

In the above query, I have also shown the dirty pages that are in memory and are yet to be flushed out.
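Since each page is 8 KB, the same DMV can also report approximate memory usage per database in megabytes. Here is a small sketch; the resource database filter mirrors the queries above:

```sql
--Approximate buffer pool usage in MB per database (8 KB per page)
SELECT DB_NAME(database_id) AS database_name,
       COUNT(page_id) * 8 / 1024 AS buffer_pool_mb
FROM sys.dm_os_buffer_descriptors
WHERE database_id != 32767
GROUP BY database_id
ORDER BY buffer_pool_mb DESC;
GO
```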

This DMV is super useful when you have a number of databases running on your server and want to find out which one is consuming the server memory. Do let me know your thoughts and what output you are seeing in your environment. Is there anything strange that you find? Let me know via your comments.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Unable to Start SQL – Error: SQL Server Could Not Spawn Lazy Writer Thread

I have a tendency to try out various settings on a typical SQL Server just to see how the behavior changes from time to time. While playing with affinity settings in SQL Server, I shot myself in the foot. It was interesting to see how I came out of the situation. I was not able to reproduce this error on the Enterprise edition, though.

Here is what I found in Errorlog:

2015-07-28 17:12:11.31 Server    Processor affinity turned on: node 0, processor mask 0x00000006. Threads will execute on CPUs per affinity settings.

2015-07-28 17:12:11.31 Server    I/O affinity turned on, processor mask 0x00000001. Disk I/Os will execute on CPUs per affinity I/O mask/affinity64 mask config option.

2015-07-28 17:12:17.44 Server    Node configuration: node 0: CPU mask: 0x00000006:0 Active CPU mask: 0x00000006:0.

2015-07-28 17:12:17.45 Server    Error: 17120, Severity: 16, State: 1.

2015-07-28 17:12:17.45 Server    SQL Server could not spawn lazy writer thread. Check the SQL Server error log and the Windows event logs for information about possible related problems.
If we convert the mask values above to binary, we can decode the affinity which I had set.

0x00000006 (in hex) = 0110 (in binary) in Processor affinity
0x00000001 (in hex) = 0001 (in binary) in I/O affinity

You can treat the 1s and 0s as checkboxes for processors. The below picture should make it clearer.

(Screenshot: processor affinity and I/O affinity checkboxes in Server Properties, matching the masks above.)

As you can see there is no overlap, so I didn’t see any problem, but still SQL Server didn’t start. I checked various online places, but a few articles suggested repairing the instance, which does not work in this situation. So finally I sat down calmly and recalled that there is an option to start SQL Server with minimal configuration. I had used it earlier with one of my clients when SQL startup failed due to tempdb misconfiguration.

Here are the steps which worked:

  1. Start SQL in minimal configuration using the -f parameter.

NET START MSSQLSERVER /f

If you have a named instance called Inst1, then you need to use the below:
NET START MSSQL$INST1 /f

  2. Connect to SQL using any client tool (SQLCMD or SSMS)
  3. Change the affinity back to default.

T-SQL

ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = AUTO;
GO
EXEC sys.sp_configure N'show advanced options', N'1';
GO
RECONFIGURE WITH OVERRIDE;
GO
EXEC sys.sp_configure N'affinity I/O mask', N'0';
GO
RECONFIGURE;
GO
EXEC sys.sp_configure N'show advanced options', N'0';
GO
RECONFIGURE WITH OVERRIDE;
GO

UI

(Screenshot: the Processors page of Server Properties in SSMS, with affinity settings reset to default.)

  4. Stop SQL Service

Default Instance – NET STOP MSSQLSERVER

Named Instance – NET STOP MSSQL$INST1

  5. Start it normally.

In some situations, you might run into an issue where you get this error while following the above steps.

Login failed for user ‘LoginName’. Reason: Server is in single user mode. Only one administrator can connect at this time. (Microsoft SQL Server, Error: 18461)

You can refer to my friend Balmukund’s blog to make a connection in single-user mode via the startup parameter “m”:

Help: How to fix error – Reason: Server is in single user mode. Only one administrator can connect at this time

Once you are able to connect using SQLCMD, you need to use T-SQL to fix the affinity value. I hope this blog helps you solve one of the things I discovered through my experiments. Do let me know.
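To confirm that the affinity settings really are back to default, a quick check against sys.configurations can help (for both masks, a value of 0 means the automatic default):

```sql
-- Verify affinity-related settings; value_in_use = 0 means default/auto
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name LIKE 'affinity%';
```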

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Increase Characters Displayed in Text Results

Working with the various SQL Server options is always a great feeling. Every now and then I go to the Options page to get just the thing I want. When I wrote the blog post – SQL SERVER – Getting started and creating JSON using SQL Server 2016 – I was expecting interesting conversations about JSON use. On the contrary, guess what – I got an email from a junior explorer of SQL Server 2016 CTP stating that he was not getting the same output that I had shown in my blog post. I was surprised by this statement.

I asked the mailer to send me a screenshot or an example of what he was seeing. As soon as I saw it, I knew the context – this was exactly what I had originally got as well, and I had changed a setting for the blog post.

Using Query Options

Here are the steps I did while I was writing the previous blog post.

Right click in the query editor and select Query Options. Under Results, select Text. Uncheck “Include column headers in the result set” and change the maximum number of characters displayed to 8192.

Then right-click in the editor, click Results To, choose Results to Text, and click OK.

(Screenshot: the Query Options dialog with the maximum number of characters displayed increased to 8192.)

After making the above change, I was able to get up to 8192 characters as part of the output to Text.

This is one of the simplest options, which I thought was easy to find, and yet I had to share it with a couple of readers. Since I was repeating the same steps, I thought it was worth writing a blog post about it for you.

Note: The default maximum number of characters displayed in each column is 256. The maximum allowed value is 8192.

Do let me know if you have ever changed this option in your environments. What value did you use when you had to change it? Let me know.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #030 – Retrieve Last Inserted Identity of Record

Question: What are the different ways of retrieving the identity of last inserted record? Which method do you prefer?

Answer: There are many different ways to do that. Here are three different ways to retrieve the identity of the last inserted record. I personally use SCOPE_IDENTITY() to return the identity of the most recently inserted record, as it avoids the potential problems associated with triggers.

SELECT @@IDENTITY
It returns the last IDENTITY value produced on a connection, regardless of the table that produced the value, and regardless of the scope of the statement that produced the value.
@@IDENTITY will return the last identity value entered into a table in your current session. While @@IDENTITY is limited to the current session, it is not limited to the current scope. If you have a trigger on a table that causes an identity to be created in another table, you will get the identity that was created last, even if it was the trigger that created it.

SELECT SCOPE_IDENTITY()
It returns the last IDENTITY value produced on a connection and by a statement in the same scope, regardless of the table that produced the value.
SCOPE_IDENTITY(), like @@IDENTITY, will return the last identity value created in the current session, but it will also limit it to your current scope as well. In other words, it will return the last identity value that you explicitly created, rather than any identity that was created by a trigger or a user defined function.

SELECT IDENT_CURRENT('tablename')
It returns the last IDENTITY value produced in a table, regardless of the connection that created the value, and regardless of the scope of the statement that produced the value.
IDENT_CURRENT is not limited by scope and session; it is limited to a specified table. IDENT_CURRENT returns the identity value generated for a specific table in any session and any scope.
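The difference between these functions is easiest to see with a trigger. The tables and trigger below are hypothetical, created only to demonstrate the behavior: the trigger on TableA inserts into TableB, so @@IDENTITY reports TableB’s identity while SCOPE_IDENTITY() still reports TableA’s.

```sql
-- Hypothetical demo: a trigger on TableA inserts into TableB
CREATE TABLE TableA (ID INT IDENTITY(1, 1), Val VARCHAR(10));
CREATE TABLE TableB (ID INT IDENTITY(100, 1), Val VARCHAR(10));
GO
CREATE TRIGGER trgA ON TableA AFTER INSERT AS
    INSERT INTO TableB (Val) SELECT Val FROM inserted;
GO
INSERT INTO TableA (Val) VALUES ('test');

SELECT @@IDENTITY            AS [@@IDENTITY],      -- from TableB (the trigger's insert)
       SCOPE_IDENTITY()      AS [SCOPE_IDENTITY],  -- from TableA (current scope)
       IDENT_CURRENT('TableB') AS [IDENT_CURRENT_B];
```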

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to Remove All Characters From a String Using T-SQL?

In the recent past I have seen that the best interaction and learning happen when we all get into puzzle-and-challenge mode. That gets the creative juices flowing, and some really innovative solutions emerge. So this blog is written like a nice trivia solution, and I would love to hear from you on potential alternative solutions as you read the simplified one below.

There are many situations where a developer needs to do a lot of string manipulation using T-SQL, even though many of these things would ideally be done via SQLCLR.

One of my blog readers posted a comment recently:

I want to get only integer part in string. for example, string contain ab123ce234fe means i want only integer part like 123234. how can i get?

I gave him the below suggestion.

SET NOCOUNT ON;
DECLARE @loop INT = 0;
DECLARE @str VARCHAR(8000) = 'ab123ce234fe';
WHILE @loop < 26
BEGIN
    SET @str = REPLACE(@str, CHAR(65 + @loop), '');
    SET @loop = @loop + 1;
END
SELECT @str;
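One possible alternative (a sketch, not necessarily the best answer) is to remove whatever character PATINDEX finds that is not a digit. This handles any non-numeric character, not just the letters A–Z:

```sql
-- Strip every non-digit character using PATINDEX and STUFF
DECLARE @str VARCHAR(8000) = 'ab123ce234fe';
WHILE PATINDEX('%[^0-9]%', @str) > 0
    SET @str = STUFF(@str, PATINDEX('%[^0-9]%', @str), 1, '');
SELECT @str; -- 123234
```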

Do you think there is a better way to achieve that? Please write it in comments section. It is going to be a great learning opportunity for all.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Puzzle – Inside Working of Datatype smalldatetime

In the recent past, I have been bringing you a few puzzles around date and time functions. All of these have brought some awesome feedback, and you all have been sharing some awesome answers. Today’s question comes from an unusual place: a developer said he was getting unexpected results when working with datetime. So here is how the conversation went:

Dev: Hi Pinal.

Pinal: Yes, buddy. How can I be of help?

Dev: I have a problem working with datetime datatype.

Pinal: Oh is it? Tell me more about it.

Dev: I am trying to CAST a string coming from a front end app to SQL Server.

Pinal: Ok, where is the problem?

Dev: Every now and then, it gives different results than what I try to send via the wire.

Pinal: I am not able to understand. Can you send me an example to understand this better?

Dev: Sure, I will send via email.

After I received the actual code and email, I resumed the conversation.

Pinal: Hey.

Dev: Yes, Sir. Did you find the reason?

Pinal: I found the reason, and I am going to blog about it tomorrow using a simplified version of the query you sent me. So you are about to find the answer there. The hint is: you are doing a convert to smalldatetime.

Dev: Well, thanks. I think I will do my homework and try to answer it in your blog too.

This concept was interesting, and I planned to share the experience with you folks as a small puzzle. So what are we talking about?

Look at the below query and guess what would be the output:

SELECT
CAST('2015-01-01 12:45:29.755' AS SMALLDATETIME),
CAST('2015-01-01 12:45:35.755' AS SMALLDATETIME)

This is a simple conversion, but the two values give different results. Why is that? Do you know the reason? Use the comments section to explain. I may send a surprise gift to 2 of the winners.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – DevOps for the DBA – Notes from the Field #091

[Note from Pinal]: This is the 91st episode of the Notes from the Field series. DevOps is a new buzzword with lots of substance behind it. Not everyone understands what it means, and not everyone actually relates it to their daily job. Lots of people think it is alright not to explore and learn about it, assuming that after a few days or months it will be meaningless. However, the reality seems very different: DevOps is one of those concepts which has just been rediscovered. I recently asked my friend Stuart what DevOps means for a DBA, and he had an informative answer to this question.

In this episode of the Notes from the Field series database expert Stuart Ainsworth explains about DevOps for the DBA. Let us read the story in his own words.



Not every problem that a DBA encounters is a technical one; sometimes, it’s a procedural one.  I’ve recently become enamored with the philosophy of DevOps, and have tried to find ways to transform those concepts and ideas into practical, actionable advice for my clients.  It’s still a work in progress, but I’ll try to summarize my thoughts today (and, they could change tomorrow).

A Brief Summary of DevOps

DevOps is a collaborative approach to software development and delivery that is influenced by Agile development methods and Lean IT principles; the idea is to decrease time-to-market by looking at (primarily enterprise) software development as a life cycle, and finding ways to automate and increase efficiency.  The general framework is often described as consisting of the Three Ways (a phrase fully explored in The Phoenix Project):

  1. The First Way: Systems Thinking. A complete software development lifecycle encompasses all the functionality involved in both development and operations (including QA and acceptance testing); the entire system is intertwined in order for the business to bring value to customers.  Development responsibility isn’t complete when software passes QA, and Operational responsibilities don’t simply begin during the install.  Both groups should be involved to some degree throughout the process.
  2. The Second Way: Amplify Feedback Loops. No system is complete without feedback, and DevOps approaches focus on increasing the immediacy of that feedback.  Bugs need to be simple to report and acknowledge; suggestions should be taken seriously.
  3. The Third Way: Culture of Continual Experimentation and Learning. Success is found when you take a risk, and DevOps is about creating an environment where risk is encouraged and rewarded.  This is often problematic for Operations, who are usually responsible for maintaining the stability of an environment, but that’s a key to this principle: Ops MUST be involved in finding a way for Dev to take risks while continuing to ensure the stability of the environment.

From Philosophy to Practice

All that’s well and good, but what does it mean for the average DBA? How do we go from thinking about processes, procedures, and philosophy to real meaningful change?  Here’s three specific practices that operational DBA’s should implement as soon as possible:

  1. Respect the role of development. Consider the following scenario: a performance issue has occurred in operations, and the DBA has stepped in to make a fix (say an index change, or something more drastic like a stored proc rewrite).  The implemented fix is now different from the source of truth for development (their source control), and that’s bad.  It hampers the development effort, but sometimes emergency interventions are necessary to keep the business afloat; what do you do? First and foremost, DBAs need to limit the changes they make to strictly emergency fixes, and emergency fixes need to be reported to development as soon as possible.  Not all performance changes need to be emergency fixes; those should be reported to dev, but unless the business is grinding to a halt, the DBA should not change the code in operations.  This practice illustrates both the First and the Second Way above.
  2. Request simple methods to supply feedback. If developers are responsible for writing code, they have to be receptive to feedback from operations.  I’ve worked in shops where only the developers and QA had access to create bugs; this kills the feedback loop.  Operational DBAs must find ways to report issues to development as soon as possible; any electronic tracking measure should make it simple to identify who reported an issue, when they reported it, and what was decided to be done about it.  If the feedback is associated with an emergency change, the DBA should include the code that they changed, so that source control can be edited to include the current production version.  The key takeaway is that EVERY change to an operational system must be reported to the developers.
  3. Strive to provide operational support to development efforts. The best development environment is one that is identical to operations; that’s not always feasible or practical to implement.  DBAs need to be part of the development process by finding ways to make the dev environment as similar to production as possible; virtualization technology can be a great tool for that, whether it’s virtualizing servers (e.g., VMware, Hyper-V, VirtualBox) or databases (e.g., Delphix).  Short of that, you can just make sure that your developers have a good representation of the current production statistics.

Summary

Software development is hard; systems administration is hard.  Working together can make it easier, but you have to focus on practical changes to your software culture to make things happen.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)