SQL SERVER – Convert Old Syntax of RAISERROR to THROW

I have been receiving quite a few comments on my Facebook page, and here is one question which instantly caught my attention.

“We have a legacy application, and we have been using SQL Server for a long time. Recently we upgraded to the latest version of SQL Server, and we are updating our code as well.

Here is the question for you: there are plenty of places where we have been using old-style RAISERROR code, and now we want to convert it to use THROW. Would you please suggest a sample example for the same?”

Very interesting question. THROW was introduced in SQL Server 2012 to handle errors gracefully and return the error message. Let us quickly see two examples: one for earlier versions of SQL Server and one for SQL Server 2012 and later.

Earlier Versions of SQL Server

BEGIN TRY
    SELECT 1/0
END TRY
BEGIN CATCH
    DECLARE @ErrorMessage NVARCHAR(2000), @ErrorSeverity INT
    SELECT
        @ErrorMessage = ERROR_MESSAGE(),
        @ErrorSeverity = ERROR_SEVERITY()
    RAISERROR (@ErrorMessage, @ErrorSeverity, 1)
END CATCH

SQL Server 2012 and Later Versions

BEGIN TRY
    SELECT 1/0
END TRY
BEGIN CATCH
    THROW
END CATCH
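A bare THROW inside the CATCH block re-throws the original error with its number, severity, and state intact, which is usually exactly what you want when converting legacy code. If your legacy RAISERROR calls raised a custom message instead, THROW also accepts explicit arguments. Here is a minimal sketch; the error number 50000 and the message text are placeholders of my own:

BEGIN TRY
    SELECT 1/0
END TRY
BEGIN CATCH
    -- Any statement preceding THROW must be terminated with a semicolon
    PRINT 'Logging the error...';
    -- User-defined error numbers passed to THROW must be 50000 or higher
    THROW 50000, 'Division by zero occurred.', 1;
END CATCH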

That’s it! We are done!

Reference: Pinal Dave (http://blog.SQLAuthority.com)


SQL SERVER – Validating If Date is Last Day of the Year, Month or Week

Here is one more question I recently received in an email-

“Pinal, is there any ready-made function which will display if the given date is the last day of the year, month, or week?

For example, if a date is the last day of the year and the last day of the month, I want to display ‘Last Day of the Year’, and if a date is the last day of the month and the last day of the week, I want to display ‘Last Day of the Week’.”

Well, it is a very interesting question, and there is no such built-in function available for the same.

However, here is a script I wrote earlier for my personal use which accomplishes almost the same task. Note that it checks for the year first, then the month, and then the week.

-- Example of Year
DECLARE @Day DATETIME
SET @Day = '2014-12-31'
SELECT
    CASE
        WHEN CAST(@Day AS DATE) = CAST(DATEADD(ms,-3,DATEADD(yy,0,DATEADD(yy,DATEDIFF(yy,0,@Day)+1,0))) AS DATE) THEN 'LastDayofYear'
        WHEN CAST(@Day AS DATE) = CAST(DATEADD(s,-1,DATEADD(mm, DATEDIFF(m,0,@Day)+1,0)) AS DATE) THEN 'LastDayofMonth'
        WHEN CAST(@Day AS DATE) = CAST(DATEADD(s,-1,DATEADD(wk, DATEDIFF(wk,0,@Day),0)) AS DATE) THEN 'LastDayofWeek'
        ELSE 'Day'
    END
GO
-- Example of Month
DECLARE @Day DATETIME
SET @Day = '2014-06-30'
SELECT
    CASE
        WHEN CAST(@Day AS DATE) = CAST(DATEADD(ms,-3,DATEADD(yy,0,DATEADD(yy,DATEDIFF(yy,0,@Day)+1,0))) AS DATE) THEN 'LastDayofYear'
        WHEN CAST(@Day AS DATE) = CAST(DATEADD(s,-1,DATEADD(mm, DATEDIFF(m,0,@Day)+1,0)) AS DATE) THEN 'LastDayofMonth'
        WHEN CAST(@Day AS DATE) = CAST(DATEADD(s,-1,DATEADD(wk, DATEDIFF(wk,0,@Day),0)) AS DATE) THEN 'LastDayofWeek'
        ELSE 'Day'
    END
GO
-- Example of Week
DECLARE @Day DATETIME
SET @Day = '2014-05-04'
SELECT
    CASE
        WHEN CAST(@Day AS DATE) = CAST(DATEADD(ms,-3,DATEADD(yy,0,DATEADD(yy,DATEDIFF(yy,0,@Day)+1,0))) AS DATE) THEN 'LastDayofYear'
        WHEN CAST(@Day AS DATE) = CAST(DATEADD(s,-1,DATEADD(mm, DATEDIFF(m,0,@Day)+1,0)) AS DATE) THEN 'LastDayofMonth'
        WHEN CAST(@Day AS DATE) = CAST(DATEADD(s,-1,DATEADD(wk, DATEDIFF(wk,0,@Day),0)) AS DATE) THEN 'LastDayofWeek'
        ELSE 'Day'
    END
GO
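If you are on SQL Server 2012 or later, the newer EOMONTH and DATEFROMPARTS functions make the year and month checks much shorter. Here is a minimal sketch of the same idea under that version assumption; it keeps the same precedence and, like the script above, treats Sunday as the last day of the week:

-- Example using EOMONTH and DATEFROMPARTS (SQL Server 2012+)
DECLARE @Day DATE = '2014-12-31'
SELECT
    CASE
        WHEN @Day = DATEFROMPARTS(YEAR(@Day), 12, 31) THEN 'LastDayofYear'
        WHEN @Day = EOMONTH(@Day) THEN 'LastDayofMonth'
        WHEN DATEDIFF(dd, 0, @Day) % 7 = 6 THEN 'LastDayofWeek' -- day 0 is a Monday, so a remainder of 6 means Sunday
        ELSE 'Day'
    END
GO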

Let me know if you know a smarter trick, and we can post it here with due credit.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Script to Find First Day of Current Month

Earlier I wrote a blog post, SQL SERVER – Query to Find First and Last Day of Current Month, and it is a very popular post. In that post, I converted the datetime to VARCHAR before using it. However, SQL Expert Michael Usov has made a good point, suggesting that it is not always a good idea to convert datetime to another format, as it is quite possible that we may need the value in datetime format for other operations. He has suggested a very quick solution where we can get the first day of the current month, with or without the time value, and keep the result in the datetime datatype.

Here is the simple script for the same.

-- first day of month
-- with time zeroed out
SELECT CAST(DATEADD(DAY,-DAY(GETDATE())+1, CAST(GETDATE() AS DATE)) AS DATETIME)
-- with time as it was
SELECT DATEADD(DAY,-DAY(GETDATE())+1, CAST(GETDATE() AS DATETIME))

Both statements return the first day of the current month: the first with the time portion zeroed out, the second with the current time preserved.
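As a side note, on SQL Server 2012 and later the EOMONTH function offers an even shorter route to the same value; this is a sketch under that version assumption:

-- EOMONTH(GETDATE(), -1) returns the last day of the previous month,
-- so adding one day gives the first day of the current month (time zeroed out)
SELECT CAST(DATEADD(DAY, 1, EOMONTH(GETDATE(), -1)) AS DATETIME)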

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Finding Out What Changed in a Deleted Database – Notes from the Field #041

[Note from Pinal]: This is the 41st episode of the Notes from the Field series. The real world is full of challenges. When we are reading theory or a book, we sometimes do not realize how the real world works, and that is why we have the Notes from the Field series, which is extremely popular with developers and DBAs. Let us talk about the interesting problem of figuring out what has changed in a DELETED database. Well, you may think I am just throwing words around, but in reality this kind of problem is what makes a DBA's life interesting, and in this blog post we have an amazing story from Brian Kelley on the same subject.

In this episode of the Notes from the Field series, database expert Brian Kelley explains how to find out what has changed in a deleted database. Read the experience of Brian in his own words.


Sometimes, one of the hardest questions to answer is, “What changed?” A similar question is, “Did anything change other than what we expected to change?”

The First Place to Check – Schema Changes History Report:

Pinal has recently written on the Schema Changes History report and its requirement for the Default Trace to be enabled. This is always the first place I look when I am trying to answer these questions.
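As a quick sanity check before relying on the report, you can confirm the Default Trace is enabled with a one-line query against sys.configurations; a minimal sketch:

-- value_in_use = 1 means the Default Trace is capturing events
SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'default trace enabled'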

There are a couple of obvious limitations with the Schema Changes History report. First, while it reports what changed, when it changed, and who changed it, other than the base DDL operation (CREATE, ALTER, DROP), it does not present what the changes actually were. This is not something covered by the default trace. Second, the default trace has a fixed size. When it hits that size, the changes begin to overwrite. As a result, if you wait too long, especially on a busy database server, you may find your changes rolled off.

But the Database Has Been Deleted!

Pinal cited another issue, and that’s the inability to run the Schema Changes History report if the database has been dropped. Thankfully, all is not lost.

One thing to remember is that the Schema Changes History report is ultimately driven by the Default Trace. As you may have guessed, it's a trace like any other database trace, and the Default Trace does write to disk. The trace files are written to the defined LOG directory for that SQL Server instance and have a prefix of log_.

Therefore, you can read the trace files like any other.

Tip: Copy the files to a working directory. Otherwise, you may occasionally receive a file in use error.

With the Default Trace files, if you ask the question early enough, you can see the information for a deleted database just the same as any other database.

Testing with a Deleted Database:

Here’s a short script that will create a database, create a schema, create an object, and then drop the database. Without the database, you can’t do a standard Schema Changes History report.

CREATE DATABASE DeleteMe;
GO
USE DeleteMe;
GO
CREATE SCHEMA Test AUTHORIZATION dbo;
GO
CREATE TABLE Test.Foo (FooID INT);
GO
USE MASTER;
GO
DROP DATABASE DeleteMe;
GO

This sets up the perfect situation where we can’t retrieve the information using the Schema Changes History report but where it’s still available.

Finding the Information:

I’ve sorted the columns so I can see the Event Subclass, the Start Time, the Database Name, the Object Name, and the Object Type at the front, but otherwise, I’m just looking at the trace files using SQL Profiler. The information is definitely there.
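If you prefer T-SQL to Profiler, sys.fn_trace_gettable can read the same files. Here is a minimal sketch; it pulls the current path from sys.traces, so point it at your copied files instead if you followed the tip above:

-- Read the default trace and list object created/altered/deleted events
DECLARE @TracePath NVARCHAR(260)
SELECT @TracePath = path FROM sys.traces WHERE is_default = 1

SELECT t.StartTime,
    te.name AS EventName,
    t.DatabaseName,
    t.ObjectName,
    t.LoginName
FROM sys.fn_trace_gettable(@TracePath, DEFAULT) AS t
INNER JOIN sys.trace_events AS te ON t.EventClass = te.trace_event_id
WHERE t.EventClass IN (46, 47, 164) -- Object:Created, Object:Deleted, Object:Altered
ORDER BY t.StartTime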

Therefore, even in the case of a dropped/deleted database, you can still determine who did what and when. You can even determine who dropped the database (loginame is captured). The key is to get the default trace files in a timely manner in order to extract the information.

If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL Authority News – FalafelCON 2014: 2 days with the Best Developers in the World

I love presenting at various forums on various technologies. I am extremely excited that I got invited to speak at Falafel Conference 2014 in San Francisco, where I will present two technology sessions on SQL Server. If you are into web development, or if you just want to attend a conference with the best of the industry's speakers, this may be the right conference for you. What sets this conference apart from others is the technology presented as well as the speakers. Usually one has to attend a very expensive, large-scale event to hear good speakers. At this conference, you will find quite a few industry legends presenting on bleeding-edge technology.

Here are few of the reasons why I believe you should attend this conference:

  • Choose from four tracks covering Web, Mobile development and testing, Sitefinity, and Automated Testing, or attend sessions from all four!
  • Learn from the best developers and testers in the business in an intimate setting.
  • Surround yourself with your peers and the opportunity to network.
  • Learn about the latest platforms and technologies including Kendo UI, AngularJS, ASP.NET MVC, WebAPI, and more!

Here are the details for the sessions which I am going to present at Falafel Conference.

Secrets of SQL Server: Database Worst Practices
Abstract: Chances are you have heard, or even uttered, this expression. This demo-oriented session will show many examples where database professionals were dumbfounded by their own mistakes, and it could even bring back memories of your own early DBA days. The goal of this session is to expose the small details that can be dangerous to the production environment and SQL Server as a whole, as well as to talk about worst practices and how to avoid them. Shedding light on some of these perils and the tricks to avoid them may even save your current job. After attending this session, developers will only need 60 seconds to improve the performance of their database server in their SharePoint implementation. We will have a quiz during the session to keep the conversation alive. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. Additionally, all attendees of the session will have access to the learning material presented in the session.

The Unsung Hero
Abstract: Slow running queries are the most common problem that developers face while working with SQL Server. While it is easy to blame SQL Server for unsatisfactory performance, the issue often lies with the way queries have been written and how indexes have been set up. The session will focus on ways of identifying problems that slow down SQL Server, and indexing tricks to fix them. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session.

Register Now!

I have learned from the Falafel Team that they are running out of tickets and will soon close registration. For the next 10 days the price of registration is only USD 149. Trust me, you can't get such world-class training and networking opportunities at such a low price. Click to Register Here!

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Integrate Your Data with Skyvia – Cloud ETL Solution

These days data integration often becomes a key aspect of business success. For business analysts it's very important to get integrated data from various sources, such as relational databases, cloud CRMs, etc., to make correct and successful decisions. There are various data integration solutions on the market, and today I will tell you about one of them – Skyvia.

Skyvia is a cloud data integration service which allows integrating data in cloud CRMs and different relational databases. It is a completely online solution and does not require anything except a browser. Skyvia provides powerful ETL tools for data import, export, replication, and synchronization for SQL Server, other databases, and cloud CRMs.

Packages page

You can use Skyvia data import tools to load data from various sources into SQL Server (and SQL Azure). Skyvia supports such cloud CRMs as Salesforce and Microsoft Dynamics CRM, and such databases as MySQL and PostgreSQL. You can even migrate data from SQL Server to SQL Server, or from SQL Server to other databases and cloud CRMs. Additionally, Skyvia supports importing CSV files, either uploaded manually or stored on cloud file storage services, such as Dropbox, Box, Google Drive, or FTP servers.

Azure import

When data import is not enough, Skyvia offers bidirectional data synchronization. With this tool, you can synchronize SQL Server data with other databases and cloud CRMs.

Sync task

After performing the first synchronization, Skyvia tracks data changes in the synchronized data storages. In SQL Server databases (and other relational databases) it creates additional tracking tables and triggers. This allows synchronizing only the changed data.

Skyvia also maps records to each other by their primary key values, so it does not require different sources to have the same primary key structure. It can still match the corresponding records without adding any additional columns or changing the data structure. The only requirement for synchronization is that primary keys must be autogenerated.

With Skyvia it’s not necessary for data to have the same structure in the integrated data storages. Skyvia supports powerful mapping mechanisms that allow synchronizing data with completely different structures. It provides support for complex mathematical and string expressions when mapping data, using lookups, etc.

Relation mapping

You may use data splitting – loading data from a single CSV file or source table into multiple related target tables. Or you may load data from several source CSV files or tables into several related target tables. In each case Skyvia preserves data relations and builds the corresponding relations between the target data automatically.

When you often work with cloud CRM data, native CRM data reporting and analysis tools may not be enough for you, while there is a vast set of professional data analysis and reporting tools available for SQL Server. With Skyvia you can quickly copy your cloud CRM data to a SQL Server database and apply the corresponding SQL Server tools to the data.

In such cases you can use Skyvia data replication tools, which allow you to quickly copy cloud CRM data to SQL Server or other databases without customizing any mapping. You just need to specify the columns to copy data from. Target database tables will be created automatically. Skyvia offers powerful filtering settings to replicate only the records you need.

Replication package

Skyvia also provides the capability to export data from SQL Server (including SQL Azure) and other databases and cloud CRMs to CSV files. These files can either be downloaded manually or loaded to cloud file storages or an FTP server. You can use export, for example, to back up SQL Azure data to Dropbox.

Export package

Any data integration operation can be scheduled for automatic execution. Thus, you can automate your SQL Azure data backup or data synchronization – just configure it once, schedule it, and benefit from automatic data integration with Skyvia.

Defined schedule

Currently, registering for and using Skyvia is completely free, so you can try it yourself and find out whether its data migration and integration tools suit you. Visit this link to register on Skyvia: https://app.skyvia.com/register

Reference: Pinal Dave (http://blog.sqlauthority.com)

Database – Beginning with Cloud Database As A Service

I love my weekend projects. Everybody does different activities on their weekend – like traveling, reading, or just nothing. Every weekend I try to do something creative and different in the database world. The goal is to learn something new, and if I enjoy my learning experience, I share it with the world. This weekend, I decided to explore a Cloud Database As A Service – Morpheus. In my career I have managed many databases in the cloud, and I have good experience in managing them.

I should highlight that today's applications use multiple databases, from SQL for transactions and analytics, NoSQL for documents, and in-memory for caching, to indexing for search. Provisioning and deploying these databases often requires extensive expertise and time. Often these databases are also not deployed on the same infrastructure, which can create unnecessary latency between the application layer and the databases. Not to mention the different quality of service depending on the infrastructure and the service provider where they are deployed.

Moreover, there are additional problems that I have experienced with a traditional database setup when hosted in the cloud:

  • Database provisioning & orchestration
  • Slow speed due to hardware issues
  • Poor Monitoring Tools
  • High network latency

Now, if you have great software and an expert network engineer, you can continuously work on the above problems and overcome them. However, not every organization has the luxury of having top-notch experts in the field. The above issues are related to infrastructure, but there are a few more problems which are related to the software/application side as well.

Here are the top three things which can be problems if you do not have an application expert:

  • Replication and Clustering
  • Simple provisioning of the hard drive space
  • Automatic Sharding

Well, Morpheus looks like a product built by experts who have faced similar situations in the past. The product pretty much addresses all the pain points of developers and database administrators.

What is different about Morpheus is that it offers a variety of databases, from MySQL and MongoDB to Elasticsearch and Redis, as a service. Thus users can pick and choose any combination of these databases. All of them can be provisioned in a matter of minutes with a simple and intuitive point-and-click user interface. The Morpheus cloud is built on Solid State Drives (SSD) and is designed for high-speed database transactions. In addition, it offers a direct link to Amazon Web Services to minimize latency between the application layer and the databases.

Here are a few steps on how one can get started with Morpheus. Follow along with me. First go to http://www.gomorpheus.com and register for a new, free account.

Step 1: Signup

It is very simple to sign up for Morpheus.

Step 2: Select your database

 

I use MySQL for my daily routine, so I selected MySQL. Upon clicking the big red button to add an instance, it prompted a dialog for creating a new instance.

 

Step 3: Create User

Now we just have to create a user in our portal, which we will use to connect to a database hosted at Morpheus. Click on your database instance and it will bring you to the User screen. Over here you will notice, once again, a big red button to create a new user. I created a user with my first name.

 

Step 4: Configure your MySQL client

I used MySQL Workbench and connected to the MySQL instance I had created, using its IP address and the user.

 

That’s it! You are connected to the MySQL instance. Now you can create your objects just like you would on your local box, and you will have all the features of Morpheus while working with your database.
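As a quick smoke test after connecting, any ordinary MySQL statements work; here is a tiny sketch with hypothetical object names:

-- Hypothetical database and table names, purely for illustration
CREATE DATABASE IF NOT EXISTS demo;
USE demo;
CREATE TABLE greetings (id INT PRIMARY KEY, note VARCHAR(50));
INSERT INTO greetings VALUES (1, 'hello from Morpheus');
SELECT * FROM greetings;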

Dashboard

While working with Morpheus, I was most impressed with its dashboard. In future blog posts, I will write more about this feature. Also, with Morpheus you use the same process for provisioning and connecting to the other databases: MongoDB, Elasticsearch, and Redis.

Reference: Pinal Dave (http://blog.sqlauthority.com)