SQL SERVER – Beginning Table Valued Constructors – Notes from the Field #052

[Note from Pinal]: This is the 52nd episode of the Notes from the Field series. I am very happy that the journey we started one year ago has reached such an amazing milestone. Over these 52 episodes in the past year we have learned a lot of new things from the industry experts of LinchPin People. They are an amazing set of people who know what they are doing in the field and in the real world. I have received so many notes from blog readers telling me they have benefited from the experience shared by the LinchPin team.

In this episode of the Notes from the Field series, database expert Kathi Kellenberger explains Table Valued Constructors. Kathi is an amazing instructor; she was one of the SQL authors I read early in my career, and her writing instilled my love for this technology. Today she brings up a topic that is relatively unknown even to database experts. Read the experience of Kathi in her own words.


Table Valued Constructors, also called Row Constructors, were introduced with SQL Server 2008, but many SQL Server professionals haven’t heard about them. They make it easy to insert multiple rows of hard-coded values into a table with just one insert statement.  NOTE: In this post, I’ll abbreviate Table Valued Constructors as TVCs.

I’ll often see people using older techniques to populate sample tables when asking for T-SQL help on the forums. The following example demonstrates two techniques.

CREATE TABLE #test(Col1 INT, Col2 VARCHAR(10));
--Method 1, using UNION
INSERT INTO #test(Col1, Col2)
SELECT 1,'a'
UNION ALL
SELECT 2,'b'
UNION ALL
SELECT 3,'c';

--Method 2, multiple select statements
INSERT INTO #test(Col1, Col2)
SELECT 4, 'd';
INSERT INTO #test(Col1, Col2)
VALUES( 5, 'e');

The first method takes advantage of the UNION ALL operator to combine three SELECT queries into one set of results.  The second example uses a separate INSERT statement for each row to be inserted. One statement uses a SELECT with hard-coded values, while the second uses the VALUES keyword.

Beginning with the 2008 version, you can specify multiple rows with the VALUES syntax, which is now called a TVC. Each set of values must be within parentheses, and the sets must be separated by commas. Example 2 demonstrates this technique.

CREATE TABLE #testTVC(Col1 INT, Col2 VARCHAR(10));
INSERT INTO #testTVC(Col1, Col2)
VALUES(1,'A'),(2,'B'),(3,'C'),(4,'D'),(5,'E');

The big advantage of TVCs is that you save typing. Of course, looking cool on the forums is a bonus.

While using the TVC to insert multiple rows with one insert statement is pretty fantastic, there are some other interesting uses. You can create a set of results using a TVC within a derived table. Take a look at this example:

SELECT *
FROM (VALUES ('January'),('February'),('March'),
             ('April'),('May'),('June'),
             ('July'),('August'),('September'),
             ('October'),('November'),('December')
     ) AS Months ([Month]);

In the months example above, my TVC is part of the FROM clause. Notice that I must alias the TVC just like a derived table. I also have to provide column names right after the alias.
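The column list after the alias works the same way when the TVC returns more than one column: supply one name per column, in order. Here is a small sketch (the values are made up for illustration):

--Each row of the TVC supplies two values; both columns are named after the alias
SELECT M.MonthNumber, M.[Month]
FROM (VALUES (1, 'January'),
             (2, 'February'),
             (3, 'March')
     ) AS M (MonthNumber, [Month])
ORDER BY M.MonthNumber;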

Another interesting use of TVCs involves the APPLY operator. A feature of APPLY is that columns from the outer query are visible inside the APPLY. Not only can you have hard-coded values as in the previous example, but you can also use columns from the outer query. The following example demonstrates this technique.

SELECT SOD.SalesOrderID, Dates.*
FROM Sales.SalesOrderHeader AS SOD
CROSS APPLY (VALUES('Order Date', OrderDate),
                   ('Due Date', DueDate),
                   ('Ship Date', ShipDate)) AS Dates(TypeOfDate, TheDate);

In this example, using the AdventureWorks database, each row from the SalesOrderHeader table is returned three times, once for each kind of date. Instead of displaying each date in its own column, this example “unpivots” the dates.
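To see how much typing the TVC saves, here is a sketch of how the same result might be written without CROSS APPLY, spelling out a separate SELECT for each kind of date and combining them with UNION ALL:

--Pre-TVC style: one SELECT per date column, glued together with UNION ALL
SELECT SalesOrderID, 'Order Date' AS TypeOfDate, OrderDate AS TheDate
FROM Sales.SalesOrderHeader
UNION ALL
SELECT SalesOrderID, 'Due Date', DueDate
FROM Sales.SalesOrderHeader
UNION ALL
SELECT SalesOrderID, 'Ship Date', ShipDate
FROM Sales.SalesOrderHeader;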

Table Valued Constructors can save you some typing and make your code look more organized. And, as you have seen here, they are also capable of some neat little tricks.

If you want to get started with BIML with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – The Why of BIML – Notes from the Field #051

[Note from Pinal]: This is the 51st episode of the Notes from the Field series. In one of the earlier blog posts we discussed BIML, and lots of people asked what BIML actually is and why one should use it. If you have been reading this blog, you know that when it comes to BI, I always take help from LinchPin People, who are BI experts. I requested Reeves from LinchPin People to help me answer this unique question.

In this episode of the Notes from the Field series, database expert Reeves Smith explains the WHY of BIML. Read the experience of Reeves in his own words.


The Why of Biml

If you question whether Biml can be useful, you are not alone. After I introduce people to Biml, someone always asks, “How can writing something in an XML-based language actually help me out?” I asked this same question when I first started working with Biml. If I’m going to spend the time learning something, it had better improve my development experience or bring something else of value to my tool belt. This article will show you how Biml can save you time creating packages.

Overview

As a quick refresher, Biml is an XML language used to describe Business Intelligence (BI) projects. This discussion on Biml will work within the Visual Studio (BIDS/SSDT) environment with BIDS Helper. Using Biml with BIDS and the BIDS Helper add-in allows you to create Integration Services (SSIS) projects. (Note that Biml is compatible with SQL Server Analysis Services projects but not within the BIDS Helper add-in.)

Be aware that the Biml language does not support formatting and layout. The language is built to be simple and effective with an intended design to script and automate.

Example – Source Packages

Why would you need to automate SSIS if all of your packages are different? If you look at most of the packages within your enterprise, some sections of code or complete packages are very similar in design, the only changes being metadata.

Source packages are a typical example of packages that are similar. With most source packages, you do a complete pull of all of the data needed within the tables in the source systems and load that data directly into tables in a stage environment with a similar shape. Most of the time, these packages are very basic and perform a limited number of transformations. Figure 1 illustrates a common source package.

Figure 1 – Common Source Package

If you look into the details of the above package, you will notice that the metadata from package to package is not very different. The differences within the metadata of the above example would include:

SQLT – Truncate Destination Table (Execute SQL Task)

  • SQLStatement: TRUNCATE TABLE TableName

DAFL – Load Destination Table (Data Flow Task)

  • OLE DB Source: Name of the table or view: TableName
  • OLE DB Destination: Name of the table or view: TableName

(Note that these packages are not following the recommended best practices.)

Listing 1 shows what the above example would look like in Biml.

Listing 1 – Common Source Package Biml

Note that the connection strings were truncated to enable the best display for the graphic. A valid connection string with my server settings would be:

provider=SQLOLEDB.1;data source=localhost;database=AdventureWorks2012;integrated security=SSPI

So now that you’ve seen the script needed for a common source package, you may still be asking the original question: Why would you write the above Biml to create five packages?

The best answer might be a chart showing the difference between creating packages manually and creating packages with Biml and then automating them with BimlScript. The chart in Figure 2 assumes it takes about one business day to create one package: one package takes one day, two packages take two days, and so on. With a Biml and BimlScript solution, you spend extra development time upfront creating the Biml code and then adding the BimlScript needed for automation. That upfront investment means projects with only a small number of packages are not ideal candidates.

Figure 2 – Package Development Methods

After you create a Biml file, the next step would be to add BimlScript needed for automation. BimlScript is added directly to the Biml file and is able to replace items within the XML.

Practical Use

BimlScript uses C# or Visual Basic as the scripting language. If automation with C# or Visual Basic sounds like another barrier to starting a Biml project, I have a suggestion that will let you use Biml on your next project anyway. Having to learn both Biml and BimlScript can be an overwhelming requirement, so what if you could defer almost all of the C# and just focus on Biml? The following example demonstrates a method to do that.

If you take the Biml code shown in Listing 1 and add the line of code in Listing 2 to the <Biml> declaration in that script, you can create a variable to use later in your Biml file. Let’s not worry about anything except that this code creates a variable named tableName, which you’ll use later in the file.

Listing 3 – BimlScript used to create a Variable

This line alone will not accomplish anything if you do not use it within the Biml file. A good test to make sure you declared the BimlScript correctly is to add the code and run Check Biml for Errors. Regenerating a package should still work, and nothing will change.

However, if you add one more snippet of code in four places within the Biml file, you can replace hard-coded values with the variable declared in Listing 3. Three of the places are marked by the three red outlines above, and the fourth is within the package name, to create uniquely named SSIS packages.

Listing 4 – BimlScript that will use the Variable

The final code is shown below in Listing 5.

Listing 5 – Semi-Automated Common Source Package Biml

If you need to create another package, just update the variable and regenerate the package. This will enable you to create multiple packages within minutes. It is not a perfect solution, but it is a great technique to use on the way to learning Biml.

Note that there are cut-and-paste issues within Visual Studio that can result in extra quotation marks; these are outlined in Listing 6 below. Removing the extra quotes is all you need to do, and you can also modify the Visual Studio development environment to alleviate this issue. That is discussed here.

Listing 6 – BimlScript Cut-and-Paste Issue

Looking Ahead

This article has touched on a lot, and I hope I’ve given you a reason to consider Biml on your next project. The article used BimlScript, and scripting is the place to focus after you learn Biml. Biml with BimlScript can provide huge savings in development effort once you’ve learned them.

In a future article I will demonstrate how to add more BimlScript to the existing code to increase package automation even further.

Stay Tuned.

Reeves Smith

If you want to get started with BIML with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – ​Tuning Queries is Sometimes Waste of Time – Notes from the Field #049

[Note from Pinal]: This is the 49th episode of the Notes from the Field series. Every day I get a few emails asking how to tune queries so the entire server runs faster. I always end up writing a long answer to this question. The reason is simple – query tuning is indeed the need of the hour, but it is not everything. There are many more things one should do along with tuning queries. I asked the same question to Brian Moran, who deals every day with consultants and various organizations facing issues with SQL Server.

In this episode of the Notes from the Field series, database expert Brian Moran explains how query tuning is sometimes a waste of time when we are facing issues with performance and scalability. Read the experience of Brian in his own words.


Tuning queries is sometimes a waste of time when you’re trying to fix performance and scalability problems. That sounds crazy but it’s true. Designing an efficient workload is almost always more important than maintaining a laser focus on query tuning and other technical tricks of the DBA trade.

Here’s an example that helps to explain what I mean.

Simple Example

What’s the shortest distance between two points? A straight line, of course. Everyone knows that. Imagine that you need to calculate the most efficient path from Washington DC to New York. Imagine those cities on a traditional 3D globe. The straight line is simple to figure out, right? Just head north as the crow flies.

But what if you wander about and take a curvy path as you head north? DC to New York is about 204 miles as the crow flies, but Google says the distance is 226 miles if you take I95 and the New Jersey Turnpike. Google also presents some reasonable travel options that are as far as 264 miles. It’s easy to imagine you could increase the distance even more by taking scenic roads and side trips along the way. Of course, you can also draw a straight line on a globe between DC and New York that heads in a southerly direction, which is over 24,000 miles. That’s over 100 times more expensive than the straight route northwards.

Now, let me map that mapping example back to database performance tuning. Sometimes database performance tuning is an exercise in making the workload more efficient, just as you might want to make your route to New York more efficient. To make the database workload more efficient, you can tune queries, add indexes, and do all sorts of other neat tricks. Query tuning is like an exercise in flattening the northerly path from DC to New York to make as straight a line as possible. You want to tune your queries so that they’re as direct as possible and don’t veer off into unnecessary detours.

But, what if you’re talking about trying to flatten a squiggly line from DC to New York that starts out by heading south 180 degrees, the wrong way?  You can make that line as straight and as efficient as you want.  But heading south from DC to get to New York is never going to be efficient no matter how straight of a line you draw. A route that goes the wrong way can be optimized, but it’s still going the wrong way.

Inefficient Workload

The same idea applies to queries. If you start out tuning an inefficient workload, you’ll end up with an inefficient workload that is tuned, but it is still inefficient.

This seems like a silly example. Everyone intuitively knows that the southerly route is inherently a poor choice. You can tune the route as much as you want, but in the best-case scenario it’s still going to be over 100X worse than going north.

Oddly enough, 25 years of database consulting tells me many DBAs will spend most of their time tuning queries that are the database equivalent of trying to make that southerly line from DC to New York as straight as possible. They’ll spend days, weeks, or months, trying to shave that last 10%.

All too often, DBAs fail to take a moment to pause, understand the nature of the problem they are trying to solve, and try to envision a more efficient way to solve the problem.  These DBAs don’t recognize that changing the workload can be vastly better than tuning the workload.

Two Classic Examples

Here are two classic examples that most DBAs intuitively understand. Say you have a business need defined in a way that requires a drop-down list box populated with 100,000 rows that users will have to scroll through. You can try to tune the process to make it more efficient at loading 100,000 rows every time the screen is touched, but that tuning does not change the workload.

Rather than simply tuning the existing workload, a better investment of time would be to help the designers of that screen understand and accept a solution that doesn’t require loading 100,000 rows. Or perhaps you have a system that makes heavy use of database cursors in some procedural manner. Most DBAs know that architecting a set-based solution will almost always be better than trying to make the cursor-based approach as fast as possible.
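As a rough sketch of that second case (the table and column names below are illustrative, borrowed from AdventureWorks), compare a cursor that issues one statement per row with the single set-based statement that replaces it:

--Row-by-row: one UPDATE per product fetched from the cursor
DECLARE @ProductID INT;
DECLARE PriceCursor CURSOR FOR
SELECT ProductID FROM Production.Product WHERE ListPrice > 0;
OPEN PriceCursor;
FETCH NEXT FROM PriceCursor INTO @ProductID;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE Production.Product
    SET ListPrice = ListPrice * 1.05
    WHERE ProductID = @ProductID;
    FETCH NEXT FROM PriceCursor INTO @ProductID;
END;
CLOSE PriceCursor;
DEALLOCATE PriceCursor;

--Set-based: the same change as a single statement
UPDATE Production.Product
SET ListPrice = ListPrice * 1.05
WHERE ListPrice > 0;

Tuning the cursor loop might shave some time off each iteration, but changing the workload removes the loop entirely.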

Here’s a 15-year-old example of a design pattern mistake that is still common today. I was working on an office supply site when e-commerce was first becoming big business. The application was built in a very object-oriented manner. Developers love object-oriented approaches, but databases aren’t always as fond of them. In this case, the object-oriented design pattern led to 1,000 and sometimes 2,000 or more round trips to SQL Server when a single user searched for office chairs, and the search brought back just one screen of results.

This was a problem because the system needed to support hundreds or thousands of concurrent users. The existing workload might therefore need to handle hundreds of thousands or millions of batch requests per second. The client wanted to do this on a single SQL Server instance since that’s all their budget could afford.

Summary

I could have spent an infinite amount of time tuning the queries used in this approach, but the workload was never going to be efficient. I was never going to be able to handle hundreds of thousands of batch requests per second on a single instance of SQL Server 2000. Instead, I made some changes to the architecture. In other words, I changed the workload, and I was able to achieve the performance goals I had.

The examples I used here are pretty basic, and most DBAs today are aware of how to deal with such situations. But I’m willing to bet that many people reading this post are struggling with a performance problem where tuning the queries is a poor use of time compared to completely changing the essence of the workload. It’s too easy to lose sight of the forest for the trees.

Having a hard time figuring out how to tune your queries to achieve performance goals? Maybe you should take a step back and focus on changes you can make to the workload instead?

If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – ​Building Technical Reference Library – Notes from the Field #048

[Note from Pinal]: This is the 48th episode of the Notes from the Field series. How do you build a technical reference library? In other words, when you need help, how do you create your own reference so you do not have to go out looking for it again? There are so many little tips and tricks one should know, and Brian Kelley has an amazing ability to explain this simple concept in easy words.

In this episode of the Notes from the Field series, database expert Brian Kelley explains how to build your own technical reference library. Read the experience of Brian in his own words.


Do you have a technical reference library? If you’re not sure what I mean, a technical reference library is your collection of notes, code, configuration options, bugs you’ve hit that you think you’ll hit again, and anything else that you might need to retrieve again in the future related to what you do in IT. If you have a technical reference library (hereafter referred to as TRL), is it:

  • outside of email?
  • distributed across multiple locations/computers?
  • searchable?
  • fast?

With my TRL, I’m more efficient because I‘m not searching the Internet again and again for the same information. I also can ensure I handle strange cases, such as unusual configurations, which we seem to get a lot of in IT. It’s in my TRL, so I don’t have to go back through a vendor’s install document or go run someone down in the organization to get the information I need. I already have it if I put it in my TRL. Because of the efficiency that TRLs provide, most top performing IT professionals that I know have some sort of system.

Outside of Email:

I used to have a folder in email where I kept technical reference documents. Because I try to follow Inbox Zero, I do have a Reference folder, but it’s not for technical documents. My Reference folder is typically related to what that mailbox is for. For instance, my LP Reference folder is for keeping procedures related to Linchpin such as how/where to enter time, who to contact about various things, etc.

Why don’t I have my technical documents in email any longer? Let me ask a question in response to that question: What happens when email is down? When email is down, you have no access to your TRL. Email does go down. I was faced with a case where I was responsible for getting email back up and, you guessed it, my technical notes were in email. That doesn’t work.

A second question to ask: How searchable is your TRL if it’s in email?  If you keep a lot of email, especially if you don’t have a specific folder for your TRL, searching may prove to be painful. That was the other problem I started to face.

Given these two issues, I advise building your TRL outside of email.

Distributed:

If your TRL is only on a single computer, you’re going to regret it someday. That day usually arrives when the computer in question crashes and all your notes are lost. If you have a backup, anything you put into the library after the backup is still gone. Given the prevalence of cloud-based solutions nowadays, having a technical reference library which is distributed is easy. Here are some ideas:

  • Evernote
  • Microsoft OneNote
  • Microsoft SkyDrive
  • DropBox
  • Google Docs
  • Apple iCloud

I’m partial to the first two, Evernote and OneNote, because they aren’t simply “file systems.” They are designed to capture and catalog information for quick retrieval later.

All my examples will come from Evernote, because that’s the application I typically use. In fact, here’s my setup. I have a specific notebook for my TRL:

TRL Notebook

If I know exactly what I’m looking for or if I’ve added it recently, I should be able to find any note quickly in the list of notes for the notebook:

Note: SQL 2012 Slipstream

Searchable (and Fast!):

Even if what I’m looking for isn’t right there at the top of the list, I can search in Evernote (and OneNote, if I was using it) to quickly locate the document. For instance, by typing “Slipstream,” I quickly get to the article that I want:

Search of TRL

Products like Evernote and OneNote have specifically worked on search in order to retrieve results quickly. They also provide options to search within a single notebook, for instance. In my case, since slipstream is such a specialized term compared with what else is in my Evernote notebooks, I didn’t feel the need to filter by notebook. However, I could have if I received a lot of hits back or if the search was taking too long.

Also note that I’ve not added any tags to this article; I’m finding it with a text search of the contents alone. The use of tags offers another way to locate the material I need quickly. If I had a lot of articles that came up for a particular search word or phrase, I could use tags to differentiate them better. It’s another reason to consider tools designed to hold information and make it quickly retrievable.

Build a System That Works for You:

Learning expert Cynthia Tobias was once helping a teacher who asked her students to keep a reference notebook for assignments and handouts in class, an academic version of the TRL I’ve described thus far. The teacher balked at one student’s notebook because it was messy. The teacher couldn’t imagine how the student could locate anything in the notebook and was going to give the student a poor score. Tobias asked the teacher, “What’s the point?” The point, the teacher indicated, was to be able to retrieve an assignment or handout quickly. Tobias challenged the teacher to check to see if the student could retrieve quickly (within a minute, for instance). If the student could, the teacher should leave the student alone. If the student couldn’t, then work with the student to improve the reference system.

That’s what you want to do. You want to develop a reference system that’s efficient for you. I’ve given you a snapshot of what works for me. It may not work for you. That’s okay. Start with something. If you’re starting from scratch, I would recommend starting with Evernote or OneNote. Put some notes in that you’ll need again. See how well you can retrieve those notes, especially as the number of notes increases. Make tweaks as you have to for performance sake. Most of all, build your TRL and become a better professional.

If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – A Practical Use of Backup Encryption

Backup is extremely important for any DBA. Think of any disaster, and a backup will come to the rescue in that adverse situation. Similarly, it is critical that we keep our backups safe as well. If your backup falls into the wrong hands, it is quite possible that it will be misused and become a serious data security issue. In this blog post we will look at a practical scenario showing how we can use Backup Encryption to improve the security of a backup.

Feature description

Database Backup Encryption is a brand new and long expected feature that is available now in SQL Server 2014. You can create an encrypted backup file by specifying the encryption algorithm and the encryptor (either a Certificate or Asymmetric Key).

The ability to protect a backup file with a password has existed for many years. If you have used SQL Server for a long time, you might remember the WITH PASSWORD option of the BACKUP command. The option prevented unauthorized access to the backup file.

However, this approach did not provide reliable protection. In that regard, Greg Robidoux noted on MSSQLTIPS: “Although this does add a level of security if someone really wants to crack the passwords they will find a way, so look for additional ways to secure your data.”

To protect a backup file, SQL Server 2008 introduced the transparent data encryption (TDE) feature; thus, a database had to be transparently encrypted before backup. Starting with SQL Server 2012, the PASSWORD and MEDIAPASSWORD parameters are no longer used when creating backups. Even so, data encryption and backup file encryption are two different scenarios.

If a database is only stored locally, there is no need to encrypt it before backup. Fortunately, in SQL Server 2014 these are two independent processes: along with data encryption, it is possible to encrypt a backup file based on a certificate or an asymmetric key. The supported encryption algorithms are:

  • AES 128
  • AES 192
  • AES 256
  • Triple DES

Practical use

To illustrate the above, I will create an encrypted backup of the AdventureWorks database. You can also back up directly to Azure and, if needed, restore the encrypted backup file on Azure.

I will use dbForge Studio for SQL Server to create the encrypted backup file.

To protect the backup file we need to create an encryptor: either a Certificate or Asymmetric Key. Then, we need to pass this encryptor to the target SQL Server to restore the backup. For this, the encryptor must be exported from the source SQL Server and imported to the target SQL Server. There are no problems with the certificate in this regard. It is more complicated with asymmetric keys.

Because the BACKUP ASYMMETRIC KEY command is not available, and we cannot simply create a duplicate of an asymmetric key (unlike a symmetric key), the only approach is to create the asymmetric key outside SQL Server. We can use the sn.exe utility to generate a key file and then bring it into SQL Server (CREATE ASYMMETRIC KEY keyname FROM FILE = 'filename.snk'). After that we can use this asymmetric key to encrypt the backup file on the source instance. We then need to use the same *.snk file to create the asymmetric key on the target instance (and restore the backup file there).

In our example we will not use asymmetric keys; we will use a certificate. Behind the scenes, a certificate is itself a public/private key pair.

Let’s create the server certificate and use it to encrypt the backup file.
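For readers who prefer plain T-SQL to the dbForge wizard, here is a minimal sketch of the equivalent script (the certificate name, password, and file path are illustrative assumptions, not part of the original demo):

USE master;
GO
--A database master key in master is needed to protect the certificate
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ngMasterKeyPassword!';
GO
--No ENCRYPTION BY PASSWORD here, so the certificate is protected by the master key
CREATE CERTIFICATE BackupEncryptCert
WITH SUBJECT = 'Certificate for backup encryption';
GO
--Encrypt the backup with AES 256 using the certificate created above
BACKUP DATABASE AdventureWorks2012
TO DISK = 'D:\Backup\AdventureWorks2012_Encrypted.bak'
WITH ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupEncryptCert),
COMPRESSION, STATS = 10;
GO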

The certificate will be protected with the database master key, because we didn’t specify the ENCRYPTION BY PASSWORD option.

This is exactly what we need. Only certificates protected by the database master key can be used for backup encryption. If, for instance, we protect the certificate with a password (ENCRYPTION BY PASSWORD = 'strongpassword'), the following error appears while attempting to encrypt the backup file:

“Cannot use certificate ‘CertName’, because its private key is not present or it is not protected by the database master key.”
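For completeness, this is the kind of certificate definition that triggers the error above, since it ties the private key to a password rather than to the master key (the name is illustrative):

--Protected by a password, not by the database master key, so backup encryption rejects it
CREATE CERTIFICATE BackupCertByPassword
ENCRYPTION BY PASSWORD = 'strongpassword'
WITH SUBJECT = 'Password-protected certificate';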

Encrypted backups (like regular backups) can be created either locally on the hard drive or in Azure Storage.

Instead of writing tons of SQL code, I will use the convenient dbForge Studio for SQL Server Back Up wizard. The wizard allows you to create the database backup in a few clicks.

Step 1: Set up the DB connection and the backup file location.

Step 2: Set up the media set.

Step 3: Select the encryption algorithm and certificate.

If you don’t want to deal with transferring the backup file to Windows Azure separately, you can back up directly to Azure.

After the script executes, the blob (with the backup) appears in the specified container.

If you have already created a backup with the same name in the same container, you may get the following error: “There is currently a lease on the blob and no lease ID was specified in the request.”

You can then restore the backup file on Windows Azure.

Summary: 

Obviously, it is good practice to encrypt a backup file that will be transferred. This, for instance, helps avoid a data leak while moving backups from one data center to another.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – ​Finding Out What Changed in a Deleted Database – Notes from the Field #041

[Note from Pinal]: This is the 41st episode of the Notes from the Field series. The real world is full of challenges. When we are reading theory or a book, we sometimes do not realize how the real world actually works, and that is why we have the Notes from the Field series, which is extremely popular with developers and DBAs. Let us talk about the interesting problem of how to figure out what has changed in a DELETED database. You may think I am just throwing words around, but in reality this kind of problem keeps a DBA’s life interesting, and in this blog post we have an amazing story from Brian Kelley on the same subject.

In this episode of the Notes from the Field series, database expert Brian Kelley explains how to find out what has changed in a deleted database. Read the experience of Brian in his own words.


Sometimes, one of the hardest questions to answer is, “What changed?” A similar question is, “Did anything change other than what we expected to change?”

The First Place to Check – Schema Changes History Report:

Pinal has recently written on the Schema Changes History report and its requirement for the Default Trace to be enabled. This is always the first place I look when I am trying to answer these questions.

There are a couple of obvious limitations with the Schema Changes History report. First, while it reports what changed, when it changed, and who changed it, other than the base DDL operation (CREATE, ALTER, DELETE), it does not present what the changes actually were. This is not something covered by the default trace. Second, the default trace has a fixed size. When it hits that size, the changes begin to overwrite. As a result, if you wait too long, especially on a busy database server, you may find your changes rolled off.

But the Database Has Been Deleted!

Pinal cited another issue, and that’s the inability to run the Schema Changes History report if the database has been dropped. Thankfully, all is not lost.

One thing to remember is that the Schema Changes History report is ultimately driven by the Default Trace. As you may have guessed, it’s a trace, like any other database trace. And the Default Trace does write to disk. The trace files are written to the defined LOG directory for that SQL Server instance and have a prefix of log_:
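If you would rather confirm the location with T-SQL than browse the file system, a quick query against sys.traces returns the path of the default trace (a sketch; the is_default column flags it):

--The default trace is flagged by is_default; path shows the current trace file on disk
SELECT id, path, max_size, max_files
FROM sys.traces
WHERE is_default = 1;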

Therefore, you can read the trace files like any other.

Tip: Copy the files to a working directory. Otherwise, you may occasionally receive a file in use error.

With the Default Trace files, if you ask the question early enough, you can see the information for a deleted database just the same as any other database.

Testing with a Deleted Database:

Here’s a short script that will create a database, create a schema, create an object, and then drop the database. Without the database, you can’t do a standard Schema Changes History report.

CREATE DATABASE DeleteMe;
GO
USE DeleteMe;
GO
CREATE SCHEMA Test AUTHORIZATION dbo;
GO
CREATE TABLE Test.Foo (FooID INT);
GO
USE MASTER;
GO
DROP DATABASE DeleteMe;
GO

This sets up the perfect situation where we can’t retrieve the information using the Schema Changes History report but where it’s still available.

Finding the Information:

I’ve sorted the columns so I can see the Event Subclass, the Start Time, the Database Name, the Object Name, and the Object Type at the front, but otherwise, I’m just looking at the trace files using SQL Profiler. As you can see, the information is definitely there:
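Profiler is one way to look at the files; if you prefer T-SQL, sys.fn_trace_gettable reads them just as well. Here is a sketch, assuming the trace file has been copied to a working folder (the file path is illustrative):

--DEFAULT as the second argument also reads any rollover files in the sequence
SELECT te.name AS EventName,
       t.EventSubClass,
       t.StartTime,
       t.DatabaseName,
       t.ObjectName,
       t.ObjectType,
       t.LoginName
FROM sys.fn_trace_gettable('C:\Working\log_100.trc', DEFAULT) AS t
JOIN sys.trace_events AS te
  ON te.trace_event_id = t.EventClass
WHERE t.DatabaseName = 'DeleteMe'
ORDER BY t.StartTime;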

Therefore, even in the case of a dropped/deleted database, you can still determine who did what and when. You can even determine who dropped the database (the login name is captured). The key is to get the default trace files in a timely manner in order to extract the information.

If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Difference Between Login and User – SQL in Sixty Seconds #070

“What is the difference between Login and User in SQL Server?”

This is a very common question I receive. Yesterday, when I was browsing Facebook, I noticed this question asked once again in the SQLBangalore group. My very good friends Vinod Kumar and Balmukund Lakhani had already answered it there. However, every time I read this question, I realize that not everyone knows the basics of these two concepts. Explaining the difference between them in depth could take a long time, but I will try to explain it at a very basic level.

Basic Explanation:

A SQL Server Login is for authentication, and a SQL Server User is for authorization. Authentication decides whether we can connect to the server at all, and authorization decides which operations we can perform in a database. Logins are created at the SQL Server instance level, and Users are created at the database level. Multiple users from different databases can be mapped to a single login on a server.
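A minimal T-SQL sketch shows the two levels (the login, user, and database names below are placeholders):

--Login: created at the instance level; handles authentication
CREATE LOGIN AppLogin WITH PASSWORD = 'Str0ngPassword!';
GO
--User: created inside a specific database and mapped to the login; handles authorization
USE SampleDB;
GO
CREATE USER AppUser FOR LOGIN AppLogin;
GO
--Authorization in action: grant the user specific permissions within the database
GRANT SELECT ON SCHEMA::dbo TO AppUser;
GO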

I hope this is clear enough. If not, I strongly suggest you watch the following quick video where I explain this concept in extremely simple words.

Action Item

Here are the blog posts I have previously written on the subject of security. You can read them over here:

You can subscribe to my YouTube Channel for frequent updates.

Reference: Pinal Dave (http://blog.sqlauthority.com)