SQL SERVER – Finding Last Backup Time for All Database – Last Full, Differential and Log Backup – Optimized

Four years ago, I wrote a blog post SQL SERVER – Finding Last Backup Time for All Database. It has been a very popular script. SQL Expert Sravani has posted a fantastic script which also displays the last full, differential, and log backup for each database – I have blogged about it here: SQL SERVER – Finding Last Backup Time for All Database – Last Full, Differential and Log Backup.

Well, in technology there is always room for improvement, and we should always be ready to learn and progress. SQL Expert TheSQLGuru came up with a further optimized script which lists all the details the earlier blog post listed, and the script is very quick to execute. Here is the script.

SELECT d.name AS 'DATABASE_Name',
       MAX(CASE WHEN bu.TYPE = 'D' THEN bu.LastBackupDate END) AS 'Full DB Backup Status',
       MAX(CASE WHEN bu.TYPE = 'I' THEN bu.LastBackupDate END) AS 'Differential DB Backup Status',
       MAX(CASE WHEN bu.TYPE = 'L' THEN bu.LastBackupDate END) AS 'Transaction DB Backup Status',
       CASE d.recovery_model
            WHEN 1 THEN 'Full'
            WHEN 2 THEN 'Bulk Logged'
            WHEN 3 THEN 'Simple'
       END AS RecoveryModel
FROM master.sys.databases d
LEFT OUTER JOIN (SELECT database_name, TYPE, MAX(backup_start_date) AS LastBackupDate
                 FROM msdb.dbo.backupset
                 GROUP BY database_name, TYPE) AS bu ON d.name = bu.database_name
GROUP BY d.name, d.recovery_model
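
If you want to act on the output, here is a minimal sketch (my addition, not part of TheSQLGuru's script) that flags databases whose most recent full backup is missing or older than seven days; the seven-day threshold is an assumed example value, so adjust it to your own backup policy.

-- Hedged sketch: databases with no full backup in the last 7 days.
-- The 7-day threshold is an assumed example value.
SELECT d.name AS database_name,
       MAX(bs.backup_start_date) AS last_full_backup
FROM master.sys.databases d
LEFT OUTER JOIN msdb.dbo.backupset bs
       ON d.name = bs.database_name
      AND bs.type = 'D' -- 'D' = full database backup
WHERE d.name <> 'tempdb' -- tempdb cannot be backed up
GROUP BY d.name
HAVING MAX(bs.backup_start_date) IS NULL
    OR MAX(bs.backup_start_date) < DATEADD(DAY, -7, GETDATE());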

Thanks TheSQLGuru for the excellent script; you win a USD 20 Amazon Gift Card or an INR 1000 Flipkart Gift Voucher. I have already sent you an email with the details.

Reference: Pinal Dave (http://blog.SQLAuthority.com)


SQLAuthority News – Microsoft Whitepaper – Migrating Content Between Report Servers

Developers and administrators face a very common challenge when they have to migrate content between SQL Server Reporting Services report servers, because SQL Server Reporting Services (SSRS) currently doesn’t include a feature for migrating content items and settings from one Reporting Services report server to another report server. There are two different methods to overcome this challenge. 1) Create RSS scripts that are used by rs.exe to migrate the content of Reporting Services between report servers of the same mode or different modes. 2) Run the Reporting Services Migration Tool to migrate the content from a Native mode report server to a SharePoint mode report server.
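
For the first method, the general shape of an rs.exe invocation is shown below; the script file name MigrateContent.rss is a hypothetical placeholder, and the .rss script itself must be written to read the content items from the source server and republish them on the target.

rs.exe -i MigrateContent.rss -s http://SourceServer/ReportServer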

The white paper discusses the Reporting Services Migration Tool and various aspects of the migration method, such as the location of the tool, supportability of the tool, a list of the contents migrated by the tool, the migration steps, and known issues with the tool. You can download the tool from here.

Click here to download and read the white paper

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQLAuthority News – Microsoft Whitepaper – Idle Connection Resiliency

Developers can write data access applications that enable idle connection resiliency with the .NET Framework. An idle connection is one that is open but is not executing a command or waiting for data. It is very important to understand how an idle connection reconnects in the .NET Framework with SQL Server. This white paper discusses exactly that in very simple words. The user has to connect either to SQL Server 2014 or to Microsoft Azure SQL Database to enable idle connection resiliency.

Here is a very interesting example of idle connection resiliency provided in the Overview section of the whitepaper.

Let’s imagine that you are a roaming worker who needs to use an Access application to connect to SQL Server. When you need to move from meeting to meeting, you normally close your notebook’s lid in order to move. Working online, every time this happens you may end up disconnected, either because your notebook sleeps or due to blind wireless spots in your building. To avoid the hassle of being disconnected, you may choose to avoid certain places (like elevators, for example) and walk with your notebook’s lid open. Now, imagine if you could close your lid and walk anywhere in your building (even take the elevator) and just arrive at your next meeting, open your lid and find your work there, waiting for you to continue. To address this and other scenarios where an idle connection drops, SQL Server introduced a new feature called Idle Connection Resiliency.

Well, that’s it. This white paper describes the internal workings of Idle Connection Resiliency. It further discusses the client’s idle connection reconnect logic, client session state handling and replay logic, non-recoverable session states, and general considerations.
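
As a practical side note (my addition, not taken from the whitepaper): in ADO.NET, connection resiliency behavior can be tuned with the ConnectRetryCount and ConnectRetryInterval connection string keywords, which control how many reconnect attempts are made and how many seconds elapse between them. Here is a hedged sketch of such a connection string, where the server and database names are placeholders:

Server=tcp:MyServer;Database=MyDatabase;Integrated Security=true;ConnectRetryCount=3;ConnectRetryInterval=10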

Click here to read the white paper on Idle Connection Resiliency.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Drop All the Foreign Key Constraint in Database – Create All the Foreign Key Constraint in Database

Earlier I wrote a blog post about how to Disable and Enable all the Foreign Key Constraints in the Database. It is a very popular article. However, there are some scenarios when a user needs to drop and recreate the foreign key constraints. Here is a fantastic blog comment by SQL Expert Swastik Mishra. He has written a script which drops all the foreign key constraints and recreates them. He uses a table variable to capture the existing foreign keys with their respective column and table names, then determines the corresponding primary key tables and columns, and accordingly generates the DROP and ADD statements.

Here is a script by Swastik and it works great.

SET NOCOUNT ON
DECLARE
@table TABLE(
RowId INT PRIMARY KEY IDENTITY(1, 1),
ForeignKeyConstraintName NVARCHAR(200),
ForeignKeyConstraintTableSchema NVARCHAR(200),
ForeignKeyConstraintTableName NVARCHAR(200),
ForeignKeyConstraintColumnName NVARCHAR(200),
PrimaryKeyConstraintName NVARCHAR(200),
PrimaryKeyConstraintTableSchema NVARCHAR(200),
PrimaryKeyConstraintTableName NVARCHAR(200),
PrimaryKeyConstraintColumnName NVARCHAR(200)
)
INSERT INTO @table(ForeignKeyConstraintName, ForeignKeyConstraintTableSchema, ForeignKeyConstraintTableName, ForeignKeyConstraintColumnName)
SELECT
U.CONSTRAINT_NAME,
U.TABLE_SCHEMA,
U.TABLE_NAME,
U.COLUMN_NAME
FROM
INFORMATION_SCHEMA.KEY_COLUMN_USAGE U
INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS C
ON U.CONSTRAINT_NAME = C.CONSTRAINT_NAME
WHERE
C.CONSTRAINT_TYPE = 'FOREIGN KEY'
UPDATE @table SET
PrimaryKeyConstraintName = UNIQUE_CONSTRAINT_NAME
FROM
@table T
INNER JOIN INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS R
ON T.ForeignKeyConstraintName = R.CONSTRAINT_NAME
UPDATE @table SET
PrimaryKeyConstraintTableSchema = TABLE_SCHEMA,
PrimaryKeyConstraintTableName = TABLE_NAME
FROM @table T
INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS C
ON T.PrimaryKeyConstraintName = C.CONSTRAINT_NAME
UPDATE @table SET
PrimaryKeyConstraintColumnName = COLUMN_NAME
FROM @table T
INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE U
ON T.PrimaryKeyConstraintName = U.CONSTRAINT_NAME
--SELECT * FROM @table
--DROP CONSTRAINT:
SELECT
'
ALTER TABLE ['
+ ForeignKeyConstraintTableSchema + '].[' + ForeignKeyConstraintTableName + ']
DROP CONSTRAINT '
+ ForeignKeyConstraintName + '

GO'
FROM
@table
--ADD CONSTRAINT:
SELECT
'
ALTER TABLE ['
+ ForeignKeyConstraintTableSchema + '].[' + ForeignKeyConstraintTableName + ']
ADD CONSTRAINT '
+ ForeignKeyConstraintName + ' FOREIGN KEY(' + ForeignKeyConstraintColumnName + ') REFERENCES [' + PrimaryKeyConstraintTableSchema + '].[' + PrimaryKeyConstraintTableName + '](' + PrimaryKeyConstraintColumnName + ')

GO'
FROM
@table
GO
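
If you prefer not to copy the generated statements out of the result grid by hand, here is a minimal sketch (my addition, not part of Swastik's script) of a dynamic execution variant for the DROP side. Note that it must run in the same batch as the script above (before the final GO), because the @table variable does not survive a batch boundary; the same pattern works for the ADD side.

-- Hedged sketch: concatenate the DROP statements and execute them in one go.
-- Must run in the same batch as @table above.
DECLARE @sql NVARCHAR(MAX) = N''
SELECT @sql = @sql + N'ALTER TABLE [' + ForeignKeyConstraintTableSchema + N'].['
    + ForeignKeyConstraintTableName + N'] DROP CONSTRAINT ['
    + ForeignKeyConstraintName + N'];' + CHAR(10)
FROM @table
EXEC sp_executesql @sql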

Thanks Swastik for a great script. Swastik, please send me an email at my mail id and I will send you a USD 20 Amazon Gift Card or an INR 1000 Flipkart Gift Voucher for your wonderful contribution. If you use any such script in your daily routine, please send me an email and I will be glad to post it on the blog with due credit.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Finding Jobs Shrinking Database Files – Notes from the Field #023

[Notes from Pinal]: Search the internet for ‘Shrinking Database’ and you will find plenty of advice on why it is bad and should not be done. If you do not believe me, just try it yourself. Even I have blogged before that it is an absolutely bad thing to do. However, there are very few blogs which talk about how to solve this ancient problem. The reason I call it an ancient problem is that quite a few organizations have been doing this for many years, and they have many different places where they have included database shrinking code. In quite a few places, I see that people have created database jobs to shrink the database. I was talking to Tim about this and he came up with a beautifully simple script which demonstrates how to find the jobs that are shrinking database files.

Linchpin People are database coaches and wellness experts for a data driven world. In this 16th episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains a very simple script to find jobs shrinking database files.


Often when analyzing a SQL Server instance I come across jobs that automate shrinking a database file. Checking for a shrink operation is important when analyzing a server for a number of reasons. One is that any time a data file or log file has to grow, transactions have to wait until the growth operation is complete, thus causing a performance impact. Shrinking a log file can contribute to a high virtual log file count, and shrinking a data file will fragment the database.

I check to see if any database maintenance plans exist and, if so, whether auto shrink has been chosen; occasionally I find this is the case. I also have a T-SQL script that will search for any reference to the word ‘shrink’ in a job step. I find custom jobs like this more often than auto shrink being used in a maintenance plan. Oftentimes the justification for someone configuring a job like this is drive space issues.
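
As a quick companion check (my addition, not part of Tim's script), you can also list any databases that have the AUTO_SHRINK database option enabled directly from sys.databases:

-- Databases with the AUTO_SHRINK option turned on.
SELECT name, is_auto_shrink_on
FROM sys.databases
WHERE is_auto_shrink_on = 1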

In cases where the shrink is done to reclaim drive space, it is most often due to not having a proper backup routine in place for the transaction logs, or to a process that keeps a transaction open for an extended period of time. The transaction log will need to be large enough to handle your index maintenance, ETL processes, and the transactions that occur between log backups. If you find erratic large growth, you will need to examine which transactions or processes are holding an active transaction open for such a long duration, or revisit the frequency of your log backups.

Below is the script I use to search for the word ‘shrink’ in any T-SQL job step.

DECLARE @Search VARCHAR(100)
SET @Search = 'shrink'

SELECT A.[job_id],
       B.[name],
       [step_id],
       [step_name],
       [command],
       [database_name]
FROM [msdb].[dbo].[sysjobsteps] A
JOIN [msdb].[dbo].[sysjobs] B ON A.[job_id] = B.[job_id]
WHERE [command] LIKE '%' + @Search + '%'
ORDER BY [database_name],
         B.[name],
         [step_id]
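
Once the search surfaces a job, you will usually also want to know when it runs. Here is a hedged companion query (my addition) against the standard msdb schedule tables; the job name in the WHERE clause is a placeholder to replace with a name returned by the search above.

-- When is a given job scheduled to run? (the job name is a placeholder)
SELECT j.[name] AS job_name,
       s.[name] AS schedule_name,
       s.[enabled]
FROM [msdb].[dbo].[sysjobs] j
JOIN [msdb].[dbo].[sysjobschedules] js ON j.job_id = js.job_id
JOIN [msdb].[dbo].[sysschedules] s ON js.schedule_id = s.schedule_id
WHERE j.[name] = N'NightlyShrinkJob'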

If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server.

Note: Tim has also written an excellent book on SQL Backup and Recovery, a must-have for everyone.



Reference: Pinal Dave (http://blog.sqlauthority.com)

MySQL – Finding First day and Last day of a Month

MySQL supports a lot of DATE and TIME related functions. If you want to find the last day of a month, you can make use of the built-in function LAST_DAY.

SET @date := '2012-07-11';
SELECT LAST_DAY(@date) AS last_day;

The above code returns the value 2012-07-31.

However, there is no built-in function to find the first day of a month. We will see two methods to find the first day.

Method 1: Use DATE_ADD and LAST_DAY functions

SET @date := '2012-07-11';
SELECT DATE_ADD(DATE_ADD(LAST_DAY(@date), INTERVAL 1 DAY), INTERVAL -1 MONTH) AS first_day;

Result is

first_day
----------
2012-07-01

The logic is to find the last day of the month using the LAST_DAY function, add one day to it using the DATE_ADD function so that you get the first day of the next month, and then subtract one month from the result so that you get the first day of the current month.

Method 2: Use DATE_ADD and DAY functions

SET @date := '2012-07-11';
SELECT DATE_ADD(@date, INTERVAL -DAY(@date)+1 DAY) AS first_day;

Result is

first_day
----------
2012-07-01

The logic is to take the day part of the date, subtract it from the date (which lands on the last day of the previous month), and then add one day back. The result is the first day of the month.

So you can effectively make use of these functions to perform various datetime related logic in MySQL.
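
To make this concrete, here is a hedged sketch (my addition) that uses the two boundaries together to select all rows falling within the month of a given date; the orders table and its order_date column are assumed purely for illustration.

-- Hypothetical example: all orders in the month containing @date.
-- The `orders` table and `order_date` column are assumed for illustration.
SET @date := '2012-07-11';
SELECT *
FROM orders
WHERE order_date >= DATE_ADD(@date, INTERVAL -DAY(@date)+1 DAY)
  AND order_date <= LAST_DAY(@date);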

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – What is SSAS Tabular Data Model and Why to Use it

Analysis Services in SQL Server 2012 can be deployed in multidimensional mode, in tabular mode, or as Power Pivot for SharePoint. Tabular mode is a new enhancement in the SQL Server 2012 Analysis Services database structure. It is a columnar database capable of incredible performance and compression ratios. At this point, there is a lot of confusion among users about why to use the tabular model when we already have the multidimensional model. So, let’s discuss these points first before creating a tabular model project.

Why to Use:

1) The tabular model is quite easy to understand and implement, and is particularly made for empowering information workers.

2) The tabular model uses DAX for scripting, which is similar to using Excel formulas and is faster to learn.

3) It uses the VertiPaq (xVelocity) engine for in-memory column storage, which gives great performance, and DirectQuery mode for retrieving data and aggregates directly from the database, which is beneficial for querying data in real time.

4) Power Pivot models can be easily upgraded to tabular models, thus providing a path for business users and IT professionals to author models in familiar tools like MS Excel.

5) All the client applications that support multidimensional mode will also support tabular and work natively with it. This is because tabular uses the same data provider, which understands both MDX and DAX queries.

When Not to Use:

1) When the source is based on dimensional modeling and has complex relationships with very large volumes of data.

2) When you need support for writeback or parent-child hierarchies.

3) When you want to do complex calculations, scoping, and named sets.

Getting Started with the Tabular Model Project:

Click on New Project in SQL Server Data Tools; there will be five template options under Business Intelligence > Analysis Services. For this example, I’m creating an Analysis Services Tabular Project.

As soon as you click OK, you will be prompted to specify an instance of Analysis Services installed in tabular mode. This is the instance which will be used for creating a workspace database for the project.

Once you click OK, you will notice that a Model.bim file is generated. To import data into the tabular model, go to the Model menu in the toolbar and click Import From Data Source.

As you can see, you can import data from a variety of sources. For the purpose of this example, we will choose Microsoft SQL Server and click Next >.

Here you will be asked to enter credentials to connect to the data source. After entering the credentials, click Next >.

Here you have to choose whether you want to import data from tables and views, or write a customized query to specify the data to import. For the purpose of this example, I’ve taken the first option.

Since I chose the first option, we get a list of tables and views from the database we selected in the earlier steps. Here you can select the tables and views that you want to import, or optionally choose one table and click on the “Select Related Tables” button to include all related tables based on relationships. Click on the Finish button.

After you click the Finish button, the import process will start, and you can watch the status of the import as it runs.

After the import process is finished, you can see the project in either grid or diagram view. By default, you will see the grid view of all the tables you selected.

Click on the bottom right corner to see the same in the diagram view.

Right-click any table to see various options that will enable you to create relationships, create a hierarchy, or hide the table from the client tool.

To summarize, we have created a tabular mode project and imported data from a data source into this project.

Hope this post was helpful in clearing up your doubts. To understand these concepts in greater detail, you can enroll for a Microsoft SQL Server 2012 course from a recognized IT training organization such as Koenig Solutions, which offers more than 1000 IT courses and certifications.

Reference: Pinal Dave (http://blog.SQLAuthority.com)