SQL SERVER – Performance Monitoring for Analysis Services – Notes from the Field #093

[Note from Pinal]: This is a new episode of Notes from the Field series. When we build any application, we build it with zeal and enthusiasm. We believe it will do well for many years, but that is not always the case. After a while, performance starts to go down and everything is no longer in the best shape. This is the time when we need to monitor performance and take action based on our analysis.


In this episode of the Notes from the Field series I asked SQL Expert Bill Anton a very crucial question – how do you monitor the performance of Analysis Services? Bill was very kind to answer the question and provide plenty of information about how a novice developer can work with SSAS.


When it comes to ad-hoc query performance in business intelligence solutions, very few technologies rival a well-designed Analysis Services Multidimensional cube or Tabular model. And when that cube/tabular model is performing well, as it usually does in the beginning, life is good!


Photo Credit: SuperCar-RoadTrip.fr

Over time, however, things can change. The underlying business evolves (hopefully) and information workers start asking different questions, resulting in new query patterns. The business grows and now more users are asking more questions of larger data volumes. Performance starts to deteriorate. Queries are taking longer to complete and nightly processing starts to run past the end of the maintenance window into business hours. Users are complaining, management is unhappy, and life is no longer good.


Photo Credit: Charlie

How did we not see this coming?

If this describes your personal situation, don’t worry, you’re not alone. In fact the majority of clients who bring me in to troubleshoot and resolve their Analysis Services performance problems ask the same question (or some variation of it).

In my experience, 80% of the time the reason no one sees this type of issue coming is that there wasn’t a performance monitoring solution in place. The other 20% have a performance monitoring solution but simply aren’t using it or reviewing the information being collected.

I don’t know why so many Analysis Services environments are neglected (I’m not a Business or IT therapist) but I’m going to tell you what steps need to be taken if you want to avoid it.

The Secret is Simple

The first step to maintaining a well-running Analysis Services instance is simple: take measurements!


Photo Credit: Official U.S. Navy Page

Taking regular measurements is the only way to know how things are performing over time. This seems ridiculously obvious, but so few companies actually do it. My hunch is that these folks just don’t know what needs to be measured.

Analysis Services can be broken down into 2 fundamental types of workloads:

Processing Workloads

Processing is the term used to describe how data gets loaded into the Analysis Services database. This workload is usually done at night, outside of business hours, so as not to coincide with user activity.

From a performance perspective, the primary things to keep an eye on are:

Processing Duration
This is the amount of time it takes to load data into the Analysis Services database. As long as the processing duration fits within the scheduled maintenance window, there’s not much to worry about. However, if this duration is increasing over time and you think it will eventually run past that window, then you’ll need to review the rest of the information to figure out “why” and “what to do”.

How long does it take to process the Analysis Services database?
Is the processing duration increasing over time?

Resource Consumption
Keeping an eye on resource consumption (e.g. CPU, memory, disk, network) during processing is also a good idea. This kind of information can help shed some light on bottlenecks and provide guidance when coming up with a solution to processing related problems.

Is the processing workload requiring more and more memory? Are we maxing out CPU? How’s disk space?

There are many solutions to problems faced during processing, but without some insight into what’s happening on the system, it’s hard to know which solution is the optimal one.

For example, say we have a Multidimensional cube and notice that the processing duration for one of the measure groups is steadily increasing over time. We review the resource consumption and see that there’s plenty of CPU/Memory/IO to spare. In this case, we may consider partitioning this particular fact table and only processing the most recent partition or processing the partitions in parallel.

Pro Tip: Breaking up processing into stages will provide context to the information above and make it much easier to see which part(s) of the Analysis Services database are contributing to the increase in processing duration or memory consumption.
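One lightweight way to capture these measurements is to have the processing job write a row per stage into a small log table that you can trend over time. The sketch below is only an illustration – the table, column, and stage names are assumptions, not part of any built-in feature.

-- A minimal sketch of a processing log table (names are illustrative assumptions)
CREATE TABLE dbo.SsasProcessingLog
(
LogId INT IDENTITY(1,1) PRIMARY KEY,
StageName VARCHAR(128) NOT NULL, -- e.g. 'Dimensions', 'FactSales - current partition'
StartTime DATETIME2(0) NOT NULL,
EndTime DATETIME2(0) NULL,
DurationSec AS DATEDIFF(SECOND, StartTime, EndTime)
)
GO
-- The processing job inserts a row before each stage and updates it afterwards
DECLARE @LogId INT
INSERT dbo.SsasProcessingLog (StageName, StartTime)
VALUES ('FactSales - current partition', SYSDATETIME())
SET @LogId = SCOPE_IDENTITY()
-- ... run the processing command for this stage here ...
UPDATE dbo.SsasProcessingLog
SET EndTime = SYSDATETIME()
WHERE LogId = @LogId
GO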

Query Workloads

This refers to any activity that generates queries against the SSAS database. It could be users running reports, loading dashboards, analyzing data via pivot tables, etc. Because users typically don’t run the same queries at the same time every day, this workload can be much more complicated to monitor.

The key to success is to start with the high level stuff and only go into the details if/when necessary.

Queries
The single most important thing to track for this type of workload is a log of the actual queries being executed against the Analysis Services database.

Not only will you be able to see which queries are slow, but you’ll also have the actual MDX/DAX executed. You won’t have to wait for the user to complain (telling you their report is taking too long to load) because you’ll already have the query and can start reviewing it immediately.

Some Analysis Services implementations actually have service level agreements (SLA) with criteria such as “no query should take more than 30 seconds to complete” and “the average query response time should be less than 5 seconds”. If you’re tracking every query against the Analysis Services database, not only will you know if the SLA has been violated, but you’ll know which query or queries it was that led up to the violation and can start troubleshooting immediately.
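If you do not have query tracing in place yet, a low-effort starting point is to poll the Analysis Services dynamic management views on a schedule and persist the results. The sketch below is run in an MDX query window against the SSAS instance; the DMV and column names are based on my recollection of the DISCOVER_SESSIONS schema, so verify them on your build first.

-- A minimal sketch: capture the last command and timings per session
SELECT SESSION_SPID,
SESSION_USER_NAME,
SESSION_CURRENT_DATABASE,
SESSION_LAST_COMMAND,
SESSION_LAST_COMMAND_START_TIME,
SESSION_LAST_COMMAND_END_TIME,
SESSION_CPU_TIME_MS
FROM $SYSTEM.DISCOVER_SESSIONS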

Users
Tracking the number of folks using your system (and when they are using it) will prove very helpful for knowing if/when to start considering options for scaling the system up and/or out.

This information can usually be extracted from whatever mechanism you use to track queries being executed against the Analysis Services database, but it is important enough to deserve its own section.

Resource Consumption
In the same vein as the above discussion around tracking resource consumption of the processing workload, you’ll also want to track the same measures (e.g. CPU, memory, disk, network) throughout the day. This information may provide some clues as to why a query is slow.

For example, say you’re reviewing the top 10 slowest queries from the previous week and find several of the queries are now running very fast. At this point you can switch over and start looking at the state of the system last week at the time the query was slow (from a resource consumption perspective) and perhaps find some clues, such as memory pressure or CPU bottleneck caused by a spike in activity.

Here are some examples of the types of questions you should be able to answer with the above information:

What are the top 10 slowest queries each week?

Who are your top users and what is the average number of queries they execute per day/week/month?

What is the average number of users per day/week/month?

What is the max number/average number of concurrent users per day/week/month?

Pro Tip: some folks incorrectly assume the OlapQueryLog table tracks queries. However, this table only tracks parts of queries (storage engine requests). A single query executed against an Analysis Services database can potentially generate tens of records in this table. The implication is that it doesn’t give you the whole story, and you won’t always be able to determine which queries are slow.
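If the query log feature is enabled on your instance, you can see this granularity for yourself. The sketch below assumes the default table name (OlapQueryLog) and the column names as I recall them, so treat it as an illustration rather than a definitive script.

-- Several rows (storage engine requests) typically map to a single user query
SELECT MSOLAP_User,
MSOLAP_ObjectPath,
Dataset,
StartTime,
Duration
FROM dbo.OlapQueryLog
ORDER BY StartTime DESC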

Next Steps

Now that you know what types of information you should be collecting and reviewing regularly, the next step is to figure out how you’re going to collect that information. The good news is that there are quite a few options available. The bad news is that you’ll have to wait until next time to find out. Here is the SQLPASS session which discusses the same concept: Analysis Services: Show Me Where It Hurts.

If you want to get started on performance monitoring with SSAS with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Understanding the Basics of Write Ahead Logging (WAL) Protocol

My journey into IT started way back in my college days, more than a decade ago. It was a booming time and it was exciting to see everyone eyeing the Computer Science stream. Almost every other person in those days wanted to ride this new trend. Riding the waves was one part of the excitement; the other was to be unique in whatever we do. It is easy to get stereotyped and never stand apart from the herd. I have always believed in doing the basics right – this blog project, which I started more than 8 years ago, was meant to bring those basic learnings back. I am glad I have been doing this.

When I was at my hometown, I planned to visit my college, spend some quality time with the students, and relive some old moments. I have always dreamt of giving time back to the institution that made me who I am now. At this moment I request every reader to spend a couple of hours in a year with students from your college. I just wish I had the same exposure in my days back then.

During my last visit, I introduced myself as a database expert (based on what people say). One of the students asked me how durability is achieved in modern databases – did I know anything about it and how it is done?

This question got me thinking, and I immediately said the first place to search would be my blog. I quickly checked, only to find that I hadn’t written about this subject. To complete the conversation in true college style, I took the whiteboard and explained the simple WAL protocol. After a couple of hours of sessions on various topics, I came out exhausted – and then thought of writing it up here.

The concept of Write Ahead Logging is common to most database systems. This process ensures that no modification to a database page will be flushed to disk until the transaction log records associated with that modification are written to disk first. Why do we do this? The simple answer is to maintain the ACID properties of a transaction.

What if the database modifications were flushed first and a power failure occurred before the transaction log was written? Well, if the entire transaction was committed and all changes to all pages were also written to disk, it wouldn’t be a problem. But what if the page changes were flushed to disk due to a lazy write to free up buffers while those changes were part of an active transaction? Transactional consistency would be compromised: the database page on disk would contain changes that are part of an uncommitted transaction, and the log records needed to roll back the change would not exist.

This is the reason we write to the log file first, and hence the term “write ahead logging”. Because the transaction is persisted in the log first, the data files and data pages can be appropriately rolled forward (in the case of committed transactions) or rolled back (in the case of failed/rolled-back transactions) after a power outage or abrupt shutdown.
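If you want to see this behavior for yourself, the undocumented sys.fn_dblog function exposes the log records as they are generated. Below is a minimal sketch; the scratch database and table (WALDemo, dbo.Accounts) are assumptions you would create just for the test.

-- A minimal sketch (scratch database and table names are assumptions)
USE WALDemo
GO
BEGIN TRANSACTION
-- This modification is recorded in the transaction log immediately,
-- even though the changed data page may not have been flushed to disk yet
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1
-- Inspect the log records generated so far for the still-open transaction
SELECT [Current LSN], Operation, AllocUnitName
FROM sys.fn_dblog(NULL, NULL)
WHERE Operation IN ('LOP_BEGIN_XACT', 'LOP_MODIFY_ROW')
ROLLBACK TRANSACTION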

More information on this topic can be found in SQL Server Books Online under the topic “Write-Ahead Transaction Log”.

Other sources on this topic include: INF: SQL Server 7.0 and SQL Server 2000 Logging and Data Storage

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Error while starting SQL Server logFileSize <= BlkToByteOffset ((ULONG)-1)

Recently I was playing with the tempdb database in SQL Server and made a mistake. Because of it, I was not able to start the SQL Server service itself. Since it was an interesting problem, it is worth blogging about.

Let’s have a look at the problem. Whenever there is SQL startup trouble, always start with the SQL ERRORLOG. If you are not sure where it is located, refer to:

SQL SERVER – Where is ERRORLOG? Various Ways to Find its Location


2015-08-08 15:07:16.67 spid10s Clearing tempdb database.
…
2015-08-08 15:07:16.99 spid10s Error: 17066, Severity: 16, State: 1.
2015-08-08 15:07:16.99 spid10s SQL Server Assertion: File: <"logmgr.cpp">, line=14870 Failed Assertion = 'logFileSize <= BlkToByteOffset ((ULONG)-1)'. This error may be timing-related. If the error persists after rerunning the statement, use DBCC CHECKDB to check the database for structural integrity, or restart the server to ensure in-memory data structures are not corrupted.

2015-08-08 15:07:16.99 spid10s Error: 3624, Severity: 20, State: 1.
2015-08-08 15:07:16.99 spid10s A system assertion check has failed. Check the SQL Server error log for details. Typically, an assertion failure is caused by a software bug or data corruption. To check for database corruption, consider running DBCC CHECKDB. If you agreed to send dumps to Microsoft during setup, a mini dump will be sent to Microsoft. An update might be available from Microsoft in the latest Service Pack or in a Hotfix from Technical Support.

2015-08-08 15:07:17.00 spid10s Error: 5173, Severity: 16, State: 1.
2015-08-08 15:07:17.00 spid10s One or more files do not match the primary file of the database. If you are attempting to attach a database, retry the operation with the correct files. If this is an existing database, the file may be corrupted and should be restored from a backup.

2015-08-08 15:07:17.00 spid10s Error: 1802, Severity: 16, State: 4.
2015-08-08 15:07:17.00 spid10s CREATE DATABASE failed. Some file names listed could not be created. Check related errors.
2015-08-08 15:07:17.00 spid10s Could not create tempdb. You may not have enough disk space available. Free additional disk space by deleting other files on the tempdb drive and then restart SQL Server. Check for additional errors in the event log that may indicate why the tempdb files could not be initialized.

Above is a snippet of the ERRORLOG where I have removed many lines to keep things clear. Here is something interesting: if we look at the file size of the tempdb transaction log file, it’s 0 KB (yes, zero KB). I thought that since it’s tempdb the files would be recreated anyway, so I deleted them, but still no joy. I had another SQL instance with the same build of SQL Server, so I picked the files from there and copied them to the desired location, but SQL startup automatically replaced those files.


Solution:

A) Start SQL Server with the startup parameter /f, which stands for minimal configuration. You need to change the instance name/server name. For me, it is a named instance of SQL Server called SQL2014.

net start MSSQL$SQL2014 /f

If you have a default instance, then the service is called MSSQLSERVER.

B) Connect to SQL via SQLCMD. You need to change the instance name/server name.

SQLCMD -S.\SQL2014

C) After the connection is made, check the file sizes of the tempdb database. Note that sys.master_files reports size in 8 KB pages.

SELECT name, size FROM sys.master_files WHERE database_id = 2

In my case I got 0 for the LDF file, and that was the cause of the whole problem.

D) If you get zero, then alter the database and change the size again

ALTER DATABASE [tempdb] MODIFY FILE(NAME=N'templog',SIZE= 10240KB);

E) Check the size again by running the same command as step C. Then exit SQLCMD.

F) Stop SQL Service

net stop MSSQL$SQL2014

G) Start SQL normally, and if you were hitting the same issue, it should now be fixed. Use the same command as step A but without the /f parameter.

net start MSSQL$SQL2014

Here is the screenshot of the solution


What caused the problem?

As I said in the beginning, I was playing with tempdb and by mistake I ran the below command:

ALTER DATABASE [tempdb] MODIFY FILE(NAME=N'templog',SIZE= 0);

That zero size caused the whole problem.

Note: Please do not run the above command on a production server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Error: Msg 4305, Level 16, State 1 – The log in this backup set terminates at LSN, which is too early to apply to the database.

While working with transaction log backup restores or log shipping, you must have seen the below error message.

Msg 4305, Level 16, State 1, Line 47
The log in this backup set begins at LSN 33000000048000001, which is too recent to apply to the database. An earlier log backup that includes LSN 33000000044800001 can be restored.
Msg 3013, Level 16, State 1, Line 47
RESTORE LOG is terminating abnormally.

Let’s have a look at the cause of this message. Before that, let’s get our sample database ready with backups so that we can play with them.

--Create an empty database
CREATE DATABASE SQLAuthority
GO
-- use the database
USE SQLAuthority
GO
--create first test table.
CREATE TABLE MyTestTable1 (iCol INT)
-- perform full backup F1
BACKUP DATABASE SQLAuthority TO DISK = 'c:\SQLAuthority_F1.bak' WITH FORMAT
--create second test table.
CREATE TABLE MyTestTable2 (a INT)
-- perform transaction log backup T1
BACKUP LOG SQLAuthority TO DISK = 'c:\SQLAuthority_T1.trn' WITH FORMAT
--create third test table.
CREATE TABLE MyTestTable3 (a INT)
-- perform transaction log backup T2
BACKUP LOG SQLAuthority TO DISK =  'c:\SQLAuthority_T2.trn' WITH FORMAT
--create fourth test table.
CREATE TABLE MyTestTable4 (a INT)
-- perform full backup F2
BACKUP DATABASE SQLAuthority TO DISK = 'c:\SQLAuthority_F2.bak' WITH FORMAT

Now, we have backups taken in below order:

  1. Full backup ‘c:\SQLAuthority_F1.bak’
  2. First transaction log backup ‘c:\SQLAuthority_T1.trn’
  3. Second transaction log backup ‘c:\SQLAuthority_T2.trn’
  4. Full backup ‘c:\SQLAuthority_F2.bak’

Reproduce the Error

-- Let's perform the restore now, but before that, let's drop the database
USE MASTER
GO
ALTER DATABASE SQLAuthority SET single_user WITH ROLLBACK IMMEDIATE
GO
DROP DATABASE SQLAuthority
GO
RESTORE DATABASE SQLAuthority FROM DISK = 'c:\SQLAuthority_F1.bak' WITH NORECOVERY
GO
RESTORE LOG SQLAuthority FROM DISK = 'c:\SQLAuthority_T2.trn' WITH NORECOVERY
GO

Here is the error.

Processed 280 pages for database ‘SQLAuthority’, file ‘SQLAuthority’ on file 1.
Processed 6 pages for database ‘SQLAuthority’, file ‘SQLAuthority_log’ on file 1.
RESTORE DATABASE successfully processed 286 pages in 0.148 seconds (15.097 MB/sec).
Msg 4305, Level 16, State 1, Line 43
The log in this backup set begins at LSN 33000000026400001, which is too recent to apply to the database. An earlier log backup that includes LSN 33000000023200001 can be restored.
Msg 3013, Level 16, State 1, Line 43
RESTORE LOG is terminating abnormally.

The error appeared because we skipped T1. As the message says, “An earlier log backup that includes … can be restored”. We can get the list of backups taken from the msdb database on the source server.

DECLARE @db_name VARCHAR(100)
SELECT @db_name = 'SQLAuthority'
--Get Backup History for required database
SELECT TOP (30) s.database_name
,m.physical_device_name
,CAST(CAST(s.backup_size / 1000000 AS INT) AS VARCHAR(14)) + ' ' + 'MB' AS bkSize
,CAST(DATEDIFF(second, s.backup_start_date, s.backup_finish_date) AS VARCHAR(4)) + ' ' + 'Seconds' TimeTaken
,s.backup_start_date
,CAST(s.first_lsn AS VARCHAR(50)) AS first_lsn
,CAST(s.last_lsn AS VARCHAR(50)) AS last_lsn
,CASE s.[type]
WHEN 'D'
THEN 'Full'
WHEN 'I'
THEN 'Differential'
WHEN 'L'
THEN 'Transaction Log'
END AS BackupType
,s.server_name
,s.recovery_model
FROM msdb.dbo.backupset s
INNER JOIN msdb.dbo.backupmediafamily m ON s.media_set_id = m.media_set_id
WHERE s.database_name = @db_name
ORDER BY backup_start_date DESC
,backup_finish_date

We can also get the same information from the ERRORLOG.

SQL SERVER – Where is ERRORLOG? Various Ways to Find its Location

Here are the sample messages. I have trimmed them for clarity.

2015-08-08 17:49:06.340 Backup Database backed up. Database: SQLAuthority, creation date(time): 2015/08/08(17:49:05), pages dumped: 291, first LSN: 33:136:179, last LSN: 33:232:1, number of dump devices: 1, device information: (FILE=1, TYPE=DISK: {‘c:\SQLAuthority_F1.bak’}).

2015-08-08 17:49:06.550 Backup Log was backed up. Database: SQLAuthority, creation date(time): 2015/08/08(17:49:05), first LSN: 33:136:179, last LSN: 33:264:1, number of dump devices: 1, device information: (FILE=1, TYPE=DISK: {‘c:\SQLAuthority_T1.trn’}).

2015-08-08 17:49:06.680 Backup Log was backed up. Database: SQLAuthority, creation date(time): 2015/08/08(17:49:05), first LSN: 33:264:1, last LSN: 33:280:1, number of dump devices: 1, device information: (FILE=1, TYPE=DISK: {‘c:\SQLAuthority_T2.trn’}).

2015-08-08 17:49:07.110 Backup Database backed up. Database: SQLAuthority, creation date(time): 2015/08/08(17:49:05), pages dumped: 291, first LSN: 33:312:143, last LSN: 33:392:1, number of dump devices: 1, device information: (FILE=1, TYPE=DISK: {‘c:\SQLAuthority_F2.bak’}).
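In this example, T1 is the missing piece: restoring it before T2, and then recovering the database, resolves the error. Here is a sketch of the corrected sequence using the backup files created above.

-- Restore the chain in order: full backup, then every log backup in sequence
RESTORE DATABASE SQLAuthority FROM DISK = 'c:\SQLAuthority_F1.bak' WITH NORECOVERY
GO
RESTORE LOG SQLAuthority FROM DISK = 'c:\SQLAuthority_T1.trn' WITH NORECOVERY
GO
RESTORE LOG SQLAuthority FROM DISK = 'c:\SQLAuthority_T2.trn' WITH NORECOVERY
GO
-- Bring the database online once all required backups are applied
RESTORE DATABASE SQLAuthority WITH RECOVERY
GO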

So, once we find the missing transaction log backup, we simply restore the backups in sequence, as shown above. Do you have any other trick to solve this issue? If you can share, it would help others as well.
Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #031 – How to do Case Sensitive SQL Query Search

Question – How to do a case sensitive search on a column of a table?

Answer

If Column1 of Table1 has the following values ‘CaseSearch, casesearch, CASESEARCH, CaSeSeArCh’, the following statement will return all four records.

SELECT Column1
FROM Table1
WHERE Column1 = 'casesearch'

To make the query case sensitive and retrieve only one record (“casesearch”) from the above query, the collation of the query needs to be changed as follows.

SELECT
Column1
FROM Table1
WHERE Column1 COLLATE Latin1_General_CS_AS = 'casesearch'

Adding COLLATE Latin1_General_CS_AS makes the search case sensitive.

The default collation of a SQL Server installation, SQL_Latin1_General_CP1_CI_AS, is not case sensitive.
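If you are not sure which collation is in effect, you can check it at the server level and at the column level. A quick sketch (Table1 is the example table from above):

-- Server-level default collation
SELECT SERVERPROPERTY('Collation') AS ServerCollation
GO
-- Column-level collation for the example table
SELECT name, collation_name
FROM sys.columns
WHERE object_id = OBJECT_ID('Table1')
GO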

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Adding File to Database in AlwaysOn Availability Group

Writing about typical interview questions has always been on my list. I look for new ways of testing a candidate, and most importantly, some interview questions are designed to make sure the person has strong fundamentals. This blog is one such attempt to understand the basics.

This is one of the common interview questions – what would happen if you add a new file (database file MDF or transaction log file LDF) to a database which is part of a high availability solution? As you might know, there are various high availability solutions like failover clustering, database mirroring, and log shipping, and the effect would be different in each solution. Out of the many high availability solutions, in this blog we will talk about the availability group feature which was introduced in SQL Server 2012.

Let us talk about the basics of availability groups first. Any modification operation done on the database is written to the transaction log file as log records. Since the database is part of an availability group, the same operation is sent to all secondary replicas via those log records. This means that adding a data or log file to the primary database essentially roll-forwards the same command on all secondary replicas, i.e. the ADD FILE operation will also be executed on the secondary servers. Using the below code, I have added a file on drive E on the primary instance.

USE [master]
GO
ALTER DATABASE [MyConDB]
ADD FILE (NAME = N'SQLAuthority_2',
FILENAME = N'E:\SQLAuthority.ndf' ,
SIZE = 4096KB, FILEGROWTH = 1024KB) TO FILEGROUP [PRIMARY]
GO

If this drive does not exist on the secondary, then the operation will fail, resulting in a suspended secondary replica. Here is the information seen in the ERRORLOG (SQL SERVER – Where is ERRORLOG? Various Ways to Find its Location):

2015-01-01 17:52:14.19 spid45s     Error: 5123, Severity: 16, State: 1.
2015-01-01 17:52:14.19 spid45s     CREATE FILE encountered operating system error 3(The system cannot find the path specified.) while attempting to open or create the physical file ‘E:\SQLAuthority.ndf’.

2015-01-01 17:52:14.27 spid45s     Error: 5183, Severity: 16, State: 1.
2015-01-01 17:52:14.27 spid45s     Cannot create the file “SQLAuthority_2”. Use WITH MOVE to specify a usable physical file name. Use WITH REPLACE to overwrite an existing file.

2015-01-01 17:52:14.30 spid45s     AlwaysOn Availability Groups data movement for database ‘SQLAuthority’ has been suspended for the following reason: “system” (Source ID 2; Source string: ‘SUSPEND_FROM_REDO’). To resume data movement on the database, you will need to resume the database manually. For information about how to resume an availability database, see SQL Server Books Online.

2015-01-01 17:52:14.33 spid45s     Error: 3313, Severity: 21, State: 2.
2015-01-01 17:52:14.33 spid45s     During redoing of a logged operation in database ‘SQLAuthority’, an error occurred at log record ID (34:358:1). Typically, the specific failure is previously logged as an error in the Windows Event Log service. Restore the database from a full backup, or repair the database.

If it is possible to recreate the same path on the secondary replica, then we can resume the data movement using the below command.

ALTER DATABASE [SQLAuthority] SET HADR RESUME;
GO
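Before resuming, you can confirm the replica's state and the reason for the suspension from the availability group DMVs. A quick sketch, run on the affected replica:

-- Check synchronization state and suspend reason for each availability database
SELECT DB_NAME(database_id) AS database_name,
synchronization_state_desc,
is_suspended,
suspend_reason_desc
FROM sys.dm_hadr_database_replica_states
GO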

If it is not possible to recreate the path on the secondary, then we need to reinitialize that secondary again, which means we need to restore a full backup and a log backup from the primary.

Have you encountered the same problem in Log shipping or Clustering? What steps did you take to solve it? Do let me know via the comments below, I am sure we can learn from each other this way.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to do IF…THEN in SQL SERVER?

Just the other day I received the following question –

“Pinal,

I am from DotNet expertise and I do not know how to convert the logic of IF…THEN into SQL statement, would you please help?

~Rajesh”

I love simple questions as they have simple answers. In SQL Server, we can use a CASE statement to implement IF…THEN. Here is an example of the CASE statement.

Example 1:

If your logic is as follows:

IF -1 < 1 THEN 'TRUE'
 ELSE 'FALSE'

You can just use a CASE statement as follows:

-- SQL Server 2008 and earlier version solution
SELECT CASE
WHEN -1 < 1 THEN 'TRUE'
ELSE 'FALSE' END AS Result
GO

-- SQL Server 2012 solution
SELECT IIF ( -1 < 1, 'TRUE', 'FALSE' ) AS Result;
GO

If you are interested in learning more about how IIF works in SQL Server 2012, read the blog post.

Well, in our example the condition we have used is pretty simple, but in the real world the logic can be very complex; a slightly more involved example is sketched below.
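For instance, a searched CASE can evaluate several conditions in order. The table and column names below are made up purely for illustration.

-- A hypothetical example: classifying an order amount into bands
SELECT OrderID,
OrderAmount,
CASE
WHEN OrderAmount >= 10000 THEN 'Large'
WHEN OrderAmount >= 1000 THEN 'Medium'
WHEN OrderAmount > 0 THEN 'Small'
ELSE 'Invalid'
END AS OrderSize
FROM dbo.Orders
GO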

You can read more about this subject in the blog post over here: SQL SERVER – Implementing IF … THEN in SQL SERVER with CASE Statements.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – A Stored Procedure for Executing SSIS Packages in the SSIS Catalog – Notes from the Field #092

[Note from Pinal]: This is a new episode of Notes from the Field series. SQL Server Integration Services (SSIS) is one of the most essential parts of the entire Business Intelligence (BI) story. It is a platform for data integration and workflow applications.


In this episode of the Notes from the Field series I asked SSIS Expert Andy Leonard a very crucial question – how do you create a stored procedure for executing an SSIS package in the SSIS Catalog? Andy was very kind to answer the question and provide plenty of information about how a novice developer can learn SSIS from the beginning and become an expert in the technology.


The following is a snippet from Chapter 2 of the book SSIS Design Patterns co-written by Matt Masson (Blog | @mattmasson), Tim Mitchell (Blog | @Tim_Mitchell), Jessica Moss (Blog | @jessicammoss), and Michelle Ufford (Blog | @sqlfool). Earlier in the chapter there are demos that describe a simple SSIS package named Chapter2.dtsx which is part of an SSIS project named Chapter2, and which is deployed to an instance of the SSIS Catalog in a Folder named “Chapter 2”. But you can use this stored procedure to execute any SSIS package in the SSIS Catalog. That’s the whole point!

SQL Server 2014 provides a new way to manage and execute Integration Services packages: Integration Server Catalogs. We explore this method next.

Integration Server Catalogs

You can only manage SSIS projects that use the Project Deployment Model in Integration Services Catalogs. To execute a package in the catalog, use SSMS to connect to the instance of SQL Server hosting the SSISDB database. Expand the Integration Services Catalogs node, and then expand the SSISDB node. Drill into the folder containing the SSIS project and package(s). Right-click the package you wish to execute and click Execute, as shown in Figure 2-1.


Figure 2-1. Executing an SSIS Package deployed to the SSIS Catalog

The Execute Package Window displays, as shown in Figure 2-2. It allows you to override Parameter values, ConnectionString properties of Connection Managers built at design-time, or any other externalize-able property accessible from a Package Path (via the Advanced tab) for this execution instance of the SSIS package stored in the SSIS Catalog.


Figure 2-2. Execute Package Window

Integration Server Catalog Stored Procedures

Please note the Script button above the Parameters tab in Figure 2-2. This button allows you to generate Transact-SQL statements that will execute the SSIS package. For the Chapter2.dtsx package stored in the SSIS Catalog, the scripts will appear similar to that in Listing 2-1.

Listing 2-1. Transact-SQL Script Generated From the Execute Package Window

Declare @execution_id bigint
EXEC [SSISDB].[catalog].[create_execution]
   @package_name=N'Chapter2.dtsx'
  ,@execution_id=@execution_id OUTPUT
  ,@folder_name=N'Chapter2'
  ,@project_name=N'Chapter2'
  ,@use32bitruntime=False
  ,@reference_id=Null
Select @execution_id
DECLARE @var0 smallint = 1
EXEC [SSISDB].[catalog].[set_execution_parameter_value]
   @execution_id
  ,@object_type=50
  ,@parameter_name=N'LOGGING_LEVEL'
  ,@parameter_value=@var0
EXEC [SSISDB].[catalog].[start_execution] @execution_id
GO

You can use these same stored procedures to execute SSIS Packages in the SSIS Catalog! In fact, I designed a script to create a wrapper stored procedure that will call the Transact-SQL statements executed when an SSIS Package is executed in the SSIS Catalog. You can see that script in Listing 2-2.

Listing 2-2. Script to Build a Wrapper Stored Procedure for Executing SSIS Packages in the SSIS Catalog

 /* Select the SSISDB database */
Use SSISDB
Go

 /* Create a parameter (variable) named @Sql */
Declare @Sql varchar(2000)

 /* Create the Custom schema if it does not already exist */
print 'Custom Schema'
If Not Exists(Select name 
              From sys.schemas 
                Where name = 'custom')
 begin
   /* Create Schema statements must occur first in a batch */
  print ' - Creating custom schema'
  Set @Sql = 'Create Schema custom'
  Exec(@Sql)
  print ' - Custom schema created'
 end
Else
 print ' - Custom Schema already exists.'
print ''

 /* Drop the Custom.execute_catalog_package Stored Procedure if it already exists */
print 'Custom.execute_catalog_package Stored Procedure'
  If Exists(Select s.name + '.' +  p.name
            From sys.procedures p
            Join sys.schemas s
                On s.schema_id = p.schema_id
         Where s.name = 'custom'
           And p.name = 'execute_catalog_package')
   begin
    print ' - Dropping custom.execute_catalog_package'
    Drop Procedure custom.execute_catalog_package
    print ' - Custom.execute_catalog_package dropped'
   end

   /* Create the Custom.execute_catalog_package Stored Procedure */
  print ' - Creating custom.execute_catalog_package'
go

/*

     Stored Procedure: custom.execute_catalog_package
     Author: Andy Leonard
     Date: 4 Mar 2012
     Description: Creates a wrapper around the SSISDB Catalog procedures
                  used to start executing an SSIS Package. Packages in the
                SSIS Catalog are referenced by a multi-part identifier
                 - or path - that consists of the following hierarchy:
        Catalog Name: Implied by the database name in Integration Server 2014
        |-Folder Name: A folder created before or at Deployment to contain the SSIS project
        |-Project Name: The name of the SSIS Project deployed
        |-Package Name: The name(s) of the SSIS Package(s) deployed

        Parameters:
        @FolderName [nvarchar(128)] {No default} – 
         contains the name of the Folder that holds the SSIS Project
        @ProjectName [nvarchar(128)] {No default} – 
         contains the name of the SSIS Project that holds the SSIS Package
        @PackageName [nvarchar(260)] {No default} – 
         contains the name of the SSIS Package to be executed
        @ExecutionID [bigint] {Output} – 
         Output parameter (variable) passed back to the caller
        @LoggingLevel [varchar(16)] {Default} – 
         contains the (case-insensitive) name of the logging level
         to apply to this execution instance
        @Use32BitRunTime [bit] {Default} – 
         1 == Use 32-bit run-time
         0 == Use 64-bit run-time
        @ReferenceID [bigint] {Default} – 
         contains a reference to an Execution Environment
        @ObjectType [smallint] – 
         contains an identifier that appears to be related to the
         SSIS PackageType property 
        Guessing: @ObjectType == PackageType.ordinal (1-based-array) * 10
         Must be 20, 30, or 50 for catalog.set_execution_parameter_value
         stored procedure

        Test: 
        1. Create and deploy an SSIS Package to the SSIS Catalog.
        2. Exec custom.execute_catalog_package and pass it the 
          following parameters: @FolderName, @ProjectName, @PackageName, @ExecutionID Output 
        @LoggingLevel, @Use32BitRunTime, @ReferenceID, and @ObjectType are optional and 
        defaulted parameters.

         Example:
           Declare @ExecId bigint
           Exec custom.execute_catalog_package
         'Chapter2'
        ,'Chapter2'
        ,'Chapter2.dtsx'
        ,@ExecId Output
        3. When execution completes, an Execution_Id value should be returned.
        View the SSIS Catalog Reports to determine the status of the execution 
        instance and the test.

*/
Create Procedure custom.execute_catalog_package
  @FolderName nvarchar(128)
 ,@ProjectName nvarchar(128)
 ,@PackageName nvarchar(260)
 ,@ExecutionID bigint Output
 ,@LoggingLevel varchar(16) = 'Basic'
 ,@Use32BitRunTime bit = 0
 ,@ReferenceID bigint = NULL
 ,@ObjectType smallint = 50
As

 begin
  
  Set NoCount ON
  
   /* Call the catalog.create_execution stored procedure
      to initialize execution location and parameters */
  Exec catalog.create_execution
   @package_name = @PackageName
  ,@execution_id = @ExecutionID Output
  ,@folder_name = @FolderName
  ,@project_name = @ProjectName
  ,@use32bitruntime = @Use32BitRunTime
  ,@reference_id = @ReferenceID

   /* Populate the @ExecutionID parameter for OUTPUT */
  Select @ExecutionID As Execution_Id

   /* Create a parameter (variable) named @logging_level */
  Declare @logging_level smallint
   /* Decode the Logging Level */
  Select @logging_level = Case 
                           When Upper(@LoggingLevel) = 'BASIC'
                           Then 1
                           When Upper(@LoggingLevel) = 'PERFORMANCE'
                           Then 2
                            When Upper(@LoggingLevel) = 'VERBOSE'
                           Then 3
                           Else 0 /* 'None' */
                          End 
   /* Call the catalog.set_execution_parameter_value stored
      procedure to update the LOGGING_LEVEL parameter */
  Exec catalog.set_execution_parameter_value
    @ExecutionID
   ,@object_type = @ObjectType
   ,@parameter_name = N'LOGGING_LEVEL'
   ,@parameter_value = @logging_level

   /* Call the catalog.start_execution (self-explanatory) */
  Exec catalog.start_execution
    @ExecutionID

 end

GO

If you execute this script to create the custom schema and stored procedure in your instance of the SSISDB database, you can test it using the statement in Listing 2-3.

Listing 2-3. Testing the SSISDB.custom.execute_catalog_package Stored Procedure

Declare @ExecId bigint
Exec SSISDB.custom.execute_catalog_package 'Chapter2','Chapter2','Chapter2.dtsx',
@ExecId Output

Conclusion

This custom.execute_catalog_package stored procedure can be used to execute an SSIS package from any folder and project in the SSIS Catalog.

This brief introduction and basic example have demonstrated how to get started executing SSIS packages in the SSIS Catalog from Transact-SQL. As you can see, the SSIS Catalog stored procedures give you automation capabilities you may not have been aware of.

If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Knowing Which Database is Consuming My Memory

I have been fortunate enough to attend conferences around the world, and it is always refreshing to see people come up with really common yet never-answered questions. The classic question I have been asked ever since I started working with databases and SQL Server is – why does SQL Server take all the memory and not return it? Even more interesting is the question – can I know how much memory my databases are using?

I always tell them that memory is a big topic and we need to use a number of commands like DBCC MEMORYSTATUS to understand its internal workings. A much more interesting way is to find out which pages are in the buffer pool for our various databases. This can be done using DMVs as shown below:

--List the number of pages in the buffer pool by database and page type
SELECT DB_NAME(database_id) AS database_name,
page_type,
COUNT(page_id) AS number_pages
FROM sys.dm_os_buffer_descriptors
WHERE database_id != 32767 -- exclude the Resource database
GROUP BY database_id, page_type
ORDER BY number_pages DESC
GO
--List the number of pages in the buffer pool by database
SELECT DB_NAME(database_id) AS database_name,
COUNT(page_id) AS number_pages
FROM sys.dm_os_buffer_descriptors
WHERE database_id != 32767 -- exclude the Resource database
GROUP BY database_id
ORDER BY database_id
GO


In the above output, we can see the number of data pages and index pages that are loaded into SQL Server memory for each database.
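Because each page in the buffer pool is 8 KB, the same DMV can also report the footprint per database in megabytes, which answers the title question directly. A quick sketch:

--Buffer pool usage in MB per database (each page is 8 KB)
SELECT DB_NAME(database_id) AS database_name,
COUNT(page_id) * 8 / 1024 AS buffer_pool_mb
FROM sys.dm_os_buffer_descriptors
WHERE database_id != 32767 -- exclude the Resource database
GROUP BY database_id
ORDER BY buffer_pool_mb DESC
GO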

A small variation of the above query can be to scan the buffer pool based on the type of pages that are loaded into memory. Below is a typical query fired against the same DMV.

--List the number of pages in the buffer pool by page type
SELECT page_type, COUNT(page_id) AS number_pages
FROM sys.dm_os_buffer_descriptors
GROUP BY page_type
ORDER BY number_pages DESC
GO
--List the number of dirty pages in the buffer pool
SELECT COUNT(page_id) AS number_pages
FROM sys.dm_os_buffer_descriptors
WHERE is_modified = 1
GO


In the above query, I have also shown the dirty pages that are in memory and are yet to be flushed out.

This DMV is super useful when you have a number of databases running on your server and want to find out who is consuming the server memory. Do let me know your thoughts and what output you are seeing in your environment. Is there anything strange that you find? Let me know via your comments.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Unable to Start SQL – Error: SQL Server Could Not Spawn Lazy Writer Thread

I have a tendency to try out various settings on a typical SQL Server just to see how the behavior changes from time to time. While playing with affinity settings in SQL Server, I shot myself in the foot. It is interesting to retrace how I came out of the situation. I was not able to reproduce this error on the Enterprise edition, though.

Here is what I found in Errorlog:

2015-07-28 17:12:11.31 Server    Processor affinity turned on: node 0, processor mask 0x00000006. Threads will execute on CPUs per affinity settings.

2015-07-28 17:12:11.31 Server    I/O affinity turned on, processor mask 0x00000001. Disk I/Os will execute on CPUs per affinity I/O mask/affinity64 mask config option.

2015-07-28 17:12:17.44 Server    Node configuration: node 0: CPU mask: 0x00000006:0 Active CPU mask: 0x00000006:0.

2015-07-28 17:12:17.45 Server    Error: 17120, Severity: 16, State: 1.

2015-07-28 17:12:17.45 Server    SQL Server could not spawn lazy writer thread. Check the SQL Server error log and the Windows event logs for information about possible related problems.
If we convert the processor mask values above to binary, we can decode the affinity which I have set.

0x00000006 (in hex) = 0110 (in binary) in Processor affinity
0x00000001 (in hex) = 0001 (in binary) in I/O affinity

You can treat the 1s and 0s as checkboxes for processors. The picture below should make it clearer.


As you can see, there is no overlap, so I don’t see any problem, but SQL Server still didn’t start. I checked various online resources; a few articles suggested repairing the instance, which will not work in this situation. So finally, I sat down calmly and recalled that there is an option to start SQL with minimal configuration. I had used it earlier with one of my clients for a SQL startup issue caused by tempdb misconfiguration.

Here are the steps which worked:

  1. Start SQL in minimal configuration using the /f parameter.

NET START MSSQLSERVER /f

If you have a named instance called Inst1, then you need to use the below:
NET START MSSQL$INST1 /f

  2. Connect to SQL using any client tool (SQLCMD or SSMS)
  3. Change the affinity back to default.

T-SQL

ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = AUTO;
GO
EXEC sys.sp_configure N'show advanced options', N'1';
GO
RECONFIGURE WITH OVERRIDE;
GO
EXEC sys.sp_configure N'affinity I/O mask', N'0';
GO
RECONFIGURE;
GO
EXEC sys.sp_configure N'show advanced options', N'0';
GO
RECONFIGURE WITH OVERRIDE;
GO

UI

affinity 02 SQL SERVER   Unable to Start SQL   Error: SQL Server Could Not Spawn Lazy Writer Thread

  4. Stop SQL Service

Default Instance – NET STOP MSSQLSERVER

Named Instance – NET STOP MSSQL$INST1

  5. Start it normally.
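Once SQL Server is back up, you can verify that the affinity-related options are back to their defaults. A quick sketch:

-- The affinity mask values should show 0 (default) after the fix
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name LIKE 'affinity%'
GO
-- User schedulers should show as VISIBLE ONLINE once affinity is back to default
SELECT scheduler_id, cpu_id, status, is_online
FROM sys.dm_os_schedulers
GO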

In some situations, you might run into an issue where you get this error while following the above steps:

Login failed for user ‘LoginName’. Reason: Server is in single user mode. Only one administrator can connect at this time. (Microsoft SQL Server, Error: 18461)

You can refer to my friend Balmukund’s blog to learn how to make a connection in single user mode via the startup parameter “m”:

Help: How to fix error – Reason: Server is in single user mode. Only one administrator can connect at this time

Once you are able to connect using SQLCMD, you need to use T-SQL to fix the affinity value. I hope this blog helps you solve issues like the one I discovered through my experiments. Do let me know.

Reference: Pinal Dave (http://blog.sqlauthority.com)