SQL SERVER – FIX: SQLServerAgent is not currently running so it cannot be notified of this action. (Microsoft SQL Server, Error: 22022)

Getting an error and finding its cause is something I enjoy. Of late, I have shared on this blog a number of such incidents that helped me figure things out all by myself. Of course, I do my research, and more often than not I complicate matters because I find a number of possible resolutions. I also make it a point to check with a few friends from Microsoft about a possible cause, because it is easier to get help than to walk the tough line of finding it myself. One fine day I restarted my virtual machine and wanted to take a full backup of all databases before formatting the machine. As soon as I hit the execute button, as shown below, I was welcomed with an error message.

[Image: Agent Not Running 01]

Here is the error which I received.

[Image: Agent Not Running 02]

Complete text of the error message is below:

TITLE: Execute Maintenance Plan
——————————
Execution failed. See the maintenance plan and SQL Server Agent job history logs for details.
——————————
ADDITIONAL INFORMATION:
An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
——————————
SQLServerAgent is not currently running so it cannot be notified of this action. (Microsoft SQL Server, Error: 22022)
For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft%20SQL%20Server&ProdVer=12.00.2480&EvtSrc=MSSQLServer&EvtID=22022&LinkId=20476
——————————
BUTTONS:
OK
——————————

The error message is very clear: the SQL Server Agent service should be running before we can execute a maintenance plan. If we look at the first screenshot, it's clear that SQL Server Agent was not running. So the solution looked very simple: start the SQL Server Agent service. It had not started on its own because it was set to Manual mode.

[Image: Agent Not Running 03]

To avoid such errors in the future, I changed the "Start Mode" of the service to Automatic by going to the properties of the service.

[Image: Agent Not Running 04]
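If you prefer to check the same thing from T-SQL instead of the Services console, here is a minimal sketch using the sys.dm_server_services DMV (available from SQL Server 2008 R2 SP1 onwards); the LIKE filter assumes the default Agent service name, so adjust it for a named instance.

-- Check the Agent service state and start mode from T-SQL
SELECT servicename, status_desc, startup_type_desc
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server Agent%';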

I also found that sometimes you would get this error even if SQL Server Agent is running. In those cases, you need to check the SQLAgent.out file to see whether the service has started completely.

Sometimes SQLAgent.out would show ONLY the below message:

Waiting for SQL Server to recover databases…

If you are running into that issue, please have a look at https://support.microsoft.com/en-us/kb/2640027 to get a fix for the issue.

Have you ever fixed such simple issues? Error messages have become more obvious and helpful over the years.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Are You Suffering from Unknown SSAS Performance Challenges? – Notes from the Field #109

[Note from Pinal]: This is a new episode of the Notes from the Field series. We build our business applications with zeal and enthusiasm. We believe they will do well for many years, but that is not always the case. After a while performance starts to go down and everything is not in the best shape. We often suffer from unknown SSAS performance issues in our applications that we are not even aware of.

[Photo: Bill Anton]

In this episode of the Notes from the Field series I asked SQL Expert Bill Anton a very crucial question – How do we know if our application is suffering from unknown SSAS performance issues? Bill was very kind to answer the question and provided plenty of information about how a novice developer can work with SSAS.


Ever wondered what’s actually happening on your Analysis Services server? If so, then you’re certainly not alone. Many DBAs think of Analysis Services as this black box implemented by a crazy group of Business Intelligence developers and then left behind once the developers move on to the next project. Down the road, when a performance issue arises, whether it’s a new report taking a long time to load or an issue with the nightly processing job, it’s up to the DBA to try and resolve the issue. There’s just one problem – the DBA hasn’t had the training necessary to know where to start looking.

As a BI consultant I see this situation all the time. So in this post, I’m going to show you how to take the first step in troubleshooting Analysis Services performance issues – which is to run an extended events trace on your Analysis Services server to see what’s actually happening.

Running an Extended Events Trace

Extended Events (xEvents) is an event-handling system providing insight into the behavioral and performance characteristics of an Analysis Services instance. Even though it was first introduced for the database engine in SQL Server 2008, it didn't make its way into Analysis Services until SQL Server 2012. However, it is the successor to SQL Profiler (which has been deprecated), so unless you're still working with SSAS 2008 R2 (or below), it's time to get comfortable with xEvents.

Because there’s no GUI with xEvents for Analysis Services (definitely one of the major deterrents for newcomers) you’ll have to use XMLA commands to start and stop the trace.

Note: to save you time, all scripts shown in the examples below can be downloaded from the link at the bottom of this post.

To start the trace, you can simply open SSMS, connect to the Analysis Services instance, open an Analysis Services XMLA query window (circled in the screenshot below), paste in the command, and hit execute.

[Image: 109 1]
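Since the screenshot is not reproduced here, below is a minimal sketch of what such a start-trace command typically looks like, based on the standard SSAS Extended Events XMLA template. The trace name, the events captured, and the output file name are illustrative assumptions; use the downloadable scripts from the link at the bottom of this post for the real thing.

<!-- Sketch: create an xEvents trace on the SSAS instance (names and events are illustrative) -->
<Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine"
        xmlns:ddl300_300="http://schemas.microsoft.com/analysisservices/2011/engine/300/300">
  <ObjectDefinition>
    <Trace>
      <ID>SQLAuthority_xEvents</ID>
      <Name>SQLAuthority_xEvents</Name>
      <ddl300_300:XEvent>
        <event_session name="SQLAuthority_xEvents" dispatchLatency="1" maxEventSize="4"
                       maxMemory="4" memoryPartitionMode="none"
                       eventRetentionMode="allowSingleEventLoss" trackCausality="true">
          <event package="AS" name="QueryBegin" />
          <event package="AS" name="QueryEnd" />
          <event package="AS" name="ProgressReportEnd" />
          <target package="Package0" name="event_file">
            <parameter name="filename" value="SQLAuthority_xEvents.xel" />
          </target>
        </event_session>
      </ddl300_300:XEvent>
    </Trace>
  </ObjectDefinition>
</Create>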

When you’re ready to stop the trace, you can execute the following XMLA command…

[Image: 109 2]
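Again as a sketch of the idea, the stop command is a Delete that references the trace ID used when the trace was created (SQLAuthority_xEvents in the illustrative example above):

<!-- Sketch: stop (delete) the xEvents trace created earlier -->
<Delete xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <TraceID>SQLAuthority_xEvents</TraceID>
  </Object>
</Delete>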

Once the trace is stopped you can find the output file in the default Analysis Services log directory…

[Image: 109 3]

Note: if you want the output files to be written out to another directory then you will need to specify the full filepath for the filename parameter in the XMLA command to start the trace. You’ll also need to make sure that the Analysis Services service account has the correct permissions for the directory you specify.

Now that we have the trace output, let’s focus on how to extract information from the trace file.

Analyzing an Extended Events Trace

The trace we ran in the previous section contains tons of useful information.

Query-Related

For queries we can see every query executed on the system including the total duration, query text, start/end time, as well as the name of the user who ran the query. It also contains information about the execution of the query (which may be more than you care to know) such as the amount of time spent in the Formula Engine vs the Storage Engine, number of partition scans, number of aggregation hits, number of cache hits – all of which can be helpful when troubleshooting slow query performance.

Processing-Related

If the trace is running while the SSAS database is being processed we can see total processing duration as well as the processing duration by major object (e.g. cube, measure group, partition, aggregation, dimension, etc) – which can be very helpful when determining where to focus your effort as the cube grows and the time it takes to process begins to approach the end of the processing window.

However, before we can start tapping into all of this wonderful information, we first need to load the data from the trace file into a SQL Server table.

The most common way to do this is by using the sys.fn_xe_file_target_read_file function. The statement below uses this function to read in the trace file, shred the (XML) contents, and write the flattened results into a temp table named #xevent_raw_trace_data.

[Image: 109 4]
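Since the screenshot is not reproduced here, the sketch below shows the general shape of that statement (it is not the exact script from the download link). The file path, and the data element names such as NTUserName, ApplicationName, Duration, and TextData (which mirror the classic SSAS trace columns), are assumptions you may need to adjust to your log directory and to the events your trace actually captured.

-- Sketch: read the .xel file(s), shred the XML, and land the flattened rows in a temp table
IF OBJECT_ID('tempdb..#xevent_raw_trace_data') IS NOT NULL
    DROP TABLE #xevent_raw_trace_data;
SELECT
    xe.object_name AS event_name,
    x.event_data.value('(/event/@timestamp)[1]', 'datetime2') AS event_timestamp,
    x.event_data.value('(/event/data[@name="NTUserName"]/value)[1]', 'nvarchar(256)') AS user_name,
    x.event_data.value('(/event/data[@name="ApplicationName"]/value)[1]', 'nvarchar(256)') AS application_name,
    x.event_data.value('(/event/data[@name="Duration"]/value)[1]', 'bigint') AS duration_ms,
    x.event_data.value('(/event/data[@name="TextData"]/value)[1]', 'nvarchar(max)') AS query_text
INTO #xevent_raw_trace_data
FROM sys.fn_xe_file_target_read_file(
        'C:\Program Files\Microsoft SQL Server\MSAS12.MSSQLSERVER\OLAP\Log\SQLAuthority_xEvents*.xel',
        NULL, NULL, NULL) AS xe
CROSS APPLY (SELECT CAST(xe.event_data AS XML) AS event_data) AS x;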

With the data in a table, we’re ready to start extracting useful information.

Below are a few examples to help get you started. They provide some of the more common types of information you’ll want to know about the query activity happening on the server.

Example 1 – number of queries and average query duration by application and user

This query is useful to see who your heaviest users are, which application they’re using most, and what the user experience is like in terms of query performance. For example, if you see a user with a high query count and a high average query duration (e.g. > 30 seconds), it’s probably worth checking up on them to see what they’re trying to do and if there’s a better way for them to do it.

[Image: 109 5]
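In place of the screenshot, here is a rough sketch of such a query, assuming the #xevent_raw_trace_data table shredded earlier (the column names are the assumptions noted above):

-- Sketch: query count and average duration by application and user
SELECT application_name,
       user_name,
       COUNT(*) AS query_count,
       AVG(duration_ms) / 1000.0 AS avg_duration_sec
FROM #xevent_raw_trace_data
WHERE event_name = 'QueryEnd'
GROUP BY application_name, user_name
ORDER BY query_count DESC;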

Example 2 – queries that took longer than 10 seconds to complete

This query will show you the worst performing MDX and/or DAX queries generated by your users. It is one of my favorites and I recommend clients review the results on a regular basis. Typically, what you'll see is that a small handful of users (or applications) make up the bulk of the slow queries. Those are the users/applications where you'll want to focus your time and energy in order to have the biggest impact on overall performance and user experience.

[Image: 109 6]
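Again as a sketch against the same assumed #xevent_raw_trace_data table (the SSAS Duration column is reported in milliseconds):

-- Sketch: queries that took longer than 10 seconds
SELECT event_timestamp,
       user_name,
       application_name,
       duration_ms / 1000.0 AS duration_sec,
       query_text
FROM #xevent_raw_trace_data
WHERE event_name = 'QueryEnd'
  AND duration_ms > 10000
ORDER BY duration_ms DESC;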

Example 3 – query execution details

This one starts to get a bit more complicated. You might use this query after you’ve isolated a slow MDX/DAX query (from the previous example) and you want to start digging deeper.

[Image: 109 7]
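The screenshot is not reproduced here, but the general idea can be sketched as follows: once a slow query has been isolated, pull every event the trace captured around the time it ran. Storage-engine events such as QuerySubcubeVerbose, ProgressReportEnd, GetDataFromAggregation, or GetDataFromCache will only appear if they were included in the trace definition, so treat the event list and the time window below as placeholders.

-- Sketch: all captured events in a window around the slow query
SELECT event_timestamp,
       event_name,
       duration_ms,
       query_text
FROM #xevent_raw_trace_data
WHERE event_timestamp BETWEEN '2015-12-01 09:00' AND '2015-12-01 09:05'
ORDER BY event_timestamp;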

Conclusion

Extended events traces are the key to unlocking the mystery of what's going on inside your Analysis Services server. They are the successor to SQL Profiler and should be part of any decent Analysis Services performance monitoring solution.

It might not be pretty (few things are when it comes to XML), but like it or not, extended events are here to stay. Fortunately, it looks like Microsoft will be adding an extended events GUI in Analysis Services 2016.

Here’s the download link to the XMLA scripts and SQL queries shown in this post to help get you started.

If you want to get started on performance monitoring with SSAS with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER 2016 – New T-SQL Functions – COMPRESS and DECOMPRESS

With every release of SQL Server, Microsoft has been adding enhancements to the product. Earlier, I have written the below blogs talking about new features/enhancements.

SQL Server – 2016 – New Feature: Dynamic Data Masking

SQL Server – 2016 – T-SQL Enhancement “Drop if Exists” clause

SQL SERVER 2016 – Comparing Execution Plans

SQL SERVER – 2016 – Opening JSON with OPENJSON()

In this blog, I will explore the new string functions COMPRESS and DECOMPRESS, which are available in SQL Server 2016 (CTP 3.1 onwards).

If you try this in an earlier version, then you would be welcomed with an error message.

Msg 195, Level 15, State 10, Line <>
‘COMPRESS’ is not a recognized built-in function name.
Msg 195, Level 15, State 10, Line <>
‘DECOMPRESS’ is not a recognized built-in function name.

Here is the version where it works. Any build of 13.0.801 or higher should work.

Microsoft SQL Server 2016 (CTP3.1) – 13.0.801.12 (X64)
Dec  1 2015 15:41:43
Copyright (c) Microsoft Corporation
Enterprise Evaluation Edition (64-bit) on Windows Server 2012 R2 Datacenter 6.3 <X64> (Build 9600: ) (Hypervisor)

Now, let's understand the usage of the COMPRESS and DECOMPRESS functions. The compression techniques available in earlier versions of SQL Server were page-level and row-level compression. With the new functions, we can specify a string which needs to be compressed and insert it directly. We need to remember that the COMPRESS function returns its output as a byte array of VARBINARY(MAX) type. The algorithm used by these functions is GZip (https://en.wikipedia.org/wiki/Gzip). So, an application can compress data using the standard gzip algorithm and send it to SQL Server, or select compressed data from SQL Server and decompress it in the application.

Here is a quick example.

SET NOCOUNT ON
GO
USE tempdb
GO
DROP TABLE IF EXISTS Team_SQLAuthority;
GO
CREATE TABLE Team_SQLAuthority (
 id INT PRIMARY KEY IDENTITY,
 name NVARCHAR(MAX),
 surname NVARCHAR(MAX),
 info VARBINARY(MAX)
);
GO
INSERT INTO Team_SQLAuthority (name, surname, info)
VALUES ('Pinal', 'Dave', COMPRESS('I love SQL Server'));
GO
SELECT id,
 info AS 'COMPRESSED-ed',
 CAST(DECOMPRESS(info) AS VARCHAR(MAX)) AS 'DECOMPRESS-ed'
FROM Team_SQLAuthority;
GO

[Image: compress 01]

In the above example, we are inserting compressed data into SQL Server. We can also select normal data, compress it on the fly via a SELECT statement, and later the client or application can decompress it using the standard gzip algorithm. This would reduce network usage.
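For example, here is a quick sketch of that idea against the table created above; the client receives a gzip-compressed VARBINARY(MAX) payload for the name column and decompresses it on its side.

-- Compress a column on the fly; the client decompresses it with standard gzip
SELECT id,
       COMPRESS(name) AS name_gzip
FROM Team_SQLAuthority;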

The amount of compression depends on the type of data. If we have XML or JSON data, it might compress more.

DECLARE @STR1 VARCHAR(MAX)
DECLARE @STR2 VARCHAR(MAX)
SELECT @STR1 = 'I LOVE SQL SERVER'
SELECT @STR2 = 'I LOVE SQL SERVER AND ALL OTHER DATABASE PRODUCTS. LET US ADD MORE DATA TO SHOW THAT MORE
LENGTH OF THE STRING CAN SHOW BETTER COMPRESSION BECAUSE THERE IS AN OVERHEAD DUE TO COMPRESSION ITSELF'
SELECT DATALENGTH(@STR1) 'Original-1',
DATALENGTH(COMPRESS(@STR1)) 'Compressed-1',
DATALENGTH(@STR2) 'Original-2',
DATALENGTH(COMPRESS(@STR2)) 'Compressed-2'

Here is the output.

[Image: compress 02]

As we can see, the longer string achieves better compression relative to its size.

Do you think this feature would help you?

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Taking Backup Without Consuming Disk Space

Long back, I learned this trick and found an interesting use for it. I totally understand that it's very dangerous. Recently I saw someone getting bitten by it, so I thought of sharing.

Warning: Don’t do this in any production environment

The trick here is to take the backup on the NUL device. It's not a typo and there is no missing L in NUL. It is called the "null device". As per Wikipedia –

The null device is a device file that discards all data written to it, but reports that the write operation succeeded. It is represented by NUL: or NUL on DOS.

Taking a backup to the NUL device is as good as taking a backup and deleting it. Since the backup is taken to a device, SQL Server sends it to the operating system the way any backup would be sent, and because it is the NUL device, the operating system discards it and tells SQL Server that the data was written successfully. You can think of it as writing directly to the recycle bin and emptying the bin immediately.

Coming to its innovative use: I have seen a few DBAs schedule a job to take transaction log backups to the NUL device using the below command.

BACKUP LOG ProductionDB TO DISK = 'NUL'

When I asked one of them why they were doing it, I got a very interesting answer.

Before moving to SQL 2012 AlwaysOn Availability Groups, I had a database which has always been in “simple” recovery model.  To put it in an AG, it must be in “full” recovery model, and I know the reason also.  So due to full recovery mode, I am forced to take a transaction log backup and burn up disk space with transaction log backups. In reality, I don’t need them and I am OK with last full backup, which I take once daily. I need secondary replica for reporting purposes only. That’s the reason I am taking LOG backup to NUL device. On a lighter note, I believe that Microsoft should have another recovery model – for example, “Simple-AvailabilityGroup” – which would function the same as “simple” recovery model and discard log records after they have been applied to all secondary replicas. What do you think Pinal?

Here is the message in the ERRORLOG when the backup is taken to NUL. Notice that the disk path is 'nul'.

Database backed up. Database: master, creation date(time): 2015/10/15(05:53:51), pages dumped: 909, first LSN: 6795:16:84, last LSN: 6795:72:1, number of dump devices: 1, device information: (FILE=1, TYPE=DISK: {‘nul’}). This is an informational message only. No user action is required.
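The backup history in msdb records the same device name, so here is a hedged sketch to spot such "backups" after the fact (the exact casing of the stored device name may vary):

-- Find backup sets whose destination was the NUL device
SELECT bs.database_name, bs.type, bs.backup_finish_date, bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
INNER JOIN msdb.dbo.backupmediafamily AS bmf
    ON bs.media_set_id = bmf.media_set_id
WHERE bmf.physical_device_name IN ('NUL', 'nul', 'NUL:');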

I hope it's clear that this command would disturb your disaster recovery plan: you might assume that backups are being taken because they show up in the SQL Server logs, but they are not present anywhere. Please be very careful in using this command.

Reference : Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Steps to Backup to Windows Azure storage

Moving to a cloud-based world is inevitable; it is something we need to learn soon. Ever since SQL Server 2014 was released, the concept of uploading your backups to blobs in Windows Azure has been around. In reality, performing backup or restore operations with SQL Server 2012 SP1 CU4 and later requires no additional tools, and it can be done with either T-SQL or SSMS. This blog describes how to perform backup operations with T-SQL. This will be part of a series of blogs to come in the future. Let me walk through the initial steps.

Creating Credentials

To perform the backup and restore procedures on your local SQL Server you will need to create a SQL credential using the Windows Azure Storage Account configuration. The following steps will create the necessary credential:

  1. Connect to SQL Server Management Studio.
  2. On the Standard toolbar, click New Query.
  3. Copy and paste the following example into the query window, modifying as needed.

CREATE CREDENTIAL mycredential
WITH IDENTITY= 'mystorageaccount' --this is the name of the storage account you specified when creating a storage account
, SECRET = '<storage account access key>' -- this should be either the Primary or Secondary Access Key for the storage account.

Steps to backup to Windows Azure storage

The following steps describe how to perform a backup of a database to the Windows Azure storage service. The database can be an on-premises database, or located in a Windows Azure Virtual Machine. The key requirement for this demo variation is that the database must be accessible from SQL Server Management Studio:

If you don't have your own database that you plan to use for this tutorial, install AdventureWorks from: http://msftdbprodsamples.codeplex.com/

  1. Connect to SQL Server Management Studio.
  2. In Object Explorer, connect to the instance of the Database Engine that hosts the database you plan to back up.
  3. On the Standard menu bar, select New Query.
  4. Copy and paste the following example into the query window, modify as needed, and click Execute.

BACKUP DATABASE [AdventureWorks2014]
TO URL = 'https://mystorageaccount.blob.core.windows.net/privatecontainers/AdventureWorks2014.bak'
/* URL includes the endpoint for the BLOB service, followed by the container name, and the name of the backup file*/
WITH CREDENTIAL = 'mycredential';
/* name of the credential you created in the previous step */
GO

As you can see, the steps to back up to a URL are simple, as described above. The prerequisite is to have a storage account ready in Azure before you do the same.

Do let me know if you have ever taken a backup to Azure. Please let me know if you want me to write on this topic in the future.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Identify Time Between Backups Calculation

As part of my script archives, I stumbled upon a script that I had written a long time back and thought it must get to the blog. This script was inspired by a simple question asked by a DBA when I visited them for a session. The question was to identify the time taken between backups. I immediately asked why they wanted it.

The DBA told me a story: they were taking backups to a tape drive. One day their latest backup turned out to be corrupt, so they resorted to an older backup taken before that. They wanted to know the amount of data loss that might potentially happen because of this rotation.

I personally felt this was a simple requirement that needed to be addressed in some way. I made a rudimentary script to attack this requirement, as shown below:

CREATE TABLE #backupset (backup_set_id INT, database_name NVARCHAR(128), backup_finish_date DATETIME, TYPE CHAR(1), next_backup_finish_date DATETIME);
INSERT INTO #backupset (backup_set_id, database_name, backup_finish_date, TYPE)
SELECT backup_set_id, database_name, backup_finish_date, TYPE
FROM
msdb.dbo.backupset WITH (NOLOCK)
WHERE backup_finish_date >= DATEADD(dd, -14, GETDATE())
AND
database_name NOT IN ('master', 'model', 'msdb', 'ReportServer', 'ReportServerTempDB');
CREATE CLUSTERED INDEX CL_database_name_backup_finish_date ON #backupset (database_name, backup_finish_date);
UPDATE #backupset
SET next_backup_finish_date = (SELECT TOP 1 backup_finish_date FROM #backupset bsNext WHERE bs.database_name = bsNext.database_name AND bs.backup_finish_date < bsNext.backup_finish_date ORDER BY bsNext.backup_finish_date)
FROM #backupset bs;
SELECT bs1.database_name, MAX(DATEDIFF(mi, bs1.backup_finish_date, bs1.next_backup_finish_date)) AS max_minutes_of_data_loss,
'SELECT bs.database_name, bs.type, bs.backup_start_date, bs.backup_finish_date, DATEDIFF(mi, COALESCE((SELECT TOP 1 bsPrior.backup_finish_date FROM msdb.dbo.backupset bsPrior WHERE bs.database_name = bsPrior.database_name AND bs.backup_finish_date > bsPrior.backup_finish_date ORDER BY bsPrior.backup_finish_date DESC), ''1900/1/1''), bs.backup_finish_date) AS minutes_since_last_backup, DATEDIFF(mi, bs.backup_start_date, bs.backup_finish_date) AS backup_duration_minutes, CASE DATEDIFF(ss, bs.backup_start_date, bs.backup_finish_date) WHEN 0 THEN 0 ELSE CAST(( bs.backup_size / ( DATEDIFF(ss, bs.backup_start_date, bs.backup_finish_date) ) / 1048576 ) AS INT) END AS throughput_mb_sec FROM msdb.dbo.backupset bs WHERE database_name = ''' + database_name + ''' AND bs.backup_start_date > DATEADD(dd, -14, GETDATE()) ORDER BY bs.backup_start_date' AS more_info_query
FROM #backupset bs1
GROUP BY bs1.database_name
ORDER BY bs1.database_name
DROP TABLE #backupset;
GO

[Image: backupset 01]

For each of your databases, it lists the maximum amount of time between backups over the last two weeks, which is the maximum potential data loss. You can adapt this query to your requirements. In the past I have written a few queries with variations of these concepts which are worth a note; here they are for reference:

Get Database Backup History for a Single Database

Finding Last Backup Time for All Database – Last Full, Differential and Log Backup – Optimized

If you change the query, please let me know via comments so that it will help others using the script too.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Configure the Backup Compression Default Server Configuration Option

When I get a chance to talk to Enterprise customers working with a really smart DBA team, I generally look out for options they use on a daily basis that are often missed in the bigger scheme of things. One such feature of SQL Server Enterprise Edition is the ability to do backup compression. It is really a powerful feature, and the fine print is that it is disabled by default. In this blog we will look at how to view or configure the backup compression default server configuration option in SQL Server by using SQL Server Management Studio or Transact-SQL. The backup compression default option determines whether the server instance creates compressed backups by default. When SQL Server is installed, the backup compression default option is off.

Changing the setting using SSMS

To configure this setting, use the following steps.

  1. In Object Explorer, right-click a server and select Properties.
  2. Click the Database settings node.
  3. Under Backup and restore, Compress backup shows the current setting of the backup compression default option. This setting determines the server-level default for compressing backups, as follows:
    • If the Compress backup box is blank, new backups are uncompressed by default.
    • If the Compress backup box is checked, new backups are compressed by default.

[Image: compression setting default 01]

If you are a member of the sysadmin or serveradmin fixed server role, you can also change the default setting by clicking the Compress backup box.

TSQL – Steps to configure backup compression default option

The T-SQL commands to achieve the same are described below:

  1. Connect to the Database Engine.
  2. From the Standard bar, click New Query.
  3. Copy and paste the following example into the query window and click Execute. This example shows how to use sp_configure to configure the server instance to create compressed backups by default.

USE MASTER;
GO
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE WITH OVERRIDE;
GO

If you want to find out whether backup compression is enabled by default, you can use the following T-SQL command:

SELECT value
FROM sys.configurations
WHERE name = 'backup compression default';
GO

This example queries the sys.configurations catalog view to determine the value for backup compression default. A value of 0 means that backup compression is off, and a value of 1 means that backup compression is enabled.
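Keep in mind that this option only supplies the default; an individual BACKUP statement can always override it, as in the sketch below (the backup path is illustrative).

BACKUP DATABASE [AdventureWorks2014]
TO DISK = 'D:\Backups\AdventureWorks2014.bak' -- illustrative path
WITH COMPRESSION; -- or NO_COMPRESSION to override an enabled default
GO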

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Are Power Options Slowing You Down? – Notes from the Field #095

[Note from Pinal]: This is the 95th episode of the Notes from the Field series. When it is about tuning SQL Server, we always look at the configuration of SQL Server. However, there are a few operating system settings that can also impact the performance of SQL Server. I recently asked John Sterrett a very simple question – “Which OS option would you check first when you are tuning SQL Server?”

[Photo: John Sterrett]

In this episode of the Notes from the Field series database expert John Sterrett (Group Principal at Linchpin People) explains a very interesting story about how a simple setting of the OS impacts performance of SQL Server. Read the experience of John in his own words.


Doing several SQL Server health checks, I have noticed that operating system settings can slow you down. Today, we are going to focus on power options. By default, many companies have the balanced power plan as the configured option. You will see that it is also the recommended setting. This is recommended to save power, which reduces the amount of money required to power your servers. Please keep in mind that your virtualization software or BIOS might also have power option settings that should be verified and adjusted. Today, we are focusing on the Windows operating system.

[Image: notes 95]

The balanced power option is great for most servers where CPU power is not mission critical to the applications living on your server. SQL Server is an exception, and you will want to get as much as you can out of your CPU power. In fact, Enterprise Edition is now licensed by core, so you want to make sure you are getting the most out of your SQL Server licensing.

How Do We Script Out The Change?

While you can load the power options GUI and manually change the setting, I like to automate as much as possible. The following script can be executed via PowerShell or Command Prompt to make sure the High Performance power option is enabled.

Powercfg -SETACTIVE SCHEME_MIN
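To confirm the change took effect, you can list the currently active scheme; it should report the High Performance plan.

Powercfg -GETACTIVESCHEME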

Conclusion

SQL Server places a different set of demands on the operating system. OS default settings are not always optimal and should be reviewed.

Are your servers running at optimal speed or are you facing any SQL Server Performance Problems? If you want to get started with the help of experts read more over here: Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Performance Monitoring for Analysis Services – Notes from the Field #093

[Note from Pinal]: This is a new episode of the Notes from the Field series. When we build any application, we build it with zeal and enthusiasm. We believe it will do well for many years, but that is not always the case. After a while performance starts to go down and everything is not in the best shape. This is the time when we need to monitor performance and take action based on our analysis.

[Photo: Bill Anton]

In this episode of the Notes from the Field series I asked SQL Expert Bill Anton a very crucial question – How do we monitor the performance of Analysis Services? Bill was very kind to answer the question and provided plenty of information about how a novice developer can work with SSAS.


When it comes to ad-hoc query performance in business intelligence solutions, very few technologies rival a well-designed Analysis Services Multidimensional cube or Tabular model. And when that cube/tabular model is performing well, as it usually does in the beginning, life is good!

[Image: notes 93 1]

Photo Credit: SuperCar-RoadTrip.fr

Over time, however, things can change. The underlying business evolves (hopefully) and information workers start asking different questions, resulting in new query patterns. The business grows and now more users are asking more questions of larger data volumes. Performance starts to deteriorate. Queries are taking longer to complete and nightly processing starts to run past the end of the maintenance window into business hours. Users are complaining, management is unhappy, and life is no longer good.

[Image: notes 93 2]

Photo Credit: Charlie

How did we not see this coming?

If this describes your personal situation, don’t worry, you’re not alone. In fact the majority of clients who bring me in to troubleshoot and resolve their Analysis Services performance problems ask the same question (or some variation of it).

In my experience, 80% of the time the reason no one sees this type of issue coming is because there wasn't a performance monitoring solution in place. The other 20% who have a performance monitoring solution simply aren't using it or reviewing the information being collected.

I don’t know why so many Analysis Services environments are neglected (I’m not a Business or IT therapist) but I’m going to tell you what steps need to be taken if you want to avoid it.

The Secret is Simple

The first step to maintaining a well running Analysis Services instance is simple. Take measurements!

[Image: notes 93 3]

Photo Credit: Official U.S. Navy Page

Taking regular measurements is the only way to know how things are performing over time. This seems ridiculously obvious, but so few companies actually do it. My hunch is that these folks just don't know what needs to be measured.

Analysis Services can be broken down into 2 fundamental types of workloads:

Processing Workloads

Processing is the term used to describe how data gets loaded into the Analysis Services database. This workload is usually done at night, outside of business hours, so as not to coincide with user activity.

From a performance perspective, the primary things to keep an eye on are:

Processing Duration
This is the amount of time it takes to load data into the Analysis Services database. As long as the processing duration fits within the scheduled maintenance window, there’s not much to worry about. However, if this duration is increasing over time and you think it will eventually run past that window, then you’ll need to review the rest of the information to figure out “why” and “what to do”.

How long does it take to process the Analysis Services database?
Is the processing duration increasing over time?

Resource Consumption
Keeping an eye on resource consumption (e.g. CPU, memory, disk, network) during processing is also a good idea. This kind of information can help shed some light on bottlenecks and provide guidance when coming up with a solution to processing related problems.

Is the processing workload requiring more and more memory? Are we maxing out CPU? How’s disk space?

There are many solutions to problems faced during processing, but without some insight into what’s happening on the system, it’s hard to know which solution is the optimal one.

For example, say we have a Multidimensional cube and notice that the processing duration for one of the measure groups is steadily increasing over time. We review the resource consumption and see that there’s plenty of CPU/Memory/IO to spare. In this case, we may consider partitioning this particular fact table and only processing the most recent partition or processing the partitions in parallel.

Pro Tip: Breaking up processing into stages will provide context to the information above and make it much easier to see which part(s) of the Analysis Services database are contributing to the increase in processing duration or memory consumption.

Query Workloads

This refers to any activity that generates queries against the SSAS database. It could be users running reports, loading dashboards, analyzing data via pivot tables, etc. Because users typically don’t run the same queries at the same time every day, this workload can be much more complicated to monitor.

The key to success is to start with the high level stuff and only go into the details if/when necessary.

Queries
The single most important thing to track for this type of workload is a log of the actual queries being executed against the Analysis Services database.

Not only will you be able to see which queries are slow, but you’ll also have the actual MDX/DAX executed. You won’t have to wait for the user to complain (telling you their report is taking too long to load) because you’ll already have the query and can start reviewing it immediately.

Some Analysis Services implementations actually have service level agreements (SLA) with criteria such as “no query should take more than 30 seconds to complete” and “the average query response time should be less than 5 seconds”. If you’re tracking every query against the Analysis Services database, not only will you know if the SLA has been violated, but you’ll know which query or queries it was that led up to the violation and can start troubleshooting immediately.

Users
Tracking the number of folks using your system (and when they are using it) will prove very helpful for knowing if/when to start considering options for scaling the system up and/or out.

This information can usually be extracted from whatever mechanism you use to track queries being executed against the Analysis Services database, but it is important enough to deserve its own section.

Resource Consumption
In the same vein as the above discussion around tracking resource consumption of the processing workload, you’ll also want to track the same measures (e.g. CPU, memory, disk, network) throughout the day. This information may provide some clues as to why a query is slow.

For example, say you’re reviewing the top 10 slowest queries from the previous week and find several of the queries are now running very fast. At this point you can switch over and start looking at the state of the system last week at the time the query was slow (from a resource consumption perspective) and perhaps find some clues, such as memory pressure or CPU bottleneck caused by a spike in activity.

Here are some examples of the types of questions you should be able to answer with the above information:

What are the top 10 slowest queries each week?

Who are your top users and what is the average number of queries they execute per day/week/month?

What is the average number of users per day/week/month?

What is the max number/average number of concurrent users per day/week/month?

Pro Tip: some folks incorrectly assume the OLAPQueryLog tracks queries. However, this table only tracks parts of queries (storage engine requests). A single query executed against an Analysis Services database could potentially generate 10s of records in this table. The implication is that it doesn’t give you the whole story and you won’t always be able to determine which queries are slow.

Next Steps

Now that you know what types of information you should be collecting and reviewing regularly, the next step is to figure out how you're going to collect that information. The good news is that there are quite a few options available. The bad news is that you'll have to wait until next time to find out. Here is the SQLPASS session which discusses the same concept: Analysis Services: Show Me Where It Hurts. To continue learning about this subject, you can click over here to read the second part.

If you want to get started on performance monitoring with SSAS with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – A Stored Procedure for Executing SSIS Packages in the SSIS Catalog – Notes from the Field #092

[Note from Pinal]: This is a new episode of the Notes from the Field series. SQL Server Integration Services (SSIS) is one of the most essential parts of the entire Business Intelligence (BI) story. It is a platform for data integration and workflow applications.

[Photo: Andy Leonard]

In this episode of the Notes from the Field series I asked SSIS Expert Andy Leonard a very crucial question – How do we create a stored procedure for executing SSIS packages in the SSIS Catalog? Andy was very kind to answer the question and provided plenty of information about how a novice developer can learn SSIS from the beginning and become an expert in the technology.


The following is a snippet from Chapter 2 of the book SSIS Design Patterns co-written by Matt Masson (Blog | @mattmasson), Tim Mitchell (Blog | @Tim_Mitchell), Jessica Moss (Blog | @jessicammoss), and Michelle Ufford (Blog | @sqlfool). Earlier in the chapter there are demos that describe a simple SSIS package named Chapter2.dtsx which is part of an SSIS project named Chapter2, and which is deployed to an instance of the SSIS Catalog in a Folder named “Chapter 2”. But you can use this stored procedure to execute any SSIS package in the SSIS Catalog. That’s the whole point!

SQL Server 2014 provides a new way to manage and execute Integration Services packages: Integration Server Catalogs. We explore this method next.

Integration Server Catalogs

You can only manage SSIS projects that use the Project Deployment Model in Integration Services Catalogs. To execute a package in the catalog, use SSMS to connect to the instance of SQL Server hosting the SSISDB database. Expand the Integration Services Catalogs node, and then expand the SSISDB node. Drill into the folder containing the SSIS project and package(s). Right-click the package you wish to execute and click Execute, as shown in Figure 2-1.


Figure 2-1. Executing an SSIS Package deployed to the SSIS Catalog

The Execute Package Window displays, as shown in Figure 2-2. It allows you to override Parameter values, ConnectionString properties of Connection Managers built at design-time, or any other externalize-able property accessible from a Package Path (via the Advanced tab) for this execution instance of the SSIS package stored in the SSIS Catalog.


Figure 2-2. Execute Package Window

Integration Server Catalog Stored Procedures

Please note the Script button above the Parameters tab in Figure 2-2. This button allows you to generate Transact-SQL statements that will execute the SSIS package. For the Chapter2.dtsx package stored in the SSIS Catalog, the scripts will appear similar to that in Listing 2-1.

Listing 2-1. Transact-SQL Script Generated From the Execute Package Window

Declare @execution_id bigint
EXEC [SSISDB].[catalog].[create_execution]
   @package_name=N'Chapter2.dtsx'
  ,@execution_id=@execution_id OUTPUT
  ,@folder_name=N'Chapter2'
  ,@project_name=N'Chapter2'
  ,@use32bitruntime=False
  ,@reference_id=Null
Select @execution_id
DECLARE @var0 smallint = 1
EXEC [SSISDB].[catalog].[set_execution_parameter_value]
   @execution_id
  ,@object_type=50
  ,@parameter_name=N'LOGGING_LEVEL'
  ,@parameter_value=@var0
EXEC [SSISDB].[catalog].[start_execution] @execution_id
GO

You can use these same stored procedures to execute SSIS Packages in the SSIS Catalog! In fact, I designed a script to create a wrapper stored procedure that will call the Transact-SQL statements executed when an SSIS Package is executed in the SSIS Catalog. You can see that script in Listing 2-2.

Listing 2-2. Script to Build a Wrapper Stored Procedure for Executing SSIS Packages in the SSIS Catalog

 /* Select the SSISDB database */
Use SSISDB
Go

 /* Create a parameter (variable) named @Sql */
Declare @Sql varchar(2000)

 /* Create the Custom schema if it does not already exist */
print 'Custom Schema'
If Not Exists(Select name 
              From sys.schemas 
                Where name = 'custom')
 begin
   /* Create Schema statements must occur first in a batch */
  print ' - Creating custom schema'
  Set @Sql = 'Create Schema custom'
  Exec(@Sql)
  print ' - Custom schema created'
 end
Else
 print ' - Custom Schema already exists.'
print ''

 /* Drop the Custom.execute_catalog_package Stored Procedure if it already exists */
print 'Custom.execute_catalog_package Stored Procedure'
  If Exists(Select s.name + '.' +  p.name
            From sys.procedures p
            Join sys.schemas s
                On s.schema_id = p.schema_id
         Where s.name = 'custom'
           And p.name = 'execute_catalog_package')
   begin
    print ' - Dropping custom.execute_catalog_package'
    Drop Procedure custom.execute_catalog_package
    print ' - Custom.execute_catalog_package dropped'
   end

   /* Create the Custom.execute_catalog_package Stored Procedure */
  print ' - Creating custom.execute_catalog_package'
go

/*

     Stored Procedure: custom.execute_catalog_package
     Author: Andy Leonard
     Date: 4 Mar 2012
     Description: Creates a wrapper around the SSISDB Catalog procedures
                  used to start executing an SSIS Package. Packages in the
                SSIS Catalog are referenced by a multi-part identifier
                 - or path - that consists of the following hierarchy:
        Catalog Name: Implied by the database name in Integration Server 2014
        |-Folder Name: A folder created before or at Deployment to contain the SSIS project
        |-Project Name: The name of the SSIS Project deployed
        |-Package Name: The name(s) of the SSIS Package(s) deployed

        Parameters:
        @FolderName [nvarchar(128)] {No default} – 
         contains the name of the Folder that holds the SSIS Project
        @ProjectName [nvarchar(128)] {No default} – 
         contains the name of the SSIS Project that holds the SSIS Package
        @PackageName [nvarchar(260)] {No default} – 
         contains the name of the SSIS Package to be executed
        @ExecutionID [bigint] {Output} – 
         Output parameter (variable) passed back to the caller
        @LoggingLevel [varchar(16)] {Default} – 
         contains the (case-insensitive) name of the logging level
         to apply to this execution instance
        @Use32BitRunTime [bit] {Default} – 
         1 == Use 32-bit run-time
         0 == Use 64-bit run-time (default)
        @ReferenceID [bigint] {Default} – 
         contains a reference to an Execution Environment
        @ObjectType [smallint] – 
         contains an identifier that appears to be related to the
         SSIS PackageType property
         Guessing: @ObjectType == PackageType.ordinal (1-based-array) * 10
         Must be 20, 30, or 50 for catalog.set_execution_parameter_value
         stored procedure

        Test: 
        1. Create and deploy an SSIS Package to the SSIS Catalog.
        2. Exec custom.execute_catalog_package and pass it the 
          following parameters: @FolderName, @ProjectName, @PackageName, @ExecutionID Output 
        @LoggingLevel, @Use32BitRunTime, @ReferenceID, and @ObjectType are optional and 
        defaulted parameters.

         Example:
           Declare @ExecId bigint
           Exec custom.execute_catalog_package
         'Chapter2'
        ,'Chapter2'
        ,'Chapter2.dtsx'
        ,@ExecId Output
        3. When execution completes, an Execution_Id value should be returned.
        View the SSIS Catalog Reports to determine the status of the execution 
        instance and the test.

*/
Create Procedure custom.execute_catalog_package
  @FolderName nvarchar(128)
 ,@ProjectName nvarchar(128)
 ,@PackageName nvarchar(260)
 ,@ExecutionID bigint Output
 ,@LoggingLevel varchar(16) = 'Basic'
 ,@Use32BitRunTime bit = 0
 ,@ReferenceID bigint = NULL
 ,@ObjectType smallint = 50
As

 begin
  
  Set NoCount ON
  
   /* Call the catalog.create_execution stored procedure
      to initialize execution location and parameters */
  Exec catalog.create_execution
   @package_name = @PackageName
  ,@execution_id = @ExecutionID Output
  ,@folder_name = @FolderName
  ,@project_name = @ProjectName
  ,@use32bitruntime = @Use32BitRunTime
  ,@reference_id = @ReferenceID

   /* Populate the @ExecutionID parameter for OUTPUT */
  Select @ExecutionID As Execution_Id

   /* Create a parameter (variable) named @Sql */
  Declare @logging_level smallint
   /* Decode the Logging Level */
  Select @logging_level = Case 
                           When Upper(@LoggingLevel) = 'BASIC'
                           Then 1
                           When Upper(@LoggingLevel) = 'PERFORMANCE'
                           Then 2
                            When Upper(@LoggingLevel) = 'VERBOSE'
                           Then 3
                           Else 0 /* 'None' */
                          End 
   /* Call the catalog.set_execution_parameter_value stored
      procedure to update the LOGGING_LEVEL parameter */
  Exec catalog.set_execution_parameter_value
    @ExecutionID
   ,@object_type = @ObjectType
   ,@parameter_name = N'LOGGING_LEVEL'
   ,@parameter_value = @logging_level

   /* Call the catalog.start_execution (self-explanatory) */
  Exec catalog.start_execution
    @ExecutionID

 end

GO

If you execute this script to create the custom schema and stored procedure in your instance of the SSISDB database, you can test it using the statement in Listing 2-3.

Listing 2-3. Testing the SSISDB.custom.execute_catalog_package Stored Procedure

Declare @ExecId bigint
Exec SSISDB.custom.execute_catalog_package 'Chapter2','Chapter2','Chapter2.dtsx',
@ExecId Output

Conclusion

This custom.execute_catalog_package stored procedure can be used to execute an SSIS package from any folder and project in the SSIS Catalog.


If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)