SQL SERVER – Three Top Efficiency Features of Management Tool for SQL Server

Every weekend I spend time learning something new and interesting. My primary focus is always either the performance of SQL Server or my own performance. This weekend, I downloaded dbForge Studio for SQL Server. It is a powerful IDE for SQL Server management, administration, development, data reporting, and analysis. I have been using SQL Server Management Studio for most of my tasks, and I like sharing tips and tricks for it. However, there are moments when I wish SSMS offered me more than it currently does. There are moments when I feel I could write a few macros or automated processes to handle some of my daily routines in SSMS. In simple words, I wish I could customize SSMS a bit more to my own taste.

Here are a few things I love about dbForge Studio for SQL Server, which I wish SQL Server Management Studio offered out of the box.

1) SQL Coding Assistance

SSMS has IntelliSense, but I believe it could use quite a few enhancements. For example, I like how dbForge Studio for SQL Server can create whole SQL statements in a few keystrokes and format the entire code based on our own preferences. Here is an image which clearly shows how quickly I can expand * to all the columns of the table.

2) Database Diagram

I travel a lot and visit many different organizations to review their databases. Every time I ask an organization to provide their schema, they hardly ever have one with them. After asking multiple times, the usual exercise is to go to SSMS diagrams and try to print the diagram. If you have ever gone through this exercise, you know it is an extremely difficult task. SSMS is not good with diagrams and does not print them well at all. Most of the time, I end up once again spending time with the original tables to understand their structure.

With the help of dbForge Studio for SQL Server, I am able to generate a schema diagram for any database in just one click. See the attached image below, where I have displayed part of the AdventureWorks database.

3) Schema Compare

I just love this feature. If a user had to build this particular feature from scratch, it would be an almost impossible task. I have learned that if I want to compare schemas, I must depend on third-party tools, and I am extremely pleased with dbForge Studio for SQL Server's schema comparison feature. It not only compares schemas, but also helps synchronize database schemas and deploy scripts targeting different SQL Server editions. Here is an image from the description page of dbForge Studio for SQL Server, which shows how Schema Compare works.

Well, these are just three of my favorite features. In future blog posts I will discuss a few more of the efficiency features of dbForge Studio for SQL Server. Meanwhile, I encourage you to download the tool and see if you agree with my opinion or not. Trust me, if you start playing with this tool, you will find a few of your own efficiency tricks.

You can download dbForge Studio and post your opinion in the comments section below.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – SSMS: Schema Change History Report

The heat is picking up and I am glad you are liking this series so far. This particular report is close to my heart and one I recommend the most. On my recent trip to Delhi and the user group there, I had the opportunity to meet a number of young DBAs who were starting their professional careers at various organizations. I always try to engage such groups with interesting questions to make them inquisitive about learning new concepts in SQL Server.

At this user-group session I wanted people to answer a simple question:

  1. How can I know, who created/dropped/altered the database?
  2. How can I know, who created/dropped/altered the objects?

This caught the attention of the group and I got various answers, from DDL Triggers, Auditing, Error Logs, and Extended Events to many more innovative responses, which I will refrain from disclosing here because they were really funny. All these answers were correct in a way, and I had to counter them with yet another question to make them think.
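As a side note, here is a minimal sketch of the DDL trigger approach the attendees suggested; the table name, trigger name, and logged columns here are hypothetical, just to illustrate the idea:

-- Minimal sketch of a DDL trigger for auditing schema changes
-- (table and trigger names are hypothetical)
CREATE TABLE dbo.DDLAudit
(
    EventTime DATETIME DEFAULT GETDATE(),
    LoginName SYSNAME  DEFAULT SUSER_SNAME(),
    EventData XML
);
GO
CREATE TRIGGER trg_AuditDDL
ON DATABASE
FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE
AS
    INSERT INTO dbo.DDLAudit (EventData)
    VALUES (EVENTDATA());
GO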

Though your answers are correct in a way, what is the easiest / simplest way to find this without writing a single line of code? Now the twist turned the responses into something simple, and one attendee responded: "why not use Profiler?"

This response stumped me totally, and I said, let me achieve the same with fewer clicks for you. My idea was to show them the SQL Server Management Studio Schema Changes History report. It has interesting dimensions to examine, so let me take a moment to walk you through it.

Where to start?

The report location can be found from Server node -> Right Click -> Reports -> Standard Reports -> “Schema Changes History”.

An important point worth noting here is that the report fetches its information from the default trace. We talked about the default trace and how to enable it in our previous post on the "Configuration Changes History" report.
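Before relying on this report, a quick sanity check that the default trace is running can be done with a standard sys.configurations lookup:

-- Check whether the default trace is currently enabled (1 = enabled)
SELECT name, value_in_use
FROM   sys.configurations
WHERE  name = 'default trace enabled';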

If the default trace is not enabled, then this report is smart enough to look at each database and find objects which were created or altered in the last 7 days. I was not aware of this until I disabled the default trace to see the error in the report. To my surprise, the report still came up, in a different format. Let us look at the report with both options.

With default trace enabled

The report with the default trace enabled is shown below:

To catch the query that populates this report, I ran Profiler and here is the basic query:

SELECT *
FROM ::fn_trace_gettable(@base_tracefilename, DEFAULT)
WHERE EventClass IN (46, 47, 164)
      AND EventSubclass = 0
      AND DatabaseID <> 2;

To elaborate a bit, the EventClass values 46, 47 and 164 correspond to Object:Created, Object:Deleted and Object:Altered respectively (refer to sys.trace_events on MSDN for more info).
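You can verify this mapping yourself with a quick lookup against sys.trace_events:

-- Map the event class IDs used by the report to their event names
SELECT trace_event_id, name
FROM   sys.trace_events
WHERE  trace_event_id IN (46, 47, 164);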

With default trace disabled

Now here is the twist: when the default trace is disabled, the query below is executed in each database. It is shown here for reference:

SELECT o.name AS OBJECT_NAME,
o.type_desc,
o.create_date,
s.name AS schema_name
FROM   sys.all_objects o
LEFT OUTER JOIN sys.schemas s
ON ( o.schema_id = s.schema_id)
WHERE  create_date > ( GETDATE() - 7);  

And below is the report generated by the query. It is evident from the "Note" section at the end of the report that our default trace is not enabled.

There are two sections in the report. They are based on a query similar to the one I pointed out above, with the minor difference of using the "create_date" or "modify_date" column, as below.

SELECT o.name AS OBJECT_NAME,
o.type_desc,
o.create_date,
s.name AS schema_name
FROM   sys.all_objects o
LEFT OUTER JOIN sys.schemas s
ON ( o.schema_id = s.schema_id)
WHERE  modify_date > ( GETDATE() - 7);  

The disadvantage of disabling the default trace is that we would not be able to see any information if a database was dropped. I have generally seen this trace to be non-intrusive on most systems, but I would love to hear from you if you have faced any problems with it.

Caveats with Schema Change History Report

One problem with the report is that if even one database is inaccessible, it gives an error and fails to report anything for the remaining databases. For illustration purposes, I put a database into NORECOVERY state and refreshed the report to get the error below:

If you ever face such a situation, you can run the T-SQL query mentioned above manually on the database in question to view the changes.
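To spot the offending database quickly, a standard sys.databases query (not something the report exposes) does the trick:

-- Find databases that are not ONLINE (e.g., RESTORING, RECOVERY_PENDING, SUSPECT)
SELECT name, state_desc
FROM   sys.databases
WHERE  state_desc <> 'ONLINE';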

Has anyone reading this post ever disabled the default trace? Have you used these reports in your environment? Let me know your learnings.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Developer’s Life – Every Developer is Like Transformers

Transformers may not be superheroes – they don’t wear capes, they don’t have amazing powers outside of their size and folding ability, they’re not even human (technically).  Part of their enduring popularity is that while we are enjoying over-the-top movies, we are learning about good leadership and strong personal skills.

There have been four Transformers movies, and before that a long-running children’s cartoon.  For now, though, let’s focus on the most recent installment – Transformers: Age of Extinction.  What can this action-packed movie teach us about developers?

So how are developers like Transformers?

Well, read on for my list of reasons.

Best Use of Resources

Work with what you have.  At the beginning of this movie the main character, Cade Yeager, finds what looks like an old, beat-up semi truck.  He is just trying to make a little cash for his family, but this semi truck ends up being the leader of the transformers, Optimus Prime.  Developers may not have a powerful robot hidden inside themselves, but developing the skills to be the best at your job is often like finding buried treasure.

Be a Leader

Be a leader.  In the third movie, Optimus Prime appeared to die, and the rest of the Transformers were lost without him.  In this movie, set years later,  Optimus Prime returns and his loyal brethren return to fight beside him, no questions asked.  Now that’s leadership!  Developers often work behind the scenes for companies, but it is important to take leadership where necessary.

Don’t Underestimate Anyone

Don’t underestimate yourself or others.  The climax of this movie rests on the bad guys underestimating the hero, calling him a “nobody.”  He proves them wrong by releasing the Transformers on them.  In the workplace, it is easy to feel daunted by a task, or to bite off more than you can chew because you feel like the only person capable of accomplishing the task.  It is important to remember that everybody working together can accomplish much!

Take Action

You have to take action.  A lot of the emotional core of this movie rests on one man who feels out of his league.  He’s not sure what he has gotten himself into, and he’s not sure he can succeed – but even with these doubts, he forges ahead and does end up victorious.  We may face problems in our lives that seem insurmountable.  The important thing to remember is, doing nothing and failing is far worse than trying and failing.

Can’t Please Everyone

You can’t please everyone all of the time.  At the beginning of this movie, the world views the Autobots as the bad guys, even though they rescued humanity from the Decepticons yet again.  In the real world, the director of this movie (Michael Bay) was openly mocked for “beating a dead horse” with this sequel.  However, it is one of the highest grossing movies of the summer, raking in over $200 million so far.  Developers will also face this kind of challenge.  They might not be the most popular individuals at a company, and they often make unpopular, difficult decisions.  But these are the kind of choices that have to be made, and at the end of the day, they have to go home knowing they did the right thing (just without the $200 million).

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Download SQL SERVER 2014 Database Engine Permission Posters

There is no better way to understand a complex subject than learning via diagrams. Security is a very complex and dry subject, and not everyone understands it easily. I have seen quite a few developers who are good at performance tuning or other database concepts, but when it comes to security, they are not so comfortable. Microsoft has recently released SQL Server Database Engine Permissions Posters. These posters are a great way to understand what permissions are required to do various tasks.

For example, if you want to set up auditing on the server, you may need CONTROL SERVER and database audit permissions. Here is a screenshot of a very small section of the poster.
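As a quick illustration of how such a permission is granted (the login name here is a placeholder, and the exact permission needed depends on the task shown in the poster):

-- Hypothetical sketch: grant the server-level permission mentioned above
GRANT CONTROL SERVER TO [AuditAdminLogin];  -- login name is a placeholder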

I think there are plenty of goodies in the poster. Download the poster from here. On the same page there are plenty of other good posters available. Everyone should have this in their workplace.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Configuration Changes History

As we go through each of the reports available in SQL Server Management Studio, I have an anecdote to share for most of them. I have used them extensively at different times and I am glad to share some of these stories via this blog series.

Once a friend of mine, Joe, pinged me over chat. He was about to join a new organization and it was interesting to see the excitement and energy in his conversation. He was brimming with confidence about his newfound job. After joining, he was back to ping me again. I was curious to know what his new job responsibilities were. After our initial greetings, Joe was quick to ask me a SQL Server related query.

Joe: Dave, in my new job we are supposed to have some standards to our servers.

Pinal: Ok, so what is the problem?

Joe: As I get access to some of the servers, I want to quickly know what changes were made to the server.

Pinal: Ok, so what is the problem? I still don’t get it.

Joe: Do you have any query that will give me this information?

Pinal: Now you are talking business.

Joe: I know you can help me, as you have done a number of times in the past.

Pinal: Thanks for the compliment, but there is no need to search anywhere. It is already available to you.

Joe: You must be kidding me

Pinal: No Joe, have you ever checked the Configuration Changes History report in SQL Server Management Studio?

Joe: Ahhh that was a good catch.

Pinal: See, it was with you all along. I just helped you find that it existed. Now enjoy!!!

This SSMS report is the second one in the list under Server node -> Reports -> Standard Reports.

The purpose of this report is to show two key pieces of information:

  • Changes made to Server Configuration using sp_configure
  • Changes to trace flags (Enable or Disable) done via T-SQL.

The information for this report is fetched from the "default trace", which runs by default in every SQL Server installation. If the default trace has been disabled by an administrator for any reason, then we will not get any report. Instead, we will see the message below when the default trace is disabled.

Currently, this table does not have any data to show, because default trace is not enabled.

To enable the default trace, the command below can be used:

sp_configure 'default trace enabled', 1
GO
RECONFIGURE WITH override
GO

For most installations you don't need to enable it because it is running by default. Normally, a user is likely to see the report populated with data, like:

I am always intrigued by how such reports work. To look at the query fired for this report, we can fire up Profiler and capture it. The report uses fn_trace_gettable to read the default trace and fetch event classes 22 and 116.

SELECT name
FROM sys.trace_events
WHERE trace_event_id IN (22, 116)

 

These trace events mean that whenever a change is made using DBCC TraceStatus or sp_configure, it is recorded in the default trace under the "ErrorLog" and "Audit DBCC Event" event classes.  To display the SSMS report, additional filters are applied to show only the relevant information.
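If you would rather query the default trace directly instead of relying on the report, a sketch along these lines should work (the column list is trimmed for readability):

-- Read configuration-change events (22 = ErrorLog, 116 = Audit DBCC Event)
-- directly from the default trace file
DECLARE @path NVARCHAR(260);

SELECT @path = path
FROM   sys.traces
WHERE  is_default = 1;

SELECT TE.name AS EventName,
       T.TextData,
       T.StartTime,
       T.LoginName
FROM   fn_trace_gettable(@path, DEFAULT) AS T
       JOIN sys.trace_events AS TE
         ON T.EventClass = TE.trace_event_id
WHERE  T.EventClass IN (22, 116)
ORDER  BY T.StartTime DESC;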

Do check these reports in your environment and let me know if you find any interesting information.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Automate Database Operations for Success – Notes from the Field #035

[Note from Pinal]: This is the 35th episode of the Notes from the Field series. Doing things faster and easier is always our goal. We all want to do things which generate the maximum return on investment with the least effort. For database administrators, this is quite often a catch-22 situation.

In this episode of the Notes from the Field series, database expert Brian Kelley explains how to automate various database administrator tasks for the success of the business and our efforts. Read the experience of Brian in his own words.


In the Linchpin People mindset, it’s not about how busy you are, but how valuable you are. You don’t increase your value to your organization by your busyness. You do so by accomplishing the work. There are two parts to this:

  • Accomplish more work.
  • Accomplish more important work.

Initially, if your path follows most people, you’ll be asked to do more. This is your opportunity to accomplish more work. As you succeed with the additional work that you receive, you will likely be given opportunities to take on more important work. If this never happens, you’re probably at the wrong organization. Let’s assume you are at a good one and you’re given more important tasks. Obviously, if you succeed at the more important work, you’ll continue to be given more opportunities. And you’ll likely be given tasks and projects that are more and more important to your organization. This is how you become a linchpin.

So how do you complete more work? One part of the answer is automation. Since we're IT professionals, automation should be near and dear to us. I recently wrote about being a "lazy" DBA. I used the word lazy to indicate there are some manual tasks we don't want to repeat. A "lazy" engineer or IT pro tries to automate these tasks in order to reduce the time spent on them.  That frees up the IT pro to have more time for the more important work.

Let’s look at some things that we should automate as DB Pros:

Database Administration:

Build scripts that can do the following:

  • Check disk space on servers.
  • Check database available space.
  • Check security on key servers and databases.
  • Verify backups were taken properly.
  • Perform test restores of key backups.
  • Parse the SQL error log on each SQL Server for important information (failed logins, use of sp_configure, etc.).

For instance, if I want to check backups to ensure a full backup has run on every database within the last 24 hours, I might automate running the following query to report back the databases where I do not have a proper backup:

SELECT D.name,
       BS.database_name,
       ISNULL(CONVERT(VARCHAR, BS.LastBackup, 106), 'No recent backup') AS LastBackup
FROM   master.sys.databases AS D
       LEFT JOIN (SELECT database_name,
                         MAX(backup_finish_date) AS LastBackup
                  FROM   msdb.dbo.backupset
                  WHERE  type = 'D'
                  GROUP  BY database_name) AS BS
         ON D.name = BS.database_name
WHERE  BS.database_name IS NULL
        OR BS.LastBackup < (DATEADD(HOUR, -24, GETDATE()))
ORDER  BY D.name;

We should also use automation like policy based management or custom scripts to enforce settings. Some examples that we should consider:

  • database ownership
  • recovery models
  • membership in key roles (sysadmin, securityadmin, db_owner, etc.)

And if I knew every user database on a given server should be in the full recovery model, I can enforce that by scheduling the following script:

DECLARE cursdbs CURSOR FAST_FORWARD FOR
SELECT name
FROM   sys.databases
WHERE  state_desc = 'ONLINE'
       AND recovery_model_desc <> 'FULL'
       AND name NOT IN ('master', 'tempdb', 'msdb', 'model');

DECLARE @DBName SYSNAME;
DECLARE @SQL NVARCHAR(MAX);

OPEN cursdbs;
FETCH NEXT FROM cursdbs INTO @DBName;

WHILE (@@FETCH_STATUS = 0)
BEGIN
    PRINT 'ALTERING DATABASE: ' + @DBName;
    SET @SQL = 'ALTER DATABASE [' + @DBName + '] SET RECOVERY FULL;';
    EXEC (@SQL);
    FETCH NEXT FROM cursdbs INTO @DBName;
END

CLOSE cursdbs;
DEALLOCATE cursdbs;

You do want to review that output. After all, if you just switched a database to the full recovery model, you want to ensure you restart the log backup chain with a full or differential database backup.
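As a minimal sketch (the database name and backup path are placeholders), restarting the chain could look like this:

-- Hypothetical sketch: restart the log backup chain after the recovery model change
BACKUP DATABASE [YourDatabase]
TO DISK = N'D:\Backups\YourDatabase_Full.bak'  -- placeholder path
WITH INIT, CHECKSUM;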

Database Development:

Encourage continuous integration methods to include database code. This will require tests to validate no new code “breaks the build.” Make sure that these builds come directly from source control.

If you are doing tests that require restores of databases and the application of scripts, write the automation to do these tasks. It makes the tasks repeatable, it reduces the possibility of error, and it frees you up so you don’t have to manually run each step.

With that said, write scripts for anything you will have to repeat when developing a solution. For instance, you might need scripts to:

  • Add, delete, or change data.
  • Change security on a database or server.
  • Encrypt / decrypt setup data.

Can you automate too much?

Yes, you can. Note that in both cases I did include some reporting. If you build automation where you’re not doing any checking, that’s wrong. Automation eliminates you from having to do tedious steps. It doesn’t remove your responsibility/accountability. If you don’t have anything to check, you don’t actually know if the work was completed successfully. Don’t assume. Check.
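One lightweight pattern for that checking step, sketched here with a hypothetical mail profile and recipient (Database Mail must already be configured), is to mail yourself the results of the check query:

-- Hypothetical sketch: email the results of a nightly check
-- (profile name, recipient, and query are placeholders)
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = N'DBA Mail Profile',
    @recipients   = N'dba-team@example.com',
    @subject      = N'Nightly backup check',
    @query        = N'SELECT name FROM master.sys.databases;';  -- replace with the backup-check query above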

If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Server Dashboard Report

The term dashboard comes from the automobile world, where it refers to the instrumentation and controls for operating a vehicle. In IT parlance, dashboards show the state or health of a system or organization. This analogy cannot be taken too far, because running a business is not the same as running a vehicle, but the similarity of needing to know the health of the system is bang-on.

In our previous post, I gave an introduction on where to find all the SQL Server Management Studio reports. Today we will look at the first and most basic report all of us should be working with. It is called the "Server Dashboard" report, found under the Server node -> Standard Reports node.

This report gives an overview of various settings in a SQL Server instance. As the name suggests, it shows the health of the SQL Server instance and has three sections to examine. We will investigate each of these sections next.

Configuration Details

This section is interesting because it can answer some interesting questions:

  • When was the SQL Server instance started / restarted?
  • What edition and version of the product are we running?
  • Whether the instance is a clustered instance or not.
  • The number of logical processors used by the SQL Server instance.

When someone asks me to performance tune a system, I generally tend to ask customers some of these basic questions. What is lesser known to DBAs is that instead of querying or relying on their Excel sheets, this report can give this information in an instant.
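If you prefer to pull the same answers with T-SQL instead of the report, a quick sketch using standard catalog views and server properties would be:

-- Instance start time and logical processor count
SELECT sqlserver_start_time,
       cpu_count AS logical_processors
FROM   sys.dm_os_sys_info;

-- Edition, version, and clustering status
SELECT SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('IsClustered')    AS IsClustered;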

Most of the other values are self-explanatory. It should be noted that “Scheduler Agent Jobs” shows the number of jobs created, not what is running or enabled.

Non Default Configuration Options

Let us move to the next section. As an administrator, if you want to know the server level settings that were changed from default in a given instance, then this is the section to note.

By default the values are not visible. We need to click the (+) symbol to toggle them.

This part of the report shows various non-default configuration settings made by executing the sp_configure stored procedure. You can read about enabling / disabling sp_configure basics in my previous blog posts.

I always look into this section to see if the administrator or someone else has limited the memory allocated to SQL Server on a given box. This shows up under the "Max server memory(MB)" entry, as shown above.
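To check the same setting from T-SQL, sp_configure works; note that 'max server memory (MB)' is an advanced option, so 'show advanced options' must be enabled first:

-- Display the current max server memory setting (an advanced option)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';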

This section also shows if any trace flag is enabled on the server. This information is normally not available when you just play around with sp_configure.

Activity Details

Next comes the busier section, Activity Details. It has a lot of interesting information: a visual graph showing the CPU and IO performed on the instance, and, most importantly, a table at the top which is worth noting.

  1. The number of active sessions and transactions in the system at the moment.
  2. The active databases available in the instance.
  3. The total server memory in the system.
  4. The logins connected to the server.
  5. The number of blocked transactions in the system.
  6. Finally, worth noting are the server-side traces running in the system.

Though this information is available in a nice report, the same can be obtained via various DMVs in SQL Server. I am outlining some of these for reference.

  • Active Sessions: sys.dm_exec_sessions WHERE is_user_process = 1 AND status = 'running'
  • Active Transactions: sys.dm_tran_active_transactions WHERE transaction_state = 2 OR (transaction_type = 4 AND dtc_state = 1)
  • Total Server Memory: sys.dm_os_performance_counters WHERE counter_name LIKE '%Total Server Memory (KB)%'
  • Idle Sessions: sys.dm_exec_sessions WHERE is_user_process = 1 AND status = 'sleeping'
  • Blocked Transactions: sys.dm_tran_locks WHERE request_status = 'WAIT'
  • Distinct Connected Logins on Sessions: sys.dm_exec_sessions WHERE is_user_process = 1
  • Traces Running: sys.traces WHERE status = 1

The query that generates the graph is shown below. It is based on historical data from sys.dm_exec_query_stats, showing the graph for IO and CPU.

Query 1:
SELECT  total_worker_time,
CASE WHEN DB_NAME(dbid) IS NULL THEN 'Adhoc Queries'
ELSE DB_NAME(dbid)
END AS DB_NAME,
dbid
FROM    sys.dm_exec_query_stats s1
CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2

Query 2:
SELECT  total_logical_reads + total_logical_writes AS total_io,
CASE WHEN DB_NAME(dbid) IS NULL THEN 'Adhoc Queries'
ELSE DB_NAME(dbid)
END AS DB_NAME
FROM    sys.dm_exec_query_stats s1
CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2

Though the reports are a great way to see this data visually, it is nice to see the power of DMVs and how they can be used with SQL Server. All this information is available out of the box, and it is represented nicely and visually by the SQL Server Management Studio dashboards.

Reference: Pinal Dave (http://blog.sqlauthority.com)