My conversations with my DBA friends are always interesting. They have unique ways to solve problems, and organizations often take them for granted. Most DBAs want to be in control of their environment, and there is no second thought about it when it comes to delivery from their side.
In a recent conversation, one of my friends asked me if there was a way to identify the tables that were created in the database in the past week. His security team had asked him to bring a report on when objects are created on the production server. They want to make sure no unnecessary objects are getting created in the database without prior approval.
Though my first thought was to build an audit for this, I decided to send a simple script that can be used to identify the same.
SELECT o.name AS [Object_Name],
       s.name AS [Schema_Name],
       o.type_desc AS [Description],
       o.create_date AS [Creation_Date],
       o.modify_date AS [Modified_Date]
FROM sys.all_objects o
LEFT OUTER JOIN sys.schemas s ON o.schema_id = s.schema_id
WHERE o.create_date > (GETDATE() - 7)
   OR o.modify_date > (GETDATE() - 7)
A sample output is shown below:
As you can see, the query can be added to a SQL Server Agent job and the output can be sent as an email to the DBA or security team automatically. The other option I gave my friend was to build an SSRS report and schedule its output via email to the security team.
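As a rough sketch of that email automation (not the exact job my friend set up), a job step could call Database Mail with the same query. The mail profile name, recipient address, and database name below are placeholders you would replace with your own:

EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'DBA Mail Profile',            -- placeholder: use your configured Database Mail profile
    @recipients = 'security-team@yourcompany.com', -- placeholder recipient
    @subject = 'Objects created or modified in the last 7 days',
    @query = N'SELECT o.name, s.name AS schema_name, o.type_desc, o.create_date, o.modify_date
               FROM sys.all_objects o
               LEFT OUTER JOIN sys.schemas s ON o.schema_id = s.schema_id
               WHERE o.create_date > (GETDATE() - 7) OR o.modify_date > (GETDATE() - 7)',
    @execute_query_database = 'YourDatabase',      -- placeholder database name
    @attach_query_result_as_file = 1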
If you are a person not doing such hi-fi stuff of sending reports, then there is a much easier way to find this information. We can use the “Schema Change History” report available inside SQL Server Management Studio to get this information.
A typical output is shown above. This is similar to the query output we discussed before, but in a built-in report that works out-of-the-box. Most of these SQL Server Management Studio tips are simple and are hidden in some shape or form.
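For the curious, this report is driven by the default trace. Here is a minimal sketch (assuming the default trace is still enabled, which it is unless someone turned it off) of pulling similar information yourself:

DECLARE @TracePath NVARCHAR(260)
SELECT @TracePath = [path] FROM sys.traces WHERE is_default = 1

-- Schema change events captured by the default trace
SELECT te.name AS [Event], t.DatabaseName, t.ObjectName, t.LoginName, t.StartTime
FROM sys.fn_trace_gettable(@TracePath, DEFAULT) t
INNER JOIN sys.trace_events te ON t.EventClass = te.trace_event_id
WHERE te.name IN ('Object:Created', 'Object:Altered', 'Object:Deleted')
ORDER BY t.StartTime DESC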
Do let me know if you have ever had such requirements. What did you do in that case? Would love to hear your experience and implementation for sure.
Earlier I had written a blog about SQL SERVER – Beginning In-Memory OLTP with Sample Example, which covers the basics of working with In-Memory OLTP. Though that post gets you started, one of my colleagues asked me if there was an easier way to identify In-Memory tables when working with SQL Server Management Studio. He also said that their database had thousands of tables, and it would be impossible to look for an icon difference or anything else that is tough to see.
This statement got me thinking, and I wanted to give him a few options. On exploring SSMS, you will be surprised that all of these options exist right in front of our eyes.
The simplest solution here was to use the standard system table and query for just the In-Memory tables. The below query will output only In-Memory tables present in the selected database where this query is being run.
SELECT * FROM sys.tables WHERE is_memory_optimized = 1
Though these age-old methods exist, they still don’t help us much when it comes to visualizing the same in SSMS.
Using Object Explorer
Method 2 is to use Object Explorer -> Filter Settings. In SQL Server Management Studio for SQL Server 2014, we get a nice little option to filter based on In-Memory tables. Right-click the Tables node and select Filter Settings. In this dialog we have additional settings that are useful.
Using Object Explorer Details
The third and simplest way is to use the Object Explorer Details pane (F7) inside SQL Server 2014 SSMS. Right-click on the header to add the “Memory Optimized” column. This will show True or False values. Sort on this column and we get all the In-Memory Optimized tables at the top.
I thought these were simple yet powerful tips when working with these special tables inside SQL Server 2014. Do let me know if you have any other method to filter out In-Memory Optimized tables inside SSMS.
Recently I was working with one of my friends who handles production server databases as a senior DBA. He told me that recently one of the junior DBAs was fired from his company because he ran a DROP TABLE command on a production server. He was scared, and from that time onwards, every DBA has been asked to run a SELECT @@SERVERNAME command before executing any query on any server. Similar to this situation, many times a DBA wants a set of commands, which they use very frequently, to be available as soon as SQL Server Management Studio is opened.
Both of the above situations can be handled by the trick explained in this blog.
Whenever we launch SQL Server Management Studio, it loads “SQLFile.sql” from the operating system and shows its text in the New Query window. The location of this file depends on the location of SSMS.exe and the version of SQL Server. My machine has SQL Server 2012 and SQL Server 2014.
Here is the location of SSMS.exe on my 64-bit machine for SQL Server 2012.
Here is the mapping of SQL Server versions to internal version numbers (which is 110 for SQL Server 2012 in the above example).
Microsoft SQL Server 2014 – 120
Microsoft SQL Server 2012 – 110
Microsoft SQL Server 2008 R2 – 105
Microsoft SQL Server 2008 – 100
Once you find SSMS.exe, we can navigate to the SqlWorkbenchProjectItems\Sql folder as shown below.
The highlighted file, “SQLFile.sql”, is the file which is loaded into a new query window.
On my machine, I have modified the file and written the below:
SELECT @@VERSION
GO
SELECT @@SERVERNAME
GO
SELECT * FROM sys.dm_exec_requests WHERE blocking_session_id <> 0
After modifying the file, whenever you open a new query window, it would be as below.
As we can see, this is EXACTLY the same text which we have written in the SQLFile.sql file.
Be Aware: This file is used by all user profiles on the machine. If you delete this file by mistake, you would get the below error whenever a new query window is opened.
Microsoft SQL Server Management Studio
Cannot find template file for the new query (‘E:\Program Files (x86)\Microsoft SQL Server\120\Tools\Binn\ManagementStudio\SqlWorkbenchProjectItems\SQL\SQLFile.sql’).
To fix this, you can create an empty file with the same name in the location given in the error message. The path in the error message would vary based on the installation on your machine. Hope you found this tip on SSMS useful, and do let me know if you will be using the same.
Some blog readers asked why we should use SQLCMD or BCP to generate the file when the same can be achieved using SQL Server Management Studio itself. Well, that was a strong statement, and I liked the simplicity of the solution. Here are the steps. A lot of times it is our inability to learn some of the simple techniques present in the tools we use that keeps us from being even more productive.
Here are the steps to achieve the same. Go to SQL Server Management Studio > Choose Tools from the menu bar > Click on Options. Go to “Query Results” > “SQL Server” > Click on “Results to Text”.
Once we choose “Comma delimited” and hit OK, we will get the below kind of output when we execute a query (note that we need to open a new query window so that the settings take effect).
Notice that the first row contains the column names. If we want to avoid that and want ONLY the data, then we can customize that in the same place where we modified the separator setting. If we uncheck “Include column headers in the result set”, then we will get only the rows as output.
We can also choose a custom delimiter. I have set the delimiter as pipe (|) and unchecked the box which I mentioned earlier, and below is the output for the same query.
As we can see, the output is pipe-separated and we have just the data, not the column headings. This can be saved and imported again into some other server using the bcp command, the BULK INSERT command, or the Import/Export Wizard.
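As a minimal sketch of the re-import, assuming the pipe-delimited output was saved to a file such as C:\Temp\output.txt (a hypothetical path) and a table with a matching structure already exists on the target server:

BULK INSERT dbo.TargetTable        -- hypothetical target table matching the exported columns
FROM 'C:\Temp\output.txt'          -- hypothetical path of the saved pipe-delimited file
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n')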
Though I had clarified that all the methods displayed in these two blog posts have the exact same performance, I kept on getting questions on this subject, again and again! Well, as there are so many questions, I have decided to create a quick video which demonstrates that there is no performance difference among the four methods which I displayed earlier.
Here are the blog posts I have previously written. You can read them over here:
Earlier I wrote a blog post based on my latest Pluralsight course on learning SQL Server 2014. SQL Server 2014 has introduced many new features, and one of the performance features is Delayed Transaction Durability. This concept is indeed very interesting. To explain this feature in detail, we need to understand what Full Transaction Durability is. The current default of SQL Server is Full Transaction Durability. A common question I often receive is why we would move away from full durability to delayed durability. The answer is – prioritizing performance over durability.
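As a minimal sketch of how the feature is enabled (the database and table names below are placeholders), the database first has to allow delayed durability, and then an individual commit can opt in:

-- Allow delayed durable transactions at the database level (placeholder database name).
ALTER DATABASE [YourDatabase] SET DELAYED_DURABILITY = ALLOWED
GO
-- Opt in for a single transaction: the commit returns before the log records are flushed to disk.
BEGIN TRANSACTION
INSERT INTO dbo.YourTable (SomeColumn) VALUES ('test')   -- placeholder table
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON)
GO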
I hope my earlier blog post clearly explained how delayed durability works. If not, I suggest you watch the following quick video where I explain this concept in extremely simple words.
Yesterday I wrote a blog post based on my latest Pluralsight course on learning SQL Server 2014. I discussed the newly introduced live query plans. Before you execute a query, you can see the estimated execution plan, and after you execute it, you can see the actual execution plan – but how do you see a query plan while the query is still being executed? This is a very interesting question. In SQL Server 2014 we now have a new DMV which tracks execution stats at each operator level while the query is still running. The DMV for the same is sys.dm_exec_query_profiles.
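Here is a minimal sketch of using the DMV. The session id 53 below is just an example value, and in SQL Server 2014 the session running the query also needs profiling enabled, for instance via SET STATISTICS PROFILE ON:

-- Session 1: enable per-operator profiling, then run the long query.
SET STATISTICS PROFILE ON
GO
-- ... long-running query executes here ...

-- Session 2: watch the live, per-operator progress of the running query.
SELECT session_id, node_id, physical_operator_name, row_count, estimate_row_count
FROM sys.dm_exec_query_profiles
WHERE session_id = 53   -- example session id of the running query
ORDER BY node_id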
I hope my earlier blog post clearly explained how the live plan for an executing query works. If not, I suggest you watch the following quick video where I explain this concept in extremely simple words.
I just formatted my computer and installed a fresh SQL Server 2014. The very first thing I did was to open SQL Server Management Studio and execute a sample query. As soon as I executed the query, it returned some results. Once I did not need the results any further, my next instinct was to hide the result pane. I am very used to the keyboard shortcut CTRL + R to hide the results. As soon as I pressed CTRL + R, the result pane was still visible. Pressing CTRL + R did not do anything.
I suddenly remembered that the last time, when I had installed SQL Server 2012, I had faced a similar situation where a few of my shortcuts did not work. Here is what I did to make my shortcuts work again.
Steps to Get Shortcut working again:
Go to the Tools menu in SSMS
Click on Options
Go to Keyboard and expand the menu
Click on option Keyboard again
Keep the drop-down option “Apply the following additional keyboard mapping scheme” set to (Default)
Click on Reset
… and you are done!
Well, on the internet there are many different solutions, and many of them are valid in different scenarios. In my scenario, following the above steps fixed my issue.
Yesterday I wrote a blog post based on my latest Pluralsight course on learning SQL Server 2014. I discussed the newly introduced cardinality estimation in SQL Server 2014 and how it improves query performance. The cardinality estimation logic is responsible for the quality of query plans and has a major influence on the performance of any query. This logic had not been updated for quite a while, but in the latest version, SQL Server 2014, it has been re-designed. The new logic now incorporates various assumptions and algorithms for OLTP and warehousing workloads.
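As a minimal sketch (the database name below is a placeholder), the new estimator is used when a database runs under compatibility level 120, and you can fall back to the legacy behavior by lowering the compatibility level:

-- Use the new SQL Server 2014 cardinality estimator (placeholder database name).
ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 120
GO
-- Fall back to the legacy (pre-2014) cardinality estimator if a regression is found.
-- ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 110
-- Alternatively, an individual query can request the legacy estimator with OPTION (QUERYTRACEON 9481).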
I hope my earlier blog post clearly explained how the new cardinality estimation logic improves performance. If not, I suggest you watch the following quick video where I explain this concept in extremely simple words.
At conferences and at speaking engagements at local user groups, there is one question that keeps coming up which I wish were never asked: “Why is SQL Server using up all the memory and not releasing it even when idle?” Well, the answer can be long, and with the release of SQL Server 2014, it got even more complicated. SQL Server 2014 introduces In-Memory OLTP, which is a completely new concept, and our dependency on memory has increased multifold. In reality, nothing much changes, but we now additionally have memory-optimized objects (tables and stored procedures) which reside completely in memory and improve performance. As a DBA, it is humanly impossible to get a hang of all the innovations and new features introduced in each new version. So today’s blog is about the report added to SSMS which gives a high-level view of this new feature.
This report is available only from SQL Server 2014 onwards because the feature was introduced in SQL Server 2014. Earlier versions of SQL Server Management Studio would not show the report in the list.
If we try to launch the report on a database which does not have a memory-optimized filegroup defined, then we would see a message in the report. To demonstrate, I have created a fresh new database called MemoryOptimizedDB with no special filegroup.
Here is the query used to identify whether a database has a memory-optimized filegroup or not.
SELECT TOP(1) 1 FROM sys.filegroups FG WHERE FG.[type] = 'FX'
Once we add the filegroup using the below command, we would see a different version of the report.
ALTER DATABASE [MemoryOptimizedDB] ADD FILEGROUP [IMO_FG] CONTAINS MEMORY_OPTIMIZED_DATA
GO
The report is still empty because we have not defined any memory-optimized table in the database. The total allocated size is shown as 0 MB. Now, let’s add a file location to the filegroup and also create a few in-memory tables. We have used the nomenclature IMO to denote “In-Memory Optimized” objects.
ALTER DATABASE [MemoryOptimizedDB]
ADD FILE (NAME = N'MemoryOptimizedDB_IMO',
          FILENAME = N'E:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\MemoryOptimizedDB_IMO')
TO FILEGROUP [IMO_FG]
GO
You may have to change the path based on your SQL Server configuration. Below is the script to create the table.
-- Drop the table if it already exists.
IF OBJECT_ID('dbo.SQLAuthority','U') IS NOT NULL
    DROP TABLE dbo.SQLAuthority
GO
CREATE TABLE dbo.SQLAuthority
(
    ID INT IDENTITY NOT NULL,
    Name CHAR(500) COLLATE Latin1_General_100_BIN2 NOT NULL DEFAULT 'Pinal',
    CONSTRAINT PK_SQLAuthority_ID PRIMARY KEY NONCLUSTERED (ID),
    INDEX hash_index_sample_memoryoptimizedtable_c2 HASH (Name) WITH (BUCKET_COUNT = 131072)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO
As soon as the above script is executed, the table and index are both created. If we run the report again, we would see something like below.
Notice that table memory is zero, but the index is using memory. This is due to the fact that a hash index needs memory to manage the buckets created. So even if the table is empty, the index would consume memory. More about the internals of how In-Memory indexes and tables work will be reserved for future posts. Now, use the below script to populate the table with 10,000 rows.
INSERT INTO SQLAuthority VALUES (DEFAULT)
GO 10000
Here is the same report after inserting 10,000 rows into our In-Memory table.
There are a total of three sections in the report:
Total Memory consumed by In-Memory Objects
Pie chart showing memory distribution based on type of consumer – table, index and system.
Details of memory usage by each table.
The information for all three is taken from one single DMV, sys.dm_db_xtp_table_memory_stats. This DMV contains memory usage statistics for both user and system In-Memory tables. If we query the DMV and look at the data, we can easily notice that the system tables have negative object IDs. So, to look at user table memory usage, below is an over-simplified version of the query.
SELECT OBJECT_NAME(OBJECT_ID), *
FROM sys.dm_db_xtp_table_memory_stats
WHERE OBJECT_ID > 0
GO
This report would help the DBA identify which in-memory objects are taking a lot of memory, which can be used as a pointer for designing a solution. I am sure in the future we will discuss at length the whole concept of In-Memory tables in detail on this blog. To read more about In-Memory OLTP, have a look at the In-Memory OLTP Series at Balmukund’s Blog.