SQL SERVER – SSMS Trick – Generating CSV file using Management Studio

In my previous blog post I talked about using SQLCMD to generate a comma separated file from table data.

Some blog readers asked why we should use SQLCMD or BCP to generate the file when the same can be achieved using SQL Server Management Studio itself. Well, that was a strong statement, and I liked the simplicity of the solution. Here are the steps. A lot of times it is our inability to learn some of the simple techniques present in the tools we use that keeps us from becoming even more productive.

Here are the steps to achieve the same. Go to SQL Server Management Studio > choose Tools from the menu bar > click on Options. Then go to “Query Results” > “SQL Server” > click on “Results to Text”.

Once we choose “Comma delimited” and hit OK, we will get the below kind of output when we execute a query (note that we need to open a new Query Window so that the new settings take effect).


Notice that the first row contains the column names. If we want to avoid that and want ONLY the data, we can customize this in the same place where we modified the separator setting. If we uncheck “Include column headers in the result set”, we will get only the data rows as output.

We can also choose a custom delimiter. I have set the delimiter to pipe (|) and unchecked the box mentioned earlier; below is the output for the same query.

As we can see, the output is pipe separated and contains only data, with no column headings. This file can be saved and imported into another server using the bcp command, the BULK INSERT command, or the Import/Export Wizard.
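For example, if the pipe-delimited output above were saved to a file, a minimal BULK INSERT sketch to load it back into an existing table could look like the following (the file path, database and table names here are placeholders for illustration, not from the original post):

USE SampleDB
GO
-- Load the pipe-delimited file saved from the SSMS results pane into an existing table
BULK INSERT dbo.StagingTable
FROM 'C:\Temp\QueryOutput.txt'
WITH
(
FIELDTERMINATOR = '|',  -- matches the custom delimiter chosen in SSMS
ROWTERMINATOR = '\n',   -- one row per line
FIRSTROW = 1            -- no header row, since column headers were unchecked
)
GO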


Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Search Records with Single Quotes – SQL in Sixty Seconds #075

Earlier I wrote a two-part blog post series about searching records with single quotes. Refer to those blog posts over here.

Though I had clarified that all the methods displayed in those two blog posts have exactly the same performance, I kept on getting questions on this subject, again and again! Well, as there are so many questions, I have decided to create a quick video which demonstrates that there is no performance difference among the four methods which I displayed earlier.
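To give a flavor of the kind of queries being compared, here is a minimal sketch of two common ways to find rows containing a single quote (the table and column names are placeholders; the earlier posts and the video cover the full set of four methods):

-- Method 1: escape the single quote by doubling it inside the string literal
SELECT ID, Name
FROM dbo.Customers
WHERE Name LIKE '%''%'

-- Method 2: build the pattern with CHAR(39), the ASCII code for a single quote
SELECT ID, Name
FROM dbo.Customers
WHERE Name LIKE '%' + CHAR(39) + '%'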

Action Item

Here are the blog posts I have previously written. You can read them over here:

You can subscribe to my YouTube Channel for frequent updates.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Video Introduction to Delayed Durability – SQL in Sixty Seconds #074

Earlier I wrote a blog post based on my latest Pluralsight course on learning SQL Server 2014. SQL Server 2014 has introduced many new features, and one of the performance-related features is Delayed Transaction Durability. This concept is indeed very interesting. To explain this feature in detail, we need to understand what Full Transaction Durability is. The current default of SQL Server is Full Transaction Durability. A common question I often receive is why we would move away from full durability to delayed durability. The answer is: prioritizing performance over durability.

I hope my earlier blog post clearly explained how delayed durability works. If not, I suggest you watch the following quick video where I explain this concept in extremely simple words.

You can download the code used in this video from Simple Example of Delayed Durability.
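Until you grab that download, here is a minimal sketch of what enabling delayed durability looks like (the database and table names below are placeholders, not taken from the demo script):

-- Allow delayed durability at the database level
ALTER DATABASE [SampleDB] SET DELAYED_DURABILITY = ALLOWED
GO

-- Opt in for a specific transaction: the commit returns before the log
-- records are hardened to disk, trading full durability for speed
BEGIN TRANSACTION
INSERT INTO dbo.SampleTable (Col1) VALUES ('Pinal')
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON)
GO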

Action Item

Here are the blog posts I have previously written. You can read them over here:

You can subscribe to my YouTube Channel for frequent updates.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Live Plan for Executing Query – SQL in Sixty Seconds #073

Yesterday I wrote a blog post based on my latest Pluralsight course on learning SQL Server 2014. I discussed the newly introduced live plans. Before you execute a query you can see the estimated execution plan, and after you execute it you can see the actual execution plan – but how do you see a query plan while the query is still being executed? This is a very interesting question. In SQL Server 2014 we now have a new DMV which tracks execution stats at each operator level while the query is still running. The DMV for the same is sys.dm_exec_query_profiles.

I hope my earlier blog post clearly explained how the live plan for an executing query works. If not, I suggest you watch the following quick video where I explain this concept in extremely simple words.

You can download the code used in this video from Simple Demo of New Cardinality Estimation Features of SQL Server 2014.
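To give a sense of what the DMV returns, here is a minimal sketch of querying it from a separate session while a long-running query executes (the column list is trimmed for readability; as far as I know, in SQL Server 2014 the monitored session needs SET STATISTICS XML ON or SET STATISTICS PROFILE ON for its operators to show up here):

-- Run in a separate session while the query of interest is executing
SELECT  session_id,
        node_id,
        physical_operator_name,
        row_count,             -- rows produced so far by this operator
        estimate_row_count     -- the optimizer's estimate for this operator
FROM    sys.dm_exec_query_profiles
ORDER BY session_id, node_id
GO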

Action Item

Here are the blog posts I have previously written. You can read them over here:

You can subscribe to my YouTube Channel for frequent updates.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – CTRL + R Shortcut Does Not Work in SSMS

I just formatted my computer and installed a fresh SQL Server 2014. The very first thing I did was open SQL Server Management Studio and execute a sample query. As soon as I executed the query, it returned some results. Once I no longer needed the results, my next instinct was to hide the result pane. I am very used to the keyboard shortcut CTRL + R to hide the results. However, when I pressed CTRL + R, the result pane was still visible – the shortcut did nothing.

I suddenly remembered that the last time I had installed SQL Server 2012 I faced a similar situation when a few of my shortcuts did not work. Here is what I did to make my shortcut work again.

Steps to Get Shortcut working again:

  1. Go to the Tools menu in SSMS
  2. Click on Options
  3. Go to Keyboard and expand the menu
  4. Click on the Keyboard option again
  5. In the drop down “Apply the following additional keyboard mapping scheme”, keep the option as (Default)
  6. Click on Reset

and you are done!

Well, on the internet there are many different solutions, and many of them are valid in different scenarios. In my scenario, following the above steps fixed my issue.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Cardinality Estimation and Performance – SQL in Sixty Seconds #072

Yesterday I wrote a blog post based on my latest Pluralsight course on learning SQL Server 2014. I discussed the newly introduced cardinality estimation in SQL Server 2014 and how it improves the performance of queries. The cardinality estimation logic is responsible for the quality of query plans and is a major factor in the performance of any query. This logic had not been updated for quite a while, but in the latest version, SQL Server 2014, it has been re-designed. The new logic now incorporates various assumptions and algorithms for OLTP and warehousing workloads.

I hope my earlier blog post clearly explained how the new cardinality estimation logic improves performance. If not, I suggest you watch the following quick video where I explain this concept in extremely simple words.

You can download the code used in this course from Simple Demo of New Cardinality Estimation Features of SQL Server 2014.
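As a quick illustration of how the new estimator is switched on, here is a minimal sketch (the database and table names are placeholders and the downloadable demo may take a different approach): the new cardinality estimator is used when the database runs under compatibility level 120, and an individual query can fall back to the legacy estimator with trace flag 9481.

-- Use the new SQL Server 2014 cardinality estimator for this database
ALTER DATABASE [SampleDB] SET COMPATIBILITY_LEVEL = 120
GO

-- Force an individual query back to the legacy (pre-2014) estimator for comparison
SELECT c.CustomerID, COUNT(*) AS OrderCount
FROM dbo.Orders o
JOIN dbo.Customers c ON c.CustomerID = o.CustomerID
GROUP BY c.CustomerID
OPTION (QUERYTRACEON 9481)
GO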

Action Item

Here are the blog posts I have previously written. You can read them over here:

You can subscribe to my YouTube Channel for frequent updates.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Memory Usage By Memory Optimized Objects Report

At conferences and at speaking engagements at the local UG, there is one question that keeps coming up which I wish were never asked. The question is, “Why is SQL Server using up all the memory and not releasing it even when idle?” Well, the answer can be long, and with the release of SQL Server 2014 it got even more complicated. SQL Server 2014 introduces In-Memory OLTP, which is a completely new concept, and our dependency on memory has increased multifold. In reality not much changes, but we now additionally have memory-optimized objects (tables and stored procedures) which reside completely in memory and improve performance. As a DBA, it is humanly impossible to get the hang of all the innovations and new features introduced in each new version. So today's blog is about the report added to SSMS which gives a high-level view of this new feature.

This report is available only from SQL Server 2014 onwards because the feature was introduced in SQL Server 2014. Earlier versions of SQL Server Management Studio would not show the report in the list.

If we try to launch the report on a database which does not have a memory-optimized filegroup defined, then we would see a message in the report. To demonstrate, I have created a fresh new database called MemoryOptimizedDB with no special filegroup.

Here is the query used to identify whether a database has a memory-optimized filegroup or not.

-- Returns a row only if the database has a memory-optimized data filegroup (type 'FX')
SELECT TOP(1) 1 FROM sys.filegroups FG WHERE FG.[type] = 'FX'

Once we add the filegroup using the below command, we would see a different version of the report.

USE [master]
GO
ALTER DATABASE [MemoryOptimizedDB] ADD FILEGROUP [IMO_FG] CONTAINS MEMORY_OPTIMIZED_DATA
GO

The report is still empty because we have not defined any memory-optimized table in the database. The total allocated size is shown as 0 MB. Now, let's add the folder location into the filegroup and also create a few in-memory tables. We have used the nomenclature IMO to denote “In-Memory Optimized” objects.

USE [master]
GO
ALTER DATABASE [MemoryOptimizedDB]
ADD FILE ( NAME = N'MemoryOptimizedDB_IMO', FILENAME = N'E:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\MemoryOptimizedDB_IMO')
TO FILEGROUP [IMO_FG]
GO

You may have to change the path based on your SQL Server configuration. Below is the script to create the table.

USE MemoryOptimizedDB
GO
--Drop table if it already exists.
IF OBJECT_ID('dbo.SQLAuthority','U') IS NOT NULL
DROP TABLE dbo.SQLAuthority
GO
CREATE TABLE dbo.SQLAuthority
(
ID INT IDENTITY NOT NULL,
Name CHAR(500)  COLLATE Latin1_General_100_BIN2 NOT NULL DEFAULT 'Pinal',
CONSTRAINT PK_SQLAuthority_ID PRIMARY KEY NONCLUSTERED (ID),
INDEX hash_index_sample_memoryoptimizedtable_c2 HASH (Name) WITH (BUCKET_COUNT = 131072)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO

As soon as the above script is executed, both the table and the index are created. If we run the report again, we would see something like below.

Notice that the table memory is zero but the index is using memory. This is due to the fact that a hash index needs memory to manage the buckets it creates, so even if the table is empty, the index consumes memory. More about the internals of how in-memory indexes and tables work will be reserved for future posts. Now, use the below script to populate the table with 10,000 rows.

INSERT INTO SQLAuthority VALUES (DEFAULT)
GO 10000
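As an aside, if you want to verify the bucket-related memory usage of the hash index yourself, a sketch like the following against sys.dm_db_xtp_hash_index_stats can help (the join to sys.indexes is only there to show friendly index names):

USE MemoryOptimizedDB
GO
-- Bucket usage for hash indexes on memory-optimized tables in this database
SELECT  OBJECT_NAME(HIS.object_id) AS table_name,
        I.name AS index_name,
        HIS.total_bucket_count,
        HIS.empty_bucket_count,
        HIS.avg_chain_length
FROM    sys.dm_db_xtp_hash_index_stats HIS
JOIN    sys.indexes I
        ON I.object_id = HIS.object_id AND I.index_id = HIS.index_id
GO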

Here is the same report after inserting 10,000 rows into our in-memory table.


There are a total of three sections in the whole report.

  1. Total Memory consumed by In-Memory Objects
  2. Pie chart showing memory distribution based on type of consumer – table, index and system.
  3. Details of memory usage by each table.

The information for all three sections is taken from one single DMV, sys.dm_db_xtp_table_memory_stats. This DMV contains memory usage statistics for both user and system in-memory tables. If we query the DMV and look at the data, we can easily notice that the system tables have negative object IDs. So, to look at user table memory usage, below is an over-simplified version of the query.

USE MemoryOptimizedDB
GO
SELECT OBJECT_NAME(OBJECT_ID), *
FROM sys.dm_db_xtp_table_memory_stats
WHERE OBJECT_ID > 0
GO

This report would help a DBA identify which in-memory objects are taking a lot of memory, which can be used as a pointer for designing a solution. I am sure in the future we will discuss the whole concept of in-memory tables at length on this blog. To read more about In-Memory OLTP, have a look at the In-Memory OLTP Series at Balmukund's Blog.

Reference: Pinal Dave (http://blog.sqlauthority.com)