SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP

In SQL Server 2014, Microsoft introduced a new database engine component called In-Memory OLTP, also known as project “Hekaton”, which is fully integrated into the SQL Server Database Engine. It is optimized for OLTP workloads that access memory-resident data. In-Memory OLTP lets us create memory-optimized tables, which in turn offer significant performance improvements for typical OLTP workloads.

The main objective of a memory-optimized table is to let highly transactional tables live entirely in memory, and stay there, without losing a single record. Significantly, it still supports the majority of Transact-SQL statements, and Transact-SQL stored procedures can be compiled to machine code for further performance improvements on memory-optimized tables. The engine is designed for high concurrency and minimal blocking: In-Memory OLTP alleviates locking by using a new form of multi-version optimistic concurrency control, and it substantially reduces waiting on log writes by generating far less log data and needing fewer log writes.

Points to remember

  • Memory-optimized tables refer to tables that use the new data structures and keywords added as part of In-Memory OLTP.
  • Disk-based tables refer to the normal tables we have created in SQL Server since its inception. These tables use fixed-size 8 KB pages that are read from and written to disk as a unit.
  • Natively compiled stored procedures are a new object type supported by the In-Memory OLTP engine, which compiles them into machine code and can further improve data access performance for memory-optimized tables. Natively compiled stored procedures can only reference memory-optimized tables; they cannot reference disk-based tables. (A minimal example appears after this list.)
  • Interpreted Transact-SQL stored procedures are what SQL Server has always used.
  • Cross-container transactions refer to transactions that reference both memory-optimized tables and disk-based tables.
  • Interop refers to interpreted Transact-SQL that references memory-optimized tables.
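
To make the third bullet concrete, here is a minimal sketch of a natively compiled stored procedure. It assumes the dbo.Mem_Table table created later in this article, the procedure name is hypothetical, and the options shown (NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS and the ATOMIC block) are the ones SQL Server 2014 requires for native compilation:

CREATE PROCEDURE dbo.usp_GetCityByName
    @Name VARCHAR(32)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- Natively compiled procedures may reference memory-optimized tables only
    SELECT Name, City, State_Province
    FROM dbo.Mem_Table
    WHERE Name = @Name;
END;
GO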

Using In-Memory OLTP

The In-Memory OLTP engine has been available as part of SQL Server 2014 since the June 2013 CTPs. Installation of In-Memory OLTP is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014; they are not available in the 32-bit editions.

Creating Databases

Any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files SQL Server needs to recover memory-optimized tables. Although the syntax for creating it is almost the same as for a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA. Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables:

CREATE DATABASE InMemoryDB
ON
PRIMARY
(NAME = [InMemoryDB_data],
FILENAME = 'D:\data\InMemoryDB_data.mdf', size=500MB),
FILEGROUP [InMemoryDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA
(NAME = [InMemoryDB_mod_dir1],
FILENAME = 'S:\data\InMemoryDB_mod_dir1'),
(NAME = [InMemoryDB_mod_dir2],
FILENAME = 'R:\data\InMemoryDB_mod_dir2')
LOG ON (NAME = [InMemoryDB_log], FILENAME = 'L:\log\InMemoryDB_log.ldf', SIZE = 500MB)
COLLATE Latin1_General_100_BIN2;

The example above creates files on three different drives (D:, S: and R:) for the data files and in-memory storage (plus L: for the log), so if you would like to run this code, change the drive and folder locations to suit your environment. Also notice that a Windows (non-SQL) binary collation was specified: at this point, BIN2 collations are the only collations supported for index key columns on memory-optimized tables.

It is also possible to add a MEMORY_OPTIMIZED_DATA filegroup to an existing database; use the commands below to do so.

ALTER DATABASE AdventureWorks2012 ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA;
GO
ALTER DATABASE AdventureWorks2012 ADD FILE (NAME='hekaton_mod', FILENAME='S:\data\hekaton_mod') TO FILEGROUP hekaton_mod;
GO

Creating Tables

There is no major syntactical difference between creating a disk-based table and a memory-optimized table, but there are a few restrictions and a few new, essential extensions. Essentially, any memory-optimized table must use the MEMORY_OPTIMIZED = ON clause, as shown in the CREATE TABLE example below.

DURABILITY clause (SCHEMA_AND_DATA or SCHEMA_ONLY)

A memory-optimized table should always be defined with a DURABILITY value, which can be either SCHEMA_AND_DATA or SCHEMA_ONLY, the former being the default. A memory-optimized table defined with DURABILITY = SCHEMA_ONLY will not persist its data to disk, which means durability of the data is sacrificed, whereas DURABILITY = SCHEMA_AND_DATA ensures the data is persisted along with the schema.
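
As a quick illustration of the SCHEMA_ONLY option, here is a minimal sketch (the table name is hypothetical) of the kind of non-durable, session-state style table it is typically used for; the data disappears on restart, but the table itself is recreated from the schema:

CREATE TABLE dbo.SessionCache
(
SessionId INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
Payload VARCHAR(512) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);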

Indexing Memory Optimized Table

A memory-optimized table must always have at least one index, and every table created with DURABILITY = SCHEMA_AND_DATA must have a primary key, which can be satisfied by declaring a PRIMARY KEY constraint at table creation time. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified.

CREATE TABLE Mem_Table
(
[Name] VARCHAR(32) NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
[City] VARCHAR(32) NULL,
[State_Province] VARCHAR(32) NULL,
[LastModified] DATETIME NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

As you can see in the query above, we used the MEMORY_OPTIMIZED = ON clause so that the table is created as a memory-optimized table rather than a normal table, and DURABILITY = SCHEMA_AND_DATA so that the data is persisted along with the metadata. Notice also that the PRIMARY KEY is declared up front, which is mandatory for durable memory-optimized tables.

We will talk more about HASH indexes and BUCKET_COUNT in later articles on this topic, which will focus on row and index storage for memory-optimized tables, so stay tuned for that as well.

Now that we have covered the basics of memory-optimized tables and the key things to remember while using them, let's explore some examples to understand the performance gains memory-optimized tables can deliver.

I will be using the database created earlier in this article, InMemoryDB, in the demo exercise below.

USE InMemoryDB
GO
-- Creating a disk based table
CREATE TABLE dbo.Disktable
(
Id INT IDENTITY,
Name CHAR(40)
)
GO
CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id)
GO
-- Creating a memory optimized table with similar structure and DURABILITY = SCHEMA_AND_DATA
CREATE TABLE dbo.Memorytable_durable
(
Id INT NOT NULL PRIMARY KEY NONCLUSTERED Hash WITH (bucket_count =1000000),
Name CHAR(40)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO
-- Creating another memory optimized table with similar structure but DURABILITY = SCHEMA_ONLY
CREATE TABLE dbo.Memorytable_nondurable
(
Id INT NOT NULL PRIMARY KEY NONCLUSTERED Hash WITH (bucket_count =1000000),
Name CHAR(40)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
GO
-- Now insert 100000 records in dbo.Disktable and observe the Time Taken
DECLARE @i_t bigint
SET @i_t =1
WHILE @i_t<= 100000
BEGIN
INSERT INTO
dbo.Disktable(Name) VALUES('sachin' + CONVERT(VARCHAR,@i_t))
SET @i_t+=1
END
GO
-- Do the same inserts for the memory-optimized table dbo.Memorytable_durable and observe the time taken
DECLARE @i_t BIGINT
SET @i_t =1
WHILE @i_t<= 100000
BEGIN
INSERT INTO
dbo.Memorytable_durable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR,@i_t))
SET @i_t+=1
END
GO
-- Now finally do the same inserts for the memory-optimized table dbo.Memorytable_nondurable and observe the time taken
DECLARE @i_t BIGINT
SET @i_t =1
WHILE @i_t<= 100000
BEGIN
INSERT INTO
dbo.Memorytable_nondurable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR,@i_t))
SET @i_t+=1
END

The above three inserts took 1.20 minutes, 54 seconds, and 2 seconds respectively to insert 100,000 records on my machine with 8 GB of RAM.
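
If you want to capture such timings yourself rather than watching the status bar, a minimal pattern is to bracket each batch as sketched below; the variable name here is just a placeholder:

DECLARE @start DATETIME2 = SYSDATETIME();
-- ... run one of the insert loops from above here ...
SELECT DATEDIFF(MILLISECOND, @start, SYSDATETIME()) AS elapsed_ms;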

This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional tables, and a memory-optimized table with DURABILITY = SCHEMA_ONLY is faster still, since it does not persist its data to disk at all. Koenig Solutions is one of the few organizations that offer IT training on SQL Server 2014 and all its updates.

Now I leave the decision on using memory-optimized tables to you. I hope you liked this article and that it helped you understand the fundamentals of In-Memory OLTP.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – SSIS Parameters in Parent-Child ETL Architectures – Notes from the Field #040

[Notes from Pinal]: SSIS is a very well explored subject; however, there are so many interesting elements that whenever we read about it, we learn something new. One such concept is the parent-child ETL architecture relationship in SSIS.

Linchpin People are database coaches and wellness experts for a data driven world. In this 40th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) shares a very interesting discussion on how to understand SSIS parameters in parent-child ETL architectures.


In this brief Notes from the Field post, I will review the use of SSIS parameters in parent-child ETL architectures.

A very common design pattern used in SQL Server Integration Services is one I call the parent-child pattern.  Simply put, this is a pattern in which packages are executed by other packages.  An ETL infrastructure built using small, single-purpose packages is very often easier to develop, debug, and troubleshoot than large, monolithic packages.  For a more in-depth look at parent-child architectures, check out my earlier blog post on this topic.

When using the parent-child design pattern, you will frequently need to pass values from the calling (parent) package to the called (child) package.  In older versions of SSIS, this process was possible but not necessarily simple.  When using SSIS 2005 or 2008, or even when using SSIS 2012 or 2014 in package deployment mode, you would have to create package configurations to pass values from parent to child packages.  Package configurations, while effective, were not the easiest tool to work with.  Fortunately, starting with SSIS in SQL Server 2012, you can now use package parameters for this purpose.

In the example I will use for this demonstration, I’ll create two packages: one intended for use as a child package, and the other configured to execute said child package.  In the parent package I’m going to build a for each loop container in SSIS, and use package parameters to pass in a value – specifically, a ClientID – for each iteration of the loop.  The child package will be executed from within the for each loop, and will create one output file for each client, with the source query and filename dependent on the ClientID received from the parent package.

Configuring the Child and Parent Packages

When you create a new package, you’ll see the Parameters tab at the package level.  Clicking over to that tab allows you to add, edit, or delete package parameters.

As shown above, the sample package has two parameters.  Note that I’ve set the name, data type, and default value for each of these.  Also note the column entitled Required: this allows me to specify whether the parameter value is optional (the default behavior) or required for package execution.  In this example, I have one parameter that is required, and the other is not.

Let’s shift over to the parent package briefly, and demonstrate how to supply values to these parameters in the child package.  Using the execute package task, you can easily map variable values in the parent package to parameters in the child package.


The execute package task in the parent package, shown above, has the variable vThisClient from the parent package mapped to the pClientID parameter shown earlier in the child package.  Note that there is no value mapped to the child package parameter named pOutputFolder.  Since this parameter has the Required property set to False, we don't have to specify a value for it, which will cause that parameter to use the default value we supplied when designing the child package.

The last step in the parent package is to create the for each loop container I mentioned earlier, and place the execute package task inside it.  I’m using an object variable to store the distinct client ID values, and I use that as the iterator for the loop (I describe how to do this more in depth here).  For each iteration of the loop, a different client ID value will be passed into the child package parameter.


The final step is to configure the child package to actually do something meaningful with the parameter values passed into it.  In this case, I’ve modified the OleDB source query to use the pClientID value in the WHERE clause of the query to restrict results for each iteration to a single client’s data.  Additionally, I’ll use both the pClientID and pOutputFolder parameters to dynamically build the output filename.

As shown, the pClientID is used in the WHERE clause, so we only get the current client’s invoices for each iteration of the loop.

For the flat file connection, I’m setting the Connection String property using an expression that engages both of the parameters for this package, as shown above.
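
As a sketch of what such an expression might look like (the file-name pattern here is hypothetical), package parameters are referenced in SSIS expressions through the $Package namespace:

@[$Package::pOutputFolder] + "Invoices_Client_" + (DT_WSTR, 10) @[$Package::pClientID] + ".csv"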

Parting Thoughts

There are many uses for package parameters beyond a simple parent-child design pattern.  For example, you can create standalone packages (those not intended to be used as a child package) and still use parameters.  Parameter values may be supplied to a package directly at runtime by a SQL Server Agent job, through the command line (via dtexec.exe), or through T-SQL.
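
For example, here is a hedged sketch of supplying a package parameter value through T-SQL using the SSIS catalog stored procedures; the folder, project, and package names are placeholders for your own deployment:

DECLARE @execution_id BIGINT;

EXEC SSISDB.catalog.create_execution
    @folder_name = N'ETL',                 -- placeholder folder
    @project_name = N'ParentChildDemo',    -- placeholder project
    @package_name = N'ChildPackage.dtsx',  -- placeholder package
    @execution_id = @execution_id OUTPUT;

-- object_type 30 = package parameter (20 would be a project parameter)
EXEC SSISDB.catalog.set_execution_parameter_value
    @execution_id,
    @object_type = 30,
    @parameter_name = N'pClientID',
    @parameter_value = 42;

EXEC SSISDB.catalog.start_execution @execution_id;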

Also, you can have project parameters as well as package parameters.  Project parameters work in much the same way as package parameters, but they apply to all packages in a project, not just a single package.

Conclusion

Of the numerous advantages of using the catalog deployment model in SSIS 2012 and beyond, package parameters are near the top of the list.  Parameters allow you to easily share values from parent to child packages, enabling more dynamic behavior and better code encapsulation.

If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Memory Usage By Memory Optimized Objects Report

At conferences and at speaking engagements at the local UG, there is one question that keeps coming up which I wish were never asked: “Why is SQL Server using up all the memory and not releasing it even when idle?” The answer can be long, and with the release of SQL Server 2014 it got even more complicated, because SQL Server 2014 introduces In-Memory OLTP, a completely new concept that increases our dependency on memory multifold. In reality not much changes, but we now additionally have memory-optimized objects (tables and stored procedures) that reside completely in memory and improve performance. As a DBA, it is humanly impossible to keep up with all the innovations and new features introduced in each new version, so today's blog is about the report added to SSMS which gives a high-level view of this new feature.

This report is available only from SQL Server 2014 onwards because the feature was introduced in SQL Server 2014. Earlier versions of SQL Server Management Studio do not show the report in the list.

If we try to launch the report on a database that does not have an In-Memory filegroup defined, we will see a message in the report. To demonstrate, I have created a fresh database called MemoryOptimizedDB with no special filegroup.

Here is the query used to identify whether a database has a memory-optimized filegroup or not.

SELECT TOP(1) 1 FROM sys.filegroups FG WHERE FG.[type] = 'FX'

Once we add a filegroup using the command below, we will see a different version of the report.

USE [master]
GO
ALTER DATABASE [MemoryOptimizedDB] ADD FILEGROUP [IMO_FG] CONTAINS MEMORY_OPTIMIZED_DATA
GO

The report is still empty because we have not defined any memory-optimized table in the database, and the total allocated size is shown as 0 MB. Now let's add a folder location to the filegroup and also create a few in-memory tables. We have used the nomenclature IMO to denote “In-Memory Optimized” objects.

USE [master]
GO
ALTER DATABASE [MemoryOptimizedDB]
ADD FILE ( NAME = N'MemoryOptimizedDB_IMO', FILENAME = N'E:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\MemoryOptimizedDB_IMO')
TO FILEGROUP [IMO_FG]
GO

You may have to change the path based on your SQL Server configuration. Below is the script to create the table.

USE MemoryOptimizedDB
GO
--Drop table if it already exists.
IF OBJECT_ID('dbo.SQLAuthority','U') IS NOT NULL
DROP TABLE dbo.SQLAuthority
GO
CREATE TABLE dbo.SQLAuthority
(
ID INT IDENTITY NOT NULL,
Name CHAR(500)  COLLATE Latin1_General_100_BIN2 NOT NULL DEFAULT 'Pinal',
CONSTRAINT PK_SQLAuthority_ID PRIMARY KEY NONCLUSTERED (ID),
INDEX hash_index_sample_memoryoptimizedtable_c2 HASH (Name) WITH (BUCKET_COUNT = 131072)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO

As soon as the above script is executed, both the table and the index are created. If we run the report again, we will see something like below.

Notice that table memory is zero but the index is using memory. This is because a hash index needs memory to manage the buckets it pre-allocates, so even when the table is empty the index consumes memory. More about the internals of how in-memory indexes and tables work will be reserved for future posts. Now, use the script below to populate the table with 10,000 rows.

INSERT INTO SQLAuthority VALUES (DEFAULT)
GO 10000
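
With the rows inserted, you can also peek at those pre-allocated hash buckets directly; here is a hedged sketch using the hash index statistics DMV:

USE MemoryOptimizedDB
GO
-- Buckets are pre-allocated based on BUCKET_COUNT, which is why even an empty table shows index memory
SELECT OBJECT_NAME(h.object_id) AS table_name,
       i.name AS index_name,
       h.total_bucket_count,
       h.empty_bucket_count,
       h.avg_chain_length
FROM sys.dm_db_xtp_hash_index_stats h
INNER JOIN sys.indexes i
    ON i.object_id = h.object_id AND i.index_id = h.index_id
GO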

Here is the same report after inserting 10,000 rows into our in-memory table.

 

There are three sections in the report.

  1. Total Memory consumed by In-Memory Objects
  2. Pie chart showing memory distribution based on type of consumer – table, index and system.
  3. Details of memory usage by each table.

The information for all three sections is taken from a single DMV, sys.dm_db_xtp_table_memory_stats. This DMV contains memory usage statistics for both user and system in-memory tables. If we query the DMV and look at the data, we can easily notice that the system tables have negative object IDs. So, to look at user table memory usage, below is an over-simplified version of the query.

USE MemoryOptimizedDB
GO
SELECT OBJECT_NAME(OBJECT_ID), *
FROM sys.dm_db_xtp_table_memory_stats
WHERE OBJECT_ID > 0
GO
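
To get closer to the totals the report shows, the same DMV can be aggregated into megabytes; here is a hedged sketch using the kilobyte columns exposed by the DMV:

USE MemoryOptimizedDB
GO
SELECT OBJECT_NAME(object_id) AS table_name,
       (memory_used_by_table_kb + memory_used_by_indexes_kb) / 1024.0 AS used_mb,
       (memory_allocated_for_table_kb + memory_allocated_for_indexes_kb) / 1024.0 AS allocated_mb
FROM sys.dm_db_xtp_table_memory_stats
WHERE object_id > 0
ORDER BY used_mb DESC
GO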

This report helps a DBA identify which in-memory objects are taking a lot of memory, which can then serve as a pointer when designing a solution. I am sure in the future we will discuss the whole concept of In-Memory tables at length on this blog. To read more about In-Memory OLTP, have a look at the In-Memory OLTP series at Balmukund's Blog.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Database Consistency History Report

Doctor and Database

The last place I like to visit is a hospital. With the monsoon season starting and the intermittent rains, it has become almost routine to catch a fever every other year (I seriously hate it). So when I visit my doctor, it is always interesting how he quizzes me: the routine questions of “How many days have you had this?”, “Is there any pattern?”, “Did you get drenched in the rain?”, “Do you have any other symptoms?” and so on. The idea is that the doctor wants to find an anomaly or a pattern that points him to a viral or bacterial cause. Most of the time they get it from experience, and sometimes after a battery of tests. So if there is consistent behavior behind your problem, there is always a solution to be found. SQL Server, too, has its way of finding whether the server's data and files are in a consistent state: the DBCC commands.

Back to SQL Server

In real life, database consistency checking is one of those critical operations to which a DBA generally doesn't give much priority. Many readers of my blog have asked: how do we know if a database is consistent? How do I read the output of DBCC CHECKDB and find out whether everything is right or not?

My common answer to all of them is: look at the bottom of the CHECKDB (or CHECKTABLE) output for the line below.

CHECKDB found 0 allocation errors and 0 consistency errors in database ‘DatabaseName’.

The above is a “good sign” because we are seeing zero allocation errors and zero consistency errors. If you are seeing non-zero errors, then there is some problem with the database. Sample output is shown below:

CHECKDB found 0 allocation errors and 2 consistency errors in database ‘DatabaseName’.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (DatabaseName).

If we see non-zero errors, then most of the time (though not always) we are offered repair options, depending on the level of corruption. There is risk involved with the repair_allow_data_loss option: we could lose data. Sometimes the option is repair_rebuild, which is a little safer. Although these options are available, it is important to find the root cause of the problem.
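
For reference, here is a hedged sketch of running the check and, as a last resort, the destructive repair; CorruptedDatabase is the demo database used later in this post, and restoring from a good backup should always be preferred over REPAIR_ALLOW_DATA_LOSS:

-- Check only (suppress informational messages, show all error messages)
DBCC CHECKDB (CorruptedDatabase) WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- Last resort: REPAIR_ALLOW_DATA_LOSS can discard data; the database must be in single-user mode
ALTER DATABASE CorruptedDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (CorruptedDatabase, REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE CorruptedDatabase SET MULTI_USER;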

Among the standard reports, there is one which can show the history of CHECKDB executions for the selected database. Since this is a database-level report, we need to right-click on the database, click Reports, click Standard Reports and then choose the “Database Consistency History” report.

The information in this report is picked from the default trace. If the default trace is disabled, or no CHECKDB has been run, or the information is no longer in the default trace (because it has rolled over), we get a report like the one below.

As we can see report says it very clearly: Currently, no execution history of CHECKDB is available or default trace is not enabled.

To demonstrate, I caused corruption in one of my databases and performed the steps below.

  1. Run CHECKDB so that the errors are reported.
  2. Fix the corruption, losing data, by using the repair option.
  3. Run CHECKDB again to check whether the corruption is cleared.

After that I launched the report, and below is what we see.

If you are lazy like me and don't want to run the report manually for each database, then the query below is handy: it provides the same report for all databases. This is the query the report runs behind the scenes; all I have done is remove the filter on database name (at the end, commented out).

DECLARE @curr_tracefilename VARCHAR(500);
DECLARE @base_tracefilename VARCHAR(500);
DECLARE @indx INT;
SELECT @curr_tracefilename = path FROM sys.traces WHERE is_default = 1;
SET @curr_tracefilename = REVERSE(@curr_tracefilename);
SELECT @indx  = PATINDEX('%\%', @curr_tracefilename) ;
SET @curr_tracefilename = REVERSE(@curr_tracefilename);
SET @base_tracefilename = LEFT( @curr_tracefilename,LEN(@curr_tracefilename) - @indx) + '\log.trc';
SELECT  SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),36, PATINDEX('%executed%',TEXTData)-36) AS command
,       LoginName
,       StartTime
,       CONVERT(INT,SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),PATINDEX('%found%',TEXTData) +6,PATINDEX('%errors %',TEXTData)-PATINDEX('%found%',TEXTData)-6)) AS errors
,       CONVERT(INT,SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),PATINDEX('%repaired%',TEXTData) +9,PATINDEX('%errors.%',TEXTData)-PATINDEX('%repaired%',TEXTData)-9)) repaired
,       SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),PATINDEX('%time:%',TEXTData)+6,PATINDEX('%hours%',TEXTData)-PATINDEX('%time:%',TEXTData)-6)+':'+SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),PATINDEX('%hours%',TEXTData) +6,PATINDEX('%minutes%',TEXTData)-PATINDEX('%hours%',TEXTData)-6)+':'+SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),PATINDEX('%minutes%',TEXTData) +8,PATINDEX('%seconds.%',TEXTData)-PATINDEX('%minutes%',TEXTData)-8) AS time
FROM ::fn_trace_gettable( @base_tracefilename, DEFAULT)
WHERE EventClass = 22 AND SUBSTRING(TEXTData,36,12) = 'DBCC CHECKDB'
-- AND DatabaseName = @DatabaseName;

Don't get worried about the logic above. All it is doing is reading the trace files and parsing the entry below, extracting the values for errors found, errors repaired and elapsed time.

DBCC CHECKDB (CorruptedDatabase) executed by sa found 2 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds.  Internal database snapshot has split point LSN = 00000029:00000030:0001 and first LSN = 00000029:00000020:0001.

Hopefully from now on you will run CHECKDB and understand its importance. As responsible DBAs I am sure you are already doing it; let me know how often you actually run it in your production environment.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Backup and Restore Events Report

A DBA wears multiple hats and in fact does more than what the eye can see. One of the core tasks of a DBA is to take backups. This looks so trivial that most developers shrug it off as the only activity a DBA might be doing. I have huge respect for DBAs all around the world, because even though they seem cool with all the scripting, automation and maintenance work running round the clock to keep the business going almost 365 days, 24×7, their worth really shows on the day the system or the hard disk crashes while you have an important delivery to make. That is when those backup tasks and maintenance jobs come in handy and no longer look as trivial as many consider them to be. So important questions like “When was the last backup taken?”, “How much time did the last backup take?” and “What type of backup was taken last?” are tricky ones, and this report delivers answers to them in a jiffy.

The SSMS report we are talking about can be used to find the backup and restore operations performed on the selected database. Whenever we perform any backup or restore operation, the information is stored in the msdb database. This report uses that information to show the size, the time taken and the file location for those operations.

Here is how this report can be launched.

 

Once we launch this report, we can see four major sections, listed below.

  • Average Time Taken For Backup Operations
  • Successful Backup Operations
  • Backup Operation Errors
  • Successful Restore Operations

Let us look at each section next.

Average Time Taken For Backup Operations

The information shown in the “Average Time Taken For Backup Operations” section is taken from the backupset table in the msdb database. Here are the query and the expanded version of that particular section.

USE msdb;
SELECT (ROW_NUMBER() OVER (ORDER BY t1.TYPE))%2 AS l1
,       1 AS l2
,       1 AS l3
,       t1.TYPE AS [type]
,       (AVG(DATEDIFF(ss,backup_start_date, backup_finish_date)))/60.0 AS AverageBackupDuration
FROM backupset t1
INNER JOIN sys.databases t3 ON ( t1.database_name = t3.name)
WHERE t3.name = N'AdventureWorks2014'
GROUP BY t1.TYPE
ORDER BY
t1.TYPE

On my small database, the time taken for a differential backup was less than a minute, hence the value of zero is displayed. This is an important piece of information about your backup operations which might help you plan maintenance windows.

Successful Backup Operations

Here is the expanded version of this section.

 

This information is derived from various backup tracking tables in the msdb database.  Here is a simplified version of the query, which can be used separately as well.

SELECT *
FROM sys.databases t1
INNER JOIN backupset t3 ON (t3.database_name = t1.name)
LEFT OUTER JOIN backupmediaset t5 ON ( t3.media_set_id = t5.media_set_id)
LEFT OUTER JOIN backupmediafamily t6 ON ( t6.media_set_id = t5.media_set_id)
WHERE (t1.name = N'AdventureWorks2014')
ORDER BY backup_start_date DESC,t3.backup_set_id,t6.physical_device_name;

The report does some calculations to show the data in a more readable format; for example, the backup size is shown in KB, MB or GB. I have expanded the first row by clicking the (+) in the “Device type” column, which shows the path of the physical backup file.

Personally, looking at this section, the Backup Size, Device Type and Backup Name are critical and worth noting. As mentioned in the previous section, this section also has the Duration embedded inside it.

Backup Operation Errors

This section of the report gets its data from the default trace. You might wonder how. One of the events tracked by the default trace is “ErrorLog”, which means that whatever message is written to the error log also gets written to the default trace file. Interestingly, whenever there is a backup failure, an error message is written to the ERRORLOG and hence to the default trace. This section takes advantage of that to show the information. We can read the message below under this section, which confirms the logic above.

No backup operations errors occurred for (AdventureWorks2014) database in the recent past or default trace is not enabled.
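
If you want to pull those failure messages yourself, a hedged sketch along the same lines as the consistency-history query earlier in this post (reading only the current default trace file) would be:

DECLARE @tracefile VARCHAR(500);
SELECT @tracefile = path FROM sys.traces WHERE is_default = 1;

-- EventClass 22 = ErrorLog entries captured by the default trace
SELECT StartTime, DatabaseName, TextData
FROM ::fn_trace_gettable(@tracefile, DEFAULT)
WHERE EventClass = 22
  AND TextData LIKE '%BACKUP failed%'
ORDER BY StartTime DESC;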

Successful Restore Operations

This section may not be very useful on a production server (how often do you restore a database there?), but it can be useful in development and in a log shipping secondary environment, where we might be interested in the restore operations for a particular database. Here is the expanded version of the section. To fill this section of the report, I restored the same backups that were taken to populate the earlier sections.

Here is the simplified version of the query used to populate this output.

USE msdb;
SELECT *
FROM restorehistory t1
LEFT OUTER JOIN restorefile t2 ON ( t1.restore_history_id = t2.restore_history_id)
LEFT OUTER JOIN backupset t3 ON ( t1.backup_set_id = t3.backup_set_id)
WHERE t1.destination_database_name = N'AdventureWorks2014'
ORDER BY restore_date DESC,  t1.restore_history_id,t2.destination_phys_name

Have you ever looked at the backup strategy of your key databases? Are they in sync, and is there scope for improvement? Then this is the report to analyze after a week or a month of maintenance plans running in your database. Do chime in with the strategies you are using in your environments.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Developer’s Life – Every Developer is a Captain America

Captain America was first created as a comic book character in the 1940s as a way to boost morale during World War II.  Aimed at a children's audience, his legacy faded away when the war ended.  However, he has recently had a major reboot to become a popular movie character who deals with modern issues.

When Captain America was first written, there was no such thing as a developer, programmer or a computer (the way we think of them, anyway).  Despite these limitations, I think there are still a lot of ways that modern Captain America is like modern developers.

So how are developers like Captain America?

Well, read on my list of reasons.

Take on Big Projects

Captain America isn’t afraid to take on big projects – and takes responsibility when the project is co-opted by the evil organization HYDRA.  Developers may not have super villains out there corrupting their work, but they know to keep on top of their projects and own what they do.

Elderly Wisdom

Steve Rogers, Captain America’s alter ego, was frozen in ice for decades, and brought back to life to solve problems. Developers can learn from this by respecting the opinions of their elders – technology is an ever-changing market, but the old-timers still have a few tricks up their sleeves!

Don’t be Afraid of Change

Don't be afraid of change.  Captain America woke up to find that the world he was accustomed to was now completely different.  He might even have felt his skills were no longer necessary.  He, and developers, know that everyone has their place in a team, though.  If you try your best, you will make it work.

Fight Your Own Battle

Sometimes you have to make it on your own.  Captain America is an integral part of the Avengers, but in his own movies, the other superheroes aren't around to back him up.  Developers, too, must learn to work both within and without a team.

Solid Integrity

One of Captain America's greatest qualities is his integrity.  His determination to do what is right, keep his word, and act honestly earns him mockery from some of the less-savory characters – even “good guys” like Iron Man.  Developers, and everyone else, need to develop the strength of character to keep their integrity.  No matter your walk of life, there will be tempting obstacles.  Think of Captain America, and say “no.”

There is a lot for all of us to learn from Captain America, to take away in our own lives, and admire in those who display it – I am specifically thinking of developers.  If you are enjoying this series as much as I am, please let me know who else you would like to see featured.

Reference: Pinal Dave (http://blog.sqlauthority.com)

MySQL – Export the Resultset to CSV file

In SQL Server, you can use the BCP command to export a result set to a CSV file. In MySQL too, you can export data from a table or result set as a CSV file in several ways. Here are two methods.

Method 1 : Make use of MySQL Workbench

If you are using MySQL Workbench as a querying tool, you can make use of its Export option in the result window. Run the following code in Workbench:

SELECT db_names FROM mysql_testing;

The result will be shown in the result window. There is an option called “File”. Click on it and it will prompt a window to save the result set (a screenshot is attached to show how the File option can be used). Choose the directory and type the name of the file.

Method 2 : Make use of OUTFILE command

You can do the export using a query with the INTO OUTFILE clause, as shown below.

SELECT db_names FROM mysql_testing
INTO OUTFILE 'C:/testing.csv'
FIELDS ENCLOSED BY '"' TERMINATED BY ';' ESCAPED BY '"'
LINES TERMINATED BY '\r\n';

After the above code executes, you will find a file named testing.csv in the C drive of the server.
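
One assumption worth checking if the statement fails with an access error: the server's secure_file_priv setting may restrict (or disable) the directories INTO OUTFILE is allowed to write to. You can inspect it like this:

SHOW VARIABLES LIKE 'secure_file_priv';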

Reference: Pinal Dave (http://blog.sqlauthority.com)