SQL Authority News – FalafelCON 2014: 2 days with the Best Developers in the World

I love presenting at various forums on various technologies. I am extremely excited that I have been invited to speak at Falafel Conference 2014 in San Francisco, where I will present two technology sessions on SQL Server. If you are into web development, or if you just want to attend a conference with the best speakers in the industry, this may be the right conference for you. What sets this conference apart from others is the technology presented as well as the speakers. Usually you have to attend a very expensive, large-scale event to hear good speakers; at this conference, quite a few industry legends will be presenting on bleeding-edge technology.

Here are a few of the reasons why I believe you should attend this conference:

  • Choose from four tracks covering Web, Mobile development and testing, Sitefinity, and Automated Testing, or attend sessions from all four!
  • Learn from the best developers and testers in the business in an intimate setting.
  • Surround yourself with your peers and take the opportunity to network
  • Learn about the latest platforms and technologies including Kendo UI, AngularJS, ASP.NET MVC, WebAPI, and more!

Here are the details for the sessions which I am going to present at Falafel Conference.

Secrets of SQL Server: Database Worst Practices
Abstract: Chances are you have heard, or even uttered, this expression. This demo-oriented session will show many examples where database professionals were dumbfounded by their own mistakes, and it could even bring back memories of your own early DBA days. The goal of this session is to expose the small details that can be dangerous to the production environment and SQL Server as a whole, as well as to talk about worst practices and how to avoid them. Shedding light on some of these perils and the tricks to avoid them may even save your current job. After attending this session, developers will only need 60 seconds to improve the performance of the database server in their SharePoint implementation. We will have a quiz during the session to keep the conversation alive. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. Additionally, all attendees of the session will have access to the learning material presented in the session.

The Unsung Hero
Abstract: Slow-running queries are the most common problem that developers face while working with SQL Server. While it is easy to blame SQL Server for unsatisfactory performance, the issue often lies with the way queries have been written and how indexes have been set up. The session will focus on ways of identifying problems that slow down SQL Server, and indexing tricks to fix them. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session.

Register Now!

I have learned from the Falafel team that they are running out of tickets and will soon close the registration. For the next 10 days the registration price is only USD 149. Trust me, you can’t get such world-class training and networking opportunities at such a low price. Click to Register Here!

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Integrate Your Data with Skyvia – Cloud ETL Solution

These days, data integration is often a key aspect of business success. For business analysts it’s very important to get integrated data from various sources, such as relational databases, cloud CRMs, etc., to make correct and successful decisions. There are various data integration solutions on the market, and today I will tell you about one of them – Skyvia.

Skyvia is a cloud data integration service that allows integrating data in cloud CRMs and different relational databases. It is a completely online solution and does not require anything except a browser. Skyvia provides powerful ETL tools for data import, export, replication, and synchronization for SQL Server, other databases, and cloud CRMs.

Packages page

You can use Skyvia data import tools to load data from various sources to SQL Server (and SQL Azure). Skyvia supports cloud CRMs such as Salesforce and Microsoft Dynamics CRM, and databases such as MySQL and PostgreSQL. You can even migrate data from SQL Server to SQL Server, or from SQL Server to other databases and cloud CRMs. Additionally, Skyvia supports import of CSV files, either uploaded manually or stored on cloud file storage services such as Dropbox, Box, and Google Drive, or on FTP servers.

Azure import

When data import is not enough, Skyvia offers bidirectional data synchronization. With this tool, you can synchronize SQL Server data with other databases and cloud CRMs.

Sync task

After performing the first synchronization, Skyvia tracks data changes in the synchronized data stores. In SQL Server databases (and other relational databases) it creates additional tracking tables and triggers. This allows synchronizing only the changed data.
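
Skyvia creates these objects itself, but to illustrate the general idea of trigger-based change tracking, here is a minimal conceptual sketch in T-SQL. It assumes a hypothetical dbo.Customer table with an INT primary key; the object names are mine for illustration and are not what Skyvia actually generates.

-- Hypothetical tracking table: stores the key and type of each change
CREATE TABLE dbo.Customer_Tracking
(
    TrackingID  INT IDENTITY(1,1) PRIMARY KEY,
    CustomerID  INT       NOT NULL,   -- primary key of the changed row
    ChangeType  CHAR(1)   NOT NULL,   -- 'I' = insert, 'U' = update, 'D' = delete
    ChangedAt   DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO

-- Hypothetical trigger that records every insert, update, and delete
CREATE TRIGGER dbo.trg_Customer_Track
ON dbo.Customer
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Rows only in "inserted" are inserts, only in "deleted" are deletes, in both are updates
    INSERT INTO dbo.Customer_Tracking (CustomerID, ChangeType)
    SELECT COALESCE(i.CustomerID, d.CustomerID),
           CASE WHEN i.CustomerID IS NOT NULL AND d.CustomerID IS NOT NULL THEN 'U'
                WHEN i.CustomerID IS NOT NULL THEN 'I'
                ELSE 'D'
           END
    FROM inserted AS i
    FULL OUTER JOIN deleted AS d ON i.CustomerID = d.CustomerID;
END;
GO

A synchronization job can then read only the rows recorded in the tracking table since the last run instead of scanning the whole source table.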

Skyvia also maps records to each other by their primary key values, so it does not require different sources to have the same primary key structure. It can still match the corresponding records without having to add any additional columns or change the data structure. The only requirement for synchronization is that primary keys must be autogenerated.

With Skyvia it’s not necessary for data to have the same structure in the integrated data stores. Skyvia supports powerful mapping mechanisms that allow synchronizing data with completely different structures. It provides support for complex mathematical and string expressions when mapping data, using lookups, etc.

Relation mapping

You may use data splitting – loading data from a single CSV file or source table into multiple related target tables. Or you may load data from several source CSV files or tables into several related target tables. In each case Skyvia preserves data relations and builds the corresponding relations between the target data automatically.

When you work with cloud CRM data often, the native CRM data reporting and analysis tools may not be enough for you, and there is a vast set of professional data analysis and reporting tools available for SQL Server. With Skyvia you can quickly copy your cloud CRM data to a SQL Server database and apply the corresponding SQL Server tools to the data.

In such a case you can use Skyvia data replication tools. They allow you to quickly copy cloud CRM data to SQL Server or other databases without customizing any mapping. You just need to specify the columns to copy data from; the target database tables will be created automatically. Skyvia offers powerful filtering settings to replicate only the records you need.

Replication package

Skyvia also provides the capability to export data from SQL Server (including SQL Azure) and other databases and cloud CRMs to CSV files. These files can either be downloaded manually or loaded to cloud file storages or FTP servers. You can use export, for example, to back up SQL Azure data to Dropbox.

Export package

Any data integration operation can be scheduled for automatic execution. Thus, you can automate your SQL Azure data backup or data synchronization – just configure it once, then schedule it, and benefit from automatic data integration with Skyvia.

Defined schedule

Currently, registration and use of Skyvia are completely free, so you can try it yourself and find out whether its data migration and integration tools suit you. Visit this link to register on Skyvia: https://app.skyvia.com/register

Reference: Pinal Dave (http://blog.sqlauthority.com)

Database – Beginning with Cloud Database As A Service

I love my weekend projects. Everybody does different activities on their weekend – like traveling, reading, or just nothing. Every weekend I try to do something creative and different in the database world. The goal is to learn something new, and if I enjoy my learning experience I share it with the world. This weekend, I decided to explore Cloud Database As A Service – Morpheus. In my career I have managed many databases in the cloud and I have good experience in managing them.

I should highlight that today’s applications use multiple databases: SQL for transactions and analytics, NoSQL for documents, In-Memory for caching, and indexing for search. Provisioning and deploying these databases often requires extensive expertise and time. Often these databases are also not deployed on the same infrastructure, which can create unnecessary latency between the application layer and the databases. Not to mention the different quality of service depending on the infrastructure and the service provider where they are deployed.

Moreover, there are additional problems that I have experienced with traditional database setup when hosted in the cloud:

  • Database provisioning & orchestration
  • Slow speed due to hardware issues
  • Poor Monitoring Tools
  • High network latency

Now, if you have great software and expert network engineers, you can continuously work on the above problems and overcome them. However, not every organization has the luxury of having top-notch experts in the field. The above issues are related to infrastructure, but there are a few more problems which are related to the software/application side as well.

Here are the top three things which can become problems if you do not have an application expert:

  • Replication and Clustering
  • Simple provisioning of the hard drive space
  • Automatic Sharding

Well, Morpheus looks like a product built by experts who have faced similar situations in the past. The product pretty much addresses all the pain points of developers and database administrators.

What is different about Morpheus is that it offers a variety of databases as a service, from MySQL, MongoDB, and ElasticSearch to Redis. Thus users can pick and choose any combination of these databases. All of them can be provisioned in a matter of minutes with a simple and intuitive point-and-click user interface. The Morpheus cloud is built on Solid State Drives (SSD) and is designed for high-speed database transactions. In addition, it offers a direct link to Amazon Web Services to minimize latency between the application layer and the databases.

Here are a few steps on how one can get started with Morpheus. Follow along with me. First, go to http://www.gomorpheus.com and register for a new and free account.

Step 1: Signup

It is very simple to sign up for Morpheus.

Step 2: Select your database


I use MySQL for my daily routine, so I selected MySQL. Upon clicking the big red button to add an instance, it prompted a dialog for creating a new instance.


Step 3: Create User

Now we just have to create a user in the portal which we will use to connect to the database hosted at Morpheus. Click on your database instance and it will bring you to the User screen. Over here you will notice, once again, a big red button to create a new user. I created a user with my first name.


Step 4: Configure your MySQL client

I used MySQL Workbench and connected to the MySQL instance I had created, using its IP address and the user.


That’s it! You are connected to the MySQL instance. Now you can create your objects just like you would on your local box. You will have all the features of Morpheus when you are working with your database.
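
For example, a quick smoke test after connecting could look like the following. This is only a minimal sketch; the database, table, and column names are placeholders I chose for illustration, not anything Morpheus requires.

-- Create a small test database and table on the newly provisioned MySQL instance
CREATE DATABASE IF NOT EXISTS weekend_project;
USE weekend_project;

CREATE TABLE notes
(
    id         INT AUTO_INCREMENT PRIMARY KEY,
    title      VARCHAR(100) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Insert and read back a row to confirm the connection works end to end
INSERT INTO notes (title) VALUES ('Hello from Morpheus');
SELECT * FROM notes;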

Dashboard

While working with Morpheus, I was most impressed with its dashboard. In future blog posts, I will write more about this feature. Also, with Morpheus you use the same process for provisioning and connecting to the other databases: MongoDB, ElasticSearch, and Redis.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to Roll Back SQL Server Database Changes

In a perfect scenario, no unexpected and unplanned changes occur. There are no unpleasant surprises, no inadvertent changes. However, even with all precautions and testing, there is sometimes a need to revert a structure or data change.

One of the methods that can be used in this situation is to restore an older database backup that has the records or database object structure you want to revert to. For this method, you have to have an adequate full database backup, and a tool that helps with comparison and synchronization is preferred.

In this article, we will focus on another method: rolling back the changes.

This can be done by using:

  • An option in SQL Server Management Studio
  • T-SQL, or
  • ApexSQL Log

The first two solutions have been described in this article.

The disadvantages of these methods are that you have to know exactly when the change you want to revert happened, and that all transactions executed on the database in that time range are rolled back – the ones you want to undo and the ones you don’t.
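
To make that limitation concrete, the T-SQL route typically means a point-in-time restore, for example with the STOPAT option. The sketch below is only illustrative; the database name, backup paths, logical file names, and timestamp are placeholders and must be adjusted to your environment.

-- Restore a copy of the full backup without recovery...
RESTORE DATABASE AdventureWorks2012_Copy
FROM DISK = 'D:\Backup\AdventureWorks2012_Full.bak'
WITH MOVE 'AdventureWorks2012_Data' TO 'D:\Data\AdventureWorks2012_Copy.mdf',
     MOVE 'AdventureWorks2012_Log'  TO 'D:\Data\AdventureWorks2012_Copy.ldf',
     NORECOVERY;

-- ...then roll the log forward to a point just before the unwanted change
RESTORE LOG AdventureWorks2012_Copy
FROM DISK = 'D:\Backup\AdventureWorks2012_Log.trn'
WITH STOPAT = '2014-06-01T10:15:00',
     RECOVERY;

-- Everything after the STOPAT time is lost in the restored copy,
-- including the transactions you never intended to undo.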

How to easily roll back SQL Server database changes using ApexSQL Log?

The biggest challenge is to roll back just specific changes, not all changes that happened in a specific time range.

While the SQL Server Management Studio option and T-SQL read and roll forward all transactions in the transaction log files, I will show you a solution that finds and scripts only the specific changes that match your criteria. Therefore, you don’t need to worry about all the other database changes that you don’t want to roll back.

ApexSQL Log is a SQL Server disaster recovery tool that reads transaction logs and provides a wide range of filters that enable you to easily roll back only specific data changes.

First, connect to the online database where you want to roll back the changes.

Once you select the database, ApexSQL Log will show its recovery model. Note that changes can be rolled back even for a database in the Simple recovery model, when no database and transaction log backups are available. However, ApexSQL Log achieves best results when the database is in the Full recovery model and you have a chain of subsequent transaction log backups, back to the moment when the change occurred.

In this example, we will use only the online transaction log.

In the next step, use filters to read only the transactions that happened in a specific time range.

To remove noise, it’s recommended to use as many filters as possible. Besides filtering by the time of the transaction, ApexSQL Log can filter by the operation type:

Table name:

It can also filter by transaction state (committed, aborted, running, and unknown), the name of the user who committed the change, specific field values, server process IDs, and transaction description.

You can select only the tables affected by the changes you want to roll back. However, if you’re not certain which tables were affected, you can leave them all selected, and once the results are shown in the main grid, analyze them to find the ones you want to roll back.

When you set the filters, you can select how to present the results. ApexSQL Log can automatically create undo or redo scripts, export the transactions into an XML, HTML, CSV, SQL, or SQL Bulk file, and create a batch file that you can use for unattended transaction log reading.

In this example, I will open the results in the grid, as I want to analyze them before rolling back the transactions.

The results contain information about each transaction, as well as who made it and when.

For UPDATEs, ApexSQL Log shows both old and new values, so you can easily see what has happened.

To create an UNDO script that rolls back the changes, select the transactions you want to roll back and click Create undo script in the menu.

For the DELETE statement selected in the screenshot above, the undo script is:

INSERT INTO [Sales].[PersonCreditCard] ([BusinessEntityID], [CreditCardID], [ModifiedDate])
VALUES (297, 8010, '20050901 00:00:00.000')

When it comes to rolling back database changes, ApexSQL Log has a big advantage, as it rolls back only specific transactions while leaving all other transactions that occurred in the same time range intact. That makes ApexSQL Log a good solution for rolling back inadvertent data and schema changes on your SQL Server databases.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL Authority News – Download and Install Adventure Works 2014 Sample Databases

If you are using SQL Server, there is a good chance that you are familiar with AdventureWorks. AdventureWorks is a sample database shipped with SQL Server, and it can be downloaded from the CodePlex site. AdventureWorks replaced Northwind and Pubs as the sample database in SQL Server 2005. The Microsoft team keeps updating the sample database as new versions are released. I use the AdventureWorks database for most of my examples, as it is an easy-to-use sample database that is accessible to most people out there.

Every new version of SQL Server has its own AdventureWorks database. The reason is that SQL Server comes up with new features in every version, and most of the new features need a new sample dataset to demonstrate their capabilities. This is why every version of SQL Server has its own AdventureWorks database.

SQL Server 2014 has many new features, and to support them Microsoft has released the new AdventureWorks 2014 sample database.

You can download Adventure Works 2014 Sample Databases from here.

Here is a quick tutorial on how one can install the AdventureWorks database on your server.
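
The download ships as a full database backup (.bak file), so installation is essentially a restore. Here is a minimal sketch, assuming the backup was saved to a local folder; the paths below are placeholders, and it is worth confirming the logical file names with RESTORE FILELISTONLY before adjusting the MOVE clauses.

-- Inspect the logical file names contained in the downloaded backup
RESTORE FILELISTONLY
FROM DISK = 'D:\Downloads\AdventureWorks2014.bak';

-- Restore the sample database, moving its files to your data folder
RESTORE DATABASE AdventureWorks2014
FROM DISK = 'D:\Downloads\AdventureWorks2014.bak'
WITH MOVE 'AdventureWorks2014_Data' TO 'D:\Data\AdventureWorks2014_Data.mdf',
     MOVE 'AdventureWorks2014_Log'  TO 'D:\Data\AdventureWorks2014_Log.ldf',
     RECOVERY;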

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Microsoft SQL Server Migration Assistant V6.0 Released

Every company makes a different decision about its database platform when it starts, but as it moves forward it matures and makes decisions based on experience and the best interests of the organization. Quite a few organizations start out with databases like Sybase, MySQL, Oracle, or Access, and as time passes they learn that they want to move to a different platform. Microsoft makes this easy for SQL Server professionals by releasing various Migration Assistant tools. Last week, Microsoft released Microsoft SQL Server Migration Assistant v6.0.

Here are the different tools released last week to migrate various products to SQL Server.

Microsoft SQL Server Migration Assistant v6.0 for Sybase
SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from Sybase Adaptive Server Enterprise (ASE) to SQL Server and Azure SQL DB. SSMA automates all aspects of migration, including migration assessment analysis, schema and SQL statement conversion, data migration, as well as migration testing.

Microsoft SQL Server Migration Assistant v6.0 for MySQL
SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from MySQL to SQL Server and Azure SQL DB. SSMA automates all aspects of migration, including migration assessment analysis, schema and SQL statement conversion, data migration, as well as migration testing.

Microsoft SQL Server Migration Assistant v6.0 for Oracle
SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from Oracle to SQL Server and Azure SQL DB. SSMA automates all aspects of migration, including migration assessment analysis, schema and SQL statement conversion, data migration, as well as migration testing.

Microsoft SQL Server Migration Assistant v6.0 for Access
SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from Access to SQL Server. SSMA for Access automates the conversion of Microsoft Access database objects to SQL Server database objects, loads the objects into SQL Server and Azure SQL DB, and then migrates data from Microsoft Access to SQL Server and Azure SQL DB.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP

In SQL Server 2014 Microsoft has introduced a new database engine component called In-Memory OLTP, aka project “Hekaton”, which is fully integrated into the SQL Server Database Engine. It is optimized for OLTP workloads accessing memory-resident data. In-Memory OLTP helps us create memory-optimized tables, which in turn offer significant performance improvements for typical OLTP workloads.

The main objective of memory-optimized tables is to ensure that highly transactional tables can live in memory and remain in memory forever without losing even a single record. The most significant part is that they still support the majority of Transact-SQL statements. Transact-SQL stored procedures can be compiled to machine code for further performance improvements on memory-optimized tables. This engine is designed to ensure higher concurrency and minimal blocking. In-Memory OLTP alleviates the issue of locking by using a new type of multi-version optimistic concurrency control. It also substantially reduces waiting for log writes by generating far less log data and needing fewer log writes.

Points to remember

  • Memory-optimized tables refer to tables using the new data structures and keywords added as part of In-Memory OLTP.
  • Disk-based tables refer to the normal tables we have been creating in SQL Server since its inception. These tables use fixed-size 8 KB pages that need to be read from and written to disk as a unit.
  • Natively compiled stored procedures refer to a new object type supported by the In-Memory OLTP engine, which compiles them into machine code and can further improve data access performance for memory-optimized tables. Natively compiled stored procedures can only reference memory-optimized tables; they can’t be used to reference any disk-based table (a brief sketch follows this list).
  • Interpreted Transact-SQL stored procedures are what SQL Server has always used.
  • Cross-container transactions refer to transactions that reference both memory-optimized tables and disk-based tables.
  • Interop refers to interpreted Transact-SQL that references memory-optimized tables.
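
To give a feel for the syntax, here is a minimal sketch of a natively compiled stored procedure. It is not part of the original demo; it assumes the Mem_Table table defined later in this article lives in the dbo schema.

-- A natively compiled procedure must be schema-bound, specify an execution
-- context, and wrap its body in an ATOMIC block
CREATE PROCEDURE dbo.usp_InsertMemTable
    @Name VARCHAR(32),
    @City VARCHAR(32)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- Natively compiled procedures can only touch memory-optimized tables
    INSERT INTO dbo.Mem_Table ([Name], [City], [State_Province], [LastModified])
    VALUES (@Name, @City, NULL, GETDATE());
END;
GO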

Using In-Memory OLTP

The In-Memory OLTP engine has been available as part of SQL Server 2014 since the June 2013 CTPs. Installation of In-Memory OLTP is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014; they are not available with 32-bit editions.

Creating Databases

Any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files needed by SQL Server to recover the memory-optimized tables, and although the syntax for creating the filegroup is almost the same as for creating a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA. Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables:

CREATE DATABASE InMemoryDB
ON PRIMARY
(NAME = [InMemoryDB_data],
FILENAME = 'D:\data\InMemoryDB_data.mdf', SIZE = 500MB),
FILEGROUP [SampleDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA
(NAME = [InMemoryDB_mod_dir1],
FILENAME = 'S:\data\InMemoryDB_mod_dir'),
(NAME = [InMemoryDB_mod_dir2], -- logical file names must be unique within a database
FILENAME = 'R:\data\InMemoryDB_mod_dir')
LOG ON
(NAME = [SampleDB_log],
FILENAME = 'L:\log\InMemoryDB_log.ldf', SIZE = 500MB)
COLLATE Latin1_General_100_BIN2;

The above example code creates files on three different drives (D:, S:, and R:) for the data files and in-memory storage, so if you would like to run this code, kindly change the drive and folder locations to suit your environment. Also notice that a binary Windows (non-SQL) collation was specified. A BIN2 collation is the only collation supported at this point for indexes on memory-optimized tables.

It is also possible to add a MEMORY_OPTIMIZED_DATA filegroup to an existing database; use the command below to achieve the same.

ALTER DATABASE AdventureWorks2012 ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA;
GO
ALTER DATABASE AdventureWorks2012 ADD FILE (NAME='hekaton_mod', FILENAME='S:\data\hekaton_mod') TO FILEGROUP hekaton_mod;
GO

Creating Tables

There is no major syntactical difference between creating a disk-based table and a memory-optimized table, but there are a few restrictions and a few new essential extensions. Essentially, any memory-optimized table should use the MEMORY_OPTIMIZED = ON clause, as shown in the CREATE TABLE query example.

DURABILITY clause (SCHEMA_AND_DATA or SCHEMA_ONLY)

A memory-optimized table should always be defined with a DURABILITY value, which can be either SCHEMA_AND_DATA or SCHEMA_ONLY, the former being the default. A memory-optimized table defined with DURABILITY = SCHEMA_ONLY will not persist the data to disk, which means the data durability is compromised, whereas DURABILITY = SCHEMA_AND_DATA ensures that data is persisted along with the schema.

Indexing Memory Optimized Table

A memory-optimized table created with DURABILITY = SCHEMA_AND_DATA must always have an index, and this can be achieved by declaring a PRIMARY KEY constraint at the time of creating the table. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified.

CREATE TABLE Mem_Table
(
[Name] VARCHAR(32) NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
[City] VARCHAR(32) NULL,
[State_Province] VARCHAR(32) NULL,
[LastModified] DATETIME NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

As you can see in the above query example, we have used the clause MEMORY_OPTIMIZED = ON to make sure the table is treated as a memory-optimized table and not just a normal table, and we have also used DURABILITY = SCHEMA_AND_DATA, which means it will persist data along with the metadata. You can also notice this table has a PRIMARY KEY declared upfront, which is mandatory for memory-optimized tables defined with SCHEMA_AND_DATA.

We will talk more about HASH Indexes and BUCKET_COUNT in later articles on this topic which will be focusing more on Row and Index storage on Memory-Optimized tables. So stay tuned for that as well.

Now that we have covered the basics of memory-optimized tables and understood the key things to remember while using them, let’s explore some examples to understand the performance gains of memory-optimized tables.

I will be using the database which I created earlier in this article, i.e. InMemoryDB, in the demo exercise below.

USE InMemoryDB
GO
-- Creating a disk based table
CREATE TABLE dbo.Disktable
(
Id INT IDENTITY,
Name CHAR(40)
)
GO
CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id)
GO
-- Creating a memory optimized table with similar structure and DURABILITY = SCHEMA_AND_DATA
CREATE TABLE dbo.Memorytable_durable
(
Id INT NOT NULL PRIMARY KEY NONCLUSTERED Hash WITH (bucket_count =1000000),
Name CHAR(40)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO
-- Creating another memory-optimized table with a similar structure but DURABILITY = SCHEMA_ONLY
CREATE TABLE dbo.Memorytable_nondurable
(
Id INT NOT NULL PRIMARY KEY NONCLUSTERED Hash WITH (bucket_count =1000000),
Name CHAR(40)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
GO
-- Now insert 100000 records into dbo.Disktable and observe the time taken
DECLARE @i_t BIGINT
SET @i_t = 1
WHILE @i_t <= 100000
BEGIN
INSERT INTO dbo.Disktable (Name) VALUES ('sachin' + CONVERT(VARCHAR, @i_t))
SET @i_t += 1
END
GO
-- Do the same inserts for the memory-optimized table dbo.Memorytable_durable and observe the time taken
-- (the GO above starts a new batch so that @i_t can be declared again)
DECLARE @i_t BIGINT
SET @i_t = 1
WHILE @i_t <= 100000
BEGIN
INSERT INTO dbo.Memorytable_durable VALUES (@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
SET @i_t += 1
END
GO
-- Now finally do the same inserts for the memory-optimized table dbo.Memorytable_nondurable and observe the time taken
DECLARE @i_t BIGINT
SET @i_t = 1
WHILE @i_t <= 100000
BEGIN
INSERT INTO dbo.Memorytable_nondurable VALUES (@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
SET @i_t += 1
END
GO

The above three inserts took 1.20 minutes, 54 seconds, and 2 seconds respectively to insert 100,000 records on my machine with 8 GB RAM.
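
If you want to capture these timings yourself rather than eyeballing the Messages tab, a simple approach (just a sketch, not part of the original demo) is to use SET STATISTICS TIME or a manual timestamp around each batch:

-- Option 1: let SQL Server report CPU and elapsed time for each statement
SET STATISTICS TIME ON;
-- ... run one of the insert loops here ...
SET STATISTICS TIME OFF;

-- Option 2: measure elapsed time manually within the same batch
DECLARE @start DATETIME2 = SYSDATETIME();
-- ... run the insert loop here ...
SELECT DATEDIFF(MILLISECOND, @start, SYSDATETIME()) AS ElapsedMilliseconds;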

This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional tables, and memory-optimized tables with durability SCHEMA_ONLY are even faster, as they do not bother persisting data to disk. Koenig Solutions is one of the few organizations which offer IT training on SQL Server 2014 and all its updates.

Now, I leave the decision on using memory-optimized tables to you. I hope you liked this article and that it helped you understand the fundamentals of In-Memory OLTP.

Reference: Pinal Dave (http://blog.sqlauthority.com)