SQL SERVER 2012 Editions – Highlights of The Cloud-Ready Information Platform

Microsoft has just announced SQL Server 2012 edition information on the official SQL Server 2012 site.

SQL Server 2012 will be available in three main editions:

  1. Enterprise
  2. Business Intelligence
  3. Standard

The other editions are Web, Developer and Express.

Here are the salient features of each edition:

Enterprise

  • Advanced high availability with AlwaysOn
  • High performance data warehousing with ColumnStore
  • Maximum virtualization (with Software Assurance)
  • Inclusive of Business Intelligence edition’s capabilities

Business Intelligence

  • Rapid data discovery with Power View
  • Corporate and scalable reporting and analytics
  • Data Quality Services and Master Data Services
  • Inclusive of the Standard edition’s capabilities

Standard

  • Standard continues to offer basic database, reporting and analytics capabilities
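
As a quick aside (not part of Microsoft's announcement), if you want to confirm which edition and build a given instance is running, a simple query against SERVERPROPERTY will tell you. A minimal sketch, using standard T-SQL:

-- Check which edition and version the connected instance is running.
SELECT
    SERVERPROPERTY('Edition')        AS Edition,        -- e.g. 'Enterprise Edition (64-bit)'
    SERVERPROPERTY('ProductVersion') AS ProductVersion, -- 11.0.x indicates SQL Server 2012
    SERVERPROPERTY('ProductLevel')   AS ProductLevel;   -- e.g. 'RTM', 'SP1'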

There is a comparison chart covering various other aspects of the above editions. Please refer here.

Additionally, SQL Server 2012 licensing is also explained here.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQLAuthority News – SQL Server Interview Questions And Answers Book Summary

Today we use computers for various activities, motor vehicles for traveling to places, and mobile phones for conversation. How many of us can claim the invention of the microprocessor, the basic wheel, or the telegraph? Similarly, this book was not written overnight. The journey of this book goes back many years, and there are many individuals to thank along the way.

To begin with, we want to thank all those interviewers who reject interviewees by saying they need to know ‘the key things’, regardless of their high grades in class. The whole concept of interview questions and answers revolves around knowing those ‘key things’.

The core concept of this book will continue to evolve over time. I am sure many of you will come along with us on this journey and submit your suggestions to us to make this book a key reference for anybody who wants to start with SQL Server. Today we want to acknowledge the fact that you will help us keep this book alive forever with the latest updates. We want to thank everyone who participates in this journey with us.

Though each of these chapters is geared towards convenience, we highly recommend reading every section irrespective of the role you currently perform, since each section contains some interesting trivia about working with SQL Server. In the industry, the role of the accidental DBA (especially with SQL Server) is very common. Hence, if you have performed the role of a DBA for a short stint and want to brush up on your fundamentals, the upcoming sections will be a great review.

Table Of Contents

  • Database Concepts with SQL Server
  • Common Generic Questions & Answers
  • Common Developer Questions
  • Common Tricky Questions
  • Miscellaneous Questions on SQL Server 2008
  • DBA Skills Related Questions
  • Data Warehousing Interview Questions & Answers
  • General Best Practices

[Amazon] | [Flipkart]

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQLAuthority News – New Book Released – SQL Server Interview Questions And Answers

Two days ago, on the birthday of my blog, I asked a simple question – Guess! What is in this box?

I have received lots of interesting comments on the blog about what is in it. Many of you got it absolutely incorrect and many got close to the right answer, but no one got it 100% correct. Well, no issue at all; I am going to give the prize to whoever had the closest answer first, in a personal email.

Here is the answer to the question about what is in the box. Here it is – the box has my new book. In fact, I should say our new book, as I co-authored it with my very good friend Vinod Kumar. We had a real blast writing this book together and had lots of interesting conversations while writing it. This book has one simple goal – “master the basics.”

This book is not only for people who are preparing for an interview. It is for everyone who wants to revisit the basics and prepare themselves for the technology. One always needs practical knowledge to do their job efficiently. This book talks about more than the basics. There are multiple ways to present learning – we can either create a simple book or make it interesting. We decided the learning should be interactive and opted for the Interview Questions and Answers format.

Here is a quick interview which we did together.

Details of the book are here.

The core concept of this book will continue to evolve over time. I am sure many of you will come along with us on this journey and submit your suggestions to us to make this book a key reference for anybody who wants to start with SQL Server. Today we want to acknowledge the fact that you will help us keep this book alive forever with the latest updates. We want to thank everyone who participates in this journey with us.

You can get the books from [Amazon] | [Flipkart].

Read Vinod’s blog post. Do not forget to wish him a happy birthday, as today is his birthday and also the book release day – two reasons to congratulate him.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

One of the most common data integration tasks I run into is a desire to move data from a file into a database table.  Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult.  What these users really need is a point and click approach that minimizes the learning curve for the data integration tool.  This is what CSVexpress (www.CSVexpress.com) is all about!  It is based on expressor Studio, a data integration tool I’ve been reviewing over the last several months.

With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators.   There’s no need to learn the intricacies of the data integration tool or to write code.  Let’s look at an example.

Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary.

EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY
102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000"
101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000"
103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000"
304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000"
333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000"
100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000"
334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000"
400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000"

Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping.  In moving this data to a database table I want to express the dates using a format that includes the century since it’s obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character.  Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio.  Directives for these modifications are included in the description of the incoming data.
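
To make the date handling concrete, here is a small illustration in plain T-SQL (my own sketch, entirely outside of expressor Studio) of how a two-digit year such as ‘98’ resolves to the 20th century; it assumes a us_english session language and SQL Server’s default two digit year cutoff setting of 2049:

-- Illustration only: resolving the file's two-digit year into a full datetime.
-- Under SQL Server's default 'two digit year cutoff' (2049), '98' becomes 1998.
SET LANGUAGE us_english;
SELECT CAST('24-JUL-98 17:00' AS DATETIME) AS TerminationDate;  -- 1998-07-24 17:00:00.000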

Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored.  For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information.  With expressor Studio, I use wizards to create these artifacts.

To begin, I click New Connection > File Connection in the Home tab of expressor Studio’s ribbon bar, which starts the File Connection wizard. In the first window, I enter the path to the directory that contains the input file. Note that the file connection artifact only specifies the file system location, not the name of the file.

Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact.

To create the Database Connection artifact, I must know the location, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table. To use expressor Studio’s features to the fullest, this account should also have the authority to create a table.

I click New Connection > Database Connection in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard. expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the “Supplied database drivers” drop down control. If my desired RDBMS isn’t listed, I can optionally use an existing ODBC DSN by selecting the “Existing DSN” radio button.

In the following window, I enter the connection details. With Microsoft SQL Server, I may choose to use Windows Authentication rather than account credentials. After clicking Next, I enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact.

Now I create a schema artifact, which describes the structure of the file data. When expressor reads a file, all data fields are typed as strings. In some use cases this may be exactly what is needed and there is no need to edit the schema artifact. But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and department IDs to integers, and convert the salary to a decimal value.

Again a wizard is used to create the schema artifact. I click New Schema > Delimited Schema in the Home tab of expressor Studio’s ribbon bar, which starts the Delimited Schema wizard. In the first window, I click Get Data from File, which then displays a listing of the file connections in the project. When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard.


I now view the file’s content and confirm that the appropriate delimiter characters are selected in the “Field Delimiter” and “Record Delimiter” drop down controls; then I click Next.

Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking “Set All Names from Selected Row.” Alternatively, I could enter a different identifier into the Field Details > Name text box. I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact.

Now I open the schema artifact in the schema editor.  When I first view the schema’s content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file.  To change an attribute’s name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Attribute window; I can change the attribute name and select the desired type from the “Data type” drop down control.  In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).  Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type, for the StartingDate and TerminationDate attributes, I select Datetime as the data type, and for the FinalSalary attribute, I select the Decimal type.
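
For reference, a target table matching these attribute names and types might look like the sketch below. This is purely an assumption for illustration – the table name dbo.TerminatedEmployees is my own placeholder, and expressor Studio can create the actual table for you later in the walkthrough:

-- Hypothetical target table matching the attribute names and types chosen in the schema editor.
CREATE TABLE dbo.TerminatedEmployees
(
    EmployeeID      INT            NOT NULL,
    StartingDate    DATETIME       NOT NULL,
    TerminationDate DATETIME       NOT NULL,
    JobDescription  VARCHAR(40)    NOT NULL,  -- the 40-character width is set later in the walkthrough
    DepartmentID    INT            NOT NULL,
    FinalSalary     DECIMAL(10, 2) NOT NULL
);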

But I can do much more in the schema editor.  For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999.  I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date).
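
In expressor Studio these constraints live in the schema artifact, not in the database, but comparable guards expressed directly in T-SQL against the hypothetical table above would look something like this sketch:

-- Comparable date-range guards as table constraints (illustration only, not generated by the tool).
ALTER TABLE dbo.TerminatedEmployees
    ADD CONSTRAINT CK_StartingDate_AfterFounding
            CHECK (StartingDate > '1980-01-01T00:00:00'),
        CONSTRAINT CK_TerminationDate_BeforeCutoff
            CHECK (TerminationDate < '1999-12-31T23:59:00');  -- 11:59 PM on December 31, 1999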

As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file.

I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file.  Note the use of Y01 as the syntax for the year.  This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century.  As each datetime value is read from the file, the year values are transformed into century and year values.
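
Written out explicitly, the pivot rule behind Y01 amounts to the small piece of logic below (my own T-SQL rendering of the rule as described, not expressor’s internal code; I am assuming the boundary year 01 itself maps to 2001):

-- Century-pivot logic for a two-digit year with a pivot of 01:
-- years greater than 01 map to 19xx, years 00 and 01 map to 20xx.
DECLARE @yy INT = 98;  -- two-digit year value read from the file
SELECT CASE WHEN @yy > 1 THEN 1900 + @yy ELSE 2000 + @yy END AS FullYear;  -- returns 1998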

For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes.

And now to the FinalSalary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the “Use currency” check box. This indicates that the file data will include the dollar sign (or in Europe the Pound or Euro sign), which should be removed.

And on the Grouping tab, I select the “Use grouping” checkbox and enter 3 into the “Group size” text box, a comma into the “Grouping character” text box, and a decimal point into the “Decimal separator” character text box.

These entries allow the string to be properly converted into a decimal value.
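
Outside the tool, the same cleanup can be expressed in a line or two of T-SQL; a rough sketch of the equivalent conversion:

-- Illustration only: strip the currency symbol and grouping commas, then convert to DECIMAL.
DECLARE @raw VARCHAR(20) = '$85,000';
SELECT CAST(REPLACE(REPLACE(@raw, '$', ''), ',', '') AS DECIMAL(10, 2)) AS FinalSalary;  -- 85000.00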

By making these entries into the schema that describes my input file, I’ve specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself.

Assembling the data integration application is simple.  Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator.

Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio.  For each property, I can select an appropriate entry from the corresponding drop down control.  Clicking on the button to the right of the “File name” text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file.  I indicate also that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped.

I then select the Write Table operator and in its Properties panel specify the database connection, normal for the “Mode,” and the “Truncate” and “Create Missing Table” options.  If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator.
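
Conceptually, the “Truncate” and “Create Missing Table” options behave like the following T-SQL, reusing the hypothetical table from earlier. This is only a sketch of the equivalent behaviour, not the statements expressor actually executes:

-- Rough equivalent of the 'Create Missing Table' and 'Truncate' options (illustration only).
IF OBJECT_ID(N'dbo.TerminatedEmployees', N'U') IS NULL
BEGIN
    CREATE TABLE dbo.TerminatedEmployees
    (
        EmployeeID      INT            NOT NULL,
        StartingDate    DATETIME       NOT NULL,
        TerminationDate DATETIME       NOT NULL,
        JobDescription  VARCHAR(40)    NOT NULL,
        DepartmentID    INT            NOT NULL,
        FinalSalary     DECIMAL(10, 2) NOT NULL
    );
END
ELSE
BEGIN
    TRUNCATE TABLE dbo.TerminatedEmployees;  -- empty the existing table before the load
END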

The last task needed to complete the application is to create the schema artifact used by the Write Table operator. This is extremely easy, as another wizard is capable of using the schema artifact assigned to the Read File operator to create a schema artifact for the Write Table operator. In the Write Table Properties panel, I click the drop down control to the right of the “Schema” property and select “New Table Schema from Upstream Output…” from the drop down menu.

The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist. The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection. The fourth screen gives me the opportunity to fine-tune the table’s description. In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the FinalSalary column. I also provide the name for the table.

This completes development of the application.  The entire application was created through the use of wizards and the required data transformations specified through simple constraints and specifications rather than through coding.  To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials.  expressor Studio is as close to a point and click data integration tool as one could want and I urge you to try this product if you have a need to move data between files or from files to database tables.

Check out CSVexpress in more detail. It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support reading data from and writing data to Salesforce.com.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQLAuthority News – 5th Anniversary Giveaways

Please read my 5th Anniversary post and my quick note on history of the Database.

I am sure that we all have friends and we value friendship more than anything. In fact, the complete model of Facebook is built on friends. If you have lots of friends, you must be a lucky person. Having a lot of friends is indeed a good thing.
I consider all of you blog readers my friends, so now I want to do something for you. What is it? Well, send me details about how many of your friends like my page and you will have a chance to win lots of learning materials for yourself and your friends. Here are the exciting prizes awaiting the lucky winners:

Combo set of 5 Joes 2 Pros Books – 1 for YOU and 1 for a Friend

This is a gift worth USD 444 (each set is worth USD 222). It contains all five Joes 2 Pros books (Vol1, Vol2, Vol3, Vol4, Vol5) + 1 Learning DVD. [Amazon] | [Flipkart]

If you submitted an entry but didn’t win the combo set of 5 Joes 2 Pros books, you could still win my SQL Server Wait Stats book as a consolation prize! I will pick the next 5 participants who have the highest number of friends who “liked” the Facebook page, http://facebook.com/SQLAuth.
Instead of sending one copy, I will send you 2 copies so you can share one with a friend of yours. Well, it is important to share our learning and love with friends, isn’t it?
Note: Just take a screenshot of http://facebook.com/SQLAuth using the Print Screen function and send it by Nov 7th to pinal ‘at’ sqlauthority.com. There are no special freebies for early birds, so take your time and see if you can increase your friends’ like count by Nov 7th.

Guess – What is in it?

It is quite possible you are not a Facebook or Twitter user. In that case you can still win a surprise from me. You have 2 days to guess what is in this box. If you guess correctly and are one of the first 5 people with the correct answer, you will get what is in this box for free. Please note that you have only 48 hours to guess. Please give me your guess by commenting on this blog post.

Reference:  Pinal Dave (http://blog.SQLAuthority.com)

SQLAuthority News – History of the Database – 5 Years of Blogging at SQLAuthority

Don’t miss the contest: Participate in the 5th Anniversary Contest

 

Today is this blog’s birthday, and I want to do a fun, informative blog post. Five years ago today I started this blog. The intention – my personal web log. I wrote this blog for myself, and still today whatever I learn I share here. I don’t want to wander too far off topic, though, so I will write about two of my favorite things – history and databases. And what better way to cover these two topics than to talk about the history of databases?

If you want to be technical, databases as we know them today only date back to the late 1960s and early 1970s, when computers began to keep records and store memories. But the idea of memory storage didn’t just appear 40 years ago – there was a long history behind wanting to keep these records.

In fact, the written word originated as a way to keep records – ancient man didn’t suddenly decide he wanted to read novels; he needed a way to keep track of the harvest, of the flocks, and of the tributes paid to the local lord. And that is how writing and the database began. You could consider the cave paintings from 17,000 years ago at Lascaux, France, or the clay tokens of the ancient Sumerians from 8,000 BC to be the first instances of record keeping – and thus databases.

If you prefer, you can consider the advent of written language to be the first database.  Many historians believe the first written language appeared in the 37th century BC, with Egyptian hieroglyphics. The ancient Sumerians, not to be outdone, also created their own written language within a few hundred years.

Databases could be more closely described as collections of information, in which case the Sumerians win the prize for the first archive.  A collection of 20,000 stone tablets was unearthed in 1964 near the modern day city Tell Mardikh, in Syria.  This ancient database is from 2,500 BC, and appears to be a sort of law library where apprentice-scribes copied important documents.  Further archaeological digs hope to uncover the palace library, and thus an even larger database.

Of course, the most famous ancient database would have to be the Royal Library of Alexandria, the great collection of records and wisdom in ancient Egypt. It was created by Ptolemy I, and existed from 300 BC through 30 AD, when Julius Caesar effectively erased the hard drives by accidentally setting fire to it. As any programmer who has forgotten to hit “save” or has experienced a sudden power outage knows, thousands of hours of work can be lost in a single instant.

Databases existed in very similar conditions up until recently.  Cuneiform tablets gave way to papyrus, which led to vellum, and eventually modern paper and the printing press.  Someday the databases we rely on so much today will become another chapter in the history of record keeping.  Who knows what the databases of tomorrow will look like!

Reference:  Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Database Dynamic Caching by Automatic SQL Server Performance Acceleration

My second look at SafePeak’s new version (2.1) revealed a few additional interesting features. For those of you who haven’t read my previous SafePeak reviews and are not familiar with the product, here is a quick brief:

SafePeak is in the business of accelerating the performance and scalability of SQL Server applications without making code changes to the applications or to the databases. SafePeak performs dynamic database caching by keeping the result sets of queries and stored procedures in memory, while keeping that cache correct and up to date. Cached queries are returned from SafePeak’s RAM at microsecond speed and are not sent on to SQL Server. The application gets much faster results (100-500 microseconds), the load on SQL Server is reduced (less CPU and I/O), and the application or the infrastructure gets better scalability.

The SafePeak solution is hosted within your cloud servers, hosted servers, or enterprise servers as part of the application architecture. The application is connected either by changing connection strings or by adding a reroute line to the c:\windows\system32\drivers\etc\hosts file on all application servers.

For those who would like to learn more about SafePeak’s architecture and how it works, I suggest reading the vendor’s webpage: SafePeak Architecture.

More interesting new features in SafePeak 2.1

In my previous review of the new SafePeak I covered the first 4 things I noticed (check out my article “SQLAuthority News – SafePeak Releases a Major Update: SafePeak version 2.1 for SQL Server Performance Acceleration”):

  1. Cache setup and fine-tuning – a critical part for getting good caching results
  2. Database templates
  3. Choosing which database to cache
  4. Monitoring and analysis options by SafePeak

Since then I had a chance to play with SafePeak some more and here is what I found.

5. Analysis of SQL Performance (present and history):

In SafePeak v2.1 the tools for understanding performance have become more comprehensive. Every 15 minutes SafePeak creates and updates various performance statistics. Each query (or procedure execution) that arrives at SafePeak is mapped to a SQL pattern, and once a pattern is used again, statistics are kept for it. An important part of the product is that it understands the dependencies of every pattern (the list of tables, views, user-defined functions, and procedures it touches). From this understanding SafePeak creates useful analysis information on the performance of every object: response time from the database, response time from the SafePeak cache, average response time, percent of traffic, and a breakdown of behavior. One of the interesting things this behavior column shows is how often the object is actually updated. The breakdown analysis provides the above information at the level of queries and procedures, tables, views, databases, and even whole instances.

The data is now shown for all arriving queries: read queries (which can be cached) as well as all types of updates such as DML, DDL, DCL, and even session settings statements.

The stats are updated every 15 minutes, and the SafePeak dashboard allows going back in time and investigating what happened within any time frame.

6. Logon trigger, for making sure nothing corrupts SafePeak cache data

If you have an application with many parts, many servers, and many possible locations that can update the database, or if the SQL Server is accessible to many DBAs or software engineers, any of them can access a database directly and make changes without going through SafePeak – this can potentially corrupt the data stored in the SafePeak cache. To keep the SafePeak cache correct, all updates need to arrive through SafePeak; if a DBA accesses the database directly and makes changes, for example, SafePeak will simply not know about it and will not clean its cache.

In the new version, SafePeak introduced a feature called “Logon Trigger” to solve the above challenge. With the click of a button, SafePeak can deploy a server logon trigger (backed by a CLR object) on your SQL Server that monitors all connections and informs SafePeak of any connection that does not come from SafePeak. The SafePeak dashboard has an interface that lets you control which logins can be ignored, based on login names and IPs, while the rest will invoke a SafePeak cache cleanup and lock the SafePeak cache until that connection is closed. It is important to note that this does not interrupt any logins; it only informs SafePeak of such connections.
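
For readers who have not worked with server logon triggers, here is a minimal, generic T-SQL sketch of the underlying SQL Server mechanism. It is not SafePeak’s actual CLR-based implementation; the application name ‘SafePeakProxy’ and the audit table are my own placeholders for illustration:

-- Minimal sketch of a server logon trigger that records connections bypassing a proxy.
-- Caution: if the body of a logon trigger fails, logins are denied, so keep it simple and test carefully.
USE master;
GO
CREATE TABLE dbo.DirectConnectionAudit
(
    LoginName SYSNAME       NOT NULL,
    HostName  NVARCHAR(128) NULL,
    AppName   NVARCHAR(256) NULL,
    LoginTime DATETIME      NOT NULL DEFAULT (GETDATE())
);
GO
CREATE TRIGGER trg_AuditDirectConnections
ON ALL SERVER
WITH EXECUTE AS 'sa'   -- so the insert succeeds regardless of the connecting login's permissions
FOR LOGON
AS
BEGIN
    -- Record (but do not block) any session that does not come from the designated application.
    IF APP_NAME() <> N'SafePeakProxy'
        INSERT INTO master.dbo.DirectConnectionAudit (LoginName, HostName, AppName)
        VALUES (ORIGINAL_LOGIN(), HOST_NAME(), APP_NAME());
END;
GO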

On the Dashboard screen in SafePeak you will be able to see those connections and then decide what to do with them.

Configuration of this feature in SafePeak dashboard can be done here: Settings -> SQL instances management -> click on instance -> Logon Trigger tab.

Other features:

7. User management

The ability to grant someone permission to use SafePeak purely as a performance analysis tool, without being able to change its configuration.

8. Better reports

For analysis of performance using 15-minute resolution charts.

9. Caching of client cursors

10. Support for IPv6

Summary

SafePeak is a great SQL Server performance acceleration solution for users who want immediate results for sites with performance, scalability, and peak-spike challenges – especially if your apps are packaged or third-party, since no code changes are required. SafePeak can significantly improve response times by reducing network round trips to the database, decreasing CPU resource usage, and eliminating I/O and storage access.

The SafePeak team provides a free, fully functional trial at www.safepeak.com/download and offers one-on-one assistance during the trial.

Reference: Pinal Dave (http://blog.SQLAuthority.com)