SQL SERVER – Download SQL Server 2014 Developer Edition at USD 59.95

Since the release of SQL Server 2014 earlier this year, lots of people have been asking whether they can download the full version of SQL Server 2014 for free. The full version is not available for FREE; you have to buy either the Standard or the Enterprise edition. You can download the SQL Server 2014 trial version and use it until it expires.

If you want SQL Server 2014 for development purposes or for a test system, you do not have to buy the Standard or Enterprise edition. You can simply get the Developer Edition, which costs USD 59.95 and is available on DVD and as an immediate download.

Here is the link to the product.

Reference: Pinal Dave (http://blog.SQLAuthority.com)


SQL Authority News – Microsoft Whitepaper – SQL Server 2014 and Windows Azure Blob Storage Service: Better Together

Microsoft released SQL Server 2014 earlier this month and has now published a very long but interesting white paper on the Windows Azure Blob Storage Service. The SQL Server Data Files in Windows Azure feature is available in all editions of SQL Server 2014. It is enabled by default and comes at no extra cost. This feature turns SQL Server 2014 into a hybrid cloud database, because it provides native support for database files stored as Windows Azure Blobs. This is a big leap forward: data stored locally and data stored in the cloud can be accessed by SQL Server 2014 at the same time. You can also host a SQL Server instance in the Azure cloud, and it can use Windows Azure Blobs just like an on-premises SQL Server.
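To give a concrete flavor of the feature, here is a minimal sketch (my own illustration, not taken from the white paper) of creating a database whose data and log files live directly in Windows Azure Blob Storage. The storage account, container, and SAS token are placeholders.

-- The credential name must match the container URL; the secret is a Shared Access Signature for that container.
CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/data]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET   = '<SAS token for the container>';

-- Data and log files are created directly as Windows Azure Blobs.
CREATE DATABASE HybridDemo
ON (NAME = HybridDemo_data,
    FILENAME = 'https://mystorageaccount.blob.core.windows.net/data/HybridDemo_data.mdf')
LOG ON (NAME = HybridDemo_log,
    FILENAME = 'https://mystorageaccount.blob.core.windows.net/data/HybridDemo_log.ldf');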

Let me give you another example. Suppose you have stored your database files in the cloud and your backups in Windows Azure Storage as well. If you want to restore a backup, you do not have to bring it back on-premises before restoring it; you can restore directly from Windows Azure Storage to a SQL Server 2014 instance running in a virtual machine. This removes the need for a lot of data movement between the cloud and on-premises servers.
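A restore of that kind might look like the following minimal sketch (again my own illustration; the credential, storage account, container, and database names are placeholders).

-- Credential holding the storage account name and access key.
CREATE CREDENTIAL AzureBackupCredential
WITH IDENTITY = 'mystorageaccount',
     SECRET   = '<storage account access key>';

-- Restore directly from Windows Azure Blob Storage; no local copy of the backup file is needed.
RESTORE DATABASE HybridDemo
FROM URL = 'https://mystorageaccount.blob.core.windows.net/backups/HybridDemo.bak'
WITH CREDENTIAL = 'AzureBackupCredential';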

This white paper discusses exactly this feature in great detail. It runs over 116 pages and the Word file is 2.5 MB. After a long time I have found a white paper that explains the concepts from the beginning and walks users through step-by-step examples. The appendix to the white paper contains all the necessary code as well as some very interesting reference material. I think I am going to spend my entire weekend reading and learning from it.

The image included in this blog post is courtesy of the same white paper, and it tells the story of the entire white paper in a single picture. Brilliant.

Download the white paper SQL Server 2014 and Windows Azure Blob Storage Service: Better Together.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Developers – Drive by Daniel Pink – Book Review

I have always been an avid reader of books; I read one or two books every week. Reading helps me a lot in my day job as well as in writing blog posts. I am an evangelist, and my primary responsibility is to spread the love for technology. The more I read, the more I learn, and that helps me communicate better with my audience. Here is one of the recent books I read and loved. It is about what motivates us and what kind of work suits us best. I have now read it a second time, and I think I have finally digested its message. I love every bit of it, and if you do not have it yet, you should get it.

I am going to give away 10 physical copies of Drive to 10 people who register for my newsletter over here.

If you want to own the book, you can get it from Amazon | Flipkart.


Here are a few essential quotes from the book.

The joy of the task was its own reward.

When money is used as an external reward for some activity, the subjects lose intrinsic interest for the activity.

Rewards can deliver a short-term boost – just as a jolt of caffeine can keep you cranking for a few more hours. But the effect wears off – and, worse, can reduce a person’s longer-term motivation to continue the project.

Enjoyment-based intrinsic motivation, namely how creative a person feels when working on the project, is the strongest and most pervasive driver.

Intrinsic motivation is of great importance for all economic activities. It is inconceivable that people are motivated solely or even mainly by external incentives.

Intrinsic motivation is conducive to creativity; controlling extrinsic motivation is detrimental to creativity.

It is those who are least motivated to pursue extrinsic rewards who eventually receive them.

Greatness and nearsightedness are incompatible. Meaningful achievement depends on lifting one’s sights and pushing toward the horizon.

Any extrinsic reward should be unexpected and offered only after the task is complete.

Type I behavior emerges when people have autonomy over the four T’s: their task, their time, their technique, and their team.

If you want to work with more Type I’s, the strategy is to become one yourself. Autonomy, it turns out, can be contagious.

There is complexity, autonomy, and a relationship between effort and reward in doing creative work, and that’s worth more to most of us than money.

Carrots & sticks are so last century. Drive says for 21st century work, we need to upgrade to autonomy, mastery & purpose.

This new approach has three essential elements: (1) Autonomy—the desire to direct our own lives; (2) Mastery—the urge to get better and better at something that matters; and (3) Purpose—the yearning to do what we do in the service of something larger than ourselves.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Idera SQL XEvent Profiler – a Free Tool for XEvent Monitoring

Download Extended Event Profiler FREE

We often resist change; we do not like it, and we keep using the same old technology to solve the new challenges we face in our daily work. Extended Events is one such technology. Added to SQL Server in SQL 2008 and expanded greatly in SQL 2012, it is a powerful mechanism for monitoring activity in SQL Server. It has huge implications for tools like SQL Diagnostic Manager, which switched over to XEvent technology for wait monitoring, and for users of all stripes who need to see what is happening on their servers.

Though the feature was well thought out, it was not quite ready for average users, for several reasons. In SQL 2008 it offered a reduced set of events and could only be manipulated through T-SQL. In SQL 2012 the number of events went up dramatically and Microsoft released an XEvent GUI as part of Management Studio. However, many users feel that the XEvent tooling is overkill for a lot of purposes, so they continue to use SQL Server Profiler, which is built on trace technology. It is easy to see why – it is familiar, it is easy to use, and it takes only a few clicks to get a diagnostic session started.
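To give a flavor of what “manipulated through T-SQL” means, here is a minimal sketch of an Extended Events session that captures completed batches – roughly what a Profiler-style trace would show. The session name and the choice of events are my own illustration and are not part of the Idera tool.

-- Minimal sketch: a lightweight XEvent session capturing completed T-SQL batches.
CREATE EVENT SESSION [QuickBatchTrace] ON SERVER
ADD EVENT sqlserver.sql_batch_completed (
    ACTION (sqlserver.client_app_name,
            sqlserver.database_name,
            sqlserver.sql_text)
)
ADD TARGET package0.ring_buffer;   -- keep recent events in memory only
GO

ALTER EVENT SESSION [QuickBatchTrace] ON SERVER STATE = START;
-- ...reproduce the workload, then stop and drop the session when done:
-- ALTER EVENT SESSION [QuickBatchTrace] ON SERVER STATE = STOP;
-- DROP EVENT SESSION [QuickBatchTrace] ON SERVER;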

Team Idera has put together a new free tool called SQL XEvent Profiler, which offers the simplicity of Profiler but uses the powerful and lightweight XEvent technology on the back end. The idea was to provide the simple two-click behavior that users were looking for, along with the familiar SQL Profiler templates that users are already accustomed to. The application has the familiar start, stop, and pause buttons, the ability to turn auto-scroll on and off, and the ability to clear the grid. The team also added some nice grouping functionality to the event grid, along with text searching and export.

I am downloading this tool this weekend to play with it and see how I can use it on my various servers. I encourage all of you to do the same. Here is the link to the original blog.

Download Extended Event Profiler FREE

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Tools for Proactive DBAs – Central Management Server – Notes from the Field #024

[Note from Pinal]: This is a new episode of the Notes from the Field series. AlwaysOn is a complex subject, and very few people know it well. The fact of the matter is that there is very little information available on this subject online, so when even a very common AlwaysOn question comes up, people get confused.

In this episode of the Notes from the Field series, database expert John Sterrett (Group Principal at Linchpin People) explains a very common issue DBAs and developers face in their careers related to AlwaysOn Availability Groups. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of John in his own words.


After completing several AlwaysOn Availability Group implementations, two questions come up frequently: When did my availability group fail over? Where is my read-write replica? The answer to the first one is provided here [http://johnsterrett.com/2014/03/18/where-is-my-availability-group/ ]. Today, we're going to look at the answer to the second question.

Where is my read-write replica?

The following script, when executed on an availability group replica, returns the availability group name, the current role state, and the database name. This lets you know whether the instance is hosting the PRIMARY (“read/write”) replica.

-- Only query the AlwaysOn DMVs if the HADR feature is enabled on this instance.
IF SERVERPROPERTY('IsHadrEnabled') = 1
BEGIN
    SELECT
        AvailabilityGroup     = ag.name,
        AvailabilityGroupRole = ars.role_desc,   -- PRIMARY or SECONDARY
        db.name
    FROM sys.databases db
        INNER JOIN sys.availability_databases_cluster adc
            ON db.group_database_id = adc.group_database_id
        INNER JOIN sys.availability_groups ag
            ON adc.group_id = ag.group_id
        INNER JOIN sys.dm_hadr_availability_replica_states ars
            ON ag.group_id = ars.group_id
           AND db.replica_id = ars.replica_id
END
ELSE
BEGIN
    -- HADR is not enabled, so just list the databases with no availability group information.
    SELECT
        NULL AS AvailabilityGroup,
        NULL AS AvailabilityGroupRole,
        name
    FROM sys.databases
END

How do I check all replicas?

Now you know how to check whether an instance of SQL Server participating in an AlwaysOn Availability Group holds the “read/write” PRIMARY role. Next, we can use Central Management Server to run the same script across the other instances participating as replicas in your AlwaysOn Availability Group configuration. If you have never used Central Management Server, this step-by-step guide can get you going in 10 minutes.
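As a side note, if you want to see which instances are registered under your Central Management Server, a query like the sketch below can list them from the CMS repository in msdb on the CMS instance. This is my own addition (not part of John’s script), and it assumes the default shared-registration views that ship with SQL Server.

-- Hedged sketch: list the server groups and registered servers stored in the CMS repository.
SELECT
    g.name        AS server_group,
    s.name        AS registered_name,
    s.server_name AS instance_name
FROM msdb.dbo.sysmanagement_shared_server_groups AS g
    INNER JOIN msdb.dbo.sysmanagement_shared_registered_servers AS s
        ON s.server_group_id = g.server_group_id
ORDER BY g.name, s.name;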

For this week, I have a Windows failover cluster with two nodes, “SQL2012DR” and “SQL2012PROD1” (aka localhost,1433). Each node hosts its own default instance of SQL Server, and there are two separate AlwaysOn Availability Groups. Using Central Management Server, I can rerun the query above and quickly see where my read/write replica is.

Are your servers running at optimal speed or are you facing any SQL Server Performance Problems? If you want to get started with the help of experts read more over here: Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Life of a SQL Query – Query States

This is a guest blog post written by James Davies, Sales Engineer at Confio Software, a SolarWinds company. It is based on Database Performance Analyzer. I encourage you to download it for free from here.

From the moment a query request is made until the query response is delivered, what really happens to your query in SQL Server? Let’s find out, and talk about why that matters to you.

Put simply, a query in SQL Server can exist in one of three states while executing inside the database engine (a quick way to see these states on a live server is sketched after the list). The states are defined as:

  • RUNNING – meaning that the query is actively running on the CPU
  • RUNNABLE – meaning that the query is ready to run but CPU resources are not yet available; it is waiting in the runnable queue for a CPU to open up
  • SUSPENDED – meaning that the query is waiting for an external resource to become available (for example, disk I/O, blocking and so on)
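Here is the promised sketch: a simple query against sys.dm_exec_requests that shows the current state of each active request, along with what it is waiting on. This is my own illustration; no third-party tool is needed for it.

-- Show the state (running / runnable / suspended) of active requests and their current wait.
SELECT
    r.session_id,
    r.status,                -- running, runnable, suspended, ...
    r.wait_type,             -- populated while the request is suspended
    r.wait_time,             -- milliseconds spent in the current wait
    r.last_wait_type,
    t.text AS query_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;    -- exclude this monitoring query itself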

In an overly simplified way, it’s a bit like the checkout line at the grocery store, in which each shopper has just one item. The cashier is the CPU and the item the checker is actively scanning is like a query that is running. The shoppers waiting in line, each with their own item, are like runnable queries, just waiting to be scanned and checked out. Let’s say that when the checker scans your item, the price doesn’t come up, and a price check is called. The checker asks you to step aside, and wait for the price check to complete. When the price comes back, the checker asks you to step back in line, all the way at the end of the line. That’s like a suspended query (and that’s not something you want to happen to your query!). If there are four checkout lines running, then that would be a bit like having four CPU cores, each with their own set of queries in various states of running, runnable and suspended.

So then what is a wait type? In SQL Server, wait types are simply a more detailed explanation of the suspended state. As DBAs, we don’t like to see queries in the suspended state, and we want to find out exactly what is keeping the query from being processed.

When you look at wait statistics, you want to see queries spending the majority of their time in the running or runnable state. You can also use wait statistics to discover why a query is suspended. If you can answer that question, you can fix the problem so that the query spends most of its time in a non-suspended state. That is why understanding your query wait statistics is key to maximizing query performance.
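For an instance-wide view, the accumulated wait statistics are exposed in sys.dm_os_wait_stats. The sketch below is my own illustration, and the list of excluded “benign” waits is deliberately abbreviated.

-- Top waits by accumulated wait time since the last restart (or since the stats were cleared).
-- signal_wait_time_ms is roughly the time spent RUNNABLE (waiting for CPU) within each wait.
SELECT TOP (10)
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'BROKER_TASK_STOP',
                        N'XE_TIMER_EVENT', N'REQUEST_FOR_DEADLOCK_SEARCH')   -- abbreviated benign-wait list
ORDER BY wait_time_ms DESC;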

You can look at wait statistics using SQL Server Management Studio (SSMS) or third-party tools such as SolarWinds Database Performance Analyzer (DPA). DPA provides simple, fast and visual identification of query wait statistics correlated with system resources, including historical and trend information, so that you can easily pinpoint root cause of query performance issues.


Database Performance Analyzer (formerly Confio Ignite) Shows Top Waits and Which Resources Are Being Waited On

Whichever tools you use, understanding query states and wait statistics is the foundation to a good proactive database performance practice.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Calculating XIRR in SQL Server – Internal Rate of Return Available

Note: Download XLeratorDB to play along with this blog post.

XIRR is a widely used financial calculation that lends itself to evaluating the rate of return on either historic cash flows or anticipated future cash flows. Thus it is a tool that can be used to measure investment performance or to evaluate the comparative desirability of future investment alternatives. Today we will look at Westclintech’s ingenious implementation of the XIRR function in SQL Server using SQL CLR.

According to the Microsoft Office documentation, “XIRR is closely related to XNPV, the net present value function. The rate of return calculated by XIRR is the interest rate corresponding to XNPV = 0.”
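In other words (my own restatement of the Excel documentation, using its 365-day count convention), XIRR is the rate r that makes the dated net present value of the cash flows zero:

\mathrm{XNPV}(r) = \sum_{i=1}^{N} \frac{P_i}{(1+r)^{(d_i - d_1)/365}} = 0

where P_i is the i-th cash flow, d_i is its date, and d_1 is the first (earliest) date in the series.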

Let’s look at the example from the Microsoft Office documentation and try it out in SQL Server.

SELECT
    wct.XIRR([values], [dates], NULL) AS XIRR
FROM (VALUES
    (-10000, '2008-01-01'),
    (2750, '2008-03-01'),
    (4250, '2008-10-30'),
    (3250, '2009-02-15'),
    (2750, '2009-04-01')
) n([values], [dates])

This provides us with exactly the same answer as in the Microsoft Office documentation.

XIRR
----------------------
0.373362533509583

The Westclintech developers have taken advantage of the multi-input aggregate capabilities that were added to the SQL CLR with the release of SQL Server 2008. Their implementation of XIRR is just one of more than 750 functions contained in their XLeratorDB function library.

Interestingly, in playing around with the function I discovered cases where the Westclintech function produces a result that the Excel function cannot match. Here’s the calculation for one example.

SELECT
    wct.XIRR([values], [dates], NULL) AS XIRR
FROM (VALUES
    (0, '2002-06-17'),
    (-42814703.5, '2008-08-19'),
    (44768952.93, '2002-11-16'),
    (0, '2003-10-31'),
    (-192485819.41, '2012-10-21'),
    (-128316512.7, '2003-01-19'),
    (-57458663.82, '2006-08-12'),
    (0, '2006-01-01'),
    (121072521.7, '2001-07-28'),
    (59000533.69, '2008-01-17'),
    (48964026.98, '2003-05-31')
) n([values], [dates])

where the Westclintech result is

XIRR
----------------------
0.101489308050466

But when I put the same cash flows into Excel 2013, I get an answer of 2.9802322388E-09, which seems wrong since it is practically zero.

Remembering the Microsoft Office documentation, I decided to check the XNPV calculation with the value returned by XIRR. The XLeratorDB library includes the XNPV calculation, making it pretty easy to double-check the calculation using SQL.

WITH mycte AS (
    SELECT *
    FROM (VALUES
        (0, '2002-06-17'),
        (-42814703.5, '2008-08-19'),
        (44768952.93, '2002-11-16'),
        (0, '2003-10-31'),
        (-192485819.41, '2012-10-21'),
        (-128316512.7, '2003-01-19'),
        (-57458663.82, '2006-08-12'),
        (0, '2006-01-01'),
        (121072521.7, '2001-07-28'),
        (59000533.69, '2008-01-17'),
        (48964026.98, '2003-05-31')
    ) n([values], [dates])
)
SELECT
    wct.XNPV(r, [values], [dates]) AS [NPV using XLDB value],
    wct.XNPV(2.9802322388E-09, [values], [dates]) AS [NPV using Excel value]
FROM (
    SELECT wct.XIRR([values], [dates], NULL) AS r
    FROM mycte
) n, mycte

This showed the XLeratorDB value did in fact return an XNPV value pretty close to zero while the Excel value did not.

NPV using XLDB value   NPV using Excel value
---------------------- ----------------------
2.23517417907715E-08   -147269656.931966

I then tried to double-check the calculations in Excel.

As you can see, Excel didn’t like either of the solutions! A little further research explains why: it turns out that the Excel XNPV function expects the cash flows to be in date order, while the Microsoft Office documentation for the XIRR function says “Dates may occur in any order.” Here’s what happens when we put the cash flows in order.

The first thing that I noticed is that Excel has calculated a new XIRR value! Now the XLeratorDB result and the Excel result agree to the first 6 decimal places.

Now I am really intrigued. I am not an expert on financial calculations, but I know that Excel is pretty much the gold standard for this type of financial calculation. Is it possible that there is something wrong with the Excel calculation? Since I now have two tools that should be generating the same answers, I kept testing, looking for differences. Here’s another example.

SELECT
    wct.XIRR([values], [dates], NULL) AS XIRR
FROM (VALUES
    (138516675.3, '2010-09-30'),
    (-33772930.31, '2003-06-21'),
    (34598742.99, '2009-12-14'),
    (42298808.27, '2008-01-15'),
    (0, '2011-10-27'),
    (-122807295.9, '2003-03-11'),
    (4808700.25, '2003-04-06'),
    (-49206052.38, '2011-11-21'),
    (0, '2012-11-11'),
    (0, '2002-06-30'),
    (10152269.87, '2002-07-17')
) n([values], [dates])

I get this answer in SQL Server.

XIRR
----------------------
0.0259274294071075

And this answer in Excel.

Again a big difference from the XLeratorDB answer. But, we know that we can check the answer using the XNPV calculation, and we know that we need to put the cash flows in order.

Once again, the XIRR calculation in Excel changed (oddly enough, to the same value as in our first Excel example). We were smart enough to preserve the value Excel originally calculated, but that value produces the #NUM! error in Excel’s XNPV. And finally, we see that the XLeratorDB value produces the XNPV = 0 result that the Microsoft Office documentation talks about. There is no explanation of the #NUM! value in the Microsoft Office documentation for Excel 2013, though earlier versions of the documentation suggest that the rate value passed into XNPV must be greater than zero.

On the off chance that the Excel calculation might be correct, I double-checked it using XLeratorDB (notice that I have moved from checking XLeratorDB against Excel to checking Excel against XLeratorDB).

WITH mycte AS (
    SELECT *
    FROM (VALUES
        (138516675.3, '2010-09-30'),
        (-33772930.31, '2003-06-21'),
        (34598742.99, '2009-12-14'),
        (42298808.27, '2008-01-15'),
        (0, '2011-10-27'),
        (-122807295.9, '2003-03-11'),
        (4808700.25, '2003-04-06'),
        (-49206052.38, '2011-11-21'),
        (0, '2012-11-11'),
        (0, '2002-06-30'),
        (10152269.87, '2002-07-17')
    ) n([values], [dates])
)
SELECT
    wct.XNPV(r, [values], [dates]) AS [NPV using XLDB value],
    wct.XNPV(-0.637033665925265, [values], [dates]) AS [NPV using Excel value]
FROM (
    SELECT wct.XIRR([values], [dates], NULL) AS r
    FROM mycte
) n, mycte

As I expected, the Excel XIRR value does not return an XNPV of zero.

NPV using XLDB value   NPV using Excel value
---------------------- ----------------------
2.84984707832336E-07   1268.43783508614

At this point, I am completely convinced that the XLeratorDB function is at least as good as the Excel function; the only question is how it performs.

I randomly created a little more than 2.7 million rows of test data and dropped them into a table consisting of three columns: projectno, amt_cf, and date_cf. Projectno is simply a way of grouping cash flows together, amt_cf is the cash flow amount, and date_cf is the date of the cash flow. I didn’t do anything to avoid duplicate dates within a project number.
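The article does not show how the table was populated, so here is a hedged sketch of one way such test data could be generated; only the table and column names come from the description above, while the row count, project count, and value ranges are my own assumptions.

-- Hedged sketch: build and populate a cash-flow table similar to the one described above.
CREATE TABLE dbo.XNPV_cashflows (
    projectno INT   NOT NULL,   -- groups cash flows together
    amt_cf    FLOAT NOT NULL,   -- cash flow amount
    date_cf   DATE  NOT NULL    -- cash flow date
);

;WITH n AS (
    SELECT TOP (2700000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
    FROM sys.all_objects AS a
    CROSS JOIN sys.all_objects AS b
)
INSERT INTO dbo.XNPV_cashflows (projectno, amt_cf, date_cf)
SELECT
    rn % 100000 + 1,                                           -- roughly 100,000 project groups (assumption)
    ROUND((RAND(CHECKSUM(NEWID())) - 0.5) * 200000000, 2),     -- random cash flow, +/- 100 million (assumption)
    DATEADD(DAY, ABS(CHECKSUM(NEWID())) % 4000, '2002-01-01')  -- random date over roughly 11 years (assumption)
FROM n;

With the data in place, I then ran the following SQL: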

SET NOCOUNT ON
DECLARE @time_start AS DATETIME
DECLARE @time_end AS DATETIME
DECLARE @count_records AS INT
DECLARE @count_processed AS INT
DECLARE @results AS TABLE (
    [Input Rows] INT,
    [Output Rows] INT,
    [Elapsed Time] FLOAT
)
CREATE TABLE #x (
    ctr INT,
    projectno INT,
    xirr FLOAT
)
DECLARE @ctr AS INT = 0
WHILE @ctr < 100
BEGIN
    SET @ctr = @ctr + 1
    SET @count_records = (SELECT COUNT(*) FROM dbo.XNPV_cashflows)
    SET @time_start = GETDATE()

    -- Calculate XIRR for every project in a single set-based statement.
    INSERT INTO #x
    SELECT
        @ctr,
        X.projectno,
        wct.XIRR(amt_cf, date_cf, NULL) AS IRR
    FROM dbo.XNPV_cashflows X
    GROUP BY X.projectno

    SET @count_processed = @@ROWCOUNT
    SET @time_end = GETDATE()

    INSERT INTO @results
    SELECT
        @count_records AS [Input rows],
        @count_processed AS [Output rows],
        DATEDIFF(ms, @time_start, @time_end) / 1000e+00 AS [Elapsed Time]
END

-- Average the 100 timed runs and compute overall throughput (input rows per second).
SELECT
    AVG([Input Rows]) AS [Input Rows],
    AVG([Output Rows]) AS [Output Rows],
    AVG([Elapsed Time]) AS [Elapsed Time],
    SUM([Input Rows]) / SUM([Elapsed Time]) AS Throughput
FROM @results
DROP TABLE #x

The results of this test showed an average throughput of over 158,000 rows per second, which means that on average we were able to complete and store the results of the XIRR calculation on 2.7 million rows of cash flows in about 17 seconds.

Input Rows  Output Rows Elapsed Time           Throughput
----------- ----------- ---------------------- ----------------------
2700080     100045      17.0445555555556       158413.048154836

I did this test on my laptop, a 64-bit Dell Precision M6600 with an Intel Core i7 2760QM CPU @ 2.40 GHz with 8 cores, 8 GB of memory, running Windows 7. Since the machine has 8 cores, there is plenty of opportunity for parallelization, which the SQL CLR architecture handles automatically for aggregate functions; that is what gives this function such astonishing throughput.

I think the XLeratorDB package makes a compelling case for getting these calculations out of Excel and into SQL Server. The calculations seem to be more accurate, scale up to millions of rows with absolutely no changes to the SQL, and are blazingly fast. Additionally, since they run on the database, it is easy to integrate them into any other platform or programming language that can open a database connection; all you really need to know is a few lines of SQL. You should download the 15-day free trial today and check out some of the other interesting articles about the XIRR function at www.westclintech.com.

Reference: Pinal Dave (http://blog.SQLAuthority.com)