SQL SERVER – Search Records with Single Quotes – Part 2

Yesterday I wrote a blog post, SQL SERVER – Search Records with Single Quotes, and it created quite a lot of interest in the community. Here are a few other alternatives which I received as comments.

The question was how to search records with single quotes in table columns.

Suggestion from Andreas Driesen

SELECT [ProductModelID]
,[Name]
FROM [AdventureWorks2014].[Production].[ProductModel]
WHERE [Name] LIKE '%'+CHAR(39)+'%'
GO

Suggestion from Sanjay Monpara and Hitesh Shah

SELECT [ProductModelID]
,[Name]
FROM [AdventureWorks2014].[Production].[ProductModel]
WHERE CHARINDEX('''',[Name]) <> 0
GO

Suggestion from Vinod Kumar (via email)

SELECT [ProductModelID]
,[Name]
FROM [AdventureWorks2014].[Production].[ProductModel]
WHERE [Name] <> REPLACE(name, '''','')
GO

Please note that all of them perform comparably (each still requires scanning the column), so use whichever you like.

Reference: Pinal Dave (http://blog.SQLAuthority.com)


SQL SERVER – Search Records with Single Quotes

Every day when I wake up there are hundreds of emails with various questions related to SQL. I spend the first hour of my day answering them.

How do I search records with single quotes in it?

Here is the answer to find records with single quotes in the table.

SELECT [ProductModelID]
,[Name]
FROM [AdventureWorks2014].[Production].[ProductModel]
WHERE [Name] LIKE '%''%'
GO

Do let me know if there is any other way to find records with single quotes in it.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Combined Multiple .SQL files into One Single File

Here is an interesting problem which I enjoyed solving yesterday.

There were multiple SQL files in one of my folders. When I had to send them to my friend, I collected all the files into a folder and zipped it to send via email. My friend, who was using a mobile device, told me that it is difficult to see the content of the files on a mobile device, so it would be nice if I could combine all the SQL files into one single file and send that instead.

I loved the idea, but the challenge was that there were over a hundred files, and combining them into a single file manually would have taken hours. I did not like that option, so I went online to find a better solution. After a while I found that I can use the command prompt to combine multiple files into a single file, and it is easy to do.

Here is an example. Let us create three SQL Files.

Now we will execute the following command in command prompt which will combine all the three files into a single file.

The command is

type *.sql > OneFile.sql

The above command will combine all three files into a single file called OneFile.sql.

The following image displays the content of OneFile.sql, which is the concatenation of all the SQL files in the folder.

I enjoyed solving this little problem.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

EXCEL / SQL SERVER – Extract the Domain from an Email Address

Just like any business person, I work with Excel pretty much half of my time when I am not working with SQL Server. Recently I faced a challenge when I wanted to extract the domain from an email address in Excel. I was in an extreme rush, so I took a route that was in fact much longer than the easy way. I laughed at myself and decided to blog about it.

Here is the route I took to extract the domain from the email addresses in Excel.

  1. Imported Excel to SQL Server Table
  2. Executed script from my blog post to extract domain from email in SQL Server
  3. Exported SQL Server Table to Excel
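For reference, the SQL Server side of step 2 can be sketched roughly as follows; the table and column names below are hypothetical, not from the original post:

SELECT Email,
SUBSTRING(Email, CHARINDEX('@', Email) + 1, LEN(Email)) AS Domain
FROM dbo.EmailList -- hypothetical staging table from the Excel import
WHERE CHARINDEX('@', Email) > 0
GO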

In reality, I should have just written a formula in another column of Excel to extract the domain from the email address.

Here is the function which will extract domain from the email address.

=MID(B2,SEARCH("@",B2)+1,255)

The following image displays how the above simple Excel function returns the domain from the email address.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Finding Max Value From Multiple Values

Here is a question which I have received a few days ago.

“I have three different variables, and I want to find out which one of them has the maximum or highest value. How can I do that?

I know I can do this with the IF or CASE statement, but that makes me write long code and I have to implement the logic manually.

Is there any other way?”

Absolutely. Here are simple examples which will work with SQL Server 2008 and later versions.

Example 1: With Variables

DECLARE @Int1 INT = 1, @Int2 INT = 3, @Int3 INT = 5;
SELECT MAX(v)
FROM (VALUES (@Int1), (@Int2), (@Int3)) AS value(v);

Example 2: With Static Values

SELECT MAX(v)
FROM (VALUES (1),(5),(3)) AS value(v);

Example 3: With Columns

CREATE TABLE SampleTable
( ID INT PRIMARY KEY ,
Int1 INT,
Int2 INT,
Int3 INT);
INSERT INTO SampleTable (ID, Int1, Int2, Int3)
VALUES (1, 1, 2, 3);
INSERT INTO SampleTable (ID, Int1, Int2, Int3)
VALUES (2, 3, 2, 1);
INSERT INTO SampleTable (ID, Int1, Int2, Int3)
VALUES (3, 1, 3, 2);
-- Query to select maximum value
SELECT ID,
(
SELECT MAX(v)
FROM (VALUES (Int1), (Int2), (Int3)) AS value(v)) AS MaxValue
FROM SampleTable;

I hope these simple queries help you find the maximum value from multiple variables.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Do You Know Your Data’s Classification? – Notes from the Field #050

[Notes from Pinal]: Data is a very simple word, yet it is very powerful. There is a famous saying – Know Your Data. I have quite often found that developers do not know their data; they are often confused by it and not sure how to answer questions about it. My friend Tim Radney is an amazing person who usually has an answer to questions which look simple but are complicated in reality.

Linchpin People are database coaches and wellness experts for a data driven world. In this 50th episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains the concept in very simple words. Please follow Tim on his twitter handle at @tradney.


Do you know your data’s classification?

As data professionals, we have many responsibilities. We could be responsible for backing up and restoring data, writing reports, building queries, writing stored procedures, tuning workloads, or any vast number of responsibilities.

If you are in the business of granting access to the data either through reports/queries or provisioning login access, you should be aware of the type of data you are granting access to. Your company likely has policies in place that should guide how access to certain classifications of data should be handled.

Some of the more common types of data that your company would have stricter controls over are those related to PII, PCI, SOX, GLBA, or HIPAA. You should know what the guidelines are within your company for access to this data and help make sure that those standards are being upheld.

These data assets may require additional auditing of who has access to view, print, or export them. When the data is viewed, it may require watermarks or headers/footers on any reports.

Your organization may require certain types of data, such as PCI to be audited on who is viewing certain elements of that data.

Worse, your organization may need to be doing these things but is not. If not, you should be asking why and helping to implement a data governance plan.

If you want me to take a look at your server and its settings, or if your server is facing any issues, we can Fix Your SQL Server.

Note: Tim has also written an excellent book on SQL Backup and Recovery, a must have for everyone.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Beginning Reading Transaction Log File

In this article we will examine what the SQL Server transaction log is, why we need one, and how it works.

Transaction Log Overview

The SQL Server transaction log is perhaps one of the least understood parts of the SQL Server architecture. At the same time, the transaction log is one of the most essential components of every database.

Every SQL Server database has a transaction log that keeps all transactions and the database changes made by each transaction. In case of a system failure, the transaction log might be required to bring your database back to a consistent state. That is why the transaction log should never be deleted or moved.

Transaction Log Purpose

Let’s have a look at ACID. Atomicity, Consistency, Isolation, and Durability are a set of properties that guarantee that database transactions are processed reliably. The transaction log is what provides durability: it guarantees that committed transactions will persist. Durability is achieved by flushing the transaction’s log records to non-volatile storage before acknowledging the commit. This means that the transaction’s log records must be stored on the hard drive. Otherwise, if you had a power failure and your changes were only in memory on the database server, after the server restart you might find that your changes weren’t saved. Thus, all changes must be written to disk before we can return a successful commit back to the application.

SQL Server stores data rows on 8 KB pages that can be located anywhere in the data file. A page can hold many rows (on the order of a few hundred, depending on row size). The page is the smallest unit of reading and writing, so if you modify one row on a page, the entire 8 KB page is written back to disk.

Let’s consider the situation when you update 100 rows in a single transaction. Some of these rows could be spread out across the data file.

You would need to move the disk head to each position, write the updated page to disk, and move on to the next location, waiting for the disk to spin and the disk head to move each time.

The transaction log, in contrast, is a sequential file; rather than making a randomly placed write for each page, SQL Server can append log records in 512-byte sectors.

The smallest write should hit one disk sector. A disk sector is a physical sector on a drive. Most drives contain sectors which are 512 bytes.

Based on this, we can conclude that SQL Server will write at least 512 bytes to the log file even for a single tiny write transaction.

Because the transaction log is sequential, you don’t need to wait for the disk head to move to the next position to write the change. The next position is already right under the disk head, which means you can write the changes to disk quickly.

While changes are being written to the transaction log, the in-memory version of the data page is also updated. Later, the Lazy Writer process will flush these data pages to disk. If there is a power failure, all the data pages in memory will be lost, but we still have a record of all the changes that occurred in the database.

Reading Transaction Log File

Let’s examine how SQL Server logs transactions and what information is stored in the transaction log file.

To read the transaction log file we will use the fn_dblog function. This function is one of several undocumented SQL Server functions.

Every record in the SQL Server transaction log is uniquely identified by a log sequence number (LSN). LSNs are ordered: if LSNn+1 > LSNn, the change described by the log record LSNn+1 occurred after the change described by the log record LSNn.

fn_dblog accepts two parameters:

  • The first is the starting log sequence number. You can specify NULL, which means return everything from the start of the log.
  • The second is the ending log sequence number. You can specify NULL, which means return everything to the end of the log.
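Putting the two parameters together, a minimal call that dumps the whole active log of the current database might look like this (a sketch; the columns shown are a small selection of the function’s output):

-- NULL, NULL = from the start to the end of the active log
SELECT [Current LSN], [Operation], [Transaction ID], [Transaction Name]
FROM fn_dblog(NULL, NULL)
GO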

I will use dbForge Studio for SQL Server to run SQL code.

For demonstration purposes let’s create a test database and a table.

The IDE allows you to create a database in a few clicks. Once you establish a connection to SQL Server, right-click the connection, and then click New Database. The Database designer will appear. In the designer window, enter the database name (I will use ViewTransactionLog) and click the Update Database button. That is all. To create a table, expand the database node, right-click the Table folder, and then click New Table.

The table will contain three columns: ID, FirstName, and LastName.
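If you prefer a script to the designer, the demo table can be created with plain T-SQL. The post only names the columns, so the table name casing and column types below are assumptions:

CREATE TABLE dbo.Person
( ID INT PRIMARY KEY,
FirstName NVARCHAR(50),
LastName NVARCHAR(50))
GO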

Now we can use SQL script editor to execute the fn_dblog function.

Note that 461 rows have been created for the new database and the blank table alone.

Note that the Transaction Name column contains the database name. This value appears only when the particular transaction starts, with “LOP_BEGIN_XACT” (meaning begin transaction) in the Operation column. The Operation column shows which operation is being performed by SQL Server: insert, update, delete, shrink, lock, page allocation, and so on. The Transaction ID is the same for all parts of a transaction.

Now let’s run several DML commands to see how they affect the transaction log file.
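The exact statements from the post are not shown here, but any simple DML against the demo table will do; for example (the table and column names are assumed from the earlier description):

-- Each statement produces its own set of log records
INSERT INTO dbo.Person (ID, FirstName, LastName) VALUES (1, 'John', 'Doe')
UPDATE dbo.Person SET LastName = 'Smith' WHERE ID = 1
DELETE FROM dbo.Person WHERE ID = 1
GO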

Let’s have a look at the transaction log.

We can read the first row in the log as:

  • an INSERT statement with transaction ID 0000:00000a80
  • it started at 2014/10/10 10:44:52:273
  • it inserted a row into the HEAP table ‘dbo.PErson’ at pageID 0000011d:00000140
  • it finished at 2014/10/10 10:44:52:273

Conclusion

As you can see, the transaction log is a list of changes that have occurred in a database. The transaction log also gives us the ability to replay any operation that occurred in the database.

If you take a full database backup, and then have copies of the transaction log, you will be able to replay every operation and bring the database back to a consistent state at any point in time.

For instance, if you accidentally dropped a table, you can bring the database back to the point in time just before the drop, or to a specific LSN, and recover the data.

Reference: Pinal Dave (http://blog.sqlauthority.com)