MySQL – Get Latest Identity Value by Inserts

In SQL Server, whenever you add data to a table that has an identity column, you can get the last generated identity value using @@IDENTITY or SCOPE_IDENTITY(). Similarly, in MySQL you can make use of the LAST_INSERT_ID() function.

Let us create the following dataset:

CREATE TABLE test(id INT AUTO_INCREMENT NOT NULL PRIMARY KEY, name VARCHAR(100));
-- three separate INSERTs; LAST_INSERT_ID() reports the id from the most recent one
INSERT INTO test(name) VALUES ('MySQL');
INSERT INTO test(name) VALUES ('SQL Server');
INSERT INTO test(name) VALUES ('Oracle');

After executing the above code, if you want to know the last generated identity value, use the following code:

SELECT LAST_INSERT_ID();

The result is 3.

Note that this works only in the current session, and it is advisable to use it immediately after the INSERT statement. If you want to make use of the value at a later point within a block of code, you can assign it to a variable:

SET @id_val = LAST_INSERT_ID();

Now you can use the value of @id_val at a later point within a block of code or a procedure.
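For instance, here is a minimal sketch (the extra row is illustrative) of capturing the value and reusing it later:

INSERT INTO test(name) VALUES ('PostgreSQL');
SET @id_val = LAST_INSERT_ID();
-- other statements can run in between; @id_val keeps the captured value
SELECT @id_val AS last_id;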

Reference: Pinal Dave (

SQL SERVER – Installation failed with error – Wait on the Database Engine recovery handle failed

You might have seen on Facebook that I completed the new setup of my machines. Since I use VMs to do my tests, I made one VM a domain controller and another a member server to have a domain. The next step was to install SQL Server, and by mistake I ran the SQL setup on the domain controller. Microsoft documentation does not recommend installing SQL Server on a domain controller, and there is a warning in SQL Setup.


Rule Check Result
Rule “Computer domain controller” generated a warning.
Installing SQL Server 2016 CTP2.4 on a domain controller is not recommended.
I proceeded further, and it failed at the end with the below error:
Wait on the Database Engine recovery handle failed. Check the SQL Server error log for potential causes.
Here is the complete message from the Summary.txt file:
Feature:                       SQL Server Replication
Status:                        Failed: see logs for details
Reason for failure:            An error occurred for a dependency of the feature causing the setup process for the feature to fail.
Next Step:                     Use the following information to resolve the error, uninstall this feature, and then run the setup process again.
Component name:                SQL Server Database Engine Services Instance Features
Component error code:          0x851A001A
Error description:             Wait on the Database Engine recovery handle failed. Check the SQL Server error log for potential causes.
I followed the error message's advice and checked the ERRORLOG. Here is the message in the ERRORLOG:
Error: 17190, Severity: 16, State: 1.
Initializing the FallBack certificate failed with error code: 1, state: 20, error number: 0.
Unable to initialize SSL encryption because a valid certificate could not be found, and it is not possible to create a self-signed certificate.
Error: 17182, Severity: 16, State: 1.
TDSSNIClient initialization failed with error 0x80092004, status code 0x80. Reason: Unable to initialize SSL support. Cannot find object or property.
Error: 17182, Severity: 16, State: 1.
TDSSNIClient initialization failed with error 0x80092004, status code 0x1. Reason: Initialization failed with an infrastructure error. Check for previous errors. Cannot find object or property.
Error: 17826, Severity: 18, State: 3.
Could not start the network library because of an internal error in the network library. To determine the cause, review the errors immediately preceding this one in the error log.
Error: 17120, Severity: 16, State: 1.
SQL Server could not spawn FRunCommunicationsManager thread. Check the SQL Server error log and the Windows event logs for information about possible related problems.

I searched on the internet and found that this could be an issue with the SQL Service account. I had selected the default configuration, which was NT Service\MSSQLServer.

I uninstalled SQL Server, and this time I selected the Local System account for the SQL Server service in the setup wizard. Voilà, it went fine.
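As an aside, once an instance is up you can verify which account each SQL Server service runs under using the sys.dm_server_services DMV (available since SQL Server 2008 R2 SP1):

SELECT servicename, service_account, status_desc
FROM sys.dm_server_services;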

Have you ever faced any such error on a regular machine? What was the solution?

Reference: Pinal Dave (

MySQL – Different Methods to Know MySQL Version

In SQL Server, to know the current version of the server, you can use the system variable @@VERSION.

How many of you know that the same can be used in MySQL to know the version?

There are also several other ways to find the version of MySQL.

Method 1:

SELECT @@version AS version;

It will display the current version of the MySQL server.

There is another function named version().

Method 2:

SELECT version() AS version;

The above produces the same result as @@version.

You can also use another method:

Method 3:

SHOW VARIABLES LIKE '%version%';

The above returns a result set with two columns, Variable_name and Value. You can find the server version in the row where Variable_name is 'version'.
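If you want just that single row, note that SHOW VARIABLES also accepts a WHERE clause:

SHOW VARIABLES WHERE Variable_name = 'version';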

Reference: Pinal Dave (

SQL SERVER – Trick – Running SSMS With Different Windows Account

Recently, in one of my interactions with a DBA at a big financial organization, I was surprised to see the administrator logging into a desktop / server using a different credential. What baffled me was the fact that he was logging off to do it. I immediately asked: why are you doing this? He said that since he was using Windows authentication with credentials different from his normal ones, he used to create a separate session for DB-related activities. After a bit of research, I found this to be a common practice in a number of organizations.

Many companies provide two different Windows accounts to an administrator. One "regular" account (for example, SQLAuthority\Pinal) is used for general work like email and logging in to the laptop. Another "admin" account (for example, SQLAuthority\adm_pinal) is for administrative-level tasks on the server. The regular account would have no access to SQL Server, but the admin-level account would. To make security stronger, such companies use Windows authentication to connect to SQL Server.

This makes things a little interesting. If one has to connect to SQL Server from the laptop where the DBA has logged in with the regular account, he/she has to right-click with the Shift key pressed and then use "Run as different user" to use the admin account.


Pressing the Shift key is important; otherwise, we would not see that option. After choosing it, a user name and password pop-up window appears. If you look closely, you will also notice the ssms.exe path that is used.


Another way, which I prefer, is using the runas command. The runas.exe program lets us tell Windows to run a program using a different user's credentials instead of the current logon. The full details and switches of the runas program can be found in its TechNet article.

I would normally create a shortcut on the desktop: right-click the desktop and choose New > Shortcut,


and give the location/parameters as below:

C:\Windows\System32\runas.exe /noprofile /env /user:SQLAuthority\adm_Pinal "C:\Program Files (x86)\Microsoft SQL Server\130\Tools\Binn\ManagementStudio\Ssms.exe"

You need to change two things: the user name, and the correct path to SSMS.exe.

Once it's saved, you can double-click it and provide the password at the command prompt. I have named the shortcut Admin SSMS.
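A console window opens asking for the password, something like this (user name from our example):

Enter the password for SQLAuthority\adm_Pinal: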

Once the credential is provided, SSMS opens with that Windows account.

I must conclude by saying that if you are using SQL authentication, you do not need to do this, because Windows credentials are not passed to SQL Server.

Reference: Pinal Dave (

SQL SERVER – How to Change Server Name?

In the recent past I renamed my VM and wanted to make sure that everything was fine. Renaming the host also requires changing the server name stored inside SQL Server. So I searched for the various places from which we can get the host name and the SQL Server name, and finally formed this query, which gathers the same details from various sources:

SELECT  HOST_NAME() AS 'host_name()',
@@servername AS 'ServerName\InstanceName',
SERVERPROPERTY('servername') AS 'ServerName',
SERVERPROPERTY('machinename') AS 'Windows_Name',
SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS 'NetBIOS_Name',
SERVERPROPERTY('instanceName') AS 'InstanceName',
SERVERPROPERTY('IsClustered') AS 'IsClustered'

Most of these server properties have been around for a while, and sometimes it takes a refresher like this to remember them again. When we run this query on a non-clustered machine, we should see only two distinct names: the machine name and the instance name. If you have a default instance, then InstanceName will be NULL in the output.


If the host was renamed, you would see that host_name() differs from the server name, and that mismatch is not something which should exist.

Note that renaming the instance name is unsupported. For a named instance, the server name is in the format ServerName\InstanceName, so you can change it to NewName\InstanceName but NOT to ServerName\NewInstanceName. I am sure this is fantastic trivia to know.

If you find any mismatch, then you need to follow the below steps (a worked example follows the list):

  1. Execute the below to drop the current server name:

EXEC sp_dropserver 'oldservername'

  2. Execute the below to add the new server name. Make sure 'local' is specified:

EXEC sp_addserver 'newservername', 'local'

  3. Restart SQL services.
  4. Verify the new name using:

SELECT * FROM sys.servers WHERE server_id = 0
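Putting it together for a hypothetical rename of the host of a named instance SQL2014 from OldHost to NewHost (names are illustrative):

EXEC sp_dropserver 'OldHost\SQL2014';
EXEC sp_addserver 'NewHost\SQL2014', 'local';
-- restart the SQL Server service, then verify:
SELECT name FROM sys.servers WHERE server_id = 0;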

I must point out that you should not perform a rename if:

  1. SQL Server is clustered.
  2. Replication is in use.
  3. Reporting Services is installed.

I would like to know if anyone has ever tried doing this in your environment. What was the typical reason for doing so? I hope this helps in case you get into a tangle like this. Let me know your thoughts via comments.

Reference: Pinal Dave (

SQL SERVER – Startup Problem – FIX – Error 5118, Severity 16, State 1

I have been monitoring some of the error messages that SQL Server has been throwing in recent versions, and they are self-explanatory. If we read them carefully, half the problem is solved, and resolving such errors becomes easier and quicker for an administrator. This blog was inspired by one such error, which one of my readers found difficult to understand because he was freshly hired out of college.

My blog reader contacted me via comments and told me that he was not able to start the SQL Service. Without any delay, I asked him to share the error message. He told me he was starting it from Services.msc and getting this error:

Windows could not start the SQL Server (SQL2014) on Local Computer. For more information, review the System Event Log. If this is a non-Microsoft service, contact the service vendor, and refer to service-specific error code 3417.

I told him that this error alone does not help much and asked him to share the SQL Server ERRORLOG. Since he was not a SQL person, he was not aware of that file, so I shared the below blog:

SQL SERVER – Where is ERRORLOG? Various Ways to Find its Location
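As a side note, on a running instance a quick way to find the file is to search the current ERRORLOG for the line that records its own location:

EXEC xp_readerrorlog 0, 1, N'Logging SQL Server messages in file';

When the service won't start, though, you have to browse to the instance's Log folder and open the file directly.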

When I looked into the ERRORLOG, I found the below messages.

2015-06-10 20:54:44.00 spid7s      Starting up database 'master'.
2015-06-10 20:54:44.00 spid7s      Error: 5118, Severity: 16, State: 1.
2015-06-10 20:54:44.00 spid7s      The file "C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\master.mdf" is compressed but does not reside in a read-only database or filegroup. The file must be decompressed.

I asked him whether he had made any changes to the master.mdf file. He said NO, so I asked him to share a screenshot.


The blue color of the file names indicates that the files are compressed, and that is precisely the error we are seeing in the ERRORLOG. I asked him to uncompress the file via file Properties > Advanced, by unchecking "Compress contents to save disk space".

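If you prefer the command line, the same can be done with the Windows built-in compact.exe tool (the path is taken from the ERRORLOG message above):

compact /u "C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\master.mdf"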

After that, he was able to start the SQL Service. I asked him what changes had been made, and he told me that due to disk space issues the complete C:\ drive had been compressed.

Have you ever enabled compression on a folder holding SQL database files? Never do that! Have you ever encountered these errors? Let me know some of the reasons you did so and faced this issue.

Reference: Pinal Dave (

SQL SERVER – One Trick of Handling Dynamic SQL to Avoid SQL Injection Attack?

SQL Server has so many things to learn, and I always find it amazing. My conversations with customers often bring up security questions, especially around SQL injection. Many have claimed SQL injection is a SQL Server problem. It takes me quite some time to explain that SQL injection has nothing to do with SQL Server specifically; it is an outcome of wrong coding practices. One of the recommendations I give is to avoid dynamic SQL. There might be some situations where you can't avoid it; my only advice would be to avoid it if possible. In this blog, I demonstrate a SQL injection problem caused by dynamic SQL and a possible solution.

Let's assume that we have a simple search page where the user can run a blank search or provide a filter in any field. We have provided two fields: "First Name" and "Last Name". The user types something and hits search. Here is the code of the stored procedure which fires behind the scenes.

USE AdventureWorks2014
GO
CREATE PROCEDURE search_first_or_last
    @firstName NVARCHAR(50) = NULL,
    @lastName NVARCHAR(50) = NULL
AS
BEGIN
    DECLARE @sql NVARCHAR(4000)
    -- builds the WHERE clause by concatenating raw user input (vulnerable!)
    SELECT @sql = ' SELECT FirstName, MiddleName, LastName' +
                  ' FROM Person.Person WHERE 1 = 1 '
    IF @firstName IS NOT NULL
        SELECT @sql = @sql + ' AND FirstName LIKE ''' + @firstName + ''''
    IF @lastName IS NOT NULL
        SELECT @sql = @sql + ' AND LastName LIKE ''' + @lastName + ''''
    EXEC (@sql)
END

If I pass the string ';drop table t1-- as the last name, here is the call:

EXEC search_first_or_last '%K%', ''';drop table t1--'

The dynamic string would be

SELECT  FirstName, MiddleName, LastName FROM Person.Person WHERE 1 = 1  AND FirstName LIKE '%K%' AND LastName LIKE '';DROP TABLE t1--'

Do you see the problem? Yes, a user can drop table t1 if the code is running under a high-privilege account.

One solution to the problem is to use sp_executesql. Here is the better version using parameters:

CREATE PROCEDURE search_first_or_last
    @firstName NVARCHAR(50) = NULL,
    @lastName NVARCHAR(50) = NULL
AS
BEGIN
    DECLARE @sql NVARCHAR(4000)
    SELECT @sql = ' SELECT FirstName, MiddleName, LastName' +
                  ' FROM Person.Person WHERE 1 = 1 '
    IF @firstName IS NOT NULL
        SELECT @sql = @sql + ' AND FirstName LIKE @firstName'
    IF @lastName IS NOT NULL
        SELECT @sql = @sql + ' AND LastName LIKE @lastName'
    -- values are passed as parameters, never concatenated into the string
    EXEC sp_executesql @sql,
        N'@firstName NVARCHAR(50), @lastName NVARCHAR(50)',
        @firstName, @lastName
END
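Now the earlier attack input is handled as data, not code. The same test call:

EXEC search_first_or_last '%K%', ''';drop table t1--'

simply searches for last names matching the literal pattern ';drop table t1-- and returns no rows; nothing gets dropped.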

I hope you will be able to use this and implement it in your projects. Are you using these simple techniques in your production code? Have you ever faced similar problems during an audit? Do let me know of your learnings.

Reference: Pinal Dave (

SQL SERVER – Watching Table Variable Data in TempDB

I cannot get enough of working with TempDB, and the learning never stops. Previously, when I wrote about SQL SERVER – Is tempDB behaving like a Normal DB?, SQL SERVER – Inside Temp Table Object Creation and Difference TempTable and Table Variable – TempTable in Memory a Myth, many did ask me what the difference is when working with table variables. In one of the email interactions, one of my readers asked – "Is there a way to know table variables are created? Are the objects created stored as part of TempDB? Are they written to the T-Log?"

I thought these questions were interesting. All of us work with table variables and assume they are just in-memory and don't utilize anything from TempDB. Recently, during one of my post-session meetings, a DBA said their applications were completely built on table variables and their production servers were experiencing huge usage of TempDB. These questions made me write this blog post, which I thought was worth a share. Let us look at each step one after another as I explained them to my friend:

Viewing Object Creation and the T-Log

The first stage for me would be to demystify the object creation process. The thought that table variables are just memory-bound objects and that nothing gets written to the transaction log needs to be clarified first.

Here are the simple steps to clear the doubts: 1) We will clear the T-Log of TempDB. 2) Create a table variable and then check the number of rows added in the T-Log of TempDB. 3) We will add rows next. 4) Check the T-Log again and see if it is any different from just the table creation. This is my development box, and I can assure you there was no other process using tempdb in the meantime on my system.

USE tempdb
GO
CHECKPOINT  -- tempdb uses the simple recovery model, so this truncates the log
-- Output of this is 3
SELECT COUNT(*) AS RowNums
FROM sys.fn_dblog(NULL, NULL)

Next is to create our dummy table variable.

DECLARE @TblVariable TABLE(Name CHAR(100), DOJ DATE,  -- column types assumed for illustration
Company CHAR(400) NULL DEFAULT 'SQLAuth')
-- Output of this is 110+ in my machine
SELECT COUNT(*) AS RowNums
FROM sys.fn_dblog(NULL, NULL)

This small experiment confirms that there is something written to the TLog for table variable creation. I went ahead and cleared the log, and then did a small experiment to see if additional rows get added while I insert rows. So the second script was changed as below.

DECLARE @TblVariable TABLE(Name CHAR(100), DOJ DATE,  -- column types assumed for illustration
Company CHAR(400) NULL DEFAULT 'SQLAuth')
INSERT @TblVariable (Name, DOJ) VALUES ('SQLAuthority', '2014-01-01')  -- sample row
-- Output of this is 130+ in my machine
SELECT COUNT(*) AS RowNums
FROM sys.fn_dblog(NULL, NULL)

As you can see, there are additional rows added to the TLog, showing there are some writes even here. Feel free to look at the description fields of fn_dblog and learn more about the fine print. My next experiment was to catch the table reference somehow using sysobjects.

SELECT * FROM sysobjects WHERE TYPE = 'U'

This showed no rows in my runs whenever I executed it right after creating the table variable. If you wonder why, don't worry, the next steps will demystify it.

Seeing Table Variable Data

I wanted to try something different so that I could capture the table variable data as it persists in the metadata tables. To mimic the same, I used the below script. Simple, yet powerful.

DECLARE @TblVariable TABLE(Name CHAR(100), DOJ DATE,  -- column types assumed for illustration
Company CHAR(400) NULL DEFAULT 'SQLAuth')
INSERT @TblVariable (Name, DOJ) VALUES ('SQLAuthority', '2014-01-01')
SELECT * FROM @TblVariable
WAITFOR DELAY '00:02:00'

Now this script will wait for 2 minutes. Since the batch doesn't finish during the delay, the table variable is not destroyed immediately as in the previous case. While the script runs, we can query sysobjects from a second connection:

SELECT * FROM sysobjects WHERE TYPE = 'U'


Now here is our table variable; make sure to note its Object ID. Next we will grab the page which holds the data and examine it.

SELECT allocated_page_file_id, allocated_page_page_id, *
FROM sys.dm_db_database_page_allocations(2, NULL , 1, NULL, 'DETAILED')
WHERE OBJECT_ID NOT IN (SELECT ID FROM sysobjects WHERE TYPE IN ('S','IT','SQ')) AND page_type=1 AND OBJECT_ID = -1098578155


The output of the above query shows that page 306 is allocated for data, and the next step is to examine it. For this task, we will use DBCC PAGE and print the output.

DBCC PAGE(2,1,306,3)
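If the output does not appear in your query window, DBCC PAGE is likely sending it to the error log; enabling trace flag 3604 first directs it to the client:

DBCC TRACEON(3604);
DBCC PAGE(2,1,306,3);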


This confirms our understanding that table variables are also written to TempDB and can very much be viewed during their existence, just like any normal table. When I showed this to my friend, he was pleasantly surprised. Do let me know if you have ever had a chance to view and debug table variables like this.

Reference: Pinal Dave (

MySQL – How to Create a Distributed Relational SQL Database

Distributed relational databases are a perfect match for Cloud computing models and distributed Cloud infrastructure.  As such, they are the way forward for delivering new web scale applications.

But how is the data distributed in a distributed relational database? What is the best way to distribute data for my applications? How do I retune my distributed database for optimal performance as applications evolve and usage patterns change? You do all of this with your data distribution policy.

In this blog I’d like to explore different aspects of a data distribution policy. I want you to come away with a practical understanding you can use as you explore your distributed relational database options.

So, let’s dive in.

Data Distribution Policy: What It Is and Why You Should Care

A data distribution policy describes the rules under which data is distributed.  A policy that matches your application’s workflow and usage patterns will give you critical web scale benefits:

  • endless scalability
  • high availability
  • geo-location of data nearest user populations
  • multi-tenancy
  • archiving
  • data tiering

A poorly conceived data distribution policy will degrade performance, use more system resources and cause you problems.

In The Beginning, there was Sharding, and it wasn’t so Good

In the past, to distribute data across an “array” of linked databases, developers needed to program data distribution logic into their actual applications. The effect was to “shard” a database into slices of data. Quite literally every read or write would need to run through new custom-built application code to know where bits of data should be placed, or could be found.  This is what Facebook, Twitter and many others did as, at the time, there was no better alternative.

This extra sharding code required application developers to take on tasks typically handled by a database. A do-it-yourself approach may seem like a fun challenge ("hey, after all, how hard can this really be?"). But with your database divided this way, you have the following issues to contend with:

  1. Operational issues become much more difficult, for example: backing up, adding indexes, changing schema.
  2. You also need to start checking your query results to verify that each query path is actually yielding accurate results.

A lot has been written about the challenges of sharding a relational database (here’s a good whitepaper you can read: Top 10 DIY MySQL Sharding Challenges), so I won’t go into them here.  But, let’s also recognize that some great work has been accomplished by dedicated developers using sharding techniques. They have proven the inherent value of a distributed database to achieve massive scale.  At the time, they had to shard as they had no alternative.

Today, there is a better way.

What is a Good Data Distribution Policy?

As I briefly mentioned, a data distribution policy describes the rules under which data is distributed across a set of smaller databases that, taken together and acting as one, comprise the entire distributed database.

The goal we are aiming for is an even and predictable distribution of workloads across the array of clusters in our distributed database.  This brings us immense scalability and availability benefits to handle more concurrent users, higher transaction throughput and bigger volumes of data. But these benefits are all lost with a poorly conceived data distribution policy that does not align to your application’s unique usage and workloads. Let’s take a look.

Imagine we have a single database that is starting to exhibit signs of reaching its capacity limits.  Throughput is becoming unpredictable.  Users are getting frustrated waiting.


We decide the best way to improve the situation is to evolve to a distributed database. Our distributed database would aim to evenly divide the total workload across an array of databases.  In this way, data distribution decreases the number of queries that any individual database cluster (or shard) receives.


Figure 1. A good data distribution policy: ensures that a specific transaction or query is complete within a specific database.

The critical point here is that we want to distribute the data in such a way that we minimize the cross-database chatter (from cluster to cluster, or shard to shard), so that each transaction can be completed within a single cluster, in a single fetch/trip.
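As a hypothetical illustration (table and key names are invented): if customers and their orders are both placed on a shard chosen by customer_id, a per-customer query resolves to exactly one database:

-- Hypothetical routing rule: shard = customer_id MOD 4.
-- customers and orders share the distribution key, so this join
-- touches only the single shard that holds customer 1042.
SELECT o.order_id, o.order_total
FROM orders AS o
JOIN customers AS c ON c.customer_id = o.customer_id
WHERE c.customer_id = 1042;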

If we distribute data without respecting how the data is actually used, we can make matters worse.


Figure 2. A bad data distribution policy: requires transactions or queries to access or collect data from multiple databases.

In the two images above, you can see that one case depicts 1,000,000 transactions equally spread across available resources.  And the other case shows a bad distribution policy where each query needs to collect information from every cluster (or shard) – thus in every practical sense we are actually increasing the overall workload.

A bad data distribution policy:

  • The load isn't distributed – it's multiplied!
  • Doesn't scale.
  • Adding an additional DB does NOT reduce the overall workload.
  • The limitation of a single DB becomes the limitation of the entire array.
  • When queries need data from multiple DBs, transactions must commit across multiple separate DBs (two-phase commit) before completing, which adds a lot of overhead to each commit.

A good data distribution policy:

  • Distributes the workload evenly across available resources.
  • Distributes the sessions.
  • Delivers linear scalability.
  • Adding another database increases the overall scale potential of the distributed database.
  • Queries complete using data from a single, smaller database, which greatly reduces commit overhead.

Table 1. A comparison of a good and bad data distribution policy

So, we can see that unless we distribute the data intelligently, we will not achieve any benefit. Actually, things can become worse than before.

The natural question we are led to ask is: "OK, so what is the best way to distribute data for my applications and my workloads?"

Good question!

How to Create the Best Data Distribution Policy for Your Application

Distributing data across a cluster of smaller database instances while maintaining full relational database integrity, two-phase commit and rollback (as well as leveraging SQL!) is today's state of the art for distributed relational databases.

We can define two broad types of data distribution policy:

  1. Arbitrary Distribution: This is when data is distributed across database instances, but without any consideration or understanding for specific application requirements and how the data will be used by users or the application;
  2. Declarative, Policy-Based Distribution: This is when data is distributed across database instances, but in a way that specifically understands all application requirements, data relationships, transactions, and how the data is used in reads and writes by the application.
Arbitrary Data Distribution Policy

Pros:
  • Unsophisticated
  • Predetermined (no forethought required)

Cons:
  • No intelligence about business, schema, use cases
  • Leads to excessive use of database nodes
  • Leads to excessive use of the network

Declarative Data Distribution Policy

Pros:
  • Ensures that a specific transaction finds all the data it needs in one specific database
  • Aligns with schema and DB structure
  • Highly efficient and scalable
  • Anticipates future requirements and growth assumptions

Cons:
  • Requires forethought and analysis

Arbitrary data distribution is often used by NoSQL database technologies.  In fact, breaking the monolithic single-instance database into a distributed database has been the core of the NoSQL revolution so that NoSQL databases can tap into the scalability benefits of distributed database architecture. However, to get scalability, NoSQL databases have been willing to abandon the relational model. NoSQL and document store type databases can rely on arbitrary data distribution because their data model does not provide for joins. Meanwhile, customers have needed something to handle their massive web scale database loads, so they’ve been willing to try new technologies, like MongoDB, with new non-relational approaches. And in some application scenarios, losing the relational data model has been an OK trade-off. Having a choice is good.

However, nowadays you can get massive web scale and keep the time-tested relational database model, if you use a declarative, policy-based data distribution approach.

Academia has written about various types of distributed relational databases for decades. But today they are a reality. Declarative, policy-based data distribution is the way forward.

The good news is that today tools can identify the best declarative, policy-based data distribution approach for you!

If you use MySQL, you can take what you know now and check out ScaleBase’s free online Analysis Genie service for MySQL. It guides you through very simple steps to create the best data distribution policy matched to your unique application requirements and data.

If you’re just naturally curious about how to evolve your relational database into a modern distributed relational database, let’s dive into the details by looking at two very typical database and development scenarios:

  1. Scaling an existing application
  2. Designing scalability in a brand new application

In tomorrow's blog post, we will discuss Scaling Existing Applications: Key Observations and Measurements.

Reference: Pinal Dave (

Google Drive Trick – Google Spreadsheet Formatting Dates to String

I have been using Google Drive and Google Spreadsheet for a while. I work with experts around the world, and we all use different date formats. The biggest confusion when we share data is the timestamp. For some people it is natural to write dates as dd/mm/yyyy, and for some as mm/dd/yyyy. Sometimes we are just confused looking at a date, as there is no indication of the format. Finally, we all decided that we would spell out the dates. For example, for a date like 11/9/2014, to avoid confusion over whether it is November 9, 2014 or September 11, 2014, we decided to just write it out as a string.

Now here is the problem with Google Spreadsheet: when I type a date, such as September 9, 2014, it immediately converts it to mm/dd/yyyy format, defeating our purpose of spelling it out.


Finally, we figured out how to preserve the date as a string in Google Spreadsheet: just start the date with a single quote. For example, we now enter dates as follows:

'September 9, 2014 instead of September 9, 2014. The best part is that the single quote is not visible on the screen.


Well, if there is any other way to do this that I am not aware of, please let me know.

Reference: Pinal Dave (