Interview Question of the Week #058 – What is the Difference Among DECIMAL, FLOAT and NUMERIC?

Question: What is the Difference Among DECIMAL, FLOAT and NUMERIC?

Answer: 

The basic difference between Decimal and Numeric:
There is none. They are exactly the same; same thing, different name.

The basic difference between Decimal/Numeric and Float:
Float is an approximate-number data type, which means that not all values in the data type's range can be represented exactly.
Decimal/Numeric is a fixed-precision data type, which means that all values in the data type's range can be represented exactly with the declared precision and scale.

Converting from Decimal or Numeric to float can cause some loss of precision. For the Decimal and Numeric data types, SQL Server considers each specific combination of precision and scale as a different data type: DECIMAL(4,2) and DECIMAL(6,4) are different data types. This means that 11.22 and 11.2222 stored in those types are values of different types, though this is not the case for float: for FLOAT(6), 11.22 and 11.2222 are values of the same data type.
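A quick sketch of the practical difference (the float result is approximate by design; exact display can vary by client):

```sql
DECLARE @d DECIMAL(10, 4) = 0.1,
        @f FLOAT = 0.1;

-- DECIMAL stores 0.1 exactly; FLOAT stores the nearest binary approximation
SELECT @d * 3 AS decimal_result,  -- exactly 0.3000
       @f * 3 AS float_result;    -- approximately 0.3, off by a tiny binary error
```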

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – FIX: Error: 18456, Severity: 14, State: 58. Login failed for user

One of the most common and most searched SQL Server failures is around “Logins”. “Login failed for user” has always been tricky and interesting to troubleshoot; I don’t think there is a silver bullet for it. With every version of SQL Server, there are enhancements in error reporting which try to make DBAs’ lives easier. In previous versions of SQL Server, whenever there was a login failure, a message would be printed in the SQL ERRORLOG along with the state of the failure. It was difficult for a DBA to dig further into the cause, and it all boiled down to trying a number of permutations and combinations of resolutions found on the internet.

If you are new to SQL Server, then use the post below to find the ERRORLOG location:

SQL SERVER – Where is ERRORLOG? Various Ways to Find its Location

Here is the message in ERRORLOG

Error: 18456, Severity: 14, State: 58.

Login failed for user ‘AppLogin’. Reason: An attempt to login using SQL authentication failed. Server is configured for Windows authentication only. [CLIENT: <local machine>]

Here was the connection string from the application:

"DRIVER={SQL Server Native Client 10.0};Server=BIGPINAL;Network Library=DBMSSOCN;Initial Catalog=myDB;User ID=AppLogin;Password=myPassword@123"

If you know SQL Server authentication modes, then it’s easy to fix. If you are new to SQL Server, here are the steps:

  1. Connect to SQL via SQL Server Management Studio.
  2. Right click on the server name in Object Explorer -> Properties -> Security -> change Server authentication to “SQL Server and Windows authentication mode” -> click OK.


  3. After that, in Object Explorer expand Security -> expand Logins -> right click on your login -> Properties -> type a new password -> confirm the password -> OK.

In my case, AppLogin was already created; otherwise you may need to create the login.
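If you prefer to check from T-SQL before touching SSMS, here is a small sketch (the login name and password are the ones from this example; changing the authentication mode itself still needs the SSMS steps above plus a service restart):

```sql
-- 1 = Windows authentication only, 0 = mixed mode
SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS WindowsAuthOnly;

-- Create the SQL login if it does not exist yet
IF SUSER_ID('AppLogin') IS NULL
    CREATE LOGIN AppLogin WITH PASSWORD = 'myPassword@123';
```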

Now restart the SQL Server service so that the authentication mode change takes effect. After this, your application should be able to log in with the changed password in SQL authentication mode. This fix looks simple, but sometimes it is of great help. I hope this blog will land in someone’s web search someday, and I seriously hope it helps you too.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Huge Transaction log (LDF) for Database in Availability Group

One of my blog readers posted on Facebook:

Hi Pinal,
We are using an AlwaysOn availability group for our production database. The database is fairly large, around 260 GB, and its transaction log file grows by around 2.5 GB every day. Over the weekend the DBA team noticed that the log file for this database bloated to around 230 GB and then seemed to hold steady. While troubleshooting, I noted an error indicating that an index rebuild failed due to “AVAILABILITY_REPLICA”. Here is the error which we saw in the ERRORLOG:

Error: 9002, Severity: 17, State: 9.
The transaction log for database is full due to ‘AVAILABILITY_REPLICA’

The question I have is what would be causing this log file to get so large, and is this normal behavior for a database in an Availability Group?

Thanks!

Here is my reply to him:

Thanks for asking such a wonderful question. Before I answer it, keep the points below in mind:

  1. Make sure that regular transaction log backups are taken, so that the log is truncated on a regular basis. This is needed for any database in the full or bulk-logged recovery model.
  2. Make sure the replicas are in sync; otherwise a transaction log backup will not truncate the log.

Coming back to the question: yes, log-intensive transactions, such as modifying a large amount of data in tables or rebuilding an index, can cause unusual log growth. This is because the log cannot be truncated until redo has completed the changes on all secondary replicas of the availability group. Statements like CREATE INDEX or ALTER INDEX are log intensive by nature. You cannot eliminate the log generation, but you can schedule maintenance intelligently to reduce the impact of index rebuilds on production activities.

In general, you can use the following query to see what is preventing log space reuse:

SELECT name, log_reuse_wait_desc
FROM sys.databases;
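If the wait description shows AVAILABILITY_REPLICA, you can also check how far behind the secondaries are; a sketch using the sys.dm_hadr_database_replica_states DMV:

```sql
-- Log not yet sent to, or not yet redone on, a secondary keeps the log from truncating
SELECT DB_NAME(database_id) AS database_name,
       synchronization_state_desc,
       log_send_queue_size,   -- KB of log not yet sent to the secondary
       redo_queue_size        -- KB of log not yet redone on the secondary
FROM sys.dm_hadr_database_replica_states;
```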

Working with AlwaysOn Availability Groups can sometimes be very tough and tricky. Understanding the basics and knowing the behavior can greatly improve our reasoning about why it behaves like this. Have you ever encountered such a situation in your environment? What did you do about it? I would love to hear from you on this for sure.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – JSON Support for Transmitting Data for SQL Server 2016 – Notes from the Field #114

[Note from Pinal]: In this episode of the Notes from the Field series, database expert Kathi Kellenberger explains JSON support for transmitting data in SQL Server 2016. Kathi is an amazing instructor; she was one of the SQL authors I read early in my career. One of the reasons I love SQL Server is that her writing instilled a love for this technology in me. Today she brings a relatively interesting topic for database experts. Read about the experience of Kathi in her own words.


My favorite aspect of working with SQL Server is the T-SQL language. I love working with data, writing queries, and tuning queries when they don’t perform well. Whenever a new version of SQL Server is on the way, I can’t wait to hear what has been added to T-SQL.

I was really hoping to see additional T-SQL windowing functionality in 2016 but, so far, nothing has been announced. There is, however, a very interesting programmability enhancement on the way: JSON support.

JSON, which stands for JavaScript Object Notation, is used for transmitting data and is used in many programming languages. JSON looks a bit like XML with square [] and curly {} brackets instead of angle <> brackets.

Here is an example comparing XML to JSON with data from the Production.Product table.

XML:
<Product>
  <Name>HL Road Frame - Black, 58</Name>
  <ListPrice>1431.5000</ListPrice>
</Product>
<Product>
  <Name>HL Road Frame - Red, 58</Name>
  <ListPrice>1431.5000</ListPrice>
</Product>
<Product>
  <Name>Sport-100 Helmet, Red</Name>
  <ListPrice>34.9900</ListPrice>
</Product>
<Product>
  <Name>Sport-100 Helmet, Black</Name>
  <ListPrice>34.9900</ListPrice>
</Product>
<Product>
  <Name>Mountain Bike Socks, M</Name>
  <ListPrice>9.5000</ListPrice>
</Product>

JSON:

{
"Product": [
{
"Name": "HL Road Frame - Black, 58",
"ListPrice": 1431.5
},
{
"Name": "HL Road Frame - Red, 58",
"ListPrice": 1431.5
},
{
"Name": "Sport-100 Helmet, Red",
"ListPrice": 34.99
},
{
"Name": "Sport-100 Helmet, Black",
"ListPrice": 34.99
},
{
"Name": "Mountain Bike Socks, M",
"ListPrice": 9.5
}
]
}

The additions to T-SQL to support JSON are listed below. There is one new clause and four functions:

- FOR JSON (clause): returns JSON from a SELECT statement
- OPENJSON (function): returns JSON data in tabular format
- ISJSON (function): returns 1 if a string contains valid JSON
- JSON_VALUE (function): returns a single scalar value from a JSON string
- JSON_QUERY (function): returns an object or array from a JSON string

I used the FOR JSON clause to generate the example JSON data above. Here is the code I used:

SELECT TOP(5) Name, ListPrice
FROM Production.Product
WHERE ListPrice > 0 AND Name NOT LIKE '%Seat%'
FOR JSON PATH, ROOT('Product');

There are several options, so be sure to take a look at the documentation for more information. Here is an example script demonstrating the functions:

--Save a string in JSON format to a variable
DECLARE @Books NVARCHAR(MAX) = N'
{"Books":
   {"BookList":
      [ {"Book":
           {"Title":"Microsoft SQL Server T-SQL Fundamentals",
            "Authors":"Itzik Ben Gan"}
        },
        {"Book":
           {"Title":"The Data Warehouse Toolkit: The Definitive Guide to Dimensional Modeling",
            "Authors":"Ralph Kimball; Margy Ross"}
        },
        {"Book":
           {"Title":"SQL Server Concurrency: Locking, Blocking, and Row Versioning",
            "Authors":"Kalen Delaney"}
        }
      ]
   }
}';

IF ISJSON(@Books) = 0
BEGIN
    PRINT 'Invalid JSON string';
END
ELSE
BEGIN
    --Turn the JSON string into tabular format
    SELECT *
    FROM OPENJSON(@Books, '$.Books.BookList')
    WITH (
        Title NVARCHAR(100) '$.Book.Title',
        Authors NVARCHAR(100) '$.Book.Authors');

    --Return the BookList array in JSON format
    SELECT JSON_QUERY(@Books, '$.Books.BookList') AS [Book Array];

    --Return the second row as JSON
    SELECT JSON_QUERY(@Books, '$.Books.BookList[1]') AS [Second Book];

    --Return the title from the second row
    SELECT JSON_VALUE(@Books, '$.Books.BookList[1].Book.Title') AS [Book Title];
END;

The first query returns the three books in tabular format; the remaining queries return the BookList array, the second book, and its title, respectively.
The most challenging aspect for me when writing the previous example was creating a valid JSON string. The path expression, the string starting with $ that is used with the functions, looks a bit intimidating, but it is actually simple. The $ is the starting point; follow the path in the data to get what you want.

Let’s take a look at the last query:

SELECT JSON_VALUE(@Books,'$.Books.BookList[1].Book.Title') AS [Book Title];

The JSON_VALUE function takes two parameters: the JSON string and a path. In this case, I am specifying the item to extract from the array. It is found in Books, and the array is called BookList. I want the item at index 1 (indexes are zero-based, so this is the second book). The item is called Book, and I am retrieving its Title property.

The new functionality will make it easier for developers to work with SQL Server data regardless of the programming language they use.

Kathi Kellenberger is the author of Beginning T-SQL, Expert T-SQL Window Functions in SQL Server, and PluralSight course T-SQL Window Functions.

If you would like to learn more about T-SQL window functions, be sure to check out my latest book Expert T-SQL Window Functions in SQL Server or my Pluralsight course T-SQL Window Functions.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – 2016: Creating Simple Temporal Table

Temporal tables are a new feature introduced in SQL Server 2016. They give application developers the ability to view the state of data at a point in time, and they can also be used for logging purposes if required. I wanted to explore how this feature can be used; the documentation has been interesting, and I went ahead and tried out how it can be enabled.

One of the most common use cases for this feature is finding data values as of a past date. This can now be easily configured in SQL Server 2016. Let us go ahead and create a database, and then create a table with the temporal history table created automatically by default. The syntax for this looks like:

CREATE DATABASE TemporalDB;
GO
USE TemporalDB;
GO
--Simple temporal table with history table automatically created (name of history table is MSSQL_TemporalHistoryFor_<object_id of table>):
CREATE TABLE dbo.TemporalExample
(
ID INT NOT NULL PRIMARY KEY,
ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
ValidTo DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON);
GO

After running this in my SQL Server 2016 database, the table has been created and the system-generated history table is linked with it.

I wanted to understand if we can get the same information from the system catalog. I have used the sys.tables catalog view to select it:

SELECT name AS [Original_Table],
       OBJECT_NAME(history_table_id) AS [History_Table_Name]
FROM sys.tables
WHERE history_table_id IS NOT NULL;

The output shows the original table alongside the name of its system-generated history table.
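Once rows have been modified, the point-in-time lookup mentioned earlier is a single clause; a minimal sketch (the timestamp is a placeholder you would replace with a time of interest):

```sql
-- Rows as they existed at a given UTC instant (placeholder timestamp)
SELECT ID, ValidFrom, ValidTo
FROM dbo.TemporalExample
FOR SYSTEM_TIME AS OF '2016-02-01 00:00:00';
```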

Since these tables are linked, it is not possible to drop the main table without stopping system versioning first. Now that we have created the table, let us look at dropping it: we will stop versioning and then drop both tables.

USE [TemporalDB]
GO
ALTER TABLE [dbo].[TemporalExample] SET ( SYSTEM_VERSIONING = OFF)
GO
DROP TABLE [dbo].[TemporalExample]
GO
-- Drop the history table which was auto-created in your database
DROP TABLE [dbo].[MSSQL_TemporalHistoryFor_565577053]
GO

In the above code, your history table name will be different, so please make the change appropriately. Let us finish by cleaning up the database that we created to experiment with temporal tables.

USE MASTER
GO
DROP DATABASE TemporalDB
GO

What are some of the use cases in which you think temporal tables will be used in your environment? We will surely blog about more temporal functionality in the future. I am sure you will find this interesting.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL Server Auditing and Recovery With ApexSQL Log 2016

ApexSQL Log is a powerful SQL Server transaction log reader for forensic auditing and for rollback of malicious and/or unintended changes in Microsoft SQL Server. It is a cost-effective solution for recovery, row-change and/or before-after auditing. ApexSQL Log reads online transaction log files or transaction log backup files in order to create auditing results, including before and after views of the data as well as the full row history of all changes.

From the recovery standpoint, ApexSQL Log provides sophisticated undo/redo capabilities which enable it to create TSQL scripts which can be used to completely reverse or replay all or only selected transactions read from the online transaction log file and added transaction log backups.

To ensure that transaction log files contain the information ApexSQL Log needs for auditing, it is important that SQL Server does not overwrite information in the transaction log, which it does when the database recovery model is set to simple. With this in mind, keep your databases in the full recovery model to provide valid information for auditing and to give yourself the highest possible chance of recovering data when disaster strikes and important data is lost.

To start things up, let’s quickly decide on how we want to perform the auditing: locally or remotely.

Depending on the environment, both local and remote auditing may be preferred, and the following overview of both will help you decide which to choose.

Local auditing

ApexSQL Log can be installed directly on the server which hosts the SQL Server, and auditing is hence performed locally. Since ApexSQL Log has no performance overhead during audit trail capture, it is recommended to use ApexSQL Log locally when possible, to avoid any performance issues which can be caused by network limitations or congestion in high traffic environments.

Remote auditing

If you choose to install ApexSQL Log on a specific workstation and connect to the SQL Server remotely, you will need to install the server-side components which enable remote auditing. These components are not installed inside the SQL Server instance itself, but on the machine which hosts it; they are in fact a simple service which allows remote auditing of the online transaction log file. More on the components can be read in this article.

Installation of these components is as straightforward as it gets: simply start the installation executable on the server and choose the option to ‘install server-side components’. An alternative is to install them directly from the GUI when connecting to the remote SQL Server; the application will inform you when the server-side components are not installed on the remote server, and offer to install them from the GUI itself.

How it works

ApexSQL Log has a straightforward wizard which will guide the user through the auditing or recovery process.

In order to audit specific transaction log files, it is necessary to connect to the database to which those transaction log files/backups belong. Hence, once the application is started, the user is required to provide database connection details. After choosing a SQL Server and an authentication method (and providing appropriate credentials), the database can be selected from the drop-down list.


Advancing through the wizard, the next step requires the user to specify which data sources (transaction log files, backups or detached .ldf files) will be used for the auditing job. ApexSQL Log automatically loads the online transaction log and all related transaction log files/backups from the default SQL Server folder. Additional files can be loaded manually by using the Add file dialog, or by defining a naming-pattern policy to add multiple files at once. Simply check files in the list to include them in the auditing process.


Note: when selecting transaction log files for auditing, it is important to provide a full chain of transaction log backups if you want a continuous data stream and complete transactional information. More on how to create and maintain a full chain of transaction log backups can be read here.

The next step of the wizard, following data source selection, is output selection. The choices are: export results, a before-after report, an undo/redo script, or opening the results in a grid.


Export results

This option is pretty self-explanatory – the auditing results will be exported to one of the supported formats. The following article provides detailed information on exporting in ApexSQL Log.


Creating a before-after report

This report, with full before-after (change) details, can be created as another ApexSQL Log output. It is perfect for investigating what change occurred and what the original form of the changed row(s) was; the ‘what, who and when’ investigation. Here is an article on how to create a data change before-after report.


Undo/Redo script creation

This option creates a SQL script to revert any unwanted changes, or to replay them against any other table/database. More on how to revert unwanted changes or replay changes can be read here.

Open results in grid

This option allows in-depth analysis of the audited data directly from the GUI. Auditing results are displayed in a grid/table format where they can be further analyzed or filtered down to more specific results. All three previously described outputs can also be created from the grid itself, so if you are not sure which output you need and want to inspect or modify the results before creating an export or an undo/redo script, opening the results in the grid is the best option to choose here.

Filtering

The final step in the wizard, regardless of the output type, is the filter setup. ApexSQL Log offers various filters: the user can specify date/time filters, operation filters and table filters, or use advanced options to further refine the output.


The filter setup also provides one really useful feature: continuous auditing. Continuous auditing is used in place of the date/time filter, and enables an auditing job on the database to resume exactly where the previous job finished, which ensures that nothing is missed between regular auditing jobs. More on continuous auditing can be read in this article.

The ‘Grid’ overview

Once the auditing has completed, all results are shown in a grid, from where they can be analyzed. Additional filtering is available in the grid, so it is quite easy to refine the output or fine-tune the results to see only specific transactions.


The grid overview gives the user true insight into the transaction log files, and various pieces of information can be found in several different panes.

Operation details

This pane shows full operation details for each and every audited transaction. It contains the full before-after information, and you can see the before (old) and after (new) values for the selected operation.


Transaction information

As the name suggests, this pane contains information on the transaction in which the selected operation was included.


The Undo and Redo script panes provide a quick glance at the rollback/replay scripts, while the complete row history of the selected row, as far back as the transaction log files reach, can be seen in the Row history pane.


As previously mentioned, all output options are available from the grid overview, so the user can create exports, before-after reports or undo/redo scripts from the GUI. In addition to working from the GUI, ApexSQL Log fully supports a CLI: all commands and operations supported in the GUI are available in the CLI, which enables automation of auditing with ApexSQL Log via Windows Task Scheduler or similar tools.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to Connect Using NT AUTHORITY\SYSTEM Account?

Sometimes it is necessary to connect to SQL Server using the SYSTEM account. Don’t ask me whether this is a valid scenario in the first place; someone pinged me to say they wanted to do it. I was personally not sure why, but they had their own internal requirement.

It is not possible to provide Windows credentials in SSMS; those fields are always grayed out. I always keep trying to find solutions to such atypical use cases, and I was able to find that PsExec can be used to achieve this. I am a big fan of the Sysinternals tools; they are handy and have quite a small footprint on servers. One of the tools I use day in, day out while doing presentations is ZoomIt.

Step 1: Download PSTools from https://technet.microsoft.com/en-us/sysinternals/psexec.aspx

Step 2: Extract and open command prompt to the extracted location.

Step 3: Provide below command

psexec -i -s ssms.exe

The -i parameter allows the program to run so that it interacts with the desktop of the specified session on the remote system.

The -s parameter launches the process using the SYSTEM account.

Here is what we would see on the command prompt.


And it would launch the SSMS program with the user name populated as “NT AUTHORITY\SYSTEM”.
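Once the new SSMS window connects, you can confirm the security context from a query window:

```sql
-- Shows the login name of the current connection
SELECT SUSER_SNAME() AS login_name;  -- NT AUTHORITY\SYSTEM when launched via psexec -s
```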


When I disabled that account, I got the below error in the ERRORLOG:

2016-02-03 15:31:07.560 Logon        Error: 18470, Severity: 14, State: 1.
2016-02-03 15:31:07.560 Logon        Login failed for user ‘WORKGROUP\SQLSERVER2016$’. Reason: The account is disabled. [CLIENT: <local machine>]

Notice that we do not see NT AUTHORITY\SYSTEM in the ERRORLOG; we see the machine name followed by a dollar sign. Since my machine is in a workgroup called “WORKGROUP” and the machine name is SQLSERVER2016, we see WORKGROUP\SQLSERVER2016$.

Have you ever used any such tool to troubleshoot anytime in your environment? What use cases were you able to use these tools in the past? Do let me know via comments.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #057 – What is GO in SQL SERVER?

GO is not a Transact-SQL statement, but it is often used in T-SQL code. GO causes all statements from the beginning of the script or since the last GO (whichever is closer) to be compiled into one execution plan and sent to the server independent of any other batch. SQL Server utilities interpret GO as a signal to send the current batch of Transact-SQL statements to an instance of SQL Server. The current batch is composed of all statements entered since the last GO, or since the start of the ad hoc session or script if this is the first GO.

GO must be written on a new line, as it is not a T-SQL command; a T-SQL statement cannot occupy the same line as GO. The line containing GO can, however, contain comments.

Following is an example for SQL Server using the AdventureWorks database.

USE AdventureWorks2014;
GO
DECLARE @MyMsg VARCHAR(50)
SELECT @MyMsg = 'Hello, World.'
GO ---- @MyMsg is not valid after this GO ends the batch.

---- Yields an error because @MyMsg was not declared in this batch.
PRINT @MyMsg
GO
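As a side note, GO also accepts an optional count argument that executes the preceding batch that many times; the table name below is just for illustration:

```sql
CREATE TABLE dbo.BatchDemo (ID INT IDENTITY);
GO
INSERT INTO dbo.BatchDemo DEFAULT VALUES
GO 5 ---- Runs the INSERT batch five times, producing five rows
```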

Reference: Pinal Dave (http://blog.sqlauthority.com)

MySQL – How to Generate Random Number

In MySQL, the UUID() function returns a Universal Unique Identifier: a 36-character value made up of five hexadecimal parts separated by hyphens. If you want to create a random password, you can make use of this function.

SELECT UUID() ;

This returns a string such as 85aeb064-8f73-11e5-85ef-02fcc4101968 (note that the value differs on every execution). As the total length is 36 characters, you can use part of the result to get a random password of any length up to 36.

Suppose you want to generate an 8-character password; you can use:

SELECT LEFT(UUID(),8) random_password ;

If you want a 10-character password, you can use:

SELECT RIGHT(UUID(),10) random_password ;

This way you can generate a random password. As UUID() returns a hexadecimal number with five parts separated by hyphens, your password will contain digits, letters, and possibly hyphens, depending on the length you use. Keep in mind that UUID() values are built from the timestamp and the machine’s network address, so they are predictable; avoid this trick for security-critical passwords.
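If you prefer letters and digits only (no hyphens), strip them before taking the prefix; a small sketch:

```sql
-- Remove hyphens first, then take as many characters as you need
SELECT LEFT(REPLACE(UUID(), '-', ''), 12) AS random_password;
```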

There can be many ways of doing this, but this is the simplest one.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Install Error – The /UIMode setting cannot be used in conjunction with /Q or /QS

Everyone hates doing repetitive tasks, and I am sure many of us have resorted to ingenious techniques to do them efficiently; there is no standard tool or technique that everyone uses. Recently I was asked to install SQL Server instances with the same configuration on 10 servers. Who would want to log in to all those servers, run setup.exe, and go through the same setup wizard each time? Luckily, SQL Server setup allows you to create a configuration file. The very thought of extracting the settings as a configuration file made me jump out of my seat.


I copied the ConfigurationFile.ini generated by the setup wizard and moved it to the C drive of a second server. This was the easy part, because I knew an exact replica of the configuration would flow to the new servers. Then I ran the below command:

setup.exe /CONFIGURATIONFILE="C:\ConfigurationFile.ini" /QS /IACCEPTSQLSERVERLICENSETERMS

This is meant to be a silent install. The above command failed with the below error, which was strange to me:

The following error occurred:
The /UIMode setting cannot be used in conjunction with /Q or /QS.

Error result: -2054422500
Result facility code: 1420
Result error code: 28
Please review the summary.txt log for further details

I looked into the file and found the below lines.

; Parameter that controls the user interface behavior. Valid values are Normal for the full UI,AutoAdvance for a simplied UI, and EnableUIOnServerCore for bypassing Server Core setup GUI block.
UIMODE=”Normal”

I added a ; before that line so that it is commented out (you can also remove the line). After that, the setup went fine and I was able to install on the remaining nine servers in a very short time. I also found that I can run the setup remotely using the below PowerShell command:

Invoke-Command -ComputerName xxx -ScriptBlock
{D:\SQL2012\setup.exe /ConfigurationFile="D:\SQL2012\ConfigurationFile.ini"}

Have you ever used a configuration file in your environments? What are some of the automation techniques you have used for installing software? Do let me know via comments below.

Reference: Pinal Dave (http://blog.sqlauthority.com)