MySQL – List User Defined Tables – Two Methods

There are different methods to list user-defined tables in MySQL. We will look at two of them in this post.

Let us create the following tables in a database named Test.

Create a database named Test

CREATE DATABASE Test;

Create the tables named items and sales in the database Test.

USE Test;
CREATE TABLE items(item_id INT, item_description VARCHAR(100));
CREATE TABLE sales(sales_id INT auto_increment KEY,item_id INT, sales_date DATETIME, sales_amount DECIMAL(12,2));

Now, to list all user-defined tables, use either of the following methods.

Method 1: Use SHOW TABLES command

SHOW TABLES;

If you execute the above statement, it lists all the user-defined tables in the database in which it is run (in this example, the database named Test).

The result is

Tables_in_test
--------------
items
sales

You can also specify the database name explicitly:

SHOW TABLES FROM Test;

This gives the same result.

Method 2: Use INFORMATION_SCHEMA view

You can make use of the ANSI-standard INFORMATION_SCHEMA.TABLES view to do the same, as shown below:

SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA='Test';

The result is

TABLE_NAME
--------------
items
sales
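Note that if the database also contains views, both methods above will list them alongside the tables. Assuming you want only base tables, you can add a filter on the TABLE_TYPE column (this extra filter is our own illustration, not part of the original example):

```sql
-- List only base tables in the Test database, excluding views.
-- TABLE_TYPE is 'BASE TABLE' for regular tables and 'VIEW' for views.
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'Test'
  AND TABLE_TYPE = 'BASE TABLE';
```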

This is a small trick, but it works great when you want to know what tables exist in a MySQL database.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – How to Detect Schema Change Across Two Servers

If you are a database developer or database administrator, you will absolutely relate to this story. You can download the software discussed in this blog post and play along.

Story of My Early Career

I encountered a very interesting scenario early in my career. In those days, we used to develop everything on our local boxes; our application and database both stayed on the same machine. I was allowed to code only on the module or section assigned to me, and I could not code or do anything else besides what I was assigned. Because we coded in this modular way, it was easy to deploy the code upon completion of the project. This all worked fine until I went on vacation.

Once, while I was on vacation, the development of my project was done by my friend and colleague Romesh. Upon my return, Romesh wanted to hand the entire project back to me so he could continue with his own project. The problems started here, as his project and my project were totally different. Our database was the same, but the tables and stored procedures we worked on were mostly very different. In this situation, the best approach for me was to take the schema changes he had made on his machine over to my machine and continue the project. I had to be very careful not to take any other schema from his current project: if I had a part of his project, it would go live when I deployed my project, and that should not happen.

The Epic Dilemma

Here was the problem: we were sitting together, discussing how to selectively synchronize the schema from his machine to mine. I did not want the schema of the project he had been working on earlier, as that would further complicate my deployment, but I did want all the objects which belonged to my project. How do we achieve this complicated goal of selective schema difference detection and synchronization?

Selective Schema Compare Detection and Synchronization

After some research online, we landed on Embarcadero's DB Change Manager. We downloaded the trial version and tried a selective schema comparison. It is a pretty easy-to-use tool which walked us through a wizard, and we were instantly able to detect schema changes between my database server and Romesh's. We used a couple of filters to indicate which tables were mine and which were his, and were quickly able to see only the relevant changes.

Once we detected the changes between the schemas, we quickly generated a script to deploy on my database server with the help of the Manual Synchronization feature. I copied the script and was instantly able to move on with my project. I really feel that sometimes tools like Embarcadero's DB Change Manager make quite a positive impact on the efficiency of work life.

Here is a quick video which demonstrates how quickly one can use Embarcadero's DB Change Manager to compare schemas.

Let me know what you think of this video. You can download Embarcadero’s DB Change Manager.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Q and A from Facebook Page

Social media has changed everything. There was a time when we used to send emails, but nowadays we all have many different ways to connect with each other. Here are a few of the questions I received on the Facebook Page in the month of March.

Q: How to change the collation of our entire server? (link)
A: You will have to reinstall SQL Server to change the collation of the server.

Q: How to recover system databases from suspect mode without having any backups? (link)
A: You need a backup to recover a corrupt system database.

Q: How to retrieve the last 5 inserted rows from a table? (link)
A: You can retrieve the data based on a date field or an identity field. If you have neither, no luck. (blog)

Q: What does the *= operator do in a SQL query? (link)
A: It is an old way of writing a left outer join; it is now deprecated, so avoid using it.
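For illustration, here is a side-by-side of the deprecated syntax and its ANSI-standard equivalent (the table and column names here are hypothetical, chosen only to show the pattern):

```sql
-- Old, deprecated outer-join syntax (no longer supported in recent SQL Server versions):
-- SELECT c.name, o.order_id
-- FROM customers c, orders o
-- WHERE c.customer_id *= o.customer_id;

-- Equivalent ANSI-standard syntax:
SELECT c.name, o.order_id
FROM customers c
LEFT OUTER JOIN orders o
  ON c.customer_id = o.customer_id;
```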

Q: How to reduce CXPACKET and PAGEIOLATCH_XX waits in a SQL Server database? (link)
A: CXPACKET is an effect of parallelism, which may or may not be a problem. PAGEIOLATCH_XX indicates slowness in the disk subsystem, which can very well cause CXPACKET waits as well. (blog, blog)
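As a starting point for investigating such waits, you can query SQL Server's cumulative wait statistics DMV; the sketch below simply filters it for the two wait families mentioned above:

```sql
-- Inspect cumulative wait statistics for the waits discussed above.
-- Counters accumulate since the last server restart (or stats clear).
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       max_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'CXPACKET'
   OR wait_type LIKE 'PAGEIOLATCH%'
ORDER BY wait_time_ms DESC;
```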

Well, these are the five questions I received on Facebook. If you have any questions, let us connect on Facebook.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SQL Server 2014 Developer Training Kit and Sample Databases

SQL Server 2014 RTM is around the corner. There has been plenty of interest in SQL Server 2014 recently, and one of the most frequently asked questions is about learning resources for it. Everybody wants to learn SQL Server 2014, but there is not much learning material available yet.

Microsoft has released two important resources around it.

SQL Server 2014 Developer Training Kit

The SQL Server 2014 Developer Training Kit includes technical content designed to help you learn how to develop SQL Server 2014 databases, BI Solutions, and gain insights into the new cloud capabilities to simplify cloud adoption for your SQL databases and help you unlock new hybrid scenarios.

SQL Server 2014 Developer Training Kit Sample Databases

This download contains several sample databases for the SQL Server 2014 Developer Training Kit which showcase many of the new and improved features of SQL Server 2014, including Clustered Columnstore, Online Operations, and Resource Governor IO.

Here are a few of the key features in SQL Server 2014:

  • Resource Governor IO
  • Online Operations
  • Updatable Columnstore Index
  • In-Memory OLTP Engine
  • Enhancements in SSDT and BI tools
  • Power BI Integration

Reference: Pinal Dave (http://blog.sqlauthority.com)

MySQL – Generate Script for a Table Using SQL

In SQL Server, to generate the CREATE TABLE script for a table, you need to rely on the SQL Server Management Studio (SSMS) tool; there is no built-in function to do this using SQL. In MySQL, however, you can generate the script for a table using SQL.

Let us create the following table

CREATE TABLE sales(sales_id INT auto_increment KEY,item_id INT, sales_date DATETIME, sales_amount DECIMAL(12,2));

Now, to view the script of the table sales, you can make use of the SHOW CREATE TABLE statement. This statement accepts a table name as a parameter and returns the CREATE TABLE script for that table.

Run the following code

SHOW CREATE TABLE sales;

The result set has two columns; the second column displays the following script.

CREATE TABLE `sales` (
  `sales_id` INT(11) NOT NULL AUTO_INCREMENT,
  `item_id` INT(11) DEFAULT NULL,
  `sales_date` DATETIME DEFAULT NULL,
  `sales_amount` DECIMAL(12,2) DEFAULT NULL,
  PRIMARY KEY (`sales_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

Note: You can use the same SHOW CREATE TABLE statement to view the script for a VIEW, although there is a separate SHOW CREATE VIEW statement that accepts a view name as a parameter.
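For example, assuming a view named sales_summary defined over the sales table (the view itself is our own illustration, not part of the original example):

```sql
-- Create a sample view over the sales table (illustrative only).
CREATE VIEW sales_summary AS
SELECT item_id, SUM(sales_amount) AS total_amount
FROM sales
GROUP BY item_id;

-- Either statement returns the view's definition:
SHOW CREATE TABLE sales_summary;
SHOW CREATE VIEW sales_summary;
```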

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQLAuthority News – Sign up for SQL Authority Low Frequency Newsletter

One More Newsletter!

You might already be receiving our emails in your mailbox, so why should you sign up for the SQL Authority Newsletter (Sign-up)?

Well, a great question. Let me answer this first.

Why should you sign up?

First of all, it is not going to be the daily email from the blog which you are already receiving. It is going to be a very informal conversation between us. I blog about various topics related to technology, but there are a few things I want to keep exclusively for the newsletter. For example, if I want to discuss a book I am reading, a movie I have watched, database news I have heard from the industry, or just something really cool that does not feel appropriate for the blog, I am going to include it in my newsletter. I am not going to send you a summary of the week or a best-of-the-month digest; if I send you a newsletter, it is going to contain something unique and useful.

What do I mean by Low Frequency Newsletter?

Well, there is no set frequency for this newsletter. I am not going to send it every Tuesday or Thursday; I am going to send it when I have something unique to say and a newsletter is the most appropriate way to say it. You may receive this email once a month, or maybe once every two weeks; there is no set frequency.

What will this newsletter contain primarily?

Again, there is no set agenda for the newsletter, but I promise it is going to be fun and interesting. This blog is primarily about technology, but there are so many different elements of life I face every day that I hardly get time to share with everybody. I do quite a bit of sharing on Facebook (if you have not liked the page, do it now), but again, there are a few things which just make more sense in a newsletter.

How do I sign up for the newsletter?

Click here to sign up for the newsletter. You will be sent a confirmation email; please follow the link in it to activate your subscription.

What if I want to unsubscribe to the newsletter?

In the newsletter, there is a link to unsubscribe. No hard feelings; it is your mailbox and you decide what goes there.

Final Note

If you sign up for the newsletter before the end of this month, you will be entered to win a Hexbug Hive (India) or Hexbug Hive (USA). The winner will be announced in the newsletter only. If you have never played with a Hexbug or Hexbug Hive, you may just want to sign up for the newsletter for a chance to win one. It is a pretty fun toy and people of every age like it. See the video over here.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSIS Execution Control Using Precedence Constraints – Notes from the Field #021

[Notes from Pinal]: Lots of people think that SSIS is all about arranging various operations together in one logical flow. That understanding is absolutely correct, but implementing it is not as easy as it seems. Lots of people start with lots of enthusiasm, but when it comes to building a control flow which addresses all the necessary elements of package execution, they face issues with the ordering of tasks. This is a very critical subject and it deserves some expert advice.

Linchpin People are database coaches and wellness experts for a data-driven world. In this 21st episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) explains how to control program flow during SSIS package execution.


In this post, I’m going to review the essentials of precedence constraints in SQL Server Integration Services, and will demonstrate how these can be used to precisely control the flow of the program during the SSIS package execution.

In a control flow for SQL Server Integration Services, the “lines” we commonly use to connect tasks to each other are actually smart controls.  These precedence constraints allow the ETL developer to customize the program flow from one task to another.  The most common use of precedence constraints is to simply connect two tasks with the default constraint, which is the success constraint.  In this usage, the downstream task will be executed if and only if the execution of the upstream task is successful.  As shown in the example below, the task DFT Write data to output table will be executed if the preceding task named SQL Truncate output table is successfully executed.  Similarly, the final task in the chain, named SQL Log row count to audit tbl, will be executed only if the preceding data flow task was successful.

The green lines connecting these tasks are the precedence constraints.  In the above example, we know that these are success precedence constraints by their color.

Using precedence constraints for success operations is not the only option we have.  Because these are smart controls, we can configure them in a variety of ways for maximum flexibility.  An example of this would be to use a failure precedence constraint to code for the possibility of failure.  In this scenario, we could create one or more tasks that would be executed in the event of a failure within a package.  As shown below, we have a snippet of a package showing a failure path from the data flow by way of a failure precedence constraint.

In this example, we connect a precedence constraint from the data flow task to the task named SQL Remove data from partial load, but instead of using the default setting, we reconfigure this as a failure constraint (shown with the “Failure” label and red color) to redirect the execution flow to that task in the event that the data flow execution fails.  Downstream from there, we have yet another precedence constraint configured as a completion constraint (indicated by the label and the black color).  This constraint indicates that the task SQL Log failure event to event log will be executed whenever the upstream task is finished executing, irrespective of whether the result of that execution was success or failure.

To configure a precedence constraint to a setting other than the default value of success, simply double-click on the precedence constraint and set the appropriate setting, the options for which are shown below.

As shown, we can choose Success (the default setting), Failure, or Completion.

Note that there are a few other options as well.  Above the selection for the type of constraint, you’ll see that we can also choose the evaluation operation.  The default setting is Constraint, which means that the precedence constraint uses only the success/failure/completion results as described above.  However, we also have some other options.  We can use an SSIS expression along with, or perhaps instead of, the execution status evaluation to determine how to control the execution flow.

Additionally, we have the ability to control the behavior when multiple constraints are used.  By default, if multiple constraints are connected to a single downstream task, then all of those constraints must evaluate true for said downstream task to execute.  However, we can override that default behavior by setting this value for multiple constraints to Logical OR, indicating that the downstream task will be executed if any of the connected constraints evaluates true.

As shown in the example below, we have modified the original example, changing both the evaluation operation as well as the multiple constraint behavior.  On the constraint between the data flow task and the data deletion task, we add an SSIS expression to check the row count to confirm whether any rows were loaded as part of the failed execution (since we wouldn’t need to run the delete operation if no rows were loaded).  Further, we’ve added a completion constraint between the data flow task and the task named SQL Log failure event to event log and set the multiple constraint behavior to Logical OR to make sure we log the failure whether or not the delete operation occurred.

We get visual reminders of these settings as well.  The function indicator (which looks like an italicized fx) appears on the precedence constraint on which we are using the expression.  For the Logical OR setting, the constraints will appear as dashed lines rather than solid lines.

Precedence constraints are much more than just lines drawn between two tasks.  In fact, these are powerful and flexible decision tools that allow the ETL developer a great deal of control over the flow of the SSIS execution.

If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)