Web technologies are moving at a pace we can hardly keep up with. So much investment is going into the web space that even database developers like me find it interesting. In that context, I was recently exploring Skyvia (SQL SERVER – Integrate Your Data with Skyvia – Cloud ETL Solution), from ETL to how one might query data, and was surprised to see how useful some of these capabilities can be.
This blog stems from a simple idea that had been on my mind for a long time, and I was struggling to find a suitable solution until I saw what was available here. Imagine you are a DBA on vacation in a far-away land. You didn’t take your laptop along because you wanted to truly enjoy your vacation. Everything looks normal until you get an SMS from your management that something is wrong on one of your servers and they need 10-15 minutes of your time to resolve it. With no laptop and none of the tools you would normally have installed on your machine, how can you help your management with a bit of query support? Is it even possible? Assume you get to an internet kiosk and want to do some troubleshooting. How will you do it? Well, there are a number of solutions.
One of the many available is to use Skyvia to query your servers once you have completed a simple setup. In this example, I have used a SQL Server 2014 instance hosted with a cloud provider so that we can see what is going wrong.
Let us start by connecting to Skyvia and setting up our connections.
In my example, I am going to use the “SQL Server” source and provide all the credentials. Once done and tested, we should see it in our Connections tab.
Once the connection settings are in place, we are all set to run our queries against the backend. We can then head to the Query tab as shown below:
At this stage, I just linked the Query window to the connection we created in the previous step. This listed all the tables that were available on my server. It was a really cool moment, because I was able to work on my database in less than 5 minutes via the web.
Obviously, the tables and objects displayed depend on the permissions of the account used to connect. If there are hundreds of tables to work with, we can use the search box to find the objects we need.
The next stop is the query window itself. It is here that we can type any SQL-92 query to be fired against our database. Since I wanted to know what was happening in my SQL Server system and which operators a few long-running queries were spending their time on, I used a DMV from SQL Server 2014 to check if it works. To my surprise, I was not disappointed.
This is the query result output based on the query that I was running in the backend.
SELECT node_id,
       physical_operator_name,
       SUM(row_count) AS row_count,
       SUM(estimate_row_count) AS estimate_row_count,
       CAST(SUM(row_count) * 100 AS FLOAT) / SUM(estimate_row_count)
FROM sys.dm_exec_query_profiles
GROUP BY node_id, physical_operator_name
ORDER BY node_id;
Coming back to our query: based on the output, I was able to find out which query was running behind the scenes. As you can see, it is a dummy query without any filters or restrictions on the columns selected. Hence we see a number of Clustered Index Scans in our query output via the web.
SELECT *
FROM DimProduct
INNER JOIN DimProductSubcategory
    ON DimProduct.ProductSubcategoryKey = DimProductSubcategory.ProductSubcategoryKey
INNER JOIN FactInternetSales
    ON DimProduct.ProductKey = FactInternetSales.ProductKey
Once the query was identified, you can start looking at what is firing it and take corrective action. Any standard SQL syntax can be fired like this via the web. It is simple and quite powerful, to say the least.
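To trace such a query back to the session running it, a standard DMV pattern works here too. This is a sketch, not specific to Skyvia, joining sys.dm_exec_requests with sys.dm_exec_sessions and sys.dm_exec_sql_text:

```sql
-- Currently executing requests, with their SQL text and originating host/login
SELECT r.session_id,
       s.host_name,
       s.login_name,
       r.status,
       r.total_elapsed_time,
       t.text AS query_text
FROM sys.dm_exec_requests AS r
INNER JOIN sys.dm_exec_sessions AS s
    ON r.session_id = s.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID   -- exclude this monitoring query itself
ORDER BY r.total_elapsed_time DESC;
```

The host_name and login_name columns usually point straight at the application or user firing the statement, which is typically the next thing management wants to know.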
The results from Skyvia are flexible too: you can export them to CSV format for later offline consumption. As part of simple testing, I also fired a few DDL statements and was surprised to find that almost everything I normally do via a SQL Server Management Studio query window works here via the web, anytime and anywhere I like.
Another thing I’d like to mention is that Skyvia Query is not just for SQL Server and other databases. Skyvia Query supports SQL for cloud CRMs, such as Salesforce, Dynamics CRM, etc., and allows using SELECT, INSERT, UPDATE, and DELETE statements against them. This enables admins who know SQL to work with cloud data in the same way as with SQL Server data.
One of the classics as far as best practices are concerned is to exclude SQL Server files and folders from the antivirus programs running on the server. We all know this recommendation, yet I often see DBAs not paying any extra attention to it. There are not many blogs that call it out explicitly.
Although antivirus software is a very important part of security planning, it is important to understand the effect it has on SQL Server files. After rebooting a server, if the antivirus software locks a SQL Server file before SQL Server can gain access to it, potential issues ranging from SQL Server not being able to access those particular files to possible database corruption may occur. In a number of cases I have seen SQL Server refuse to start, or the CPU / IO seem to stall. As you can see, there are a number of these random errors one might get simply by not implementing a simple best practice. Therefore, it is recommended that the following files be excluded from all antivirus scans:
SQL Server data and transaction log files (typical extensions are .mdf, .ndf, and .ldf)
SQL Server backup files (typical extensions are .bak and .trn)
Full-text catalog files
Trace files (typical extension is .trc)
SQL Audit files
SQL query files (typical extension is .sql)
Directories holding Analysis Services data and partitions
The directory that holds Analysis Services temporary files used during processing
Analysis Services backup files
In addition to excluding SQL Server and Analysis Services files, it is recommended to exclude the main SQL Server processes from antivirus scans, for example:
sqlservr.exe (SQL Server Database Engine)
sqlagent.exe (SQL Server Agent)
msmdsrv.exe (Analysis Services)
For environments where SQL Server is clustered, exclude the C:\Windows\Cluster directory and the Quorum drive.
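As a starting point for building the data and log file part of that exclusion list, you can ask SQL Server itself where its files live. A minimal sketch using sys.master_files (the folder-extraction expression is illustrative):

```sql
-- Distinct folders holding data and log files across all databases,
-- handy as a starting point for the antivirus exclusion list
SELECT DISTINCT
    LEFT(physical_name,
         LEN(physical_name) - CHARINDEX('\', REVERSE(physical_name))) AS folder,
    type_desc   -- ROWS (.mdf/.ndf) or LOG (.ldf)
FROM sys.master_files
ORDER BY folder;
```

Backup folders and trace file locations are not covered by this view, so those still have to be collected from your backup jobs and trace definitions.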
Though this list is not exhaustive, I am sure there are a few more items I might have missed. This was a humble shot at the topic, and as seasoned DBAs you may have a little more to include. Do let me know in the comments section; I am sure this can be a great source of reference for others searching on this topic too.
If you are a DBA, once in a while you will end up with tasks that are boring and annoying. Trust me, I come across such scenarios quite often. Here is one such task I came across a few days ago.
A Task, I would like to Automate
Just the other day I was assigned a task where I had to take a CSV file from my network and insert it into a SQL Server hosted in a remote location. You may think of it as a pretty easy task, and I agree: it is very easy. The challenge was not its difficulty; the challenge was the process around it, and my interest. The part which annoyed me the most was that I had to do this every day at 4 PM.
This means that every day I had to be at my desk at 4 PM, take a file from the network, and upload it to the remote SQL Server. What about weekends? What about when I have to step away from my desk at 4 PM? What about the situation when I am busy doing something much more important than this task? As I said, more than the task itself, I was annoyed by the routine associated with it. In simple words, this was an ETL task which needed to be automated, but I could not always depend on my machine. I had to find a solution which was cloud based and ran efficiently.
Skyvia at Rescue
I was sitting miffed in the office when I suddenly remembered that last year I blogged about the tool Skyvia. Here is the blog post: Integrate Your Data with Skyvia – Cloud ETL Solution. I quickly referred to my own blog post and realized I should give Skyvia a try.
Skyvia is a powerful solution for cloud data integration (ETL) and backup. It is a complete online solution and does not require any local software installed except for a web browser. In Skyvia we create integration packages that define the operations, and then we can run them or schedule them for automatic execution. An integration package is a set of data ETL operations that can be stored for future use, executed, modified, or deleted. Skyvia provides several kinds of packages for different integration scenarios: Import, Export, Replication, and Synchronization packages.
How did I do it?
Well, here are a few screenshots of the task I was assigned.
First, I checked whether the target table existed. As the table was already created, I quickly checked whether it contained any data. It did not.
Next, we open the Skyvia web interface. It is pretty simple and lists three options on the left bar. We click Integration there.
In the Integration section, click Create Now under Data Import.
In the data integration screen we will be presented with various options. We will load the CSV file from an FTP server, so we select source as a CSV from FTP and target as SQL Server.
As we are connecting to this SQL Server for the very first time, we create a new connection, which is a pretty straightforward procedure.
Then we configure an FTP connection.
Next we configure the CSV options. Various options can be set here, but in our case the defaults were good enough to move on.
Right after that we select the target table. In our case the target is the actor table.
The next screen presents the mapping, and we review the various mapping options one more time to make sure that all source and target columns map correctly.
When we click finish it will bring up the following screen.
Click on Save and now we are back on the following screen. Over here we can execute our task and see if it works or not. Click on the RUN button on the right side of the screen.
In my case the task ran successfully, showing that it inserted 200 rows. The entire task took 35 seconds, which depends on my network connection to the destination server.
We can execute the same SELECT statement we executed earlier and see whether the table now contains valid data.
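Assuming the target was the actor table mentioned above, that verification can be as simple as a row count; a sketch:

```sql
-- Verify the import: the run above reported 200 rows inserted
SELECT COUNT(*) AS imported_rows FROM dbo.actor;

-- Peek at a few rows to sanity-check the column mapping
SELECT TOP (5) * FROM dbo.actor;
```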
Once we confirm that our task works successfully, we can create a daily schedule.
That’s it! I am done.
Now, every day at the specified time, the task will execute automatically and log its history.
Next Action Items
Team Devart has made Skyvia a feature-rich service. One has to experiment with the various options to fully see the vast capability of this amazing product. Here is the link where you can sign up for Skyvia for FREE. Next I will be trying out Skyvia with Salesforce. Skyvia is an all-in-one cloud solution for various Salesforce data integration scenarios. In addition to the standard Salesforce data loader functionality – data import and data export – it offers powerful data replication and synchronization tools and backup functionality with preview and export of backed up data, data search, viewing data changes between backups, and one-click restore.
Every weekend I spend time learning something new and interesting. My primary focus is always on either the performance of SQL Server or my own productivity. This weekend, I downloaded dbForge Studio for SQL Server. It is a powerful IDE for SQL Server management, administration, development, data reporting, and analysis. I have been using SQL Server Management Studio for most of my tasks, and I like sharing tips and tricks about it. However, there are moments when I wish SSMS provided me more than what it offers. There are moments when I feel I could write a few macros or automated processes to solve some of my daily routines in SSMS. In simple words, I wish I could customize SSMS a bit more to my own liking.
Here are a few things I love about dbForge Studio for SQL Server, which I wish SQL Server Management Studio offered out of the box.
1) SQL Coding Assistance
SSMS has IntelliSense, but I believe it could use quite a few enhancements. For example, I like how dbForge Studio for SQL Server can create whole SQL statements in a few keystrokes and format the entire code to our own preferences. Here is an image which clearly shows how quickly I can expand * to all the columns of the table.
2) Database Diagram
I travel a lot, visit lots of different organizations, and review their databases. Every time I ask an organization to provide their schema, they hardly ever have one at hand. After asking multiple times, the usual exercise is to go to SSMS diagrams and try to print the diagram. If you have ever gone through this exercise, you know it is extremely difficult. SSMS is not good with diagrams and does not print them well at all. Most of the time, I end up once again spending time with the original tables to understand their structure.
With the help of dbForge Studio for SQL Server I am able to generate a schema diagram for any database in just one click. See attached image below where I have displayed part of AdventureWorks database.
3) Schema Compare
I just love this feature. Writing this capability from scratch yourself would be an almost impossible task. I have learned that if I want to compare schemas, I must depend on third-party tools, and I am extremely pleased with dbForge Studio for SQL Server’s schema comparison feature. It not only compares schemas, but also helps synchronize database schemas and deploy scripts targeting different SQL Server editions. Here is the image from the description page of dbForge Studio for SQL Server, which shows how Schema Compare works.
Well, these are just three of my favorite features. In future blog posts I will discuss a few of the efficiency features of dbForge Studio for SQL Server. Meanwhile, I encourage you to download the tool and see whether you agree with my opinion. Trust me, once you start playing with this tool, you will find a few of your own efficiency tricks.
This article shows how to use ApexSQL Data Diff, a SQL data compare tool. You can download it here, install it, and play along.
First, let me say something about this database software.
ApexSQL Data Diff is a SQL Server development tool that is used to compare and analyze data stored in tables, or views, between SQL databases or their backups, and for simplifying data migrations between databases. It can present a detailed view of differences between objects and encrypted objects, and also a row level difference for each object.
When SQL Server data comparison is complete, you can automatically synchronize specified objects between databases, or select a specific row for synchronization. You can isolate the differences that are important with advanced filtering of objects and rows.
The new ApexSQL Data Diff 2014 comes along with SQL Server 2014 support and the new UI style. In the ApexSQL Data Diff Community edition (free), the user can compare and synchronize data in SQL Server Express and Windows Azure SQL databases.
Right after the program is started, a Project management dialog shows up. You can use it to create a new project, or open the previously saved one. If you select New, the New Project dialog opens.
The first option in the Project dialog window is for selecting a source and a destination data source type. You can select from two data source types: Backup and Database. This dialog also offers options to swap the source and destination, and to clone information between them.
If you select Database as the data source type, you need to provide login information and a server name, and select a database from the drop-down list of all available databases on the specified server. If you select Backup as the data source type, you need to provide a path to a backup file.
Besides the regular options, if you click the Show advanced tabs button, three more tabs will appear: Object filter, Object mapping, and Additional scripts.
In the Object filter tab, there are various options to filter objects and views.
The first option is to edit the Comparison key, which uniquely identifies each single row in a table, or a view, for both the source and destination data sources. Click on the arrow in the Comparison key cell of the selected object, the drop-down menu will open, and you can select the Edit custom comparison key option.
A new window will appear with a list of the selectable comparison keys. By clicking the check box on the left side of the Column name header, you can check/uncheck all of the keys, or select only the keys which are significant for the SQL Server data compare.
This option can help in situations where the comparison process reports incomparable objects because there are no unique indexes or primary keys and additional information is required.
You can also select which columns will be included for comparison process. With a click on the arrow in the Columns cell, for the selected object, a drop-down menu will appear.
There, you can select which columns to include in the comparison process. In this example, one column is unchecked, and this automatically updates the count shown in the Columns cell.
You can use the Row filter option to enter conditions for the SQL Server data compare that filter out rows which do not meet them. In the following example, we select the row with the object SalesTerritory, then select the Row filter cell, and click the three dots button. This opens the Row filter dialog for SalesTerritory, in which you can enter any T-SQL condition.
Click the Refresh row count button at the bottom of this dialog to get the Row count, which is the number of rows that match the previously mentioned conditions.
The last option in the Object filter tab is the Filter editor for the listed objects. By clicking the Edit filter button in the top right corner of the object list, a new dialog opens, and you can set custom filter criteria for the tables or views. You can include/exclude some of the objects from the comparison. With the Edit filter option, you can specify custom conditions to exclude the irrelevant tables from the SQL Server data compare.
Object mapping is used for a custom mapping of tables and indexed views with different names and schemas. Tables and indexed views with the same names and schemas are automatically mapped. You can perform custom table/column mapping. There are two cases for performing the custom table/column mapping:
1) To exclude automatically mapped tables/columns from the comparison
2) To include differently named SQL Server tables/columns with the same schema names
Finally, the Additional scripts tab allows you to use scripts before and/or after the synchronization script is run. You can choose not to use script at all, to use an embedded script, or to use a saved script.
At the end, you can save your selections to a project and reuse it. To run a saved project, select it from the Project management dialog and it will load all previously saved settings.
When all options are set, you can click the Compare button. After the comparison is done, compared objects are shown in the Main grid. There are several columns in the Main grid: Schema, Object type, Total rows, and difference information, which contains the Missing, Identical, Different, and Additional record counts. Any of those columns can be used for sorting the results, just by clicking on them.
All compared objects are shown, along with basic information and a number of differences between compared objects. Furthermore, the row level differences are shown in the difference details pane, from which you can choose specific rows for the synchronization.
To display certain objects in the Main grid you can use the inline Filter row option. To enable it, go to the View tab, and select the Filter row option. This option is used for filtering compared objects based on different information, object name, total rows, or schema, by entering appropriate condition.
Working with the results grid can be easier by using some of the options, which are located in the View tab, under the Grid group.
There are buttons for grouping the comparison results. The results can be grouped by an object type (tables, views, procedures, etc.). The Collapse group and the Expand group options allow you to automatically collapse, or expand, all selected groups in the Main grid. The Multi column sort option provides more sorting options and you may select up to four columns by which the results are going to be sorted. The last button for adjusting the way the results are shown is Auto size columns, which is used to auto adjust the width of the columns.
The Difference details pane provides viewing and selecting row level differences for synchronization. It’s located beneath the Main grid. If you want to choose some specific rows for synchronization, you have to select the checkbox for those rows.
One of the options that difference details pane provides for users is to view only the rows with specific difference type, by selecting the tabs at the bottom of difference details pane. Those different types can be: Missing types – when records are in the source, but not in the destination database; Identical types – when records are in both the source and the destination database; Different types – when records are in both the source and the destination database, but are not identical; Additional types – when records are not in the source, but are in the destination database.
You can select desired objects, from the Main grid, or you can select all objects by going to the Home tab, and click the Check all button.
After all desired objects are selected, the synchronization process can be started with a click on the Synchronize button, from the Home tab. The Synchronization wizard will appear with the first step, where the source and destination data source are shown, along with the synchronization direction, which can also be reversed.
In the next step, you will be prompted to select the output options. You may choose to Create a synchronization script, or Synchronize a database.
One of the options is to save the synchronization script. The other option is to open the script in an editor and you may choose to open it with the Internal editor, or to assign the other one.
The final step is the Summary and warnings window, where you can see all actions that are going to be performed, along with the possible problems that might occur during the synchronization. You may group the actions by the execution order, by an object, or by the action type.
Finally, select the Create script button, and after you reviewed it, you may execute it.
In ApexSQL Data Diff, you can create reports based on obtained differences, by selecting the Export option from the Home tab. You can export results in three ways: as an HTML plain report, an HTML report, and a CSV data export.
An HTML plain report has a simple structure without user interaction, except when SQL scripts are included in the report. In this report you can see which data comparison options were checked/unchecked, which report options were selected, and the data comparison results with all needed information. An HTML plain report is preferred for printing.
An HTML report is similar to an HTML plain report, by its look and presented information, but it also has expand/collapse region with data comparison results. In this way you can easily expand, or collapse, information that you want to see. This report is preferred for browsing.
The last of them is the CSV data export, which generates the results as a CSV file for further detailed analysis. CSV allows using the same data across different systems and platforms, where the results can be further analyzed in other programs to inspect differences or to build new reports on the objects.
In this article, I’ll show how to use ApexSQL Diff, a SQL compare tool. You can download it here, install it, and play along while reading this blog post.
But first, let’s say something about this database software.
ApexSQL Diff is a SQL Server tool that is used to analyze and compare object schemas between two SQL databases, and to synchronize particular, or all, differences between them.
It can also be used to repair or roll back changes from backup files, which can be very useful if you made some unwanted changes.
The new ApexSQL Diff 2014 comes along with SQL Server 2014 support and the new user interface style. In the free, Community edition, the user can compare and synchronize SQL Server Express and Windows Azure SQL databases.
Right after the program is started, there is a Project management dialog, where you create a new project, or open the previously saved one. If you select New, the New Project dialog opens.
The first option in this dialog window is for selecting source and destination data source types. You can select data types from a list: Backup, Database, Script Folder, Snapshot, and Source Control. The Project dialog offers an option to revert a source and a destination, and to clone information between them. There is also an option to export data source to a snapshot file, a script folder, or to source control.
When you select Database as the data source type, you need to provide a server name and login information, and select a database from the drop-down list of all available databases on the specified server.
If you select Backup or Snapshot as a data source type, then you need to provide a path to a file.
By choosing the Script folder option, apart from the script folder location, you have an option to Configure mappings, to set a default Collation, and a SQL Server version.
When you select the Source control option as the data source type, the Source control settings are presented in wizard form and will guide you through setting up the source control type, login information, and mapping configuration. There are also options to set the Default collation and the SQL Server version.
Apart from the regular options, if you click on the Show advanced tabs button, three more tabs will appear: Schema mapping, Object filter, and Additional scripts.
In the Schema mapping tab you can specify how owners/schemas are handled during the SQL Server schema compare process, by custom mapping schemas in the source and destination databases. You can also export/import mappings to save/load previously defined mappings. Owners/schemas with the same name are automatically mapped to each other.
To select specific objects for a database comparison, you can use the Object filter option. Here, all object types are shown in a detailed list, so you can choose the ones you want to include in the SQL Server schema compare process.
By choosing each object, it will appear in the first column of the Object filter tab, and clicking on it, a new window will show all objects of that type, which you may choose to include/exclude from the SQL Server schema compare process.
Last but not least, the Additional scripts tab allows you to use scripts before and/or after the synchronization script is run. You can choose not to use a script at all, to use an embedded script, or to use a saved script.
At the end, you can save the selections to a project and reuse them, saving the time of making the selections again. To run a saved project, select it from the Project management dialog and it will load all previously saved settings.
When everything is set up, click the Compare button. The comparison results will be shown in the Main grid. There are several columns in the Main grid: Object type, Source schema, Source object, Destination schema, and Destination object. Any of those columns can be used for sorting the results, just by clicking on them.
In the View menu, there is the Multi column sort option for more advanced sorting. It allows you more sorting options and you may select up to four columns by which the results are going to be sorted.
There are several other buttons in the View menu which can be used to adjust the way the results are shown. You can set the alignment to left or centered. There are buttons for grouping the comparison results: by difference type (not equal, missing, added, or equal), by object type (tables, views, procedures, etc.), or no grouping at all. Two more buttons, Collapse group and Expand group, allow the user to automatically collapse or expand all selected groups in the Result grid. The last button for adjusting the way the results are shown is Auto size columns, which auto adjusts the width of the columns.
When you select an object from the Main grid, the differences between the object scripts can be seen in the Script difference pane. This pane is divided into two sides: the source script is shown on the left, and the destination script on the right. Both scripts scroll together, regardless of which side you’re scrolling, so you can easily see the differences in code between them. Differing lines of the scripts are highlighted, so they stand out from the rest of the code and are easier to find.
You can access several options in the Script difference pane from the context menu. By default, the Enable syntax option is selected. It shows all SQL reserved words in a different color. The View whitespace option shows all formatting symbols used in the code: new lines, tabs, etc. The Line numbers option is used to quickly find a certain line of code. Finally, there is an option to select different font sizes for displaying code in this pane. The default setting is Medium, but it can be changed to Large or Small.
On the far left side of the pane, there is a bar (marked red in the picture) with blue lines that represent positions in the script where the differences are.
If you want to limit the SQL Server schema compare to specific objects in a database, you can use the Object filter feature. Select the wanted SQL object, and an Object filter icon will appear. By clicking the icon, a new dialog opens, and you can set custom filter criteria for that object. You can include/exclude some of the objects from the SQL Server schema compare. With the Filter editor option, you can specify custom conditions to include only the relevant tables:
After you have selected all desired objects, you may start the synchronization process with the click on the Synchronize button. The Synchronization wizard will appear with the first step of synchronization direction, where the source and destination data sources are presented, with an option to reverse the direction of the synchronization.
The next step will show you all dependencies for the selected objects, with an option to include or exclude them from the synchronization.
In the Summary and warnings window you’ll see all actions that are going to be performed. You may group them by the execution order, by an object, or by the action type.
In the final step, you are asked to select the output options. You may choose to Create synchronization script, or to Synchronize now.
If you select the Create synchronization script option, there are options to save the script to a file, where you can specify a folder and a file name, and to open the script in an editor. You can use the internal script editor, which is a default option, or assign another one.
Choosing the Synchronize now option will also give you the option to save the synchronization script to a file, and to create a backup of the database before the synchronization. For creating a backup, you can choose the default directory or select another folder.
After the script is created and reviewed, you may execute it.
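A generated synchronization script typically wraps the schema changes in a single transaction, so that a failure part-way through rolls everything back. The snippet below is only a simplified, hypothetical sketch of the shape such a script takes; the table, column, and index names are invented for illustration:

```sql
-- Abort the whole batch (and roll back) on any runtime error
SET XACT_ABORT ON
BEGIN TRANSACTION

-- Example change: add a column that exists in the source but not the destination
ALTER TABLE dbo.Customers ADD PhoneNumber NVARCHAR(20) NULL

-- Example change: create an index that is missing on the destination
CREATE INDEX IX_Customers_PhoneNumber ON dbo.Customers (PhoneNumber)

COMMIT TRANSACTION
```

Reviewing the script before execution lets you spot changes with side effects, such as table rebuilds, before they run against the destination.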
ApexSQL Diff provides several options for creating reports.
Under the Home tab, there is an Export button, where you will find all options for creating reports. You can export comparison results as an HTML plain report, an HTML report, and an XML structure report.
An HTML plain report is ideal for printing, while an HTML report is designed for electronic browsing, since it contains toggles that expand and collapse groups.
After you select one of these reports, it will be generated and presented.
A few days back, a friend called to ask me if there is any tool which can be used to get a restart notification for SQL Server in their environment. I told him that SQL Server can do it by itself with a little configuration. He was happy and surprised to learn that he need not spend any extra money.
In SQL Server, we can configure stored procedure(s) to run at start-up of SQL Server. This blog gives the steps to achieve it.
There are many situations where this feature can be used. Below are a few:
Logging SQL Server startup timings
Modifying data in some table during startup (e.g., a table in tempdb)
Sending a notification about SQL Server startup
Step 1 – Enable ‘scan for startup procs’
This can be done either using T-SQL or User Interface of Management Studio.
EXEC sys.sp_configure N'Show Advanced Options', N'1'
GO
RECONFIGURE WITH OVERRIDE
GO
EXEC sys.sp_configure N'scan for startup procs', N'1'
GO
RECONFIGURE WITH OVERRIDE
GO
Below is the interface to change the setting. We need to go to “Server” > “Properties” and use the “Advanced” tab. “Scan for Startup Procs” is the parameter under the “Miscellaneous” section, as shown below.
We need to set the value to “True” and click OK.
Step 2 – Create stored procedure
It’s important to note that the procedure is executed after recovery is finished for ALL databases. Here is a sample stored procedure. You can use your own logic in the procedure.
CREATE PROCEDURE SQLStartupProc
AS
BEGIN
	CREATE TABLE ##ThisTableShouldAlwaysExists (AnyColumn INT)
END
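As a more practical sketch for the "logging startup timings" use case mentioned earlier, a startup procedure could insert a row into a log table each time the instance starts. The table and procedure names below are hypothetical, not part of any built-in feature:

```sql
USE master
GO
-- Hypothetical log table: one row per SQL Server startup
CREATE TABLE dbo.StartupLog
(
	StartupTime DATETIME NOT NULL DEFAULT GETDATE()
)
GO
-- Startup procedure must live in master, be owned by dbo, and take no parameters
CREATE PROCEDURE dbo.LogSQLStartup
AS
BEGIN
	INSERT INTO dbo.StartupLog (StartupTime) VALUES (GETDATE())
END
GO
```

Querying dbo.StartupLog afterwards gives a simple history of instance restarts without any external tooling.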
Step 3 – Set Procedure to run at startup
We need to use sp_procoption to mark the procedure to run at startup. Here is the code to let SQL Server know that this is a startup procedure.
sp_procoption 'SQLStartupProc', 'startup', 'true'
This can be used only for procedures in the master database. Otherwise, the following error is raised:
Msg 15398, Level 11, State 1, Procedure sp_procoption, Line 89
Only objects in the master database owned by dbo can have the startup setting changed.
We also need to remember that such a procedure must not have any input/output parameters. Here is the error which would be raised:
Msg 15399, Level 11, State 1, Procedure sp_procoption, Line 107
Could not change startup option because this option is restricted to objects that have no parameters.
Here is the query to find which procedures are marked as startup procedures.
SELECT name FROM sys.objects WHERE OBJECTPROPERTY(OBJECT_ID, 'ExecIsStartup') = 1
Once this is done, I restarted the SQL Server instance, and here is what we would see in the SQL Server ERRORLOG:
Launched startup procedure 'SQLStartupProc'.
This confirms that the stored procedure was executed. You can also notice that this happens after all databases are recovered:
Recovery is complete. This is an informational message only.
No user action is required.
After a few days my friend called me again and asked, “How do I turn this OFF?”
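For completeness, the same sp_procoption procedure can turn the behavior back off. A quick sketch, assuming the startup procedure created earlier; run it in the master database:

```sql
-- Stop the procedure from running at startup
EXEC sp_procoption 'SQLStartupProc', 'startup', 'off'

-- Optionally, disable the server-wide scan for startup procedures as well
EXEC sys.sp_configure N'scan for startup procs', N'0'
RECONFIGURE WITH OVERRIDE
```

Re-running the sys.objects query shown above should then return no rows for that procedure.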
Developer’s life is never easy. DBA’s life is even crazier.
When a developer wakes up in the morning, most of the time they have no idea what challenges they are going to face that day. Of course, most developers know the project and roadmap they are working on. However, they have no clue what coding challenges they are going to face on any given day.
A DBA’s life is even crazier. When DBAs wake up in the morning, they are often just thankful that they were not disturbed during the night by server issues. The very next thing they wish is that they do not face a challenge they can’t solve that day. The problems DBAs face every single day are mostly unpredictable, and they just have to solve them as they come.
The life of a DBA is not always bad, though. There are always ways and methods to overcome various challenges. Let us look at three of these challenges and how a DBA can use various tools to overcome them.
Challenge #1 Synchronize Data Across Server
A very common challenge DBAs receive is that they have to synchronize data across servers. If you try to write that up manually, it may take forever to accomplish the task, and it is very hard to do with plain T-SQL. Thankfully, there are tools like dbForge Studio which can save the day and synchronize data across servers. Read my detailed blog post about the same over here: SQL SERVER – Synchronize Data Exclusively with T-SQL.
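For a single table, the manual T-SQL approach usually comes down to a MERGE statement over a linked server, which hints at why doing this by hand for a whole database is so tedious. The server, database, and table names below are hypothetical:

```sql
-- Sketch: push rows from a local table to the same table on a linked server.
-- Assumes a linked server named RemoteServer and matching table definitions.
MERGE [RemoteServer].SalesDB.dbo.Orders AS target
USING SalesDB.dbo.Orders AS source
	ON target.OrderID = source.OrderID
WHEN MATCHED THEN
	UPDATE SET target.Amount = source.Amount
WHEN NOT MATCHED BY TARGET THEN
	INSERT (OrderID, Amount) VALUES (source.OrderID, source.Amount)
WHEN NOT MATCHED BY SOURCE THEN
	DELETE;
```

Multiply this by every table, foreign-key dependency, and identity column in a database, and the appeal of a dedicated synchronization tool becomes obvious.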
Challenge #2 SQL Report Builder
DBAs are often asked to build reports on the go. It really annoys them, but hardly anyone cares. No matter how busy a DBA is, they are just called upon to build reports at very short notice. I personally like to avoid any task that lands on me accidentally, and building reports can be boring. I would rather spend time on high availability, disaster recovery, and performance tuning than on building reports. I use a third-party tool when I have to work with SQL reports; some of these tools have extended reporting capabilities. The latter group of products includes the SQL report builder built into dbForge Studio for SQL Server. I have blogged about this earlier over here: SQL SERVER – SQL Report Builder in dbForge Studio for SQL Server.
Challenge #3 Work with the OTHER Database
Managers do not understand that MySQL is different from SQL Server, and SQL Server is different from Oracle. For them, everything is the same. In my career, hundreds of times I have faced a situation where I am given a database to manage, or some task to do, while the regular DBA is on vacation or leave. When I try to explain that I do not understand the underlying technology, I am usually told that my manager trusts me and that I can do anything. Honestly, I can’t, but I hardly dare to argue. I fall back on a third-party tool to manage a database when it is not in my comfort zone. For example, I was once given a MySQL performance tuning task (at that time I did not know MySQL very well). To simplify the search for a problem query, I used the MySQL Profiler in dbForge Studio for MySQL. It provides such features as a Query Profiling Mode and Generate Execution Plan. Here is the blog post discussing the same: MySQL – Profiler : A Simple and Convenient Tool for Profiling SQL Queries.
Well, that’s it! There have been many other occasions when I have been saved by a tool. Maybe some other day I will write part 2 of this blog post.
If you are a developer, you must have faced the situations I am going to mention next.
A Developer’s Wish
Have you ever built a database application where you drive your navigation with the help of data?
Have you ever realized that your queries run much faster on your local server, but when you move them to your test/staging server they suddenly run slow?
Have you ever wished that you could sync your local machine with the master database, so you could run your queries in a near real-time scenario and have no surprises when you go live on production?
The fact of the matter is that every developer faces situations where they wish they could work directly on the live data instead of old historical data on their local machine. If an organization cares about the experience of its developers and its product, it will always strive to provide a tool that makes the developers’ lives easy.
If you ask me for a good tool which can help you synchronize data across multiple databases and servers, I will suggest you take a look at DB Change Manager. It is a very powerful tool which can do quite a number of different tasks. Here are a few of your daily chores which you can rely on DB Change Manager to handle:
Schema Archive and Compare
Synchronization and Roll-back
Data Compare and Synchronization
Click, Click and Go!
If you are a developer given the task of building a tool to synchronize data across two different databases, how difficult would it be to create such a tool? Honestly, very difficult, as we need to take care of so many different things besides the data itself, for example schema, configuration, security, and what not. However, when we use Embarcadero DB Change Manager, all these problems are taken care of by the tool.
Once you start the Data Comparison Job, it takes you through a very simple wizard with a few steps. Each step is extremely easy to follow, and most of the time you just have to accept the defaults and click Go.
Once you follow the few quick steps of the wizard, you will land on the final report page, where you will notice a grid with details about your changed data. You can export the differences or synchronize the data across your servers.
I wish I could write more about this entire process, but as I mentioned earlier, this tool is extremely simple to use and follow. Most of the time, I just Click, Click and Go, and I am done!