Developer – 3 Tips Every SQL Expert Needs to Know to Land the Perfect Job (Part 1 of 3)

I am very fortunate to know John Sonmez in person. He is one of those individuals who understands both the minds of young developers and the depth of the software development industry. I am a regular follower of his books, videos and ideas. I have personally implemented many of his ideas in my own life and have seen dramatic improvement. Earlier this month, I asked him to share his views and suggestions with all of us on this blog. He kindly accepted the request and returned with three amazing blog posts and a plethora of giveaways.

Let us read John Sonmez in his own words in this first part of a three-part series.


You’ve probably invested a lot of time learning about SQL and databases.

That’s great. If you are reading this blog, you are probably the kind of person who really cares about your work and has a passion for what you are doing.

You are probably already ahead of 90% of your peers.

But…

What if I told you that there was another way to get even further ahead—a way that most technical professionals ignore?

Well, there is. In fact, what I am about to teach you in this 3-part series of posts are some of the soft skills that I used to increase my income by over 500% (yes, that is 5 times, no typo) and live a happier, much more fulfilling life.

Oh, and if that isn’t enough for you: if you hang around for the last post in this 3-part series, I’ve got something extra special for you. I’m going to be giving away How To Market Yourself as a Software Developer.

Most technical professionals think getting a good job is about showing how technically competent you are.

That is exactly what I used to think, so I focused all my efforts on raising my technical skills and learning how to solve hard problems on a whiteboard.

That was before I discovered that the majority of high-paying jobs are given to people who figure out how to get in through the back door—not those who are the most technically competent.

I know it’s difficult to believe.

I had a hard time believing it myself, but many sources, including this NY Times article, are showing that a majority of hires come from sources other than the traditional submit-a-resume-and-pass-an-interview process.

So, if you want to land that perfect job, you’ve got to be more than just technically competent.

In my new book, “Soft Skills: The Software Developer’s Life Manual,” I’ve dedicated a whole section of the book to improving your career, landing your dream job and increasing your income. In the next few posts, I’ll share with you some of my best secrets from the book. (You can check out the first chapter for free here.)

Tip #1: Start and maintain a highly focused blog

Pinal Dave has done an excellent job of this and he is reaping the benefits. Do you think Pinal Dave would ever have to do an interview for a position at a company hiring a SQL expert? I don’t think so. I think most employers would want to hire him on the spot, just because of his excellent reputation—which he built mostly from his blog.

I, myself, have been given job offers—completely bypassing the interview process altogether—because the interviewer had been reading my blog.

The key is to get started and be consistent.

I once interviewed Pinal Dave for a course I was creating on how to market yourself as a software developer and he told me that he woke up every morning and wrote a blog post before he even brushed his teeth.

His dedication to writing every day has allowed him to earn a great income off of this blog and to reach around 2 million technical professionals who read this blog every month.

Whenever I speak at conferences about marketing yourself and soft skills for technical people, I always ask how many people have a blog. Usually about half of the room will raise their hand. But, guess what happens when I ask how many people have posted on their blog weekly, for at least the last year?

Often out of a room of 200-300 developers, there will be just 1 or 2 hands raised.

Want to set yourself apart?

Want to stand out and get noticed and have employers knocking down your door to hire you?

Start a highly focused blog—that means pick a specific topic, like Pinal did with SQLAuthority—and blog at least once a week.

Anyone can do it and it’s much easier than you think. To find out more about how to create a successful blog, either check out my blogging chapter in Soft Skills or enroll in this free 3-week email course I created that takes you through the process step by step.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Error: Fix: Msg 5133, Level 16, State 1, Line 2 Directory lookup for the file failed with the operating system error 2(The system cannot find the file specified.) – Part 2

Errors are a great starting point for learning. My inbox is flooded with at least 2-3 error messages almost every other day. People keep asking me how to solve them, what to do when they encounter an error, and so on. A long time back I wrote a simple blog post about error message 5133 while creating a database. You can read it here:

SQL SERVER – Error : Fix : Msg 5133, Level 16, State 1, Line 2 Directory lookup for the file failed with the operating system error 2(The system cannot find the file specified.)

One of the readers sent me an email saying that she was simply running the Create Database <DatabaseName> command and still got the above error. In my earlier post I mentioned “There must be some typo or error in filepath”, but in this case no path was given at all. This time I wanted to reproduce the error, and I was intrigued as to why she was getting it with a plain T-SQL Create Database command.

I spent some time researching and spoke to a few friends about this. They informed me that SQL Server has a setting for the default path of database files, which is picked up when nothing is specified. That made complete sense. Based on this information, I was finally able to reproduce the error by following her instructions. Here is the error:

Msg 5133, Level 16, State 1, Line 14
Directory lookup for the file “E:\InvalidPath\SQLAuth.mdf” failed with the operating system error 2(The system cannot find the file specified.).
Msg 1802, Level 16, State 1, Line 14
CREATE DATABASE failed. Some file names listed could not be created. Check related errors.

If you look at the path, it is “E:\InvalidPath\SQLAuth.mdf”. The path is picked from the registry value below:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL12.SQL2014\MSSQLServer

In my case, I have a SQL Server 2014 named instance called SQL2014, which is why we see MSSQL12.SQL2014.

Here is the T-SQL way to get the values.

SELECT SERVERPROPERTY('InstanceDefaultDataPath') AS [Default_Data_path],
       SERVERPROPERTY('InstanceDefaultLogPath') AS [Default_log_path]
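If you would also like to confirm the raw registry values those properties come from, here is a minimal sketch using xp_instance_regread. Note this procedure is undocumented; it simply resolves the instance-specific key (MSSQL12.SQL2014 in my case) for you, so treat it as a read-only diagnostic rather than a supported API.

-- Read the instance-aware registry values behind the default paths
DECLARE @DefaultData NVARCHAR(512), @DefaultLog NVARCHAR(512)
EXEC master..xp_instance_regread N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer', N'DefaultData', @DefaultData OUTPUT
EXEC master..xp_instance_regread N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer', N'DefaultLog', @DefaultLog OUTPUT
SELECT @DefaultData AS [DefaultData], @DefaultLog AS [DefaultLog]
GO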

We can change it via SQL Server Management Studio too: right-click on the server node, go to Properties, and then choose the “Database Settings” tab.

Following is the T-SQL way to achieve the change.

USE [master]
GO
-- Intentionally pointing the defaults at an invalid path, as used to reproduce the error above
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'DefaultData', REG_SZ, N'E:\InvalidPath'
GO
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'DefaultLog', REG_SZ, N'E:\InvalidPath'
GO

NOTE: Once you have changed the value, a restart of the SQL Server service is needed so that SQL Server can pick up the changed values.
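If a service restart is not possible right away, a hedged workaround is to give CREATE DATABASE explicit, valid file paths so the broken default is never consulted. The database name and paths below are placeholders; adjust them for your server.

-- Explicit paths bypass the DefaultData/DefaultLog registry values
CREATE DATABASE SQLAuth
ON PRIMARY (NAME = SQLAuth_Data, FILENAME = 'C:\ValidPath\SQLAuth.mdf')
LOG ON (NAME = SQLAuth_Log, FILENAME = 'C:\ValidPath\SQLAuth_log.ldf')
GO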

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Error: Msg 701, Level 17, State 103. There is insufficient system memory in resource pool to run this query

Talking about and exploring the In-Memory topics inside SQL Server 2014 has been interesting to me. When I wrote the blog post showing that a table variable is not just an in-memory structure, one of my course listeners (SQL Server 2014 Administration New Features) pinged me on Twitter to ask if In-Memory OLTP was really in-memory. Wouldn’t SQL Server want to swap the data out to the paging file when there is memory pressure? I told them the concept behind In-Memory OLTP is that the data always resides in memory, which is the reason for the feature’s name.

The next question that came out of this interaction: what happens when we hit the memory boundary? Will SQL Server error out even if there is memory available in the system, or will it do something else? This was an easy setup to try.

Script Usage

We will create an In-Memory database, restrict it to a limited amount of memory, and add rows to see if there is an error. The basic 4 steps to follow are:

-- Step 1: Create the Resource Pool. Limit to 5% memory.
CREATE RESOURCE POOL InMemory_Pool WITH (MAX_MEMORY_PERCENT = 5);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

-- Step 2: Create the InMemory DB
CREATE DATABASE InMemory
ON PRIMARY(NAME = InMemoryData,
FILENAME = 'd:\data\InMemoryData.mdf', size=100MB),
-- Memory Optimized Data
FILEGROUP [InMem_FG] CONTAINS MEMORY_OPTIMIZED_DATA(
NAME = [InMemory_InMem_dir],
FILENAME = 'd:\data\InMemory_InMem_dir')
LOG ON (name = [InMem_demo_log], Filename='d:\data\InMemory.ldf', size=50MB)
GO

-- Step 3: Bind the resource pool to DB
EXEC sp_xtp_bind_db_resource_pool 'InMemory', 'InMemory_Pool'
GO

-- Step 4: For RG to take effect, make DB Offline and Online
USE MASTER
GO
ALTER DATABASE InMemory SET OFFLINE
GO
ALTER DATABASE InMemory SET ONLINE
GO

After this we will create a wide table and add rows to exhaust the memory for this resource pool.

USE InMemory
GO
-- Step 5: Create a Memory Optimized Table
CREATE TABLE DummyTable_Mem (ID INT NOT NULL,
Name CHAR(8000) NOT NULL
CONSTRAINT ID_Clust_DummyTable_Mem PRIMARY KEY NONCLUSTERED HASH (ID) WITH (BUCKET_COUNT=1000000))
WITH (MEMORY_OPTIMIZED=ON, DURABILITY=SCHEMA_ONLY)
GO

-- Step 6: Add a lot of rows to get the error
SET NOCOUNT ON
DECLARE @counter AS INT = 1
WHILE (@counter <= 1000000)
BEGIN
    INSERT INTO dbo.DummyTable_Mem VALUES (@counter, 'SQLAuthority')
    SET @counter = @counter + 1
END
GO

Step 6 will not complete because of insufficient memory. We will get an error as shown below:

The statement has been terminated.
Msg 701, Level 17, State 103, Line 49
There is insufficient system memory in resource pool 'InMemory_Pool' to run this query.
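Before fixing anything, you can check how close the pool is to its cap. Here is a minimal sketch against the Resource Governor DMV, assuming the InMemory_Pool created in Step 1:

-- How much memory is the pool holding versus its cap?
SELECT name, used_memory_kb, max_memory_kb
FROM sys.dm_resource_governor_resource_pools
WHERE name = 'InMemory_Pool'
GO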

To add more rows to the table:

  • Allocate more memory to the resource pool and reconfigure Resource Governor (a sketch follows this list).
  • Delete some rows from the In-Memory tables in this database to make space for new allocations.
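Here is a sketch of the first option. The 5% cap was chosen above only to force the error quickly, so raising it and reconfiguring lets the inserts continue; pick a percentage that fits your server.

-- Raise the pool's memory cap and make Resource Governor pick it up
ALTER RESOURCE POOL InMemory_Pool WITH (MAX_MEMORY_PERCENT = 50)
ALTER RESOURCE GOVERNOR RECONFIGURE
GO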

Let me help you clean up the objects from this experiment.

-- Clean up
USE MASTER
GO
DROP DATABASE InMemory
GO
DROP RESOURCE POOL InMemory_Pool
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

To learn about more such interesting SQL Server 2014 enhancements, feel free to listen to my Pluralsight course.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #004 – List All Columns and Their Data Type for a View

Earlier this week I wrote a blog post about finding stored procedure parameters and their data types. After that post I received a few emails from my blog readers asking for a similar script for a view. I asked them what they liked about the script, and they said that it gives the base data type as well as the user-defined data type. So I have put in some more effort to write a similar script for views. This is a fine example of how each one of you out there inspires me with great content ideas. Please keep these emails coming my way.

Here is the question: how do you list all columns and their data types for a view in SQL Server?

Usage of script

To use the script below, replace the view name (vEmployee in the sample code) and the schema name (HumanResources in the sample code). Also make sure you are in the same database that contains the view (AdventureWorks2014 in the sample code):

USE AdventureWorks2014
GO
DECLARE @ViewName NVARCHAR(4000)
       ,@SchemaName NVARCHAR(4000)
SELECT @ViewName = N'vEmployee'
      ,@SchemaName = N'HumanResources'

SELECT c.name AS [Name]
      ,CAST(ISNULL(ic.index_column_id, 0) AS BIT) AS [InPrimaryKey]
      ,CAST(ISNULL((SELECT TOP 1 1
                    FROM sys.foreign_key_columns AS colfk
                    WHERE colfk.parent_column_id = c.column_id
                      AND colfk.parent_object_id = c.object_id), 0) AS BIT) AS [IsForeignKey]
      ,u_t.name AS [DataType]
      ,ISNULL(b_t.name, N'') AS [SystemType]
      ,CAST(CASE
                WHEN b_t.name IN (N'nchar', N'nvarchar') AND c.max_length <> -1
                    THEN c.max_length / 2
                ELSE c.max_length
            END AS INT) AS [Length]
      ,CAST(c.precision AS INT) AS [NumericPrecision]
      ,CAST(c.scale AS INT) AS [NumericScale]
      ,c.is_nullable AS [Nullable]
      ,c.is_computed AS [Computed]
      ,ISNULL(s.name, N'') AS [XmlSchemaNamespaceSchema]
      ,ISNULL(xsc.name, N'') AS [XmlSchemaNamespace]
      ,ISNULL((CASE c.is_xml_document WHEN 1 THEN 2 ELSE 1 END), 0) AS [XmlDocumentConstraint]
      ,CAST(c.is_sparse AS BIT) AS [IsSparse]
      ,CAST(c.is_column_set AS BIT) AS [IsColumnSet]
      ,c.column_id AS [ID]
FROM sys.all_views AS v
INNER JOIN sys.all_columns AS c ON c.object_id = v.object_id
LEFT JOIN sys.indexes AS i ON i.object_id = c.object_id
    AND 1 = i.is_primary_key
LEFT JOIN sys.index_columns AS ic ON ic.index_id = i.index_id
    AND ic.column_id = c.column_id
    AND ic.object_id = c.object_id
    AND 0 = ic.is_included_column
LEFT JOIN sys.types AS u_t ON u_t.user_type_id = c.user_type_id
LEFT JOIN sys.types AS b_t ON (b_t.user_type_id = c.system_type_id
        AND b_t.user_type_id = b_t.system_type_id)
    OR ((b_t.system_type_id = c.system_type_id)
        AND (b_t.user_type_id = c.user_type_id)
        AND (b_t.is_user_defined = 0)
        AND (b_t.is_assembly_type = 1))
LEFT JOIN sys.xml_schema_collections AS xsc ON xsc.xml_collection_id = c.xml_collection_id
LEFT JOIN sys.schemas AS s ON s.schema_id = xsc.schema_id
WHERE (v.type = 'V')
    AND (v.name = @ViewName
        AND SCHEMA_NAME(v.schema_id) = @SchemaName)
ORDER BY [ID] ASC

Here is the sample execution. I have highlighted the modification needed to use the script.
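If you are on SQL Server 2012 or later and only need the system types (the script above is what gives you the user-defined type details as well), a shorter hedged sketch using sys.dm_exec_describe_first_result_set can do the job:

-- Let the engine describe the columns of the view's result set
SELECT name, system_type_name, is_nullable
FROM sys.dm_exec_describe_first_result_set(
    N'SELECT * FROM HumanResources.vEmployee', NULL, 0)
ORDER BY column_ordinal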

I hope these scripts help you in your environments. I would love to hear back from you about how they can be enhanced.


Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to Bind Resource Governor for InMemory Enabled Databases?

I have done a number of courses for Pluralsight. Last year, when SQL Server 2014 was released, I planned to do a course on it. The course SQL Server 2014 Administration New Features was a lot of hard work, but it gave me a chance to learn something new and share it. Do let me know if you have ever had a chance to listen to this course. I would love to hear your feedback.

As part of the course, one of the modules covered the basics of the In-Memory capabilities of SQL Server 2014. One of the listeners of the course pinged me on Twitter with an interesting conversation, which inspired me to write this blog post.

Follower: Thanks for the InMemory basics, I had a simple doubt.

Pinal: Yes, please tell me. Let me see if I can answer the same.

Follower: I plan to upgrade my server to SQL 2014.

Pinal: Glad the course is helping you. Is there any doubt?

Follower: Yes, I am planning to use the InMemory capabilities for a couple of databases as part of an application upgrade.

Pinal: Great.

Follower: I want to know whether I can restrict the amount of memory a particular InMemory OLTP database can take.

Pinal: Can you please elaborate a bit?

Follower: Simple, Pinal: I want one DB to take no more than 20% of the memory on my server, and the other InMemory DB to take no more than another 40% of the memory available on the server.

Pinal: Interesting.

Follower: As part of the upgrade I am consolidating, and hence these restrictions would be of great help.

Pinal: Now I get it. It is possible with Resource Governor. Haven’t you tried it?

Follower: I think these are great pointers, I will dig into it. Thanks again.

Pinal: You are welcome. I will write it up as a blog post for sure in the future.

This conversation has been on my mind for a while, and it has taken some time to finally turn it into this blog post. The script one needs to write is simple. Let me walk you through it.

  1. Create the Resource Pool
  2. Create the InMemory OLTP Filegroup enabled DB
  3. Bind the Resource Pool to our database
  4. Check the DB metadata
  5. Make the DB Offline and Online to make the changes take effect
  6. Clean up

As the steps say, let us first create our Resource Pool.

-- Step 1: Create the Resource Pool. Limit to 40% memory.
CREATE RESOURCE POOL InMemory_Pool WITH (MAX_MEMORY_PERCENT = 40);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

Next is to create the InMemory DB. This is the same one used in the previous blog post – Beginning In-Memory OLTP with Sample Example.

-- Step 2: Create the InMemory DB
CREATE DATABASE InMemory
ON PRIMARY(NAME = InMemoryData,
FILENAME = 'd:\data\InMemoryData.mdf', size=100MB),
-- Memory Optimized Data
FILEGROUP [InMem_FG] CONTAINS MEMORY_OPTIMIZED_DATA(
NAME = [InMemory_InMem_dir],
FILENAME = 'd:\data\InMemory_InMem_dir')
LOG ON (name = [InMem_demo_log], Filename='d:\data\InMemory.ldf', size=50MB)
GO

The next step is where the magic begins. We need to bind the DB and the resource pool. This can be achieved using the next command:

-- Step 3: Bind the resource pool to DB
EXEC sp_xtp_bind_db_resource_pool 'InMemory', 'InMemory_Pool'
GO

The success of this step can be confirmed by this message:

A binding has been created. Take database ‘InMemory’ offline and then bring it back online to begin using resource pool ‘InMemory_Pool’

The next logical step is to check the metadata to confirm that the mapping has taken place. Use the sys.databases catalog view for this.

-- Step 4: Check the Database metadata
SELECT dbs.database_id, dbs.name, dbs.resource_pool_id
FROM sys.databases dbs
WHERE name LIKE 'InMemory'
GO
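To see the pool name rather than just its id, a small hedged extension joins sys.databases to the Resource Governor DMV:

-- Resolve resource_pool_id to the pool's name
SELECT dbs.name AS database_name, rp.name AS pool_name
FROM sys.databases dbs
INNER JOIN sys.dm_resource_governor_resource_pools rp
    ON rp.pool_id = dbs.resource_pool_id
WHERE dbs.name = 'InMemory'
GO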

Just as the Resource Governor configuration needs to be reconfigured, we need to do something similar for the database to make these changes take effect: take the DB offline and bring it back online.

-- Step 5: For RG to take effect, make DB Offline and Online
USE MASTER
GO
ALTER DATABASE InMemory SET OFFLINE
GO
ALTER DATABASE InMemory SET ONLINE
GO

That is it. We are good now. Our InMemory DB will not take more than 40% of the memory allocated to SQL Server. Though this was a simple concept, I thought it was worth a share. If you would like to clean up this experiment, please use the script below.

-- Clean up
USE MASTER
GO
DROP DATABASE InMemory
GO
DROP RESOURCE POOL InMemory_Pool
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
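If you would rather keep the database and only remove the Resource Governor binding, a hedged alternative to the full cleanup is sp_xtp_unbind_db_resource_pool, followed by the same offline/online cycle:

-- Remove just the binding; the DB falls back to the default pool
EXEC sp_xtp_unbind_db_resource_pool 'InMemory'
GO
ALTER DATABASE InMemory SET OFFLINE
GO
ALTER DATABASE InMemory SET ONLINE
GO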

To learn about more such interesting SQL Server 2014 enhancements, feel free to listen to my Pluralsight course.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Who is Consuming my TempDB Now?

Of late, my love for tempdb and writing on tempdb topics has been a great learning experience. The more I work with tempdb, the more fascinated I am. Tempdb is used by a number of operations inside SQL Server; let me list some of them here:

  • Temporary user objects like temp tables, table variables
  • Cursors
  • Internal worktables for spool and sorting
  • Row Versioning for snapshot isolation
  • Online Index rebuild operations
  • MARS (Multiple Active Result Sets)
  • AFTER Triggers and more

These are some of the ways in which the tempdb on our servers gets used. When I was talking about this with one of my DBA friends, he asked some interesting questions. He said the previous posts helped him understand how to read temp tables and how to see table variable data, but now his requirement was different: he wanted a script that would tell him who was consuming tempdb resources. Since tempdb is used by so much inside SQL Server, it is important that we know how to get this information from DMVs. There are a number of DMVs we can use:

  • sys.dm_db_file_space_usage – returns space usage information for each file in tempdb (see the quick sketch after this list)
  • sys.dm_db_session_space_usage – returns the number of pages allocated and deallocated by each session
  • sys.dm_db_task_space_usage – returns page allocation and deallocation activity by task
  • We can join these DMVs with various other DMVs, like sys.dm_exec_sessions and sys.dm_exec_requests, and get to the actual T-SQL statement and plan responsible for these allocations.
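As a quick first check before drilling down to sessions, here is a minimal sketch against sys.dm_db_file_space_usage showing which kind of consumer is filling tempdb right now (page counts are multiplied by 8 since pages are 8 KB):

-- Break down tempdb space by consumer type, in KB
SELECT SUM(user_object_reserved_page_count) * 8 AS user_objects_kb,
       SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb,
       SUM(version_store_reserved_page_count) * 8 AS version_store_kb,
       SUM(unallocated_extent_page_count) * 8 AS free_space_kb
FROM tempdb.sys.dm_db_file_space_usage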

Here is a simple script that will outline the sessions that are currently using tempdb.

SELECT st.dbid AS QueryExecutionContextDBID,
       DB_NAME(st.dbid) AS QueryExecContextDBNAME,
       st.objectid AS ModuleObjectId,
       SUBSTRING(st.text,
                 dmv_er.statement_start_offset / 2 + 1,
                 (CASE WHEN dmv_er.statement_end_offset = -1
                       THEN LEN(CONVERT(NVARCHAR(MAX), st.text)) * 2
                       ELSE dmv_er.statement_end_offset
                  END - dmv_er.statement_start_offset) / 2) AS Query_Text,
       dmv_tsu.session_id,
       dmv_tsu.request_id,
       dmv_tsu.exec_context_id,
       (dmv_tsu.user_objects_alloc_page_count
        - dmv_tsu.user_objects_dealloc_page_count) AS OutStanding_user_objects_page_counts,
       (dmv_tsu.internal_objects_alloc_page_count
        - dmv_tsu.internal_objects_dealloc_page_count) AS OutStanding_internal_objects_page_counts,
       dmv_er.start_time,
       dmv_er.command,
       dmv_er.open_transaction_count,
       dmv_er.percent_complete,
       dmv_er.estimated_completion_time,
       dmv_er.cpu_time,
       dmv_er.total_elapsed_time,
       dmv_er.reads,
       dmv_er.writes,
       dmv_er.logical_reads,
       dmv_er.granted_query_memory,
       dmv_es.host_name,
       dmv_es.login_name,
       dmv_es.program_name
FROM sys.dm_db_task_space_usage dmv_tsu
INNER JOIN sys.dm_exec_requests dmv_er
    ON (dmv_tsu.session_id = dmv_er.session_id AND dmv_tsu.request_id = dmv_er.request_id)
INNER JOIN sys.dm_exec_sessions dmv_es
    ON (dmv_tsu.session_id = dmv_es.session_id)
CROSS APPLY sys.dm_exec_sql_text(dmv_er.sql_handle) st
WHERE (dmv_tsu.internal_objects_alloc_page_count + dmv_tsu.user_objects_alloc_page_count) > 0
ORDER BY (dmv_tsu.user_objects_alloc_page_count - dmv_tsu.user_objects_dealloc_page_count)
       + (dmv_tsu.internal_objects_alloc_page_count - dmv_tsu.internal_objects_dealloc_page_count) DESC

Have you ever had such requirements to monitor and troubleshoot tempdb in your environment? What have you been using to monitor your tempdb usage? What is the typical output in your environment? Do let me know, so we can learn together.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – The DBA goes to MARS – Maintenance, Architecture, Research and Support – Notes from the Field #063

[Note from Pinal]: This is the 63rd episode of the Notes from the Field series. Storytelling is an art very few have mastered in their life. When I requested Stuart to share his ideas about his famous blog post about MARS, I really had not expected the response I received in this episode of Notes from the Field. It is very important for any DBA to be proactive in their operations; however, are we proactive enough? Well, you need to read this entire episode of Notes from the Field to grasp the concept.

In this episode of the Notes from the Field series, database expert Stuart Ainsworth explains MARS in a very captivating story.


Database administration can be a complex job, particularly in large-scale environments; DBAs are responsible for a variety of tasks, and there’s a wealth of knowledge on how to administer and support SQL Server. For this post, however, I wanted to focus not on a specific code-related or configuration tip, but rather on a broader perspective of work in general. As a manager and a consultant, I’ve observed that most SQL Server shops have a common problem: we focus too much on reacting to problems and not enough on preventing them.

In the past, I’ve blogged about what I call the MARS paradigm: Maintenance, Architecture, Research and Support. Most system administration tasks can be broadly categorized into one of these four areas, and each of the four can be conceptually defined in terms of Actions (Proactive/Reactive) and Results (Delayed/Immediate).

Think about any given day, and how much time you spend working in each of the four areas; which ones are given priority? If you’re like most shops, you probably spend the most time working support cases (“my query is slow”, or “can you show me where to find this information in the database?”) instead of focusing on maintenance or architecture. A healthy IT shop will find ways to balance the need for immediate reaction with the need for proactive, long-term results; however, the question that needs to be answered is: why do we react, and how do we become more proactive?

Why Database Administration Is Usually Reactive:

In part, the reactive nature of most IT departments is a cultural thing. Fighting fires can be fun; people who solve problems are usually lauded as heroes, while people who prevent problems often go unnoticed. Think about it this way: if a database is corrupted and a restore is required, who gets the credit for saving the day? The DBA who did the restore, or the DBA who set up the backup plan years ago (assuming they’re not the same DBA)? Administrators who are quick to act are viewed as responsive, even though their quick fixes usually rest on a foundation that was established invisibly some time ago.

Furthermore, most database administrators are thorough in their work; support issues are usually an opportunity for them to dig into a problem and find the best solution.  Maintenance issues, on the other hand, are addressing problems that don’t yet exist, and therefore don’t yet need a solution.  However, constantly providing support pulls resources away from building a better foundation.  If a DBA is spending time constantly fighting fires, they’re not spending time on monitoring the environment to avoid future problems.   So how do you move from a reactive environment to a proactive environment?

Moving From a Reactive Approach to a Proactive Approach:

First, track your work efforts for some period of time (30 days).  Find a way to categorize your work into one of the four broad areas defined above:

  • Maintenance: efforts needed to maintain system performance and stability
  • Architecture: efforts to add new features, functionality, or capability
  • Research: efforts to learn “something new”
  • Support: efforts to sustain the existing functionality or customer requests

Ideally, your work is being tracked in some form of service desk software, but if not, then I’d suggest you start with something simple (like a whiteboard with sticky notes).

Second, figure out a metric for work effort; it could be time spent on an issue, or the number of issues, or some other measure of complexity.  The goal is to have a reasonable method of comparison between categories.  Based on your collection efforts, you should be able to figure out where you’re spending most of your time; if more than 50% of your time is being spent in Support or Research, it’s time to consider becoming more proactive.

Becoming more proactive can be a never-ending journey, but I have a few suggestions on where to start:

  1. Prioritize accordingly. Not every support case deserves to be treated as your top priority; make sure that maintenance and architecture issues are included in your escalation plans.
  2. Set it, but don’t forget it. SQL Server offers a lot of tools for automating maintenance; make sure that you allocate time to review successful executions as well as failures.  Understanding when a system is performing outside of spec is nearly as critical as being notified when it fails.
  3. Communicate with interested parties. Let your manager and your peers know about your maintenance and architecture efforts; the best metric would be a reduction in support calls and cases.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)