Developer’s Life – Disaster Lessons – Notes from the Field #039

[Note from Pinal]: This is the 39th episode of the Notes from the Field series. What is the best solution you have when you encounter a disaster in your organization? Many of you would answer that in this scenario you would have a standby machine or an alternative that you would plug in. Now let me ask a second question – what would you do if you, as an individual, faced a disaster?

In this episode of the Notes from the Field series, database expert Mike Walsh explains a very crucial issue we face in our careers – one that is not technical but relates more to human nature. Read on; this may be one of the best blog posts you read in recent times.


Howdy! When it was my turn to share the Notes from the Field last time, I took a departure from my normal technical content to talk about Attitude and Communication (http://blog.sqlauthority.com/2014/05/08/developers-life-attitude-and-communication-they-can-cause-problems-notes-from-the-field-027/).

Pinal said it was a popular topic so I hope he won’t mind if I stick with Professional Development for another of my turns at sharing some information here. Like I said last time, the “soft skills” of the IT world are often just as important – sometimes more important – than the technical skills. As a consultant with Linchpin People – I see so many situations where the professional skills I’ve gained and use are more valuable to clients than knowing the best way to tune a query.

Today I want to continue talking about professional development and tell you about the way I almost got myself hit by a train – and why that matters in our day jobs. Sometimes we can learn a lot from disasters. Whether we caused them or someone else did. If you are interested in learning about some of my observations in these lessons you can see more where I talk about lessons from disasters on my blog.

For now, though, onto how I almost got my vehicle hit by a train…

The Train Crash That Almost Was….

My family and I own a little schoolhouse building about a 10 mile drive away from our house. We use it as a free resource for families in the area that homeschool their children, so they can have some class space. I go up there a lot to check in on the property, to take care of the trash and to do work on the building. On the way there, there is a very small stop-sign-controlled railroad crossing. There are only two small freight trains a day passing there – actually the same train, making a journey south and then back north. That’s it. This road is a small rural road; there is barely ever a second car driving in the neighborhood when I am there. The stop sign is pretty much there only for the train crossing.

When we first bought the building, I was up there a lot doing renovations on the property. Being familiar with the area, I am also familiar with the train schedule and know the tracks are normally free of trains. So I developed a bad habit. You see, I’d approach the stop sign and slow down as I rolled through it. Sometimes I’d do a quick look and come to an “almost” stop there but keep on going. I let my impatience and complacency take over, because most of the time I was going there long after the train was done for the day, or in between its runs. This habit became pretty well established after a couple of years of driving the route, the behavior reinforced a bit by the success ratio. I saw others from the neighborhood doing it as well whenever I happened to be there at the same time as another car.

Well, you already know where this ends up from the title and backstory. A few months ago I came to that little crossing, and I started the normal routine. I’d pretty much stopped looking in some respects because of the pattern I’d gotten into. For some reason, this time I looked, heard and saw the train slowly approaching, slammed on my brakes and stopped. It was an abrupt stop, and it was close. I probably would have made it okay, but once I started breathing again I sat there thinking about lessons for IT professionals while the cars loaded with sand and propane slowly labored down the tracks…

Here are Those Lessons…

  • It’s easy to get stuck in a routine – That isn’t always bad. Except when it’s a bad routine. Momentum and inertia are powerful. Once you have a habit and a routine developed, it’s really hard to break. Make sure you are setting the right routines and habits TODAY. What almost-dangerous things are you doing today? How are you almost messing up your production environment today? Stop doing that.
  • Be Deliberate – (Even when you are the only one) – Like I said – a lot of people roll through that stop sign. Perhaps the neighbors or other drivers think “why is he fully stopping and looking… The train only comes two times a day!” – they can think that all they want. Through deliberate actions and forcing myself to pay attention, I will avoid that oops again. Slow down. Take a deep breath. Be Deliberate in your job. Pay attention to the small stuff and go out of your way to be careful. It will save you later.
  • Be Observant – Keep your eyes open. By looking around, observing the situation and understanding what your servers, databases, users and vendors are doing – you’ll notice when something is out of place. But if you don’t know what is normal, if you don’t look to make sure nothing has changed – that train will come and get you. Where can you be more observant? What warning signs are you ignoring in your environment today?

In the IT world, trains are everywhere. Projects move fast. Decisions happen fast. Problems turn from a warning sign into a disaster quickly. If you get stuck in a complacent pattern of “Everything is okay, it always has been and always will be” – that’s when you are most likely to end up in a bad situation. Don’t let yourself get complacent; don’t let your team get complacent. That will lead to being proactive. And a proactive environment spends less money on consultants for troubleshooting problems you should have seen ahead of time. You can spend your money and IT budget on improving things for your customers.

If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL Authority News – Play by Play with Pinal Dave – A Birthday Gift

Today is my birthday.

Personal Note

When I was young, I was always looking forward to my birthday as on this day, I used to get gifts from everybody.

Now that I am getting older, on each of my birthdays I have almost the same feeling, but the direction is different. Now on each of my birthdays, I feel like giving gifts to everybody. I have received lots of support, love and respect from everybody, and now I must return it. Well, on this birthday, I have a very unique gift for everybody – my latest course on SQL Server.

How I Tune Performance

I often get questions about how I work on a normal day, and especially how I work when a performance tuning project is assigned to me. Lots of people have expressed their desire to see me explain and demonstrate my own method of solving performance problems when facing a real-world problem. It is a pretty difficult task, as in the real world nothing goes as planned, and usually planned demonstrations have no place there.

The real world demands real solutions, and in a timely fashion. If consultants go to industry and do not demonstrate their capabilities in the very first few minutes, it does not matter how famous they are – the door is shown to them eventually. It is true, and in my early career I faced it quite commonly. The trick I have learned is to be honest from the start and to request absolutely transparent communication from the organization I am consulting for.

Play by Play

Play by Play is a very unique setup. It is not scripted, and it is a step-by-step course. It is like a reality show – a very real encounter with a problem and a real problem-solving approach. I had a great time doing this course. Geoffrey Grosenbach (VP of Pluralsight) sits down with me to see what a SQL Server admin does in the real world. This Play by Play focuses on SQL Server performance tuning, and I go over optimizing queries and fine-tuning the server.

The table of contents of this course is very simple.

Introduction

In the introduction, I explain my basic strategies for when I am approached by a customer for performance tuning.

Basic Information Gathering

In this module I explain how I gather various pieces of information for a performance tuning project. It is very crucial for a consultant to demonstrate problem-solving capability to the customer early on. To resolve even a small problem that gives a big positive impact on performance, a consultant has to gather proper information from the start. In this module, I demonstrate how one can collect all the important performance tuning metrics.

Removing Performance Bottleneck

In this module, I build upon the statistics collected in the previous module. I analyze various performance tuning measures and immediately start implementing various tweaks which will start improving the performance of my server. This is a very effective method, and it gives an immediate return on effort.

Index Optimization

Indexes are considered a silver bullet for performance tuning. However, that is not always true – there are plenty of examples where queries even perform worse after indexes are implemented. The key is to understand a few of the basic properties of indexes and implement the right things at the right time. In this module, I describe in detail how to do index optimization and what is right and wrong with indexes. If you are a DBA or developer, and your application is running slow, this is a must-attend module for you. I have some really interesting stories to tell as well.

Optimize Query with Rewrite

Every problem has more than one solution. In this module we will see another very famous, but hard-to-master, performance tuning skill – query rewriting. There are a few dos and don’ts for any query rewrite. I take a very simple example and demonstrate how a rewrite can improve the performance of a query many times over. I also share some funny real-world stories in this module.

This course is hosted at Pluralsight. You will need a valid Pluralsight login to watch the Play by Play: Pinal Dave course. You can also sign up for a FREE trial of Pluralsight to watch this course.

As today is my birthday, I will give a free code to 10 randomly selected people who express their desire to take this course. Please leave a comment, and I will send you a free code to watch this course for free.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Top Object and Batch Execution Statistics Reports

The period from June till mid-July has been full of sports fever. First it was the Wimbledon tennis, and then soccer fever was all over. There is a huge number of fans, and it is great to see the level at which people sometimes worship these sports. Being an Indian, I cannot forget to mention the India tour of England in the later part of July. Following these sports as the events unfold towards the finals, there are a number of ways the statisticians can slice and dice the numbers. Taking a cue from soccer, there is a team’s performance against another team, and then there is an individual member’s performance against a particular opponent. Such statistics give us a fair idea of how teams have fared against each other in the past or the recent past – head-to-head stats during the World Cup and during other neutral-venue games.

All these statistics are just pointers. In reality, they don’t reflect the calibre of the current team, because the individuals who performed in each of those games are totally different (a typical example being the Brazil vs. Germany semi-final match in FIFA 2014). So at times these numbers are misleading, and it is worth investigating to get the next level of information.

Similar to these statistics, SQL Server Management Studio is also equipped with a number of reports, like a) the Object Execution Statistics report and b) the Batch Execution Statistics report. Continuing the example, the team scorecard is like the Batch Execution Statistics and the individual stats are like the Object Execution Statistics. The analogy can be taken only this far; trust me, there is no correlation between SQL Server functioning and playing sports – it is like how I think about diet all the time, except while I am eating.

Performance – Batch Execution Statistics

Let us view the first report which can be invoked from Server Node -> Reports -> Standard Reports -> Performance – Batch Execution Statistics.

Most of the values that are displayed in this report come from the DMVs sys.dm_exec_query_stats and sys.dm_exec_sql_text(sql_handle).

This report contains 3 distinctive sections, as outlined below.

 

Section 1: This is a bar graph representation of Average CPU Time, Average Logical Reads and Average Logical Writes for individual batches. The batch numbers are indicative, and the details of each batch are available in Section 3 (detailed below).

Section 2: This is a pie chart of all the batches by Total CPU Time (%) and Total Logical IO (%). This graphical representation tells us which batch consumed the highest CPU and IO since the server started, provided the plan is available in the cache.

Section 3: This is the section where we can find the SQL statements associated with each of the batch numbers. It also gives us the Average CPU, Average Logical Reads and Average Logical Writes in the system for the given batch, with object details. Expanding the rows, we also get the # Executions and # Plans Generated for each of the queries.
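As a rough sketch (not the exact query SSMS issues for this report), the per-batch averages shown in these sections can be approximated from the same DMVs by aggregating cached statement statistics up to the batch level, grouping on sql_handle, which identifies a batch:

```sql
-- Sketch: aggregate cached statement stats to the batch level,
-- then attach the batch text. total_worker_time is in microseconds.
SELECT  b.executions,
        b.avg_cpu_us,
        b.avg_logical_reads,
        b.avg_logical_writes,
        st.[text] AS batch_text
FROM (
    SELECT  qs.sql_handle,
            SUM(qs.execution_count)                                AS executions,
            SUM(qs.total_worker_time)   / SUM(qs.execution_count)  AS avg_cpu_us,
            SUM(qs.total_logical_reads) / SUM(qs.execution_count)  AS avg_logical_reads,
            SUM(qs.total_logical_writes)/ SUM(qs.execution_count)  AS avg_logical_writes
    FROM    sys.dm_exec_query_stats qs
    GROUP BY qs.sql_handle
) AS b
CROSS APPLY sys.dm_exec_sql_text(b.sql_handle) AS st
ORDER BY b.avg_cpu_us DESC;
```

Like the report itself, this only sees plans currently in the cache.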

Performance – Object Execution Statistics

The second report worth a look is the Object Execution Statistics report. It is similar to the previous one, but turned on its head to slice by SQL Server objects.

The report has 3 areas, as above. Section 1 gives Average CPU and Average IO bar charts for specific objects. Section 2 is a graphical representation of Total CPU by object and Total Logical IO by object.

The final section details the various objects in detail with the Avg. CPU, IO and other details which are self-explanatory.

At a high level, both reports are based on queries over two DMVs (sys.dm_exec_query_stats and sys.dm_exec_sql_text), building values from calculations over their columns:

SELECT *
FROM sys.dm_exec_query_stats s1
CROSS APPLY sys.dm_exec_sql_text(s1.sql_handle) AS s2
WHERE s2.objectid IS NOT NULL
  AND DB_NAME(s2.dbid) IS NOT NULL
ORDER BY s1.sql_handle;

This is one of the simplest forms of report, and in future blogs we will look at more complex reports.

I truly hope these reports can give DBAs and developers a hint about possible performance tuning areas. As a closing point, I must emphasize that all the above reports pick up data from the plan cache. If a particular query consumed a lot of resources earlier, but its plan is no longer in the cache, none of the above reports will show that bad query.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Developer – Cross-Platform: Fact or Fiction?

This is a guest blog post by Jeff McVeigh. Jeff McVeigh is the general manager of Performance Client and Visual Computing within Intel’s Developer Products Division. His team is responsible for the development and delivery of leading software products for performance-centric application developers spanning the Android*, Windows*, and OS X* operating systems. During his 17-year career at Intel, Jeff has held various technical and management positions in the fields of media, graphics, and validation. He also served as the technical assistant to Intel’s CTO. He holds 20 patents and a Ph.D. in electrical and computer engineering from Carnegie Mellon University.


It’s not a homogenous world. We all know it. I have a Windows* desktop, a MacBook Air*, an Android phone, and my kids are 100% Apple. We used to have 2.5 kids, now we have 2.5 devices. And we all agree that diversity is great, unless you’re a developer trying to prioritize the limited hours in the day. Then it’s a series of trade-offs. Do we become brand loyalists for Google or Apple or Microsoft? Do we specialize on phones and tablets or still consider the 300M+ PC shipments a year when we make our decisions on where to spend our time and resources?

We weigh the platform options, monetization opportunities, APIs, and distribution models. Too often, I see developers choose one platform, or write to the lowest common denominator, which limits their reach and market success. But who wants to be “me too”? Cross-platform coding is possible in some environments, for some applications, for some level of innovation—but it’s not all-inclusive, yet.

There are some tricks of the trade to develop cross-platform, including using languages and environments that “run everywhere.” HTML5 is today’s answer for web-enabled platforms. However, it’s not a panacea, especially if your app requires the ultimate performance or native UI look and feel. There are other cross-platform frameworks that address the presentation layer of your application. But for those apps that have a preponderance of native code (e.g., highly-tuned C/C++ loops), there aren’t tons of solutions today to help with code reuse across these platforms using consistent tools and libraries.

As we move forward with interim solutions, they’ll improve and become more robust, based, in no small part, on our input.

What’s your answer to the cross-platform challenge? Are you fully invested in HTML5 now? What are your barriers? What’s your vision to navigate the cross-platform landscape? 

Here is the link where you can head next and learn more about how to answer the questions I have asked: https://software.intel.com/en-us

Republished with permission from here.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Top Queries by CPU and IO

Let me start this blog post with a personal story.

Personal Story – Dad and I

My fascination with computers started way back when I was about to get into high school. My father once took me to his office (it was for some family day, if I remember correctly), and it was fun to watch the PC. Interestingly enough, it seems it was the only PC in that office – a CRT monitor printing green text, typical of those days. I am not sure how many people even had a chance to work on those PCs. It is a lost era for this generation.

Even at that time, the most discussed part of these computers was the processor – if I remember correctly, a 32-bit processor (pre-Pentium era) – along with the hard disks and the 3½-inch floppy drives. It was an era when having a few MBs of data was a big deal.

Fast forward to today, and all these stories seem like great material for my daughter’s bedtime stories. Moore’s Law has proved itself for more than 4 decades and still amuses us. The processors on our watches and handheld devices these days are more powerful (by a factor of 1000x at least) than what we used to work with on a PC 15-20 years back. The days are surely changing, and so should we! I am not sure what the innovations and technology will be when my daughter grows up.

Back to SQL Server

In today’s context, the advancements in technology have not stopped us from troubleshooting these same parameters. Whenever I get involved in a performance tuning exercise, the first questions I ask are: “What is the CPU utilization?”, “How is memory consumption?”, “How are disk activity and disk queue length?” and “How is the network doing?”

So for today’s blog post, we will concentrate on 4 different reports:

  1. Top Queries by Average CPU Time
  2. Top Queries by Total CPU Time
  3. Top Queries by Average IO Time
  4. Top Queries by Total IO Time

These are the standard reports from the Server node. Go to Server Node -> Right Click -> Reports -> Standard Reports and you will find these in SQL Server Management Studio.

Top Queries by Average CPU Time

When it comes to CPU, Perfmon is still my primary tool for tracking down fundamental CPU usage, and I feel it should remain so for you. However, from time to time we need to track down which process is using a physical CPU. When DBAs ask me why SQL Server is using all the CPU, the first question I ask them is: are you sure SQL Server is the process utilizing the CPU on the server?

For the record, I have typically used the kernel debugger to determine which process is using exactly which CPU. For a primer, take a look at how to use the XPerf tool on the MSDN Blogs.

Though these tools are powerful, there is a considerable learning curve for a novice DBA. So let me come back to my favorite tool of choice – SQL Server Management Studio. Let us start by looking at the Top Queries by Average CPU Time report. The output has two sections: a) a graph section and b) a details section.

The first, graph section is a color-coded chart of the top queries, sorted in descending order by Average CPU Time and Total CPU Time. For me the next section has more interesting details to analyze. Before I jump there, there is an important note at the top of the report that is worth a look. It reads:

Note: This report identifies the queries currently residing in the plan cache that have consumed the most total CPU time over the course of all their executions.  This data is aggregated over the lifetime of the plan in the cache and is available only for plans currently in the cache.

This means the values will be reset if SQL Server has been restarted, because the cache gets flushed and emptied. On an active transactional system that is rarely the case, and I am sure you will know where to start your tuning exercise when SQL Server is utilizing higher CPU.

As I mentioned, the second section holds the more interesting values. As you can see, the top 10 CPU-consuming queries are listed, and we can start investigating the resource utilization of each of these queries and how we can tune them. The labels of the colors (1, 2, 3, etc.) correspond to the “Query No.” column in the table. It is quite possible that the order in the second graph is not the same as in the first.

In the above picture we can see that the most expensive query took close to 1.774 seconds in total, and with 2 executions it averaged close to 0.887 seconds per execution. The query in question is also highlighted under Query Text.
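A rough equivalent of this report can be sketched from the same DMVs the later queries use; this is an approximation, not the exact statement SSMS runs:

```sql
-- Sketch: rank cached plans by average CPU per execution.
-- total_worker_time is reported in microseconds.
SELECT TOP (10)
    qs.execution_count,
    qs.total_worker_time / 1000.0                          AS total_cpu_ms,
    qs.total_worker_time / (qs.execution_count * 1000.0)   AS avg_cpu_ms,
    st.[text]
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE qs.total_worker_time > 0
ORDER BY avg_cpu_ms DESC;
```

The only real difference from the Total CPU variant below is the ORDER BY expression.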

Top Queries by Total CPU Time

The output here is almost identical to what we discussed before; the only difference is that the queries are now sorted by total CPU time. A rough version of the DMV query used for this report would be:

SELECT TOP (10)
    qs.creation_time,
    qs.last_execution_time,
    (qs.total_worker_time + 0.0) / 1000 AS total_worker_time,
    (qs.total_worker_time + 0.0) / (qs.execution_count * 1000) AS [AvgCPUTime],
    qs.execution_count
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE qs.total_worker_time > 0
ORDER BY qs.total_worker_time DESC;

Before I sign off on CPU utilization: I have always relied on this report for CPU workloads, but as a primer, there are obvious candidates I always look out for when CPU is high, like:

  1. Query execution and parallelism
  2. Compiles and recompiles on the server
  3. Any tracing enabled in the system – including Activity Monitor running somewhere, or Profiler (we can use sys.traces to check which traces are configured to run)
  4. Any anti-virus running on the system
  5. Invalid drivers for the SAN, BIOS or other components

These are some great candidates to eliminate before going into tuning queries on a CPU-deprived system.

Performance – Top Queries by Average IO

As with the CPU output, the output here also has two sections. The difference is that it is sorted by average IO utilization.

The meat of information is available in section 2 as usual.

Apart from the CPU values, this section also shows Logical Reads and Logical Writes. As you can see, Total Logical IO = Logical Reads + Logical Writes.

The query to get this information would be:

SELECT TOP (10)
    qs.creation_time,
    qs.last_execution_time,
    qs.total_logical_reads AS [LogicalReads],
    qs.total_logical_writes AS [LogicalWrites],
    qs.execution_count,
    qs.total_logical_reads + qs.total_logical_writes AS [AggIO],
    (qs.total_logical_reads + qs.total_logical_writes) / (qs.execution_count + 0.0) AS [AvgIO],
    st.[text],
    DB_NAME(st.dbid) AS database_name,
    st.objectid AS OBJECT_ID
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE qs.total_logical_reads + qs.total_logical_writes > 0
  AND qs.sql_handle IS NOT NULL
ORDER BY [AggIO] DESC;

It is important to understand that the IO for a query covers both logical and physical activity. Since logical reads count pages regardless of whether they were served from cache or from disk, the aggregate above captures the query’s IO irrespective of where the data resides.

Performance – Top Queries by Total IO

This output is identical to the previous one, the only difference being that it is sorted by the Total IO parameter. The sections and output are exactly the same as in the above reports.

As I sign off this post, I wanted to ask: has anyone used any of these reports, or do you rely on DMVs and other tools to troubleshoot CPU and/or IO related problems inside SQL Server? Do share your experience.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Top Transaction Reports

Let us start with a personal story.

Personal Story – My Daughter and I

My day has its own share of long meetings, and my daughter tries to pull a string here and there to get my attention almost every other day. Recently I was made to watch one of her school drawing assignments. As a parent, beauty is in the eye of the beholder, so I can never complain about what my daughter drew for me. Since this was some sort of school competition, I wanted to see what the teacher’s remark was – it read “Very Good”. Intrigued by this comment, I started to quiz my daughter about who got the best painting marks and what they drew. I am sure the parent inside me was taking a sneak peek into how my daughter performed compared to others in the class. She was quick to respond with a few names of her friends and said they also drew best. The honesty moved me, because as children we are unbiased and true to our very selves. I thought it was a great time to take my daughter to the park and spend some quality father-daughter time.

Getting Back to SQL Server

Returning from the park, this incident was on top of my mind, and I thought that being a class topper, ahead of the crowd, is an intrinsic quality we all try to pursue as human beings. It is fitting that today’s post is all about the “Top” nature. In life it is great to be a top performer, but in SQL Server parlance it is not a great thing to appear in these Top reports – you are surely going to become a subject for tuning next. This surely is a great boon for administrators, though.

This blog will call out 3 different reports from Server Node -> Reports -> Standard Reports:

  1. Top Transactions by Age
  2. Top Transactions by Locks Count
  3. Top Transactions by Blocked Transactions Count

Since all these reports are from the Top category, ranking transactions by various factors, I thought to cover them all in one post.

Top Transactions by Age

This is one of the simplest reports; it shows when each transaction was submitted to the server and how long it has been running on the instance. It is a good starting point to know which session ID and database have been holding how many locks for how long – for all practical purposes, a very good place to start looking at long-running transactions.

Some of the DMVs that work behind the scenes to generate the report are:

  • sys.dm_tran_active_transactions – information about transactions in the instance.
  • sys.dm_tran_session_transactions – information about transactions for a given session.
  • sys.dm_tran_database_transactions – information about transactions at the database level.
  • sys.dm_exec_sessions – information about the active sessions currently on the server.
  • sys.dm_exec_requests – information about each request currently executing on the server.

From this report, a DBA can decide which process is causing these locks, and why they are held for such a long time.
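Joining the DMVs listed above gives a rough sketch of what the report computes – the age of each currently active transaction per session (again, an approximation, not the report’s exact query):

```sql
-- Sketch: age of active transactions, per session and database.
SELECT es.session_id,
       DB_NAME(tdt.database_id)                                 AS database_name,
       tat.transaction_begin_time,
       DATEDIFF(SECOND, tat.transaction_begin_time, GETDATE())  AS age_seconds
FROM   sys.dm_tran_active_transactions tat
JOIN   sys.dm_tran_session_transactions tst ON tst.transaction_id = tat.transaction_id
JOIN   sys.dm_tran_database_transactions tdt ON tdt.transaction_id = tat.transaction_id
JOIN   sys.dm_exec_sessions es ON es.session_id = tst.session_id
ORDER BY age_seconds DESC;
```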

Top Transactions by Lock Count

I would say this is a continuation of the previous report. There, I was able to find the number of locks for a given session ID and database, but the specifics of the lock types were not clear.

This report clears up that part of the uncertainty: it shows the types of locks held by a specific session. In the report below we can see that session IDs 52 and 53 are holding Object, Page and Key locks respectively. While 52 has already taken an exclusive lock, 53 has fired an update on the dbo.test table.

I am sure that on a highly transactional production server this will be a busy report to view. Do let me know how many nodes you see on your servers.
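The same lock breakdown can be sketched directly from sys.dm_tran_locks (an assumption on my part; the report itself may query it differently):

```sql
-- Sketch: count of granted locks per session, by resource type and mode.
SELECT request_session_id AS session_id,
       resource_type,             -- OBJECT, PAGE, KEY, etc.
       request_mode,              -- S, U, X, IX, etc.
       COUNT(*) AS lock_count
FROM   sys.dm_tran_locks
WHERE  request_status = 'GRANT'
GROUP BY request_session_id, resource_type, request_mode
ORDER BY lock_count DESC;
```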

The transaction state can be one of the following:

  • Uninitialized
  • Initialized
  • Active
  • Prepared
  • Committed
  • Rolled Back
  • Committing

These values come from the sys.dm_tran_database_transactions DMV, while the active transaction state can take the below values, as defined in the sys.dm_tran_active_transactions DMV:

  • Invalid
  • Initialized
  • Active
  • Ended
  • Commit Started
  • Prepared
  • Committed
  • Rolling Back
  • Rolled Back
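Both sets of state values can be inspected in their raw, numeric form directly from the two DMVs; a minimal sketch:

```sql
-- Minimal sketch: the raw state columns behind the report.
SELECT transaction_id,
       database_id,
       database_transaction_state,
       database_transaction_begin_time
FROM   sys.dm_tran_database_transactions;

SELECT transaction_id,
       name,
       transaction_state,
       transaction_begin_time
FROM   sys.dm_tran_active_transactions;
```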

 

Top Transactions by Blocked Transactions Count

The last report under the Top category is the Blocked Transactions Count report. It is almost identical to the report we saw in our previous post on the Blocking Transactions report. Since we have already explained it there, I will refrain from getting into the details here.

 

These reports can help in finding the cause of errors such as the following:

The instance of the SQL Server Database Engine cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users. Ask the database administrator to check the lock and memory configuration for this instance, or to check for long-running transactions. (Msg 1204, Level 19, State 4)

Action: Find the session_id which is taking the most locks, and tune it.

Lock request time out period exceeded. (Msg 1222, Level 16, State 45)

Action: Find the session_id which is holding the locks, and see why it has a long-running transaction.
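Msg 1222 is driven by the session-level lock timeout setting; a session opts into the timeout (and therefore this error) like this:

```sql
-- Msg 1222 surfaces when the session's lock timeout expires.
SET LOCK_TIMEOUT 5000;   -- wait at most 5 seconds for a lock
SELECT @@LOCK_TIMEOUT;   -- current setting in milliseconds
```

The default is -1 (wait indefinitely), in which case sessions block rather than raise Msg 1222.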

I am curious to know how many of you have ever used the Top reports mentioned here in your environments to debug something. Do let me know how you used them effectively, so that we can learn from each other.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Developer’s Life – Every Developer is the Incredible Hulk

The Incredible Hulk is possibly one of the scariest superheroes out there.  All superheroes are meant to be “out of this world” and awe-inspiring, but I think most people will agree when I say The Hulk takes this to the next level.  He is the result of an industrial accident, which is scary enough in its own right.  Plus, when mild-mannered Bruce Banner is angered, he goes completely out of control and transforms into a destructive monster that he cannot control and has no memories of.

With that said, The Incredible Hulk might also be the most powerful superhero.  While I don’t think that developers should get raging mad and turn into giant green destroyers, I think there are still great lessons we can learn from The Hulk.

So how are developers like the Incredible Hulk?

 

Well, read on for my list of reasons.

Keep Calm

“Don’t make me angry.  You won’t like me when I’m angry.”  Bruce Banner warns everyone he comes in contact with, and this can be a lesson to take into the real world.  Don’t think of it as fair warning that you are now allowed to yell at people; think of it as advice to yourself.  Very few people (including developers) do good work when they have steam coming out of their ears.

A Little Intensity is a Good Thing

A little intensity is a good thing.  In the movie “The Avengers,” Bruce Banner says he is always angry.  Instead of letting it get out of control and turn him into The Hulk, he harnesses that energy to try to find a solution to their problems.  As a developer, when something is frustrating you, try to channel that energy into beating the problem, not beating up your co-workers.

It is NOT all about You!

It’s not all about you.  In another scene in “The Avengers,” Bruce Banner’s tendency to transform into The Hulk becomes a liability to the rest of the team.  He has to rely on his team members to save themselves (and him) from himself.  Developers work as a team as well.

Constructive Anger

So far, we have discussed The Hulk’s anger like it is a bad thing.  But anger can be constructive, too.  When there is a huge evil to be fought, The Incredible Hulk is a fighting machine.  When there is a huge network or server problem, incredible developers join together and do amazing work.

The Incredible Hulk can be scary – no, let me put it another way – he can be intimidating.  But that just means the lessons he can teach us all, including developers, are about controlling ourselves in tough situations and rising to the occasion.

Reference: Pinal Dave (http://blog.sqlauthority.com)