It is very common these days for organizations to move their databases to the cloud. The first question I often hear from them is: how do you improve application performance in the cloud while reducing bandwidth cost?
Some organizations use SQL Server Always On Availability Groups to help boost performance of applications whose users are geographically dispersed. They accomplish this by placing a secondary database in the same facility as the remote users. While it is almost always a good idea to place data near the application, choosing Always On may be using a sledgehammer to do a screwdriver's job, given the very high cost of Always On from a software, hardware, and implementation perspective. Of course, RTO (Recovery Time Objective) and RPO (Recovery Point Objective) are prime factors in deciding whether you need Always On, but these factors relate to recovery, not performance. Many organizations overstate their RTO/RPO requirements; most companies are not under the tight constraints of an airline reservation system or a financial trading system. In fact, transactional or merge replication may be just the tool you need to meet both the application performance and the recoverability requirements of your organization. Let us learn when to use a sledgehammer and when to use a screwdriver.
In this blog post we are going to learn how to speed up performance without a code change or configuration change in SQL Server. This blog post is going to be a fun and quick read.
SQL Server Analysis Services (SSAS) is becoming increasingly popular as an OLAP platform for business analysts. There are many tools available for enhancing an analyst's ability to process data and gain meaningful insights, ranging from direct queries in Excel to custom applications. For example, using PowerPivot to extend Microsoft Analysis Services directly into an Excel workbook, you can then use Excel to build and explore an OLAP model with pivot tables and other techniques. One thing you can generally be sure of is that this will result in pulling very large data sets across the network.
We talk a lot about optimizing SQL with building the most efficient queries and application architectures. It’s what we enjoy doing and what we’re paid to do. We love to twiddle bits and tweak code, and can spend day after day doing this. But are we looking at the big picture? Is what we are working on the highest priority for the business or organization we work for? Are we extending our value beyond just implementing and optimizing SQL databases and applications? Can we deliver value that has a return on investment (ROI) to organizations? That ROI can be in terms of making people more productive, saving infrastructure costs, and even making trade-offs for when we should code and when we should automate. This even applies to when to use consultants like me. Can an hour of consultation with me save days or weeks of research and trial-and-error in addressing a major performance roadblock?
I have blogged before about how to identify application vs. network performance issues using SQL Server Dynamic Management Views (DMVs). There are further diagnostics, as well as optimizations, you can use to drill down into a network issue and address it.
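As a starting point for that drill-down, SQL Server's wait statistics DMVs can help separate database work from network and client-side waits. The query below is a minimal sketch (it assumes you have VIEW SERVER STATE permission): a high share of ASYNC_NETWORK_IO waits typically suggests the network, or an application that consumes result sets slowly, is the bottleneck rather than the database engine.

```sql
-- Sketch: rank waits since the last restart (or stats clear) and show
-- each wait type's share of total wait time. Watch for ASYNC_NETWORK_IO,
-- which accumulates while SQL Server waits for the client/network to
-- consume result sets.
SELECT
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    wait_time_ms * 100.0 / SUM(wait_time_ms) OVER () AS pct_of_total_waits
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0
ORDER BY wait_time_ms DESC;
```

Keep in mind that these counters are cumulative, so comparing two snapshots taken during the slow period is more meaningful than a single reading.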
Your SQL is tuned perfectly and delivers thousands of rows in milliseconds on your test system, but your end users are complaining about slow application performance. Time is of the essence because the poor performance affects productivity and the company's ability to make money. Everybody is looking at the database as the culprit. You know it's not. But what do you do? Let us learn in this blog post how to identify application vs. network performance issues.