SQL SERVER – 2008 Star Join Query Optimization

Business Intelligence (BI) plays a significant role in businesses nowadays, and the databases that serve BI queries are facing an ever-increasing workload. When queries are sent to very large databases, millions of rows may be returned, and users experience extended query response times when such queries join multiple tables. ‘Star Join Query Optimization’ is a new feature of SQL Server 2008 Enterprise Edition. This mechanism uses bitmap filtering to improve the performance of some types of queries by retrieving rows from fact tables more effectively.

Improved Query Response Times
In general, data warehouses employ dimensionally modeled star or snowflake schemas. These schemas have one or more fact tables that contain transactional data and many dimension tables, which hold information such as product data, customer information, and times and dates; the dimensions define the fact table data. Usually, foreign keys are employed for maintaining relationships between the rows in fact tables and the rows in the dimension tables. SQL Server 2008 Enterprise recognizes databases that contain star schemas and uses the new Star Join Query logic to process queries against such schemas more efficiently. Typically, data warehouse queries run approximately 20 percent faster on average.

Automatically Implemented
Star Join Query Optimization is implemented automatically by SQL Server and does not require any special database or application configuration. The query processor will usually optimize queries with medium selectivity (queries that retrieve approximately 10% to 75% of the rows from a fact table). Such queries are usually handled using hash joins to join the dimension and fact tables, employing the foreign keys to identify matching rows. In the case of hash joins, a hash table is built for each dimension table referenced in the query; the optimization process uses these hash tables to derive bitmap filters. Bitmap filters identify the key values from each dimension table that qualify for inclusion in the query. When the fact table is scanned, the bitmap filters are applied to it, eliminating those fact table rows that do not qualify for inclusion in the result set. The most selective bitmap filter is applied first because it eliminates the highest number of rows. Since the eliminated rows need no further processing, subsequent filters do not have to be applied to them, which makes the process more efficient.
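
To make the shape of such a query concrete, here is a minimal sketch of a typical medium-selectivity star join. The FactSales, DimProduct, and DimDate tables, their columns, and their keys are hypothetical illustrations, not objects from any particular database.

-- Hypothetical star schema: the fact table joins to two dimension tables
-- through integer surrogate keys. A query like this, touching a moderate
-- share of the fact table, is a candidate for bitmap-filtered hash joins.
SELECT p.ProductCategory,
       SUM(f.SalesAmount) AS TotalSales
FROM dbo.FactSales AS f
INNER JOIN dbo.DimProduct AS p ON f.ProductKey = p.ProductKey
INNER JOIN dbo.DimDate AS d ON f.DateKey = d.DateKey
WHERE d.CalendarYear = 2008
GROUP BY p.ProductCategory;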

Query Selectivity
Bitmap filtering enhances performance for medium-selectivity queries because rows are filtered before any joins are executed; hence, the number of rows processed by each join decreases. Bitmap filtering is not applied when queries are highly selective (i.e., queries that return less than 10% of the rows in a fact table); in such cases, a nested loop join is generally more efficient. Similarly, when queries are not very selective at all (queries that return more than 75% of the rows in a fact table), bitmap filtering is not applied: very few rows would be filtered out, so there is no performance gain to be had.

Integer Data Types
Star join optimization is most efficient when the data type of the columns used in the joins is integer. This enables the bitmap filter to be applied as part of the initial table or index scan rather than at a later stage in the query plan. Most queries benefit from star join optimization because foreign key relationships are commonly implemented using integer-based alternate key columns.

Reference : Pinal Dave (http://blog.SQLAuthority.com)

SQLAuthority News – SQL Server 2008 R2 Resources, Downloads, Documentation

Microsoft SQL Server 2008 R2 November Community Technology Preview
Building on SQL Server 2008, R2 provides an even more scalable data platform with comprehensive tools for managing your databases and applications, improving the quality of your data, and empowering your users to build rich analyses and reports using tools they are already familiar with.

Microsoft SQL Server 2008 R2 November Community Technology Preview Feature Pack
The Microsoft SQL Server 2008 R2 Feature Pack is a collection of stand-alone packages which provide additional value for SQL Server 2008 R2.

SQL Server 2008 R2 Books Online Community Technology Preview November 2009
Download a community technology preview (CTP) release version of the documentation and tutorials for Microsoft SQL Server 2008 R2.

Microsoft SQL Server 2008 R2 November Community Technology Preview – Express Edition
Microsoft SQL Server 2008 R2 Express is a powerful and reliable data management system that delivers a rich set of features, data protection, and performance for embedded application clients, light Web applications, and local data stores.

SQL Server 2008 R2 November CTP Reporting Services Add-in for Microsoft SharePoint Technologies 2010
The Microsoft® SQL Server 2008 R2 November CTP Reporting Services Add-in for Microsoft SharePoint Technologies allows you to integrate your reporting environment with SharePoint to experience the benefits of using the collaborative environment provided by SharePoint.

Microsoft SQL Server 2008 R2 November CTP StreamInsight
Microsoft® SQL Server® StreamInsight is Microsoft’s new Complex Event Processing technology to help businesses derive better insights by correlating event streams from multiple sources with near-zero latency.

Microsoft SQL Server 2008 R2 November CTP Report Builder 3.0
Microsoft SQL Server 2008 R2 Report Builder 3.0 provides an intuitive report authoring environment for business and power users. It supports the full capabilities of SQL Server 2008 R2 Reporting Services. The download provides a stand-alone installer for Report Builder 3.0.

Microsoft SQL Server 2008 R2 – PowerPivot for Microsoft® Excel 2010 – November Community Technology Preview
Microsoft® PowerPivot for Microsoft Excel 2010 provides ground-breaking technology, such as fast manipulation on large data sets (often in millions of rows), streamlined integration of data, and the ability to effortlessly share your analysis through Microsoft SharePoint 2010.

What’s New in SQL Server 2008 R2 November Community Technology Preview (CTP)
This document describes what’s new in the November CTP of Microsoft SQL Server 2008 R2. Analysis Services, Database Engine, Reporting Services, and Service Broker all have new or enhanced features in this release. Master Data Services is a new component in this release.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Introduction to Business Intelligence – Important Terms & Definitions

What is Business Intelligence

Business intelligence (BI) is a broad category of application programs and technologies for gathering, storing, analyzing, and providing access to data from various data sources, thus providing enterprise users with reliable and timely information and analysis for improved decision making. To put it simply, BI is an umbrella term that refers to an assortment of software applications for analyzing an organization’s raw data for intelligent decision making for business success. BI as a discipline includes a number of related activities, including decision support, data mining, online analytical processing (OLAP), querying and reporting, statistical analysis and forecasting.

Business Intelligence for Success

Business intelligence equips enterprises to gain business advantage from data. Once an organization is powered with BI, it can anticipate enhanced turnaround time on data collection, come up with fresh ideas for novel business initiatives, foresee an accurate picture of customer needs and demands, and run more targeted marketing campaigns. In addition, it will gain enhanced knowledge that will help it advance its brand into the top slot in terms of market share, cut down client churn, reduce overheads, and diminish delays in the supply chain, among other advantages. Decisions based purely on gut feeling cannot assure success; in BI’s fact-based decision-making framework, confident decisions can be made for assured business success. Further, BI makes an organization agile, giving it a competitive edge in today’s evolving market conditions.

Business Intelligence Scenario

Every day, a dairy company collects information from 2,000 stores, and this information is uploaded to servers at its headquarters at the end of the day. The chain’s main office uses this information to instantly analyze key operational measures such as the number of dairy products sold, profits, trends, and so forth.

By early morning the next day, the company’s managers receive performance data. They then verify current revenue, time required to perform each job, and other performance measures. With BI, franchisees with multiple locations can have consolidated views, as can the company’s regional managers.

This scenario clearly illustrates how implementing Business Intelligence can be very fruitful for an organization. BI can catalyze a business’s success in terms of:

Revenues

  • Distinguish the products and services that drive revenues.
  • Rank customers and locations based on profitability.

Customer relationship management

  • Categorize low-value customers and work toward improving their value.
  • Find customer relationship issues early and take timely action to resolve them.

Sales and marketing

  • Aim at high-value customers to minimize marketing risk.
  • Rank the success of product promotions based on product and market segment.
  • Find what is in the sales pipeline.

Business Intelligence Terms

Ad Hoc Query – Ability to create an immediate ‘request for information’ using an analytical reporting tool to answer a specific business-related query.

Aggregation – Predefined summaries based on comprehensive data in the data warehouse.

Atomic data – The most detailed level of data in the data warehouse.

Attribute – Field represented by a column in reports, tables and charts.

Cube – A powerful analysis tool that is used for viewing and analyzing data in a multi-dimensional format, from any angle, in any combination, through a ‘slice and dice’ and drilldown approach.

Data – Factual information for analysis.

Data Mart – A logical subset of related information, typically around one or a few business processes, or a particular subject area.

Data Warehouse – A repository of well-organized corporate data for Business Analysis and Reporting. It is also a collection of data marts.

Database – A collection of data arranged for convenient and quick search and retrieval.

OLAP – Online Analytical Processing is a category of database software that provides an interface to help users quickly and interactively analyze data across a variety of dimensions.

Balanced Scorecard – A performance management tool that recapitulates an organization’s performance from several standpoints on a single page.

Dashboard – A reporting tool that presents key indicators on a single screen, which includes measurements, metrics, and scorecards.

Data Cube – A multidimensional structure (typically over three dimensions) that forms the basis for analysis applications, allowing various types of calculations and aggregations.

Data Mapping – Process of identifying a source data element for each data element in the target data warehouse environment.

Data Quality – Relates to facets of data such as accessibility, completeness, accuracy, consistency, relevance, and timeliness.

Data staging – An area where data extraction, transformation, and loading operations are executed.

Dimension Table – It contains attributes that describe fact records in the fact table. For instance, pupil enrollment could be a central fact table with several dimensions linked to it, such as Faculty, Term, and Degree.

Drill Down – A feature that allows users to click on an item to navigate through a hierarchy and move to a level of greater detail.

EIS – Enterprise Information System – A category of applications and technologies for presenting and analyzing corporate and external data for various management purposes.

ETL – Extract, Transform and Load – A three-step process at the center of data warehousing. In the first step, operational data is extracted from multiple sources. In the next step, it is transformed into the required form. In the final step, the data is loaded into the data warehouse.

Forecasting – It is the formulation of trends, predictive models, and scenarios to prepare for the future for improved decision making.

Gap analysis – Study of whether the available business data supports business requirements, thus answering questions related to data accessibility, amount of data, missing data, and legacy systems. It examines information and decides on the resources and efforts required to satisfy requirements.

Granularity – Level of detail or summarization of data in the data warehouse. The more the detail, the higher the level of granularity.

Institutional performance management – The process of basing an organization’s actions and decisions on actual measured results of performance. It incorporates performance measures, benchmarks, and goals to attain best results.

Measure – A quantifiable, specific piece of information.

Metrics – Measures of performance that track progress and evaluate trends within an organization.

Normalization – Standard database design technique for relational databases.

Slice and Dice – Multidimensional tools allow users to view data from any angle. The ability to select various angles to view data from is called slice and dice capability.

Snapshot – View of data at a particular moment.

Reference: Pinal Dave (http://blog.SQLAuthority.com), DNS

SQLAuthority News – SQL Server Accelerator for Business Intelligence (BI)

I had a wonderful experience on my recent Business Intelligence tour. I will write in detail about my experiences at the different locations. However, today I would like to talk about one particular question that was asked at all the locations: the SQL Server Accelerator for Business Intelligence (BI). Many attendees asked me how to use this tool.

SQL Server Accelerator for Business Intelligence (BI) is no longer supported by Microsoft. Microsoft does not provide any support for this solution accelerator and has no plans to release future versions. Microsoft SQL Server 2005 and later versions include most of the functionality of the accelerator and are supported according to the Microsoft Product Lifecycle.

For the same reason, this tool is not included in the course, and new developers should not focus on learning it.

Reference : Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – What is Data Mining – A Simple Introductory Note

According to MacLennan et al. (2009), data mining is defined as “the process of analyzing data to find hidden patterns using automatic methodologies.” Consider the following simple example that explains this concept. By analyzing the data on the items purchased from a supermarket or a chain of such stores, information on the products that sell the most can be obtained, and accordingly the supply of those products is increased, and vice versa. Data mining, in short, is an analytical activity that studies the hidden patterns in a huge pile of data after appropriately classifying and sorting it.

Who all are involved in Data Mining?

Data mining is an activity, which can be programmed, that involves analyzing data and finally revealing its hidden patterns. Architects, developers, and analysts are all involved in the data mining process.

Data mining is usually carried out by an analyst. However, the analyst will not necessarily be able to identify every hidden pattern of a particular data set, irrespective of its size. Finally, the identified patterns are converted into useful information for business purposes. A developer combines data mining with application solutions, and an architect understands the needs of the developer and the analyst and meets them accordingly.

Microsoft and Data Mining

Microsoft provides a wide range of data mining options, including collaborative solutions and ad hoc analysis (in MS Office Excel). A free plug-in is available for MS Office Excel 2007 that helps the analyst analyze data patterns. In addition to this plug-in, the Business Intelligence Development Studio (BIDS) that ships free with SQL Server can also be used for data mining purposes.

It should be noted that data mining is not done on the basis of any known data patterns or any other additional information. The results of data mining are generated from the data presented, not from any other resources. Microsoft data mining applies mathematical techniques to the available data set to obtain models. In addition to BIDS, the .NET Framework and the Data Mining Extensions (DMX) language are also provided by Microsoft for custom solutions. At times, data mining is also known as machine learning.
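
For readers curious what DMX looks like, the sketch below shows the general shape of a prediction query. It is a hedged illustration only: the mining model [TM Decision Tree], the data source [Adventure Works DW], and the input table dbo.ProspectiveBuyer are hypothetical names in the style of the AdventureWorks tutorials, not objects referenced by this article.

-- Hypothetical DMX prediction query: apply a trained mining model
-- to new cases supplied through OPENQUERY.
SELECT t.CustomerKey,
       Predict([Bike Buyer]) AS PredictedBuyer
FROM [TM Decision Tree]
PREDICTION JOIN
    OPENQUERY([Adventure Works DW],
        'SELECT CustomerKey, Age, Gender FROM dbo.ProspectiveBuyer') AS t
ON [TM Decision Tree].Age = t.Age
   AND [TM Decision Tree].Gender = t.Gender;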

Results of Data Mining – Data Mining Models

Microsoft data mining results in data mining models, which are statistical information – either predictive or descriptive. A Microsoft Mining Model consists of the following three components: metadata, which is information about the data; patterns, which are mathematical formulas or rules; and bindings, where the data is defined. The statistical results may not be understandable in relation to a business perspective. Hence, these results or models must be translated to useful business information. One who engages in data mining is responsible for creating a link or relation between the resultant data model and the respective business problem.

Role of Data Miner (or Analyst)

A data miner should undergo adequate training with regard to all the tools and technologies used in mining and should not limit himself/herself to only those tools that are required for that particular organization/business. In fact, it is the responsibility of the organization to provide training to the data mining professional on a broader perspective. Data mining is never complete without the analyst. The application of the results of data mining to a specific business significantly depends on how far the analyst has understood the industry-specific objectives.

Applications of Data Mining

Data mining is used in various applications such as forecasting business and customer trends, detecting fraud (especially in the banking sector), generating customized advertisements, grouping customers on the basis of their purchasing trends, and risk analysis.

Benefits of Microsoft Data Mining

Microsoft data mining is extensible. It can be licensed through SQL Server 2008 (or SQL Server 2005), and it is compatible with other technologies, thereby allowing access to data in different formats. Microsoft data mining can also be used for business intelligence solutions, and it is scalable, unlike many other data mining products.

Reference : Pinal Dave (http://blog.sqlauthority.com)

SQLAuthority News – Application and Multi-Server Management

SQL Server 2008 R2 – Application and Multi-Server Management
SQL Server Technical Article
Title: SQL Server 2008 R2 Application and Multi-Server Management Introduction
Writers: Geoff Allix
Technical Reviewers: Joanne Hodgins, Omri Bahat, Morgan Oslake
Published: February 2010

SQL Server 2008 R2 introduces new management tools to help improve IT efficiency and productivity. Investments in application and multi-server management will help organizations proactively manage database environments efficiently at scale through centralized visibility into resource utilization. Such investments can help streamline consolidation and upgrade initiatives across the application lifecycle—all with tools that make it fast and easy.

This paper introduces the new extensions in SQL Server Management Studio and the Control Point Explorer, and it walks through the simple process of setting up a SQL Server managed server group, including SQL Server Control Point installation, enrolling an instance into central management, extracting Data-tier Applications from existing deployments, and deploying Data-tier Applications to the new managed server group.

Read whitepaper Application and Multi-Server Management

Abstract courtesy : Microsoft

Reference : Pinal Dave (http://blog.sqlauthority.com)

SQLAuthority News – Future of Business Intelligence and Databases – Article by Nupur Dave

This article is submitted by Nupur Dave

Future of Business Intelligence and Databases

The term business intelligence (BI) was coined by Howard Dresner in the early 1990s. He defined business intelligence as “a set of concepts and methodologies to improve decision making in business through use of facts and fact-based systems.” In a time when data warehousing was considered leading-edge, he created the vision that led to the development of business intelligence as it is known today. The once-visionary BI is now commonplace, and in the near future a momentous transformation is about to take place.

BI is set to bring about information democracy, where end users across an organization will be able to view information related to their segment. Further, rising consumer expectations will drive demand for greater BI capabilities as well.

Information Democracy with Business Intelligence

BI plays a crucial role in providing cost-effective, comprehensive insights into business processes that help organizations make informed, strategic decisions that assure enhanced business results. Not long ago, BI was the domain of statisticians and corporate analysts, and only a select few could access it. Today, BI is spreading to nearly every stratum of the organization, as companies attempt to provide critical data to the business users who need it to perform their jobs. It is quite evident that BI is instrumental in bringing information democracy: it provides everyone the insights they require, delivering information to the right people at the right time across the enterprise. More and more organizations are expected to empower their employees with BI for productivity and operational gains.

Business Intelligence and Performance Management for competitive advantage

Today, when the economic downturn is adversely affecting the world market, making insightful decisions about major changes in their business and markets has become crucial for organizations. They need to reconsider their strategies and operating plans. Only companies that can anticipate emerging trends and opportunities, refocus business strategy and vision, and align resources to maximize performance can effectively deal with the adversities of an economic downturn and gain competitive advantage. BI and Performance Management bring business sensibility with enhanced visibility and insights, delivering accurate and up-to-date information to the right people for faster, improved, strategic and tactical decision making. Companies have increasingly begun to realize the importance of timely performance information for improving results, which will further drive the rising demand for BI and Corporate Performance Management solutions.

Business Intelligence via Software As A Service (SaaS)

It is anticipated that in the coming years a number of organizations will have an industry-specific analytic application delivered via software as a service (SaaS) as a standard component of their BI portfolio. Information aggregators will bank on SaaS to deliver domain-specific analytic applications, built from the industry data they assemble, and tilt the balance of power in the BI platform market in their favor. Organizations will share their data only with those aggregators that can assure security as well as confidentiality. Thus, countless information aggregators offering SaaS analytic applications might appear; however, a virtual monopoly is expected to remain within each vertical niche owing to the high barrier to entry for others.

Business Intelligence and Social Software

Collaborative decision making, which amalgamates social software with BI platform capabilities, will soon become the norm. With social software, users can tag assumptions made in the decision-making process to the BI framework, which significantly enhances the business value of BI. For instance, in order to decide how much to invest in marketing a new product, users can tag their assumptions about the future sales of that product to a key performance indicator (KPI) that measures product sales. Whenever the KPI surpasses a threshold, the BI platform would send alerts to inform the decision makers about assumptions that no longer hold true.

With the advent of social software, it has become easier to harness the upsurge of interest in informal collaboration. As an alternative to the usual top-down decision-making initiative, social software will be employed to collaborate and make decisions.

Business Intelligence Across All Sectors

BI is on its way to becoming an integral component of every type and size of organization across all sectors, including real estate, IT, mobile devices, governmental agencies, scientific and engineering communities and R&D labs, and banking and insurance, to name a few. BI can effectively deal with industry-specific constraints, operations, and objectives, thereby helping organizations better understand their customers, optimize their operations, minimize risk, manage revenue, and ultimately improve their results. More importantly, it is expected that within a couple of years BI will replace gut-feel management with real data-based decision making.

Business Intelligence For Real-Time Data

Today, consumers are demanding faster and improved services from businesses. As it has become imperative for companies to pay attention to consumer satisfaction in order to stay competitive, they are depending even more on BI. They now demand near-real-time analysis for intelligent decisions for business success. The rising demand for near-real-time data analysis is making way for a new framework for business intelligence. BI 2.0 is the recently coined term to describe the acquisition, provision, and analysis of real-time data, which was earlier lacking in BI. BI 2.0 is part of the constantly developing business intelligence industry and indicates the next step for BI.

In the near future, companies will switch from the traditional model of making business adjustments on a quarterly basis to using business intelligence and performance management tools to make real-time shifts in strategy in response to changes in the marketplace.

Over the coming years, business intelligence will undergo a transformation that will have a broad and lasting impact. It will revolutionize the way we think about business and the way business decisions are made. Only when thoughtful analysis supersedes gut feeling and conventional perception will we enter the next level of business intelligence, one that empowers businesses with the capacity to reason, prepare, forecast, resolve issues, and innovate.

Reference : Nupur Dave (http://blog.sqlauthority.com)

SQLAuthority Author Visit – Ahmedabad SQL Server User Group Meeting – November 2008

The Ahmedabad SQL Server User Group Meeting was organized on November 29, 2008 on the famous C.G. Road in Ahmedabad. We had a great response and wonderful back-to-back technical sessions. The highlight of the whole meeting was the participation of UG President Jacob Sebastian – SQL MVP from New York.

The meeting started with an introduction and welcome to all the members from SQL Server MVP Pinal Dave, followed by his technical session “SQL Server 2008 – Backup and Compression”. This session was very special because not every database user is aware of this special feature of SQL Server 2008 and how it can improve the life of a SQL administrator.

Download the PDF of the SQL Server Presentation

[Please leave a comment in this thread if you are interested in SQL script which I had used in this demo presentation.]

I am very much encouraged as a User Group leader because one of the UG members, Tejas Shah, presented his very first technical session today. Tejas delivered an excellent session on “Introduction to Service Broker”. It is a very new subject, and the UG members liked it.

The most awaited session was presented by SQL Server MVP Pinal Dave, who discussed “How to become an MVP?”. Pinal explained the advantages and responsibilities of becoming an MVP. There were many questions regarding this subject; I think the UG members and SQL enthusiasts were really interested in learning how to become an MVP. I did my best to answer all the questions I received. Community service is the short answer on this subject.

Pinal Dave shared his experience of attending MVP Open Day 2008 in Goa. There were nearly 15 brand-new photographs of the Open Day. To know more about MVP Open Day 2008, please visit the following article.

SQLAuthority News – Author Visit – South Asia MVP Open Day 2008 – Goa – Group Photo

UG president Jacob, who is in New York, participated in this meeting using GoToMeeting and discussed his experience of attending the PASS conference. This was very special, as it was the first time we had a live presentation using audio/video community tools.

Well, last but not least, we had our ice creams. The ice cream was from Havmore and the flavor was ‘lonavali’. Let me know if you have any suggestions. Again, if you are interested in participating in UG meetings, please let me know. It is free, and it is good networking with other tech developers.

Pinal Dave UG Introduction

Pinal Dave Technical Session

Tejas Shah on Service Broker

UG Group Photos

Reference : Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Guidelines and Coding Standards Complete List Download


Coding standards and guidelines are very important for any developer on the path to a successful career. A coding standard is a set of guidelines, rules, and regulations on how to write code. Coding standards should be flexible enough that they do not prevent best practices for coding. They are basically the guidelines one should follow for better understanding.

The concept behind implementing coding standards and guidelines is consistency and uniformity in programming, so that if multiple people are working on the same code, it becomes easier to communicate, share, and understand each other’s work.

With the goal of promoting good coding standards and guidelines, I have created a document which can guide developers.
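
To give a flavor of what such a document covers, here is one small, hypothetical example of the kind of convention a SQL coding standard might prescribe: uppercase keywords, explicit column lists, meaningful aliases, and schema-qualified object names. The tables are illustrative only.

-- Illustrative convention (table names are hypothetical):
SELECT c.CustomerID,
       c.CustomerName,
       o.OrderDate
FROM dbo.Customers AS c
INNER JOIN dbo.Orders AS o
    ON o.CustomerID = c.CustomerID
WHERE o.OrderDate >= '20080101'
ORDER BY o.OrderDate DESC;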

SQL SERVER – Guidelines and Coding Standards Part – 1

SQL SERVER – Guidelines and Coding Standards Part – 2

SQL SERVER – Guidelines and Coding Standards Complete List Download

Reference : Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – 2008 – Interview Questions and Answers – Part 8

SQL Server Interview Questions and Answers
Print Book Available (207 Pages) | Sample Chapters

SQL SERVER – 2008 – Interview Questions and Answers Complete List Download

What is Data Compression?

In SQL SERVER 2008, Data Compression comes in two flavors:

  • Row Compression
  • Page Compression

Row Compression

Row compression changes the format of the physical storage of data. It minimizes the metadata (column information, length, offsets, etc.) associated with each record. Numeric data types and fixed-length strings are stored in a variable-length storage format, just like Varchar. (Read More Here)

Page Compression

Page compression allows common data to be shared between rows for a given page. It uses the following techniques to compress data:

  • Row compression.
  • Prefix Compression. For every column in a page, duplicate prefixes are identified. These prefixes are saved in the compression information (CI) structure, which resides after the page header. A reference number is assigned to these prefixes, and that reference number replaces the prefixes wherever they are used.

  • Dictionary Compression. Dictionary compression searches for duplicate values throughout the page and stores them in the CI. The main difference between prefix and dictionary compression is that prefix compression is restricted to one column, while dictionary compression applies to the complete page.
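
As a brief sketch of how the feature is turned on (the table name dbo.FactSales is a placeholder), compression is enabled per table or index, and SQL Server 2008 also ships a system procedure for estimating the savings before committing to a rebuild:

-- Enable page compression on a table (Enterprise Edition feature)
ALTER TABLE dbo.FactSales REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Estimate the space savings first, without changing the table
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'FactSales',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';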

What is the use of DBCC Commands?

The Transact-SQL programming language provides DBCC statements that act as Database Console Commands for SQL Server. DBCC commands are used to perform the following tasks.

  • Maintenance tasks on a database, index, or filegroup.
  • Tasks that gather and display various types of information.
  • Validation operations on a database, table, index, catalog, filegroup, or allocation of database pages.
  • Miscellaneous tasks such as enabling trace flags or removing a DLL from memory.

(Read More Here)
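
A few representative commands, one from each category, are shown below; the database, table, file, and index names are placeholders:

DBCC CHECKDB ('AdventureWorks');                              -- validation: check database integrity
DBCC SHOW_STATISTICS ('dbo.FactSales', 'IX_FactSales_Date');  -- information: display index statistics
DBCC SHRINKFILE ('AdventureWorks_Log', 100);                  -- maintenance: shrink a file to 100 MB
DBCC TRACEON (1222, -1);                                      -- miscellaneous: enable a trace flag globally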

How to find tables without Indexes?

Run the following query in the Query Editor.

USE <database_name>;
GO
-- List every user table that has no index at all
SELECT SCHEMA_NAME(schema_id) AS schema_name,
       name AS table_name
FROM sys.tables
WHERE OBJECTPROPERTY(object_id, 'IsIndexed') = 0
ORDER BY schema_name, table_name;
GO

How to copy the tables, schema and views from one SQL Server to another?

There are multiple ways to do this.

  1. “Detach Database” from one server and “Attach Database” to another server.
  2. Manually script all the objects using SSMS and run the script on the new server.
  3. Use Wizard of SSMS. (Read More Here)

How to copy data from one table to another table?

There are multiple ways to do this.

1) INSERT INTO SELECT

This method is used when the table has already been created in the database and data is to be inserted into it from another table. If the columns listed in the INSERT clause and the SELECT clause are the same, listing them is not required.

2) SELECT INTO

This method is used when the table has not been created earlier and needs to be created when data from one table is to be inserted into the newly created table. The new table is created with the same data types as the selected columns.

(Read More Here)
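
A minimal sketch of both methods, using hypothetical table names:

-- 1) INSERT INTO SELECT: dbo.OrderArchive already exists
INSERT INTO dbo.OrderArchive (OrderID, OrderDate, TotalDue)
SELECT OrderID, OrderDate, TotalDue
FROM dbo.Orders
WHERE OrderDate < '20080101';

-- 2) SELECT INTO: dbo.OrderBackup is created on the fly,
-- inheriting the data types of the selected columns
SELECT OrderID, OrderDate, TotalDue
INTO dbo.OrderBackup
FROM dbo.Orders;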

What are Catalog Views?

Catalog views return information that is used by the SQL Server Database Engine. Catalog Views are the most general interface to the catalog metadata and provide the most efficient way to obtain, transform, and present customized forms of this information. All user-available catalog metadata is exposed through catalog views.
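
For example, a few commonly used catalog views can be queried like ordinary tables (dbo.Orders is a placeholder):

SELECT name, create_date FROM sys.tables;                  -- user tables
SELECT name, type_desc FROM sys.objects WHERE type = 'P';  -- stored procedures
SELECT name, index_id FROM sys.indexes
WHERE object_id = OBJECT_ID('dbo.Orders');                 -- indexes on one table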

What are PIVOT and UNPIVOT?

A Pivot Table can automatically sort, count, and total the data stored in one table or spreadsheet and create a second table displaying the summarized data. The PIVOT operator turns the values of a specified column into column names, effectively rotating a table.

UNPIVOT is the reverse of PIVOT. (Read More Here)
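
A minimal sketch, assuming a hypothetical dbo.QuarterlySales table with SalesQuarter and Amount columns:

-- Rotate the quarter values into columns, summing Amount per quarter
SELECT [Q1], [Q2], [Q3], [Q4]
FROM (SELECT SalesQuarter, Amount FROM dbo.QuarterlySales) AS src
PIVOT (SUM(Amount) FOR SalesQuarter IN ([Q1], [Q2], [Q3], [Q4])) AS pvt;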

What is Filestream?

Filestream allows you to store large objects in the file system and have these files integrated within the database. It enables SQL Server-based applications to store unstructured data such as documents, images, audio, and video files in the file system. FILESTREAM integrates the SQL Server Database Engine with the New Technology File System (NTFS); the data is stored in a varbinary(max) column. Using this data type, the unstructured data is stored in the NTFS file system, and the SQL Server Database Engine manages the link between the Filestream column and the actual file located in NTFS. Using Transact-SQL statements, users can insert, update, delete, and select the data stored in FILESTREAM-enabled tables.
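
The sketch below shows the general shape of a FILESTREAM-enabled table. It assumes FILESTREAM has been enabled on the instance and that the database has a FILESTREAM filegroup; the filegroup name fsGroup and the table itself are hypothetical:

-- A FILESTREAM column must be varbinary(max), and the table needs a
-- UNIQUEIDENTIFIER ROWGUIDCOL column
CREATE TABLE dbo.Documents
(
    DocumentID   UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    DocumentName NVARCHAR(260),
    DocumentData VARBINARY(MAX) FILESTREAM NULL
) FILESTREAM_ON fsGroup;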

What is Dirty Read?

A dirty read occurs when one operation reads data that another concurrent operation has modified but not yet committed. Suppose A has changed a row but has not committed the change; B then reads the uncommitted data. B’s view of the data may be wrong, and that is a dirty read.
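
A small two-session sketch, with a hypothetical dbo.Accounts table:

-- Session A: changes a row but does not commit
BEGIN TRANSACTION;
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
-- ... no COMMIT or ROLLBACK yet ...

-- Session B: reads A's uncommitted change (a dirty read)
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT Balance FROM dbo.Accounts WHERE AccountID = 1;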

What is SQLCMD?

sqlcmd is an enhanced version of isql and osql, and it provides much more functionality than the other two options. In other words, sqlcmd is a better replacement for isql (which will be deprecated eventually) and osql (not included in SQL Server 2005 RTM). sqlcmd can work in two modes: i) batch and ii) interactive. (Read More)

What are Aggregate Functions?

Aggregate functions perform a calculation on a set of values and return a single value. Aggregate functions ignore NULL values, except COUNT(*). The HAVING clause is used, along with GROUP BY, to filter a query on aggregate values.

Following functions are aggregate functions.

AVG, MIN, CHECKSUM_AGG, SUM, COUNT, STDEV, COUNT_BIG, STDEVP, GROUPING, VAR, MAX, VARP (Read More Here)
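
For example, with a hypothetical dbo.Orders table, HAVING filters on the aggregated value after GROUP BY:

SELECT CustomerID,
       COUNT(*) AS OrderCount,
       SUM(TotalDue) AS TotalSpent
FROM dbo.Orders
GROUP BY CustomerID
HAVING SUM(TotalDue) > 1000;   -- keep only customers who spent over 1000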

What do you mean by Table Sample?
TABLESAMPLE allows you to extract a sampling of rows from a table in the FROM clause. The rows retrieved are random, and they are not in any order. The sampling can be based on a percentage or on a number of rows. You can use TABLESAMPLE when only a sampling of rows is necessary for the application instead of a full result set. (Read More Here)
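
A one-line sketch against a hypothetical dbo.FactSales table; note that sampling works at the page level, so the fraction returned is approximate:

SELECT *
FROM dbo.FactSales
TABLESAMPLE (10 PERCENT);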

What is Row_Number()?
ROW_NUMBER() returns a column, as an expression, that contains the row’s number within the result set. This is only a number used in the context of the result set; if the result changes, the ROW_NUMBER() will change.

What are Ranking Functions?

Ranking functions return a ranking value for each row in a partition. All the ranking functions are non-deterministic. Different Ranking functions are:

ROW_NUMBER () OVER ([<partition_by_clause>] <order_by_clause>)
Returns the sequential number of a row within a partition of a result set, starting at 1 for the first row in each partition.

RANK () OVER ([<partition_by_clause>] <order_by_clause>)
Returns the rank of each row within the partition of a result set.

DENSE_RANK () OVER ([<partition_by_clause>] <order_by_clause>)
Returns the rank of rows within the partition of a result set, without any gaps in the ranking. (Read More Here)
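
The sketch below compares the three functions over the same window, using a hypothetical dbo.Orders table:

SELECT CustomerID,
       TotalDue,
       ROW_NUMBER() OVER (PARTITION BY CustomerID ORDER BY TotalDue DESC) AS RowNum,
       RANK()       OVER (PARTITION BY CustomerID ORDER BY TotalDue DESC) AS RankNo,
       DENSE_RANK() OVER (PARTITION BY CustomerID ORDER BY TotalDue DESC) AS DenseRankNo
FROM dbo.Orders;
-- If two orders tie on TotalDue: ROW_NUMBER still increments,
-- RANK skips the next value, DENSE_RANK does not skip.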

What is the difference between UNION and UNION ALL?

UNION
The UNION command is used to combine the results of two queries into a single result set. When using the UNION command, all selected columns need to be of compatible data types. With UNION, only distinct rows are selected.

UNION ALL
The UNION ALL command is equal to the UNION command, except that UNION ALL selects all values.

The difference between UNION and UNION ALL is that UNION ALL will not eliminate duplicate rows; instead, it pulls all rows from all tables fitting your query specifics and combines them into a table. (Read More Here)
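
A minimal sketch with hypothetical tables:

SELECT City FROM dbo.Customers
UNION
SELECT City FROM dbo.Suppliers;     -- each city appears once

SELECT City FROM dbo.Customers
UNION ALL
SELECT City FROM dbo.Suppliers;     -- duplicates are retained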

What is B-Tree?

The database server uses a B-tree structure to organize index information. A B-tree generally has the following types of index pages or nodes:

  • Root node: contains pointers to branch nodes. There can be only one root node.
  • Branch nodes: contain pointers to leaf nodes or to other branch nodes. There can be two or more branch nodes.
  • Leaf nodes: contain index items and horizontal pointers to other leaf nodes. There can be many leaf nodes.

© Copyright 2000-2009 Pinal Dave. All Rights Reserved. SQLAuthority.com

Reference : Pinal Dave (http://blog.SQLAuthority.com)