Some blog readers asked why we should use SQLCMD or BCP to generate the file when the same result can be achieved using SQL Server Management Studio itself. Well, that was a fair point, and I liked the simplicity of the solution. A lot of times, learning the simple techniques already present in the tools we use can make us much more productive.
Here are the steps to achieve this. In SQL Server Management Studio, choose Tools from the menu bar and click Options. Then go to “Query Results” > “SQL Server” > “Results to Text”.
Once we choose “Comma delimited” and hit OK, we will get the following kind of output when we execute a query. (Note that we need to open a new query window so that the settings take effect.)
Notice that the first row contains the column names. If we want to avoid that and get ONLY the data, we can customize this in the same place where we modified the separator setting. If we uncheck “Include column headers in the result set”, we will get only the data rows as output.
We can also choose a custom delimiter. I have set the delimiter to pipe (|) and unchecked the box mentioned earlier, and below is the output for the same query.
As we can see, the output is pipe separated and contains just the data, with no column headings. This file can be saved and imported into another server using the bcp command, the BULK INSERT command, or the Import/Export Wizard.
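As a sketch of what such a delimited file looks like, the snippet below (illustrative Python, not part of the SSMS workflow; the rows are made-up sample data) writes a few rows with csv.writer using the pipe as the delimiter and no header row, matching the unchecked “Include column headers” option:

```python
import csv
import io

# Sample rows standing in for a query result set (hypothetical data).
rows = [
    (1, "Day"),
    (2, "Evening"),
    (3, "Night"),
]

buffer = io.StringIO()
writer = csv.writer(buffer, delimiter="|", lineterminator="\n")
# No header row is written, only the data rows.
writer.writerows(rows)

print(buffer.getvalue())
# 1|Day
# 2|Evening
# 3|Night
```

A file in this shape can then be loaded on another server with bcp or BULK INSERT by specifying | as the field terminator.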
In SQL Server, you can use the BCP command to export a result set to a CSV file. In MySQL, too, you can export data from a table or result set as a CSV file in several ways. Here are two methods.
Method 1 : Use MySQL Workbench
If you are using MySQL Workbench as a querying tool, you can make use of the Export option in its result window. Run the following code in Workbench:
SELECT db_names FROM mysql_testing;
The result will be shown in the result window. There is an option called “File”. Click on it, and it will prompt you with a window to save the result set (screenshot attached to show how the File option can be used). Choose the directory and type in the name of the file.
Method 2 : Use the INTO OUTFILE clause
You can do the export using a query with the INTO OUTFILE clause, as shown below:
SELECT db_names FROM mysql_testing
INTO OUTFILE 'C:/testing.csv'
FIELDS TERMINATED BY ';'
       ENCLOSED BY '"'
       ESCAPED BY '"'
LINES TERMINATED BY '\r\n';
After executing the above code, you will find a file named testing.csv in the C: drive of the server.
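To see how a file in this format parses back, here is a small Python sketch (illustrative only, not part of MySQL; the database names are made up) that reads a string with the same layout as testing.csv: fields terminated by ';', enclosed by '"', lines terminated by '\r\n':

```python
import csv
import io

# Sample content shaped like testing.csv (hypothetical values).
content = '"db1";"db2"\r\n"db3";"db4"\r\n'

# delimiter and quotechar mirror the FIELDS TERMINATED BY / ENCLOSED BY clauses.
rows = list(csv.reader(io.StringIO(content), delimiter=";", quotechar='"'))
for row in rows:
    print(row)
```

This is a handy sanity check when you need to consume such an export from another program.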
I have previously written an article on how one can import a CSV file into a database table using an SSIS package. That was a simple import operation where the CSV file structure corresponded to the table structure column-to-column. However, SQL Server Integration Services is a very powerful tool that can be used for much more complex data import operations.
So, let us consider a case when the CSV file structure does not correspond to the database schema. For example, let’s import a list of products with their prices into the Salesforce cloud database.
Salesforce has a predefined database structure. Information on products and their prices is stored in three tables: Product2 (containing information on products), PriceBook2, and PriceBookEntry – the table that actually contains prices. PriceBookEntry has foreign keys to PriceBook2 and Product2.
To import data to Salesforce we use dotConnect for Salesforce from Devart – an ADO.NET provider for accessing data of the Salesforce cloud CRM and Database.com through the standard ADO.NET or Entity Framework interfaces.
dotConnect for Salesforce eliminates the need to learn complex Web Services API – it allows you to use your knowledge of ADO.NET and Entity Framework. It offers the easiest way for working with Salesforce data and the highest performance. It uses advanced data caching, connection pooling, and query optimization to increase the performance of working with Salesforce data. It has its own SQL engine that enables you to use SQL-92 statements for selecting data instead of limited SOQL.
Devart’s dotConnect for Salesforce provides advanced Salesforce Source and Salesforce Destination components for SSIS data flows. For example, the Salesforce Source component allows using both SOQL and a subset of SQL-92 to query data, provides a powerful editor dialog box that allows you to select objects for querying visually, and supports working with SSIS system variables and deleted records.
The Salesforce Destination component uses the Salesforce native bulk protocol for loading data to increase performance, and provides advanced error processing. It can return a set of columns containing the original loaded columns, the record IDs of the records that were inserted successfully, and the error code, error columns, and error description for the rows that caused an error. The latter feature will be used in this example.
So, we have a Product.CSV file with the following columns:
Let us create an SSIS package that imports this file into the Salesforce database. For this, we first create a new Integration Services project.
Drag Data Flow Task from the Toolbox to the Control Flow tab.
Switch to the Data Flow Task.
Drag Flat File Source from the Toolbox to the Data Flow tab.
Double-click the Flat File Source and click New in its editor.
Specify the path to the file.
Click Advanced and define the columns. Use the New button to create columns, then specify names and datatypes for them.
Click OK in the connection manager editor and then in the Flat File Source editor.
Drag Devart Salesforce Destination from the Toolbox to the Data Flow tab. We will use this instance of Devart Salesforce Destination for inserting data to the Product2 table.
Click the Flat File Source on the Data Flow tab and connect its blue arrow to the Devart Salesforce Destination.
Right-click the Connection Managers tab at the bottom of the designer and select New Connection from the shortcut menu.
In the opened Add SSIS Connection Manager dialog box select DevartSalesforce and click the Add button.
Specify the connection parameters and click OK.
Double-click the Devart Salesforce Destination on the Data Flow tab.
In the Connection Managers grid (in the Connection Manager column) select Devart Salesforce Connection Manager.
Switch to the Component Properties tab and select Product2 for the Salesforce Object property.
Switch to the Column Mappings tab. The Name column will be automatically mapped to the corresponding column in the Product2 table. Click OK.
Drag the Conditional Split component from the Toolbox to the Data Flow tab.
Click the Devart Salesforce Destination on the Data Flow tab and connect its red arrow to the Conditional Split.
In the opened Configure Error Output dialog box, select Redirect row in the Error column and click OK.
This setting will redirect all the rows (both successfully inserted and failed) to the Error Output. We need this in order to insert price data from the rows successfully inserted into the Product2 table into the PriceBookEntry table.
The advanced error processing capability of the Devart Salesforce Destination component allows us to easily distinguish successfully inserted rows from failed ones by the ErrorCode column value. What is even more important, Devart Salesforce Destination returns the Salesforce ID values of the records inserted into the Product2 table, which are necessary to assign to the Product2ID foreign key column.
To insert price data into the PriceBookEntry table, we will also need the ID value of the record in the PriceBook2 table corresponding to the pricebook we insert prices into. The retrieval of this value is described later.
So, let’s configure the Conditional Split component. Double-click it and add the following output names and conditions to the grid:
the name ‘Success‘ with the condition ‘ErrorCode == -1‘ (successfully inserted records will go to this output)
the name ‘Error‘ with the condition ‘ErrorCode > -1‘ (records that caused errors will go to this output)
Then click OK.
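The routing logic of the Conditional Split can be pictured in plain Python (an illustrative sketch, not SSIS code; the dictionaries stand in for data-flow rows, and ErrorCode == -1 marking success follows the component's convention described above):

```python
# Hypothetical rows coming out of the Salesforce Destination error output.
rows = [
    {"Name": "Product A", "ErrorCode": -1, "Id": "001xx0000001"},
    {"Name": "Product B", "ErrorCode": 5,  "Id": None},
    {"Name": "Product C", "ErrorCode": -1, "Id": "001xx0000002"},
]

# ErrorCode == -1 -> Success output; ErrorCode > -1 -> Error output.
success = [r for r in rows if r["ErrorCode"] == -1]
error = [r for r in rows if r["ErrorCode"] > -1]

print(len(success), len(error))  # -> 2 1
```

Only the rows in the Success branch carry a Salesforce ID, which is why the price data is taken from that branch alone.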
Now let us get the ID of the pricebook we will insert prices into. We will use the standard pricebook (with IsStandard = true). To get its ID, drag Devart Salesforce Source from the Toolbox to the Data Flow tab.
Double-click the Devart Salesforce Source, select the Devart Salesforce Connection Manager in the Salesforce Connection drop-down list, and enter the following query in the Query box:
SELECT Id FROM Pricebook2
WHERE IsStandard = true
As you can see, the Devart Salesforce Source provides a powerful editor that displays all the available Salesforce objects and variables and shows the properties of the selected object. It also allows you to preview the queried data. We will use this feature to retrieve the ID of the pricebook we will use. Click the Preview Data button (highlighted with red).
Copy the ID value, then click Close in the Preview Data window and click OK in the Devart Salesforce Source editor. After we have copied the ID value, we no longer need the Devart Salesforce Source, and we can delete it.
Then drag Derived Column from the Toolbox to the Data Flow tab. This component is used for adding custom columns to data flows.
Click the Conditional Split component and connect its red arrow to the Derived Column.
In the opened Input Output Selection dialog box select Success in the Output box and click OK.
Double-click the Derived Column component. We need to add three custom columns in its editor. First, add the Pricebook2ID column: enter ‘Pricebook2ID‘ into the Derived Column Name column of the grid, paste the copied ID value into the Expression column, and put it in quotes.
Then, in the second grid row, enter ‘UseStandardPrice‘ (without quotes) into the Derived Column Name column, and enter ‘false‘ (without quotes) into the Expression column.
In the third grid row enter ‘IsActive‘ into the Derived Column Name column, and enter ‘true‘ into the Expression column. Then click OK.
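In effect, the Derived Column step stamps three constant fields onto every row passing through. In plain Python terms (an illustrative sketch, not SSIS code; the pricebook ID and product row are made up):

```python
# Hypothetical pricebook ID copied from the Preview Data window.
PRICEBOOK2_ID = "01s000000000001AAA"

# A row from the Success branch, standing in for the data flow (made-up values).
rows = [{"Name": "Product A", "UnitPrice": 10.0, "Id": "001xx0000001"}]

# Mirror the three derived columns configured in the editor.
for row in rows:
    row["Pricebook2ID"] = PRICEBOOK2_ID
    row["UseStandardPrice"] = False
    row["IsActive"] = True

print(rows[0]["Pricebook2ID"], rows[0]["IsActive"])
```

Every row then carries the constants needed by the PricebookEntry insert alongside its own product data.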
Drag Devart Salesforce Destination from the Toolbox to the Data Flow tab. We will use this instance of Devart Salesforce Destination for inserting data to the PricebookEntry table.
Click the Derived Column component on the Data Flow tab and connect its blue arrow to the Devart Salesforce Destination 1.
Double-click the Devart Salesforce Destination 1.
In the Connection Managers grid (in the Connection Manager column) select Devart Salesforce Connection Manager.
Switch to the Component Properties tab and select PricebookEntry for the Salesforce Object property.
Switch to the Column Mappings tab. In the grid row with the UnitPrice value in the second column, select Price in the first grid column. Click OK.
Now our package is ready.
You may press the F5 key to execute the package in debug mode.
I use the following script very often, and I realized that I have never shared it on this blog before. Creating Comma Separated Values (CSV) from a table column is a very common task, and we all do it many times a day. Let us see the example that I use frequently and its output.
GO
-- Check Table Column
SELECT Name FROM HumanResources.Shift
GO
-- Get CSV values
SELECT SUBSTRING(
    (SELECT ',' + s.Name
     FROM HumanResources.Shift s
     ORDER BY s.Name
     FOR XML PATH('')), 2, 200000) AS CSV
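The same result, building one comma separated string out of an ordered column, can be sketched outside SQL Server as well. Here is an illustrative Python analogue using SQLite (not the FOR XML PATH technique itself; the table and shift names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Shift (Name TEXT)")
conn.executemany("INSERT INTO Shift VALUES (?)",
                 [("Night",), ("Day",), ("Evening",)])

# Fetch the column in order and join it into one comma separated string.
names = [r[0] for r in conn.execute("SELECT Name FROM Shift ORDER BY Name")]
csv_value = ",".join(names)
print(csv_value)  # -> Day,Evening,Night
```

Note that the FOR XML PATH version prepends a comma to every value and then strips the first one with SUBSTRING, whereas join simply places the separator between values.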
I consider XML the best solution in terms of code and performance. Further, as I totally prefer this option, I am not even including the links to my other articles, where I have described other options.
Do you use any other method to resolve this issue? Can you find any significant difference in performance between these options? Please leave your comment here.
This is a very common request recently – How to import a CSV file into SQL Server? How to load a CSV file into a SQL Server database table? How to load a comma delimited file into SQL Server? Let us see the solution in quick steps.
CSV stands for Comma Separated Values, sometimes also called Comma Delimited Values.
GO
CREATE TABLE CSVTest (
    ID INT,
    FirstName VARCHAR(40),
    LastName VARCHAR(40),
    BirthDate SMALLDATETIME)
GO
Create a CSV file in drive C: with the name csvtest.txt and the following content. The location of the file is C:\csvtest.txt.
Now run the following script to load all the data from the CSV file into the database table. If there is an error in any row, that row will not be inserted, but the other rows will be inserted.
BULK INSERT CSVTest
FROM 'c:\csvtest.txt'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)
GO
-- Check the content of the table.
SELECT * FROM CSVTest
GO
-- Drop the table to clean up database.
DROP TABLE CSVTest
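As a quick sanity check of the file layout BULK INSERT expects here (comma field terminator, '\n' row terminator), this Python sketch parses a string with the same shape. The sample rows are hypothetical; the actual contents of csvtest.txt are not shown above:

```python
import csv
import io

# Made-up rows in the csvtest.txt layout: ID, FirstName, LastName, BirthDate.
content = "1,James,Smith,19750101\n2,Meggie,Smith,19790122\n"

rows = list(csv.reader(io.StringIO(content), delimiter=","))
for r in rows:
    print(r)
```

Each parsed row yields exactly four fields, matching the four columns of the CSVTest table, which is what lets BULK INSERT map the file to the table positionally.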