Mapping Columns from an Excel Sheet and Loading Data into a SQL Server Database

As explained in my previous blog, we can already view all of the data from an Excel sheet in the SQL Server query results window.

The next question was: how do we map each column from the Excel sheet to a SQL Server database table and load the data?

A small modification to the query from my previous blog does the mapping.
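For reference, the view-only query from the previous post is essentially the SELECT half of the statement shown later in this post (a sketch; the file path and sheet name are taken from the example below):

SELECT *
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
    'Excel 12.0 Xml;HDR=YES;
     Database=C:\Users\dhinakaran\Desktop\Src.xlsx',
    'SELECT * FROM [Sheet1$]');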

For example, let me create a table in a SQL Server database.

USE [Test]
GO

CREATE TABLE [dbo].[Management](
    [ID] [int] NULL,
    [Name] [varchar](100) NULL,
    [Designation] [varchar](100) NULL
) ON [PRIMARY]

GO


My Excel sheet, which I keep on my desktop, looks like this. I want to load one of its columns, Name, into the Management table.



The query that maps only the NAME column from the Excel sheet to the Name column of the Management table is as follows:


INSERT INTO dbo.Management (Name)
SELECT Name
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
    'Excel 12.0 Xml;HDR=YES;
     Database=C:\Users\dhinakaran\Desktop\Src.xlsx',
    'SELECT * FROM [Sheet1$]');
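Note: on many SQL Server installations, ad hoc OPENROWSET queries are disabled by default, and the Microsoft.ACE.OLEDB.12.0 provider must be installed on the server. If the query above fails with an error about Ad Hoc Distributed Queries, the following sketch (run by an administrator, ideally tried on a test instance first) usually enables the option:

-- Sketch: enable ad hoc distributed queries so OPENROWSET can be used.
-- Requires ALTER SETTINGS permission; the ACE OLE DB provider is installed separately.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;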






After running this query, we can see that the Name column is populated with the data from the Excel sheet, while the remaining columns are left as NULL.
The snippet is shown below.
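To verify, a quick select over the table (assuming only the Name column was mapped, as above) should show the Excel values in Name and NULL everywhere else:

-- Quick check: Name comes from the Excel sheet, ID and Designation stay NULL
SELECT ID, Name, Designation
FROM dbo.Management;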



