
How to Pass Variables into an Execute SQL Task

In this post, I will show how I passed variables into a SQL query using the Execute SQL task.
Here is my scenario: I need to count the number of rows in one table, store that count in a variable, and then use that variable to insert a row into another table.

For this purpose, I will use a table named 'Dim' in a database called 'test', and a second table named 'Fact' in the same database.
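The post does not show the table definitions, so here is a minimal sketch of how the two tables might look. The schema and column names are assumptions for illustration only:

```sql
-- Hypothetical setup: the original post does not show the table schemas.
USE test;

CREATE TABLE dbo.Dim (
    DimID   INT IDENTITY(1,1) PRIMARY KEY,
    DimName VARCHAR(50)
);

CREATE TABLE dbo.Fact (
    RowCountValue INT  -- will hold the count taken from the Dim table
);
```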

In SSIS, I will drag an Execute SQL task onto the workspace. In fact, two of them, as two executions have to be done.
Next, let us create a variable for this purpose, as discussed earlier; I am creating a variable called "Testing".

Right-click the first Execute SQL task, choose Edit, and configure it as shown in the snippet below.

Importantly, we need to set ResultSet to "Single row". This option activates the Result Set tab of the Execute SQL task, where we can assign the query result to variables.
There are other ResultSet options as well, but I am concentrating only on Single row for now.
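With ResultSet set to Single row, the SQLStatement should return exactly one row. A query along these lines would fit the scenario (the schema name is an assumption):

```sql
-- Returns a single row with a single column: the row count of Dim.
SELECT COUNT(*) AS DimRowCount
FROM dbo.Dim;
```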

Note: whatever query we write here, its result will be passed to the Result Set tab of the Execute SQL task.

Here I am assigning the variable "Testing" that I created earlier. This means that whatever value the SQL statement returns will be assigned to the variable User::Testing.

Now we need to take one more Execute SQL task for the second step (inserting the value into the table using the variable).
Again, right-click, choose Edit, and continue as shown in the snippet.

Here there is no need to set ResultSet, because in this Execute SQL task we are not passing the query result to a variable; rather, we are feeding the variable's value into the SQL statement.

Now go for Build Query; the query will be built automatically. Then click Run.
(Note: observe the ? symbol. Instead of this ? symbol we can use @testing, but then we would also need to enter @testing as the parameter name on the Parameter Mapping tab of this Execute SQL task. I will show where to change this in a snippet below.)
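The insert statement with the positional ? placeholder might look like this; the table and column names are assumptions carried over from the setup sketch above:

```sql
-- ? is the positional parameter placeholder; it is bound to User::Testing
-- on the Parameter Mapping tab of the Execute SQL task.
INSERT INTO dbo.Fact (RowCountValue)
VALUES (?);
```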

After Run, we should see the result below.


Now we will cancel here and click OK in the remaining open windows. (Clicking OK at this window would insert a value into the table, which may lead to duplicate rows being inserted.)


Next, go to the Parameter Mapping tab.


Here we need to map the variable we created.
Remember, I mentioned earlier that we could also use @testing; that would be entered as the Parameter Name (second circle).

We have given 0 as the Parameter Name because we have only one parameter.
Parameters are mapped by position: if we had created two parameters, we would number them 0, 1, and so on, in the order the placeholders appear in the query.
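For example, if the query had two placeholders, the mapping would be positional. This is a hypothetical extension of the insert above (the LoadDate column is an assumption):

```sql
-- Two positional parameters: the first ? maps to Parameter Name 0,
-- the second ? maps to Parameter Name 1 on the Parameter Mapping tab.
INSERT INTO dbo.Fact (RowCountValue, LoadDate)
VALUES (?, ?);
```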

Now run the package, and we can see the result in the destination table Fact.
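To verify, we can query the destination table directly (assuming the sketched schema above):

```sql
-- Check that the row count taken from Dim was inserted into Fact.
SELECT RowCountValue
FROM dbo.Fact;
```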

