
Load JSON Format Data into SQL Server

Hi All,
When we pull data from a website or call an API, we often end up with JSON data.

To load that JSON data into a database, we need to break it into columns. In this post we will see how to parse the JSON using C# (with the Newtonsoft.Json library).

Consider the JSON data below:

{
   "requestId":"299f#150942d9c47",
   "result":[ 
      { 
         "id":10357,
         "leadId":3032,
         "activityDate":"2015-06-16T20:20:36Z",
         "activityTypeId":1,
         "primaryAttributeValueId":26,
         "primaryAttributeValue":"eos-celebration",
         "attributes":[ 
            { 
               "name":"Client IP Address",
               "value":"132.197.180.58"
            },
            { 
               "name":"Query Parameters",
               "value":""
            },
            { 
               "name":"Referrer URL",
               "value":""
            },
            { 
               "name":"User Agent",
               "value":"Mozilla/5.0 (Windows NT 6.1; WOW64)"
            }
         ]
      }
   ]
}

The JSON can be split into two sections. The first covers the top-level fields of each result element ("id", "leadId", "activityDate", and so on), where extracting the data is straightforward.

The second section is trickier: each element also has a nested array called "attributes", and every entry in "attributes" in turn has its own keys, "name" and "value", so we need an inner loop to flatten them.
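As a side note, with Newtonsoft.Json (the library that provides JObject.Parse used in the code below), the repeated null checks can be written more compactly using C#'s null-conditional operator. A minimal, self-contained sketch of walking the nested "attributes" array, using a trimmed-down version of the sample JSON (the variable names here are illustrative, not the author's):

```csharp
using System;
using Newtonsoft.Json.Linq;

class NestedJsonDemo
{
    static void Main()
    {
        // Trimmed-down version of the sample JSON above
        string json = @"{ ""result"": [ { ""id"": 10357, ""leadId"": 3032,
            ""attributes"": [ { ""name"": ""User Agent"", ""value"": ""Mozilla/5.0"" } ] } ] }";

        JObject obj = JObject.Parse(json);

        foreach (JObject element in obj["result"] ?? new JArray())
        {
            // ?[] and ?. return null instead of throwing when a key is missing
            string id = element["id"]?.ToString() ?? string.Empty;

            // Inner loop flattens the nested "attributes" array
            foreach (JObject attr in element["attributes"] ?? new JArray())
            {
                string name  = attr["name"]?.ToString() ?? string.Empty;
                string value = attr["value"]?.ToString() ?? string.Empty;
                Console.WriteLine($"{id}: {name} = {value}");
            }
        }
    }
}
```

The `?? new JArray()` fallback lets the foreach run zero times when the array is absent, which replaces the explicit `!= null` guard.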

Please follow the inline comments in the code below to understand each step:

C# Code:
String activityResult = pgm.getData(); // Data returned from the API call
var objActivityResult = JObject.Parse(activityResult); // Parse the response into a JSON object

if (objActivityResult["result"] != null)
{
    foreach (JObject activityElement in objActivityResult["result"])
    {
        // Straightforward part: top-level fields of each result element
        activityID = activityElement["id"].ToString();
        activityLeadID = activityElement["leadId"].ToString();
        activityDate = activityElement["activityDate"] != null ? activityElement["activityDate"].ToString() : string.Empty;
        activityPrimaryAttributeValueId = activityElement["primaryAttributeValueId"] != null ? activityElement["primaryAttributeValueId"].ToString() : string.Empty;
        activityPrimaryattributeValue = activityElement["primaryAttributeValue"] != null ? activityElement["primaryAttributeValue"].ToString() : string.Empty;

        // Tricky part: walk the nested "attributes" array under each element
        if (activityElement["attributes"] != null && activityElement["attributes"].Count() != 0)
        {
            foreach (JObject activityAttributeData in activityElement["attributes"])
            {
                activityAttributeName = activityAttributeData["name"] != null ? activityAttributeData["name"].ToString() : string.Empty;
                activityAttributeValue = activityAttributeData["value"] != null ? activityAttributeData["value"].ToString() : string.Empty;

                if (Connection.State == ConnectionState.Closed)
                    Connection.Open();

                string cmdText = @"<<INSERT INTO Tabl>>"; // placeholder for the actual INSERT statement
                SqlCommand STG_MKT_Activity = new SqlCommand(cmdText, Connection);
                STG_MKT_Activity.ExecuteNonQuery();
                cntActivity = cntActivity + 1; // running count of rows inserted
                Connection.Close();
                STG_MKT_Activity.Dispose();
            }
        }
    }
}
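One caution about the INSERT step: building the SQL text by concatenating the parsed JSON values into the string would expose the load to SQL injection and quoting problems. A safer sketch using SqlParameter is shown below; the table and column names are assumptions for illustration, not the actual schema:

```csharp
using System.Data.SqlClient;

static class ActivityLoader
{
    // Hypothetical table/column names; substitute your actual staging table.
    static void InsertAttribute(SqlConnection connection, string activityId,
                                string leadId, string name, string value)
    {
        string cmdText = @"INSERT INTO STG_MKT_Activity
                               (ActivityID, LeadID, AttributeName, AttributeValue)
                           VALUES (@id, @leadId, @name, @value)";

        using (SqlCommand cmd = new SqlCommand(cmdText, connection))
        {
            // Parameters are sent separately from the SQL text,
            // so quotes in the JSON values cannot break the statement.
            cmd.Parameters.AddWithValue("@id", activityId);
            cmd.Parameters.AddWithValue("@leadId", leadId);
            cmd.Parameters.AddWithValue("@name", name);
            cmd.Parameters.AddWithValue("@value", value);
            cmd.ExecuteNonQuery();
        } // the using block disposes the command for us
    }
}
```

Opening the connection once before the outer foreach and closing it after, instead of per row, would also reduce overhead when the "result" array is large.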

