
Restore AdventureWorks2012 Database MDF file and commonly encountered issues

We can restore the AdventureWorks2012 database onto SQL Server 2012 with a simple T-SQL query. The MDF file for SQL Server 2012 can be downloaded from the following link.

CREATE DATABASE AdventureWorks2012 ON (FILENAME = 'C:\dhinakaran\acer\AdventureWorks2012_Data.mdf')
FOR ATTACH_REBUILD_LOG;


Here 'C:\dhinakaran\acer\AdventureWorks2012_Data.mdf' is the path to my MDF file. The log file is also available on the Microsoft site, but if it is not found, no worries: with FOR ATTACH_REBUILD_LOG this query will simply start with a new log file.
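If you did also download the matching log file, you can attach with both files instead of rebuilding the log. The sketch below assumes the log file is saved next to the MDF as AdventureWorks2012_log.ldf; adjust the path to wherever your copy actually lives.

-- Attach using the existing log file instead of rebuilding one
CREATE DATABASE AdventureWorks2012
ON (FILENAME = 'C:\dhinakaran\acer\AdventureWorks2012_Data.mdf')      -- data file path (same as above)
LOG ON (FILENAME = 'C:\dhinakaran\acer\AdventureWorks2012_log.ldf')   -- assumed log file path
FOR ATTACH;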


While restoring AdventureWorks2012, we may come across some errors. The most common one is:

Msg 5133, Level 16, State 1, Line 1
Directory lookup for the file 'C:\dhinakaran\acer\AdventureWorks2012_Data.mdf' failed with the operating system error 5 (Access is denied.).
Solution: Right-click the MDF file and select Properties. Go to the Security tab and click Edit.
Grant Full Control, click Apply, and rerun the above SQL query.
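After granting permission and rerunning the CREATE DATABASE statement, a quick catalog query (plain sys.databases, nothing specific to this setup) confirms the database actually attached and is online:

-- Verify the database is attached and ONLINE
SELECT name, state_desc
FROM sys.databases
WHERE name = 'AdventureWorks2012';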





Another common issue is:

Msg 1813, Level 16, State 2, Line 1
Could not open new database 'AdventureWorks2012'. CREATE DATABASE is aborted.
Msg 948, Level 20, State 1, Line 1
The database 'AdventureWorks2012' cannot be opened because it is version 705. This server supports version 655 and earlier. A downgrade path is not supported.


This error means the MDF was created by a newer version of SQL Server than the instance it is being attached to, so make sure you are connecting to the SQL Server 2012 instance. It can be solved by moving your MDF file to C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA and running Management Studio as Administrator.
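If you are not sure which instance Management Studio is actually connected to, the standard SERVERPROPERTY calls below will tell you; a ProductVersion beginning with 11. indicates SQL Server 2012, which is what a version-705 file needs.

-- Check which instance and product version the current connection is using
SELECT SERVERPROPERTY('ServerName')     AS ServerName,
       SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('Edition')        AS Edition;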

