
Facts and Fact Tables - Data Warehouse Fundamentals - Part 5

Types of fact tables
Fact tables are of two types:
  1. CFT (Cumulative fact table)
  2. SFT (Snapshot fact table)


Cumulative fact table
A cumulative fact table is loaded based on time: a new row is appended for every event as it occurs, so the measures accumulate over time (for example, one row per sales transaction).

Snapshot fact table
A snapshot fact table is loaded based on the client requirement: it captures the state of the measures as of a point in time (for example, the closing balance of every account at the end of each day).
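The two loading styles above can be sketched in plain Python (all table names, IDs, and amounts below are hypothetical, chosen only to illustrate the difference): a cumulative load appends one row per event, while a snapshot load records one row per entity per snapshot date.

```python
from datetime import date

# Cumulative fact table: one row is appended per event, so it grows with time.
cumulative_sales = []

def load_sale(sale_date, product_id, amount):
    """Append every sale as it happens (cumulative load)."""
    cumulative_sales.append(
        {"date": sale_date, "product_id": product_id, "amount": amount}
    )

# Snapshot fact table: one row per (snapshot date, account),
# capturing the state of the balance at that point in time.
balance_snapshots = {}

def take_snapshot(snapshot_date, account_id, balance):
    """Record the balance as of the snapshot date (snapshot load)."""
    balance_snapshots[(snapshot_date, account_id)] = balance

load_sale(date(2023, 6, 1), 22, 25_000)
load_sale(date(2023, 6, 2), 56, 60_000)
take_snapshot(date(2023, 6, 1), "A1", 1_000)
take_snapshot(date(2023, 6, 2), "A1", 1_500)  # new snapshot; old one is kept
```

Note the design difference: the cumulative table only ever grows, while each snapshot row freezes the state as of its date.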

Types of facts

Fact: A fact is a numeric value (a measure); the analysis of the data is based on these numeric values.
There are three types of facts. They are as follows:
  1. Additive fact
  2. Semi-additive fact
  3. Non-additive fact


Additive fact: - If a fact can be summed across all the dimensions of the fact table (for example, revenue can be totalled by product, by location, and by time), then such a fact is called an additive fact.

Semi-additive fact: - If a fact can be summed across only some of the dimensions, then it is called a semi-additive fact.
Example: an account balance at a bank. Balances can be summed across accounts for a given day, but summing one account's daily balances over a month does not give a meaningful figure.
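The bank-balance example can be checked in a few lines of Python (the accounts and amounts below are made up): summing across the account dimension on one day is valid, while summing the same account across the time dimension is not.

```python
# Daily closing balances keyed by (day, account). Figures are illustrative.
balances = {
    ("2023-06-01", "savings"): 1_000,
    ("2023-06-01", "current"): 2_000,
    ("2023-06-02", "savings"): 1_000,
    ("2023-06-02", "current"): 2_000,
}

# Summing across the account dimension (one day) is meaningful:
total_on_day1 = sum(
    v for (day, _), v in balances.items() if day == "2023-06-01"
)  # the bank really held this much across both accounts that day

# Summing across the time dimension (one account) is NOT meaningful:
savings_over_days = sum(
    v for (_, acct), v in balances.items() if acct == "savings"
)  # the account never held this total; balance is semi-additive
```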
Non-additive fact: - If a fact cannot be summed across any dimension, then it is called a non-additive fact.


REVENUE   PROFIT   PROFIT PERCENTAGE
20,000    2,000    -----------
40,000    4,000    10%

Here revenue and profit are additive facts, and profit percentage, which is also a fact, is calculated from those two (profit divided by revenue).
Percentages cannot be meaningfully summed across rows, which is why profit percentage is non-additive.
Some more examples of non-additive facts are ratios, percentages, and unit prices.
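The table above can be verified in a few lines of Python: revenue and profit add up across rows, but the per-row percentages must not be summed; the correct overall percentage has to be recomputed from the additive facts.

```python
# Rows taken from the revenue/profit table above.
rows = [
    {"revenue": 20_000, "profit": 2_000},
    {"revenue": 40_000, "profit": 4_000},
]

# Additive facts: summing across rows is meaningful.
total_revenue = sum(r["revenue"] for r in rows)
total_profit = sum(r["profit"] for r in rows)

# Non-additive fact: summing per-row percentages gives nonsense.
per_row_pct = [100 * r["profit"] / r["revenue"] for r in rows]
wrong_total = sum(per_row_pct)  # adds two percentages -- meaningless

# The correct overall percentage is recomputed from the additive facts.
overall_pct = 100 * total_profit / total_revenue
```

This is exactly why warehouses store the additive facts (revenue, profit) and derive ratios at query time rather than storing and summing them.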


Factless fact: - If a fact table does not contain any facts (it holds only dimension keys), then it is called a factless fact table.

SLNO   PID    LID    TID    CID    REVENUE   PROFIT
1      22     356    459    16     25,000    6,000
2      56     45     546    75     60,000    4,500
...    ...    ...    ...    ...    ...       ...
71     -----  -----  400    -----  -----     -----    (TID 400 represents June 4th)
72     52     56     985    21     75,321    9,000


From the table we can see that no sale happened on June 4th (TID 400), since all the measures in that row are empty. Analysis is then done on why no sale happened on that day.

Factless fact tables are basically used for negative analysis (analysing events that did not occur).
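A minimal sketch of that negative analysis in Python, echoing the example table above (the row numbers, TIDs, and amounts are the same illustrative values): the rows with no measures reveal the days on which nothing happened.

```python
# Fact rows mirroring the table above; None marks a missing measure.
fact_rows = [
    {"slno": 1,  "tid": 459, "revenue": 25_000, "profit": 6_000},
    {"slno": 2,  "tid": 546, "revenue": 60_000, "profit": 4_500},
    {"slno": 71, "tid": 400, "revenue": None,   "profit": None},  # June 4th
    {"slno": 72, "tid": 985, "revenue": 75_321, "profit": 9_000},
]

# Negative analysis: collect the time keys that have no sales recorded.
no_sale_tids = {r["tid"] for r in fact_rows if r["revenue"] is None}
# TID 400 (June 4th) is the day with no sale; it can now be investigated.
```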

