Data Warehouse Basics



“A Data Warehouse Basics guide for job interview preparation. Explore a list of frequently asked questions (FAQs) from Data Warehousing basics interviews, with answers contributed by readers.”



37 Data Warehouse Basics Questions And Answers

21⟩ What are the methodologies of data warehousing?

Every company has a methodology of its own, but to name a few, the SDLC methodology and the AIM methodology are commonly used. Other methodologies are AMM, World Class methodology and many more.

There are only two fundamental approaches to building a data warehouse:

1. Top-down

2. Bottom-up


22⟩ Explain the difference between the snowflake and star schemas. In what situations is a snowflake schema better than a star schema, and when is the opposite true?

A star schema contains dimension tables mapped around one or more fact tables.

It is a denormalised model.

There is no need for complicated joins.

Queries return results quickly.

Snowflake schema

It is the normalised form of the star schema.

It contains deeper joins because the tables are split into many pieces, but modifications can easily be made directly in the tables.

Complicated joins have to be used, since there are more tables.

There will be some delay in processing the query.

A star schema looks like a centrally located fact table surrounded by dimension tables; it looks like a star, which is why people call it a star schema. In a star schema the dimension tables are denormalised, while the fact table is normalised.

In a snowflake schema the dimension tables are split into one or more further tables, so the dimension tables are normalised and occupy a bit less space. This means more joins, so query performance degrades.

Whether a star or snowflake schema is used depends on the client's requirement, i.e. whether the client wants the data normalised or denormalised.
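To make the contrast concrete, here is a minimal sketch using Python's built-in sqlite3 module; the table and column names (dim_product, fact_sales, dim_category, etc.) are invented for illustration. The star version answers a sales-by-category question with a single join, while the snowflake version normalises the category into its own table and needs one more join.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Star schema: one denormalised dimension joined directly to the fact table.
con.executescript("""
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY,
                          product_name TEXT, category_name TEXT);
CREATE TABLE fact_sales  (product_key INTEGER, amount REAL);
INSERT INTO dim_product VALUES (1, 'Pen', 'Stationery');
INSERT INTO fact_sales  VALUES (1, 9.5), (1, 3.0);
""")
star = con.execute("""
    SELECT d.category_name, SUM(f.amount)
    FROM fact_sales f JOIN dim_product d USING (product_key)
    GROUP BY d.category_name
""").fetchall()

# Snowflake schema: the category is normalised into its own table,
# so the same question needs one more join.
con.executescript("""
CREATE TABLE dim_category (category_key INTEGER PRIMARY KEY, category_name TEXT);
CREATE TABLE dim_product_sf (product_key INTEGER PRIMARY KEY,
                             product_name TEXT, category_key INTEGER);
INSERT INTO dim_category   VALUES (10, 'Stationery');
INSERT INTO dim_product_sf VALUES (1, 'Pen', 10);
""")
snowflake = con.execute("""
    SELECT c.category_name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product_sf p USING (product_key)
    JOIN dim_category   c USING (category_key)
    GROUP BY c.category_name
""").fetchall()

print(star, snowflake)   # both return [('Stationery', 12.5)]
```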


24⟩ Explain clearly how to describe any (sales) project in an interview. Where does a report developer's work actually start?

If you are a report developer:

1. Specify the front-end and back-end tools used for creating the reports.

2. State the purpose of the project and what you intend to achieve with the reports.

3. Explain the back-end part, which is important. For example, describe which facts and dimensions are going to be used.

4. Once the facts and dimensions are identified, you might restructure them using views. You also have to decide which schema you are going to use, whether a snowflake schema or a star schema.

5. Once the schema has been finalised, you might include certain KPIs (Key Performance Indicators).

6. Once the cube is ready, move on to deploying it.

7. After the deployment has completed successfully, use your front-end tool, such as ProClarity, and feed the cube into it.

8. Once the cube has been fed into ProClarity, you can create various reports that help the business. In my case, I identified the top 10 customers by sales and the most successful sales periods over time using trend analysis, and used a ranking KPI to rank the customers (see the sketch below).
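As a rough illustration of step 8, a "top 10 customers by sales" ranking of the kind mentioned above could be prototyped in plain Python before being built as a KPI in the reporting tool; the customer names and amounts below are invented sample data.

```python
from collections import defaultdict

# Toy fact rows: (customer, sale_amount) -- invented sample data.
sales_facts = [("Acme", 1200.0), ("Globex", 800.0), ("Acme", 450.0),
               ("Initech", 300.0), ("Globex", 150.0)]

totals = defaultdict(float)
for customer, amount in sales_facts:
    totals[customer] += amount            # aggregate sales per customer

# Rank customers by total sales, highest first, and keep the top 10.
top_customers = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:10]
print(top_customers)   # [('Acme', 1650.0), ('Globex', 950.0), ('Initech', 300.0)]
```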


25⟩ Explain VLDB (Very Large Database).

The perception of what constitutes a VLDB continues to grow. A one terabyte database would normally be considered to be a VLDB.


26⟩ Explain a linked cube?

A linked cube is one in which a subset of the data can be analysed in great detail. The linking ensures that the data in the cubes remains consistent.


27⟩ What is the main difference between the Inmon and Kimball philosophies of data warehousing?

The two differ in their concept of building the data warehouse.

According to Kimball:

Kimball views data warehousing as a constituency of data marts. Data marts are focused on delivering business objectives for departments in the organization, and the data warehouse is a conformed dimension of the data marts. Hence a unified view of the enterprise can be obtained from dimension modeling at a local, departmental level.

Inmon believes in creating a data warehouse on a subject-area-by-subject-area basis. Hence the development of the data warehouse can start with data from the online store, and other subject areas can be added to the data warehouse as the need arises. Point-of-sale (POS) data can be added later if management decides it is necessary.

In short:

Kimball: data marts first, then combined into a data warehouse.

Inmon: data warehouse first, data marts later.


28⟩ What is the main difference between a schema in an RDBMS and schemas in a data warehouse?

RDBMS schema

* Used for OLTP systems

* Traditional, older style of schema

* Normalized

* Difficult to understand and navigate

* Cannot easily answer extraction-heavy and complex analytical queries

* Poorly suited to analytical modelling

DWH schema

* Used for OLAP systems

* Newer generation of schema

* Denormalized

* Easy to understand and navigate

* Extraction-heavy and complex analytical queries can be answered easily

* Well suited to analytical modelling


29⟩ What is the datatype of the surrogate key?

The datatype of a surrogate key is either integer or numeric. It is always generated by the system, because the surrogate key works as a primary key. A surrogate key helps us distinguish records and store data history.


30⟩ What is incremental loading? What is batch processing? What is a cross-reference table? What is an aggregate fact table?

Incremental loading means loading only the ongoing changes from the OLTP system.

An aggregate table contains measure values aggregated (grouped/summed) up to some level of the hierarchy.

Batch processing means executing more than one session in a single run. These sessions can be executed in two ways: linear (executing one after another) or parallel (executing more than one session at the same time).
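A minimal Python sketch of these two ideas, assuming invented sample rows and using threads only to illustrate the linear/parallel distinction (this is not any specific ETL tool's API):

```python
from datetime import datetime
from threading import Thread

# --- Incremental loading: pick up only rows changed since the last load. ---
last_load = datetime(2024, 1, 1)
source_rows = [
    {"id": 1, "amount": 10.0, "updated": datetime(2023, 12, 30)},  # already loaded
    {"id": 2, "amount": 25.0, "updated": datetime(2024, 1, 5)},    # new change
]
delta = [r for r in source_rows if r["updated"] > last_load]
print("rows to load incrementally:", delta)

# --- Batch processing: run several "sessions" linearly or in parallel. ---
def session(name):
    print("running session", name)

# Linear: one after another.
for s in ("load_customers", "load_sales"):
    session(s)

# Parallel: more than one session at the same time.
threads = [Thread(target=session, args=(s,)) for s in ("load_customers", "load_sales")]
for t in threads: t.start()
for t in threads: t.join()
```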


32⟩ What are static and local variables?

A static variable is not created on the function stack but in the initialized data segment, so the variable can be shared across multiple calls of the same function. Use of static variables within a function is not thread-safe.

A local (auto) variable, on the other hand, is created on the function stack, is valid only in the context of that function call, and is not shared across function calls.
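Python has no C-style static keyword, but the same distinction can be sketched with a module-level variable that persists across calls versus a local variable that is recreated on each call (a loose analogue, not the C mechanism itself):

```python
# Module-level variable: persists across calls, a loose analogue of a C static variable.
call_count = 0

def visit():
    global call_count
    call_count += 1      # shared state, survives between calls
    local_count = 1      # recreated on every call, like a C auto/local variable
    return call_count, local_count

print(visit())   # (1, 1)
print(visit())   # (2, 1) -> the shared counter kept its value, the local one did not
```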


33⟩ What is metadata in the context of a data warehouse, and why is it important?

Metadata is data about data. A business analyst or data modeler usually captures information about the data in a data dictionary (a.k.a. metadata): the source (where and how the data originated), the nature of the data (char, varchar, nullable, existence, valid values, etc.), and the behaviour of the data (how it is modified or derived, and its life cycle). Metadata is also present at the data mart level, for subsets, facts and dimensions, the ODS, etc. For a DW user, metadata provides vital information for analysis / DSS.
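For illustration only, a single data-dictionary entry of the kind described above could be recorded like this; the field names are an assumption, not a standard:

```python
# One hypothetical data-dictionary (metadata) entry for a source column.
customer_name_metadata = {
    "source": "CRM system, table CUSTOMER, extracted nightly",        # where/how it originates
    "nature": {"type": "varchar(100)", "nullable": False,
               "valid_values": "free text"},                           # what the data looks like
    "behavior": "trimmed and title-cased in ETL; retained for 7 years",  # how it is derived / life cycle
}
print(customer_name_metadata["nature"]["type"])
```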


34⟩ What is a surrogate key? Where do we use it? Explain with an example.

A surrogate key is a substitute for the natural primary key.

It is just a unique identifier or number for each row that can be used as the primary key of the table. The only requirement for a surrogate primary key is that it is unique for each row in the table.

Data warehouses typically use a surrogate key (also known as an artificial or identity key) for the dimension tables' primary keys. They can use an Informatica sequence generator, an Oracle sequence, or SQL Server identity values for the surrogate key.

It is useful because the natural primary key (e.g. Customer Number in the Customer table) can change, and this makes updates more difficult.

Some tables have columns such as AIRPORT_NAME or CITY_NAME which are stated as the primary keys (according to the business users), but not only can these change, indexing on a numerical value is probably better, and you could consider creating a surrogate key called, say, AIRPORT_ID. This would be internal to the system, and as far as the client is concerned you may display only the AIRPORT_NAME.

(Adapted from a response by Vincent, Thursday, March 13, 2003.)

Another benefit of surrogate keys (SIDs) is tracking Slowly Changing Dimensions (SCDs).

Let me give you a simple, classical example:

On the 1st of January 2002, Employee 'E1' belongs to Business Unit 'BU1' (that is what would be in your Employee dimension), and this employee has turnover allocated to him in Business Unit 'BU1'. But on the 2nd of June, Employee 'E1' is moved from Business Unit 'BU1' to Business Unit 'BU2'. All new turnover has to belong to the new Business Unit 'BU2', but the old turnover should belong to Business Unit 'BU1'.

If you used the natural business key 'E1' for your employee within your data warehouse, everything would be allocated to Business Unit 'BU2', even what actually belongs to 'BU1'.

If you use surrogate keys, you could create a new record for Employee 'E1' in your Employee dimension on the 2nd of June, with a new surrogate key.

This way, in your fact table, your old data (before the 2nd of June) carries the SID of Employee 'E1' + 'BU1', and all new data (after the 2nd of June) takes the SID of Employee 'E1' + 'BU2'.

You could consider a Slowly Changing Dimension as an enlargement of your natural key: the natural key of the employee was Employee Code 'E1', but it becomes Employee Code + Business Unit, i.e. 'E1' + 'BU1' or 'E1' + 'BU2'. The difference from the natural-key enlargement process is that you might not have all parts of the new key within your fact table, so you might not be able to join on the new enlarged key; hence you need another ID.

In short, a surrogate key is a system-generated sequential number which acts as a primary key.
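Here is a minimal Python sketch of the 'E1' example above using surrogate keys for a Slowly Changing Dimension; the row layout and helper function are illustrative assumptions. Each business-unit change gets a new surrogate key, and fact rows reference whichever surrogate key was valid at the time.

```python
from datetime import date

# Employee dimension with surrogate keys (SIDs); the natural key 'E1' repeats.
employee_dim = [
    {"sid": 101, "emp_code": "E1", "business_unit": "BU1",
     "valid_from": date(2002, 1, 1), "valid_to": date(2002, 6, 1)},
    {"sid": 102, "emp_code": "E1", "business_unit": "BU2",
     "valid_from": date(2002, 6, 2), "valid_to": None},          # current row
]

def surrogate_for(emp_code, on_date):
    """Return the SID that was valid for this employee on a given date."""
    for row in employee_dim:
        if (row["emp_code"] == emp_code
                and row["valid_from"] <= on_date
                and (row["valid_to"] is None or on_date <= row["valid_to"])):
            return row["sid"]

# Fact rows store the SID, so old turnover stays with BU1 and new turnover with BU2.
fact_turnover = [
    {"emp_sid": surrogate_for("E1", date(2002, 3, 15)), "amount": 500.0},  # -> 101 (BU1)
    {"emp_sid": surrogate_for("E1", date(2002, 7, 10)), "amount": 700.0},  # -> 102 (BU2)
]
print(fact_turnover)
```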


35⟩ What are data marts?

A data mart is a collection of tables focused on a specific business group or department. It may be multidimensional or normalized. Data marts are usually built from a bigger data warehouse or from operational data.
