Data modelling is the process of creating a model for the data to be stored in a database. It is a conceptual representation of data objects, the associations between different data objects, and the rules that govern them.
Conceptual: The conceptual data model defines what the system should contain. This model is typically created by business stakeholders and data architects. Its purpose is to organize, scope, and define business concepts and rules.
Logical: The logical data model defines how the system should be implemented, independent of any specific DBMS. This model is typically created by data architects and business analysts. Its purpose is to develop a technical map of rules and data structures.
Physical: The physical data model describes how the system will be implemented using a specific DBMS. This model is typically created by DBAs and developers. Its purpose is the actual implementation of the database.
A fact represents quantitative data, for example the net amount that is due. A fact table contains this numerical data as well as foreign keys from dimension tables.
There are two types of data modelling schemas: 1) star schema and 2) snowflake schema.
Denormalization is used when retrieving data would otherwise involve joining many tables; redundant data is deliberately introduced to speed up reads. It is commonly used to construct a data warehouse.
Dimensions represent qualitative data, for example product, class, and plan. A dimension table has textual or descriptive attributes; for example, product category and product name are two attributes of the product dimension table.
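To make the fact/dimension distinction concrete, here is a minimal star-schema sketch using Python's built-in sqlite3 module. The table and column names (dim_product, fact_sales, net_amount, and so on) are illustrative assumptions, not part of the original text.

```python
import sqlite3

# In-memory database for a minimal, illustrative star schema.
conn = sqlite3.connect(":memory:")

# Dimension table: qualitative, descriptive attributes.
conn.execute("""
    CREATE TABLE dim_product (
        product_key  INTEGER PRIMARY KEY,
        product_name TEXT,
        category     TEXT
    )
""")

# Fact table: numeric measures plus foreign keys to dimensions.
conn.execute("""
    CREATE TABLE fact_sales (
        sale_id     INTEGER PRIMARY KEY,
        product_key INTEGER REFERENCES dim_product(product_key),
        quantity    INTEGER,
        net_amount  REAL
    )
""")

conn.execute("INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware')")
conn.execute("INSERT INTO fact_sales VALUES (1, 1, 3, 29.97)")

# A typical star-schema query joins the fact table to its dimensions.
row = conn.execute("""
    SELECT p.product_name, SUM(f.net_amount)
    FROM fact_sales f JOIN dim_product p USING (product_key)
    GROUP BY p.product_name
""").fetchone()
print(row)  # ('Widget', 29.97)
```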
A factless fact table is a fact table that has no fact measurements; it contains only the dimension keys.
A collection of rows and columns is called a table. Each column has a datatype. A table contains related data in a tabular format.
Data sparsity is a term that describes how much data you have for a given entity or dimension of the model.
A composite primary key is a primary key in which more than one table column is used as part of the key.
A primary key is a column or group of columns that uniquely identifies each row in the table. The value of a primary key must not be null. Every table must contain one primary key.
A foreign key is a group of attributes used to link parent and child tables. The value of the foreign key column in the child table refers to the value of the primary key in the parent table.
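These key concepts can be illustrated with a short, self-contained sketch. The customer and order_line tables below are hypothetical examples showing a single-column primary key, a composite primary key, and a foreign key, again using Python's sqlite3 module.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

# Parent table with a single-column primary key.
conn.execute("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,   -- must be unique and not null
        name        TEXT NOT NULL
    )
""")

# Child table with a composite primary key and a foreign key
# referencing the parent table's primary key.
conn.execute("""
    CREATE TABLE order_line (
        order_id    INTEGER,
        line_no     INTEGER,
        customer_id INTEGER REFERENCES customer(customer_id),
        PRIMARY KEY (order_id, line_no)    -- composite primary key
    )
""")

conn.execute("INSERT INTO customer VALUES (1, 'Acme')")
conn.execute("INSERT INTO order_line VALUES (100, 1, 1)")

# Inserting a child row that points at a missing parent now fails.
try:
    conn.execute("INSERT INTO order_line VALUES (101, 1, 999)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```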
Metadata is data that describes other data. It shows what type of data is actually stored in the database system.
A data mart is a condensed version of a data warehouse and is designed for use by a specific department, unit, or set of users in an organization, e.g., marketing, sales, HR, or finance.
Online transaction processing (OLTP) supports transaction-oriented applications, typically in a 3-tier architecture. OLTP administers the day-to-day transactions of a company or organization.
The types of normalization are: 1) first normal form, 2) second normal form, 3) third normal form, 4) Boyce-Codd normal form, 5) fourth normal form, and 6) fifth normal form.
Forward engineering is a technical term that describes the process of automatically translating a logical model into a physical implementation.
PDAP is a data cube that stores data as a summary. It helps the user analyse data quickly. The data in PDAP is stored in a way that makes reporting easy.
A snowflake schema is an arrangement of dimension tables and a fact table in which the dimension tables are further broken down into additional dimension tables.
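As a rough illustration, the sketch below snowflakes the product dimension from the earlier star-schema example by normalizing its category attribute into a separate table; all names are again hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# In a snowflake schema the product dimension is normalized:
# the category attribute moves into its own table.
conn.executescript("""
    CREATE TABLE dim_category (
        category_key  INTEGER PRIMARY KEY,
        category_name TEXT
    );
    CREATE TABLE dim_product (
        product_key  INTEGER PRIMARY KEY,
        product_name TEXT,
        category_key INTEGER REFERENCES dim_category(category_key)
    );
    CREATE TABLE fact_sales (
        product_key INTEGER REFERENCES dim_product(product_key),
        net_amount  REAL
    );
    INSERT INTO dim_category VALUES (10, 'Hardware');
    INSERT INTO dim_product  VALUES (1, 'Widget', 10);
    INSERT INTO fact_sales   VALUES (1, 29.97);
""")

# Queries now need an extra join compared to a star schema.
print(conn.execute("""
    SELECT c.category_name, SUM(f.net_amount)
    FROM fact_sales f
    JOIN dim_product  p USING (product_key)
    JOIN dim_category c USING (category_key)
    GROUP BY c.category_name
""").fetchall())  # [('Hardware', 29.97)]
```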
An analysis service gives a combined view of the data and is used in data mining or OLAP.
A sequence clustering algorithm groups together paths that are similar or related to each other, i.e., sequences of data containing events.
Discrete data is finite or defined data, e.g., gender or telephone numbers. Continuous data is data that changes in a continuous and ordered manner, e.g., age.
A time series algorithm is a method for predicting continuous values of data over time, e.g., the past performance of one employee can be used to forecast profit or influence.
BI (Business Intelligence) is a set of processes, architectures, and technologies that convert raw data into meaningful information that drives profitable business actions. It is a suite of software and services to transform data into actionable intelligence and knowledge.
Bitmap indexes are a special type of database index that uses bitmaps (bit arrays) to answer queries by executing bitwise operations.
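The following toy sketch shows the idea behind a bitmap index in plain Python: one bit array per distinct column value, with a bitwise AND answering a two-condition query. It is a simplified model of the technique, not any specific database's implementation.

```python
# Toy bitmap index: one bit array per distinct value of a column.
# Row i of the table corresponds to bit i of each bitmap.
rows = [
    {"gender": "F", "region": "EU"},
    {"gender": "M", "region": "US"},
    {"gender": "F", "region": "US"},
    {"gender": "F", "region": "EU"},
]

def build_bitmap(rows, column, value):
    """Return an int whose i-th bit is set when rows[i][column] == value."""
    bitmap = 0
    for i, row in enumerate(rows):
        if row[column] == value:
            bitmap |= 1 << i
    return bitmap

female = build_bitmap(rows, "gender", "F")   # 0b1101
in_eu  = build_bitmap(rows, "region", "EU")  # 0b1001

# "gender = 'F' AND region = 'EU'" becomes a single bitwise AND.
matches = female & in_eu
print([i for i in range(len(rows)) if matches >> i & 1])  # [0, 3]
```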
Data warehousing is a process for collecting and managing data from varied sources. It provides meaningful business enterprise insights. Data warehousing is typically used to connect and analyse data from heterogeneous sources. It is the core of the BI system, which is built for data analysis and reporting.
A junk dimension combines two or more related low-cardinality attributes into one dimension. These are usually Boolean or flag values.
Data collection frequency is the rate at which data is collected. The data also passes through various stages: 1) extracting from various sources, 2) transforming, 3) cleansing, and 4) storing.
A Critical Success Factor is a favorable result of an activity that is needed for an organization to reach its goal.
Data mining is a multi-disciplinary skill that uses machine learning, statistics, AI, and database technology. It is all about discovering unsuspected / previously unknown relationships amongst the data.
An identifying relationship in a DBMS is a relationship between two entities, a strong entity and a weak entity, in which the weak entity cannot be identified without the strong entity.
A recursive relationship occurs when a standalone column in a table is connected to the primary key of the same table.
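A common example is an employee table whose manager_id column references the employee_id of the same table. The sketch below (hypothetical names) shows the self-referencing foreign key and the self-join that resolves it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# manager_id refers to the primary key of the *same* table.
conn.executescript("""
    CREATE TABLE employee (
        employee_id INTEGER PRIMARY KEY,
        name        TEXT,
        manager_id  INTEGER REFERENCES employee(employee_id)
    );
    INSERT INTO employee VALUES (1, 'Ada',   NULL);  -- top of the hierarchy
    INSERT INTO employee VALUES (2, 'Grace', 1);
    INSERT INTO employee VALUES (3, 'Alan',  1);
""")

# A self-join resolves the recursive relationship.
for row in conn.execute("""
    SELECT e.name, m.name AS manager
    FROM employee e LEFT JOIN employee m
      ON e.manager_id = m.employee_id
"""):
    print(row)  # ('Ada', None), ('Grace', 'Ada'), ('Alan', 'Ada')
```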
Predictive modelling is the process of building and validating a model that will be used to predict outcomes. It is used in machine learning, artificial intelligence, and statistics.
Different types of constraints include unique, not-null, foreign key, composite key, and check constraints.
A data modelling tool is software that helps construct data flows and the relationships between data. Examples of such tools are Borland Together, Altova DatabaseSpy, Casewise, and Case Studio 2.
In the hierarchical database model, data is organized in a tree-like structure and stored in a hierarchical format. Data is represented using parent-child relationships: a parent may have many children, but each child has only one parent.
The process-driven approach to data modelling follows a step-by-step method that maps the relationship between the entity-relationship model and the organizational process.
Two types of data modelling techniques are: 1) entity-relationship (E-R) Model, and 2) UML (Unified Modelling Language).
UML (Unified Modelling Language) is a general-purpose modelling language used in database development and software engineering. Its main intention is to provide a generalized way to visualize system design.
The object-oriented database model is a collection of objects. These objects can have associated features as well as methods.
The network model is built on the hierarchical model. It allows more than one relationship to link records, which means a record can appear in multiple relationships. It is possible to construct sets of parent and child records, and each record can belong to multiple sets, enabling complex relationships between tables.
Hashing is a technique used to search index values and retrieve the desired data. It calculates the direct location of data records on disk without using an index structure.
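The toy sketch below illustrates the idea: the key's hash value directly determines which bucket holds the record, so a lookup inspects only one bucket instead of traversing an index. It is a simplified in-memory model, not how any particular database lays out disk pages.

```python
# Toy hash-based lookup: the key's hash directly determines the
# bucket (the "location on disk"), so no index tree is traversed.
NUM_BUCKETS = 8
buckets = [[] for _ in range(NUM_BUCKETS)]

def bucket_for(key):
    """Compute the record's location directly from the key."""
    return hash(key) % NUM_BUCKETS

def store(key, record):
    buckets[bucket_for(key)].append((key, record))

def lookup(key):
    # Only one bucket is scanned, regardless of table size.
    for k, record in buckets[bucket_for(key)]:
        if k == key:
            return record
    return None

store("client-42", {"name": "Acme"})
print(lookup("client-42"))  # {'name': 'Acme'}
```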
A business or natural key is a field that uniquely identifies an entity, for example a client ID, employee number, or email address.
When more than one field is used to represent a key, it is referred to as a compound key.
First normal form (1NF) is a property of a relation in a relational database management system. A relation is in first normal form if the domain of every attribute contains only atomic values, and each attribute holds a single value from that domain.
An artificial key that uniquely identifies each record is called a surrogate key. These keys are created when you don't have a natural primary key. They do not lend any meaning to the data in the table; a surrogate key is usually an integer.
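For illustration, the hypothetical sketch below lets SQLite generate integer surrogate keys while a natural key (an email address) is kept as a separate unique column.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# The surrogate key carries no business meaning; SQLite generates it.
conn.execute("""
    CREATE TABLE customer (
        customer_key INTEGER PRIMARY KEY AUTOINCREMENT,  -- surrogate key
        email        TEXT UNIQUE                         -- natural key
    )
""")
conn.execute("INSERT INTO customer (email) VALUES ('a@example.com')")
conn.execute("INSERT INTO customer (email) VALUES ('b@example.com')")
print(conn.execute("SELECT * FROM customer").fetchall())
# [(1, 'a@example.com'), (2, 'b@example.com')]
```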
An alternate key is a column or group of columns in a table that uniquely identifies every row in that table. A table can have multiple candidates for the primary key, but only one can be chosen; all the candidate keys that are not the primary key are called alternate keys.
Fourth normal form is a level of database normalization in which there must be no non-trivial multivalued dependencies other than on a candidate key.
A database management system (DBMS) is software for storing and retrieving user data. It consists of a group of programs that manipulate the database.
A table is in 5th normal form only if it is in 4th normal form, and it cannot be decomposed into any number of smaller tables without loss of data.
Normalization is a database design technique that organizes tables in a manner that reduces redundancy and dependency of data. It divides larger tables into smaller tables and links them using relationships.
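A minimal illustration of this idea, with hypothetical supplier and purchase_order tables: the supplier's city is stored once and referenced by many orders, so an update touches a single row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Denormalized, the supplier's city would be repeated on every order row,
# so updating a city would mean touching many rows (an update anomaly).
# Normalized, the data is split into two tables linked by a foreign key.
conn.executescript("""
    CREATE TABLE supplier (
        supplier_id INTEGER PRIMARY KEY,
        city        TEXT
    );
    CREATE TABLE purchase_order (
        order_id    INTEGER PRIMARY KEY,
        supplier_id INTEGER REFERENCES supplier(supplier_id),
        amount      REAL
    );
    INSERT INTO supplier VALUES (1, 'Berlin');
    INSERT INTO purchase_order VALUES (10, 1, 100.0);
    INSERT INTO purchase_order VALUES (11, 1, 250.0);
""")

# The city is stored once; a single UPDATE fixes it everywhere.
conn.execute("UPDATE supplier SET city = 'Munich' WHERE supplier_id = 1")
print(conn.execute("""
    SELECT o.order_id, s.city FROM purchase_order o
    JOIN supplier s USING (supplier_id)
""").fetchall())  # [(10, 'Munich'), (11, 'Munich')]
```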
A Relational Database Management System (RDBMS) is software used to store data in the form of tables. In this kind of system, data is managed and stored in rows and columns, known as tuples and attributes. RDBMS is a powerful data management system that is widely used across the world.
The aggregate table contains aggregated data that can be calculated using functions such as: 1) AVG, 2) MAX, 3) COUNT, 4) SUM, and 5) MIN.
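As a small illustration (with hypothetical fact_sales and agg_sales tables), an aggregate table can be materialized once with these functions so that reports avoid rescanning the detail rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE fact_sales (product TEXT, net_amount REAL);
    INSERT INTO fact_sales VALUES ('Widget', 10.0), ('Widget', 20.0),
                                  ('Gadget', 5.0);
""")

# Build the aggregate table once so reports avoid rescanning the fact table.
conn.execute("""
    CREATE TABLE agg_sales AS
    SELECT product,
           AVG(net_amount) AS avg_amount,
           MAX(net_amount) AS max_amount,
           COUNT(*)        AS order_count,
           SUM(net_amount) AS total_amount,
           MIN(net_amount) AS min_amount
    FROM fact_sales
    GROUP BY product
""")
print(conn.execute("SELECT * FROM agg_sales ORDER BY product").fetchall())
# [('Gadget', 5.0, 5.0, 1, 5.0, 5.0), ('Widget', 15.0, 20.0, 2, 30.0, 10.0)]
```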
A conformed dimension is a dimension which is designed in a way that can be used across many fact tables in various areas of a data warehouse.
XMLA (XML for Analysis) is considered the standard for accessing data in Online Analytical Processing (OLAP).
A junk dimension also helps store miscellaneous data; it is used when data is not appropriate to store in the main schema tables.
When a secondary node selects its replication target based on ping time, or when the closest node is itself a secondary, that secondary replicates from another secondary rather than from the primary; this is called chained data replication.
A virtual data warehouse gives a collective view of the complete data. A virtual data warehouse does not store historical data; it can be considered a logical data model containing metadata.
The ability of a system to extract, cleanse, and transfer data in two directions is called a bidirectional extract.
Learn About Data Vault 2.0, Part 1
Although it is getting old now, this introduction to Data Vault modelling written by Kent is a very readable introduction to the topic.
Kent also blogs regularly on data warehouse-related topics as the Data Warrior.
Dan Linstedt
Dan is the inventor of the Data Vault method, and he runs a website and blog dedicated exclusively to Data Vault-related topics.
What is a Data Model?
A data model organizes different data elements and standardizes how they relate to one another and real-world entity properties. So logically then, data modeling is the process of creating those data models.
Data models are composed of entities, and entities are the objects and concepts whose data we want to track. They, in turn, become tables found in a database. Customers, products, manufacturers, and sellers are potential entities.
Each entity has attributes: details that the users want to track. For instance, a customer's name is an attribute.
With that out of the way, let's check out those data modeling interview questions!
Basic Data Modeling Interview Questions
The three types of data models are conceptual, logical, and physical, as described in detail above.
FAQ
What is data vault used for?
Data Vault is a modelling approach used to build flexible, auditable enterprise data warehouses that can absorb change in source systems without extensive re-engineering.
What is the core backbone of your data vault?
The core backbone of a Data Vault consists of hubs (business keys), links (relationships between business keys), and satellites (descriptive attributes and their history).
What is data vault 2.0 methodology?
Data Vault 2.0 is Dan Linstedt's updated methodology; it extends the original model with an agile delivery process, architecture guidance, and implementation practices such as hash-based keys.