1) What is ODI and how does it work compared to other DWH tools?
Ans: ODI is a DWH (data integration) tool. It works on an ELT architecture, whereas most other tools work on an ETL architecture.
We get more features by using the ELT architecture.
2) What are the ETL and ELT architectures and what are the differences?
Ans:
ETL: (Extraction, Transformation & Loading)
--------------------------------------------
The ETL architecture requires a separate staging server for the transformations.
ETL tools first extract data from the source systems and load it into a staging area that lives on the ETL server. All transformations
are done at the ETL server level, and only after the transformations finish is the data loaded into the target server.
So the ETL architecture involves three servers: the source server, the staging (ETL) server and the target server.
3) Disadvantages of the ETL architecture:
-----------------------------------
In the ETL architecture the staging server is different from the target server. While loading child data during transformations we
have to check that the parent data already exists in the target system, so for these validations we have to connect to the target
server. Every such data comparison crosses servers, which takes more processing time, puts an extra burden (CPU and RAM) on the
servers involved, and can impact other systems that share them.
example:
--------
Orders -- parent table
Order lines -- child table
While loading order-lines data, before inserting into the target we check whether the corresponding orders exist in the Orders table
on the target server. If they are not available we raise a "parent not found" error rather than loading orphan child rows. This
validation has to touch both the staging and the target servers, so it increases processing time as well as CPU and RAM utilization.
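As a rough illustration of the cross-server check (the table, key and database-link names here are hypothetical), the validation has
to reach the target through something like a database link:

-- Assumed objects: STG_ORDER_LINES in the staging schema; ORDERS on the target server,
-- reachable through a database link called TARGET_DB; joined on ORDER_ID.
SELECT ol.*
FROM   STG_ORDER_LINES ol
WHERE  NOT EXISTS (SELECT 1
                   FROM   ORDERS@TARGET_DB o
                   WHERE  o.ORDER_ID = ol.ORDER_ID);
-- Rows returned here are order lines whose parent order is missing in the target.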
4) ELT: (Extraction, Loading and Transformation)
---------------------------------------------
ODI uses the ELT architecture. Here the data is first extracted from the source system and loaded directly into the target server.
The staging area is maintained inside the target server: all transformations are done in staging and the result is then loaded into
the target tables. In the ELT architecture all transformations happen on the target server, where we keep the staging and target
schemas separately (or even in one schema). Because the transformations and data validations run within the same server, they do not
add cross-server processing time and they do not put load on any other server.
example:
--------
Orders -- parent table
Order lines -- child table
While loading order-lines data we first load it into staging and then validate whether those orders exist in the Orders parent table.
If an order does not exist in the parent table we raise a "parent not found" error. In the ELT architecture both the staging and the
target tables are on the same server (the target server), so this check does not impact processing time or CPU and memory utilization.
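The same validation in the ELT style stays inside one database. A minimal sketch with the same hypothetical names:

-- Assumed objects: STG.ORDER_LINES in the staging schema and TGT.ORDERS in the target schema,
-- both on the target server, joined on ORDER_ID.
SELECT ol.*
FROM   STG.ORDER_LINES ol
WHERE  NOT EXISTS (SELECT 1
                   FROM   TGT.ORDERS o
                   WHERE  o.ORDER_ID = ol.ORDER_ID);
-- No database link is needed; the check runs as an ordinary local query.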
***************************
The major advantage of the ELT architecture is that staging can be maintained within the target server itself; we do not need a
separate staging server. The ODI staging area can be on any database that supports native SQL.
Alternatively, we can place the staging area not only on the target but on any other database server (the source, or a server other
than source and target), as long as it is a database and not an ETL server.
This option is set at the interface level on the Definition tab: STAGING AREA DIFFERENT FROM TARGET. There we select a logical schema;
by default the target database schema is used, and if we want something other than the target we enable this option and select the
logical schema of that database.
5) Other ODI features compared to other ETL tools:
----------------------------------------------
1) ELT architecture, with staging maintained at the database level.
2) Inbuilt scheduling.
3) Easy code movement from one environment to another; everything is exported as XML files.
4) Inbuilt version control.
5) Support for public web services and middleware features using WebLogic Server or WebSphere.
6) Web-based access, scheduling and monitoring of the entire ODI environment using ODI Console (available when using middleware).
7) Programming-language integration: we can run code in other languages directly, call shell scripts through the OS Command tool,
and use the other ODI Tools.
8) Mail alerts for individual programs based on status, e.g. sending a success or failure message after an interface or program finishes.
6) What is a Repository and what are the types of repositories?
Ans: A repository is a database user or schema that maintains ODI metadata. In this schema we keep the information of all ODI
components: Topology, Security, Designer and Operator.
There are two types of repositories:
1) Master Repository
2) Work Repository
7) What is the Master Repository?
Ans: The Master Repository stores all Topology and Security component information, and it also contains the registration
information of the work repositories.
We maintain only one master repository and multiple work repositories, of the Development and Execution types.
8) What is a Work Repository?
Ans: Work repositories maintain all Designer and Operator component information. There are two types of work repositories:
1) Development
2) Execution
A Development work repository is used for the Development, Testing, UAT and other environments, except production.
With a Development-type work repository we can access both the Designer and Operator components.
An Execution work repository is used for the production environment; there we can access only the Operator component.
In production we do not need Designer because we are not editing any mappings or interfaces, only executing them. In an Execution
work repository we use only SCENARIOS AND LOAD PLANS, and scenarios are not editable.
If we want to edit, we have to connect to a Development-type repository, edit the mapping, regenerate the scenario and move it into
the Execution work repository.
Note: In production we can edit load plans, because load plans contain only scenarios and variables. But SCENARIOS cannot be edited;
they are read-only objects used only for execution.
Note: If we use a Development-type work repository as the production environment, can we edit scenarios?
No. We still cannot edit scenarios, but we can edit interfaces and regenerate scenarios; however, this is not advisable in production.
Changes should be made in development, tested properly, and only valid scenarios or interfaces moved into production.
9) What is Security in ODI and how do we maintain it?
Ans: In ODI, security is maintained in the SECURITY component. We create users there using SUPERVISOR, the default administrator
user, and we assign privileges to them through profiles.
10) What is a Profile and what are the types of profiles?
Ans: A profile is a set of object privileges. We assign profiles to users, and a user gets all the privileges of whichever profiles are assigned.
There are two types of profiles:
1) Generic profile: generic profiles have privileges granted by default.
2) Non-generic profile: non-generic profiles have no default grants; we grant privileges based on our requirements, i.e. customised profiles.
Can we create a new profile?
Yes, we can create a new profile and assign object privileges to it based on our requirements.
11) What is the account expiration option for Users?
Ans: This option is used for temporary users. If someone needs access for only one to six months, such as a contractor, we grant
access for that particular period only; after it expires they cannot log in and must ask the administrator to extend their access.
12) Topology:
---------
Topology is used to maintain the configuration and connection information of all source and target applications and database servers.
13) What is a Data Server?
A data server is the main object that holds the connection information of a source or target database server.
Here we provide the driver details such as host, port number and SID/service name, along with the credentials (user name and password).
14) What is a Physical Schema and what are the types of physical schemas?
A physical schema represents the actual database or application schema (user) being accessed.
It defines two schemas:
1) Data schema (main schema): the schema that contains the main database tables.
2) Work schema: the schema that contains the staging tables, i.e. the C$, I$, E$ and other temporary tables.
The data schema and the work schema can be the same schema, or we can use different schemas for each.
15) What is a Context?
----------------
A context is the mediator between the physical and logical architecture. Contexts are used to represent environments or server
locations; each context maps a logical schema to one physical schema.
example: DEV, TEST and PRODUCTION environments.
16) What is the difference between Contexts and Repositories?
Ans: Repositories store complete component information, such as the Designer and Operator objects.
The servers we define in Topology are divided per environment using contexts.
Because all environments share a single Topology (one master repository) with multiple work repositories, it is the context that
splits the environment servers and makes the right ones usable from each repository.
example: the DEV, TEST and Production work repositories all connect to the same Topology, but each environment identifies its
servers (development, testing or production) only through its context. The context divides the servers per environment so that each
work repository uses the servers intended for it.
Note: All environment servers are configured in Topology, but at access time the context decides which physical servers a work
repository uses. A repository resolves physical servers only through the context: a development repository connects through the
development context, and a production repository connects through the production context.
17) What is a Logical Schema?
A logical schema is an alias for your physical schemas; it is connected to a physical schema through a context.
In Designer a developer can access only logical schemas, not physical schemas; through the combination of a logical schema and a
context the developer effectively connects to the physical schema.
18) Designer:
---------
The Designer component is the complete development area. In Designer we do the development work: creating interfaces, packages,
procedures, variables, sequences and other objects. After finishing development we generate scenarios and move those scenarios into production.
19) What is a Scenario?
A scenario is a compiled, executable object and it is not editable. In the ODI production environment we use only scenarios, for
security and to hide the implementation, because a scenario is neither readable nor editable; it can only be scheduled or executed.
20) What is an Interface?
-------------------
An interface is used to load data from source to target; here we apply all the transformations, validations and aggregations.
In ODI 11g an interface can have only one target table. If we want to load data into multiple target tables we have to build multiple
interfaces or use the IKM Multiple Table Insert knowledge module.
In ODI 12c interfaces are called mappings, and a single mapping can load multiple target tables.
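Conceptually, a multi-table insert pushes several targets into one statement. A minimal Oracle sketch of the idea (table and column
names are made up for illustration, not what the IKM literally generates):

-- Assumed tables: SRC_ORDERS as the source, TGT_ORDERS_OPEN and TGT_ORDERS_CLOSED as two targets.
INSERT ALL
  WHEN status = 'OPEN'   THEN INTO TGT_ORDERS_OPEN   (order_id, status) VALUES (order_id, status)
  WHEN status = 'CLOSED' THEN INTO TGT_ORDERS_CLOSED (order_id, status) VALUES (order_id, status)
SELECT order_id, status
FROM   SRC_ORDERS;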
21) What is a Yellow Interface?
--------------------------
A yellow interface is used when there is no target table; it can create the target table dynamically if one does not exist.
For this we need to set the CREATE_TARGET_TABLE option to TRUE at the IKM level.
A yellow interface can also be used as a source in other interfaces.
In 12c, yellow interfaces are called reusable mappings.
22) What is a Lookup?
----------------
In ODI a lookup is used to check parent data before loading child data. In the background it generates a left outer join
or a subquery.
A lookup means looking up the data in the lookup table: if the data is available there it is picked up, otherwise it is ignored (returned as null).
Compared with writing the left outer join by hand, the lookup is declared at the ODI level, which keeps the interface simpler to
build and maintain; the generated SQL is still a left outer join (or subquery) executed in the database.
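A rough sketch of the kind of SQL a lookup generates (all names are illustrative):

-- Assumed tables: ORDER_LINES as the driving source and ORDERS as the lookup table, joined on ORDER_ID.
SELECT ol.ORDER_ID,
       ol.LINE_ID,
       o.CUSTOMER_ID               -- column fetched from the lookup table; NULL when no parent row matches
FROM   ORDER_LINES ol
LEFT OUTER JOIN ORDERS o
       ON o.ORDER_ID = ol.ORDER_ID;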
23) What is a Filter?
---------------
Filters are used to filter data coming from the source tables; a filter can be executed on the source or in the staging area.
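A filter simply becomes a WHERE clause in the generated query, for example (illustrative names):

SELECT *
FROM   SRC_ORDERS
WHERE  ORDER_DATE >= TRUNC(SYSDATE) - 7;   -- the filter expression defined on the source datastore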
24) What is the Optimization Context?
----------------------------
The optimization context is used only while developing the interface; at execution time the context selected in the execution window
is the one that matters. So the optimization context is only for interface development, not for execution.
25) What is the Simulation option on an interface?
---------------------------------------
This option shows the complete interface process -- the LKM, IKM and CKM steps, the creation of all the $ tables and the queries that
insert into the target table -- as a report.
Note: Simulation only produces the report; it does not run a session or program.
26) What is CDC?
Ans:
CDC (Change Data Capture) is used to extract only the changed data from the source. If the source system allows triggers on the
source tables we can use the CDC feature; if the source system does not allow us to create triggers we cannot use CDC and we go for
timestamp-based filtering instead.
Note: If the source table has a timestamp or last-modified-date column we can filter on it. If the source table does not have such a
column we have to use the CDC feature (the journalization concept); and if triggers are also not allowed we have to select everything
and maintain a separate staging table so we can compare the source data against our staging data.
Note: If the source table has no timestamp column and we are not allowed to create CDC triggers, then we create a staging table
STG_ORDERS with a timestamp column. First we load the full data from the source table (SOURCE_ORDERS) into our staging table, and we
keep this staging data permanently. We also create one more interface to load STG_ORDERS into TARGET_ORDERS based on the timestamp column.
From the next run onwards, while selecting data from SOURCE_ORDERS we check STG_ORDERS, and we select and insert only the rows that
STG_ORDERS does not already have, for example (the join key ORDER_ID is illustrative):
SELECT * FROM SOURCE_ORDERS SO WHERE NOT EXISTS (SELECT 'X' FROM STG_ORDERS STG WHERE STG.ORDER_ID = SO.ORDER_ID);
For this requirement we need two interfaces: one from SOURCE_ORDERS to STG_ORDERS and another from STG_ORDERS to TARGET_ORDERS.
From STG_ORDERS to TARGET_ORDERS we use a timestamp filter condition on STG_ORDERS so that only the newly arrived rows are selected.
We must not delete data from the STG_ORDERS table.
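The second interface can be sketched like this (LOAD_TS is an assumed audit column on STG_ORDERS, and #LAST_LOAD_DATE stands for a
hypothetical ODI variable holding the previous run's cut-off):

-- STG_ORDERS to TARGET_ORDERS: pick up only the rows staged since the last run.
SELECT *
FROM   STG_ORDERS
WHERE  LOAD_TS > TO_DATE('#LAST_LOAD_DATE', 'YYYY-MM-DD HH24:MI:SS');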
27) What is SCD?
------------
SCD (Slowly Changing Dimension) is used to load dimension data whose attributes change over time.
There are three types of SCD:
1) SCD Type 1: no history; in ODI we set the SCD behaviour of all columns to OVERWRITE ON CHANGE.
   For SCD Type 1 we can use either IKM Oracle Incremental Update or IKM Slowly Changing Dimension.
2) SCD Type 2: here we maintain the full history, so we need to set the following SCD properties on the Type 2 table:
   1) ADD ROW ON CHANGE: this property adds a new row whenever changed data is received, to maintain the history.
   2) STARTING TIMESTAMP: shows the date and time from which the record is valid.
   3) ENDING TIMESTAMP: shows the date and time until which the record was valid.
   4) CURRENT RECORD FLAG: 'Y' marks the active record, 'N' marks old or history rows.
3) SCD Type 3: ODI does not support it out of the box; we have to customise the IKM Slowly Changing Dimension.
   It keeps only one level of history, in separate columns such as CURRENT_SALARY and PREVIOUS_SALARY.
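A rough SQL sketch of what an SCD Type 2 load does (table, key and audit column names are assumptions, not what the IKM literally generates):

-- Close the currently active row for each customer whose tracked attribute changed...
UPDATE DIM_CUSTOMER
SET    END_TS       = SYSDATE,
       CURRENT_FLAG = 'N'
WHERE  CURRENT_FLAG = 'Y'
AND    EXISTS (SELECT 1
               FROM   STG_CUSTOMER s
               WHERE  s.CUSTOMER_ID   = DIM_CUSTOMER.CUSTOMER_ID
               AND    s.CUSTOMER_NAME <> DIM_CUSTOMER.CUSTOMER_NAME);  -- a column flagged "add row on change"

-- ...then insert the new version as the active row.
INSERT INTO DIM_CUSTOMER (CUSTOMER_ID, CUSTOMER_NAME, START_TS, END_TS, CURRENT_FLAG)
SELECT s.CUSTOMER_ID, s.CUSTOMER_NAME, SYSDATE, TO_DATE('2400-01-01','YYYY-MM-DD'), 'Y'
FROM   STG_CUSTOMER s
WHERE  NOT EXISTS (SELECT 1
                   FROM   DIM_CUSTOMER d
                   WHERE  d.CUSTOMER_ID   = s.CUSTOMER_ID
                   AND    d.CURRENT_FLAG  = 'Y'
                   AND    d.CUSTOMER_NAME = s.CUSTOMER_NAME);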
28) What is a Package?
-----------------
A package is used to build a workflow out of the objects we have developed, such as interfaces, procedures and variables.
We can run multiple objects in a package, either sequentially or in parallel.
If we drop the objects in directly (procedures, interfaces, variables, packages) they can run only sequentially; we cannot run them in parallel.
If we want to run them in parallel we have to use the scenarios of those interfaces, procedures, packages and variables.
A scenario step has a SYNCHRONOUS / ASYNCHRONOUS mode option:
SYNCHRONOUS: sequential run.
ASYNCHRONOUS: parallel run.
29) How do we run steps in parallel in a Package?
We have to use scenarios only and set asynchronous mode in the step options of the package.
If a step is a direct object it can only run sequentially.
30) What is a Load Plan?
------------------
A load plan can run only scenarios and variables, but it lets us combine sequential runs, parallel runs, condition-based steps and
hierarchical runs -- for example a sequential branch inside a parallel branch, or a parallel branch inside a sequential one.
31) Difference between a Package and a Load Plan?
-----------------------------------------
In a package we can use direct objects such as interfaces, procedures and packages, as well as the scenarios of those objects,
and we can use all the ODI Tools (SFTP Get, SFTP Put, file copy, file move and so on).
In a load plan we can run only scenarios; we cannot run direct objects and we cannot use the ODI Tools such as SFTP Put/Get and the
other file-related tools. We can use only the scenarios of interfaces, procedures, packages and variables.
In a package we can use variables in four ways: declare variable, set variable, evaluate variable and refresh variable.
In a load plan we can use a variable in only one way.
A package cannot be run directly in production; we have to generate a scenario for it.
A load plan can be run directly in production: there is no scenario for a load plan, it is directly executable and schedulable.
32) What is a Procedure?
-------------------
An ODI procedure is used for technology-specific code execution; we can execute code of any technology directly using a procedure.
The command on source is used for selection, and the command on target is used for all operations such as DML, DDL and others.
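A minimal sketch of a procedure step (table and column names are illustrative; the :COLUMN bind syntax is the usual way a target
command references the rows returned by the source command, but verify it against your ODI version):

-- Command on Source (runs against the source logical schema):
SELECT ORDER_ID, STATUS FROM SRC_ORDERS WHERE STATUS = 'CLOSED';

-- Command on Target (runs for each source row; :ORDER_ID and :STATUS bind the source columns):
INSERT INTO AUDIT_CLOSED_ORDERS (ORDER_ID, STATUS, AUDIT_TS)
VALUES (:ORDER_ID, :STATUS, SYSDATE);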
33) What is a Sequence and what are the types of sequences?
---------------------------------------------
A sequence is used to generate a series of values, for example numbers to insert into tables.
There are three types of sequences:
1) Standard sequence: we set an increment and it starts from 0.
2) Specific sequence: we set an increment and it starts from a specific value of our own, such as 5000 or 500000.
3) Native sequence: this integrates an existing database sequence directly.
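For a native sequence the database object itself is an ordinary sequence; a minimal sketch (names are illustrative):

-- Database side: the sequence that the ODI native sequence definition points to.
CREATE SEQUENCE ORDERS_SEQ START WITH 5000 INCREMENT BY 1;

-- What the generated insert effectively does with it:
INSERT INTO TGT_ORDERS (ORDER_ID, ORDER_DATE)
VALUES (ORDERS_SEQ.NEXTVAL, SYSDATE);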
34) What are User Functions?
-----------------------
User functions are used to implement our own logic or code once and reuse it in any program or interface by passing parameters.
35) What are Markers?
-----------------
Markers are used to show development progress, status or priority of objects -- for example marking an interface as high or low
priority, or using the smiley markers.
36) What is the difference between Global objects and Project objects?
--------------------------------------------------------------
Global objects can be used in all projects; project objects can be used only within their own project. If we want to use an object
in other projects we cannot reuse a project object; we have to create it as a global object and call it with the GLOBAL keyword.
37) What are Solutions?
-------------------
Solutions are used to create versions of an entire project or of individual objects. We use them mainly for version control.
38) How do you move interfaces or other objects from Development to Production, or from one work repository to another?
Ans: Using Smart Export we first export all the required objects and save them as an .xml file; then we log in to the other work
repository and use the Smart Import feature to import that exported XML file.
Note: While moving objects from one work repository to another, the repository IDs should not be the same and both repository
versions should be the same. If the work repository versions are different, we have to migrate the whole work repository to the new
version using the ODI migration utility. If both work repository IDs are the same, we have to change (renumber) the ID of the new
work repository.
39) What is an Agent?
---------------
An agent is a runtime component. It comes into the picture whenever we run a program: it takes the request from the client, creates
a session with a unique ID, checks (pings) the source and target servers, and if they are available it carries out the data movement
from source to target. It then reports the status (for example Success) in Operator, stores the complete log information in the work
repository SNP_SESS_*LOG tables, and maintains all sessions in the SNP_SESSION table.
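For example, recent sessions can be inspected straight from the work repository (a sketch; exact column names can differ between ODI
versions, so verify them against your repository):

SELECT SESS_NO, SESS_NAME, SESS_STATUS, SESS_BEG, SESS_END
FROM   SNP_SESSION
ORDER  BY SESS_BEG DESC;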
40) What are the types of Agents?
---------------------------------
There are two types of agents in ODI:
1) Standalone agent: works at the ODI level, without middleware.
2) J2EE agent: runs within the middleware and gives additional features such as load balancing, high availability and public web services.