Q: Define repository in terms of Siebel Analytics.
o The repository stores the metadata information. A Siebel repository is a file; the extension of the repository file is .rpd.
o It is a metadata repository: with the Siebel Analytics Server, all the rules needed for security, data modeling, aggregate navigation, caching, and connectivity are stored in metadata repositories.
o Each metadata repository can store multiple business models, and the Siebel Analytics Server can access multiple repositories.

Q: What is the end-to-end life cycle of Siebel Analytics?
o The Siebel Analytics life cycle:
1. Gather business requirements.
2. Identify source systems.
3. Design ETL to load a data warehouse if the source data doesn't already exist in one.
4. Build the repository.
5. Build dashboards or use Answers for reporting.
6. Define security (LDAP, external table, etc.).
7. Based on performance, decide on aggregations and/or a caching mechanism.
8. Testing and QA.

Q: What were your schemas? How does the Siebel architecture work? Explain the three layers. How do you import sources?
o There are five parts to the Siebel Analytics architecture:
1. Clients
2. Siebel Analytics Web Server
3. Siebel Analytics Server
4. Siebel Analytics Scheduler
5. Data sources
o Metadata that represents the analytical model is created using the Siebel Analytics Administration Tool.
o The repository is divided into three layers:
1. Physical - represents the data sources.
2. Business - models the data sources into facts and dimensions.
3. Presentation - specifies the user's view of the model; rendered in Siebel Answers.

Q: If you have 3 facts and 4 dimensions and you need to join, would you recommend joining fact with fact? If not, what is the option? Why wouldn't you join fact to fact?
o In the BMM layer, create one logical fact table and add the 3 physical fact tables as its logical table sources.

Q: What is a connection pool, and how many connection pools did you have in your last project?
o A connection pool is needed for every physical database.
o It contains information about the connection to the database, not the database itself.
o It can use either shared user accounts or pass-through accounts (use :USER and :PASSWORD for pass-through).
o We can have multiple connection pools for each group to avoid waiting.

Q: What is the purpose of alias tables?
o An alias table (Alias) is a physical table with the type of Alias. It is a reference to a logical table source, and inherits all its column definitions and some properties from the logical table source. A logical table source shows how the logical objects are mapped to the physical layer and can be mapped to physical tables, stored procedures, and SELECT statements. An alias table can be a reference to any of these logical table source types.
o Alias tables can be an important part of designing a physical layer. The main reasons to create an alias table are:
- To reuse an existing table more than once in the physical layer (without having to import it several times).
- To set up multiple alias tables, each with different keys, names, or joins.
- To help you design sophisticated star or snowflake structures in the business model layer. Alias tables are critical in the process of converting ER schemas to dimensional schemas.

Q: How do you define the relationship between facts and dimensions in the BMM layer?
o Using a complex join, we can define the relationship between facts and dimensions in the BMM layer.
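Relating to the alias-table question above: the table and column names below are hypothetical, but the sketch shows why reusing one physical date table through two aliases matters, since a single query can then play both the order-date and ship-date roles without importing the table twice.

-- One physical table w_day_d, exposed through two aliases
-- (Dim_Order_Date and Dim_Ship_Date); all names are illustrative.
SELECT od.calendar_date    AS order_date,
       sd.calendar_date    AS ship_date,
       SUM(f.order_amount) AS order_amount
FROM   orders_fact f
JOIN   w_day_d od ON od.row_wid = f.order_date_wid   -- via alias Dim_Order_Date
JOIN   w_day_d sd ON sd.row_wid = f.ship_date_wid    -- via alias Dim_Ship_Date
GROUP BY od.calendar_date, sd.calendar_date;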
Q: What is the Time Series Wizard? When and how do you use it?
o We can compare certain measures (revenue, sales, etc.) for the current year versus the previous year; we can also do it by month, week, or day.
o Identify the time periods that need to be compared, and then the period table keys to the previous time period.
o The period table needs to contain a column that will hold the "Year Ago" information.
o The fact tables need to have year-ago totals.
o To use the Time Series Wizard: after creating your business model, right-click the business model and click "Time Series Wizard".
o The Time Series Wizard prompts you to create names for the comparison measures that it adds to the business model.
o The Time Series Wizard prompts you to select the period table used for the comparison measures.
o Select the column in the period table that provides the key to the comparison period. This is the column containing the "Year Ago" information in the period table.
o Select the measures you want to compare and then select the calculations you want to generate. For example: measure = Total Dollars; calculations = Change and Percent Change.
o Once the Time Series Wizard is run, the output will be:
a) Aliases for the fact tables (in the physical layer)
b) Joins between the period table and the alias fact tables
c) Comparison measures
d) Logical table sources
o In the General tab of the logical table source you can find "Generated by Time Series Wizard" in the description section.
o You can then add these comparison measures to the presentation layer for your reports.
o Example: total sales of the current quarter vs. the previous quarter vs. the same quarter a year ago.

Q: Did you create any new logical columns in the BMM layer? How?
o Yes, we can create new logical columns in the BMM layer.
o Example: right-click the fact table > New Logical Column, and give the new logical column a name such as Total Cost.
o Then, in the fact table's logical table source, use the Column Mapping option to define the calculation for the new column.

Q: Can you use a physical join in the BMM layer?
o Yes, we can use a physical join in the BMM layer. When there is an SCD Type 2, we need a complex join in the BMM layer.

Q: Can you use an outer join in the BMM layer?
o Yes, we can. When defining a complex join in the BMM layer, there is a Type option where an outer join can be selected.

Q: What are other ways of improving summary query reports, other than aggregate navigation and cache management?
- Indexes
- Join algorithm
- Materialized-view query rewrite
- Proper report design on the web, making sure the report is optimal and does not fetch any additional columns or rows.

Q: What are level-based metrics?
o A level-based metric is a measure pinned at a certain level of a dimension. For example, if you have a measure called "Dollars", you can create a level-based measure called "Yearly Dollars" which (you guessed it) is Dollars for a year. This measure always returns the value for the year even if you drill down to a lower level such as quarter or month. To create a level-based measure, create a new logical column based on the original measure (such as Dollars in the example above), then drag and drop the new logical column onto the appropriate level in the dimension hierarchy (in this example, drag it onto Year in the Time dimension).
o A level-based metric is a metric defined for a specific level or intersection of levels.
o Monthly Total Sales or Quarterly Sales are examples.
o You can compare monthly sales with quarterly sales, or compare customer orders this quarter to orders this year.
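A level-based measure is defined in the Administration Tool rather than written by hand, but the sketch below (hypothetical time_dim and sales_fact tables) shows what a "Yearly Dollars" measure pinned at the Year level effectively returns next to ordinary dollars at the month level.

-- Ordinary measure at month grain, plus the same dollars re-aggregated
-- at the Year level and repeated on every month row.
SELECT t.year,
       t.month,
       SUM(f.dollars)                                 AS dollars,
       SUM(SUM(f.dollars)) OVER (PARTITION BY t.year) AS yearly_dollars
FROM   sales_fact f
JOIN   time_dim t ON t.time_key = f.time_key
GROUP BY t.year, t.month;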
Q: What is the logging level? Where can you set logging levels?
o You can enable a logging level for individual users; you cannot configure a logging level for a group.
o Set the logging level based on the amount of logging you want to do. In normal operations, logging is generally disabled (the logging level is set to 0). If you decide to enable logging, choose a logging level of 1 or 2. These two levels are designed for use by Siebel Analytics Server administrators.
o To set the logging level:
1. In the Administration Tool, select Manage > Security.
2. The Security Manager dialog box appears.
3. Double-click the user's user ID.
4. The User dialog box appears.
5. Set the logging level by clicking the Up or Down arrows next to the Logging Level field.

Q: What is a variable in Siebel?
o You can use variables in a repository to streamline administrative tasks and modify metadata content dynamically to adjust to a changing data environment. The Administration Tool includes a Variable Manager for defining variables.

Q: What are system variables and nonsystem variables?
o System variables: system variables are session variables that the Siebel Analytics Server and Siebel Analytics Web use for specific purposes. System variables have reserved names, which cannot be used for other kinds of variables (such as static or dynamic repository variables, or nonsystem session variables). When using these variables on the web, preface their names with NQ_SESSION. For example, to filter a column on the value of the variable LOGLEVEL, set the filter to the variable NQ_SESSION.LOGLEVEL.
o Nonsystem variables: a common use for nonsystem session variables is setting user filters. For example, you could define a nonsystem variable called SalesRegion that would be initialized to the name of the user's sales region. You could then set a security filter for all members of a group that would allow them to see only data pertinent to their region. When using these variables on the web, preface their names with NQ_SESSION. For example, to filter a column on the value of the variable SalesRegion, set the filter to the variable NQ_SESSION.SalesRegion.

Q: What are the different types of variables? Explain each.
o There are two classes of variables: repository variables and session variables.
o Repository variables: a repository variable has a single value at any point in time. There are two types of repository variables:
- Static: the value persists and does not change until a Siebel Analytics Server administrator decides to change it.
- Dynamic: the value is refreshed by data returned from queries. When defining a dynamic repository variable, you create an initialization block (or use a pre-existing one) that contains a SQL query. You also set up a schedule that the Siebel Analytics Server follows to execute the query and periodically refresh the value of the variable.
o Session variables: session variables are created and assigned a value when each user logs on. There are two types of session variables: system and nonsystem.
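As a sketch of how a nonsystem session variable such as SalesRegion is typically populated: the initialization block query below runs at logon, ':USER' is the placeholder the server substitutes with the login ID, and user_region_map is a hypothetical lookup table.

-- Initialization block query assigned to the session variable SalesRegion.
SELECT sales_region
FROM   user_region_map
WHERE  user_id = ':USER';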
Q: What is cache management? Name all the options and their uses. For an event polling table, do you need the table in your physical layer?
o Monitoring and managing the cache is cache management. There are three ways to do it: disable caching for the system (in the NQSConfig.INI file), set a cache persistence time for specified physical tables, or set up an event polling table.
o Disable caching for the system (NQSConfig.INI): you can disable caching for the whole system by setting the ENABLE parameter to NO in the NQSConfig.INI file and restarting the Siebel Analytics Server. Disabling caching stops all new cache entries and stops any new queries from using the existing cache. Disabling caching allows you to enable it at a later time without losing any entries already stored in the cache.
o Cache persistence time for specified physical tables: you can specify a cacheable attribute for each physical table; that is, whether queries involving the specified table can be added to the cache to answer future queries. To enable caching for a particular physical table, select the table in the Physical layer of the Administration Tool and select the option "Make table cacheable" in the General tab of the Physical Table properties dialog box. You can also use the Cache Persistence Time setting to specify how long the entries for this table should persist in the query cache. This is useful for OLTP data sources and other data sources that are updated frequently, potentially down to every few seconds.
o Setting an event polling table: Siebel Analytics Server event polling tables store information about updates in the underlying databases. An application (such as one that loads data into a data mart) can be configured to add rows to an event polling table each time a database table is updated. The Analytics Server polls this table at set intervals and invalidates any cache entries corresponding to the updated tables.
o An event polling table is a standalone table and does not need to be joined with other tables in the physical layer.

Q: What is authentication? How many types of authentication are there?
o Authentication is the process by which a system verifies, through the use of a user ID and password, that a user has the necessary permissions and authorizations to log in and access data. The Siebel Analytics Server authenticates each connection request it receives. The types are:
- Operating system authentication
- External table authentication
- Database authentication
- LDAP authentication

Q: What is object-level security?
o There are two types of object-level security: repository level and web level.
o Repository level: in the Presentation layer we can set repository-level security by granting or denying users/groups permission to see a particular table or column.
o Web level: this provides security for objects stored in the Siebel Analytics web catalog, such as dashboards, dashboard pages, folders, and reports. You can only view the objects for which you are authorized. For example, a mid-level manager may not be granted access to a dashboard containing summary information for an entire department.

Q: What is data-level security?
o This controls the type and amount of data that you can see in a report. When multiple users run the same report, the results returned to each depend on their access rights and roles in the organization. For example, a sales vice president sees results for all regions, while a sales representative for a particular region sees only data for that region.

Q: What is the difference between data-level security and object-level security?
o Data-level security controls the type and amount of data that you can see in a report. Object-level security provides security for objects stored in the Siebel Analytics web catalog, such as dashboards, dashboard pages, folders, and reports.

Q: How do you implement security using external tables and LDAP?
o Instead of storing user IDs and passwords in a Siebel Analytics Server repository, you can maintain lists of users and their passwords in an external database table and use this table for authentication purposes. The external database table contains user IDs and passwords, and could contain other information, including group membership and display names used for Siebel Analytics Web users. The table could also contain the names of specific database catalogs or schemas to use for each user when querying data.
o Instead of storing user IDs and passwords in a Siebel Analytics Server repository, you can have the Siebel Analytics Server pass the user ID and password entered by the user to an LDAP (Lightweight Directory Access Protocol) server for authentication. The server uses clear-text passwords in LDAP authentication, so make sure your LDAP servers are set up to allow this.
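A typical external-table authentication setup uses an initialization block along the lines of the sketch below; the security_users table is hypothetical, the query's result columns are assigned to the system variables USER, GROUP, and DISPLAYNAME, and ':USER' / ':PASSWORD' are replaced with the credentials the user types in.

-- Authentication initialization block (row-wise assignment to USER, GROUP, DISPLAYNAME).
-- The clear-text password comparison is shown only to keep the sketch simple.
SELECT username, group_name, display_name
FROM   security_users
WHERE  username = ':USER'
  AND  password = ':PASSWORD';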
Q: If you have two facts and you want one report at the quarter level and the other at the month level, how do you do that with just one time dimension?
o Using level-based metrics.

Q: Did you work on a standalone Siebel system or was it integrated with other platforms?
o Deploying the Siebel Analytics platform without other Siebel applications is called standalone Siebel Analytics. If your deployment includes other Siebel Analytics applications, it is called integrated analytics. You can say standalone Siebel Analytics.

Q: How do you sort columns in the rpd and on the web?
o On the web, you sort on the column in the request; in the rpd, you use the column's Sort Order Column property.

Q: If you want to create a new logical column, where will you create it (in the repository or the dashboard)? Why?
o I will create the new logical column in the repository, because if it is in the repository you can use it for any report. If you create a new logical column in a dashboard, it only affects the reports on that dashboard; you cannot use that new logical column for another dashboard (or request).

Q: What is a complex join, and where is it used?
o We can join a dimension table and a fact table in the BMM layer using a complex join. When there is an SCD Type 2, we have to use a complex join in the BMM layer.

Q: If you have dimension tables like Customer, Item, and Time and a fact table like Sale, and you want to find out how often a customer comes to the store and buys a particular item, what will you do?
o Write a query such as:
SELECT customer_name, item_name, sale_date, SUM(qty)
FROM customer_dim a, item_dim b, time_dim c, sale_fact d
WHERE d.cust_key = a.cust_key
  AND d.item_key = b.item_key
  AND d.time_key = c.time_key
GROUP BY customer_name, item_name, sale_date

Q: Did you work on a standalone or integrated system?
o Standalone.

Q: If you want to limit users from a certain region to access only certain data, what would you do?
o Use data-level security.
o In the Siebel Analytics Administration Tool, go to Manage > Security; in the left-hand pane you will find Users, Groups, LDAP Servers, and Hierarchies. Select the user, right-click, and go to Properties; you will find two tabs, Users and Logon. Go to the User tab and click the Permissions button next to the user name. A new window, User/Group Permissions, appears with three tabs: General, Query Limits, and Filters. Specify your condition on the Filters tab, where you can select the presentation table, presentation columns, logical table, and logical columns and apply the condition according to your requirement for the selected user or group.
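To make the region restriction concrete (all table, column, and variable names are illustrative): the filter attached to the group might be "Markets"."Region" = VALUEOF(NQ_SESSION."SalesRegion"), and the server then folds the resolved value into the physical SQL it generates, for example:

-- Physical SQL for a revenue-by-region request when NQ_SESSION.SalesRegion
-- resolves to 'EAST' for the logged-in user.
SELECT m.region,
       SUM(f.revenue) AS revenue
FROM   sales_fact f
JOIN   market_dim m ON m.market_key = f.market_key
WHERE  m.region = 'EAST'
GROUP BY m.region;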
Q: If there are 100 users accessing data, and you want to know the logging details of all the users, where can you find that?
o Set each user's logging level:
1. In the Administration Tool, select Manage > Security. The Security Manager dialog box appears.
2. Double-click the user's user ID. The User dialog box appears.
3. Set the logging level by clicking the Up or Down arrows next to the Logging Level field.

Q: How do you implement an event polling table?
o Siebel Analytics Server event polling tables store information about updates in the underlying databases. An application (such as one that loads data into a data mart) can be configured to add rows to an event polling table each time a database table is updated. The Analytics Server polls this table at set intervals and invalidates any cache entries corresponding to the updated tables. (A sketch of a polling table appears after this group of questions.)

Q: Can you migrate the presentation layer only to a different server?
o No, we can't migrate only the presentation layer. Ask for more information and use one of the following options:
o Create an ODBC connection on the other server and access the layer.
o Copy the rpd and migrate it to the other server.

Q: Define pipeline. Did you use it in your projects?
o Yes, pipelines are the stages in a particular transaction: assessment, finance, etc.

Q: How do you create a filter in the repository?
o With a WHERE condition on the Content tab.

Q: How do you work in a multi-user environment? What are the steps?
o Create a shared directory on the network for Multi-User Development (MUD).
o Open the rpd to use in MUD. From Tools > Options, set the MUD directory to point to the shared directory above.
o Define projects within the rpd to allow multiple users to develop within their subject areas or facts.
o Save and move the rpd to the shared directory set up in the first step.
o When users work in MUD mode, they open the Administration Tool and start with MUD > Checkout to check out the project they need to work on (not File > Open as you would usually do).
o After completing the development, users check the changes back in to the network and merge the changes.

Q: Where are the passwords for user IDs, LDAP authentication, and external table authentication stored, respectively?
o Passwords for user IDs are stored in the Siebel Analytics Server repository; for LDAP authentication they are stored on the LDAP server; for external table authentication they are stored in a table in the external database.

Q: Can you bypass Siebel Analytics Server security? If so, how?
o Yes, you can bypass it by setting the authentication type in the security section of the NQSConfig.INI file as AUTHENTICATION_TYPE = BYPASS_NQS. instanceconfig.xml and NQSConfig.INI are the two places to change.

Q: Where can you add new groups and set permissions?
o You can add groups by going to Manage > Security and adding new groups. You can give a group permissions for query limits and filter conditions.

Q: What are the things you can do in the BMM layer?
o Aggregate navigation, level-based metrics, the Time Series Wizard, creating new logical columns, and complex joins.
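The event polling sketch referenced above: the table structure follows the sample scripts shipped with the product (verify the exact DDL against your own installation), and the INSERT shows the kind of row a load job might add after refreshing a warehouse table.

-- Event polling table (structure per the sample scripts; confirm locally).
CREATE TABLE s_nq_ept (
  update_type    INTEGER       NOT NULL,
  update_ts      DATE          NOT NULL,
  database_name  VARCHAR2(120) NULL,
  catalog_name   VARCHAR2(120) NULL,
  schema_name    VARCHAR2(120) NULL,
  table_name     VARCHAR2(120) NOT NULL,
  other_reserved VARCHAR2(120) NULL
);

-- Row added by the ETL after refreshing a warehouse table (illustrative values).
INSERT INTO s_nq_ept (update_type, update_ts, database_name, catalog_name,
                      schema_name, table_name, other_reserved)
VALUES (1, SYSDATE, 'OLAP DB', NULL, 'OLAP_USER', 'W_SALES_F', NULL);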
Q: What is a ragged hierarchy, and how do you manage it?
o A ragged hierarchy is one of the different kinds of hierarchy.
o It is a hierarchy in which each level has a consistent meaning, but the branches have inconsistent depths because at least one member attribute in a branch level is unpopulated. A ragged hierarchy can represent a geographic hierarchy in which the meaning of each level, such as city or country, is used consistently, but the depth of the hierarchy varies.
o For example, consider a geographic hierarchy with Continent, Country, Province/State, and City levels defined. One branch has North America as the continent, United States as the country, California as the province or state, and San Francisco as the city. However, the hierarchy becomes ragged when one member does not have an entry at all of the levels. For example, another branch has Europe as the continent, Greece as the country, and Athens as the city, but has no entry for the Province or State level because this level is not applicable to Greece for the business model in this example. In this example, the Greece and United States branches descend to different depths, creating a ragged hierarchy.

Q: What is the difference between a single logical table source and multiple logical table sources?
o If a logical table in the BMM layer has only one table as its source, it is a single LTS.
o If a logical table in the BMM layer has more than one table as its source, it has multiple LTSs.
o Example: a fact table usually has multiple LTSs, whose sources come from different physical tables.

Q: Can you let me know how many aggregate tables you had in your project? On what basis did you create them?
o As per the resume justification document.

Q: How do you bring/relate the aggregate tables into the Siebel Analytics logical layer?
o One way of bringing aggregate tables into the BMM layer is as logical table sources for the corresponding fact table.
o This is done by dragging and dropping the aggregate table onto the corresponding fact table. After doing that, establish the column mappings and set the aggregation levels.

Q: How do you know which report is hitting which table, the fact table or the aggregate table?
o After running the report, go to the "Administration" tab and click "Manage Sessions". There you can find the queries that were run, and in the "View Log" option in Session Management you can find which report is hitting which table.

Q: Suppose a report typically runs for about 3 minutes. What is the first step you take to improve the performance of the query?
o Find the SQL query of the report in Administration > Manage Sessions, run the SQL query in Toad, read the explain plan output, and modify the SQL based on the explain plan output.

Q: Suppose you have a report which has the option of running against an aggregate table. How does the tool know to hit the aggregate table, and what steps do you follow to configure that?
o Explain the process of aggregate navigation (see the sketch below).
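As an illustration of aggregate navigation (all table names are hypothetical): the same logical request for revenue by year can be answered from an aggregate source mapped at the Year grain, or from the detail fact when no suitable aggregate is mapped or a lower grain is requested.

-- With an aggregate logical table source mapped at the Year level:
SELECT a.year, SUM(a.revenue) AS revenue
FROM   sales_agg_year a
GROUP BY a.year;

-- Without it (or for a day-level request), the server falls back to the detail fact:
SELECT t.year, SUM(f.revenue) AS revenue
FROM   sales_fact f
JOIN   time_dim t ON t.time_key = f.time_key
GROUP BY t.year;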
Q: Have you heard of implicit facts? If so, what are they?
o An implicit fact column is a column that is added to a query when the query contains columns from two or more dimension tables and no measures. You will not see the column in the results. It is used to specify a default join path between dimension tables when there are several possible alternatives.
o For example, there might be many star schemas in the database that have the Campaign dimension and the Customer dimension, such as the following stars:
- Campaign History star: stores customers targeted in a campaign.
- Campaign Response star: stores customer responses to a campaign.
- Order star: stores customers who placed orders as a result of a campaign.
o In this example, because campaign and customer information might appear in many segmentation catalogs, users choosing to count customers from the targeted-campaigns catalog would expect to count customers that have been targeted in specific campaigns.
o To make sure that the join relationship between Customers and Campaigns goes through the campaign history fact table, a campaign history implicit fact needs to be specified in the Campaign History segmentation catalog.
o The following guidelines should be followed in creating segmentation catalogs:
- Each segmentation catalog should be created so that all columns come from only one physical star.
- Because the Marketing module user interface has special features that allow users to specify their aggregations, level-based measures typically should not be exposed to segmentation users in a segmentation catalog.

Q: What is aggregate navigation? How do you configure aggregate tables in Siebel Analytics?
o Aggregate tables store precomputed results, which are measures that have been aggregated (typically summed) over a set of dimensional attributes. Using aggregate tables is a very popular technique for speeding up query response times in decision support systems.
o If you are writing SQL queries or using a tool that only understands which physical tables exist (and not their meaning), taking advantage of aggregate tables and putting them to good use becomes more difficult as the number of aggregate tables increases. The aggregate navigation capability of the Siebel Analytics Server, however, allows queries to use the information stored in aggregate tables automatically, without query authors or query tools having to specify aggregate tables in their queries. The Siebel Analytics Server lets you concentrate on asking the right business question; the server decides which tables provide the fastest answers.

Q: (Assume you are in the BMM layer.) We have 4 dimension tables, and 2 of them need to have hierarchies. In such a case, is it mandatory to create hierarchies for all the dimension tables?
o No, it is not mandatory to define hierarchies for the other dimension tables.

Q: Can you have multiple data sources in Siebel Analytics?
o Yes.

Q: How do you deal with CASE statements and expressions in Siebel Analytics?
o Use the Expression Builder to create a CASE WHEN ... THEN ... END statement.

Q: Do you know about initialization blocks? Can you give me an example where you used them?
o Init blocks are used for instantiating a session when a user logs in.
o To create a dynamic variable, you have to create an initialization block containing a SQL statement.

Q: What is the Query Repository tool?
o It is a utility of the Siebel/OBIEE Administration Tool.
o It allows you to examine the repository metadata; for example, you can search for objects based on name or type.
o It lets you examine relationships between metadata objects, such as which column in the presentation layer maps to which table in the physical layer.

Q: What is the JDK and why do we need it?
o The Java Development Kit (JDK) is a software package that contains the minimal set of tools needed to write, compile, debug, and run Java applets and applications.

Q: Oracle doesn't recommend opaque views because of performance considerations, so why/when do we use them?
o An opaque view is a physical-layer table that consists of a SELECT statement (a sample opaque-view SELECT appears after the next question). An opaque view should be used only if there is no other solution.

Q: Can you migrate the presentation layer to a different server?
o No, we have to migrate the whole web catalog and rpd files.
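The opaque-view sample mentioned above: an opaque view is simply a stored SELECT in the physical layer that the server wraps as an inline view at query time; the tables here are hypothetical.

-- SELECT stored as an opaque view, e.g. "Customer_Order_Summary".
SELECT c.customer_id,
       c.customer_name,
       COUNT(o.order_id)  AS order_count,
       SUM(o.order_total) AS order_total
FROM   customers c
JOIN   orders    o ON o.customer_id = c.customer_id
GROUP BY c.customer_id, c.customer_name;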
Q: How do you identify the dimension tables, and how do you decide on them during business/data modeling?
o Dimension tables contain descriptions that data analysts use as they query the database. For example, the Store table contains store names and addresses; the Product table contains product packaging information; and the Period table contains month, quarter, and year values.
o Every table contains a primary key that consists of one or more columns; each row in a table is uniquely identified by its primary-key value or values.

Q: Why do we have multiple LTSs in the BMM layer? What is the purpose?
o To improve performance and query response time.

Q: What is the full form of rpd?
o There is no real full form for rpd as such; it is just the repository file (sometimes expanded as Rapidfile Database).

Q: How do I disable the cache for only 2 particular tables?
o In the physical layer, right-click on each table; in its properties there is an option that says "cacheable".

Q: How do you split a table in the rpd given a condition (the condition given was that Broker and Customer are in the same table)? Split Broker and Customer.
o We need to make alias tables in the physical layer.

Q: What type of protocol did you use in SAS?
o TCP/IP.

OBI Apps – Project Analytics

Oracle recently released OBI BI Apps 7.9.6, and the major change is the addition of Project Analytics. Oracle Project Analytics offers organizations a comprehensive analytics solution that delivers pervasive insight into forecasts, budgets, cost, revenue, billing, profitability, and other aspects of project management to help effectively track project life cycle status. It provides consolidated and timely information that is personalized, relevant, and actionable to improve performance and profitability. Oracle Project Analytics is also integrated with other applications in the Oracle BI Applications family to deliver cross-functional analysis, such as AR and AP invoice aging analysis and procurement transactions by project.

Oracle Project Analytics provides role-based reporting and analysis for the various roles involved in the project life cycle. Typical roles include Project Executive, Project Manager, Project Cost Engineer/Analyst, Billing Specialist, Project Accountant, and Contract Administrator.

Executives can closely monitor the organization's performance and the performance of the projects that the organization is responsible for by looking into a particular program and project and verifying how the period, accumulated, or estimated-at-completion cost is doing compared to budget and forecast. Cost variances and trends can be analyzed so that prompt actions can be taken to get projects on track or make any necessary changes in estimates, minimizing undesired results and reactive measures. Oracle Project Analytics shows past, present, and future performance, and includes estimated metrics at project completion. Further analysis can be done on each project by drilling down to detailed information, including profitability and cost at the task level.

Project managers can view the projects that they are responsible for, compare key metrics between projects, and analyze the details of a particular project, such as cost distribution by task, resource, and person. Oracle Project Analytics provides a comprehensive, high-level view of accumulated and trending information for a single project or group of projects, as well as detailed information such as budget accuracy and details by project and financial resource. Project managers can view cost and revenue by task, expenditure category, or resource type, and by project or resource. The level of analysis can be as granular as the cost, revenue, or budget transaction.

Oracle Project Analytics provides out-of-the-box adapters for Oracle EBS 11.5.10 (Family Pack M) and R12, and PeopleSoft 8.9 and 9.0.
It also provides universal adapters to extract and load data from legacy sources, such as homegrown systems, or from sources that have no prepackaged source-specific ETL adapters.

The Oracle Project Analytics application comprises the following subject areas:

Funding. A detailed subject area that provides the ability to track Agreement Amount, Funding Amount, Baselined Amount, and all changes in funding throughout the life cycle of the project. In addition, it provides the ability to do comparative analysis of Agreement Amount, Funding Amount, Invoice Amount, and the remaining funding amount across projects, tasks, customers, organizations, and associated hierarchies.

Budgets. A detailed subject area that provides the ability to report on Cost, Revenue, and Margin for budgets, and on budget changes, including tracking original and current budgets across projects, tasks, organizations, resources, periods, and associated hierarchies at the budget line level.

Forecast. A detailed subject area that provides the ability to report on Cost, Revenue, and Margin for forecasts, and on forecast changes. Forecast change analysis includes tracking original and current forecasts across projects, tasks, organizations, resources, periods, and associated hierarchies. It provides the ability to track the metrics that indicate the past, present, and future performance of cost, revenue, and margin.

Cost. A detailed subject area that provides the ability to report on Cost (Burdened Cost), Raw Cost, and Burden Cost for past and current periods, including inception-to-date and year-to-date comparisons across projects, tasks, organizations, resources, suppliers, and associated hierarchies. It provides the ability to track cost at the cost distribution level.

Revenue. A detailed subject area that provides the ability to report on revenue transactions for past and current periods, including inception-to-date and year-to-date comparisons across projects, tasks, organizations, resources, and associated hierarchies. It provides the ability to track revenue at the revenue distribution level.

Billing. A detailed subject area that provides the ability to report on Billing Amount, Retention Amount, Unearned Amount, and Unbilled Receivables Amount across projects, tasks, organizations, resources, and associated hierarchies. It provides the ability to track the invoice amount at the invoice (draft invoice) line level only. Note: the invoice tax amount is not captured in this release.

Performance. A consolidated subject area that includes combined information from Budgets, Forecasts, Cost, and Revenue, and provides the ability to analyze performance by comparing actuals (cost, revenue, margin, and margin percentage) with budgets and forecasts across projects, tasks, organizations, resources, and associated hierarchies.

OBIEE – BI Apps – Finance Analytics – Group Account Number Configuration

If you are configuring Financial Analytics (OBIEE BI Apps), it is critical that the GL account numbers are mapped to the group account numbers (or domain values), because the metrics in the GL reporting layer use these values. For a list of domain values for GL account numbers, see the Oracle Business Analytics Warehouse Data Model Reference (available from Metalink only).

You can categorize your Oracle General Ledger accounts into specific group account numbers. The group account number is used during data extraction as well as front-end reporting.
The GROUP_ACCT_NUM field in the GL Account dimension table W_GL_ACCOUNT_D denotes the nature of the General Ledger accounts (for example, cash account or payroll account). Refer to the GROUP_ACCOUNT_NUM column in the file_group_acct_names.csv file for values you can use. For a list of the Group Account Number domain values, see the Oracle Business Analytics Warehouse Data Model Reference.

The mappings to General Ledger account numbers are important for both profitability analysis and General Ledger analysis (for example, balance sheets). The logic for assigning the accounts is located in the file_group_acct_codes_ora.csv file. The table below shows an example configuration of the file_group_acct_codes_ora.csv file.

Basically, we specify the Financial Statement Item configuration through a CSV (comma-separated value) file. This is an example of the Financial Statement Item configuration file. The Financial Statement Item configuration is part of the configuration steps required for the Finance module, and it needs to be constructed by the user before the ETL program starts running. In this CSV file, the user specifies the GL accounts and the nature (which we call the Financial Statement Item) of the GL accounts. The nature is indicated by the values in the Financial Statement Item column. If consecutive GL accounts have the same nature, you can specify them in ranges, as shown above.

There are 6 possible domain values for the Financial Statement Item: AP, AR, Revenue, TAX, COGS, and Others. The 6 possible values correspond to our 6 base fact tables: IA_AP_XACTS, IA_AR_XACTS, IA_GL_REVENUE, IA_TAX_XACTS, IA_GL_COGS, and IA_GL_OTHERS.

The set of books is an accounting entity; it may be an Oracle-specific term. A company can use one single set of books or multiple sets of books to keep track of its accounting. When defining a set of books in Oracle, the user specifies the chart of accounts to be used to organize its GL accounts, and a common currency to keep all the transaction amounts in. For instance, Siebel US may use a set of books called 'US Set of Books' to keep track of its accounting entries, while Siebel Europe may use a different set of books to keep track of its accounting entries. The set of books ID is basically the numeric ID of that set of books in the OLTP system.

In the above example, accounts 1000 to 1100 for set of books 100 are assigned to AP, and accounts 1200 to 1300 are assigned to AR. A GL account can be assigned to only one Financial Statement Item.

We have another configuration file similar to the Financial Statement Item configuration file, called the Group Account Number configuration file. It allows the user to configure the GL accounts at a more detailed level than the Financial Statement Item. This is an example of the Group Account Number configuration file. This configuration is mainly used during the PLP process when we want to aggregate records from the base fact tables to the base aggregate tables. The base fact tables store records at the GL account level, whereas the base aggregate tables store summarized records at the Group Account Number level.

The group account number is also used in the Siebel Analytics RPD to define metric definitions. For instance, I can have a metric called 'Sales and Marketing Cost'. The underlying definition of that metric would be the total amount of all transactions charging to any accounts with Group Account Number 'SM COST'; in this case, all transactions charging to accounts between 4000 and 4100.
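To make the 'SM COST' example concrete: the metric boils down to a query like the sketch below, where W_GL_ACCOUNT_D and GROUP_ACCT_NUM are the dimension table and column discussed above, and the fact table and amount column names are illustrative.

-- 'Sales and Marketing Cost': total of all transactions charged to accounts
-- whose group account number is 'SM COST' (accounts 4000-4100 in the example).
SELECT SUM(f.txn_amount) AS sales_and_marketing_cost
FROM   gl_other_fact  f
JOIN   w_gl_account_d a ON a.row_wid = f.gl_account_wid
WHERE  a.group_acct_num = 'SM COST';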
Install OBIEE BI Apps on Linux and Windows

Typically, for demonstration purposes, you could get a 4 GB PC/laptop with enough disk space (150 GB) and get the entire BI Apps stack installed and configured. You can install the 10g database, OBIEE, BI Apps, and Informatica, and then configure the DAC and run the ETL, all on the same Windows box. For a real implementation you would preferably use Linux/Unix/Solaris boxes. In this situation, where you have to install on Linux, there are components which can be installed only on Windows, and the files then need to be FTP'd over to the Linux box. At a minimum, you will need to find a Windows PC to install the following:
o JDK
o OBIEE
o BI Apps
o Informatica client components

If you are going to install the above on Windows, you might install the server components on the Windows machine as well, so that it provides an extra play area for local testing and troubleshooting.

High-level steps to install on Windows:
o Install the database:
  - Download the 10g database and install it on Windows.
  - Create the olapdb, infadb, and dacdb users.
  - Grant dba, connect, resource to olapdb, infadb, and dacdb (see the sketch after this section).
o Download JDK 1.5 (there is a bug with JDK 1.6).
o Download OBIEE for Windows.
o Download OBIEE BI Apps (available only for Windows).
o Download Informatica for Windows.
o Once the downloads are done:
  - Install OBIEE on Windows.
  - Install OBIEE BI Apps on Windows.
  - Install the Informatica client and server.
o Download and install the JDK on Linux.
o Download and install OBIEE on Linux.
o Download Informatica for Linux:
  - Unzip disk1 and disk2.
  - Run install.sh from disk1.
  - Run install.sh from disk2 (SP5 patch).
o Copy the DAC, oracle_bi_dw_base.rep, and the source and lookup directories from the OBIEE/dwrep/informatica directory on Windows to the appropriate Informatica directory on the Linux server. On Windows you can assemble all the needed files into a new folder, zip them up, and FTP them to Linux.
o Start the Informatica services.
o Launch the Informatica repository console (admin).
o Restore the repository oracle_bi_dw_base.rep.
o Launch the DAC client.
o Create a new configuration for the data warehouse.
o Create the data warehouse tables.
o Configure Informatica Workflow Manager.
o Configure the DAC server.

This is just the Mount Everest view of the complete install and initial configuration. Once this is done you will need to launch the DAC client, set up the containers, map the domain values and parameters for the required Analytics, and finally run the ETL. In a later post I will try to cover some more details on the DAC configuration, but that's it for now. I frequently provide remote assistance to a lot of clients for installation and configuration of the entire BI Apps stack. If you need similar help just shoot me an email at njethwa@gmail.com
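A sketch of the "create olapdb, infadb, dacdb users" step from the database bullet above; the user names match the step, the passwords are placeholders, and the wide-open grants are only suitable for a sandbox.

-- Run as a DBA in the 10g database: one schema each for the warehouse,
-- the Informatica repository, and the DAC repository.
CREATE USER olapdb IDENTIFIED BY olapdb_pwd;
CREATE USER infadb IDENTIFIED BY infadb_pwd;
CREATE USER dacdb  IDENTIFIED BY dacdb_pwd;

GRANT dba, connect, resource TO olapdb;
GRANT dba, connect, resource TO infadb;
GRANT dba, connect, resource TO dacdb;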
OBIEE and Daily Business Intelligence

At the current client, the customer is using Oracle CRM, and the packaged OBIEE BI Apps has CRM content sourced from Siebel CRM. BI Apps can help us provide the content for Finance, Supply Chain, Order Management, and HR (PeopleSoft). For the CRM piece we have enabled Daily Business Intelligence dashboards for Customer Support, Depot Repair, and Field Service. The good thing is that Oracle provides a repository for the DBI content (which I think they have discontinued promoting). Using this repository and the OBIEE BI Apps repository, we can merge the two and provide a single point of entry for almost all of their Analytics requirements.

OBIEE – BI Apps – Analytics and Module Mapping

When deciding to implement BI Apps for OBIEE, it is worth understanding what each packaged analytics module contains and how it maps to the Oracle e-Business Suite modules. There are a lot of do's and don'ts when it comes to customizations of any seeded content. The usual rule still holds true: "Oracle will not support any customizations." There are always rules to be followed and certain precautions to be taken so that your customizations do not break any future patching. So there is always a risk.

Following is the mapping between the analytics modules and the EBS modules (Oracle Business Intelligence application or module, followed by the associated source applications):

o Order Management Analytics: Oracle Order Management
o Order Fulfillment Analytics Option: Oracle Financials (for Revenue)
o Inventory Analytics: Oracle Supply Chain, Oracle Discrete Manufacturing
o Procurement and Spend Analytics / Supplier Performance Analytics: Oracle Purchasing, Oracle iProcurement
o Payables Analytics: Oracle Financials (Payables)
o General Ledger & Profitability Analytics, Receivables Analytics: Oracle Financials (GL, Payables, Receivables)
o Human Resources Operations & Compliance Analytics, Human Resources Compensation Analytics: Oracle Human Resources, Oracle Payroll
o Financial Services Profitability Analytics: Oracle Financial Services Applications (OFSA) Financial Data Manager 4.5.x