Testing in large-scale Business Intelligence (BI) projects faces challenges in data quality assurance, verification of metrics and aggregation rules, source-to-target mapping accuracy, traceability of test cases to requirements, and anomalies in dimension-to-fact relationships.

Oracle Business Intelligence Enterprise Edition (OBIEE) is a BI tool that addresses quite a few of these challenges through the nature of the product's growth strategy. As Gartner puts it, "70 functional and industry-specific packaged BI applications built on the Oracle BI Enterprise Edition Platform attests to Oracle's understanding of how to leverage the market interest in domain-specific and prepackaged solutions as a growth driver for its platform." Customers who buy OBIEE typically also buy a relevant packaged solution. These packaged solutions come with a pre-defined metrics framework in addition to the other general BI components. Most of this off-the-shelf material is fairly well tested and robust. However, a typical implementation will always involve a great deal of customization or configuration, and therefore needs a smart testing approach focused on defect detection and quality assurance.

1. Source to target mapping- The backbone of a successful BI solution is an accurate, well-defined source-to-target mapping of each metric and dimension used. Source-to-target mapping helps the designers, developers, and testers on the project understand where the data comes from and how it is transported and/or transformed into the final form displayed to end users. The source-to-target mapping sheet should identify the original source column name, any filter conditions or transformation rules applied in the ETL processes, the destination column name in the data warehouse or data mart, and the definition used in the repository (RPD file) for the metric or dimension. A separate identifier or color coding can also be used to distinguish custom-defined or customized metrics and dimensions from the off-the-shelf material supplied in the solution. This helps derive a testing strategy focused more on the customized elements of the solution.
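For illustration, a single row in such a mapping sheet might look like the following; every object name here is invented for the example:

    Source column     : ORDERS.ORDER_AMOUNT
    ETL filter / rule : ORDER_STATUS = 'CLOSED'; amount converted to USD
    Target column     : W_SALES_F.REVENUE
    RPD definition    : "Fact - Sales"."Closed Revenue" = SUM(REVENUE)
    Custom / packaged : Custom (color coded)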

2. Categorizing the metrics- It is important to classify the metrics from multiple perspectives, such as their frequency of use, potential performance impact, and the complexity of the calculations involved. Such a classification helps drive testing priority.
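A hypothetical classification, with both the metrics and the ratings invented for illustration, might look like this:

    Metric               Frequency of use   Perf. impact   Complexity   Test priority
    Total Revenue        High               Low            Low          High
    Inventory Turns      Medium             High           High         High
    Days to Close (avg)  Low                Low            Medium       Low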

3. Authentication and authorization- The project's security requirements should clearly document the authentication and authorization needs. The security test cases have to be written from the perspectives of the different user roles; for example, a supplier account reaching the application over the internet should see only the data belonging to that supplier. These tests can be complex when data is accessed across firewalls and some portion of the application is exposed to customers or suppliers over the internet.

4. Dashboard charts and filter criteria- User interface testing should exercise multiple combinations of the available filter criteria. OBIEE provides enough drill-down capability to verify the underlying data behind the clickable components of the charts. Test cases should be detailed enough to verify data aggregated at various levels. For example, in a typical tabbed interface there are global filters that apply to all tabs and other filters that apply only to an individual tab; testing should cover both levels. Also, the default overview tab may display a summary of the most critical data from the other tabs, so the test cases have to cover all the available cross-verification mechanisms. Testers may build their own queries using the base measures and dimensions within the RPD interface to check the accuracy of the information displayed on the dashboard charts or their detailed drill-downs, as sketched below.
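As a sketch, such a cross-check can be expressed in OBIEE logical SQL (issued, for example, through the BI Server's Issue SQL page); the subject area, columns, and filter values below are hypothetical:

    -- Re-derive the aggregate behind a chart, applying the same
    -- global (fiscal year) and tab-level (region) filter values
    SELECT "Time"."Fiscal Quarter", "Fact - Sales"."Revenue"
    FROM "Sales Analytics"
    WHERE "Time"."Fiscal Year" = 2011
      AND "Customer"."Region" = 'EMEA'
    ORDER BY "Time"."Fiscal Quarter"

The result should match the chart rendered with the same filter selections.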

5. Testing in hops- In a typical OBIEE project, it is advisable to test in multiple hops rather than attempting to test everything at once.

a) The first set of tests can verify the accuracy of the column-to-column transport of data between source and target. This verification is typically done using SQL statements on the source and target databases.
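A minimal sketch of such a check, with invented table and column names, is a pair of reconciliation queries whose row counts and totals should agree once any ETL filter from the mapping sheet is applied on the source side:

    -- On the source database
    SELECT COUNT(*) AS row_cnt, SUM(ORDER_AMOUNT) AS amt_total
    FROM ORDERS
    WHERE ORDER_STATUS = 'CLOSED';

    -- On the target warehouse, after the corresponding ETL run
    SELECT COUNT(*) AS row_cnt, SUM(REVENUE) AS amt_total
    FROM W_SALES_F;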

b) The next step is to verify the accuracy of the repository (the RPD file). These tests include applying appropriate dimensional filters to the metrics and checking the formulas used to compute them. Testers can build two sets of comparable queries within the repository interface: the first set uses the metrics themselves, and the second set uses the base measures combined through the formula used to compute the corresponding metric. The formulas recorded in the source-to-target mapping can be used to generate the second set of queries. These tests verify the metrics defined within the repository.
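For example, if the mapping sheet defines a hypothetical metric "Average Order Value" as Revenue divided by Order Count, the two comparable logical SQL queries would be:

    -- Set 1: the metric as defined in the repository
    SELECT "Customer"."Region", "Fact - Sales"."Average Order Value"
    FROM "Sales Analytics"

    -- Set 2: the same value rebuilt from base measures, using the
    -- formula recorded in the source-to-target mapping
    SELECT "Customer"."Region",
           "Fact - Sales"."Revenue" / "Fact - Sales"."Order Count"
    FROM "Sales Analytics"

Any discrepancy points either at the repository definition or at a stale formula in the mapping sheet.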

c) The next step is to verify the dashboards and reports against comparable queries on the repository metrics: testers compare dashboard charts and reports with the corresponding results of queries they execute on the repository metrics, using the same filter values.
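Where many such comparisons are needed, the logical SQL can be collected into script files and run in batch against the BI Server, for instance with the nqcmd command-line utility shipped with OBIEE; the DSN, credentials, and file names below are placeholders:

    nqcmd -d AnalyticsWeb -u weblogic -p <password> -s rpd_metric_checks.sql -o results.txt

The saved output can then be diffed against the values captured from the dashboards.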

d) Finally, the functional interface tests verify the lookups, performance, ease of use, look and feel, and so on.

The first three types of tests can be performed by any tester who can write simple SQL statements.

6. Structure and organization of test cases- The choice of test case naming convention and structure helps organize the test artifacts and aids a great deal in implementing the overall testing strategy. For example, if the test cases are grouped by the nature of the tests (source-to-target verification, RPD metric tests, functional, security, performance, usability), it is easy to pick the tests matching the testing context and tester capabilities. If instead the tests are grouped by user profile or role, it is easier to distribute them among a larger end-user population. One possible ID scheme is sketched below.
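One possible scheme, purely as an illustration, encodes the test group and subject into the test case name:

    S2T_SALES_001    -- source-to-target check on the sales fact
    RPD_REV_003      -- repository metric test for a revenue metric
    SEC_SUPPLIER_01  -- authorization test from the supplier role
    UI_OVERVIEW_02   -- filter test on the overview dashboard tab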

7. User acceptance criteria- Users typically have an existing legacy mechanism for verifying whether what is displayed in the new solution makes sense. Testers should dig into this and understand how the end users built the project acceptance criteria, and should challenge the assumptions the business community made in deriving those criteria. This activity builds an end-user perspective into the testing effort early on.

8. Naming conventions in the repository- The naming convention followed within the repository becomes critical if power users are expected to have ad hoc query capabilities and need the ability to generate reports or charts on their own. When developers create custom-defined metrics or dimensions, their names can easily be mixed up with the off-the-shelf objects and cause significant confusion for end users. This aspect should be carefully verified during peer reviews or the testing effort.
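One simple convention, assumed here only as an example, is to give custom objects a distinct marker so they sort and read apart from the packaged ones in the presentation layer:

    "Fact - Sales"."Revenue"             -- packaged metric, name untouched
    "Fact - Sales"."Revenue (Custom)"    -- customized copy, clearly flagged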

Conclusion

BI projects using OBIEE typically include off-the-shelf packaged components that need relatively little testing effort, but the nature and amount of customization warrants a smart testing strategy built on the overall project architecture and situation. The testing strategy should make it possible to isolate the custom-built portions from the packaged components, based on the source-to-target maps, the metric definitions, and the structure of the repository.