Table of Contents

  1. Preface
  2. Backing Up and Restoring the Data Vault
  3. Configuring Centera as a Remote Data Vault Store
  4. Configuring Data Archive for a Legacy Source Using Legacy Adapters
  5. Data Archive Seamless Access for PeopleSoft
  6. Data Archive Transaction Restore API
  7. Dropping and Truncating Partitions in Data Archive
  8. High Availability Configuration for the Data Archive and File Archive Service Versions 6.1 to 6.3
  9. High Availability Configuration for the Data Vault Version 6.4 and Later
  10. How to Create Business Rules to Archive and Purge Transactional Data
  11. How to Uninstall Data Archive 5.1
  12. How to Uninstall Data Archive 5.3
  13. How to Use Scripts to Change Database User Passwords in the ILM Repository
  14. IBM DB2 Database Connectivity and Setup for Data Archive
  15. Installing Data Visualization
  16. Integrating Third-Party Schedulers in ILM Engine
  17. Parallel Processing in Data Archive
  18. Seamless Access Configuration for Siebel Applications
  19. Seamless Access Setup for Oracle E-Business Suite
  20. Using the Data Vault Service JDBC Driver to Connect to the Data Vault
  21. Using Multiple Engines in an ILM Environment
  22. Using PowerExchange ODBC Connections in a Data Archive Retirement Project
  23. Discovering Foreign Key Relationships in Enterprise Data Manager

Data Archive How-To Guide

Step 4. Discover Table Relationships

Define the type of profiling that you want to implement in Enterprise Data Manager.
  1. When the import metadata job is complete, select View > Refresh, and then expand the entity tree in the Constraints view (View > Constraints) of Enterprise Data Manager.
    The following image shows the entity tree hierarchy:
    The Custom Apps Only application contains a Custom Product Family Version application version, an AIRLINE_LOYALTY schema, and TICKET tables.
  2. Right-click the schema that you want to run the profile on, and select Discover Keys and Relationships. For example, right-click the AIRLINE_LOYALTY schema.
    The Metadata Import Wizard opens.
  3. In the Metadata Import Wizard, select Discovery and Profiling with Informatica Data Quality and Discover Table Relationships.
    The following image shows the Metadata Import Wizard with the correct parameters selected:
    In the Metadata Import Wizard, the Discovery and Profiling with Informatica Data Quality and the Discover Table Relationships parameters are selected.
  4. Click Next.
  5. In the Connection Name field, select the connection that you defined in Step 3. Optionally, you can define a new connection.
    If you choose to define a new connection, enter the following parameters:
    Database Type
    The type of database that you want to connect to.
    Host
    The name of the machine that hosts the source data.
    Port
    The port number that the source database uses.
    Database Name / SID
    Unique identifier or system identifier for the source database server.
    Source Connection String
    Connect string that determines how the database native client connects to the database. The Data Integration Service uses the connect string to create a connection object to the source database.
    The following table describes the connect string for each source database:
    IBM DB2
    Database alias name of the remote database configured from the IBM DB2 Connect client. Connect string syntax: dbname
    Microsoft SQL Server
    Name of the host of the Microsoft SQL Server database. Connect string syntax: servername@dbname
    Oracle
    Full service name or SID for the Oracle database. The TNS name that is configured from the ILM application server to the database server. The name is defined in the application server TNSNAMES.ORA file. Connect string syntax: dbname.world
    Data Vault
    The ODBC data source name created on the machine hosting Informatica Data Quality. Connect string syntax: dsn
    If you want to verify these values before you enter them, see the connectivity sketch after this procedure.
    User Name
    The name of the user that connects to the source database. The user is the owner of the schema that you want to run the profile on. The user must own the tables that you want to run the profile on and must have privileges to select table catalogs.
    For Oracle and IBM DB2, the user name is the same as the schema name.
    Password
    The password for the database user name.
  6. Optionally, you can click Advanced Options to define the following parameters:
    Data Types Used in Comparison
    Determines whether the profile uses data dictionary data types or generic data types.
    Choose one of the following options:
    • Inferred Data Types. Select to use generic data types.
    • Metadata. Select to use data dictionary data types.
    Comparison Case Sensitivity
    Determines whether profiling takes into account case differences.
    Choose one of the following options:
    • Case Sensitive. Select to run case sensitive profiling. The Case Sensitive option can affect job performance.
    • Not Case Sensitive. Select to ignore case differences.
    Trim Spaces before Comparison
    Determines whether the Data Integration Service trims leading and trailing spaces from column data before the comparison.
    Maximum Foreign Keys Returned
    The maximum number of inferred columns that the Data Integration Service returns.
    Minimum Confidence Percent
    The minimum confidence percentage that an inferred foreign key must exceed for the profile to return it. For example, if you enter 90, the profile returns only foreign keys inferred with more than 90% confidence. For an illustration of how this threshold filters inferred keys, see the sketch after this procedure.
    Regenerate Signature
    Reloads column signatures if the source data changes.
    The following image shows the advanced parameters:
    The advanced parameters are Data Types Used in Comparison, Comparison Case Sensitivity, Trim Spaces before Comparison, Maximum Foreign Keys Returned, Minimum Confidence Percent, and Regenerate Signature.
  7. Click Save to exit the Advanced Options window.
  8. Click Next in the Metadata Import Wizard.
  9. Select the tables that you want to include in the profiling job. You can double-click a table to move it to the Selected box or you can highlight tables and click the down arrow.
    The following image shows the Metadata Import Wizard with the tables that contain ticket information selected:
    The TICKET, TICKET_COUPON, TICKET_COUPON_SAVE, TICKET_PARTITION, TICKET_PARTITION_TARGET, and TICKET_SAVE tables are in the Selected box.
  10. Click Finish.
    A confirmation window appears with the job ID.
  11. You can check the status of the job in the Monitor Jobs view in Data Archive. When the job is complete, the job status is Completed.
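
The wizard does not test the connection values until the profiling job runs. If you want to verify them first, the following sketch shows one possible check that uses the open-source pyodbc package. The DSN, user name, password, and schema are placeholders rather than values defined by Data Archive, and the script is not part of Enterprise Data Manager; adapt it to the ODBC driver or data source configured in your environment.

    # Hedged sketch: check connection values before entering them in the wizard.
    # DV_DSN, profiler, and secret are placeholders; replace them with your own
    # values. Requires the pyodbc package and an ODBC driver or DSN on the
    # machine where the script runs.
    import pyodbc

    try:
        # For a Data Vault source, the connect string is the ODBC data source name.
        conn = pyodbc.connect("DSN=DV_DSN;UID=profiler;PWD=secret", timeout=10)
        cursor = conn.cursor()
        # List the tables that the profiling user can see in the schema you plan
        # to profile. The user must own these tables.
        for row in cursor.tables(schema="AIRLINE_LOYALTY", tableType="TABLE"):
            print(row.table_name)
        conn.close()
        print("Connection values look usable.")
    except pyodbc.Error as exc:
        print(f"Connection failed; check the DSN, host, port, or credentials: {exc}")

A successful run confirms only basic connectivity and catalog access; the profiling job still validates the parameters it needs when it runs.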
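
The Minimum Confidence Percent option in the Advanced Options window acts as a simple filter on the confidence that the profile assigns to each inferred foreign key. The following sketch illustrates that filtering with invented candidate keys and confidence values; the actual inference and scoring are performed by the profiling job, not by this code, and the column names are hypothetical.

    # Illustration only: how a Minimum Confidence Percent of 90 filters inferred
    # foreign keys. The candidates and confidence values are invented; the
    # profiling job computes the real ones.
    MIN_CONFIDENCE_PERCENT = 90  # the value entered in Advanced Options

    candidates = [
        ("TICKET_COUPON.TICKET_ID", "TICKET.TICKET_ID", 99.4),
        ("TICKET_SAVE.TICKET_ID", "TICKET.TICKET_ID", 93.0),
        ("TICKET_COUPON.FARE_CODE", "TICKET.FARE_CODE", 61.5),
    ]

    # Keep only the candidates whose confidence exceeds the threshold.
    for child, parent, confidence in candidates:
        if confidence > MIN_CONFIDENCE_PERCENT:
            print(f"{child} -> {parent} ({confidence}%)")
    # Prints the two TICKET_ID candidates; the 61.5% candidate is discarded.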
