Table of Contents

  1. Preface
  2. Introduction to Informatica Data Engineering Integration
  3. Mappings
  4. Mapping Optimization
  5. Sources
  6. Targets
  7. Transformations
  8. Python Transformation
  9. Data Preview
  10. Cluster Workflows
  11. Profiles
  12. Monitoring
  13. Hierarchical Data Processing
  14. Hierarchical Data Processing Configuration
  15. Hierarchical Data Processing with Schema Changes
  16. Intelligent Structure Models
  17. Blockchain
  18. Stateful Computing
  19. Appendix A: Connections Reference
  20. Appendix B: Data Type Reference
  21. Appendix C: Function Reference

Hive Sources on Hadoop

You can include Hive sources in an Informatica mapping that runs in the Hadoop environment.
Consider the following restrictions when you configure a Hive source in a mapping that runs in the Hadoop environment:
  • A mapping fails to run if the Hive source definition contains Unicode characters.
  • The Hive source cannot reside on a remote cluster. A remote cluster is a cluster that is remote from the machine that the Hadoop connection references in the mapping.
  • If a Hive source contains hierarchical data, you cannot push transformation logic to the source database.
  • The third-party Hive JDBC driver does not return the correct precision and scale values for the Decimal data type. As a result, when you import Hive tables with a Decimal data type into the Developer tool, the Decimal data type precision is set to 38 and the scale is set to 0. Consider the following configuration rules and guidelines based on the version of Hive:
    • Hive 0.11 and 0.12. Accept the default precision and scale for the Decimal data type in the Developer tool.
    • Hive 0.12 with Cloudera CDH 5.0. You can configure the precision and scale fields for source columns with the Decimal data type in the Developer tool.
    • Hive 0.13 and above. You can configure the precision and scale fields for source columns with the Decimal data type in the Developer tool.
    • Hive 0.14 and above. The precision and scale used for the Decimal data type in the Hive database appear in the Developer tool.
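The fallback described above can be illustrated with a short sketch. This is not Informatica's actual implementation; the function name and values are hypothetical, but the behavior mirrors the documented rule: when the JDBC driver reports no usable precision or scale for a Decimal column, the tool substitutes a precision of 38 and a scale of 0.

```python
# Illustrative sketch only (hypothetical helper, not Informatica code):
# substitute default Decimal metadata when the Hive JDBC driver reports
# missing or invalid precision/scale values.

DEFAULT_PRECISION = 38  # default precision applied on import
DEFAULT_SCALE = 0       # default scale applied on import

def resolve_decimal_metadata(reported_precision, reported_scale):
    """Return (precision, scale) for a Decimal column.

    If the driver reports a missing or invalid precision (None or <= 0),
    fall back to the 38/0 defaults; otherwise pass the values through.
    """
    if reported_precision is None or reported_precision <= 0:
        return DEFAULT_PRECISION, DEFAULT_SCALE
    return reported_precision, reported_scale

# An older driver that reports no metadata yields the 38/0 default:
print(resolve_decimal_metadata(None, None))   # -> (38, 0)
# A driver that reports real metadata passes it through unchanged:
print(resolve_decimal_metadata(18, 6))        # -> (18, 6)
```

On Hive 0.13 and above you can correct the defaulted values by editing the precision and scale fields in the Developer tool, as described in the guidelines above.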
A mapping that runs on the Spark engine can read from partitioned and bucketed Hive source tables.
