Table of Contents

  1. Preface
  2. Introduction to Informatica Data Engineering Integration
  3. Mappings
  4. Mapping Optimization
  5. Sources
  6. Targets
  7. Transformations
  8. Python Transformation
  9. Data Preview
  10. Cluster Workflows
  11. Profiles
  12. Monitoring
  13. Hierarchical Data Processing
  14. Hierarchical Data Processing Configuration
  15. Hierarchical Data Processing with Schema Changes
  16. Intelligent Structure Models
  17. Blockchain
  18. Stateful Computing
  19. Appendix A: Connections Reference
  20. Appendix B: Data Type Reference
  21. Appendix C: Function Reference

Informatica Data Engineering Integration Overview
Informatica Data Engineering Integration enables your organization to process large, diverse, and fast-changing data sets so you can get insights into your data. Use Data Engineering Integration to perform data engineering integration and transformation without writing or maintaining external code.
Use Data Engineering Integration to collect diverse data faster, build business logic in a visual environment, and eliminate hand-coding to gain insights into your data. Consider implementing a data engineering project in the following situations:
  • The volume of the data that you want to process is greater than 10 terabytes.
  • You need to analyze or capture data changes in microseconds.
  • The data sources are varied and range from unstructured text to social media data.
You can perform run-time processing in the native environment or in a non-native environment.

The native environment is the Informatica domain where the Data Integration Service performs all run-time processing. Use the native run-time environment to process data that is less than 10 terabytes.

A non-native environment is a distributed cluster outside of the Informatica domain, such as Hadoop or Databricks, where the Data Integration Service can push run-time processing. Use a non-native run-time environment to optimize mapping performance and process data that is greater than 10 terabytes.
