Table of Contents

  1. Preface
  2. Introduction to Informatica Big Data Management
  3. Connections
  4. Mappings in the Hadoop Environment
  5. Mapping Objects in the Hadoop Environment
  6. Processing Hierarchical Data on the Spark Engine
  7. Stateful Computing on the Spark Engine
  8. Monitoring Mappings in the Hadoop Environment
  9. Mappings in the Native Environment
  10. Profiles
  11. Native Environment Optimization
  12. Data Type Reference
  13. Complex File Data Object Properties
  14. Function Reference
  15. Parameter Reference

Step 1. Collect the Data

Identify the data sources from which you need to collect data.
Big Data Management provides several ways to move data into and out of Hadoop, depending on the data types, data volumes, and data latency requirements involved.
You can use PowerExchange adapters to connect to multiple big data sources. You can schedule batch loads to move data from multiple source systems to HDFS without staging the data. You can move changed data from relational and mainframe systems into HDFS or the Hive warehouse. For real-time data feeds, you can move data off message queues and into HDFS (see the streaming sketch after the list of data types below).
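
Big Data Management performs these batch loads through PowerExchange adapters rather than hand-written code, but the underlying operation is easy to picture. The following is a minimal sketch, not part of the product, that lands a local extract directly in HDFS using the standard Hadoop Java client; the NameNode address and file paths are assumptions for illustration.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsBatchLoad {
    public static void main(String[] args) throws Exception {
        // In a real cluster, fs.defaultFS normally comes from core-site.xml
        // on the classpath; it is set explicitly here for clarity.
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020"); // hypothetical host

        try (FileSystem fs = FileSystem.get(conf)) {
            // Copy a local extract straight into the HDFS landing zone,
            // with no intermediate staging area.
            fs.copyFromLocalFile(
                    new Path("/data/exports/orders.csv"),    // hypothetical source
                    new Path("/landing/orders/orders.csv")); // hypothetical target
        }
    }
}
```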
You can collect the following types of data:
  • Transactional
  • Interactive
  • Log file
  • Sensor device
  • Document and file
  • Industry format
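
For the real-time case mentioned above, data typically arrives on a message queue and is drained continuously into HDFS. Below is a minimal consumer sketch using the Apache Kafka Java client, again illustrative rather than part of the product; the broker address, consumer group, and topic name are assumptions, and a real feed would buffer records and roll them into HDFS files instead of printing them.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class QueueDrain {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092");   // hypothetical broker
        props.put("group.id", "hdfs-lander");            // hypothetical group
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("sensor-events")); // hypothetical topic
            while (true) {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // A production consumer would append these records to
                    // HDFS; printing stands in for that write here.
                    System.out.println(record.value());
                }
            }
        }
    }
}
```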

