Table of Contents

  1. Preface
  2. Introduction to Informatica Big Data Management
  3. Mappings in the Hadoop Environment
  4. Mapping Sources in the Hadoop Environment
  5. Mapping Targets in the Hadoop Environment
  6. Mapping Transformations in the Hadoop Environment
  7. Processing Hierarchical Data on the Spark Engine
  8. Configuring Transformations to Process Hierarchical Data
  9. Processing Unstructured and Semi-structured Data with an Intelligent Structure Model
  10. Stateful Computing on the Spark Engine
  11. Monitoring Mappings in the Hadoop Environment
  12. Mappings in the Native Environment
  13. Profiles
  14. Native Environment Optimization
  15. Cluster Workflows
  16. Connections
  17. Data Type Reference
  18. Function Reference
  19. Parameter Reference

Generate the Source File Name

You can generate the source file name for a flat file data object. The content of the file name column remains consistent regardless of the mode of execution.
When you push processing to a run-time engine, the file name column returns the path in one of the following formats, depending on the engine and the type of source files it processes:
| Run-time Engine | Type of Files Processed | Returned Path |
|---|---|---|
| Hive | HDFS source files | `<staged path><HDFS file path>`<br>For example: `hdfs://host name:port/hive/warehouse/ff.txt` |
| Hive | Flat files in the local system | `<local file path>`<br>For example: `/home/devbld/Desktop/ff.txt` |
| Blaze | Flat files in the local system | `<staged path><local file path>`<br>For example: `hdfs://host name:port/hive/warehouse/home/devbld/Desktop/ff.txt` |
| Spark | HDFS source files | `hdfs://<host name>:<port>/<file name path>`<br>For example: `hdfs://host name:port/hive/warehouse/ff.txt` |
| Spark | Flat files in the local system | `<local file path>`<br>For example: `/home/devbld/Desktop/ff.txt` |
For a high availability cluster, the file name column returns the path in the following format:
hdfs://<host name>/<file name path>
For example,
hdfs://irldv:5008/hive/warehouse/ff.txt
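To make the format rules above concrete, the following minimal sketch maps an engine and source file type to the returned path. This is an illustration of the documented formats only, not Informatica's implementation; the function name, parameters, and default values are hypothetical.

```python
# Hypothetical sketch of the documented file name column formats.
# staged_path, host, and port are assumptions supplied by the caller.

def returned_path(engine, file_type, file_path, staged_path="", host="", port=0):
    """Return the file name column value for a run-time engine and file type."""
    if engine == "Hive" and file_type == "hdfs":
        # Hive with HDFS sources: <staged path><HDFS file path>
        return f"{staged_path}{file_path}"
    if engine == "Blaze" and file_type == "local":
        # Blaze with local flat files: <staged path><local file path>
        return f"{staged_path}{file_path}"
    if engine == "Spark" and file_type == "hdfs":
        # Spark with HDFS sources: hdfs://<host name>:<port>/<file name path>
        return f"hdfs://{host}:{port}/{file_path.lstrip('/')}"
    # Hive or Spark with local flat files: the local path is returned as-is.
    return file_path

# Examples matching the documented formats:
print(returned_path("Spark", "hdfs", "hive/warehouse/ff.txt",
                    host="irldv", port=5008))          # hdfs://irldv:5008/hive/warehouse/ff.txt
print(returned_path("Hive", "local", "/home/devbld/Desktop/ff.txt"))
```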


Updated October 23, 2019