What's New and Changed

PowerExchange for HDFS

Effective in version 10.2.2, PowerExchange for HDFS includes the following new features:
  • When you create a complex file data object, you can select an operating system profile to increase security and to isolate the design-time user environment when you import and preview metadata from a Hadoop cluster.
    You can choose an operating system profile if the Metadata Access Service is configured to use operating system profiles. The Metadata Access Service imports the metadata with the default operating system profile assigned to the user, and you can change to any other available operating system profile.
  • When you run a mapping in the native environment or on the Spark engine to read data from a complex file data object, you can use wildcard characters to specify the source directory name or the source file name.
    You can use the following wildcard characters:
    ? (Question mark)
    The question mark character (?) allows one occurrence of any character.
    * (Asterisk)
    The asterisk character (*) allows zero or more occurrences of any character. A short matching sketch appears after this feature list.
  • You can use complex file objects as dynamic sources and targets in a mapping.
  • You can use complex file objects to read data from and write data to a complex file system.
  • When you run a mapping in the native environment or on the Spark engine to write data to a complex file data object, you can choose to overwrite the target data. The Data Integration Service deletes the existing target data before it writes the new data (see the overwrite sketch after this list).
  • When you create a data object read or write operation, you can read the data in the FileName port, which contains the endpoint name and source path of each file (see the parsing sketch after this list).
  • You can now view the data object operations immediately after you create the data object read or write operation.
  • You can add new columns or modify existing columns when you create a data object read or write operation.
  • When you read from or write to an Avro, JSON, ORC, or Parquet file, you can copy the columns of source transformations, target transformations, or any other transformations, and paste them directly into the data object read or write operation.
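
The wildcard semantics described above follow shell-style globbing. The following Python sketch is not Informatica code; it uses the standard fnmatch module, with invented file names and patterns, to illustrate which names each wildcard selects:

    import fnmatch

    # Hypothetical file names in a source directory.
    files = [
        "sales_2008.csv",
        "sales_2009.csv",
        "sales_20090.csv",
        "customers.avro",
    ]

    # ? matches exactly one character: selects sales_2008.csv and
    # sales_2009.csv, but not sales_20090.csv.
    print(fnmatch.filter(files, "sales_200?.csv"))

    # * matches zero or more characters: selects every .csv file.
    print(fnmatch.filter(files, "*.csv"))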
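
The overwrite behavior described above is analogous to the standard Spark "overwrite" save mode, in which existing data at the target path is deleted before the new data is written. The following minimal PySpark sketch shows that semantics; it is generic Spark code, not the PowerExchange implementation, and the target path is hypothetical:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("overwrite-sketch").getOrCreate()

    df = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])

    # mode("overwrite") deletes any existing data at the target path
    # before writing the new data.
    df.write.mode("overwrite").parquet("hdfs:///tmp/example/target")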
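
The FileName port described above carries the endpoint name and source path of a file. Assuming the value is a URI of the form hdfs://<endpoint>/<path> (an assumption; the exact format is not specified here), a Python sketch that splits it into those parts with the standard library:

    from urllib.parse import urlparse

    # Hypothetical FileName port value; the actual format may differ.
    file_name = "hdfs://namenode.example.com:8020/data/input/customers.avro"

    parsed = urlparse(file_name)
    print(parsed.netloc)  # endpoint: namenode.example.com:8020
    print(parsed.path)    # source path: /data/input/customers.avro
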
For more information, see the Informatica PowerExchange for HDFS 10.2.2 User Guide.
