Table of Contents

  1. Preface
  2. Introduction to Data Transformation
  3. Data Processor Transformation
  4. Wizard Input and Output Formats
  5. Relational Input and Output
  6. Using the IntelliScript Editor
  7. XMap
  8. Libraries
  9. Schema Object
  10. Command Line Interface
  11. Scripts
  12. Parsers
  13. Script Ports
  14. Document Processors
  15. Formats
  16. Data Holders
  17. Anchors
  18. Transformers
  19. Actions
  20. Serializers
  21. Mappers
  22. Locators, Keys, and Indexing
  23. Streamers
  24. Validators, Notifications, and Failure Handling
  25. Validation Rules
  26. Custom Script Components

User Guide

Optimizing Large COBOL File Processing in the Hadoop Environment

You can optimize how a mapping with a complex file reader and a Data Processor transformation processes large COBOL files in the Hadoop environment.
To optimize large COBOL file processing, you must be able to split the records with a regular expression. If the COBOL file can be split this way, you can define an input parameter for the complex file reader that provides the regular expression that determines how to split record processing in the Hadoop environment.
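The splitting expression itself depends on the layout of your COBOL file. As a conceptual illustration only, the following Python sketch shows how a regular expression that matches the start of each record can divide a byte buffer into individual records. The pattern, sample data, and function names are hypothetical and are not part of the Informatica product; they only demonstrate what it means for a file to be splittable by a regular expression.

```python
import re

# Hypothetical record layout: each record begins with a 6-digit sequence
# number followed by a one-letter record-type code, e.g. b"000001A".
# A zero-width lookahead marks each record start without consuming it.
RECORD_START = re.compile(rb"(?=\d{6}[A-Z])")

def split_records(data: bytes) -> list[bytes]:
    """Split a byte buffer into records at each RECORD_START boundary."""
    starts = [m.start() for m in RECORD_START.finditer(data)]
    if not starts:
        return [data]  # no boundaries found; treat the buffer as one record
    ends = starts[1:] + [len(data)]
    return [data[begin:end] for begin, end in zip(starts, ends)]

if __name__ == "__main__":
    sample = b"000001AHEADER DATA000002BDETAIL ONE 000003BDETAIL TWO "
    for record in split_records(sample):
        print(record)
```

If a pattern like this can unambiguously locate every record boundary in the file, the same expression can be supplied to the complex file reader as the input parameter that controls how record processing is split in the Hadoop environment.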
