Table of Contents

  1. Preface
  2. Transformations
  3. Source transformation
  4. Target transformation
  5. Aggregator transformation
  6. Cleanse transformation
  7. Data Masking transformation
  8. Deduplicate transformation
  9. Expression transformation
  10. Filter transformation
  11. Hierarchy Builder transformation
  12. Hierarchy Parser transformation
  13. Hierarchy Processor transformation
  14. Input transformation
  15. Java transformation
  16. Java transformation API reference
  17. Joiner transformation
  18. Labeler transformation
  19. Lookup transformation
  20. Mapplet transformation
  21. Normalizer transformation
  22. Output transformation
  23. Parse transformation
  24. Python transformation
  25. Rank transformation
  26. Router transformation
  27. Rule Specification transformation
  28. Sequence Generator transformation
  29. Sorter transformation
  30. SQL transformation
  31. Structure Parser transformation
  32. Transaction Control transformation
  33. Union transformation
  34. Velocity transformation
  35. Verifier transformation
  36. Web Services transformation

Transformations

Running a mapping with JSON data

When you run an elastic mapping that contains a Hierarchy Processor transformation with JSON-formatted data, you must create a mapping task. You cannot test the mapping by using the Run button in the Mapping Designer.
Hierarchical input files
When an input file is in JSON format, the schema often spans multiple lines. For example:
{
  "Name": "Tom",
  "Surname": "Day",
  "City": "Redwood City",
  "State": "CA",
  "Country": "USA",
  "Zip": "94063"
}
The following example shows the same JSON-formatted schema in a single line:
{"Name":"Tom","Street":"2100 Seaport Blvd","City":"Redwood City","State":"CA","Country":"USA","Zip":"94063"}
To read JSON-formatted input that spans multiple lines, set the following advanced session property in the mapping task:
Session Property Name: advanced.custom.property
Session Property Value: infaspark.json.parser.multiLine=True
If this property is not set, a multiple-line JSON input file will result in an output file with null values in every column.
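This property controls multi-line JSON parsing on the Spark engine. As an illustration of the same concept outside of a mapping task, the following PySpark sketch shows how Spark's own multiLine option handles a record that spans multiple lines. The local Spark session and the employees.json file name are assumptions for the example only:

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.appName("multiline-json-demo").getOrCreate()

  # Without the multiLine option, Spark expects one complete JSON record per
  # line and reads a pretty-printed record as null values. With multiLine
  # enabled, a record may span multiple lines, as in the example above.
  df = spark.read.option("multiLine", True).json("employees.json")
  df.show()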
Hierarchical output files
When the output is a JSON-formatted file, the Spark engine writes each output record to a separate file by default.
To write the output records to one JSON-formatted file, set the following Spark session property in the mapping task:
Session Property Name: spark.sql.shuffle.partitions
Session Property Value: 1
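Setting the number of shuffle partitions to 1 forces data written after a shuffle into a single partition, so the Spark engine produces one part file instead of one file per partition. The following PySpark sketch illustrates the same behavior outside of a mapping task; the local Spark session, the employees.json input, the State column, and the employee_summary output directory are assumptions for the example only:

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.appName("single-json-output").getOrCreate()

  # With one shuffle partition, data written after a shuffle lands in a
  # single partition and produces a single JSON part file.
  spark.conf.set("spark.sql.shuffle.partitions", "1")

  df = spark.read.option("multiLine", True).json("employees.json")
  # A hypothetical aggregation that triggers a shuffle before the write.
  summary = df.groupBy("State").count()
  summary.write.mode("overwrite").json("employee_summary")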