Table of Contents

  1. Preface
  2. Transformations
  3. Source transformation
  4. Target transformation
  5. Access Policy transformation
  6. Aggregator transformation
  7. B2B transformation
  8. Chunking transformation
  9. Cleanse transformation
  10. Data Masking transformation
  11. Data Services transformation
  12. Deduplicate transformation
  13. Expression transformation
  14. Filter transformation
  15. Hierarchy Builder transformation
  16. Hierarchy Parser transformation
  17. Hierarchy Processor transformation
  18. Input transformation
  19. Java transformation
  20. Java transformation API reference
  21. Joiner transformation
  22. Labeler transformation
  23. Lookup transformation
  24. Machine Learning transformation
  25. Mapplet transformation
  26. Normalizer transformation
  27. Output transformation
  28. Parse transformation
  29. Python transformation
  30. Rank transformation
  31. Router transformation
  32. Rule Specification transformation
  33. Sequence transformation
  34. Sorter transformation
  35. SQL transformation
  36. Structure Parser transformation
  37. Transaction Control transformation
  38. Union transformation
  39. Vector Embedding transformation
  40. Velocity transformation
  41. Verifier transformation
  42. Web Services transformation

Transformations

Running a mapping with JSON data

To run a mapping that contains a Hierarchy Processor transformation with JSON-formatted data, you need to use a mapping task.

Reading JSON input

When you read JSON data, the input files can be based on a schema that fits on a single line or on a schema that spans multiple lines.
The following sample shows a JSON schema on a single line:
{"Name":"Tom","Street":"2100 Seaport Blvd","City":"Redwood City","State":"CA","Country":"USA","Zip":"94063"}
The following sample shows a JSON schema that spans multiple lines:
{
  "Name": "Tom",
  "Surname": "Day",
  "City": "Redwood City",
  "State": "CA",
  "Country": "USA",
  "Zip": "94063"
}
By default, the Hierarchy Processor transformation reads each JSON schema as a single line. To read input that spans multiple lines, configure the formatting options in the Source transformation to read multiple-line JSON files.
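
Because the mapping task exposes Spark session properties, the difference between the two layouts corresponds to how a Spark JSON reader is configured. The following PySpark sketch only illustrates that difference and is not part of the mapping configuration; the file names are hypothetical.

# Illustration only: how a Spark JSON reader treats the two layouts shown above.
# The file names are placeholders, not files created by the mapping.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("json-read-demo").getOrCreate()

# Single-line JSON: one record per line (the default reader behavior).
single_line_df = spark.read.json("customers_single_line.json")

# Multiple-line JSON: a record spans several lines, so the reader needs the
# multiLine option to parse whole values instead of one record per line.
multi_line_df = spark.read.option("multiLine", "true").json("customers_multiple_line.json")

single_line_df.show()
multi_line_df.show()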

Writing JSON output

When you write JSON data, you can write each output record to a separate file, or you can write all output records to one file.
By default, each output record is written to a separate file. To write the output records to one JSON-formatted file, set the following Spark session property in the mapping task:
Session Property Name           Session Property Value
spark.sql.shuffle.partitions    1
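
Setting spark.sql.shuffle.partitions to 1 means that data passing through a shuffle ends up in a single partition, which is why the writer then produces one JSON file. As a rough sketch of that behavior in a standalone PySpark session, with hypothetical column names and output path:

# Sketch only: the effect of spark.sql.shuffle.partitions=1 on JSON output.
# Column names and the output path are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("json-write-demo").getOrCreate()
spark.conf.set("spark.sql.shuffle.partitions", "1")

records = [("Tom", "Redwood City", "CA"), ("Ana", "Austin", "TX")]
df = spark.createDataFrame(records, ["Name", "City", "State"])

# The sort forces a shuffle, which now resolves to a single partition,
# so the write step emits one part file rather than one file per partition.
df.orderBy("Name").write.mode("overwrite").json("customers_out")

The output directory still contains Spark bookkeeping files such as _SUCCESS, but only a single part file holds the JSON records.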
