
REST API Reference

Creating a test case

Use the following URI to create a test case:
POST /datavalidation-service/api/v1/testcase
Use the IDS-SESSION-ID header in the POST request.
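As a rough sketch, the request can be assembled with Python's standard library. The endpoint path and the IDS-SESSION-ID header come from this reference; the base URL and session value below are placeholders that depend on your pod and on a prior login call.

```python
import json
import urllib.request

API_PATH = "/datavalidation-service/api/v1/testcase"

def build_create_request(base_url: str, session_id: str, payload: dict) -> urllib.request.Request:
    """Assemble the POST request; session_id must come from a prior login call."""
    return urllib.request.Request(
        url=base_url.rstrip("/") + API_PATH,
        data=json.dumps(payload).encode("utf-8"),
        headers={"IDS-SESSION-ID": session_id, "Content-Type": "application/json"},
        method="POST",
    )

req = build_create_request(
    "https://example.informaticacloud.com",  # placeholder pod URL
    "<session-id>",                          # placeholder session ID
    {"testCaseName": "MyTestCase"},
)
# urllib.request.urlopen(req) would send it; omitted here.
```

Sending the request with `urllib.request.urlopen(req)` (or any HTTP client) returns the response described later in this topic.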
POST request
Use the following fields in the POST request:
Field
Required?
Description
testCaseName
Yes
Name of the test case.
description
No
Description of the test case.
leftObjectType
Yes
Type of the data source.
Specify one of the following values:
  • OBJECT. Indicates that the data source is a table or view.
  • SAVED_SQL_QUERY. Indicates that the data source is a saved SQL query.
leftConnId
Yes
Connection ID of the first connection.
When you open the connection in Administrator, the ID value that you see in the URL is the connection ID. For example, if the URL is <Informatica Intelligent Cloud Services URL>/cloudUI/products/administer/main/ConnectionDetailsWS/014N8B0B000000000898/read, the connection ID is 014N8B0B000000000898.
leftSavedSqlQuery
No
ID of the saved SQL query of the first connection.
Use the following format:
"leftSavedSqlQuery" : { "id" : <value> }
leftTableName
No
Table name of the first connection.
leftTablePk
Yes
Primary key of the table in the first connection.
You can also enter multiple keys.
rightObjectType
Yes
Type of the data source.
Specify one of the following values:
  • OBJECT. Indicates that the data source is a table or view.
  • SAVED_SQL_QUERY. Indicates that the data source is a saved SQL query.
rightConnId
Yes
Connection ID of the second connection.
When you open the connection in Administrator, the ID value that you see in the URL is the connection ID. For example, if the URL is <Informatica Intelligent Cloud Services URL>/cloudUI/products/administer/main/ConnectionDetailsWS/014N8B0B000000000898/read, the connection ID is 014N8B0B000000000898.
rightSavedSqlQuery
No
ID of the saved SQL query of the second connection.
Use the following format:
"rightSavedSqlQuery" : { "id" : <value> }
rightTableName
No
Table name of the second connection.
rightTablePk
Yes
Primary key of the table in the second connection.
You can also enter multiple keys.
runtimeEnvId
Yes
Runtime environment where the test case runs.
stagingConnectionId
Yes
Connection ID of the flat file connection that Data Validation uses to store reports.
When you open the flat file connection in Administrator, the ID value that you see in the URL is the connection ID. For example, if the URL is <Informatica Intelligent Cloud Services URL>/cloudUI/products/administer/main/ConnectionDetailsWS/014N8B0B000000000ATW/read, the connection ID is 014N8B0B000000000ATW.
frsDocLocation
Yes
The location in which to save the test case.
To save the test case in a project, use the following syntax:
"{\"type\":\"Project\",\"id\":\"<project_ID>\",\"path\":\"<project_name>\"}"
To save the test case in a folder, use the following syntax:
"{\"type\":\"Folder\",\"id\":\"<folder_ID>\",\"path\":\"<folder_name>\"}"
When you open a project, the ID value that you see in the URL is the project ID. Similarly, when you open a folder, the ID value that you see in the URL is the folder ID.
By default, the test case is saved under the Default project.
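Because frsDocLocation is itself a JSON string embedded inside the JSON request body, building it with `json.dumps` avoids hand-escaping the inner quotes. A minimal sketch, using the project ID and path from this page's POST request sample:

```python
import json

def frs_doc_location(loc_type: str, loc_id: str, path: str) -> str:
    # Serialize the inner object to a string; the request body's own JSON
    # encoding then produces the escaped \" form shown above.
    return json.dumps({"type": loc_type, "id": loc_id, "path": path},
                      separators=(",", ":"))

# IDs taken from this page's POST request sample.
loc = frs_doc_location("Project", "436TU7N2RsicOmYQGVpWAl", "SG")
```

When `loc` is placed in the request payload and the payload is serialized, the inner quotes are escaped automatically, matching the syntax shown above.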
emailNotificationStrategy
Yes
Specify one of the following values:
  • ORG. Data Validation uses the default email notification options that are configured in Administrator.
  • CUSTOM. Specify the email addresses that Data Validation must use for the success and failure email notifications. Separate multiple email addresses with a comma.
  • NONE. Data Validation doesn't send email notifications.
successEmails
Required if emailNotificationStrategy is set to CUSTOM
Specify the email addresses that Data Validation must use for the success email notifications. Separate multiple email addresses with a comma.
errorEmails
Required if emailNotificationStrategy is set to CUSTOM
Specify the email addresses that Data Validation must use for the failure email notifications. Separate multiple email addresses with a comma.
leftConnPath
Required for Amazon Redshift v2 and Amazon S3 v2 connections
If you use an Amazon Redshift v2 or an Amazon S3 v2 connection, enter a valid path for the first connection.
leftConnAdvancedParams
No
If you use an Amazon Redshift v2, Amazon S3 v2, or flat file connection, specify the advanced parameters to be used for the first connection.
Amazon Redshift v2
For Amazon Redshift v2 connections, specify the S3 bucket name in the advanced parameters. For example, enter:
{ "S3BucketName": "<value>" }
Amazon S3 v2
For Amazon S3 v2 connections, Data Validation supports the Parquet file and flat file formats.
For Parquet file format, specify the following advanced parameters:
{"formatId":"Parquet","@type":"dataFormat","dataFormatAttributes": {}}
For flat file format, specify the following default advanced parameters:
{ "formatId": "Flat", "@type":"dataFormat", "dataFormatAttributes": { "escapeChar": "<value>", "delimiter": "<value>", "headerLineNumber": <value>, "qualifier": "<value>", "firstDataRow": <value>} }
For example:
{ "formatId": "Flat", "@type": "dataFormat", "dataFormatAttributes": { "escapeChar": "", "delimiter": ",", "headerLineNumber": 1, "qualifier": "\"", "firstDataRow": 2} }
Flat file
For flat file connections, specify the advanced parameters to determine how the flat file must be parsed. For example, enter:
{ "srcFFAttrs" :{ "delimiter" : "<value>" , "textQualifier" : "<value>" , "escapeChar" : "<value>" , "headerLineNo" : <value> , "firstDataRow" : <value> }}
rightConnPath
Required for Amazon Redshift v2 and Amazon S3 v2 connections
If you use an Amazon Redshift v2 or an Amazon S3 v2 connection, enter a valid path for the second connection.
rightConnAdvancedParams
No
If you use an Amazon Redshift v2, Amazon S3 v2, or flat file connection, specify the advanced parameters to be used for the second connection.
Amazon Redshift v2
For Amazon Redshift v2 connections, specify the S3 bucket name in the advanced parameters. For example, enter:
{ "S3BucketName": "<value>" }
Amazon S3 v2
For Amazon S3 v2 connections, Data Validation supports the Parquet file and flat file formats.
For Parquet file format, specify the following advanced parameters:
{ "formatId":"Parquet", "@type":"dataFormat", "dataFormatAttributes": {} }
For flat file format, specify the following default advanced parameters:
{ "formatId": "Flat", "@type":"dataFormat", "dataFormatAttributes": { "escapeChar": "<value>", "delimiter": "<value>", "headerLineNumber": <value>, "qualifier": "<value>", "firstDataRow": <value>} }
For example:
{ "formatId": "Flat", "@type": "dataFormat", "dataFormatAttributes": { "escapeChar": "", "delimiter": ",", "headerLineNumber": 1, "qualifier": "\"", "firstDataRow": 2} }
Flat file
For flat file connections, specify the advanced parameters to determine how the flat file must be parsed. For example, enter:
{ "srcFFAttrs" :{ "delimiter" : "<value>" , "textQualifier" : "<value>" , "escapeChar" : "<value>" , "headerLineNo" : <value> , "firstDataRow" : <value> }}
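The Amazon S3 v2 flat-file parameters for leftConnAdvancedParams and rightConnAdvancedParams can be assembled with a small helper. This is an illustrative sketch; the defaults simply mirror the example values shown above.

```python
def s3_flat_format(delimiter: str = ",", qualifier: str = '"',
                   escape_char: str = "", header_line: int = 1,
                   first_data_row: int = 2) -> dict:
    """Build the advanced-parameters value for an Amazon S3 v2 flat-file
    source. Defaults mirror the example in this reference."""
    return {
        "formatId": "Flat",
        "@type": "dataFormat",
        "dataFormatAttributes": {
            "escapeChar": escape_char,
            "delimiter": delimiter,
            "headerLineNumber": header_line,
            "qualifier": qualifier,
            "firstDataRow": first_data_row,
        },
    }

params = s3_flat_format(delimiter="|")  # pipe-delimited file, for example
```

Passing the resulting dict as leftConnAdvancedParams or rightConnAdvancedParams serializes to the JSON shape shown above.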
leftConnWhereClause
No
WHERE clause to be used for the first connection.
If you use a WHERE clause, Data Validation selects those rows for sampling that meet the condition specified in the WHERE clause.
rightConnWhereClause
No
WHERE clause to be used for the second connection.
If you use a WHERE clause, Data Validation selects those rows for sampling that meet the condition specified in the WHERE clause.
verboseMode
No
Defines the logging level.
Enter true to use verbose mode. The log includes messages of all logging levels. This option is useful for debugging.
Enter false to use standard mode. The log includes only error messages.
keepDIAssets
No
Defines whether you want to save the Data Integration mappings and tasks that Data Validation creates when it runs the test case.
Enter true to save the Data Integration assets after the test case run.
Enter false if you do not want to save the Data Integration assets after the test case run.
ignoreCase
No
Defines whether you want to ignore casing differences in the data.
Enter true to ignore casing differences in the data.
Enter false to consider casing differences in the data as a mismatch.
trimString
No
Defines whether you want to trim leading and trailing white spaces in string values.
Enter true to trim leading and trailing white spaces in string values.
Enter false to retain leading and trailing white spaces in string values.
Default is false.
badRecordLimit
No
Defines the maximum number of unmatched, extra, and missing records to show in the detailed test results. Enter the value as 100, 500, or 1000.
Default is 100.
colMappings
Yes
Defines the name, type, precision, and scale for each column mapping. Use the following syntax to map columns in the first connection and second connection:
"colMappings": [ { "aggrFunc": "<function1,function2,functionN>", "leftColName": "<value>", "leftColType": "<value>", "leftColPrecision": <value>, "leftColScale": <value>, "rightColName": "<value>", "rightColType": "<value>", "rightColPrecision": <value>, "rightColScale": <value> }, { "leftColName": "<value>", "leftColType": "<value>", "leftColPrecision": <value>, "leftColScale": <value>, "rightColName": "<value>", "rightColType": "<value>", "rightColPrecision": <value>, "rightColScale": <value>, "aggrFunc": "MIN" } ]
Use the aggrFunc field to define the aggregation functions that Data Validation must use to compare the tables. Separate multiple aggregation functions with a comma. The aggregation function names must be in uppercase.
An aggregation function test retrieves summarized information about the data contained in the data sources. Use aggregation to verify whether all records were moved or to identify incorrect logic in WHERE clauses.
Based on the data type of the columns, you can use one or more of the following aggregation functions:
  • COUNT. Counts the number of rows that contain non-null values for a string or numeric column.
  • COUNT_ROWS. Counts the number of rows for a string or numeric column. Includes rows that contain nulls.
  • MIN. Calculates the minimum value for a numeric column.
  • MAX. Calculates the maximum value for a numeric column.
  • AVG. Calculates the average value of a numeric column.
  • SUM. Calculates the total value of a numeric column.
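An illustrative helper for building one colMappings entry, with a guard for the aggregation-function rules described above (uppercase names, comma-separated). The function name and parameters are this sketch's own, not part of the API:

```python
# Allowed aggregation functions per this reference; names must be uppercase.
AGG_FUNCS = {"COUNT", "COUNT_ROWS", "MIN", "MAX", "AVG", "SUM"}

def col_mapping(left: str, right: str, col_type: str,
                precision: int, scale: int = 0, aggr: tuple = ()) -> dict:
    """Build one colMappings entry; aggr is a sequence like ("MIN", "MAX")."""
    for func in aggr:
        if func not in AGG_FUNCS:
            raise ValueError(f"unsupported aggregation function: {func}")
    return {
        "aggrFunc": ",".join(aggr),
        "leftColName": left, "leftColType": col_type,
        "leftColPrecision": precision, "leftColScale": scale,
        "rightColName": right, "rightColType": col_type,
        "rightColPrecision": precision, "rightColScale": scale,
    }

# Column names mirror the SALARY mapping in this page's POST request sample.
mapping = col_mapping("SALARY", "SALARY1", "decimal", 9, 1, ("MIN", "MAX"))
```

The guard rejects lowercase names such as "min", which the API would not accept according to the uppercase rule above.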
sampling
Yes
Configure data sampling so that Data Validation compares a sample of the source data with a sample of the target data. The less data that Data Validation samples, the faster the test runs.
Use the following syntax to specify the sampling type and value:
"sampling": { "type": "<value>", "val": <value> }
Use one of the following options:
  • Enter the type as percentage to sample a percentage of the rows. The test report shows records that are missing or unmatched in the Connection 2 table, but doesn't show extra records if any exist. Enter the val as 1, 5, 10, 25, 50, or 75. For example, enter:
    "sampling": { "type": "percentage", "val": 25 }
  • Enter the type as first to sample a defined number of rows at the beginning of the table. Enter the val as 100, 1000, or 10000. For example, enter:
    "sampling": { "type": "first", "val": 100 }
  • Enter the type as last to sample a defined number of rows at the end of the table. Enter the val as 100, 1000, or 10000. For example, enter:
    "sampling": { "type": "last", "val": 1000 }
  • Enter the type as where_clause to sample the rows that meet the condition specified in the WHERE clause. If you use the where_clause option, you must specify a valid WHERE clause in at least one of the following attributes:
    • leftConnWhereClause
    • rightConnWhereClause
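The allowed type and val combinations above can be captured in a small validation helper. This sketch simply mirrors the values listed in this section; it also assumes the where_clause type takes no val, per the option description above:

```python
# Allowed val settings per sampling type, as listed in this reference.
SAMPLING_VALUES = {
    "percentage": {1, 5, 10, 25, 50, 75},
    "first": {100, 1000, 10000},
    "last": {100, 1000, 10000},
}

def sampling(stype: str, val=None) -> dict:
    """Build the sampling object, rejecting values the API does not list."""
    if stype == "where_clause":
        # Assumed to take no val; the condition lives in the WHERE-clause fields.
        return {"type": "where_clause"}
    if val not in SAMPLING_VALUES.get(stype, set()):
        raise ValueError(f"invalid sampling: type={stype!r}, val={val!r}")
    return {"type": stype, "val": val}
```

For example, `sampling("percentage", 25)` yields the percentage example shown above, while an unlisted value such as `sampling("first", 50)` is rejected before the request is sent.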
POST request sample
The following snippet shows a POST request sample to create a test case:
{ "testCaseName": "PCToCDITestCase", "frsDocLocation": "{\"type\":\"Project\",\"id\":\"436TU7N2RsicOmYQGVpWAl\",\"path\":\"SG\"}", "description": "", "leftConnId": "014N8B0B000000000AGX", "leftConnType": "Oracle", "leftTableName": "EMPS_SNOW", "leftTablePk": ["EMPID"], "rightConnId": "014N8B0B000000000AGX", "rightConnType": "Oracle", "leftConnPath": "", "rightConnPath": "", "leftConnAdvancedParams": {}, "rightConnAdvancedParams": {}, "rightTableName": "EMPS_UPPERLOWERCASEFIELD", "rightTablePk": ["emp id"], "runtimeEnvId": "014N8B25000000000006", "stagingConnectionId": "014N8B0B0000000003HI", "colMappings": [{ "aggrFunc": "", "leftColName": "NICKNAME", "leftColType": "string", "leftColPrecision": 100, "leftColScale": 0, "rightColName": "NICKNAME", "rightColType": "string", "rightColPrecision": 100, "rightColScale": 0 }, { "aggrFunc": "", "leftColName": "CITY", "leftColType": "string", "leftColPrecision": 50, "leftColScale": 0, "rightColName": "CITY", "rightColType": "string", "rightColPrecision": 50, "rightColScale": 0 }, { "aggrFunc": "", "leftColName": "GENDER", "leftColType": "string", "leftColPrecision": 20, "leftColScale": 0, "rightColName": "GENDER", "rightColType": "string", "rightColPrecision": 20, "rightColScale": 0 }, { "aggrFunc": "", "leftColName": "EMPID", "leftColType": "decimal", "leftColPrecision": 38, "leftColScale": 0, "rightColName": "emp id", "rightColType": "decimal", "rightColPrecision": 38, "rightColScale": 0 }, { "aggrFunc": "", "leftColName": "NAME", "leftColType": "string", "leftColPrecision": 200, "leftColScale": 0, "rightColName": "Full Name", "rightColType": "string", "rightColPrecision": 200, "rightColScale": 0 }, { "aggrFunc": "", "leftColName": "SALARY", "leftColType": "decimal", "leftColPrecision": 9, "leftColScale": 1, "rightColName": "SALARY1", "rightColType": "decimal", "rightColPrecision": 9, "rightColScale": 1 }, { "aggrFunc": "", "leftColName": "ORACLESTATUS", "leftColType": "string", "leftColPrecision": 70, "leftColScale": 0, 
"rightColName": "oracle status", "rightColType": "string", "rightColPrecision": 70, "rightColScale": 0 }, { "aggrFunc": "", "leftColName": "ORACLECOMMENTS", "leftColType": "string", "leftColPrecision": 80, "leftColScale": 0, "rightColName": "Oracle Comments", "rightColType": "string", "rightColPrecision": 80, "rightColScale": 0 } ], "verboseMode": false, "keepDIAssets": false, "ignoreCase": false, "trimString": false, "leftConnWhereClause": "", "rightConnWhereClause": "", "badRecordLimit": 100, "successEmails": "", "errorEmails": "", "emailNotificationStrategy": "NONE" }
POST response
If the test case was created successfully, the POST request returns a 200 Successful operation response and the following response fields:
Field
Type
Description
testCaseId
String
ID of the test case.
frsDocId
String
ID of the test case in the Informatica repository.
testCaseName
String
Name of the test case.
If the test case creation failed, the POST request returns a 400 Bad request response or a 500 Internal Server Error response. It also returns the following response fields:
Field
Type
Description
code
String
Code of the error message.
message
String
Error message.
debugMessage
String
Message that can be used for debugging the issue.
details
String
Detailed information about the error message.
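A minimal sketch of handling the two response shapes described above. The field names come from this reference; the error code and message in the usage example are hypothetical:

```python
import json

def parse_create_response(status: int, body: str) -> str:
    """Return the new testCaseId on a 200 response; raise with the error
    fields (code, message) on a 400 or 500 response."""
    data = json.loads(body)
    if status == 200:
        return data["testCaseId"]
    # 400 and 500 responses carry code, message, debugMessage, and details.
    raise RuntimeError(f"{data.get('code')}: {data.get('message')}")
```

For example, a 200 body yields the test case ID for use in later calls, while an error body surfaces the code and message fields.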
