In the [DESTINATION] section of the configuration file, specify parameters for the target database. The output control files and the data load script use these parameters to load data to the target database.
The following parameters are supported:
db_type
The target database type. Options are:
amazon_redshift. Use this option for Amazon Redshift targets.
cloudera. Use this option for Cloudera targets.
flat. Use this option for flat file targets.
greenplum3. Use this option for Greenplum targets.
hive. Use this option for Hive targets.
hortonworks. Use this option for Hortonworks targets.
mssql. Use this option for Microsoft SQL Server targets.
netezza. Use this option for Netezza targets.
oracle. Use this option for Oracle targets.
teradata. Use this option for Teradata targets.
vertica. Use this option for Vertica targets.
Default value: oracle
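For example, a [DESTINATION] section for a Microsoft SQL Server target might begin with the following entry. The parameter=value syntax and the value shown are illustrative only:
[DESTINATION]
db_type=mssql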
dba_password
A valid password for the user that is specified by the dba_userid parameter. If you specify a forward slash (/) in the dba_userid parameter, Fast Clone ignores the password and uses operating system authentication.
By default, the Fast Clone Console sets the encrypt_database_passwords parameter to true and writes an encrypted password. However, if the encrypt_database_passwords parameter is set to false, specify a clear text password.
dba_userid
The name of a user who has permissions to connect to the target database and load the source data. If you specify a forward slash (/) in this parameter, Fast Clone ignores the password and uses operating system authentication.
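For example, the following illustrative entries specify a hypothetical target login, with the password shown in clear text as if the encrypt_database_passwords parameter were set to false:
dba_userid=clone_admin
dba_password=clone_pwd
To use operating system authentication instead, specify a forward slash in the dba_userid parameter:
dba_userid=/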
greenplum_loader_home
For Greenplum targets, the path to the Greenplum loader root directory. If you load data to the target database from data files or pipes, Fast Clone uses this path to locate the gpload utility and to generate valid load scripts.
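For example, the following illustrative entry points to a hypothetical Greenplum loader installation:
greenplum_loader_home=/opt/greenplum-loader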
host
The host name or IP address of the system where the target database runs.
instance
The target instance name or database name for the tables to be loaded.
Default value: the value that is specified by the instance parameter in the [SOURCE] section of the cloning configuration file.
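For example, the following illustrative entries identify a hypothetical target system and database:
host=target_host.example.com
instance=sales_db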
mssql_batchsize
The number of rows in a batch that is passed to the Microsoft SQL Server osql utility for loading data to the target. The osql utility copies each batch to the target server as one transaction.
Fast Clone uses this value as the BATCHSIZE option value in the BULK INSERT statement for the osql utility.
Range of values: 1 through 1000000
Default value: 10000
mssql_maxerrors
The maximum number of syntax errors that can appear in the output data files before the Microsoft SQL Server osql utility cancels the BULK INSERT operation. The osql utility ignores all of the rows that it cannot parse and counts an error for each row.
Fast Clone uses this value as the value of the MAXERRORS option in the BULK INSERT statement for the osql utility.
Default value: 100000
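For example, the following illustrative entries set the osql bulk load to commit batches of 50000 rows and to cancel the load after 1000 syntax errors. The values are examples only:
mssql_batchsize=50000
mssql_maxerrors=1000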
owner
The target schema name or schema owner for the tables to be loaded.
port
The port number that Fast Clone uses to connect to the target database.
Default value: none. The default value that the Fast Clone Console specifies depends on the target type.
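For example, the following illustrative entries specify the target schema and the standard listener port for a hypothetical Oracle target:
owner=SALES
port=1521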
protocol
For Oracle targets, the protocol that Fast Clone uses to connect to the target database. Valid values are:
TCP. Use the TCP/IP protocol.
TCPS. Use the TCP/IP protocol with the Secure Sockets Layer (SSL).
Default value: TCP
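For example, the following illustrative entry configures an SSL connection to an Oracle target:
protocol=TCPS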
s3_bucket
The name of the Amazon S3 bucket that stores the temporary files with the data to be loaded to the Amazon Redshift target.
s3_file_size
The maximum size, in kilobytes, of the temporary files that Fast Clone creates in the Amazon S3 bucket before loading the source data to Amazon Redshift.
Default value: 5120 KB
s3_key_id
The AWS access key ID that Fast Clone uses to access the Amazon S3 account resources, including the bucket where Fast Clone creates temporary files.
s3_path
The name of the directory in the Amazon S3 bucket where Fast Clone creates the temporary files that store source data.
s3_secret_key
The secret access key for the access key ID that is specified in the s3_key_id parameter. The access key ID must have the authority to access the Amazon S3 account resources.
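For example, the following illustrative entries configure Amazon S3 staging for a Redshift target. The bucket name, directory, and credentials are placeholders:
s3_bucket=my-staging-bucket
s3_path=fastclone/tmp
s3_key_id=AKIAEXAMPLEKEYID
s3_secret_key=exampleSecretAccessKey
s3_file_size=10240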
stream_maxerrors
For Greenplum, Netezza, Teradata, and Vertica targets, the maximum number of errors that can occur when streaming data to the target database with DataStreamer. If the number of errors exceeds this maximum, DataStreamer ends with an error.
For Netezza targets, review the .nzbad file to which DataStreamer writes the data rows that caused an error. DataStreamer does not report any errors in the output log if the number of rows that caused an error is less than the value of this parameter.
Range of values: 1 through 2000000000
Default value: 100000
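For example, the following illustrative entry stops DataStreamer after 500 errors instead of the default 100000:
stream_maxerrors=500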
textfile_charset_collation
The name of a Microsoft SQL Server collation that is used to store character and Unicode data in the output data files. Fast Clone uses this value to correctly specify a column collation in the format file that maps the fields of the data file to the table columns. Fast Clone uses the specified collation for all of the table columns.
Fast Clone specifies this format file with the FORMATFILE option in the BULK INSERT statement for the osql utility.
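For example, the following illustrative entry uses a common Microsoft SQL Server collation name for all of the output data file columns:
textfile_charset_collation=SQL_Latin1_General_CP1_CI_AS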