The Fast Clone Console is the graphical user interface from which you configure and manage data-cloning jobs. From the Fast Clone Console, you can generate a cloning configuration file and run data-cloning jobs on a local or remote system. The cloning configuration file name is unload.ini by default. The Fast Clone Console runs on Linux, UNIX, and Windows. You can run it on the Oracle source system or on a standalone system. To start the Fast Clone Console, run gui.cmd on Windows or gui.sh on Linux or UNIX.
Fast Clone Executable
The Fast Clone executable, FastReader.exe on Windows or FastReader on Linux or UNIX, runs data-cloning jobs. You can start the Fast Clone executable from the command line or from the Fast Clone Console. If you run the Fast Clone executable from the command line, you can manually enter parameters to control unload processing.
Fast Clone Server
The Fast Clone Server is an optional add-on component that you can purchase to enable network communication across systems in a distributed Fast Clone topology. These systems can include the Fast Clone Console and Fast Clone instances on source and target database systems.
The Fast Clone Server runs as a Windows service or as a daemon on Linux or UNIX.
Use the Fast Clone Server to initiate unloads of Oracle data and metadata on a remote system or to transmit the output files to the target system. For example, use the Fast Clone Server to retrieve Oracle data that was unloaded by scheduled Fast Clone unload operations on separate Oracle systems and then make that data available to another Fast Clone instance for loading to the target database.
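The kind of handoff that the Fast Clone Server automates can be pictured with a small script that watches a directory of unload output files and stages them for a loader on another system. This is only a conceptual sketch with hypothetical directory names and file pattern; it does not show the Fast Clone Server's actual protocol, configuration, or transfer mechanism.

    import shutil
    import time
    from pathlib import Path

    # Hypothetical locations; the Fast Clone Server's real transfer mechanism
    # is internal to the product and is not shown here.
    UNLOAD_DIR = Path("/data/fastclone/unload")    # where unload jobs write output files
    STAGING_DIR = Path("/data/fastclone/staging")  # directory a target-side loader reads

    def hand_off_new_files(seen):
        """Copy newly produced unload files to the staging area for loading."""
        for path in UNLOAD_DIR.glob("*.dat"):
            if path.name not in seen:
                shutil.copy2(path, STAGING_DIR / path.name)
                seen.add(path.name)

    if __name__ == "__main__":
        STAGING_DIR.mkdir(parents=True, exist_ok=True)
        already_handed_off = set()
        while True:                      # a sketch, not a service; stop with Ctrl+C
            hand_off_new_files(already_handed_off)
            time.sleep(30)               # poll periodically for new unload output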
DataStreamer
DataStreamer is an add-on component that you can purchase for Amazon Redshift, Greenplum, Netezza, Teradata, and Vertica targets. For Greenplum, Netezza, and Teradata targets, DataStreamer is an optional component that loads data faster. For Amazon Redshift targets, DataStreamer is a required component that is always enabled and used transparently.
With DataStreamer, you must use the direct path unload method. Depending on the target type, DataStreamer streams the unloaded Oracle data to the target in one of the following ways:
For Amazon Redshift targets, DataStreamer sends the unloaded data to the Amazon Simple Storage Service (Amazon S3). After the source data is in Amazon S3 storage, Fast Clone issues a COPY command that copies the data to the Amazon Redshift target tables. A sketch of this stage-and-copy pattern follows the target descriptions below.
If you plan to run unload jobs on a Windows system, you must install the PostgreSQL ODBC driver on that system. If you plan to run unload jobs on a Linux or UNIX system, use the DataDirect ODBC driver for PostgreSQL that Fast Clone provides.
For Greenplum targets, DataStreamer sends the unloaded data directly to the Greenplum parallel file distribution server (gpfdist) for loading to the target.
For Netezza targets, DataStreamer writes the unloaded data to the named pipes that represent the Netezza external tables. The Netezza ODBC driver reads the data from these pipes and loads it to the Netezza target tables. A sketch of this named-pipe handoff also follows the target descriptions below.
To use the Netezza DataStreamer, you must install the Netezza ODBC driver on the system where you plan to run unload jobs.
For Teradata targets, DataStreamer sends the unloaded data directly to the Teradata Parallel Data Pump, FastLoad, or MultiLoad utility for loading to the target.
To use the Teradata DataStreamer, you must install the Teradata Parallel Transporter (TPT) libraries on the system where you plan to run unload jobs.
For Vertica targets, DataStreamer uses the COPY command on the server side or the LCOPY command on the client side to send the unloaded data directly to Vertica targets.
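The Amazon Redshift description above is a two-step stage-and-copy pattern: land the unloaded file in Amazon S3, then issue a COPY command so Amazon Redshift pulls it into the target table. The following Python sketch illustrates that general pattern with boto3 and psycopg2; the bucket, file, cluster, table, and credential names are hypothetical, and Fast Clone performs the equivalent steps internally, so this is an illustration of the mechanism rather than the product's implementation.

    import boto3
    import psycopg2

    # Hypothetical names; Fast Clone carries out the equivalent steps for you.
    BUCKET = "example-unload-bucket"
    KEY = "unload/orders.csv"
    LOCAL_FILE = "/data/fastclone/unload/orders.csv"
    IAM_ROLE = "arn:aws:iam::123456789012:role/RedshiftCopyRole"

    # Step 1: stage the unloaded data file in Amazon S3.
    s3 = boto3.client("s3")
    s3.upload_file(LOCAL_FILE, BUCKET, KEY)

    # Step 2: issue a COPY command so Redshift loads the staged data into the target table.
    conn = psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439, dbname="analytics", user="loader", password="secret",
    )
    with conn, conn.cursor() as cur:
        cur.execute(
            f"COPY public.orders FROM 's3://{BUCKET}/{KEY}' "
            f"IAM_ROLE '{IAM_ROLE}' FORMAT AS CSV"
        )
    conn.close()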
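Similarly, the Netezza description streams data through named pipes. The short, POSIX-only Python sketch below shows a producer writing rows into a FIFO while a reader consumes them concurrently; the pipe path and rows are hypothetical, and in the actual flow the Netezza ODBC driver, not a reader thread, consumes the pipe on behalf of the external table.

    import os
    import threading

    # Hypothetical pipe path; requires a POSIX system (os.mkfifo is not available on Windows).
    PIPE_PATH = "/tmp/example_external_table_pipe"

    def producer():
        # DataStreamer's role in the analogy: write unloaded rows into the pipe as they arrive.
        with open(PIPE_PATH, "w") as pipe:
            for i in range(5):
                pipe.write(f"{i}|sample row {i}\n")

    def consumer():
        # The ODBC driver's role in the analogy: read rows from the pipe and load them.
        with open(PIPE_PATH) as pipe:
            for line in pipe:
                print("loaded:", line.strip())

    if __name__ == "__main__":
        if not os.path.exists(PIPE_PATH):
            os.mkfifo(PIPE_PATH)
        reader = threading.Thread(target=consumer)
        reader.start()      # the reader thread blocks in open() until a writer connects
        producer()
        reader.join()
        os.remove(PIPE_PATH)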