Metadata Manager 10.1
When you upgrade the domain from Informatica 9.0.1 to 10.1 and the domain configuration repository uses an Oracle, Sybase ASE, or Microsoft SQL Server database, the upgrade fails with an error. The upgrade error might vary depending on the database.
The upgrade fails with the following error for an Oracle database:
The upgrade fails with the following error for a Microsoft SQL Server database:
Attempts to fetch data from an SQL Data Service fail when the physical data source contains Unicode characters.
When the domain creates system services before it assigns them to a node, you cannot use the Administrator tool to edit the service properties, and functionality of the system services is limited.
The infacmd cms purge command might delete an active reference table if you run the command after you run the infacmd rtm deployImport command more than once.
When a connection name contains the characters "ID," a mapping fails to compile because the connection is not found.
When a web service request times out waiting for a web service or when you stop the web service from the Monitoring tool, the Data Integration Service shuts down.
A workflow fails to run if it contains a gateway that connects directly to another gateway with a conditional sequence flow that you configure as the default sequence flow.
When mapping logic contains the IN() function and the mapping logic is pushed down to the source, an outer join returns incorrect results.
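The failure mode above is the classic outer-join pushdown pitfall: a filter on the outer-joined table that is pushed into the WHERE clause of the generated SQL silently converts the outer join into an inner join. The following sketch (illustrative only, not Informatica code; it uses an in-memory SQLite database with made-up tables) shows how the placement of an IN() filter changes a LEFT JOIN result:

```python
# Illustrative sketch, not Informatica code: an IN() filter on the
# right-hand table of a LEFT JOIN behaves differently in the join
# condition versus the WHERE clause of pushed-down SQL.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE cust (id INTEGER, name TEXT);
    CREATE TABLE ord  (cust_id INTEGER, status TEXT);
    INSERT INTO cust VALUES (1, 'a'), (2, 'b');
    INSERT INTO ord  VALUES (1, 'OPEN');
""")

# Filter in the join condition: customer 2 is kept, with NULLs.
in_on = con.execute("""
    SELECT c.id, o.status FROM cust c
    LEFT JOIN ord o ON o.cust_id = c.id AND o.status IN ('OPEN', 'HELD')
""").fetchall()

# Same filter pushed into WHERE: NULL fails the IN() test, so
# customer 2 is dropped and the outer join degrades to an inner join.
in_where = con.execute("""
    SELECT c.id, o.status FROM cust c
    LEFT JOIN ord o ON o.cust_id = c.id
    WHERE o.status IN ('OPEN', 'HELD')
""").fetchall()

print(sorted(in_on, key=lambda r: r[0]))  # [(1, 'OPEN'), (2, None)]
print(in_where)                           # [(1, 'OPEN')]
```

The two result sets differ only in the unmatched outer row, which is exactly the kind of discrepancy that appears when pushdown optimization relocates the filter.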
A Java-level deadlock occurs when the alert service, the user management service, and the domain configuration service wait on each other. For example, a deadlock occurs in the following situation: the alert service checks for user permissions while it processes an email alert for a node inactive event. At the same time, a user logs in to the Administrator tool and the inactive node registers with the domain.
The Model repository does not respond when web services are invoked for the first time.
A security vulnerability enables any user to browse to any known file in the Model repository.
When the master node in a grid fails over to a non-master node, the Content Management Service on the non-master node cannot find the reference data audit tables.
You cannot associate users with a group if the group has more members than the MaxValRange limit set for the LDAP server.
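For context on the limit above: Active Directory truncates large multivalued attributes such as group membership at the MaxValRange policy and returns them under a ranged attribute name such as member;range=0-1499, expecting the client to request successive ranges until the server returns an upper bound of "*". A minimal sketch of that client-side convention (the helper name is hypothetical, not an Informatica API):

```python
# Hypothetical sketch of LDAP ranged-attribute retrieval, the mechanism
# behind the MaxValRange limit. Not Informatica code.
import re

RANGE_RE = re.compile(r"^(?P<attr>[^;]+);range=(?P<low>\d+)-(?P<high>\d+|\*)$")

def next_range_attr(returned_name):
    """Given a ranged attribute name returned by the server, build the
    attribute name to request next, or return None when retrieval is
    complete (the server returned the final range, marked with "*")."""
    m = RANGE_RE.match(returned_name)
    if m is None or m.group("high") == "*":
        return None  # not a ranged attribute, or the last page
    return "%s;range=%d-*" % (m.group("attr"), int(m.group("high")) + 1)

print(next_range_attr("member;range=0-1499"))  # member;range=1500-*
print(next_range_attr("member;range=1500-*"))  # None
print(next_range_attr("member"))               # None
```

A client that does not loop on these ranges sees only the first page of members, which is consistent with the symptom described in the issue.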
When you upgrade from 9.6.1 to 10.0, the Model Repository Service fails with the following error:
The Repository Service operation failed
If a Mapping task fails in a workflow and runs to completion when the workflow recovers, the workflow graph does not reflect the completed status of the task.
The Model repository requires minimum values for the maximum heap size setting. Set the maximum heap size to the recommended value of 1 GB and the MaxPermSize value to 512 MB. Lower settings generate an error.
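As a sketch, JVM options consistent with these minimums would look like the following. The flag names are standard Java VM options rather than values taken from this note, and -XX:MaxPermSize applies only to Java 7 and earlier VMs:

```
-Xmx1024m -XX:MaxPermSize=512m
```

These options are typically set in the service's JVM command line options in the Administrator tool; consult the product documentation for the exact property location in your version.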
When you select a parent group in the Create Group wizard, the new group appears in the Native folder but is not nested under the parent group.
If you update the connection details for the exception record database, you must recycle the Data Integration Service before you run a workflow that writes data to the database.
When you remove a node with the compute role from a Data Integration Service grid or disable the compute role on a node in the grid, the node is no longer listed in the Compute view for the service. However, the service retains the previously configured compute values for the node. If you add the node back to the grid or enable the compute role again, the node is configured with the previous values instead of the default values.
When you run the infacmd sch updateSchedule command, the schedule end date changes to No End Date. The end date changes regardless of whether you specify a value for the -ed option.
When you schedule a job to run every 23 hours, the Scheduler Service might run the job at the wrong time.
When you define a schedule with a repeat count value, the schedule properties do not display the value that you specify.
You can configure a schedule that specifies a start date that is in the past.
Workflow recovery fails when the workflow database is a Microsoft SQL Server database that uses a non-default schema.
If the Data Integration Service stops unexpectedly while a Mapping task runs on a grid, the Monitoring tool indicates that the mapping is aborted and that the Mapping task is running.
The Monitoring tool indicates that a Mapping task in a workflow is running when the task failed because the Data Integration Service execution instance was unavailable.
In a mapping with a flat file data source that includes a column with the double data type, the Data Integration Service erroneously reads data that should be rejected because it contains non-numeric characters. For example, when a row contains a value such as 12345678901234567890123456ab, the Data Integration Service fails to reject the row. Instead, it reads the numeric characters and ignores the non-numeric characters.
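The behavior described resembles a lenient, strtod-style parse that consumes the longest numeric prefix and silently discards trailing garbage, where strict field validation should reject the whole row. A hypothetical sketch of the two parsing policies (not Informatica code; the function names are illustrative):

```python
# Illustrative sketch: lenient prefix parsing versus strict field
# validation for a double column. Not Informatica code.

def lenient_parse(field):
    """Consume the longest numeric prefix and ignore trailing
    characters -- the defective behavior described in the issue."""
    for end in range(len(field), 0, -1):
        try:
            return float(field[:end])
        except ValueError:
            continue
    return None

def strict_parse(field):
    """Accept the field only if every character participates in the
    number; otherwise reject the row."""
    try:
        return float(field)
    except ValueError:
        return None  # row rejected

bad = "12345678901234567890123456ab"
print(lenient_parse(bad))  # value read from the numeric prefix
print(strict_parse(bad))   # None -- row correctly rejected
```

With strict parsing, the trailing "ab" makes the conversion fail and the row lands in the reject file instead of being loaded with a truncated value.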
A Consolidation transformation writes different values to the IsSurvivor port in the native environment and in the Hadoop environment.
You can run infacmd wfs recoverWorkflow to restart a workflow that is not enabled for recovery.
If a Command task in a workflow enters a canceled state and runs to completion when the workflow recovers, the Monitoring tool does not display the completed task.
Export of a project with multiple objects from the Model repository fails.
The workflow log does not provide the following information:
You cannot delete the backup file that the Model repository backup process produces.
When a compile error occurs in the Java transformation and the client operating system default code page is not ASCII, the error message is unreadable because of a code page mismatch between the Java compiler and the Developer tool.
If you update the password for the workflow database or for a database that stores exception record data, you do not need to recycle the Data Integration Service.
When you create a non-master Content Management Service on a grid node, you do not need to create a Data Integration Service on the node.