Table of Contents

  1. Preface
  2. Introduction to Informatica Edge Data Streaming
  3. Licenses
  4. Using Informatica Administrator
  5. Creating and Managing the Edge Data Streaming Service
  6. Edge Data Streaming Entity Types
  7. Edge Data Streaming Nodes
  8. Data Connections
  9. Working With Data Flows
  10. Managing the Edge Data Streaming Components
  11. Security
  12. High Availability
  13. Disaster Recovery
  14. Monitoring Edge Data Streaming Entities
  15. Troubleshooting
  16. Frequently Asked Questions
  17. Regular Expressions
  18. Command Line Program
  19. Configuring Edge Data Streaming to Work With a ZooKeeper Observer
  20. Glossary

User Guide

Troubleshooting Entities

I get the following error when I use a flat file source service to read from log files:
The process cannot access the file because it is being used by
another process.
This error might occur on Windows. The Windows file system locks files, allowing only one user or process to access a file at a time.
To resolve this issue, perform the following steps:
  1. Configure the application to create a new file each time it creates or rotates the log file.
  2. Name the log file as follows:
    <logfilename>.timestamp
  3. Configure EDS to accept the following pattern:
    <logfilename>.*
  4. Schedule a task in Windows to delete old log files.
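The naming scheme in the steps above can be sketched as follows. This is an illustrative Java sketch, not EDS configuration syntax: the base name `app.log` is an assumption, and the EDS file pattern `<logfilename>.*` is shown as its equivalent Java regular expression.

```java
import java.util.regex.Pattern;

// Each rotation produces <logfilename>.<timestamp>, which EDS then
// matches with the pattern <logfilename>.* (equivalent regex below).
public class LogRotationNaming {

    // Build a rotated file name such as app.log.1552995300000
    static String rotatedName(String baseName, long timestampMillis) {
        return baseName + "." + timestampMillis;
    }

    // Java-regex equivalent of the EDS file pattern <logfilename>.*
    static boolean matchesEdsPattern(String baseName, String fileName) {
        return Pattern.matches(Pattern.quote(baseName) + "\\..*", fileName);
    }

    public static void main(String[] args) {
        String rotated = rotatedName("app.log", System.currentTimeMillis());
        System.out.println(rotated + " matches: "
                + matchesEdsPattern("app.log", rotated));
    }
}
```

Because each rotated file has a unique name, the application never reopens a file that EDS is still reading, which avoids the Windows file lock conflict.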
I use an MQTT source service and I get the following exception in the EDS Node log file:
Exception in thread "main" Persistence already in use (32200)
at org.eclipse.paho.client.mqttv3.persist.MqttDefaultFilePersistence.open(MqttDefaultFilePersistence.java:112)
at org.eclipse.paho.client.mqttv3.MqttAsyncClient.<init>(MqttAsyncClient.java:286)
at org.eclipse.paho.client.mqttv3.MqttAsyncClient.<init>(MqttAsyncClient.java:167)
at org.eclipse.paho.client.mqttv3.MqttClient.<init>(MqttClient.java:224)
at org.eclipse.paho.client.mqttv3.MqttClient.<init>(MqttClient.java:136)
This exception might occur when you run two MQTT sources on the same machine with the same client ID.
The MQTT source uses a persistence mechanism to store messages when they are being processed, and the client ID is used as the identifier for the persistence store. If two MQTT sources run on the same machine with the same client ID, this exception occurs when both the sources try to access the same persistence store. Configure a unique client ID for each connection.
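One way to guarantee uniqueness is to derive each client ID from a random UUID, so two MQTT sources started on the same machine can never share a persistence store. This is a minimal sketch; the `eds-mqtt` prefix and the ID layout are illustrative assumptions, not EDS configuration syntax.

```java
import java.util.UUID;

// Sketch: generate a unique MQTT client ID per source instance so that
// no two sources on the same machine open the same persistence store.
public class UniqueClientId {

    static String newClientId(String prefix) {
        // A random UUID makes collisions practically impossible,
        // even for sources started on the same host
        return prefix + "-" + UUID.randomUUID();
    }

    public static void main(String[] args) {
        System.out.println(newClientId("eds-mqtt"));
        System.out.println(newClientId("eds-mqtt"));
    }
}
```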
I am writing to an HDFS target service and I see that the target file size and content are not increasing.
This behavior occurs because the HDFS target service writes to the target file only after the HDFS block size is reached.
A request from the HDFS target service to create a file does not reach the master server in the HDFS cluster immediately. The target service initially caches the file data in a temporary local file. When the temporary file accumulates more than one HDFS block of data, the target service contacts the server. The server allocates a target file and communicates its identity to the target service. The target service then flushes the block of data from the temporary file to the target file.
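The buffering behavior described above can be sketched as follows. This is a simplified illustration, not the actual HDFS client code; the block size of 10 bytes in the usage below is an assumption chosen to keep the example small (real HDFS blocks default to 128 MB).

```java
import java.io.ByteArrayOutputStream;

// Sketch: data accumulates in a local buffer and is "flushed" to the
// target only once a full block has accumulated, which is why the
// target file size appears not to grow for a while.
public class BlockBufferedWriter {
    private final int blockSize;
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    private long flushedBytes = 0;

    BlockBufferedWriter(int blockSize) { this.blockSize = blockSize; }

    void write(byte[] data) {
        buffer.write(data, 0, data.length);
        while (buffer.size() >= blockSize) {
            // This is the point where the target file actually grows
            flushedBytes += blockSize;
            byte[] rest = buffer.toByteArray();
            buffer.reset();
            buffer.write(rest, blockSize, rest.length - blockSize);
        }
    }

    long flushedBytes() { return flushedBytes; }
    int buffered() { return buffer.size(); }

    public static void main(String[] args) {
        BlockBufferedWriter w = new BlockBufferedWriter(10);
        w.write(new byte[7]);  // less than one block: nothing flushed yet
        System.out.println("flushed=" + w.flushedBytes()
                + " buffered=" + w.buffered());
    }
}
```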
The Syslog UDS source service fails to start and I get the following error message in the node log:
java.lang.IncompatibleClassChangeError: Found interface org.objectweb.asm.ClassVisitor, but class was expected
This error occurs if you have configured the HADOOPBASEDIR environment variable. To resolve this error, ensure that you have not configured the HADOOPBASEDIR environment variable when you start the node on which the Syslog UDS source is deployed.
I use a Cassandra target service in the data flow to write to a Cassandra database and I get the following error in the Cassandra database log files:
following JSON field not present in "KEYSPACE.TABLENAME", will be dropped: [ "JSON-FIELDNAME" ]
This error might occur in the following situations:
  • You change the Cassandra database table by adding columns when the data flow is in a deployed state.
  • The JSON input to the Cassandra target service has additional fields that were not in the Cassandra database table when you first deployed the data flow.
To resolve this error, undeploy and deploy the data flow.
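The warning can be understood as a set difference: any JSON field that is not a column of the table as it looked at deploy time is dropped. The sketch below illustrates this with hypothetical table and field names.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

// Sketch: fields present in the incoming JSON but absent from the
// table's column set are the ones reported as "will be dropped".
public class DroppedJsonFields {

    static Set<String> droppedFields(Set<String> tableColumns,
                                     Set<String> jsonFields) {
        Set<String> dropped = new TreeSet<>(jsonFields);
        dropped.removeAll(tableColumns);  // anything not in the table is dropped
        return dropped;
    }

    public static void main(String[] args) {
        // Hypothetical schema: table had only id and name at deploy time
        Set<String> columns = new HashSet<>(Arrays.asList("id", "name"));
        Set<String> json = new HashSet<>(Arrays.asList("id", "name", "added_later"));
        System.out.println("will be dropped: " + droppedFields(columns, json));
    }
}
```

Redeploying the data flow refreshes the column set that the target service compares against, which is why the undeploy/deploy cycle clears the warning.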
I see duplicate messages in the target after I restart an EDS Node that contains the Static File source service.
This issue occurs if you specify the End-of-File (EOF) delimiter but do not specify the Processed File Directory property for the Static File source service. When you restart the EDS Node on which the source service is running, data from files that were already copied to the target is copied again, resulting in duplicate messages.
To resolve this issue, delete the files that the Static File source service has already read before you restart the EDS Node.
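The Processed File Directory property achieves the same effect automatically: a fully read file is moved out of the source directory, so a restarted node cannot pick it up again. A minimal sketch of that idea, with illustrative directory names:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch: after a file is fully read, move it to a "processed"
// directory so a node restart does not re-read it.
public class ProcessedFileMove {

    static Path markProcessed(Path file, Path processedDir) throws IOException {
        Files.createDirectories(processedDir);
        Path target = processedDir.resolve(file.getFileName());
        // After the move, a restarted source service no longer sees the file
        return Files.move(file, target, StandardCopyOption.REPLACE_EXISTING);
    }
}
```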
The client application is unable to send messages to the source service in a data flow.
This issue might occur when you use the following source services to read messages from client applications:
  • TCP source service. To read messages from a TCP client application.
  • Syslog TCP source service. To read messages from a Syslog TCP client application.
This issue occurs if the EDS Node on which the target service is running is down. As a result, the TCP window size between the client application and the source drops to zero, and the client application stops sending data.
To resolve this issue, navigate to the following directory:
<EDS Node Installation Directory>/node/logs/
View the <node name>-node.log file on the target node and correct the exceptions to bring up the target service.
After the EDS Node on which the target service is running comes up, the TCP client application starts sending messages to the TCP source service and the TCP source service writes data to the target service from the point where it had stopped.
I have deleted an Edge Data Streaming Service and created a new one. When I start the Edge Data Streaming Service, I see the following exception in the Administrator Daemon log file:
Name is already used by an existing object
This issue might occur if you do not drop all the tables that were used by the previous Edge Data Streaming Service.
To resolve this issue, perform the following steps:
  1. Delete the existing Edge Data Streaming Service.
  2. Drop the database schema that was used by the previous Edge Data Streaming Service.
  3. Ensure that the Administrator Daemon is up.
  4. Create an Edge Data Streaming Service.
  5. Select the appropriate database type when you specify the database properties for the new Edge Data Streaming Service.
I see an error condition for the Edge Data Streaming Service after I recycle the service.
The Recycle option is not supported on EDS. When you select the Edge Data Streaming Service in the Navigator after you recycle the service, the Edge Data Streaming Service icon in the Navigator displays a recycling state and the Enabling Service message appears in the contents panel. As a result, you are not able to access the Edge Data Streaming view and the Properties view in the contents panel.
Restart the Administrator tool. The Services and Nodes view of the Domain tab may still display an error condition for the Edge Data Streaming Service. This error does not impact the functionality of the Edge Data Streaming Service and you can ignore the message.
Alternatively, perform the following steps to delete the Edge Data Streaming Service and create a new service to resolve the error condition:
  1. Restart the Administrator tool.
  2. In the Navigator, right-click the Edge Data Streaming Service and click Delete.
    When you click the Edge Data Streaming Service in the Navigator, the Edge Data Streaming Service icon in the Navigator displays a recycling status and the Enabling Service message appears in the contents panel. You can ignore this message.
    The Recycle Service window appears.
  3. Click OK.
  4. Restart the Administrator tool.
  5. Create a new Edge Data Streaming Service.
I use an Unstructured Data Parser transformation in the data flow and I get the following exception in the EDS Node log file:
java.util.regex.PatternSyntaxException
This exception might occur when you have specified an incorrect pattern in the transformation.
Use a regular expression debugger to verify that the pattern you specified is correct.
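As a quick local check, assuming the transformation pattern is a standard Java regular expression, compiling it up front surfaces the same PatternSyntaxException before the data flow is deployed. The sample patterns below are illustrative.

```java
import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

// Sketch: compile the pattern locally to catch syntax errors early,
// instead of discovering them in the EDS Node log at runtime.
public class PatternCheck {

    static String validate(String pattern) {
        try {
            Pattern.compile(pattern);
            return "OK";
        } catch (PatternSyntaxException e) {
            return "Invalid: " + e.getDescription();
        }
    }

    public static void main(String[] args) {
        System.out.println(validate("(\\d+)-(\\w+)")); // well-formed pattern
        System.out.println(validate("(\\d+"));         // unclosed group
    }
}
```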


Updated March 19, 2019

