Connectors

Using File Systems (FTP, FTPS, SFTP, and Kafka)

File System

When a file needs to be exported to or imported from the server hosting Web Central, or from a shared folder visible to that server, you can export or import the file directly. Use the local file system's syntax to refer to the file.
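
For example, assuming Web Central runs on Windows, a local path or a UNC path to a shared folder might be used (both paths below are purely illustrative):

C:\WebCentralInstall\data\transfer\employees.csv or \\fileserver\shared\employees.csv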

FTP

In some cases, the file to read or write may be on a remote FTP server; in that case, the Connector will process the file from the FTP server. To use FTP, specify the FTP Connection information in the FTP Configuration fields on the Connector Properties form.

FTPS (FTP over TLS)

If your FTP server is on a secure or encrypted port using TLS, you might have to specify a few additional Connector Parameters:

{"useFtpOverTLS":"true", "clientCertificateFileName":"C:\\WebCentralInstall\key\myFtpsClientKey.pem", "serverCertificateFileName":"C:\\WebCentralInstall\key\myFtpsServerKey.pem", "clientKeyPassphrase":"AADSFLKJEEGF"}

To specify the use of FTPS, you must set the first parameter, useFtpOverTLS; depending on your server configuration, the other parameters may not be required.
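
For example, if your FTPS server does not require client certificates, the minimal Connector Parameters might be just:

{"useFtpOverTLS":"true"}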

SFTP

In addition to FTP support, the Connectors support the use of SSH-based FTP or SFTP. Do not confuse SFTP with FTPS; FTPS is simply FTP over TLS, whereas SFTP uses a different protocol (SSH) to communicate with the server. 

To use SFTP, complete the FTP settings on the Connector Properties form with the SFTP host information. Secure FTP should be set to Yes and an FTP Username should be specified. Typically, the FTP password is not required, as a private/public key pair is usually used to authenticate.

In addition to these FTP settings, you must specify the location of the SFTP private key as a Connector Parameter in JSON format. The server will have the public key.

{"clientCertificateFileName":"C:\\WebCentralInstall\\keys\\mySftpKey.ppk"}

 

Each Connector Parameter is listed below with its description and an example.

useFtpOverTLS or useFtpOverSSL

When set to true, the connector uses FTPS, in other words FTP over TLS.

{"useFtpOverTLS":"true", "clientCertificateFileName":"C:\\WebCentralInstall\\key\\myFtpsClientKey.ppk" "serverCertificateFileName":"C:\\WebCentralInstall\\key\\myFtpsServerKey.ppk", "clientKeyPassphrase":"AADSFLKJEEGF"}

uploadDocument

Uploads the file to the afm_docvers table.

{"uploadDocument":{ "table":"doc_templates", "value":"BLReport", "docField":"template", "comment":"Building Report"} }

clientCertificateFileName

The location of the client’s private key file, to prove to the FTPS server that Web Central is who the server expects it to be.

see useFtpOverTLS

serverCertificateFileName

The location of the server’s public key file, to verify the server is who it is expected to be.

see useFtpOverTLS

clientKeyPassphrase

The passphrase to access the client’s private key.

see useFtpOverTLS

Apache Kafka

Kafka is a messaging bus; a full description is beyond the scope of this document, but it can essentially be treated as a file system that notifies listeners when temporary files are written to it. This means that, as with FTP, you can export any file. To publish to Kafka you must configure the connector to be compatible with the Kafka service you intend to use. This section describes how to supply this configuration to the connector.

Kafka URL

The URL that connectors accept for producers (export) and listeners (import) is of the form:

kafka://host:port/topic/partition/key

Again, without going into details, host:port identifies the location of the Kafka service on the network. The topic is like a folder and the key is like a file name. The partition is about load balancing writes to the service, but the balancing is per-topic. The partition can be omitted in favor of letting the Kafka client assign one automatically, e.g. kafka://host:port/topic//key.

You can also specify producer and consumer parameters as properties of the URL (e.g. kafka://host:port/topic//key?acks=all), but it is recommended to set them in the Connector Parameters instead.
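
For example, the acks=all setting above could instead be supplied as a Connector Parameter; the kafka/producer nesting shown here follows the Export example later in this section:

{"kafka":{"producer":{"acks":"all"}}}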

Parameters

Each parameter is listed below with its description.
producer

A JSON Object specifying any Kafka producer parameters for export.

See Producer Configurations | Confluent Documentation.

Some values are automatically set, e.g. from the connection string, but anything set here will override everything else.

Do not set key.serializer or value.serializer.

consumer

A JSON Object specifying any Kafka consumer parameters for import.

See Consumer Configurations | Confluent Documentation.

Some values are automatically set, e.g. from the connection string, but anything set here will override everything else.

Setting group.id is recommended.

Do not set key.deserializer or value.deserializer.

consumerTimeout

When importing or listening, indicates how long to wait for a message to be received from Kafka.

You may want a larger value when listening in the background and a small value for on-demand imports.

This is specified as a Duration (Java Platform SE 8).

replicationFactor

MUST match the topic’s configuration on the Kafka service when publishing. Typically 3.

See Topic Configurations | Confluent Documentation.

producerTemplate.timestamp

A long representing the publication time of a message as milliseconds since the Epoch. Overrides the timestamp for the message sent to Kafka.

Not recommended.

attemptCreateTopic

Defaults to true. If the topic isn’t found on the Kafka server, the connector will attempt to create it. Setting this to false suppresses that behavior.
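
As a sketch, several of these parameters might be combined as shown below. The ISO-8601 value for consumerTimeout assumes it is parsed as a java.time.Duration, the group id is only an illustrative name, and the nesting under kafka follows the Export example below:

{ "kafka": { "consumer": { "group.id": "webcentral-import" }, "consumerTimeout": "PT30S", "replicationFactor": 3, "attemptCreateTopic": false } }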

Examples

Export

{
  "kafka": {
    "producer": {
      "security.protocol": "SASL_SSL",
      "sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username='MyUser' password='MyPassword';",
      "sasl.mechanism": "PLAIN",
      "client.dns.lookup": "use_all_dns_ips",
      "acks": "all",
      "max.request.size": 8388608,
      "compression.type": "gzip"
    },
    "replicationFactor": 3
  }
}

Import

{
  "kafka": {
    "consumer": {
      "security.protocol": "SASL_SSL",
      "sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username='MyUser' password='MyPassword';",
      "sasl.mechanism": "PLAIN",
      "client.dns.lookup": "use_all_dns_ips",
      "fetch.max.bytes": 8388608
    }
  }
}

Listeners

When Kafka is polled for data, the call blocks until a message is available or the consumerTimeout is reached. This is best handled as a continuous background process; because Archibus Jobs aren't designed to run continuously, configure a listener instead. See: Archibus Connectors - Help - Connector Listeners.