Installing the Storage Connector

The Syncplicity On-Premise Storage Connector is delivered as a virtual machine image (in an OVA format) to simplify the deployment process. The virtual machine image is based on the CentOS 6.8 Linux operating system. It ships with the necessary Syncplicity software.

NOTE: After the initial installation, you are responsible for maintaining the operating system on the virtual machine, which includes staying current with updates and bug fixes.

The following tasks describe how to install the Storage Connector in your datacenter.

If you are installing the Connector in an Amazon Web Services (AWS) environment, follow the installation procedure in Installing the Storage Connector in an Amazon Web Services (AWS) environment.

Provisioning a Virtual Machine

You must first download the Storage Connector software and then connect to a VMware ESXi server to deploy it.

To provision a virtual machine:

  1. Download the Storage Connector OVA file from:

  2. Connect to the appropriate VMware ESXi server using VMware vSphere Client.

You must perform the following tasks for each of the Storage Connector servers that you plan to deploy (at least two are required).

Task 1: Deploy the OVF Template

You must use the vSphere Client's built-in support for OVF/OVA packages to create a Storage Connector virtual machine instance.

To deploy the OVF template:

  1. Click File > Deploy OVF Template... to initiate the process.
  2. Accept the EULA.
  3. Configure the amount of memory, CPU cores, and disk space to allocate to the virtual machine. Each virtual machine must be configured with 8GB of RAM, 8 virtual cores (Intel Xeon E5 Family processors, 2.20 GHz), and a minimum of a 50GB HDD.
  4. Start the deployed Storage Connector Virtual Machine. 

Task 2: Log In and Change Your Password

An administrative account with sudo privileges called syncp has already been created in the virtual machine. The initial password for that account is onprem. As soon as you log in, change the password by typing "passwd".

The minimum password complexity requirements have been enhanced as follows:

  1. Passwords must have at least 14 characters.
  2. Passwords must use at least one of each of the four available character types: lowercase letters, uppercase letters, numbers, and symbols.
  3. Passwords cannot reuse the last 5 passwords.
  4. Passwords must contain at least 5 characters which are different from the previous password.
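The first two rules above can be checked before you run passwd. The following is a minimal sketch of such a check; the check_password function is illustrative and is not part of the Syncplicity tooling (history and difference rules are enforced by the system itself and are not checked here):

```shell
# Hedged sketch: pre-check a candidate password against the length and
# character-class rules above. Illustrative only; the OS enforces the
# full policy, including history and difference checks.
check_password() {
  pw="$1"
  [ "${#pw}" -ge 14 ] || { echo "too short (need 14+ chars)"; return 1; }
  case "$pw" in *[a-z]*) ;; *) echo "needs a lowercase letter"; return 1;; esac
  case "$pw" in *[A-Z]*) ;; *) echo "needs an uppercase letter"; return 1;; esac
  case "$pw" in *[0-9]*) ;; *) echo "needs a number"; return 1;; esac
  case "$pw" in *[!a-zA-Z0-9]*) ;; *) echo "needs a symbol"; return 1;; esac
  echo "ok"
}
```

For example, check_password 'Abcdefg1!extra' prints "ok", while a 14-character all-lowercase string is rejected.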

Task 3: Configure the Network Connection

The server listens for incoming SSH connections on TCP port 22. You need to configure the Storage Connector servers with static IP addresses, rather than dynamic IP addresses that are automatically assigned by DHCP.

The next steps describe how to disable DHCP, which is installed and enabled by default, and then how to switch to a static IP address.

  1. Type:

sudo vi /etc/sysconfig/network-scripts/ifcfg-eth0

  2. Replace "BOOTPROTO=dhcp" with "BOOTPROTO=static".
  3. Add the following lines to this file:

IPADDR=<static-ip-address>
NETMASK=<netmask>
GATEWAY=<gateway-ip-address>
To turn on networking and configure the hostname, follow these steps: 

  1. Type:

sudo vi /etc/sysconfig/network

  2. Make sure that the NETWORKING=yes entry is in the file.
  3. Add the following line to this file:

HOSTNAME=<hostname>

For example:

HOSTNAME=storage01.example.com
To configure the IP addresses for the name server, follow these steps: 

1. Type:

sudo vi /etc/resolv.conf

2. Delete the contents of the file.

3. Add a line for each name server's IP address:

nameserver <ip-address-of-name-server-1> 
nameserver <ip-address-of-name-server-2> 

4. Restart the server by typing the following command:

sudo service network restart 
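The interface file edits above can also be made non-interactively. The following is a hedged sketch: a temporary file stands in for /etc/sysconfig/network-scripts/ifcfg-eth0, and the addresses are documentation placeholders (RFC 5737), not values to use in production:

```shell
# Hedged sketch of the ifcfg-eth0 edit made with sed instead of vi.
# A temporary copy stands in for the real file; addresses are placeholders.
CFG=$(mktemp)
printf 'DEVICE=eth0\nBOOTPROTO=dhcp\nONBOOT=yes\n' > "$CFG"

# switch from DHCP to a static configuration
sed -i 's/^BOOTPROTO=dhcp$/BOOTPROTO=static/' "$CFG"
printf 'IPADDR=192.0.2.10\nNETMASK=255.255.255.0\nGATEWAY=192.0.2.1\n' >> "$CFG"

cat "$CFG"
```

To apply this to the real file, point CFG at /etc/sysconfig/network-scripts/ifcfg-eth0 and run the commands with sudo.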

The server now listens for incoming SSH connections only. No other ports have been opened. By default, the Storage Connector does not have a firewall turned on.

If your network configuration restricts outbound connections for time server synchronization, you need to edit the /etc/ntp.conf file and set a different NTP server that the Storage Connector can reach. If you use Atmos storage, make sure that both the Storage Connectors and Atmos connect to the same NTP servers.
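As a minimal illustration, an internal time source can be substituted for the default pool entries in /etc/ntp.conf; the hostnames below are placeholders, not real servers:

```
# /etc/ntp.conf (excerpt) -- replace the default "server" entries with
# time sources the Storage Connector can reach. Hostnames are placeholders.
server ntp1.example.com iburst
server ntp2.example.com iburst
```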

Task 4: Configure SSL

You now need to configure SSL for secure communication between the Storage Connector and the Syncplicity client.

  • You must deploy a load balancer in front of your Storage Connectors and configure it to perform SSL offloading.
  • Ensure that the SSL-offloading Load Balancer uses a Certificate Authority (CA)-issued certificate that has been correctly chained.

    A certificate chain consists of all the certificates needed to certify the subject identified by the end certificate. In practice this includes the end certificate, the certificates of intermediate CAs, and the certificate of a root CA trusted by all parties in the chain. Every intermediate CA in the chain holds a certificate issued by the CA one level above it in the trust hierarchy. The root CA issues a certificate for itself.

    If you want to create a proper chain, use a text editor of your choice, such as Notepad or vi, to copy and paste each of the certificates (two, or three if there is an intermediate root) into one text file in the following order:
      • Server (Storage Connector) Public Key Certificate; e.g., Storage_Connector_node.pem
      • Intermediate Root Certificate (if there is one); e.g., Intermediate_Root.pem
      • Certificate Authority (VeriSign, Thawte, Entrust, etc.) Root Certificate; e.g., CA_Root.pem

Note: The use of self-signed certificates is not supported.

  You can contact the Certificate Authority (CA) that signed the Storage Connector Node Public Key Certificate to obtain the Intermediate Root Certificate as well as the Certificate Authority Root Certificate.

  • Your externally-addressable SSL-offloading Load Balancer load balances Syncplicity client traffic across all Storage Connectors. The specific instructions may vary based on the type of load balancer that you have deployed.
  • Configure your Load Balancer to offload SSL traffic on a port, e.g., 443; then load balance this traffic across the IP addresses of all Storage Connectors on port 9000.
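The chain assembly described above amounts to concatenating the certificate files in order. The following sketch uses the example .pem filenames from this article, with one-line dummy files standing in for real certificates; substitute the files issued for your deployment:

```shell
# Hedged sketch: build chained.pem in the order listed above
# (server cert, then intermediate, then CA root). Dummy one-line
# files stand in for the real certificates here.
cd "$(mktemp -d)"
echo 'server cert'       > Storage_Connector_node.pem
echo 'intermediate cert' > Intermediate_Root.pem
echo 'ca root cert'      > CA_Root.pem

cat Storage_Connector_node.pem Intermediate_Root.pem CA_Root.pem > chained.pem
cat chained.pem
```

The resulting chained.pem is what you install on the SSL-offloading load balancer.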

Task 5 (Optional): Enable HSTS

If required, configure HTTP Strict Transport Security (HSTS) for secure communication between the Storage Connector and the Syncplicity client.

Note: HSTS is configured at the Load Balancer level. Please review the documentation for your Load Balancer on how to enable HSTS.
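As one illustration, on an HAProxy-based load balancer the response header can be added with a directive like the following; this is an assumption about your setup, and the exact syntax varies by load balancer and version:

```
# HAProxy frontend/backend excerpt (illustrative only -- consult your
# load balancer's documentation). Adds the HSTS header to responses.
http-response set-header Strict-Transport-Security "max-age=31536000; includeSubDomains"
```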

Task 6: Mount the Dedicated Syncplicity NFS Share

If your storage backend of choice is Atmos, S3, or Azure Blob storage, you can skip this task.

Task 6a: Configure Storage Connector with NFS

When the Storage Connector uses a networked mountpoint for storage and that mount is lost, the Storage Connector does not stop automatically; instead, it may begin to save files on the local file system. To prevent this, use the "chattr" command to make the mountpoint where sync-storage saves content immutable. When NFS is mounted at the mountpoint, the permissions of the NFS-mounted storage override the local mountpoint permissions, so as long as the NFS mount is present, the Storage Connector can write to the mountpoint.

Here is an example of using the "chattr" command:

#> mkdir /mnt/syncp
#> chattr +i /mnt/syncp

Task 6b: Configure Isilon

If your storage backend of choice is Isilon, you must mount the dedicated Syncplicity share to the server at /mnt/syncp. Use the NFS filesystem type. To make sure that the Isilon share is mounted automatically at system startup:

  1. Type:

sudo vi /etc/fstab

  2. Add the following line to the file:

<Isilon_cluster_name_or_IP_address>:/<Syncplicity_data_directory>  <mount_point>  nfs  rw

Where <mount_point> is the value you have set for the key "rootdir" for the platform section (Isilon, VNX, fs) in the config file /etc/syncp-storage/syncp-storage.conf.

Do not include the addr=<server> option since this can cause connectivity issues to Isilon.

Example: isilon01.example.com:/ifs/syncdata  /mnt/syncdata  nfs  rw

  3. Type:

sudo mount <mount_point>

For production environments, ensure that the Isilon cluster name (used in the NFS mount entry in /etc/fstab) is a SmartConnect DNS name for the Isilon cluster and that the SmartConnect settings are configured for Dynamic IP Addresses. This ensures that the Storage Connectors can leverage the high availability (HA) features of the EMC Isilon architecture.  Configuring the mount options to access a SmartConnect zone also maximizes performance to the EMC Isilon cluster.

NOTE: The Isilon storage should have a directory created specifically for Syncplicity data. This directory must have its permissions and NFS export configured for the Storage Connectors, as described in the Configuring Isilon storage procedure in the Prerequisites article. 

Task 6c: Configure Standard NFS Storage

If your storage backend of choice uses a standard NFS interface (excluding Isilon), you must mount a dedicated Syncplicity share to the server at /mnt/syncp. Use the NFS filesystem type. To mount the NFS share automatically at system startup:

  1. Type:

sudo vi /etc/fstab

  2. Add the following line to the file:

<NFS_server_name_or_IP_address>:/<exported_directory>  <mount_point>  nfs  rw

Where <mount_point> is the value you have set for the key "rootdir" for the platform section (Isilon, VNX, fs) in the config file /etc/syncp-storage/syncp-storage.conf.

Example: nfs01.example.com:/exports/syncdata  /mnt/syncdata  nfs  rw
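Because the Storage Connector can silently fall back to the local disk if the mount disappears (see Task 6a), it is worth verifying that the path is an active mountpoint before relying on it. A minimal sketch, using the mountpoint utility from util-linux; the require_mount function name is illustrative:

```shell
# Hedged sketch: confirm a path is an active mountpoint before relying
# on it. `mountpoint` ships with util-linux; the function name is ours.
require_mount() {
  if mountpoint -q "$1"; then
    echo "mounted: $1"
  else
    echo "NOT mounted: $1" >&2
    return 1
  fi
}

# Example use: require_mount /mnt/syncp before starting syncp-storage.
```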

Task 7: Configure the Storage Connector

Go to the Configuring the Storage Connector article to complete the installation. 
