Edit the Storage Connector configuration file

Most of the settings for the Storage Connector service are stored in the file /etc/syncp-storage/syncp-storage.yml on the Storage Connector virtual machine. You must modify this file to configure the Storage Connector for your environment. A complete list of available settings is given in Storage Connector configuration parameters.

  1. On the Storage Connector virtual machine, open the syncp-storage.yml file for editing:
    sudo vi /etc/syncp-storage/syncp-storage.yml
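    As an optional precaution (this is not part of the official procedure), keep a backup copy of the file before changing it:
    # optional: back up the original configuration before editing
    sudo cp /etc/syncp-storage/syncp-storage.yml /etc/syncp-storage/syncp-storage.yml.bak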
  2. In the syncplicity.ws section of the syncp-storage.yml file, replace <syncplicity access key> with the access key from the Custom Storage Settings page.
    Example:
    accesskey: "d4jJDpO7erZEmrlKab6w"
  3. If your company is using the EU PrivacyRegion, your on-premises Storage Connector must be configured with the following settings:
    syncplicity.ws.url: "https://xml.eu.syncplicity.com/1.1"
    syncplicity.ws.external.url: "https://api.eu.syncplicity.com"
    syncplicity.health.url: "https://health.eu.syncplicity.com/v1"
  4. If your environment requires a proxy, set enabled to true and specify the proxy host name and port in the proxy section. For example:

    syncplicity:
        httpClient:
            proxy:
                enabled: true
                host: "my_proxy.mycompany.com"
                port: 8080
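    To confirm that the proxy itself is reachable from the Storage Connector virtual machine, a quick check with curl can help (optional; the proxy host and port below are the sample values above, and the target URL is the EU API endpoint from step 3, so substitute whichever values and endpoint apply to your deployment):

    # optional connectivity check through the proxy (sample values; adjust for your environment)
    curl -x http://my_proxy.mycompany.com:8080 -I https://api.eu.syncplicity.com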
  5. In the syncplicity.storage section of the syncp-storage.yml file, replace <storage type> with:

    • atmos for EMC Atmos systems
    • azure for Azure storage blobs
    • google for Google Cloud Storage (GCS)
    • fs for generic NFS v3 or v4 systems
    • s3 for EMC ECS systems or AWS S3 buckets

    For example, if you are configuring for Azure blob storage, enter:

    syncplicity:
        storage:
            # The backend storage type. One of { fs, s3, azure, atmos, google }
            type: azure
  6. If type is atmos, configure your Atmos storage settings:
    Under the atmos section of the syncp-storage.yml file, set url to the URL and port on which your Atmos installation listens.
    Make sure that you explicitly include the port number.
    Example:
    url: "https://atmos.internal:443"
    Set token to your Atmos authentication token.
    Example:
    token: "7ce21bbh56ek8feg0a7c23f343ad8df99/tenant"
    Set secret to your Atmos secret key.
    Example:
    secret: "poSq7g5123t1TEQp5PlWhv4SAxk="
  7. If type is s3 for AWS, configure your AWS storage settings under the s3 section of the syncp-storage.yml file. Enter the name of each bucket you created, along with the access key and secret you were provided. For AWS, the secret was generated when you created the IAM user. For example:

    syncplicity: 
        storage:
            type: s3
    
            # S3 configuration
            s3:
                data:
                    # Check syncplicity.crypto.keyStore section for how to enable this or setup keystore
                    # access: <s3 data access key>
                    # secret: <s3 data secret key>
                    bucket: <s3 data bucket name>
                image:
                    # Check syncplicity.crypto.keyStore section for how to enable this or setup keystore
                    # access: <s3 image-access key>
                    # secret: <s3 image-secret key>
                    bucket: <s3 image-bucket name>
                irm:
                    # Check syncplicity.crypto.keyStore section for how to enable this or setup keystore
                    # access: <s3 irm access key>
                    # secret: <s3 irm secret key>
                    bucket: <s3 irm bucket name>
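    If the AWS CLI happens to be installed (it is not required by this procedure), listing a bucket is a quick way to confirm that the IAM access key and secret can reach it. The bucket name below is a placeholder; use the names you configured above:

    # optional: confirm the IAM credentials can list the data bucket
    # (requires the AWS CLI configured with the same access key and secret)
    aws s3 ls s3://<s3 data bucket name>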
    
    
  8. If type is s3 for EMC ECS, configure your EMC ECS storage settings under the s3 section of the syncp-storage.yml file by providing the following information:

    • The full URL of the ECS storage, including the port. Ask your ECS storage administrator for the exact ports in use; the default ports are 9020 for HTTP and 9021 for HTTPS.
    • The name of the bucket you created.
    • The access key used for authentication, which is generated by the ECS administrator. With ECS, the access key is typically an email address.
    • The secret used for authentication, which is also generated by the ECS administrator.

    For example:

    syncplicity:
        storage:
            type: s3

            # S3 configuration
            s3:
                # Full URL of the ECS storage, including the port
                url: "http://10.1.1.1:9020"
                data:
                    # Check syncplicity.crypto.keyStore section for how to enable this or setup keystore
                    # access: <s3 data access key, e.g. "syncplicity@mycompany.com">
                    # secret: <s3 data secret key>
                    bucket: <s3 data bucket name>      # for example "MyStorageVault_bucket"
                image:
                    # Check syncplicity.crypto.keyStore section for how to enable this or setup keystore
                    # access: <s3 image access key>
                    # secret: <s3 image secret key>
                    bucket: <s3 image bucket name>
                irm:
                    # Check syncplicity.crypto.keyStore section for how to enable this or setup keystore
                    # access: <s3 irm access key>
                    # secret: <s3 irm secret key>
                    bucket: <s3 irm bucket name>

    NOTE: When an IP address is used in the URL, the Base URL (fully qualified URL) must be defined in the ECS admin console. The Base URL should correspond to the URL you use in the syncp-storage.yml file. The Base URL is used by ECS as part of the object address where virtual host style addressing is used and enables ECS to know which part of the address refers to the bucket and, optionally, namespace.
    Add the Base URL in the ViPR console for ALL your VDCs. Otherwise, you might get upload errors, similar to the following: 
    The request signature we calculated does not match the signature you provided. Check your Secret Access Key and signing method. For more information, see REST Authentication and SOAP Authentication for details.

    As a good practice, set up the connection over HTTP first, in a test phase, and configure it over SSL/HTTPS only after the HTTP connection succeeds. When using HTTPS, you cannot use an IP address; instead, use an FQDN that matches the SSL certificate hosted with the S3 storage. Verify that the certificate can be validated by the Storage Connector or is added to the Storage Connector's trusted certificates.
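    Before switching from HTTP to HTTPS, it can also save troubleshooting time to confirm that the Storage Connector virtual machine can reach the ECS S3 endpoint at all. The address below is the sample value from the example above; substitute your own URL and port:

    # optional reachability check of the ECS S3 endpoint (sample address from the example above)
    curl -i http://10.1.1.1:9020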

  9. If type is fs (generic NFS v3 or v4), configure your NFS storage settings:
    • In the syncplicity.storage section of the syncp-storage.yml file, add or edit the following lines and set each rootDir to the mount point of your NFS v3 or v4 server on this machine.

      storage:
              type: fs
      
              # Generic NFS v3 or v4 configuration
              fs:
                  data.rootDir: /tmp
                  image.rootDir: /tmp
                  irm.rootDir: /tmp

      Note: The monitorMountPointEnabled setting, when enabled, checks every monitorMountPointInterval seconds whether the directory specified by syncplicity.storage.fs.rootDir is available. If the directory is not available and the check fails, a message similar to the following appears in the Storage Connector log file:
      2017-11-27 10:56:18,148 [E] [status] - Resource fs.data check failed. Remote mount point '/tmp/syncp1' is unavailable. '/usr/bin/checkmount.sh' exit status 0.
      Make sure that syncp-storage:syncp-storage owns the mount point. To set ownership of the mount point, type the following command:
      chown -R syncp-storage:syncp-storage <mount_point>
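      Two optional checks with standard Linux commands (not part of the official procedure) can confirm the mount and the ownership set above; <mount_point> is the same placeholder as in the chown command:

      # optional: confirm the NFS export is mounted at the expected path
      mount | grep <mount_point>
      # optional: confirm syncp-storage:syncp-storage owns the mount point
      ls -ld <mount_point>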

  10. If type is azure, configure your Azure storage settings under the azure section of the syncp-storage.yml file. Enter the Azure storage account name, the storage account key, and the name of each Azure blob storage container.

    For example:

    storage:
            type: azure
    
            # Azure configuration
            azure:
                data:
                    accountName: <account name>
                    # Check syncplicity.crypto.keyStore section for how to enable this or setup keystore
                    # accountKey: <storage account secret key>
                    container: <container name>
                image:
                    accountName: <account name>
                    # Check syncplicity.crypto.keyStore section for how to enable this or setup keystore
                    # accountKey: <storage account secret key>
                    container: <container name>
                irm:
                    accountName: <account name>
                    # Check syncplicity.crypto.keyStore section for how to enable this or setup keystore
                    # accountKey: <storage account secret key>
                    container: <container name>

    NOTE: When configuring the Storage Connector to use Azure blob storage, the Storage Connector server(s) should be hosted in the Azure virtual network to minimize latency between the Storage Connector and the storage.
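    If the Azure CLI happens to be available (it is not required by this procedure), listing the containers is a quick way to confirm the storage account name and key before restarting the service. The placeholders match the configuration above:

    # optional: confirm the account name and key are valid and the containers exist
    az storage container list --account-name <account name> --account-key <storage account secret key> --output table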

  11. If type is google, configure your GCS settings under the google section of the syncp-storage.yml file. Enter the names of the buckets you created and the JSON authentication credentials, which are provided in a downloadable file when your service account key is generated (see the GCS documentation).

    For example:

    storage:
            type: google
    
            # Google Storage configuration
            google:
                # Check syncplicity.crypto.keyStore section for how to enable this or setup keystore
                authJson: <the authentication credentials JSON for the service account>
                data.bucket: <data bucket name>
                image.bucket: <image bucket name>
                irm.bucket: <irm bucket name>
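    If the Google Cloud SDK happens to be installed (it is not required by this procedure), you can confirm that the service account credentials can reach the buckets. The key file path and bucket name below are placeholders:

    # optional: authenticate with the downloaded service account key and list the data bucket
    gcloud auth activate-service-account --key-file=<path to the downloaded JSON key file>
    gsutil ls gs://<data bucket name>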

 
