
ecs-sync's Introduction

ecs-sync

ecs-sync is a bulk copy utility that can move data between various systems in parallel

For more information, please see the wiki

Dependency Updates

To check for updated dependency versions across all modules, use the gradle-versions-plugin:

./gradlew dependencyUpdates
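
The same plugin can restrict the report to stable releases via its revision property (a standard gradle-versions-plugin feature; adjust as needed):

    ./gradlew dependencyUpdates -Drevision=release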

ecs-sync's People

Contributors

jasoncwik, subras21, twincitiesguy, xiaoxin-ren


ecs-sync's Issues

--store-metadata doesn't work

For some reason I can't force storing metadata when cloning an Atmos namespace to the filesystem.

Unrecognized option: --store-metadata
    use --help for a detailed (quite long) list of options

The way I run ecs-sync is as follows:

C:\ecs-sync\build\libs>java -jar ecs-sync-3.2.5.jar -source atmos:http:/tokenId@host:port -target file:c:\atms\ecslocal --log-level verbose --store-metadata

It seems the --store-metadata option is not consumed: after modifying the source code and recompiling (hard-coding the boolean assignment storeMetadata = true in the config class), I was able to get metadata persisted on the filesystem.
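
A quick way to check whether the option exists under a different (possibly plugin-prefixed) name in a given build is to search the generated help output; this sketch assumes nothing beyond the --help flag shown above:

    java -jar ecs-sync-3.2.5.jar --help 2>&1 | grep -i metadata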

New Sync Test

I would like to perform a connectivity test using the New Sync feature. Can I select Source: Atmos, Target: Simulated Storage for Testing, and fill in the Atmos info with valid information? Will it work?


ecs-sync-ui does not validate ECS certificate with subject alternate names or without IP addresses

We have a company policy not to include IP addresses in certificates signed by our corporate CA. The ECS has been configured with all four subnets (public, management, data and replication), and we are using an LTM with SSL pass-through, so the SSL/TLS certificate has multiple subject alternative names: DNS names specific to the LTM for S3, Swift and NFS, as well as each node's DNS name on the data VLAN/subnet.

The ecs-sync host does have the company CA included in the OpenSSL and Java certificate/key stores.

We're intending to primarily use ecs-sync for CAS data migration from Centera.

I picked up the issue when attempting to configure ecs-sync (via the web UI) to store and retrieve its configuration in an ECS S3 bucket: the process failed when using the HTTPS setting and port 9021 for the specified ECS hosts, but worked when using HTTP and port 9020.

From ecs-sync-ui.log it seems the code resolves the supplied ECS nodes' DNS names to IP addresses and expects an IP address to match the certificate common name - I don't know enough about how the Java library checks SSL certificates to know whether this would succeed if the IP address were in the subject alternative name list on the certificate.

The expected behavior would be for ecs-sync-ui (and ecs-sync?) not to expect the IP address in the certificate, but to use the supplied DNS hostnames and verify the certificate accordingly.

This is the behavior of pretty much all other S3 clients and web browsers: use the DNS name when it is supplied in the connection, rather than only the IP address, to verify the SSL certificate sent by the ECS nodes.
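
For reference, one way to confirm exactly which names the ECS node presents in its certificate (standard OpenSSL commands; host name and port are placeholders):

    echo | openssl s_client -connect ecs-node.example.com:9021 -servername ecs-node.example.com 2>/dev/null \
        | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"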

Logs:
ecs-sync-ui.log
ui-config.xml:
ui-config.xml.txt

ecs-sync ignores -Dhttp.nonProxyHosts=

Hello, I am using the ecs-sync library ecs-sync-3.2.9.jar. When running on a server I try to bypass the proxy settings, but it seems the non-proxy-hosts property is not being applied, as I get a failure at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392).

I tried to connect to the ECS server using a simple HTTP client and it connects fine, so all firewalls are open.
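
For completeness, JVM system properties only take effect when passed before -jar, so a bypass attempt would look roughly like the sketch below (host names are placeholders, and whether ecs-sync honors the standard property is exactly what is in question here). Note also that the stack trace shows a SOCKS socket, which suggests a SOCKS proxy (socksProxyHost) rather than an HTTP proxy may be in effect.

    java -Dhttp.nonProxyHosts="ecs1.example.com|ecs2.example.com|*.internal.example.com" \
         -jar ecs-sync-3.2.9.jar <usual source/target options>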

14% failure rate with -10014 errors on ECS but ECS-Sync shows no failures at all

Hi all
We've got a CAS to CAS job running for close to 7 million clips in a clip list.
The job also validates on the ECS target. Steady state is some 20 objects a second without any errors.
The ECS UI shows 14% failures, all with -10014 (FP_FILE_NOT_STORED_ERR) errors.
As the validation also shows no errors, I wonder whether there is a good explanation for these errors and why ecs-sync does not show any objects as erroring out.

Thanks, Holger

Using ECS Sync to migrate blobs from PostgreSQL to ECS

I was looking at the ECS Sync wiki at https://github.com/EMCECS/ecs-sync/wiki

Under the section “Why use ecs-sync?”, the wiki states “With ecs-sync, you can pull blobs out of a database and move them into an S3 bucket”

However, if I look at the ECS Sync storage plugins available at https://github.com/EMCECS/ecs-sync/wiki/Storage-Plugins , I can't see a plugin for moving data from any structured data store to ECS.

I was looking for guidance on how we can move blobs from a relational database to ECS.

Thank you

ECS-Sync 3.2.7 OVA - DHCP

We are currently using ECS-Sync to migrate data to an ECS S3 bucket. The UI is working fine, but we need to change the network from DHCP to a static address, and although we have the instructions to complete this in the OVA, we are unable to find the logon credentials for the VM console to do so. Can anybody point me to where I can get this information?
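
For what it's worth, once console access is available, switching a CentOS-based OVA from DHCP to a static address is typically done with nmtui or nmcli; a rough nmcli sketch (connection name, addresses and DNS are placeholders, and the OVA's own instructions should take precedence):

    nmcli con mod "ens192" ipv4.method manual ipv4.addresses 10.1.2.50/24 ipv4.gateway 10.1.2.1 ipv4.dns "10.1.2.10"
    nmcli con up "ens192"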

Thanks

CAS Skipped files

I am copying files from one CAS system to another. I have 32 million files in the source and have decided to break the job into chunks of 1 million. During my first run on the first million items, I am getting 717 skipped. Is there an option or setting that will output those 717 skipped items to a CSV or txt file?
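
There may be a cleaner built-in way, but since the verbose log writes a line for each skipped object (see the "SyncTask: O--* skipping ..." lines quoted in another issue on this page), one workaround is simply to grep the job log; the log path below is a guess and will depend on how the job was launched:

    grep -i "skipping" /var/log/ecs-sync/ecs-sync.log > skipped-items.txt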

single thread vs. multiple threads for large number of files?

I am currently using the ECS-Sync tool to copy 10.5 million files via CAS from Atmos to ECS with 16 threads. Is it better to use multiple threads or a single thread? This is running on a virtual machine with 16 cores and 16GB of RAM. The CPU is currently running between 9-11% utilization and bandwidth is running at 2.1 MB/s. I am looking for ways to increase performance, and I have 4 more data transfers of this type. Any suggestions?

OVA root password issue

I have read several of the issues concerning the OVA root password. I found one with an email address where I could request passwords. Can I get a new, valid email address to send my request to?

Thanks, In Advance.
Dave O

Two Documentation Suggestions

  • Filters are applied in the order specified, right? The doc on the EMC community site doesn't explicitly say that.

  • Provide an example of using the Delete Target + Auditing using an ID. It took some looking through the code to really understand what this means. Here might be an example: importing objects from a file list called objectlist.txt, deleting said objects from Atmos source, logging every deleted object to a file deleted.txt with the format [oid], was-deleted.

    java -jar vipr-sync-1.1.jar --source atmos:http://123456789:[email protected] --source-oid-list objectlist.txt --target delete --deleted-target-id was-deleted --filters id-logging --id-log-file deleted.txt

Getting XML out of ecs-sync-ui

When I construct an XML config, nothing seems to get transferred; I see the following in the logs:

Aug 14 15:14:03 ecssync java[9553]: 2017-08-14 15:14:03 INFO  [sync-pool-41-t-1] SyncTask: O--* skipping /vol1/pathogen/sample1920X1080dpx10bit.dpx because it is up-to-date in the target
Aug 14 15:14:03 ecssync java[9553]: 2017-08-14 15:14:03 INFO  [sync-pool-41-t-2] SyncTask: O--* skipping /vol1/pathogen/delete_test.png because it is up-to-date in the target

But there's nothing in the bucket!

I suspect I'm building my XML with the wrong settings. Is there a way to get the XML from a job submitted via the UI, so I can verify I'm using the right values?
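
If the UI is backed by the ecs-sync REST service, the submitted configuration may be retrievable from that service directly; the port and path below are assumptions based on the wiki's REST API description and are not verified here:

    curl http://localhost:9200/job/1        # assumed endpoint: returns the sync config XML for job 1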

ECS Sync OVA root password

Deployed ECS Sync ver 3.2.7 but cannot find the root password anywhere.
The old password for ver 3.1 no longer works.
Thank you,
Stefan

Please increase /home directory to be much larger

/home is only 30G, yet the filesystem as a whole has the best part of 150G. Some clip lists are large, and by the time we create source and target lists, sort them, and do a difference check, we can easily end up filling the /home directory.
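
In the meantime, if the OVA's filesystems are on LVM (device and volume names below are placeholders; confirm the layout with lsblk/vgs first), growing /home would look roughly like:

    lsblk                                           # confirm layout and free space
    lvextend -r -L +100G /dev/mapper/centos-home    # -r grows the filesystem along with the LV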

Specific Data

Can this tool be used to pull specific objects (an audio file and its metadata) from Atmos to my local machine, if I somehow fed in a list of the object IDs I'm interested in?
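
Feeding in an OID list looks possible based on the --source-oid-list option used in another issue on this page; a rough sketch (URI, token, file names and paths are placeholders, and exact option names may vary between versions):

    java -jar ecs-sync-3.2.5.jar --source atmos:http://tokenId:secret@host:port --source-oid-list my-oids.txt --target file:/tmp/atmos-export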

MD5 sum mismatch after the PUT

I am using ecs-sync to copy files from one object store to another and am getting the error below:
java.lang.RuntimeException: MD5 sum mismatch (CB0B4A92FE6025C55EAC8A7990329E0F != 5EAF476670396AA3829ECC9AF05650E1)
at com.emc.ecs.sync.Md5Verifier.verify(Md5Verifier.java:67)
at com.emc.ecs.sync.SyncTask.run(SyncTask.java:131)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

The object has versioned files in it. I was able to copy the files manually, but I still see this error when I verify the files.

3.2.7 OVA deployment: running nmtui impossible

Hi all
I deployed the most recent OVA today and tried to log in to the console. The old root user password did not work. Was it changed?
Or are we supposed to use an admin user at the Linux level?

The wiki says to change the OS passwords but does not list them. I remember they were once there.

Other than that: on 3.2.6 we ran 100 threads, which migrated more than 700 clips per second (small files) from Centera to a virtual ECS CE. The customer was impressed with the performance.

Thanks a lot, Holger

ecs-sync install

I installed the ecs-sync OVA, but cannot locate the root password to log in via the CLI.

ova deployment documentation

Hi all
The OVA deployment documentation says to run nmtui, and also to update the OS.
All fine, but would someone mind putting the login details somewhere they can be found?

Thanks, Holger

Problem with downloading Complete Object Report

Hi,

The Download Complete Object Report function seems to be broken in the last few releases; I get a 502 Proxy Error when trying to download any complete object report (see screenshots).

[screenshots: capture, capture2]

Sync s3 to file system target for backup

I'm looking to backup some s3 buckets to a filesystem. I have been able to successfully sync from s3 to the filesystem, but I can't find a way to cleanup unreferenced files on the target. What I would like to do is delete a file on the target if it has not been referenced by the source s3 for over 30 days.

There is a --delete-older-than flag but this only appears to be for source objects.

Is this possible (without using force-sync)? I was thinking that if there were an easy way to know when each file was last checked for syncing, it could be done: files could be purged if their last sync time was more than 30 days ago (so long as the sync ran more frequently than every 30 days). It could also be done if the target filesystem syncer had an option to always touch a file as a "liveness" indicator (without always re-downloading it); files with a timestamp older than X days could then be purged.
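
As a crude interim workaround, if target file timestamps were refreshed on every sync pass (e.g. via the "touch" idea above), a purge could be as simple as the sketch below; the path and age are placeholders, and this deletes data, so dry-run first:

    find /backup/target -type f -mtime +30 -print           # dry run: list purge candidates
    find /backup/target -type f -mtime +30 -print -delete   # actually purge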

A plugin injection point of "no-op" or something along those lines could be added to SyncStorage
and called here: https://github.com/EMCECS/ecs-sync/blob/master/src/main/java/com/emc/ecs/sync/TargetFilter.java#L78
Plugins could then perform custom logic when a sync is not performed; in this case the filesystem plugin could have an option to force-touch the file.

Duplicates in source-list-file cause confusing errors in UI

During a migration, a user encountered errors and skipped objects, but saw no errors in the errors report nor in the database. This caused confusion and reduced confidence that all data was transferred.

In this particular case, there were duplicate entries in the source list file, which caused some objects to be skipped (that were already copied in the same job). The UI reports on every line of the list file (including duplicates), whereas the DB only tracks unique objects. This explains the discrepancy between the UI and the database and is expected.

However, errors are not expected in this case. It turns out there is a race condition where two threads try to insert a record into the DB at exactly the same time. In that situation, one thread will fail, while the other will succeed. The winning thread copies the data and records its results in the database, while the losing thread does not. That's why no errors appear in the DB. However, the net result is that those objects were copied successfully.

This bug is to address the race condition and eliminate the errors. Duplicate entries in the source list will still cause skipped objects, but this is expected behavior and will be added to the troubleshooting guide.

Add Source/Target Support type: Device or PIPE

Hi,
It would be really amazing if we could have support for block devices or pipes.
That way, we would be able to take filesystem snapshots and send/receive them to/from ECS, or manipulate data without any requirement to save it in a filesystem (consuming precious space) before the sync.

Sincerely,
Victor da Costa

NFS plugin path handling improvements

If the path is relative, it should not start with a slash. However, we should be flexible enough to allow either. Also, we need to support syncing a single file. It doesn't have to be easy (right now you have to put in the complete target file name), but we need to support it (other plugins do).

UI crashes if job name contains a space

1. Create a new sync task whose job name contains a space

2. Archive this task after the task finishes

3. The UI crashes

  • ecs-sync-ui.log
2019-05-15 11:29:26.405 ERROR --- [0.1-8080-exec-4] .a.c.c.C.[.[.[.[grailsDispatcherServlet] : Servlet.service() for servlet [grailsDispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.grails.gsp.GroovyPagesException: Error processing GroovyPageView: [views/history/index.gsp:51] Error executing tag <sitemesh:captureBody>: Illegal character in path at index 44: /download/archive/report/20190515T112813-job name.report.csv] with root cause

java.net.URISyntaxException: Illegal character in path at index 44: /download/archive/report/20190515T112813-job name.report.csv
        at java.net.URI$Parser.fail(URI.java:2848)
        at java.net.URI$Parser.checkChars(URI.java:3021)
        at java.net.URI$Parser.parseHierarchical(URI.java:3105)
        at java.net.URI$Parser.parse(URI.java:3063)
        at java.net.URI.<init>(URI.java:588)
  • archive file name contains a space
    /opt/emc/ecs-sync/config/archive/20190515T112813-job name.xml

ECS-SYNC pulls metadata and CDF's, but not blobs

I have the ECS-Sync OVA running version 3.3.0, which I upgraded to. I ran a "sudo yum update" as well before trying anything. When running a UI migration from my Centera to an NFS path, I am getting JSON/CDF files named with the CAS checksum, but the system is not pulling the blob files with them. As a result, I have a migration of 2500 objects that is 4MB, but should be around 290GB. I am able to retrieve the metadata as well, but don't need it for my purpose.

To debug, I have tried a single thread with 1GB of buffer, all the "experimental" settings, and even pulling to the local filesystem. Thread count, buffer size, experimental settings and target do not seem to matter. Verification does seem to fail when it's on. I am not getting permission errors on the targets. I installed another OVA of ECS-SYNC 3.2.7, in case I somehow messed up the upgrade, but that seems to make no difference. The debug logging shows that the object sizes are being detected correctly--they are just not fetched or saved (not exactly sure which).

As another test, I loaded the JCASScript on a Windows machine, and I am able to use the "clipToFile" function to save a blob file using the original filename from the CDF. I did a "query" to produce a clip list, and that runs in 5 minutes instead of 5 hours, but the result is the same--no blobs.

Does anyone have any suggestions for next steps? I am using the UI, and don't really want to mess with the CLI options. I don't see anything relevant in them that might help, but it's possible the UI doesn't have all features. I can post logs, but have to sanitize them first per company policy, which is why they aren't posted now.

There was a mismatch between the signature in the request and the signature computed by the server

Hello,
I ran several ATMOS REST migrations (from ATMOS 2.4.2 to ECS 3.2.0.1) without any problem, but with one particular job I have the following error with ecs-sync 3.2.9:

AtmosException{errorCode=1032, httpCode=403} com.emc.atmos.AtmosException: There was a mismatch between the signature in the request and the signature computed by the server.
at com.emc.atmos.api.jersey.ErrorFilter.handle(ErrorFilter.java:75)
at com.emc.atmos.api.jersey.RetryFilter.handle(RetryFilter.java:63)
at com.emc.atmos.api.jersey.AuthFilter.handle(AuthFilter.java:53)
at com.sun.jersey.api.client.Client.handle(Client.java:652)
at com.sun.jersey.api.client.WebResource.handle(WebResource.java:682)
at com.sun.jersey.api.client.WebResource.get(WebResource.java:193)
at com.emc.atmos.api.jersey.AtmosApiClient.getServiceInformation(AtmosApiClient.java:157)
at com.emc.ecs.sync.storage.AtmosStorage.configure(AtmosStorage.java:165)
at com.emc.ecs.sync.EcsSync.run(EcsSync.java:212)
at com.emc.ecs.sync.service.SyncJobService$SyncTask.run(SyncJobService.java:339)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

I can provide the XML file of the migration job in a private message.
What are the different reasons that could raise this error?

Thank you

Could not create task of type 'ShadowJar'.

When building the project I get:

* What went wrong:
A problem occurred evaluating root project 'ecs-sync'.
> Failed to apply plugin [class 'com.github.jengelman.gradle.plugins.shadow.ShadowJavaPlugin']
   > Could not create task of type 'ShadowJar’.

When I printed the stack trace I found:

Caused by: java.lang.NoSuchMethodError: org.gradle.api.java.archives.internal.DefaultManifest.<init>(Lorg/gradle/api/internal/file/FileResolver;)V
at com.github.jengelman.gradle.plugins.shadow.tasks.DefaultInheritManifest.<init>(DefaultInheritManifest.groovy:15)
at com.github.jengelman.gradle.plugins.shadow.tasks.ShadowJar.<init>(ShadowJar.java:44)
at com.github.jengelman.gradle.plugins.shadow.tasks.ShadowJar_Decorated.<init>(Unknown Source)
at org.gradle.api.internal.DependencyInjectingInstantiator.newInstance(DependencyInjectingInstantiator.java:48)
at org.gradle.api.internal.ClassGeneratorBackedInstantiator.newInstance(ClassGeneratorBackedInstantiator.java:36)
at org.gradle.api.internal.project.taskfactory.TaskFactory$1.call(TaskFactory.java:121)
... 89 more

This is likely a result of johnrengelman/shadow#177. Upgrading the shadow plugin seems to resolve the issue. PR to follow shortly.
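
For anyone hitting this, a quick way to confirm which shadow plugin version is on the buildscript classpath before and after the upgrade (standard Gradle task, nothing ecs-sync-specific):

    ./gradlew buildEnvironment | grep -i shadow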

Bulk sync with verify option generates errors

We have set up an ecs-sync job to run from an NFS export to an EMC ECS hardware appliance.
Access has been set up correctly, because data is being written to the target.
But right after the sync is started, it generates Java errors for every directory and subdirectory found in the NFS export.

When looking at what is written on the target, it has written all of the directories and files that are on the source.

Error example:

"nfsserver.local:/volname_1/directory1/directory2/directory3","directory2/directory3/","true","0","","0","[com.emc.ecs.sync.storage.ObjectNotFoundException: directory2/directory3/] com.emc.ecs.sync.storage.ObjectNotFoundException: directory2/directory3/
    at com.emc.ecs.sync.storage.s3.EcsS3Storage.loadObject(EcsS3Storage.java:254)
    at com.emc.ecs.sync.storage.s3.AbstractS3Storage.loadObject(AbstractS3Storage.java:63)
    at com.emc.ecs.sync.storage.s3.EcsS3Storage.loadObject(EcsS3Storage.java:239)
    at com.emc.ecs.sync.TargetFilter.reverseFilter(TargetFilter.java:110)
    at com.emc.ecs.sync.SyncTask.run(SyncTask.java:128)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)"

The above error is displayed for every single directory and subdirectory.
It seems the errors are only generated when the Verify or Verify-only option is selected.

For recreating the error these are the settings used:
Source: NFS Export
Target: DELL/EMC ECS Hardware Appliance
The bucket on the ECS was created by hand before the sync.

[screenshots: newsync_1, newsync_2, newsync_3]

jcenter.bintray.com:80 failed to respond

When I attempt to build the project I get:

* What went wrong:
A problem occurred configuring root project 'ecs-sync'.
> Could not resolve all dependencies for configuration ':classpath'.
   > Could not resolve org.jdom:jdom2:2.0.5.
     Required by:
         :ecs-sync:unspecified > com.github.jengelman.gradle.plugins:shadow:1.2.3
      > Could not resolve org.jdom:jdom2:2.0.5.
         > Could not get resource 'http://jcenter.bintray.com/org/jdom/jdom2/2.0.5/jdom2-2.0.5.pom'.
            > Could not HEAD 'http://jcenter.bintray.com/org/jdom/jdom2/2.0.5/jdom2-2.0.5.pom'.
               > jcenter.bintray.com:80 failed to respond
   > Could not resolve org.ow2.asm:asm:5.0.3.
     Required by:
         :ecs-sync:unspecified > com.github.jengelman.gradle.plugins:shadow:1.2.3
      > Could not resolve org.ow2.asm:asm:5.0.3.
         > Could not parse POM http://jcenter.bintray.com/org/ow2/asm/asm/5.0.3/asm-5.0.3.pom
            > Could not resolve org.ow2.asm:asm-parent:5.0.3.
               > Could not resolve org.ow2.asm:asm-parent:5.0.3.
                  > Could not get resource 'http://jcenter.bintray.com/org/ow2/asm/asm-parent/5.0.3/asm-parent-5.0.3.pom'.
                     > Could not HEAD 'http://jcenter.bintray.com/org/ow2/asm/asm-parent/5.0.3/asm-parent-5.0.3.pom'.
                        > jcenter.bintray.com:80 failed to respond
   > Could not resolve org.ow2.asm:asm-commons:5.0.3.
     Required by:
         :ecs-sync:unspecified > com.github.jengelman.gradle.plugins:shadow:1.2.3
      > Could not resolve org.ow2.asm:asm-commons:5.0.3.
         > Could not parse POM http://jcenter.bintray.com/org/ow2/asm/asm-commons/5.0.3/asm-commons-5.0.3.pom
            > Could not resolve org.ow2.asm:asm-parent:5.0.3.
               > Could not resolve org.ow2.asm:asm-parent:5.0.3.
                  > Could not get resource 'http://jcenter.bintray.com/org/ow2/asm/asm-parent/5.0.3/asm-parent-5.0.3.pom'.
                     > Could not HEAD 'http://jcenter.bintray.com/org/ow2/asm/asm-parent/5.0.3/asm-parent-5.0.3.pom'.
                        > jcenter.bintray.com:80 failed to respond
   > Could not resolve commons-io:commons-io:2.4.
     Required by:
         :ecs-sync:unspecified > com.github.jengelman.gradle.plugins:shadow:1.2.3
      > Could not resolve commons-io:commons-io:2.4.
         > Could not parse POM http://jcenter.bintray.com/commons-io/commons-io/2.4/commons-io-2.4.pom
            > Could not resolve org.apache.commons:commons-parent:25.
               > Could not resolve org.apache.commons:commons-parent:25.
                  > Could not get resource 'http://jcenter.bintray.com/org/apache/commons/commons-parent/25/commons-parent-25.pom'.
                     > Could not HEAD 'http://jcenter.bintray.com/org/apache/commons/commons-parent/25/commons-parent-25.pom'.
                        > jcenter.bintray.com:80 failed to respond
   > Could not resolve org.apache.ant:ant:1.9.4.
     Required by:
         :ecs-sync:unspecified > com.github.jengelman.gradle.plugins:shadow:1.2.3
      > Could not resolve org.apache.ant:ant:1.9.4.
         > Could not parse POM http://jcenter.bintray.com/org/apache/ant/ant/1.9.4/ant-1.9.4.pom
            > Could not resolve org.apache.ant:ant-parent:1.9.4.
               > Could not resolve org.apache.ant:ant-parent:1.9.4.
                  > Could not get resource 'http://jcenter.bintray.com/org/apache/ant/ant-parent/1.9.4/ant-parent-1.9.4.pom'.
                     > Could not HEAD 'http://jcenter.bintray.com/org/apache/ant/ant-parent/1.9.4/ant-parent-1.9.4.pom'.
                        > jcenter.bintray.com:80 failed to respond
   > Could not resolve org.codehaus.plexus:plexus-utils:2.0.6.
     Required by:
         :ecs-sync:unspecified > com.github.jengelman.gradle.plugins:shadow:1.2.3
      > Could not resolve org.codehaus.plexus:plexus-utils:2.0.6.
         > Could not parse POM http://jcenter.bintray.com/org/codehaus/plexus/plexus-utils/2.0.6/plexus-utils-2.0.6.pom
            > Could not resolve org.codehaus.plexus:plexus:2.0.7.
               > Could not resolve org.codehaus.plexus:plexus:2.0.7.
                  > Could not get resource 'http://jcenter.bintray.com/org/codehaus/plexus/plexus/2.0.7/plexus-2.0.7.pom'.
                     > Could not HEAD 'http://jcenter.bintray.com/org/codehaus/plexus/plexus/2.0.7/plexus-2.0.7.pom'.
                        > jcenter.bintray.com:80 failed to respond

Changing the jcenter repository URL to https resolves the issue. PR to follow.
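
If the URL is declared literally in the build script (it may instead come from a jcenter() shorthand, in which case this does not apply), the change is just the scheme, e.g.:

    sed -i 's|http://jcenter.bintray.com|https://jcenter.bintray.com|g' build.gradle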

CAS Source example

Could someone give me an example of what the CAS "Connection-String" should look like? I am rather confused by the following:

[screenshots: UI_connection, SyncUI]
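
For what it's worth, the Centera SDK's own pool-open string convention, which the CAS plugin presumably wraps (the exact ecs-sync syntax is not verified here), is a comma-separated list of access nodes followed by either a PEA file path or an inline name/secret, e.g.:

    10.5.1.10,10.5.1.11?/path/to/profile.pea
    10.5.1.10,10.5.1.11?name=myprofile,secret=mysecret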

FAQ question

Hi all
If I re-run a CAS to CAS job, will the query start back at epoch and run to now, or is there a saved timestamp, so that the query runs from that timestamp to now?
Thanks, Holger

ECS Metering Output with 0 capacity after Centera CAS to ECS CAS Migration

Hi all

I migrated a couple of pools from Centera CAS to ECS CAS using ecs-sync. After the migration, the metering output in ECS shows the objects, but the capacity remains at 0.
Is this an ECS issue (already known?) or something that concerns ecs-sync?
As the ECS Migration Tool does not geo-replicate migrated content, it is basically useless, and ecs-sync or other tools need to be used. But if capacity shows 0 after the migration, one or another customer may be concerned about their data.

Thanks, Holger

Difference in ECS S3 and S3 behaviour

Hi all
I know it does not make a lot of sense, but I figured one could maybe use an S3 target like ECS or some generic S3 to back up a Centera to.

Setting up CAS to ECS S3 saves the CDF information to ECS, not the files themselves.
When using CAS to S3 (ECS or some other S3 provider), this results in size-mismatch errors.

As this is kind of a generic migration platform, I thought I would get your insights: is it normal behaviour that generic S3 and ECS S3 behave differently?
I still like the idea of being able to make one last copy of the Centera content when someone moves away from the CAS protocol to a native S3 integration. An application takes all its data and rewrites it using S3, but many of our customers would like to keep a copy of the Centera content around just in case something happens. We could then install a small system and restore to that one. After a year the bucket containing the CAS S3 backup could be deleted. This would avoid some of the CAS limitations that currently exist.

Best regards, Holger

Centera SDK Installation question

Hi all

I'm trying to use ecs-sync in a custom vm instead of the OVA. I followed the installation steps of the wiki in a minimal centos 7 installation.

copy ecs-sync-3.1.1.zip to /usr/local
yum install unzip
cd /usr/local
unzip ecs-sync-3.1.1.zip
copy ecs-sync-ui-3.1.1.jar to /usr/local/ecs-sync-3.1.1
delete ecs-sync-3.1.1.zip in /usr/local
sudo yum update
cd ecs-sync-3.1.1
ova/configure-centos.sh
set mariadb password
remove anonymous user
disallow root from remote
remove test database
reload privilege tables
ova/install.sh

In order to install the Centera SDK version 3.4, I did the following as root:
copy Centera_SDK_Linux-gcc4.tgz to /usr/tmp
cd /usr/tmp
gzip -d Centera_SDK_Linux-gcc4.tgz
tar -xvf Centera_SDK_Linux-gcc4.tar
./install

echo "PATH=${PATH}:/usr/local/Centera_SDK/lib/64" > /etc/profile.d/CenteraSDK-path.sh && chmod 755 /etc/profile.d/CenteraSDK-path.sh
echo "LD_LIBRARY_PATH=/usr/local/Centera_SDK/lib/64" > /etc/profile.d/CenteraLIB-path.sh && chmod 755 /etc/profile.d/CenteraLIB-path.sh

. /etc/profile

Logged on to the GUI and created a CAS to CAS Migration job.
[java.lang.UnsatisfiedLinkError: com.filepool.natives.FPLibraryNative.setLastError(I)V] java.lang.UnsatisfiedLinkError: com.filepool.natives.FPLibraryNative.setLastError(I)V
        at com.filepool.natives.FPLibraryNative.setLastError(Native Method)
        at com.filepool.fplibrary.FPLibraryException.retrieveErrorString(Unknown Source)
        at com.filepool.fplibrary.FPLibraryException.<init>(Unknown Source)
        at com.filepool.fplibrary.FPPool.RegisterApplication(Unknown Source)
        at com.emc.ecs.sync.storage.cas.CasStorage.configure(CasStorage.java:96)
        at com.emc.ecs.sync.EcsSync.run(EcsSync.java:213)
        at com.emc.ecs.sync.service.SyncJobService$SyncTask.run(SyncJobService.java:310)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

The Centera SDK Installation documentation only states:
Go to the Centera_SDK/install subdirectory on your machine and run the install
script. Run the script with root privileges since it installs files in system directories.
You may want to modify this script to reflect your own directory structure. Add the
directory where the libraries were installed to the library path environment variable
(LD_LIBRARY_PATH).

Where do the PATH and LD_LIBRARY_PATH variables need to be configured so that the ecs-sync processes see them correctly?
Do I just need to restart something?
Do I need to set the path to the 32bit version?
Do I need the 3.3 Version instead of 3.4?
The other GCC Version?
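
One more thought, purely an assumption on my part since I have not checked how the OVA defines its services: profile.d exports only affect login shells, so if ecs-sync runs as a systemd service, the library path may need to be set on the service itself, roughly:

    systemctl edit ecs-sync     # assumed service name
    # add in the override file:
    #   [Service]
    #   Environment="LD_LIBRARY_PATH=/usr/local/Centera_SDK/lib/64"
    systemctl daemon-reload && systemctl restart ecs-sync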

As it's working in the OVA image, you probably know where I went the wrong way :-)
I'm testing for a migration of several hundred million clips.

Thanks, Holger

Database Tools

Has anyone ever used MySQL Workbench to access the ECS-Sync database? If so, do you have any tips?

No documentation on configuring access to an SMTP server for Alerts

There is no documentation or information on how or where to configure the SMTP server and account for sending alerts.

The GUI forces you to enter an email address to send alerts to, but that's it :-(

FYI - The SMTP config is actually in /opt/emc/ecs-sync/application-production.yml. There are some commented lines you can uncomment if necessary.
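
To locate the commented-out section quickly (path taken from the note above; this assumes the relevant keys mention "mail"):

    grep -n -A6 -i "mail" /opt/emc/ecs-sync/application-production.yml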
