
nextcloud-s3-local-s3-migration's Issues

A missing step migrating from S3 to local

Thank you so much for the script; I had to adapt it a bit, but it seems to have worked in the end.
I found that there is a missing step when migrating from S3 to local storage: previews would not load and always returned 404.

What helped was the following SQL query, which undoes part of the migration-to-S3 step:

UPDATE oc_mounts 
JOIN oc_storages ON oc_mounts.storage_id = oc_storages.numeric_id 
SET mount_provider_class = 'OC\\Files\\Mount\\LocalHomeMountProvider' 
WHERE oc_storages.id LIKE 'home::%' and mount_provider_class = 'OC\\Files\\Mount\\ObjectHomeMountProvider';

It would also be nice to read the oc_ table prefix from the config instead of hard-coding it.
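
As a rough illustration of that last point (a sketch only, assuming $CONFIG has been loaded from config.php the way the script already does), the prefix could come from Nextcloud's standard dbtableprefix key:

// Sketch: read the table prefix instead of hard-coding oc_.
// 'dbtableprefix' is a standard Nextcloud config.php key; it defaults to 'oc_'.
$prefix = isset($CONFIG['dbtableprefix']) ? $CONFIG['dbtableprefix'] : 'oc_';
$sql = "UPDATE {$prefix}mounts
        JOIN {$prefix}storages ON {$prefix}mounts.storage_id = {$prefix}storages.numeric_id
        SET mount_provider_class = 'OC\\\\Files\\\\Mount\\\\LocalHomeMountProvider'
        WHERE {$prefix}storages.id LIKE 'home::%'
          AND mount_provider_class = 'OC\\\\Files\\\\Mount\\\\ObjectHomeMountProvider'";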

postgres support, dry-run

Hi!
Thanks for the great work! I previously opened an issue in the original lukasmu/nextcloud-s3-to-disk-migration repo about PostgreSQL support. I'm not a programmer, so I cannot implement it myself, but a lot of users run Nextcloud on PostgreSQL. It would be awesome if you could add psql support.

The second question is about a dry run: is it possible to run the script without actually copying any data, e.g. to test whether everything is OK?

And a third question: is it possible to copy from S3 to local but keep the objects in S3? If something goes wrong, there should be a way to switch back to S3. Of course, a DB backup is mandatory before starting.
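
For context, one shape PostgreSQL support could take (a sketch under assumptions, not the script's actual code): route database access through PDO and pick the driver from Nextcloud's dbtype config key, so the same prepared statements run on both backends:

// Sketch only: PDO instead of mysqli, so MySQL/MariaDB and PostgreSQL both work.
// dbtype, dbhost, dbname, dbuser, dbpassword are standard Nextcloud config.php keys.
$driver = ($CONFIG['dbtype'] === 'pgsql') ? 'pgsql' : 'mysql';
$dsn    = sprintf('%s:host=%s;dbname=%s', $driver, $CONFIG['dbhost'], $CONFIG['dbname']);
$db     = new PDO($dsn, $CONFIG['dbuser'], $CONFIG['dbpassword'], [
  PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);
// Prepared statements then behave identically on both backends:
$stmt = $db->prepare('SELECT numeric_id, id FROM oc_storages WHERE id LIKE ?');
$stmt->execute(['home::%']);
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);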

Backblaze HTTP 500

Hello,

thank you for your amazing scripts!

Unfortunately, Backblaze is a bit finicky and every now and then returns error 500 for large multipart uploads.

According to their docs (and the error message), the client should simply retry the action. Can this be implemented? https://www.backblaze.com/blog/b2-503-500-server-error/

- Part 287: Error executing "UploadPart" on "https://mybucket.s3.eu-central-003.backblazeb2.com/urn%3Aoid%3A1047419?partNumber=287&uploadId=4_zd8898f4c64d3c36c899b0813_f24099f1f57fd9841_d20230926_m203929_c003_v0312019_t0015_u01695760769549"; AWS HTTP error: Server error: `PUT https://mybucket.s3.eu-central-003.backblazeb2.com/urn%3Aoid%3A1047419?partNumber=287&uploadId=4_zd8898f4c64d3c36c899b0813_f24099f1f57fd9841_d20230926_m203929_c003_v0312019_t0015_u01695760769549` resulted in a `500 Internal Server Error` response:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Error>
    <Code>InternalError</Code>
    <Message>An internal  (truncated...)
 InternalError (server): An internal error occurred.  Please retry your upload. - <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Error>
    <Code>InternalError</Code>
    <Message>An internal error occurred.  Please retry your upload.</Message>
</Error>
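
As a hedged sketch (not the script's current code): recent versions of the AWS SDK for PHP can do this retrying themselves via the client's retries option, which backs off and retries on transient 5xx responses like the one above:

// Sketch: only the 'retries' entry is new; the rest mirrors how the script
// already builds its client from Nextcloud's config.php.
$args = $CONFIG['objectstore']['arguments'];
$s3 = new Aws\S3\S3Client([
  'version'     => 'latest',
  'region'      => $args['region'],
  'endpoint'    => 'https://' . $args['hostname'],
  'credentials' => ['key' => $args['key'], 'secret' => $args['secret']],
  // 'standard' or 'adaptive' mode retries transient 5xx errors with backoff.
  'retries'     => ['mode' => 'adaptive', 'max_attempts' => 10],
]);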

No files on the Nextcloud web interface after running the script

Hello!

First of all, thank you for your work on the script.

I ran the migration script to S3 following the described steps, but I have the following problem: no files appear in the Nextcloud interface.

  • The files are indeed in the S3 bucket.
  • The files have the right storage ID in oc_filecache, which refers to the right numeric_id in oc_storages.
  • Also, if I create a folder from Nextcloud at the root, it appears in oc_filecache with its path prefixed with "files/". So if I create a folder named "MyFolder", it appears as "files/MyFolder" in oc_filecache. I noticed that for migrated files there is no prefix; it's just "MyFolder".
    • Is that a problem?

I don't really know what else to check; do you have any idea?

Thanks

Greg
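
For anyone debugging the same symptom: in a stock Nextcloud install, user content in oc_filecache has its path under a files/ root, so a quick diagnostic (a sketch, assuming the default oc_ prefix; 42 is a placeholder for the affected storage's numeric_id) is to list entries that lack it:

-- Diagnostic sketch: cache entries whose path is missing the files/ root.
SELECT fileid, storage, path, name
FROM oc_filecache
WHERE storage = 42
  AND path NOT LIKE 'files/%'
LIMIT 50;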

Final phase produces error: `Fatal error: Uncaught mysqli_sql_exception: Duplicate entry`

I have succeeded in running the utility for the transfer from local files to S3 in all stages but the last ($TEST = 0).

The tail end of the output, showing the error, appears below. Note that the earlier phases produced no errors as such, but did print the message "EXPERIMENTAL, I have not had this problem, so can not test".

Do you have any advice for recovery?

check for canceled uploads in oc_filecache...
=> EXPERIMENTAL, I have not had this problem, so can not test.. => check only!PHP Fatal error:  Uncaught mysqli_sql_exception: Duplicate entry 'object::store:amazon::bucket_name' for key 'storages_id_index' in /root/work/s3/nextcloud-S3-local-S3-migration/localtos3.php:686
Stack trace:
#0 /root/work/s3/nextcloud-S3-local-S3-migration/localtos3.php(686): mysqli->query()
#1 {main}
  thrown in /root/work/s3/nextcloud-S3-local-S3-migration/localtos3.php on line 686

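
One plausible reading (an assumption on my part, not confirmed): an earlier test run already inserted the object-store row into oc_storages, so the final run's INSERT collides with the unique storages_id_index. A diagnostic query shows whether such a leftover row exists:

-- Diagnostic sketch: 'object::store:amazon::bucket_name' is the id from the error.
SELECT numeric_id, id
FROM oc_storages
WHERE id LIKE 'object::store:%' OR id LIKE 'local::%';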

Local to S3 upload limitation: 5 GB

Thank you very much for providing this project. I have encountered an issue where the script reports that the file I am trying to upload exceeds the size limit. After reviewing the AWS documentation, I found that a single PutObject upload through the SDK supports files up to 5 GB, but the SDK also provides a multipart upload method for larger files. I am looking forward to this feature being implemented in the project. Cheers!


https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/php/example_code/s3/MultipartUpload.php


Next Aws\S3\Exception\S3Exception: Error executing "PutObject" on "https://xxxx-nextcloud.s3-xxxxxx-1.amazonaws.com/urn%3Aoid%3A261344"; AWS HTTP error: Client error: `PUT https://xxxx-nextcloud.s3-xxxxxx-1.amazonaws.com/urn%3Aoid%3A261344` resulted in a `400 Bad Request` response:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>EntityTooLarge</Code><Message>Your proposed upload exceeds the maxim (truncated...)
 EntityTooLarge (client): Your proposed upload exceeds the maximum allowed size - <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>EntityTooLarge</Code><Message>Your proposed upload exceeds the maximum allowed size</Message><ProposedSize>6161602382</ProposedSize><MaxSizeAllowed>5368709120</MaxSizeAllowed><RequestId>XW298HH7XXXXST9A</RequestId><HostId>1PcidcefrBv3q5KxpYb+v0qWyeZJr1tXBqAIpx1vbEwPQxB666JgckDsTNHP8VUbIZtLzcosuJ0=</HostId></Error> in /root/nextcloud-S3-local-S3-migration-main/vendor/aws/aws-sdk-php/src/WrappedHttpHandler.php:195
Stack trace:
#0 /root/nextcloud-S3-local-S3-migration-main/vendor/aws/aws-sdk-php/src/WrappedHttpHandler.php(97): Aws\WrappedHttpHandler->parseError()
#1 /root/nextcloud-S3-local-S3-migration-main/vendor/guzzlehttp/promises/src/Promise.php(204): Aws\WrappedHttpHandler->Aws\{closure}()
#2 /root/nextcloud-S3-local-S3-migration-main/vendor/guzzlehttp/promises/src/Promise.php(169): GuzzleHttp\Promise\Promise::callHandler()
#3 /root/nextcloud-S3-local-S3-migration-main/vendor/guzzlehttp/promises/src/RejectedPromise.php(42): GuzzleHttp\Promise\Promise::GuzzleHttp\Promise\{closure}()
#4 /root/nextcloud-S3-local-S3-migration-main/vendor/guzzlehttp/promises/src/TaskQueue.php(48): GuzzleHttp\Promise\RejectedPromise::GuzzleHttp\Promise\{closure}()
#5 /root/nextcloud-S3-local-S3-migration-main/vendor/guzzlehttp/guzzle/src/Handler/CurlMultiHandler.php(159): GuzzleHttp\Promise\TaskQueue->run()
#6 /root/nextcloud-S3-local-S3-migration-main/vendor/guzzlehttp/guzzle/src/Handler/CurlMultiHandler.php(184): GuzzleHttp\Handler\CurlMultiHandler->tick()
#7 /root/nextcloud-S3-local-S3-migration-main/vendor/guzzlehttp/promises/src/Promise.php(248): GuzzleHttp\Handler\CurlMultiHandler->execute()
#8 /root/nextcloud-S3-local-S3-migration-main/vendor/guzzlehttp/promises/src/Promise.php(224): GuzzleHttp\Promise\Promise->invokeWaitFn()
#9 /root/nextcloud-S3-local-S3-migration-main/vendor/guzzlehttp/promises/src/Promise.php(269): GuzzleHttp\Promise\Promise->waitIfPending()
#10 /root/nextcloud-S3-local-S3-migration-main/vendor/guzzlehttp/promises/src/Promise.php(226): GuzzleHttp\Promise\Promise->invokeWaitList()
#11 /root/nextcloud-S3-local-S3-migration-main/vendor/guzzlehttp/promises/src/Promise.php(269): GuzzleHttp\Promise\Promise->waitIfPending()
#12 /root/nextcloud-S3-local-S3-migration-main/vendor/guzzlehttp/promises/src/Promise.php(226): GuzzleHttp\Promise\Promise->invokeWaitList()
#13 /root/nextcloud-S3-local-S3-migration-main/vendor/guzzlehttp/promises/src/Promise.php(62): GuzzleHttp\Promise\Promise->waitIfPending()
#14 /root/nextcloud-S3-local-S3-migration-main/vendor/aws/aws-sdk-php/src/AwsClientTrait.php(58): GuzzleHttp\Promise\Promise->wait()
#15 /root/nextcloud-S3-local-S3-migration-main/vendor/aws/aws-sdk-php/src/AwsClientTrait.php(86): Aws\AwsClient->execute()
#16 /root/nextcloud-S3-local-S3-migration-main/localtos3.php(793): Aws\AwsClient->__call()
#17 /root/nextcloud-S3-local-S3-migration-main/localtos3.php(556): S3put()
#18 {main}
  thrown in /root/nextcloud-S3-local-S3-migration-main/vendor/aws/aws-sdk-php/src/WrappedHttpHandler.php on line 195
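
For reference, the linked AWS example centres on the SDK's Aws\S3\MultipartUploader, which splits a file into parts and thereby avoids the 5 GB single-PUT limit. A minimal sketch, assuming the script's existing $s3 client and $bucket, with a placeholder file path:

use Aws\S3\MultipartUploader;
use Aws\Exception\MultipartUploadException;

// Sketch: multipart upload of one large file; the object key reuses the
// urn:oid:... naming from the error above purely as an example.
$uploader = new MultipartUploader($s3, '/path/to/large-file', [
  'bucket'    => $bucket,
  'key'       => 'urn:oid:261344',
  'part_size' => 100 * 1024 * 1024, // 100 MB parts (the SDK minimum is 5 MB)
]);
try {
  $result = $uploader->upload();
  echo "Upload complete: {$result['ObjectURL']}\n";
} catch (MultipartUploadException $e) {
  echo $e->getMessage() . "\n";
}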

encryption for source versus target stores

For a migration from local to S3, are the local encryption settings duplicated for the remote storage, or are they reset?

For example, is it supported to migrate an encrypted local store into a plaintext S3 store?

use objectstore arguments *use_ssl* and *port*

Hi there. I used your script and noticed that it ignores the objectstore arguments use_ssl and port (see Nextcloud's documentation on object storage for details).

The fix is easy. In localtos3.php I added and modified five lines, so it now reads:

[snip]
echo "\nconnect to S3...";
$bucket = $CONFIG['objectstore']['arguments']['bucket'];
$use_ssl = isset($CONFIG['objectstore']['arguments']['use_ssl']) ? $CONFIG['objectstore']['arguments']['use_ssl'] : true;  // ← added line
$proto = $use_ssl ? 'https' : 'http';  // ← added line
$port = isset($CONFIG['objectstore']['arguments']['port']) ? $CONFIG['objectstore']['arguments']['port'] : ($use_ssl ? 443 : 80);  // ← added line (default port follows the protocol)
if($CONFIG['objectstore']['arguments']['use_path_style']){
  $s3 = new S3Client([
    'version' => 'latest',
    'endpoint' => $proto.'://'.$CONFIG['objectstore']['arguments']['hostname'].':'.$port.'/'.$bucket,  // ← modified line
    'bucket_endpoint' => true,
    'use_path_style_endpoint' => true,
    'region'  => $CONFIG['objectstore']['arguments']['region'],
    'credentials' => [
      'key' => $CONFIG['objectstore']['arguments']['key'],
      'secret' => $CONFIG['objectstore']['arguments']['secret'],
    ],
  ]);
}else{
  $s3 = new S3Client([
    'version' => 'latest',
    'endpoint' => $proto.'://'.$bucket.'.'.$CONFIG['objectstore']['arguments']['hostname'].':'.$port,  // ← modified line
    'bucket_endpoint' => true,
    'region'  => $CONFIG['objectstore']['arguments']['region'],
    'credentials' => [
      'key' => $CONFIG['objectstore']['arguments']['key'],
      'secret' => $CONFIG['objectstore']['arguments']['secret'],
    ],
  ]);
}
[snip]

I'm not too dexterous with pull requests yet, so please excuse me submitting the fix this way. I'd love to follow the "official" process; assistance/consulting on that would be really appreciated. I'm willing to learn.

SQL escape special characters

When a filename contains special characters such as ', they are not escaped, and this leads to an SQL error:

Fatal error: Uncaught mysqli_sql_exception: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'xxxxxxx.docx' AND ST.numeric_id = FC.storage AND FC.mimetype ...' at line 1 in /var/www/nextcloud-S3-local-S3-migration/localtos3.php:393
Stack trace:
#0 /var/www/nextcloud-S3-local-S3-migration/localtos3.php(393): mysqli->query('SELECT ST.`id...')
#1 {main}
thrown in /var/www/nextcloud-S3-local-S3-migration/localtos3.php on line 393
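
A hedged sketch of the usual fix (the script's full query is only partially visible in the error above, so the selected columns here are illustrative): bind the filename as a parameter instead of interpolating it into the SQL string:

// Sketch with illustrative columns; binding the filename means a quote in it
// can no longer break the statement. $mysqli and $filename as in the script.
$stmt = $mysqli->prepare(
  'SELECT ST.`id`, FC.`fileid`
   FROM `oc_storages` AS ST, `oc_filecache` AS FC
   WHERE FC.`name` = ? AND ST.`numeric_id` = FC.`storage`'
);
$stmt->bind_param('s', $filename);
$stmt->execute();
$result = $stmt->get_result();

A smaller change would be to pass the filename through $mysqli->real_escape_string() before building the query.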
