timvaillancourt commented on August 25, 2024

Thanks @joelchen,

Could I see the full log of the backup from start to end? Feel free to remove any sensitive info (hostnames, etc.), or you can email it directly to [email protected] instead.

@islue recently made some changes to this code, so perhaps they can also assist once there is more info.

joelchen commented on August 25, 2024

Here is the full output from a few minutes ago, @timvaillancourt:

$ sudo mongodb-consistent-backup -c backup.yml
[2017-07-21 23:38:11,150] [INFO] [MainProcess] [Main:init:143] Starting mongodb-consistent-backup version 1.0.3 (git commit: c2a2207ba4e607ab424881f57bff86b086b27fd9)
[2017-07-21 23:38:11,151] [INFO] [MainProcess] [Main:init:144] Loaded config: {"archive": {"method": "tar", "tar": {"compression": "gzip", "threads": 1}, "zbackup": {"binary": "/usr/bin/zbackup", "cache_mb": 128, "compression": "lzma"}}, "authdb": "admin", "backup": {"location": "/var/lib/mongodb-consistent-backup", "method": "mongodump", "mongodump": {"binary": "/usr/bin/mongodump", "compression": "auto"}, "name": "default"}, "configPath": "backup.yml", "environment": "production", "host": "localhost", "lock_file": "/tmp/mongodb-consistent-backup.lock", "log_dir": "/var/log/mongodb-consistent-backup", "notify": {"method": "none"}, "oplog": {"compression": "none", "tailer": {"status_interval": 30}}, "port": 27017, "replication": {"max_lag_secs": 5, "max_priority": 1000}, "sharding": {"balancer": {"ping_secs": 3, "wait_secs": 300}}, "upload": {"method": "s3", "remove_uploaded": true, "s3": {"access_key": "AKIAIJ6OCVNLSST2QT6A", "acl": "none", "bucket_name": "db", "bucket_prefix": "/", "chunk_size_mb": 50, "region": "ap-southeast-1", "retries": 5, "secret_key": "XXXXXXXXXXXXXXXXXXXX", "secure": true, "threads": 1}}}
[2017-07-21 23:38:11,151] [INFO] [MainProcess] [Stage:init:32] Notify stage disabled, skipping
[2017-07-21 23:38:11,156] [INFO] [MainProcess] [State:init:135] Initializing root state directory /var/lib/mongodb-consistent-backup/default
[2017-07-21 23:38:11,157] [INFO] [MainProcess] [State:load_backups:153] Found 0 existing completed backups for set
[2017-07-21 23:38:11,157] [INFO] [MainProcess] [State:init:119] Initializing backup state directory: /var/lib/mongodb-consistent-backup/default/20170721_2338
[2017-07-21 23:38:11,233] [INFO] [MainProcess] [Main:run:268] Running backup in replset mode using seed node(s): localhost:27017
[2017-07-21 23:38:11,246] [INFO] [MainProcess] [Mongodump:choose_compression:48] Mongodump binary supports gzip compression, auto-enabling gzip compression
[2017-07-21 23:38:11,246] [INFO] [MainProcess] [Task:compression:38] Setting Mongodump compression method: gzip
[2017-07-21 23:38:11,247] [INFO] [MainProcess] [Main:run:296] Backup method supports compression, disabling compression in archive step
[2017-07-21 23:38:11,247] [INFO] [MainProcess] [Task:compression:38] Setting Tar compression method: none
[2017-07-21 23:38:11,247] [INFO] [MainProcess] [Stage:run:83] Running stage mongodb_consistent_backup.Backup with task: Mongodump
[2017-07-21 23:38:11,254] [INFO] [MainProcess] [Replset:find_primary:132] Found PRIMARY: prod-rs/mongo0.prod.org:27017 with optime Timestamp(1500651489, 1)
[2017-07-21 23:38:11,254] [INFO] [MainProcess] [Replset:find_secondary:205] Found SECONDARY prod-rs/mongo1.prod.org:27017: {'priority': 1, 'lag': 0, 'optime': Timestamp(1500651489, 1), 'score': 100}
[2017-07-21 23:38:11,254] [WARNING] [MainProcess] [Replset:find_secondary:160] Found down or unhealthy SECONDARY prod-rs/mongo2.prod.org:27017 with state: (not reachable/healthy)
[2017-07-21 23:38:11,255] [INFO] [MainProcess] [Replset:find_secondary:215] Choosing SECONDARY prod-rs/mongo1.prod.org:27017 for replica set prod-rs (score: 100)
[2017-07-21 23:38:11,257] [INFO] [MainProcess] [Mongodump:run:139] Starting backups using mongodump r3.4.5 (options: compression=gzip, threads_per_dump=2)
[2017-07-21 23:38:11,262] [INFO] [MongodumpThread-2] [MongodumpThread:run:125] Starting mongodump backup of prod-rs/mongo1.prod.org:27017
[2017-07-21 23:38:11,288] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	writing admin.system.users to
[2017-07-21 23:38:11,289] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	done dumping admin.system.users (4 documents)
[2017-07-21 23:38:11,290] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	writing admin.system.version to
[2017-07-21 23:38:11,290] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	done dumping admin.system.version (2 documents)
[2017-07-21 23:38:11,291] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	writing test-prod.provider_notes to
[2017-07-21 23:38:11,291] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	writing test-prod.results to
[2017-07-21 23:38:11,295] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	done dumping test-prod.results (17 documents)
[2017-07-21 23:38:11,296] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	writing test-prod.reports to
[2017-07-21 23:38:11,297] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	done dumping test-prod.provider_notes (40 documents)
[2017-07-21 23:38:11,297] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	writing test-prod.accounts to
[2017-07-21 23:38:11,299] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	done dumping test-prod.reports (14 documents)
[2017-07-21 23:38:11,299] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	writing test-prod.measurements to
[2017-07-21 23:38:11,300] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	done dumping test-prod.measurements (9 documents)
[2017-07-21 23:38:11,301] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	writing test-prod.roles to
[2017-07-21 23:38:11,301] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	done dumping test-prod.accounts (12 documents)
[2017-07-21 23:38:11,301] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	writing test-prod.assessment to
[2017-07-21 23:38:11,302] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	done dumping test-prod.roles (8 documents)
[2017-07-21 23:38:11,302] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	writing test-prod.todos to
[2017-07-21 23:38:11,302] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	done dumping test-prod.assessment (6 documents)
[2017-07-21 23:38:11,303] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	writing test-prod.assessment_filled to
[2017-07-21 23:38:11,303] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	done dumping test-prod.todos (2 documents)
[2017-07-21 23:38:11,303] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	writing test-prod.broadcasts to
[2017-07-21 23:38:11,304] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	done dumping test-prod.assessment_filled (2 documents)
[2017-07-21 23:38:11,304] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	writing test-prod.admin_programs to
[2017-07-21 23:38:11,304] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	done dumping test-prod.broadcasts (2 documents)
[2017-07-21 23:38:11,305] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	writing test-prod.enrolled_programs to
[2017-07-21 23:38:11,305] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	done dumping test-prod.admin_programs (1 document)
[2017-07-21 23:38:11,305] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	writing test-prod.teams to
[2017-07-21 23:38:11,306] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	done dumping test-prod.enrolled_programs (1 document)
[2017-07-21 23:38:11,306] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	writing test-prod.organizations to
[2017-07-21 23:38:11,306] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	done dumping test-prod.teams (1 document)
[2017-07-21 23:38:11,306] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	done dumping test-prod.organizations (1 document)
[2017-07-21 23:38:11,307] [INFO] [MongodumpThread-2] [MongodumpThread:wait:92] prod-rs/mongo1.prod.org:27017:	writing captured oplog to
[2017-07-21 23:38:11,605] [INFO] [MongodumpThread-2] [MongodumpThread:run:161] Backup prod-rs/mongo1.prod.org:27017 completed in 0.34 seconds, 0 oplog changes
[2017-07-21 23:38:15,263] [INFO] [MainProcess] [Mongodump:wait:92] All mongodump backups completed successfully
[2017-07-21 23:38:15,265] [INFO] [MainProcess] [Stage:run:92] Completed running stage mongodb_consistent_backup.Backup with task Mongodump in 4.02 seconds
[2017-07-21 23:38:15,266] [INFO] [MainProcess] [Stage:run:83] Running stage mongodb_consistent_backup.Archive with task: Tar
[2017-07-21 23:38:15,275] [INFO] [MainProcess] [Tar:run:56] Archiving backup directories with pool of 2 thread(s)
[2017-07-21 23:38:15,277] [INFO] [PoolWorker-3] [TarThread:run:41] Archiving directory: /var/lib/mongodb-consistent-backup/default/20170721_2338/prod-rs
[2017-07-21 23:38:17,278] [INFO] [MainProcess] [Stage:run:92] Completed running stage mongodb_consistent_backup.Archive with task Tar in 2.01 seconds
[2017-07-21 23:38:17,279] [INFO] [MainProcess] [Stage:run:83] Running stage mongodb_consistent_backup.Upload with task: S3
[2017-07-21 23:38:17,279] [INFO] [MainProcess] [S3:run:81] Starting multipart AWS S3 upload to key: db/default/20170721_2338/prod-rs.tar using 1 threads, 50mb chunks, 5 retries
[2017-07-21 23:38:17,910] [INFO] [PoolWorker-5] [S3UploadThread:run:34] Uploading file: /var/lib/mongodb-consistent-backup/default/20170721_2338/prod-rs.tar (part num: 1)
[2017-07-21 23:38:18,657] [ERROR] [MainProcess] [S3:run:131] Uploading to AWS S3 failed! Error:
[2017-07-21 23:38:18,677] [ERROR] [MainProcess] [Stage:run:95] Stage mongodb_consistent_backup.Upload did not complete!
[2017-07-21 23:38:18,677] [CRITICAL] [MainProcess] [Main:exception:218] Problem performing upload of backup! Error: Stage mongodb_consistent_backup.Upload did not complete!
[2017-07-21 23:38:18,677] [INFO] [MainProcess] [Main:cleanup_and_exit:174] Starting cleanup procedure! Stopping running threads
[2017-07-21 23:38:18,682] [INFO] [MainProcess] [Main:cleanup_and_exit:204] Cleanup complete, exiting
[2017-07-21 23:38:18,682] [INFO] [MainProcess] [Logger:rotate:81] Running rotation of log files
[2017-07-21 23:38:18,682] [INFO] [MainProcess] [Logger:compress:60] Compressing log file: /var/log/mongodb-consistent-backup/backup.default.20170721_0300.log

islue commented on August 25, 2024

Um, the exception is supposed to be printed out in the error message, but it's empty.
Sorry, I have no idea.

[2017-07-21 23:38:17,910] [INFO] [PoolWorker-5] [S3UploadThread:run:34] Uploading file: /var/lib/mongodb-consistent-backup/default/20170721_2338/prod-rs.tar (part num: 1)

According to the documentation, a multipart upload with a single part is supported. But if you can, try running a backup with a bigger data set to force a multi-part upload.
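
For anyone wanting to reproduce this outside the backup tool, here is a minimal sketch of a single-part multipart upload using boto 2.x (the S3 library this project appears to use). The bucket name, key name, and file path are placeholders:

import boto

conn = boto.connect_s3()  # credentials come from the environment or boto config
bucket = conn.get_bucket('my-test-bucket')      # placeholder bucket name
key_name = 'default/20170721_2338/prod-rs.tar'  # placeholder key name

mp = bucket.initiate_multipart_upload(key_name)
try:
    with open('/path/to/prod-rs.tar', 'rb') as fp:  # placeholder path
        # A single part is valid: S3 treats it as the last (and only) part.
        mp.upload_part_from_file(fp, part_num=1)
    mp.complete_upload()
except Exception:
    mp.cancel_upload()  # abort so S3 does not keep orphaned parts around
    raise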

joelchen commented on August 25, 2024

I do not have a bigger data set to test in my current environment.

Since this is not reliable, may I suggest integrating this project with something better maintained, like Rclone (https://rclone.org)?

timvaillancourt commented on August 25, 2024

I think the fix here is to not use multipart if there is only one chunk to upload (due to the API limitation), or to auto-reduce the chunk size to something smaller than the size of the file.

I will attempt one or both of these ideas in a branch to share soon.
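
A minimal sketch of that decision logic; plan_upload, chunk_size_mb, and the return values are illustrative names under stated assumptions, not the project's actual code:

import os

def plan_upload(file_path, chunk_size_mb=50):
    file_size = os.path.getsize(file_path)
    chunk_bytes = chunk_size_mb * 1024 * 1024
    if file_size <= chunk_bytes:
        # Idea 1: only one chunk is needed, so skip the multipart API and
        # do a plain single-request upload. (Idea 2 would instead shrink
        # chunk_bytes here, e.g. to file_size // 2, to force >= 2 parts.)
        return ('single', file_size)
    num_parts = -(-file_size // chunk_bytes)  # ceiling division
    return ('multipart', num_parts)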

timvaillancourt commented on August 25, 2024

@joelchen, I believe this branch will fix this. Please test and confirm.

These commands will check out and build the branch:

cd /a/new/path
git clone -b issue_187_s3_multipart_1chunk https://github.com/timvaillancourt/mongodb_consistent_backup
cd mongodb_consistent_backup
make
./bin/mongodb-consistent-backup <flags here>

URL: https://github.com/timvaillancourt/mongodb_consistent_backup/tree/issue_187_s3_multipart_1chunk

jessewiles commented on August 25, 2024

FYI, this didn't work for me:

[2017-09-17 02:57:52,729] [ERROR] [MainProcess] [S3:run:140] Uploading to AWS S3 failed! Error: S3ResponseError: 400 Bad Request\
<Error><Code>EntityTooSmall</Code><Message>Your proposed upload is smaller than the minimum allowed size</Message><ProposedSize>5120</ProposedSize><MinSizeAllowed>5242880</MinSizeAllowed><PartNumber>1</PartNumber><ETag>b6b96b0e96964de9ff49d6de2c75c761</ETag><RequestId>2CC14A9DBB3DEB2A</RequestId><HostId>MXm3OKr7Sk+AVcvNZ8rLqqEommOXKLzNAHEqidauJ0UCGY02Z5feuoMqO+gtkq4tDLQ=</HostId></Error>

It looks like only the last part is allowed to be less than the chunk size (I think).

timvaillancourt commented on August 25, 2024

Thanks for testing. This minimum limit (5242880 bytes/5.24 megabytes) is good to know.

I'll make another change shortly to skip multipart entirely when the file is smaller than 5.24 MB.

The logic I added, which uses 2 threads each uploading 50% of the bytes when the file is smaller than chunk_size_mb, is still valid, I think, so this change will build on that logic.
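
Roughly, the combined logic might look like the sketch below; the names are illustrative only, and the 5,242,880-byte constant is AWS's documented minimum for every part except the last:

import os

S3_MIN_PART_BYTES = 5242880  # AWS minimum size for all parts except the last

def choose_strategy(file_path, chunk_size_mb=50):
    size = os.path.getsize(file_path)
    if size < 2 * S3_MIN_PART_BYTES:
        # Halving the file would leave a non-final part under the minimum,
        # so skip multipart entirely and use a plain single PUT.
        return ('single_put', size)
    chunk = chunk_size_mb * 1024 * 1024
    if size <= chunk:
        # The "2 threads of 50% of the bytes" idea: halve the chunk so the
        # upload still splits into two parts; the guard above keeps every
        # non-final part at or above the minimum.
        chunk = size // 2
    return ('multipart', chunk)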

jessewiles commented on August 25, 2024

I'm not sure dynamic chunk sizing is the right approach. We were able to fix the problem by separating the ACL logic from the complete_upload operation. As near as I can tell, there's something wrong with the get_key operation. I moved the get_key call inside the ACL flow and everything works (we're not using ACLs).

I realize this isn't a complete solution, but I think it's the right track.

** UPDATE ** Submitted a PR that slightly changes how the ACLs are applied. This fixes the problem in our tests.
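
A rough sketch of that idea (boto 2.x; the helper shape and parameter names are assumptions, not the PR's actual code): the get_key call moves inside the ACL branch, so it is skipped when no ACL is configured, as with the "acl": "none" setting in the config above.

def finish_upload(mp, bucket, key_name, acl=None):
    # Complete the multipart upload first, then handle ACLs separately.
    mp.complete_upload()
    if acl and acl != 'none':
        # get_key is only needed on the ACL path; when ACLs are unused
        # it is skipped entirely, avoiding the failing operation.
        key = bucket.get_key(key_name)
        if key:
            key.set_acl(acl)  # e.g. 'private' or 'public-read'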

timvaillancourt commented on August 25, 2024

Hello all,

I believe this "S3 overhaul" branch fixes the S3 upload issues discussed here. There were a lot of problems to tackle, so this is a large change:

https://github.com/timvaillancourt/mongodb_consistent_backup/tree/s3_upload_overhaul_v1

PR #249 will merge these fixes and explain each of them in detail.

Please test out this branch; any feedback before it's merged is appreciated.
