
mysqldump's People

Contributors

ajouve, akamaozu, bradzacher, cperryk, dafrok, deathmark, dependabot[bot], donpaul120, droidmunkey, enapupe, euberdeveloper, jonathanargentiero, jumplink, kosta-github, krasnyd, lukasgjetting, marco-fp, pmkroeker, webcaetano


mysqldump's Issues

View schemas occur prior to referenced tables in dumps

I have a database whose view names sort alphabetically before the tables they reference, so loading the dump file would die with an error when it reached the first CREATE OR REPLACE VIEW.

I patched it locally by sorting the return value of getSchemaDumps to put views after tables. A PR is on the way.
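For anyone hitting the same thing before the PR lands, a minimal sketch of the idea; the function name and the regex heuristic are mine, not the library's:

function sortTablesBeforeViews(schemaDumps) {
    // Rough heuristic: anything whose CREATE statement mentions VIEW
    // is pushed behind the plain CREATE TABLE statements.
    const isView = (sql) => /CREATE\s+(OR\s+REPLACE\s+)?.*VIEW/i.test(sql);
    return schemaDumps.filter((sql) => !isView(sql)).concat(schemaDumps.filter(isView));
}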

RangeError: Invalid string length

Hi,

I'm using this library to back up a single database nightly, but all of a sudden the backups have stopped working with the following error. I'm not sure where to begin debugging this; any pointers would be much appreciated!

/bundle/bundle/programs/server/npm/node_modules/mysqldump/index.js:163
                  callback(null,results.createSchemaDump.concat(results.createDataDump).join("\n\n"));
                                                                                        ^
RangeError: Invalid string length
    at Array.join (native)
    at Array.async.auto.getDataDump (/bundle/bundle/programs/server/npm/node_modules/mysqldump/index.js:163:74)
    at listener (/bundle/bundle/programs/server/npm/node_modules/async/lib/async.js:605:42)
    at /bundle/bundle/programs/server/npm/node_modules/async/lib/async.js:544:17
    at _arrayEach (/bundle/bundle/programs/server/npm/node_modules/async/lib/async.js:85:13)
    at Immediate.taskComplete (/bundle/bundle/programs/server/npm/node_modules/async/lib/async.js:543:13)
    at processImmediate [as _immediateCallback] (timers.js:396:17)

Cannot enqueue Quit after invoking quit.

I'm trying to back up multiple databases in a for loop, but I got this error.

events.js:154
throw er; // Unhandled 'error' event
^
Error: Cannot enqueue Quit after invoking quit.
at Protocol._validateEnqueue (/home/mohammad/wallet_webservices/node_modules/mysql/lib/protocol/Protocol.js:202:16)
at Protocol._enqueue (/home/mohammad/wallet_webservices/node_modules/mysql/lib/protocol/Protocol.js:135:13)
at Protocol.quit (/home/mohammad/wallet_webservices/node_modules/mysql/lib/protocol/Protocol.js:88:23)
at Connection.end (/home/mohammad/wallet_webservices/node_modules/mysql/lib/Connection.js:242:18)
at /home/mohammad/wallet_webservices/node_modules/mysqldump/index.js:158:20
at /home/mohammad/wallet_webservices/node_modules/async/lib/async.js:52:16
at /home/mohammad/wallet_webservices/node_modules/async/lib/async.js:550:17
at /home/mohammad/wallet_webservices/node_modules/async/lib/async.js:544:17
at _arrayEach (/home/mohammad/wallet_webservices/node_modules/async/lib/async.js:85:13)
at Immediate.taskComplete (/home/mohammad/wallet_webservices/node_modules/async/lib/async.js:543:13)


Import

How do I import file.sql to restore a database?
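For what it's worth, one common approach outside this library is to replay the file over a connection that allows multiple statements; a rough sketch (connection details are placeholders):

const fs = require('fs');
const mysql = require('mysql');

const connection = mysql.createConnection({
    host: 'localhost',
    user: 'root',
    password: '',
    database: 'databasename',
    multipleStatements: true, // needed to run the whole dump in one query
});

connection.query(fs.readFileSync('file.sql', 'utf8'), (err) => {
    if (err) console.error(err);
    connection.end();
});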

Project Update

Hi folks!

Sorry about the lack of maintainership over the last few months.

I acquired ownership of the package and moved it into my previous company's namespace when I was working there, and maintained it until I left in July.
Unfortunately they no longer had the free resources to maintain an open source project, so it has languished in the meantime.

I have just (on 23rd Dec) taken back ownership of this repo into my own open source portfolio.
I'll be looking into reviewing/merging PRs and triaging Issues after the holidays.

Thanks for your patience!

-Brad

Dump to a string

There seems to be no API to get string output. The easiest alternative seems to be using {dumpToFile: "/tmp/dump.sql"} and then reading the file.
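A sketch of that workaround, assuming the promise-returning API (v2+); paths and connection details are placeholders:

const fs = require('fs');
const mysqldump = require('mysqldump');

mysqldump({
    connection: { host: 'localhost', user: 'root', password: '', database: 'db' },
    dumpToFile: '/tmp/dump.sql',
}).then(() => {
    const sql = fs.readFileSync('/tmp/dump.sql', 'utf8'); // the dump as a string
    // ... use `sql` ...
});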

Compression does not work

compressFile: true does not compress the file, and gunzip says the file is not in gzip format.

mysqldump({
    connection: {
        host: 'localhost',
        user: dbConfig.user,
        password: dbConfig.password,
        database: dbConfig.database
    },
    dumpToFile: `../MySQLBackup/${fName}.sql.gz`,
    compressFile: true,
}, err => {
    if(err)
        console.log(err);
    process.exit(0);
});

The mysqldump version is 2.3.0.
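Until this is fixed, a workaround sketch that compresses the dump manually with Node's zlib, reusing fName and dbConfig from the snippet above:

const fs = require('fs');
const zlib = require('zlib');

const plainPath = `../MySQLBackup/${fName}.sql`;

mysqldump({
    connection: {
        host: 'localhost',
        user: dbConfig.user,
        password: dbConfig.password,
        database: dbConfig.database,
    },
    dumpToFile: plainPath, // write uncompressed first
}, err => {
    if (err) {
        console.log(err);
    } else {
        // gzip the finished dump ourselves and drop the plain copy
        fs.writeFileSync(`${plainPath}.gz`, zlib.gzipSync(fs.readFileSync(plainPath)));
        fs.unlinkSync(plainPath);
    }
    process.exit(0);
});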

Encoding issues and 'SET NAMES <encoding>'

Hello,

I've encountered the dreaded encoding errors when dumping/importing using this library, and finally fixed them by manually adding 'SET NAMES utf8mb4' to the top of the file.
Would it be appropriate to somehow automate this process?
For example, by retrieving the desired encoding from the ConnectionOptions.charset option?
Or by running a query (e.g. SHOW VARIABLES LIKE 'character_set%') and parsing its results?
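Until then, a manual version of that fix is a one-liner after the dump completes (the path is a placeholder):

const fs = require('fs');

const path = '/tmp/dump.sql';
// Prepend the charset declaration so imports use the right encoding.
fs.writeFileSync(path, `SET NAMES utf8mb4;\n\n${fs.readFileSync(path, 'utf8')}`);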

SQL formatter is slow and very CPU-intensive

The _.trimEnd call used by sql-formatter is very CPU-intensive. Running mysqldump with the formatter disabled vs. enabled is the difference between 38s and ~300s, and between ~35% and 100% CPU usage in my case.

              CPU    Time
Formatter     100%   318s
No formatter  ~35%    38s

My dump contains a non-trivial amount of data.

I think the formatter should be disabled by default.

Profile with no formatter: (profiler screenshots omitted)

Profile with formatter (default): (profiler screenshots omitted)
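In the meantime, recent versions appear to expose a flag for this; a sketch assuming the dump.data.format option documented for v3:

mysqldump({
    connection: { host: 'localhost', user: 'root', password: '', database: 'db' },
    dumpToFile: '/tmp/dump.sql',
    dump: {
        data: {
            format: false, // skip sql-formatter entirely
        },
    },
});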

UNIX Socket

There's no option to connect via a UNIX socket. Some servers are configured (to prevent issues) not to expose any TCP port, only a UNIX socket.
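For reference, the underlying mysql driver already accepts a socketPath in place of host/port; the request here is presumably for mysqldump's ConnectionOptions to forward it:

const mysql = require('mysql');

// How a socket connection looks in the raw driver today:
const connection = mysql.createConnection({
    socketPath: '/var/run/mysqld/mysqld.sock', // instead of host/port
    user: 'root',
    password: '',
    database: 'db',
});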

Dumping several databases with async causes several issues

I'm trying to create dumps of several databases; for some databases I've selected specific tables, and for others I take all the tables.

I used an async.forEach to run all the database dumps in parallel, but every time I see that the module ends up mixing up tables and databases.

For example, if database 'AAA' has the tables '111' and '222' and database 'BBB' has the tables '333' and '444', I end up seeing errors like this:

(this is an actual error; I've only changed the table and database names)

[2016-11-09T16:03:31.418Z] SHOW CREATE TABLE `111` Error: ER_NO_SUCH_TABLE: Table 'BBB.111' doesn't exist
/Users/thecrius/Sites/db-sync/node_modules/mysql/lib/protocol/Parser.js:78
        throw err; // Rethrow non-MySQL errors
        ^

TypeError: Cannot read property '0' of undefined
    at /Users/thecrius/Sites/db-sync/node_modules/mysqldump/index.js:126:21
    at /Users/thecrius/Sites/db-sync/node_modules/mysqldump/node_modules/async/lib/async.js:726:13
    at /Users/thecrius/Sites/db-sync/node_modules/mysqldump/node_modules/async/lib/async.js:52:16
    at done (/Users/thecrius/Sites/db-sync/node_modules/mysqldump/node_modules/async/lib/async.js:241:17)
    at /Users/thecrius/Sites/db-sync/node_modules/mysqldump/node_modules/async/lib/async.js:44:16
    at /Users/thecrius/Sites/db-sync/node_modules/mysqldump/node_modules/async/lib/async.js:723:17
    at /Users/thecrius/Sites/db-sync/node_modules/mysqldump/node_modules/async/lib/async.js:167:37
    at Query._callback (/Users/thecrius/Sites/db-sync/node_modules/mq-node/dist/index.js:98:29)
    at Query.Sequence.end (/Users/thecrius/Sites/db-sync/node_modules/mysql/lib/protocol/sequences/Sequence.js:86:24)
    at Query.ErrorPacket (/Users/thecrius/Sites/db-sync/node_modules/mysql/lib/protocol/sequences/Query.js:94:8)

This happens because table 111 belongs to database AAA, not BBB.
Changing it to async.forEachSeries fixed that problem (I suppose), but then I ran into this other error:

/Users/thecrius/Sites/db-sync/node_modules/mysqldump/index.js:163
                        callback(null,results.createSchemaDump.concat(results.createDataDump).join("\n\n"));
                                                                                              ^

RangeError: Invalid string length
    at Array.join (native)
    at Array.async.auto.getDataDump (/Users/thecrius/Sites/db-sync/node_modules/mysqldump/index.js:163:74)
    at listener (/Users/thecrius/Sites/db-sync/node_modules/mysqldump/node_modules/async/lib/async.js:605:42)
    at /Users/thecrius/Sites/db-sync/node_modules/mysqldump/node_modules/async/lib/async.js:544:17
    at _arrayEach (/Users/thecrius/Sites/db-sync/node_modules/mysqldump/node_modules/async/lib/async.js:85:13)
    at Immediate.taskComplete (/Users/thecrius/Sites/db-sync/node_modules/mysqldump/node_modules/async/lib/async.js:543:13)
    at runCallback (timers.js:637:20)
    at tryOnImmediate (timers.js:610:5)
    at processImmediate [as _immediateCallback] (timers.js:582:5)

No idea why this time, because I had no info on which table was causing the issue.
The function is this one:

getDataDump: ['createSchemaDump', 'createDataDump', function(callback, results) {
    if (!results.createSchemaDump || !results.createSchemaDump.length) results.createSchemaDump = [];
    if (!results.createDataDump || !results.createDataDump.length) results.createDataDump = [];
    callback(null, results.createSchemaDump.concat(results.createDataDump).join("\n\n"));
}]

Maybe it's something to do with an empty table?

I tried adding a console.log at the beginning of the function to see the contents of results, but that made the script die with an out-of-memory error:

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
 1: node::Abort() [/usr/local/bin/node]
 2: node::FatalException(v8::Isolate*, v8::Local<v8::Value>, v8::Local<v8::Message>) [/usr/local/bin/node]
 3: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [/usr/local/bin/node]
 4: v8::internal::Factory::NewRawTwoByteString(int, v8::internal::PretenureFlag) [/usr/local/bin/node]
 5: v8::internal::Object* v8::internal::StringReplaceGlobalAtomRegExpWithString<v8::internal::SeqTwoByteString>(v8::internal::Isolate*, v8::internal::Handle<v8::internal::String>, v8::internal::Handle<v8::internal::JSRegExp>, v8::internal::Handle<v8::internal::String>, v8::internal::Handle<v8::internal::JSArray>) [/usr/local/bin/node]
 6: v8::internal::Runtime_StringReplaceGlobalRegExpWithString(int, v8::internal::Object**, v8::internal::Isolate*) [/usr/local/bin/node]
 7: 0x35cdc4a092a7

I'll have to find another solution for my dumps, but I'd gladly see this module improve, as it seems very clean and simple while still having some issues to be taken care of.

SyntaxError: Unexpected identifier

I am getting the following error on Linux:

(function (exports, require, module, __filename, __dirname) { import mysqldump from 'mysqldump'
^^^^^^^^^

SyntaxError: Unexpected identifier
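That error usually means Node is parsing an ES module import in a CommonJS context. A hedged note: on stock Node without a transpiler, switching to require avoids it:

// CommonJS equivalent of `import mysqldump from 'mysqldump'`
const mysqldump = require('mysqldump');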

Process hangs after dump

Hello!

I tried running a simple test with Node 4.2.1, 5.11.0 and 6.1.1, and in every version the process hangs without ever exiting after dumping the file.
I put a call to process.exit in the callback for now, but that's kind of unclean.

I got empty files every time when dumping

Using code like this:


mysqlDump({
    host: 'localhost',
    user: 'root',
    password: '',
    database: 'databasename',
    dest: './db1-dump.sql' // destination file
}, function(err) {
    // create data.sql file
    if (err) {
        console.log('Error:', err);
        return;
    }

    console.log('Result: status done');
});

I get no errors on Windows 8.1, but the exported file is always empty.
I made sure mysql and mysqldump are on the PATH in case this script uses them, but nothing changed the output.

The MySQL version I have is the one from XAMPP.

Export with custom value

Hello! Is it possible to export a specific table with a different value for a particular column? For instance, table products, where column quantity is currently 100 but you want to export it as 0. Thanks!

select rows

Hey,

is there a way to back up only selected rows, like mysqldump's '--where'?

Take care

M
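Later versions seem to cover this; a sketch assuming the dump.data.where map documented for v3 (the table name and condition are examples):

mysqldump({
    connection: { host: 'localhost', user: 'root', password: '', database: 'db' },
    dumpToFile: '/tmp/dump.sql',
    dump: {
        data: {
            // per-table WHERE clauses, analogous to mysqldump --where
            where: {
                orders: 'created_at > NOW() - INTERVAL 30 DAY',
            },
        },
    },
});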

Not able to back up when the database is large

I am using my own server to back up my database, which is large. I also extended the timeout for executing the command, but it still can't finish. How can I minimize the dump's size, or is there a way to compress it?

Dump file complete event

When dumping a large-ish database, is there an event that fires when the dump file is complete? I am currently using a file watcher, but it fires when the file has been created, not when all the data is in the file.

Is there an accepted way to do this?
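Assuming the promise-returning API (v2+) and that the library only resolves once the write is flushed, awaiting the call should replace the file watcher:

const mysqldump = require('mysqldump');

async function backup() {
    await mysqldump({
        connection: { host: 'localhost', user: 'root', password: '', database: 'db' },
        dumpToFile: '/tmp/dump.sql',
    });
    // If the promise resolves only after the write completes,
    // the file is safe to read from here.
    console.log('dump file is complete');
}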

Some errors

There are multiple errors when we try to IMPORT the .sql file.

For me:
First try --> #1005 - Can't create table xibo2.blacklist (errno: 150 "Foreign key constraint is incorrectly formed")

So then I tried again with the "foreign key checks" box unticked!
Second try --> #1416 - Cannot get geometry object from data you send to the GEOMETRY field

And when I export an SQL file with phpMyAdmin, it works perfectly.
I think this needs some updates.

TypeError: Cannot read property 'replace' of undefined V 2.2.1

V1.4 was working fine,
but the new update throws an error on macOS while working on Linux.

TypeError: Cannot read property 'replace' of undefined
at tablesRes.map.r (/Users/apple/Documents/Webstorm Projects/psa-prd2-admin/node_modules/mysqldump/dist/cjs.js:53:36)
at Array.map ()
at /Users/apple/Documents/Webstorm Projects/psa-prd2-admin/node_modules/mysqldump/dist/cjs.js:52:40
at Generator.next ()
at fulfilled (/Users/apple/Documents/Webstorm Projects/psa-prd2-admin/node_modules/mysqldump/dist/cjs.js:40:58)
at
at process._tickCallback (internal/process/next_tick.js:188:7)

My code:

await mysqlDump({
    connection: {
        host: 'localhost', // appConfig.DB['HOST'],
        user: 'root', // appConfig.DB['USER'],
        password: 'root', // appConfig.DB['PASS'],
        database: 'Name_DB', // appConfig.DB['DATABASE'],
    },
    table: {
        ifNotExist: true,
    },
    dumpToFile: appConfig['FILES_BASE_PATH'] + 'backups/backup_' + Date.now() + '.sql'
})

Callback doesn't fetch error.

The callback function doesn't receive "err". It enters the "else" branch when there is no error, but doesn't enter the if branch when an error is thrown; "err" is always null.

Support SSL connection options to connect to Azure MySQL and Amazon RDS.

It would be great if the connection supported an ssl option, to be able to dump databases over an encrypted connection.
The Azure MySQL service and Amazon MySQL instances support connectivity over SSL:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_MySQL.html#MySQL.Concepts.SSLSupport
https://docs.microsoft.com/en-us/azure/mysql/howto-configure-ssl
Without this option, the application crashes with the following error when trying to connect to Azure MySQL with the SSL requirement turned on:

SHOW TABLES FROM `xxxxx` Error: UNKNOWN_CODE_PLEASE_REPORT: SSL connection is required. Please specify SSL options and retry.
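For context, the raw mysql driver takes an ssl field on the connection config (including a built-in 'Amazon RDS' CA profile); the request is for mysqldump to pass something like this through. Host and credentials below are placeholders:

mysqldump({
    connection: {
        host: 'mydb.example.rds.amazonaws.com',
        user: 'admin',
        password: 'secret',
        database: 'mydb',
        ssl: 'Amazon RDS', // or { ca: fs.readFileSync('ca.pem') }
    },
    dumpToFile: '/tmp/dump.sql',
});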

Running mysqldump with Promise.all

I'm having some trouble running this with Promise.all. It's probably my understanding of promises, but I thought I'd open an issue to see if anyone could help.

I've got an env variable with a comma-separated list of databases. I'm trying to dump them all at the same time, so I set up a function that dumps a database (makeMyFiles), then looped through the list and pushed X promises into an array. Then I call Promise.all and, I think, wait for all the promises to resolve. Well, it's returning null, so something is going wrong, but I can't get any meaningful errors.

'use strict';

interface FileObj {
  path: string;
  fileSizeInBytes: number;
}

import { Context, ScheduledEvent } from 'aws-lambda';
// @ts-ignore
import { unlink, existsSync, mkdirSync, Stats, statSync } from 'fs';
import { sendMyFile } from './sendMyFile';
import { join } from 'path';
import { tarMyDirectory } from './tarMyDirectory';
import mysqldump from 'mysqldump';

const makeMyFiles = async (schema: string, time: string) => {
  const path = join('/','tmp', time, `${schema}_${time}.sql`);

  console.log(schema, path);

  // Dump the database to a file
  const options = {
    connection: {
      charset: 'utf8mb4_unicode_ci',
      database: schema,
      host: process.env.HOST_NAME,
      password: process.env.DB_PASS,
      user: process.env.DB_USER
    },
    dumpToFile: path
  };

  mysqldump(options).then(()=>{
    // Get File Stats
    const stats: Stats = statSync(path);
    const fileSizeInBytes: number = stats.size;
    const file: FileObj = { path, fileSizeInBytes };
    console.log(schema, file);
    return file;
  }).catch((err)=>{
    console.error(err);
    return err;
  });
};

module.exports.rdsdumpToS3 = async (event: ScheduledEvent, context: Context) => {

  // Get time
  const timestring: string = new Date().toISOString().split('.')[0].replace(/-/g,'.').replace(/:/g,'.');

  const path: string = join('/','tmp', timestring);
  if (!existsSync(path)) {
    mkdirSync(path);
  }

  // Create the promises
  const promises: any = [];
  // Loop through all databases at once
  process.env.DB_NAME.split(/\s*,\s*/).forEach((schema) => {
    // Push the promise
    promises.push(makeMyFiles(schema,timestring));
  });

  // Resolve all the promises
  Promise.all(promises)
    .then((results) => {
      console.log('Promises are done');
      // We have run all the dumps, now I should have a 'path' folder with all the schemas in it,
      // now, we should zip them all up and send that to s3
      // First, we need to tar up the directory
       tarMyDirectory(path, timestring);
       // Now send the file to S3
       sendMyFile(path);
      return {
        success: true
      }
    }).catch((e)=>{
      console.error(e);
      return {
        e
      };
    });
};
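One likely cause, judging from the snippet: makeMyFiles never returns (or awaits) the mysqldump chain, so each pushed promise resolves to undefined immediately. A sketch of the fix, reusing the names above (TypeScript annotations dropped for brevity):

const makeMyFiles = async (schema, time) => {
  const path = join('/', 'tmp', time, `${schema}_${time}.sql`);

  // Returning the chain makes Promise.all wait for the actual dump
  // instead of resolving to undefined right away.
  return mysqldump({
    connection: {
      charset: 'utf8mb4_unicode_ci',
      database: schema,
      host: process.env.HOST_NAME,
      password: process.env.DB_PASS,
      user: process.env.DB_USER,
    },
    dumpToFile: path,
  }).then(() => {
    const stats = statSync(path);
    return { path, fileSizeInBytes: stats.size };
  });
};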

Add CREATE DATABASE and DB selection before dump

Is it currently possible to add statements like these before the actual dump?

DROP DATABASE   IF     EXISTS base;
CREATE DATABASE IF NOT EXISTS base;
USE base;

I've a large list of different databases that I want to dump, and this would help with importing them in bulk into the new database.
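Not built in as far as I know, but prepending the statements after the dump finishes is straightforward; a sketch (the database name and path are placeholders):

const fs = require('fs');
const mysqldump = require('mysqldump');

const db = 'base';
const path = `/tmp/${db}.sql`;

mysqldump({
    connection: { host: 'localhost', user: 'root', password: '', database: db },
    dumpToFile: path,
}).then(() => {
    const prelude = [
        `DROP DATABASE IF EXISTS \`${db}\`;`,
        `CREATE DATABASE IF NOT EXISTS \`${db}\`;`,
        `USE \`${db}\`;`,
        '',
    ].join('\n');
    fs.writeFileSync(path, prelude + fs.readFileSync(path, 'utf8'));
});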

Add table locks

Hello, I'm using this lib and I noticed it doesn't add table locks to the generated .sql dumps. How can I add this to the library?

How to dump all databases

We can dump all the MySQL databases using the CLI command below:
mysqldump -u [username] -p[password] --all-databases > [dump_file.sql]
But how can we dump all databases using Node.js?
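One way to approximate --all-databases: list the databases first, then dump each one sequentially (sequential also avoids the connection-mixing issues reported elsewhere in this tracker). A sketch with placeholder credentials:

const mysql = require('mysql');
const mysqldump = require('mysqldump');

const conn = { host: 'localhost', user: 'root', password: '' };
const system = ['information_schema', 'performance_schema', 'mysql', 'sys'];

const connection = mysql.createConnection(conn);
connection.query('SHOW DATABASES', async (err, rows) => {
    connection.end();
    if (err) throw err;
    for (const row of rows) {
        const db = row.Database; // column name returned by SHOW DATABASES
        if (system.includes(db)) continue;
        await mysqldump({
            connection: { ...conn, database: db },
            dumpToFile: `/tmp/${db}.sql`,
        });
    }
});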

[Wanted] Maintainers

Hey folks,

I am looking for maintainers to help triage issues and merge pull requests for this repository 💯

I'm very happy with this library, but unfortunately I just don't have the time right now to maintain it like it should be.
Also, to help it grow, it's better to have multiple maintainers.

If you're interested in becoming a maintainer, please leave a comment below and let me know. I would really appreciate the help!

ECONNREFUSED

When the database is not available, mysqldump throws an exception and stops the server.
Instead, it should call the callback function, passing the error as a parameter.
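Until the library routes this through its callback, a defensive sketch, assuming the promise API surfaces the rejection:

mysqldump({
    connection: { host: 'localhost', user: 'root', password: '', database: 'db' },
    dumpToFile: '/tmp/dump.sql',
}).catch((err) => {
    // Lands here on ECONNREFUSED instead of crashing the process,
    // provided the error is surfaced on the returned promise.
    console.error('backup failed:', err.code || err);
});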

Compressed output?

Is there some option to compress the output, for example to backup.sql.tar or backup.sql.gz?

Blobs are not dumped correctly

When I try to dump tables with blobs, the module generates MySQL syntax errors by adding an additional single quote before the value.

Example:
BLOB Value: 613a303a7b7d
Output in INSERT statement:

[...] '1',NULL,'',NULL,NULL,''613a303a7b7d',NULL [...]
                             ^

This leads to problems when exporting tables that include BLOBs.

Views are not dumped.

I have a view in my database, and it resulted in an undefined; line in the resulting SQL dump, which in turn causes applications that use the dump to fail on that line.

Are views not supported? If not, can we have that fixed so that we don't get an undefined in the dump output?

Amazon S3

Great module.

Any plans to add an option to save the MySQL dump to S3 instead of to a local disk file? It looks like this might be achieved by expanding line 151?

Alternatively, could you add an option to turn off the file creation on line 151 and instead return the data dump in the callback, so the user can do the S3 upload themselves?

Tim

Can I skip a table?

I'd like to skip a log table, just like mysqldump's --ignore-table option.
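Later versions appear to support this; a sketch assuming the dump.tables list plus the excludeTables flag from the v3 README:

mysqldump({
    connection: { host: 'localhost', user: 'root', password: '', database: 'db' },
    dumpToFile: '/tmp/dump.sql',
    dump: {
        tables: ['log'],     // the tables to leave out...
        excludeTables: true, // ...because this flips the list into a blacklist
    },
});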

Not working with long file names?

I have tried it with a long filename of 127 characters and I get:
Error: ENOENT: no such file or directory, open 'C:\Development\xxxxxxx\data\dumps\1449492501532_dump_[categories|countries|contributor_roles|affiliation_types|subscription_types].sql'

When I use a shorter name it works.
I don't know if it's an issue in your package; just wanted to let you know!

Add header flags to the string output

The dumped SQL file has a set of session variables before and after the dump to do things like disable foreign key checks.

The string that is returned from the function (i.e. when dumping in-memory) doesn't include those same variables.

This means that in a lot of cases you can't just run the output that is returned from the function, which makes it kind of useless.
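A manual stopgap is to wrap the returned string yourself. The statement list below is illustrative, and `dumpString` stands in for whatever the function returned:

const header = [
    'SET FOREIGN_KEY_CHECKS=0;',
    'SET UNIQUE_CHECKS=0;',
].join('\n');
const footer = [
    'SET FOREIGN_KEY_CHECKS=1;',
    'SET UNIQUE_CHECKS=1;',
].join('\n');

// dumpString: the in-memory dump returned by the library (placeholder)
const runnableDump = `${header}\n\n${dumpString}\n\n${footer}`;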

Handling table relations

Hello,

Is it possible to change the position where relations are added, from the top during table creation to the bottom after all tables have been created? When I try to use the SQL file to import into an empty db, it can't find the relations.

Problem with default tables

When I use the default tables (there is no tables property), I have an issue:

SHOW CREATE TABLE undefined Error: ER_NO_SUCH_TABLE: Table 'ordersmanager.undefined' doesn't exist
[ERROR] MYSQL CONNECTION ERROR: ERROR: ER_NO_SUCH_TABLE: TABLE 'ORDERSMANAGER.UNDEFINED' DOESN'T EXIST
SHOW CREATE TABLE undefined Error: ER_NO_SUCH_TABLE: Table 'ordersmanager.undefined' doesn't exist
SHOW CREATE TABLE undefined Error: ER_NO_SUCH_TABLE: Table 'ordersmanager.undefined' doesn't exist

This means that the SHOW TABLES FROM ... query does not work (at least for me).

Errors on Importing

Thanks for this brilliant package! I'm getting one error, though, when importing a dump.

Errors:

[ERROR in query 4] Invalid default value for 'comment_date'
[ERROR in query 77] Table 'wordpress.wp_2_comments' doesn't exist

The particular queries in the dump are:

DROP TABLE IF EXISTS `wp_2_comments`;
CREATE TABLE IF NOT EXISTS `wp_2_comments` (
  `comment_ID` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `comment_post_ID` bigint(20) unsigned NOT NULL DEFAULT '0',
  `comment_author` tinytext COLLATE utf8mb4_unicode_520_ci NOT NULL,
  `comment_author_email` varchar(100) COLLATE utf8mb4_unicode_520_ci NOT NULL DEFAULT '',
  `comment_author_url` varchar(200) COLLATE utf8mb4_unicode_520_ci NOT NULL DEFAULT '',
  `comment_author_IP` varchar(100) COLLATE utf8mb4_unicode_520_ci NOT NULL DEFAULT '',
  `comment_date` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `comment_date_gmt` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `comment_content` text COLLATE utf8mb4_unicode_520_ci NOT NULL,
  `comment_karma` int(11) NOT NULL DEFAULT '0',
  `comment_approved` varchar(20) COLLATE utf8mb4_unicode_520_ci NOT NULL DEFAULT '1',
  `comment_agent` varchar(255) COLLATE utf8mb4_unicode_520_ci NOT NULL DEFAULT '',
  `comment_type` varchar(20) COLLATE utf8mb4_unicode_520_ci NOT NULL DEFAULT '',
  `comment_parent` bigint(20) unsigned NOT NULL DEFAULT '0',
  `user_id` bigint(20) unsigned NOT NULL DEFAULT '0',
  PRIMARY KEY (`comment_ID`),
  KEY `comment_post_ID` (`comment_post_ID`),
  KEY `comment_approved_date_gmt` (`comment_approved`,`comment_date_gmt`),
  KEY `comment_date_gmt` (`comment_date_gmt`),
  KEY `comment_parent` (`comment_parent`),
  KEY `comment_author_email` (`comment_author_email`(10))
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_520_ci;
INSERT INTO `wp_2_comments` (`comment_ID`,`comment_post_ID`,`comment_author`,`comment_author_email`,`comment_author_url`,`comment_author_IP`,`comment_date`,`comment_date_gmt`,`comment_content`,`comment_karma`,`comment_approved`,`comment_agent`,`comment_type`,`comment_parent`,`user_id`) VALUES (1,1,'A WordPress Commenter','[email protected]','http://localhost/','','2016-11-28 14:54:54.000','2016-11-28 14:54:54.000','Hi, this is a comment.\nTo get started with moderating, editing, and deleting comments, please visit the Comments screen in the dashboard.\nCommenter avatars come from <a href=\"https://gravatar.com\">Gravatar</a>.',0,'1','','',0,0);

Any idea what might be causing the problem?

Very high memory footprint

First of all, thanks for the nice lib :)

I am trying to use your library for a small DB dump using AWS Lambda.

native mysqldump size: ~45MB
nodejs mysqldump size: ~90MB

Though this is not the problem. What got me irritated was the very high memory footprint. Lambda says it uses 800MB of RAM to build the 90MB dump. Is this supposed to be like that?

Have you thought about implementing some streaming API where we could pipe the result directly to some gzip filter?
