mathieuloutre / mock-aws-s3
Library to mock the AWS SDK for Node.js
License: MIT License
Relevant code from getObject (lines 312 to 314 in 6b5aa02):

Ignoring the fact that this probably shouldn't be doing synchronous I/O (and if it should, why call the async readFile instead of readFileSync?), the issue is that the underlying file may be deleted between the readFile and statSync calls. This results in an uncaught Error: ENOENT: no such file or directory, instead of the expected NoSuchKey error being returned.
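A minimal sketch of one way to close this gap: funnel filesystem errors through a single translation point that maps ENOENT onto the NoSuchKey error S3 callers expect. The helper name is hypothetical, and the field names here assume the AWS SDK's AWSError shape (code, statusCode).

```javascript
// Hypothetical helper: translate a filesystem ENOENT into the
// NoSuchKey error that S3 callers expect for a missing object.
function toNoSuchKeyError(fsErr) {
  if (fsErr && fsErr.code === 'ENOENT') {
    const err = new Error('The specified key does not exist.');
    err.code = 'NoSuchKey';
    err.statusCode = 404;
    return err;
  }
  return fsErr; // pass through anything else unchanged
}
```

With something like this, both the readFile and stat error paths could return a consistent error instead of leaking a raw fs error.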
Running the following code:

var AWSMock = require('mock-aws-s3');
AWSMock.config.basePath = './tmp/buckets/'; // Can configure a basePath for your local buckets

var s3 = AWSMock.S3({
  params: { Bucket: 'example' }
});

async function test() {
  await s3.createBucket({
    Bucket: 'example'
  }).promise();
}

test();
throws this error:
/home/me/testing/node_modules/mock-aws-s3/lib/mock.js:376
return callback(err);
^
TypeError: callback is not a function
at /home/me/testing/node_modules/mock-aws-s3/lib/mock.js:376:12
at /home/me/testing/node_modules/mkdirp/index.js:38:26
at FSReqWrap.oncomplete (fs.js:153:5)
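Until the mock's .promise() path is fixed, one caller-side workaround is to wrap the callback form yourself. This is a sketch; asPromise is a hypothetical helper, not part of the library.

```javascript
// Hypothetical helper: await any Node-style (params, callback) method,
// sidestepping the broken .promise() code path in the mock.
function asPromise(obj, method, params) {
  return new Promise((resolve, reject) => {
    obj[method](params, (err, data) => (err ? reject(err) : resolve(data)));
  });
}

// Usage against the repro above would look like:
// await asPromise(s3, 'createBucket', { Bucket: 'example' });
```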
I get the following error when calling listObjects on an empty bucket:

s3 = AWSMock.S3({ Bucket: 'my-bucket' });
s3.listObjects({ Bucket: 'my-bucket', Prefix: 'new-one/' }, (err, data) => {
  console.log(err, data);
});

Error: ENOENT: no such file or directory, scandir '/tmp/buckets//my-bucket'
It looks like Marker is not being used properly. According to the docs here http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html, the NextMarker in the response should be what you're setting as the Marker, and Marker in the response should be what was passed in with the request.
Technically, according to the AWS docs for List Objects, "A response can contain CommonPrefixes only if you specify a delimiter."
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html
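To illustrate the expected field semantics (the key names follow the ListObjects v1 docs; the marker values here are made up for illustration):

```javascript
// Per the ListObjects (v1) docs: the response's Marker echoes the request,
// and NextMarker tells the caller where the next page starts.
const request = { Bucket: 'example', Marker: 'file_100' };
const response = {
  Marker: request.Marker,   // what the caller passed in
  NextMarker: 'file_200',   // value to send as Marker on the next request
  IsTruncated: true
};
```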
I have noticed that listObjectsV2 does not return NextContinuationToken, which is required when iterating through larger sets of objects on S3.
The response looks like this:
{
Contents: [
{
Key: 'file_0',
ETag: '"cc6c6102174b3050bc3397c724f00f63"',
LastModified: 2022-08-17T13:10:29.351Z,
Size: 3
},
{
Key: 'file_1',
ETag: '"cc6c6102174b3050bc3397c724f00f63"',
LastModified: 2022-08-17T13:10:29.351Z,
Size: 3
}
],
CommonPrefixes: [ { Prefix: '/' } ],
IsTruncated: true,
NextContinuationToken: undefined,
ContinuationToken: undefined,
StartAfter: undefined
}
Note that while the tokens are missing, IsTruncated is still set to true, which would lead to an infinite loop in many applications.
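A defensive pagination loop can guard against this by stopping when the token is missing even if IsTruncated claims there are more results. This is a sketch; listAllKeys is a hypothetical helper, not part of the library.

```javascript
// Hypothetical pagination helper that refuses to spin forever when
// IsTruncated is true but NextContinuationToken is missing.
async function listAllKeys(s3, params) {
  const keys = [];
  let token;
  for (;;) {
    const res = await s3
      .listObjectsV2({ ...params, ContinuationToken: token })
      .promise();
    keys.push(...(res.Contents || []).map(o => o.Key));
    // Without the token check, the bug above would cause an infinite loop.
    if (!res.IsTruncated || !res.NextContinuationToken) break;
    token = res.NextContinuationToken;
  }
  return keys;
}
```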
When an object doesn't exist, getObject returns an object with the requisite properties, but it's not an Error object. This makes debugging more difficult (since there's no stack property), breaks Node.js conventions (callers typically expect the first argument to a callback to be an Error object), and deviates from the official AWS SDK (which returns Error objects).
Lines 331 to 340 in 6b5aa02
Every other method seems to correctly return Error objects. So this appears to be an outlier.
getObject works as expected, but putObject looks like it's still stubbed out without a fake implementation (one that would call the callback or work when promisified):

Line 621 in 6b5aa02

I think the solution would be just to add callback(null, url) on line 625? Would need to look more at the spec for getSignedUrl to make sure the dummy url would work for both.
I've just upgraded both aws-sdk and mock-aws-s3 to their latest version, but I am getting the following type error (using TypeScript):
Type 'S3' is missing the following properties from type 'S3':
deleteBucketIntelligentTieringConfiguration, deleteBucketOwnershipControls, getBucketIntelligentTieringConfiguration, getBucketOwnershipControls, and 4 more.
Version used:
mock-aws-s3@npm:4.0.2
aws-sdk@npm:2.995.0
Hello
Firstly, thanks for the work you've all put in on this package - I just started using it on a small module I'm writing. I need just two methods for my tests, S3.putObject and S3.createBucket - obviously only the former is implemented right now.
I'm just wanting to ask if you'd consider me contributing a PR to add s3.createBucket. Having looked through the source a little, I have an idea of how I'd add it to be compatible with the existing code. I guess the simplest way would be to just create the dir on disk and then, perhaps as a second phase, modify e.g. s3.putObject to verify the existence of a bucket before allowing a put (this would also help my use case, incidentally).
Cheers
mock-aws-s3 is using underscore version 1.8.3, which has a critical vulnerability. Can you please upgrade the underscore package to version 1.12.* or above? Please let us know ASAP.
Hi,
I discovered this library and it's really nice, but I'm encountering a problem with the listObjects and listObjectsV2 methods.
I don't get the same responses as the AWS SDK. I just want to list all directories and files at a specific path, so I specify Delimiter: "/" and a Prefix with my path.
The problem is that the response contains all subdirectories and files, including those inside these directories.
The official AWS SDK returns only the directories and files at the specified prefix:
root_folder/
├── file.txt
├── Folder1/
│   └── Subfolder1/
│       ├── File1.txt
│       └── File12.txt
└── Folder2/
    └── File2.txt
Options:
The results should contain an array of CommonPrefixes with the prefixes Folder1 and Folder2, plus the Key "file.txt". But it returns "Subfolder1", "File1", "File12", and "File2" too.
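For reference, the grouping S3 performs when a delimiter is set can be sketched as follows. groupByDelimiter is a hypothetical illustration, not the library's code.

```javascript
// Keys whose remainder (after the prefix) still contains the delimiter
// are rolled up into CommonPrefixes; only direct children stay in Contents.
function groupByDelimiter(keys, prefix, delimiter) {
  const contents = [];
  const prefixes = new Set();
  for (const key of keys) {
    if (!key.startsWith(prefix)) continue;
    const rest = key.slice(prefix.length);
    const i = rest.indexOf(delimiter);
    if (i === -1) contents.push(key);
    else prefixes.add(prefix + rest.slice(0, i + 1));
  }
  return { Contents: contents, CommonPrefixes: [...prefixes] };
}
```

With the tree above, Prefix 'root_folder/' and Delimiter '/', this yields Contents ['root_folder/file.txt'] and CommonPrefixes ['root_folder/Folder1/', 'root_folder/Folder2/'], which is the response the reporter expected.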
The core aws-sdk dependency added promises to the core, via a .promise() call appended to each service request. Ex:
var s3 = new AWS.S3();
s3.listBuckets(params).promise().then();
Any chance of that functionality being added to this library?
Lines 221 to 255 in 4b247d6
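One way such support is commonly shimmed (a sketch, not the library's actual implementation): have each method return a request-like object whose promise() drives the existing callback form.

```javascript
// Wrap a callback-style (params, callback) function so callers get the
// SDK-style request object exposing a .promise() method.
function withPromise(fn) {
  return params => ({
    promise: () =>
      new Promise((resolve, reject) =>
        fn(params, (err, data) => (err ? reject(err) : resolve(data)))
      )
  });
}
```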
@AllieRays and I coded a solution to return an object containing the send method, simulating the s3 module.
@AllieRays Don't forget to fork, commit, and open a pull request with that change, so you won't lose it after npm installing.
Issue and fix outline here:
#38
It looks like getObject doesn't set the LastModified time in the mocked response.
I think this is set by the AWS code; I'm new to S3 though.
Instead of throwing an AWSError with the NoSuchKey code, headObject just allows an ENOENT to be emitted.
I noticed that the real S3 sdk takes the bound parameters (default options) in the "params" key:
var s3bucket = new AWS.S3({ params: {Bucket: 'myBucket'} });
(see http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/node-services.html#Bound_Parameters)
The mock s3 has them at the top level:
var s3 = AWSMock.S3({
Bucket: '/tmp/example'
});
Hi again
After doing the work for createBucket (the simple version), I noticed that at least some tests only cover the correct, direct use-case. I think it'd be prudent to verify correct handling of e.g. incorrect method arguments. I realise that you don't want to emulate the AWS API entirely, but I think some handling of incorrect args etc. would be very worthwhile.
Any thoughts?
Cheers
I tried to find a deleteObject function, but it seems to be missing from the library. Only the deleteObjects functionality exists.
Is there interest in adding waitFor?
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#waitFor-property
From the S3 docs, the second parameter to the putObject callback should be "the de-serialized data returned from the request. Set to null if a request error occurs." But mock-aws-s3's version isn't passing anything.
I get Failed: s3.headObject(...).promise is not a function with the following code:
s3.headObject(params).promise()
Currently, when a readable stream passed as the Body finishes, the callback is fired with only a boolean:

Line 487 in 6b5aa02

This should return the location, key, and bucket just as the Buffer path does:

done(null, { Location: dest, Key: search.Key, Bucket: search.Bucket })
I'm running v 4.0.1
copyObject is returning /tmp/buckets//<bucket_name>/key
Code:
const copyObject = s3 => sourceBucket => outputBucket => (oldKey, newKey) => {
  const CopySource = `/${sourceBucket}/${oldKey}`;
  const params = {
    Bucket: outputBucket,
    CopySource,
    Key: newKey
  };
  return s3.copyObject(params).promise();
};
test file:

const AWSMock = require('mock-aws-s3');
AWSMock.config.basePath = '/tmp/buckets';

const KNOWN_OUTPUT_BUCKET = 'media-output-dev-use1-v1';

const FAKE_S3 = AWSMock.S3({
    params: {
      Bucket: KNOWN_OUTPUT_BUCKET
    }
  }),
  FAKE_OLD_KEY = 'oldkey',
  FAKE_NEW_KEY = 'newkey',
  FAKE_BODY = '123';

copyObject(FAKE_S3)(KNOWN_OUTPUT_BUCKET)(KNOWN_OUTPUT_BUCKET)(FAKE_OLD_KEY, FAKE_NEW_KEY)
  .then(res => {
    console.log('res', res);
  });
Output:
res {
Bucket: '/tmp/buckets/media-output-dev-use1-v1',
CopySource: '/tmp/buckets//media-output-dev-use1-v1/oldkey',
Key: 'newkey'
}
The promise interface breaks the ability for getObject to return a stream when not provided with a callback. The same is the case for headObject.
One more suggestion: you might also want to export S3Mock for consumers to override certain behaviours if required.
While using the deleteBucket function, I get the following error:
AssertionError [ERR_ASSERTION]: false == true
at rmdir (/media/sylvain/Store/projects/devpipeline/services/AmbraIngestionServer/node_modules/fs-extra/node_modules/rimraf/rimraf.js:159:5)
at /media/sylvain/Store/projects/devpipeline/services/AmbraIngestionServer/node_modules/fs-extra/node_modules/rimraf/rimraf.js:97:16
at FSReqWrap.oncomplete (fs.js:135:15)
After some research, it looks like this is caused by the rimraf package not being up to date.
The rimraf package is included via the fs-extra package (also out of date), which mock-aws-s3 depends on.
I have confirmed the error no longer occurs after updating fs-extra from 0.6.4 to 7.0.1, but I don't know what impact updating fs-extra could have on other functions.
It seems there is no promisify in @aws-sdk/client-s3 anymore.
getObject() on an entry which does not exist results in an "ENOENT" (file not found) exception, because the mocked S3 tries to open the object's file without checking its existence. This behavior does not mimic AWS S3: instead of such an exception, S3 just returns an object describing the error:
...
code: "NoSuchKey",
message: "The specified key does not exist.",
name: "NoSuchKey",
region: null,
statusCode: 404
...
S3 file upload does not work correctly within cucumber.js. The file is created but it does not have any content.
See the attached test cases:
Run node test.js to show that it works in the normal case.
Run npm test and look for the echoed "content" string within the cucumber message. This should be "content template" but appears as "content", hence the file has no content - confirm by opening the file within a text editor (C:/projects/test/files/test/test.txt).
client.upload closes the readable stream passed as the parameter automatically. The actual S3 client doesn't do that.