fake-s3's People

Contributors

batedurgonnadie, bjoerne2, bradgessler, cheister, djanjic, ejennings-mdsol, frankleonrose, gaul, iq-dot, jbyler, johanneswuerbach, jturkel, jubos, leikind, m1foley, mkacherovich, mlewislogic, mthssdrbrg, ncarroll-mdsol, ngauthier, oggy, olemchls, omarkhan, pickhardt, plainprogrammer, samuel-belcastro, sbussetti, stuarthicks, twalpole, vadims

fake-s3's Issues

Rack compatibility

Love this idea, will make testing s3 stuff much easier. Would be much better as a Rack app so I could just throw it in my local passenger config and wouldn't have to run it by hand each time.

listObjects MaxKeys has no effect

I'm using the JavaScript API and called listObjects with MaxKeys set to 30, expecting 30 objects to be returned. However, fakeS3 seems to have fallen back to the default (1000) in that case.

Anyone seen this?
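
For reference, a minimal reproduction of the same behaviour from Python boto (hedged sketch; the endpoint, bucket name and key count are illustrative) would be:

from boto.s3.connection import S3Connection, OrdinaryCallingFormat

# Assumes fakes3 is running on localhost:4567 and the bucket already holds
# more than 30 objects.
conn = S3Connection('', '', is_secure=False, port=4567, host='localhost',
                    calling_format=OrdinaryCallingFormat())
bucket = conn.get_bucket('some_bucket')

# Ask for at most 30 keys; against real S3 the result is capped at 30, while
# the report above suggests fakeS3 applies the default cap of 1000 instead.
keys = bucket.get_all_keys(max_keys=30)
print len(keys)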

No objects after restart of fakes3

I'm using HEAD of fakes3, and whenever I restart the fakes3 server, looking up keys no longer works; I always get an empty list of objects for each bucket.

Add eventual consistency support

In the us-east-1 region, read-after-write consistency is not guaranteed even for GETs of new objects; only eventual consistency is.

To better simulate the real S3, it would be great if fake-s3 could support new objects becoming available only N seconds after being uploaded.
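
Until fake-s3 can simulate that delay, client and test code that has to tolerate eventual consistency typically polls for the new key; a hypothetical helper in Python boto (names and timings are illustrative) might look like:

import time

def wait_for_key(bucket, key_name, timeout=10.0, interval=0.5):
    # Poll until key_name shows up in bucket or the timeout expires.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if bucket.get_key(key_name) is not None:
            return True
        time.sleep(interval)
    return False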

How do I stop server

How do I kill the S3 server? Either via the command line (preferred) or programmatically (Java)? I'm looking to use this with integration and unit testing.
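
On the command line, fakes3 can simply be killed like any other process (Ctrl-C in its terminal, or kill <pid>). For integration tests, one option is to manage fakes3 as a child process from the test harness; here is a hedged Python sketch (the same idea applies from Java via ProcessBuilder), with the root directory and port chosen arbitrarily:

import subprocess
import time

# Start fakes3 for the duration of the test run.
server = subprocess.Popen(['fakes3', '-r', '/tmp/fakes3_root', '-p', '4567'])
time.sleep(1)  # crude wait for WEBrick to start listening

try:
    pass  # ... run the integration tests against http://localhost:4567 here ...
finally:
    server.terminate()  # SIGTERM stops the server
    server.wait()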

Totally unable to get started; consider posting quickstart example?

Hey there,

I've been really excited to try FakeS3 ever since I heard about it. I love the idea of being able to develop code against S3 even when I'm offline.

I finally got around to it, but unfortunately I'm having a ton of trouble figuring out how to use it for even something as basic as accessing an image from my web browser. I'm a Ruby noob unfortunately, so I'm also not having any luck trying to debug.

I'm on Mac OS X 10.7.5 (Lion), running the system default Ruby 1.8.7. I installed fakes3 via sudo gem install fakes3 (just gem install fakes3 gave me a permissions error), which installed FakeS3 0.1.5.

I tried starting it with -p 4567 -r ~/Dropbox, to see if I could browse my Dropbox files via FakeS3. Going to http://localhost:4567/ indeed showed the top-level directories in my Dropbox, but trying to browse to any sub-directory, or any individual file within any of those sub-directories, e.g. http://localhost:4567/Pictures/Misc/pixel.gif, always resulted in a 404.

I saw that indeed, FakeS3 was thinking each of those top-level dirs were buckets, and you've documented that it's recommended to use hostname-style requests, so I added s3.amazonaws.com and dropbox.s3.amazonaws.com to my /etc/hosts and ran with -r ~/, and indeed, I'm now able to view my home directory top-level dirs as buckets at http://s3.amazonaws.com:4567/. Unfortunately, I see the same buckets at http://dropbox.s3.amazonaws.com:4567/. And still, I can't access any file, e.g. http://dropbox.s3.amazonaws.com:4567/Pictures/Misc/pixel.gif or http://s3.amazonaws.com:4567/Dropbox/Pictures/Misc/pixel.gif.

I've done a bunch more experimenting, and still can't manage to just view a simple image in my browser. What am I doing wrong?

My sincere apologies if I'm missing something obvious, and thanks for your help. It might be helpful to have a simple "quickstart" example in the readme. If you'd like, I'd be happy to submit one after I figure this out.

Thanks!
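
In the meantime, a minimal quickstart (hedged sketch using Python boto; the bucket name, root directory and port are arbitrary) might look like this:

# Terminal 1:
#   fakes3 -r /tmp/fakes3_root -p 4567

from boto.s3.connection import S3Connection, OrdinaryCallingFormat

conn = S3Connection('', '', is_secure=False, port=4567, host='localhost',
                    calling_format=OrdinaryCallingFormat())
bucket = conn.create_bucket('quickstart')
key = bucket.new_key('hello.txt')
key.set_contents_from_string('hello from fakes3\n')

# The object should now also be reachable in a browser at
# http://localhost:4567/quickstart/hello.txt
print bucket.get_key('hello.txt').get_contents_as_string()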

Webstor

Just FYI, it works with the webstor (wscmd) client, so you can add it to your list.

https://github.com/klokan/webstor

Multipart upload fails, but I believe this is due to an issue in the wscmd client's indexing of parts.

Here is the log:

localhost.localdomain - - [12/Sep/2015:08:44:21 PDT] "POST /services/Walrus/buxy/TEST%2Fdildo?uploads HTTP/1.0" 200 251

  • -> /services/Walrus/buxy/TEST%2Fdildo?uploads
    localhost.localdomain - - [12/Sep/2015:08:44:21 PDT] "PUT /services/Walrus/buxy/TEST%2Fdildo?partNumber=1&uploadId=84a0fd783ca99370383f61eb9457280c HTTP/1.0" 200 0
  • -> /services/Walrus/buxy/TEST%2Fdildo?partNumber=1&uploadId=84a0fd783ca99370383f61eb9457280c
    localhost.localdomain - - [12/Sep/2015:08:44:22 PDT] "PUT /services/Walrus/buxy/TEST%2Fdildo?partNumber=2&uploadId=84a0fd783ca99370383f61eb9457280c HTTP/1.0" 200 0
  • -> /services/Walrus/buxy/TEST%2Fdildo?partNumber=2&uploadId=84a0fd783ca99370383f61eb9457280c
    localhost.localdomain - - [12/Sep/2015:08:44:23 PDT] "PUT /services/Walrus/buxy/TEST%2Fdildo?partNumber=3&uploadId=84a0fd783ca99370383f61eb9457280c HTTP/1.0" 200 0
  • -> /services/Walrus/buxy/TEST%2Fdildo?partNumber=3&uploadId=84a0fd783ca99370383f61eb9457280c
    localhost.localdomain - - [12/Sep/2015:08:44:24 PDT] "PUT /services/Walrus/buxy/TEST%2Fdildo?partNumber=4&uploadId=84a0fd783ca99370383f61eb9457280c HTTP/1.0" 200 0
  • -> /services/Walrus/buxy/TEST%2Fdildo?partNumber=4&uploadId=84a0fd783ca99370383f61eb9457280c
    [2015-09-12 08:44:25] ERROR Errno::ENOENT: No such file or directory - /mnt/RAM/FAKES3/services/84a0fd783ca99370383f61eb9457280c_Walrus/buxy/TEST/dildo_part0/.fakes3_metadataFFF/content
    /home/alian/.gem/ruby/gems/fakes3-0.2.1/lib/fakes3/file_store.rb:228:in `initialize'
    /home/alian/.gem/ruby/gems/fakes3-0.2.1/lib/fakes3/file_store.rb:228:in `open'
    /home/alian/.gem/ruby/gems/fakes3-0.2.1/lib/fakes3/file_store.rb:228:in `block in combine_object_parts'
    /home/alian/.gem/ruby/gems/fakes3-0.2.1/lib/fakes3/file_store.rb:224:in `each'
    /home/alian/.gem/ruby/gems/fakes3-0.2.1/lib/fakes3/file_store.rb:224:in `combine_object_parts'
    /home/alian/.gem/ruby/gems/fakes3-0.2.1/lib/fakes3/server.rb:250:in `do_POST'
    /usr/share/ruby/webrick/httpservlet/abstract.rb:106:in `service'
    /usr/share/ruby/webrick/httpserver.rb:138:in `service'
    /usr/share/ruby/webrick/httpserver.rb:94:in `run'
    /usr/share/ruby/webrick/server.rb:295:in `block in start_thread'
    localhost.localdomain - - [12/Sep/2015:08:44:25 PDT] "POST /services/Walrus/buxy/TEST%2Fdildo?uploadId=84a0fd783ca99370383f61eb9457280c HTTP/1.0" 500 437
  • -> /services/Walrus/buxy/TEST%2Fdildo?uploadId=84a0fd783ca99370383f61eb9457280c

failed attempt at using fake-s3 as a docker registry:2 backend

The end result of this endeavor was unsuccessful.

However, I'd like to share my findings / quick fixes nonetheless.

In server.rb https://github.com/jubos/fake-s3/blob/master/lib/fakes3/server.rb#L419 I added unescaping of the incoming source:

        unescaped_src_elems   = CGI::unescape(copy_source.first)
        src_elems   = unescaped_src_elems.split("/")

Then also in server.rb https://github.com/jubos/fake-s3/blob/master/lib/fakes3/server.rb#L482 I added HTML-unescaping of the incoming XML so that the regex can match the quote character (it otherwise fails, because the quote arrives HTML-escaped):

      parts_xml = CGI::unescapeHTML(parts_xml)

On my setup the following line seems to be bad Ruby: https://github.com/jubos/fake-s3/blob/master/lib/fakes3/file_store.rb#L231. Replacing it with the following resolved the error "ERROR NoMethodError: undefined method `Error' for #<FakeS3::FileStore:":

    raise "Invalid file chunk" unless part[:etag] == etag

After these changes a docker push still fails with the following error:

Error pushing to registry: Server error: 400 trying head request for

It appears the HEAD method has not been implemented by fake-s3 at all, so I ran into a dead end.

botocmd.py tests fail when tests are run the first time

I see the following failure when I run rake test for the first time:

$ rake test 
./fake-s3/lib/fakes3/version.rb:2: warning: already initialized constant VERSION
Loaded suite ./gem-home/gems/rake-10.0.3/lib/rake/rake_test_loader
Started
Traceback (most recent call last):
  File "./fake-s3/test/botocmd.py", line 87, in <module>
    handler(*args)
  File "./fake-s3/test/botocmd.py", line 61, in put
    bucket = self.conn.get_bucket(bucket_name)
  File "virtualenv/main/lib/python2.7/site-packages/boto/s3/connection.py", line 371, in get_bucket
    bucket.get_all_keys(headers, maxkeys=0)
  File "virtualenv/main/lib/python2.7/site-packages/boto/s3/bucket.py", line 347, in get_all_keys
    '', headers, **params)
  File "virtualenv/main/lib/python2.7/site-packages/boto/s3/bucket.py", line 314, in _get_all
    response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 404 Not Found
<?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchBucket</Code><Message>The resource you requested does not exist</Message><Resource>s3cmd_bucket</Resource><RequestId>1</RequestId></Error>

I suspect this means we need to create a bucket in boto before trying to run the test. Also, we probably should update the test task so that it clears out test_root before running.
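
In boto terms, the missing setup step would be something along these lines (hedged sketch; the connection details have to match however the test server is started, and the bucket name comes from the 404 above):

from boto.s3.connection import S3Connection, OrdinaryCallingFormat

conn = S3Connection('', '', is_secure=False, port=4567, host='localhost',
                    calling_format=OrdinaryCallingFormat())
# Create the bucket botocmd.py expects before the put test runs.
if conn.lookup('s3cmd_bucket') is None:
    conn.create_bucket('s3cmd_bucket')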

s3cmd ls does not list objects

I do not have details to reproduce it; it just never works. You can't 's3cmd ls' anything you have put in it. I will have to find out how to run the tests so that I can provide a test case.

boto.exception.S3DataError: BotoClientError: ETag from S3 did not match computed MD5

I believe I found a compatibility bug that has only shown up recently in our testing.

Whenever I try to write a file via key.set_contents_from_string (or set_contents_from_filename, for that matter), I get the following error:

BotoClientError: ETag from S3 did not match computed MD5

When I inspect the key's md5 and etag properties in iPython after the error, they provide the same hash.

I've tried this out both in iPython and using the botocmd.py in fake-s3/test. I'm using fakes3 v0.1.5 and have tried with boto 2.9.0, 2.9.9 and 2.10.0. All versions have behaved the same.

Example test case:

from boto.s3 import key, connection

conn = connection.S3Connection(is_secure=False,
                               calling_format=connection.OrdinaryCallingFormat(),
                               aws_access_key_id='',
                               aws_secret_access_key='',
                               port=4567,
                               host='localhost')

# i have fakes3 running in another terminal - port: 4567, root: /tmp/test

conn.create_bucket('foo')
b = conn.get_bucket('foo')

k = key.Key(b)
k.key = 'somepath/foo.txt'
k.set_contents_from_string('foobarfoo')

---------------------------------------------------------------------------
S3DataError                               Traceback (most recent call last)

[... stack trace ...]

S3DataError: BotoClientError: ETag from S3 did not match computed MD5

# but it appears etag and md5 are the same (even after failure)

k.md5
>>> 'c98cbfeb6a347a47eb8e96cfb4c4b890'

k.etag
>>> 'c98cbfeb6a347a47eb8e96cfb4c4b890'

Output from fakes3 looks successful:

localhost - - [14/Aug/2013:09:21:32 PDT] "PUT /foo/somepath/foo.txt HTTP/1.1" 200 0
 -> /foo/somepath/foo.txt

However, when I go to look at the file in the fakes3 bucket, I find the path has been created, but the file (foo.txt) is a directory... It's been a long time since I've used this tool, so maybe that's new and by design?

Spark DataFrame save to Parquet cannot put file into fake S3

I'm working with Spark and trying to save a DataFrame into Parquet files. It seems to be able to create buckets and directories and to list them, but when it tries to create a file, the server throws an exception to the console.

Here is how I installed fake-s3:

$ sudo gem install fakes3
Password:
Fetching: thor-0.19.1.gem (100%)
Successfully installed thor-0.19.1
Fetching: builder-3.2.2.gem (100%)
Successfully installed builder-3.2.2
Fetching: fakes3-0.2.1.gem (100%)
Successfully installed fakes3-0.2.1
Parsing documentation for thor-0.19.1
Installing ri documentation for thor-0.19.1
Parsing documentation for builder-3.2.2
Installing ri documentation for builder-3.2.2
Parsing documentation for fakes3-0.2.1
Installing ri documentation for fakes3-0.2.1
3 gems installed

I have updated my /etc/hosts so that my_bucket.localhost points to 127.0.0.1.

Here are the first messages the server prints out on start:

$ fakes3 -r ~/tmp/fakes3_root -p 4569 -H localhost
Loading FakeS3 with /Users/emorozov/tmp/fakes3_root on port 4569 with hostname localhost
[2015-06-25 12:48:24] INFO  WEBrick 1.3.1
[2015-06-25 12:48:24] INFO  ruby 2.0.0 (2014-02-24) [universal.x86_64-darwin13]
[2015-06-25 12:48:24] INFO  WEBrick::HTTPServer#start: pid=90416 port=4569

Finally the exception:

[2015-06-25 13:12:39] ERROR Errno::ENOENT: No such file or directory - /Users/emorozov/tmp/fakes3_root/reltio%2Ftmp6%2F_temporary%2F0%2F_temporary%2Fattempt_201506251311_0024_r_000000_0%2Fpart-r-00001.parquet/.fakes3_metadataFFF/metadata
    /Library/Ruby/Gems/2.0.0/gems/fakes3-0.2.1/lib/fakes3/file_store.rb:109:in `initialize'
    /Library/Ruby/Gems/2.0.0/gems/fakes3-0.2.1/lib/fakes3/file_store.rb:109:in `open'
    /Library/Ruby/Gems/2.0.0/gems/fakes3-0.2.1/lib/fakes3/file_store.rb:109:in `copy_object'
    /Library/Ruby/Gems/2.0.0/gems/fakes3-0.2.1/lib/fakes3/server.rb:176:in `do_PUT'
    /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/webrick/httpservlet/abstract.rb:106:in `service'
    /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/webrick/httpserver.rb:138:in `service'
    /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/webrick/httpserver.rb:94:in `run'
    /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/webrick/server.rb:295:in `block in start_thread'

localhost - - [25/Jun/2015:13:12:39 FET] "PUT /tmp6%2F_temporary%2F0%2Ftask_201506251311_0024_r_000000%2Fpart-r-00001.parquet HTTP/1.1" 500 496
- -> /tmp6%2F_temporary%2F0%2Ftask_201506251311_0024_r_000000%2Fpart-r-00001.parquet

Push latest version to RubyGems?

I just ran into a bug in 0.1.5, which is the version currently installed when doing gem install fakes3. I came here and found out the problem had already been fixed (quotes in ETag header).

Would you please push 0.1.5.1 to RubyGems so I can tell users to just do gem install instead of building the gem themselves? Thanks!

[Feature] Ability to set canned ACL

It would be nice to have the ability to set a canned ACL on an uploaded file (like on the real S3). It would be useful in a dev environment for testing private files.
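
On the client side this corresponds to boto's canned-ACL policy argument (hedged sketch; whether fake-s3 stores or enforces the ACL is exactly what this request is about):

from boto.s3.connection import S3Connection, OrdinaryCallingFormat

conn = S3Connection('', '', is_secure=False, port=4567, host='localhost',
                    calling_format=OrdinaryCallingFormat())
bucket = conn.create_bucket('acl_demo')

# Upload a key with a canned ACL...
key = bucket.new_key('private/report.txt')
key.set_contents_from_string('secret contents', policy='private')

# ...or change the ACL of an existing key.
key.set_acl('public-read')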

list objects not working?

I am using the PHP AWS SDK with no issue except when I perform a list objects:

$response = $this->list_objects($s3bucket,array(
'prefix'=>'u/public/'
));

the logs say
localhost - - [31/Aug/2012:11:30:30 EST] "GET /?prefix=u%2F2public%2F HTTP/1.1" 200 300
http://127.0.0.1:4567/?prefix=u%2Fpublic%2F -> /?prefix=u%2Fpublic%2F

and it returns a bucket list as the response.

Is this a known issue?
Am I doing something wrong? The same query works against S3 directly.

Thanks
Andrew

List supported methods (documentation)

Hello, could you please be more specific in the documentation and add a concrete list of supported methods?

FakeS3 doesn't support all of the S3 command set, but the basic ones like put, get, list, copy, and make bucket are supported. More coming soon.

It seems that it is not clear what is supported.

List Objects seems to return an empty structure for me, and when I try to use the https://www.npmjs.org/package/s3-upload-stream library, multipart upload does not seem to work properly. Could there be a list of all supported methods, perhaps organized according to the AWS SDK?

Unable to serve file under other hostnames

Fakes3 is unable to serve files under another hostname.
I'm using fakes3 0.2.1.

Here's how to reproduce the problem:

Save the file s3client.py:

from boto.s3.connection import S3Connection, OrdinaryCallingFormat

s3 = S3Connection('', '', is_secure=False, port=4567, host='localhost', calling_format=OrdinaryCallingFormat())
bucket = s3.create_bucket('created_bucket')
key = bucket.new_key('created_file_in_the_bucket')
key.set_contents_from_string('This is the content of the created file\n')
for item in bucket.list():
    print item
    print item.get_contents_as_string()

Create the fakes3 root:

mkdir fake_s3_root

Run fakes3 with the default localhost hostname:

fakes3 server --port=4567 --root=fake_s3_root

Launch s3client.py to create the created_bucket bucket and the created_file_in_the_bucket file.

$ python s3client.py 
<Key: created_bucket,created_file_in_the_bucket>
This is the content of the created file

Perfect, now curl the file:

$ curl http://localhost:4567/created_bucket/created_file_in_the_bucket
This is the content of the created file

Works great!

But now curl the same file with another hostname pointing to the same IP:

$ curl http://localhost.localdomain:4567/created_bucket/created_file_in_the_bucket
<?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchKey</Code><Message>The specified key does not exist</Message><Key>created_bucket/created_file_in_the_bucket</Key><RequestId>1</RequestId><HostId>2</HostId></Error>

fakes3 does not serve the file, as it tries to look for it elsewhere:

No such file or directory @ rb_sysopen - /Users/analogue/tmp/fake_s3_root/localhost/created_bucket/created_file_in_the_bucket/.fakes3_metadataFFF/metadata
/usr/local/lib/ruby/gems/2.2.0/gems/fakes3-0.2.1/lib/fakes3/file_store.rb:83:in `initialize'
/usr/local/lib/ruby/gems/2.2.0/gems/fakes3-0.2.1/lib/fakes3/file_store.rb:83:in `open'
/usr/local/lib/ruby/gems/2.2.0/gems/fakes3-0.2.1/lib/fakes3/file_store.rb:83:in `get_object'
/usr/local/lib/ruby/gems/2.2.0/gems/fakes3-0.2.1/lib/fakes3/server.rb:94:in `do_GET'
/usr/local/Cellar/ruby/2.2.2/lib/ruby/2.2.0/webrick/httpservlet/abstract.rb:106:in `service'
/usr/local/Cellar/ruby/2.2.2/lib/ruby/2.2.0/webrick/httpserver.rb:138:in `service'
/usr/local/Cellar/ruby/2.2.2/lib/ruby/2.2.0/webrick/httpserver.rb:94:in `run'
/usr/local/Cellar/ruby/2.2.2/lib/ruby/2.2.0/webrick/server.rb:294:in `block in start_thread'
localhost - - [10/Aug/2015:23:02:30 EDT] "GET /created_bucket/created_file_in_the_bucket HTTP/1.1" 404 220
- -> /created_bucket/created_file_in_the_bucket

For a reason I can't understand, fakes3 looks for the file in:

/Users/analogue/tmp/fake_s3_root/localhost/created_bucket/created_file_in_the_bucket/.fakes3_metadataFFF/metadata

instead of looking for the file in the legitimate place:

/Users/analogue/tmp/fake_s3_root/created_bucket/created_file_in_the_bucket/.fakes3_metadataFFF/metadata

Fakes3 appends the configured hostname localhost to the root folder.

Expected behavior:
When I launch fakes3 without a configured hostname, I expect fakes3 to serve any domain pointing to the IP it's running on the same way.

Cannot connect with AWS-SDK

I've tried everything to connect with aws-sdk, with no luck so far; only connection refused attempts.

I am set up running two Vagrant machines, one of them running fake-s3, which I can verify is available by navigating to 10.0.10.6 (where it is located) and viewing the XML bucket listing output.

When I attempt to connect from my Rails application, or simply from the Rails console, I use the following:

s3 = AWS::S3.new({:access_key_id => "123", :secret_access_key => "abc", :s3_endpoint => "10.0.10.6",
 :s3_force_path_style => true, :s3_port => 4567, :use_ssl => false})

The AWS::S3 object gets created, but s3.buckets.each returns a connection error. In addition to force_path_style, I've tried using xip.io as a TLD to route back to my bucket, but no luck there either. I also could not find the s3_endpoint configuration option in the aws-sdk source, but I may not have been looking in the right place.

Has anyone had any luck with this?
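
One way to sanity-check the fakes3 endpoint independently of the Ruby SDK is a quick path-style request from Python boto (hedged sketch; the dummy credentials, port and host must match your setup):

from boto.s3.connection import S3Connection, OrdinaryCallingFormat

# Point boto directly at the fake-s3 VM, forcing path-style requests so no
# bucket subdomain needs to resolve.
conn = S3Connection('123', 'abc', is_secure=False, port=4567, host='10.0.10.6',
                    calling_format=OrdinaryCallingFormat())
print conn.get_all_buckets()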

Jets3t 0.9.0 support

I'm trying to use fake-s3 with jets3t version 0.9.0.
jets3t version 0.7.1 worked great, but when I try to upgrade I get the error Connection refused.

The error I got:

Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:637)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:375)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:148)
at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:149)
at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:121)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:573)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:425)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:820)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:754)
at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:334)
at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:281)
at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRestHead(RestStorageService.java:942)
at org.jets3t.service.impl.rest.httpclient.RestStorageService.getObjectImpl(RestStorageService.java:2148)
at org.jets3t.service.impl.rest.httpclient.RestStorageService.getObjectDetailsImpl(RestStorageService.java:2075)
at org.jets3t.service.StorageService.getObjectDetails(StorageService.java:1093)
at org.jets3t.service.StorageService.getObjectDetails(StorageService.java:548)
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:174)

Is there a configuration that will make it work?

This library has been used in Hadoop for the last few years to connect to Amazon S3, so it would be very useful if it were supported (if it's not supported already).

Bucket keys containing a hyphen causes timeouts

This breaks 100% of the time with a timeout. As long as the bucket name contains a hyphen, the request times out.

[39] pry(main)> s3 = AWS::S3.new(:access_key_id => Settings.s3.access_key_id, :secret_access_key => Settings.s3.access_key_id, :s3_endpoint => 'localhost',:s3_port => 9444, :use_ssl => false)
=> AWS::S3
[40] pry(main)> bucket = s3.buckets["foo-bar"]
=> #<AWS::S3::Bucket:foo-bar>
[41] pry(main)> object = bucket.objects["key1"]
=> #<AWS::S3::S3Object:foo-bar/key1>
[42] pry(main)> object.write("testtest")
[AWS S3 200 62.120395 3 retries] put_object(:bucket_name=>"foo-bar",:content_length=>8,:data=>#<StringIO:0x007f96fb8d30d0>,:key=>"key1") Net::OpenTimeout execution expired

Net::OpenTimeout: execution expired
from /Users/zundradaniel/.rvm/rubies/ruby-2.1.5/lib/ruby/2.1.0/net/http.rb:879:in `initialize'
[43] pry(main)>

The same call with an underscore instead of a hyphen works:

[43] pry(main)> s3 = AWS::S3.new(:access_key_id => Settings.s3.access_key_id, :secret_access_key => Settings.s3.access_key_id, :s3_endpoint => 'localhost',:s3_port => 9444, :use_ssl => false)
=> AWS::S3
[44] pry(main)> bucket = s3.buckets["foo_bar"]
=> #<AWS::S3::Bucket:foo_bar>
[45] pry(main)> object = bucket.objects["key1"]
=> #<AWS::S3::S3Object:foo_bar/key1>
[46] pry(main)> object.write("testtest")
[AWS S3 200 6.008638 0 retries] put_object(:bucket_name=>"foo_bar",:content_length=>8,:data=>#<StringIO:0x007f96f97a68f8>,:key=>"key1")

=> #<AWS::S3::S3Object:foo_bar/key1>
[47] pry(main)>

Combine step fails in multipart upload (from Java)

I'm getting errors when uploading using multipart from java, using the low level API.

To rule out the possibility that our own code messes things up, I made a small project based on Amazon's low-level API (as described in http://docs.aws.amazon.com/AmazonS3/latest/dev/llJavaUploadFile.html) and just added an option that allows me to set the endpoint right after the client creation:

AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
s3Client.setEndpoint("http://fakes3vm:4567");
[...the rest is just the same]

I upload several files to S3 without problems, but the same code fails uploading to fakes3.

I declared fakes3vm in /etc/hosts, and subdomains for the buckets too (I tried several things; not sure that last part is actually needed). I'm running fakes3 in an Ubuntu 14 VM in VirtualBox, installed with gem (version is 0.2.1).

The requests are getting there, but the last one, which should trigger combining the parts, fails. Here is the output from fakes3:

Juans-MBP.biomatters.com - - [26/Mar/2015:16:40:32 NZDT] "POST /someKeyName?uploads HTTP/1.1" 200 251
- -> /someKeyName?uploads
Juans-MBP.biomatters.com - - [26/Mar/2015:16:40:32 NZDT] "PUT /someKeyName?uploadId=fa0a9a679fd15d411cc62b869791f03f&partNumber=1 HTTP/1.1" 200 0
- -> /someKeyName?uploadId=fa0a9a679fd15d411cc62b869791f03f&partNumber=1

[...parts 2-8 without problem]

Juans-MBP.biomatters.com - - [26/Mar/2015:16:40:36 NZDT] "PUT /someKeyName?uploadId=fa0a9a679fd15d411cc62b869791f03f&partNumber=9 HTTP/1.1" 200 0
- -> /someKeyName?uploadId=fa0a9a679fd15d411cc62b869791f03f&partNumber=9
[2015-03-26 16:40:36] ERROR NoMethodError: undefined method `Error' for #<FakeS3::FileStore:0x00000001bc7e50>
    /var/lib/gems/1.9.1/gems/fakes3-0.2.1/lib/fakes3/file_store.rb:231:in `block in combine_object_parts'
    /var/lib/gems/1.9.1/gems/fakes3-0.2.1/lib/fakes3/file_store.rb:224:in `each'
    /var/lib/gems/1.9.1/gems/fakes3-0.2.1/lib/fakes3/file_store.rb:224:in `combine_object_parts'
    /var/lib/gems/1.9.1/gems/fakes3-0.2.1/lib/fakes3/server.rb:250:in `do_POST'
    /usr/lib/ruby/1.9.1/webrick/httpservlet/abstract.rb:106:in `service'
    /usr/lib/ruby/1.9.1/webrick/httpserver.rb:138:in `service'
    /usr/lib/ruby/1.9.1/webrick/httpserver.rb:94:in `run'
    /usr/lib/ruby/1.9.1/webrick/server.rb:191:in `block in start_thread'

[...that error is repeated 4 times and then]

Juans-MBP.biomatters.com - - [26/Mar/2015:16:40:38 NZDT] "POST /someKeyName?uploadId=fa0a9a679fd15d411cc62b869791f03f HTTP/1.1" 500 375
- -> /someKeyName?uploadId=fa0a9a679fd15d411cc62b869791f03f
[2015-03-26 16:40:39] WARN  Could not determine content-length of response body. Set content-length of the response or set Response#chunked = true
Juans-MBP.biomatters.com - - [26/Mar/2015:16:40:39 NZDT] "DELETE /someKeyName?uploadId=fa0a9a679fd15d411cc62b869791f03f HTTP/1.1" 204 0
- -> /someKeyName?uploadId=fa0a9a679fd15d411cc62b869791f03f

I don't get much output from Java with that code, but the exception is caught and it tries to abort the upload. The abort seems to fail too, as I see all the parts in the bucket when I use s3cmd:

Juans-MBP:~ juanottonello$ s3cmd ls s3://mybucket
2015-03-26 03:40  15728640   s3://mybucket/fa0a9a679fd15d411cc62b869791f03f_someKeyName_part1
2015-03-26 03:40  15728640   s3://mybucket/fa0a9a679fd15d411cc62b869791f03f_someKeyName_part2
2015-03-26 03:40  15728640   s3://mybucket/fa0a9a679fd15d411cc62b869791f03f_someKeyName_part3
2015-03-26 03:40  15728640   s3://mybucket/fa0a9a679fd15d411cc62b869791f03f_someKeyName_part4
2015-03-26 03:40  15728640   s3://mybucket/fa0a9a679fd15d411cc62b869791f03f_someKeyName_part5
2015-03-26 03:40  15728640   s3://mybucket/fa0a9a679fd15d411cc62b869791f03f_someKeyName_part6
2015-03-26 03:40  15728640   s3://mybucket/fa0a9a679fd15d411cc62b869791f03f_someKeyName_part7
2015-03-26 03:40  15728640   s3://mybucket/fa0a9a679fd15d411cc62b869791f03f_someKeyName_part8
2015-03-26 03:40   9736092   s3://mybucket/fa0a9a679fd15d411cc62b869791f03f_someKeyName_part9

The same fakes3 install seems to work well with s3cmd: when I upload a "big file" with it, it seems to use multipart upload.

Juans-MBP:~ juanottonello$ s3cmd put ~/Downloads/Boot2Docker-1.4.1.pkg s3://mybucket
WARNING: Module python-magic is not available. Guessing MIME types based on file extensions.
/Users/juanottonello/Downloads/Boot2Docker-1.4.1.pkg -> s3://mybucket/Boot2Docker-1.4.1.pkg  [part 1 of 9, 15MB]
 15728640 of 15728640   100% in    1s    12.55 MB/s  done
/Users/juanottonello/Downloads/Boot2Docker-1.4.1.pkg -> s3://mybucket/Boot2Docker-1.4.1.pkg  [part 2 of 9, 15MB]
 15728640 of 15728640   100% in    0s    19.23 MB/s  done
/Users/juanottonello/Downloads/Boot2Docker-1.4.1.pkg -> s3://mybucket/Boot2Docker-1.4.1.pkg  [part 3 of 9, 15MB]
 15728640 of 15728640   100% in    0s    20.87 MB/s  done
/Users/juanottonello/Downloads/Boot2Docker-1.4.1.pkg -> s3://mybucket/Boot2Docker-1.4.1.pkg  [part 4 of 9, 15MB]
 15728640 of 15728640   100% in    0s    22.57 MB/s  done
/Users/juanottonello/Downloads/Boot2Docker-1.4.1.pkg -> s3://mybucket/Boot2Docker-1.4.1.pkg  [part 5 of 9, 15MB]
 15728640 of 15728640   100% in    0s    18.86 MB/s  done
/Users/juanottonello/Downloads/Boot2Docker-1.4.1.pkg -> s3://mybucket/Boot2Docker-1.4.1.pkg  [part 6 of 9, 15MB]
 15728640 of 15728640   100% in    0s    20.49 MB/s  done
/Users/juanottonello/Downloads/Boot2Docker-1.4.1.pkg -> s3://mybucket/Boot2Docker-1.4.1.pkg  [part 7 of 9, 15MB]
 15728640 of 15728640   100% in    0s    24.61 MB/s  done
/Users/juanottonello/Downloads/Boot2Docker-1.4.1.pkg -> s3://mybucket/Boot2Docker-1.4.1.pkg  [part 8 of 9, 15MB]
 15728640 of 15728640   100% in    0s    17.99 MB/s  done
/Users/juanottonello/Downloads/Boot2Docker-1.4.1.pkg -> s3://mybucket/Boot2Docker-1.4.1.pkg  [part 9 of 9, 9MB]
 9736092 of 9736092   100% in    0s    18.04 MB/s  done

I tried changing the part size to 15MB in the Java code, since s3cmd is using that size successfully, but without any luck...

Any ideas on this? I can share my Java project if that makes things easier, but it's just a minimal web interface for the AWS example.

Support for folders

On Amazon S3 I can create folders, for example with CloudBerry Explorer (http://www.cloudberrylab.com/free-amazon-s3-explorer-cloudfront-IAM.aspx).
When I try to do the same on fake-s3 I only get an empty file.

Is there support for this in fake-s3?
The main issue I have is that the library I am using tries to recursively traverse the folder structure and I get the error message that there is no .fakes3_metadataFFF directory for the parent folder.
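
For what it's worth, S3 itself has no real folders: tools like CloudBerry emulate them with zero-byte keys whose names end in a slash, which matches the empty file observed here. A hedged boto sketch of that convention (bucket and key names are illustrative):

from boto.s3.connection import S3Connection, OrdinaryCallingFormat

conn = S3Connection('', '', is_secure=False, port=4567, host='localhost',
                    calling_format=OrdinaryCallingFormat())
bucket = conn.create_bucket('folder_demo')

# "Create a folder": a zero-byte object whose key ends in '/'.
bucket.new_key('photos/').set_contents_from_string('')

# Objects "inside" the folder are just keys sharing the prefix.
bucket.new_key('photos/cat.jpg').set_contents_from_string('not really a jpeg')
for key in bucket.list(prefix='photos/'):
    print key.name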

404 when actually trying to get the file

Hello! I find this gem pretty useful, so I decided to use it. I found a problem though: when I generate a URL to get the actual file, I get a 404. In Chrome's inspector I see this: http://d.pr/i/49oW and in the console:


undefined method `[]' for #<Psych::Nodes::Mapping:0x007fdf618d2a28>
localhost - - [20/Apr/2012:19:55:24 CEST] "GET /bucketname/me.jpg?AWSAccessKeyId=123&Expires=2147382000&Signature=fbJHDSrEGWobYHkJTUmmqg2diGQ HTTP/1.1" 404 0
- -> /bucketname/me.jpg?AWSAccessKeyId=123&Expires=2147382000&Signature=fbJHDSrEGWobYHkJTUmmqg2diGQ

I tried to track it down myself, but couldn't manage to. Apart from this little bug, this gem is awesome; great work!

Missing license and homepage in gemspec

The gemspec is missing a homepage and license, causing a build warning:

WARNING:  licenses is empty.  Use a license abbreviation from:
  http://opensource.org/licenses/alphabetical
WARNING:  no homepage specified

rake test_server fails

I am trying to run the fakes3 tests. When I tried running "rake test_server", it fails with "no such file to load". Any idea what is going wrong here?

arul:fake-s3 arul$ rake test_server
(in /Users/arul/github/local/fake-s3)
rake aborted!
no such file to load -- bundler
/Users/arul/github/local/fake-s3/Rakefile:1:in `require'
(See full trace by running task with --trace)
arul:fake-s3 arul$ ls
Gemfile  Gemfile.lock  README.md  Rakefile  bin  fakes3.gemspec  lib  test
arul:fake-s3 arul$ rake --trace test_server
(in /Users/arul/github/local/fake-s3)
rake aborted!
no such file to load -- bundler
/Users/arul/github/local/fake-s3/Rakefile:1:in `require'
/Users/arul/github/local/fake-s3/Rakefile:1
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/rake.rb:2383:in `load'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/rake.rb:2383:in `raw_load_rakefile'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/rake.rb:2017:in `load_rakefile'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/rake.rb:2068:in `standard_exception_handling'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/rake.rb:2016:in `load_rakefile'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/rake.rb:2000:in `run'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/rake.rb:2068:in `standard_exception_handling'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/rake.rb:1998:in `run'
/usr/bin/rake:31

CORS

Hello,
Thanks for creating this project! Is there a way to configure fake-s3 so that it allows requests from all IP addresses? I noticed in server.rb: response['Access-Control-Allow-Origin'] = '*'

My Javascript call to fake-s3 results in Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://172.17.42.1:4567/mb-cosmosproject-dev/0/0/4/0/0040b97f9310741fde6a634fa896d186/artifacts/pe_dump.json?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=foo%2F20150122%2Flocal-s3%2Fs3%2Faws4_request&X-Amz-Date=20150122T210522Z&X-Amz-Expires=3600&X-Amz-Signature=8e64a5cda541015eac8d92fcb25e1882a1b357f489a272379c21cf8caafc0bd7&X-Amz-SignedHeaders=Host. This can be fixed by moving the resource to the same domain or enabling CORS.

How can I configure fake-s3 to allow all origins?

Thanks,
Mathew

Requests with unconfigured hostname

I've been trying to use fakes3 with the aws-sdk v2. The first use case I had is simply creating a bucket and then checking that the bucket exists. Creating the bucket seems to work fine: the new directory is created with the matching bucket name. However, any request to list_buckets on an instance of Aws::S3::Client always returned no results.

After looking into the code, it seems the problem occurred because the hostname I was issuing the request to (the hostname of the server) wasn't in the root_host array in FakeS3::Servlet. I wasn't aware that, in order to issue a GET request, the fakes3 server needed to be running configured (-H) with the hostname that the client is using to address the service.

This behavior seems nonintuitive, especially since the POST that creates the buckets works. Can some documentation be added to explain how to properly configure the server/client with hostnames other than the defaults?
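
In the meantime, the combination that appears to work is to start the server with -H set to the hostname the client will use, and to point the client at that same host (hedged sketch in Python boto; the hostname is illustrative and must resolve to the machine running fakes3):

# Server side (shell):
#   fakes3 -r /tmp/fakes3_root -p 4567 -H s3.example.test

from boto.s3.connection import S3Connection, OrdinaryCallingFormat

# The client addresses the exact hostname that fakes3 was started with via -H.
conn = S3Connection('', '', is_secure=False, port=4567, host='s3.example.test',
                    calling_format=OrdinaryCallingFormat())
print conn.get_all_buckets()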

DeleteObjects

Hi everyone,

I have a problem to remove all objects for one bucket.
I'm using aws-sdk-js for Node.js and I call 'deleteObjects' to remove them, but I am not getting any error from fake-s3 and the objects are not deleted.
What am I missing?

Maybe this helps: I see the following in the fake-s3 log:

localhost - - [21/May/2014:18:02:08 ART] "POST /fakeBucket?delete HTTP/1.1" 200 0
- -> /fakeBucket?delete

Thanks in advance,
PoLa.
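
For comparison, a minimal multi-object delete from Python boto (hedged sketch; under the hood it issues the same POST /bucket?delete request seen in the log above):

from boto.s3.connection import S3Connection, OrdinaryCallingFormat

conn = S3Connection('', '', is_secure=False, port=4567, host='localhost',
                    calling_format=OrdinaryCallingFormat())
bucket = conn.create_bucket('fakeBucket')
bucket.new_key('a.txt').set_contents_from_string('a')
bucket.new_key('b.txt').set_contents_from_string('b')

# Multi-object delete: a single POST /fakeBucket?delete request.
result = bucket.delete_keys([k.name for k in bucket.list()])
print result.deleted, result.errors

# Anything still listed here was not actually removed.
print list(bucket.list())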

Access Key Issue

I am unable to find the access key. I have used fake details and also tried your sample .s3cfg, but still no success. Please let me know where to define access keys.

Error

ERROR: S3 error: 403 (InvalidAccessKeyId): The AWS Access Key Id you provided does not exist in our records.

Address s3-tests failures

Presently fake-s3 incompletely implements the S3 API. Ceph s3-tests give good coverage and currently fail with:

Ran 305 tests in 32.599s

FAILED (SKIP=4, errors=54, failures=134)

aws-sdk

Is there any way to support aws-sdk? With the SDK we do not have access to establish_connection to specify the host and port :(

incompatibility with boto due to format of Etag header parameter

boto evidently checks the header portion of the response after uploading a file to compare it against its local MD5 hash.

When it does this, it assumes the ETag header matches the XML representation of the ETag, which means it expects the hash value to be wrapped in double quotes.

In version 0.1.5 of fakes3, as well as on HEAD, you folks are correctly constructing the XML (i.e. with quotes); however, when assigning the header values you omit the double quotes, which results in upload failures when attempting to communicate with Python's boto library.

The resolution is simple:

line 91:
response['Etag'] = "\"#{real_obj.md5}\""

line 138:
response['Etag'] = "\"#{real_obj.md5}\""

which mimics what you're already doing in lib/fakes3/xml_adapter.rb
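
To make the mismatch concrete (illustrative Python; the hash is the one reported in the earlier ETag issue above):

md5 = 'c98cbfeb6a347a47eb8e96cfb4c4b890'

header_etag_from_fakes3 = md5            # 0.1.5 puts the bare hash in the header
etag_expected_by_boto = '"%s"' % md5     # boto compares against the quoted form

# This inequality is what surfaces as "ETag from S3 did not match computed MD5".
assert header_etag_from_fakes3 != etag_expected_by_boto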

Failing to delete bucket due to open object file handler

While running some tests on a Windows machine, I kept getting Permission Denied errors when trying to delete a bucket shortly after (1) getting an object and then (2) deleting the object. After some searching, it looks like it is related to the NTFS pending-deletion state (see this Stack Overflow post). After looking through the code (I'm no Ruby expert, so feel free to correct me if I'm wrong), it looks like the issue might originate from line 78 in file_store.rb, where the file descriptor for the metadata file isn't opened in a block.

Create bucket on startup

We don't need to test the creation of a bucket, just files being uploaded/deleted/downloaded. As we expect the S3 bucket to already exist, it would be handy to have an option on the CLI to create a bucket if it doesn't already exist.
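
Until such a flag exists, a small bootstrap step after the server starts can do the same job (hedged sketch; the bucket names and port are illustrative):

from boto.s3.connection import S3Connection, OrdinaryCallingFormat

REQUIRED_BUCKETS = ['uploads', 'downloads']   # buckets the tests expect to exist

conn = S3Connection('', '', is_secure=False, port=4567, host='localhost',
                    calling_format=OrdinaryCallingFormat())
for name in REQUIRED_BUCKETS:
    # lookup returns None (instead of raising) when the bucket is missing.
    if conn.lookup(name) is None:
        conn.create_bucket(name)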

copy_object fails if dst_bucket does not exist

I am not familiar with Ruby, but here is a proposed fix. (I do not know how to submit patches; I tried to write a test for it, but I really do not know Ruby.)

diff --git a/lib/fakes3/file_store.rb b/lib/fakes3/file_store.rb
index 10546a8..1139eb2 100644
--- a/lib/fakes3/file_store.rb
+++ b/lib/fakes3/file_store.rb
@@ -135,6 +135,10 @@ module FakeS3
       src_bucket = self.get_bucket(src_bucket_name)
       dst_bucket = self.get_bucket(dst_bucket_name)

+      if dst_bucket.nil?
+        dst_bucket = self.create_bucket(dst_bucket_name)
+      end
+
       obj = S3Object.new
       obj.name = dst_name
       obj.md5 = src_metadata[:md5]

undefined method `md5' for nil:NilClass

When uploading any file, I get this exception:

 ERROR NoMethodError: undefined method `md5' for nil:NilClass

The exception is raised in server.rb:

response.header['ETag'] = "\"#{real_obj.md5}\""

So real_obj is nil. Further, the request times out. I don't know if the timeout causes real_obj to be nil or vice versa. Oddly, the file is successfully persisted and can be downloaded. This occurs for the master branch and the current version on rubygems.org (0.1.5.2).

The full shell session looks like this:

bundle exec fakes3 --root ~/fakes3 --port 10001

Loading FakeS3 with /Users/username/fakes3 on port 10001 with hostname s3.amazonaws.com
[2014-09-03 15:51:09] INFO  WEBrick 1.3.1
[2014-09-03 15:51:09] INFO  ruby 2.1.0 (2013-12-25) [x86_64-darwin12.0]
[2014-09-03 15:51:09] INFO  WEBrick::HTTPServer#start: pid=35502 port=10001
WEBrick::HTTPStatus::RequestTimeout
/Users/username/.rvm/rubies/ruby-2.1.0/lib/ruby/2.1.0/webrick/httprequest.rb:525:in `rescue in _read_data'
/Users/username/.rvm/rubies/ruby-2.1.0/lib/ruby/2.1.0/webrick/httprequest.rb:518:in `_read_data'
/Users/username/.rvm/rubies/ruby-2.1.0/lib/ruby/2.1.0/webrick/httprequest.rb:534:in `read_data'
/Users/username/.rvm/rubies/ruby-2.1.0/lib/ruby/2.1.0/webrick/httprequest.rb:477:in `read_body'
/Users/username/.rvm/rubies/ruby-2.1.0/lib/ruby/2.1.0/webrick/httprequest.rb:255:in `body'
/Users/username/.rvm/gems/ruby-2.1.0/bundler/gems/fake-s3-67f5a3517970/lib/fakes3/file_store.rb:175:in `block in store_object'
/Users/username/.rvm/gems/ruby-2.1.0/bundler/gems/fake-s3-67f5a3517970/lib/fakes3/file_store.rb:174:in `open'
/Users/username/.rvm/gems/ruby-2.1.0/bundler/gems/fake-s3-67f5a3517970/lib/fakes3/file_store.rb:174:in `store_object'
/Users/username/.rvm/gems/ruby-2.1.0/bundler/gems/fake-s3-67f5a3517970/lib/fakes3/server.rb:150:in `do_PUT'
/Users/username/.rvm/rubies/ruby-2.1.0/lib/ruby/2.1.0/webrick/httpservlet/abstract.rb:106:in `service'
/Users/username/.rvm/rubies/ruby-2.1.0/lib/ruby/2.1.0/webrick/httpserver.rb:138:in `service'
/Users/username/.rvm/rubies/ruby-2.1.0/lib/ruby/2.1.0/webrick/httpserver.rb:94:in `run'
/Users/username/.rvm/rubies/ruby-2.1.0/lib/ruby/2.1.0/webrick/server.rb:295:in `block in start_thread'
[2014-09-03 15:51:57] ERROR NoMethodError: undefined method `md5' for nil:NilClass
  /Users/username/.rvm/gems/ruby-2.1.0/bundler/gems/fake-s3-67f5a3517970/lib/fakes3/server.rb:151:in `do_PUT'
  /Users/username/.rvm/rubies/ruby-2.1.0/lib/ruby/2.1.0/webrick/httpservlet/abstract.rb:106:in `service'
  /Users/username/.rvm/rubies/ruby-2.1.0/lib/ruby/2.1.0/webrick/httpserver.rb:138:in `service'
  /Users/username/.rvm/rubies/ruby-2.1.0/lib/ruby/2.1.0/webrick/httpserver.rb:94:in `run'
  /Users/username/.rvm/rubies/ruby-2.1.0/lib/ruby/2.1.0/webrick/server.rb:295:in `block in start_thread'
localhost - - [03/Sep/2014:15:51:27 EDT] "PUT /my_bucket/c80f5dcbaf70116a106cad087acc008e64e42d2f9cf4479e1ab77f4596cabe5ff5a8cb13ccd68f3fa85c5944c8eb63329206d010ef8fba3112f21bdd28167072 HTTP/1.1" 500 320
- -> /my_bucket/c80f5dcbaf70116a106cad087acc008e64e42d2f9cf4479e1ab77f4596cabe5ff5a8cb13ccd68f3fa85c5944c8eb63329206d010ef8fba3112f21bdd28167072
localhost - - [03/Sep/2014:15:51:58 EDT] "PUT /my_bucket/c80f5dcbaf70116a106cad087acc008e64e42d2f9cf4479e1ab77f4596cabe5ff5a8cb13ccd68f3fa85c5944c8eb63329206d010ef8fba3112f21bdd28167072 HTTP/1.1" 200 0
- -> /my_bucket/c80f5dcbaf70116a106cad087acc008e64e42d2f9cf4479e1ab77f4596cabe5ff5a8cb13ccd68f3fa85c5944c8eb63329206d010ef8fba3112f21bdd28167072

aws java sdk wants uppercase headers (ETag)

per
aws/aws-sdk-java#436

https://github.com/aws/aws-sdk-java/blob/master/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/Headers.java#L34 and https://github.com/aws/aws-sdk-java/blob/master/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/model/ObjectMetadata.java#L625

the aws java sdk expects headers to be in a particular case (not lowercase), even though headers are supposed to be case insensitive, and webrick makes all header keys lowercase.

I can't see how to get WEBrick to not lowercase all headers, and I'm unfamiliar with Ruby generally, so I'm going to add a proxy for my own use that fixes the header case, but it's pretty lame.

Uploading audio files causes XMLParseError

I'm trying to upload an audio file to S3 from my machine. I'm using this fake S3 service so as to avoid using AWS resources whilst developing.

With the code pasted below I can successfully upload text, pdf and image files. However, when it comes to audio files (I've tried mp3 and mp4) it returns the error:

{ [XMLParserError: Unexpected close tag
Line: 11
Column: 9
Char: >]
  message: 'Unexpected close tag\nLine: 11\nColumn: 9\nChar: >',
  code: 'XMLParserError',
  time: Mon May 04 2015 16:16:16 GMT+0100 (BST),
  statusCode: 400,
  retryable: false,
  retryDelay: 30 }

Code used for the uploading:

require('spurious-aws-sdk-helper')()

const filename = process.argv[2];
const fs = require('fs');
const AWS = require("aws-sdk");

AWS.config.update({
  accessKeyId: "access",
  secretAccessKey: "secret",
  region: "eu-west-1",
  sslEnabled: false,
  s3ForcePathStyle: true
})

const s3 = new AWS.S3();

var params = {Bucket: 'bucket', Key: 'key', Body:     fs.createReadStream(filename)};
s3.upload(params, function(err, data) {
  console.log(err, data);
});

I've tried uploading the file in different ways, such as a buffer, a stream and a blob. It always returns this error message.

Rack adapter

Love this idea, will make testing s3 stuff much easier. Would be much better as a Rack app so I could just throw it in my local passenger config and wouldn't have to run it by hand each time.
