
s3's Introduction

<img src="https://travis-ci.org/qoobaa/s3.svg?branch=master" alt="Build Status" />

S3 library provides access to Amazon’s Simple Storage Service.

It supports all S3 regions through the REST API.

Installation

gem install s3

Usage

Initialize the service

require "s3"
service = S3::Service.new(:access_key_id => "...",
                          :secret_access_key => "...")
#=> #<S3::Service:...>

List buckets

service.buckets
#=> [#<S3::Bucket:first-bucket>,
#    #<S3::Bucket:second-bucket>]

Find bucket

first_bucket = service.buckets.find("first-bucket")
#=> #<S3::Bucket:first-bucket>

or

first_bucket = service.bucket("first-bucket")
#=> #<S3::Bucket:first-bucket>

service.bucket("first-bucket") does not check whether a bucket with the name "first-bucket" exists, but it also does not issue any HTTP requests. Thus, the second example is much faster than buckets.find. You can use first_bucket.exists? to check whether the bucket exists after calling service.bucket.

Create bucket

new_bucket = service.buckets.build("newbucketname")
new_bucket.save(:location => :eu)

Remember that EU bucket names can't include "_" (underscore).

Please refer to: docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html for more information about bucket name restrictions.

List objects in a bucket

first_bucket.objects
#=> [#<S3::Object:/first-bucket/lenna.png>,
#    #<S3::Object:/first-bucket/lenna_mini.png>]

Find object in a bucket

object = first_bucket.objects.find("lenna.png")
#=> #<S3::Object:/first-bucket/lenna.png>

Access object metadata (cached from find)

object.content_type
#=> "image/png"

Access object content (downloads the object)

object.content
#=> "\x89PNG\r\n\x1A\n\x00\x00\x00\rIHDR\x00..."

Delete an object

object.destroy
#=> true

Create an object

new_object = bucket.objects.build("bender.png")
#=> #<S3::Object:/synergy-staging/bender.png>

new_object.content = open("bender.png")

new_object.acl = :public_read

new_object.save
#=> true

Please note that new objects are created with “private” ACL by default.

Request access to a private object

Returns a temporary URL to the object that expires at the given timestamp. The expiry defaults to one hour.

new_object.temporary_url(Time.now + 1800)
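Calling it with no argument uses the one-hour default:

new_object.temporary_url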

Fetch ACL

object = bucket.objects.find('lenna.png')
object.request_acl # or bucket.request_acl

This returns a hash of all users/groups and their permissions.
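For instance, assuming the hash maps each grantee to its permissions (as described above), it can be inspected like this:

acl = object.request_acl
acl.each do |grantee, permissions|
  puts "#{grantee}: #{permissions.inspect}"
end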

Modify ACL

object = bucket.objects.find("lenna.png")
object.copy(:key => "lenna.png", :bucket => bucket, :acl => :public_read)

Upload files directly to Amazon S3

Rails 3

Check the example in this gist, which describes how to use a simple form element to upload files directly to S3.

See also

Copyright © 2009 Jakub Kuźma, Mirosław Boruta. See LICENSE for details.


s3's Issues

max_keys option of objects.find_all() is ignored

The fix for Issue #25 has introduced a bug that ignores max_keys.

Currently, find_all() will return all objects in the bucket, obeying prefix and marker.

This happens because the truncated flag is set as soon as the limit is reached, either by the default limit of 1000 or what you send in the max_keys option.

Further, performance is hindered, because it will only request up to max_keys at a time, so a max_keys of 100 will cause 10 requests instead of 1 when listing a bucket that returns 1000 keys.

The fix for #25 needs to be adjusted to allow for max_keys set as an option, and stop looping when the limit has been reached.
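A minimal sketch of the adjusted loop; the helpers used here (list_bucket, truncated?) are illustrative stand-ins, not the gem's actual internals:

# Hypothetical sketch: page through the listing, but stop as soon as either
# S3 reports no more keys or the caller's max_keys limit has been reached.
def find_all(options = {})
  max_keys = options[:max_keys]
  objects  = []
  marker   = options[:marker]

  loop do
    page = list_bucket(options.merge(:marker => marker)) # one ListObjects request
    objects.concat(page)
    break if page.empty? || !truncated?                  # nothing left on the server
    break if max_keys && objects.size >= max_keys        # caller's limit reached
    marker = page.last.key
  end

  max_keys ? objects.first(max_keys) : objects
end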

Cannot generate tempfile - Paperclip 2.3.5 / Rails 3.0.3

There seems to be an issue with the to_file method in extras/paperclip.rb when using Paperclip 2.3.5 with Rails 3.0.3.

Any attempt produces an error like this (example):
cannot generate tempfile `/var/folders/Jv/JvqkFZNhEN4lP+EmlBOA-U+++TI/-Tmp-/photos/47/original/1287551730.jpeg20101119-49390-13hl3mn-9'

Just a note, the following commit is necessary to get Paperclip working on Rails 3.0.3:
lightyear/paperclip@d7f0057

support streaming?

Any chance that streaming support will be added? Sending objects that are 2-3 GB in size takes up to 8 GB of memory.

The crux is this line in objects.rb:
body = content.is_a?(IO) ? content.read : content

aws-s3 and right_aws both do this, though having gone through the code I know it is non-trivial.
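One possible direction, sketched here with plain Net::HTTP rather than the gem's internals: hand the IO to Net::HTTP#body_stream so the body is read in chunks instead of slurped via content.read. The URL and headers below are placeholders; request signing is omitted.

require "net/http"
require "uri"

# Sketch only: stream a large file as the PUT body instead of reading it into memory.
uri  = URI.parse("https://bucket-name.s3.amazonaws.com/huge-object.bin")
file = File.open("huge-object.bin", "rb")

request = Net::HTTP::Put.new(uri.request_uri) # auth/signature headers omitted in this sketch
request.body_stream = file                    # Net::HTTP reads the IO in chunks
request["Content-Length"] = file.size.to_s

Net::HTTP.start(uri.host, uri.port, :use_ssl => true) do |http|
  http.request(request)
end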

Navigating sub-directories/buckets?

Once you dig into a bucket and retrieve the object contents, you just get a giant list of everything. When you're trying to read/list contents on a per-directory basis this proves difficult.

Any way or future plan to allow navigating through sub-directories (or buckets, if that's what they really are)?
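There is no documented directory API, but the listing parameters mentioned in the command-line issue further down (prefix, delimiter) suggest something along these lines may already work; the option name is an assumption:

# Assumed usage: list only keys under the "photos/2010/" pseudo-directory.
first_bucket.objects.find_all(:prefix => "photos/2010/")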

Copying a file between two buckets results in 500 internal errors

Hello, I've been trying to use the copy function to copy a file between two buckets, but keep getting a 500 internal error message from Amazon (been trying it over the past few hours in case it was a temporary outage of some kind). Here is a part of the debug code:

opening connection to roomorama-staging.s3.amazonaws.com...
opened
<- "PUT /bucketexplorerbucketdefaults_roomorama.xml? HTTP/1.1\r\nX-Amz-Copy-Source: roomorama/bucketexplorerbucketdefaults_roomorama.xml\r\nContent-Md5: 1B2M2Y8AsgTpgAmY7PhCfg==\r\nAccept: /\r\nContent-Type: application/octet-stream\r\nX-Amz-Acl: public-read\r\nAuthorization: XXXXXXXXXXXXXXXXXXXXXXX (masked) \r\nDate: Tue, 30 Mar 2010 10:27:00 GMT\r\nContent-Length: 0\r\nX-Amz-Metadata-Directive: REPLACE\r\nHost: roomorama-staging.s3.amazonaws.com\r\n\r\n"
<- ""
-> "HTTP/1.1 500 Internal Server Error\r\n"
-> "x-amz-request-id: 74E2940B94984830\r\n"
-> "x-amz-id-2: MJd3fMoA1HeNFs8Gwqz77tQ8vyN6zwNKGqWHjA8aukbidfpY7fARk8jYKLhKdWAK\r\n"
-> "Content-Type: application/xml\r\n"
-> "Transfer-Encoding: chunked\r\n"
-> "Date: Tue, 30 Mar 2010 10:26:59 GMT\r\n"
-> "nnCoection: close\r\n"
-> "Server: AmazonS3\r\n"
-> "\r\n"
-> "e7\r\n"
reading 231 bytes...
-> "InternalErrorWe encountered an internal error. Please try again.74E2940B94984830MJd3fMoA1HeNFs8Gwqz77tQ8vyN6zwNKGqWHjA8aukbidfpY7fARk8jYKLhKdWAK"
read 231 bytes
reading 2 bytes...
-> "\r\n"
read 2 bytes
-> "0\r\n"
-> "\r\n"
Conn keep-alive

undefined method `closed?' for nil:NilClass on find object

Last night one of our batch processing systems pulled in 0.3.15 for the first time and failed while downloading a file from s3 with the following stack trace.

NoMethodError: undefined method `closed?' for nil:NilClass
        from /usr/lib/ruby/1.9.1/net/http.rb:2457:in `stream_check'
        from /usr/lib/ruby/1.9.1/net/http.rb:2377:in `read_body'
        from /usr/lib/ruby/1.9.1/net/http.rb:2404:in `body'
        from /home/hadoop/resources/gems/ruby/1.9.1/gems/s3-0.3.15/lib/s3/object.rb:268:in `parse_headers'
        from /home/hadoop/resources/gems/ruby/1.9.1/gems/s3-0.3.15/lib/s3/object.rb:197:in `object_headers'
        from /home/hadoop/resources/gems/ruby/1.9.1/gems/s3-0.3.15/lib/s3/object.rb:61:in `retrieve'
        from /home/hadoop/resources/gems/ruby/1.9.1/gems/s3-0.3.15/lib/s3/objects_extension.rb:12:in `find_first'

For us, this is 100% reproducible. Here is the code I'm using:

require 's3'
service = S3::Service.new(:access_key_id => "your_aws_access_id",
                          :secret_access_key => "your_secret_key")
bucket = service.buckets.find("bucket")
object = bucket.objects.find("some/object/key")

here's my Gemfile:

source 'http://rubygems.org'
gem 's3', '0.3.15'
GEM
  remote: http://rubygems.org/
  specs:
    proxies (0.2.1)
    s3 (0.3.15)
      proxies (~> 0.2.0)

PLATFORMS
  ruby

DEPENDENCIES
  s3 (= 0.3.15)

NameError: uninitialized constant S3::Service::Proxy

Hi.

As I ran into this:
#34
I tried to use s3 via the gem. I added s3 to my Gemfile, required 's3', and now I get
NameError: uninitialized constant S3::Service::Proxy
when trying to use it.

Do I have to require something more manually?

Thanks and regards, Phil

using rails (2.3.11), s3 (0.3.11), proxies (0.2.1)

Invalid bucket name

I'm not sure how it got on our account, but I've just found an invalid bucket on it. It's called something like ._bucket-name. I managed to delete the offending bucket with aws/s3; S3Fox wouldn't even delete it. But I thought I'd mention it, as I get ArgumentError: Invalid bucket name: ._bucket-name when doing service.buckets.

[Bug + Fix] Command-line bucket show ignores arguments (e.g. prefix, delimiter, etc)

In bin/s3, this method:

def show_bucket(service, name, options = {})
  service.buckets.find(name).objects.find_all.each do |object|
    puts "#{name}/#{object.key}"
  end
end

Should instead be this (pass options to find_all):

def show_bucket(service, name, options = {})
  service.buckets.find(name).objects.find_all(options).each do |object|
    puts "#{name}/#{object.key}"
  end
end

Thanks for the good library.

Object temporary_url returns invalid URL

I found a bug in object.rb: temporary_url

I changed the code to this, which works:

def temporary_url(expires_at = Time.now + 3600)
  ckey = CGI.escape(key.to_s)
  signature = Signature.generate_temporary_url_signature(:bucket => name,
                                                         :resource => ckey,
                                                         :expires_on => expires_at.to_i,
                                                         :secret_access_key => secret_access_key)

  "#{protocol}#{host}/#{path_prefix}#{ckey}?Signature=#{signature}&Expires=#{expires_at.to_i.to_s}"
end

The main issues were:

  • key has to be CGI escaped. Without converting to a string and passing to a new variable, it returns blank or gives an error.
  • expires_at needs to be converted to an integer when using the Time.now convention.
  • Signature is already escaped when it's returned from generate_temporary_url_signature, and expires_at is already an int, so it doesn't need to be escaped (but it does need to be converted to a string or it gives an error in the return value).
  • I've manually added back protocol, host, etc because otherwise, it doesn't know to escape the key.

Anyways, this took several hours to get everything converted correctly for it to pass Amazon's check, and I figured it might help somebody else if I passed it along.

Oh, one other thing, you have to pass AWSAccessKeyId in the URL... so you might want to add that to the return value or at least make mention of that in your documentation.

Thanks,

Brian Broderick

Getting S3::Error::SignatureDoesNotMatch on some buckets

Some of the buckets I try to access on my account are giving me:
S3::Error::SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.

I can access it via S3Fox and aws/s3. Any ideas?

Amazon supports mixed case letters for bucket names

I kept getting an exception when I had buckets with upper case letters. Here is the patch.

bucket.rb

@@ -180,7 +180,7 @@ module S3
     end

     def name_valid?(name)
-      name =~ /\A[a-z0-9][a-z0-9._-]{2,254}\Z/ and name !~ /\A#{URI::REGEXP::PATTERN::IPV4ADDR}\Z/
+      name =~ /\A[a-z0-9][a-z0-9._-]{2,254}\Z/i and name !~ /\A#{URI::REGEXP::PATTERN::IPV4ADDR}\Z/
     end
   end
 end

Any way to retrieve objects using a partial key?

Hi there

I have some scripts which create data as objects in a 'folder' on S3, where the names of the contents of the folder are not known in advance. To retrieve the data, I'm using "s3sync/s3cmd.rb list :". Is there any way to do something similar using your gem?

I can't use bucket.objects.find {|obj| obj.name =~ folder}, because there are more than 1000 objects in the bucket.

If you can point me in the right direction, I'd be happy to write this as a patch, if the gem can't do it now, and submit it back.

Thanks for this gem, I like it a lot.

David
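One hedged approach with the gem as-is: page through the listing with :prefix and :marker (option names taken from the other issues in this list rather than from documented API):

# Sketch: collect every key under a prefix, paging past the 1000-key limit.
keys   = []
marker = nil
loop do
  page = bucket.objects.find_all(:prefix => "folder/", :marker => marker)
  break if page.empty?
  keys.concat(page.map { |object| object.key })
  marker = page.last.key
end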

unable to install

gem install s3
ERROR: While executing gem ... (Gem::DependencyError)
Unable to resolve dependencies: s3 requires proxies (~> 0.2.0)

Ruby 1.8.7
Ubuntu

Wrong params order in expiring_url for paperclip

s3_paperclip.rb (to be copied as an initializer) defines expiring_url as


but in paperclip's s3.rb it is defined as

So, trying to use something like

yields a url with an expiration of 3600, not 10.

Reversing the params order in the initializer just solves the problem. It would be nice to have it reversed also in s3_paperclip.rb so it can be copied as is.

Cache Control

Doesn't look like there's any way to set arbitrary headers. I need to set the Cache-Control header for s3 objects.

Couple options:

  • add a headers accessor hash that merges in when saving the object
  • add some cache accessor (max_age, etc) that gets saved as the 'Cache-Control' header

http://blog.bigcurl.de/2008/11/amazon-s3-save-money-by-setting-cache.html

Let me know what you think. I think I can send over a patch for this functionality soon.

`@[]' is not allowed as an instance variable name

Hi!

I try to implement this tutorial: http://thewebfellas.com/blog/2010/1/31/paperclip-vs-amazon-s3-european-buckets with Ruby 1.9.2, Rails 3.1.3, Mongoid 2.2.0, mongoid-paperclip 0.0.7, s3 0.3.9

And I get this error:

NameError in PhotosController#create

`@[]' is not allowed as an instance variable name

lib/patches/paperclip.rb:33:in `block in extended'
lib/patches/paperclip.rb:32:in `instance_eval'
lib/patches/paperclip.rb:32:in `extended'

Request

Parameters:

{"utf8"=>"✓",
 "authenticity_token"=>"neUiYqS3PMWismvVWPSnqUV5NLfAiurkd2x+nslOOAY=",
 "photo"=>{"name"=>"sfs",
 "description"=>"sdd",
 "image"=>#<ActionDispatch::Http::UploadedFile:0x007fe51f274708 @original_filename="IMG_2827new12.jpg",
 @content_type="image/jpeg",
 @headers="Content-Disposition: form-data; name=\"photo[image]\"; filename=\"IMG_2827new12.jpg\"\r\nContent-Type: image/jpeg\r\n",
 @tempfile=#<File:/var/folders/q8/1cs1q0550x96dq1w1z81f8m40000gn/T/RackMultipart20111128-86583-m3qq5v>>},
 "commit"=>"Создать Photo",
 "locale"=>"ru"}

My /lib/patches/paperclip.rb is https://gist.github.com/1399954

What could it be?

Efficiently writing s3 object to file

s3 really needs a way to stream content to a file somehow. Loading .content on a large file pretty much puts everything in memory and destroys any Heroku worker that it touches (they cap at a 300 MB memory limit).

Something like this would be awesome:

s3obj = s3_bucket.objects.find("my_huge_object.mov")
s3obj.write_to_file("/tmp/my_huge_object.mov")

especially if it could avoid loading the entire content into memory at once
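Until something like write_to_file exists, one hedged workaround is to bypass the gem for the download itself and stream the GET response straight to disk with Net::HTTP; the temporary_url call is the gem's, everything else is plain stdlib:

require "net/http"
require "uri"

# Sketch: stream the object to disk in chunks instead of loading .content into memory.
uri = URI.parse(s3obj.temporary_url) # signed URL from the gem; add AWSAccessKeyId if required
Net::HTTP.start(uri.host, uri.port, :use_ssl => uri.scheme == "https") do |http|
  http.request(Net::HTTP::Get.new(uri.request_uri)) do |response|
    File.open("/tmp/my_huge_object.mov", "wb") do |file|
      response.read_body { |chunk| file.write(chunk) } # never holds the whole body
    end
  end
end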

Copy fails if the file name has two consecutive spaces

I've implemented a direct-to-S3 file upload mechanism, where the file first gets uploaded to the bucket root and then gets copied over to its final resting place using a copy command.

The copy fails if the file name has two spaces, as illustrated below -

1.9.3p484 :052 > s3_object_1 = artifact.attachment.bucket.objects.find("a  b.pdf")
 => #<S3::Object:/my-bucket/a  b.pdf> 
1.9.3p484 :053 > s3_object_1.copy(:key => "copy_of_file_1.pdf", :acl => :private)
S3::Error::SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
  from /home/user_name/.rvm/gems/ruby-1.9.3-p484/gems/s3-0.3.11/lib/s3/connection.rb:217:in `handle_response'
  from /home/user_name/.rvm/gems/ruby-1.9.3-p484/gems/s3-0.3.11/lib/s3/connection.rb:204:in `send_request'
  from /home/user_name/.rvm/gems/ruby-1.9.3-p484/gems/s3-0.3.11/lib/s3/connection.rb:89:in `request'
  from /home/user_name/.rvm/gems/ruby-1.9.3-p484/gems/s3-0.3.11/lib/s3/service.rb:74:in `service_request'
  from /home/user_name/.rvm/gems/ruby-1.9.3-p484/gems/s3-0.3.11/lib/s3/bucket.rb:171:in `bucket_request'
  from /home/user_name/.rvm/gems/ruby-1.9.3-p484/gems/s3-0.3.11/lib/s3/object.rb:160:in `copy_object'
  from /home/user_name/.rvm/gems/ruby-1.9.3-p484/gems/s3-0.3.11/lib/s3/object.rb:98:in `copy'
  from (irb):53
  from /home/user_name/.rvm/gems/ruby-1.9.3-p484/gems/railties-3.2.16/lib/rails/commands/console.rb:47:in `start'
  from /home/user_name/.rvm/gems/ruby-1.9.3-p484/gems/railties-3.2.16/lib/rails/commands/console.rb:8:in `start'
  from /home/user_name/.rvm/gems/ruby-1.9.3-p484/gems/railties-3.2.16/lib/rails/commands.rb:41:in `<top (required)>'
  from script/rails:6:in `require'
  from script/rails:6:in `<main>'
1.9.3p484 :054 > s3_object_2 = artifact.attachment.bucket.objects.find("x y.pdf")
 => #<S3::Object:/my-bucket/x y.pdf> 
1.9.3p484 :055 > s3_object_2.copy(:key => "copy_of_file_2.pdf", :acl => :private)
 => #<S3::Object:/my-bucket/copy_of_file_2.pdf>

The data setup is proper: "a  b.pdf" (note the double space) and "x y.pdf" exist in the same bucket as artifact.attachment ("artifact" is a Rails ActiveRecord object).

Can't delete buckets with more than a thousand objects

 s3.buckets.find('whatever').destroy(true) # whatever is a bucket with over 1000 objects

It seems it waits for all the objects in the bucket to be deleted, but because the default listing limit only returns 1000 objects it never deletes them all, so the s3 gem just sits there. If I delete the remaining objects with S3Fox, the s3 gem completes.

I'll try to make a patch if no one gets around to it.
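A hedged workaround until the gem loops internally: delete the objects in pages until the listing comes back empty, then destroy the bucket (this assumes bucket.objects re-issues the listing on each call):

# Sketch: drain the bucket page by page, then remove it.
bucket = s3.buckets.find("whatever")
loop do
  page = bucket.objects                  # at most the default 1000 keys per listing
  break if page.empty?
  page.each { |object| object.destroy }
end
bucket.destroy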

S3::Error::RequestTimeout: Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.

Hi.

I'm trying to upload an image to my bucket. That's my code:


require 's3'

service = S3::Service.new(access_key_id: 'xxxxx', secret_access_key: 'xxxxx')
service.buckets # returns the correct list of my buckets, so the connection is established and works OK
obj = service.buckets.first.objects.build('1.jpg') # now create an object
obj.content = open('1.jpg', 'rb', encoding: 'BINARY') # file i want to upload
obj.save # save it

result is:


/usr/lib/ruby/gems/1.9.1/gems/s3-0.3.11/lib/s3/connection.rb:217:in `handle_response': Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed. (S3::Error::RequestTimeout)
    from /usr/lib/ruby/gems/1.9.1/gems/s3-0.3.11/lib/s3/connection.rb:204:in `send_request'
    from /usr/lib/ruby/gems/1.9.1/gems/s3-0.3.11/lib/s3/connection.rb:201:in `send_request'
    from /usr/lib/ruby/gems/1.9.1/gems/s3-0.3.11/lib/s3/connection.rb:89:in `request'
    from /usr/lib/ruby/gems/1.9.1/gems/s3-0.3.11/lib/s3/service.rb:74:in `service_request'
    from /usr/lib/ruby/gems/1.9.1/gems/s3-0.3.11/lib/s3/bucket.rb:171:in `bucket_request'
    from /usr/lib/ruby/gems/1.9.1/gems/s3-0.3.11/lib/s3/object.rb:207:in `object_request'
    from /usr/lib/ruby/gems/1.9.1/gems/s3-0.3.11/lib/s3/object.rb:189:in `put_object'
    from /usr/lib/ruby/gems/1.9.1/gems/s3-0.3.11/lib/s3/object.rb:83:in `save'
    from test.rb:8:in `'

I'm following your guide and everything except upload works :( My Ruby version is 1.9.3, btw.

temporary_url method throws URI::InvalidURIError or an AWS SignatureDoesNotMatch

Hi

I'm trying to generate a temporary URL for a bucket object:

s.bucket_items[5].temporary_url

But I get the error:

URI::InvalidURIError: bad URI(is not URI?): /files/file.text

It looks like the temporary_url method needs to pass the host param to Signature.generate_temporary_url_signature:

def temporary_url(expires_at = Time.now + 3600)
      uri = URI.parse(self.url)
      signature = Signature.generate_temporary_url_signature(:bucket => name,
                                                             :resource => URI.escape(key),
                                                             :expires_at => expires_at,
                                                             :secret_access_key => secret_access_key,
                                                             :host => uri.host)

      "#{url}?AWSAccessKeyId=#{self.bucket.service.access_key_id}&Expires=#{expires_at.to_i.to_s}&Signature=#{signature}"
end

However, this does not fix the problem. Any URLs generated with this will throw a SignatureDoesNotMatch from Amazon.
I've looked for AWS docs on how to generate signed expiring URLs but can't find any.

Any ideas on how to fix this issue?

Cheers

Paperclip and extras/s3_paperclip.rb

Hi!

I've followed the instructions and put s3_paperclip.rb in my config/initializers dir and it loads when the app loads (as it should).

However, when my model gets loaded I get the error 

no such file to load -- aws/s3 (You may need to install the aws-s3 gem)

indicating the monkey patch hasn't been applied. I can also confirm this by changing my model definition to manually require the file after the first request has loaded. If I restart the server, it's broken once again.

Running Rails 3.0.2 with the latest s3 and paperclip gem.

Any insights and/or tips?
Regards,
Fredrik

Rewrite console tool

The console tool was messy and needed a complete rewrite. It should be moved to a separate gem.

In production: No such file to load -- s3

I get this error, when unicorn is refreshing the gems list:

/1.9.1/gems/activesupport-3.2.9/lib/active_support/dependencies.rb:317:in `rescue in depend_on': No such file to load -- s3 (LoadError)

My Gemfile looks like this:

gem 's3'

paperclip/attachment_fu (3rd party) integration: ease it by moving files from /extra to /lib/extra

Current instructions for paperclip integration (extra/s3_paperclip.rb) begin with, "Copy the file to: ..."

Attempting to avoid copy/paste and manual merging/synchronization with upstream:

It seems that if 'extra/s3_paperclip.rb' were in the rubygems load path instead (in /lib), we could do this in an initializer instead:

require 'paperclip/storage/s3' # load code to be monkey-patched
require 's3/extra/s3_paperclip' # patch

thx!

Is it possible to use wildcards?

Here I can see that it is possible to retrieve a bunch of objects using wildcards like * or ?.

Is it possible to use this feature through the s3 gem? For example:

bucket.object( "path/to/*.txt" )

If not, would you like me to implement it?

Regards

f.

S3 and Paperclip error

I am using your S3 gem with the Paperclip gem. I have a strange bug, maybe you have seen it or can tell me how to fix it?

The error occurs only when removing an mp4 file. I can correctly delete a 5.1 MB JPG file, but I cannot delete a 1 MB mp4 file.

To use the S3 gem with Paperclip I am using this tutorial: http://thewebfellas.com/blog/2010/1/31/paperclip-vs-amazon-s3-european-buckets

When I try to remove a Paperclip object I get this error:

S3::Error::InvalidRange (The requested range is not satisfiable):
vendor/gems/s3-0.2.6/lib/s3/connection.rb:197:in `handle_response'
vendor/gems/s3-0.2.6/lib/s3/connection.rb:185:in `send_request'
vendor/gems/s3-0.2.6/lib/s3/connection.rb:72:in `request'
vendor/gems/s3-0.2.6/lib/s3/service.rb:101:in `service_request'
vendor/gems/s3-0.2.6/lib/s3/bucket.rb:184:in `bucket_request'
vendor/gems/s3-0.2.6/lib/s3/object.rb:185:in `object_request'
vendor/gems/s3-0.2.6/lib/s3/object.rb:162:in `get_object'
vendor/gems/s3-0.2.6/lib/s3/object.rb:49:in `retrieve'
vendor/gems/s3-0.2.6/lib/s3/object.rb:57:in `exists?'
lib/patches/paperclip.rb:77:in `exists?'
app/controllers/assets_controller.rb:23:in `destroy'

In my assets controller:

class AssetsController < ApplicationController
  ...

  def destroy
    @asset.destroy
    redirect_to :back
  end
end

uninitialized constant S3::Service::Proxy

uninitialized constant S3::Service::Proxy
/usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:105:in `const_missing'
/usr/lib/ruby/gems/1.8/gems/s3-0.3.7/lib/s3/service.rb:39:in `buckets'

This happens when trying to use it with Paperclip: s3 version 0.3.7, proxies version 0.1.1, paperclip 2.3.5.

Encoding problem, ruby 1.9.3

on bundle install:

Using s3 (0.3.11) from http://github.com/qoobaa/s3.git (at master)
ArgumentError: invalid byte sequence in US-ASCII
An error occurred while installing s3 (0.3.11), and Bundler cannot continue.
Make sure that gem install s3 -v '0.3.11' succeeds before bundling.

License missing from gemspec

RubyGems.org doesn't report a license for your gem. This is because it is not specified in the gemspec of your last release.

via e.g.

spec.license = 'MIT'
# or
spec.licenses = ['MIT', 'GPL-2']

Including a license in your gemspec is an easy way for rubygems.org and other tools to check how your gem is licensed. As you can imagine, scanning your repository for a LICENSE file or parsing the README, and then attempting to identify the license or licenses is much more difficult and more error prone. So, even for projects that already specify a license, including a license in your gemspec is a good practice. See, for example, how rubygems.org uses the gemspec to display the rails gem license.

There is even a License Finder gem to help companies/individuals ensure all gems they use meet their licensing needs. This tool depends on license information being available in the gemspec. This is an important enough issue that even Bundler now generates gems with a default 'MIT' license.

I hope you'll consider specifying a license in your gemspec. If not, please just close the issue with a nice message. In either case, I'll follow up. Thanks for your time!

Appendix:

If you need help choosing a license (sorry, I haven't checked your readme or looked for a license file), GitHub has created a license picker tool. Code without a license specified defaults to 'All rights reserved'-- denying others all rights to use of the code.
Here's a list of the license names I've found and their frequencies

p.s. In case you're wondering how I found you and why I made this issue, it's because I'm collecting stats on gems (I was originally looking for download data) and decided to collect license metadata,too, and make issues for gemspecs not specifying a license as a public service :). See the previous link or my blog post about this project for more information.

MIME detection...

Hi!

I use the s3 gem for uploading Rails 3.1 assets to Amazon S3 in a rake file (https://gist.github.com/1053855). The only problem I'm having is that files posted using this lib have the wrong MIME type, so later on they get served with the wrong MIME type (octet-stream), and that causes some annoying "hard to find" problems.

For now I'm working around it like this:

mimetype = `file -ib #{path}/#{f}`.gsub(/\n/,"") # if "-ib" does not work on your OS use "-Ib"
mimetype = mimetype[0,mimetype.index(';')]
mimetype = "application/javascript" if "#{path}/#{f}" =~ /\.js/
mimetype = "text/css" if "#{path}/#{f}" =~ /\.css/

Hope that this will help someone :)
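If the gem exposes a content_type writer alongside the reader shown in the README above (an assumption worth verifying), the type could also be set per object before saving:

# Sketch (assumes a content_type= writer mirroring the content_type reader):
new_object = bucket.objects.build("assets/application.js")
new_object.content      = File.open("public/assets/application.js", "rb")
new_object.content_type = "application/javascript"
new_object.save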

How do you create a new bucket?

I have read through the source code trying to figure out how to actually create a new bucket but cannot figure it out. How would I programmatically create a new bucket?
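For reference, the README's own example covers this: build the bucket through service.buckets and save it (the :location option is only needed outside the default US region):

new_bucket = service.buckets.build("newbucketname")
new_bucket.save # or new_bucket.save(:location => :eu) for an EU bucket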

"NameError: uninitialized constant REXML::Encoding::UTF_8" with ruby-1.9.3-preview1

Hi, I'm checking whether my apps are compatible with ruby-1.9.3-preview1 and reporting all problems I find. Here is one for the s3 gem.

NameError: uninitialized constant REXML::Encoding::UTF_8
    ~/.rvm/gems/ruby-1.9.3-preview1/gems/s3-0.3.8/lib/s3/parser.rb:6:in `rexml_document'
    ~/.rvm/gems/ruby-1.9.3-preview1/gems/s3-0.3.8/lib/s3/parser.rb:22:in `parse_list_bucket_result'
    ~/.rvm/gems/ruby-1.9.3-preview1/gems/s3-0.3.8/lib/s3/bucket.rb:119:in `list_bucket'
    ~/.rvm/gems/ruby-1.9.3-preview1/gems/s3-0.3.8/lib/s3/objects_extension.rb:29:in `find_all'

Problems with resources that contain spaces

I'm using your gem in combination with paperclip. Unfortunately, the temporary URL generation breaks when there are spaces in the requested file. The problem here is that the passed resource is not escaped and URI.escape is imo broken. My suggestion is to use Addressable::URI instead: https://github.com/sporkmonger/addressable.

My workaround for now can be found below. It uses Addressable::URI to escape the resource and it ensures the URL is not escaped again using URI.escape, as that breaks the resulting string again.

require "addressable/uri"

module Paperclip
  module Storage
    module S3
      def path *args
        Addressable::URI.escape(super)
      end
    end
  end
end

module S3
  class Object    
    def url
      "#{protocol}#{host}/#{path_prefix}#{key}"
    end
  end
end

Exception:

bad URI(is not URI?): /vergunningversneller-staging/files/28/1/Schermafbeelding 2010-12-22 om 14.55.45.png
 /opt/ruby-enterprise-1.8.7-2010.02/lib/ruby/1.8/uri/common.rb:436:in `split'
 /opt/ruby-enterprise-1.8.7-2010.02/lib/ruby/1.8/uri/common.rb:485:in `parse'
 s3 (0.3.7) lib/s3/signature.rb:217:in `canonicalized_resource'
 s3 (0.3.7) lib/s3/signature.rb:105:in `canonicalized_signature'
 s3 (0.3.7) lib/s3/signature.rb:60:in `generate_temporary_url_signature'
 s3 (0.3.7) lib/s3/object.rb:115:in `temporary_url'
 config/initializers/s3_paperclip.rb:63:in `expiring_url'

ACL method not found

The code:

require "s3"
service = S3::Service.new(:access_key_id => "foo",
                          :secret_access_key => "bar")

bucket = service.buckets.find("my-bucket") # bucket exists
object = bucket.objects.find("my-file.html") # object exists
puts object.request_acl

Results in:

s3.rb:9:in `<main>': undefined method `request_acl' for #<S3::Object:/my-bucket/my-file.html> (NoMethodError)

How do I list an object's permissions? Cheers.

Exception when using s3 from a rake task

Hi,
I use the library in a Rails rake task that uploads files to S3. I got the following exception:

wrong number of arguments (2 for 0)
/usr/local/rvm/gems/ruby-1.8.7-p302/gems/s3-0.3.7/lib/s3/objects_extension.rb:6:in `initialize'
/usr/local/rvm/gems/ruby-1.8.7-p302/gems/s3-0.3.7/lib/s3/objects_extension.rb:6:in `new'
/usr/local/rvm/gems/ruby-1.8.7-p302/gems/s3-0.3.7/lib/s3/objects_extension.rb:6:in `send'
/usr/local/rvm/gems/ruby-1.8.7-p302/gems/s3-0.3.7/lib/s3/objects_extension.rb:6:in `build'

The task does require the Rails environment. However, the same code executes just fine via script/runner.

Thanks.

0.3.5 Results in timeouts when transferring to S3

Reverting back to 0.3.4 fixes the issue instantly.

Here's the relevant part of the backtrace as reported in my delayed_job logs, not sure if it's all that useful though:

execution expired
/Users/kev/.rvm/rubies/ree-1.8.7-2010.02/lib/ruby/1.8/timeout.rb:60:in `rbuf_fill'
/Users/kev/.rvm/rubies/ree-1.8.7-2010.02/lib/ruby/1.8/net/protocol.rb:134:in `rbuf_fill'
/Users/kev/.rvm/rubies/ree-1.8.7-2010.02/lib/ruby/1.8/net/protocol.rb:116:in `readuntil'
/Users/kev/.rvm/rubies/ree-1.8.7-2010.02/lib/ruby/1.8/net/protocol.rb:126:in `readline'
/Users/kev/.rvm/rubies/ree-1.8.7-2010.02/lib/ruby/1.8/net/http.rb:2233:in `read_chunked'
/Users/kev/.rvm/rubies/ree-1.8.7-2010.02/lib/ruby/1.8/net/http.rb:2213:in `read_body_0'
/Users/kev/.rvm/rubies/ree-1.8.7-2010.02/lib/ruby/1.8/net/http.rb:2179:in `read_body'
/Users/kev/.rvm/rubies/ree-1.8.7-2010.02/lib/ruby/1.8/net/http.rb:2204:in `body'
/Users/kev/.rvm/rubies/ree-1.8.7-2010.02/lib/ruby/1.8/net/http.rb:2143:in `reading_body'
/Users/kev/.rvm/rubies/ree-1.8.7-2010.02/lib/ruby/1.8/net/http.rb:1053:in `request_without_newrelic_trace'
/Users/kev/.rvm/gems/ree-1.8.7-2010.02@global/gems/newrelic_rpm-2.13.2/lib/new_relic/control/../agent/instrumentation/net.rb:11:in `request'
/Users/kev/.rvm/gems/ree-1.8.7-2010.02@global/gems/newrelic_rpm-2.13.2/lib/new_relic/agent/method_tracer.rb:141:in `trace_execution_scoped'
/Users/kev/.rvm/gems/ree-1.8.7-2010.02@global/gems/newrelic_rpm-2.13.2/lib/new_relic/control/../agent/instrumentation/net.rb:10:in `request'
/Users/kev/.rvm/gems/ree-1.8.7-2010.02@global/gems/s3-0.3.5/lib/s3/connection.rb:192:in `send_request'
/Users/kev/.rvm/rubies/ree-1.8.7-2010.02/lib/ruby/1.8/net/http.rb:543:in `start'
/Users/kev/.rvm/gems/ree-1.8.7-2010.02@global/gems/s3-0.3.5/lib/s3/connection.rb:175:in `send_request'
/Users/kev/.rvm/gems/ree-1.8.7-2010.02@global/gems/s3-0.3.5/lib/s3/connection.rb:86:in `request'
/Users/kev/.rvm/gems/ree-1.8.7-2010.02@global/gems/s3-0.3.5/lib/s3/service.rb:70:in `service_request'
/Users/kev/.rvm/gems/ree-1.8.7-2010.02@global/gems/s3-0.3.5/lib/s3/bucket.rb:146:in `bucket_request'
/Users/kev/.rvm/gems/ree-1.8.7-2010.02@global/gems/s3-0.3.5/lib/s3/object.rb:208:in `object_request'
/Users/kev/.rvm/gems/ree-1.8.7-2010.02@global/gems/s3-0.3.5/lib/s3/object.rb:179:in `object_headers'
/Users/kev/.rvm/gems/ree-1.8.7-2010.02@global/gems/s3-0.3.5/lib/s3/object.rb:59:in `retrieve'
/Users/kev/.rvm/gems/ree-1.8.7-2010.02@global/gems/s3-0.3.5/lib/s3/object.rb:67:in `exists?'
/Users/kev/code/rails/nycki/lib/patches/paperclip.rb:84:in `exists?'
/Users/kev/.rvm/gems/ree-1.8.7-2010.02@global/gems/paperclip-2.3.4/lib/paperclip/attachment.rb:325:in `queue_existing_for_delete'
/Users/kev/.rvm/gems/ree-1.8.7-2010.02@global/gems/paperclip-2.3.4/lib/paperclip/attachment.rb:324:in `map'
/Users/kev/.rvm/gems/ree-1.8.7-2010.02@global/gems/paperclip-2.3.4/lib/paperclip/attachment.rb:324:in `queue_existing_for_delete'
/Users/kev/.rvm/gems/ree-1.8.7-2010.02@global/gems/paperclip-2.3.4/lib/paperclip/attachment.rb:156:in `clear'
/Users/kev/.rvm/gems/ree-1.8.7-2010.02@global/gems/paperclip-2.3.4/lib/paperclip/attachment.rb:87:in `assign'
/Users/kev/.rvm/gems/ree-1.8.7-2010.02@global/gems/paperclip-2.3.4/lib/paperclip.rb:246:in `data='
/Users/kev/code/rails/nycki/app/models/nycki_file.rb:165:in `move_to_s3'
/Users/kev/code/rails/nycki/lib/s3_file_job.rb:3:in `perform'

If there is any way I can get more useful information out I'll be happy to help.

Updating only meta data for an object

The ability to update just the metadata for an object (content-disposition, etc.) would be very useful for certain tasks.

Currently, when saving an object it appears to download and then re-upload the entire object, including its content, even when the only thing that has changed is a header, which wastes a lot of bandwidth and time.
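One hedged workaround with the current API is an in-place copy: the debug log in the copy issue above shows the gem sending x-amz-metadata-directive: REPLACE, so copying an object onto its own key should rewrite headers server-side without re-uploading the content. Whether arbitrary headers such as content-disposition can be passed through copy is an assumption to verify.

# Sketch: server-side copy onto the same key; no object content travels over the wire.
object = bucket.objects.find("report.pdf")
object.copy(:key => "report.pdf", :bucket => bucket, :acl => :public_read)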

Does not handle 307 temporary redirects

When creating a bucket in the EU region, subsequent requests can receive a 307 temporary redirect response for a while, with a new host location to use. The gem should catch this response and resend the request to the new host location.
