
dragonfly-s3_data_store's Introduction

Dragonfly::S3DataStore

Amazon AWS S3 data store for use with the Dragonfly gem.

Gemfile

gem 'dragonfly-s3_data_store'

Usage

Configuration (remember the require)

require 'dragonfly/s3_data_store'

Dragonfly.app.configure do
  # ...

  datastore :s3,
    bucket_name: 'my-bucket',
    access_key_id: 'blahblahblah',
    secret_access_key: 'blublublublu'

  # ...
end

Available configuration options

:bucket_name
:access_key_id
:secret_access_key
:region               # default 'us-east-1', see http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region for options
:storage_headers      # defaults to {'x-amz-acl' => 'public-read'}, can be overridden per-write - see below
:url_scheme           # defaults to "http"
:url_host             # defaults to "<bucket-name>.s3.amazonaws.com", or "s3.amazonaws.com/<bucket-name>" if not a valid subdomain
:use_iam_profile      # boolean - if true, no need for access_key_id or secret_access_key
:root_path            # store all content under a subdirectory - uids will be relative to this - defaults to nil
:fog_storage_options  # hash for passing any extra options to Fog::Storage.new, e.g. {path_style: true}
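
For illustration, a configuration combining several of these options might look like the following sketch (bucket name, host and root path are placeholders; use either an IAM profile or explicit keys, not necessarily both):

Dragonfly.app.configure do
  datastore :s3,
    bucket_name: 'my-bucket',                       # placeholder
    use_iam_profile: true,                          # or pass access_key_id / secret_access_key instead
    region: 'eu-west-1',
    url_scheme: 'https',
    url_host: 'assets.example.com',                 # hypothetical custom host
    root_path: 'production/assets',                 # uids become relative to this prefix
    storage_headers: {'x-amz-acl' => 'private'},    # overrides the public-read default
    fog_storage_options: {path_style: true}         # passed straight through to Fog::Storage.new
end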

Per-storage options

Dragonfly.app.store(some_file, {'some' => 'metadata'}, path: 'some/path.txt', headers: {'x-amz-acl' => 'public-read-write'})

or

class MyModel
  dragonfly_accessor :photo do
    storage_options do |attachment|
      {
        path: "some/path/#{some_instance_method}/#{rand(100)}",
        headers: {"x-amz-acl" => "public-read-write"}
      }
    end
  end
end

BEWARE! You must make sure the path (which becomes the uid for the content) is unique and changes each time the content changes, otherwise you could have caching problems, as the generated URLs will be the same for the same uid.
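
One way to keep paths unique, for illustration only (SecureRandom is from Ruby's standard library; the model and accessor names are hypothetical):

require 'securerandom'

class MyModel
  dragonfly_accessor :photo do
    storage_options do |attachment|
      # A random component ensures the path (and therefore the uid and URL)
      # changes every time new content is assigned.
      { path: "photos/#{SecureRandom.uuid}/#{attachment.name}" }
    end
  end
end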

Serving directly from S3

You can get the S3 url using

Dragonfly.app.remote_url_for('some/uid')

or

my_model.attachment.remote_url

or with an expiring url:

my_model.attachment.remote_url(expires: 3.days.from_now)

or with an https url:

my_model.attachment.remote_url(scheme: 'https')   # also configurable for all urls with 'url_scheme'

or with a custom host:

my_model.attachment.remote_url(host: 'custom.domain')   # also configurable for all urls with 'url_host'

or with other query parameters (needs an expiry):

my_model.attachment.remote_url(expires: 3.days.from_now, query: {'response-content-disposition' => 'attachment'})  # URL that downloads the file

dragonfly-s3_data_store's People

Contributors

aprescott, banyan, dgiunta, gaffneyc, janraasch, lcowell, markevans, maxschulze, tricknotes


dragonfly-s3_data_store's Issues

S3 connection timeout with IAM

Everything works perfectly when using file storage; I am trying to upload to an S3 bucket but get the following error:

[fog][WARNING] Unable to fetch credentials: connect timeout reached
[fog][WARNING] Unable to fetch credentials: connect timeout reached
[fog][WARNING] Unable to fetch credentials: connect timeout reached
   (0.2ms)  ROLLBACK
Completed 500 Internal Server Error in 181906ms

Excon::Errors::Forbidden (Expected(200) <=> Actual(403 Forbidden)
excon.error.response
  :body          => "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>CCEECDDC7942972B</RequestId><HostId>IIZYUhoTtGeD1HFN50/IB3cgo9O29c5zw/UcA3WbpkNbO9bU2Wk1NrQWaa8lZnS/J+zUlLbb8WQ=</HostId></Error>"
  :headers       => {
    "Content-Type"     => "application/xml"
    "Date"             => "Wed, 30 Dec 2015 14:49:09 GMT"
    "Server"           => "AmazonS3"
    "x-amz-id-2"       => "IIZYUhoTtGeD1HFN50/IB3cgo9O29c5zw/UcA3WbpkNbO9bU2Wk1NrQWaa8lZnS/J+zUlLbb8WQ="
    "x-amz-request-id" => "CCEECDDC7942972B"
  }
  :local_address => "XX.XX.XX.XX"
  :local_port    => 60431
  :reason_phrase => "Forbidden"
  :remote_ip     => "XX.XX.XX.XX"
  :status        => 403
  :status_line   => "HTTP/1.1 403 Forbidden\r\n"
):

dragonfly.rb configured as follows:

require 'dragonfly'
require 'dragonfly/s3_data_store'

# Configure
Dragonfly.app.configure do
  plugin :imagemagick

  secret "-- desensitised --"

  url_format "/media/:job/:name"

#  datastore :file,
#    root_path: Rails.root.join('public/system/dragonfly', Rails.env),
#    server_root: Rails.root.join('public')
  datastore :s3,
    bucket_name: AWS_S3_BUCKET,
    access_key_id: AWS_ACCESS_KEY_ID,
    secret_access_key: AWS_SECRET_ACCESS_KEY,
    region: 'eu-west-1',
    use_iam_profile: true,
    url_scheme: 'https',
    fog_storage_options: {
        :provider => "AWS",
        :aws_access_key_id => AWS_ACCESS_KEY_ID,
        :aws_secret_access_key => AWS_SECRET_ACCESS_KEY
    }

end

This might be more of a Fog issue than a Dragonfly one, but is there any chance you could help with a little diagnosis?

The AWS_ACCESS_KEY_ID and SECRET work fine with the aws-sdk gem, and I can manually put files without a problem.

I was thinking about doing a fork for aws-sdk, but it seems pointless to duplicate what Fog is supposed to do if it's Fog's issue.

Thanks.

Excon::Errors::Forbidden with files containing Umlauts

Uploading files to S3 fails if the file has an umlaut in its name.

Excon::Errors::Forbidden - Expected(200) <=> Actual(403 Forbidden)
  response => #<Excon::Response:0x007f90aecc91b0 @data={:body=>"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><StringToSignBytes>50 55 54 0a 0a 69 6d 61 67 65 2f 70 6e 67 0a 54 68 75 2c 20 30 36 20 46 65 62 20 32 30 31 34 20 31 32 3a 30 36 3a 30 30 20 2b 30 30 30 30 0a 78 2d 61 6d 7a 2d 61 63 6c 3a 70 75 62 6c 69 63 2d 72 65 61 64 0a 78 2d 61 6d 7a 2d 6d 65 74 61 2d 6a 73 6f 6e 3a 7b 22 6e 61 6d 65 22 3a 22 70 61 73 73 66 6f 74 6f 2d 6d 69 74 2d 6c c3 83 c2 a4 63 68 65 6c 6e 2e 70 6e 67 22 2c 22 6d 6f 64 65 6c 5f 63 6c 61 73 73 22 3a 22 50 69 63 74 75 72 65 22 2c 22 6d 6f 64 65 6c 5f 61 74 74 61 63 68 6d 65 6e 74 22 3a 22 69 6d 61 67 65 22 2c 22 61 6e 61 6c 79 73 65 72 5f 63 61 63 68 65 22 3a 7b 22 69 6d 61 67 65 5f 70 72 6f 70 65 72 74 69 65 73 22 3a 7b 22 66 6f 72 6d 61 74 22 3a 22 70 6e 67 22 2c 22 77 69 64 74 68 22 3a 39 32 2c 22 68 65 69 67 68 74 22 3a 31 32 38 7d 2c 22 77 69 64 74 68 22 3a 39 32 2c 22 68 65 69 67 68 74 22 3a 31 32 38 7d 7d 0a 2f 66 61 76 69 63 73 2d 74 65 73 74 69 6e 67 2f 32 30 31 34 2f 30 32 2f 30 36 2f 31 33 2f 30 35 2f 35 39 2f 39 38 39 2f 70 61 73 73 66 6f 74 6f 5f 6d 69 74 5f 6c 5f 63 68 65 6c 6e 2e 70 6e 67</StringToSignBytes><RequestId>BF4085C899FB04BC</RequestId><HostId>BgGDAsOaAeSg3UtcYa1VsjXW5wImkyj55S+yPcYBE5SaSjC3xcVGWdH2a12X872f</HostId><SignatureProvided>wmJyDEg8Djr9E/Jm1huYjpZmK8A=</SignatureProvided><StringToSign>PUT\n\nimage/png\nThu, 06 Feb 2014 12:06:00 +0000\nx-amz-acl:public-read\nx-amz-meta-json:{\"name\":\"passfoto-mit-l\xC3\x83\xC2\xA4cheln.png\",\"model_class\":\"Picture\",\"model_attachment\":\"image\",\"analyser_cache\":{\"image_properties\":{\"format\":\"png\",\"width\":92,\"height\":128},\"width\":92,\"height\":128}}\n/favics-testing/2014/02/06/13/05/59/989/passfoto_mit_l_cheln.png</StringToSign><AWSAccessKeyId>AKIAJWH23ATSHYAAVEKA</AWSAccessKeyId></Error>", :headers=>{"x-amz-request-id"=>"BF4085C899FB04BC", "x-amz-id-2"=>"BgGDAsOaAeSg3UtcYa1VsjXW5wImkyj55S+yPcYBE5SaSjC3xcVGWdH2a12X872f", "Content-Type"=>"application/xml", "Transfer-Encoding"=>"", "Date"=>"Thu, 06 Feb 2014 12:06:01 GMT", "Connection"=>"close", "Server"=>"AmazonS3"}, :status=>403, :remote_ip=>"178.236.4.121"}, @body="<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. 
Check your key and signing method.</Message><StringToSignBytes>50 55 54 0a 0a 69 6d 61 67 65 2f 70 6e 67 0a 54 68 75 2c 20 30 36 20 46 65 62 20 32 30 31 34 20 31 32 3a 30 36 3a 30 30 20 2b 30 30 30 30 0a 78 2d 61 6d 7a 2d 61 63 6c 3a 70 75 62 6c 69 63 2d 72 65 61 64 0a 78 2d 61 6d 7a 2d 6d 65 74 61 2d 6a 73 6f 6e 3a 7b 22 6e 61 6d 65 22 3a 22 70 61 73 73 66 6f 74 6f 2d 6d 69 74 2d 6c c3 83 c2 a4 63 68 65 6c 6e 2e 70 6e 67 22 2c 22 6d 6f 64 65 6c 5f 63 6c 61 73 73 22 3a 22 50 69 63 74 75 72 65 22 2c 22 6d 6f 64 65 6c 5f 61 74 74 61 63 68 6d 65 6e 74 22 3a 22 69 6d 61 67 65 22 2c 22 61 6e 61 6c 79 73 65 72 5f 63 61 63 68 65 22 3a 7b 22 69 6d 61 67 65 5f 70 72 6f 70 65 72 74 69 65 73 22 3a 7b 22 66 6f 72 6d 61 74 22 3a 22 70 6e 67 22 2c 22 77 69 64 74 68 22 3a 39 32 2c 22 68 65 69 67 68 74 22 3a 31 32 38 7d 2c 22 77 69 64 74 68 22 3a 39 32 2c 22 68 65 69 67 68 74 22 3a 31 32 38 7d 7d 0a 2f 66 61 76 69 63 73 2d 74 65 73 74 69 6e 67 2f 32 30 31 34 2f 30 32 2f 30 36 2f 31 33 2f 30 35 2f 35 39 2f 39 38 39 2f 70 61 73 73 66 6f 74 6f 5f 6d 69 74 5f 6c 5f 63 68 65 6c 6e 2e 70 6e 67</StringToSignBytes><RequestId>BF4085C899FB04BC</RequestId><HostId>BgGDAsOaAeSg3UtcYa1VsjXW5wImkyj55S+yPcYBE5SaSjC3xcVGWdH2a12X872f</HostId><SignatureProvided>wmJyDEg8Djr9E/Jm1huYjpZmK8A=</SignatureProvided><StringToSign>PUT\n\nimage/png\nThu, 06 Feb 2014 12:06:00 +0000\nx-amz-acl:public-read\nx-amz-meta-json:{\"name\":\"passfoto-mit-l\xC3\x83\xC2\xA4cheln.png\",\"model_class\":\"Picture\",\"model_attachment\":\"image\",\"analyser_cache\":{\"image_properties\":{\"format\":\"png\",\"width\":92,\"height\":128},\"width\":92,\"height\":128}}\n/favics-testing/2014/02/06/13/05/59/989/passfoto_mit_l_cheln.png</StringToSign><AWSAccessKeyId>AKIAJWH23ATSHYAAVEKA</AWSAccessKeyId></Error>", @headers={"x-amz-request-id"=>"BF4085C899FB04BC", "x-amz-id-2"=>"BgGDAsOaAeSg3UtcYa1VsjXW5wImkyj55S+yPcYBE5SaSjC3xcVGWdH2a12X872f", "Content-Type"=>"application/xml", "Transfer-Encoding"=>"", "Date"=>"Thu, 06 Feb 2014 12:06:01 GMT", "Connection"=>"close", "Server"=>"AmazonS3"}, @status=403, @remote_ip="178.236.4.121">:

URI-encoding the filename before storing works:

dragonfly_accessor :image do
  after_assign { |a| a.name = URI.escape(a.name) }
end
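
Note that URI.escape was deprecated in Ruby 2.7 and removed in 3.0, so on newer Rubies a similar workaround could sanitise the name instead (a sketch only, not part of the gem):

dragonfly_accessor :image do
  # Replace anything outside a safe ASCII set so the S3 key (and the signed
  # headers) only contain unproblematic characters.
  after_assign { |a| a.name = a.name.gsub(/[^0-9A-Za-z_.\-]/, '_') }
end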

Override hostname used in read for CDN usage

We've been using the gem for some time to store/retrieve files in S3. We've added a CDN in front of S3 now and would like to be able to generate URLs that access the stored files using the CDN hostname (with the same file path).

This is working fine for 'url_for' with the hostname override, but the read method doesn't seem to take this into account and only accesses S3 directly. Is there any way to override that?

Error accessing image width: 403 Forbidden

Looks like bucket credentials are not presented when trying to fetch image attributes, like width.

2.1.7 :003 > i.image
 => <Dragonfly Attachment uid="2015/12/27/12/57/05/201cc3f4-ad94-47a7-ae4e-564f9964c8eb/141828108918.jpg", app=:default> 
2.1.7 :004 > i.image.width
Excon::Errors::Forbidden: Expected([200, 206]) <=> Actual(403 Forbidden)
excon.error.response
  :body          => "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>C5A217CF95695361</RequestId>

I can, however, assign a remote image and have it uploaded to S3 without a problem.

Add option "ignore_destroy"

We have a production bucket and a development bucket.
We regularly use a script to copy all files from production into development.
As there is no bucket-copy feature provided by Amazon, the copy process takes a lot of time, especially if multiple files were deleted in development since the last synchronization.
Also, if one of the developers removes a file in their local environment, it will be removed from S3 and other developers will get a file-missing error.

Having an ignore_destroy option would make the synchronization a lot faster, as it would only need to import files created since the last synchronization; it would also prevent files going missing due to a local modification by one of the developers.

If you are interested in integrating this feature into this project, please let me know and I will try to implement it.
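
A rough sketch of what this could look like, assuming a custom data store subclass is registered in place of the built-in one (the class name is hypothetical; destroying is simply skipped):

require 'dragonfly/s3_data_store'

class NonDestroyingS3DataStore < Dragonfly::S3DataStore
  # Leave objects in the bucket even when Dragonfly asks for them to be destroyed.
  def destroy(uid)
    # no-op
  end
end

Dragonfly.app.configure do
  datastore NonDestroyingS3DataStore.new(
    bucket_name: 'my-dev-bucket',   # placeholder
    use_iam_profile: true
  )
end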

Mime type not detected on existing files

Recently we migrated a few TB of files over to Wasabi S3 from a NAS.
We then enabled the s3 data store with dragonfly.
All the files are found and mapped, but every single one of them comes back with an application/octet-stream mime type, causing browsers to download the files instead of rendering them.

For instance, I have the following image in one of my models:
:image_uid => "2021/10/07/6d13jlg995_Five_Start_Big_Center_Star.png", :image_name => "Five Start Big Center Star.png", :image_size => 435695
If I try to get the mime-type:
p.image.mime_type => "application/octet-stream"
Trying to get info on the file through s3cmd, I can see the mime type is reported:

s3cmd info 's3://heatwave-assets/secure_assets/production/2021/10/07/6d13jlg995_Five_Start_Big_Center_Star.png'

s3://heatwave-assets/secure_assets/production/2021/10/07/6d13jlg995_Five_Start_Big_Center_Star.png (object):
File size: 435695
Last mod: Thu, 07 Oct 2021 23:45:57 GMT
MIME type: image/png
Storage: STANDARD
MD5 sum: d83f65fcb65b56418be9c0e32703fe17
SSE: AES256
Policy: none
CORS:
ACL: cbillen: FULL_CONTROL

With --debug I get the header:
'content-type': 'image/png'

But Dragonfly doesn't see this and returns application/octet-stream

In order to fix this I found out I have to add this header:
'x-amz-meta-json': '{"name":"Five Start Big Center Star.png"}'

Then image.mime_type will return the right type
image/png

I could also re-upload to fix the header
model.image = model.image.to_file(model.image_name)

With millions of files, doing the above one by one is a nightmare. But there are other indicators (the content type header from S3, the image file name in the model) which could all have been used to determine the mime_type, so why weren't they?

Is there an alternative to going through these files one by one, e.g. telling Dragonfly to use a database attribute or the content type returned by S3?
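
For reference, a one-off batch version of the re-upload trick above might look roughly like this (the model and accessor names are hypothetical; it simply applies the same reassignment record by record):

MyModel.find_each do |record|
  next if record.image_uid.blank?
  # Re-assigning the content makes Dragonfly write it back to S3 together with
  # its x-amz-meta-json metadata (including the original name).
  record.image = record.image.to_file(record.image_name)
  record.save!
end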

Thank you

How to get cropped images?

Dragonfly provides the thumb processor via the default imagemagick plugin,
e.g. @photo.thumb('30x30').url, but this s3_data_store doesn't have such a method.
So the question is: how do I serve assets via S3, but in a cropped or thumbnail version?
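
For context, the thumb processor comes from the Dragonfly app itself (the imagemagick plugin) rather than from the data store, so something like the following should behave the same whichever data store is configured (a sketch, assuming plugin :imagemagick is enabled as in the configurations above):

@photo.thumb('30x30').url   # processed on the fly and served through the Dragonfly middleware
@photo.remote_url           # the original, unprocessed file served directly from S3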

Ignoring url_format and/or not sanitizing image names

Hi, nice lib.

It appears to me that this lib is ignoring the url_format directive. Example:

initializers/dragonfly.rb

require 'dragonfly'
require 'dragonfly/s3_data_store'

Dragonfly.app.configure do
  plugin :imagemagick
  secret "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  url_format "/media/:job/:sha/:name"

  if Rails.env.development? || Rails.env.test?
    datastore :file,
      root_path:   Rails.root.join('public/system/dragonfly', Rails.env),
      server_root: Rails.root.join('public')
  else
    datastore :s3,
      bucket_name:       'xxxxxxx',
      access_key_id:     'xxxxxxxxxxxx',
      secret_access_key: 'xxxxxxxxxxxxxxxxxxxxxxxx',
      url_scheme:        'https'
  end
end

Dragonfly.logger = Rails.logger
Rails.application.middleware.use Dragonfly::Middleware
if defined?(ActiveRecord::Base)
  ActiveRecord::Base.extend Dragonfly::Model
  ActiveRecord::Base.extend Dragonfly::Model::Validations
end

app/models/picture.rb

class Image < ActiveRecord::Base
  dragonfly_accessor :image do
    after_assign do |img|
      img.encode!('jpg', '-filter Lanczos -interlace Plane -quality 80') if img.image?
    end
  end

  validates :image, presence: true
end 

While in development everything works as expected (for example, uploading the file "roge%C2%A6u%CC%88rio Menezes.jpg" results in "[...]/2997bnok9i_rogeurio_menezes.jpg"), in production (using the S3 data store) it becomes "[...]//roge¦ürio Menezes.jpg".

To get things working I did the following:

app/models/picture.rb

class Image < ActiveRecord::Base
  before_save :rename
  dragonfly_accessor :image do
    after_assign do |img|
      img.encode!('jpg', '-filter Lanczos -interlace Plane -quality 80') if img.image?
    end
  end

  validates :image, presence: true

  protected
    def rename
      return unless self.image.present?
      path_obj = Pathname(self.image.name)
      self.image.name = path_obj.sub_ext('').to_s.downcase.strip.gsub(' ', '-').gsub(/[^\w-]/, '') + path_obj.extname
    end
end 

Documentation error?

Hi,

Maybe I'm missing something, but according to the documentation the file path can be set like so:

Dragonfly.app.store(some_file, path: 'some/path.txt')

but here I would always get the autogenerated path unless I did

Dragonfly.app.store(some_file, {}, path: 'some/path.txt')

It looks like the parent gem's store method takes two distinct hashes, the latter one being passed down to this gem.
Am I understanding it all wrong, or is the doc simply outdated?

Make the :path per-storage option available as a global configuration option

This is an excellent gem and is used in RefineryCMS (https://github.com/refinery/refinerycms).
As a user of Refinery, I would like to be able to set the path on S3, as indicated in the docs at https://github.com/markevans/dragonfly-s3_data_store, like so:

    Dragonfly.app.store(some_file, path: 'some/path.txt', headers: {'x-amz-acl' => 'public-read-write'})

The problem is that Refinery does not call Dragonfly.app.store(...) but installs Dragonfly as an extension to ActiveRecord:

# refinerycms/images/lib/refinery/images/dragonfly.rb

          ActiveRecord::Base.extend ::Dragonfly::Model
          ActiveRecord::Base.extend ::Dragonfly::Model::Validations
          ...    
          app_images = ::Dragonfly.app(:refinery_images)
          ...
          if ::Refinery::Images.s3_backend
            require 'dragonfly/s3_data_store'
            options = {
              bucket_name: Refinery::Images.s3_bucket_name,
              access_key_id: Refinery::Images.s3_access_key_id,
              secret_access_key: Refinery::Images.s3_secret_access_key
            }
            # S3 Region otherwise defaults to 'us-east-1'
            options.update(region: Refinery::Images.s3_region) if Refinery::Images.s3_region
            app_images.use_datastore :s3, options
          end

I propose to add a :global_path configuration option (similar to :root_path) with the following behaviour: if :global_path is specified, it is used instead of generate_uid.

# dragonfly-s3_data_store/lib/dragonfly/s3_data_store.rb
...
      configurable_attr :url_host
      configurable_attr :global_path
...

def store(temp_object, opts={})
  ...
  headers['Content-Type'] = mime_type if mime_type
  if global_path.nil?
    uid = opts[:path] || generate_uid(temp_object.name || 'file')
  else
    uid = global_path
  end
  ...
end

I understand that it will be my responsibility to make sure the paths are unique.
Thanks!

Dragonfly S3 failing with Excon::Errors::SocketError / OpenSSL::SSL::SSLError

Hi -- I have a pre-1.0 Dragonfly app that uploads and retrieves images fine.
However, when I try to upgrade to 1.0.5, and use this gem to handle the data store, I get the following error:

Excon::Errors::SocketError - hostname "www.myapp.com-images-production.s3-eu-west-1.amazonaws.com" does not match the server certificate (OpenSSL::SSL::SSLError)

My config looks (kind of) like this:

app.configure do
  plugin :imagemagick
  datastore(
    :s3,
    bucket_name: "www.myapp.com-images-production",
    access_key_id: "<REDACTED>",
    secret_access_key: "<REDACTED>",
    region: "eu-west-1"
  )
end

And these are basically the same values as in the config for the app that's working fine.

Could you please advise me on what I could do to get this working?

Thanks,
Doug.

aws-s3 gem broken for the latest rails (4.1.0)

We upgraded a project to 4.1.0, and the aws-s3 gem, which looks like abandonware, monkey-patches stuff in ActiveSupport in all the wrong ways. We would like to switch this gem to run on the s3 gem, which has been updated recently and uses an OO design.

Config to work against fake-s3

Wanted to simulate s3 for feature tests: https://github.com/jubos/fake-s3

I had to debug the code to arrive at this config. It might be useful to include in the README.

    datastore :s3,
              bucket_name: '',
              access_key_id: 'myaccess',
              region: 'eu-west-1',
              secret_access_key: 'mysecret',
              fog_storage_options: {
                  host: '127.0.0.1',
                  port: 4567,
                  scheme: 'http'
              }

S3 store failing when used with IAM

I have been trying to configure this gem with IAM profiles. As per the instructions in the README, I have set up my dragonfly.rb like so:

datastore :s3,
      use_iam_profile: true,
      bucket_name: ENV['S3_BUCKET']

(Removed the access key and the secret.)

But in production, I get the following error:

[fog][WARNING] Unable to fetch credentials: Connection refused - connect(2) for 169.254.169.254:80 (Errno::ECONNREFUSED)
  ArgumentError (Missing required arguments: aws_access_key_id, aws_secret_access_key):

Am I doing this right?

Add path_style to avoid fog Excon::Errors::SocketError hostname does not match ssl error

I have an AWS S3 bucket with a dot ('.') in the bucket name. This causes issues with SSL. If I had written this app I wouldn't have created a bucket with a dot in it; unfortunately the app is large and migrating could be a pain. Fog has a config option that gets around this, called path_style. You can set path_style: true to solve the issue.

See fog/fog#2381 (comment)
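
For reference, the :fog_storage_options option documented in the README can probably achieve the same thing without a fork, for example (bucket name and credentials are placeholders):

datastore :s3,
  bucket_name: 'my.bucket.with.dots',
  access_key_id: 'blahblahblah',
  secret_access_key: 'blublublublu',
  fog_storage_options: {path_style: true}   # ask Fog for path-style URLs instead of <bucket>.s3... subdomains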

I've forked and added path_style to the gem and it seems to solve my problem:

darrenterhune@f932c686b

Thoughts?

Crash With Rails 5.2

Looks like the Bootsnap gem, which ships with Rails 5.2, is having trouble. Dragonfly with a file datastore starts up fine.

/.rvm/gems/ruby-2.5.1/gems/bootsnap-1.3.2/lib/bootsnap/compile_cache/iseq.rb:12: [BUG] Segmentation fault at 0x0000000000000000
ruby 2.5.1p57 (2018-03-29 revision 63029) [x86_64-darwin17]
