
shrine-transloadit's People

Contributors

janko, mfilej, ramonpm

shrine-transloadit's Issues

Undefined method `transloadit_process' on Uploader that isn't using Transloadit

I'll preface this by saying that something in our code likely caused this breakage, but it may be valuable to others if I share it here, since this issue has proven difficult to debug or to get any kind of logging for.

  • I have various custom uploaders in my app which inherit from Shrine.
  • My shrine.rb initializer itself does not load the transloadit plugin; instead, the custom uploader that requires it does.
  • Everything was working fine and I hadn't touched the uploaders, but somehow, after a recent deploy, any uploader that doesn't use Transloadit raises undefined method `transloadit_process' when trying to promote.

I am using Uppy file inputs to upload, storing in an S3 cache, which should then be promoted to the custom store during update. For some reason I'm now getting undefined method `transloadit_process' for CustomUploader. The CustomUploader isn't using Transloadit at all, and the file in this case was never supposed to touch Transloadit.

What I receive is a 500 response with this error and the file is never uploaded to the proper store.

Again, I haven't changed anything in my uploaders or updated the Shrine gems, so I am wondering whether conflicting JS somewhere could be causing Shrine to pick a different uploader for the finalize and promote step. In the reported error, undefined method `transloadit_process' for CustomUploader, the uploader mentioned is the one I expected and wanted to be used; the issue is that for some reason it is now suddenly trying to do something with Transloadit.

These custom uploaders upload either directly to S3 or to other similar locations. Below is what the end of my backtrace looks like when I receive this error:

/gems/shrine-transloadit-0.5.1/lib/shrine/plugins/transloadit.rb:79 in transloadit_process
/gems/shrine-transloadit-0.5.1/lib/shrine/plugins/transloadit.rb:35 in block in configure
/gems/shrine-2.12.0/lib/shrine/plugins/backgrounding.rb:231 in instance_exec
/gems/shrine-2.12.0/lib/shrine/plugins/backgrounding.rb:231 in _promote
/gems/shrine-2.12.0/lib/shrine.rb:572 in finalize
/gems/shrine-2.12.0/lib/shrine/plugins/activerecord.rb:113 in block in included
/gems/activesupport-5.2.5/lib/active_support/callbacks.rb:426 in instance_exec
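
In the meantime, the workaround I'm experimenting with is re-registering a plain promote block on the base class, so uploaders that don't load the transloadit plugin enqueue an ordinary promotion instead of inheriting Transloadit's hook. A minimal sketch, assuming the Shrine 2.x backgrounding API (PromoteJob is a hypothetical Sidekiq worker):

# config/initializers/shrine.rb
Shrine.plugin :backgrounding

# plain promotion for uploaders that don't load the transloadit plugin
Shrine::Attacher.promote { |data| PromoteJob.perform_async(data) }

# app/jobs/promote_job.rb
class PromoteJob
  include Sidekiq::Worker

  def perform(data)
    Shrine::Attacher.promote(data) # loads the attacher and promotes cache -> store
  end
end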

Any help would be greatly appreciated!

undefined method `data' for #<Hash:0x007fabf6664078>

I've got myself a bit stuck and have tried approaching the issue from a couple of directions, but still can't seem to solve it. When I get the response from Transloadit and run transloadit_save(data), I get an undefined method `data' for #<Hash:0x007fabf6664078> error.

I'm using the approach from the GoRails tutorial, which uploads the file to cache via the jQuery uploader and then submits the key for that file. Transloadit appears to fetch the file and transcode it fine, even firing the webhook correctly, but the data returned by the webhook is causing an issue. Have I missed a setup step?
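
For reference, my webhook endpoint is wired up roughly like this — a simplified sketch of what I understood the 0.x docs to describe, with hypothetical route and controller names:

class WebhooksController < ApplicationController
  skip_before_action :verify_authenticity_token

  def transloadit
    # hand Transloadit's POSTed assembly notification straight to Shrine;
    # with Rails strong parameters it may need params.to_unsafe_h instead
    Shrine::Attacher.transloadit_save(params)
    head :ok
  end
end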

Exported transloadit derivatives not using pretty_location?

I've got my transloadit_processor and transloadit_saver methods hooked up as shown in the docs here:
https://github.com/shrinerb/shrine-transloadit#usage

after calling

attacher.transloadit_save(response["results"])
attacher.derivatives

my derivatives are there; however, their paths on the storage (S3) seem to use the default locations instead of going through my custom generate_location(io, name: nil, record: nil, derivative: nil, metadata: {}, **options) method, which works together with the pretty_location plugin.

In my generate_location implementation I'm nesting files underneath a few directories, something like

accounts/{account_id}/responses/{response_id}/videos/{video_id}

This is working for the main file when I upload it through Shrine, but not for any of the derivatives I'm pulling in via the transloadit_saver and merge_derivatives methods.

Basically, the uploaded file is being correctly stored at the S3 path:

accounts/123/responses/456/videos/789/original.webm

but the derivatives coming in via shrine-transloadit are not; instead, Shrine is sticking them at the root of the storage in a path like:

bc/97dbdbdc9d4f02be1475d647b263da/original-14a4d011c346d4b657467cd4b7bb74d6.webm

Is there a way to make these transloadit derivatives go through the same pretty_location logic so I can store them alongside the original?
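
For context, my setup is essentially the README's — a simplified sketch (the uploader class, step names, and robot options are placeholders for what I actually use):

class VideoUploader < Shrine
  plugin :pretty_location

  Attacher.transloadit_processor do
    import = file.transloadit_import_step
    encode = transloadit_step "encode", "/video/encode", use: import
    export = store.transloadit_export_step use: encode

    assembly = transloadit.assembly(steps: [import, encode, export])
    assembly.create!
  end

  Attacher.transloadit_saver do |results|
    encoded = store.transloadit_file(results["encode"])
    merge_derivatives(encoded: encoded) # these end up at the storage root
  end
end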

Thanks!

Deprecating Transloadit's rails-sdk

Hi @janko!

Wanted to let you know that we're deprecating Transloadit's rails-sdk, which shrine-transloadit of course currently uses.

The reason is that it's really just a small wrapper around bundling the jquery-sdk, while these days folks often have their own JS bundlers, and Uppy is a much nicer uploading integration for Transloadit (and any other target) than the jquery-sdk.

So we're currently recommending that folks use Uppy and then catch the XHR upload in Rails their own way, or perhaps upload directly to S3, etc.

Still, that leaves something to be desired. I think the shrine-transloadit plugin, for instance, could still be very valuable if it dropped the rails-sdk and offered to use Uppy instead. What do you think? Would you be willing to work with us on this? We'd be happy to sponsor those efforts.

Transloadit Adaptive and Versions

I am trying to use Transloadit's /video/adaptive robot and there are a couple of issues I am encountering.

The first problem is that the /video/adaptive robot produces an array of results, not just a single result in a hash. This array is in no particular order, so sometimes Shrine stores random files rather than the final playlist, which is what I really want. (I guess in the future some people might want to store the other playlists or segments as well?)

The next issue is that the S3 path matters for this robot, since the playlists reference their segments by relative location. That means I need to maintain the folder structure produced by the default path "${file.meta.relative_path}/${file.name}".

Finally, I want to use versions because I am also generating a thumbnail from the video, but I am unable to pass steps through without turning them into files. I seem to be able to pass the import step, but no other steps? A sketch of what I'm attempting follows.
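
Here's roughly what I'm attempting — a sketch assuming the 0.x versions-style API (step names are placeholders and the /video/adaptive robot options are omitted); whether the robot's array of results can be mapped onto a single version is exactly what I can't figure out:

class VideoUploader < Shrine
  plugin :versions

  def transloadit_process(io, context)
    original  = transloadit_file(io)
    thumbnail = original.add_step("thumb", "/video/thumbs", count: 1)
    playlist  = original.add_step("adaptive", "/video/adaptive") # robot options omitted

    transloadit_assembly(
      { original: original, thumbnail: thumbnail, playlist: playlist },
      context: context
    )
  end
end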

Support Transloadit Templates?

Does this plugin support using Transloadit Templates?

If not, is there a recommended way to get some of the benefits of templates (mainly not having to include sensitive data and credentials inside assembly steps)?
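
For illustration, the underlying transloadit gem appears to accept a template ID when creating an assembly — a hedged sketch, where the processor wiring is my assumption and the template ID is a placeholder:

class VideoUploader < Shrine
  Attacher.transloadit_processor do
    import = file.transloadit_import_step

    # the remaining steps would live in the template on transloadit.com
    assembly = transloadit.assembly(
      template_id: ENV["TRANSLOADIT_TEMPLATE_ID"],
      steps: [import]
    )
    assembly.create!
  end
end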

Thanks!

Processing a file before promoting it to permanent storage

This week I'm giving the upgrade to Shrine 3.0 another go. I'm noticing something that I think might be a difference in behaviour between 2.x and 3.x, but maybe I'm just doing it wrong.

Just for reference, with Shrine 2.x we had this plugin working in the following way:

Shrine uploads to 'cache' bucket → Transloadit reads from it, writes the result to the 'store' bucket

With 3.x I'm seeing this flow instead:

Shrine uploads to 'cache' storage → Shrine uploads to 'store' storage → … 

So it seems the file is immediately promoted to the store storage, before it is even handed off to Transloadit.

I tried to write a minimal example showing this:
https://gist.github.com/mfilej/ec7afcb7227e83325cc9b56db136d659

Output:

-- create_table(:media_items, {:if_not_exists=>true})
D, [2020-10-07T11:04:06.576914 #60481] DEBUG -- :    (1.4ms)  CREATE TABLE IF NOT EXISTS "media_items" ("id" bigserial primary key, "item_data" text, "created_at" timestamp(6) NOT NULL, "updated_at" timestamp(6) NOT NULL)
   -> 0.1391s
D, [2020-10-07T11:04:06.656286 #60481] DEBUG -- :   ActiveRecord::InternalMetadata Load (3.1ms)  SELECT "ar_internal_metadata".* FROM "ar_internal_metadata" WHERE "ar_internal_metadata"."key" = $1 LIMIT $2  [["key", "environment"], ["LIMIT", 1]]
I, [2020-10-07T11:04:06.720899 #60481]  INFO -- : Metadata (0ms) – {:storage=>:cache, :io=>File, :uploader=>VideoUploader}
I, [2020-10-07T11:04:16.705779 #60481]  INFO -- : Upload (9984ms) – {:storage=>:cache, :location=>"ecdaa8d35fb4470bfea3ef6eb205bd11.mp4", :io=>File, :upload_options=>{}, :uploader=>VideoUploader}
D, [2020-10-07T11:04:16.707895 #60481] DEBUG -- :    (1.4ms)  BEGIN
D, [2020-10-07T11:04:16.712659 #60481] DEBUG -- :   MediaItem Create (4.5ms)  INSERT INTO "media_items" ("item_data", "created_at", "updated_at") VALUES ($1, $2, $3) RETURNING "id"  [["item_data", "{\"id\":\"ecdaa8d35fb4470bfea3ef6eb205bd11.mp4\",\"storage\":\"cache\",\"metadata\":{\"filename\":\"short.mp4\",\"size\":9104458,\"mime_type\":null}}"], ["created_at", "2020-10-07 09:04:16.706182"], ["updated_at", "2020-10-07 09:04:16.706182"]]
D, [2020-10-07T11:04:16.717074 #60481] DEBUG -- :    (4.1ms)  COMMIT
I, [2020-10-07T11:04:16.896918 #60481]  INFO -- : Upload (179ms) – {:storage=>:store, :location=>"63be42c15db65784a2ac144c5722b511.mp4", :io=>VideoUploader::UploadedFile, :upload_options=>{}, :uploader=>VideoUploader}
D, [2020-10-07T11:04:16.899217 #60481] DEBUG -- :    (1.3ms)  BEGIN
D, [2020-10-07T11:04:16.902701 #60481] DEBUG -- :   MediaItem Update (3.2ms)  UPDATE "media_items" SET "item_data" = $1, "updated_at" = $2 WHERE "media_items"."id" = $3  [["item_data", "{\"id\":\"63be42c15db65784a2ac144c5722b511.mp4\",\"storage\":\"store\",\"metadata\":{\"filename\":\"short.mp4\",\"size\":9104458,\"mime_type\":null}}"], ["updated_at", "2020-10-07 09:04:16.897442"], ["id", 7]]
D, [2020-10-07T11:04:16.905304 #60481] DEBUG -- :    (2.3ms)  COMMIT
I, [2020-10-07T11:04:17.475725 #60481]  INFO -- : Transloadit (570ms) – {:processor=>:default, :uploader=>VideoUploader}
...

In the output you can see the two Upload lines happening before the Transloadit line. If I check the assembly on Transloadit, I can see by the path key that it is indeed trying to read from the store storage:

{
  "steps": {
    "import": {
      "use": null,
      "path": "63be42c15db65784a2ac144c5722b511.mp4",

Is this an issue with my code?
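
For reference, the background job I'm aiming for looks roughly like this — a sketch assuming Sidekiq and Shrine's atomic helpers, with class names as placeholders; whether transloadit_process can run against the cached file before promotion is exactly what I'm unsure about:

class TransloaditJob
  include Sidekiq::Worker

  def perform(attacher_class, record_class, record_id, name, file_data)
    attacher_class = Object.const_get(attacher_class)
    record         = Object.const_get(record_class).find(record_id)

    attacher = attacher_class.retrieve(model: record, name: name, file: file_data)

    response = attacher.transloadit_process       # I'd want this to read from :cache
    response.reload_until_finished!

    attacher.transloadit_save(response["results"])
    attacher.atomic_promote                       # promote the original only afterwards
  rescue Shrine::AttachmentChanged, ActiveRecord::RecordNotFound
    # attachment changed or record deleted in the meantime
  end
end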

Configuring storage credentials outside of transloadit.com

Hi, I'm migrating to Shrine 3 and I'm having trouble with credentials in the process.

Before the migration I used to define S3 credentials when defining shrine storages:

Shrine.storages = {
  store: Shrine::Storage::S3.new(access_key_id: k, secret_access_key: a, region: r, bucket: b),
  cache: Shrine::Storage::S3.new(prefix: "cache", access_key_id: k, secret_access_key: a, region: r, bucket: b), # cache defined analogously
}

Now with this approach I'm getting an error:

Shrine::Plugins::Transloadit::Error (credentials not registered for storage :store)

Is this something that is still possible? Having to define the credentials in the web interface on transloadit.com is not ideal from our deployment perspective, so I was wondering if there was a way around this.
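
For completeness, this is the direction the error seems to point at — a sketch of the new credentials mapping, assuming Template Credentials registered on transloadit.com (the :s3_credentials name is a placeholder), which is precisely the step I'd like to avoid:

Shrine.plugin :transloadit,
  auth: {
    key:    ENV["TRANSLOADIT_AUTH_KEY"],
    secret: ENV["TRANSLOADIT_AUTH_SECRET"],
  },
  credentials: {
    cache: :s3_credentials, # names of Template Credentials on transloadit.com
    store: :s3_credentials,
  }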
