
s3-uploader's Introduction

S3 Uploads

Show your support!

Star my code on GitHub or Atmosphere if you like it, or shoot me a dollar or two!

DONATE HERE

Installation

# On your server
$ npm i --save s3up-server
# On your client
$ npm i --save s3up-client
# If you are using React, install s3up-react instead of s3up-client for additional functionality
$ npm i --save s3up-react

How to use

Step 1

Instantiate your S3Up Instance. SERVER SIDE

import { S3Up } from 's3up-server';

const s3Up = new S3Up({
  // You may pass any of the parameters described in aws-sdk.S3's documentation
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  bucket: "bucketname", // required
});

See the aws-sdk S3 constructor documentation for the full list of parameters.

Step 2

Expose S3Up's methods to the client. Here is an example in Meteor. SERVER SIDE

Meteor.methods({
  authorizeUpload: function(ops) {
    this.unblock();
    // Do some checks on the user before signing the upload
    return s3Up.signUpload(ops);
  },
})

signUpload parameters: requires at least key to determine the target location of the upload.
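Outside Meteor, the same idea applies: expose an endpoint that authenticates the caller and returns the signed policy. Below is a minimal sketch using Node's built-in http module; the /authorize-upload path and JSON request shape are illustrative assumptions, not part of the package.

```javascript
import http from 'http';
import { S3Up } from 's3up-server';

const s3Up = new S3Up({ bucket: 'bucketname' });

// POST /authorize-upload with a JSON body like { "key": "uploads/photo.png" }
http.createServer((req, res) => {
  if (req.method !== 'POST' || req.url !== '/authorize-upload') {
    res.writeHead(404);
    res.end();
    return;
  }
  let body = '';
  req.on('data', (chunk) => { body += chunk; });
  req.on('end', async () => {
    // Do some checks on the user here before signing the upload
    const ops = JSON.parse(body);
    // await tolerates both sync and async signUpload implementations
    const signed = await s3Up.signUpload({ key: ops.key });
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(signed));
  });
}).listen(3000);
```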

Step 3

Wire up your views with the upload function. Here is an example with Meteor Blaze's template events. CLIENT SIDE

import { uploadFiles } from 's3up-client';

Template.example.events({
  'click .upload': function(event, instance) {
    uploadFiles(instance.$("input.file_bag")[0].files, {
      signer: (file) => new Promise((resolve, reject) => Meteor.call('authorizeUpload', { key: file.name }, (err, res) => {
        if (err) return reject(err);
        return resolve(res);
      })),
      onProgress: function(state) {
        console.log(state);
        console.log(state.percent);
      },
    });
  },
});

Create your Amazon S3

For all of this to work, you need an AWS account.

1. Create an S3 bucket in your preferred region.

NOTE: Do not block all public access unless you are planning to only use signed requests to get objects.

2. Access Key Id and Secret Key

  1. Navigate to your bucket
  2. On the top right side you'll see your account name. Click it and go to Security Credentials.
  3. Create a new access key under the Access Keys (Access Key ID and Secret Access Key) tab.
  4. Enter this information into your app as defined in "How to Use" "Step 1".

3. Enable CORS

Setting this will allow your website to POST data to the bucket. If you want to be more cautious, set the AllowedOrigin and AllowedHeader to your domain.

  1. Select the bucket's properties and go to the "Permissions" tab.
  2. Click "Edit CORS Configuration" and paste this:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
  3. Click "Edit bucket policy" and paste this (replace the bucket name with your own) to allow anyone to read content from the bucket (only do this if you have turned "Block public access" off):
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::YOURBUCKETNAME/*"
    }
  ]
}
  4. Click "Save".

API Server

new S3Up(args): Class for handling data in your S3 bucket. You may authenticate with any of the other methods described in the aws-sdk credentials documentation; this class accepts everything the standard aws-sdk S3 constructor accepts and expands on it without extending it.

  • args.bucket (required): Target bucket for all subsequent S3Up commands.
  • args.accessKeyId (required unless authenticating another way): IAM user Access Key ID.
  • args.secretAccessKey (required unless authenticating another way): IAM user Secret Access Key.

S3Up.signUpload(args): For authorizing client-side uploads.

  • args.key (required): The location of the file in S3.
  • args.expires (optional): The number of seconds for which the presigned policy should be valid. (default: 3600)
  • args.conditions (optional): An array of conditions that must be met for the presigned policy to allow the upload. This can include required tags, the accepted range for content lengths, etc.
  • args.fields (optional): Fields to include in the form. All values passed in as fields will be signed as exact match conditions.
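For instance, a policy that caps the upload size and pins the content type might look like the sketch below. The parameter names follow the list above; the exact condition and field syntax follows AWS's presigned-POST format, so treat the values as illustrative.

```javascript
// Sketch: sign an upload that must be a PNG of at most 5 MB,
// with a policy that expires in 10 minutes
const signed = s3Up.signUpload({
  key: 'uploads/avatar.png',
  expires: 600,
  conditions: [
    ['content-length-range', 0, 5 * 1024 * 1024],
  ],
  fields: {
    'Content-Type': 'image/png', // signed as an exact-match condition
  },
});
```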

S3Up.download(args): For downloading files from S3 to your server

  • args.to (required): Local destination path for the file. This does not check whether the directory exists; you'll need to take care of that yourself.
  • args.from.key (required): Key of the S3 file (e.g. 'directory/thing.txt')
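A sketch of pulling a file down to the server, assuming download returns a promise; note the directory is created first, since download will not do that for you.

```javascript
import fs from 'fs';

// download() does not create the target directory, so ensure it exists first
fs.mkdirSync('/tmp/downloads', { recursive: true });

await s3Up.download({
  to: '/tmp/downloads/thing.txt',
  from: { key: 'directory/thing.txt' },
});
```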

S3Up.upload(args): For uploading files stored on your server to S3

  • args.body (required): The file you're uploading (buffer, blob, or stream)
  • args.key (required): The destination key of the file you're uploading
  • args.onProgress: A function called whenever upload progress is made
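A sketch of streaming a local file up from the server, assuming upload accepts a readable stream for body as the list above suggests:

```javascript
import fs from 'fs';

// Stream a local file to S3 rather than buffering it in memory
await s3Up.upload({
  key: 'backups/dump.sql',
  body: fs.createReadStream('/tmp/dump.sql'),
  onProgress: (progress) => console.log(progress),
});
```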

API Client

uploadFile(file, args): For uploading a single file

  • file (required): An instance of File as provided by HTML.input[type="file"]'s FileList
  • args.signer (required): A function or async function that will call the server's S3Up.signUpload() function and return its full response ({ url, fields }).
  • args.onProgress(state) (optional): A function for tracking upload progress
    • state.url: The full location of the file once the upload is complete
    • state.key: The key of the file in S3
    • state.loaded: How many bytes have loaded up
    • state.total: How many bytes will be loaded up
    • state.percent: How much progress as a percentage [0-100] the upload has completed
  • args.isBase64 (optional): A boolean describing whether uploaded files need to be converted to a blob
  • args.base64ContentType (optional): The content type of the base64 files
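A minimal single-file call in the browser; the /authorize-upload endpoint and the key scheme are illustrative assumptions about your own server, not part of the package.

```javascript
import { uploadFile } from 's3up-client';

const input = document.querySelector('input[type="file"]');

uploadFile(input.files[0], {
  // signer must resolve to the server's full signUpload response: { url, fields }
  signer: (file) => fetch('/authorize-upload', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ key: `uploads/${file.name}` }),
  }).then((res) => res.json()),
  onProgress: (state) => console.log(state.percent),
});
```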

uploadFiles(files, args): For uploading multiple files in batches (this makes sure the client doesn't run into any memory issues)

  • files (required): An instance of FileList as provided by HTML.input[type="file"], an array of Files or an array of objects with a file key that contains an instance of File
  • args.signer (required): A function or async function that will call the server's S3Up.signUpload() function and return its full response ({ url, fields }).
  • args.onProgress(state) (optional): A function for tracking upload progress
    • state.list: An object of all files being uploaded
      • state.list[n].url: The full location of the file once the upload is complete
      • state.list[n].key: The key of the file in S3
      • state.list[n].loaded: How many bytes have loaded up
      • state.list[n].total: How many bytes will be loaded up
      • state.list[n].percent: How much progress as a percentage [0-100] the upload has completed
      • state.list[n]...: If you pass an array of objects, the remaining properties that do not collide with any of the previous keys will be passed through
    • state.toArray(): A function that returns state.list as an array
    • state.total(): A function that calculates current known total bytes to upload
    • state.loaded(): A function that calculates current known total bytes uploaded
    • state.percent(): A function that calculates current known progress of all uploads
  • args.isBase64 (optional): A boolean describing whether uploaded files need to be converted to a blob
  • args.base64ContentType (optional): The content type of the base64 files
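The state.loaded(), state.total(), and state.percent() helpers amount to summing over state.list. The hypothetical re-implementation below (not the package's actual code) shows the equivalent aggregation in plain JavaScript:

```javascript
// Hypothetical re-implementation of the aggregate progress helpers:
// sum loaded/total bytes across all entries in state.list
function aggregateProgress(list) {
  const items = Object.values(list);
  const loaded = items.reduce((sum, item) => sum + item.loaded, 0);
  const total = items.reduce((sum, item) => sum + item.total, 0);
  const percent = total === 0 ? 0 : Math.round((loaded / total) * 100);
  return { loaded, total, percent };
}
```

For example, aggregateProgress({ a: { loaded: 50, total: 100 }, b: { loaded: 100, total: 100 } }) returns { loaded: 150, total: 200, percent: 75 }.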

API React

Everything in API Client is exposed, as well as the following:

const [upload, state] = useSignedUpload(args) (REACT only): For uploading files and managing state easily

  • upload(FileList): A function that runs the uploads as described by uploadFiles. Returns the resulting state after completion.
  • state: The current state of uploads as described by uploadFiles but without requiring function calls
  • args: The upload functions definition
    • args.signer (required): A function or async function that will call the server's S3Up.signUpload() function and return its full response ({ url, fields }).
    • args.isBase64 (optional): A boolean describing whether uploaded files need to be converted to a blob
    • args.base64ContentType (optional): The content type of the base64 files
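A sketch of wiring the hook into a component; the /authorize-upload endpoint and key scheme are illustrative assumptions about your own server.

```javascript
import React from 'react';
import { useSignedUpload } from 's3up-react';

function Uploader() {
  const [upload, state] = useSignedUpload({
    // signer must resolve to the server's full signUpload response: { url, fields }
    signer: (file) => fetch('/authorize-upload', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ key: `uploads/${file.name}` }),
    }).then((res) => res.json()),
  });

  return (
    <div>
      <input type="file" onChange={(e) => upload(e.target.files)} />
      {/* state mirrors uploadFiles' progress state, but as plain values */}
      <progress max="100" value={state.percent} />
    </div>
  );
}
```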

s3-uploader's People

Contributors

lepozepo, ziedmahdi


s3-uploader's Issues

Assign path within bucket?

Thanks so much for creating this great, updated package! I'm migrating from the older Atmosphere package and unsure how to specify the path an upload should go to within a bucket here.

In the Atmosphere package, we could specify the path like so:

S3.upload(
  {
    file: thisFile,
    path: pathFolderVariable + '/imports'
  },
  (error, response) => {
    // ... handle response
  }
);

Is there a similar option in this package?

Thanks!
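With this package the path lives inside key: everything before the final slash becomes the folder structure in the bucket, per signUpload's key parameter above. The small hypothetical helper below (buildKey is not part of the package) sketches the equivalent of the old path option:

```javascript
// Hypothetical helper: prefix a file name with a folder,
// as the old Atmosphere package's `path` option did
function buildKey(folder, fileName) {
  // Strip trailing slashes so we don't emit "folder//file"
  const prefix = folder.replace(/\/+$/, '');
  return prefix ? `${prefix}/${fileName}` : fileName;
}

// Then sign with that key, e.g.
// s3Up.signUpload({ key: buildKey(pathFolderVariable + '/imports', file.name) })
```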

[Question] Using with Ionic Apps

Hello. I'm trying to use this package with the server side running on Meteor and the client side in an Android app built with the Ionic Framework. Is this possible? For now, I get the authorizer from the server and try the upload in the app, but I receive the error "authorizer is not a function". Any ideas? Thanks

Slingshot type functionality

With meteor-slingshot, the user is able to directly upload the files to S3 without putting extra load on the application server (proxying the request). Does your package have this capability?

authorizer throws Exception, should invoke callback instead

Hi Marcelo,

When the authorizer method call fails (i.e. if the user is not authorized), the exception thrown by the authorizer cannot be caught by the client. The authorizer should instead invoke the callback that was passed in.

try {
  upload_file(base64, {
    encoding: 'base64',
    authorizer: Meteor.call.bind(this, "authorize_upload"),
    upload_event: addedFn
  });
} catch (e) {
  // NOT REACHED
}

Example from upload_file.js:

return authorizer({
  path: path,
  acl: acl,
  bucket: bucket,
  region: region,
  expiration: expiration,
  file_name: file_name,
  file_type: file.type,
  file_size: file.size
}, function (error, signature) {
  var form_data, xhr;
  if (error) {
    throw error;
  }
  // ...

Proposed fix:

if (error) {
  // throw error
  return upload_event(error, null);
}

Best regards,
Beemer

set ACL parameters when uploading

Hi, great package, very useful.

I managed to upload files, but somehow I can't read them yet. I'd like to set the ACL parameter to public-read so that everyone has access to the file; where should I change this? I tried the following in the s3Up.signUpload function:

s3Up.signUpload({
  key,
  ACL: "public-read"
})

but it didn't succeed.

Also, is there a more secure way to control who has access to the pictures (getting a signed URL to download, for example, or in Meteor at least checking whether the user is authenticated before accessing a picture, etc.)?

Thanks a lot

Update npm to latest version?

Currently trying to replace Meteor Slingshot with this -- so far working well! I tried running this using the latest npm version, but ran into the bucket issue that was fixed in #8 so I manually overrode it. Does the npm package need to be updated to reflect the latest changes?

Cache Control & s3Up.signUpload

Hello,

We are using the library and like it a lot. However, I haven't managed to set Cache-Control when using signUpload on the server side. I have tried the following, but it doesn't seem to be working:

return s3Up.signUpload({
  key,
  fields: {
    ContentType: fileType,
    CacheControl: "max-age=31536000",
  },
});

Thanks in advance for your feedback.

Action required: Greenkeeper could not be activated 🚨

🚨 You need to enable Continuous Integration on Greenkeeper branches of this repository. 🚨

To enable Greenkeeper, you need to make sure that a commit status is reported on all branches. This is required by Greenkeeper because it uses your CI build statuses to figure out when to notify you about breaking changes.

Since we didn’t receive a CI status on the greenkeeper/initial branch, it’s possible that you don’t have CI set up yet. We recommend using Travis CI, but Greenkeeper will work with every other CI service as well.

If you have already set up a CI for this repository, you might need to check how it’s configured. Make sure it is set to run on all new branches. If you don’t want it to run on absolutely every branch, you can whitelist branches starting with greenkeeper/.

Once you have installed and configured CI on this repository correctly, you’ll need to re-trigger Greenkeeper’s initial pull request. To do this, please click the 'fix repo' button on account.greenkeeper.io.

Metadata

Hi @Lepozepo,

how do you suggest we pass metadata details for files? My present interest is for Cache_Control and Expires which are indeed necessary for Cloudfront/ S3 caching.

I have a modified version of your npm package, but I am not sure I want to publish it as ... another package. Do you have any plans for this, or should I rather go ahead and do my own thing?

The changes I made are in upload_file.js, authorize.js, and the upload method. I initialize a variable with the metadata details and pass it to the method as well as to the uploader. If no metadata is provided, the files upload with Cache_Control = no-cache and Expires = '0' (expired).

Looking forward to hearing from you,
Paul

Possible release ?

Hi, I was just wondering if there is a release date?
I really like this library, but I'm moving away from Meteor and I still need to be able to upload files to my S3 bucket.

Thanks.

Feature Request: Pre-signed GET URL Component/hook

Hello, I'd like to start off by thanking the maintainers for their hard work on this package. It's a very solid piece of software that has saved me a lot of time 😃.

Would it be possible to add pre-signed URL support? I would like to restrict viewing of files, and the SO question I asked about doing this kind of thing pointed to pre-signed URLs as the way to go.

@Lepozepo You mentioned possibly using Authorizer.S3.getSignedUrl, what might that look like specifically? I'm interested in making a PR for this feature but I don't know where to start.

ping @wesleyfsmith
