s3-website

Easily publish static websites to Amazon S3. TLS encryption can be enabled via CloudFront.

Creates a bucket with the specified name, enables static website hosting on it, and sets up a public-read bucket policy.

Your AWS credentials should be in ~/.aws/credentials, in a file named .env in the local directory with the values

AWS_ACCESS_KEY_ID=MY_KEY_ID
AWS_SECRET_ACCESS_KEY=MY_SECRET_KEY

or in the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.

Your website policy and configuration are only sent to S3 when they differ from the existing ones.

Note!

Because of limitations of the S3 API, any changes made to the website policy or configuration in the S3 web interface, or elsewhere, will be overwritten by the settings provided to s3-website.

Installation

s3-website is a Node.js program/module.

npm install -g s3-website

Usage (CLI)


  $ s3-website -h
  Commands:

    create [options] <domain>      Will create and configure an s3 website
    deploy [options] <upload-dir>  Will push contents of directory to specified s3 website

To see the options for each command, run s3-website <command> -h.

Create

Usage: s3-website create <desired.bucket.name> [options]

  • Will create a new bucket with the desired name
  • Will configure the bucket for static hosting

Deploy

Usage: s3-website deploy <directory-to-upload> [options]

  • Will upload all contents of the directory to the bucket, replacing existing files
  • The bucket can be specified by providing the command line argument -d (or --domain) followed by the name of the S3 bucket. If no option is provided, s3-website will look for the config file written when the bucket was created. Note: because of an issue in the command line library, you must put the -d option last (see the example below); the next dependency version should fix this.
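
For example (directory and bucket name are hypothetical):

  s3-website deploy public -d cool.website.me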

All the options are optional ;-).

s3-website create -r eu-central-1 cool.website.me creates a website bucket reachable at http://cool.website.me.s3-website.eu-central-1.amazonaws.com. You can then set up a CNAME record for cool.website.me pointing to that hostname.

For the TLS related options take a look at the cloudfront-tls readme.

Usage (API)

const create = require('s3-website').s3site;

create({
  domain: 'test.site.me', // required, will be the bucket name
  region: 'eu-central-1', // optional, default: us-east-1
  index: 'index.html', // optional index document, default: index.html
  error: '404.html', // optional error document, default: none
  exclude: ['.git/*', '.gitignore'], // optional path patterns to be excluded from being created/updated/removed, default: [], `*` is the wildcard
  routes: [{
    Condition: {
      KeyPrefixEquals: 'foo/'
    },
    Redirect: {
      HostName: 'foo.com'
    }
  }]
}, (err, website) => {
  if(err) {
    throw err;
  }
  console.log(website);
})

You can also pass in the same TLS-related options as in cloudfront-tls, so you might want to take a look at its readme if you want to use your own certificates.

If you want to deploy using the API, create an s3 instance:

const deploy = require('s3-website').deploy
    , config = require('./config')
    , AWS = require('aws-sdk')
    , s3 = new AWS.S3({ region: config.region });

deploy(s3, config, (err, website) => {
  if(err) {
    throw err;
  }
  console.log(website);
})

Routing Rules

RoutingRules can be provided via the CLI and the API. From the CLI you will need to provide the path to a file that can be loaded via require, that is to say, a .js or .json file. This file should export an array of rules that conform to the S3 Routing Rules syntax. Likewise, you can provide an array of rules to the API with the routes option.
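
For example, such a file could look like this (a minimal sketch; the file name routes.js is hypothetical, and the fields follow the S3 RoutingRules schema):

// routes.js - exports an array of rules in the S3 RoutingRules format
module.exports = [{
  Condition: {
    KeyPrefixEquals: 'docs/' // applies to keys beginning with docs/
  },
  Redirect: {
    ReplaceKeyPrefixWith: 'documents/' // rewrite that prefix on redirect
  }
}]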

Redirecting All Requests

To redirect all requests to another domain (e.g. www -> non-www) you can use the redirectall option. NOTE: index, error, and routing rules are not needed when redirecting all requests to another domain.

const create = require('s3-website').s3site;

create({
  domain: 'www.site.me', // required, will be the bucket name
  region: 'eu-central-1', // optional, default: us-east-1
  redirectall: 'site.me'
}, (err, website) => {
  if(err) {
    throw err;
  }
  console.log(website);
})

Custom Content Types

Sometimes you may want to change the Content-Type header for specific files, for example to serve PHP files from S3 as HTML. You can pass an object (contentTypes) describing your custom needs:

config.contentTypes = {
  php: 'text/html'
}

deploy(s3, config, (err, website) => {
  if(err) {
    throw err;
  }
  console.log(website);
})

License

ISC


s3-website's Issues

Folder/directory path problem when deployed from Windows

Thanks for this convenient utility. One issue that came up for me on Windows: when deploying a directory with folders (possibly multiple levels deep), the paths in S3 are not suitable for HTTP resolution, since they contain the \ separator rather than /.

For example, I have the following file in my build directory (the directory pushed to S3):

  • <project>\build\css\style.css

This goes into S3 as:

  • css\style.css, yielding the undesirable url: http://<s3.bucket.url>/css%5Cstyle.css

Instead, this file needs to go into S3 as:

  • css/style.css, yielding the proper url: http://<s3.bucket.url>/css/style.css.

Feel free to take it or leave it, but I did fork some changes to fix this on Windows. It was necessary to intercept a couple of functions on the Node path module, but everything appears to be working correctly now.
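
A minimal sketch of the kind of normalization involved (an illustration, not the author's actual patch):

const path = require('path')

// Convert a local relative path into a POSIX-style S3 key so that
// Windows back-slashes never end up in object keys.
function toS3Key (relativePath) {
  return relativePath.split(path.sep).join('/')
}

// On Windows: toS3Key('css\\style.css') => 'css/style.css'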

Redirects are being ignored

My .s3-website.json has this value:

   "routes": [
      {
         "Redirect": {
            "example/path/index.html": "new/path/",
          }
      }]

When I run s3-website deploy the config file is used, as evidenced by correctly uploading most of the content. However, it seems to be ignoring these redirect configurations entirely.

I'm happy to contribute, but curious to check first if I'm just doing something wrong.
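
For reference, the S3 RoutingRules schema pairs a Condition with a Redirect whose members are named fields; a rule in that shape might look like the following (a sketch of the documented format, not a confirmed fix for this issue):

   "routes": [{
      "Condition": {
         "KeyPrefixEquals": "example/path/index.html"
      },
      "Redirect": {
         "ReplaceKeyWith": "new/path/"
      }
   }]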

Allow to set custom Content-Type per file extension

Hi, I'm trying to upload a folder to S3 and change the Content-Type of .php files to "text-plain/html".

On line 177 there's a function that uploads a single file, and the Content-Type is set by looking at the file extension:

function uploadFile (s3, config, file, cb) {
  var params = {
    ...
    ContentType: mime.lookup(file),
  }
  s3.putObject(params, function (err, data) {
    if (err && cb) { return cb(err, data, file) }
    if (cb) { cb(err, data, file) }
  })
}

Why not accept an object that specifies custom content types according to extension, something like:

config.contentType = {
  php: 'application/text-html',
  zip: 'application/octect-stream',
  ...
}

and then implement the function like this:

function uploadFile (s3, config, file, cb) {
  const extension = getExtension(file); // hypothetical helper, e.g. path.extname(file).slice(1)
  const desiredCT = config.contentType && config.contentType[extension];

  var params = {
    ...
    ContentType: desiredCT || mime.lookup(file),
   }

  s3.putObject(params, function (err, data) {
    if (err && cb) { return cb(err, data, file) }
    if (cb) { cb(err, data, file) }
  })
}

This would be very useful indeed.

Upgrade to AWS SDK v3

AWS will be putting SDK v2 into maintenance mode this year, and is trying to push people to update to v3. When you run s3-website now, you get the following message:

(node:64563) NOTE: We are formalizing our plans to enter AWS SDK for JavaScript (v2) into maintenance mode in 2023.

Please migrate your code to use AWS SDK for JavaScript (v3).
For more information, check the migration guide at https://a.co/7PzMCcy

@klaemo I would be happy to get this done -- I've been converting my projects over to v3, and it wouldn't take me long at all. That said, I notice there are two PRs that are now years old with no resolution, and a bunch of legit issues with no activity. Have you stopped maintaining this repo? Do you need help? I like the simplicity of this library, and I'm loathe to figure out a different one. Maybe with a little community help, we could dust this project off?

Transforming files when deploying

It would be very useful if there was a hook to manipulate the file path as each file is being uploaded to S3. In some cases it's possible to just do this locally in advance before running s3-website deploy, but because S3 is not a traditional filesystem, certain things that are possible in S3 are not possible locally. For example, a file that looks like a folder name but also contains files in its “directory”.

file
file/with_file_in_it

That wouldn't be possible to prepare locally first. The only way is to change the destination path during upload, I think.

Ignore error --feature

I am trying to build a CI/CD pipeline using your module, but because I did not set my S3 bucket to be a static website I am receiving an error saying that it is not properly configured.

Is there any way of implementing a flag to ignore the error and finish with success? Thanks!

How to specify "Cache-control" max-expires for objects

Hi,
when deploying a site, I enable CloudFront for it with the option to redirect http -> https requests.

The problem is that the 301 redirect is permanently cached in Chrome. Searching the web, I found that setting "Cache-control"/max-expires headers on S3 objects avoids the caching, so I would like to know if that's possible with this library.
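
The library doesn't appear to expose this, but the underlying AWS SDK putObject call does accept a CacheControl parameter. A sketch of what a per-object cache header looks like at the SDK level (assuming an AWS.S3 instance s3 as in the README; bucket, key, and body are hypothetical):

// Hypothetical direct SDK call showing the CacheControl parameter
s3.putObject({
  Bucket: 'cool.website.me',
  Key: 'index.html',
  Body: '<h1>Hello</h1>',
  ContentType: 'text/html',
  CacheControl: 'max-age=300' // tell browsers/CDNs to cache for at most five minutes
}, function (err, data) {
  if (err) throw err
})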

Call to deploy fails if excluded configs list is not provided

With the changes added in 3.1.1 to enable the --excluded flag, a config passed into the deploy(s3, config, cb) call that does not include the exclude property will throw an error. The block related to this issue starts at line 516 in index.js:

// exclude files from the diff
config.exclude.forEach(function (excludePattern) {
  for (var key in data) {
    data[key] = data[key].filter(function (path) {
      return !wildcard(excludePattern, path)
    })
  }
})

One fix would be a simple check that exclude is defined before assuming it is and calling forEach on it.
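
A sketch of that guard, falling back to an empty array (assuming the surrounding code from index.js shown above):

// exclude files from the diff; tolerate configs without `exclude`
(config.exclude || []).forEach(function (excludePattern) {
  for (var key in data) {
    data[key] = data[key].filter(function (path) {
      return !wildcard(excludePattern, path)
    })
  }
})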

No documentation for usage of CloudFront config

I can set enableCloudfront to true, but there's no indication in any of the documentation as to how I am supposed to configure it with my CloudFront distribution id.

I'm happy to contribute, but curious to check first if I'm just doing something wrong.

Deployment default to public?

hi there, great tool, thank you!

I'm wondering if there is a way to set permissions of files being deployed to 'public read' automagically? Currently after deploying, I have to manually set the objects to public view.

Thank you,
Paola

Is it possible to maintain previous routing rules and then add new rules?

Where do we need to give credentials?

const create = require('s3-website').s3site;

create({
  domain: 'test.site.me', // required, will be the bucket name
  region: 'eu-central-1', // optional, default: us-east-1
  index: 'index.html', // optional index document, default: index.html
  error: '404.html', // optional error document, default: none
  exclude: ['.git/*', '.gitignore'], // optional path patterns to be excluded from being created/updated/removed, default: [], * is the wildcard
  routes: [{
    Condition: {
      KeyPrefixEquals: 'foo/'
    },
    Redirect: {
      HostName: 'foo.com'
    }
  }]
}, (err, website) => {
  if (err) {
    throw err;
  }
  console.log(website);
})

Any example available for this?

Files uploading but getting an AccessDenied: Access Denied

I'm using this module in API mode. All the files get uploaded but an error is thrown and the DONE console log doesn't get called.

Here's my code:

const config = {
	region: 'eu-west-2',
	domain: 'bucket.name',
	deploy: true,
	index: 'index.html',
	uploadDir: './public/'
};

function deploySite () {

	process.env.AWS_ACCESS_KEY_ID = 'XXXXXXXXXXX';
	process.env.AWS_SECRET_ACCESS_KEY = 'XXXXXXXXXXX';

	const AWS = require('aws-sdk');
	const s3 = new AWS.S3({
		region: config.region,
	});
	
	const deploy = require('s3-website').deploy;	

	deploy(s3, config, (err, website) => {
		if (err) {
			throw err;
		}
		console.log('DONE');
	});
}

And here's the error:

throw err;
            ^
AccessDenied: Access Denied

Do you know what might be going on here? Many thanks!

Deploy rewrites my .s3-website.json config with an empty array; fails to exclude anything

My .s3-website.json has this value:

   "exclude": ["*.xcf"],

When I run s3-website deploy the config file is used, as evidenced by correctly uploading most of the content. However:

  • the files I've asked it to exclude are uploaded anyway
  • s3-website modifies my config file: I see the message "Updated config file: .s3-website.json", and after that the .s3-website.json file has an empty array for exclude.

I'm happy to contribute, but curious to check first if I'm just doing something wrong.

Use glob pattern to support exclude (or include) option

The wildcard module is not suitable for implementing the exclude option.
wildcard only supports tokenized string matching, so to match file patterns like js/name.chunk.js.map, I have to specify exclude: ['*.*.*.*.map'], not exclude: ['**/*.map'].
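
A sketch of what glob-based matching could look like using the minimatch package (an assumption about a possible fix, not current behavior):

const minimatch = require('minimatch')

// Glob-based exclude check: '**/*.map' matches nested files such as
// js/name.chunk.js.map, which tokenized wildcards cannot express.
function isExcluded (filePath, patterns) {
  return patterns.some(function (pattern) {
    return minimatch(filePath, pattern)
  })
}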

Liberty upload error

Hi, I've used this plugin with great success for several years, and have setup a pipeline that relies on this plugin to upload built files to AWS S3. However from today (9/01/2022), it suddenly stopped working, with a very weird hacked-like message (screenshot below):
[Screenshot (46)]

I've investigated and found that most of this package's dependencies have not changed in some time, except for aws-sdk, which I tried downgrading with no luck.

Can't figure out what's going on, and it's happening on my local machine (macOS Big Sur version 11.6.2) and also my pipeline deployment machine (Ubuntu Bionic 18.04 LTS). Please help!

express and s3-website

Hi, I have multiple Express apps which I want to serve the same website. I thought to put the static website in S3 (so it would be easy to modify the code if needed without deploying to each Express app). I thought about using your module inside the Express apps, but I did not understand how.
Right now I do this:
app.use(express.static('public'));
I also noted that I can do this:
app.get('/', function(req, res) { res.sendfile('index.html'); });

but this is when I store index.html in the public folder of the app... I want it to be stored in S3 using your module.

Accidentally pushed to master

I accidentally pushed a feature branch to master. I wasn't sure of a great way to contact you @klaemo, so I figured this would be a good way to let you know. Should I revert and push a new master branch?

Getting access denied

the beginning:

[screen shot 2017-10-29 at 16 02 37]

and the end:

[screen shot 2017-10-29 at 16 03 01]

here's my s3-website.json config file:

{
   "index": "index.html",
   "region": "us-west-2",
   "uploadDir": "web",
   "prefix": "",
   "exclude": [],
   "corsConfiguration": [],
   "enableCloudfront": true,
   "retries": 20,
   "domain": "cscade.com",
   "configFile": ".s3-website.json"
}

my .env credentials file is in the same directory as the one I'm uploading: web

Ready for release

Hey, I added some new stuff. A summary of the things that were added can be found in the last pull request description. I don't think I ever got ahold of your actual email. Sorry to ping you via issue again.

Thanks,
Nick Benoit

CNAMEAlreadyExists issue with enableCloudFront

When combined with the CloudFront options, only the first run succeeds; subsequent runs fail with the following error:

CNAMEAlreadyExists: One or more of the CNAMEs you provided are already associated with a different resource.
    at Request.extractError (/Users/rok/WebstormProjects/closer-landing-page/node_modules/aws-sdk/lib/protocol/rest_xml.js:53:29)
    at Request.callListeners (/Users/rok/WebstormProjects/closer-landing-page/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
    at Request.emit (/Users/rok/WebstormProjects/closer-landing-page/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
    at Request.emit (/Users/rok/WebstormProjects/closer-landing-page/node_modules/aws-sdk/lib/request.js:673:14)
    at Request.transition (/Users/rok/WebstormProjects/closer-landing-page/node_modules/aws-sdk/lib/request.js:22:10)
    at AcceptorStateMachine.runTo (/Users/rok/WebstormProjects/closer-landing-page/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at /Users/rok/WebstormProjects/closer-landing-page/node_modules/aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (/Users/rok/WebstormProjects/closer-landing-page/node_modules/aws-sdk/lib/request.js:38:9)
    at Request.<anonymous> (/Users/rok/WebstormProjects/closer-landing-page/node_modules/aws-sdk/lib/request.js:675:12)
    at Request.callListeners (/Users/rok/WebstormProjects/closer-landing-page/node_modules/aws-sdk/lib/sequential_executor.js:115:18)

Are there any configuration points where we can specify the CF distribution ID or something else?

Readme is out of date

The programmatic API examples should point to require('s3-website').s3site. Also, a usage example for programmatic deployment would be useful (it involves creating an s3 instance from aws-sdk to pass to putWebsiteContent). Would you like a PR?

Can you leave any existing files in the bucket

When we deploy, this cleans out any files in the S3 bucket which are not in the local folder, which is nice, although in some situations we want to keep those files as they represent older versions of the site.

Is this possible already? thanks

Support ACM Certificates

Right now s3-website assumes that your certificates have been manually imported into IAM, when many users will have their certificates provisioned natively by ACM.

Add support to delete bucket

I am using your tool in a CI environment where I create/deploy preview apps. Since these apps don't last long, I need to delete them often. It would be great to have an option to remove the site as well.

AWS Profiles

Hi there,

Is it possible that this application can support AWS profiles?

I could submit a PR when time permits if desired.

Thanks!
