mhart / aws4fetch
A compact AWS client and signing utility for modern JS environments
License: MIT License
I'm running NodeJS 10.7.0, but it looks like util.TextEncoder should exist in this version. Any ideas what I'm doing wrong here?
/snip/node_modules/aws4fetch/dist/aws4fetch.cjs.js:5
const encoder = new TextEncoder('utf-8');
^
ReferenceError: TextEncoder is not defined
at Object.<anonymous> (/snip/node_modules/aws4fetch/dist/aws4fetch.cjs.js:5:17)
at Module._compile (internal/modules/cjs/loader.js:689:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:700:10)
at Module.load (internal/modules/cjs/loader.js:599:32)
at tryModuleLoad (internal/modules/cjs/loader.js:538:12)
at Function.Module._load (internal/modules/cjs/loader.js:530:3)
at Module.require (internal/modules/cjs/loader.js:637:17)
at require (internal/modules/cjs/helpers.js:20:18)
at Object.<anonymous> (/snip/my_code.js:3:18)
at Module._compile (internal/modules/cjs/loader.js:689:30)
Would it be possible to add R2 support to the host guessing?
Line 376 in 447d747
The guessed region should be auto and the service would be s3 for <account>.r2.cloudflarestorage.com (or, when using virtual bucket addressing, <bucket>.<account>.r2.cloudflarestorage.com).
The S3 console gives problematic URLs; they take the form:
<bucket_name>.s3-<region>.amazonaws.com
Note the - instead of a . after s3. This causes the service/region guess to fail. Illustrated here: region is guessed as s3-us-west-2 and service as the bucket name.
I think this might be a reasonable patch if you're not too opposed to adding another regex:
--- node_modules/aws4fetch/dist/aws4fetch.cjs2.js 2020-04-22 21:05:03.136867900 -0700
+++ node_modules/aws4fetch/dist/aws4fetch.cjs.js 2020-04-22 21:00:49.403364900 -0700
@@ -270,6 +270,9 @@
} else if (region === 's3' || region === 's3-accelerate') {
region = 'us-east-1';
service = 's3';
+ } else if (/^s3-\w+-\w+-\d+/.test(region)) {
+ region = region.substring(3);
+ service = 's3';
} else if (service === 'iot') {
if (hostname.startsWith('iot.')) {
service = 'execute-api';
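A quick sanity check of the proposed regex against the problematic hostname segment described above:

```javascript
// The legacy-style region segment "s3-us-west-2" should match the patch's
// new branch, yielding service "s3" and the real region with "s3-" stripped.
const legacyS3Region = /^s3-\w+-\w+-\d+/;
let region = 's3-us-west-2';
let service;
if (legacyS3Region.test(region)) {
  service = 's3';
  region = region.substring(3); // drop the "s3-" prefix
}
```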
Hey Michael 👋
This library is awesome. Thank you!
I'm trying to make a cross-account call for the Pinpoint service.
Account A (111111111111) holds the Cognito user pool and is minting the creds aws4fetch is using.
Account B (222222222222) holds the Pinpoint service. Both are in the same region.
When making the fetch call in a single account, everything just works ™. When making the call across accounts, I get this error back:
// req to https://pinpoint.us-east-1.amazonaws.com/v1/apps/123456789/events
{"message":
"User: arn:aws:sts::111111111111:assumed-role/blah-IdentityPoolUnauthenticatedRole-1HMXYI2Z7K8DF/CognitoIdentityCredentials
is not authorized to perform: mobiletargeting:PutEvents
on resource: arn:aws:mobiletargeting:us-east-1:111111111111:apps/123456789/events"}
Inspecting the error, it looks like the account ID on the resource is wrong. It's hitting account 111111111111 when it should be hitting account 222222222222, e.g. arn:aws:mobiletargeting:us-east-1:222222222222:apps/123456789/events.
Is the account ID of the request automatically inferred by AWS somehow? I can't seem to find that in the construction of the signed request.
Do you know if it is possible to make a cross-account signed request from the client?
Thanks again for the lib. You saved me very many kbs ❤️
I am new to CF Workers, so I decided to first create an issue rather than a PR with changes to the docs. When I try to run your example I get: SyntaxError: Cannot use import statement outside a module
I have tried running it with wrangler preview and have only seen an example.com page without any activity from my worker. Same for wrangler dev. Am I doing it right, and is the problem correctly stated?
Hi,
I am trying to create presigned URLs for objects on DigitalOcean using your library. I cannot use the default aws package as I need the X-Amz-Date to be in the future.
I have tried to just generate a URL (which I get), but I always get permission denied. The URLs are currently being generated in the browser.
Here is what I have tried
this.awsClient = new AwsClient({ accessKeyId: environment.ACCESS_KEY_ID, secretAccessKey: environment.SECRET_ACCESS_KEY });
const result = await this.awsClient.sign('https://[BUCKET_NAME].[REGION].digitaloceanspaces.com/[PATH]', {
  method: 'GET',
  aws: {
    signQuery: true,
  },
})
result.url is similar to the URLs generated using the AWS SDK.
Furthermore, are we able to add expiry params during the signing process? I guess they would go in the headers or body, but I wasn't sure.
Thanks a lot
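On the expiry question, a hedged sketch (an assumption, not confirmed docs): aws4fetch may pick up an X-Amz-Expires header at sign time when signQuery is set, with the value in seconds.

```javascript
// Sketch: request a 1-hour expiry by passing X-Amz-Expires as a header
// when signing with signQuery. Treat this as an assumption to verify.
const init = {
  method: 'GET',
  headers: { 'X-Amz-Expires': '3600' }, // seconds
  aws: { signQuery: true },
};
// const result = await this.awsClient.sign('https://[BUCKET_NAME].[REGION].digitaloceanspaces.com/[PATH]', init);
```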
Hi,
I am trying to use this library in a custom frontend project, and I found out that there is no way to pass custom options to the window.fetch API through your library.
Can you please add this functionality? One use case I have is that I need to pass {mode: 'no-cors'}, as I am trying to test some custom Lambda APIs I am working on at the moment.
Thank you in advance,
Julian
Say you have the following bit of code:
const lambdaResponse = await aws.fetch(target_url, {
  method: request.method,
  headers: cleaned_headers,
  body: request_body,
})
And you're getting this error message when a redirect (302, 307, etc) is returned:
{"message":"The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."}
This is not an error with aws4fetch; it's because your fetch is making a follow-up request that is not signed properly. You can solve this by telling fetch not to follow the redirect automatically.
e.g. the fix is:
const lambdaResponse = await aws.fetch(target_url, {
  method: request.method,
  headers: cleaned_headers,
  body: request_body,
  redirect: "manual"
})
Here's my worker.js code modified from https://github.com/obezuk/cf-worker-signed-backblaze-s3-api :
import { AwsClient } from 'aws4fetch'
const HTML = 'some HTML content'
const aws = new AwsClient({
  "accessKeyId": AWS_ACCESS_KEY_ID,
  "secretAccessKey": AWS_ACCESS_KEY_SECRET,
  "region": AWS_S3_REGION
});
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event))
});
async function handleRequest(event) {
  var request = event.request;
  var url = new URL(request.url);
  var headers = new Headers();
  url.hostname = AWS_S3_BUCKET + '.' + AWS_S3_ENDPOINT;
  if (request.headers.has('Range')) {
    headers.set('Range', request.headers.get('Range'));
  }
  var signedRequest = await aws.sign(url, {
    "method": request.method,
    "headers": headers,
    "aws": {
      "allHeaders": true
    }
  });
  if (url.pathname === '/') {
    return new Response(HTML, { headers: { 'Content-Type': 'text/html; charset=utf-8' }});
  } else {
    return await aws.fetch(signedRequest, { cf: { "cacheEverything": true, "cacheTtl": 300 }});
  }
}
After deploying to a Cloudflare Worker, I got this: cf-cache-status is BYPASS with every kind of file. I asked online and was told I should add a Cache-Control header for it.
So does that cf: {} part work, and how could I add a Cache-Control header to it?
Here’s the code I’m running:
import { AwsClient } from 'aws4fetch'
const aws = new AwsClient({
  "accessKeyId": 'ACCESS_KEY_HERE',
  "secretAccessKey": 'SECRET_ACCESS_KEY_HERE',
  "region": 'nyc3'
});
addEventListener('fetch', function (event) {
  event.respondWith(handleRequest(event.request))
});
async function handleRequest(request) {
  var url = new URL(request.url);
  url.hostname = 'bucketname.nyc3.digitaloceanspaces.com';
  var signedRequest = await aws.sign(url);
  return await fetch(signedRequest, { "cf": { "cacheEverything": true } });
}
DigitalOcean Spaces supports AWS Signature 4 according to their docs: https://developers.digitalocean.com/documentation/spaces/#authentication
So I’m not sure what I'm doing wrong here. I'd appreciate any help you can provide.
I'd love to use this module to sign an IAM-authorized connection to a WebSocket API Gateway.
Unfortunately it tries to parse the URL you're asking it to sign (here), but in the case of WebSockets you are apparently supposed to leave off the protocol (see https://medium.com/@o.bredenberg/iam-sign-your-api-gateway-websocket-connection-request-no-custom-auth-from-your-frontend-65451166757d).
It'd be nice if I could pass in an "invalid" URL to be signed.
Thanks!
When moduleResolution is set to bundler, TypeScript module resolution isn't working properly.
Error:
Could not find a declaration file for module 'aws4fetch'. '/path/to/project/node_modules/.pnpm/[email protected]/node_modules/aws4fetch/dist/aws4fetch.esm.mjs' implicitly has an 'any' type.
There are types at '/path/to/project/node_modules/aws4fetch/dist/main.d.ts', but this result could not be resolved when respecting package.json "exports". The 'aws4fetch' library may need to update its package.json or typings.
Hi.
I have to authorize access to a file in S3 (backed by Wasabi); however, the URL generated does not work when accessed from a browser / Postman, and S3 always returns a SignatureDoesNotMatch exception.
Interestingly, the code below returns my file when accessed with global.fetch using the signed request.
const signed = await client.sign('https://s3.wasabisys.com/coolbucket/coolfile.exe', {
  method: 'GET',
  aws: {
    service: 's3',
    region: 'us-east-1',
    signQuery: true
  },
});
console.log(signed.url) // => the url won't work in browser.
return global.fetch(signed).then(e => {
  return e;
})
I tested and had the same behavior on Amazon S3.
Does anyone know what this issue could be?
AWS's SDKs keep getting bigger and bigger. I love the simplicity and lean design of this library. You did such a great job here!
I'm curious how I can use this on Lambda directly. It seems I need the accessKeyId and secretAccessKey values, but I'm not sure where to get those inside a running Lambda. Any ideas?
FYI I did a bundle size comparison using bun build between @aws-sdk/client-cloudfront and this library, and the difference is pretty big for my app:
@aws-sdk/client-cloudfront: ~574 KiB
Difference: 439 KiB 🙂 (with sourcemaps the difference is over 1 MiB)
I've tried to use this library to connect to AWS SES but without any luck, for example:
const region = 'us-east-1'
const aws = new AwsClient({ accessKeyId: SES_AWS_ACCESS_KEY_ID, secretAccessKey: SES_AWS_SECRET_ACCESS_KEY })
await aws.fetch(`https://email.${region}.amazonaws.com`, { body: { from, to, subject, text }})
But I get a 4xx status code. Any help would be appreciated.
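A hedged sketch for comparison (assuming the SES v2 HTTP API): SendEmail expects a JSON document POSTed to /v2/email/outbound-emails rather than a plain object body at the service root. The addresses below are placeholders.

```javascript
// Hypothetical SES v2 SendEmail payload; a plain object literal passed as
// the fetch body would not serialize into what the API expects.
const payload = {
  FromEmailAddress: 'from@example.com',
  Destination: { ToAddresses: ['to@example.com'] },
  Content: {
    Simple: {
      Subject: { Data: 'Hello' },
      Body: { Text: { Data: 'Hello from aws4fetch' } },
    },
  },
};
// const res = await aws.fetch(`https://email.${region}.amazonaws.com/v2/email/outbound-emails`, {
//   body: JSON.stringify(payload),
//   headers: { 'Content-Type': 'application/json' },
// });
```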
Could you please add interfaces for TypeScript? Currently everything is undefined/unknown.
When using signQuery: true to create presigned URLs, and also providing the X-Amz-Expires header to control expiry, aws4fetch adds x-amz-expires to the x-amz-signedheaders parameter.
Since presigned URLs should only use query parameters and nothing else, requests will not include the X-Amz-Expires request header and are rejected.
This behaviour has been tested against Cloudflare R2 and Amazon S3, both of which reject the request with a SignatureDoesNotMatch error.
This is a Cloudflare Worker.
import { AwsClient } from "aws4fetch";
export default <ExportedHandler>{
  async fetch() {
    const ACCOUNT_ID = "";
    const ACCESS_KEY_ID = "";
    const SECRET_ACCESS_KEY = "";
    const BUCKET_NAME = "";
    const R2_URL = `https://${ACCOUNT_ID}.r2.cloudflarestorage.com`;
    const client = new AwsClient({
      accessKeyId: ACCESS_KEY_ID,
      secretAccessKey: SECRET_ACCESS_KEY,
    });
    const request = new Request(`${R2_URL}/${BUCKET_NAME}/dog.png`, {
      method: "PUT",
    });
    const signed = await client.sign(request, {
      aws: { signQuery: true },
      headers: {
        "X-Amz-Expires": 3600,
      },
    });
    const response = await fetch(signed.url, {
      method: "PUT",
      body: "123",
      // Uncomment me and it'll work!
      // headers: {
      //   "X-Amz-Expires": "3600",
      // },
    });
    console.log(signed.url);
    console.log(response.status, await response.text());
    return new Response("lol");
  },
};
It would be great if someone could help me use this library with React Native.
Thanks!
The change in bc5adb8 gives the following error in ESM builds:
import { AwsV4Signer } from 'aws4fetch';
...
file:///Users/nobody/codez/test/src/utils.js:18
import { AwsV4Signer } from 'aws4fetch';
^^^^^^^^^^^
SyntaxError: Named export 'AwsV4Signer' not found. The requested module 'aws4fetch' is a CommonJS module, which may not support all module.exports as named exports.
CommonJS modules can always be imported via the default export, for example using:
import pkg from 'aws4fetch';
const { AwsV4Signer } = pkg;
Changing it to the CJS format produces:
import aws4fetch from 'aws4fetch';
const { AwsV4Signer } = aws4fetch;
(node:63327) Warning: To load an ES module, set "type": "module" in the package.json or use the .mjs extension.
(Use `node --trace-warnings ...` to show where the warning was created)
/Users/nobody/codez/test/node_modules/aws4fetch/dist/aws4fetch.esm.js:279
export { AwsClient, AwsV4Signer };
^^^^^^
SyntaxError: Unexpected token 'export'
Renaming the esm files to .mjs solves the issue:
"exports": {
".": {
"import": "./dist/aws4fetch.esm.mjs",
"worker": "./dist/aws4fetch.esm.mjs",
"browser": "./dist/aws4fetch.umd.js",
"require": "./dist/aws4fetch.cjs.js",
"default": "./dist/aws4fetch.umd.js"
}
},
I just installed version 1.0.13 from NPM. When trying to compile JS from TS with any imports of AwsV4Signer or AwsClient, TypeScript 4.0.5 throws a series of 17 errors related to missing type definitions in node_modules/aws4fetch/dist/main.d.ts.
All the missing types seem to be from the Request library for the most part, I think. I noticed in aws4fetch's tsconfig that declarations are not enabled -- and I think that's the problem. Adding { declaration: true } to tsconfig.json should make the compiler include all the required type definitions that it otherwise leaves out (because that's a sane default...?). Also, I should note that adding { sourceMap: true } helps you link up TS --> JS directly in debugging, which is awesome.
Would you mind including the type definitions in your next update, please? Thank you!
I gave up trying to troubleshoot this one (I just switched to using an ArrayBuffer; I didn't really need to use a stream), but sometimes (not always) I get a race condition where the ReadableStream body is read twice.
✘ [ERROR] Uncaught (in response) TypeError: This ReadableStream is disturbed (has already been read from), and cannot be used as a body.
✘ [ERROR] Uncaught (in promise) TypeError: This ReadableStream is disturbed (has already been read from), and cannot be used as a body.
return new Request(signed.url.toString(), Object.assign({ duplex: 'half' }, signed))
^
at sign
I don't know if 169db6f is related (my guess is not).
Mainly documenting this for others in the future.
I dropped this module into some existing boilerplate using the fetch API and it mostly worked, except for one hitch: passing {method: 'get'} as the HTTP method results in an invalid signature, as the string to sign needs the upper-case GET.
I believe the Fetch API standard is case-sensitive (so you could pass a mixed-case HTTP method like WhatEver); however, the methods get, post, delete, etc., are typically normalized to uppercase. So passing the HTTP method as lower-case get usually just works.
https://fetch.spec.whatwg.org/#concept-method-normalize
I can find examples online where the browser fetch API is invoked using get in lower-case. But, yes, a bad habit.
I opened this issue to discuss normalizing the common methods like GET, PUT, POST, PATCH, OPTIONS, DELETE if a user of the library provides them lower-case (e.g. get), so that the string-to-sign will be uppercase as needed.
The error does not surface in Chrome DevTools as anything sensible; instead it complains about CORS in the console. I had to 1) navigate to the Network tools to see that the response included an error header (x-amzn-errortype: InvalidSignatureException) and then 2) replay the request in cURL to see the actual signature error JSON in the response.
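A sketch of what that normalization could look like (the Fetch spec's normalize list covers DELETE, GET, HEAD, OPTIONS, POST and PUT; PATCH is included here per the list in this issue):

```javascript
// Uppercase only the well-known methods, leaving deliberately mixed-case
// custom methods (which the Fetch spec permits) untouched.
const NORMALIZED_METHODS = ['DELETE', 'GET', 'HEAD', 'OPTIONS', 'POST', 'PUT', 'PATCH'];
function normalizeMethod(method) {
  const upper = method.toUpperCase();
  return NORMALIZED_METHODS.includes(upper) ? upper : method;
}
```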
Scenario:
Sending large post with byte array using ReadableStream for performance reasons.
Expected Behavior:
ReadableStream is sent to S3 via a streaming read, offering improved performance.
Actual Behavior:
Receive "body must be a string, ArrayBuffer or ArrayBufferView, unless you include the X-Amz-Content-Sha256 header"
Steps to reproduce:
let { readable, writable } = new TransformStream();
const arr = new Uint8Array([ 102,111,111,98,97,114 ])
writeArrayToStream(arr, writable);
const signed = await aws.sign(url, { method: 'PUT', headers, body: readable});
const resp = await fetch(signed);
return resp
Details:
Problem 1 - Cloning the request
In the signing flow, the request is cloned (https://github.com/mhart/aws4fetch/blob/master/src/main.js#L89) for signing, including the body. This is probably strike #1 for using ReadableStream, ideally there is no cloning of a large request.
Problem 2 -
In the Canonical String generation, there is a call to get the hexBodyHash (https://github.com/mhart/aws4fetch/blob/master/src/main.js#L310).
In the hexBodyHash method, it checks that the body is of type string or has a byteLength (like in an ArrayBuffer I think). A readable stream meets neither of those cases, so it throws the Error.
Proposed solution:
Add an option for unsigned payloads (see https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-header-based-auth.html) by replacing the hexBodyHash with the literal string UNSIGNED-PAYLOAD. I didn't dig too far into this, but it looks like it may be an option on the node version of this (https://github.com/mhart/aws4/blob/master/aws4.js#L244)?
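In the meantime, a workaround sketch based on the error text itself: supplying the X-Amz-Content-Sha256 header yourself bypasses the body hashing, so a stream body can go through with an unsigned payload (url, headers and readable are from the repro snippet above).

```javascript
// Hedged workaround: S3 accepts the literal UNSIGNED-PAYLOAD content hash,
// and aws4fetch's own error message says a stream body is allowed when the
// X-Amz-Content-Sha256 header is provided.
const headers = { 'X-Amz-Content-Sha256': 'UNSIGNED-PAYLOAD' };
// const signed = await aws.sign(url, { method: 'PUT', headers, body: readable });
// const resp = await fetch(signed);
```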
I had a custom domain with API Gateway and struggled for an hour to get it to work. As a workaround, we can pass "execute-api" in the service param to make it work.
Here is the config:
Here is the config:
const signer = new AwsV4Signer({
  url,
  accessKeyId: accessKeyId as string,
  secretAccessKey: secretKey as string,
  region: "ap-south-1",
  service: "execute-api",
  method: "GET",
});
Make sure to include region in the param.
Hi,
Thanks for the great lib. I currently have it working to send emails through SES from a Cloudflare Worker. I was now hoping to use the lib to send SMS messages, but after a few days looking through the AWS documentation I am no closer to getting it working.
If by any chance anyone knows a) whether this can work, and b) how to do it if it can.
Much appreciated.
edit ------
I found this code on another git ( https://github.com/Sean-Bradley/AWS-SNS-SMS-with-NodeJS ) that uses the aws-sdk npm package.
var AWS = require('aws-sdk');
var params = {
  Message: message,
  PhoneNumber: '+' + number,
  MessageAttributes: {
    'AWS.SNS.SMS.SenderID': {
      'DataType': 'String',
      'StringValue': subject
    }
  }
};
var publishTextPromise = new AWS.SNS({ apiVersion: '2010-03-31' }).publish(params).promise();
The publish class is defined here..
https://github.com/aws/aws-sdk-js-v3/blob/e3772e0ade/clients/client-sns/SNS.ts#L984
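An aws4fetch equivalent might look like this hedged sketch (assuming the SNS Query API, where Publish is a form-encoded POST to the regional endpoint; the phone number is a placeholder):

```javascript
// Hypothetical SNS Publish via the Query API instead of the aws-sdk client.
const params = new URLSearchParams({
  Action: 'Publish',
  Version: '2010-03-31',
  PhoneNumber: '+15555550100',
  Message: 'Hello from a Worker',
});
// const res = await aws.fetch('https://sns.us-east-1.amazonaws.com/', {
//   body: params.toString(),
//   headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
// });
```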
Works well in the browser; however, there are some differences between the Web API and Node. Are you planning to add SSR support as well? Thanks
Dear,
I'm trying to do 2 calls to the same object on S3 (GET and POST). The user (of the API credentials) has AdministratorAccess and AmazonS3FullAccess. I'm getting a method not allowed answer for the POST.
I'm using the REST endpoint, not the WEBSITE endpoint.
MethodNotAllowed
The specified method is not allowed against this resource. (Method: POST, ResourceType: OBJECT, RequestId: 3FBC4A28E4475E7E, HostId: iLYyMMkmBD9iburQ8s+PptMqxagFQFjrclFoK458UNhjX7dCk4+qNNw+CrrDqi/i0Z5CSVr4Cw8=)
Is there any limitation on the library? Have you tried something like this?
Thank you,
Sergio
According to the documentation:
method
- if not supplied, will default to 'POST' if there's a body, otherwise 'GET'
This does not work for AWS APIs (e.g. MediaLive /prod/channels/:channelId/start) that expect just an empty POST (i.e. Content-Length: 0 / body: "").
Proposed fix: test body != null (i.e. treating null/undefined as not present, as in the TS type body?: BodyInit | null), instead of boolean-evaluating body, here:
Line 153 in 03f9802
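The difference the proposed fix makes can be seen with an empty-string body (a sketch of the two checks, not the library's actual code):

```javascript
// An empty string is a legitimate body (Content-Length: 0) but is falsy,
// so boolean-evaluating it wrongly defaults the method to GET; the
// body != null check treats only null/undefined as "no body".
function defaultMethodCurrent(body) { return body ? 'POST' : 'GET'; }
function defaultMethodProposed(body) { return body != null ? 'POST' : 'GET'; }
```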
Hey there,
I found this library recently and was quite excited to try it out - everything worked perfectly for my app. However, when I tried to run tests using it, it errors out - presumably because Jest runs in a node-like environment and aws4fetch expects a browser-like environment. The first error I got was that 'TextEncoder' was missing; when I got past that, 'Request' was missing, and then 'crypto'... and then I stopped, because having to essentially rebuild a library's dependencies isn't what I want to spend my time doing, and it means my tests aren't true to what's actually going on anyway.
This code:
const Environment = require('jest-environment-jsdom');
module.exports = class CustomTestEnvironment extends Environment {
async setup() {
await super.setup();
if (typeof this.global.TextEncoder === 'undefined') {
const { TextEncoder } = require('util');
this.global.TextEncoder = TextEncoder;
}
if (typeof this.global.Request === 'undefined') {
const { request } = require('http');
this.global.Request = request;
}
}
};
was what I started with (which would go as an arg to a test script, i.e. "test": "jest --env=./__tests__/setup/jsdomSetup.js").
I'm not sure how other users of this library are getting around this, but not being able to run tests whilst using this library means it's a no-go, for now. I hope you can find some way of making it possible!
I'm using aws4fetch with Cloudflare Workers to fetch files from an S3 storage, but the file in question has a special character in it (e.g. é) and it doesn't work. On closer inspection I found that the character is encoded twice: é -> %C3%A8 -> %25C3%25A8, when it should have been encoded once only.
Passing the singleEncode option to fetch resolves this problem, but its description in the README seems to indicate that it wasn't designed for this use case but rather for some unspecified test case, so I can't be sure whether using singleEncode will cause other problems.
When using aws4fetch with Jest, it results in the warning below:
Constructing headers with an undefined argument fails in Chrome <= 56 and Samsung Internet ~6.4. You should use `new Headers({})`.
Let me know if you need any further information but I'll put through a PR for this issue shortly too.
I love this lib. I have tested it successfully with S3, MinIO, Backblaze and R2. However, I am getting stuck with GCS. I have not worked out the root problem yet; I suspect GCS requires signed content-length headers or something, which is explicitly turned off in this library (for proxy support?).
Just wondering how open you are to a PR in this direction. I think GCS XML might work too differently from the others - do you want to support more vendors?
I think if I were to try and fix it, I would declare the service "GCS" instead of "S3" and use that as the hook for feature flags. Wdyt?
When I use aws s3 presign
like this,
AWS_REGION=ap-northeast-1 \
AWS_ACCESS_KEY_ID=AKIA................ \
AWS_SECRET_ACCESS_KEY=........................................ \
aws s3 presign 's3://BUCKET_NAME/BUCKET_KEY'
I get a URL like this, which works.
https://BUCKET_NAME.s3.ap-northeast-1.amazonaws.com/BUCKET_KEY
?X-Amz-Algorithm=AWS4-HMAC-SHA256
&X-Amz-Credential=AKIA................%2F20210608%2Fap-northeast-1%2Fs3%2Faws4_request
&X-Amz-Date=20210608T051344Z
&X-Amz-Expires=3600
&X-Amz-SignedHeaders=host
&X-Amz-Signature=ccf8e91fc184c3805342093f0b68f111cfa04790f5e8b2ae7abd3511f05ff127
But when I use AwsV4Signer::sign like this,
let url = 'https://BUCKET_NAME.s3.ap-northeast-1.amazonaws.com/BUCKET_KEY'
let params = {url, accessKeyId, secretAccessKey, signQuery: true}
let signer = new AwsV4Signer(params)
let {url: {href}} = await signer.sign()
console.log(href)
I get a URL like this, which does not work.
https://BUCKET_NAME.s3.ap-northeast-1.amazonaws.com/BUCKET_KEY
?X-Amz-Date=20210608T051604Z
&X-Amz-Expires=86400
&X-Amz-Algorithm=AWS4-HMAC-SHA256
&X-Amz-Credential=AKIATTIKK53SN5ZYA7UX%2F20210608%2Fap-northeast-1%2Fs3%2Faws4_request
&X-Amz-SignedHeaders=host%3Bx-amz-content-sha256
&X-Amz-Signature=bffa444c66d3f959a691b9d4886387c0a1b74809dd0658939128e5741cd16056
It seems that aws4fetch is adding x-amz-content-sha256 to the signed headers (and omitting it from the querystring), which apparently is not correct for querystring signatures.
I fixed this for my limited use case by adding 'x-amz-content-sha256' to UNSIGNABLE_HEADERS, but I imagine this would break signing for non-querystring, non-GET cases.
At the moment, it doesn't appear that Lambda Function URLs are detected properly. Here's an example of one:
ibvt72cx3dkyksnw7jktvkwhme0legmv.lambda-url.us-east-1.on.aws
https://docs.aws.amazon.com/lambda/latest/dg/lambda-urls.html
The format will always be:
https://<url-id>.lambda-url.<region>.on.aws
As a temporary workaround, I'm manually setting service and region.
const aws = new AwsClient({ accessKeyId: env.AWS_ACCESS_KEY_ID, secretAccessKey: env.AWS_SECRET_ACCESS_KEY, service: "lambda", region: "us-east-1" });
Hi,
I've encountered trouble signing requests containing a Range HTTP header (used for "multipart" / multiple-connection downloads).
Looking at the source of this package, the header is marked as unsignable (the unsignableHeaders array defined in main.js). This array also refers to the AWS JS SDK, which has a similar array of unsignable headers; the AWS variant does not include this header (anymore): https://github.com/aws/aws-sdk-js/blob/cc29728c1c4178969ebabe3bbe6b6f3159436394/lib/signers/v4.js#L190-L198
So far I've tried removing the Range header locally, with success. I only wonder whether this change has any side effects / whether there is a particular reason for this package leaving it unsigned.
I could make a pull request with an update for this, if you would like.
Hi,
With AWS "@aws-sdk/client-s3": "3.321.1", the client signs the content-length header when targeting a custom endpoint. However, aws4fetch explicitly masks it out when generating a signature, so I cannot get the signatures to match.
Here is the authorization header as sent by Amazon's client:
AWS4-HMAC-SHA256 Credential=05ce8c0cb2aac98b3e7cb0f28ad964bd/20230513/us-west-2/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-user-agent, Signature=433974ad26ee6294252724cf9e9ec5e03568c0167b1da6c76ee092a538fec832
The bug is that aws4fetch's authHeader should sign all the headers I tell it to.
Hi,
I'm trying to create a custom hook out of this package to wrap all my client requests. For some reason, if there is an error and it goes to the catch block, the only info I get is error.stack and error.message. I would like to get the status code, so that if it's 403 I can log out the user. When looking at dev tools I do see the request fail with a 403 Forbidden error.
Code snippet:
function useApiGatewayClient() {
  const awsClient = new AwsClient({
    accessKeyId: '...',
    secretAccessKey: '...',
    sessionToken: '...',
    region: 'eu-central-1',
    service: 'execute-api',
  })
  const awsFetch = async (input: RequestInfo, init?: RequestInit) => {
    console.log('1')
    const response = await awsClient.fetch(input, init)
    console.log('2')
    console.log({ response })
    const json = await response.json()
    return json
  }
  return awsFetch
}
export { useApiGatewayClient }
thanks
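One way to surface the status code is to check the Response before parsing; a sketch (fetchLike stands in for awsClient.fetch, which, like the platform fetch, resolves normally even on a 403):

```javascript
// Attach the HTTP status to the thrown error so callers can branch on it
// (e.g. log the user out on 403) instead of parsing error.message.
async function fetchJson(fetchLike, input, init) {
  const response = await fetchLike(input, init);
  if (!response.ok) {
    const err = new Error(`Request failed with status ${response.status}`);
    err.status = response.status; // caller can check err.status === 403
    throw err;
  }
  return response.json();
}
```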
According to the SigV4 documentation, this is a mandatory header to be generated. I can't seem to find a way of creating it. Is this a bug?
I got this error
ReferenceError: crypto is not defined
at hmac (node_modules/aws4fetch/dist/aws4fetch.cjs.js:223:21)
at AwsV4Signer.signature (node_modules/aws4fetch/dist/aws4fetch.cjs.js:185:27)
at AwsV4Signer.authHeader (node_modules/aws4fetch/dist/aws4fetch.cjs.js:177:34)
at AwsV4Signer.sign (node_modules/aws4fetch/dist/aws4fetch.cjs.js:164:52)
at AwsClient.sign (node_modules/aws4fetch/dist/aws4fetch.cjs.js:56:57)
at AwsClient.fetch (node_modules/aws4fetch/dist/aws4fetch.cjs.js:69:40)
at FilesCloudflareService.save (/apps/src/api/files/fs-providers/files.cloudflare.service.ts:54:8)
at /apps/src/api/posts/posts.service.ts:135:10
at matcher (/apps/src/api/files/files.utils.ts:47:14)
at replaceAsync (/apps/src/api/files/files.utils.ts:90:9)
at extractInlineData (/apps/src/api/files/files.utils.ts:52:7)
at PostsService.saveFiles (/apps/src/api/posts/posts.service.ts:121:29)
at PostsService.updateById (/apps/src/api/posts/posts.service.ts:45:17)
at PostsController.update (/apps/src/api/posts/posts.controller.ts:104:25)
at node_modules/@nestjs/core/router/router-execution-context.js:38:29
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at async node_modules/@nestjs/core/router/router-execution-context.js:46:28
at async Object.<anonymous> (node_modules/@nestjs/core/router/router-proxy.js:9:17)
the code:
let client = new AwsClient({
  accessKeyId: ***,
  secretAccessKey: ***!,
});
let baseUrl = `https://${accountId}.r2.cloudflarestorage.com`
client
  .fetch(`${baseUrl}/file.txt`, {
    method: 'put',
    body: 'test',
  })
I don't use crypto anywhere in my code, and even installing crypto doesn't help (npm i crypto).
Lines 20 to 35 in 03f9802
See TypeScript's signature here: https://github.com/microsoft/TypeScript/blob/main/lib/lib.dom.d.ts#L16971
Note the inclusion of | URL in the input parameter types.
This is causing errors when using code/types that include things like:
type FetchParameters = Parameters<typeof fetch>
...
const [url, init]: FetchParameters = ...
const signed = aws.sign(url, { ...init, ...}) // Type error about incompatible types with `URL`
I deploy a worker on Cloudflare.
import { AwsClient } from 'aws4fetch'
const HTML = 'Hello World'
const aws = new AwsClient({
  "accessKeyId": AWS_ACCESS_KEY_ID,
  "secretAccessKey": AWS_ACCESS_KEY_SECRET,
  "region": AWS_S3_REGION
});
addEventListener('fetch', function(event) {
  event.respondWith(handleRequest(event.request))
});
async function handleRequest(request) {
  var url = new URL(request.url);
  url.hostname = AWS_S3_BUCKET + '.' + AWS_S3_ENDPOINT;
  var signedRequest = await aws.sign(url);
  if (url.pathname === '/') {
    return new Response(HTML, { headers: { 'Content-Type': 'text/html; charset=utf-8' }});
  } else {
    return await fetch(signedRequest, { "cf": { "cacheEverything": true } });
  }
}
And it cannot handle non-English characters.
For example:
I need to insert into a DynamoDB table using aws4fetch - is there any example? Looking for suggestions.
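A hedged sketch (assuming the DynamoDB low-level JSON protocol): PutItem is a POST with an X-Amz-Target header; the table name and item below are hypothetical.

```javascript
// Hypothetical PutItem request body in DynamoDB's attribute-value JSON
// format ({ S: ... } marks string attributes).
const body = JSON.stringify({
  TableName: 'my-table',
  Item: {
    id: { S: '123' },
    name: { S: 'example' },
  },
});
// const res = await aws.fetch('https://dynamodb.us-east-1.amazonaws.com/', {
//   body,
//   headers: {
//     'Content-Type': 'application/x-amz-json-1.0',
//     'X-Amz-Target': 'DynamoDB_20120810.PutItem',
//   },
// });
```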
I'm using this module with Cloudflare Workers.
In the Workers runtime, %5B is force-decoded to '[' by the URL API, the same as strings already passed through encodeRfc3986().
here's a URL demo:
addEventListener("fetch", (event) => {
  event.respondWith(
    handleRequest(event.request).catch(
      (err) => new Response(err.stack, { status: 500 })
    )
  );
});
/**
* Many more examples available at:
* https://developers.cloudflare.com/workers/examples
* @param {Request} request
* @returns {Promise<Response>}
*/
async function handleRequest(request) {
  let responseText = "";
  responseText += "request.url: " + request.url + "\n"
  const url = new URL(request.url);
  responseText += "URL().href: " + url.href
  return new Response(responseText, {status: 200})
}
The response will be:
request.url: https://example.workers.dev/%5B%20SomeNormalString
URL().href: https://example.workers.dev/[%20SomeNormalString
This will cause problems when a filename contains '['.