
libs3's Introduction

This directory contains the libs3 library.

The libs3 library is free software.  See the file LICENSE for copying
permission.

libs3's People

Contributors

alexandersack, alexeip0, andreikop, benmcclelland, bester, bingmann, bji, chenji-kael, dalgaaf, earlephilhower, ellert, guillermomuntaner, jengelh, konfan, ktdreyer, likema, martinprikryl, meinemitternacht, mutantkeyboard, sergeydobrodey, sivachandran, spielkind, vlibioulle


libs3's Issues

Multipart Copy

Is Multipart Copy working? I'm getting the following response when I try to copy a 6 GB file from one bucket to another:

ERROR: ErrorInvalidArgument
Message: The specified header is not valid in this context
Extra Details:
ArgumentName: x-amz-metadata-directive
ArgumentValue: REPLACE
RequestId: B9A7B3C41FB12F55
HostId: oTuaiCPM5pCw99BQ1RDzSMjSTRUXhCGKgkYvwa6KrhmGp8yM8l70+9UwvXXbH2RN9cilgPyHPRE=

Store etag value in S3_upload_part

Currently, there is no callback to store the etag value in upload_part. Even if there were a way to store the etags, we don't know the format in which to pass the list of etags to complete_multipart_upload.

bucket's region

How do I set the region for a connection? I cannot operate on a bucket in the Northern California region, but a bucket in the US Standard region works fine. I am in China.

curl timeout when trying to set meta on large file

When trying to set meta (or delete meta, or copy a file inside a bucket) on a large (>800 MB) file, I get a timeout error from curl. If I comment out curl_easy_setopt_safe(CURLOPT_LOW_SPEED_LIMIT, 1024) and curl_easy_setopt_safe(CURLOPT_LOW_SPEED_TIME, 15) in request.c, then it works fine. There is a TODO comment ("make these configurable"), so it seems the time has come.
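For what it's worth, a minimal sketch of one way to make these configurable without an API change, assuming hypothetical S3_LOW_SPEED_LIMIT and S3_LOW_SPEED_TIME environment variables (neither exists in libs3 today); curl_easy_setopt_safe is the wrapper macro request.c already uses:

#include <stdlib.h>   /* getenv, atol */

static long env_long(const char *name, long fallback)
{
    const char *value = getenv(name);
    return value ? atol(value) : fallback;
}

/* ... replacing the hard-coded values in request.c ... */
curl_easy_setopt_safe(CURLOPT_LOW_SPEED_LIMIT,
                      env_long("S3_LOW_SPEED_LIMIT", 1024));
curl_easy_setopt_safe(CURLOPT_LOW_SPEED_TIME,
                      env_long("S3_LOW_SPEED_TIME", 15));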

doubt about function should_retry

I read the source code of the should_retry function:

static int should_retry()
{
    if (retriesG--) {
        // Sleep before next retry; start out with a 1 second sleep
        static int retrySleepInterval = 1 * SLEEP_UNITS_PER_SECOND;
        sleep(retrySleepInterval);
        // Next sleep 1 second longer
        retrySleepInterval++;
        return 1;
    }

    return 0;
}
On each call, the variable retrySleepInterval appears to be reassigned, so can retrySleepInterval++ ever take effect?
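For context, a standalone illustration (not libs3 code) of how a static local behaves: it is initialized only on the first call, so the increment does persist across calls.

#include <stdio.h>

static int next_interval(void)
{
    static int interval = 1;   /* initialized once, not on every call */
    return interval++;
}

int main(void)
{
    for (int i = 0; i < 3; i++)
        printf("%d\n", next_interval());   /* prints 1, then 2, then 3 */
    return 0;
}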

docs?

Are there docs somewhere that explain how to use this? Or do we just have to read the source?

Add options to output debug logging

libs3 currently does not write any logs, making it difficult to troubleshoot errors. This is particularly problematic in the case of libcurl errors, since a lot of libcurl's error codes are translated to S3StatusInternalError without any further info.

It would be very useful if libs3 allowed enabling detailed logs for the progress of requests, e.g., via an S3_set_log_level() function and S3_LOG_LEVEL environment variable. Also, it would be useful to allow enabling libcurl verbose mode (CURLOPT_VERBOSE) without having to rebuild libs3.
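For the second point, a minimal sketch of the environment-variable idea, assuming a hypothetical S3_CURL_VERBOSE variable (not part of libs3 today); this would sit alongside the other curl_easy_setopt_safe() calls in request.c:

#include <stdlib.h>   /* getenv */

/* Hypothetical: enable libcurl verbose output when S3_CURL_VERBOSE is
 * set, so no rebuild is needed to capture wire-level traces. */
if (getenv("S3_CURL_VERBOSE")) {
    curl_easy_setopt_safe(CURLOPT_VERBOSE, 1L);
}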

segfault while putting large files read from stdin

Hi,

Encountered a segfault while putting large files that are read from stdin. It appears to be a "heap-use-after-free" issue: a growbuffer is accessed after it has already been freed. 100% reproducible.

Below is the output from the library built with gcc4.9 ASAN; putting a 160MB file:

$ LD_LIBRARY_PATH=build-debug/lib ./build-debug/bin/s3 put files/zzz < ../file.tar 
Sending Part Seq 1, length=15728640
15712256 bytes remaining (85% complete) ...
15695872 bytes remaining (85% complete) ...
15679488 bytes remaining (85% complete) ...
15663104 bytes remaining (85% complete) ...
...
98304 bytes remaining (99% complete) ...
81920 bytes remaining (99% complete) ...
65536 bytes remaining (99% complete) ...
49152 bytes remaining (99% complete) ...
32768 bytes remaining (99% complete) ...
16384 bytes remaining (99% complete) ...
Sending Part Seq 2, length=15728640
=================================================================
==10047==ERROR: AddressSanitizer: heap-use-after-free on address 0x631000000800 at pc 0x402af5 bp 0x7ffe3afb1270 sp 0x7ffe3afb1268
READ of size 4 at 0x631000000800 thread T0
    #0 0x402af4 in growbuffer_read src/s3.c:458
    #1 0x40ab1e in putObjectDataCallback src/s3.c:2012
    #2 0x7efd21abca7b in curl_read_func src/request.c:193
    #3 0x7efd205c8295 (/usr/lib/x86_64-linux-gnu/libcurl.so.4+0x28295)
    #4 0x7efd205c8f1c (/usr/lib/x86_64-linux-gnu/libcurl.so.4+0x28f1c)
    #5 0x7efd205d29db (/usr/lib/x86_64-linux-gnu/libcurl.so.4+0x329db)
    #6 0x7efd205d3180 in curl_multi_perform (/usr/lib/x86_64-linux-gnu/libcurl.so.4+0x33180)
    #7 0x7efd205ca7b2 in curl_easy_perform (/usr/lib/x86_64-linux-gnu/libcurl.so.4+0x2a7b2)
    #8 0x7efd21ac4b72 in request_perform src/request.c:1220
    #9 0x7efd21ad5906 in S3_upload_part src/multipart.c:222
    #10 0x40cb20 in put_object src/s3.c:2453
    #11 0x41227d in main src/s3.c:3640
    #12 0x7efd20828ec4 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21ec4)
    #13 0x402018 (/home/zz/_src/libs3/build-debug/bin/s3+0x402018)

0x631000000800 is located 0 bytes inside of 65560-byte region [0x631000000800,0x631000010818)
freed by thread T0 here:
    #0 0x7efd20c205c7 in __interceptor_free (/usr/lib/x86_64-linux-gnu/libasan.so.1+0x545c7)
    #1 0x402e4e in growbuffer_read src/s3.c:473
    #2 0x40ab1e in putObjectDataCallback src/s3.c:2012
    #3 0x7efd21abca7b in curl_read_func src/request.c:193
    #4 0x7efd205c8295 (/usr/lib/x86_64-linux-gnu/libcurl.so.4+0x28295)

previously allocated by thread T0 here:
    #0 0x7efd20c207df in __interceptor_malloc (/usr/lib/x86_64-linux-gnu/libasan.so.1+0x547df)
    #1 0x402578 in growbuffer_append src/s3.c:415
    #2 0x40c4c5 in put_object src/s3.c:2313
    #3 0x41227d in main src/s3.c:3640
    #4 0x7efd20828ec4 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21ec4)

SUMMARY: AddressSanitizer: heap-use-after-free src/s3.c:458 growbuffer_read
Shadow bytes around the buggy address:
  0x0c627fff80b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c627fff80c0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c627fff80d0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c627fff80e0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c627fff80f0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x0c627fff8100:[fd]fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c627fff8110: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c627fff8120: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c627fff8130: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c627fff8140: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c627fff8150: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Heap right redzone:      fb
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack partial redzone:   f4
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Contiguous container OOB:fc
  ASan internal:           fe
==10047==ABORTING

With best regards,
Ivan.

bucket-region for authentication

I couldn't access buckets created in regions other than us-east-1 (N. Virginia). For all other buckets I get the error code "ErrorPermanentRedirect".

Version macro in libs3.h

Can you include one or more version macro variants in the header file, please?

#define LIBS3_MAJOR_VER 2
#define LIBS3_MINOR_VER 0

and / or

#define LIBS3_VERSION 020000 /* major-ver * 10000 + minor-ver * 100 + patch */

Is there a way to send multipart upload data without using the callback mechanism?

I am working on a use case where I do not have all of the data for a multipart upload at the time I begin the part upload. I can't hold all of it in memory due to potential memory exhaustion, nor can I first write it to disk, because S3 happens to be faster than the local disk.

The sender calls a send() function to give me more data. The sender is driving this.

The callback option won't work because once I am in the callback loop I can't get out of it. S3_upload_part() will not return until all the data is sent, and while it hasn't returned, I am not available to receive new calls to send().

I have taken the approach of doing the S3_upload_part() in a background thread, as sketched below. I am just wondering if there is a way to escape from the callback and continue as I get more data. (I was looking at CURL and couldn't figure out a way to send data without the callback either.)
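For illustration, a minimal sketch of the hand-off that the background-thread approach needs; every name here is hypothetical, not libs3 API:

#include <pthread.h>
#include <string.h>

/* A chunk hand-off between the sender's thread and the thread running
 * S3_upload_part(). The read callback blocks until send() supplies data. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    const char     *data;   /* current chunk, owned by the sender */
    size_t          len;    /* bytes left in the current chunk */
    int             done;   /* sender has no more data for this part */
} feed_t;

/* Called from putObjectDataCallback on the uploader thread. */
static int feed_read(feed_t *f, char *out, int want)
{
    pthread_mutex_lock(&f->lock);
    while (f->len == 0 && !f->done)
        pthread_cond_wait(&f->cond, &f->lock);
    int n = (f->len < (size_t) want) ? (int) f->len : want;
    if (n > 0) {
        memcpy(out, f->data, n);
        f->data += n;
        f->len -= n;
        if (f->len == 0)
            pthread_cond_broadcast(&f->cond);   /* wake a blocked send() */
    }
    pthread_mutex_unlock(&f->lock);
    return n;   /* 0 tells libs3 this part's data is complete */
}

/* Called from send() on the sender's thread; returns once consumed. */
static void feed_write(feed_t *f, const char *buf, size_t len)
{
    pthread_mutex_lock(&f->lock);
    f->data = buf;
    f->len = len;
    pthread_cond_broadcast(&f->cond);
    while (f->len > 0)
        pthread_cond_wait(&f->cond, &f->lock);
    pthread_mutex_unlock(&f->lock);
}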

This is more of a question than an issue. Apologies if this isn't the correct forum for it.

Releases after 2.0 are not tagged

I know it is probably a subtle, insignificant thing in comparison to everything else about this project, but tagging releases is a good practice, especially for communicating various aspects of the library (API, ABI, etc.) and its stability from the point of view of the authors and maintainers.

Something like semver would be a good, widely understood versioning scheme to consider. Combine that with git-flow and it will make things easier to manage. At least, that is the summary of what has worked for me on other projects.

It will certainly benefit anyone integrating with the library who is looking for a later, stable version, without having to scan through the commit history to find the git hash to freeze at.

Bucket list returns permanent redirect

I've just built libs3 for Windows and I'm using it to query my S3 bucket. The first attempt goes well:
"s3 list" returns "mybucket"
"s3 test mybucket" returns a status of "ap-southeast-2"

But trying to list the contents of the bucket fails with a permanent redirect:
"s3 list mybucket"
ERROR: ErrorPermanentRedirect
Message: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
Extra Details:
Bucket: mybucket
Endpoint: mybucket.s3.amazonaws.com

"s3 list mybucket.s3.amazonaws.com" returns:
ERROR: ErrorNoSuchBucket
Message: The specified bucket does not exist

Searching around, I found there's a problem when your bucket wasn't created in the US (e.g. https://forums.aws.amazon.com/message.jspa?messageID=196878). So I tried:
"s3 list mybucket.s3-ap-southeast-2.amazonaws.com" but again I get:
ERROR: ErrorNoSuchBucket
Message: The specified bucket does not exist

Reading the issue above, I tried:
"s3 list http://s3-ap-southeast-2.amazonaws.com/mybucket/" but that returns:
ERROR: InvalidBucketNameCharacter

This works fine using the AWS C# library, so I assume libs3 isn't handling a redirection message or something.

a set param question

Hey, I want to ask a question: I want to call the function S3_put_object, but there is one parameter whose value I don't know how to set: the struct S3RequestContext *requestContext.
If this is the newest version, I would also like to know the meaning of each member of that struct. Thanks!
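For what it's worth, a sketch of the common case (argument details vary between libs3 versions, and the surrounding variables are placeholders): passing NULL for the S3RequestContext makes the call synchronous, while a non-NULL context created with S3_create_request_context() is only needed for non-blocking, multi-request operation.

S3_put_object(&bucketContext,     /* bucket name, credentials, etc. */
              "my-key",           /* object key (placeholder) */
              contentLength,      /* total bytes the data callback will supply */
              NULL,               /* S3PutProperties: NULL for defaults */
              NULL,               /* S3RequestContext: NULL => synchronous call */
              &putObjectHandler,  /* response + data callbacks */
              &callbackData);     /* handed back to every callback */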

Unable to access S3 with temporary credentials.

I am not certain it's the best place to report this but ...

I am using libs3 and trying to access a bucket from outside AWS. If I use the "root" <key,secret> pair, everything is fine.

Now, if I generate temporary <key,secret,token> credentials (using sts:AssumeRole), I can't access the bucket (the error is ErrorInvalidAccessKeyId). Access is possible only from inside AWS.

Is this a bug in libs3, or am I doing something wrong?

Thank you!

Additional licenses?

Any chance of getting this software released under Apache V2 or BSD 3-clause licenses?

Does libs3 work with OpenTelekomCloud?

Hello, I was trying to get libs3 up and running to communicate with OpenTelekomCloud. So far, I haven't been very successful.
My first question: is this possible at all?
My second question: I didn't find any value that I could use as the security_token; what goes there in the case of OpenTelekomCloud?
What else do I need to take into account to be able to use OTC with this library?

Kind regards

geniack

It does not work when the host is a proxy server.

Hello
Thank you for sharing, first of all!
I have a question:
1. I have created a Ceph cluster and created an RGW client for the cluster.
2. Then I created an nginx server as a proxy for the RGW.
3. I tested libs3 (bji) and found that it works well when the host is the RGW server's hostname, but does not work when the host is the nginx server's hostname.
4. At the same time, I tested the same setup using Python's boto.s3.connection, and it works well.
Could you help me? I have been testing for several days and the problem is still unsolved.
Thanks!

What is a dynamic object (.do) compared to an object (.o)?

I want to port this to a CMake build for making the shared library. Everything goes well, but when I link my shared library to my application, a segmentation fault occurs.
I noticed that the GNUmakefile in this project builds .do files instead of .o files for the library. I made sure my -fPIC flag was turned on. Could someone help with this?

buffer overrun in base64Encode for small buffers

base64Encode expects an output buffer whose size is ((4 * (inLen + 1)) / 3) bytes, as per the comment. But for an input buffer of size 16, it overruns it: the output buffer should be ((4 * (16 + 1)) / 3) = 22 bytes, yet base64Encode returns an output length of 24 bytes, overrunning the output buffer. The following C code demonstrates it:

#include <stdio.h>
#include "util.h"   /* libs3's private header, which declares base64Encode() */

#define SRC_LEN 16
#define B64_LEN(n) ((((n) + 1) * 4) / 3)   /* the formula from the comment */

int main(void)
{
    unsigned char in_buff[SRC_LEN] = {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15};
    const unsigned int b64len = B64_LEN(SRC_LEN);
    /* Over-allocate by 16 bytes so the overrun is observable, not fatal */
    char b64[B64_LEN(SRC_LEN) + 16] = {'\0'};
    unsigned int outLen = 0;

    printf("b64len = %u\n", b64len);
    printf("Before encoding b64[%u]=0x%X\n", b64len, b64[b64len]);
    outLen = base64Encode(in_buff, 16, b64);
    printf("After encoding outLen=%u, b64[%u]=0x%X\n", outLen, b64len, b64[b64len]);
    return 0;
}

The output is:

b64len = 22
Before encoding b64[22]=0x0
After encoding outLen=24, b64[22]=0x3D

base64Encode should only touch bytes b64[0] through b64[21] (because the required length is supposedly 22), but it clearly touches b64[22] as well, thus overrunning the output buffer (had its size been exactly as per the comment).

Note that for larger input sizes (like 20), this problem does not happen:

b64len = 28
Before encoding b64[28]=0x0
After encoding outLen=24, b64[28]=0x0

Here base64Encode uses only 24 bytes out of 28 bytes.
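For reference, the standard padded-base64 bound of 4 output bytes per 3-byte input group, rounded up, is always sufficient; a sketch of a corrected size macro (my naming, not libs3's):

/* 4 output bytes per 3-byte input group, rounded up. For n = 16 this
 * gives 24, matching the length base64Encode actually wrote above. */
#define B64_SAFE_LEN(n) ((((n) + 2) / 3) * 4)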

request_headers_done doesn't check for request->status when calling the properties callback

It may happen that when we enter this function, we already have request->status != S3StatusOK. We then call curl_easy_getinfo(CURLINFO_RESPONSE_CODE) and receive httpResponseCode=200. As a result, we call the properties callback, due to this code:

    // Only make the callback if it was a successful request; otherwise we're
    // returning information about the error response itself
    if (request->propertiesCallback &&
        (request->httpResponseCode >= 200) &&
        (request->httpResponseCode <= 299)) {
        request->status = (*(request->propertiesCallback))
            (&(request->responseHeadersHandler.responseProperties), 
             request->callbackData);
    }

This code doesn't check request->status. The properties callback returns S3StatusOK, and we clobber the original request->status value. As a result, the caller thinks the request succeeded when it actually failed.

The following simple fix works:

    // Only make the callback if it was a successful request; otherwise we're
    // returning information about the error response itself
    if (request->propertiesCallback &&
        // Also check request->status
        request->status == S3StatusOK &&
        (request->httpResponseCode >= 200) &&
        (request->httpResponseCode <= 299)) {
        request->status = (*(request->propertiesCallback))
            (&(request->responseHeadersHandler.responseProperties), 
             request->callbackData);
    }

Key cannot contain a space.

Hi:
I use libs3 as the upload component in my project, but I find that the key cannot contain a space. I also use CyberDuck on Windows as a client; there the key can contain a space and it works well.

Hoping for your reply.

scott. 2016-05-24

Can't upload file into Oracle Cloud Storage through their S3 API

I'm planning to access Oracle Cloud Storage through the S3 API described here:
https://docs.oracle.com/en/cloud/iaas/storage-cloud/cssto/using-s3-api-compatible-clients-access-oracle-storage-cloud-service.html

I used curl to retrieve my S3 secret key (MySecretKey).

I'm using libs3 on the client side and test with s3 binary.
I set the env variables as follow:
export S3_ACCESS_KEY_ID=Storage-acme
export S3_SECRET_ACCESS_KEY=MySecretKey
export S3_HOSTNAME=acme.storage.oraclecloud.com

Here acme is my personal account and MySecretKey has been authorized as described in the doc.

I can list the content of a bucket:
./s3 list clibucket
I can download objects from it:
./s3 get clibucket/Vol1/test filename=~/test

BUT I consistently can't upload a file into the bucket:

./s3 put clibucket/Vol1/test filename=~/test
doesn't work.

I enabled debugging in libs3 by uncommenting:
// curl_easy_setopt_safe(CURLOPT_VERBOSE, 1);
and
//#define SIGNATURE_DEBUG

Here is the trace I got:

tracelog.txt

I got error 422 Unprocessable Entity.

According to Oracle documentation (https://docs.oracle.com/en/cloud/iaas/storage-cloud/cssto/error-code-reference-oracle-storage-cloud-service.html):

422 Unprocessable Entity
Cause
The value of the ETag header specified in the upload request doesn’t match the MD5 checksum of the HTTP response.
Solution
This error may be due to a problem in data transmission. Delete the specified object and try again.

I tried tweaking a bunch of headers: region, ETag, content-length, x-amz-content-sha256, md5, etc.
Nothing seems to help.
Any hint to get libs3 put to work in this configuration would be greatly appreciated.

libs3 crash if using libxml2 with multithreaded support

Hi
I'm getting a crash on Windows when I use libxml2 with multithreading support:
http://xmlsoft.org/threads.html
I tried calling xmlInitParser() in S3_initialize, and it fixes the issue.
But can I be sure that if I use an older version of libxml2 (without multithreading support) in a multithreaded application, without any synchronization on my side, it will work correctly? Or should I synchronize the libs3 calls?
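For reference, a minimal sketch of the initialization order that http://xmlsoft.org/threads.html requires (the workaround mentioned above), with placeholder arguments:

#include <libxml/parser.h>
#include "libs3.h"

int main(void)
{
    xmlInitParser();   /* must run on the main thread before any workers */
    S3_initialize("myapp", S3_INIT_ALL, "s3.amazonaws.com");
    /* ... spawn threads that make libs3 calls ... */
    S3_deinitialize();
    xmlCleanupParser();
    return 0;
}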

Thanks in advance

Range is set improperly for S3_copy_object_range.

Shouldn't the end of the range in the following be params->startByte + params->byteCount - 1?

I am getting an error because of an attempt to write one byte beyond the file.

libs3/src/request.c

Lines 404 to 406 in 287e4be

snprintf(byteRange, sizeof(byteRange), "bytes=%zd-%zd",
params->startByte, params->startByte + params->byteCount);
append_amz_header(values, 0, "x-amz-copy-source-range", byteRange);
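Presumably the fix is the one the report suggests, since HTTP ranges are inclusive on both ends (a sketch against the quoted lines):

snprintf(byteRange, sizeof(byteRange), "bytes=%zd-%zd",
         params->startByte, params->startByte + params->byteCount - 1);
append_amz_header(values, 0, "x-amz-copy-source-range", byteRange);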

64-bit unsafety in GET range request

startByte should be an 8-byte value, not 4 bytes.

--- a/inc/request.h
+++ b/inc/request.h
@@ -81,7 +81,7 @@ typedef struct RequestParams
     const S3GetConditions *getConditions;
 
     // Start byte
-    size_t startByte;
+    off_t startByte;
 
     // Byte count
     size_t byteCount;
diff --git a/src/request.c b/src/request.c
index dd66863..ea01c82 100644
--- a/src/request.c
+++ b/src/request.c
@@ -401,8 +401,9 @@ static S3Status compose_amz_headers(const RequestParams *params,
         // If byteCount != 0 then we're just copying a range, add header
         if (params->byteCount > 0) {
             char byteRange[S3_MAX_METADATA_SIZE];
-            snprintf(byteRange, sizeof(byteRange), "bytes=%zd-%zd",
-                     params->startByte, params->startByte + params->byteCount);
+            snprintf(byteRange, sizeof(byteRange), "bytes=%lld-%lld",
+                     (long long)params->startByte,
+                     (long long)params->startByte + params->byteCount);
             append_amz_header(values, 0, "x-amz-copy-source-range", byteRange);
         }
         // And the x-amz-metadata-directive header

tinyxml2

tinyxml2 would make a lot of your XML woes go away; it is what Amazon uses in their C++ SDK. I poked around adding it, but the changes are very invasive: half of the code in libs3 would disappear. It would also remove the external dependency on the 8 MB libxml2, which is important for embedded systems.

https://github.com/leethomason/tinyxml2

I do wish libs3 were MIT or Apache licensed so that I could link to it directly. I can't directly link LGPL3 code, since that conflicts with the license of other code I link to; I would have to leave it as a shared object, which complicates things. When linking directly, the linker can eliminate unused functions, making things smaller.

S3_get_object leaks if the object doesn't exist

When calling S3_get_object with a key that doesn't exist in the bucket, valgrind reports a bunch of conditional jumps on uninitialized values and a leak.

Example program:

#include <stdio.h>
#include "libs3.h"

/* Definitions elided from the original report, filled in here with
 * placeholders so the program compiles: */
static const char *host = "s3.amazonaws.com";   /* placeholder */
static S3BucketContext bucketContext = {
    0,                     /* hostName (0 => use the default host) */
    "mybucket",            /* bucketName (placeholder) */
    S3ProtocolHTTPS,
    S3UriStylePath,
    "ACCESS_KEY_ID",       /* placeholder */
    "SECRET_ACCESS_KEY"    /* placeholder */
};

static S3Status responsePropertiesCallback(const S3ResponseProperties *properties,
                                           void *callbackData)
{
    return S3StatusOK;
}

static void responseCompleteCallback(S3Status status,
                                     const S3ErrorDetails *error,
                                     void *callbackData)
{
}

static S3ResponseHandler responseHandler = {
    &responsePropertiesCallback,
    &responseCompleteCallback
};

static S3Status getObjectDataCallback(int bufferSize, const char *buffer,
                                      void *callbackData)
{
  FILE *outfile = (FILE *) callbackData;
  if (bufferSize <= 0) return S3StatusOK;
  size_t wrote = fwrite(buffer, 1, bufferSize, outfile);
  return ((wrote < (size_t) bufferSize) ? S3StatusAbortedByCallback
                                        : S3StatusOK);
}

int s3_get(const char * key, FILE * stream)
{
  S3GetObjectHandler getObjectHandler = {
    responseHandler,
    &getObjectDataCallback
  };

  S3_get_object(
      &bucketContext,
      key,
      NULL,
      0,
      0,
      NULL,
      0,
      &getObjectHandler,
      stream
      );
  return 0;
}

int main(int argc, char ** argv)
{
  if (argc != 2) {
    fprintf(stderr, "usage: %s key\n", argv[0]);
    return -1;
  }

  S3_initialize(NULL, S3_INIT_ALL, host);

  s3_get(argv[1], stdout);

  S3_deinitialize();

  return 0;
}

Data returned from S3_head_object

Does anyone know what is returned in the callbackData field for the S3_head_object call? I'm having a hard time figuring out how to cast the data coming back.

Thanks
Howard

error: ‘%s’ directive output may be truncated writing up to 2511 bytes into a region of size between 875 and 966

Hi! When I try to compile libs3 on a clean Ubuntu 18.04, I run into trouble:

λ  make
build/obj/request.do: Compiling dynamic object
src/request.c: In function ‘setup_request’:
src/request.c:1056:74: error: ‘%s’ directive output may be truncated writing up to 2511 bytes into a region of size between 875 and 966 [-Werror=format-truncation=]
             "Authorization: AWS4-HMAC-SHA256 Credential=%s,SignedHeaders=%s,Signature=%s",
                                                                          ^~
In file included from /usr/include/stdio.h:862:0,
                 from /usr/include/libxml2/libxml/tree.h:15,
                 from /usr/include/libxml2/libxml/parser.h:16,
                 from src/request.c:32:
/usr/include/x86_64-linux-gnu/bits/stdio2.h:64:10: note: ‘__builtin___snprintf_chk’ output between 70 and 2736 bytes into a destination of size 1024
   return __builtin___snprintf_chk (__s, __n, __USE_FORTIFY_LEVEL - 1,
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        __bos (__s), __fmt, __va_arg_pack ());
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
src/request.c: In function ‘request_api_initialize’:
src/request.c:1448:51: error: ‘%s’ directive output may be truncated writing up to 64 bytes into a region of size between 31 and 96 [-Werror=format-truncation=]
         snprintf(platform, sizeof(platform), "%s%s%s", utsn.sysname,
                                                   ^~
                  utsn.machine[0] ? " " : "", utsn.machine);
                                              ~~~~
In file included from /usr/include/stdio.h:862:0,
                 from /usr/include/libxml2/libxml/tree.h:15,
                 from /usr/include/libxml2/libxml/parser.h:16,
                 from src/request.c:32:
/usr/include/x86_64-linux-gnu/bits/stdio2.h:64:10: note: ‘__builtin___snprintf_chk’ output between 1 and 130 bytes into a destination of size 96
   return __builtin___snprintf_chk (__s, __n, __USE_FORTIFY_LEVEL - 1,
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        __bos (__s), __fmt, __va_arg_pack ());
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
src/request.c: In function ‘S3_generate_authenticated_query_string’:
src/request.c:1745:14: error: ‘%s’ directive output may be truncated writing up to 2511 bytes into a region of size between 170 and 329 [-Werror=format-truncation=]
              "X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=%s"
              ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
src/request.c:1749:14:
              computed.signedHeaders, computed.requestSignatureHex);
              ~~~~~~~~
src/request.c:1747:36: note: format string is defined here
              "&X-Amz-SignedHeaders=%s&X-Amz-Signature=%s",
                                    ^~
In file included from /usr/include/stdio.h:862:0,
                 from /usr/include/libxml2/libxml/tree.h:15,
                 from /usr/include/libxml2/libxml/parser.h:16,
                 from src/request.c:32:
/usr/include/x86_64-linux-gnu/bits/stdio2.h:64:10: note: ‘__builtin___snprintf_chk’ output between 117 and 2851 bytes into a destination of size 428
   return __builtin___snprintf_chk (__s, __n, __USE_FORTIFY_LEVEL - 1,
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        __bos (__s), __fmt, __va_arg_pack ());
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
GNUmakefile:223: recipe for target 'build/obj/request.do' failed
make: *** [build/obj/request.do] Error 1

S3 on Frankfurt: ERROR: ErrorInvalidRequest - Please use AWS4-HMAC-SHA256

Hi,
When using libs3 with S3 on Frankfurt (eu-central-1), I get the following error:

ERROR: ErrorInvalidRequest
Message: The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.
Extra Details:
RequestId: xxxxxx
HostId: xxxxxxx

Has anybody run into this? Any idea how to fix it?

Error while compiling

Hello Bji,

Thanks for sharing libs3 with us. I want to ask you something.
I am using macOS and trying to install libs3, but I get this error every time, so I can't continue:

In file included from src/acl.c:30:
inc/request.h:131:14: error: sizeof on pointer operation will return size of
      'char *' instead of 'char [9]' [-Werror,-Wsizeof-array-decay]
    char uri[MAX_URI_SIZE + 1];
             ^~~~~~~~~~~~
inc/util.h:61:51: note: expanded from macro 'MAX_URI_SIZE'
     MAX_URLENCODED_KEY_SIZE + (sizeof("?torrent" - 1)) + 1)
                                       ~~~~~~~~~~ ^
1 error generated.
make: *** [build/obj/acl.do] Error 1

Do you know how I can solve it?
Thanks in advance for your help,
best regards,
Karim
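The diagnostic points at a misplaced parenthesis in inc/util.h: the - 1 landed inside sizeof's argument, turning the expression into pointer arithmetic (hence sizeof(char *)) instead of the string length. A sketch of the presumable intent, with my own macro name:

/* "?torrent" occupies 9 bytes including the terminator, so the string
 * length is sizeof("?torrent") - 1 = 8, not sizeof(char *): */
#define TORRENT_SUFFIX_LEN (sizeof("?torrent") - 1)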

src/request.c:298: out of scope ?

[src/request.c:293] -> [src/request.c:290] -> [src/request.c:298]: (error) Using pointer to local variable 'headerNameWithPrefix' that is out of scope.

The source code is:

if (addPrefix) {
    char headerNameWithPrefix[S3_MAX_METADATA_SIZE - sizeof(": v")];
    snprintf(headerNameWithPrefix, sizeof(headerNameWithPrefix),
             S3_METADATA_HEADER_NAME_PREFIX "%s", headerName);
    headerStr = headerNameWithPrefix;
}

// Make sure the new header (plus ": " plus string terminator) will fit
// in the buffer.
if ((values->amzHeadersRawLength + strlen(headerStr) + strlen(headerValue)
    + 3) >= sizeof(values->amzHeadersRaw)) {
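One possible fix, as a sketch rather than an official patch: hoist the buffer to the enclosing scope so headerStr can never outlive it.

char headerNameWithPrefix[S3_MAX_METADATA_SIZE - sizeof(": v")];

if (addPrefix) {
    snprintf(headerNameWithPrefix, sizeof(headerNameWithPrefix),
             S3_METADATA_HEADER_NAME_PREFIX "%s", headerName);
    headerStr = headerNameWithPrefix;   /* buffer now outlives the if block */
}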

Force V4 authorization for us-east-2

Using libs3 in us-east-1, us-west-1, and us-west-2 works for us, but us-east-2 fails. We get: Error: S3 Copy Failed : The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.

Is there a way to force V4 authorization for these newer regions? (See the sketch after the region list below.)
The following regions don't support Signature Version 2; you must use Signature Version 4 to sign API requests in these regions:

US East (Ohio) Region
Canada (Central) Region
Asia Pacific (Mumbai) Region
Asia Pacific (Seoul) Region
EU (Frankfurt) Region
EU (London) Region
China (Beijing) Region
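For what it's worth, recent libs3 sources have an authRegion member in S3BucketContext that switches request signing to V4; a sketch assuming such a build (the field layout may differ between versions, and the bucket name and endpoint are placeholders):

S3BucketContext bucketContext = {
    "s3.us-east-2.amazonaws.com",   /* hostName: regional endpoint */
    "mybucket",                     /* bucketName (placeholder) */
    S3ProtocolHTTPS,
    S3UriStylePath,
    accessKeyId,
    secretAccessKey,
    NULL,                           /* securityToken */
    "us-east-2"                     /* authRegion: enables AWS4-HMAC-SHA256 */
};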

Create bucket request sets "Expires" to zero

Working with an ECS Single Node virtual machine, I discovered that my object user could delete buckets but not create them.

The cause, I believe, is that S3_create_bucket() sets expires to zero, which sets the "Expires:" header to 1/1/70, and the ECS node barfs (request expired, etc.).

But more to the point:

http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUT.html

doesn't include an "Expires" header, nor is it part of the common request headers. Perhaps the S3PutProperties could be set to -1, which would avoid this nonsense altogether.

diff --git a/src/bucket.c b/src/bucket.c
index c3f1126..1f3ba07 100644
--- a/src/bucket.c
+++ b/src/bucket.c
@@ -263,7 +263,7 @@ void S3_create_bucket(S3Protocol protocol, const char *accessKeyId,
         0,                                       // cacheControl
         0,                                       // contentDispositionFilename
         0,                                       // contentEncoding
-        0,                                       // expires
+        -1,                                      // expires
         cannedAcl,                               // cannedAcl
         0,                                       // metaDataCount
         0,                                       // metaData

etc.

That works.
