
storage's Introduction

Supabase Storage Engine


A scalable, light-weight object storage service.

Read this post on why we decided to build a new object storage service.

  • Multi-protocol support (HTTP, TUS, S3)
  • Uses Postgres as its datastore for storing metadata
  • Authorization rules are written as Postgres Row Level Security policies
  • Integrates with S3 Compatible Storages
  • Extremely lightweight and performant

Supported Protocols

  • HTTP/REST
  • TUS Resumable Upload
  • S3 Compatible API

Architecture

Documentation

Development

  • Copy .env.sample to .env.
  • Copy .env.test.sample to .env.test.
cp .env.sample .env && cp .env.test.sample .env.test

Your root directory should now have both .env and .env.test files.

  • Then run the following:
# this sets up a postgres database and postgrest locally via docker
npm run infra:restart
# Start the storage server
npm run dev

The server should now be running at http://localhost:5000/

The following requests should insert a bucket and then return the list of buckets.

# insert a bucket named avatars
curl --location --request POST 'http://localhost:5000/bucket' \
--header 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoic2VydmljZV9yb2xlIiwiaWF0IjoxNjEzNTMxOTg1LCJleHAiOjE5MjkxMDc5ODV9.th84OKK0Iz8QchDyXZRrojmKSEZ-OuitQm_5DvLiSIc' \
--header 'Content-Type: application/json' \
--data-raw '{
    "name": "avatars"
}'

# get buckets
curl --location --request GET 'http://localhost:5000/bucket' \
--header 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoic2VydmljZV9yb2xlIiwiaWF0IjoxNjEzNTMxOTg1LCJleHAiOjE5MjkxMDc5ODV9.th84OKK0Iz8QchDyXZRrojmKSEZ-OuitQm_5DvLiSIc'

Testing

To run the test suite, use: npm test

storage's People

Contributors

abbit, alaister, ankitjena, bariqhibat, bnjmnt4n, burmecia, buschco, darora, dependabot[bot], fenos, ftonato, hf, ikhsan, inian, jianhan, kiwicopple, mingfang, pcnc, rahul3v, sduduzog, seanedwards, soedirgo, sweatybridge, thebengeu, wdavidw, xenirio, yoohoomedia, yuriyyakym, zqianem


storage's Issues

Allow forcing a file to download instead of opening in the browser

Feature request

Is your feature request related to a problem? Please describe.

I have json files uploaded in supabase storage and I provide presigned links to them in <a> tags in my webapp. Currently, when users click on these links, they are navigated to the json file, which displays in the browser. I would prefer clicking on the link to instead pop up a "Download file" prompt and leave the user on the same page (i.e. don't navigate away), so that downloading json files behaves the same as downloading other files.

Describe the solution you'd like

I think there are 2 options that might make sense:

  1. Accept a Content-Disposition header when uploading a file to storage-api (pass it through to the S3/File backends similar to the Content-Type header)
  2. Allow a Content-Disposition header to be provided when creating a presigned URL; that header is then included in the response when a client follows the presigned URL
    1. This option is similar to how the AWS SDK allows you to override a variety of different headers for presigned urls, see the Overriding Response Header Values heading in the docs

Describe alternatives you've considered

  1. Using the download attribute on anchor tags (<a>) doesn't work, because my webapp is on a different domain/origin than the storage-api/kong (which is on a supabase.co subdomain).
  2. You can get around the above same-origin limitation by downloading as a blob: URL (a sketch of this approach follows this list), but that's clunky and slow and restrictive. Source: https://stackoverflow.com/a/49500465
  3. I could upload files with a bogus content type (e.g. application/octet-stream), so they would be served back with the same content type header and the browser wouldn't think it could open it directly. But, this seems hacky.
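
For completeness, a rough sketch of the blob: URL workaround mentioned in option 2 above (browser-side JavaScript; the helper name and parameters are illustrative):

// Fetch the file, turn it into an object URL, and trigger a download client-side.
async function downloadViaBlob(presignedUrl, filename) {
  const response = await fetch(presignedUrl)
  const blob = await response.blob()
  const objectUrl = URL.createObjectURL(blob)

  const anchor = document.createElement('a')
  anchor.href = objectUrl
  anchor.download = filename // honoured here because the blob: URL is same-origin
  anchor.click()

  URL.revokeObjectURL(objectUrl)
}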

A bucket named "public" will fail download of files with 404.

Bug report

If you create a bucket called "public", both the download command in the UI and a storage.from().download() call from the API return 404 for any file in it.
The public URL works as expected if the bucket is public.

Probably related:
#22

Describe the bug

using

supabase
        .storage
        .from('public')
        .download('shot.jpg')

GET https://instanceid.supabase.co/storage/v1/object/public/shot.jpg 404

Using the UI download from the menu gives the same 404.

NOTE: adding public/ to the path passed to .download does work if the bucket is public:
GET https://instanceid.supabase.co/storage/v1/object/public/public/shot.jpg works (if the bucket is public)

Edit:
Issue seems to be this code:

https://github.com/supabase/storage-api/blob/1e55aa38e8572dfb399527e033edfa8e86f86a01/src/routes/object/getPublicObject.ts#L38

  fastify.get<getObjectRequestInterface>(
    '/public/:bucketName/*',
    {

If I read that correctly, a URL route with public in it is automatically treated as a request for a different interface than any other name, so a bucket named "public" cannot work. This seems hard to fix without a major change to the storage protocol.
It is probably easier to disallow the name public in the bucket creation process and deal with existing buckets on a case-by-case basis (though that is difficult, as buckets cannot be renamed, at least via the API, I believe).

To Reproduce

Create a public bucket named public. Upload a file. Click download from UI. Get 404 error.
Create a public bucket named test. Upload same file. Click on download from UI it works.
EDIT: creating a private bucket named "public" also fails.

Expected behavior

A public bucket named "public" should work, or error on creation, or at least be documented not to work.

System information

Current supabase.js and Supabase instance.

Additional context

Add any other context about the problem here.

Allow security policy to upload (upsert true), update and remove storage.objects without requiring Select permission

Feature request

Is your feature request related to a problem? Please describe.

I want to set security policies to allow the user to upload (upsert true), update and remove storage.objects, but not allowed to download.
I do not want to give the Select permission which is currently required for upload (upsert true), update and remove.

Describe the solution you'd like

Please review the need for Select permission for upload (upsert true), update and remove for storage.objects.

Describe alternatives you've considered

Please tell me how to use the current security policy model to implement the use case of:
Allow the user to upload (upsert true), update and remove storage.objects, but not allowed to download.

Thanks!

Storage PUT returning an error

Bug report

Describe the bug

The PUT method used for updating storage files is failing with a cryptic error:

node[9379]: ../src/node_http_parser.cc:567:static void node::{anonymous}::Parser::Initialize(const v8::FunctionCallbackInfo<v8::Value>&): Assertion `args[3]->IsInt32()' failed.
 1: 0xb200e0 node::Abort() [node]
 2: 0xb2015e  [node]
 3: 0xb377b2  [node]
 4: 0xd6fc5b  [node]
 5: 0xd7110c  [node]
 6: 0xd71786 v8::internal::Builtin_HandleApiCall(int, unsigned long*, v8::internal::Isolate*) [node]
 7: 0x1640939  [node]
Aborted

To Reproduce

  const arrayBuffer = new ArrayBuffer(8);
  let buffer = Buffer.from(arrayBuffer);

        fetch(
          `${SUPABASE_URL}/storage/v1/object/${projectRef}/avatars/${name}`,
          {
            method: 'PUT',
            mode: 'cors',
            cache: 'no-cache',
            headers: {
              Accept: '*/*',
              Authorization: `Bearer ${SUPABASE_SERVICE}`,
            },
            body: buffer,
          },
        )
          .then((res) => res.json())
          .then((res) => {
            console.log(res);
          });

The above code works perfectly with the POST method so I assume it's not something wrong with my code? Swagger mentions needing a project's projectRef, I assume that's the random string I can find in my url when viewing a project on Supabase?

Expected behavior

My file should be updated.

System information

  • OS: Linux
  • Version of supabase-js: N/A
  • Version of Node.js: 16.4.0

Upload File Storage - Nextjs

Bug report

Describe the bug

I'm currently attempting to upload an image from my Next.js server, and each time I upload the image to Supabase the file size is only 15 bytes (unreadable).

However, when I console.log the file on my server, the size shows 181 KB.

To Reproduce

Client

    let formData = new FormData();
    formData.append("file", file.file);
    let res = await fetch('/api/product/add', {
      method: "POST",
      body: formData
    });

Backend

      const {data, error} = await supabase.storage.from('my-bucket-name')
        .upload(`my-file-path`, files[0], {
          cacheControl: '3600',
          upsert: false,
          contentType: files[0].type
        })

Middleware for Nextjs Multipart/form-data

export const multipartFormMiddleware = (req, res) => {
  return new Promise((resolve, reject) => {
    const form = new formidable.IncomingForm();
    form.uploadDir = "./";
    form.keepExtensions = true;
    form.parse(req, (err, fields, files) => {
      if (err) {
        const error = new APIError('Multipart Form Error: Server', req.url, 422, req.method);
        error.info = err;
        res.status(422).json({error: error.message, info: error.info })
        return reject(err)
      }

      let parsedFields = {};
      Object.keys(fields).map((key) => {
        parsedFields[key] = JSON.parse(fields[key])
      });

      let parsedFiles = [];
      Object.keys(files).map((key) => {
        parsedFiles.push(files[key]);
      })

      req.body = parsedFields;
      req.files = parsedFiles;

      return resolve(true)
    })
  })
}
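
One possible explanation for the 15-byte upload is that the formidable File descriptor object, rather than the file contents, ends up in the request body. A hedged workaround sketch (untested): read the temp file that formidable already wrote to disk (the path field visible in the console log below) into a Buffer before uploading.

import { promises as fs } from 'fs'

// Sketch: assumes the same `supabase` client and the `files` array produced by the
// middleware above. formidable v1 stores the upload on disk at files[0].path.
const fileBuffer = await fs.readFile(files[0].path)

const { data, error } = await supabase.storage
  .from('my-bucket-name')
  .upload('my-file-path', fileBuffer, {
    cacheControl: '3600',
    upsert: false,
    contentType: files[0].type,
  })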

My File (Image) - Console log

File {
  _events: [Object: null prototype] {},
  _eventsCount: 0,
  _maxListeners: undefined,
  size: 180634,
  path: 'upload_10beef299c6709f0f56bee0e3813f158.jpg',
  name: 'JuanHeadshot copy.jpg',
  type: 'image/jpeg',
  hash: null,
  lastModifiedDate: 2021-11-08T01:44:35.703Z,
  _writeStream: WriteStream {
    _writableState: WritableState {
      objectMode: false,
      highWaterMark: 16384,
      finalCalled: true,
      needDrain: true,
      ending: true,
      ended: true,
      finished: true,
      destroyed: true,
      decodeStrings: true,
      defaultEncoding: 'utf8',
      length: 0,
      writing: false,
      corked: 0,
      sync: false,
      bufferProcessing: false,
      onwrite: [Function: bound onwrite],
      writecb: null,
      writelen: 0,
      afterWriteTickInfo: null,
      buffered: [],
      bufferedIndex: 0,
      allBuffers: true,
      allNoop: true,
      pendingcb: 0,
      prefinished: true,
      errorEmitted: false,
      emitClose: true,
      autoDestroy: true,
      errored: null,
      closed: true
    },
    _events: [Object: null prototype] {},
    _eventsCount: 0,
    _maxListeners: undefined,
    path: 'upload_10beef299c6709f0f56bee0e3813f158.jpg',
    fd: null,
    flags: 'w',
    mode: 438,
    start: undefined,
    autoClose: true,
    pos: undefined,
    bytesWritten: 180634,
    closed: true,
    [Symbol(kFs)]: {
      appendFile: [Function: appendFile],
      appendFileSync: [Function: appendFileSync],
      access: [Function: access],
      accessSync: [Function: accessSync],
      chown: [Function (anonymous)],
      chownSync: [Function (anonymous)],
      chmod: [Function (anonymous)],
      chmodSync: [Function (anonymous)],
      close: [Function: close],
      closeSync: [Function: closeSync],
      copyFile: [Function: copyFile],
      copyFileSync: [Function: copyFileSync],
      createReadStream: [Function: createReadStream],
      createWriteStream: [Function: createWriteStream],
      exists: [Function: exists],
      existsSync: [Function: existsSync],
      fchown: [Function (anonymous)],
      fchownSync: [Function (anonymous)],
      fchmod: [Function (anonymous)],
      fchmodSync: [Function (anonymous)],
      fdatasync: [Function: fdatasync],
      fdatasyncSync: [Function: fdatasyncSync],
      fstat: [Function (anonymous)],
      fstatSync: [Function (anonymous)],
      fsync: [Function: fsync],
      fsyncSync: [Function: fsyncSync],
      ftruncate: [Function: ftruncate],
      ftruncateSync: [Function: ftruncateSync],
      futimes: [Function: futimes],
      futimesSync: [Function: futimesSync],
      lchown: [Function (anonymous)],
      lchownSync: [Function (anonymous)],
      lchmod: [Function (anonymous)],
      lchmodSync: [Function (anonymous)],
      link: [Function: link],
      linkSync: [Function: linkSync],
      lstat: [Function (anonymous)],
      lstatSync: [Function (anonymous)],
      lutimes: [Function: lutimes],
      lutimesSync: [Function: lutimesSync],
      mkdir: [Function: mkdir],
      mkdirSync: [Function: mkdirSync],
      mkdtemp: [Function: mkdtemp],
      mkdtempSync: [Function: mkdtempSync],
      open: [Function: open],
      openSync: [Function: openSync],
      opendir: [Function: opendir],
      opendirSync: [Function: opendirSync],
      readdir: [Function: readdir],
      readdirSync: [Function: readdirSync],
      read: [Function: read],
      readSync: [Function (anonymous)],
      readv: [Function: readv],
      readvSync: [Function: readvSync],
      readFile: [Function: readFile],
      readFileSync: [Function: readFileSync],
      readlink: [Function: readlink],
      readlinkSync: [Function: readlinkSync],
      realpath: [Function],
      realpathSync: [Function],
      rename: [Function: rename],
      renameSync: [Function: renameSync],
      rm: [Function: rm],
      rmSync: [Function: rmSync],
      rmdir: [Function: rmdir],
      rmdirSync: [Function: rmdirSync],
      stat: [Function (anonymous)],
      statSync: [Function (anonymous)],
      symlink: [Function: symlink],
      symlinkSync: [Function: symlinkSync],
      truncate: [Function: truncate],
      truncateSync: [Function: truncateSync],
      unwatchFile: [Function: unwatchFile],
      unlink: [Function: unlink],
      unlinkSync: [Function: unlinkSync],
      utimes: [Function: utimes],
      utimesSync: [Function: utimesSync],
      watch: [Function: watch],
      watchFile: [Function: watchFile],
      writeFile: [Function: writeFile],
      writeFileSync: [Function: writeFileSync],
      write: [Function: write],
      writeSync: [Function: writeSync],
      writev: [Function: writev],
      writevSync: [Function: writevSync],
      Dir: [class Dir],
      Dirent: [class Dirent],
      Stats: [Function: Stats],
      ReadStream: [Getter/Setter],
      WriteStream: [Getter/Setter],
      FileReadStream: [Getter/Setter],
      FileWriteStream: [Getter/Setter],
      _toUnixTimestamp: [Function: toUnixTimestamp],
      F_OK: 0,
      R_OK: 4,
      W_OK: 2,
      X_OK: 1,
      constants: [Object: null prototype],
      promises: [Getter],
      gracefulify: [Function: patch]
    },
    [Symbol(kCapture)]: false,
    [Symbol(kIsPerformingIO)]: false
  },
  [Symbol(kCapture)]: false
}

Expected behavior

A clear and concise description of what you expected to happen.

Screenshots


System information

  • OS: macOS
  • Browser: Chrome

Additional context

Add any other context about the problem here.

[Storage] [Feature request] Programmatically handle/bust Supabase's CDN cache for public URLs of replaced files with same name

Discussed in supabase/supabase#5737

Feature request

Is your feature request related to a problem? Please describe.

When you use a storage item's public URL, but the file with the same name got replaced (e.g. via supabase.storage.from("...").upload("...")), the CDN (Cloudflare) cache is still active.

Describe the solution you'd like

It would be nice if we as developers were able to bust the CDN cache programmatically via Supabase, because we know exactly when busting the cache would be necessary.

Problem description

I would like to e.g. save avatar images in the storage.
In a perfect world, the image file name is the user ID, which makes it easy to handle the access rights for PUT/POST/DELETE requests.

In my app, a user can upload an image, which is uploaded to the storage via the Supabase client like so:

await supabaseServer.storage
    .from("...")
    .upload("user-id-123.jpg", avatarFileParsed, {
      upsert: true,
      contentType: 'image/jpg',
    })

This works fine.
Now, I use the image's public URL for Next.js' image component, which also "caches" the image locally, but this is out of the scope of this question.

Looking at the response of a request for a storage image's public URL shows that it is cached via Cloudflare. It also has the cache control set for that (e.g. max-age).

This means that even though a new image has been uploaded to the Supabase storage, the CDN still uses the old image and therefore the image you receive for a public URL is also the old one - the app therefore shows the old image, even if a user clears his browser cache.

Describe alternatives you've considered

  • Can one somehow use the cacheControl parameter of the upload function to bust the (CDN) cache?
  • Can one maybe use URL params (at the end of a public URL) to trick the app to not use the cached version?
  • Is there another way to bust the (CDN) cache?

My solution to circumvent this problem right now is to create a random ID for the filename on every upload and save that ID for each user in the database. That way, the CDN still works as intended; I just don't have to bust the cache for a new/replaced image. This is obviously far from perfect, if only for the additional code I had to write.

In a discussion for this topic, @rahul3v proposed an even better solution: Using something like updatedAt, which you typically already have for your user object, together with search params:

<image_url>?t=<update_time>
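
A minimal sketch of that approach, assuming the supabase-js v1 getPublicUrl return shape (publicURL field) and an updated_at timestamp stored on the user row (names are illustrative):

// Build a public URL that changes whenever the underlying file is replaced.
const { publicURL } = supabase.storage
  .from('avatars')
  .getPublicUrl('user-id-123.jpg')

const freshUrl = `${publicURL}?t=${encodeURIComponent(user.updated_at)}`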

Additional context

Edit: It was mentioned by @GaryAustin1 that even Supabase's dashboard uses the second variant above (URL params) to show fresh data (screenshot omitted).

This works great for the use case of e.g. Supabase's dashboard, where you always want to see fresh data, but in a real-world app, where you don't know whether the cache should be fresh or not, I deliberately want to hit the cache if possible. It's not like you can bust the cache with the URL params "once" after a user uploads a new image - you'd need to always use the URL params and therefore will always miss the cache.

Proposed solution

We're already able to do upsert via .upload("..."), but this is only relevant for the file itself. Maybe the .upload("...") function could add another option parameter (something like bustCDNCache: true), which we as developers can use when replacing the same file.

add a precommit hook to validate commit messages

It's annoying that commits don't get picked up by our semantic-commit-analyser plugin and are not reflected properly in our changelog. Our version number is also not bumped, and we need to make fake commits like this.

We should have a pre-commit hook (via husky?) to validate that commit messages adhere to the right format.

emptyBucket ignores postgrest deleteError

Bug report

Describe the bug

If you call emptyBucket on a bucket and there is some postgrest error for the delete operation, the error is logged (somewhere 🤷‍♂️) but not reflected in the emptyBucket response at all, leaving the user with a {"message": "Successfully emptied"} response and a bucket that hasn't been emptied.


To Reproduce

I encountered this bug because I had another database table with an on delete restrict foreign key to storage.objects. So, trying to delete the object via storage-api failed due to the restrict part, but storage-api still responded with success. A minimal repro of this situation follows:

  1. Create a new supabase project

  2. Go to the SQL Editor and add a new snippet

  3. Paste in the following code and run it:

    truncate storage.buckets cascade;
    
    create table if not exists files (
      object_id uuid primary key references storage.objects (id) on delete restrict
    );  
    
    -- Create a bucket and an object in it
    insert into storage.buckets (id, name) values ('mybucket', 'mybucket');
    with _object as (
      insert into storage.objects (bucket_id, name) values ('mybucket', 'myfile')
      returning *
    )
    -- Create a row in the table that references the storage.objects row
    insert into files (object_id) values ((select id from _object));
    
    -- Deleting the file as storage-api tries to do...
    -- delete from storage.objects where name = 'myfile';
    -- ... throws an error from postgres:
    -- ERROR:  update or delete on table "objects" violates foreign key constraint "files_object_id_fkey" on table "files"
    -- DETAIL:  Key (id)=(ee019ecc-a1c3-46eb-8cde-7cf2c39c920f) is still referenced from table "files".
    
  4. Call the emptyBucket API and get a success response

    curl -X POST https://APP_ID.supabase.co/storage/v1/bucket/mybucket/empty -H "Authorization: Bearer $SERVICE_ROLE_JWT" -H "apikey: $ANON_APIKEY"
    
  5. See in the supabase UI or database that the bucket is not empty (the myfile row still exists in storage.objects).

Expected behavior

I would expect a success response from emptyBucket to mean that my bucket was emptied.

System information

I tested this with a new supabase project created today.

Storage API: PUT 'the request is not multipart'

Discussed in supabase/supabase#2051

Originally posted by creativiii June 22, 2021
I'm so sorry, I know I just opened a discussion but as soon as I fixed that I ended up bumping into another problem with the same endpoint but with the PUT method.

        fetch(`${SUPABASE_URL}/storage/v1/object/avatars/${name}`, {
          method: 'PUT',
          mode: 'cors',
          cache: 'no-cache',
          headers: {
            'Content-Type': file.type,
            'Content-Length': file.size.toString(),
            Authorization: `Bearer ${SUPABASE_SERVICE}`,
          },
          body: buffer,
        })

Response is:

{
  statusCode: '406',
  error: 'Not Acceptable',
  message: 'the request is not multipart'
}

I tried setting my Content-Type to multipart/form-data, but that didn't do much.
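
Since the API reports that the body is not multipart, one direction worth trying is to send a real multipart/form-data body. A hedged sketch (the browser sets the Content-Type and boundary itself, so that header is not set by hand):

// Wrap the file in FormData so the request is genuinely multipart/form-data.
const formData = new FormData()
formData.append('file', file) // `file` is the File/Blob being uploaded

await fetch(`${SUPABASE_URL}/storage/v1/object/avatars/${name}`, {
  method: 'PUT',
  headers: {
    Authorization: `Bearer ${SUPABASE_SERVICE}`,
  },
  body: formData,
})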

API route clash with existing buckets named public or sign

User report

So there are three different ways to access objects now

/object/avatars/grumpy.jpg -> getObject -> needs auth header
/object/sign/avatars/grumpy.jpg -> getSignedObject -> needs token query param
/object/public/avatars/grumpy.jpg -> getPublicObject -> needs bucket to be public

We need a better way to disambiguate downloading from existing buckets named public or sign and getPublicObject and getSignedObject operations respectively.

More usable request log configuration

The default request logs produced by fastify are, whilst comprehensive, excessively verbose. We should disable them (https://www.fastify.io/docs/latest/Reference/Server/#disablerequestlogging) and add custom hooks to output something more minimal (e.g. using what nginx outputs as a baseline would be pretty decent).
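
A rough sketch of what that could look like (not the actual storage-api code; it assumes fastify's disableRequestLogging option and an onResponse hook):

import fastify from 'fastify'

// Turn off the built-in request/response logs and emit one minimal line per response.
const app = fastify({ logger: true, disableRequestLogging: true })

app.addHook('onResponse', (request, reply, done) => {
  request.log.info({
    method: request.method,
    url: request.url,
    statusCode: reply.statusCode,
    responseTime: reply.getResponseTime(),
  })
  done()
})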

Here's an example of the current logs (redacted if needed, though it's just a healthcheck anyway):

{
  "level": "info",
  "time": "2022-02-23T22:45:44.335Z",
  "pid": 30,
  "hostname": "<redacted>",
  "reqId": "req-ha6",
  "tenantId": "",
  "res": {
    "log": {},
    "raw": {
      "_contentLength": null,
      "_defaultKeepAlive": true,
      "_events": {},
      "_eventsCount": 1,
      "_expect_continue": false,
      "_hasBody": true,
      "_header": "HTTP/1.1 200 OK\r\ncontent-type: application/json; charset=utf-8\r\ncontent-length: 15\r\nDate: Wed, 23 Feb 2022 22:45:44 GMT\r\nConnection: close\r\n\r\n",
      "_headerSent": true,
      "_keepAliveTimeout": 5000,
      "_last": true,
      "_removedConnection": false,
      "_removedContLen": false,
      "_removedTE": false,
      "_sent100": false,
      "_trailer": "",
      "chunkedEncoding": false,
      "destroyed": false,
      "finished": true,
      "outputData": [],
      "outputSize": 0,
      "sendDate": true,
      "shouldKeepAlive": false,
      "socket": null,
      "statusCode": 200,
      "statusMessage": "OK",
      "useChunkedEncodingByDefault": true,
      "writable": true
    },
    "request": {
      "body": null,
      "context": {
        "_middie": null,
        "_parserOptions": {
          "limit": null
        },
        "attachValidation": false,
        "config": {
          "method": "GET",
          "url": "/status"
        },
        "contentTypeParser": {
          "cache": {
            "first": null,
            "items": {},
            "last": null,
            "max": 100,
            "size": 0,
            "ttl": 0
          },
          "customParsers": {
            "": "[Object]",
            "application/json": "[Object]",
            "multipart": "[Object]",
            "text/plain": "[Object]"
          },
          "parserList": [
            "multipart",
            "application/json",
            "text/plain"
          ],
          "parserRegExpList": []
        },
        "logLevel": "",
        "onError": null,
        "onRequest": [
          null,
          null,
          null
        ],
        "onResponse": [
          null
        ],
        "onSend": null,
        "onTimeout": null,
        "preHandler": null,
        "preParsing": null,
        "preSerialization": null,
        "preValidation": null,
        "schema": {
          "response": {
            "200": "[Object]"
          }
        }
      },
      "id": "req-ha6",
      "log": {},
      "params": {},
      "query": {},
      "raw": {
        "_consuming": false,
        "_dumped": true,
        "_events": {
          "end": [
            null,
            null
          ]
        },
        "_eventsCount": 1,
        "_readableState": {
          "autoDestroy": false,
          "awaitDrainWriters": null,
          "buffer": {
            "head": null,
            "length": 0,
            "tail": null
          },
          "closeEmitted": false,
          "closed": false,
          "dataEmitted": false,
          "decoder": null,
          "defaultEncoding": "utf8",
          "destroyed": false,
          "emitClose": true,
          "emittedReadable": false,
          "encoding": null,
          "endEmitted": true,
          "ended": true,
          "errorEmitted": false,
          "errored": null,
          "flowing": true,
          "highWaterMark": 16384,
          "length": 0,
          "multiAwaitDrain": false,
          "needReadable": false,
          "objectMode": false,
          "pipes": [],
          "readableListening": false,
          "reading": false,
          "readingMore": true,
          "resumeScheduled": false,
          "sync": true
        },
        "aborted": false,
        "client": {
          "_events": {
            "end": "[Array]"
          },
          "_eventsCount": 9,
          "_hadError": false,
          "_host": null,
          "_httpMessage": null,
          "_parent": null,
          "_paused": false,
          "_pendingData": null,
          "_pendingEncoding": "",
          "_readableState": {
            "autoDestroy": false,
            "awaitDrainWriters": null,
            "buffer": "[Object]",
            "closeEmitted": false,
            "closed": false,
            "dataEmitted": false,
            "decoder": null,
            "defaultEncoding": "utf8",
            "destroyed": false,
            "emitClose": false,
            "emittedReadable": false,
            "encoding": null,
            "endEmitted": false,
            "ended": false,
            "errorEmitted": false,
            "errored": null,
            "flowing": true,
            "highWaterMark": 16384,
            "length": 0,
            "multiAwaitDrain": false,
            "needReadable": true,
            "objectMode": false,
            "pipes": [],
            "readableListening": false,
            "reading": true,
            "readingMore": false,
            "resumeScheduled": false,
            "sync": false
          },
          "_server": {
            "_connectionKey": "4:0.0.0.0:5000",
            "_connections": 1,
            "_events": "[Object]",
            "_eventsCount": 3,
            "_handle": "[Object]",
            "_unref": false,
            "_usingWorkers": false,
            "_workers": [],
            "allowHalfOpen": true,
            "headersTimeout": 60000,
            "httpAllowHalfOpen": false,
            "keepAliveTimeout": 5000,
            "maxHeadersCount": null,
            "pauseOnConnect": false,
            "requestTimeout": 0,
            "timeout": 0
          },
          "_sockname": null,
          "_writableState": {
            "afterWriteTickInfo": null,
            "allBuffers": true,
            "allNoop": true,
            "autoDestroy": false,
            "bufferProcessing": false,
            "buffered": [],
            "bufferedIndex": 0,
            "closeEmitted": false,
            "closed": false,
            "corked": 0,
            "decodeStrings": false,
            "defaultEncoding": "utf8",
            "destroyed": false,
            "emitClose": false,
            "ended": true,
            "ending": true,
            "errorEmitted": false,
            "errored": null,
            "finalCalled": true,
            "finished": false,
            "highWaterMark": 16384,
            "length": 0,
            "needDrain": false,
            "objectMode": false,
            "pendingcb": 1,
            "prefinished": false,
            "sync": false,
            "writecb": null,
            "writelen": 0,
            "writing": false
          },
          "allowHalfOpen": true,
          "connecting": false,
          "parser": {
            "_consumed": true,
            "_headers": [],
            "_url": "",
            "incoming": null,
            "maxHeaderPairs": 2000,
            "outgoing": null,
            "socket": "[Circular]"
          },
          "server": {
            "_connectionKey": "4:0.0.0.0:5000",
            "_connections": 1,
            "_events": "[Object]",
            "_eventsCount": 3,
            "_handle": "[Object]",
            "_unref": false,
            "_usingWorkers": false,
            "_workers": [],
            "allowHalfOpen": true,
            "headersTimeout": 60000,
            "httpAllowHalfOpen": false,
            "keepAliveTimeout": 5000,
            "maxHeadersCount": null,
            "pauseOnConnect": false,
            "requestTimeout": 0,
            "timeout": 0
          }
        },
        "complete": true,
        "headers": {
          "accept-encoding": "gzip, compressed",
          "connection": "close",
          "host": "<redacted>",
          "user-agent": "ELB-HealthChecker/2.0"
        },
        "httpVersion": "1.1",
        "httpVersionMajor": 1,
        "httpVersionMinor": 1,
        "method": "GET",
        "rawHeaders": [
          "Host",
          "<redacted>",
          "Connection",
          "close",
          "User-Agent",
          "ELB-HealthChecker/2.0",
          "Accept-Encoding",
          "gzip, compressed"
        ],
        "rawTrailers": [],
        "socket": {
          "_events": {
            "end": "[Array]"
          },
          "_eventsCount": 9,
          "_hadError": false,
          "_host": null,
          "_httpMessage": null,
          "_parent": null,
          "_paused": false,
          "_pendingData": null,
          "_pendingEncoding": "",
          "_readableState": {
            "autoDestroy": false,
            "awaitDrainWriters": null,
            "buffer": "[Object]",
            "closeEmitted": false,
            "closed": false,
            "dataEmitted": false,
            "decoder": null,
            "defaultEncoding": "utf8",
            "destroyed": false,
            "emitClose": false,
            "emittedReadable": false,
            "encoding": null,
            "endEmitted": false,
            "ended": false,
            "errorEmitted": false,
            "errored": null,
            "flowing": true,
            "highWaterMark": 16384,
            "length": 0,
            "multiAwaitDrain": false,
            "needReadable": true,
            "objectMode": false,
            "pipes": [],
            "readableListening": false,
            "reading": true,
            "readingMore": false,
            "resumeScheduled": false,
            "sync": false
          },
          "_server": {
            "_connectionKey": "4:0.0.0.0:5000",
            "_connections": 1,
            "_events": "[Object]",
            "_eventsCount": 3,
            "_handle": "[Object]",
            "_unref": false,
            "_usingWorkers": false,
            "_workers": [],
            "allowHalfOpen": true,
            "headersTimeout": 60000,
            "httpAllowHalfOpen": false,
            "keepAliveTimeout": 5000,
            "maxHeadersCount": null,
            "pauseOnConnect": false,
            "requestTimeout": 0,
            "timeout": 0
          },
          "_sockname": null,
          "_writableState": {
            "afterWriteTickInfo": null,
            "allBuffers": true,
            "allNoop": true,
            "autoDestroy": false,
            "bufferProcessing": false,
            "buffered": [],
            "bufferedIndex": 0,
            "closeEmitted": false,
            "closed": false,
            "corked": 0,
            "decodeStrings": false,
            "defaultEncoding": "utf8",
            "destroyed": false,
            "emitClose": false,
            "ended": true,
            "ending": true,
            "errorEmitted": false,
            "errored": null,
            "finalCalled": true,
            "finished": false,
            "highWaterMark": 16384,
            "length": 0,
            "needDrain": false,
            "objectMode": false,
            "pendingcb": 1,
            "prefinished": false,
            "sync": false,
            "writecb": null,
            "writelen": 0,
            "writing": false
          },
          "allowHalfOpen": true,
          "connecting": false,
          "parser": {
            "_consumed": true,
            "_headers": [],
            "_url": "",
            "incoming": null,
            "maxHeaderPairs": 2000,
            "outgoing": null,
            "socket": "[Circular]"
          },
          "server": {
            "_connectionKey": "4:0.0.0.0:5000",
            "_connections": 1,
            "_events": "[Object]",
            "_eventsCount": 3,
            "_handle": "[Object]",
            "_unref": false,
            "_usingWorkers": false,
            "_workers": [],
            "allowHalfOpen": true,
            "headersTimeout": 60000,
            "httpAllowHalfOpen": false,
            "keepAliveTimeout": 5000,
            "maxHeadersCount": null,
            "pauseOnConnect": false,
            "requestTimeout": 0,
            "timeout": 0
          }
        },
        "statusCode": null,
        "statusMessage": null,
        "trailers": {},
        "upgrade": false,
        "url": "/status"
      }
    }
  },
  "responseTime": 0.5057080090045929,
  "msg": "request completed"
}

Add RLS tests for buckets

We do not have tests checking if RLS policies are properly respected for the buckets table.

We have tests for the object table for this (example). The tests for the bucket table will be similar.

file size

Is there a way to get Storage file info (size, update date) before downloading it?

Explore partitioning the objects table

Postgres native partitioning might be useful when there are millions of rows in the object table. If we partition by bucket_id, all API operations can just operate on a single partition.

Another advantage is one huge bucket shouldn't affect the query performance of another bucket.

I think we can use list partitioning. New partitions can be created dynamically when new buckets are created (in the API code or via triggers). Dropping a bucket becomes simple, since we can just drop the object partition table belonging to that bucket_id.

Query planning takes longer and more memory if there are more partitions. Will this be a problem if there are 1000s of buckets?

This is probably worth exploring after we have a proper benchmark in place.

Storage download (storage.from.download url) is cached in the CDN even for deleted file

Bug report

It is desirable to cache image and other file URLs.
The storage download function, though, seems more like a database call to get the file data from the database/storage system.
A file deleted from storage (no object and no data) will still download when requested with supabase.storage download.
Cache busting with ?bust=123 in the path provided to download does return 404.
The download URL does not appear to be cached in the browser: a second browser will still return the file even though it was deleted and was never requested from that browser.
After one hour the download function does get a 404 error.

To Reproduce

Create a file in the UI in a private bucket.
Download it with:

  .storage
  .from('avatars')
  .download('folder/avatar1.png')

Delete the file.
Run the download code on any browser and file still downloads.

Expected behavior

This behavior should at least be documented, as it is not clear that download is cached, and apparently only by the CDN.
Alternatively, download should not be cached at all, as it is more like a database call than a URL for an image.

Screenshots

System information

Latest Supabase instance and supabase.js code.

Additional context

Add support for search in bucket

Feature request

Relevant PR: supabase/supabase#4961

Searching for storage in the dashboard is currently a client-side search: we do a name string search against what was already fetched from the API.

As we're looking to introduce infinite scrolling to the dashboard, we won't (attempt to) pull all of the folder contents when the user clicks on a folder, hence we need to shift the search logic to the API level.

Searching should also support pagination (we can just stick with offset/limit as per the list items API).
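
To illustrate, the kind of client call this could enable might look like the following (a hypothetical sketch; the search option is the feature being requested here, and the other names mirror the existing list API):

// Hypothetical: API-level search with offset/limit pagination.
const { data, error } = await supabase.storage
  .from('avatars')
  .list('folder', {
    search: 'grumpy', // proposed name-substring filter
    limit: 100,
    offset: 0,
  })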

Add the initial schema as a migration

Add the initial schema as a migration instead of relying on the worker or the CLI to set it up. This makes it easy to use storage-api as a standalone server without relying on other components of our stack.

Signed urls for upload

Feature request

Is your feature request related to a problem? Please describe.

At Labelflow, we developed a tool to upload images to our Supabase storage, based on a single Next.js API route. The goal is to abstract the storage method from the client side by querying a generic upload route to upload any file, and to ease permission management. In the server-side function, a service-role Supabase client is used to actually make the upload. We use next-auth to secure the route (and to manage authentication in the app in general).

Client-side upload looks like that:

await fetch("https://labelflow.ai/api/upload/[key-in-supabase]", {
                  method: "PUT",
                  body: file,
                });

Server-side API route looks more or less like that (I don't show the permission management part):

import { createClient } from "@supabase/supabase-js";
import nextConnect from "next-connect";

const apiRoute = nextConnect({});
const client = createClient(
  process?.env?.SUPABASE_API_URL as string,
  process?.env?.SUPABASE_API_KEY as string
);
const bucket = "labelflow-images";

apiRoute.put(async (req, res) => {
  const key = (req.query.id as string[]).join("/");
  const { file } = req;
  const { error } = await client.storage.from(bucket).upload(key, file.buffer, {
    contentType: file.mimetype,
    upsert: false,
    cacheControl: "public, max-age=31536000, immutable",
  });
  if (error) return res.status(404);
  return res.status(200);
});

export default apiRoute;

The problem is that we face a serious limitation in terms of upload size since we use Vercel for deployment which doesn't allow serverless functions to handle requests that are more than 5Mb. Since we send over images in the upload request from the client to the server, we're likely to reach that limit quite often.

Describe the solution you'd like

As we don't want to manipulate Supabase clients on the client side, we think the ideal solution would be to allow uploading directly to Supabase using a signed upload URL. The above upload route could then take only a key as an input and return a signed URL to upload to.

Client-side upload would now be in two steps:

// Get Supabase signed Url
const { signedUrl } = await (await fetch("https://labelflow.ai/api/upload/[key-in-supabase]", {
                  method: "GET",
                })).json();

// Upload the file
await fetch(signedUrl, {
                  method: "PUT",
                  body: file,
                });

And our API route would look like that, more or less:

import { createClient } from "@supabase/supabase-js";
import nextConnect from "next-connect";

const apiRoute = nextConnect({});
const client = createClient(
  process?.env?.SUPABASE_API_URL as string,
  process?.env?.SUPABASE_API_KEY as string
);
const bucket = "labelflow-images";

apiRoute.get(async (req, res) => {
  const key = (req.query.id as string[]).join("/");
  const { signedURL } = await client.storage
    .from(bucket)
    .createUploadSignedUrl(key, 3600); // <= this is the missing feature

  if (signedURL) {
    res.setHeader("Content-Type", "application/json");
    return res.status(200).json({signedURL});
  }

  return res.status(404);
});

export default apiRoute;

Describe alternatives you've considered

I described them in our related issue:

Additional context

We're happy to work on developing this feature at Labelflow if you think this is the best option!

Migrations failing to run

Bug report

Describe the bug

Migrations are failing to run when starting the storage server

A clear and concise description of what the bug is.

[email protected] restart:db
docker-compose --project-dir . -f src/test/db/docker-compose.yml down && docker-compose --project-dir . -f src/test/db/docker-compose.yml up -d && sleep 5 && npm run migration:run

Stopping storage-api_rest_1 ... done
Stopping storage-api_db_1 ... done
Removing storage-api_rest_1 ... done
Removing storage-api_db_1 ... done
Removing network storage-api_default
Creating network "storage-api_default" with the default driver
Creating storage-api_db_1 ... done
Creating storage-api_rest_1 ... done

[email protected] migration:run
ts-node-dev ./src/scripts/migrate-call.ts

[INFO] 17:56:58 ts-node-dev ver. 1.1.6 (using ts-node ver. 9.1.1, typescript ver. 4.2.3)
running migrations
(node:2616) UnhandledPromiseRejectionWarning: Error: Migration failed. Reason: An error occurred running 'create-migrations-table'. Rolled back this migration. No further migrations were run. Reason: no schema has been selected to create in
at D:\PROJECTS\storage-api\node_modules\postgres-migrations\dist\migrate.js:63:27
at processTicksAndRejections (internal/process/task_queues.js:93:5)
at D:\PROJECTS\storage-api\node_modules\postgres-migrations\dist\with-lock.js:25:28
(Use node --trace-warnings ... to show where the warning was created)
(node:2616) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag --unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:2616) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

To Reproduce

Steps to reproduce the behavior, please provide code snippets or a repository:

  1. Clone supabase/storage-api
  2. cd storage-api
  3. npm run restart:db
  4. Migrations fail to run

Expected behavior

A clear and concise description of what you expected to happen.

Server should run

Screenshots

If applicable, add screenshots to help explain your problem.

System information

  • OS: [e.g. macOS, Windows]
  • Browser (if applies) [e.g. chrome, safari]
  • Version of supabase-js: [e.g. 6.0.2]
  • Version of Node.js: [e.g. 10.10.0]

Additional context

Add any other context about the problem here.

The automated release is failing 🚨

🚨 The automated release from the master branch failed. 🚨

I recommend you give this issue a high priority, so other packages depending on you could benefit from your bug fixes and new features.

You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I’m sure you can resolve this 💪.

Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.

Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the master branch. You can also manually restart the failed CI job that runs semantic-release.

If you are not sure how to resolve this, here are some links that can help you:

If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.


No npm token specified.

An npm token must be created and set in the NPM_TOKEN environment variable on your CI environment.

Please make sure to create an npm token and to set it in the NPM_TOKEN environment variable on your CI environment. The token must allow to publish to the registry https://registry.npmjs.org/.


Good luck with your project ✨

Your semantic-release bot 📦🚀

Tighten number of policies required

Policies for the following API calls can be reduced, either by using the PostgREST minimal representation or by using the serviceKey in PostgREST requests accordingly.

This will align more closely with users' mental model of what the policies should be. For example, createBucket should only require an insert policy on the buckets table, not the select policy as well.

- createBucket requires insert(bucket), select(bucket) 
- deleteBucket requires select(bucket), delete(bucket)
- emptyBucket requires select(bucket), select(object), delete(object)
- copyObject requires select(object), insert(object)
- createObject requires insert(object), update(object)

Cannot Delete image from bucket

Hello dears,
I have spent two days trying to delete an image from the bucket:
1- the bucket is private
2- the bucket name is => images
3- I have tried multiple things and all fail

Here is my code used to delete the image:

deleteImage(bucked: string, imageId: string) {
  return this.supabaseConfig
    .getSupabaseClient()
    .storage.from(bucked)
    .remove([`${imageId}`]);
}

text/plain and application/json MIME type not uploaded in Storage Buckets

Bug report

Describe the bug

text/plain and application/json MIME types are uploaded to Storage Buckets as empty files.
Changing them to text/script and text/json makes them upload successfully as text files.

To Reproduce

  1. Create a non-empty text file as new_file.txt
  2. Upload the file with REST APIs
POST  /storage/v1/object/<bucket-id>/new_file.txt
-H "Content-Length: " + file.length()
-H "Content-Type: text/plain"
-H "Authorization: Bearer " + token
-body file.get_buffer() # Read file as a ByteArray
  3. File is uploaded as empty
  4. Change the Content-Type to text/script
  5. File is uploaded correctly

Same thing happens for .json files uploaded as application/json.

Expected behavior

text/plain and application/json files should be uploaded as common text files.

Screenshots

Text file with text/plain MIME type (screenshot omitted)

Text file with text/script MIME type (screenshot omitted)

JSON file with application/json MIME type (screenshot omitted)

System information

Additional context

More complex files such as .ogg as audio/ogg , .mp3 as audio/mpeg and .pdf as application/pdf are successfully uploaded.

Allow objects / buckets to be fully public

There are use-cases where users would want to access an object without creating a signed URL.

  • Once we launch CDN integration, you can upload your static assets to supabase storage. Accessing it via a SignedUrl is messy.
  • There have been other requests from users, but so far they have been answered by creating signedURLs.

The main challenge with allowing this is that objects might be exposed publicly by accident.
Another issue I have with S3 is that it is very hard to find out which objects are public in a bucket.

I think a good tradeoff would be to allow entire buckets (instead of individual objects) to be public. That way we can have huge warnings in the UI that the bucket is public.

Unable to delete `auth.users` row due to `objects_owner_fkey` FK constraint

Bug report

Describe the bug

When a user uploads an object to a bucket, the object's row in storage.objects has a column owner that has a FK constraint objects_owner_fkey to auth.users.id. However, it's not set up with on delete {cascade|set null}—which prevents the user from actually being deleted.

Attempting to delete a user with a storage.object referencing the user results in a FK constraint violation.

To Reproduce

  1. Create a user
  2. Authenticate as that user on the client
  3. Upload an object as that user on the client
  4. Delete that user via dashboard
  5. You'll get a FK constraint violation error:

    Deleting user failed: update or delete on table "users" violates foreign key constraint "objects_owner_fkey" on table "objects"

Expected behavior

Should be able to delete user whilst retaining the object in the database.

Suggested fix & temporary workaround

Add on delete set null to the objects_owner_fkey constraint:

alter table storage.objects
drop constraint objects_owner_fkey,
add constraint objects_owner_fkey
   foreign key (owner)
   references auth.users(id)
   on delete set null;

Fix extension function

Right now the extension function returns the second part of the filename after splitting by ".".

Instead, the last part should be returned, so that the extension for the file cat.meme.jpg is returned as jpg and not as meme.
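
A minimal sketch of the intended behaviour (not the actual implementation): take the last dot-separated segment.

// Returns the last segment after splitting on '.'.
const getExtension = (name) => name.split('.').pop()

getExtension('cat.meme.jpg') // 'jpg', not 'meme'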

Allow uploading from URL

Feature request

Is your feature request related to a problem? Please describe.

We receive thousands of photos in original quality, typically via Dropbox or other service which can transfer files to Dropbox via the backend "internet highway". Currently we can select photos directly from Dropbox when adding them to Airtable, bypassing the slow down/upload connections to/from my team's laptops.

Describe the solution you'd like

I will use the Dropbox file picker in our app/CMS which generates temporary URLs to selected files, which I can then pass via the JS client to Supabase storage to upload files directly from Dropbox servers to Supabase/S3.

Describe alternatives you've considered

My team down/uploading the files, or a server proxy syncing all files from Dropbox to the file system and then uploading to Supabase Storage, but this adds unnecessary complexity, and I'd also have to sync all photos, not just those we need.

Folders in storage buckets are disappearing

Bug report

Describe the bug

Folders in storage buckets are disappearing

To Reproduce

  1. Go to Storage tab
  2. Create a bucket
  3. Create a sub-folder
  4. Refresh page
  5. Sub-folder has disappeared

Expected behavior

Sub-folder would still be there

System information

  • OS: macOS
  • Browser: Chrome

File backend crashes on deleteObjects

Bug report

Describe the bug

The deleteObjects implementation for the file backend crashes on the fs.rm call:

https://github.com/supabase/storage-api/blob/30b3cb50295d72d2816e98f2c345d9ca5ab59539/src/backend/file.ts#L90

The reason, from what I can tell, is that the version of fs-extra in use (8.1.0) doesn't export a version of rm (https://github.com/jprichardson/node-fs-extra/blob/8.1.0/lib/fs/index.js#L33), so it falls back to using the node built-in version of fs.rm (docs and code) which requires a callback parameter. The storage-api calls fs.rm without a callback parameter, leading to an error like the below when calling deleteObjects or emptyBucket API endpoints:

TypeError: callback is not a function
    at CB (internal/fs/rimraf.js:59:5)
    at internal/fs/rimraf.js:90:14
    at FSReqCallback.oncomplete (fs.js:179:23)
[ERROR] 18:43:57 TypeError: callback is not a function

Here's the line that crashes: https://github.com/nodejs/node/blob/v14.19.0/lib/internal/fs/rimraf.js#L59

And more context about the callback parameter: isaacs/rimraf#111

To Reproduce

Follow the Development instructions in the README in this repo, but use the file STORAGE_BACKEND (STORAGE_BACKEND=file). Example of the .env I used:

ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoiYW5vbiIsImlhdCI6MTYxMzUzMTk4NSwiZXhwIjoxOTI5MTA3OTg1fQ.mqfi__KnQB4v6PkIjkhzfwWrYyF94MEbSC6LnuvVniE
SERVICE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoic2VydmljZV9yb2xlIiwiaWF0IjoxNjEzNTMxOTg1LCJleHAiOjE5MjkxMDc5ODV9.th84OKK0Iz8QchDyXZRrojmKSEZ-OuitQm_5DvLiSIc
TENANT_ID=stub
REGION=stub
GLOBAL_S3_BUCKET=stub
POSTGREST_URL=http://localhost:3000
PGRST_JWT_SECRET=f023d3db-39dc-4ac9-87b2-b2be72e9162b
DATABASE_URL=postgresql://postgres:[email protected]/postgres
PGOPTIONS="-c search_path=storage,public"
FILE_SIZE_LIMIT=52428800
STORAGE_BACKEND=file
FILE_STORAGE_BACKEND_PATH=./data

# Multitenant
# IS_MULTITENANT=true
MULTITENANT_DATABASE_URL=postgresql://postgres:[email protected]:5433/postgres
X_FORWARDED_HOST_REGEXP=
POSTGREST_URL_SUFFIX=/rest/v1
ADMIN_API_KEYS=apikey
ENCRYPTION_KEY=encryptionkey

After you've started the server and made the curl request to create a bucket, create a file and make another curl request to upload the file:

echo "hello" > world.txt
curl --location --request POST 'http://localhost:5000/object/avatars/myavatar' \
--header 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoic2VydmljZV9yb2xlIiwiaWF0IjoxNjEzNTMxOTg1LCJleHAiOjE5MjkxMDc5ODV9.th84OKK0Iz8QchDyXZRrojmKSEZ-OuitQm_5DvLiSIc' \
--data @world.txt

Then, call the deleteObjects API:

curl --location --request DELETE 'http://localhost:5000/object/avatars' \
-H 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoic2VydmljZV9yb2xlIiwiaWF0IjoxNjEzNTMxOTg1LCJleHAiOjE5MjkxMDc5ODV9.th84OKK0Iz8QchDyXZRrojmKSEZ-OuitQm_5DvLiSIc' \
-H 'content-type: application/json' \
--data '{"prefixes": ["myavatar"]}'

In the npm run dev output, you'll see the error:

TypeError: callback is not a function
    at CB (internal/fs/rimraf.js:59:5)
    at internal/fs/rimraf.js:90:14
    at FSReqCallback.oncomplete (fs.js:179:23)
[ERROR] 18:43:57 TypeError: callback is not a function

The file is deleted before the crash happens.

The dev server doesn't crash back out to a shell prompt, but it stops listening on port 5000. If you use supabase-cli and re-create this error, you'll see the storage-api container restart.

Expected behavior

I would expect the deleteObjects and emptyBucket API endpoints to not crash the server process/container.

Possible fixes

I'm not too familiar with the JS ecosystem yet, so I'm not sure what solution makes the most sense, but here are a couple ideas:

  • instead of fs.rm, use fsPromises.rm, which doesn't have a required callback and matches how the current fs.rm call is being executed (a sketch follows this list)
  • provide the required callback and wait for it before returning a response from storage-api
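
A sketch of the first option, assuming a call site like the one linked above (illustrative, not the actual storage-api code):

import { promises as fs } from 'fs'

// The promise-based rm takes no callback and can simply be awaited.
async function deleteFile(filePath) {
  await fs.rm(filePath)
}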

System information

  • Version of Node.js: v16.13.1 and v14.19.0 both repro
  • Version of storage-api: v0.13.1

Additional info

It seems to me like the fs-extra types for rm are incorrect (missing the callback) which might be what led to this problem in the first place. I'm not sure I'm tracking the implementation across all the different packages and versions correctly for the 9.0+ version of fs-extra though.

Handle If-Modified-Since or If-None-match

Right now we send cache-control with a max-age value. This is enough if there is a CDN in front of our API which takes care of handling 304s if the object hasn't changed.

But since we don't have a CDN yet, we need to implement this logic ourselves. The fix should be pretty simple: just pass the If-Modified-Since or If-None-Match headers from the request on to S3. S3 should send back a 304 based on that, which we can stream on to the client.
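
A hedged sketch of that forwarding step, assuming the AWS SDK v3 GetObject input fields IfNoneMatch and IfModifiedSince (illustrative only, not the actual storage-api code):

import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3'

const s3 = new S3Client({ region: 'us-east-1' }) // placeholder region

// Forward the client's conditional headers to S3; when the condition holds,
// S3 answers 304 and the route handler can return 304 to the client.
async function getObjectConditional(bucket, key, requestHeaders) {
  return s3.send(
    new GetObjectCommand({
      Bucket: bucket,
      Key: key,
      IfNoneMatch: requestHeaders['if-none-match'],
      IfModifiedSince: requestHeaders['if-modified-since']
        ? new Date(requestHeaders['if-modified-since'])
        : undefined,
    })
  )
}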

Last modified timestamp will not update after update

Bug report

Describe the bug

A replacement of a file via upload (upsert: true) or update does not update the last-modified timestamp, even though the file itself has been updated.

To Reproduce

Steps to reproduce the behavior, please provide code snippets or a repository:

const avatarFile = event.target.files[0]
const { data, error } = await supabase
  .storage
  .from('avatars')
  .update('public/avatar1.png', avatarFile, {
    cacheControl: 3600,
    upsert: false
  })

Expected behavior

A replacement of a file should update the timestamp.

System information

supabase-js 1.22.0

createObject does not set cacheControl

Bug report

Describe the bug

TL;DR When I upload an object to storage via storage-js, the API doesn't care about the cache-control value.

At the moment, when you call upload() from the Supabase client, it sets the default cache-control field to 3600 (https://github.com/supabase/storage-js/blob/main/src/lib/StorageFileApi.ts#L15). The API should take this field and set that as max-age (https://github.com/supabase/storage-api/blob/master/src/routes/object/createObject.ts#L59). If this value is not present, it is set to no-cache.

When I upload an object to a bucket, I can see that cache-control is set in the request (screenshot omitted). This should mean that the server responds with the cache-control header set to max-age=3600 when I request the object later on.

However, when I download the object afterward, the header is set to no-cache (screenshot omitted).

This is not desirable behavior, as we want to cache as many things as possible for a good user experience (and probably also to lessen the load on Supabase).
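
For reference, a sketch of the upload call and the headers described above (based on the report; the object path is illustrative):

// Upload with an explicit cacheControl of 3600 seconds.
const { data, error } = await supabase.storage
  .from('avatars')
  .upload('public/logo.png', file, { cacheControl: '3600' })

// Expected on a later GET of the object:  cache-control: max-age=3600
// Observed instead:                       cache-control: no-cache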

To Reproduce

  1. Using the supabase client, upload any object, with or without an optional cacheControl value set.
  2. Download the object using the supabase client, refresh the page after the download finishes.
  3. on refresh, check the network tab to see that (1) the browser requested and downloaded the object again and (2) the response has the no-cache header set.

Expected behavior

Supabase API sets cache-control headers that we pass in.

Screenshots

If applicable, add screenshots to help explain your problem.

System information

  • OS: Windows
  • Browser (if applies): chrome
  • Version of supabase-js: "1.7.7",
  • Version of Node.js: v12.0.0

Missing field error if you leave BOOL as the field type

Bug report

Describe the bug

I add a new column to a table via the UI. I want that column to be a BOOL. I set 'false' or 'true' as the Default Value (this part doesn't really matter) and get "Error for input at Type: Field cannot be blank" even though the selected value is bool.

To Reproduce

Steps to reproduce the behavior, please provide code snippets or a repository:

  1. Go to Table Editor
  2. Click on Add Column
  3. Name the column
  4. Set a default value (sort of tangential to the bug)
  5. Click Save
    (Note that it works if you select another field type and then select BOOL.)

Expected behavior

New column should be added

Screenshots

new-column.mov

System information

  • OS: macOS
  • Browser (if applies) Chrome

Additional context

The solution is either to have a NULL option in the Field Type select (in which case the error would make sense) or set the default values in the underlying JS to BOOL.
