
Comments (5)

myomv100 commented on June 12, 2024

Problem solved by reducing chunkSize from 262144 to 65536; sometimes the readable stream can't read more than 65536 bytes at a time.
I moved to another solution that handles the fetched data and decrypts it on the fly. In case someone needs it:

try {
  // Turn a ReadableStream into an async iterable of Uint8Array chunks.
  async function* streamAsyncIterable(stream) {
    const reader = stream.getReader()
    try {
      while (true) {
        const { done, value } = await reader.read()
        if (done) return
        yield value
      }
    } finally {
      reader.releaseLock()
    }
  }

  const response = await fetch(url)
  let responseSize = 0
  const chunkMaxSize = 65536
  let totalWorkArray = new Uint8Array([])
  const fileStream = streamSaver.createWriteStream(filename, {
    size: fileSize
  })
  const writer = fileStream.getWriter()

  for await (const chunk of streamAsyncIterable(response.body)) {
    responseSize += chunk.length
    // Append the new chunk to the buffered bytes.
    const mergedArray = new Uint8Array(totalWorkArray.length + chunk.length)
    mergedArray.set(totalWorkArray)
    mergedArray.set(chunk, totalWorkArray.length)
    totalWorkArray = mergedArray
    // Decrypt and write fixed-size blocks while enough bytes are buffered.
    while (totalWorkArray.length > chunkMaxSize) {
      const c = totalWorkArray.slice(0, chunkMaxSize)
      totalWorkArray = totalWorkArray.slice(chunkMaxSize)
      const work = new Blob([c], { type: 'application/octet-stream' })
      const plain = await FileDecrypt(work, 1, work.type)
      const readableStream = new Blob([plain], { type: plain.type }).stream()
      // Note: this reads only the first chunk the blob stream produces.
      await readableStream.getReader().read().then(async ({ value, done }) => {
        await writer.write(value)
      })
    }
  }

  // Decrypt and write whatever is left over.
  const work = new Blob([totalWorkArray], { type: 'application/octet-stream' })
  const plain = await FileDecrypt(work, 1, work.type)
  const readableStream = new Blob([plain], { type: plain.type }).stream()
  await readableStream.getReader().read().then(async ({ value, done }) => {
    await writer.write(value)
    await writer.close()
  })

} catch (error) {
  ...
}

jimmywarting commented on June 12, 2024

Ouch. This solution makes me sad to see.
I'll try to reply to this in an hour or so, when I have access to a computer, with a solution that I think works better. In the meanwhile, can you share the code for the decoder and an encrypted file?

jimmywarting commented on June 12, 2024

Okay... so here is what I came up with:

/**
 * Read a stream into the same underlying ArrayBuffer of a fixed size
 * and yield a new Uint8Array view of that buffer for every full block.
 * @param {ReadableStreamBYOBReader} reader
 * @param {number} chunkSize
 */
async function* blockReader(reader, chunkSize) {
  let offset = 0;
  let buffer = new ArrayBuffer(chunkSize)
  let done, view

  while (!done) {
    ({value: view, done} = await reader.read(new Uint8Array(buffer, offset, chunkSize - offset)))
    buffer = view.buffer
    if (done) break
    offset += view.byteLength;
    if (offset === chunkSize) {
      yield view
      offset = 0
      // The transferred buffer is reused (recycled) for the next read, so this
      // yielded view will be overwritten. Uncomment the following line if every
      // block should get its own freshly allocated buffer instead:
      // buffer = new ArrayBuffer(chunkSize)
    }
  }

  // Yield whatever partial block remains when the stream ends (as a copied ArrayBuffer).
  if (offset > 0) {
    yield view.buffer.slice(0, offset)
  }
}

const url = 'https://raw.githubusercontent.com/lukeed/clsx/main/src/index.js'
const filename = 'clsx.js'
const fileSize = 1000000
const chunkMaxSize = 65536

const response = await fetch(url)
const fileStream = streamSaver.createWriteStream(filename, {
  size: fileSize
})
const writer = fileStream.getWriter()
const reader = response.body.getReader({ mode: 'byob' })
const type = 'application/octet-stream'
const iterator = blockReader(reader, chunkMaxSize)

// `sameUnderlyingBuffer` is a Uint8Array view over the same underlying ArrayBuffer.
// The view is detached on every loop iteration and not reusable in the next one
// (so don't try to concat them all). The ArrayBuffer itself, however, is
// reused/recycled and refilled on every loop, which makes this a very
// efficient way to read a stream.
for await (const sameUnderlyingBuffer of iterator) {
  const plain = await FileDecrypt(sameUnderlyingBuffer, 1, type)
  await writer.write(plain)
}

await writer.close()
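
If a yielded block needs to outlive its loop iteration (for buffering, hashing, etc.), copy it first, because the next read refills the shared buffer. A minimal sketch, reusing the iterator from above:

const keptBlocks = []
for await (const sameUnderlyingBuffer of iterator) {
  // .slice() with no arguments copies the bytes into a fresh buffer,
  // so the copy survives the next read into the recycled ArrayBuffer.
  keptBlocks.push(sameUnderlyingBuffer.slice())
}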

jimmywarting avatar jimmywarting commented on June 12, 2024

A problem you have in both examples is that you create a stream from new Blob(...).stream()
and then only read the first chunk:

await readableStream.getReader().read().then(async ({ value, done }) => await writer.write(value))

One .read() may not read everything from a blob.
In that case you want to use new Blob(...).arrayBuffer(), or better yet, avoid creating a blob at all in the first place, because there is not much need for it.
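
A minimal sketch of that fix inside the first example's decrypt loop (assuming the same FileDecrypt, work and writer as above):

const plain = await FileDecrypt(work, 1, work.type)

// Option 1: read everything via arrayBuffer() instead of a single .read():
await writer.write(new Uint8Array(await new Blob([plain]).arrayBuffer()))

// Option 2: skip the intermediate Blob entirely, if FileDecrypt already
// resolves to raw bytes (a Uint8Array or ArrayBuffer):
// await writer.write(plain)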

jimmywarting commented on June 12, 2024

Also, if I may suggest, I would add this polyfill:

ReadableStream.prototype.values ??= function({ preventCancel = false } = {}) {
    const reader = this.getReader();
    return {
        async next() {
            try {
                const result = await reader.read();
                if (result.done) {
                    reader.releaseLock();
                }
                return result;
            } catch (e) {
                reader.releaseLock();
                throw e;
            }
        },
        async return(value) {
            if (!preventCancel) {
                const cancelPromise = reader.cancel(value);
                reader.releaseLock();
                await cancelPromise;
            } else {
                reader.releaseLock();
            }
            return { done: true, value };
        },
        [Symbol.asyncIterator]() {
            return this;
        }
    };
};

ReadableStream.prototype[Symbol.asyncIterator] ??= ReadableStream.prototype.values;

That way you could do:

for await (const chunk of response.body) { ... }

Only server-side runtimes and Firefox support async iteration of ReadableStream natively.
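
With the polyfill installed, piping a response into the streamSaver writer is just a loop. A minimal sketch (for the fixed-size block decryption above you would still want to feed these chunks through blockReader or similar):

const response = await fetch(url)
const fileStream = streamSaver.createWriteStream(filename, { size: fileSize })
const writer = fileStream.getWriter()

// Works in every browser once the polyfill above is applied.
for await (const chunk of response.body) {
  await writer.write(chunk)
}

await writer.close()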
