Provides a Java-like, Netty-inspired ByteBuffer implementation using typed arrays. It also tries to abstract a bit of the complexity away by providing convenience methods for those who just want to write stuff without caring about signed, unsigned and the actual bit sizes. It's also one of the components driving ProtoBuf.js.
- Mimics Java ByteBuffers as closely as is reasonable while using typed array terminology
- Full 64 bit support via Long.js (optional)
- Simple allocation (`new ByteBuffer(capacity[, littleEndian])` or `ByteBuffer.allocate(capacity[, littleEndian])`)
- Wrapping of nearly everything that is or contains an ArrayBuffer (`ByteBuffer.wrap(buffer[, littleEndian])`)
- Cloning using the same backing buffer (`ByteBuffer#clone()`) and copying using an independent backing buffer (`ByteBuffer#copy()`)
- Slicing using the same backing buffer (`ByteBuffer#slice(begin, end)`) and using an independent backing buffer (`ByteBuffer#sliceAndCompact(begin, end)`)
- Manual offset (`ByteBuffer#offset` and `ByteBuffer#length`) and array manipulation (`ByteBuffer#array`)
- Remaining readable bytes (`ByteBuffer#remaining()`) and backing buffer capacity (`ByteBuffer#capacity()`) getters
- Explicit resizing (`ByteBuffer#resize(capacity)`) and implicit resizing (`ByteBuffer#ensureCapacity(capacity)`)
- Efficient implicit resizing by doubling the current capacity
- Flipping (`ByteBuffer#flip()`), marking (`ByteBuffer#mark([offset])`) and resetting (`ByteBuffer#reset()`)
- Compacting of the backing buffer (`ByteBuffer#compact()`)
- Conversion to ArrayBuffer (`ByteBuffer#toArrayBuffer([forceCopy])`), e.g. to send data over the wire through a WebSocket with `binaryType="arraybuffer"`
- Conversion to Buffer (`ByteBuffer#toBuffer()`) if running inside of node.js
- Reversing (`ByteBuffer#reverse()`), appending (`ByteBuffer#append(src[, offset])`) and prepending (`ByteBuffer#prepend(src[, offset])`) of other ByteBuffers with implicit capacity management
- Explicit destruction (`ByteBuffer#destroy()`)
- `ByteBuffer#writeUint/Int8/16/32/64(value[, offset])` and `ByteBuffer#readUint/Int8/16/32/64([offset])`
- `ByteBuffer#writeVarint32/64(value[, offset])` and `ByteBuffer#readVarint32/64([offset])` to write respectively read a base 128 variable-length integer as used in protobuf
- `ByteBuffer#writeZigZagVarint32/64(value[, offset])` and `ByteBuffer#readZigZagVarint32/64([offset])` to write respectively read a zig-zag encoded base 128 variable-length integer as used in protobuf for efficient encoding of signed values
- `ByteBuffer#writeFloat32/64(value[, offset])` and `ByteBuffer#readFloat32/64([offset])`
- `ByteBuffer#write/readByte`, `ByteBuffer#write/readShort`, `ByteBuffer#write/readInt`, `ByteBuffer#write/readLong` (all signed), `ByteBuffer#write/readVarint` and `ByteBuffer#write/readZigZagVarint` (both 32 bit signed), `ByteBuffer#write/readFloat` and `ByteBuffer#write/readDouble` as convenience aliases for the above
- `ByteBuffer#writeUTF8String(str[, offset])`, `ByteBuffer#readUTF8String(chars[, offset])` and `ByteBuffer#readUTF8StringBytes(length[, offset])` using the included UTF8 en-/decoder (full 6 bytes)
- `ByteBuffer#writeLString(str[, offset])` and `ByteBuffer#readLString([offset])` to write respectively read a length-prepended (number of characters as UTF8 char) string
- `ByteBuffer#writeVString(str[, offset])` and `ByteBuffer#readVString([offset])` to write respectively read a length-prepended (number of bytes as a base 128 variable-length 32 bit integer) string
- `ByteBuffer#writeCString(str[, offset])` and `ByteBuffer#readCString([offset])` to write respectively read a NULL-terminated (Uint8 0x00) string
- `ByteBuffer#writeJSON(data[, offset[, stringify]])` and `ByteBuffer#readJSON([offset[, parse]])` to write respectively read arbitrary object data; allows overriding the default stringify (default: JSON.stringify) and parse (default: JSON.parse) implementations
- All operations advance the offset implicitly if the offset parameter is omitted, and leave it untouched if an offset is specified
- Chaining of all operations that allow it (i.e. those that do not return a specific value, like read operations), e.g. `var bb = new ByteBuffer(); ... bb.reset().writeInt(1).writeLString("Hello world!").flip().compact()...`
- Switching between little endian and big endian byte order through `ByteBuffer#LE()` and `ByteBuffer#BE()`, e.g. `var bb = new ByteBuffer(8).LE().writeInt(1).BE().writeInt(2).flip(); // toHex: <01 00 00 00 00 00 00 02>`
- `ByteBuffer#toString()`, `ByteBuffer#toHex([wrap])`, `ByteBuffer#toASCII([wrap])` and `ByteBuffer#printDebug()` (emits hex + ASCII + offsets to the console, looks like your favourite hex editor) for pain-free debugging
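As background on the varint and zig-zag encodings listed above, here is a plain-JavaScript sketch of the standard protobuf encodings. These helper functions are illustration-only reimplementations, not the library's actual internals:

```javascript
// Base 128 varint: 7 payload bits per byte, continuation bit (MSB) set
// on every byte except the last.
function encodeVarint32(value) {
    value >>>= 0; // treat as unsigned 32 bit
    var bytes = [];
    while (value > 127) {
        bytes.push((value & 127) | 128); // lower 7 bits + continuation bit
        value >>>= 7;
    }
    bytes.push(value);
    return bytes;
}

// Zig-zag maps signed to unsigned so that values of small magnitude
// stay small: 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, ...
function zigZagEncode32(n) {
    return ((n << 1) ^ (n >> 31)) >>> 0;
}

function zigZagDecode32(z) {
    return (z >>> 1) ^ -(z & 1);
}
```

For example, `encodeVarint32(300)` yields the two bytes `[0xAC, 0x02]`, and `zigZagEncode32(-1)` yields `1`, which then varint-encodes to a single byte.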
- CommonJS compatible
- RequireJS/AMD compatible
- node.js compatible, also available via npm
- Browser compatible
- Closure Compiler ADVANCED_OPTIMIZATIONS compatible (fully annotated; `ByteBuffer.min.js` has been compiled this way, `ByteBuffer.min.map` is the source map)
- Fully documented using jsdoc3
- Well tested through nodeunit
- Zero production dependencies (Long.js is optional)
- Small footprint
- Install: `npm install bytebuffer`

```js
var ByteBuffer = require("bytebuffer");
var bb = new ByteBuffer();
bb.writeLString("Hello world!");
bb.flip();
console.log(bb.readLString()+" from ByteBuffer.js");
```
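The `flip()` call in the example above follows Java ByteBuffer semantics: after writing, flipping prepares the buffer for reading by making everything written so far readable and moving the read position back to the start. A simplified standalone model of that bookkeeping (illustration only; the library's actual fields may differ):

```javascript
// Minimal model of flip() offset/length bookkeeping.
function Buf() {
    this.offset = 0; // current read/write position
    this.length = 0; // end of readable data
}
Buf.prototype.flip = function () {
    this.length = this.offset; // bytes written so far become readable
    this.offset = 0;           // reading starts at the beginning
    return this;               // return self to allow chaining
};
```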
Optionally depends on Long.js for long (int64) support. If you do not require long support, you can skip the Long.js include.
```html
<script src="//raw.github.com/dcodeIO/Long.js/master/Long.min.js"></script>
<script src="//raw.github.com/dcodeIO/ByteBuffer.js/master/ByteBuffer.min.js"></script>
```
```js
var ByteBuffer = dcodeIO.ByteBuffer;
var bb = new ByteBuffer();
bb.writeLString("Hello world!");
bb.flip();
alert(bb.readLString()+" from ByteBuffer.js");
```
Optionally depends on Long.js for long (int64) support. If you do not require long support, you can skip the Long.js config. Require.js example:
```js
require.config({
    "paths": {
        "Long": "/path/to/Long.js",
        "ByteBuffer": "/path/to/ByteBuffer.js"
    }
});
require(["ByteBuffer"], function(ByteBuffer) {
    var bb = new ByteBuffer();
    bb.writeLString("Hello world!");
    bb.flip();
    alert(bb.readLString()+" from ByteBuffer.js");
});
```
Per the ECMAScript specification, number types can represent integers exactly only up to 2^53; beyond that, behaviour may be unexpected. Real long support, however, requires the full 64 bits plus the ability to perform bitwise operations on the value for varint en-/decoding. To enable true long support, ByteBuffer.js therefore optionally depends on Long.js, which internally represents a long as two 32 bit numbers. If you do not require long support at all, you can skip it and save the additional bandwidth. On node, long support is available by default through the long dependency.
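The 2^53 limit mentioned above can be demonstrated directly in plain JavaScript, along with the 32 bit truncation that rules out native bitwise operations on longs:

```javascript
// Beyond 2^53, IEEE-754 doubles can no longer represent every integer,
// so distinct 64 bit values collapse to the same number:
var max = Math.pow(2, 53); // 9007199254740992
console.log(max === max + 1); // true: precision is lost

// Bitwise operators are even more limited: they truncate to 32 bits.
// 2^53 is a multiple of 2^32, so it collapses to 0 under ToInt32,
// which is why varint encoding of longs needs a two-32-bit-words
// representation such as the one Long.js provides.
console.log(max | 0); // 0
```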
- Working ArrayBuffer & DataView implementations (use a polyfill if necessary)
You basically have the following three options:
If you compile your code but want to use ByteBuffer.js as an external dependency that is not actually compiled into your project, add the provided externs file to your compilation step (which usually excludes compilation of ByteBuffer.js itself).
Use ByteBuffer.js if you want the ByteBuffer class to be exposed to the outside world (of JavaScript) so that it can be called by external scripts. This also removes the need for externs, but the compiler will keep possibly unused code.
Use ByteBuffer.noexpose.js if you want the ByteBuffer class to be fully integrated into your (single file) project. External scripts will not (trivially) be able to call it or its methods, because nearly everything gets renamed, with some parts inlined and moved around. This also allows the compiler to actually remove unused code.
Contributors: Dretch (IE8 compatibility)
Apache License, Version 2.0 - http://www.apache.org/licenses/LICENSE-2.0.html