
atomically's Introduction

Atomically

Read and write files atomically and reliably.

Features

  • Overview:
    • This library is a rewrite of write-file-atomic, with some important enhancements on top; you can largely use it as a drop-in replacement.
    • This library is written in TypeScript, so types aren't an afterthought but come with the library.
    • This library is slightly faster than write-file-atomic, and it can be 10x faster, while being essentially just as safe, by using the fsyncWait option.
    • This library has 0 third-party dependencies, so there's less code to vet and the entire thing is roughly 20% smaller than write-file-atomic.
    • This library tries harder to write files on disk than write-file-atomic does, by default retrying some failed operations and handling some more errors.
  • Reliability:
    • Reads are retried, when appropriate, until they succeed or the timeout is reached.
    • Writes are atomic: first a temporary file containing the new content is written, then that file is renamed to the final path, so it's impossible to end up with a corrupt or partially-written file (see the sketch after this list).
    • Writes happening to the same path are queued, ensuring they don't interfere with each other.
    • Temporary files can be configured to not be purged from disk if the write operation fails, which is useful for when keeping the temporary file is better than just losing data.
    • Any missing parent folders are created automatically.
    • Symlinks are resolved automatically.
    • ENOSYS errors on chmod/chown operations are ignored.
    • EINVAL/EPERM errors on chmod/chown operations, in POSIX systems where the user is not root, are ignored.
    • EMFILE/ENFILE/EAGAIN/EBUSY/EACCESS/EACCES/EACCS/EPERM errors happening during necessary operations are caught and the operations are retried until they succeed or the timeout is reached.
    • ENAMETOOLONG errors, whether happening because of the final path or the temporary path, are worked around by smartly truncating paths.
  • Temporary files:
    • By default they are purged automatically once the write operation is completed or if the process exits (cleanly or not).
    • By default they are created by appending a .tmp-[timestamp][randomness] suffix to destination paths:
      • The tmp- part gives users a hint about the nature of these files, if they happen to see them.
      • The [timestamp] part consists of the 10 least significant digits of a milliseconds-precise timestamp, making it likely that if more than one of these files are kept on disk the user will see them in chronological order.
      • The [randomness] part consists of 6 random hex characters.
      • If by any chance a collision is found then another suffix is generated.
  • Custom options:
    • chown: it allows you to specify custom group and user ids:
      • by default the old file's ids are copied over.
      • if custom ids are provided they will be used.
      • if false the default ids are used.
    • encoding: it allows you to specify the encoding of the file content:
      • by default when reading no encoding is specified and a raw buffer is returned.
      • by default when writing utf8 is used.
    • fsync: it allows you to control whether the fsync syscall is triggered right after writing the file or not:
      • by default the syscall is triggered immediately after writing the file, increasing the chances that the file will actually be written to disk in case of imminent catastrophic failures, like power outages.
      • if false the syscall won't be triggered.
    • fsyncWait: it allows you to control whether the triggered fsync is waited or not:
      • by default the syscall is waited.
      • if false the syscall will still be triggered but not be waited.
        • this increases performance 10x in some cases, and at the end of the day often there's no plan B if fsync fails anyway.
    • mode: it allows you to specify the mode for the file:
      • by default the old file's mode is copied over.
      • if false then 0o666 is used.
    • schedule: it's a function that returns a promise that resolves to a disposer function; basically it allows you to provide custom queueing logic for the write operation, so you can, for example, wire atomically into your app's main filesystem job scheduler:
      • even when a custom schedule function is provided write operations will still be queued internally by the library too.
    • timeout: it allows you to specify the amount of maximum milliseconds within which the library will retry some failed operations:
      • when writing asynchronously by default it will keep retrying for 7500 milliseconds.
      • when writing synchronously by default it will keep retrying for 1000 milliseconds.
      • if 0 or -1 no failed operations will be retried.
      • if another number is provided that will be the timeout interval.
    • tmpCreate: it's a function that will be used to create the custom temporary file path in place of the default one:
      • even when a custom function is provided the final temporary path will still be truncated if the library thinks that it may lead to ENAMETOOLONG errors.
      • paths by default are truncated in a way that preserves an eventual existing leading dot and trailing extension.
    • tmpCreated: it's a function that will be called with the newly created temporary file path.
    • tmpPurge: it allows you to control whether the temporary file will be purged from the filesystem or not if the write fails:
      • by default it will be purged.
      • if false it will be kept on disk.
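
To make the write mechanics concrete, here is a minimal, hedged sketch of the temp-file-then-rename pattern described above, using only Node's built-in modules; it illustrates the technique, not atomically's actual implementation, which also adds retries, queueing, chown/mode handling, symlink resolution and more:

import crypto from 'node:crypto';
import fs from 'node:fs';
import path from 'node:path';

// Minimal sketch of an atomic write: write to a sibling temporary file,
// flush it to disk, then atomically rename it over the destination path.
const writeFileAtomicSketch = ( filePath: string, data: string | Buffer ): void => {
  const timestamp = String ( Date.now () % 10 ** 10 ); // 10 least significant digits of a milliseconds-precise timestamp
  const randomness = crypto.randomBytes ( 3 ).toString ( 'hex' ); // 6 random hex characters
  const tempPath = `${filePath}.tmp-${timestamp}${randomness}`;
  fs.mkdirSync ( path.dirname ( filePath ), { recursive: true } ); // Create any missing parent folders
  const fd = fs.openSync ( tempPath, 'w' );
  try {
    fs.writeFileSync ( fd, data );
    fs.fsyncSync ( fd ); // Increase the chances the data actually reaches the disk
  } finally {
    fs.closeSync ( fd );
  }
  fs.renameSync ( tempPath, filePath ); // Atomic swap: readers never see a partially-written file
};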

Install

npm install --save atomically

Usage

This is the shape of the optional options object:

type Disposer = () => void;

type ReadOptions = string | {
  encoding?: string | null,
  mode?: string | number | false,
  timeout?: number
};

type WriteOptions = string | {
  chown?: { gid: number, uid: number } | false,
  encoding?: string | null,
  fsync?: boolean,
  fsyncWait?: boolean,
  mode?: string | number | false,
  schedule?: ( filePath: string ) => Promise<Disposer>,
  timeout?: number,
  tmpCreate?: ( filePath: string ) => string,
  tmpCreated?: ( filePath: string ) => any,
  tmpPurge?: boolean
};

This is the shape of the provided functions:

function readFile ( filePath: string, options?: ReadOptions ): Promise<Buffer | string>;
function readFileSync ( filePath: string, options?: ReadOptions ): Buffer | string;
function writeFile ( filePath: string, data: Buffer | string | undefined, options?: WriteOptions ): Promise<void>;
function writeFileSync ( filePath: string, data: Buffer | string | undefined, options?: WriteOptions ): void;

This is how to use the library:

import {readFile, readFileSync, writeFile, writeFileSync} from 'atomically';

// Asynchronous read with default options
const buffer = await readFile ( '/foo.txt' );

// Synchronous read assuming the encoding is "utf8"
const string = readFileSync ( '/foo.txt', 'utf8' );

// Asynchronous write with default options
await writeFile ( '/foo.txt', 'my_data' );

// Asynchronous write that doesn't prod the old file for a stat object at all
await writeFile ( '/foo.txt', 'my_data', { chown: false, mode: false } );

// 10x faster asynchronous write that's less resilient against imminent catastrophes
await writeFile ( '/foo.txt', 'my_data', { fsync: false } );

// 10x faster asynchronous write that's essentially still as resilient against imminent catastrophes
await writeFile ( '/foo.txt', 'my_data', { fsyncWait: false } );

// Asynchronous write with a custom schedule function
await writeFile ( '/foo.txt', 'my_data', {
  schedule: filePath => {
    return new Promise ( resolve => { // When this returned promise resolves, the write operation will begin
      MyScheduler.schedule ( filePath, () => { // Hypothetical scheduler function that will eventually tell us to go on with this write operation
        const disposer = () => {}; // Hypothetical function containing any clean-up logic; it will be called after the write operation has completed (successfully or not)
        resolve ( disposer ); // Resolving the promise with a disposer, beginning the write operation
      })
    });
  }
});

// Synchronous write with default options
writeFileSync ( '/foo.txt', 'my_data' );
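
// For completeness, a few more hedged examples sketched from the options documented
// above; the paths, data and callbacks here are just placeholders.

// Asynchronous write that retries failed operations for up to 1000 milliseconds
await writeFile ( '/foo.txt', 'my_data', { timeout: 1000 } );

// Asynchronous write that keeps the temporary file on disk if the write fails
await writeFile ( '/foo.txt', 'my_data', { tmpPurge: false } );

// Asynchronous write with a custom temporary path and a callback that logs it
await writeFile ( '/foo.txt', 'my_data', {
  tmpCreate: filePath => `${filePath}.my-temp-suffix`,
  tmpCreated: tempPath => console.log ( `Temporary file created: ${tempPath}` )
});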

License

MIT © Fabio Spampinato

atomically's Issues

v2.0.0 changelog

Hi! Saw that v2.0.0 was officially released. I wasn't able to find a changelog, so I was wondering what changes are required on the user-end for the update?

Thanks!

Use an exponential backoff retry strategy

Currently, retried operations are retried immediately; instead there should probably be an exponential backoff strategy, otherwise this could backfire, e.g. atomically could keep the CPU busy and never give other processes a chance to do the work required to unlock busy/locked files.
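
A hedged sketch of what such a strategy could look like; the delays, cap and retry condition here are hypothetical and not atomically's current retry logic:

// Hypothetical exponential backoff: wait 10ms, 20ms, 40ms, ... between attempts,
// instead of retrying immediately, until the operation succeeds or the timeout is reached.
const retryWithBackoff = async <T> ( operation: () => Promise<T>, timeout: number ): Promise<T> => {
  const start = Date.now ();
  let delay = 10;
  while ( true ) {
    try {
      return await operation ();
    } catch ( error ) {
      if ( Date.now () - start >= timeout ) throw error; // Give up once the timeout is reached
      await new Promise ( resolve => setTimeout ( resolve, delay ) ); // Yield the CPU to other processes
      delay = Math.min ( delay * 2, 1000 ); // Exponential backoff, capped at 1 second
    }
  }
};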

EXDEV: cross-device link not permitted, rename

I use this library to save a user's JSON config file.

The full error is like:

EXDEV: cross-device link not permitted, rename 'C:\Users\<username>\AppData\Roaming\xmcl\user.json.tmp-48051806880cfebe' -> 'C:\Users\<username>\AppData\Roaming\xmcl\user.json'

I only observe this issue in telemetry. I cannot repro this in my environment.

Any idea about this issue?

I'm using atomically 2.0.1. Currently the users who encountered this issue are using Windows.

Explore ignoring intermediate writes

If a write has been initiated, and another 10 writes to the same path have been queued up in the meantime, it's possible that the computer will crash before finishing all of those writes; maybe it would be better to just skip the intermediate writes and go straight to the last write queued up.
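
A hedged sketch of what skipping intermediate writes could look like from user land, using a hypothetical writeCoalesced helper built on top of atomically's writeFile:

import {writeFile} from 'atomically';

// Hypothetical coalescing queue: if a write for a path is already in flight,
// remember only the latest data and write it once the current write finishes,
// skipping any intermediate payloads that were superseded in the meantime.
const pending = new Map<string, Buffer | string> ();
const inFlight = new Set<string> ();

const writeCoalesced = async ( filePath: string, data: Buffer | string ): Promise<void> => {
  pending.set ( filePath, data );
  if ( inFlight.has ( filePath ) ) return; // The in-flight loop will pick up the latest data afterwards
  inFlight.add ( filePath );
  try {
    while ( pending.has ( filePath ) ) {
      const latest = pending.get ( filePath )!;
      pending.delete ( filePath );
      await writeFile ( filePath, latest ); // Only the most recent queued data actually hits the disk
    }
  } finally {
    inFlight.delete ( filePath );
  }
};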

Issue with store creating multiple .temp files and having issues writing to it

Hi,

I was asked to open an issue here by sindresorhus, as he mentioned that you could help (original request - sindresorhus/electron-store#261)

Issue:

I have a store named MasterStore and it creates a MasterStore.json file, and everything works perfectly. However, after some time, for some reason, the .setCache function errors out because it is trying to access some MasterStore.json.temp file. Below is the error log:

Error: EPERM: operation not permitted, rename 'C:\Users\AppData\Roaming\neclient\MasterStore.json.tmp-900921090850e765' -> 'C:\Users\AppData\Roaming\neclient\MasterStore.json'
    at renameSync (node:fs:993:3)
    at attempt (C:\Users\AppData\Local\Programs\neclient\resources\app\node_modules\atomically\dist\utils\retryify.js:33:27)
    at attempt (C:\Users\AppData\Local\Programs\neclient\resources\app\node_modules\atomically\dist\utils\retryify.js:39:36)
    at attempt (C:\Users\AppData\Local\Programs\neclient\resources\app\node_modules\atomically\dist\utils\retryify.js:39:36)
    at Object.writeFileSync (C:\Users\AppData\Local\Programs\neclient\resources\app\node_modules\atomically\dist\index.js:160:50)
    at ElectronStore._write (C:\Users\AppData\Local\Programs\neclient\resources\app\node_modules\conf\dist\source\index.js:375:28)
    at set store [as store] (C:\Users\AppData\Local\Programs\neclient\resources\app\node_modules\conf\dist\source\index.js:296:14)
    at ElectronStore.set (C:\Users\AppData\Local\Programs\neclient\resources\app\node_modules\conf\dist\source\index.js:189:20)
    at Object.setCacheValue (C:\Users\AppData\Local\Programs\neclient\resources\app\system_services\cache-service.js:59:21)
    at BeforeAppShutDown (C:\Users\AppData\Local\Programs\neclient\resources\app\main.js:352:23)

I create the store using
const masterStore = new Store({ name: 'MasterStore' });
and set contents with masterStore.set(key, value);

Can someone tell me what's the reason it's doing this, and how I can fix it?
