MadFS

Source code for the FAST '23 paper MadFS: Per-File Virtualization for Userspace Persistent Memory Filesystems by Shawn Zhong*, Chenhao Ye*, Guanzhou Hu, Suyan Qu, Andrea Arpaci-Dusseau, Remzi Arpaci-Dusseau, and Michael Swift (*equal contribution). Paper. Video. Slides. Code.

Abstract

Persistent memory (PM) can be accessed directly from userspace without kernel involvement, but most PM filesystems still perform metadata operations in the kernel for security and rely on the kernel for cross-process synchronization.

We present per-file virtualization, where a virtualization layer implements a complete set of file functionalities, including metadata management, crash consistency, and concurrency control, in userspace. We observe that not all file metadata need to be maintained by the kernel and propose embedding insensitive metadata into the file for userspace management. For crash consistency, copy-on-write (CoW) benefits from the embedding of the block mapping since the mapping can be efficiently updated without kernel involvement. For cross-process synchronization, we introduce lock-free optimistic concurrency control (OCC) at user level, which tolerates process crashes and provides better scalability.

Based on per-file virtualization, we implement MadFS, a library PM filesystem that maintains the embedded metadata as a compact log. Experimental results show that on concurrent workloads, MadFS achieves up to 3.6x the throughput of ext4-DAX. For real-world applications, MadFS provides up to 48% speedup for YCSB on LevelDB and 85% for TPC-C on SQLite compared to NOVA.

BibTex
@inproceedings {285756,
author = {Shawn Zhong and Chenhao Ye and Guanzhou Hu and Suyan Qu and Andrea Arpaci-Dusseau and Remzi Arpaci-Dusseau and Michael Swift},
title = {{MadFS}: {Per-File} Virtualization for Userspace Persistent Memory Filesystems},
booktitle = {21st USENIX Conference on File and Storage Technologies (FAST 23)},
year = {2023},
isbn = {978-1-939133-32-8},
address = {Santa Clara, CA},
pages = {265--280},
url = {https://www.usenix.org/conference/fast23/presentation/zhong},
publisher = {USENIX Association},
month = feb,
}

Prerequisites

  • MadFS is developed on Ubuntu 20.04.3 LTS and Ubuntu 22.04.1 LTS. It should work on other Linux distributions as well.

  • MadFS requires a C++ compiler with C++20 support. Compilers known to work include GCC 11.3.0, GCC 10.3.0, Clang 14.0.0, and Clang 10.0.0.

  • Install dependencies and configure the system
    • Install build dependencies

      sudo apt update
      sudo apt install -y cmake build-essential gcc-10 g++-10
    • Install development dependencies (optional)

      # to run sanitizers and formatter
      sudo apt install -y clang-10 libstdc++-10-dev clang-format-10
      # for perf
      sudo apt install -y linux-tools-common linux-tools-generic linux-tools-`uname -r`
      # for managing persistent memory and NUMA
      sudo apt install -y ndctl numactl
      # for benchmarking
      sudo apt install -y sqlite3
    • Configure the system

      ./scripts/init.py
  • Configure persistent memory
    • To emulate a persistent memory device using DRAM, please follow the guide here.

    • Initialize namespaces (optional)

      # remove existing namespaces on region0
      sudo ndctl destroy-namespace all --region=region0 --force 
      # create new namespace `/dev/pmem0` on region0
      sudo ndctl create-namespace --region=region0 --size=20G
      # create new namespace `/dev/pmem0.1` on region0 for NOVA (optional)
      sudo ndctl create-namespace --region=region0 --size=20G
      # list all namespaces
      ndctl list --region=0 --namespaces --human --idle
    • Use /dev/pmem0 to mount ext4-DAX at /mnt/pmem0-ext4-dax

      # create filesystem
      sudo mkfs.ext4 /dev/pmem0
      # create mount point
      sudo mkdir -p /mnt/pmem0-ext4-dax
      # mount filesystem
      sudo mount -o dax /dev/pmem0 /mnt/pmem0-ext4-dax
      # make the mount point writable
      sudo chmod a+w /mnt/pmem0-ext4-dax
      # check mount status
      mount -v | grep /mnt/pmem0-ext4-dax
    • Use /dev/pmem0.1 to mount NOVA at /mnt/pmem0-nova (optional)

      # load NOVA module
      sudo modprobe nova
      # create mount point
      sudo mkdir -p /mnt/pmem0-nova
      # mount filesystem
      sudo mount -t NOVA -o init -o data_cow  /dev/pmem0.1 /mnt/pmem0-nova
      # make the mount point writable
      sudo chmod a+w /mnt/pmem0-nova           
      # check mount status
      mount -v | grep /mnt/pmem0-nova          
    • To unmount the filesystems, run

      sudo umount /mnt/pmem0-ext4-dax
      sudo umount /mnt/pmem0-nova

Build and Run

  • Build the MadFS shared library

    # Usage: make [release|debug|relwithdebinfo|profile|pmemcheck|asan|ubsan|msan|tsan]
    #             [CMAKE_ARGS="-DKEY1=VAL1 -DKEY2=VAL2 ..."] 
    make BUILD_TARGETS="madfs"
  • Run your program with MadFS

    LD_PRELOAD=./build-release/libmadfs.so ./your_program
    Sample output
    BuildOptions: 
        build type:
            name: release
            debug: 0
            use_pmemcheck: 0
        hardware support:
            clwb: 1
            clflushopt: 1
            avx512f: 1
        features: 
            map_sync: 1
            map_populate: 1
            tx_flush_only_fsync: 1
            enable_timer: 0
        concurrency control:
            cc_occ: 1
            cc_mutex: 0
            cc_spinlock: 0
            cc_rwlock: 0
    
    RuntimeOptions:
        show_config: 1
        strict_offset_serial: 0
        log_file: None
        log_level: 1
    
    # Your program output here
    
    MadFS unloaded
  • Run tests

    ./scripts/run.py [test_basic|test_rc|test_sync|test_gc]
    # See `./scripts/run.py --help` for more options
    
  • Run and plot single-threaded benchmarks
    ./scripts/bench_st.py --filter="seq_pread"
    ./scripts/bench_st.py --filter="rnd_pread"
    ./scripts/bench_st.py --filter="seq_pwrite"
    ./scripts/bench_st.py --filter="rnd_pwrite"
    ./scripts/bench_st.py --filter="cow"
    ./scripts/bench_st.py --filter="append_pwrite"
    
    # Limit to set of file systems
    ./scripts/bench_st.py -f MadFS SplitFS
    
    # Profile a data point
    ./scripts/bench_st.py --filter="seq_pread/512" -f MadFS -b profile
    
    # See `./scripts/bench_st.py` --help for more options
  • Run and plot multi-threaded benchmarks
    ./scripts/bench_mt.py --filter="unif_0R"
    ./scripts/bench_mt.py --filter="unif_50R"
    ./scripts/bench_mt.py --filter="unif_95R"
    ./scripts/bench_mt.py --filter="unif_100R"
    ./scripts/bench_mt.py --filter="zipf_2k"
    ./scripts/bench_mt.py --filter="zipf_4k"
  • Run and plot metadata benchmarks
    ./scripts/bench_open.py
    ./scripts/bench_gc.py
  • Run and plot macrobenchmarks (SQLite and LevelDB)
    ./scripts/bench_tpcc.py
    ./scripts/bench_ycsb.py

Directory Structure

  • src/: Source code for the MadFS shared library

  • scripts/: Scripts for building, running, and plotting benchmarks

  • bench/: Source code for benchmarks

  • test/: Source code for tests

  • tools/: Source code for tools (e.g., gc, conversion, info)

  • cmake/: CMake modules

  • data/: Data files for benchmarks

Contact

If you have any questions, feel free to open an issue or contact Shawn Zhong ([email protected]) and Chenhao Ye ([email protected]). We are also happy to accept pull requests.

Issues

Memset newly allocated blocks

For now, only transaction blocks are sensitive to residual values, so we should zero them before use.

Do not `open_shm` for read-only files and `*stat`

According to profiling, opening a file takes ~1 us, but setting up the other related state takes ~9 us, which mostly comes from page faults and mapping the shared memory bitmaps (even more costly than replaying the tx history).

Thus, as an optimization, if a file is not meant for writing, there is no need to open shared memory. This should significantly reduce the overhead of open for read-only files and *stat.
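
A minimal sketch of the decision this optimization needs (the helper name is hypothetical; the real open path would also cover the *stat family):

#include <fcntl.h>

// Only opens that can write need the shared-memory setup; read-only opens
// and *stat can skip the costly shm mapping and the page faults it causes.
static inline bool needs_shm(int open_flags) {
  return (open_flags & O_ACCMODE) != O_RDONLY;
}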

Profile for single-threaded 4k overwrite

With Spin Lock

For single-threaded 4k overwrite, the spin lock takes ~18.88% of the overall execution time:

BENCH_NUM_ITER=1000000 ./run bench profile --prog_args="--benchmark_filter='overwrite/4096/.*/threads:1$'"

Specifically, the lock instruction takes the majority of the time inside pthread_spin_lock.

Without Spin Lock

When the spinlock is commented out, ~18.33% of the time is spent in BlkTable::update, and ~16.31% is in the method itself (as opposed to the functions it calls, e.g., advance_tx_idx).

Within BlkTable::update, ~55.56% of the time is spent on the line if (tx_idx) *tx_idx = tail_tx_idx;

Garbage collection for shared memory object

To support garbage collection (i.e., removal) of shared memory objects, the key is to figure out how many processes still refer to the object. Maintaining a reference counter in the shared memory seems ad hoc, as it cannot handle process failures. Here is a proposal for implementing garbage collection for shared memory objects:

  • every process that opens the shared memory file acquires a shared flock
  • the garbage collection process attempts to acquire an exclusive flock with the non-blocking flag; if the acquisition succeeds, no process is using the object and we are free to delete it; otherwise, do not touch it

This is just a tentative proposal (a rough sketch follows below). There will surely be other issues to handle, e.g., the race condition between acquiring the flock and open. We will only consider this proposal after the DRAM bitmaps are fully functional.
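
A minimal sketch of the flock-based proposal, assuming a plain file path for the shared memory object (this is not the current implementation, and the open-vs-flock race noted above is not handled here):

#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

// Every process that opens the shared memory file takes a shared lock.
// The kernel releases the lock automatically when the fd is closed or the
// process dies, so crashes are tolerated.
int open_shm_with_shared_lock(const char* shm_path) {
  int fd = open(shm_path, O_RDWR);
  if (fd < 0) return -1;
  flock(fd, LOCK_SH);
  return fd;
}

// The garbage collector only unlinks the object if it can take an exclusive
// lock without blocking, i.e., no process still holds a shared lock.
bool try_remove_shm(const char* shm_path) {
  int fd = open(shm_path, O_RDWR);
  if (fd < 0) return false;
  bool removable = flock(fd, LOCK_EX | LOCK_NB) == 0;
  if (removable) unlink(shm_path);
  close(fd);
  return removable;
}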

Missing fence in flush

In the current implementation of the persist operation, there is only one fence:

static inline void persist_cl_fenced(void *p) {
  persist_cl_unfenced(p);
  _mm_sfence();
}

The problem is that callers typically update the cache line at *p and then call persist; persist should issue a fence before flushing so that the stores to *p cannot be reordered after the flush.
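
A sketch of the proposed fix (persist_cl_unfenced is the flush helper shown above; whether the leading fence is strictly required depends on the flush instruction used):

#include <immintrin.h>

static inline void persist_cl_fenced(void *p) {
  _mm_sfence();            // order earlier stores before the flush
  persist_cl_unfenced(p);  // clwb / clflushopt the cache line
  _mm_sfence();            // wait for the flush before returning
}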

Use flock for file initialization

In the current implementation, the process that creates the file is responsible for initializing it. To avoid the race condition, every process that doesn't think it has the creator responsibility will check and spin until the first bit of the bitmap in the metablock is 1 (this bit indicates that the metablock itself is taken). However, a process may create the file and then die, leaving the file layout uninitialized. We may need to use flock instead of the first bit of the bitmap to handle this race condition.

Support online garbage collection

A few thoughts on how to implement GC when there are still other processes running:

  • As discussed previously, we could move blocks at the end of the file to fill holes in the middle. Let's call it relocation. For blocks that are organized as linked lists (e.g., LogEntryBlock), we could use an RCU-style scheme to atomically relocate a block: copy it to the hole and CAS the pointer that points to it.
  • To relocate a DataBlock, we must scan through the log (and clean up dead log entries too), since the log contains pointers to DataBlocks.
  • In the MetaBlock, we maintain a gc_clock, which is a logical clock (or version number, if that is a better term); every time a GC completes, gc_clock is incremented by one. In the current block allocation, every process always tries to allocate from the most recent bitmap it has successfully allocated from; if that fails, it tries the next bitmap, so bitmaps are searched in increasing order. When a process fails to allocate blocks from the last bitmap, it then tries to extend the bitmap. With gc_clock, the process can compare it against the last gc_clock it has seen: if they mismatch, a new GC has happened, so instead of extending the file, it rescans all the bitmaps from the head.

New bitmap layout

In the previous discussion, we considered organizing bitmaps as a linked list. It turns out that, when all bitmaps are marked as taken, we likely have to call ftruncate to extend the file and put the new bitmap as the next block. This works fine, but there is an easier implementation: a bitmap block contains 32k bits, which covers 32k blocks (128 MB). We could instead just declare that within every 128 MB worth of blocks, the first block is always a bitmap block. This saves the effort of organizing things into a linked list, and it might be easier for GC.
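
A small sketch of the proposed layout, assuming 4 KB blocks and ignoring the special meta block at index 0 (names and exact arithmetic are illustrative):

#include <cstdint>

// A 4 KB bitmap block holds 32 Ki bits, one per 4 KB block, so it covers
// 32 Ki * 4 KB = 128 MB. Every 32768th block is then a bitmap block.
constexpr uint32_t BLOCKS_PER_BITMAP = 32 * 1024;

static inline bool is_bitmap_block(uint32_t block_idx) {
  return block_idx % BLOCKS_PER_BITMAP == 0;
}

// The bitmap block responsible for a given block, and the bit within it.
static inline uint32_t bitmap_block_of(uint32_t block_idx) {
  return block_idx / BLOCKS_PER_BITMAP * BLOCKS_PER_BITMAP;
}
static inline uint32_t bit_within_bitmap(uint32_t block_idx) {
  return block_idx % BLOCKS_PER_BITMAP;
}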

add NotImplementedException

For APIs like ftruncate that we do not support yet but that would hurt file integrity, it is better to hook them and throw exceptions so that we can catch the issue.
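
A rough sketch of such a hook, assuming the usual LD_PRELOAD symbol override (a real hook would forward to the original libc ftruncate for fds that MadFS does not manage, rather than failing unconditionally):

#include <cstdio>
#include <cstdlib>
#include <sys/types.h>  // off_t

extern "C" int ftruncate(int fd, off_t length) {
  std::fprintf(stderr, "madfs: ftruncate(fd=%d, len=%ld) is not implemented\n",
               fd, static_cast<long>(length));
  std::abort();  // fail loudly instead of silently corrupting the file
}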

Move bitmap to shared DRAM

Currently, the bitmap is persisted on PM, but since we already scan through the tx log to rebuild state during open, we can always recover the bitmap's latest state and keep it in shared memory. The idea is that when the first process opens the file, it creates a shared memory object and leaves the object name in the meta block. Later processes can just check the meta block and open the same shared memory for the bitmap.

By moving the bitmap to DRAM, we get additional benefits:

  • no need for cache line flushes during allocation.
  • no GC is needed for blocks that were allocated (by some process) but never used: just remove the shared memory (when no one has the file open) and let the next process that opens the file rebuild the bitmap image. GC is only needed for Tx and Log blocks, and GC for those is easier because we can dump the new image into a new log, switch the pointer in the meta block to the new log, and then discard everything in the old log.

Some additional attention is needed (see also the sketch after this list):

  • How to name the shared memory object: probably not just the filename, because rename may mess things up. A version number or timestamp might be preferred.
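
A rough sketch of the shared-DRAM bitmap idea, where the object name recorded in the meta block is derived from a version counter (the naming scheme and sizes here are only assumptions):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

// The first opener calls this with create = true and stores `version` in the
// meta block; later openers read the version and map the same object.
void* map_bitmap_shm(uint64_t version, size_t bitmap_bytes, bool create) {
  char name[64];
  std::snprintf(name, sizeof(name), "/madfs_bitmap_%llu",
                static_cast<unsigned long long>(version));
  int fd = shm_open(name, create ? (O_CREAT | O_RDWR) : O_RDWR, 0666);
  if (fd < 0) return nullptr;
  if (create && ftruncate(fd, static_cast<off_t>(bitmap_bytes)) != 0) {
    close(fd);
    return nullptr;
  }
  void* addr = mmap(nullptr, bitmap_bytes, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, 0);
  close(fd);  // the mapping stays valid after closing the fd
  return addr == MAP_FAILED ? nullptr : addr;
}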

Handle log tail flush

As Mike suggested, we should flush the cache line in the tx block every time we use up the whole cache line; this helps fsync latency.

Fix `fclose`

Since glibc's fclose calls the close syscall directly (bypassing our intercepted close), we have to intercept fclose too. However, in the current implementation, we only close the fd but don't flush the FILE stream.

A reference implementation can be found here.
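
One possible shape of the fix, sketched here with a hypothetical madfs_close standing in for MadFS's intercepted close path (releasing the FILE object itself is omitted; the reference implementation linked above covers that part):

#include <cstdio>

int madfs_close(int fd);  // hypothetical: MadFS's own close path

extern "C" int fclose(FILE* stream) {
  int ret = std::fflush(stream);  // flush buffered user-space data first
  int fd = fileno(stream);
  if (madfs_close(fd) != 0) ret = EOF;
  // NOTE: a complete implementation must also free the FILE object.
  return ret;
}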

Use `int16_t` for block local index

A block local index is used to address an offset within a block, so it typically only requires 12 bits. The current implementation uses int32_t, which might be wasteful when storing many block local indices in a container.

Block Idx Shift Type Cast

The block index is of type uint32_t, and we commonly compute idx << BLOCK_SHIFT to get a file size. This overflows if the file size is larger than 4 GB.

To fix it, we could add a macro or inline function idx_to_block_size that casts the block index to uint64_t first and then performs the shift.
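
A minimal sketch of such a helper, assuming 4 KB blocks (BLOCK_SHIFT = 12):

#include <cstdint>

constexpr int BLOCK_SHIFT = 12;

// Widen to 64 bits before shifting so files larger than 4 GB don't overflow.
static inline uint64_t idx_to_block_size(uint32_t idx) {
  return static_cast<uint64_t>(idx) << BLOCK_SHIFT;
}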

Implement read/write with file offset change

The tricky part of read/write (in contrast to pread and pwrite) is that changing the file offset implies a serialization point, while OCC may pick a different serialization point. Such divergence could result in anomalies.

The easiest way to resolve this is to use a lock for the file offset. A multithreaded program that does concurrent read/write on a file usually uses pread and pwrite instead.
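
A minimal sketch of this lock-based approach for a single process (the offset state and its lock would actually have to live in shared memory to cover multiple processes; the types here are illustrative):

#include <cstdint>
#include <mutex>
#include <unistd.h>

struct OffsetState {
  std::mutex lock;
  uint64_t offset = 0;
};

// Make the offset update atomic with the positioned read underneath it.
ssize_t locked_read(int fd, OffsetState& st, void* buf, size_t count) {
  std::lock_guard<std::mutex> guard(st.lock);
  ssize_t ret = pread(fd, buf, count, static_cast<off_t>(st.offset));
  if (ret > 0) st.offset += static_cast<uint64_t>(ret);
  return ret;
}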

A better solution is to fully exploit OCC: in BlkTable::update, a thread A acquires the current offset and the tx tail of the last offset-changing thread. It then performs its I/O, commits, and publishes the tx tail it has seen (this is the serialization point). Another thread B that does I/O concurrently should see that A is changing the offset but has not yet finished (probably implemented as a ticket lock, etc.); B can proceed, but when it is about to commit, it must ensure that A has already committed and that its own commit is ordered after A's.

Move `num_blocks` and `file_size` to the second cache line of the meta block

In the current implementation, num_blocks and file_size are in the first cache line of the meta block, but these two fields are written often, while the other fields are mostly read-only. It might be a good idea to move these two fields to the second cache line, because changes to them usually require meta_lock to be held.

  union {
    struct {
      // file signature
      char signature[SIGNATURE_SIZE];

      // file size in bytes (logical size to users)
      uint64_t file_size;

      // total number of blocks actually in this file (including unused ones)
      uint32_t num_blocks;

      // if inline_tx_entries is used up, this points to the next log block
      LogicalBlockIdx next_tx_block;

      // hint to find tx log tail; not necessarily up-to-date
      TxEntryIdx tx_log_tail;
    };

    // padding to avoid cache line contention
    char cl1[CACHELINE_SIZE];
  };

  union {
    // address for futex to lock, 4 bytes in size
    // this lock is ONLY used for ftruncate
    Futex meta_lock;

    // put the futex on another cache line so that futex contention does not
    // affect reading the metadata above
    char cl2[CACHELINE_SIZE];
  };

Contention and scalability

For now, there are some scalability bottlenecks that are not yet well understood:

  1. When trying to commit to the global timeline log, multiple processes may contend on the log tail.
  2. In a strict mode where every process applies the newest log entries detected on every read operation: once a process publishes its transaction by committing it to the global timeline log, other processes will see it and try to read and apply it. This causes a huge amount of traffic on the cache line holding the log entries.

Issue 1 is still an open question. One possible solution is to introduce partial ordering, but that might be too complicated and out of scope.
Issue 2 can be alleviated: once a process commits its log entries to the global timeline, it prefers a different cache line for its next log entries. This leaves the previous cache line holding the committed entries untouched and shared for reading by other processes.

A more optimized way for writing `zero_init`

Note for @chenhao-ye: you don't need to change anything for now; we should do measurement (profiling) first.

For zero_init, we currently have the following logic:

  • For each cacheline:
    • memcmp
    • memset
    • persist
  • fence

First, it might not be desirable to do the loop at cache line granularity, as PM internally uses a 256-byte granularity. Second, it may unnecessarily pollute the cache with a bunch of zero cache lines.

It would be desirable to follow how PMDK handles this at https://pmem.io/2019/01/22/extended-memcpy.html.

The logic is simple - if a modification is smaller than 256 bytes (configurable using PMEM_MOVNT_THRESHOLD environment variable) they use normal mov instructions followed by pmem_flush, but for modifications of 256 bytes or more they use non-temporal (NT) stores. These instructions on x86_64 have 2 properties - they bypass CPU caches (so that pmem_flush is not needed) and treat destination memory as write-combining type. The latter property means that if the destination memory is not in the cache, the CPU doesn’t have to fetch full cache lines, only to flush them a moment later. This is important not because data is not stored in the cache, but because there’s no fetch of previous data. This means that application can update the same cache lines multiple times without waiting for them to be available for reading.
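
A sketch of what zero_init could look like if it delegated to libpmem, which applies the threshold logic quoted above (this assumes linking against libpmem; keeping the existing per-cache-line loop as a fallback is a separate decision):

#include <libpmem.h>

void zero_init(void* dst, size_t len) {
  // For large lengths libpmem uses non-temporal stores, which avoids fetching
  // the old cache lines and avoids polluting the CPU cache with zeros.
  pmem_memset_persist(dst, 0, len);
}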

Revisit fence and CAS in `futex.h`

In the current implementation of Futex:

  void acquire() {
    atomic_thread_fence(std::memory_order_acquire);
    while (true) {
      uint32_t one = 1;
      __atomic_compare_exchange_n(&val, &one, 0, true, __ATOMIC_ACQ_REL,
                                  __ATOMIC_ACQUIRE);

      long rc = futex(&val, FUTEX_TRYLOCK_PI, 0, nullptr, nullptr, 0);
      if (errno == EAGAIN) continue;
      if (rc == -1) perror("futex-acquire");
      return;
    }
  }

  void release() {
    val = 0;
    long rc = futex(&val, FUTEX_WAKE, 1, nullptr, nullptr, 0);
    if (rc == -1) perror("futex-release");
    atomic_thread_fence(std::memory_order_release);
  }

These fences might not be necessary, given that we call into the kernel anyway. In addition, the __atomic_compare_exchange_n here looks odd because we are going to make the futex syscall anyway. This piece of code may need revisiting.

LogEntryBlock layout, LogEntry allocation and management

Design:

Base struct is struct LogEntry of size 8 bytes. Two subtypes:

  • LogHeadEntry stores:
    • bool overflow (1 bit -- true if there is an overflow segment in another LogBlock following me)
    • bool saturate (1 bit -- true if the current segment fills the current LogBlock; overflow implies saturate)
    • op (2 bits)
    • leftover_bytes (12 bits -- previously last_remaining)
    • leftover_blocks (8 bits -- 6 bits is actually enough, max 64)
    • num_local_entries (8 bits -- number of local body entries of this segment inside my LogBlock)
    • union next (4 bytes, content depends on the value of saturate)
      • if !saturate, stores LogLocalIdx next_local_idx : 8 -- block idx is the same as mine
      • if saturate, stores LogicalBlockIdx next_block_idx : 32 -- local idx must be zero
  • LogBodyEntry: stores begin_virtual_idx (4 bytes) and begin_logical_idx (4 bytes).

The idea behind the union is to pack LogHeadEntry within 8 bytes in all cases. We need either next_local_idx or next_block_idx to locate the next header, but not both, depending on the value of saturate.

Without the saturate indicator and the union, it is not possible to support the case where there is no overflow segment after me (so I must store the leftover_* numbers) but I'm also filling up the current LogBlock (so I must store the 4-byte next block idx).
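
A sketch of this layout in code, with the field widths listed above (names and exact types are illustrative, not the final implementation):

#include <cstdint>

using LogLocalIdx = uint8_t;
using LogicalBlockIdx = uint32_t;

struct LogHeadEntry {
  uint32_t overflow : 1;           // an overflow segment follows in another LogBlock
  uint32_t saturate : 1;           // this segment fills the current LogBlock
  uint32_t op : 2;
  uint32_t leftover_bytes : 12;    // previously last_remaining
  uint32_t leftover_blocks : 8;    // 6 bits would suffice (max 64)
  uint32_t num_local_entries : 8;  // local body entries in my LogBlock
  union {
    LogLocalIdx next_local_idx;      // if !saturate: block idx is the same as mine
    LogicalBlockIdx next_block_idx;  // if saturate: local idx must be zero
  } next;                          // 4 bytes either way
};

struct LogBodyEntry {
  uint32_t begin_virtual_idx;
  uint32_t begin_logical_idx;
};

static_assert(sizeof(LogHeadEntry) == 8, "head entry must fit in 8 bytes");
static_assert(sizeof(LogBodyEntry) == 8, "body entry is 8 bytes");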

Use hugepage

There is some OS configuration needed for huge pages.

We need to add a script to set up huge pages and to enable BuildOptions::use_hugepage.

Organization of thread-local data structures

Current Impl

As discussed with @chenhao-ye, the current organization of thread-local data structures is as follows:

  • Per-process data structures are maintained in a mapping fd -> file.

  • Per-thread data structures are maintained in thread-local mappings: fd -> allocator and fd -> log_mgr. A thread-local object is inserted into the mapping the first time the thread accesses the file.

Upon receiving a write request, we perform 3 lookups: 1 in the global files map and 2 in the thread-local mappings. The number could increase if we add more thread-local data structures. (I think TxMgr could be made thread-local as well?):

  • From the global files map, look up the file instance
  • Look up the thread-local allocator from allocators
  • Look up the thread-local log_mgr from log_mgrs

Alternative Impl

An alternative implementation would be the following:

  • Have a per-thread mapping of type thread_local std::unordered_map<int, File> where File contains all the information about a file. Some fields are local to the thread, and some fields are pointers to the per-process data structure (see below), such as mem_table and blk_table.

  • Have a per-process mapping tbb::concurrent_unordered_map<int, SharedFile> where SharedFile stores the shared data structure, including mem_table and blk_table.

Upon opening a file, the thread:

  • emplaces a SharedFile into the global mapping
  • creates a local File object and inserts it into the local mapping

Upon reading a file, the thread:

  • looks up the local mapping to find the file
  • if not found, the thread looks up the corresponding SharedFile in the global data structure and inserts a File into the local mapping.

In this case, a write request can be handled with only 1 lookup most of the time.
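
A sketch of the alternative organization (type names are illustrative; the real global map would be a tbb::concurrent_unordered_map as described above):

#include <memory>
#include <unordered_map>

struct SharedFile {  // per-process state shared by all threads
  // MemTable mem_table; BlkTable blk_table; ...
};

struct File {  // per-thread view of an open file
  std::shared_ptr<SharedFile> shared;  // points into the per-process state
  // Allocator allocator; LogMgr log_mgr; ...  (thread-local pieces)
};

// Fast path: a read/write needs only this one thread-local lookup.
thread_local std::unordered_map<int, File> local_files;

// Slow path: consulted only when the thread sees the fd for the first time.
std::unordered_map<int, std::shared_ptr<SharedFile>> global_files;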

tx.h:109:78: error: conversion to ‘unsigned int’ from ‘int’ may change the sign of the result

Hi, as an experiment I tried compiling MadFS on GitHub Codespaces (hoping to then write a mini CI that just verifies whether MadFS compiles on the ubuntu-latest GitHub runner).

Is this error expected? If not, I believe an explicit cast might be needed (see the sketch after the build log).

@osalbahr ➜ /workspaces/MadFS (main) $ make BUILD_TARGETS="madfs"
cmake -S . -B build-release -DCMAKE_BUILD_TYPE=release 
-- Configuring done
-- Generating done
-- Build files have been written to: /workspaces/MadFS/build-release
cmake --build build-release -j --target madfs -- --quiet 
[ 72%] Built target pmem2
[ 74%] Building CXX object CMakeFiles/madfs.dir/src/file/write.cpp.o
[ 75%] Building CXX object CMakeFiles/madfs.dir/src/file/read.cpp.o
In file included from /workspaces/MadFS/src/tx/write.h:3,
                 from /workspaces/MadFS/src/tx/write_aligned.h:1,
                 from /workspaces/MadFS/src/file/write.cpp:2:
/workspaces/MadFS/src/tx/tx.h: In member function ‘bool madfs::dram::Tx::handle_conflict(madfs::pmem::TxEntry, madfs::VirtualBlockIdx, madfs::VirtualBlockIdx, std::vector<madfs::BaseIdx<unsigned int, madfs::IdxType::LOGICAL_BLOCK_IDX> >&, bool*)’:
/workspaces/MadFS/src/tx/tx.h:109:78: error: conversion to ‘unsigned int’ from ‘int’ may change the sign of the result [-Werror=sign-conversion]
  109 |         VirtualBlockIdx end_vidx = curr_entry.inline_entry.begin_virtual_idx +
      |                                    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
  110 |                                    curr_entry.inline_entry.num_blocks;
      |                                    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~         
In file included from /workspaces/MadFS/src/tx/read.h:3,
                 from /workspaces/MadFS/src/file/read.cpp:1:
/workspaces/MadFS/src/tx/tx.h: In member function ‘bool madfs::dram::Tx::handle_conflict(madfs::pmem::TxEntry, madfs::VirtualBlockIdx, madfs::VirtualBlockIdx, std::vector<madfs::BaseIdx<unsigned int, madfs::IdxType::LOGICAL_BLOCK_IDX> >&, bool*)’:
/workspaces/MadFS/src/tx/tx.h:109:78: error: conversion to ‘unsigned int’ from ‘int’ may change the sign of the result [-Werror=sign-conversion]
  109 |         VirtualBlockIdx end_vidx = curr_entry.inline_entry.begin_virtual_idx +
      |                                    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
  110 |                                    curr_entry.inline_entry.num_blocks;
      |                                    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~         
cc1plus: all warnings being treated as errors
make[4]: *** [CMakeFiles/madfs.dir/build.make:102: CMakeFiles/madfs.dir/src/file/read.cpp.o] Error 1
make[4]: *** Waiting for unfinished jobs....
cc1plus: all warnings being treated as errors
make[4]: *** [CMakeFiles/madfs.dir/build.make:141: CMakeFiles/madfs.dir/src/file/write.cpp.o] Error 1
make[3]: *** [CMakeFiles/Makefile2:365: CMakeFiles/madfs.dir/all] Error 2
make[2]: *** [CMakeFiles/Makefile2:372: CMakeFiles/madfs.dir/rule] Error 2
make[1]: *** [Makefile:236: madfs] Error 2
make: *** [Makefile:16: release] Error 2

Might be related:

target_compile_options(madfs PRIVATE -Werror -Wsign-conversion)
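
One possible shape of the fix hinted at above (hypothetical; the maintainers may prefer a different cast or adjusting the field types), making the widening explicit so -Wsign-conversion is satisfied:

VirtualBlockIdx end_vidx =
    curr_entry.inline_entry.begin_virtual_idx +
    static_cast<uint32_t>(curr_entry.inline_entry.num_blocks);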
