
cppgfs2.0's Introduction


cppGFS2.0

A distributed Google File System (GFS), implemented in C++

Demo

Getting Started

In this project, we use Bazel as our main build tool. You can install Bazel by following the instructions on its website.

As of writing, you will need at least Bazel version 5.1.0 for a working demo.

Installation scripts for Bazel are provided in the scripts folder.

For example, on Linux, run the following with the desired Bazel version:

export BAZEL_VERSION=5.1.0
chmod +x scripts/install_bazel_linux_x86_64.sh
scripts/install_bazel_linux_x86_64.sh

For MacBooks with an Intel chip, use scripts/install_bazel_darwin_x86_64.sh.

For MacBooks with Apple silicon, use scripts/install_bazel_darwin_arm64.sh.

Then, from the root directory, you can run Bazel commands as normal. For example:

bazel build ...
bazel test --test_output=errors ...

To learn more about how to use Bazel, or how to write Bazel build rules for C++, see the official documentation.

Note: If you get an error in the form of illegal thread local variable reference to regular symbol, try adding --features=-supports_dynamic_linker to your Bazel build flags. For example, bazel build --features=-supports_dynamic_linker ....

Running GFS client

Make sure the GFS server cluster is up and running.

You can either write your own binary that imports the GFS client at src/client/gfs_client.h, or use the GFS command-line binary.
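If you go the library route, your program would include that header and call its API directly. The call names shown in the comments below are placeholders only; treat src/client/gfs_client.h as the source of truth for the real functions and signatures.

// Sketch of a client binary. The gfs::client::* calls are hypothetical
// placeholders; the actual names and signatures are defined in
// src/client/gfs_client.h.
#include <string>
// #include "src/client/gfs_client.h"

int main() {
  const std::string filename = "/test";
  // 1. Create the file (hypothetical call):
  //    gfs::client::Create(filename);
  // 2. Write some data at offset 0 (hypothetical call):
  //    gfs::client::Write(filename, /*offset=*/0, "Hello World!");
  // 3. Read it back (hypothetical call):
  //    auto data = gfs::client::Read(filename, /*offset=*/0, /*nbytes=*/100);
  return 0;
}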

To build the command line binary, run

bazel build :gfs_client_main

Then, you can run any of these modes:

# To create a file
bazel-bin/gfs_client_main --mode=create --filename=/test

# To read a file
bazel-bin/gfs_client_main --mode=read --filename=/test --offset=0 --nbytes=100

# To write a file
# This will create the file if it doesn't exist; if you don't want this behavior,
# use the 'write_no_create' mode instead
bazel-bin/gfs_client_main --mode=write --filename=/test --offset=0 --data='Hello World!'

Running GFS server clusters using Docker

Make sure you have Docker and Docker Compose installed.

To start all servers and expose respective server ports outside of Docker for connection, run:

docker-compose up --build

As of writing, the Dockerfile doesn't yet support the MacBook M1 Max. You will need to manually update the file to install and use an ARM64-compatible Bazel image; you can refer to the scripts folder for installation instructions.

Then, you can use the GFS client to interact with the cluster.

When you are done, stop everything with Ctrl + C, and then run:

docker-compose down

Benchmark Performance

We use the Google Benchmark open-source library to test our performance.

To run the benchmarks, start the GFS cluster in the background and run the benchmark binaries in src/benchmarks.
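The binaries under src/benchmarks are the source of truth; purely as an illustration of the harness style, a minimal Google Benchmark case looks like the sketch below (the body is a placeholder, not one of our actual benchmarks):

#include <benchmark/benchmark.h>

// Minimal Google Benchmark skeleton, for illustration only. The real
// benchmark cases live in src/benchmarks.
static void BM_SmallWrite(benchmark::State& state) {
  // Setup (e.g. connecting a GFS client to the running cluster) would go here.
  for (auto _ : state) {
    // Issue one operation per iteration against the running cluster,
    // e.g. a small write through the client API.
  }
}
BENCHMARK(BM_SmallWrite);
BENCHMARK_MAIN();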

Known Issues

  • If initialization of the first file chunk fails on any chunk server during file creation, the metadata is created but the chunks are not initialized. Subsequent creation attempts for the same filename then fail, and file reads/writes will use inconsistent chunk server locations.
  • We do not currently bring stale replicas up to date.

C++ Style Guide

Please follow the Google C++ Style Guide where possible. Most IDEs and common text editors have extensions that auto-format your code and lint it for style errors.

cppgfs2.0's People

Contributors

gan-tu, micheal-o, xicheng87


cppgfs2.0's Issues

Chunk Server Manager

  1. Knows which chunk servers hold each chunk and where, using the communication manager (see the interface sketch below).
  2. Decides which chunk servers new file chunks are placed on, for load balancing.
  3. Maintains a heartbeat manager for tracking chunk server health, and asks the re-replication manager to bring replica counts back up to the minimum when required (this is phase 2/3).
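A minimal interface sketch of these responsibilities, with hypothetical class and method names (the actual code may differ):

#include <string>
#include <vector>

// Hypothetical interface sketch of the duties listed above; names and types
// here are illustrative, not the repo's actual declarations.
class ChunkServerManager {
 public:
  // (1) Which chunk servers currently hold a given chunk.
  std::vector<std::string> GetChunkLocations(const std::string& chunk_handle);

  // (2) Pick chunk servers for a new chunk, balancing load across servers.
  std::vector<std::string> AllocateChunkServers(int num_replicas);

  // (3) Invoked by the heartbeat manager when a server looks unhealthy; may
  //     ask the re-replication manager to restore the minimum replica count.
  void OnChunkServerUnhealthy(const std::string& server_address);
};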

Communication Manager

Uses gRPC to handle communication between nodes.

This defines all the gRPC endpoints that the client, master, and chunk servers each have to implement, and handles the bulk of the gRPC communication work.

Note that this means that, after this module is finished, the implementation of each gRPC call is done individually in the client, master, and chunk server code. This is where our crucial business logic lives.
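For reference, the usual gRPC C++ server bootstrap looks like the sketch below; the concrete service classes and RPC methods come from the protos under src/protos/grpc, so the service implementation is only indicated in comments:

#include <memory>
#include <string>
#include <grpcpp/grpcpp.h>

// The generated service base classes (e.g. for the chunk server lease
// service) are defined by the protos; an implementation subclasses one of
// them and overrides its RPC methods:
//
//   class FooServiceImpl final : public FooService::Service {
//     grpc::Status SomeRpc(grpc::ServerContext* ctx, const Request* req,
//                          Response* resp) override { /* business logic */ }
//   };

void RunServer(const std::string& server_address) {
  grpc::ServerBuilder builder;
  builder.AddListeningPort(server_address, grpc::InsecureServerCredentials());
  // builder.RegisterService(&service);  // register each implemented service
  std::unique_ptr<grpc::Server> server = builder.BuildAndStart();
  server->Wait();  // block until the server shuts down
}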

Build error

The following error showed up when trying to build on macOS:

ERROR: /private/var/tmp/_bazel_balachandhar/14943156afbb9d37410525ecd02ad6fd/external/com_google_protobuf/BUILD:975:21: in proto_lang_toolchain rule @com_google_protobuf//:cc_toolchain: '@com_google_protobuf//:cc_toolchain' does not have mandatory provider 'ProtoInfo'.
INFO: Repository boringssl instantiated at:
/Users/balachandhar/Documents/cppGFS2.0/WORKSPACE:52:10: in
/private/var/tmp/_bazel_balachandhar/14943156afbb9d37410525ecd02ad6fd/external/com_github_grpc_grpc/bazel/grpc_deps.bzl:130:21: in grpc_deps
Repository rule http_archive defined at:
/private/var/tmp/_bazel_balachandhar/14943156afbb9d37410525ecd02ad6fd/external/bazel_tools/tools/build_defs/repo/http.bzl:336:31: in
INFO: Repository remotejdk11_macos instantiated at:
/DEFAULT.WORKSPACE.SUFFIX:102:6: in
/private/var/tmp/_bazel_balachandhar/14943156afbb9d37410525ecd02ad6fd/external/bazel_tools/tools/build_defs/repo/utils.bzl:201:18: in maybe
Repository rule http_archive defined at:
/private/var/tmp/_bazel_balachandhar/14943156afbb9d37410525ecd02ad6fd/external/bazel_tools/tools/build_defs/repo/http.bzl:336:31: in
INFO: Repository remote_java_tools_darwin instantiated at:
/DEFAULT.WORKSPACE.SUFFIX:219:6: in
/private/var/tmp/_bazel_balachandhar/14943156afbb9d37410525ecd02ad6fd/external/bazel_tools/tools/build_defs/repo/utils.bzl:201:18: in maybe
Repository rule http_archive defined at:
/private/var/tmp/_bazel_balachandhar/14943156afbb9d37410525ecd02ad6fd/external/bazel_tools/tools/build_defs/repo/http.bzl:336:31: in
ERROR: Analysis of target '//src/protos/grpc:cc_chunk_server_lease_service_grpc' failed; build aborted: Analysis of target '@com_google_protobuf//:cc_toolchain' failed
INFO: Elapsed time: 8.781s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (72 packages loaded, 2959 targets configured)

Write Lease Manager

  1. Handles lease grants, revocations, and renewals for the primary replica used in write operations.
  2. Appoints the primary replica and, when necessary, increments the chunk version number on all chunk servers holding that chunk (see the sketch below).

See the screenshot attached to the original issue (dated 2020-05-03) for a high-level explanation.

Note: I found that this overlaps heavily with the metadata manager and the chunk server manager, and michael@ may also want more things to do, so I am instead assigning this to both of you to coordinate.
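A rough sketch of what this component's surface could look like, with hypothetical names (coordinate with the metadata and chunk server managers on the real interface):

#include <cstdint>
#include <string>

// Hypothetical sketch based on the duties above; not the actual declarations.
class ChunkServerLeaseManager {
 public:
  // Grant (or renew) the write lease on a chunk to the chosen primary
  // replica, returning the lease expiration timestamp.
  int64_t GrantOrRenewLease(const std::string& chunk_handle,
                            const std::string& primary_server_address);

  // Revoke an outstanding lease, e.g. before a snapshot.
  void RevokeLease(const std::string& chunk_handle);

  // When appointing a new primary, bump the chunk version on every chunk
  // server holding the chunk so stale replicas can be detected later.
  void IncrementChunkVersion(const std::string& chunk_handle);
};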

Master File Namespace Lock Manager

Acquires the proper read/write locks on file namespace paths so that concurrent master operations are serialized in the correct order.

Update: the lock is released when replying to the client. This is NOT holding read/write locks on behalf of the client; it is simply for the master's own concurrency/serialization purposes.
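One common way to realize this is the GFS-style pattern of read-locking every parent directory and write-locking the target path; a minimal sketch, not necessarily how this repo implements it:

#include <map>
#include <shared_mutex>
#include <string>

// Minimal sketch of GFS-style namespace locking: read locks on parent
// directories, a write lock on the target path. The real lock manager in
// this repo may differ (e.g. in how it stores and guards the mutex table).
class NamespaceLockManager {
 public:
  void LockForMutation(const std::string& parent, const std::string& target) {
    // Read lock on the parent: other operations under the same parent can
    // proceed, but the parent itself cannot be deleted/renamed meanwhile.
    mutexes_[parent].lock_shared();
    // Write lock on the target serializes concurrent mutations of it.
    mutexes_[target].lock();
  }
  void Unlock(const std::string& parent, const std::string& target) {
    mutexes_[target].unlock();
    mutexes_[parent].unlock_shared();
  }

 private:
  // Not thread-safe as written; a real implementation would guard this map.
  std::map<std::string, std::shared_mutex> mutexes_;
};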

Metadata Manager

  1. Handles how we store and read file metadata: (filename, chunk_index) -> chunk locations (see the sketch below).
  2. Handles creation of new metadata for file creation and snapshot operations.
  3. When a chunk server starts/restarts and connects to the master, the metadata manager is responsible for updating its metadata so that the chunks held by that chunk server are included in the "locations" mapping.
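A minimal sketch of the mapping in item 1 and the location registration in item 3, with hypothetical names and deliberately simplified types:

#include <cstdint>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Hypothetical, simplified sketch; the real metadata manager keeps richer
// per-chunk state (chunk handles, versions, leases) and is thread-safe.
struct FileChunkMetadata {
  std::string chunk_handle;
  uint32_t version = 0;
  std::vector<std::string> locations;  // chunk server addresses
};

class MetadataManager {
 public:
  // (1) Look up metadata, including locations, by (filename, chunk_index).
  FileChunkMetadata* GetChunkMetadata(const std::string& filename,
                                      uint32_t chunk_index) {
    auto it = chunks_.find({filename, chunk_index});
    return it == chunks_.end() ? nullptr : &it->second;
  }

  // (3) When a chunk server (re)connects, record the chunks it reports.
  void RegisterChunkLocation(const std::string& filename, uint32_t chunk_index,
                             const std::string& server_address) {
    chunks_[{filename, chunk_index}].locations.push_back(server_address);
  }

 private:
  std::map<std::pair<std::string, uint32_t>, FileChunkMetadata> chunks_;
};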

Configuration Manager

Implements how nodes read configuration values, such as IP addresses, directories, etc. These can be read from a YAML/TOML file, the command line, or environment variables defined by the Dockerfile.
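For example, an environment-variable override with a fallback default might look like the sketch below (the variable name in the usage comment is made up; the repo's actual configuration manager and its YAML/TOML parsing are not shown):

#include <cstdlib>
#include <string>

// Read a setting from an environment variable (e.g. one defined by the
// Dockerfile/compose file), falling back to a default that could come from
// a config file or the command line.
std::string GetConfigValue(const std::string& env_key,
                           const std::string& default_value) {
  const char* from_env = std::getenv(env_key.c_str());
  return from_env != nullptr ? std::string(from_env) : default_value;
}

// Usage (hypothetical variable name):
//   const std::string master = GetConfigValue("GFS_MASTER_ADDRESS",
//                                             "0.0.0.0:50051");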

GFS Client API

Public client API exposed to end users for the supported GFS file operations. This is what a user of our GFS uses to access files, and it is also what we use for integration testing.

These will be the open, close, create, read, and write APIs. Every call except close talks to a server running GFS logic; close simply deallocates the memory resources used by the client for the given file operation session.
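A sketch of what that API surface could look like, with made-up types and signatures (the authoritative declarations are in src/client/gfs_client.h):

#include <cstddef>
#include <string>

// Made-up sketch of the five calls described above; the real header almost
// certainly uses different namespaces, return types, and parameters.
namespace gfs_sketch {

struct Status { bool ok; std::string message; };

Status open(const std::string& filename);    // start a file session
Status create(const std::string& filename);  // talks to the master
Status read(const std::string& filename, std::size_t offset,
            std::size_t nbytes, std::string* out);  // master + chunk servers
Status write(const std::string& filename, std::size_t offset,
             const std::string& data);       // master + chunk servers
void close(const std::string& filename);     // local cleanup only, no RPC

}  // namespace gfs_sketch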
