buckyos / cyfs

CYFS (short for CYberFileSystem) is the next-generation technology for building a real Web3 by upgrading the Web's basic protocols (TCP/IP + DNS + HTTP). https://www.cyfs.com/, cyfs://cyfs/index_en.html.

Home Page: https://www.cyfs.com/

License: BSD 2-Clause "Simplified" License

JavaScript 0.27% Rust 99.53% C 0.01% Makefile 0.01% Python 0.02% Shell 0.04% RenderScript 0.01% Objective-C 0.04% Java 0.08%
blockchain cryptography named-data-networking p2p rust web3 named-object-networking protocol ndn non

cyfs's Introduction

BuckyOS Launch!

Why BuckyOS?

Services running in the Cloud (on Servers) are closely interwoven with our lives today; people can hardly get through a day without them. Yet there is no operating system specifically designed to run services.

Doesn't Cloud Native already cover this? Cloud Native is a specialized operating environment for running services, designed for commercial companies and large organizations; it is hard for ordinary people to install and use. There is a historical parallel: traditional System V operating systems (Unix) initially ran on IBM minicomputers, far removed from ordinary people, but after iOS, ordinary people could easily use a modern operating system, managing their applications and data well and running it stably for a long time without understanding a lot of profound technology. iOS pioneered a new era in which everyone can use personal software.

Today, services matter to everyone, and people should have the freedom to install and use services (we call this type of service a Personal Service). Personal Services that run independently, without relying on commercial companies, are also known as dApps, and the Web3 industry already has a large number of people engaged in this, doing a lot of work based on blockchain and smart-contract technology. We believe the simplest and most direct way to achieve this goal is for people to buy consumer-grade servers with a pre-installed OS and then simply install applications on that OS. An application includes both a Client and a Service, and the related data is saved only on the user-owned server. The OS is simple to operate and easy to understand, yet ensures high reliability and high availability of services. When a failure occurs, replacing the damaged hardware is usually all it takes to restore the system, without relying on professional IT Support.

BuckyOS was born to solve these problems. It aims to become the "iOS" of the CloudOS field, pioneering a new era of the Internet in which everyone has their own Personal Service.

Goals of BuckyOS

BuckyOS is an open-source Cloud OS (Network OS) aimed at end users. Its primary design goal is to let consumers own their own cluster/cloud (to distinguish it from conventional terminology, we call this cluster a Personal Zone, or Zone for short), with all the devices in the consumer's home, and the computing resources on those devices, connected to it. Consumers can install Services in their own Zone as easily as installing Apps. With BuckyOS, users own all of their data, devices, and services. In the future, given enough computing power within the Zone, they can also run a Local LLM and, on top of it, an AI Agent that truly serves only them.

BuckyOS has several key design goals:

Out-of-the-box: After buying a commercial PersonalServer product with BuckyOS preinstalled, an ordinary user can complete the initial setup very simply. The basic flow is: install the BuckyOS Control App -> create an identity (essentially a decentralized identity, similar to a BTC wallet address) -> set/create a Zone ID (a domain name) -> plug the device into power and network and turn it on -> find the pending-activation device in the App -> activate the device and add it to your Zone -> apply the default settings -> start the default Services (at least one network file system).

Service Store: Managing services running on BuckyOS is as simple as managing Apps on iOS. The Service Store will also build a healthy dApp ecosystem, a win-win for developers, users, and device manufacturers. BuckyOS has a complete Service permission-control system: Services run in specified containers and are fully controllable, ensuring user data privacy and security.

High Reliability: Data is people's most important asset in the digital age, and not losing data is a major reason users today choose Services over Software. BuckyOS must define a reasonable hardware architecture for future commercial Personal Server products so that no data is lost when a hard disk fails (which is inevitable). BuckyOS also provides an open Backup System that chooses different backup tasks according to the importance of the data and backs up to different backup targets. Backup inevitably has costs, and the open Backup System guarantees the user's right to choose. When the system is in a fully backed-up state, it can be completely restored on new hardware even if all of its existing hardware is destroyed.

Access Anywhere: A user's Personal Server is usually deployed at home, where local network access is fast and stable: for example, home security cameras storing important video in the massive storage a Personal Server provides is faster and more stable than storing it in today's Cloud. More often, though, we want App Clients running on mobile phones to be able to reach the Personal Server at any time. An important design goal of BuckyOS is to give all Services this capability transparently. We achieve it mainly through three methods:

  1. Better integration of IPv6
  2. Integrate P2P protocols to achieve NAT traversal as much as possible
  3. Encourage users to deploy Gateway Nodes on public networks and achieve Access Anywhere through traffic forwarding.

Zero Operation: Over time, any system may be damaged or need adjustment to actual conditions. By defining a few simple procedures, BuckyOS helps ordinary consumers without professional operations skills perform the necessary maintenance themselves, keeping the system available, reliable, and scalable.

  1. Hardware damage without system failure: Buy a new device, activate it under the same hardware name to replace the old one, wait for the new device's status to become normal, then unplug the faulty old device.
  2. Hardware damage causing system failure: If it is urgent, immediately enable a cloud virtual device and restore from backup to bring the system back. Then buy a new device, activate it under the same hardware name to replace the old one, and wait for its status to become normal.
  3. Insufficient storage space: Buy a new device; after activation, the system's available space grows.

Zero Dependency: The operation of BuckyOS does not depend on any commercial company or centralized infrastructure. From the user's perspective, there is no need to worry that functionality will break if the manufacturer of their BuckyOS Personal Server goes bankrupt. The standard open-source distribution of BuckyOS uses decentralized infrastructure, for example integrating decentralized storage (ERC7585) for decentralized backup. Commercial distributions of BuckyOS may integrate paid subscription services (such as traditional backup subscriptions), but such services must preserve the user's right to switch to another supplier.

Upgrade to High Availability: Given the Personal Server's home setting, BuckyOS is usually installed on a small cluster of 3-5 Servers. At that scale, our trade-off is to ensure high reliability rather than high availability: we allow the system to enter a read-only or unavailable state at times, as long as the user can restore availability with some simple maintenance operations. However, when BuckyOS is installed on a medium-sized cluster of dozens or even hundreds of servers, usually owned by a small or medium-sized enterprise, it can be configured for high availability with simple IT Support: when expected hardware failures occur, the system remains fully available.

System Architecture Design

I have designed the overall architecture of BuckyOS, and I believe it can achieve the goals above. The complete system architecture certainly needs in-depth discussion and repeated iteration, and we will maintain a continuously updated document devoted to it. Here, standing at the origin, I want to describe the whole architecture as macroscopically as possible, so that engineers encountering BuckyOS for the first time can form a rough picture of its key design in the shortest time. As BuckyOS iterates, specific designs will keep being adjusted, but I am confident that the most basic design principles, and the framework thinking on the main flows described here, will have a long life.

As of this writing, BuckyOS has completed the demo version (0.1), so many of these designs have already been verified to some extent and can be observed in the DEMO's rough code.

Some Basic Concepts

Typical Topology of BuckyOS

Referring to the typical topology diagram above, here are the most important basic concepts in BuckyOS:

  • A group of physical servers forms a cluster; BuckyOS is installed on the cluster, and the cluster becomes a Zone.
  • A Zone is composed of logical Nodes. For example, six virtual machines running on the cluster's servers can form the mutually isolated ZoneA and ZoneB. A formal Zone consists of at least 3 Nodes.

Zone: A cluster with BuckyOS installed is called a Zone; the devices in the cluster usually physically belong to the same organization. In BuckyOS's assumed scenario this organization is typically a family or small business, so most devices in a Zone are connected to the same local area network. BuckyOS itself supports multiple users: a Zone can serve several users, but one logical user (identified by a DID) can exist in only one Zone.

Each Zone has a globally unique ZoneId. A ZoneId is a human-friendly name, preferably a domain name; buckyos.org, for example, could be a ZoneId. BuckyOS will also natively support blockchain-based name systems such as ENS. Given a ZoneId, anyone can query the Zone's current public key, its configuration file (ZoneConfig), and the signature on that configuration. Possessing the private key corresponding to the Zone's current public key grants the highest authority over the Zone.
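
To make this concrete, here is a minimal sketch of Zone name resolution. All type and function names are hypothetical illustrations of the flow described above, not the actual BuckyOS API:

    // Hypothetical sketch only: what resolving a ZoneId might look like.
    struct ZoneConfig {
        zone_id: String,     // human-friendly name, e.g. "buckyos.org"
        public_key: Vec<u8>, // the Zone's current public key
        signature: Vec<u8>,  // signature over this config by the Zone's private key
    }

    // Resolve a ZoneId through public infrastructure (DNS records, ENS, ...).
    fn resolve_zone_config(zone_id: &str) -> Option<ZoneConfig> {
        // 1. query DNS/ENS for the record published under `zone_id`
        // 2. parse the record into a ZoneConfig
        // 3. verify `signature` against `public_key` before trusting it
        unimplemented!("illustrative only")
    }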

Node: The Servers that make up a Zone are called Nodes. A Node can be a physical or virtual Server, and no two Nodes within the same Zone may run on the same physical Server.

Each Node has a NodeId that is unique within its Zone. A NodeId is a human-readable friendly name that can precisely identify a Node in the form $node_id.zone_id. In a normally running Zone, the Node's public key, configuration file (NodeConfig), and the signature on that configuration can be queried by NodeId. The Node's private key is usually stored in the Node's private storage area and rotated periodically; the BuckyOS kernel services running on the Node use it to periodically assert their identity and obtain the correct permissions.

System Startup

Below is the key flow of BuckyOS from system startup to application-service startup:

  1. Each Node independently starts the node_daemon process. The following flow happens in parallel on every Node in the Zone.
  2. When node_daemon starts, it learns its zone_id and node_id from the node_identity configuration. A node that cannot read this configuration has not joined any Zone.
  3. It queries zone_config by zone_id through the nameservice component.
  4. It prepares etcd according to zone_config. etcd is the most important basic component in BuckyOS, providing reliable, consistent structured storage for the system.
  5. Successful initialization of the etcd service means BuckyOS has booted. node_daemon then starts kernel services and application services by reading the node_config stored in etcd (sketched after this list).
  6. Application-service processes are all stateless, so they can run on any Node. Application services manage state by accessing kernel services (DFS, system_config).
  7. BuckyOS exposes specific application services and system services outside the Zone through the cyfs-gateway kernel service.
  8. When the system changes, the BuckyOS scheduler runs and modifies the node_config of specific nodes to make the change take effect.
  9. Changes in node_config cause three kinds of things to happen:
    • a. Kernel-service processes start or stop on a certain Node
    • b. Application-service processes start or stop on a certain Node
    • c. A specific operation-and-maintenance task is executed or cancelled on a certain Node (such as data migration)
  10. Adding new devices, installing new applications, system failures, and changes to system settings can all cause system changes.
  11. After a system change, the BuckyOS scheduler starts working: it reassigns which processes run on which Nodes and which data is stored on which Nodes.
  12. Scheduling is transparent to 99% of applications. The scheduler lets the system make better use of its hardware, improving reliability, performance, and availability.
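
The boot portion of this flow can be condensed into a sketch. This is not the real node_daemon code; every name below is an illustrative stub that only mirrors steps 1-6 above:

    // A minimal sketch, assuming hypothetical helper stubs.
    struct NodeIdentity { zone_id: String, node_id: String }
    struct ZoneConfig;  // resolved via the name service
    struct NodeConfig;  // stored in etcd, written by the scheduler

    fn read_node_identity() -> Option<NodeIdentity> { unimplemented!() }
    fn resolve_zone_config(_zone_id: &str) -> Option<ZoneConfig> { unimplemented!() }
    fn start_etcd(_cfg: &ZoneConfig) -> Result<(), String> { unimplemented!() }
    fn load_node_config(_node_id: &str) -> Result<NodeConfig, String> { unimplemented!() }
    fn apply_node_config(_cfg: &NodeConfig) -> Result<(), String> { unimplemented!() }

    fn node_boot() -> Result<(), String> {
        // Step 2: a node without node_identity has not joined any Zone.
        let id = read_node_identity().ok_or("not part of any Zone")?;
        // Step 3: resolve zone_config by zone_id (via DNS/ENS before etcd is up).
        let zone = resolve_zone_config(&id.zone_id).ok_or("zone_config not found")?;
        // Steps 4-5: a successful etcd start means BuckyOS has booted.
        start_etcd(&zone)?;
        // NodeLoop: keep local processes in line with node_config from etcd.
        loop {
            let cfg = load_node_config(&id.node_id)?;
            apply_node_config(&cfg)?; // start/stop kernel & app services, run ops tasks
            std::thread::sleep(std::time::Duration::from_secs(10));
        }
    }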

From an implementation perspective, a running BuckyOS is a distributed system composed of a series of processes running on Nodes. Understanding the core logic of these important processes takes us further into BuckyOS:

BuckyOS MainLoop

"Any operating system is essentially a loop", the above figure shows the two most important loops in BuckyOS.

Node Loop: the main logic of the most important kernel module, node_daemon. The core purpose of this loop is to manage the processes and the operation-and-maintenance tasks running on the current node according to node_config. The flowchart also shows node_daemon's boot step of starting etcd.

Scheduling Loop: the counterpart of the most important loop in a traditional operating system. It handles important system events and updates node_config, realizing its decisions through the Node Loop. Space does not allow concrete examples here, but by our deduction these two loops can simply and reliably achieve BuckyOS's key design goals.

The above dual-loop design also has the following advantages:

  1. Low-frequency scheduling: the scheduler only needs to run when events occur, reducing consumption of system resources.
  2. The scheduler is allowed to crash: because updating NodeConfig is an atomic operation, the scheduler can crash at any point during its run. The system also needs only one scheduler process, with no complex distributed design. Even if the system partitions, the NodeLoop keeps working faithfully according to the last known system state.
  3. Scheduling logic can be extended, and for large systems humans can step in: the Node Loop does not care how node_config is generated. For large, complex systems where automated scheduling logic is hard to write, professionals can take over, and the scheduler becomes a pure auxiliary tool for constructing node_config.
  4. Simple and reliable: the NodeLoop performs no network communication while it runs and is completely independent. This structure makes no extra assumptions about BuckyOS's distributed state consistency.

After understanding the above process, let's take an overall look at the architecture layering and key components of BuckyOS:

System Architecture and Key Components

System Architecture Diagram of BuckyOS

The BuckyOS system architecture consists of three layers:

  1. User Apps:

    • App-services are managed by users and run in user mode. They can be developed in any language.
    • BuckyOS user-mode isolation is ensured by the App-Container.
    • App-services cannot depend on each other.
  2. System Frame (Kernel) Services:

    • Frame-Service is a kernel-mode service that exposes system functions to App-Services through kernel RPC.
    • It can be extended by the system administrator, similar to installing new kmods in Linux.
    • Frame-service usually runs in a container, but this container is not a virtual machine.
  3. Kernel Models:

    • The Kernel Models are not extensible. This layer prepares the environment for Kernel Services.
    • As the system's most fundamental component, it aims to reach a stable state as quickly as possible, minimizing modifications.
    • Some basic components in this layer can load System Plugins to extend functions, such as pkg_loader supporting new pkg download protocols through predefined SystemPlugin.

Below, we provide a brief overview of each component from the bottom up.

  • etcd: A mature distributed KV storage service running on 2n+1 nodes, implementing BuckyOS’s core state distributed consistency. All critical structured data is stored in etcd, which requires robust security measures to ensure only authorized components can access it.

  • backup_client: Part of BuckyOS Backup Service. During node boot, it attempts to restore etcd data from backup servers outside the Zone based on zone_config. This ensures cross-zone data reliability.

  • name_service_client: Crucial to BuckyOS Name Service, it resolves information based on given NAMES (zoneid, nodeid, DID, etc.). Before etcd starts, it queries public infrastructures like DNS and ENS. Once etcd is running, it resolves based on etcd configurations. It aims to be an independent, decentralized open-source DNS Server.

  • node_daemon: A fundamental service in BuckyOS, running NodeLoop. Servers in a Zone must run node_daemon to become a functional Node.

  • machine_active_daemon: Strictly speaking, not part of BuckyOS, but its BIOS. It supports server activation to become a node. Hardware manufacturers are encouraged to design more user-friendly activation services to replace machine_active_daemon.

  • cyfs-gateway: A critical service in BuckyOS, implementing SDN logic. We will provide a detailed article on cyfs-gateway. Its long-term goal is to be an independent, open-source web3 gateway, meeting various network management needs.

  • system_config: A core kernel component (lib) that semantically wraps etcd access. All system components should access etcd through system_config (a sketch follows this component list).

  • acl_config: A core kernel component that further wraps ACL permissions management based on system_config.

  • kLog: An essential service providing reliable logging for all components, crucial for debugging and performance analysis in production. While based on the raft protocol, kLog is optimized for write-heavy scenarios. We aim to merge kLog and etcd for simplicity and reliability in the future.

  • pkg_loader: A core kernel component with two primary functions: loading pkgs by pkg_id, and downloading pkgs from a repo-server by pkg_id. Pkgs are akin to software packages (similar to apk), and a pkg_id can include version, hash, and other information (also sketched after this list).

  • dfs: As a critical Frame-Service, dfs provides reliable data management. The current backend implementation uses GlusterFS, but we plan to design a custom distributed file system called "dcfs" for new hardware and smaller clusters.

  • rdb: We may provide a standard RDB for developers in the future, enabling migration of sqlite/mysql/redis/mongoDB to BuckyOS.

  • kRPC: Short for Kernel-Switch RPC, it authenticates frame-service callers and supports ACL queries. It also aids developers in exposing functions reliably (similar to gRPC).

  • control_panel: Exposes SystemConfig's read/write interface via kRPC and includes the default BuckyOS management WebUI. It modifies system configurations and waits for the scheduler to apply changes.

  • Scheduler: The brain of BuckyOS, implementing the scheduling loop. It provides advanced capabilities of a distributed OS, maintaining simplicity and reliability in other components.
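
Two of these components lend themselves to small sketches. First, system_config as a semantic wrapper over etcd; the struct and method names are hypothetical, chosen only to show the idea of putting every read and write behind well-known paths:

    // Hypothetical sketch of system_config; not the real API.
    struct SystemConfig { /* holds an authenticated etcd client */ }

    impl SystemConfig {
        // Components read well-known semantic paths instead of raw etcd keys.
        fn get(&self, path: &str) -> Option<String> {
            // e.g. "nodes/node1/config" -> etcd key lookup + deserialization
            unimplemented!("etcd get behind semantic path {path}")
        }
        // Funneling all writes through one place lets acl_config enforce permissions.
        fn set(&self, path: &str, value: &str) -> Result<(), String> {
            unimplemented!("etcd txn write {path}={value}")
        }
    }

Second, pkg_loader. The exact pkg_id format is defined by pkg_loader and is not reproduced here; the sketch below assumes a made-up "name#version?hash" shape purely to show how version and hash can travel inside the id:

    // Illustrative only: one plausible pkg_id layout, not the real format.
    struct PkgId { name: String, version: Option<String>, hash: Option<String> }

    fn parse_pkg_id(s: &str) -> PkgId {
        let (rest, hash) = match s.split_once('?') {
            Some((r, h)) => (r, Some(h.to_string())),
            None => (s, None),
        };
        let (name, version) = match rest.split_once('#') {
            Some((n, v)) => (n.to_string(), Some(v.to_string())),
            None => (rest.to_string(), None),
        };
        PkgId { name, version, hash }
    }

A hash pinned inside pkg_id is what makes the Zero-Trust download flow described later possible: bytes fetched from any mirror can be verified against it.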

BuckyOS will also include some extension frame-services for essential product features, not detailed here.

Application Installation and Operation

We will now describe key processes from an application-centric perspective:

BuckyOS App Install Flow

The above flow illustrates the application installation and startup process, emphasizing Zero Dependency & Zero Trust principles.

  1. During installation the system needs external services to download pkgs. This does not create a hard dependency on external services: users can configure alternative pkg repo servers if the default one fails.
  2. Since a pkg_id can include hash information, Zero Trust is achieved by verifying pkgs obtained from any source (see the sketch after this list).
  3. Once installed, the system stores app-pkgs in an internal repo-server, which is backed up as part of system data. Nodes download app-pkgs solely from zone-internal repo servers.
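
A minimal sketch of that verification step, using the sha2 and hex crates for illustration (the actual hash algorithm and encoding used by BuckyOS may differ):

    // Zero-Trust check: the expected hash travels inside pkg_id, so bytes
    // downloaded from ANY source can be verified before installation.
    use sha2::{Digest, Sha256};

    fn verify_pkg(bytes: &[u8], expected_hex: &str) -> bool {
        let digest = Sha256::digest(bytes);
        hex::encode(digest) == expected_hex.to_lowercase()
    }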

Next is the application startup process.

BuckyOS App Start Flow

The diagram outlines the app container startup process and how kRPC service access control is implemented.

BuckyOS aims to lower the development threshold for AppServices:

  a. Compatibility with containerized apps, which run on BuckyOS with appropriate permissions.
  b. Trigger-style launch configuration (compatible with fast-cgi), further reducing development complexity and resource usage.
  c. Apps developed or modified with the BuckyOS SDK can access all frame-service functions.

Coding Principles

BuckyOS uses Rust as its primary development language. Given Rust's systems-programming discipline, we do not need to restate traditional principles of resource management, multithreading, and exception handling. However, BuckyOS is a distributed system, and distributed systems carry inherent complexity. Below are distilled principles for writing high-quality distributed-system code. Non-compliant code will not be merged.

Simplicity and Reliability

Define clear component boundaries and dependencies to reduce cognitive load. Implement functions outside the kernel or system services whenever possible. Design core components as independent products, avoiding a tightly coupled system that obscures true component boundaries.

Write evidently reliable code. Distributed system complexity cannot be mitigated solely through extensive testing. For high-reliability components, reduce code volume, making it comprehensible and evidently reliable. Minimize mental burden during code review, cautiously introduce proprietary concepts, and prudently select simple, reliable third-party libraries. For simplicity, module design may forgo some reusability. For distributed systems, KISS always trumps DRY.

Avoid Potential Global State Dependencies

Making decisions based on current cluster state is intuitive but often unwise. Distributed systems cannot reliably and entirely capture global state, and directives based on this state are challenging to fully and timely execute.

Let It Crash

Do not fear crashes in distributed systems. Log incidents and exit (crash) on unexpected situations. Do not retry or restart dependencies. Only a few components may retry or start another process, subject to thorough review.
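
In Rust this pattern is short. A minimal illustration (the function name is hypothetical; in this architecture the supervising process would be node_daemon):

    // On an impossible state: record the incident, then crash. No retries,
    // no silent repair; the supervisor decides what happens next.
    fn on_unexpected_state(detail: &str) -> ! {
        log::error!("unexpected state, exiting: {detail}");
        std::process::exit(1);
    }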

Log First

Logging is critical infrastructure in distributed systems, serving as a diagnostic tool during development and a foundation for fault detection and performance analysis in production. Understand logging standards (especially for info level and above) and carefully design log outputs.

Avoid Request Amplification

Responding to one request often involves issuing multiple requests. While controlled amplification is acceptable (e.g., three DFS requests for a file download), beware of uncontrolled amplification, like issuing requests to all eligible nodes or indefinite retries. "Query-control" operations can exacerbate resource strain during system congestion.
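
One way to keep amplification controlled is to hard-cap both fan-out and retries. A sketch with made-up constants:

    // Ask a bounded number of candidate nodes, never all eligible ones,
    // and never retry indefinitely.
    const MAX_FANOUT: usize = 3;
    const MAX_RETRIES: usize = 2;

    fn fetch_with_bounds(
        nodes: &[String],
        fetch: impl Fn(&str) -> Option<Vec<u8>>,
    ) -> Option<Vec<u8>> {
        for node in nodes.iter().take(MAX_FANOUT) {
            for _ in 0..=MAX_RETRIES {
                if let Some(data) = fetch(node) {
                    return Some(data);
                }
            }
        }
        None // give up rather than pile load onto a congested system
    }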

Zero Dependency & Zero Trust

Minimize dependencies on external Zone facilities. Consider necessity, frequency, and alternative designs for accessing external servers. Distrust external Zone servers, relying on public-private key systems for content verification. Verify content creators, not sources, and build a validation system on this principle.

Comprehensive Understanding of Processing Chains

Avoid adding implicit intermediaries to solve design problems; over-architected distributed systems collapse into hopeless complexity. Solve issues by locating the optimal position within the processing chain, avoiding superficial fixes like an extra cache or queue. Allow at most one cache per processing chain.

Respect for Persistent Data

User data is precious. Distinguish between structured and unstructured data, thoughtfully deciding on state persistence. Utilize mature services for state preservation when possible. Direct disk operations require comprehensive design and testing.

ACID Guarantees by Core Components

Distributed transactions are notoriously complex. Avoid implementing distributed transactions whenever possible. Use mature distributed transaction services instead of custom implementations, as achieving correctness is highly challenging.

Appropriate Network Request Handling

BuckyOS’s cyfs-gateway extends network protocol paradigms (tcp/udp/http) to include NamedObject (NDN) semantics. Understanding network request paradigms optimizes resource usage and request handling efficiency.

Using SourceDAO for Open Source Collaboration

"Open source organizations have a long history and brilliant achievements. Practice has proved that an open source organization can achieve the goal of writing better code only by working in the virtual world. We believe that software development work is very suitable for DAO. We call this DAO for decentralized organizations to jointly develop software as SourceDAO." ---- from the White Paper of CodeDAO (https://www.codedao.ai)

SourceDAO provides a comprehensive design for the DAO-ification of open-source projects, and the CYFS Core Dev Team has implemented the corresponding smart contracts. Using OpenDAN as an example, we outline the basic operating process below; the detailed design is available in the white paper. Our approach stems from a fundamentalist perspective on open source (aligned with GPL and GNU), influenced by Bitcoin's assumption that "man is evil if not restrained". Although some of the design may seem extreme, we believe it is understandable.

Basic Operation Process

  1. Create an organization: Define goals, initial DANDT distribution, initial members, and establish the initial Roadmap.
  2. Roadmap: Link system maturity to token release. More mature systems release more tokens. The Roadmap outlines project phases: PoC, MVP, Alpha, Beta, Formula (Product Release), each with a DANDT release plan.
  3. Development as mining: Achieve DAO goals by advancing the Roadmap. Set plans, calculate contributions, and distribute DANDT accordingly.
  4. Market behavior: Use DANDT to increase project visibility, incentivize new users, and design fission rewards.
  5. DAO governance: DANDT holders participate in governance.
  6. Financing: Use DANDT for resource acquisition.

The BuckyOS DAO contract is currently planned to deploy on Polygon, with a total of 2.1 billion tokens, abbreviated BDT (BuckyOS DAO Token). Initial deployment requires an initial version plan outlining the BDT release schedule, and the first committee must be established (at least three people, possibly selected from the DEMO contributors).

Preliminary Version Plan:

2024

  • 0.1 Demo: 2.5% (Done)
  • 0.2 PoC: 2.5%
  • 0.3 Pre-Alpha: 5% (First complete version)
  • 0.4 Alpha: 2.5% (2024 Q4)

2025

  • 0.5 Beta: 2.5%
  • 0.6 RC: 5% (First public release version)
  • 0.7 First Release: 2.5% (2025 Q3)

License

BuckyOS is a free, open-source, decentralized system that encourages vendors to build commercial products on top of it, fostering fair competition. Our licensing choice aims for an ecosystem win-win: keep the core decentralized, protect contributor interests, and build a sustainable ecosystem. We adopt dual licensing. The first is a traditional LGPL-based license: modifications to the kernel must comply with the GPL, while applications may remain closed-source (but cannot be essential system components). The second is a SourceDAO-based license: when a DAO-token-issuing organization uses BuckyOS, it must donate a portion of its tokens to the BuckyOS DAO under this license.

No existing license meets these requirements, so we will temporarily use the BSD license for the DEMO. I expect we will have a formal license ready by the time the PoC is completed.

cyfs's People

Contributors

alexsunxl, dependabot[bot], glen0125, jing-git, jinquantianxia, lurenpluto, photosssa, song0125, streetycat, tracy101, waterflier, weiqiushi, winevery, zwong91


cyfs's Issues

Proxy node service proof

Implement service proof to support an OOD running as a proxy node, to help devices behind NAT that fail at hole-punching, or to speed up links between devices.

Stream send data content is incorrect

Describe the bug

(1) LN connects to RN successfully (the second connection).
(2) LN uses a stream to send 1 MB of data to RN.
(3) RN responds with 1 MB of data to LN.
The data content LN receives is incorrect: bytes [0..8] record the data length. RN sends a length of 1048576, but LN receives 5941175098377794146.
To Reproduce

    pub async fn send_file(&mut self, size: u64) -> Result<HashValue, BuckyError> {
        let mut hashs = Vec::<HashValue>::new();
        let mut send_buffer = Vec::new();
        send_buffer.resize(PIECE_SIZE, 0u8);
        let mut gen_count = PIECE_SIZE;
        let mut size_need_to_send = size + 8;
        if gen_count as u64 > size_need_to_send {
            gen_count = size_need_to_send as usize;
        }
        send_buffer[0..8].copy_from_slice(&size_need_to_send.to_be_bytes());
        Self::random_data(send_buffer[8..].as_mut());

        loop {
            let hash = hash_data(&send_buffer[0..gen_count]);
            hashs.push(hash);

            let _ = self.stream.write_all(&send_buffer[0..gen_count]).await.map_err(|e| {
                log::error!("send file failed, e={}",&e);
                e
            });
            size_need_to_send -= gen_count as u64;

            if size_need_to_send == 0 {
                break;
            }

            gen_count = PIECE_SIZE;
            if gen_count as u64 > size_need_to_send {
                gen_count = size_need_to_send as usize;
            }
            Self::random_data(send_buffer[0..].as_mut());
        }

        let mut total_hash = Vec::new();
        for h in hashs.iter() {
            total_hash.extend_from_slice(h.as_slice());
        }
        let hash = hash_data(total_hash.as_slice());

        log::info!("send file finish, hash={:?}", &hash);

        Ok(hash)
    }
pub async fn recv_file(&mut self) -> Result<(u64, HashValue), BuckyError> {
        let mut hashs = Vec::<HashValue>::new();
        let mut recv_buffer = Vec::new();
        recv_buffer.resize(PIECE_SIZE, 0u8);
        let mut piece_recv: usize = 0;
        let mut file_size: u64 = 0;
        let mut total_recv: u64 = 0;
        loop {
            let len = self.stream.read(recv_buffer[piece_recv..].as_mut()).await.map_err(|e| {
                log::error!("recv failed, e={}", &e);
                e
            })?;
            if len == 0 {
                log::error!("remote close");
                return Err(BuckyError::new(BuckyErrorCode::ConnectionReset, "remote close"));
            }
            piece_recv += len;
            total_recv += len as u64;

            if file_size == 0 {
                if piece_recv < FILE_SIZE_LEN {
                    continue;
                }
                let mut b = [0u8; FILE_SIZE_LEN];
                b.copy_from_slice(&recv_buffer[0..FILE_SIZE_LEN]);
                file_size = u64::from_be_bytes(b);
                log::info!("=====================================file_size={}", file_size);
            }

            if file_size > 0 {
                if total_recv == file_size || piece_recv == PIECE_SIZE {
                    let recv_hash = hash_data(&recv_buffer[0..piece_recv].as_ref());
                    hashs.push(recv_hash);
                }

                if total_recv == file_size {
                    log::info!("=====================================recv finish");
                    break;
                }
            }

            if piece_recv == PIECE_SIZE {
                piece_recv = 0;
            }
        }

        let mut total_hash = Vec::new();
        for h in hashs.iter() {
            total_hash.extend_from_slice(h.as_slice());
        }
        let hash = hash_data(total_hash.as_slice());
        log::info!("recv file finish, hash={:?}", &hash);
        Ok((file_size, hash))
    }

GlobalState op-env adds new methods support

On the basis of the existing interface, the following new interfaces and features are added.

update

Unlike commit, update can be called multiple times. If something has changed internally, it tries to commit and update the global state, returning the latest root.
update does not affect op-env's holding of locks, so locks on the specified paths remain valid (until commit or abort).

get_current_root

Gets the current root state of the op-env; SingleOpEnv and PathOpEnv return the corresponding values:

SingleOpEnv -> root object

PathOpEnv -> (root, revision, global_root)

Note that for a PathOpEnv, calling get_current_root immediately binds the op-env to the corresponding global-state snapshot (locks are unaffected).

reset

For SingleOpEnv, resets the current iterator.
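
To summarize the proposed additions, here is a rough trait sketch. The names follow the issue text, but the signatures are hypothetical and not the actual cyfs SDK:

    // Illustrative only; the real op-env methods are async and typed differently.
    trait OpEnv {
        // Like commit, but callable repeatedly: flushes pending changes,
        // returns the latest root, and keeps path locks held.
        fn update(&mut self) -> Result<String, String>;
        // Binds a PathOpEnv to the current global-state snapshot (locks are
        // unaffected) and returns (root, revision, global_root).
        fn get_current_root(&mut self) -> Result<(String, u64, String), String>;
        // For a SingleOpEnv: reset the current iterator.
        fn reset(&mut self);
        // Final commit; locks are released here (or on abort).
        fn commit(self) -> Result<String, String>;
    }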

App-Manager support for OOD standby mode

In OOD standby mode, App-Manager should not start app services and should not modify the root-state (which is read-only in this mode); when the OOD working mode switches, all app services need to be stopped.

A more concise solution: when the mode switches, OOD-daemon is responsible for stopping App-Manager, and App-Manager stops all currently running app services when given the --stop command.

gateway unwrap

Problem
[2022-09-08 12:42:08.567990 +00:00] INFO [component/cyfs-debug/src/panic/panic.rs:32] thread 'async-std/runtime' panicked at 'called Option::unwrap() on a None value': component/cyfs-stack/src/interface/http_ws_listener.rs:88

To Reproduce
vood batch and concurrent op_env

Expected behavior
The gateway works correctly without panicking.

System information
Intel Xeon Processor (Skylake, IBRS), 2 cores
RAM: 4 GB
OS: Ubuntu 20.04

Additional context

[2022-09-08 12:42:08.567524 +00:00] INFO [component/cyfs-debug/src/panic/panic.rs:31] thread 'async-std/runtime' panicked at 'called Option::unwrap() on a None value': component/cyfs-stack/src/interface/http_ws_listener.rs:88
0: 0x0000564ec338d390
1: 0x0000564ec337b690
2: 0x0000564ec31ea950
3: 0x0000564ec3926000
4: 0x0000564ec3925f30
5: 0x0000564ec3924710
6: 0x0000564ec3925cd0
7: 0x0000564ec1dfa110
8: 0x0000564ec1df9fc0
9: 0x0000564ec23da310
10: 0x0000564ec2e52ed0
11: 0x0000564ec2e97c80
12: 0x0000564ec2e4d2c0
13: 0x0000564ec2ed9a30
14: 0x0000564ec38ba180
15: 0x0000564ec38babc0
16: 0x0000564ec38b88f0
17: 0x0000564ec38af430
18: 0x0000564ec38ae980
19: 0x0000564ec38b02d0
20: 0x0000564ec392b840
21: 0x00007f152b671530
22: 0x00007f152b44112d
23: 0x0000000000000000
[2022-09-08 12:42:08.567990 +00:00] INFO [component/cyfs-debug/src/panic/panic.rs:32] thread 'async-std/runtime' panicked at 'called Option::unwrap() on a None value': component/cyfs-stack/src/interface/http_ws_listener.rs:88
0: cyfs_debug::panic::manager::PanicManager::start::{{closure}}
1: std::panicking::rust_panic_with_hook
at /rustc/4b91a6ea7258a947e59c6522cd5898e7c0a6a88f/library/std/src/panicking.rs:702:17
2: std::panicking::begin_panic_handler::{{closure}}
at /rustc/4b91a6ea7258a947e59c6522cd5898e7c0a6a88f/library/std/src/panicking.rs:586:13
3: std::sys_common::backtrace::__rust_end_short_backtrace
at /rustc/4b91a6ea7258a947e59c6522cd5898e7c0a6a88f/library/std/src/sys_common/backtrace.rs:138:18
4: rust_begin_unwind
at /rustc/4b91a6ea7258a947e59c6522cd5898e7c0a6a88f/library/std/src/panicking.rs:584:5
5: core::panicking::panic_fmt
at /rustc/4b91a6ea7258a947e59c6522cd5898e7c0a6a88f/library/core/src/panicking.rs:142:14
6: core::panicking::panic
at /rustc/4b91a6ea7258a947e59c6522cd5898e7c0a6a88f/library/core/src/panicking.rs:48:5
7: <core::future::from_generator::GenFuture as core::future::future::Future>::poll
8: <core::future::from_generator::GenFuture as core::future::future::Future>::poll
9: std::thread::local::LocalKey::with
10: <core::future::from_generator::GenFuture as core::future::future::Future>::poll
11: async_task::raw::RawTask<F,T,S>::run
12: <core::future::from_generator::GenFuture as core::future::future::Future>::poll
13: <core::future::from_generator::GenFuture as core::future::future::Future>::poll
14: async_io::driver::block_on
15: async_global_executor::threading::thread_main_loop
16: std::sys_common::backtrace::__rust_begin_short_backtrace
17: core::ops::function::FnOnce::call_once{{vtable.shim}}
18: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce>::call_once
at /rustc/4b91a6ea7258a947e59c6522cd5898e7c0a6a88f/library/alloc/src/boxed.rs:1951:9
<alloc::boxed::Box<F,A> as core::ops::function::FnOnce>::call_once
at /rustc/4b91a6ea7258a947e59c6522cd5898e7c0a6a88f/library/alloc/src/boxed.rs:1951:9
std::sys::unix::thread::Thread::new::thread_start
at /rustc/4b91a6ea7258a947e59c6522cd5898e7c0a6a88f/library/std/src/sys/unix/thread.rs:108:17
19: start_thread
20: clone

StateStorage implementation

Add an implementation of basic data structures based on ObjectMap, to support the storage of and access to data such as FriendList.

Hello CYFS Guide Feedback

Describe
This issue is to discuss the Hello CYFS Guide. If you have any issues with or feedback on the document, please leave a comment here.

Note: this is for feedback on the document, not on CYFS or on the CYFS changes.

Thank you for your efforts!

Improve Perf module

Goal

  • Multiple report targets, defaulting to the local cyfs stack's local_cache
  • Use the local cache to store stat information
  • Easy to use
  • Start using the Perf module inside the OOD services and the CYFS stack

Passing req_path to SharedCyfsStack.util().build_dir_from_object_map causes the request to fail

Describe the bug

Filling in req_path for build_dir_from_object_map causes the request to fail.

To Reproduce

let dir_obj_resp1 = await stack.util().build_dir_from_object_map({
    common: {
        req_path: "qaTest",
        target: stack.local_device_id().object_id,
        dec_id: ZoneSimulator.APPID,
        flags: 0,
    },
    object_map_id: get_ret1.unwrap().object.object_id,
    dir_type: cyfs.BuildDirType.Zip,
});

result:

Error: err: (4, request not handled: method=POST, path=/util/build_dir_from_object_map/qaTest, undefined)
let dir_obj_resp1 = await stack.util().build_dir_from_object_map({
    common: {
        req_path: "",
        target: stack.local_device_id().object_id,
        dec_id: ZoneSimulator.APPID,
        flags: 0,
    },
    object_map_id: get_ret1.unwrap().object.object_id,
    dir_type: cyfs.BuildDirType.Zip,
});

result:

{"ok":true,"err":false,"val":{"file_id":"95RvaS5aUeMomNs3LBR89YwU6b3Aj7Tm5aQJPsJfH8SC"}}

Expected behavior
req_path is an optional parameter in UtilOutputRequestCommon; filling it in should still succeed.

CYFS DSG Cache Service

The DSG Contract supports a cache service:

  • Cache storage fields;
  • Global cache storage option for cache node;

Prepare cached chunk list:

  • DSG cache aggregation objects: traffic proof, chunk traffic;
  • Update chunk list;

Standby ood failed to sync chunk data

Describe the bug

The standby OOD only syncs the object_map, not the chunk data.
To Reproduce

(1) Use SharedCyfsStack ndn_service to put a chunk to the master OOD.
(2) Add an ObjectMap containing the chunkId to the master OOD's root_state.
(3) Wait for the standby OOD to finish syncing data from the master OOD.
(4) Check whether the standby OOD can get the chunk via SharedCyfsStack ndn_service.
(5) The standby OOD's root_state revision is synced, but getting the chunk data fails: ndn get_chunk from local but not found!
Err(err: (4, ndn get_chunk from local but not found! id=7EYaFPU2ry4McmHUJtxz1iDKEyPfkSaoDz7AFuxiDFJv, undefined))
Expected behavior

(1) Once the standby OOD's root_state revision is synced, it should be able to get the chunk from the NOC.

NOC/NON/ACL redesign and refactoring

This mainly includes the following two core changes

  1. Refactor the underlying structure and implementation of NOC, introduce permission design and GC mechanism support, etc.
  2. Refactor the NON interface to work with the new NOC and authority mechanism

Refactor cyfs-stack front-end support for protocol layer

The purpose of the task
To better and more systematically improve the runtime layer's support for the protocol, solve leftover problems and inconsistencies from past historical evolution, and better support the o/r/a protocols.

The core changes are as follows

  • Add the Front module, deconstruct the existing translator processing mechanism, and handle the o/r/a protocols uniformly
  • Strip the mode/format runtime protocol-layer support out of cyfs-stack's non/ndn/global-state processing, making its function more explicit and pure

Time estimate
About one to two weeks, including testing

BDT packet version compatibility

  • Version field in handshake packets;
  • Version property on tunnel;
  • Semantic session implementation that differs by tunnel version;

The parameters of an A link should be preserved after transformation.

Describe the bug
The query parameters are removed when I open an A-link html file.

To Reproduce
For example, I open the following link:
cyfs://a/9tGpLNna8UVtPYCfV1LbRN2Bqa5G9vRBKhDhZiWjd7wA/index.html?hello=world
Once opened, the link is transformed to:
http://127.0.0.1:38090/o/5hLXAcNtoFKLkvPEmdMFDJ9NjpCWJxdSZU6uKXPkR6m9/95RvaS5cWCTCKejHojmHtauJTQwL7FZ1AMeLLba1Dk1w/index.html?dec_id=9tGpLNna8UVtPYCfV1LbRN2Bqa5G9vRBKhDhZiWjd7wA
which means the parameter is lost.

Expected behavior
The result should instead keep the parameter, e.g.:
http://127.0.0.1:38090/o/5hLXAcNtoFKLkvPEmdMFDJ9NjpCWJxdSZU6uKXPkR6m9/95RvaS5cWCTCKejHojmHtauJTQwL7FZ1AMeLLba1Dk1w/index.html?dec_id=9tGpLNna8UVtPYCfV1LbRN2Bqa5G9vRBKhDhZiWjd7wA&hello=world

[Congestion Control] Poor BDT TCP file-transfer performance under high latency and high packet loss

[Problem description]
Cross-border network, Shanghai, China DCFS <-> US VOOD. BDT uses only the TCP protocol and the transfer fails.
Network environment: 20-30% packet loss, 150 ms latency, 30 MB bandwidth; BDT transfer rate 20-50 KB/s, with a small probability of outright transfer failure.

[System configuration optimization]

Enabling BBR congestion control in the Linux kernel configuration can definitely improve performance:
vim /etc/sysctl.conf

net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr

[Lab verification]
0% packet loss, 0 ms latency, 30 MB bandwidth: 3 MB/s
0% packet loss, 150 ms latency, 30 MB bandwidth: 1 MB/s
20% packet loss, 150 ms latency, 30 MB bandwidth: 500 KB/s

SN Online Timeout

Describe the bug
The cyfs stack waits for SN to come online and hits the 30 s timeout.
In a test that opened the BDT stack 166498 times, 237 runs timed out.
To Reproduce

let stack = Stack::open(local, key, params).await?;

let acceptor = stack.stream_manager().listen(0).unwrap();

let peer_impl = unsafe {
    &mut *(Arc::as_ptr(&self.0) as *mut PeerImpl)
};
peer_impl.lazy_components = Some(LazyComponents {
    stack,
    acceptor,
});

let begin_time = system_time_to_bucky_time(&std::time::SystemTime::now());
let mut local = peer.get_stack().local();
log::info!("on create succ, local: {}", local.desc().device_id());
let mut result = 0;
if sn.len() > 0 {
    log::info!("peer sn list len={}, wait for sn online", sn.len());
    let result = match future::timeout(
        Duration::from_secs(30),
        peer.get_stack().net_manager().listener().wait_online(),
    ).await {
        Err(_) => {
            log::error!("sn online timeout");
            1000
        },
        Ok(_) => {
            log::info!("sn online success");
            0
        }
    };
}
let online_time = system_time_to_bucky_time(&std::time::SystemTime::now()) - begin_time;

danger for `import-people`

Exporting identities to a developer's computer is dangerous; we may need a tool that authorizes publishing instead. Exporting identities should only be a convenience option during the development phase.

Documentation suggestions

A nitpick: consider standardizing the capitalization of keywords throughout, and adding spaces between Chinese and English strings for better readability. A typical case (the example sentences are kept in the original Chinese, since the suggestion concerns Chinese-English spacing):

Old: 订阅基于VPS实现的立即可用的VirtualOOD, VirtualOOD是门槛最低开通速度最快的OOD方式,相信会是大部分普通用户第一次的选择。

New: 订阅基于 VPS 实现的立即可用的 Virtual OOD,Virtual OOD 是门槛最低开通速度最快的 OOD 方式,相信会是大部分普通用户第一次的选择。

Also, many operation steps lack command-line examples. For example: after identity generation, copy the two files <save_path>/ood.desc and <save_path>/ood.sec to ${cyfs_root}/etc/desc on the OOD machine and rename them to device.desc and device.sec. Consider showing the full cp command, or cp followed by mv.

Need ood-daemon-stop.sh on DIYOOD

The purpose of the feature
I want to stop the OOD service and back up /cyfs/data/; right now I kill the service process manually.

Describe the solutions to the feature you'd like
I need start/stop scripts.

Improve the event mechanism of zone role switching

At present, zone role switching is a built-in event of the role manager and can only be used within the same CyfsStack process. It needs to be switched to the standard RouterEvent mechanism to provide cross-process access; for example, ood-daemon/app-manager also need to observe this event.

OOD-daemon supports read-only mode of gateway service

OOD-daemon needs to handle the gateway's OOD working mode: mainly, when the current standby-ood mode is readonly, the app-manager service needs to be stopped, including a monitoring mechanism for dynamic switching, etc.

Add list method to op-env

op-env now supports iterator mode, which suits large data structures well, but for ObjectMaps in most cases it is more convenient and efficient to use list directly.

NDN refactor for new ACL and load logic

Loading logic of dir+inner-path

dir behaves the same as objectmap: loading is changed to be fully within the local protocol stack. That is, dir+inner_path is loaded from the target protocol stack, and the subdirs, chunks, and files a dir depends on must all be in the same protocol stack; internal nodes no longer support complicated searches across protocol stacks (consistent with objectmap loading and operation behavior).

NDN

ndn loads data, supports three types

  • chunk_id
  • file_id
  • dir_id+inner-path

The new ACL needs to match the following fields

  • referer_object
  • req_path

Several key designs

  • The minimum granularity of permission control is the file_id/dir_id corresponding to the referer_object, or the chunk_id/file_id/dir_id in a direct request (if referer_object is empty); finer granularity is not supported
  • A request supports at most one referer_object to specify the association with the request target; multiple referer_objects are no longer supported
  • referer_object bridges target_object_id and req_path to determine the permission of target_object_id, so permission determination comes down to the following two modes
    • the (referer_object, target_object_id, req_path) triple
    • the (target_object_id, req_path) 2-tuple

for chunk-id

Loading a chunk's data directly, the permissions are checked as follows:

  1. referer_object is empty
    Verify directly with req_path+chunk_id, i.e. chunk_id is required to be linked to root_state
  2. referer_object = file_id
    First check whether chunk_id exists in file.chunklist
    Then verify with req_path+file_id, i.e. the referenced file_id is required to be linked to root_state
  3. referer_object = dir_id + inner_path
    First get the corresponding file_id/dir_id through dir_id + inner_path
    Check whether chunk_id is in file.chunklist or is an embedded chunk in the dir
    Then verify with req_path+file_id or req_path+dir_id, i.e. the referenced file_id/dir_id is required to be linked to root_state

for file_id

Loading a file's data directly, the permissions are verified as follows:

  1. referer_object is empty
    Verify directly with req_path+file_id, i.e. file_id is required to be linked to root_state
  2. referer_object = dir_id + inner_path
    First get the corresponding file_id through dir_id + inner_path
    Check if chunk_id is in file.chunklist
    Then verify with req_path + dir_id, i.e. the referenced dir_id is required to be linked to root_state and meet the permission requirements

For dir_id+inner_path

Loading the file corresponding to an internal path of a dir, the permission check is as follows:
The referer_object parameter is not accepted in this mode; the dir_id itself is used directly.
Verify with req_path + dir_id, i.e. the referenced dir_id is required to be linked to root_state and meet the permission requirements.
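
The chunk-id branch of this decision tree can be sketched as a dispatch on referer_object. All types and helpers below are illustrative stand-ins for the real cyfs-stack ones, and the dir branch is simplified (it omits the embedded-chunk-in-dir case listed above):

    enum RefererObject {
        None,
        File { file_id: String },
        Dir { dir_id: String, inner_path: String },
    }

    fn check_chunk_acl(req_path: &str, chunk_id: &str, referer: &RefererObject) -> bool {
        match referer {
            // no referer: chunk_id itself must be linked to root_state
            RefererObject::None => verify_linked(req_path, chunk_id),
            // referer = file: chunk must be in file.chunklist, file linked to root_state
            RefererObject::File { file_id } => {
                chunk_in_file(chunk_id, file_id) && verify_linked(req_path, file_id)
            }
            // referer = dir + inner_path: resolve to a file first, then check as above
            RefererObject::Dir { dir_id, inner_path } => {
                match resolve_inner_path(dir_id, inner_path) {
                    Some(file_id) => {
                        chunk_in_file(chunk_id, &file_id) && verify_linked(req_path, &file_id)
                    }
                    None => false,
                }
            }
        }
    }

    fn verify_linked(_req_path: &str, _object_id: &str) -> bool { unimplemented!() }
    fn chunk_in_file(_chunk_id: &str, _file_id: &str) -> bool { unimplemented!() }
    fn resolve_inner_path(_dir_id: &str, _inner: &str) -> Option<String> { unimplemented!() }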

NDN get_data improvements and optimizations for local data access

  1. Integrate the existing get_data support with ChunkManager, ndc, and tracker, consistent with what the BDT layer has achieved
  2. Optimize the reading mechanism for local file chunks; improve the performance of lazily reading multiple blocks and of reads with a range

Windows: ood-installer re-running on restart overwrites gateway.toml

Describe the bug

CYFSOOD-x86-64-1.0.0.505-nightly installer on Windows 10 Pro. After installation and configuration, restarting the machine fails to start the services correctly, so the info page in CYFS_browser cannot display the OOD information. A service (probably cyfs-drive-server.exe) is also restarted every 30 seconds.
To Reproduce

1. Make sure mongodb is not installed on the machine.
2. Download and install the DIY-OOD Windows package CYFSOOD-x86-64-1.0.0.505-nightly.
3. Following the CYFS development environment setup guide at https://youtrack.buckycloud.com/articles/SHARES-A-2/ (note the host changes), replace c:\cyfs\etc\gateway\gateway.toml with the gateway.toml provided by the cyfs identity-creation tool. In this file the database is configured as sqlite.
4. Bind the account.
5. Open the browser; the OOD info is displayed correctly.
6. Shut down and restart; ood-installer is executed again automatically.
7. c:\cyfs\etc\gateway\gateway.toml is overwritten, and its database configuration is switched back to mongodb.
8. Start the browser; the OOD info is not displayed correctly, and some service keeps restarting (flashing by).

Expected behavior

On restart, running ood-installer should not overwrite the modified configuration file.
System information

CYFSOOD-x86-64-1.0.0.505-nightly package downloaded from the ftp service

OS: win10

Additional context

Optimized compilation speed

Speed up the compilation of the OOD services:

Move channel.rs out of cyfs-base into a new crate, cyfs-version
Move all logging-related code out of cyfs-debug into a new crate, cyfs-log
Keep the old check-related code in cyfs-debug

This way, whenever the version number or channel changes and a recompile is needed, long-building libraries such as cyfs-base, cyfs-bdt, and cyfs-lib don't need to be rebuilt, only the final binaries, which greatly speeds up compilation.



NDN Transfer Group basis

NDN Transfer Group is designed for named-data transfer sessions in which several devices may request the same named data at the same time. Each transfer group can adopt a different strategy, since the features of the requested named data differ; BDT has to design a transfer-group basis to support building these strategies:

  • more fields on existing NDN packages (e.g. the interest package), or newly defined NDN packages, to support transfer-group member communication;
  • design the basic roles in a transfer group: tracker, hub, seeder, etc.;
  • design the basic actions of a transfer-group member.

Add additional sync support to DirObject

Due to the particularity and complexity of DirObject, which involves multiple layers of Object nesting interleaved with Chunks, the synchronization mechanism needs additional consideration and a corresponding implementation.

Device add bdt_version field

For the sake of BDT protocol version compatibility, an additional field needs to be added to indicate the BDT protocol version of the stack used by the current device.

The Device body needs a new field as follows:
bdt_version: Option<u8>

Compatibility between the old and new binary data layouts needs to be considered.
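
One common way to keep an optional trailing field backward compatible is to append it at the end of the encoded body, so old decoders simply stop early and new decoders read None when the byte is absent. A sketch of the idea (not the actual cyfs-base codec):

    // Illustrative encoding only.
    fn encode_body(base: &[u8], bdt_version: Option<u8>) -> Vec<u8> {
        let mut out = base.to_vec();
        if let Some(v) = bdt_version {
            out.push(v); // appended last: old decoders ignore the tail
        }
        out
    }

    fn decode_bdt_version(body: &[u8], base_len: usize) -> Option<u8> {
        // bodies produced by old stacks carry no trailing byte -> None
        body.get(base_len).copied()
    }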

BDT support scale out for NDN request

  • create a download session for a data piece returned from a channel different from the one the interest was sent on;
  • add a from field to the Interest packet: if an interest is forwarded from one device to another, this field points to the origin device the interest was sent from;

Super frame datagram

  • support super-frame datagrams: send/receive datagrams beyond the UDP MTU;
  • support plaintext datagrams;

Windows bash environment cross compile cyfs_raptorq error

After compiling the Linux version of the cyfs project in a Windows bash environment and then compiling the project in a Windows cmd environment, the cyfs_raptorq module fails to compile with the following error:

error[E0786]: found invalid metadata files for crate `cyfs_raptorq`
  --> CYFS\src\component\cyfs-bdt\src\ndn\chunk\encode\raptor.rs:10:5
   |
10 | use cyfs_raptorq::{
   |     ^^^^^^^^^^^^
   |
   = note: invalid metadata version found: \\?  CYFS\src\target\release\deps\libcyfs_raptorq.rlib

After deleting the target directory, the Windows cmd compile passes.

BDT-Stream Direct failure between BDT LAN devices

[Test node EPs]
PC_0015_0 L4tcp192.168.1.145:38549
PC_0016_0 L4tcp192.168.1.142:45531

[Steps]

  1. PC_0015_0 and PC_0016_0 do not use SN
  2. PC_0015_0 -> PC_0016_0, remote_desc contains EP L4tcp192.168.1.142:45531
  3. PC_0016_0 -> PC_0015_0, remote_desc contains EP L4tcp192.168.1.145:38549

[Expected outcome]
The connection succeeds.

[Actual result]
The connection fails. Without SN, direct connection can only use the WAN address.

NDN Cache Node support in BDT

interest package routing semantics

  • forward an interest package to another device;
  • reply to an interest with another source;
  • reply to an interest with waiting;

scheduling upload tasks

  • resource-usage statistics for upload tasks
  • resource quotas for upload tasks; when a quota is reached, reply waiting to all incoming interests

service proof of traffic for cache nodes

  • embed service-proof fields in NDN packages

Datagram broadcasting

  • enable a DHT routing map on the BDT stack;
  • implement datagram broadcasting with the DHT;
  • filter broadcasting by the area code in the device object;

Add mechanism to clear handlers based on dec

Due to the persistence mechanism of handlers, if dec modifies a handler and forgets to remove the old one, residue remains and affects the handler mechanism. So consider adding an interface for clearing handlers at dec granularity; after dec starts, or when necessary, it can clear historical versions with one call.

brief information in object-id

Today, to save any information we must construct a dedicated object, which is too complicated when the amount of information is small. We could design an object whose information is encapsulated directly in the Object-Id, such as a number, a short byte stream, or a short string.
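
A sketch of the idea: pack a type tag, a length, and the payload directly into an id-sized buffer, falling back to a real object when the value is too large. The layout below is invented for illustration:

    const ID_LEN: usize = 32;

    // Returns None when the payload doesn't fit and a real object is needed.
    fn make_brief_id(type_tag: u8, payload: &[u8]) -> Option<[u8; ID_LEN]> {
        if payload.len() > ID_LEN - 2 {
            return None;
        }
        let mut id = [0u8; ID_LEN];
        id[0] = type_tag;            // e.g. number / short bytes / short string
        id[1] = payload.len() as u8; // inline length
        id[2..2 + payload.len()].copy_from_slice(payload);
        Some(id)
    }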
