neofs-node's Introduction

NeoFS logo

NeoFS is a decentralized distributed object storage integrated with the NEO Blockchain.



Overview

NeoFS nodes are organized in a peer-to-peer network that takes care of storing and distributing users' data. Any Neo user may participate in the network and get paid for providing storage resources to other users, or store their data in NeoFS and pay a competitive price for it.

Users can reliably store object data in the NeoFS network and have a transparent data placement process due to a decentralized architecture and flexible storage policies. Each node is responsible for executing the storage policies that the users select for geographical location, reliability level, number of nodes, type of disks, capacity, etc. Thus, NeoFS gives full control over data to users.

Deep Neo Blockchain integration allows NeoFS to be used by dApps directly from NeoVM on the Smart Contract code level. This way dApps are not limited to on-chain storage and can manipulate large amounts of data without paying a prohibitive price.

NeoFS has a native gRPC API and protocol gateways for popular protocols such as AWS S3, HTTP, FUSE and sFTP, allowing developers to integrate applications without rewriting their code.

Supported platforms

Now, we only support GNU/Linux on amd64 CPUs with AVX/AVX2 instructions. More platforms will be officially supported after release 1.0.

The latest version of neofs-node works with neofs-contract v0.19.1.

Building

To make all binaries you need Go 1.20+ and make:

make all

The resulting binaries will appear in the bin/ folder.

To make a specific binary use:

make bin/neofs-<name>

See the list of all available commands in the cmd folder.

Building with Docker

Building can also be performed in a container:

make docker/all              # build all binaries
make docker/bin/neofs-<name> # build a specific binary

Docker images

To make Docker images suitable for use in neofs-dev-env, use:

make images

Running

CLI

neofs-cli allows you to perform many actions, such as container and object management, connecting to any node of the target network. It has an extensive built-in description of all its commands and options, but some specific concepts have additional documents describing them.

neofs-adm is a network setup and management utility usually used by network administrators. Refer to docs/cli-adm.md for more information about it.

Both neofs-cli and neofs-adm can take a configuration file as a parameter to simplify working with the same network/wallet. See cli.yaml for an example of what this config may look like. Control service-specific configuration examples are ir-control.yaml and node-control.yaml for IR and SN nodes respectively.

Node

There are two kinds of nodes -- inner ring nodes and storage nodes. Most of the time you're interested in running a storage node, because inner ring ones are special and are somewhat similar to Neo consensus nodes in their role for the network. Both accept parameters from YAML or JSON configuration files and environment variables.

See docs/sighup.md on how nodes can be reconfigured without restart.

See docs/storage-node-configuration.md on how to configure a storage node.

Example configurations

These examples contain all possible configurations of NeoFS nodes. All parameters are correct there; however, their particular values are provided for informational purposes only (and are not recommended for direct use), since real networks and real configurations are likely to differ a lot from them.

See node.yaml for configuration notes.

Private network

If you're planning on NeoFS development, take a look at neofs-dev-env. To develop applications using NeoFS, we recommend the more lightweight neofs-aio container. If you really want to get your hands dirty, refer to docs/deploy.md for instructions on how to do things manually from scratch.

Contributing

Feel free to contribute to this project after reading the contributing guidelines.

Before starting to work on a certain topic, create a new issue first, describing the feature/topic you are going to implement.

Credits

NeoFS is maintained by NeoSPCC with the help and contributions from community members.

Please see CREDITS for details.

License


neofs-node's Issues

Add SQL-like policy setting tool to neofs-cli

neofs-cli needs to work with the storage policy in a human-friendly format. There should be:

  • a policy module providing a way to write a policy in a human-friendly format (SQL-like, JSON) and apply it to a netmap (current or pre-constructed)
  • the container module has to accept the human-friendly format as the storage policy on container creation
  • as a best effort, it would be nice to have the ability to translate the storage policy of already existing containers from protobuf format into a human-readable form.

Storage GAS distribution on large network map

Inner ring nodes periodically transfer sidechain GAS to storage nodes so they can send sidechain txs: create container, bootstrap, etc.

According to #139 (comment), there is a possible case when the network map has more nodes than there is GAS available to transfer. We can solve this issue with several approaches:

Do not do anything

This situation is possible when there are too many nodes in the network map. If we set the available GAS amount to 2 (Fixed8), then the network map should contain more than 200 000 000 nodes, which is HIGHLY unlikely.

However, even if there are fewer nodes in the network map, each node can get an insufficient amount of GAS to operate with the sidechain. In this case it should wait several epochs to get enough GAS. This is quite unpleasant for users and node holders, so we might think about alternatives.
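
For illustration, a tiny Go sketch of the arithmetic behind this estimate (the 2 GAS figure and Fixed8 precision come from the text above; the node counts are made up for the example):

package main

import "fmt"

const fixed8 = 100_000_000 // smallest GAS units in 1 GAS (Fixed8)

func main() {
	total := int64(2 * fixed8) // 2 GAS distributed per epoch
	for _, nodes := range []int64{100, 10_000, 200_000_000} {
		fmt.Printf("%d nodes -> %d unit(s) each\n", nodes, total/nodes)
	}
}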

Increase GAS distribution amount

  • adjust GAS distribution value locally on each node,
  • use global config value in sidechain or mainnet contracts.

In the first case we calculate the GAS distribution value at runtime automatically, which is good. But it should stay within some limits so that it won't be abused and an inner ring node won't spend all its remaining GAS.

The second option is more explicit and controllable, but requires out-of-chain coordination between inner ring node holders.

Send GAS to a subset of nodes

We can choose a subset of nodes that the inner ring will transfer GAS to. E.g. an inner ring node sends GAS to the network map nodes where storageNodeListIndex % innerRingListIndex == 0. Inner ring nodes stay within the GAS distribution amount limit, but some storage nodes can still get an insufficient amount of GAS, see the first option.
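
A sketch of such a selection (illustrative only; the modulo rule is copied from the description above, node identifiers are simplified to strings and the zero-index guard is an assumption):

// pickSubset returns the network map nodes an inner ring node with the given
// list index would send GAS to, using the modulo rule described above.
func pickSubset(netmap []string, innerRingListIndex int) []string {
	var subset []string
	for storageNodeListIndex, node := range netmap {
		if innerRingListIndex > 0 && storageNodeListIndex%innerRingListIndex == 0 {
			subset = append(subset, node)
		}
	}
	return subset
}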

Add string type argument support in morph/client

Some smart-contracts have strings as method arguments. While VM can interpret any []byte as strings in contracts, this can be changed later with stronger typing.

The toStackParamether method converts native Go types to smartcontract.Parameter, and it should support strings.
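
A minimal sketch of what string support could look like in such a conversion (assuming neo-go's smartcontract.Parameter with Type and Value fields; this is not the actual neofs-node code):

import (
	"fmt"

	"github.com/nspcc-dev/neo-go/pkg/smartcontract"
)

// toStackParameter converts native Go types to smartcontract.Parameter,
// including the string case discussed above.
func toStackParameter(v interface{}) (smartcontract.Parameter, error) {
	switch val := v.(type) {
	case string:
		return smartcontract.Parameter{Type: smartcontract.StringType, Value: val}, nil
	case []byte:
		return smartcontract.Parameter{Type: smartcontract.ByteArrayType, Value: val}, nil
	case int64:
		return smartcontract.Parameter{Type: smartcontract.IntegerType, Value: val}, nil
	default:
		return smartcontract.Parameter{}, fmt.Errorf("unsupported parameter type %T", v)
	}
}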

Incorrect SHA256 range hash with non zero offset

Range hash with a non-zero offset returns an invalid value for a non-split object. I compare neofs-node results with a combination of xxd and sha256sum output:

xxd -s <offset> -l <length> <filepath> | xxd -r | sha256sum
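
The same reference digest can also be computed with a small Go helper (a sketch using only the standard library):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// rangeSHA256 hashes `length` bytes of the file starting at `offset`.
func rangeSHA256(path string, offset, length int64) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	if _, err := f.Seek(offset, io.SeekStart); err != nil {
		return "", err
	}

	h := sha256.New()
	if _, err := io.CopyN(h, f, length); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	sum, err := rangeSHA256("../go.mod", 1, 500) // same range as the failing case below
	if err != nil {
		panic(err)
	}
	fmt.Println(sum)
}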

Current Behavior

✔️ Whole file

$ neofs-cli -c ../config.yml object hash --cid <> --oid <>
83f19b8a49e9655e7d704f5ee6547fa53dc7f5a524035ca589f365cb4e4be3d9
$ xxd -s 0 -l 1626 ../go.mod | xxd -r | sha256sum
83f19b8a49e9655e7d704f5ee6547fa53dc7f5a524035ca589f365cb4e4be3d9  -

✔️ Chunk with zero offset

$ neofs-cli -c ../config.yml object hash --cid <> --oid <> --range 0:500
Offset=0 (Length=500)   : 1984b3d081b204dadf0b36eb35a37b58b0d5395be57af1450038bfc4c11c5b51
$ xxd -s 0 -l 500 ../go.mod | xxd -r | sha256sum
1984b3d081b204dadf0b36eb35a37b58b0d5395be57af1450038bfc4c11c5b51  -

❌ Chunk with non zero offset

$ neofs-cli -c ../config.yml object hash --cid <> --oid <> --range 1:500
Offset=1 (Length=500)   : d5a54a6b06539f1b6f58216b8176f83ac62d604452fc154b9869af1f573273fb
$ xxd -s 1 -l 500 ../go.mod | xxd -r | sha256sum
46609c92a267c792a12961b5760767184fad5e2a5d3f538ce163350b964fc9d9  -

Your Environment

  • Version used: 0.12.0-rc3-22-g54818d5

Tombstone classified as root object

Object search with the --root flag in the CLI returns a tombstone object ID, but it shouldn't.

Steps to Reproduce

  1. Upload non-split object
  2. Remove non-split object
  3. Wait for garbage collector to remove original object
  4. Search in container for root objects.

Your Environment

  • Version used: 0.12.0-rc3-22-g54818d5

Move placement policy parsers to SDK

QL parsers are quite useful for applications such as protocol gateways or GUI apps. They provide a human-readable way to set up container policies, so it would be nice to have them in the SDK library.

Add storage group module to neofs-cli

Basic set of StorageGroup operations to implement:

  • Create
  • List StorageGroups for container
  • Get StorageGroup by ID
    There should be an option to save SG to a file or print in a desired format
  • Delete StorageGroup by ID

Add github actions

Add GitHub Actions similar to neofs-api-go on pull requests:

  • check the sign-off line in commits,
  • run unit tests,
  • run the linter.

Add container module to neofs-cli

neofs-cli needs to have a module for container operations. As a first step the minimal required feature set would be:

  • Create container with Storage Policy, BasicACL and Attributes
  • Delete container
  • List containers by owner
  • List objectIDs in container
  • Get the container structure and display the parsed information, or dump it to a file or stdout
  • Get the list of nodes serving a container, by ID or from a file dump, using a netmap from a file dump or stdin

Tombstone does not include parent ID

The tombstone body contains the object IDs that should be removed. This is helpful when an object was split into several parts: the tombstone contains the ID of every smaller part. However, it does not include the ID of the removed object itself.

Now the meta-base stores virtual object IDs of big objects in a separate index. If an object ID is not present in the tombstone, it is not marked in the meta-base, so it will still appear in search responses after deletion, which is incorrect.

Add --rpc-endpoint option to neofs-cli

neofs-cli needs to have an --rpc-endpoint option in the root module to set the NeoFS node to connect to.

The node address should be accepted both in multiaddr and in plain traditional IPv4/IPv6/DNSname:port formats.
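
A sketch of accepting both formats (assuming the go-multiaddr package for the multiaddr case; plain addresses are checked with the standard library):

import (
	"net"
	"strings"

	ma "github.com/multiformats/go-multiaddr"
)

// parseEndpoint accepts either a multiaddr ("/dns4/node/tcp/8080") or a
// plain "host:port" string and reports whether it is well-formed.
func parseEndpoint(s string) error {
	if strings.HasPrefix(s, "/") {
		_, err := ma.NewMultiaddr(s)
		return err
	}
	_, _, err := net.SplitHostPort(s)
	return err
}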

CLI set filepath instead of filename in object header

With #117 the CLI sets the object FileName header, but it uses the value provided on the command line as-is. This value is actually a path to the file, so we have to extract the filename first (see the sketch after the examples below).

Expected

$ ./bin/neofs-cli object put --cid <> --file ../2_5422526881285015969.pdf
$ ./bin/neofs-cli object head --cid <> --oid <>
...
Attributes:
  FileName=2_5422526881285015969.pdf

Got

$ ./bin/neofs-cli object put --cid <> --file ../2_5422526881285015969.pdf
$ ./bin/neofs-cli object head --cid <> --oid <>
...
Attributes:
  FileName=../2_5422526881285015969.pdf
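
A likely fix is to strip the directory part of the --file value before setting the attribute; a minimal sketch:

import "path/filepath"

// fileNameAttribute derives the FileName attribute value from the --file
// argument, dropping the directory part of the path.
func fileNameAttribute(filePath string) string {
	return filepath.Base(filePath) // "../2_5422526881285015969.pdf" -> "2_5422526881285015969.pdf"
}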

Set meta header in responses

Most of the neofs-storage services do not set the meta header in responses. The response meta header should contain at least the default TTL value, the node's epoch counter value and the API version used.

Specify pool of NEO endpoints in config

Both inner ring and storage node configurations have NEO endpoint fields. It would be nice to support a pool of addresses.

# Pure YAML:
morph:
  endpoint: 
    - https://morph1.nspcc.ru
    - https://morph2.nspcc.ru
    - https://morph3.nspcc.ru

# With ENV support:
morph:
  endpoint_0: https://morph1.nspcc.ru
  endpoint_1: https://morph2.nspcc.ru
  endpoint_2: https://morph3.nspcc.ru

# With priorities:
morph:
  endpoint_0: 
    url: https://morph1.nspcc.ru
    priority: 1.0
  endpoint_1: 
    url: https://morph2.nspcc.ru
    priority: 0.5
  endpoint_2: 
    url: https://morph3.nspcc.ru
    priority: 0.5

Neofs-node can choose a random address and use it at runtime if it is active. This way we can uniformly distribute the load across the RPC nodes present in the preinstalled configuration.
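
A sketch of picking a random working endpoint at startup (illustrative only; the health check callback is a hypothetical placeholder, not an existing neofs-node function):

import (
	"errors"
	"math/rand"
)

// pickEndpoint shuffles the configured endpoints and returns the first one
// that passes the supplied health check.
func pickEndpoint(endpoints []string, healthy func(string) bool) (string, error) {
	shuffled := append([]string(nil), endpoints...)
	rand.Shuffle(len(shuffled), func(i, j int) {
		shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
	})
	for _, e := range shuffled {
		if healthy(e) { // e.g. a test RPC call against the candidate node
			return e, nil
		}
	}
	return "", errors.New("no active RPC endpoint available")
}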

What's wrong with runtime connection pool?

Instead of choosing one active address before startup, neofs-node can keep a pool of active connections, health-check it at runtime and rotate connections whenever needed. This increases resilience to RPC node failures, but there is a tricky part.

While a connection pool works fine for clients, neofs-node also uses an event listener to fetch new notifications from the blockchain. Therefore neofs-node would have to multiplex several listeners to provide a single stable channel of events. This is not a trivial task and requires connections to several RPC nodes simultaneously. The other option is to rotate listeners the same way as client connections, but then we can have a bit of downtime during rotation, which is bad.

So the convenient way to address this problem is to choose one endpoint and work with it. As soon as there are problems, the listener will close the connection and the client will return an error. In this case we can restart / reinitialize the application with a new single RPC endpoint.

Add tombstone id to delete object output in neofs-cli

It would be great to add the tombstone id to the neofs-cli output.

For example:

$ neofs-cli --rpc-endpoint s01.neofs.devenv:8080 --key L1XsFNrFqyB9VA6dbus6UtAcv8yJMdfpfqCcteHAahpz8rqin9K3 object delete --cid 7KjvGB3geDDUkpXkyV7E8mj6n3x5eb2h6kXm6ZfUgYMN --oid 8pw9MagFHW7DDWhS4qYMToENv65HhEag5ds7AbDTF5BT 
Object removed successfully.
  ID: 8pw9MagFHW7DDWhS4qYMToENv65HhEag5ds7AbDTF5BT
  CID: 7KjvGB3geDDUkpXkyV7E8mj6n3x5eb2h6kXm6ZfUgYMN
Tombstone ID: 5s2XS8uJPZsThBjjv6D8Qo8pmTePneEB35U7W585bYuF

Container service ignores message id in put request

The container has a salt (nonce) field to avoid container id collisions. This field is set via the MessageID field in the container put request. This field is ignored by neofs-node, since the container put response has a different container id.

If you get the container with the returned container id, it comes back with an empty salt (nonce) field. This marks the container as invalid.

Expected Behavior

The container put response has the same container id as your locally created structure.
The container get response returns a valid container with a non-empty salt (nonce) field.

Current Behavior

The container put response has a container id different from the locally created container structure.
The container get response has an empty salt (nonce) field, even though MessageID was not empty.

Possible Solution

Do not regenerate the salt (nonce) on the server side. Return an error if the MessageID field is not set. Check that the salt field is not lost during conversion from the proto-defined container structure into the neofs-node-defined container structure.

Steps to Reproduce

  1. Create a container.Container{} structure from neofs-api-go and fill it.
  2. Create a container.PutRequest{} and fill the fields according to the container structure.
  3. Check that the PutResponse.CID field has a container id different from your local container structure.
  4. Wait for the container to be stored in the morph chain.
  5. Get the container with the container id from the PutResponse.
  6. Check that the ID() function returns an error because the salt (nonce) field is empty.

Your Environment

  • Version used: neofs-node v0.11.0 at latest neofs-dev-env.

neofs-cli search operation with --filter does not operate with object id

neofs-cli search operation with --filter does not operate with object id.

For example for object FM4vHDMDGN2ESBwFgK9cBfrgp2F26xPfA8ysBgtnaVjS with headers:
{"objectID":{"value":"1Sep9/aMFvx9y/kn4x+vy29sWpqPYLzVBd8iM7L5tiU="},"signature":{"key":"AgsHtCnrUUyZDtJMMUXqDuSNl1pNNaqjVgJ8fJ9kme2N","signature":"BIXS1CLrPUlsmot96xory2o0fBxTMBU62ULcenlgcffzJ3H3wl/CxlJygdm7QDAL2cf6ECl3TqTUix72rLBopJg="},"header":{"version":{"major":2,"minor":0},"containerID":{"value":"5UCvtrxiFLRZshssllLlzYJL8LPLAv8ubJDMTcnaUZ4="},"ownerID":{"value":"NeMOrwz/s6pJpbz3iuxvjRAb2ulLdNSB4w=="},"creationEpoch":"46","payloadLength":"20000000","payloadHash":{"type":"SHA256","sum":"4uAFRMGh+kff29mO/rbFPmfqz7bBxkVN/xAxZvJv+cY="},"objectType":"REGULAR","homomorphicHash":{"type":"TZ","sum":"frFCPV2/1YDrABHsg3maDn061aCbbXK1+AhK6cP5jeooYqGYV/O72FxTMBkQjj+rUpeksQIc6mPrZpsC/ePQnQ=="},"sessionToken":{"body":{"id":"6DreJiJBTzqW0gbt9ZdYew==","ownerID":{"value":"NeMOrwz/s6pJpbz3iuxvjRAb2ulLdNSB4w=="},"lifetime":{"exp":"0","nbf":"0","iat":"0"},"sessionKey":"AgsHtCnrUUyZDtJMMUXqDuSNl1pNNaqjVgJ8fJ9kme2N","object":{"verb":"PUT","address":{"containerID":{"value":"5UCvtrxiFLRZshssllLlzYJL8LPLAv8ubJDMTcnaUZ4="},"objectID":null}}},"signature":{"key":"AkfmbTvwi++QggnbMcifHtQpW4fkaB1KlblI0/scf6gC","signature":"BL1y0JgSD6BJdBQqth1mvFXeHbMWB2vAiqpkAuS6/njfXlk6f3p6puRgyFXl8yaw48B9lxh+kniivbQsB9V6hng="}},"attributes":[{"key":"key1","value":"1"},{"key":"key2","value":"abc"},{"key":"FileName","value":"b8c2cbc8-48db-418e-ab25-438553013bb8"},{"key":"Timestamp","value":"1605657650"}],"split":null},"payload":""}

neofs-cli --rpc-endpoint s03.neofs.devenv:8080 --key L2GseHKsbhXZFa9xXaUPXErhxPRQs2vgnH5ie6aFVP1kG2keNCHV  --ttl 1 object search --root --cid GRuaANbC7pyiG7qEsyK1odTAnk1qN4muTECRPJeogV6u --filters objectID=9uk6MGNQ87LiH2JhtEyLYrTGR3keGj2NZtC1roVyEsjP
Found 0 objects.
neofs-cli --rpc-endpoint s03.neofs.devenv:8080 --key L2GseHKsbhXZFa9xXaUPXErhxPRQs2vgnH5ie6aFVP1kG2keNCHV  --ttl 1 object search --root --cid GRuaANbC7pyiG7qEsyK1odTAnk1qN4muTECRPJeogV6u --filters key1=1
Found 1 objects.
FM4vHDMDGN2ESBwFgK9cBfrgp2F26xPfA8ysBgtnaVjS

Transform base64 encoded `NodeInfo` structure to JSON

The network map is stored in the blockchain as an array of binary-encoded NodeInfo structures. Neo-go can invoke the Netmap() method to return these structures. On top of that we can build the whole network map structure represented in JSON.

To do that we should be able to convert a base64-encoded NodeInfo into JSON. The CLI can work as a unix filter:

echo "RXhhbXBsZQ==" | neofs-cli decode node-info | jq

Add eACL support to neofs-cli

neofs-cli needs to support eACL operations in the container module.
The minimal required feature set should be:

  • CRUD for eACL table for particular container
  • eACL table converter from/to binary protobuf/JSON formats

Gracefully handle RPC node failure

When a Neo RPC node fails, the event listener generates a huge amount of empty events that hang the whole application. This case should be gracefully handled by switching the node to a passive state (for inner ring) and reconnecting, or by just shutting down the application.

Add Issue template

It would be nice to have a GitHub issue template for the neofs-node repository.

Add accounting operations to neofs-cli

neofs-cli needs to have an accounting->balance command implementing the Balance request.

Any correct OwnerID must be accepted as an argument. By default, the OwnerID is calculated for the key in use.

Make heuristic evaluation for extra fee

The inner ring can't calculate the precise fee for a contract invocation with signature collection. While most of the nodes only register a signature, one node will trigger the method execution, which requires much more gas. The amount of extra fee depends on method complexity. Some methods depend on the inner ring size, e.g. container creation triggers NEP-5 transfers to every single inner ring node. The more inner ring nodes we have, the more gas the method spends.

So there should be some heuristic evaluation of the extra fee before each invocation.

There are also different approaches to this issue. For example, inner ring nodes can repeat a tx that was not accepted into a block: the tx that triggers method execution lacks gas after the testinvoke evaluation, but the next time the testinvoke evaluation will be precise. However, this approach can slow down all chain operations.
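
One possible shape of such a heuristic (purely illustrative; the linear model and its coefficients are assumptions, not values taken from the contracts):

// estimateExtraFee guesses the additional GAS needed by the invocation that
// actually triggers method execution. It scales with inner ring size, since
// e.g. container creation triggers a transfer to every inner ring node.
func estimateExtraFee(baseFee, perIRNodeFee int64, innerRingSize int) int64 {
	return baseFee + perIRNodeFee*int64(innerRingSize)
}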

Set well-known attributes of objects and containers in CLI

After nspcc-dev/neofs-api-go#172 is implemented in the SDK, there will be well-known attributes for containers and objects. We should adopt them in the CLI.

Container attributes

  • The Timestamp attribute can be set automatically by the CLI. Consider adding a --disable-timestamp flag to not set it.
  • The name of the container should be provided with a --name argument, which can be omitted.

Object attributes

  • Timestamp can be set the same way as for containers. Consider a --disable-timestamp flag.
  • The FileName attribute can be set automatically based on the --file argument. Consider adding a --disable-filename flag to not set it (see the sketch after this list).
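
A sketch of how the CLI could fill these defaults (the flag names come from the list above; the attribute API is simplified to a plain map for illustration):

import (
	"path/filepath"
	"strconv"
	"time"
)

// defaultObjectAttributes builds the well-known object attributes described above.
func defaultObjectAttributes(filePath string, disableTimestamp, disableFilename bool) map[string]string {
	attrs := make(map[string]string)
	if !disableTimestamp {
		attrs["Timestamp"] = strconv.FormatInt(time.Now().Unix(), 10)
	}
	if !disableFilename && filePath != "" {
		attrs["FileName"] = filepath.Base(filePath)
	}
	return attrs
}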

Container's object listing ignores split objects

When searching for objects in a particular container, split objects are ignored because they do not exist as a single object.

Expected Behavior

Container listing returns all objects: regular objects, split objects and split object parts.

Current Behavior

Split objects are ignored.

Possible Solution

On each node serving a search request for container listing, additionally list all OIDs from Parent fields and merge those lists into the response.

Mark removed objects in meta-storage on RPC request

Now all delete operations happen with some delay. An object.Delete invocation generates a tombstone and the node broadcasts it to all container nodes. Some of the container nodes store the tombstone, but every container node puts the list of removed objects from the tombstone into a GC queue. After some time, the GC takes object IDs from the queue, marks them as removed in the meta-storage and removes them from the object storage.

With the new meta-storage, this can be implemented without the GC timeout. On receiving a tombstone, the node can mark all removed object IDs in the meta-storage index. Then the GC will take batches of object IDs asynchronously from the index of removed objects, remove them from the object storage and rebuild the index.

Implement "Replicator" service

The Replicator service has to PUT objects to other nodes according to the container's storage policy. Object IDs are taken from a queue. The Replicator's request queue is processed in such a way that it does not harm the main object service performance.

To avoid unnecessary network load, the Replicator should take into account hints provided in its queue items, including the list of nodes known to already store the object under replication and the nodes known not to store it.
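
A sketch of what a request queue item carrying these hints could look like (field and type names are illustrative, not the actual neofs-node types):

// ReplicationTask is a single Replicator queue item with placement hints.
type ReplicationTask struct {
	Address      string   // address (CID/OID) of the object to replicate
	HoldingNodes []string // nodes already known to store the object
	MissingNodes []string // nodes known not to store it despite the policy
}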

Can't get split object from storage nodes

There is an issue with object.GetRange that leads to errors in object.Get. This issue happens only with split objects; non-split objects are fetched correctly. object.Get returns an error (formatted for better readability):

Error: can't put object: could not receive Get response: rpc error: code = Unknown desc = 
 could not receive response: 
   could not receive response message for signing: 
     (*object.getStreamResponser) could not receive response: 
       could not receive response message for signing: 
         (*getsvc.streamer) could not receive get response: 
           (*getsvc.Streamer) could not receive range response: 
             (*rangesvc.streamer) incomplete get payload range

Steps to Reproduce

  1. Upload object with size > 1 megabyte (max object size in dev-env)
  2. Get that object

Your Environment

  • Node version used: 0.12.0-rc3-22-g54818d5
  • Container with public basic ACL

Implement "Policer" service

The Policer service goes through locally known objects and makes sure they are stored in the NeoFS network according to the container's storage policy. To check whether an object is stored on a particular node, the HEAD request is used. The Policer should rely on MetaBase requests without producing additional load on the object storage subsystem.

If an object is not stored properly, its address is passed to the Replicator service's request queue. A request queue item should carry additional information, including the list of nodes known to store the object and a list of nodes known to miss the object despite being supposed to store it.

Because other storage nodes may not be trusted, additional checks will be implemented when the Reputation and Data Audit subsystems are ported.

Parse placement policies without specific attributes

Some placement policies do not specify selection attributes, e.g. "put two copies in a container of 6 nodes with SSD". This can be represented as

REP 2
SELECT 6 FROM F
FILTER StorageType EQ SSD as F

Take the generic policy "put two copies in a container of 6 nodes":

REP 2
SELECT 6 FROM *

Expected Behavior

These queries should be parsed into the following selectors:

SelectorExample1 {           SelectorExample2 {
  name:     ""                 name:     ""
  count     6                  count     6
  clause    SAME               clause    SAME
  attribute ""                 attribute ""
  filter    "F"                filter    "*"
}                            }

Current Behavior

The placement policy parser grammar requires the IN token in selectors to specify a bucket, which makes all these queries invalid.

2:10: unexpected token "FROM" (expected "IN")

Possible Solution

Make IN token optional in placement policy parser grammar.

cc: @realloc @fyrchik

Session token is not written to the object

Expected Behavior

In case of trusted object placement within an opened session between the user and a trusted node, the session token must be written to the object body.

Current Behavior

Session token is not written to the objects during trusted placement.

Possible Solution

  1. Write the session token to the object in the format transformer;

  2. Take the session token from the request and write it to the object before transformations.

Your Environment

  • jindo branch

Sync config sections between neofs applications

The neofs-node repository contains the neofs-storage, neofs-ir and neofs-cli applications. At least the first two applications use config files. Both of them use viper as a config reader and share some configuration parameters, such as:

  • blockchain endpoint,
  • smart-contract script hashes,
  • ...

It would be great if both apps, and even neofs-cli, were able to share sections of the config file.

Use base58 encoders in search and EACL queries

With nspcc-dev/neofs-api-go#147 we've decided to encode object ID, owner ID and container ID as base58 strings. These strings are used in search filters and EACL filters.

Now match functions for these filter use hex encoded representation:

func idValue(id *objectSDK.ID) string {
	return hex.EncodeToString(id.ToV2().GetValue())
}

func cidValue(id *container.ID) string {
	return hex.EncodeToString(id.ToV2().GetValue())
}

func ownerIDValue(id *owner.ID) string {
	return hex.EncodeToString(id.ToV2().GetValue())
}


After nspcc-dev/neofs-api-go#147, they should adopt the stringers from neofs-api-go.
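
After that change, the match helpers above could rely on the SDK stringers directly; a sketch (assuming the ID types expose a base58 String() method, as the referenced issue proposes):

func idValue(id *objectSDK.ID) string {
	return id.String() // base58
}

func cidValue(id *container.ID) string {
	return id.String() // base58
}

func ownerIDValue(id *owner.ID) string {
	return id.String() // base58
}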

Add netmap module to neofs-cli

Basic set of operations to implement:

  • Dump netmap for latest Epoch or for Epoch number set in option, if available.
    Should be saved to file or printed to stdout in different formats including JSON and protobuf.

  • Get latest Epoch number

  • Request LocalNodeInfo from particular host

  • Filter a netmap from a file or stdin using a set of filters. The list of filters can
    be set in binary, JSON or SQL-like format

ACL - Inner Ring PUT/GET operation to the private container

In the previous NeoFS version, it was impossible to PUT/GET an object in a private container with the Inner Ring key.

In the current version of the NeoFS ACL, PUT is possible not only for the container nodes (SYS group) but also for the inner ring nodes.

Do we think this behavior is correct for the current version of NeoFS?

For example for container with basic ACL 0x1C8C8CCC:

$ neofs-cli --rpc-endpoint s01.neofs.devenv:8080 --key <INNER RING KEY> object put --file 31bec1f6-9d84-4d92-b9ca-1ed97b91319a --cid 6fWrinptxkQWfNr7qFExmHViz4Wb5ctVMdZFBQ1QhAVy 
[31bec1f6-9d84-4d92-b9ca-1ed97b91319a] Object successfully stored
  ID: 4mo44oJ4KeG6w3YM7HA9XmrLXBoEaT8mN8wAqgabCbi6
  CID: 6fWrinptxkQWfNr7qFExmHViz4Wb5ctVMdZFBQ1QhAVy

Add --key option to neofs-cli

To select the key to be used for an operation, neofs-cli has to support a --key option in the root module. A file path has to be used as the argument.

Some objects won't be eventually removed

Right now the object delete scheme for split objects looks like this:

  • The node looks for the linking object;
  • If it finds it, the node fills the tombstone with the IDs from the linking object;
  • If the linking object was not found, the node looks for the right-most child, traverses the linked list to the left-most child, and then puts all the IDs into the tombstone.

There are a few things that can go wrong:

  1. The linking object exists but is unavailable to the node right now (e.g. network issues)
  2. Both one of the children and the linking object are lost forever.

We have already run into (1) due to a bug in the implementation (#164). In this case the linking object will be stored forever since it is not present in a tombstone. I think this issue is more likely to happen than (2) and needs to be addressed.

In case (2) we can't properly build the tombstone itself, so at best we will delete only part of the object.

In both cases the leftover objects are not ROOT objects, and they will probably be removed once there is no longer any payment for them.
However, in an environment without a payment subsystem, these objects will be stored forever.

Make container sanity checks in container.Put method

It would be nice if the node could make sanity checks on container fields on a container.Put request:

  • do not accept containers without owner ID or with invalid owner ID,
  • do not accept containers without nonce or with invalid nonce,
  • do not accept containers without placement policy,
  • do not accept containers with unknown version,
  • optional: check container signature (smart-contract won't accept containers with invalid signature anyway, but we can make one more check).

Read max object size from netmap contract

The max object size, like other global system preferences, is stored in the netmap contract. At node initialization, the storage node has to read the configuration parameters and store them locally.

To do that, it is proposed to create a method in morph/netmap/client to get configuration parameters, and then implement a getter in morph/netmap/client/wrapper such as GetObjectMaxSize() that will call the client method with the MaxObjectSize argument.
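
A sketch of the proposed wrapper getter (illustrative; only the GetObjectMaxSize name and the MaxObjectSize key come from the text above, the client type and its configuration method are hypothetical):

// MaxObjectSizeConfig is the netmap contract configuration key to read.
const MaxObjectSizeConfig = "MaxObjectSize"

// GetObjectMaxSize reads the global max object size from the netmap contract
// through a generic configuration getter on the underlying client.
func (w *Wrapper) GetObjectMaxSize() (uint64, error) {
	return w.client.ConfigUint64(MaxObjectSizeConfig) // hypothetical config getter
}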

Support bearer tokens

The ACL service in neofs-storage should check the bearer token if it is present and use it for the extended ACL check instead of the on-chain extended ACL table.

Also, the bearer token should be attached to all requests generated by the node, the same way as the session token.

Add object module to neofs-cli

neofs-cli needs to have a module for object operations.

Minimal required feature set:

  • Get object by address
    Header is ignored by default. Using an option it should be printed or saved to file in JSON or protobuf format
    Payload should be streamed to stdout or saved to file
    There should be an option to dump the requested object 'as-is' in binary form
    Use GetRange request if --range option is set

  • Head object by address
    --main and raw options must be supported
    Header should be printed or saved in JSON or protobuf format to file

  • Get object's payload hash
    Return full payload hash if --range option is not set

  • Delete object by address

  • Search
    Search query should be accepted in binary, JSON or SQL-like format
