
uplink's Introduction


Libuplink

Go library for Storj V3 Network.


Storj is building a decentralized cloud storage network. Check out our white paper for more info!


Storj is an S3-compatible platform and suite of decentralized applications that allows you to store data in a secure and decentralized manner. Your files are encrypted, broken into little pieces, and stored in a global decentralized network of computers. Luckily, we also make it easy for you (and only you) to retrieve those files!

Introducing Storj DCS—Decentralized Cloud Storage for Developers

Installation

go get storj.io/uplink

Example

A ready-to-use example can be found here: examples/walkthrough/main.go

The provided example requires an Access Grant as an input parameter. An Access Grant can be obtained from the Satellite UI. See our documentation.
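
In outline, the walkthrough boils down to the following (a minimal sketch using the public uplink API; the bucket name, object key, and data are placeholders):

package main

import (
	"bytes"
	"context"
	"fmt"
	"io"
	"os"

	"storj.io/uplink"
)

func main() {
	ctx := context.Background()

	if len(os.Args) < 2 {
		fmt.Println("usage: walkthrough <access-grant>")
		return
	}

	// Parse the serialized Access Grant passed as the first argument.
	access, err := uplink.ParseAccess(os.Args[1])
	if err != nil {
		panic(err)
	}

	// Open the project and release its connections when done.
	project, err := uplink.OpenProject(ctx, access)
	if err != nil {
		panic(err)
	}
	defer project.Close()

	// Create the bucket if it does not exist yet ("my-bucket" is a placeholder).
	if _, err := project.EnsureBucket(ctx, "my-bucket"); err != nil {
		panic(err)
	}

	// Upload a small object under the key "hello.txt".
	upload, err := project.UploadObject(ctx, "my-bucket", "hello.txt", nil)
	if err != nil {
		panic(err)
	}
	if _, err := io.Copy(upload, bytes.NewBufferString("hello world")); err != nil {
		_ = upload.Abort()
		panic(err)
	}
	if err := upload.Commit(); err != nil {
		panic(err)
	}
	fmt.Println("upload complete")
}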

A Note about Versioning

Our versioning in this repo is intended to primarily support the expectations of the Go modules system, so you can expect that within a major version release, backwards-incompatible changes will be avoided at high cost.

Documentation

Language bindings

License

This library is distributed under the MIT license (also known as the Expat license).

Support

If you have any questions or suggestions please reach out to us on our community forum or file a support ticket at https://support.storj.io.

uplink's People

Contributors

aleitner avatar aligeti avatar azdagron avatar barterio avatar brimstone avatar bryanchriswhite avatar cam-a avatar coyle avatar crawter avatar egonelbre avatar ethanadams avatar fadila82 avatar ifraixedes avatar iglesiasbrandon avatar isaachess avatar jenlij avatar jessicagreben avatar jtolio avatar kaloyan-raev avatar littleskunk avatar mniewrzal avatar mobyvb avatar navillasa avatar qweder93 avatar rikysya avatar stefanbenten avatar thepaul avatar vinozzz avatar wthorp avatar zeebo avatar

Stargazers

 avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar

Watchers

 avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar

uplink's Issues

There is no way to list or copy objects with an empty prefix

I can create an empty prefix with aws s3 via the Tardigrade S3 Gateway:

$ aws s3 --profile gateway --endpoint http://localhost:7777 mb s3://test324
$ aws s3 --profile gateway --endpoint http://localhost:7777 cp ./1.txt s3://test324//prefix/
upload: ./1.txt to s3://test324//prefix/1.txt
$ aws s3 --profile gateway --endpoint http://localhost:7777 ls s3://test324
                           PRE /
$ aws s3 --profile gateway --endpoint http://localhost:7777 ls s3://test324//
                           PRE prefix/

However, I can't list objects under such an empty prefix with uplink:

$ uplink ls sj://test324//
PRE /
$ uplink ls sj://test324/
PRE /

It's possible only with the --recursive flag.

$ uplink ls --recursive sj://test324/
OBJ 2020-09-26 19:55:39          126 /prefix/1.txt

However, I can't remove, copy, or cat such an object:

$ uplink cat sj://test324//prefix/1.txt
Error: uplink: object not found ("prefix/1.txt")
$ uplink rm sj://test324//prefix/1.txt
Deleted sj://test324//prefix/1.txt
$ uplink ls --recursive sj://test324/
OBJ 2020-09-26 19:55:39          126 /prefix/1.txt

But I can remove this object via gateway:

$ aws s3 --profile gateway --endpoint http://localhost:7777 rm s3://test324//prefix/1.txt
delete: s3://test324//prefix/1.txt
$ aws s3 --profile gateway --endpoint http://localhost:7777 ls s3://test324//
$ aws s3 --profile gateway --endpoint http://localhost:7777 ls s3://test324/

By the way, the empty prefix is valid with AWS S3 itself, too.
Discussion on the forum: https://forum.storj.io/t/uplink-ls-and-uplink-ls-recrusive-results-are-different/9297
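
For library users, the same listing can be attempted through the Go API (a sketch; the bucket name matches the transcript above, and the leading "/" in the prefix is exactly the empty path component at issue):

import (
	"context"
	"fmt"

	"storj.io/uplink"
)

// listEmptyPrefix lists entries under the "/" prefix of bucket test324.
// Keys that begin with "/" have an empty first path component.
func listEmptyPrefix(ctx context.Context, project *uplink.Project) error {
	it := project.ListObjects(ctx, "test324", &uplink.ListObjectsOptions{
		Prefix: "/",
	})
	for it.Next() {
		fmt.Println(it.Item().Key)
	}
	return it.Err()
}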

Extra nodes

Hello
Does CalcNeededNodes in uplink/storage/segments/store.go still need to enforce one extra node, now that erasure-code error detection was replaced by piece hash checks to save egress bandwidth?

func CalcNeededNodes(rs storj.RedundancyScheme) int32 {
	extra := int32(1)

	if rs.OptimalShares > 0 {
		extra = int32(((rs.TotalShares - rs.OptimalShares) * rs.RequiredShares) / rs.OptimalShares)
		if extra == 0 {
			// ensure there is at least one extra node, so we can have error detection/correction
			extra = 1
		}
	}
	// ...

Or do piece hash checks apply only to downloads by the satellite repairer, so that uplink still needs to download extra pieces for a data integrity check before decoding a segment?
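
For concreteness: with a redundancy scheme of required=29, optimal=80, total=110 shares (figures assumed for illustration; they match Storj's commonly cited production settings), the formula gives extra = ((110 - 80) * 29) / 80 = 10 in integer arithmetic, i.e. ten pieces beyond the required 29 would be requested.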

Getting an error while running the go get command

Hi,
Using the uplink RC v1.0.3, I've created a package and uploaded it to my repo. When running the go get command, the following warning/error shows up:

# storj.io/common/rpc/rpcpool

go\src\storj.io\common\rpc\rpcpool\pool.go:73:5: cannot use (*Conn)(nil) (type *Conn) as type drpc.Conn in assignment:
	*Conn does not implement drpc.Conn (missing Closed method)

PS: I had deleted the storj.io/uplink package from my system before running the command. But even after this, I'm able to build my project successfully. Is there any specific reason behind this?

Uplink Upload Object Issues

Hello! I have been playing around with the uplink Go library and I have come across issues when uploading data (EU1 satellite) above a certain size. I was wondering if someone would be able to investigate. I first thought there was an issue with my code, but I have tested the same piece of code on another machine (macOS) and there are no uploading issues at all. Only when I run the code on a Windows machine do I get problems. Both machines are using the same network, so I do not think it is a network issue. Thanks in advance.

Note: smaller uploads such as 1-20MB work fine on Windows, but file sizes such as 30MB always fail.

This is the error I get when uploading data on the Windows machine:

uplink: stream error: ecclient error: successful puts (65) less than success threshold (80)

Go and Windows Versions:

Go Version:     go1.16.2 windows/amd64
Edition:	Windows 10 Home
OS build:	19043.1052

The code I have been using requires you to provide the access grant as a parameter when you run it. The access grant must be in long form (with satellite address + API key):

package main

import (
	"bytes"
	"context"
	"fmt"
	"io"
	"os"
	"strings"

	"github.com/btcsuite/btcutil/base58"

	"storj.io/common/grant"
	"storj.io/common/pb"

	"storj.io/uplink"
)

func main() {
	// provide your access grant here (long form)
	// with satellite + API key
	// as arg
	var accessGrant string
	if len(os.Args) > 1 {
		accessGrant = os.Args[1]
	} else {
		fmt.Println("Access Grant Required, exiting")
		os.Exit(0)
	}
	// get satelliteAddress from the access grant
	satellite, err := getGrantSatelliteAddress(accessGrant)
	if err != nil {
		panic(err)
	}
	// get the API key from the access grant
	apiKey, err := parseAccessGrant(accessGrant)
	if err != nil {
		panic(err)
	}
	// use your passphrase here
	uplinkPassphrase := "password"
	ctx := context.Background()

	access, err := uplink.RequestAccessWithPassphrase(ctx, satellite, apiKey, uplinkPassphrase)
	if err != nil {
		panic(err)
	}

	project, err := uplink.OpenProject(ctx, access)
	if err != nil {
		panic(err)
	}
	// close the project on exit to release its connections
	defer project.Close()

	bucketName := "mybucket"
	filePath := "bobData"

	// Create bucket if it does not exist
	_, err = project.EnsureBucket(ctx, bucketName)
	if err != nil {
		panic(err)
	}

	// defer so that bucket/data is deleted
	// even if the program crashes
	defer deleteTestData(project, bucketName)

	// create data/string that is of 30MB in size
	numberOfMegaBytes := 30
	numBytes := numberOfMegaBytes * 1024 * 1024
	dataString := strings.Repeat("b", numBytes)
	dataBytes := []byte(dataString)

	upload, err := project.UploadObject(ctx, bucketName, filePath, &uplink.UploadOptions{})
	if err != nil {
		panic(err)
	}

	_, err = io.Copy(upload, bytes.NewBuffer(dataBytes))

	if err != nil {
		panic(err)
	}
	err = upload.Commit()

	if err != nil {
		fmt.Println("Failed to Upload project")
		fmt.Println(err)
	} else {
		fmt.Println("Upload Successful!")
	}

}

// place this in a function so we can defer it
func deleteTestData(project *uplink.Project, bucketName string) {
	_, err := project.DeleteBucketWithObjects(context.Background(), bucketName)
	if err != nil {
		panic(err)
	}
}

// getGrantSatelliteAddress returns the Node ID + URL of the accessGrant
func getGrantSatelliteAddress(accessGrant string) (string, error) {
	data, _, err := base58.CheckDecode(accessGrant)
	if err != nil {
		return "", fmt.Errorf("GetMacSatellite : %v", err)
	}
	scope := new(pb.Scope)
	if err := pb.Unmarshal(data, scope); err != nil {
		return "", fmt.Errorf("GetMacSatellite : %v", err)
	}
	return scope.SatelliteAddr, nil
}

// parse accessGrant and return the api Key
func parseAccessGrant(accessGrant string) (string, error) {
	parsedGrant, err := grant.ParseAccess(accessGrant)
	if err != nil {
		return "", fmt.Errorf("LongMacToShortMac: %v", err)
	}

	apiKey := parsedGrant.APIKey
	serializedKey := apiKey.Serialize()
	return serializedKey, nil
}

go.mod file :

module test

go 1.16

require (
	github.com/btcsuite/btcutil v1.0.3-0.20201208143702-a53e38424cce
	storj.io/common v0.0.0-20210702123130-0f973652e4bb
	storj.io/uplink v1.4.6
)

Add more limit errors to public API

We have two limits that are not mapped to uplink errors:

  • Exceeded Storage Limit
  • Exceeded Segment Limit

We should add corresponding uplink errors:

  • ErrStorageLimitExceeded
  • ErrSegmentLimitExceeded

For the storage limit we also need to update the limit tests to expect the new error.
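
Once exposed, callers could classify these failures the same way as the existing limit error (a sketch assuming the proposed names; ErrBandwidthLimitExceeded is already part of the public API, the other two are the errors proposed in this issue):

import (
	"errors"

	"storj.io/uplink"
)

// classifyLimitError reports which limit was hit; the storage and segment
// variants are the errors proposed in this issue, not yet shipped.
func classifyLimitError(err error) string {
	switch {
	case errors.Is(err, uplink.ErrBandwidthLimitExceeded):
		return "bandwidth limit exceeded"
	case errors.Is(err, uplink.ErrStorageLimitExceeded): // proposed
		return "storage limit exceeded"
	case errors.Is(err, uplink.ErrSegmentLimitExceeded): // proposed
		return "segment limit exceeded"
	default:
		return "other error"
	}
}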

Go version 1.17.2 incompatibility

After upgrading Go from 1.16.x to 1.17.2, I'm now getting the following uplink library error:

# storj.io/uplink
..\..\..\go\pkg\mod\storj.io\uplink@<version>\bucket.go:128:14: invalid operation: existing == metaclient.Bucket{} (struct containing []byte cannot be compared)
..\..\..\go\pkg\mod\storj.io\uplink@<version>\bucket.go:153:14: invalid operation: existing == metaclient.Bucket{} (struct containing []byte cannot be compared)

I also noticed that this issue seems to exist in v1.6.0 as well.
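
For context on the compile error itself: a Go struct that contains a slice field is not comparable, so == against a zero-value struct stops compiling as soon as such a field is added. A self-contained illustration (hypothetical stand-in type, not the real metaclient.Bucket):

package main

import "fmt"

// bucket stands in for metaclient.Bucket; the []byte field makes it non-comparable.
type bucket struct {
	Name      string
	UserAgent []byte
}

// isZero avoids == by checking each field explicitly.
func isZero(b bucket) bool {
	return b.Name == "" && b.UserAgent == nil
}

func main() {
	var existing bucket
	// existing == bucket{} // compile error: struct containing []byte cannot be compared
	fmt.Println(isZero(existing))
}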

func (objects *ObjectIterator) Item() *Object returns nil.

Hi,
I've written the following code in Go:

list := project.ListObjects(ctx, bucket.Name, &cfg)
fmt.Println("Objects list:", list)
fmt.Println("Current Object", list.Item())

and get the following output:

Objects list: &{0xc0001f0f00 0xc0001b8420 {[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] influxtest04 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] {0 0 <nil>} 2 0 {0 0 0 0 0 0} {0 0}} {uploadPath01/testdbtxt/  0 false 2 0} {uploadPath01/testdbtxt/  false false false} <nil> 0 false <nil>}
Current Object <nil>

I don't understand: if I have one object inside the list, then why does the Item() function return nil?
Please look into this.
Thank you.
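
For reference, ObjectIterator is a cursor-style iterator: Item() returns the current entry only after a successful call to Next(), and returns nil before the first Next() call. A minimal sketch of the intended usage:

import (
	"context"
	"fmt"

	"storj.io/uplink"
)

// printObjects walks the listing; Next() must be called before Item().
func printObjects(ctx context.Context, project *uplink.Project, bucket string) error {
	objects := project.ListObjects(ctx, bucket, nil)
	for objects.Next() {
		fmt.Println("Current Object:", objects.Item().Key)
	}
	return objects.Err()
}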

Set default DialTimeout for Config

It turns out that uplink.Config's DialTimeout is currently 0 by default, which means there is no timeout at all. We need to decide whether we want to keep this behavior or not.

@egonelbre's idea was:

I think the default shouldn't be infinite wait. I'm fine with having a possibility to use -1 for infinite or time.Hour.
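
In the meantime, callers can set the timeout explicitly through the existing Config field (a minimal sketch):

import (
	"context"
	"time"

	"storj.io/uplink"
)

// openWithTimeout opens a project with an explicit dial timeout
// instead of relying on the zero-value (no timeout) default.
func openWithTimeout(ctx context.Context, access *uplink.Access) (*uplink.Project, error) {
	cfg := uplink.Config{DialTimeout: 10 * time.Second}
	return cfg.OpenProject(ctx, access)
}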

Add encrypted_metadata_encrypted_key to pb.ObjectListItem

While listing objects we are not returning encrypted_metadata_encrypted_key to uplink. Because of that, we need to use pb.StreamMeta on the uplink side to get the key and nonce for metadata. What we need to do:

  • extend pb.ObjectListItem (metainfo.proto) with encrypted_metadata_encrypted_key
  • update metainfo.ListObjects to include a value for encrypted_metadata_encrypted_key. The object's encrypted_metadata_encrypted_key takes priority over the value from pb.StreamMeta, so if both are present, use the object's encrypted_metadata_encrypted_key
  • update uplink to use pb.ObjectListItem.encrypted_metadata_encrypted_key
  • in addition to standard unit tests, we also need to verify that older uplinks still list objects correctly, so we need a backward-compatibility test; an example of such a test can be found in the last link below.

See:
https://github.com/storj/common/blob/34a5992b485615f0d4166fee1fc43d97ab5950ee/pb/metainfo.proto#L381
https://github.com/storj/storj/blob/99deec47b663e6549d4341a1ace2c65c4d6bd4b9/satellite/metainfo/metainfo.go#L2593

stream, streamMeta, err := TypedDecryptStreamInfo(ctx, pi.Bucket, unencKey,

https://github.com/storj/uplink/blob/main/testsuite/backcomp/old_uplink_test.go#L22

[Round 2] Add option to set new metadata to with CopyObject

By default, metadata from the original object is copied into the new object, but to be S3-compatible we need an option to set new metadata while creating a copy of the object (a sketch of the caller-side shape follows the list below).
We need to:

  • define where and how to expose this option (most probably CopyObjectOptions.Metadata)
  • integrate with the server side (we need to implement the server side first)
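
From the caller's side, the option might look like this (a sketch; the Metadata field is the proposal above, not a shipped API, so it is left as a comment):

import (
	"context"

	"storj.io/uplink"
)

// copyWithNewMetadata copies an object; the commented-out field shows where
// the proposed metadata override would plug in.
func copyWithNewMetadata(ctx context.Context, project *uplink.Project) error {
	_, err := project.CopyObject(ctx, "src-bucket", "src/key", "dst-bucket", "dst/key",
		&uplink.CopyObjectOptions{
			// Metadata: uplink.CustomMetadata{...}, // proposed: replace instead of copy
		})
	return err
}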

Improve documentation on error codes

Request from forum https://forum.storj.io/t/documentation-on-error-codes/16077

Each libuplink error code should be documented as a separate entity, detailing:
- the technical meaning of the error code,
- additional diagnostic information available (e.g. key or bucket name) and how to extract them from the error code,
- whether a given error code represents a transient problem (e.g. an I/O error), a configuration problem (e.g. lack of permissions), a user code bug (e.g. wrong arguments), or maybe a libuplink code bug (e.g. a failed assert).

Documentation of each API call available in libuplink should have a comprehensive list of error codes that an API call can return, detailing:
- conditions required for this API call to return a given error code,
- whether the error code is restartable for the given API call (ie. does it make sense to perform the same API call again),
- what state the objects on which the API call triggered the error are left in (e.g. is it still possible to perform other API calls on that object?).

Right now it looks like the current libuplink implementation makes it close to impossible to write high-reliability software with libuplink or to comply with the Toubon Law.

panic in share command

OS: Ubuntu 20.04 LTS, on WSL

Running the following command:

uplink share sj://data

Receiving this error:
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xa3dd66]

goroutine 1 [running]:
github.com/spacemonkeygo/monkit/v3.newSpan.func1(0xc00028f9c0)
	/go/pkg/mod/github.com/spacemonkeygo/monkit/v3@<version>/ctx.go:147 +0x3bb
panic(0xbc8c20, 0x1172ca0)
	/usr/local/go/src/runtime/panic.go:975 +0x47a
storj.io/monkit-jaeger.(*UDPCollector).Stop(0xc000254000)
	/go/pkg/mod/storj.io/monkit-jaeger@<version>/udp.go:181 +0x26
storj.io/private/process.cleanup.func1.3(0xc000254000, 0xc000212280, 0xc000211620, 0xc000345800, 0xc00028f9d0)
	/go/pkg/mod/storj.io/private@<version>/process/exec_conf.go:317 +0x45
storj.io/private/process.cleanup.func1(0xc0002eeb00, 0xc00028f9b0, 0x1, 0x1, 0x0, 0x0)
	/go/pkg/mod/storj.io/private@<version>/process/exec_conf.go:403 +0x1ac0
github.com/spf13/cobra.(*Command).execute(0xc0002eeb00, 0xc00028f980, 0x1, 0x1, 0xc0002eeb00, 0xc00028f980)
	/go/pkg/mod/github.com/spf13/cobra@<version>/command.go:842 +0x47c
github.com/spf13/cobra.(*Command).ExecuteC(0x117dbc0, 0xc000138101, 0xcde5c0, 0x1)
	/go/pkg/mod/github.com/spf13/cobra@<version>/command.go:950 +0x375
github.com/spf13/cobra.(*Command).Execute(...)
	/go/pkg/mod/github.com/spf13/cobra@<version>/command.go:887
storj.io/private/process.ExecWithCustomConfig(0x117dbc0, 0xffffff01, 0xcde5c0)
	/go/pkg/mod/storj.io/private@<version>/process/exec_conf.go:88 +0x178
main.main()
	/go/src/storj.io/storj/cmd/uplink/main.go:16 +0x3e

[Round 2] Handle copying object to the same location

One way to update metadata with S3 is doing a server-side copy to the same location with new metadata. We should support this case in uplink too, but we don't need to do a full server-side copy operation; instead we can use UpdateObjectMetadata on the uplink side when we detect that the source location equals the destination location (see the sketch after the acceptance criteria).

AC:

  • extend the CopyObject method to handle copying to the same location
  • use the UpdateObjectMetadata metainfo endpoint to set new metadata if the source and destination locations are the same and new metadata was set
  • add test cases
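
A minimal sketch of the intended client-side short-circuit (updateObjectMetadata and serverSideCopy are hypothetical stand-ins for the real metainfo operations, not library API):

import "context"

// copyObject routes a same-location copy with new metadata to a
// metadata-only update instead of a full server-side copy.
func copyObject(ctx context.Context, srcBucket, srcKey, dstBucket, dstKey string, newMetadata map[string]string) error {
	if srcBucket == dstBucket && srcKey == dstKey {
		if newMetadata != nil {
			// hypothetical stand-in for the UpdateObjectMetadata metainfo call
			return updateObjectMetadata(ctx, srcBucket, srcKey, newMetadata)
		}
		// a plain self-copy with no new metadata is left as-is here
		// (its behavior is not specified in this issue)
		return nil
	}
	// hypothetical stand-in for the real server-side copy
	return serverSideCopy(ctx, srcBucket, srcKey, dstBucket, dstKey, newMetadata)
}

// stubs standing in for the real metainfo operations (hypothetical)
func updateObjectMetadata(context.Context, string, string, map[string]string) error { return nil }
func serverSideCopy(context.Context, string, string, string, string, map[string]string) error {
	return nil
}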

Improve library documentation

Jeff:
so a thing i noticed from doing a lot of homework review is that users of the library tend to seem confused about when to parse/create accesses, open projects, etc. and how long they should be stored. for example, a common thing i've seen is, for every request, parsing the access, opening the project, doing the operation, and closing the project. sometimes this even includes doing the key derivation from the api key and password. maybe we should clarify this in the docs?

Fadila:
And maybe also explain more specifically how to set expiration time and mention that it won't be accessible after that time (also noticed during homework reviews that it does not seem clear).

Older objects are not listed after move

Metadata for an object is encrypted. To decrypt it we need to have the key and nonce. Currently, we keep them in the objects table as separate columns:

  • encrypted metadata
  • encrypted metadata encrypted key
  • encrypted metadata nonce

This state is relatively new, because in the past we had only encrypted metadata, which is a serialized StreamMeta with a SegmentMeta inside it. Back then, SegmentMeta kept the key and nonce for the object metadata.

The bug was introduced while implementing the move operation on the uplink side, because for old objects we failed to adjust the encrypted key and nonce from StreamMeta.last_segment_meta to the new object key. To fix it we need to:

  • during the move operation on the client side, check whether the object contains StreamMeta.last_segment_meta (we need to unmarshal the first level of the encrypted metadata)
  • if it contains a key and nonce (and the object itself doesn't contain them), adjust them to the new object key
  • set them directly as NewEncryptedObjectKey and NewEncryptedMetadataKeyNonce on the FinishMoveObject request

Most probably it would be good, later, to change the satellite to remove the redundant key and nonce from StreamMeta when FinishMoveObject is executed on the satellite side.

Move operation

uplink/move.go

Line 24 in 4da201e

func (project *Project) MoveObject(ctx context.Context, oldbucket, oldkey, newbucket, newkey string, options *MoveObjectOptions) (err error) {

Protobuf https://github.com/storj/common/blob/c627e971052285ede7055907453d9fd4c1f96169/pb/streams.proto#L25

Issue found thanks to storj/storj#4286

Connection leak in closeAndTryFetchError and client.closeWithError(nil)

Setup
I have played around with the uplink code and inserted 2 lines into the download code. One of the lines was closeAndTryFetchError followed by a return statement. There are several other similar blocks in the code.

// send an order
if newAllocation > 0 {
	order, err := signing.SignUplinkOrder(ctx, client.privateKey, &pb.Order{
		SerialNumber: client.limit.SerialNumber,
		Amount:       client.allocated + newAllocation,
	})
	if err != nil {
		// we are signing so we shouldn't propagate this into close,
		// however we should include this as a read error
		client.unread.IncludeError(err)
		client.closeWithError(nil)
		return read, nil
	}
	err = client.stream.Send(&pb.PieceDownloadRequest{
		Order: order,
	})
	if err != nil {
		// other side doesn't want to talk to us anymore or network went down
		client.unread.IncludeError(err)
		// if it's a cancellation, then we'll just close with context.Canceled
		if errs2.IsCanceled(err) {
			client.closeWithError(err)
			return read, err
		}
		// otherwise, something else happened and we should try to ask the other side
		client.closeAndTryFetchError()
		return read, nil
	}
	// update our allocation step
	client.allocated += newAllocation
	client.allocationStep = client.client.nextAllocationStep(client.allocationStep)
}

Expected Behavior
I would expect that it terminates the connection between uplink and the storage node.

Actual Behavior
lsof -p shows that the number of open connections keeps increasing as long as my uplink process hasn't terminated. If I replace closeAndTryFetchError with client.closeWithError(nil), it still shows the same issue. To me, it looks like both functions are not closing the connection. For uplink itself that shouldn't matter, but for any third-party app using our library this would become a problem over time.

Add CopyObject method to uplink API

The initial version of the new method signature should look like this

func (project *Project) CopyObject(ctx context.Context, oldbucket, oldkey, newbucket, newkey string, options *MoveObjectOptions) (object *uplink.Object, err error)

We still need to publicly discuss this new method in the uplink API, but this initial draft should be accurate enough to use for writing tests.

Filtering metadata doesn't work well with more than one listing object page

Listing objects uses an iterator backed by the metainfo ListObjects endpoint. This endpoint returns results in pages (1000 entries at a time). Recently we added support for filtering system and custom metadata on the satellite side. Now we have a bug: the filtering options (system/custom) are applied only to the first listed page. This worked in the past because filtering was done on the client side and we always got all the data from the satellite.
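
The options in question are set once on the iterator, so they need to be carried into every page request, not just the first. A sketch of a listing call that crosses the page boundary and triggers the bug:

import (
	"context"

	"storj.io/uplink"
)

// listWithMetadata asks the satellite to include system and custom metadata.
// With more than 1000 results, entries on pages after the first currently
// come back without the requested metadata (the bug described above).
func listWithMetadata(ctx context.Context, project *uplink.Project, bucket string) error {
	it := project.ListObjects(ctx, bucket, &uplink.ListObjectsOptions{
		System: true,
		Custom: true,
	})
	for it.Next() {
		_ = it.Item() // Item().System / Item().Custom should be populated on every page
	}
	return it.Err()
}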

Broken access grant if restricted again without path

If I restrict an access grant with a bucket restriction, and then restrict it again without the bucket restriction, it doesn't work any more.

So this one works:

"caveats": [
  {
    "allowed_paths": [
      {
        "bucket": "orbiter"
      }
    ],
    "nonce": "i/YSMg=="
  },
  {
    "allowed_paths": [
      {
        "bucket": "orbiter"
      }
    ],
    "nonce": "8I36Ew=="
  }
],

But this one doesn't:

"caveats": [
  {
    "allowed_paths": [
      {
        "bucket": "orbiter"
      }
    ],
    "nonce": "i/YSMg=="
  },
  {
    "allowed_paths": [
      {
        "bucket": "orbiter"
      }
    ],
    "nonce": "8I36Ew=="
  },
  {
    "nonce": "cL1+Bg=="
  }
],

It will say I don't have permission to upload to or download from the bucket.
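
The equivalent restriction sequence through the library looks roughly like this (a sketch; the bucket name comes from the caveats above):

import "storj.io/uplink"

// restrictTwice reproduces the failing sequence: restrict to a bucket,
// then restrict again with no path restriction at all.
func restrictTwice(access *uplink.Access) (*uplink.Access, error) {
	restricted, err := access.Share(uplink.FullPermission(),
		uplink.SharePrefix{Bucket: "orbiter"})
	if err != nil {
		return nil, err
	}
	// the second Share call without any SharePrefix adds the caveat
	// that has no allowed_paths, which breaks the grant
	return restricted.Share(uplink.FullPermission())
}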

Issue with UpdateObjectMetadata

During a multipart upload of an object, uplink can update the object metadata. If we add EncryptedMetadata arguments to CommitObject, they will need to account for being optional, leading to scenarios where uplink calls update metadata but then wants to clear it during commit object.

Files created using Uplink are not visible in Storj GUI

1. I was trying to upload some pictures to my bucket in Storj using the Uplink CLI.

Screenshot from 2021-07-26 11-14-55

2. After uploading with the uplink CLI, they are not visible in my Storj GUI.

Screenshot from 2021-07-26 11-16-13

3. Whereas they are visible in the uplink CLI.

Screenshot from 2021-07-26 11-15-28

----Update----

When I was trying to delete that bucket using the Storj GUI, it gave me a warning that something is in that bucket and it couldn't delete it.
That means the files are in that bucket but not visible in Storj.

Screenshot from 2021-07-26 11-34-26

Getting error in uplink internal go files while using the uplink package in a go program.

Hi,
I've created a package, storj.go, that imports the package "storj.io/uplink" at version v1.0.0. The storj package is used in a file that creates an interface to transfer the contents of one platform to the Storj storage platform.

When I run the build command on my main program, I get the following error:

# storj.io/uplink/private/metainfo

..\storj.io\uplink\private\metainfo\client.go:361:3: cannot use partnerID (type "github.com/skyrings/skyring-common/tools/uuid".UUID) as type "storj.io/common/uuid".UUID in field value

I then replaced the imported package "github.com/skyrings/skyring-common/tools/uuid" with "storj.io/common/uuid", ran the build command again, and received the following error:

# storj.io/uplink

..\storj.io\uplink\upload.go:151:3: unknown field 'ContentLength' in struct literal of type pb.SerializableMeta

I don't think making changes in the internals would solve the problem. I request you to kindly address this issue and help me sort it out.

Verify which errors we should expose in API in first place

The gateway is getting a lot of errors from uplink that have no corresponding type, and it's hard to map them without one. This is just a list I quickly collected as a starting point. We should figure out which errors we should handle better in the first place.

uplink: stream: metaclient: metabase: pending object missing
uplink: metainfo: invalid direction 0
uplink: metaclient: metabase: conflict: object already exists
uplink: stream: metaclient: object not found: metabase: object with specified version and pending status is missing
uplink: stream: metaclient: signature verification: signature is not valid
uplink: stream: metaclient: eestream: requires 1 <= k <= n <= 256
uplink: stream: ecclient: successful puts (79) less than success threshold (80)
uplink: stream: ecclient: successful puts (1) less than or equal to repair threshold (35)
uplink: object not found: segment not found: segment missing
uplink: missing encryption base: "minisolong"/"store-entrance.mp4"
uplink: bucket: metaclient: cannot delete the bucket because it's being used by another process
uplink: stream: metaclient: metabase: unable to insert segment: ERROR: result is ambiguous (error=rpc error: code = Unavailable desc = transport is closing [exhausted]) (SQLSTATE 40003)
uplink: metainfo: metaclient: ERROR: no inbound stream connection (SQLSTATE XXUUU)
uplink: metaclient: metainfo: bucket name must contain only lowercase letters, numbers or hyphens
uplink: metaclient: metabase: size of part number 3 is below minimum threshold, got: 100.0 KiB, min: 5.0 MiB

Remove unused fields from metaclient.CreateBucketParams

Remove unused fields from metaclient.CreateBucketParams.

	// TODO remove those values when satellite will be adjusted
	PathCipher                  storj.CipherSuite
	PartnerID                   []byte
	DefaultSegmentsSize         int64
	DefaultRedundancyScheme     storj.RedundancyScheme
	DefaultEncryptionParameters storj.EncryptionParameters
