
strato's Introduction

multi-cloud


Introduction

The SODA Multi-cloud project provides cloud-vendor-agnostic data management for hybrid cloud, intercloud, or intracloud environments. It exposes an S3-compatible interface and can be hosted on-premises or cloud native.

It provides an S3-compatible backend manager to connect with any cloud vendor. It supports various cloud backends such as MS Azure, GCP, AWS, Huawei, IBM, and more. It also supports a Ceph backend to enable on-premises deployments. We have also integrated some optimizations, as well as the YIG-Ceph backend from the China Unicom YIG project.

Currently it provides Object Storage, and we are working to support file and block services from the cloud vendors.

This is one of the SODA Core Projects and is maintained directly by the SODA Foundation. The multi-cloud project has been renamed STRATO.
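Because the interface is S3-compatible, any stock S3 SDK or tool should be able to talk to the gateway. A minimal sketch using the AWS SDK for Go; the endpoint address and credentials here are placeholders for illustration, not real project defaults:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// Point a standard S3 client at the STRATO gateway.
	sess, err := session.NewSession(&aws.Config{
		Region:           aws.String("us-east-1"),
		Endpoint:         aws.String("http://127.0.0.1:8090"), // hypothetical gateway address
		Credentials:      credentials.NewStaticCredentials("ACCESS_KEY", "SECRET_KEY", ""),
		S3ForcePathStyle: aws.Bool(true), // path-style addressing suits a self-hosted endpoint
	})
	if err != nil {
		log.Fatal(err)
	}

	svc := s3.New(sess)
	out, err := svc.ListBuckets(&s3.ListBucketsInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, b := range out.Buckets {
		fmt.Println(aws.StringValue(b.Name))
	}
}
```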

Documentation

https://docs.sodafoundation.io

Quick Start - To Use/Experience

https://docs.sodafoundation.io

Quick Start - To Develop

https://docs.sodafoundation.io

Latest Releases

https://github.com/sodafoundation/multi-cloud/releases

Support and Issues

https://github.com/sodafoundation/multi-cloud/issues

Project Community

https://sodafoundation.io/slack/

How to contribute to this project?

Join https://sodafoundation.io/slack/ and share your interest in the ‘general’ channel

Check out https://github.com/sodafoundation/multi-cloud/issues for issues labelled 'good first issue', 'help needed', 'help wanted', 'StartMyContribution', or 'SMC'

Project Roadmap

We envision building a SODA Distributed Data Store (SODA DDS) that can support File, Block, and Object across Edge, On-prem, and Cloud. We are exploring transforming or integrating SODA Multi-cloud with other SODA projects to build the SODA DDS.

https://docs.sodafoundation.io

Join SODA Foundation

Website : https://sodafoundation.io

Slack : https://sodafoundation.io/slack/

Twitter : @sodafoundation

Mailinglist : https://lists.sodafoundation.io

strato's People

Contributors

anvithks, click2cloud-alpha, click2cloud-gamma, click2cloud-hebe, click2cloud-pallas, click2cloud-rninja, devanshjain7, himanshuvar, hopkings2008, josephjacobmorris, kumarashit, leonwanghui, lijuncloud, liuqinguestc, najmudheenct, nguptaopensds, pravinranjan10, rajat-soda, rhsakarpos, sfzeng, skdwriting, subi9, sunfch, sushanthakumar, thisisclark, vineela1999, wbhanshy, wisererik, xing-yang, xxwjj


strato's Issues

Confused about in-cloud transition rule

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

My test cases were:

  1. There is a bucket named "bkt1" in which some of the objects come from the "aws1" backend (the default backend for bkt1) and the others come from the "azure1" backend.
  2. I created an in-cloud transition rule for all objects of bkt1.

What happened:

Objects from azure1 were transitioned to aws1 when the rule took effect.

What you expected to happen:

Objects from azure1 should not be transitioned to aws1 when the rule takes effect.
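A hedged sketch of the expected selection logic, using a hypothetical metadata type: an in-cloud rule should only pick up objects whose current backend matches the rule's backend, leaving objects on other backends alone.

```go
package lifecycle

// ObjectMeta is a hypothetical, trimmed-down metadata record for illustration.
type ObjectMeta struct {
	Key     string
	Backend string // backend the object currently lives on, e.g. "aws1" or "azure1"
}

// selectForInCloudTransition keeps only objects already stored on the rule's
// backend; objects on other backends (e.g. azure1) are left untouched.
func selectForInCloudTransition(objs []ObjectMeta, ruleBackend string) []ObjectMeta {
	var selected []ObjectMeta
	for _, o := range objs {
		if o.Backend == ruleBackend {
			selected = append(selected, o)
		}
	}
	return selected
}
```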

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Gelato(release/branch) version: development branch
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Multi-tenant is not supported.

Is this a BUG REPORT or FEATURE REQUEST?:

Multi-tenant is not supported.

/kind bug

What happened:

  1. Log in as one user and create a bucket.
  2. Log in as another user belonging to a different tenant; the bucket created in step 1 is still visible.

What you expected to happen:
A user should not be able to see buckets that belong to another tenant.

How to reproduce it (as minimally and precisely as possible):
Follow the steps listed above.
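Assuming bucket metadata lives in MongoDB (the collection and field names below are assumptions for illustration), one way to enforce isolation is to scope the query by tenant at the database level rather than filtering afterwards:

```go
package tenancy

import (
	mgo "gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

// Bucket is a trimmed, hypothetical metadata record.
type Bucket struct {
	Name     string `bson:"name"`
	TenantID string `bson:"tenantId"`
}

// ListBuckets scopes the query to the caller's tenant, so users in other
// tenants never see these buckets.
func ListBuckets(s *mgo.Session, tenantID string) ([]Bucket, error) {
	var out []Bucket
	err := s.DB("metadatastore").C("buckets").
		Find(bson.M{"tenantId": tenantID}).All(&out)
	return out, err
}
```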

Anything else we need to know?:

Environment:

  • Gelato(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Some conflicts appeared when I set two rules, a cross-cloud transition and an in-cloud transition

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

What happened:

Some conflicts appeared when I set two rules, a cross-cloud transition and an in-cloud transition:

  1. First I applied a cross-cloud transition rule to the objects of aws_1; all objects were transitioned from aws_1 to aws_2. Then I created a new in-cloud rule and disabled the cross-cloud rule, as shown below:

[screenshot: Selection_009]

  2. It's weird that all the objects of aws_1 (Standard) were not transitioned to aws_1 (Standard-IA).
    [screenshot: Selection_008]

What you expected to happen:

The in-cloud transition rule should be applied only within aws_1, but that didn't happen.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Gelato(release/branch) version: development branch
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Put object/Put object range - Azure

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug
/kind feature

What happened:

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Gelato(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Error "The specified backend does not exists." while creating bucket on a storage backend

The steps to reproduce this error are as follows:

  1. I have installed all the required dependencies on Ubuntu 16.04 as mentioned in the installation and testing document.
  2. When I run the curl command to register AWS S3 as a storage backend, the name parameter (the AWS S3 bucket name) refers to a bucket that already exists on AWS. The command returns a successful response even if the access key or security key provided is invalid.
  3. While creating a bucket on the storage backend, I get the error {"code":"404","message":"The specified backend does not exists."}.
    Please help me with this error. Thanks in advance.
    A screenshot is also provided to show the error.

[screenshot: capture]

API crashed when I upload an object

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

What happened:
I tried to upload an object larger than 16MB, and the API service crashed:
[screenshots: dashboard, log]

What you expected to happen:
The API service should keep running normally.
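The 16MB threshold is suggestive of a transport-level message-size cap rather than a storage failure; that is only a guess from the number. If the services talk over gRPC, a sketch of raising the caps on both sides (the 64 MiB limit is illustrative):

```go
package rpc

import "google.golang.org/grpc"

// 64 MiB; illustrative, just above the payload sizes that crash today.
const maxMsgSize = 64 << 20

// newServer raises the gRPC receive/send caps, assuming (unconfirmed) that
// the crash comes from a message-size limit in the RPC layer.
func newServer() *grpc.Server {
	return grpc.NewServer(
		grpc.MaxRecvMsgSize(maxMsgSize),
		grpc.MaxSendMsgSize(maxMsgSize),
	)
}

// dial applies matching limits on the client side.
func dial(addr string) (*grpc.ClientConn, error) {
	return grpc.Dial(addr,
		grpc.WithInsecure(),
		grpc.WithDefaultCallOptions(
			grpc.MaxCallRecvMsgSize(maxMsgSize),
			grpc.MaxCallSendMsgSize(maxMsgSize),
		),
	)
}
```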
How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Gelato(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Issue while doing multipart upload on GCP (Google Cloud) [Just Verify]

Test scenario: migrating a large amount of data from AWS to GCP
File size: 1.0 GB
File name: CentOs6.vhd
Error log:
[screenshot: ss]

Reason: GCP doesn't provide multipart upload.
Blocking: we could achieve this with GCP parallel upload, but the OpenSDS multi-cloud common controller architecture supports the multipart API, not the composite and parallel upload proposed by GCP.
Limitation: GCP has limitations on multipart upload.

Reference link: https://stackoverflow.com/questions/27830432/google-cloud-storage-support-of-s3-multipart-upload
https://cloud.google.com/storage/docs/composite-objects

Suggested solution: to resolve this, we would need your suggestion on how to manage GCP's parallel and composite upload within the OpenSDS multi-cloud controller architecture.
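For reference, a hedged sketch of how GCP's composite-object API could emulate multipart completion: upload parts as separate objects in parallel, then stitch them together with compose. The part-naming scheme here is hypothetical; the compose call itself is the Go client's ComposerFrom.

```go
package gcp

import (
	"context"

	"cloud.google.com/go/storage"
)

// composeParts stitches independently uploaded part objects into one final
// object using GCP's compose API. A sketch only: part naming, cleanup of the
// intermediate objects, and error handling are simplified.
func composeParts(ctx context.Context, client *storage.Client, bucket, dst string, parts []string) error {
	b := client.Bucket(bucket)
	srcs := make([]*storage.ObjectHandle, 0, len(parts))
	for _, p := range parts {
		srcs = append(srcs, b.Object(p))
	}
	// GCP caps a single compose call at 32 sources; more parts would require
	// composing recursively in batches.
	_, err := b.Object(dst).ComposerFrom(srcs...).Run(ctx)
	return err
}
```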

Issues about lifecycle

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug
/kind feature

What happened:
When we click the disable button, it doesn't work.
When we create both transition and delete rules, the transition rules work but the delete rules don't.
What you expected to happen:
When we click the disable button, the rule is disabled.
When we create both transition and delete rules, the delete rules also work.
How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Gelato(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Put object/Put object range - HWS

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug
/kind feature

What happened:

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Gelato(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

DataLifecycle: CreateBucket lifecycle always keeps the latest rule and removes the others

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

What happened:

  1. Create a bucket lifecycle config rule.
  2. Create a second rule.
  3. Only the second (latest) rule exists.

What you expected to happen:
All created rules should be maintained.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Gelato(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Log config file cannot be read

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

/kind feature

What happened:
The log config file cannot be read, so the log configuration cannot be changed by modifying it.

What you expected to happen:
The log configuration can be changed by modifying the log config file.

How to reproduce it (as minimally and precisely as possible):
Change the configuration in the log config file and check the logs to see whether the modified configuration took effect.

Anything else we need to know?:

Environment:

  • Gelato(release/branch) version: latest
  • OS (e.g. from /etc/os-release): Ubuntu 16.04
  • Kernel (e.g. uname -a): Linux ubuntu 4.4.0-62-generic
  • Install tools:
  • Others:

Put object/Put object range - AWS

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug
/kind feature

What happened:

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Gelato(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Should not display sensitive information in the list or show backend APIs

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

The access key and security key are returned by the list and show backend APIs.

What you expected to happen:

Sensitive information should not be displayed by the list or show backend APIs.
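A minimal sketch of the fix: blank the credential fields before the record is serialized into the API response. The response type and its field names are assumptions for illustration, not the project's actual proto definitions.

```go
package backend

// BackendDetail is a hypothetical response shape; field names are illustrative.
type BackendDetail struct {
	Name     string
	Type     string
	Region   string
	Access   string // access key
	Security string // secret key
}

// sanitize blanks the credential fields before the record is written to the
// list/show API response.
func sanitize(b *BackendDetail) {
	b.Access = ""
	b.Security = ""
}
```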

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Gelato(release/branch) version: v0.5.3
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

DataLifeCycle: initializing StorageClass for user-defined storage is wrong

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug
What happened:
In service.go, the initStorageClass() function loads the storage classes and transitions based on the environment variable USE_DEFAULT_STORAGE_CLASS.
If the value of this variable is positive, it loads the default storage classes and transitions; otherwise it should load the user-defined ones.
But if the value is zero or negative, it loads the default transitions instead of the user-defined transitions.
What you expected to happen:
If the value is less than 1, it should load the user-defined transitions.
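A minimal sketch of the intended branch, with hypothetical stubs standing in for the real loaders in service.go (the handling of an unset or unparsable value is an assumption):

```go
package lifecycle

import (
	"os"
	"strconv"
)

// Stubs standing in for the real loaders in service.go; illustration only.
func loadDefaultStorageClass()     {}
func loadDefaultTransition()       {}
func loadUserDefinedStorageClass() {}
func loadUserDefinedTransition()   {}

// initStorageClass sketches the intended branch: any value below 1 (or, by
// assumption, an unparsable value falling back to defaults) selects the
// matching storage classes and transitions together.
func initStorageClass() {
	v, err := strconv.Atoi(os.Getenv("USE_DEFAULT_STORAGE_CLASS"))
	if err != nil || v >= 1 {
		loadDefaultStorageClass()
		loadDefaultTransition()
	} else {
		loadUserDefinedStorageClass()
		loadUserDefinedTransition() // the reported bug loads the defaults here instead
	}
}
```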
How to reproduce it (as minimally and precisely as possible):
[screenshot: loadDefaultStorage]

Anything else we need to know?:

Environment:

  • Gelato(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Administrator cannot get all resources

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind feature

What happened:
The administrator cannot get resources that belong to other tenants.

What you expected to happen:
The administrator can get all resources, no matter which tenant they belong to.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:
This one is created to trace the issue mentioned in https://github.com/opensds/multi-cloud/pull/176

Environment:

  • Gelato(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Issues with tier99 and tier999 support for GCP

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:
/kind feature

What happened:
We do not support tier99 and tier999 for GCP in lifecycle now, but this is not restricted when creating lifecycle rules.
What you expected to happen:
Do not let the user choose the GCP backend if the target tier is tier99 or tier999 when creating a lifecycle rule.
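A sketch of the validation this asks for, with hypothetical tier constants and rule shape (the backend-type string and field names are assumptions):

```go
package lifecycle

import "fmt"

// Hypothetical tier constants and rule shape, for illustration.
const (
	Tier99  int32 = 99
	Tier999 int32 = 999
)

type TransitionAction struct {
	Backend string // backend type, e.g. "gcp"
	Tier    int32
}

// validateTransition rejects lifecycle rules that target tiers the GCP
// backend cannot serve, so such rules can never be created.
func validateTransition(a TransitionAction) error {
	if a.Backend == "gcp" && (a.Tier == Tier99 || a.Tier == Tier999) {
		return fmt.Errorf("backend %q does not support tier %d", a.Backend, a.Tier)
	}
	return nil
}
```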
How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Gelato(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

DataLifeCycle: loading the Ceph default values is wrong

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

What happened:
In the code, the default values for Ceph are loaded incorrectly:

```go
func loadCephDefault(i2e *map[string]*Int2String, e2i *map[string]*String2Int) {
	t2n := make(Int2String)
	t2n[Tier1] = CEPH_STANDARD
	(*i2e)[OSTYPE_CEPTH] = &t2n

	n2t := make(String2Int)
	n2t[CEPH_STANDARD] = Tier1
	(*e2i)[OSTYPE_OBS] = &n2t // bug: indexes the OBS map instead of the Ceph one
}
```

Here OSTYPE_OBS is used instead of OSTYPE_CEPTH.
What you expected to happen:
```go
(*e2i)[OSTYPE_CEPTH] = &n2t
```
How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Gelato(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Put object/Put object range - YIG

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug
/kind feature

What happened:

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Gelato(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Some information is confusing when registering a backend

When we register a backend, we need to input some information, such as region, but it seems that not all of this information is necessary for every backend type; for example, region is not needed for Ceph. Can we determine which information is necessary for each backend type and have users input only that? Otherwise it is confusing.
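One way to express this is a per-type required-field table that the registration handler validates against. Both the backend-type names and the field lists below are assumptions for illustration:

```go
package backend

import "fmt"

// requiredFields maps each backend type to the inputs that actually matter
// for it; the type names and field lists are illustrative assumptions.
var requiredFields = map[string][]string{
	"aws-s3":     {"region", "endpoint", "access", "security", "bucket"},
	"azure-blob": {"endpoint", "access", "security", "bucket"},
	"ceph-s3":    {"endpoint", "access", "security", "bucket"}, // no region needed
}

// validateInput rejects registration requests missing a field the chosen
// backend type requires, and ignores fields it does not use.
func validateInput(backendType string, input map[string]string) error {
	fields, ok := requiredFields[backendType]
	if !ok {
		return fmt.Errorf("unknown backend type %q", backendType)
	}
	for _, f := range fields {
		if input[f] == "" {
			return fmt.Errorf("%s is required for backend type %s", f, backendType)
		}
	}
	return nil
}
```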

PUT lifecycle API does not update/modify the rule

Is this a BUG REPORT or FEATURE REQUEST?:

/kind feature

What happened:
PUT bucket lifecycle always adds a new rule, even if the request is to update an existing one.
What you expected to happen:
The PUT bucket lifecycle API should modify the existing rule and update the database.
How to reproduce it (as minimally and precisely as possible):
1. Create one rule using the dashboard.
2. Try to edit the same rule, giving a different value to a field.
3. It will add another rule to the configuration.
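A sketch of the upsert behavior the expected outcome describes, keyed by rule ID (the rule type is a hypothetical stand-in for the real lifecycle structure):

```go
package lifecycle

// Rule is a hypothetical lifecycle rule keyed by ID, for illustration.
type Rule struct {
	ID string
	// ... remaining lifecycle fields elided
}

// upsertRule replaces an existing rule with the same ID, or appends the rule
// if no match exists, so PUT updates instead of always adding.
func upsertRule(rules []Rule, r Rule) []Rule {
	for i := range rules {
		if rules[i].ID == r.ID {
			rules[i] = r
			return rules
		}
	}
	return append(rules, r)
}
```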
Anything else we need to know?:

Environment:

  • Gelato(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Error is not processed when getting the bucket fails in api/pkg/s3/bucketlifecycledelete.go

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:
The error is not processed when getting the bucket fails in api/pkg/s3/bucketlifecycledelete.go.
What you expected to happen:
The error is processed and a corresponding error message is returned to the end user.
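A self-contained sketch of the missing check, with hypothetical helpers standing in for the real handler code: surface the lookup error to the caller instead of silently continuing with a nil bucket.

```go
package s3

import (
	"errors"
	"fmt"
)

// Bucket and getBucket are hypothetical stand-ins for the real metadata
// lookup, for illustration.
type Bucket struct{ Name string }

func getBucket(name string) (*Bucket, error) {
	return nil, errors.New("bucket not found")
}

// deleteBucketLifecycle shows the missing step: propagate the lookup error
// so the API layer can translate it into an error response.
func deleteBucketLifecycle(name string) error {
	bucket, err := getBucket(name)
	if err != nil {
		return fmt.Errorf("get bucket %s failed: %w", name, err)
	}
	_ = bucket // ... delete the lifecycle rules from the bucket metadata here
	return nil
}
```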
How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Gelato(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Optimization suggestion for migration

Based on the current implementation, no matter how many objects need to be migrated, all object metadata is loaded from the database at once, which may put pressure on memory, so I suggest loading it in batches.
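A minimal sketch of the batching suggestion, with a hypothetical metadata-store interface; the real migration code would page via its database driver rather than this interface:

```go
package migration

// ObjectMeta and MetaStore are hypothetical, for illustration.
type ObjectMeta struct{ Key string }

type MetaStore interface {
	// ListObjects returns up to limit metadata records starting at offset.
	ListObjects(bucket string, offset, limit int) ([]ObjectMeta, error)
}

// forEachObject pages through the metadata in fixed-size batches instead of
// loading every record into memory at once.
func forEachObject(store MetaStore, bucket string, batch int, fn func(ObjectMeta) error) error {
	for offset := 0; ; offset += batch {
		objs, err := store.ListObjects(bucket, offset, batch)
		if err != nil {
			return err
		}
		for _, o := range objs {
			if err := fn(o); err != nil {
				return err
			}
		}
		if len(objs) < batch {
			return nil // short page means we reached the end
		}
	}
}
```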

Bugs about the delete function

There are some bugs about the delete function:

  1. We can't delete a backend while some buckets are still on the backend.
  2. We can't delete a backend while some objects are still related to the backend.
  3. We can't delete a bucket while some objects are still in the bucket.
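Reading these as constraints that should hold before deletion (an interpretation, since the report is terse), a minimal guard sketch with a hypothetical counting interface:

```go
package backend

import "fmt"

// Store is a hypothetical counting interface, for illustration.
type Store interface {
	CountBuckets(backend string) (int, error)
	CountObjects(bucket string) (int, error)
}

// canDeleteBackend refuses deletion while buckets still reference the backend.
func canDeleteBackend(s Store, backend string) error {
	n, err := s.CountBuckets(backend)
	if err != nil {
		return err
	}
	if n > 0 {
		return fmt.Errorf("backend %s still has %d bucket(s)", backend, n)
	}
	return nil
}

// canDeleteBucket refuses deletion while objects remain in the bucket.
func canDeleteBucket(s Store, bucket string) error {
	n, err := s.CountObjects(bucket)
	if err != nil {
		return err
	}
	if n > 0 {
		return fmt.Errorf("bucket %s still has %d object(s)", bucket, n)
	}
	return nil
}
```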

The directory name api/pkg/s3/datastore/hws is confusing

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

What happened:
The directory name api/pkg/s3/datastore/hws is confusing.
What you expected to happen:
Change it to a more explicit name.
How to reproduce it (as minimally and precisely as possible):
NA

Anything else we need to know?:

Environment:

  • Gelato(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Put object/Put object range - GCP

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug
/kind feature

What happened:

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Gelato(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:
