apache / shardingsphere-on-cloud
A collection of tools and best practices to take ShardingSphere into the cloud
License: Apache License 2.0
Dear Apache ShardingSphere Community,
Shardingsphere-on-cloud is a very good project for better deployment of ShardingSphere-Proxy.
We may now need dynamic, one-click deployment support for different types of databases,
because ShardingSphere's SPI is designed to do just that.
Perhaps we can use this property to pull drivers dynamically:
In the ShardingSphere-Proxy server.yaml:
proxy-frontend-database-protocol-type: OpenGauss # MySQL, Oracle
Looking forward to seeing this feature~
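As a sketch of the idea, the property might sit in server.yaml like this (the placement under props: follows typical ShardingSphere-Proxy configuration and should be treated as an assumption):

```yaml
# Hypothetical server.yaml fragment for ShardingSphere-Proxy.
# proxy-frontend-database-protocol-type selects the wire protocol the proxy
# speaks to clients; the deployment tooling could inspect this value and
# pull the matching driver dynamically via SPI, as proposed above.
props:
  proxy-frontend-database-protocol-type: OpenGauss  # or MySQL, Oracle
```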
Execution of make build under ./shardingsphere-operator failed with the error messages below:
/goprojects/shardingsphere-on-cloud/shardingsphere-operator/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
open hack/boilerplate.go.txt: no such file or directory
Error: not all generators ran successfully
run `controller-gen object:headerFile=hack/boilerplate.go.txt paths=./... -w` to see all available markers, or `controller-gen object:headerFile=hack/boilerplate.go.txt paths=./... -h` for usage
make: *** [generate] Error 1
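One likely workaround, assuming the failure is simply that hack/boilerplate.go.txt is missing from the checkout (the header text below is an illustrative Apache license header, not necessarily the project's canonical one):

```shell
# Run from ./shardingsphere-operator, the directory where make invoked
# controller-gen. controller-gen only needs the file to exist; it prepends
# the file's contents as a license header to generated files.
mkdir -p hack
cat > hack/boilerplate.go.txt <<'EOF'
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 */
EOF
```

After recreating the file, rerunning make build should get past the generate step.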
Hi community,
This issue is for #73
Add more cases for ConstructCascadingDeployment to improve unit test coverage.
The cases should be added to https://github.com/apache/shardingsphere-on-cloud/blob/main/shardingsphere-operator/pkg/reconcile/reconcile_test.go.
Issues:
Hi community,
This issue is for #73
Add more cases for ToYAML to improve unit test coverage.
When I deploy the apache-shardingsphere-proxy-charts to EKS, it overrides the contents of the /opt/shardingsphere-proxy/conf directory.
The deployment.yaml file mounts that directory, and somehow it is mounted with the default server.yaml generated from the configuration in values.yaml.
Is there any way to retain the custom conf files that I have already packaged in the Docker image used for the deployment?
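As a sketch of one possible workaround, assuming the chart renders server.yaml from values.yaml into a ConfigMap mounted over /opt/shardingsphere-proxy/conf (which the mount behavior described above suggests), custom files could be projected alongside the generated mount instead of relying on files baked into the image. All names below are hypothetical:

```yaml
# Illustrative extra mount in the proxy Deployment: project a custom config
# file into the conf directory via subPath so it is not shadowed by the
# chart's generated ConfigMap mount over the whole directory.
volumeMounts:
  - name: custom-conf
    mountPath: /opt/shardingsphere-proxy/conf/config-sharding.yaml
    subPath: config-sharding.yaml
volumes:
  - name: custom-conf
    configMap:
      name: my-proxy-custom-conf   # hypothetical ConfigMap holding your files
```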
{
"clientVersion": {
"major": "1",
"minor": "26",
"gitVersion": "v1.26.0",
"gitCommit": "b46a3f887ca979b1a5d14fd39cb1af43e7e5d12d",
"gitTreeState": "clean",
"buildDate": "2022-12-08T19:51:43Z",
"goVersion": "go1.19.4",
"compiler": "gc",
"platform": "darwin/arm64"
},
"kustomizeVersion": "v4.5.7"
}
Currently we have the apache-shardingsphere-operator-charts and apache-shardingsphere-operator-cluster-charts charts, but both names are confusing.
Following the naming of the rabbitmq-cluster-operator and grafana-operator charts, we plan to merge apache-shardingsphere-operator-charts and apache-shardingsphere-operator-cluster-charts; the new chart will be named apache-shardingsphere-cluster-operator-charts. The new chart will install both shardingsphere-operator and shardingsphere-proxy-cluster.
Also, the pod names created by the operator are too long; the following is an example:
ss-apache-shardingsphere-proxy-cluster-57999bd6b8-fzcdx 1/1 Running 0 21h
ss-apache-shardingsphere-proxy-cluster-57999bd6b8-ms2wb 1/1 Running 0 21h
ss-apache-shardingsphere-proxy-cluster-57999bd6b8-r9kq9 1/1 Running 0 21h
ss-apache-shardingsphere-proxy-cluster-operator-577dc4fdc9p5k2r 1/1 Running 0 21h
ss-apache-shardingsphere-proxy-cluster-operator-577dc4fdc9qxb7m 1/1 Running 0 21h
ss is the RELEASE_NAME for helm. apache-shardingsphere-proxy-cluster is the default value of nameOverride for our chart. When users set RELEASE_NAME, it is easy to exceed the 63-character limit on Kubernetes object names (see https://kubernetes.io/docs/concepts/overview/working-with-objects/names/), so maybe we need to simplify the default value of nameOverride.
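For example, a shorter default could be set in values.yaml (the value below is illustrative, not a decided name):

```yaml
# Illustrative values.yaml override: a short nameOverride keeps
# RELEASE_NAME + chart name + ReplicaSet/pod-template suffixes under the
# 63-character limit for Kubernetes object names.
nameOverride: ss-proxy
```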
Hi community,
This issue is for #73
Add more cases for UpdateService to improve unit test coverage.
Since this is a subproject of Apache ShardingSphere, the readme and other documents should share the same information as Apache ShardingSphere.
n/a
Add a test case of Sharding for the Helm Chart integration. We have already proposed proxy-integration.yml under .github/workflows/, which starts a MySQL client for some basic tests; we can extend this case to real Sharding.
This work could be finished with an action file under .github/workflows/.
The directories in LICENSE files are outdated. Consider updating the following files before next release:
N/A
Write a CloudFormation Template using the AMI built in #49.
With the help of the AMI from #49, we need to create a CloudFormation Stack Template for spinning up a three-node ShardingSphere cluster.
This work could be finished with a document under doc/ recording the operating procedure.
What should the domain be called?
Hi community,
This issue is for #73
Add more cases for ConstructCascadingConfigmap to improve unit test coverage.
The cases should be added to https://github.com/apache/shardingsphere-on-cloud/blob/main/shardingsphere-operator/pkg/reconcile/reconcile_test.go.
Add documentation for starting a ShardingSphere-Proxy cluster on AWS using cloudformation
Consider moving Helm Charts of ShardingSphere to this repo. So we can better collect the solutions.
Reconcile is a core part of the ShardingSphere Operator and needs to be very robust, so it needs more unit test cases.
Functions include:
TODO
Consider merging the CRDs ShardingSphereProxy and ShardingSphereProxyServerConfig into ComputeNode. Here is the spec of ComputeNode.
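As a rough illustration only (all field names below are assumptions made for the sketch, not the actual proposed spec), a merged ComputeNode resource might look like:

```yaml
# Hypothetical ComputeNode custom resource: it merges what
# ShardingSphereProxy (deployment settings) and
# ShardingSphereProxyServerConfig (server.yaml content) expressed separately.
apiVersion: shardingsphere.apache.org/v1alpha1
kind: ComputeNode
metadata:
  name: example-compute-node
spec:
  replicas: 3                 # deployment-side setting
  serverVersion: "5.2.0"
  serverConfig:               # server.yaml-side setting
    mode:
      type: Cluster
      repository:
        type: ZooKeeper
```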
I used Helm to start ShardingSphere-Proxy, installed from source code. The proxy started, but I cannot configure rules in values.yaml; if I do, I get this exception:
Cannot create property=rules for JavaBean=org.apache.shardingsphere.proxy.backend.config.yaml.YamlProxyDatabaseConfiguration@971s0s8 in 'reader', line 1, column 1:
dataSources:
java.lang.InstantiationException
in 'reader', line 22, column 1:
I checked the ShardingSphere source code and the name matches; I also compared it against the example config-sharding.yaml. I can't find the cause of the problem, so if anyone has done this successfully, please teach me.
To repeat my question: I want to use Helm to start ShardingSphere-Proxy and configure the rules. Thanks!
Requires Kubernetes 1.20+.
Currently, in the Deployment configuration generated by the ShardingSphere Operator for ShardingSphere-Proxy, the strategy is v1.RecreateDeploymentStrategyType. This causes the proxy pods to be deleted and rebuilt when the user updates, causing a temporary interruption of user connections to the proxy.
The Strategy of the Deployment for ShardingSphere-Proxy should be changed to RollingUpdateDeploymentStrategyType to avoid connection loss and temporary interruptions.
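The proposed change corresponds to a Deployment strategy like the following sketch (the maxSurge/maxUnavailable values are illustrative, not decided):

```yaml
# RollingUpdate replaces proxy pods incrementally instead of deleting them
# all first (Recreate), so client connections migrate gradually to new pods.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra pod during the rollout
      maxUnavailable: 0  # never drop below the desired replica count
```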
Since this repo was moved from another repo, some of the Go dependencies still use the previous package name. These should be updated to the new module name.
The new Go module name is github.com/apache/shardingsphere-on-cloud/shardingsphere-operator
Hi community,
This issue is for #73
Add more cases for fromInt32 to improve unit test coverage.
Given an example of ShardingSphere logs output to CloudWatch Log Group
When a ShardingSphere cluster is running on AWS EC2, the application logs could be output to CloudWatch for further consumption.
This work could be finished with a document under doc/ recording the operating procedure.
Hi community,
This issue is for #73
Add more cases for ConstructCascadingService to improve unit test coverage.
The cases should be added to https://github.com/apache/shardingsphere-on-cloud/blob/main/shardingsphere-operator/pkg/reconcile/reconcile_test.go.
Kubectl:
Client Version: v1.26.0 Kustomize Version: v4.5.7
Helm:
version.BuildInfo{Version:"v3.10.3", GitCommit:"835b7334cfe2e5e27870ab3ed4135f136eecc704", GitTreeState:"clean", GoVersion:"go1.19.4"}
No changes in values.yaml. Just overriding them via helm --set
as mentioned below.
When I try the following, I see the error zsh: no matches found: compute.serverConfig.authority.users[0].user=root
helm upgrade --install hello-pp-chart shardingsphere/apache-shardingsphere-proxy-charts --version 0.1.0 --set compute.serverConfig.authority.users[0].user=root --set compute.serverConfig.authority.users[0].password=root
Alternatively, when I try the following, I see a warning coalesce.go:223: warning: destination for apache-shardingsphere-proxy-charts.compute.serverConfig.authority.users is a table. Ignoring non-table value ([map[password:root user:root@%]]) coalesce.go:223: warning: destination for apache-shardingsphere-proxy-charts.compute.serverConfig.authority.users is a table. Ignoring non-table value ([map[password:root user:root@%]]) Release "hello-pp-chart" has been upgraded. Happy Helming!
helm upgrade --install hello-pp-chart shardingsphere/apache-shardingsphere-proxy-charts --version 0.1.0 --set compute.serverConfig.authority.users.user=root --set compute.serverConfig.authority.users.password=root
Contents of your CRD resource file. Include proxy.shardingsphere.apache.org/v1alpha1 and proxyconfig.shardingsphere.apache.org/v1alpha1
Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version
Should pick up the user and password for serverConfig when set via helm install or upgrade.
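The zsh error above ("no matches found") is shell globbing, not Helm: unquoted [0] is treated as a glob pattern by zsh. A likely fix is the same command as reported, with the --set values quoted:

```shell
# Quote the --set values so zsh does not try to glob the [0] index.
helm upgrade --install hello-pp-chart shardingsphere/apache-shardingsphere-proxy-charts \
  --version 0.1.0 \
  --set 'compute.serverConfig.authority.users[0].user=root' \
  --set 'compute.serverConfig.authority.users[0].password=root'
```

The second form reported above (users.user=root without an index) fails differently because users is a list in the chart's values, which is why Helm warns about a non-table value.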
*** This is the umbrella issue for upgrading the ShardingSphere-Operator CRDs. ***
Currently we have the ShardingSphere-Operator, which helps automate operations via two CRDs, shardingsphereproxies.shardingsphere.apache.org and shardingsphereproxyserverconfigs.shardingsphere.apache.org. It works, but it has some limitations and little room for extension. For example, it doesn't support the Cluster concept, so users cannot define a logical cluster explicitly using CRDs. It also cannot support any DistSQL jobs, especially in scaling scenarios where developers need to spin up new storage nodes and initialize the schemas.
Architecture before the change:
Proposing two major changes to shardingsphereproxies.shardingsphere.apache.org and shardingsphereproxyserverconfigs.shardingsphere.apache.org:
Merge both CRDs into a new CRD called ComputeNode. The ShardingSphere Proxy Pods will still be controlled by the Kubernetes native workload Deployment.
More details about CRD changes please take a look at issues: #166
Introduce a new CRD called Cluster. This change follows the architecture of a ShardingSphere Cluster, which contains three components: the Governance Node (e.g. ZooKeeper, Etcd), the Compute Node (ShardingSphere Proxy itself), and the Storage Node (e.g. MySQL, PostgreSQL).
The Cluster CRD will not contain the Governance Node and Storage Node in this version; users need to set the correct ZooKeeper service and MySQL service in the ComputeNode specs. The name of the Cluster will be used as mode.repository.props.namespace in the ShardingSphere Proxy server config. The size, whether fixed replicas or automatic scaling, will be moved to Cluster.
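To illustrate the namespace mapping described above (Cluster name becoming mode.repository.props.namespace), the rendered server.yaml fragment might look like the following sketch; the keys follow ShardingSphere's documented cluster-mode configuration, and the cluster name and ZooKeeper address are hypothetical:

```yaml
# If the Cluster CR is named "my-cluster", the operator would render the
# proxy's server.yaml cluster-mode section roughly like this:
mode:
  type: Cluster
  repository:
    type: ZooKeeper
    props:
      namespace: my-cluster            # taken from the Cluster name
      server-lists: zookeeper:2181     # governance service set by the user
```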
More details about CRD changes please take a look at issues: #167
TODOs
Currently, ShardingSphere-Agent supports exporting Prometheus metrics. However, the HPA configuration provided by the CRD does not support a "Custom Metric" configuration.
We want to be able to provide custom metrics configuration in the HPA. Possible tasks include:
Extend the AutomaticScaling struct. Propose a new CRD Cluster. Here is the spec.
Need a GitHub Pages site for storing the packaged charts.
Given an example of ShardingSphere using CloudWatch Events for alerting.
Once the application logs are output to CloudWatch, we can set up alerting based on the collected logs.
This work could be finished with a document under doc/ recording the operating procedure.
Hi community,
After the last release, we received a lot of feedback from users and developers. We now plan to release ShardingSphere on Cloud 0.2.0 in the coming days, with many optimizations and improvements based on that feedback.
Please refer to these release notes to confirm whether they contain the features you expect. If I missed anything, please remind me ASAP, and feel free to share any suggestions.
Hello everyone, in order to better operate the ShardingSphere community and allow issues and PRs in the community to be dealt with in a timely manner, we plan to arrange a community duty schedule.
Apache ShardingSphere community members, all persons interested in ShardingSphere.
Please refer to the duty guide: https://shardingsphere.apache.org/community/en/involved/committer/responsibilities/
It doesn't matter if you don't have permission to operate the ShardingSphere repository; just reply or mention the relevant people in the issue or PR, and community members will tag the issue/PR, assign people, merge code, etc.
Please choose your preferred date of duty by yourself.
If you have permission to edit the current discussion directly, please edit the schedule directly to add your duty wishes date. If you don't have permission, you can leave a message about your willingness to be on duty date in the current discussion, and a community member will assist you in completing the registration.
Start Date | End Date | Person | Remark |
---|---|---|---|
2022/10/24 | 2022/10/30 | @mlycore | |
2022/10/31 | 2022/11/6 | @windghoul | |
2022/11/7 | 2022/11/13 | @xuanyuan300 | |
2022/11/14 | 2022/11/20 | @wbtlb | |
2022/11/21 | 2022/11/27 | @mlycore | |
2022/11/28 | 2022/12/4 | @windghoul | |
2022/12/5 | 2022/12/11 | @xuanyuan300 | |
2022/12/12 | 2022/12/18 | @wbtlb | |
2022/12/19 | 2022/12/25 | @mlycore | |
2022/12/26 | 2023/1/1 | @windghoul | |
2023/1/2 | 2023/1/8 | @xuanyuan300 | |
2023/1/9 | 2023/1/15 | @wbtlb | |
2023/1/16 | 2023/1/22 | @mlycore | |
2023/1/23 | 2023/1/29 | @windghoul | |
2023/1/30 | 2023/2/5 | @xuanyuan300 | |
2023/2/6 | 2023/2/12 | @wbtlb | |
2023/2/13 | 2023/2/19 | @mlycore | |
2023/2/20 | 2023/2/26 | @xuanyuan300 | |
2023/2/27 | 2023/3/5 | @wbtlb | |
2023/3/6 | 2023/3/12 | @mlycore | |
2023/3/13 | 2023/3/19 | @xuanyuan300 | |
2023/3/20 | 2023/3/26 | @wbtlb | |
2023/3/27 | 2023/4/2 | @mlycore | |
2023/4/2 | 2023/4/9 | @xuanyuan300 | |
2023/4/10 | 2023/4/16 | @wbtlb | |
2023/4/17 | 2023/4/23 | @Xu-Wentao | |
2023/4/24 | 2023/4/30 | @mlycore |
Hi community,
After the last release, we received a lot of feedback from users and developers. We now plan to release ShardingSphere on Cloud 0.1.0 in the coming days, with many optimizations and improvements based on that feedback.
Please refer to these release notes to confirm whether they contain the features you expect. If I missed anything, please remind me ASAP, and feel free to share any suggestions.
Mail: https://lists.apache.org/thread/odhnfkg1gznq66dxcmpvc8x1hn3jy50t
Hi community,
This issue is for #73
Add more cases for ConstructHPA to improve unit test coverage.
The cases should be added to https://github.com/apache/shardingsphere-on-cloud/blob/main/shardingsphere-operator/pkg/reconcile/reconcile_test.go.
We should provide docs consistent with the style of the ShardingSphere docs.
Using an AMI to bootstrap tools and frameworks is a common practice on AWS. Since ShardingSphere Proxy has a definite deployment pattern, it is a good fit for building a ShardingSphere AMI on AWS.
Hi community,
This issue is for #73
Add more cases for addInitContainer to improve unit test coverage.
Hi community,
After the last release, we received a lot of feedback from users and developers. We now plan to release ShardingSphere on Cloud 0.1.2 in the coming days, with many optimizations and improvements based on that feedback.
Please refer to these release notes to confirm whether they contain the features you expect. If I missed anything, please remind me ASAP, and feel free to share any suggestions.
Email: https://lists.apache.org/thread/287khfdpld334r1hg6wsn0btkjjgo5qc
Build a Grafana dashboard for ShardingSphere running with the agent; it would be even better if the dashboard also displayed the RDS CloudWatch metrics.
ShardingSphere Proxy already exposes metrics indicating its running status; check out this page for more details.
The metrics could be gathered with Prometheus and then displayed on a Grafana dashboard.
This work could be finished with a document under doc/ recording the operating procedure.
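As an illustration of the Prometheus side, a minimal scrape config might look like this; the job name, target address, and port are assumptions, so use whatever endpoint the agent is actually configured to expose:

```yaml
# prometheus.yml fragment: scrape the metrics endpoint exposed by
# ShardingSphere-Agent on each proxy instance.
scrape_configs:
  - job_name: shardingsphere-proxy
    static_configs:
      - targets: ['proxy-host:9090']   # agent metrics address (assumed)
```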
Hi community,
This issue is for #73
Add more cases for processOptionalParameter to improve unit test coverage.
Several workflows are failing, reporting errors like the one below:
Error: .github#L1
helm/[email protected] is not allowed to be used in apache/shardingsphere-on-cloud. Actions in this workflow must be: within a repository owned by apache, created by GitHub, verified in the GitHub Marketplace, or matching the following: /@[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]+, AdoptOpenJDK/install-jdk@, JamesIves/github-pages-deploy-action@5dc1d5a, TobKed/label-when-approved-action@, actions-cool/issues-helper@, actions-rs/, al-cheb/configure-pagefile-action@, amannn/action-semantic-pull-request@, apache/, burrunan/gradle-cache-action@, bytedeco/javacpp-presets/.github/actions/, chromaui/action@, codecov/codecov-action@, conda-incubator/setup-miniconda@, container-tools/kind-action@, container-tools/microshift-action@, dawidd6/action-download-artifact@, delaguardo/setup-graalvm@, docker://jekyll/jekyll:, docker://pandoc/core:2.9, eps1lon/actions-label-merge-conflict@, gaurav-nelson/github-action-markdown-link-...
The action provided by helm/kind-action is not allowed to be used in Apache projects. This should be fixed in another way.
As a critical component in any production deployment, ShardingSphere plays a very important role and has high availability requirements.
not related to deployment
In the Readme's development prerequisites section, only the download link for Golang is provided. We could also add the download links for Git and the other required tools.
nothing
No response
Build an AMI which contains the JDK 1.8+, ShardingSphere Proxy 5.2.0, and ZooKeeper 3.8 packages.
AMI refers to an Amazon Machine Image; we can build an AMI with ShardingSphere Proxy 5.2.0 and ZooKeeper 3.8 preinstalled, so that ShardingSphere Proxy and ZooKeeper start as soon as an EC2 instance using this AMI starts.
This work could be finished with a document under doc/ recording the operating procedure.
scripts:
wget https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.8.0/apache-zookeeper-3.8.0-bin.tar.gz
wget https://dlcdn.apache.org/shardingsphere/5.2.0/apache-shardingsphere-5.2.0-shardingsphere-proxy-bin.tar.gz
/etc/systemd/system/multi-user.target.wants/zookeeper.service:
[Unit]
Description=zookeeper.service
After=network.target
ConditionPathExists=/usr/local/zookeeper-3.8.0/conf/zoo.cfg
[Service]
Environment=JAVA_HOME=/usr/local/jdk1.8.0_333
Type=forking
User=root
Group=root
ExecStart=/usr/local/zookeeper-3.8.0/bin/zkServer.sh start
ExecStop=/usr/local/zookeeper-3.8.0/bin/zkServer.sh stop
[Install]
WantedBy=multi-user.target
Currently the shardingsphere-operator supports a deployment pattern that takes advantage of the Kubernetes native workload Deployment. Since Apache ShardingSphere provides a broad set of capabilities such as sharding, encryption, scaling, and shadow, a more flexible deployment pattern should be introduced, with its own initialization configurations.
After the repository is donated, we need to change the domain of the CRD from sphere-ex.com to apache.org.
Since ShardingSphere on Cloud has been donated to Apache ShardingSphere, we need to update copyrights and domains in the project to shardingsphere.apache.org, in files such as:
None.
Hi community,
This issue is for #73
Add more cases for UpdateHPA to improve unit test coverage.