
catalog's Introduction

KubeVela Catalog

KubeVela is a modern software delivery control plane to make deploying and operating applications across today's hybrid, multi-cloud environments easier, faster and more reliable.

One of the core goals of KubeVela is to build an open, inclusive, and vibrant OSS developer community focused on solving real-world application delivery and operation problems, and on sharing reusable building blocks and best practices.

Here is the catalog of these shared resources; we call them addons.

Introduction

This repo is a catalog of addons which extend the capability of the KubeVela control plane. Generally, an addon consists of Kubernetes CRDs and the corresponding X-Definitions, but neither is strictly required. For example, the fluxcd addon consists of the FluxCD controllers and the helm component definition, while VelaUX just deploys a web server without any CRDs or Definitions.

There are basically two kinds of addons, grouped by maturity: verified addons, which have been tested for a long time and can be used in production environments, and experimental addons, which contain new features but still need more verification.

Community users can install and use these addons as follows:

  • Verified Addons: when a pull request is merged, the changes to these addons are automatically packaged, synced to the OSS bucket, and served in the official addon registry. They are displayed in the vela CLI via vela addon list, and in VelaUX.

  • Experimental Addons: experimental addons are also packaged and synced to the OSS bucket automatically, but into the experimental folder; you need to add the experimental registry manually to use them:

    vela addon registry add experimental --type=helm --endpoint=https://addons.kubevela.net/experimental/
    


How to use

The registry at https://addons.kubevela.net will be deprecated and replaced by https://kubevela.github.io/catalog/official. You can run the following commands to set up the new registries.

vela addon registry delete KubeVela
vela addon registry update KubeVela --type helm --endpoint=https://kubevela.github.io/catalog/official
vela addon registry add experimental --type helm --endpoint=https://kubevela.github.io/catalog/experimental

You can enable these addons with the vela command line:

vela addon enable <official-addon-name>
vela addon enable experimental/<experimental-addon-name>

You can also enable addons by clicking them on the VelaUX addon page.

Please refer to the doc for more details.

History versions

All versions of the addons are kept in the OSS bucket; you can check the old versions and download them from this index: https://addons.kubevela.net/index.yaml.

Create an addon

To create an addon, use the following command to generate an addon scaffold:

vela addon init <addon-name>

The best way to learn how to build an addon is to follow the existing examples.

You can refer to this doc to learn all the details of how to make an addon and the mechanism behind it.
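
For orientation, a minimal metadata.yaml sketch for a new addon is shown below; the field names follow the examples elsewhere in this catalog, while the values are illustrative placeholders. Additional fields such as the system version requirements are discussed in the issues further down this page.

# metadata.yaml (illustrative sketch; all values are placeholders)
name: example
version: 1.0.0
description: A short description of what this addon provides
icon: https://example.com/icon.png   # an accessible icon URL is required for contribution
url: https://example.com             # the project's source or homepage URL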

Contribute an addon

All contributions are welcome; just send a pull request to this repo following the rules below:

  • A new addon should be accepted as an experimental one first, with the following necessary information:

    • An accessible icon URL and source URL defined in the addon's metadata.yaml.
    • A detailed introduction in README.md, including a basic usage example and an explanation of the benefits of this addon.
    • It's more likely to be accepted if useful examples are provided in the example dir.
  • An experimental addon must meet these conditions to be promoted to a verified one.

  • If you come across any addon problems, feel free to raise a GitHub issue or just send a pull request to fix them. Please make sure to update the addon version in your pull request.

Community

Welcome to the KubeVela community for discussion; please refer to the community repo.

catalog's People

Contributors

barnettzqg, captainroy-hy, charlie0129, chengleqi, chivalryq, fogdong, fourierr, hanmengnan, hongchaodeng, kichristensen, kolossi, lixd, majian159, mdsahil-oss, my-pleasure, oeular, resouer, ryanzhang-oss, s4rd1nh4, somefive, stevenleizhang, suwliang3, wangyikewxgm, wonderflow, xdatcloud, yanghua, yangsoon, yue9944882, zhaohuiweixiao, zzxwill


catalog's Issues

[BUG][poddisruptionbudgettrait] When the trait is used with a Deployment workload type, the controller crashes

Look at the following example:

apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: example-component
spec:
  workload:
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-component
    spec:
      selector:
        matchLabels:
          app: example-component
      template:
        metadata:
          labels:
            app: example-component
        spec:
          containers:
          - name: nginx
            image: nginx
            ports:
            - containerPort: 80
              name: pa
---
apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration
metadata:
  name: example-appconfig
spec:
  components:
    - componentName: example-component
      traits:
        - trait:
            apiVersion: core.oam.dev/v1alpha2
            kind: PodDisruptionBudgetTrait
            metadata:
              name: example-pdb-trait
            spec:
              minAvailable: 1

After deployment, the controller will have the following errors:

2020-11-03T08:57:15.622Z	INFO	controllers.PodDisruptionBudgetTrait	Cannot locate any resources	{"total resources": 0}
E1103 08:57:15.649445       1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 307 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x14db100, 0x23a9050)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0xa3
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x82
panic(0x14db100, 0x23a9050)
	/usr/local/go/src/runtime/panic.go:679 +0x1b2
k8s.io/api/policy/v1beta1.(*PodDisruptionBudget).GetObjectKind(0x0, 0xc0003d0100, 0x18f95c0)
	<autogenerated>:1 +0x5
sigs.k8s.io/controller-runtime/pkg/client.(*client).Patch(0xc0000e63f0, 0x18bf120, 0xc0000560b0, 0x18922e0, 0x0, 0x18961e0, 0x23e4df8, 0xc0005d5ce0, 0x2, 0x2, ...)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/client/client.go:151 +0x57
github.com/oam-dev/catalog/traits/poddisruptionbudgettrait/controllers.(*PodDisruptionBudgetTraitReconciler).Reconcile(0xc0003d0100, 0xc000489c2c, 0x4, 0xc000380900, 0x11, 0xc0003ae518, 0x1895480, 0xc0003ae510, 0xc0003adcb0)
	/workspace/controllers/poddisruptionbudgettrait_controller.go:110 +0xe96
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc000504b40, 0x1531360, 0xc0003b1940, 0x43e600)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:233 +0x162
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc000504b40, 0xc00048e600)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:209 +0xcb
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc000504b40)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:188 +0x2b
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0005183e0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5e
  • Solution

We revised the WorkloadDefinition of the deployment workload:

apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
  creationTimestamp: "2020-11-02T05:22:33Z"
  generation: 3
  name: deployments.apps
  resourceVersion: "164262607"
  selfLink: /apis/core.oam.dev/v1alpha2/workloaddefinitions/deployments.apps
  uid: e552e772-8810-4d01-8414-094ba1f43668
spec:
  childResourceKinds:
  - apiVersion: apps/v1
    kind: ReplicaSet
  definitionRef:
    name: deployments.apps

clean up registry

Make sure all of the following Definitions actually work.

  1. Add Knative as a workload and add a ComponentDefinition for it.
  2. Fix the Knative auto-scale trait and make it actually work:
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  annotations:
    definition.oam.dev/description: "Auto scale for knative serving"
  name: knative-autoscale
  namespace: vela-system
spec:
  appliesToWorkloads:
    - knative-serving # this should be some knative like workload
  schematic:
    cue:
      template: |-
        import "encoding/json"
        patch: {
          metadata: annotations: {
              "my.autoscale.ann": json.Marshal({
                  "minReplicas": parameter.min
                  "maxReplicas": parameter.max
              })
          }
        }
        parameter: {
          min: *1 | int
          max: *3 | int
        }
  3. Fix the expose trait and add it into the KubeVela chart as a built-in trait:
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  name: expose
  namespace: vela-system
spec:
  schematic:
    cue:
      template: |-
        parameter: [string]: int

        outputs: {
          for k, v in parameter {
              "\(k)": {
                  apiVersion: "v1"
                  kind:       "Service"
                  spec: {
                      selector:
                          app: context.name
                      ports: [{
                          port:       v
                          targetPort: v
                      }]
                  }
              }
          }
        }
  4. Delete the code of PodDisruptionBudgetTrait and write a CUE-based trait instead, then put it into the registry (see the sketch below). https://kubernetes.io/docs/tasks/run-application/configure-pdb/
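
A minimal sketch of such a CUE-based trait, assuming the workload's pods carry the standard KubeVela component label app.oam.dev/component; the trait name, defaults, and selector label are assumptions, not a finished definition:

apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  annotations:
    definition.oam.dev/description: "Add a PodDisruptionBudget for the component's pods"
  name: pdb
  namespace: vela-system
spec:
  schematic:
    cue:
      template: |-
        outputs: pdb: {
          apiVersion: "policy/v1"
          kind:       "PodDisruptionBudget"
          metadata: name: context.name
          spec: {
            minAvailable: parameter.minAvailable
            // assumes the workload's pods carry the standard component label
            selector: matchLabels: "app.oam.dev/component": context.name
          }
        }
        parameter: {
          // minimum number of pods that must remain available
          minAvailable: *1 | int
        }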

refine the readme for catalog

Similar to our main repo (https://github.com/kubevela/kubevela), the README should contain the following:

  1. Introduce what this repo is for and what it contains.
  2. Explain how users can benefit from this repo.
  3. Introduce how to use the catalog (this is what the README contains now), and refer to the detailed documentation for more depth.
  4. Give a brief introduction to the community and refer to the community repo.
  5. Explain how to contribute to the catalog.

[Addon] Not all default StorageClasses match the requirements of the Prometheus server

➜  crds git:(master) ✗ k get storageclass
NAME                                PROVISIONER                       RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
alicloud-disk-available (default)   diskplugin.csi.alibabacloud.com   Delete          Immediate              true                   154m
alicloud-disk-efficiency            diskplugin.csi.alibabacloud.com   Delete          Immediate              true                   154m
alicloud-disk-essd                  diskplugin.csi.alibabacloud.com   Delete          Immediate              true                   154m
alicloud-disk-ssd                   diskplugin.csi.alibabacloud.com   Delete          Immediate              true                   154m
alicloud-disk-topology              diskplugin.csi.alibabacloud.com   Delete          WaitForFirstConsumer   true                   154m
➜  crds git:(master) ✗ k describe pod -n vela-system  prometheus-server-56c7c89d4f-bj9hd
Name:           prometheus-server-56c7c89d4f-bj9hd
Namespace:      vela-system
Priority:       0
Node:           cn-beijing.192.168.0.131/192.168.0.131
Start Time:     Thu, 23 Dec 2021 01:05:38 +0800
Labels:         app=prometheus
                chart=prometheus-14.4.1
                component=server
                heritage=Helm
                pod-template-hash=56c7c89d4f
                release=prometheus
Annotations:    kubernetes.io/psp: ack.privileged
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/prometheus-server-56c7c89d4f
Containers:
  prometheus-server-configmap-reload:
    Container ID:
    Image:         jimmidyson/configmap-reload:v0.5.0
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Args:
      --volume-dir=/etc/config
      --webhook-url=http://127.0.0.1:9090/-/reload
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/config from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from prometheus-server-token-wj2d7 (ro)
  prometheus-server:
    Container ID:
    Image:         prom/prometheus:v2.26.0
    Image ID:
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --storage.tsdb.retention.time=15d
      --config.file=/etc/config/prometheus.yml
      --storage.tsdb.path=/data
      --web.console.libraries=/etc/prometheus/console_libraries
      --web.console.templates=/etc/prometheus/consoles
      --web.enable-lifecycle
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/-/healthy delay=30s timeout=10s period=15s #success=1 #failure=3
    Readiness:      http-get http://:9090/-/ready delay=30s timeout=4s period=5s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /data from storage-volume (rw)
      /etc/config from config-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from prometheus-server-token-wj2d7 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      prometheus-server
    Optional:  false
  storage-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  prometheus-server
    ReadOnly:   false
  prometheus-server-token-wj2d7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-server-token-wj2d7
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age        From                               Message
  ----     ------                  ----       ----                               -------
  Warning  FailedScheduling        <unknown>                                     0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling        <unknown>                                     0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled               <unknown>                                     Successfully assigned vela-system/prometheus-server-56c7c89d4f-bj9hd to cn-beijing.192.168.0.131
  Normal   SuccessfulAttachVolume  3m23s      attachdetach-controller            AttachVolume.Attach succeeded for volume "d-2zegrsafadin8hyrbiwd"
  Warning  FailedMount             3m19s      kubelet, cn-beijing.192.168.0.131  MountVolume.MountDevice failed for volume "d-2zegrsafadin8hyrbiwd" : rpc error: code = Internal desc = SDK.ServerError
ErrorCode: InvalidInstanceType.NotSupportDiskCategory
Recommend: https://error-center.aliyun.com/status/search?Keyword=InvalidInstanceType.NotSupportDiskCategory&source=PopGw
RequestId: 69E5B2D9-8C60-5CFA-8202-C6A50D166015
Message: The instanceType of the specified instance does not support this disk category., Disk(d-2zegrsafadin8hyrbiwd) is not supported by instance, please refer to: https://help.aliyun.com/document_detail/25378.html
  Warning  FailedMount  3m18s  kubelet, cn-beijing.192.168.0.131  MountVolume.MountDevice failed for volume "d-2zegrsafadin8hyrbiwd" : rpc error: code = Internal desc = SDK.ServerError
ErrorCode: InvalidInstanceType.NotSupportDiskCategory
Recommend: https://error-center.aliyun.com/status/search?Keyword=InvalidInstanceType.NotSupportDiskCategory&source=PopGw
RequestId: 9E8BD8C1-B99A-576C-B8DE-18CF71C045CF
Message: The instanceType of the specified instance does not support this disk category., Disk(d-2zegrsafadin8hyrbiwd) is not supported by instance, please refer to: https://help.aliyun.com/document_detail/25378.html
  Warning  FailedMount  3m17s  kubelet, cn-beijing.192.168.0.131  MountVolume.MountDevice failed for volume "d-2zegrsafadin8hyrbiwd" : rpc error: code = Internal desc = SDK.ServerError
ErrorCode: InvalidInstanceType.NotSupportDiskCategory
Recommend: https://error-center.aliyun.com/status/search?Keyword=InvalidInstanceType.NotSupportDiskCategory&source=PopGw
RequestId: 95306460-F4C4-5C56-A2B4-6B81A821A110
Message: The instanceType of the specified instance does not support this disk category., Disk(d-2zegrsafadin8hyrbiwd) is not supported by instance, please refer to: https://help.aliyun.com/document_detail/25378.html
  Warning  FailedMount  3m14s  kubelet, cn-beijing.192.168.0.131  MountVolume.MountDevice failed for volume "d-2zegrsafadin8hyrbiwd" : rpc error: code = Internal desc = SDK.ServerError
ErrorCode: InvalidInstanceType.NotSupportDiskCategory
Recommend: https://error-center.aliyun.com/status/search?Keyword=InvalidInstanceType.NotSupportDiskCategory&source=PopGw
RequestId: A749136A-4341-5A5C-8165-AD4CE005F5D9
Message: The instanceType of the specified instance does not support this disk category., Disk(d-2zegrsafadin8hyrbiwd) is not supported by instance, please refer to: https://help.aliyun.com/document_detail/25378.html
  Warning  FailedMount  3m10s  kubelet, cn-beijing.192.168.0.131  MountVolume.MountDevice failed for volume "d-2zegrsafadin8hyrbiwd" : rpc error: code = Internal desc = SDK.ServerError
ErrorCode: InvalidInstanceType.NotSupportDiskCategory
Recommend: https://error-center.aliyun.com/status/search?Keyword=InvalidInstanceType.NotSupportDiskCategory&source=PopGw
RequestId: 4B8E8CFD-3D35-5432-AB3F-F559DC70E016
Message: The instanceType of the specified instance does not support this disk category., Disk(d-2zegrsafadin8hyrbiwd) is not supported by instance, please refer to: https://help.aliyun.com/document_detail/25378.html
  Warning  FailedMount  3m2s  kubelet, cn-beijing.192.168.0.131  MountVolume.MountDevice failed for volume "d-2zegrsafadin8hyrbiwd" : rpc error: code = Internal desc = SDK.ServerError
ErrorCode: InvalidInstanceType.NotSupportDiskCategory
Recommend: https://error-center.aliyun.com/status/search?Keyword=InvalidInstanceType.NotSupportDiskCategory&source=PopGw
RequestId: E66F800C-E487-5AB0-AB65-A56D2A3F8687
Message: The instanceType of the specified instance does not support this disk category., Disk(d-2zegrsafadin8hyrbiwd) is not supported by instance, please refer to: https://help.aliyun.com/document_detail/25378.html
  Warning  FailedMount  2m45s  kubelet, cn-beijing.192.168.0.131  MountVolume.MountDevice failed for volume "d-2zegrsafadin8hyrbiwd" : rpc error: code = Internal desc = SDK.ServerError
ErrorCode: InvalidInstanceType.NotSupportDiskCategory
Recommend: https://error-center.aliyun.com/status/search?Keyword=InvalidInstanceType.NotSupportDiskCategory&source=PopGw
RequestId: 567756E1-BCCF-55F1-8BA8-C23ACFE78062
Message: The instanceType of the specified instance does not support this disk category., Disk(d-2zegrsafadin8hyrbiwd) is not supported by instance, please refer to: https://help.aliyun.com/document_detail/25378.html
  Warning  FailedMount  2m13s  kubelet, cn-beijing.192.168.0.131  MountVolume.MountDevice failed for volume "d-2zegrsafadin8hyrbiwd" : rpc error: code = Internal desc = SDK.ServerError
ErrorCode: InvalidInstanceType.NotSupportDiskCategory
Recommend: https://error-center.aliyun.com/status/search?Keyword=InvalidInstanceType.NotSupportDiskCategory&source=PopGw
RequestId: D4DFF8A4-3A49-5A4E-AEE5-07511C9D420B
Message: The instanceType of the specified instance does not support this disk category., Disk(d-2zegrsafadin8hyrbiwd) is not supported by instance, please refer to: https://help.aliyun.com/document_detail/25378.html
  Warning  FailedMount  80s  kubelet, cn-beijing.192.168.0.131  Unable to attach or mount volumes: unmounted volumes=[storage-volume], unattached volumes=[config-volume prometheus-server-token-wj2d7 storage-volume]: timed out waiting for the condition
  Warning  FailedMount  69s  kubelet, cn-beijing.192.168.0.131  (combined from similar events): MountVolume.MountDevice failed for volume "d-2zegrsafadin8hyrbiwd" : rpc error: code = Internal desc = SDK.ServerError
ErrorCode: InvalidInstanceType.NotSupportDiskCategory
Recommend: https://error-center.aliyun.com/status/search?Keyword=InvalidInstanceType.NotSupportDiskCategory&source=PopGw
RequestId: 5F892E8D-14F1-5FB4-A541-1C9ECDB4C72B
Message: The instanceType of the specified instance does not support this disk category., Disk(d-2zegrsafadin8hyrbiwd) is not supported by instance, please refer to: https://help.aliyun.com/document_detail/25378.html

Refine parameter "Selector" of trait Metrics

I find I'm not quite sure how to set the selector parameter when using the Metrics trait; giving an example in the Description/Usage field might help.

Besides, selector is a Kubernetes concept that is exposed by every workload, and it's said that it "will be discovered automatically by default".

KubeVela is trying to deliver applications for developers who don't need much Kubernetes knowledge, so how about we just delete it?

Name      Type               Description                                                                     Notes
Path      string             the metrics path of the service                                                 default: /metrics
Format    string             format of the metrics                                                           default: prometheus
Scheme    string                                                                                             default: http
Enabled   bool                                                                                               default: true
Port      int32              the port for metrics, will be discovered automatically by default              default: 0; must be >=1024 and <=65535
Selector  map[string]string  the label selector for the pods, will be discovered automatically by default   optional
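
For the requested usage example, here is a hedged sketch of how these properties could be set on a component; the application shape and the trait type name are assumptions derived from the parameter table above and may differ in the version this issue refers to:

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: metrics-example
spec:
  components:
    - name: my-app
      type: webservice
      properties:
        image: nginx
      traits:
        - type: metrics
          properties:
            path: /metrics
            scheme: http
            port: 8080
            # selector is optional; when omitted, the pods are discovered automatically
            selector:
              app: my-app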

Re-design ServiceExpose trait

The idea is that Expose should answer the questions below:

  1. Which target port do I want to expose?
  2. Which service port should it be exposed on?
  3. What's the type of this expose?
  4. What's the protocol?

It should be a list, with a Kind of Expose. So I'd suggest re-designing ServiceExpose as below.

Example 1:

apiVersion: core.oam.dev/v1alpha2
kind: Expose
spec:
  servicePorts:
    - port: 80
      targetPort: 80
    - port: 8001
      targetPort: 9376

This will generate one k8s Service:

apiVersion: v1
kind: Service
metadata:
  name: <component-name>-clusterip
spec:
  clusterIP: 10.96.193.247 # auto gen
  ports:
  - name: tcp-80 # auto gen
    port: 80
    protocol: TCP # default
    targetPort: 80
  - name: tcp-8001 # auto gen
    port: 8001
    protocol: TCP # default
    targetPort: 9376
  selector:
    component: <component-name>
  type: ClusterIP

Example 2:

apiVersion: core.oam.dev/v1alpha2
kind: Expose
spec:
  servicePorts:
    - port: 80
      targetPort: 80
      type: NodePort
    - port: 8001
      targetPort: 9376
      clusterIP: 10.0.171.239
      type: LoadBalancer
      protocol: HTTP

This will generate two k8s Services:

apiVersion: v1
kind: Service
metadata:
  name: <component-name>-nodeport
spec:
  type: NodePort
  ports:
  - name: tcp-80 # auto gen
    port: 80
    protocol: TCP # default
    targetPort: 80
  selector:
    component: <component-name> # this is auto label for workload
---
apiVersion: v1
kind: Service
metadata:
  name: <component-name>-loadbalancer
spec:
  type: LoadBalancer
  clusterIP: 10.0.171.239
  ports:
  - name: http-8001 # auto gen
    port: 8001
    protocol: HTTP
    targetPort: 9376
  selector:
    component: <component-name> # this is auto label for workload

For auto labels of workload, ref: crossplane/oam-kubernetes-runtime#174

Feat: load the system version requirements from the addon's meta.yaml

A new feature that automatically finds an addon's available versions has been added to KubeVela's repository.

kubevela/kubevela#4181

In order to adapt to this feature, we need to add a system field to the meta.yaml file to record the system version requirements corresponding to a certain version of the addon.

name: example
version: 1.0.1
description: Extended workload to do continuous and progressive delivery
icon: https://raw.githubusercontent.com/fluxcd/flux/master/docs/_files/weave-flux.png
url: https://fluxcd.io
system: "vela>=1.4.0; kubernetes>=1.20.0"

This information will be loaded when synchronizing the addon package, and finally written into the index.yaml of the corresponding addon repository.

Can you provide better advice on where to load system version requirements? @wangyikewxgm @wonderflow

Feat: Extract the view files used by VelaUX addon into a separate folder for deployment.

KubeVela now supports the new directory structure. For details, see kubevela/kubevela#4154.

The addon directory structure is:

├── resources/
├── definitions/
├── schemas/
├── views/
│   ├── pod-view.cue
│   └── component-pod-view.cue
├── README.md
├── metadata.yaml
└── template.yaml

View files support both the YAML format and the CUE format.

An example of a YAML view file is:

apiVersion: "v1"
kind: "ConfigMap"
metadata:
  name: "cloud-resource-view"
  namespace: "vela-system"
data:
  template: |
    import (
    "vela/ql"
    )
    
    parameter: {
      appName: string
        appNs:   string
    }
    resources: ql.#ListResourcesInApp & {
      app: {
        name:      parameter.appName
          namespace: parameter.appNs
          filter: {
            "apiVersion": "terraform.core.oam.dev/v1beta1"
              "kind":       "Configuration"
          }
          withStatus: true
      }
    }
    status: {
      if resources.err == _|_ {
        "cloud-resources": [ for i, resource in resources.list {
          resource.object
        }]
      }
      if resources.err != _|_ {
        error: resources.err
      }
    }

An example of a CUE view file is:

import (
	"vela/ql"
)

parameter: {
	name:      string
	namespace: string
	cluster:   *"" | string
}
pod: ql.#Read & {
	value: {
		apiVersion: "v1"
		kind:       "Pod"
		metadata: {
			name:      parameter.name
			namespace: parameter.namespace
		}
	}
	cluster: parameter.cluster
}
eventList: ql.#SearchEvents & {
	value: {
		apiVersion: "v1"
		kind:       "Pod"
		metadata:   pod.value.metadata
	}
	cluster: parameter.cluster
}
podMetrics: ql.#Read & {
	cluster: parameter.cluster
	value: {
		apiVersion: "metrics.k8s.io/v1beta1"
		kind:       "PodMetrics"
		metadata: {
			name:      parameter.name
			namespace: parameter.namespace
		}
	}
}
status: {
	if pod.err == _|_ {
		containers: [ for container in pod.value.spec.containers {
			name:  container.name
			image: container.image
			resources: {
				if container.resources.limits != _|_ {
					limits: container.resources.limits
				}
				if container.resources.requests != _|_ {
					requests: container.resources.requests
				}
				if podMetrics.err == _|_ {
					usage: {for containerUsage in podMetrics.value.containers {
						if containerUsage.name == container.name {
							cpu:    containerUsage.usage.cpu
							memory: containerUsage.usage.memory
						}
					}}
				}
			}
			if pod.value.status.containerStatuses != _|_ {
				status: {for containerStatus in pod.value.status.containerStatuses if containerStatus.name == container.name {
					state:        containerStatus.state
					restartCount: containerStatus.restartCount
				}}
			}
		}]
		if eventList.err == _|_ {
			events: eventList.list
		}
	}
	if pod.err != _|_ {
		error: pod.err
	}
}

env injector trait needed

Inject env vars into every container of the pod template, including sidecar containers.

It will need ordering to work together with the sidecar injector trait. A rough sketch follows.
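
A hedged sketch of what such a trait definition could look like, using KubeVela's CUE patch mechanism with the +patchKey strategy to merge env entries into every container of the rendered workload; the trait name, parameter shape, and reliance on context.output are assumptions, not an agreed design:

apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  annotations:
    definition.oam.dev/description: "Inject environment variables into all containers, sidecars included"
  name: env-injector
  namespace: vela-system
spec:
  podDisruptive: true
  schematic:
    cue:
      template: |-
        patch: spec: template: spec: {
          // +patchKey=name
          containers: [ for c in context.output.spec.template.spec.containers {
            name: c.name
            // +patchKey=name
            env: [ for k, v in parameter.env {
              name:  k
              value: v
            }]
          }]
        }
        parameter: {
          // key/value pairs to inject into every container
          env: [string]: string
        }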

Fluxcd addon upgrade issue from 1.3.5 to 2.1.0

While upgrading the fluxcd addon from version 1.3.5 to 2.1.0 (with the KubeVela version being updated from 1.4.1 to 1.5.4 as well), we see that some CRDs are getting deleted and recreated every few seconds in the reconciliation loop.
Vela CLI version - v1.5.4

Steps to reproduce:

Upgrade Kubevela version from 1.4.1 to 1.5.4
Upgrade addon fluxcd version from 1.3.5 to 2.1.0
Upgrade addon terraform from 1.0.9 to 1.0.13

Multicluster is not enabled in our case.

During the upgrade, CRDs created by fluxcd are getting patched and deleted, e.g. helmrepositories.source.toolkit.fluxcd.io and gitrepositories.source.toolkit.fluxcd.io.
As a result, the terraform addon is stuck in the enabling status because the Helm CRD is not found.

Fluxcd is deleting the existing fluxcd.io CRDs and trying to recreate them every interval; this is happening in a loop every few seconds.
The fluxcd addon application status shows running.

See the screenshot of the resourcetrackers attached to the original issue.

We tried upgrading only the KubeVela version while keeping the fluxcd addon version the same, and that works fine. We suspect there is some change in the new version of fluxcd which is recreating the CRDs.

Please see the vela core logs for more details:

kubevela-vela-core-logs.txt

Rendering the ClickHouse addon application fails

When I enabled clickhouse with vela in a k8s cluster, the clickhouse addon failed to render the application, and I don't know why.

Besides, the clickhouse addon is an experimental addon, so why does it appear in VelaUX as a verified addon? 😂

example of workloads and traits cannot be run

If I deploy addon-oam-kubernetes-local, the workloads don't work, but they do work after I deploy crossplan-oam-sample.

Steps:

  1. Install controllers
kubectl create namespace cert-manager
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.14.0/cert-manager.yaml
kubectl create namespace oam-system
helm install controller -n oam-system ./charts/oam-core-resources/ 

waiting controller running

kubectl get pod -n oam-system
NAME                                             READY   STATUS    RESTARTS   AGE
oam-core-resources-controller-69b6477d57-ppwdb   2/2     Running   0          106m
  2. clone catalog
git clone https://github.com/oam-dev/catalog.git
  3. deploy workloads/deployment
cd workloads/deployment
kubectl apply -f rbac.yaml
kubectl apply -f sample-deployment-component.yaml
kubectl apply -f sample-applicationconfiguration.yaml

But it doesn't work!

describe of applicationconfigurations

kubectl describe applicationconfigurations.core.oam.dev example-deployment-appconfig
Name:         example-deployment-appconfig
Namespace:    default
Labels:       <none>
Annotations:  API Version:  core.oam.dev/v1alpha2
Kind:         ApplicationConfiguration
Metadata:
  Creation Timestamp:  2020-06-28T03:54:31Z
  Generation:          1
  Resource Version:    6466562
  Self Link:           /apis/core.oam.dev/v1alpha2/namespaces/default/applicationconfigurations/example-deployment-appconfig
  UID:                 ce8e2699-3ec1-4f92-9097-1b9fcfb4b6c7
Spec:
  Components:
    Component Name:  example-deployment
Events:              <none>

describe of components

kubectl describe components.core.oam.dev example-deployment
Name:         example-deployment
Namespace:    default
Labels:       <none>
Annotations:  API Version:  core.oam.dev/v1alpha2
Kind:         Component
Metadata:
  Creation Timestamp:  2020-06-28T03:54:24Z
  Generation:          1
  Resource Version:    6466525
  Self Link:           /apis/core.oam.dev/v1alpha2/namespaces/default/components/example-deployment
  UID:                 8a59410c-4df5-4945-beff-670289ac19a0
Spec:
  Workload:
    API Version:  apps/v1
    Kind:         Deployment
    Metadata:
      Labels:
        App:  nginx
      Name:   nginx-deployment
    Spec:
      Selector:
        Match Labels:
          App:  nginx
      Template:
        Metadata:
          Labels:
            App:  nginx
        Spec:
          Containers:
            Image:  nginx:1.17
            Name:   nginx
            Ports:
              Container Port:  80
Events:                        <none>

Logs of the controller; it can't watch the CRs:

I0628 03:53:40.104911       1 request.go:621] Throttling request took 1.044545187s, request: GET:https://10.43.0.1:443/apis/cluster.cattle.io/v3?timeout=32s
2020-06-28T03:53:40.110Z	INFO	controller-runtime.metrics	metrics server is starting to listen	{"addr": ":8080"}
2020-06-28T03:53:40.110Z	INFO	oam controller	starting the OAM controller manager
I0628 03:53:40.111010       1 leaderelection.go:242] attempting to acquire leader lease  oam-system/oam-controller-runtime...
2020-06-28T03:53:40.111Z	INFO	controller-runtime.manager	starting metrics server	{"path": "/metrics"}
I0628 03:53:57.506466       1 leaderelection.go:252] successfully acquired lease oam-system/oam-controller-runtime
2020-06-28T03:53:57.506Z	INFO	controller-runtime.controller	Starting EventSource	{"controller": "oam/manualscalertrait", "source": "kind source: /, Kind="}
2020-06-28T03:53:57.506Z	DEBUG	controller-runtime.manager.events	Normal	{"object": {"kind":"ConfigMap","namespace":"oam-system","name":"oam-controller-runtime","uid":"bca19c73-486f-4271-b3ea-421deb52babd","apiVersion":"v1","resourceVersion":"6466409"}, "reason": "LeaderElection", "message": "oam-core-resources-controller-69b6477d57-ppwdb_398a142a-9b9a-46eb-9a38-b41f20ca1bc8 became leader"}
2020-06-28T03:53:57.506Z	INFO	controller-runtime.controller	Starting EventSource	{"controller": "oam/containerizedworkload", "source": "kind source: /, Kind="}
2020-06-28T03:53:57.607Z	INFO	controller-runtime.controller	Starting Controller	{"controller": "oam/manualscalertrait"}
2020-06-28T03:53:57.607Z	INFO	controller-runtime.controller	Starting workers	{"controller": "oam/manualscalertrait", "worker count": 1}
2020-06-28T03:53:57.607Z	INFO	controller-runtime.controller	Starting EventSource	{"controller": "oam/containerizedworkload", "source": "kind source: /, Kind="}
2020-06-28T03:53:57.753Z	INFO	controller-runtime.controller	Starting EventSource	{"controller": "oam/containerizedworkload", "source": "kind source: /, Kind="}
2020-06-28T03:53:57.853Z	INFO	controller-runtime.controller	Starting Controller	{"controller": "oam/containerizedworkload"}
2020-06-28T03:53:57.853Z	INFO	controller-runtime.controller	Starting workers	{"controller": "oam/containerizedworkload", "worker count": 1}

Remove route trait's dependency on ingress.

The installation documentation requires that you install ingress-nginx yourself (https://kubevela.io/docs/install/#1-choose-kubernetes-cluster), while the route trait will automatically install version 3.22.0 of ingress-nginx (https://github.com/oam-dev/catalog/blob/master/registry/route.yaml#L15-L21).

If the ingress-nginx you enable is not the same as the one installed by the route trait:

  1. If the user's network is fine, there will be two different versions of ingress-nginx installed.
  2. If the user's network is not working well, you will get an error:
    unable to install helm chart dependency nginx-ingress(1.41.2 from https://kubernetes-charts.storage.googleapis.com/) for this trait 'route': looks like "https://kubernetes-charts.storage.googleapis.com/" is not a valid chart repository or cannot be reached: failed to fetch https://kubernetes-charts.storage.googleapis.com/index.yaml : 403 Forbidden

So it would be better to remove the route trait's dependency on ingress.

Reason: ingress is our built-in trait and it already asks the user to install ingress-nginx, so the route trait should not install ingress again.

Provide dex connector for GitHub Enterprise

Background

Currently, the dex connector only supports public GitHub.

GitHub Enterprise requires additional host settings in dex-connector-def.yaml.

Related link

https://dexidp.io/docs/connectors/github/#github-enterprise

Need to be changed

def

github?: {
	// +usage=GitHub client ID
	clientID: string
	// +usage=GitHub client secret
	clientSecret: string
	// +usage=GitHub redirect URI
	redirectURI: string
}

ui-schema

- jsonKey: github
  sort: 3
  uiType: Ignore
  validate:
    required: true
  conditions:
    - jsonKey: type
      op: "=="
      value: "github"
  subParameters:
    - jsonKey: clientID
      uiType: Password
      sort: 1
    - jsonKey: clientSecret
      uiType: Password
      sort: 3
    - jsonKey: redirectURI
      sort: 5
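
For reference, dex's GitHub connector docs add hostName (and optionally rootCA) for GitHub Enterprise; a hedged sketch of the connector config that the definition would ultimately need to render, where the host and CA values are placeholders:

connectors:
  - type: github
    id: github
    name: GitHub Enterprise
    config:
      clientID: $GITHUB_CLIENT_ID
      clientSecret: $GITHUB_CLIENT_SECRET
      redirectURI: https://dex.example.com/callback
      # additional settings required for GitHub Enterprise (per the dex docs)
      hostName: github.example.com
      rootCA: /etc/dex/ca.crt   # optional, for self-signed certificates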

Helm Chart Repository Cannot Be Reached

I am trying to configure my cluster to support the RouteTrait trait

Following instructions detailed in: https://github.com/oam-dev/catalog/tree/master/traits/routetrait

When I tried to add the "http://oam.dev/catalog" helm chart repo, I experienced an error:

helm repo add oam.catalog  http://oam.dev/catalog
Error: looks like "http://oam.dev/catalog" is not a valid chart repository or cannot be reached: failed to fetch http://oam.dev/catalog/index.yaml : 404 Not Found

Please advise

deprecate the raw component type

The raw component type has been deprecated since vela 1.2, but many addons still use this type; we should move them to the k8s-objects type (see the sketch below).
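
A hedged before/after sketch of the migration; the k8s-objects component takes a list of plain Kubernetes resources under objects, and the Namespace resource here is just an illustrative placeholder:

# before: raw component (deprecated)
- name: example-ns
  type: raw
  properties:
    apiVersion: v1
    kind: Namespace
    metadata:
      name: example

# after: k8s-objects component
- name: example-ns
  type: k8s-objects
  properties:
    objects:
      - apiVersion: v1
        kind: Namespace
        metadata:
          name: example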

Adding a Helm repository fails on VelaUX

Steps to reproduce:

  1. enable the fluxcd addon
  2. go to VelaUX
  3. go to "Integrations"
  4. click "Helm Repository"
  5. click the "Add" button
  6. fill in the URL, Username and Password fields
  7. click the "Save" button
  8. VelaUX returns an error
  9. the api-server raises these errors:

{"level":"error","ts":1652687180.283481,"caller":"bcode/bcode.go:102","msg":"Business exceptions, error message: admission webhook "validating.core.oam.dev.v1beta1.applications" denied the request: field "schematic": Invalid value error encountered, cannot create the validation process context of app=eeee in namespace=vela-system: evaluate base template app=eeee in namespace=vela-system: invalid cue template of workload eeee after merge parameter and context: output.stringData.username: conflicting values "xxxxxxxx" and "yyyyyy". , path:/api/v1/config_types/config-helm-repository method:POST","stacktrace":"github.com/oam-dev/kubevela/pkg/apiserver/rest/utils/bcode.ReturnError\n\t/workspace/pkg/apiserver/rest/utils/bcode/bcode.go:102\ngithub.com/oam-dev/kubevela/pkg/apiserver/rest/webservice.(*configWebService).createConfig\n\t/workspace/pkg/apiserver/rest/webservice/config.go:153\ngithub.com/emicklei/go-restful/v3.(*FilterChain).ProcessFilter\n\t/go/pkg/mod/github.com/emicklei/go-restful/[email protected]/filter.go:21\ngithub.com/oam-dev/kubevela/pkg/apiserver/rest/usecase.(*rbacUsecaseImpl).CheckPerm.func1\n\t/workspace/pkg/apiserver/rest/usecase/rbac.go:554\ngithub.com/emicklei/go-restful/v3.(*FilterChain).ProcessFilter\n\t/go/pkg/mod/github.com/emicklei/go-restful/[email protected]/filter.go:19\ngithub.com/oam-dev/kubevela/pkg/apiserver/rest/webservice.authCheckFilter\n\t/workspace/pkg/apiserver/rest/webservice/authentication.go:115\ngithub.com/emicklei/go-restful/v3.(*FilterChain).ProcessFilter\n\t/go/pkg/mod/github.com/emicklei/go-restful/[email protected]/filter.go:19\ngithub.com/oam-dev/kubevela/pkg/apiserver/rest.(*restServer).requestLog\n\t/workspace/pkg/apiserver/rest/rest_server.go:230\ngithub.com/emicklei/go-restful/v3.(*FilterChain).ProcessFilter\n\t/go/pkg/mod/github.com/emicklei/go-restful/[email protected]/filter.go:19\ngithub.com/emicklei/go-restful/v3.(*Container).OPTIONSFilter\n\t/go/pkg/mod/github.com/emicklei/go-restful/[email protected]/options_filter.go:15\ngithub.com/emicklei/go-restful/v3.(*FilterChain).ProcessFilter\n\t/go/pkg/mod/github.com/emicklei/go-restful/[email protected]/filter.go:19\ngithub.com/emicklei/go-restful/v3.CrossOriginResourceSharing.Filter\n\t/go/pkg/mod/github.com/emicklei/go-restful/[email protected]/cors_filter.go:52\ngithub.com/emicklei/go-restful/v3.(*FilterChain).ProcessFilter\n\t/go/pkg/mod/github.com/emicklei/go-restful/[email protected]/filter.go:19\ngithub.com/emicklei/go-restful/v3.(*Container).dispatch\n\t/go/pkg/mod/github.com/emicklei/go-restful/[email protected]/container.go:282\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2047\nnet/http.(*ServeMux).ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2425\ngithub.com/emicklei/go-restful/v3.(*Container).ServeHTTP\n\t/go/pkg/mod/github.com/emicklei/go-restful/[email protected]/container.go:300\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2879\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1930"}
{"level":"info","ts":1652687180.2836726,"caller":"rest/rest_server.go:239","msg":"request log","clientIP":"10.1.7.112","path":"/api/v1/config_types/config-helm-repository","method":"POST","status":500,"time":"25.704548ms","responseSize":481}

ServiceTrait should be renamed

Service is a widely recognized piece of misused terminology in the Kubernetes community; we should name it better.

For example: ServiceExpose etc

Versioning of addons

Background

The function of an addon, especially a Terraform provider addon like terraform-alibaba, depends on the version of KubeVela core. For example, if this PR is not released in a new version of vela-core, any changes to a Terraform provider addon will be directly delivered to all users who install vela-core from scratch or choose to upgrade the addon.

Dependencies on vela-core

Proposal

Per the suggestion from @wonderflow, a version dependency on vela-core needs to be set for an addon.

In the dependencies section of an addon's metadata.yaml, add another item vela-core to mark the required version of vela-core.

# metadata.yaml

+ system:
+  - name: vela-core
+    version: ">=1.2.4"
  1. a higher version of vela-core
    version: ">=$VELA-VERSION" or version: ">$VELA-VERSION"
  2. a lower version of vela-core
    version: "<=$VELA-VERSION" or version: "<$VELA-VERSION"
  3. within a range of vela-core versions
    version: ">=$VELA-VERSION1 <=$VELA-VERSION2"
  4. no version requirement
    Just leave out the version line.

As an addon itself is not versioned, users who install an old vela-core from scratch, or who upgrade the addon with an old vela command line from an old vela-core release, will be affected.

Tasks

  • Check dependency of vela-core's version when executing vela addon enable/upgrade xxx
  • Check dependency of vela-core's version when enabling or upgrading an addon

More to be taken into consideration

  • When executing vela addon ls or showing all addons in VelaUX, can an addon that needs a higher version of vela-core be listed?
  • How to define different vela-core dependencies for an addon in GitHub?

Versioning of addon itself

Status: Drafting

Proposal

Set up GitHub releasing for repo oam-dev/catalog to mark the version of each addon.

  • vela addon registry setup
  • vela addon list
  • vela addon enable/upgrade

support configurable resource graph for non-owner-reference resources

Hi guys. I found this can only be implemented in the built-in velaql way, like HelmRelease.

velaql can list all the resources a HelmRelease has because of the helmRelease2AnyListOption func, but we cannot pass a function or dynamic relevant labels through a configmap. And a Deployment doesn't have an OwnerReference to its Kustomization, so it also cannot be listed through GetOwnerReferences when the ListOption func is missing.

This time we can just add the Kustomization type beside HelmRelease, but I think we should find a better way to improve the extensibility of the configmap approach.

@wangyikewxgm @wonderflow @FogDong

Originally posted by @chengleqi in #487 (comment)

[Important] Revisit the next step of OAM catalog

In general, there are two main features of OAM as a model:

  1. Standard application definition for Kubernetes. This is very similar to Application CRD but with extra benefits:
    • A DevOps workflow with separate concerns - e.g. Operational Capabilities as Service
    • Manageability for ops capabilities (trait) - register, discover and conflict detect
      • kubectl get traits
      • For a given component, one could directly tell how many ops capabilities are bound to it (by YAML file, and by kubectl tree)
    • More information: kapp, and GC
  2. A framework to build abstractions (for either workload or trait).
    • composition/decomposition
    • automatic workloadRef (or ownerRef)
    • "break the abstraction" (auto-inject childResource)? See: #31 (comment)
    • anything else?

As K8s standard app definition:

We need to make sure ANY existing k8s resource can be defined as a trait/workload/scope with zero effort. /cc @zzxwill I'd suggest modeling every application you can find in the cloud native community with OAM, for example https://github.com/istio/istio/tree/master/samples/bookinfo, and putting the example YAML in catalog/applications or somewhere else. Please work with @szihai on this.

Open question: how to handle xxxDefinition in this case? Let's create a tool (e.g. cli)?

kubectl oam definition-gen --all
kubectl oam definition-gen service deployment statefulset

As an abstraction framework:

Traits in OAM should be mostly external capabilities or higher level abstractions for built-in capabilities, not "translation" of k8s built-in api resources. I've raised similar concern in #26 (comment).

Similarly, workloads should come from wider community like Terraform/Crossplane, not StatefulSet, Deployment etc.

For instance, it's less valuable to create a ServiceExpose trait which simply removes label selectors - users should be able to define a k8s Service as a trait freely. workloadRef is an internal helper for building abstractions, but by itself it is not "better user experience".

A perfect example of how a trait is defined is Ingress API v2: https://containo.us/blog/kubernetes-ingress-service-api-demystified. There are still opportunities like Route, blue-green/canary, monitoring, logging and many more. What's still missing in Kubernetes today?

Building abstractions with zero effort is important, decomposition/composition is great and we also need to improve its UX. /cc @wonderflow

The catalog repo needs refactoring to make things work

The contents of the catalog are contributed and maintained by the community, so they are generally not guaranteed or supported by the OAM/KubeVela maintainers. But they should still work with an OAM-based app platform.

The refactoring needed IMO:

  1. Remove/deprecate unneeded traits/workloads which overlap with KubeVela.
  2. Use KubeVela, which is end-user facing, as the example, rather than OAM runtime, which is a lower-level component.
  3. Align them with the requirements of the latest OAM runtime release.
  4. New directory layout:
    my-cool-trait
      |- README # must have, use KubeVela as example
      |- trait-definition.yaml # must have
      |- my-cool-trait-code # optional

I believe @wonderflow also has some ideas about how to make oam-dev/catalog become the first-class capability center of KubeVela; assigned.

azure-keyvault-csi trait causes Application component deploy failure when other volumeMounts exist

The patch to add the secrets-store volume does not correctly patch when there are existing volumeMounts, causing the component deployment to fail with a message of the form:

step apply: run step(provider=oam,do=component-apply): GenerateComponentManifest: evaluate base template component={component-name} app={app-name}: failed to have the workload/trait unstructured: cue: marshal error: spec.template.spec.containers.0.volumeMounts.0.name: from source
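
One common way to avoid this kind of list-merge conflict in CUE patch traits is the +patchKey strategy, so that existing volumeMounts are merged by name instead of being overwritten by index. A hedged sketch of the relevant part of the trait's CUE template; the volume name, mount path, and structure are illustrative and not taken from the shipped azure-keyvault-csi definition:

spec:
  schematic:
    cue:
      template: |-
        patch: spec: template: spec: {
          // the +patchKey directive merges list entries by name instead of replacing the list
          // +patchKey=name
          containers: [{
            name: context.name
            // +patchKey=name
            volumeMounts: [{
              name:      "secrets-store"        // assumed volume name
              mountPath: "/mnt/secrets-store"   // assumed mount path
              readOnly:  true
            }]
          }]
        }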

CI issue: Unable to process command '##[add-path]/home/runner/configurator/bin' successfully.

GitHub CI hit the issue below: https://github.com/oam-dev/catalog/pull/66/checks?check_run_id=1608389744

Run engineerd/[email protected]
Downloading tool from https://get.helm.sh/helm-v3.1.2-linux-amd64.tar.gz...
/bin/tar xz -C /tmp/tmp/runner/temp -f /home/runner/work/_temp/adf49048-7c16-4634-921d-449ec1edf5df
chmod +x /home/runner/configurator/bin/helm
Error: Unable to process command '##[add-path]/home/runner/configurator/bin' successfully.
Error: The `add-path` command is disabled. Please upgrade to using Environment Files or opt into unsecure command execution by setting the `ACTIONS_ALLOW_UNSECURE_COMMANDS` environment variable to `true`. For more information see: https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/
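
Per the deprecation notice in the error, there are two hedged options: upgrade the action to a release that uses Environment Files, or temporarily opt back into unsecure commands. A sketch of the workaround in the workflow step (the step shape is an assumption; the action version is elided in the log above, and existing inputs stay unchanged):

# illustrative fragment of the workflow step, not the repo's actual file
- name: Set up helm
  uses: engineerd/configurator@<pinned-version>   # or upgrade to a release that uses Environment Files
  env:
    # temporary opt-in suggested by the deprecation notice above
    ACTIONS_ALLOW_UNSECURE_COMMANDS: "true"
  # keep the step's existing `with:` inputs unchanged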
