
meilisearch-kubernetes's Introduction

Meilisearch Kubernetes


The Meilisearch tool for Kubernetes ⚓️

Meilisearch is an open-source search engine. Discover what Meilisearch is!

📖 Documentation

See our Documentation or our API References.

⚡ Supercharge your Meilisearch experience

Say goodbye to server deployment and manual updates with Meilisearch Cloud. Get started with a 14-day free trial! No credit card required.

🚀 Getting Started

Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications. You can run a Meilisearch instance inside your Kubernetes cluster, whether you want to expose it to the outside world or simply let other applications inside your cluster take advantage of its instant and powerful search engine.

First of all, you will need a Kubernetes cluster up and running. If you are not familiar with how Kubernetes works or need some help with this step, please check the Kubernetes documentation.

Install kubectl

kubectl is the most commonly used CLI to manage a Kubernetes cluster. The installation instructions are available here.

Deploy Meilisearch using manifests

Install and run Meilisearch

kubectl apply -f manifests/meilisearch.yaml

Uninstall Meilisearch

kubectl delete -f manifests/meilisearch.yaml

Deploy Meilisearch using Helm

Helm works as a package manager to run pre-configured Kubernetes resources. Using our Helm chart you will be able to deploy a Meilisearch instance in your Kubernetes cluster, with several customizable configurations.

Install helm

The Helm CLI automates chart management and installation on your Kubernetes cluster. To install Helm, follow the Helm installation instructions.

The Parameters section lists the parameters that can be configured during installation.
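To give a concrete picture, a minimal values override might look like the sketch below. This is only an illustration; the authoritative key names and defaults live in the chart's values.yaml, and the environment, persistence, and service blocks shown here mirror the values snippets that appear later on this page.

environment:
  MEILI_ENV: development   # switch to "production" and provide a master key for production use
  MEILI_NO_ANALYTICS: true
persistence:
  enabled: true
  size: 10Gi
service:
  type: ClusterIP
  port: 7700

You would then pass the file to Helm with helm upgrade -i <your-service-name> meilisearch/meilisearch -f values.yaml.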

Install Meilisearch chart

First, add the Meilisearch chart repository

helm repo add meilisearch https://meilisearch.github.io/meilisearch-kubernetes

Now install/upgrade the chart

# Replace <your-service-name> with the name you would like to give to your service
helm upgrade -i <your-service-name> meilisearch/meilisearch

Uninstalling the Chart

To uninstall/delete the Meilisearch deployment:

# Replace <your-service-name> with the name of your deployed service
helm uninstall <your-service-name>

🤖 Compatibility with Meilisearch

This chart only guarantees compatibility with Meilisearch v1.7.0.

⚙️ Development Workflow and Contributing

Any new contribution is more than welcome in this project!

If you want to know more about the development workflow or want to contribute, please visit our contributing guidelines for detailed instructions!


Meilisearch provides and maintains many SDKs and Integration tools like this one. We want to provide everyone with an amazing search experience for any kind of project. If you want to contribute, make suggestions, or just know what's going on right now, visit us in the integration-guides repository.

meilisearch-kubernetes's People

Contributors

adinhodovic, alallema, allangalera, beshkenadze, bors[bot], brunoocasali, c0bra, carlreid, curquiza, dependabot[bot], deshetti, edvinaskrucas, eskombro, helmutlety, jlandic, johankok, kacy, legal90, macropin, meili-bors[bot], meili-bot, nlamirault, pauldn-wttj, pykupejibc, tchoupinax, thearas, tpayet, vashiru, viceice, yagince


meilisearch-kubernetes's Issues

Add action workflows for testing kubernetes deployments

As part of verifying changes to this repository in CI, it is necessary to implement automatic deployments to a Kubernetes cluster in order to verify that Meilisearch is deployed correctly and can serve requests.

Tools like kind allow you to run a Kubernetes cluster on Docker, which makes it a perfect fit for the CI environment.
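A minimal sketch of such a workflow, assuming the manifests create a Service and StatefulSet both named meilisearch (the meilisearch-0 pod mentioned further down this page suggests that naming) and using the publicly available helm/kind-action; versions and timeouts are placeholders:

name: test-deployment
on: [pull_request]
jobs:
  kind:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Create kind cluster
        uses: helm/kind-action@v1
      - name: Deploy manifests
        run: kubectl apply -f manifests/meilisearch.yaml
      - name: Wait for rollout
        run: kubectl rollout status statefulset/meilisearch --timeout=180s
      - name: Smoke test the health endpoint
        run: |
          kubectl port-forward svc/meilisearch 7700:7700 &
          sleep 5
          curl --fail http://127.0.0.1:7700/health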

BUG: Chart renders Invalid YAML when Ingress option is enabled

Description
The helm chart yields invalid YAML when the ingress option is enabled.

Expected behavior
The helm chart should yield valid k8s manifests with valid YAML syntax.

Current behavior
Running the following command:

helm template meilisearch/meilisearch --set=ingress.enabled=true --version 0.1.34

Yields the following error:

Error: YAML parse error on meilisearch/templates/ingress.yaml: error converting YAML to JSON: yaml: line 11: mapping values are not allowed in this context

If you add the debug flag to the previous command, you can see the invalid syntax in the output:

helm template meilisearch/meilisearch --set=ingress.enabled=true --version 0.1.34 --debug


Screenshots or Logs
N/A

Environment (please complete the following information):

  • OS: Windows/Linux (Ubuntu) via WSL2
  • Meilisearch version: 0.27.1
  • meilisearch-kubernetes version: 0.1.34

Ingress does not contain a valid IngressClass

Description
The nginx ingress controller reports the error "Ignoring ingress because of error while validating ingress class".
With the networking.k8s.io/v1 API, the nginx ingress class name has to be declared on the Ingress.
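For illustration, in the networking.k8s.io/v1 API the class is declared through spec.ingressClassName; a minimal sketch (resource name and class name shown as typical values, not the chart's exact output):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: meilisearch
spec:
  ingressClassName: nginx   # tells the nginx controller to pick up this Ingress
  # ...rules as rendered by the chart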

Expected behavior
The nginx controller should accept the generated Ingress and route traffic to Meilisearch.

Current behavior
The controller ignores the Ingress because no valid ingress class is set.

Environment (please complete the following information):

  • Meilisearch version: meilisearch-0.1.33
  • meilisearch-kubernetes version: v1.22.8

No namespace property in PersistentVolumeClaim

Description
In the generated YAML manifest, the PersistentVolumeClaim does not have a namespace property.

Expected behavior
When the Helm chart is installed in a namespace other than "default", the chart should create the PVC in the same namespace as the release.

Current behavior
The chart fails to run due to "persistentvolumeclaim "meilisearch" not found."
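One way a template could address this, sketched here rather than taken from the actual fix, is to render the release namespace explicitly in the PVC metadata (the meilisearch.fullname helper name is assumed):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ include "meilisearch.fullname" . }}   # helper name assumed
  namespace: {{ .Release.Namespace }}            # follow the namespace the release is installed into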


Enabling Ingress Requires Port Override in Config

Description
If a developer wants to enable ingress, they also have to override the service port to 80, since port 80 is hardcoded in the ingress template.

I would have opened a PR, but I'm not sure how the maintainers want to address it. We could tackle it in documentation, assume port 80 on the service, or alter the ingress template to use the Helm variable (see the sketch below).
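As a sketch of that last option, the ingress backend could reference the chart's service port value instead of a hardcoded 80 (the helper name is assumed; the service.port key appears in the values snippets later on this page):

backend:
  service:
    name: {{ include "meilisearch.fullname" . }}   # helper name assumed
    port:
      number: {{ .Values.service.port }}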

Expected behavior
Enabling ingress should work without having to override the service port.

Current behavior
Enabling the ingress flag results in the following error: Translation failed: invalid ingress spec: could not find port "&ServiceBackendPort{Name:,Number:80,}" in service "default/meilisearch"


Environment (please complete the following information):

  • Meilisearch version: main branch
  • meilisearch-kubernetes version: main branch

New path for data.ms?

Since v0.24.0, Meilisearch runs as the meili user in the Docker container. This changes the default path of the data.ms folder: it is now /home/meili/data.ms instead of /data.ms.

Shouldn't we change the lines here according to the new path? I'm really asking, I don't know the answer, you have to check @alallema 😄

persistence:
  enabled: false
  accessMode: ReadWriteOnce
  ## Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner. (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  size: 10Gi
  annotations: {}
  volume:
    name: data
    mountPath: /data.ms

Related to: meilisearch/meilisearch#1969
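If the path does need to change, the adjustment would presumably be limited to the volume defaults, something like the sketch below (pending confirmation from the maintainers):

volume:
  name: data
  mountPath: /home/meili/data.ms   # new default location now that the image runs as the meili user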

Feature request: volume mounting

I really like the volume mounting feature to bind data.ms when using Meili in docker. Is there a way in values.yaml to do the same thing?

Add a CONTRIBUTING.md

Create a nice welcoming file for new contributors, specifying every detail, and especially the developer workflow, and how to test, lint and release.

We should add a workflow where manifests are re-created after a Helm chart modification (either automatically in the CI or by the developer).

It can be inspired by our SDKs' CONTRIBUTING.md files, such as this or this.

Change master branch to main

Let's be allies and make this change that means a lot.

Here is a blog post that explains a little more why it's important, and how to easily do it. It will be a bit more complicated with automation, but we still should do it!

Automate the manifest file

There is redundancy between the chart files and the meilisearch.yaml file in manifests, which is not a bad thing in itself. However, it would be good to have the manifest file regenerated automatically when the chart changes, or at least compared against the chart output in a pipeline.

What do you think? Right now we could change the chart, forget about the manifest, and the two would no longer correspond.

Can't run Meilisearch with persistence enabled

Description

Using the latest Helm chart on a k8s v1.18.17 cluster, with values.yaml only setting the storage class (targeting a working kadalu.io storage class).
The error occurs identically in production and development mode.
If I remove the persistence, Meilisearch works as expected.
The Kadalu server works perfectly (dynamic provisioning works and the generated PV is functional).

Expected behavior

Meilisearch starts.

Current behavior

Meilisearch doesn't start, with the following error in the logs: Error: heed error; Value too large for data type (os error 75)
The pod is put in a CrashLoopBackOff state.

Environment (please complete the following information):

  • Kubernetes cluster v1.18.17 on 11 bare-metal nodes managed with Rancher 2
  • MeiliSearch version: v0.19.0
  • meilisearch-kubernetes version: meilisearch-0.1.14

Meilisearch Pod fails on Openshift due to root permission denial

We are using OpenShift, and none of the pods have root access (for security reasons). This means the line in the Dockerfile CMD ["/bin/sh" "-c" "./meilisearch"] fails with the message Error: Permission denied (os error 13), and Meilisearch ends up in a CrashLoopBackOff.

We were able to reproduce this behavior by creating a debug pod and running the command with the same result:

~ $ /bin/sh -c ./meilisearch
Error: Permission denied (os error 13)

Do you know why root access is needed for this operation? Do you know of any workarounds?
Thanks in advance.
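Not an official answer, but on plain Kubernetes the pod-level securityContext is the usual way to control which user a container runs as; whether it helps here depends on the file ownership inside the Meilisearch image. A sketch with an arbitrary non-root UID:

securityContext:
  runAsNonRoot: true
  runAsUser: 1000   # arbitrary example UID; OpenShift normally assigns its own UID range
  fsGroup: 1000     # gives that group write access to mounted volumes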

Ingress compatibility with kubernetes 1.22+

Description
Since Kubernetes 1.20, the networking.k8s.io/v1beta1 Ingress API is deprecated, and it is removed in 1.22. To deploy on 1.22+, we have to use networking.k8s.io/v1.
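For comparison, the v1 API changes the backend shape and requires pathType; a sketch of the relevant part of the rendered ingress (host taken from the chart's example value):

apiVersion: networking.k8s.io/v1
kind: Ingress
spec:
  rules:
    - host: meilisearch-example.local
      http:
        paths:
          - path: /
            pathType: Prefix        # required by networking.k8s.io/v1
            backend:
              service:              # v1beta1 used serviceName/servicePort here
                name: meilisearch
                port:
                  number: 7700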

Expected behavior
The Helm chart should deploy the ingress on Kubernetes 1.23.

Current behavior
The Helm chart deployment fails.

Environment (please complete the following information):

  • OS: Debian based
  • MeiliSearch version: 0.25.2
  • meilisearch-kubernetes version: v0.1.24

Add custom labels to values and all templates

Description
Add the possibility to customize the labels on all templates to improve governance. At the company I work for, we use these labels to identify the squad, cost center, etc.

Basic example
Add a customLabels: {} entry to values.yaml and merge it into the labels of all templates.
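A sketch of what that could look like (the helper and existing label shown here are illustrative, not the chart's exact ones):

# values.yaml
customLabels: {}

# in each template's metadata
labels:
  app.kubernetes.io/name: {{ include "meilisearch.name" . }}   # existing labels kept as-is
  {{- with .Values.customLabels }}
  {{- toYaml . | nindent 2 }}
  {{- end }}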

I forked the repo to be more clear:

values.yaml
templates

I'm not an expert on this matter, so forgive me if I made any silly mistake.
I hope my solution is generic enough; I wanted anyone to be able to benefit from this change 😸

Installing more than one replica doesn't work

I am trying to install using the Helm chart on DigitalOcean, but I believe the same would happen on GCS or AWS, since the default storage class on these providers doesn't support the ReadWriteMany access mode.

When I tried installing the chart with replicaCount: 2, the first pod was created and running as expected while the second one was stuck in the ContainerCreating status.

Following are the events when I run the describe command on the pod:

Events:
  Type     Reason              Age    From                     Message
  ----     ------              ----   ----                     -------
  Normal   Scheduled           4m52s  default-scheduler        Successfully assigned dega/meili-meilisearch-1 to pool-8gb4cpu-3stpq
  Warning  FailedAttachVolume  4m52s  attachdetach-controller  Multi-Attach error for volume "pvc-f256af54-5cb1-451c-86f8-d03ba4861014" Volume is already used by pod(s) meili-meilisearch-0
  Warning  FailedMount         2m49s  kubelet                  Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[meili-meilisearch-token-zd9rl data]: timed out waiting for the condition
  Warning  FailedMount         33s    kubelet                  Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data meili-meilisearch-token-zd9rl]: timed out waiting for the condition

Installation notes displayed for the Helm chart show wrong info for port-forwarding

When I installed the chart with ClusterIP, the following notes are displayed:

1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=meilisearch,app.kubernetes.io/instance=meili-14-test" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:7700 to use your application"
  kubectl port-forward $POD_NAME 7700:80

Issue 1: The port-forward command displayed should be based on the service and not the pod; it should display the following info for port-forwarding:

kubectl port-forward svc/<service-name> 7700:7700

Issue 2: Display the access URL after the port-forwarding

Pod has unbound immediate PersistentVolumeClaims

Hi guys! I installed Meilisearch on my k8s cluster and it's pending forever. It shows FailedScheduling with 'Pod has unbound immediate PersistentVolumeClaims', even though I enabled PVC persistence in values.yaml. Here is the relevant part of my values.yaml:
persistence:
  type: pvc
  enabled: true
  storageClassName: default
  accessMode: ReadWriteOnce
  size: 10Gi
  finalizers:
    - kubernetes.io/pvc-protection

Any help would be really appreciated!
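One detail worth double-checking against the chart's values.yaml: the other persistence snippets on this page use the key storageClass rather than storageClassName, so the class requested above may simply be ignored. A hedged guess at the matching spelling:

persistence:
  enabled: true
  storageClass: default    # key name as it appears in the chart's own values examples on this page
  accessMode: ReadWriteOnce
  size: 10Gi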

Add MEILI_MASTER_KEY in values

To make this chart production-ready, it would be good to have the MEILI_MASTER_KEY environment variable settable in the values file.
If MEILI_ENV is set to production, the chart would inject the key into the pod.
This can also be used to test the production environment, because there are some differences between development and production.

This week I had the issue that the new version was working fine in development mode (used in staging), but when I upgraded production it went all 💥
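For reference, a hedged sketch of how the key could be supplied through values, based on the options visible in the values snippets further down this page (a plain environment.MEILI_MASTER_KEY or a pre-created secret via auth.existingMasterKeySecret):

environment:
  MEILI_ENV: production
  # MEILI_MASTER_KEY: <your-master-key>   # plain value, per the comment block in the chart's values

auth:
  existingMasterKeySecret: meilisearch-master-key   # or reference an existing Kubernetes secret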

Recommended way to do backups?

I want to back up my Meilisearch data but I'm not sure what approach to take. I was thinking of something using a cron job that uploads the dump to S3. What's the recommended way to do this?
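Not an official recommendation, but one common pattern is a Kubernetes CronJob that triggers a dump through the Meilisearch HTTP API (POST /dumps) and then ships the resulting file wherever you like. In the sketch below the service name, schedule, secret name, and auth header are assumptions to adapt; copying the dump out of the data volume and uploading it to S3 is left out.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: meilisearch-dump
spec:
  schedule: "0 3 * * *"   # every day at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: trigger-dump
              image: curlimages/curl:latest
              args:
                - "-X"
                - "POST"
                - "http://meilisearch:7700/dumps"               # service name assumed
                - "-H"
                - "Authorization: Bearer $(MEILI_MASTER_KEY)"   # header format depends on your Meilisearch version
              env:
                - name: MEILI_MASTER_KEY
                  valueFrom:
                    secretKeyRef:
                      name: meilisearch-master-key   # secret name assumed
                      key: MEILI_MASTER_KEY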

Error: Version file is missing or the previous MeiliSearch engine version was below 0.24.0. Use a dump to update MeiliSearch.

Description
Meilisearch won't start with v0.25.0 but will work with v0.24.0

Expected behavior
Meilisearch starts

Current behavior
Meilisearch doesn't start

Screenshots or Logs
Error: Version file is missing or the previous MeiliSearch engine version was below 0.24.0. Use a dump to update MeiliSearch.

Environment (please complete the following information):

  • MeiliSearch version: v0.25.0
  • meilisearch-kubernetes version: v0.1.24

So I installed Meilisearch v0.24.0 and everything looked all right, but I decided to update to the new version to get persistence enabled on the correct path (issue #89) and now I can't get it to start.
The problem is that even when I tried a fresh install with v0.25.0 I got the same error, hence the downgrade to v0.24.

Doesn't work on M1 / ARM

I tried deploying the Helm chart on a minikube cluster on my M1 MacBook and got the following error:

simo at makro in ~/C/p/m/charts
↪ kubectl logs meilisearch-0
Error: An internal error has occurred. `Function not implemented (os error 38)`.

Environment (please complete the following information):

  • OS: macOS 11.5.2 on MacBook Pro 2021 M1
  • MeiliSearch version: v0.24.0
  • meilisearch-kubernetes version: v0.1.21

meili-docs helm chart

I've been thinking about the idea of having a ready-to-use Helm chart that would do the following:

  • Deploy the meilisearch chart
  • Deploy the docs-scraper as another component. Options could be:
    • Cronjob to run on a specific schedule, e.g daily, hourly
    • A listener pod that could wait for requests to trigger the docs-scraper cli
  • A default frontend website that would have a search interface and links to each of the scraped endpoints

Any thoughts? Any other suggestions that could be added?

Migrate some static values from statefulset.yaml to values.yaml

The values of these fields should be defined in values.yaml instead of being hardcoded in statefulset.yaml, to allow better customization:

  • spec.template.spec.volumes.name (data)

  • spec.template.spec.containers.volumeMounts.name (data) (linked to name in the previous line)

  • spec.template.spec.containers.volumeMounts.mountPath (data.ms)

  • spec.template.spec.containers.ports.containerPort (7700)

GKE ingress health check fails

If someone creates a GKE ingress, GKE requires that the health check endpoint respond with a 200 status code, as described here. Currently, the workaround is to switch the probe back to the / endpoint rather than /health, because / responds with a 200 rather than a 204.

readinessProbe:
  httpGet:
    path: /
    port: 7700

I haven't looked at the Helm chart, but I did modify the manifests/meilisearch.yaml file to include the above change, and that allowed me to pass the GKE ingress health checks.

Add BORS to repo

Bors is an automated tool for running tests, staging, and merging, widely used in Meilisearch repositories, but missing from this one.

It makes open-source contributions and repository maintenance extremely pleasant 🎉

Helm: make livenessProbe customizable by adding initialDelaySeconds and periodSeconds

With #28, the livenessProbe checks the Meilisearch API /health route in order to verify container liveness.

The livenessProbe could/should be customizable by the user, by adding two items that should be present in values.yaml (see the sketch after this list):

  • initialDelaySeconds: Defines the delay that should happen before the first health check
  • periodSeconds: Defines the interval in which the container health should be checked
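A sketch of how the template side could consume those values (the probe path and port come from the existing snippets on this page; the value names are the ones proposed above):

livenessProbe:
  httpGet:
    path: /health
    port: 7700
  initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
  periodSeconds: {{ .Values.livenessProbe.periodSeconds }}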

Checks stuck in Waiting for status to be reported

Description
When using the comment-triggered automatic fix for syncing the chart, GitHub checks/actions get stuck in:
Expected — Waiting for status to be reported
Bors gets stuck too after a bors try, even if the checks have passed, and it never comments with:
Build succeeded

Content of the repo

Would this repo store only plain Kubernetes manifests, or is the plan to support multiple deployment options, e.g.:

  • Helm charts
  • Kustomize
  • Plain kubernetes manifests

Implement existingClaim functionality

Description
Make it possible to bind an existing PVC via existingClaim.

Basic example
Something like this:
{{- if and .Values.persistence.enabled .Values.persistence.existingClaim }}
- name: data
  persistentVolumeClaim:
    claimName: {{ .Values.persistence.existingClaim }}
{{- end }}

Meilisearch (v0.26.0) failed to infer the version of the database. Please consider using a dump to load your data.

Description
Trying to install on EKS and this error keeps coming up. It's a fresh installation, so there's no previous dump.
Meilisearch doesn't start; it just logs the above error. It works when persistence is disabled.

Here's what my values.yaml looks like:

affinity: {}
auth:
  existingMasterKeySecret: meilisearch-master-key
container:
  containerPort: 7700
customLabels: {}
environment:
  MEILI_ENV: production
  MEILI_NO_ANALYTICS: true
fullnameOverride: meilisearch
image:
  pullPolicy: IfNotPresent
  pullSecret: ~
  repository: getmeili/meilisearch
  tag: v0.26.0
ingress:
  annotations: {}
  enabled: false
  hosts:
    - meilisearch-example.local
  path: /
  tls: []
livenessProbe:
  InitialDelaySeconds: 60
  periodSeconds: 60
nameOverride: ""
nodeSelector: {}
persistence:
  accessMode: ReadWriteOnce
  annotations: {}
  enabled: true
  size: 100Gi
  storageClass: gp2
  volume:
    mountPath: /data.ms
    name: data
podAnnotations: {}
readinessProbe:
  InitialDelaySeconds: 60
  periodSeconds: 60
replicaCount: 1
resources: {}
service:
  annotations: {}
  port: 7700
  type: ClusterIP
serviceAccount:
  annotations: {}

Pod status is never ready

I tried running the Helm Chart with

$ helm install meilisearch charts/meilisearch -f values.yaml
image:
  repository: server_meilisearch
  tag: latest
  pullPolicy: IfNotPresent

# Environment loaded into the configMap
environment:
  MEILI_NO_ANALYTICS: true
  MEILI_ENV: development
  # For production deployment, the environment MEILI_MASTER_KEY is required.
  # If MEILI_ENV is set to "production" without setting MEILI_MASTER_KEY, this
  # chart will automatically create a secure MEILI_MASTER_KEY and push it as a
  # secret. Otherwise the below value of MEILI_MASTER_KEY will be used instead.
  # MEILI_MASTER_KEY:

This is the server_meilisearch Dockerfile image that was passed into minikube using minikube image load server_meilisearch

FROM ubuntu

WORKDIR /meilisearch

RUN apt-get update
RUN apt-get install -y \
	libc6-dev \
	curl

RUN curl -L https://install.meilisearch.com | sh
RUN chmod +x meilisearch

EXPOSE 7700

CMD ["/bin/sh", "-c", "./meilisearch"]

This also occurs on other versions of MeiliSearch.

  • OS: macOS 11.5.2 on MacBook Pro 2021 M1
  • MeiliSearch version: v0.23.1
  • meilisearch-kubernetes version: v0.1.21

Support multi replicas

Description

Same as #25. Any update?
(I can't reopen it so I open a new one)

Can I just change pvc's accessModes to ReadWriteMany and replicas to 2?

PVC gets deleted even when persistence is enabled

I installed the Helm chart with persistence.enabled: true, and when I uninstalled the chart I saw that the PVC was deleted. I would assume that the PVC would not be deleted when I uninstall the chart with persistence enabled; it should only be deleted manually.
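One generic Helm mechanism that could cover this, offered only as a sketch and not as the chart's documented behavior, is the helm.sh/resource-policy annotation, which tells Helm to leave a resource in place on uninstall; the chart's persistence.annotations value looks like the natural place to set it, assuming those annotations are applied to the PVC:

persistence:
  enabled: true
  annotations:
    helm.sh/resource-policy: keep   # Helm skips deleting the annotated resource on uninstall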

Installation with enabled ingress fails on Kubernetes >=1.19

Description

When installing the chart with ingress enabled, Helm refuses to install the application with the error below.

This issue was introduced by #103. .Values.service.port exists in the global scope, but not within the range block. I've created a pull request to resolve this.
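For context, inside a range block the dot is rebound to the current list item, so the template has to reach the root scope through $; a minimal sketch of the fix (helper name assumed, values layout simplified):

{{- range .Values.ingress.hosts }}
  - host: {{ . }}
    http:
      paths:
        - path: {{ $.Values.ingress.path }}
          backend:
            service:
              name: {{ include "meilisearch.fullname" $ }}
              port:
                number: {{ $.Values.service.port }}   # $ reaches the root scope inside range
{{- end }}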

Expected behavior
Successfully deployed helm chart.

Current behavior

[johan@fedora1 ~]$ helm upgrade -i fancy-instance-name --set ingress.enabled=true meilisearch/meilisearch
Error: UPGRADE FAILED: template: meilisearch/templates/ingress.yaml:51:34: executing "meilisearch/templates/ingress.yaml" at <.Values.service.port>: can't evaluate field Values in type interface {}

Environment (please complete the following information):

  • OS: Fedora
  • Meilisearch version: v0.26.0
  • meilisearch-kubernetes version: 0.1.27

Cleanup manifests with the info related to Helm

I was looking at the manifests and I am not sure we need these lines. I am thinking the reason for the manifests is to install on Kubernetes without Helm:

helm.sh/chart: meilisearch-0.1.4
app.kubernetes.io/managed-by: Helm

In fact, they are wrong IMO, because the resources are not managed by Helm if the manifest files are installed separately on Kubernetes.

Add github action for manifests

Create a GitHub Action that generates manifests using helm template meilisearch charts/meilisearch and compares the output with the manifest stored at manifests/meilisearch.yaml, to be sure that the manifests are up to date with the Helm chart changes.

It would be interesting to explore if there is a better way of automatically generating manifests in the CI.
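A minimal sketch of the comparison step (action versions and job layout are placeholders; azure/setup-helm is just one publicly available way to get Helm onto the runner):

name: check-manifests
on: [pull_request]
jobs:
  diff:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/setup-helm@v4
      - name: Compare generated manifests with the committed file
        run: |
          helm template meilisearch charts/meilisearch > /tmp/generated.yaml
          diff -u manifests/meilisearch.yaml /tmp/generated.yaml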

Add the possibility to specify nodePort in values.yaml if the service type is NodePort

In values.yaml it is possible to define the service type. But if a user defines it as a NodePort service, there is no field in the chart to specify the nodePort, so Kubernetes will allocate a random port.

In a NodePort Service, the nodePort field is defined alongside port and targetPort in the service spec.

But the Meilisearch chart does not currently let the user specify this nodePort value in values.yaml and integrate it into the template (see here).

The service template could check whether the service type is NodePort and, in that case, add the nodePort field to the Service template, as sketched below.
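A sketch of that conditional in the service template (service.nodePort is a proposed value name, not an existing chart key; the target port comes from the chart snippets above):

ports:
  - name: http
    port: {{ .Values.service.port }}
    targetPort: 7700
    protocol: TCP
    {{- if and (eq .Values.service.type "NodePort") .Values.service.nodePort }}
    nodePort: {{ .Values.service.nodePort }}
    {{- end }}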

Pod has unbound immediate PersistentVolumeClaims

Hi guys! I installed Meilisearch (from https://github.com/meilisearch/meilisearch-kubernetes) on my CentOS k8s cluster and the pod is pending forever. It shows FailedScheduling with 'Pod has unbound immediate PersistentVolumeClaims', even though there is a pvc.yaml in the templates folder and I enabled PVC persistence in values.yaml. Here is the relevant part of my yaml file:

persistence:
  type: pvc
  enabled: true
  storageClassName: default
  accessMode: ReadWriteOnce
  size: 10Gi
  finalizers:
    - kubernetes.io/pvc-protection

Any help would be really appreciated!
