experimental-addons's Introduction

Harvester

Build Status Go Report Card Releases Slack

Harvester is a modern, open, interoperable, hyperconverged infrastructure (HCI) solution built on Kubernetes. It is an open-source alternative designed for operators seeking a cloud-native HCI solution. Harvester runs on bare metal servers and provides integrated virtualization and distributed storage capabilities. In addition to traditional virtual machines (VMs), Harvester supports containerized environments automatically through integration with Rancher. It offers a solution that unifies legacy virtualized infrastructure while enabling the adoption of containers from core to edge locations.

harvester-ui

Overview

Harvester is an enterprise-ready, easy-to-use infrastructure platform that leverages local, direct attached storage instead of complex external SANs. It utilizes Kubernetes API as a unified automation language across container and VM workloads. Some key features of Harvester include:

  1. Easy to install: Since Harvester ships as a bootable appliance image, you can install it directly on a bare metal server with the ISO image or automatically install it using iPXE scripts.
  2. VM lifecycle management: Easily create, edit, clone, and delete VMs, including SSH-Key injection, cloud-init, and graphic and serial port console.
  3. VM live migration support: Move a VM to a different host or node with zero downtime.
  4. VM backup, snapshot, and restore: Back up your VMs to NFS, S3-compatible servers, or NAS devices. Use a backup to restore a failed VM or to create a new VM on a different cluster.
  5. Storage management: Harvester supports distributed block storage and tiering. Volumes represent storage; you can easily create, edit, clone, or export a volume.
  6. Network management: Supports using a virtual IP (VIP) and multiple Network Interface Cards (NICs). If your VMs need to connect to the external network, create a VLAN or untagged network.
  7. Integration with Rancher: Access Harvester directly within Rancher through Rancher’s Virtualization Management page and manage your VM workloads alongside your Kubernetes clusters.
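Because Harvester exposes VMs through the Kubernetes API, a VM is itself a declarative object you can manage with `kubectl`. As an illustrative sketch only (the resource names, namespace, and sizes below are hypothetical, and the exact schema Harvester applies may differ), a KubeVirt-style VirtualMachine manifest looks roughly like:

```yaml
# Hypothetical example of a KubeVirt VirtualMachine object as managed by
# Harvester; names and sizes are placeholders, not taken from Harvester docs.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm          # hypothetical VM name
  namespace: default
spec:
  running: true          # start the VM as soon as it is created
  template:
    spec:
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 4Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: demo-vm-rootdisk   # assumed pre-existing PVC
```

Applying a manifest like this with `kubectl apply -f vm.yaml` is what the Harvester UI does on your behalf.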

The following diagram outlines a high-level architecture of Harvester:

architecture.svg

  • Longhorn is a lightweight, reliable, and easy-to-use distributed block storage system for Kubernetes.
  • KubeVirt is a virtual machine management add-on for Kubernetes.
  • Elemental for SLE-Micro 5.3 is an immutable Linux distribution designed to remove as much OS maintenance as possible in a Kubernetes cluster.

Hardware Requirements

To get the Harvester server up and running, the following minimum hardware is required:

  • CPU: x86_64 only. Hardware-assisted virtualization is required. 8-core processor minimum for testing; 16-core or above required for production.
  • Memory: 32 GB minimum; 64 GB or above required for production.
  • Disk capacity: 250 GB minimum for testing (180 GB minimum when using multiple disks); 500 GB or above required for production.
  • Disk performance: 5,000+ random IOPS per disk (SSD/NVMe). Management nodes (the first three nodes) must be fast enough for etcd.
  • Network card: 1 Gbps Ethernet minimum for testing; 10 Gbps Ethernet required for production.
  • Network switch: Port trunking required for VLAN support.

We recommend server-class hardware for best results. Laptops and nested virtualization are not officially supported.

Quick start

You can use the ISO to install Harvester directly on a bare metal server to form a Harvester cluster, and then add one or more compute nodes to join the existing cluster.

To get the Harvester ISO, download it from the GitHub releases.

During the installation, you can either choose to create a new Harvester cluster or join the node to an existing Harvester cluster.

  1. Mount the Harvester ISO file and boot the server by selecting the Harvester Installer option. iso-install.png
  2. Use the arrow keys to choose an installation mode. By default, the first node will be the management node of the cluster. iso-install-mode.png
    • Create a new Harvester cluster: Select this option to create an entirely new Harvester cluster.
    • Join an existing Harvester cluster: Select this option to join an existing Harvester cluster. You need the VIP and cluster token of the cluster you want to join.
    • Install Harvester binaries only: If you choose this option, additional setup is required after the first bootup.
  3. Choose the installation disk you want to install the Harvester cluster on and the data disk you want to store VM data on. By default, Harvester uses the GUID Partition Table (GPT) partitioning scheme for both UEFI and BIOS. If you use BIOS boot, you will have the option to select Master Boot Record (MBR). iso-choose-disks.png
    • Installation disk: The disk to install the Harvester cluster on.
    • Data disk: The disk to store VM data on. Choosing a separate disk to store VM data is recommended.
    • Persistent size: If you only have one disk or use the same disk for both OS and VM data, you need to configure persistent partition size to store system packages and container images. The default and minimum persistent partition size is 150 GiB. You can specify a size like 200Gi or 153600Mi.
  4. Configure the HostName of the node.
  5. Configure network interface(s) for the management network. By default, Harvester will create a bonded NIC named mgmt-bo, and the IP address can either be configured via DHCP or statically assigned. iso-config-network.png
  6. (Optional) Configure the DNS Servers. Use commas as a delimiter to add more DNS servers. Leave blank to use the default DNS server.
  7. Configure the virtual IP (VIP) by selecting a VIP Mode. This VIP is used to access the cluster or for other nodes to join the cluster. iso-config-vip.png
  8. Configure the cluster token. This token will be used for adding other nodes to the cluster.
  9. Configure and confirm a Password to access the node. The default SSH user is rancher.
  10. Configure NTP servers to make sure all nodes' times are synchronized. This defaults to 0.suse.pool.ntp.org. Use commas as a delimiter to add more NTP servers.
  11. (Optional) If you need to use an HTTP proxy to access the outside world, enter the proxy URL address here. Otherwise, leave this blank.
  12. (Optional) You can choose to import SSH keys by providing an HTTP URL. For example, your GitHub public keys https://github.com/<username>.keys can be used.
  13. (Optional) If you need to customize the host with a Harvester configuration file, enter the HTTP URL here.
  14. Review and confirm your installation options. After confirming the installation options, Harvester will be installed on your host. The installation may take a few minutes to complete.
  15. Once the installation is complete, your node restarts. After the restart, the Harvester console displays the management URL and status. The default URL of the web interface is https://your-virtual-ip. You can use F12 to switch from the Harvester console to the Shell and type exit to go back to the Harvester console. iso-installed.png
  16. You will be prompted to set the password for the default admin user when logging in for the first time. first-login.png
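Most of the choices above (steps 4–13) can also be supplied unattended via the Harvester configuration file mentioned in step 13, which is how iPXE installs are typically driven. The sketch below assumes a create-mode install with DHCP management networking; every value is a placeholder, and the authoritative field names live in the Harvester configuration reference, so treat this as a shape, not a spec:

```yaml
# Minimal Harvester install configuration sketch (placeholder values).
scheme_version: 1
token: my-cluster-token            # cluster token (step 8)
os:
  hostname: harvester-node-0       # step 4
  password: change-me              # step 9; the default SSH user is rancher
  ntp_servers:
    - 0.suse.pool.ntp.org          # step 10
install:
  mode: create                     # or "join", with the cluster VIP as server URL
  device: /dev/sda                 # installation disk (step 3)
  vip: 192.168.0.100               # virtual IP (step 7)
  vip_mode: static
  management_interface:            # step 5; interface name is an assumption
    interfaces:
      - name: ens3
    method: dhcp
```

Serving a file like this over HTTP and pointing step 13 at its URL lets identical nodes join the cluster without touching the console.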

Releases

NOTE:

  • <version>* means the release branch is under active support and will have periodic follow-up patch releases.
  • Latest release means the version is the latest release of the newest release branch.
  • Stable release means the version is stable and has been widely adopted by users.
  • EOL means that the software has reached the end of its useful life and no further code-level maintenance will be provided. You may continue to use the software within the terms of the licensing agreement.

https://github.com/harvester/harvester/releases

Release Version Type Release Note (Changelog) Upgrade Note
1.3* 1.3.1 Latest 🔗 🔗
1.2* 1.2.2 Stable 🔗 🔗
1.1* 1.1.3 EOL 🔗 🔗

Documentation

Find more documentation here.

Demo

Check out this demo to get a quick overview of the Harvester UI.

Source code

Harvester is 100% open-source software. The project source code is spread across a number of repos:

Name Repo Address
Harvester https://github.com/harvester/harvester
Harvester Dashboard https://github.com/harvester/dashboard
Harvester Installer https://github.com/harvester/harvester-installer
Harvester Network Controller https://github.com/harvester/harvester-network-controller
Harvester Cloud Provider https://github.com/harvester/cloud-provider-harvester
Harvester Load Balancer https://github.com/harvester/load-balancer-harvester
Harvester CSI Driver https://github.com/harvester/harvester-csi-driver
Harvester Terraform Provider https://github.com/harvester/terraform-provider-harvester

Community

If you need any help with Harvester, please join us in our Slack #harvester channel or on the forums, where most of our team hangs out.

If you have any feedback or questions, feel free to file an issue.

License

Copyright (c) 2024 Rancher Labs, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

experimental-addons's People

Contributors

bk201, guangbochen, ibrokethecloud, starbops, yu-jack

experimental-addons's Issues

[Bug] Rancher v2.7.11 not found in chart index

Bug Description

The Rancher VCluster plugin is not able to install Rancher v2.7.11. Helm displays an error stating that the chart for Rancher version v2.7.11 could not be found in the helm index.

Despite the error, the Rancher VCluster plugin reports the deployment as successful:
Screenshot at 2024-03-22 15-11-04

Observed Behavior

  • The deployment of Rancher v2.7.11 was reported to be completed successfully
  • The deployment of Rancher v2.7.11 was not actually completed successfully

Expected Behavior

  • Rancher VCluster status report should indicate errors when they appear
  • Rancher v2.7.11 should have deployed successfully

Steps to reproduce

  • Deploy Harvester v1.2.1
  • Deploy Rancher VCluster Addon on Harvester:
  • Deployment doesn't finish successfully
Rancher VCluster Addon Manifest
apiVersion: harvesterhci.io/v1beta1
kind: Addon
metadata:
  creationTimestamp: '2024-03-22T13:53:18Z'
  generation: 1
  labels:
    addon.harvesterhci.io/experimental: 'true'
  managedFields:
  - apiVersion: harvesterhci.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:addon.harvesterhci.io/experimental: {}
      f:spec:
        .: {}
        f:chart: {}
        f:enabled: {}
        f:repo: {}
        f:valuesContent: {}
        f:version: {}
    manager: OpenAPI-Generator
    operation: Update
    time: '2024-03-22T13:53:18Z'
  name: rancher-vcluster
  namespace: rancher-vcluster
  resourceVersion: '15330'
  uid: 06a86ba9-66f9-498d-9885-5059b422d832
spec:
  chart: vcluster
  enabled: true
  repo: https://charts.loft.sh
  valuesContent: |-
    hostname: "rancher.172.19.108.35.nip.io"
    rancherVersion: "v2.7.11"
    bootstrapPassword: "password1234"
    vcluster:
      image: rancher/k3s:v1.27.10-k3s2
    sync:
      ingresses:
        enabled: "true"
    init:
      manifestsTemplate: |-
        apiVersion: v1
        kind: Namespace
        metadata:
          name: cattle-system
        ---
        apiVersion: v1
        kind: Namespace
        metadata:
          name: cert-manager
          labels:
            certmanager.k8s.io/disable-validation: "true"
        ---
        apiVersion: helm.cattle.io/v1
        kind: HelmChart
        metadata:
          name: cert-manager
          namespace: kube-system
        spec:
          targetNamespace: cert-manager
          repo: https://charts.jetstack.io
          chart: cert-manager
          version: v1.5.1
          helmVersion: v3
          set:
            installCRDs: "true"
        ---
        apiVersion: helm.cattle.io/v1
        kind: HelmChart
        metadata:
          name: rancher
          namespace: kube-system
        spec:
          targetNamespace: cattle-system
          repo: https://releases.rancher.com/server-charts/stable/
          chart: rancher
          version: {{ .Values.rancherVersion }}
          set:
            ingress.tls.source: rancher
            hostname: {{ .Values.hostname }}
            replicas: 1
            global.cattle.psp.enabled: "false"
            bootstrapPassword: {{ .Values.bootstrapPassword | quote }}
            fleet.apiServerURL: "https://rancher.172.19.108.35.nip.io/"
          helmVersion: v3
  version: v0.19.0

Environment

  • Bare-metal deployed with seeder on hp-199
  • Harvester v1.2.1
  • Rancher v2.7.11

Additional Context

Helm installation logs from rancher-vcluster namespace
> k -n rancher-vcluster logs pod/helm-install-rancher-qxdk9-x-kube-system-x-rancher-vcluster
if [[ ${KUBERNETES_SERVICE_HOST} =~ .*:.* ]]; then
        echo "KUBERNETES_SERVICE_HOST is using IPv6"
        CHART="${CHART//%\{KUBERNETES_API\}%/[${KUBERNETES_SERVICE_HOST}]:${KUBERNETES_SERVICE_PORT}}"
else
        CHART="${CHART//%\{KUBERNETES_API\}%/${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}}"
fi

set +v -x
+ [[ '' != \t\r\u\e ]]
+ export HELM_HOST=127.0.0.1:44134
+ HELM_HOST=127.0.0.1:44134
+ helm_v2 init --skip-refresh --client-only --stable-repo-url https://charts.helm.sh/stable/
+ tiller --listen=127.0.0.1:44134 --storage=secret
[main] 2024/03/22 14:05:28 Starting Tiller v2.17.0 (tls=false)
[main] 2024/03/22 14:05:28 GRPC listening on 127.0.0.1:44134
[main] 2024/03/22 14:05:28 Probes listening on :44135
[main] 2024/03/22 14:05:28 Storage driver is Secret
[main] 2024/03/22 14:05:28 Max history per release is 0
$HELM_HOME has been configured at /home/klipper-helm/.helm.
Not installing Tiller due to 'client-only' flag having been set
++ timeout -s KILL 30 helm_v2 ls --all '^rancher$' --output json
++ jq -r '.Releases | length'
[storage] 2024/03/22 14:05:28 listing all releases with filter
+ V2_CHART_EXISTS=
+ [[ '' == \1 ]]
+ [[ v3 == \v\2 ]]
+ shopt -s nullglob
+ [[ -f /config/ca-file.pem ]]
+ [[ -f /tmp/ca-file.pem ]]
+ [[ -n '' ]]
+ helm_content_decode
+ set -e
+ ENC_CHART_PATH=/chart/rancher.tgz.base64
+ CHART_PATH=/tmp/rancher.tgz
+ [[ ! -f /chart/rancher.tgz.base64 ]]
+ return
+ [[ install != \d\e\l\e\t\e ]]
+ helm_repo_init
+ grep -q -e 'https\?://'
+ [[ helm_v3 == \h\e\l\m\_\v\3 ]]
+ [[ rancher/rancher == stable/* ]]
+ [[ -n https://releases.rancher.com/server-charts/stable/ ]]
+ [[ -f /auth/username ]]
+ helm_v3 repo add rancher https://releases.rancher.com/server-charts/stable/
"rancher" already exists with the same configuration, skipping
+ helm_v3 repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "rancher" chart repository
Update Complete. ⎈Happy Helming!⎈
+ helm_update install --namespace cattle-system --version v2.7.11 --set-string bootstrapPassword=password1234 --set-string fleet.apiServerURL=https://rancher.172.19.108.35.nip.io/ --set global.cattle.psp.enabled=false --set-string hostname=rancher.172.19.108.35.nip.io --set-string ingress.tls.source=rancher --set replicas=1
+ [[ helm_v3 == \h\e\l\m\_\v\3 ]]
++ helm_v3 ls --all -f '^rancher$' --namespace cattle-system --output json
++ jq -r '"\(.[0].app_version),\(.[0].status)"'
++ tr '[:upper:]' '[:lower:]'
+ LINE=null,null
+ IFS=,
+ read -r INSTALLED_VERSION STATUS _
+ VALUES=
+ [[ install = \d\e\l\e\t\e ]]
+ [[ null =~ ^(|null)$ ]]
+ [[ null =~ ^(|null)$ ]]
+ echo 'Installing helm_v3 chart'
+ helm_v3 install --namespace cattle-system --version v2.7.11 --set-string bootstrapPassword=password1234 --set-string fleet.apiServerURL=https://rancher.172.19.108.35.nip.io/ --set global.cattle.psp.enabled=false --set-string hostname=rancher.172.19.108.35.nip.io --set-string ingress.tls.source=rancher --set replicas=1 rancher rancher/rancher
Error: INSTALLATION FAILED: chart "rancher" matching v2.7.11 not found in rancher index. (try 'helm repo update'): no chart version found for rancher-v2.7.11
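One plausible reading of this failure (an observation, not an official diagnosis): the stable Rancher chart repository does not publish every patch release, so `v2.7.11` may simply be absent from that index; `helm search repo rancher/rancher --versions` against the added repo would confirm which versions it actually contains. If that is the case, a workaround sketch is to point the rancher HelmChart in the addon's `init.manifestsTemplate` at a channel whose index does contain the requested version (the `latest` channel URL below is an assumption to verify, not a confirmed fix):

```yaml
# Hypothetical tweak to the addon's init.manifestsTemplate: use a chart
# repo whose index contains the requested Rancher version.
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: rancher
  namespace: kube-system
spec:
  targetNamespace: cattle-system
  repo: https://releases.rancher.com/server-charts/latest/  # assumed channel
  chart: rancher
  version: v2.7.11
  helmVersion: v3
```

Independently of the version mismatch, the addon reporting success while the inner helm job fails looks like a status-propagation bug in its own right.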

Rancher vCluster is missing CRDs needed for UI plugins

Summary

As the title suggests, the CRDs necessary for deploying UI extensions in the Rancher vCluster appear to be missing.

helm helm install --namespace=cattle-ui-plugin-system --version=1.5.0 kubewarden /home/shell/helm/kubewarden-1.5.0.tgz                   
helm Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: resource mapping not found for name: "kubewarden" namespace: "cattle-ui-plugin-system" from "": no matches for kind "UIPlugin" in version "catalog.cattle.io/v1"                      
helm ensure CRDs are installed first 
proxy W0522 19:42:07.010680       7 proxy.go:175] Request filter disabled, your proxy is vulnerable to XSRF attacks, please be cautious  
proxy Starting to serve on 127.0.0.1:8001

Request

Is there any reason to believe these could not be added manually? The repo has been deprecated as of Rancher release 2.9.0, but the built-in version is 2.8.2 (https://github.com/rancher/ui-plugin-operator)
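If manual installation is viable, one sketch that stays within the addon's existing pattern would be adding a HelmChart CR for the CRD chart to the vcluster's `init.manifestsTemplate`, alongside the cert-manager and rancher entries. The chart name and repo URL below are assumptions to verify against Rancher's chart sources, not confirmed values:

```yaml
# Hypothetical addition to init.manifestsTemplate: install the
# ui-plugin-operator CRDs (chart name and repo URL are assumptions).
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: ui-plugin-operator-crd
  namespace: kube-system
spec:
  targetNamespace: cattle-ui-plugin-system
  repo: https://charts.rancher.io        # assumed location of the CRD chart
  chart: ui-plugin-operator-crd
  helmVersion: v3
```

With the `UIPlugin` CRD registered, the kubewarden extension install above should at least get past the "no matches for kind" error.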

provisioned nodes can't reach rancher-vcluster on VIP

rancher-vcluster shares Harvester's VIP, and the Harvester guest VMs that Rancher provisions as RKE2 nodes are unable to reach the Rancher Server to apply the initial plan. They have no trouble pinging the VIP, to which the configured Rancher Server URL does resolve in DNS on the guest node, and they are able to reach the Harvester UI with a cURL GET to the Harvester node IP.

The HTTP request with Host: rancher.hella header, from the guest node to the Rancher Server URL, results in a SYN that is never acknowledged. The SYN flows from the guest node's veth to the Harvester node's mgmt-br, where it is mangled to an array of destination IPs that have routes to the Harvester node's calico and flannel interfaces.

A cURL GET on the Harvester node where the guest node is running is able to fetch the Harvester UI web server without a Host header, and the Rancher Server UI with the Host: rancher.hella header, proving the destination is correct. The expected response is the HTTP 302 redirect to the dashboard location.

rancher@nuc2:~> curl -ks https://rancher.hella|sha256sum
3509bf97089da3314f168d5811fd5a5015bc185c50e24f4855dab26bf7df8f8b  -

rancher@nuc2:~> curl -ks https://10.52.1.36 -H 'host: rancher.hella'|sha256sum 
3509bf97089da3314f168d5811fd5a5015bc185c50e24f4855dab26bf7df8f8b  -

It's interesting that the guest VM can reach the web server running on some, but not all, of the three Harvester node IPs, and the ones that cannot be reached change depending on where the VIP is currently bound and on which Harvester node the RKE2 guest is scheduled.

If the VIP is bound by node3, then the guest running on node2 is able to reach node3's primary interface and GET / gets an HTTP 302 to /dashboard/. The same request to node1, node2 IP times out. Request to the VIP with or without host header times out.

When the VIP is bound on the same node2 as the guest, the previously successful GET / times out, and the request to node1 begins to succeed!

I believe it's commonplace for VIPs like Harvester's to be assigned to the mgmt-br interface with subnet mask /32, despite the primary interface address having /24. Noting this in case Harvester's VIP should actually be using /24.

How to create a Secret during rancher-vcluster init? [Question]

Hi, @bk201 @guangbochen @ibrokethecloud
I want to use a DNS01 challenge for the rancher-vcluster SSL certificate, but I haven't been successful in creating a Secret for the DNS provider credentials. I tried putting the YAML in manifests or manifestsTemplate alongside the rancher and cert-manager configuration, but no Secret is created during installation of the addon; I have to add it manually afterwards. And if I place the code before the other HelmChart YAML, the code that follows isn't executed either. So I think there is something wrong with the YAML I wrote, but I cannot find any docs about it.

manifests: |-
        apiVersion: helm.cattle.io/v1
        kind: Secret
        metadata:
          name: namecheap-credentials
          namespace: cert-manager
        type: Opaque
        stringData:
          apiKey: xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
          apiUser: xxxxxx

Is it that "stringData" is not supported here? Do I have to use "data" and manually base64-encode the content? Or do I have to deploy the Secret via a helm chart rather than directly?
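One likely culprit in the snippet above, independent of how the addon processes manifests: a Kubernetes Secret belongs to the core API group, so its `apiVersion` is `v1`, not `helm.cattle.io/v1`, and `stringData` is supported there (the API server base64-encodes it into `data` for you). A corrected sketch of the same manifest:

```yaml
# Same Secret as above with the core-group apiVersion; credential
# values remain the user's placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: namecheap-credentials
  namespace: cert-manager
type: Opaque
stringData:
  apiKey: xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  apiUser: xxxxxx
```

Whether the addon's manifests block applies plain (non-HelmChart) objects at all is a separate question, but with the wrong apiVersion the object would be rejected regardless.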
