
Project Status

k3os is no longer maintained and has been superseded by Elemental (https://elemental.docs.rancher.com/). Please do not submit PRs or issues to this repo.

k3OS

k3OS is a Linux distribution designed to remove as much OS maintenance as possible in a Kubernetes cluster. It is specifically designed to only have what is needed to run k3s. Additionally the OS is designed to be managed by kubectl once a cluster is bootstrapped. Nodes only need to join a cluster and then all aspects of the OS can be managed from Kubernetes. Both k3OS and k3s upgrades are handled by the k3OS operator.

  1. Quick Start
  2. Design
  3. Installation
  4. Configuration
  5. Upgrade/Maintenance
  6. Building
  7. Configuration Reference

Quick Start

Download the ISO from the latest release and run it in VMware, VirtualBox, KVM, or bhyve. The server will automatically start a single node Kubernetes cluster. Log in with the user rancher and run kubectl. This is a "live install" running from the ISO media and changes will not persist after reboot.

To copy k3OS to local disk, after logging in as rancher run sudo k3os install. Then remove the ISO from the virtual machine and reboot.

Live install (boot from ISO) requires at least 2GB of RAM. Local install requires 1GB RAM.

Design

Core design goals of k3OS are:

  1. Minimal OS for running Kubernetes by way of k3s
  2. Ability to upgrade and configure using kubectl
  3. Versatile installation to allow easy creation of OS images

File System Structure

Critical to the design of k3OS is how the file system is structured. A booted system looks as follows:

/etc - ephemeral
/usr - read-only (except /usr/local is writable and persistent)
/k3os - system files
/home - persistent
/var - persistent
/opt - persistent
/usr/local - persistent

/etc

All configuration in the system is intended to be ephemeral. If you change anything in /etc it will revert on next reboot. If you wish to persist changes to the configuration they must be done in the k3OS config.yaml which will be applied on each boot.
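Because edits under /etc are lost on reboot, the way to persist one is to declare it in the k3OS config so it is re-applied on every boot. A minimal sketch using the write_files key (the file written below is borrowed from the sample configuration later in this README):

```yaml
# /var/lib/rancher/k3os/config.yaml -- applied on each boot
write_files:
- path: /etc/local.d/example.start
  permissions: '0755'
  content: |
    #!/bin/bash
    echo hello, local service start
```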

/usr

The entire user space is stored in /usr and is read-only. The only way to change /usr is to change versions of k3OS. The directory /usr/local is a symlink to /var/local and is therefore writable.

/k3os

The /k3os directory contains the core operating system files referenced on boot to construct the file system. It contains squashfs images and binaries for k3OS, k3s, and the Linux kernel. On boot the appropriate version of all three is chosen and configured.

/var, /usr/local, /home, /opt

Persistent changes should be kept in /var, /usr/local, /home, or /opt.

Upstream Distros

Most of the user-space binaries come from Alpine and are repackaged for k3OS. The kernel source currently comes from Ubuntu 20.04 LTS. Some code and a lot of inspiration came from LinuxKit.

Installation

Interactive Installation

Interactive installation is done by booting from the ISO. The installation is started by running k3os install. The k3os install sub-command is only available on systems booted live; an installation on disk will not have it. Follow the prompts to install k3OS to disk.

The installation will format an entire disk. If only a single hard disk is attached to the system, the installer will not ask which disk to use; it will pick the first and only one.

Automated Installation

Installation can be automated by using kernel cmdline parameters. There are a lot of creative solutions to booting a machine with cmdline args. You can remaster the k3OS ISO, PXE boot, use qemu/kvm, or automate input with packer. The kernel and initrd are available in the k3OS release artifacts, along with the ISO.

The cmdline value k3os.mode=install or k3os.fallback_mode=install is required to enable automated installations. Below is a reference of all cmdline args used to automate installation:

| cmdline | Default | Example | Description |
| --- | --- | --- | --- |
| k3os.mode | | install | Boot k3OS into the installer, not an interactive session |
| k3os.fallback_mode | | install | If a valid K3OS_STATE partition is not found to boot from, run the installation |
| k3os.install.silent | false | true | Ensure no questions will be asked |
| k3os.install.force_efi | false | true | Force EFI installation even when EFI is not detected |
| k3os.install.device | | /dev/vda | Device to partition and format (/dev/sda, /dev/vda) |
| k3os.install.config_url | | https://gist.github.com/.../dweomer.yaml | The URL of the config to be installed at /k3os/system/config.yaml |
| k3os.install.iso_url | | https://github.com/rancher/k3os/../k3os-amd64.iso | ISO to download and install from if booting from kernel/vmlinuz and not the ISO |
| k3os.install.no_format | | true | Do not partition and format; assume the layout already exists |
| k3os.install.tty | auto | ttyS0 | The tty device used for the console |
| k3os.install.debug | false | true | Run the installation with more logging and configure debug for the installed system |
| k3os.install.power_off | false | true | Shut down the machine after install instead of rebooting |
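In practice these args are combined on a single kernel cmdline. A sketch of a fully automated, unattended install (the config URL is an illustrative placeholder):

```
k3os.mode=install k3os.install.silent=true k3os.install.device=/dev/vda k3os.install.config_url=https://example.com/config.yaml
```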

Custom partition layout

By default k3OS expects one partition to exist labeled K3OS_STATE. K3OS_STATE is expected to be an ext4 formatted filesystem with at least 2GB of disk space. The installer will create this partition and file system automatically, or you can create them manually if you need an advanced file system layout.

Bootstrapped Installation

You can install k3OS to a block device from any modern Linux distribution. Just download and run install.sh. This script will run the same installation as the ISO but is a bit more raw and will not prompt for configuration.

Usage: ./install.sh [--force-efi] [--debug] [--tty TTY] [--poweroff] [--takeover] [--no-format] [--config https://.../config.yaml] DEVICE ISO_URL

Example: ./install.sh /dev/vda https://github.com/rancher/k3os/releases/download/v0.10.0/k3os.iso

DEVICE must be the disk that will be partitioned (/dev/vda). If you are using --no-format, it should be the device of the K3OS_STATE partition (/dev/vda2).

The parameter names refer to the same names used in the cmdline; refer to README.md for more info.

Remastering ISO

To remaster the ISO, all you need to do is copy /k3os and /boot from the ISO to a new folder, then modify /boot/grub/grub.cfg to add whatever kernel cmdline args you need for auto-installation. To build a new ISO, use the grub-mkrescue utility as follows:

# Ubuntu: apt install grub-efi grub-pc-bin mtools xorriso
# CentOS: dnf install grub2-efi grub2-pc mtools xorriso
# Alpine: apk add grub-bios grub-efi mtools xorriso
mount -o loop k3os.iso /mnt
mkdir -p iso/boot/grub
cp -rf /mnt/k3os iso/
cp /mnt/boot/grub/grub.cfg iso/boot/grub/

# Edit iso/boot/grub/grub.cfg

grub-mkrescue -o k3os-new.iso iso/ -- -volid K3OS

GRUB2 CAVEAT: Some non-Alpine installations of grub2 will create ${ISO}/boot/grub2 instead of ${ISO}/boot/grub which will generally lead to broken installation media. Be mindful of this and modify the above commands (that work with this path) accordingly. Systems that exhibit this behavior typically have grub2-mkrescue on the path instead of grub-mkrescue.
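The edit to iso/boot/grub/grub.cfg typically amounts to appending install args to the kernel line of a menuentry. A hypothetical fragment (the entry name, file paths, and config URL are illustrative, not the actual contents of the shipped grub.cfg):

```
menuentry "k3OS Install" {
  linux /boot/kernel k3os.mode=install k3os.install.silent=true k3os.install.config_url=https://example.com/config.yaml
  initrd /boot/initrd
}
```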

Takeover Installation

A special mode of installation is designed to install to a current running Linux system. This only works on ARM64 and x86_64. Download install.sh and run with the --takeover flag. This will install k3OS to the current root and override the grub.cfg. After you reboot the system k3OS will then delete all files on the root partition that are not k3OS and then shutdown. This mode is particularly handy when creating cloud images. This way you can use an existing base image like Ubuntu and install k3OS over the top, snapshot, and create a new image.

In order for this to work a couple of assumptions are made. First, the root (/) is assumed to be an ext4 partition. Also, grub2 is assumed to be installed and looking for its configuration at /boot/grub/grub.cfg. When running with --takeover, ensure that you also set --no-format and that DEVICE is set to the partition mounted at /. Refer to the AWS packer template to see this mode in action. Below is an example of how to run a takeover installation.

./install.sh --takeover --debug --tty ttyS0 --config /tmp/config.yaml --no-format /dev/vda1 https://github.com/rancher/k3os/releases/download/v0.10.0/k3os.iso

ARM Overlay Installation

If you have a custom ARMv7 or ARM64 device you can easily use an existing bootable ARM image to create a k3OS setup. All you must do is boot the ARM system, extract k3os-rootfs-arm.tar.gz to the root (stripping one path component; see the example below), and then place your cloud-config at /k3os/system/config.yaml. For example:

curl -sfL https://github.com/rancher/k3os/releases/download/v0.10.0/k3os-rootfs-arm.tar.gz | tar zxvf - --strip-components=1 -C /
cp myconfig.yaml /k3os/system/config.yaml
sync
reboot -f

This method places k3OS on disk and also overwrites /sbin/init. On next reboot your ARM bootloader and kernel should be loaded, but then when user space is to be initialized k3OS should take over. One important consideration at the moment is that k3OS assumes the root device is not read only. This typically means you need to remove ro from the kernel cmdline. This should be fixed in a future release.
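On a Raspberry Pi, for example, the kernel cmdline lives in /boot/cmdline.txt and the fix is to delete the ro token from that line. The surrounding args below are illustrative Raspberry Pi defaults, not k3OS requirements:

```
console=serial0,115200 root=/dev/mmcblk0p2 rootwait
```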

Configuration

All configuration is done through a single cloud-init style config file that is either packaged in the image, downloaded through cloud-init, or managed by Kubernetes. The configuration file is read from the following locations:

/k3os/system/config.yaml
/var/lib/rancher/k3os/config.yaml
/var/lib/rancher/k3os/config.d/*

The /k3os/system/config.yaml file is reserved for the system installation and should not be modified on a running system. This file is usually populated during the image build or installation process and contains important bootstrap information (such as networking or cloud-init data sources).

The /var/lib/rancher/k3os/config.yaml or config.d/* files are intended to be used at runtime. These files can be manipulated manually, through scripting, or managed with the Kubernetes operator.
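A runtime drop-in only needs the keys it overrides or adds. A sketch (the file name and label are illustrative):

```yaml
# /var/lib/rancher/k3os/config.d/10-labels.yaml
k3os:
  labels:
    storage: ssd
```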

Sample config.yaml

A full example of the k3OS configuration file is as below.

ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB...
- github:ibuildthecloud
write_files:
- encoding: ""
  content: |-
    #!/bin/bash
    echo hello, local service start
  owner: root
  path: /etc/local.d/example.start
  permissions: '0755'
hostname: myhost
init_cmd:
- "echo hello, init command"
boot_cmd:
- "echo hello, boot command"
run_cmd:
- "echo hello, run command"

k3os:
  data_sources:
  - aws
  - cdrom
  modules:
  - kvm
  - nvme
  sysctl:
    kernel.printk: "4 4 1 7"
    kernel.kptr_restrict: "1"
  dns_nameservers:
  - 8.8.8.8
  - 1.1.1.1
  ntp_servers:
  - 0.us.pool.ntp.org
  - 1.us.pool.ntp.org
  wifi:
  - name: home
    passphrase: mypassword
  - name: nothome
    passphrase: somethingelse
  password: rancher
  server_url: https://someserver:6443
  token: TOKEN_VALUE
  labels:
    region: us-west-1
    somekey: somevalue
  k3s_args:
  - server
  - "--cluster-init"
  environment:
    http_proxy: http://myserver
    https_proxy: http://myserver
  taints:
  - key1=value1:NoSchedule
  - key1=value1:NoExecute

Refer to the configuration reference for full details of each configuration key.

Kubernetes

Since k3OS is built on k3s all Kubernetes configuration is done by configuring k3s. This is primarily done through environment and k3s_args keys in config.yaml. The write_files key can be used to populate the /var/lib/rancher/k3s/server/manifests folder with apps you'd like to deploy on boot.

Refer to k3s docs for more information on how to configure Kubernetes.

Kernel cmdline

All configuration can also be passed as kernel cmdline parameters. The keys are dot separated, for example k3os.token=TOKEN. If the key is a slice, multiple values are set by repeating the key, for example k3os.dns_nameserver=1.1.1.1 k3os.dns_nameserver=8.8.8.8. You can use the plural or singular form of the name; just ensure you consistently use the same form. For map values the key[key]=value form is used, for example k3os.sysctl[kernel.printk]="4 4 1 7". If the value has spaces in it, ensure that the value is quoted. Boolean keys expect a value of true or false; no value at all means true, so k3os.install.efi is the same as k3os.install.efi=true.
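Putting these rules together, the following sketch shows a cmdline and the config.yaml fragment it maps to (the token value is an illustrative placeholder):

```yaml
# cmdline: k3os.token=TOKEN k3os.dns_nameserver=1.1.1.1 k3os.dns_nameserver=8.8.8.8 k3os.sysctl[kernel.printk]="4 4 1 7"
# is equivalent to:
k3os:
  token: TOKEN
  dns_nameservers:
  - 1.1.1.1
  - 8.8.8.8
  sysctl:
    kernel.printk: "4 4 1 7"
```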

Phases

Configuration is applied in three distinct phases: initrd, boot, and runtime. initrd is run during the initrd phase, before the root disk has been mounted. boot is run after the root disk is mounted and the file system is set up, but before any services have started; there is no networking available yet at this point. The final stage, runtime, is executed after networking has come online. If you are using a configuration from a cloud provider (like AWS userdata) it will only be run in the runtime phase. Below is a table of which config keys are supported in each phase.

| Key | initrd | boot | runtime |
| --- | :---: | :---: | :---: |
| ssh_authorized_keys | | x | x |
| write_files | x | x | x |
| hostname | x | x | x |
| run_cmd | | | x |
| boot_cmd | | x | |
| init_cmd | x | | |
| k3os.data_sources | x | | |
| k3os.modules | x | x | x |
| k3os.sysctls | x | x | x |
| k3os.ntp_servers | | x | x |
| k3os.dns_nameservers | | x | x |
| k3os.wifi | | x | x |
| k3os.password | x | x | x |
| k3os.server_url | | x | x |
| k3os.token | | x | x |
| k3os.labels | | x | x |
| k3os.k3s_args | | x | x |
| k3os.environment | x | x | x |
| k3os.taints | | x | x |

Networking

Networking is powered by connman. To configure networking, a few helper keys are available: k3os.dns_nameservers, k3os.ntp_servers, and k3os.wifi. Refer to the reference for a full explanation of those keys. If you wish to configure an HTTP proxy, set the http_proxy and https_proxy fields in k3os.environment. All other networking configuration should be done by configuring connman directly, using the write_files key to create connman service files.
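As an illustration of the write_files approach, the sketch below provisions a static IPv4 address through a connman service config. The interface name, addresses, and file name are assumptions; check the connman documentation for the exact options your setup needs.

```yaml
write_files:
- path: /var/lib/connman/static.config
  content: |
    [service_eth0]
    Type = ethernet
    IPv4 = 10.0.0.10/255.255.255.0/10.0.0.1
    Nameservers = 8.8.8.8
```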

Upgrade and Maintenance

Upgrading and reconfiguring k3OS is all handled through the Kubernetes operator. The operator is still in development. More details to follow. The basic design is that one can set the desired k3s and k3OS versions, plus their configuration and the operator will roll that out to the cluster.

Automatic Upgrades

Integration with rancher/system-upgrade-controller has been implemented as of v0.9.0. To enable a k3OS node to automatically upgrade from the latest GitHub release you will need to make sure it has the label k3os.io/upgrade with value latest (for k3OS versions prior to v0.11.x please use label plan.upgrade.cattle.io/k3os-latest). The upgrade controller will then spawn an upgrade job that will drain most pods, upgrade the k3OS content under /k3os/system, and then reboot. The system should come back up running the latest kernel and k3s version bundled with k3OS and ready to schedule pods.
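Applying that label is a one-liner with kubectl (the node name is a placeholder):

```
kubectl label node my-node k3os.io/upgrade=latest
```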

Pre v0.9.0

If your k3OS installation is running a version prior to the v0.9.0 release or one of its release candidates, you can set up the system upgrade controller to upgrade k3OS by following these steps:

# apply the system-upgrade-controller manifest (once per cluster)
kubectl apply -f https://raw.githubusercontent.com/rancher/k3os/v0.10.0/overlay/share/rancher/k3s/server/manifests/system-upgrade-controller.yaml
# after the system-upgrade-controller pod is Ready, apply the plan manifest (once per cluster)
kubectl apply -f https://raw.githubusercontent.com/rancher/k3os/v0.10.0/overlay/share/rancher/k3s/server/manifests/system-upgrade-plans/k3os-latest.yaml
# apply the `plan.upgrade.cattle.io/k3os-latest` label as described above (for every k3OS node), e.g.
kubectl label nodes -l k3os.io/mode plan.upgrade.cattle.io/k3os-latest=enabled # this should work on any cluster with k3OS installations at v0.7.0 or greater

Manual Upgrades

For single-node or development use cases, where the operator is not being used, you can upgrade the rootfs and kernel with the following commands. If you do not specify K3OS_VERSION, it will default to the latest release.

When using an overlay install such as on Raspberry Pi (see ARM Overlay Installation) the original distro kernel (such as Raspbian) will continue to be used. On these systems the k3os-upgrade-kernel script will exit with a warning and perform no action.

export K3OS_VERSION=v0.10.0
/usr/share/rancher/k3os/scripts/k3os-upgrade-rootfs
/usr/share/rancher/k3os/scripts/k3os-upgrade-kernel

You should always remember to backup your data first, and reboot after upgrading.

Manual Upgrade Scripts Have Been DEPRECATED

These scripts have been deprecated as of v0.9.0 but are still present on the system at /usr/share/rancher/k3os/scripts.

Building

To build k3OS you just need Docker; then run make. All artifacts will be put in ./dist/artifacts. If you are running on Linux you can run ./scripts/run to boot a k3OS VM in the terminal. To exit the instance, press CTRL+a c to get the qemu console and then type q to quit.

The kernel source lives at https://github.com/rancher/k3os-kernel; similarly, you just need Docker and run make to compile the kernel.

Configuration Reference

Below is a reference of all keys available in config.yaml:

ssh_authorized_keys

A list of SSH authorized keys that should be added to the rancher user. k3OS primarily has one user, rancher. The root account is always disabled, has no password, and is never assigned an SSH key. SSH keys can be obtained from GitHub user accounts by using the format github:${USERNAME}. This is done by downloading the keys from https://github.com/${USERNAME}.keys.

Example

ssh_authorized_keys:
- "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC2TBZGjE+J8ag11dzkFT58J3XPONrDVmalCNrKxsfADfyy0eqdZrG8hcAxAR/5zuj90Gin2uBR4Sw6Cn4VHsPZcFpXyQCjK1QDADj+WcuhpXOIOY3AB0LZBly9NI0ll+8lo3QtEaoyRLtrMBhQ6Mooy2M3MTG4JNwU9o3yInuqZWf9PvtW6KxMl+ygg1xZkljhemGZ9k0wSrjqif+8usNbzVlCOVQmZwZA+BZxbdcLNwkg7zWJSXzDIXyqM6iWPGXQDEbWLq3+HR1qKucTCSxjbqoe0FD5xcW7NHIME5XKX84yH92n6yn+rxSsyUfhJWYqJd+i0fKf5UbN6qLrtd/D"
- "github:ibuildthecloud"

write_files

A list of files to write to disk on boot. The content of each file can be plain text, gzipped, base64 encoded, or base64+gzip encoded.
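To prepare a base64+gzip content value you can encode a file locally; the sketch below also decodes it again to show the round trip. example.conf and its contents are illustrative.

```shell
# Produce a base64+gzip "content" value for write_files from a local file.
printf 'SMDBOPTIONS="-D"\n' > example.conf

# gzip -n omits the embedded timestamp so the output is reproducible
encoded=$(gzip -cn example.conf | base64 | tr -d '\n')
echo "$encoded"

# What the boot-time decoder conceptually does with base64+gzip content
decoded=$(printf '%s' "$encoded" | base64 -d | gzip -dc)
echo "$decoded"
```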

Example

write_files:
- encoding: b64
  content: CiMgVGhpcyBmaWxlIGNvbnRyb2xzIHRoZSBzdGF0ZSBvZiBTRUxpbnV4...
  owner: root:root
  path: /etc/connman/main.conf
  permissions: '0644'
- content: |
    # My new /etc/sysconfig/samba file

    SMDBOPTIONS="-D"
  path: /etc/sysconfig/samba
- content: !!binary |
    f0VMRgIBAQAAAAAAAAAAAAIAPgABAAAAwARAAAAAAABAAAAAAAAAAJAVAAAAAA
    AEAAHgAdAAYAAAAFAAAAQAAAAAAAAABAAEAAAAAAAEAAQAAAAAAAwAEAAAAAAA
    AAAAAAAAAwAAAAQAAAAAAgAAAAAAAAACQAAAAAAAAAJAAAAAAAAcAAAAAAAAAB
    ...
  path: /bin/arch
  permissions: '0555'
- content: |
    15 * * * * root ship_logs
  path: /etc/crontab

hostname

Set the system hostname. This value will be overwritten by DHCP if DHCP supplies a hostname for the system.

Example

hostname: myhostname

init_cmd, boot_cmd, run_cmd

All three keys run arbitrary commands on startup in the respective phases: initrd, boot, and runtime. Commands are run after write_files, so it is possible to write a script to disk and then run it from these commands; that often makes longer-form setup easier.
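For example, a longer setup can be written to disk and invoked at runtime; the script path and contents below are illustrative:

```yaml
write_files:
- path: /var/lib/rancher/setup.sh
  permissions: '0700'
  content: |
    #!/bin/sh
    echo "longer-form setup goes here"
run_cmd:
- "/var/lib/rancher/setup.sh"
```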

k3os.data_sources

These are the data sources used to download configuration from a cloud provider. The valid options are:

aws
cdrom
digitalocean
gcp
hetzner
openstack
packet
scaleway
vultr

More than one can be supported at a time, for example:

k3os:
  data_sources:
  - openstack
  - cdrom

When multiple data sources are specified they are probed in order and the first to provide /run/config/userdata will halt further processing.

k3os.modules

A list of kernel modules to be loaded on start.

Example

k3os:
  modules:
  - kvm
  - nvme

k3os.sysctls

Kernel sysctls to set up on start. These are the same settings you'd typically find in /etc/sysctl.conf. Values must be specified as strings.

k3os:
  sysctl:
    kernel.printk: 4 4 1 7      # the YAML parser will read as a string
    kernel.kptr_restrict: "1"   # force the YAML parser to read as a string

k3os.ntp_servers

Fallback NTP servers to use if NTP is not configured elsewhere in connman.

Example

k3os:
  ntp_servers:
  - 0.us.pool.ntp.org
  - 1.us.pool.ntp.org

k3os.dns_nameservers

Fallback DNS name servers to use if DNS is not configured by DHCP or in a connman service config.

Example

k3os:
  dns_nameservers:
  - 8.8.8.8
  - 1.1.1.1

k3os.wifi

Simple wifi configuration. All that is accepted is name and passphrase. If you require more complex configuration then you should use write_files to write a connman service config.

Example:

k3os:
  wifi:
  - name: home
    passphrase: mypassword
  - name: nothome
    passphrase: somethingelse

k3os.password

The password for the rancher user. By default there is no password for the rancher user. If you set a password at runtime it will be reset on next boot because /etc is ephemeral. The value of the password can be clear text or an encrypted form. The easiest way to get this encrypted form is to just change your password on a Linux system and copy the value of the second field from /etc/shadow. You can also encrypt a password using openssl passwd -1.
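The openssl route can be sketched as follows; a fixed salt is used only so the example is reproducible (omit -salt to get a random one), and supersecure is a placeholder password:

```shell
# MD5-crypt hash suitable for the k3os.password key
hash=$(openssl passwd -1 -salt xyz supersecure)
echo "$hash"
```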

Example

k3os:
  password: "$1$tYtghCfK$QHa51MS6MVAcfUKuOzNKt0"

Or clear text

k3os:
  password: supersecure

k3os.server_url

The URL of the k3s server to join as an agent.

Example

k3os:
  server_url: https://myserver:6443

k3os.token

The cluster secret or node token. If the value matches the format of a node token it will automatically be assumed to be a node token. Otherwise it is treated as a cluster secret.

Example

k3os:
  token: myclustersecret

Or a node token

k3os:
  token: "K1074ec55daebdf54ef48294b0ddf0ce1c3cb64ee7e3d0b9ec79fbc7baf1f7ddac6::node:77689533d0140c7019416603a05275d4"

k3os.labels

Labels to be assigned to this node in Kubernetes on registration. After the node is first registered in Kubernetes the value of this setting will be ignored.

Example

k3os:
  labels:
    region: us-west-1
    somekey: somevalue

k3os.k3s_args

Arguments to be passed to the k3s process. The arguments should start with server or agent to be valid. k3s_args is an exec-style (i.e. uninterpreted) argument array, which means that when specifying a flag with a value you must either join the flag to the value with an = in the same array entry, or specify the flag in one entry immediately followed by the value in the next entry, e.g.:

# K3s flags with values joined with `=` in single entry
k3os:
  k3s_args:
  - server
  - "--cluster-cidr=10.107.0.0/23"
  - "--service-cidr=10.107.1.0/23"

# Effectively invokes k3s as:
# exec "k3s" "server" "--cluster-cidr=10.107.0.0/23" "--service-cidr=10.107.1.0/23" 
# K3s flags with values in following entry
k3os:
  k3s_args:
  - server
  - "--cluster-cidr"
  - "10.107.0.0/23"
  - "--service-cidr"
  - "10.107.1.0/23"

# Effectively invokes k3s as:
# exec "k3s" "server" "--cluster-cidr" "10.107.0.0/23" "--service-cidr" "10.107.1.0/23" 

k3os.environment

Environment variables to be set for k3s and other processes, such as the boot process. The primary use of this field is to set an HTTP proxy.

Example

k3os:
  environment:
    http_proxy: http://myserver
    https_proxy: http://myserver

k3os.taints

Taints to set on the current node when it is first registered. After the node is first registered the value of this field is ignored.

k3os:
  taints:
  - "key1=value1:NoSchedule"
  - "key1=value1:NoExecute"

License

Copyright (c) 2014-2020 Rancher Labs, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

k3os's People

Contributors

alinanova21, alxandr, aram535, bhale, brandond, brlbil, carlocorradini, cbron, chris93111, cjlarose, claycooper, corvus-ch, danopz, deekue, dweomer, freekingdean, gris-gris, hall, ibuildthecloud, jdbohrman, jille, mogiepete, niusmallnan, ppouliot, pryorda, rancher-sy-bot, ssmiller25, tfiduccia, vincent99, zimme


k3os's Issues

docker-machine create/deploy friendly

I'm not expecting this to be a plug-in replacement for rancherOS, however, rancherOS has features and tools that make deploying with docker-machine trivial. I can deploy to azure, digitalocean, vmware. I can expand and contract the cluster from the console. The ROS util is also very helpful in deploying clusters.

config init_cmd is not working

Version - v0.2.1-rc2

Steps:

  1. Create a config.yaml with:
ssh_authorized_keys:
- github:<username>
init_cmd:
- "echo 'init command' && sleep 120"
k3os:
  password: asdf
  2. During boot process, press e to get into GNU Grub
  3. On linux cmdline add k3os.mode=install k3os.install.config_url=<raw path of config.yml>

Results: The init command never gets run. The terminal never sleeps or prints the echo.

vm doesn't reboot if I set k3os.mode = install

Version - v0.2.0

Steps:

  1. Setup VMWare machine
  2. During boot process, press e to get into GNU Grub
  3. On linux cmdline add k3os.mode=install
  4. Ctrl-x to save, answer question and wait for reboot

Result: Machine goes to shutdown state instead of rebooting.

Installing agent to disk, reboot, not showing in cluster

Hi,

After installing k3os to disk, agents don't show up in the cluster when I use the command:
kubectl get nodes.

However if I manually add them to the cluster with the following steps it will work:
pkill -9 k3s
sudo k3s agent --server https://10.0.0.196:6443 --token $TOKENCODE

Then it will connect to the cluster. But this isn't usable in a live environment that way.

Building ISO and MacOS

Hi! When I build the iso file on MacOS (without any modifications, just cloned the repo and ran make), it initially works to run the live environment, but when I try to install it on the disk through os-config, I get the error unknown filesystem type 'hfsplus'. This is when it tries to mount the K3OS iso into /run/k3os/iso during the installation process.

Is this because I made the USB on MacOS?
I used dd bs=4m if=dist/artifacts/k3os-amd64.iso of=/dev/disk3 for example.

k3os.install.silent=true still asks final Configuration question

Version - v0.2.0

Steps:

  1. Setup VMWare machine
  2. During boot process, press e to get into GNU Grub
  3. In linux add k3os.mode=install & k3os.install.silent=true
  4. Ctrl-x to save and continue

Results: Silent mode works all the way through the installation, but the final Configuration question is still asked and requires input.

Issues with disk space on new disk install to 4GB disk

I think I have enough free space in general, but this seems to be specifically about the tmpmounts when loading up images. This is on VirtualBox right now and I can't get very far yet.

k3os-21574 [/]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       545M  495M   18M  97% /
/dev/loop1       50M   50M     0 100% /usr
none            997M  1.4M  996M   1% /etc
tmpfs           200M  224K  200M   1% /run
tmpfs           200M  296K  200M   1% /tmp
dev              10M     0   10M   0% /dev
shm             997M     0  997M   0% /dev/shm
cgroup_root      10M     0   10M   0% /sys/fs/cgroup
/dev/loop2      200M  200M     0 100% /usr/lib/modules
/dev/sda1       476M   92M  384M  20% /boot

containerd.log

time="2019-04-27T16:58:24.800670127Z" level=info msg="apply failure, attempting cleanup" error="failed to extract layer sha256:5dacd731af1b0386ead06c8b1feff9f65d9e0bdfec032d2cd0bc03690698feda: mount callback failed on /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount252943319: write /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount252943319/lib/x86_64-linux-gnu/libc-2.24.so: no space left on device: unknown" key="extract-247511086-BP4B sha256:5dacd731af1b0386ead06c8b1feff9f65d9e0bdfec032d2cd0bc03690698feda"
time="2019-04-27T16:58:24.845060165Z" level=info msg="apply failure, attempting cleanup" error="failed to extract layer sha256:ed9e1db7c0e6e5aeee786920b0c9db919cee1f5d237c792e4fd07038da0ae597: mount callback failed on /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount861930739: write /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount861930739/coredns: no space left on device: unknown" key="extract-319977195-Yeie sha256:a46466f6a77a598059d6aaddfc99ab668459e5175a397ec422472d6912791396"
time="2019-04-27T16:58:24.845103665Z" level=info msg="apply failure, attempting cleanup" error="failed to extract layer sha256:d635f458a6f8a4f3dd57a597591ab8977588a5a477e0a68027d18612a248906f: mount callback failed on /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount641878561: write /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount641878561/usr/lib/bash/realpath: no space left on device: unknown" key="extract-558504132-lzQd sha256:a95a173483558b1cfb499b6827a64cad3729dc2e7fa6ff8f8e1320b4d61d3600"
time="2019-04-27T16:58:24.845523052Z" level=error msg="PullImage \"nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to unpack image on snapshotter overlayfs: failed to extract layer sha256:5dacd731af1b0386ead06c8b1feff9f65d9e0bdfec032d2cd0bc03690698feda: mount callback failed on /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount252943319: write /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount252943319/lib/x86_64-linux-gnu/libc-2.24.so: no space left on device: unknown"
time="2019-04-27T16:58:24.856313793Z" level=error msg="PullImage \"coredns/coredns:1.3.0\" failed" error="failed to pull and unpack image \"docker.io/coredns/coredns:1.3.0\": failed to unpack image on snapshotter overlayfs: failed to extract layer sha256:ed9e1db7c0e6e5aeee786920b0c9db919cee1f5d237c792e4fd07038da0ae597: mount callback failed on /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount861930739: write /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount861930739/coredns: no space left on device: unknown"

And on a deployment, similar errors pulling those images (not surprising) appear in k3s kubectl describe for any of the pods in this test:

Name:               nginx-7db9fccd9b-w4sgk
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               k3os-21574/
Start Time:         Sat, 27 Apr 2019 16:58:15 +0000
Labels:             pod-template-hash=7db9fccd9b
                    run=nginx
Annotations:        <none>
Status:             Failed
Reason:             Evicted
Message:            The node was low on resource: ephemeral-storage. 
IP:                 
Controlled By:      ReplicaSet/nginx-7db9fccd9b
Containers:
  nginx:
    Image:        nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rkz48 (ro)
Volumes:
  default-token-rkz48:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-rkz48
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From                 Message
  ----     ------            ----               ----                 -------
  Warning  FailedScheduling  20m (x3 over 21m)  default-scheduler    0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
  Warning  FailedScheduling  14m (x5 over 19m)  default-scheduler    0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
  Normal   Scheduled         14m                default-scheduler    Successfully assigned default/nginx-7db9fccd9b-w4sgk to k3os-21574
  Normal   Pulling           14m                kubelet, k3os-21574  Pulling image "nginx"
  Warning  Failed            14m                kubelet, k3os-21574  Failed to pull image "nginx": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:latest": failed to unpack image on snapshotter overlayfs: failed to extract layer sha256:5dacd731af1b0386ead06c8b1feff9f65d9e0bdfec032d2cd0bc03690698feda: mount callback failed on /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount252943319: write /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount252943319/lib/x86_64-linux-gnu/libc-2.24.so: no space left on device: unknown
  Warning  Failed            14m                kubelet, k3os-21574  Error: ErrImagePull
  Normal   BackOff           14m                kubelet, k3os-21574  Back-off pulling image "nginx"
  Warning  Failed            14m                kubelet, k3os-21574  Error: ImagePullBackOff
  Warning  Evicted           14m                kubelet, k3os-21574  The node was low on resource: ephemeral-storage.

powertop

This looks like a great project. Are you also going to optimize the system for low power usage? In a typical Ubuntu setup you can save a lot using powertop.

K3OS image for LXC

I’ve been using k3s on LXC for local experiments with Kubernetes (cluster in a box). Having an LXC image with k3os would make setting up a cluster in a box a lot easier / faster.

services appear twice

Except for k3s-service, the other services appear twice in the output of the rc-update command.
Is this normal?

image

VMware deployment

When using VMware to fire one of these up, which Guest Operating System should I select on ESXi 6.5.0?

I spun one up with the v0.1.0 iso and did the auto login, but neither kubectl nor sudo os-config were found.

using latest iso now.

thanks

K3OS VMs querying their own DNS like crazy

Hi Everyone,

I created two k3os one-node cluster VMs in Proxmox lately, and when I checked the PiHole DNS blackhole service on the LAN (which filters out ads and other unwanted sites), I saw that the k3os VMs are querying themselves excessively, 30,000+ times a day. After I created the first VM, it ran for about two days; then I destroyed it for experimentation purposes and created the current one yesterday, which does the same.

Both VMs were initialized as follows:

# get latest iso once (it was 0.2.0-rc6 previously this week)
cd /var/lib/vz/template/iso
GITHUB_REPO="rancher/k3os"
GITHUB_LATEST_RELEASE=$(curl -L -s -H 'Accept: application/json' https://github.com/${GITHUB_REPO}/releases/latest)
GITHUB_LATEST_VERSION=$(echo $GITHUB_LATEST_RELEASE | sed -e 's/.*"tag_name":"\([^"]*\)".*/\1/')
GITHUB_ORIG_FILE="k3os-amd64.iso"
GITHUB_DOWN_FILE="k3os-amd64-${GITHUB_LATEST_VERSION//v/}.iso"
GITHUB_URL="https://github.com/${GITHUB_REPO}/releases/download/${GITHUB_LATEST_VERSION}/${GITHUB_ORIG_FILE}"
wget -O $GITHUB_DOWN_FILE $GITHUB_URL

qm create 200 --agent 1 --cores 2 --ide2 local:iso/${GITHUB_DOWN_FILE},media=cdrom --memory 3072 --name k-node-1 --net0 virtio,bridge=vmbr0,firewall=1 --numa 0 --onboot 1 --ostype l26 --scsi0 local-lvm:8 --scsihw virtio-scsi-pci --sockets 1
qm start 200

# login in VNC as rancher
sudo passwd rancher
# type rancher
# exit VNC

# ssh login using a DHCP allocated address
ssh [email protected]
# type rancher

sudo os-config
# Install to disk: 1
# Cloud-init: N
# Github SSH: N
# Type new pass 2 times
# Wifi: N
# Server: 1
# Cluster secret: generate new with `openssl rand -hex 20` in a new tab
# Continue: y
# reboot

# host info changes, so remove it
ssh-keygen -f "/root/.ssh/known_hosts" -R 10.0.0.88
ssh [email protected]
# using new pass

Nothing else was set up; I just looked around inside the VMs. The process could be easier on the setup side (no VNC interaction, automated network settings, installed image by default), but there are other issues open for cloud-init etc., so I won't bother with that here.

My only problem now is the VM going berserk querying its own DNS; it amounts to ~90% of the local DNS traffic these days. Can it be stopped somehow? I would like to use fixed IPs, but I just haven't had the time to set that up with connman yet (I also need to look into connman's config options, new stuff to me). Could that stop the flood?
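For anyone else hitting this, a static IP can be set via a connman service file dropped in through write_files in config.yaml. A minimal sketch, assuming an eth0 interface and example addresses (not taken from this report; check the connman service-config docs for your setup):

```yaml
write_files:
- encoding: ""
  content: |-
    [service_eth0]
    Type=ethernet
    IPv4=10.0.0.88/24/10.0.0.1
    IPv6=off
    Nameservers=10.0.0.1
  owner: root
  path: /var/lib/connman/eth0.config
  permissions: '0644'
```

Whether this also stops the DNS flood is a separate question; the source of the queries would need to be identified first.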

k3s not starting on disk install

Version: v0.2.0-rc3
Virtualisation: VirtualBox

I installed k3os to disk (server mode) and did a reboot. After the reboot I logged in, waited a couple minutes and tried some kubectl commands. The only output I am getting is:

The connection to the server localhost:6443 was refused - did you specify the right host or port?

Did I miss something to make it work?

kubectl throws strange error

When running a deployment with helm or kubectl, I sometimes get strange errors:
error: SchemaError(io.k8s.api.core.v1.CinderVolumeSource): invalid object doesn't have additional properties

for example,

$ kubectl apply -f https://raw.githubusercontent.com/rancher/longhorn/master/deploy/longhorn.yaml
error: SchemaError(io.k8s.api.core.v1.CinderVolumeSource): invalid object doesn't have additional properties

Building ISO with baked in custom config.yaml

Can we get instructions on where to place a customized config.yaml to have it baked into an ISO/image when building?

Ideally, if this doesn't exist, it would be awesome to have a .gitignore'd location where I can drop my config, run make, and get an ISO that I can use for a completely unattended install.

--no-deploy traefik servicelb not respected

I'm having trouble disabling the auto deployment of traefik.
I have added:

k3os-8393 [/var/lib/rancher/k3os]$ cat config.yaml
k3os:
  k3s_args:
  - server
  - "--no-deploy traefik servicelb"

and deleted all the jobs/pods/services related to traefik, but after ~120 minutes everything is still recreated.

ps ax gives the following:

 2007 ?        S      0:00 supervise-daemon k3s-service --start --pidfile /var/run/k3s-service.pid --respawn-delay 5 /sbin/k3s -- server --no-deploy traefik
 2009 ?        Ssl   96:56 /sbin/k3s server --no-deploy traefik

Any thoughts on what I can do to stop this from happening? I have to deploy my own LB and ingress server to handle TLS, ACME, etc.
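A likely cause: the quoted string "--no-deploy traefik servicelb" is passed to k3s as a single argument. A sketch of the probably-intended config, assuming each flag and each value must be its own list item and that --no-deploy takes one component per occurrence:

```yaml
k3os:
  k3s_args:
  - server
  - --no-deploy
  - traefik
  - --no-deploy
  - servicelb
```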

What ARM boards would the community like supported?

I'd like to use this issue so people can comment on what ARM boards should be supported by k3OS. It would work best if one comment per board is created and then others can just 👍 that comment so we can get a count.

As a bonus, if you are already running k3s on an existing Linux distro on these boards, please share which distros those are. We don't have the resources to maintain kernels for tons of boards, so if we can use kernels and boot loaders from other community images and just replace the user space with k3OS, then we should be able to support a good number of boards.

[feature request] support for regular cloud-init network config

It would be great if it was possible to configure base networking settings (IP, gateway, dns domain, dns servers, hostname, search domains) through the regular cloud init interfaces (instead of having to create a connman service file).

Why is this important (in my opinion)? There are virtualization systems, like Proxmox VE, that allow you to configure a limited set of cloudinit parameters for a VM regarding the network config (but don't support full cloudinit scripts). This is usually enough to run any standard cloud image (Debian, CentOS, etc). It would be great if the k3os iso would behave similarly.

image

https://pve.proxmox.com/wiki/Cloud-Init_Support
https://pve.proxmox.com/wiki/Cloud-Init_FAQ

Requesting for resources using kubectl throws an error

I'm running the latest release of k3OS on VirtualBox 6.0.6 r130049. Running kubectl get nodes throws the following error:

The connection to the server localhost:6443 was refused - did you specify the right host or port?

I tried other resources - I get the same result. At first I thought there may be an issue with the VM's networking - but on further reflection, I no longer think so. Probably a configuration issue of some sort.

Attaching a screenshot.

image

Kernel panic on boot under KVM

Greetings,

I'm trying out k3os, but I can't seem to get it to boot. I get a kernel panic, attaching screenshot below. This is 0.2.0rc2.

Screen Shot 2019-04-25 at 1 53 51 PM

Boot_cmd running after write_files

Version - v0.2.1-rc2

Steps:

  1. Create a config.yaml with
write_files:
- encoding: ""
  content: |
    Here is a line.
    Another line is here
  owner: root
  path: test2.txt
  permissions: '0777'
run_cmd:
- "echo 'run hello' >> test2.txt"
boot_cmd:
- "echo 'boot hello' >> test2.txt"
k3os:
  password: asdf
  2. During the boot process, press e to get into GNU GRUB
  3. On the linux cmdline, add k3os.mode=install k3os.install.config_url="config yaml url"
  4. Log in and cat test2.txt

results:

Here is a line
Another line is here
run hello

Expected: documentation says that the boot, run, and init commands run after write_files, so I would expect to see the "boot hello" text in there as well. If I point boot_cmd at a different file, it shows up.

Configuring a VPN service

wondering how to connect k3OS nodes over a VPN (this might not be possible yet...)
On RancherOS I deploy a VPN service in my cloud-config (see below).

is it possible to achieve something similar on k3OS?

  services:
    zerotier:
      image: dwitzig/zerotier:1.2.12
      labels:
        io.rancher.os.scope: system
      volumes:
        - /opt/zerotier-one:/var/lib/zerotier-one
      restart: always
      net: host
      devices:
        - /dev/net/tun:/dev/net/tun
      cap_add:
        - NET_ADMIN
        - SYS_ADMIN
      volumes_from:
        - system-volumes
      entrypoint: /zerotier-one
    zerotier-join:
      image: dwitzig/zerotier:1.2.12
      labels:
        io.rancher.os.scope: system
      volumes:
        - /opt/zerotier-one:/var/lib/zerotier-one
      restart: on-failure
      net: host
      entrypoint: /zerotier-cli join $NETWORK_ID
      depends_on:
        - zerotier

routing conflict because of duplicate routes on container interfaces set by connman

I stumbled upon this unexpected behaviour:

I have created a k3os cluster with a static network config for the nodes (i.e. no DHCP). I'm setting all of the interface config in a connman service file, including the default gateway and the DNS servers. The network config in /k3os/system/config.yaml looks like:

[...]
write_files:
- encoding: ""
  content: |-
    [service_default-interface]
    Type=ethernet
    IPv4=192.168.200.2/24/192.168.200.1
    IPv6=off
    Nameservers=192.168.100.2,192.168.100.3
    SearchDomains=mynetwork.local
    Timeserver=192.168.100.2,192.168.100.3
    Domain=mynetwork.local
  owner: root
  path: /var/lib/connman/default-interface.config
  permissions: '0644'
[...]

This seems to work fine for the host's main interface. But it has an unexpected side effect on the virtual container interfaces that get dynamically created for every running container (veth########). For every container interface, static routes to the default gateway and to the IPs of the DNS servers get created (which is just wrong and can't work).

$ route
[...]
192.168.200.0    *               255.255.255.0 U     0      0        0 vethfb2114ac
192.168.200.1    *               255.255.255.0 U     0      0        0 vethfb2114ac
192.168.100.2    *               255.255.255.0 U     0      0        0 vethfb2114ac
192.168.100.3    *               255.255.255.0 U     0      0        0 vethfb2114ac
[...]

This leads to a routing conflict: the default gw and the DNS servers become unreachable both from the host and from inside the containers. (Routing otherwise continues to work.) The major issue is that the CoreDNS pod can't reach the internal DNS server. As CoreDNS is set up to fall back to the Internet root DNS servers it can still query public domains, but it can't resolve local DNS zones known only to the intranet DNS server (i.e. zone mynetwork.local on DNS server 192.168.100.2).

I believe this is a connman issue, related to the fact that connman creates a config file for every container interface (/var/lib/connman/ethernet_<id>_cable/settings) with settings derived from the main interface.

The workaround for me is to not have connman set routes and DNS servers. When I use the following network config in /k3os/system/config.yaml no duplicate routes appear on the container interfaces:

in /k3os/system/config.yaml:

[...]
run_cmd:
- "route add default gw 192.168.200.1"
write_files:
- encoding: ""
  content: |-
    [service_default-interface]
    Type=ethernet
    IPv4=192.168.200.2/24
    IPv6=off
  owner: root
  path: /var/lib/connman/default-interface.config
  permissions: '0644'
- encoding: ""
  content: |-
    nameserver 192.168.100.2
    nameserver 192.168.100.3
    search mynetwork.local
  owner: root
  path: /etc/resolv.conf
  permissions: '0644'
[...]

With this the routes created for a container interface look like:

192.168.200.0    *               255.255.255.0 U     0      0        0 vethfb2114ac

and everything works as expected (i.e. CoreDNS can reach the internal DNS server and resolve the zone mynetwork.local).

README has duplicate information in the first two sections.

Looks like some edited copy was left in the README. First and second sections of the README are nearly identical, with some grammar edits.
"k3OS

k3OS is a linux distribution designed to remove as much as possible OS maintaince in a Kubernetes cluster. It is specifically designed to only have what is need to run k3s. Additionally the OS is designed to be managed by kubectl once a cluster is bootstrapped. Nodes only need to join a cluster and then all aspects of the OS can be managed from Kubernetes. Both k3OS and k3s upgrades are handled by k3OS.
Quick Start

Download the ISO from the latest release and run in VMware, VirtualBox, or KVM. The server will automatically start a single node kubernetes cluster. Log in with the user rancher and run kubectl. This is a "live install" running from the ISO media and changes will not persist after reboot.

To copy k3os to local disk, after logging in as rancher run sudo os-config. Then remove the ISO from the virtual machine and reboot.

Live install (boot from ISO) requires at least 1GB of RAM. Local install requires 512MB RAM."

----- Is the same as ----

"k3OS

k3OS is a Linux distribution designed to remove as much OS maintenance as possible in a Kubernetes cluster. It is specifically designed to only have what is needed to run k3s. Additionally the OS is designed to be managed by kubectl once a cluster is bootstrapped. Nodes only need to join a cluster and then all aspects of the OS can be managed from Kubernetes. Both k3OS and k3s upgrades are handled by k3OS.
Quick Start

Download the ISO from the latest release and boot it on VMware, VirtualBox, or KVM. The server will automatically start a single node cluster. Log in with the user rancher to run kubectl.
Configuration

All configuration is done through a single cloud-init style config file that is either packaged in the image, downloaded though cloud-init or managed by Kubernetes.

More docs to come"

Option to set proxy settings for containerd daemon required

To download container images, proxy settings are required when running containerd.

Typically, if the containerd service is run with the following settings, it works for Kubernetes + containerd:

[Service]
ExecStart=/usr/local/bin/containerd
Restart=always
Environment="HTTP_PROXY=http://example.com:8080"
Environment="HTTPS_PROXY=http://example.com:8080"

I cannot find a systemd service file for containerd. Proxy info could be asked for during initial setup and passed when starting containerd.
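As a sketch only: if k3OS's config.yaml supports an environment map (an assumption here; check the Configuration Reference), the proxy variables could be supplied there instead of via a systemd unit:

```yaml
k3os:
  environment:
    HTTP_PROXY: http://example.com:8080
    HTTPS_PROXY: http://example.com:8080
```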

Get version

Request: it would be nice to have some way to get the version of k3OS.

auto install using password in config.yaml causes first time login to hang

Version - v0.2.0

Steps:

  1. Set up a VMware machine
  2. During the boot process, press e to get into GNU GRUB
  3. In GitHub, create a gist file that contains
k3os:
  password: asdf
  4. On the linux cmdline, add k3os.mode=install and k3os.install.config_url="url of gist file"
  5. Press Ctrl-x to save and continue
  6. After reboot, type in rancher as the user name
  7. Try to type in the password "asdf"

Results: nothing happens; you can't type in a password and you are basically hung. If you hit Ctrl-C a bunch of times it will reset and you can type in both username and password. This happens every time you restart the machine and have to log in.

Set locales and keyboard

Hello, perhaps I'm missing it, but I don't see how to set the locale, timezone, and keyboard layout.

Thanks

ulimits too low to run some applications

When deploying the elastic Helm chart, I got this error:
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]

No matter how I configure ulimits inside the container, the error persists. I guess the host has low limit values.
Is there any possible way to configure /etc/security/limits.conf?
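As a quick host-side sanity check, the limits Elasticsearch complains about can be inspected with plain Linux commands (nothing k3OS-specific):

```shell
# Soft open-file limit for the current shell; Elasticsearch wants >= 65535
ulimit -n
# Kernel mmap limit (vm.max_map_count) that Elasticsearch also checks
cat /proc/sys/vm/max_map_count
```

If these are low on the host, raising them inside the container alone won't help, which matches the behavior described above.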

Boot k3os iso from usb stick fails

Hi,
I tried to boot k3os from usb stick (Dell Vostro notebook) and it fails with

cat /proc/cmdline
for x in $(cat /proc/cmdline
case $x in 
for x in $(cat /proc/cmdline
case $x in 
for x in $(cat /proc/cmdline
case $x in 
for x in $(cat /proc/cmdline
case $x in 
for x in $(cat /proc/cmdline
case $x in 
'[' -z '' ']'
blkid -L K3OS_STATE
'[' -n '' ']'
'[' -n '' ']'
'[' -z '' ']'
pfatal Failed to determine boot mode
echo '[FATAL] Failed to determine boot mode'
[FATAL] Failed to determine boot mode
exit 1
rescue
...

Dropped to bash then.

traefik pending forever.

I'm using VirtualBox. It has a host-only adapter on vboxnet0, with eth1 = 192.168.99.102.
How do I configure traefik correctly? Also, how do I access the traefik UI?

Captura de Tela 2019-05-12 às 02 53 34

Thanks!

Cheers.

If agent, don't show add agent to server text

Version - v0.2.0-rc3

  1. Create a server
  2. Create and install an agent

Results: text about the node token, how to get it, and how to add agents is showing up. It would be nice not to show this text on an agent, but instead something indicating it is an agent.

apk add failed

$ apk add vim
ERROR: Unable to lock database: Read-only file system
ERROR: Failed to open apk database: Read-only file system

but mount shows that the / mount point is mounted "rw".
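One thing worth checking: / may well report rw while the paths apk actually writes into (/usr, /lib) are mounted read-only, per the k3OS file-system layout described above. A quick look at the relevant mounts (plain Linux commands):

```shell
# Show mount flags for / and /usr; an "ro" flag on the paths apk
# writes to would explain the error even when / itself reports "rw"
mount | grep -E ' on /(usr)? '
```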
