
terraform-provider-incus's Introduction


LXC

LXC is the well-known and heavily tested low-level Linux container runtime. It has been in active development since 2008 and has proven itself in critical production environments worldwide. Some of its core contributors are the same people who helped to implement various well-known containerization features inside the Linux kernel.

Status

Type            Service
CI (Linux)      GitHub
CI (Linux)      Jenkins
Project status  CII Best Practices
Fuzzing         OSS-Fuzz
Fuzzing         CIFuzz

System Containers

LXC's main focus is system containers. That is, containers which offer an environment as close as possible to the one you'd get from a VM, but without the overhead that comes with running a separate kernel and simulating all the hardware.

This is achieved through a combination of kernel security features such as namespaces, mandatory access control and control groups.

Unprivileged Containers

Unprivileged containers are containers that are run without any privilege. This requires support for user namespaces in the kernel that the container is run on. LXC was the first runtime to support unprivileged containers after user namespaces were merged into the mainline kernel.

In essence, user namespaces isolate given sets of UIDs and GIDs. This is achieved by establishing a mapping between a range of UIDs and GIDs on the host to a different (unprivileged) range of UIDs and GIDs in the container. The kernel will translate this mapping in such a way that inside the container all UIDs and GIDs appear as you would expect from the host whereas on the host these UIDs and GIDs are in fact unprivileged. For example, a process running as UID and GID 0 inside the container might appear as UID and GID 100000 on the host. The implementation and working details can be gathered from the corresponding user namespace man page.
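As a sketch, such a mapping can be pinned explicitly in a container's configuration with LXC's lxc.idmap key (the ranges below are illustrative and must match allocations in /etc/subuid and /etc/subgid):

```
# Map 65536 container UIDs/GIDs starting at 0 onto host IDs starting at 100000
lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536
```

With this mapping, UID 0 inside the container corresponds to the unprivileged UID 100000 on the host, exactly as described above.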

Since unprivileged containers are a security enhancement they naturally come with a few restrictions enforced by the kernel. In order to provide a fully functional unprivileged container LXC interacts with 3 pieces of setuid code:

  • lxc-user-nic (setuid helper to create a veth pair and bridge it on the host)
  • newuidmap (from the shadow package, sets up a uid map)
  • newgidmap (from the shadow package, sets up a gid map)

Everything else is run as your own user or as a uid which your user owns.

In general, LXC's goal is to make use of every security feature available in the kernel. This means LXC's configuration management will allow experienced users to intricately tune LXC to their needs.

A more detailed introduction to LXC security can be found under the following link

Removing all Privilege

In principle LXC can be run without any of these tools provided the correct configuration is applied. However, the usefulness of such containers is usually quite restricted. Just to highlight the two most common problems:

  1. Network: Without relying on a setuid helper to set up appropriate network devices for an unprivileged user (see LXC's lxc-user-nic binary), the only option is to share the network namespace with the host. Although this should be secure in principle, sharing the host's network namespace removes one layer of isolation and increases the attack surface. Furthermore, when host and container share the same network namespace the kernel will refuse any sysfs mounts. This usually means that the init binary inside of the container will not be able to boot up correctly.

  2. User Namespaces: As outlined above, user namespaces are a big security enhancement. However, without relying on privileged helpers, users who are unprivileged on the host are only permitted to map their own UID into a container. A standard POSIX system, however, requires 65536 UIDs and GIDs to be available to guarantee full functionality.
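On most distributions, the shadow package's /etc/subuid and /etc/subgid files are where such larger ranges are delegated to unprivileged users; a typical entry granting a user the required 65536 subordinate IDs looks like this (the user name and starting ID are illustrative):

```
# /etc/subuid (and an equivalent line in /etc/subgid)
alice:100000:65536
```

newuidmap and newgidmap consult these files before setting up the container's ID mappings.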

Configuration

LXC is configured via a simple set of keys. For example,

  • lxc.rootfs.path
  • lxc.mount.entry

LXC namespaces its configuration keys using single dots. This means complex configuration keys such as lxc.net.0 expose various subkeys such as lxc.net.0.type, lxc.net.0.link, lxc.net.0.ipv6.address, and others for even more fine-grained configuration.
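For example, a container's first network interface could be configured with a group of related subkeys like this (the bridge name and address are illustrative):

```
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.net.0.ipv6.address = fd42::10/64
```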

LXC is used as the default runtime for Incus, a container hypervisor exposing a well-designed and stable REST API on top of it.

Kernel Requirements

LXC runs on any kernel from 2.6.32 onwards. All it requires is a functional C compiler. LXC works on all architectures that provide the necessary kernel features. This includes (but isn't limited to):

  • i686
  • x86_64
  • ppc, ppc64, ppc64le
  • riscv64
  • s390x
  • armv7l, arm64
  • loongarch64

LXC also supports at least the following C standard libraries:

  • glibc
  • musl
  • bionic (Android's libc)

Backwards Compatibility

LXC has always focused on strong backwards compatibility. In fact, the API hasn't been broken from release 1.0.0 onwards. Mainline LXC is currently at version 4.*.*.

Reporting Security Issues

The LXC project has a good reputation in handling security issues quickly and efficiently. If you think you've found a potential security issue, please report it by e-mail to all of the following persons:

  • serge (at) hallyn (dot) com
  • stgraber (at) ubuntu (dot) com
  • brauner (at) kernel (dot) org

For further details please have a look at

Becoming Active in LXC development

We always welcome new contributors and are happy to provide guidance when necessary. LXC follows the kernel coding conventions. This means we only require that each commit includes a Signed-off-by line. The coding style we use is identical to the one used by the Linux kernel. You can find a detailed introduction at:

and should also take a look at the CONTRIBUTING file in this repo.

If you want to become more active it is usually also a good idea to show up in the LXC IRC channel #lxc-dev on irc.libera.chat. We try to do all development out in the open and discussion of new features or bugs is done either in appropriate GitHub issues or on IRC.

When thinking about making security critical contributions or substantial changes it is usually a good idea to ping the developers first and ask whether a PR would be accepted.

Semantic Versioning

LXC and its related projects strictly adhere to a semantic versioning scheme.

Downloading the current source code

Source for the latest released version can always be downloaded from

You can browse the up-to-the-minute source code and change history online

Building LXC

Without considering distribution specific details a simple

meson setup -Dprefix=/usr build
meson compile -C build

is usually sufficient.

Getting help

When you find you need help, the LXC project provides you with several options.

Discuss Forum

We maintain a discuss forum at

where you can get support.

IRC

You can find us in #lxc on irc.libera.chat.

Mailing Lists

You can check out one of the two LXC mailing list archives and register if interested:

terraform-provider-incus's People

Contributors

adamcstephens, al20ov, c10l, dependabot[bot], dorkamotorka, gboutry, hasusuf, jgraichen, johnmaguire, johnweldon, jtopjian, kapows, klim8d, lastkrick, lukedirtwalker, maveonair, mjrider, musicdin, nbolten, pmalhaire, renonat, rjpearce, ruanbekker, shantanugadgil, simondeziel, sl1pm4t, stgraber, styxman, unitiser, yobert


terraform-provider-incus's Issues

Terraform provider does not work with my PKI and certs, however accessing incus API directly with curl works

Summary

I have several Incus servers and want to create containers from one central place (i.e. my laptop); therefore it needs to work via HTTPS and the Incus API. I also need to use my PKI so that I can issue certificates to other users.

On the Incus server, I placed the CA cert in /var/lib/incus/server.ca and added the client certificate using the incus config trust add-certificate command. See relevant links below.

I stored client cert/key and CA (client.crt, client.key, client.ca) on my laptop in ~/.config/incus/.

curl tests pass, see below.

Issue

I think the certificates and PKI are set up correctly and deployed to the Incus server correctly. The curl test works with these certs;

however, it does not work when I use Terraform.

I tried concatenating (cat) client.ca into client.crt; it still works for curl but still does not work for Terraform.

incus_instance.instance: Creating...
╷
│ Error: Failed to retrieve Incus InstanceServer
│ 
│   with incus_instance.instance,
│   on main.tf line 1, in resource "incus_instance" "instance":
│    1: resource "incus_instance" "instance" {
│ 
│ Unable to create server client for remote "test-11": Unable to authenticate with remote server: not authorized

terraform code:

# provider.tf
terraform {
  required_providers {
    incus = {
      source = "lxc/incus"
      version = "0.1.1"
    }
  }
}

provider "incus" {
  generate_client_certificates = false
  accept_remote_certificate    = true
  remote {
    name = "test-11"
    scheme = "https"
    address = "192.168.1.11"
    default = true
  }
}

# main.tf
resource "incus_instance" "instance" {
  name = "testytest"
  image = "images:debian/bookworm/cloud"
  profiles = ["default"]
}

other relevant logs

terraform can access certs

$ inotifywait -m -e open ~/.config/incus/*
Setting up watches.
Watches established.
/home/invizus/.config/incus/servercerts/ OPEN test-11.crt
/home/invizus/.config/incus/client.crt OPEN 
/home/invizus/.config/incus/client.ca OPEN 
/home/invizus/.config/incus/client.key OPEN

curl works:

$ curl -s -k --cert ~/.config/incus/client.crt --cacert ~/.config/incus/client.ca \
--key ~/.config/incus/client.key https://192.168.1.11:8443/1.0 -X GET | jq .metadata.auth
"trusted"

Update: Just FYI, curl works only when the CA is concatenated into the client cert.

relevant links

https://discuss.linuxcontainers.org/t/how-to-add-a-certificate-to-incus-remotely/19549

https://linuxcontainers.org/incus/docs/main/authentication/#using-a-pki-system

Publish to Terraform Registry

Our Terraform provider should be published in the Terraform Registry, see Publishing Providers.

Open questions:

  • @stgraber: To publish the package, we need to grant the Terraform Registry access to the lxc GitHub organization. We also need to create a signing key that will be used to publish the package. How would you like to proceed here?
  • How should we organize the versioning and changelog from now on? Incus started with 0.0.1 and so we could do the same here, but maybe start with 1.0.0?

Make resource naming more consistent

I think we want to perform the following renaming to have things be consistent:

  • incus_volume => incus_storage_volume
  • incus_volume_copy => incus_storage_volume_copy
  • incus_snapshot => incus_instance_snapshot
  • incus_publish_image => incus_image_publish
  • incus_cached_image => incus_image

We're luckily early enough in Incus and this provider that we can likely get away with a rename followed by a minor version bump.

Prioritize host interfaces for ipv4_address/ipv6_address/mac

Hi, first of all I'd like to thank all the maintainers for this provider, it's awesome.
Recently, I've been trying to automate the creation of a kubernetes cluster (k3s, RKE2, k0s and others) on Incus VMs and everything works well enough except for when I try to run terraform apply a second time after installing a kubernetes distribution:

Kubernetes components create a bunch of networks, bridges and so on, and these addresses show up in the Incus web UI as well as in the CLI. The only problem is, the ipv4_address and ipv6_address exported values from my incus_instance resources change from a private IPv4 on the same network I run terraform from to a private IP only accessible from the networks created by Kubernetes.

Example:

Upon running a second terraform apply, the IPv4 changes

Changes to Outputs:
  ~ instance-ip = "10.127.0.41" -> "10.42.0.0"
null_resource.kubeconfig: Destroying... [id=5068605674949049145]
null_resource.kubeconfig: Destruction complete after 0s
null_resource.get-kubeconfig: Creating...
null_resource.get-kubeconfig: Provisioning with 'local-exec'...
null_resource.get-kubeconfig (local-exec): Executing: ["/bin/sh" "-c" "ssh -o 'StrictHostKeyChecking no' [email protected] sudo cat /etc/rancher/rke2/rke2.yaml > temp-kubeconfig"]
null_resource.get-kubeconfig: Still creating... [10s elapsed]
...
[Times out eventually because 10.42.0.0 is not reachable from my computer]
+-----------+---------+-----------------------+--------------------------------+-----------------+-----------+
|   NAME    |  STATE  |         IPV4          |              IPV6              |      TYPE       | SNAPSHOTS |
+-----------+---------+-----------------------+--------------------------------+-----------------+-----------+
| instance0 | RUNNING | 10.42.0.0 (flannel.1) | 2001::e91 (enp5s0) | VIRTUAL-MACHINE | 0         |
|           |         | 10.127.0.41 (enp5s0)  |                                |                 |           |
+-----------+---------+-----------------------+--------------------------------+-----------------+-----------+

In this example, the address I'm trying to reach instance0 from is 10.127.0.41 and that's what ipv4_address returns at first, but upon installing RKE2, ipv4_address returns 10.42.0.0 which is not accessible from my computer.

Now what I think is happening is something is grabbing the first address (index 0) from a list of IP addresses given by Incus agent and rolls with it. It's not a problem when the VM/container only has one address, but could become incorrect when it has more than one.

Unfortunately, it seems that that address is not always the one we're interested in.
Maybe ipv4_addresses could be changed to a map of interface names to IPs? enp5s0 -> 10.127.0.41, flannel.1 -> 10.42.0.0, and so on?
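If the proposal above were adopted, selecting the reachable address would become explicit. The sketch below is purely hypothetical: the provider does not currently expose an ipv4_addresses map, and the attribute and interface names are illustrative:

```hcl
output "instance-ip" {
  # Hypothetical proposed attribute: a map of interface name -> IP,
  # indexed here by the host-facing interface instead of index 0
  value = incus_instance.instance0.ipv4_addresses["enp5s0"]
}
```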

Failed to retireve image info for instance for non-admin user

OK. The typo is one thing (occurs four times).

The real issue is that I'm getting this error for a simple example. I have added an alias to a local image. Then
this example:

resource "incus_instance" "c9" {
  name  = "c9"
  image = "local:centos/9-Stream/cloud/vm"
  type = "virtual-machine"

  config = {
    "boot.autostart" = true
    "security.secureboot" = false
  }

  limits = {
    cpu = 2
  }
  profiles = ["default", "user-kees-centos", "config-centos"]
}

The user is member of the incus group, not incus-admin.

incus_instance.c9: Creating...
╷
│ Error: Failed to retireve image info for instance "c9"
│ 
│   with incus_instance.c9,
│   on main.tf line 1, in resource "incus_instance" "c9":
│    1: resource "incus_instance" "c9" {
│ 
│ Image not found
╵

If I run this with an incus-admin user it succeeds.

profile "default" cannot be tracked

terraform {
  required_providers {
    incus = {
      source = "lxc/incus"
      version = "~> 0.1.1"
    }
  }
  required_version = ">= 1.8"
}

provider "incus" {}

resource "incus_project" "project" {
  name = "myproject"

  config = {
    "features.profiles" = true
  }
}

resource "incus_profile" "default" {
  name = "default"
  project = incus_project.project.name

  device {
    type = "disk"
    name = "root"

    properties = {
      pool = "default"
      path = "/"
    }
  }
}

returns the next error

│ Error: Failed to create profile "default"
│ 
│   with incus_profile.default,
│   on main.tf line 21, in resource "incus_profile" "default":
│   21: resource "incus_profile" "default" {
│ 
│ Error inserting "default" into database: The profile already exists

The error is expected because incus always creates a default profile.
Of course, we can change the name of the profile and working without the default profile but in this case we cannot track the "default" dynamically created.
Import would not work because the profile is not known before the project creation.
I can point out that other providers (e.g. aws) use a data source for the default resource.
Would a data source for the default profile (like data profile_default) fix the problem?

Add contributions guideline

We should add a contribution guideline to the repository, inspired by CONTRIBUTING.md of the incus project.

Why it is helpful

  • Standardization: Establishes a consistent process for submitting contributions.
  • Clarity for contributors: Provides a clear roadmap for effective contributions.
  • Improved collaboration: Promotes a collaborative and inclusive environment.
  • Improve code quality: Enforces coding standards and best practices.
  • Growing the community: Attracts diverse contributors, fostering community growth.

Don't allow `auto` as value for `ipv4.address` or `ipv6.address`

terraform {
  required_providers {
    incus = {
      source = "lxc/incus"
      version = "~> 0.1.1"
    }
  }
  required_version = ">= 1.8"
}

provider "incus" {}

resource "incus_network" "net" {
  name = "a-bridged-network"

  type = "bridge"

  config = {
    "ipv4.address" = "auto"
    "ipv4.nat" = "true"

    "ipv6.address" = "auto"
    "ipv6.nat" = "true"
  }
}

gives the next errors.

incus_network.net: Creating...
╷
│ Error: Provider produced inconsistent result after apply
│ 
│ When applying changes to incus_network.net, provider "provider[\"registry.terraform.io/lxc/incus\"]" produced an unexpected new value: .config["ipv4.address"]: was
│ cty.StringVal("auto"), but now cty.StringVal("10.58.213.1/24").
│ 
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵
╷
│ Error: Provider produced inconsistent result after apply
│ 
│ When applying changes to incus_network.net, provider "provider[\"registry.terraform.io/lxc/incus\"]" produced an unexpected new value: .config["ipv6.address"]: was
│ cty.StringVal("auto"), but now cty.StringVal("fd42:1c94:e5e9:4d18::1/64").
│ 
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.

though it is expected on the Incus side, because the keys ipv{4,6}.address are generated if we don't override them (auto as the initial value)

How to define 'volatile.eth0.hwaddr' for an instance?

I am migrating my LXD infrastructure to Incus. I use Terraform to create networks, profiles and instances.
Some instances have a defined MAC address. With the LXD provider, I used to configure this MAC address like this:

resource "incus_instance" "machine" {
...
  config =  {
            "volatile.eth0.hwaddr" = "00:16:3e:73:dc:45"
  }
...
}

But with incus provider I get this message: Config key cannot have "volatile." or "image." prefix. Got: "volatile.eth0.hwaddr".

Can somebody tell me how I can define this option?
Thanks!
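One possible workaround is to set the MAC address through an explicit NIC device rather than the volatile key, since hwaddr is a regular NIC device property in Incus. A sketch (the nictype and parent bridge name here are assumptions and must match your network setup):

```hcl
resource "incus_instance" "machine" {
  name  = "machine"
  image = "images:debian/bookworm"

  device {
    name = "eth0"
    type = "nic"

    properties = {
      nictype = "bridged"
      parent  = "incusbr0" # assumed bridge name
      hwaddr  = "00:16:3e:73:dc:45"
    }
  }
}
```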

Incus socket with write permissions not found for non-admin users

This error pops up if a user is "just" in the incus group, not in the incus-admin group.

The code in determineIncusDir checks if /var/lib/incus/unix.socket is writable, which it isn't for non-admin users.

A workaround is to set envvar INCUS_SOCKET=/var/lib/incus/unix.socket.user

Update code/doc to refer to `instance` rather than `container`

It looks like there's still quite a bit of legacy wording around containers rather than the more generic terminology of instances (as apply to both containers and VMs).

In general we should be using instance everywhere and so only use container or virtual-machine in the cases where the particular feature only applies to just one of the two types.

The provider doesn't seem to understand that volumes inherit from `volume.XYZ` keys on the pool

I've noticed OpenTofu getting pretty confused about a volume.zfs.remove_snapshots=true config key appearing on a volume that's defined in terraform. That's because the config key was set on the parent storage pool and so propagated to the new volume.

That's normal and we definitely don't want terraform to try to rectify this by deleting and re-creating the volume as it attempted here :)

incus_storage_bucket produces inconsistent result

Given:

resource "incus_storage_bucket" "this" {
  name = "bucket"

  pool = "default"

  config = {
    "size" = "100MiB"
  }
}
╷
│ Error: Provider produced inconsistent result after apply
│
│ When applying changes to incus_storage_bucket.this, provider "provider[\"registry.terraform.io/lxc/incus\"]"
│ produced an unexpected new value: .config: new element "block.filesystem" has appeared.
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵
╷
│ Error: Provider produced inconsistent result after apply
│
│ When applying changes to incus_storage_bucket.this, provider "provider[\"registry.terraform.io/lxc/incus\"]"
│ produced an unexpected new value: .config: new element "block.mount_options" has appeared.
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.

Workaround:

resource "incus_storage_bucket" "this" {
  name = "bucket"

  pool = "default"

  config = {
    "size" = "100MiB"
    "block.filesystem" : "ext4"
    "block.mount_options" : "discard"
  }
}

When config.yml doesn't exist, a directory called "$HOME" is created in the terraform directory

I have not created a ~/.config/incus/config.yml and my config looks like this:

provider "incus" {
  generate_client_certificates = true
  accept_remote_certificate    = true

  remote {
    name    = "demeter"
    scheme  = "https"
    address = "192.168.123.15"
    token = "<redacted>"
    default = true
  }

  //config_dir = "/Users/jmaguire/.config/incus"
}

Running terraform apply creates a new keypair saved to a literal $HOME/.config/incus directory structure inside the local directory. Additionally, if I move the config directory to my home directory, it's ignored. This is because config.yml does not exist:

// Determine Incus configuration directory.
configDir := data.ConfigDir.ValueString()
if configDir == "" {
	configDir = "$HOME/.config/incus"
}

// Try to load config.yml from determined configDir. If there's
// an error loading config.yml, default config will be used.
configPath := os.ExpandEnv(filepath.Join(configDir, "config.yml"))
config, err := incus_config.LoadConfig(configPath)
if err != nil {
	config = incus_config.DefaultConfig()
	config.ConfigDir = configDir
}

Note that os.ExpandEnv is only applied to the complete config.yml file name, not the configDir var assigned to the new config struct.

❯ terraform version
Terraform v1.6.6
on darwin_arm64
+ provider registry.terraform.io/lxc/incus v0.0.2
