python3-lxc's Introduction

Linux Containers logo

LXC

LXC is the well-known and heavily tested low-level Linux container runtime. It has been in active development since 2008 and has proven itself in critical production environments world-wide. Some of its core contributors are the same people that helped to implement various well-known containerization features inside the Linux kernel.

Status

Type            Service             Status
CI (Linux)      GitHub              Build Status
CI (Linux)      Jenkins             Build Status
Project status  CII Best Practices  CII Best Practices
Fuzzing         OSS-Fuzz            Fuzzing Status
Fuzzing         CIFuzz              CIFuzz

System Containers

LXC's main focus is system containers. That is, containers which offer an environment as close as possible to the one you'd get from a VM, but without the overhead that comes with running a separate kernel and simulating all the hardware.

This is achieved through a combination of kernel security features such as namespaces, mandatory access control and control groups.

Unprivileged Containers

Unprivileged containers are containers that are run without any privilege. This requires support for user namespaces in the kernel that the container is run on. LXC was the first runtime to support unprivileged containers after user namespaces were merged into the mainline kernel.

In essence, user namespaces isolate given sets of UIDs and GIDs. This is achieved by establishing a mapping between a range of UIDs and GIDs on the host and a different (unprivileged) range of UIDs and GIDs in the container. The kernel translates this mapping in such a way that inside the container all UIDs and GIDs appear as they would on a regular host, whereas on the host these UIDs and GIDs are in fact unprivileged. For example, a process running as UID and GID 0 inside the container might appear as UID and GID 100000 on the host. The implementation and working details can be gathered from the corresponding user namespace man page.
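
As a rough illustration, the same kind of mapping can be requested through the Python bindings this repository provides by setting the lxc.idmap key. The snippet below is a hedged sketch: the container name is made up, the 100000/65536 range simply mirrors the example above, and it assumes matching entries exist in /etc/subuid and /etc/subgid.

    import lxc

    c = lxc.Container("unpriv-demo")  # hypothetical container name

    # Map container UIDs/GIDs 0..65535 onto host IDs 100000..165535,
    # mirroring the example in the paragraph above. Assumes corresponding
    # /etc/subuid and /etc/subgid entries exist for your user.
    c.set_config_item("lxc.idmap", "u 0 100000 65536")
    c.set_config_item("lxc.idmap", "g 0 100000 65536")
    c.save_config()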

Since unprivileged containers are a security enhancement they naturally come with a few restrictions enforced by the kernel. In order to provide a fully functional unprivileged container LXC interacts with 3 pieces of setuid code:

  • lxc-user-nic (setuid helper to create a veth pair and bridge it on the host)
  • newuidmap (from the shadow package, sets up a uid map)
  • newgidmap (from the shadow package, sets up a gid map)

Everything else is run as your own user or as a uid which your user owns.

In general, LXC's goal is to make use of every security feature available in the kernel. This means LXC's configuration management will allow experienced users to intricately tune LXC to their needs.

A more detailed introduction to LXC security can be found at the following link

Removing all Privilege

In principle, LXC can be run without any of these tools, provided the correct configuration is applied. However, the usefulness of such containers is usually quite limited. To highlight the two most common problems:

  1. Network: Without relying on a setuid helper to set up appropriate network devices for an unprivileged user (see LXC's lxc-user-nic binary), the only option is to share the network namespace with the host. Although this should be secure in principle, sharing the host's network namespace removes one layer of isolation and increases the attack surface. Furthermore, when host and container share the same network namespace the kernel will refuse any sysfs mounts. This usually means that the init binary inside the container will not be able to boot up correctly.

  2. User Namespaces: As outlined above, user namespaces are a big security enhancement. However, without relying on privileged helpers, users who are unprivileged on the host are only permitted to map their own UID into a container. A standard POSIX system, however, requires 65536 UIDs and GIDs to be available to guarantee full functionality.

Configuration

LXC is configured via a simple set of keys. For example,

  • lxc.rootfs.path
  • lxc.mount.entry

LXC namespaces its configuration keys using single dots. This means complex configuration keys such as lxc.net.0 expose various subkeys such as lxc.net.0.type, lxc.net.0.link, lxc.net.0.ipv6.address, and others for even more fine-grained configuration.
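
Through the Python bindings in this repository, these namespaced keys can be read and written with get_config_item and set_config_item. A minimal sketch, with an illustrative container name and values:

    import lxc

    c = lxc.Container("demo")  # illustrative name

    # Namespaced keys use the same dotted form as in the config file.
    c.set_config_item("lxc.net.0.type", "veth")
    c.set_config_item("lxc.net.0.link", "lxcbr0")
    c.set_config_item("lxc.net.0.ipv6.address", "2001:db8::2/64")
    print(c.get_config_item("lxc.net.0.type"))
    c.save_config()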

LXC is used as the default runtime for Incus, a container hypervisor exposing a well-designed and stable REST API on top of it.

Kernel Requirements

LXC runs on any kernel from 2.6.32 onwards. All it requires is a functional C compiler. LXC works on all architectures that provide the necessary kernel features. This includes (but isn't limited to):

  • i686
  • x86_64
  • ppc, ppc64, ppc64le
  • riscv64
  • s390x
  • armv7l, arm64
  • loongarch64

LXC also supports at least the following C standard libraries:

  • glibc
  • musl
  • bionic (Android's libc)

Backwards Compatibility

LXC has always focused on strong backwards compatibility. In fact, the API hasn't been broken from release 1.0.0 onwards. Main LXC is currently at version 4.*.*.

Reporting Security Issues

The LXC project has a good reputation in handling security issues quickly and efficiently. If you think you've found a potential security issue, please report it by e-mail to all of the following persons:

  • serge (at) hallyn (dot) com
  • stgraber (at) ubuntu (dot) com
  • brauner (at) kernel (dot) org

For further details please have a look at

Becoming Active in LXC development

We always welcome new contributors and are happy to provide guidance when necessary. LXC follows the kernel coding conventions. This means we only require that each commit includes a Signed-off-by line. The coding style we use is identical to the one used by the Linux kernel. You can find a detailed introduction at:

You should also take a look at the CONTRIBUTING file in this repo.

If you want to become more active it is usually also a good idea to show up in the LXC IRC channel #lxc-dev on irc.libera.chat. We try to do all development out in the open and discussion of new features or bugs is done either in appropriate GitHub issues or on IRC.

When thinking about making security critical contributions or substantial changes it is usually a good idea to ping the developers first and ask whether a PR would be accepted.

Semantic Versioning

LXC and its related projects strictly adhere to a semantic versioning scheme.

Downloading the current source code

Source for the latest released version can always be downloaded from

You can browse the up-to-the-minute source code and change history online

Building LXC

Without considering distribution specific details a simple

meson setup -Dprefix=/usr build
meson compile -C build

is usually sufficient.

Getting help

When you find you need help, the LXC project provides you with several options.

Discuss Forum

We maintain a discuss forum at

where you can get support.

IRC

You can find us in #lxc on irc.libera.chat.

Mailing Lists

You can check out one of the two LXC mailing list archives and register if interested:

python3-lxc's People

Contributors

anirudh-goyal, chang-andrew, consolatis, cypresslin, michaelsatanovsky, stgraber, thmo, vicamo

python3-lxc's Issues

Crash when custom LXC path is passed

Required information

  • Distribution: RHEL
  • Distribution version: 7
  • The output of
    • lxc-start --version = 4.0.1
    • lxc-checkconfig
      --- Namespaces ---
      Namespaces: enabled
      Utsname namespace: enabled
      Ipc namespace: enabled
      Pid namespace: enabled
      User namespace: enabled
      Network namespace: enabled

--- Control groups ---
Cgroups: enabled

Cgroup v1 mount points:
/sys/fs/cgroup/systemd
/sys/fs/cgroup/net_cls,net_prio
/sys/fs/cgroup/perf_event
/sys/fs/cgroup/devices
/sys/fs/cgroup/pids
/sys/fs/cgroup/cpu,cpuacct
/sys/fs/cgroup/cpuset
/sys/fs/cgroup/freezer
/sys/fs/cgroup/blkio
/sys/fs/cgroup/debug
/sys/fs/cgroup/memory

Cgroup v2 mount points:

Cgroup v1 clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled, not loaded
Macvlan: enabled, not loaded
Vlan: enabled, not loaded
Bridges: enabled, loaded
Advanced netfilter: enabled, not loaded
CONFIG_NF_NAT_IPV4: enabled, loaded
CONFIG_NF_NAT_IPV6: enabled, loaded
CONFIG_IP_NF_TARGET_MASQUERADE: enabled, loaded
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled, not loaded
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled, loaded
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled, not loaded
FUSE (for use with lxcfs): enabled, loaded

--- Checkpoint/Restore ---
checkpoint restore: enabled
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities:

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /bin/lxc-checkconfig

  • uname -a Linux proc1 4.9.98-rt76-7.5-#1 SMP PREEMPT Tue Apr 14 10:39:18 EDT 2020 x86_64 x86_64 x86_64 GNU/Linux

  • cat /proc/self/cgroup
    11:memory:/user.slice
    10:debug:/
    9:blkio:/user.slice
    8:freezer:/
    7:cpuset:/
    6:cpu,cpuacct:/user.slice
    5:pids:/user.slice
    4:devices:/user.slice
    3:perf_event:/
    2:net_cls,net_prio:/
    1:name=systemd:/user.slice/user-1000.slice/session-2.scope

  • cat /proc/1/mounts

sysfs /sys sysfs rw,seclabel,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
devtmpfs /dev devtmpfs rw,seclabel,nosuid,size=1995476k,nr_inodes=498869,mode=755 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,seclabel,nosuid,nodev,noexec 0 0
devpts /dev/pts devpts rw,seclabel,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,seclabel,nosuid,nodev,mode=755 0 0
tmpfs /sys/fs/cgroup tmpfs ro,seclabel,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,seclabel,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/debug cgroup rw,nosuid,nodev,noexec,relatime,debug 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
/dev/mapper/vg1-lv_root / xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0
selinuxfs /sys/fs/selinux selinuxfs rw,relatime 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=33,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=5902 0 0
debugfs /sys/kernel/debug debugfs rw,seclabel,relatime 0 0
mqueue /dev/mqueue mqueue rw,seclabel,relatime 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,seclabel,relatime 0 0
/dev/vda1 /boot xfs rw,seclabel,nosuid,nodev,relatime,attr2,inode64,noquota 0 0
/dev/mapper/vg1-lv_var /var xfs rw,seclabel,nosuid,nodev,relatime,attr2,inode64,noquota 0 0
/dev/mapper/vg1-lv_home /home xfs rw,seclabel,nosuid,nodev,relatime,attr2,inode64,noquota 0 0
/dev/mapper/vg1-lv_tmp /tmp xfs rw,seclabel,nosuid,nodev,noexec,relatime,attr2,inode64,noquota 0 0
/dev/mapper/vg1-lv_log /var/log xfs rw,seclabel,nosuid,nodev,relatime,attr2,inode64,noquota 0 0
/dev/mapper/vg1-lv_audit /var/log/audit xfs rw,seclabel,nosuid,nodev,noexec,relatime,attr2,inode64,noquota 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
lxcfs /var/lib/lxcfs fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
tmpfs /var/lib/lxd/shmounts tmpfs rw,seclabel,relatime,size=100k,mode=711 0 0
tmpfs /var/lib/lxd/devlxd tmpfs rw,seclabel,relatime,size=100k,mode=755 0 0
tmpfs /run/user/1000 tmpfs rw,seclabel,nosuid,nodev,relatime,size=404504k,mode=700,uid=1000,gid=1000 0 0
gvfsd-fuse /run/user/1000/gvfs fuse.gvfsd-fuse rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
/dev/fuse /run/user/1000/doc fuse rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
tmpfs /run/user/0 tmpfs rw,seclabel,nosuid,nodev,relatime,size=404504k,mode=700 0 0
binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0

Issue description

The lxc Python API is not accepting a custom config_path; Python bails out with a segmentation fault. The same call works via the C API.

[root@proc1 ~]# python
Python 2.7.5 (default, Sep 26 2019, 13:23:47)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux2
Type "help", "copyright", "credits" or "license" for more information.

>>> import lxc
>>> c = lxc.Container("c1", "/containers/nodes")
Segmentation fault
[root@proc1 ~]#

Steps to reproduce

  1. Create custom container path utilizing lxc commands at path : example /containers/nodes/
  2. Verify Container node (ex. c1) has a config and rootfs in /containers/nodes/c1/
  3. python
    import lxc
    c = lxc.Container("c1", "/containers/nodes")
  4. Observe Segmentation Fault

Build fails as lxc-dev package is missing on macOS

Hi,

I'm trying to build using python setup.py build on macOS. I am greeted with the following error:

lxc.c:27:10: fatal error: 'lxc/lxccontainer.h' file not found
#include <lxc/lxccontainer.h>
         ^~~~~~~~~~~~~~~~~~~~

This is probably because of the missing lxc-dev package, which is not supplied by brew. Could you please advise on how I could resolve this issue?

Insufficiently detailed error messages

Hi,

After hundreds of container startups using the python3 bindings sometimes I get a sporadic error like:

    c = lxc.Container(container_id)

  File "/usr/lib64/python3.9/site-packages/lxc/__init__.py", line 157, in __init__
    _lxc.Container.__init__(self, name)

RuntimeError: Container_init:lxc.c:542: error during init for container 'c1'.

My problem with this is that there is no detail whatsoever on the reason behind the RuntimeError. I know that these are generated bindings, but is there any easy way to debug this, e.g. by outputting debug information via some DEBUG-enabling environment variable or perhaps a log file?

Thanks in advance,

documentation?

Unless I'm missing something, I don't see any documentation on how to use this other than a few examples. Is there a link that I'm not seeing?

Unable to set multiple network devices during creation

Brief description

Crash when attempting to set multiple network devices using Python bindings during container creation.

Required information

  • Distribution: Ubuntu
  • Distribution version: 18.04
  • The output of
    • lxc-start --version: 3.0.3
    • lxc-checkconfig

Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-4.15.0-45-generic
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled

--- Control groups ---
Cgroups: enabled

Cgroup v1 mount points:
/sys/fs/cgroup/systemd
/sys/fs/cgroup/devices
/sys/fs/cgroup/blkio
/sys/fs/cgroup/rdma
/sys/fs/cgroup/cpu,cpuacct
/sys/fs/cgroup/memory
/sys/fs/cgroup/net_cls,net_prio
/sys/fs/cgroup/pids
/sys/fs/cgroup/hugetlb
/sys/fs/cgroup/perf_event
/sys/fs/cgroup/cpuset
/sys/fs/cgroup/freezer

Cgroup v2 mount points:
/sys/fs/cgroup/unified

Cgroup v1 clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled, not loaded
Macvlan: enabled, not loaded
Vlan: enabled, not loaded
Bridges: enabled, loaded
Advanced netfilter: enabled, not loaded
CONFIG_NF_NAT_IPV4: enabled, not loaded
CONFIG_NF_NAT_IPV6: enabled, not loaded
CONFIG_IP_NF_TARGET_MASQUERADE: enabled, not loaded
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled, not loaded
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled, not loaded
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled, not loaded
FUSE (for use with lxcfs): enabled, not loaded

--- Checkpoint/Restore ---
checkpoint restore: enabled
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities:

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig

  • uname -a

Linux bruce6556 4.15.0-45-generic #48-Ubuntu SMP Tue Jan 29 16:28:13 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

  • cat /proc/self/cgroup

12:freezer:/user/bdallen/0
11:cpuset:/
10:perf_event:/
9:hugetlb:/
8:pids:/user.slice/user-1000.slice/[email protected]
7:net_cls,net_prio:/
6:memory:/user/bdallen/0
5:cpu,cpuacct:/user.slice
4:rdma:/
3:blkio:/user.slice
2:devices:/user.slice
1:name=systemd:/user.slice/user-1000.slice/[email protected]/gnome-terminal-server.service
0::/user.slice/user-1000.slice/[email protected]/gnome-terminal-server.service

  • cat /proc/1/mounts

sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs rw,nosuid,relatime,size=4039308k,nr_inodes=1009827,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=813912k,mode=755 0 0
/dev/sda1 / ext4 rw,relatime,errors=remount-ro,data=ordered 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/unified cgroup2 rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/rdma cgroup rw,nosuid,nodev,noexec,relatime,rdma 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset,clone_children 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=32,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=16475 0 0
mqueue /dev/mqueue mqueue rw,relatime 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,relatime,pagesize=2M 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0
tracefs /sys/kernel/debug/tracing tracefs rw,relatime 0 0
/dev/loop9 /snap/gnome-system-monitor/51 squashfs ro,nodev,relatime 0 0
/dev/loop0 /snap/gnome-calculator/180 squashfs ro,nodev,relatime 0 0
/dev/loop6 /snap/gnome-3-26-1604/74 squashfs ro,nodev,relatime 0 0
/dev/loop2 /snap/core/6350 squashfs ro,nodev,relatime 0 0
/dev/loop4 /snap/gnome-3-26-1604/70 squashfs ro,nodev,relatime 0 0
/dev/loop3 /snap/gnome-characters/124 squashfs ro,nodev,relatime 0 0
/dev/loop1 /snap/core/6259 squashfs ro,nodev,relatime 0 0
/dev/loop15 /snap/gnome-system-monitor/57 squashfs ro,nodev,relatime 0 0
/dev/loop5 /snap/gnome-calculator/238 squashfs ro,nodev,relatime 0 0
/dev/loop7 /snap/gnome-logs/37 squashfs ro,nodev,relatime 0 0
/dev/loop8 /snap/gnome-calculator/260 squashfs ro,nodev,relatime 0 0
/dev/loop11 /snap/gnome-characters/103 squashfs ro,nodev,relatime 0 0
/dev/loop12 /snap/gnome-logs/45 squashfs ro,nodev,relatime 0 0
/dev/loop13 /snap/gtk-common-themes/818 squashfs ro,nodev,relatime 0 0
/dev/loop16 /snap/gtk-common-themes/808 squashfs ro,nodev,relatime 0 0
/dev/loop17 /snap/gnome-characters/139 squashfs ro,nodev,relatime 0 0
binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
lxcfs /var/lib/lxcfs fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
tmpfs /run/user/121 tmpfs rw,nosuid,nodev,relatime,size=813908k,mode=700,uid=121,gid=125 0 0
tmpfs /run/user/1000 tmpfs rw,nosuid,nodev,relatime,size=813908k,mode=700,uid=1000,gid=1000 0 0
gvfsd-fuse /run/user/1000/gvfs fuse.gvfsd-fuse rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
tmpfs /run/snapd/ns tmpfs rw,nosuid,noexec,relatime,size=813912k,mode=755 0 0
nsfs /run/snapd/ns/gnome-system-monitor.mnt nsfs rw 0 0
/dev/loop18 /snap/gtk-common-themes/1122 squashfs ro,nodev,relatime 0 0
/dev/loop19 /snap/gnome-3-26-1604/78 squashfs ro,nodev,relatime 0 0
/dev/loop14 /snap/core/6405 squashfs ro,nodev,relatime 0 0

Issue description

It crashes when attempting to set up the second network device on Ubuntu 18.

Note that both network devices set up fine on Ubuntu 16.

Steps to reproduce

  1. Create two bridge devices:

     sudo brctl addbr bridge0; sudo brctl addbr bridge1
    
  2. Configure your target OS, xenial or bionic, in the demo program, line 18.

  3. Run the demo program, which attempts to create and start a container with two network devices:

     #!/usr/bin/env python3
     # ref. https://linuxcontainers.org/lxc/documentation/
     # requires: apt install python3-lxc
     #
     # setup: Create two bridge devices:
     #        "sudo brctl addbr bridge0; sudo brctl addbr bridge1"
     #
     # teardown: To destroy container:
     #           "sudo lxc-stop -n C1; sudo lxc-destroy -n C1"
     #           Remove the two bridge devices:
     #           "sudo brctl delbr bridge0; sudo brctl delbr bridge1"
     
     import lxc
     import sys
     
     container="C1"
     
     #target="xenial" # Ubuntu 16
     target="bionic" # Ubuntu 18
     
     # for Ubuntu 18 use "bionic", for Ubuntu 16 use "xenial"
     create_params = {"dist": "ubuntu", "release": target, "arch": "amd64"}
     
     # how to clean up
     print("To clean up type: 'sudo lxc-stop -n %s; sudo lxc-destroy -n %s'"%(
                                                          container, container))
     
     # create the container
     print("Creating container %s..."%container)
     c = lxc.Container(container)
     
     # skip if defined
     if c.defined:
         print("container %s already exists.  Aborting."%container)
         sys.exit(1)
     
     # create container rootfs
     success = c.create("download", lxc.LXC_CREATE_QUIET, create_params)
     if not success:
         print("Error creating container %s."%container, file=sys.stderr)
         sys.exit(1)
     
     # set up network device [0]
     c.network[0].type = "veth"
     c.network[0].flags = "up"
     c.network[0].link = "bridge0"
     if target=="xenial":
         c.network[0].ipv4 = "10.0.1.0/24"
     elif target=="bionic":
         c.network[0].ipv4_address = "10.0.1.0/24"
     else:
         raise RuntimeError("configuration error")
     
     # set up network device [1]
     c.network[1].type = "veth"
     c.network[1].flags = "up"
     c.network[1].link = "bridge1"
     if target=="xenial":
         c.network[1].ipv4 = "10.0.0.1/24"
     elif target=="bionic":
         c.network[1].ipv4_address = "10.0.1.1/24"
     else:
         raise RuntimeError("configuration error")
     
     # start the container
     success = c.start()
     if not success:
         print("Error starting container %s."%container, file=sys.stderr)
         sys.exit(1)
    
  4. Observe output

Run the demo: sudo ./demo.py

When it works (Ubuntu 16) it creates the container with the two network devices correctly configured. Type sudo lxc-ls -f to see success:

NAME    STATE   AUTOSTART GROUPS IPV4               IPV6 
C1      RUNNING 0         -      10.0.0.1, 10.0.1.0 -    

When it fails (Ubuntu 18) it crashes with IndexError: list index out of range:

Creating container C1...
Traceback (most recent call last):
  File "./zic2.py", line 39, in <module>
    c.network[1].type = "veth"
  File "/usr/lib/python3/dist-packages/lxc/__init__.py", line 122, in __getitem__
    raise IndexError("list index out of range")
IndexError: list index out of range
  5. Cleanup

Remove the container and the two Bridge devices:

    sudo lxc-destroy -n C1
    sudo brctl delbr bridge0; sudo brctl delbr bridge1

Recommended action

I don't mind a hack to get by. Maybe something to initialize multiple network entries, or maybe a config file that puts in two dummy network entries so that my code can set the values as needed (see the sketch below).
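
A hedged workaround sketch, not an official fix: with LXC 3.x key names, the second network entry can usually be created by setting its lxc.net.1.* config keys directly, after which c.network[1] becomes addressable. The key names follow the lxc.net.<n>.* scheme from the README section above; whether this fully replaces the network list API is an assumption.

    # Pre-create the second network entry via raw config keys instead of
    # c.network[1], which raises IndexError here on LXC 3.x.
    c.set_config_item("lxc.net.1.type", "veth")
    c.set_config_item("lxc.net.1.flags", "up")
    c.set_config_item("lxc.net.1.link", "bridge1")
    c.set_config_item("lxc.net.1.ipv4.address", "10.0.1.1/24")
    c.save_config()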

Information to attach

  • any relevant kernel output (dmesg)
  • container log (The file from running lxc-start -n <c> -l <log> -o DEBUG)
  • the containers configuration file

KeyError when getting the number of networks

Required information

Distribution: Debian
Distribution version: Stretch
lxc-start --version: 2.0.7
uname -a: Linux 4.9.0-3-amd64 #1 SMP Debian 4.9.30-2+deb9u2 (2017-06-26) x86_64 GNU/Linux
This is specifically for the Python API
python version: 3.5.3

Issue description

Calling the len() function on an empty ContainerNetworkList causes a KeyError.

Steps to reproduce

  1. create a new lxc container.
    test = lxc.Container("test")
  2. Call the len() function on the Network list
    len(test.network)

Information to attach

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3/dist-packages/lxc/__init__.py", line 127, in __len__
    values = self.container.get_config_item("lxc.network")
  File "/usr/lib/python3/dist-packages/lxc/__init__.py", line 296, in get_config_item
    value = _lxc.Container.get_config_item(self, key)
KeyError: 'Invalid configuration key'

Able to fix by changing the __len__ function in the ContainerNetworkList class: wrap the line

    values = self.container.get_config_item("lxc.network")

in a try/except block:

    try:
        values = self.container.get_config_item("lxc.network")
    except KeyError:
        return 0

publish on PyPI

Please publish python3-lxc via PyPI, so it can be properly included in pip/setup.py dependencies without the hassle of having to specify a Git/GitHub dependency (which is a pain in the backside in corporate environments).

Many keys fail to get from config while they are in the config

This has already been reported, though in the wrong place, at lxc/lxc#1518

Using the latest python3-lxc 3.0.2 release I face the same issue:

 wa01 cat /home/lxc/wa01/config

lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = down
lxc.network.hwaddr = 00:16:3e:84:67:55
lxc.rootfs = /var/lib/lxc/wa01/rootfs
lxc.rootfs.backend = dir
lxc.include = /usr/share/lxc/config/debian.common.conf
lxc.tty = 4
lxc.utsname = wa01-a
lxc.arch = amd64
lxc.loglevel = 2
lxc.start.auto = 0
lxc.network.ipv4 = 10.0.3.37
lxc.cgroup.cpuset.cpus = 0-1

A short demonstration using the config above (a hedged note on the renamed keys follows the results):

import lxc
c = lxc.Container('wa01')
keys = ['lxc.network.type','lxc.network.link','lxc.network.flags','lxc.network.hwaddr','lxc.rootfs','lxc.rootfs.backend','lxc.include','lxc.tty','lxc.utsname','lxc.arch','lxc.loglevel','lxc.start.auto','lxc.network.ipv4',]

for k in keys:
    try:
        print(c.get_config_item(k))
    except KeyError:
        print("Item {} raises invalid key".format(k))

Results

Item lxc.network.type raises invalid key
Item lxc.network.link raises invalid key
Item lxc.network.flags raises invalid key
Item lxc.network.hwaddr raises invalid key
/var/lib/lxc/wa01/rootfs
dir
Item lxc.include raises invalid key
4
wa01-a
x86_64
INFO
0
Item lxc.network.ipv4 raises invalid key
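
A hedged note on what is probably happening: LXC 3.x removed the legacy lxc.network.* key names in favour of lxc.net.0.*, and the Python binding simply passes key names through, so the old spellings raise KeyError. The mapping below, continuing the demonstration above, is an assumption based on the LXC 3.x renames, not something stated in this report:

    # Assumed LXC 3.x equivalents of the failing legacy keys.
    legacy_to_new = {
        "lxc.network.type":   "lxc.net.0.type",
        "lxc.network.link":   "lxc.net.0.link",
        "lxc.network.flags":  "lxc.net.0.flags",
        "lxc.network.hwaddr": "lxc.net.0.hwaddr",
        "lxc.network.ipv4":   "lxc.net.0.ipv4.address",
    }

    for old, new in legacy_to_new.items():
        try:
            print(new, "=", c.get_config_item(new))
        except KeyError:
            print("{} (formerly {}) is not set".format(new, old))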

bdev_specs are not supported

Are you planning to support bdev_specs in the Python and Go bindings? I think the lack of it stops many people from using the official bindings.

various lxc commands state containers are stopped when containers are started via python3-lxc API

Required information

  • Distribution: RHEL
  • Distribution version: 7
  • The output of
    • lxc-start --version = 4.0.1
    • lxc-checkconfig
      --- Namespaces ---
      Namespaces: enabled
      Utsname namespace: enabled
      Ipc namespace: enabled
      Pid namespace: enabled
      User namespace: enabled
      Network namespace: enabled

--- Control groups ---
Cgroups: enabled

Cgroup v1 mount points:
/sys/fs/cgroup/systemd
/sys/fs/cgroup/net_cls,net_prio
/sys/fs/cgroup/perf_event
/sys/fs/cgroup/devices
/sys/fs/cgroup/pids
/sys/fs/cgroup/cpu,cpuacct
/sys/fs/cgroup/cpuset
/sys/fs/cgroup/freezer
/sys/fs/cgroup/blkio
/sys/fs/cgroup/debug
/sys/fs/cgroup/memory

Cgroup v2 mount points:

Cgroup v1 clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled, not loaded
Macvlan: enabled, not loaded
Vlan: enabled, not loaded
Bridges: enabled, loaded
Advanced netfilter: enabled, not loaded
CONFIG_NF_NAT_IPV4: enabled, loaded
CONFIG_NF_NAT_IPV6: enabled, loaded
CONFIG_IP_NF_TARGET_MASQUERADE: enabled, loaded
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled, not loaded
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled, loaded
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled, not loaded
FUSE (for use with lxcfs): enabled, loaded

--- Checkpoint/Restore ---
checkpoint restore: enabled
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities:

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /bin/lxc-checkconfig

  • uname -a Linux proc1 4.9.98-rt76-7.5-#1 SMP PREEMPT Tue Apr 14 10:39:18 EDT 2020 x86_64 x86_64 x86_64 GNU/Linux

  • cat /proc/self/cgroup
    11:memory:/user.slice
    10:debug:/
    9:blkio:/user.slice
    8:freezer:/
    7:cpuset:/
    6:cpu,cpuacct:/user.slice
    5:pids:/user.slice
    4:devices:/user.slice
    3:perf_event:/
    2:net_cls,net_prio:/
    1:name=systemd:/user.slice/user-1000.slice/session-2.scope

  • cat /proc/1/mounts

sysfs /sys sysfs rw,seclabel,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
devtmpfs /dev devtmpfs rw,seclabel,nosuid,size=1995476k,nr_inodes=498869,mode=755 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,seclabel,nosuid,nodev,noexec 0 0
devpts /dev/pts devpts rw,seclabel,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,seclabel,nosuid,nodev,mode=755 0 0
tmpfs /sys/fs/cgroup tmpfs ro,seclabel,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,seclabel,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/debug cgroup rw,nosuid,nodev,noexec,relatime,debug 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
/dev/mapper/vg1-lv_root / xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0
selinuxfs /sys/fs/selinux selinuxfs rw,relatime 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=33,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=5902 0 0
debugfs /sys/kernel/debug debugfs rw,seclabel,relatime 0 0
mqueue /dev/mqueue mqueue rw,seclabel,relatime 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,seclabel,relatime 0 0
/dev/vda1 /boot xfs rw,seclabel,nosuid,nodev,relatime,attr2,inode64,noquota 0 0
/dev/mapper/vg1-lv_var /var xfs rw,seclabel,nosuid,nodev,relatime,attr2,inode64,noquota 0 0
/dev/mapper/vg1-lv_home /home xfs rw,seclabel,nosuid,nodev,relatime,attr2,inode64,noquota 0 0
/dev/mapper/vg1-lv_tmp /tmp xfs rw,seclabel,nosuid,nodev,noexec,relatime,attr2,inode64,noquota 0 0
/dev/mapper/vg1-lv_log /var/log xfs rw,seclabel,nosuid,nodev,relatime,attr2,inode64,noquota 0 0
/dev/mapper/vg1-lv_audit /var/log/audit xfs rw,seclabel,nosuid,nodev,noexec,relatime,attr2,inode64,noquota 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
lxcfs /var/lib/lxcfs fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
tmpfs /var/lib/lxd/shmounts tmpfs rw,seclabel,relatime,size=100k,mode=711 0 0
tmpfs /var/lib/lxd/devlxd tmpfs rw,seclabel,relatime,size=100k,mode=755 0 0
tmpfs /run/user/1000 tmpfs rw,seclabel,nosuid,nodev,relatime,size=404504k,mode=700,uid=1000,gid=1000 0 0
gvfsd-fuse /run/user/1000/gvfs fuse.gvfsd-fuse rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
/dev/fuse /run/user/1000/doc fuse rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
tmpfs /run/user/0 tmpfs rw,seclabel,nosuid,nodev,relatime,size=404504k,mode=700 0 0
binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0

Issue description

Not sure where this should reside... various lxc commands report containers as stopped when the containers were started via the python3-lxc API. A hedged guess at the cause follows the steps below.

Steps to reproduce

  1. Create custom container path utilizing lxc commands at path : example /containers/nodes/
  2. Verify Container node (ex. c1) has a config and rootfs in /containers/nodes/c1/
  3. Start Container via python3-lxc API
  4. Run lxc-info c1 -P /containers/nodes
    or lxc-ls -P /containers/nodes --active
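
A hedged guess at the cause, not confirmed in this report: if the Python binding and the CLI tools look at different configuration paths, they will report different container state. A minimal sketch that passes the same custom path to both sides, assuming lxc.Container accepts a config_path argument as in lxc/__init__.py (note the separate report above about a segfault when a custom path is passed on some setups):

    import lxc

    # Point the binding at the same path that lxc-info/lxc-ls are queried
    # with via -P, so both see the same container state.
    c = lxc.Container("c1", config_path="/containers/nodes")
    if not c.running:
        c.start()
    print(c.state)  # compare with: lxc-info c1 -P /containers/nodes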

Option "container_command" does not always work

Option "container_command" does not always work:

container_command: |
  echo "{{ ssh_pubkey_lxc }}" > /etc/openssh/authorized_keys/root

Debug:

(0, b'Exception ignored in: <function _after_at_fork_child_reinit_locks at 0x7feac03b8200>\r\nTraceback (most recent call last):\r\n File "/usr/lib64/python3.7/logging/init.py", line 258, in _after_at_fork_child_reinit_locks\r\n File "/usr/lib64/python3.7/logging/init.py", line 226, in _releaseLock\r\nRuntimeError: cannot release un-acquired lock\r\nTraceback (most recent call last):\r\n File "/tmp/.private/root/ansible_lxc_container_payload_bbmqz68a/ansible_lxc_container_payload.zip/ansible/modules/cloud/lxc/lxc_container.py", line 557, in create_script\r\n File "/usr/lib64/python3.7/tempfile.py", line 340, in mkstemp\r\n File "/usr/lib64/python3.7/tempfile.py", line 258, in _mkstemp_inner\r\nFileNotFoundError: [Errno 2] \xd0\x9d\xd0\xb5\xd1\x82 \xd1\x82\xd0\xb0\xd0\xba\xd0\xbe\xd0\xb3\xd0\xbe \xd1\x84\xd0\xb0\xd0\xb9\xd0\xbb\xd0\xb0 \xd0\xb8\xd0\xbb\xd0\xb8 \xd0\xba\xd0\xb0\xd1\x82\xd0\xb0\xd0\xbb\xd0\xbe\xd0\xb3\xd0\xb0: '/tmp/.private/root/lxc-attach-script_mu_0_72'\r\n\r\n{"changed": true, "lxc_container": {"interfaces": ["eth0", "lo"], "ips": ["10.88.5.155", "2a0c:88c0:2:400:216:3eff:fe04:3e77"], "state": "running", "init_pid": 3470, "name": "redmine"}, "invocation": {"module_args": {"name": "redmine", "container_log": true, "container_log_level": "DEBUG", "container_command": "echo \"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK2O9dl+1rAzssb7ARE3aWjxbzyt4/K2xwrjlC/Pp+jq [email protected]\" > /etc/openssh/authorized_keys/root\n", "template": "ubuntu", "backing_store": "dir", "vg_name": "lxc", "fs_type": "ext4", "fs_size": "5G", "state": "started", "clone_snapshot": false, "archive": false, "archive_compression": "gzip", "template_options": null, "config": null, "thinpool": null, "directory": null, "zfs_root": null, "lv_name": "redmine", "lxc_path": null, "container_config": null, "clone_name": null, "archive_path": null}}}\r\n', b'OpenSSH_7.9p1, OpenSSL 1.1.1g 21 Apr 2020\r\ndebug1: Reading configuration data /home/aas/.ssh/config\r\ndebug1: Reading configuration data /etc/openssh/ssh_config\r\ndebug1: /etc/openssh/ssh_config line 24: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 4 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 7525\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to lxctest closed.\r\n')

Please help me solve the problem.

$ ansible --version
ansible 2.9.15
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/aas/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.4 (default, Apr 17 2020, 12:15:50) [GCC 8.4.1 20200305 (ALT p9 8.4.1-alt0.p9.1)]

Issues with Ansible lxc connection plugin

I'm using Ansible's lxc connection plugin on Alpine Linux 3.9.3. I've had a couple of issues which seem to be related to the python3-lxc package used by Ansible's connection plugin.

  1. Ansible hangs when applying a playbook to an lxc container (and works fine on other hosts). Ansible issue and my initial attempt to troubleshoot at ansible/ansible#54659

  2. Since upgrading to python 3.7 Ansible produces multiple RuntimeError: cannot release un-acquired lock messages when applying a playbook to an lxc container (and works fine on other hosts). Ansible issue and discussion pointing at python3-lxc at ansible/ansible#55729

Segmentation fault from container creation call within a nested container

Hi,

Running this minimal script

import lxc
c = lxc.Container("taskc")
if not c.defined:
    # Create the container rootfs
    if not c.create("download", lxc.LXC_CREATE_QUIET, {"dist": "fedora",
                                                       "release": "32",
                                                       "arch": "i386"}):
        print("Could not create container")

within a container with nesting.conf enabled results in Segmentation fault (core dumped) with LXC version 3.2.1. Narrowing down the possible sources shows that the create call is the cause of the segmentation fault.

Has anyone else experienced this?

script fails when run as a cron

Required information

  • Distribution:
    CentOS Linux release 7.6.1810 (Core)

lxc-start --version: 1.0.11
lxc-checkconfig:
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-3.10.0-957.10.1.el7.x86_64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
newuidmap is not installed
newgidmap is not installed
Network namespace: enabled
Multiple /dev/pts instances: enabled

--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
Bridges: enabled
Advanced netfilter: enabled
CONFIG_NF_NAT_IPV4: enabled
CONFIG_NF_NAT_IPV6: enabled
CONFIG_IP_NF_TARGET_MASQUERADE: enabled
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled

--- Checkpoint/Restore ---
checkpoint restore: enabled
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities: enabled

  • uname -a
    Linux hostname.com 3.10.0-957.10.1.el7.x86_64 #1 SMP Mon Mar 18 15:06:45 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
  • cat /proc/self/cgroup
    11:perf_event:/
    10:cpuset:/
    9:devices:/user.slice
    8:pids:/user.slice
    7:cpuacct,cpu:/
    6:net_prio,net_cls:/
    5:blkio:/
    4:hugetlb:/
    3:memory:/
    2:freezer:/
    1:name=systemd:/user.slice/user-1001.slice/session-1133.scope
  • cat /proc/1/mounts
    rootfs / rootfs rw 0 0 sysfs /sys sysfs rw,seclabel,nosuid,nodev,noexec,relatime 0 0 proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0 devtmpfs /dev devtmpfs rw,seclabel,nosuid,size=3900020k,nr_inodes=975005,mode=755 0 0 securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0 tmpfs /dev/shm tmpfs rw,seclabel,nosuid,nodev 0 0 devpts /dev/pts devpts rw,seclabel,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0 tmpfs /run tmpfs rw,seclabel,nosuid,nodev,mode=755 0 0 tmpfs /sys/fs/cgroup tmpfs ro,seclabel,nosuid,nodev,noexec,mode=755 0 0 cgroup /sys/fs/cgroup/systemd cgroup rw,seclabel,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgr oups-agent,name=systemd 0 0 pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0 efivarfs /sys/firmware/efi/efivars efivarfs rw,nosuid,nodev,noexec,relatime 0 0 cgroup /sys/fs/cgroup/freezer cgroup rw,seclabel,nosuid,nodev,noexec,relatime,freezer 0 0 cgroup /sys/fs/cgroup/memory cgroup rw,seclabel,nosuid,nodev,noexec,relatime,memory 0 0 cgroup /sys/fs/cgroup/hugetlb cgroup rw,seclabel,nosuid,nodev,noexec,relatime,hugetlb 0 0 cgroup /sys/fs/cgroup/blkio cgroup rw,seclabel,nosuid,nodev,noexec,relatime,blkio 0 0 cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,seclabel,nosuid,nodev,noexec,relatime,net_prio,net_cls 0 0 cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,seclabel,nosuid,nodev,noexec,relatime,cpuacct,cpu 0 0 cgroup /sys/fs/cgroup/pids cgroup rw,seclabel,nosuid,nodev,noexec,relatime,pids 0 0 cgroup /sys/fs/cgroup/devices cgroup rw,seclabel,nosuid,nodev,noexec,relatime,devices 0 0 cgroup /sys/fs/cgroup/cpuset cgroup rw,seclabel,nosuid,nodev,noexec,relatime,cpuset,clone_children 0 0 cgroup /sys/fs/cgroup/perf_event cgroup rw,seclabel,nosuid,nodev,noexec,relatime,perf_event 0 0 configfs /sys/kernel/config configfs rw,relatime 0 0 /dev/mapper/cl-root / xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0 selinuxfs /sys/fs/selinux selinuxfs rw,relatime 0 0 mqueue /dev/mqueue mqueue rw,seclabel,relatime 0 0 debugfs /sys/kernel/debug debugfs rw,relatime 0 0 hugetlbfs /dev/hugepages hugetlbfs rw,seclabel,relatime 0 0 /dev/sdb2 /boot xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0 /dev/sdb1 /boot/efi vfat rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=winnt,errors=remount-ro 0 0 /dev/mapper/cl-home /home xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0 sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0 tmpfs /run/user/42 tmpfs rw,seclabel,nosuid,nodev,relatime,size=783396k,mode=700,uid=42,gid=42 0 0 systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=109481 0 0 tmpfs /run/user/0 tmpfs rw,seclabel,nosuid,nodev,relatime,size=783396k,mode=700 0 0 /dev/sda1 /home/nas/drives/backup ext4 rw,seclabel,relatime,data=ordered 0 0 tmpfs /run/user/1001 tmpfs rw,seclabel,nosuid,nodev,relatime,size=783396k,mode=700,uid=1001,gid=1001 0 0

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /bin/lxc-checkconfig

Issue description

I have a Python script that uses LXC to validate postgresql backups inside a CentOS 7 container. The script runs fine when run from the command line. However, when it runs as a cron job it errors out with some cryptic messages:

lxc_container: attach.c: lxc_attach_run_command: 1298 No such file or directory - failed to exec 'runuser'

The commands being run by attach_wait(lxc.attach_run_command...) appear to run out of order when started from cron; see the hedged sketch after the script link below.

Steps to reproduce

  1. Run the script manually. Success.
  2. Schedule cron job in /etc/cron.d/xyz
  3. Wait for cron job to run... then observe failure

Information to attach

Here is the script I am running:
https://github.com/jjriv/Python-Sys-Admin-Scripts/blob/master/postgres-backup-validation
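
A hedged sketch of a possible mitigation, not a confirmed fix: cron runs with a minimal environment, so be explicit about PATH and use absolute paths for anything attach_run_command executes. The container name and the pg_isready check below are illustrative placeholders, not taken from the linked script.

    import lxc

    c = lxc.Container("pg-validate")  # placeholder name

    # Spell out PATH and absolute paths, since cron's environment is minimal.
    rc = c.attach_wait(lxc.attach_run_command,
                       ["/bin/sh", "-c",
                        "PATH=/usr/sbin:/usr/bin:/sbin:/bin; "
                        "/sbin/runuser -u postgres -- pg_isready"])
    print("exit status:", rc)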

lxc keeps config file open

I am working on a project for managing an lxc-based virtualization platform that relies on this Python lxc module. I have noticed that when I create an instance of the lxc Container object referencing an existing container, it keeps the underlying lxc config file open during the object's lifetime. This was not the case in previous versions of LXC; it started with 3.0.0.

~ # python3.5
Python 3.5.5 (default, May 15 2018, 08:59:46) 
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> import lxc
>>> vps = lxc.Container("Test2")
>>> os.getpid()
6975
>>>

~ # lsof /home/lxc/Test2/config 
COMMAND    PID USER  FD   TYPE DEVICE SIZE/OFF  NODE NAME
python3.5 6975 root mem    REG 230,48     1652 50539 /home/lxc/Test2/config

This issue prevents me from migrating to LXC 3.0. I hope this was not intentional behaviour. Also, I believe this is not related to this Python binding but to the lxc core code.

Odd behaviour when manipulating config items

I think something is odd; perhaps it is because this library targets lxc 2.1/3.x and I am using lxc 2.0.x (Debian 9), but:

clear_config_item(key) doesn't work reliably. Sometimes it fails, which may be my fault, but even when it doesn't it always returns False, e.g. container.clear_config_item('lxc.start.auto'), and it never really clears the entry in the config file. So when setting a new value we end up with two values for the same key:
lxc.start.auto = 0
lxc.start.auto = 1

set_config_item('lxc.network.0.whatever', 'that') adds:
lxc.network.0.whatever = that
to container config

So this library now feels unusable for interacting with container configuration, though container operations (create/copy/start/stop/freeze) work great.

I can see that some examples in examples/api_tests.py target higher versions (> 2.1), so maybe this library's scope is only those Linux distributions that ship those lxc versions.

Let me know if you want further reporting and testing; I would like to help. But if Debian stable's 2.0.x is not a target (the upcoming Debian release is also on lxc 2.0.9), please state the minimum required lxc version in the project docs, or fail at install or run time.

Meanwhile I have to replace it with the nasty "ini config file without DEFAULT section" hack to read the file directly and cross my fingers. (https://github.com/EstudioNexos/LXC-Web-Panel)

Thanks :) really

lxc-copy equivalent?

Is there some way to mimic the lxc-copy operation using the Python bindings? In particular, I would like to run something such as lxc-copy -s -B overlayfs -n oldid -N newid. (A possible equivalent is sketched below.)
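
A hedged sketch of one possible equivalent, assuming the bindings expose Container.clone() and the LXC_CLONE_SNAPSHOT flag (both are assumptions here, not confirmed by this report):

    import lxc

    old = lxc.Container("oldid")

    # Assumption: clone() with a snapshot flag and bdevtype="overlayfs"
    # roughly corresponds to: lxc-copy -s -B overlayfs -n oldid -N newid
    new = old.clone("newid", flags=lxc.LXC_CLONE_SNAPSHOT, bdevtype="overlayfs")
    if not new:
        print("clone failed")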

python: set_config_item("lxc.cgroup.*") generates duplicate lines in config file

With Python bindings:

    container.set_config_item('lxc.cgroup.memory.limit_in_bytes', mem_limit)
    container.set_config_item('lxc.cgroup.memory.memsw.limit_in_bytes', memsw_limit)

generates following lines in container config:

lxc.cgroup.memory.limit_in_bytes = 3G
lxc.cgroup.memory.memsw.limit_in_bytes = 
lxc.cgroup.memory.memsw.limit_in_bytes = 4G

And if there are other set_config_items before 'lxc.cgroup.memory.limit_in_bytes':

lxc.cgroup.memory.limit_in_bytes = 
lxc.cgroup.memory.limit_in_bytes = 3G
lxc.cgroup.memory.memsw.limit_in_bytes = 
lxc.cgroup.memory.memsw.limit_in_bytes = 4G

This happens on the current stable-2.0 branch; I haven't tested the master branch yet.

Can't create container

I'm having trouble finding a tutorial that I can follow without failing at the create-container stage (a comparison sketch follows the transcript below).

Python 3.6.8 (default, Aug 24 2020, 17:57:11)
[GCC 8.3.1 20191121 (Red Hat 8.3.1-5)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import lxc
>>> container = lxc.Container('test')
>>> container.create('busybox')
False
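
For comparison, a minimal sketch using the download template as in other reports on this page; the dist/release/arch values are only illustrative, and a False return from create() generally just means the chosen template could not complete:

    import lxc

    container = lxc.Container("test")

    # Sketch based on the download-template examples elsewhere on this page.
    ok = container.create("download", lxc.LXC_CREATE_QUIET,
                          {"dist": "ubuntu", "release": "bionic", "arch": "amd64"})
    print("created:", ok)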

New release?

There's been quite a few commits since the last tag; would it be possible to make a new release? I'd like to update the version of this library that will be included in the Debian bookworm release.

Incorrect behavior with create() after destroy()

Required information

  • Distribution: Debian
  • Distribution version: Stretch
  • python version: 3.5.3
  • The output of
    • lxc-start --version: 2.0.7
uname -a: Linux 4.9.0-3-amd64 #1 SMP Debian 4.9.30-2+deb9u2 (2017-06-26) x86_64 GNU/Linux

Issue description

After instantiating, creating, and destroying a Linux container, calling create() without re-instantiating returns False rather than recreating the container.

Steps to reproduce

  1. test = lxc.Container("test")
  2. test.create(template="debian")
  3. test.destroy()
  4. test.create(template="debian")

The second call simply returns False (see the sketch below).
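
A hedged workaround sketch implied by the report itself: build a fresh Container object after destroy() before calling create() again.

    import lxc

    test = lxc.Container("test")
    test.create(template="debian")
    test.destroy()

    # Workaround: re-instantiate instead of reusing the stale object,
    # whose second create() call returns False.
    test = lxc.Container("test")
    test.create(template="debian")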
