
hwe's Introduction

HWE

build-38 build-39 build-40

The purpose of these images is to provide community Fedora images with hardware enablement (ASUS and Surface) and Nvidia support. This approach can lead to greater reliability, as failures are caught at the build level instead of on the client machine. It also allows for individual sets of images for each series of Nvidia drivers, so users can stay current with their OS while remaining on an older, known-working driver. Performance regression with a recent driver update? Reboot into a known-working driver after one command. That's the goal!
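
For example, rebasing onto a pinned driver series might look like the following (a sketch only; the exact tag naming, e.g. 38-525 as referenced in the issues below, can differ per image):

rpm-ostree rebase ostree-image-signed:docker://ghcr.io/ublue-os/silverblue-nvidia:38-525
systemctl reboot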

Documentation

hwe's People

Contributors

andrejsh3, awebeer256, bigpod98, bobslept, bsherman, castrojo, dependabot[bot], dylanmtaylor, eyecantcu, github-actions[bot], joshua-stone, karajan9, kylegospo, marcoceppi, p5, plata, qoijjj, thetredev, trofosila, vulongm, xynydev


hwe's Issues

Possibility to add supergfxctl (or something similar) to the nvidia image?

Hello!

Would it be a good idea to layer the supergfxctl package into the nvidia image? supergfxctl was developed by the asus-linux project as a way to disable the Nvidia device in dual-GPU laptops, achieving better power savings than what's usually possible using Nvidia's own methods alone. This is especially relevant on older Nvidia cards which lack the RTD3 support available in newer GPUs; on my machine it means the difference between 6 hours of battery and 10 hours of battery. Compared to other similar projects, this works without rebooting the machine; logging out and back in is usually enough. Despite being developed as part of the asus-linux project, it's supposed to work on most computers regardless of manufacturer.

Supergfxctl is currently not available in the fedora repos, but there's a copr available: https://copr.fedorainfracloud.org/coprs/lukenukem/asus-linux/

Other similar projects:

  • Bumblebee - One of the older methods of switching between GPUs; it needs a reboot in between and reportedly has performance issues.
  • optimus-manager - No Wayland support; only targets Arch and Manjaro distros.
  • EnvyControl - Seems to be the most promising alternative, with more or less feature parity with supergfxctl. There seems to be a COPR available, but I haven't tested it.

Personally, I think including one of these tools could be very beneficial for laptop users, especially those with pre-Turing hardware. However, since most of them aren't available in the official repos, it raises the question of whether ublue wants to include unofficial packages from unverified sources such as COPR, so I understand if this might be outside of the project's aims for now. :)
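
For reference, layering it on an ostree-based image would look roughly like this (a sketch: rpm-ostree cannot enable COPRs directly, so the repo file has to be dropped in manually, and the exact .repo URL and Fedora release below are illustrative):

sudo curl -Lo /etc/yum.repos.d/_copr_lukenukem-asus-linux.repo \
  https://copr.fedorainfracloud.org/coprs/lukenukem/asus-linux/repo/fedora-39/lukenukem-asus-linux-fedora-39.repo
sudo rpm-ostree install supergfxctl
systemctl reboot
sudo systemctl enable --now supergfxd.service   # after the reboot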

Switching to the discrete GPU under Wayland causes black screen on internal display

I tried to use supergfxctl on my fresh install of silverblue-nvidia to switch to NvidiaNoModeset instead of Hybrid, because my battery isn't a concern and I'd rather get the substantial performance boost of not having to copy frames around in video memory. Upon rebooting, everything does seem to be running exclusively on my dGPU, but my laptop's internal display is completely black, and systemctl status supergfxd.service reports several errors like "Did not have dGPU handle" and "Could not find dGPU." This breaks supergfxctl-gex, so I tried the shell command to switch back to Hybrid graphics, but after logging out and back in (or rebooting) it's still on NvidiaNoModeset, completely ignoring my command, and still complaining about not being able to find the dGPU.

Is there a way to:

  1. Make my dGPU visible to supergfxctl again after switching to full Nvidia?
  2. Make my laptop's internal display work again?

I switched to silverblue-nvidia after running into nvidia driver trouble on tumbleweed, but this is far worse than anything I experienced there.
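
For reference, the mode can usually be inspected and reset from the shell like this (a sketch; flags as in recent supergfxctl releases, which may differ on other versions):

supergfxctl -g                        # print the current graphics mode
supergfxctl -s                        # list the modes this machine supports
sudo supergfxctl -m Hybrid            # request a switch back to Hybrid
journalctl -b -u supergfxd.service    # look for why the dGPU handle was lost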

nvidia-powerd.service failure

After rebasing to the Nvidia builds of LXQt, KDE, and MATE, the nvidia-powerd.service fails on each one of them, which causes the computer to not finish booting. I'm having the issue on a Dell XPS 15 with an Nvidia GTX 1650 as well as an MSI GS66 Stealth with an Nvidia 3060 laptop GPU.

This also happens in multi-user (terminal) mode.
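
As a stopgap until the root cause is known, masking the failing unit should at least let the machine finish booting (a workaround sketch, not a fix; nvidia-powerd only provides Dynamic Boost on supported laptops):

sudo systemctl mask --now nvidia-powerd.service
systemctl status nvidia-powerd.service   # confirm it is masked before rebooting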

38-530 breaks Wayland

Not sure if this is an issue with this image or Fedora 38, but using 38-530 seems to break Wayland (and prevents a usable GDM/GNOME session, unless you disable Wayland in /etc/gdm/custom.conf).

Is there some extra step I might have missed going from 38-525?
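
For anyone needing the interim workaround mentioned above, forcing GDM back to Xorg is a one-line change (a sketch; it assumes the stock commented-out WaylandEnable line is present in the file):

sudo sed -i 's/^#\?WaylandEnable=.*/WaylandEnable=false/' /etc/gdm/custom.conf
grep WaylandEnable /etc/gdm/custom.conf   # should now read WaylandEnable=false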

consolidate HWE repos and build workflows

We have a number of HWE (hardware enablement) repos in this org:

I'd like to consolidate all these repos into one, as it will greatly simplify our maintenance. Currently, I know for sure that some of these repos are missing features and fixes found in main and nvidia, because it's a lot of headache to fix up all the different repos.

I created a discord thread which has some discussion about this too.

https://discord.com/channels/1072614816579063828/1220077695279435777

Chore(CI): builds should succeed in self-contained PR builds

When Nvidia builds run in a PR, they fail: the akmods image is built but not published, so the follow-on builds fail when they attempt to reference the just-built image by that PR tag.

We need a solution which allows valid builds but doesn't allow signing/publishing of images in PRs.

System76 devices

Some of System76's computers rely on dkms modules to enable certain functionality (depending on the model). For example, I have a Thelio desktop which relies on system76-io-dkms and system76-power to tune the case fan. Without them, it runs at 100%.

System76 also has some laptops which require system76-dkms to control the keyboard backlight, fans and airplane mode.

There's also system76-acpi-dkms for some of their open-firmware systems, but my understanding is that its functionality has been upstreamed into the Linux kernel and users generally don't need to install it now.

Pop!_OS installs all of this software by default, and it doesn't seem to cause problems on any of their hardware.

All of this software is available in the szydell/system76 copr.

I'd like to see a System76 ublue image which installs this vital software.
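
For context, this is roughly what layering that set looks like on a client today (a sketch; the COPR .repo URL and Fedora release are illustrative, and having to carry dkms bits client-side is exactly the pain a prebuilt image would remove):

sudo curl -Lo /etc/yum.repos.d/_copr_szydell-system76.repo \
  https://copr.fedorainfracloud.org/coprs/szydell/system76/repo/fedora-39/szydell-system76-fedora-39.repo
sudo rpm-ostree install system76-io-dkms system76-power   # pick the modules matching the hardware
systemctl reboot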

PCI-Express Runtime D3 (RTD3) Power Management Lost after update

Hello,
I have a Dell laptop with two graphics cards (an Intel chip and an Nvidia RTX 3050 Ti).
Before Fedora kernel update 6.3.5-200.fc38.x86_64, Nvidia's PCI-Express Runtime D3 (RTD3) Power Management was running perfectly out of the box.

cat /sys/class/drm/card1/device/power_state returned D3Cold when no process was running on the graphic card.

After the update, the card never enters D3Cold and stays in D0.

I don't know why, but on my system it means a huge impact on power consumption and less battery time. Am I the only one impacted?
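
For comparison across kernel versions, these are the sysfs knobs I'd check (a diagnostic sketch; the card index and paths are machine-specific):

ls -l /sys/class/drm/card*/device/driver          # confirm which cardN is the Nvidia GPU
cat /sys/class/drm/card1/device/power/control     # runtime PM must be "auto" to reach D3cold
cat /sys/class/drm/card1/device/power_state       # expect D3cold when the dGPU is idle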

Problem upgrading from Fedora Silverblue 38 to 39

This problem occurs and I don’t know how to fix it:

❯ rpm-ostree update
note: automatic updates (stage) are enabled
Pulling manifest: ostree-image-signed:docker://ghcr.io/ublue-os/silverblue-nvidia:latest
Checking out tree e9c4768... done
Enabled rpm-md repositories: updates fedora rpmfusion-free-updates rpmfusion-free rpmfusion-nonfree-updates rpmfusion-nonfree google-chrome rpmfusion-nonfree-nvidia-driver rpmfusion-nonfree-steam phracek-PyCharm copr:copr.fedorainfracloud.org:champe20:apx
Importing rpm-md... done
rpm-md repo 'updates' (cached); generated: 2023-11-08T01:14:26Z solvables: 9928
rpm-md repo 'fedora' (cached); generated: 2023-11-03T02:50:25Z solvables: 70825
rpm-md repo 'rpmfusion-free-updates' (cached); generated: 2023-11-07T19:31:11Z solvables: 13
rpm-md repo 'rpmfusion-free' (cached); generated: 2023-11-04T16:49:08Z solvables: 445
rpm-md repo 'rpmfusion-nonfree-updates' (cached); generated: 2023-11-07T19:50:21Z solvables: 22
rpm-md repo 'rpmfusion-nonfree' (cached); generated: 2023-11-04T17:26:32Z solvables: 208
rpm-md repo 'google-chrome' (cached); generated: 2023-11-07T22:23:37Z solvables: 3
rpm-md repo 'rpmfusion-nonfree-nvidia-driver' (cached); generated: 2023-11-07T15:52:41Z solvables: 29
rpm-md repo 'rpmfusion-nonfree-steam' (cached); generated: 2023-08-10T16:27:35Z solvables: 2
rpm-md repo 'phracek-PyCharm' (cached); generated: 2023-08-10T15:35:19Z solvables: 5
rpm-md repo 'copr:copr.fedorainfracloud.org:champe20:apx' (cached); generated: 2023-10-24T12:06:39Z solvables: 4
Resolving dependencies... done
Checking out packages... done
error: Checkout binutils-2.40-13.fc39.x86_64: Hardlinking b0/5c0ba45128dbfb28a8089c44e19f93cd0e4531678f017696f69515b916f6c3.file to ld: File exists

cannot layer binutils

I'm trying to install binutils:

$ rpm-ostree install binutils
[...]
error: Checkout binutils-2.40-13.fc39.x86_64: Hardlinking b0/5c0ba45128dbfb28a8089c44e19f93cd0e4531678f017696f69515b916f6c3.file to ld: File exists

afaics this is because the file exists already in the image:

$ podman run -it ghcr.io/ublue-os/silverblue-nvidia:39 ls -l /usr/bin/ld         
lrwxrwxrwx. 1 root root 20 Jan 15 15:37 /usr/bin/ld -> /etc/alternatives/ld

$ podman run -it ghcr.io/ublue-os/silverblue-nvidia:39 ls -l /etc/alternatives/ld
lrwxrwxrwx. 1 root root 15 Jan 15 15:37 /etc/alternatives/ld -> /usr/bin/ld.bfd

$ podman run -it ghcr.io/ublue-os/silverblue-nvidia:39 ls -l /usr/bin/ld.bfd     
ls: cannot access '/usr/bin/ld.bfd': No such file or directory

it's a symlink to /etc/alternatives/ld, which itself points to /usr/bin/ld.bfd, which doesn't exist.

The symlinks are created here:
https://github.com/ublue-os/nvidia/blob/98c79efbd4f1e929df1dc4416bb701bb9a319969/post-install.sh#L9-L10

I'm wondering if these lines are still relevant, as the file they're linking to doesn't exist anyway?

Make Fedora 40 beta available to help debug issues with the Nvidia driver

Hi, thanks a lot for your everyday work. I would like to help debug some possible issues that could appear before Fedora 40 is released. I tried to rebase to 40, but the images are not there yet (I use bazzite-gnome-nvidia because of the Razer kernel driver, but don't need more).

rpm-ostree rebase ostree-unverified-registry:ghcr.io/ublue-os/bazzite-gnome-nvidia:40
Pulling manifest: ostree-unverified-registry:ghcr.io/ublue-os/bazzite-gnome-nvidia:40
error: Creating importer: Failed to invoke skopeo proxy method OpenImage: remote error: reading manifest 40 in ghcr.io/ublue-os/bazzite-gnome-nvidia: manifest unknown

rpm-ostree rebase ostree-image-signed:docker://ghcr.io/ublue-os/silverblue-nvidia:40
Pulling manifest: ostree-image-signed:docker://ghcr.io/ublue-os/silverblue-nvidia:40
error: Creating importer: Failed to invoke skopeo proxy method OpenImage: remote error: reading manifest 40 in ghcr.io/ublue-os/silverblue-nvidia: manifest unknown

Thanks again !
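
Until the 40 tags land, you can watch for them without attempting a rebase (a sketch; assumes skopeo is available on the host):

skopeo list-tags docker://ghcr.io/ublue-os/silverblue-nvidia | grep '"40"'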

can't rebase to surface images

Very happy to use this on an old Surface Pro 4.
It doesn't seem to matter which image I choose, e.g.:

rpm-ostree rebase ostree-image-signed:docker://ghcr.io/ublue-os/bluefin-surface:39

error: Generating initramfs overlay: Adding modules-load.d/ublue-surface.conf: stat: No such file or directory (os error 2)

It doesn't boot successfully; I have to revert.

Maybe I'm missing something? I'm very new to this stuff.
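
One hedged guess that may help narrow this down: the error looks like a client-side initramfs overlay tracking a file the image doesn't ship, which rpm-ostree can show and untrack (a diagnostic sketch, not a confirmed fix):

rpm-ostree status -v    # look for InitramfsEtc entries on the current deployment
sudo rpm-ostree initramfs-etc --untrack /etc/modules-load.d/ublue-surface.conf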

GPU support in containers appears to have broken

The instructions to test GPUs in containers don't appear to be working. I run:

podman run \
    --user 1000:1000 \
    --security-opt=no-new-privileges \
    --cap-drop=ALL \
    --security-opt label=type:nvidia_container_t  \
    docker.io/nvidia/samples:vectoradd-cuda11.2.1

and get the following output:

Trying to pull docker.io/nvidia/samples:vectoradd-cuda11.2.1...
Getting image source signatures
Copying blob fe72fda9c19e done   |
Copying blob b3afe92c540b done   |
Copying blob ddb025f124b9 done   |
Copying blob b25f8d7adb24 done   |
Copying blob d519e2592276 done   |
Copying blob d22d2dfcfa9c done   |
Copying blob c88b7b7dd6ba done   |
Copying blob f2c9b54e36bc done   |
Copying blob 50333516d41c done   |
Copying config 02c32dc6d0 done   |
Writing manifest to image destination
Failed to allocate device vector A (error code CUDA driver version is insufficient for CUDA runtime version)!
[Vector addition of 50000 elements]

nvidia-smi output:

Sun Oct 22 19:22:29 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.113.01             Driver Version: 535.113.01   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 4090        Off | 00000000:01:00.0  On |                  Off |
|  0%   34C    P8              11W / 450W |    465MiB / 24564MiB |     16%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A      2295      G   /usr/bin/gnome-shell                        154MiB |
|    0   N/A  N/A      3495      G   /usr/lib64/firefox/firefox                  249MiB |
|    0   N/A  N/A      4459      G   /usr/bin/alacritty                           28MiB |
+---------------------------------------------------------------------------------------+

just --unstable nvidia-test-cuda:

[Vector addition of 50000 elements]
Copy input data from the host memory to the CUDA device
CUDA kernel launch with 196 blocks of 256 threads
Copy output data from the CUDA device to the host memory
Test PASSED
Done

sudo nvidia-container-cli -k -d /dev/tty info:

-- WARNING, the following logs are for debugging purposes only --

I1023 02:24:09.451383 5247 nvc.c:376] initializing library context (version=1.14.3, build=1eb5a30a6ad0415550a9df632ac8832bf7e2bbba)
I1023 02:24:09.451465 5247 nvc.c:350] using root /
I1023 02:24:09.451470 5247 nvc.c:351] using ldcache /etc/ld.so.cache
I1023 02:24:09.451475 5247 nvc.c:352] using unprivileged user 65534:65534
I1023 02:24:09.451504 5247 nvc.c:393] attempting to load dxcore to see if we are running under Windows Subsystem for Linux (WSL)
I1023 02:24:09.451575 5247 nvc.c:395] dxcore initialization failed, continuing assuming a non-WSL environment
I1023 02:24:09.464419 5248 nvc.c:278] loading kernel module nvidia
I1023 02:24:09.464518 5248 nvc.c:282] running mknod for /dev/nvidiactl
I1023 02:24:09.464550 5248 nvc.c:286] running mknod for /dev/nvidia0
I1023 02:24:09.464568 5248 nvc.c:290] running mknod for all nvcaps in /dev/nvidia-caps
I1023 02:24:09.469693 5248 nvc.c:218] running mknod for /dev/nvidia-caps/nvidia-cap1 from /proc/driver/nvidia/capabilities/mig/config
I1023 02:24:09.469798 5248 nvc.c:218] running mknod for /dev/nvidia-caps/nvidia-cap2 from /proc/driver/nvidia/capabilities/mig/monitor
I1023 02:24:09.471388 5248 nvc.c:296] loading kernel module nvidia_uvm
I1023 02:24:09.471433 5248 nvc.c:300] running mknod for /dev/nvidia-uvm
I1023 02:24:09.471527 5248 nvc.c:305] loading kernel module nvidia_modeset
I1023 02:24:09.471559 5248 nvc.c:309] running mknod for /dev/nvidia-modeset
I1023 02:24:09.471908 5249 rpc.c:71] starting driver rpc service
I1023 02:24:09.477201 5250 rpc.c:71] starting nvcgo rpc service
I1023 02:24:09.487458 5247 nvc_info.c:798] requesting driver information with ''
I1023 02:24:09.488257 5247 nvc_info.c:176] selecting /usr/lib64/libnvoptix.so.535.113.01
I1023 02:24:09.488289 5247 nvc_info.c:176] selecting /usr/lib64/libnvidia-tls.so.535.113.01
I1023 02:24:09.488310 5247 nvc_info.c:176] selecting /usr/lib64/libnvidia-rtcore.so.535.113.01
I1023 02:24:09.488346 5247 nvc_info.c:176] selecting /usr/lib64/libnvidia-ptxjitcompiler.so.535.113.01
I1023 02:24:09.488367 5247 nvc_info.c:176] selecting /usr/lib64/libnvidia-pkcs11-openssl3.so.535.113.01
I1023 02:24:09.488687 5247 nvc_info.c:176] selecting /usr/lib64/libnvidia-opticalflow.so.535.113.01
I1023 02:24:09.488742 5247 nvc_info.c:176] selecting /usr/lib64/libnvidia-opencl.so.535.113.01
I1023 02:24:09.488782 5247 nvc_info.c:176] selecting /usr/lib64/libnvidia-nvvm.so.535.113.01
I1023 02:24:09.488848 5247 nvc_info.c:176] selecting /usr/lib64/libnvidia-ngx.so.535.113.01
I1023 02:24:09.488880 5247 nvc_info.c:176] selecting /usr/lib64/libnvidia-ml.so.535.113.01
I1023 02:24:09.488923 5247 nvc_info.c:176] selecting /usr/lib64/libnvidia-glvkspirv.so.535.113.01
I1023 02:24:09.488944 5247 nvc_info.c:176] selecting /usr/lib64/libnvidia-glsi.so.535.113.01
I1023 02:24:09.488964 5247 nvc_info.c:176] selecting /usr/lib64/libnvidia-glcore.so.535.113.01
I1023 02:24:09.489006 5247 nvc_info.c:176] selecting /usr/lib64/libnvidia-fbc.so.535.113.01
I1023 02:24:09.489051 5247 nvc_info.c:176] selecting /usr/lib64/libnvidia-encode.so.535.113.01
I1023 02:24:09.489087 5247 nvc_info.c:176] selecting /usr/lib64/libnvidia-eglcore.so.535.113.01
I1023 02:24:09.489125 5247 nvc_info.c:176] selecting /usr/lib64/libnvidia-cfg.so.535.113.01
I1023 02:24:09.489158 5247 nvc_info.c:176] selecting /usr/lib64/libnvidia-allocator.so.535.113.01
I1023 02:24:09.489179 5247 nvc_info.c:176] selecting /usr/lib64/libnvcuvid.so.535.113.01
I1023 02:24:09.489312 5247 nvc_info.c:176] selecting /usr/lib64/libcudadebugger.so.535.113.01
I1023 02:24:09.489333 5247 nvc_info.c:176] selecting /usr/lib64/libcuda.so.535.113.01
I1023 02:24:09.489420 5247 nvc_info.c:176] selecting /usr/lib64/libGLX_nvidia.so.535.113.01
I1023 02:24:09.489453 5247 nvc_info.c:176] selecting /usr/lib64/libGLESv2_nvidia.so.535.113.01
I1023 02:24:09.489484 5247 nvc_info.c:176] selecting /usr/lib64/libGLESv1_CM_nvidia.so.535.113.01
I1023 02:24:09.489507 5247 nvc_info.c:176] selecting /usr/lib64/libEGL_nvidia.so.535.113.01
I1023 02:24:09.490482 5247 nvc_info.c:176] selecting /usr/lib/libnvidia-tls.so.535.113.01
I1023 02:24:09.491316 5247 nvc_info.c:176] selecting /usr/lib/libnvidia-ptxjitcompiler.so.535.113.01
I1023 02:24:09.491584 5247 nvc_info.c:176] selecting /usr/lib/libnvidia-opticalflow.so.535.113.01
I1023 02:24:09.492166 5247 nvc_info.c:176] selecting /usr/lib/libnvidia-opencl.so.535.113.01
I1023 02:24:09.492770 5247 nvc_info.c:176] selecting /usr/lib/libnvidia-nvvm.so.535.113.01
I1023 02:24:09.493375 5247 nvc_info.c:176] selecting /usr/lib/libnvidia-ml.so.535.113.01
I1023 02:24:09.493985 5247 nvc_info.c:176] selecting /usr/lib/libnvidia-glvkspirv.so.535.113.01
I1023 02:24:09.494307 5247 nvc_info.c:176] selecting /usr/lib/libnvidia-glsi.so.535.113.01
I1023 02:24:09.494887 5247 nvc_info.c:176] selecting /usr/lib/libnvidia-glcore.so.535.113.01
I1023 02:24:09.495182 5247 nvc_info.c:176] selecting /usr/lib/libnvidia-fbc.so.535.113.01
I1023 02:24:09.495484 5247 nvc_info.c:176] selecting /usr/lib/libnvidia-encode.so.535.113.01
I1023 02:24:09.496078 5247 nvc_info.c:176] selecting /usr/lib/libnvidia-eglcore.so.535.113.01
I1023 02:24:09.496388 5247 nvc_info.c:176] selecting /usr/lib/libnvidia-allocator.so.535.113.01
I1023 02:24:09.496975 5247 nvc_info.c:176] selecting /usr/lib/libnvcuvid.so.535.113.01
I1023 02:24:09.497526 5247 nvc_info.c:176] selecting /usr/lib/libcuda.so.535.113.01
I1023 02:24:09.498115 5247 nvc_info.c:176] selecting /usr/lib/libGLX_nvidia.so.535.113.01
I1023 02:24:09.498685 5247 nvc_info.c:176] selecting /usr/lib/libGLESv2_nvidia.so.535.113.01
I1023 02:24:09.499039 5247 nvc_info.c:176] selecting /usr/lib/libGLESv1_CM_nvidia.so.535.113.01
I1023 02:24:09.499378 5247 nvc_info.c:176] selecting /usr/lib/libEGL_nvidia.so.535.113.01
W1023 02:24:09.499395 5247 nvc_info.c:402] missing library libnvidia-nscq.so
W1023 02:24:09.499401 5247 nvc_info.c:402] missing library libnvidia-gpucomp.so
W1023 02:24:09.499406 5247 nvc_info.c:402] missing library libnvidia-fatbinaryloader.so
W1023 02:24:09.499412 5247 nvc_info.c:402] missing library libnvidia-compiler.so
W1023 02:24:09.499417 5247 nvc_info.c:402] missing library libnvidia-pkcs11.so
W1023 02:24:09.499423 5247 nvc_info.c:402] missing library libvdpau_nvidia.so
W1023 02:24:09.499428 5247 nvc_info.c:402] missing library libnvidia-ifr.so
W1023 02:24:09.499434 5247 nvc_info.c:402] missing library libnvidia-cbl.so
W1023 02:24:09.499439 5247 nvc_info.c:406] missing compat32 library libnvidia-cfg.so
W1023 02:24:09.499446 5247 nvc_info.c:406] missing compat32 library libnvidia-nscq.so
W1023 02:24:09.499451 5247 nvc_info.c:406] missing compat32 library libcudadebugger.so
W1023 02:24:09.499456 5247 nvc_info.c:406] missing compat32 library libnvidia-gpucomp.so
W1023 02:24:09.499461 5247 nvc_info.c:406] missing compat32 library libnvidia-fatbinaryloader.so
W1023 02:24:09.499466 5247 nvc_info.c:406] missing compat32 library libnvidia-compiler.so
W1023 02:24:09.499471 5247 nvc_info.c:406] missing compat32 library libnvidia-pkcs11.so
W1023 02:24:09.499475 5247 nvc_info.c:406] missing compat32 library libnvidia-pkcs11-openssl3.so
W1023 02:24:09.499479 5247 nvc_info.c:406] missing compat32 library libnvidia-ngx.so
W1023 02:24:09.499484 5247 nvc_info.c:406] missing compat32 library libvdpau_nvidia.so
W1023 02:24:09.499490 5247 nvc_info.c:406] missing compat32 library libnvidia-ifr.so
W1023 02:24:09.499495 5247 nvc_info.c:406] missing compat32 library libnvidia-rtcore.so
W1023 02:24:09.499501 5247 nvc_info.c:406] missing compat32 library libnvoptix.so
W1023 02:24:09.499506 5247 nvc_info.c:406] missing compat32 library libnvidia-cbl.so
I1023 02:24:09.499991 5247 nvc_info.c:302] selecting /usr/bin/nvidia-smi
I1023 02:24:09.500014 5247 nvc_info.c:302] selecting /usr/bin/nvidia-debugdump
I1023 02:24:09.500036 5247 nvc_info.c:302] selecting /usr/bin/nvidia-persistenced
I1023 02:24:09.500076 5247 nvc_info.c:302] selecting /usr/bin/nvidia-cuda-mps-control
I1023 02:24:09.500099 5247 nvc_info.c:302] selecting /usr/bin/nvidia-cuda-mps-server
W1023 02:24:09.500203 5247 nvc_info.c:428] missing binary nv-fabricmanager
I1023 02:24:09.500259 5247 nvc_info.c:488] listing firmware path /lib/firmware/nvidia/535.113.01/gsp_ga10x.bin
I1023 02:24:09.500267 5247 nvc_info.c:488] listing firmware path /lib/firmware/nvidia/535.113.01/gsp_tu10x.bin
I1023 02:24:09.500298 5247 nvc_info.c:561] listing device /dev/nvidiactl
I1023 02:24:09.500303 5247 nvc_info.c:561] listing device /dev/nvidia-uvm
I1023 02:24:09.500309 5247 nvc_info.c:561] listing device /dev/nvidia-uvm-tools
I1023 02:24:09.500315 5247 nvc_info.c:561] listing device /dev/nvidia-modeset
W1023 02:24:09.500393 5247 nvc_info.c:352] missing ipc path /var/run/nvidia-persistenced/socket
W1023 02:24:09.500414 5247 nvc_info.c:352] missing ipc path /var/run/nvidia-fabricmanager/socket
W1023 02:24:09.500453 5247 nvc_info.c:352] missing ipc path /tmp/nvidia-mps
I1023 02:24:09.500457 5247 nvc_info.c:854] requesting device information with ''
I1023 02:24:09.506035 5247 nvc_info.c:745] listing device /dev/nvidia0 (GPU-cf22c558-a271-8c6a-eed9-067e58966643 at 00000000:01:00.0)
NVRM version:   535.113.01
CUDA version:   12.2

Device Index:   0
Device Minor:   0
Model:          NVIDIA GeForce RTX 4090
Brand:          GeForce
GPU UUID:       GPU-cf22c558-a271-8c6a-eed9-067e58966643
Bus Location:   00000000:01:00.0
Architecture:   8.9
I1023 02:24:09.506058 5247 nvc.c:434] shutting down library context
I1023 02:24:09.506082 5250 rpc.c:95] terminating nvcgo rpc service
I1023 02:24:09.506457 5247 rpc.c:135] nvcgo rpc service terminated successfully
I1023 02:24:09.507428 5249 rpc.c:95] terminating driver rpc service
I1023 02:24:09.507548 5247 rpc.c:135] driver rpc service terminated successfully

nvidia-container-cli -V:

cli-version: 1.14.3
lib-version: 1.14.3
build date: 2023-10-19T11:32+0000
build revision: 1eb5a30a6ad0415550a9df632ac8832bf7e2bbba
build compiler: gcc 4.8.5 20150623 (Red Hat 4.8.5-44)
build platform: x86_64
build flags: -D_GNU_SOURCE -D_FORTIFY_SOURCE=2 -DNDEBUG -std=gnu11 -O2 -g -fdata-sections -ffunction-sections -fplan9-extensions -fstack-protector -fno-strict-aliasing -fvisibility=hidden -Wall -Wextra -Wcast-align -Wpointer-arith -Wmissing-prototypes -Wnonnull -Wwrite-strings -Wlogical-op -Wformat=2 -Wmissing-format-attribute -Winit-self -Wshadow -Wstrict-prototypes -Wunreachable-code -Wconversion -Wsign-conversion -Wno-unknown-warning-option -Wno-format-extra-args -Wno-gnu-alignof-expression -Wl,-zrelro -Wl,-znow -Wl,-zdefs -Wl,--gc-sections
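
In case it's useful for triage, the newer CDI path can be compared against the SELinux-label invocation above (a sketch; nvidia-ctk ships with nvidia-container-toolkit on these images):

sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
podman run --rm --device nvidia.com/gpu=all --security-opt=label=disable \
  docker.io/nvidia/samples:vectoradd-cuda11.2.1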

Stuck when rebasing

I got stuck running:
rpm-ostree rebase ostree-unverified-registry:ghcr.io/ublue-os/silverblue-nvidia:39
Pulling manifest: ostree-unverified-image:docker://ghcr.io/ublue-os/silverblue-nvidia:39
Importing: ostree-unverified-image:docker://ghcr.io/ublue-os/silverblue-nvidia:39 (digest: sha256:0debc3a961faac985c7a8b6bc052f68ed7acbb24c7f56f1f0c5fe0f104f06b13)
ostree chunk layers already present: 65
custom layers already present: 1
custom layers needed: 1 (680.7 MB)

I don't know whether it is still downloading or not.
This is the rpm-ostree status:

rpm-ostree status
State: busy
AutomaticUpdates: check; rpm-ostreed-automatic.timer: no runs since boot
Transaction: rebase ostree-unverified-registry:ghcr.io/ublue-os/silverblue-nvidia:39
Initiator: client(id:cli dbus:1.166 unit:vte-spawn-c717707f-f555-4048-b893-4d8faa973afd.scope uid:1000)
Deployments:
● fedora:fedora/39/x86_64/silverblue
Version: 39.1.5 (2023-10-31T22:06:37Z)
BaseCommit: 3f6c3c54e77690b576ced4cf01528b8415a691bcf5afbe5df203b046ff396c67
GPGSignature: Valid signature by E8F23996F23218640CB44CBE75CF5AC418B8E74C
RemovedBasePackages: firefox firefox-langpacks 119.0-2.fc39 gnome-tour 45.0-1.fc39
toolbox 0.0.99.4-3.fc39
LayeredPackages: fastfetch

ostree-unverified-registry:ghcr.io/ublue-os/bazzite-gnome-nvidia:stable
Digest: sha256:1dc0116d4f0d2d740925f09c242903c20e37b604fbae0e0ae87ac1d5c2be69f3
Version: 39.20240116.0 (2024-03-10T08:19:50Z)
RemovedBasePackages: toolbox 0.0.99.5-2.fc39
LayeredPackages: fastfetch

journalctl -f
Mar 11 16:19:27 fedora gnome-shell[1979]: g_signal_handler_disconnect: assertion 'G_TYPE_CHECK_INSTANCE (instance)' failed
Mar 11 16:20:40 fedora gnome-shell[1979]: invalid (NULL) pointer instance
Mar 11 16:20:40 fedora gnome-shell[1979]: g_signal_handler_disconnect: assertion 'G_TYPE_CHECK_INSTANCE (instance)' failed
Mar 11 16:20:40 fedora gnome-shell[1979]: invalid (NULL) pointer instance

I'm new here; I hope this is the right place to ask.
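
For anyone in the same spot, watching the daemon and cancelling the transaction is usually safe (a sketch; a large custom layer can legitimately sit downloading for quite a while):

journalctl -u rpm-ostreed -f   # watch what the daemon is actually doing
rpm-ostree cancel              # abort the stuck transaction
rpm-ostree rebase ostree-unverified-registry:ghcr.io/ublue-os/silverblue-nvidia:39   # retry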

Waydroid does not start after upgrade to silverblue-surface 40

After rebasing to version 40, Waydroid does not start. In the logs I see:

  1. [20:22:38] % lxc-info -P /var/lib/waydroid/lxc -n waydroid -sH
    lxc-start: waydroid: ../src/lxc/conf.c: lxc_storage_prepare: 507 No such file or directory - Failed to access to "/usr/lib64/lxc/rootfs". Check it is present
    lxc-start: waydroid: ../src/lxc/conf.c: lxc_rootfs_init: 542 Invalid argument - Failed to prepare rootfs storage
    lxc-start: waydroid: ../src/lxc/start.c: __lxc_start: 2079 Failed to handle rootfs pinning for container "waydroid"

.....

(008245) [20:22:48] org.freedesktop.DBus.Python.OSError: Traceback (most recent call last):
  File "/usr/lib64/python3.12/site-packages/dbus/service.py", line 712, in _message_cb
    retval = candidate_method(self, *args, **keywords)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/waydroid/tools/actions/container_manager.py", line 34, in Start
    do_start(self.args, session)
  File "/usr/lib/waydroid/tools/actions/container_manager.py", line 189, in do_start
    helpers.lxc.start(args)
  File "/usr/lib/waydroid/tools/helpers/lxc.py", line 394, in start
    wait_for_running(args)
  File "/usr/lib/waydroid/tools/helpers/lxc.py", line 388, in wait_for_running
    raise OSError("container failed to start")
OSError: container failed to start

Akmods key generation on arm64 mac

generate-akmods-key pulls an amd64 image only. I've tested a number of different Silverblue images unsuccessfully. My current workaround is to pull registry.fedoraproject.org/fedora and install the missing packages with dnf, which takes forever. Can anyone suggest an image to use? I would gladly submit a fix for it if I knew of one.

Cannot install Steam on ublue-os 39 beta: conflicting requests

Hi, I got this error on my main system, and it causes some problems with wine32 and wine64 compatibility.

ostree-image-signed:docker://ghcr.io/ublue-os/silverblue-nvidia:39
Digest: sha256:196e44fa2dc4fb0192213a75f6c7da481b42e4fa2e6254a593cca4092c98993e
Version: 39 (2023-10-03T10:31:29Z)
Diff: 16 removed
LayeredPackages: fish langpacks-fr perf

rpm-ostree install steam
Checking out tree 7c0ffef... done
Enabled rpm-md repositories: updates fedora rpmfusion-free-updates-testing rpmfusion-free rpmfusion-nonfree-updates-testing rpmfusion-nonfree copr:copr.fedorainfracloud.org:phracek:PyCharm google-chrome rpmfusion-nonfree-nvidia-driver rpmfusion-nonfree-steam updates-testing
Importing rpm-md... done
rpm-md repo 'updates' (cached); generated: 2018-02-20T19:18:14Z solvables: 0
rpm-md repo 'fedora' (cached); generated: 2023-10-06T09:48:20Z solvables: 70844
rpm-md repo 'rpmfusion-free-updates-testing' (cached); generated: 2023-09-26T11:23:58Z solvables: 63
rpm-md repo 'rpmfusion-free' (cached); generated: 2023-09-26T11:34:15Z solvables: 449
rpm-md repo 'rpmfusion-nonfree-updates-testing' (cached); generated: 2023-09-26T11:50:17Z solvables: 3
rpm-md repo 'rpmfusion-nonfree' (cached); generated: 2023-09-26T11:57:10Z solvables: 228
rpm-md repo 'copr:copr.fedorainfracloud.org:phracek:PyCharm' (cached); generated: 2023-08-10T15:35:19Z solvables: 5
rpm-md repo 'google-chrome' (cached); generated: 2023-10-04T18:54:18Z solvables: 3
rpm-md repo 'rpmfusion-nonfree-nvidia-driver' (cached); generated: 2023-09-26T10:39:41Z solvables: 29
rpm-md repo 'rpmfusion-nonfree-steam' (cached); generated: 2023-08-10T16:27:35Z solvables: 2
rpm-md repo 'updates-testing' (cached); generated: 2023-10-07T02:21:25Z solvables: 13730
Resolving dependencies... done
error: Could not depsolve transaction; 1 problem detected:
Problem: conflicting requests

  • package steam-1.0.0.78-2.fc39.i686 from rpmfusion-nonfree requires libnsl(x86-32), but none of the providers can be installed
  • package steam-1.0.0.78-2.fc39.i686 from rpmfusion-nonfree-steam requires libnsl(x86-32), but none of the providers can be installed
  • package libnsl-2.38-6.fc39.i686 from fedora requires glibc(x86-32) = 2.38-6.fc39, but none of the providers can be installed
  • package libnsl-2.38-7.fc39.i686 from updates-testing requires glibc(x86-32) = 2.38-7.fc39, but none of the providers can be installed
  • cannot install both glibc-2.38-6.fc39.i686 from fedora and glibc-2.38-4.fc39.i686 from @System
  • cannot install both glibc-2.38-7.fc39.i686 from updates-testing and glibc-2.38-4.fc39.i686 from @System

Extra NVIDIA packages

Hey! I've been using Kinoite with Nvidia for quite some time and love the idea of having a pre-packaged Nvidia variant available! I'm currently layering these two extra packages on my client that others may find useful:

nvidia-container-toolkit: Very useful for exposing GPU drivers to Podman. Officially, Fedora isn't supported by the package, but the CentOS 8/RHEL package should work just fine (I'm using the centos8 package for my containers right now).

It does require some fiddling with SELinux labels, which this guide by Red Hat explains. I'm not sure if any of that can be included OOTB with the OCI images though.

nvidia-vaapi-driver (and libva-utils for vainfo): Could be useful for those wanting hardware-accelerated decoding in Firefox. Unfortunately it doesn't play very nicely with Flatpak Firefox quite yet, but I think there could still be merit in including it in this image.

Just some things to consider, I can still layer them just fine on my client.
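
For anyone wanting the same setup before (or instead of) it landing in the image, the layering is roughly this (a sketch; nvidia-container-toolkit needs its vendor repo configured first, and the Fedora/RPM Fusion package for the VA-API driver is named libva-nvidia-driver per the package lists elsewhere in these issues):

rpm-ostree install nvidia-container-toolkit nvidia-vaapi-driver libva-utils
systemctl reboot
vainfo   # after reboot, should report an Nvidia-backed VA-API driver if it is picked up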

No usage of nvidia-drivers after rebase

Hello there,

I recently tried to rebase to different images like "silverblue-nvidia" and "bluefin-nvidia".
Whenever I did this, I couldn't get the Nvidia drivers to work for me.

These are the steps I followed after rebasing:

image

Whenever I tried to run this in the console:

sudo mokutil --import /etc/pki/akmods/certs/akmods-ublue.der

I got this error message:

EFI variables are not supported on this system

So I cannot get the Nvidia drivers running, which I can verify by looking at the nvidia-settings (X Server Settings) panel and by the flickering frames in any games.

Maybe you guys have some ideas or workarounds for this, so that I don't have to fully reinstall just to change images?
I'm currently running the kinoite-nvidia build and it works perfectly fine with the Nvidia drivers, so I would really like to try some of the other images without completely reinstalling.
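
That mokutil error usually just means the machine did not boot via UEFI (legacy BIOS/CSM, or a VM without UEFI), in which case there are no EFI variables and no MOK enrollment to do; a quick check (a sketch):

[ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "Legacy BIOS boot - no EFI variables"
mokutil --sb-state   # on UEFI systems, reports whether Secure Boot is enabled

If the system is not running with Secure Boot, the key enrollment step isn't what is blocking the driver, so the cause is likely elsewhere.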

[Question] Does this image provide general improvement ?

Hey guys,

I've had issues lately with Nvidia drivers on Fedora Silverblue. Wayland isn't working at all, and X11 is overall very slow and sloppy.
I'm curious about this image: does it add some improvements, or is it just an initial configuration to support Nvidia cards from scratch?

Cheers

AKMOD_PRIVKEY_20230518 is not documented

Builds fail without much information about the missing AKMOD_PRIVKEY_20230518 secret. Should this be changed back to AKMOD_PRIVKEY, or documented and renamed to something more meaningful?

settings to use dedicated card by default in PRIME machines

I successfully installed this on an older Dell XPS with the 37-470 line of drivers. The laptop runs 100% of the time on AC power, and I'm wondering what kind of configuration could make the dedicated GPU the default, so the desktop experience and all apps launch running on the Nvidia card. Ideally GNOME would be accelerated, along with video players, Firefox/Chrome, etc.

Thanks!
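
As a partial answer while a system-wide default is worked out: individual apps can already be pushed onto the dGPU with the driver's PRIME render offload variables (standard Nvidia driver environment variables, not specific to this image; glxinfo comes from glx-utils):

__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL renderer"
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia <some-app>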

Stuck in GDM / no login prompt

Since I upgraded to a newer version I am stuck in GDM. The grey background is there, but there is no login prompt or any other buttons. When switching to another session with Ctrl+Alt+F2, I am able to log in using the TUI, and startx successfully gets me to my desktop; unfortunately, none of my Flatpaks launch there though. I tried systemctl restart gdm, which allowed me to also log in with the original Xorg session, but there I also cannot launch any Flatpaks. When I restart gdm, the TUI greets me with the following error:

xf86EnableIOPorts: failed to set IOPL for I/O (Operation not permitted) 
MESA-LOADER: Failed to open simpledrm: /usr/lib64/dri, suffix _dri)
kmsro: driver missing

System:

  • ublue (Nvidia)
  • RTX 2070
  • Last known working version 38.20230531.0 (2023-06-01T07:32:58Z)
  • Target version 38.20230607.0 (2023-06-07T11:57:03Z)
  • no packages layered / default ublue experience

KDE/Kinoite Night Color Broken

Hey folks, I recently rebased from regular Kinoite to ublue-Kinoite upon switching to a Nvidia GPU. Everything seems to work so far, except KDE's Night Color feature doesn't appear to change the color temperature as it should. Is this a known issue and is there a fix? Thanks.

Adding supergfxctl-plasmoid to the KDE version

Is it possible to add supergfxctl-plasmoid to the KDE version? It comes from a copr repo (here).

I'm currently using it by manually adding the repo and installing it as a layered package, but it would be nice if it came with the image itself.

What images should have what desktop configurations??

From ublue-os/surface#23

How to handle different Surface variants/DEs? The Surface laptops should get desktop GNOME/Plasma, but for the tablets we should have Phosh/GNOME Mobile Shell/Plasma Mobile.

I don't have one of these so we'll need to start gathering input from folks and then start filing issues after we figure out the how.

cuda toolkit not installed for user

Steps To Recreate

  1. Perform a clean install of bazzite-nvidia.
  2. Login as the user.
  3. Check for cuda by running nvcc --version. It will fail to find the command.

Expected Behavior

rpm-ostree and nvidia-smi show that CUDA and the CUDA toolkit should be installed; however, nvcc --version fails to work.

reap@fedora:~$ nvidia-smi
Thu Feb 22 18:58:12 2024       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.29.06              Driver Version: 545.29.06    CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA RTX 4000 SFF Ada ...    Off | 00000000:01:00.0 Off |                  Off |
| 30%   33C    P8               5W /  70W |      2MiB / 20475MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+

reap@fedora:~$ rpm -qa | grep nvidia
nvidia-gpu-firmware-20240115-2.fc39.noarch
ublue-os-nvidia-addons-0.10-1.fc39.noarch
xorg-x11-drv-nvidia-cuda-libs-545.29.06-2.fc39.x86_64
nvidia-modprobe-545.29.06-1.fc39.x86_64
nvidia-persistenced-545.29.06-1.fc39.x86_64
nvidia-container-toolkit-base-1.14.5-1.x86_64
libnvidia-container1-1.14.5-1.x86_64
libnvidia-container-tools-1.14.5-1.x86_64
nvidia-container-toolkit-1.14.5-1.x86_64
xorg-x11-drv-nvidia-kmodsrc-545.29.06-2.fc39.x86_64
libva-nvidia-driver-0.0.11-1.fc39.x86_64
xorg-x11-drv-nvidia-libs-545.29.06-2.fc39.i686
xorg-x11-drv-nvidia-libs-545.29.06-2.fc39.x86_64
nvidia-settings-545.29.06-1.fc39.x86_64
xorg-x11-drv-nvidia-power-545.29.06-2.fc39.x86_64
kmod-nvidia-6.7.5-201.fsync.fc39.x86_64-545.29.06-3.fc39.x86_64
xorg-x11-drv-nvidia-545.29.06-2.fc39.x86_64
xorg-x11-drv-nvidia-cuda-libs-545.29.06-2.fc39.i686
xorg-x11-drv-nvidia-cuda-545.29.06-2.fc39.x86_64
xorg-x11-drv-nvidia-devel-545.29.06-2.fc39.x86_64

reap@fedora:~$ nvcc --version
# only works after the workaround

Hardware

B550I Aurus Pro AX
AMD Ryzen 7 5700G
Nvidia RTX 4000 SFF Ada Gen
2x32GB @ 3200 MHz
2TB NVME Drive

Setup Notes

  • Secure Boot is disabled in the BIOS.
  • The OS and KDE run on the AMD GPU. Steam games are able to successfully launch on the Nvidia GPU.
  • After applying the workaround, PyTorch is also able to successfully run on the Nvidia GPU.

The Workaround

Note: The workaround does not fix the issue for Podman containers running with CDI. Any CUDA-dependent workloads will have to be run in the host userspace.

$ nvidia-smi
# this shows the correct output and says that cuda 12.3 is installed
$ nvcc --version
# this should fail to find nvcc
$ ls /usr/local
# this output does not contain a cuda directory, which confirms that the cuda toolkit is not installed

$ wget https://developer.download.nvidia.com/compute/cuda/12.3.2/local_installers/cuda_12.3.2_545.23.08_linux.run
$ sudo sh cuda_12.3.2_545.23.08_linux.run
# this will require you to accept the license first. Only select the cuda toolkit; skip the driver, since the system already has the nvidia driver.
$ ls /usr/local
# now we have the cuda toolkit, but nvcc will still fail as it is not on your PATH

# add these to your ~/.bashrc so they are loaded on every boot
export PATH=/usr/local/cuda-12.3/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-12.3/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
$ nvcc --version
# nvcc now works

Related Issues

python tensorflow in nvidia-enabled tumbleweed and fedora distroboxes unable to talk to GPU

Symptoms

Whether I create an ephemeral Fedora Rawhide or 39 distrobox with --nvidia, or use the Tumbleweed distrobox I created from a distrobox-assemble with nvidia=true, and whether I create a Python venv and then pip install tensorflow[and-cuda] or just do pip install --break-system-packages tensorflow[and-cuda] globally, when installing those packages afresh I get this output when trying to use TensorFlow with my GPU:

$ python3
Python 3.12.2 (main, Feb 21 2024, 00:00:00) [GCC 14.0.1 20240217 (Red Hat 14.0.1-0)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
2024-04-12 14:29:54.089365: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-04-12 14:29:54.126693: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-04-12 14:29:54.786606: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
>>> tf.config.list_logical_devices()
2024-04-12 14:29:57.714446: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2024-04-12 14:29:57.714953: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2251] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
[LogicalDevice(name='/device:CPU:0', device_type='CPU')]

Steps to reproduce

  1. Create a distrobox with Nvidia enablement, either Tumbleweed or Fedora (and probably others)
  2. Install TensorFlow with CUDA
  3. Run import tensorflow as tf; tf.config.list_logical_devices()
  4. Observe the results (see the repro sketch after this list)
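
A condensed repro of the steps above (a sketch; distrobox flags per recent releases, and the pip extras need quoting in some shells):

distrobox create --name tf-cuda --image registry.fedoraproject.org/fedora:39 --nvidia
distrobox enter tf-cuda
pip install --user 'tensorflow[and-cuda]'
python3 -c 'import tensorflow as tf; print(tf.config.list_logical_devices())'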

Ensure image info script is working

I was new to image-info.sh when combining all the HWE repos, so I didn't prioritize it. We should evaluate whether it's still doing the right thing.

And maybe add to main.

rpm-ostree rebase fails with new signed images

When trying to rebase my Fedora Silverblue with

rpm-ostree rebase ostree-image-signed:docker://ghcr.io/ublue-os/silverblue-nvidia:latest

as the README has instructed since 4 days ago (PR #130 / commit 0c1ae8a),

I get the following error:

Pulling manifest: ostree-image-signed:docker://ghcr.io/ublue-os/silverblue-nvidia:latest

error: Preparing import: Fetching manifest: containers-policy.json specifies a default of `insecureAcceptAnything`; refusing usage

And it only works by using the old unsigned way:

rpm-ostree rebase ostree-unverified-registry:ghcr.io/ublue-os/silverblue-nvidia:latest

Do I need to do anything else before rebasing to the signed images?
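
A hedged note for anyone else hitting this: the signing policy and cosign key ship inside the image itself, so the commonly suggested path is a two-step rebase, first to the unverified ref (which installs the policy) and then to the signed ref (a sketch; confirm against the current README):

rpm-ostree rebase ostree-unverified-registry:ghcr.io/ublue-os/silverblue-nvidia:latest
systemctl reboot
rpm-ostree rebase ostree-image-signed:docker://ghcr.io/ublue-os/silverblue-nvidia:latest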

migrate akmod keys from repo to org level

We need to migrate AKMOD_PRIVKEY and AKMOD_PUBKEY to be org-level secrets rather than nvidia repo level.

Reason: we plan to use the same single key for akmods signing of both our current nvidia akmod and our future akmods.

readme-url is incorrect, needs to be templated

io.artifacthub.package.readme-url=https://raw.githubusercontent.com/ublue-os/base/main/README.md

We forgot to template out this part. This results in all the nvidia images pointing to the readme of the ublue-os/base image instead of the correct one.

Kinoite/KDE support?

Would it be possible to have a Kinoite-based version of this image?
I already tried my hand at it here: https://github.com/EinoHR/fedora-kinoite-nvidia, just by changing the BASE_IMAGE. That did not work, as it started spitting out errors that I can't decipher in the build image task, around the second FROM (line 60 of the Containerfile).

If you do not want to add official support for Kinoite+Nvidia images, I'd be interested in knowing what this repository's rather long containerfile does differently/better when compared to just having one that enables RPMFusion and installs Nvidia drivers as shown here.

ublue-nvctk-cdi.service runs always

In the Nvidia images, we have the ublue-nvctk-cdi.service to support containers.

The only dependencies this service has are that the binary exists and is executable, and that we are ordered after local-fs.target. This is problematic because it will always run, even if the Nvidia modules are not loaded because an Nvidia card is not present. For eGPUs, the Nvidia card is not present until much later in the boot process. Instead of using a plain service, this should be handled via a udev rule, since the script depends on the necessary hardware being present. Right now, with an eGPU, you have to manually restart the service before entering any containers.

I'll try converting the service to a udev rule to test.
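
A rough sketch of the udev-based variant (the match keys and rule content are assumptions to validate, not a tested rule):

sudo tee /etc/udev/rules.d/90-ublue-nvctk-cdi.rules <<'EOF'
# Pull in the CDI service only once a PCI device actually binds to the nvidia driver,
# which also covers eGPUs that appear late in boot.
ACTION=="add|bind", SUBSYSTEM=="pci", DRIVER=="nvidia", TAG+="systemd", ENV{SYSTEMD_WANTS}+="ublue-nvctk-cdi.service"
EOF
sudo udevadm control --reload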

[Secure Boot] Asus Image not able to boot when secure boot enabled

Hi ublue team!

I am using silverblue-asus-nvidia

The ASUS variant image will not boot with Secure Boot enabled, resulting in these errors:

error: ../../grub-core/kern/efi/sb.c:182: bad shim signature.
error: ../../grub-core/loader/i386/efi/linux.c:258: you need to load the kernel first.

I tried disabling Secure Boot, resetting the MOK list, and re-enrolling the key with:

sudo mokutil --reset
ujust enroll-secure-boot-key

and then re-enabled Secure Boot, but nothing has changed; the errors still exist.
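
Two checks that might help triage (a sketch; both are standard mokutil commands):

mokutil --sb-state                                # confirm Secure Boot is actually enabled
mokutil --list-enrolled | grep -i -B2 -A2 ublue   # the ublue akmods cert should appear here if enrolled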
