ledfxdocker's People

Contributors

pr0mises, shirommakkad, spiro-c, vanixxx


ledfxdocker's Issues

start.sh missing in the stream example

Hi,
I wanted to compose the stream example, but the entrypoint start.sh referenced in shairport.yml is missing. I also tried entrypoint.sh, but it isn't in the image either.

Having issues connecting to Snapcast Server / Mopidy

Hello, I have Mopidy and Snapcast running in Docker and wanted to set up this LedFx container to work with them, but I'm running into a couple of issues.

My Mopidy audio output is:
output = audioresample ! audioconvert ! audio/x-raw,rate=48000,channels=2,format=S16LE ! filesink location=/tmp/snapfifo

From what I can tell, this LedFx container looks for output in a file called stream. So if I change the Mopidy output to
output = audioresample ! audioconvert ! audio/x-raw,rate=48000,channels=2,format=S32LE ! filesink location=/tmp/stream

LedFx can access that, but it renders my Snapcast unavailable, since Snapcast expects snapfifo.

So my question is: how would I get this all working together? What files do I need to edit to have everything work in sync? If this is a Mopidy config setting and someone has an example I could look at, that would be great. I'm just a little confused about how this all fits together.
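One possible approach (an untested sketch; the caps and paths come from the snippets above, and the multi-line layout assumes Mopidy's config parser accepts indented continuation lines) is to split the GStreamer pipeline with a `tee`, feeding Snapcast and LedFx from the same output:

```
[audio]
output = tee name=t
    t. ! queue ! audioresample ! audioconvert ! audio/x-raw,rate=48000,channels=2,format=S16LE ! filesink location=/tmp/snapfifo
    t. ! queue ! audioresample ! audioconvert ! audio/x-raw,rate=48000,channels=2,format=S32LE ! filesink location=/tmp/stream
```

Both filesink locations would then need to be bind-mounted into the respective containers.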

Not most recent version of LedFx?

It seems the Docker container does not contain the current version of LedFx. It would be great if you could update it.

Thanks in advance.

How to get audio in the container...

I am running the container on a LibreELEC installation (inside LibreELEC) on an RPi 4 with the following command:

docker run \
-p 8888:8888 \
-v ~/ledfx-config:/app/ledfx-config \
-v ~/audio:/app/audio \
-e HOST=192.168.1.222 \
-e FORMAT='-r 48000 -f S16_LE -c 2' \
--restart always \
--name ledfx -h ledfx shirom/ledfx

The ledfx device shows up in the Snapserver web interface of the LibreELEC add-on, but no sound is received.

The other problem is that even a successful configuration would prevent using HDMI audio and Snapserver at the same time.

Maybe it's simple and I just don't get it:
I need an audio pipe that transfers all of LibreELEC's audio to the container on the same machine.
What options are needed to run the container and receive audio while playing audio through the "ALSA: vc4-hdmi-0" device?
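For what it's worth, the plumbing itself is just a named pipe. Here is a small runnable sketch of the mechanics only (paths are placeholders, not the LibreELEC fix itself):

```shell
# Demo of the pipe mechanics: the container reads raw PCM from a file/FIFO,
# so any process that can write PCM bytes can act as the audio source.
dir=$(mktemp -d)
mkfifo "$dir/stream"

# Writer stand-in: one second of 48 kHz stereo S16_LE silence
# (48000 frames * 2 channels * 2 bytes = 192000 bytes).
dd if=/dev/zero bs=192000 count=1 of="$dir/stream" status=none &

# Reader stand-in for the ledfx container:
bytes=$(wc -c < "$dir/stream")
echo "$bytes"   # 192000
```

For the actual capture, one untested direction is the snd-aloop ALSA loopback module: point LibreELEC's audio output at the loopback's playback side, then run something like `arecord -D hw:Loopback,1 -r 48000 -f S16_LE -c 2` into the mounted stream file. Playing HDMI audio at the same time would additionally need an ALSA multi/plug device that duplicates output to both vc4-hdmi and the loopback.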

Thanks anyway for your work!

Update snapclient version

Hi,

big thanks for the Snapcast integration 👍

Could you update the Dockerfile to always use the latest upstream version of Snapclient (0.26.0 at the moment)?
Currently the Dockerfile installs it via apt, which results in an old version (0.15.0).
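A hedged sketch of how the Dockerfile could pin a release instead of using apt. The version and asset filename below follow the pattern used on the badaix/snapcast releases page, but the exact name for your base image's architecture and Debian suffix should be verified:

```dockerfile
# Fetch a pinned Snapclient release instead of the distro package
# (version and asset name are assumptions - check the releases page).
RUN wget -O snapclient.deb \
      https://github.com/badaix/snapcast/releases/download/v0.26.0/snapclient_0.26.0-1_armhf.deb \
 && apt-get update \
 && apt-get install -y ./snapclient.deb \
 && rm snapclient.deb
```

Installing the local .deb through apt-get (rather than dpkg -i) pulls in its dependencies in the same step.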

MQTT integration

Hello,
I've seen that there was a Home Assistant integration, which is deprecated; the developer seems to have moved on to the LedFx team directly. The old project, https://github.com/YeonV/ledfxrm, mentions that an MQTT integration is coming. Is there a chance of getting this into this Docker container? I tested it today and the container works great. I've containerized nearly everything I could, and I'm happy to have a LedFx container that works out of the box.
It would be great to have the MQTT integration, primarily to check the state of LedFx.
Regards,
Harald

AirPlay (not an issue)

This repo seems like a total gem, can't wait to try it out. One thing before I start (and I'll report back on how it went): @ShiromMakkad can you confirm it's possible to have another docker container of sorts that is used as an AirPlay receiver, so when I AirPlay something to it, it can forward the stream to your LedFx container and therefore control the LEDs?
AirPlay is mentioned in the documentation, but it isn't entirely clear to me. Sorry!

pipe microphone output

Hello,

We have connected a Chromecast to the audio input of a computer. I can hear the audio if I pipe the microphone to the internal speakers using qpwgraph, but I can't figure out how to pipe this microphone to the stream file.
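Since the environment described uses PipeWire (qpwgraph), one untested direction is to record the capture device straight into the file LedFx reads; the device name and target path here are assumptions:

```
# Record the microphone into the stream file (runs until interrupted).
arecord -D default -r 48000 -f S16_LE -c 2 > ~/audio/stream
```

With PipeWire's ALSA compatibility layer, this arecord client should show up in qpwgraph and be routable like any other capture client; alternatively, `pw-record` can target a specific node directly.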

Building after latest commit fails

 => ERROR [16/19] RUN dpkg -i snapclient.deb                                                                       0.7s
------
 > [16/19] RUN dpkg -i snapclient.deb:
#19 0.566 dpkg: error processing archive snapclient.deb (--install):
#19 0.566  package architecture (armhf) does not match system (arm64)
#19 0.631 Errors were encountered while processing:
#19 0.631  snapclient.deb

This happens specifically when building for linux/arm64.
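A sketch of an architecture-aware fix using BuildKit's automatic `TARGETARCH` build argument. The release URL pattern and available asset names are assumptions; verify them against the snapcast releases page:

```dockerfile
ARG TARGETARCH
# Map Docker's arch names to the suffixes used by the snapclient .deb assets
# (mapping and asset names are assumptions - check the releases page).
RUN case "$TARGETARCH" in \
      arm64) SNAP_ARCH=arm64 ;; \
      arm)   SNAP_ARCH=armhf ;; \
      *)     SNAP_ARCH=amd64 ;; \
    esac \
 && wget -O snapclient.deb \
      "https://github.com/badaix/snapcast/releases/download/v0.26.0/snapclient_0.26.0-1_${SNAP_ARCH}.deb" \
 && apt-get update && apt-get install -y ./snapclient.deb
```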

improving performance on ARM devices

I found performance very poor on a Raspberry Pi 3B, NanoPi NEO3, and NanoPi R4S. After a few minutes the animation would lag severely behind the audio (Squeezebox/LMS). After longer periods (10 minutes or so), the animation would keep running for several seconds after the audio was stopped.
Testing more extensively on the R4S running Armbian and Docker, I found that running the container privileged strongly improves performance. Limiting it to specific CPUs also seems to help. This is my adjusted compose file:

--------------------------------------
version: '2'

services:
  ledfx:
    image: shirom/ledfx
    container_name: ledfx
    environment: 
      - FORMAT=-r 44100 -f S16_LE -c 2
      - SQUEEZE=1
    ports:
      - 8888:8888
    network_mode: host
    volumes:
      - ./ledfx-config:/app/ledfx-config
      - ~/audio:/app/audio
    cpuset: '4,5'
    privileged: true

--------------------------------------

In non-privileged mode, performance was 'best' (but still very poor) when limiting the container to a single CPU.
With the compose file above, it now runs fine for 7 WLED devices with about 1500 pixels in total.

No entrypoint.sh

There is an ENTRYPOINT entrypoint.sh in the Dockerfile, but the file is nowhere to be found in this repo.

Run LedFX with USB sound card

Hi, could you help me use your image with an external USB sound card? I went through your examples, but didn't get anywhere.

Thanks very much.

Errors when building

Hello!

Building the image myself gives me this error:
standard_init_linux.go:219: exec user process caused: exec format error

PulseAudio example

Hello!

I'd like to use this image with PulseAudio, but I'm unsure how to go about that. I'm trying to avoid using Snapcast entirely.

I'm getting this:
[Logs]    [2022-9-28 15:33:03] [ledfx] ALSA lib seq_hw.c:466:(snd_seq_hw_open) open /dev/snd/seq failed: No such file or directory
[Logs]    [2022-9-28 15:33:03] [ledfx] [ERROR   ] ledfx.integrations.midi        : Unable to enumerate midi devices: MidiInAlsa::initialize: error creating ALSA sequencer client object.

I'm using this inside a balenaSound image, without Snapcast, with an external sound card.
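For reference, one untested way to bridge PulseAudio to the container is to capture the default sink's monitor and write it into the mounted stream file (the path, sample format, and rate here are assumptions that should match the container's FORMAT setting):

```
# Capture whatever the default sink is playing and pipe it to LedFx's input.
parec --format=s16le --rate=48000 --channels=2 -d @DEFAULT_MONITOR@ > ~/audio/stream
```

The ALSA errors in the log above come from LedFx's MIDI enumeration and are likely unrelated to audio capture.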

Running on arm64?

Hello,
I would like to run this container on the arm64 architecture. Is that possible, and what would be the steps?
Best regards

LedFx v0.10.7 is out; currently running version v0.10.5

Hey, thanks for that amazing project.
You saved me a few more weeks figuring everything out!

There are new versions of LedFx. Do you compile them yourself and upload the images to Docker Hub, or is the automatic build broken?

Microphone

Hello, could you tell me whether there is a way to connect a microphone?

Getting Spotify audio input into LedFX

Hi,

I'm using this image because my goal is to stream music from Spotify to LedFx and create sound-reactive WLED effects.
My problem is that I don't know where I should configure my Spotify account. Is it somewhere in the Snapserver container?

I've tried several docker-compose files using different images, with no luck so far.
My current configuration uses the docker-compose.yml from this project's examples:
https://github.com/ShiromMakkad/LedFxDocker/blob/master/examples/snapcast.yml

version: '3'

services:
  snapserver:
    image: ivdata/snapserver
    container_name: snapserver
    ports:
      - 1704:1704
      - 1705:1705
    volumes:
      - ./snapcast:/tmp/snapcast
  ledfx:
    image: shirom/ledfx
    container_name: ledfx
    environment:
      - HOST=snapserver
    ports:
      - 8888:8888
    volumes:
      - ./ledfx-config:/root/.ledfx
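The Spotify account is not configured in the ledfx container at all; with Snapcast the usual pattern is a librespot source defined in snapserver.conf inside the snapserver container. A hedged sketch (the librespot binary path and options are assumptions; check your snapserver image's documentation):

```
[stream]
source = librespot:///usr/bin/librespot?name=Spotify&devicename=Snapcast&bitrate=320
```

The device then appears as a Spotify Connect target in the Spotify app, and snapserver feeds the decoded audio to connected snapclients, including the one inside the ledfx container.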

The second issue I have is that the ledfx container complains about ALSA and reports the following error in its logs:

ALSA lib seq_hw.c:466:(snd_seq_hw_open) open /dev/snd/seq failed: No such file or directory
[ERROR ] ledfx.integrations.midi : Unable to enumerate midi devices: MidiInAlsa::initialize: error creating ALSA sequencer client object.
ALSA lib seq_hw.c:466:(snd_seq_hw_open) open /dev/snd/seq failed: No such file or directory

Can someone explain what I'm missing or have configured incorrectly?

Thanks

Does this project supersede the need for a USB audio card recommended by LedFx?

Hi,

First of all, thank you for creating this project. It's awesome and I managed to get it up and running fairly quickly. This is not really an issue, but more of a post with questions.

The official installation page (https://ledfx.readthedocs.io/en/latest/installing.html#raspberry-pi-installation) says a USB audio card is needed on a Raspberry Pi.
I'm actually running your container on a Raspberry Pi 3 and it works with the built-in audio card. Is the main reason for this that you integrated Snapcast into this repository and did some magic to direct the audio to LedFx through it (basically the 50 hours of work you mention in the README :))? I haven't tried running LedFx without your container, so I'm not sure whether a USB audio card is necessary there too, but given the documentation I assume it is.

If the answer to the above is yes, then I'm curious whether your work could ever be upstreamed to LedFx so it works on a stock RPi without an extra audio card. Don't get me wrong, I support your project, but it would be even better if it were part of the original LedFx for the sake of simplicity and lower maintenance.

No call to action here, mere curiosity. Thank you again for this amazing contribution.
