personal-blog's Issues

Sharing music with friends using fly.io, Azuracast, and Mopidy


slug: sharing-music-with-friends
date: September 26, 2022
description:

TL;DR: Use Mopidy with Iris + Azuracast to live stream your music for cheap on fly.io. Here's the repo with my deployments. Feel free to fork and follow the instructions to have a self-hosted music and streaming service up and running quickly.

In my search for a simple self-hosted, highly available music service to play music for my friends, I've finally landed on navidrome for the player and azuracast for sharing the stream. One of my goals is to be able to run roleplaying games for my friends wherever I am, using a tablet or a low-spec laptop, so I need services that run somewhere in the cloud. By using navidrome, I can grab free apps like Substreamer to connect to my personal music library.

Substreamer is great because I can make select albums or playlists available offline, limiting my bandwidth usage.

If I'm on an iPad, I can stream from Substreamer to Azuracast using the IziCast app. Basically, IziCast provides a way of connecting to any Icecast server.

Tired of hearing the word "cast"? Same.

One downside of this approach is having to use an external audio loopback device to route my device audio. It's clunky and expensive. So I've opted to use Mopidy with the Iris frontend instead. In this case, Mopidy connects directly to Azuracast; no intermediary device or software required. Win!

I'm deploying all of this on the excellent fly.io platform. Their free tier is quite generous, but Azuracast unfortunately needs more resources than that, so I'm using a beefier instance. To mitigate the expense, I've set up a GitHub Action that scales down all instances every night at midnight. By adding workflow_dispatch to the action, I can also trigger it manually, and by adding an input dropdown with start and stop values, I can start or stop the instances at will.
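That workflow might look something like this — the app name, job shape, and scale counts here are assumptions for illustration, not my exact file:

```yaml
# .github/workflows/scale.yml — names and values are illustrative
name: scale-fly-apps
on:
  schedule:
    - cron: "0 0 * * *" # nightly at midnight UTC
  workflow_dispatch:
    inputs:
      action:
        description: "start or stop the instances"
        type: choice
        options: [start, stop]
        default: stop
jobs:
  scale:
    runs-on: ubuntu-latest
    steps:
      - uses: superfly/flyctl-actions/setup-flyctl@master
      - name: Scale instances
        env:
          FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}
        # On the nightly schedule run, inputs.action is empty, so this scales to 0.
        run: flyctl scale count ${{ inputs.action == 'start' && 1 || 0 }} --app my-azuracast
```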

In the future, I suspect fly.io machines will allow auto-scaling based on requests. Looking forward to that!

Since my music exists locally, I needed to tell fly.io to build locally only, then push the built images up. This seemed a lot more resilient than having it push up the docker context before building remotely. The downside is long upload times, since the entire music library is being baked into the docker image. Bit of a tradeoff. I could probably set up syncthing, or curl a tar from an S3 bucket, but that seems less fun to me for now.
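With flyctl, that local-build-then-push flow comes down to a single flag (the app name is read from fly.toml, so it's omitted here):

```shell
# Build with the local Docker daemon (music library baked into the image),
# then push only the finished image up to fly.io's registry.
fly deploy --local-only
```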

Re: azuracast, since it uses a few different internal paths for storage and fly.io does not currently allow multiple volume mounts, we resort to copying a copyright-free track to the station media directory every time the app boots up, i.e. after a new deploy. One caveat here is that we have to manually restart broadcasting after every new deploy or machine restart. Kind of a bummer, but not a huge deal.
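That boot-time copy amounts to a couple of lines in the container's entrypoint — every path and filename below is hypothetical:

```shell
#!/usr/bin/env sh
# Sketch of the boot-time copy; adjust paths to your station config.
MEDIA_DIR=/var/azuracast/stations/default/media   # hypothetical station media path
FALLBACK=/app/silence.mp3                         # copyright-free track shipped in the image

mkdir -p "$MEDIA_DIR"
# Copy the fallback track in if it isn't there already
[ -f "$MEDIA_DIR/silence.mp3" ] || cp "$FALLBACK" "$MEDIA_DIR/"
```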


Lastly, in order to avoid opening public ports, I'm running every container with a tailscale sidecar process. This allows only me to access Mopidy and Navidrome while locking it down for everyone else.
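The sidecar itself is only a few lines in each container's start script — the auth key variable, hostname, and app command are assumptions:

```shell
# Start the tailscale daemon in the background, join the tailnet,
# then hand off to the real app process. The app is then reachable
# only over the tailnet, with no public ports exposed.
tailscaled --state=/var/lib/tailscale/tailscaled.state &
tailscale up --authkey="${TS_AUTHKEY}" --hostname=mopidy
exec mopidy
```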

Getting Tauri working in WSL


slug: getting-tauri-working-in-wsl
date: October 5, 2022
description: how I got Tauri.app working in WSL

Update: Tauri now has good guidelines for working with Homebrew and a clear set of prerequisites for Linux. I'm not sure if I missed these before or they were added later, but those docs basically describe how to avoid the installation problems I ran into.

TL;DR:

  • Don't use Homebrew; use your distro's package manager instead (I used apt on Ubuntu 20.04).
  • Install WSLg according to instructions.
  • Install the latest GPU drivers.
  • DON'T install a 3rd-party X server like VcXsrv.
  • DON'T install xfce4.
  • Add export DISPLAY=:0 to your shell file, e.g. ~/.zshrc.

I've been chipping away at a personal task manager clone (who isn't these days?), and wanted to begin packaging the app for desktop. I recently heard about tauri.app, which sounds like a very promising electron alternative. I love their philosophy of security and local-first apps; this has been something on my mind for quite a while. I appreciate cloud connectivity as an option, but for something like a simple task manager, why do you need to talk to your servers?! Let me choose that please.

Tauri is built in Rust, and because it produces desktop apps, it obviously has some visual components that run outside of the CLI. I do a lot of my personal development on my Windows machine in WSL. It's fantastic overall, but how do I run desktop apps from it? There was no way I wanted to set up Tauri in Windows and pollute my environment again.

It turns out that WSLg has been around for a while now and provides a way to run Linux desktop apps as companion apps alongside the rest of your Windows apps. That's so cool. Through trial and error, I discovered that getting WSLg going was pretty easy, but getting Tauri running from within WSL was not. Essentially, I found that using Homebrew to install the required packages in my WSL distro (Ubuntu 20.04) caused a lot of issues. Falling back to the native package manager to install the same dependencies worked much better.

A number of *-dev packages needed to be present for Tauri to boot properly; I hit several errors from pkg-config about missing packages. Whenever I saw one, I used the Ubuntu 20.04 package repository search to determine which package to install.

The full install line was this:

sudo apt install pkg-config \
  libdbus-1-dev \
  libgtk-3-dev \
  libsoup2.4-dev \
  libjavascriptcoregtk-4.0-dev \
  libwebkit2gtk-4.0-dev

Finally, I needed to add export DISPLAY=:0 to my ~/.zshrc file.

After running through the various required steps and navigating around some speedbumps, I got it working! Tauri has a very easy-to-use init script. Watching that desktop GUI come up the first time after running npm run tauri dev is very satisfying. I'm looking forward to exploring its APIs.

For more info on potential issues with tauri and wslg, have a look at this: tauri-apps/tauri#380


Manipulating SVGs using CSS


slug: manipulating-svg-with-css
date: March 24 2019
description:

At Race Roster, I've been experimenting with using SVGs to show real-time theme updates as a user changes the theme colours in their branding settings. A more traditional approach would use a set of <div> and <span> elements with border or background-color CSS properties to achieve this. I opted to try SVGs instead, since path and colour manipulation is one of their strong points. Here's a very basic example SVG:
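A minimal stand-in of the kind I'm describing — a single <path> carrying an id that CSS can target (the shape and default colours are just illustrative):

```html
<svg viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg">
  <!-- The id is arbitrary; it only needs to match the CSS selector -->
  <path id="arbitrary-id-of-path-element"
        d="M10 80 Q 50 10 90 80 Z"
        fill="grey" stroke="black" stroke-width="2" />
</svg>
```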

After inspecting the SVG structure using the browser's devtools, I can target the <path> element I care about in CSS and modify the fill and stroke attributes like so:

<style>
#arbitrary-id-of-path-element {
  fill: hsla(335, 52%, 89%, 1);
  stroke: hsla(335, 52%, 56%, 1);
}
</style>

Applying that style re-renders the same SVG with the new fill and stroke colours.

Pretty simple! It also lets me export an SVG from a visual design file and manipulate it in the browser. Very cool.

How I'm running this ghost blog in docker


slug: running-ghost-in-docker
date: April 7 2015
description:

Update: I'm now running this on Netlify using swyxkit!

May as well start simple, hey? I wanted to start a blog to help organize my thoughts in a public way, because surely I'm not the only one who can benefit from what I learn. Scott Hanselman talks about wasting your keystrokes:

http://www.hanselman.com/blog/DoTheyDeserveTheGiftOfYourKeystrokes.aspx

I hate waste of any kind, but time especially. So here we go.

Docker and Digital Ocean <3

As you may have already determined, I'm running this blog using the ghost blogging platform inside of a Docker container. In case you don't know, Docker is a great way to "containerize" your apps, meaning you can define the application's environment alongside your app code. This is super great for ensuring consistency among your teammates, not to mention you avoid polluting your host machine with all kinds of dependencies. I don't know about you, but I hate globally changing PHP + Apache versions, booting redis, installing npm packages, etc. just to satisfy different project requirements (note the emphasis on globally; obviously, you'll need to satisfy your project's requirements by using the right tools). Docker is great for providing that much-needed isolation. See here for installation instructions.

To start, I booted up a Digital Ocean droplet preconfigured with Docker:

Choosing Digital Ocean Docker droplet

Note that I chose the Docker starting point instead of the Ghost droplet in the left column. I did this because I'll be sharing this droplet with other containerized apps, so I wanted to keep the host environment clean and stripped down.

Next, I created a super-simple Dockerfile that looks like this:

FROM ghost

Docker Compose for Orchestration

Here's where things start to get cool. By default, your data is not persisted in a Docker container; only files you share in what Docker calls a "volume" will be persisted. Obviously, I want this blog to live on even if I screw up the container somehow, so of course I needed to set up a volume. My preference is to go one step further and store persisted data in its own volume container, linked to the app container. For that, a tool called Docker Compose (formerly Fig) makes things super easy. Here's my docker-compose.yml:

ghost:
  build: .
  ports:
    - "8000:2368"
  volumes_from:
    - ghostdata
  environment:
    NODE_ENV: production

ghostdata:
  image: busybox:ubuntu-14.04
  command: "true"
  volumes:
    - /var/lib/ghost

Note how I'm mapping the host port 8000 to the container port 2368 (Ghost's default port), and using the "volumes from" the ghostdata container. And voila! We get persisted data. Note that the data container is using a very small image with a command that executes successfully so the data container is not actually ever running. Cool.

See here for more info on Docker data management.

At this point, I wanted to start customizing the config.js file Ghost ships with. That was easy enough; just create a config.js in the same host directory as the Dockerfile, and add the current directory as a volume to the ghost container. That looks like this in docker-compose.yml:

ghost:
...
  volumes:
    - ./:/var/lib/ghost

Unfortunately, the ghost container wasn't honouring the volume. I'm not sure if this is a bug in Ghost's build process for the container, or in Docker itself. I need to investigate some more. To get around this, I added config.js explicitly as a volume:

volumes:
  - ./config.js:/var/lib/ghost/config.js

That worked great.

Next, I started customizing the default casper theme to add prismjs (for syntax highlighting), integration with Disqus, and other tweaks. Rather than copy the whole theme into the host directory, I chose to add individual theme files and add them as explicit volumes, like so:

ghost:
...
  volumes:
    ...
    - ./themes/casper/post.hbs:/var/lib/ghost/themes/casper/post.hbs
    - ./themes/casper/default.hbs:/var/lib/ghost/themes/casper/default.hbs
    - ./themes/casper/partials/loop.hbs:/var/lib/ghost/themes/casper/partials/loop.hbs
    - ./themes/casper/assets/css/prism.css:/var/lib/ghost/themes/casper/assets/css/prism.css
    - ./themes/casper/assets/js/prism.js:/var/lib/ghost/themes/casper/assets/js/prism.js

And that sums up the config. Now it was time to fire it up! This was as simple as running docker-compose build && docker-compose up -d in the directory where docker-compose.yml lives. Once I logged in to the Ghost admin and ran through the basic settings, I was happy.

Supervisord

Finally, I wanted to ensure the blog would keep running in the event of a container failure or host machine reboot. For that, I installed supervisord on the host machine and added this config:

[program:marty_blog]
command=/usr/local/bin/docker-compose up
directory=/path/to/marty_blog
autostart=true
autorestart=true

Then I ran supervisord to kick things off. Now when I update the blog files or configuration, I can run docker-compose build && docker-compose stop and supervisord will take care of restarting the containers. Schawing!
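My update loop, sketched as commands (the directory matches the supervisord config above; the supervisorctl reload step only applies when the supervisord config itself changes):

```shell
cd /path/to/marty_blog

# Rebuild images and stop the running containers;
# supervisord notices the exit and restarts them with the new build.
docker-compose build && docker-compose stop

# Only needed when the supervisord config itself changed:
supervisorctl reread && supervisorctl update
```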

All in all, I'm pretty happy with the workflow for running this blog containerized. A few hiccups were encountered, but using Docker significantly helped ease the mental burden for me of getting this blog running. Shoot me a comment if you have any questions or corrections, and thanks for reading!

Note: You may have noticed that I set the host port of the Ghost container to 8000, yet you're accessing this blog on the standard port 80. To accomplish that, I'm actually running an nginx container as a reverse proxy. That's how I'm running multiple apps on the same host with no global clutter. Shoot me a comment if you'd like to know how I configured that.

Using NVM to enforce node versions


slug: using-nvm-to-enforce-node-versions
date: February 25 2020
description: how I got NVM set up to automatically switch node versions

At work, we previously used docker to give us a common node version by running every node and npm command through a wrapper script. This meant we wouldn't run into strange gotchas between different versions of node and npm. But it also created more problems:

  • We couldn't use different node versions per package. This means we had to find the lowest common denominator among all packages, so we were stuck on an older, slower version with fewer features.
  • We couldn't debug node scripts the usual way because of the docker proxy.
  • Running e2e tests through a real browser required installing and configuring XQuartz, which was really wonky and not easy to train people on.
  • Performance was really bad. People were waiting many minutes for basic builds to finish.
  • Ctrl + C didn't always work because of terminal shenanigans between docker and the host, so you could end up with hanging processes.
  • Watch/live-reload scripts didn't work sometimes for reasons I don't understand, meaning they had to run a full build for every change.

Tools like nvm were made to deal with stuff like this, so why not use it? By modifying the previous wrapper script and adding .nvmrc files to each package, the wrapper can ensure that nvm installs the needed node version before running. It also avoids polluting your shell: your active node version doesn't change simply because you ran npm i inside of a package. Works like a charm!
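The per-package pin is just a version string in a file — the version number here is illustrative:

```shell
# .nvmrc lives at the root of each package and contains only the version:
echo "12.16.1" > .nvmrc

# The wrapper then runs, from the package directory:
. "$NVM_DIR/nvm.sh"
nvm install   # reads .nvmrc, installs and activates that version
```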

I also investigated volta, but it simply isn't ready for me to use yet, primarily because it doesn't allow installing "global" npm packages.

Here's the full code of the wrapper script:

#!/usr/bin/env bash

# This file mainly exists to be a facade for our watch/build/install scripts. By
# providing a thin layer on top, we can theoretically change out the underlying
# implementation without having to change our workflow too much.
#
# As of the time of this writing, we're enforcing the desired node and npm
# versions within each package by using `.nvmrc`.

# Fail fast
set -euo pipefail

# Set up nvm
. "$NVM_DIR/nvm.sh"
nvm install

COMMAND="$1"

case "$COMMAND" in
  # Expose npx directly to do whatever we need
  npx)
    npx "${@:2}"
    ;;
  # Expose npm directly to do whatever we need
  npm)
    npm "${@:2}"
    ;;
  # Expose node directly to do whatever we need
  node)
    node "${@:2}"
    ;;
  # Remove all node_modules and bower components inside all packages
  clean)
    find . -maxdepth 3 -type d -iname node_modules -exec rm -rf {} +
    find . -maxdepth 3 -type d -iname bower_components -exec rm -rf {} +
    ;;

  *)
    echo "That command doesn't exist."
    exit 1
    ;;
esac
