machine-setup's Introduction

Machine setup

This can be run locally on an existing machine by running the following:

ansible-playbook --ask-become-pass --connection=local --inventory 127.0.0.1, --limit 127.0.0.1 desktop-master-playbook.yml

Or to run it on a remote machine, something like this:

ansible-playbook --ask-pass --ask-become-pass --extra-vars 'haos_vm_memory_mb=3072' --inventory 1.2.3.123, home-assistant-server-master-playbook.yml

Be sure to change the playbook to the one appropriate for the host system you're running it on.

Also be sure to put the proper RSA key at files/ssh/id_rsa. This is the RSA key that will be put onto the provisioned machine. It's needed to clone GitHub repos such as the dotfiles repo.

After the run, log in to the machine as the user 'vagrant' with the default password 'vagrant'. Then sudo su to become root and run passwd scott to set a password for the user 'scott'. Then log out of root and 'vagrant' and log back in as 'scott' with the new password.
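For reference, that sequence looks roughly like this once you're logged in as 'vagrant':

sudo su          # become root
passwd scott     # set a password for 'scott'
exit             # leave the root shell
exit             # log out of 'vagrant', then log back in as 'scott'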

After logging in, run startx to start the X Window System.

If there are issues with setting the resolution properly, make sure the proper VirtualBox Guest Additions are installed on the guest. Note that this isn't necessary when running on a Mac, since Parallels should be used there. To use Parallels, ensure the plugin is installed before bringing up the VM:

vagrant plugin install vagrant-parallels

You also need a properly licensed version of Parallels, and you may need to specify --provider=parallels on the command line if you have other providers installed (e.g. VirtualBox).
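For example, bringing up the desktop VM with the provider stated explicitly:

vagrant up desktop --provider=parallels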

On Windows, you'll need to shut down the machine and adjust 2 things under the 'Display' settings for the VM.

  1. Increase the 'Video Memory' (I increased it to 32 MB)
  2. Change the 'Graphics Controller' to 'VMSVGA'

Start up the VM, then set the resolution under the View menu dropdown. This should stick across reboots. It's possible something may need to be done with xrandr as well, but I didn't have to this time.
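If it does come up, the manual fix would look something like the following (sketch; the output name and mode are placeholders, so check xrandr's own listing first):

xrandr                                      # list available outputs and modes
xrandr --output Virtual1 --mode 1920x1080   # hypothetical output and mode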

Ubuntu USB auto-installer

This will create a USB flash drive that will auto-install an Ubuntu system.

First, copy autoinstall-user-data-example.yaml to autoinstall-user-data.yaml and modify it accordingly. Then generate the auto-installer ISO:

git clone git@github.com:covertsh/ubuntu-autoinstall-generator.git && cd ubuntu-autoinstall-generator
wget http://releases.ubuntu.com/20.04/ubuntu-20.04.3-live-server-amd64.iso
./ubuntu-autoinstall-generator.sh -a -k -u ../autoinstall-user-data.yaml -s ./ubuntu-20.04.3-live-server-amd64.iso -d ubuntu-autoinstall.iso

Then write it to a flash drive (e.g. /dev/sdc) using sudo dd if=ubuntu-autoinstall.iso of=/dev/sdX. Make sure to change the device path, and always triple check you're writing to the correct device.
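For example (the device path is a placeholder; substitute your actual flash drive):

sudo dd if=ubuntu-autoinstall.iso of=/dev/sdX bs=4M status=progress conv=fsync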

More info on autoinstall can be found here: https://ubuntu.com/server/docs/install/autoinstall

Development

To fully test your changes, run ./test.sh at the root of the project. First, though, make sure to change the file locations of the secrets to your actual locations.

You can also test locally using something like the following:

vagrant provision desktop --provision-with test

Note that you'll need to comment out the git clone in run.sh; otherwise it will fail, since a directory is already mounted where it would attempt to clone.

Tips

Commenting out lines can speed up your local development. Just be sure not to check in these changes! A few examples of doing this are:

  • Commenting out the line to tear down the environment after it's finished running. This can help with turnaround time since you won't have to recreate the environment every time. However, still be aware you could miss problems by not running the suite from scratch. So be careful doing this. Also, be sure to tear down the environment after you're finished with it so there aren't dangling unused instances.
  • Commenting out anything which you don't need to run every time. For instance, not running terraform apply on subsequent runs if you are only working on the playbooks can save time.

Misc

Reset Intellij Ultimate trial

Make sure you're on a version earlier than 2021.2.3. This installs the latest version before that on Ubuntu:

sudo snap refresh --channel 2021.1/stable intellij-idea-ultimate --classic

See this JetBrains blog post for more details.

Then run this to reset the trial.

rm -rf ~/.java/.userPrefs ~/.config/JetBrains/*/options/other.xml ~/.config/JetBrains/*/eval/*

See also: https://dstarod.github.io/idea-trial/

machine-setup's Issues

Try using VDI instead of VMDK for HAOS install

When resizing the disk, I need to convert it to VDI first. However, this means that the underlying disk must be at least twice the size of the partition. Instead, let's see if we can just use the VDI image directly.
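For reference, the current conversion and resize looks roughly like this (sketch; file names and size are placeholders):

VBoxManage clonemedium disk haos.vmdk haos.vdi --format VDI
VBoxManage modifymedium disk haos.vdi --resize 65536   # size in MB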

HAOS offers one, but I think I originally went with the VMDK since it seemed to offer more features. However, I don't remember exactly which ones, and when I did a little research more recently I didn't see anything significant.

Do a little more research into VMDK vs VDI and if VDI seems sufficient then use that instead.

.bashrc and .zshrc backup fails when the files don't exist

This seems to happen when the backup is made but the script fails before the new file is put into place. On subsequent runs the file to back up no longer exists, so the backup fails.

The backing up seems a little dubious, although it may be alright. The right fix may just be to have the backup not fail if the original file doesn't exist. This is safe because the reason we do the backup is so we don't blow away the old file (and so stow doesn't fail to link it). So if it doesn't exist, we're all good; see the sketch below.
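Something like this, assuming the backup is done from a shell script (the .bak names are placeholders):

if [ -f ~/.bashrc ]; then mv ~/.bashrc ~/.bashrc.bak; fi
if [ -f ~/.zshrc ]; then mv ~/.zshrc ~/.zshrc.bak; fi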

Figure out what to do with authorized keys

Since it's just a list of public keys it might even be safe to check into source control.

What are the contents of the known_hosts file? Is there precedent, since that's already checked in?

Can the public key just be generated from the private key as part of the build?
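For what it's worth, it can be; for example, assuming the key lives at files/ssh/id_rsa as mentioned in the README:

ssh-keygen -y -f files/ssh/id_rsa > files/ssh/id_rsa.pub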

There's a command line tool, ssh-copy-id, that I believe adds your key to the authorized_keys file. Would this be of any use?

Automate restoring Home Assistant backups to test server

The existing Ansible playbooks are capable of getting a fresh HAOS VM install running on a new host. However, setting up the instance and restoring a backup is still manual and takes a while, since large backups have to be downloaded, uploaded, and restored.

We should automate the backup restore process by downloading them from the local prod instance (or from the remote backup location, but that would be slower), then restoring them to the new instance. This should be possible using the HA API.

Add tests against the provision machine

The only verification we're doing now is that the playbook finished successfully. However, we should do a little more extensive testing.

Looking into Ansible testing libraries, or general infrastructure/machine testing tools, might be a good idea.

A few ideas for things to test:

  • Make sure we can clone a repo. This would indicate id_rsa is set up properly
  • Verify the date is correct

Ansible recommends not using an external testing framework, but rather incorporating checks and the like into your playbook. More information here:
https://docs.ansible.com/ansible/latest/reference_appendices/test_strategies.html

Use an AWS spot instance for test provision machine

The EC2 instance we're using is of an unnecessarily high tier. In order to save costs and not use unnecessary resources, we should use a spot instance.

Currently I'm not sure exactly how to manage a spot instance via Terraform. You can make a spot instance request via Terraform, but the request and the actual instance that gets created are different things. I need to figure out a way to get the instance itself managed by Terraform so I can tear down the machine when finished with the provisioning test.

Run machine setup test twice in a row

The playbook should successfully run consecutively. It's not meant to only be run on initial machine setup. In order to verify there aren't issues with subsequent runs it's a good idea to run the playbook against the same machine twice in a row.

Refactor how tasks are organized (particularly in master playbook)

How tasks are organized can be refactored to be cleaner. Right now most tasks are split out into separate task files in the tasks dir, but there is still some refactoring that can be done to make everything cleaner.

Particularly, the master playbook is starting to grow. It seems that this is mostly things related to my user (scott). Perhaps pulling this out into a task specifically for this user might make sense but I haven't totally thought this through.

I think the crucial thing with how the tasks need to be organized is determining task dependencies. For instance, cloning the dotfiles repo requires that git is already installed. It might be a good idea to look into whether Ansible has a way to make managing dependencies between tasks easier. But I think the cleanest way would just be to get everything into the tasks dir and make sure the tasks are run in order in the master playbook.

Automatically create password for user

Although the user is created, it isn't given a password. The documentation currently instructs the user to manually log in and set the password themselves. However, Ansible has the ability to set the password on the user, as seen here:

https://docs.ansible.com/ansible/latest/modules/user_module.html#parameter-password

The value of the password should be created using the following command.

openssl passwd -salt abc -1 12345

Where "12345" is the password. The value of the salt can be basically anything. I suppose pick a sufficiently random string but I'm not sure it's too important.

The documentation also currently mentions that you need to log in and manually set the password. This documentation should also be updated. At the time of this writing it can be found here in the README:

https://github.com/ScottG489/machine-setup/blob/master/README.md

Install tmux plugins (tpm)

tpm plugins need to be installed in tmux (typically by pressing prefix + I). We should do this installation during provisioning rather than needing the user to do it manually.

I tried to get this working but I wasn't able to. Here is one resource which seemed promising but didn't work:

https://github.com/tmux-plugins/tpm/blob/master/docs/automatic_tpm_installation.md

Also the command run from tmux is: run-shell /home/scott/.tmux/plugins/tpm/bindings/install_plugins

Unfortunately, I wasn't able to get anything working, though.
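One variant that might be worth retrying (sketch; run-shell needs a running tmux server, hence the detached session):

tmux new-session -d -s bootstrap
tmux run-shell /home/scott/.tmux/plugins/tpm/bindings/install_plugins
tmux kill-session -t bootstrap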

Timezone for scheduled job is UTC and we want PST

This job is scheduled to run Monday at 4AM. This time is meant to be off hours when likely nothing else will be running. However, since this is 4AM UTC it's actually running at 9PM PST Sunday.

This needs to be fixed either by changing the timezone somehow or by adjusting the scheduled time to account for PST.
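For example, if the schedule is a cron expression evaluated in UTC, Monday 4 AM PST (UTC-8) corresponds to Monday 12:00 UTC:

0 12 * * 1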

Don't install peek and remove PPA

I was originally thinking of using peek for screen recording. However, I decided to go with a script using ffmpeg. Since it's no longer part of my workflow it can be removed from the package install list.

Investigate alternatives to bash for testing playbook

We're running a bash script to do some basic testing against the playbook. However, this is quite limiting. Here are some options I've found so far for validating Ansible machine provisioning:

  1. Goss (YAML)
  2. molecule (YAML)
  3. ansible test strategies (YML/ansible)
  4. serverspec (ruby)
  5. inspec (ruby)
  6. testinfra (python)

These are roughly in the order I find most promising based on a quick look. The top three are really the only ones I'm interested in; the last three are pretty much off the table but are listed here just in case.

Stow creates directories if they don't exist and new files we may not want to track get added to source control

For instance, on a fresh system ~/.weechat may not exist. If we install weechat with stow then it will create ~/.weechat as a symlink. This means that every new file created in ~/.weechat will be added to source control, including logs, etc., which we don't want.

After a little testing, the solution seems to be to add the --no-folding flag. This makes stow create the directory as a real directory if it doesn't exist (rather than symlinking the directory itself), and symlink only the files inside it. See the man pages for more info. So the command would look like:

stow -S --no-folding *

Install Google Chrome instead of Chromium

Since Google sucks, they make it so you can't sync your account on non-Google Chrome browsers. So now I have to use the actual Google Chrome browser instead of Chromium.

Here's a quick gist on how to do this in a shell:
https://gist.github.com/jeanpylone/3983049

wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add - 
sudo sh -c 'echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
sudo apt-get update 
sudo apt-get install google-chrome-unstable

Might want to consider using google-chrome-stable instead, though.

See also:
https://www.google.com/linuxrepositories/

On why they suck:
https://blog.chromium.org/2021/01/limiting-private-api-availability-in.html

Update /etc/default/grub with GRUB_DEFAULT=saved

In order to specify what GRUB entry you want to reboot into, this needs to be changed from GRUB_DEFAULT=0 to GRUB_DEFAULT=saved.

After that is done, sudo update-grub also needs to be run so the changes apply to /boot/grub/grub.cfg.

This will allow us to run grub-reboot <entry #, title, etc> to set the default entry that will be used on the next reboot, such as Windows.
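The steps would look roughly like this (sketch; the sed edit and the entry title are only illustrative):

sudo sed -i 's/^GRUB_DEFAULT=0/GRUB_DEFAULT=saved/' /etc/default/grub
sudo update-grub
sudo grub-reboot 'Windows Boot Manager'   # entry title is an example
sudo reboot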

Investigate fix for contention on apt locks

In the pre_tasks we do an apt update twice since, for whatever reason, this seemed to help avoid problems with the apt locks. Waiting on these locks is something that isn't natively supported by Ansible's package module. More information can be found here:

ansible/ansible#25414

The best solution from that issue seemed to be the following:

- name: Wait for any possibly running unattended upgrade to finish
  raw: systemd-run --property="After=apt-daily.service apt-daily-upgrade.service" --wait /bin/true

So it would be worth seeing how that works.
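A sketch of how that might be wired into the playbook's pre_tasks (become is assumed here, since systemd-run needs root):

pre_tasks:
  - name: Wait for any possibly running unattended upgrade to finish
    raw: systemd-run --property="After=apt-daily.service apt-daily-upgrade.service" --wait /bin/true
    become: true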

The main benefit here is that we would save having to do an update twice in a row, although this doesn't take up much time, relatively speaking. It also just seems like a solution that makes sense, since I'm not exactly sure why the double update fixes things.

If the above code is implemented we should keep a close eye on future builds running into the problem again since the current solution has been working pretty reliably.
