
Comments (36)

cgwalters commented on August 14, 2024

(don't need to build new containers for every one)

Personally, I make && sudo make install a lot for developing this repo rather than rebuild the container. I am very much of the opinion we should (continue to) support that.

We could further "productize" that model by more formally having this one git repo support being built in two ways - as a package (rpm/ebuild/whatever) and as a container (with the package going into the container, with its deps).

build scripts should be versioned with the thing they build.

If I'm understanding you correctly, we currently have the thing being built explicitly split out, right? Do you disagree with that?

from coreos-assembler.

jlebon commented on August 14, 2024

If I'm understanding you correctly, we currently have the thing being built explicitly split out, right? Do you disagree with that?

I think it'd make sense to also include, e.g., a coreos-assembler.build-gitrev which is the SHA of this repo itself. This might be more what @ajeddeloh meant.

Separating them (e.g. into separate repos) here would have a few advantages

One downside at least is that your scripts and your container are now decoupled, so if a build script needs a new package (or revert a package), it's more work to get it into the build environments and synced.

ajeddeloh commented on August 14, 2024

Personally, I make && sudo make install a lot for developing this repo rather than rebuild the container. I am very much of the opinion we should (continue to) support that.

I haven't tried yet but that does have some host dependencies, right? (e.g. rpm-ostree). Fedora should not be a requirement for the best developer experience (rpm-ostree isn't packaged by many other distros). No second-class developer host OS citizens! Additionally, things like recreating /dev/kvm become iffy if we're not running it in a container.

If I'm understanding you correctly, we currently have the thing being built explicitly split out, right? Do you disagree with that?

I do and I don't. Yes, we have the thing being built split out, but how we invoke the tools to build it is just as important and should be split out too. I'd argue things like the cmd-build script are themselves sources that should be split out. It's something I expect to change with releases as we need to do new things in the build process. For CL we version and branch the scripts repo with the overlay and portage-stable for this reason.

One downside at least is that your scripts and your container are now decoupled, so if a build script needs a new package (or revert a package), it's more work to get it into the build environments and synced.

Agreed, but I expect that to be much more infrequent.

I think it'd make sense to also include a e.g. coreos-assembler.build-gitrev which is the SHA of this repo itself. This might be more what @ajeddeloh meant.

I don't follow?

cgwalters commented on August 14, 2024

(rpm-ostree isn't packaged by many other distros).

Right. And short of this use case...there's not really a compelling reason to do so that I can think of. However, at least rpm is packaged for Debian - so is libsolv. That just leaves librepo I think.

dustymabe commented on August 14, 2024

changes to the build scripts are lighter weight (don't need to build new containers for every one). This would make development of the build system easier.

I think this is part of my motivation for starting the conversation in: #52 (comment)

cgwalters commented on August 14, 2024

Additionally things like recreating /dev/kvm become iffy if we're not running it in a container.

So I do run it in a container, not necessarily the container. Let's keep in mind that I could also on my Silverblue host pull (looks up Gentoo docker images...hum I'm confused do I want this one? How does emerge thing work[1]) ...a Gentoo image and try to build this repo inside of it too.

I'd argue things like the cmd-build script is itself a source that should be split out.

Currently, the cmd-build is really "gluing together" two processes that already exist externally. It's not quite like we're including something as complex as emerge, mock or bitbake here.

[1]

# emerge -atv bash
!!! Section 'gentoo' in repos.conf has location attribute set to nonexistent directory: '/usr/portage'
!!! Section 'x-portage' in repos.conf has location attribute set to nonexistent directory: '/usr/portage'
!!! Invalid Repository Location (not a dir): '/usr/portage'
1fd322eda280 ~ # ls -al /etc/portage/make.profile 
lrwxrwxrwx 1 root root 51 Sep 10 01:29 /etc/portage/make.profile -> ../../usr/portage/profiles/default/linux/amd64/17.0
# 

And that's a broken link

ajeddeloh commented on August 14, 2024

So I do run it in a container, not necessarily the container.

Sure, but I don't think that's a reasonable expectation for all users.

Currently, the cmd-build is really "gluing together" two processes that already exist externally. It's not quite like we're including something as complex as emerge, mock or bitbake here.

Right, but that makes it significantly harder to experiment with doing things outside the current virt-install method (to be clear, I'm not saying we shouldn't use virt-install, but rather that we shouldn't lock ourselves to it just yet, especially in the project's infancy). Pulling it into a separate repo, or even moving it to fedora-coreos-config (not sure how I feel about that) would make it much more flexible.

There are also several things I want to experiment with where I'd want to make that script more substantial. For example: using Ignition to create partitions and filesystems instead of anaconda, or manually configuring grub (kickstart might allow this, not sure yet, but it doesn't look obvious). Basically, experimenting with not using kickstart becomes incredibly painful if this is part of the container.

As for the gentoo container: I dunno, might need an emerge-websync or something first?

cgwalters commented on August 14, 2024

OK, I get it now. Definitely interesting to support Ignition for this instead. One thing I'd like to avoid is us generating loopback devices in the container itself since they're not namespaced in the kernel and are easy to leak. The filesystem developers also use loopback devices a lot for testing but they'll tell you not to do it in "production". So if we're doing things instead inside a VM I think that's better.

Now that means ignition inside a VM. We could simply download a Container Linux disk image just like we do for Anaconda and provide it an ignition that does the install and contains a different ignition config to make filesystems?

Maybe for now...we define a very high level inflexible image.yaml format that lives in the config repo and just supports:

size: 10GB
root:
  size: 6GB
  filesystem: xfs
# And we default to a separate /var that takes the remainder of the space
post: >
  shell script goes here

And translate that to either Kickstart or Ignition? I'd be totally fine changing the default config to not use LVM for now in aid of this too.
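The translation could be fairly mechanical. Here's a rough shell sketch, assuming the format stays exactly as flat as the snippet above; the keys and the emitted Kickstart lines are illustrative only, not an actual implementation:

```shell
# Write a sample of the hypothetical image.yaml, then translate it to
# Kickstart partitioning commands. A real version would use a proper
# YAML parser instead of awk.
cat > image.yaml <<'EOF'
size: 10GB
root:
  size: 6GB
  filesystem: xfs
EOF

# Top-level "size:" (not indented) is the whole-disk size.
disk_gb=$(awk '$1 == "size:" && !/^ / {sub(/GB/, "", $2); print $2; exit}' image.yaml)
# Indented keys belong to "root:".
root_gb=$(awk '/^  size:/ {sub(/GB/, "", $2); print $2}' image.yaml)
root_fs=$(awk '/^  filesystem:/ {print $2}' image.yaml)

# Kickstart part sizes are in MiB; /var grows to take the remainder.
{
  echo "#--coreos-virt-install-disk-size-gb: ${disk_gb}"
  echo "part / --size=$((root_gb * 1024)) --fstype=${root_fs}"
  echo "part /var --grow --fstype=${root_fs}"
} > image.ks
cat image.ks
```

Translating the same keys to an Ignition `storage` section instead would be a similar, equally small transformation.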

ajeddeloh commented on August 14, 2024

We use loopback devices for the Ignition blackbox tests and I wholeheartedly agree they're utterly awful to deal with... but... we do use them (successfully) to build CL today. So I'm torn on it /shrug. What I really want is for loopback devices to be good.

As for how using Ignition to generate images would work, I'm not 100% sure yet; it's still in the "idea rattling around in my head" phase. There are definitely parts of the process it can't do and shouldn't ever want to do (like deploying an ostree). It's probably a good idea to start with a list of things the image creator needs to do (e.g. partition, create fs's, deploy a tree, install bootloaders, etc) and figure out what we want to do for each step.

As for the high level image.yaml idea: I'd actually prefer an inverted version. Instead of a shell script in a config file, I'd prefer a shell script that uses external config files. It's easier to grok (the shell script is the entry point). There's no magic. No custom config format. And if we can use ostree and Ignition to do the heavy lifting, it shouldn't be too long/complex. We could even use Ignition to create the core user (idea: move this to the base config and have Ignition that runs at first boot do it instead, ship an image with no users).

WRT lvm: I think we should drop that in favor of partitions regardless, since that was the conclusion from the fcos tracker discussion.

cgwalters commented on August 14, 2024

image.sh? OK, though...I think we need to make the base qcow2 from the container before giving it to qemu, so that I think should be a yaml (or whatever) config option. After we've made the disk, then we copy over image.sh into a VM that has ostree + ignition and run it?

ajeddeloh commented on August 14, 2024

I don't follow? I'm thinking of a system that looks more like this (could be run from a vm launched from the assembler container or using loopback devices on the assembler container):

  1. Either attach a blank disk to the vm or create a loopback device backed by an empty file
  2. Use Ignition disks to create the partitions, filesystems
  3. Use ostree to populate the rootfs
  4. Install grub to the disk (both bios and EFI)
  5. Harvest disk image and convert to qcow2
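The loopback variant of those steps could be sketched like this. Only the disk creation actually runs here; steps 2–4 are placeholder comments, since they need root and the real tools, and the commands named in them are assumptions rather than the actual implementation:

```shell
set -eu

# 1. Create a sparse backing file to stand in for the blank disk
#    (10G virtual size, almost nothing allocated on the host).
truncate -s 10G fcos.img

# 2-4. These would run as root in the build environment, roughly:
#      - attach the file as a loopback device (e.g. losetup --find --show fcos.img)
#      - run Ignition's disks stage against a config to create partitions + filesystems
#      - use ostree to deploy the tree into the mounted rootfs
#      - install grub for both BIOS and EFI
# 5. Harvest: convert the raw image to qcow2, e.g.
#      qemu-img convert -O qcow2 fcos.img fcos.qcow2

echo "virtual size: $(stat -c %s fcos.img) bytes"
echo "allocated:    $(du -B1 fcos.img | cut -f1) bytes"
```

The sparse file keeps the host-side cost proportional to what's actually written, which is what makes the final harvest/convert step cheap.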

cgwalters commented on August 14, 2024

(could be run from a vm launched from the assembler container or using loopback devices on the assembler container):

There's a big difference between those two cases though; in the VM case, it's code outside the VM that needs to create the initial disk image - and that needs to be a specific size.

Although, I guess we could just create a 100GB or some large size but thinly provisioned qcow2, then after image.sh has run in the VM, inspect the space it's actually allocated and shrink the final qcow2 to that.

This problem is really exactly the same as what's being solved with the magical #--coreos-virt-install-disk-size-gb: 8 comment in image.ks - Kickstart files are basically scripts that run inside a machine (VM or physical), they don't have any declarative way to define how big the whole disk is. In practice today in Fedora, that metadata lives in a wholly separate place which is really just broken.

So...we could go with image.sh but also add that same magical comment line.
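Pulling the size back out of that magic comment on the host side is a one-liner either way; a quick sketch (the image.sh contents here are hypothetical, only the comment syntax is taken from image.ks):

```shell
# Extract the disk size from the magic comment, whether it lives in
# image.ks today or a hypothetical image.sh tomorrow.
cat > image.sh <<'EOF'
#!/bin/bash
#--coreos-virt-install-disk-size-gb: 8
# ...rest of the install script would go here
EOF

size_gb=$(sed -n 's/^#--coreos-virt-install-disk-size-gb: *//p' image.sh)
echo "creating ${size_gb}G disk before launching the VM"
```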

ajeddeloh commented on August 14, 2024

There's a big difference between those two cases though; in the VM case, it's code outside the VM that needs to create the initial disk image - and that needs to be a specific size.

Should have been more clear; I was imagining the script being copied to the vm (which has the necessary tools like Ignition, ostree, etc)

Although, I guess we could just create a 100GB or some large size but thinly provisioned qcow2, then after image.sh has run in the VM, inspect the space it's actually allocated and shrink the final qcow2 to that.

I was thinking of having a separate disk attached; the VM can boot off its own disk, but doesn't install there. That seems like more work than it's worth.

This problem is really exactly the same as what's being solved with the magical #--coreos-virt-install-disk-size-gb: 8 comment in image.ks - Kickstart files are basically scripts that run inside a machine (VM or physical), they don't have any declarative way to define how big the whole disk is. In practice today in Fedora, that metadata lives in a wholly separate place which is really just broken.

Exactly, I just want something more flexible than kickstart running in the vm so we can do things it doesn't support.

To be clear and jump a little bit back on topic: I'm not sure this is a path we want to go down, I'd like to experiment with it and bundling the build scripts makes that harder since it requires rebuilding the container or having rpm-ostree on the host.

cgwalters commented on August 14, 2024

I'd like to experiment with it and bundling the build scripts makes that harder since it requires rebuilding the container or having rpm-ostree on the host.

I definitely don't rebuild the container each time I want to make a change. I just do make && sudo make install inside my existing f28 pet dev container.

It should also work to turn the existing official container into a pet/dev container by just bind mounting in -v /path/to/src/coreos-assembler:/src and doing cd /src && make && make install inside a shell or so.

If you're talking about running things on the host...well we're going to battle about that 😄 - I avoid running things on the host as much as possible on my dev desktop, servers, everywhere.

ajeddeloh commented on August 14, 2024

Pet containers (while neat and useful) shouldn't be a requirement for development. Neither should any one distro.

It should also work to turn the existing official container into a pet/dev container by just bind mounting in -v /path/to/src/coreos-assembler:/src and doing cd /src && make && make install inside a shell or so.

Thoughts on bind mounting that in by default (in the docs) and adding an option to automatically run make/make install when you run build (or making that a new command)?

What advantage is there to bundling the build script?

dustymabe commented on August 14, 2024

Pet containers (while neat and useful) shouldn't be a requirement for development. Neither should any one distro.

Not sure if I totally agree with this. Let's break it apart:

  • Any one distro should not be a requirement for development

I don't disagree with that but I don't think we need to bend over backwards to make sure this works on every distro that exists.

  • Pet containers (while neat and useful) shouldn't be a requirement for development

Agree, but pet containers (or rather containers in general) sure make it easier to run on distros that might otherwise have a hard time with the situation above, right?

cgwalters commented on August 14, 2024

adding an option to automatically run make/make install when you run build

If we do that, we need to ship our build dependencies (e.g. golang) in the image. Which, I am not opposed to...but it's going to add a good chunk more size:

# coreos-assembler shell
bash: cannot set terminal process group (8): Inappropriate ioctl for device
bash: no job control in this shell
[coreos-assembler]$ sudo su -
# yum -y install golang cargo
...
Install  22 Packages

Total download size: 206 M
Installed size: 683 M

ajeddeloh commented on August 14, 2024

I don't disagree with that but I don't think we need to bend over backwards to make sure this works on every distro that exists.

Right, weird distro-isms (e.g. differences with things like /dev/shm being a symlink, ancient packages, etc) aren't something we should bend over backwards for, but things like not having fedora-specific packages (e.g. rpm-ostree) should not make your development experience worse/slower.

Agree, but pet containers (or rather containers in general) sure make it easier to run on distros that might otherwise have a hard time with situation above, right?

That'd be a giant workaround rather than addressing the root problems. We can require a fedora environment to do a build (i.e. coreos-assembler) but we shouldn't require a fedora environment to work on the build process, which also uses a fedora environment (which in turn runs fedora in a vm :D ). I shouldn't need 3 fedora installs to build an image. Two is iffy enough in my book.

If we do that, we need to ship our build dependencies (e.g. golang) in the image. Which, I am not opposed to...but it's going to add a good chunk more size

I'm fine with that. Also if we go the route in #52 then we could drop some of that, yes?

What advantage is there to bundling the build script?

cgwalters commented on August 14, 2024

What advantage is there to bundling the build script?

If we split out the code, this repository would contain...the Dockerfile and the existing git submodules? And presumably a new git submodule for the scripts?

I don't quite understand how this would help you immediately - how would your workflow be different if src/cmd-build was in a different git repository? Is it just that in theory we could e.g. run CI which did a build of it from Debian/Gentoo/Fedora environments?

ajeddeloh commented on August 14, 2024

FWIW I'd be a big fan of having the host container have everything needed to assemble the OS, but none of the instructions, then bind mounting the configs (from the fcos config repo) and the build scripts (from wherever those would live, if they don't live in the fcos config repo).

ajeddeloh commented on August 14, 2024

I think we have different opinions on what coreos-assembler should be. I think it ought to be the host to run the build process on but not include the scripts, configs, etc. So I don't think it would even need a new submodule for the scripts, since they wouldn't be included, just like how the fcos configs aren't a submodule. I think (correct me if I'm wrong) you think it ought to be a tool that includes the build scripts where you just point it at the config and it spits out the image/tree.

I don't quite understand how this would help you immediately - how would your workflow be different if src/cmd-build was in a different git repository? Is it just that in theory we could e.g. CI which did a build of it from Debian/Gentoo/Fedora environments?

If it were in a different git repo (and bind-mounted in) I could make changes there and commit them without needing to build a new container each time. I can do that now, sorta, with some extra work (as you suggested earlier with bind mounting in the src directory and running make && make install) but changes to the build script end up as changes to the container, which means new containers for every build script change.

dustymabe commented on August 14, 2024

@ajeddeloh
Right, weird distro-isms (e.g. differences with things like /dev/shm being a symlink, ancient packages, etc) aren't something we should bend over backwards for, but things like not having fedora-specific packages (e.g. rpm-ostree) should not make your development experience worse/slower.

I guess I don't understand; rpm-ostree isn't a Fedora-specific package. It just happens to be packaged for Fedora. If a distribution doesn't have a tool in it, there is always the option to build from source, right?

I often find tools that I want to use that aren't in Fedora. I've got a few options: 1) build from source, 2) create an RPM and try to get it into Fedora, or 3) create an RPM, make a COPR, and pull from that. Is this any different from that case?

@ajeddeloh
That'd be a giant workaround rather than addressing the root problems. We can require a fedora enviroment to do a build (i.e. coreos-assembler) but we shouldn't require fedora environment to work on the build process which also uses a fedora environment (which in turns runs fedora in a vm :D ). I shouldn't need 3 fedora installs to build an image. Two is iffy enough in my book.

so let me count:

  1. coreos-assembler (fedora environment) runs rpm-ostree compose
  2. coreos-assembler runs virt-install with anaconda (fedora environment) to do an install
  3. ???

I know we are working on how we do the 2nd part. anaconda just happens to be what we use right now.

ajeddeloh commented on August 14, 2024

@dustymabe
I guess I don't understand; rpm-ostree isn't a Fedora-specific package. It just happens to be packaged for Fedora.

Sure, but as @cgwalters pointed out:

Right. And short of this use case...there's not really a compelling reason [for other distros to package it] that I can think of.

I'm talking about if you want to work on the build process itself, not just working on the OS.

  1. Fedora pet container so you can make changes without needing to do a docker build
  2. coreos-assembler container for testing your changes integrated with the official build environment
  3. fedora vm that runs anaconda

I know we are working on how we do the 2nd part. anaconda just happens to be what we use right now.

I think you're conflating the "install" that's generating the disk image that would actually be installed and the installer that actually runs on bare metal. I'm just talking about image generation here.

dustymabe commented on August 14, 2024

@dustymabe
I guess I don't understand; rpm-ostree isn't a Fedora-specific package. It just happens to be packaged for Fedora.

Sure, but as @cgwalters pointed out:

Right. And short of this use case...there's not really a compelling reason [for other distros to package it] that I can think of.

Still trying to drill down on this. Maybe we should grab each other on IRC. If rpm-ostree was being built from source as part of this process, would you consider it to be less host OS lock-in than the fact that we're installing it from an RPM? It's what we're doing for kola.

I'm talking about if you want to work on the build process itself, not just working on the OS.

  1. Fedora pet container so you can make changes without needing to do a docker build
  2. coreos-assembler container for testing your changes integrated with the official build environment
  3. fedora vm that runs anaconda

1&2 can be the same container. I'm just hacking around inside the coreos-assembler container. What I'd really like is for us to make it so that rebuilding the container is really lightweight, though. Which is why I opened #52. If we made rebuilding the container lightweight and also made it easy to bind mount stuff in and hack would it help?

I know we are working on how we do the 2nd part. anaconda just happens to be what we use right now.

I think you're conflating the "install" that's generating the disk image that would actually be installed and the installer that actually runs on bare metal. I'm just talking about image generation here.

ahh ok. yeah I was, but i'll drop that tangent so we don't lose focus

ajeddeloh commented on August 14, 2024

Still trying to drill down on this. Maybe we should grab each other in IRC.

Sure.

If rpm-ostree was being built from source as part of this process would you consider it to be less Host OS lockin than the fact that we're installing it from an RPM? It's what we're doing for kola.

I don't think that matters? I still want the coreos-assembler container to have it; I just want it so that as long as you're not changing the tools themselves (i.e. making changes to the fcos build scripts but not rpm-ostree itself) you don't need to update the whole container. The problem is that right now that's cumbersome, and so the alternative path is "do it on your host" (or a pet container, but let's not dive into that). I want to fix that. I also want to be able to work directly on the repo with the script rather than work in the container and then copy my changes over.

cgwalters commented on August 14, 2024

I am not fundamentally opposed to changes here, though I struggle with the number of git repositories we have already. Particularly since we're going to be creating another repo with the Jenkins pipelines at least I'd guess.

I don't see the scripts as entangled with the "container" much today, and that's a good thing. There's not much magic that lives in build.sh that is hard to replicate outside. It's almost just adding a default sudo rule, that's what I did in my dev container for walters.

But of course today we already have a separate git repo with scripts - that's mantle. I'm not quite sure what you're envisioning hacking on, but given you already have a mantle dev environment, you could continue using that, and then whatever new tools would just get aggregated into here?

ajeddeloh commented on August 14, 2024

though I struggle with the number of git repositories we have already.

I think that's inevitable. We could employ some sort of git wrangler like repo if things get bad. I also think we'll need to employ some sort of uniform tag/branch structure once we get multiple streams/channels going, but that's a whole other discussion.

There's not much magic that lives in build.sh that is hard to replicate outside.

Were you referring to cmd-build.sh or build.sh?

But of course today we already have a separate git repo with scripts - that's mantle

I wouldn't call mantle scripts; it's more testing and release tools (except maybe cork, which we don't use here). It's also post-build stuff; it doesn't impact what actually gets built.

I'm not quite sure what you're envisioning hacking on

cmd-build.sh is the poster child.

ajeddeloh commented on August 14, 2024

We (@cgwalters, @dustymabe, @jlebon, and I) discussed this some in #fedora-coreos today. Some takeaways from the discussion:

  • We want to support the use case of hacking on the build system itself and make that not cumbersome
  • There are several different, separable proposals:
  1. Bind mount in the build scripts so changes can be made directly to the git repo and reflected in the build container without rebuilding it
    a) Do we make that the canonical way of using coreos-assembler or just a way of hacking on the build scripts?
  2. Move the build scripts to a separate repo or into fedora-coreos-config to couple them more with the thing being built instead of the environment used. This stems from a somewhat more philosophical discussion of "should the build scripts be part of the sources or part of the build environment?"
  3. Do we include the build scripts in the build container?
  • We need to make sure whatever model we pick doesn't make coreos-assembler only useful to fedora-coreos or only useful to redhat-coreos, but also allows for the flexibility to experiment with the build system.

I'm almost certainly missing some things, so please chime in with what I missed. Above is my (hopefully) impartial summary; below is my opinion:

For fcos and rhcos we need to do weird things with how we build the images. Things like installing grub to both efi and bios, having custom grub configs, etc. These are things which anaconda does not support right now (not sure if it would make sense to upstream that). So that means we're going to need to encode those steps in a build script.

The work to implement features like automatic rollback is mostly work in the OS tree itself (i.e. the bits handled by rpm-ostree) and the build scripts to do things like install custom grub configs. This means the build scripts and the contents of the configs are inherently tied. The build scripts and the build environment are also tied, but not as strongly. The main dependency there is just having the tools available or up to date.

Separating the build scripts into either the config repo or their own repo would also make it easier for fcos to charge ahead and experiment with different ways of constructing the os while not impacting rhcos until those features are ready to be picked up (since the build scripts would belong to the rhcos config, rather than the shared coreos-assembler container).

cgwalters commented on August 14, 2024

One angle I started thinking about this from is: How are we different from any other container for which people want to do development conveniently?

One answer is that the 90% container application case is to be "pure" Go/Rust/Ruby or whatever, so building the container is basically just an app build, and if you're doing things right you're caching your dependencies.

For us we have a huge list of dependencies installed via yum, including the kernel for example. Which...we could make convenient to cache...it's just not the default in Docker-style builds.

One angle we could take on this then is to split out a separate layer with our dependencies that we build separately.
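One way to sketch that split is a multi-stage build, assuming the dependency list lives in a deps.txt as mentioned elsewhere in this thread (the image names and paths here are hypothetical, not the project's actual Dockerfile):

```dockerfile
# Hypothetical base layer holding only the dnf dependencies; rebuilt rarely.
FROM registry.fedoraproject.org/fedora:28 AS coreos-assembler-deps
COPY deps.txt /tmp/deps.txt
RUN dnf -y install $(cat /tmp/deps.txt) && dnf clean all

# The frequently-changing layer: just the scripts, on top of the cached deps.
FROM coreos-assembler-deps
COPY . /src
RUN cd /src && make && make install
```

As long as deps.txt doesn't change, rebuilds after a script edit reuse the cached dependency layer and only rerun the cheap final stage.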

dustymabe commented on August 14, 2024

ok I think I've got at least a little something that should make "hacking" configs and the (uncompiled) build scripts a little easier.

first off set some env variables that represent the locations of your git repos on your filesystem:

# export COREOS_ASSEMBLER_GIT=/var/sharedfolder/code/github.com/coreos/coreos-assembler/
# export COREOS_ASSEMBLER_CONFIG_GIT=/var/sharedfolder/code/github.com/coreos/fedora-coreos-config/

Then set an alias that has a pinch of magic in it

# alias coreos-assembler='podman run --rm --net=host -ti --privileged -v ${PWD}:/srv/ ${COREOS_ASSEMBLER_CONFIG_GIT:+-v  $COREOS_ASSEMBLER_CONFIG_GIT:/srv/src/config/:ro} ${COREOS_ASSEMBLER_GIT:+-v $COREOS_ASSEMBLER_GIT/src/:/usr/lib/coreos-assembler/:ro} --workdir /srv quay.io/cgwalters/coreos-assembler'

Then you can coreos-assembler build && hack on scripts/configs && coreos-assembler build over and over again.

The magic I mentioned above is:

  • ${COREOS_ASSEMBLER_CONFIG_GIT:+-v $COREOS_ASSEMBLER_CONFIG_GIT:/srv/src/config/:ro}
  • ${COREOS_ASSEMBLER_GIT:+-v $COREOS_ASSEMBLER_GIT/src/:/usr/lib/coreos-assembler/:ro}

Each of these basically says: if the variable is set, then insert a -v parameter mapping the location defined by the variable to the appropriate location within the container.
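That's standard shell ${var:+word} parameter expansion; a quick self-contained illustration of the same pattern (paths are made up):

```shell
# ${VAR:+word} expands to "word" when VAR is set and non-empty,
# and to nothing at all otherwise -- so an unset variable simply
# contributes no -v flag to the podman command line.
unset COREOS_ASSEMBLER_GIT
args_unset="podman run ${COREOS_ASSEMBLER_GIT:+-v $COREOS_ASSEMBLER_GIT/src/:/usr/lib/coreos-assembler/:ro} ..."

COREOS_ASSEMBLER_GIT=/code/coreos-assembler
args_set="podman run ${COREOS_ASSEMBLER_GIT:+-v $COREOS_ASSEMBLER_GIT/src/:/usr/lib/coreos-assembler/:ro} ..."

echo "$args_unset"
echo "$args_set"
```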

@ajeddeloh WDYT? should I update the readme with something like this in it ?

dustymabe commented on August 14, 2024

also note that the volume mounts are read-only, which I like; in case something in the container goes off the rails, it won't delete your repos from your host.

cgwalters commented on August 14, 2024

${COREOS_ASSEMBLER_GIT:+-v $COREOS_ASSEMBLER_GIT/src/:/usr/lib/coreos-assembler/:ro}

While it's true that today the scripts are basically installed "literally", this would box us into that. That's probably fine, but worth noting.

dustymabe commented on August 14, 2024

While it's true that today the scripts are basically installed "literally", this would box us into that. That's probably fine, but worth noting.

yeah, this wouldn't work for anything compiled. We also have to consciously put things into /src or modify the code (which wouldn't be too hard); for example, deps.txt needs to go in src today. For now I just copied it.

ajeddeloh commented on August 14, 2024

I want to shy away from things that diverge from the normal flow for development. In the CL SDK we have the idea of cros_workon, which basically flips from "build this from the last release" to "build this from the source in this directory". I'd like something similar for COSA.

New proposal (mostly deals with #1):

The container ships with built versions of all the tooling (including build scripts) installed. These tools are built inside the container as part of its build process. If you don't need to make changes to the build process, it's just ready to go.

The container also bind mounts in sources for all the tooling. A global option, dev-mode (I'm not attached to that name), is added. It rebuilds (when applicable; bash scripts don't need to be built) and reinstalls all tooling from the sources mounted in. It does the builds in the source directories (so that build artifacts persist) but installs them into the container itself, overwriting the versions in the container. If you wanted to do developer builds you would run coreos-assembler --dev build, not coreos-assembler build.

There is a little bit of a "chicken and egg" problem here in that if you need to change something about --dev mode (e.g. to add another tool) you would need to rebuild the container for the dev-mode change to be picked up. To combat this, there could be a script which gets mounted in, and the top-level script calls it if --dev is specified. This should happen first. It should then exec the top-level script again without the --dev. It's basically a hook to say "hey, rebuild your tools from source first".

Thoughts?

This doesn't impose restrictions on where the projects live either (build scripts can be in coreos-assembler or fedora-coreos-config or wherever).
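The hook shape described above might look roughly like this. Every name and path here is hypothetical; the point is only that the dev-mode behavior lives in the bind mounted hook, so changing it never requires a container rebuild:

```shell
set -eu

# Hypothetical top-level entry point: with --dev, run the hook that was
# bind mounted in from the source checkout, then continue normally with
# the remaining arguments.
coreos_assembler() {
    if [ "${1:-}" = "--dev" ]; then
        shift
        # The mounted hook rebuilds/reinstalls the tools from source first...
        sh "$DEV_HOOK"
        # ...then we fall through to the normal path.
    fi
    echo "running: ${1:-shell}"
}

# Stub standing in for the bind mounted rebuild hook (e.g. from /srv/build-src).
DEV_HOOK=$(mktemp)
echo 'echo "rebuilding tools from mounted sources"' > "$DEV_HOOK"

coreos_assembler --dev build
coreos_assembler build
```

With the hook in place, `coreos-assembler build` stays the fast default path and `--dev` pays the rebuild cost only when you opt in.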

cgwalters commented on August 14, 2024

coreos-assembler --dev build

Seems OK, but I'd also say that since it requires bind mounting in the source which one cloned externally, we could have a convention for -v /path/to/coreos-assembler.git:/srv/build-src and then detect /srv/build-src in the container.

ajeddeloh commented on August 14, 2024

With the merging of #182 I'm closing this since the core problem was addressed.
