Comments (39)
Could we generalize this enough so that we support this type of "seeding" just over HTTP? E.g. something like `coreos-assembler pull-build https://example.com/fcos`, and then we expect `https://example.com/fcos/builds/latest/meta.json` to exist. That hopefully would work with the myriad of various ways artifacts can be stored. (Though we could add magic for URLs that start with `s3://`, etc... for things that require specialized access.)
I wouldn't mind making this directly part of `build`. E.g. `coreos-assembler build --from https://example.com/fcos`?
from coreos-assembler.
Or part of `init`, e.g. `coreos-assembler init --from s3://fedora-coreos`?
But, this is only half the problem; the other half is syncing the resulting builds back out - and we need to think about how build pruning is handled.
Currently I have a script which pulls the `builds.json` and then fetches the `meta.json` from the previous build - basically setting up enough of a "skeleton" that c-a can start a new build.
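A minimal sketch of that "skeleton" seeding idea: pull `builds.json`, then the previous build's `meta.json`, so c-a has enough state to start the next build. The URL layout, schema, and function names here are assumptions for illustration, not the actual script.

```python
import json
import os
import urllib.request


def fetch_json(url):
    """Fetch and parse a remote JSON file (e.g. builds.json or meta.json)."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


def seed_skeleton(base_url, workdir, fetch=fetch_json):
    """Set up builds/builds.json and builds/$latest/meta.json under workdir.

    `fetch` is injectable so this can be exercised without network access.
    Returns the id of the seeded (previous) build, or None if none exist.
    """
    index = fetch(f"{base_url}/builds/builds.json")
    builds_dir = os.path.join(workdir, "builds")
    os.makedirs(builds_dir, exist_ok=True)
    with open(os.path.join(builds_dir, "builds.json"), "w") as f:
        json.dump(index, f, indent=2)
    if not index.get("builds"):
        return None  # nothing to seed from; cosa starts fresh
    latest = index["builds"][0]  # assuming a newest-first list of build ids
    meta = fetch(f"{base_url}/builds/{latest}/meta.json")
    build_dir = os.path.join(builds_dir, latest)
    os.makedirs(build_dir, exist_ok=True)
    with open(os.path.join(build_dir, "meta.json"), "w") as f:
        json.dump(meta, f, indent=2)
    return latest
```

Note that nothing else from the previous build (e.g. the `.qcow2`) needs to be fetched for this.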
Can we not have some persistent storage that is used? I've been setting up a PV for `/srv/`. My thoughts are that we'll have a PV for `/srv/` and then prune after archiving the artifacts from the build to a public location. I.e. the prune will keep the PV at approximately the same storage usage, but will leave the last build around for the next run.
> I've been setting up a PV for `/srv/`.
I'm not sure I'd want to take that approach for the RHCOS builds - S3 handles backups/redundancy, also provides a way to download via HTTP, can be used with CloudFront etc.
Were you thinking of running a webserver that mounts the PV to serve content?
> Were you thinking of running a webserver that mounts the PV to serve content?
Nope. I was thinking part of the pipeline would essentially archive (upload/publish/whatever you want to call it) the results of that build to a proper location and then prune N-1 in the local `/srv/` directory.
> But, this is only half the problem; the other half is syncing the resulting builds back out - and we need to think about how build pruning is handled.
Hmm, this could get complex if "prod" pruning falls within the scope of c-a. One way I was thinking about this was that in prod situations, we only care about the `latest/` output from a `coreos-assembler build`. Those artifacts are then pushed, promoted, and pruned by completely separate logic since it's highly dependent on the needs of the larger project.
> Those artifacts are then pushed, promoted, and pruned by completely separate logic
So does that align with what I was proposing to do in #159 (comment) ?
Kinda? I guess then we can think of the PV more like a local cache instead. So it could work just fine without it (and refetch whatever minimum it needs to get the next build number right) but having it allows the `fetch` phase to be faster.
> the prune will keep the PV at approximately the same storage usage, but will leave the last build around for the next run
One thing I'll note here - my current S3 scripts only download the previous build's `meta.json`. If you want to maintain ostree history then you'll also need the commit object in the `repo-build` (and `repo`). But that's the only requirement - you don't actually need the previous build's `.qcow2` locally, for example.
How are you thinking of storing the ostree repo? For RHCOS, we're using oscontainers.
> I guess then we can think of the PV more like a local cache instead.
Right.
(Although if you want to generate deltas, then you do need the previous tree's content)
Looks like I'm back to missing GitHub notifications again! The last two comments from Colin didn't come in. <sarcasm>great!</sarcasm>
@cgwalters
> How are you thinking of storing the ostree repo? For RHCOS, we're using oscontainers.
Pure OSTree repo for now.
@jlebon
> Kinda? I guess then we can think of the PV more like a local cache instead. So it could work just fine without it (and refetch whatever minimum it needs to get the next build number right) but having it allows the `fetch` phase to be faster.
Yeah, it would be nice, if your cache PV gets lost, to be able to say "start the build from the state that is at this HTTP location" - which I think is what you're trying to propose here @cgwalters?
> Yeah, it would be nice, if your cache PV gets lost, to be able to say "start the build from the state that is at this HTTP location" - which I think is what you're trying to propose here @cgwalters?
Yes. It's really important when dealing with multiple copies of data to have a strong model for what "owns" the data and what copy is canonical.
That problem is why my current pipeline doesn't have a PV, just a S3 bucket.
That said, there are obvious advantages to keeping around `cache/` in particular. The whole idea of unified-core is to take advantage of a cache like that. And like I said, if you want to do deltas, that's where having the ostree repo is necessary.
> Those artifacts are then pushed, promoted, and pruned by completely separate logic since it's highly dependent on the needs of the larger project.
Hmm. I know we want to separate c-a from "pipelines" using it, but there are clearly going to be common concerns here. My thought was that "sync to/from S3" is a script that we could carry here.
A tricky aspect of this though is my current scripts use oscontainers, not bare ostree repos.
OK, I discovered the hard way that #137 (comment) added `reverse`.
Moving on from that though... the problem with the current logic is that it assumes that I have all the build directories and `meta.json`s locally. I can do that, but it feels like what we really want is for `build --skip-prune` to just append the build to `builds.json` or so?
> OK, I discovered the hard way that #137 (comment) added `reverse`.
Ahh sorry about not pointing this out more in the patch!
> Moving on from that though... the problem with the current logic is that it assumes that I have all the build directories and `meta.json`s locally. I can do that, but it feels like what we really want is for `build --skip-prune` to just append the build to `builds.json` or so?
(I guess that'd be "prepend" now).
Hmm, yeah, that's tricky. I think tying this to `--skip-prune` makes sense though. I do wonder if we need a higher-level global var/stamp file instead for the "partially synced" state that we can key off from overall.
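The "just prepend the build to `builds.json`" idea can be sketched as pure list manipulation - no assumption that older build directories exist locally. The schema and function name here are illustrative, not cosa's actual implementation:

```python
import json


def prepend_build(index_json, new_build_id):
    """Return builds.json content with new_build_id prepended (newest-first)."""
    index = json.loads(index_json)
    builds = index.get("builds", [])
    if new_build_id in builds:
        builds.remove(new_build_id)  # avoid duplicates if a build is re-run
    index["builds"] = [new_build_id] + builds
    return json.dumps(index, indent=2)
```

The point is that this operation only needs the index file itself, which is what makes it workable in a "partially synced" workdir.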
Related: coreos/rpm-ostree#1704
Been thinking about this again now that I'm working on adding S3 capabilities to the FCOS pipeline.
One thing I'm thinking is whether cosa should in fact consider the `repo/` a purely "workdir" concept, possibly even moving it under `cache/`, and instead include a tarball of the oscontainer in the build dir. Right now, there's this odd split of what constitutes a build: it's `builds/$id` + the full OSTree commit in the repo. Putting it all under `builds/$id` means it now fully describes the build.
This also resolves the major issue of pruning. With the current scheme, pruning the OSTree repo is problematic:
- For example, see `prune_builds`: right now pruning essentially does "find the oldest build we need to keep, then prune everything else older than that". This doesn't play well with tagging, since we're then keeping all the builds since the oldest tag. We could enhance the OSTree pruning API for this, but it's never going to be as clean as `rm -rf builds/$id`.
- In a prod context, one has to sync the whole repo to be able to prune it with `coreos-assembler prune`. Which is not realistic if you have an ever-growing list of builds and have limited cache storage (related: coreos/fedora-coreos-pipeline#38). Again, this is something we could solve with more complex code, but which would just melt away if build dirs were self-contained (essentially, one would just need to sync `builds.json` & the `meta.json` files to know which build dirs to nuke from the remote/S3).
- Related to the above, it'd be much easier in general for any higher-level pruning code outside cosa to interact with directories & JSON files rather than OSTree repos.
Note this is completely independent of how updates are actually pushed out to client machines. We can use oscontainers to store OSTree content but still unpack it and `ostree pull-local` into the prod repo (i.e. `coreos-assembler oscontainer extract`). Deltas could also be calculated at that time.
Also, local workdirs would still have an archive repo for e.g. testing upgrades and inspecting the compose.
The main downside of course is space efficiency. But in the devel case, we cap at 3 builds by default, and in the prod case, we upload in e.g. S3, where 600M more is not really an issue. (And of course, space is still saved in the prod repo itself).
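To make the "pruning becomes `rm -rf builds/$id`" argument concrete, here is a hedged sketch of the policy with self-contained build dirs: keep the last N builds plus any tagged builds, and everything else is safe to delete, with no OSTree repo involvement. The names are hypothetical:

```python
def builds_to_prune(builds, keep_last, tagged):
    """Given a newest-first list of build ids, return the ids safe to rm -rf.

    Tagged builds are kept even when older than the retention window - the
    case that's awkward to express through the OSTree pruning API today.
    """
    keep = set(builds[:keep_last]) | set(tagged)
    return [b for b in builds if b not in keep]
```

Each returned id maps directly to one `builds/$id` directory (or S3 prefix) to remove, plus an entry to drop from `builds.json`.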
> and instead include a tarball of the oscontainer in the build dir.
Tarball of ostree repo or actually a container image tarball with ostree repo inside? I'd lean towards the former.
I think the main tricky thing here is whether we try to preserve ostree history - do the commits have parents? Maybe it's simplest to not have parents. If we go that route... perhaps e.g. rpm-ostree should learn how to parse cosa builds so `rpm-ostree deploy` works?
> Note this is completely independent of how updates are actually pushed out to client machines. We can use oscontainers to store OSTree content but still unpack it and `ostree pull-local` into the prod repo (i.e. `coreos-assembler oscontainer extract`). Deltas could also be calculated at that time.
Ah...so are you thinking that devel builds are promoted to prod? And that'd include both ostree and images naturally?
> Tarball of ostree repo or actually a container image tarball with ostree repo inside? I'd lean towards the former.
Yeah, that's fine. My initial thought was oscontainer since we have code that exists today for this. But yeah, we should definitely discuss bundling formats if we go this way.
> Ah... so are you thinking that devel builds are promoted to prod? And that'd include both ostree and images naturally?
OK right, let's tie this now with the current plans for FCOS stream tooling.
First, specifically to your question: promotions imply rebuilds right now (i.e. promoting from `testing` to `stable` means doing some kind of custom `git merge` and then triggering a new `stable` build).
The way FCOS is shaping up, we'll be storing build artifacts in S3, but we'll be dealing with two OSTree repos, the main prod repo (at https://ostree.fedoraproject.org/) for the prod refs and an "annex" repo for the mechanical & devel refs.
There are 9 separate streams in total (let's not bring multi-arch into this yet...). A reasonable assumption here is that we want to be able to execute builds on those streams concurrently. This would imply e.g. 9 separate cosa "build root" dirs in the bucket, each with their own `builds/` dir.
From a multi-stream perspective, having a separate `ostree/` repo per stream doesn't really make sense. It's much easier to manage and interact with fewer repos that hold multiple refs. This in itself is good motivation for keeping OSTree content in the build dir.
My strawman right now is:
- `cosa build` tarballs OSTree content into build dirs
- each stream has a separate build dir in the bucket (e.g. `s3://fcos-builds/streams/$stream/builds/$buildid`)
- a service watches for new builds across non-prod streams and pulls in new OSTree content into the annex repo
- a service watches for new builds across prod streams and pulls in new OSTree content into the prod repo
OSTree repos and build dirs can then be pruned according to different policies, which I think makes sense since one is about first installs, while the other is about upgrades. (E.g. if Cincinnati uses the OSTree repo to build its graph, then it would make sense to keep OSTree content for much longer).
We definitely lose on network efficiency here by downloading the full tarball to update the ref even if just a few files changed. I think that tradeoff is worth it though.
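The strawman layout above can be sketched as two tiny helpers: one computing the per-stream build prefix in the bucket, and one routing a stream's new OSTree content to the annex or prod repo. The stream names and the prod/non-prod split here are assumptions for illustration:

```python
# Hypothetical split of prod vs. mechanical/devel streams.
PROD_STREAMS = {"stable", "testing", "next"}


def build_prefix(stream, build_id):
    """S3 prefix holding one build of one stream (layout from the strawman)."""
    return f"s3://fcos-builds/streams/{stream}/builds/{build_id}"


def target_repo(stream):
    """Which OSTree repo the sync service should pull this stream's content into."""
    return "prod" if stream in PROD_STREAMS else "annex"
```

A watcher service would then just poll each stream's `builds/builds.json` under its prefix and pull new content into `target_repo(stream)`.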
> service watches for new builds across prod streams and pulls in new OSTree content into the prod repo
This might be overly simplistic. We may want to gate this on the release process instead of making it automatic.
> I think the main tricky thing here is whether we try to preserve ostree history - do the commits have parents? Maybe it's simplest to not have parents. If we go that route... perhaps e.g. rpm-ostree should learn how to parse cosa builds so `rpm-ostree deploy` works?
Hmm, that's an interesting question. Another casualty of not preserving history, apart from `ostree log` and `rpm-ostree deploy`, is that it might also make pruning more complicated.
I think I agree though that it's cleaner for OSTree commits cosa creates to not have parents. E.g. we might recompose a prod stream multiple times and not necessarily publish all the commits to prod.
One thing we could do is "graft" the commit onto the ref, preserving history, as part of the service that syncs OSTree content? We wouldn't have the same commit checksum, but still the same content checksum.
> My initial thought was oscontainer since we have code that exists today for this
The other option is rojig...one powerful advantage of that is that it can easily be used again as input to a build to regenerate it (possibly with some targeted changes).
Also on this topic personally I've been playing with https://git-annex.branchable.com/ a lot lately - one thing to consider that could make a lot of sense is to commit cosa builds into it - if we included the input RPMs (and to follow on the previous comment, the rojig rpm) we'd have everything nicely versioned. It also gives us an abstraction layer that e.g. supports syncing to s3, but also other backends.
wow. lots of good discussion here. /me just catching up. I have a few comments/questions:
> My strawman right now is:
>
> * `cosa build` tarballs OSTree content into build dirs
> * each stream has a separate build dir in the bucket (e.g. `s3://fcos-builds/streams/$stream/builds/$buildid`)
> * service watches for new builds across non-prod streams and pulls in new OSTree content into the annex repo
> * service watches for new builds across prod streams and pulls in new OSTree content into the prod repo
This all sounds mostly good I think. My original thought was to just store the ostree repo for that one commit itself under the build dir. Why do a tarball instead? It would be cool if we could `rpm-ostree rebase` to a remote tarball (or oscontainer sitting on an http server) for debugging purposes, but we can't, right?
> I think I agree though that it's cleaner for OSTree commits cosa creates to not have parents. E.g. we might recompose a prod stream multiple times and not necessarily publish all the commits to prod.
I think our FCOS commits our end users get should have parents. Whether COSA maintains that information or we inject it later as part of the release process is up for debate, though.
> I think our FCOS commits our end users get should have parents.
Yeah, I definitely agree with this. I mentioned this higher up:
> One thing we could do is "graft" the commit onto the ref, preserving history, as part of the service that syncs OSTree content? We wouldn't have the same commit checksum, but still the same content checksum.
Losing the commit checksum match is significant though. But I think it's worth exploring focusing on the content checksum instead? Doing this also allows us to inject commit metadata at other steps in the pipeline rather than at `cosa build` time (e.g. which stream the commit belongs to, or storing release metadata directly as discussed in #189 rather than detached).
The theme here is to make `cosa build` dumber and more self-contained so it doesn't put as many constraints on the release process and build storage.
> My original thought was to just store the ostree repo for that one commit itself under the build dir. Why do a tarball instead? It would be cool if we could `rpm-ostree rebase` to a remote tarball (or oscontainer sitting on an http server) for debugging purposes, but we can't, right?
I was thinking we'd use the annex/prod repos to do this. E.g. the commit from stream X has its active remote set to either the annex or prod repo. So then `rpm-ostree deploy/upgrade/rebase` should just work OOTB even on non-prod systems. (Though this is also related to coreos/fedora-coreos-tracker#163 - ideally, we should be able to use & test the same UX on non-prod systems as we do on prod.)
That said, focusing on the storage format itself in the build dir, I think keeping it as a directory would also work. It would definitely simplify the sync services and would also remove the need for keeping the `ostree-commit-object` separately. Other than potentially making syncing build dirs slightly more cumbersome, I think the only (subjective) downside is that having a directory in the build dir is less clean as an output artifact than just one blob. Open to trying it out though!
Any comments or even just "gut reaction" to #159 (comment) ?
I do like the idea of considering the archive repository as a cache. It seems valid to me to implement pruning for example by creating a new archive repo periodically and importing the builds we want.
One thing I noticed offhand is that `oc image mirror` claims to support targeting S3... but I couldn't get it to work. We could clearly fix that or do a new implementation though.
> One thing we could do is "graft" the commit onto the ref, preserving history, as part of the service that syncs OSTree content? We wouldn't have the same commit checksum, but still the same content checksum.
Not really following this bug, but I happened to see this comment. This is what `flatpak build-commit-from` does (Source). Basically it takes a commit (or set of commits) from one repo and re-applies them on another, while keeping the content checksum and any commit metadata, as well as rewriting existing (typically from-scratch) delta files for the commit.
We use this to great effect in Flathub. For example, each build of an app starts from scratch; then we upload from each arch build machine onto a shared machine and graft all the builds on top of the existing history, also re-setting the timestamp to a shared, guaranteed-to-be-increasing value.
Even if you don't use this in the assembler, I would recommend that ostree grow an operation like this; it is extremely useful.
Also, I do have a service called flat-manager which does the repo management, including uploading a build and grafting it on top of the repo. It's somewhat flatpak-specific, but mostly generic ostree.
@jlebon
> Losing the commit checksum match is significant though. But I think it's worth exploring focusing on the content checksum instead? Doing this also allows us to inject commit metadata at other steps in the pipeline rather than at `cosa build` time (e.g. which stream the commit belongs to, or storing release metadata directly as discussed in #189 rather than detached).
I think that is fine and will probably help us be more flexible in the future. It would be nice if we promoted the content checksum to be more pronounced in the `rpm-ostree status` output (maybe we use the short hash for both the commit checksum and the content checksum so we can include them both, then expand them in the `--verbose` output).
@jlebon
> That said, focusing on the storage format itself in the build dir, I think keeping it as a directory would also work. It would definitely simplify the sync services and would also remove the need for keeping the `ostree-commit-object` separately. Other than potentially making syncing build dirs slightly more cumbersome, I think the only (subjective) downside is that having a directory in the build dir is less clean as an output artifact than just one blob. Open to trying it out though!
Yeah, I'm not sure of the best solution here, but my gut reaction was an OSTree repo because that can easily be rebased to without an intermediate step. What are the planned consumers of this? If it's just for us then we can change it later if we decide one solution is better.
@cgwalters
> Any comments or even just "gut reaction" to #159 (comment)?
I don't know if that was directed at @jlebon or if you're asking us all. My gut reaction without thinking deeply about it is: while I love git-annex for my personal use today I'd prefer to keep this particular solution simple for now and not introduce a new technology.
@cgwalters
> Any comments or even just "gut reaction" to #159 (comment)?
Hmm, I only recently learned about git-annex, but I think I can see the appeal for the use case here (esp. the syncing-to-S3 bits being taken care of, and the partial checkout part for `buildprep`).
Though offhand I'm not sure about:
- the complexity it introduces to all the clients that need to interact with the S3 artifacts.
- how it interacts with pruning... i.e. when you really want a build dropped from the object store (this seems to indicate you'd remove the `git-annex` branch then `git annex fsck` before re-adding things?).
Overall though I would agree with Dusty about keeping things simple to start.
@alexlarsson
> Not really following this bug, but I happened to see this comment. This is what `flatpak build-commit-from` does
Cool, thanks for bringing this up! Will take a look at it.
@dustymabe
> What are the planned consumers of this? If it's just for us then we can change it later if we decide one solution is better.
Yeah we should be able to change this (or maybe to reword: we should strive to set things up so that we can change the layout of artifacts without affecting the interface to consumers).
Anyway, in the interest of making a better informed decision, I took both strategies for a spin by actually uploading/downloading to/from S3. Here are the highlights:
- As previously mentioned in #515, given that the repo objects are already individually compressed, there isn't a significant difference size-wise (added some compressed tarballs as well for fun):
```
$ du -sh ostree*
624M    ostree
568M    ostree.tar
543M    ostree.tar.gz
542M    ostree.tar.xz
```
- Uploading to S3 is generally faster as a tarball, but not actually by a significant margin in absolute terms (e.g. 1m6s for the tarball vs 1m32s for the directory, though I did bump `max_concurrent_requests` to 100). Reuploading the same tarball is much faster though, likely because AWS is deduping in the background (that's the case where e.g. the OSTree content didn't change but `image.yaml` changed, which I don't expect we'll actually hit very often in practice).
- It comes as no surprise that doing any bulk operation is slower on the directory. Though it did surprise me just how much slower it is. (And of course, it's pretty much instant for the single tar file.) E.g. an `aws s3 rm` takes ~2m. There's lots of overhead there in API calls. There is no "DELETE everything recursively" API, so you have to do multiple calls. At least for `rm`, you can batch by 1000 at a time. OTOH, recursively changing all the objects' ACLs is one API call per object, which makes it excruciatingly slow. Similarly, e.g. `aws s3 mv` is very slow.
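The "batch by 1000 at a time" behaviour comes from S3's DeleteObjects call accepting at most 1000 keys per request, so a recursive delete costs ceil(n/1000) API calls rather than one. A sketch of the batching (with boto3 this would be `s3.delete_objects(Bucket=..., Delete={"Objects": [{"Key": k} for k in batch]})` per batch; that call is real, the surrounding helper is illustrative):

```python
def chunked(keys, size=1000):
    """Split a list of object keys into S3 DeleteObjects-sized batches."""
    return [keys[i:i + size] for i in range(0, len(keys), size)]
```

This is also why per-object operations (like recursive ACL changes) degrade so badly on a many-file repo directory compared to one tarball.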
So overall, I now think the tarball approach is indeed the better one. This will slightly complicate some things, but will make managing the bucket much easier.
Another strategy which Colin suggested in #515 was using static deltas instead, which would be a nice compromise between the two options. Reasonable amount of files, yet still directly pull-able. The downside though is that computing static-deltas is expensive, and I don't want that to be in the default developer path.
A hybrid approach is to default to a regular repo, and only do the static-delta conversion at `cosa compress` time. But the fact that the main repos will also be archive repos, and that you can't directly use deltas when targeting an archive repo, makes this actually awkward to use.
Anyway, all things considered, I'm going to update #515 again to use tarballs unless there are other ideas. Again, remember that ideally you wouldn't normally directly consume this tarball e.g. from a host. Rather you'd use the OSTree repos that are synced from those builds.
> Anyway, all things considered, I'm going to update #515 again to use tarballs unless there are other ideas.
+1
@cgwalters I've used git-annex for a few years and I like it, but it's a complex external dependency that could also be unpleasant to automate.
> I think the main tricky thing here is whether we try to preserve ostree history - do the commits have parents? Maybe it's simplest to not have parents. If we go that route... perhaps e.g. rpm-ostree should learn how to parse cosa builds so `rpm-ostree deploy` works?
>
> Hmm, that's an interesting question. Another casualty of not preserving history, apart from `ostree log` and `rpm-ostree deploy`, is that it might also make pruning more complicated.
>
> I think I agree though that it's cleaner for OSTree commits cosa creates to not have parents. E.g. we might recompose a prod stream multiple times and not necessarily publish all the commits to prod.
Some more thoughts about this.
While I definitely like the idea conceptually behind keeping cosa OSTree commits independent, I think there's a lot of friction in moving away from maintaining OSTree history. We mentioned above some casualties: `ostree log`, `rpm-ostree deploy`, and `ostree prune`. I'll just go into some details on those to give more context.
If we deliver independent OSTree commits, then the OSTree ref will always point at the latest commit only. This in turn means that for Zincati to be able to safely upgrade hosts, it will need to use e.g. `rpm-ostree rebase fedora:<SHA256>` instead of `deploy <SHA256>` (which by default ensures that the new commit is on the same branch). And this in turn means that `rpm-ostree status` no longer shows the ref the system is on, but rather just the SHA256 (and version, which to be fair is how it is in RHCOS today). But this also means that a manual `rpm-ostree upgrade` would no longer work (which is irrelevant in RHCOS but not FCOS).
As for pruning, any commit older than the latest one will be "orphaned", which means that the default `ostree prune --refs-only` will delete them. So we would have to enhance `ostree prune` so it can take e.g. a set of protected commits... awkward, and prone to mishaps.
One thought I had on those two issues is that we could use the ref-binding work in OSTree. This is something we can do because we always rebuild on promotions. So e.g. `deploy <SHA256>` could learn to accept the commit if it has a matching ref binding. Similarly, `ostree prune` could learn a `--protect-ref-binding` which just omits commits with a given ref binding. `deploy <VERSION>` and `ostree log` would still be broken though.
> One thing we could do is "graft" the commit onto the ref, preserving history, as part of the service that syncs OSTree content? We wouldn't have the same commit checksum, but still the same content checksum.
The issue with "grafting" is that we're not just delivering OSTrees, we're delivering whole OS images with OSTree commits embedded in them (and then signing those, see coreos/fedora-coreos-tracker#200 (comment)). So any discussion around a grafting strategy needs to address this.
At the same time, I don't want to go down the path of FAH, where when releasing an OSTree, we also "release" (make public) all the intermediate OSTree commits since the last release. This is essentially implementation details leaking into our release process.
For FCOS, we could improve greatly on this though by explicitly passing the last released build to `cosa build` so that all builds have the latest release as their parent. So the builds are still independent from each other, but just not from the release process (which is already the case when you think about e.g. streams, versioning, and promotions).
So my conclusion on this is that while we could fully move away from maintaining OSTree history, it will require some amount of non-trivial work. But we need a solution for right now (i.e. for the next FCOS build we want to release). My suggestion is to enhance `cosa build` as mentioned above, while we evaluate (1) whether this is something that we want to do, and (2) how we want to rework our tools to do it.
I think if we do it right, it could turn out really well. (E.g. a completely different way is abstracting away the OSTree repo and going along Colin's suggestion to make rpm-ostree aware of cosa (or rather FCOS) metadata. The result could be a richer, more meaningful UX).
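The "pass the last released build to cosa build" suggestion reduces to a simple lookup: take the OSTree commit of the most recent release and use it as the new commit's parent. The releases.json-style schema and field names below are purely assumptions for illustration:

```python
def parent_commit(releases):
    """Pick the parent for the next build's OSTree commit.

    `releases` is assumed to be a newest-first list of dicts like
    {"build": "<id>", "ostree-commit": "<sha256>"}. Returns None for the
    very first release, whose commit legitimately has no parent.
    """
    if not releases:
        return None
    return releases[0]["ostree-commit"]
```

Intermediate (unreleased) builds then never appear in the published history, while every released commit still parents the previous release.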
OK, I've put up #625.
> At the same time, I don't want to go down the path of FAH, where when releasing an OSTree, we also "release" (make public) all the intermediate OSTree commits since the last release. This is essentially implementation details leaking into our release process.
+1 - we just never got around to making that cleaner
> For FCOS, we could improve greatly on this though by explicitly passing the last released build to `cosa build` so that all builds have the latest release as their parent. So the builds are still independent from each other, but just not from the release process (which is already the case when you think about e.g. streams, versioning, and promotions).
+100 - I really like that
Feels like we can probably close this issue at this point?
Closing this as per last comment.