On 10/10/2018 10:18 AM, Dusty Mabe wrote:
Right after the community meeting today (17:30 UTC) we are going to discuss
some of the design and strategy for CoreOS Assembler going forward. The
discussion will mostly be around an issue in the issue tracker.
We'll use BlueJeans for video conferencing. BlueJeans should be able
to run via HTML5 in Firefox or Chrome. It shouldn't require a plugin
to be installed, but YMMV. The link for the meeting is . We'll use
an etherpad for discussion.
Very rough notes from the meeting are below. Thanks to everyone who
was able to join!
The recording link is here: https://bluejeans.com/s/lyeNK
2018-10-10 CoreOS Assembler Discussion
[What is CoreOS Assembler?]
- today we have multiple build systems (pungi for example is painful to run locally)
- coreos-assembler is an attempt to bundle the tools together in a container
- if you have the container and a config you can produce artifacts
Commands that operate on a "build directory":
build - build an ostree and bootable qcow
clean - delete build artifacts
fetch - fetch the latest pkgs
init - prepare directory for use with coreos-assembler
prune - delete old build artifacts
run - boot last generated qcow image
shell - grab a shell in the coreos-assembler container environment
oscontainer - ?
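The command set above suggests a container-wrapped workflow. A minimal sketch of how it might be driven, assuming a podman-based setup, a /srv mount convention, and the quay.io/coreos-assembler/coreos-assembler image name (all assumptions for illustration, not confirmed by these notes):

```shell
# Hypothetical wrapper for running coreos-assembler subcommands via podman.
# Image name, /srv mount, and /dev/kvm passthrough are assumptions.
cosa() {
    podman run --rm -ti \
        -v "${PWD}:/srv" --workdir /srv \
        --device /dev/kvm \
        quay.io/coreos-assembler/coreos-assembler "$@"
}

# Typical flow against a build directory:
#   cosa init <config-git-repo>   # prepare directory for use
#   cosa fetch                    # fetch the latest pkgs
#   cosa build                    # build an ostree and bootable qcow
#   cosa run                      # boot the last generated qcow image
```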
- building artifacts (ostree, oscontainer, qcow)
- publishing (uploading to ec2, unless we break mantle out into separate container)
* <slowrie> there's a difference between publishing publicly and pushing
developer (non-public) images, which would use different tooling (ore vs plume)
- testing (kola)
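As a concrete example of the testing point, kola (part of mantle) could be pointed at the qcow produced by `build`. A hedged sketch; the flag names and image path are assumptions and worth verifying against `kola run --help`:

```shell
# Sketch: run kola tests locally under qemu against a built image.
# The --platform/--qemu-image flags and the path are illustrative only.
run_kola_qemu() {
    local image="$1"
    kola run --platform qemu --qemu-image "${image}"
}

# e.g. run_kola_qemu ./builds/latest/coreos.qcow2
```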
[differences between CL SDK and CoreOS Assembler today]
- coreos-assembler is more like a tool wrapped in a container rather than a development
environment
- with the CL SDK you can do development entirely inside the SDK; i.e. in CoreOS Assembler you
don't spend time inside the container
- CL SDK build scripts and configs are versioned completely inside the SDK
- in CoreOS Assembler today the build scripts are part of the container itself
[choosing output artifacts]
[consider separating host bits from build root bits]
- what is the build environment? (in container) all the rpms and software inside of the
container
- what are the build scripts? (in container) the software in the container
- what are build configs? (not in container)
How should we separate these out for the best solution?
- production builds
- local builds
- hacking on the coreos-assembler
[Options/Questions from discussions today]
- keep mantle container builds separate (in a separate container and just use it)
- ngompa and ajeddeloh advocate for keeping things separate
- shouldn't be so tightly coupled that you can't reuse pieces for different purposes
- dustymabe advocates keeping pieces together in a container with the ability to call them
individually
- general agreement on points above ^^
- lucab suggests a 'repo-manager' functionality similar to what exists in CL SDK
- used for fetching git repos and preparing said bits
- use jiri to keep/perform all git-fetches in a single place?
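If jiri were used to centralize the git-fetches, the repos would be declared in an XML manifest. A rough sketch of what that could look like; the project name, path, and remote below are invented for illustration:

```
<manifest>
  <projects>
    <!-- hypothetical entry; name/path/remote are illustrative only -->
    <project name="fedora-coreos-config"
             path="src/config"
             remote="https://github.com/coreos/fedora-coreos-config"/>
  </projects>
</manifest>
```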
- ajeddeloh proposes splitting out 'mantle' into a separate container
- ajeddeloh proposes splitting out the build scripts to allow for rebuilding of the actual
[LB oob notes]
* build coreos-assembler locally too? Or set up auto-building on quay/OS/etc for master?
(maybe it is already and I missed it)
* somehow /dev/kvm manipulation leaks to the host? (Debian perm mismatch after `build`,
seen once and I'm unsure)
* let's try get to an empty 'postprocess'?
* Does rpm-ostree-compose have an assumption that build-env == target-env? Based on:
- Running post scripts... systemd-libs.post: Detected system with nfsnobody defined,
<walters> Nope, that's talking about the target root - ack
* Plans for matching the bits from the one below with the bits the container is consuming?