
buildstream's Introduction


What is BuildStream?

BuildStream is a powerful software integration tool that allows developers to automate the integration of software components, including entire operating systems, and to streamline the software development and production process.

Some key capabilities of BuildStream include:

  • Defining software stacks in a declarative format: BuildStream allows users to define the steps required to build and integrate software components, including fetching source code and building dependencies.
  • Integrating with version control systems: BuildStream can be configured to fetch source code from popular source code management solutions such as GitLab, GitHub and Bitbucket, as well as a range of non-Git technologies.
  • Supporting a wide range of build technologies: BuildStream supports a wide range of technologies, including key programming languages like C, C++, Python, Rust and Java, as well as many build tools including Make, CMake, Meson, distutils, pip and others.
  • Ability to create outputs in a range of formats: e.g. Debian packages, Flatpak runtimes, sysroots and system images, for multiple platforms and chipsets.
  • Flexible architecture: BuildStream is designed to be flexible and extensible, allowing users to customize their build and integration processes to meet their specific needs and tooling.
  • Enabling fast and reliable software delivery: Through extensive use of sandboxing techniques and the ability to distribute builds, BuildStream helps teams deliver high-quality software faster.

Why should I use BuildStream?

BuildStream offers the following advantages:

  • Declarative build instructions/definitions

    BuildStream provides a flexible and extensible framework for the modelling of software build pipelines in a declarative YAML format, which allows you to manipulate filesystem data in a controlled, reproducible sandboxed environment.

  • Support for developer and integrator workflows

    BuildStream provides traceability and reproducibility for integrators handling stacks of hundreds/thousands of components, as well as workspace features and shortcuts to minimise cycle-time for developers.

  • Fast and predictable

    BuildStream can cache previous builds and track changes to source file content and build/config commands. BuildStream only rebuilds the things that have changed.

  • Extensible

    You can extend BuildStream to support your favourite build-system.

  • Bootstrap toolchains and bootable systems

    BuildStream can create full systems and complete toolchains from scratch, for a range of ISAs including x86_32, x86_64, ARMv7, ARMv8, MIPS.
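The "fast and predictable" point above boils down to content-addressed cache keys. A minimal sketch of the idea (illustrative only, not BuildStream's actual key algorithm):

```python
import hashlib

def element_key(source_ref, build_commands, dep_keys):
    # A stable hash over everything that can affect the build output:
    # if none of these inputs change, a cached artifact is still valid
    # and the element does not need to be rebuilt.
    blob = repr((source_ref, tuple(build_commands), tuple(sorted(dep_keys))))
    return hashlib.sha256(blob.encode()).hexdigest()
```

Because a dependency's key feeds into the keys of everything that depends on it, a change deep in the stack transparently invalidates all the artifacts built on top of it.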

How do I use BuildStream?

Please refer to the documentation for information about installing BuildStream, and about the BuildStream YAML format and plugin options.

How does BuildStream work?

BuildStream operates on a set of YAML files (.bst files), as follows:

  • Loads the YAML files which describe the target(s) and all dependencies.
  • Evaluates the version information and build instructions to calculate a build graph for the target(s) and all dependencies and unique cache-keys for each element.
  • Retrieves previously built elements (artifacts) from a local/remote cache, or builds the elements in a sandboxed environment using the instructions declared in the .bst files.
  • Transforms/configures and/or deploys the resulting target(s) based on the instructions declared in the .bst files.
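The steps above can be sketched as a small Python model (a simplification; the names and data shapes here are hypothetical, not BuildStream internals):

```python
import hashlib

def cache_key(element, keys):
    # The key covers the element's own inputs plus its dependencies' keys,
    # so a change anywhere below an element changes that element's key too.
    data = repr((element["sources"], element["commands"],
                 sorted(keys[d] for d in element["depends"])))
    return hashlib.sha256(data.encode()).hexdigest()

def build(target, elements, artifacts):
    keys = {}

    def resolve(name):
        # Depth-first: every dependency is keyed and built (or pulled)
        # before the elements that depend on it.
        for dep in elements[name]["depends"]:
            resolve(dep)
        keys[name] = cache_key(elements[name], keys)
        if keys[name] not in artifacts:
            # A real implementation would first try pulling the artifact
            # from a remote cache, then fall back to a sandboxed build.
            artifacts[keys[name]] = "built {}".format(name)

    resolve(target)
    return keys[target]
```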

How can I get started?

To get started, first install BuildStream by following the installation guide and then follow our tutorial in the user guide.

We also recommend exploring some existing BuildStream projects.

If you have any questions, please ask in our #buildstream channel on irc.gnome.org.


buildstream's Issues

failed to upgrade buildstream at 640a734ec1db923b5

See original issue on GitLab
In GitLab by [Gitlab user @devcurmudgeon] on May 7, 2017, 12:49

admin@ip-172-31-2-103:~/src/buildstream$ pip3 install --user .
Unpacking /home/admin/src/buildstream
  Running setup.py (path:/tmp/pip-j55sp20f-build/setup.py) egg_info for package from file:///home/admin/src/buildstream
    your setuptools is too old (<12)
    setuptools_scm functionality is degraded
    zip_safe flag not set; analyzing archive contents...
    
    Installed /tmp/pip-j55sp20f-build/pytest_runner-2.11.1-py3.4.egg
    
  Requirement already satisfied (use --upgrade to upgrade): BuildStream==0.1 from file:///home/admin/src/buildstream in /home/admin/.local/lib/python3.4/site-packages
Requirement already satisfied (use --upgrade to upgrade): setuptools in /usr/lib/python3/dist-packages (from BuildStream==0.1)
Requirement already satisfied (use --upgrade to upgrade): psutil in /home/admin/.local/lib/python3.4/site-packages (from BuildStream==0.1)
Requirement already satisfied (use --upgrade to upgrade): ruamel.yaml in /usr/lib/python3/dist-packages (from BuildStream==0.1)
Requirement already satisfied (use --upgrade to upgrade): pluginbase in /home/admin/.local/lib/python3.4/site-packages (from BuildStream==0.1)
Requirement already satisfied (use --upgrade to upgrade): Click in /home/admin/.local/lib/python3.4/site-packages (from BuildStream==0.1)
Requirement already satisfied (use --upgrade to upgrade): blessings in /home/admin/.local/lib/python3.4/site-packages (from BuildStream==0.1)
Requirement already satisfied (use --upgrade to upgrade): typing in /home/admin/.local/lib/python3.4/site-packages (from ruamel.yaml->BuildStream==0.1)
Cleaning up...
admin@ip-172-31-2-103:~/src/buildstream$ pip3 install --user . --upgrade
Unpacking /home/admin/src/buildstream
  Running setup.py (path:/tmp/pip-x7xvew4u-build/setup.py) egg_info for package from file:///home/admin/src/buildstream
    your setuptools is too old (<12)
    setuptools_scm functionality is degraded
    zip_safe flag not set; analyzing archive contents...
    
    Installed /tmp/pip-x7xvew4u-build/pytest_runner-2.11.1-py3.4.egg
    
Downloading/unpacking setuptools from https://pypi.python.org/packages/27/45/79618f80704497f74f2de1ca62216a5c3ffdbd49f43047c81c30e126a055/setuptools-35.0.2-py2.py3-none-any.whl#md5=54a3dac8fe9b912bb884a485d9a2e9cb (from BuildStream==0.1)
  Downloading setuptools-35.0.2-py2.py3-none-any.whl (390kB): 390kB downloaded
Requirement already up-to-date: psutil in /home/admin/.local/lib/python3.4/site-packages (from BuildStream==0.1)
Downloading/unpacking ruamel.yaml from https://pypi.python.org/packages/5a/86/8df701c9de786f25f5f290d1e0b63374becd79149298b86527ebae83f130/ruamel.yaml-0.14.11.tar.gz#md5=9db79fd50c560c2fc38a511354fabd65 (from BuildStream==0.1)
  Downloading ruamel.yaml-0.14.11.tar.gz (242kB): 242kB downloaded
  Running setup.py (path:/tmp/pip-build-koos95yw/ruamel.yaml/setup.py) egg_info for package ruamel.yaml
    /usr/lib/python3.4/importlib/_bootstrap.py:321: UserWarning: Module _ruamel_yaml was already imported from /usr/lib/python3/dist-packages/_ruamel_yaml.cpython-34m-x86_64-linux-gnu.so, but /tmp/pip-build-koos95yw/ruamel.yaml is being added to sys.path
      return f(*args, **kwds)
    Traceback (most recent call last):
      File "<string>", line 17, in <module>
      File "/tmp/pip-build-koos95yw/ruamel.yaml/setup.py", line 858, in <module>
        main()
      File "/tmp/pip-build-koos95yw/ruamel.yaml/setup.py", line 847, in main
        setup(**kw)
      File "/usr/lib/python3.4/distutils/core.py", line 108, in setup
        _setup_distribution = dist = klass(attrs)
      File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 266, in __init__
        _Distribution.__init__(self,attrs)
      File "/usr/lib/python3.4/distutils/dist.py", line 280, in __init__
        self.finalize_options()
      File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 300, in finalize_options
        ep.require(installer=self.fetch_build_egg)
      File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2201, in require
        reqs = self.dist.requires(self.extras)
      File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2401, in requires
        dm = self._dep_map
      File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2457, in __getattr__
        raise AttributeError(attr)
    AttributeError: _dep_map
    sys.argv ['-c', 'egg_info', '--egg-base', 'pip-egg-info']
    test compiling test_ruamel_yaml
    Complete output from command python setup.py egg_info:
    /usr/lib/python3.4/importlib/_bootstrap.py:321: UserWarning: Module _ruamel_yaml was already imported from /usr/lib/python3/dist-packages/_ruamel_yaml.cpython-34m-x86_64-linux-gnu.so, but /tmp/pip-build-koos95yw/ruamel.yaml is being added to sys.path
      return f(*args, **kwds)
    Traceback (most recent call last):
      File "<string>", line 17, in <module>
      File "/tmp/pip-build-koos95yw/ruamel.yaml/setup.py", line 858, in <module>
        main()
      File "/tmp/pip-build-koos95yw/ruamel.yaml/setup.py", line 847, in main
        setup(**kw)
      File "/usr/lib/python3.4/distutils/core.py", line 108, in setup
        _setup_distribution = dist = klass(attrs)
      File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 266, in __init__
        _Distribution.__init__(self,attrs)
      File "/usr/lib/python3.4/distutils/dist.py", line 280, in __init__
        self.finalize_options()
      File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 300, in finalize_options
        ep.require(installer=self.fetch_build_egg)
      File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2201, in require
        reqs = self.dist.requires(self.extras)
      File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2401, in requires
        dm = self._dep_map
      File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2457, in __getattr__
        raise AttributeError(attr)
    AttributeError: _dep_map
    sys.argv ['-c', 'egg_info', '--egg-base', 'pip-egg-info']
    test compiling test_ruamel_yaml

----------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /tmp/pip-build-koos95yw/ruamel.yaml
Storing debug log for failure in /home/admin/.pip/pip.log

Dynamically set Element public data

See original issue on GitLab
In GitLab by [Gitlab user @tristanvb] on Apr 28, 2017, 09:27

Public data is exposed on an Element so that other Elements which depend on the given element can read it.

It is only interesting to read public data from dependencies
when an Element assembles its artifact. Current use cases of
this include integration commands and artifact splitting rules.

In some cases, like when deploying debian packages from imported
debian supporting sources, it will be important to populate the
splitting rules (essentially file manifests) at the time when the
artifact is built, as we will only really know which files to put
into which package after having run the build.

This task will consist of:

  • ensuring that public data is recorded in an artifact's metadata
  • ensuring that the public data is read back into the BuildStream data model whenever the pipeline is loaded and the artifact is found to already be cached
  • adding some API to the Element allowing the Element to programmatically assign public data at Element->assemble() time.
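A sketch of the shape such an API could take (the class and method names here are hypothetical, chosen only to illustrate the third bullet, not BuildStream's actual API):

```python
class Element:
    def __init__(self, public=None):
        self._public = dict(public or {})  # static public data from YAML

    def get_public_data(self, domain):
        return self._public.get(domain, {})

    def set_public_data(self, domain, data):
        # Intended to be called at assemble() time: dynamically computed
        # data (e.g. split rules known only after the build) replaces what
        # the YAML declared, and would then be recorded in the artifact's
        # metadata so it survives caching.
        self._public[domain] = data

class DpkgElement(Element):
    def assemble(self, built_files):
        # Only after the build do we know which files go in which package,
        # so populate the splitting rules dynamically.
        runtime = [f for f in built_files if "/bin/" in f]
        self.set_public_data("split-rules", {"runtime": runtime})
```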

Strange artifact pull failures on GitLab CI

See original issue on GitLab
In GitLab by [Gitlab user @samthursfield] on Jul 14, 2017, 18:32

I managed to run a full build on GitLab CI that populated the artifact cache on ostree.baserock.org.

On my local machine I built the same system and successfully pulled all the artifacts.

Running the build again on GitLab CI triggered a strange failure case:

[--:--:--][30ff3f92][ pull:gnu-toolchain/stage1.bst      ] INFO    Downloaded artifact 30ff3f92
[00:00:01][30ff3f92][ pull:gnu-toolchain/stage1.bst      ] SUCCESS baserock/gnu-toolchain-stage1/30ff3f92-pull.121.log
Unknown exception in SIGCHLD handler
Traceback (most recent call last):
  File "/usr/lib/python3.5/site-packages/buildstream/_yaml.py", line 200, in load
    with open(filename) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/cache/buildstream/artifacts/extract/baserock/gnu-toolchain-stage1/225e3b4c9c5f52041118e0c4af80b722045a569bd56c015e870651351074926f/meta/artifact.yaml'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib64/python3.5/asyncio/unix_events.py", line 808, in _sig_chld
    self._do_waitpid_all()
  File "/usr/lib64/python3.5/asyncio/unix_events.py", line 874, in _do_waitpid_all
    self._do_waitpid(pid)
  File "/usr/lib64/python3.5/asyncio/unix_events.py", line 908, in _do_waitpid
    callback(pid, returncode, *args)
  File "/usr/lib/python3.5/site-packages/buildstream/_scheduler/job.py", line 289, in child_complete
    self.complete(self, returncode, element)
  File "/usr/lib/python3.5/site-packages/buildstream/_scheduler/queue.py", line 196, in job_done
    if self.done(element, job.result, returncode):
  File "/usr/lib/python3.5/site-packages/buildstream/_scheduler/pullqueue.py", line 51, in done
    element._get_cache_key_from_artifact(recalculate=True)
  File "/usr/lib/python3.5/site-packages/buildstream/element.py", line 830, in _get_cache_key_from_artifact
    meta = _yaml.load(os.path.join(metadir, 'artifact.yaml'))
  File "/usr/lib/python3.5/site-packages/buildstream/_yaml.py", line 204, in load
    "Could not find file at %s" % filename) from e
buildstream.exceptions.LoadError: Could not find file at /cache/buildstream/artifacts/extract/baserock/gnu-toolchain-stage1/225e3b4c9c5f52041118e0c4af80b722045a569bd56c015e870651351074926f/meta/artifact.yaml

These errors don't cause the pipeline to fail, but it did seem to eventually hang and I ended up cancelling it.

See https://gitlab.com/baserock/definitions/-/jobs/22540811 for the full logs.

Note that in order to protect the private SSH key for pushing to ostree.baserock.org, the $baserock_ostree_cache_private_key variable is only available on "protected" branches. You can mark a branch as "protected" in the settings for the definitions repository (if you have sufficient permissions).

Summary output at startup makes no sense at `bst track` time

See original issue on GitLab
In GitLab by [Gitlab user @tristanvb] on May 15, 2017, 11:56

When launching a buildstream task which uses the scheduler, a pipeline summary is printed to the console with the logging header.

This makes sense at bst fetch and bst build time, but for bst track the element state and cache keys are nonsensical, since the result of bst track will cause previously cached elements to not be cached and will cause cache keys to change.

Either we should disable this summary at bst track time, or we should set some internal state so that the output makes more sense.

`bst checkout` prints a stack trace when it lacks permission to make symlinks

See original issue on GitLab
In GitLab by [Gitlab user @jonathanmaw] on Jun 1, 2017, 09:38

Full stack trace is:

jonathanmaw@fafnir:~/workspace/buildstream/buildstream-tests$ bst checkout gnome/gnome-system.bst ~/tmp/
Loading:   511
Resolving: 511/511
Checking:  511/511
Traceback (most recent call last):
  File "/home/jonathanmaw/.local/bin/bst", line 9, in <module>
    load_entry_point('BuildStream==0.1.dev747-g16926bb.d20170525', 'console_scripts', 'bst')()
  File "/home/jonathanmaw/.local/lib/python3.4/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/home/jonathanmaw/.local/lib/python3.4/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/home/jonathanmaw/.local/lib/python3.4/site-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/jonathanmaw/.local/lib/python3.4/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/jonathanmaw/.local/lib/python3.4/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/home/jonathanmaw/.local/lib/python3.4/site-packages/click/decorators.py", line 27, in new_func
    return f(get_current_context().obj, *args, **kwargs)
  File "/home/jonathanmaw/workspace/buildstream/buildstream/buildstream/_frontend/main.py", line 343, in checkout
    app.pipeline.checkout(directory, force)
  File "/home/jonathanmaw/workspace/buildstream/buildstream/buildstream/_pipeline.py", line 394, in checkout
    utils.link_files(extract, directory)
  File "/home/jonathanmaw/workspace/buildstream/buildstream/buildstream/utils.py", line 270, in link_files
    return _process_list(src, dest, files, safe_link, ignore_missing=ignore_missing)
  File "/home/jonathanmaw/workspace/buildstream/buildstream/buildstream/utils.py", line 432, in _process_list
    os.symlink(target, destpath)
PermissionError: [Errno 13] Permission denied: 'usr/bin' -> '/home/jonathanmaw/tmp/bin'

The problem on my end turned out to be that ~/tmp was owned by root.

Caching of build trees

See original issue on GitLab
In GitLab by [Gitlab user @tristanvb] on May 5, 2017, 07:02

It can be interesting to cache and store a dirty build tree for later reuse.

The idea is that if the build tree itself, with object files and timestamps intact, could be cached and shared, this could allow incremental builds at a later time on another host.

One could then, for instance, hack on WebKit by creating a workspace using the last known build of the correct cache key, so that after modifying some files only part of the WebKit build would have to run.

Loading pipeline is impossibly slow with > 50 elements

See original issue on GitLab
In GitLab by [Gitlab user @samthursfield] on Jul 21, 2017, 11:37

Recent changes to BuildStream have caused load times to go crazy.

For example building core.bst from my Baserock conversion branch takes so long to load the pipeline that I've never seen it finish. However building gnu-toolchain.bst from the same branch (which is a smaller set of elements) works fine.

The problem is in the resolve_project_variant() method -- it seems to touch a combinatorial explosion of the dependency tree.

I reproduced this with commit 9772107 of buildstream.

Enhancement: automate source bisections for developers

See original issue on GitLab
In GitLab by [Gitlab user @tristanvb] on Mar 2, 2017, 05:48

This is an enhancement which I think would be awesome to implement, at some point after we meet our primary milestones.

So, today it struck me while I was writing this bug comment https://bugzilla.gnome.org/show_bug.cgi?id=763624#c18 , that with a tool like BuildStream we can vastly improve the experience of bisecting commits and tracking down which commit introduced a regression.

Especially because of how the artifact cache works in a mode where cache keys are calculated independently from their dependencies (non-deterministic build modes), it may even be possible to perform bisections without builds in the cases where an artifact already exists in a shared cache for a given commit. Otherwise, so long as build instructions need not change for a given source module between builds (the most likely case), we need only build that module (and optionally depending modules, depending on cache key mode) for every bisect commit and assemble a sysroot (or VM even) for testing, without much of the hassle which comes with the more regular technique outlined in the bug comment above.

dpkg build element

See original issue on GitLab
In GitLab by [Gitlab user @jonathanmaw] on Jun 6, 2017, 13:47

The specification (from Tristan) is:
This should be a regular BuildElement derived element which will
consist of a yaml file and a little bit of additional code.

The yaml will define how to build and install the software into
the %{install-root} directory, using the debian/rules API; it assumes
that the staged Source in question is a module that contains
a debian subdirectory.

The python BuildElement derived class will additionally have to
inspect some of the build results and populate the split-rules
public data so that another dpkg deployment element will know the
correct file manifests for the packages it creates.

This should come with a separate test case added to the .gitlab-ci.yml

Enhancement: bzr source

See original issue on GitLab
In GitLab by [Gitlab user @jonathanmaw] on Apr 13, 2017, 16:28

From https://wiki.gnome.org/Projects/BuildStream/Roadmap/SourcePlugins
A Bazaar source is needed to deal with bzr repositories. Since Flatpak builder supports this kind of source, it will be needed in order to automatically convert existing Flatpak JSON.

While a Bazaar library (bzrlib) exists, and is used by bzr internally, it is Python 2 only and therefore unsuitable, requiring us to shell out to the bzr tool instead.

Semantics and output of new source-bundle command

See original issue on GitLab
In GitLab by [Gitlab user @tristanvb] on Jun 28, 2017, 07:03

Some issues I have with the new bst source-bundle command are as follows:

  • I don't like the name argument for the output of bst source-bundle; the name could be derived by a simple mangle of the target element, similar to how we compose artifact paths and log file names (e.g. core/totem.bst could become core-totem.tar.gz), so there is no need for an option here
  • Should be able to specify an output directory, defaulting to current working directory
  • The resulting tarball contains a tempdir, yuck.

For the last point, I ended up creating a test.tar.gz with bst source-bundle --except base.bst core/totem.bst

When I unpacked it, it contained a core-totem-hp1rrdow/ directory... firstly this is ugly, and secondly it means that two runs of bst source-bundle will never produce the same output.
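The name mangling proposed in the first point could be as simple as the following (a sketch of the suggestion, not existing BuildStream code; the function name is hypothetical):

```python
import os

def bundle_name(element_path):
    # Flatten the element path the same way artifact paths and log file
    # names are composed: 'core/totem.bst' becomes 'core-totem.tar.gz',
    # deterministically, with no temporary-directory component.
    base, _ = os.path.splitext(element_path)
    return base.replace("/", "-") + ".tar.gz"
```

Since the name is a pure function of the target element, two runs of bst source-bundle would produce identically named output.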

Separate Test Suite

See original issue on GitLab
In GitLab by [Gitlab user @tristanvb] on Apr 26, 2017, 13:16

Currently BuildStream only uses a make check style for testing itself;
however, there are still large parts of BuildStream which cannot easily
be tested, especially when it comes to Element implementations.

To remedy this, we should have a separate test suite which involves
building some small BuildStream projects designed to cover all of the
use cases. This will be important moving forward as we add more
Element implementations and need to be sure that they are continuously
tested.

To do this, we can use separate repositories for each test case, or
we could use branches of the buildstream-tests repository.

BuildStream projects for testing every Element should be created,
and BuildStream's ".gitlab-ci.yml" file should be updated to run
BuildStream for each of the test projects.

Lost file metadata in artifacts and images

See original issue on GitLab
In GitLab by [Gitlab user @tristanvb] on Jun 12, 2017, 12:39

Problem

Currently, assuming a Linux sandbox (we don't have others currently), everything is committed to the artifacts as:

  • UID/GID 0
  • No extended file attributes

When files are created with setuid/setgid in the sandbox, those should be recorded in the artifacts, but when we check out from ostree in user mode, the setuid/setgid bits are stripped.

This means that in addition to the above, when we create a bootable image, we pack in files that the sandbox sees without setuid/setgid bits.

Solution

We recently gained a fuse subsystem in BuildStream which the sandbox uses; currently we have only one fuse layer, which provides a copy-on-write hardlink experience (to solve issue #19).

To solve this I would like to take an approach similar to Yocto's pseudo tool, but instead of using LD_PRELOAD, we would implement it with a fuse layer.

This will mean essentially the following:

  • The local artifact cache will have to be able to tell us about the real UID/GID, file attributes and extended attributes for any file, for the ostree artifact cache this can be done by following the ostree source code of ostree ls
  • The sandbox will use a fuse layer for the sake of spoofing the sandbox environment with the real attributes; this will be a separate fuse layer from the existing one we have for copy-on-write hardlinks.
  • The fuse layer will need to store the real attributes introspected from the artifact cache in a temporary local store, an in memory sqlite database could be a good choice if this is too intense for simple python data structures.
  • The fuse layer will handle filesystem callbacks in such a way that:
    • Calls to chown are redirected to the local store and not applied to the underlying filesystem
    • Setting extended attributes always succeeds, but is stored in the temporary store and not applied to the underlying filesystem
    • Reading the attributes and ownership is always read from the store and not from the underlying filesystem
  • When the fuse layer is unmounted, we need to persist the attributes (real UID/GID and xattrs etc) recorded by the fuse layer, or obtain that data somehow
  • When committing the artifact, the artifact cache API needs to have some interface for accepting the ownership and attributes separately from the files being committed; for the ostree artifact cache implementation this can be applied with the commit modifier callbacks, and for other stores such as tarballs it's just as easy.
  • The sandbox will have to orchestrate the fuse layer; it will be required on the root filesystem at all times, it is not required in /buildstream/build (it will slow things down significantly if we use it there, too), and it is also required in the read-write /buildstream/install directory. The sandbox mark_directory() API should be enhanced to communicate some of the requirements that an element has on the marked directories so that the sandbox can make a sane choice about this.
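The redirection rules in the bullets above can be modelled with a small in-memory store (a sketch only; a real implementation would hook FUSE filesystem callbacks rather than plain Python methods, and might back this with an in-memory sqlite database as noted):

```python
class AttributeStore:
    def __init__(self):
        self._owners = {}  # path -> (uid, gid), never applied to disk
        self._xattrs = {}  # path -> {name: value}, likewise store-only

    def chown(self, path, uid, gid):
        # Redirected: record the requested ownership in the store only,
        # leaving the underlying filesystem untouched.
        self._owners[path] = (uid, gid)

    def setxattr(self, path, name, value):
        # Always succeeds, but only the store is updated.
        self._xattrs.setdefault(path, {})[name] = value

    def stat_owner(self, path, real_uid=0, real_gid=0):
        # Reads always come from the store, falling back to the values
        # the underlying filesystem would report.
        return self._owners.get(path, (real_uid, real_gid))
```

On unmount, the contents of the store are what would be persisted and handed to the artifact cache's commit interface.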

bst build could use a --track option

See original issue on GitLab
In GitLab by [Gitlab user @tristanvb] on May 19, 2017, 05:06

Instead of insisting that bst track always be a separate activity from bst build, we could optionally allow the user to launch the build so that it tracks, fetches and builds in tandem (we already do this for fetching and building anyway).

Note that this should never be the default, and should not be considered a fallback solution for issue #32.

Support mounting the host filesystem read-only in the sandbox

See original issue on GitLab
In GitLab by [Gitlab user @aperezdc] on Feb 9, 2017, 15:30

It would be interesting to support bind-mounting the host filesystem read-only. This way instead of necessarily having to pull a root filesystem tarball (or an OSTree sysroot), it would be possible to reuse the development tools from the host. This is interesting in cases where one would want to build software to run in the host which gets built and installed by BuildStream in a prefix.

For example, right now for WebKitGTK+ development we currently use a custom jhbuild module set to ensure all developers use the same dependencies for testing, and we do want the build artifacts to be built with the host compiler and installed into a directory, because the rest of the WebKitGTK+ build system and the test runner expect to use host tools as well.

Master logs of integration commands are confusing

See original issue on GitLab
In GitLab by [Gitlab user @tristanvb] on May 21, 2017, 10:44

Integration commands are implemented by the elements which declare them, and when a BuildElement integrates its sandbox before a build, it will iterate over the dependencies calling Element.integrate().

The element which integrates will naturally send a message with its own element in context, and that message will appear in the master log and on the console as coming from the integrating element, not from the element which is about to build and on whose behalf the integrating element is doing the work.

This results in the master log saying something like the following for an integration step of a gstreamer build:

[--:--:--][088951c6][gstreamer.bst                  ] START   Integrating sandbox
[--:--:--][8e7c9bae][linker-priority.bst            ] STATUS  Running integration command

    ldconfig

[--:--:--][46c81645][base.bst                       ] STATUS  Running integration command

    update-mime-database /usr/share/mime

[--:--:--][46c81645][base.bst                       ] STATUS  Running integration command

    update-desktop-database -v

[--:--:--][f5ac9f56][glib.bst                       ] STATUS  Running integration command

    glib-compile-schemas /usr/share/glib-2.0/schemas

[00:00:20][088951c6][gstreamer.bst                  ] SUCCESS Integrating sandbox

While the above is certainly true (i.e. glib.bst is integrating a sandbox), we have lost the context that glib is in fact integrating a sandbox for gstreamer to build in. This is especially confusing when you have 4 or more builds in parallel and commands appear mixed in the master logs.

To fix this, the element emitting the message should not be used to print the message in the master log (though this perhaps should not apply to the individual build log); instead, the element associated with the scheduled task should be used to print these logs.

Additionally, it would be good to have both in context, so that in the master log and terminal one can see that we are integrating glib on behalf of the gstreamer build; however this should be additional, and the main, text-aligned element and cache key should remain gstreamer for better readability.
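One possible shape for such a log line (illustrative only; the function name and format here are hypothetical, not BuildStream's actual logging code):

```python
def format_master_log(task_element, emitting_element, message):
    # Align on the element of the scheduled task so parallel builds stay
    # readable, and append the emitting element as secondary context.
    context = "" if emitting_element == task_element else " (via {})".format(emitting_element)
    return "[{:<31}] {}{}".format(task_element, message, context)
```

With this, the integration commands above would all be aligned under gstreamer.bst while still showing that glib.bst or base.bst emitted them.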

Enhanced Script Base Element

See original issue on GitLab
In GitLab by [Gitlab user @tristanvb] on Apr 28, 2017, 09:54

Currently in the core of BuildStream there is a BuildElement which
is a specialized element for the purpose of implementing various
build elements (like autotools, cmake, etc).

All of these elements share the commonality of:

  • staging one or more sources into /buildstream/build
  • running a series of commands defined by the element's accompanied YAML configuration
  • collecting the artifact from the /buildstream/install location in the sandbox

This pattern should be repeated for the purpose of deployments;
currently we have a quite rigid script element which allows
staging multiple artifact dependency chains in different places
and running a series of commands, allowing one to use one base
runtime to operate on another staged artifact and its dependencies.

In order to create more kinds of deployment elements we should
have a more flexible base scripting element to allow this.

The design of this base scripting element should take into consideration
the current usage of the script element for creating a
bootable image

Support for building from "local" source directories

See original issue on GitLab
In GitLab by [Gitlab user @aperezdc] on Feb 9, 2017, 16:01

This may not be among the goals of BuildStream, but here goes a suggestion anyway...

As a developer, I would like to be able to use a controlled build environment to develop my library/application/thing. BuildStream seems to be good at that: I can use it already today to e.g. import the org.gnome.Sdk runtime from its OSTree repository, and for building the dependencies needed by my project. Problem is: how would I go about building the code I am developing right now, which may contain local WIP changes (it may not even be in version control)?

The first idea which comes to mind is developing directly inside a shell launched by build-stream shell -s build. Well, no: suddenly the developer's usual "gear" (editors, VCSs, fine-tuned configurations) is gone. Building my favorite editor (and tools, and configs) with BuildStream to have them inside the sandbox seems ludicrous.

One thought would be to write an element that makes a local directory available inside the sandbox for building. Currently that involves staging the code into the sandbox (IIUC). While that can be okay for small-to-moderately sized projects, staging a full copy of the sources for a big project is slow and discourages quick edit-compile-test cycles. One solution could be the ability to bind-mount an existing checkout of the source tree inside the sandbox.

(Extra kudos if, when using a modern filesystem like btrfs, what gets exposed into the sandbox is a read-only snapshot of the source directory.)
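The bind-mount idea could be sketched as follows, assuming a bubblewrap-based sandbox; `--ro-bind` and `--chdir` are standard bwrap options, but how BuildStream would wire this up is an assumption. This sketch only constructs the argument list:

```python
# Sketch: expose a local checkout read-only inside a bubblewrap sandbox
# instead of staging a copy. The integration with BuildStream is
# hypothetical; the bwrap options themselves are real.

def local_source_args(source_dir, sandbox_dir="/buildstream/build"):
    return [
        "bwrap",
        "--ro-bind", source_dir, sandbox_dir,  # read-only view of the checkout
        "--chdir", sandbox_dir,
    ]

print(local_source_args("/home/user/myproject"))
```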

'getting started' experience is missing...

See original issue on GitLab
In GitLab by [Gitlab user @devcurmudgeon] on Apr 18, 2017, 07:31

google gives me only a few links. i follow them all, and find blog posts, and this repo.

afaict nothing tells me how to get started. also note:

Enhancement: `bst shell --builddir` automatically picking the newest available builddir

See original issue on GitLab
In GitLab by [Gitlab user @samthursfield] on Jun 21, 2017, 12:02

If you want to open a shell into a failed build, it's very often the newest builddir that interests you. Currently I have to do ls -lrt /mnt/build/cache/build/ and then manually pick the latest builddir, but BuildStream could do this for me. Of course the option to look in other builddirs is useful at times and shouldn't be removed.
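Picking the newest builddir is a matter of comparing mtimes; a minimal sketch (the helper name is hypothetical):

```python
import os
import tempfile

def newest_builddir(build_root):
    """Return the most recently modified build directory under build_root,
    or None if there are none (hypothetical helper, not a real bst API)."""
    candidates = [
        os.path.join(build_root, entry)
        for entry in os.listdir(build_root)
        if os.path.isdir(os.path.join(build_root, entry))
    ]
    return max(candidates, key=os.path.getmtime) if candidates else None

# Demo with deterministic mtimes
root = tempfile.mkdtemp()
for name, mtime in [("glibc-abc1", 100), ("gcc-def2", 200)]:
    path = os.path.join(root, name)
    os.mkdir(path)
    os.utime(path, (mtime, mtime))
print(newest_builddir(root))  # ends with gcc-def2, modified last
```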

bst build does not handle elements with Consistency.INCONSISTENT

See original issue on GitLab
In GitLab by [Gitlab user @tristanvb] on May 19, 2017, 05:04

When typing bst build with inconsistent sources (sources that have no ref and need to be tracked), BuildStream just goes ahead and starts to try building these, but will inevitably fail because there are no refs.

We could have different approaches here:

  • Just bail out with a message to the user
  • Try building any elements which could possibly be built, but omit any inconsistent elements (by prefiltering the element list)
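The second approach (prefiltering) could look like this sketch; `Element` is a simplified stand-in for BuildStream's real element objects and the consistency states are reduced to two:

```python
# Sketch: prefilter inconsistent elements (sources with no ref yet) out
# of the build plan instead of failing mid-build. Simplified stand-ins.

INCONSISTENT, RESOLVED = "inconsistent", "resolved"

class Element:
    def __init__(self, name, consistency):
        self.name = name
        self.consistency = consistency

def plan_build(elements):
    buildable = []
    for element in elements:
        if element.consistency == INCONSISTENT:
            print(f"warning: skipping {element.name}: "
                  "sources have no ref, run `bst track` first")
        else:
            buildable.append(element.name)
    return buildable

elements = [Element("glibc.bst", RESOLVED), Element("new-lib.bst", INCONSISTENT)]
print(plan_build(elements))  # ['glibc.bst']
```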

progress monitoring and elapsed time in logs...

See original issue on GitLab
In GitLab by [Gitlab user @devcurmudgeon] on Apr 28, 2017, 11:44

as discussed on irc it's important for a user to monitor progress of a build run. Based on experience with morph + ybd i recommend:

  • some ongoing indicator of [current task/tasks required for this run/total tasks for target]
  • ongoing indicator of total elapsed time since run started
  • record elapsed time for each component task completion

Note that

  • users may want to glance at current run status while it's running (so want to see current elapsed time)
  • users may want to look at the timeline profile of a run later, which means having elapsed time in every line of the log is useful
  • users may want to compare specific task times between machines/runs (which AWS machine gives me best time vs cost result?) and between components (which builds are taking most time, to optimise that build or maybe try to adjust dependencies etc)

IMO the current realtime clocks per task are shiny but not really useful, and the zeroes+blanks are taking up space that would be better used as snapshot of elapsed time when that logline was generated.
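A sketch of the suggested log line format: every line carries a snapshot of elapsed session time plus a [completed/total] task counter (class and field names are illustrative):

```python
import time

class SessionLogger:
    """Prefix each log line with elapsed time since the session started
    and a [completed/total] task counter (illustrative sketch)."""

    def __init__(self, total_tasks):
        self.start = time.monotonic()
        self.total = total_tasks
        self.done = 0

    def task_done(self):
        self.done += 1

    def log(self, message):
        minutes, seconds = divmod(int(time.monotonic() - self.start), 60)
        return f"[{minutes:02d}:{seconds:02d}] [{self.done}/{self.total}] {message}"

logger = SessionLogger(total_tasks=42)
logger.task_done()
print(logger.log("finished building glibc.bst"))
```

Having the elapsed snapshot on every line makes the later timeline-profiling and machine-comparison use cases a matter of grepping the log.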

Build continues after errors in some of the components

See original issue on GitLab
In GitLab by [Gitlab user @palvarez89] on Feb 14, 2017, 16:54

Building https://gitlab.com/tristanvb/buildstream-tests/commit/5e20e04577580c4f56f62f59956a6d4b8d5b7ded triggered some errors (which are now fixed), but the show command executed afterwards shows that the build continued:

sh-4.3# build-stream show --deps all build-essential.bst
      gnome-platform: 7b86d9adda47b7e17c1b9c419e85355f35f7eb1643348b5ffc694d1021db38f0 (cached)
           gnome-sdk: bfe7dafec7555e417fcec60525c4042a38ea1abfc8ddd8dd03d8950a162e664d (cached)
     stage1-binutils: d92564efa61ee5424d95e7eb6688b6c176633b7de07941d608e6036bd4c14bcf (cached)
          stage1-gcc: fa57f87dbbc5309474cbb66f0449e0fd093f52d42d86dfc8bcfc55b1e9d42274 (cached)
              stage1: 402d08fa51449a5d3fde185fc61459d412fa678eae5898f59c4cd00feb87d0b8 (cached)
stage2-linux-api-headers: a46017e2faa5618cfc7f3eb6b56d4aef931d757660f39c56bde1e9020efc1939 (cached)
        stage2-glibc: 62064d973a4c98ab50d230d219d71cf1a594ee0afdd7c71350a102b1fdc0ee60 (cached)
     stage2-binutils: 8f11da1c335a1b5daa4a51573225b0c1f680799dc0f827dca0cbb679a1595002 (cached)
      stage2-busybox: 5f818c25647c06493c0de74e711f6040a4d92583232149531f7cde05c03a3f3d (cached)
     stage2-fhs-dirs: df5dc158e4e4f5cd3b3f74d5a445bf219a2e645eba902faf4969cb042a36e4b0 (cached)
         stage2-gawk: a096f339bb9305aa3d1176be95545dfbe8dcbc79e68d4bc8c4f68597ad5e8483 (cached)
stage2-gcc-fixed-headers: d9700dda5be40f7ca737d2a03c9c29102e9ca9e70834e9a582f47a50782646b5 (cached)
    stage2-libstdcxx: 9079ffae09dce12546350243fabf812be5e3da3e549120c43bd87960d105aad7 (cached)
          stage2-gcc: 94671d0071b01636cc018dd311185d2e52303e44aa23b3d8326f4a4b5992dcc7 (cached)
         stage2-make: e68cfcad15f664b7e75350532ee584e5ed4395a56b4a1a9d7ab1617be746c8b9 (cached)
  stage2-reset-specs: 561d392d18adae1a229f821cb53d51be8fd7e5597de9986958e3047cf9ae2f36 (cached)
              stage2: d9b4d323e91c1c628eef0f49fa3f7638b2b40c0c541fc0662d3a9183053da705 (cached)
   linux-api-headers: 74c1b2cda2c125fd94614c1f1c0c30b2ae4127d8dcca1fbee917717a05f011c4 (buildable)
               glibc: b2a9dffcb63660a9ec2a49e81234032a9fa4e0c0ca408060f767b2e1e33d3793 (waiting)
                zlib: 020c4231a9363bae6038905d261a5a39ef78000d06113ada693f2b839b440102 (waiting)
            binutils: 1b984826a20e6736231a9339e8bcc42a6767e2392454f889395932e8ce4391fb (waiting)
             busybox: d808d24648885ba1d3683a165e8b56b0791de7cc51973f285b39593ed540c6a4 (waiting)
              ccache: 57331614340c3e01a4a98262a6b286dc930570fcbb0fed20db1d4825e24ef03e (waiting)
            fhs-dirs: 91c8ae954de6a0bc79da86d09d8693a0cac5add900d214099ae20fd6422be2ca (buildable)
                gawk: ab3ab0c7676f203c43574f03647db0ef0f9d9fd7c63107472b68e3a1d357de9d (waiting)
          m4-tarball: 5d64f7df0658f105e5e170ec0aaaec42ff933dcca1a8459ac97edaecf40d3b4f (waiting)
                 gcc: 8251ac83632d0807906a7428d505671a5f81d1b9d9dead94a53b7966119f2bd0 (waiting)
                make: c2ee53ed1f6c44fd305abc0fa289ac508de7b17fd798c7aa0d0329a281a4e604 (waiting)
     build-essential: 0dcced8ae39caa0bde81f8800da41adf78b67e74b7f532c093fc782ba13aa978 (cached)

I believe that the "build-essential" component should never have been built.

Better validation for loaded YAML

See original issue on GitLab
In GitLab by [Gitlab user @tristanvb] on May 12, 2017, 08:47

Currently we have very good validation for the YAML that we do load, but no validation on the YAML that we don't load.

For the YAML that we do load, if ever the reported data is an incorrect type, or if a mandatory member is missing; we raise an exception with the filename and line, column information of where things went wrong, and this is nicely reported to the user.

What we should additionally be doing is have a method for validating dictionary nodes and ensuring that no unexpected members are encountered, it should take a list of strings to check for loaded dictionary keys and raise an exception with the filename, line and column information for where any unexpected dictionary members were declared.

This should have a base implementation in _yaml.py which the _loader.py can use for validation of the base format; additionally an API should be exposed in plugin.py for Element and Source plugins to use at plugin->configure() time to assert a correct configuration.
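The validator itself can be sketched as below; real BuildStream nodes carry filename/line/column provenance per key, which is elided here, and `node_validate`/`LoadError` follow the issue's intent rather than any final API:

```python
# Sketch: reject unexpected dictionary members in loaded YAML. Provenance
# (filename, line, column) reporting is elided; a real implementation in
# _yaml.py would include it in the error message.

class LoadError(Exception):
    pass

def node_validate(node, valid_keys):
    unexpected = [key for key in node if key not in valid_keys]
    if unexpected:
        raise LoadError("Unexpected keys: " + ", ".join(sorted(unexpected)))

config = {"url": "https://example.com/foo.tar.gz", "reff": "1.2.3"}
try:
    node_validate(config, ["url", "ref", "track"])
except LoadError as error:
    print(error)  # Unexpected keys: reff
```

This catches typos like `reff` at load time instead of silently ignoring them.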

Reusable Image Deployment Element

See original issue on GitLab
In GitLab by [Gitlab user @tristanvb] on Apr 28, 2017, 10:10

After the base scripting element has been created as per issue #16, it should be
possible to create a more generic and reusable element for deploying bootable
disk images, so that we can replace the existing script implementation
with something reusable and configurable.

This scripted image deployment should be used as a guide for how this element should be implemented. It should define some variables and use a series of predefined but overridable commands to create the image, in the same way that build element implementations derived from BuildElement do.

Some criteria for the image deployment element:

  • Should be able to configure the filesystem type to use for root partition
  • Should be able to configure swap size
  • Should be able to configure some parameters for the bootloader, especially
    kernel command line parameters
  • Should be able to specify boot and root partition sizes as minimum sizes
    and use a larger size in the case that the payloads exceed the user requested size

For now:

  • Only supporting syslinux as a bootloader is acceptable for now, as we only
    have support for dos boot partition and syslinux bootloader readily available
    for creating bootable disks without mounting filesystems.
  • No need for highly complex disk partitioning with LVM

Regardless of the above, it would be nice to design this element with some
forward thinking towards being able to support more fancy filesystem layouts
and alternative bootloaders without needing to break API or create separate
elements for those.

Enhancement: Implement tar source plugin

See original issue on GitLab
In GitLab by [Gitlab user @jonathanmaw] on Apr 10, 2017, 13:58

From https://wiki.gnome.org/Projects/BuildStream/Roadmap/SourcePlugins

Tarball Source
This should simply use host wget to obtain and cache tarballs in the local user's source directory.

The source should make a sha256 sum of the downloaded tarball in the track() method, so that users need not list the checksums themselves.

The source should be able to detect and extract the tarball by itself, regardless of compression types, using host tar.

If deriving the compression format from the file extension is not reliable enough, a configuration option can be added to the tar source so a user can explicitly say, e.g., that this tarball is bzip2.

A source plugin needs to implement:

  • configure
  • preflight
  • get_unique_key
  • get_consistency
  • get_ref
  • set_ref
  • track
  • fetch
  • stage
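The checksum step of track() is straightforward; a sketch with the download elided (the helper name is hypothetical):

```python
import hashlib
import os
import tempfile

def tarball_sha256(path, chunk_size=65536):
    """Hash a downloaded tarball in chunks, as track() would, so users
    need not list checksums by hand (hypothetical helper name)."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Demo against a small stand-in file
tarball = os.path.join(tempfile.mkdtemp(), "demo.tar.gz")
with open(tarball, "wb") as f:
    f.write(b"example tarball contents")
print(tarball_sha256(tarball))
```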

Artifact Cache Storage Enhancements

See original issue on GitLab
In GitLab by [Gitlab user @tristanvb] on Apr 28, 2017, 09:10

Currently the artifact cache only stores build results, however
it would be nice to store the build logs which contributed to
the artifact being created, and it will be important to store
some metadata related to the artifact.

These should all be stored separately for a given element / cache key
in the artifact cache.

New artifact structure can look something like this:

  • logs/ <-- location to store logs (will be only one log file at least for now)
  • files/ <-- location to store the actual files, the real artifact output
  • meta/ <-- location to store element metadata

Rare exception when terminating processes

See original issue on GitLab
In GitLab by [Gitlab user @tristanvb] on May 2, 2017, 11:22

In some corner cases, we are experiencing the symptoms described in this python upstream bug report

This is mostly harmless, but is nevertheless a stack trace thrown at the user for no good reason, e.g.:

Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/usr/lib/python3.5/multiprocessing/util.py", line 288, in _exit_function
    _run_finalizers(0)
  File "/usr/lib/python3.5/multiprocessing/util.py", line 248, in _run_finalizers
    items = [x for x in list(_finalizer_registry.items()) if f(x)]
RuntimeError: dictionary changed size during iteration

installing buildstream on fresh jessie from instructions at http://buildstream.gitlab.io/buildstream/install.html#installing

See original issue on GitLab
In GitLab by [Gitlab user @devcurmudgeon] on Apr 25, 2017, 15:13

Unpacking /home/admin/src/buildstream
  Running setup.py (path:/tmp/pip-rxu1im61-build/setup.py) egg_info for package from file:///home/admin/src/buildstream
    your setuptools is too old (<12)
    setuptools_scm functionality is degraded
    zip_safe flag not set; analyzing archive contents...
    
    Installed /tmp/pip-rxu1im61-build/pytest_runner-2.11.1-py3.4.egg
    
Requirement already satisfied (use --upgrade to upgrade): setuptools in /usr/lib/python3/dist-packages (from BuildStream==0.1)
Downloading/unpacking psutil (from BuildStream==0.1)
  Downloading psutil-5.2.2.tar.gz (348kB): 348kB downloaded
  Running setup.py (path:/tmp/pip-build-xy965he4/psutil/setup.py) egg_info for package psutil
    
    warning: manifest_maker: MANIFEST.in, line 14: 'recursive-include' expects <dir> <pattern1> <pattern2> ...
    
    warning: no previously-included files matching '*' found under directory 'docs/_build'
    warning: no previously-included files matching '*' found under directory '.ci'
Downloading/unpacking ruamel.yaml (from BuildStream==0.1)
  Downloading ruamel.yaml-0.14.9.tar.gz (241kB): 241kB downloaded
  Running setup.py (path:/tmp/pip-build-xy965he4/ruamel.yaml/setup.py) egg_info for package ruamel.yaml
    Traceback (most recent call last):
      File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2382, in _dep_map
        return self.__dep_map
      File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2457, in __getattr__
        raise AttributeError(attr)
    AttributeError: _Distribution__dep_map
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "<string>", line 17, in <module>
      File "/tmp/pip-build-xy965he4/ruamel.yaml/setup.py", line 858, in <module>
        main()
      File "/tmp/pip-build-xy965he4/ruamel.yaml/setup.py", line 847, in main
        setup(**kw)
      File "/usr/lib/python3.4/distutils/core.py", line 108, in setup
        _setup_distribution = dist = klass(attrs)
      File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 266, in __init__
        _Distribution.__init__(self,attrs)
      File "/usr/lib/python3.4/distutils/dist.py", line 280, in __init__
        self.finalize_options()
      File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 300, in finalize_options
        ep.require(installer=self.fetch_build_egg)
      File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2201, in require
        reqs = self.dist.requires(self.extras)
      File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2401, in requires
        dm = self._dep_map
      File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2390, in _dep_map
        if invalid_marker(marker):
      File "/usr/lib/python3/dist-packages/pkg_resources.py", line 1207, in is_invalid_marker
        cls.evaluate_marker(text)
      File "/usr/lib/python3/dist-packages/pkg_resources.py", line 1319, in _markerlib_evaluate
        for key in env.keys():
    RuntimeError: dictionary changed size during iteration
    sys.argv ['-c', 'egg_info', '--egg-base', 'pip-egg-info']
    test compiling test_ruamel_yaml
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2382, in _dep_map
        return self.__dep_map
      File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2457, in __getattr__
        raise AttributeError(attr)
    AttributeError: _Distribution__dep_map

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "<string>", line 17, in <module>
      File "/tmp/pip-build-xy965he4/ruamel.yaml/setup.py", line 858, in <module>
        main()
      File "/tmp/pip-build-xy965he4/ruamel.yaml/setup.py", line 847, in main
        setup(**kw)
      File "/usr/lib/python3.4/distutils/core.py", line 108, in setup
        _setup_distribution = dist = klass(attrs)
      File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 266, in __init__
        _Distribution.__init__(self,attrs)
      File "/usr/lib/python3.4/distutils/dist.py", line 280, in __init__
        self.finalize_options()
      File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 300, in finalize_options
        ep.require(installer=self.fetch_build_egg)
      File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2201, in require
        reqs = self.dist.requires(self.extras)
      File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2401, in requires
        dm = self._dep_map
      File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2390, in _dep_map
        if invalid_marker(marker):
      File "/usr/lib/python3/dist-packages/pkg_resources.py", line 1207, in is_invalid_marker
        cls.evaluate_marker(text)
      File "/usr/lib/python3/dist-packages/pkg_resources.py", line 1319, in _markerlib_evaluate
        for key in env.keys():
    RuntimeError: dictionary changed size during iteration
    sys.argv ['-c', 'egg_info', '--egg-base', 'pip-egg-info']
    test compiling test_ruamel_yaml

----------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /tmp/pip-build-xy965he4/ruamel.yaml
Storing debug log for failure in /home/admin/.pip/pip.log

Enhancement: `bst track --deps=none` should be default behaviour

See original issue on GitLab
In GitLab by [Gitlab user @samthursfield] on Jun 28, 2017, 15:57

I love bst track but I keep messing up my Git tree with it. I try to update the ref for a single element, forget to pass --deps=none and before I realise, a bunch of my .bst files have been overwritten with new 'ref' fields. I then have to use git checkout -p to remove these unwanted changes from the other changes that I have unstaged in my working tree.

Of course at some point I will run bst track over the whole thing and get all the refs up to date so that this doesn't matter any more, but I have other things to do first :-)

In the meantime, can we change the default for bst track to be --deps=none?

This would mean that with no arguments bst track only operates on the filename you explicitly gave it, which is safer. If you want --deps=all you can pass it explicitly.

Artifacts can get corrupted while running integration commands

See original issue on GitLab
In GitLab by [Gitlab user @tristanvb] on Apr 29, 2017, 06:57

During each build, and at other times; we stage artifacts to a sandbox and then run a series of integration commands in the sandbox where the rootfs needs to be read-write. Integration commands can include things like ldconfig, fc-cache, glib-compile-schemas etc.

Because the ostree artifacts are extracted with hardlinks, and the individual extracts are subsequently staged using hardlinks; the sandbox directory rootfs is consequently populated with hardlinks which link back into the ostree repositories, so modifying those files in place will result in corruption of artifacts.

To address this, the current plan is to use fusepy in the sandbox implementation to provide a copy-on-write experience for any hardlinked files. This should only be necessary while the sandbox is running with a read-write rootfs, in other scenarios once the rootfs is integrated then the rootfs is read-only and this is not necessary.
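The underlying idea can be shown without FUSE: before a shared file is written, break the hardlink with a private copy so the cached artifact is never touched. This is a simplified sketch of the copy-on-write behavior the fusepy layer would provide transparently:

```python
import os
import shutil
import tempfile

def break_hardlink(path):
    """If `path` shares its inode with another name (e.g. a file in the
    artifact cache), replace it with a private copy before any write."""
    if os.stat(path).st_nlink > 1:
        private = path + ".cow"
        shutil.copy2(path, private)   # new inode, same content and metadata
        os.replace(private, path)     # atomically swap into place

# Demo: two hardlinked names; after breaking, writes no longer propagate
os.chdir(tempfile.mkdtemp())
with open("cached.txt", "w") as f:
    f.write("artifact")
os.link("cached.txt", "staged.txt")
break_hardlink("staged.txt")
with open("staged.txt", "w") as f:
    f.write("modified")
print(open("cached.txt").read())  # artifact
```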

Tar source plugin: needs better behavior at stage() time

See original issue on GitLab
In GitLab by [Gitlab user @tristanvb] on May 12, 2017, 10:33

Most tarballs we use are release tarballs which contain a directory at the root, named as the base name of the tarball without the tar extension.

The default behavior of stage() for a tar source should be to extract the content of that directory into the target directory, not to extract the tarball base directory itself into the build dir.

Of course not every (but almost every) release tarball will follow the standard, so the tar source should have an option for working with non standard tarballs as well.
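A sketch of the proposed default staging behavior, using the standard tarfile module: strip the leading path component so foo-1.0/src/main.c lands at src/main.c (the option to disable this for non-standard tarballs would sit alongside):

```python
import os
import tarfile
import tempfile

def stage_tar(tar_path, target):
    """Extract the contents of the tarball's root directory into `target`,
    stripping the leading component (sketch of the proposed default)."""
    with tarfile.open(tar_path) as tar:
        for member in tar.getmembers():
            parts = member.name.split("/", 1)
            if len(parts) == 1:
                continue              # the root directory entry itself
            member.name = parts[1]    # strip the "foo-1.0/" prefix
            tar.extract(member, target)

# Demo: build a release-style tarball, then stage it
os.chdir(tempfile.mkdtemp())
os.makedirs("foo-1.0")
with open("foo-1.0/README", "w") as f:
    f.write("hello")
with tarfile.open("foo-1.0.tar.gz", "w:gz") as tar:
    tar.add("foo-1.0")
stage_tar("foo-1.0.tar.gz", "staged")
print(os.listdir("staged"))  # ['README']
```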

Automatically generated man pages not integrated in build

See original issue on GitLab
In GitLab by [Gitlab user @tristanvb] on Apr 13, 2017, 11:56

We use click_man to automatically generate man pages from our usage of click library for the frontend.

However, I was unable to easily integrate the generation of man pages into the build_py step in our setup.py.

The click-man page discourages this, because:

If we generate them in our build process and add them to your
distribution we do not have a way to prevent installation to
/usr/share/man for non-UNIX-like Operating Systems.

However, frankly we should not care about this. Having the docs included as 'data_files' in the distribution proper means that they will automatically appear at ${prefix}/share/man/man1, which is perfect because:

  • It should be obvious for any distribution that what appears in the python distribution should be installed
  • Even if we do support non unix platforms in the future, there is no harm done with having unused man pages installed so long as they are tracked properly by whatever package manager is used.

I have tried to hook this up following the instructions in this blog post, but have not been able to do so. This should be possible, but the documentation is nearly non-existent.

The current workaround is to generate the man pages periodically and commit the result to the buildstream repository, which is far from ideal.

sphinx docs fail on python2-default systems

See original issue on GitLab
In GitLab by [Gitlab user @leeming] on Dec 8, 2016, 16:22

Sphinx runs the default python interpreter when generating the documentation. This is annoying/a problem for systems that default to python2.7 for the python command.

A workaround for this is to make a copy of the sphinx-build script from /usr/bin and change the header to explicitly call python 3. Alternatively create an additional script for this case and change the following in the Makefile

SPHINXBUILD = sphinx-build

As per http://stackoverflow.com/questions/8015225/how-to-force-sphinx-to-use-python-3-x-interpreter

#!/usr/bin/python3
# -*- coding: utf-8 -*-
"""
Same as /usr/bin/sphinx-build but with different
interpreter
"""

import sys

if __name__ == '__main__':
    from sphinx import main, make_main
    if sys.argv[1:2] == ['-M']:
        sys.exit(make_main(sys.argv))
    else:
        sys.exit(main(sys.argv))

I could not find an obvious and nice workaround for this, hence this issue instead of a PR.

Incorrect pipeline total element count for `bst track`

See original issue on GitLab
In GitLab by [Gitlab user @tristanvb] on Jul 19, 2017, 08:32

When running bst track without --deps all, the pipeline status area still shows a total element count that includes the track target's dependencies, even though the intention (and behavior) is to track only a single element.

This may also be true in other scenarios like with bst fetch, the counter should be changed to be based on the real list of elements which are queued for scheduling.

Symlinks in the sandbox are broken by the path to the buildstream cache containing symlinks

See original issue on GitLab
In GitLab by [Gitlab user @jonathanmaw] on Jul 3, 2017, 10:39

The specific use-case found is that during a build that tried to execute rst2man, it turned out to be a broken symlink.
The symlink turned out to be
../../../../../../../../../home/jonathanmaw/.cache/buildstream/build/debian-python-x9gp05kc/root/etc/alternatives/rst2man
instead of /etc/alternatives/rst2man

According to [Gitlab user @tristanvb], the problem is utils.py:_relative_symlink_path() needs to be fixed, so that the realpath of the passed 'root' directory is used as well
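The fix can be sketched as follows: resolve the root with os.path.realpath() before computing the relative target, so symlinks in the cache path cannot leak into the rewritten link. The function name and signature below are illustrative; the real code lives in utils.py:

```python
import os

def relative_symlink_target(root, symlink_path, target):
    """Rewrite an absolute symlink target as a path relative to the
    symlink's own directory inside `root`. Resolving `root` first (the
    suggested fix) keeps symlinks in the cache path out of the result."""
    root = os.path.realpath(root)
    if not target.startswith(os.sep):
        return target                  # already relative, leave alone
    link_dir = os.path.dirname(
        os.path.join(root, symlink_path.lstrip(os.sep)))
    return os.path.relpath(root + target, link_dir)

print(relative_symlink_target("/cache/root", "/usr/bin/rst2man",
                              "/etc/alternatives/rst2man"))
# ../../etc/alternatives/rst2man
```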

Show failed elements in status area

See original issue on GitLab
In GitLab by [Gitlab user @tristanvb] on May 15, 2017, 11:58

When running a pipeline, it's possible that a build or fetch failed but that the user decided to continue building any non failing elements; since we have space in the status area to count the elements (total, session, built, fetched, etc), it makes sense to also show the number of elements which have failed.

Configurable retries for network related tasks

See original issue on GitLab
In GitLab by [Gitlab user @tristanvb] on May 18, 2017, 07:40

Sometimes a build fails simply because of a network timeout or the like; this can happen especially when one build has failed and we've suspended all tasks to debug it, which often causes an ongoing Fetch task to fail when resumed.

We should:

  • Have a configuration for maximum retries for network related activities
  • Reschedule failed network related tasks until maximum retries is reached
  • Frontend should not suspend tasks when a failure occurs that is in fact being retried

This requires that the failure message for a retry-able task should be special, or, the Message object could have a retry field added to it, otherwise the frontend will suspend tasks and try to handle the failure.
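A sketch of the retry loop, with a retry flag on the failure message that the frontend could use to distinguish a retried failure from a terminal one (names are illustrative, not BuildStream API):

```python
# Sketch: retry a network-bound task up to max_retries times; intermediate
# failures are reported with retry=True so a frontend would not suspend
# tasks for them. Names are illustrative assumptions.

def run_with_retries(task, max_retries=3):
    for attempt in range(1, max_retries + 1):
        try:
            return task()
        except OSError as err:
            retrying = attempt < max_retries
            print(f"attempt {attempt} failed ({err}), retry={retrying}")
            if not retrying:
                raise

attempts = []
def flaky_fetch():
    attempts.append(1)
    if len(attempts) < 3:
        raise OSError("network timeout")
    return "fetched"

print(run_with_retries(flaky_fetch))  # fetched
```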

End of session reports

See original issue on GitLab
In GitLab by [Gitlab user @tristanvb] on May 30, 2017, 14:45

Currently we display status information in the status area about the queues in operation, and about the total elements of the pipeline vs. how many elements were selected for processing, but these don't end up in a master build log.

Also, it would be nice for the user to see something informative about what was done when buildstream exits.

This should essentially have overall information such as:

  • What main action was performed (track, fetch, build)
  • Total pipeline elements
  • Elements processed

And information pertaining to each queue, such as:

  • Elements which were successfully processed
  • Elements which were skipped
  • Elements which failed
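Put together, such a report could be rendered from per-queue counters like this sketch (field names are illustrative):

```python
# Sketch: an end-of-session summary rendered from per-queue counters,
# printed on exit and appended to the master log. Field names are
# illustrative assumptions.

def session_report(action, total, processed, queues):
    lines = [f"Session {action}: {processed}/{total} pipeline elements processed"]
    for name, stats in queues.items():
        lines.append(f"  {name}: {stats['success']} succeeded, "
                     f"{stats['skipped']} skipped, {stats['failed']} failed")
    return "\n".join(lines)

print(session_report(
    "build", total=40, processed=12,
    queues={"Fetch": {"success": 12, "skipped": 0, "failed": 0},
            "Build": {"success": 10, "skipped": 1, "failed": 1}}))
```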
