ghdl/docker
Scripts to build and use docker images including GHDL
Sizes of Buster images:
ghdl/run:buster-gcc 96MB
ghdl/run:buster-llvm-7 116MB
ghdl/run:buster-mcode 84.6MB
ghdl/ghdl:buster-gcc-8.3.0 303MB - 96MB = 207MB
ghdl/ghdl:buster-llvm-7 123MB - 116MB = 7MB
ghdl/ghdl:buster-mcode 88MB - 84.6MB = 3.4MB
It is surprising that GHDL tarballs with the mcode or LLVM backends require less than 10MB, while GCC requires more than 200MB! I think we might be doing something wrong, such as adding build artifacts to the GCC tarball (which should not be there).
Image ghdl/ghdl:buster-gcc-8.3.0 can be inspected with wagoodman/dive:
docker run --rm -it \
-v /var/run/docker.sock:/var/run/docker.sock \
wagoodman/dive \
ghdl/ghdl:buster-gcc-8.3.0
@tgingold, in the first screenshot I'm concerned with the lib/ghdl/*/*/*.o files. Should all of those be there? With the LLVM or mcode backends, only lib/ghdl/*/*/*.cf files exist. Regarding the second screenshot, libexec/gcc/x86_64-pc-linux-gnu/8.3.0/cc1 requires 239MB, and ghdl1 in the same dir requires 237MB! Is this ok?
Last, as seen in the second screenshot, info and man pages for ghdl are added. However, I believe this is not the same man documentation that we generate with sphinx (see ghdl/ghdl#733). Where does it come from?
It would be great to have some quick getting-started instructions for people like me who haven't used docker all that much. I tried running docker pull ghdl/ghdl as it was mentioned here, but I just get:

Using default tag: latest
Error response from daemon: manifest for ghdl/ghdl:latest not found

so I guess there's something I'm missing.
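Since no latest tag is published, pulling requires an explicit tag; for instance (tag name taken from the size list above, a sketch of the expected usage):

```shell
# Pull a specific backend/distro tag instead of the non-existent :latest
docker pull ghdl/ghdl:buster-mcode
```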
It would be great if the release versions of GHDL remained on DockerHub, so users could decide to use the latest release instead of the latest nightly build.
From ghdl/ghdl#477
There is no built-in feature in Travis to merge all the artifacts and deploy to GitHub just once, instead of having each job edit the release. An external storage service, such as S3, is required to merge all of them in a dir and then deploy all at once. See https://docs.travis-ci.com/user/build-stages/#Data-persistence-between-stages-and-jobs Luckily, #489 includes using DockerHub, and we can reuse it as a replacement for S3. This avoids the requirement to handle credentials for a fourth service. The scheme is as follows:
Tarballs are saved in images built from scratch, i.e. ghdl-*-stretch-mcode.tgz.
Pack artifacts:
This stage mimics the nested loops of stage 0 (see #489). However, instead of building anything, it pulls the images created in the modified stage 1 and merges all the tarballs in a single directory. Then, a single deploy can be triggered in this stage.
This stage is especially useful for the following reasons:
Related to nightly builds mentioned in USE_CASES.md, images will always have tarballs corresponding to the latest successful build. It is kind of stupid to require docker in order to download a tarball, even if a 3-4 line shell script can be used to make it straightforward. Once again, play-with-docker can be used to execute the script, but this requires a Docker ID. Yet, acquiring the tgz is just a hopefully useful side effect, not the main feature.
It looks like some of the schedule-based build actions are not running due to inactivity in this repository.
As a result, some of the docker images on Docker Hub are now several months out of date.
Coming from ghdl/ghdl#883
I've been reworking the images in ghdl/docker (see the README and the repository list at dockerhub). Here are some notes which are still to be properly documented, and which are related to the conversation in ghdl/ghdl#883:
ghdl/run:*gcc images include lcov now. As a result, all ghdl/ghdl:*gcc* images should be ready-to-use for coverage analysis.
ghdl/vunit:* includes six images: mcode, mcode-master, llvm, llvm-master, gcc and gcc-master. These are built on top of ghdl/ghdl:buster-mcode, ghdl/ghdl:buster-llvm-7 and ghdl/ghdl:buster-gcc-8.3.0, respectively. *-master variants include the latest VUnit (master branch), while the others include the latest stable release (installed through pip).
ghdl/ext:gtkwave, ghdl/ext:broadway and ghdl/ext:latest all include VUnit too. The first two are based on ghdl/vunit:llvm-master, and the last one is based on ghdl/ext:ls-debian (which includes GHDL with the LLVM backend too).
@sjaeckel, these changes should allow you to rewrite your dockerfile as:
FROM ghdl/vunit:gcc
RUN apt-get update -qq \
&& apt-get -y install git vim \
&& apt-get autoclean -y && apt-get clean -y && apt-get autoremove -y
RUN mkdir /work && chmod 777 /work
Which makes me wonder: do you really need git and vim inside the container? You might have a good reason to do so. I'm just asking so I can help you rethink your workflow to get rid of those dependencies, should you want to do so.
[@sjaeckel]
and btw. I'd happily also test it!
I'd really appreciate if you could test this, since I have never used the coverage feature. There are five ghdl/ghdl:*gcc* tags and six tags in ghdl/vunit. I'm not going to ask you to test all of them! Should you want to try a few, I think the priority for your use case is:
Overall, please do not hesitate to request changes/features, such as including lcov in images with GCC (this was, honestly, so stupid of me) or providing images with VUnit and coverage support (i.e. the GCC backend).
When I run python run.py -g, I am expecting to see the output of my unit test displayed within gtkwave.
Test Bench
LIBRARY vunit_lib;
CONTEXT vunit_lib.vunit_context;
LIBRARY ieee;
USE ieee.std_logic_1164.ALL;
LIBRARY src;
USE src.mux_2i_1o;
ENTITY tb_mux_2i_1o IS
GENERIC (runner_cfg : STRING);
END ENTITY;
ARCHITECTURE tb OF tb_mux_2i_1o IS
SIGNAL button : std_logic := 'X';
SIGNAL input_1 : std_logic := '0';
SIGNAL input_2 : std_logic := '1';
SIGNAL output : std_logic;
BEGIN
mux : ENTITY mux_2i_1o PORT MAP (
button => button,
input_1 => input_1,
input_2 => input_2,
output => output
);
main : PROCESS
BEGIN
test_runner_setup(runner, runner_cfg);
WHILE test_suite LOOP
IF run("button press to switch inputs") THEN
button <= '0';
check(output = input_1);
wait for 20 ns;
button <= '1';
wait for 20 ns;
check(output = input_2, "Input did not change when button was pressed");
END IF;
END LOOP;
test_runner_cleanup(runner); -- Simulation ends here
WAIT;
END PROCESS;
END ARCHITECTURE;
As you can see, there are two 20 ns waits within the test bench, which means I am expecting the gtkwave GUI to show me a sim for a total of 40 ns. But when I run python run.py -g, I get the following output.
Steps to reproduce on Linux desktop environment
cd $HOME
git clone https://github.com/seanybaggins/ben_eaters_8_bit_computer.git
cd ben_eaters_8_bit_computer
docker run --volume="$HOME/.Xauthority:/root/.Xauthority:rw" --volume="$HOME/ben_eaters_8_bit_computer/:$HOME/ben_eaters_8_bit_computer" -w="$HOME/ben_eaters_8_bit_computer" --env="DISPLAY" --net=host ghdl/ext python run.py -g
docker start -a <default name>
I have a CI system setup that automatically runs simulation tests for our codebase and generates artifacts with code coverage reports.
Today we've noticed that all of our pipelines started failing all of a sudden. Quick investigation suggests that it's most likely caused by the new version of the ghdl/vunit:gcc images that we use.
Errors vary wildly between testbenches, but all are caused by ghdl-gcc's inability to compile VHDL, run a testbench or generate code coverage data. A few log extracts below (we use VUnit btw.):
Compiling into sci_master_lib: common/vhd/interfaces/wishbone/sci_master/rtl/sci_master_top.vhd failed
=== Command used: ===
/usr/local/bin/ghdl -a --workdir=/builds/cce/cce/vunit_out/ghdl/libraries/sci_master_lib --work=sci_master_lib --std=08 -P/builds/cce/cce/vunit_out/ghdl/libraries/vunit_lib -P/builds/cce/cce/vunit_out/ghdl/libraries/osvvm -P/builds/cce/cce/vunit_out/ghdl/libraries/sci_master_lib -frelaxed -fprofile-arcs -ftest-coverage /builds/cce/cce/common/vhd/interfaces/wishbone/sci_master/rtl/sci_master_top.vhd
=== Command output: ===
during IPA pass: profile
/builds/cce/cce/common/vhd/interfaces/wishbone/sci_master/rtl/sci_master_top.vhd: In function ‘sci_master_lib__sci_master_top__ARCH__rtl__P2__PROC’:
/builds/cce/cce/common/vhd/interfaces/wishbone/sci_master/rtl/sci_master_top.vhd:165: internal compiler error: in coverage_begin_function, at coverage.c:656
165 | sci_mode_o <= (sci_clk_i or master_command_clk_high) and (not master_command_clk_low);
|
0x60ead2 coverage_begin_function(unsigned int, unsigned int)
../../gcc-srcs/gcc/coverage.c:656
0xb01d14 branch_prob(bool)
../../gcc-srcs/gcc/profile.c:1233
0xc34b62 tree_profiling
../../gcc-srcs/gcc/tree-profile.c:793
0xc34b62 execute
../../gcc-srcs/gcc/tree-profile.c:898
Please submit a full bug report,
with preprocessed source if appropriate.
Please include the complete backtrace with any bug report.
See <https://gcc.gnu.org/bugs/> for instructions.
/usr/local/bin/ghdl: exec error
l_serdes_stop_bit_9770f3ec736766095851a7d3f5f1fec2ae269065/coverage
WARNING - Missing coverage directory: /builds/cce/cce/vunit_out/test_output/uut.aol_serdes_vunit_tb.aol_serdes_zero_bits_607788854b84d43782763aab602c90277ff3345e/coverage
WARNING - Missing coverage directory: /builds/cce/cce/vunit_out/test_output/uut.aol_serdes_vunit_tb.aol_serdes_analogue_patterns_461895921170e375859f53d6a68dd9be965fc4c2/coverage
WARNING - Missing coverage directory: /builds/cce/cce/vunit_out/test_output/uut.aol_serdes_vunit_tb.aol_serdes_gated_transmission_a4d899af41db12640e66feea19a4c2568eae9de0/coverage
WARNING - Missing coverage directory: /builds/cce/cce/vunit_out/test_output/uut.aol_serdes_vunit_tb.aol_serdes_gated_to_normal_transmission_99cc1732306fbbbf9906b29a21db3e6b28fc45bd/coverage
WARNING - Missing coverage directory: /builds/cce/cce/vunit_out/test_output/uut.aol_serdes_vunit_tb.aol_serdes_address_patterns_6cb5d8aa3912c9a556294a7746c1c9f11a9c1979/coverage
WARNING - Missing coverage directory: /builds/cce/cce/vunit_out/test_output/uut.aol_serdes_vunit_tb.aol_serdes_parity_bits_a98bccb3b3c40fb4dba71942347a6558dd8ad898/coverage
WARNING - Missing coverage directory: /builds/cce/cce/vunit_out/test_output/uut.aol_serdes_vunit_tb.aol_serdes_break_sequence_133e6f90afa3241856d0fbd9fcc54bb706740063/coverage
WARNING - Missing coverage directory: /builds/cce/cce/vunit_out/test_output/uut.aol_serdes_vunit_tb.aol_serdes_start_bit_082961e4c64492079a2092f47e66325b09c7d3ce/coverage
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/vunit/ui/__init__.py", line 726, in main
all_ok = self._main(post_run)
File "/usr/local/lib/python3.9/dist-packages/vunit/ui/__init__.py", line 772, in _main
all_ok = self._main_run(post_run)
File "/usr/local/lib/python3.9/dist-packages/vunit/ui/__init__.py", line 819, in _main_run
post_run(results=Results(self._output_path, simulator_if, report))
File "/builds/cce/cce/analog_optical_link/common/tb/run.py", line 33, in post_run
results.merge_coverage(file_name="coverage_data")
File "/usr/local/lib/python3.9/dist-packages/vunit/ui/results.py", line 33, in merge_coverage
self._simulator_if.merge_coverage(file_name=file_name, args=args)
File "/usr/local/lib/python3.9/dist-packages/vunit/sim_if/ghdl.py", line 439, in merge_coverage
assert len(gcda_dirs) == 1, "Expected exactly one folder with gcda files"
AssertionError: Expected exactly one folder with gcda files
https://www.docker.com/pricing/resource-consumption-updates
What are the rate limits for pulling Docker images from the Docker Hub Registry?
Rate limits for Docker image pulls are based on the account type of the user requesting the image - not the account type of the image’s owner. These are defined on the pricing page. The highest entitlement a user has, based on their personal account and any orgs they belong to, will be used. Unauthenticated pull requests are “anonymous” and will be rate limited based on IP address rather than user ID. For more information on authenticating image pulls, please see this docs page.
How is a pull request defined for purposes of rate limiting?
A pull request is up to two GET requests to the registry URL path '/v2/<name>/manifests/<tag>'. This accounts for the fact that pull requests for multi-arch images require a manifest list to be downloaded followed by the actual image manifest for the required architecture. HEAD requests are not counted.
Note that all pull requests, including ones for images you already have, are counted by this method. This is the trade-off for not counting individual layers.
Are anonymous (unauthenticated) pulls rate-limited based on IP address?
Yes. Pull rates are limited based on individual IP address (e.g., for anonymous users: 100 pulls per 6 hours per IP address).
What about CI systems where pulls will be anonymous?
We recognize there are some circumstances where many pulls will be made that can not be authenticated. For example, cloud CI providers may host builds based on PRs to open source projects. The project owners may be unable to securely use their project’s Docker Hub credentials to authenticate pulls in this scenario, and the scale of these providers would likely trigger the anonymous rate limits. We will unblock these scenarios as necessary and continue iterating on our rate limiting mechanisms to improve the experience, in cooperation with these providers. Please reach out to [email protected] if you are encountering issues.
Will Docker offer dedicated plans for open source projects?
Yes, as part of Docker’s commitment to the open source community, we will be announcing the availability of new open source plans. To apply for an open source plan, complete our application at: https://www.docker.com/community/open-source-application.
For now, we should be safe. However, we might want to apply for an open source plan in the future.
/cc @tmeissner
The latest image a691ca20ad4b on docker.io is broken with this:
******************** GHDL Bug occurred ***************************
Please report this bug on https://github.com/ghdl/ghdl/issues
GHDL release: 1.0-dev (v0.37.0-794-ge854f72b@buster-mcode) [Dunoon edition]
Compiled with unknown compiler version
Target: x86_64-linux-gnu
/src/
Command line:
Exception SYSTEM.ASSERTIONS.ASSERT_FAILURE raised
Exception information:
raised SYSTEM.ASSERTIONS.ASSERT_FAILURE : vhdl-annotations.adb:1401
******************************************************************
ERROR: vhdl import failed.
The last revision I tried cb1eeee4ff96 worked fine.
As commented in #8, libgnat is a dependency of GHDL, but libgnarl is not.
We should check which piece of the source code is making libgnarl be added as a dependency.
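One way to confirm whether the built binary actually requests libgnarl is to list its NEEDED entries in the ELF dynamic section; a sketch, assuming the binary is installed at /usr/local/bin/ghdl inside the image:

```shell
# Show the shared libraries the binary declares as dependencies
readelf -d /usr/local/bin/ghdl | grep NEEDED
```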
As commented in #22:
I now added libboost-all-dev to image trellis. (...) that's the same package that is installed in the build image. Ideally, there should be a (smaller) package with runtime dependencies only. For example, there are libomp-dev and libomp5-7. Are you aware of any other package (or set of packages) that we can use instead of libboost-all-dev? Note that this is not only for trellis, but also for images nextpnr, nextpnr-ice40 and nextpnr-ecp5, since all of them depend on boost.
We should be able to install the specific boost packages instead of the catch-all one. If I get a chance I'll take a look.
Cool. Note that this does not affect any of the features we provide; it'd just be an interesting enhancement. Hence, rather than trying to guess it ourselves, it'd be ok to just remember to ask whenever we get the chance to talk to someone who is used to developing with boost.
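Incidentally, the exact boost runtime libraries can be read off the built binaries, which would tell us precisely which libboost-* runtime packages are needed; a sketch, assuming nextpnr-ice40 is on the PATH inside the image:

```shell
# List only the boost shared objects the binary is linked against
ldd "$(command -v nextpnr-ice40)" | grep -i boost
```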
Currently, all images are rolling releases, i.e. all of them are automatically updated if the corresponding travis job succeeds.
On the one hand, the 'buildtest' branches (mcode, mcodegpl, llvm and gcc) run the default testsuite before pushing the images. This ensures that fundamentally broken images are not pushed to dockerhub. Still, uncaught regressions might find their way.
On the other hand, most of the images created in branches master, synth and ext are unversioned. Some of them are very experimental (most of synth), but some others are expected to be used in "production" (i.e. the vunit job in ext, see ghdl/ghdl#883).
Therefore, even though users can use digests to pin their scripts to specific images, it would be desirable to include some kind of versioning. We need to think it through, though; supporting images for simulation, synthesis and/or LSP is starting to get complex!
In order to do so, yosys is required. Moreover, the test suite of yosys requires iverilog.
Hi,
is there a docker image, which contains the tools for the whole design flow (ghdl, ghdlsynth, yosys, nextpnr and icestorm)? I could only find images containing single tools.
Add a docker image which includes GHDL ready for debug (i.e., compiled with -g and including gdb in the image).
I am trying to use ghdl/ext:vunit-master and am running into issues because my VUnit run.py scripts expect git and GitPython to be present. In my use case, I am using a python function that uses GitPython to figure out where my git root directory is, to enable each run.py script to grab the correct libraries/packages/source .vhd files for each test. E.g., below is an example of a run.py script that needs to know where the git root directory is to find all my .vhd files without hard-coding every file path:
import git
def get_git_root():
repo = git.Repo('.', search_parent_directories=True)
return repo.working_tree_dir
...
lib = vu.add_library("<some_lib>")
lib.add_source_files(join(root, "src/utils/src/*.vhd"))
lib.add_source_files(join(root, "src/misc/src/*.vhd"))
lib.add_source_files(join(root, "src/memory/src/*.vhd"))
lib.add_source_files(join(root, "src/bert/src/*.vhd"))
lib.add_source_files(join(root, "src/bert/test/*.vhd"))
However, because git is not present, I am unable to use this function. I tried another method using subprocess.Popen() and ran into the same issue: the git rev-parse --show-toplevel command I was issuing did not work because git was not present.
If you do not want to add GitPython, that is fine as I can do a pip3 install or just use subprocess from the standard library. However, I think it would be really useful to at least have git.
The issue I see when I change anything about my repo is that hard-coded paths across a huge repository of HDL cores create a build nightmare to debug. Once my repo is more stable, I will be using a different method for finding libraries, and it will be extremely important for my build environment to know where the git root is.
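As a stopgap that needs neither git nor GitPython, the parent-directory search can be done with the standard library alone; this is a sketch (reusing the get_git_root name from the run.py above), not something the images provide:

```python
from pathlib import Path

def get_git_root(start="."):
    """Return the first ancestor of `start` that contains a .git entry."""
    path = Path(start).resolve()
    # Walk from `start` up to the filesystem root, like
    # GitPython's search_parent_directories=True does.
    for candidate in (path, *path.parents):
        if (candidate / ".git").exists():
            return str(candidate)
    raise FileNotFoundError(f"no .git directory found above {path}")
```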
For reference, this is what I am doing now when I run docker as suggested by VUnit developers:
#!/bin/sh
cd $(dirname $0)
if [ -d "vunit_out" ]; then rm -rf vunit_out; fi
docker run --rm -t \
-v /$(pwd)://work \
-w //work \
ghdl/ext:vunit-master sh -c ' \
VUNIT_SIMULATOR=ghdl; \
apt-get update -qq; \
apt-get install git; \
pip3 install GitPython; \
for f in $(find ./ -name 'run.py'); do python3 $f -v; done \
'
When the apt-get install git is executed, it hangs while waiting for the [Y/n] user input, and even when this is forced, it is still unable to install git.
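For what it's worth, the hang can likely be avoided by making apt-get non-interactive; a sketch of the modified container command (not an officially documented workflow):

```shell
docker run --rm -t \
  -v /$(pwd)://work \
  -w //work \
  ghdl/ext:vunit-master sh -c ' \
    export DEBIAN_FRONTEND=noninteractive; \
    apt-get update -qq; \
    apt-get install -y git; \
    pip3 install GitPython; \
    for f in $(find ./ -name "run.py"); do python3 "$f" -v; done \
  '
```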
Lastly, if you see a compelling reason for me to avoid this method, I am interested in hearing suggestions. I am fairly new to CI with docker, so this was just the method I thought would be easiest to integrate and maintain.
Requested in ghdl/ghdl#577
Looking at it again, this does not seem to be the issue. Closing.
Related to #32. Specifically #32 (comment) and #32 (comment)
Background/summary
115 MB could be saved in mcode images by not installing gcc and libc6-dev in run_debian.dockerfile. This does however affect the testsuite of ghdl. The following tests fail without gcc:
testsuite/gna/bug097
testsuite/gna/issue1226
testsuite/gna/issue1228
testsuite/gna/issue1233
testsuite/gna/issue1256
testsuite/gna/issue1326
testsuite/gna/issue450
testsuite/gna/issue531
testsuite/gna/issue98
testsuite/vpi/vpi001
testsuite/vpi/vpi002
testsuite/vpi/vpi003
Saving 115 MB from the mcode image is very tempting. That would make it very minimal, and perfect for CI.
GCC is added to mcode images for co-simulation purposes. When LLVM or GCC backends are used, GHDL can build C sources "internally". This allows providing VHDL sources and C sources, and let GHDL do the magic. With mcode, that's not possible, because it can only interact with pre-built shared libraries. Hence, a C compiler is required for users to convert their co-simulation C/C++ sources into a shared library. Providing it in the runtime image is convenient because it ensures that any image can be used for co-simulation with foreign languages.
We should not force all the users to download GCC, unless they need/want to.
Roadmap
There seems to be an issue with python packages in ghdl/vunit:gcc-master, and possibly/probably others. The vunit package does not appear to be installed in the python environment:
master [/home/lukas/work/repo/docker]$ docker run --rm --interactive --tty ghdl/vunit:gcc-master /bin/bash
root@4d2bca5f6a11:/# python3 -c "import vunit"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'vunit'
root@4d2bca5f6a11:/#
This issue appeared only recently. CI pipelines started failing in the tsfpga project after about 06:00 GMT+2 this morning.
To me it looks like some fatal python/pip problem is present:
master [/home/lukas/work/repo/docker]$ docker run --rm --interactive --tty ghdl/vunit:gcc-master /bin/bash
root@b5b80c839da3:/# python3 -m pip list
Package Version
---------- -------
gcovr 4.2
Jinja2 2.11.2
lxml 4.5.2
MarkupSafe 1.1.1
pip 20.2.2
setuptools 50.0.0
root@b5b80c839da3:/# python3 -m pip install vunit_hdl
Collecting vunit_hdl
Downloading vunit_hdl-4.4.0.tar.gz (6.3 MB)
|████████████████████████████████| 6.3 MB 8.6 MB/s
Collecting colorama
Downloading colorama-0.4.3-py2.py3-none-any.whl (15 kB)
Using legacy 'setup.py install' for vunit-hdl, since package 'wheel' is not installed.
Installing collected packages: colorama, vunit-hdl
Running setup.py install for vunit-hdl ... done
Successfully installed colorama-0.4.3 vunit-hdl
root@b5b80c839da3:/# python3 -m pip list
Package Version
---------- -------
colorama 0.4.3
gcovr 4.2
Jinja2 2.11.2
lxml 4.5.2
MarkupSafe 1.1.1
pip 20.2.2
setuptools 50.0.0
root@b5b80c839da3:/# python3 -c "import vunit"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'vunit'
root@b5b80c839da3:/#
So the vunit_hdl package does not appear to be installed according to pip. And even after a manual install it is not listed, and can not be imported.
I tried the same thing in the python:3-slim-buster image (upon which ghdl/vunit:gcc-master is based? I had a hard time following the scripts) and the problem was not present:
master [/home/lukas/work/repo/docker]$ docker run --rm --interactive --tty python:3-slim-buster /bin/bash
root@29e906d4848a:/# python3 -m pip list
Package Version
---------- -------
pip 20.2.2
setuptools 49.3.1
wheel 0.34.2
root@29e906d4848a:/# python3 -m pip install vunit_hdl
Collecting vunit_hdl
Downloading vunit_hdl-4.4.0.tar.gz (6.3 MB)
|████████████████████████████████| 6.3 MB 10.0 MB/s
Collecting colorama
Downloading colorama-0.4.3-py2.py3-none-any.whl (15 kB)
Building wheels for collected packages: vunit-hdl
Building wheel for vunit-hdl (setup.py) ... done
Created wheel for vunit-hdl: filename=vunit_hdl-4.4.0-py3-none-any.whl size=6581190 sha256=d20b5911b017cc6d4214492934acd65fd9d6de96671f0ecd4031bc6fde87a72f
Stored in directory: /root/.cache/pip/wheels/a9/e2/17/e5b8e2569e52742b550213746e0e0042138036acb7aef13e52
Successfully built vunit-hdl
Installing collected packages: colorama, vunit-hdl
Successfully installed colorama-0.4.3 vunit-hdl-4.4.0
root@29e906d4848a:/# python3 -c "import vunit"
root@29e906d4848a:/#
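When pip reports a successful install but the interpreter cannot import the package, the two are usually disagreeing about the site-packages directory; a generic stdlib check (not specific to these images) that can help pin that down:

```python
import sys
import sysconfig

# Directory where this interpreter expects pure-Python packages
purelib = sysconfig.get_paths()["purelib"]
print("interpreter:", sys.executable)
print("purelib dir:", purelib)
# If pip installed into a directory that is not on sys.path,
# 'pip list' and 'import' will disagree, as observed above.
print("on sys.path:", purelib in sys.path)
```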
I am a member, but not the owner, of the ghdl GitHub organization. In this repo (ghdl/docker) the docker-owners team has Admin level access. I am a member of the team.
However, encrypting docker credentials with travis/enc-dockerhub.sh (as I did in my fork 1138-4EB/ghdl) seems not to work. See travis-ci/travis-ci#9670
When I tried it yesterday, I could not see the Settings tab when browsing this repo. It is possible that I did it too early. Should try it again.
Right now, credentials are set as hidden variables through the Travis CI GUI.
I am trying to run my tests on the ghdl/vunit images. Calling GHDL directly works, but both VUnit and Makefile fail.
Make output:
root@5ab5d015f669:/wrk# make file
ghdl -a --std=08 file_driver.vhdl
make: ghdl: Operation not permitted
make: *** [Makefile:14: file] Error 127
VUnit output:
root@466176e28305:/wrk/LHCbAurora# python3 run_ci.py *aurora_loopback_tb*
WARNING - /wrk/LHCbAurora/UVVM/uvvm_util/src/rand_pkg.vhd: failed to find library 'cyclic_queue_pkg'
WARNING - /wrk/LHCbAurora/UVVM/bitvis_vip_scoreboard/src/generic_sb_pkg.vhd: failed to find library 'sb_queue_pkg'
Re-compile not needed
Starting aurora_lib.aurora_loopback_tb.loopback_encode_decode
Output file: /wrk/LHCbAurora/vunit_out/test_output/aurora_lib.aurora_loopback_tb.loopback_encode_decode_466902a15048931ee0c2a30e712006ed37783cf8/output.txt
Traceback (most recent call last):
File "/opt/venv/lib/python3.11/site-packages/vunit/test/runner.py", line 244, in _run_test_suite
results = test_suite.run(output_path=output_path, read_output=read_output)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.11/site-packages/vunit/test/list.py", line 105, in run
test_ok = self._test_case.run(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.11/site-packages/vunit/test/suites.py", line 72, in run
results = self._run.run(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.11/site-packages/vunit/test/suites.py", line 178, in run
sim_ok = self._simulate(output_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.11/site-packages/vunit/test/suites.py", line 237, in _simulate
return self._simulator_if.simulate(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.11/site-packages/vunit/sim_if/ghdl.py", line 353, in simulate
proc = Process(cmd, env=gcov_env)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.11/site-packages/vunit/ostools.py", line 134, in __init__
self._reader.start()
File "/usr/lib/python3.11/threading.py", line 957, in start
_start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
Exception ignored in: <function Process.__del__ at 0x7fc97f993740>
Traceback (most recent call last):
File "/opt/venv/lib/python3.11/site-packages/vunit/ostools.py", line 240, in __del__
self.terminate()
File "/opt/venv/lib/python3.11/site-packages/vunit/ostools.py", line 234, in terminate
self._reader.join()
File "/usr/lib/python3.11/threading.py", line 1107, in join
raise RuntimeError("cannot join thread before it is started")
RuntimeError: cannot join thread before it is started
fail (P=0 S=0 F=1 T=1) aurora_lib.aurora_loopback_tb.loopback_encode_decode (0.1 seconds)
==== Summary ================================================================
fail aurora_lib.aurora_loopback_tb.loopback_encode_decode (0.1 seconds)
=============================================================================
pass 0 of 1
fail 1 of 1
=============================================================================
Total time was 0.1 seconds
Elapsed time was 0.1 seconds
=============================================================================
Some failed!
docker pull ghdl/ghdl
Using default tag: latest
Error response from daemon: manifest for ghdl/ghdl:latest not found: manifest unknown: manifest unknown
These are some notes about features provided by GitHub Actions that may be useful for us:
For example, you can have your workflow run on push events to master and release branches, only run on pull_request events that target the master branch, or run every day of the week from Monday to Friday at 02:00.
https://help.github.com/en/articles/events-that-trigger-workflows
GitHub Actions provides hosted runners for Linux, Windows and macOS. To change the operating system for your job, simply specify a different virtual machine. The available virtual machine types are:
- ubuntu-latest, ubuntu-18.04, or ubuntu-16.04
- windows-latest, windows-2019, or windows-2016
- macOS-latest or macOS-10.14
You can run workflows directly on the virtual machine or in a Docker container.
Each job in a workflow executes in a fresh instance of the virtual environment. All steps in the job execute in the same instance of the virtual environment, allowing the actions in that job to share information using the filesystem.
https://help.github.com/en/articles/virtual-environments-for-github-actions
With the matrix strategy, GitHub Actions can automatically run your jobs across a set of different values of your choosing.
GitHub Actions supports conditions on steps and jobs based on data present in your workflow context. To run a step only as part of a push and not in a pull_request, you simply specify a condition in the if: property based on the event name.
https://help.github.com/articles/workflow-syntax-for-github-actions#jobsjob_idstepsif
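A minimal workflow sketch combining the triggers, matrix and conditional step described in these notes (all names are illustrative):

```yaml
name: ci
on:
  push:
    branches: [master]
  schedule:
    - cron: '0 2 * * 1-5'   # weekdays at 02:00
jobs:
  build:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v1
      # Runs on push events only, not on pull requests
      - if: github.event_name == 'push'
        run: echo "deploying"
```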
Artifacts are the files created when you build and test your code. For example, artifacts might include binary or package files, test results, screenshots, or log files. When a run is complete, these files are removed from the virtual environment that ran your workflow and archived for you to download.
https://help.github.com/en/articles/managing-a-workflow-run#downloading-logs-and-artifacts
For end users:
For issues:
Hi. It seems that since ghdl/ghdl-yosys-plugin#98, ghdl/synth:beta and ghdl/synth:formal have not been generated. Both of them are 14 days outdated (the change was 9 days ago), while the rest of the ghdl/synth images were updated 15 hours ago.
Hi @eine, I'm trying to use Docker images to do a complete GHDL/yosys/nextpnr workflow. A simple example is here: https://github.com/antonblanchard/ghdl-yosys-blink
Right now the ghdl/synth:nextpnr image only looks to have ice40 support. Any thoughts on adding ECP5 to that image? I was hoping nextpnr would have their own Docker images, but I couldn't find them.
Project nextpnr supports a GUI based on Qt to visually explore the P&R procedure and/or the result. Currently available ghdl/synth:* images include nextpnr without the GUI, since those are to be used in CI environments.
However, we already provide other images with GUI tools, such as GtkWave. These can be easily used with either x11docker or runx, on GNU/Linux or Windows 10 hosts. Hence, should there be interest in having a ghdl/synth:nextpnr-gui image, we might add it.
Now all the stages declared in the Dockerfiles are built as images. For example:
# [run] Fedora 28
FROM fedora:28 AS mcode
RUN dnf --nodocs -y install libgnat gcc \
&& dnf clean all --enablerepo=\*
#---
FROM mcode AS llvm
RUN dnf --nodocs -y install zlib-devel \
&& dnf clean all --enablerepo=\*
RUN dnf --nodocs -y install llvm-libs \
&& dnf clean all --enablerepo=\*
#---
FROM mcode AS gcc-8.1.0
RUN dnf --nodocs -y install zlib-devel \
&& dnf clean all --enablerepo=\*
RUN dnf --nodocs -y install libstdc++* libstdc++*.i686 \
&& dnf clean all --enablerepo=\*
This could be cleaner if the installation of zlib-devel was defined in an intermediate 'dummy' stage:
# [run] Fedora 28
FROM fedora:28 AS do-mcode
RUN dnf --nodocs -y install libgnat gcc \
&& dnf clean all --enablerepo=\*
#---
FROM do-mcode AS zlib
RUN dnf --nodocs -y install zlib-devel \
&& dnf clean all --enablerepo=\*
#---
FROM zlib AS do-llvm
RUN dnf --nodocs -y install llvm-libs \
&& dnf clean all --enablerepo=\*
#---
FROM zlib AS do-gcc-8.1.0
RUN dnf --nodocs -y install libstdc++* libstdc++*.i686 \
&& dnf clean all --enablerepo=\*
Description
Latest docker (ghdl/ext) builds are not usable anymore due to the missing libgnarl-7 library.
Expected behaviour
If i try to run the docker image of any ghdl/ext: build :
docker run -it --rm ghdl/ext:vunit ghdl
i get this error
ghdl: error while loading shared libraries: libgnarl-7.so.1: cannot open shared object file: No such file or directory
And not the the ghdl version info.
How to reproduce?
Simply run :
docker run -it --rm ghdl/ext:vunit ghdl
Suggested fix
Add install libgnat-7 in dockerfile.
apt-get install -y libgnat-7
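A minimal Dockerfile sketch of that fix, assuming a Debian-based image where the libgnat-7 package also ships the GNAT runtime libraries (including libgnarl-7.so.1):

```dockerfile
# Sketch of the suggested fix (assumes a Debian stretch base image).
# On Debian, the libgnat-7 package provides the GNAT runtime libraries,
# including the libgnarl-7.so.1 reported missing above.
RUN apt-get update \
 && apt-get install -y --no-install-recommends libgnat-7 \
 && rm -rf /var/lib/apt/lists/*
```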
Currently, some artifacts are built from tarballs. As a result, users of the docker images cannot know exactly which version of GHDL is included. We should add a proper identifier.
We should also add a test/check before pushing images to DockerHub, to ensure that any image added in the future fulfils this requirement.
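A sketch of such a check, assuming the first line of `ghdl --version` inside an image looks like `GHDL <version> ...` (the exact output format is an assumption here):

```shell
# Sketch: validate the version string reported by an image before pushing.
# In CI, the input would come from something like:
#   docker run --rm ghdl/ghdl:buster-mcode ghdl --version | head -n 1
check_version() {
  # Accept lines starting with 'GHDL ' followed by a digit.
  echo "$1" | grep -qE '^GHDL [0-9]'
}
```

A CI job could then fail (and skip the push) whenever check_version rejects the reported string.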
@eine I'm trying to add an ECP5 PLL to microwatt, and looking at https://github.com/ghdl/ghdl-yosys-plugin/blob/master/examples/ecp5_versa/Makefile it seems I need components.vhdl from there to be able to pull in EHXPLLL().
I don't see components.vhdl in any of the docker images. Can we add it, or am I going about this the wrong way?
I recently asked this in the Gitter:
I'm using the ghdl/ext:latest image and would like to run the gtkwave GUI. I am SSH'd to a server with the image via MobaXTerm and can run GUI apps directly on that server. However, when I pass DISPLAY to the container, gtkwave gives an error. Is there a way to get this to work? The x11docker script that is mentioned seems like it's specifically for MSYS2?
I ended up figuring this out. For reference, I was able to accomplish this by volume-mounting my user's home directory and passing the HOME and DISPLAY env vars to the container. Note that home is passed to the container for the ~/.Xauthority file.
For example, after I SSH to Linux server using MobaXTerm, I run something along the lines of this to open the GUI:
docker run --user=$(id -u):$(id -g) --interactive $TTY --rm \
--cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host \
-e HOME=$HOME \
-e DISPLAY=$DISPLAY \
-v $HOME:$HOME \
... \
<image> <command>
where command might be python3 run.py -g. It might be good to add this to the USE_CASES, since it took me a lot of digging to get it working right. Probably not very secure, though.
[@tmeissner]
I've tested the ghdl/synth:formal Docker image. In the image, the tool directories under /opt/ are only readable by root. So, if I run the image as a non-root user, I can't use the tools as intended. I can live with installing in /opt; a simple change of permissions would be sufficient for me.
[@1138-4eb]
Is 755 ok?
[@tmeissner]
I think so
The subdirectories of ghdl and the other tools have correct permissions
So, only the three directories ghdl, yosys & z3 have to be changed to 755.
[@1138-4eb]
That makes sense. The 'wrong' command is just the mkdir before running tar. The content extracted by tar is correct. Oh, I'm not using tar anymore. I forgot that. Neither mkdir. The directory is implicitly created by COPY in the dockerfiles.
Although it was reported for ghdl/synth:formal, other images are likely to be affected too. At least ghdl/synth:beta and ghdl/synth:latest.
Coming from ghdl/ghdl#1578
/cc @tmeissner
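A minimal sketch of the permission fix in the dockerfiles, assuming the tools are placed in /opt/ghdl, /opt/yosys and /opt/z3:

```dockerfile
# Sketch: make the top-level tool directories world-readable/traversable
# (755) so non-root users can reach the tools inside them.
RUN chmod 755 /opt/ghdl /opt/yosys /opt/z3
```

With BuildKit, `COPY --chmod=0755 ...` could set the mode at copy time and avoid the extra layer, though that flag requires a fairly recent docker version.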