aswf-docker's Introduction

Docker Images for the Academy Software Foundation

More information:

Changes are documented in CHANGELOG.md

CI Images

These images are for Continuous Integration testing of the various projects managed by the ASWF. Each image (apart from ci-common) is available for multiple VFX Platform years.

Image Description
aswf/ci-common:1 A base CentOS 7 image with devtoolset-6 (GCC 6.3.1), Clang 6-10 and CUDA 10.2.
aswf/ci-common:2 A base CentOS 7 image with devtoolset-9.1 (GCC 9.3.1), Clang 10-14 and CUDA 11.4.
aswf/ci-common:3 A base Rocky Linux 8 image with gcc-toolset-11 (GCC 11.2.x), Clang 14-15 and CUDA 11.8.
aswf/ci-common:4 A base Rocky Linux 8 image with gcc-toolset-11 (GCC 11.2.x), Clang 16-17 and CUDA 12.3.
aswf/ci-base:2018 Based on aswf/ci-common:1 with most VFX Platform requirements pre-installed.
aswf/ci-base:2021 Based on aswf/ci-common:2 with most VFX Platform requirements pre-installed.
aswf/ci-base:2022 Based on aswf/ci-common:2 with most VFX Platform requirements pre-installed.
aswf/ci-base:2023 Based on aswf/ci-common:3 with most VFX Platform requirements pre-installed.
aswf/ci-base:2024 Based on aswf/ci-common:4 with most VFX Platform requirements pre-installed.
aswf/ci-openexr:2018 Based on aswf/ci-common:1, comes with all OpenEXR upstream dependencies pre-installed.
aswf/ci-openexr:2019 Based on aswf/ci-common:1, comes with all OpenEXR upstream dependencies pre-installed.
aswf/ci-openexr:2020 Based on aswf/ci-common:1, comes with all OpenEXR upstream dependencies pre-installed.
aswf/ci-openexr:2021 Based on aswf/ci-common:2, comes with all OpenEXR upstream dependencies pre-installed.
aswf/ci-openexr:2022 Based on aswf/ci-common:2, comes with all OpenEXR upstream dependencies pre-installed.
aswf/ci-openexr:2023 Based on aswf/ci-common:3, comes with all OpenEXR upstream dependencies pre-installed.
aswf/ci-openexr:2024 Based on aswf/ci-common:4, comes with all OpenEXR upstream dependencies pre-installed.
aswf/ci-ocio:2018 Based on aswf/ci-common:1, comes with all OpenColorIO upstream dependencies pre-installed.
aswf/ci-ocio:2019 Based on aswf/ci-common:1, comes with all OpenColorIO upstream dependencies pre-installed.
aswf/ci-ocio:2020 Based on aswf/ci-common:1, comes with all OpenColorIO upstream dependencies pre-installed.
aswf/ci-ocio:2021 Based on aswf/ci-common:2, comes with all OpenColorIO upstream dependencies pre-installed.
aswf/ci-ocio:2022 Based on aswf/ci-common:2, comes with all OpenColorIO upstream dependencies pre-installed.
aswf/ci-ocio:2023 Based on aswf/ci-common:3, comes with all OpenColorIO upstream dependencies pre-installed.
aswf/ci-ocio:2024 Based on aswf/ci-common:4, comes with all OpenColorIO upstream dependencies pre-installed.
aswf/ci-opencue:2018 Based on aswf/ci-common:1, comes with all OpenCue upstream dependencies pre-installed.
aswf/ci-opencue:2019 Based on aswf/ci-common:1, comes with all OpenCue upstream dependencies pre-installed.
aswf/ci-opencue:2020 Based on aswf/ci-common:1, comes with all OpenCue upstream dependencies pre-installed.
aswf/ci-opencue:2021 Based on aswf/ci-common:2, comes with all OpenCue upstream dependencies pre-installed.
aswf/ci-opencue:2022 Based on aswf/ci-common:2, comes with all OpenCue upstream dependencies pre-installed.
aswf/ci-opencue:2023 Based on aswf/ci-common:3, comes with all OpenCue upstream dependencies pre-installed.
aswf/ci-opencue:2024 Based on aswf/ci-common:4, comes with all OpenCue upstream dependencies pre-installed.
aswf/ci-openvdb:2018 Based on aswf/ci-common:1, comes with all OpenVDB upstream dependencies pre-installed.
aswf/ci-openvdb:2019 Based on aswf/ci-common:1, comes with all OpenVDB upstream dependencies pre-installed.
aswf/ci-openvdb:2020 Based on aswf/ci-common:1, comes with all OpenVDB upstream dependencies pre-installed.
aswf/ci-openvdb:2021 Based on aswf/ci-common:2, comes with all OpenVDB upstream dependencies pre-installed.
aswf/ci-openvdb:2022 Based on aswf/ci-common:2, comes with all OpenVDB upstream dependencies pre-installed.
aswf/ci-openvdb:2023 Based on aswf/ci-common:3, comes with all OpenVDB upstream dependencies pre-installed.
aswf/ci-openvdb:2024 Based on aswf/ci-common:4, comes with all OpenVDB upstream dependencies pre-installed.
aswf/ci-usd:2019 Based on aswf/ci-common:1, comes with all USD upstream dependencies pre-installed.
aswf/ci-usd:2020 Based on aswf/ci-common:1, comes with all USD upstream dependencies pre-installed.
aswf/ci-usd:2021 Based on aswf/ci-common:2, comes with all USD upstream dependencies pre-installed.
aswf/ci-usd:2022 Based on aswf/ci-common:2, comes with all USD upstream dependencies pre-installed.
aswf/ci-usd:2023 Based on aswf/ci-common:3, comes with all USD upstream dependencies pre-installed.
aswf/ci-usd:2024 Based on aswf/ci-common:4, comes with all USD upstream dependencies pre-installed.
aswf/ci-osl:2018 Based on aswf/ci-common:1, comes with all OpenShadingLanguage upstream dependencies pre-installed.
aswf/ci-osl:2019 Based on aswf/ci-common:1, comes with all OpenShadingLanguage upstream dependencies pre-installed.
aswf/ci-osl:2020 Based on aswf/ci-common:1, comes with all OpenShadingLanguage upstream dependencies pre-installed.
aswf/ci-osl:2021 Based on aswf/ci-common:2, comes with all OpenShadingLanguage upstream dependencies pre-installed.
aswf/ci-osl:2022 Based on aswf/ci-common:2, comes with all OpenShadingLanguage upstream dependencies pre-installed.
aswf/ci-osl:2023 Based on aswf/ci-common:3, comes with all OpenShadingLanguage upstream dependencies pre-installed.
aswf/ci-osl:2024 Based on aswf/ci-common:4, comes with all OpenShadingLanguage upstream dependencies pre-installed.
aswf/ci-vfxall:2019 Based on aswf/ci-common:1, comes with most VFX packages pre-installed.
aswf/ci-vfxall:2020 Based on aswf/ci-common:1, comes with most VFX packages pre-installed.
aswf/ci-vfxall:2021 Based on aswf/ci-common:2, comes with most VFX packages pre-installed.
aswf/ci-vfxall:2022 Based on aswf/ci-common:2, comes with most VFX packages pre-installed.
aswf/ci-vfxall:2023 Based on aswf/ci-common:3, comes with most VFX packages pre-installed.
aswf/ci-vfxall:2024 Based on aswf/ci-common:4, comes with most VFX packages pre-installed.

Versions

The ASWF_VFXPLATFORM_VERSION is the calendar year mentioned in the VFX Platform, e.g. 2019.

The ASWF_VERSION is a semantic version made of the ASWF_VFXPLATFORM_VERSION as the major version number and a minor version number that indicates minor changes in the Docker image still pointing to the same calendar year, e.g. 2019.0 would be followed, if necessary, by 2019.1. The minor version does not correspond to a calendar month or quarter; it solely expresses that the image has changed internally. A patch version could also be added.

Image Tags

The most precise version tag is the ASWF_VERSION of the image, e.g. aswf/ci-base:2019.0, but it is recommended to use the ASWF_VFXPLATFORM_VERSION as the tag to use in CI pipelines, e.g. aswf/ci-openexr:2019.

The latest tag points to the current VFX Platform year images, e.g. aswf/ci-openexr:latest currently points to aswf/ci-openexr:2019.0 and will be updated to point to aswf/ci-openexr:2020.0 in the calendar year 2020.
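
For example, pulling by the recommended year tag versus pinning the most precise version tag (image names taken from the table above):

docker pull aswf/ci-openexr:2019   # recommended for CI pipelines: the VFX Platform year tag
docker pull aswf/ci-base:2019.0    # the most precise ASWF_VERSION tag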

Testing Images

There is another Docker Hub organization, aswftesting, that holds copies of the aswf Docker images for general testing and experimentation. Images can be pushed there by any fork of the official repo as long as the branch is called testing. Images in this org can change without notice and could be broken in many unexpected ways!

To get write access to the aswftesting Docker Hub organization, open a Jira issue.

Status

As of November 2021 there are fully VFX Platform compliant images for 2018, 2019, 2020, 2021 and 2022. Note that the 2018 images still exist but are no longer maintained or rebuilt, which means they may be obsolete (especially the OS part).

CI Packages

To decouple the building of packages that can take a long time (such as Clang, Qt and USD) from the management of the CI images, packages are built and stored in "scratch" Docker images that Docker can copy into the CI images at image build time.

Storing these CI packages as Docker images has the additional benefit of being completely free to host on Docker Hub. The main drawback is that tarballs are not directly available for download; however, generating one is trivial, and the provided download-package.sh script can create a local tarball from any package.
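
As an illustration only (this is not the download-package.sh script itself; the package image name is taken from the build examples further down), the contents of any CI package image can be exported to a tarball with plain Docker commands:

# Create a stopped container from the package image, export its filesystem, then clean up
docker create --name usd-pkg aswftesting/ci-package-usd:2019
docker export usd-pkg | gzip > ci-package-usd-2019.tar.gz
docker rm usd-pkg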

CI packages are built with docker buildx, using experimental Dockerfile syntax that allows cache folders to be mounted at build time. The Docker BuildKit system allows many packages to be built in parallel efficiently, with support for ccache.
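
For reference, a minimal sketch of such an invocation; the Dockerfile path and target name here are assumptions for illustration, not the exact layout driven by the aswfdocker tool:

# Build with buildx (BuildKit) so RUN --mount=type=cache directives are honoured
docker buildx build -f packages/Dockerfile --target usd .
# Or enable BuildKit for a classic docker build
DOCKER_BUILDKIT=1 docker build -f packages/Dockerfile --target usd .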

Python Utilities

Check aswfdocker for python utility usage.

Manual Builds

To build packages and images locally, first follow the instructions to install the aswfdocker Python utility.
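
A minimal sketch of one way to do that from a checkout of this repository (assuming a Python 3 environment with pip; the repository root contains the setup script, as shown in the installation issue further down):

git clone https://github.com/AcademySoftwareFoundation/aswf-docker
cd aswf-docker
pip3 install .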

Packages

Packages require a recent Docker version with buildx installed and enabled.

To build all packages (very unlikely to succeed unless run on a very very powerful machine!):

aswfdocker --verbose build -t PACKAGE

To build a single package, e.g. USD:

# First list the available CI packages to know which package belongs to which "group":
aswfdocker packages
# Then run the build
aswfdocker --verbose build -t PACKAGE --group vfx --version 2019 --target usd
# Or the simpler but less flexible syntax:
aswfdocker build -n aswftesting/ci-package-usd:2019

Images

Images can be built with recent Docker versions; buildx is not required, but it is recommended to speed up large builds.

To build all images (very unlikely to succeed unless run on a very very powerful machine!):

aswfdocker --verbose build -t IMAGE

To build a single image:

# First list the available CI images to know which image belongs to which "group":
aswfdocker images
# Then run the build
aswfdocker --verbose build -t IMAGE --group vfx1 --version 2019 --target openexr
# Or the simpler but less flexible syntax:
aswfdocker build -n aswftesting/ci-openexr:2019

aswf-docker's People

Contributors

aloysbaillet, bcipriano, glevner, jfpanisset, kelsolaar, lgritz, pmolodo, simran-b, tykeal

aswf-docker's Issues

opentimelineio

It would be great to have an image for OpenTimelineIO as well! @aloysbaillet I'm working on streamlining the cmake build for OTIO, if any accommodations are necessary for a docker image, now would be an opportune moment to work that in.

Create new ci-openvdb and ci-osl Clang variants

The OpenVDB and OSL projects would like containers with different versions of LLVM/Clang installed than the default.

ci-openvdb would like LLVM 6 -> 10 and ci-osl would like LLVM 7 -> 10.

The pre-compiled binaries published by LLVM have GLIBCXX_USE_CXX11_ABI=1, which makes them incompatible with other dependencies in the ASWF Docker containers.

@aloysbaillet proposes the following for OpenVDB:

aswf/ci-openvdb:2018.clang6
aswf/ci-openvdb:2018.clang7
aswf/ci-openvdb:2018.clang8
aswf/ci-openvdb:2018.clang9
aswf/ci-openvdb:2019.clang6
aswf/ci-openvdb:2019.clang7
aswf/ci-openvdb:2019.clang8
aswf/ci-openvdb:2019.clang9

(@lgritz @Idclip)

OpenVDB blosc version update

Hi all,

Could we please get all the OpenVDB images updated to use Blosc 1.17.0 (bumped from the current 1.5.0)?

Thanks!

Nick

install packages to a different root directory

We need to install packages to a directory other than /usr/local, because we need to be able to access multiple VFX reference platforms without having to switch between different Docker images.

The shell scripts that build each package allow you to specify any location you want, which is a good start. But adapting the Dockerfiles to build and install all the packages to a different location is a huge headache. (For me, anyway.)

Has anybody else out there been confronted with this problem? And perhaps found a solution?

GLEW CMake config error in 2020 images

While moving OCIO CI to GH Actions I encountered a linking error with the GLEW install in the aswf/ci-base:2020 images. When configuring a build with CMake within this container, GLEW is found through the default FindGLEW.cmake module with the following output:

-- Found GLEW: /usr/local/lib64/cmake/glew/glew-config.cmake

In the aswf/ci-base:2019 the same configure step finds GLEW at:

-- Found GLEW: /usr/local/include

The 2019 result links properly, but the 2020 configuration produces the following linking error during the OCIO build:

[ 73%] Linking CXX executable ociodisplay
../../libutils/oglbuilder/libOpenColorIOoglbuilder.a(glsl.cpp.o): In function `OpenColorIO_v2_0dev::OpenGLBuilder::Uniform::setUp(unsigned int)':
glsl.cpp:(.text+0x122): undefined reference to `__glewGetUniformLocation'
...
CMakeFiles/ociodisplay.dir/main.cpp.o: In function `main':
main.cpp:(.text.startup+0x65): undefined reference to `glewInit'
main.cpp:(.text.startup+0x6f): undefined reference to `glewIsSupported'
collect2: error: ld returned 1 exit status
gmake[2]: *** [src/apps/ociodisplay/ociodisplay] Error 1

I worked around the failure by deleting the GLEW CMake config before running cmake with:

sudo rm -rf /usr/local/lib64/cmake/glew

Following that, the find module behaves identically to the 2019 container.

OCIO VFX Platform 2020 Containers with Python 3

Hello,

I was looking at the current container definitions in the hope to use them for the OpenColorIO-Config-ACES repo but I see that the definitions are for Python 2.7 and VFX Platform 2019:

https://github.com/AcademySoftwareFoundation/aswf-docker/blob/master/ci-ocio/Dockerfile#L6

https://github.com/AcademySoftwareFoundation/aswf-docker/blob/master/ci-ocio/Dockerfile#L22

Even looking at what seems to be the cutting edge here: https://hub.docker.com/layers/aswf/ci-ocio/2021.1/images/sha256-7214648a6f878259024ad4f995c939f6e19a72168b1f70cbe06502f9a475d883?context=explore

I still see Python 2.7.

Did I miss something?

Cheers,

Thomas

Add LLVM_SYMBOLIZER to base images

We use various clang sanitizers in OpenVDB's CI. The address sanitizer requires the llvm-symbolizer binary to create proper reports. It would be awesome if this could be built and provided as part of the base images!

cppunit missing from vfxall

Not sure if this is intentional or not, but cppunit is being brought in but not copied in the ci-vfxall dockerfile.

OIIO missing OCIO binding and OCIO missing tools from source install

I may be misunderstanding how to use these images, but when checking inside the aswftesting/ci-vfxall image there are a couple of incomplete pieces relating to OCIO. The same issues noted below are also true in aswf/ci-ocio.

Docker image was installed with:
docker pull aswftesting/ci-vfxall

From the OCIO installation docs, https://opencolorio.org/installation.html, I'd expect to find these in the /usr/local/bin directory, but they do not exist there, or anywhere else when searched from /:

ociobakelut
ociocheck

When running
oiiotool --help

It outputs this info about color config:

Color configuration: built-in
Known color spaces: "linear", "default", "rgb", "RGB", "sRGB", "Rec709"
No OpenColorIO support was enabled at build time.

Compiling against LLVM fails with Clang on CY < 2021 and C++ >= 17

Hi,

This is a report against the aswf/ci-vfxall images for CY 2019 and 2020.

These images provide, along with Clang, LLVM v7.0.1 static libraries. However, it is not possible to compile applications that link against them under the following circumstances:

  • Using clang++ as the compiler
  • Targeting C++17 or higher
  • Using GNU's libstdc++ as the STL implementation (i.e. -stdlib=libstdc++, which is the default anyway)

A minimal test case is the following test.cpp file:

#include <llvm/IR/IRBuilder.h>

int main()
{
    return 0;
}

If, under a suitable Docker instance, one tries clang++ -H -I/usr/local/include -std=gnu++17 test.cpp, the compilation fails with (ignoring obvious linker errors):

[root@1480e7eee389 /]# clang++ -I/usr/local/include -std=c++17 test.cpp
In file included from test.cpp:1:
In file included from /usr/local/include/llvm/IR/IRBuilder.h:19:
In file included from /usr/local/include/llvm/ADT/ArrayRef.h:13:
In file included from /usr/local/include/llvm/ADT/Hashing.h:49:
In file included from /usr/local/include/llvm/Support/Host.h:17:
In file included from /usr/local/include/llvm/ADT/StringMap.h:17:
In file included from /usr/local/include/llvm/ADT/StringRef.h:13:
In file included from /usr/local/include/llvm/ADT/STLExtras.h:20:
In file included from /usr/local/include/llvm/ADT/Optional.h:20:
In file included from /usr/local/include/llvm/Support/AlignOf.h:17:
/usr/local/include/llvm/Support/Compiler.h:524:30: error: no member named 'align_val_t'
      in namespace 'std'
                        std::align_val_t(Alignment)
                        ~~~~~^
/usr/local/include/llvm/Support/Compiler.h:544:26: error: no member named 'align_val_t'
      in namespace 'std'
                    std::align_val_t(Alignment)
                    ~~~~~^
2 errors generated.

This is explained when using the -H flag. According to 1, std::align_val_t is defined in <new>; Clang 7, however, uses the following GCC 6.3.1 header:

[root@1480e7eee389 /]# clang++ -H -I/usr/local/include -std=c++17 test.cpp 2>&1 | grep new
........... /opt/rh/devtoolset-6/root/usr/lib/gcc/x86_64-redhat-linux/6.3.1/../../../../include/c++/6.3.1/new
.................. /opt/rh/devtoolset-6/root/usr/lib/gcc/x86_64-redhat-linux/6.3.1/../../../../include/c++/6.3.1/ext/new_allocator.h

A quick grep of /opt/rh/devtoolset-6/root/usr/include/c++/6.3.1/new shows that it does not include std::align_val_t 2. I've noticed there's a /usr/local/include/c++/v1/new LLVM header which does include the missing class, however.

The most I've been able to find about this issue is this revision in LLVM 3.

There are two possible workarounds:

  • Changing the target libc to Clang's (-stdlib=libc++).
  • Adding the -fno-aligned-allocation flag.
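
Applied to the minimal test case above, either workaround looks like this (clang++ flags only, taken from the list above; no claim is made about which one the images should adopt):

clang++ -I/usr/local/include -std=c++17 -stdlib=libc++ test.cpp
clang++ -I/usr/local/include -std=c++17 -fno-aligned-allocation test.cpp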

A fix would be to sync the Clang and GCC/Glibc versions so that the former does not attempt to use too-new features. Alternatively, this desync could be documented in the aswf-docker README.

Let me know if you need additional information!

๐Ÿ‘‹๐Ÿผ

Yum broken in most ci images

The Python install needs to avoid overriding the system Python. Ideally we would leave the default python as the custom-built one and provide a customised "yum" script that reverts to the system Python, so it stays transparent for build scripts.

docker run -it aswf/ci-base:2018.0 bash
yum search anything
There was a problem importing one of the Python modules
required to run yum. The error leading to this problem was:

   No module named yum

Please install a package which provides this module, or
verify that the module is installed correctly.

It's possible that the above module doesn't match the
current version of Python, which is:
2.7.15 (default, Sep  3 2019, 00:22:02) 
[GCC 6.3.1 20170216 (Red Hat 6.3.1-3)]

If you cannot solve this problem yourself, please go to 
the yum faq at:
  http://yum.baseurl.org/wiki/Faq
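
A minimal sketch of the wrapper idea described above, assuming the stock CentOS 7 locations for the system Python and yum (hypothetical, not the actual fix shipped in the images):

#!/bin/bash
# /usr/local/bin/yum wrapper: run yum with the system interpreter, not the custom-built python
exec /usr/bin/python /usr/bin/yum "$@"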

Add MaterialX to ci-usd and ci-vfxall images

Since MaterialX is now well integrated into USD it is time to update the ASWF docker images to include it!
MaterialX doesn't seem to bring any new dependency and should be fairly easy to build as its own package.

Add Vulkan SDK, e.g. to build Qt Vulkan support

Adding it to Qt won't interfere with the host machine if Vulkan is not available (Qt dynamically loads the symbols at runtime), but the SDK needs to be present to build the code that creates a Vulkan context.

This may also open up future uses of Vulkan, e.g. for USD hdStorm or OpenSubdiv once they support Vulkan.

Add GTest to OpenVDB images

Hi all,

VDB switched some of its unit tests over to GTest a while ago, with the minimum supported GTest version being 1.8.X (since bumped to 1.10.X). It would be great if this could be added to the openvdb images.

Thanks!

Installation of aswf-docker - yaml issue

To use the ASWF Docker images, I installed the aswfdocker utility following the steps explained here. Even though it successfully installs the tool (i.e. aswfdocker works fine), it reports an error around yaml during the installation:

  1. git clone https://github.com/AcademySoftwareFoundation/aswf-docker
  2. cd aswf-docker
  3. python3 setup.py install

The installation is on macOS 10.15.7 with docker 20.10.7

Installed /usr/local/lib/python3.9/site-packages/requests-2.23.0-py3.9.egg
Searching for pyyaml==5.3.1
Reading https://pypi.org/simple/pyyaml/
Downloading https://files.pythonhosted.org/packages/64/c2/b80047c7ac2478f9501676c988a5411ed5572f35d1beff9cae07d321512c/PyYAML-5.3.1.tar.gz#sha256=b8eac752c5e14d3eca0e6dd9199cd627518cb5ec06add0de9d32baeee6fe645d
Best match: PyYAML 5.3.1
Processing PyYAML-5.3.1.tar.gz
Writing /var/folders/bv/l130z2yj1h322ry0n1bhk6th0000gp/T/easy_install-affusizc/PyYAML-5.3.1/setup.cfg
Running PyYAML-5.3.1/setup.py -q bdist_egg --dist-dir /var/folders/bv/l130z2yj1h322ry0n1bhk6th0000gp/T/easy_install-affusizc/PyYAML-5.3.1/egg-dist-tmp-oxg38707
warning: the 'license_file' option is deprecated, use 'license_files' instead
In file included from ext/_yaml.c:596:
ext/_yaml.h:2:10: fatal error: 'yaml.h' file not found
#include <yaml.h>
         ^~~~~~~~
1 error generated.
Error compiling module, falling back to pure Python
zip_safe flag not set; analyzing archive contents...
Copying PyYAML-5.3.1-py3.9-macosx-10.15-x86_64.egg to /usr/local/lib/python3.9/site-packages
Adding PyYAML 5.3.1 to easy-install.pth file

Change MaterialX to shared libs

Found this in ci-vfxall_2022.

Currently MaterialX seems to be built as a static lib, which does not end up in /usr/local/lib.

It will however leave traces in pxrTargets.cmake, for example:

INTERFACE_LINK_LIBRARIES "tf;sdr;usdMtlx;usdShade;hd;hdMtlx;usdImaging;/usr/local/lib/libMaterialXCore.a;/usr/local/lib/libMaterialXFormat.a;/usr/local/lib/libMaterialXGenGlsl.a;/usr/local/lib/libMaterialXGenOsl.a;/usr/local/lib/libMaterialXGenShader.a;/usr/local/lib/libMaterialXRender.a;/usr/local/lib/libMaterialXRenderGlsl.a;/usr/local/lib/libMaterialXRenderHw.a;/usr/local/lib/libMaterialXRenderOsl.a"

This causes subsequent builds that use the cmake config to try to link against it.

I wonder if we should rather move to shared libs, or at least provide the static libraries.

Refactoring replicated build scripts?

For some tools like ccache and cmake, it looks like there are two different installs involved (and I'm probably getting details wrong, just started looking into this):

  • a global install in /opt/aswfbuilder which (I think) is used to build the infrastructure for building the ASWF containers, and which uses hard coded versions
  • a per-container install in /usr/local which is versioned based on the reference platform year

There are separate but very similar scripts for these two steps:

  • scripts/common/install_dev_cmake.sh : /opt/aswfbuilder with hard coded version
  • scripts/base/install_cmake.sh: /usr/local with per year version
  • scripts/common/install_dev_cache.sh: /opt/aswfbuilder with hard coded version
  • scripts/common/install_ccache.sh: /usr/local with per year version

I would make the following suggestions:

  • add/move definitions such as ARG CMAKE_VERSION=3.12.4 and ARG CCACHE_VERSION=3.7.4 to packages/Dockerfile instead of hard coding those in the scripts, assuming those get overridden once you are inside the package specific containers by the definitions in scripts/20xx/versions_base.sh
  • merge the cmake and ccache scripts into a single one which gets an additional command line parameter to specify the desired behavior of installing in /opt/aswfbuilder vs /usr/local

Of course it is possible that there are additional reasons for needing to keep these scripts split up, I've only given this a brief look.

Versioning ASWF containers and release tags

As per the discussion on PR 22, it might make sense to leverage the GitHub version tagging mechanism so we could relate the version of the code in this repo with the Docker Hub container tags.

To answer a specific question that came up in the initial discussion:

  • Would you agree tags should be added directly on master at release time by the azure pipeline job?

It would seem to make sense to automate this process: if a container image is pushed to Docker Hub and tagged there, the source tree should also be tagged accordingly, and automating that process would help to avoid mistakes.

Eventually we might also want to support GitHub Packages as an alternative repository for ASWF build containers. Since you cannot (easily) delete public packages pushed to GitHub Packages, we would need to make sure that the containers work before they are exposed.

Rename master to main

As per ASWF guidance, master branch should be renamed to main. Local branches will need to:

git branch -m master main
git fetch origin
git branch -u origin/main main
git remote set-head origin -a

Compile own LibRaw for OIIO?

OpenImageIO's oiiotool builds and links against LibRaw: since the base image for the ASWF containers is nvidia/cudagl:${ASWF_CUDA_VERSION}-devel-centos7, it brings a recent version of CentOS 7 (CentOS 7.8 in the build I'm looking at), which installs LibRaw-0.19.4-1.el7.x86_64.rpm.

Thus /usr/local/bin/oiiotool has an explicit dependency on a system installed libraw 0.19 DSO:

# ldd oiiotool | grep libraw
	libraw_r.so.19 => /lib64/libraw_r.so.19 (0x00007f97b33bf000)

Unfortunately, earlier but still fairly widely used versions of CentOS (for instance CentOS 7.6) have an earlier LibRaw version, LibRaw-0.14.8-5.el7.20120830git98d925.x86_64.rpm, which installs an earlier versioned libraw_r.so.5:

/lib64/libraw_r.so.5 -> libraw_r.so.5.0.0
/lib64/libraw_r.so.5.0.0

In the spirit of having ASWF container builds depend as little as possible on "system installed" components (especially for libraries which are particularly relevant to ASWF projects such as libraw), would it make sense to add libraw as an explicit build instead of depending on the system installed version?

[question] EL7 sunset - plans to use EL8 base?

Just as a question (not meant as a request): aswf-docker is currently based on a CentOS 7 base image. As CentOS 7's regular support ends in August 2021 and it goes into (limited) maintenance until 2024: is there any concrete plan or schedule to switch to an EL8-based base image? (I wouldn't assume short term?) Also, would such a switch happen for/as part of an upcoming CY or independently from the CY specs?

Qt5Gui CMake config error

Re: VFX Platform 2020, Qt 5.12.6.

The Qt5GuiConfigExtras.cmake file generated by aswf-docker looks for libGL.so in /usr/local/lib64. So if you try to build an executable or library based on Qt5Gui on a host where the library is installed in /usr/lib64, for example, cmake configuration fails.

An easy kludge would be to patch the cmake file, replacing /usr/local/lib64/libGL.so with just libGL.so. But there may be a cleaner solution, and the problem may be corrected in more recent versions of Qt...?
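
A hypothetical version of that kludge (the exact location of Qt5GuiConfigExtras.cmake inside the image is an assumption):

# Strip the hard-coded path so CMake resolves libGL.so itself
sudo sed -i 's|/usr/local/lib64/libGL\.so|libGL.so|g' \
    /usr/local/lib64/cmake/Qt5Gui/Qt5GuiConfigExtras.cmake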

lib vs lib64 in containers

Looking at the ci-vfxall image, it seems that most of the resulting DSOs end up in /usr/local/lib, but a few end up in /usr/local/lib64 (OpenGL, OIIO, Ptex, OSL). On CentOS 7 systems:

32 bit DSOs: /lib -> /usr/lib
64 bit DSOs: /lib64 -> /usr/lib64

Since these containers are 64-bit only, would it make sense to have all DSOs land in /usr/local/lib64? We could always have a backwards compatibility link from /usr/local/lib to /usr/local/lib64.

On the other hand systemd defines /lib64 as a "backwards compatibility symlink":

https://www.freedesktop.org/software/systemd/man/file-hierarchy.html#Compatibility%20Symlinks

so it's unclear what's the correct approach, but since for now these containers are somewhat CentOS 7 centric, I would suggest following the lead of CentOS 7, and having all DSOs land in /usr/local/lib64

[Linux images] PS1 with current working dir

This is a feature request :)

Would it be reasonable to build the linux images such that the bash PS1 env var includes the (absolute) current working dir?

Something basic like:

PS1='\u@\h:\w\$ '

Would help wonders when interactively running the image using bash.

Thanks!!

Move to GitHub Actions

Once GitHub Actions gains support for the YAML templates needed by this repo, we should be able to move from Azure to GHA.

Using aswf/ci-ocio:2021 to build ocio and run ociocheck results in "libOpenColorIO.so.2.0: cannot open shared object file"

Hello! I've been trying to figure out how to build ocio in one stage and then use it in another. What I'm trying to achieve ultimately is to use ocioconvert & ociocheck as a subprocess in python. However, this always throws this error: libOpenColorIO.so.2.0: cannot open shared object file: No such file or directory

My Dockerfile looks like this:

# build ocio first
FROM aswf/ci-ocio:2021 AS builder

WORKDIR /app

# this is the actual git repo of ocio
COPY ./OpenColorIO/ /app/OpenColorIO

WORKDIR /app/build

RUN cmake -DCMAKE_INSTALL_PREFIX=/app/ocio-done /app/OpenColorIO && cmake --build . -j 4 && make install

# and copy the output to a python container
FROM python:3.9
WORKDIR /app
COPY --from=builder /app/ocio-done/ /app/ocio
# copy custom aces config
COPY ./aces_1.2 /app/aces_1.2
COPY ./test_image.exr /app/test_image.exr

RUN apt-get update && apt-get install -y
ENV DYLD_LIBRARY_PATH /app/ocio-done/lib
ENV OCIO "/app/aces_1.2/config.ocio"

# now check if ociocheck actually works
RUN /app/ocio/bin/ociocheck

CMD ["bash"]  

but when running docker buildx build --tag conversion:1.0.6 . libOpenColorIO is not found:

error while loading shared libraries: libOpenColorIO.so.2.0: cannot open shared object file: No such file or directory

My thought was that some dependency is missing in the python container so I tried it as well inside the aswf/ci-ocio container like this:

# build ocio first
FROM aswf/ci-ocio:2021 AS builder

WORKDIR /app

# this is the actual git repo of ocio
COPY ./OpenColorIO/ /app/OpenColorIO

WORKDIR /app/build

RUN cmake -DCMAKE_INSTALL_PREFIX=/app/ocio-done /app/OpenColorIO && cmake --build . -j 4 && make install

# copy custom aces config
COPY ./aces_1.2 /app/aces_1.2
COPY ./test_image.exr /app/test_image.exr

ENV DYLD_LIBRARY_PATH /app/ocio-done/lib
ENV OCIO "/app/aces_1.2/config.ocio"

# now check if ociocheck actually works
RUN /app/ocio-done/bin/ociocheck

CMD ["bash"]  

But the same error happens.

Am I missing a dependency? My thought was that the aswf/ci-ocio:2021 container had everything I need to get this running or is my assumption here wrong?

Also, I'm really not sure if this is the easiest way to get ocio working but from what I can tell from the documentation, the only possibility is to build it yourself. Or am I missing something as well here?

Missing overview on dockerhub

It would be great to have a README.md file for each docker image which would be uploaded to https://hub.docker.com/r/aswf/ so that the overview would give some information about the images.

E.g. ideally https://hub.docker.com/r/aswf/ci-openexr would describe the fact that this image does not contain OpenEXR itself, as it is used by the OpenEXR project to run CI; rather, it contains all OpenEXR upstream dependencies...

I would propose to create a README.md file in each aswf-docker subfolder that would be used as a base to upload to each docker image overview.
E.g. a new ci-openexr/README.md file (sibling to the existing ci-openexr/Dockerfile) would describe the content of the ci-openexr docker image.
There is the special case of the packages folder which contains a single Dockerfile that can build dozens of docker ci package images, but we could have a generic readme file there that could be reused for all ci package images.

The python scripts that build docker images should do the readme upload, see here: https://github.com/AcademySoftwareFoundation/aswf-docker/blob/master/python/aswfdocker/builder.py#L110 and here: https://github.com/AcademySoftwareFoundation/aswf-docker/blob/master/python/aswfdocker/migrater.py#L64

Missing aswf/ci-ocio:2020 images in DockerHub

OCIO master now has Python 3 support, but in updating our CI (we moved to GH Actions) I noticed there are no aswf/ci-ocio:2020 images available in DockerHub. I dug through this repo trying to identify where this could be resolved, but it seems like it should be supported already? Thanks for any insight!

Improvements to individual README.mds?

Currently the jinja2 template which generates the README.md for each container lists "packages + implicitpackages" as one undifferentiated list. Would it make sense to explicitly separate the list of what comes from this specific container build vs what comes from the lower level containers it derives from?

On a related note (and I may be misunderstanding the nature of the hierarchy), the packages from ci-common don't seem to get listed: it would seem to be useful for instance to know exactly which version of clang / cuda / ... is in a container.

What about Windows containers?

Hey guys,

First of all let me say to anyone involved that you've done an amazing job with these containers and tagging them very well. Kudos! :)

I was wondering if there has been thoughts about providing Windows10 containers for the VFX Platform?

Has anyone started or tried already?

Create CY2020 images for ci-opencue

We're trying to create CY2020 images for ci-opencue so we can start to run tests against it.

AcademySoftwareFoundation/OpenCue#587

  • I would file this in Jira as the README suggests, but I don't think I have an account there? OpenCue doesn't use Jira, and it doesn't look like it's using the ASWF Auth0 system.
  • I think we can start with an aswftesting image, but I would need push access there.
  • If we're trying not to give credentials to aswftesting -- I think we just need a pretty standard ci-opencue build, with Python 3.7 and JDK 8.

Please let me know how to proceed here.

Upgrade to openjpeg2-devel

Current aswf-docker builds have OpenJpeg 1.5, which will likely be deprecated from OIIO support soon. I suggest upgrading to v2 with the openjpeg2-devel yum package in this script:

https://github.com/AcademySoftwareFoundation/aswf-docker/blob/master/scripts/common/install_yumpackages.sh

We encountered an issue with this in OCIO CI while treating warnings as errors, since v1.5 raises a warning in some recent OIIO releases.

New LLVM/clang versions soon?

If I understand https://hub.docker.com/r/aswf/ci-osl/tags?name=2019 correctly, there are a number of osl-2019 containers with differing clang versions -- 7, 8, 9, 10?

But osl-2020 only has clang7, I think. https://hub.docker.com/r/aswf/ci-osl/tags?name=2020

OSL is thinking of bumping our minimum LLVM requirement from 7 to 9. Looks like we can bump our VFX Platform 2019 tests to llvm9, but we wouldn't have a usable LLVM for 2020. Can we get a new osl-2020 with a newer version? It doesn't need to be a whole range of them, just llvm10 will do (that was the main llvm release in 2020).

Also, we don't have any containers with LLVM 12 (and now that it's released, 13). Maybe we could get those (at least 13) in a variant of the 2022 containers, since those will be the dominant versions used next year?

For next time you're touching these containers anyway. There is no rush that would necessitate building a bunch of new containers just for this right now.

vfxall clang containers not picking up correct clang version

I've tried to switch to the new vfxall clang variants in some CI but I only seem to be picking up clang7. I'm getting the correct version of clang in the equivalent openvdb images. For example:

aswf/ci-vfxall:2019-clang8: https://github.com/Idclip/openvdb/runs/1280431746?check_suite_focus=true
ASWF_VERSION=2019-clang8.10
CLANG_MAJOR_VERSION=7
This picks up clang7, but I expect clang8. I've tested clang6, 8 and 9; they all produce clang7.

aswf/ci-openvdb:2019-clang8: https://github.com/Idclip/openvdb/runs/1280432068?check_suite_focus=true
ASWF_VERSION=2019-clang8.6
CLANG_MAJOR_VERSION=8
correctly picks up clang8

I've configured the GitHub Actions CI in the same way for both; have I missed something? Thank you!

[README] "most most" typo

Minor - in the table under the CI Images section, the descriptions of the ci-base images contain "most most" which looks like a typo.

Ninja needs to be built from source

@lgritz reported that ninja is not functioning in the aswf docker images:

docker run -it --rm aswf/ci-common:1.1 ninja --version
ninja: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by ninja)
ninja: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by ninja)

Looks like it now needs to be built from source.
