
sclorg / s2i-python-container


Python container images based on Red Hat Software Collections and intended for OpenShift and general usage, that provide a platform for building and running Python applications. Users can choose between Red Hat Enterprise Linux, Fedora, and CentOS based images.

Home Page: http://softwarecollections.org

License: Apache License 2.0

Python 5.48% Makefile 0.17% Shell 88.32% Smarty 6.02%
python rhel centos dockerfile openshift s2i source-to-image container docker fedora

s2i-python-container's Introduction

Python container images

Build and push container images to Quay.io registry

Images available on Quay are:

This repository contains the source for building various versions of the Python application as a reproducible container image using source-to-image. Users can choose between RHEL, Fedora and CentOS based builder images. The resulting image can be run using podman or docker.

For more information about using these images with OpenShift, please see the official OpenShift Documentation.

For more information about concepts used in these container images, see the Landing page.

Note: while the examples in this README call podman, you can replace any such call with docker using the same arguments.

Contributing

This repository uses distgen > 1.0 to generate the per-version Python directories. Also make sure the jinja2 package >= 2.10 is available to distgen.

Files in directories for a specific Python version are generated from templates in the src directory with values from specs/multispec.yml.

A typical contribution workflow is:

  1. Add a feature or fix a bug in templates (src directory) or values (specs/multispec.yml file).
  2. Commit the changes.
  3. Regenerate all files via make generate-all.
  4. Commit generated files.
  5. Test changes via make test TARGET=fedora VERSIONS=3.9 which will build, tag and test an image in one step.
  6. Open a pull request!

For more information about contributing, see the Contribution Guidelines.

Versions

Python versions currently provided are:

RHEL versions currently supported are:

CentOS and CentOS Stream versions currently supported are:

Fedora versions currently supported are:

Download

To download one of the base Python images, follow the instructions in the registries mentioned above.

For example, a CentOS image can be downloaded with:

$ podman pull quay.io/centos7/python-38-centos7

Build

To build a Python image from scratch run:

$ git clone https://github.com/sclorg/s2i-python-container.git
$ cd s2i-python-container
$ make build TARGET=centos7 VERSIONS=3.8

Where TARGET might be one of the supported platforms mentioned above.

Note: if the VERSIONS parameter is omitted, the build/test action is performed on all provided versions of Python.

Usage

For information about usage of S2I Python images, see the documentation for each version in its folder.

Test

This repository also provides an S2I test framework, which launches tests to check the functionality of simple Python applications built on top of the s2i-python-container image.

$ cd s2i-python-container
$ make test TARGET=centos7 VERSIONS=3.8

Where TARGET might be one of the supported platforms mentioned above.

Note: if the VERSIONS parameter is omitted, the build/test action is performed on all provided versions of Python.

Repository organization

  • <python-version>

    • Dockerfile

      CentOS based Dockerfile.

    • Dockerfile.fedora

      Fedora based Dockerfile.

    • Dockerfile.rhel7 & Dockerfile.rhel8

      RHEL 7/8 based Dockerfile. In order to perform build or test actions on this Dockerfile you need to run the action on a properly subscribed RHEL machine.

    • s2i/bin/

      This folder contains scripts that are run by S2I:

      • assemble

Used to install the sources into the location where the application will be run and to prepare the application for deployment (e.g. installing dependencies).

      • run

        This script is responsible for running the application by using the application web server.

      • usage*

        This script prints the usage of this image.

    • test/

This folder contains an S2I test framework with multiple test applications exercising different approaches.

      • run

        Script that runs the S2I test framework.

s2i-python-container's People

Contributors

befeleme, bparees, chrisburr, csrwng, dependabot[bot], dinhxuanvu, ficap, fila43, frenzymadness, goern, grahamdumpleton, hhorak, hrnciar, jhadvig, mcyprian, mczp, mfojtik, mnagy, omron93, phracek, pkubatrh, praiskup, pvalena, rhcarvalho, soltysh, synfo, torsava, tumido, yselkowitz, zmiklank


s2i-python-container's Issues

Ignoring Django collectstatic errors.

FWIW, Heroku has just changed their buildpack to no longer ignore errors that occur when running Django collectstatic during the build phase. I saw this while on my way back home on planes, so I can't readily find the link to the page that mentioned it.

Highlighting this in case we need to review what our assemble script does in this situation.

https connections with unknown CA cert

At the moment sti-python's assemble script appears to honour https_proxy, but pip dies if the proxy's CA cert is unknown. sti-perl, by comparison, honours https_proxy and appears not to worry about unknown proxy CA certs.

It would be handy to be able to pass in an additional trusted CA cert without having to go to the hassle of rebuilding sti-python.

Cherrypy - Running natively in OS3

I have tried unsuccessfully to run CherryPy natively in OS3, as its own server (i.e. no gunicorn or WSGI server).

Any help is appreciated

app.py

import cherrypy

class Root(object):
    @cherrypy.expose
    def index(self):
        return "Hello World!"
if __name__ == '__main__':
    from cherrypy._cpnative_server import CPHTTPServer
    cherrypy.server.httpserver = CPHTTPServer(cherrypy.server)
    cherrypy.server.socket_host = "0.0.0.0"
    cherrypy.quickstart(Root(), '/')

requirements.txt

cherrypy

Log when use APP_FILE

Hello,

I have an application that is started via the APP_FILE environment variable. When I run the app in my local environment, it prints this message to the console:
[*] Waiting Messages. Press CTRL+C to finish

But when I run it in OpenShift, the message doesn't appear. If I access the app in OpenShift with "oc rsh" and manually execute /usr/libexec/s2i/run, the message appears.

In my app, the log message is generated by the code below:
print('[*] Waiting Messages. Press CTRL+C to finish')

Can you help me?

pip version is too old and can't install Linux wheels

You need at least pip version 8.1.0 in order to be able to install Linux wheels.

Some people (e.g. the Sentry server) are now making some of their software packages available only as Linux wheels. This means it is impossible to install their software. This could get worse over time.

The SCL pip version should be updated.

There should also be an easy way for a user to request that a newer version of pip be installed as part of the build, before packages from requirements.txt are installed. This will likely require adding a Python virtual environment first, as you can't upgrade pip in the per-user site-packages: the upgrade fails trying to uninstall the system copy first.
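As a sketch of that version gate (illustrative only: a naive parser for "major.minor[.patch]" version strings, not a real packaging API):

```python
# Hypothetical helper: Linux (manylinux) wheels need pip >= 8.1.0, per the
# issue above. Naively parses "major.minor[.patch]" version strings.
def supports_linux_wheels(pip_version):
    parts = tuple(int(p) for p in pip_version.split(".")[:3])
    parts += (0,) * (3 - len(parts))  # pad "8.1" to (8, 1, 0)
    return parts >= (8, 1, 0)
```

For example, the SCL-provided pip 1.5.6 shown elsewhere on this page fails this check.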

The project/__init__.py file should be removed from app-home-test-app test.

The app-home-test-app test is for testing the APP_HOME environment variable. The idea is that the specified directory becomes the root from which the application source code is picked up. When using APP_HOME, and for the purposes of the test, the project directory doesn't need to be a Python package itself. The project/__init__.py file should be removed as it is not needed.

Detection of app.py and warnings if missing.

Is the code in the run script of:

APP_FILE="${APP_FILE:-app.py}"
if [[ -f "$APP_FILE" ]]; then
  echo "---> Running application from Python script ($APP_FILE) ..."
  exec python "$APP_FILE"
fi
echo "WARNING: file '$APP_FILE' not found."

really the best way of doing this?

This will output a warning message of:

WARNING: file 'app.py' not found.

every time the script is run when APP_FILE was not set by the user.

Further, if someone supplies the APP_FILE environment variable explicitly but gets it wrong, with it pointing to a directory or a nonexistent file, should it really be falling through to later options?

It would be better to tell them explicitly that the APP_FILE they provided was wrong and exit.

The last thing they want is for it to silently match a later option and run something other than what they intended, which would be hard to diagnose.
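The stricter behaviour argued for here could look something like this (a sketch in Python for brevity; the real run script is shell, and the function name is hypothetical):

```python
# Sketch: only fall through when APP_FILE was defaulted; fail loudly when the
# user set it explicitly and it does not point at a regular file.
import os

def resolve_app_file(env):
    explicit = "APP_FILE" in env
    app_file = env.get("APP_FILE", "app.py")
    if os.path.isfile(app_file):
        return app_file
    if explicit:
        # user-provided value is wrong: error out instead of falling through
        raise FileNotFoundError("APP_FILE '%s' is not a regular file" % app_file)
    return None  # defaulted and absent: let later detection options run
```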

Should be using SCL httpd24 package and not operating system httpd packages.

To make the S2I builder useful in a corporate setting where SSO is required, the SCL httpd24 package and associated authentication/authorisation modules for Apache should be installed rather than the operating system httpd package.

A need for SSO support via Apache httpd when using Python S2I has been raised on behalf of a client in:

Other discussions have also surfaced internal requirements for this.

Rather than installing:

httpd httpd-devel

should install:

httpd24 httpd24-httpd-devel httpd24-mod_ssl httpd24-mod_auth_kerb httpd24-mod_ldap httpd24-mod_session

In installing these, it is likely one doesn't need to remove:

centos-logos

or

redhat-logos

as they would not be installed by SCL versions of httpd packages.

As an additional SCL package is being installed, would need to update contrib/etc/scl_enable to use:

source scl_source enable httpd24 python27

rather than just:

source scl_source enable python27

Reference to Python package will be different for each Python version of builder image.

Images used should include Apache httpd/httpd-devel packages.

The Docker images used as a base or which is created should include httpd/httpd-devel packages or equivalent.

This will allow the use of a more capable WSGI server such as Apache/mod_wsgi to be used.

The gunicorn WSGI server is not the best default server to rely on because it cannot serve static files in a performant way. It also does not provide a multi-threaded worker, which is an important option for WSGI applications that are more I/O bound than CPU bound. The lack of a multi-threaded worker forces the use of more worker processes and thus more overall memory, or even the need to scale out to more instances.

Multi-threading capabilities are therefore an important choice that users should have available, and Apache/mod_wsgi is the most flexible and robust option for that in this sort of environment. The more limited choices of gunicorn don't allow for it. Although one can use coroutines in gunicorn, these are not necessarily a good solution either, as users have to be very aware that all their code needs to be coroutine aware.

With Apache httpd runtime and development packages being available, it would be a very simple process for users to then use mod_wsgi-express.

All that would be required on the user's part would be to add mod_wsgi to requirements.txt and drop in an app.py file containing at minimum something like:

import mod_wsgi.server

mod_wsgi.server.start('wsgi.py', '--port', '8080')

rhel images on registry.access.redhat.com appear to be missing nss_wrapper

Image stream shows:

$ oc describe is python -n openshift
Name:           python
Created:        3 weeks ago
Labels:         <none>
Annotations:        openshift.io/image.dockerRepositoryCheck=2015-12-16T00:46:26Z
Docker Pull Spec:   172.30.224.221:5000/openshift/python

Tag Spec                                Created     PullSpec                            Image
2.7 registry.access.redhat.com/rhscl/python-27-rhel7:latest     3 weeks ago registry.access.redhat.com/rhscl/python-27-rhel7:latest     93d5039ac0fd9c1a9361b3459fdb005cddbdab694afe8d09abf18c064abebf20
3.3 registry.access.redhat.com/openshift3/python-33-rhel7:latest    3 weeks ago registry.access.redhat.com/openshift3/python-33-rhel7:latest    f50d07ce98c32fc94bdb75215468a4df26b9810aaa96cb5bbdfcf9a68571408d
3.4 registry.access.redhat.com/rhscl/python-34-rhel7:latest     3 weeks ago registry.access.redhat.com/rhscl/python-34-rhel7:latest     bae1743ada780f4f14ba992fb5719ecd8cb2360480546280369050e597f98b3f
latest  3.4                             3 weeks ago registry.access.redhat.com/rhscl/python-34-rhel7:latest     bae1743ada780f4f14ba992fb5719ecd8cb2360480546280369050e597f98b3f

Logs show lots of:

ERROR: ld.so: object 'libnss_wrapper.so' from LD_PRELOAD cannot be preloaded: ignored.

If rsh into the container created by the RHEL based S2I builder for Python, no nss_wrapper files exist.

bash-4.2$ find / -name '*nss_wrapper*' -print
find: '/proc/tty/driver': Permission denied
find: '/var/cache/ldconfig': Permission denied
find: '/var/lib/machines': Permission denied
find: '/var/lib/yum/history/2015-12-02/1': Permission denied
find: '/var/lib/yum/history/2015-12-02/2': Permission denied
find: '/var/lib/yum/history/2015-12-02/3': Permission denied
find: '/var/lib/yum/history/2015-12-02/4': Permission denied

bash-4.2$ ls -las /usr/lib64/libnss_*
 48 -rwxr-xr-x. 1 root root  46552 Oct 29 13:19 /usr/lib64/libnss_compat-2.17.so
  0 lrwxrwxrwx. 1 root root     30 Dec  3 03:40 /usr/lib64/libnss_compat.so -> ../../lib64/libnss_compat.so.2
  0 lrwxrwxrwx. 1 root root     21 Dec  2 09:26 /usr/lib64/libnss_compat.so.2 -> libnss_compat-2.17.so
 40 -rwxr-xr-x. 1 root root  38224 Oct 29 13:19 /usr/lib64/libnss_db-2.17.so
  0 lrwxrwxrwx. 1 root root     26 Dec  3 03:40 /usr/lib64/libnss_db.so -> ../../lib64/libnss_db.so.2
  0 lrwxrwxrwx. 1 root root     17 Dec  2 09:26 /usr/lib64/libnss_db.so.2 -> libnss_db-2.17.so
 28 -rwxr-xr-x. 1 root root  27512 Oct 29 13:19 /usr/lib64/libnss_dns-2.17.so
  0 lrwxrwxrwx. 1 root root     27 Dec  3 03:40 /usr/lib64/libnss_dns.so -> ../../lib64/libnss_dns.so.2
  0 lrwxrwxrwx. 1 root root     18 Dec  2 09:26 /usr/lib64/libnss_dns.so.2 -> libnss_dns-2.17.so
 64 -rwxr-xr-x. 1 root root  61928 Oct 29 13:19 /usr/lib64/libnss_files-2.17.so
  0 lrwxrwxrwx. 1 root root     29 Dec  3 03:40 /usr/lib64/libnss_files.so -> ../../lib64/libnss_files.so.2
  0 lrwxrwxrwx. 1 root root     20 Dec  2 09:26 /usr/lib64/libnss_files.so.2 -> libnss_files-2.17.so
 28 -rwxr-xr-x. 1 root root  28264 Oct 29 13:19 /usr/lib64/libnss_hesiod-2.17.so
  0 lrwxrwxrwx. 1 root root     30 Dec  3 03:40 /usr/lib64/libnss_hesiod.so -> ../../lib64/libnss_hesiod.so.2
  0 lrwxrwxrwx. 1 root root     21 Dec  2 09:26 /usr/lib64/libnss_hesiod.so.2 -> libnss_hesiod-2.17.so
 68 -rwxr-xr-x. 1 root root  66184 Oct 12 08:39 /usr/lib64/libnss_myhostname.so.2
260 -rwxr-xr-x. 1 root root 263688 Oct 12 08:39 /usr/lib64/libnss_mymachines.so.2
 56 -rwxr-xr-x. 1 root root  56792 Oct 29 13:19 /usr/lib64/libnss_nis-2.17.so
  0 lrwxrwxrwx. 1 root root     27 Dec  3 03:40 /usr/lib64/libnss_nis.so -> ../../lib64/libnss_nis.so.2
  0 lrwxrwxrwx. 1 root root     18 Dec  2 09:26 /usr/lib64/libnss_nis.so.2 -> libnss_nis-2.17.so
 68 -rwxr-xr-x. 1 root root  65744 Oct 29 13:19 /usr/lib64/libnss_nisplus-2.17.so
  0 lrwxrwxrwx. 1 root root     31 Dec  3 03:40 /usr/lib64/libnss_nisplus.so -> ../../lib64/libnss_nisplus.so.2
  0 lrwxrwxrwx. 1 root root     22 Dec  2 09:26 /usr/lib64/libnss_nisplus.so.2 -> libnss_nisplus-2.17.so

The file for generating the passwd/group files and setting environment variable is present.

bash-4.2$ cat /opt/app-root/etc/generate_container_user
# Set current user in nss_wrapper
PASSWD_DIR="/opt/app-root/etc"

export USER_ID=$(id -u)
export GROUP_ID=$(id -g)
envsubst < ${PASSWD_DIR}/passwd.template > ${PASSWD_DIR}/passwd
export LD_PRELOAD=libnss_wrapper.so
export NSS_WRAPPER_PASSWD=${PASSWD_DIR}/passwd
export NSS_WRAPPER_GROUP=/etc/group

Confused.

I don't remember if I saw this with CentOS based images, and I can't check right now as the remote cluster I am using decided to run out of disk space. So I need to get that fixed up before I can try.

Cannot deploy canonical WSGI hello world application.

One cannot deploy the most basic WSGI hello world application with the default builder.

That is, a Git repository such as:

will fail.

Attempting to do so will end up with:

wsgi-hello-world-1-fsss5        0/1       CrashLoopBackOff   5          2m

In order to even work out what the problem is, you need to know to use the --previous option to oc logs on the command line. Only with that magic do you find:

$ oc logs --previous wsgi-hello-world-1-fsss5
WARNING: file 'app.py' not found.
ERROR: ld.so: object 'libnss_wrapper.so' from LD_PRELOAD cannot be preloaded: ignored.
ERROR: ld.so: object 'libnss_wrapper.so' from LD_PRELOAD cannot be preloaded: ignored.
ERROR: ld.so: object 'libnss_wrapper.so' from LD_PRELOAD cannot be preloaded: ignored.
ERROR: ld.so: object 'libnss_wrapper.so' from LD_PRELOAD cannot be preloaded: ignored.
ERROR: don't know how to run your application.
Please set either APP_MODULE or APP_FILE environment variables, or create a file 'app.py' to launch your application.

It is not an acceptable experience that even the simplest WSGI hello world application will not work. It would have worked with OpenShift 2.

Unfortunately the builder doesn't provide a default production grade WSGI server which could be used. To solve the problem one would have to go down the non-ideal path once again (as is done for Django) of using a non production grade WSGI server, namely the one provided by the wsgiref module in the Python standard library.

Because the wsgiref simple server doesn't provide a command line way for being invoked, one would have to bundle or automatically generate a Python application stub, which would load the WSGI application from wsgi.py or other WSGI module file, and then launch the wsgiref simple server with it.
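Such a generated stub might look like the following (a minimal sketch, assuming the WSGI callable is named application and lives in wsgi.py; Python 3 shown, while the images discussed here shipped Python 2):

```python
# Hypothetical stub: load the WSGI callable from wsgi.py and serve it with
# the wsgiref simple server. wsgiref is fine for smoke testing but is not a
# production grade server, as noted above.
import importlib.util
from wsgiref.simple_server import make_server

def load_wsgi_app(path="wsgi.py", name="application"):
    spec = importlib.util.spec_from_file_location("wsgi_module", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return getattr(module, name)

def serve(path="wsgi.py", port=8080):
    # blocks forever; the run script would exec this as the container process
    make_server("0.0.0.0", port, load_wsgi_app(path)).serve_forever()
```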

Very big warnings should be issued in the log output to indicate that relying on the default is very bad and to switch to a production grade WSGI server.

Detection of Django manage.py not predictable.

The assemble script works out what is the Django manage.py file by using:

    # Find shallowest manage.py script, either ./manage.py or <project>/manage.py
    manage_file=$(find . -maxdepth 2 -type f -name 'manage.py' -printf '%d\t%P\n' | sort -nk1 | cut -f2 | head -1)

This is problematic when it could match multiple files because it takes the first in sorted order.

Someone could have manage.py in the root of their repository and then have a Django application directory called hello. They might for some reason create a file called hello/manage.py as part of their Django application code. The find command will pick up the manage.py in the sub directory rather than giving precedence to the manage.py in the root directory, because 'h' comes before 'm'.

The rule should be that if there is a manage.py in the root directory that it takes precedence.

If there is no manage.py in the root directory and you do resort to searching the immediate subdirectories, then it shouldn't blindly take the first one it finds as that still isn't predictable. It might initially use the right one, but then if someone later creates a new sub directory that comes earlier in sort order, then it will start using that new one when it shouldn't.

The rule should be that if more than one manage.py file is found when searching the immediate subdirectories, a warning/error should be issued and any attempt to interpret one as a Django manage.py file skipped.

If for some reason a user really needs duplicates, but only one in a sub directory is the real manage.py file, then there would need to be an environment variable that can be set to say which it is.

It is arguable that only the manage.py in the root directory should be detected, and that if one in a subdirectory is needed, the user should be required to set an environment variable indicating the location of the Django project by giving the path to the manage.py in the subdirectory.

There is also the issue mentioned in prior issues about collectstatic: there is no attempt to verify that the manage.py is actually for Django. The Flask-Script package also uses a manage.py, which could therefore be incorrectly interpreted as Django when it isn't.
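The precedence rule proposed above could be sketched like this (hypothetical Python, not the actual assemble script): a top-level manage.py always wins, a single candidate one level down is used, and multiple candidates are an error rather than a silent sort-order-dependent guess.

```python
# Sketch of the proposed manage.py detection rule.
import os

def find_manage_py(root="."):
    top = os.path.join(root, "manage.py")
    if os.path.isfile(top):
        return top  # root always takes precedence
    candidates = [
        os.path.join(root, entry, "manage.py")
        for entry in sorted(os.listdir(root))
        if os.path.isfile(os.path.join(root, entry, "manage.py"))
    ]
    if len(candidates) > 1:
        # refuse to guess: the "first in sort order" behaviour is the bug
        raise RuntimeError("ambiguous manage.py: %s" % ", ".join(candidates))
    return candidates[0] if candidates else None
```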

No zombie process reaping protection built in.

The builder image doesn't provide its own mechanism for protecting against process ID '1' zombie reaping problems.

The issue is outlined in:

The result is that the requirement for reaping zombie processes is put onto the user, but many WSGI servers, ASYNC web servers or other Python web servers do not meet the requirements of what an application running as process ID '1' needs to do. This could result in exhaustion of kernel resources in the worst case as the process table fills up with dead entries.

The rather abysmal score card for Python WSGI, ASYNC and web servers is:

  • django (runserver) - FAIL
  • flask (builtin) - FAIL
  • gunicorn - PASS
  • Apache/mod_wsgi - PASS
  • tornado (async) - FAIL
  • tornado (wsgi) - FAIL
  • uWSGI - FAIL
  • uWSGI (master) - PASS
  • waitress - FAIL
  • wsgiref - FAIL

So it isn't an isolated issue.

The problem now is that because of the requirement for any third party packages to be packaged in SCL, you can't use a tool such as tini.

or even the pip installable dumb-init:

The only options would be to see if one can at least get away with avoiding some of the issues by using the shell script solution in:

of:

#!/bin/sh
trap 'kill -TERM $PID' TERM INT
python server_wsgiref.py &
PID=$!
wait $PID
trap - TERM INT
wait $PID
STATUS=$?
exit $STATUS

This likely isn't going to be entirely reliable, but it is still better than nothing.

Or, instead, learn from the Python mini init script originally described by Phusion and implement our own to suit our requirements, add it to the repo, and bundle it with the builder.
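A minimal mini init along those lines might look like this (a sketch only: spawn the real server, forward TERM/INT to it, and reap every child; run_as_init is a hypothetical name, not anything shipped in the image):

```python
# Sketch of a PID-1-style init: forward signals to the main child and reap
# all children, returning the main child's exit status.
import os
import signal
import subprocess

def run_as_init(argv):
    child = subprocess.Popen(argv)
    for sig in (signal.SIGTERM, signal.SIGINT):
        signal.signal(sig, lambda signum, frame: child.send_signal(signum))
    while True:
        try:
            pid, status = os.wait()  # reaps the main child and any orphans
        except ChildProcessError:
            return 0                 # nothing left to wait for
        if pid == child.pid:
            if os.WIFEXITED(status):
                return os.WEXITSTATUS(status)
            return 128 + os.WTERMSIG(status)
```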

Note that a proper solution would also resolve other related problems with running as process ID '1' described in #77. The only reason the closing of #77 was accepted was that it would be impossible to fix the default builder to do this properly, so everyone would ignore the default builder for everything anyway. If you actually want people to use the default builder, this issue does need to be addressed and can't be ignored.

Relying on per user site-packages equivalent causes problems.

The way the image is set up at present means that no Python virtual environment is used. Instead, Python packages from the user's requirements.txt file are installed into the user's home directory using the --user option to pip. Similarly for setup.py.

if [[ -f requirements.txt ]]; then
  echo "---> Installing dependencies ..."
  pip install --user -r requirements.txt
fi

if [[ -f setup.py ]]; then
  echo "---> Installing application ..."
  python setup.py develop --user
fi

Not using a Python virtual environment will ultimately cause problems and confusion for users.

This is because when the SCL package for Python (e.g. python27) is installed, it drags in a number of Python packages as dependencies from the operating system package repository.

Dependencies Resolved

================================================================================
 Package                     Arch    Version              Repository       Size
================================================================================
Installing:
 python27                    x86_64  1.1-20.el7           centos-sclo-rh  4.8 k
Installing for dependencies:
 dwz                         x86_64  0.11-3.el7           base             99 k
 iso-codes                   noarch  3.46-2.el7           base            2.7 M
 perl-srpm-macros            noarch  1-8.el7              base            4.6 k
 python27-python             x86_64  2.7.8-3.el7          centos-sclo-rh   81 k
 python27-python-babel       noarch  0.9.6-8.el7          centos-sclo-rh  1.4 M
 python27-python-devel       x86_64  2.7.8-3.el7          centos-sclo-rh  384 k
 python27-python-docutils    noarch  0.11-1.el7           centos-sclo-rh  1.5 M
 python27-python-jinja2      noarch  2.6-11.el7           centos-sclo-rh  518 k
 python27-python-libs        x86_64  2.7.8-3.el7          centos-sclo-rh  5.6 M
 python27-python-markupsafe  x86_64  0.11-11.el7          centos-sclo-rh   25 k
 python27-python-nose        noarch  1.3.0-2.el7          centos-sclo-rh  274 k
 python27-python-pip         noarch  1.5.6-5.el7          centos-sclo-rh  1.3 M
 python27-python-pygments    noarch  1.5-2.el7            centos-sclo-rh  774 k
 python27-python-setuptools  noarch  0.9.8-5.el7          centos-sclo-rh  400 k
 python27-python-simplejson  x86_64  3.2.0-3.el7          centos-sclo-rh  173 k
 python27-python-sphinx      noarch  1.1.3-8.el7          centos-sclo-rh  1.1 M
 python27-python-sqlalchemy  x86_64  0.7.9-3.el7          centos-sclo-rh  2.0 M
 python27-python-virtualenv  noarch  1.10.1-2.el7         centos-sclo-rh  1.3 M
 python27-python-werkzeug    noarch  0.8.3-5.el7          centos-sclo-rh  534 k
 python27-python-wheel       noarch  0.24.0-2.el7         centos-sclo-rh   76 k
 python27-runtime            x86_64  1.1-20.el7           centos-sclo-rh  1.1 M
 redhat-rpm-config           noarch  9.1.0-68.el7.centos  base             77 k
 scl-utils-build             x86_64  20130529-17.el7_1    base             17 k
 xml-common                  noarch  0.6.3-39.el7         base             26 k
 zip                         x86_64  3.0-10.el7           base            260 k

This results in the following Python packages being pre-installed.

bash-4.2$ pip freeze
Babel==0.9.6
Jinja2==2.6
MarkupSafe==0.11
Pygments==1.5
SQLAlchemy==0.7.9
Sphinx==1.1.3
Werkzeug==0.8.3
docutils==0.11
nose==1.3.0
simplejson==3.2.0
virtualenv==1.10.1
wheel==0.24.0
wsgiref==0.1.2

The problem now is that if a user lists packages in the requirements.txt file which aren't pinned to a specific version, the latest versions of those packages will not actually be installed as they might reasonably expect. This is because a version of the package is already installed due to the dependencies above.

$ pip install --user Jinja2
Requirement already satisfied (use --upgrade to upgrade): Jinja2 in /opt/rh/python27/root/usr/lib/python2.7/site-packages
Cleaning up...

bash-4.2$ pip freeze
Babel==0.9.6
Jinja2==2.6
MarkupSafe==0.11
Pygments==1.5
SQLAlchemy==0.7.9
Sphinx==1.1.3
Werkzeug==0.8.3
docutils==0.11
nose==1.3.0
simplejson==3.2.0
virtualenv==1.10.1
wheel==0.24.0
wsgiref==0.1.2

The end result is that they get stuck with an older version, not the latest, and may not realise it until their application starts dying at run time.

Yes you can argue that they should be pinning the version in order to get a reproducible build, but a lot of people wouldn't do that and so would get confused as to why they aren't seeing the latest version.

In practice the only Python packages which should already be installed are pip and whatever it may install, such as virtualenv, setuptools and wheel packages. The user should not be supplied any other packages.

This is why, if you can't guarantee a clean site-packages (apart from pip etc.), a Python virtual environment should be created and PATH set up so that it is always used.

These days a Python virtual environment is no longer linked by default to the system site-packages, so it will be a clean slate except for pip etc. One reason it doesn't link to the system site-packages is precisely this kind of confusion caused by Python packages installed from the operating system package repository.
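The clean-slate environment argued for here could be created along these lines (a Python 3 venv sketch for illustration; the images discussed shipped Python 2, where virtualenv would be used instead, and the function name is hypothetical):

```python
# Sketch: create an isolated environment so user installs never collide with
# the SCL site-packages. with_pip=False keeps the example light; a real
# assemble step would want pip available (with_pip=True).
import os
import venv

def make_clean_env(path):
    venv.create(path, with_pip=False)  # no system site-packages: clean slate
    return os.path.join(path, "bin", "python")  # POSIX layout assumed
```

The returned interpreter path would then be put first on PATH so every later pip and python invocation uses the isolated environment.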

Missing JPEG libraries means that Pillow cannot be built.

This may be an issue for sti-base, but openshift/python-27-centos7 appears to be missing the JPEG libraries necessary to build Pillow, the popular Python image-manipulation package.

I0215 22:33:55.158979       1 sti.go:492]   Running setup.py install for Pillow
110 I0215 22:33:55.275770       1 sti.go:492]     
111 I0215 22:33:55.362298       1 sti.go:492]     warning: no previously-included files found matching '.editorconfig'
112 I0215 22:33:55.393596       1 sti.go:492]     Traceback (most recent call last):
113 I0215 22:33:55.394069       1 sti.go:492]       File "<string>", line 1, in <module>
114 I0215 22:33:55.394088       1 sti.go:492]       File "/tmp/pip-build-KN5CPy/Pillow/setup.py", line 767, in <module>
115 I0215 22:33:55.394096       1 sti.go:492]         zip_safe=not debug_build(),
116 I0215 22:33:55.394104       1 sti.go:492]       File "/opt/rh/python27/root/usr/lib64/python2.7/distutils/core.py", line 151, in setup
117 I0215 22:33:55.394124       1 sti.go:492]         dist.run_commands()
118 I0215 22:33:55.394281       1 sti.go:492]       File "/opt/rh/python27/root/usr/lib64/python2.7/distutils/dist.py", line 953, in run_commands
119 I0215 22:33:55.395241       1 sti.go:492]         self.run_command(cmd)
120 I0215 22:33:55.395445       1 sti.go:492]       File "/opt/rh/python27/root/usr/lib64/python2.7/distutils/dist.py", line 972, in run_command
121 I0215 22:33:55.395471       1 sti.go:492]         cmd_obj.run()
122 I0215 22:33:55.395788       1 sti.go:492]       File "/opt/rh/python27/root/usr/lib/python2.7/site-packages/setuptools/command/install.py", line 53, in run
123 I0215 22:33:55.395808       1 sti.go:492]         return _install.run(self)
124 I0215 22:33:55.395851       1 sti.go:492]       File "/opt/rh/python27/root/usr/lib64/python2.7/distutils/command/install.py", line 563, in run
125 I0215 22:33:55.396685       1 sti.go:492]         self.run_command('build')
126 I0215 22:33:55.396701       1 sti.go:492]       File "/opt/rh/python27/root/usr/lib64/python2.7/distutils/cmd.py", line 326, in run_command
127 I0215 22:33:55.397175       1 sti.go:492]         self.distribution.run_command(command)
128 I0215 22:33:55.397563       1 sti.go:492]       File "/opt/rh/python27/root/usr/lib64/python2.7/distutils/dist.py", line 972, in run_command
129 I0215 22:33:55.397578       1 sti.go:492]         cmd_obj.run()
130 I0215 22:33:55.397584       1 sti.go:492]       File "/opt/rh/python27/root/usr/lib64/python2.7/distutils/command/build.py", line 127, in run
131 I0215 22:33:55.397796       1 sti.go:492]         self.run_command(cmd_name)
132 I0215 22:33:55.397877       1 sti.go:492]       File "/opt/rh/python27/root/usr/lib64/python2.7/distutils/cmd.py", line 326, in run_command
133 I0215 22:33:55.397907       1 sti.go:492]         self.distribution.run_command(command)
134 I0215 22:33:55.397969       1 sti.go:492]       File "/opt/rh/python27/root/usr/lib64/python2.7/distutils/dist.py", line 972, in run_command
135 I0215 22:33:55.398003       1 sti.go:492]         cmd_obj.run()
136 I0215 22:33:55.398045       1 sti.go:492]       File "/opt/rh/python27/root/usr/lib64/python2.7/distutils/command/build_ext.py", line 337, in run
137 I0215 22:33:55.398590       1 sti.go:492]         self.build_extensions()
138 I0215 22:33:55.398604       1 sti.go:492]       File "/tmp/pip-build-KN5CPy/Pillow/setup.py", line 516, in build_extensions
139 I0215 22:33:55.398671       1 sti.go:492]         (f, f))
140 I0215 22:33:55.399424       1 sti.go:492]     ValueError: jpeg is required unless explicitly disabled using --disable-jpeg, aborting
141 I0215 22:33:55.452904       1 sti.go:492]     Complete output from command /opt/rh/python27/root/usr/bin/python2 -c "import setuptools, tokenize;__file__='/tmp/pip-build-KN5CPy/Pillow/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-jatxrx-record/install-record.txt --single-version-externally-managed --compile --user:
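The ValueError above is Pillow's setup.py aborting because no JPEG development headers were present at build time. Assuming a RHEL/CentOS base image, one plausible remedy (unverified here) is to add the headers in the builder's Dockerfile:

```dockerfile
# Install JPEG development headers so Pillow's C extension can build.
# Package name assumes a RHEL/CentOS base image.
RUN yum install -y libjpeg-turbo-devel && \
    yum clean all
```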

build.sh uses bash feature incompatible with OS X

The version of /bin/bash shipped on OS X as late as 10.11.3 (bash 3.2) does not support the `[[ -v VAR ]]` test, which was introduced in bash 4.2.

Error: hack/build.sh: line 73: conditional binary operator expected

The change below would seem to make it compatible, but you may have a reason you didn't already do this. I understand if you just close this issue.

diff --git a/hack/build.sh b/hack/build.sh
index c62825b..cd2e272 100755
--- a/hack/build.sh
+++ b/hack/build.sh
@@ -70,7 +70,7 @@ for dir in ${dirs}; do

   IMAGE_NAME="${NAMESPACE}${BASE_IMAGE_NAME}-${dir//./}-${OS}"

-  if [[ -v TEST_MODE ]]; then
+  if [ -n "$TEST_MODE" ]; then
     IMAGE_NAME+="-candidate"
   fi

@@ -83,7 +83,7 @@ for dir in ${dirs}; do
     docker_build_with_version Dockerfile
   fi

-  if [[ -v TEST_MODE ]]; then
+  if [ -n "$TEST_MODE" ]; then
     IMAGE_NAME=${IMAGE_NAME} test/run

     if [[ $? -eq 0 ]] && [[ "${TAG_ON_SUCCESS}" == "true" ]]; then
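For what it's worth, `[ -n "$TEST_MODE" ]` is subtly different from `[[ -v TEST_MODE ]]`: it is false when the variable is set but empty, whereas `-v` is true in that case. A sketch of a POSIX-compatible check that preserves the original semantics (the `image_suffix` helper is illustrative, not part of build.sh):

```shell
# '${TEST_MODE+x}' expands to 'x' whenever the variable is set, even to
# the empty string, matching '[[ -v TEST_MODE ]]' while remaining valid
# on the bash 3.2 that OS X ships.
image_suffix() {
  if [ -n "${TEST_MODE+x}" ]; then
    echo "-candidate"
  else
    echo ""
  fi
}

unset TEST_MODE
suffix_unset=$(image_suffix)    # variable unset: no suffix

TEST_MODE=""
suffix_empty=$(image_suffix)    # set but empty: suffix applies

TEST_MODE=true
suffix_set=$(image_suffix)      # set: suffix applies
```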

Review addition of libjpeg-turbo and libjpeg-turbo-devel

This is a follow-up on #102. Moving the discussion here since it's easier to track an open issue than a merged PR (which is rather easy to forget).

Open questions:

  • Check where the libs belong, if anywhere, in the official images.
    It's unclear what apps or Python libraries depend on those libs. Perhaps they don't belong in the Python layer, but in sti-base? Or in a specialized derivative of sti-python?
    Quoting @bparees:

    @donnydavis can you elaborate on why you consider this to be a common dependency?

  • The patch was applied only to the 2.7 image (update 3.3 and 3.4 in a follow up?)

  • The increase in image size is ~3 MB. Do we consider that OK?

$ docker images | grep jpeg
python-27-jpeg                                      latest              15639fcae25f        29 seconds ago      657.7 MB
python-27-no-jpeg                                   latest              804931d10c74        2 days ago          654.8 MB

Something else to consider?

Have libnss_wrapper updates to the passwd file be additive, rather than replacing it.

The generate_container_user script does:

# Set current user in nss_wrapper
PASSWD_DIR="/opt/app-root/etc"

export USER_ID=$(id -u)
export GROUP_ID=$(id -g)
envsubst < ${PASSWD_DIR}/passwd.template > ${PASSWD_DIR}/passwd
export LD_PRELOAD=libnss_wrapper.so
export NSS_WRAPPER_PASSWD=${PASSWD_DIR}/passwd
export NSS_WRAPPER_GROUP=/etc/group

That is, it replaces the existing /etc/passwd rather than editing or extending it.

This means that the original builder user ID of 1001 no longer has a user account. Similarly, if a user created a derived S2I builder image and for some reason added new user accounts, those user accounts would no longer exist either.

It would be better to preserve all existing user accounts so that all known possible user IDs are still mapped to a user.

This way you don't end up with:

total 64
 4 drwxrwxrwx. 5    1001 root  4096 Feb  5 11:25 .
 0 drwxrwxrwx. 4    1001 root    26 Sep 29 15:22 ..
 4 -rw-rwxr--. 1    1001 root   747 Feb  5 11:24 .gitignore
 0 drwxrwx---. 4    1001 root    26 Feb  5 11:24 .local
 0 drwxrwxrwx. 3    1001 root    18 Sep 29 15:53 .pki
 4 -rw-rwxr--. 1    1001 root  1299 Feb  5 11:24 LICENSE
 4 -rw-rwxr--. 1    1001 root   128 Feb  5 11:24 README.md
36 -rw-r--r--. 1 default root 36864 Feb  5 11:25 db.sqlite3
 4 drwxrwxr-x. 2    1001 root  4096 Feb  5 11:25 hello_world
 4 -rwxrwxr-x. 1    1001 root   254 Feb  5 11:24 manage.py
 4 -rw-rwxr--. 1    1001 root     7 Feb  5 11:24 requirements.txt

What I suggest is to start with /etc/passwd, renaming the user with ID 1001 from default to builder, and then add an entry for the random user ID.

You can see roughly how I do this in the blog post:

Specifically, something like:

# Override user ID lookup to cope with being randomly assigned IDs using
# the -u option to 'docker run'.

USER_ID=$(id -u)

if [ x"$USER_ID" != x"0" -a x"$USER_ID" != x"1001" ]; then
    NSS_WRAPPER_PASSWD=/tmp/passwd.nss_wrapper
    NSS_WRAPPER_GROUP=/etc/group

    sed -e 's/^default:/builder:/' /etc/passwd > $NSS_WRAPPER_PASSWD

    echo "default:x:$USER_ID:0:Default Application User:${HOME}:/bin/nologin" >> $NSS_WRAPPER_PASSWD

    export NSS_WRAPPER_PASSWD
    export NSS_WRAPPER_GROUP

    LD_PRELOAD=/usr/local/lib64/libnss_wrapper.so
    export LD_PRELOAD
fi

Then the result is:

total 64
 4 drwxrwxrwx. 5 builder root  4096 Feb  5 11:25 .
 0 drwxrwxrwx. 4 builder root    26 Sep 29 15:22 ..
 4 -rw-rwxr--. 1 builder root   747 Feb  5 11:24 .gitignore
 0 drwxrwx---. 4 builder root    26 Feb  5 11:24 .local
 0 drwxrwxrwx. 3 builder root    18 Sep 29 15:53 .pki
 4 -rw-rwxr--. 1 builder root  1299 Feb  5 11:24 LICENSE
 4 -rw-rwxr--. 1 builder root   128 Feb  5 11:24 README.md
36 -rw-r--r--. 1 default root 36864 Feb  5 11:25 db.sqlite3
 4 drwxrwxr-x. 2 builder root  4096 Feb  5 11:25 hello_world
 4 -rwxrwxr-x. 1 builder root   254 Feb  5 11:24 manage.py
 4 -rw-rwxr--. 1 builder root     7 Feb  5 11:24 requirements.txt

That way you don't get bare user IDs with no user account, nor the related problems outlined in that blog post for users other than the current user.

Provide a user with better information when application is misconfigured

Currently our run scripts fail, with error messages available only in the logs, when certain values were not provided; see here:

>&2 echo "ERROR: don't know how to run your application."
>&2 echo "Please set either APP_MODULE or APP_FILE environment variables, or create a file 'app.py' to launch your application."
exit 1

This results in pod start failure and OpenShift restarting the pod over and over again, causing a kind of fork bomb, but with pods this time. There's the idea of running a simple HTTP server (`python -m SimpleHTTPServer 8000`) to return that information with a 500 error page.

All credit for the idea goes to @GrahamDumpleton

@bparees @rhcarvalho wdyt?
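The idea could be sketched roughly as follows (illustrative only, not the actual run script; `page_dir` and the page text are made up, and note that plain SimpleHTTPServer answers 200, so a real implementation would need a small custom handler to answer 500 instead):

```shell
# Instead of 'exit 1' (which makes OpenShift restart the pod in a
# crash loop), write a static page explaining the misconfiguration
# and serve it so the failure is visible in the browser.
page_dir=$(mktemp -d)

cat > "$page_dir/index.html" <<'EOF'
<h1>Application misconfigured</h1>
<p>Set either APP_MODULE or APP_FILE, or create a file 'app.py'.</p>
EOF

>&2 echo "ERROR: don't know how to run your application."
# In the real 'run' script this would then block serving the page:
#   cd "$page_dir" && exec python -m SimpleHTTPServer 8000
```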

The /tmp/src directory is never removed.

Minor issue.

After the 'assemble' script does:

echo "---> Copying application source ..."
cp -Rf /tmp/src/. ./

Nothing ever appears to remove '/tmp/src'.

This means you have two copies of the application. If the source code and data files for the application were very large, that is an unnecessary waste of space.

Is that intentional for any specific reason?
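If it is not intentional, the fix would be a one-line addition after the copy. A minimal sketch of the proposed change (demo paths created via mktemp stand in for the real /tmp/src and working directory):

```shell
src=$(mktemp -d)   # stands in for /tmp/src
dst=$(mktemp -d)   # stands in for the working directory

echo "print('hello')" > "$src/app.py"

echo "---> Copying application source ..."
cp -Rf "$src/." "$dst/"

# Proposed addition: remove the now-redundant source copy.
rm -rf "$src"
```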

Django development server not shutting down promptly.

When the Django development server is used on the command line, interrupting its main process with CONTROL-C shuts it down immediately. Under OpenShift, when used as the default server for Django applications, the Django development server does not shut down promptly if the pod is deleted explicitly or when deploying a new version. Instead it appears to be shut down only when the pod deletion timeout kicks in, after about 30 seconds.

django-hello-world-v1-5-uq4hu   1/1       Terminating   0          12h
django-hello-world-v1-5-vzqew   1/1       Running       0          30s

Yes, the Django development server should never be used in production, but it is the default for Django and this behaviour gives a bad impression. It is most annoying when using this for demos: you have to explain to people that it wouldn't normally take that long if a production WSGI server were used, and that it is some problem with the Django development server when run under OpenShift.

Right now I do not know why this problem occurs.

This issue adds to the case for having a production-grade WSGI server, capable of automatically running Django, available as the default.

System packages needed for numpy/scipy installation not present.

System packages needed to permit the installation of the scipy Python package are not present. It appears numpy may install, but without certain features subsequently required by scipy.

I don't know at this point what the missing packages are; logging the issue just so it is known.

Running setup.py install for scipy
lapack_opt_info:
openblas_lapack_info:
libraries openblas not found in ['/opt/rh/rh-python34/root/usr/lib', '/usr/local/lib64', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/']
NOT AVAILABLE

lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in ['/opt/rh/rh-python34/root/usr/lib', '/usr/local/lib64', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/']
NOT AVAILABLE

NOT AVAILABLE

atlas_3_10_threads_info:
Setting PTATLAS=ATLAS
libraries tatlas,tatlas not found in /opt/rh/rh-python34/root/usr/lib
libraries lapack_atlas not found in /opt/rh/rh-python34/root/usr/lib
libraries tatlas,tatlas not found in /usr/local/lib64
libraries lapack_atlas not found in /usr/local/lib64
libraries tatlas,tatlas not found in /usr/local/lib
libraries lapack_atlas not found in /usr/local/lib
libraries tatlas,tatlas not found in /usr/lib64/sse2
libraries lapack_atlas not found in /usr/lib64/sse2
libraries tatlas,tatlas not found in /usr/lib64
libraries lapack_atlas not found in /usr/lib64
libraries tatlas,tatlas not found in /usr/lib/sse2
libraries lapack_atlas not found in /usr/lib/sse2
libraries tatlas,tatlas not found in /usr/lib
libraries lapack_atlas not found in /usr/lib
libraries tatlas,tatlas not found in /usr/lib/sse2
libraries lapack_atlas not found in /usr/lib/sse2
libraries tatlas,tatlas not found in /usr/lib/
libraries lapack_atlas not found in /usr/lib/
<class 'numpy.distutils.system_info.atlas_3_10_threads_info'>
NOT AVAILABLE

atlas_3_10_info:
libraries satlas,satlas not found in /opt/rh/rh-python34/root/usr/lib
libraries lapack_atlas not found in /opt/rh/rh-python34/root/usr/lib
libraries satlas,satlas not found in /usr/local/lib64
libraries lapack_atlas not found in /usr/local/lib64
libraries satlas,satlas not found in /usr/local/lib
libraries lapack_atlas not found in /usr/local/lib
libraries satlas,satlas not found in /usr/lib64/sse2
libraries lapack_atlas not found in /usr/lib64/sse2
libraries satlas,satlas not found in /usr/lib64
libraries lapack_atlas not found in /usr/lib64
libraries satlas,satlas not found in /usr/lib/sse2
libraries lapack_atlas not found in /usr/lib/sse2
libraries satlas,satlas not found in /usr/lib
libraries lapack_atlas not found in /usr/lib
libraries satlas,satlas not found in /usr/lib/sse2
libraries lapack_atlas not found in /usr/lib/sse2
libraries satlas,satlas not found in /usr/lib/
libraries lapack_atlas not found in /usr/lib/
<class 'numpy.distutils.system_info.atlas_3_10_info'>
NOT AVAILABLE

atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in /opt/rh/rh-python34/root/usr/lib
libraries lapack_atlas not found in /opt/rh/rh-python34/root/usr/lib
libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib64
libraries lapack_atlas not found in /usr/local/lib64
libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib
libraries lapack_atlas not found in /usr/local/lib
libraries ptf77blas,ptcblas,atlas not found in /usr/lib64/sse2
libraries lapack_atlas not found in /usr/lib64/sse2
libraries ptf77blas,ptcblas,atlas not found in /usr/lib64
libraries lapack_atlas not found in /usr/lib64
libraries ptf77blas,ptcblas,atlas not found in /usr/lib/sse2
libraries lapack_atlas not found in /usr/lib/sse2
libraries ptf77blas,ptcblas,atlas not found in /usr/lib
libraries lapack_atlas not found in /usr/lib
libraries ptf77blas,ptcblas,atlas not found in /usr/lib/sse2
libraries lapack_atlas not found in /usr/lib/sse2
libraries ptf77blas,ptcblas,atlas not found in /usr/lib/
libraries lapack_atlas not found in /usr/lib/
<class 'numpy.distutils.system_info.atlas_threads_info'>
NOT AVAILABLE

atlas_info:
libraries f77blas,cblas,atlas not found in /opt/rh/rh-python34/root/usr/lib
libraries lapack_atlas not found in /opt/rh/rh-python34/root/usr/lib
libraries f77blas,cblas,atlas not found in /usr/local/lib64
libraries lapack_atlas not found in /usr/local/lib64
libraries f77blas,cblas,atlas not found in /usr/local/lib
libraries lapack_atlas not found in /usr/local/lib
libraries f77blas,cblas,atlas not found in /usr/lib64/sse2
libraries lapack_atlas not found in /usr/lib64/sse2
libraries f77blas,cblas,atlas not found in /usr/lib64
libraries lapack_atlas not found in /usr/lib64
libraries f77blas,cblas,atlas not found in /usr/lib/sse2
libraries lapack_atlas not found in /usr/lib/sse2
libraries f77blas,cblas,atlas not found in /usr/lib
libraries lapack_atlas not found in /usr/lib
libraries f77blas,cblas,atlas not found in /usr/lib/sse2
libraries lapack_atlas not found in /usr/lib/sse2
libraries f77blas,cblas,atlas not found in /usr/lib/
libraries lapack_atlas not found in /usr/lib/
<class 'numpy.distutils.system_info.atlas_info'>
NOT AVAILABLE

lapack_info:
libraries lapack not found in ['/opt/rh/rh-python34/root/usr/lib', '/usr/local/lib64', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/']
NOT AVAILABLE

lapack_src_info:
NOT AVAILABLE

NOT AVAILABLE

/opt/app-root/src/.local/lib/python3.4/site-packages/numpy/distutils/system_info.py:1542: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
/opt/app-root/src/.local/lib/python3.4/site-packages/numpy/distutils/system_info.py:1553: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
warnings.warn(LapackNotFoundError.__doc__)
/opt/app-root/src/.local/lib/python3.4/site-packages/numpy/distutils/system_info.py:1556: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
warnings.warn(LapackSrcNotFoundError.__doc__)
Running from scipy source directory.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-build-ovzzqhd8/scipy/setup.py", line 265, in <module>
setup_package()
File "/tmp/pip-build-ovzzqhd8/scipy/setup.py", line 262, in setup_package
setup(**metadata)
File "/opt/app-root/src/.local/lib/python3.4/site-packages/numpy/distutils/core.py", line 135, in setup
config = configuration()
File "/tmp/pip-build-ovzzqhd8/scipy/setup.py", line 182, in configuration
config.add_subpackage('scipy')
File "/opt/app-root/src/.local/lib/python3.4/site-packages/numpy/distutils/misc_util.py", line 1003, in add_subpackage
caller_level = 2)
File "/opt/app-root/src/.local/lib/python3.4/site-packages/numpy/distutils/misc_util.py", line 972, in get_subpackage
caller_level = caller_level + 1)
File "/opt/app-root/src/.local/lib/python3.4/site-packages/numpy/distutils/misc_util.py", line 909, in _get_configuration_from_setup_py
config = setup_module.configuration(*args)
File "scipy/setup.py", line 15, in configuration
config.add_subpackage('linalg')
File "/opt/app-root/src/.local/lib/python3.4/site-packages/numpy/distutils/misc_util.py", line 1003, in add_subpackage
caller_level = 2)
File "/opt/app-root/src/.local/lib/python3.4/site-packages/numpy/distutils/misc_util.py", line 972, in get_subpackage
caller_level = caller_level + 1)
File "/opt/app-root/src/.local/lib/python3.4/site-packages/numpy/distutils/misc_util.py", line 909, in _get_configuration_from_setup_py
config = setup_module.configuration(*args)
File "scipy/linalg/setup.py", line 20, in configuration
raise NotFoundError('no lapack/blas resources found')
numpy.distutils.system_info.NotFoundError: no lapack/blas resources found
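The report deliberately leaves the missing packages unidentified. As a hedged guess only (unverified), scipy builds on a RHEL/CentOS base typically look for BLAS/LAPACK headers and a Fortran compiler, which could be provided along these lines:

```dockerfile
# Candidate packages to satisfy numpy/scipy's BLAS/LAPACK search;
# package names are a guess for a RHEL/CentOS base and need checking.
RUN yum install -y atlas-devel blas-devel lapack-devel gcc-gfortran && \
    yum clean all
```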

Priority order for how to run application is wrong.

In the 'run' script, the code:

APP_FILE="${APP_FILE:-app.py}"
if [[ "$APP_FILE" ]]; then
  if [[ -f "$APP_FILE" ]]; then
    echo "---> Running application from Python script ($APP_FILE) ..."
    exec python "$APP_FILE"
  fi
  echo "WARNING: file '$APP_FILE' not found."
fi

should not be last.

Instead, it should be placed before the check for running a WSGI application with gunicorn.

This better mirrors how OpenShift 2 worked, where the presence of app.py would override the default Apache/mod_wsgi server.

It is also necessary to have a check for app.py before any check for Django that tries to run Django via manage.py with runserver. This is so that an app.py with a custom WSGI server will still take precedence for a Django application.

Reading the code as it stands suggests that even if app.py is supplied to use an alternate WSGI server, it will be ignored when using Django, and the Django development server will always be used instead.
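The proposed priority order could be sketched like this (illustrative only, not the actual run script; the `app_strategy` helper is made up, and the real script would `exec` the chosen server rather than echo a label):

```shell
# Dispatch order: app.py first, then APP_MODULE with gunicorn, then
# Django's manage.py, so a custom WSGI server always takes precedence.
app_strategy() {
  if [ -f "${APP_FILE:-app.py}" ]; then
    echo "app-file"      # would: exec python "${APP_FILE:-app.py}"
  elif [ -n "$APP_MODULE" ]; then
    echo "gunicorn"      # would: exec gunicorn "$APP_MODULE"
  elif [ -f manage.py ]; then
    echo "django"        # would: exec python manage.py runserver
  else
    echo "error"
  fi
}

cd "$(mktemp -d)"
strategy_empty=$(app_strategy)      # nothing present
touch manage.py
strategy_django=$(app_strategy)     # Django project only
touch app.py
strategy_appfile=$(app_strategy)    # app.py overrides Django
```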

Consider setting PYTHONUNBUFFERED.

If a Python web application uses stdout for generating log messages, the output can be buffered in memory and not flushed promptly, so the messages cannot be captured by Docker and OpenShift in a timely way. The consequence is a potential loss of log messages if the application crashes for some reason, making it much harder to work out what the application was doing. The messages may also simply be delayed rather than coming out straight away.

It is not uncommon for people to set up the Python logging module to use stdout rather than stderr which exacerbates this problem.

Consideration should be given to setting the PYTHONUNBUFFERED environment variable as a default.

FWIW, Heroku sets PYTHONUNBUFFERED as a default as they likely have determined it is too confusing for users when they don't see their logged messages.

Note that this issue mainly affects gunicorn and hand-rolled applications provided via app.py. mod_wsgi-express treats stdout the same as stderr and ensures that messages are flushed whenever an EOL is encountered, with anything still buffered at the end of a request that wasn't EOL-terminated being flushed automatically.
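If adopted, this would be a one-line default in the image Dockerfile (any non-empty value enables unbuffered stdout/stderr), still overridable via the .s2i/environment file:

```dockerfile
ENV PYTHONUNBUFFERED=1
```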

Default lang/locale should be UTF-8, not ASCII.

People are used to the default lang/locale of an operating system on the command line being a variant of UTF-8. The consequence is that people often unknowingly output strings, using print() from a Python web application or module, which are valid UTF-8 but which will fail to encode if the default lang/locale is ASCII.

When they try to deploy any code which does this to OpenShift, it will fail when using the recommended gunicorn, or if they roll their own web application as an app.py. For example:

[2016-11-21 02:28:10 +0000] [1] [INFO] Starting gunicorn 19.6.0
[2016-11-21 02:28:10 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)
[2016-11-21 02:28:10 +0000] [1] [INFO] Using worker: sync
[2016-11-21 02:28:10 +0000] [28] [INFO] Booting worker with pid: 28
[2016-11-21 02:28:10 +0000] [29] [INFO] Booting worker with pid: 29
[2016-11-21 02:28:10 +0000] [30] [INFO] Booting worker with pid: 30
Internal Server Error: /
Traceback (most recent call last):
  File "/opt/app-root/src/.local/lib/python3.5/site-packages/django/core/handlers/exception.py", line 39, in inner
    response = get_response(request)
  File "/opt/app-root/src/.local/lib/python3.5/site-packages/django/core/handlers/base.py", line 249, in _legacy_get_response
    response = self._get_response(request)
  File "/opt/app-root/src/.local/lib/python3.5/site-packages/django/core/handlers/base.py", line 187, in _get_response
    response = self.process_exception_by_middleware(e, request)
  File "/opt/app-root/src/.local/lib/python3.5/site-packages/django/core/handlers/base.py", line 185, in _get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/opt/app-root/src/demo/views.py", line 4, in index
    print(u'\u292e')
UnicodeEncodeError: 'ascii' codec can't encode character '\u292e' in position 0: ordinal not in range(128)
10.1.7.1 - - [21/Nov/2016:02:28:13 +0000] "GET / HTTP/1.1" 500 27 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.98 Safari/537.36"

This problem doesn't actually arise with mod_wsgi-express, because Apache/mod_wsgi has been susceptible to this sort of problem for a long time: when Apache is run from Linux system startup scripts, it is often stripped of the default UTF-8 lang/locale the system otherwise runs under, and runs with ASCII instead. As a consequence, mod_wsgi-express corrects the situation to a saner default, so that users' code doesn't keep blowing up without them knowing why and then having to research what configuration changes to make.

It would be much more friendly to developers if the default lang/locale for their deployed web applications were en_US.UTF-8. This should be hardwired into the Docker image itself by setting both the LANG and LC_ALL environment variables in the Dockerfile.

ENV LANG=en_US.UTF-8
ENV LC_ALL=en_US.UTF-8

They can still override this via the .s2i/environment file if they need to change it to some other variant of UTF-8 or another language.

Update for .s2i from .sti

When using the .sti/ folder, a deprecation warning is thrown during assembly.

If you put your files in .s2i/ instead, there is no warning.

Docs should be updated accordingly for 2.7, 3.3, and 3.4.

I can make a PR if preferred

Lack of libffi causes some Python packages to fail to be installed.

This appears to affect at least ndg-httpsclient, a dependency required by the Lektor site generator.

Running setup.py install for ndg-httpsclient
Skipping installation of /opt/app-root/src/.local/lib/python3.4/site-packages/ndg/__init__.py (namespace package)

Installing /opt/app-root/src/.local/lib/python3.4/site-packages/ndg_httpsclient-0.4.1-py3.4-nspkg.pth
Installing ndg_httpclient script to /opt/app-root/src/.local/bin
Running setup.py install for cryptography
Package libffi was not found in the pkg-config search path.
Perhaps you should add the directory containing `libffi.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libffi' found
Package libffi was not found in the pkg-config search path.
Perhaps you should add the directory containing `libffi.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libffi' found
Package libffi was not found in the pkg-config search path.
Perhaps you should add the directory containing `libffi.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libffi' found
Package libffi was not found in the pkg-config search path.
Perhaps you should add the directory containing `libffi.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libffi' found
Package libffi was not found in the pkg-config search path.
Perhaps you should add the directory containing `libffi.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libffi' found
c/_cffi_backend.c:15:17: fatal error: ffi.h: No such file or directory
#include <ffi.h>
^
compilation terminated.
Traceback (most recent call last):
File "/opt/rh/rh-python34/root/usr/lib64/python3.4/distutils/unixccompiler.py", line 126, in _compile
extra_postargs)
File "/opt/rh/rh-python34/root/usr/lib64/python3.4/distutils/ccompiler.py", line 909, in spawn
spawn(cmd, dry_run=self.dry_run)
File "/opt/rh/rh-python34/root/usr/lib64/python3.4/distutils/spawn.py", line 36, in spawn
_spawn_posix(cmd, search_path, dry_run=dry_run)
File "/opt/rh/rh-python34/root/usr/lib64/python3.4/distutils/spawn.py", line 162, in _spawn_posix
% (cmd, exit_status))
distutils.errors.DistutilsExecError: command 'gcc' failed with exit status 1
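
A likely workaround, assuming the standard libffi-devel package is what provides ffi.h on these platforms, is to install the development headers in a derived image:

```dockerfile
FROM centos/python-34-centos7

USER 0

# libffi-devel supplies ffi.h, needed to compile cffi (a cryptography dependency)
RUN yum install -y --setopt=tsflags=nodocs libffi-devel && \
    yum clean all -y

USER 1001
```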

Building 2.7 app with Gunicorn fails with syntax error

$ s2i build git://github.com/mnagy/stress_test centos/python-27-centos7 python-sample-app  --context-dir=sti-python/test-app
I0531 08:20:51.853106 30506 clone.go:31] Downloading "git://github.com/mnagy/stress_test" ("sti-python/test-app") ...
I0531 08:20:52.638103 30506 install.go:251] Using "assemble" installed from "image:///usr/libexec/s2i/assemble"
I0531 08:20:52.638170 30506 install.go:251] Using "run" installed from "image:///usr/libexec/s2i/run"
I0531 08:20:52.638211 30506 install.go:251] Using "save-artifacts" installed from "image:///usr/libexec/s2i/save-artifacts"
---> Copying application source ...
---> Installing dependencies ...
Downloading/unpacking gunicorn (from -r requirements.txt (line 1))
Installing collected packages: gunicorn
Compiling /tmp/pip-build-izJkqx/gunicorn/gunicorn/workers/_gaiohttp.py ...
File "/tmp/pip-build-izJkqx/gunicorn/gunicorn/workers/_gaiohttp.py", line 84
yield from self.wsgi.close()
^
SyntaxError: invalid syntax

Successfully installed gunicorn
Cleaning up...

Mechanism for enabling SCL packages.

Currently the enabling of software collection packages is triggered by virtue of the following in sti-base.

ENV BASH_ENV=/opt/app-root/etc/scl_enable

The scl_enable file for sti-python includes:

# IMPORTANT: Do not add more content to this file unless you know what you are
#            doing. This file is sourced everytime the shell session is opened.
# This will make scl collection binaries work out of box.
unset BASH_ENV PROMPT_COMMAND ENV
source scl_source enable python27

The problem is that the activation only occurs when a bash shell is created.

This means that doing the following will result in the wrong Python version being run.

$ oc rsh wsgi-hello-world-1-3yi15 python -V
Python 2.7.5

The better way to handle this would be to use an ENTRYPOINT script of:

#!/bin/bash

set -eu
cmd="$1"; shift
exec "$cmd" "$@"

The sti-base does actually have such a script and defines:

ENTRYPOINT ["container-entrypoint"]

This means that the activation technically should occur when doing the above rsh, as the command will be a child of the entrypoint script, but it isn't.

This is with the RHEL images from registry.access.redhat.com, so it could be that the images there are out of date, although the container-entrypoint script does, as I recollect, exist in them.

Can't verify right now as the OSE cluster is broken and my home Internet is also borked, so I can't even check the CentOS images.

Is the container-entrypoint as ENTRYPOINT supposed to ensure SCL packages are enabled?

Are the RHEL images on registry.access.redhat.com up to date?

Updates to nssdb will cause image build to fail.

Within /opt/app-root/src/.pki there is a directory nssdb which is a database for something, possibly related to:

export LD_PRELOAD=libnss_wrapper.so

If installing Python packages triggers some operation (network??) that causes that database to be updated, the ownership of the database directory ends up as:

bash-4.2$ ls -las .pki/
ls: cannot access .pki/nssdb: Permission denied
total 8
4 drwxrwxrwx 4 default default 4096 Aug 28 07:21 .
4 drwxrwxrwx 5 default default 4096 Aug 28 07:22 ..
? d????????? ? ?       ?          ?            ? nssdb

If that occurs, when the assemble script tries to do:

chmod -R og+rwx /opt/app-root

it will fail with the following error, because at that point it is running as the unprivileged default user and doesn't have permission to change the ownership of the database.

E0828 17:25:31.252773 06387 sti.go:419] chmod: chmod: cannot access '/opt/app-root/src/.pki/nssdb'cannot access '/opt/app-root/src/.pki/nssdb': Permission denied: Permission denied

This error then causes the image build to fail due to the use of set -e.

I don't know what the database is for, or whether it might be better placed somewhere else, but a workaround is to replace the chmod -R with:

find /opt/app-root -name .pki -prune -o -exec chmod og+rwx {} \;

This presumes that a .pki file/directory isn't going to exist elsewhere in the user's code that should have its permissions changed.

Support running of an app.sh script.

First raised as a comment against a separate issue:

but breaking it out as an independent issue.

The ability to provide an app.sh file makes things easier when wishing to run an alternate WSGI server such as uWSGI or Waitress, where there may not be a way to start it up by importing a Python module and executing some API.

Right now you would have to do the following.

1 - Create a wsgi.py file with your WSGI application.

2 - Create an app.py file containing:

import os

os.execl('/opt/app-root/src/app.sh', 'uwsgi')

3 - Create an app.sh file containing:

#!/bin/bash

exec uwsgi \
    --http-socket :8080 \
    --die-on-term \
    --master \
    --single-interpreter \
    --enable-threads \
    --threads=5 \
    --thunder-lock \
    --module wsgi

The reason for using app.sh in this case is so that PATH searching for the uwsgi program can be relied on. If you try to launch uwsgi from app.py, then because the os.exec* calls require a full path, you have to either hardwire the path (the location of which would change if a switch is made to a Python virtual environment rather than per-user site-packages) or add a Python function to search PATH yourself.
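
For reference, the PATH lookup that os.execl will not do is exactly what the shell provides; `command -v` makes it visible, which is one reason the app.sh approach is simpler. A minimal illustration:

```shell
#!/bin/bash
# `command -v` resolves a program name via PATH search, the lookup that
# os.execl does not perform. `sh` is used here only as a stand-in that
# is guaranteed to exist; a real app.sh would be resolving `uwsgi`.
resolved=$(command -v sh)
echo "resolved to: $resolved"
```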

Far simpler to add support for app.sh. So something like the following, which would go before the existing check for and execution of app.py.

APP_SCRIPT="${APP_SCRIPT:-app.sh}"
if [[ -x "$APP_SCRIPT" ]]; then
  echo "---> Running application from script ($APP_SCRIPT) ..."
  exec "$APP_SCRIPT"
fi

In general, it is easier to use a shell script rather than Python code to do special setup prior to executing a command-line based server.

BASH_ENV trick should perhaps setup libnss_wrapper as well.

The BASH_ENV environment variable is set in sti-base as:

BASH_ENV=/opt/app-root/etc/scl_enable

This is done with the intent of enabling SCL packages such as Python version from SCL.

That is, the scl_enable script variant from Python builder is:

# IMPORTANT: Do not add more content to this file unless you know what you are
#            doing. This file is sourced everytime the shell session is opened.
# This will make scl collection binaries work out of box.
unset BASH_ENV PROMPT_COMMAND ENV
source scl_source enable rh-python34

Problem is that this doesn't also set up libnss_wrapper by doing:

source /opt/app-root/etc/generate_container_user

The result is that when creating an interactive shell, or using oc exec or oc rsh to run a shell script within the container, it will not run with the same environment as was set up for the actual deployed web application.

This means that a manually run command will not work if it would otherwise have depended on the libnss_wrapper mechanism being in place.

The difference in environment setup could be annoying when debugging, but could also cause scripts run as an OpenShift job in the context of that container to fail.
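
A possible fix, sketched only and assuming generate_container_user is safe to source from this context, would be to extend the scl_enable variant:

```
# IMPORTANT: Do not add more content to this file unless you know what you are
#            doing. This file is sourced every time the shell session is opened.
unset BASH_ENV PROMPT_COMMAND ENV
source scl_source enable rh-python34
source /opt/app-root/etc/generate_container_user
```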

Django manage.py migrate will cause corruption to databases.

Having Django database migration run by default inside the container when the web application starts is not best practice and could be dangerous. Yes, there is an option to disable it, but that isn't of much use when an unknowing user has already corrupted their database by scaling up to multiple replicas.

Right now, with the current Django version, the risk of a corrupted database may be minimal, but with the next version of Django this could all change, if what I have heard is correct.

In the current version of Django, deploying with multiple replicas creates a race condition on the determination of whether database migration steps actually need to be run. The migration itself is still done within a database transaction, though, so when migration steps do end up running in multiple instances, the transaction lock should mean that the second instance, when it subsequently gets its transaction and checks again, realises the steps have already run and skips them.

The problem now is that I have heard that a future Django version will have transaction-less database migration. If I understand what that implies, there will not be a transaction to protect against a migration being attempted from multiple replicas. This means that multiple attempts at the same time will interfere with each other and potentially corrupt the database.

In short, running database migration as a deploy step is a very bad idea and not something we want to encourage or even allow. Yes, this means extra separate steps will be required, but that is better than us corrupting a user's databases. So the solution is to disable automatic migration by default and provide documentation on best practice for managing database migration with Python in the context of OpenShift.

Environment variable to disable setup.py processing.

There should be an environment variable to disable processing of setup.py.

Best practice for use of a setup.py, as outlined by the main developer who manages the Python package index in:

is not to use setup.py directly, but to trigger its processing from the requirements.txt file using:

-e .

In cases where people follow this practice, processing of setup.py would be done twice.

One could actually argue that processing of setup.py should always be opt-in via the requirements.txt file and so should never be run directly, but it is too late to change to the better behaviour. At the least, therefore, there should be an environment variable to disable processing of setup.py, so it doesn't cause problems by being run more than once, or at all if the user never wanted it used in the first place.
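
A sketch of what such a guard could look like in the assemble script; the variable name DISABLE_SETUP_PY is hypothetical, not an existing option, and the temporary directory only simulates an application source tree:

```shell
#!/bin/bash
set -eu

# Simulate an application directory containing a setup.py.
workdir=$(mktemp -d)
cd "$workdir"
touch setup.py

# Hypothetical opt-out variable, as might be set via .s2i/environment.
DISABLE_SETUP_PY=1

if [[ -z "${DISABLE_SETUP_PY:-}" && -f setup.py ]]; then
  result="processed"
  echo "---> Installing application ..."
else
  result="skipped"
  echo "---> Skipping setup.py processing"
fi
echo "$result"
```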

Per user Python install area bin directory not in PATH when pip being run.

May relate to #57, but when pip is run to install modules, the bin directory of the per-user Python install area, ${HOME}/.local/bin, is not in the PATH. This will cause problems where requirements.txt lists two modules that have an ordering relationship and the latter relies on being able to find a program installed as a side effect of the first module.

Therefore, in assemble scripts, prior to:

if [[ -f requirements.txt ]]; then
  echo "---> Installing dependencies ..."
  pip install --user -r requirements.txt
fi

if [[ -f setup.py ]]; then
  echo "---> Installing application ..."
  python setup.py develop --user
fi

what is needed is:

PATH=$HOME/.local/bin:$PATH

This may not be necessary if the problem is due to the current working directory not being ${HOME}; in that case, fixing #57 would resolve the problem.

System packages required to install PyYAML are not present.

System packages needed to permit the installation of the PyYAML Python package are not present.

Don't know at this point what the missing packages are. Logging the issue just so it is known.

Running setup.py install for PyYAML
checking if libyaml is compilable
gcc -pthread -Wno-unused-result -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -I/opt/rh/rh-python34/root/usr/include -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -I/opt/rh/rh-python34/root/usr/include/python3.4m -c build/temp.linux-x86_64-3.4/check_libyaml.c -o build/temp.linux-x86_64-3.4/check_libyaml.o
build/temp.linux-x86_64-3.4/check_libyaml.c:2:18: fatal error: yaml.h: No such file or directory
#include <yaml.h>
^
compilation terminated.

libyaml is not found or a compiler error: forcing --without-libyaml
(if libyaml is installed correctly, you may need to
specify the option --include-dirs or uncomment and
modify the parameter include_dirs in setup.cfg)

This may not be entirely fatal, but could result in loss of functionality that will cause some software not to work.
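
If the missing header turns out to be libyaml's, a derived image along these lines should cover it (libyaml-devel as the package name is an assumption based on standard CentOS naming):

```dockerfile
FROM centos/python-34-centos7

USER 0

# libyaml-devel supplies yaml.h, which PyYAML's setup.py probes for
RUN yum install -y --setopt=tsflags=nodocs libyaml-devel && \
    yum clean all -y

USER 1001
```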

PATH to per user Python install area should not be relative.

The Dockerfiles contain:

ENV PYTHON_VERSION=2.7 \
        PATH=.local/bin/:$PATH

Thus a relative path, not an absolute path, is used for the per-user Python install area used when running:

pip install --user -r requirements.txt

Presumably that relative path for the bin directory would become invalid if the web application changed its working directory to another directory. The consequence would be that the web application could no longer find any programs installed as part of the Python modules that pip installed from requirements.txt.

Any directories in PATH should be absolute.

It should therefore use:

ENV PYTHON_VERSION=2.7 \
        PATH=$HOME/.local/bin/:$PATH

.local/bin in the PATH

We're prepending .local/bin to the PATH env var, which could have unexpected effects if the user has executables in their repo under .local/bin.

AFAIK we add .local/bin to PATH because we run pip with the --user flag, which puts installed executables there.

Quoting @mfojtik:

we should do absolute but not using HOME var

This is somehow related to #18 (maybe can be fixed together), but specific to sti-python.

@bparees @soltysh FYI

oc new-app detection for Python

I realise this possibly needs to be a bug report against oc in the openshift/origin repo, but I wanted to raise it here first in case there is something I am missing.

In:

The code for detecting Python when using oc new-app against a repository, which then triggers S2I using the images from this repository, is:

// DetectPython detects Python source
func DetectPython(dir string) (*Info, bool) {
    return detect("python", dir, "requirements.txt", "config.py")
}

The problem is why is it checking for config.py?

I cannot think of any valid reason why it would look for config.py as I know of no convention in Python where a file by that name is anything significant.

I suspect it should be looking for setup.py.

That would make more sense given that assemble only treats requirements.txt and setup.py as special for triggering Python package installation.

if [[ -f requirements.txt ]]; then
  echo "---> Installing dependencies ..."
  pip install --user -r requirements.txt
fi

if [[ -f setup.py ]]; then
  echo "---> Installing application ..."
  python setup.py develop --user
fi
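
The proposed rule, that requirements.txt or setup.py marks a Python source repo, can be sketched in shell terms as:

```shell
#!/bin/bash
# Sketch of the suggested detection: a directory is a Python app if it
# has requirements.txt or setup.py, mirroring what assemble acts on.
detect_python() {
  local dir="$1"
  [[ -f "$dir/requirements.txt" || -f "$dir/setup.py" ]]
}

# Try it against a throwaway directory containing only a setup.py.
d=$(mktemp -d)
touch "$d/setup.py"
if detect_python "$d"; then
  verdict="python detected"
else
  verdict="not python"
fi
echo "$verdict"
```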

Comments?

SCL package activation no longer working for build if assemble uses /bin/sh.

Am trying the latest all-in-one VM we have, based on 1.1.3 alpha 3, and there is a change in behaviour around builds. Don't know if this is because of a change in the s2i used, the S2I builder image I am using as base, or even some CentOS package.

I am using sourceStrategy.scripts to override default S2I scripts and pull in versions from a HTTP server.

Previously, when the assemble script pulled in this way used /bin/sh in the #! line, scl_enable would be triggered by virtue of the ENV, BASH_ENV or PROMPT_COMMAND hack. This no longer occurs, so the SCL bin directory for Python is not in the path.

I can see that something is not right by adding env to the start of the assemble script. When #! uses /bin/sh I see:

HOSTNAME=wagtail-4-build
OPENSHIFT_BUILD_NAME=wagtail-4
OPENSHIFT_BUILD_SOURCE=https://github.com/GrahamDumpleton/openshift3-wagtail.git
ENV=/opt/app-root/etc/scl_enable
PYTHON_VERSION=2.7
PATH=/opt/app-root/src/.local/bin/:/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
OPENSHIFT_BUILD_NAMESPACE=snakes
STI_SCRIPTS_URL=image:///usr/libexec/s2i
PWD=/opt/app-root/src
STI_SCRIPTS_PATH=/usr/libexec/s2i
OPENSHIFT_BUILD_REFERENCE=master
HOME=/opt/app-root/src
SHLVL=2
WARPDRIVE_DEBUG=true
BASH_ENV=/opt/app-root/etc/scl_enable
PROMPT_COMMAND=. /opt/app-root/etc/scl_enable
_=/usr/bin/env

Note how ENV, BASH_ENV and PROMPT_COMMAND are still set and PATH doesn't include SCL bin directory for Python, nor the LD_LIBRARY_PATH change.

If I modify the assemble script to use /bin/bash, it all works again and env at the start of the script shows:

MANPATH=/opt/rh/python27/root/usr/share/man:
HOSTNAME=wagtail-5-build
OPENSHIFT_BUILD_NAME=wagtail-5
X_SCLS=python27
OPENSHIFT_BUILD_SOURCE=https://github.com/GrahamDumpleton/openshift3-wagtail.git
LD_LIBRARY_PATH=/opt/rh/python27/root/usr/lib64
PYTHON_VERSION=2.7
PATH=/opt/rh/python27/root/usr/bin:/opt/app-root/src/.local/bin/:/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
OPENSHIFT_BUILD_NAMESPACE=snakes
STI_SCRIPTS_URL=image:///usr/libexec/s2i
PWD=/opt/app-root/src
STI_SCRIPTS_PATH=/usr/libexec/s2i
OPENSHIFT_BUILD_REFERENCE=master
HOME=/opt/app-root/src
SHLVL=2
WARPDRIVE_DEBUG=true
XDG_DATA_DIRS=/opt/rh/python27/root/usr/share:/usr/local/share:/usr/share
PKG_CONFIG_PATH=/opt/rh/python27/root/usr/lib64/pkgconfig
_=/usr/bin/env

So the environment variables to trigger scl_enable have been removed as expected, plus the PATH and LD_LIBRARY_PATH changes.

What would have changed such that the ENV, BASH_ENV and PROMPT_COMMAND hack would no longer work when assemble uses /bin/sh rather than /bin/bash?

If this doesn't work for assemble, the hack for triggering scl_enable may not work either when using oc rsh or oc exec with a program implemented as a /bin/sh script. I haven't tested this case, but it needs to be checked, as well as working out why assemble can now only use /bin/bash.
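
For reference, bash only honours BASH_ENV when invoked as bash; invoked as sh, a non-interactive shell reads no startup files at all, which may be consistent with what is observed above (though it doesn't explain why /bin/sh previously worked). A minimal check:

```shell
#!/bin/bash
# bash sources $BASH_ENV for non-interactive scripts; a POSIX sh
# (bash invoked as sh, or dash) does not, so a scl_enable hook wired
# through BASH_ENV never fires for a /bin/sh script.
envfile=$(mktemp)
echo 'MARKER=from_bash_env' > "$envfile"
script=$(mktemp)
echo 'echo "${MARKER:-unset}"' > "$script"

out_bash=$(BASH_ENV="$envfile" bash "$script")
out_sh=$(BASH_ENV="$envfile" sh "$script")
echo "bash: $out_bash / sh: $out_sh"
```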

Are images on Docker hub out of date in respect to OS packages?

If I create a derived image containing:

FROM openshift/python-27-centos7

USER 0

RUN yum install -y --setopt=tsflags=nodocs httpd httpd-devel && \
    yum clean all -y

USER 1001

The Docker build will fail with errors like:

Dependencies Resolved

================================================================================
 Package                Arch       Version                    Repository   Size
================================================================================
Installing:
 httpd                  x86_64     2.4.6-40.el7.centos        base        2.7 M
 httpd-devel            x86_64     2.4.6-40.el7.centos        base        187 k
Installing for dependencies:
 acl                    x86_64     2.2.51-12.el7              base         81 k
 apr                    x86_64     1.4.8-3.el7                base        103 k
 apr-devel              x86_64     1.4.8-3.el7                base        188 k
 apr-util               x86_64     1.5.2-6.el7                base         92 k
 apr-util-devel         x86_64     1.5.2-6.el7                base         76 k
 centos-logos           noarch     70.0.6-3.el7.centos        base         21 M
 cryptsetup-libs        x86_64     1.6.7-1.el7                base        182 k
 cyrus-sasl             x86_64     2.1.26-19.2.el7            base         88 k
 cyrus-sasl-devel       x86_64     2.1.26-19.2.el7            base        309 k
 dbus                   x86_64     1:1.6.12-13.el7            base        306 k
 device-mapper          x86_64     7:1.02.107-5.el7           base        251 k
 device-mapper-libs     x86_64     7:1.02.107-5.el7           base        304 k
 dracut                 x86_64     033-360.el7_2              updates     311 k
 expat-devel            x86_64     2.1.0-8.el7                base         56 k
 httpd-tools            x86_64     2.4.6-40.el7.centos        base         82 k
 initscripts            x86_64     9.49.30-1.el7              base        429 k
 kmod                   x86_64     20-5.el7                   base        114 k
 kmod-libs              x86_64     20-5.el7                   base         47 k
 kpartx                 x86_64     0.4.9-85.el7               base         59 k
 libdb-devel            x86_64     5.3.21-19.el7              base         38 k
 mailcap                noarch     2.1.41-2.el7               base         31 k
 openldap-devel         x86_64     2.4.40-8.el7               base        799 k
 qrencode-libs          x86_64     3.4.1-3.el7                base         50 k
 systemd                x86_64     219-19.el7                 base        5.1 M
 systemd-libs           x86_64     219-19.el7                 base        356 k
 sysvinit-tools         x86_64     2.88-14.dsf.el7            base         63 k
Updating for dependencies:
 cyrus-sasl-lib         x86_64     2.1.26-19.2.el7            base        155 k
 dbus-libs              x86_64     1:1.6.12-13.el7            base        151 k
 libdb                  x86_64     5.3.21-19.el7              base        718 k
 libdb-utils            x86_64     5.3.21-19.el7              base        102 k
 openldap               x86_64     2.4.40-8.el7               base        348 k

Transaction Summary
================================================================================
Install  2 Packages (+26 Dependent packages)
Upgrade             (  5 Dependent packages)

Total download size: 35 M
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
--------------------------------------------------------------------------------
Total                                              876 kB/s |  35 MB  00:41
Running transaction check
Running transaction test


Transaction check error:
  file /usr/lib64/libsystemd-daemon.so.0 from install of systemd-libs-219-19.el7.x86_64 conflicts with file from package systemd-container-libs-208.20-6.el7.centos.x86_64
  file /usr/lib64/libsystemd-id128.so.0 from install of systemd-libs-219-19.el7.x86_64 conflicts with file from package systemd-container-libs-208.20-6.el7.centos.x86_64
  file /usr/lib64/libsystemd-journal.so.0 from install of systemd-libs-219-19.el7.x86_64 conflicts with file from package systemd-container-libs-208.20-6.el7.centos.x86_64
  file /usr/lib64/libsystemd-login.so.0 from install of systemd-libs-219-19.el7.x86_64 conflicts with file from package systemd-container-libs-208.20-6.el7.centos.x86_64
.....

If I build the openshift/python-27-centos7 image myself on local Docker (not using the Makefile, and so not squashed, but otherwise the same) and change FROM to use the freshly built image, then the installation of httpd and httpd-devel in a derived image works fine.
