
docker-django-example's Introduction

An example Django + Docker app


You could use this example app as a base for your new project or as a guide to Dockerize your existing Django app.

The example app is minimal but it wires up a number of things you might use in a real-world Django app, while not being loaded up with a million personal opinions.

For the Docker bits, everything included is an accumulation of Docker best practices based on building and deploying dozens of assorted Dockerized web apps since late 2014.

This app is using Django 5.0.6 and Python 3.12.3. The screenshot doesn't get updated every time I bump the versions:

Screenshot


Tech stack

If you don't like some of these choices, that's no problem; you can swap them out for something else on your own.

Back-end

Front-end

But what about JavaScript?!

Picking a JS library is a very app-specific decision: it depends on which library you like, and on whether your app is going to be mostly Django templates with sprinkles of JS or an API back-end.

This isn't an exhaustive list but here's a few reasonable choices depending on how you're building your app:

On the bright side, with esbuild being set up you can use any (or none) of these solutions very easily. You can follow a specific library's installation guide to get up and running in no time.

Personally I'm going to be using Hotwire Turbo + Stimulus in most newer projects.

Notable opinions and extensions

Django is an opinionated framework and I've added a few extra opinions based on having Dockerized and deployed a number of Django projects. Here are a few (but not all) noteworthy additions and changes.

  • Packages and extensions:
  • Linting and formatting:
    • flake8 is used to lint the code base
    • isort is used to auto-sort Python imports
    • black is used to format the code base
  • Django apps:
    • Add pages app to render a home page
    • Add up app to provide a few health check pages
  • Config:
    • Log to STDOUT so that Docker can consume and deal with log output
    • Extract a bunch of configuration settings into environment variables
    • Rename project directory from its custom name to config/
    • src/config/settings.py and the .env file handle configuration in all environments
  • Front-end assets:
    • assets/ contains all your CSS, JS, images, fonts, etc. and is managed by esbuild
    • Custom 502.html and maintenance.html pages
    • Generate favicons using modern best practices
  • Django defaults that are changed:
    • Use Redis as the default Cache back-end
    • Use signed cookies as the session back-end
    • public/ is the static directory where Django will serve static files from
    • public_collected/ is where collectstatic will write its files to
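As a sketch of the env-var-driven configuration mentioned above (the `DEBUG` variable name matches the `.env` docs, but the exact parsing here is an illustration, not the project's literal settings code):

```shell
# Illustrative only: coercing an env var into a boolean, strtobool-style
export DEBUG=true

DEBUG_BOOL=$(python3 -c 'import os; print(os.getenv("DEBUG", "false").lower() in ("true", "1", "yes"))')
echo "DEBUG=$DEBUG_BOOL"   # DEBUG=True
```

The benefit of this pattern is that the same settings module works in every environment; only the exported values change.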

Besides the Django app itself:

  • Docker support has been added, which covers any files having *docker* in their names
  • GitHub Actions have been set up
  • A requirements-lock.txt file has been introduced using pip3. The management of this file is fully automated by the commands found in the run file. We'll cover this in more detail when we talk about updating dependencies.

Running this app

You'll need to have Docker installed. It's available on Windows, macOS and most distros of Linux. If you're new to Docker and want to learn it in detail check out the additional resources links near the bottom of this README.

You'll also need to enable Docker Compose v2 support if you're using Docker Desktop. On native Linux without Docker Desktop you can install it as a plugin to Docker. It's been generally available for a while now and is stable. This project uses specific Docker Compose v2 features that only work with Docker Compose v2 2.20.2+.

If you're using Windows, it's expected that you'll follow along inside of WSL or WSL 2 because we're going to be running shell commands. You can always modify these commands for PowerShell if you want.

Clone this repo anywhere you want and move into the directory:

git clone https://github.com/nickjj/docker-django-example hellodjango
cd hellodjango

# Optionally checkout a specific tag, such as: git checkout 0.10.0

Copy an example .env file because the real one is git ignored:

cp .env.example .env

Build everything:

The first time you run this it's going to take 5-10 minutes depending on your internet connection speed and computer's hardware specs. That's because it's going to download a few Docker images and build the Python + Yarn dependencies.

docker compose up --build

Now that everything is built and running we can treat it like any other Django app.

Did you receive a depends_on "Additional property required is not allowed" error? Please update to at least Docker Compose v2.20.2+ or Docker Desktop 4.22.0+.

Did you receive an error about a port being in use? Chances are it's because something on your machine is already running on port 8000. Check out the docs in the .env file for the DOCKER_WEB_PORT_FORWARD variable to fix this.

Did you receive a permission denied error? Chances are you're running native Linux and your uid:gid aren't 1000:1000 (you can verify this by running id). Check out the docs in the .env file to customize the UID and GID variables to fix this.
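You can confirm your own ids like this before touching the `UID` / `GID` values documented in `.env`:

```shell
# Print the uid:gid your bind-mounted files will be owned by on native Linux
echo "$(id -u):$(id -g)"

# If this isn't 1000:1000, set the UID and GID variables in .env to match
```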

Set up the initial database:

# You can run this from a 2nd terminal.
./run manage migrate

We'll go over that ./run script in a bit!

Check it out in a browser:

Visit http://localhost:8000 in your favorite browser.

Linting the code base:

# You should get no output (that means everything is operational).
./run lint

Sorting Python imports in the code base:

# You should see that everything is unchanged (imports are already formatted).
./run format:imports

Formatting the code base:

# You should see that everything is unchanged (it's all already formatted).
./run format

There's also a ./run quality command to run the above 3 commands together.
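Under the hood that's just fail-fast chaining: each step only runs if the previous one succeeded. A hedged sketch, where a stub function stands in for the real tools:

```shell
# A stand-in for each quality step; the real ones would be flake8 / isort / black
step() { echo "running: $1"; }

# && short-circuits: if any step failed, the later ones wouldn't run
step lint && step format:imports && step format
```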

Running the test suite:

# You should see all passing tests. Warnings are typically ok.
./run manage test

Stopping everything:

# Stop the containers and remove a few Docker related resources associated to this project.
docker compose down

You can start things up again with docker compose up and unlike the first time it should only take seconds.

Files of interest

I recommend checking out most files and searching the code base for TODO:, but please review the .env and run files before diving into the rest of the code and customizing it. Also, you should hold off on changing anything until we cover how to customize this example app's name with an automated script (coming up next in the docs).

.env

This file is ignored from version control so it will never be committed. There are a number of environment variables defined here that control certain options and behavior of the application. Everything is documented there.

Feel free to add new variables as needed. This is where you should put all of your secrets as well as configuration that might change depending on your environment (specific dev boxes, CI, production, etc.).

run

You can run ./run to get a list of commands and each command has documentation in the run file itself.

It's a shell script with a number of functions defined to help you interact with this project. It's basically a Makefile except with fewer limitations. For example, as a shell script it lets us pass arbitrary arguments to another program.

This comes in handy to run various Docker commands because sometimes these commands can be a bit long to type. Feel free to add as many convenience functions as you want. This file's purpose is to make your experience better!

If you get tired of typing ./run you can always create a shell alias with alias run=./run in your ~/.bash_aliases or equivalent file. Then you'll be able to run run instead of ./run.
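If aliases are awkward in your setup (they don't expand in non-interactive shells), a tiny wrapper function behaves the same way. The demo below uses a stub `./run` in a throwaway directory, purely for illustration:

```shell
# Throwaway demo: a stub ./run stands in for the real script
cd "$(mktemp -d)"
printf '#!/bin/sh\necho "ran: $*"\n' > run
chmod +x run

run() { ./run "$@"; }   # behaves like: alias run=./run

run lint   # prints: ran: lint
```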

Running a script to automate renaming the project

The app is named hello right now but chances are your app will have a different name. Since the app is already created we'll need to do a find / replace on a few variants of the string "hello" and update a few Docker related resources.

And by "we" I mean I created a zero-dependency shell script that does all of the heavy lifting for you. All you have to do is run the script below.

Run the rename-project script included in this repo:

# The script takes 2 arguments.
#
# The first one is the lower case version of your app's name, such as myapp or
# my_app depending on your preference.
#
# The second one is used for your app's module name. For example if you used
# myapp or my_app for the first argument you would want to use MyApp here.
bin/rename-project myapp MyApp

The bin/rename-project script is going to:

  • Remove any Docker resources for your current project
  • Perform a number of find / replace actions
  • Optionally initialize a new git repo for you

Afterwards you can delete this script because its only purpose is to help you change this project's name without depending on complicated project generator tools or 3rd party dependencies.

If you're not comfy running the script or it doesn't work for whatever reason, you can check it out and perform the actions manually. It's mostly running a find / replace across files and then renaming a few directories and files.
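The manual route boils down to a recursive find / replace. A rough sketch (GNU `sed` assumed; demonstrated on a throwaway file rather than the real repo, and the real script also handles directory renames and Docker resources):

```shell
# Throwaway demo of the find / replace step (GNU sed assumed)
demo_dir="$(mktemp -d)"
echo "from hello.settings import base" > "$demo_dir/example.py"

# Replace the old project name across all matching files, skipping .git
grep -rl --exclude-dir=.git 'hello' "$demo_dir" | xargs sed -i 's/hello/myapp/g'

cat "$demo_dir/example.py"   # from myapp.settings import base
```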

Start and setup the project:

This won't take as long as before because Docker can re-use most things. We'll also need to set up our database since a new one will be created for us by Docker.

docker compose up --build

# Then in a 2nd terminal once it's up and ready.
./run manage migrate

Sanity check to make sure the tests still pass:

It's always a good idea to make sure things are in a working state before adding custom changes.

# You can run this from the same terminal as before.
./run quality
./run manage test

If everything passes now you can optionally git add -A && git commit -m "Initial commit" and start customizing your app. Alternatively you can wait until you develop more of your app before committing anything. It's up to you!

Tying up a few loose ends:

You'll probably want to create a fresh CHANGELOG.md file for your project. I like following the style guide at https://keepachangelog.com/ but feel free to use whichever style you prefer.

Since this project is MIT licensed you should keep my name and email address in the LICENSE file to adhere to that license's agreement, but you can also add your name and email on a new line.

If you happen to base your app off this example app or write about any of the code in this project it would be rad if you could credit this repo by linking to it. If you want to reference me directly please link to my site at https://nickjanetakis.com. You don't have to do this, but it would be very much appreciated!

Updating dependencies

Let's say you've customized your app and it's time to make a change to your requirements.txt or package.json file.

Without Docker you'd normally run pip3 install -r requirements.txt or yarn install. With Docker it's basically the same thing, and since these commands are in our Dockerfile we can get away with running docker compose build, but don't run that just yet.

In development:

You can run ./run pip3:outdated or ./run yarn:outdated to get a list of outdated dependencies based on what you currently have installed. Once you've figured out what you want to update, go make those updates in your requirements.txt and / or assets/package.json file.

Then to update your dependencies you can run ./run pip3:install or ./run yarn:install. That'll make sure any lock files get copied from Docker's image (thanks to volumes) into your code repo and now you can commit those files to version control like usual.

You can check out the run file to see what these commands do in more detail.

As for the requirements lock file, it ensures that the exact same versions of every package you have (including dependencies of dependencies) get used the next time you build the project. This file is the output of running pip3 freeze. You can check how it works by looking at bin/pip3-install.

You should never modify the lock files by hand. Add your top level Python dependencies to requirements.txt and your top level JavaScript dependencies to assets/package.json, then run the ./run command(s) mentioned earlier.
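Conceptually, the lock step is just a freeze of the resolved environment. A hedged sketch (not the project's literal `bin/pip3-install`, which you should read for the real flow; this writes to a temp file so it can't clobber a real lock file):

```shell
# Conceptual sketch only: a lock file is a full freeze of the environment,
# pinning every installed package (dependencies of dependencies included)
lock_file="$(mktemp)"
python3 -m pip freeze > "$lock_file"

# Each line of the freeze output pins one package to an exact version
wc -l < "$lock_file"
```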

In CI:

You'll want to run docker compose build since it will use existing lock files if present. You can also check out the complete CI test pipeline in the run file under the ci:test function.

In production:

This is usually a non-issue since you'll be pulling down pre-built images from a Docker registry but if you decide to build your Docker images directly on your server you could run docker compose build as part of your deploy pipeline.

See a way to improve something?

If you see anything that could be improved please open an issue or start a PR. Any help is much appreciated!

Additional resources

Now that you have your app ready to go, it's time to build something cool! If you want to learn more about Docker, Django and deploying a Django app, here are a couple of free and paid resources. There's Google too!

Learn more about Docker and Django

Official documentation

Courses / books

Deploy to production

I'm creating an in-depth course related to deploying Dockerized web apps. If you want to get notified when it launches with a discount, and potentially get free videos while the course is being developed, then sign up here.

About the author

I'm a self-taught developer and have been freelancing for the last ~20 years. You can read about everything I've learned along the way on my site at https://nickjanetakis.com.

There are hundreds of blog posts and a couple of video courses on web development and deployment topics. I also have a podcast where I talk with folks about running web apps in production.

docker-django-example's People

Contributors

dylanschultzie, nickjj


docker-django-example's Issues

Running as root?

Is there any concern that the primary container is running as root? I know it is a container but I have heard that it is good practice to run as a non-root user.

Celery worker crashes

Hi,
Fresh build, didn't modify anything.

Ubuntu 21.10
Docker version 20.10.12, build e91ed57
docker-compose version 1.29.2, build 5becea4c

worker_1    | [2022-03-01 10:38:31,399: CRITICAL/MainProcess] Unrecoverable error: TypeError('Channel._connparams.<locals>.Connection.disconnect() takes 1 positional argument but 2 were given')
worker_1    | Traceback (most recent call last):
worker_1    |   File "/home/python/.local/lib/python3.10/site-packages/kombu/transport/virtual/base.py", line 923, in create_channel
worker_1    |     return self._avail_channels.pop()
worker_1    | IndexError: pop from empty list
worker_1    | 
worker_1    | During handling of the above exception, another exception occurred:
worker_1    | 
worker_1    | Traceback (most recent call last):
worker_1    |   File "/home/python/.local/lib/python3.10/site-packages/redis/retry.py", line 45, in call_with_retry
worker_1    |     return do()
worker_1    |   File "/home/python/.local/lib/python3.10/site-packages/redis/connection.py", line 608, in <lambda>
worker_1    |     lambda: self._connect(), lambda error: self.disconnect(error)
worker_1    |   File "/home/python/.local/lib/python3.10/site-packages/redis/connection.py", line 673, in _connect
worker_1    |     raise err
worker_1    |   File "/home/python/.local/lib/python3.10/site-packages/redis/connection.py", line 661, in _connect
worker_1    |     sock.connect(socket_address)
worker_1    | TimeoutError: [Errno 110] Connection timed out
worker_1    | 
worker_1    | During handling of the above exception, another exception occurred:
worker_1    | 
worker_1    | Traceback (most recent call last):
worker_1    |   File "/home/python/.local/lib/python3.10/site-packages/celery/worker/worker.py", line 203, in start
worker_1    |     self.blueprint.start(self)
worker_1    |   File "/home/python/.local/lib/python3.10/site-packages/celery/bootsteps.py", line 116, in start
worker_1    |     step.start(parent)
worker_1    |   File "/home/python/.local/lib/python3.10/site-packages/celery/bootsteps.py", line 365, in start
worker_1    |     return self.obj.start()
worker_1    |   File "/home/python/.local/lib/python3.10/site-packages/celery/worker/consumer/consumer.py", line 326, in start
worker_1    |     blueprint.start(self)
worker_1    |   File "/home/python/.local/lib/python3.10/site-packages/celery/bootsteps.py", line 116, in start
worker_1    |     step.start(parent)
worker_1    |   File "/home/python/.local/lib/python3.10/site-packages/celery/worker/consumer/connection.py", line 21, in start
worker_1    |     c.connection = c.connect()
worker_1    |   File "/home/python/.local/lib/python3.10/site-packages/celery/worker/consumer/consumer.py", line 422, in connect
worker_1    |     conn = self.connection_for_read(heartbeat=self.amqheartbeat)
worker_1    |   File "/home/python/.local/lib/python3.10/site-packages/celery/worker/consumer/consumer.py", line 428, in connection_for_read
worker_1    |     return self.ensure_connected(
worker_1    |   File "/home/python/.local/lib/python3.10/site-packages/celery/worker/consumer/consumer.py", line 454, in ensure_connected
worker_1    |     conn = conn.ensure_connection(
worker_1    |   File "/home/python/.local/lib/python3.10/site-packages/kombu/connection.py", line 382, in ensure_connection
worker_1    |     self._ensure_connection(*args, **kwargs)
worker_1    |   File "/home/python/.local/lib/python3.10/site-packages/kombu/connection.py", line 434, in _ensure_connection
worker_1    |     return retry_over_time(
worker_1    |   File "/home/python/.local/lib/python3.10/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
worker_1    |     return fun(*args, **kwargs)
worker_1    |   File "/home/python/.local/lib/python3.10/site-packages/kombu/connection.py", line 878, in _connection_factory
worker_1    |     self._connection = self._establish_connection()
worker_1    |   File "/home/python/.local/lib/python3.10/site-packages/kombu/connection.py", line 813, in _establish_connection
worker_1    |     conn = self.transport.establish_connection()
worker_1    |   File "/home/python/.local/lib/python3.10/site-packages/kombu/transport/virtual/base.py", line 947, in establish_connection
worker_1    |     self._avail_channels.append(self.create_channel(self))
worker_1    |   File "/home/python/.local/lib/python3.10/site-packages/kombu/transport/virtual/base.py", line 925, in create_channel
worker_1    |     channel = self.Channel(connection)
worker_1    |   File "/home/python/.local/lib/python3.10/site-packages/kombu/transport/redis.py", line 679, in __init__
worker_1    |     self.client.ping()
worker_1    |   File "/home/python/.local/lib/python3.10/site-packages/redis/commands/core.py", line 954, in ping
worker_1    |     return self.execute_command("PING", **kwargs)
worker_1    |   File "/home/python/.local/lib/python3.10/site-packages/redis/client.py", line 1173, in execute_command
worker_1    |     conn = self.connection or pool.get_connection(command_name, **options)
worker_1    |   File "/home/python/.local/lib/python3.10/site-packages/redis/connection.py", line 1370, in get_connection
worker_1    |     connection.connect()
worker_1    |   File "/home/python/.local/lib/python3.10/site-packages/redis/connection.py", line 607, in connect
worker_1    |     sock = self.retry.call_with_retry(
worker_1    |   File "/home/python/.local/lib/python3.10/site-packages/redis/retry.py", line 48, in call_with_retry
worker_1    |     fail(error)
worker_1    |   File "/home/python/.local/lib/python3.10/site-packages/redis/connection.py", line 608, in <lambda>
worker_1    |     lambda: self._connect(), lambda error: self.disconnect(error)
worker_1    | TypeError: Channel._connparams.<locals>.Connection.disconnect() takes 1 positional argument but 2 were given
hellodjango_worker_1 exited with code 0

any clues?

HTML templates reloading issue (TailwindCSS)

Hello Nick,

First off, thank you loads for the work you put into this bundle, you have inspired me greatly to dive into Docker and containerization.

  • System: W10, running everything under WSL2, so no mounting issues at all.

The issue at hand relates to hot reloading while developing. I find that whenever I modify a *.py file the webapp reloads just fine, showing the changes instantaneously. This is not the case when it's a *.html template being modified. The CSS and JS watchers are up and running. I checked the tailwind.config.js file and it's pointing to the app folder, recursively gathering all *.html files, so I don't know where to look now.

Thank you

`postgres` setup

Hello,

Thanks again for the great repo. I have a question about how you set up Postgres for this project. I don't have a full-stack/Django background and I was not able to find the answer anywhere I looked.

When doing the very first migration, the current setup does not recognize my local Postgres database, so it throws an error saying that my target db name, let's call it testdb, does not exist. Even if I create this database in my local Postgres server, it is still not recognized.

It only works when I docker exec into the postgres docker image and create the database in the running psql instance.

I also tried updating the docker-compose file to point from "postgres:/var/lib/postgresql/data" to my local data directory (/var/lib/postgresql/16/main) but still the same problem.

Do you have an idea why this is happening? Is it possible to point it at my local Postgres?

edit:

I'm using Ubuntu 22, Postgres 16.2

unbound variable

Hello,

I've followed your Running this app manual and changed the name of the project with the rename-project script. Afterward, I built the project and ran the migration. So, to test everything I wanted to run the 3 suggested ./run commands, but for lint and format I get the following outputs:
./run lint
./run format
./run: line 56: @: unbound variable
./run: line 61: @: unbound variable

I guess something went wrong. Do you have any suggestions?

Best regards

Double public and public_collected directories

After building this, and running ./run manage collectstatic I've noticed there are four directories for static assets in the web container (in addition to /app/assets), but only two are used:

  • /public - always empty
  • /public_collected - fills with the assets after collectstatic
  • /app/public - the static assets collectstatic collects
  • /app/public_collected - always empty

I wonder if this means there are some unnecessary directory creations in the Dockerfile, but I'm not sure which ones you intended to be used. Or have I misunderstood something?

Add `LOG_LEVEL` env var?

Really enjoying the quality of this, I was wondering if it'd be handy to have a LOG_LEVEL env var (detached from DEBUG) that would allow Django to output logs when running with gunicorn in production.

New project name psql role doesn't exist

Thanks for this great setup guide!
I was able to successfully change project names on an older build around the Django 4 update.
However on the latest, I can ./run manage migrate on the default name, but if I change project names, docker-compose up --build, then I run into a problem doing ./run manage migrate.

From Docker logs:
postgres_1 | 2022-01-11 21:42:09.081 UTC [33] FATAL: password authentication failed for user "new_project"
postgres_1 | 2022-01-11 21:42:09.081 UTC [33] DETAIL: Role "new_project" does not exist.

From shell stdout:
django.db.utils.OperationalError: FATAL: password authentication failed for user "new_project"

My .env has the:
export POSTGRES_USER=new_project
export POSTGRES_PASSWORD=password

Service 'worker' failed to build : Build failed

Hi, thanks for the great starter example for someone wanting to get his feet wet in Django.

I have followed the initial steps found in #Running this app.

And I'm facing this error:

Building worker
[+] Building 6.3s (18/22)
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 38B 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 35B 0.0s
=> [internal] load metadata for docker.io/library/python:3.10.2-slim-bullseye 5.5s
=> [internal] load metadata for docker.io/library/node:16.14.0-bullseye-slim 2.3s
=> [assets 1/7] FROM docker.io/library/node:16.14.0-bullseye-slim@sha256:f00e66f4e3d5f3cf1322049440f9d84c79462cf2157f6d9bac26cec8d31f950e 0.0s
=> [internal] load build context 0.3s
=> => transferring context: 3.03kB 0.2s
=> [app 1/10] FROM docker.io/library/python:3.10.2-slim-bullseye@sha256:6faf002f0bce2ce81bec4a2253edddf0326dad23fe4e95e90d7790eaee653da5 0.0s
=> CACHED [app 2/10] WORKDIR /app 0.0s
=> CACHED [app 3/10] RUN apt-get update && apt-get install -y --no-install-recommends build-essential curl libpq-dev && rm -rf /var/lib/apt/lists/* /usr/share/doc /usr/share/man && apt-get clean 0.0s
=> CACHED [app 4/10] COPY --chown=python:python requirements*.txt ./ 0.0s
=> CACHED [app 5/10] COPY --chown=python:python bin/ ./bin 0.0s
=> CACHED [assets 2/7] WORKDIR /app/assets 0.0s
=> CACHED [assets 3/7] RUN apt-get update && apt-get install -y --no-install-recommends build-essential && rm -rf /var/lib/apt/lists/* /usr/share/doc /usr/share/man && apt-get clean && mkdir -p / 0.0s
=> CACHED [assets 4/7] COPY --chown=node:node assets/package.json assets/yarn ./ 0.0s
=> CACHED [assets 5/7] RUN yarn install && yarn cache clean 0.0s
=> CACHED [assets 6/7] COPY --chown=node:node . .. 0.0s
=> CACHED [assets 7/7] RUN if [ "development" != "development" ]; then ../run yarn:build:js && ../run yarn:build:css; else mkdir -p /app/public; fi 0.0s
=> ERROR [app 6/10] RUN chmod 0755 bin/* && bin/pip3-install 0.3s
[app 6/10] RUN chmod 0755 bin/* && bin/pip3-install:
#11 0.310 /usr/bin/env: ‘bash\r’: No such file or directory
executor failed running [/bin/sh -c chmod 0755 bin/* && bin/pip3-install]: exit code: 127
ERROR: Service 'worker' failed to build : Build failed

The relevant part of the error is:

ERROR [app 6/10] RUN chmod 0755 bin/* && bin/pip3-install 0.3s
[app 6/10] RUN chmod 0755 bin/* && bin/pip3-install:
#11 0.310 /usr/bin/env: ‘bash\r’: No such file or directory
executor failed running [/bin/sh -c chmod 0755 bin/* && bin/pip3-install]: exit code: 127

Code reloading stopped working consistently when updating HTML templates

Set up

  • On the latest main branch, if you edit an HTML file gunicorn isn't picking up the change.

  • Django 4.1.X with Python 3.11.X but it's also an issue with Python 3.10.X too.

  • I tried a bunch of different code editors because I've run into situations where Vim's way of saving a file made it incompatible with certain file watchers. It's an issue with all of them that I tried (nano, Vim and VSCode).

Situation

After changing watched files you should see something like this:

hellodjango-web-1       | [2023-02-18 00:22:45 +0000] [8] [INFO] Worker reloading: /app/src/config/gunicorn.py modified
hellodjango-web-1       | [2023-02-18 00:22:45 +0000] [8] [INFO] Worker exiting (pid: 8)
hellodjango-web-1       | [2023-02-18 00:22:45 +0000] [10] [INFO] Booting worker with pid: 10

The above sometimes works if you edit the src/templates/layouts/index.html or src/pages/templates/pages/home.html and always works if you edit a Python file such as any views.py file. JS and CSS also get updated on change in development which is using esbuild and tailwind instead of gunicorn.

The appropriate env vars defined in the .env file appear to be set in the container's runtime environment:

nick@kitt ~/src/github/docker-django-example (main) $ run shell
python@49922f0f9045:/app/src$ python3
Python 3.11.1 (main, Feb  4 2023, 11:23:15) [GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> from distutils.util import strtobool
>>> bool(strtobool(os.getenv("DEBUG", "false")))
True
>>> bool(strtobool(os.getenv("WEB_RELOAD", "false")))
True
>>>

I also confirmed this in the app by going to the src/config/settings.py file and dropping in:

print(bool(strtobool(os.getenv("DEBUG", "false"))))
print(DEBUG)

The Docker Compose logs produce True for both. That is what we want since both are set to true in the .env.

The TEMPLATES configuration is:

TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "DIRS": [os.path.join(BASE_DIR, "templates")],
        "APP_DIRS": True,
        "OPTIONS": {
            "context_processors": [
                "django.template.context_processors.debug",
                "django.template.context_processors.request",
                "django.contrib.auth.context_processors.auth",
                "django.contrib.messages.context_processors.messages",
            ],
        },
    },
]

Is that not correct?

I also removed the assets container thinking maybe esbuild 0.17.X is having an effect due to both containers sharing the same Docker bind mount and I wanted to reduce variables but the same problem happens. My other Flask example app is using the same version of gunicorn and esbuild and its HTML code reloading is working.

Little help?

Updating dependencies during development

The readme states that I need to execute "./run pip3:install" to update dependencies during development.

Should this only be run when you want to "lock in" your dependencies to your Docker container? I.e. if I'm working in my PyCharm IDE and adding new Python modules via pip3, running the "./run pip3:install" command takes time to rebuild everything (3-4 mins). I assume there's no need to rebuild after each module install, as this would be impractical.

Just want to understand this process as a docker noob.

update docker compose sub commands

First, Thank you for the awesome repo, really helpful.

I would suggest updating docker-compose subcommands to match the new docker version.

Docker Compose is now in the Docker CLI, try `docker compose up`

Connection Pooling for Django Project

Hi @nickjj, thank you for making this repository. I would have a look at this repo for my reference.
I'm looking for a Django project example that uses connection pooling (pgbouncer or pgpool) but seems like this repo does not use this.

My concern is that Django's DB connection is created during the request and closed after the response is returned, based on the documentation https://docs.djangoproject.com/en/3.2/ref/databases/#general-notes

I think using connection pooling would make this project better for production-ready project reference.

Thank you in advance

Docker compose fails on WSL

Running docker compose up --build fails for me with message:

=> ERROR [worker app  6/10] RUN chmod 0755 bin/* && bin/pip3-install                                              0.7s
------
 > [worker app  6/10] RUN chmod 0755 bin/* && bin/pip3-install:
0.636 /usr/bin/env: ‘bash\r’: No such file or directory
0.636 /usr/bin/env: use -[v]S to pass options in shebang lines
------
failed to solve: process "/bin/sh -c chmod 0755 bin/* && bin/pip3-install" did not complete successfully: exit code: 127

I am on Windows. I started WSL from within PowerShell by running wsl -d Ubuntu, then ran the docker compose command inside the directory of this cloned project.

Is this a WSL issue? I looked inside my Ubuntu WSL and the bin directory does not exist...

EDIT: I now realise this is the same error as in issue #45. The user there ran a dos2unix command to fix potential line ending issues originating from a text editor. Note that I have not opened or changed any file in a text editor yet! I cloned the repo, started my WSL, performed the copying of the .env file inside the WSL and then tried to compose...
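For anyone hitting this: the `‘bash\r’` in the error means the scripts have Windows (CRLF) line endings, so the kernel looks for an interpreter literally named `bash\r`. Git on Windows can introduce this at clone time when `core.autocrlf` is `true`, even if you never open a file. A minimal reproduction and fix, with no need for dos2unix:

```shell
# Reproduce: a script saved with CRLF endings has an invisible \r after
# the shebang, so exec looks for "/usr/bin/env bash\r" and fails.
printf '#!/usr/bin/env bash\r\necho ok\r\n' > demo-crlf.sh
chmod +x demo-crlf.sh
# Fix: strip the trailing carriage returns (same effect as dos2unix):
sed -i 's/\r$//' demo-crlf.sh
./demo-crlf.sh   # prints "ok" once the endings are LF-only
# To stop git re-introducing CRLF on Windows checkouts:
#   git config --global core.autocrlf input
```

Running the same fix over the repo's bin/ scripts (or re-cloning inside the WSL filesystem with autocrlf set to input) should let the Docker build's `bin/pip3-install` step run.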

tailwindcss --watch is not detecting changes

Hi Nick,

I have come across unexpected behavior where tailwindcss ... --watch is not detecting changes. Steps to reproduce:

  1. git clone, cp .env.example .env, cp docker-compose.override.example.yml docker-compose.override.yml
  2. docker-compose up --build
  3. make changes in ../src/pages/templates/pages/home.html
  4. no "Rebuilding" activity is observed in docker logs -f hellodjango_css_1.

Forked repo: terencetwuo@ea53c77

Any help is much appreciated.

IPDB debugging

I have put a pdb breakpoint in my code to debug and am using docker attach to connect to the web container. I hit the breakpoint but can't step through the code...
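docker attach only gives pdb a usable prompt if the service both allocates a TTY and keeps stdin open; without those, you see the breakpoint output but can't type into it. A sketch using an extra, hypothetical override file (the `web` service name matches this repo; check whether your existing override already sets these keys):

```shell
# pdb over "docker attach" needs an interactive stdin and a TTY on the
# service. Writing them into a separate compose file keeps the main
# override untouched (docker-compose.pdb.yml is a made-up name):
cat > docker-compose.pdb.yml <<'EOF'
services:
  web:
    tty: true
    stdin_open: true
EOF
# Then layer it in and attach (detach with Ctrl-p Ctrl-q, not Ctrl-c,
# or you'll stop the container):
#   docker compose -f docker-compose.yml -f docker-compose.override.yml \
#     -f docker-compose.pdb.yml up -d
#   docker attach <project>-web-1
```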

ERROR: Service 'css' failed to build : COPY failed: forbidden path outside the build context:

Hi,

Great project, should save a lot of time! I'm trying to run on Ubuntu 20.04.3 LTS and I get the following error:

Step 10/12 : COPY --chown=node:node ../ ../
ERROR: Service 'css' failed to build : COPY failed: forbidden path outside the build context: ../ ()

Commenting out line 23 of the Dockerfile allows the build to finish:

COPY --chown=node:node ../ ../

Perhaps I'm doing something wrong. Django starts up and everything looks good, but I'm unsure of the impact. Using WSL2 on Windows with the same OS works OK.

Thanks!

Issues with setup

I followed all the steps and I keep getting this error. I am using Windows 11.

ERROR [web app 6/10] RUN chmod 0755 bin/* && bin/pip3-install 0.4s

[web app 6/10] RUN chmod 0755 bin/* && bin/pip3-install:
0.274 /usr/bin/env: ‘bash\r’: No such file or directory
0.274 /usr/bin/env: use -[v]S to pass options in shebang lines


failed to solve: process "/bin/sh -c chmod 0755 bin/* && bin/pip3-install" did not complete successfully: exit code: 127

docker compose build : Unable to locate package build-essential

Hello,

I am running through the tutorial, but my docker compose build keeps failing.

Command I run

docker compose up --build

output

[+] Building 240.1s (20/50)                                                                docker:default
 => [web internal] load build definition from Dockerfile                                             0.0s
 => => transferring dockerfile: 2.20kB                                                               0.0s
 => [web internal] load .dockerignore                                                                0.0s
 => => transferring context: 216B                                                                    0.0s
 => [css internal] load .dockerignore                                                                0.0s
 => => transferring context: 216B                                                                    0.0s
 => [css internal] load build definition from Dockerfile                                             0.0s
 => => transferring dockerfile: 2.20kB                                                               0.0s
 => [worker internal] load metadata for docker.io/library/python:3.12.1-slim-bookworm                0.6s
 => [js internal] load metadata for docker.io/library/node:20.6.1-bookworm-slim                      0.6s
 => [js internal] load .dockerignore                                                                 0.0s
 => => transferring context: 216B                                                                    0.0s
 => [js internal] load build definition from Dockerfile                                              0.0s
 => => transferring dockerfile: 2.20kB                                                               0.0s
 => [worker internal] load build definition from Dockerfile                                          0.0s
 => => transferring dockerfile: 2.20kB                                                               0.0s
 => [worker internal] load .dockerignore                                                             0.0s
 => => transferring context: 216B                                                                    0.0s
 => [worker assets 1/7] FROM docker.io/library/node:20.6.1-bookworm-slim@sha256:2dab2d0e8813ee1601f  0.0s
 => [css internal] load build context                                                                0.0s
 => => transferring context: 7.77kB                                                                  0.0s
 => [js internal] load build context                                                                 0.0s
 => => transferring context: 7.77kB                                                                  0.0s
 => CACHED [worker assets 2/7] WORKDIR /app/assets                                                   0.0s
 => ERROR [css assets 3/7] RUN apt-get update   && apt-get install -y --no-install-recommends bui  239.5s
 => [worker app  1/10] FROM docker.io/library/python:3.12.1-slim-bookworm@sha256:123229cfb27c384ee1  0.0s
 => [web internal] load build context                                                                0.0s
 => => transferring context: 7.77kB                                                                  0.0s
 => CACHED [web app  2/10] WORKDIR /app                                                              0.0s
 => [worker internal] load build context                                                             0.0s
 => => transferring context: 7.77kB                                                                  0.0s
 => CANCELED [web app  3/10] RUN apt-get update   && apt-get install -y --no-install-recommends b  239.5s
------                                                                                                    
 > [css assets 3/7] RUN apt-get update   && apt-get install -y --no-install-recommends build-essential   && rm -rf /var/lib/apt/lists/* /usr/share/doc /usr/share/man   && apt-get clean   && groupmod -g "1001" node && usermod -u "1001" -g "1001" node   && mkdir -p /node_modules && chown node:node -R /node_modules /app:                                                                                                         
20.39 Ign:1 http://deb.debian.org/debian bookworm InRelease                                               
40.41 Ign:2 http://deb.debian.org/debian bookworm-updates InRelease                                       
60.43 Ign:3 http://deb.debian.org/debian-security bookworm-security InRelease                             
80.46 Ign:1 http://deb.debian.org/debian bookworm InRelease                                               
100.5 Ign:2 http://deb.debian.org/debian bookworm-updates InRelease
120.5 Ign:3 http://deb.debian.org/debian-security bookworm-security InRelease
140.5 Ign:1 http://deb.debian.org/debian bookworm InRelease
160.5 Ign:2 http://deb.debian.org/debian bookworm-updates InRelease
179.4 Ign:3 http://deb.debian.org/debian-security bookworm-security InRelease
199.4 Err:1 http://deb.debian.org/debian bookworm InRelease
199.4   Temporary failure resolving 'deb.debian.org'
219.4 Err:2 http://deb.debian.org/debian bookworm-updates InRelease
219.4   Temporary failure resolving 'deb.debian.org'
239.4 Err:3 http://deb.debian.org/debian-security bookworm-security InRelease
239.4   Temporary failure resolving 'deb.debian.org'
239.4 Reading package lists...
239.4 W: Failed to fetch http://deb.debian.org/debian/dists/bookworm/InRelease  Temporary failure resolving 'deb.debian.org'
239.4 W: Failed to fetch http://deb.debian.org/debian/dists/bookworm-updates/InRelease  Temporary failure resolving 'deb.debian.org'
239.4 W: Failed to fetch http://deb.debian.org/debian-security/dists/bookworm-security/InRelease  Temporary failure resolving 'deb.debian.org'
239.4 W: Some index files failed to download. They have been ignored, or old ones used instead.
239.4 Reading package lists...
239.5 Building dependency tree...
239.5 Reading state information...
239.5 E: Unable to locate package build-essential
------
failed to solve: process "/bin/sh -c apt-get update   && apt-get install -y --no-install-recommends build-essential   && rm -rf /var/lib/apt/lists/* /usr/share/doc /usr/share/man   && apt-get clean   && groupmod -g \"${GID}\" node && usermod -u \"${UID}\" -g \"${GID}\" node   && mkdir -p /node_modules && chown node:node -R /node_modules /app" did not complete successfully: exit code: 100

Docker version 24.0.7, build afdd53b
Docker Compose version v2.21.0

Thanks for the help
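The apt failure at the end isn't actually a missing package: "Temporary failure resolving 'deb.debian.org'" means build containers can't resolve DNS at all, so `apt-get update` downloads nothing and `build-essential` is then unknown. A common fix on Linux hosts is pinning the daemon to explicit resolvers (a config sketch; merge by hand if you already have a daemon.json):

```shell
# Give the Docker daemon explicit public DNS servers, then restart it.
# 8.8.8.8 / 1.1.1.1 are illustrative; any reachable resolver works.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "dns": ["8.8.8.8", "1.1.1.1"]
}
EOF
sudo systemctl restart docker
# Re-run "docker compose up --build"; the "Unable to locate package"
# error should disappear once apt-get update can reach deb.debian.org.
```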

Update README.md about need to "chown 1000 ./public*"

The error in #9 occurs under Ubuntu 20, and likely on similar platforms that enforce non-root write privileges for the active user, due to the hardcoded uid 1000 in the container images pulled during the build. This explains build errors like these:

hellodjango-js-1        | mkdir: cannot create directory '../public/js': Permission denied
hellodjango-js-1        | 
hellodjango-js-1        | Task completed in 0m0.001s
hellodjango-css-1       | mkdir: cannot create directory '../public/css': Permission denied
hellodjango-css-1       | 
hellodjango-css-1       | Task completed in 0m0.001s

The resolution is to chown ./public* prior to the first build with docker-compose. You'll likely have to use sudo if the active user doesn't have root privileges:

$ sudo chown 1000 {public,public_collected}
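An alternative to chown'ing to a hardcoded 1000: the failing build step shown elsewhere in these issues runs `groupmod -g "${GID}"` / `usermod -u "${UID}"`, so the images appear to accept your ids at build time. Assuming the compose file reads UID/GID from .env (worth verifying against .env.example), you can make the container user match your host user instead:

```shell
# Record the host user's ids so the in-container user lines up with the
# owner of the bind-mounted ./public* directories (assumes the compose
# file forwards UID/GID as build args; verify in .env.example):
echo "UID=$(id -u)" >> .env
echo "GID=$(id -g)" >> .env
# Rebuild so the new ids are baked into the images:
#   docker compose up --build
```

This avoids leaving host directories owned by a uid that doesn't exist on your machine.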
