testdrivenio / fastapi-tdd-docker
Test-Driven Development with FastAPI and Docker
Hello,
I got stuck on the Text Summarization chapter because generate_summary doesn't execute in the background.
Steps to reproduce:
✔ ~/fastapi-tdd-docker [main|✔]
14:50 $ http --json POST http://localhost:8004/summaries/ url=http://testdriven.io
HTTP/1.1 201 Created
content-length: 38
content-type: application/json
date: Mon, 05 Apr 2021 11:51:16 GMT
server: uvicorn
{
"id": 21,
"url": "http://testdriven.io"
}
✔ ~/fastapi-tdd-docker [main|✔]
14:51 $ http --json GET http://localhost:8004/summaries/21/
HTTP/1.1 200 OK
content-length: 99
content-type: application/json
date: Mon, 05 Apr 2021 11:56:22 GMT
server: uvicorn
{
"created_at": "2021-04-05T11:51:16.676458+00:00",
"id": 21,
"summary": "",
"url": "http://testdriven.io"
}
Meanwhile in docker logs:
web_1 | WARNING: StatReload detected file change in 'app/summarizer.py'. Reloading...
web_1 | INFO: Shutting down
web_1 | INFO:uvicorn.error:Shutting down
web_1 | INFO: Waiting for application shutdown.
web_1 | INFO:uvicorn.error:Waiting for application shutdown.
web_1 | INFO: Shutting down...
web_1 | INFO:uvicorn:Shutting down...
web_1 | INFO: Application shutdown complete.
web_1 | INFO:uvicorn.error:Application shutdown complete.
web_1 | INFO: Finished server process [124]
web_1 | INFO:uvicorn.error:Finished server process [124]
web_1 | INFO: Started server process [126]
web_1 | INFO: Waiting for application startup.
web_1 | INFO: Starting up...
web_1 | INFO: Application startup complete.
web_1 | INFO:uvicorn.error:Application startup complete.
web_1 | INFO: 172.26.0.1:33178 - "POST /summaries/ HTTP/1.1" 201 Created
web_1 | INFO: 172.26.0.1:33182 - "GET /summaries/21/ HTTP/1.1" 200 OK
So the print statement never shows up there. =)
Can anyone help me, please? Thanks in advance.
I had errors when building the Docker images on GitHub; for that reason I cloned the project, but the problem persisted.
Thanks for your support.
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Removing intermediate container 95870fd272bd
 ---> 530ee9ecdf51
Step 11/13 : RUN flake8 .
 ---> Running in fb605485132c
Removing intermediate container fb605485132c
 ---> aff27eef40ab
Step 12/13 : RUN black --exclude=migrations .
 ---> Running in aa374e6712d7
Traceback (most recent call last):
  File "/usr/local/bin/black", line 8, in <module>
    sys.exit(patched_main())
  File "/usr/local/lib/python3.10/site-packages/black/__init__.py", line 1372, in patched_main
    patch_click()
  File "/usr/local/lib/python3.10/site-packages/black/__init__.py", line 1358, in patch_click
    from click import _unicodefun
ImportError: cannot import name '_unicodefun' from 'click' (/usr/local/lib/python3.10/site-packages/click/__init__.py)
The command '/bin/sh -c black --exclude=migrations .' returned a non-zero code: 1
Error: Process completed with exit code 1.
https://github.com/abelthf/fastapii-tdd-docker/runs/7680389415?check_suite_focus=true
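For reference, this ImportError is the known incompatibility between older black releases and click 8.1, which removed the private `_unicodefun` module. A hedged fix (versions are illustrative, adjust to your requirements.txt): either move black to a release that no longer imports that module, or cap click:

```
# requirements.txt -- pick one approach:
black>=22.3.0    # releases that no longer import click._unicodefun
# or, if the old black pin must stay:
click<8.1        # last click line that still ships _unicodefun
```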
I will restrict myself to the Dockerfile presented at https://testdriven.io/courses/tdd-fastapi/docker-config/ for now, since anything beyond that is non-public.
The Dockerfile could do with some cleanup and fixing:
- it's not clear what gcc was for, but it's not needed
- COPY ./requirements.txt . can be refined to COPY requirements.txt .
- ./project is mounted into the container in docker-compose.yml (volumes: - ./project:/usr/src/app)
Hi there,
I've followed the course and on the Postgres Setup section in Part 1 it says to use the following for registering tortoise
register_tortoise(
app,
db_url=os.environ.get("DATABASE_URL"),
modules={"models": ["app.models.tortoise"]},
generate_schemas=True,
add_exception_handlers=True,
)
However, when you run this, tortoise throws an exception: tortoise.exceptions.ConfigurationError: You should init either from "config", "config_file" or "db_url".
I've pulled in the exact version of tortoise according to the course however I am unable to get this to work following the information provided.
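For what it's worth, that ConfigurationError is what Tortoise raises when db_url ends up as None, which happens when the DATABASE_URL environment variable isn't set inside the container. A minimal sketch of a guard (a hypothetical helper, not from the course) that fails loudly instead:

```python
import os


def get_db_url() -> str:
    """Return DATABASE_URL, failing loudly rather than passing None to Tortoise."""
    db_url = os.environ.get("DATABASE_URL")
    if not db_url:
        raise RuntimeError(
            "DATABASE_URL is not set; check the environment section of docker-compose.yml"
        )
    return db_url
```

If the variable is set correctly, register_tortoise(app, db_url=get_db_url(), ...) behaves as described in the course.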
Hi,
At the end of the section on GitHub Packages, it says "[we] should now be able to see the package on your repo".
However, the package is not appearing in my (private) repo.
According to this link, it's the expected behavior:
https://github.com/orgs/community/discussions/25775
I suggest updating this part.
Thanks for the course!!
On Python 3.9.2 with fastapi==0.65.2, uvicorn==0.14.0, pydantic==1.8.2, asyncpg==0.23.0, tortoise-orm==0.17.4, pytest==6.2.4, requests==2.25.1.
Running "docker-compose exec web python -m pytest" against the initial code produces a failure in test_summaries.py:
________________________________________________________________________________ test_create_summary ________________________________________________________________________________
test_app_with_db = <starlette.testclient.TestClient object at 0x7f24ab8324c0>
def test_create_summary(test_app_with_db):
response = test_app_with_db.post(
"/summaries/", data=json.dumps({"url": "https://foo.bar"})
)
assert response.status_code == 201
E assert 422 == 201
E + where 422 = <Response [422]>.status_code
tests/test_summaries.py:11: AssertionError
Changing the first line of the test_create_summary function
FROM: response = test_app_with_db.post("/summaries/", data=json.dumps({"url": "https://foo.bar"}))
TO: response = test_app_with_db.post("/summaries/", json={"url": "https://foo.bar"})
allows the test to pass as expected.
Hi there, I'm having a problem with the Tortoise settings: when I try to run pytest, it raises the following exception:
tortoise.exceptions.ConfigurationError: Module "app.models.tortoise" not found
I am following the tutorial path.
Hi, thanks for the great software, but unfortunately I have an issue and cannot figure out what's going on.
I have two containers: one db, one app.
The app connects to the container and runs fine (I can ping the container, etc.), but when I start Tortoise it crashes with this:
web_1 | Waiting for postgres...
web_1 | PostgreSQL started
web_1 | /usr/src/app/entrypoint.sh: 5: /usr/src/app/entrypoint.sh: !nc: not found
web_1 | INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
web_1 | INFO: Started reloader process [1] using statreload
web_1 | INFO: Started server process [7]
web_1 | INFO: Waiting for application startup.
web_1 | INFO: starting up
web_1 | INFO: Tortoise-ORM startup
web_1 | connections: {'default': {'engine': 'tortoise.backends.asyncpg', 'credentials': {'port': 5432, 'database': 'web_dev', 'host': 'web-db', 'user': 'postgres', 'password': 'postgres'}}}
web_1 | apps: {'models': {'models': ['app.models.tortoise'], 'default_connection': 'default'}}
web_1 | ERROR: Traceback (most recent call last):
web_1 | File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 517, in lifespan
web_1 | await self.startup()
web_1 | File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 494, in startup
web_1 | await handler()
web_1 | File "/usr/local/lib/python3.8/site-packages/tortoise/contrib/fastapi/__init__.py", line 92, in init_orm
web_1 | await Tortoise.init(config=config, config_file=config_file, db_url=db_url, modules=modules)
web_1 | File "/usr/local/lib/python3.8/site-packages/tortoise/__init__.py", line 555, in init
web_1 | await cls._init_connections(connections_config, _create_db)
web_1 | File "/usr/local/lib/python3.8/site-packages/tortoise/__init__.py", line 385, in _init_connections
web_1 | await connection.create_connection(with_db=True)
web_1 | File "/usr/local/lib/python3.8/site-packages/tortoise/backends/asyncpg/client.py", line 94, in create_connection
web_1 | self._pool = await asyncpg.create_pool(None, password=self.password, **self._template)
web_1 | File "/usr/local/lib/python3.8/site-packages/asyncpg/pool.py", line 398, in _async__init__
web_1 | await self._initialize()
web_1 | File "/usr/local/lib/python3.8/site-packages/asyncpg/pool.py", line 426, in _initialize
web_1 | await first_ch.connect()
web_1 | File "/usr/local/lib/python3.8/site-packages/asyncpg/pool.py", line 125, in connect
web_1 | self._con = await self._pool._get_new_connection()
web_1 | File "/usr/local/lib/python3.8/site-packages/asyncpg/pool.py", line 468, in _get_new_connection
web_1 | con = await connection.connect(
web_1 | File "/usr/local/lib/python3.8/site-packages/asyncpg/connection.py", line 1668, in connect
web_1 | return await connect_utils._connect(
web_1 | File "/usr/local/lib/python3.8/site-packages/asyncpg/connect_utils.py", line 663, in _connect
web_1 | raise last_error
web_1 | File "/usr/local/lib/python3.8/site-packages/asyncpg/connect_utils.py", line 652, in _connect
web_1 | con = await _connect_addr(
web_1 | File "/usr/local/lib/python3.8/site-packages/asyncpg/connect_utils.py", line 621, in _connect_addr
web_1 | tr, pr = await asyncio.wait_for(
web_1 | File "/usr/local/lib/python3.8/asyncio/tasks.py", line 483, in wait_for
web_1 | return fut.result()
web_1 | File "uvloop/loop.pyx", line 1974, in create_connection
web_1 | File "uvloop/loop.pyx", line 1951, in uvloop.loop.Loop.create_connection
web_1 | OSError: [Errno 113] No route to host
When i comment out tortoise and run from cli schema creation i got this:
Traceback (most recent call last):
File "app/db.py", line 33, in <module>
run_async(generate_schemas())
File "/usr/local/lib/python3.8/site-packages/tortoise/__init__.py", line 636, in run_async
loop.run_until_complete(coro)
File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "app/db.py", line 23, in generate_schemas
await Tortoise.init(
File "/usr/local/lib/python3.8/site-packages/tortoise/__init__.py", line 555, in init
await cls._init_connections(connections_config, _create_db)
File "/usr/local/lib/python3.8/site-packages/tortoise/__init__.py", line 385, in _init_connections
await connection.create_connection(with_db=True)
File "/usr/local/lib/python3.8/site-packages/tortoise/backends/asyncpg/client.py", line 94, in create_connection
self._pool = await asyncpg.create_pool(None, password=self.password, **self._template)
File "/usr/local/lib/python3.8/site-packages/asyncpg/pool.py", line 398, in _async__init__
await self._initialize()
File "/usr/local/lib/python3.8/site-packages/asyncpg/pool.py", line 426, in _initialize
await first_ch.connect()
File "/usr/local/lib/python3.8/site-packages/asyncpg/pool.py", line 125, in connect
self._con = await self._pool._get_new_connection()
File "/usr/local/lib/python3.8/site-packages/asyncpg/pool.py", line 468, in _get_new_connection
con = await connection.connect(
File "/usr/local/lib/python3.8/site-packages/asyncpg/connection.py", line 1668, in connect
return await connect_utils._connect(
File "/usr/local/lib/python3.8/site-packages/asyncpg/connect_utils.py", line 663, in _connect
raise last_error
File "/usr/local/lib/python3.8/site-packages/asyncpg/connect_utils.py", line 652, in _connect
con = await _connect_addr(
File "/usr/local/lib/python3.8/site-packages/asyncpg/connect_utils.py", line 621, in _connect_addr
tr, pr = await asyncio.wait_for(
File "/usr/local/lib/python3.8/asyncio/tasks.py", line 483, in wait_for
return fut.result()
File "/usr/local/lib/python3.8/asyncio/base_events.py", line 1025, in create_connection
raise exceptions[0]
File "/usr/local/lib/python3.8/asyncio/base_events.py", line 1010, in create_connection
sock = await self._connect_sock(
File "/usr/local/lib/python3.8/asyncio/base_events.py", line 924, in _connect_sock
await self.sock_connect(sock, address)
File "/usr/local/lib/python3.8/asyncio/selector_events.py", line 494, in sock_connect
return await fut
File "/usr/local/lib/python3.8/asyncio/selector_events.py", line 526, in _sock_connect_cb
raise OSError(err, f'Connect call failed {address}')
OSError: [Errno 113] Connect call failed ('172.26.0.2', 5432)
link to db:
postgres://postgres:postgres@web-db:5432/web_dev
The file summaries.py uses an isort:skip comment to avoid a conflict between black and isort. Strictly speaking, this isn't necessary; as of version 5 you can configure isort to be compatible with black.
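Concretely, the black-compatible configuration (assuming isort >= 5) is a one-liner, e.g. in pyproject.toml:

```toml
[tool.isort]
profile = "black"
```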
During the GitHub Action for testing, main.yml adds installation of the testing tools:
Run docker exec fastapi-tdd pip install black flake8 isort pytest
However the User that is activated on the container does not have permissions to install anything.
So it gives this error:
Run docker exec fastapi-tdd pip install black flake8 isort pytest
  docker exec fastapi-tdd pip install black flake8 isort pytest
  shell: /usr/bin/bash -e {0}
  env:
    IMAGE: docker.pkg.github.com/$(echo $GITHUB_REPOSITORY | tr '[A-Z]' '[a-z]')/summarizer
WARNING: The directory '/home/app/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag.
I cloned the repository:
git clone https://github.com/testdrivenio/fastapi-tdd-docker.git
Went to the directory:
cd fastapi-tdd-docker
Spun up the containers:
docker-compose up -d --build
Then, simply run:
docker-compose exec web ls
to check the files in the container, and got:
Dockerfile db pyproject.toml setup.cfg
Dockerfile.prod entrypoint.sh requirements-dev.txt tests
app migrations requirements.txt
which included some files that are mentioned in ./project/.dockerignore:
env
.dockerignore
Dockerfile
Dockerfile.prod
htmlcov
.coverage
However, I don't have this problem when I use:
docker build -f project/Dockerfile.prod -t XXX ./project
It's really really weird!
My environment:
MacBook Pro 2017
macOS Monterey Version 12.6 (21G115)
Docker version 20.10.17, build 100c701
Docker Compose version v2.10.2
My issue is that I need to re-build each time I add some code to my project.
I followed the course from the start up to Code Coverage and Quality.
Everything has gone well without any problems except this one. :(
I know you mention in the course that the "volumes" key makes live reloading possible, so I am not sure what is causing this problem.
What should I check or try in order to fix it? Many thanks.
Here is my yml file:
version: '3.8'
services:
web:
build: ./project
command: uvicorn app.main:app --reload --workers 1 --host 0.0.0.0 --port 8000
volumes:
- ./project:/usr/src/app
ports:
- 8004:8000
environment:
- ENVIRONMENT=dev
- TESTING=0
- DATABASE_URL=postgres://postgres:postgres@web-db:5432/web_dev
- DATABASE_TEST_URL=postgres://postgres:postgres@web-db:5432/web_test
depends_on:
- web-db
web-db:
build:
context: ./project/db
dockerfile: Dockerfile
expose:
- 5432
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
In the 'Push image' step, I got the error below:
denied: installation not allowed to Write organization package
Error: Process completed with exit code 1.
This issue is related to permissions; there are some guides and links about it.
I fixed the error like below:
name: Continuous Integration and Delivery
on: [push]
permissions: write-all
...
But here's my question: can I solve this by setting permissions on a single job instead? Is that impossible?
At first, I thought I could solve it with the code below, but it failed.
That's why I changed the permissions for the entire workflow, as above.
jobs:
build:
name: Build Docker Image
runs-on: ubuntu-latest
permissions:
packages: write
Is there any suggestion?
Hello,
I need some assistance in getting flake8 (and the others) to run in GitHub Actions (https://github.com/lek18/fastapi-tdd-docker/actions/runs/4735151142/jobs/8405136457). I am using poetry instead of pip and have manually run the test section of the main.yml file locally, and these commands do work:
- name: Pytest
  run: docker exec fastapi-tdd python -m pytest .
- name: Flake8
  run: docker exec fastapi-tdd python -m flake8 .
- name: Black
  run: docker exec fastapi-tdd python -m black . --check
- name: isort
  run: docker exec fastapi-tdd python -m isort . --check-only
I am trying to avoid:
docker exec fastapi-tdd pip install black==19.10b0 flake8===3.8.3 isort==5.4.2 pytest==6.0.1
prior to these tests.
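One hedged alternative, assuming the dev tools are declared as a poetry dependency group in pyproject.toml, would be to install them via poetry inside the container rather than pip:

```yaml
# Hypothetical step for main.yml (requires poetry >= 1.2 and a [tool.poetry.group.dev] section)
- name: Install dev dependencies
  run: docker exec fastapi-tdd poetry install --with dev --no-interaction
```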
We should specify the label like below in the Dockerfile, so the package is linked to the repository:
LABEL org.opencontainers.image.source https://github.com/user_name/repo_name
Thanks
Hello Michael,
I ran into some problems when I executed docker-compose exec web python -m pytest
to run the tests. (Section: Pytest Setup in Part 1)
Error message:
$ winpty docker-compose exec web python -m pytest
======================================================== test session starts =========================================================
platform linux -- Python 3.10.1, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /usr/src/app
plugins: anyio-3.5.0
collected 79 items / 1 error / 78 selected
=============================================================== ERRORS ===============================================================
_______________________________ ERROR collecting env/Lib/site-packages/sniffio/_tests/test_sniffio.py ________________________________
import file mismatch:
imported module 'sniffio._tests.test_sniffio' has this __file__ attribute:
/usr/local/lib/python3.10/site-packages/sniffio/_tests/test_sniffio.py
which is not the same as the test file we want to collect:
/usr/src/app/env/Lib/site-packages/sniffio/_tests/test_sniffio.py
HINT: remove __pycache__ / .pyc files and/or use a unique basename for your test file modules
====================================================== short test summary info =======================================================
ERROR env/Lib/site-packages/sniffio/_tests/test_sniffio.py
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
========================================================== 1 error in 4.54s ==========================================================
Self-check
This error seems to be caused by a duplicate pytest module name; I checked that adding __init__.py
to my tests dir should fix the bug (as the instructions say), but it still does not work. I can't figure out what other reasons might cause this. (I also tried rebuilding the container and re-running the test command; that did not work out either.)
So my current directory structure is the same as the course demo, except for .pytest_cache,
which is automatically created when the test error happens:
fastapi-tdd-docker
├─ .gitignore
├─ docker-compose.yml
├─ project
│ ├─ .dockerignore
│ ├─ .pytest_cache
│ │ ├─ .gitignore
│ │ ├─ CACHEDIR.TAG
│ │ ├─ README.md
│ │ └─ v
│ │ └─ cache
│ │ ├─ lastfailed
│ │ ├─ nodeids
│ │ └─ stepwise
│ ├─ app
│ │ ├─ config.py
│ │ ├─ db.py
│ │ ├─ main.py
│ │ ├─ models
│ │ │ ├─ tortoise.py
│ │ │ └─ __init__.py
│ │ └─ __init__.py
│ ├─ db
│ │ ├─ create.sql
│ │ └─ Dockerfile
│ ├─ Dockerfile
│ ├─ entrypoint.sh
│ ├─ migrations
│ │ └─ models
│ │ └─ 0_20220216230908_init.sql
│ ├─ pyproject.toml
│ ├─ requirements.txt
│ └─ tests
│ ├─ conftest.py
│ ├─ test_ping.py
│ └─ __init__.py
└─ README.md
Thanks in advance.
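For what it's worth, the collected path (env/Lib/site-packages/...) suggests pytest is descending into a local env/ virtualenv that ended up inside the container. A hedged fix, assuming pytest config lives in setup.cfg as in the course, is to exclude that directory from collection (and/or make sure env is honored by .dockerignore):

```ini
# setup.cfg -- hypothetical addition
[tool:pytest]
norecursedirs = env .* *.egg dist build
```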
Hello! I would like to request that support for lifespan async context managers be added.
I believe that using async context managers is a more elegant and efficient way to manage the startup and shutdown of the application, compared to the startup and shutdown events. It would also allow for more flexibility and customization when managing the lifecycle of an application.
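As a sketch of the requested pattern (illustrative, not the course's code): a lifespan handler is just an async context manager whose code before the yield runs at startup and whose code after it runs at shutdown, and FastAPI accepts one via FastAPI(lifespan=...):

```python
from contextlib import asynccontextmanager


@asynccontextmanager
async def lifespan(app):
    # startup: acquire resources (e.g. init the ORM / DB pool)
    app.state = {"db": "connected"}
    yield
    # shutdown: release them
    app.state["db"] = "closed"
```

With FastAPI this would be passed as app = FastAPI(lifespan=lifespan), replacing the separate startup/shutdown event handlers.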
Hey @mjhea0! I have a question concerning the suggested GH Actions setup:
Do we really need the "Build images" step (which builds two images) in the "Test Docker Image" job?
It only runs after the "Build Docker Image" job (needs: build), and the docker pull ${{ env.IMAGE }}-builder:latest command (in the "Build images" step) should always pull the images that were built and pushed to the registry by the previous "Build Docker Image" job, right?
It's impossible for the "Test Docker Image" job to run before "Build Docker Image" (having the dependency set), am I correct?
Also, I'm not sure why we need to build two images at all when we only run the tests + linter against ${{ env.IMAGE }}-final:latest...?
I'm trying to do the deploy-to-Heroku step (first bit of Part 2) but am having some SSL-related problems. It seems that Heroku requires connections to Postgres to use SSL? Has anyone else stumbled across this problem and managed to solve it?
This is the information I get from heroku logs
[2021-03-07 17:11:39 +0000] [132] [ERROR] Application startup failed. Exiting.
2021-03-07T17:11:39.433115+00:00 app[web.1]: [2021-03-07 17:11:39 +0000] [132] [INFO] Worker exiting (pid: 132)
2021-03-07T17:11:39.662653+00:00 app[web.1]: [2021-03-07 17:11:39 +0000] [137] [INFO] Booting worker with pid: 137
2021-03-07T17:11:40.711469+00:00 app[web.1]: [2021-03-07 17:11:40 +0000] [137] [INFO] Started server process [137]
2021-03-07T17:11:40.711604+00:00 app[web.1]: [2021-03-07 17:11:40 +0000] [137] [INFO] Waiting for application startup.
2021-03-07T17:11:40.917185+00:00 app[web.1]: [2021-03-07 17:11:40 +0000] [137] [ERROR] Traceback (most recent call last):
2021-03-07T17:11:40.917187+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 526, in lifespan
2021-03-07T17:11:40.917188+00:00 app[web.1]: async for item in self.lifespan_context(app):
2021-03-07T17:11:40.917191+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 467, in default_lifespan
2021-03-07T17:11:40.917192+00:00 app[web.1]: await self.startup()
2021-03-07T17:11:40.917193+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 502, in startup
2021-03-07T17:11:40.917193+00:00 app[web.1]: await handler()
2021-03-07T17:11:40.917193+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/tortoise/contrib/fastapi/__init__.py", line 92, in init_orm
2021-03-07T17:11:40.917194+00:00 app[web.1]: await Tortoise.init(config=config, config_file=config_file, db_url=db_url, modules=modules)
2021-03-07T17:11:40.917194+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/tortoise/__init__.py", line 567, in init
2021-03-07T17:11:40.917195+00:00 app[web.1]: await cls._init_connections(connections_config, _create_db)
2021-03-07T17:11:40.917195+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/tortoise/__init__.py", line 385, in _init_connections
2021-03-07T17:11:40.917196+00:00 app[web.1]: await connection.create_connection(with_db=True)
2021-03-07T17:11:40.917197+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/tortoise/backends/asyncpg/client.py", line 94, in create_connection
2021-03-07T17:11:40.917197+00:00 app[web.1]: self._pool = await asyncpg.create_pool(None, password=self.password, **self._template)
2021-03-07T17:11:40.917197+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/asyncpg/pool.py", line 398, in _async__init__
2021-03-07T17:11:40.917198+00:00 app[web.1]: await self._initialize()
2021-03-07T17:11:40.917198+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/asyncpg/pool.py", line 426, in _initialize
2021-03-07T17:11:40.917199+00:00 app[web.1]: await first_ch.connect()
2021-03-07T17:11:40.917199+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/asyncpg/pool.py", line 125, in connect
2021-03-07T17:11:40.917199+00:00 app[web.1]: self._con = await self._pool._get_new_connection()
2021-03-07T17:11:40.917200+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/asyncpg/pool.py", line 468, in _get_new_connection
2021-03-07T17:11:40.917200+00:00 app[web.1]: con = await connection.connect(
2021-03-07T17:11:40.917201+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/asyncpg/connection.py", line 1718, in connect
2021-03-07T17:11:40.917201+00:00 app[web.1]: return await connect_utils._connect(
2021-03-07T17:11:40.917202+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/asyncpg/connect_utils.py", line 663, in _connect
2021-03-07T17:11:40.917202+00:00 app[web.1]: con = await _connect_addr(
2021-03-07T17:11:40.917202+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/asyncpg/connect_utils.py", line 642, in _connect_addr
2021-03-07T17:11:40.917203+00:00 app[web.1]: await asyncio.wait_for(connected, timeout=timeout)
2021-03-07T17:11:40.917203+00:00 app[web.1]: File "/usr/local/lib/python3.8/asyncio/tasks.py", line 491, in wait_for
2021-03-07T17:11:40.917204+00:00 app[web.1]: return fut.result()
2021-03-07T17:11:40.917205+00:00 app[web.1]: asyncpg.exceptions.InvalidAuthorizationSpecificationError: no pg_hba.conf entry for host "34.207.122.191", user "cslyxnkrbufuls", database "d517o756cbcm5n", SSL off
I tried adding "ssl=True" to the db_url like so:
def init_db(app: FastAPI) -> None:
register_tortoise(
app,
db_url=os.environ.get("DATABASE_URL") + "?ssl=True",
modules={"models": ["app.models.tortoise"]},
generate_schemas=False,
add_exception_handlers=True,
)
but it seems that it can't verify a self-signed certificate:
2021-03-07T17:16:56.727597+00:00 app[web.1]: [2021-03-07 17:16:56 +0000] [77] [ERROR] Application startup failed. Exiting.
2021-03-07T17:16:56.727835+00:00 app[web.1]: [2021-03-07 17:16:56 +0000] [77] [INFO] Worker exiting (pid: 77)
2021-03-07T17:16:56.789371+00:00 app[web.1]: [2021-03-07 17:16:56 +0000] [82] [INFO] Booting worker with pid: 82
2021-03-07T17:16:57.142427+00:00 app[web.1]: [2021-03-07 17:16:57 +0000] [82] [INFO] Started server process [82]
2021-03-07T17:16:57.142517+00:00 app[web.1]: [2021-03-07 17:16:57 +0000] [82] [INFO] Waiting for application startup.
2021-03-07T17:16:57.207812+00:00 app[web.1]: [2021-03-07 17:16:57 +0000] [82] [ERROR] Traceback (most recent call last):
2021-03-07T17:16:57.207814+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 526, in lifespan
2021-03-07T17:16:57.207814+00:00 app[web.1]: async for item in self.lifespan_context(app):
2021-03-07T17:16:57.207815+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 467, in default_lifespan
2021-03-07T17:16:57.207816+00:00 app[web.1]: await self.startup()
2021-03-07T17:16:57.207816+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 502, in startup
2021-03-07T17:16:57.207816+00:00 app[web.1]: await handler()
2021-03-07T17:16:57.207817+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/tortoise/contrib/fastapi/__init__.py", line 92, in init_orm
2021-03-07T17:16:57.207817+00:00 app[web.1]: await Tortoise.init(config=config, config_file=config_file, db_url=db_url, modules=modules)
2021-03-07T17:16:57.207817+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/tortoise/__init__.py", line 567, in init
2021-03-07T17:16:57.207817+00:00 app[web.1]: await cls._init_connections(connections_config, _create_db)
2021-03-07T17:16:57.207818+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/tortoise/__init__.py", line 385, in _init_connections
2021-03-07T17:16:57.207818+00:00 app[web.1]: await connection.create_connection(with_db=True)
2021-03-07T17:16:57.207819+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/tortoise/backends/asyncpg/client.py", line 94, in create_connection
2021-03-07T17:16:57.207819+00:00 app[web.1]: self._pool = await asyncpg.create_pool(None, password=self.password, **self._template)
2021-03-07T17:16:57.207819+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/asyncpg/pool.py", line 398, in _async__init__
2021-03-07T17:16:57.207822+00:00 app[web.1]: await self._initialize()
2021-03-07T17:16:57.207822+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/asyncpg/pool.py", line 426, in _initialize
2021-03-07T17:16:57.207822+00:00 app[web.1]: await first_ch.connect()
2021-03-07T17:16:57.207822+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/asyncpg/pool.py", line 125, in connect
2021-03-07T17:16:57.207823+00:00 app[web.1]: self._con = await self._pool._get_new_connection()
2021-03-07T17:16:57.207823+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/asyncpg/pool.py", line 468, in _get_new_connection
2021-03-07T17:16:57.207823+00:00 app[web.1]: con = await connection.connect(
2021-03-07T17:16:57.207823+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/asyncpg/connection.py", line 1718, in connect
2021-03-07T17:16:57.207824+00:00 app[web.1]: return await connect_utils._connect(
2021-03-07T17:16:57.207824+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/asyncpg/connect_utils.py", line 674, in _connect
2021-03-07T17:16:57.207824+00:00 app[web.1]: raise last_error
2021-03-07T17:16:57.207824+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/asyncpg/connect_utils.py", line 663, in _connect
2021-03-07T17:16:57.207825+00:00 app[web.1]: con = await _connect_addr(
2021-03-07T17:16:57.207825+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/asyncpg/connect_utils.py", line 632, in _connect_addr
2021-03-07T17:16:57.207825+00:00 app[web.1]: tr, pr = await asyncio.wait_for(
2021-03-07T17:16:57.207826+00:00 app[web.1]: File "/usr/local/lib/python3.8/asyncio/tasks.py", line 491, in wait_for
2021-03-07T17:16:57.207826+00:00 app[web.1]: return fut.result()
2021-03-07T17:16:57.207826+00:00 app[web.1]: File "/usr/local/lib/python3.8/site-packages/asyncpg/connect_utils.py", line 564, in _create_ssl_connection
2021-03-07T17:16:57.207827+00:00 app[web.1]: new_tr = await loop.start_tls(
2021-03-07T17:16:57.207827+00:00 app[web.1]: File "uvloop/loop.pyx", line 1617, in start_tls
2021-03-07T17:16:57.207827+00:00 app[web.1]: File "uvloop/loop.pyx", line 1610, in uvloop.loop.Loop.start_tls
2021-03-07T17:16:57.207827+00:00 app[web.1]: File "uvloop/sslproto.pyx", line 517, in uvloop.loop.SSLProtocol._on_handshake_complete
2021-03-07T17:16:57.207828+00:00 app[web.1]: File "uvloop/sslproto.pyx", line 499, in uvloop.loop.SSLProtocol._do_handshake
2021-03-07T17:16:57.207828+00:00 app[web.1]: File "/usr/local/lib/python3.8/ssl.py", line 944, in do_handshake
2021-03-07T17:16:57.207828+00:00 app[web.1]: self._sslobj.do_handshake()
2021-03-07T17:16:57.207828+00:00 app[web.1]: ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:1124)
Any ideas?
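One workaround I've seen suggested (an assumption, not verified against this course): build an SSLContext that skips verification and hand it to Tortoise/asyncpg through the connection credentials instead of ?ssl=True. This trades away certificate verification, which is only tolerable here because Heroku Postgres presents a self-signed certificate:

```python
import ssl


def heroku_ssl_context() -> ssl.SSLContext:
    """Permissive TLS context for Heroku Postgres' self-signed certificate.

    Still encrypts the connection, but skips certificate verification,
    a deliberate trade-off rather than a general-purpose setting.
    """
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    return ctx
```

The context would then be passed via the credentials dict of a Tortoise config (asyncpg accepts an SSLContext for its ssl parameter); whether register_tortoise's db_url form can carry it is worth checking against the Tortoise docs.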
Hello, I'm stuck on setting up PostgreSQL. I'm trying to get entrypoint.sh to work, but I keep running into a weird error. I am running Microsoft Windows 11 Home.
when I run docker-compose logs web
I see: fastapi-web-1 | exec /usr/src/app/entrypoint.sh: no such file or directory
instead of
web_1 | Waiting for postgres...
web_1 | PostgreSQL started
web_1 | INFO: Will watch for changes in these directories: ['/usr/src/app']
web_1 | INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
web_1 | INFO: Started reloader process [1] using statreload
web_1 | INFO: Started server process [37]
web_1 | INFO: Waiting for application startup.
web_1 | INFO: Application startup complete.
I dug into it and I can see the file is there and it has the correct permissions.
(fastapi) C:\Users\manville\Documents\GitHub\fastapi>docker-compose run web ls -l /usr/src/app/entrypoint.sh
[+] Running 1/0
- Container fastapi-web-db-1 Running 0.0s
-rwxrwxrwx 1 root root 135 Oct 30 17:28 /usr/src/app/entrypoint.sh
(fastapi) C:\Users\manville\Documents\GitHub\fastapi>docker-compose run web /usr/src/app/entrypoint.sh
[+] Running 1/0
- Container fastapi-web-db-1 Running 0.0s
exec /usr/src/app/entrypoint.sh: no such file or directory
I've tried adding a new helloworld.sh, copying the file, and converting the file with dos2unix. Honestly, I am shooting in the dark here. I have also searched SO, but no luck.
The creation of the git repository should be at the beginning of the course, and the course should encourage frequent commits throughout the project, instead of incorporating a git repository only in the second part.
why?
So this is a Windows-specific issue from what I can tell, and I'm posting this here in case someone else needs the solution.
Please feel free to edit/close the issue immediately.
Description:
I finished Part 1 on a Windows desktop without issues, pushed my code to GitHub, and wanted to resume on another Windows laptop later. When I'd try to bring the containers up, I noticed that only web-db would stay up, while web crashed immediately.
docker ps -a indicated that web had crashed on the entrypoint command in the Dockerfile (running the entrypoint.sh shell script).
I checked for clues using docker logs <container id for web> and saw the error message:
standard_init_linux.go:228: exec user process caused: no such file or directory
Solution:
After a little searching, I found a reference that suggested changing the EOL markers from CRLF to LF on entrypoint.sh in an editor.
I'm not sure when it was set to CRLF, or if it changed when pushing to GitHub, since it did not pose an issue on my Windows desktop.
Bringing the containers down and back up again did the trick:
docker-compose down -v
docker-compose up -d --build
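A preventive measure worth noting (a sketch, not part of the course): a `.gitattributes` rule stops Git on Windows from checking shell scripts out with CRLF endings in the first place, so the problem doesn't recur on the next clone.

```
# .gitattributes — force LF endings for shell scripts on every checkout
*.sh text eol=lf
```

After adding it, the file needs to be re-normalized once (e.g. `git add --renormalize .`) for the rule to take effect on already-tracked files.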
My GitHub Repo: https://github.com/KrestouferT/fasapi-tdd
Issue: GitHub workflow error "Process completed with exit code 137." Without the run commands (Install requirements - isort) it works.
What I have done to fix the problem: I searched for exit code 137 but didn't find much (except some people saying the error is caused by excessive memory usage).
Has anyone had the same problem? How can I fix it?
Run docker exec fastapi-tdd pip install black==20.8b1 flake8===3.8.4 isort==5.6.4 pytest==6.2.0
Defaulting to user installation because normal site-packages is not writeable
Collecting black==20.8b1
Downloading black-20.8b1.tar.gz (1.1 MB)
Installing build dependencies: started
Error: Process completed with exit code 137.
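The "excessive memory usage" explanation fits how the exit code decodes: codes above 128 mean the process was killed by a signal, and 137 is 128 + 9, i.e. SIGKILL, which on a CI runner is almost always the kernel's OOM killer. A quick check:

```python
import signal

# Exit codes above 128 encode "killed by signal (code - 128)".
exit_code = 137
sig = signal.Signals(exit_code - 128)
print(sig.name)  # SIGKILL
```

Building black 20.8b1 from its sdist (the `Downloading black-20.8b1.tar.gz` line above) is memory-hungry; installing inside the already-built image, or on a runner with more memory, are the usual workarounds.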
To make deployment to Heroku work from my Apple M1 machine, I had to replace
docker build -f project/Dockerfile.prod -t registry.heroku.com/<app_name>/web ./project
with
docker buildx build --platform linux/amd64 -f project/Dockerfile.prod -t registry.heroku.com/<app_name>/web ./project
Reference: https://stackoverflow.com/questions/66982720/keep-running-into-the-same-deployment-error-exec-format-error-when-pushing-nod
Hi,
I'm working on fastapi-tdd-docker, at the "https://testdriven.io/courses/tdd-fastapi/continuous-delivery/" step.
I've hit an error I can't resolve: the build and login succeed, but the push fails.
Any ideas?
Thanks
...
docker login --username=_ --password=${HEROKU_AUTH_TOKEN} registry.heroku.com
Login Succeeded
......
Successfully tagged registry.heroku.com/xxxxxx/summarizer:latest
$ docker push ${HEROKU_REGISTRY_IMAGE}
Using default tag: latest
The push refers to repository [registry.heroku.com/xxxxxx/summarizer]
e9d588cbe421: Preparing
40746bf80917: Preparing
9d0eeba87b86: Preparing
5bf2405f8c1f: Preparing
361b8fbc8335: Preparing
f6f94de819b0: Preparing
57e5e30a29c6: Preparing
3439945bbeec: Preparing
62d5f3023a6f: Preparing
aace1aa11cb6: Preparing
719034028365: Preparing
45a642e84e00: Preparing
2d7195713b6a: Preparing
9d4036e3cf49: Preparing
ad6b69b54919: Preparing
f6f94de819b0: Waiting
57e5e30a29c6: Waiting
3439945bbeec: Waiting
62d5f3023a6f: Waiting
aace1aa11cb6: Waiting
719034028365: Waiting
45a642e84e00: Waiting
2d7195713b6a: Waiting
9d4036e3cf49: Waiting
ad6b69b54919: Waiting
unauthorized: authentication required
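One commonly reported quirk is that registry.heroku.com prints "Login Succeeded" without fully validating the token, so a bad or empty token only surfaces as "unauthorized" at push time. A hedged sketch of two sanity checks for the CI job (the variable names match the snippet above; the checks themselves are an assumption, not part of the course):

```shell
# Default both variables to empty if unset, so the checks below can run.
: "${HEROKU_AUTH_TOKEN:=}"        # the API token (from `heroku auth:token`)
: "${HEROKU_REGISTRY_IMAGE:=}"    # e.g. registry.heroku.com/<app_name>/web

# 1. The token secret must actually be exposed to this job/branch.
[ -n "$HEROKU_AUTH_TOKEN" ] || echo "HEROKU_AUTH_TOKEN is empty in this job" >&2

# 2. The image path must target the Heroku registry and the right app,
#    since registry auth is checked per-app at push time.
case "$HEROKU_REGISTRY_IMAGE" in
  registry.heroku.com/*) echo "image path targets the Heroku registry" ;;
  *) echo "HEROKU_REGISTRY_IMAGE does not target registry.heroku.com" >&2 ;;
esac
```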