immuni-app / immuni-backend-exposure-ingestion
Repository for the exposure ingestion service
License: GNU Affero General Public License v3.0
The CODEOWNERS file should be structured as follows:
# These owners will be the default owners for everything in
# the repo. Unless a later match takes precedence,
# @global-owner1 and @global-owner2 will be requested for
# review when someone opens a pull request.
* @global-owner1 @global-owner2
The CONTRIBUTING documentation describes checks that run when a Pull Request is opened, yet these checks are not currently in place.
This is the link to the section.
The MAX_ALLOWED_CLIENT_SKEW_IN_SECONDS configuration variable found in immuni_exposure_ingestion/core/config.py is unused.
It's minor, but it should be removed.
There is a typo in CONTRIBUTING.md:
"security and quality standards in the development tools described in"
should be:
"security and quality standards using the development tools described in"
Describe the bug
Endpoint v1/ingestion/upload returns status code 500 if the teks parameter is an empty list.
To Reproduce
cd docker
docker-compose build
docker-compose up
curl --location --request POST 'localhost:5000/v1/ingestion/upload' \
--header 'Immuni-Dummy-Data: 0' \
--header 'Immuni-Client-Clock: 1589903340' \
--header 'Content-Type: application/json; charset=utf-8' \
--header 'Authorization: Bearer 4ec50b1f75ef4ad97521f5b3610cee605595266b7e2c42e6ce72eadff067c108' \
--data-raw '{
"exposure_detection_summaries": [
{
"attenuation_durations": [
300,
0,
0
],
"date": "2020-05-27",
"days_since_last_exposure": 1,
"exposure_info": [
{
"attenuation_durations": [
300,
0,
0
],
"attenuation_value": 45,
"date": "2020-05-16",
"duration": 5,
"total_risk_score": 4,
"transmission_risk_level": 1
}
],
"matched_key_count": 2,
"maximum_risk_score": 4
}
],
"padding": "S0meP4Dd1nG",
"province": "AG",
"teks": []
}'
Expected behaviour
The server should return a more explanatory error with status code 400.
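A minimal sketch of the kind of validation that could produce a 400 here. This is not the project's actual code: the function name and error messages are hypothetical, and a real handler would map a non-empty error list to an HTTP 400 response.

```python
from typing import Any, Dict, List


def validate_upload_body(body: Dict[str, Any]) -> List[str]:
    """Collect validation errors for an /v1/ingestion/upload-style payload.

    Returns a list of human-readable errors; an empty list means the
    body is valid. A request handler can turn a non-empty list into a
    400 response instead of letting an empty list crash later with a 500.
    """
    errors: List[str] = []
    teks = body.get("teks")
    if not isinstance(teks, list):
        errors.append("'teks' must be a list")
    elif len(teks) == 0:
        errors.append("'teks' must contain at least one temporary exposure key")
    return errors


# The payload from the report above would be rejected with a
# descriptive message instead of an unhandled 500.
errors = validate_upload_body({"teks": [], "province": "AG"})
print(errors)
```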
After running the docker-compose command as described in the documentation, I get the following error: Need service name for --build-arg option. Command below:
test@test-VirtualBox:~/immuni/immuni-backend-exposure-ingestion/docker$ docker-compose build --build-arg GIT_BRANCH=$(git rev-parse --abbrev-ref HEAD) --build-arg GIT_SHA=$(git rev-parse --verify HEAD) --build-arg GIT_TAG=$(git tag --points-at HEAD | cat) --build-arg BUILD_DATE=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
ERROR: Need service name for --build-arg option
Docker version:
test@test-VirtualBox:~/immuni/immuni-backend-exposure-ingestion/docker$ sudo docker version
Client:
Version: 19.03.6
API version: 1.40
Go version: go1.12.17
Git commit: 369ce74a3c
Built: Fri Feb 28 23:45:43 2020
OS/Arch: linux/amd64
Experimental: false
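Older docker-compose releases only accept --build-arg when a service name is given explicitly; newer releases lift that restriction. A workaround sketch (the service name "backend" below is a placeholder, not necessarily what docker/docker-compose.yml defines):

```shell
# Workaround sketch: append the service name after the build args.
# "backend" is a placeholder; use the service name actually defined in
# docker/docker-compose.yml. Upgrading docker-compose is the other option.
docker-compose build \
  --build-arg GIT_BRANCH=$(git rev-parse --abbrev-ref HEAD) \
  --build-arg GIT_SHA=$(git rev-parse --verify HEAD) \
  backend
```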
Describe the bug
Upon Pull Request opening, the CONTRIBUTING link within it is clearly broken due to the absence of a correct path.
To Reproduce
Expected behaviour
The link should bring you to https://github.com/immuni-app/immuni-backend-exposure-ingestion/blob/master/CONTRIBUTING.md
First of all, thank you for all the effort you put into this project; your care for sensitive topics has convinced even paranoid people (like myself) to use the app. Kudos.
I’m a bit concerned about the concurrency model. You have chosen a really fast async framework (Sanic) running on the AsyncIO event loop, but you are using incompatible network libraries, most notably MongoEngine, which is a MongoDB ODM built on top of PyMongo. PyMongo does not support AsyncIO.
Let’s take a look at this chunk of code
immuni-backend-exposure-ingestion/immuni_exposure_ingestion/apis/ingestion.py
Lines 133 to 138 in cbd2aca
upload_model.save() is a blocking operation: you are not yielding control to another coroutine during the write operation, so you block the entire event loop. The most visible effect is that you can't serve other requests during the operation. A quick fix could be off-loading the blocking call to a concurrent.futures.ThreadPoolExecutor using asyncio.loop.run_in_executor.
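A minimal, self-contained sketch of that off-loading pattern, with time.sleep standing in for the blocking MongoEngine call (in practice the thread pool would be created once at startup, not per request):

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor


def blocking_save() -> str:
    """Stand-in for a blocking call such as upload_model.save()."""
    time.sleep(0.1)  # simulates synchronous I/O
    return "saved"


async def handler() -> str:
    # Off-load the blocking call to a thread pool so the event loop
    # keeps serving other coroutines while the write is in flight.
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor(max_workers=4) as pool:
        result = await loop.run_in_executor(pool, blocking_save)
    return result


print(asyncio.run(handler()))  # → saved
```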
Moreover, here you are explicitly avoiding concurrency with a distributed lock, yet you are running blocking code.
Why bother with all the concurrency "inception" (Celery -> AsyncIO event loop -> coroutine) then, when what you want to achieve is serial execution? There is no need for concurrency here. What about a Celery task that runs synchronous code?
One could also argue that even Celery is overkill when the need is to launch a cron job; there are plenty of tools that allow you to keep control of task scheduling at the application layer, without all the Celery machinery. APScheduler, for instance.
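The application-layer-scheduling idea can even be sketched with nothing but the standard library; the stdlib sched module below is a stand-in for what APScheduler provides with cron-style triggers on top. The job name and intervals are illustrative, not the project's real task.

```python
import sched
import time


def delete_old_data() -> None:
    """Stand-in for the periodic cleanup a Celery beat task performs."""
    print("running cleanup")


def run_periodically(scheduler: sched.scheduler, interval: float,
                     action, repeats: int) -> None:
    """Run `action`, then re-arm it every `interval` seconds, `repeats` times."""
    if repeats <= 0:
        return
    action()
    scheduler.enter(interval, 1, run_periodically,
                    (scheduler, interval, action, repeats - 1))


scheduler = sched.scheduler(time.monotonic, time.sleep)
# Three quick iterations for demonstration; a real cleanup job would
# use something like interval=3600 and loop indefinitely.
scheduler.enter(0, 1, run_periodically, (scheduler, 0.01, delete_old_data, 3))
scheduler.run()
```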
Cheers