
cutsolver's Introduction

CutSolver


This is a backend; see CutSolverFrontend for a human-usable version.

This API can be used to solve the common problem of finding the perfect placement of cuts for specified lengths. No other free service seems to tackle this specific problem in an easy-to-use format, so this is my attempt.

You are very welcome to share how you use this tool!


This solver uses integers exclusively, as there is no need for arbitrary precision (yet). Feel free to shift your numbers by a few decimal places if you need fractions. It has no concept of units, so you can use whatever you want.
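Since the solver is integer-only, fractional measurements can simply be scaled up before submitting a job. A minimal sketch (the scale factor and helper name are illustrative, not part of the API):

```python
# The solver is integer-only, so fractional measurements can be scaled
# up before submitting a job. Factor and helper name are illustrative.
SCALE = 100  # keep two decimal places

def to_int_lengths(lengths):
    return [round(x * SCALE) for x in lengths]

print(to_int_lengths([12.5, 99.99, 3.0]))  # [1250, 9999, 300]
```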

Nerd talk: This is the 2D "Cutting Stock Problem", which is NP-hard. No algorithm exists to calculate a perfect solution in polynomial time, so brute force (perfect solution) is used for small jobs (usually <12 entries) and FFD (fast solution) for larger ones. Don't be surprised if you get different results: many combinations have equal trimmings and are therefore seen as equally good.

Usage/Hosting

Feel free to run it manually, but the easiest (and recommended) way to deploy this is with Docker, pulling an up-to-date image.

Send POST requests to [localhost]/solve to get your results; see /docs for further information.

Also see the example job and result from the tests.
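A minimal client sketch using only the Python standard library. The field names follow the example jobs shown in the issues below; they and the port are assumptions that may differ in your version:

```python
# Minimal client sketch using only the standard library. Field names
# follow the example jobs in the issues below; names and port are
# assumptions that may differ in your version.
import json
import urllib.request

def build_job(max_length, cut_width, target_sizes):
    return {
        "max_length": max_length,
        "cut_width": cut_width,
        "target_sizes": {str(k): v for k, v in target_sizes.items()},
    }

def solve(job, url="http://localhost:80/solve"):
    req = urllib.request.Request(
        url,
        data=json.dumps(job).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

job = build_job(1200, 5, {300: 4, 200: 3})
# result = solve(job)  # requires a running CutSolver instance
```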

Docker

You don't need to check out this repository and build your own image; I push prebuilt ones to Docker Hub. Download and start the container using the provided docker-compose file or with docker run [--rm -it] -p80:80 modischfabrications/cutsolver:latest.

Note: Replace latest with a version number if you depend on this interface; otherwise I can guarantee you that the interface will change randomly. It has been more or less stable since the 1.0 release, but be ready for the unexpected.

Both linux/amd64 and linux/arm/v7 are currently supported; more will be built whenever I get around to it. Message me if you need another architecture.

Performance

If it can run Docker, it can probably run CutSolver. 1 vCPU with 500MB RAM should be fine for small workloads.

Runtimes strongly depend on the single-core performance of your CPU. On a generic desktop you can expect 12 entries to be solved in ~1s with bruteforce and <0.1s with FFD, slower on weaker machines. Multiple cores won't speed up a single job, but they enable efficient solving of parallel jobs.

The thresholds that decide which solver handles which jobs are defined in constants.py and can be overridden via environment variables; see docker-compose.yml for details.
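As a sketch of how such an env-configurable threshold might look (the variable name BRUTEFORCE_MAX_ENTRIES is hypothetical; check constants.py and docker-compose.yml for the real ones):

```python
# Hypothetical sketch of an env-configurable solver threshold; the
# variable name BRUTEFORCE_MAX_ENTRIES is an assumption, check
# constants.py and docker-compose.yml for the real ones.
import os

BRUTEFORCE_MAX_ENTRIES = int(os.environ.get("BRUTEFORCE_MAX_ENTRIES", "12"))

def pick_solver(n_entries: int) -> str:
    return "bruteforce" if n_entries <= BRUTEFORCE_MAX_ENTRIES else "FFD"
```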

Contributing

Feel free to contact me or make a pull-request if you want to participate.

Sponsoring and/or paid development is also very welcome, feel free to reach out.

Git

Install pre-commit with pre-commit install && pre-commit install -t pre-push. You might need to replace #!/bin/sh with #!/usr/bin/env sh in the resulting *.legacy file on Windows.

All obvious errors should be checked and/or fixed by pre-commit; execute pre-commit run --all-files --hook-stage push to run it manually.

Use git push --no-verify if you really need to skip these tests, but you had better have a good explanation.

Change version number in main.py:version for newer releases, git tags will be created automatically.

Testing

Remember to test your changes using pytest. This will happen automatically both in pre-commit and in CI/CD, but manual tests will reduce iteration times.

Make sure your changes keep app.* imports or pytest will crash and burn due to missing import settings.

Code coverage and runtimes can be checked using pipenv run python -m pytest --durations=5 --cov=app/ --cov-report term-missing. Make sure that all critical parts of the code are covered, at v1.0.1 it is at 94%.

Proper type usage can be checked using pipenv run python -m mypy app.

Development Docker Images

  1. Build and start this image using docker-compose up
  2. Wait a while for dependencies to build (~1000s)
  3. Hope that everything works

docker run --rm -it -p 8000:80 $(docker build -q .) is also useful to start a temporary container for testing.

Push Production Docker Images

Docker Hub images should be updated automatically, but feel free to build them yourself should everything else fail. Adding "[skip ci]" to the commit message prevents any CI builds should the need arise. Thankfully, local builds are easy with the modern buildx workflow.

Installation of a multibuilder (once):

docker buildx create --name multibuilder --use
docker buildx inspect multibuilder --bootstrap

Build and push the new multi-arch image with the following steps (add version, e.g. v0.3.7):

docker login -u modischfabrications
docker buildx build --platform linux/amd64,linux/arm/v7,linux/arm64 \
    -t modischfabrications/cutsolver:<VERSION> \
    -t modischfabrications/cutsolver:latest --push .

Wait a while for every dependency to build (~1000s) and all layers to be pushed (~400s). Feel free to drink some water and be bored, that's healthy from time to time.

Check Docker Hub to see results.

Dependencies

Everything should be handled by Docker and/or pipenv.

This project uses:

Also used for development are:

  • pipenv: library management
  • httpie: simpler curl for docker healthchecks
  • pytest: much nicer unit tests
  • flake8: Linting
  • mypy: Static type checking
  • requests: simple HTTP requests
  • httpx: requirement of TestClient
  • black: uncompromising code formatter; currently unused

External links

https://scipbook.readthedocs.io/en/latest/bpp.html

cutsolver's People

Contributors

bouni, codacy-badger, dependabot[bot], jwidauer, modischfabrications


cutsolver's Issues

Add CLI

Add interface for consoles.

First idea for this interface:
python main.py -i job.json -o result.json

might be extendable to work with pipes or stdin/stdout, depending on what is actually requested.
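A sketch of what that CLI could look like with argparse; the solver call is stubbed out, and all names besides the suggested flags are illustrative:

```python
# Sketch of the proposed CLI; the solver call is stubbed out, and all
# names besides the suggested -i/-o flags are illustrative.
import argparse
import json

def main(argv=None):
    parser = argparse.ArgumentParser(description="Solve a cutting job from a file")
    parser.add_argument("-i", "--input", required=True, help="job JSON file")
    parser.add_argument("-o", "--output", required=True, help="result JSON file")
    args = parser.parse_args(argv)

    with open(args.input) as f:
        job = json.load(f)
    result = {"job": job}  # placeholder: call the actual solver here
    with open(args.output, "w") as f:
        json.dump(result, f)
```

The suggested invocation python main.py -i job.json -o result.json would then map straight onto main().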

Cache answers

People trying this service out are likely to submit the same job multiple times.
Caching these results will really save some execution time.

This is somewhat related to #24
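A possible caching sketch using functools.lru_cache; since job dicts aren't hashable, a canonical tuple key would be built first (all names hypothetical):

```python
# Possible caching sketch: job dicts aren't hashable, so a canonical
# tuple key is built first. All names are hypothetical.
from functools import lru_cache

def job_key(job):
    return (
        job["max_length"],
        job["cut_width"],
        tuple(sorted(job["target_sizes"].items())),
    )

@lru_cache(maxsize=128)
def solve_cached(key):
    return f"solution for {key}"  # placeholder for the actual solver call
```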

optimize brute-force

brute-forcing could be improved significantly by ignoring duplicate orderings (e.g. of 4x 200mm)
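A toy illustration of the potential savings: four identical targets produce 4! = 24 orderings, but only one distinct arrangement needs to be evaluated.

```python
# Toy illustration of the savings: four identical 200mm targets yield
# 4! = 24 orderings, but only one distinct arrangement must be checked.
from itertools import permutations

targets = (200, 200, 200, 200)
all_orders = list(permutations(targets))
distinct = set(all_orders)
print(len(all_orders), len(distinct))  # 24 1
```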

[Bug] Bruteforce applies cuts a second time

The different solvers return different results

Bruteforce:

(screenshot of the Bruteforce result)

FFD:

(screenshot of the FFD result)

The FFD result is correct in my opinion: 2x 495 = 990, + 6 = 996 => two pieces should fit into one stock of 1000

I would propose ditching the Bruteforce algorithm altogether, as it doesn't matter that FFD is sub-millisecond amounts slower when used for <10 items. Above that threshold FFD comes into action anyway, and that way we don't have to maintain two different algorithms.

The same goes for the gapfill algorithm.

add solver-suggestions

suggest solver while sending a Job

  1. allow requesting worse but faster results
  2. allow really expensive Jobs (with auth for security!)

Job as Map?

length: quantity

would make compressing (#11) easier and contain the same information in a structured way.
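The proposed map form and its expansion back into a flat target list, as a sketch:

```python
# The proposed map form and its expansion back into a flat target list.
job_targets = {500: 4, 200: 2}  # length: quantity

def expand(targets):
    return [length for length, qty in targets.items() for _ in range(qty)]

print(expand(job_targets))  # [500, 500, 500, 500, 200, 200]
```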

add Dockerfile labels

look for things that have a common format and are useful to have.

  1. author
  2. repo
  3. license?
  4. version?

Improve Dockerfile

Some pip packages are installed multiple times; find a way to prevent that without risking missing packages.

add watchdog

make Jobs more secure by including a watchdog that kills a calculation if it runs for longer than X.

proposed X: 10s (1min with auth)

add solver-type to Results

users might be interested to see whether their solution is the guaranteed optimum or just an approximation.

{
    ...
    "solver": "bruteforce (perfect)"
}

reevaluate the name "CutSolver"

"CutSolver" is not really a unique name, think about a nicer name with similar precision.

Ideas:

  • Cutsy
  • Partitioner
  • Cutify
  • CutIt
  • ...

Test for stable results

Random variations in solvers can result in tests working only some of the time. The problem was found during the rework of #66, where unsorted sets can produce different results each time.

Not optimized result

Hello, I am testing CutSolver and when trying a job I don't get an optimal solution:
Is there a way to select the solver type?
Thanks for your efforts!
{
    "job": {
        "max_length": 6500,
        "cut_width": 20,
        "target_sizes": {
            "1258": 2,
            "3958": 2
        }
    },
    "solver_type": "bruteforce",
    "time_us": 85,
    "lengths": [
        [3958],
        [3958, 1258],
        [1258]
    ]
}

Fix multiarch build

Docker Hub builds run until the timeout hits, while arm64 is done after 2min (according to the logs). Builds can't be cancelled manually.

No errors are shown; it seems like other architectures are being ignored even though qemu seems to register successfully.

Try installing qemu before building the image instead. Post to docker/hub-feedback#1261 if no other solution is found.

$ sudo apt update
$ sudo apt install qemu qemu-user-static qemu-user binfmt-support
Building in Docker Cloud's infrastructure...
Cloning into '.'...
Warning: Permanently added the RSA host key for IP address '192.30.253.112' to the list of known hosts.
Reset branch 'master'
Your branch is up-to-date with 'origin/master'.
Executing post_checkout hook...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 619 0 619 0 0 605 0 --:--:-- 0:00:01 --:--:-- 605
0 1678k 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0qemu-3.0.0+resin-arm/
qemu-3.0.0+resin-arm/qemu-arm-static
100 1678k 100 1678k 0 0 1081k 0 0:00:01 0:00:01 --:--:-- 7770k
Pulling cache layers for index.docker.io/modischfabrications/cutsolver:latest...
Done!
Executing pre_build hook...
Unable to find image 'multiarch/qemu-user-static:register' locally
register: Pulling from multiarch/qemu-user-static
fc1a6b909f82: Pulling fs layer
ed0d08c6c573: Pulling fs layer
96aa3d7b9b30: Pulling fs layer
8d669bf48302: Pulling fs layer
8d669bf48302: Waiting
ed0d08c6c573: Verifying Checksum
ed0d08c6c573: Download complete
96aa3d7b9b30: Verifying Checksum
96aa3d7b9b30: Download complete
8d669bf48302: Verifying Checksum
8d669bf48302: Download complete
fc1a6b909f82: Verifying Checksum
fc1a6b909f82: Download complete
fc1a6b909f82: Pull complete
ed0d08c6c573: Pull complete
96aa3d7b9b30: Pull complete
8d669bf48302: Pull complete
Digest: sha256:e972d7cd2aa56ed083dc74d7f0cb708e0a9d041aa64e50462785338b29bceeca
Status: Downloaded newer image for multiarch/qemu-user-static:register
Setting /usr/bin/qemu-alpha-static as binfmt interpreter for alpha
Setting /usr/bin/qemu-arm-static as binfmt interpreter for arm
Setting /usr/bin/qemu-armeb-static as binfmt interpreter for armeb
Setting /usr/bin/qemu-sparc32plus-static as binfmt interpreter for sparc32plus
Setting /usr/bin/qemu-ppc-static as binfmt interpreter for ppc
Setting /usr/bin/qemu-ppc64-static as binfmt interpreter for ppc64
Setting /usr/bin/qemu-ppc64le-static as binfmt interpreter for ppc64le
Setting /usr/bin/qemu-m68k-static as binfmt interpreter for m68k
Setting /usr/bin/qemu-mips-static as binfmt interpreter for mips
Setting /usr/bin/qemu-mipsel-static as binfmt interpreter for mipsel
Setting /usr/bin/qemu-mipsn32-static as binfmt interpreter for mipsn32
Setting /usr/bin/qemu-mipsn32el-static as binfmt interpreter for mipsn32el
Setting /usr/bin/qemu-mips64-static as binfmt interpreter for mips64
Setting /usr/bin/qemu-mips64el-static as binfmt interpreter for mips64el
Setting /usr/bin/qemu-sh4-static as binfmt interpreter for sh4
Setting /usr/bin/qemu-sh4eb-static as binfmt interpreter for sh4eb
Setting /usr/bin/qemu-s390x-static as binfmt interpreter for s390x
Setting /usr/bin/qemu-aarch64-static as binfmt interpreter for aarch64
Setting /usr/bin/qemu-aarch64_be-static as binfmt interpreter for aarch64_be
Setting /usr/bin/qemu-hppa-static as binfmt interpreter for hppa
Setting /usr/bin/qemu-riscv32-static as binfmt interpreter for riscv32
Setting /usr/bin/qemu-riscv64-static as binfmt interpreter for riscv64
Setting /usr/bin/qemu-xtensa-static as binfmt interpreter for xtensa
Setting /usr/bin/qemu-xtensaeb-static as binfmt interpreter for xtensaeb
Setting /usr/bin/qemu-microblaze-static as binfmt interpreter for microblaze
Setting /usr/bin/qemu-microblazeel-static as binfmt interpreter for microblazeel
Setting /usr/bin/qemu-or1k-static as binfmt interpreter for or1k
KernelVersion: 4.4.0-1060-aws
Components: [{u'Version': u'18.03.1-ee-3', u'Name': u'Engine', u'Details': {u'KernelVersion': u'4.4.0-1060-aws', u'Os': u'linux', u'BuildTime': u'2018-08-30T18:42:30.000000000+00:00', u'ApiVersion': u'1.37', u'MinAPIVersion': u'1.12', u'GitCommit': u'b9a5c95', u'Arch': u'amd64', u'Experimental': u'false', u'GoVersion': u'go1.10.2'}}]
Arch: amd64
BuildTime: 2018-08-30T18:42:30.000000000+00:00
ApiVersion: 1.37
Platform: {u'Name': u''}
Version: 18.03.1-ee-3
MinAPIVersion: 1.12
GitCommit: b9a5c95
Os: linux
GoVersion: go1.10.2
Starting build of index.docker.io/modischfabrications/cutsolver:latest...
Step 1/7 : FROM python:3.7
---> a4cc999cf2aa
Step 2/7 : EXPOSE 80
---> Using cache
---> 62a54b98454e
Step 3/7 : COPY ./requirements.txt requirements.txt
---> Using cache
---> 82eaf342eb1e
Step 4/7 : RUN pip install -r requirements.txt
---> Using cache
---> bae301531cc6
Step 5/7 : COPY ./app /app
---> Using cache
---> b23abeae1990
Step 6/7 : WORKDIR /app
---> Using cache
---> 60309b3c52ab
Step 7/7 : CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80"]
---> Using cache
---> 21fdc721f83e
Successfully built 21fdc721f83e
Successfully tagged modischfabrications/cutsolver:latest
Pushing index.docker.io/modischfabrications/cutsolver:latest...
Done!
Build finished

evaluate size increase of wget

curl as an addon adds another 15 MB to the image, which is way too much. Try using wget instead and see if that reduces the size increase to well under 5 MB.

Weight equally good solutions by some metric

There are often solutions with identical trimmings. The current code just picks one at random, but a setting could prioritize based on secondary metrics.

  1. Most cuts of the same length in a row
  2. largest trimmings at once / largest distribution of trimmings
  3. ..?
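A sketch of metric 1 above as a tie-breaker: prefer solutions with the most repeated cut lengths per stock. The scoring function is illustrative, not existing code:

```python
# Sketch of metric 1 as a tie-breaker: prefer solutions with the most
# repeated cut lengths per stock. The scoring function is illustrative.
def repeat_score(solution):
    """solution: list of stocks, each a list of cut lengths."""
    return sum(len(stock) - len(set(stock)) for stock in solution)

a = [[200, 200, 200], [500]]  # three identical cuts in one stock
b = [[200, 500], [200, 200]]
assert repeat_score(a) > repeat_score(b)
```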

Autolabel versions

Track versions only in git and derive everything else from it.

  1. Git hook (pre-commit?): if a tag is set, update the version in a file (simple >>?)
  2. Python code loads variable from file
  3. Running instances can curl version file to warn with outdated answers.

shrink docker image

All comparisons use amd64. Check using docker image ls

old size (pre-multistage): 344 MB

new size (py3.7 builder, py3.7slim): 239 MB

new size (3.8): 287 MB

No idea what's wrong; might be because of a complete copy of pip?
The -slim base image is <100MB

Allow items with a length equal to stock length

At the moment, when I have a stock length of, let's say, 1500, an item of 1500 leads to an error because of the cut width.

I think it should be easy to check whether an item is equal to the stock length (or rather stock length - cut_width) and allow that special case.
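A hypothetical version of that check (names and semantics are assumptions, not the project's actual validation code):

```python
# Hypothetical version of the proposed check: an item exactly as long
# as the stock needs no cut, so no cut_width should be charged for it.
def fits(item, max_length, cut_width):
    if item == max_length:
        return True  # special case: uses the whole stock, no cut needed
    return item + cut_width <= max_length
```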

add license stuff

Check which licenses FastAPI (and other submodules) use and whether they impose any limits on this project.

Target is something permissive that encourages sharing extensions to this project but allows unmodified usage for everything. (probably LGPL?)

Workers and a job queue

Having workers and a queue of pending jobs was considered but seemed unnecessary: ideally every request gets its own thread and a (by comparison) short calculation time, which makes a queue useless. The same argument also holds for a result buffer.

improve travis tests

test docker image health? -> waiting 5 min is bad; check it manually with curl after ~20s instead.

might want to put the test stuff from the magic file back or split it up completely.
