davidgillsjo / srw-net
Semantic Room Wireframe Detection from a single perspective image
License: MIT License
Hi David,
I was trying to replicate the training steps provided in the repo. The 1st and 2nd steps completed without errors, but while training the refinement module I get this error:
RuntimeError: mat1 and mat2 shapes cannot be multiplied (128x58 and 128x64)
I understand there is a mismatch between layers. What I found is that the shapes change as follows:
(78,1024)
(78,512)
(78,128)
(128,58)
I reckon the (128,58) should be (128,64) here.
I followed all the steps with only one change: while training the predictor with HAWP weights I changed IMS_PER_BATCH to 1 due to GPU memory, and I didn't use the IMS_PER_BATCH 1 argument when generating the dataset for the GNN.
Is it possible that these two changes are causing this error?
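For reference, this class of error can be reproduced with a minimal sketch. The layer sizes below are hypothetical, not taken from SRW-Net; the point is that a `torch.nn.Linear` declared for one feature dimension raises exactly this RuntimeError when fed features of a different width, which is consistent with the GNN features having been generated under a different configuration than the refinement module expects:

```python
import torch

# Hypothetical sizes: a layer expecting 64-dim features receives 58-dim
# input, producing the same "mat1 and mat2 shapes cannot be multiplied"
# RuntimeError seen above.
layer = torch.nn.Linear(64, 32)
x = torch.randn(128, 58)  # feature dim 58 instead of the expected 64
try:
    layer(x)
except RuntimeError as e:
    print("shape mismatch:", e)
```

If this is the cause, regenerating the GNN dataset with the same settings used for training the predictor should make the feature width match the layer again.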
Hi, I'm running the commands from "Run inference on your own images" via the Docker environment:
python3 test.py --config-file ../config-files/layout-SRW-S3D.yaml --img-folder ruby-room.jpg
CHECKPOINT ./data/model_proposal_s3d.pth
GNN_CHECKPOINT ../data/model_gnn_s3d.pth
but I get this runtime error:
Traceback (most recent call last):
File "test.py", line 575, in <module>
cfg.merge_from_file(args.config_file)
File "/usr/local/lib/python3.6/dist-packages/yacs/config.py", line 213, in merge_from_file
self.merge_from_other_cfg(cfg)
File "/usr/local/lib/python3.6/dist-packages/yacs/config.py", line 217, in merge_from_other_cfg
_merge_a_into_b(cfg_other, self, self, [])
File "/usr/local/lib/python3.6/dist-packages/yacs/config.py", line 478, in _merge_a_into_b
_merge_a_into_b(v, b[k], root, key_list + [k])
File "/usr/local/lib/python3.6/dist-packages/yacs/config.py", line 478, in _merge_a_into_b
_merge_a_into_b(v, b[k], root, key_list + [k])
File "/usr/local/lib/python3.6/dist-packages/yacs/config.py", line 491, in _merge_a_into_b
raise KeyError("Non-existent config key: {}".format(full_key))
KeyError: 'Non-existent config key: MODEL.PARSING_HEAD.LAYERS'
This happens even though /config-files/layout-SRW-S3D.yaml does have these keys.
Any ideas?
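One thing worth noting: yacs validates YAML keys against the *default config defined in the code*, not against the YAML file itself. The sketch below mimics that merge rule with a plain-dict stand-in (an assumption-level illustration, not yacs' actual implementation), so the KeyError suggests the checked-out code's default config is from a version that predates MODEL.PARSING_HEAD.LAYERS:

```python
# Minimal mimic of yacs' merge rule: every key in the loaded YAML must
# already exist in the code's default config, otherwise the merge raises
# "Non-existent config key". The YAML having the key is not enough.
def merge(src, dst, path=""):
    for k, v in src.items():
        full = f"{path}.{k}" if path else k
        if k not in dst:
            raise KeyError(f"Non-existent config key: {full}")
        if isinstance(v, dict):
            merge(v, dst[k], full)
        else:
            dst[k] = v

defaults = {"MODEL": {"NAME": "srw"}}          # defaults lack PARSING_HEAD
yaml_cfg = {"MODEL": {"PARSING_HEAD": {"LAYERS": 2}}}
try:
    merge(yaml_cfg, defaults)
except KeyError as e:
    print(e)  # prints the offending key path
```

If that is the situation here, pulling the latest code (so the defaults include the PARSING_HEAD keys) would resolve it.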
While running prediction on my own images inside the Docker environment, I get an error about a missing JSON file.
Where can I get it?
I didn't download the annotation files since they are too big (94 GB?). Do I need them in order to run inference on my own images?
File "test.py", line 595, in <module>
show_legend = not args.no_legend)
File "test.py", line 45, in __init__
self.datasets = build_test_dataset(cfg, validation = validation)
File "/host_home/repos/SRW-Net/parsing/dataset/build.py", line 69, in build_test_dataset
dataset = factory(**args)
File "/host_home/repos/SRW-Net/parsing/dataset/test_dataset.py", line 24, in __init__
with open(ann_file, 'r') as _:
FileNotFoundError: [Errno 2] No such file or directory: '/host_home/repos/SRW-Net/data/Structured3D_wf_open_doors_1mm/test.json'
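From the traceback, test.py builds a test dataset even for inference, and the dataset constructor opens an annotation JSON resolved from the config. A hypothetical pre-flight check like the one below (the path is the one from the traceback) at least makes the missing-file case explicit before the loader crashes deep in the stack:

```python
import os

# Pre-flight check (illustrative sketch): verify the annotation JSON the
# config points at actually exists before building the dataset.
ann_file = "data/Structured3D_wf_open_doors_1mm/test.json"
if os.path.isfile(ann_file):
    print("annotation file found")
else:
    print(f"annotation file missing: {ann_file}")
```

Whether inference on arbitrary images can skip the annotations entirely is a question for the maintainer; the check only clarifies which file the code is looking for.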
Hi, I was wondering if you could add an MIT license so we could try to use this in our projects. Thanks for your work!
Hi,
Thanks for the awesome results!
Can you recommend a few methods to get the length of a detected line, e.g. the length of a wall?
Thanks.
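As a starting point, if the detections are line segments given by endpoint pairs in image coordinates (an assumption about the output format, not confirmed by the repo), the length in pixels is just the Euclidean distance between the endpoints. Converting pixels to metres is the hard part: a single perspective image gives no absolute scale without camera intrinsics plus a depth or known-size reference.

```python
import math

# Length of a line segment given endpoints (x1, y1) and (x2, y2) in
# pixel coordinates. Metric length would additionally need scale
# information that a single image does not provide.
def segment_length_px(x1, y1, x2, y2):
    return math.hypot(x2 - x1, y2 - y1)

print(segment_length_px(0, 0, 3, 4))  # → 5.0
```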
Hi, I have problems installing this repo.
When I run build.sh as suggested in the instructions, I get the following error, even after updating g++ and build-essential:
running build_ext
building 'parsing._C' extension
Emitting ninja build file /home/ariel/work/Replicate/repos/SRW-Net/build/temp.linux-x86_64-3.8/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] /usr/local/cuda/bin/nvcc -DWITH_CUDA -I/home/ariel/work/Replicate/repos/SRW-Net/parsing/csrc -I/home/ariel/.local/lib/python3.8/site-packages/torch/include -I/home/ariel/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/ariel/.local/lib/python3.8/site-packages/torch/include/TH -I/home/ariel/.local/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.8 -c -c /home/ariel/work/Replicate/repos/SRW-Net/parsing/csrc/cuda/linesegment.cu -o /home/ariel/work/Replicate/repos/SRW-Net/build/temp.linux-x86_64-3.8/home/ariel/work/Replicate/repos/SRW-Net/parsing/csrc/cuda/linesegment.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -std=c++14
FAILED: /home/ariel/work/Replicate/repos/SRW-Net/build/temp.linux-x86_64-3.8/home/ariel/work/Replicate/repos/SRW-Net/parsing/csrc/cuda/linesegment.o
/usr/local/cuda/bin/nvcc -DWITH_CUDA -I/home/ariel/work/Replicate/repos/SRW-Net/parsing/csrc -I/home/ariel/.local/lib/python3.8/site-packages/torch/include -I/home/ariel/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/ariel/.local/lib/python3.8/site-packages/torch/include/TH -I/home/ariel/.local/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.8 -c -c /home/ariel/work/Replicate/repos/SRW-Net/parsing/csrc/cuda/linesegment.cu -o /home/ariel/work/Replicate/repos/SRW-Net/build/temp.linux-x86_64-3.8/home/ariel/work/Replicate/repos/SRW-Net/parsing/csrc/cuda/linesegment.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -std=c++14
gcc: error trying to exec 'cc1plus': execvp: No such file or directory
nvcc fatal : Failed to preprocess host compiler properties.
[2/2] c++ -MMD -MF /home/ariel/work/Replicate/repos/SRW-Net/build/temp.linux-x86_64-3.8/home/ariel/work/Replicate/repos/SRW-Net/parsing/csrc/vision.o.d -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DWITH_CUDA -I/home/ariel/work/Replicate/repos/SRW-Net/parsing/csrc -I/home/ariel/.local/lib/python3.8/site-packages/torch/include -I/home/ariel/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/ariel/.local/lib/python3.8/site-packages/torch/include/TH -I/home/ariel/.local/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.8 -c -c /home/ariel/work/Replicate/repos/SRW-Net/parsing/csrc/vision.cpp -o /home/ariel/work/Replicate/repos/SRW-Net/build/temp.linux-x86_64-3.8/home/ariel/work/Replicate/repos/SRW-Net/parsing/csrc/vision.o -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
In file included from /home/ariel/.local/lib/python3.8/site-packages/torch/include/ATen/Parallel.h:140,
from /home/ariel/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
from /home/ariel/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
from /home/ariel/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
from /home/ariel/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:13,
from /home/ariel/.local/lib/python3.8/site-packages/torch/include/torch/extension.h:4,
from /home/ariel/work/Replicate/repos/SRW-Net/parsing/csrc/cuda/vision.h:2,
from /home/ariel/work/Replicate/repos/SRW-Net/parsing/csrc/linesegment.h:2,
from /home/ariel/work/Replicate/repos/SRW-Net/parsing/csrc/vision.cpp:1:
/home/ariel/.local/lib/python3.8/site-packages/torch/include/ATen/ParallelOpenMP.h:87: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
87 | #pragma omp parallel for if ((end - begin) >= grain_size)
|
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "/home/ariel/.local/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1666, in _run_ninja_build
subprocess.run(
File "/usr/lib/python3.8/subprocess.py", line 516, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "setup.py", line 62, in <module>
setup(
File "/usr/lib/python3/dist-packages/setuptools/__init__.py", line 144, in setup
return distutils.core.setup(**attrs)
File "/usr/lib/python3.8/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/lib/python3.8/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/usr/lib/python3.8/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/lib/python3/dist-packages/setuptools/command/build_ext.py", line 87, in run
_build_ext.run(self)
File "/home/ariel/.local/lib/python3.8/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run
_build_ext.build_ext.run(self)
File "/usr/lib/python3.8/distutils/command/build_ext.py", line 340, in run
self.build_extensions()
File "/home/ariel/.local/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 709, in build_extensions
build_ext.build_extensions(self)
File "/home/ariel/.local/lib/python3.8/site-packages/Cython/Distutils/old_build_ext.py", line 195, in build_extensions
_build_ext.build_ext.build_extensions(self)
File "/usr/lib/python3.8/distutils/command/build_ext.py", line 449, in build_extensions
self._build_extensions_serial()
File "/usr/lib/python3.8/distutils/command/build_ext.py", line 474, in _build_extensions_serial
self.build_extension(ext)
File "/usr/lib/python3/dist-packages/setuptools/command/build_ext.py", line 208, in build_extension
_build_ext.build_extension(self, ext)
File "/usr/lib/python3.8/distutils/command/build_ext.py", line 528, in build_extension
objects = self.compiler.compile(sources,
File "/home/ariel/.local/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 530, in unix_wrap_ninja_compile
_write_ninja_file_and_compile_objects(
File "/home/ariel/.local/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1355, in _write_ninja_file_and_compile_objects
_run_ninja_build(
File "/home/ariel/.local/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1682, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension
Can you help?
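The key line in the log is `gcc: error trying to exec 'cc1plus': execvp: No such file or directory`: gcc is installed but its C++ backend (cc1plus, shipped with g++) is missing or mismatched, so nvcc's host compilation fails before anything SRW-Net-specific runs. A quick diagnostic sketch, using gcc's standard `--print-prog-name` option:

```python
import shutil
import subprocess

# Ask g++ where it expects its C++ backend. gcc prints an absolute path
# when cc1plus is installed, and a bare "cc1plus" (no slash) when it is
# missing -- the latter matches the execvp failure in the build log.
gxx = shutil.which("g++")
if gxx is None:
    print("g++ not installed")
else:
    out = subprocess.run(
        [gxx, "--print-prog-name=cc1plus"],
        capture_output=True, text=True,
    ).stdout.strip()
    print("cc1plus ok" if "/" in out else "cc1plus missing")
```

If it reports missing, reinstalling `g++` and `build-essential` (on Debian/Ubuntu) typically restores cc1plus; also make sure the gcc and g++ versions match if multiple toolchains are installed.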