cvg / gluestick
Joint Deep Matcher for Points and Lines 🖼️💥🖼️ (ICCV 2023)
Home Page: https://iago-suarez.com/gluestick
License: MIT License
I have a question about the re-projection error presented in the paper. It is mentioned that the authors followed SuperGlue to calculate the re-projection error.
Can I assume that you used SED to compute the Area Under the Curve (AUC) of the re-projection error of the four image corners?
Thank you.
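For reference, SuperGlue's homography evaluation computes the mean re-projection error of the four image corners and reports the area under the cumulative error curve up to several pixel thresholds. A common implementation of that AUC looks like the sketch below (this mirrors the usual recipe, not necessarily the exact evaluation code used here):

```python
import numpy as np

def error_auc(errors, thresholds=(1, 3, 5)):
    """AUC of the cumulative error curve, one value per pixel threshold."""
    errors = np.sort(np.asarray(errors, dtype=float))
    recall = (np.arange(len(errors)) + 1) / len(errors)
    errors = np.r_[0.0, errors]
    recall = np.r_[0.0, recall]
    aucs = []
    for t in thresholds:
        idx = np.searchsorted(errors, t)
        e = np.r_[errors[:idx], t]                # clip the curve at the threshold
        r = np.r_[recall[:idx], recall[idx - 1]]
        # trapezoidal integration, normalized by the threshold
        aucs.append(np.sum((e[1:] - e[:-1]) * (r[1:] + r[:-1]) / 2) / t)
    return aucs
```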
Hi, when will the training code be published?
Hi, professor.
This is great work! Your architecture is very smart!
I am studying your paper and code right now and successfully executed the script run.py; it detects points and lines and matches them correspondingly. But how can I get the homography of -img1 and -img2 based on point-only, line-only, or points+lines matches separately? Can I get the homography using just the scripts in this repository, or do I need some other algorithm like SuperGlue?
Thank you very much.
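If it helps, a homography can be fitted directly to the matched keypoint coordinates that run.py returns (line endpoints can be stacked in the same way). Below is a minimal numpy sketch of the standard DLT fit, where pts0 and pts1 are assumed to be (N, 2) arrays of matched points; for robustness to outliers you would normally wrap this in RANSAC or use cv2.findHomography instead:

```python
import numpy as np

def estimate_homography(pts0, pts1):
    """Direct Linear Transform: estimate H such that pts1 ~ H @ pts0
    in homogeneous coordinates. pts0, pts1: (N, 2) arrays, N >= 4."""
    A = []
    for (x, y), (u, v) in zip(pts0, pts1):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```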
Hi, I am very interested in some recent developments of GlueStick. What I care about is whether GlueStick is easy to deploy and port to other platforms, for example whether it can be smoothly exported to ONNX.
Hello Team,
Is there any easy way to integrate GlueStick's feature extraction and feature matching outputs into hloc and/or COLMAP?
I am just curious about it.
PS: This is a great work by the way. Thanks for open-sourcing it.
Thanks.
Hello, sorry to bother you. I want to use only points; which modules do I need? Is it SuperPoint?
Can I use any simple homography for visual alignment of two images instead of homography estimation? The dependency installation is very difficult on Windows, and a lot of it uses C++.
Hi,
I noticed while using GlueStick that memory usage keeps increasing when I run inference repeatedly; after about 100 runs, for example, the memory is completely full and the program is killed.
Any ideas about this issue?
Thanks
Hello, how do I filter out bad confidence scores?
LoFTR has a variable called mconf, which is the matching confidence between points.
LoFTR structure:
b_ids: batch indices
i_ids: indices of the selected coarse matches on image 0
j_ids: indices of the selected coarse matches on image 1
gt_mask: this variable is only used during training. In the training code, gt matches will be padded to the selected matches to ensure the same batch size among image pairs.
mkpts0_c: matched keypoints at the coarse level on image 0
mkpts1_c: matched keypoints at the coarse level on image 1
mconf: the matching confidence
For GlueStick the closest I could find is:
keypoint_scores0
keypoint_scores1
line_scores0
line_scores1
(a match for what?)
match_scores0
match_scores1
line_matches0
line_matches1
line_match_scores0
line_match_scores1
raw_line_scores
There are so many scores; it is really confusing.
Can anyone explain what each of these does?
And how can I filter them with a threshold, similar to LoFTR?
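For what it's worth, a threshold filter analogous to LoFTR's mconf can be sketched as below. This assumes the usual partial-assignment convention that matches0[i] holds the index of the matched keypoint in image 1 (or -1 for unmatched) and match_scores0[i] its confidence; the exact key semantics are my assumption, not confirmed by the authors:

```python
import numpy as np

def filter_matches(kpts0, kpts1, matches0, match_scores0, conf_thresh=0.5):
    """Keep only matched keypoints whose confidence exceeds a threshold.
    matches0: (N,) int array, -1 means no match; match_scores0: (N,) floats."""
    valid = (matches0 > -1) & (match_scores0 >= conf_thresh)
    mkpts0 = kpts0[valid]                  # matched points in image 0
    mkpts1 = kpts1[matches0[valid]]        # their counterparts in image 1
    mconf = match_scores0[valid]           # confidences, like LoFTR's mconf
    return mkpts0, mkpts1, mconf
```

The same pattern should apply to line_matches0 / line_match_scores0 for segments.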
Hi, great work!
In your paper you mention that you use solvers to generate poses from a minimal set of 3 features (3 points, 2 points and 1 line, 1 point and 2 lines, or 3 lines), and then combine them in a hybrid RANSAC to recover the query camera poses.
I hope you will open-source that code. Thanks.
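These are not the paper's solvers, but as a generic illustration of the hybrid-RANSAC idea (each iteration picks one of several minimal solvers, fits a hypothesis from its own sample, and keeps the hypothesis with the most inliers), here is a toy example fitting a 2D line y = a*x + b using either two points or one point plus a slope hypothesis:

```python
import random
import numpy as np

def hybrid_ransac_line(points, slope_hypotheses, iters=500, tol=0.1, seed=0):
    """Toy hybrid RANSAC: fit y = a*x + b with two minimal solvers."""
    rng = random.Random(seed)
    pts = [tuple(p) for p in points]
    best, best_inliers = None, -1
    for _ in range(iters):
        if rng.random() < 0.5:
            # Solver 1: line through two sampled points.
            (x1, y1), (x2, y2) = rng.sample(pts, 2)
            if x1 == x2:
                continue
            a = (y2 - y1) / (x2 - x1)
        else:
            # Solver 2: line through one point with a sampled slope.
            x1, y1 = rng.choice(pts)
            a = rng.choice(slope_hypotheses)
        b = y1 - a * x1
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = int((residuals < tol).sum())
        if inliers > best_inliers:
            best, best_inliers = (a, b), inliers
    return best
```

The real method scores pose hypotheses from the four point/line minimal sets in the same spirit.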
Hi
I am trying to build the environment, but ran into this error:
subprocess.CalledProcessError: Command '['cmake', '--build', '.']' returned non-zero exit status 2.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pytlsd
Failed to build pytlsd
ERROR: Could not build wheels for pytlsd, which is required to install pyproject.toml-based projects
I am using Python 3.9 with requirements.txt.
I'd be grateful if you could help me solve it.
I've been exploring your library and am genuinely impressed with its performance. As I'm relatively new to point/line mapping, I'm seeking guidance on transforming single points after estimating the homography matrix. Using the method:
H = hest.ransac_point_line_homography(matched_kps0, matched_kps1, line_seg0, line_seg1, tol_px, False, [], [])
I obtained the homography matrix H. My intention was to use this matrix to map individual points from img0 to img1. To achieve this, I utilized:
mapped_point = cv2.perspectiveTransform(np.array([[[x, y]]], dtype='float32'), H)
x_mapped, y_mapped = mapped_point[0][0]
However, the results from this transformation were not as expected. Notably, when using the homography matrix H with cv2.warpPerspective() to warp the entire image, the results are accurate and as anticipated. Furthermore, the images I'm working with differ in size:
img0: (1080, 1920)
img1: (2020, 3584)
Could the differing image dimensions influence the transformation results? If so, how should I appropriately handle this scenario?
Any insights or guidance on whether I need to adopt a different approach or utilize another function would be greatly appreciated.
Thank you for your assistance!
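For what it's worth, cv2.perspectiveTransform is equivalent to multiplying by H in homogeneous coordinates and dividing by the last component, and the transform itself does not know or care about image sizes. Differing dimensions only matter if the keypoints were detected on resized copies of the images; in that case H lives in the resized coordinate frame, and points must be scaled into that frame before applying H and scaled back afterwards. A numpy sketch of the point mapping, useful for sanity-checking:

```python
import numpy as np

def apply_homography(H, pts):
    """Map (N, 2) points through a 3x3 homography, as cv2.perspectiveTransform does."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coordinates
    mapped = (H @ pts_h.T).T
    return mapped[:, :2] / mapped[:, 2:3]              # perspective divide
```

If this manual mapping agrees with cv2.perspectiveTransform but still lands in the wrong place, the likely culprit is a coordinate-frame mismatch (resized detection images) rather than the transform call itself.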
Hi,
When I try to follow the installation steps, I encounter a problem where 'pip install -e .' cannot find the setup.py file. How should I resolve this issue?
ERROR: File "setup.py" not found. Directory cannot be installed in editable mode: /com.docker.devenvironments.code/GlueStick
(A "pyproject.toml" file was found, but editable mode currently requires a setup.py based build.)
thank you.
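In case it helps: pip only learned to do editable installs from a pyproject.toml-only project (without setup.py) with PEP 660 support, which was added in pip 21.3, so upgrading pip inside the environment usually resolves this exact error:

```shell
# PEP 660 editable installs (pyproject.toml without setup.py) need pip >= 21.3
python -m pip install --upgrade pip
# then, from the GlueStick checkout:
# pip install -e .
```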
It seems that the pipeline has the arguments use_lines and use_points. However, they do not seem to be used: changing their values still outputs both points and lines.
GlueStick/gluestick/models/wireframe.py, line 119 (commit 40d71d5)
Hello! Thanks for this impressive work. I would like to use this work in my own code to batch-infer images, but I found an issue: in GlueStick/gluestick/models/wireframe.py, line 119, there seems to be a memory leak in the lsd method.
To verify this, I conducted a simple experiment. When the lsd method is called within a for loop, the memory usage keeps increasing, as shown in the figure. I would like to ask how to solve this problem, or if there are any alternative methods to lsd that can be used.
Looking forward to your reply!
from pytlsd import lsd
import numpy as np

def run():
    for i in range(1000):
        img = np.random.randint(0, 255, size=(1000, 2000)).astype(np.uint8)
        b_segs = lsd(img)
        print(i)

if __name__ == '__main__':
    run()
Hey! Your work is amazing! I have a question: how many FPS can your model achieve when running a live demo? Is it real-time? Was your demo video processed offline frame by frame and then combined into a video, or was it processed in real time? As a beginner, I look forward to your answer. Thank you very much!
Hi,
Thank you for your excellent work.
Can you compare the FPS of your model and LoFTR?
I am looking forward to hearing from you.
Thank you very much for your research results. I have a question and sincerely hope you can answer it.
I don't understand how to obtain these four tensors from the data dict:
lines0 = data['lines0'].flatten(1, 2)
lines1 = data['lines1'].flatten(1, 2)
lines_junc_idx0 = data['lines_junc_idx0'].flatten(1, 2) # [b_size, num_lines * 2]
lines_junc_idx1 = data['lines_junc_idx1'].flatten(1, 2)
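As I understand it (an assumption based on the shapes in the code, not an official description), lines holds the two endpoint coordinates of each segment, and lines_junc_idx maps each endpoint to the index of its junction in the keypoint list, so that line endpoints can share descriptors with points. A small numpy sketch of how such tensors could be built from detected junctions and segments, for a single image without the batch dimension:

```python
import numpy as np

def build_line_data(junctions, segments):
    """For each segment endpoint, find the nearest junction index and
    express the line by the junction coordinates.
    junctions: (M, 2) array; segments: (N, 2, 2) array of endpoints."""
    # Pairwise distances from every endpoint to every junction: (N, 2, M).
    d = np.linalg.norm(segments[:, :, None, :] - junctions[None, None, :, :], axis=-1)
    lines_junc_idx = d.argmin(-1)        # (N, 2): junction index per endpoint
    lines = junctions[lines_junc_idx]    # (N, 2, 2): endpoints snapped to junctions
    return lines, lines_junc_idx
```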
Hi, I am very interested in your research project. When will you provide the training code? And can you provide the code that generates the ground truth for the dataset?
Hello, great job!
Can your method match pre-extracted line segments? For example, I have already obtained line segments from two images using LSD or another line detector. Can I then use your method for line matching? That is, I simply want to match the extracted line segments.
Looking forward to your reply very much!
Hi, I would like to ask which training set was used for the weights provided on GitHub. I tested those weights on the HPatches dataset and found that they did not match very well on image pairs with viewpoint changes. Which dataset did you use to train the weights you tested on HPatches?
Hello, great job!
I noticed that SuperGlue uses the Sinkhorn algorithm in its matching phase, while this paper uses dual-softmax. May I ask why their effects are similar? Is there any theoretical support?
Looking forward to your reply very much!
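For context, dual-softmax (as used in LoFTR) normalizes the score matrix along rows and along columns and multiplies the two, while Sinkhorn iterates row/column normalizations toward a doubly-stochastic assignment; a single dual-softmax step can be viewed as a cheap one-iteration approximation of that, which may be why the results end up similar in practice (my interpretation, not the authors'). A minimal sketch:

```python
import numpy as np

def dual_softmax(sim, temperature=0.1):
    """Dual-softmax matching: softmax the score matrix along rows and
    columns separately, then take the elementwise product."""
    s = sim / temperature
    p_rows = np.exp(s - s.max(1, keepdims=True))
    p_rows /= p_rows.sum(1, keepdims=True)     # softmax over image-1 candidates
    p_cols = np.exp(s - s.max(0, keepdims=True))
    p_cols /= p_cols.sum(0, keepdims=True)     # softmax over image-0 candidates
    return p_rows * p_cols                     # high only if both directions agree
```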