taiya / dgp
Digital Geometry Processing - Fall 2016 - University of Victoria
"The triangulation approach in [6] is tedious, but you might want to impress us and implement a recent extension [7]."
There is no [7]
@ataiya Could you explain again how to get the relationship between L_max/L_min and L (slide 42)? I guess L is the average edge length. What does the equation |L_max - L| = |0.5*L_max - L| mean? (Actually, I think this is the only thing I'm not yet clear on.)
My question is how to get the key values of such objects: "SurfaceMesh::Vertex_property vpoints;". Although I can get the number of vertices using cloud.n_vertices(), it doesn't seem to be a standard array, i.e., I cannot access a vertex by vpoints[int i]. Similarly, I need to assign normal values to the property vnormal, so I also need a way to modify the value by a key.
Another question: can I update the cloud's property using the new vnormal? If yes, what would be the usage?
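For what it's worth, in OpenGP's SurfaceMesh properties are indexed by vertex *handles* rather than plain ints. A minimal sketch, assuming the OpenGP header path and the conventional "v:point"/"v:normal" property names (double-check these against the base code):

```cpp
#include <OpenGP/SurfaceMesh/SurfaceMesh.h>
using namespace OpenGP;

void normals_example(SurfaceMesh& cloud) {
    auto vpoints  = cloud.get_vertex_property<Point>("v:point");   // existing property
    auto vnormals = cloud.vertex_property<Normal>("v:normal");     // get-or-create

    for (auto v : cloud.vertices()) {     // v is a SurfaceMesh::Vertex handle
        Point p = vpoints[v];             // read by handle
        (void)p;
        vnormals[v] = Normal(0, 0, 1);    // write by handle (placeholder value)
    }

    // A handle can also be constructed from an integer index:
    SurfaceMesh::Vertex v0(0);
    Point p0 = vpoints[v0];
    (void)p0;
}
```

So writing back the computed normals is just `vnormals[v] = n;` inside the loop; the property lives on the mesh, so no separate "update" step should be needed.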
Page 65 is not consistent.
dgp/hw3_deformation/main.cpp:40:10: error: virtual function 'mouse_press_callback' has a different return type ('void') than the function it overrides (which has return type 'bool') void mouse_press_callback(int button, int action, int /*mods*/) override
And
dgp/hw3_deformation/main.cpp:11:25: error: call to implicitly-deleted copy constructor of 'Deform' Deform deformator = Deform(mesh, this->scene);
I am looking at the very simple quad_mesh object and found something weird. In the picture, when calculating cotan values of vertex i, the vertex of angle beta acquired by using mesh.to_vertex(mesh.next_halfedge(e_ij)) is on the upper-left corner, which does not have an edge connected to vertex i at all. That makes it impossible to get the right laplacian beltrami matrix. The woody object is too big to check, so is it possible this data file also has the same issue?
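For reference, on a *triangle* mesh the two opposite vertices of edge (i, j) are usually reached as below (OpenGP-style halfedge calls). If the quad_mesh file really contains quads, `next_halfedge` lands two steps away from vertex i, which would explain exactly what you're seeing; the cotan formula assumes a triangulated mesh.

```cpp
// Sketch assuming OpenGP's SurfaceMesh API; h points from i to j,
// and every incident face is a triangle.
SurfaceMesh::Vertex alpha_vertex(const SurfaceMesh& mesh, SurfaceMesh::Halfedge h) {
    // third vertex of the face containing h (angle alpha sits here)
    return mesh.to_vertex(mesh.next_halfedge(h));
}
SurfaceMesh::Vertex beta_vertex(const SurfaceMesh& mesh, SurfaceMesh::Halfedge h) {
    // third vertex of the face on the other side of the edge (angle beta)
    return mesh.to_vertex(mesh.next_halfedge(mesh.opposite_halfedge(h)));
}
```

On a quad face, `to_vertex(next_halfedge(h))` gives the vertex after j, which is not adjacent to i — consistent with the upper-left-corner vertex you observed.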
This is from the assignment description: "Invalid collapses are defined as the ones that produce a non-manifold configuration (or a face fold-over)." For the face fold-over, we can use the threshold provided by the base code (min_cost). What about the "non-manifold configuration"? Any rules (or do we need to deduct the rules ourselves)?
The pos returned from the function "unproject_mouse" is not consistent with the mesh coordinates.
I tried printing xPos, yPos, _width, _height
after I clicked the bottom-right corner, and I got:
xPos: 326.086, yPos: 325.675, width: 800, height: 800
I am using a Mac. Does anybody have an idea why this happens?
Note: if you plan to work on this bonus question, please reply to this message; only the first correct submission will be accepted.
I would like to have a small application that shows the distortion a parameterization (discussed when we talked about the first fundamental form) causes around a point. Starting from the OpenGP multi-window example, build an application that realizes this kind of visualization:
On the left you'll show the texture map [0,1]x[0,1] of the model; on the right you'll show the 3D model. When the mouse is moved in the texture-map window, a red circle is displayed at the mouse location, and the mapping of this circle onto the 3D surface is displayed.
I remain available to answer any question you might have.
Tasks:
Is that the centroid of the k-sized neighbourhood?
I hope my understanding is correct:
(1) Pre-factor the Laplacian weight matrix with w_ij = 0.5 * (cot_alpha + cot_beta) (no area value).
(2) When the selected handle is moved, the new positions of the handle vertices are applied to v_k. Then v_u and v_k are used as the "initial guess" (only the selected handle's positions are changed).
(3) Compare the original v_u/v_k with the guess from (2) and compute a rotation R_i for each vertex i. R_i is obtained from the SVD of the covariance matrix P_i * D_i * P'_i, where P_i holds all of cell i's edge vectors and D_i is a diagonal matrix containing the weights w_ij.
Question: here I use the original v_u/v_k to compute P_i, and the initial guess to compute P'_i; is this correct?
(4) Then solve a linear system L p' = b. L p' is simply the Laplacian-Beltrami operator applied to p' (but without the area coefficient); the i-th row of b is sum_j(w_ij/2 * (R_i + R_j) * e_ij) for cell i.
Question: can we use Cholesky to solve this linear system as well? Right now I'm using v_u = solver.solve(-L_uk*v_k + b); is this correct?
(5) After solving L p' = b, before carrying on with the next iteration, update the guess with the new v_u (update P'_i but not P_i; is that correct?)
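For step (3) above, the per-vertex rotation fit can be sketched with Eigen's SVD. This is a sketch under the assumption that the 3x3 covariance S_i = P_i * D_i * P'_i^T has already been assembled; the determinant check turns a possible reflection into a proper rotation:

```cpp
#include <Eigen/Dense>

// Fit the ARAP rotation R_i from the 3x3 covariance S_i.
Eigen::Matrix3d fit_rotation(const Eigen::Matrix3d& S) {
    Eigen::JacobiSVD<Eigen::Matrix3d> svd(S, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
    if (R.determinant() < 0) {
        // Reflection case: flip the column of the smallest singular value.
        Eigen::Matrix3d V = svd.matrixV();
        V.col(2) *= -1.0;
        R = V * svd.matrixU().transpose();
    }
    return R;
}
```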
Up to 3 submissions will be accepted. Post a snapshot showing several iterations of the algorithm and your code here: https://classroom.github.com/assignment-invitations/6149ec28d392c794b2c98a6c87104963
Extending the material presented in "IK-ICP", develop an algorithm that performs the alignment of the blue segment structure to the yellow point set (yes, you'll have to create this data on your own). Implement the Point-to-Point ICP as well as the Point-to-Plane variant, and then evaluate their respective convergence speeds.
Some helpful information can be found in this paper:
https://www.math.ucsd.edu/~sbuss/ResearchWeb/ikmethods/iksurvey.pdf
(remember this is a bonus, so you are expected to work independently)
@chongbingbao and @xuzheng0927 please change the repository settings so that they are private:
https://github.com/chongbingbao/dgp
https://github.com/xuzheng0927/dgp
Thanks
In the class matlab folder you can find ex_delaunay.m, which realizes the algorithm described on this slide:
The two tasks for this bonus question are (1% each):
The results of the two steps should be analogous to the two bottom images in this figure:
Note: if you intend to attempt working on this bonus, please declare your intention by replying to this issue. Only one correct implementation will be accepted.
In hw3 part 2, I tried using the Laplacian-Beltrami weight matrix WITHOUT the 1/(2*A_i) coefficients, and it gave me the correct output (the original function in the code base uses the area coefficients, with which I always got exploding points).
I also noticed that in ARAP the weight is without 1/(2*A_i) as well. Should it be removed?
I've been tortured by the poor documentation of the Eigen library. How can I get the decomposed matrix L after running solver.compute(L_uu)? I tried auto new_p = solver.compute(L_uu), but it doesn't work at all, and I can't find any example.
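If it helps: with Eigen's sparse Cholesky solvers, compute() returns a reference to the solver itself, not a solution, which is why auto new_p = solver.compute(L_uu) doesn't give you anything useful. You don't extract the factored matrix; the factorization lives inside the solver and you call solve() with a right-hand side. A sketch, assuming double scalars:

```cpp
#include <Eigen/Sparse>
#include <Eigen/Dense>

void solve_example(const Eigen::SparseMatrix<double>& L_uu,
                   const Eigen::MatrixXd& rhs, Eigen::MatrixXd& v_u) {
    Eigen::SimplicialLDLT<Eigen::SparseMatrix<double>> solver;
    solver.compute(L_uu);                         // factorization stored in 'solver'
    if (solver.info() != Eigen::Success) return;  // factorization failed
    v_u = solver.solve(rhs);                      // reuse the factorization per solve
}
```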
A 1% bonus to the first person who replies to this message with a correct Matlab implementation (you can use the same sample data as for the other LS exercise) of what was described in class (and in the image below).
You should generate both of the highlighted images and paste them in your reply.
You can embed images and code directly in the message, google to find out how.
If you post a solution that is not correct and a fellow student fixes it, it is considered fair game.
Therefore, be absolutely sure your solution is correct.
Would it be a reasonable approach to keep the base (smoothed) mesh in memory and apply the deformation & high-frequency details every step? Essentially, separating the low and high frequencies only during the initialization, not for every frame.
I've been reading the textbook and the slides and they present the multiscale deformation process as one deformation, whereas the assignment is to make an interactive app, so a series of deformations. I looked for similar papers and implementations but didn't have any luck apart from [Zorin 1997], which seems similar to what I'm suggesting, but I'm not confident yet.
Is there an example explaining how this works? I'm reading the slides, but they only have this brief description:
– Assign each stripe a unique light code
– Project several b/w patterns over time
– Color pattern identifies row/column
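The "unique light code" idea is usually realized with binary or Gray codes: projecting ceil(log2(n)) black/white patterns over time gives each of n stripe positions a unique bit sequence, and Gray codes make adjacent stripes differ in only one bit, which is robust to decoding errors at stripe boundaries. A small self-contained sketch (my helper names, not from the slides):

```cpp
#include <cstdint>

// Gray code of index i: adjacent indices differ in exactly one bit.
uint32_t gray(uint32_t i) { return i ^ (i >> 1); }

// Whether stripe (column) 'i' is white in pattern number 'bit':
// pattern 'bit' simply displays bit 'bit' of each stripe's Gray code.
bool stripe_is_white(uint32_t i, unsigned bit) {
    return (gray(i) >> bit) & 1u;
}

// Decoding: the bit sequence observed at a camera pixel over time is the
// Gray code of its stripe index; invert it to recover the index.
uint32_t gray_to_index(uint32_t g) {
    uint32_t i = g;
    for (uint32_t shift = 1; shift < 32; shift <<= 1) i ^= i >> shift;
    return i;
}
```

Projecting the patterns for one axis identifies the row, a second set identifies the column.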
@nlguillemot I found some issues in Laplacian.h, starting from line 87. Please kindly see the comments:
cotanAlpha = 1.0f / std::tan(alpha);
cotanBeta = 1.0f / std::tan(beta);
// Should get rid of minus values
omegaList.push_back(Triplet(v_i.idx(), v_j.idx(), -1)); // Should push in (cotanAlpha+cotanBeta) ?
cotanSum += cotanAlpha + cotanBeta;
area += (1 / 6.0f) * (d_ij.cross(d_ia)).norm();
degree++;
}
omegaList.push_back(Triplet(v_i.idx(), v_i.idx(),
(Scalar)degree)); // Should push in -cotanSum?
areaList.push_back(Triplet(v_i.idx(), v_i.idx(),
1.0f / (2.0f * area)));
}
L_omega.setFromTriplets(omegaList.begin(), omegaList.end());
Area.setFromTriplets(areaList.begin(), areaList.end());
return Area * L_omega;
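On the "minus values" comment: cot(alpha) is commonly computed as dot(u, v)/|u x v| from the two edge vectors emanating from the opposite vertex, and many implementations clamp the resulting edge weight at a small positive floor so obtuse triangles can't make the weight negative. A minimal self-contained sketch (my own helper names, not from Laplacian.h, and the clamp is a common choice rather than necessarily what the assignment expects):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
double norm(const Vec3& a) { return std::sqrt(dot(a, a)); }

// cot of the angle at vertex c in triangle (a, b, c).
double cotan_at(const Vec3& c, const Vec3& a, const Vec3& b) {
    Vec3 u = sub(a, c), v = sub(b, c);
    return dot(u, v) / norm(cross(u, v));
}

// Cotan weight for edge (i, j) with opposite vertices oa and ob,
// clamped below to avoid negative weights on obtuse triangulations.
double cotan_weight(const Vec3& i, const Vec3& j, const Vec3& oa, const Vec3& ob) {
    double w = 0.5 * (cotan_at(oa, i, j) + cotan_at(ob, i, j));
    return std::max(w, 1e-8);
}
```

With this convention the diagonal entry would be minus the sum of the off-diagonal weights of the row, which matches the "-cotanSum" suggestion in the comments above.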
I simply have:
Eigen::SparseMatrix<double> G(cloud.n_vertices(), cloud.n_vertices());
When I ran make, I got:
error: no template named 'SparseMatrix' in namespace 'Eigen'; did you mean 'SparseMatrixBase'?
Is the Eigen library in the repo an old version?
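That error usually doesn't indicate the Eigen version: Eigen::SparseMatrix lives in the Sparse module, which is not pulled in by <Eigen/Dense>. If the repo's Eigen is reasonably recent, adding the include should be all that's needed:

```cpp
#include <Eigen/Sparse>   // declares Eigen::SparseMatrix

Eigen::SparseMatrix<double> G(100, 100);  // sizes here are placeholders
```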
The slides look a bit brief, so I'm not sure if I'm following the right logic. In the function factor_matrices, I'm doing:
1. Get the Laplacian matrix (either uniform or Beltrami).
2. Multiply the Laplacian matrix by itself to get the squared Laplacian.
3. Permute the (squared) Laplacian matrix by multiplying it with permute (permute the rows).
4. Get block(0,0,u,u) as L_uu and block(0,u,u,k) as L_uk.
5. Permute the vertex matrix, then take the first u vertices as v_u and the rest as v_k.
6. Use the Cholesky solver to compute L_uu.
Then, while the mouse is being dragged, I'm doing:
1. Compute the current barycenter of the selected handle, and the displacement between it and the cursor position.
2. Translate all the selected handle's vertices by adding the displacement to v_k (only for the selected handle's vertices).
3. Use the Cholesky solver to solve -L_uk*v_k, then save the result as the new v_u.
4. Write v_u and v_k back as the new vertices via vertices_matrix(mesh) << v_u, v_k.
Is the above logic correct? Right now I'm only getting a pile of exploding points, and I wonder what is going wrong.
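For comparison, here is how the factor/solve steps above might look with Eigen. This is a sketch under my own assumptions about the layout (after permutation, the u unknown vertices come first, then the k constrained ones); note that it permutes both rows *and* columns symmetrically, since permuting only the rows destroys the symmetry that Cholesky relies on:

```cpp
#include <Eigen/Dense>
#include <Eigen/Sparse>

struct DeformSolver {
    Eigen::SimplicialLDLT<Eigen::SparseMatrix<double>> chol;
    Eigen::SparseMatrix<double> L_uk;

    // P: permutation putting the u unknown vertices first.
    bool factor(const Eigen::SparseMatrix<double>& L,
                const Eigen::PermutationMatrix<Eigen::Dynamic>& P,
                int u, int k) {
        Eigen::SparseMatrix<double> L2 = (L * L).pruned();       // bi-Laplacian
        Eigen::SparseMatrix<double> Lp = P * L2 * P.transpose(); // symmetric permutation
        Eigen::SparseMatrix<double> L_uu = Lp.block(0, 0, u, u);
        L_uk = Lp.block(0, u, u, k);
        chol.compute(L_uu);
        return chol.info() == Eigen::Success;
    }

    // v_k: (k x 3) constrained positions -> (u x 3) free positions.
    Eigen::MatrixXd solve(const Eigen::MatrixXd& v_k) const {
        return chol.solve(Eigen::MatrixXd(-L_uk * v_k));
    }
};
```

Checking chol.info() after factoring is worth doing: exploding points are often a failed factorization rather than a wrong solve.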
To all the folks still working on it,
be aware that numerical problems can arise when computing the dot product between two normals.
Vec3 a = some_normal.normalized(); //forcing a to be a unit vector
Vec3 b = another_normal.normalized(); //forcing b to be a unit vector
Scalar dot = a.dot(b);
dot can actually be greater than 1 due to numerical errors, something like 1.00000012.
This could/will affect your normal orientation in Part 3, and definitely in Part 4.
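A concrete guard for this: clamp the dot product before feeding it to std::acos (a value of 1.00000012 would otherwise yield NaN):

```cpp
#include <algorithm>
#include <cmath>

// Clamp the dot product of two (nominally unit) vectors into [-1, 1]
// before acos, so accumulated rounding error cannot produce NaN.
double safe_angle(double dot) {
    double d = std::max(-1.0, std::min(1.0, dot));
    return std::acos(d);
}
```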
"D_ii=0 if i belongs to the constraints; 0 otherwise"
I think should be "D_ii=1..."
That's why my output was always incorrect :(
Things are getting really odd on my side. The algorithm should be very simple:
Get the Laplacian matrix L of the mesh, then square it (L * L), and permute the columns and then the rows so that the unknown vertex coefficients are in the upper-left "quarter" (L_uu or L_11). Then use solver.compute(L_uu). When doing the deformation, get the handles' new positions v_k, then get v_u by solver.solve(-L_uk*v_k).
I'm getting the right output using the graph Laplacian (uniform), but the output using Laplacian-Beltrami is not right:
Can anyone simply show your Laplacian matrix values if you get the right result? Thanks!
I can't get the bi-Laplacian to work in part 1 with the Laplace-Beltrami matrix. (I queried whether the solver successfully factorized the matrix using solver.info(); I also printed out some coefficients, which were all 0.) Is this because the test meshes are both planar?
Should I test part 1 on a different mesh, or is it okay to just use the graph Laplacian for part 1?
And I haven't started coding part 2, but it seems I might run into the same problem with the bi-Laplacian when smoothing the mesh?
(I don't think it has to do with L being non-symmetric (L = DM), and my D^-1 matrix looks fine.)
I don't know if I'm doing it right or not.
The links to the Geomorph video and the reference [4] PDF redirect to the dgp wiki.
I implemented the RBF part and it works perfectly on the sphere obj, but not as expected on the face obj (while Hoppe works well on the face). So I wonder if I misunderstood something. Are the centers (1) the cloud points, plus (2) the cloud points offset by epsilon*(their normals)? If this is correct, what could be the reason? Should I change epsilon?
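For comparison, in the formulation I know (Carr et al.-style RBF reconstruction), the constraint set is both: one zero-valued center per cloud point, plus one off-surface center per point at p + eps*n with value eps. A sketch of that constraint setup (names are mine, not from the handout; eps is the tunable parameter you mention):

```cpp
#include <array>
#include <vector>

using Vec3 = std::array<double, 3>;

struct RBFConstraints {
    std::vector<Vec3> centers;
    std::vector<double> values;
};

// On-surface constraints f(p) = 0 and off-surface constraints
// f(p + eps*n) = eps; both point sets become RBF centers.
RBFConstraints build_constraints(const std::vector<Vec3>& points,
                                 const std::vector<Vec3>& normals,
                                 double eps) {
    RBFConstraints c;
    for (std::size_t i = 0; i < points.size(); ++i) {
        c.centers.push_back(points[i]);   // on-surface: value 0
        c.values.push_back(0.0);
        Vec3 off = {points[i][0] + eps * normals[i][0],
                    points[i][1] + eps * normals[i][1],
                    points[i][2] + eps * normals[i][2]};
        c.centers.push_back(off);         // off-surface: value eps
        c.values.push_back(eps);
    }
    return c;
}
```

If the face mesh has a very different scale or noisy normals, a fixed eps tuned for the sphere can push off-surface centers through nearby geometry, which would match the symptoms you describe.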
Hw3 is new this semester and the trace code is not super-polished yet. Create super-high-quality code + UI for use in future DGP courses, and redeem these bonus points! Only the first submission will be accepted.
This is how you separate parts to be implemented from trace code:
#ifdef STRIP_CODE
/// TASK: description of task goes here
#else
/// C++ code goes here
#endif