kaanaksit / odak
Scientific computing library for optics, computer graphics and visual perception
Home Page: https://kaanaksit.com/odak
License: Mozilla Public License 2.0
I believe there is a bug in the GS PyTorch version. Shouldn't the amplitude be set from reconstruction rather than hologram?
The X and Y axes for the quadratic phase function should be corrected. Both implementations, under odak.learn.wave.lens and odak.wave.lens, should be updated.
Currently, some methods in the learn submodule use external functions for some operations. The current list of these functions is as follows:
- fftshift in toolkit.py
- ifftshift in toolkit.py
These functions are included in the fft module of PyTorch 1.8.0. We can switch to the built-in versions once PyTorch 1.8.0 becomes stable.
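Until then, a roll-based shift matches the library behavior; here is a minimal sketch in NumPy (torch.fft.fftshift in PyTorch 1.8.0 follows the same convention):

```python
import numpy as np

def fftshift_manual(field):
    # Shift the zero-frequency component to the center by rolling
    # each axis by half its length, as np.fft.fftshift does.
    shifts = [dim // 2 for dim in field.shape]
    return np.roll(field, shifts, axis=tuple(range(field.ndim)))

field = np.arange(16).reshape(4, 4)
shifted = fftshift_manual(field)
```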
According to this paper, although the bandlimited angular spectrum method is valid over a wider propagation range than the angular spectrum method, its accuracy decreases when the propagation distance is larger than 40 times the wavelength. The same paper explains the reasoning as follows: the band limits of the anti-aliasing filter are chosen according to the propagation distance of a beam, and at large distances the chosen limits degrade the accuracy of the calculation. This paper claims superior accuracy with respect to the rigorous solution; replicating it would be helpful in the long run.
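For illustration, the distance-dependent band limit from the original band-limited angular spectrum paper (Matsushima and Shimobaba, 2009) can be sketched as below; the sampling assumption (a frequency step of 1 / (n * pixel_pitch)) is mine:

```python
import numpy as np

def bandlimit_frequency(wavelength, distance, n, pixel_pitch):
    # Band limit u_limit = 1 / (lambda * sqrt((2 * du * z)^2 + 1)),
    # with du the frequency sampling interval; the limit shrinks with distance.
    delta_u = 1.0 / (n * pixel_pitch)
    return 1.0 / (wavelength * np.sqrt((2.0 * delta_u * distance) ** 2 + 1.0))

near_limit = bandlimit_frequency(515e-9, 0.05, 512, 8e-6)
far_limit = bandlimit_frequency(515e-9, 0.5, 512, 8e-6)
```

The shrinking limit with distance is exactly the accuracy trade-off the paper describes.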
Odak has two functions to load images as NumPy and Torch variables: odak.tools.load_image and odak.learn.tools.load_image. Given that many of the recipes within Odak use values normalized between zero and one, it makes sense to load images in a normalized way by default rather than representing them in 8-bit pixel depth.
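The normalization step itself is a one-liner; a minimal sketch (the helper name is hypothetical, not Odak's API):

```python
import numpy as np

def normalize_8bit(image):
    # Map an 8-bit image (values 0..255) to floats in [0, 1].
    return image.astype(np.float64) / 255.0

image = np.array([[0, 128, 255]], dtype=np.uint8)
normalized = normalize_8bit(image)
```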
While propagating coherent optical beams with Odak using odak.wave.propagate_beam or odak.learn.wave.propagate_beam, we often have to zero pad the input field in space first and then crop the propagated field in space afterwards. Rather than repeating this pattern, it makes sense to add a flag variable as an input to the beam propagation functions so that we avoid rewriting the same code in future projects.
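The pad-then-crop pattern the flag would replace looks roughly like this (function names are illustrative, not Odak's API):

```python
import numpy as np

def zero_pad(field):
    # Pad to twice the size, centering the original field.
    ny, nx = field.shape
    padded = np.zeros((2 * ny, 2 * nx), dtype=field.dtype)
    padded[ny // 2:ny // 2 + ny, nx // 2:nx // 2 + nx] = field
    return padded

def crop_center(field, shape):
    # Crop the central region back to the given shape.
    ny, nx = field.shape
    cy, cx = shape
    y0 = (ny - cy) // 2
    x0 = (nx - cx) // 2
    return field[y0:y0 + cy, x0:x0 + cx]

field = np.ones((4, 4), dtype=complex)
round_trip = crop_center(zero_pad(field), field.shape)
```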
Odak supports torch-based hologram calculation routines. odak.learn.wave lacks a definition that can calculate holograms using Stochastic Gradient Descent. The latest and greatest in combining torch and odak is on a branch called torch_1_8_0. At the time of writing this issue, any development should go on top of that branch.
Required steps:
- Add a stochastic_gradient_descent definition to odak.learn.wave.classical.
- Add a test to the test folder for the newly added stochastic_gradient_descent definition.

I noticed that most (if not all) of the steps in the Contributing wiki can be automated via pre-commit. Having pre-commit would make that process much easier and enforce that developers follow coding conventions.
I'd be more than happy to take this up.
Hello,
I intended to use the __call__ function of the display_color_hvs class in odak.learn.perception.color_conversion.py. However, since the self.rgb_to_lms function is deprecated, I encountered the following error:
lms_image_second = self.rgb_to_lms(input_image.to(self.device))
AttributeError: 'display_color_hvs' object has no attribute 'rgb_to_lms'
To reproduce the issue, I have prepared an isolated example below:
import torch
from PIL import Image
from torchvision import transforms
from odak.learn.perception.color_conversion import display_color_hvs
image_path = 'data/parrot.png'
preprocess = transforms.Compose([
    transforms.ToTensor(),
])
image = Image.open(image_path)
tensor_image = preprocess(image)
dc_hvs = display_color_hvs(read_spectrum = 'default')
loss = dc_hvs(tensor_image, tensor_image)
With the current design, is there a workaround to obtain equivalent functionality?
The current ray drawing function, odak.visualize.draw_a_ray, works with a single ray. Odak needs a new function, or an update to this one, to save multiple rays at once.
Currently, the beam propagation functions of odak.wave and odak.learn.wave give different outputs for the same input field.
As discussed in #7 and #10, these are some of the functions that we can compare:
- odak.wave.set_amplitude and odak.learn.set_amplitude,
- fftn and ifftn in torch with respect to fft2 and ifft in numpy and cupy.
An example output for comparison from test_learn_beam_propagation.py with the Bandlimited Angular Spectrum method:
AssertionError:
Arrays are not almost equal to 3 decimals
Mismatched elements: 250000 / 250000 (100%)
Max absolute difference: 9.45276863
Max relative difference: 129.77451864
x: array([[-301.293 +2.111j, 317.095 -225.884j, -209.758 +613.112j, ...,
232.588 +45.735j, 250.056 +82.582j, 181.349 +147.647j],
[-239.635 -60.857j, -221.058 +125.061j, 11.215 -475.677j, ...,...
y: array([[-294.892 -1.521j, 310.806 -225.954j, -204.181 +611.055j, ...,
230.09 +46.909j, 252.016 +83.05j , 176.521 +149.347j],
[-245.039 -56.122j, -215.765 +124.03j , 6.634 -472.519j, ...,...
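As a quick sanity check while comparing, NumPy's fft2 and fftn agree exactly on 2-D inputs, so any mismatch between the numpy and torch paths is more likely to come from precision, shift conventions, or amplitude handling than from the transform choice itself:

```python
import numpy as np

# fft2 is fftn restricted to the last two axes; on a 2-D array they coincide.
rng = np.random.default_rng(0)
field = rng.random((8, 8)) + 1j * rng.random((8, 8))
difference = np.abs(np.fft.fft2(field) - np.fft.fftn(field)).max()
```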
If I understand it right, in order to calculate the major_axis of the ellipse here
https://github.com/kaanaksit/odak/blob/master/odak/learn/perception/foveation.py#L130
a multiplication is required according to simple trigonometry:
major_axis = (torch.tan(angle_max) - torch.tan(angle_min)) * real_viewing_distance
But a division is used in your code.
I am wondering whether this is a bug or whether I have misunderstood your code.
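The geometry behind the question, with assumed numeric values for the viewing distance and angles:

```python
import math

# A point at eccentricity angle theta, viewed from distance d, projects onto
# the screen at d * tan(theta); the extent between two angles is the difference
# of the two projections, hence a multiplication by the distance.
real_viewing_distance = 0.6  # meters (assumed)
angle_min = math.radians(5.0)
angle_max = math.radians(10.0)
major_axis = (math.tan(angle_max) - math.tan(angle_min)) * real_viewing_distance
```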
Holographic calculation and Sommerfeld diffraction integration take a lot of time. We could parallelize the calculation to speed it up, but I do not know how to do it.
The paper titled Reconstruct Holographic 3D Objects by Double Phase Hologram describes a phase-only hologram calculation method using double phase information. It can be realized either with two phase-only Spatial Light Modulators (SLMs) or with an interferometric setup that modulates the input field twice with one phase-only SLM. Adding support for this method to Odak is a good idea, as it has been used in the literature in recent years.
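The core decomposition splits a normalized complex field into two phase-only components whose average reproduces it; a minimal sketch, not the paper's full pipeline:

```python
import numpy as np

def double_phase_decomposition(field):
    # A * exp(i phi) = 0.5 * (exp(i (phi + theta)) + exp(i (phi - theta)))
    # with theta = arccos(A), for amplitudes A normalized to [0, 1].
    amplitude = np.abs(field) / np.abs(field).max()
    phase = np.angle(field)
    offset = np.arccos(amplitude)
    return phase + offset, phase - offset

rng = np.random.default_rng(1)
field = rng.random((4, 4)) + 1j * rng.random((4, 4))
phase_1, phase_2 = double_phase_decomposition(field)
reconstruction = 0.5 * (np.exp(1j * phase_1) + np.exp(1j * phase_2))
```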
Implementation targets:
- odak/wave/classical.py,
- odak/learn/classical.py,
- test cases in the test folder for both implementations.

This paper introduces a beam propagation method that is good for large distances but not for short throws. It might be worth adding it to the library. The major benefit is that it does not need any zero padding, unlike the Angular Spectrum method.
This issue will track the work on migrating code from @askaradeniz that implements Gerchberg-Saxton to Odak. There are two veins to this migration. One of them is migrating the code in a way that is suitable for Numpy and Cupy. The second deals with the torch implementation, which I believe @askaradeniz can immediately initiate as his code is already applicable to the torch case.
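For reference, the Gerchberg-Saxton loop being migrated has this general shape (a minimal sketch using a single FFT as the propagation model, which is my simplification):

```python
import numpy as np

def gerchberg_saxton_sketch(target_amplitude, iterations=30):
    # Iterate between hologram and image planes: enforce unit amplitude
    # at the hologram plane and the target amplitude at the image plane.
    rng = np.random.default_rng(0)
    field = np.exp(1j * 2 * np.pi * rng.random(target_amplitude.shape))
    for _ in range(iterations):
        reconstruction = np.fft.fft2(field)
        reconstruction = target_amplitude * np.exp(1j * np.angle(reconstruction))
        field = np.exp(1j * np.angle(np.fft.ifft2(reconstruction)))
    return np.angle(field)

target = np.zeros((16, 16))
target[6:10, 6:10] = 1.0
hologram_phase = gerchberg_saxton_sketch(target)
```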
Implementation targets:
- odak/wave/classical.py,
- odak/learn/classical.py.
We will also add test cases to the test folder for both methods.
I want to drop dependencies as much as possible. The current list of dependencies can be found here. This issue will track the progress towards dropping dependencies.
This issue will track the implementation of the Wirtinger hologram generation routine described in the Wirtinger holography for near-eye displays paper published by Chakravarthula et al. (@praneethc) at SIGGRAPH 2019. Here is the link for a copy of that paper. There are two veins to this migration. One of them is migrating the code in a way that is suitable for Numpy and Cupy. The second deals with the torch implementation.
Implementation targets:
- odak/wave/classical.py,
- odak/learn/classical.py.
Test cases will also be added to the test folder for both methods.
Fraunhofer beam propagation simulations are essential when working with the far field. Odak needs a verified Fraunhofer beam propagation code; the current one exists here:
@praneethc stated that he can help with providing a verified Fraunhofer beam propagation code. His future code can replace the one provided above.
There is also a test routine that can be used for testing Fraunhofer beam propagation code. Once implemented, the line below in this script can be changed from IR Fresnel to Fraunhofer, and the code can be tested this way.
https://github.com/kunguz/odak/blob/59a1c71978d677edd2bf611c30bb497833348fba/test/test_beam_propagation.py#L12
To visualize the outcome these following lines can be uncommented within the same test script:
https://github.com/kunguz/odak/blob/59a1c71978d677edd2bf611c30bb497833348fba/test/test_beam_propagation.py#L36-L43
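For orientation, Fraunhofer propagation amounts to a scaled Fourier transform of the input field. Below is a minimal sketch under assumed sampling conventions, not the verified routine this issue asks for:

```python
import numpy as np

def propagate_fraunhofer_sketch(field, wavelength, distance, pixel_pitch):
    # Far-field (Fraunhofer) pattern: Fourier transform of the aperture field,
    # multiplied by a quadratic phase and a 1 / (i lambda z) scaling factor.
    n = field.shape[0]
    x = (np.arange(n) - n / 2) * wavelength * distance / (n * pixel_pitch)
    X, Y = np.meshgrid(x, x)
    k = 2 * np.pi / wavelength
    quadratic_phase = np.exp(1j * k * (X ** 2 + Y ** 2) / (2 * distance))
    spectrum = np.fft.fftshift(np.fft.fft2(np.fft.fftshift(field)))
    return quadratic_phase * spectrum / (1j * wavelength * distance)

aperture = np.zeros((64, 64), dtype=complex)
aperture[24:40, 24:40] = 1.0  # square aperture, yields a sinc-like far field
result = propagate_fraunhofer_sketch(aperture, 515e-9, 1.0, 8e-6)
```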
Recently, a beam propagator was added to Odak, containing Fresnel Impulse Response (FIR), Fresnel Transfer Function (FTF), and Fraunhofer propagation routines. They are all added to the odak.wave.classical script that can be found here.
A test routine was also added accordingly, as in here.
The outcome of the test routine can be visualized as follows:
Initial field:
Propagated field:
Back propagated field at the source:
and here is a visualization of the same code with a sample image:
Initial field:
Propagated field:
Back propagated field at the source:
It would be useful to have a beam propagation function written in PyTorch for deep learning.
We can rewrite the Fresnel and Fraunhofer beam propagators defined here with torch.
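A minimal sketch of a Fresnel transfer-function propagator in torch, under my own sampling assumptions (not Odak's signature):

```python
import math
import torch

def propagate_fresnel_tf(field, wavelength, distance, pixel_pitch):
    # Fresnel transfer function H(fx, fy) = exp(-i pi lambda z (fx^2 + fy^2)),
    # applied in the frequency domain (the global exp(i k z) phase is omitted).
    n = field.shape[-1]
    fx = torch.fft.fftfreq(n, d=pixel_pitch)
    FX, FY = torch.meshgrid(fx, fx, indexing='ij')
    H = torch.exp(-1j * math.pi * wavelength * distance * (FX ** 2 + FY ** 2))
    return torch.fft.ifft2(torch.fft.fft2(field) * H)

field = torch.ones(8, 8, dtype=torch.complex64)
result = propagate_fresnel_tf(field, 515e-9, 0.1, 8e-6)
```

Because H has unit magnitude, this propagator conserves energy, which makes a convenient unit test.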
odak needs a lens phase function generation routine to generate phase patterns for various lenses (quadratic phase functions). The steps required to implement it are as follows:
- Add the routine to odak, under odak.wave, specifically in here.
- Add test_lens_phase.py to the test folder found in here. Other test routines such as this one can help in writing a test routine for this case.
- odak using this link.
Do not hesitate to ask any questions during your implementation.
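A quadratic (thin-lens) phase function has a standard closed form; a minimal sketch with assumed parameter names:

```python
import numpy as np

def quadratic_phase_function(resolution, wavelength, focal_length, pixel_pitch):
    # Thin-lens phase: phi(x, y) = -k (x^2 + y^2) / (2 f), with k = 2 pi / lambda.
    k = 2 * np.pi / wavelength
    x = (np.arange(resolution) - resolution / 2) * pixel_pitch
    X, Y = np.meshgrid(x, x)
    phase = -k * (X ** 2 + Y ** 2) / (2 * focal_length)
    return np.exp(1j * phase)

lens = quadratic_phase_function(256, 515e-9, 0.1, 8e-6)
```

Note that the meshgrid ordering determines which axis is X and which is Y, which is exactly where axis mix-ups tend to occur.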
Double phase encoded hologram generation routine is missing.
A definition is needed to find the nearest point on a geometric ray with respect to another geometric ray. Praneeth Chakravarthula provided this piece of code in the past for this purpose:
import numpy as np

def findNearestPoints(vec1, vec2, ray):
    # written by praneeth chakravarthula
    # Refer to the concept of skew lines and line-plane intersection
    # for the following math.
    p1 = vec1[0].reshape(3,)
    d1 = vec1[1].reshape(3,)
    p2 = vec2[0].reshape(3,)
    d2 = vec2[1].reshape(3,)
    # normal to both direction vectors
    n = np.cross(d1, d2)
    if np.all(n == 0):
        # directions are parallel; fall back to the intersection routine
        point, distances = ray.CalculateIntersectionOfTwoVectors(vec1, vec2)
        c1 = c2 = point
    else:
        # normal to the plane formed by vectors n and d1
        n1 = np.cross(d1, n)
        # normal to the plane formed by vectors n and d2
        n2 = np.cross(d2, n)
        # nearest point to vec2 along vec1 is the intersection of vec1
        # with the plane formed by vec2 and normal n
        c1 = p1 + (np.dot((p2 - p1), n2) / np.dot(d1, n2)) * d1
        # nearest point to vec1 along vec2 is the intersection of vec2
        # with the plane formed by vec1 and normal n
        c2 = p2 + (np.dot((p1 - p2), n1) / np.dot(d2, n1)) * d2
    return c1, c2
The raytracing submodule can take advantage of this definition. The ask is to place this piece of code in ray.py, found at odak/raytracing/ray.py in Odak. The steps here:
- Add the definition to odak/raytracing/ray.py.
Thanks!