
tensorflow-nufft's Issues

NaNs randomly pop up in computations

I occasionally observe NaNs appearing in my computations.

I have stopped my runs with conditional breakpoints, re-run the specific functions (tfft.nufft, tfft.spread, etc.), and observed the NaNs recur once or twice. However, if I repeat the computations, I usually don't get NaNs again.

For now, I am writing my own wrapper to recompute whenever this occurs (a minimal sketch is below), but I suspect this could be something bigger, like a missing __syncthreads() or cudaDeviceSynchronize().

Sadly, given how random this issue is, I don't have a repro. I will see if I can set one up. For now this is more of a check: have you faced this issue too, and if so, do we have any idea what causes it?

I would love to debug deeper; is there a recommended gdb or cuda-gdb setup?
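For reference, here is a minimal sketch of the kind of recomputation wrapper I mean, assuming the public tfft.nufft call used elsewhere in this thread; the NaN check and retry count are illustrative, and it only works around the symptom in eager mode rather than fixing the underlying race:

import tensorflow as tf
import tensorflow_nufft as tfft

def nufft_with_retry(source, points, max_retries=3, **kwargs):
  # Eager-mode workaround: recompute the NUFFT if the result contains NaNs.
  result = tfft.nufft(source, points, **kwargs)
  for _ in range(max_retries):
    has_nan = tf.reduce_any(tf.logical_or(tf.math.is_nan(tf.math.real(result)),
                                          tf.math.is_nan(tf.math.imag(result))))
    if not bool(has_nan):
      break
    result = tfft.nufft(source, points, **kwargs)
  return result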

Running NUFFT in parallel causes a seg fault in CPU kernel

The following code reproduces the problem:

import numpy as np
import tensorflow as tf
from tensorflow_nufft.python.ops import nufft_ops  # import assumed for the nufft_ops alias used below

with tf.device('/cpu:0'):
  rank = 2
  num_points = 20000
  grid_shape = [128] * rank
  batch_size = 100
  rng = tf.random.Generator.from_seed(10)
  points = rng.uniform([batch_size, num_points, rank], minval=-np.pi, maxval=np.pi)
  source = tf.complex(tf.ones([batch_size, num_points]),
                      tf.zeros([batch_size, num_points]))
  @tf.function
  def parallel_nufft_adjoint(source, points):
    def nufft_adjoint(inputs):
      src, pts = inputs
      return nufft_ops.nufft(src, pts, grid_shape=grid_shape,
                             transform_type='type_1',
                             fft_direction='backward')
    return tf.map_fn(nufft_adjoint, [source, points],
                     parallel_iterations=4,
                     fn_output_signature=tf.TensorSpec(grid_shape, tf.complex64))
  
  result = parallel_nufft_adjoint(source, points)

Still facing NaN issues!

Even after a lot of effort, we still seem to face NaN issues, and sadly this seems to be very erratic and random. I don't have a repro test at all...
From what I see, I think this issue is only present in graph mode. @jmontalt does that hint at any possible cause? Are there separate code paths for eager mode and graph mode?

I will use this issue to track what I observe; a minimal eager-vs-graph check is sketched below.
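This is how I would narrow it down, assuming the same tfft.nufft signature as in the parallel-NUFFT repro above (shapes, seed and the type-2 call are illustrative): the same transform is executed eagerly and inside a tf.function, and each result is checked for NaNs.

import numpy as np
import tensorflow as tf
import tensorflow_nufft as tfft

rng = tf.random.Generator.from_seed(0)
points = rng.uniform([20000, 2], minval=-np.pi, maxval=np.pi)
grid = tf.complex(rng.normal([128, 128]), rng.normal([128, 128]))

def has_nan(x):
  return bool(tf.reduce_any(tf.logical_or(tf.math.is_nan(tf.math.real(x)),
                                          tf.math.is_nan(tf.math.imag(x)))))

# Same type-2 call, executed eagerly and then inside a tf.function.
eager_result = tfft.nufft(grid, points, transform_type='type_2')
graph_result = tf.function(tfft.nufft)(grid, points, transform_type='type_2')
print('eager NaNs:', has_nan(eager_result), '| graph NaNs:', has_nan(graph_result))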

Random "Failed to associate cuFFT plan with CUDA stream" errors?

I sometimes get the following error (again, it is quite random, and hence quite hard to debug):

Failed to associate cuFFT plan with CUDA stream: 1

Mostly arising from:

if (result != CUFFT_SUCCESS) {
  return errors::Internal(
      "Failed to associate cuFFT plan with CUDA stream: ", result);
}

It seems cuFFT returns CUFFT_INVALID_PLAN, which is error code 1. I am a bit clueless about where to start on this one. As usual, I will try to get a repro soon. @jmontalt have you faced this, and do you have any pointers on where to start?

I don't think the gradients with respect to `points` are right?

Hello,

Thank you for this package!
We had our own NUFFT implementation based on Kaiser-Bessel interpolation here: https://github.com/zaccharieramzi/tfkbnufft

However, this package, being based on CUDA and cuFINUFFT, is more efficient in terms of speed and memory.

However, I believe the gradient with respect to the trajectory is wrong. I remember debugging this exact same issue in tfkbnufft.

As can be seen in the lines below, we had to take the conjugates of dx and dy:

https://github.com/zaccharieramzi/tfkbnufft/blob/da8de17bc5cb738d11150662d0876bec9efb54d8/tfkbnufft/kbnufft.py#L200

https://github.com/zaccharieramzi/tfkbnufft/blob/da8de17bc5cb738d11150662d0876bec9efb54d8/tfkbnufft/kbnufft.py#L245-L246

Equivalently, in these lines of tensorflow-nufft:

  if transform_type == 'type_2':
    # print((tf.expand_dims(source, -(rank + 1)) * grid_points).shape, tf.expand_dims(points, -3).shape)
    grad_points = nufft(tf.expand_dims(source, -(rank + 1)) * grid_points,
                        tf.expand_dims(points, -3),
                        transform_type='type_2',
                        fft_direction=fft_direction,
                        tol=tol) * tf.expand_dims(grad, -2) * imag_unit

  if transform_type == 'type_1':
    grad_points = nufft(tf.expand_dims(grad, -(rank + 1)) * grid_points,
                        tf.expand_dims(points, -3),
                        transform_type='type_2',
                        fft_direction=fft_direction,
                        tol=tol) * tf.expand_dims(source, -2) * imag_unit

must be modified to:

  if transform_type == 'type_2':
    # print((tf.expand_dims(source, -(rank + 1)) * grid_points).shape, tf.expand_dims(points, -3).shape)
    grad_points = nufft(tf.expand_dims(source, -(rank + 1)) * grid_points,
                        tf.expand_dims(points, -3),
                        transform_type='type_2',
                        fft_direction=fft_direction,
                        tol=tol) * tf.expand_dims(tf.math.conj(grad), -2) * imag_unit

  if transform_type == 'type_1':
    grad_points = nufft(tf.expand_dims(tf.math.conj(grad), -(rank + 1)) * grid_points,
                        tf.expand_dims(points, -3),
                        transform_type='type_2',
                        fft_direction=fft_direction,
                        tol=tol) * tf.expand_dims(source, -2) * imag_unit

You could possibly use this test as a reference:

https://github.com/zaccharieramzi/tfkbnufft/blob/master/tfkbnufft/tests/ndft_test.py
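Alternatively, here is a minimal finite-difference sketch, independent of tfkbnufft, for checking the gradient with respect to points; shapes, tolerance and variable names are illustrative, not part of the library. Since the loss is real-valued and the points are real, the analytic gradient should match central finite differences up to the NUFFT tolerance.

import numpy as np
import tensorflow as tf
import tensorflow_nufft as tfft

rank = 2
grid = tf.complex(tf.random.normal([64, 64]), tf.random.normal([64, 64]))
points = tf.Variable(tf.random.uniform([50, rank], minval=-np.pi, maxval=np.pi))

def loss_fn(pts):
  # Type-2 transform: evaluate the grid at the nonuniform points.
  values = tfft.nufft(grid, pts, transform_type='type_2')
  return tf.reduce_sum(tf.abs(values) ** 2)

with tf.GradientTape() as tape:
  loss = loss_fn(points)
analytic = tape.gradient(loss, points)

# Perturb a single coordinate of a single point and compare.
eps = 1e-3
delta = tf.scatter_nd([[0, 0]], [eps], [50, rank])
numeric = (loss_fn(points + delta) - loss_fn(points - delta)) / (2 * eps)
print(float(analytic[0, 0]), float(numeric))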

I can make a PR if needed.

Allow user to specify `max_batch_size`?

Currently max_batch_size is calculated heuristically. While this is a great idea, it would also be good to give users the freedom to set it themselves, for better control over the trade-off between speed and GPU memory.
I can handle this PR if we agree it's needed, but not at the moment.
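In the meantime, here is a minimal sketch of chunking the batch manually, assuming the type-1 call signature used elsewhere in this thread. This is not the library's internal max_batch_size heuristic, only an external way to bound how much work is in flight at once; the chunk size plays the role a user-specified max_batch_size would.

import tensorflow as tf
import tensorflow_nufft as tfft

def chunked_nufft(source, points, grid_shape, chunk_size):
  # Process the leading batch dimension in chunks to trade speed for peak memory.
  outputs = []
  for start in range(0, source.shape[0], chunk_size):
    outputs.append(tfft.nufft(source[start:start + chunk_size],
                              points[start:start + chunk_size],
                              grid_shape=grid_shape,
                              transform_type='type_1',
                              fft_direction='backward'))
  return tf.concat(outputs, axis=0)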

Test case for NUFFT type 1 and real-valued data

Hi everyone,

First of all, thanks for the nice package. Now to my two points:

  1. I want to apply the NUFFT to transform atoms and features located on the atoms, and later apply it in my own work (e.g. https://www.nature.com/articles/s41467-022-30530-1). The interesting point for me is being able to systematically map the off-grid atoms to images and back while exploiting some advantages (e.g. convolution via multiplication in Fourier space, per-atom predictions, etc.). I constructed a test case with 100 atoms at random positions and random (real) values. When I do the transformation via NUFFT type 1 and the back-transformation via NUFFT type 2, the back-transformed values are too large. Rescaling by the per-sample maximum value fixes this somewhat, but it is not the proper way. I guess I am missing something about reweighting/rescaling, or something goes wrong in the code. I'd be grateful for some input; I think it might also make an interesting example case. (See the sketch after this list.)

Attached test case: test_tfnufft1.txt

  2. In my case the values are real-valued, and there are faster algorithms for real-valued FFTs than the general complex method. Is there any plan to include this in the module?
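Regarding the first point, here is a minimal sketch of the round trip (shapes are illustrative), under the assumption that the transforms are unnormalized, as in FINUFFT. In that case a type-1 followed by a type-2 is expected to return roughly the input scaled by the number of grid points, plus cross-talk between atoms, rather than the input itself.

import numpy as np
import tensorflow as tf
import tensorflow_nufft as tfft

num_atoms = 100
grid_shape = [64, 64]
points = tf.random.uniform([num_atoms, 2], minval=-np.pi, maxval=np.pi)
values = tf.complex(tf.random.uniform([num_atoms]), tf.zeros([num_atoms]))

# Atoms -> grid (type 1), then grid -> atoms (type 2), with opposite FFT signs.
grid = tfft.nufft(values, points, grid_shape=grid_shape,
                  transform_type='type_1', fft_direction='backward')
roundtrip = tfft.nufft(grid, points, transform_type='type_2',
                       fft_direction='forward')

# If the transforms are unnormalized, dividing by the number of grid points
# brings the round trip back to the scale of the input; the residual is
# cross-talk between atoms, since the NUFFT pair is not an exact inverse.
scale = tf.complex(float(np.prod(grid_shape)), 0.0)
print(tf.reduce_mean(tf.abs(roundtrip / scale - values)))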

Thanks and Regards
Stefan

Cannot pip install

I am trying to install the package via pip as described. I checked that my TensorFlow version is 2.10, which is supported. However, I keep getting the error message "ERROR: Could not find a version that satisfies the requirement tensorflow-nufft (from versions: none). ERROR: No matching distribution found for tensorflow-nufft".


How to specify the number of modes/frequencies?

Hi,

This isn't really an issue, I just have two clarification questions:

  1. It's not clear to me how to specify the number of Fourier modes/frequencies (that is, K in https://finufft.readthedocs.io/en/latest/math.html) in the current implementation. In the call signature for tfft.nufft, there doesn't seem to be an explicit argument for this.
  2. It seems like the current implementation has switched type_1 and type_2 compared to https://finufft.readthedocs.io/en/latest/math.html; is this correct?

Thanks for your help.

`tfft.util.estimate_density` is inaccurate

tfft.util.estimate_density, the utility to estimate density compensation weights for arbitrary trajectories, produces inaccurate results.

This needs to be investigated. Perhaps the NUFFT kernel is not suitable for the density estimation algorithm?

Compiling without CUDA?

I'm working in a DaskHub environment without CUDA, and when I try to import the library I get the error NotFoundError: libcudart.so.11.0: cannot open shared object file: No such file or directory. I'd rather not have to install libcudart in this environment; is there a way to perform the

_nufft_ops = tf.load_op_library(
    tf.compat.v1.resource_loader.get_path_to_datafile('_nufft_ops.so'))

operation without attempting to load the GPU version of the code, i.e. with just a CPU implementation?

ImportError: cannot import name 'nufft_options_pb2'

Hi, I installed the package on Windows via

pip install --force-reinstall --no-deps git+https://github.com/mrphys/tensorflow-nufft.git

which builds the wheel locally. The installation completed without errors, but when I import the module it raises ImportError: cannot import name 'nufft_options_pb2'.

Does anyone know what causes the problem?
