FastGaussQuadrature.jl's People

Contributors

ajt60gaibb, chrisrackauckas, christosilvia, daanhb, datseris, dlfivefifty, geoffchurch, github-actions[bot], goggle, heltonmc, hyrodium, jishnub, juliatagbot, kristofferc, mikaelslevinsky, nilshg, nittanylion, pbeckman, popsomer, robertdj, stevengj, t-bltg, timueh, yuyichao


FastGaussQuadrature.jl's Issues

Gauss-Jacobi branching bug

I haven't looked through the code base too closely, but I just spent the morning trying to fix a bug in DistQuads.jl. I couldn't figure out what on earth was wrong with my code. I thought it was my normalization after doing the change of variables, but it turned out to be the weights coming from gaussjacobi.

Look at the figures below; x is alpha and y is beta in the Beta distribution. North-west shows the difference between the mean of the beta distribution from Distributions and the one from DistQuads, north-east shows the mean from Distributions, south-west shows the mean from DistQuads (based on Gauss quadrature), and south-east is just gamma(alpha)*gamma(beta)/gamma(alpha+beta) for debugging purposes. You can see that it NaNs out at some point. I could switch to BigFloat here, but that doesn't matter for this purpose.

First, I am calling gaussjacobi
[figure: jacobinowork]

then, I am calling JacobiRec for the recursive version and I am getting
[figure: jacworks]

Basically, calling JacobiRec gives me the correct answer, but something happens around (x, y) > 6, and the weights are complete rubbish if you use gaussjacobi. I'll just switch to JacobiRec in my code, but figured you would want to know.

[figure: jacosmall]

This is the code

using Distributions, DistQuads, Plots
plotlyjs()
xs = linspace(0.0001,150.0,300)
ys = linspace(0.0001,150.0,300)
z1 = [mean(Beta(x, y))-mean(DistQuad(Beta(x, y))) for x = xs, y = ys]
z2 = [mean(Beta(x, y)) for x = xs, y = ys]
z3 = [mean(DistQuad(Beta(x, y))) for x = xs, y = ys]
z4 = [gamma(x)*gamma(y)/gamma(x+y) for x = xs, y = ys]

p1 = heatmap(xs,ys,z1)
p2 = heatmap(xs,ys,z2)
p3 = heatmap(xs,ys,z3)
p4 = heatmap(xs,ys,z4)
plot(p1,p2,p3,p4)

called on two different versions of DistQuads (one calling gaussjacobi and one calling JacobiRec).

Is CI (Travis) set up correctly?

I don't see any builds, and I am not even sure it is enabled for this repository.

(I am asking because I would like to contribute PRs).

Gauss--Jacobi error

This was unlucky:

julia> using FastGaussQuadrature

julia> gaussjacobi(10_000,0.25,0.35); # A OK

julia> gaussjacobi(10_000,0.1,0.2);
ERROR: DomainError:
 in .^ at arraymath.jl:57
 in feval_asy1 at /Users/Mikael/.julia/v0.4/FastGaussQuadrature/src/gaussjacobi.jl:319
 in asy1 at /Users/Mikael/.julia/v0.4/FastGaussQuadrature/src/gaussjacobi.jl:194
 in JacobiAsy at /Users/Mikael/.julia/v0.4/FastGaussQuadrature/src/gaussjacobi.jl:155
 in gaussjacobi at /Users/Mikael/.julia/v0.4/FastGaussQuadrature/src/gaussjacobi.jl:19

Remove type annotations on N

There should be no reason to restrict N to either Int (which is Int32 on 32-bit systems and Int64 on 64-bit systems) or Int64. If you try to use Gauss-Laguerre on a 32-bit system and pass 16 as the number of nodes, it will fail because the argument is restricted to Int64. In Gauss-Hermite, an Int64 version of 16 will fail, as that argument is annotated Int. Did you mean Integer in these cases?

f(x::Int) = x

will only work for Int32 on 32bit systems and Int64 on 64bit systems, while

f(x::Integer) = x

will work for both on both systems.

ref pkofod/DistQuads.jl#4
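A minimal illustration of the dispatch point above (the function names here are hypothetical, not from the package):

```julia
f_strict(n::Int64) = n      # on a 64-bit system this rejects an Int32 argument
f_generic(n::Integer) = n   # accepts any integer type on any system

f_generic(Int32(16))               # works
f_generic(16)                      # works
hasmethod(f_strict, Tuple{Int32})  # false: no applicable method
```

On a 64-bit machine Int === Int64, so the Int32 call has no applicable method; annotating with the abstract Integer avoids the problem on both architectures.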

gaussjacobi(218,19.,0.) doesn't work

gaussjacobi(218,19.,0.) breaks because in Julia a negative Float64 raised to a fractional power is not defined. If you wish to get a complex number you can add 0im, as in (-1.0+0im)^(.1), but it seems likely that there is another bug here. Maybe it's because the code breaks down as a becomes large?

DomainError
while loading In[82], in expression starting on line 1

in .^ at array.jl:713
in feval_asy1 at /Users/solver/.julia/v0.3/FastGaussQuadrature/src/gaussjacobi.jl:289
in asy1 at /Users/solver/.julia/v0.3/FastGaussQuadrature/src/gaussjacobi.jl:163
in JacobiAsy at /Users/solver/.julia/v0.3/FastGaussQuadrature/src/gaussjacobi.jl:101
in gaussjacobi at /Users/solver/.julia/v0.3/FastGaussQuadrature/src/gaussjacobi.jl:21
in gaussjacobi at /Users/solver/.julia/v0.3/ApproxFun/src/Spaces/Jacobi/jacobitransform.jl:4
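For context, the power behavior underlying this error can be reproduced directly, independent of the package:

```julia
# A negative Float64 raised to a non-integer power throws a DomainError,
# while the same power of a Complex number is well defined:
err = try
    (-1.0)^0.1
catch e
    e
end
err isa DomainError   # true

(-1.0 + 0im)^0.1      # principal branch: cos(pi/10) + sin(pi/10)*im
```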

Loss of precision for weights of Gauss-Jacobi quadratures near boundary

The weights of the Gauss-Jacobi quadrature rule corresponding to points near the singularities at ±1 are not computed to high accuracy in all cases.

For example, for a = 0.25, b = 0.0 and n = 2^10, the weight corresponding to the node
closest to 1 is (to about 24 digit accuracy)

3.60755490460431077919218552192672111E-0007

while gaussjacobi returns

3.607554904543958d-7

This is roughly 10 digit accuracy. For n=2^12, the weight for the node
closest to 1 is (to 24 digits or so)

1.12865287559907169561093365173501611E-0008

while gaussjacobi returns

1.12865287520011950000000000000000002E-0008.

This is an error of about 3.53×10^(-10). The MATLAB version of the code included with Chebfun achieves double-precision accuracy.

Tag 0.5

Tag a new version for 0.5 compatibility

Any plans on 0.4 compatibility?

I'm using the package on 0.4 and got a bunch of deprecation warnings. A lot of them are quite easily fixed (such as replacing [] concatenation with collect(), e.g. here).

What are the plans for 0.4 compatibility? Are PRs on this welcome?

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

Question: automatic integration

With these rules, one can increase the quadrature order very easily and with very low memory requirements.

Any ideas on how to use these efficiently in an automatic integrator? (I would like to be able to specify a tolerance and receive an estimate of the integral within that tolerance, but accurate error estimation is difficult.) I have not been able to match the performance of spatially adaptive, fixed-order quadratures.

Has anyone investigated something along these lines?
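For what it's worth, the simplest pattern is an automatic integrator that doubles the order until two successive estimates agree; this is the naive strategy, not a sharp error estimator. The sketch below builds Gauss-Legendre rules via Golub-Welsch only to stay self-contained (in practice gausslegendre(n) would supply nodes and weights much faster); the function names and tolerance defaults are hypothetical:

```julia
using LinearAlgebra

# Gauss-Legendre nodes/weights on [-1, 1] via Golub-Welsch.
function legendre_gw(n)
    β = [k / sqrt(4k^2 - 1) for k in 1:n-1]    # three-term recurrence coefficients
    E = eigen(SymTridiagonal(zeros(n), β))
    return E.values, 2 .* E.vectors[1, :].^2   # μ0 = ∫_{-1}^{1} dx = 2
end

# Double the order until two successive estimates agree to the requested tolerance.
function autogauss(f, a, b; tol = 1e-10, nmax = 4096)
    prev = NaN
    n = 8
    while n ≤ nmax
        x, w = legendre_gw(n)
        # affine map from [-1, 1] to [a, b]
        I = (b - a)/2 * dot(w, f.((b - a)/2 .* x .+ (a + b)/2))
        abs(I - prev) ≤ tol * abs(I) && return I
        prev = I
        n *= 2
    end
    error("tolerance not reached")
end

autogauss(exp, 0, 1)   # ≈ e - 1
```

Note that this wastes the lower-order evaluations, since Gauss nodes are not nested, which is one reason spatially adaptive fixed-order rules often win in function-evaluation counts.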

Loss of accuracy for modest N?

I see a sudden loss of accuracy at N=26 using gausslaguerre.jl. I use ApproxFun.jl to create a Laguerre polynomial, and then evaluate it at the quadrature points to check the normalisation of the weighted Laguerre polynomials:

using FastGaussQuadrature, ApproxFun, Combinatorics
M = 27
α = 0.
x, w = gausslaguerre(M, α)
n = 0:M-1
Lnorm = exp.(lgamma.(n+α+1) - lgamma.(n+1))
c = zeros(M)
c[end] = 1.

# check the norm via coefficients and via quadrature:
# coefficients
N1 = sum(abs.(c).^2)  # = 1.0

# quadrature
L = Fun(Laguerre(α), c./sqrt.(Lnorm))
N2 = sum(w.*abs.(L.(x)).^2)

If M<=26, accuracy is excellent (N2=1.0 to 14 figures). At M=27, N2=0.88929.

I have taken the same approach with gausshermite.jl and the results are very well behaved (although accuracy appears to reach a global minimum around M=30). I have tried the different ways of calling gausslaguerre.jl via the METHOD argument, but this hasn't improved the result. Any suggestions greatly appreciated!

Move to JuliaApproximation?

Maybe it's time to move this to JuliaApproximation? The benefit is that any "member" can merge PRs, tag new releases, etc., which eases the burden of maintenance. This will help prevent PRs sitting for too long waiting for a response.

@ajt60gaibb Any opinions on this?

`gaussjacobi( 100000, .9, -.1 )` takes much more time than before

current master branch:

julia> @time gaussjacobi( 100000, .9, -.1 );
  4.262193 seconds (21.32 k allocations: 7.956 GiB, 30.34% gc time)

on v0.4.4 tag:

julia> @time gaussjacobi( 100000, .9, -.1 );
  0.927837 seconds (5.28 k allocations: 1.320 GiB, 32.22% gc time)

This is because my previous PR was incorrect (sorry).
For example, the inequality sign in the convergence test was wrong.


https://github.com/JuliaApproximation/FastGaussQuadrature.jl/blob/master/src/gaussjacobi.jl#L197

Factor out exponential decay in weights

Suppose we want to find
$$
\int_0^\infty f(x) e^{-x} \, dx
$$
but we are not given $f(x)$; instead, we are given $g(x) = f(x) e^{-x}$. Now we try Gauss--Laguerre as follows:

x, w = FastGaussQuadrature.gausslaguerre(150)
g = x -> sin(x)*exp(-x)
dot(w, g.(x).*exp.(x)) ≈ 0.5 # true

Great!

Uh oh:

x, w = FastGaussQuadrature.gausslaguerre(200)
g = x -> sin(x)*exp(-x)
dot(w, g.(x).*exp.(x)) |> isnan # true

If we had a weightedgausslaguerre that returned w.*exp.(x) instead of w, everything would be perfect. I suspect these weights are just as easy to calculate as the standard Gauss–Laguerre ones.

@popsomer
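A self-contained sketch of the idea via Golub-Welsch (the function name and all code here are hypothetical; the package's gausslaguerre uses much faster methods). The scaled weights are assembled in log space, since log(w_j) + x_j stays moderate even when w_j underflows and exp(x_j) overflows; whether the tiny eigenvector components retain enough relative accuracy at large n is a separate question:

```julia
using LinearAlgebra

# Gauss-Laguerre via Golub-Welsch, returning the *scaled* weights w .* exp.(x),
# formed in log space so that neither the underflow of w nor the overflow of
# exp.(x) is ever materialized.
function weightedgausslaguerre_sketch(n)   # hypothetical name from this issue
    a = [2k + 1.0 for k in 0:n-1]          # Laguerre Jacobi-matrix diagonal
    b = [Float64(k) for k in 1:n-1]        # off-diagonal entries
    E = eigen(SymTridiagonal(a, b))
    x = E.values
    logw = 2 .* log.(abs.(E.vectors[1, :]))   # log of the weights (μ0 = 1)
    return x, exp.(logw .+ x)                 # scaled weights, formed stably
end

x, wexp = weightedgausslaguerre_sketch(200)
dot(wexp, sin.(x))   # compare against the exact value 1/2
```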

Package build failed on Windows (Julia v1.3.1)

I am getting the following error trace when I try to install the package.

julia> Pkg.add("FastGaussQuadrature")
Updating registry at C:\Users\rene\.julia\registries\General
Updating git-repo https://github.com/JuliaRegistries/General.git
Resolving package versions...
tar: jl_EDBB.tmp: Couldn't find file: Permission denied
tar: Error exit delayed from previous errors.
ERROR: Unable to automatically install 'OpenSpecFun' from 'C:\Users\rene\.julia\packages\OpenSpecFun_jll\XrUb6\Artifacts.toml'
Stacktrace:

Remove hard coded Float64

There should be nothing in the algorithms that depends on the parameters being Float64: BigFloat should in principle be allowed, though I suppose there may be assumptions on eps(T) when choosing cut-offs.
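As a proof of concept that nothing fundamental requires Float64, here is a type-generic rule for the one classical family with closed-form nodes and weights, Gauss-Chebyshev of the first kind (a hypothetical signature, not the package's API); it runs unchanged with BigFloat:

```julia
# Gauss-Chebyshev (first kind): x_k = cos((2k-1)π/(2n)), w_k = π/n,
# for ∫_{-1}^{1} f(x)/sqrt(1-x^2) dx; exact for polynomials of degree ≤ 2n-1.
function gausschebyshev_generic(::Type{T}, n::Integer) where {T<:AbstractFloat}
    x = [cos(T(pi) * (2k - 1) / (2n)) for k in n:-1:1]   # ascending order
    w = fill(T(pi) / n, n)
    return x, w
end

x, w = gausschebyshev_generic(BigFloat, 10)
sum(w .* x.^2)   # ≈ π/2 to full BigFloat precision (exact for this integrand)
```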

Add Attobot

I forgot Attobot was not enabled and tried tagging a new release with julia v0.7-alpha support.

Alas, I can make tags but can't delete them, so I don't know how to rectify this.

Clenshaw-Curtis?

Thanks for your great code... any scope for adding Clenshaw-Curtis quadrature?

Normalization in Golub-Welsch for Gauss-Jacobi correct?

First of all -- thanks for this amazing package; I've been using it quite a bit. However, I have the feeling that there is something fishy with the Golub-Welsch algorithm in gaussjacobi():

For $n \le 4000$ and $\max(a,b) \ge 5$, the function JacobiGW(n, a, b) is called, see https://github.com/ajt60gaibb/FastGaussQuadrature.jl/blob/bd0b69e6af7dde596bc831d4beffd8c8f2d76224/src/gaussjacobi.jl#L29

In the function JacobiGW(n, a, b) the Golub-Welsch algorithm is run, i.e. the nodes and weights are computed from the eigenvalues/-vectors of the Jacobi matrix. The weights are the square of the first entry of the normalized eigenvector times the integral of the weight function over the support, $w_j = \mu_0 (v_1^{(j)})^2$. In gaussjacobi.jl the weights are computed here:

https://github.com/ajt60gaibb/FastGaussQuadrature.jl/blob/bd0b69e6af7dde596bc831d4beffd8c8f2d76224/src/gaussjacobi.jl#L385

with $\mu_0 = 2^{a+b+1}\,\Gamma(a+1)\Gamma(b+1)/\Gamma(a+b+2)$. But then comes the next line:

https://github.com/ajt60gaibb/FastGaussQuadrature.jl/blob/bd0b69e6af7dde596bc831d4beffd8c8f2d76224/src/gaussjacobi.jl#L386

Why is the division necessary? In fact, this normalization leads to the weights returned by JacobiGW() satisfying $\sum_{i=1}^n w_i = 1 \neq \mu_0$, which is a contradiction in general; the integral of the Jacobi weight is not one in general.

In short: shouldn't line 386 be commented out?

happy holidays! :)
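The Legendre special case (a = b = 0) makes the point concrete: there μ0 = 2 (the integral of 1 over [-1,1]), and un-renormalized Golub-Welsch weights should sum to 2, not 1. A self-contained sketch, not the package's code:

```julia
using LinearAlgebra

# Golub-Welsch for Legendre (the Jacobi a = b = 0 case): weights are
# μ0 times the squared first components of the normalized eigenvectors.
n = 64
β = [k / sqrt(4k^2 - 1) for k in 1:n-1]   # three-term recurrence coefficients
E = eigen(SymTridiagonal(zeros(n), β))
μ0 = 2.0                                  # ∫_{-1}^{1} 1 dx
w = μ0 .* E.vectors[1, :].^2

sum(w)   # = 2.0 (= μ0), which dividing by sum(w) would destroy
```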

Name of module should be the same as git repository

The module is called FastGauss, but the package is FastGaussQuadrature.jl. I think they should be consistent. This may be causing the following warning:

Warning: requiring "FastGaussQuadrature" did not define a corresponding module.

Laguerre quadrature

Any plans on adding code for computing Gauss–Laguerre quadrature points and weights?

Not all sizes are working in the newest version

julia> gaussjacobi(100,.5,.0)
([-0.999715, -0.998499, -0.996313, -0.993159, -0.989039, -0.983957, -0.977919, -0.97093, -0.962997, -0.954127  …  0.951777, 0.960879, 0.969047, 0.976273, 0.982549, 0.987871, 0.992232, 0.995628, 0.998056, 0.999514], [0.00103371, 0.00240458, 0.00377339, 0.00513597, 0.00648985, 0.00783271, 0.00916229, 0.0104763, 0.0117726, 0.013049  …  0.00210075, 0.0017082, 0.00135436, 0.0010401, 0.000766175, 0.000533255, 0.000341906, 0.000192595, 8.56847e-5, 2.14342e-5])

julia> gaussjacobi(101,.5,.0)
ERROR: MethodError: no method matching isless(::Int64, ::CartesianIndex{2})
Closest candidates are:
  isless(::Missing, ::Any) at missing.jl:66
  isless(::Real, ::AbstractFloat) at operators.jl:149
  isless(::Real, ::Real) at operators.jl:338
  ...
Stacktrace:
 [1] max(::CartesianIndex{2}, ::Int64) at ./operators.jl:400
 [2] asy1(::Int64, ::Float64, ::Float64, ::Int64) at /Users/vincentcp/.julia/packages/FastGaussQuadrature/lfFe/src/gaussjacobi.jl:191
 [3] JacobiAsy(::Int64, ::Float64, ::Float64) at /Users/vincentcp/.julia/packages/FastGaussQuadrature/lfFe/src/gaussjacobi.jl:164
 [4] gaussjacobi(::Int64, ::Float64, ::Float64) at /Users/vincentcp/.julia/packages/FastGaussQuadrature/lfFe/src/gaussjacobi.jl:28
 [5] top-level scope at none:0

Loosen limit n ≤ 4000 in gaussjacobi

This seems to be a holdover from MATLAB, since

julia> n=5000;a=rand(n);b=rand(n-1);@time eig(SymTridiagonal(a,b));
  4.166163 seconds (276.39 k allocations: 203.424 MB, 1.36% gc time)
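For reference, the same experiment in Julia ≥ 1.0 syntax (eig was replaced by eigen; absolute timings will of course differ by machine):

```julia
using LinearAlgebra

# Full eigen-decomposition of a 5000×5000 symmetric tridiagonal matrix
# is still perfectly feasible, so the n ≤ 4000 cut-off looks arbitrary.
n = 5000
a = rand(n); b = rand(n - 1)
F = @time eigen(SymTridiagonal(a, b));
```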

Extremely slow compilation time

On julia 0.6:

julia> @time gausslobatto(2);
  7.649899 seconds (9.86 M allocations: 451.546 MiB, 4.05% gc time)

On 0.5 it is about 3.5 seconds. Some quick debugging with @code_warntype suggests the problem is in this method, which is inferred with the following return type: Tuple{Union{Array{Complex{Float64},1}, Array{Float64,1}},Union{Array{Complex{Float64},1}, Array{Float64,1}}}
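The kind of instability described here can be reproduced in miniature: a function whose return type depends on a runtime value, rather than only on argument types, forces callers to handle a Union (hypothetical example, not the package's code):

```julia
# The return type depends on the sign of n, not just on typeof(n):
unstable(n) = n < 0 ? complex.(Float64[1.0]) : Float64[1.0]

Base.return_types(unstable, (Int,))
# inference yields Union{Vector{ComplexF64}, Vector{Float64}},
# the same shape of Union that @code_warntype flags above
```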

Weights don't sum to 1

I'm using Gauss-Laguerre with a small number of nodes (between 3 and 15). If I only supply the number of nodes that I want, the weights sum to 1 as I would expect. If I supply a second argument, the weights stop summing to 1; sometimes they are lower, sometimes higher. I am not super familiar with Gauss-Laguerre (I usually use Gauss-Hermite), but I am used to thinking of these as normalized weights that should sum to 1. Is that correct? If so, what is going on here?

For reference, I am trying to use this to compute the expectation of a random variable that has an exponential distribution with parameter t>0 (so that E[x]=1/t).

n,w = gausslaguerre(7,0.075)
sum(w)     # = 0.961918
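A note for anyone landing on this issue: the generalized Gauss-Laguerre weights integrate the weight function $x^\alpha e^{-x}$ itself, so they sum to 1 only when α = 0:

```latex
\sum_{j=1}^{n} w_j \;=\; \int_0^\infty x^{\alpha} e^{-x}\,dx \;=\; \Gamma(\alpha + 1),
\qquad \Gamma(1.075) \approx 0.961918,
```

which matches the sum reported above; dividing the weights by Γ(α+1) recovers probability-normalized weights.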

Tag a new release?

How about tagging a new release? The deprecation warnings in v0.01 are annoying.
If some package depends on this, it is inconvenient to Pkg.checkout it every time it is installed on a new computer.

`gausslaguerre(n, α)` produces error if α is Int

Here's an example:

julia> using FastGaussQuadrature

julia> gausslaguerre(4,1)
ERROR: MethodError: no method matching LinearAlgebra.SymTridiagonal(::Array{Int64,1}, ::Array{Float64,1})
Closest candidates are:
  LinearAlgebra.SymTridiagonal(::V, ::V) where {T, V<:AbstractArray{T,1}} at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.5/LinearAlgebra/src/tridiag.jl:70
Stacktrace:
 [1] gausslaguerre_GW(::Int64, ::Int64) at /home/hyrodium/.julia/packages/FastGaussQuadrature/t2tCl/src/gausslaguerre.jl:560
 [2] gausslaguerre(::Int64, ::Int64; reduced::Bool) at /home/hyrodium/.julia/packages/FastGaussQuadrature/t2tCl/src/gausslaguerre.jl:39
 [3] gausslaguerre(::Int64, ::Int64) at /home/hyrodium/.julia/packages/FastGaussQuadrature/t2tCl/src/gausslaguerre.jl:19
 [4] top-level scope at REPL[2]:1

World record

The (probable) world record is no such thing! The race report by Townsend you quote mentions him calculating 1 billion + 2 nodes and weights, so you need to go for 1 billion + 3 ;)

Add multithreading support?

In Bogaert's paper, a billion GL nodes takes 11.0 seconds using his C++ implementation and 8 threads with 4 logical cores. Compare this to the 131 seconds for a billion and one GL nodes.

Threading should improve the computational times since all these methods are trivially parallel. Alternatively, this package could end up being a wrapper for low-level codes that have better thread support. For example, Bogaert's code is written in C++, but minor changes could have it rewritten as C.
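To illustrate how trivially the computation parallelizes, each node is independent, so a threaded loop needs no coordination at all (a toy sketch using closed-form Chebyshev nodes as stand-in work; the real methods do a few Newton steps per node instead):

```julia
# Each iteration writes a distinct slot of x, so Threads.@threads is safe here.
function nodes_threaded(n)
    x = Vector{Float64}(undef, n)
    Threads.@threads for k in 1:n
        x[k] = cos((2k - 1) * pi / (2n))   # independent per-node work
    end
    return x
end

nodes_threaded(10^6);   # scales with JULIA_NUM_THREADS
```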

Citation

I want to cite this library in a JOSS paper.

How should I do so? Currently I have just mentioned it, and then picked a more or less random Olver, Townsend, et al. paper on quadrature to cite.

Add documentation

help?> gaussjacobi
search: gaussjacobi

  No documentation found.

  FastGaussQuadrature.gaussjacobi is a Function.

  # 2 methods for generic function "gaussjacobi":
  [1] gaussjacobi(n::Integer, a::Float64, b::Float64) in FastGaussQuadrature at /Users/solver/.julia/packages/FastGaussQuadrature/G3Izs/src/gaussjacobi.jl:7
  [2] gaussjacobi(n::Number, a::Number, b::Number) in FastGaussQuadrature at /Users/solver/.julia/packages/FastGaussQuadrature/G3Izs/src/gaussjacobi.jl:1
