JuliaApproximation / FastGaussQuadrature.jl
Julia package for Gaussian quadrature
Home Page: https://juliaapproximation.github.io/FastGaussQuadrature.jl/stable/
License: MIT License
I haven't looked through the code base too closely, but I just spent the morning trying to fix a bug in DistQuads.jl. I couldn't work out what on earth was wrong with my code. I thought it was my normalization after doing the change of variables, but then I found out it was the weights coming from gaussjacobi.
Look at the figures below. x is alpha and y is beta in the Beta distribution. North-west shows the difference between the mean of the Beta distribution from Distributions and the one from DistQuads, north-east shows the mean from Distributions, south-west shows the mean from DistQuads (based on Gauss quadrature), and south-east is just gamma(alpha)*gamma(beta)/gamma(alpha+beta) for debugging purposes. You see that it NaNs out at some point. I could change to big here, but that doesn't matter for this purpose.
First I am calling gaussjacobi, then I am calling JacobiRec for the recursive version and comparing the two. Basically, calling JacobiRec gives me the correct answer, but something happens around (x, y) .> 6 and the weights are complete rubbish if you use gaussjacobi. I'll just switch to JacobiRec in my code, but figured you would want to know.
This is the code
using Distributions, DistQuads, Plots
plotlyjs()
xs = linspace(0.0001,150.0,300)
ys = linspace(0.0001,150.0,300)
z1 = [mean(Beta(x, y))-mean(DistQuad(Beta(x, y))) for x = xs, y = ys]
z2 = [mean(Beta(x, y)) for x = xs, y = ys]
z3 = [mean(DistQuad(Beta(x, y))) for x = xs, y = ys]
z4 = [gamma(x)*gamma(y)/gamma(x+y) for x = xs, y = ys]
p1 = heatmap(xs,ys,z1)
p2 = heatmap(xs,ys,z2)
p3 = heatmap(xs,ys,z3)
p4 = heatmap(xs,ys,z4)
plot(p1,p2,p3,p4)
called on two different versions of DistQuads (one calling gaussjacobi and one calling JacobiRec).
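For reference, here is a minimal sketch of how the Beta mean can be obtained from gaussjacobi (an assumed reconstruction for illustration, not the actual DistQuads.jl code): with t on [-1, 1] and x = (1 + t)/2, the Jacobi weight (1 - t)^(beta-1) (1 + t)^(alpha-1) matches the Beta density up to a constant that cancels when normalising by the sum of the weights.
using FastGaussQuadrature
α, β = 2.0, 5.0
t, w = gaussjacobi(64, β - 1, α - 1)   # weight (1-t)^(β-1) * (1+t)^(α-1) on [-1, 1]
x = (t .+ 1) ./ 2                      # map nodes to [0, 1]
sum(w .* x) / sum(w)                   # ≈ α / (α + β) = 2/7
Any error in the returned weights w feeds straight into this ratio, which is consistent with the NaNs in the heatmaps described above.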
I don't see any builds, and I am not even sure it is enabled for this repository.
(I am asking because I would like to contribute PRs).
@popsomer, can you add tests that execute the lines in gausslaguerre for alpha > 1: asyRHgen, polyAsyRHgen, asyBesselgen, asyAirygen, etc.?
This was unlucky:
julia> using FastGaussQuadrature
julia> gaussjacobi(10_000,0.25,0.35); # A OK
julia> gaussjacobi(10_000,0.1,0.2);
ERROR: DomainError:
in .^ at arraymath.jl:57
in feval_asy1 at /Users/Mikael/.julia/v0.4/FastGaussQuadrature/src/gaussjacobi.jl:319
in asy1 at /Users/Mikael/.julia/v0.4/FastGaussQuadrature/src/gaussjacobi.jl:194
in JacobiAsy at /Users/Mikael/.julia/v0.4/FastGaussQuadrature/src/gaussjacobi.jl:155
in gaussjacobi at /Users/Mikael/.julia/v0.4/FastGaussQuadrature/src/gaussjacobi.jl:19
There should be no reason to restrict N to either Int (which is Int32 on 32-bit systems and Int64 on 64-bit systems) or Int64. If you try to use Gauss-Laguerre on a 32-bit system and type in 16 as the number of nodes, it will fail because the argument is restricted to Int64. In Gauss-Hermite, an Int64 version of 16 will fail, as that argument is type-annotated as Int. Did you mean Integer in these cases?
f(x::Int) = x
will only work for Int32 on 32-bit systems and Int64 on 64-bit systems, while
f(x::Integer) = x
will work for both on both systems.
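To make the difference concrete (plain Julia with throwaway function names, nothing package-specific):
julia> g(x::Int) = x;        # Int is an alias for Int32 on 32-bit systems and Int64 on 64-bit systems
julia> h(x::Integer) = x;    # Integer covers Int32, Int64, BigInt, ...
julia> h(Int32(16)), h(Int64(16))   # both calls work on every platform
(16, 16)
julia> g(Int64(16))   # MethodError on a 32-bit system, works on a 64-bit one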
It's sensible to support this functionality, especially since they're more accurate.
gaussjacobi(218,19.,0.) breaks because in Julia a negative Float raised to a fractional power is not defined. If you wish to get a complex number you can add 0im: (-1.0+0im)^(.1), but it seems likely that there is another bug here. Maybe it's because your code breaks down as a becomes large?
DomainError
while loading In[82], in expression starting on line 1
in .^ at array.jl:713
in feval_asy1 at /Users/solver/.julia/v0.3/FastGaussQuadrature/src/gaussjacobi.jl:289
in asy1 at /Users/solver/.julia/v0.3/FastGaussQuadrature/src/gaussjacobi.jl:163
in JacobiAsy at /Users/solver/.julia/v0.3/FastGaussQuadrature/src/gaussjacobi.jl:101
in gaussjacobi at /Users/solver/.julia/v0.3/FastGaussQuadrature/src/gaussjacobi.jl:21
in gaussjacobi at /Users/solver/.julia/v0.3/ApproxFun/src/Spaces/Jacobi/jacobitransform.jl:4
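The underlying Julia behaviour can be reproduced in isolation (a plain-Julia illustration, independent of the package internals):
(-1.0)^0.1            # throws a DomainError: negative real base, non-integer real exponent
(-1.0 + 0im)^0.1      # ≈ cis(0.1π); promoting the base to Complex gives the principal root
The trace above shows the DomainError coming from a .^ call inside feval_asy1, so some intermediate quantity in the asymptotic formula evidently goes negative before being raised to a fractional power.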
The weights of the Gauss-Jacobi quadrature rule corresponding to points near the singularities at ±1 are not computed to high accuracy in all cases.
For example, for a = 0.25, b = 0.0 and n = 2^10, the weight corresponding to the node
closest to 1 is (to about 24 digit accuracy)
3.60755490460431077919218552192672111E-0007
while gaussjacobi returns
3.607554904543958d-7
This is roughly 10 digit accuracy. For n=2^12, the weight for the node
closest to 1 is (to 24 digits or so)
1.12865287559907169561093365173501611E-0008
while gaussjacobi returns
1.12865287520011950000000000000000002E-0008.
This is an error of about 3.53x10^(-10). The MATLAB version of the code included with chebfun achieves double precision accuracy.
There is currently no test for gausshermite.jl
gaussjacobi(-1.,0.,10) returns without error
Tag a new version for 0.5 compatibility
I'm using the package on 0.4 and got a bunch of deprecation warnings. A lot of them are quite easily fixed (such as replacing [] concatenation with collect(), e.g. here).
What are the plans for 0.4 compatibility? Are PRs on this welcome?
This issue is used to trigger TagBot; feel free to unsubscribe.
If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.
With these rules, one can increase the quadrature order very easily and with very low memory requirements.
Any ideas on how to use these efficiently in an automatic integrator? (I would like to be able to specify a tolerance and receive an estimate of the integral within that tolerance, but accurate error estimation is difficult.) I have not been able to obtain performance similar to spatially adaptive, fixed-order quadratures.
Has anyone investigated something along these lines?
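One naive approach, sketched below purely for discussion (the stopping test is a heuristic, not a rigorous error bound): double the Gauss-Legendre order until two successive estimates agree to the requested tolerance.
using FastGaussQuadrature

function order_doubling_quad(f, a, b; tol=1e-10, nmax=2^20)
    scale = (b - a) / 2
    n = 8
    x, w = gausslegendre(n)
    I_old = scale * sum(w .* f.(scale .* (x .+ 1) .+ a))   # map nodes from [-1,1] to [a,b]
    while n < nmax
        n *= 2
        x, w = gausslegendre(n)
        I_new = scale * sum(w .* f.(scale .* (x .+ 1) .+ a))
        abs(I_new - I_old) <= tol && return I_new            # heuristic error estimate
        I_old = I_new
    end
    return I_old
end

order_doubling_quad(x -> exp(-x^2), 0.0, 3.0)   # ≈ 0.886207
This converges quickly for smooth integrands, but unlike a spatially adaptive scheme it cannot concentrate work near local features, which may be why the fixed-order adaptive quadratures mentioned above still win.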
Use SymTridiagonal for the Jacobi matrix in the Golub-Welsch algorithm: this turns O(n^3) into O(n^2) for calculating the eigenvalues. This is used in gaussjacobi for small n.
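As a concrete illustration of the idea (the Legendre case, whose recurrence is particularly simple; this is a sketch of the standard Golub-Welsch construction, not the package's implementation):
using LinearAlgebra
n = 10
β = [k / sqrt(4k^2 - 1) for k in 1:n-1]    # Legendre recurrence off-diagonal entries
J = SymTridiagonal(zeros(n), β)            # Jacobi matrix stored as symmetric tridiagonal
E = eigen(J)                               # tridiagonal eigensolve: O(n^2) instead of O(n^3)
nodes = E.values                           # Gauss-Legendre nodes
weights = 2 .* abs2.(E.vectors[1, :])      # μ₀ = 2 times the squared first eigenvector components
Storing the Jacobi matrix as SymTridiagonal rather than a dense Matrix is exactly what lets the eigensolver dispatch to the specialised tridiagonal routine.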
The algorithm breaks down as Jacobi parameters become large. Maybe it should fall back to Golub-Welsch in this case?
I see a sudden loss of accuracy at N=26 using gausslaguerre.jl. I use ApproxFun.jl to create a Laguerre polynomial, and then evaluate it at the quadrature points to check the normalisation of the weighted Laguerre polynomials:
using FastGaussQuadrature, ApproxFun, Combinatorics
M=27
α = 0.
x,w=gausslaguerre(M,α)
n=0:M-1
Lnorm = exp.(lgamma.(n+α+1)-lgamma.(n+1))
c=zeros(M)
c[end]=1.
#check norm via coefficients and quadrature:
#coefficients
N1 = sum(abs.(c).^2)  # = 1.0
#quadrature
L=Fun(Laguerre(α),c./sqrt(Lnorm))
N2 = sum(w.*abs.(L.(x)).^2)
If M<=26, accuracy is excellent (N2=1.0 to 14 figures). At M=27, N2=0.88929.
I have taken the same approach with gausshermite.jl and the results are very well behaved (although accuracy appears to reach a global minimum around M=30). I have tried the different ways to call gausslaguerre.jl using the METHOD argument, but this hasn't improved the result. Any suggestions greatly appreciated!
Maybe it's time to move this to JuliaApproximation? The benefit is that any "member" can merge PRs, tag new releases, etc., which eases the burden of maintenance. This will help prevent PRs sitting for too long waiting for a response.
@ajt60gaibb Any opinions on this?
gaussjacobi(10,1,1) throws an error
I'd like to add it to a REQUIRE file but can't since it hasn't been published to METADATA yet.
current master branch:
julia> @time gaussjacobi( 100000, .9, -.1 );
4.262193 seconds (21.32 k allocations: 7.956 GiB, 30.34% gc time)
on v0.4.4 tag:
julia> @time gaussjacobi( 100000, .9, -.1 );
0.927837 seconds (5.28 k allocations: 1.320 GiB, 32.22% gc time)
This is because my previous PR was incorrect (sorry). For example, the inequality sign for the convergence test was wrong:
https://github.com/JuliaApproximation/FastGaussQuadrature.jl/blob/master/src/gaussjacobi.jl#L197
Suppose we want to find
$$
\int_0^\infty f(x) e^{-x} \, dx
$$
but we are given only g(x) = f(x) e^{-x} rather than f itself:
x, w= FastGaussQuadrature.gausslaguerre(150)
g = x -> sin(x)*exp(-x)
dot(w, g.(x).*exp.(x)) ≈ 0.5 # true
Great!
Uh oh:
x, w= FastGaussQuadrature.gausslaguerre(200)
g = x -> sin(x)*exp(-x)
dot(w, g.(x).*exp.(x)) |> isnan # true
If we had a weightedgausslaguerre that returned w.*exp.(x) instead of w, everything would be perfect. I suspect it's just as easy to calculate these weights as standard Gauss–Laguerre.
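In the meantime, one possible workaround (a sketch, assuming the NaNs come from weights underflowing to 0 while exp.(x) overflows to Inf) is to combine the two factors in log space:
using FastGaussQuadrature, LinearAlgebra
x, w = gausslaguerre(200)
g = x -> sin(x)*exp(-x)
wexp = exp.(log.(w) .+ x)     # same as w .* exp.(x), but never forms 0 * Inf
dot(wexp, g.(x)) ≈ 0.5        # true
Weights that have already underflowed to exactly zero still contribute nothing, which is harmless here because g itself is negligible at those nodes; a real weightedgausslaguerre computing w .* exp.(x) internally would of course be the cleaner fix.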
I am getting the following error trace when I try to install the package.
julia> Pkg.add("FastGaussQuadrature")
Updating registry at C:\Users\rene\.julia\registries\General
Updating git-repo https://github.com/JuliaRegistries/General.git
Resolving package versions...
tar: jl_EDBB.tmp: Couldn't find file: Permission denied
tar: Error exit delayed from previous errors.
ERROR: Unable to automatically install 'OpenSpecFun' from 'C:\Users\rene.julia\packages\OpenSpecFun_jll\XrUb6\Artifacts.toml'
Stacktrace:
There should be nothing in the algorithms that depends on the parameters being Float64: BigFloat should in principle be allowed, though I suppose there may be assumptions on eps(T) for choosing cut-offs.
I forgot Attobot was not enabled and tried tagging a new release with julia v0.7-alpha support.
Alas, I can make tags but can't delete them, so I don't know how to rectify this.
thanks for your great code... any scope for adding Clenshaw-Curtis quadrature?
First of all -- thanks for this amazing package; I've been using it quite a bit. However, I have the feeling that there is something fishy with the Golub-Welsch algorithm in gaussjacobi():
For small n, JacobiGW(n, a, b) is called, see https://github.com/ajt60gaibb/FastGaussQuadrature.jl/blob/bd0b69e6af7dde596bc831d4beffd8c8f2d76224/src/gaussjacobi.jl#L29
In the function JacobiGW(n, a, b) the Golub-Welsch algorithm is used, i.e. the nodes and weights are computed from the eigenvalues/-vectors of the Jacobi matrix. The weights should be the square of the first entry of the normalized eigenvector times the integral of the weight function over the support. In gaussjacobi.jl, however, the computed weights are then divided by an additional factor. Why is that division necessary? In fact, this normalization leads to the weights returned by JacobiGW() satisfying a different normalization than expected. In short: shouldn't line 386 be commented out?
happy holidays! :)
The module is called FastGauss but the package is FastGaussQuadrature.jl. I think they should be consistent. This may be causing the following warning:
Warning: requiring "FastGaussQuadrature" did not define a corresponding module.
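For reference, the convention is that src/FastGaussQuadrature.jl defines a module of the same name, so that loading FastGaussQuadrature brings its exported functions into scope (a generic sketch, not the package's actual source file):
module FastGaussQuadrature
export gausslegendre, gausshermite, gausslaguerre, gaussjacobi
# include("gausslegendre.jl") etc.
end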
Any plans on adding code for computing Gauss–Laguerre quadrature points and weights?
Hi, I just wanted to ask whether it aligns with the philosophy of the package to provide error estimates, or whether I should stick with QuadGK.jl.
This is wrong:
x,w=gaussjacobi(20,11.,0.)
size(w)==(1,20) # returns true, but w should be a length-20 vector
femtocleaner https://github.com/JuliaComputing/FemtoCleaner.jl updates deprecated syntax automatically via a pull request.
julia> gaussjacobi(100,.5,.0)
([-0.999715, -0.998499, -0.996313, -0.993159, -0.989039, -0.983957, -0.977919, -0.97093, -0.962997, -0.954127 … 0.951777, 0.960879, 0.969047, 0.976273, 0.982549, 0.987871, 0.992232, 0.995628, 0.998056, 0.999514], [0.00103371, 0.00240458, 0.00377339, 0.00513597, 0.00648985, 0.00783271, 0.00916229, 0.0104763, 0.0117726, 0.013049 … 0.00210075, 0.0017082, 0.00135436, 0.0010401, 0.000766175, 0.000533255, 0.000341906, 0.000192595, 8.56847e-5, 2.14342e-5])
julia> gaussjacobi(101,.5,.0)
ERROR: MethodError: no method matching isless(::Int64, ::CartesianIndex{2})
Closest candidates are:
isless(::Missing, ::Any) at missing.jl:66
isless(::Real, ::AbstractFloat) at operators.jl:149
isless(::Real, ::Real) at operators.jl:338
...
Stacktrace:
[1] max(::CartesianIndex{2}, ::Int64) at ./operators.jl:400
[2] asy1(::Int64, ::Float64, ::Float64, ::Int64) at /Users/vincentcp/.julia/packages/FastGaussQuadrature/lfFe/src/gaussjacobi.jl:191
[3] JacobiAsy(::Int64, ::Float64, ::Float64) at /Users/vincentcp/.julia/packages/FastGaussQuadrature/lfFe/src/gaussjacobi.jl:164
[4] gaussjacobi(::Int64, ::Float64, ::Float64) at /Users/vincentcp/.julia/packages/FastGaussQuadrature/lfFe/src/gaussjacobi.jl:28
[5] top-level scope at none:0
This seems to be a holdover from MATLAB, since
julia> n=5000;a=rand(n);b=rand(n-1);@time eig(SymTridiagonal(a,b));
4.166163 seconds (276.39 k allocations: 203.424 MB, 1.36% gc time)
I think the style for Julia is lower case for functions and upper case for types, so GaussLegendre(n) should probably be called gausslegendre(n).
The function HermiteInitialGuesses calls flipdim(x_init, 2), but the x_init array appears to be a 1d array. Currently, this means that flipdim is a no-op, but it may soon change to an error (JuliaLang/julia#17752). Is this a bug?
On julia 0.6:
julia> @time gausslobatto(2);
7.649899 seconds (9.86 M allocations: 451.546 MiB, 4.05% gc time)
On 0.5 it is about 3.5 seconds. Some quick debugging with @code_warntype suggests the problem is in this method, which has the following return type as a result: Tuple{Union{Array{Complex{Float64},1}, Array{Float64,1}},Union{Array{Complex{Float64},1}, Array{Float64,1}}}
I'm using Gauss-Laguerre with a small number of nodes (between 3 and 15). If I only supply the number of nodes that I want, the weights sum to 1 as I would expect. If I supply a second argument, the weights stop summing to 1. Sometimes they are lower, sometimes higher. I am not super familiar with Gauss-Laguerre (I usually use Gauss-Hermite), but I am used to thinking of these as normalized weights that should sum to 1. Is that correct? If so, what is going on here?
For reference, I am trying to use this to compute the expectation of a random variable that has an exponential distribution with parameter t>0 (so that E[x]=1/t).
n,w = gausslaguerre(7,0.075)
sum(w) # = 0.961918
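A likely explanation (offered as an aside): with a second argument α the weight function is x^α e^{-x}, so the weights sum to ∫₀^∞ x^α e^{-x} dx = Γ(α + 1) rather than 1. For α = 0.075 that is about 0.96192, which matches the value above; dividing by sum(w) (or by gamma(α + 1)) recovers weights that sum to 1.
using FastGaussQuadrature, SpecialFunctions
x, w = gausslaguerre(7, 0.075)
sum(w) ≈ gamma(1.075)    # true: the weights integrate the constant 1 against x^0.075 * exp(-x)
w ./ sum(w)              # normalized weights, if a probability interpretation is wanted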
How about tagging a new release? The deprecation warning in v0.01 is annoying.
If some package depends on this one, it is inconvenient to Pkg.checkout it every time it is installed on a new computer.
As I've used FGQ in my thesis, I was wondering whether there's a suggested way of citing the package?
Here's an example:
julia> using FastGaussQuadrature
julia> gausslaguerre(4,1)
ERROR: MethodError: no method matching LinearAlgebra.SymTridiagonal(::Array{Int64,1}, ::Array{Float64,1})
Closest candidates are:
LinearAlgebra.SymTridiagonal(::V, ::V) where {T, V<:AbstractArray{T,1}} at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.5/LinearAlgebra/src/tridiag.jl:70
Stacktrace:
[1] gausslaguerre_GW(::Int64, ::Int64) at /home/hyrodium/.julia/packages/FastGaussQuadrature/t2tCl/src/gausslaguerre.jl:560
[2] gausslaguerre(::Int64, ::Int64; reduced::Bool) at /home/hyrodium/.julia/packages/FastGaussQuadrature/t2tCl/src/gausslaguerre.jl:39
[3] gausslaguerre(::Int64, ::Int64) at /home/hyrodium/.julia/packages/FastGaussQuadrature/t2tCl/src/gausslaguerre.jl:19
[4] top-level scope at REPL[2]:1
I assume this is easy via Newton iteration with besselroots(::Float64, ::Int) initial guesses? I can add a PR.
The (probable) world record is no such thing! The race report by Townsend you quote mentions him calculating 1 billion + 2 nodes and weights, so you need to go for 1 billion + 3 ;)
Potentially related issue: chebfun/chebfun#1622
In Bogaert's paper, a billion GL nodes takes 11.0 seconds using his C++ implementation and 8 threads with 4 logical cores. Compare this to the 131 seconds for a billion and one GL nodes.
Threading should improve the computational times since all these methods are trivially parallel. Alternatively, this package could end up being a wrapper for low-level codes that have better thread support. For example, Bogaert's code is written in C++, but minor changes could have it rewritten as C.
I want to cite this library in a JOSS paper.
How should I do so? Currently I just mentioned it, and then picked a random Olver, Townsend et al paper on quadrature to cite/mention it.
help?> gaussjacobi
search: gaussjacobi
No documentation found.
FastGaussQuadrature.gaussjacobi is a Function.
# 2 methods for generic function "gaussjacobi":
[1] gaussjacobi(n::Integer, a::Float64, b::Float64) in FastGaussQuadrature at /Users/solver/.julia/packages/FastGaussQuadrature/G3Izs/src/gaussjacobi.jl:7
[2] gaussjacobi(n::Number, a::Number, b::Number) in FastGaussQuadrature at /Users/solver/.julia/packages/FastGaussQuadrature/G3Izs/src/gaussjacobi.jl:1
I put in a pull request back in Feb which hasn't been merged yet. Maybe it makes sense to give me push access?
I noticed that this repo still uses travis-ci.org, at least based on the badge in the README. travis-ci.org will be shut down at the end of the year, cf. https://mailchi.mp/3d439eeb1098/travis-ciorg-is-moving-to-travis-cicom. Hence, everything should be migrated to travis-ci.com.
The Travis badge in the README is currently broken since it points to ajt60gaibb/FastGaussQuadrature.jl instead of JuliaApproximation/FastGaussQuadrature.jl.