Comments (9)
The issue is that in the Newton iterations at gaussjacobi.jl:382, the recursive formula `innerjacobi_rec` is used instead of the boundary asymptotics, which are implemented as `feval_asy2` in Chebfun. An equivalent `feval_asy2` routine is not implemented in this package, so I now have a fork here with the low-order terms in these asymptotics implemented. This gives us back 6 digits or so; see below.
However, to get 16-digit accuracy, we need the higher-order terms, and computing the higher-order coefficients requires Chebyshev integration and differentiation routines. Do these Chebyshev routines exist elsewhere in JuliaApproximation? Or (at the risk of bloating this package) should we implement them here?
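For context, here is a minimal sketch of the recurrence-based Newton setup under discussion: evaluating the Jacobi polynomial by the standard three-term recurrence (the role `innerjacobi_rec` plays in `gaussjacobi.jl`) and using it in a Newton step for a node. The function names are illustrative, not the package's actual internals:

```julia
# Sketch: evaluate P_n^(a,b)(x) by the classical three-term recurrence.
# (Illustrative stand-in for what `innerjacobi_rec` does; not the real code.)
function jacobi_rec(n, a, b, x)
    n == 0 && return one(x)
    pm1, p = one(x), (a - b) / 2 + (a + b + 2) / 2 * x
    for k in 2:n
        c = 2k + a + b
        A = (c - 1) * (c * (c - 2) * x + a^2 - b^2)
        B = 2 * (k + a - 1) * (k + b - 1) * c
        pm1, p = p, (A * p - B * pm1) / (2k * (k + a + b) * (c - 2))
    end
    return p
end

# dP_n^(a,b)/dx = (n+a+b+1)/2 * P_{n-1}^(a+1,b+1)
djacobi_rec(n, a, b, x) = (n + a + b + 1) / 2 * jacobi_rec(n - 1, a + 1, b + 1, x)

# One node via Newton iteration from an initial guess x0.
function newton_node(n, a, b, x0; tol = 1e-15, maxit = 20)
    x = x0
    for _ in 1:maxit
        dx = jacobi_rec(n, a, b, x) / djacobi_rec(n, a, b, x)
        x -= dx
        abs(dx) < tol && break
    end
    return x
end
```

The recurrence itself is fine in the interior; the accuracy loss at issue here comes from using it for nodes clustered near ±1, where Chebfun instead switches to the `feval_asy2` boundary asymptotics.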
from fastgaussquadrature.jl.
Thanks for your responses!
Unless I'm mistaken, it doesn't appear that Chebyshev integration is currently implemented in ClassicalOrthogonalPolynomials.jl. As an alternative, ApproxFunOrthogonalPolynomials.jl has both Chebyshev integration and differentiation implemented. However, it is similarly a circular dependency with this package...
In my view, part of the value of FastGaussQuadrature.jl is that it is extremely lightweight, with relatively few dependencies. I could use ApproxFunOrthogonalPolynomials.jl to solve this issue, but it's a heavy (not to mention circular) dependency, so from a software engineering perspective it doesn't seem like a great path forward. It's a hefty price to pay for some basic Chebyshev routines.
The alternative is to use some lookups and reimplement very barebones routines for the necessary computations. This is the path I've taken in my fork: I've saved the 10-point Chebyshev integration and differentiation matrices from Chebfun as constants in gaussjacobi.jl, and reimplemented only the very short barycentric interpolation routine. This is pretty painless and keeps the repository lightweight.
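For reference, a "very short barycentric interpolation routine" for Chebyshev points of the second kind can be as small as the following sketch, using the fact that the barycentric weights reduce to (-1)^j with the endpoint weights halved. This is illustrative of the technique, not the exact code in the fork:

```julia
# Barycentric interpolation at Chebyshev points of the second kind.
# fvals holds samples at cos(j*pi/n), listed in ascending order on [-1, 1].
function bary_cheb(x, fvals)
    n = length(fvals) - 1
    xk = [cos(pi * j / n) for j in n:-1:0]   # ascending Chebyshev points
    num, den = zero(x), zero(x)
    for j in 0:n
        # weights: alternating signs, halved at the two endpoints
        w = (isodd(j) ? -1.0 : 1.0) * ((j == 0 || j == n) ? 0.5 : 1.0)
        d = x - xk[j + 1]
        d == 0 && return fvals[j + 1]        # exactly at an interpolation node
        num += w / d * fvals[j + 1]
        den += w / d
    end
    return num / den
end
```

Because the weights are known in closed form for this grid, no precomputation is needed, which is what keeps the routine so short.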
@dlfivefifty as a contributor to all the involved libraries, let me know if you think this is a reasonable solution. If so, I'll try to iron out the remaining numerical issues (all the existing package tests pass, but I still get ~11 digits on worst-case tests related to this issue, not 16) and submit a pull request.
It certainly is implemented:
```julia
julia> using ClassicalOrthogonalPolynomials

julia> T = ChebyshevT()
ChebyshevT()

julia> cumsum(T*[1; zeros(∞)])
ChebyshevT() * [1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0  …  ]
```
It's done by applying an ∞-almost-banded matrix.
But I think this is overkill for what we need.
Here's the equivalent routine in ApproxFunOrthogonalPolynomials.jl.
This is more usable. I think the best option is to make a new package ChebyshevTransforms.jl that contains:
- the Chebyshev transforms in https://github.com/JuliaApproximation/FastTransforms.jl/blob/master/src/chebyshevtransform.jl
- `chebyshevintegration!` and `chebyshevdifferentiation!`
FastChebInterp.jl could depend on these packages if they want. But I would avoid depending on that package directly, as it introduces an alternative notion of a Chebyshev expansion to either ApproxFun.jl or ClassicalOrthogonalPolynomials.jl. So having a more "core" package like ChebyshevTransforms.jl that doesn't actually implement convenient structs makes sense here.
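For illustration, the proposed `chebyshevintegration!` and `chebyshevdifferentiation!` could be little more than the standard coefficient recurrences for Chebyshev-T series. The names and non-mutating signatures below are assumptions about what such routines might contain, not an existing API:

```julia
# Indefinite integral: given coefficients c of f = Σ c[k+1]*T_k, return
# coefficients of ∫f, with the constant of integration set to zero (b[1] = 0).
# Uses ∫T_0 = T_1, ∫T_k = T_{k+1}/(2(k+1)) - T_{k-1}/(2(k-1)) for k ≥ 2.
function chebint(c::Vector{Float64})
    n = length(c)
    b = zeros(n + 1)
    cval(k) = 1 <= k + 1 <= n ? c[k + 1] : 0.0   # c_k, zero out of range
    b[2] = c[1] - cval(2) / 2                    # T_1 gets hits from c_0 and c_2
    for k in 2:n
        b[k + 1] = (cval(k - 1) - cval(k + 1)) / (2k)
    end
    return b
end

# Derivative: downward recurrence d_{k-1} = d_{k+1} + 2k*c_k, halving d_0.
function chebdiff(c::Vector{Float64})
    n = length(c) - 1                            # polynomial degree
    n == 0 && return [0.0]
    d = zeros(n)                                 # d[k] holds the T_{k-1} coefficient
    for k in n:-1:1
        d[k] = (k + 2 <= n ? d[k + 2] : 0.0) + 2k * c[k + 1]
    end
    d[1] /= 2
    return d
end
```

Both are O(n) on the coefficient vector, which is consistent with the "core, no convenient structs" framing above.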
If we run the following MATLAB code, which is an equivalent test using the implementation of the Hale and Townsend 2012 algorithm in Chebfun, we see much better results, even when using many nodes with a singular weight:
```matlab
a = -0.99;
ns = 2.^(1:16);
I_true = 2^(a+1)/(a+1);
Is_quad = zeros(length(ns), 1);
for i = 1:length(ns)
    [~, w] = jacpts(ns(i), a, 0);
    Is_quad(i) = sum(w);
end
rel_errs = (I_true - Is_quad) / I_true;
```
| n     | relative error |
|------:|---------------:|
| 2     | -4.516e-15 |
| 4     | 1.552e-15  |
| 8     | 4.145e-13  |
| 16    | 7.042e-13  |
| 32    | 1.437e-12  |
| 64    | 7.099e-12  |
| 128   | -3.387e-15 |
| 256   | 3.810e-15  |
| 512   | 0.000e+00  |
| 1024  | -3.528e-15 |
| 2048  | -1.411e-15 |
| 4096  | -2.258e-15 |
| 8192  | 4.234e-16  |
| 16384 | -7.056e-16 |
| 32768 | 4.234e-16  |
| 65536 | 1.694e-15  |
This seems to suggest an issue in the Julia implementation.
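As an independent sanity check using only the standard library (and explicitly not how either Chebfun or FastGaussQuadrature.jl computes the rule), Gauss–Jacobi nodes and weights for moderate n can be reproduced with the Golub–Welsch algorithm: eigenvalues and first eigenvector components of the symmetric Jacobi matrix built from the three-term recurrence coefficients. `golub_welsch_jacobi` is a hypothetical helper name:

```julia
using LinearAlgebra

# Golub–Welsch for the Jacobi weight (1-x)^a * (1+x)^b on [-1, 1].
# mu0 is the zeroth moment ∫ w(x) dx, passed in to avoid a gamma dependency.
function golub_welsch_jacobi(n, a, b, mu0)
    # diagonal (alpha_k) and off-diagonal (sqrt(beta_k)) recurrence coefficients
    alph = [k == 0 ? (b - a) / (a + b + 2) :
            (b^2 - a^2) / ((2k + a + b) * (2k + a + b + 2)) for k in 0:n-1]
    bet = [sqrt(4k * (k + a) * (k + b) * (k + a + b) /
                ((2k + a + b)^2 * (2k + a + b + 1) * (2k + a + b - 1)))
           for k in 1:n-1]
    E = eigen(SymTridiagonal(alph, bet))
    x = E.values                         # nodes, ascending
    w = mu0 .* abs2.(E.vectors[1, :])    # weights from first eigenvector components
    return x, w
end

# b = 0 case from the MATLAB test: mu0 = ∫ (1-x)^a dx = 2^(a+1)/(a+1)
a = -0.99
mu0 = 2^(a + 1) / (a + 1)
x, w = golub_welsch_jacobi(64, a, 0.0, mu0)
rel_err = (mu0 - sum(w)) / mu0
```

Golub–Welsch is O(n²)-to-O(n³) and so not a replacement for the asymptotic method at large n, but it provides trustworthy per-node reference values for diagnosing exactly where the Newton/recurrence path loses digits.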
If we take the Chebfun weights as ground truth (given that they integrate to machine precision in the table above), we can plot the discrepancy in the weights this package computes. [Plot not reproduced here; markers were sized and colored by error size for emphasis.] This is a more exaggerated case of exactly what @JamesCBremerJr noted in the aforementioned issue.
Thank you for the detailed comments!
PRs are always welcome.
> Do these Chebyshev routines exist elsewhere in JuliaApproximation?
I'm not very familiar with Chebyshev polynomials, but maybe FastChebInterp.jl helps here?
It should all be doable with ClassicalOrthogonalPolynomials.jl, but that depends on this package.
Ah, I see - I was looking for an `integrate`-like function instead of a `cumsum`.
Your ChebyshevTransforms.jl solution makes good sense to me. Given my lack of familiarity with the relevant interfaces in the JuliaApproximation ecosystem, I've submitted a pull request in the meantime which resolves this issue and #58 in a way that's completely internal to this package. It uses saved integration and differentiation matrices from Chebfun, as I mentioned above. It should be easy to substitute more elegant Chebyshev routines once the restructuring of the JuliaApproximation package ecosystem that you suggest occurs. But I imagine you may want to engineer that yourself as the manager of those packages.
Fixed by #131.