mpmath-2's Issues

Clamping

There should be an (optional) function for clamping a number inside a given
exponent range (generating an inf if necessary).
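A minimal sketch of what such a clamp could look like, operating on raw (man, exp) pairs; the function name and the inf/zero encodings here are illustrative only, not mpmath's actual internals:

```python
def fclamp(man, exp, min_exp, max_exp):
    """Clamp a raw (man, exp) value into an exponent window.

    Overflow maps to a signed infinity marker, underflow flushes to zero.
    """
    if man == 0:
        return (0, 0)
    if exp > max_exp:
        return 'inf' if man > 0 else '-inf'
    if exp < min_exp:
        return (0, 0)  # underflow: flush to zero
    return (man, exp)
```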

Original issue reported on code.google.com by [email protected] on 17 Feb 2008 at 9:32

one test fails on Debian

After applying the patch from issue 21, one test fails:

$ py.test 
============================= test process starts ==============================
executable:   /usr/bin/python  (2.4.5-candidate-1)
using py lib: /usr/lib/python2.4/site-packages/py <rev unknown>

mpmath/tests/test_bitwise.py[8] ........
mpmath/tests/test_compatibility.py[3] F..
mpmath/tests/test_convert.py[7] .......
mpmath/tests/test_diff.py[2] ..
mpmath/tests/test_division.py[6] ......
mpmath/tests/test_functions2.py[3] ...
mpmath/tests/test_hp.py[1] .
mpmath/tests/test_interval.py[2] ..
mpmath/tests/test_mpmath.py[26] ..........................
mpmath/tests/test_power.py[2] ..
mpmath/tests/test_quad.py[9] .........
mpmath/tests/test_rootfinding.py[1] .
mpmath/tests/test_special.py[4] ....
mpmath/tests/test_trig.py[3] ...

________________________________________________________________________________
____________________ entrypoint: test_double_compatibility _____________________

    def test_double_compatibility():
        mp.prec = 53
        mp.rounding = 'default'
        for x, y in zip(xs, ys):
            mpx = mpf(x)
            mpy = mpf(y)
            assert mpf(x) == x
            assert (mpx < mpy) == (x < y)
            assert (mpx > mpy) == (x > y)
            assert (mpx == mpy) == (x == y)
            assert (mpx != mpy) == (x != y)
            assert (mpx <= mpy) == (x <= y)
            assert (mpx >= mpy) == (x >= y)
            assert mpx == mpx
            assert mpx + mpy == x + y
            assert mpx * mpy == x * y
E           assert mpx / mpy == x / y
>           assert (mpf('-4.1974624032366689e+117') / mpf('-8.4657370748010221e-47')) == (-4.1974624032366689e+117 / -8.4657370748010221e-47)

[/home/ondra/ext/mpmath/mpmath/tests/test_compatibility.py:35]
________________________________________________________________________________
============= tests finished: 76 passed, 1 failed in 28.46 seconds =============

Original issue reported on code.google.com by [email protected] on 10 Mar 2008 at 2:13

Root-finding

The bisection and secant root-finding functions from sympy.numerics should
be implemented.
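For reference, the bisection half can be sketched as follows. Plain floats are used here for illustration; the sympy.numerics/mpmath version would operate on mpf values and tie the tolerance to the working precision:

```python
def bisect(f, a, b, tol=1e-12):
    """Find a root of f in [a, b] by repeated interval halving."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        m = (a + b) / 2.0
        fm = f(m)
        if fa * fm <= 0:
            b, fb = m, fm   # root lies in the left half
        else:
            a, fa = m, fm   # root lies in the right half
    return (a + b) / 2.0
```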

Original issue reported on code.google.com by [email protected] on 16 Feb 2008 at 11:56

Missing standard functions

Available in math or cmath:
log10
degrees
radians
frexp
pow (perhaps named differently to avoid mixup with the builtin)
modf
fabs

Some other functions that could be useful:
ln (just an alias for log)
cbrt (cube root)
nthroot for x^(1/n), maybe powpq(x,p,q) for x^(p/q)
sind or sindg, cosd, etc for trigonometric functions with degree arguments
round (perhaps named differently to avoid mixup with the builtin)

More:
List of functions in SciPy: http://www.scipy.org/SciPyPackages/Special
List of functions in Matlab: http://tinyurl.com/6eq8vd
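Several of these would be thin wrappers. A sketch using plain math-module floats, with names following the proposals above (the real versions would dispatch to mpf arithmetic):

```python
import math

def cbrt(x):
    """Real cube root, defined for negative x as well."""
    return math.copysign(abs(x) ** (1.0 / 3.0), x)

def nthroot(x, n):
    """x**(1/n) for positive x and integer n >= 1."""
    return x ** (1.0 / n)

def sindg(x):
    """Sine with the argument given in degrees."""
    return math.sin(math.radians(x))
```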

Original issue reported on code.google.com by [email protected] on 4 Jul 2008 at 1:11

Change internal representation of numbers

The representation should be changed from (man, exp, bc) to (sign, man,
exp, bc). Preliminary tests show that this gives improved performance.
Also, some IEEE 754 features like signed zero can be emulated in a more
natural way.
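To illustrate the proposed layout (field names here are ad hoc): the value is (-1)**sign * man * 2**exp with a nonnegative mantissa, so an explicit sign field makes -0 representable, which the 3-tuple form cannot express:

```python
def make_raw(sign, man, exp):
    # value = (-1)**sign * man * 2**exp; bc caches man.bit_length()
    return (sign, man, exp, man.bit_length())

pos_zero = make_raw(0, 0, 0)       # +0
neg_zero = make_raw(1, 0, 0)       # -0 becomes distinguishable
three_halves = make_raw(0, 3, -1)  # 3 * 2**-1 == 1.5
```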

Original issue reported on code.google.com by [email protected] on 16 Feb 2008 at 12:06

implement precise ODE solvers

I'd like to have something to solve any ODE, where I'd tell it:

I want 15 digits (or 100 digits) and I want them all right.

And the algorithm needs to determine the step size and the working
precision automatically, so that the result is correct to 15 (or 100)
digits, whatever I specify.
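One standard way to approach this (a sketch of the idea, not a proposed mpmath API): take a step, retake it as two half steps, and shrink h until the two answers agree to the requested tolerance. A real arbitrary-precision solver would also raise the working precision along with the tolerance:

```python
def rk4_step(f, x, y, h):
    """One classical Runge-Kutta step for y' = f(x, y)."""
    k1 = f(x, y)
    k2 = f(x + h/2, y + h*k1/2)
    k3 = f(x + h/2, y + h*k2/2)
    k4 = f(x + h, y + h*k3)
    return y + h*(k1 + 2*k2 + 2*k3 + k4)/6

def solve_to_tol(f, x0, y0, x1, tol=1e-10):
    """Integrate y' = f(x, y) from x0 to x1, halving h until step doubling agrees."""
    x, y = x0, y0
    h = (x1 - x0) / 10.0
    while x < x1:
        h = min(h, x1 - x)
        big = rk4_step(f, x, y, h)
        half = rk4_step(f, x + h/2, rk4_step(f, x, y, h/2), h/2)
        if abs(big - half) > tol:
            h /= 2          # reject the step and retry with a smaller h
            continue
        x, y = x + h, half  # accept the more accurate two-half-step value
    return y
```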



Original issue reported on code.google.com by [email protected] on 24 Mar 2008 at 11:23

patch for tests to execute on linux

The attached patch makes the tests work on linux. This works now (but
didn't before):

$ cd mpmath
$ py.test
[...]

I can apply it myself, but I want you to check it first. How do you execute
tests?

Original issue reported on code.google.com by [email protected] on 10 Mar 2008 at 2:12

mpmath doesn't work with python2.4

ondra@fuji:~/ext/mpmath-svn$ python2.4
Python 2.4.5 (#2, Jun 25 2008, 14:11:58) 
[GCC 4.3.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import mpmath
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "mpmath/__init__.py", line 5, in ?
    from mptypes import *
  File "mpmath/mptypes.py", line 18, in ?
    from libmpc import *
  File "mpmath/libmpc.py", line 342, in ?
    alpha_crossover = from_float(1.5)
  File "mpmath/lib.py", line 458, in from_float
    return from_man_exp(int(m*(1<<53)), e-53, prec, rnd)
  File "mpmath/lib.py", line 387, in from_man_exp
    return normalize(sign, man, exp, bc, prec, rnd)
  File "mpmath/lib.py", line 282, in _normalize
    t = trailtable[man & 255]
TypeError: list indices must be integers
>>> 

Original issue reported on code.google.com by [email protected] on 5 Jul 2008 at 5:48

Issues with mp.dps and exp function

What steps will reproduce the problem?
--------------------------------------

Run the following python code:

# Test Exp function
from mpmath import *

# set precision and rounding
mp.dps = 512
mp.rounding = 'nearest'

print 'Test 1'
z1 = mpc('-1.0', '0.0')
nprint(z1, 17)
z2 = exp(z1)
nprint(z2, 17)

print 'Test 2'
z3 = mpc(-1.0, 0.0)
nprint(z3, 17)
z4 = exp(z3)
nprint(z4, 17)

What is the expected output?
----------------------------

Test 1
(-1.0 + 0.0j)
(0.36787944117144233 + 0.0j)
Test 2
(-1.0 + 0.0j)
(0.36787944117144233 + 0.0j)

What do you see instead?
------------------------

Test 1
(-1.0 + 0.0j)
(2.7182818284590452 + 0.0j)
Test 2
(-1.0 + 0.0j)
(2.7182818284590452 + 0.0j)

What version of the product are you using? On what operating system?
--------------------------------------------------------------------

Windows XP
Python 2.5.1
mpmath 0.7

Please provide any additional information below.
-----------------------------------------------

If I comment out the precision statement: mp.dps = 512
then the returned answer is correct.

mp.dps=64 works OK
mp.dps=80 works OK

Is there some problem with memory allocation?

This bug causes big time havoc.
I'll try a later version of python and see what happens.

Regards
Richard Lyon



Original issue reported on code.google.com by [email protected] on 25 Mar 2008 at 6:35

Polynomials

Polynomial evaluation (with derivative) and polynomial root-finding should
be implemented.

Most of the code already exists in sympy.numerics.
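The evaluation half is a short Horner loop; a plain-float sketch returning p(x) and p'(x) together, with coefficients given from the highest degree down:

```python
def polyval_with_deriv(coeffs, x):
    """Evaluate p(x) and p'(x) by Horner's rule; coeffs from highest degree down."""
    p, dp = coeffs[0], 0
    for c in coeffs[1:]:
        dp = dp*x + p   # derivative accumulates one step behind
        p = p*x + c
    return p, dp
```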

Original issue reported on code.google.com by [email protected] on 16 Feb 2008 at 11:55

MPF/MPC do not accept unicode as constructor parameter

Actual output with mpmath 0.5 using python 2.5 compiled from release:

>>> a = "2.76"
>>> b = u"2.76"
>>> mpf( a )
mpf('2.7599999999999998')
>>> mpf( b )
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.5/site-packages/mpmath/mpmath.py", line 146, in
__new__
    return +convert_lossless(val)
  File "/usr/lib/python2.5/site-packages/mpmath/mpmath.py", line 36, in
convert_lossless
    raise TypeError("cannot create mpf from " + repr(x))
TypeError: cannot create mpf from u'2.76'
>>> 


What is the expected output? What do you see instead?
Expected behavior is to be identical to python primitives e.g.

>>> float( "2.76" )
2.7599999999999998
>>> float( u"2.76" )
2.7599999999999998
>>> 


Original issue reported on code.google.com by [email protected] on 19 Dec 2007 at 12:03

pickling fails with mpmath numbers

Hello, in order to save large amounts of high-precision data,
I need to serialize mpmath numbers. Unfortunately, this fails with an
exception I don't understand, since the method __getstate__ seems to be
defined (see below). Converting to and from strings is only a temporary
option, because it is way too slow and wastes space.

Example:

In [2]:import mpmath

In [3]:a = mpmath.mpc(1+2j)

In [4]:a
Out[4]:mpc(real='1.0', imag='2.0')

In [5]:import pickle

In [6]:pickle.dumps(a)

Results in:

<type 'exceptions.TypeError'>: a class that defines __slots__ without
defining __getstate__ cannot be pickled

What version of the product are you using? On what operating system?
Python 2.5
mpmath 0.7
on opensuse 10.2



Original issue reported on code.google.com by [email protected] on 26 Mar 2008 at 2:43

gmpy support

I've attached a file that adds gmpy support. The patches are against r498.

The newly released gmpy v1.03 is required. Testing with mpmath uncovered a
couple of serious bugs in gmpy on 64-bit platforms.

Performance for runtests.py:

mpmath, r498 from svn: 25.3 seconds
with patch, but gmpy not present: 25.7 seconds
with patch, gmpy present: 22.7 seconds

The performance improvements become significant when the precision exceeds
100 digits.

What version of the product are you using? On what operating system?
Patches against svn r498. O/S is Ubuntu 8.04 on Centrino Duo using gmpy
1.03 and a Core2 optimized version of GMP 4.2.2.

Please provide any additional information below.
I tried to optimize bitcount() and the square root functions but I haven't
done extensive testing.

Original issue reported on code.google.com by casevh on 26 Jun 2008 at 5:15

speed-up for diffc and TS_node

In the first attached file there is a patch with trivial changes
which speed-up diffc and TS_node.
diffc is 10% faster
In TS_node ldexp has been used when possible; in the example
  4*quadts(lambda x: sqrt(1-x**2), 0, 1)
the first evaluation is on my computer
15% faster for dps < 50, 10% for dps = 100, t% for dps = 200

In the second attached file there is another modification for
TS_node, which saves the computation of an exponential;
in the above example it gives a speed-up of
35% for dps < 100 and 40% for dps = 200.
Also this modification passes runtests.py, but it might have
some precision problems; comments are welcome.

Original issue reported on code.google.com by [email protected] on 22 Mar 2008 at 11:44

mpmath does not interact with float nan's/inf's correctly

What steps will reproduce the problem?
What is the expected output? What do you see instead?

See two examples below

>>> """ Example 1 mpf * floating point inf """
' Example 1 mpf * floating point inf '
>>> mpmath.mpf( '1.2345' ) * mpmath.mpf( 'inf' )
mpf('+inf')
>>> mpmath.mpf( '1.2345' ) * float( 'inf' )
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.5/site-packages/mpmath/mptypes.py", line 214, in
__mul__
    return s.binop(t, fmul)
  File "/usr/lib/python2.5/site-packages/mpmath/mptypes.py", line 196, in binop
    t = mpf_convert_rhs(t)
  File "/usr/lib/python2.5/site-packages/mpmath/mptypes.py", line 77, in
mpf_convert_rhs
    return make_mpf(from_float(x, 53, round_floor))
  File "/usr/lib/python2.5/site-packages/mpmath/lib/floatop.py", line 218,
in from_float
    m, e = math.frexp(x)
OverflowError: math range error
>>>

>>> "Example 2  mpf * floating point nan"
'Example 2  mpf * floating point nan'
>>> mpmath.mpf( '1.2345' ) * mpmath.mpf( 'nan' )
mpf('nan')
>>> mpmath.mpf( '1.2345' ) * float( 'nan' )
mpf('0.0')

What version of the product are you using? On what operating system?
0.6 linux

Please provide any additional information below.


Original issue reported on code.google.com by [email protected] on 20 Jan 2008 at 8:19

Add pickle support

Classes that define __slots__ need to define also __setstate__,
__getstate__ methods for pickling support.
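A minimal sketch of the fix, using a stand-in class (RawFloat and its single slot are hypothetical, not mpmath's actual type):

```python
import pickle

class RawFloat(object):
    """Stand-in for a number type that uses __slots__."""
    __slots__ = ['_mpf_']

    def __init__(self, val):
        self._mpf_ = val

    def __getstate__(self):
        # return the slot contents as the pickled state
        return self._mpf_

    def __setstate__(self, state):
        # restore the slot on unpickling (__init__ is not called)
        self._mpf_ = state

x = RawFloat((0, 3, -1, 2))
y = pickle.loads(pickle.dumps(x))
```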

Original issue reported on code.google.com by pearu.peterson on 13 Mar 2008 at 8:41

Improve Lambert W function

http://en.wikipedia.org/wiki/Lambert_W_function

and the code that implements it:

"
import math

def lambertW(x, prec = 1E-12, maxiters = 100):
    w = 0
    for i in range(maxiters):
        we = w * pow(math.e,w)
        w1e = (w + 1) * pow(math.e,w)
        if prec > abs((x - we) / w1e):
            return w
        w -= (we - x) / (w1e - (w+2) * (we-x) / (2*w+2))
    raise ValueError("W doesn't converge fast enough for abs(z) = %f" % abs(x))
"

Original issue reported on code.google.com by [email protected] on 10 Mar 2008 at 2:03

Faster and more accurate complex arithmetic

For improved efficiency, complex multiplication and division should be
implemented using integers instead of creating temporary float values.
These operations should also be done losslessly or at least nearly losslessly.
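For the multiplication, the classical three-multiplication rearrangement applies directly to exact integer components, so a single rounding step at the end suffices; a sketch:

```python
def cmul_exact(a, b, c, d):
    """(a+bi)*(c+di) on exact integer components, using three multiplications."""
    t1 = a * c
    t2 = b * d
    t3 = (a + b) * (c + d)
    # real = ac - bd, imag = ad + bc = t3 - t1 - t2
    return t1 - t2, t3 - t1 - t2
```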

Original issue reported on code.google.com by [email protected] on 16 Feb 2008 at 12:01

mpmath in sympy

Do you think you could copy the latest version to sympy and make sure it
works? There are some problems with the abs (the one in sympy should be
renamed to abs_); see the related issue in sympy for that. I am trying to
fix the solvers and matching functions, but the numerics problems should
also be fixed before we can release.

Thanks a lot.

Original issue reported on code.google.com by [email protected] on 3 Oct 2007 at 5:23

Making the interface less dependent on the implementation

See http://wiki.sympy.org/wiki/Generic_interface

As a first step, the attached patch makes an attempt to separate the
dependency on the class hierarchy in the way mathematical properties are
checked for, by replacing isinstance() calls with property checks. The
.func property is also made to work for all objects, and used to check for
exp and log.

Theoretically, using properties should be faster than calling isinstance,
but it might not be so in practice, due to traversal of the class hierarchy
to look up properties (this can be fixed). I did not do any detailed
timings, but I know that the tests ran in ~70 seconds before I started
making changes, and in ~70 seconds after, so this certainly doesn't cause
any major slowdowns.

As usual when sweeping over so much code, I noticed lots of (mostly minor)
bugs and oddities.

I had to do the substitutions manually, as there is quite a lot of code in
SymPy that mixes Basic and non-Basic instances. This is something that
should generally be avoided, unless it is clearly commented (much of the
time it is probably unintended).

In particular, there is a lot of code that looks like

    if isinstance(x, Symbol):
        do_stuff_once(x)
    else:
        do_stuff_repeatedly(x)

whereas the following would be clearer and less error-prone:

    if type(x) is tuple:
        do_repeated_stuff(x)
    x = sympify(x)
    if x.is_Symbol:
        do_symbolic_stuff(x)
    raise ValueError

I think the fact that

    isinstance(<non basic object>, <BasicSubclass>)

stops working when removing the isinstance idiom is an advantage, as it
stops non-sympified objects from silently falling through and causing
trouble far away from where they first appeared.

In re, im, the following idiom is used:

    if not arg.is_Add:
        term_list = [arg]

    if isinstance(arg, Basic):
        term_list = arg.args

This idiom also occurs in one place in integrals.py
and in basic.py. Is there a reason why this is not written as

    if arg.is_Add:
        term_list = arg.args
    else:
        term_list = [arg]

?

I removed the test 'RandomString': 'RandomString' from test_sqrtdenest; it
seems nonsensical to just let invalid input slip through instead of raising
an exception.

Some Function subclasses call sympify inside canonize() while others don't.
But sympify is always called in Function.__new__ before canonize gets
called, so this shouldn't be necessary. I think I fixed most cases of this.

I noticed that max_ and min_ were broken because their canonize methods
were not defined as classmethods; this has been fixed.

One file contained mixed space/tab indentations, causing me some debugging
headache (my editor shows tabs as 4 spaces). Please use spaces everywhere!



Next step might be to replace instances of "x is S.obj" with x.is_obj. (In
many cases, where several singletons are checked for (as in some canonize
methods), it would be even better to use a table lookup).

Original issue reported on code.google.com by [email protected] on 14 Mar 2008 at 4:56

Implement all hypergeometric functions

Mpmath should be able to compute nearly all the functions listed on this
page (which are special cases of the general hypergeometric series which
mpmath now knows how to compute):

http://documents.wolfram.com/mathematica/Built-inFunctions/MathematicalFunctions/HypergeometricRelated/

In many cases implementing a function is simply a matter of translating
the appropriate formula to code and writing tests to verify that no typo
was made. (It may be necessary to watch out for cancellation effects at
special points.)

The 0F1 and 1F1 series converge for all z, but 2F1 only converges for |z| <
1. For functions based on 2F1, variable transformations have to be used, if
they exist at all.
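For |z| < 1, direct summation is straightforward. A plain-float sketch of the Gauss series (the mpf version would add working precision and tie the termination test to the target precision):

```python
def hyp2f1(a, b, c, z, tol=1e-15, maxterms=10000):
    """Sum the Gauss hypergeometric series 2F1(a,b;c;z); only valid for |z| < 1."""
    s = t = 1.0
    for k in range(maxterms):
        # ratio of consecutive terms of the series
        t *= (a + k) * (b + k) * z / ((c + k) * (k + 1))
        s += t
        if abs(t) < tol * abs(s):
            return s
    raise ValueError("series did not converge")
```

For example, 2F1(1,1;2;z) = -log(1-z)/z and 2F1(a,b;b;z) = (1-z)**-a give easy checks.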

The bigger challenge is to implement 2F1 for arbitrary z. I have looked at
using the two integral representations given on
http://mathworld.wolfram.com/HypergeometricFunction.html, but they are
nearly useless: the Euler integral has horrible endpoint singularities that
generally seem to fool the tanh-sinh algorithm, and the Barnes integral
oscillates wildly. Is there a trick to compute these integrals reliably?

Otherwise, the only method I know of to compute 2F1 is to use a generic ODE
solver to integrate the hypergeometric differential equation (the method is
described in Numerical Recipes). This will be slow, but if it works, it is
better than nothing.

Original issue reported on code.google.com by [email protected] on 23 Mar 2008 at 8:22

Making a new release

It's been too long since the last release, given all the new features
(especially the GMPY support).

I think issues 40, 41 and 42 should be fixed first, though. It would also
be nice to include Vinzent's solvers module. Anything else? What parts of
the documentation need to be updated?

Unfortunately, we're running out of version numbers :) I don't have any
definite plans for 1.0, but I'd maybe like to make some fundamental
interface changes before then, and it'd be good to have at least one major
release in between.

I could also just release the current code immediately as 0.8.1 and
postpone 0.9.

Thoughts?

Original issue reported on code.google.com by [email protected] on 24 Jul 2008 at 4:28

Interval arithmetic: pow(0,...)

What steps will reproduce the problem?

>>> from mpmath import mpi
>>> mpi(0,1)**2

What is the expected output? What do you see instead?

Expect [0,1].  But the __pow__ method doesn't seem to handle 0 in the base
with 2 in the exponent, so instead I get:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.5/site-packages/mpmath/apps/interval.py", line
129, in __pow__
    assert s.a >= 1 and s.b >= 1
AssertionError

What version of the product are you using? On what operating system?

0.6 on Linux, Python 2.5.

Please provide any additional information below.

For now I've replaced x**2 with x*x (luckily I only needed an integer
exponent). But presumably this case ought to work with ** too...
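For even integer exponents the fix is a standard case split; a sketch on plain float endpoints, ignoring the directed rounding a real interval type needs:

```python
def interval_sqr(a, b):
    """Square of the interval [a, b]; handles a base that straddles zero."""
    lo, hi = a*a, b*b
    if a <= 0 <= b:
        # the minimum of x**2 over the interval is attained at 0
        return (0, max(lo, hi))
    return (min(lo, hi), max(lo, hi))
```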

Original issue reported on code.google.com by [email protected] on 26 Jan 2008 at 12:46

change the sorting order of issues

If you agree, please add:

Milestone Priority -ID

into Administration -> Issue Tracking -> Default sorting order:

(of course if you like the result). This is what we have in sympy.

Original issue reported on code.google.com by [email protected] on 10 Mar 2008 at 2:14

absolute imports

Two imports use the style "from mpmath.lib import ..". This prevents
importing as sympy.thirdparty.mpmath. Could you please change them?

They are in specfun.py at lines 611 and 612.

Sebastian

Original issue reported on code.google.com by [email protected] on 6 May 2008 at 2:26

Problem mixing mpc and numpy arrays

See http://groups.google.com/group/sympy/browse_thread/thread/c2f5936bc59faf24

There are various places where NotImplementedError should be raised instead
of TypeError.

Original issue reported on code.google.com by [email protected] on 18 May 2008 at 12:12

mpfs are not eval-repr-invariant at some precision levels

It seems mpfs can be recreated from their string representation at the
default precision. But the conversion can fail at some other levels.

There should be functions in mpmath.lib for translating between decimal and
binary precisions, with different use of guard digits etc.
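Such a translation is essentially multiplication by log2(10) ≈ 3.32 plus guard digits. A sketch of one possible pairing (the exact guard-digit policy here is an assumption, not mpmath's formula):

```python
import math

LOG2_10 = math.log(10, 2)   # about 3.3219 bits per decimal digit

def dps_to_prec(dps):
    """Binary precision needed for dps decimal digits, with one guard digit."""
    return max(1, int(round((dps + 1) * LOG2_10)))

def prec_to_dps(prec):
    """Decimal digits that prec bits can safely represent."""
    return max(1, int(round(prec / LOG2_10)) - 1)
```

With this pairing a round trip through the binary precision preserves the decimal setting.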

Original issue reported on code.google.com by [email protected] on 16 Feb 2008 at 12:18

Polyroots 1-coefficient lists

What steps will reproduce the problem?
1. let n = an integer
2. call polyroots([n])

Error from Issue 876, sympy :
http://code.google.com/p/sympy/issues/detail?id=876
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)

/home/ondra/sympy/<ipython console> in <module>()

/home/ondra/sympy/sympy/thirdparty/mpmath/calculus.py in polyroots(coeffs, maxsteps, cleanup, extraprec, error)
    252         err = [mpf(1) for n in range(deg)]
    253         for step in range(maxsteps):
--> 254             if max(err).ae(0):
    255                 break
    256             for i in range(deg):

What is the expected output? What do you see instead?
if n == 0
probably an error.
if n != 0
empty list, []


What version of the product are you using? On what operating system?
svn trunk, linux

Please provide any additional information below.
I assumed that polyroots([0]) should throw a ValueError since 0 == 0 is a 
tautology. polyroots([n]) for n != 0 will give [], since there are no 
roots.

Original issue reported on code.google.com by [email protected] on 28 Jun 2008 at 8:59

Suggested big renaming to avoid internal *-imports

lib -> libmpf
lib.ffunction -> libmpf.function
libmpc.mpc_function -> libmpc.function

For example,

  from lib import *
  fadd(x,y,prec)
  fmul(x,y,prec)

becomes:

  import libmpf
  libmpf.add(x,y,prec)
  libmpf.mul(x,y,prec)

This would make parts of the code verbose. However, code that makes
frequent use of e.g. libmpf.add can still simply rebind this function
locally as e.g. fadd.

I think Ondrej would approve?

Original issue reported on code.google.com by [email protected] on 5 Jul 2008 at 4:02

more range-like behaviour of arange

What about something like this?

def arange(*args):
    """arange([a,] b[, dt]) -> list [a, a + dt, a + 2*dt, ...], stopping before b"""
    if len(args) > 3:
        raise TypeError('arange expected at most 3 arguments, got %i'
                        % len(args))
    if len(args) < 1:
        raise TypeError('arange expected at least 1 argument, got %i'
                        % len(args))
    # set defaults
    a = 0
    dt = 1
    # interpret arguments
    if len(args) == 1:
        b = args[0]
    elif len(args) >= 2:
        a = args[0]
        b = args[1]
    if len(args) == 3:
        dt = args[2]
    a, b, dt = mpf(a), mpf(b), mpf(dt)
    result = []
    i = 0
    while 1:
        t = a + dt*i
        i += 1
        if t < b:
            result.append(t)
        else:
            break
    return result

Maybe there should be a warning when dt <= eps; such a small dt
takes forever anyway.
(Sorry for not submitting a patch)

Original issue reported on code.google.com by [email protected] on 21 Apr 2008 at 7:42

secant fails for multiple roots

>>> from mpmath import secant
>>> f = lambda x: (x-1)**100
>>> secant(f, 0)
mpf('0.33989945043882264')
>>> secant(f, 0, 3)
mpf('-4.7331654313260708e-30')
>>> g = lambda x: x**2
>>> secant(g, -2)
mpf('-0.00010708112159826222')
>>> secant(g, -2, 3)
mpf('-0.0003107520198881292')

This is an algorithmical problem inherited from Newton's method, which
converges slowly for multiple roots.

A solution could be adding a modified Newton's method like this:

x_{k+1} = x_k - F(x_k)/F'(x_k) with F(x) = f(x)/f'(x)
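A sketch of that iteration with caller-supplied derivatives (a real solver would approximate f' and f'' numerically). Since F = f/f' has only simple zeros, fast convergence near a multiple root is restored:

```python
def modified_newton(f, df, ddf, x, maxsteps=50, tol=1e-12):
    """Newton's method applied to F = f/f', which has only simple zeros."""
    for _ in range(maxsteps):
        fx, dfx = f(x), df(x)
        # the update F/F' simplifies to f*f' / (f'**2 - f*f'')
        denom = dfx*dfx - fx*ddf(x)
        if denom == 0:
            break               # converged, or a genuinely singular point
        step = fx*dfx / denom
        x -= step
        if abs(step) < tol:
            break
    return x
```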

Original issue reported on code.google.com by [email protected] on 13 Jun 2008 at 8:55

allow to use python float & complex instead of mpf, mpc

mpf and mpc are a lot slower than Python floats and complexes. Sometimes
I'd like to take advantage of all the nice algorithms in mpmath (like
special functions and ODE solvers), but I'd like them to execute fast
(using the Python float & complex classes) and I don't mind some rounding errors.

Imho something like

mpf = float
mpc = complex

should be enough, but it needs to be hooked up in mpmath somehow.

Original issue reported on code.google.com by [email protected] on 24 Mar 2008 at 12:14

diffc test fails on Debian (2.6.22-3-amd64)

What steps will reproduce the problem?
1. running 'python runtests.py'
2. a call to mpmath.gamma(mpmath.mpf('0.25'))

What is the expected output? What do you see instead?

Failing test output is attached.

The call to the gamma function should return a number.  Current output is a
list index out of range error:

Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/usr/lib/python2.4/site-packages/mpmath/specfun.py", line 239, in gamma
    prec, a, c = _get_spouge_coefficients(mp.prec + 8)
  File "/usr/lib/python2.4/site-packages/mpmath/specfun.py", line 170, in
_get_spouge_coefficients
    coefs = _calc_spouge_coefficients(a, prec)
  File "/usr/lib/python2.4/site-packages/mpmath/specfun.py", line 144, in
_calc_spouge_coefficients
    c[k] = _fix(((-1)**(k-1) * (a-k)**k) * b / sqrt(a-k), prec)
  File "/usr/lib/python2.4/site-packages/mpmath/mptypes.py", line 372, in
__rmul__
    r._mpf_ = fmuli(s._mpf_, t, g_prec, g_rounding)
  File "/usr/lib/python2.4/site-packages/mpmath/lib.py", line 587, in fmuli
    else:      bc += bctable[man>>bc]
IndexError: list index out of range


What version of the product are you using? On what operating system?

mpmath-0.7, on Debian Linux (2.6.22-3-amd64 #1 SMP)

Please provide any additional information below.


Original issue reported on code.google.com by [email protected] on 13 Mar 2008 at 7:10

Accuracy of trigonometric functions

Sin and cos lose relative accuracy close to zeros other than at x = 0 due
to the precision of the fixed-point arithmetic not being increased. The
implementation should be able to detect when this occurs and adapt accordingly.
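The effect can be reproduced with a toy fixed-point Taylor evaluation: near a zero of sin, the fixed-point result carries only absolute accuracy ~2**-wp, so the tiny result has few correct bits unless wp is raised (sin_fixed below is an illustration, not mpmath's implementation):

```python
import math

def sin_fixed(x, wp):
    """Taylor series for sin(x), evaluated in fixed point with wp fractional bits."""
    one = 1 << wp
    xf = int(x * one)      # argument as a fixed-point integer
    s = t = xf
    k = 1
    while t:
        # next alternating term, with truncating fixed-point divisions
        t = -t * xf // one * xf // one // ((2*k) * (2*k + 1))
        s += t
        k += 1
    return s / one

x = 3.14159265             # close to a zero of sin; sin(x) is about 3.6e-9
crude = sin_fixed(x, 53)   # absolute error ~2**-53, so poor *relative* accuracy
sharp = sin_fixed(x, 100)  # extra working bits restore relative accuracy
```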

Original issue reported on code.google.com by [email protected] on 16 Feb 2008 at 12:15

port all things from sympy/numerics

Let's port all things from sympy/numerics to mpmath. Let's only close this
issue when everything is ported.

We'll then remove sympy/numerics from sympy and will just use mpmath for
everything that doesn't need symbolic manipulation.

Original issue reported on code.google.com by [email protected] on 10 Mar 2008 at 2:23

Performance tips?

I dropped "mpmath" into my iterative transformation grapher (itgrapher)
GIMP plugin.  I replaced all occurrences of float() with mpf().  It worked
but it was much slower than "math".

Based on your benchmark data, I expected an improvement in performance just
for dropping it in (on equations that don't overflow with "math" and
therefore don't need the extra precision).  Do I need to explicitly limit
the precision to get those speed gains?

I had turned to mpmath because I had overflows with the exp() operation. 
Prior to porting itgrapher to Python-fu, it was in PERL, where there was no
trouble.  (BTW, thanks for working to make the transition easier!)

So, if I need to limit the precision most of the time, I'm going to need a
way to open it up when needed. Can I detect overflows and then repeat
operations with higher precision? Do you raise exceptions, and if so, which?

I'm using mpmath 0.6 and Python 2.5 and GIMP 2.4

Original issue reported on code.google.com by [email protected] on 18 Feb 2008 at 4:32

sqrt with interval arithmetic doesn't work

>>> x=mpi('100')
>>> sqrt(x)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python25\lib\site-packages\mpmath\mptypes.py", line 643, in f
    x = convert_lossless(x)
  File "C:\Python25\lib\site-packages\mpmath\mptypes.py", line 196, in
convert_l
ossless
    raise TypeError("cannot create mpf from " + repr(x))
TypeError: cannot create mpf from [100.0, 100.0]
>>> x**.5
[9.9999999999999964473, 10.000000000000003553]

Original issue reported on code.google.com by [email protected] on 24 May 2008 at 10:38

fcexp bug fix and speed-up for fsin

What steps will reproduce the problem?
The following test fails:

from mpmath import *

N = 10000
for dps in [15, 30, 50, 75, 100, 200]:
    mp.dps = dps
    for i in range(-N, N):
        a = 2*i*pi/N
        e = exp(j*a)
        assert e.imag == sin(a) and e.real == cos(a)

In Python this test passes

from cmath import exp
from math import cos, sin, pi
N = 10000
for i in range(-N, N):
   a = 2*i*pi/N
   e = exp(1j*a)
   assert e.imag == sin(a) and e.real == cos(a)

What version of the product are you using? On what operating system?
mpmath rev.415,  686 GNU/Linux

To fix this one can write

def fcexp(a, b, prec, rounding):
    if a == fzero:
        return cos_sin(b, prec, rounding)
    # continue as before

---
Currently sin(a) calls cos_sin, which computes both sin(a) and cos(a),
using the Taylor expansion of sin and computing cos with a square root.
At low precision it is faster to compute sin(a) using only one Taylor
expansion; the attached file contains an implementation.
To satisfy the identities
exp(j*a).real == cos(a); exp(j*a).imag == sin(a)
the working precision has been increased; in fact the result
of computing cos(a) with a square root (in cos_sin, called by fcexp)
must be equal at the _mpf_ level to cos(a) computed with a Taylor expansion;
trying the above example for dps in [15, 30, 50, 75, 100, 200]
the minimum extra precision to pass this test turns out
to be 7 on my computer.

On my computer (686 GNU/Linux) the speed-up is around
30% for dps < 30,  10% for dps = 200.


Original issue reported on code.google.com by [email protected] on 19 Mar 2008 at 6:47

Desire: Interval arithmetic trig

It would be really nice to have sin, cos, tan implemented for interval
arithmetic.  I'm a bit worried about doing this myself with Taylor and
getting all the directional rounding correct...  FYI, Boost seems to
support this; see http://www.boost.org/libs/numeric/interval/doc/interval.htm .

Original issue reported on code.google.com by [email protected] on 26 Jan 2008 at 12:47

creating a Decimal like interface

I think mpmath should provide a Decimal-like interface, so that simply
substituting mpf for Decimal does the job. I think it already
basically has the same interface, right? In that case, maybe the front
page could state that just substituting mpf for Decimal gives you a
10x speedup, for free.
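To illustrate, here is typical Decimal code together with (in comments) the mpf substitution the suggestion has in mind; the one visible difference is that mpmath sets precision on a global context object rather than via getcontext().

```python
from decimal import Decimal, getcontext

# Decimal-style code; the proposed mpf equivalent would be, roughly:
#   from mpmath import mp, mpf
#   mp.dps = 30                  # instead of getcontext().prec = 30
#   x = mpf('1.1') + mpf('2.2')
getcontext().prec = 30
x = Decimal('1.1') + Decimal('2.2')
print(x)  # 3.3
```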

Original issue reported on code.google.com by [email protected] on 28 Sep 2007 at 2:32

setup.py lacks shebang

$ ./setup.py install      
./setup.py: line 1: from: command not found                    
: command not found                                            
'/setup.py: line 3: syntax error near unexpected token `name='mpmath',
'/setup.py: line 3: `setup(name='mpmath',                             
$ python setup.py install
running install                                               
running build                                                 
running build_py 
[...]                               

This is trivial to fix: putting #!/usr/bin/env python on the first line
would do the job.

Original issue reported on code.google.com by [email protected] on 23 Apr 2008 at 2:23

Uniform interface for calculus functions

Calculus functions should have a uniform interface for specifying goals,
handling errors, etc. Here is a tentative specification.

Some of the default values could perhaps be turned into settings of the
global context.

List of parameters for calculus functions:

problem parameters
    These parameters specify the mathematical problem to be solved.
Typically the first parameter is a function f and the rest specify some
point a or interval a, b over which f should be integrated, differentiated
etc. May be given either positionally or by keyword.

algorithmic parameters
    Numerical algorithms often require manual tuning to perform optimally
(sometimes to give correct results at all). A typical algorithmic parameter
might for example be an integer n specifying the number of point samples to
use. Most functions try to choose reasonable parameters automatically, but
some may require an educated guess from the user. May be given by keyword only.

Additional keyword options common for all functions:

eps, dps, prec

    Sets the accuracy goal for the computation (only one of these should be
given). The computation is considered finished when the estimated error is
less than eps / accurate to at least dps decimal places / prec bits.

    Default: automatically set equal to the working precision.

metric

    Specifies which metric to use for measuring error:

            * 'absolute' - the absolute error must meet the accuracy goal
            * 'relative' - the relative error must meet the accuracy goal
            * 'either' - it is sufficient that either the absolute or the
relative error meets the accuracy goal
            * 'both' - both absolute and relative error must meet the
accuracy goal

    Default: 'either'

workprec, workdps, extraprec, extradps

    Sets the internal working precision, either as an absolute value or
relative to the external working precision. If unspecified, the precision
is automatically set slightly higher (a few digits) than minimally required
to meet the accuracy goal, to guard against typical small rounding errors.
The working precision should be increased manually if rounding errors or
cancellations lead to inaccurate results.

    Default: typically 3-10 dps, sometimes much higher, depending on the
function.

estimate

    Specifies by which method to determine whether the result meets the
accuracy goal:

            * 'fast' - the error is estimated quickly using heuristic
methods known based on experience to work for typical (reasonably
well-behaved) input.
            * 'safe' - the computation is performed twice, the second time
with increased precision and/or slightly tweaked algorithmic parameters.
The error is estimated as twice the difference between the results. At the
cost of increased computation time, this method is very reliable for all
but the most pathological inputs.
            * 'none' - no attempt is made to estimate the error. The
specified algorithmic parameters are assumed to result in the desired
accuracy goal.

    Default: 'fast'.

    Note: for some functions, 'fast' and 'safe' are identical, because no
more efficient heuristic has been implemented for the algorithm.

error

    This parameter determines how to handle failure:

            * 'raise' – the result is returned as soon as it meets the
accuracy goal. Failure to meet the goal with the given algorithmic
parameters results in an exception being raised.
            * 'none' – the function silently returns whatever result it
obtains, even when likely to be inaccurate.
            * 'warn' – a result is returned regardless of whether it is
fully accurate. A warning is printed if the accuracy goal is not met.
            * 'return' – the function returns a tuple (result, err) where
err is the estimated error. Nothing special happens if err is larger than
the epsilon (this is left for the user to handle).

    Default: 'raise'.

retries

    In many cases, increasing precision and/or modifying algorithmic
parameters slightly can save a computation that fails on the first try. If
set to a positive integer, this number of retries will be performed
automatically.

    Default: 0-2, depending on the function.

verbose

    If set to any nonzero value, detailed messages about progress and
errors are printed while the function is running.

    Default: False.
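To make the error/retries semantics concrete, here is a small sketch of the proposed policy as a generic driver. The name with_accuracy_policy and the compute(attempt) callback, assumed to return a (result, error_estimate) pair, are illustrative only, not part of mpmath.

```python
def with_accuracy_policy(compute, eps, error='raise', retries=2, verbose=False):
    """Sketch of the proposed uniform handling of accuracy goals.
    'compute(attempt)' is assumed to return (result, err_estimate)."""
    for attempt in range(retries + 1):
        result, err = compute(attempt)
        if verbose:
            print("attempt %d: err = %s" % (attempt, err))
        if err <= eps:
            break
    if error == 'return':
        return result, err
    if err > eps:
        if error == 'raise':
            raise ArithmeticError("accuracy goal not met")
        if error == 'warn':
            print("warning: accuracy goal not met")
    return result
```

A real integration routine would, between retries, also tweak its algorithmic parameters (more sample points, higher working precision) rather than merely recompute.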

Original issue reported on code.google.com by [email protected] on 5 Apr 2008 at 1:53

More accurate square root rounding

Square roots are not currently guaranteed to be exact in all cases when
they should be.

The question is whether exactness can be ensured without slowing down the
existing code.
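One standard way to get correct rounding cheaply is the shift-and-isqrt trick: since sqrt(n) is irrational unless n is a perfect square, ties can never occur, and round-to-nearest reduces to a single remainder comparison. A sketch in plain Python; the real mpmath code operates on mantissa/exponent pairs instead.

```python
from math import isqrt

def isqrt_nearest(n):
    """Integer square root of n, rounded to the nearest integer.
    Ties are impossible: (s + 1/2)**2 is never an integer."""
    s = isqrt(n)
    # round up exactly when sqrt(n) > s + 1/2, i.e. n > s*s + s
    return s + 1 if n - s * s > s else s

def sqrt_fixed(n, prec):
    """sqrt(n) as a fixed-point integer with 'prec' fractional bits,
    correctly rounded to nearest: the shift-and-isqrt idea."""
    return isqrt_nearest(n << (2 * prec))
```

For example, sqrt_fixed(2, 10) gives 1448, the nearest integer to sqrt(2) * 2**10.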

Original issue reported on code.google.com by [email protected] on 16 Feb 2008 at 12:11

Code: Implementation of Jacobi Theta and Jacobi Elliptic Functions

Hiya!

Please find attached my implementation and unit tests for Jacobi Theta and
Elliptic functions, for your consideration for inclusion into mpmath.  I've
implemented a number of unit tests from Abramowitz & Stegun, and Mathworld,
including tests of various identities and special cases.  The tests have
been split into a full-blown torture suite, named elliptic_torture_tests.py,
and a more modest sampling is given in elliptic_tests.py.  The code
currently passes all of the tests.

Note that I've chosen to use the parameter k, rather than m, used by the
current mpmath.ellipk function.  This is mostly for ease of implementation
and testing, as the series expansions in Abramowitz are in terms of k.
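For reference, the two conventions are related simply by m = k**2. As an illustration of the kind of series involved, theta_3 can be summed directly from its defining q-series; this is a plain-float sketch with a fixed term count, whereas an arbitrary-precision version would choose nterms from the working precision and |q|.

```python
from math import cos

def theta3(z, q, nterms=20):
    """Jacobi theta_3 from its q-series:
    theta3(z, q) = 1 + 2 * sum_{n>=1} q**(n*n) * cos(2*n*z).
    Converges very fast for |q| < 1 since exponents grow as n**2."""
    return 1 + 2 * sum(q**(n * n) * cos(2 * n * z)
                       for n in range(1, nterms + 1))
```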

This has had one look from Fredrik Johansson, and I've attempted to make
the initial changes he suggested.  Please let me know if you want me to
make more changes, or feel free to go ahead and modify it for inclusion in
mpmath.  The code is free to release under BSD, and I am authorized to
release it.  

Finally, please don't hesitate to contact me if you have any questions. 
I'll try to watch the mailing list, but please send me an e-mail to get my
attention if I don't respond fast enough.  

Thanks,

Mike Taschuk


Original issue reported on code.google.com by [email protected] on 6 May 2008 at 4:26

Attachments:

Implement all mathematical functions in mpmath.lib

Some functions are currently only implemented in mptypes.py. However, all
functions should be implemented in mpmath.lib to provide a complete
functional interface that is independent of the mpf class interface (and
its relatively fragile state-based management of precision and rounding).

Functions currently not implemented in lib include:
* Noninteger powers (real, complex and real->complex)
* Inverse trigonometric / hyperbolic functions
* All the extra functions (gamma, zeta, ...)
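As a toy model of what the functional style looks like, here is addition on raw (mantissa, exponent) pairs with the precision passed explicitly. The names and signatures are illustrative only, not the actual mpmath.lib API, and rounding is plain truncation for brevity.

```python
def normalize(man, exp, prec):
    """Truncate a raw (mantissa, exponent) pair to 'prec' bits:
    a toy model of mpmath.lib-style normalization."""
    bits = man.bit_length()
    if bits > prec:
        shift = bits - prec
        man >>= shift
        exp += shift
    return man, exp

def fadd(x, y, prec):
    """Functional addition of two (man, exp) floats: no global
    precision or rounding state, everything passed explicitly."""
    xman, xexp = x
    yman, yexp = y
    # align the two mantissas to the smaller exponent, then add
    if xexp >= yexp:
        man, exp = (xman << (xexp - yexp)) + yman, yexp
    else:
        man, exp = xman + (yman << (yexp - xexp)), xexp
    return normalize(man, exp, prec)
```

The point of the style is that fadd((3, -1), (1, 0), 4), meaning 1.5 + 1 at 4 bits, is a pure function of its arguments, which makes the low-level layer independent of the mpf class and its global context.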

More applied code like numerical integration should probably not be moved,
as implementing it functionally will make it significantly more complex.

Original issue reported on code.google.com by [email protected] on 16 Feb 2008 at 11:54

linear algebra

If you are going to implement some stuff for solving linear equations (as
you mentioned in your recent blog post), I could provide working (yet
somewhat messy) code to do the basic stuff like LU decomposition (this
includes solving linear systems and calculating the inverse/determinant
efficiently). Additionally I could share code for solving overdetermined
(and ordinary) linear systems via QR decomposition (LU decomposition is two
times faster, but less accurate).
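For reference, the basic routine in question fits in a few lines. This is a plain-float sketch of Gaussian elimination (LU-style) with partial pivoting; an mpmath version would use mpf entries but could keep the same structure.

```python
def lu_solve(A, b):
    """Solve A x = b by elimination with partial pivoting.
    Plain-float sketch; destroys neither A nor b (works on copies)."""
    n = len(A)
    A = [row[:] for row in A]
    x = list(b)
    for k in range(n):
        # pivot: bring the row with the largest entry in column k to the top
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        x[k], x[p] = x[p], x[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            x[i] -= m * x[k]
    # back substitution on the resulting upper-triangular system
    for k in range(n - 1, -1, -1):
        x[k] = (x[k] - sum(A[k][j] * x[j] for j in range(k + 1, n))) / A[k][k]
    return x
```

A production version would additionally keep the multipliers to reuse the factorization for several right-hand sides, which is where a separate LU decomposition pays off.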

Original issue reported on code.google.com by [email protected] on 2 Jul 2008 at 7:58
