Examples

SimpleArt edited this page Oct 30, 2021 · 1 revision

x + 1 + sin(x^2)

The function f(x) = x + 1 + sin(x^2) is shown in the graph below.

A graph of x + 1 + sin(x^2).

Its root lies between x = -3 and x = 10, since f(-3) and f(10) have opposite signs. We may employ the solver to find this root.
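That the endpoints form a valid bracket can be confirmed directly: f changes sign between them, so by the intermediate value theorem the interval contains at least one root.

```python
from math import sin

def f(x):
    return x + 1 + sin(x ** 2)

# f(-3) is negative and f(10) is positive, so the interval
# [-3, 10] brackets at least one root of f.
print(f(-3))  # ≈ -1.58788
print(f(10))  # ≈ 10.4936
assert f(-3) < 0 < f(10)
```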

from math import sin
from pyroot import solver

def f(x):
    return x + 1 + sin(x ** 2)

x = solver(f, -3, 10)
print(f"x = {x}")
print(f"f(x) = {f(x)}")

Out:

x = -1.5860188491618332
f(x) = 2.220446049250313e-16

We can compare different methods using solver_generator and pretty-print the results with tabulate.

from math import cos, sin
from pyroot import solver_generator
from tabulate import tabulate

def f(x):
    return x + 1 + sin(x ** 2)

def fprime(x):
    return 1 + 2 * x * cos(x ** 2)

def results(*args, **kwargs):
    iterations = [(i, x, f(x)) for i, x in enumerate(solver_generator(f, -3, 10, *args, **kwargs))]
    return tabulate(iterations, ("i", "x", "y"))

print("Default method:")
print(results())
print()
print("Newton's method:")
print(results(fprime, method="newton"))
print()
print("Brent's method:")
print(results(method="brent"))

Out:

Default method:
  i         x             y
---  --------  ------------
  0  -3        -1.58788
  1  10        10.4936
  2   3.5       4.18888
  3  -1.15274   0.818122
  4  -1.91862  -1.43235
  5  -1.53568   0.169915
  6  -1.59387  -0.0282762
  7  -1.58586   0.000584649
  8  -1.58602  -7.19597e-07
  9  -1.58602   1.93523e-12
 10  -1.58602  -1.2873e-12
 11  -1.58602   2.22045e-16
 12  -1.58602  -6.66134e-16

Newton's method:
  i          x             y
---  ---------  ------------
  0  -3         -1.58788
  1  10         10.4936
  2   3.5        4.18888
  3   0.655852   2.07285
  4  -1.17207    0.808577
  5  -1.39206    0.541338
  6  -1.53694    0.165916
  7  -1.54015    0.155645
  8  -1.54729    0.132507
  9  -2.27365   -2.17099
 10  -1.75831   -0.708398
 11  -1.58634   -0.00113331
 12  -1.58602    5.99712e-09
 13  -1.58602   -1.2873e-12
 14  -1.58602    1.28741e-12
 15  -1.58602    2.22045e-16
 16  -1.58602   -6.66134e-16

Brent's method:
  i              x             y
---  -------------  ------------
  0  -3             -1.58788
  1  10             10.4936
  2   3.5            4.18888
  3   1.78006e-307   1
  4  -1.17858        0.804948
  5  -2.08929       -2.0296
  6  -1.63394       -0.179411
  7  -1.56197        0.083645
  8  -1.58879       -0.00991745
  9  -1.58601        3.17253e-05
 10  -1.58602       -6.34968e-09
 11  -1.58602        1.28841e-12
 12  -1.58602       -1.2873e-12
 13  -1.58602        2.22045e-16
 14  -1.58602       -6.66134e-16

Interestingly, Newton's method performed the worst here and the default method performed the best. The function is highly non-linear before the root is reached, which throws off Newton's method and forces the solver to back off on some iterations and resort to alternative methods, such as bisection. Brent's method, in contrast, only recognizes whether interpolation was appropriate after attempting it; the iterations show it is much safer than Newton's method. The default method is more decisive about when to use interpolation versus bisection. For example, it recognized that the first 3 points almost form a straight line and attempted an interpolation step, which is why its next iterate outperformed all other methods. Furthermore, Newton's method exhibits the bad habit of frequently failing to update both sides of the root, so the solver has to step in and force a sign change to ensure Newton's method is actually converging to a root.
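The sign-change safeguard described above can be illustrated with a minimal sketch (a generic safeguarded Newton iteration, not pyroot's actual implementation): whenever the Newton step would leave the bracketing interval, take a bisection step instead, and always shrink the bracket so it keeps its sign change.

```python
from math import cos, sin

def f(x):
    return x + 1 + sin(x ** 2)

def fprime(x):
    return 1 + 2 * x * cos(x ** 2)

def safeguarded_newton(f, fprime, a, b, tol=1e-12, maxiter=100):
    """Newton's method with a bisection fallback (generic sketch,
    not pyroot's algorithm)."""
    fa = f(a)
    assert fa * f(b) < 0, "endpoints must bracket a root"
    x = 0.5 * (a + b)
    for _ in range(maxiter):
        fx = f(x)
        if abs(fx) < tol:
            break
        # Shrink the bracket so it still contains a sign change.
        if fx * fa < 0:
            b = x
        else:
            a, fa = x, fx
        dfx = fprime(x)
        if dfx == 0:
            candidate = 0.5 * (a + b)       # flat slope: bisect
        else:
            candidate = x - fx / dfx
            if not a < candidate < b:       # Newton left the bracket,
                candidate = 0.5 * (a + b)   # so fall back to bisection
        x = candidate
    return x

root = safeguarded_newton(f, fprime, -3, 10)
print(root)  # ≈ -1.5860188491618332
```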

The Bring radical of a is given by the root of the following function:

f(x) = x^5 + x + a

The unique root may be easily bounded by -inf and inf. Tighter bounds are given by 0 and -a, and an initial estimate of -a^(1/5) may be used.

from pyroot import sign, solver

inf = float("inf")

def bring_inverse(x):
    """The inverse of the Bring radical is x^5 + x."""
    return (x*x*x*x + 1) * x

def bring_radical_inf(a):
    """Apply the solver from -inf to +inf."""
    return solver(lambda x: bring_inverse(x) + a, -inf, inf)

def bring_radical(a):
    """A more efficient implementation using more accurate estimates and an initial estimate of the root."""
    return solver(lambda x: bring_inverse(x) + a, 0, -a, x=-sign(a) * abs(a)**0.2)

print(f"bring_radical_inf(3) = {bring_radical_inf(3)}")
print(f"bring_radical(3) = {bring_radical(3)}")
print(f"bring_inverse(bring_radical(3)) = {bring_inverse(bring_radical(3))}")

Out:

bring_radical_inf(3) = -1.1329975658850653
bring_radical(3) = -1.1329975658850653
bring_inverse(bring_radical(3)) = -3.0

Note that bring_inverse is written using Horner's method, which ensures accurate evaluation at infinity. Notice that the results are correct to machine precision.
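Since the solver substitutes the largest finite floats (±1.79769e+308, visible in the iteration table below) for ±inf, the polynomial must be evaluable there without error. In CPython, float exponentiation raises OverflowError when the result exceeds the float range, while float multiplication quietly rounds to inf, so the Horner-style product keeps returning a usable value. A small standalone check:

```python
import math

x = 1.7976931348623157e308  # largest finite float

def bring_inverse(x):
    """Horner-style factoring of x^5 + x as (x^4 + 1) * x."""
    return (x * x * x * x + 1) * x

# The naive power form raises instead of returning inf:
try:
    naive = x ** 5 + x
except OverflowError:
    naive = None

horner = bring_inverse(x)  # x * x overflows quietly to inf
print(naive)               # None (the power form raised)
print(horner)              # inf
assert naive is None and math.isinf(horner)
```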

To watch the iterations, we can use solver_generator instead.

from pyroot import sign, solver_generator
from tabulate import tabulate

inf = float("inf")

def bring_inverse(x):
    """The inverse of the Bring radical is x^5 + x."""
    return (x*x*x*x + 1) * x

def bring_radical_inf_results(a):
    """Apply the solver from -inf to +inf."""
    def f(x):
        return bring_inverse(x) + a
    iterations = [(i, x, f(x)) for i, x in enumerate(solver_generator(f, -inf, inf))]
    return tabulate(iterations, ("i", "x", "y"))

def bring_radical_results(a):
    """A more efficient implementation using more accurate estimates and an initial estimate of the root."""
    def f(x):
        return bring_inverse(x) + a
    iterations = [(i, x, f(x)) for i, x in enumerate(solver_generator(f, 0, -a, x=-sign(a) * abs(a)**0.2))]
    return tabulate(iterations, ("i", "x", "y"))

print("Starting from infinity:")
print(bring_radical_inf_results(3))
print()
print("Starting closer to the root:")
print(bring_radical_results(3))

Out:

Starting from infinity:
  i              x               y
---  -------------  --------------
  0  -1.79769e+308  -inf
  1   1.79769e+308   inf
  2   0                3
  3  -1                1
  4  -2.96875       -230.573
  5  -1.98438        -29.7538
  6  -1.49219         -5.89023
  7  -1.24609         -1.25046
  8  -1.14174         -0.0819384
  9  -1.13253          0.00432933
 10  -1.133           -2.02749e-05
 11  -1.133           -1.13589e-10
 12  -1.133            2.37899e-12
 13  -1.133           -2.38032e-12
 14  -1.133            2.22045e-15
 15  -1.133           -2.22045e-15
 16  -1.133            0

Starting closer to the root:
  i              x               y
---  -------------  --------------
  0   0                3
  1  -3             -243
  2  -1.24573         -1.24573
  3  -1.78006e-307     3
  4  -0.622865         2.28338
  5  -0.934298         1.35379
  6  -1.12068          0.111653
  7  -1.13406         -0.00985025
  8  -1.133           -1.98078e-05
  9  -1.133            9.39296e-10
 10  -1.133           -2.38032e-12
 11  -1.133            2.37899e-12
 12  -1.133            2.22045e-15
 13  -1.133           -2.22045e-15
 14  -1.133            0

Perhaps surprisingly, bring_radical_inf is only 2 iterations slower than bring_radical. To explain: the first 2 iterations are the initial points given. The next iteration for bring_radical_inf determines the sign of the root. The one after determines the sign of the order of magnitude of the root (by guessing -1e0). The next iteration notes that the opposing point, (-1.79769e+308, -inf), is of no use for estimating the root, and instead attempts to overestimate the root using only the previous 2 iterations. Beyond this, both find the root in roughly the same manner.
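The sign and order-of-magnitude probing can be imitated with a simple geometric expansion (a generic sketch, not pyroot's actual search): grow a trial point by a constant factor until the function changes sign between consecutive trials, yielding a finite bracket.

```python
def bracket_root(f, direction=-1.0, start=1.0, factor=10.0, maxiter=40):
    """Starting from 0, grow x = direction * start * factor**k until
    f changes sign between consecutive trial points (generic sketch,
    not pyroot's algorithm)."""
    prev, fprev = 0.0, f(0.0)
    x = direction * start
    for _ in range(maxiter):
        fx = f(x)
        if fprev * fx <= 0:
            return prev, x  # sign change: [prev, x] brackets a root
        prev, fprev = x, fx
        x *= factor
    raise ValueError("no sign change found in this direction")

# For x^5 + x + 3 the root is negative, so probe leftward:
a, b = bracket_root(lambda x: x ** 5 + x + 3)
print(a, b)  # -1.0 -10.0
```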

Let us make one more comparison, using Newton's method and Brent's method. We change the above code to accept additional settings, and define the derivative bring_prime as well.

from pyroot import sign, solver_generator
from tabulate import tabulate

inf = float("inf")

def bring_inverse(x):
    """The inverse of the Bring radical is x^5 + x."""
    return (x*x*x*x + 1) * x

def bring_prime(x):
    """The derivative of `bring_inverse`, equivalent to 5x^4 + 1. Used for Newton's method."""
    return 5 * x * x * x * x + 1

def bring_radical_inf_results(a, *args, **kwargs):
    """Apply the solver from -inf to +inf."""
    def f(x):
        return bring_inverse(x) + a
    iterations = [(i, x, f(x)) for i, x in enumerate(solver_generator(f, -inf, inf, *args, **kwargs))]
    return tabulate(iterations, ("i", "x", "y"))

def bring_radical_results(a, *args, **kwargs):
    """A more efficient implementation using more accurate estimates and an initial estimate of the root."""
    def f(x):
        return bring_inverse(x) + a
    iterations = [(i, x, f(x)) for i, x in enumerate(solver_generator(f, 0, -a, x=-sign(a) * abs(a)**0.2, *args, **kwargs))]
    return tabulate(iterations, ("i", "x", "y"))

print("Starting from infinity with Newton's method:")
print(bring_radical_inf_results(3, bring_prime, method="newton"))
print()
print("Starting from infinity with Brent's method:")
print(bring_radical_inf_results(3, method="brent"))
print()
print("Starting closer to the root with Newton's method:")
print(bring_radical_results(3, bring_prime, method="newton"))
print()
print("Starting closer to the root with Brent's method:")
print(bring_radical_results(3, method="brent"))

Out:

Starting from infinity with Newton's method:
  i              x                y
---  -------------  ---------------
  0  -1.79769e+308   -inf
  1   1.79769e+308    inf
  2   0                 3
  3  -4.375         -1604.22
  4  -0.0081663         2.99183
  5  -2.19158         -49.7496
  6  -1                 1
  7  -1.16028          -0.263188
  8  -1.13273           0.00248496
  9  -1.133             1.50044e-08
 10  -1.133            -2.38032e-12
 11  -1.133             2.37899e-12
 12  -1.133             2.22045e-15
 13  -1.133            -2.22045e-15
 14  -1.133             0

Starting from infinity with Brent's method:
  i              x               y
---  -------------  --------------
  0  -1.79769e+308  -inf
  1   1.79769e+308   inf
  2   0                3
  3  -1                1
  4  -2.96875       -230.573
  5  -1.98438        -29.7538
  6  -1.49219         -5.89023
  7  -1.24609         -1.25046
  8  -1.12305          0.0905091
  9  -1.13354         -0.00502739
 10  -1.133            1.81389e-05
 11  -1.133           -3.55676e-10
 12  -1.133            2.37899e-12
 13  -1.133           -2.38032e-12
 14  -1.133            2.22045e-15
 15  -1.133           -2.22045e-15
 16  -1.133            0

Starting closer to the root with Newton's method:
  i         x               y
---  --------  --------------
  0   0           3
  1  -3        -243
  2  -1.24573    -1.24573
  3  -1.07702     0.473849
  4  -1.13392    -0.00850935
  5  -1.133       2.62581e-07
  6  -1.133      -2.38032e-12
  7  -1.133       2.37899e-12
  8  -1.133       2.22045e-15
  9  -1.133      -2.22045e-15
 10  -1.133       0

Starting closer to the root with Brent's method:
  i          x               y
---  ---------  --------------
  0   0            3
  1  -3         -243
  2  -1.24573     -1.24573
  3  -0.884571     1.57385
  4  -1.06515      0.563792
  5  -1.14109     -0.0757198
  6  -1.13279      0.00192294
  7  -1.133       -3.99473e-06
  8  -1.133        2.61933e-11
  9  -1.133       -2.38032e-12
 10  -1.133        2.37899e-12
 11  -1.133        2.22045e-15
 12  -1.133       -2.22045e-15
 13  -1.133        0

One can see that Newton's method is the fastest for these examples, although it requires the derivative. Brent's method also appears to outperform the default method. This is because both Newton's and Brent's methods are aggressive and assume the provided function is well-behaved; indeed, graphing this example affirms that assumption. Despite this, however, the default method is not significantly slower than the other methods.

A graph of x^5 + x + 3.
