Prof. Ming Gu, 861 Evans, tel: 2-3145
Office Hours: TuTh 2:30-4:00PM
Email: XXXXXXXXXX
http://www.math.berkeley.edu/~mgu/MA128A2019F
Math128A: Numerical Analysis
Programming Assignment, Due Nov. 6, 2019
Implement the modified zeroin algorithm in "A modified Brent's method for finding zeros of
functions," by G. Wilkins and M. Gu, Numerische Mathematik, vol. 123, 2013.
You should turn in a .m file modifiedbrentxxx.m which contains a Matlab function of the
form

function [root,info] = modifiedbrentxxx(func,Int,params)

where xxx is your student id. On input, func is a function handle, Int is the initial interval,
[Int.a, Int.b], containing a root, and params is an object that contains at least three fields:
params.root_tol, params.func_tol and params.maxit. Your algorithm should terminate once
the interval containing the root is at most params.root_tol in length, or the function value at the
current iterate is at most params.func_tol in absolute value. On output, root is the computed
root, and info should have at least one field info.flag, which is 0 for a successful execution, and
1 otherwise.
Your program may not use the Matlab built-in function fzero. It will be tested on a few
functions of our choice, against the following criteria:
1. (60 points) A zero is found within given tolerances for each function tested.
2. (40 points) One zero is found within the right number of function calls for each function
tested.
Your program will receive 0 points if the string fzero, case-insensitive, appears anywhere in
your .m file.
Submit your .m file to your GSI by 1:00AM, Nov. 7, 2019.
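The required termination semantics can be illustrated with a small Python sketch (the submission itself must be a Matlab .m file; the helper name bisect_root and the use of plain bisection here are our own illustration, not part of the assignment):

```python
import math

def bisect_root(func, a, b, root_tol, func_tol, maxit):
    """Illustrative Python analogue of the required interface:
    returns (root, flag), where flag is 0 on success, 1 otherwise."""
    fa, fb = func(a), func(b)
    if fa * fb > 0:
        return None, 1              # no sign change: failure
    for _ in range(maxit):
        c = 0.5 * (a + b)
        fc = func(c)
        # stop when the bracket is at most root_tol long,
        # or |f| at the current iterate is at most func_tol
        if (b - a) <= root_tol or abs(fc) <= func_tol:
            return c, 0
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return 0.5 * (a + b), 1         # maxit exceeded: failure

# example: cos(x) has a zero at pi/2 in [1, 2]
root, flag = bisect_root(math.cos, 1.0, 2.0, 1e-12, 1e-12, 100)
```

A real submission would replace the bisection core with the modified zeroin iteration while keeping exactly these stopping tests.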
Numer. Math XXXXXXXXXX:177–188
DOI XXXXXXXXXX/s XXXXXXXXXXx
Numerische Mathematik
A modified Brent’s method for finding zeros of functions
Gautam Wilkins · Ming Gu
Received: 30 June 2011 / Revised: 14 March 2012 / Published online: 26 June 2012
© Springer-Verlag 2012
Abstract Brent’s method, also known as zeroin, has been the most popular method
for finding zeros of functions since it was developed in 1972. This method usually
converges very quickly to a zero; for the occasional difficult functions encountered in
practice, it typically takes O(n) iterations to converge, where n is the number of steps
required for the bisection method to find the zero to approximately the same accuracy.
While it has long been known that in theory Brent’s method could require as many
as O(n2) iterations to find a zero, such behavior had never been observed in practice.
In this paper, we first show that Brent’s method can indeed take O(n2) iterations
to converge, by explicitly constructing such worst case functions. In particular, for
double precision accuracy, Brent's method takes 2,914 iterations to find the zero of
our function, compared to the 77 iterations required by bisection. Secondly, we present
a modification of Brent’s method that places a stricter complexity bound of O(n) on
the search for a zero. In our extensive testing, this modification appears to behave very
similarly to Brent’s method for all the common functions, yet it remains at worst five
times slower than the bisection method for all difficult functions, in sharp contrast to
Brent’s method.
Mathematics Subject Classification XXXXXXXXXX · 65Y20 · 65Y04
G. Wilkins (B) · M. Gu
Department of Mathematics, University of California, Berkeley, USA
e-mail: XXXXXXXXXX; XXXXXXXXXX
Present Address:
G. Wilkins
Department of Mathematics, University of California, San Diego, USA
1 Introduction
Finding the zeros of single-variable, real-valued functions is a very common and basic
task in scientific computing. Given a function f (x) that is continuous on the interval
[a, b] with f (a) f (b) < 0, the bisection method is guaranteed to find a zero in [a, b].
The reliability of the bisection method is offset by its disappointing linear convergence.
It typically requires log2((b − a)/δ) iterations to achieve a given accuracy tolerance δ.
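As an illustration of this bound (our own sketch, not from the paper), counting bisection steps on a sample function reproduces the ceiling of log2((b − a)/δ) exactly:

```python
import math

def bisection_count(f, a, b, delta):
    """Count bisection steps until the bracket [a, b] is at most delta long."""
    count = 0
    while b - a > delta:
        c = 0.5 * (a + b)
        if f(a) * f(c) <= 0:    # zero lies in [a, c]
            b = c
        else:                   # zero lies in [c, b]
            a = c
        count += 1
    return count

# f(x) = x**3 - 2 has its zero 2**(1/3) inside [1, 2]
steps = bisection_count(lambda x: x**3 - 2, 1.0, 2.0, 1e-10)
bound = math.ceil(math.log2((2.0 - 1.0) / 1e-10))
```

Since the bracket width halves exactly each step, steps and bound agree here.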
On the other hand, methods such as the secant method (see Eq. (1)) can converge
much more quickly, but could diverge without reliable initial guesses [2]. Brent’s
method [1] is a quite successful attempt at combining the reliability of the bisection
method with the faster convergence of the secant method and the inverse quadratic
interpolation method.
Brent’s method is an improvement of an earlier algorithm originally proposed by
Dekker [1,3]. Assume that a given function f (x) is continuous on an interval [a, b],
such that f (a) f (b) < 0. It is well-known that a zero of f is guaranteed to exist
somewhere in [a, b]. The secant method produces a better approximate zero c as
c = b − f (b)(b − a)/( f (b) − f (a)). (1)
The secant method is a superlinearly convergent method if the zero is simple and f is
twice differentiable [1]. In order to safeguard against the secant method leading in a
spurious direction, Dekker employs a bisection step anytime the new point computed
by the secant method is not between (a + b)/2 and the previous computed point. The
algorithm will terminate either when an exact zero is found, or when the size of the
interval [a, b] shrinks below some prescribed numerical tolerance, δ > 0.
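The safeguard just described can be sketched as follows (a hypothetical illustration; the function name dekker_step and the bracket orientation are our assumptions, not Dekker's original code):

```python
def dekker_step(f, a, b):
    """One Dekker-style step on a bracket with f(a)*f(b) < 0,
    where b is treated as the current best point. Returns the new point."""
    fa, fb = f(a), f(b)
    m = 0.5 * (a + b)                       # bisection candidate
    if fb != fa:
        s = b - fb * (b - a) / (fb - fa)    # secant candidate, Eq. (1)
        # accept the secant point only if it lies between m and b
        if min(m, b) < s < max(m, b):
            return s
    return m                                # otherwise fall back to bisection

# example: f(x) = x**2 - 2 on the bracket [1, 2], best point b = 1
x = dekker_step(lambda t: t * t - 2, 2.0, 1.0)
```

Here the secant point 4/3 lies between the midpoint 1.5 and the best point 1, so it is accepted.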
Dekker’s method, however, does not place a reasonable bound on the complexity
of the search for a zero. For certain functions it may perform a very large number of
secant iterations yet make virtually no progress in shrinking the interval around the
zero. Brent’s method aims to avoid such stagnant iterations. The details of Brent’s
method are discussed in Sect. 2.
Since its development in 1972, Brent’s method has been the most popular method
for finding zeros of functions. This method usually converges very quickly to a zero;
for the occasional difficult functions encountered in practice, it typically takes O(n)
iterations to converge, where n is the number of steps required for the bisection method
to find the zero to approximately the same accuracy. Brent shows that this method
requires as many as O(n2) iterations in the worst case, although in practice such
behavior had never been observed.
Brent’s method is the basis of the fzero function in Matlab, a commercial software
package by The MathWorks Inc., because of its ability to use “rapidly convergent
methods when they are available” and “a slower, but sure, method when it is necessary”
(see [5]).
The first contribution of this paper is to show that Brent’s method can indeed take
O(n2) iterations to converge. We do this by explicitly constructing such worst case
functions. In particular, for double precision accuracy, Brent’s method takes 2,914
iterations to find the zero of our function, whereas the bisection method takes only 77
iterations to achieve the same accuracy.
The second contribution is a simple modification of Brent’s method, which we will
call modified zeroin. We show that this modification requires at most O(n) iterations
to find a zero, where the constant hidden in the O notation does not exceed 5 in the
worst case. In our extensive testing, this modification appears to behave very similarly
to Brent’s method for all the common functions, yet has the same order of convergence
as the bisection method for all difficult functions, in sharp contrast to Brent’s method.
Brent’s method does not require any information about the derivatives of the func-
tion f (x). This simplicity, plus the practical efficiency, has made Brent’s method the
method of choice for many practical zero finding computations. While derivative infor-
mation can be used to develop more efficient zero finders, it will not be discussed in
this paper as we are primarily interested in techniques that may be employed to safe-
guard superlinearly convergent methods so that they do not lead in spurious directions
or take an unduly large number of iterations to converge to zeros of ill-behaved func-
tions. The interested reader is referred to Kahan [4] for a broader discussion of zero
finders, as well as recent papers [6–9] that present higher order zero finding methods.
It is worth noting, however, that techniques discussed in this paper may be applied to
any superlinearly convergent method to guarantee O(n) convergence.
2 Brent’s method
We begin this section by introducing the inverse quadratic interpolation method (IQI).
Given three distinct points a, b, and c, with their corresponding function values f (a),
f (b), and f (c), the IQI method produces a new approximate zero as
d = f (b) f (c) / (( f (a) − f (b))( f (a) − f (c))) · a
+ f (a) f (c) / (( f (b) − f (a))( f (b) − f (c))) · b
+ f (a) f (b) / (( f (c) − f (a))( f (c) − f (b))) · c, (2)
if the right hand side of (2) is defined. The order of convergence of the IQI method is
approximately 1.839, while the order of convergence of the secant method is approxi-
mately 1.618.
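One IQI step (Eq. (2)) can be sketched as follows (our own illustration; the guard against coincident function values is an assumption, not from the paper):

```python
def iqi_step(f, a, b, c):
    """One inverse quadratic interpolation step through (a, f(a)),
    (b, f(b)), (c, f(c)); returns None if Eq. (2) is undefined."""
    fa, fb, fc = f(a), f(b), f(c)
    if fa == fb or fa == fc or fb == fc:
        return None                 # right-hand side of (2) undefined
    # the three terms of Eq. (2)
    return (fb * fc / ((fa - fb) * (fa - fc)) * a
            + fa * fc / ((fb - fa) * (fb - fc)) * b
            + fa * fb / ((fc - fa) * (fc - fb)) * c)

# example: f(x) = x**2 - 4 with points 1, 3, 4; the zero is at x = 2
d = iqi_step(lambda t: t * t - 4, 1.0, 3.0, 4.0)
```

With these values the step lands at 13/7 ≈ 1.857, already much closer to the zero than any of the three starting points.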
Brent’s method differs from Dekker’s method in a number of ways. It uses inverse
quadratic interpolation (see Eq. (2)) whenever possible, resulting in an increased speed
of convergence. In addition, it changes the criteria for which a bisection step will be
performed. Let b_j be the best approximation to the zero after the jth iteration of Brent's
method. If interpolation was used to obtain b_j, then the following two inequalities must
simultaneously be satisfied:
|b_{j+1} − b_j| < 0.5 |b_{j−1} − b_{j−2}| (3)
|b_{j+1} − b_j| > δ (4)
where δ is a numerical tolerance analogous to that in Dekker's method, and b_{j+1} is
the new point computed by interpolation. If either of these inequalities is not satisfied,
then b_{j+1} is discarded and a bisection step is performed instead. The first inequality
thus places a bound on the distance between two successive points computed through
interpolation that decreases by a factor of at least two every two steps. Assuming that
the first condition is never violated, then at the jth step the second condition will be
violated after at most n additional steps, where:
|b_{j−1} − b_{j−2}| / 2^{n/2} ∈ (δ/2, δ], that is, n = 2 lg( |b_{j−1} − b_{j−2}| / δ ),
and we define lg(x) := ⌈log2(x)⌉. Thus, a bisection step will be performed at least
every n steps following an interpolation step. If we assume that in the worst-case the
interval is not shrunk at all by interpolation, and that bisection steps are performed as
infrequently as possible, then the interval size decreases by a factor of two every n
steps. Thus, given an initial interval [a, b], Brent’s method will terminate in no more
than k steps, where:
|b − a| / 2^{k/n}