# Mathematics Worksheet

Description

1. Simple answers are NOT sufficient; you need to show all the relevant work.
2. The exercises with a * require the use of a computer. (What software you use is up to you.)

Please see the file I provided first and make sure you can do it before you choose this work. I will send you a message before I confirm you; please check it.


Unformatted Attachment Preview

MAT 3380: Intro. to Numerical Methods
Winter 2022
Diane Guignard
HOMEWORK 1
• This assignment is due Friday, January 21 on Brightspace. It covers Sections 3.1, 3.2 and part of 3.3.
• Simple answers are NOT sufficient, you need to show all the relevant work to receive
full credit.

• Please write the following information at the top of the front page: your full name, the
course number (MAT 3380) and the number of the assignment.
• The exercises with a * require the use of a computer. For Exercises 1 and 2, you can
use the function bisection.m provided on Brightspace. Attach the code you wrote.
Exercise 1*
Let f(x) = (x + 2)(x + 1)² x (x − 1)³ (x − 2). To which zero of f does the bisection method converge when applied on the following intervals?
a. [−1.5, 2.5]
b. [−0.5, 2.4]
c. [−0.5, 3]
d. [−3, −0.5].
Exercise 2*
Use the bisection method on [1, 2] to find an approximation of √3 correct to within 10^−4. Indicate which function f you used and report the values of x0, x1 and x2, the final output and the number of iterations.
Exercise 3*
Let f(x) = x² − 2x − 3. To find a root of f, the following three fixed point methods are proposed:
a. x_k = 3/(x_{k−1} − 2)
b. x_k = x_{k−1}² − x_{k−1} − 3
c. x_k = (x_{k−1}² + 3)/(2x_{k−1} − 2).
For each method, compute (if possible) the iterates x1, x2, x3 and x4 starting from x0 = 0. Report the values you obtain in a table. Which methods seem to be appropriate? Among those, which one seems to converge the fastest?
Exercise 4
Prove Corollary 3.4 of the lecture notes (see Section 3.3), namely the estimates

|xk − x∗| ≤ ρ^k max{x0 − a, b − x0},   k ≥ 0,

and

|xk − x∗| ≤ (ρ^k/(1 − ρ)) |x1 − x0|,   k ≥ 1.
Exercise 5
Consider the function

g(x) = (1/3) ∛(x + 8).

i) Show that g has a unique fixed point in [0, 1].
ii) Assuming that we start from x0 = 1/2, find a bound for the number of fixed point iterations needed to achieve 10^−6 accuracy.
Chapter 3
Nonlinear Equations in One
Variable
This chapter is concerned with finding solutions to the scalar, nonlinear equation
f (x) = 0,
where the variable x runs in an interval [a, b]. The topic provides us with an opportunity to discuss
various issues and concepts that arise in more general circumstances.
There are many canned routines that do the job of finding a solution to a nonlinear scalar
equation; the one in MATLAB is called fzero; type help fzero to see how this function is
used and what input parameters it requires.
Following an extended introduction to our topic in Section 3.1 are three sections, 3.2–3.4, each
devoted to a different method or a class of methods for solving our nonlinear equation. A closely
related problem is that of minimizing a function in one variable, and this is discussed in Section 3.5.
3.1 Solving nonlinear equations
Referring to our prototype problem introduced above, f (x) = 0, let us further assume that the function
f is continuous on the interval, denoted f ∈ C[a, b]. Throughout our discussion we denote a solution
of the equation (called root, or zero) by x ∗ .
Note: Why introduce nonlinear equations before introducing their easier comrades, the
linear ones?! Because one linear equation in one unknown is just too easy and not particularly illuminating, and systems of equations bring a wave of complications with them. We
have a keen interest in solving scalar nonlinear equations not only because such problems
arise frequently in many applications, but also because it is an opportunity to discuss various issues and concepts that arise in more general circumstances and highlight ideas that
are extensible well beyond just one equation in one unknown.
Example 3.1. Here are a few simple functions and their roots.
1. f (x) = x − 1, on the interval [a, b] = [0, 2].
Obviously there is one root for this linear equation: x ∗ = 1.
Figure 3.1. Graphs of three functions and their real roots (if there are any): (i) f(x) = sin(x) on [0, 4π], (ii) f(x) = x³ − 30x² + 2552 on [0, 20], and (iii) f(x) = 10 cosh(x/4) − x on [−10, 10].
2. f(x) = sin(x).
Since sin(nπ) = 0 for any integer n, we have
(a) On the interval [a, b] = [π/2, 3π/2] there is one root, x∗ = π.
(b) On the interval [a, b] = [0, 4π] there are five roots; see Figure 3.1.
3. f(x) = x³ − 30x² + 2552,   0 ≤ x ≤ 20.
In general, a cubic equation with complex coefficients has three complex roots. But if the
polynomial coefficients are real and x is restricted to be real and lie in a specific interval, then
there is no general a priori rule as to how many (real) roots to expect. A rough plot of this
function on the interval [0, 20] is given in Figure 3.1. Based on the plot we suspect there is
precisely one root x ∗ in this interval.
4. f(x) = 10 cosh(x/4) − x,   −10 ≤ x ≤ 10.

3.2 Bisection method

The bisection method maintains a bracket [a, b] with f(a) f(b) < 0, halving it at each step; after n steps the midpoint xn is within ((b − a)/2) · 2^−n of a root. The required accuracy is therefore guaranteed by demanding ((b − a)/2) · 2^−n ≤ atol. Multiplying both sides by 2^n/atol and taking log, we see that it takes

n = ⌈log2((b − a)/(2 atol))⌉

iterations to obtain a value p = xn that satisfies

|x∗ − p| ≤ atol.
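This iteration-count bound is easy to check numerically. Below is a minimal Python sketch (Python is used only for illustration here, since the course codes are in MATLAB; the test function f(x) = cos(x) on [0, 2], with root π/2, is a made-up example):

```python
import math

def bisect(f, a, b, atol):
    """Bisection: assumes f(a)*f(b) < 0; returns (p, n) with |p - root| <= atol."""
    n = math.ceil(math.log2((b - a) / (2 * atol)))  # iteration count from the bound
    fa = f(a)
    for _ in range(n):
        p = (a + b) / 2
        fp = f(p)
        if fa * fp < 0:
            b = p            # root lies in (a, p)
        else:
            a, fa = p, fp    # root lies in (p, b)
    return (a + b) / 2, n

# f(x) = cos(x) has the root x* = pi/2 inside (0, 2)
p, n = bisect(math.cos, 0.0, 2.0, 1e-8)
print(n)                     # 27, as predicted by n = ceil(log2((b-a)/(2 atol)))
print(abs(p - math.pi / 2))  # error is within atol
```

After n halvings the remaining interval has length (b − a)/2^n, so its midpoint is within (b − a)/2^(n+1) ≤ atol of the root, which is exactly what the bound guarantees.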
The following MATLAB function does the job:
```matlab
function [p,n] = bisect(func,a,b,fa,fb,atol)
%
% function [p,n] = bisect(func,a,b,fa,fb,atol)
%
% Assuming fa = func(a), fb = func(b), and fa*fb < 0,
% there is a value root in (a,b) such that func(root) = 0.
% This function returns in p a value such that
%   | p - root | < atol
% and in n the number of iterations required.

% check input
if (a >= b) | (fa*fb >= 0) | (atol <= 0)
  disp('something wrong with the input: quitting');
  p = NaN; n = NaN;
  return
end
n = ceil(log2(b-a) - log2(2*atol));
for k = 1:n
  p = (a+b)/2;
  fp = feval(func,p);
  if fp == 0           % found the root exactly
    n = k;
    return
  end
  if fa*fp < 0
    b = p;
  else
    a = p; fa = fp;
  end
end
p = (a+b)/2;
```

3.3 Fixed point iteration

In a fixed point iteration we rewrite f(x) = 0 as x = g(x) and iterate xk+1 = g(xk). For the function f(x) = 2 cosh(x/4) − x of Example 3.5, the natural choice g(x) = 2 cosh(x/4) does not lead to the root x2∗. Indeed, starting with x0 = 10 the iteration diverges quickly, and we obtain overflow after 3 iterations, whereas starting this fixed point iteration with x0 = 8 we do obtain convergence, but to x1∗ rather than to x2∗.
It is important to realize that the root x2∗ is there, even though the fixed point iteration with the
“natural g” does not converge to it. Not everything natural is good for you! There are other choices
of g for the purpose of fixed point iteration that perform better here.
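The behavior just described is easy to reproduce. As recalled in Example 3.7 below, the function of Example 3.5 is f(x) = 2 cosh(x/4) − x, for which the "natural" choice is g(x) = 2 cosh(x/4). A short Python sketch (Python is used for illustration; the course codes are in MATLAB):

```python
import math

# The "natural" fixed point form of f(x) = 2*cosh(x/4) - x = 0 is x = g(x):
g = lambda x: 2 * math.cosh(x / 4)

def fixed_point(g, x0, k):
    """Apply k steps of the fixed point iteration x_{j+1} = g(x_j)."""
    x = x0
    for _ in range(k):
        x = g(x)
    return x

x = fixed_point(g, 8.0, 50)
print(x)  # settles at the smaller root x1* (about 2.3576), not at x2*

x = fixed_point(g, 10.0, 4)
print(x)  # already astronomically large: from x0 = 10 the iteration diverges
```

Near x1∗ we have |g′(x1∗)| = 0.5 sinh(x1∗/4) ≈ 0.31 < 1, so the iteration contracts there, while near x2∗ the derivative exceeds 1 and the iterates escape.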
Note: A discussion of rates of convergence similar to that appearing here is also given in
Section 7.3. There it is more crucial, because here we will soon see methods that converge
faster than a rate of convergence can suitably quantify.
Rate of convergence

Suppose now that at a particular root of a given fixed point iteration, ρ = |g′(x∗)| satisfies 0 < ρ < 1. Starting with x0 sufficiently close to x∗ we can write xk − x∗ ≈ g′(x∗)(xk−1 − x∗). Hence we get

|xk − x∗| ≈ ρ|xk−1 − x∗| ≈ · · · ≈ ρ^k |x0 − x∗|.

To quantify the speed of convergence in terms of ρ we can ask: how many iterations does it take to reduce the error by a fixed factor, say, 10? To answer this we set |xk − x∗| ≈ 0.1|x0 − x∗|, obtaining ρ^k ≈ 0.1. Taking log10 of both sides yields k log10 ρ ≈ −1. Let us define the rate of convergence as

rate = − log10 ρ.    (3.1)

Then it takes about k = 1/rate iterations to reduce the error by more than an order of magnitude. Obviously, the smaller ρ the higher the convergence rate and, correspondingly, the smaller the number of iterations required to achieve the same error reduction effect.

Example 3.6. The bisection method is not exactly a fixed point iteration, but it corresponds to an iterative method of a similar sort with ρ = 0.5. Thus its convergence rate according to (3.1) is rate = − log10 0.5 ≈ 0.301. This yields k = 4, and indeed the error reduction factor with this many bisections is 16, which is more than 10.

For the root x1∗ of Example 3.5 we have ρ ≈ 0.3121 and rate = − log10 0.3121 ≈ 0.506. Here it takes only two iterations to roughly reduce the error by a factor of 10 if starting close to x∗. For the root x2∗ of Example 3.5 we obtain from (3.1) a negative rate of convergence, as is appropriate for a case where the method does not converge.

Of course, it makes no sense to calculate ρ or the convergence rate as precisely as in Example 3.6: we did this only so that you can easily follow the algebra with a calculator. Indeed, such accurate estimation of ρ would require knowing the solution first! Moreover, there is nothing magical about an error reduction by a particularly precise factor. The rate of convergence should be considered as a rough indicator only. It is of importance, however, to note that the rate becomes negative for ρ > 1, indicating no convergence of the fixed point iteration, and that it becomes infinite
encounter such methods next.
Specific exercises for this section: Exercises 3–4.
3.4 Newton’s method and variants
The bisection method requires the function f to be merely continuous (which is good) and makes no use of further information on f, such as the availability of its derivatives (which causes it to be painfully slow at times). At the other end of the scale is Newton's method, which requires
more knowledge and smoothness of the function f but which converges much faster in appropriate
circumstances.
Newton’s method
Newton’s method is the most basic fast method for root finding. The principle we use below to
derive it can be directly extended to more general problems.
Assume that the first and second derivatives of f exist and are continuous: f ∈ C 2 [a, b].
Assume also that f can be evaluated with sufficient ease. Let xk be a current iterate. By Taylor’s
expansion on page 5 we can write

f(x) = f(xk) + f′(xk)(x − xk) + f″(ξ(x))(x − xk)²/2,

where ξ(x) is some (unknown) point between x and xk.
Now, set x = x∗, for which f(x∗) = 0. If f were linear, i.e., f″ ≡ 0, then we could find the root by solving 0 = f(xk) + f′(xk)(x∗ − xk), yielding x∗ = xk − f(xk)/f′(xk). For a nonlinear function we therefore define the next iterate by the same formula, which gives

xk+1 = xk − f(xk)/f′(xk),    k = 0, 1, 2, . . . .

This corresponds to neglecting the term f″(ξ(x∗))(x∗ − xk)²/2 when defining the next iterate; however, if xk is already close to x∗, then (x∗ − xk)² is very small, so we would expect xk+1 to be much closer to x∗ than xk is.
A geometric interpretation of Newton’s method is that xk+1 is the x-intercept of the tangent
line to f at xk ; see Figure 3.4.
Figure 3.4. Newton’s method: the next iterate is the x-intercept of the tangent line to f at
the current iterate.
Algorithm: Newton’s Method.
Given a scalar differentiable function in one variable, f (x):
1. Start from an initial guess x0 .
2. For k = 0, 1, 2, . . . , set
xk+1 = xk − f(xk)/f′(xk),

until xk+1 satisfies termination criteria.
Example 3.7. Consider the same function as in Example 3.5, given by
f (x) = 2 cosh(x/4) − x.
The Newton iteration here is

xk+1 = xk − (2 cosh(xk/4) − xk)/(0.5 sinh(xk/4) − 1).

We use the same absolute tolerance of 1.e-8 and the same four initial iterates as in Example 3.5.
• Starting from x0 = 2 requires 4 iterations to reach x1∗ to within the given tolerance.
• Starting from x0 = 4 requires 5 iterations to reach x1∗ to within the given tolerance.
• Starting from x0 = 8 requires 5 iterations to reach x2∗ to within the given tolerance.
• Starting from x0 = 10 requires 6 iterations to reach x2∗ to within the given tolerance.
The values of f(xk), starting from x0 = 8, are displayed below:

k    f(xk)
0    −4.76e-1
1     8.43e-2
2     1.56e-3
3     5.65e-7
4     7.28e-14
5     1.78e-15
This table suggests (indirectly) that the number of significant digits essentially doubles at each iteration. Of course, when roundoff level is reached, no meaningful improvement can be obtained upon
heaping more floating point operations, and the improvement from the 4th to the 5th iteration in this
example is minimal.
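The error decay shown in the table can be reproduced with a few lines of Python (used here instead of the course's MATLAB):

```python
import math

f  = lambda x: 2 * math.cosh(x / 4) - x
df = lambda x: 0.5 * math.sinh(x / 4) - 1   # f'(x)

x = 8.0                      # same starting point as in the table
for k in range(6):
    print(k, f(x))           # reproduces the f(xk) column above
    x = x - f(x) / df(x)     # Newton step
print(x)                     # the larger root x2* (about 8.507)
```

The first residual printed is f(8) = 2 cosh(2) − 8 ≈ −4.76e-1, matching the table, and the subsequent residuals shrink roughly quadratically until roundoff level.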
Speed of convergence
How fast does a nonlinear iteration converge, if it does? Let us assume that the method indeed converges and that xk is already "close enough" to the root x∗. We define convergence orders as follows. The method is said to be linearly convergent if there is a constant ρ < 1 such that |xk+1 − x∗| ≤ ρ|xk − x∗|, and quadratically convergent if there is a constant M such that |xk+1 − x∗| ≤ M|xk − x∗|², for all k sufficiently large. When f′(x∗) = 0 at the root (a multiple root), it turns out that Newton's method may still converge, but only at a linear (rather than quadratic) rate.
Example 3.9. For the monomial f(x) = x^m, m > 1, we get

f(x)/f′(x) = x/m,

so Newton's iteration reads

xk+1 = xk − xk/m = ((m − 1)/m) xk.

Since the root is x∗ = 0, we can somewhat trivially write

|xk+1 − x∗| = ((m − 1)/m) |xk − x∗|,

so the method is linearly convergent, with the contraction factor ρ = (m − 1)/m. The corresponding rate of convergence is rate = − log10((m − 1)/m), and it is not difficult to calculate these values and see that they deteriorate as m grows. If f″(x∗) ≠ 0, i.e., m = 2, then ρ = 0.5. See Example 3.6.
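The contraction factor ρ = (m − 1)/m can be observed directly; a tiny Python check (illustrative only, with m = 3 chosen arbitrarily):

```python
# Newton's method on f(x) = x^m gives x_{k+1} = ((m-1)/m) * x_k,
# so each step shrinks the error (x* = 0) by exactly (m-1)/m.
m = 3
x = 1.0
ratios = []
for _ in range(10):
    x_new = x - x**m / (m * x**(m - 1))   # Newton step for f(x) = x^m
    ratios.append(x_new / x)              # successive error ratio
    x = x_new
print(ratios)  # every ratio is (m-1)/m = 2/3: linear convergence
```

For m = 2 the printed ratios would be 0.5, the same contraction factor as bisection, as noted above.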
Globalizing Newton’s method
Both Newton’s method and the secant method are guaranteed to converge under appropriate conditions only locally: there is the assumption in the theorem statement that the initial iterate x0 is
already “close enough” to an isolated root. This is in contrast to the situation for the bisection
method, which is guaranteed to converge provided only that a bracketing interval is found on which
f changes sign. The δ-neighborhood for which Newton’s method is guaranteed to converge may be
large for some applications, as is the case for Example 3.7, but it may be small for others, and it is
more involved to assess ahead of time which is the case for a given problem.
A natural idea, then, is to combine two methods into a hybrid one. We start with a rough plot
or probe of the given function f in order to bracket roots. Next, starting from a given bracketing interval on which f changes sign, we apply the fast method while monitoring the decrease in |f(xk)|; whenever an iterate fails to improve sufficiently we revert to the bracket, apply a few bisection steps, and restart the fast method (Exercise 13 implements precisely such a procedure).

3.5 Minimizing a function in one variable

A closely related problem is that of finding a minimizer of a function φ(x) in one variable. Writing f(x) = φ′(x), a critical point x∗ of φ satisfies f(x∗) = 0.

• If φ″(x∗) > 0, then x∗ is a local minimizer of φ(x). This means that φ attains a minimum at x̂ = x∗ in some neighborhood which includes x∗.
• If φ″(x∗) < 0, then x∗ is a local maximizer of φ(x). This means that φ attains a maximum at x̂ = x∗ in some neighborhood which includes x∗.
• If φ″(x∗) = 0, then a further investigation at x∗ is required.

Algorithms for function minimization

It is not difficult to see that if φ(x) attains a minimum (or maximum) at a point x̂, then this point must be critical, i.e., f(x̂) = 0. Thus, the problem of finding all minima of a given function φ(x) can be solved by finding all the critical roots7 and then checking for each if it is a minimum by examining the sign of the second derivative of φ.

7 Note that for quadratic convergence of Newton's method, φ(x) must have three continuous derivatives.

Example 3.10. For the function φ(x) = 10 cosh(x/4) − x we have

φ′(x) = f(x) = (10/4) sinh(x/4) − 1,
φ″(x) = f′(x) = (10/16) cosh(x/4).

Since φ″(x) > 0 for all x, we can have only one minimum point. (Think of how the curve
must look if we had more than one minimum.) Applying any of the algorithms described in
Sections 3.2–3.4, we obtain the unique minimum point as the root of f (x), given by
x̂ ≈ 1.5601412791.
At this point, φ(x̂) ≈ 9.21018833518615.
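The computation in this example can be reproduced by running Newton's method on f(x) = φ′(x); a short Python sketch (illustrative, with a hypothetical starting guess x0 = 1):

```python
import math

phi   = lambda x: 10 * math.cosh(x / 4) - x
dphi  = lambda x: 2.5 * math.sinh(x / 4) - 1     # phi'(x) = f(x)
d2phi = lambda x: 0.625 * math.cosh(x / 4)       # phi''(x) = f'(x) > 0 for all x

x = 1.0                              # hypothetical starting guess
for _ in range(10):
    x = x - dphi(x) / d2phi(x)       # Newton's method applied to phi'
print(x)         # the minimizer x_hat ~ 1.5601412791
print(phi(x))    # phi(x_hat) ~ 9.21018833518615
```

Since φ″ > 0 everywhere, Newton's method here is also a descent iteration on φ, and the single critical point it finds is the unique minimizer.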
Example 3.11. For an example with many critical points, consider φ(x) = cos(x). The critical points are where f(x) = φ′(x) = − sin(x) = 0; thus

x∗ = jπ,    j = 0, ±1, ±2, . . . .

Since

φ″(x∗) = − cos(jπ) = (−1)^(j+1),    j = 0, ±1, ±2, . . . ,

we have minimum points where j is odd, i.e., at every second critical point: the local minimizers are

x̂ = (2l + 1)π,    l = 0, ±1, ±2, . . . .

The other critical points are maximum points.
For simple problems this could mark the end of this section. However, for problems where
function evaluations are really dear we note that there are two imperfections in the simple procedure
outlined above:
• We are investing a considerable effort into finding all critical points, even those that end up
being local maxima. Is there a way to avoid finding such unwanted points?
• No advantage is taken of the fact that we are minimizing φ(x). Can’t we gauge how good an
iterate xk is by checking φ(xk )?
These objections become more of an issue when considering the minimization of functions of several
variables. Nonetheless, let us proceed to quickly outline some general ideas with Newton’s method
on page 51 for the scalar case.
The kth iteration of Newton's method applied to f(x) = φ′(x) can be written here as

xk+1 = xk + αk ξk,    where ξk = −φ′(xk)/φ″(xk),  αk = 1.

Now, if φ″(xk) > 0 and the step size αk > 0 is small enough, then

φ(xk+1) ≈ φ(xk) + αk ξk φ′(xk) = φ(xk) − αk φ′(xk)²/φ″(xk) < φ(xk).

Therefore, we have a decrease in φ, i.e., a step in the right direction. Thus, it is possible to design a procedure where we insist that φ(x) decrease in each iteration. This would prevent convergence to critical points which are local maxima rather than minima. In such a procedure we monitor φ″(xk), replacing it by max{φ″(xk), ε}, where ε > 0 is a parameter chosen not large. We then select a step size αk by trying αk = 1 first and decreasing it if necessary until a sufficient decrease is obtained in φ(xk+1) relative to φ(xk). We shall have more to say about this sort of procedure when considering the minimization of functions in several variables.
Local and global minimum
The procedures outlined above find local minima. The problem of finding a global minimizer, i.e.,
a point x̂ such that φ(x̂) ≤ φ(x) for all x ∈ [a, b], is significantly more difficult in general. In the
worst situations, where not much is known about the objective function φ, a suitable algorithm may
involve a complete enumeration, i.e., finding all local minima and then comparing them. At the
other end of the scale, if the objective function is known to be convex, which means that
φ(θ x + (1 − θ )y) ≤ θ φ(x) + (1 − θ )φ(y) ∀ 0 ≤ θ ≤ 1
for any two points x, y ∈ [a, b], then there is a unique minimum and the problems of finding local and
global minima coincide. Such is the instance considered in Example 3.10, but not that considered in
Example 3.11.
Specific exercises for this section: Exercises 21–23.
3.6 Exercises
0. Review questions
(a) What is a nonlinear equation?
(b) Is the bisection method (i) efficient? (ii) robust? Does it (iii) require a minimal amount
of additional knowledge? (iv) require f to satisfy only minimum smoothness properties?
(v) generalize easily to several functions in several variables?
(c) Answer similar questions for the Newton and secant methods.
(d) State at least one advantage and one disadvantage of the recursive implementation of the
bisection method over the iterative nonrecursive implementation.
(e) In what way is the fixed point iteration a family of methods, rather than just one method
like bisection or secant?
(f) What is the basic condition for convergence of the fixed point iteration, and how does
the speed of convergence relate to the derivative of the iteration function g?
(g) Suppose a given fixed point iteration does not converge: does this mean that there is
no root in the relevant interval? Answer a similar question for the Newton and secant
methods.
(h) State at least two advantages and two disadvantages of Newton’s method.
(i) What are order of convergence and rate of convergence, and how do they relate?
(j) State at least one advantage and one disadvantage of the secant method over Newton’s.
(k) In what situation does Newton’s method converge only linearly?
(l) Explain the role that roundoff errors play in the convergence of solvers for nonlinear
equations, and explain their relationship with convergence errors.
(m) State a similarity and a difference between the problem of minimizing a function φ(x) and that of solving the nonlinear equation φ′(x) = 0.
(n) State what a convex function is, and explain what happens if an objective function is
convex.
1. Apply the bisection routine bisect to find the root of the function

f(x) = √x − 1.1

starting from the interval [0, 2] (that is, a = 0 and b = 2), with atol = 1.e-8.
(a) How many iterations are required? Does the iteration count match the expectations,
based on our convergence analysis?
(b) What is the resulting absolute error? Could this absolute error be predicted by our convergence analysis?
2. Consider the polynomial function8
f(x) = (x − 2)⁹
= x⁹ − 18x⁸ + 144x⁷ − 672x⁶ + 2016x⁵ − 4032x⁴ + 5376x³ − 4608x² + 2304x − 512.
(a) Write a MATLAB script which evaluates this function at 161 equidistant points in the
interval [1.92, 2.08] using two methods:
i. Apply nested evaluation (cf. Example 1.4) for evaluating the polynomial in the expanded form x⁹ − 18x⁸ + · · · .
ii. Calculate (x − 2)9 directly.
Plot the results in two separate figures.
(b) Explain the difference between the two graphs.
(c) Suppose you were to apply the bisection routine from Section 3.2 to find a root of this
function, starting from the interval [1.92, 2.08] and using the nested evaluation method,
to an absolute tolerance 10−6 . Without computing anything, select the correct outcome:
i. The routine will terminate with a root p satisfying | p − 2| ≤ 10−6 .
ii. The routine will terminate with a root p not satisfying | p − 2| ≤ 10−6 .
iii. The routine will not find a root.
Justify your choice in one short sentence.
3. Consider the fixed point iteration xk+1 = g(xk ), k = 0, 1, . . . , and let all the assumptions of
the Fixed Point Theorem hold. Use a Taylor’s series expansion to show that the order of
convergence depends on how many of the derivatives of g vanish at x = x∗. Use your result to state how fast (at least) a fixed point iteration is expected to converge if g′(x∗) = · · · = g^(r)(x∗) = 0, where the integer r ≥ 1 is given.
4. Consider the function g(x) = x² + 3/16.
(a) This function has two fixed points. What are they?
(b) Consider the fixed point iteration xk+1 = g(xk ) for this g. For which of the points you
have found in (a) can you be sure that the iterations will converge to that fixed point?
Briefly justify your answer. You may assume that the initial guess is sufficiently close to
the fixed point.
(c) For the point or points you found in (b), roughly how many iterations will be required to
reduce the convergence error by a factor of 10?
5. Write a MATLAB script for computing the cube root of a number, x = ∛a, with only basic arithmetic operations using Newton's method, by finding a root of the function f(x) = x³ −
a. Run your program for a = 0, 2, 10. For each of these cases, start with an initial guess
reasonably close to the solution. As a stopping criterion, require the function value whose
root you are searching to be smaller than 10−8 . Print out the values of xk and f (xk ) in each
iteration. Comment on the convergence rates and explain how they match your expectations.
8 This beautiful example is inspired by Demmel [21]. We gave a modified version of it as an exam question once.
Unfortunately, not everyone thought it was beautiful.
6.
(a) Derive a third order method for solving f(x) = 0 in a way similar to the derivation of Newton's method, using evaluations of f(xn), f′(xn), and f″(xn). The following remarks may be helpful in constructing the algorithm:
• Use the Taylor expansion with three terms plus a remainder term.
• Show that in the course of derivation a quadratic equation arises, and therefore two
distinct schemes can be derived.
(b) Show that the order of convergence (under the appropriate conditions) is cubic.
(c) Estimate the number of iterations and the cost needed to reduce the initial error by a
factor of 10m .
(d) Write a script for solving the problem of Exercise 5. To guarantee that your program
does not generate complex roots, make sure to start sufficiently close to a real root.
(e) Can you speculate what makes this method less popular than Newton’s method, despite
its cubic convergence? Give two reasons.
7. Consider Steffensen's method

xk+1 = xk − f(xk)/g(xk),    k = 0, 1, . . . ,

where

g(x) = (f(x + f(x)) − f(x))/f(x).
(a) Show that in general the method converges quadratically to a root of f (x).
(b) Compare the method’s efficiency to the efficiency of the secant method.
8. It is known that the order of convergence of the secant method is p = (1 + √5)/2 = 1.618 . . . and that of Newton's method is p = 2. Suppose that evaluating f′ costs approximately α times the cost of evaluating f. Determine approximately for what values of α Newton's method is more efficient (in terms of number of function evaluations) than the secant method. You
may neglect the asymptotic error constants in your calculations. Assume that both methods
are starting with initial guesses of a similar quality.
9. This exercise essentially utilizes various forms of Taylor’s expansion and relies on expertise
in calculus.
(a) Prove that if f ∈ C²[a, b] and there is a root x∗ in [a, b] such that f(x∗) = 0, f′(x∗) ≠ 0, then there is a number δ such that, starting with x0 from anywhere in the neighborhood [x∗ − δ, x∗ + δ], Newton's method converges quadratically.
(b) This is more challenging: prove the same conclusions under the same assumptions,
except that now f is only assumed to have a first Lipschitz continuous derivative. Thus,
while f″ may not exist everywhere in [a, b], there is a constant γ such that for any x, y ∈ [a, b]

|f′(x) − f′(y)| ≤ γ |x − y|.
10. The function

f(x) = (x − 1)² e^x

has a double root at x = 1.
(a) Derive Newton’s iteration for this function. Show that the iteration is well-defined so
long as xk = −1 and that the convergence rate is expected to be similar to that of the
bisection method (and certainly not quadratic).
(b) Implement Newton’s method and observe its performance starting from x0 = 2.
(c) How easy would it be to apply the bisection method? Explain.
11. In general, when a smooth function f(x) has a multiple root at x∗, the function

ψ(x) = f(x)/f′(x)

has a simple root there.
(a) Show that a Newton method for ψ(x) yields the iteration

xk+1 = xk − f(xk) f′(xk)/([f′(xk)]² − f(xk) f″(xk)).
(b) Give one potential advantage and two potential disadvantages for this iteration as compared to Newton’s method for f (x).
(c) Try this method on the problem of Exercise 10. What are your observations?
12. Given a > 0, we wish to compute x = ln a using addition, subtraction, multiplication, division, and the exponential function e^x.
(a) Suggest an iterative formula based on Newton’s method, and write it in a way suitable
for numerical computation.
(c) Write down an iterative formula based on the secant method.
(d) State which of the secant and Newton’s methods is expected to perform better in this
case in terms of overall number of exponential function evaluations. Assume a fair
comparison, i.e., same floating point system, “same quality” initial guesses, and identical
convergence criterion.
13. Write a MATLAB program to find all the roots of a given, twice continuously differentiable,
function f ∈ C 2 [a, b].
Your program should first probe the function f (x) on the given interval to find out where it
changes sign. (Thus, the program has, in addition to f itself, four other input arguments: a,
b, the number npr obe of equidistant values between a and b at which f is probed, and a
tolerance tol.)
For each subinterval [ai , bi ] over which the function changes sign, your program should then
find a root as follows. Use either Newton’s method or the secant method to find the root,
monitoring decrease in | f (xk )|. If an iterate is reached for which there is no sufficient decrease
(e.g., if | f (xk )| ≥ 0.5| f (xk−1 )|), then revert back to [ai , bi ], apply three bisection steps and
restart the Newton or secant method.
The ith root is deemed "found" as xk if both

|xk − xk−1| < tol(1 + |xk|)   and   |f(xk)| < tol

hold.
Verify your program by finding the two roots (given in Example 3.5) of the function f(x) = 2 cosh(x/4) − x, starting your search with [a, b] = [0, 10] and nprobe = 10. Then use your program to answer other questions in this section where relevant.

14. Find all the roots of the function

f(x) = sin(x)/x,  x ≠ 0;   f(0) = 1,

in the interval [−10, 10] for tol = 10^−7. [This function is special enough to have a name. It is called the sinc function.]

15. For x > 0 consider the equation
x + ln x = 0.
It is a reformulation of the equation of Example 3.4.