# MATH 3113 George Washington University Differential Geometry Exercises

Description

Please solve the question and show all the steps. You may need the notes (PDF), but you must not copy from them.



G. Kokarev: MATH3113 Differential geometry.
MATH3113/MATH5113M: Differential Geometry
Contact details
Lecturer: Dr Gerasim Kokarev
Email: G.Kokarev@leeds.ac.uk
All hand-outs will be made available on Minerva. Lecture notes ©G. Kokarev.
MATH2051 (Geometry of curves and surfaces) is a necessary prerequisite. It is the responsibility of
students to ensure that they satisfy the prerequisite knowledge before taking the module.
Introduction
In this module we will study various concepts, some of which you may have already encountered
when you studied Euclidean geometry and Geometry of Curves and Surfaces, used to describe geometric
arrangements (e.g. arrangements of lines, and related notions of congruence in Euclidean geometry)
and study shapes of objects (e.g. notions of curvature for curves and surfaces). However, to get deeper
and more rigorous understanding we will rely on more material from Linear Algebra and Calculus.
For example, all classical geometry of a Euclidean plane can be described as the geometry of a 2-dimensional vector space equipped with a dot product. Using this approach, one can rigorously prove
all Euclid’s postulates, and moreover, develop a similar theory in all dimensions, which leads to the
notion of n-dimensional Euclidean space.
Later we will also use various results from Calculus to study higher dimensional shapes in n-dimensional Euclidean spaces. One of the main problems that geometers encountered here in the last
century is the rigorous definition of such an object. For example, the notion of a surface that you saw
in the module on curves and surfaces is not always satisfactory, since many natural examples (e.g. a
2-dimensional sphere) are strictly speaking not surfaces in the sense of that module. We will discuss
this in more detail in due course, but for now I mention that the problems occur when we attempt
to study the sphere as a whole (that is globally) rather than a piece of it only (that is locally). For
example, we will see that it is impossible to deform a round 2-dimensional sphere so that its Gauss
curvature would be non-positive everywhere. This statement becomes false if you take only a piece of
a sphere – it can always be deformed to a flat piece of the plane.
Another concept that we will meet is the distinction between the so-called extrinsic and intrinsic
quantities describing the geometry of a surface. The intrinsic quantities (or related properties) are
the ones that do not change under bending a surface. For example, inhabitants of a surface who
need to travel along a fixed curve (lying on the surface) would always travel the same amount of length
independently of how the surface is bent. In particular, the least amount of length, that is the infimum
of lengths of all curves lying on a surface that join two given points, defines an intrinsic quantity,
called the intrinsic distance function. In contrast, the usual Euclidean distance between two points
on a surface will change under bending, and is not intrinsic. The latter is an example of an extrinsic
quantity: it depends on the way the surface is placed in the Euclidean space.
Chapter I
Geometry of the Euclidean space
In this chapter we recall basic notions and facts from Linear Algebra (e.g. MATH1012, MATH1060)
and learn how to use them to describe the geometry of the Euclidean space.
1. Dot product on Rn
1.1. Basic properties and facts. Recall that Rn is the set of all n-tuples of real numbers,
Rn = {(x1 , . . . , xn ) : x1 , . . . , xn ∈ R}.
The components xi are often viewed as coordinates (which are in essence real-valued functions)
of a point in the space.
The space Rn is equipped with the natural dot product (also called the inner product, or scalar product), defined as

x · y = Σ_{i=1}^{n} xi yi,  where x = (x1, . . . , xn), y = (y1, . . . , yn) ∈ Rn.
It satisfies the following properties:
(i) (α1 x1 + α2 x2 ) · y = α1 (x1 · y) + α2 (x2 · y) for any x1 , x2 , and y ∈ Rn , and any real numbers
α1 , α2 ∈ R;
(ii) x · y = y · x for any x and y ∈ Rn ;
(iii) x · x ≥ 0 for any x ∈ Rn, and x · x = 0 if and only if x = 0.
Self-Check Question I.1. Do you remember the names for these properties (i), (ii), and (iii)? Can
you prove (without using the definition of the dot product) that (i) and (ii) imply the linearity with
respect to the second variable?
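The three properties above can also be checked numerically. The following Python sketch (illustrative only; the sample vectors are chosen arbitrarily) verifies bilinearity, symmetry, and positivity for the dot product on R3.

```python
import math

def dot(x, y):
    """Dot product of two vectors given as equal-length lists: sum of xi * yi."""
    assert len(x) == len(y)
    return sum(xi * yi for xi, yi in zip(x, y))

x1, x2, y = [1.0, 2.0, 3.0], [0.0, -1.0, 4.0], [2.0, 5.0, -1.0]
a1, a2 = 2.0, -3.0

# (i) linearity in the first variable
lhs = dot([a1 * u + a2 * v for u, v in zip(x1, x2)], y)
rhs = a1 * dot(x1, y) + a2 * dot(x2, y)
assert math.isclose(lhs, rhs)

# (ii) symmetry
assert dot(x1, y) == dot(y, x1)

# (iii) positivity: x . x >= 0, with equality only for the zero vector
assert dot(x1, x1) > 0
assert dot([0.0, 0.0, 0.0], [0.0, 0.0, 0.0]) == 0
```

Of course, a numerical check for particular vectors is not a proof; it only illustrates the properties.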
The following statement is often discussed in vector calculus.
Proposition I.1 (Cauchy-Schwarz inequality). For any vectors x, y ∈ Rn the following inequality holds: |x · y| ≤ |x| |y|, where |x| = √(x · x) and |y| = √(y · y) are the lengths of vectors in Rn. Besides, the equality occurs if and only if x = 0 or y = λx for some λ ∈ R.
Proof. For MATH5113M.
Corollary I.2 (Triangle inequality). For any vectors x, y ∈ Rn the following inequality holds:
|x + y| ≤ |x| + |y|. Besides, the equality occurs if and only if x = 0 or y = λx for some real
number λ ≥ 0.
Proof. We have the following sequence of inequalities:

|x + y|² = (x + y) · (x + y) = |x|² + x · y + y · x + |y|² = |x|² + 2 x · y + |y|² ≤ |x|² + 2 |x · y| + |y|² ≤ |x|² + 2 |x| |y| + |y|² = (|x| + |y|)²,

where in the second inequality we used the Cauchy-Schwarz inequality. In particular, we see that
equality in the triangle inequality implies equality in the Cauchy-Schwarz inequality, and hence,
implies that x = 0 or y = λx for some λ ≥ 0. Conversely, if x = 0 or y = λx, where λ ≥ 0, then the
triangle inequality becomes an equality.
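Both inequalities, and the equality case, are easy to test numerically. A small Python sketch (for arbitrarily chosen sample vectors, not a proof):

```python
import math

def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

x, y = [1.0, -2.0, 2.0], [3.0, 0.0, 4.0]

# Cauchy-Schwarz: |x . y| <= |x| |y|
assert abs(dot(x, y)) <= norm(x) * norm(y)

# Triangle inequality: |x + y| <= |x| + |y|
s = [xi + yi for xi, yi in zip(x, y)]
assert norm(s) <= norm(x) + norm(y)

# Equality case: y = 2x (a non-negative multiple) gives equality in both
y2 = [2.0 * xi for xi in x]
assert math.isclose(abs(dot(x, y2)), norm(x) * norm(y2))
assert math.isclose(norm([a + b for a, b in zip(x, y2)]), norm(x) + norm(y2))
```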
Later we will see that the dot product describes the geometry of the Euclidean space completely.
For now we mention that it allows us:
1. to compute distances in Rn:

dist(x, y) = |x − y| = √((x − y) · (x − y)).
From MATH2051 we also know that the dot product allows us to compute lengths of curves
in Rn , see the discussion in Part II. In particular, the triangle inequality can be written in the
following form
dist(x, y) ≤ dist(x, z) + dist(z, y)
for any vectors x, y, and z ∈ Rn . (Make sure that you can explain why.)
Self-Check Question I.2. Note that geometrically the triangle inequality means that the length of
each side of a triangle in Rn is not greater than the sum of lengths of the remaining sides. Using
this observation you should be able to answer the following question: can the numbers 1, 2, and 4 be
lengths of sides of a triangle in Rn ?
2. to compute the cosine of the angle θ between non-zero vectors x and y by the formula

cos θ = (x · y) / (|x| |y|).
The fact that the ratio on the right-hand side above is cos θ for some θ is a consequence of the
Cauchy-Schwarz inequality above: it guarantees that this quotient takes values in the interval [−1, 1].
The equality case in the Cauchy-Schwarz inequality says that the angle θ between non-zero
vectors x and y equals πk, where k ∈ Z, if and only if the vectors are collinear.
Recall that vectors x and y are called orthogonal if x · y = 0, i.e. the angle between them equals
π/2 + πk, k ∈ Z. In particular, we have the following Pythagorean theorem: vectors x and y are orthogonal if and only if |x + y|² = |x|² + |y|².
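For instance, the angle formula and the Pythagorean theorem can be checked on concrete vectors; a Python sketch (the vectors are chosen for illustration):

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

# Angle between non-zero vectors: cos(theta) = (x . y) / (|x| |y|)
x, y = [1.0, 0.0], [1.0, 1.0]
theta = math.acos(dot(x, y) / (norm(x) * norm(y)))
assert math.isclose(theta, math.pi / 4)   # 45 degrees, as geometry predicts

# Pythagorean theorem: if x . y = 0, then |x + y|^2 = |x|^2 + |y|^2
u, v = [3.0, 0.0, 0.0], [0.0, 4.0, 0.0]
s = [a + b for a, b in zip(u, v)]
assert dot(u, v) == 0
assert math.isclose(norm(s) ** 2, norm(u) ** 2 + norm(v) ** 2)
```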
3. to compute volumes of parallelotopes: in more detail, for a system e1 , . . . , ek of k linearly
independent vectors the k-dimensional parallelotope Pk , spanned by them, is defined as
Pk = {t1 e1 + . . . + tk ek : ti ∈ [0, 1]}.
Its k-dimensional volume is given by the formula

Vol_k(Pk) = √det( ei · ej )_{i,j=1,...,k},

that is, the square root of the determinant of the k × k matrix

| e1 · e1   e1 · e2   . . .   e1 · ek |
| e2 · e1   e2 · e2   . . .   e2 · ek |
|  . . .      . . .   . . .    . . .  |
| ek · e1   ek · e2   . . .   ek · ek |.
In particular, if the ei's are pair-wise orthogonal (that is, Pk is an orthotope), then we obtain

Vol_k(Pk) = √((e1 · e1) · · · (ek · ek)) = |e1| · · · |ek|,

that is, the volume is the product of the lengths of all sides.
Self-Check Question I.3. Do you remember the formula for the area of a plane parallelogram?
Can you check that the formula above for the volume of a 2-dimensional parallelotope, which is precisely
a plane parallelogram, coincides with the formula for the area?
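As a sanity check of the determinant formula, the following Python sketch computes the area of a parallelogram via the 2 × 2 matrix of dot products and compares it with the familiar base-times-height answer (the vectors are chosen for illustration):

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def gram_area(e1, e2):
    """Area of the parallelogram spanned by e1, e2, via the determinant of (ei . ej)."""
    g = [[dot(e1, e1), dot(e1, e2)],
         [dot(e2, e1), dot(e2, e2)]]
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    return math.sqrt(det)

# Base 3 along the x-axis, height 2: area should be 6
e1, e2 = [3.0, 0.0, 0.0], [1.0, 2.0, 0.0]
assert math.isclose(gram_area(e1, e2), 6.0)

# Orthogonal sides (an orthotope): the area is the product of the lengths
assert math.isclose(gram_area([2.0, 0.0, 0.0], [0.0, 5.0, 0.0]), 10.0)
```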
Definition I.1. The space Rn equipped with the dot-product, that is the pair (Rn , ·), is called the
n-dimensional Euclidean space.
1.2. Linear subspaces. Most of you are familiar with the following definition; it is taken from the
modules on Linear Algebra.
Definition I.2. A subset X ⊂ Rn is called a linear subspace (or vector subspace) if for any x, y ∈ X
and any a, b ∈ R we have ax + by ∈ X.
To get a feeling for this definition, we consider the following example.
Example I.3. Let X1 , X2 , X3 ⊂ R2 be subsets defined as:
X1 = {(x1 , x2 ) ∈ R2 : x1 = x2 },
X2 = {(x1 , x2 ) ∈ R2 : x1 = x2 + 1},
X3 = {(x1 , x2 ) ∈ R2 : x1 = x22 }.
Then X1 is a linear subspace, but X2 and X3 are not. For the first we need to check that if x = (x1 , x2 )
and y = (y1 , y2 ) are vectors from X1 , then any linear combination ax + by also lies in X1 . Indeed,
we have
ax + by = a(x1 , x2 ) + b(y1 , y2 ) = (ax1 + by1 , ax2 + by2 ).
Since x1 = x2 and y1 = y2 , we conclude that ax1 + by1 = ax2 + by2 , that is ax + by lies in X1 . The
set X2 is not a vector space, since it does not contain the zero vector 0 = (0, 0); by Definition I.2 any
vector space should do so. Finally, to see that X3 is not a vector space, note that the vectors (1, 1)
and (1, −1) lie in X3 , but the sum (2, 0) = (1, 1) + (1, −1) does not.
We proceed with the following important definitions.
Definition I.4. A collection of non-zero vectors x1 , . . . , xk ∈ Rn is called linearly dependent if there
exists a collection of real numbers α1 , . . . , αk ∈ R such that not all of the αi ’s are equal to zero and
α1 x1 + α2 x2 + · · · + αk xk = 0.
A collection of vectors that are not linearly dependent is called linearly independent.
Self-Check Question I.4. Let x1, . . . , xk ∈ Rn be a collection of non-zero vectors that are pair-wise
orthogonal, that is xi · xj = 0 for all i ≠ j. Can you show that these vectors are linearly independent?
Definition I.5. Let X ⊂ Rn be a linear subspace. The dimension of X is the maximal number k such
that there exist k linearly independent vectors in X, k ≤ n. Such a system of linearly independent
vectors is called a basis of X.
It is always important to keep a geometric picture in mind when working with these notions. For
example, one-dimensional subspaces in Rn can be thought of as lines through zero ℓv, that is, the sets of
the form {tv : t ∈ R}, where v ∈ Rn is a fixed non-zero vector.
Self-Check Question I.5. Give an example of a vector subspace in Rn whose dimension equals two.
How would you define the notion of plane through zero in Rn ?
Let X ⊂ Rn be a vector subspace and let x1 , . . . , xk be a basis of X. It is known that any vector
x ∈ X can be uniquely presented as the linear combination
x = α1 x1 + α2 x2 + · · · + αk xk
(I.1)
for some αi ∈ R, where i = 1, . . . , k; that is, the vectors x1, . . . , xk span the vector space X. The
following simple but useful example shows that the dimension of Rn equals n.
Example I.6. Consider Rn , the space of n-tuples (x1 , . . . , xn ), where xi ∈ R. It is a vector space
of dimension n; as a basis one can take a collection e1 , . . . , en , where all coordinates of ei equal zero,
apart from the ith coordinate, which equals one:

ei = (0, . . . , 0, 1, 0, . . . , 0),   with the 1 in the ith position.
In the sequel, we refer to this basis as the standard basis.
Note that the basis vectors ei in the example above are pair-wise orthogonal, ei · ej = 0 for all
i ≠ j, and have unit length |ei| = 1. Such bases are called orthonormal. It is known that for any
subspace X ⊂ Rn there exists an orthonormal basis. Orthonormal bases are very useful, since they
allow us to compute all coefficients αi ’s in decomposition (I.1). More precisely, taking the dot product
of xi with both sides in (I.1), we obtain that αi = x · xi for any i = 1, . . . , k.
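The recipe αi = x · xi is easy to see in action. The following Python sketch (the orthonormal basis and the vector are chosen for illustration) recovers the coefficients of a vector lying in the span of an orthonormal pair in R3:

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

# An orthonormal pair spanning a 2-dimensional subspace of R^3
x1 = [1 / math.sqrt(2), 1 / math.sqrt(2), 0.0]
x2 = [0.0, 0.0, 1.0]

# A vector lying in the span of x1, x2
x = [3.0, 3.0, 5.0]

# The coefficients in x = a1*x1 + a2*x2 are just dot products: ai = x . xi
a1, a2 = dot(x, x1), dot(x, x2)
recon = [a1 * u + a2 * v for u, v in zip(x1, x2)]
assert all(math.isclose(a, b) for a, b in zip(x, recon))
```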
Now we give the following useful definition.
Definition I.7. For a subspace X ⊂ Rn the space X⊥ ⊂ Rn that consists of all vectors y that are
orthogonal to all vectors x ∈ X,
X ⊥ = {y ∈ Rn : x · y = 0 for all x ∈ X},
is called the orthogonal complement of X.
For example, if ℓv is a line through zero in R3, then the orthogonal complement ℓv⊥ is a plane
that is orthogonal to ℓv. More generally, later we will see that the spaces X and X⊥ in Rn have
complementary dimensions, that is, if dim X = k, then dim X⊥ = n − k.
Self-Check Question I.6. Can you show that X⊥ in Definition I.7 is a vector subspace? Can you
show that (X⊥)⊥ = X?
Example I.8. Let us find the orthogonal complement to the line ℓv in R3, where v = (1, 2, 3). First,
note that, since ℓv is one-dimensional, x ∈ ℓv⊥ if and only if x · v = 0. Writing a vector x in coordinates
(x1, x2, x3), we see that the space ℓv⊥ can be described as the space of solutions to the following
equation:

x1 + 2x2 + 3x3 = 0.

As you were taught in first year modules, you can write down a general formula for the solution in the
form of linear combinations of two independent solutions. The latter precisely reflects the fact that
ℓv⊥ is two-dimensional. More precisely, it is straightforward to find two linearly independent vectors
u1 and u2 that are orthogonal to v, e.g. u1 = (2, −1, 0) and u2 = (3, 0, −1), and then any solution x
is a linear combination α1u1 + α2u2 for appropriate α1, α2 ∈ R. Conversely, any linear combination
of this form is a solution.
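The claims of Example I.8 can be verified directly; a Python sketch (the coefficients a1, a2 are arbitrary sample values):

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

v = (1, 2, 3)
u1, u2 = (2, -1, 0), (3, 0, -1)

# Both spanning vectors solve x1 + 2*x2 + 3*x3 = 0, i.e. are orthogonal to v
assert dot(u1, v) == 0 and dot(u2, v) == 0

# Any linear combination a1*u1 + a2*u2 is again a solution
a1, a2 = 5, -7
x = tuple(a1 * p + a2 * q for p, q in zip(u1, u2))
assert dot(x, v) == 0

# u1 and u2 are linearly independent: neither is a scalar multiple of the other
# (compare cross-products of coordinates)
assert u1[2] * u2[0] != u1[0] * u2[2] or u1[1] * u2[0] != u1[0] * u2[1]
```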
1.3. Remarks on classical geometry of the Euclidean plane. Most basic facts of the classical
geometry of the Euclidean plane go back to the first five axioms that describe various arrangements
of points and lines in the plane. More precisely, the first five Euclid axioms are:
A1 . For any two points there exists a line that goes through both of them.
A2 . There exists at most one line that passes through two distinct points.
A3 . Every line contains at least two distinct points.
A4 . There exist three points that do not lie on a straight line.
A5. (Parallel axiom). Let ℓ be a line and p a point that does not lie on ℓ. Then there exists a unique
line that contains p and does not intersect the line ℓ.
Here we intend to explain that all these axioms are simply consequences of the fact that R2 is a linear
space of dimension two. For this we first need to specify more precisely what we mean by "point"
and "line" in R2. By a point we mean an element p ∈ R2. By a line in R2 we mean a set of the form

ℓp,v = {p + tv : t ∈ R} ⊂ R2,    (I.2)

where p ∈ R2 is a point, and v ∈ R2 is a non-zero vector. We call v the direction vector of the line
ℓ = ℓp,v. One can prove the following lemma.
Lemma I.3. Two lines ℓp,v and ℓq,w coincide as sets, ℓp,v = ℓq,w, if and only if the direction vectors
v and w are linearly dependent, and p − q = t0v for some t0 ∈ R.

Proof. First, if the lines ℓp,v and ℓq,w coincide, then we see that q ∈ ℓp,v. Hence, there exists t0 ∈ R
such that q = p + t0v. Further, note that q + w ∈ ℓp,v, and hence,

q + w = p + tv   for some t ∈ R.

Now using the relation q = p + t0v, we obtain that (t − t0)v = w. Thus, v and w are linearly dependent.
Conversely, if q = p + t0v for some t0 ∈ R, then q ∈ ℓp,v. Since the non-zero vectors v and w are
linearly dependent, we may assume that w = αv, where α ≠ 0, and we conclude that

ℓq,w = {q + sw : s ∈ R} = {p + (t0 + sα)v : s ∈ R} = ℓp,v,

that is, the sets ℓq,w and ℓp,v coincide.
With this definition of line we can prove the axioms A1–A5.
Proposition I.4. The lines, viewed as subsets of R2 of the form (I.2), satisfy Euclid's axioms A1–A5.
Proof. Here we prove only the first four axioms; for a full proof see Video 1 on Minerva.
A1: Let p ≠ q be two distinct points; then the line ℓp,v, where v = q − p, contains both points p and
q; they correspond to the values t = 0 and t = 1, respectively.

A2: The uniqueness of such a line is a consequence of Lemma I.3. In more detail, suppose that ℓp′,v′ is
another line that contains p and q. Then, from

p = p′ + t1v′   and   q = p′ + t2v′,

we obtain q − p = (t2 − t1)v′. Since q − p = v, we conclude that v and v′ are linearly dependent, and

p − p′ = t1v′ = (t1 / (t2 − t1)) v.

Thus, by Lemma I.3, we conclude that the lines ℓp,v and ℓp′,v′ coincide.

A3: This is obvious. For example, the points p and p + v always lie on ℓp,v, and since v ≠ 0, they have
to be distinct.

A4: Since R2 is a two-dimensional vector space, for any line ℓp,v we may always find a vector w that
is linearly independent of v. Then the point p + w cannot lie on the line ℓp,v. Indeed, otherwise
there would exist t ∈ R such that

p + w = p + tv ⇒ w = tv,

and we arrive at a contradiction.
2. Linear operators
In this section we make a deeper excursion into Linear Algebra, and recall useful material on linear
operators. It is used later to study orthogonal transformations of the Euclidean space.
2.1. Reminder of basic definitions and facts. In the past you have met the notion of a vector
space. Below we denote by V and W finite-dimensional vector spaces over R. In future we normally
work with Euclidean spaces (Rn , Rm ), spaces of m × n matrices, and their subspaces.
Now we consider maps between vector spaces.
Definition I.9. A map L : V → W is called linear if
L(α1 u1 + α2 u2 ) = α1 L(u1 ) + α2 L(u2 )
for any α1 , α2 ∈ R, u1 , u2 ∈ V.
Linear maps are also called linear operators.
The simplest examples arise when we think about these maps geometrically. For example,
thinking of the Euclidean plane as the set of complex numbers C, it is straightforward to see that
rotations and reflections in the Euclidean plane are linear maps (see Video 2 on Minerva for a detailed
discussion). We will meet higher dimensional generalisations of such maps in Section 3. For now we
confine ourselves to the following example of the projection onto a line.
Example I.10 (Projection onto a line). Let ℓv = {tv : t ∈ R} be a line through the origin, where
the vector v has unit length, |v| = 1. Consider the map L : Rn → ℓv, defined as L(u) = (u · v)v.
Geometrically it describes the orthogonal projection onto ℓv. We claim that L is a linear map. This is
a direct consequence of the properties of the dot product:
L(α1 u1 + α2 u2 ) = ((α1 u1 + α2 u2 ) · v)v = α1 (u1 · v)v + α2 (u2 · v)v = α1 L(u1 ) + α2 L(u2 ).
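The projection and its linearity can be checked numerically; a Python sketch (the direction vector and test vectors are chosen for illustration):

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

# Unit direction vector of the line
v = [1 / math.sqrt(3)] * 3        # |v| = 1

def L(u):
    """Orthogonal projection of u onto the line spanned by v: L(u) = (u . v) v."""
    c = dot(u, v)
    return [c * vi for vi in v]

# Linearity: L(a1*u1 + a2*u2) = a1*L(u1) + a2*L(u2)
u1, u2, a1, a2 = [1.0, 2.0, 3.0], [-1.0, 0.0, 4.0], 2.0, -0.5
lhs = L([a1 * p + a2 * q for p, q in zip(u1, u2)])
rhs = [a1 * p + a2 * q for p, q in zip(L(u1), L(u2))]
assert all(math.isclose(a, b) for a, b in zip(lhs, rhs))

# L fixes vectors already on the line: L(t*v) = t*v
tv = [2.0 * vi for vi in v]
assert all(math.isclose(a, b) for a, b in zip(L(tv), tv))
```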
Definition I.11. A linear map L : V → W is called an isomorphism if it is bijective, that is both
injective and surjective.
Since linear maps are special (in the sense of Definition I.9), their injectivity and surjectivity can
be checked by looking at certain subspaces, the kernel and image (range). We recall them in the next
definition.
Definition I.12. The kernel of a linear map L : V → W is the subset
Ker L = {u ∈ V : L(u) = 0} ⊂ V.
The image Im L is a subset in W , defined as the collection of all w ∈ W for which there exists u ∈ V
such that L(u) = w.
It is straightforward to check that the kernel Ker L and image Im L are subspaces in V and W
respectively. The dimension of Ker L is called the nullity of L, and is denoted by nul L. The dimension
of Im L is called the rank of L, and is denoted by rank L.
Self-Check Question I.7. Can you check, using the definitions above, that the kernel Ker L and
image Im L are indeed subspaces in V and W ?
The following statement is a standard consequence of definitions.
Proposition I.5. A linear map L : V → W is injective if and only if its kernel is trivial, that is
nul L = 0. A linear map L : V → W is surjective if and only if rank L = dim W .
Self-Check Question I.8. Can you prove Proposition I.5?
Let us now compute the nullity and the rank for a linear map from Example I.10.
Example I.13 (Projection onto a line: continued). First, note that the map L : Rn → ℓv is surjective.
Indeed, for any w = tv ∈ ℓv, we have

L(w) = (w · v)v = t|v|²v = tv = w,

where we used that |v| = 1. Thus, the image of L coincides with ℓv, and hence, rank L = dim ℓv = 1.
Now let us analyse the kernel of L. Unravelling the definitions, we obtain

Ker L = {u ∈ Rn : L(u) = 0} = {u ∈ Rn : u · v = 0} = ℓv⊥,

where we used that v ≠ 0 in the second equality, and Definition I.7 in the last. Thus, we obtain

nul L = dim Ker L = dim ℓv⊥.
The last space is the orthogonal hyperplane to a line, and intuitively, its dimension should be equal
to n − 1. This is indeed the case, and is a consequence of the following well-known statement.
Proposition I.6 (Rank-Nullity theorem). Let L : V → W be a linear map. Then its nullity and rank
satisfy the following relation:

dim V = rank L + nul L.    (I.3)

In particular, rank L ≤ dim V.
For a proof we refer to the modules on Linear Algebra (e.g. MATH1012, MATH1060). From
Proposition I.6 we conclude that for any linear map the following inequality holds:
rank L ≤ min{dim V, dim W}.    (I.4)
Important convention: when equality in (I.4) is attained we say that L has maximal rank.
Corollary I.7. Let L : V → W be a linear operator. Then the following holds:
(i) if dim V ≤ dim W, the operator L has maximal rank if and only if it is injective;
(ii) if dim V ≥ dim W, the operator L has maximal rank if and only if it is surjective.
In particular, if dim V = dim W, then a linear operator L has maximal rank if and only if it is an
isomorphism.
Proof. Under the hypothesis dim V ≤ dim W, the operator L has maximal rank if and only if
rank L = dim V, and by Proposition I.6 this occurs if and only if nul L = 0. By Proposition I.5
the latter is equivalent to L being injective. Thus, the statement (i) is proved. The statement (ii) is
a reformulation of Proposition I.5.
We end this subsection with the following example of an isomorphism, which gives a useful identification between the space of m × n matrices and Rmn.
Example I.14. Let V be the space of m × n matrices and let W be Rmn. Consider the basis of V that
consists of the matrices Eji, all of whose components are equal to zero apart from the component in the jth
row and ith column, which is equal to 1. Also denote by eℓ the standard basis in W: all components of eℓ
are equal to zero apart from the ℓth, which is equal to 1. Consider the linear operator Im,n : V → W that
sends a matrix Eji to the vector en(j−1)+i, where j = 1, . . . , m and i = 1, . . . , n. It is straightforward
to see that the linear operator Im,n is a linear isomorphism.
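The index formula n(j − 1) + i is just row-major flattening of matrix positions. A Python sketch (the helper name is hypothetical) confirms it hits each index 1, . . . , mn exactly once, which is why Im,n takes a basis to a basis:

```python
# Row-major flattening of matrix positions, as in the operator I_{m,n}:
# the basis matrix with a 1 in row j, column i goes to e_{n(j-1) + i}.
# (1-based indices, matching the notes.)
def flat_index(j, i, n):
    return n * (j - 1) + i

m, n = 2, 3
# The six positions of a 2x3 matrix hit each of the indices 1..6 exactly once,
# so I_{m,n} is a bijection on basis vectors, hence an isomorphism.
indices = sorted(flat_index(j, i, n)
                 for j in range(1, m + 1) for i in range(1, n + 1))
assert indices == [1, 2, 3, 4, 5, 6]
```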
2.2. Linear operators and matrices. Let {v1 , . . . , vn } and {w1 , . . . , wm } be bases in V and W
respectively. Then to any linear operator L : V → W , we can assign an m × n-matrix AL whose
components (aji) are obtained from the relations

L(vi) = Σ_{j=1}^{m} aji wj,   where i = 1, . . . , n.
The correspondence L 7→ AL satisfies the following properties:
(i) AL = 0 if and only if L = 0,
(ii) AλL = λAL for any λ ∈ R and L : V → W ,
(iii) AL+S = AL + AS for any linear operators L, S : V → W ,
(iv) AL◦S = AL AS for any linear operators S : Z → V and L : V → W .
(v) Ker L = {Σ_i xi vi}, where (x1, . . . , xn) runs over the solutions of the linear system

Σ_{i=1}^{n} aji xi = 0   for any j = 1, . . . , m;

in particular, if n = m, then nul L = 0 if and only if det AL ≠ 0.
(vi) rank L = rank AL, where the right hand-side is the rank of the matrix AL; in particular, if
n = m, we see that L has maximal rank if and only if det AL ≠ 0.
Recall that the rank of a matrix A can be defined as the maximal number of linearly independent
columns of A (so called column rank), or equivalently, as the maximal number of linearly independent
rows of A (so called row rank); the fundamental theorem of linear algebra says that these numbers
are equal (MATH1011, MATH1012). In particular, if a given matrix A is reduced to the row echelon
form, then its rank equals the number of non-zero rows.
Self-Check Question I.9. Can you compute the matrix of the operator in Example I.10? The answer
depends on the choice of bases, but you may choose bases that you prefer. Can you read off the rank
of L from your answer?
2.3. Linear isomorphisms (invertible linear operators). Now we specialise considerations to
linear operators L : Rn → Rn . By the discussion above, in this setting the following hypotheses are
equivalent:
nul L = 0 ⇔ rank L = n ⇔ det AL ≠ 0 ⇔ L is an isomorphism.    (I.5)
In particular, if L satisfies any of the hypotheses in (I.5), then it is invertible, and the inverse linear
map L−1 also satisfies these hypotheses. If L and S are two maps that satisfy any of the hypotheses
in (I.5), then so does the composition L ◦ S.
Definition I.15. The collection of all linear operators that satisfy any of the hypotheses in (I.5) is
called the general linear group and is denoted GL(n).
Choosing a basis {e1 , . . . , en } in Rn , we may identify linear operators L with n × n matrices AL ,
and hence, may equivalently say that
GL(n) = {n × n-matrices A : det A ≠ 0}.
If {ē1 , . . . , ēn } is another basis, and ĀL is the matrix of a linear operator in it, then the following
relation holds
ĀL = C⁻¹ AL C,   where C = (cji), and ej = Σ_{i=1}^{n} cji ēi.
Here C is a matrix representing the so-called transition map, which maps one basis to the other. In
particular, by the multiplicative property of the determinant we conclude that
det ĀL = det(C⁻¹ AL C) = det C⁻¹ det AL det C = det AL,    (I.6)
where we used that det C −1 = 1/ det C. In particular, relation (I.6) shows that the following definition
does not depend on a basis choice.
Definition I.16. An isomorphism (invertible operator) L : Rn → Rn is called orientation preserving,
if its matrix AL with respect to some (and hence any) basis satisfies det AL > 0. Otherwise, an
isomorphism L is called orientation reversing.
Example I.17. Consider the so-called reflection (or mirror symmetry) operator Mn : Rn → Rn ,
defined by the formula
Mn (x1 , . . . , xn−1 , xn ) = (x1 , . . . , xn−1 , −xn ).
In the standard basis e1, . . . , en (see Example I.6), its matrix has the form

AMn = | 1   0   · · ·   0    0 |
      | 0   1   · · ·   0    0 |
      | · · · · · · · · · · · · |
      | 0   0   · · ·   1    0 |
      | 0   0   · · ·   0   −1 |.
Thus, det AMn = −1, and we conclude that Mn is orientation reversing.
3. Orthogonal transformations and Euclidean isometries
As we know, standard transformations in the Euclidean plane, such as rotations and reflections, play
an important role in Euclidean geometry. Our next aim is to describe versions of such maps in higher
dimensional Euclidean spaces.
Definition I.18. A linear operator L : Rn → Rn is called orthogonal (or an orthogonal transformation) if it preserves the dot product:

L(u) · L(v) = u · v   for any u, v ∈ Rn.    (I.7)
Basic properties of orthogonal operators can be summarised in the following statement.
Proposition I.8. Let L : Rn → Rn be an orthogonal operator. Then:
(i) it is invertible, and the inverse operator L−1 is also orthogonal;
(ii) if S : Rn → Rn is another orthogonal operator, then the composition L ◦ S is also an orthogonal
operator;
(iii) the operator L preserves lengths of vectors, that is |L(u)| = |u| for any u ∈ Rn .
Proof. We first check property (iii):

|L(u)|² = L(u) · L(u) = u · u = |u|²
for any u ∈ Rn . Now property (iii) implies that nul L = 0. Hence, any of the hypotheses in (I.5) is
satisfied, and we conclude that L is invertible. Then using relation (I.7), we obtain
L−1 (u) · L−1 (v) = L(L−1 (u)) · L(L−1 (v)) = u · v
for any u and v ∈ Rn . Thus, property (i) is verified. Now the last statement (ii) follows by the
repeated application of relation (I.7):
(L ◦ S)(u) · (L ◦ S)(v) = L(S(u)) · L(S(v)) = S(u) · S(v) = u · v
for any u and v ∈ Rn .
The first two statements in Proposition I.8 show that the collection of all orthogonal operators
L : Rn → Rn forms a group. It is called the orthogonal group, and is denoted by O(n).
Now we take an even more general point of view, and look at maps that preserve distances between
points. We proceed with the following important definition.
Definition I.19. A (not necessarily linear!) map T : Rn → Rn is called a Euclidean isometry, if it
preserves distances between points, that is
dist(u, v) = dist(T (u), T (v))
for any u, v ∈ Rn .
Example I.20 (Translations). The translation by a fixed vector p ∈ Rn is the map P : Rn → Rn
defined by the rule u ↦ u + p. It is straightforward to check that P is a Euclidean isometry:
dist(P (u), P (v)) = |P (u) − P (v)| = |(u + p) − (v + p)| = |u − v| = dist(u, v)
for any u, v ∈ Rn .
Note that Euclidean isometries in the example above are not linear maps. Among linear maps we
have the following examples of Euclidean isometries.
Corollary I.9. Any orthogonal operator L : Rn → Rn is a Euclidean isometry.
Proof. Since L is a linear operator, then using property (iii) in Proposition I.8, we obtain
dist(u, v) = |u − v| = |L(u − v)| = |L(u) − L(v)| = dist(L(u), L(v))
for any u and v ∈ Rn .
To a certain extent the converse also holds.
Theorem I.10. Let T : Rn → Rn be a Euclidean isometry that fixes the origin, that is T (0) = 0.
Then T is an orthogonal linear operator.
Proof. For MATH5113M only.
Note that there is no assumption that the map T is linear in Theorem I.10 – the statement says
that if T is an isometry that fixes the origin, then it has to be linear, and has to be an orthogonal
operator!
Now we look at matrices representing orthogonal linear operators. Let (ei) be an orthonormal
basis in Rn, that is ei · ej = δij for any i, j = 1, . . . , n, where δij is the Kronecker symbol, also
called the Kronecker delta (it equals 1 if i = j and 0 otherwise). For example, we can choose (ei)
to be the standard basis, that is all components of the vector ei are equal to zero apart from the ith
that is equal to 1. Let A = (aji ) be a matrix of L in this basis. The following proposition describes
the property of a linear operator L to be orthogonal in terms of its matrix A.
Proposition I.11. Let (ei) be an orthonormal basis in Rn. Then a linear operator L : Rn → Rn is
orthogonal if and only if its matrix A with respect to the basis (ei) satisfies the relation AᵀA = E,
where E is the identity matrix, and Aᵀ stands for the transpose matrix of A.
Proof. Suppose that L is orthogonal. Then testing relation (I.7) on basis vectors ei and ej, we obtain

δij = ei · ej = L(ei) · L(ej) = (Σ_k aki ek) · (Σ_l alj el) = Σ_{k,l} aki alj (ek · el) = Σ_{k,l} aki alj δkl = Σ_ℓ aℓi aℓj = (AᵀA)ij   for any i, j,

where δij is the Kronecker symbol. The converse statement follows by reversing the above
argument.
Self-Check Question I.10. The argument in the proof of Proposition I.11 shows that

L is orthogonal ⇒ AᵀA = E.

Can you reverse the argument and prove the converse statement, thus completing the proof of Proposition I.11?
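The criterion AᵀA = E is also convenient computationally. The following Python sketch (with an arbitrarily chosen rotation angle) tests it on a rotation matrix, which is orthogonal, and on a shear, which is not:

```python
import math

def transpose(A):
    return [list(row) for row in zip(*A)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def is_orthogonal(A, tol=1e-12):
    """Check the relation A^T A = E entrywise, up to a small tolerance."""
    P = matmul(transpose(A), A)
    n = len(A)
    return all(abs(P[i][j] - (1.0 if i == j else 0.0)) < tol
               for i in range(n) for j in range(n))

t = math.pi / 5
rotation = [[math.cos(t), -math.sin(t)],
            [math.sin(t),  math.cos(t)]]
shear = [[1.0, 1.0],
         [0.0, 1.0]]

assert is_orthogonal(rotation)
assert not is_orthogonal(shear)
```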
Example I.21. The first of the 2 × 2-matrices below is orthogonal, and the second is not:

| 1/2    −√3/2 |           | 1/2    √3/2 |
| √3/2     1/2 |    and    | √3/2    1/2 |.
Proposition I.11 shows that the orthogonal group O(n) can be equivalently defined as the following
subset in the set of matrices
O(n) = {A ∈ GL(n) : AᵀA = E}.
As another consequence of Proposition I.11 we can describe all orthogonal operators on a Euclidean
plane R2 .
Corollary I.12. Let e1, e2 be an orthonormal basis in the Euclidean plane R2. Then the matrix AL of
any orthogonal operator L : R2 → R2 has one of the following forms:

AL = | cos θ   −sin θ |      or      AL = | cos θ    sin θ |
     | sin θ    cos θ |                   | sin θ   −cos θ |

according to whether L is orientation preserving (det AL > 0) or not (det AL < 0), where 0 ≤ θ < 2π.

Proof. Suppose that the matrix A of L is given in the form

A = | a   c |
    | b   d |.

Then the relation AᵀA = E yields the following identities:

a² + b² = 1,   ac + bd = 0,   c² + d² = 1.

By the first relation we may set a = cos θ and b = sin θ for some 0 ≤ θ < 2π. The second relation
yields c = −t sin θ and d = t cos θ for some t ∈ R. Now substituting these identities in the last
relation above, we obtain t² = 1. Thus, t = ±1, and the sign corresponds to the cases det A > 0 or
det A < 0.

The statement above shows that orientation preserving orthogonal operators R2 → R2 are precisely rotations (through an angle θ in the anti-clockwise direction), and orientation reversing orthogonal operators R2 → R2 are precisely rotations (through an angle θ in the clockwise direction) followed by a reflection (mirror symmetry) in e1 , see Video 2 on Minerva for a computation of matrices of such transformations. The matrix form of a reflection (mirror symmetry) also appears in Example I.17.

4. Remarks on notation: pre-images of maps

In the final section of this chapter we discuss important notation that is often used in the sequel chapters. Let X and Y be abstract sets, and let u : X → Y be a map.

Definition I.22. For any subset B ⊂ Y the subset
u−1 (B) := {x ∈ X : u(x) ∈ B} ⊂ X
is called the pre-image of B under u. The pre-image of a one-point set B = {b} is called the pre-image of the point b, and is denoted by u−1 (b).

Warning: the pre-image u−1 (B) is defined for any map u : X → Y and any subset B ⊂ Y . In particular, we do not assume that u is invertible. The following standard example is a good illustration of the definition above.

Example I.23. Let f : R → R be the function given by the formula f (t) = t2 . Then the pre-image of the interval [1, 2] is the set
f −1 ([1, 2]) = [−√2, −1] ∪ [1, √2].

If you understood Example I.23, you should be able to answer the following question.

Self-Check Question I.11. Let u : X → Y be a map, and let A ⊂ X be a subset. Is the relation u−1 (u(A)) = A true? What is the correct statement that relates A and u−1 (u(A))?

The notion of pre-image appears very often in many settings. For example, the set of all real roots of the polynomial pn (t) = an tn + · · · + a1 t + a0 is precisely the pre-image pn−1 (0), where pn is viewed as a map pn : R → R. The kernel of a linear map L : V → W is precisely the pre-image L−1 (0).
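Membership in a pre-image is easy to test computationally. The following sketch is our own illustration (the helper name `preimage_contains` is invented) and agrees with the computation in Example I.23, where f(t) = t²:

```python
def preimage_contains(f, lo, hi, x):
    """Decide whether x lies in the pre-image f^{-1}([lo, hi])."""
    return lo <= f(x) <= hi

f = lambda t: t * t  # the function of Example I.23

# -1.2 lies in [-sqrt(2), -1], so it belongs to f^{-1}([1, 2]);
# 0.5 does not, since f(0.5) = 0.25 is outside [1, 2].
print(preimage_contains(f, 1, 2, -1.2), preimage_contains(f, 1, 2, 0.5))  # True False
```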
In particular, a plane H ⊂ R3 can be described as the pre-image L−1 (0) of the orthogonal projection L : R3 → R3 , x ↦ (x · v)v, where v is a unit length vector orthogonal to H. This way of describing geometric objects as pre-images plays an important role in the sequel chapters.

We end with a list of relations, which show that taking the pre-image is an operation that agrees very nicely with standard operations over sets.

Self-Check Question I.12. Let u : X → Y be a map. Show that for any two subsets A, B ⊂ Y the following relations hold:
u−1 (A ∪ B) = u−1 (A) ∪ u−1 (B),  u−1 (A ∩ B) = u−1 (A) ∩ u−1 (B),  (I.8)
A ⊂ B ⇒ u−1 (A) ⊂ u−1 (B),  u−1 (Y \ B) = X \ u−1 (B).  (I.9)

Chapter II
Global theory of curves

In this chapter we recall basic notions from the theory of curves and study regular closed curves. Later we introduce the notion of regular homotopy between regular closed curves, describing the way such curves deform, and classify all homotopy classes of regular closed curves in the plane R2 .

5. Curves in the Euclidean space Rn : basic definitions

In this section we recall basic notions of the theory of curves that you met in MATH2051, such as the notion of regular parameterised curve, length, and curvature vector.

Notation and conventions. Below by I ⊂ R we denote an interval of the real line; it can be open (a, b), semi-open [a, b) or (a, b], or closed [a, b]. A vector-function f : I → Rn is called smooth on I if it extends to an infinitely differentiable vector-function defined on an open interval J that contains I, I ⊂ J. For example, if I = (a, b), then this simply means that f is infinitely differentiable on I; here we can take J = I. If I = [a, b], then a smooth function on I is one that extends to a smooth function defined on (a − ε, b + ε) for some ε > 0.
Recall the following definition from MATH2051.
Definition II.1. A parameterised curve (PC) is a smooth vector-function γ : I → Rn . It is called
regular (RPC) if γ′(t) ≠ 0 for any t ∈ I. For a given parametrised curve γ we call γ′ the velocity
of γ, |γ′| the speed of γ, and γ″ the acceleration of γ.
Example II.2. The simplest standard examples are:
(i) Straight line. The parameterised curve γ : R → Rn is given by γ(t) = a + tv, where a, v ∈ Rn . It
is regular if and only if v ≠ 0.
(ii) Simple circular curve. This parametrised plane curve γ : R → R2 is defined by the relation γ(t) = (r cos t, r sin t), where r > 0. Computing the velocity vector γ′, we obtain
γ′(t) = (−r sin t, r cos t). Since |γ′| = r > 0, we conclude that γ is a regular parameterised curve
(RPC).
(iii) Now consider a plane parameterised curve γ : R → R2 defined as γ(t) = (t3 , t2 ). We claim that it
is not regular. Indeed, computing the velocity, we obtain γ′(t) = (3t2 , 2t), and hence, γ′(0) = (0, 0).
Thus, the condition "γ′(t) ≠ 0 for any t ∈ I = R" fails to hold, and γ is not a regular parametrised
curve. (In the space below draw the image of this curve.)
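The failure of regularity at t = 0 can be checked in two lines. This sketch is our own illustration (the function name is invented):

```python
def velocity(t):
    """Velocity of the curve gamma(t) = (t^3, t^2)."""
    return (3 * t * t, 2 * t)

# The velocity vanishes at t = 0, so the regularity condition fails there.
print(velocity(0.0))  # (0.0, 0.0)
```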
Self-Check Question II.1. Consider the so-called circular helix given by the relation
γ(t) = (r cos t, r sin t, ht), where r > 0. Show that for any h ∈ R it is an RPC.
Definition II.3. A re-parametrisation of a PC γ : I → Rn is another PC γ̃ : J → Rn such that
γ̃ = γ ◦ ϕ, where ϕ : J → I is a smooth surjective function such that ϕ′(τ) > 0 for any τ ∈ J. The
function ϕ : J → I is called the parameter transformation.
Self-Check Question II.2. Can you see that the conditions of Definition II.3 imply that the re-parametrisation function ϕ : J → I is actually a bijection? Can you see that ϕ : [0, 1] → [a, b] such
that ϕ′(τ) > 0, and ϕ(0) = a, ϕ(1) = b, is a parameter transformation?
The following statement says that the property of a curve being regular does not change under a
re-parametrisation.
Proposition II.1. Every re-parametrisation of an RPC is an RPC.
Proof. In the notation of Definition II.3, by the chain rule we obtain
γ̃′(τ) = (γ ◦ ϕ)′(τ) = γ′(ϕ(τ)) ϕ′(τ),
where τ ∈ J. Since ϕ′(τ) > 0 and γ′(t) ≠ 0 for any t ∈ I, we conclude that γ̃′(τ) ≠ 0 for any τ ∈ J.
Thus, the PC γ̃ is indeed regular.
Self-Check Question II.3. The argument in the proof of Proposition II.1 uses the chain rule for
real-valued functions. Do you remember this statement? Can you write it down?
In future we tend to identify parameterised curves that can be obtained from each other by a
re-parametrisation. The reason is that they carry most of the information that we are interested
in: such curves have the same image, are traversed in the same direction and the same number of
times. In particular, given a parametrised curve γ : [a, b] → Rn by a re-parametrisation we can always
assume that it is defined on the unit interval [0, 1]. Indeed, this can be achieved by considering the
PC γ̃(t) = γ(a + t(b − a)), where t ∈ [0, 1].
Example II.4. Consider the straight line γ(t) = a + tv, where a, v ∈ Rn , the vector v is non-zero, and t ranges over the real line, that is I = R. Clearly, the PC γ̃(τ) = a + (tan τ)v, where
τ ∈ J = (−π/2, π/2), is a re-parametrisation of γ; the function ϕ(τ) = tan τ satisfies the conditions
of the parameter transformation in Definition II.3. However, none of the following PCs can be
obtained from γ by a re-parametrisation:
γ̃1 : R → Rn , γ̃1 (t) = a + t3 v;  γ̃2 : R → Rn , γ̃2 (t) = a − tv;  γ̃3 : R → Rn , γ̃3 (t) = b + tv,
where b ≠ a. Indeed, the PC γ̃1 is not regular (γ̃1′(0) = 0), while γ is. Thus, if γ̃1 were a
re-parametrisation of γ, we would be in contradiction with Proposition II.1. The PC γ̃2 is a straight
line that is traversed in the opposite direction, and the PC γ̃3 has a different image, since b ≠ a.
Recall another definition from MATH2051.
Definition II.5. Let γ : I → Rn be a PC. For a closed interval [t0 , t1 ] ⊂ I the quantity
\[
L(\gamma|[t_0, t_1]) = \int_{t_0}^{t_1} |\gamma'(t)|\, dt
\]
is called the length of the arc γ|[t0 , t1 ].
The length is a basic geometric characteristic of a curve. The following statement says that it is
a natural quantity in the sense that it does not change under a re-parametrisation.
Proposition II.2. The length of an arc does not change under a re-parametrisation.
Proof. Let γ̃ : J → Rn be a re-parametrisation of a PC γ : I → Rn , that is γ̃ = γ ◦ ϕ, where ϕ : J → I
is a function that satisfies the hypotheses of Definition II.3. Suppose that ϕ maps [τ0 , τ1 ] → [t0 , t1 ]
bijectively. Then by the change of variables formula, we obtain
\[
L(\tilde\gamma|[\tau_0, \tau_1]) = \int_{\tau_0}^{\tau_1} |\tilde\gamma'(\tau)|\, d\tau = \int_{\tau_0}^{\tau_1} |\gamma'(\varphi(\tau))|\,\varphi'(\tau)\, d\tau = \int_{t_0}^{t_1} |\gamma'(t)|\, dt = L(\gamma|[t_0, t_1]),
\]
where the hypothesis ϕ′(τ) > 0 has been used twice – in the second relation and in the third (change
of variable in the integral), and t0 = ϕ(τ0 ), t1 = ϕ(τ1 ).
Self-Check Question II.4. The argument in the proof of Proposition II.2 above uses the change of
variables in the integral formula. Do you remember this formula? Can you write it down?
The following example shows that the property of curves being re-parametrisations of one another
is not preserved when restricting to arcs.
Example II.6. Consider the PCs γℓ : R → R2 defined by the relation γℓ (t) = (cos 2πℓt, sin 2πℓt),
where t ∈ R and ℓ > 0 is an integer. A direct computation gives |γℓ′(t)| = 2πℓ, and thus, we can
compute the lengths
\[
L(\gamma_\ell|[0, 1]) = \int_0^1 |\gamma_\ell'(t)|\, dt = \int_0^1 2\pi\ell\, dt = 2\pi\ell. \tag{II.1}
\]
Note that the PC γℓ : R → R2 is a re-parametrisation of γ1 ; the parameter transformation is the
function ϕℓ : R → R, t ↦ ℓt. However, for ℓ > 1 the PC obtained by restricting γℓ to [0, 1], that is γℓ |[0, 1],
cannot be obtained from γ1 |[0, 1] by a re-parametrisation. Otherwise, by Proposition II.2 they would
have the same length, which contradicts formula (II.1).
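The lengths in formula (II.1) can be confirmed by numerical integration. This sketch is our own illustration (the helper names are invented; a midpoint rule approximates the integral):

```python
import math

def curve_length(speed, n=10000):
    """Midpoint-rule approximation of the integral of |gamma'(t)| over [0, 1]."""
    return sum(speed((k + 0.5) / n) for k in range(n)) / n

def speed_l(l):
    """Speed |gamma_l'(t)| of gamma_l(t) = (cos 2*pi*l*t, sin 2*pi*l*t)."""
    w = 2 * math.pi * l
    return lambda t: math.hypot(-w * math.sin(w * t), w * math.cos(w * t))

for l in (1, 2, 3):
    # agrees with L(gamma_l | [0, 1]) = 2*pi*l from (II.1)
    assert abs(curve_length(speed_l(l)) - 2 * math.pi * l) < 1e-9
```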
We proceed with the following definition.
Definition II.7. A PC γ : I → Rn is called a unit speed curve (USC) if |γ′(t)| = 1 for any t ∈ I.
Unit speed parametrisations can be very useful, since many computations are simplified for such
curves. The following statement shows that any RPC can be re-parametrised to a USC.
Proposition II.3. Let γ : I → Rn be an RPC. Then there exists a re-parametrisation γ̃ of γ that
has unit speed.
Proof. Pick a point t0 ∈ I and consider the arc-length function
\[
s(t) = \int_{t_0}^{t} |\gamma'(\tau)|\, d\tau.
\]
Clearly, s : I → R is a smooth function, and s′(t) = |γ′(t)| > 0. Denote by J the image of s; it
is an interval in R (make sure you can explain this). Then s : I → J is bijective, and there is an
inverse function ϕ = s−1 : J → I. Moreover, by the inverse function theorem, ϕ is smooth, and
ϕ′(τ) = 1/s′(ϕ(τ)) > 0. Thus, the function ϕ can be used as a parameter transformation, and we
claim that the parametrised curve γ̃ = γ ◦ ϕ is a unit speed curve. Indeed, by the chain rule we obtain
γ̃′(τ) = γ′(ϕ(τ)) ϕ′(τ) = γ′(ϕ(τ))/s′(ϕ(τ)) = γ′(ϕ(τ))/|γ′(ϕ(τ))|,
and hence, we conclude that |γ̃′(τ)| = 1.
Self-Check Question II.5. The argument in the proof of Proposition II.3 uses the inverse function
theorem. Do you remember what this theorem says? Can you state it?
The unit speed parametrisation constructed in the proof of Prop II.3 is often referred to as the
arc-length parametrisation. It is also known (see MATH2051) that any unit speed parametrisation is
an arc-length parametrisation for an appropriate choice of t0 ∈ I.
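The construction in the proof of Prop II.3 can be imitated numerically: approximate s(t) by quadrature and invert it by bisection. The sketch below is our own illustration (all helper names are invented), tried on a circle of radius 2, where the arc-length parametrisation is known in closed form:

```python
import math

def arc_length(speed, t0, t, n=2000):
    """s(t) = integral of the speed from t0 to t, midpoint rule."""
    h = (t - t0) / n
    return sum(speed(t0 + (k + 0.5) * h) for k in range(n)) * h

def invert_arc_length(speed, t0, sigma, lo, hi, iters=60):
    """Solve s(t) = sigma by bisection; s is strictly increasing since the speed is positive."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if arc_length(speed, t0, mid) < sigma:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A circle of radius 2 has constant speed 2, so arc length sigma corresponds to t = sigma / 2.
speed = lambda t: 2.0
t = invert_arc_length(speed, 0.0, 3.0, 0.0, 10.0)
print(round(t, 6))  # 1.5
```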
We continue with the following definition.
Definition II.8. Let γ : I → Rn be an RPC. The vector-function
\[
k(t) = \frac{\gamma_n''(t)}{|\gamma'(t)|^2} = \frac{1}{|\gamma'(t)|^2}\left(\gamma''(t) - \frac{\gamma''(t)\cdot\gamma'(t)}{|\gamma'(t)|^2}\,\gamma'(t)\right),
\]
where γn″(t) is the normal component of the acceleration, given by the expression in the brackets
above, is called the curvature vector of γ.
Self-Check Question II.6. Can you see that the curvature vector is always orthogonal to the
velocity vector, that is k(t) · γ′(t) = 0 for all t ∈ I? Can you see that if a PC γ is a USC, then
k(t) = γ″(t)?
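The defining formula of the curvature vector is easy to evaluate for a circle of radius r, where |k| = 1/r is the expected answer. This sketch is our own illustration (the helper name is invented):

```python
import math

def curvature_vector(d1, d2):
    """k = (gamma'' - (gamma''.gamma'/|gamma'|^2) gamma') / |gamma'|^2 for a plane curve."""
    sp2 = d1[0] ** 2 + d1[1] ** 2
    proj = (d2[0] * d1[0] + d2[1] * d1[1]) / sp2
    return ((d2[0] - proj * d1[0]) / sp2, (d2[1] - proj * d1[1]) / sp2)

r, t = 3.0, 0.7
d1 = (-r * math.sin(t), r * math.cos(t))   # gamma'(t) for gamma(t) = (r cos t, r sin t)
d2 = (-r * math.cos(t), -r * math.sin(t))  # gamma''(t)
k = curvature_vector(d1, d2)

assert abs(math.hypot(*k) - 1 / r) < 1e-9       # |k| = 1/r for a circle of radius r
assert abs(k[0] * d1[0] + k[1] * d1[1]) < 1e-9  # k is orthogonal to the velocity
```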
As we know (see Prop II.2), the length is invariant under a re-parametrisation of a curve, and
hence, is a well-defined geometric quantity if we identify PCs that are obtained from each other by a
re-parametrisation. The following proposition shows that the same is true for the curvature vector of a curve.
Proposition II.4. The curvature vector is unchanged under a re-parametrisation, that is, if
γ̃ : J → Rn is a re-parametrisation of an RPC γ : I → Rn , then
k̃(t) = k(ϕ(t)) for any t ∈ J,
where γ̃ = γ ◦ ϕ, k̃ is the curvature vector of γ̃, and k is the curvature vector of γ.
Proof. See Exercise Sheet 2.
6. Closed curves
Now we introduce the notion of closed curve. Intuitively, such a curve should look like a deformation
of a circular curve, where there is no particular distinction between the ends. To make this rigorous
and to be able to work with such curves, we first recall basic properties of smooth periodic functions,
and then use them to define closed curves and their re-parametrisations. In the sequel chapters closed
curves will serve as examples of one-dimensional surfaces.
6.1. Periodic vector-functions. We start with a preliminary discussion on periodic and equivariant
functions.
Definition II.9. A vector-function f : R → Rn is called T -periodic (or periodic with period T > 0)
if f (t + T ) = f (t) for any t ∈ R.
Example II.10. The function f (t) = sin(2πt) is a 1-periodic smooth function, and g(x) = sin x is
a 2π-periodic smooth function. On the other hand, any unbounded continuous function h : R → R
(e.g. h(t) = t) cannot be T -periodic with T > 0. (Make sure that you can explain why.)
Let us ask the question: when does a vector-function γ : [a, b] → Rn admit a (b − a)-periodic extension
to the real line? The answer is given by the following proposition.
Proposition II.5. A smooth function γ : [a, b] → Rn has a smooth (b − a)-periodic extension if and
only if all derivatives at the ends coincide, that is γ (m) (a) = γ (m) (b) for any non-negative integer m.
Proof. (For MATH5113M only.) Let γ̄ : R → Rn be a smooth vector-function with period T = b − a
such that γ̄(t) = γ(t) for any t ∈ [a, b]. Then, differentiating the periodicity relation γ̄(t + T ) = γ̄(t), we obtain
γ̄(m) (t + T ) = γ̄(m) (t) for any t ∈ R and any integer m ≥ 0.
In particular, since T = (b − a), setting t = a, we obtain
γ(m) (b) = γ̄(m) (a + T ) = γ̄(m) (a) = γ(m) (a) for any integer m ≥ 0.
Conversely, let γ : [a, b] → Rn be a smooth vector-function such that γ(m) (a) = γ(m) (b) for any
non-negative integer m. We define its extension γ̄ : R → Rn by setting γ̄(t) = γ(r), where r ∈ [a, b)
is such that t = r + N (b − a) for an appropriate integer N . (Can you write down a formula for N ?)
So defined, γ̄ is indeed periodic with period T = b − a:
γ̄(t + T ) = γ̄(r + N T + T ) = γ(r) = γ̄(r + N T ) = γ̄(t)
for any t ∈ R. Since γ is smooth on (a, b), by periodicity we conclude that γ̄ is smooth on all intervals
(a + T k, b + T k), where k ∈ Z. Thus, it remains to check that γ̄ is smooth at the points a + T k, where
k ∈ Z. (Note that b + T k = a + T (k + 1).) We show that the derivative γ̄(m) exists for any m ≥ 1.
Assume that the function γ̄ is differentiable (m − 1) times everywhere, and let us show that γ̄(m−1)
is differentiable at the points a + T k, where k ∈ Z. We have:
\[
\lim_{t\to(a+Tk)^{+}} \bar\gamma^{(m)}(t) = \lim_{t\to a^{+}} \gamma^{(m)}(t) = \gamma^{(m)}(a) = \gamma^{(m)}(b) = \lim_{t\to b^{-}} \gamma^{(m)}(t) = \lim_{t\to(b+T(k-1))^{-}} \bar\gamma^{(m)}(t) = \lim_{t\to(a+Tk)^{-}} \bar\gamma^{(m)}(t).
\]
Thus, the limit of γ̄(m) (t) as t → (a + T k) exists, and by the l'Hôpital rule the function γ̄(m−1) is
differentiable at the points a + T k, where k ∈ Z.
Definition II.11. A real-valued function ϕ : R → R is called (T1 , T2 )-equivariant for given positive
real numbers T1 and T2 , if ϕ(t + T2 ) = ϕ(t) + T1 for any t ∈ R.
Example II.12. The function ϕ(t) = 2πt is a (2π, 1)-equivariant function. Indeed, for any t ∈ R we
have
ϕ(t + 1) = 2π(t + 1) = 2πt + 2π = ϕ(t) + 2π.
The following statement explains the usefulness of equivariant functions.
Proposition II.6. Let ϕ : R → R be a (T1 , T2 )-equivariant function. Then for any T1 -periodic
vector-function f : R → Rn the composition f ◦ ϕ is a T2 -periodic vector function.
Proof. Indeed, for any t ∈ R we have
f ◦ ϕ(t + T2 ) = f (ϕ(t + T2 )) = f (ϕ(t) + T1 ) = f (ϕ(t)) = f ◦ ϕ(t),
where in the second relation we used equivariance of ϕ, and in the third – the periodicity of f .
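Proposition II.6 can be sanity-checked numerically on the functions of Example II.12. This sketch is our own illustration (the names are invented; a small tolerance absorbs floating-point error):

```python
import math

f = math.sin                      # a 2*pi-periodic function
phi = lambda t: 2 * math.pi * t   # (2*pi, 1)-equivariant: phi(t + 1) = phi(t) + 2*pi
g = lambda t: f(phi(t))           # by Proposition II.6 this should be 1-periodic

for t in (0.0, 0.3, 1.7, -2.4):
    assert abs(g(t + 1) - g(t)) < 1e-9
```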
We also have the following version of Proposition II.5, which explains when equivariant functions
can be obtained as extensions of functions defined on intervals.
Proposition II.7. Let ϕ : [c, d] → [a, b] be a surjective smooth function. Then it has a smooth
(T1 , T2 )-equivariant extension ϕ̄ : R → R with T1 = (b − a) and T2 = (d − c) if and only if
ϕ(c) = a,  ϕ(d) = b,  and  ϕ(m) (c) = ϕ(m) (d) for any integer m > 0.
Proof. (For MATH5113M only.) The proof follows an argument similar to the one in the proof of
Proposition II.5.
6.2. Closed curves and their re-parametrisations. Now we define notions of closed parametrised
curve and regular closed parametrised curve.
Definition II.13. A vector-function γ : [a, b] → Rn is called a closed parameterised curve (CPC) if
it has a (b − a)-periodic extension γ̄ : R → Rn that is a smooth map. A closed parameterised curve γ
is called regular (RCPC), if its periodic extension γ̄ is regular.
Example II.14. Consider a PC γ : [0, 1] → R2 given by γ(t) = (t2 − t, sin 2πt). Note that
γ(0) = γ(1). However, it is not a CPC. If γ had a smooth periodic extension, then we
would have γ′(0) = γ′(1). On the other hand, the computation gives
γ′(t) = (2t − 1, 2π cos 2πt) ⟹ γ′(0) = (−1, 2π) ≠ (1, 2π) = γ′(1).
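The obstruction in Example II.14 is a two-line computation. This sketch is our own illustration (the function name is invented):

```python
import math

def d_gamma(t):
    """Velocity of gamma(t) = (t^2 - t, sin 2*pi*t)."""
    return (2 * t - 1, 2 * math.pi * math.cos(2 * math.pi * t))

# The first components of gamma'(0) and gamma'(1) differ (-1 vs 1),
# so no smooth 1-periodic extension exists.
assert d_gamma(0.0)[0] == -1.0
assert d_gamma(1.0)[0] == 1.0
```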
For future purposes we also need to be able to work with re-parametrisations of closed curves.
This notion is formalised in the following definition.
Definition II.15. A CPC γ̃ : [c, d] → Rn is called a re-parametrisation of a CPC γ : [a, b] → Rn
if there exists a surjective map ϕ : [c, d] → [a, b] that has a (T1 , T2 )-equivariant extension ϕ̄ : R → R
with T1 = b − a and T2 = d − c such that ϕ̄ is smooth, ϕ̄′(t) > 0 for any t ∈ R, and γ̃¯ = γ̄ ◦ ϕ̄, where
γ̃¯ and γ̄ are periodic extensions of γ̃ and γ respectively.
We already saw that any PC γ : [a, b] → Rn can be re-parametrised to a PC γ̃, defined on the
interval [0, 1]; the PC γ̃ can be defined by setting γ̃(t) = γ(a + t(b − a)), where t ∈ [0, 1]. If γ is a
CPC, then the PC γ̃ defined above is also a CPC. Indeed, the function ϕ : [0, 1] → [a, b] given by
ϕ(t) = a + t(b − a) satisfies the hypotheses of Definition II.15: its ((b − a), 1)-equivariant extension
ϕ̄ : R → R is given by the same relation
ϕ̄(t) = a + t(b − a),
where t ∈ R,
and ϕ̄′(t) = b − a > 0.
Finally, we define the property of a closed curve being simple. Intuitively, such curves look like
deformations of the circular curve whose image does not have any self-intersection. A rigorous way
of phrasing this property is given in the following definition.
Definition II.16. A closed parametrised curve (CPC) γ : [a, b] → Rn is called simple if the restricted
map γ| [a, b) is injective.
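The injectivity condition of Definition II.16 can be probed numerically for the curves γℓ of Example II.6. This sketch is our own illustration (the names are invented): it exhibits a self-intersection of γ₂ on [0, 1) and checks that the corresponding points of γ₁ are distinct.

```python
import math

def gamma(l, t):
    """The curve gamma_l(t) = (cos 2*pi*l*t, sin 2*pi*l*t) from Example II.6."""
    return (math.cos(2 * math.pi * l * t), math.sin(2 * math.pi * l * t))

# gamma_2 fails to be simple: t = 0 and t = 1/2 lie in [0, 1) but have the same image.
p, q = gamma(2, 0.0), gamma(2, 0.5)
assert abs(p[0] - q[0]) < 1e-9 and abs(p[1] - q[1]) < 1e-9

# For gamma_1 these two parameter values give distinct points.
assert abs(gamma(1, 0.0)[0] - gamma(1, 0.5)[0]) > 1.0
```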
Example II.17. Consider the CPC γℓ : [0, 1] → R2 given by γℓ (t) = (cos 2πℓt, sin 2πℓt), where ℓ is
an integer, see Example II.6. You should be able to verify that γℓ is simple if and only if ℓ = 1 or
ℓ = −1. Geometrically γℓ wraps around γ1 ℓ times; if ℓ > 0 it is traversed in the same direction, while
if ℓ < 0 – in the opposite.

Self-Check Question II.7. Can you verify that the property of a CPC being simple does not depend on a re-parametrisation?

7. Homotopy of closed curves

Now we look at topological properties of closed curves. Below we always assume that a given CPC γ is regular, and is defined on the unit interval [0, 1]. The following definition is the main definition of this chapter.

Definition II.18. A regular homotopy from a closed curve α : [0, 1] → Rn to a closed curve β : [0, 1] → Rn is a continuous map F : [0, 1] × [0, 1] → Rn that satisfies the following properties:
(i) α(t) = F (0, t), β(t) = F (1, t) for any t ∈ [0, 1], and for any fixed τ ∈ [0, 1] the map [0, 1] ∋ t ↦ F (τ, t) ∈ Rn is an RCPC.
(ii) For any integer k ≥ 0 the derivatives ∂kF (τ, t)/∂tk are continuous in τ.
The definition above formalises the intuition behind the deformation of an RCPC through RCPCs:
this is the content of condition (i). The second condition (ii) is more technical, but is very useful. For
example, it ensures that many geometric quantities of a curve, such as curvature, change continuously
under such a deformation. We proceed with the following worked example.
Example II.19 (see Video 3 on Minerva). Let us check that the simple circle α(t) = (cos 2πt, sin 2πt)
is regularly homotopic to the ellipse β(t) = (2 cos 2πt, sin 2πt), where t ∈ [0, 1]. We claim that the
map
F (τ, t) = ((1 + τ ) cos 2πt, sin 2πt),
where τ, t ∈ [0, 1],
satisfies the conditions of Definition II.18. (In the space below sketch the details of the argument
given in Video 3 on Minerva.)
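The endpoint conditions of Definition II.18(i) for the map F of Example II.19 can be checked directly. This sketch is our own illustration (the names are invented):

```python
import math

def F(tau, t):
    """The candidate homotopy of Example II.19."""
    return ((1 + tau) * math.cos(2 * math.pi * t), math.sin(2 * math.pi * t))

alpha = lambda t: (math.cos(2 * math.pi * t), math.sin(2 * math.pi * t))      # circle
beta = lambda t: (2 * math.cos(2 * math.pi * t), math.sin(2 * math.pi * t))   # ellipse

for t in (0.0, 0.25, 0.6, 0.9):
    assert F(0.0, t) == alpha(t)  # F(0, .) is the circle
    assert F(1.0, t) == beta(t)   # F(1, .) is the ellipse
```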
Note that the example above shows that a homotopy can change the geometry of a curve drastically.
Later we will be able to give examples of non-homotopic closed curves. For example, closed curves γ`
from Example II.17 that correspond to different integer values of ` are not regularly homotopic.
It is useful to understand another meaning of homotopy: intuitively it can be viewed as a path in
the set of RCPCs that joins two given curves, α and β. This point of view is used in the proof of the
following statement.
Proposition II.8. The relation of being regularly homotopic is an equivalence relation on the set of
closed regular curves.
Proof. Recall that an equivalence relation is a relation ∼ that satisfies the following properties:
(1) reflexivity α ∼ α,
(2) symmetry α ∼ β =⇒ β ∼ α,
(3) transitivity α ∼ β and β ∼ γ =⇒ α ∼ γ.
We verify all these properties below.
(1) For any RCPC α : [0, 1] → Rn we define F (τ, t) = α(t). The conditions of Definition II.18 are
trivially satisfied.
(2) Let α and β : [0, 1] → Rn be two RCPCs for which there exists a regular homotopy from α to β,
that is F : [0, 1] × [0, 1] → Rn such that F (0, t) = α(t) and F (1, t) = β(t). Then the map
G(τ, t) := F (1 − τ, t), where τ, t ∈ [0, 1]
defines a regular homotopy from β to α, that is G(0, t) = β(t) and G(1, t) = α(t). The conditions
in Definition II.18 for G follow from similar conditions for F . For example, for any fixed τ the map
t ↦ G(τ, t) is an RCPC, since G(τ, t) = F (1 − τ, t) and t ↦ F (1 − τ, t) is an RCPC. We also see
that for any k ≥ 0 the derivative
\[
\frac{\partial^k}{\partial t^k}\, G(\tau, t) = \frac{\partial^k}{\partial t^k}\, F(1 - \tau, t)
\]
is continuous in τ, as a composition of continuous maps.
(3) Suppose that α is regularly homotopic to β via a regular homotopy F , and β is regularly homotopic
to γ via G. Consider the map
\[
H : [0, 1] \times [0, 1] \to \mathbb{R}^n, \qquad H(\tau, t) = \begin{cases} F(2\tau, t), & \tau \in [0, 1/2];\\ G(2\tau - 1, t), & \tau \in [1/2, 1]. \end{cases}
\]
It is straightforward to verify that H satisfies all conditions of Definition II.18, and hence, is a regular
homotopy from α to γ.
We end this section with a number of statements that describe relationships between homotopy
and various properties of curves.
Proposition II.9. Every RCPC is regularly homotopic to an RCPC of unit length.

Proof. Let γ : [0, 1] → Rn be an RCPC of length L. Consider the following map
\[
F : [0, 1] \times [0, 1] \to \mathbb{R}^n, \qquad F(\tau, t) = (1 - \tau + \tau/L)\,\gamma(t). \tag{II.2}
\]
We claim that it is a regular homotopy from γ to γ̃ = γ/L. The curve γ̃ has unit length, since
\[
L(\tilde\gamma) = \int_0^1 |\tilde\gamma'(t)|\, dt = \frac{1}{L}\int_0^1 |\gamma'(t)|\, dt = \frac{1}{L}\, L = 1.
\]
Thus, it remains to verify that the map F , given by (II.2), is a regular homotopy. The first condition
in Definition II.18 is satisfied, since for any fixed τ ∈ [0, 1] we have
(1 − τ + τ/L) > 0 ⟹ the map t ↦ F (τ, t) = (1 − τ + τ/L)γ(t) is an RCPC.
Differentiating F with respect to t, we obtain
\[
\frac{\partial^k}{\partial t^k}\, F(\tau, t) = (1 - \tau + \tau/L)\,\gamma^{(k)}(t) \quad \text{for any } k \geq 0,
\]
and hence, the derivatives on the right-hand side above depend on τ continuously.
The following statement shows that re-parametrisations yield curves homotopic to the original
curves.
Proposition II.10. Let γ̃ : [0, 1] → Rn be a re-parametrisation of an RCPC γ : [0, 1] → Rn . Then γ
and γ̃ are regularly homotopic.
Proof. Since γ̃ is a re-parametrisation of γ, there exists a smooth function ϕ̄ : R → R such that:
• ϕ̄(0) = 0, ϕ̄(1) = 1, and ϕ̄′(t) > 0 for any t ∈ R;
• ϕ̄(t + 1) = ϕ̄(t) + 1 for any t ∈ R;
• γ̃¯ = γ̄ ◦ ϕ̄, where γ̃¯ and γ̄ are 1-periodic extensions of γ̃ and γ respectively.
We define a map F̄ : [0, 1] × R → Rn by setting
F̄ (τ, t) = γ̄(τ ϕ̄(t) + (1 − τ )t).
We claim that for any τ ∈ [0, 1] the map F̄τ : t ↦ F̄ (τ, t) is smooth, 1-periodic, and has non-vanishing
derivative. Indeed, it is smooth as a composition of smooth functions. To verify 1-periodicity, we
write
F̄τ (t + 1) = γ̄(τ ϕ̄(t + 1) + (1 − τ)(t + 1)) = γ̄(τ ϕ̄(t) + τ + (1 − τ)t + (1 − τ)) =
γ̄(τ ϕ̄(t) + (1 − τ)t + 1) = γ̄(τ ϕ̄(t) + (1 − τ)t) = F̄τ (t).
Finally, computing the derivative, we obtain
F̄τ′(t) = (∂/∂t) F̄ (τ, t) = γ̄′(τ ϕ̄(t) + (1 − τ)t) (τ ϕ̄′(t) + (1 − τ)).
Since γ̄′ ≠ 0 and τ ϕ̄′(t) + (1 − τ) > 0 for τ ∈ [0, 1], we conclude that for any τ ∈ [0, 1] the derivative
F̄τ′(t) ≠ 0. These properties show that for any τ ∈ [0, 1] the restriction of F̄τ to [0, 1] defines an RCPC.
Now we define the homotopy F between γ and γ̃ by setting
F (τ, t) := γ(τ ϕ(t) + (1 − τ )t).
Clearly, F (0, t) = γ(t) and F (1, t) = γ(ϕ(t)) = γ̃(t). We verify that the conditions of Definition II.18
hold. For any fixed τ ∈ [0, 1] the RPC F̄τ is a 1-periodic extension of t ↦ F (τ, t), and hence,
the latter is an RCPC. To verify the second condition of Definition II.18 we should show that the
derivative ∂k F̄ (τ, t)/∂tk is continuous in τ for any t ∈ R. The latter is a consequence of the fact that
F̄ (τ, t) is defined as a composition of maps all of whose derivatives are continuous in τ.
Corollary II.11. Every RCPC γ is regularly homotopic to an RCPC of constant speed L, where L
is the length of γ.
Proof. Let γ : [0, 1] → Rn be an RCPC, and γ̄ : R → Rn be its 1-periodic extension. Consider the
arc-length function
\[
\bar s(t) = \int_0^t |\bar\gamma'(u)|\, du, \quad \text{where } t \in \mathbb{R}.
\]
It is straightforward to see that the following relations hold (make sure that you can do this):
s̄(1) = L  and  s̄(t + 1) = s̄(t) + L for any t ∈ R.  (II.3)
Moreover, s̄′(t) = |γ̄′(t)| > 0, s̄(t) → −∞ when t → −∞, and s̄(t) → +∞ when t → +∞, and we
conclude that s̄ : R → R is bijective. Let ϕ̄ = s̄−1 : R → R be the inverse function. Then the second
relation in (II.3) implies that
ϕ̄(τ + L) = ϕ̄(τ) + 1, where τ ∈ R.
Besides, by the inverse function theorem ϕ̄′(τ) = (1/s̄′(t))|t=ϕ̄(τ) > 0, and hence, ϕ̄ can be used as
a re-parametrisation function. We define a PC γ̃¯ : R → Rn by the formula γ̃¯(s) := γ̄(ϕ̄(Ls)). It is a
smooth, 1-periodic map:
γ̃¯(s + 1) = γ̄(ϕ̄(L(s + 1))) = γ̄(ϕ̄(Ls + L)) = γ̄(ϕ̄(Ls) + 1) = γ̄(ϕ̄(Ls)) = γ̃¯(s).
Besides, we obtain
|γ̃¯′(s)| = |γ̄′(ϕ̄(Ls))| ϕ̄′(Ls) L = L,
where we used the chain rule and the relation ϕ̄′(τ) = (1/s̄′(t))|t=ϕ̄(τ) = (1/|γ̄′(t)|)|t=ϕ̄(τ). In
particular, we conclude that the restriction of γ̃¯ to [0, 1] defines an RCPC γ̃ of constant speed L.
Since γ̃ is a re-parametrisation of the RCPC γ, the statement now follows from Proposition II.10.
Corollary II.12. Every RCPC γ is regularly homotopic to an RCPC of unit speed.
Proof. The statement is a direct consequence of Proposition II.9 and Corollary II.11.
8. Plane curves: rotation index and homotopy classification
In this section we restrict our considerations to curves in the Euclidean plane R2 . We introduce the
notion of rotation index for plane regular closed curves, and use it to classify all regular homotopy
classes of such curves.
8.1. Basic notation and facts (reminder). Let γ : [a, b] → R2 be a regular parametrised curve.
The vector v(t) = γ′(t)/|γ′(t)| is called the unit tangent vector to the curve. Viewing v(t) as a vector-function, we may write v(t) = (v1 (t), v2 (t)). Using this form, we define the unit normal vector to
γ by setting n(t) := (−v2 (t), v1 (t)). Note that the vectors v(t) and n(t) are orthogonal and form a
positively oriented basis of R2 .
Let k(t) be a curvature vector to γ. As we know, it is orthogonal to v(t), and since γ is a plane
curve, we conclude that k(t) = κ(t)n(t) for some real-valued function κ(t). The function κ(t) is called
the signed curvature of γ. Note that |κ(t)| = |k(t)|. Geometrically the signed curvature function κ(t)
measures the rate of change of a tangent line direction: when κ > 0 the curve is bending towards the
unit normal, while when κ < 0 it is bending away from the normal. For a curve in the figure below indicate regions when the signed curvature is positive and when it is negative. The following statement essentially says that the signed curvature function determines a unit speed PC uniquely up to certain initial data. Theorem II.13 (Fundamental theorem of plane curves). Given a smooth real-valued function κ : [a, b] → R, a point t0 ∈ [a, b], and vectors γ0 , v0 ∈ R2 such that |v0 | = 1, there exists a unique unit speed parametrised curve γ : [a, b] → R2 whose signed curvature equals κ(t) and γ(t0 ) = γ0 , γ 0 (t0 ) = v0 . Sketch of a proof. Since any unit tangent vector v(t) = (v1 (t), v2 (t)) of a unit speed curve satisfies the relation v 0 (t) = κ(t)n(t), where n(t) = (−v2 (t), v1 (t)) is a unit normal vector, we obtain the following system of ordinary differential equations on the functions v1 (t) and v2 (t): 0 0 −κ v1 v1 = . v20 κ 0 v2 By the standard results in the course of ODEs, this system has a unique solution v(t) = (v1 (t), v2 (t)) Rt such that v(t0 ) = v0 . Now the curve γ is obtained as the integral γ(t) = γ0 + t0 v(τ )dτ . 23 G. Kokarev: MATH3113 Differential geometry. Academic Year 2020/21 8.2. Rotation index of a closed plane curve. Now we study the dynamics of the unit tangent vector v(t) along a given RPC. We start with the following lemma. Lemma II.14. Let γ : [a, b] → R2 be a regular parametrised curve. Then there exists a smooth function θ : [a, b] → R such that the unit tangent vector v(t) satisfies the relation v(t) = (cos θ(t), sin θ(t)). If θ1 and θ2 are two such functions, then they differ only by an integer multiple of 2π, that is θ1 (t) = θ2 (t) + 2πm, where m ∈ Z is a constant. In particular, the quantity θ(b) − θ(a) is uniquely determined by the parametrised curve γ. Proof. Step 1. We first consider the case when the image v([a, b]) is contained in one of the following four semi-circles SR = {(x, y) ∈ S 1 ⊂ R2 : x > 0},
SL = {(x, y) ∈ S1 ⊂ R2 : x < 0},  ST = {(x, y) ∈ S1 ⊂ R2 : y > 0},
SB = {(x, y) ∈ S1 ⊂ R2 : y < 0},
where S1 = {(x, y) ∈ R2 : x2 + y2 = 1}. Let us consider the case when v([a, b]) ⊂ SR . Then the function θ(t) has to satisfy the relation
\[
\frac{v_2(t)}{v_1(t)} = \tan(\theta(t)), \tag{II.4}
\]
where the functions on the left hand-side above are the coordinates of the vector-function v(t), that is v(t) = (v1 (t), v2 (t)). For a given θ0 ∈ (−π/2 + 2πm, π/2 + 2πm) there exists a unique θ(t) ∈ (−π/2 + 2πm, π/2 + 2πm) such that θ(a) = θ0 and θ(t) satisfies (II.4). Indeed, such a function is given by the relation
\[
\theta(t) = \arctan\frac{v_2(t)}{v_1(t)} + 2\pi m, \tag{II.5}
\]
and is clearly smooth. The cases when the image v([a, b]) is contained in the other semi-circles are considered similarly.

Step 2. For an arbitrary RPC γ : [a, b] → R2 we choose a partition of the interval a = t0 < t1 < . . . < tN −1 < tN = b such that every image v([ti , ti+1 ]) is contained in one of the four semi-circles above. Such a partition can be chosen due to the continuity of the function v(t). (Make sure that you understand this!) Moreover, for any i we can find a small δi > 0 such that the image
v([ti − δi , ti+1 ]) lies in the same semi-circle as the image v([ti , ti+1 ]).
Now for any value θ0 such that v(a) = (cos θ0 , sin θ0 ), by Step 1 we can find a unique smooth
function θ(t) defined on [t0 , t1 ] such that θ(t0 ) = θ0 and v(t) = (cos θ(t), sin θ(t)) for any t ∈ [t0 , t1 ].
Then for the value θ1 = θ(t1 − δ1 ), by Step 1 we can again find a unique smooth function θ̃(t) defined
on [t1 − δ1 , t2 ] such that θ̃(t1 − δ1 ) = θ1 . By uniqueness the functions θ(t) and θ̃(t) coincide on
[t1 − δ1, t1], and hence, define a smooth function, again denoted by θ, on the whole interval [t0, t2].
Continuing this procedure, we obtain a desired smooth function θ : [a, b] → R uniquely determined
by the choice of θ0. Any two choices of the initial value θ0 differ by 2πm, and hence, so do the corresponding functions θ(t).
Remark II.20. The argument in the proof of Lemma II.14 shows that for any smooth function
v : [a, b] → S 1 ⊂ R2 there exists a smooth function θ : [a, b] → R such that v(t) = (cos θ(t), sin θ(t)),
and any two such functions differ by an integer multiple of 2π. Moreover, if a function v(t) is only
continuous, then the statement continues to hold with θ(t) being continuous.
Lemma II.14 shows that if γ : [0, 1] → R2 is an RCPC, then the quantity θ(1) − θ(0) is an
integer multiple of 2π. (Make sure that you can explain this!) This observation leads to the following
definition.
Definition II.21. For an RCPC γ : [0, 1] → R2 the integer

    r(γ) := (1/2π) (θ(1) − θ(0)) ∈ Z

is called the rotation index of γ.
Intuitively the rotation index is the number of times the unit tangent vector v(t) winds around the
unit circle S 1 (where the anticlockwise winding is counted positively) as the closed curve is traversed.
Example II.22. In the space below draw the images of the plane RCPCs, given by the formulae
γ1 (t) = (cos(2πt), sin(2πt)) and γ2 (t) = (cos(2πt), sin(4πt)), where t ∈ [0, 1]. Compute (analytically
or visually) how many times the unit tangent vector winds around the unit circle S 1 for each curve.
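For readers who like to check such computations in code, the winding of the unit tangent can be approximated numerically: sample γ′(t), unwrap the angle θ(t) of the tangent direction, and compute (θ(1) − θ(0))/2π. The snippet below is our own illustration (the function names are not from the notes).

```python
import numpy as np

def rotation_index(gamma, n=20001):
    """Approximate r(gamma) = (theta(1) - theta(0)) / (2*pi) for a closed plane curve."""
    t = np.linspace(0.0, 1.0, n)
    x, y = gamma(t)
    dx, dy = np.gradient(x, t), np.gradient(y, t)   # approximate gamma'(t)
    theta = np.unwrap(np.arctan2(dy, dx))           # a continuous angle function
    return round((theta[-1] - theta[0]) / (2 * np.pi))

# gamma1: the unit circle traversed once anticlockwise
r1 = rotation_index(lambda t: (np.cos(2*np.pi*t), np.sin(2*np.pi*t)))
# gamma2: the figure-eight
r2 = rotation_index(lambda t: (np.cos(2*np.pi*t), np.sin(4*np.pi*t)))
print(r1, r2)   # 1 0
```

Here γ1 winds once anticlockwise, while for the figure-eight γ2 the anticlockwise and clockwise passages cancel.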
Remark II.23. Note that Lemma II.14 holds for any smooth vector-function h : [0, 1] → R2 that
takes values in a unit circle (if h is assumed to be only continuous, then so is θ(t)). In particular, we
could have defined the rotation index considering the unit normal vector n(t) instead of v(t).
The following proposition gives another more geometric formula for the rotation index.
Proposition II.15. Let γ : [0, 1] → R2 be a plane RCPC. Then its rotation index satisfies the relation

    r(γ) = (1/2π) ∫_0^1 κ(t) |γ′(t)| dt,

where κ is the signed curvature of γ.
Proof. By Lemma II.14 there exists a smooth function θ : [0, 1] → R such that the unit tangent vector v(t) satisfies the relation v(t) = (cos θ(t), sin θ(t)). Hence, we obtain

    γ′(t) = |γ′(t)| (cos θ(t), sin θ(t))  ⟹
    γ″(t) = |γ′(t)| θ′(t) (− sin θ(t), cos θ(t)) + |γ′(t)|′ v(t)  ⟹
    κ(t) = k(t) · n(t) = (γ″(t) · n(t)) / |γ′(t)|² = (|γ′(t)| θ′(t)) / |γ′(t)|² = θ′(t)/|γ′(t)|  ⟹  θ′(t) = κ(t) |γ′(t)|.

Thus, by the Newton-Leibnitz formula we finally obtain:

    θ(1) − θ(0) = ∫_0^1 θ′(t) dt = ∫_0^1 κ(t) |γ′(t)| dt,

and hence, the statement follows from Definition II.21.
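Proposition II.15 can also be checked numerically. The sketch below (our own illustration, not part of the notes) computes the signed curvature via the standard plane-curve formula κ = (x′y″ − y′x″)/|γ′|³, which is consistent with the definition used above, and integrates κ|γ′| over [0, 1] for an ellipse traversed once anticlockwise.

```python
import numpy as np

# Check r(gamma) = (1/2*pi) * integral of kappa*|gamma'| on the ellipse
# t -> (2 cos 2*pi*t, sin 2*pi*t), traversed once anticlockwise.
t = np.linspace(0.0, 1.0, 40001)
x, y = 2*np.cos(2*np.pi*t), np.sin(2*np.pi*t)
dx, dy = np.gradient(x, t), np.gradient(y, t)
ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)
speed = np.hypot(dx, dy)
kappa = (dx*ddy - dy*ddx) / speed**3           # signed curvature of a plane curve
r = np.mean((kappa * speed)[:-1]) / (2*np.pi)  # Riemann sum of (1/2*pi) * integral
print(round(r))   # 1
```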
The geometric formula above for the rotation index allows us to show that the rotation index is a
homotopy invariant of closed plane curves. This is the content of the following proposition.
Proposition II.16. Regularly homotopic RCPCs have the same rotation index.
Proof. Let γ0 and γ1 be two plane RCPCs, and F be a regular homotopy between them. Then by Definition II.18 for each τ ∈ [0, 1] the PC

    γτ : [0, 1] ∋ t ↦ F(τ, t) ∈ R2

is an RCPC, and all derivatives (∂^k/∂t^k) γτ(t), where k ≥ 0 is an integer, are continuous in τ. The latter, in particular, implies that the speed |γτ′(t)| and the signed curvature function κτ(t) of γτ are continuous functions of τ. Hence, by Proposition II.15 we conclude that the rotation index

    r(γτ) = (1/2π) ∫_0^1 κτ(t) |γτ′(t)| dt

is a continuous function of τ ∈ [0, 1] that takes values in Z. Now by the intermediate value theorem we conclude that it has to be constant, that is, r(γτ) ≡ c for all τ ∈ [0, 1]. Thus, r(γ0) = r(γ1).
Self-Check Question II.8. The argument in the proof of Proposition II.16 above uses the intermediate value theorem. Do you remember what it says? Can you state it?
The following statement is an important strengthening of Proposition II.16; it is a central result
in the theory of plane curves.
Theorem II.17 (Whitney-Graustein). Two regular closed plane curves are regularly homotopic if
and only if they have the same rotation index.
We discuss the proof of the Whitney-Graustein theorem at the end of the section. Now we proceed
with another important theorem that deals with simple plane curves.
Theorem II.18 (Hopf). Let γ be a simple regular closed plane curve. Then its rotation index r(γ)
equals either 1 or −1.
Proof. No proof will be given in this course.
Corollary II.19. Any simple regular closed plane curve is regularly homotopic either to the standard
circle
[0, 1] 3 t 7→ (cos(2πt), sin(2πt))
or to the reversed standard circle
[0, 1] 3 t 7→ (cos(2πt), − sin(2πt)).
Corollary II.20. For any simple plane RCPC γ : [0, 1] → R2 the following relation holds

    ∫_0^1 |κ(t)| |γ′(t)| dt ≥ 2π;

the equality occurs if and only if κ(t) does not change sign (that is, κ(t) ≥ 0 everywhere or κ(t) ≤ 0 everywhere).
Proof. By Proposition II.16 and Theorem II.18 we conclude that |r(γ)| = 1. Now by Proposition II.15 we obtain

    ∫_0^1 |κ(t)| |γ′(t)| dt ≥ | ∫_0^1 κ(t) |γ′(t)| dt | = 2π |r(γ)| = 2π.

The equality occurs if and only if

    ∫_0^1 |κ(t)| |γ′(t)| dt = ∫_0^1 κ+(t) |γ′(t)| dt + ∫_0^1 κ−(t) |γ′(t)| dt = | ∫_0^1 κ+(t) |γ′(t)| dt − ∫_0^1 κ−(t) |γ′(t)| dt | = | ∫_0^1 κ(t) |γ′(t)| dt |,

where κ+(t) = max{κ(t), 0} ≥ 0 and κ−(t) = − min{κ(t), 0} ≥ 0. The second equality above occurs if and only if either κ+ or κ− equals zero identically.
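A numerical illustration of Corollary II.20 (our own, not part of the notes): for a convex simple curve such as an ellipse the signed curvature does not change sign and the total absolute curvature equals 2π, while for a non-convex simple curve, here the polar curve r = 2 + cos(3s) (our own choice), the inequality is strict.

```python
import numpy as np

def total_abs_curvature(gamma, n=40001):
    """Approximate the integral of |kappa(t)| |gamma'(t)| over [0, 1]."""
    t = np.linspace(0.0, 1.0, n)
    x, y = gamma(t)
    dx, dy = np.gradient(x, t), np.gradient(y, t)
    ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)
    speed = np.hypot(dx, dy)
    kappa = (dx*ddy - dy*ddx) / speed**3
    return np.mean((np.abs(kappa) * speed)[:-1])

# Convex simple curve (an ellipse): kappa > 0 everywhere, so equality holds.
ellipse = total_abs_curvature(lambda t: (2*np.cos(2*np.pi*t), np.sin(2*np.pi*t)))

# Non-convex simple curve r = 2 + cos(3s) in polar form: kappa changes sign.
def flower(t):
    s = 2*np.pi*t
    rad = 2 + np.cos(3*s)
    return rad*np.cos(s), rad*np.sin(s)

wavy = total_abs_curvature(flower)
print(abs(ellipse - 2*np.pi) < 1e-3, wavy > 2*np.pi + 0.1)   # True True
```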
8.3. Proof of Theorem II.17.
Lemma II.21 (Integral Cauchy-Schwarz inequality). For any two continuous vector-functions f, g : [a, b] → Rn the following inequality holds

    ∫_a^b f(t) · g(t) dt ≤ ( ∫_a^b |f(t)|² dt )^{1/2} ( ∫_a^b |g(t)|² dt )^{1/2};

the equality occurs if and only if f(t) ≡ 0 or g(t) ≡ λ f(t) for all t ∈ [a, b] and some λ ∈ R.
Proof. MATH5113M only.
Corollary II.22. Let f : [0, 1] → Rn be a continuous vector-function. Then it satisfies the following inequality

    | ∫_0^1 f(t) dt | ≤ ( ∫_0^1 |f(t)|² dt )^{1/2},

and the equality occurs if and only if f(t) ≡ const.
Proof. First, we check that the statement holds when n = 1, that is, for any continuous function ϕ : [0, 1] → R we have

    | ∫_0^1 ϕ(t) dt | ≤ ( ∫_0^1 ϕ(t)² dt )^{1/2},    (II.6)

and the equality occurs if and only if ϕ ≡ const. Indeed, this is a direct consequence of Lemma II.21 used with n = 1, f = ϕ, and g = 1. Now for an arbitrary n ≥ 1, let f = (f1, . . . , fn) be a given continuous vector-function. Then using (II.6), we obtain

    | ∫_0^1 f(t) dt |² = Σ_{i=1}^n ( ∫_0^1 fi(t) dt )² ≤ Σ_{i=1}^n ∫_0^1 fi²(t) dt = ∫_0^1 Σ_{i=1}^n fi²(t) dt = ∫_0^1 |f(t)|² dt;

the above implies the inequality in the statement. Finally, the equality occurs if and only if each fi ≡ const, where i = 1, . . . , n.
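Corollary II.22 is easy to test numerically; the sketch below (our own illustration, with names chosen for this snippet) compares |∫_0^1 f dt| with (∫_0^1 |f|² dt)^{1/2} for a non-constant and a constant vector-function.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 20001)

def lhs_rhs(f):
    """Return |integral of f| and (integral of |f|^2)^(1/2) for f : [0,1] -> R^2."""
    F = np.array(f(t))                                 # shape (2, n)
    lhs = np.linalg.norm(np.mean(F[:, :-1], axis=1))   # componentwise Riemann sum
    rhs = np.sqrt(np.mean((F**2).sum(axis=0)[:-1]))
    return lhs, rhs

# Non-constant f: strict inequality (here the integral of f vanishes, |f| = 1).
a1, b1 = lhs_rhs(lambda t: (np.cos(2*np.pi*t), np.sin(2*np.pi*t)))
# Constant f: equality.
a2, b2 = lhs_rhs(lambda t: (3.0 + 0*t, 4.0 + 0*t))
print(a1 < b1, abs(a2 - b2) < 1e-9)   # True True
```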
Proof of Theorem II.17. Note that for a proof of the theorem it is sufficient to prove the "if" part of the statement; the "only if" part is a direct consequence of Proposition II.16. We will prove this part rigorously under the additional hypothesis r(γ) ≠ 0.
Step 1: Simplifying hypotheses. We claim that without loss of generality, we may consider only RCPCs
α such that:
(i) α is a USC,
(ii) α(0) = (0, 0),
(iii) α0 (0) = (1, 0).
More precisely, we claim that for any given RCPC α there exists a regular homotopy to an RCPC
that satisfies (i)-(iii). Then, if we prove the statement for two RCPCs α and β that have the same
rotation index and satisfy properties (i)-(iii), by Propositions II.8 and II.16 it will hold for any RCPCs
with the same rotation index.
First, by Corollary II.12, for any given RCPC there exists a regular homotopy to an RCPC α1 of unit speed, that is, α1 satisfies (i). Given such an RCPC α1, we can deform it to an RCPC α2 that satisfies (i) and (ii) by the homotopy

    F : [0, 1] × [0, 1] → R2,    F(τ, t) = α1(t) − τ α1(0),

that is, we set α2(t) = F(1, t). It is straightforward to check that F(τ, t) is indeed a regular homotopy, and

    |α2′(t)| = |α1′(t)| = 1,    α2(0) = α1(0) − α1(0) = 0.

Finally, given such an RCPC α2, we can deform it to an RCPC α3 that satisfies all properties (i)-(iii) by the homotopy

    G : [0, 1] × [0, 1] → R2,    G(τ, t) = (  cos(τθ)   sin(τθ) )
                                           ( −sin(τθ)   cos(τθ) ) α2(t),

where the value θ is chosen so that α2′(0) = (cos θ, sin θ). We set α3(t) to be G(1, t). Since for τ = 1 the matrix above represents a rotation through the angle θ in the clockwise direction, we conclude that

    α3′(0) = (  cos θ   sin θ ) α2′(0) = (  cos θ   sin θ ) ( cos θ )   ( 1 )
             ( −sin θ   cos θ )          ( −sin θ   cos θ ) ( sin θ ) = ( 0 ).

Since rotations are Euclidean isometries, they preserve the lengths of vectors,

    |α3′(t)| = |α2′(t)| = 1,

and have the origin as a fixed point, so α3(0) = (0, 0). Thus, the RCPC α3 is regularly homotopic to the original RCPC α, and satisfies all properties (i)-(iii).
Step 2: Constructing a homotopy. Now let α, β : [0, 1] → R2 be two RCPCs that have equal rotation indices and satisfy the conditions (i)-(iii) above. In other words, we have

    r(α) = r(β) = k, where k ∈ Z,
    α(0) = β(0) = (0, 0),    α′(0) = β′(0) = (1, 0),
    α′(t) = (cos θ0(t), sin θ0(t)),    β′(t) = (cos θ1(t), sin θ1(t)),

where θ0(t) and θ1(t) are smooth functions, whose existence is guaranteed by Lemma II.14. We may choose these functions such that

    θ0(0) = θ1(0) = 0  ⟹  θ0(1) = θ1(1) = 2πk.    (II.7)
First, we define a function

    θ : [0, 1] × [0, 1] → R,    θ(τ, t) = (1 − τ) θ0(t) + τ θ1(t).    (II.8)

Note that θ(τ, 0) = 0 and θ(τ, 1) = 2πk for any τ ∈ [0, 1]. Second, we define a vector-function

    U : [0, 1] × [0, 1] → R2,    U(τ, t) = (cos θ(τ, t), sin θ(τ, t)).

Note that for any fixed τ ∈ [0, 1] the function t ↦ U(τ, t) has a smooth 1-periodic extension to R; this statement is a consequence of Lemma II.14. Finally, we define the homotopy H by setting

    H : [0, 1] × [0, 1] → R2,    H(τ, t) = ∫_0^t U(τ, s) ds − t ∫_0^1 U(τ, s) ds.
Now we start checking the conditions of Definition II.18:

    H(0, t) = ∫_0^t U(0, s) ds = ∫_0^t (cos θ0(s), sin θ0(s)) ds = ∫_0^t α′(s) ds = α(t),

where in the first relation we used

    ∫_0^1 U(0, s) ds = ∫_0^1 α′(s) ds = α(1) − α(0) = 0.
Similarly, we have

    H(1, t) = ∫_0^t U(1, s) ds = ∫_0^t (cos θ1(s), sin θ1(s)) ds = ∫_0^t β′(s) ds = β(t),

where in the first relation we used

    ∫_0^1 U(1, s) ds = ∫_0^1 β′(s) ds = β(1) − β(0) = 0.
Note that the map H(τ, t) is defined via maps all of whose derivatives in t depend continuously on τ ∈ [0, 1]. Hence, all derivatives (∂^k/∂t^k)H(τ, t) also depend on τ continuously.
Now we check that for any fixed τ ∈ [0, 1] the map t ↦ H(τ, t) is a CPC. Let Ū(τ, t) be a smooth 1-periodic extension in t of U(τ, t). We claim that the smooth map

    H̄ : [0, 1] × R → R2,    H̄(τ, t) = ∫_0^t Ū(τ, s) ds − t ∫_0^1 Ū(τ, s) ds

is 1-periodic in t ∈ R. (In the space below write an explanation for this statement given during the lecture.)
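The construction of Step 2 can also be carried out numerically. The sketch below (our own illustration, with θ0 and θ1 chosen by us for k = 1) builds H(τ, t) = ∫_0^t U ds − t ∫_0^1 U ds on a grid and checks that each slice t ↦ H(τ, t) is closed and has nowhere-vanishing t-derivative.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2001)
dt = t[1] - t[0]
theta0 = 2*np.pi*t                        # theta0(0) = 0, theta0(1) = 2*pi (k = 1)
theta1 = 2*np.pi*t + np.sin(2*np.pi*t)    # theta1(0) = 0, theta1(1) = 2*pi as well

def H(tau):
    """H(tau, .) = int_0^t U ds - t int_0^1 U ds, with U = (cos theta, sin theta)."""
    theta = (1 - tau)*theta0 + tau*theta1
    U = np.stack([np.cos(theta), np.sin(theta)])
    prim = np.concatenate([np.zeros((2, 1)),
                           np.cumsum((U[:, 1:] + U[:, :-1]) / 2 * dt, axis=1)], axis=1)
    return prim - t * prim[:, -1:]

taus = np.linspace(0.0, 1.0, 11)
closed = all(np.allclose(H(tau)[:, [0, -1]], 0.0, atol=1e-12) for tau in taus)
min_speed = min(np.hypot(*np.gradient(H(tau), t, axis=1)).min() for tau in taus)
print(closed, min_speed > 0.1)   # True True  (each slice is a closed regular curve)
```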
Step 3: Regularity of the homotopy. Thus, for a proof of the statement that H(τ, t) is a regular homotopy it remains to check that for any fixed τ ∈ [0, 1] the map t ↦ H(τ, t) is an RCPC, that is, (∂/∂t)H(τ, t) ≠ 0. Suppose the contrary: there exist τ0 ∈ (0, 1) and t0 ∈ [0, 1] such that

    (∂/∂t) H(τ0, t) |_{t=t0} = 0  ⟹  U(τ0, t0) − ∫_0^1 U(τ0, s) ds = 0.

Hence, we obtain

    1 = |U(τ0, t0)| = | ∫_0^1 U(τ0, s) ds | ≤ ( ∫_0^1 |U(τ0, s)|² ds )^{1/2} = 1.

Now by Corollary II.22 we conclude that U(τ0, t) ≡ const as a function of t, and hence, θ(τ0, t) ≡ const as a function of t. By (II.8), we obtain

    (1 − τ0) θ0(t) + τ0 θ1(t) = C,    (II.9)

where C ∈ R is a constant. Evaluating the relation above at t = 0 and using (II.7), we obtain that C = 0. Using the last fact, relation (II.7), and evaluating at t = 1, we conclude that the rotation index k has to vanish. This yields a contradiction when k ≠ 0.
Remark II.24 (The idea of a proof when k = 0). Relation (II.9) yields

    κ0(t) = θ0′(t) = −(τ0/(1 − τ0)) θ1′(t) = −(τ0/(1 − τ0)) κ1(t),

where κ0(t) and κ1(t) are the signed curvatures of the USCs α and β respectively. Since τ0 ∈ (0, 1), we see that the real number τ0/(1 − τ0) is positive. Thus, the relation above implies that the signed curvature functions of α and β always have opposite signs and the same set of zeros. The proof of Theorem II.17 in the general case uses the observation that the latter can always be violated by a small deformation of one of the curves.
8.4. Visual computation of the rotation index: informal discussion and examples. Now
we describe a method that can be used to compute the rotation index of a curve visually. First, note
that given a picture of the image of a curve γ in a plane R2 we can try to compute the value r(γ) by
imagining how the unit tangent vector v(t) moves around the unit circle as t traverses [0, 1]. While doing this one needs to remember that the image (range) of an RCPC does not contain all the necessary information:
(i) it does not tell us which direction the curve is traversed in – if we traverse it in the opposite
direction the rotation index r(γ) changes sign;
(ii) it does not tell us how many times the curve is traversed – if we traversed it k times, the rotation
index gets multiplied by k.
Such visual computation works for reasonably simple curves, see Example II.22. However, for more
complicated curves it might not be so easy to imagine the movement of the unit tangent vector v(t).
Below we describe another method of computing the rotation index r(γ), based on viewing it as the
so-called topological degree – the number of pre-images counted with signs.
We seek to determine how many times the unit tangent vector v(t) winds anti-clockwise around
the unit circle as t traverses [0, 1]. Since the curve is an RCPC, we know that v(0) = v(1), and
similarly for all derivatives of v(t). Let us choose and fix a unit vector u ∈ S 1 ⊂ R2 , and consider
the set of times t at which v(t) = u. In other words, we look at the pre-image of u under the map v : [0, 1] → S1, that is the set

    v−1(u) = {t ∈ [0, 1] : v(t) = u}.
Unless we are extremely unlucky, the pre-image v−1(u) is a finite set. (If it is not, choose a different point u ∈ S1. A big theorem, due to Sard, says that a "good" u always exists and, even better, that almost every u is "good".) At each time t∗ ∈ v−1(u) the derivative of v(t) is orthogonal to v(t∗) = u:

    1 = v(t) · v(t)  ⟹  2 v(t) · v′(t) = 0.

Hence, the derivative v′(t∗) is a multiple of n(t∗), the unit normal vector. Moreover, one can show that

    v′(t∗) = |γ′(t∗)| κ(t∗) n(t∗),    (II.10)

where κ is the signed curvature function.
Self-Check Question II.9. Can you verify the last relation?
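The signed pre-image count can be implemented directly (our own illustration, not part of the notes): pick u = (1, 0), so that the pre-images of u correspond to the integer levels of w(t) = θ(t)/2π, and add up the crossings of these levels with signs. We apply this to the curve γ(t) = (cos(6πt), sin(10πt)) considered in Example II.25 below.

```python
import numpy as np

def rotation_index_by_counting(gamma, n=40001):
    """Signed count of passages of the unit tangent through u = (1, 0)."""
    t = np.linspace(0.0, 1.0, n)
    x, y = gamma(t)
    dx, dy = np.gradient(x, t), np.gradient(y, t)
    w = np.unwrap(np.arctan2(dy, dx)) / (2*np.pi)
    # v(t) = u exactly when w(t) is an integer; each signed crossing of an
    # integer level contributes +1 (anticlockwise) or -1 (clockwise).
    return int(np.diff(np.floor(w)).sum())

r = rotation_index_by_counting(lambda t: (np.cos(6*np.pi*t), np.sin(10*np.pi*t)))
print(r)   # -1
```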
Thus, to each t∗ ∈ v−1(u) we can associate a sign, plus or minus, according to whether κ(t∗) is positive or negative. If we are unlucky, and κ(t∗) = 0 for some t∗ ∈ v−1(u), we again go back and choose a different u. (The same big theorem guarantees that for almost every u the signed curvature κ(t∗) does not vanish for any t∗ ∈ v−1(u).) Recall that the sign of κ is easy to read off: κ(t∗) > 0 if the curve is bending towards the normal vector, and κ(t∗) < 0 if it is bending away from the normal vector. Now the rotation index r(γ) is the number of (+) points in v−1(u) minus the number of (−) points. Why? Every time the vector v(t) loops around S1, it passes through u exactly once. If it passes through u anti-clockwise, the derivative v′ has the same direction as the normal n, so by (II.10) we see that κ > 0, while if it passes through u clockwise, the derivative v′ has the opposite direction
to the normal n, and κ < 0.

Example II.25 (see Video 4 on Minerva). Compute the rotation index of the plane RCPC γ : [0, 1] → R2, γ(t) = (cos(6πt), sin(10πt)). One could try to track v(t) as t traverses [0, 1], but that looks tricky. Instead we can try to count signed pre-images of u = (0, 1), that is, the points where v(t) = (0, 1). There are precisely three of these (mark them) and, taking them in order (from the start point γ(0) = (1, 0)), their signed curvatures have the signs +, −, and −. Thus, the rotation index equals r(γ) = +1 − 1 − 1 = −1. We conclude from the Whitney-Graustein theorem that this curve is regularly homotopic to the curve α : [0, 1] → R2, α(t) = (cos(2πt), − sin(2πt)), the unit circle traversed once clockwise, which is not obvious.

Chapter III
Multivariable Calculus

In this chapter we discuss background material on calculus, which is necessary for a proper definition of an n-dimensional regular surface and the study of such surfaces in the sequel chapters. In particular, we discuss differential calculus from a viewpoint that emphasises the use of Linear Algebra.

9. Convergence in the Euclidean space and continuous maps

In this section we recall the basic notion of convergence, generalising the one that you met in calculus in one variable. We relate it to the properties of subsets of Rn, and then proceed with a discussion of continuous maps.

9.1. Open and closed sets in Rn. Let Rn be an n-dimensional Euclidean space. By Br(x) we denote the open ball centred at a point x ∈ Rn of radius r > 0, that is

    Br(x) = {y ∈ Rn : |x − y| < r}.

These n-dimensional balls can be viewed as versions of open intervals in higher dimensions. You might have met the following definition in other modules.

Definition III.1. A subset U ⊂ Rn is called open if for every x ∈ U there exists a ball Br(x) such that Br(x) ⊂ U. A subset V ⊂ Rn is called closed if the complement Rn \ V is open.

It is not hard to see (make sure that you do) that in dimension one, open intervals (a, b) ⊂ R are open subsets: for every point x ∈ (a, b) one can find an interval (x − δ, x + δ) with an appropriately small δ > 0 such that it is contained in (a, b). Similarly, closed intervals [a, b] ⊂ R are closed subsets.
We proceed with the following examples.
Example III.2 (Basic examples).
1. The empty set ∅ and the whole space Rn are both open and closed.
2. The set [0, 1) ⊂ R is neither open nor closed in R. (Try to sketch your own argument for this
statement and then compare it with the one given in a lecture session.)
3. An open ball Br (x) ⊂ Rn is an open set.
Proof. Let y ∈ Br(x) be an arbitrary point. We need to show that there exists r′ > 0 such that the ball Br′(y) is contained in Br(x). Denote by ρ the distance |x − y| < r, and set r′ = r − ρ > 0. We claim that Br′(y) ⊂ Br(x). To see the latter, pick an arbitrary point z ∈ Br′(y). Then by the triangle inequality we obtain

    |z − x| ≤ |z − y| + |y − x| < r′ + ρ = (r − ρ) + ρ = r,

and conclude that z lies in Br(x). Thus, the inclusion Br′(y) ⊂ Br(x) is demonstrated.

4. A closed ball B̄r(x) = {y ∈ Rn : |x − y| ≤ r} ⊂ Rn is a closed set.

Self-Check Question III.1. Can you write up an argument for the statement 4 in Example III.2 above? (Try to show that the complement Rn \ B̄r(x) is open and argue similarly to the proof of the statement 3.)

Now we give the definition of a converging sequence in the Euclidean space Rn.

Definition III.3. A sequence (pk) of points pk ∈ Rn, where k = 1, . . . , +∞, is called converging to a point p ∈ Rn, if the sequence of lengths |pk − p|, or equivalently the distance between pk and p, converges to zero, that is |pk − p| → 0 as k → +∞.

It is straightforward to see that a sequence pk = (x1k, . . . , xnk) ∈ Rn converges to a point p = (x1, . . . , xn) ∈ Rn if and only if for each i = 1, . . . , n the sequence xik converges to xi as k → +∞. The point p above is called the limit of a converging sequence.

Self-Check Question III.2. Can you prove the statement above that says the convergence pk → p is equivalent to the convergence of all coordinates of pk to the corresponding coordinates of p? Hint: write down the distance |pk − p| in terms of coordinates and use the so-called squeezing lemma.

Example III.4. Consider a sequence pk = (1/k, 0) ∈ R2. The discussion above shows that pk is a converging sequence, and the limit is the origin p = (0, 0). Consider the set

    U = R2 \ (∪pk) = R2 \ {pk : k ≥ 1}.
The argument similar to the one we used in Example III.2 shows that U ⊂ R2 is not an open set: the point p = (0, 0) belongs to U and no ball centred at p lies in U. Thus, the set V = (∪pk) = R2 \ U is not closed. Note that if we add the limit point (0, 0) to the set V, then we obtain a closed set V̄ = V ∪ {(0, 0)}.
The example above shows that the property of a set being closed might be related to the property of containing the limit points of converging sequences. As the following statement shows, this is indeed the case.
Proposition III.1. A subset V ⊂ Rn is closed if and only if for any converging sequence (pk ) of
points pk ∈ V its limit p belongs to V .
Proof. Suppose that V is closed, and let us show that for any converging sequence (pk) of points pk ∈ V its limit p belongs to V. Suppose the contrary, that is, there exists a sequence (pk) of points pk ∈ V whose limit p does not belong to V. Then p ∈ Rn \ V, and since the latter set is open, there exists r > 0 such that the ball Br(p) lies in Rn \ V. Since p is the limit of pk, there exists an integer N such that |pk − p| < r for any k > N. Hence, pk ∈ Br(p) ⊂ Rn \ V for any k > N, and in particular, pk ∉ V for any k > N. Contradiction.

Now we prove the converse statement. Suppose the contrary: V is not closed, and hence, Rn \ V is not open. The latter means that there exists a point p ∈ Rn \ V such that for any ball Br(p) there exists a point q ∈ Br(p) such that q ∉ Rn \ V, i.e. q ∈ V. For any integer k > 0 choose such a point qk ∈ B1/k(p), qk ∈ V. Then the sequence qk converges to p as k → +∞. Hence, by our hypotheses, p ∈ V, and we arrive at a contradiction: p has been chosen from the set Rn \ V.
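A small numerical companion to Proposition III.1 and Example III.4 (our own illustration, not part of the notes): the sequence pk = (1/k, 0) converges to p = (0, 0); the limit lies in the closed ball B̄1(0), a closed set, but it is not a point of the set V = {pk : k ≥ 1}, which is exactly why V fails to be closed.

```python
import numpy as np

pk = np.array([(1.0/k, 0.0) for k in range(1, 10001)])   # the sequence pk
p = np.zeros(2)                                          # its limit (0, 0)

dist_to_p = np.linalg.norm(pk - p, axis=1)               # |pk - p| = 1/k -> 0
in_closed_ball = np.linalg.norm(p) <= 1.0                # limit lies in the closed ball
in_V = any(np.array_equal(p, q) for q in pk)             # but p is not a point of V
print(dist_to_p[-1] < 1e-3, in_closed_ball, in_V)        # True True False
```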
9.2. Continuous maps, homeomorphisms, and embeddings. Now we give a definition of the
property of a map to be continuous at a given point. It is a natural higher dimensional version of the
definition that you might have seen in MATH1026.
Definition III.5. Let W ⊂ Rn be an open subset. A map Φ : W → Rm is called continuous at a point
x ∈ W if for any ball Bε (Φ(x)) ⊂ Rm there exists a ball Bδ (x) ⊂ W such that Φ(Bδ (x)) ⊂ Bε (Φ(x)).
A map Φ : W → Rm is called continuous if it is continuous at every point x ∈ W .
Self-Check Question III.3. Find your old MATH1026 lecture notes, and then find the definition
of a continuous map in these notes. Compare this definition with Definition III.5 in the case n = 1.
Are they equivalent?
We will discuss the geometric meaning of the above definition in lecture sessions. Now we proceed with the following very useful proposition, which gives a number of equivalent reformulations of the property of being continuous.
Proposition III.2. Let W ⊂ Rn be an open set. Then for a map Φ : W → Rm the following
hypotheses are equivalent:
(i) Φ is continuous everywhere in W (in the sense of Definition III.5);
(ii) for any open subset U ⊂ Rm the pre-image Φ−1 (U ) is open;
(iii) for any closed subset V ⊂ Rm the pre-image Φ−1 (V ) is closed;
(iv) for any x ∈ W and any converging sequence xk → x, where xk ∈ W , the sequence Φ(xk )
converges to Φ(x).
Proof. It is sufficient to prove the chain of the implications (i) ⇒ (ii) ⇒ (iii) ⇒ (iv) ⇒ (i).
(i) ⇒ (ii) : Let x be a point in Φ−1 (U ). Since U is open, there exists a ball Bε (Φ(x)) that is
contained in U . Since Φ is continuous (in the sense of Definition III.5), there exists a ball Bδ (x) such
that Φ(Bδ (x)) ⊂ Bε (Φ(x)). Then, taking the pre-images, we obtain
Bδ (x) ⊂ Φ−1 (Φ(Bδ (x))) ⊂ Φ−1 (Bε (Φ(x))) ⊂ Φ−1 (U ),
where in the second inclusion we used that the pre-image respects the inclusion of sets, see relation (I.8)
in Chapter I, which is straightforward to verify. Thus, for any x ∈ Φ−1 (U ) we found a ball Bδ (x)
that lies in Φ−1 (U ), and hence, Φ−1 (U ) is an open set.
(ii) ⇒ (iii): if V ⊂ Rm is closed, then U = Rm \ V is open. Thus, by (ii) the set Φ−1(U) is open, and we conclude that the set

    Φ−1(V) = Φ−1(Rm \ U) = Rn \ Φ−1(U)

is closed. In the second equality above we used that the pre-image respects the complement of sets, see relation (I.9) in Chapter I, which is straightforward to verify.
(iii) ⇒ (iv): Suppose the contrary, that is, the sequence Φ(xk) does not converge to Φ(x). Then there exist a subsequence xkℓ and a ball Bε(Φ(x)) such that Φ(xkℓ) ∈ Rm \ Bε(Φ(x)). The latter implies

    xkℓ ∈ Φ−1(∪Φ(xkℓ)) ⊂ Φ−1(Rm \ Bε(Φ(x))).

By the hypothesis (iii) the last set above is closed as the pre-image of a closed set, and by Proposition III.1 we conclude that

    lim xkℓ = x ∈ Φ−1(Rm \ Bε(Φ(x)))  ⟹  Φ(x) ∈ Rm \ Bε(Φ(x)).

The latter is clearly a contradiction, since Φ(x) ∈ Bε(Φ(x)).
(iv) ⇒ (i) : (Use the space below to sketch an argument for this statement given in a lecture session.)
Self-Check Question III.4. The property of a map being continuous as a map of both variables
already appears in the definition of regular homotopy in Chapter II. Go back to the examples of
regular homotopy in Chapter II, and using Proposition III.2, or otherwise, check that the maps
F : [0, 1] × [0, 1] → Rn in these examples are continuous in both variables.
The following useful example allows us to produce many open and closed sets.
Example III.6. Let f1, . . . , fm be a collection of continuous functions on Rn. Then by Proposition III.2 the subset

    U = {x ∈ Rn : f1(x) < 0, . . . , fm(x) < 0} ⊂ Rn

is open, and the subset

    Ū = {x ∈ Rn : f1(x) ≤ 0, . . . , fm(x) ≤ 0} ⊂ Rn

is closed.

Self-Check Question III.5. Try to experiment by picking a continuous function f : R2 → R and drawing the sets {x ∈ R2 : f(x) < 0} and {x ∈ R2 : f(x) > 0}.
Self-Check Question III.6. How would you define the interior of an ellipsoid in R3 ? Your set
should not include points that lie on such an ellipsoid. Can you show, using the example above, that
the interior of an ellipsoid is open?
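As a companion to the self-check question (a sketch with our own choice of ellipsoid, not a model answer): writing the interior as {f < 0} for a continuous f puts it exactly in the situation of Example III.6, and hence the interior is open.

```python
# Interior of the ellipsoid x^2/4 + y^2/9 + z^2 = 1 as the open set {f < 0}.
def f(p):
    x, y, z = p
    return x**2/4 + y**2/9 + z**2 - 1

def in_interior(p):
    return f(p) < 0

print(in_interior((0, 0, 0)),    # centre: inside
      in_interior((2, 0, 0)),    # a point on the ellipsoid itself: excluded
      in_interior((3, 0, 0)))    # outside
# True False False
```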
We proceed with discussing the following more peculiar, but important classes of continuous maps.
Some of you might have seen a similar definition in other modules.
Definition III.7. Let U ⊂ Rn and V ⊂ Rn be two open sets. A map Φ : U → V is called a
homeomorphism if it is bijective, continuous, and the inverse map Φ−1 is also continuous.
Example III.8. Consider the maps of the real line to itself:
(i) f : R → R, f(x) = x³: it is a homeomorphism. First, f is a continuous function. It is straightforward to see that it is bijective, and the inverse function, given by f−1(y) = y^{1/3}, is also continuous.
(ii) f : R → R, f(x) = x²: it is not a homeomorphism (since it is neither surjective nor injective, hence, not bijective). However, the map f+ : (0, +∞) → (0, +∞), f+(x) = x² is a homeomorphism.
Self-Check Question III.7. Can you generalise Example III.8, by considering maps f (x) = xk ,
where k > 0 is an integer?
The notion of homeomorphism is very special and imposes a strong relationship between open sets U ⊂ Rn and V ⊂ Rn. Note that the sets here are chosen from the same Euclidean space Rn not coincidentally. There is a deep theorem in topology (the so-called Brouwer invariance of domain theorem) that says that if for sets U ⊂ Rn and V ⊂ Rm from different Euclidean spaces there is a homeomorphism Φ : U → V, then the dimensions n and m have to be equal. For convenience we give the following definition.
Definition III.9. The sets U ⊂ Rn and V ⊂ Rn are called homeomorphic if there exists a homeomorphism Φ : U → V .
Example III.10. The intervals (−1, 1) and (−n, n) are homeomorphic; as a homeomorphism one can take Φ(t) = nt, where t ∈ (−1, 1). However, neither of these sets is homeomorphic to the
subset X = (−∞, −1) ∪ (1, +∞). Indeed, suppose the contrary: there exists a homeomorphism
Φ : (−1, 1) → X. Let t1 and t2 be points from (−1, 1) such that Φ(t1 ) = −2 and Φ(t2 ) = 2. Then by
the intermediate value theorem there exists a point t0 ∈ (−1, 1) such that Φ(t0 ) = 0. However, this is
a contradiction, since Φ takes values in X and 0 ∉ X.
Self-Check Question III.8. In the example above we used the intermediate value theorem. Do you remember what it says? Can you write down the statement of this theorem?
We end this section with a brief discussion of continuous embeddings. These are maps that carry
many properties similar to homeomorphisms, but could be considered between subsets in Euclidean
spaces of different dimensions. The following definition plays an important role in the study of surfaces
in the sequel chapters.
Definition III.11. Let U ⊂ Rn be an open subset. A map Φ : U → Rm, where m ≥ n, is called a continuous embedding if it is injective and any sequence xk ∈ U converges to a point x ∈ U if and only if the sequence Φ(xk) converges to Φ(x).
A few remarks on Definition III.11: first, note that any map that satisfies the hypotheses in the
definition is continuous. Second, since a map Φ : U → Rm is assumed to be injective, the map
Φ : U → Φ(U ) is bijective, and one can talk about the inverse map Φ−1 : Φ(U ) → U . The last
hypothesis in the definition says precisely that the inverse map Φ−1 is continuous in the so-called
sequential sense, see statement (iv) in Proposition III.2.
Example III.12 (see Video 5 on Minerva). Consider the following maps, representing different ways
of bending intervals in a plane:
(i) Φ1 : (0, 1) → R2 is given by Φ1 (t) = (cos 2πt, sin 2πt).
(ii) Φ2 : (−1/4, 3/4) → R2 is given by Φ2 (t) = (cos 2πt, sin 4πt).
(In the space below draw the images of Φ1 and Φ2 and explain that Φ1 is a continuous embedding,
while Φ2 is not.)
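The failure of Φ2 can be seen numerically (our own illustration, not part of the notes): the sequence tk = 3/4 − 1/k stays far away from 1/4, yet Φ2(tk) converges to Φ2(1/4) = (0, 0), so the "if and only if" condition in Definition III.11 fails.

```python
import numpy as np

def phi2(t):
    """Phi2 : (-1/4, 3/4) -> R^2 from Example III.12(ii)."""
    return np.array([np.cos(2*np.pi*t), np.sin(4*np.pi*t)])

tk = 0.75 - 1.0/np.arange(10, 10001)                     # tk -> 3/4, far from 1/4
image_gap = np.linalg.norm(phi2(tk) - phi2(0.25)[:, None], axis=0)
print(image_gap[-1] < 0.01, abs(tk[-1] - 0.25) > 0.49)   # True True
```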
10. Introducing the protagonist: notion of differential
In this section we discuss the notion of differential and its properties. We explicitly explain relationships with other terms used in calculus, such as Jacobian, gradient, and directional derivative.
Throughout this section by U , V we normally denote open subsets in Euclidean spaces.
10.1. Differential of a map. The following definition describes a central object of this chapter.
Definition III.13. A vector-function Φ : U ⊂ Rn → Rm is called differentiable at a point x ∈ U if there exists a linear map L : Rn → Rm such that

    lim_{h→0} |Φ(x + h) − Φ(x) − L(h)| / |h| = 0,    (III.1)

where | · | denotes the length of a vector in Rm in the numerator and the length in Rn in the denominator. The linear map L is called the differential of Φ at x, and is denoted by DxΦ.
First, note that if a map Φ : U ⊂ Rn → Rm is differentiable at a point x ∈ U , then the linear
map L that satisfies relation (III.1) is unique, and hence, the differential Dx Φ is well-defined. This
statement is a consequence of the following fact: if L : Rn → Rm is a linear operator such that
|L(h)| / |h| → 0 as h → 0, then L ≡ 0.
Self-Check Question III.9. Can you prove the last statement?
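Relation (III.1) can be probed numerically. In the sketch below (our own example map, not from the notes) L is the Jacobian matrix of Φ(x, y) = (x²y, sin x + y) at p, and the quotient in (III.1) is seen to shrink with |h|.

```python
import numpy as np

def Phi(p):
    x, y = p
    return np.array([x**2 * y, np.sin(x) + y])

p = np.array([1.0, 2.0])
L = np.array([[2*p[0]*p[1], p[0]**2],
              [np.cos(p[0]), 1.0]])        # Jacobian matrix of Phi at p

h = np.array([0.3, -0.2])
ratios = [np.linalg.norm(Phi(p + s*h) - Phi(p) - L @ (s*h)) / np.linalg.norm(s*h)
          for s in (1.0, 0.1, 0.01, 0.001)]
print([round(r, 6) for r in ratios])   # the quotient in (III.1) shrinks with |h|
```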
The following simple example shows that for maps Φ : U ⊂ R → R the above definition of
differentiability at a given point x ∈ U …