A Quick and Dirty Introduction to the Curvature of Surfaces

(Original author: Keenan Crane)

Let’s take a more in-depth look at the curvature of surfaces. The word “curvature” really corresponds to our everyday understanding of what it means for something to be curved: eggshells, donuts, and cavatappi pasta have a lot of curvature; floors, ceilings, and cardboard boxes do not. But what about something like a beer bottle? Along one direction the bottle quickly curves around in a circle; along another direction it’s completely flat and travels along a straight line:

This way of looking at curvature — in terms of curves traveling along the surface — is often how we treat curvature in general. In particular, let \(X\) be a unit tangent direction at some distinguished point on the surface, and consider a plane containing both \(df(X)\) and the corresponding normal \(N\). This plane intersects the surface in a curve, and the curvature \(\kappa_n\) of this curve is called the normal curvature in the direction \(X\):

Remember the Frenet-Serret formulas (Theorem 1.1)? They tell us that the change in the normal along a curve is given by \(dN = -\kappa T + \tau B\). We can therefore get the normal curvature along \(X\) by extracting the tangential part of \(dN\):

\[ \kappa_n(X) = \frac{df(X) \cdot dN(X)}{|df(X)|^2}. \]

The factor \(|df(X)|^2\) in the denominator simply normalizes any “stretching out” that occurs as we go from the domain \(M\) into \(\mathbb{R}^3\) — a derivation of this formula can be found in Appendix A. Note that normal curvature is signed, meaning the surface can bend toward the normal or away from it.
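To see the formula in action, here is a small numerical sketch (in Python with NumPy) that evaluates \(\kappa_n(X)\) on the unit cylinder \(f(u,v) = (\cos u, \sin u, v)\), where \(df\) and \(dN\) are known in closed form; the choice of surface, of the outward normal, and of the evaluation point is made purely for illustration:

```python
import numpy as np

# Unit cylinder f(u,v) = (cos u, sin u, v) with outward unit normal N = (cos u, sin u, 0).
# The differentials df and dN are 3x2 matrices whose columns are the partial derivatives.
def df(u, v):
    return np.array([[-np.sin(u), 0.0],
                     [ np.cos(u), 0.0],
                     [ 0.0,       1.0]])

def dN(u, v):
    return np.array([[-np.sin(u), 0.0],
                     [ np.cos(u), 0.0],
                     [ 0.0,       0.0]])

def normal_curvature(u, v, X):
    """kappa_n(X) = df(X) . dN(X) / |df(X)|^2."""
    dfX, dNX = df(u, v) @ X, dN(u, v) @ X
    return np.dot(dfX, dNX) / np.dot(dfX, dfX)

u, v = 0.3, 1.0
print(normal_curvature(u, v, np.array([1.0, 0.0])))  # around the cylinder: 1.0
print(normal_curvature(u, v, np.array([0.0, 1.0])))  # along its axis:      0.0
```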

Principal, Mean, and Gaussian Curvature

At any given point we can ask: along which directions does the surface bend the most? The unit vectors \(X_1\) and \(X_2\) along which we find the maximum and minimum normal curvatures \(\kappa_1\) and \(\kappa_2\) are called the principal directions; the curvatures \(\kappa_i\) are called the principal curvatures. For instance, the beer bottle above might have principal curvatures \(\kappa_1 = 1\), \(\kappa_2 = 0\) at the marked point.

We can also talk about principal curvature in terms of the shape operator, which is the unique map \(S: TM \rightarrow TM\) satisfying

\[ df(SX) = dN(X) \]

for all tangent vectors \(X\). The shape operator \(S\) and the Weingarten map \(dN\) essentially represent the same idea: they both tell us how the normal changes as we travel along a direction \(X\). The only difference is that \(S\) specifies this change in terms of a tangent vector on \(M\), whereas \(dN\) gives us the change as a tangent vector in \(\mathbb{R}^3\). It’s worth noting that many authors do not make this distinction, and simply assume an isometric identification of tangent vectors on \(M\) and the corresponding tangent vectors in \(\mathbb{R}^3\). However, we choose to be more careful so that we can explicitly account for the dependence of various quantities on the immersion \(f\) — this dependence becomes particularly important if you actually want to compute something! (By the way, why can we always express the change in \(N\) in terms of a tangent vector? It’s because \(N\) is the unit normal, hence it cannot grow or shrink in the normal direction.)
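If you actually want to compute with \(S\), one concrete option is to represent it as a \(2 \times 2\) matrix in the coordinate basis \(e_u, e_v\), obtained by solving \(df(Se_i) = dN(e_i)\). The sketch below does this for a torus; the radii, the evaluation point, and the finite-difference approximation of the derivatives are all illustrative choices:

```python
import numpy as np

# A torus with major radius R and tube radius r, parameterized over (u, v).
R, r = 2.0, 1.0

def f(u, v):
    return np.array([(R + r*np.cos(v))*np.cos(u),
                     (R + r*np.cos(v))*np.sin(u),
                     r*np.sin(v)])

def jacobian(F, u, v, h=1e-5):
    """Columns are the partial derivatives of F in u and v (central differences)."""
    return np.column_stack([(F(u + h, v) - F(u - h, v)) / (2*h),
                            (F(u, v + h) - F(u, v - h)) / (2*h)])

def N(u, v):
    J = jacobian(f, u, v)
    n = np.cross(J[:, 0], J[:, 1])
    return n / np.linalg.norm(n)

u, v = 0.2, 0.5
Jf, JN = jacobian(f, u, v), jacobian(N, u, v)

# Matrix of S in the basis (e_u, e_v): solve df(S e_i) = dN(e_i) via the normal equations.
S = np.linalg.solve(Jf.T @ Jf, Jf.T @ JN)
print(np.abs(Jf @ S - JN).max())   # ~0: df(SX) really does reproduce dN(X)
```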

One important fact about the principal directions and principal curvatures is that they correspond to eigenvectors and eigenvalues (respectively) of the shape operator:

\[ S X_i = \kappa_i X_i. \]

Moreover, the principal directions are orthogonal with respect to the induced metric: \(g(X_1,X_2) = df(X_1) \cdot df(X_2) = 0\) — see Appendix B for a proof of these two facts. The principal curvatures therefore tell us everything there is to know about normal curvature at a point, since we can express any tangent vector \(Y\) as a linear combination of the principal directions \(X_1\) and \(X_2\). In particular, if \(Y\) is a unit vector offset from \(X_1\) by an angle \(\theta\), then the associated normal curvature is

\[ \kappa_n(Y) = \kappa_1 \cos^2 \theta + \kappa_2 \sin^2 \theta, \]

as you should be able to easily verify using the relationships above. Often, however, working directly with principal curvatures is fairly inconvenient — especially in the discrete setting.
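That verification is easy to carry out numerically as well. Here is a sketch for the saddle \(f(u,v) = (u, v, u^2 - v^2)\) at the origin, where (with the upward choice of normal) the principal curvatures work out to \(\kappa_1 = 2\) along the \(v\) direction and \(\kappa_2 = -2\) along the \(u\) direction; the surface itself is just an illustrative choice:

```python
import numpy as np

# Saddle f(u,v) = (u, v, u^2 - v^2) at the origin, with upward normal N = (-2u, 2v, 1)/|...|.
Jf = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # columns f_u, f_v at the origin
JN = np.array([[-2.0, 0.0], [0.0, 2.0], [0.0, 0.0]])  # columns N_u, N_v at the origin

# The principal curvatures are the eigenvalues of the shape operator:
S = np.linalg.solve(Jf.T @ Jf, Jf.T @ JN)
print(np.linalg.eigvals(S))                           # the eigenvalues are -2 and 2

k1, k2 = 2.0, -2.0
X1, X2 = np.array([0.0, 1.0]), np.array([1.0, 0.0])   # principal directions (e_v and e_u)

def kappa_n(Y):
    """kappa_n(Y) = df(Y) . dN(Y) / |df(Y)|^2."""
    dfY, dNY = Jf @ Y, JN @ Y
    return np.dot(dfY, dNY) / np.dot(dfY, dfY)

for theta in np.linspace(0.0, np.pi, 7):
    Y = np.cos(theta)*X1 + np.sin(theta)*X2
    print(kappa_n(Y), k1*np.cos(theta)**2 + k2*np.sin(theta)**2)  # the two columns agree
```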

On the other hand, two closely related quantities, called the mean curvature and the Gaussian curvature, will show up over and over again (and have some particularly nice interpretations in the discrete world). The mean curvature \(H\) is the arithmetic mean of the principal curvatures:

\[ H = \frac{\kappa_1 + \kappa_2}{2}, \]

and the Gaussian curvature is the (square of the) geometric mean:

\[ K = \kappa_1 \kappa_2. \]

What do the values of \(H\) and \(K\) imply about the shape of the surface? Perhaps the most elementary interpretation is that Gaussian curvature is like a logical “and” (is there curvature along both directions?), whereas mean curvature is more like a logical “or” (is there curvature along at least one direction?). Of course, you have to be a little careful here, since you can also get zero mean curvature when \(\kappa_1 = -\kappa_2\).
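Since \(\kappa_1\) and \(\kappa_2\) are the eigenvalues of the shape operator, \(H\) and \(K\) can also be read off as half the trace and the determinant of \(S\), respectively. A tiny sketch (the particular shape operators below are made-up diagonal examples):

```python
import numpy as np

def mean_and_gaussian(S):
    """H = tr(S)/2 and K = det(S), since kappa_1, kappa_2 are the eigenvalues of S."""
    return 0.5*np.trace(S), np.linalg.det(S)

print(mean_and_gaussian(np.diag([ 2.0, -2.0])))  # saddle-like point:      H = 0,   K = -4
print(mean_and_gaussian(np.diag([ 1.0,  0.0])))  # beer-bottle-like point: H = 0.5, K = 0
print(mean_and_gaussian(np.diag([ 1.0,  1.0])))  # sphere-like point:      H = 1,   K = 1
```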

It also helps to see pictures of surfaces with zero mean and Gaussian curvature. Zero-curvature surfaces are so well-studied in mathematics that they have special names. Surfaces with zero Gaussian curvature are called developable surfaces because they can be “developed” or flattened out into the plane without any stretching or tearing. For instance, any piece of a cylinder is developable since one of the principal curvatures is zero:

Surfaces with zero mean curvature are called minimal surfaces because (as we’ll see later) they minimize surface area (with respect to certain constraints). Minimal surfaces tend to be saddle-like since principal curvatures have equal magnitude but opposite sign:

The saddle is also a good example of a surface with negative Gaussian curvature. What does a surface with positive Gaussian curvature look like? The hemisphere is one example:

Note that in this case \(\kappa_1 = \kappa_2\) and so principal directions are not uniquely defined — maximum (and minimum) curvature is achieved along any direction \(X\). Any such point on a surface is called an umbilic point.
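We can check the umbilic claim directly for the unit sphere, where the outward normal is \(N = f\), so \(dN = df\) and the shape operator is the identity at every point; the parameterization and evaluation point below are arbitrary choices:

```python
import numpy as np

# Unit sphere f(u,v) = (cos u cos v, sin u cos v, sin v); the outward normal is N = f.
def df(u, v):
    return np.array([[-np.sin(u)*np.cos(v), -np.cos(u)*np.sin(v)],
                     [ np.cos(u)*np.cos(v), -np.sin(u)*np.sin(v)],
                     [ 0.0,                  np.cos(v)          ]])

u, v = 0.7, 0.2
Jf = df(u, v)                              # and dN = df here, since N = f
S = np.linalg.solve(Jf.T @ Jf, Jf.T @ Jf)  # the shape operator: the identity matrix
print(S)                                   # kappa_1 = kappa_2 = 1 in every direction, K = 1 > 0
```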

There are plenty of cute theorems and relationships involving curvature, but those are the basic facts: the curvature of a surface is completely characterized by the principal curvatures, which are the maximum and minimum normal curvatures. The Gaussian and mean curvature are simply averages of the two principal curvatures, but (as we’ll see) are often easier to get your hands on in practice.

 

The Second Fundamental Form

For historical reasons, it’s probably good to mention an object called the second fundamental form. I’m actually not sure what’s so fundamental about this form, since it’s nothing more than a mashup of the metric \(g\) and the shape operator \(S\), which themselves are simple functions of two truly fundamental objects, namely the immersion \(f\) and the Gauss map \(N\):

\[ I\!I(X,Y) = g(SX,Y) = dN(X) \cdot df(Y). \]

(I suppose “the somewhat auxiliary form” didn’t have a nice ring to it…) The most important thing to realize is that \(I\!I\) does not introduce any new geometric ideas — just another way of writing down things we’ve already seen.
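In coordinates, \(I\!I\) is just the \(2 \times 2\) matrix with entries \(dN(e_i) \cdot df(e_j)\), and the identity \(I\!I(X,Y) = g(SX,Y) = dN(X) \cdot df(Y)\) can be checked directly. In the sketch below, the unit cylinder and the particular vectors \(X, Y\) are arbitrary choices for illustration:

```python
import numpy as np

# Unit cylinder f(u,v) = (cos u, sin u, v), with outward normal N = (cos u, sin u, 0).
u = 0.5
Jf = np.array([[-np.sin(u), 0.0], [np.cos(u), 0.0], [0.0, 1.0]])   # columns f_u, f_v
JN = np.array([[-np.sin(u), 0.0], [np.cos(u), 0.0], [0.0, 0.0]])   # columns N_u, N_v

g  = Jf.T @ Jf                      # first fundamental form (the induced metric)
II = JN.T @ Jf                      # second fundamental form: II_ij = dN(e_i) . df(e_j)
S  = np.linalg.solve(g, Jf.T @ JN)  # matrix of the shape operator

X, Y = np.array([0.3, -1.2]), np.array([2.0, 0.7])
print(X @ II @ Y)                   # II(X, Y) = dN(X) . df(Y)
print((S @ X) @ g @ Y)              # g(SX, Y) -- the same number
```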


Appendix A: A Nice Formula for Normal Curvature

Consider a unit-speed curve \(c(t)\) on a domain \(M \subset \mathbb{R}^2\), and an immersion \(f\) of \(M\) into \(\mathbb{R}^3\); the composition \(\gamma = f \circ c\) defines a curve in \(\mathbb{R}^3\). Letting \(X = \dot{c}\) denote the time derivative of \(c\), we can express the unit tangent field on \(\gamma\) as

\[ T = \frac{df(X)}{|df(X)|}. \]

Recall from our notes on curves that the curvature normal \(\kappa n\) is defined as the (negative of the) change in tangent direction as we travel along the curve at unit speed (we’ll use a lowercase “\(n\)” here to distinguish from the surface normal \(N\)). In this case, however, the initially unit-speed curve \(c\) may get stretched out by the map \(f\). Therefore, to get the curvature normal we have to evaluate

\[ \kappa n = -\frac{dT}{d\ell}, \]

where \(\ell\) denotes the distance traveled in \(\mathbb{R}^3\) along \(\gamma\). The normal curvature \(\kappa_n(\gamma)\) can be defined as the projection of the curvature normal onto the surface normal \(N\). More explicitly, we have

\[ \kappa_n = N \cdot \kappa n = -N \cdot \frac{dT}{d\ell} = -N \cdot \frac{dT}{dt}\frac{dt}{d\ell}. \]

The quantity \(\tfrac{d\ell}{dt}\) is just the amount by which the curve gets stretched out as we go from \(M\) into \(\mathbb{R}^3\), which we can also write as \(|df(X)|\). We therefore have

\[
\begin{array}{rcl}
-|df(X)| \kappa_n &=& N \cdot \frac{dT}{dt} \\
&=& N \cdot \frac{d}{dt}\left( df(X) |df(X)|^{-1} \right) \\
&=& N \cdot \left( \frac{d}{dt} df(X) \right) |df(X)|^{-1} +\underbrace{N \cdot df(X)}_{=0} \left( \frac{d}{dt} |df(X)|^{-1} \right) \\
&=& N \cdot \left( \frac{d}{dt} df(\dot{c}) \right) |df(\dot{c})|^{-1} \\
&=& \displaystyle\frac{N \cdot df(\ddot{c})}{|df(\dot{c})|}.
\end{array}
\]

Noting that \(N \cdot df(\dot{c}) = 0\) implies \(N \cdot df(\ddot{c}) = -\dot{N} \cdot df(\dot{c})\), and moreover that \(\dot{N} = dN(\dot{c})\), we get

\[ |df(X)| \kappa_n = \frac{dN(\dot{c}) \cdot df(\dot{c})}{|df(\dot{c})|}, \]

or equivalently

\[ \kappa_n = \frac{dN(X) \cdot df(X)}{|df(X)|^2}, \]

which is the formula introduced above.
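Here is a numerical version of this derivation for a curve on the unit cylinder: we compute \(-N \cdot dT/d\ell\) directly by finite differences and compare it with \(dN(X) \cdot df(X)/|df(X)|^2\). The curve (a straight line in the domain, which maps to a helix), the angle \(\alpha\), and the step size are all illustrative choices:

```python
import numpy as np

# Unit cylinder and its outward normal.
def f(u, v):  return np.array([np.cos(u), np.sin(u), v])
def N(u, v):  return np.array([np.cos(u), np.sin(u), 0.0])

alpha = 0.8
def c(t):     return np.array([t*np.cos(alpha), t*np.sin(alpha)])  # unit-speed line in the domain
def gamma(t): return f(*c(t))                                      # a helix on the cylinder

h, t0 = 1e-4, 0.9

def T(t):
    """Unit tangent of gamma, via central differences."""
    d = (gamma(t + h) - gamma(t - h)) / (2*h)
    return d / np.linalg.norm(d)

# Left-hand side: kappa_n = -N . dT/dl, where dl/dt = |d gamma/dt|.
dT_dt = (T(t0 + h) - T(t0 - h)) / (2*h)
dl_dt = np.linalg.norm((gamma(t0 + h) - gamma(t0 - h)) / (2*h))
lhs   = -np.dot(N(*c(t0)), dT_dt / dl_dt)

# Right-hand side: kappa_n = dN(X) . df(X) / |df(X)|^2, with X = dc/dt.
p, X = c(t0), np.array([np.cos(alpha), np.sin(alpha)])
dfX  = (f(*(p + h*X)) - f(*(p - h*X))) / (2*h)
dNX  = (N(*(p + h*X)) - N(*(p - h*X))) / (2*h)
rhs  = np.dot(dNX, dfX) / np.dot(dfX, dfX)

print(lhs, rhs)   # both roughly cos(alpha)**2, in agreement
```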


Appendix B: Why Are Principal Directions Orthogonal?

Earlier we stated that the unit principal directions \(X_1, X_2\) are orthogonal with respect to the metric \(g\) induced by the immersion \(f\), i.e.,

\[ g(X_1,X_2) = df(X_1) \cdot df(X_2) = 0. \]

First, let’s show that

\[ g(SX,Y) = g(X,SY), \]

i.e., \(S\) is self-adjoint with respect to the induced metric (equivalently: the second fundamental form \(I\!I\) is symmetric in its two arguments, i.e., \(I\!I(X,Y)=I\!I(Y,X)\)). To see why, consider that (by definition) the normal \(N\) is orthogonal to any tangent vector \(df(X)\):

\[ N \cdot df(X) = 0. \]

Differentiating this expression with respect to some other direction \(Y\), we get

\[ dN(Y) \cdot df(X) = -N \cdot d(df(X))(Y). \]

Using the equality of mixed partial derivatives, we see that \(S\) is indeed self-adjoint with respect to \(g\):

\[
\begin{array}{rcl}
g(SX,Y) &=& dN(X) \cdot df(Y) \\
&=& -N \cdot d(df(X))(Y) \\
&=& -N \cdot d(df(Y))(X) \\
&=& dN(Y) \cdot df(X) \\
&=& g(X,SY).
\end{array}
\]

(By the way, the essential trick we used here comes up all the time: if you see a product involving a derivative, try expressing it in terms of the derivative of a product.) Returning to our original question, we have

\[
\begin{array}{rcl}
\kappa_1 g(X_1,X_2) &=& \kappa_1 df(X_1) \cdot df(X_2) \\
&=& dN(X_1) \cdot df(X_2) \\
&=& dN(X_2) \cdot df(X_1) \\
&=& \kappa_2 df(X_2) \cdot df(X_1) \\
&=& \kappa_2 g(X_1,X_2).
\end{array}
\]

Therefore, either \(\kappa_1 = \kappa_2\) or else \(g(X_1,X_2)=0\). But if \(\kappa_1 = \kappa_2\) (i.e., the maximum and minimum principal curvatures are equal) then we’re at an umbilic point where all normal curvatures are equal, and we’re free to pick the principal directions however we please — in particular, we can use an arbitrary pair of orthogonal directions. The phenomenon we experience here reflects a more general fact from linear algebra: roughly speaking, if \(A\) is self-adjoint with respect to \(B\), then \(A\)’s eigenvectors will be orthogonal with respect to \(B\).
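Both facts are easy to confirm numerically. The sketch below uses the graph surface \(f(u,v) = (u, v, \sin u \cos v)\), where neither the metric nor the principal directions are anything special; the surface and the evaluation point are arbitrary choices, and the derivatives are approximated by finite differences:

```python
import numpy as np

# Graph surface f(u,v) = (u, v, sin(u) cos(v)).
def f(u, v):
    return np.array([u, v, np.sin(u)*np.cos(v)])

def jacobian(F, u, v, h=1e-5):
    """Columns are the partial derivatives of F in u and v (central differences)."""
    return np.column_stack([(F(u + h, v) - F(u - h, v)) / (2*h),
                            (F(u, v + h) - F(u, v - h)) / (2*h)])

def N(u, v):
    J = jacobian(f, u, v)
    n = np.cross(J[:, 0], J[:, 1])
    return n / np.linalg.norm(n)

u, v = 0.3, 0.7
Jf, JN = jacobian(f, u, v), jacobian(N, u, v)
g  = Jf.T @ Jf
II = JN.T @ Jf
print(II - II.T)                      # ~0: the second fundamental form is symmetric

S = np.linalg.solve(g, Jf.T @ JN)     # matrix of the shape operator
kappa, X = np.linalg.eig(S)
X1, X2 = X[:, 0], X[:, 1]
print(X1 @ g @ X2)                    # ~0: the principal directions are g-orthogonal
```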

One final question: why should \(\kappa_1\) and \(\kappa_2\) be the maximum and minimum normal curvatures? Well, think about what the largest and smallest eigenvalues of a linear map represent: the largest and smallest amount of “stretch” experienced by any unit vector. Hence, the normal curvature can be no larger than \(\kappa_1\) and no smaller than \(\kappa_2\).
