The previous section emphasized that knowing the sign of the derivative \(f'(x)\) of a function \(f(x)\) tells us on which intervals the function \(f(x)\) is monotone. We can use this information to identify extreme values.

Applications involving the identification of extreme values are often called optimization problems. The task in optimization is to identify the value of an independent variable in the system that will maximize or minimize some objective. Aside from the calculus in finding the extreme values, creating an appropriate function that will serve as the objective is often the greatest challenge.

Subsection 6.1.1 Local Extreme Values

When the derivative \(f'(x)\) changes sign at a point where \(f(x)\) is continuous, the function has a local or relative extreme value. We begin by focusing on what we mean by a local extreme value. A local extreme occurs where the function attains its highest or lowest value on some interval around that point. The function might exceed that value on some other interval, but the value needs to be the extreme in a neighborhood of the point.

Definition 6.1.1 Local (Relative) Extreme Values

A function \(f(x)\) has a local maximum at a point \(x=c\) in the domain if there is an interval \((a,b)\) with \(c \in (a,b)\) so that \(f(x) \le f(c)\) for all \(x \in (a,b)\text{.}\)

A function \(f(x)\) has a local minimum at a point \(x=c\) in the domain if there is an interval \((a,b)\) with \(c \in (a,b)\) so that \(f(c) \le f(x)\) for all \(x \in (a,b)\text{.}\)
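The definition can be illustrated with a small numeric check that samples a neighborhood of a candidate point. This is only a sketch: sampling can spot-check the inequality \(f(x) \le f(c)\), not prove it for all \(x\). The function used below is a hypothetical example, not one from the text.

```python
def is_local_max(f, c, radius=0.5, samples=1001):
    """Spot-check Definition 6.1.1: f(x) <= f(c) for sampled x
    in the interval (c - radius, c + radius)."""
    xs = [c - radius + 2 * radius * k / (samples - 1) for k in range(samples)]
    return all(f(x) <= f(c) for x in xs)

# Hypothetical example: f(x) = 1 - x^2 has a local maximum at x = 0.
f = lambda x: 1 - x**2
print(is_local_max(f, 0.0))   # True
print(is_local_max(f, 0.5))   # False: f(0) = 1 exceeds f(0.5) = 0.75
```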

The first derivative can often provide enough information to identify local extreme values. The applicable theorem is called the first derivative test.

Theorem 6.1.2 First Derivative Test

Suppose that \(f'(x)\) exists on an interval \((a,b)\text{,}\) except possibly at \(x=c\) with \(a \lt c \lt b\text{,}\) and that \(f(x)\) is continuous at \(x=c\text{.}\)

If \(f'(x) \lt 0\) for \(x \in (a,c)\) and \(f'(x) \gt 0\) for \(x \in (c,b)\text{,}\) then \(f(x)\) is decreasing on \((a,c]\) and increasing on \([c,b)\) so that \(f\) has a local minimum at \(x=c\text{.}\)

If \(f'(x) \gt 0\) for \(x \in (a,c)\) and \(f'(x) \lt 0\) for \(x \in (c,b)\text{,}\) then \(f(x)\) is increasing on \((a,c]\) and decreasing on \([c,b)\) so that \(f\) has a local maximum at \(x=c\text{.}\)

If \(f'(x)\) does not change sign, then \(f\) does not have a local extreme value at \(x=c\text{.}\)
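The theorem can be sketched as a small numeric routine that samples the sign of \(f'\) just left and right of a candidate point. This is a heuristic check, not a substitute for the theorem's hypotheses: it assumes the sample points land inside intervals where \(f'\) keeps one sign.

```python
def first_derivative_test(fprime, c, h=1e-4):
    """Classify x = c by sampling the sign of f'(x) on either side,
    in the spirit of Theorem 6.1.2 (numeric sketch only)."""
    left, right = fprime(c - h), fprime(c + h)
    if left < 0 < right:
        return "local minimum"
    if right < 0 < left:
        return "local maximum"
    return "no sign change: no local extreme"

# f(x) = x^2 has f'(x) = 2x: the sign changes - to + at x = 0.
print(first_derivative_test(lambda x: 2 * x, 0.0))     # local minimum
# f(x) = x^3 has f'(x) = 3x^2: no sign change at x = 0.
print(first_derivative_test(lambda x: 3 * x**2, 0.0))  # no sign change
```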

The denominator of \(f'(x)\) is never zero because \(x^2+3 \ge 3 \gt 0\text{.}\) So the sign can only change where \(3-x^2=0\text{,}\) which occurs at two values, \(x=\pm\sqrt{3}\text{.}\) There are three intervals of interest. We can find the sign of \(f'(x)\) on each using the test values \(x=-2\text{,}\) \(x=0\text{,}\) and \(x=2\text{.}\) The signs are summarized in the number line summary below.

We finish by interpreting our results.

Because \(f'(x) \lt 0\) on \((-\infty,-\sqrt{3})\) and \(f'(x) \gt 0\) on \((-\sqrt{3},\sqrt{3})\text{,}\) we know \(f(x)\) has a local minimum at \(x=-\sqrt{3}\text{.}\) (Minimum over the interval \((-\infty,+\sqrt{3})\))

Because \(f'(x) \gt 0\) on \((-\sqrt{3},\sqrt{3})\) and \(f'(x) \lt 0\) on \((\sqrt{3},\infty)\text{,}\) we know \(f(x)\) has a local maximum at \(x=\sqrt{3}\text{.}\) (Maximum over the interval \((-\sqrt{3},\infty)\))
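The sign analysis can be spot-checked numerically at the test points. The full statement of this example is not shown above; the derivative below is an assumed form consistent with the text's description (numerator \(3-x^2\text{,}\) denominator a positive power of \(x^2+3\text{,}\) as would arise from \(f(x)=x/(x^2+3)\)). Only the sign of the numerator matters for this check.

```python
def fprime(x):
    # Assumed form consistent with the text: numerator 3 - x^2,
    # denominator a power of x^2 + 3, which is always positive.
    return (3 - x**2) / (x**2 + 3)**2

# One test value in each of the three intervals of interest.
for x in (-2, 0, 2):
    print(x, "negative" if fprime(x) < 0 else "positive")
# -2 negative, 0 positive, 2 negative, matching the number line summary
```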

A graph of the function is illustrated below.

We will learn later how to identify whether a function has global extreme values.

Subsection 6.1.2 Optimization

Optimization is the application of finding extreme values in order to maximize or minimize some quantity of interest. In general, we have a system with some variable that we are free to vary and some quantity, depending on that variable, that we want to be at a maximum or minimum value. The variable that we vary is the independent variable. The quantity that we optimize is the dependent variable and is often called the objective function.

Frequently, the more challenging aspect of an optimization problem is identifying the appropriate function. Once the function is identified, the task reduces to identifying local extreme values and determining whether one of them is also the extreme value of interest. If there are multiple independent variables, we generally find a constraint that allows us to treat one as a dependent variable so that we ultimately have only a single independent variable.

Several simple examples come from geometry where we need to construct a shape that has some feature (like a given perimeter, area or volume) and we wish to make some other feature as large as possible. We use these examples not because they are practical but because they illustrate the principles of optimization effectively.

Example 6.1.6

Suppose we want to cut out a rectangle that has an area of \(500 \; cm^2\) but that horizontal and vertical cuts have different costs. If a horizontal cut costs twice as much as a vertical cut per centimeter, how should we cut the rectangle to minimize our cost?

Once we have identified our variables, say \(h\text{,}\) the length in centimeters of each horizontal cut, and \(v\text{,}\) the length of each vertical cut, we need to find a formula for the cost because that is what we want to minimize. Let \(p\) be the unit cost for a vertical cut of 1 cm so that \(2p\) is the cost of a horizontal cut of 1 cm. The rectangle involves two horizontal cuts and two vertical cuts. So the total cost is given by

\begin{equation*}
C = 2h \cdot 2p + 2v \cdot p = (4h+2v) p.
\end{equation*}

The value of \(p\) is just the unit cost, so we really want to minimize \(C = 4h+2v\) (measured in units of \(p\)).

Our objective function involves two independent variables. This means there must be an additional constraint. Reviewing the problem, we recall that the total area needs to be \(500 \; cm^2\text{.}\) The area is computed by \(A=h \cdot v = 500\) so that we can treat \(v\) as another dependent variable,

\begin{equation*}
v = \frac{500}{h}.
\end{equation*}

Substituting this formula into our objective function, we can rewrite it involving only a single independent variable \(h\text{:}\)

\begin{equation*}
C(h) = 4h + 2 \cdot \frac{500}{h} = 4h + \frac{1000}{h}.
\end{equation*}

Differentiating \(C(h) = 4h + \frac{1000}{h}\) gives

\begin{equation*}
C'(h) = 4 - \frac{1000}{h^2}.
\end{equation*}

As a mathematical function, \(C'(h)\) has a vertical asymptote at \(h=0\) and zeros at \(h= \pm \sqrt{250}\text{.}\) Geometrically, only \(h \gt 0\) makes physical sense, so we do sign analysis on \((0,\sqrt{250})\) and \((\sqrt{250},\infty)\text{.}\)

The sign analysis of \(C'(h)\) informs us that \(C(h)\) is decreasing on \((0,\sqrt{250})\) and increasing on \((\sqrt{250},\infty)\text{.}\) The first derivative test then guarantees that \(C(h)\) has a local minimum at \(h=\sqrt{250}\) over the interval \((0,\infty)\) which is the entire physically relevant domain. A graph of the cost as a function of \(h\) illustrates our result.

We finish by interpreting our mathematics. The question was how to cut the rectangle. Our analysis gave us a value \(h=\sqrt{250} \approx 15.81 \; cm\text{.}\) We also need \(v\text{,}\) which was another dependent variable:

\begin{equation*}
v = \frac{500}{h} = \frac{500}{\sqrt{250}} = 2\sqrt{250} \approx 31.62 \; cm.
\end{equation*}

The minimal cost rectangle would have a horizontal cut of 15.81 cm and a vertical cut of 31.62 cm.
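As a numeric cross-check (a sketch, not part of the solution above), we can scan the cost over a grid of values \(h \gt 0\text{,}\) using \(C = 4h + 2v\) with \(v = 500/h\text{,}\) and confirm that the smallest cost occurs near \(h = \sqrt{250}\text{.}\)

```python
import math

def C(h):
    """Cost in units of p: C = 4h + 2v with v = 500/h, so C(h) = 4h + 1000/h."""
    return 4 * h + 1000 / h

# Scan the physically relevant domain h > 0 on a coarse grid.
hs = [0.01 * k for k in range(1, 10001)]   # h from 0.01 cm to 100 cm
h_best = min(hs, key=C)

print(round(h_best, 2))                    # ≈ 15.81, matching sqrt(250)
print(round(500 / math.sqrt(250), 2))      # v ≈ 31.62
```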

A biological example follows. A fundamental hypothesis of biology is that evolution drives organisms to maximize their fitness, which corresponds to the number of surviving offspring. There is often a trade-off between the number of offspring and the probability that the offspring survive. Let \(f\) (fecundity) represent the number of offspring an organism produces and let \(s\) (survival) represent the probability that an offspring will survive. Then the fitness is given by \(F=f \cdot s\text{,}\) the average number of offspring that survive.

Example 6.1.9

Suppose that the survival probability is related to fecundity so that it decreases exponentially according to the equation

\begin{equation*}
s = e^{-0.012f}.
\end{equation*}

How many offspring should the organism have to maximize fitness?

First, identify the variables. The objective function is the fitness \(F\text{.}\) This depends on both \(f\) (fecundity) and \(s\) (survival probability) through \(F = f \cdot s\text{.}\) The given relation between \(f\) and \(s\) expresses \(s\) as a dependent variable so that we treat \(f\) as the independent variable. Then we can write

\begin{equation*}
F = f \cdot e^{-0.012f}.
\end{equation*}

Second, we compute a derivative to determine the extreme values. Using the product rule, we find

\begin{equation*}
F'(f) = 1 \cdot e^{-0.012f} + f \cdot (-0.012) e^{-0.012f} = (1-0.012f) e^{-0.012f}.
\end{equation*}

The exponential factor is always positive, so \(F'(f)=0\) only when \(1-0.012f = 0\text{,}\) which has the solution \(f = \frac{1}{0.012} = \frac{250}{3}\text{.}\) Using this, our sign analysis for \(F'(f)\) consists of two intervals.

The sign analysis implies that \(F\) is increasing on \((-\infty,\frac{250}{3})\) and decreasing on \((\frac{250}{3},\infty)\text{.}\) The first derivative test then implies that \(F\) has a maximum value when \(f=\frac{250}{3}\approx 83.33\) on the interval \((-\infty,\infty)\text{.}\) We only needed a maximum on the physically relevant interval \([0,\infty)\text{,}\) so this result is appropriate. The organism will optimize fitness if the average number of offspring is 83.33 individuals.
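As a numeric cross-check (a sketch, not part of the solution above), we can verify that \(f = \frac{250}{3}\) is a critical point of \(F(f) = f \, e^{-0.012f}\) and that the fitness is smaller at nearby values.

```python
import math

def F(f):
    """Fitness: expected number of surviving offspring, F = f * s."""
    return f * math.exp(-0.012 * f)

def Fprime(f):
    # Product rule: F'(f) = (1 - 0.012 f) e^{-0.012 f}
    return (1 - 0.012 * f) * math.exp(-0.012 * f)

f_star = 250 / 3
print(abs(Fprime(f_star)) < 1e-9)                               # True: critical point
print(F(f_star) > F(f_star - 1) and F(f_star) > F(f_star + 1))  # True: local max
```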