
Section 10.2 Differentiable Functions

When we learned about the definite integral and defined accumulation functions, we were able to characterize the behavior of those functions by considering the behavior of the rates of accumulation. In particular, we learned that the monotonicity of a function depended on the sign of the rate of accumulation, and the concavity of a function depended, in turn, on the monotonicity of the rate. These concepts have a natural analogue for functions in terms of their derivatives, even when a function is not defined as an accumulation function.

In this section, we introduce the major theorems relating to differentiability. Differentiability is a property of a function characterized by where the derivative is defined. We first learn that local extremes can only occur at critical points, which are points where the derivative equals zero or is not defined. We then learn about Rolle's theorem, which guarantees a point with zero derivative. The principal value of Rolle's theorem is in proving the Mean Value Theorem for Derivatives. The Mean Value Theorem plays a prominent role in characterizing the behavior of functions. In particular, it will be used to prove that antiderivatives of the same function can only differ by constants.

Subsection 10.2.1 Differentiability of Functions

Recall that continuity and differentiability are properties of functions. To say that a function is continuous at a point means that the function has a value at that point and that the limits of the function from both the left and the right converge to that same value. The property of continuity essentially characterizes the idea that the graph of the function is connected at the given point. In a similar way, to say that a function is differentiable at a point means that the limit defining the derivative at that point exists. Differentiability guarantees that a function has a linear tangent line approximation.

Now that we know how to compute derivatives with the rules of differentiation, we can consider when these functions are differentiable. As an example, consider power functions \(f(x)=x^p\text{.}\) When \(p\) is an irrational number, this is defined in terms of the exponential \(f(x) = e^{p \ln(x)}\text{,}\) so that the domain is \(x \gt 0\text{.}\) However, when \(p\) is a rational number \(p = \frac{k}{n}\) for integers \(k\) and \(n\) with \(n \gt 0\text{,}\) then \(f(x) = x^{k/n}\) is defined by the \(n\)th roots of \(x\text{,}\)

\begin{equation*} f(x) = x^{k/n} = (\sqrt[n]{x})^k. \end{equation*}

For odd values \(n\text{,}\) the root \(\sqrt[n]{x}\) is defined for all values of \(x\text{.}\) However, \(f(0)\) is only defined if \(k \ge 0\text{.}\)

What about the derivative? We have

\begin{equation*} f'(x) = \frac{k}{n} x^{(k-n)/n}. \end{equation*}

If \(k \ge n\text{,}\) then \(f'(0)\) will exist. However, if \(k \lt n\text{,}\) corresponding to \(0 \lt p=\frac{k}{n} \lt 1\text{,}\) then \(f'(0)\) will not exist. This is an example of a nondifferentiable function. Graphically, the tangent line at that point is vertical, so the slope is undefined and the limits defining the derivative are infinite.
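As a computational aside (a sketch using the SymPy library; the symbol names are illustrative and not part of the text), we can confirm that for \(p=\frac{2}{5}\) the derivative grows without bound as \(x \to 0^+\text{,}\) which is what a vertical tangent line means:

```python
# Sketch: verify that f(x) = x**(2/5) has an unbounded derivative at x = 0.
import sympy as sp

x = sp.symbols('x', positive=True)
f = x**sp.Rational(2, 5)
fprime = sp.diff(f, x)  # (2/5) * x**(-3/5)

# The one-sided limit of the derivative as x -> 0+ is infinite,
# so the tangent line at x = 0 is vertical.
print(sp.limit(fprime, x, 0, dir='+'))  # oo
```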

Figure 10.2.1. Examples of power functions that are nondifferentiable at \(x=0\text{:}\) (a) \(p=\frac{2}{5}\) and (b) \(p=\frac{3}{5}\text{.}\)

When we first introduced the concept of differentiability, we used piecewise functions to provide examples of nondifferentiable functions. In those examples, we used the definition of the derivative. With the rules of differentiation, we can determine differentiability more directly.

Suppose that \(f\) is defined piecewise by a formula \(f_\ell(x)\) for \(x \lt a\) and a formula \(f_r(x)\) for \(x \gt a\text{.}\) Because \(f_\ell\) and \(f_r\) are continuous, the requirements of continuity that

\begin{equation*} \lim_{x \to a^-} f(x) = f(a) \quad \text{and} \quad \lim_{x \to a^+} f(x) = f(a) \end{equation*}

are replaced by \(f_\ell(a) = f(a)\) and \(f_r(a) = f(a)\text{.}\) Similarly, the calculation of the derivative using the definition reduces to the values of the derivatives of \(f_\ell\) and \(f_r\) at \(x=a\text{:}\)

\begin{align*} \lim_{h \to 0^-} \frac{f(a+h) - f(a)}{h} &= \lim_{h \to 0^-} \frac{f_\ell(a+h) - f_\ell(a)}{h}\\ &= f'_\ell(a), \\ \lim_{h \to 0^+} \frac{f(a+h) - f(a)}{h} &= \lim_{h \to 0^+} \frac{f_r(a+h) - f_r(a)}{h}\\ &= f'_r(a). \end{align*}

For the two sided limit to exist, and thus for the derivative itself to exist, the left- and right-side limits must agree, \(f'_\ell(a) = f'_r(a)\text{.}\) Then \(f'(a) = f'_\ell(a) = f'_r(a)\text{.}\)

Example 10.2.3.

Determine the values of \(a\) and \(b\) so that the function

\begin{equation*} f(x) = \begin{cases} x^2-2x, & x \le 2, \\ -2x^2+ax+b, & x \gt 2, \end{cases} \end{equation*}

is differentiable at \(x=2\text{.}\)

Solution

The function used for \(x \le 2\) is \(f_\ell(x)=x^2-2x\text{,}\) and the function used for \(x \gt 2\) is \(f_r(x) = -2x^2+ax+b\text{.}\) The derivatives are found using differentiation rules:

\begin{align*} f'_\ell(x) &= 2x-2,\\ f'_r(x) &= -4x+a. \end{align*}

The requirement for continuity will give us one equation, which we simplify:

\begin{gather*} f_\ell(2) = f_r(2)\\ 2^2-2(2) = -2(2^2)+a(2)+b\\ 0 = 2a+b-8\\ 2a+b = 8. \end{gather*}

This means that so long as \(b=8-2a\text{,}\) \(f(x)\) will be continuous at \(x=2\text{.}\) However, it may or may not be differentiable, depending on whether the derivatives match.

The requirement that the left- and right-sided derivatives are equal gives us a second equation, which we also simplify:

\begin{gather*} f'_\ell(2) = f'_r(2)\\ 2(2)-2 = -4(2)+a\\ 2 = -8+a\\ a=10. \end{gather*}

Once we know \(a=10\text{,}\) we can substitute that into the first equation to find \(b\text{:}\)

\begin{gather*} b = 8 - 2a\\ b = 8 - 2(10)\\ b = -12. \end{gather*}

Consequently, \(f(x)\) will be differentiable at \(x=2\) if and only if \(a=10\) and \(b=-12\text{.}\)
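For readers who want to check this work computationally, the two matching conditions can be solved as a system with SymPy (a sketch outside the text proper; the variable names are our own):

```python
# Sketch: solve Example 10.2.3 symbolically.
import sympy as sp

x, a, b = sp.symbols('x a b')
f_left = x**2 - 2*x
f_right = -2*x**2 + a*x + b

# Continuity at x = 2: the two pieces must agree there.
continuity = sp.Eq(f_left.subs(x, 2), f_right.subs(x, 2))
# Differentiability at x = 2: the one-sided derivatives must agree.
smoothness = sp.Eq(sp.diff(f_left, x).subs(x, 2),
                   sp.diff(f_right, x).subs(x, 2))

sol = sp.solve([continuity, smoothness], [a, b])
print(sol)  # {a: 10, b: -12}
```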

Drag the sliders to change parameters \(a\) and \(b\) in order to make \(f(x)\) differentiable at \(x=2\text{.}\)

Figure 10.2.4. A graph of \(f(x)\) where parameters \(a\) and \(b\) can be changed dynamically.

Subsection 10.2.2 Consequences of Differentiability

There are a number of important consequences of a function being differentiable. These consequences are stated as mathematical theorems. The first such theorem focuses on differentiability at local extreme values.

Fermat's Theorem states that if \(f\) has a local extreme value at \(x=a\) and \(f'(a)\) exists, then \(f'(a)=0\text{.}\) To see why, suppose that \(f\) has a local maximum at \(x=a\text{.}\) Then there is some value \(\delta \gt 0\) so that if \(a-\delta \lt x \lt a+\delta\text{,}\) we must have \(f(x) \le f(a)\text{.}\) For \(-\delta \lt h \lt 0\text{,}\) we therefore have \(f(a+h) - f(a) \le 0\) so that dividing by \(h \lt 0\) gives

\begin{equation*} \frac{f(a+h) - f(a)}{h} \ge 0. \end{equation*}

This implies that

\begin{equation*} \lim_{h \to 0^-} \frac{f(a+h) - f(a)}{h} \ge 0. \end{equation*}

For \(0 \lt h \lt \delta\text{,}\) we also have \(f(a+h) - f(a) \le 0\) so that dividing by \(h \gt 0\) gives

\begin{equation*} \frac{f(a+h) - f(a)}{h} \le 0. \end{equation*}

Thus, we have

\begin{equation*} \lim_{h \to 0^+} \frac{f(a+h) - f(a)}{h} \le 0. \end{equation*}

If \(f'(a)\) exists, these one-sided limits must be equal, and the only value satisfying both inequalities is \(f'(a)=0\text{.}\)

If \(f\) has a local minimum at \(x=a\text{,}\) the argument is similar.

If we are looking for extreme values of a function, we can ignore all points where \(f'(x)\) exists but \(f'(x) \ne 0\text{.}\) The only points in the domain of \(f\) that might be considered are where \(f'(x)\) does not exist or where \(f'(x)=0\text{,}\) so that \(f\) has a horizontal tangent line. We call such points the critical points of \(f\text{.}\)

Definition 10.2.6.

The critical points of a function \(f\) are all values in the domain of \(f\) such that \(f'(x)\) does not exist or \(f'(x)=0\text{.}\)
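As a brief illustration (a hypothetical example chosen for this aside), the critical points of a polynomial can be found by solving \(f'(x)=0\text{,}\) since a polynomial's derivative exists everywhere:

```python
# Sketch: locate the critical points of f(x) = x**3 - 3*x.
import sympy as sp

x = sp.symbols('x')
f = x**3 - 3*x
fprime = sp.diff(f, x)  # 3*x**2 - 3

# The derivative exists for all x, so the only critical
# points are the solutions of f'(x) = 0.
critical = sp.solve(sp.Eq(fprime, 0), x)
print(critical)  # [-1, 1]
```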

The second theorem combines the Extreme Value Theorem with Fermat's Theorem. If a function is continuous on a closed interval \([a,b]\text{,}\) then it must achieve both a maximum and a minimum value. If that function has \(f(a)=f(b)\text{,}\) then one of the extreme values must occur inside the interval at some point \(c \in (a,b)\text{.}\) If the function is also differentiable, then we must have \(f'(c)=0\text{.}\) This result is named Rolle's theorem.

The argument is given in the paragraph preceding the theorem. The hypothesis of continuity allows us to apply the Extreme Value Theorem. The hypothesis of differentiability allows us to apply Fermat's Theorem to the local extreme that was guaranteed at the point between \(a\) and \(b\text{.}\)

The consequence of Rolle's theorem is that if a function starts and ends at the same value over an interval, it must turn around somewhere with a horizontal tangent.

Figure 10.2.8. A graphical illustration of Rolle's theorem. Note that extreme values have horizontal tangents.

Rolle's theorem is not usually applied on its own. It is most often referenced in the context of proving more useful theorems. The third theorem about differentiability applies Rolle's theorem to establish the Mean Value Theorem for derivatives, which relates the derivative to the average rate of change. Recall that the average rate of change,

\begin{equation*} \left.\frac{\Delta f}{\Delta x}\right|_{[a,b]} = \frac{f(b)-f(a)}{b-a}, \end{equation*}

is the slope of the line, called a secant line, that joins the points \((a,f(a))\) and \((b,f(b))\text{.}\) The Mean Value Theorem guarantees that a continuous and differentiable function will have some point at which the tangent line has the same slope as the secant line over the given interval.

Figure 10.2.9. A graphical illustration of the Mean Value Theorem. Note that at the point furthest from the secant line (dashed), the slope matches that of the secant line.

Let \(s(x)\) be the linear function corresponding to this secant line. That is, \(s(a)=f(a)\) and \(s(b)=f(b)\) and \(s(x)\) has the constant slope

\begin{equation*} s'(x) = \frac{f(b)-f(a)}{b-a}. \end{equation*}

We now define \(g(x)=f(x)-s(x)\text{.}\) Since \(s(a)=f(a)\) and \(s(b)=f(b)\text{,}\) we have \(g(a)=g(b)=0\text{.}\) If \(f\) is continuous and differentiable, then so is \(g\text{.}\) Rolle's theorem guarantees that \(g'(c)=f'(c)-s'(c) = 0\) for some value \(c \in (a,b)\text{.}\) Thus, \(\displaystyle f'(c)=s'(c)=\left.\frac{\Delta f}{\Delta x}\right|_{[a,b]}\text{.}\)
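To make the theorem concrete, we can compute the guaranteed point \(c\) for a specific function and interval. The following SymPy sketch uses the illustrative choice \(f(x)=x^3\) on \([0,2]\) (our own example, not from the text):

```python
# Sketch: find the point c guaranteed by the Mean Value Theorem
# for f(x) = x**3 on the interval [0, 2].
import sympy as sp

x, c = sp.symbols('x c')
f = x**3
a_pt, b_pt = 0, 2

# Slope of the secant line over [a, b].
secant = (f.subs(x, b_pt) - f.subs(x, a_pt)) / (b_pt - a_pt)  # 4

# Solve f'(c) = secant slope and keep solutions inside (a, b).
candidates = sp.solve(sp.Eq(sp.diff(f, x).subs(x, c), secant), c)
inside = [v for v in candidates if a_pt < v < b_pt]
print(inside)
```

The one admissible solution is \(c = \frac{2\sqrt{3}}{3} \approx 1.155\text{,}\) which indeed lies strictly between \(0\) and \(2\text{.}\)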

Subsection 10.2.3 Applications of the Mean Value Theorem

The Mean Value Theorem for derivatives allows us to know that the average rate of change of a differentiable function between any two points will be equal to the instantaneous rate of change at some point within the interval. Consequently, if we know properties of the derivative on entire intervals, that can provide information about how the function is changing on the interval. In particular, we learn that the sign of a derivative can be used to determine monotonicity of a function.

Consider any two points \(a,b \in I\) with \(a \lt b\text{.}\) Because \(f\) is differentiable on \(I\text{,}\) we know that \(f\) is continuous and differentiable on the subinterval \([a,b]\text{.}\) The Mean Value Theorem guarantees the existence of a point \(c \in (a,b)\) such that

\begin{equation*} f(b)-f(a) = f'(c)\cdot(b-a). \end{equation*}

Now assume that \(f'(x) \gt 0\) for all \(x \in I\text{.}\) Then \(f'(c) \gt 0\) and \(b-a \gt 0\text{,}\) guaranteeing that \(f(b)-f(a) \gt 0\text{.}\) That is, \(f(b) \gt f(a)\text{.}\) This is what is needed to show that \(f\) is increasing on \(I\text{.}\)

Next assume that \(f'(x) \lt 0\) for all \(x \in I\text{.}\) Then \(f'(c) \lt 0\) while \(b-a \gt 0\text{,}\) guaranteeing that \(f(b)-f(a) \lt 0\text{.}\) That is, \(f(b) \lt f(a)\text{,}\) which shows that \(f\) is decreasing on \(I\text{.}\)

Finally assume that \(f'(x) = 0\) for all \(x \in I\text{.}\) Then \(f'(c) = 0\text{,}\) implying that \(f(b)-f(a) = 0\text{.}\) That is, \(f(b) = f(a)\text{,}\) which shows that \(f\) is constant on \(I\text{.}\)

We can now justify doing the same sign analysis work using a derivative as we did for the rate of accumulation functions. What has changed? Our previous justification required that the function be written as an accumulation function with a known rate of accumulation. Now, we can do the same type of sign analysis with any function whose derivative we can determine.

Because the second derivative gives the rate of change of the first derivative, we can use sign analysis of \(f''(x)\) to describe concavity of \(f(x)\text{.}\)

This is just Theorem 10.2.11 applied to \(f'\text{.}\) Once we know that \(f'\) is increasing on \(I\text{,}\) the definition of concavity allows us to say that \(f\) is concave up on \(I\text{.}\) Similarly, knowing that \(f'\) is decreasing is equivalent to saying that \(f\) is concave down. If \(f'\) is constant on an interval \(I\text{,}\) this is exactly what it means for \(f\) to be linear on \(I\text{.}\)

Example 10.2.13.

Describe the monotonicity and concavity of \(f(x) = xe^{-2x}\text{.}\)

Solution

Start by computing the first and second derivatives. Note that we must use the product rule:

\begin{align*} f(x) &= xe^{-2x}, \\ f'(x) &= 1 \cdot e^{-2x} + x \cdot -2e^{-2x} \\ &= (1-2x)e^{-2x},\\ f''(x) &= -2 \cdot e^{-2x} + (1-2x) \cdot -2e^{-2x} \\ &= (-2-2+4x) e^{-2x} \\ &= (-4+4x) e^{-2x}. \end{align*}

We can now do sign analysis for \(f'(x)\) and \(f''(x)\text{.}\) Because \(e^{-2x}\) is a factor for each of the functions, we will use the fact that \(e^{-2x} \gt 0\) for all values of \(x\text{.}\) The only point where \(f'(x)=0\) is where \(1-2x=0\) or \(x=\frac{1}{2}\text{.}\) The resulting sign analysis summary for \(f'(x)\) is shown below.

The only point where \(f''(x)=0\) is where \(-4+4x=0\) or \(x=1\text{.}\) The resulting sign analysis summary for \(f''(x)\) is shown below.

We now interpret our results. Because \(f\) is continuous, we can extend open intervals to include end-points. The function \(f(x)\) is increasing on \((-\infty,\frac{1}{2}]\) and decreasing on \([\frac{1}{2},\infty)\text{.}\) In addition, \(f(x)\) is concave down on \((-\infty,1]\) and concave up on \([1,\infty)\text{.}\) A graph of \(y=f(x)\) is shown below, with the local maximum at \(x=\frac{1}{2}\) and the point of inflection at \(x=1\text{.}\)
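The sign analysis in this example can be reproduced computationally. The SymPy sketch below (an aside; the sample points are our own choices) finds the zeros of \(f'\) and \(f''\) and samples the sign of each factor on either side:

```python
# Sketch: reproduce the sign analysis of Example 10.2.13.
import sympy as sp

x = sp.symbols('x')
f = x * sp.exp(-2*x)

f1 = sp.factor(sp.diff(f, x))      # first derivative, factored
f2 = sp.factor(sp.diff(f, x, 2))   # second derivative, factored

# Zeros of the first and second derivatives.
print(sp.solve(sp.Eq(f1, 0), x))   # [1/2]
print(sp.solve(sp.Eq(f2, 0), x))   # [1]

# Sample the signs on each side of those zeros.
print(f1.subs(x, 0) > 0, f1.subs(x, 1) < 0)   # f' changes + to - at 1/2
print(f2.subs(x, 0) < 0, f2.subs(x, 2) > 0)   # f'' changes - to + at 1
```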

Subsection 10.2.4 Classifying Antiderivatives

The Mean Value Theorem results in another important consequence: all antiderivatives of a particular function differ by constants. In particular, the result applies on each interval where the antiderivatives are differentiable.

Define a function \(H(x) = G(x) - F(x)\text{.}\) Because \(F\) and \(G\) are differentiable at all \(x \in I\text{,}\) \(H(x)\) is both continuous and differentiable at all \(x \in I\text{.}\) With \(F'(x)=G'(x)\text{,}\) we have \(H'(x)=0\) for all \(x \in I\text{.}\) Consequently, by Theorem 10.2.11, \(H(x)\) is constant on \(I\text{,}\) or \(H(x) = C\) for some constant \(C\text{.}\) Therefore \(G(x)-F(x)=C\) or \(G(x)=F(x)+C\text{.}\)

Be aware that the constant only applies to an interval where the antiderivatives are differentiable. The constant can be different over different intervals.

Example 10.2.15.

We know that \(F(x) = \ln(|x|)\) is an antiderivative of \(\displaystyle f(x) = \frac{1}{x}\text{.}\) Now, construct

\begin{equation*} G(x)=\begin{cases} \ln(-2x), & x \lt 0, \\ \ln(3x), & x \gt 0. \end{cases} \end{equation*}

We can differentiate on each interval:

\begin{equation*} G'(x)=\begin{cases} \frac{d}{dx}\Big[\ln(-2x)\Big] = \frac{-2}{-2x} = \frac{1}{x}, & x \lt 0, \\ \frac{d}{dx}\Big[\ln(3x)\Big] = \frac{3}{3x} = \frac{1}{x}, & x \gt 0. \end{cases} \end{equation*}

This shows that \(F(x)\) and \(G(x)\) are each antiderivatives of \(f(x)\text{.}\)

So what are the constants on the intervals? They can be found from the properties of logarithms:

\begin{gather*} \ln(-2x) = \ln(2) + \ln(-x),\\ \ln(3x) = \ln(3) + \ln(x). \end{gather*}

We see that on the interval \((-\infty,0)\text{,}\) \(G(x) = F(x) + \ln(2)\text{,}\) but on the interval \((0,\infty)\text{,}\) \(G(x) = F(x) + \ln(3)\text{.}\)
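These computations can be confirmed with SymPy by treating each interval separately through sign-restricted symbols (an illustrative sketch; the symbols `xneg` and `xpos` are our own stand-ins for \(x \lt 0\) and \(x \gt 0\)):

```python
# Sketch: confirm the computations of Example 10.2.15 on each interval.
import sympy as sp

xneg = sp.symbols('xneg', negative=True)   # stands for x < 0
xpos = sp.symbols('xpos', positive=True)   # stands for x > 0

# Both branches of G differentiate to 1/x on their own interval.
assert sp.simplify(sp.diff(sp.log(-2*xneg), xneg) - 1/xneg) == 0
assert sp.simplify(sp.diff(sp.log(3*xpos), xpos) - 1/xpos) == 0

# The constant separating G from F = ln|x| on (0, oo) is ln(3).
diff_const = sp.expand_log(sp.log(3*xpos)) - sp.log(xpos)
print(diff_const)  # log(3)
```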

Subsection 10.2.5 Summary

  1. Differentiability: We look for points in the domain of \(f(x)\) where \(f'(x)\) also exists. The function is nondifferentiable if \(f'(x)\) does not exist.

    Examples of causes for nondifferentiability: \(f(x)\) not being continuous, left- and right-slopes differ, or the tangent line is vertical.

  2. Theorem 10.2.5: Local extremes of \(f(x)\) can only occur at critical points, which are values where \(f'(x)=0\) or \(f'(x)\) does not exist.

  3. Theorem 10.2.7 (Rolle's theorem): A continuous and differentiable function that takes the same value at two points will have a horizontal tangent at some point between them.

  4. Theorem 10.2.10: For a continuous and differentiable function, the average rate of change on an interval will be matched by the slope of the tangent line at some intermediate point.

  5. The Mean Value Theorem provides the justification of using sign analysis of \(f'(x)\) and \(f''(x)\) to determine intervals of monotonicity and concavity, respectively, for the function \(f(x)\text{.}\)

  6. The Mean Value Theorem also provides the justification that any two antiderivatives of a function \(f(x)\) can differ at most by a constant value over an interval on which they are differentiable.

Exercises 10.2.6 Exercises