##### Definition 6.1.1 Continuity at a Point

Recall our definition of continuity for a function at a single point.

A function \(f\) is *continuous* at \(a\) if \begin{equation*}\lim_{x \to a} f(x) = f(a).\end{equation*}

The single equation captures the full definition because, for the equation to be true, the limit must exist, the value of the function must exist, and the two must be equal. Also, recall that the function is *right-continuous* if the limit holds from the right (\(x \to a^+\)) and *left-continuous* if the limit holds from the left (\(x \to a^-\)).

These ideas allow us to define what we mean by saying that a function is continuous on an interval.

A function \(f\) is *continuous on an interval* \((a,b)\) if \(f\) is continuous at every point \(x \in (a,b)\). We can include an endpoint if the limit statement is true coming from within the interval. That is, we include \(a\) if \begin{equation*}\lim_{x \to a^+}f(x)=f(a)\end{equation*} and we include \(b\) if \begin{equation*}\lim_{x \to b^-}f(x)=f(b).\end{equation*}

There are two important theorems that describe what we know about functions that are continuous on an interval. The Extreme Value Theorem guarantees that any function that is continuous on a closed interval attains a highest and a lowest value on that interval. The Intermediate Value Theorem guarantees that a function that is continuous on a closed interval cannot skip over any values between its values at the endpoints. The proofs of both theorems require advanced methods not taught at this level, so we treat them essentially as axioms: statements accepted as true without proof.

Suppose \(f\) is a function that is continuous on \([a,b]\). Then there must exist values \(c_m, c_M \in [a,b]\) so that for any \(x \in [a,b]\) we have \begin{equation*}f(c_m) \le f(x) \le f(c_M).\end{equation*} The values \(f(c_m)\) and \(f(c_M)\) are the minimum and maximum values, respectively, of the function \(f\) on \([a,b]\).
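The guarantee of the Extreme Value Theorem can be illustrated numerically. The sketch below uses an assumed sample function, \(f(x)=x^3-x\) on \([0,2]\) (not an example from the text), and a fine grid search to approximate the locations \(c_m\) and \(c_M\) that the theorem guarantees exist.

```python
# Numerical illustration of the Extreme Value Theorem.
# The sample function f(x) = x**3 - x on [0, 2] is an assumption for
# illustration; a fine grid search approximates the guaranteed extremes.

def f(x):
    return x**3 - x

a, b = 0.0, 2.0
n = 100_000
xs = [a + (b - a) * i / n for i in range(n + 1)]

c_m = min(xs, key=f)   # approximate location of the minimum
c_M = max(xs, key=f)   # approximate location of the maximum

print(c_m, f(c_m))  # near x = 1/sqrt(3) ≈ 0.577, f ≈ -0.385
print(c_M, f(c_M))  # x = 2, f = 6
```

A grid search only approximates the extremes; the theorem itself asserts that exact points \(c_m\) and \(c_M\) exist whenever \(f\) is continuous on the closed interval.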

If a function is not continuous on \([a,b]\), then it does not necessarily have a maximum or minimum value. One way that this might happen is if \(f\) has a vertical asymptote within the interval, in which case the values of \(f\) would be unbounded. Another way that this might happen is that \(f\) approaches what would be its maximum (or minimum) value but never reaches it because of a sudden jump.

Consider the function defined piecewise as \begin{equation*}f(x) = \begin{cases} \frac{1}{x^2}, & x \ne 0, \\ 0, & x=0. \end{cases}\end{equation*} This function has a non-removable discontinuity at \(x=0\), corresponding to a vertical asymptote. Because the formula has \(x^2\) in the denominator (always positive), we have \begin{equation*}\lim_{x \to 0} f(x) = +\infty.\end{equation*} This function is unbounded on the interval \([-1,1]\) and has no maximum. It does have a minimum value, \(f(0)=0\), since that value lies below the rest of the graph.

Consider the function defined piecewise as \begin{equation*}f(x) = \begin{cases} x^2, & -1 \lt x \lt 1, \\ \frac{1}{2}, & x = \pm 1. \end{cases}\end{equation*} This function has removable discontinuities at \(x=\pm 1\), where the limits are 1 but the values are \(\frac{1}{2}\). In this case, \(f\) is continuous on \((-1,1)\) but not on \([-1,1]\). The maximum value would have been \(y=1\), but the graph never reaches that value because of the discontinuities. The function does have a minimum value, \(f(0)=0\).

Consider the function defined piecewise as \begin{equation*}f(x) = \begin{cases} x^2, & -1 \lt x \le 1, \\ 2, & x = -1. \end{cases}\end{equation*} This function has a removable discontinuity at \(x=-1\). In this case, \(f\) is continuous on \((-1,1]\) but not on \([-1,1]\). In spite of the discontinuity at \(x=-1\), this function has a maximum value \(f(-1)=2\) because that value is above every other point in the interval.

The previous example is included to emphasize that a theorem gives conditions that guarantee something is true, but those conditions are not always required. The Extreme Value Theorem gives conditions that guarantee a function will have a maximum value: a continuous function on a closed interval always has both maximum and minimum values, without exception. But some discontinuous functions have them as well; it is just that other discontinuous functions do not.

Suppose \(f\) is a function that is continuous on \([a,b]\). Then for every \(y\) between \(f(a)\) and \(f(b)\), there exists some \(x \in (a,b)\) so that \(f(x)=y\).

The Intermediate Value Theorem guarantees that the graph of \(y=f(x)\) intersects every horizontal line between \(y=f(a)\) and \(y=f(b)\) at least once for values of \(x\) between \(a\) and \(b\). Because continuity is essentially connectedness, the only way for the graph to go from \(y=f(a)\) to \(y=f(b)\) is to cross through all intermediate values. A discontinuous function has the ability to jump across values without touching them.

Consider the function defined piecewise as \begin{equation*}f(x) = \begin{cases} -1, & x \lt 0, \\ 0, & x = 0, \\ 1, & x \gt 0. \end{cases}\end{equation*} This function has a jump discontinuity at \(x=0\) and is otherwise constant. If we consider the interval \([-1,1]\), the values at the endpoints are \(f(-1)=-1\) and \(f(1)=1\). Yet except for \(y=0\), the equation \(f(x)=y\) has no solution for \(-1 \lt y \lt 1\), because the jump skips over those intermediate values.

The Intermediate Value Theorem lets us conclude that a continuous function has a solution to an equation within a particular interval. If the interval is small, we have an approximation to the value of the solution, and we say that the interval *brackets* the solution. Finding successively smaller bracketing intervals allows us to approximate the root to any needed precision; the Intermediate Value Theorem guarantees this works for continuous functions.

The function \(f(x)=x^3-x-3\) is continuous because it is a polynomial. Because \(f(1)=-3\) and \(f(2)=3\), we know that \(f(x)\) must pass through every \(y\)-value between -3 and 3 for at least one value of \(x\) in the interval \((1,2)\). In particular, if we are solving \(f(x)=0\), since \(y=0\) is between \(f(1)=-3\) and \(f(2)=3\), we know that there is a solution \(x\) bracketed by the interval \([1,2]\).

If we find a smaller interval, then we can know more precisely where the root occurs. In particular, since \(f(1.6)=-0.504\) and \(f(1.7)=0.213\) and \(y=0\) is between those values, the Intermediate Value Theorem guarantees that our continuous function has a root bracketed by the interval \([1.6, 1.7]\).
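This process of repeatedly shrinking the bracketing interval can be automated. The sketch below applies bisection to the function \(f(x)=x^3-x-3\) from the example, starting from the bracket \([1,2]\); at each step the Intermediate Value Theorem guarantees the half with a sign change still contains a root.

```python
# A minimal bisection sketch for f(x) = x**3 - x - 3, the function from
# the example.  Starting from the bracket [1, 2], each step halves the
# bracketing interval while preserving the sign change at its endpoints.

def f(x):
    return x**3 - x - 3

lo, hi = 1.0, 2.0            # f(lo) < 0 < f(hi): the root is bracketed
while hi - lo > 1e-10:
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:  # sign change in [lo, mid]: keep that half
        hi = mid
    else:                    # otherwise the root is in [mid, hi]
        lo = mid

print(lo)  # ≈ 1.6717, consistent with the bracket [1.6, 1.7]
```

After each iteration the bracket's width is halved, so roughly three or four iterations buy one more decimal digit of precision.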

When we studied the definite integral, we learned that continuity implies integrability. However, a discontinuous function might still be integrable. For example, the definite integral of a piecewise continuous function with a finite number of jump discontinuities can be computed using the splitting property. The total definite integral would be equal to the sum of the definite integrals on each of the subintervals.

Continuity does guarantee something stronger than integrability. It guarantees that the function attains its average value over an interval. To make this precise, we first need to define the average value.

The *average value* of a function \(f\) on an interval \([a,b]\), denoted \(\langle f \rangle_{[a,b]}\), is defined as \begin{equation*} \langle f \rangle_{[a,b]} = \frac{1}{b-a} \int_{a}^{b} f(x) \, dx, \end{equation*} so long as \(f\) is integrable on \([a,b]\).

The average value is defined as the value of a constant function that has the same definite integral over the interval: \begin{equation*} \int_{a}^{b} \langle f \rangle_{[a,b]} \, dx = \langle f \rangle_{[a,b]} \cdot (b-a) = \int_{a}^{b} f(x)\, dx. \end{equation*}

The figure below illustrates a simple function \(f(x)\) defined on the interval \([0,5]\), \begin{equation*}f(x) = \begin{cases} 3, & 0 \le x \lt 1, \\ 5, & 1 \le x \lt 3, \\ -1, & 3 \le x \le 5. \end{cases}\end{equation*}

The definite integral equals the sum of the signed areas, \begin{equation*}\int_0^5 f(x) \, dx = 3 \cdot 1 + 5 \cdot 2 + (-1) \cdot 2 = 11.\end{equation*} The average value is equal to this definite integral divided by the width of the interval, \begin{equation*}\langle f \rangle_{[0,5]} = \frac{1}{5} \int_{0}^5 f(x)\,dx = \frac{11}{5}.\end{equation*}
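Because each piece of this function is constant, the signed-area computation reduces to a sum of height-times-width products, as the short sketch below shows for the example just computed.

```python
# The signed-area computation from the example as a direct calculation.
# Each piece of f is constant, so its integral is simply height * width.

pieces = [(3, 1), (5, 2), (-1, 2)]   # (value, width) for each piece on [0, 5]
integral = sum(value * width for value, width in pieces)
average = integral / 5               # divide by the width of the interval

print(integral, average)  # 11 and 2.2 = 11/5
```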

Given a function \(f\) that is continuous on \([a,b]\), there must exist a value \(c \in (a,b)\) such that \begin{equation*}f(c) = \langle f \rangle_{[a,b]} = \frac{1}{b-a} \int_{a}^{b} f(x)\, dx,\end{equation*} or equivalently, \(\displaystyle \int_{a}^{b} f(x)\, dx = f(c) \cdot (b-a)\).

In the previous example, \(f\) was not continuous, and we can see that the graph \(y=f(x)\) never intersects the horizontal line at the average value \(\langle f \rangle_{[0,5]} = \frac{11}{5}\). The Mean Value Theorem for Integrals guarantees that when the function is continuous, the constant function at the average value must intersect the graph \(y=f(x)\).

The function \(f(x)=x^2\) is continuous everywhere. The average value on the interval \([-1,2]\) can be found using the elementary accumulation formula for a quadratic rate and the splitting property. \begin{align*} \langle f \rangle_{[-1,2]} &= \frac{1}{2-(-1)} \int_{-1}^{2} x^2 \, dx \\ &= \frac{1}{3} \Big( \int_0^2 x^2 \, dx - \int_0^{-1} x^2 \, dx \Big) \\ &= \frac{1}{3} \Big( \frac{1}{3}(2^3) - \frac{1}{3}(-1)^3 \Big) = \frac{1}{3}\Big(\frac{8}{3} + \frac{1}{3}\Big) = 1 \end{align*} A figure showing the graphs \(y=f(x)=x^2\) and \(y=\langle f \rangle_{[-1,2]} = 1\) is shown below. The Mean Value Theorem for Integrals predicted the existence of a point \(c \in (-1,2)\) where \(f(c)=\langle f \rangle_{[-1,2]}=1\), which we can see occurs at \(c=1\).
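This example can also be checked numerically. The sketch below approximates the average value of \(f(x)=x^2\) on \([-1,2]\) with a midpoint Riemann sum, then uses bisection on \([0,2]\) (where \(f\) is increasing) to locate the point \(c\) the Mean Value Theorem for Integrals guarantees.

```python
# A numerical check of the Mean Value Theorem for Integrals applied to
# f(x) = x**2 on [-1, 2], the function from the example.

def f(x):
    return x**2

a, b, n = -1.0, 2.0, 100_000
dx = (b - a) / n
# Midpoint Riemann sum for the integral, divided by the interval width
average = sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx / (b - a)

# f is increasing on [0, 2], so bisection there locates c with f(c) = average
lo, hi = 0.0, 2.0
while hi - lo > 1e-10:
    mid = (lo + hi) / 2
    if f(mid) < average:
        lo = mid
    else:
        hi = mid

print(average, lo)  # average ≈ 1, c ≈ 1
```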

The Mean Value Theorem for Integrals also provides the justification for the Monotonicity Test for Accumulation Functions.

Suppose that \(A(x)\) is an accumulation function with corresponding rate function \(f(x)\), and suppose that \(f(x)\) is continuous on \([a,b]\).

If \(f(x) \gt 0\) for all \(x \in (a,b)\), then \(A(x)\) is increasing on \([a,b]\).

If \(f(x) \lt 0\) for all \(x \in (a,b)\), then \(A(x)\) is decreasing on \([a,b]\).

If \(f(x) = 0\) for all \(x \in (a,b)\), then \(A(x)\) is constant on \([a,b]\).
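The first case of the test can be illustrated numerically. The sketch below uses an assumed sample rate function, \(f(x)=x^2+1\) (everywhere positive, not an example from the text), approximates the accumulation \(A(x)=\int_0^x f(t)\,dt\) with midpoint Riemann sums, and checks that the sampled values increase.

```python
# Numerical sketch of the Monotonicity Test.  The rate f(x) = x**2 + 1 is
# an assumed sample function satisfying f(x) > 0 everywhere, so the
# accumulation A(x) from 0 should be increasing.

def f(x):
    return x**2 + 1   # positive rate function

def A(x, n=10_000):
    # midpoint Riemann sum approximating the accumulation from 0 to x
    dx = x / n
    return sum(f((i + 0.5) * dx) for i in range(n)) * dx

xs = [0.1 * k for k in range(11)]          # sample points in [0, 1]
values = [A(x) for x in xs]
print(all(v2 > v1 for v1, v2 in zip(values, values[1:])))  # True
```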