*Halfway through, the idea of “exponent” takes a sudden turn.
And—fair warning—so does this post.*

A drumroll and an awed hush, please! Here’s my teaching load for this year:

Though it makes my Yankee eyeballs melt and dribble out of my head, this is a fairly typical schedule here in England. The one aberration—a scheduling concession my Head of Department graciously made—is that instead of a group each of Year 8 and Year 9, I’ve got two of the latter.

That means I get to focus (so to speak) on that critical year when “elementary” math (the stuff every citizen needs) yields to “advanced” math (the gateway to specialized professions and fields of expertise). And what proud little gatekeeper stands at this fork in the road, welcoming those students who understand its nature, and vindictively punishing those who don’t?

Why, the exponent, of course!

Exponents start pretty simple. Exponentiation is just repeated multiplication. The big number tells you what you’re multiplying, and the little parrot-number on its shoulder tells you how many times to multiply it:

Sometimes we multiply them together, like this:

From this pattern you can glean a simple rule, the kind of tidy and easy-to-apply fact that we lovingly expect from mathematics class:

But this is when exponents take a sudden turn. Without much warning, we rebel against our original definition—“exponentiation is repeated multiplication”—and start complaining about its flaws.

Specifically, that definition makes perfect sense for values like 5^{4} or 22^{7}, or even (-3.5)^{14}. But what about when the exponent is negative? Or zero? Or a fraction? What would it mean to compute, say, 9^{1/2}—i.e., to multiply 9 by itself “half of a time”?

To say “exponentiation is repeated multiplication” is perfectly pleasant. But it takes us only so far. It opens up the world of whole-number exponents, but leaves other realms locked behind soundproof doors.

And so we renounce this definition, and begin to worship a new one: Exponentiation is “the thing that follows the rule a^{b}a^{c} = a^{b+c}.”

It’s a weird change of game plan. We’re abandoning a clear-cut explanation of exponentiation in favor of a more nebulous one. Instead of defining the operation by how you actually do it (“multiply repeatedly”), we’re defining it by an abstract rule that it happens to follow.

Why bother? Because suddenly we can make sense of statements like 9^{0}, 9^{1/2}, and 9^{-2}.

Any number to the zero must equal one—because our rule says so.

Any number to the ½ must be the number’s square root—because our rule says so.

And any number to the –n must equal the reciprocal of that number to the n—because our rule says so.
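These three consequences can be checked numerically. Here is a quick sketch in Python (my own illustration, not from the post):

```python
# The rule a^b * a^c = a^(b+c), pushed into new terrain:
a = 9.0

# Zero: a^0 * a^1 = a^(0+1), so a^0 must equal a^1 / a^1 = 1.
assert a**0 == 1.0

# One half: a^(1/2) * a^(1/2) = a^1, so a^(1/2) must be a square root of a.
assert abs(a**0.5 * a**0.5 - a) < 1e-9

# Negative: a^(-2) * a^2 = a^0 = 1, so a^(-2) must be the reciprocal of a^2.
assert abs(a**-2 - 1 / a**2) < 1e-12
```

Each assertion is just the rule applied once, with one exponent chosen so the sum lands back on familiar ground.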

These new statements represent a funny sort of mathematical fact. They’re not just arbitrary and capricious, as students might grudgingly maintain. But nor are they 100% natural and inevitable, as teachers might optimistically insist. Rather, these truths depend on a leap of faith, a change of heart, an *extension* of the exponent into terrain where it could not originally tread.

We tear out the first page of our exponentiation bible, and replace it with a rule that, when we first encountered it, felt merely peripheral or secondary.

I celebrate this as a magnificent sleight of hand, an M. Night Shyamalan twist that reconfigures your sense of everything that came before.

When you meet exponentiation at a cocktail party, and ask it what it does for a living, it replies, “Oh, I’m just repeated multiplication.” But it’s only being modest. It has a secret identity, as the all-important operation that translates fluently between addition and multiplication.

Why am I so smitten with this? Well, because weirdly enough, it strikes close to home.

I’m half a decade into my teaching career, and to be honest, I scarcely remember why I originally got into the profession. To impart truths? To change lives? To “give back”? I doubt my reasons—whatever they were—carried enough oomph to sustain me for long.

But over time, my reasons have transformed. These days, I love this job because it’s equal parts social and intellectual. What other job puts you in such close contact with people *and* ideas—not just one or the other, but both of them, constantly?

My core reason for doing what I do—just like my notion of exponentiation—switched somewhere along the way.

I’m hoping some of my students can experience that same evolution. Many arrive in 6th grade as “math kids,” accustomed to top marks, easy A’s, and plentiful praise. They often cite math as their favorite subject, and I can guess why—because it makes them feel smart. All told, that’s a good thing. It’s perfectly natural to enjoy something that makes you feel like a star.

But this momentum has its limits. When you find yourself surrounded by equally talented peers, you lose heart. You don’t feel so smart anymore. It’s a straitjacket sort of success that depends on the failure of others.

Math’s saving grace, though, is that it can make us feel smart for another reason: because we’ve mastered an ancient, powerful craft. Because we’ve laid down rails of logic, and guided a train of thought smoothly to its destination. Because we’re masters—not over our peers, but over the deep patterns of the universe itself.

Above all, I hope my students learn this lesson: that, regardless of how slowly or quickly you achieve it, and regardless of how you compare to the kids surrounding you, mathematical mastery is a badge of intellect. It makes you smart. It is your glorious gain, at no one’s expense.

And if they don’t learn that, I hope they at least learn that a^{b}a^{c} = a^{b+c}.

I love the logic of “anything to the zero is 1” because it doesn’t change things. It’s the multiplicative identity.

I explain it as “everything is times one”; I like your image better – it leads into the fractions and negatives of exponents more “cleanly.”

You show 9^0.5 could be 3, but you don’t show that’s the only number it could be. For example, -3 also satisfies x*x = 9.

Yeah, fair point. To exclude -3, we might need an argument like this:

(a^b)^c = a^(bc) [by repeated use of our central rule]

So we want 9^0.25 = (9^0.5)^0.5

If we define ^0.5 as “take the positive square root of,” this can hold true.

But if we define ^0.5 as “take the negative square root of,” then the outer ^0.5 forces us into complex numbers. Assuming we’d rather stay in real numbers, we need to pick the positive square root.

There might be a better argument, but that’s what comes to mind.

√ applied to a literal (a value rather than a variable) denotes the positive root. Where would you plot √3 on a number line? Surely to the right of 0. The solutions (avoiding the overloaded use of the word “root” here) of x^2 = 3 are +√3 (right of 0) and -√3 (left of 0).

Wait, but doesn’t repeated use of the central rule only show that

(a^b)^c = a^(bc) when c is a natural number?

Maybe we should have two central rules, in which that is the second?

In any case, why would we want to exclude -3? Negative numbers are so lonely just because a lot of people prefer positives over them. 😦

It should be a valid solution according to your rules about exponents, since the way exponents were defined in this post comes from the definition of repeated multiplication, in which -3 works. Nevertheless, that is an interesting way to show the intuition behind preferring positive integers. That was pretty cool.

EDIT:

After thinking about it for another minute or so, I realized I was a bit too quick in saying that you can’t use (a^b)*(a^c)=a^(b+c) to prove (a^b)^c = a^(bc).

I just didn’t repeat the rule enough times.

First we show the fact for when c is a natural number, and then:

((3^0.5)^0.5)^4 = (((3^0.5)^0.5)^2)^2 = ((3^0.5)^1)^2 = (3^0.5)^2 = 3 = (3^0.25)^4

And therefore (3^0.5)^0.5=3^0.25, and the same idea can be used for any two rational numbers b and c. (The assumption of continuity finishes off the rest of the reals, as you mentioned in comments below me)
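A quick numeric check of that chain (a sketch of my own in Python):

```python
x = (3**0.5)**0.5                # the candidate value for 3^0.25
assert abs(x**4 - 3) < 1e-12     # its fourth power recovers 3...
assert abs(x - 3**0.25) < 1e-12  # ...so it agrees with 3^0.25
```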

I just found this post, but I wanted to comment. I’m a Physics Ph.D. with a strong theoretical bent (but now working in software). I studied the theoretical basis for general exponentiation in my complex analysis class.

It turns out that all exponents which aren’t integral (e.g. a^b, where b is not an integer) are ambiguous. (Carefully defining and controlling this ambiguity involves yet another generalization of the definition of exponent, by the way. It’s carefully designed to make a^b*a^c=a^(b+c) as true as possible given the inherent ambiguities in the definitions.) At some level, the choice is, in fact, arbitrary. It is quite impossible to make an argument which makes 9^0.5 equal to 3 but not equal to -3, except by fiat. It gets worse if b is irrational; the set of “plausible” values is infinite, although countably so.

The only thing that saves this from being a complete train wreck is that any definition we choose—+3 or -3 for 9^0.5—can be extended to nearby points without being ambiguous. If we choose 9^0.5 to be +3, then we get that 9.00000001^0.5 is roughly 3.000000001666… This means that when we make a choice about what a^b means, we get a consistent set of choices for nearby points (a = 0 being a notable exception). The ambiguity persists in complex ways (no pun intended, but the complex plane is a large part of it), but it’s now somewhat better controlled. The identity a^b*a^c=a^(b+c), which before made perfect sense, is actually broken in general, while still being true in “carefully controlled ways.” But it’s no subject for the inexperienced.
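The ambiguity described here can be made concrete: in complex analysis, a^b is defined as exp(b·log a), and the complex logarithm has infinitely many branches. A sketch in Python (`power_branch` is my own made-up helper name, not a standard function):

```python
import cmath

def power_branch(a, b, k):
    """a^b using branch k of the complex logarithm: exp(b * (Log a + 2*pi*i*k))."""
    return cmath.exp(b * (cmath.log(a) + 2j * cmath.pi * k))

# For 9^0.5, branch k=0 gives +3 and branch k=1 gives -3:
print(power_branch(9, 0.5, 0))  # ≈ (3+0j)
print(power_branch(9, 0.5, 1))  # ≈ (-3+0j)
```

For a rational exponent b = p/q the branches cycle through q distinct values; for irrational b they never repeat, matching the countably infinite set of “plausible” values mentioned above.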

Just because x = -3 is a solution to x^2 = 9, that does not mean -3 = 9^(.5). These are not the same. The ONLY number that is 9^(.5) is 3.

What a great post! You have captured the quandary of grading, the problems of “being smart” (fixed mindset) and the beauty of math so eloquently. I wanted to let you know that I am sharing your post with everyone who will listen today. I hope you have a chance to increase a student’s understanding of math today! Thank you for sharing – Alicia Bates @Aliciafbates

Here I am in my junior year of college and I’m suddenly a computer science major, meaning the math department.

Gulp.

This is one of the coolest, smoothest, easy-to-read math teacher blog posts that I have read in a long time.

I love the twist in the middle. “What other job puts you in such close contact with people and ideas—not just one or the other, but both of them, constantly?” Never thought of that, but a perfect reflection… matches my experiences as well. Nice post!

I was just about to make the same comment, quoting the same favorite sentence. I’ll definitely be using that line in the future when I justify my love for teaching–people and ideas!

“people and ideas” — I’ve said versions of this about teaching math so many times.

Great post.

And don’t forget irrational exponents! (Those presumably require analysis, no? Going further and requiring more of us…)

I think the “exponents are the thing that obey this rule” is a supremely elegant approach.

It’s not required for building up a full theory of exponents, though. In my analysis course, we defined positive integer exponents first (as a notational shorthand for repeated multiplication), then we defined a^{1/n} to be the unique positive number whose n-th power is a. Negative exponents can be defined as notation for the multiplicative inverse. From these definitions you can show the additive property holds for all *rational* exponents. Using limits, the property can be extended to all reals.

The point is, we can arrive at these rules using brute force. (The motivation in the background is still likely the one you pointed out, though!)
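That brute-force construction can be sketched in code. A rough Python illustration of my own (`nth_root` and `rational_pow` are made-up names, and bisection stands in for the existence proof of roots):

```python
def nth_root(a, n, tol=1e-12):
    """The unique positive x with x**n = a (for a > 0), found by bisection."""
    lo, hi = 0.0, max(a, 1.0)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid**n < a:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def rational_pow(a, p, q):
    """a^(p/q) for integers p and q > 0, built from roots and repeated multiplication."""
    root = nth_root(a, q)
    result = 1.0
    for _ in range(abs(p)):
        result *= root                         # positive exponents: repeated multiplication
    return result if p >= 0 else 1.0 / result  # negative exponents: multiplicative inverse

print(rational_pow(9, 1, 2))   # ≈ 3.0
print(rational_pow(2, -3, 1))  # ≈ 0.125
```

From here the additive property for rationals is a finite computation, and a limit argument handles the irrationals.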

Interesting! That approach is one that the typical 8th-grade book stays more loyal to (defining the meanings of different notations, then deducing properties).

I remember a professor defining the exponential function by exp(a)exp(b) = exp(a+b) and exp(1) = e, which turns out to be enough to deduce the rest! (You do need a limit argument to get from the rationals to the reals, though.) I guess that’s more where my approach comes from.

My wife points out: you also need continuity for this definition to work.

Yes, or equivalently, the “least upper bound property.” But if you wish to speak of real numbers, then you’ve clearly already agreed to this!

Mr. Chase:

The least upper bound property is not sufficient; you need to define the exponential function specifically as the continuous function satisfying the properties Ben gives, as there are many discontinuous functions on the reals satisfying those properties.

I read an interesting explanation in a calculus book (if I recall correctly). The basic concept of multiplying repeatedly with natural-number exponents (the discrete exponential function, with the positive integers as its domain) agrees with the inverse of the log function at those domain values. Thus the discrete exponential function is generalized to a real domain by incorporating the other mappings from the inverse of the log function (in other words, by calling the inverse of the log the exponential function over the reals). The various properties can be inferred from the properties of the log function, which themselves can be inferred from f(x) = 1/x by integration.

Mmm, I’ve seen that development of e^x as the inverse of log(x), which (as you say) is defined by integrating 1/x.

Was the book you were reading describing the historical development? I’d always assumed that exponential functions preceded logs, but it occurs to me that Napier was working with logs around 1600, and the exponential function with real (or even rational) domain may postdate that.

I found the book – it’s McGraw Hill’s Calculus (Smith and Minton, 4th Ed, Page 392). They are not talking about historical development. They are trying to answer, “What does it mean to have an irrational power?” That’s where they define the exponential function to be the inverse of log x. Personally, I like this approach because there is no chicken-and-egg issue (which came first), and the generalization can be understood as a utilitarian step (it doesn’t have to mean anything simple for irrational powers).

Regarding the historical development, logarithms appeared well before the exponential function. At the time, they were not in their current form. They evolved later to be related to exponents – Gregory was one of the first to call out the relationship.
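That inverse-of-log development can be sketched numerically. A rough Python illustration of my own (the midpoint rule and bisection are just convenient stand-ins for the book's integral and inverse; the function names are made up):

```python
def log_via_integral(x, n=20000):
    """log(x) as the integral of 1/t from 1 to x (midpoint rule; assumes x >= 1)."""
    h = (x - 1) / n
    return sum(h / (1 + (i + 0.5) * h) for i in range(n))

def exp_via_inverse(y, tol=1e-9):
    """The exponential as the inverse of the log: the x with log(x) = y (y >= 0)."""
    lo, hi = 1.0, 2.0
    while log_via_integral(hi) < y:   # first bracket the answer
        hi *= 2
    while hi - lo > tol:              # then bisect
        mid = (lo + hi) / 2
        if log_via_integral(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(exp_via_inverse(1.0))  # ≈ e ≈ 2.71828
# The additive property falls out of the additivity of the log:
print(exp_via_inverse(1.2), exp_via_inverse(0.5) * exp_via_inverse(0.7))  # both ≈ 3.3201
```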

Whether ‘logically’ is an appropriate term or not for Year 9, it seems to me that stating “the exponentiation rule must hold” is too direct, as it can lead to “Why?”. I see it initially as a desirable feature which would be useful, following from “root(2) times root(2) = 2 – could we use the notation and rules of exponents to say the same thing?”.

So the definition “2 to the power 1/2 equals root(2)” is really a definition of the meaning to be ascribed to “2 to the power 1/2”. And it’s OK because it works!

Not only that, but the revised definition wipes out this fallacy, which fools a lot of mathematically sophisticated people I show it to:

x * y = x * x * … * x (y times)

x * x = x * x * … * x (x times)

x^2 = x * x * … * x (x times)

Differentiating both sides:

2x = 1 + 1 + … + 1 (x times)

2x = x

2 = 1

Since when did the derivative of x*x*…*x equal 1+1+…+1? You need to apply the product rule n times, which would actually yield the correct answer 2x.

True, but I think that’s a typo on John’s post – should be:

x * y = x + x + … + x (y times)

x * x = x + x + … + x (x times)

x^2 = x + x + … + x (x times)

Differentiating both sides:

2x = 1 + 1 + … + 1 (x times)

2x = x

2 = 1

What does differentiation mean for discrete x? “x times” makes sense for whole number values of x. Even if you could generalize the “x times” concept in a meaningful way, the sum rule would become inapplicable as the number of terms in the summation becomes dependent on the variable x itself.

I guess that’s the point being made.

Just so. There has to be a fallacy somewhere, after all, or Mathematics is done for. But it’s a nice one for catching unthinking rule-followers.

Sorry about the typos.

I like this a lot, but don’t miss a good opportunity. I think you should stick with the repeated multiplication definition a little longer, long enough to examine inverses and identities across operations. The key is to treat the base and the exponent separately, as they are using different operations. You’ve shown that the exponents are adding, so their operation is addition. You’ve shown that the bases are multiplying, so their operation is multiplication.

Now you get to 9^-2. Well, -2 is an additive inverse, because subtraction is the inverse of addition. So what to do with the base? The operation of the base is multiplication, the inverse of multiplication is division, so try repeated division instead of repeated multiplication. Next you get to 9^0. Well, 0 for the exponent is the additive identity, so for the base we need the multiplicative identity, 1.

Then you can show that the exponent rules still hold, and it is beautiful. But the math of operations, identities and inverses on those operations is what makes the exponent rules hold, not vice versa. I think the students can handle this. Then I would use the exponent rules to explain fractional exponents after that.
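The operation-pairing idea in this comment can be sketched directly (a Python illustration of my own; `int_pow` is a made-up name):

```python
def int_pow(base, exp):
    """Integer exponents via the pairing: addition in the exponent, multiplication in the base.
    Positive exponent -> repeated multiplication; negative -> repeated division
    (the inverse operation); zero -> the multiplicative identity, 1."""
    result = 1.0  # multiplicative identity, paired with the additive identity 0
    for _ in range(abs(exp)):
        result = result * base if exp > 0 else result / base
    return result

print(int_pow(9, 2))   # 81.0
print(int_pow(9, 0))   # 1.0
print(int_pow(9, -2))  # ≈ 0.012345679 (= 1/81)
```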

Great post. I hope you don’t mind my sharing a couple of my pet peeves:

Any number (except zero) to the zero must equal one—because our rule says so.

And any number (except zero) to the –n must equal the reciprocal of that number to the n—because our rule says so.

If you haven’t already watched it, you (Ben Orlin) need to immediately view Herb Gross’ presentation on logarithms in calculus without exponents: http://video.mit.edu/watch/lecture-1-logarithms-without-exponents-840/

Thanks for the tip! I’ve watched some of his complex analysis lectures but that sounds like fun. Is it defining log(x) as the integral of f(x) = 1/x from 1 to x?

The same post could be made about multiplication, right? It’s just repeated addition…until we get to those same strange cases.

Yeah, exactly! First, it’s repeated addition…

…then you switch to an area model, which helps you capture all positive reals…

…and then you need to make the distributive property your centerpiece if you want a definition that works for negatives.

…so in what sense is exponentiation the first time students see this?

Touche.

But it is perhaps the first time they realize what they’re seeing. When generalizing multiplication, it feels like, “Oh, I’m learning new things about multiplication I didn’t know before.” When generalizing exponentiation, it feels (or perhaps ought to feel) more like, “Ah, I’m pushing the idea of exponentiation into places it couldn’t go before.”

I like to talk about 9^(1/2) as “half” of 9 in the sense that when you put the two equal “halves” together (via multiplication), you get 9.

Just like half of 10 is 5… putting 5 and 5 together with addition yields 10.

I’ve yet to find a good word for such magical halving. Other than “non-cupcake half.”

Seems like you are talking about the difference between an “arithmetic half” and a “geometric half.”

The mathematician in me is worried whether “exponentiation to base a is a homomorphism from (R, +) to ((0,∞), ×), for every a > 0” uniquely defines a (two-place) function. Does it?

Well, it’s all a little more obvious than the definition of exponentiation given in first-year real analysis (aka calculus, but the math way, dammit – proofs and no applications!). There we defined the exponential as the inverse of the log, and defined the log as the integral of 1/x 🙂

But the best part was in full real analysis, the next year, when our teacher pointed out something similar to what you note in this post. Multiplication is clearly *not* defined as repeated addition, like we’d all been taught when tiny – the two operations are only linked by the distributive property of a field.

You forgot to define x^1 := x.

You *use* it when you argue 9^(1/2) * 9^(1/2) = 9^1 = 9 (hence 9^(1/2) = 3), but there is nothing in your axiom that implies that last equality.

As far as I can see f(x, y) = x^(r*y) fulfils the axiom for any r within the real numbers:

f(x, y) * f(x, z)

= x^(r*y) * x^(r*z)

= x^(r*y+r*z)

= x^(r*(y+z))

= f(x, y+z)

So you need the axiom x^1 := x to “force” r to be 1.
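This commenter's point checks out numerically; a small Python sketch of my own:

```python
r = 2.0                        # any fixed real r would do
f = lambda x, y: x ** (r * y)  # candidate f(x, y) = x^(r*y)

# f satisfies the additive rule for every r...
assert abs(f(3, 0.5) * f(3, 0.25) - f(3, 0.75)) < 1e-9

# ...but only r = 1 also satisfies x^1 = x; with r = 2 we get:
print(f(3, 1))  # 9.0, i.e. 3^2 rather than 3
```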

I like that axiomatic approach there. But I’m afraid it’s too abstract for students who are learning exponentiation for the first time. 😦

I use patterns to help students begin to extend exponents to ‘non-natural’ [pun intended] uses …

2^1 = 2

2^2 = 2*2 = 4

2^3 = 2*2*2 = 8

2^4 = 2*2*2*2 = 16

… and so on

Somewhere along the way they’ll say the answer pretty quickly – “how did you multiply all those 2s so quickly? that was a lot of 2s!” Of course, the students have actually seized on the pattern of doubling the previous result.

“Well, algebra is mostly about learning how to ‘work backwards’ through arithmetic. So if we already knew 2^10 = 1024, how could we figure out 2^9?” Students should be able to recognize that we just have to divide repeatedly by 2 to start moving backwards through the powers.

Then I throw the kicker at them: “Wait a minute, so what would be the power of 2 before 2^1 = 2, then?” Students realize the right-hand side should be 2/2 = 1 … “but what should the name for that power of 2 be?” Students realize the power of 2 on the left-hand side should be zero.

You keep working backwards a bit longer, and with other numerical bases, and students have hopefully developed a fairly intuitive understanding of zero and negative integers as exponents, and are ready for the algebraic rules x^0 = 1 and x^-1 = 1/x for bases other than zero.
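That backwards-walking pattern is easy to tabulate; a little Python sketch of my own:

```python
# Walking the powers of 2 backwards: each step to the left divides by 2.
value = 2.0 ** 10
for power in range(10, -4, -1):
    print(f"2^{power} = {value}")
    value /= 2
# The row before 2^1 = 2 comes out as 2^0 = 2/2 = 1,
# then 2^-1 = 0.5, 2^-2 = 0.25, and so on.
```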

Pingback: Teaching for Tricks or Sensemaking – dy/dan

I don’t understand the last diagram, which shows a+b on the left and e^a * e^b on the right with “exponentiation” in the middle. The left and right aren’t equal. Can you explain what that diagram is supposed to mean? Thanks!