Digit Sum

loopspace

2019-11-26

Creative Commons License

Contents

  1. Solving the Puzzle

  2. Whither Polynomials?

  3. A Source of Perturbation

  4. Conclusion

1 Solving the Puzzle

A little while ago, Jo Morgan posted a puzzle on twitter that she'd found in one of her journeys through different maths texts. I gave it a miss, as it ticks all the boxes of puzzles that I don't like. A bit later, Ben Orlin wrote a short post on his blog about it. I skimmed through it, saw which puzzle it was, and was ready to give it a miss again when I read the last paragraph which seemed to address me directly. So I thought a little bit about it, and ended up with the following solution.

Theorem 1

Let $p(x) \in \mathbb{R}[x]$ be a polynomial with only positive coefficients¹. Let $a > b > 0$ be real numbers and let $d = \deg(p)$. Then

¹ At the risk of starting a flame war, zero is a positive number.

$$\frac{a^d}{b^d} \geq \frac{p(a)}{p(b)}$$

Proof

Let $p(x) = c_0 + c_1 x + c_2 x^2 + \cdots + c_d x^d$. By assumption, $c_j \geq 0$. Then

$$\begin{aligned}
\frac{a^d}{b^d}\,p(b) &= \frac{a^d}{b^d}\left(c_0 + c_1 b + c_2 b^2 + \cdots + c_d b^d\right) \\
&= c_0\left(\frac{a}{b}\right)^{d} + c_1\left(\frac{a}{b}\right)^{d-1} a + c_2\left(\frac{a}{b}\right)^{d-2} a^2 + \cdots + c_d a^d \\
&\geq c_0 + c_1 a + c_2 a^2 + \cdots + c_d a^d = p(a)
\end{aligned}$$

That last inequality comes from the fact that since $\frac{a}{b} > 1$ and each $c_j \geq 0$, we have $c_j \left(\frac{a}{b}\right)^{d-j} \geq c_j$, and hence $c_j \left(\frac{a}{b}\right)^{d-j} a^j \geq c_j a^j$ term by term.

Since $b > 0$, each $c_j \geq 0$, and $c_d > 0$ (because $\deg(p) = d$), we have $p(b) > 0$, and so rearranging yields

$$\frac{a^d}{b^d} \geq \frac{p(a)}{p(b)}$$

as required.
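The proof is elementary enough that it hardly needs checking, but a quick numerical experiment can still be reassuring. Here's a minimal Python sketch (entirely my own illustration; the function name and tolerance are arbitrary) that tests the inequality on random polynomials with non-negative coefficients:

```python
import random

def check_theorem(trials=10_000):
    """Spot-check a^d/b^d >= p(a)/p(b) for random polynomials
    with non-negative coefficients and random a > b > 0."""
    for _ in range(trials):
        d = random.randint(1, 6)
        # Coefficients are >= 0; the leading one is forced to be
        # strictly positive so that deg(p) really is d.
        coeffs = [random.uniform(0, 9) for _ in range(d)]
        coeffs.append(random.uniform(0.1, 9))
        b = random.uniform(0.1, 10)
        a = b + random.uniform(0.1, 10)  # guarantees a > b > 0
        p = lambda x: sum(c * x**j for j, c in enumerate(coeffs))
        assert (a / b)**d >= p(a) / p(b) - 1e-9, (coeffs, a, b)
    print("no counterexamples found")

check_theorem()
```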

Corollary 2

Fix a base $k$ and a length of representation $d$. Let $n$ be a number whose representation in base $k$ has length $d$. Let $\sigma_k(n)$ be the digit sum of $n$ in this representation and consider the quotient:

$$\frac{n}{\sigma_k(n)}$$

Then this achieves its maximum of $k^{d-1}$ when $n$ has representation $\overline{a 0 \cdots 0}_k$ (with $d$ digits in total).

Proof

Clearly, if $n$ has representation $\overline{a 0 \cdots 0}_k$, then:

$$\frac{n}{\sigma_k(n)} = \frac{\overline{a 0 \cdots 0}_k}{a} = \overline{1 0 \cdots 0}_k = k^{d-1}$$

Now let $n$ have representation $\overline{c_{d-1} c_{d-2} \cdots c_1 c_0}_k$. Define $p(x)$ to be the polynomial $c_0 + c_1 x + \cdots + c_{d-1} x^{d-1}$. Then the digit sum of $n$ is $p(1)$ and $n$ itself is $p(k)$. Hence

$$\frac{n}{\sigma_k(n)} = \frac{p(k)}{p(1)}$$

and by the above Theorem (applied with $a = k$, $b = 1$, and degree $d - 1$), this is bounded above by $k^{d-1}$.
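The corollary is also easy to confirm by brute force for small cases. The following sketch (again my own, with hypothetical helper names `digit_sum` and `max_quotient`) searches all $d$-digit numbers in base $k$:

```python
def digit_sum(n, k):
    """Digit sum of n written in base k."""
    s = 0
    while n:
        s += n % k
        n //= k
    return s

def max_quotient(k, d):
    """Maximise n / digit_sum(n, k) over all d-digit numbers in base k."""
    return max(range(k**(d - 1), k**d), key=lambda n: n / digit_sum(n, k))

# Two-digit numbers in base 10: the maximiser has the form a0,
# and the maximum quotient is 10^(2-1) = 10.
n = max_quotient(10, 2)
print(n, n / digit_sum(n, 10))  # -> 10 10.0
```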

In the rest of this, I'll explain where that came from.

2 Whither Polynomials?

The idea of using polynomials to study this problem was sparked by a conjunction of a few things:

  1. I've studied what are called integral polynomials in the past, and another problem posted on twitter recently had brought them back to mind.

  2. Ben Orlin's post suggests looking at the original problem for other bases, which means needing some flexibility.

  3. James Tanton's Exploding Dots is quite prevalent on twitter, and the mathematical underpinning of its main idea is that there is a close relationship between polynomials and representations of an integer in a given base.

These all meant that when thinking of how to study the original problem, the idea of passing to polynomials wasn't an unnatural one. It was as if it was already at the back of my mind, and just needed a nudge to move to the front.

The final nudge was that the question is about maximising something. This suggests using calculus, but applying calculus to actual numbers is slightly problematic. On the other hand, applying it to polynomials is very natural.

But it is not the polynomials themselves that I'm applying the techniques of calculus to. I don't intend to differentiate a polynomial. Rather, it is the space of polynomials. I intend to differentiate a curve of polynomials.

Let me back-track slightly. The advantage of polynomials over representations of numbers in the Exploding Dots saga is that one doesn't have to worry about carries when doing arithmetic. Or rather, one deals with the arithmetic first and the carries second, rather than interleaving them. So to subtract 97 from 134, you would subtract in columns to get 1(3-9)(4-7)=1(-6)(-3) and then recognise this as 100-60-3=37. The initial stage is analogous to arithmetic of polynomials.
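Here is that "arithmetic first, carries second" computation as a short Python sketch (the helper names are my own); resolving the carries amounts to evaluating the signed columns as a base-10 expansion:

```python
def columnwise_subtract(x_digits, y_digits):
    """Subtract column by column without borrowing, so
    134 - 97 becomes [1, 3, 4] - [0, 9, 7] = [1, -6, -3]."""
    y_digits = [0] * (len(x_digits) - len(y_digits)) + list(y_digits)
    return [a - b for a, b in zip(x_digits, y_digits)]

def resolve_carries(columns, base=10):
    """Deal with the carries by evaluating the signed columns
    as a base-10 expansion: [1, -6, -3] -> 100 - 60 - 3 = 37."""
    value = 0
    for digit in columns:
        value = value * base + digit
    return value

columns = columnwise_subtract([1, 3, 4], [9, 7])
print(columns, resolve_carries(columns))  # [1, -6, -3] 37
```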

Now throw in the concept of calculus, and specifically of small variation. With polynomials, it is possible to have things like .5x+.25. We're not, though, used to seeing (.5)(.25) as the representation of a number in base 10 (it would be .5×10+.25=5.25). But if we're trying to use the techniques of calculus to study this situation, we need to be able to consider small variations of a thing. So we'd better put that thing in a place where small variations are possible. Instead of studying, say, 67, we study 6x+7 so that we can also consider 6.1x+7.1.

In actual fact, this took me down a slight blind alley. I did consider the map $\mathbb{R}[x] \to \mathbb{R}$ defined by:

$$p(x) \mapsto \frac{p(k)}{p(1)}$$

for fixed $k \in \mathbb{N}$, and looked at what happens to this when $p$ is perturbed to $p + hq$ for small $h$. The idea is quite simple: if $p$ maximises this quotient, then perturbing it in any direction should mean that the value of the quotient goes down, and so studying its behaviour under small changes will help find maxima. This can be made rigorous, but at this stage I wasn't concerned with that.
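As an illustration of the experiment (not the actual computation I did), here is a sketch with $k = 10$, representing a polynomial by its list of coefficients; all the names are my own:

```python
def quotient(coeffs, k=10):
    """The map p -> p(k) / p(1), with p given as [c0, c1, ...]."""
    p = lambda x: sum(c * x**j for j, c in enumerate(coeffs))
    return p(k) / p(1)

def perturb(coeffs, direction, h):
    """Coefficients of p + h*q."""
    return [c + h * q for c, q in zip(coeffs, direction)]

p = [7, 6]  # 6x + 7, the polynomial standing in for 67
q = [1, 1]  # an arbitrary direction to perturb in
for h in (0.1, 0.01, 0.001):
    print(h, quotient(perturb(p, q, h)) - quotient(p))
```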

Unfortunately, there's a big flaw in the argument. The map above is not defined on the whole of $\mathbb{R}[x]$ because we can have $p(1) = 0$. And as we approach a polynomial with $p(1) = 0$, the quotient can get arbitrarily large. So maximising the quotient over the whole of $\mathbb{R}[x]$ (or at least over the part where the quotient is defined) is not a viable line of enquiry.
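The blow-up is easy to see concretely: take $p_\epsilon(x) = x - 1 + \epsilon$, so that $p_\epsilon(1) = \epsilon$. With $k = 10$ the quotient is $p_\epsilon(10)/p_\epsilon(1) = (9 + \epsilon)/\epsilon$, which grows without bound as $\epsilon \to 0$:

```python
# p_eps(x) = x - 1 + eps has p_eps(1) = eps, so with k = 10 the
# quotient p_eps(10) / p_eps(1) = (9 + eps) / eps grows without bound.
for eps in (1.0, 0.1, 0.01, 0.001):
    print(eps, (9 + eps) / eps)  # 10.0, 91.0, 901.0, 9001.0
```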

But nothing should be discarded completely, even if it itself leads nowhere. And although the direct application of calculus didn't work, the idea of considering perturbations did.

3 A Source of Perturbation

The techniques of calculus aren't just about finding maxima and minima. Differential calculus hinges on the idea of "If I just change the input a little bit, what will happen to the output?". The derivative tells us how to answer that question.

Now we're really interested in $\mathbb{N}[x]$ rather than $\mathbb{R}[x]$, but we can nevertheless take the idea of perturbing the input and seeing what happens to the output. The other thing that calculus teaches us is that, providing the function is "nice" in some technical fashion, we don't need to consider every possible perturbation but just "enough".

So let's start with a polynomial $p \in \mathbb{R}[x]$² of a fixed degree, say $d$, and consider perturbing it in the simplest fashion. That would be adding $1$ to just one of its coefficients. Let us define $e_j \in \mathbb{R}[x]$ to be the polynomial $e_j(x) = x^j$; then we're considering what happens to the quotient $p(a)/p(b)$ when we replace $p$ by $p + e_j$ with $j \leq d$. A little algebraic manipulation leads to:

² We'll specialise a little as we go through, but it can be instructive to see where the restrictions are needed.

$$\frac{(p + e_j)(a)}{(p + e_j)(b)} = \frac{p(a) + a^j}{p(b) + b^j} = \frac{p(a)}{p(b)} \times \frac{1 + \frac{a^j}{p(a)}}{1 + \frac{b^j}{p(b)}}$$

So the quotient increases when we replace $p$ by $p + e_j$ if that second factor in the product is greater than $1$.

Here's where we make our assumptions. We want everything to be positive so that we don't have to worry about the inequality when multiplying and dividing. The simplest way to achieve that is to assume that $a, b > 0$ and all the coefficients of $p$ are positive. We can therefore manipulate the condition that the second factor is greater than one as follows.

$$\frac{1 + \frac{a^j}{p(a)}}{1 + \frac{b^j}{p(b)}} > 1 \iff 1 + \frac{a^j}{p(a)} > 1 + \frac{b^j}{p(b)} \iff \frac{a^j}{p(a)} > \frac{b^j}{p(b)}$$

Lastly, we obtain:

$$\frac{a^j}{b^j} > \frac{p(a)}{p(b)} \tag{1}$$

So by considering what happens when we perturb the polynomial $p$ slightly, we end up with the above inequality to consider. Let us pause to consider what that means: if $j$ is such that the inequality in (1) holds, then the quotient for $p + e_j$ is larger than that for $p$ and so we want to "move" in that direction.

Now if we assume that $a > b$, then $\frac{a^j}{b^j}$ increases with $j$, so if the inequality in (1) is satisfied for a particular $j$, it is also satisfied for any larger $j$, such as the degree of $p$ itself. So an interesting question is to determine, for a given polynomial $p$, the minimum $j$ such that the inequality in (1) holds.
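That question is easy to explore in code. This sketch (my own; any $a > b > 0$ will do, with $a = k$, $b = 1$ recovering the digit-sum setting) finds the minimal such $j$ and double-checks that perturbing by $e_j$ really does increase the quotient:

```python
def minimal_j(coeffs, a, b):
    """Smallest j <= deg(p) with (a/b)^j > p(a)/p(b), or None if
    no perturbation direction e_j improves the quotient."""
    p = lambda x: sum(c * x**i for i, c in enumerate(coeffs))
    target = p(a) / p(b)
    for j in range(len(coeffs)):
        if (a / b)**j > target:
            # Check that perturbing by e_j really raises the quotient.
            bumped = list(coeffs)
            bumped[j] += 1
            q = lambda x: sum(c * x**i for i, c in enumerate(bumped))
            assert q(a) / q(b) > target
            return j
    return None

# For p(x) = 6x + 7 with a = 10, b = 1 (i.e. the number 67 in base 10),
# only the top coefficient is worth increasing:
print(minimal_j([7, 6], 10, 1))  # -> 1
```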

In telling the story of my approach I should take a moment to say that because the original problem was posed for two-digit numbers, and I originally only considered it for two-digit numbers, I didn't realise at first the significance of the inequality in (1). With only two directions to consider, the general form in (1) is too simple to see the full structure. It was only when I considered the general case that I saw how this inequality is not just a piece of the solution but is, in fact, the solution in its entirety.

The required realisation was that (1) can be written as:

$$\frac{e_j(a)}{e_j(b)} > \frac{p(a)}{p(b)}$$

and that $e_j$ is a perfectly valid polynomial in its own right. So if we can show that for a given polynomial $p$ this inequality holds for some $j \leq d$, then it must hold for $d$, the degree of $p$. And if we can show that, then we have established that

$$\frac{e_d(a)}{e_d(b)} > \frac{p(a)}{p(b)}$$

and so the quotient is maximised among polynomials of degree at most $d$ when we take a monomial of degree $d$. And this is precisely what Theorem 1 shows is true.

4 Conclusion

Although the original approach of using calculus to study the problem proved to be a red herring, it did put me on the right path to discover an approach that worked. Having taught and researched in calculus for many years, I've come to the conclusion that calculus is such an extremely successful concept that Mathematicians have continually sought to push it into other areas of Mathematics, whether it wants to go there or not. Although I didn't ultimately use any of the techniques of calculus, using the ideas of it proved successful. So, sometimes, it's worth trying something strange to attack a problem because you never know what ideas that might spark.

Using calculus here might seem like the classic sledgehammer to crack a walnut, but in Mathematics we are in the unique position of being able to crack the walnut and then reassemble it to try different ways to crack it. By watching carefully how it cracks under the sledgehammer, we can sometimes see just the right place to tap it gently to make it simply fall apart.

Lastly, calculus is awesome. Someone should write a book about it.