Digit Sum

loopspace

2021-03-26


Contents

  1. Solving the Puzzle

  2. Whither Polynomials?

  3. A Source of Perturbation

  4. Least Value

  5. Surprising Sequences

  6. Conclusion

1 Solving the Puzzle

A little while ago, Jo Morgan posted a puzzle on twitter that she'd found in one of her journeys through different maths texts:

[S]uppose a positive two-digit whole number is divided by the sum of its digits, how can you find the largest and smallest possible answers without searching for all possible answers on your calculator?

I gave it a miss, as it ticks all the boxes of puzzles that I don't like. A bit later, Ben Orlin wrote a short post on his blog about it. I skimmed through it, saw which puzzle it was, and was ready to give it a miss again when I read the last paragraph which seemed to address me directly. So I thought a little bit about it, and ended up with the following solution.

Theorem 1

Let $p(x) \in \mathbb{R}[x]$ be a polynomial with only positive coefficients[1]. Let $a > b > 0$ be real numbers and let $d = \deg(p)$. Then

[1] At the risk of starting a flame war, zero is a positive number.

$\frac{a^d}{b^d} \geq \frac{p(a)}{p(b)}$

Proof

Let $p(x) = c_0 + c_1 x + c_2 x^2 + \cdots + c_d x^d$. By assumption, $c_j \geq 0$. Then

\begin{aligned}
\frac{a^d}{b^d} p(b) &= \frac{a^d}{b^d}\left(c_0 + c_1 b + c_2 b^2 + \cdots + c_d b^d\right) \\
&= c_0 \left(\tfrac{a}{b}\right)^d + c_1 \left(\tfrac{a}{b}\right)^{d-1} a + c_2 \left(\tfrac{a}{b}\right)^{d-2} a^2 + \cdots + c_d a^d \\
&\geq c_0 + c_1 a + c_2 a^2 + \cdots + c_d a^d \\
&= p(a)
\end{aligned}

That last inequality comes from the fact that since $\frac{a}{b} > 1$ and each $c_j \geq 0$, we have $c_j \left(\frac{a}{b}\right)^{d-j} \geq c_j$.

Since $b > 0$, each $c_j \geq 0$, and $c_d > 0$ (as $\deg(p) = d$), we have $p(b) > 0$, and so rearranging yields

$\frac{a^d}{b^d} \geq \frac{p(a)}{p(b)}$

as required.
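As a quick numerical sanity check (my own addition, not part of the original argument; the helper and sampling scheme are mine), the inequality can be tested on random polynomials with non-negative coefficients:

    import random

    def poly_eval(coeffs, x):
        """Evaluate c_0 + c_1*x + ... + c_d*x^d given coeffs = [c_0, ..., c_d]."""
        return sum(c * x ** i for i, c in enumerate(coeffs))

    random.seed(1)
    for _ in range(10_000):
        d = random.randint(1, 6)
        # non-negative coefficients, with the leading one strictly positive
        coeffs = [random.random() for _ in range(d)] + [random.random() + 0.01]
        b = random.random() + 0.01      # b > 0
        a = b + random.random() + 0.01  # a > b
        assert a ** d / b ** d >= poly_eval(coeffs, a) / poly_eval(coeffs, b)
    print("all checks passed")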

Corollary 2

Fix a base $k$ and a length of representation $d$. Let $n$ be a number whose representation in base $k$ has length $d$. Let $\sigma_k(n)$ be the digit sum of $n$ in this representation and consider the quotient:

$\frac{n}{\sigma_k(n)}$

Then this achieves its maximum of $k^{d-1}$ when $n$ has representation $a0\cdots0_k$ (with $d$ digits in total).

Proof

Clearly, if $n$ has representation $a0\cdots0_k$, then:

$\frac{n}{\sigma_k(n)} = \frac{a0\cdots0_k}{a} = 10\cdots0_k = k^{d-1}$

Now let $n$ have representation $c_{d-1}c_{d-2}\cdots c_1c_0$ in base $k$. Define $p(x)$ to be the polynomial $c_0 + c_1 x + \cdots + c_{d-1} x^{d-1}$. Then the digit sum of $n$ is $p(1)$ and $n$ itself is $p(k)$. Hence

$\frac{n}{\sigma_k(n)} = \frac{p(k)}{p(1)}$

and by the above Theorem (applied with $a = k$ and $b = 1$, noting $\deg(p) = d - 1$), this is bounded above by $k^{d-1}$.
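The corollary is also easy to confirm by brute force. Here is a minimal sketch of such a check (my own addition; digit_sum is my helper, not anything from the original puzzle):

    def digit_sum(n, base=10):
        """Sum of the digits of n written in the given base."""
        s = 0
        while n:
            s += n % base
            n //= base
        return s

    # For each digit count d, the maximum of n / digit_sum(n) over all
    # d-digit numbers in base 10 should be 10^(d-1), attained at a0...0.
    for d in range(2, 6):
        lo, hi = 10 ** (d - 1), 10 ** d
        best = max(range(lo, hi), key=lambda n: n / digit_sum(n))
        print(d, best, best / digit_sum(best))  # quotient is 10^(d-1)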

Note that this establishes the largest value. The original problem asked for both the largest and smallest values, but originally I overlooked that it asked for both and focussed completely on finding the largest value. After explaining where the above came from, I'll return to the problem of the smallest values as it turns out to be a little more intricate.

2 Whither Polynomials?

The idea of using polynomials to study this problem was sparked by a conjunction of a few things:

  1. I've studied what are called integral polynomials in the past, and there'd been another problem posted on twitter recently that recalled that to mind.

  2. Ben Orlin's post suggests looking at the original problem for other bases, which means needing some flexibility.

  3. James Tanton's Exploding Dots is quite prevalent on twitter, and the mathematical underpinning of the main idea is that there is a close relationship between polynomials and representations of an integer in a given base.

These all meant that when thinking of how to study the original problem, the idea of passing to polynomials wasn't an unnatural one. It was as if it was already at the back of my mind, and just needed a nudge to move to the front.

The final nudge was that the question is about maximising something. This suggests using calculus, but applying calculus to actual numbers is slightly problematic. On the other hand, applying it to polynomials is very natural.

But it is not the polynomials themselves that I'm applying the techniques of calculus to. I don't intend to differentiate a polynomial. Rather, it is the space of polynomials. I intend to differentiate a curve of polynomials.

Let me back-track slightly. The advantage of polynomials over representations of numbers in the Exploding Dots saga is that one doesn't have to worry about carries when doing arithmetic. Or rather, one deals with the arithmetic first and the carries second, rather than interleaving them. So to subtract 97 from 134, you would subtract in columns to get 1(3-9)(4-7)=1(-6)(-3) and then recognise this as 100-60-3=37. The initial stage is analogous to arithmetic of polynomials.
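As a small illustration of the "arithmetic first, carries second" idea (my own sketch; the function names are hypothetical, not anything from Exploding Dots):

    def column_subtract(a_digits, b_digits):
        """Digit-wise subtraction, most significant digit first:
        [1, 3, 4] - [0, 9, 7] -> [1, -6, -3], with no carrying."""
        return [x - y for x, y in zip(a_digits, b_digits)]

    def resolve(digits, base=10):
        """Evaluate the digit list at x = base, which resolves the carries."""
        total = 0
        for d in digits:
            total = total * base + d
        return total

    cols = column_subtract([1, 3, 4], [0, 9, 7])
    print(cols, resolve(cols))  # [1, -6, -3] 37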

Now throw in the concept of calculus, and specifically of small variation. With polynomials, it is possible to have things like .5x+.25. We're not, though, used to seeing (.5)(.25) as the representation of a number in base 10 (it would be .5×10+.25=5.25). But if we're trying to use the techniques of calculus to study this situation, we need to be able to consider small variations of a thing. So we'd better put that thing in a place where small variations are possible. Instead of studying, say, 67, we study 6x+7 so that we can also consider 6.1x+7.1.

In actual fact, this took me down a slight blind alley. I did consider the map $\mathbb{R}[x] \to \mathbb{R}$ defined by:

$p(x) \mapsto \frac{p(k)}{p(1)}$

for a fixed natural number $k$, and looked at what happens to this when $p$ is perturbed to $p + hq$ for small $h$. The idea is quite simple: if $p$ maximises this quotient then perturbing it in any direction should mean that the value of the quotient goes down, and so studying its behaviour under small change will help find maxima. This can be made rigorous, but at this stage I wasn't concerned with that.

Unfortunately, there's a big flaw in the argument. The map above is not defined on the whole of $\mathbb{R}[x]$ because we can have $p(1) = 0$. And as we approach a polynomial with $p(1) = 0$, the quotient can get arbitrarily large. So maximising the quotient over the whole of $\mathbb{R}[x]$ (or at least over the part where the quotient is defined) is not a viable line of enquiry.
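For a concrete instance of the blow-up (my example, not from the original): with $k = 10$, take $p_h(x) = x - 1 + h$ for small $h > 0$. Then

$\frac{p_h(10)}{p_h(1)} = \frac{9 + h}{h} \to \infty \quad \text{as } h \to 0^+$

so the quotient is unbounded as we approach the hyperplane $p(1) = 0$.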

But nothing should be discarded completely, even if it itself leads nowhere. And although the direct application of calculus didn't work, the idea of considering perturbations did.

3 A Source of Perturbation

The techniques of calculus aren't just about finding maxima and minima. Differential calculus hinges on the idea of "If I just change the input a little bit, what will happen to the output?". The derivative tells us how to answer that question.

Now we're really interested in $\mathbb{Z}[x]$ rather than $\mathbb{R}[x]$, but we can nevertheless take the idea of perturbing the input and seeing what happens to the output. The other thing that calculus teaches us is that, providing the function is "nice" in some technical fashion, we don't need to consider every possible perturbation but just "enough".

So let's start with a polynomial $p \in \mathbb{Z}[x]$[2] of a fixed degree, say $d$, and consider perturbing it in the simplest fashion. That would be adding $1$ to just one of its coefficients. Let us define $e_j \in \mathbb{Z}[x]$ to be the polynomial $e_j(x) = x^j$; then we're considering what happens to the quotient $p(a)/p(b)$ when we replace $p$ by $p + e_j$ with $j \leq d$. A little algebraic manipulation leads to:

[2] We'll specialise a little as we go through, but it can be instructive to see where the restrictions are needed.

$\frac{(p+e_j)(a)}{(p+e_j)(b)} = \frac{p(a) + a^j}{p(b) + b^j} = \frac{p(a)}{p(b)} \times \frac{1 + a^j/p(a)}{1 + b^j/p(b)}$

So the quotient increases when we replace $p$ by $p + e_j$ if that second term in the product is greater than $1$.

Here's where we make our assumptions. We want everything to be positive so that we don't have to worry about the inequality flipping when multiplying and dividing. The simplest way to achieve that is to assume that $a, b > 0$ and that all the coefficients of $p$ are positive. We can therefore manipulate the condition that the second term is greater than one as follows.

$\frac{1 + a^j/p(a)}{1 + b^j/p(b)} > 1 \iff 1 + \frac{a^j}{p(a)} > 1 + \frac{b^j}{p(b)} \iff \frac{a^j}{p(a)} > \frac{b^j}{p(b)}$

Lastly, we obtain:

$\frac{a^j}{b^j} > \frac{p(a)}{p(b)}$ (1)

So by considering what happens when we perturb the polynomial $p$ slightly, we end up with the above inequality to consider. Let us pause to consider what that means: if $j$ is such that the inequality in (1) holds, then the quotient for $p + e_j$ is larger than that for $p$, and so we want to "move" in that direction.
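Here is a numerical spot-check of that reading of (1) (my own sketch; the example polynomial is arbitrary):

    def poly_eval(coeffs, x):
        """Evaluate a polynomial given as [c0, c1, ..., cd] at x."""
        return sum(c * x ** i for i, c in enumerate(coeffs))

    a, b = 10, 1
    coeffs = [7, 6]  # p(x) = 7 + 6x, i.e. the two-digit number 67
    q0 = poly_eval(coeffs, a) / poly_eval(coeffs, b)

    for j in range(len(coeffs)):
        bumped = list(coeffs)
        bumped[j] += 1  # replace p by p + e_j
        q1 = poly_eval(bumped, a) / poly_eval(bumped, b)
        predicted_up = a ** j / b ** j > q0  # inequality (1)
        print(j, predicted_up, q1 > q0)      # the two booleans agree here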

Now if we assume that a>b, then if the inequality in (1) is satisfied for a particular j, it is also satisfied for any larger j, such as the degree of p itself. So an interesting question is to determine, for a given polynomial p, the minimum j such that the inequality in (1) holds.

In telling the story of my approach I should take a moment to say that because the original problem was posed for two-digit numbers, and I originally only considered it for two-digit numbers, I didn't realise at first the significance of the inequality in (1). With only two directions to consider, the general form in (1) is too simple to see the full structure. It was only when I considered the general case that I saw how this inequality is not just a piece of the solution but is, in fact, the solution in its entirety.

The required realisation was that (1) can be written as:

$\frac{e_j(a)}{e_j(b)} > \frac{p(a)}{p(b)}$

and that $e_j$ is a perfectly valid polynomial in its own right. So if we can show that for a given polynomial $p$ this inequality holds for some $j \leq d$, then it must hold for $d$, the degree of $p$. And if we can show that, then we have established that

$\frac{e_d(a)}{e_d(b)} > \frac{p(a)}{p(b)}$

and so the quotient is maximised among polynomials of degree at most d when we take a monomial of degree d.

This then led me to hypothesise and prove Theorem 1, whose proof (and implication for the original problem) is given at the outset.

Because of the role that the inequality in (1) will come to play in looking for the minimum value of the quotient, I feel it worth pointing out that although it gave me the idea behind Theorem 1, the more general form isn't actually used. So if I were to write this more in the style of an academic paper, Theorem 1 would appear with very little evidence of where it came from.

4 Least Value

In my original working on this problem I completely ignored the part of the question asking about the minimum value of the quotient. This wasn't deliberate – once I started thinking about the largest value then I forgot about the original setting altogether. It was only much later (and indeed after an earlier version of this was posted on my website) that I realised I'd missed out half the problem. Having used polynomials to such success on the first part, I wanted to see if I could do so for the second.

Obviously, my first strategy was to try to mirror the work of the search for the maximum value. A similar analysis leads to wanting to have the inequality:

$\frac{a^j}{b^j} < \frac{p(a)}{p(b)}$ (2)

As for the maximum, if this holds for some j then it holds for all smaller j. Mirroring the argument for the maximum, we'd then expect to want to work with monomials of minimum degree. However, we can't do this because we have fixed the degree of p. It does, though, suggest that we will get the minimum value for the quotient if we push the coefficients of p into lower degree.

We can make this more precise. Let $p$ be a polynomial of degree $d$ with positive coefficients $c_j$. Let $\hat{p}$ be $p$ without its term of degree $d$. Let $a > b > 0$. Define $q$ to be the polynomial $c_d x^d + \hat{p}(b)$. Then $q(b) = p(b)$ and

$q(a) = c_d a^d + \hat{p}(b) = c_d a^d + \hat{p}(a) + \hat{p}(b) - \hat{p}(a) = p(a) + \hat{p}(b) - \hat{p}(a)$

As $p$ has positive coefficients and $a > b > 0$, we have $\hat{p}(a) \geq \hat{p}(b)$, so $q(a) \leq p(a)$. Thus

$\frac{q(a)}{q(b)} \leq \frac{p(a)}{p(b)}$

Thus any suitable polynomial $p$ can be replaced by one of the form $c_d x^d + c_0$ with a smaller quotient. This suggests that we only need to look at polynomials of this form. A little more work shows that actually we only need to look at polynomials of the form $x^d + c$, and that the quotient gets smaller as $c$ gets bigger.
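A quick numerical check of this replacement step (my own sketch; the polynomial is arbitrary):

    def poly_eval(coeffs, x):
        return sum(c * x ** i for i, c in enumerate(coeffs))

    a, b = 10, 2
    p = [3, 5, 2, 4]  # p(x) = 3 + 5x + 2x^2 + 4x^3
    p_hat = p[:-1]    # p without its term of degree d
    # q(x) = c_d x^d + p_hat(b): constant term p_hat(b), zeros in between
    q = [poly_eval(p_hat, b), 0, 0, p[-1]]

    assert poly_eval(q, b) == poly_eval(p, b)  # q(b) = p(b)
    print(poly_eval(q, a) / poly_eval(q, b)
          <= poly_eval(p, a) / poly_eval(p, b))  # True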

This looks promising, but there is a tiny snag. To apply the work to the original problem we need the coefficients of the polynomial to be bounded above by some value (in fact, we'll have $c_j < a$). So we can't simply keep increasing the constant term.

What this analysis does tell us is that finding minima in the space of all polynomials, or even all polynomials with positive coefficients, is unlikely to be a fruitful strategy. So we need new ideas.

At this point I have a confession to make that will not surprise anyone who has read A Tale of Two Puzzles. My next move was to write a computer program to find the minimum value of the quotient for a variety of different conditions. In decimal notation, what I found was that the minimum for two digits was at 19, for three at 199, and then at numbers of the form $10 9\cdots9$ (starting with 1099) until we get to 15 digits, at which point there are two zeros, and that persists until around 115 digits. My rather simple program reaches the limit of the precision of my computer at this juncture. With base 2, the results of the calculations are smaller, and it can be seen that the number of minimising numbers of the form $10\cdots01\cdots1$ that share a given number of zeros roughly doubles as that number of zeros increases (in fact, it appears to be $2^n + 1$, which seems reasonable).
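The original program isn't reproduced here, but the following is a minimal sketch of the kind of brute-force search involved (names and structure are my own, and it only reaches modest digit counts):

    def digit_sum(n, base=10):
        s = 0
        while n:
            s += n % base
            n //= base
        return s

    def minimiser(d, base=10):
        """The d-digit number in the given base minimising n / digit_sum(n)."""
        lo, hi = base ** (d - 1), base ** d
        return min(range(lo, hi), key=lambda n: n / digit_sum(n))

    for d in range(2, 6):
        print(d, minimiser(d))  # 19, 199, 1099, 10999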

Intrigued by this, I returned to the inequality in (2) and thought a bit more about it. The problem with (2) is that it is a wish-list rather than a fact. If that inequality is satisfied then increasing the $j$th coefficient of $p$ will reduce the quotient, but that doesn't help with figuring out when that inequality will be satisfied.

It took me several blind alleys, and more time than I'm willing to admit to, to realise that it was better to rearrange it into the following (using the fact that everything is positive, so multiplying up doesn't change the inequality sign):

$a^j p(b) - b^j p(a) < 0$ (3)

The crucial insight of arranging it like this is that the term corresponding to the $x^j$ term in $p(x)$ vanishes. So this is unaffected by changing the $j$th coefficient within $p$, and means that if the inequality in (3) is satisfied, then increasing the coefficient of the $x^j$ term as high as it can go reduces the value of the quotient. Conversely, if the inequality in (3) goes the other way then we want to reduce the coefficient of the $x^j$ term as much as possible.

The form of the inequality in (2) shows that there is some $j_0 < d$ such that (2) holds for all $j \leq j_0$ and fails for all $j > j_0$. This means that for $j > j_0$ we reduce the coefficient as far as possible, meaning that we set it to $0$ for $j \neq d$ and to $1$ for $j = d$, while for $j \leq j_0$ we set it as high as we're allowed, which, recalling the origin of the problem, means that we set it to $a - 1$.

So our minimising polynomial is:

$p(x) = (a-1)(1 + x + x^2 + \cdots + x^{j_0}) + x^d$

and $j_0$ is the maximum power such that the inequality in (3) is satisfied.

Writing $1 + x + x^2 + \cdots + x^{j_0}$ as $\frac{x^{j_0+1} - 1}{x - 1}$, we can rewrite our minimising polynomial as:

$p(x) = x^d + (a-1)\frac{x^{j_0+1} - 1}{x - 1}$

Substituting in $x = a$ yields:

$p(a) = a^d + (a-1)\frac{a^{j_0+1} - 1}{a - 1} = a^d + a^{j_0+1} - 1$

Although we might be interested in general $b$, we're most interested in $b = 1$. Substituting this into the original expression yields:

$p(1) = (a-1)(j_0+1) + 1 = j_0 a + a - j_0$

Putting this into the inequality in (3) (with $j = j_0$ and $b = 1$), we see that $j_0$ is the largest integer such that:

\begin{aligned}
& a^{j_0}(j_0 a + a - j_0) - a^d - a^{j_0+1} + 1 < 0 \\
\iff{} & j_0 a^{j_0+1} + a^{j_0+1} - j_0 a^{j_0} - a^d - a^{j_0+1} + 1 < 0 \\
\iff{} & j_0 a^{j_0+1} - j_0 a^{j_0} - a^d + 1 < 0 \\
\iff{} & j_0 a^{j_0}(a-1) < a^d - 1 \\
\iff{} & j_0 a^{j_0} < \frac{a^d - 1}{a - 1}
\end{aligned}

While this isn't quite a formula for $j_0$, it is a criterion that can easily be used to find it.
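For instance, the criterion translates directly into a short program (my own sketch; here $a = 10$ and $d$ is the polynomial degree, so the resulting number has $d+1$ digits):

    def j_0(d, a=10):
        """Largest j with j * a**j < (a**d - 1) / (a - 1)."""
        bound = (a ** d - 1) // (a - 1)
        j = 0
        while (j + 1) * a ** (j + 1) < bound:
            j += 1
        return j

    def minimising_number(d, a=10):
        """1 followed by zeros and then j_0 + 1 digits equal to a - 1."""
        j = j_0(d, a)
        return a ** d + (a ** (j + 1) - 1) // (a - 1) * (a - 1)

    for d in range(1, 6):
        print(d, minimising_number(d))  # 19, 199, 1099, 10999, 109999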

In conclusion, then, the minimum value of the quotient for numbers with $d+1$ digits in base 10 (where $d$ is the degree of the corresponding polynomial) is achieved at the number:

$10\cdots09\cdots9$

where the number of 9s is $j_0 + 1$, with $j_0$ the largest number satisfying:

$j_0 10^{j_0} < \frac{10^d - 1}{9}$

This feels a little more complicated than the situation for the maximum value.

5 Surprising Sequences

Using new symbols, the relation:

$\max\left\{ k : k b^k < \frac{b^n - 1}{b - 1} \right\}$

leads to several interesting sequences.

First there is simply the sequence whose $n$th term is the above maximum. This sequence is largely dull. With base $b = 10$, it starts out:

0,1,1,2,3,4,5,6,7,8,9,10,11,11,12,13,14,15,16,17,18,19

The difficulty here is that the sequence only does something interesting at increasingly infrequent intervals. The next hiccough occurs with the value 111, and after that at 1111, and so on.

Rearranging the defining inequality shows a bit more precisely what is going on. Multiplying up by $b - 1$ gives:

$(b-1) k b^k < b^n - 1$

Now $b^n - 1$ is the largest number with $n$ digits in base $b$. So this says that $(b-1)kb^k$ must have at most $n$ digits in base $b$. Normally, increasing $n$ by $1$ means that $k$ also increases by $1$, because the dominating factor in $(b-1)kb^k$ is the $b^k$. But every so often, increasing $k$ by $1$ also adds to the number of digits, and so going from $(b-1)(k-1)b^{k-1}$ to $(b-1)kb^k$ increases the number of digits by $2$. This causes the pauses in the sequence.

The presence of the $b - 1$ factor is why this happens when $k$ has a representation consisting of all 1s in base $b$.

What is more interesting is to record the ns at which these hiccoughs occur. In base 10, we get:

$1, 3, 14, 115, 1116, 11117, \ldots$

We get an equivalent pattern in other bases, for example in base 3 the sequence starts:

$1, 3, 7, 17, 45, 127, 371, 1101, 3289, \ldots$

and note that:

\begin{aligned}
3 + 1 + 3 &= 7 \\
9 + 3 + 1 + 4 &= 17 \\
27 + 9 + 3 + 1 + 5 &= 45 \\
81 + 27 + 9 + 3 + 1 + 6 &= 127
\end{aligned}

The general formula is:

$a_n = b^{n-2} + b^{n-3} + \cdots + b + 1 + n = \frac{b^{n-1} - 1}{b - 1} + n$
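As a check (my own addition), one can generate the hiccough positions directly and compare them with this formula; this finds the positions after the first, the initial 1 being the $n = 1$ value of the formula:

    # Find n where the max value k repeats (base 10), i.e. the "hiccoughs",
    # and compare with the closed formula a_n = (b^(n-1) - 1)/(b - 1) + n.
    hiccoughs, prev, k = [], None, 0
    for n in range(1, 200):
        bound = (10 ** n - 1) // 9
        while (k + 1) * 10 ** (k + 1) < bound:  # the max is non-decreasing in n
            k += 1
        if k == prev:
            hiccoughs.append(n)
        prev = k
    print(hiccoughs)                                            # [3, 14, 115]
    print([(10 ** (n - 1) - 1) // 9 + n for n in range(2, 5)])  # [3, 14, 115]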

Interestingly, the OEIS has an entry for this sequence in base 2 (A006127) and for base 3 (A233656), but not for any other base that I've searched, including base 10; maybe I should send that one in[3].

[3] One problem, I guess, is that there are an infinite number of these sequences.

6 Conclusion

Although the original approach of using calculus to study the problem proved to be a red herring, it did put me on the right path to discover an approach that worked. Having taught and researched in calculus for many years, I've come to the conclusion that calculus is such an extremely successful concept that Mathematicians have continually sought to push it into other areas of Mathematics, whether it wants to go there or not. Although I didn't ultimately use any of the techniques of calculus, using its ideas proved successful. So, sometimes, it's worth trying something strange to attack a problem because you never know what ideas it might spark.

Using calculus here might seem like the classic sledgehammer to crack a walnut, but in Mathematics we are in the unique position of being able to crack the walnut and then reassemble it to try different ways to crack it. By watching carefully how it cracks under the sledgehammer, we can sometimes see just the right place to tap it gently to make it simply fall apart.

Lastly, calculus is awesome. Someone should write a book about it.