Friday, August 27, 2010

Kaprekar's constant

Kaprekar's constant is arrived at by following these steps:
  1. Take any four-digit number whose digits are not all the same (leading zeroes are allowed).
  2. Arrange the digits in descending and then in ascending order to get the largest and smallest possible 4-digit numbers made from those digits. (Add leading zeroes if necessary.)
  3. Subtract the smaller number from the larger number. The result will be another 4-digit number.
  4. Repeat the process starting with the result.
For example, starting with the digits 0827 (today's date) we get:

8720 - 0278 = 8442
8442 - 2448 = 5994
9954 - 4599 = 5355
5553 - 3555 = 1998
9981 - 1899 = 8082
8820 - 0288 = 8532
8532 - 2358 = 6174

If we continue the process from this point, the result will always be 6174:

7641 - 1467 = 6174

The amazing thing is that the process, started from any 4-digit number whose digits are not all the same, reaches 6174 in at most seven steps. 6174 is called the Kaprekar constant for 4-digit numbers after Dattaraya Ramchandra Kaprekar, who discovered the property in 1949.

There's a Kaprekar constant for 3-digit numbers as well. Following the same procedure starting with any 3-digit number whose digits are not all the same will reach 495 in at most six steps. Applying the same process to base 10 numbers with any other number of digits results in either zero or a repeating cycle rather than a single constant. (Check out the article Mysterious number 6174 by Yutaka Nishiyama for the proof.)
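Here's a quick sketch of the routine in Scheme in case you want to trace other starting values yourself. The procedure names (number->digits, digits->number, kaprekar-step, kaprekar-routine) are just ones I made up for this sketch, and it assumes your Scheme provides a sort procedure, as Racket and MIT Scheme do.
; digits of n, padded with leading zeroes out to width digits
(define (number->digits n width)
  (if (= width 0)
      '()
      (append (number->digits (quotient n 10) (- width 1))
              (list (remainder n 10)))))

(define (digits->number digits)
  (define (iter ds acc)
    (if (null? ds)
        acc
        (iter (cdr ds) (+ (* 10 acc) (car ds)))))
  (iter digits 0))

; one subtraction step: largest arrangement minus smallest arrangement
(define (kaprekar-step n width)
  (let* ((digits (number->digits n width))
         (small (digits->number (sort digits <)))
         (large (digits->number (sort digits >))))
    (- large small)))

; keeps subtracting until the result stops changing, which for 3- and
; 4-digit numbers means we've reached 495 or 6174 (or 0 if all digits match)
(define (kaprekar-routine n width)
  (let ((next (kaprekar-step n width)))
    (if (= next n)
        n
        (kaprekar-routine next width))))

> (kaprekar-routine 827 4)
6174
> (kaprekar-routine 827 3)
495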

You can see the results of the Kaprekar routine for 3-digit or 4-digit numbers using the utility below. Just enter a number in the first field and click the "Calculate" button. The results will appear below.

Sunday, August 22, 2010

SICP 1.46: Iterative improvement

From SICP section 1.3.4 Procedures as Returned Values

Exercise 1.46 asks us to combine several of the concepts we've learned throughout the first chapter of SICP into a procedure that generalizes the iterative improvement strategy. Iterative improvement is the process where we start with an initial guess, test if the guess is good enough, then improve the guess and start the process over again with the new guess. Our iterative-improve procedure will take two procedures as arguments and return a procedure as its value. The first argument will define a method for telling whether a guess is good enough, and the second will define a method for improving a guess. The returned procedure will take a guess as its argument and will keep improving that guess until it is good enough. Finally, we will rewrite the sqrt procedure from section 1.1.7 and the fixed-point procedure from section 1.3.3 using iterative-improve.

A good starting point is to take a look at the original versions of the two procedures mentioned in the exercise and see what they have in common.
; sqrt procedure and sub-procedures from SICP 1.1.7
(define (sqrt x)
  (sqrt-iter 1.0 x))

(define (sqrt-iter guess x)
  (if (good-enough? guess x)
      guess
      (sqrt-iter (improve guess x)
                 x)))

(define (improve guess x)
  (average guess (/ x guess)))

(define (average x y)
  (/ (+ x y) 2))

(define (good-enough? guess x)
  (< (abs (- (square guess) x)) 0.001))

(define (square x) (* x x))


; fixed-point procedure from SICP 1.3.3
(define tolerance 0.00001)
(define (fixed-point f first-guess)
  (define (close-enough? v1 v2)
    (< (abs (- v1 v2)) tolerance))
  (define (try guess)
    (let ((next (f guess)))
      (if (close-enough? guess next)
          next
          (try next))))
  (try first-guess))

Besides the similarities that you might expect from the description in the exercise, these two procedures also both make use of helper functions to perform the required iteration, sqrt-iter and try. We'll need a helper function in iterative-improve as well, and this will be the function that will be returned by the procedure. (We can't use a lambda to define the returned procedure like past exercises in this section because the returned procedure in this case needs to call itself recursively. To do that, it needs a name.)
(define (iterative-improve good-enough? improve)
  (define (iter-imp guess)
    (if (good-enough? guess)
        guess
        (iter-imp (improve guess))))
  iter-imp)

The solution doesn't require a lot of explanation. We define the procedure iter-imp that takes a guess as its only parameter. If the guess is good enough (as judged by the function passed in as the first argument to iterative-improve) we just return the guess. If not, we call iter-imp again with a new guess that we get by calling the improve function that was passed in as the second parameter to iterative-improve.

Let's see how we can redefine sqrt in terms of iterative-improve.
(define (sqrt x)
  ((iterative-improve (lambda (guess)
                        (< (abs (- (square guess) x))
                           0.001))
                      (lambda (guess)
                        (average guess (/ x guess))))
   1.0))

Here I've taken the implementation of the procedures for good-enough? and improve almost exactly from the earlier sqrt solution, but I've recast them as lambdas. Since iterative-improve returns a procedure, we apply that returned procedure to an initial guess of 1.0 to start things off. This implementation of sqrt works much the same as the original.
> (sqrt 4)
2.0000000929222947
> (sqrt 16)
4.000000636692939
> (sqrt 100)
10.000000000139897
> (sqrt 1000)
31.622782450701045

The reimplementation of fixed-point is very similar.
(define (fixed-point f first-guess)
  ((iterative-improve (lambda (guess)
                        (< (abs (- (f guess) guess))
                           0.00001))
                      (lambda (guess)
                        (f guess)))
   first-guess))

The good-enough? procedures for sqrt and fixed-point are nearly identical. The improve procedure for fixed-point is simpler, though, because to improve a guess when finding a fixed point you just apply the function whose fixed point you're trying to find to the guess. (In fact, since the improved guess is just (f guess), we could pass f itself as the second argument instead of wrapping it in a lambda.)

To test the new fixed-point procedure, we can use the procedure we defined in exercise 1.35 to approximate the golden ratio.
> (fixed-point (lambda (x) (+ 1 (/ 1 x))) 2.0)
1.6180371352785146
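
As one more quick check of my own (not part of the exercise), the rewritten fixed-point should reproduce the familiar example from section 1.3.3, the fixed point of cosine:
; the fixed point of cos is roughly 0.7391, so this should
; print a value very close to that
(fixed-point cos 1.0)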


Related:

For links to all of the SICP lecture notes and exercises that I've done so far, see The SICP Challenge.

Saturday, August 21, 2010

SICP 1.45: Computing nth roots

From SICP section 1.3.4 Procedures as Returned Values

Exercise 1.45 asks us to do some experiments to find out how many average damps are needed to compute nth roots as a fixed-point search based on repeated average damping of y → x/y^(n-1). Once we know that, we need to write a general procedure for finding nth roots using fixed-point, average-damp, and the repeated procedure from exercise 1.43.

We already saw in section 1.3.3 that computing a square root by searching for a fixed point of y → x/y doesn't converge, and that this can be fixed by average damping (see exercise 1.36). The same method (a single average damp) also works for finding cube roots as fixed points of y → x/y^2.
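
For reference, here's the average-damp procedure as given in the text in section 1.3.4 (along with the average helper), since the nth-root procedures below depend on it; repeated comes from exercise 1.43 and fixed-point from section 1.3.3, both covered in other posts.
(define (average-damp f)
  (lambda (x) (average x (f x))))

(define (average x y)
  (/ (+ x y) 2))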

Here's a first attempt at an nth-root procedure, which works when n is 2 or 3.
(define (nth-root x n)
  (fixed-point
   (average-damp
    (lambda (y) (/ x (expt y (- n 1)))))
   1.0))

> (nth-root 100 2)
10.0
> (nth-root 1000 3)
10.000002544054729

The same procedure does not work for fourth roots. Try running nth-root with the following parameters and you'll see that the procedure never finishes.
> (nth-root 10000 4)

This is because the fixed-point search for y → x/y^3 doesn't converge when only one average damp is used. We can fix this by average damping twice.
(define (nth-root x n)
  (fixed-point
   ((repeated average-damp 2)
    (lambda (y) (/ x (expt y (- n 1)))))
   1.0))

We'll need to run the new procedure several times to find out where average damping twice fails to converge. The code above works for finding nth roots when n is less than 8.
> (nth-root 10000 4)
10.0
> (nth-root 100000 5)
9.99999869212542
> (nth-root 1000000 6)
9.999996858149522
> (nth-root 10000000 7)
9.9999964240619

Let's increase the repetitions of average-damp one more time and see if we can spot a pattern.
(define (nth-root x n)
  (fixed-point
   ((repeated average-damp 3)
    (lambda (y) (/ x (expt y (- n 1)))))
   1.0))

> (nth-root 256 8)
2.0000000000039666
> (nth-root 512 9)
1.9999997106840102
> (nth-root 1024 10)
2.000001183010332
> (nth-root 2048 11)
1.999997600654736
> (nth-root 4096 12)
1.9999976914703093
> (nth-root 8192 13)
2.0000029085658984
> (nth-root 16384 14)
1.9999963265447058
> (nth-root 32768 15)
2.0000040951543023

When we average damp 3 times, the nth-root procedure converges for n up to 15. That covers more values of n than I expected, but it gives us enough information to see the pattern: every time n (roughly) doubles, we need one more average damp.

maximum n: 3, 7, 15
average damps: 1, 2, 3

Where a is the number of average damps, we can say

n_max = 2^(a+1) - 1

We can test this by checking that increasing the number of average damps to four makes our nth-root procedure work for values of n up to 31.
(define (nth-root x n)
  (fixed-point
   ((repeated average-damp 4)
    (lambda (y) (/ x (expt y (- n 1)))))
   1.0))

> (nth-root 2147483648 31)
1.9999951809750391

Just as expected, the procedure fails to converge if we run it with n = 32. I'm convinced that this is the right pattern.

To calculate the number of average damps from n, we just take the base-2 logarithm of n and floor the result: any n in the range 2^a ≤ n < 2^(a+1) needs a average damps, so a = floor(log_2 n). Since Scheme only has a log procedure that finds logarithms in base e, this must be what the authors meant when they said in the exercise "Assume that any arithmetic operations you need are available as primitives."

It's easy enough to write a procedure to compute log2 if we just remember that

log_n(x) = log(x) / log(n)

(define (log2 x)
  (/ (log x) (log 2)))

> (log2 16)
4.0
> (log2 64)
6.0
> (log2 65536)
16.0

With that we can fix the nth-root procedure so it works for any value of n.
(define (nth-root x n)
  (fixed-point
   ((repeated average-damp (floor (log2 n)))
    (lambda (y) (/ x (expt y (- n 1)))))
   1.0))

> (nth-root 4294967296 32)
2.000000000000006
> (nth-root 18446744073709551616 64)
2.0000000000000853
> (nth-root 340282366920938463463374607431768211456 128)
2.0000000000082006


Related:

For links to all of the SICP lecture notes and exercises that I've done so far, see The SICP Challenge.

Sunday, August 15, 2010

SICP 1.44: Smoothing a function

From SICP section 1.3.4 Procedures as Returned Values

Exercise 1.44 introduces the idea of smoothing a function, a concept from signal processing. The smoothed version of a function f is the function whose value at x is the average of f(x), f(x + dx), and f(x - dx), where dx is some small value.

We're to write a procedure smooth that takes as its input a procedure that computes f(x) and returns a procedure that computes the smoothed f. Since it is sometimes valuable to repeatedly smooth a function, we also need to show how to generate an n-fold smoothed function using smooth and the repeated procedure from exercise 1.43. The exercise doesn't specify a value for dx, so we'll pass that in as a parameter to smooth.

The smooth procedure is fairly straightforward to implement, but a little bit complicated to test. Here is the implementation:
(define (smooth f dx)
  (lambda (x)
    (/ (+ (f x)
          (f (+ x dx))
          (f (- x dx)))
       3)))

But what should we use for test data? It would be nice to have some independent data to verify that the function works, but we don't have any. We can create some using a spreadsheet, though.



You can see from the spreadsheet the effect of smoothing on the sine function. Sine is already a smooth function, so instead of the intended effect of smoothing out noise, we've just damped the function. Still, this gives us data to test with. Here's the output of our smooth procedure at 90 and 180 degrees (π/2 and π radians).
> (sin (/ pi 2))
1.0
> ((smooth sin 0.7) (/ pi 2))
0.8432281248563256
> (sin pi)
1.2246467991473532e-016
> ((smooth sin 0.7) pi)
7.401486830834377e-017
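
That first value checks out analytically: smoothing doesn't shift the sine function at all, it just scales it by (1 + 2·cos dx)/3, so at π/2 we should see exactly that factor.
; (1 + 2 cos 0.7) / 3 -- should match the smoothed value at pi/2 above
(/ (+ 1 (* 2 (cos 0.7))) 3)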

To complete the exercise we need to write an n-fold-smooth procedure that applies smooth n times, using the repeated procedure from exercise 1.43.
(define (n-fold-smooth f dx n)
  ((repeated (lambda (g) (smooth g dx)) n) f))

Repeatedly smoothing the sine function in this way should simply damp the function even more. Each pass through smooth scales sin by the same factor of (1 + 2·cos 0.7)/3 ≈ 0.8432 that we saw above, so evaluating the n-fold smoothed sine at π/2 should give roughly 0.8432^n: about 0.711 after two smoothings, 0.600 after three, and 0.506 after four.


Related:

For links to all of the SICP lecture notes and exercises that I've done so far, see The SICP Challenge.

Thursday, August 5, 2010

SICP 1.41 - 1.43: Procedures as Returned Values

From SICP section 1.3.4 Procedures as Returned Values

Exercise 1.41 asks us to define a procedure double that takes a procedure of one argument as its argument and returns a procedure that applies the original procedure twice. For example, if inc is a procedure that adds 1 to its argument, then (double inc) should return a procedure that adds 2. We need to be able to use our double procedure to evaluate expressions like
(((double (double double)) inc) 5)

Just like the previous exercise, we can use lambda to return a procedure from a procedure.
(define (double f)
  (lambda (x) (f (f x))))

We'll also need to define a couple of procedures to test with.
(define (inc x) (+ x 1))
(define (square x) (* x x))

> ((double inc) 1)
3
> ((double square) 2)
16

As you can see, the name double is a little bit misleading. The procedure doesn't apply the input procedure and then double the result; it applies the input procedure, then applies it again to that result.
So in the first test the input parameter 1 is incremented, then the resulting value is incremented again. Likewise, in the second test the input parameter 2 is squared, then the result is squared again.

Here's the result from running the test case given in the text:
> (((double (double double)) inc) 5)
21
Here we have nested calls to double itself. The inner (double double) produces a procedure that applies its argument four times; applying double to that gives a procedure that applies its argument 16 times, so inc is called 16 times on 5.
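
To see the quadrupling step on its own, here's an extra check (my own, not from the text) that evaluates just the inner combination:
> (((double double) inc) 5)
9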



Exercise 1.42 asks us to define a procedure compose that implements the composition of two functions that are supplied as input parameters. This is very similar to the double procedure we saw in the last exercise, but instead of applying the same procedure twice, we're given two different procedures.

If we evaluate
((compose square inc) 6)

we should get back a value of 49 because the input 6 will first be incremented then squared.
(define (compose f g)
  (lambda (x) (f (g x))))

We can run two tests to show the order in which the input procedures are applied.
> ((compose square inc) 6)
49
> ((compose inc square) 6)
37


Exercise 1.43 asks us to write a procedure that takes a procedure and a number n as arguments, and returns a procedure that applies the input procedure n times. We can make use of the compose procedure from the previous exercise.
(define (repeated f n)
  (if (= n 1)
      f
      (compose f (repeated f (- n 1)))))

Repeatedly incrementing a value n times should give us the expected result of just adding n to the original value.
> ((repeated inc 2) 5)
7
> ((repeated inc 10) 10)
20

Squaring a value twice has the effect of raising it to the 4th power.
> ((repeated square 2) 5)
625
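
And as one more check of my own, squaring a value three times raises it to the 8th power:
> ((repeated square 3) 3)
6561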


Related:

For links to all of the SICP lecture notes and exercises that I've done so far, see The SICP Challenge.

Sunday, August 1, 2010

SICP 1.40: Zeros of the Cubic Function

From SICP section 1.3.4 Procedures as Returned Values

Exercise 1.40 asks us to define a procedure cubic that can be used together with newtons-method (defined earlier in the section) in expressions of the form
(newtons-method (cubic a b c) 1)

to approximate zeros of cubic functions of the form

f(x) = x^3 + ax^2 + bx + c

Note that this is a simplification of the standard equation

f(x) = ax^3 + bx^2 + cx + d

You arrive at the simplified form by dividing by the leading coefficient (see Cardano's method). This distinction will become important when we need to check our solution against known values.

Quite a lot of code that we'll need was given in the text, starting with the fixed-point procedure from section 1.3.3.
(define tolerance 0.00001)
(define (fixed-point f first-guess)
  (define (close-enough? v1 v2)
    (< (abs (- v1 v2)) tolerance))
  (define (try guess)
    (let ((next (f guess)))
      (if (close-enough? guess next)
          next
          (try next))))
  (try first-guess))

We'll also be using the code from section 1.3.4 that defines newtons-method.
(define dx 0.00001)
(define (deriv g)
  (lambda (x)
    (/ (- (g (+ x dx)) (g x))
       dx)))

(define (newton-transform g)
  (lambda (x)
    (- x (/ (g x) ((deriv g) x)))))

(define (newtons-method g guess)
  (fixed-point (newton-transform g) guess))

The newtons-method procedure takes as its arguments a procedure that computes the function for which we want to find a zero, along with an initial guess. That means the cubic procedure itself needs to return a procedure. The returned procedure should take one parameter and compute the value of the cubic function given at the top. Combining it with newtons-method will compute the zeros of that function.
(define (cubic a b c)
  (lambda (x)
    (+ (* x x x)
       (* a x x)
       (* b x)
       c)))
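
As a quick spot check of my own before bringing in Newton's method: with a = 0, b = 0, and c = -8 the returned procedure computes x^3 - 8, so applying it to 2 should give exactly zero.
> ((cubic 0 0 -8) 2)
0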

We need to know some zeros of the cubic function with specific values of a, b, and c so we can check our work. There are online tools like the Cubic Equation Calculator and Cubic Function Explorer that take four coefficients and solve cubic functions in the standard form. We can use either of these tools with the a term set to 1 to check the results of our procedure.

Adjust the sliders on the Cubic Function Explorer until the curve crosses the x-axis very close to a whole number (again, make sure the a slider is set to 1).


Now we can plug the explorer's b, c, and d coefficients in as the a, b, and c arguments to cubic and check our solution.
> (newtons-method (cubic 3 -2.4 6) 1)
-3.981336651146034

You should also verify this solution using the same coefficients with the Cubic Equation Calculator mentioned above.
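
A cubic with a known root makes another easy check of my own: x^3 - 8 has its only real zero at x = 2, so Newton's method applied to (cubic 0 0 -8) should land very close to 2.
; should print a value very close to 2
(newtons-method (cubic 0 0 -8) 1)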


Related:

For links to all of the SICP lecture notes and exercises that I've done so far, see The SICP Challenge.