Saturday, February 27, 2010

"Goes to" Considered Harmful

A few months ago the following question was asked (half jokingly) on Stack Overflow,
What is the name of this operator: “-->”?
The question includes the following C++ code:
#include <stdio.h>

int main()
{
    int x = 10;
    while( x --> 0 ) // x goes to 0
    {
        printf("%d ", x);
    }
}
The program simply counts down from 9 to 0, printing each value along the way. As many others pointed out, "-->" isn't a single operator at all, but two operators smashed together. The condition of the while loop above can be rewritten with proper spacing as:
while( x-- > 0 ) // x-- is greater than 0
{
    ...
}
Joshua Bloch commented on the goes-to operator on Twitter yesterday. He links to the results of a Google Code search for instances of "-->" in the wild. The results are specific to the C programming language, but "-->" turns up in similar searches for C++ and Java as well (and probably any other language that allows it).

What's the harm?

So what's so harmful about this bit of cleverness? The main problem is that it's too clever. Any time you use standard operators in a non-standard way (even if the language specification doesn't strictly forbid it) you're negatively impacting the readability of your code. In this specific case, the "-->" operator isn't documented anywhere. It's just not reasonable to expect other programmers to know what it means.

On top of that, this situation isn't best handled by a while loop to begin with. If you know ahead of time how many iterations your loop needs to make, use a for loop. Kernighan and Ritchie covered this a very long time ago in The C Programming Language. It's still true today.
The for is preferable when there is a simple initialization and increment, since it keeps the loop control statements close together and visible at the top of the loop.
This, of course, is also true for a simple decrementing loop. The meaning of the following code should be obvious to any junior programmer.
int i;
for( i = 9; i >= 0; i-- )
{
    printf("%d ", i);
}

Writing code that's "too clever" for other programmers to understand immediately just obfuscates your code needlessly. You'll seem much cleverer to your colleagues if you write your code in a clear and readable style to begin with. Besides that, using the common idioms of your language is probably the quickest way to reduce your WTFs/minute score at your next code review.

Tuesday, February 23, 2010

SICP Exercise 1.27: Carmichael numbers

From SICP section 1.2.6 Example: Testing for Primality

Exercise 1.27 asks us to demonstrate that the first six Carmichael numbers (561, 1105, 1729, 2465, 2821, and 6601) really do fool the Fermat test. We're to write a procedure that takes an integer n and tests whether a^n is congruent to a modulo n for every value of a less than n, then test the procedure on each of the Carmichael numbers listed.

The fermat-test procedure that we first saw in exercise 1.24 is a good starting point.
(define (fermat-test n)
  (define (try-it a)
    (= (expmod a n n) a))
  (try-it (+ 1 (random-integer (- n 1)))))

Let's start by modifying this procedure so that we pass it the value to test, instead of generating a random value.
(define (fermat-test n a)
  (= (expmod a n n) a))

Now we just need to define a new procedure that will call this one for every value less than the one we want to test. If fermat-test passes for all values tested we should return true, but if it fails for any value we should return false. Here is the complete code listing for the exercise.
(define (square x)
  (* x x))

(define (even? n)
  (= (remainder n 2) 0))

(define (expmod base exp m)
  (cond ((= exp 0) 1)
        ((even? exp)
         (remainder (square (expmod base (/ exp 2) m))
                    m))
        (else
         (remainder (* base (expmod base (- exp 1) m))
                    m))))

(define (fermat-test n a)
  (= (expmod a n n) a))

(define (fermat-full n)
  (define (iter a)
    (cond ((= a 1) #t)
          ((not (fermat-test n a)) #f)
          (else (iter (- a 1)))))
  (iter (- n 1)))

Prime numbers will legitimately pass this test, and Carmichael numbers will fool it. All other composites should fail. You can test out a few primes and composites in a Scheme interpreter to make sure it works as described. Here's the output for the first six Carmichael numbers.
> (fermat-full 561)
#t
> (fermat-full 1105)
#t
> (fermat-full 1729)
#t
> (fermat-full 2465)
#t
> (fermat-full 2821)
#t
> (fermat-full 6601)
#t
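As a quick sanity check of my own (these values aren't from the book), an ordinary prime should pass and an ordinary composite should fail:
> (fermat-full 7)
#t
> (fermat-full 15)
#f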


Related:

For links to all of the SICP lecture notes and exercises that I've done so far, see The SICP Challenge.

SICP Exercise 1.26: Explicit multiplication vs. squaring

From SICP section 1.2.6 Example: Testing for Primality

Exercise 1.26 asks us to consider yet another variation on the fast-prime? procedure from exercise 1.24. This time the expmod procedure has been modified to use an explicit multiplication instead of calling square:
(define (expmod base exp m)
  (cond ((= exp 0) 1)
        ((even? exp)
         (remainder (* (expmod base (/ exp 2) m)
                       (expmod base (/ exp 2) m))
                    m))
        (else
         (remainder (* base (expmod base (- exp 1) m))
                    m))))

This change causes fast-prime? to run more slowly than the original prime? test by transforming what was an O(log n) process into an O(n) process. We're asked to explain this transformation.

A cursory inspection of the code makes it seem like the explicit multiplication should just cause twice as many calls to expmod, since each argument to * is evaluated separately instead of expmod being evaluated only once as the single argument to square. That alone isn't enough to account for the reported slowdown, though.

Let's take a closer look at the process generated by each version of expmod. (Since the only difference between the two versions of expmod is in the branch where exp is even, I'm using powers of 2 for that argument to better illustrate the difference in the resulting processes. I should point out that this is a worst case scenario.)

Here is what the process generated by the original expmod procedure using square might look like:
(expmod 2 8 2)
(expmod 2 4 2)
(expmod 2 2 2)
(expmod 2 1 2)
(expmod 2 0 2)
1
0
0
0
0

This is a pretty straightforward linear recursive process. Now let's look at the process for expmod when explicit multiplication is used. (This is only part of the process diagram, but it's enough to see what's going on.)

[Process diagram: the call tree for (expmod 2 8 2), where each even-exponent step branches into two identical recursive calls.]

Right away, it's easy to see what went wrong here. By replacing the call to square with explicit multiplication, we've transformed expmod from a linear recursive process (like the factorial example) into a tree recursion (like the original Fibonacci example). This causes the number of recursive calls to expmod to grow exponentially instead of simply doubling. What was once an O(log n) process is now O(log 2^n), which simplifies to O(n).
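If you want to see the blow-up concretely, here's an instrumented version of the modified expmod (a sketch of my own, not part of the exercise) that counts its calls:

(define call-count 0)

(define (expmod-counted base exp m)
  ;; Same as the exercise's expmod with explicit multiplication,
  ;; but increments call-count on every invocation.
  (set! call-count (+ call-count 1))
  (cond ((= exp 0) 1)
        ((even? exp)
         (remainder (* (expmod-counted base (/ exp 2) m)
                       (expmod-counted base (/ exp 2) m))
                    m))
        (else
         (remainder (* base (expmod-counted base (- exp 1) m))
                    m))))

> (expmod-counted 2 1024 5)
1
> call-count
3071

For exp = 2^k this version makes 3 * 2^k - 1 calls, while the original squaring version makes only k + 2. Counting the same computation with the original expmod gives 12 calls instead of 3071.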


Related:

For links to all of the SICP lecture notes and exercises that I've done so far, see The SICP Challenge.

Sunday, February 21, 2010

SICP Exercise 1.25: A closer look at expmod

From SICP section 1.2.6 Example: Testing for Primality

Exercise 1.25 asks us to take a closer look at the expmod procedure that we used in exercise 1.24 to compute the exponential of a number modulo another number:
(define (expmod base exp m)
  (cond ((= exp 0) 1)
        ((even? exp)
         (remainder (square (expmod base (/ exp 2) m))
                    m))
        (else
         (remainder (* base (expmod base (- exp 1) m))
                    m))))

We're asked to consider a more straightforward implementation of the same function. This version makes use of the fast-expt procedure from section 1.2.4, which I've also included here.
(define (expmod base exp m)
  (remainder (fast-expt base exp) m))

(define (fast-expt b n)
  (cond ((= n 0) 1)
        ((even? n) (square (fast-expt b (/ n 2))))
        (else (* b (fast-expt b (- n 1))))))

Will this version perform just as well in the fast prime tester from exercise 1.24? The easiest way to answer that question is to drop it in and test it out. Starting out with relatively small values, you can see that the new procedure doesn't perform very well at all.
> (timed-prime-test 1999)

1999 *** 138.0
> (timed-prime-test 10007)

10007 *** 681.0
> (timed-prime-test 100003)

100003 *** 29059.0

These values are several orders of magnitude smaller than those used in the previous exercise, but the new procedure is taking a much longer time to complete. Why is that?

The answer is buried in a footnote to the expmod code (#46).
The reduction steps in the cases where the exponent e is greater than 1 are based on the fact that, for any integers x, y, and m, we can find the remainder of x times y modulo m by computing separately the remainders of x modulo m and y modulo m, multiplying these, and then taking the remainder of the result modulo m. For instance, in the case where e is even, we compute the remainder of b^(e/2) modulo m, square this, and take the remainder modulo m. This technique is useful because it means we can perform our computation without ever having to deal with numbers much larger than m.

The important point is that the original expmod procedure uses successive squaring to perform its computations without ever having to deal with numbers larger than m. Simple arithmetic with very large numbers is much slower than arithmetic with 32-bit integers. That's because the time complexity for arithmetic operations is based on the number of bits in the operands. The intermediate results during computation in the fast-expt procedure get very big, very quickly (the final result is growing exponentially, after all). Since we're only interested in the remainder, modular arithmetic provides a much sleeker solution, as explained in the footnote.
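To get a feel for the sizes involved, here's a quick check (the digits helper is my own, not from the book):

(define (digits n)
  ;; Number of decimal digits in n.
  (string-length (number->string n)))

> (digits (fast-expt 2 1000))
302
> (expmod 2 1000 7)   ; the original expmod's intermediates never exceed 7^2 = 49
2

The fast-expt version has to build that 302-digit number in full before taking a single remainder, while the original expmod works with small numbers the whole way through.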


Related:

For links to all of the SICP lecture notes and exercises that I've done so far, see The SICP Challenge.

Saturday, February 20, 2010

SICP Exercise 1.24: The Fermat Test

From SICP section 1.2.6 Example: Testing for Primality

Exercise 1.24 asks us to once again modify the timed-prime-test procedure from exercise 1.22, this time using fast-prime? (which uses the Fermat test), then test the improvement using the timing statistics we gathered before.

The Fermat test is a probabilistic method for testing primality, meaning that if a given number passes the Fermat test, the best we can say is that the number is very probably prime. It's based on Fermat's Little Theorem which states:
If n is a prime number and a is any positive integer less than n, then a raised to the nth power is congruent to a modulo n.
So to test n for primality, we select a number, a, which is less than n, and raise it to the nth power. We then divide a^n by n to get the remainder. If n is prime, then the remainder will be equal to a, the base we selected.
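As a concrete example (my own numbers, not from the book): take n = 7 and a = 3. Then 3^7 = 2187 = 312 * 7 + 3, so the remainder is 3, the base we selected. The expmod procedure in the listing below computes exactly this:
> (expmod 3 7 7)
3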

Fermat's Little Theorem says that this property will always hold true if n is prime, but it doesn't say anything about when n isn't prime. If a^n mod n is not equal to a, then we know for certain that n is not prime. However, there are values of n and a that will pass the Fermat test even when n is not prime. If we test enough values of a for a given n, we can increase our confidence that n is prime. Unfortunately, there are extremely rare values of n called Carmichael numbers that pass the Fermat test for any value of a. There are variations on the Fermat test that cannot be fooled. We'll look at one of those variations, the Miller-Rabin test, in a later exercise.

The complete code listing below shows what modifications were necessary to use the Fermat test in the timed-prime-test procedure. The fast-prime? procedure takes two arguments, n, and times. The first argument is the number to test, the second argument tells the procedure how many random values to test with. The more random values we use, the higher our confidence that n is prime, so we should test a lot of values. I've (rather arbitrarily) chosen to test 100 random values.

One other change that you may notice is the inclusion of a library module in the first line of code. When testing the original code listing from the book, I found that Scheme's primitive random procedure has a limit of 4,294,967,087. This won't do for the values we're testing, so I replaced it with the random-integer procedure, which doesn't have this limitation. I found out about this procedure and library through the Scheme Cookbook.
(require (lib "27.ss" "srfi"))

(define (timed-prime-test n)
  (newline)
  (display n)
  (start-prime-test n (current-inexact-milliseconds)))

(define (start-prime-test n start-time)
  (cond ((fast-prime? n 100)
         (report-prime (- (current-inexact-milliseconds) start-time)))))

(define (report-prime elapsed-time)
  (display " *** ")
  (display elapsed-time))

(define (square x)
  (* x x))

(define (even? n)
  (= (remainder n 2) 0))

(define (expmod base exp m)
  (cond ((= exp 0) 1)
        ((even? exp)
         (remainder (square (expmod base (/ exp 2) m))
                    m))
        (else
         (remainder (* base (expmod base (- exp 1) m))
                    m))))

(define (fermat-test n)
  (define (try-it a)
    (= (expmod a n n) a))
  (try-it (+ 1 (random-integer (- n 1)))))

(define (fast-prime? n times)
  (cond ((= times 0) true)
        ((fermat-test n) (fast-prime? n (- times 1)))
        (else false)))

Test Results


We need to run timed-prime-test again with the new modifications using the same set of inputs that we found in the previous exercises, then compare the results to see how much we improved the run time. The following table compares the original data we gathered in exercise 1.22, the improved algorithm we used in exercise 1.23 (improved time column), and the new results we got using the Fermat test (all three time columns are in milliseconds). The ratio column is the ratio of the improved values from exercise 1.23 to the fermat time column.

prime               original time   improved time   fermat time   ratio
10000000019               141             103           178        0.58
10000000033               172             101           187        0.54
10000000061               154             112           173        0.65
100000000003              516             310           181        1.71
100000000019              493             295           186        1.59
100000000057              527             305           189        1.61
1000000000039            1627             861           189        4.56
1000000000061            1559             884           195        4.53
1000000000063            1549             873           197        4.43
10000000000037           5014            2671           208       12.84
10000000000051           4932            2668           211       12.64
10000000000099           4855            2697           208       12.97
100000000000031         15745            8531           233       36.61
100000000000067         16022            8264           231       35.77
100000000000097         15861            8530           225       37.91
1000000000000037        48950           26346           244      108.0
1000000000000091        48836           26210           255      102.8
1000000000000159        49008           26256           257      102.2


Analysis

The timing numbers from the Fermat test start out looking pretty poor compared to what we've seen previously. This is mostly due to the completely arbitrary number of random values (100) I chose to test each prime with. As the size of the numbers under test increases, though, the time taken by the Fermat test barely increases at all. We can check that the growth is logarithmic by looking at the ratio column: each time the numbers under test grow by a factor of 10, the ratio grows by a factor of roughly three, which is just what we'd expect if the improved (square-root-of-n) times grow by a factor of about 3.16 per step while the Fermat times stay nearly flat. This logarithmic behavior comes from the expmod procedure used in fermat-test, which has logarithmic time complexity.
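As a rough numerical check (my own arithmetic from the table above): the average Fermat time grows from about 179 ms for primes near 10^10 to about 252 ms for primes near 10^15. Over that span log n grows by a factor of 15/10 = 1.5, and the measured times grow by a factor of about 252/179, or roughly 1.4, which is reasonably close given the measurement noise.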


Related:

For links to all of the SICP lecture notes and exercises that I've done so far, see The SICP Challenge.

Sunday, February 14, 2010

Solution to A Curiously Clever Chess Puzzle

In yesterday's post I asked two questions about the following chess position.

[Chess diagram: the puzzle position]

First, how is this a legal position, and second, how can white guarantee a checkmate in four moves or fewer?

This puzzle is much more commonly known as Lord Dunsany's Chess Problem, although he had several others. I first ran across this particular problem by Lord Dunsany in Martin Gardner's My Best Mathematical and Logic Puzzles.

I'll let Mr. Gardner explain the first part of the puzzle:
The key to Lord Dunsany's chess problem is the fact that the black queen is not on a black square as she must be at the start of a game. This means that the black king and queen have moved, and this could have happened only if some black pawns have moved. Pawns cannot move backward, so we are forced to conclude that the black pawns reached their present positions from the other side of the board! With this in mind, it is easy to discover that the white knight on the right has an easy mate in four moves.

If that seems like a dirty, dirty trick to you, that's because it is. Chess diagrams are traditionally drawn from the perspective of the white player (white pieces at the bottom of the diagram in the starting position), so this puzzle really forces you to check your assumptions and "think outside the box."

As Gardner mentioned, the mate in four is easy once you've mentally turned the board around. First, white moves his knight in front of his king, threatening mate in two more moves. Black can delay white's plan by one move by moving his own knight out into the bishop's file.

[Chess diagram: the position after these knight moves]

When white advances his knight on the bishop's file, black can protect the mating square with his own knight.

[Chess diagram: the position after the second pair of knight moves]

Next, white can capture the knight with his queen. After that, black is defenseless to stop the white knight from delivering checkmate on the fourth move. The black king is hemmed in by his own pieces.

All chess diagrams are courtesy of the Online Chess Diagram Editor.

A Curiously Clever Chess Puzzle

It's been some time since I've posted any logic puzzles, and I don't remember ever posting a chess puzzle, so today I'll kill two birds with one stone. In fact, the following position represents two puzzles in one. (You don't have to be a grandmaster at chess to solve it, so even if you only know the basic rules of the game you should give it a try.)

[Chess diagram: the puzzle position]

The first part of the puzzle requires more logical thinking than chess skill. Explain how this highly unlikely position could possibly be reached in a game. You don't have to show all the moves, but you should be able to convince yourself that it is a legal position.

Second, it's white's turn to move. Show how white can checkmate black in at most four moves.

I'll give the answer and the original source of this puzzle in an upcoming post. (Update: the full solution is up.)

The diagram image is courtesy of the Online Chess Diagram Editor.

Monday, February 8, 2010

SICP Exercise 1.23: Improved Prime Test

From SICP section 1.2.6 Example: Testing for Primality

Exercise 1.23 asks us to add a modification to the smallest-divisor procedure, then test the improvement using the timing statistics we gathered in exercise 1.22.

We start with the observation that smallest-divisor inefficiently finds the divisor of n by checking every candidate value from 2 to √n. The number of candidates can easily be cut nearly in half by simply not checking even numbers greater than 2.

Our first task is to define a procedure next that returns 3 if its input is equal to 2 and otherwise returns its input plus 2. The code is as straightforward as the description:
(define (next x)
  (if (= x 2) 3 (+ x 2)))

Our next task is to modify the find-divisor procedure to use (next test-divisor) instead of (+ test-divisor 1). This is a straight substitution, and no other changes to the code from exercise 1.22 are necessary.
(define (find-divisor n test-divisor)
  (cond ((> (square test-divisor) n) n)
        ((divides? test-divisor n) test-divisor)
        (else (find-divisor n (next test-divisor)))))

Finally, we need to run timed-prime-test with these modifications using the same set of inputs that we found in the previous exercise, then compare the results to see if we really did cut the run time in half. The following table compares the original data to new timing data gathered using the improved algorithm. (Values in the new time column are averaged from three runs of the procedure.)

prime               old time (ms)   new time (ms)   ratio
10000000019               141             103        1.37
10000000033               172             101        1.70
10000000061               154             112        1.38
100000000003              516             310        1.67
100000000019              493             295        1.67
100000000057              527             305        1.73
1000000000039            1627             861        1.89
1000000000061            1559             884        1.76
1000000000063            1549             873        1.77
10000000000037           5014            2671        1.88
10000000000051           4932            2668        1.85
10000000000099           4855            2697        1.80
100000000000031         15745            8531        1.85
100000000000067         16022            8264        1.94
100000000000097         15861            8530        1.86
1000000000000037        48950           26346        1.86
1000000000000091        48836           26210        1.86
1000000000000159        49008           26256        1.87


Analysis


We're seeing a clear improvement in the new procedure, but it's not quite as fast as we expected. The first thing that needs to be explained in this data is that the first three rows show very little performance gain, the next three a little more, and then fairly consistent results for the remaining data. I think this can be explained by other processes running on the computer. Measurements of shorter runs of the procedure (those in the 100-500 millisecond range) are much more sensitive to error from interruptions by background processes. These errors make up a less significant proportion of the total run time for longer runs of the procedure.

We're also seeing that the procedure is only running approximately 1.85 times as fast, instead of the expected factor of 2. This may be explained by the fact that we replaced a primitive operation, (+ test-divisor 1), with a user-defined operation, (next test-divisor). Each time that user-defined operation is called, an extra if must be evaluated (to check whether the input is 2). Other than this small discrepancy, I think the improvement is quite good for such a small change to the code.
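For what it's worth, here's one way to eliminate that extra check (a variation of my own, not part of the exercise): handle divisibility by 2 once in smallest-divisor, then let find-divisor step by 2 using the primitive + directly.

(define (smallest-divisor n)
  (if (divides? 2 n)
      2                      ; n is even, so 2 is its smallest divisor
      (find-divisor n 3)))   ; otherwise only odd candidates matter

(define (find-divisor n test-divisor)
  (cond ((> (square test-divisor) n) n)
        ((divides? test-divisor n) test-divisor)
        (else (find-divisor n (+ test-divisor 2)))))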


Related:

For links to all of the SICP lecture notes and exercises that I've done so far, see The SICP Challenge.

Friday, February 5, 2010

SICP Exercise 1.22: Timed Prime Test

From SICP section 1.2.6 Example: Testing for Primality

Exercise 1.22 introduces a new primitive called runtime...
Most Lisp implementations include a primitive called runtime that returns an integer that specifies the amount of time the system has been running (measured, for example, in microseconds).

Unfortunately, this turns out not to be the case for Scheme. After a bit of research, I found what I think to be a suitable replacement: current-inexact-milliseconds. It's not as precise as I'd like, but there are a couple of workarounds to this problem that I'll explain shortly.

The exercise goes on to introduce the timed-prime-test procedure, which prints its input n and tests to see if n is prime. If n is prime, the procedure prints three asterisks followed by the amount of time used in performing the test. Here is the modified version that I'll be using, including all of the required support procedures from the book:
(define (timed-prime-test n)
  (newline)
  (display n)
  (start-prime-test n (current-inexact-milliseconds)))

(define (start-prime-test n start-time)
  (cond ((prime? n)
         (report-prime (- (current-inexact-milliseconds) start-time)))))

(define (report-prime elapsed-time)
  (display " *** ")
  (display elapsed-time))

(define (smallest-divisor n)
  (find-divisor n 2))

(define (find-divisor n test-divisor)
  (cond ((> (square test-divisor) n) n)
        ((divides? test-divisor n) test-divisor)
        (else (find-divisor n (+ test-divisor 1)))))

(define (divides? a b)
  (= (remainder b a) 0))

(define (square x)
  (* x x))

(define (prime? n)
  (= n (smallest-divisor n)))

Our task (finally) is to use timed-prime-test to write a procedure named search-for-primes that checks the primality of consecutive odd integers in a specified range. We're instructed to use this procedure to find the three smallest primes larger than 1000, 10000, 100000, and 1000000. Since the prime? procedure has an order of growth of O(√n), we should expect a run time increase of a factor of √10 at each step. We'll use the output from timed-prime-test to check this assumption.

The following procedure checks the primality of consecutive odd integers in the range from start to end.
(define (search-for-primes start end)
  (if (even? start)
      (search-for-primes (+ start 1) end)
      (cond ((< start end)
             (timed-prime-test start)
             (search-for-primes (+ start 2) end)))))

(define (even? n)
  (= (remainder n 2) 0))

The procedure initially checks to see if the start input argument is even. If it is, the procedure simply starts over with the next (odd) integer. Next the procedure checks to see if start is less than end. If so, it performs a timed-prime-test on start, then recursively calls itself with the next odd integer as a starting value. This process will continue until start exceeds end.

With a little bit of trial and error we can find start and end values that surround the first three primes larger than our target values. Here's a sample run starting at 1000.
> (search-for-primes 1000 1020)

1001
1003
1005
1007
1009 *** 1.0
1011
1013 *** 1.0
1015
1017
1019 *** 0.0

We can tell from this output that our timer's resolution isn't fine enough to give us meaningful results on modern desktop hardware (even if it is cheap). I see two things we can do to work around this problem. First, we could ignore the suggested inputs from the exercise and just keep increasing the input values until we get meaningful measurements. Second, we could run prime? in a loop and measure how long it takes to test a value 1000 (or maybe more) times. The code is ready as-is for the first option, so that's what I'm going to try. (If anyone implements the second option, please leave a comment so we can compare results.)
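For the curious, here's a minimal sketch of what that second option might look like (my own untested take; the procedure name and the choice of iteration count are arbitrary):

(define (average-prime-test n count)
  ;; Run prime? on n count times, then report the average time per test.
  ;; Dividing the total elapsed time by count turns millisecond resolution
  ;; into sub-millisecond resolution.
  (let ((start (current-inexact-milliseconds)))
    (let loop ((i count))
      (when (> i 0)
        (prime? n)
        (loop (- i 1))))
    (display n)
    (display " *** ")
    (display (/ (- (current-inexact-milliseconds) start) count))))

A call like (average-prime-test 1009 1000) would then print the averaged time for 1009.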

The first consistent results I got were by using a start value of 10^10. It took an average of about 156 milliseconds to test each of the first three primes after that value. The following table shows details of the time required to test the primality of numbers spread over several orders of magnitude.

start   primes              time (ms)   avg time (ms)   ratio
10^10   10000000019             141        155.67         -
        10000000033             172
        10000000061             154
10^11   100000000003            516        512            3.289
        100000000019            493
        100000000057            527
10^12   1000000000039          1627       1578.33         3.082
        1000000000061          1559
        1000000000063          1549
10^13   10000000000037         5014       4933.67         3.126
        10000000000051         4932
        10000000000099         4855
10^14   100000000000031       15745      15876            3.218
        100000000000067       16022
        100000000000097       15861
10^15   1000000000000037      48950      48931.33         3.082
        1000000000000091      48836
        1000000000000159      49008


The last column shows the ratio of the average time for each row to the average time for the preceding row. These ratios stay very close to √10, which is approximately 3.162. These results do seem to verify that programs on my machine run in time proportional to the number of steps required for the computation.


Related:

For links to all of the SICP lecture notes and exercises that I've done so far, see The SICP Challenge.

Monday, February 1, 2010

Unsolved: Perfect Numbers and Mersenne Primes

In an earlier post I talked about perfect numbers, and how it was still unknown whether there are any odd perfect numbers. There's another unsolved problem surrounding the perfect numbers, and this one intersects with a special set of numbers that you may have heard of, the Mersenne primes.

Mersenne numbers and Mersenne primes

Mersenne numbers take their name from the French monk and mathematician Father Marin Mersenne, even though they were studied as long ago as the times of Euclid. You construct Mersenne numbers by subtracting 1 from a power of 2.

Mn = 2^n - 1

The following table details the construction of the first sixteen Mersenne numbers. (Rows marked with * are those where 2^n - 1 is prime.)

 n      2^n     2^n - 1
 1        2        1
 2        4        3 *
 3        8        7 *
 4       16       15
 5       32       31 *
 6       64       63
 7      128      127 *
 8      256      255
 9      512      511
10     1024     1023
11     2048     2047
12     4096     4095
13     8192     8191 *
14    16384    16383
15    32768    32767
16    65536    65535


If you look closely at the marked rows you may notice what mathematicians have known for centuries. A Mersenne number (2^n - 1) cannot be prime unless the exponent n is prime. This is often expressed with a slightly different notation.

Mp = 2^p - 1

What wasn't known in the time of the ancient Greeks was whether or not the exponent being prime meant that Mp would always be prime. It wasn't until the 16th century, when it was discovered that 2^11 - 1 = 2047 is not prime, that this question was finally settled (2047 = 23 * 89). In order for Mp to be prime, it is necessary but not sufficient for the exponent p to also be prime.
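A quick way to check this in Scheme (a sketch of mine, reusing the smallest-divisor procedure from exercise 1.22):

(define (mersenne n)
  ;; The nth Mersenne number, 2^n - 1.
  (- (expt 2 n) 1))

> (smallest-divisor (mersenne 11))   ; 2047 = 23 * 89, so M11 is composite
23
> (smallest-divisor (mersenne 13))   ; 8191 has no smaller divisor: it's prime
8191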

Constructing Perfect Numbers

Perfect numbers have also been studied since the times of the early Greek mathematicians. It was Euclid who first discovered that the first four perfect numbers are given by the formula

P = 2^(p-1) * (2^p - 1)

6 = 2^1 * (2^2 - 1)
28 = 2^2 * (2^3 - 1)
496 = 2^4 * (2^5 - 1)
8128 = 2^6 * (2^7 - 1)
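These are easy to reproduce in Scheme (a one-liner of my own, not from any of the exercises):

(define (perfect-from-mersenne p)
  ;; 2^(p-1) * (2^p - 1), which is perfect whenever 2^p - 1 is prime.
  (* (expt 2 (- p 1)) (- (expt 2 p) 1)))

> (map perfect-from-mersenne '(2 3 5 7))
(6 28 496 8128)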

It wasn't until 1849 that it was shown by Euler (in a paper published after his death) that all even perfect numbers are of the form 2^(p-1) * (2^p - 1), where (2^p - 1) is a Mersenne prime.

Since every even perfect number corresponds to a Mersenne prime (and vice versa), searching for perfect numbers is the same as searching for Mersenne primes. This leads us to two unsolved problems.

It isn't known if there are infinitely many even perfect numbers.

It isn't known if there are infinitely many Mersenne primes.

(In fact, given the 1-to-1 correspondence between even perfect numbers and Mersenne primes, it really wouldn't be incorrect to say that either one of these problems is just a restatement of the other.)

The largest prime number yet discovered as of this writing is the Mersenne prime 2^43,112,609 - 1, which has a staggering 12,978,189 digits. The corresponding perfect number, 2^43,112,608 * (2^43,112,609 - 1), has 25,956,377 digits.
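(A quick sanity check on that digit count, using my own arithmetic: the number of decimal digits in 2^p - 1 is floor(p * log10(2)) + 1. With p = 43,112,609 and log10(2) ≈ 0.30103, that works out to floor(12,978,188.5) + 1 = 12,978,189.)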


Additional reading

In this article I've barely scratched the surface of prime number research. There is a very large network of people and computers dedicated to the search for Mersenne primes. For more information (and to possibly join in the search) have a look at the Great Internet Mersenne Prime Search (GIMPS).