Lecture 1 - Overview and Introduction to Lisp
Covers Text Section 1.1
You can download the video lecture or stream it on MIT's OpenCourseWare site. It's also available on YouTube.
This first lecture is presented by MIT's Professor Hal Abelson.
Part 1
Prof. Abelson opens the lecture with the following “definition”
Computer Science – It's not really a science, and not about computers.
He supports the first half of this definition with the idea that computer science might be engineering or art, but that it is more akin to magic than science. A process can be thought of as like a magical spirit that lives in the computer. A pattern of rules called a procedure is like a magical spell, and the language that these spells are written in is Lisp.
These are the only references in this lecture to “computer science as magic.” Since a wizard graces the cover of the course text and the introductory slides, I hope the idea will be fleshed out at some point.
The second half of the definition, that computer science isn't about computers, is supported with the following observations:
- Physics is not really about particle accelerators.
- Biology is not about microscopes and petri dishes.
- Geometry (originally from the Greek words for “Earth” and “measure”) is not about surveying instruments.
Computer Science is no more about computers than astronomy is about telescopes.
Prof. Abelson gives the following examples to illustrate the distinction between declarative knowledge and imperative knowledge (what he terms “how-to” knowledge).
Declarative – The square root of x is the y such that
y2 = x and y ≥ 0
Imperative – In order to find the square root of x (using Heron of Alexandria's method of successive averaging):
- Make a guess, G.
- Improve the guess by averaging G and x/G.
- Keep improving the guess until it is good enough.
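Tracing these steps for x = 2, starting with a guess of 1:

- G = 1: average(1, 2/1) = 1.5
- G = 1.5: average(1.5, 2/1.5) ≈ 1.4167
- G ≈ 1.4167: average(1.4167, 2/1.4167) ≈ 1.4142

Three improvements already agree with the square root of 2 to four decimal places.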
Declarative knowledge is about stating and cataloguing facts. Imperative knowledge is about the methods and processes for finding knowledge (both declarative and imperative). Imperative knowledge is the meta-language for producing new knowledge of both types.
Computer science is really about techniques for controlling the complexity of large systems.
The three main techniques for controlling complexity in computer science, and the three main topics for discussion in the course, are:
- Black Box Abstraction
- Conventional Interfaces
- Meta-Linguistic Abstraction (making new languages)
Black Box Abstraction – From an outside observer's point of view, the internals of how a procedure works should not be important. The example given in the lecture is the square root procedure from earlier. If we write a square root procedure, it should be modular in the sense that someone else can take it and use it as one step in their own procedures, and not need to worry about how it works.
Another reason to use Black Box Abstraction is that your imperative knowledge might be expressed in terms of a more general form of some other piece of imperative knowledge. The example given is a strategy for finding fixed points. Given a general procedure for finding fixed points, you can use it to find square roots by passing it a function as its data.
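This generalization is developed later in the course, but a rough sketch gives the flavor. The names fixed-point and close-enough?, and the 0.00001 tolerance below, are my own assumptions, not taken from this lecture:

```scheme
;; Sketch (assumed names/tolerance): repeatedly apply f until the
;; output stops changing, i.e. until we reach a fixed point of f.
(define (fixed-point f start)
  (define tolerance 0.00001)
  (define (close-enough? u v)
    (< (abs (- u v)) tolerance))
  (define (iter old new)
    (if (close-enough? old new)
        new
        (iter new (f new))))
  (iter start (f start)))

;; Square root as a fixed point: sqrt(x) is the y left unchanged
;; by averaging y with x/y.
(define (sqrt x)
  (fixed-point (lambda (y) (/ (+ y (/ x y)) 2))
               1.0))
```

Passing the averaging function as data to fixed-point is exactly the kind of reuse Black Box Abstraction is meant to enable.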
Black Box Abstraction
- Primitive Objects
  - Primitive Procedures
  - Primitive Data
- Means of Combination
  - Procedure composition
  - Construction of compound data
- Means of Abstraction
  - Procedure definition
  - Simple data abstraction
- Capturing Common Patterns
  - Higher-order procedures
  - Data as procedures
Conventional Interfaces – agreed-upon ways of plugging things together.
- Generic Operations
- Large-scale structure and modularity
- Object-oriented programming
- Operations on aggregates
Meta-linguistic Abstraction
- Talks about how you construct new languages.
- Looks at the process of interpretation and evaluation of code.
Part 2
In the first part of the lecture, Prof. Abelson noted that, like the rules of chess, the rules of Lisp can be learned in a few minutes but take years to master. The second part of the lecture talks about Lisp specifically, but more generally about the three main aspects of any programming language.
Primitive Elements – What does the language include in terms of data and procedures?
Means of Combination – How do you put the primitive elements together to build bigger things out of them?
Means of Abstraction – How do you use the combinations of primitive elements that you create as though they were primitive elements themselves?
Primitive Elements of Lisp: Numbers and mathematical operators make up the primitive elements in Lisp. Some examples are
3, 17.4, 5, +, -, *, and /
No real surprises here.
Means of Combination in Lisp:
A combination consists of applying an operator to some operands. Lisp uses prefix notation. The operator is written to the left of the operands. The parentheses make combinations unambiguous. A simple combination adding three numbers would be:
(+ 3 17.4 5)
The operands themselves can also be combinations instead of primitive data values.
(+ 3 (* 2 4) 7)
In the MIT Scheme interpreter environment, you hit C-x C-e (hold down the Control key while hitting x, then e) in order to evaluate an expression you've typed in. Evaluating the expression above gives a result of 18. This is the sum of 3, plus the product of 2 and 4, plus 7.
Prefix notation also makes it easy to show the structure of a program using a tree diagram. The following combination
(* (+ 2 (* 4 6))
   (+ 3 5 7))
is represented by the following tree diagram (from section 1.1.3 of the course text, Evaluating Combinations).

Note that the nodes of the tree (including the non-leaf nodes) are labeled with the value that the expression for that node evaluates to. If you start at the leaf nodes, the evaluation of the entire tree "percolates up" to the root node. This form of evaluation is a process known as tree accumulation.
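The same accumulation can be followed by evaluating the subexpressions of the combination above one at a time:

```scheme
(* 4 6)           ; 24
(+ 2 (* 4 6))     ; 26
(+ 3 5 7)         ; 15
(* (+ 2 (* 4 6))  ; 26 * 15
   (+ 3 5 7))     ; 390
```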
Means of Abstraction in Lisp: Lisp uses the define keyword to define both variables (abstract data) and to apply names to procedures.
(define (square x)
  (* x x))
You can then use the new square procedure just like any other operator in Lisp.
(square 10)
An alternative syntax for the definition of square above is:
(define square
  (lambda (x)
    (* x x)))
Technically, the first syntax we looked at is the alternative, and this syntax is the more "correct" one. In Lisp, the lambda keyword is used to define a function. If you're giving a function a name, the syntax we looked at first is just a shorter way to do the same thing. Later on in the course we'll look at situations where we might want to create a function without explicitly giving it a name; we'll come back to the lambda operator then. For now, we'll use the short syntax for defining procedures.

The following function computes the average of x and y.
(define (average x y)
  (/ (+ x y) 2))
The average of x and y is computed by dividing the sum of x and y by 2.

Now that the procedures for square and average are defined, you can use them just as you would any primitive procedure in Lisp. This is because Lisp does not make arbitrary distinctions between primitives of the language and procedures defined by the user.
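For example, once defined, square and average compose with the primitives and with each other just like any built-in operator:

```scheme
(square (+ 2 3))        ; 25
(average (square 3) 7)  ; 8
(square (square 2))     ; 16
```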
The last section of the second part of the lecture explains how to do Case Analysis (Conditionals) in Lisp.
Consider the following definition of the absolute value function:
|x| = -x for x < 0
       0 for x = 0
       x for x > 0
The absolute value function could be implemented in Lisp using a conditional expression as follows:
(define (abs x)
  (cond ((< x 0) (- x))
        ((= x 0) 0)
        ((> x 0) x)))
Refer to section 1.1.6 of the text, Conditional Expressions and Predicates, for a more complete discussion of conditionals, including the most common logical composition operators (and, or, and not), which are not covered in the lecture.

Part 3
We now know enough Lisp to implement any numerical procedure.
The main lecture concludes with an implementation of Heron of Alexandria's square root procedure seen earlier in the lecture.
To find an approximation to sqrt(x)
- make a guess G
- average G and x/G
- keep improving the guess until it's good enough
- use 1 as an initial guess
Each of these steps can be defined as a separate procedure, some of them in terms of other procedures we've seen already.
(define (try guess x)
  (if (good-enough? guess x)
      guess
      (try (improve guess x) x)))

(define (sqrt x) (try 1 x))

(define (improve guess x)
  (average guess (/ x guess)))

(define (good-enough? guess x)
  (< (abs (- (square guess) x))
     0.001))
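With the square and average procedures defined earlier in place, the complete program can be tried out. (Results are approximate; the digits shown assume the 0.001 tolerance in good-enough?, and note that in MIT Scheme this definition shadows the built-in sqrt.)

```scheme
(sqrt 25.0)  ; ≈ 5.00002
(sqrt 2.0)   ; ≈ 1.41422
```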
Notice that the try procedure is defined partially in terms of itself. The try procedure checks a guess, and if the guess is not good enough, calls itself with a refined guess. That new guess is checked in turn, and try keeps calling itself with better and better guesses until one is close enough. This is called a recursive definition. The ability to make recursive definitions allows you to express computations that go on until a condition is reached (or forever, if you make a mistake).
Summary
The following table shows what we've learned so far, and what we have yet to learn.
| | Procedures | Data |
|---|---|---|
| Primitive Elements | + - * / = | 23 1.738 |
| Means of Combination | ( ) cond if | |
| Means of Abstraction | define | |
10 comments:
Your code loses its indentation. You might like to look at the howto on posting source code that I show on Programming Praxis.
Phil
PS: Please email me at programmingpraxis@gmail.com. I have a question to ask you privately.
Phil,
Thanks. I'll go back through this post and clean it up soon. I'm just so frustrated with Blogger's editor right now... :)
Hi,
Thanks for the transcript.
I was wondering if "means of combination" and "primitive functions" were somewhat the same thing.
For instance, basic functions like "+" are absolutely useless when not given parameters. I don't see why we consider this expression:
(+ 2 3 4)
a combination, while it seems to me that it's only the application of a primitive function to its parameters, not really a "combination"..
anyways, if you have any clue on this, i'd be happy to hear it :)
Thanks,
Maxime
Maxime,
In this case, a primitive function is a means of combination, but not all means of combination are primitive functions. So they're not necessarily the same thing. Primitive data can be combined in many different ways, addition is just one example. Assembling the elements into a list would be another means of combination. Come to think of it, that's probably a far better example, it just hadn't been covered yet in the text or the first lecture.
If we consider that everything tangible in a program is either data or functions, I'd be tempted to say that every means of combination is a primitive function, intuitively because it can't be any concrete data. If this is true, then means of combination would be a subset of primitive functions.
To go even further, at the low level (e.g. assembly) and even lower, everything that acts on the data is a mechanical/electrical function done in the CPU.
For your list example, I'm still not convinced. I see a new primitive data type (a list) and built-in/primitive methods used with reserved language syntax (adding elements, initializing).
In fact, while I am writing this, I think I see more clearly the distinction: Would it be that means of combination are related to the syntax of the language, yet they have the same "purpose" as primitive functions?
For instance, using traditional and basic syntax like functions to create and "return" a new user-defined class would really be difficult for nothing. As a basic example made by me (in an imaginary language), if you would want to create a class containing x and y coordinates:
declaration
considering the funct CreateClass takes a name and then declares a brand new class:
CreateClass("Coordinate")
considering that the funct AddNewAttribute takes (1) a class name, (2) the name of an attribute to add to the class, (3) the type of the attribute.
Coordinate=AddNewAttribute(Coordinate,x,integer)
Coordinate=AddNewAttribute(Coordinate,y,integer)
This syntax would be too verbose and inefficient. And, the way I see it, we could instead add words like "class" to the language syntax to simplify things. The same would apply to while constructs, if, etc. We would now have a simple way to declare that multiple statements (for instance, in an "if" clause) are related, yet it would still be a sort of primitive function.
As you probably would agree, not all primitive functions are means of combination. That would be absurd. So, the distinction between the two terms is important.
It's the way I see things, but I'm only a 2nd semester CS student.
If you could help me out on this, i would really be grateful!!
Thanks
Also, I think some code like this would be a means of combination:
anInteger.toString().toInteger()
Maxime,
What are you combining with:
anInteger.toString().toInteger() ?
Also (in response to your earlier comment), a list is compound data by definition. From one point of view the functions that create lists might be considered primitive if they only take primitive data as inputs, but they always output compound data. Also, you can make lists of lists, so these functions can also take compound data as inputs.
If you try to see everything as primitive, then doesn't the term lose all meaning? It's true that you can always drill down deeper and look at all programs and data as bits in the hardware, but that's not very useful or productive. We build layers and layers of abstractions on top of each other just so we can get away from thinking this way.
For my example
anInteger.toString().toInteger()
well, it seems to me like a composition because the interpreter/compiler first takes the leftmost expression (anInteger.toString()), which returns a string, and then calls a method of the result.
It seemed to me that it was a sort of means of combination because the interpreter allows you to do things "in the air" (don't know if it's a good metaphor, I don't speak English regularly). The same program without that means would force you to bind each result and only do one thing at each expression of the program:
string result = anInteger.toString()
int result2 = result.toInt()
Note that the program couldn't be more pointless!
Besides that, I think I'm starting to understand more clearly.
Dijkstra never actually said that about telescopes. If you check his Wikiquote page, it actually says the quote comes from either this lecture or some other guy named Michael Fellows.
By the way "Geometry" is from Hellenic words "Earth" and "measure".