Lazy evaluation makes macros redundant
This is pure nonsense (not your fault; I've heard it before). It's true that you can use macros to change the order, context, etc. of expression evaluation, but that's the most basic use of macros, and it's really not convenient to simulate a lazy language using ad-hoc macros instead of functions. So if you came at macros from that direction, you would indeed be disappointed.
Macros are for extending the language with new syntactic forms. Some of the specific capabilities of macros are:
1. Affecting the order, context, etc. of expression evaluation.
2. Creating new binding forms (i.e. affecting the scope an expression is evaluated in).
3. Performing compile-time computation, including code analysis and transformation.
Macros that do (1) can be pretty simple. For example, in Racket, the exception-handling form with-handlers is just a macro that expands into call-with-exception-handler, some conditionals, and some continuation code. It's used like this:
(with-handlers ([(lambda (e) (exn:fail:network? e))
                 (lambda (e)
                   (printf "network seems to be broken\n")
                   (cleanup))])
  (do-some-network-stuff))
The macro implements the notion of "predicate-and-handler clauses in the dynamic context of the exception" based on the primitive call-with-exception-handler, which handles all exceptions at the point they're raised.
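As a quick illustration of (1), here's a toy Racket sketch (the names are illustrative, and this is not how Racket's own delay is implemented): a my-delay form that receives its body unevaluated, so it can postpone and memoize the computation, something an ordinary function cannot do because its arguments are evaluated before the call.

(define-syntax-rule (my-delay body)
  (let ([done? #f] [value #f])
    (lambda ()
      (unless done?
        (set! value body)   ; body runs here, on first demand, not at the definition site
        (set! done? #t))
      value)))

(define p (my-delay (begin (printf "computing...\n") (* 6 7))))
(p)  ; prints "computing..." once and returns 42
(p)  ; returns 42 without recomputing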
A more sophisticated use of macros is an implementation of an LALR(1) parser generator. Instead of a separate file that needs pre-processing, the parser form is just another kind of expression. It takes a grammar description, computes the tables at compile time, and produces a parser function. The action routines are lexically scoped, so they can refer to other definitions in the file or even to lambda-bound variables. You can even use other language extensions in the action routines.
At the extreme end, Typed Racket is a typed dialect of Racket implemented via macros. It has a sophisticated type system designed to match the idioms of Racket/Scheme code, and it interoperates with untyped modules by protecting typed functions with dynamic software contracts (also implemented via macros). It's implemented by a "typed module" macro that expands, type-checks, and transforms the module body, as well as by auxiliary macros for attaching type information to definitions, etc.
FWIW, there's also Lazy Racket, a lazy dialect of Racket. It's not implemented by turning every function into a macro, but by rebinding lambda, define, and the function application syntax to macros that create and force promises.
In summary, lazy evaluation and macros have a small point of intersection, but they're extremely different things. And macros are certainly not subsumed by lazy evaluation.
In the first version (+ 1 2 3) is raw code, whereas in the second version it is data. If we accept this statement as true, it can be argued that Lisp isn't even homoiconic. The code has the same representation as data in the sense that both are lists/trees/S-expressions. But the fact that you have to explicitly mark which of these lists/trees/S-expressions are code and which are data seems, to me, to say that Lisp is not homoiconic after all.
This is not true. In the first version, the list (+ 1 2 3), which is data, is being fed to the interpreter to be executed, i.e. to be interpreted as code. The fact that you have to mark s-expressions as code or data in a specific context does not make Lisp non-homoiconic.
The point of homoiconicity is that all programs are data, not that all data are programs, so there is still a difference between the two. In Lisp, (1 2 3) is a valid list but not a valid program, since an integer is not a function.
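A quick sketch at a Common Lisp REPL makes the distinction concrete:

(+ 1 2 3)           ; read as a list, evaluated as code => 6
'(+ 1 2 3)          ; quoted: the same list, kept as data => (+ 1 2 3)
(eval '(+ 1 2 3))   ; hand that data to the evaluator => 6
(1 2 3)             ; a perfectly good list, but evaluating it
                    ; signals an error, since 1 is not a function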
[If we look at the other great homoiconic programming language, Prolog, we see the same phenomenon: we can build a data structure foo(X, 1, bar), but without a definition of foo, we can't execute it. Also, variables cannot be the names of predicates or facts, so X. is never a valid program.]
Lisp is self-modifying to a great degree. E.g., here's how to change the definition of a function:
[1]> (defun foo (x) (+ x 1))
FOO
[2]> (defun bar (x) (+ x 2))
BAR
[3]> (setf (symbol-function 'foo) #'bar)
#<FUNCTION BAR (X) (DECLARE (SYSTEM::IN-DEFUN BAR)) (BLOCK BAR (+ X 2))>
[4]> (foo 3)
5
Explanation: at [1], we defined the function foo to be the add-1 function. At [2], we defined bar to be the add-2 function. At [3], we reset foo to the add-2 function. At [4], we see that we've successfully modified foo.
Best Solution
To give the short answer, macros are used for defining language syntax extensions to Common Lisp or Domain Specific Languages (DSLs). These languages are embedded right into the existing Lisp code. Now, the DSLs can have syntax similar to Lisp (like Peter Norvig's Prolog Interpreter for Common Lisp) or completely different (e.g. Infix Notation Math for Clojure).
Here is a more concrete example:
Python has list comprehensions built into the language. This gives a simple syntax for a common case. The line
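[x for x in range(10) if x % 2 == 0]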
yields a list containing all even numbers between 0 and 9. Back in the Python 1.5 days there was no such syntax; you'd use something more like this:
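evens = []                  # the variable name here is just illustrative
for x in range(10):
    if x % 2 == 0:
        evens.append(x)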
These are both functionally equivalent. Let's invoke our suspension of disbelief and pretend Lisp has a very limited loop macro that just does iteration and no easy way to do the equivalent of list comprehensions.
In Lisp you could write the following. I should note this contrived example is picked to be identical to the Python code, not to be a good example of Lisp code.
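;; a rough transliteration of the Python loop above
;; (the variable name and details here are my reconstruction)
(defvar evens '())
(loop for x from 0 to 9
      do (when (= (mod x 2) 0)
           (setf evens (append evens (list x)))))
evens   ; => (0 2 4 6 8)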
Before I go further, I should better explain what a macro is. It is a transformation performed on code by code. That is, a piece of code, read by the interpreter (or compiler), which takes code as an argument, manipulates it, and returns the result, which is then run in place.
Of course that's a lot of typing, and programmers are lazy. So we could define a DSL for doing list comprehensions. In fact, we're already using one macro (the loop macro).
Lisp defines a couple of special syntax forms. The quote (') indicates that the next token is a literal. The quasiquote or backtick (`) indicates that the next token is a literal with escapes; escapes are indicated by the comma operator. The literal '(1 2 3) is the equivalent of Python's [1, 2, 3]. You can assign it to another variable or use it in place. You can think of `(1 2 ,x) as the equivalent of Python's [1, 2, x], where x is a variable previously defined. This list notation is part of the magic that goes into macros. The second part is the Lisp reader, which intelligently substitutes macros for code; that is best illustrated below.
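For instance, if x is bound to 3:

'(1 2 3)     ; => (1 2 3)   a plain literal list
`(1 2 ,x)    ; => (1 2 3)   the comma escape inserts the value of x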
So we can define a macro called lcomp (short for list comprehension). Its syntax will be exactly like the Python that we used in the example, [x for x in range(10) if x % 2 == 0]:

(lcomp x for x in (range 10) if (= (% x 2) 0))
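One way to write such a macro is sketched below (my reconstruction; the range and % helpers are assumptions, since neither is standard Common Lisp - mod is the usual operator):

;; FOR and IN are filler words that exist only to mimic the Python syntax.
;; CONDITIONAL is the literal symbol IF (or UNLESS), which LOOP accepts as a
;; clause keyword, so we can splice it straight into the generated code.
(defmacro lcomp (expression for var in list conditional conditional-test)
  (declare (ignore for in))
  (let ((result (gensym "RESULT")))   ; fresh symbol, so user variables aren't captured
    `(let ((,result '()))
       (loop for ,var in ,list
             ,conditional ,conditional-test
             do (push ,expression ,result))
       (nreverse ,result))))

;; stand-ins so the example runs as written:
(defun range (n) (loop for i from 0 below n collect i))
(defun % (a b) (mod a b))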
Now we can execute at the command line:
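> (lcomp x for x in (range 10) if (= (% x 2) 0))   ; using the sketch above
(0 2 4 6 8)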
Pretty neat, huh? Now it doesn't stop there. You have a mechanism, or a paintbrush, if you like. You can have any syntax you could possibly want: Python or C#'s with syntax, or .NET's LINQ syntax. In the end, this is what attracts people to Lisp - ultimate flexibility.