Whilst starting to learn lisp, I've come across the term tail-recursive. What does it mean exactly?
First: The term monad is a bit vacuous if you are not a mathematician. An alternative term is computation builder which is a bit more descriptive of what they are actually useful for.
They are a pattern for chaining operations. It looks a bit like method chaining in object-oriented languages, but the mechanism is slightly different.
The pattern is mostly used in functional languages (especially Haskell, which uses monads pervasively) but can be used in any language which supports higher-order functions (that is, functions which can take other functions as arguments).
The gist of the pattern is that we have a type (Array in this case) which has a method that takes a function as argument. The operation supplied must return an instance of the same type (i.e. return an Array).
First an example of method chaining which does not use the monad pattern:
[1,2,3].map(x => x + 1)
The result is [2,3,4]. The code does not conform to the monad pattern, since the function we are supplying as an argument returns a number, not an Array. The same logic in monad form would be:
[1,2,3].flatMap(x => [x + 1])
Here we supply an operation which returns an Array, so now it conforms to the pattern. The flatMap method executes the provided function for every element in the array. It expects an array as result for each invocation (rather than single values), but merges the resulting set of arrays into a single array. So the end result is the same: the array [2,3,4].
(The function argument provided to a method like flatMap is itself an ordinary function; the pattern only constrains its return type.)
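As a hypothetical illustration (not part of the original examples): because the supplied operation returns an array, a single step can produce zero, one, or many elements for each input element.

```javascript
// The function given to flatMap may return an array of any length,
// so one step can expand elements...
const expanded = [1, 2, 3].flatMap(x => [x, x * 10]);
// expanded is [1, 10, 2, 20, 3, 30]

// ...or drop them entirely by returning an empty array.
const dropped = [1, 2, 3].flatMap(x => x === 2 ? [] : [x]);
// dropped is [1, 3]: returning [] removes the element, which is how
// filtering can be expressed with flatMap alone
```

This zero-or-more behavior is what makes the filter step later in this answer expressible with flatMap.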
If we chain multiple operations (in the traditional way):
[1,2,3].map(a => a + 1).filter(b => b != 3)
Results in the array [2,4].
The same chaining in monad form:
[1,2,3].flatMap(a => [a + 1]).flatMap(b => b != 3 ? [b] : [])
Yields the same result, the array [2,4].
You will immediately notice that the monad form is quite a bit uglier than the non-monad! This just goes to show that monads are not necessarily “good”. They are a pattern which is sometimes beneficial and sometimes not.
Do note that the monad pattern can be combined in a different way:
[1,2,3].flatMap(a => [a + 1].flatMap(b => b != 3 ? [b] : []))
Here the binding is nested rather than chained, but the result is the same. This is an important property of monads as we will see later. It means two operations combined can be treated the same as a single operation.
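A quick runnable check (illustrative, mirroring the two snippets above) that the chained and nested forms produce the same array:

```javascript
// Chained form: each flatMap is applied to the result of the previous one.
const chained = [1, 2, 3].flatMap(a => [a + 1]).flatMap(b => b != 3 ? [b] : []);

// Nested form: the second bind happens inside the first operation.
const nested = [1, 2, 3].flatMap(a => [a + 1].flatMap(b => b != 3 ? [b] : []));

// Both evaluate to [2, 4]
```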
The operation is allowed to return an array with a different element type, for example transforming an array of numbers into an array of strings or something else; as long as it is still an Array.
This can be described a bit more formally using TypeScript notation. An array has the type Array&lt;T&gt;, where T is the type of the elements in the array. The method flatMap() takes a function argument of the type T => Array&lt;U&gt; and returns an Array&lt;U&gt;.
Generalized, a monad is any type Foo&lt;Bar&gt; which has a "bind" method which takes a function argument of type Bar => Foo&lt;Baz&gt; and returns a Foo&lt;Baz&gt;.
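To make the generalized shape concrete, here is a minimal sketch in JavaScript. Box is a hypothetical type invented for this illustration; its bind method has exactly the shape described above: it takes a function from the wrapped value to another Box, and returns that Box.

```javascript
// Box<T>: a trivial container whose bind method has the type
// (T => Box<U>) => Box<U>. Purely illustrative, not a real library type.
class Box {
  constructor(value) { this.value = value; }
  bind(f) { return f(this.value); }
}

const result = new Box(2)
  .bind(x => new Box(x + 1))
  .bind(x => new Box(x * 10));
// result.value is 30
```

Every operation in the chain receives a plain value and must hand back a Box, which is what keeps the chain composable.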
This answers what monads are. The rest of this answer will try to explain through examples why monads can be a useful pattern in a language like Haskell which has good support for them.
Haskell and Do-notation
To translate the map/filter example directly to Haskell, we replace flatMap with the >>= operator:

[1,2,3] >>= \a -> [a+1] >>= \b -> if b == 3 then [] else [b]

The >>= operator is the bind function in Haskell. It does the same as flatMap in the JavaScript examples above.
But Haskell also has a dedicated syntax for monad expressions, the do-block, which hides the bind operator altogether:

do
  a <- [1,2,3]
  b <- [a+1]
  if b == 3 then [] else [b]
This hides the "plumbing" and lets you focus on the actual operations applied at each step.
In a do-block, each line is an operation. The constraint still holds that all operations in the block must return the same type. Since the first expression is a list, the other operations must also return a list.
The <- looks deceptively like an assignment, but note that this is the parameter passed in the bind. So, when the expression on the right side is a list of integers, the variable on the left side will be a single integer – but the rest of the block will be executed for each integer in the list.
Example: Safe navigation (the Maybe type)
Enough about lists, let's see how the monad pattern can be useful for other types.
Some functions may not always return a valid value. In Haskell this is represented by the Maybe type, which is an option that is either Just some value or Nothing.
Chaining operations which always return a valid value is of course straightforward:
streetName = getStreetName (getAddress (getUser 17))
But what if any of the functions could return Nothing? We need to check each result individually and only pass the value to the next function if it is not Nothing:

case getUser 17 of
  Nothing -> Nothing
  Just user ->
    case getAddress user of
      Nothing -> Nothing
      Just address -> getStreetName address
Quite a lot of repetitive checks! Imagine if the chain was longer. Haskell solves this with the monad pattern for Maybe:

do
  user <- getUser 17
  addr <- getAddress user
  getStreetName addr
This do-block invokes the bind function for the Maybe type (since the result of the first expression is a Maybe). The bind function only executes the following operation if the value is a Just value; otherwise it just passes the Nothing along.
Here the monad-pattern is used to avoid repetitive code. This is similar to how some other languages use macros to simplify syntax, although macros achieve the same goal in a very different way.
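The same idea can be sketched in JavaScript (all names here are hypothetical stand-ins for the Haskell functions above): Just wraps a value and applies the bound function to it, while Nothing ignores the function and propagates itself.

```javascript
// A minimal Maybe sketch. Just(v).bind(f) applies f to v;
// Nothing.bind(f) short-circuits and stays Nothing.
const Just = value => ({ bind: f => f(value) });
const Nothing = { bind: _f => Nothing };

// Hypothetical lookups that may fail:
const getUser = id => (id === 17 ? Just({ addressId: 1 }) : Nothing);
const getAddress = user => (user.addressId === 1 ? Just({ street: "Main St" }) : Nothing);
const getStreetName = addr => Just(addr.street);

const street = getUser(17).bind(getAddress).bind(getStreetName);
// street is Just "Main St"

const missing = getUser(99).bind(getAddress).bind(getStreetName);
// missing is Nothing: no explicit null checks anywhere in the chain
```

All the conditional logic lives in the two bind implementations, which is exactly the repetition the do-block removes in Haskell.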
Haskell does not support mutable state. All variables are constants and all values immutable. But the State type can be used to emulate programming with mutable state:

add2 :: State Integer Integer
add2 = do
  -- add 1 to state
  x <- get
  put (x + 1)
  -- increment in another way
  modify (+1)
  -- return state
  get

evalState add2 7
=> 9
The add2 function builds a monad chain which is then evaluated with 7 as the initial state.
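The State idea can also be sketched in JavaScript (illustrative names, not a library): a stateful step is a function from the current state to a [value, newState] pair, and bind threads the state from one step into the next.

```javascript
// bind runs one step, then feeds its value and the updated state to the next step.
const bind = (step, f) => state => {
  const [value, newState] = step(state);
  return f(value)(newState);
};
const get = state => [state, state];                 // read the state
const put = s => _state => [undefined, s];           // replace the state
const modify = f => state => [undefined, f(state)];  // transform the state

// The same chain as add2 above:
const add2 =
  bind(get, x =>
  bind(put(x + 1), _ =>
  bind(modify(n => n + 1), _ =>
  get)));

// Run the chain with an initial state and keep only the final value.
const evalState = (step, initial) => step(initial)[0];
// evalState(add2, 7) is 9
```

No variable is ever mutated; the "state" is just an argument that bind passes along behind the scenes.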
Obviously this is something which only makes sense in Haskell. Other languages support mutable state out of the box. Haskell is generally "opt-in" on language features - you enable mutable state when you need it, and the type system ensures the effect is explicit. IO is another example of this.
The IO type is used for chaining and executing “impure” functions.
Like any other practical language, Haskell has a bunch of built-in functions which interface with the outside world: putStrLn, getLine and so on. These functions are called “impure” because they either cause side effects or have non-deterministic results. Even something simple like getting the time is considered impure because the result is non-deterministic – calling it twice with the same arguments may return different values.
A pure function is deterministic – its result depends purely on the arguments passed and it has no side effects on the environment beside returning a value.
Haskell heavily encourages the use of pure functions – this is a major selling point of the language. Unfortunately for purists, you need some impure functions to do anything useful. The Haskell compromise is to cleanly separate pure and impure code, and guarantee that there is no way that pure functions can execute impure functions, directly or indirectly.
This is guaranteed by giving all impure functions the IO type. The entry point of a Haskell program is the main function, which has the IO type, so we can execute impure functions at the top level.
But how does the language prevent pure functions from executing impure functions? This is due to the lazy nature of Haskell. A function is only executed if its output is consumed by some other function. But there is no way to consume an IO value except to assign it to main. So if a function wants to execute an impure function, it has to be connected to main and have the IO type.
Using monad chaining for IO operations also ensures that they are executed in a linear and predictable order, just like statements in an imperative language.
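To see how chaining gives a predictable order, here is a hypothetical IO-like sketch in JavaScript: an action is a thunk that does nothing until run, and bind builds a bigger action that runs its two parts in sequence.

```javascript
// io wraps a thunk; bind sequences two actions without executing either.
const io = thunk => ({
  run: () => thunk(),
  bind: f => io(() => f(thunk()).run()),
});

const log = [];                                  // stands in for the console
const putLine = s => io(() => { log.push(s); });
const readLine = io(() => "world");              // canned input for the example

const program =
  putLine("What is your name?").bind(() =>
  readLine.bind(name =>
  putLine("hello " + name)));

// Building the chain performed no effects; log is still empty here.
program.run();
// log is now ["What is your name?", "hello world"]
```

The effects happen only when run is called at the top, and strictly in the order the binds were chained, which is the role main plays in Haskell.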
This brings us to the first program most people will write in Haskell:
main :: IO ()
main = do
  putStrLn "Hello World"
The do keyword is superfluous when there is only a single operation and therefore nothing to bind, but I keep it anyway for consistency.
The () type means “void”. This special return type is only useful for IO functions called for their side effects.
A longer example:
main = do
  putStrLn "What is your name?"
  name <- getLine
  putStrLn ("hello " ++ name)
This builds a chain of IO operations, and since they are assigned to the main function, they get executed.
Comparing IO with Maybe shows the versatility of the monad pattern. For Maybe, the pattern is used to avoid repetitive code by moving conditional logic to the binding function. For IO, the pattern is used to ensure that all operations of the IO type are sequenced and that IO operations cannot "leak" into pure functions.
In my subjective opinion, the monad pattern is only really worthwhile in a language which has some built-in support for the pattern. Otherwise it just leads to overly convoluted code. But Haskell (and some other languages) have some built-in support which hides the tedious parts, and then the pattern can be used for a variety of useful things. Like:
- Avoiding repetitive code (Maybe)
- Adding language features like mutable state or exceptions for delimited areas of the program.
- Isolating icky stuff from nice stuff (IO)
- Embedded domain-specific languages
- Adding GOTO to the language.
The stack is the memory set aside as scratch space for a thread of execution. When a function is called, a block is reserved on the top of the stack for local variables and some bookkeeping data. When that function returns, the block becomes unused and can be used the next time a function is called. The stack is always reserved in a LIFO (last in first out) order; the most recently reserved block is always the next block to be freed. This makes it really simple to keep track of the stack; freeing a block from the stack is nothing more than adjusting one pointer.
The heap is memory set aside for dynamic allocation. Unlike the stack, there's no enforced pattern to the allocation and deallocation of blocks from the heap; you can allocate a block at any time and free it at any time. This makes it much more complex to keep track of which parts of the heap are allocated or freed at any given time; there are many custom heap allocators available to tune heap performance for different usage patterns.
Each thread gets a stack, while there's typically only one heap for the application (although it isn't uncommon to have multiple heaps for different types of allocation).
To answer your questions directly:
To what extent are they controlled by the OS or language runtime?
The OS allocates the stack for each system-level thread when the thread is created. Typically the OS is called by the language runtime to allocate the heap for the application.
What is their scope?
The stack is attached to a thread, so when the thread exits the stack is reclaimed. The heap is typically allocated at application startup by the runtime, and is reclaimed when the application (technically process) exits.
What determines the size of each of them?
The size of the stack is set when a thread is created. The size of the heap is set on application startup, but can grow as space is needed (the allocator requests more memory from the operating system).
What makes one faster?
The stack is faster because the access pattern makes it trivial to allocate and deallocate memory from it (a pointer/integer is simply incremented or decremented), while the heap has much more complex bookkeeping involved in an allocation or deallocation. Also, each byte in the stack tends to be reused very frequently, which means it tends to be mapped to the processor's cache, making it very fast. Another performance hit for the heap is that the heap, being mostly a global resource, typically has to be multi-threading safe, i.e. each allocation and deallocation typically needs to be synchronized with "all" other heap accesses in the program.
A clear demonstration: [diagram of stack and heap memory layout omitted; image source: vikashazrati.wordpress.com]