Just about every modern operating system will recover all the allocated memory space after a program exits. The only exception I can think of might be something like Palm OS where the program's static storage and runtime memory are pretty much the same thing, so not freeing might cause the program to take up more storage. (I'm only speculating here.)
So generally, there's no harm in it, except the runtime cost of having more storage than you need. Certainly in the example you give, you want to keep the memory for a variable that might be used until it's cleared.
However, it's considered good style to free memory as soon as you don't need it any more, and to free anything you still have around on program exit. It's more of an exercise in knowing what memory you're using, and thinking about whether you still need it. If you don't keep track, you might have memory leaks.
On the other hand, the similar admonition to close your files on exit has a much more concrete result: if you don't, the data you wrote to them might not get flushed, or if they're temp files, they might not get deleted when you're done. Also, database handles should have their transactions committed and then be closed when you're done with them. Similarly, if you're using an object-oriented language like C++ or Objective-C, not freeing an object when you're done with it means its destructor will never get called, and any resources the class is responsible for might not get cleaned up.
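To make the file point concrete, here is a minimal C sketch (the file name is made up, purely for illustration). Data written with fprintf may sit in a stdio buffer until fclose or fflush pushes it out, and fclose is also where deferred write errors finally surface:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Hypothetical file name, just for illustration. */
    FILE *f = fopen("output.txt", "w");
    if (f == NULL) {
        perror("fopen");
        return EXIT_FAILURE;
    }

    fprintf(f, "important data\n"); /* may sit in a stdio buffer for now */

    /* Without fclose() (or at least fflush()), buffered data may never
       reach the file if the program dies before exit-time cleanup runs.
       fclose() is also where deferred write errors finally surface. */
    if (fclose(f) != 0) {
        perror("fclose");
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
```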
OK, some answers about malloc have already been posted.
The more interesting part is how free works (and with it, malloc can be understood better, too).
In many malloc/free implementations, free normally does not return the memory to the operating system (or does so only in rare cases). The reason is that returning blocks piecemeal leaves gaps in your heap, and you can end up exhausting your 2 or 4 GB of virtual address space with such gaps. That should be avoided, because once virtual memory runs out, you are in really big trouble. The other reason is that the OS can only handle memory chunks of a specific size and alignment. To be specific: normally the OS can only handle blocks that the virtual memory manager can work with, most often whole multiples of the page size (typically 4 KB).
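To make the granularity concrete, here is a small POSIX sketch (mmap/munmap is one way an allocator obtains memory from the OS; sbrk is another). The only point is that the OS deals in whole pages:

```c
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* The OS hands out (and takes back) memory in page-sized units. */
    long page = sysconf(_SC_PAGESIZE);   /* typically 4096 bytes */

    /* An allocator that wants "more heap" asks for whole pages, e.g.: */
    void *block = mmap(NULL, (size_t)page,
                       PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (block == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    printf("page size: %ld bytes, got one page at %p\n", page, block);

    /* Returning memory to the OS likewise works only in whole pages: */
    munmap(block, (size_t)page);
    return 0;
}
```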
So returning 40 bytes to the OS just won't work. What does free do instead?
Free puts the memory block on its own free list. Normally it also tries to merge adjacent blocks in the address space. The free list is just a circular list of memory chunks, each with some administrative data at the front. This is also why managing very small memory elements with the standard malloc/free is inefficient: every memory chunk needs additional administrative data, and the smaller the allocations, the more fragmentation you get.
The free list is also the first place malloc looks when a new chunk of memory is needed. It is scanned before malloc asks the OS for new memory. When a chunk is found that is bigger than needed, it is divided into two parts: one is returned to the caller, the other is put back onto the free list.
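To illustrate what the last two paragraphs describe, here is a deliberately simplified first-fit sketch in C. It is not how any real libc allocator works: all names are made up, the list is a plain NULL-terminated list rather than a circular one, and coalescing, alignment, and thread safety are ignored.

```c
#include <stddef.h>

/* A toy free-list node: each free chunk carries its administrative
   data (size, next pointer) at the front, followed by the payload. */
struct chunk {
    size_t        size;   /* usable bytes in this chunk */
    struct chunk *next;   /* next free chunk in the list */
};

static struct chunk *free_list = NULL;

/* First-fit search: scan the free list before asking the OS for more.
   If a chunk is big enough, optionally split it in two. */
static struct chunk *toy_find(size_t need)
{
    struct chunk **pp = &free_list;
    for (struct chunk *c = free_list; c != NULL; pp = &c->next, c = c->next) {
        if (c->size >= need) {
            if (c->size >= need + sizeof(struct chunk) + 16) {
                /* Split: carve the tail off as a new free chunk. */
                struct chunk *rest =
                    (struct chunk *)((char *)(c + 1) + need);
                rest->size = c->size - need - sizeof(struct chunk);
                rest->next = c->next;
                c->size    = need;
                *pp        = rest;   /* the rest stays on the free list */
            } else {
                *pp = c->next;       /* take the whole chunk */
            }
            return c;                /* caller uses (void *)(c + 1) */
        }
    }
    return NULL;  /* nothing fits: a real malloc would now ask the OS */
}
```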
There are many different optimizations of this standard behaviour (for example, for small chunks of memory). But since malloc and free must be so universal, the standard behaviour is always the fallback when the alternatives are not usable. There are also optimizations in handling the free list, for example storing chunks in lists sorted by size. But all of these optimizations have their own limitations.
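As one example of the size-sorted idea, a segregated free list keeps one bin per size class, so malloc can jump straight to a fitting list instead of scanning one long one. The bin count and class sizes below are invented for illustration:

```c
#include <stddef.h>

struct chunk;                       /* same toy chunk as in the sketch above */

#define NBINS 8

static struct chunk *bins[NBINS];  /* bins[0]: <=16 B, bins[1]: <=32 B, ... */

static size_t bin_index(size_t size)
{
    size_t idx = 0;
    size_t limit = 16;              /* smallest size class */
    while (idx + 1 < NBINS && size > limit) {
        limit <<= 1;                /* doubling classes: 16, 32, 64, ... */
        idx++;
    }
    return idx;                     /* oversized requests land in the last bin */
}
```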
Why your code crashes:
The reason is that by writing 9 chars (don't forget the trailing null byte) into an area sized for 4 chars, you will probably overwrite the administrative data stored for another chunk of memory that resides "behind" your chunk (since this data is most often stored "in front of" the memory chunks). When free then tries to put your chunk onto the free list, it touches this administrative data and therefore stumbles over the overwritten pointer. That crashes the program.
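You can reproduce the situation with a few lines like the following (this deliberately invokes undefined behaviour, so what actually happens depends on your allocator):

```c
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *p = malloc(4);            /* room for 3 chars + '\0' */
    if (p == NULL)
        return 1;

    /* Undefined behaviour: 8 chars + '\0' = 9 bytes into a 4-byte block.
       The excess bytes can smash the allocator's administrative data
       stored next to the block. */
    strcpy(p, "12345678");

    /* The damage often surfaces only here (or even later), when free()
       walks the corrupted bookkeeping, not at the strcpy() above. */
    free(p);
    return 0;
}
```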
This is actually rather graceful behaviour. I have also seen situations where a runaway pointer somewhere overwrote data in the free list and the system did not crash immediately, but only some subroutines later. Even in a system of medium complexity, such problems can be really, really hard to debug! In the one case I was involved in, it took us (a larger group of developers) several days to find the reason for the crash, since it was in a totally different location than the one indicated by the memory dump. It is like a time bomb: you know your next free or malloc will crash, but you don't know why!
Those are some of the worst C/C++ problems, and one reason why pointers can be so problematic.
Best Solution
Setting unused pointers to NULL is a defensive style that protects against dangling-pointer bugs. If you access a pointer after the memory it points to has been freed, you may read or overwrite random memory. If you access a null pointer, you get an immediate crash on most systems, telling you right away what the error is.
For local variables, it may be a little bit pointless if it is "obvious" that the pointer isn't accessed anymore after being freed, so this style is more appropriate for member data and global variables. Even for local variables, it may be a good approach if the function continues after the memory is released.
To complete the style, you should also initialize pointers to NULL before they get assigned a true pointer value.
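Putting all three points together, a minimal sketch of the style (the variable names are made up for illustration):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *buffer = NULL;        /* initialize to NULL, never to garbage */

    buffer = malloc(64);
    if (buffer == NULL)
        return EXIT_FAILURE;
    strcpy(buffer, "hello");

    free(buffer);
    buffer = NULL;              /* defuse the now-dangling pointer */

    /* Later in the same function it is safe to test the pointer: */
    if (buffer != NULL)
        puts(buffer);           /* skipped: buffer is NULL, not stale */

    /* Any stray dereference now fails fast with a clear null-pointer
       crash instead of silently reading or corrupting random memory. */
    return EXIT_SUCCESS;
}
```

The extra assignments cost essentially nothing; the payoff is that a use-after-free bug, if one exists, announces itself at the first wrong access instead of corrupting memory silently.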