In :w !sudo tee % ...

% means "the current file"

As eugene y pointed out, % does indeed mean "the current file name", which is passed to tee so that it knows which file to overwrite.

(In substitution commands it's slightly different: as :help :% shows, there % is equal to 1,$, i.e. the entire file. Thanks to @Orafu for pointing out that in that context it does not evaluate to the filename. For example, :%s/foo/bar means "in the current file, replace occurrences of foo with bar." If you highlight some text before typing :s, you'll see that the highlighted lines take the place of % as your substitution range.)
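For instance, here are the two range forms side by side (an illustrative sketch; foo and bar are just placeholder patterns):

" % as a range: apply the substitution to every line in the buffer
:%s/foo/bar/g

" After visually selecting some lines and pressing :, Vim inserts the
" '<,'> range for you, so only those lines are touched:
:'<,'>s/foo/bar/g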
:w isn't updating your file

One confusing part of this trick is that you might think :w is modifying your file, but it isn't. If you opened and modified file1.txt, then ran :w file2.txt, it would be a "save as": file1.txt wouldn't be modified, but the current buffer contents would be sent to file2.txt.

Instead of file2.txt, you can substitute a shell command to receive the buffer contents. For instance, :w !cat will just display the contents.
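A couple of harmless ways to see this for yourself (the file name and commands below are just examples):

" 'Save as': writes the buffer to file2.txt; the file you opened is untouched.
:w file2.txt

" Pipe the buffer to a shell command instead of writing it to a file.
" wc -c prints how many bytes the buffer contains:
:w !wc -c
" diff the file on disk (%) against the buffer (- means stdin):
:w !diff % -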
If Vim wasn't run with sudo access, its :w can't modify a protected file, but if it passes the buffer contents to the shell, a command in the shell can be run with sudo. In this case, we use tee.
Understanding tee
As for tee, picture the tee command as a T-shaped pipe in a normal bash piping situation: it directs output to specified file(s) and also sends it to standard output, which can be captured by the next piped command.

For example, in ps -ax | tee processes.txt | grep 'foo', the list of processes will be written to a text file and passed along to grep.
+-----------+    tee     +------------+
|           |  --------  |            |
|  ps -ax   |  --------  | grep 'foo' |
|           |     ||     |            |
+-----------+     ||     +------------+
                  ||
          +---------------+
          |               |
          | processes.txt |
          |               |
          +---------------+
(Diagram created with Asciiflow.)
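If you want to watch tee split the stream with something more predictable than a process list, a tiny stand-in pipeline behaves the same way (the file name is arbitrary):

# tee copies its stdin both into the named file and onto stdout:
seq 1 5 | tee numbers.txt | grep 3    # prints "3"
cat numbers.txt                       # contains all of 1..5, the full stream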
See the tee man page for more info.
Tee as a hack
In the situation your question describes, using tee is a hack because we're ignoring half of what it does. sudo tee writes to our file and also sends the buffer contents to standard output, but we ignore standard output. We don't need to pass anything to another piped command in this case; we're just using tee as an alternate way of writing to a file, and so that we can call it with sudo.
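Put differently, the trick is roughly equivalent to this shell pipeline (the file name here is a made-up example); Vim supplies the buffer on standard input, and only tee runs with elevated privileges:

# Rough shell equivalent of :w !sudo tee % (file name is just an example):
# Vim provides the buffer on stdin; only tee runs via sudo.
printf '%s\n' "the buffer contents" | sudo tee /etc/example.conf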
Making this trick easy

You can add this to your .vimrc to make this trick easy to use: just type :w!!.
" Allow saving of files as sudo when I forgot to start vim using sudo.
cmap w!! w !sudo tee > /dev/null %
The > /dev/null part explicitly throws away the standard output, since, as I said, we don't need to pass anything to another piped command.
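As a quick sanity check of what the mapping does, typing :w!! on Vim's command line now expands to the full command:

" What :w!! expands to once the cmap above is in place:
:w !sudo tee > /dev/null %
" i.e. pipe the buffer to 'sudo tee', which writes it to the current file
" name while tee's own copy on standard output is discarded.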
Docker originally used LinuX Containers (LXC), but later switched to runC (formerly known as libcontainer), which runs in the same operating system as its host. This allows it to share a lot of the host operating system resources. Also, it uses a layered filesystem (AuFS) and manages networking.
AuFS is a layered file system, so you can have a read-only part and a write part which are merged together. One could have the common parts of the operating system as read-only (and shared amongst all of your containers) and then give each container its own mount for writing.
So, let's say you have a 1 GB container image; if you wanted to use a full VM, you would need to have 1 GB times the number of VMs you want. With Docker and AuFS you can share the bulk of the 1 GB between all the containers, and if you have 1000 containers you still might only have a little over 1 GB of space for the containers' OS (assuming they are all running the same OS image).
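If you have Docker installed, you can see this layer sharing directly (the image tag and container names below are just examples):

# Each image is a stack of read-only layers; a container only adds a thin
# writable layer on top, so many containers share the same image data.
docker pull ubuntu:22.04
docker history ubuntu:22.04        # lists the read-only layers and their sizes
docker run -d --name c1 ubuntu:22.04 sleep infinity
docker run -d --name c2 ubuntu:22.04 sleep infinity
docker ps -s                       # SIZE column: each container's own writable
                                   # layer, tiny next to the shared image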
A full virtualized system gets its own set of resources allocated to it, and does minimal sharing. You get more isolation, but it is much heavier (requires more resources). With Docker you get less isolation, but the containers are lightweight (require fewer resources). So you could easily run thousands of containers on a host, and it won't even blink. Try doing that with Xen, and unless you have a really big host, I don't think it is possible.
A full virtualized system usually takes minutes to start, whereas Docker/LXC/runC containers take seconds, and often even less than a second.
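A rough way to see that difference for yourself on a machine that already has Docker (the image name is just an example):

# A container skips the whole OS boot; starting one is close to starting a process.
time docker run --rm alpine:latest true    # typically a fraction of a second
                                           # once the image is cached locally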
There are pros and cons for each type of virtualized system. If you want full isolation with guaranteed resources, a full VM is the way to go. If you just want to isolate processes from each other and want to run a ton of them on a reasonably sized host, then Docker/LXC/runC seems to be the way to go.
For more information, check out this set of blog posts which do a good job of explaining how LXC works.
Why is deploying software to a docker image (if that's the right term) easier than simply deploying to a consistent production environment?
Deploying a consistent production environment is easier said than done. Even if you use tools like Chef and Puppet, there are always OS updates and other things that change between hosts and environments.
Docker gives you the ability to snapshot the OS into a shared image, and makes it easy to deploy on other Docker hosts. Locally, dev, qa, prod, etc.: all the same image. Sure you can do this with other tools, but not nearly as easily or fast.
This is great for testing; let's say you have thousands of tests that need to connect to a database, and each test needs a pristine copy of the database and will make changes to the data. The classic approach to this is to reset the database after every test either with custom code or with tools like Flyway - this can be very time-consuming and means that tests must be run serially. However, with Docker you could create an image of your database and run up one instance per test, and then run all the tests in parallel since you know they will all be running against the same snapshot of the database. Since the tests are running in parallel and in Docker containers they could run all on the same box at the same time and should finish much faster. Try doing that with a full VM.
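A minimal sketch of that idea, assuming you have already built a database image called mydb:snapshot and your tests can be pointed at a host port; all names and ports here are made up:

# One throwaway copy of the database snapshot per test, each on its own port:
for i in $(seq 1 8); do
  docker run -d --name "testdb_$i" -p "$((15432 + i)):5432" mydb:snapshot
done

# ...run test $i against localhost port $((15432 + i)), all tests in parallel...

# Clean up afterwards; the snapshot image itself is never modified.
docker rm -f $(docker ps -aq --filter "name=testdb_")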
From comments...
Interesting! I suppose I'm still confused by the notion of "snapshot[ting] the OS". How does one do that without, well, making an image of the OS?
Well, let's see if I can explain. You start with a base image, and then make your changes, and commit those changes using docker, and it creates an image. This image contains only the differences from the base. When you want to run your image, you also need the base, and it layers your image on top of the base using a layered file system: as mentioned above, Docker uses AuFS. AuFS merges the different layers together and you get what you want; you just need to run it. You can keep adding more and more images (layers) and it will continue to only save the diffs. Since Docker typically builds on top of ready-made images from a registry, you rarely have to "snapshot" the whole OS yourself.
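A hedged sketch of that commit workflow with the stock Docker CLI (the image and container names are illustrative):

# Start from a base image and change something inside the container:
docker run -it --name demo ubuntu:22.04 bash
#   ...inside the container: apt-get update && apt-get install -y curl, then exit...

# Commit only the difference as a new layer on top of the base image:
docker commit demo myimage:with-curl
docker history myimage:with-curl   # the new layer sits above the shared base layers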
Best Solution
Just got it. As regan pointed out, I had to add the user to the sudoers group. But the main reason was I'd forgotten to update the repositories cache, so apt-get couldn't find the sudo package. It's working now. Here's the completed code:
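The original code isn't reproduced here, but based on the description it boiled down to steps like these during the image build (Debian/Ubuntu package names; "someuser" is a placeholder):

# Illustrative sketch only - not the answer's original code.
apt-get update               # refresh the package cache first, otherwise...
apt-get install -y sudo      # ...apt-get can't find the sudo package
adduser someuser sudo        # then add the user to the sudo ("sudoers") group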