How to scale an HTTP server like Google


I often marvel at how I can go to www.google.com from anywhere in the world, at any time, and get the page back so fast.

Sure, they compress their output and keep to a minimal design – that helps.

But they must be getting millions of simultaneous hits on the box sitting on the web that DNS lists as "www.google.com".

All of you who have set up Apache or other web servers know that things are great and super fast until you start getting a few thousand simultaneous connections, let alone millions!

So, how do they do it? I guess they have a whole farm of server machines, but you'd never know it from the outside. When I went to Verizon just now, the URL bar showed only the plain site address; you never see an individual server's name, never.

Any ideas what specific technologies they use, or what technologies we non-Google mortals can use to do the same thing?

Best Solution

Google and other services that handle astonishingly high aggregate bandwidth do much of their magic via DNS.

BGP anycast routing is used to announce the IP addresses of their DNS servers from multiple points around the world, so a query lands at a nearby DNS server. Each DNS server is configured to answer with IP addresses in a data center that is geographically close to the querying client. This is the first level of load balancing: geographic.
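The geographic step can be sketched roughly like this: the authoritative DNS server estimates where the querying resolver is and answers with addresses from the nearest data center. This is a toy illustration, not Google's actual logic; the data-center names, coordinates, and addresses (RFC 5737 documentation ranges) are all made up.

```python
# Hypothetical sketch of geo-aware DNS resolution: answer with addresses
# from the data center closest to the querying resolver. All data-center
# names, coordinates, and IPs below are invented for illustration.
import math

DATACENTERS = {
    "us-east":  {"lat": 39.0,  "lon": -77.5, "ips": ["203.0.113.10", "203.0.113.11"]},
    "eu-west":  {"lat": 53.3,  "lon": -6.3,  "ips": ["198.51.100.10", "198.51.100.11"]},
    "ap-south": {"lat": 1.35,  "lon": 103.8, "ips": ["192.0.2.10", "192.0.2.11"]},
}

def distance(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers (haversine formula)."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def resolve(client_lat, client_lon):
    """Return the IPs of the data center nearest the client's resolver."""
    nearest = min(
        DATACENTERS.values(),
        key=lambda dc: distance(client_lat, client_lon, dc["lat"], dc["lon"]),
    )
    return nearest["ips"]

# A resolver in Paris gets the European data center's addresses:
print(resolve(48.85, 2.35))  # prints ['198.51.100.10', '198.51.100.11']
```

In reality, anycast gets the query to a nearby DNS server in the first place, and that server simply answers with its local region's addresses; the distance calculation here just stands in for that routing.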

Next, though a DNS query for www.google.com returns only a small number of IP addresses, the DNS server rapidly cycles through a large range of addresses in its responses. Each client gets a particular answer and is allowed to cache it for a while, but the next client gets a different set of IP addresses. This is the second level of load balancing.
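A minimal sketch of that round-robin behavior, under assumptions of my own (a made-up pool of twelve documentation-range addresses, three addresses per answer, a 60-second TTL):

```python
# Hypothetical sketch of DNS round-robin: each query receives a small,
# rotating slice of a larger address pool, with a short TTL so clients
# re-resolve often and spread across the whole pool over time.
from itertools import cycle

POOL = [f"203.0.113.{n}" for n in range(1, 13)]  # made-up address pool
ANSWER_SIZE = 3    # addresses returned per query
TTL_SECONDS = 60   # short TTL forces frequent re-resolution

_rotation = cycle(POOL)

def dns_answer():
    """Return the next ANSWER_SIZE addresses plus the TTL to cache them."""
    return [next(_rotation) for _ in range(ANSWER_SIZE)], TTL_SECONDS

ips1, ttl = dns_answer()  # first client gets one slice of the pool
ips2, _ = dns_answer()    # the next client gets a different slice
```

The short TTL is the key design choice: a client honors its cached answer briefly, then re-queries and may land on entirely different addresses.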

Third, they use traditional server load balancers to spread the sessions arriving at a single IP address across multiple backend servers. This is the third level of load balancing.
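One common way a load balancer keeps a session pinned to the same machine is to hash a session identifier onto the backend pool. This is a generic sketch of that idea, not a description of any particular vendor's balancer; the backend names and the session-ID scheme are invented.

```python
# Hypothetical sketch of the final tier: a balancer behind one public IP
# hashes each client session to one of several backend servers, so
# repeat requests from the same session land on the same machine.
import hashlib

BACKENDS = ["backend-1", "backend-2", "backend-3", "backend-4"]  # made-up names

def pick_backend(session_id: str) -> str:
    """Consistently map a session to a backend via a hash of its ID."""
    digest = hashlib.sha256(session_id.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(BACKENDS)
    return BACKENDS[index]

# The same session always reaches the same backend:
assert pick_backend("session-abc") == pick_backend("session-abc")
```

Real balancers also track backend health and remove dead machines from the pool, which a plain modulo hash handles poorly; consistent hashing is the usual refinement for that.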