How to rate-limit an IO operation

Suppose you have a program that reads from a socket. How do you keep the download rate below a given threshold?
At the application layer (with a Berkeley-socket-style API), you just watch the clock and read or write data no faster than your target rate.
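A minimal sketch of that clock-watching approach (the function name and chunk size are my own choices, not from any particular library): read a chunk, work out how long that chunk *should* have taken at the target rate, and sleep off any remainder.

```python
import socket
import time

def throttled_read(sock, limit_bps, chunk_size=4096):
    """Read everything from sock, staying near limit_bps bytes/second.

    Sketch only: each chunk is charged its share of the bandwidth
    budget; if it arrived faster than that, we sleep the difference.
    """
    data = bytearray()
    while True:
        start = time.monotonic()
        chunk = sock.recv(chunk_size)
        if not chunk:  # peer closed the connection
            break
        data.extend(chunk)
        # Time this chunk "should" have taken at the target rate.
        budget = len(chunk) / limit_bps
        elapsed = time.monotonic() - start
        if elapsed < budget:
            time.sleep(budget - elapsed)
    return bytes(data)
```

Because the sleep happens after each read, small chunks still arrive at full speed individually; only the spacing between them enforces the limit.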
If you read only 10 kbps on average but the source sends faster than that, then eventually all the buffers between it and you will fill up. TCP allows for this: its flow control arranges for the sender to slow down. At the application layer, probably all you need to know is that on the other end, blocking writes will block, non-blocking writes will fail, and asynchronous writes won't complete until you've read enough data to make room.
At the application layer you can only be approximate: you can't guarantee hard limits such as "no more than 10 kB will pass a given point in the network in any one second". But if you keep track of how much you've received, you can get the average right in the long run.
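That long-run averaging can be sketched like this (again a hypothetical helper, not a standard API): instead of pacing each chunk individually, track the total bytes received since the start, compute the earliest time at which the target rate would allow that total, and sleep until then. Short bursts get through at full speed, but the cumulative average converges on the limit.

```python
import socket
import time

def read_with_average_limit(sock, limit_bps, chunk_size=4096):
    """Keep the cumulative download rate at or below limit_bps.

    Sketch only: after each read, compare the running total against
    what the target rate permits for the elapsed time, and sleep off
    any surplus. Bursts are allowed; the long-run average holds.
    """
    total = 0
    start = time.monotonic()
    data = bytearray()
    while True:
        chunk = sock.recv(chunk_size)
        if not chunk:  # peer closed the connection
            break
        data.extend(chunk)
        total += len(chunk)
        # At limit_bps, receiving `total` bytes should take this long:
        earliest_allowed = start + total / limit_bps
        now = time.monotonic()
        if now < earliest_allowed:
            time.sleep(earliest_allowed - now)
    return bytes(data)
```

Anchoring the schedule to the start time rather than the previous chunk means timing jitter doesn't accumulate: a chunk that was delayed by the network uses up sleep time the loop would otherwise have spent anyway.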