LRU (Least Recently Used) is the most widely used replacement algorithm in web caches. It is simple to understand and implement, and it performs well across most workloads. As the name implies, LRU evicts the objects that have not been accessed for the longest time.
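
To make the eviction rule concrete, here is a minimal sketch of an LRU cache in Python. The class name, the capacity parameter, and the use of collections.OrderedDict are illustrative choices, not anything prescribed by the text; they simply give the recency ordering for free.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the entry that has gone unused the longest."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._entries = OrderedDict()  # ordered least recently used -> most recently used

    def get(self, key):
        if key not in self._entries:
            return None                     # cache miss
        self._entries.move_to_end(key)      # touching a key makes it most recently used
        return self._entries[key]

    def put(self, key, value):
        if key in self._entries:
            self._entries.move_to_end(key)
        self._entries[key] = value
        if len(self._entries) > self.capacity:
            self._entries.popitem(last=False)  # evict the least recently used entry
```

Every get or put moves the touched key to the most-recent end of the ordering, so the entry at the opposite end is always the next eviction candidate.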

Design LRU Cache

A strict interpretation of LRU would treat time since last reference as the only parameter. In practice, web caches almost always use a variant known as LRU-Threshold, where the threshold refers to object size: objects larger than the threshold are simply not cached. This prevents a single very large object from evicting many smaller ones, and it points to the biggest weakness of LRU: the algorithm does not consider object size at all. Would you rather keep one large object in your cache or many smaller ones? The answer depends on what you want to optimize. If saving bandwidth matters most, the one large object is more valuable; if hit ratio matters most, caching many small objects wins, because more individual requests can then be served from the cache.
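
The threshold check is a small addition on top of the plain LRU cache sketched earlier. The version below is again only illustrative: the names LRUThresholdCache, capacity_bytes, and threshold_bytes are assumptions, and real web caches account for object sizes in more elaborate ways, but it shows how an over-threshold object is refused before it can displace many smaller ones.

```python
from collections import OrderedDict

class LRUThresholdCache:
    """LRU cache for web objects that refuses to store objects above a size threshold."""

    def __init__(self, capacity_bytes, threshold_bytes):
        self.capacity_bytes = capacity_bytes
        self.threshold_bytes = threshold_bytes
        self.used_bytes = 0
        self._objects = OrderedDict()  # url -> (body, size), least recently used first

    def get(self, url):
        if url not in self._objects:
            return None                  # cache miss
        self._objects.move_to_end(url)   # mark as most recently used
        return self._objects[url][0]

    def put(self, url, body):
        size = len(body)
        if size > self.threshold_bytes:
            return  # too large: never cached, so it cannot push many small objects out
        if url in self._objects:
            self.used_bytes -= self._objects.pop(url)[1]
        self._objects[url] = (body, size)
        self.used_bytes += size
        # Evict least recently used objects until the cache fits again.
        while self.used_bytes > self.capacity_bytes:
            _, (_, evicted_size) = self._objects.popitem(last=False)
            self.used_bytes -= evicted_size
```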

Application of LRU

Keep the most recently used apps at the front of the list; when the list is full, the least recently used app at the back is the one to drop.
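
A tiny sketch of that idea, with an illustrative function name and list size: accessing an app moves it to the front, and the app at the back is always the least recently used one.

```python
def use_app(recent_apps, app, max_size=8):
    """Record that `app` was just used, keeping the list in most-recent-first order."""
    if app in recent_apps:
        recent_apps.remove(app)   # it will be re-inserted at the front
    recent_apps.insert(0, app)    # most recently used app goes to the front
    if len(recent_apps) > max_size:
        recent_apps.pop()         # drop the least recently used app from the back
    return recent_apps

# Example: "mail" becomes the most recent, pushing older apps toward the back.
apps = ["camera", "maps", "mail", "music"]
print(use_app(apps, "mail"))   # ['mail', 'camera', 'maps', 'music']
```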