How would you design a "multithreaded" (thread-safe) LRU cache in C++ using unordered_map and a linked list?

by Joe Black   Last Updated August 14, 2018 04:05 AM

Please note that this is about a thread-safe LRU cache, not simply the single-threaded LRU cache specified at https://leetcode.com/problems/lru-cache/description/. It isn't a duplicate of the usual LRU cache design question: there are tricky aspects of locking the hash table and linked list (LL) that other multithreaded LRU design questions don't address.

Advice on how to make an LRU cache thread-safe in C++ seems to be all over the place. Some sources say that locking both the hash table and the linked list is fine, while others don't address C++ directly and only vaguely discuss various approaches.

  • Can someone describe a functional and efficient lock-based approach (including exactly what is locked, and when) to implementing a multithreaded version of the LRU cache at https://leetcode.com/problems/lru-cache/description/ in C++, so that all get/set operations remain O(1)?

  • How do production caches such as memcached (used heavily at Facebook) implement multithreading support?
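On the first question, the simplest correct design is a single coarse-grained mutex guarding both structures, so the map and the list can never disagree and every operation stays O(1). A minimal sketch (the class and member names are illustrative, not from any particular library; capacity is assumed to be at least 1):

```cpp
#include <list>
#include <mutex>
#include <optional>
#include <unordered_map>
#include <utility>

// One coarse mutex protects BOTH the hash map and the recency list:
// every get/set takes the lock, so the two structures always agree.
template <typename K, typename V>
class ThreadSafeLRUCache {
public:
    explicit ThreadSafeLRUCache(std::size_t capacity) : capacity_(capacity) {}

    // Returns the value and marks the key most-recently-used, or nullopt.
    std::optional<V> get(const K& key) {
        std::lock_guard<std::mutex> lock(mutex_);
        auto it = map_.find(key);
        if (it == map_.end()) return std::nullopt;
        // splice moves the node to the front (MRU position) in O(1),
        // without invalidating the iterator stored in the map.
        order_.splice(order_.begin(), order_, it->second);
        return it->second->second;
    }

    void set(const K& key, const V& value) {
        std::lock_guard<std::mutex> lock(mutex_);
        auto it = map_.find(key);
        if (it != map_.end()) {
            it->second->second = value;
            order_.splice(order_.begin(), order_, it->second);
            return;
        }
        if (map_.size() == capacity_) {
            // Evict the least recently used entry (back of the list).
            map_.erase(order_.back().first);
            order_.pop_back();
        }
        order_.emplace_front(key, value);
        map_[key] = order_.begin();
    }

private:
    using ListIt = typename std::list<std::pair<K, V>>::iterator;

    std::size_t capacity_;
    std::mutex mutex_;
    std::list<std::pair<K, V>> order_;   // MRU at front, LRU at back
    std::unordered_map<K, ListIt> map_;  // key -> node in order_
};
```

Note that even get mutates the recency list, so a read/write lock (std::shared_mutex) buys nothing here without further restructuring; the single mutex is the honest baseline, and its cost is that all threads serialize on it.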
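On the second question, high-throughput caches typically avoid one global lock via lock striping: the key space is hashed into N independent shards, each with its own lock, so threads hitting different shards never contend (memcached moved from a global cache lock to striped item locks along these lines, though its exact scheme differs). A hedged sketch of the striping principle, with each shard shown as a plain map for brevity (in a real cache each shard would hold a full map-plus-list LRU, making eviction per-shard and therefore only approximately global LRU):

```cpp
#include <array>
#include <cstddef>
#include <functional>
#include <mutex>
#include <optional>
#include <unordered_map>

// Lock striping: hash each key to one of NumShards shards, each guarded
// by its own mutex. Contention drops roughly by a factor of NumShards.
template <typename K, typename V, std::size_t NumShards = 8>
class StripedMap {
public:
    void set(const K& key, const V& value) {
        Shard& s = shard_for(key);
        std::lock_guard<std::mutex> lock(s.mutex);
        s.items[key] = value;
    }

    std::optional<V> get(const K& key) {
        Shard& s = shard_for(key);
        std::lock_guard<std::mutex> lock(s.mutex);
        auto it = s.items.find(key);
        if (it == s.items.end()) return std::nullopt;
        return it->second;
    }

private:
    struct Shard {
        std::mutex mutex;
        // In a full LRU cache this would be the map + recency list pair.
        std::unordered_map<K, V> items;
    };

    Shard& shard_for(const K& key) {
        return shards_[std::hash<K>{}(key) % NumShards];
    }

    std::array<Shard, NumShards> shards_;
};
```

The trade-off: striping keeps get/set O(1) and scales across cores, but eviction decisions are local to a shard, so a hot shard may evict entries that a strict global LRU would have kept.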


