How would you design a "multithreaded" LRU cache using C++ (unordered_map and linked list)?

by Joe Black   Last Updated August 14, 2018 04:05 AM

Please note that this is about a thread-safe LRU cache, not simply an LRU cache. It isn't a duplicate of the usual LRU cache design question, because there are tricky aspects of locking the hash table and linked list (LL) that aren't addressed in other multithreaded LRU design questions.

The accepted wisdom on how to make an LRU cache thread-safe in C++ seems to be all over the place. Some sources say that locking both the hash table and the linked list is fine, while others don't directly address the issue in C++ and only vaguely discuss the various ways to do it.

  • Can someone describe a functional and efficient locking approach, including what is locked and when, for implementing a multithreaded LRU cache in C++ so that all get/set operations remain O(1)?

  • How do caches such as memcached (as deployed at scale by Facebook) implement multithreading support?

1 Answer

You can find an open-source C++ implementation of a hash-table/linked-list based LRU cache in LRUCache.h and LRUCache.inl (note: the 'apparent locking' in this implementation isn't real locking; it's just assertions to detect errant unsafe thread use, for code that is externally synchronized).

You can then find a simple wrapper on this cache which adds internal shared synchronization (read locks and write locks) in SynchronizedLRUCache.h and SynchronizedLRUCache.inl.

And a small test case (Test.cpp#L721) for the synchronization wrapper; there are other tests for the LRUCache itself.

Note: even if you use a different set of utility classes to implement your LRUCache, the implementation of SynchronizedLRUCache makes clear how to implement synchronization using shared_lock/lock_guard.

Lewis Pringle
August 16, 2018 01:38 AM
