How to improve multi-threaded access to Cache (custom implementation)

Posted by Andy on Stack Overflow
Published on 2010-05-29T06:50:45Z; indexed on 2010-05-29 07:02 UTC

Filed under: c# | multithreading
I have a custom Cache implementation that caches TCacheable&lt;TKey&gt; descendants using a Least Recently Used (LRU) cache replacement algorithm.

Every time an element is accessed, it is bubbled up to the top of the LRU queue using the following synchronized function:

// a single instance is created to handle all TCacheable<TKey> elements
public class Cache<TKey>
{
    private readonly object syncQueue = new object();

    // doubly-linked LRU queue: oldest <-> ... <-> newest
    private TCacheable<TKey> newest;
    private TCacheable<TKey> oldest;

    private void topQueue(TCacheable<TKey> el)
    {
        lock (syncQueue)
        {
            if (newest == el) return;

            // unlink el from its current position
            if (el.elder != null) el.elder.newer = el.newer;
            if (el.newer != null) el.newer.elder = el.elder;

            if (oldest == el) oldest = el.newer;
            if (oldest == null) oldest = el;

            // relink el at the head (newest end) of the queue
            if (newest != null) newest.newer = el;
            el.newer = null;
            el.elder = newest;
            newest = el;
        }
    }
}

The bottleneck in this function is the lock statement, which serializes all queue updates: only one thread can reorder the queue at a time, so every cache hit contends for the same lock.

Question: Is it possible to get rid of lock(syncQueue) in this function while still preserving the queue integrity?
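One common way to reduce the contention (a hedged sketch, not a drop-in replacement for the code above, and using hypothetical names like `Node`, `Touch`, and `Sweep`): keep the lock, but take it off the hot path. Each access merely sets a per-node flag with an interlocked write, and a periodic sweep promotes all flagged nodes under the lock in one batch. This yields an approximate LRU ordering rather than an exact one, which is usually an acceptable trade-off:

```csharp
using System.Threading;

public class Node
{
    public string Key;
    public int Touched;          // 1 = accessed since the last sweep
    public Node Newer, Elder;
}

public class LazyLruQueue
{
    private readonly object syncQueue = new object();
    public Node Newest, Oldest;

    public void Add(Node el)     // insert a new node at the head
    {
        lock (syncQueue)
        {
            el.Elder = Newest;
            if (Newest != null) Newest.Newer = el;
            Newest = el;
            if (Oldest == null) Oldest = el;
        }
    }

    // Hot path: no lock taken, just record that the node was accessed.
    public void Touch(Node el) => Interlocked.Exchange(ref el.Touched, 1);

    // Cold path: run periodically (or before eviction) to apply promotions.
    public void Sweep()
    {
        lock (syncQueue)
        {
            for (Node el = Oldest, next; el != null; el = next)
            {
                next = el.Newer;
                if (Interlocked.Exchange(ref el.Touched, 0) == 1 && el != Newest)
                {
                    // unlink el from its current position
                    if (el.Elder != null) el.Elder.Newer = el.Newer;
                    if (el.Newer != null) el.Newer.Elder = el.Elder;
                    if (Oldest == el) Oldest = el.Newer;

                    // relink el at the head, as topQueue does
                    el.Elder = Newest;
                    el.Newer = null;
                    Newest.Newer = el;
                    Newest = el;
                }
            }
        }
    }
}
```

The queue links are still mutated only while holding the lock, so integrity is preserved; the lock is simply taken far less often than once per access.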

