# AtomicCache
Thread-safe cache based on AtomicDict.
AtomicCache provides a thread-safe caching mechanism that automatically
fills cache entries using a provided fill function. It supports optional
time-to-live (TTL) expiration and explicit invalidation.
When multiple threads request the same key concurrently, only one thread executes the fill function; the others block until the result is ready. This ensures that an expensive computation is performed at most once per key, even under high concurrency.
Example:

```python
import time

from cereggii import AtomicCache

def expensive_computation(key):
    return key

cache = AtomicCache(expensive_computation, ttl=60.0)

value = cache["spam"]  # computes and caches the result
value = cache["spam"]  # doesn't call expensive_computation("spam") again

time.sleep(61.0)  # wait for the TTL to expire
value = cache["spam"]  # recomputes the result

cache.invalidate("spam")  # explicitly remove the entry from the cache
value = cache["spam"]  # recomputes the result
```
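The single-flight behavior described above (one thread fills, the others wait) can be sketched with the standard library alone. This is a conceptual sketch, not cereggii's implementation: it uses a plain dict, a lock, and a per-key `threading.Event` to let the first requester compute while concurrent requesters block until the value is published. Error handling (a fill function that raises) is omitted for brevity.

```python
import threading

# Minimal single-flight cache sketch (stdlib only, NOT cereggii's
# implementation): the first thread to request a key computes it;
# concurrent requesters block on an Event until the value is ready.
class SingleFlightCache:
    def __init__(self, fill):
        self._fill = fill
        self._lock = threading.Lock()
        self._values = {}   # key -> computed value
        self._pending = {}  # key -> Event set when the value is ready

    def __getitem__(self, key):
        with self._lock:
            if key in self._values:
                return self._values[key]
            event = self._pending.get(key)
            if event is None:
                # No one is computing this key yet: we become the filler.
                event = threading.Event()
                self._pending[key] = event
                is_filler = True
            else:
                is_filler = False
        if is_filler:
            value = self._fill(key)  # runs outside the lock
            with self._lock:
                self._values[key] = value
                del self._pending[key]
            event.set()  # wake all waiters
            return value
        event.wait()  # block until the filling thread publishes the value
        with self._lock:
            return self._values[key]

calls = []

def fill(key):
    calls.append(key)  # record how many times the fill function runs
    return key * 2

cache = SingleFlightCache(fill)
threads = [threading.Thread(target=cache.__getitem__, args=(21,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

However the eight threads interleave, `fill` runs exactly once for key `21`: either a later thread finds the cached value, or it blocks on the event set by the single filling thread.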
Parameters:

- `fill` (`Callable[[K], V]`) – A callable that takes a key and returns the value to cache. This function is called once per key to populate the cache, even if multiple threads access the same key concurrently.
- `ttl` (`float | None`, default: `None`) – Optional time-to-live in seconds. If specified, cached entries expire after this duration and are refilled on the next access. Expired keys are removed lazily. Do not rely on this TTL for memory management.
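The lazy expiration described for `ttl` can be sketched conceptually with the standard library (this is not cereggii's implementation): each entry records when it was filled, and an entry older than `ttl` is treated as absent and recomputed on the next access rather than being evicted by a background task.

```python
import time

# Lazy-TTL sketch (stdlib only, not cereggii's implementation):
# each entry stores its fill timestamp; an expired entry is only
# noticed, removed, and recomputed when it is next accessed.
class TTLCache:
    def __init__(self, fill, ttl=None):
        self._fill = fill
        self._ttl = ttl
        self._entries = {}  # key -> (value, fill_timestamp)

    def __getitem__(self, key):
        entry = self._entries.get(key)
        if entry is not None:
            value, filled_at = entry
            if self._ttl is None or time.monotonic() - filled_at < self._ttl:
                return value
            del self._entries[key]  # expired: removed lazily, on access
        value = self._fill(key)
        self._entries[key] = (value, time.monotonic())
        return value

calls = []

def fill(key):
    calls.append(key)
    return key * 2

cache = TTLCache(fill, ttl=0.05)
a = cache[3]      # computes 6
b = cache[3]      # cached; fill is not called again
time.sleep(0.06)  # let the entry expire
c = cache[3]      # recomputes after expiry
```

Because removal happens only on access, an expired entry keeps occupying memory until the key is requested again, which is why the TTL should not be relied on for memory management.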
Methods:

- `__getitem__` – Get the cached value for a key, filling it if necessary.
- `__contains__` – Check if a key exists in the cache and has not expired.
- `__setitem__` – Direct assignment is not supported in AtomicCache.
- `invalidate` – Remove a key from the cache, forcing it to be refilled on the next access.
- `memoize` – Decorator for caching the return values of a function.
## `__getitem__`

```python
__getitem__(key: K) -> V
```
Get the cached value for a key, filling it if necessary.
If the key is not in the cache or has expired, the fill function is called to compute the value. If multiple threads request the same key concurrently, only one thread executes the fill function while the others wait for the result.
Parameters:

- `key` (`K`) – The key to look up in the cache.
Returns:

- `V` – The cached or newly computed value.

Raises:

- `Exception` – If the fill function raised an exception for this key, that exception is re-raised.
## `__contains__`

```python
__contains__(key: K) -> bool
```

Check if a key exists in the cache and has not expired.

This method never calls the fill function and never blocks.
```python
cache = AtomicCache(lambda x: x * 2)

_ = cache[5]  # populate the cache

assert 5 in cache
assert 10 not in cache
```
Parameters:

- `key` (`K`) – The key to check.

Returns:

- `bool` – `True` if the key exists in the cache and has not expired, `False` otherwise.
## `__setitem__`

Direct assignment is not supported in AtomicCache.

This method raises `NotImplementedError` to prevent race conditions. Values should be computed through the fill function provided at initialization.
Raises:

- `NotImplementedError` – Always raised to prevent direct assignment.
## `invalidate`

```python
invalidate(key: K)
```

Remove a key from the cache, forcing it to be refilled on the next access.

If the key is currently being filled by another thread, this method waits for the fill operation to complete before invalidating. Subsequent accesses to the key will trigger a new fill operation.
```python
cache = AtomicCache(lambda x: x * 2)

result1 = cache[5]  # computes 10
cache.invalidate(5)
result2 = cache[5]  # recomputes 10
```
Parameters:

- `key` (`K`) – The key to invalidate.
## `memoize` (classmethod)

Decorator for caching the return values of a function.

Creates a memoized version of a function that caches results based on the function's arguments. The decorated function computes each unique set of arguments only once, returning cached results for subsequent calls.

Additional parameters are passed to the underlying AtomicCache constructor.
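The memoization behavior described above can be sketched conceptually with a plain standard-library decorator. This is not cereggii's API (and it omits AtomicCache's thread-safety and TTL machinery): it only illustrates caching results per argument tuple so each unique set of arguments is computed once.

```python
# Conceptual memoization sketch (stdlib only, not cereggii's API):
# cache results keyed by the positional-argument tuple, so each
# unique set of arguments is computed only once.
def memoize(fn):
    cache = {}

    def wrapper(*args):
        if args not in cache:
            cache[args] = fn(*args)  # first call for these args: compute
        return cache[args]           # later calls: served from the cache

    return wrapper

calls = []

@memoize
def slow_add(a, b):
    calls.append((a, b))  # record each actual computation
    return a + b

first = slow_add(2, 3)   # computed
second = slow_add(2, 3)  # served from the cache
```

Keying on the argument tuple requires the arguments to be hashable, the same constraint a dict-backed cache such as AtomicCache would impose on its keys.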