Memoization avoids redundant computations. When a function is called with a particular set of arguments, memoization checks if the result for those arguments has already been computed and stored. If so, it returns the stored result. Otherwise, it performs the computation, stores the result, and then returns it.
Consider a function that simulates an expensive operation:
import time

def expensive_func(num):
    """Simulates an expensive computation."""
    print(f"Computing {num}...")
    time.sleep(1)  # Simulate a time-consuming operation
    return num * num

print(expensive_func(2))
print(expensive_func(2))  # Without memoization, this recomputes
Without memoization, expensive_func(2) performs the "expensive" computation (sleeping for 1 second) every time it's called.
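If you want to see that cost directly, you can time the two calls. This is just a quick sketch using time.perf_counter with the same function as above:

import time

def expensive_func(num):
    """Simulates an expensive computation."""
    print(f"Computing {num}...")
    time.sleep(1)
    return num * num

# Time two identical calls; without memoization, each one pays the full cost.
for _ in range(2):
    start = time.perf_counter()
    expensive_func(2)
    print(f"Took {time.perf_counter() - start:.2f} seconds")

Both calls take roughly one second, because the result of the first call is thrown away.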
Here's how to implement memoization using a dictionary in Python:
import time

def expensive_func(num, cache={}):
    """Simulates an expensive computation with memoization."""
    if num in cache:
        print(f"Getting cached result for {num}...")
        return cache[num]
    else:
        print(f"Computing {num}...")
        time.sleep(1)
        result = num * num
        cache[num] = result  # Store the result
        return result

print(expensive_func(2))
print(expensive_func(2))  # Now, the cached result is used
In this memoized version:
- The cache dictionary stores previously computed results.
- Before doing any work, the function checks whether num is already in cache.
- If it isn't, the result is computed, stored in cache, and then returned.
Memoization is most effective in these scenarios:
- The function is pure, meaning its result depends only on its arguments.
- The computation is expensive relative to a dictionary lookup.
- The function is called repeatedly with the same arguments.
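When those conditions hold, the dictionary-based approach can be factored out into a reusable decorator instead of hard-coding a cache inside each function. Here is a minimal sketch; the memoize name and its use of a plain dict keyed on the positional arguments are illustrative choices, not part of the standard library:

import functools
import time

def memoize(func):
    """A minimal memoization decorator: caches results keyed by the positional arguments."""
    cache = {}

    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:      # Compute and store only on a cache miss
            cache[args] = func(*args)
        return cache[args]

    return wrapper

@memoize
def expensive_func(num):
    """Simulates an expensive computation."""
    print(f"Computing {num}...")
    time.sleep(1)
    return num * num

print(expensive_func(2))  # Computes and caches
print(expensive_func(2))  # Served from the cache

This sketch assumes the arguments are hashable, since they are used as dictionary keys.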
functools.lru_cache
Python's functools module provides a built-in decorator, lru_cache (Least Recently Used cache), which makes memoization even easier:
import time
from functools import lru_cache

@lru_cache(maxsize=32)  # Store up to 32 recent results
def expensive_func(num):
    """Simulates an expensive computation with memoization using lru_cache."""
    print(f"Computing {num}...")
    time.sleep(1)
    return num * num

print(expensive_func(2))
print(expensive_func(2))  # Cached!
@lru_cache automatically memoizes the function. The maxsize argument limits the number of cached results (the default is 128); once the cache is full, the least recently used entries are discarded to make room for new ones.
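lru_cache also attaches cache_info() and cache_clear() to the decorated function, which are handy for checking that the cache is actually being hit. A quick sketch, reusing the same function with a deliberately tiny maxsize to show eviction:

import time
from functools import lru_cache

@lru_cache(maxsize=2)  # Deliberately tiny cache to demonstrate eviction
def expensive_func(num):
    """Simulates an expensive computation with memoization using lru_cache."""
    print(f"Computing {num}...")
    time.sleep(1)
    return num * num

expensive_func(1)
expensive_func(2)
expensive_func(1)                   # Hit: 1 is still cached
expensive_func(3)                   # Evicts 2, the least recently used entry
expensive_func(2)                   # Miss: 2 must be recomputed
print(expensive_func.cache_info())  # CacheInfo(hits=1, misses=4, maxsize=2, currsize=2)
expensive_func.cache_clear()        # Empty the cache entirely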
The concept of memoization is applicable in various programming languages. The specific implementation techniques might differ, but the core principle of caching function results remains the same.
Memoization is a valuable optimization tool for improving the performance of your Python code. By caching the results of expensive function calls, you can avoid redundant computations and make your programs faster and more efficient.