
## Thoughts on memoization

As I said in the previous post about GSoC, I've recently started thinking about how to implement random access to a BAM file (which consists of gzipped blocks of about 20 KiB each).

The problem is, the decompression function is computationally intensive, and we want to decompress requested blocks as efficiently as possible, preferably using multithreading.
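
To make this concrete, here is a minimal sketch in Python (the post itself targets D) of the expensive function $f$: each BAM block is a self-contained gzip member, so a single block can be decompressed independently of the others. The function name and the way the block is obtained are illustrative assumptions; locating blocks by file offset is omitted.

```python
import gzip

# The computationally intensive function f: compressed block -> decompressed bytes.
# BAM uses BGZF, whose blocks are valid standalone gzip members, so each block
# can be decompressed on its own (hypothetical helper name for illustration).
def decompress_block(raw: bytes) -> bytes:
    return gzip.decompress(raw)

# Simulate one compressed block and decompress it.
block = gzip.compress(b"read data " * 1000)
assert decompress_block(block) == b"read data " * 1000
```

The key property is that $f$ is a pure function of its input block, which is what makes both caching and parallel evaluation safe.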

Let’s generalize it. We have a function $f: X \to Y$, and we want the same function but returning results faster. Memoizing $f$ directly, however, is not a good idea, because it makes it hard to parallelize iteration over a subset of $X$. It’s better to define a memoization operator $M$ mapping $f$ to $Mf: X \to D_Y$, which returns a delay instead of the actual value. To work with this delay, we need another function $g: D_Y \to Y$ such that $f = g \circ Mf$. The nice thing about this approach is that the delay can be returned immediately, while the computation is done in another thread.
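
A minimal sketch of this operator in Python, using `concurrent.futures` (the post targets D, and `memoize_async` is a hypothetical name): the delay type $D_Y$ is a `Future`, and $g$ is simply `Future.result`.

```python
from concurrent.futures import ThreadPoolExecutor
from threading import Lock

def memoize_async(f, executor):
    """Memoization operator M: wraps f so that Mf(x) returns a Future
    (the delay D_Y) instead of a value. The cache maps x -> Future,
    and access to it is synchronized so each x is computed only once."""
    cache = {}
    lock = Lock()

    def mf(x):
        with lock:
            fut = cache.get(x)
            if fut is None:
                fut = executor.submit(f, x)  # starts in another thread
                cache[x] = fut
        return fut

    return mf

# g: D_Y -> Y is Future.result, so f = g ∘ Mf:
executor = ThreadPoolExecutor(max_workers=4)
mf = memoize_async(lambda x: x * x, executor)
delay = mf(21)          # returned immediately
print(delay.result())   # forces the value: prints 441
```

Note that only the cache lookup happens under the lock; the computation itself runs in the executor's threads, so callers are never blocked while another value is being computed.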

Thus we can get both things working:

• Parallel map. We can use a cyclic buffer of delays, prefilled at the start. This way, there is always a fixed number of tasks executing in a multithreaded task pool. Advancing to the next element is accompanied by pushing a new task into the pool.
• Evaluation at random points. The memoization operator can maintain a cache of delays, with synchronized access.
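
The parallel-map scheme above can be sketched as follows (again in Python rather than D; `parallel_map` and the `window` parameter are illustrative names). A fixed-size buffer of delays is prefilled, and each time the oldest result is consumed, a new task is submitted:

```python
from collections import deque
from concurrent.futures import ThreadPoolExecutor

def parallel_map(f, xs, executor, window=4):
    """Keep `window` tasks in flight; yield results in input order.
    Yielding the oldest delay's value is accompanied by pushing
    a new task into the pool."""
    xs = iter(xs)
    buf = deque()
    # Prefill the buffer of delays at the start.
    for x in xs:
        buf.append(executor.submit(f, x))
        if len(buf) == window:
            break
    # Steady state: consume one result, submit one new task.
    for x in xs:
        yield buf.popleft().result()
        buf.append(executor.submit(f, x))
    # Drain the remaining delays.
    while buf:
        yield buf.popleft().result()

with ThreadPoolExecutor(max_workers=4) as ex:
    print(list(parallel_map(lambda x: x * x, range(6), ex)))  # [0, 1, 4, 9, 16, 25]
```

Because results are popped from the front of the buffer, output order matches input order even when tasks finish out of order.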

I googled this idea and found this great article: http://kotka.de/blog/2010/03/memoize_done_right.html. It contains some Clojure code, and in particular an implementation of the approach described above, using futures and a cache with synchronized access.

Converting that code to D shouldn’t be a big problem, since the std.parallelism module provides the necessary future/task machinery. I currently aim at developing a reusable solution, built on templates, that allows arbitrary functions and cache implementations.

(to be continued…)
