Random Write Performance in Solid-State Drives

I have written before that solid-state drives (SSDs), as found in recent laptops such as the MacBook Air, nearly bridge the gap between internal and external memory. Indeed, the gap between disk and RAM went from about three orders of magnitude down to one!

There is a catch, however. SSDs can have terrible random write performance: at least two orders of magnitude slower than sequential writes!
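The gap is easy to probe yourself. Here is a minimal sketch of such a micro-benchmark: it writes the same blocks to a file once in sequential order and once in shuffled order, and times each pass. (This is an illustration only: the OS page cache absorbs much of the cost, so a serious benchmark would bypass the cache, e.g. with O_DIRECT, and use a much larger file.)

```python
import os
import random
import tempfile
import time

BLOCK = 4096      # 4 KiB blocks (illustrative choice)
NBLOCKS = 1024    # a small 4 MiB file so the demo runs quickly

def timed_writes(path, offsets):
    """Write one block at each offset; return elapsed seconds."""
    payload = b"x" * BLOCK
    with open(path, "r+b") as f:
        start = time.perf_counter()
        for off in offsets:
            f.seek(off)
            f.write(payload)
        f.flush()
        os.fsync(f.fileno())  # force the data down to the device
    return time.perf_counter() - start

path = tempfile.mktemp()
with open(path, "wb") as f:            # pre-allocate the file
    f.write(b"\0" * BLOCK * NBLOCKS)

offsets = [i * BLOCK for i in range(NBLOCKS)]
seq = timed_writes(path, offsets)      # sequential pass
random.shuffle(offsets)
rnd = timed_writes(path, offsets)      # random pass
print(f"sequential: {seq:.3f}s, random: {rnd:.3f}s")
os.remove(path)
```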

Kevin Burton points out that, as a work-around, you can use a log-structured file system. In effect, random writes are replaced by appends to the end of a log of changes. There are certainly cases where log-structured file systems are appropriate (I don't know much about them), but are they appropriate for external-memory B-trees or hash tables?
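To make the idea concrete, here is a toy log-structured key-value store (the class and method names are my own, purely illustrative): every update is appended to a single log file, so the disk only ever sees sequential writes, while an in-memory index remembers where the latest record for each key lives.

```python
import tempfile

class LogStructuredKV:
    """Toy log-structured store: all writes are sequential appends."""

    def __init__(self, path):
        self.path = path
        self.index = {}               # key -> (offset, length) of latest record
        self.log = open(path, "ab")   # append-only log file

    def put(self, key, value):
        record = f"{key}\t{value}\n".encode()
        offset = self.log.tell()
        self.log.write(record)        # sequential append, never a random write
        self.log.flush()
        self.index[key] = (offset, len(record))

    def get(self, key):
        offset, length = self.index[key]
        with open(self.path, "rb") as f:
            f.seek(offset)            # reads, by contrast, may still be random
            record = f.read(length)
        return record.decode().rstrip("\n").split("\t", 1)[1]

# usage
kv = LogStructuredKV(tempfile.mktemp())
kv.put("a", "1")
kv.put("a", "2")    # an overwrite is just another append; the old record becomes garbage
print(kv.get("a"))  # -> 2
```

Note the cost of the trick: stale records accumulate in the log, which is why real log-structured systems need a garbage-collection (compaction) pass.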

Other systems are designed to avoid random writes altogether. For example, Google's BigTable sorts data in memory before writing it to disk. Random writes are also minimized in most column-oriented databases and indexes, such as C-Store and bitmap indexes.
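The buffer-then-sort idea can be sketched in a few lines. This is only loosely inspired by BigTable's memtable, not its actual implementation, and all names are hypothetical: updates accumulate in memory, and once the buffer fills up they are flushed to disk in one sorted, sequential pass.

```python
import tempfile

class SortedBuffer:
    """Toy memtable: buffer writes in RAM, flush them sorted in one sequential pass."""

    def __init__(self, path, capacity=4):
        self.path = path
        self.capacity = capacity
        self.buffer = {}

    def put(self, key, value):
        self.buffer[key] = value
        if len(self.buffer) >= self.capacity:
            self.flush()

    def flush(self):
        # One sorted, sequential write replaces many scattered random ones.
        with open(self.path, "a") as f:
            for key in sorted(self.buffer):
                f.write(f"{key}\t{self.buffer[key]}\n")
        self.buffer.clear()

# usage
path = tempfile.mktemp()
buf = SortedBuffer(path, capacity=3)
buf.put("cherry", 3)
buf.put("apple", 1)
buf.put("banana", 2)   # hits capacity: flushed to disk in sorted key order
```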

It is an interesting time to be a database researcher!
