Random Write Performance in Solid-State Drives

I have written that solid-state drives (SSDs), as found in recent laptops such as the MacBook Air, nearly bridge the gap between internal and external memory. Indeed, the difference between disk and RAM went from three orders of magnitude down to one!

There is a catch, however. SSDs can have terrible random-write performance: random writes can be at least two orders of magnitude slower than sequential writes!
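To see the effect for yourself, here is a minimal micro-benchmark sketch in Python, assuming a POSIX system: it writes the same 4 KiB blocks either in order or at shuffled offsets, with O_SYNC so that each write reaches the drive rather than the operating-system cache. The file name and sizes are arbitrary choices of mine, and the exact ratio will vary with the drive and the file system.

```python
import os
import random
import time

FILE_NAME = "testfile.bin"     # hypothetical scratch file
BLOCK_SIZE = 4096              # 4 KiB blocks
NUM_BLOCKS = 16 * 1024         # 64 MiB total
block = os.urandom(BLOCK_SIZE)

def sequential_write(fd):
    # Write every block in order: the drive sees one long sequential stream.
    os.lseek(fd, 0, os.SEEK_SET)
    for _ in range(NUM_BLOCKS):
        os.write(fd, block)

def random_write(fd):
    # Write the same blocks, but at shuffled offsets: a random-write pattern.
    offsets = [i * BLOCK_SIZE for i in range(NUM_BLOCKS)]
    random.shuffle(offsets)
    for off in offsets:
        os.lseek(fd, off, os.SEEK_SET)
        os.write(fd, block)

# O_SYNC forces each write to the device instead of the page cache,
# which would otherwise mask the difference between the two patterns.
fd = os.open(FILE_NAME, os.O_CREAT | os.O_RDWR | os.O_SYNC)
for name, writer in (("sequential", sequential_write), ("random", random_write)):
    start = time.time()
    writer(fd)
    mb_per_s = NUM_BLOCKS * BLOCK_SIZE / (time.time() - start) / 1e6
    print(f"{name}: {mb_per_s:.1f} MB/s")
os.close(fd)
os.remove(FILE_NAME)
```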

Kevin Burton points out that, as a workaround, you can use a log-structured file system. In effect, random writes are replaced by appends at the end of a log of changes. There are certainly cases where log-structured file systems are appropriate (I do not know much about them), but are they appropriate for external-memory B-trees or hash tables?
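To make the idea concrete, here is a minimal sketch of an append-only key-value store, not how any particular file system is implemented: every update, even to an existing key, is appended at the end of a log, and an in-memory index remembers where the latest value of each key lives. The file name and record format are invented for illustration.

```python
import os

class AppendOnlyStore:
    def __init__(self, path="store.log"):   # hypothetical log file
        self.path = path
        self.index = {}                      # key -> (offset, record length)
        self.log = open(path, "ab")

    def put(self, key, value):
        # A sequential append at the end of the log, never an in-place write.
        record = f"{key}\t{value}\n".encode()
        offset = self.log.tell()
        self.log.write(record)
        self.log.flush()
        self.index[key] = (offset, len(record))

    def get(self, key):
        # Reads cost one seek to wherever the latest record was appended.
        offset, length = self.index[key]
        with open(self.path, "rb") as f:
            f.seek(offset)
            _key, value = f.read(length).decode().rstrip("\n").split("\t", 1)
            return value

store = AppendOnlyStore()
store.put("a", "1")
store.put("a", "2")    # the old record stays in the log but is superseded
print(store.get("a"))  # prints 2
os.remove("store.log")
```

Note that superseded records linger in the log until it is compacted, and keeping the data sorted for range queries is exactly the part that does not come for free, which is why the B-tree question above is not trivial.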

Meanwhile, some systems are designed to avoid random writes altogether. For example, Google's BigTable buffers and sorts data in memory before writing it out to disk sequentially. Random writes are also minimized in most column-oriented databases and indexes, such as C-store and bitmap indexes.
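As a rough sketch of that write path (not Google's actual code), one can buffer updates in an in-memory table and flush it as a single sequential write of an immutable sorted file; the flush threshold and file naming below are arbitrary, and real systems add indexes, logs, and compaction on top.

```python
import itertools

class MemTableWriter:
    def __init__(self, flush_threshold=3):
        self.memtable = {}                       # key -> latest value
        self.flush_threshold = flush_threshold
        self.flush_count = itertools.count()

    def put(self, key, value):
        # Updates only touch memory until the buffer is large enough.
        self.memtable[key] = value
        if len(self.memtable) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # Sort in memory, then write the whole buffer out sequentially
        # as an immutable sorted file (roughly an SSTable).
        filename = f"sstable-{next(self.flush_count)}.txt"   # hypothetical name
        with open(filename, "w") as f:
            for key in sorted(self.memtable):
                f.write(f"{key}\t{self.memtable[key]}\n")
        self.memtable.clear()

writer = MemTableWriter()
for k, v in [("banana", 2), ("apple", 1), ("cherry", 3), ("apple", 4)]:
    writer.put(k, v)   # the third distinct key triggers a sorted, sequential flush
```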

It is an interesting time to be a database researcher!
