When recommendations go bad: Walmart

Through Bruce Spencer, I learned about the Walmart recommendation engine fiasco: people searching for Planet of the Apes were directed to movies about Martin Luther King Jr. Note that Walmart does not appear to use a collaborative filtering engine; rather, it relies on manually entered association rules, or so the company claims. But there have been earlier examples of offensive recommendations that were based on collaborative filtering.

This raises a new problem in collaborative filtering and recommender engines: how to avoid offensive recommendations.

This one is tough. What if people who like Martin Luther King movies are more likely to buy porn? It could be. (Please don't sue me; this is just an academic example.) What happens then?

How is a machine learning algorithm to know that this is not a good association? For that matter, how do we, as human beings, even know in the first place?
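
To see why this is hard, consider the simplest form of item-to-item collaborative filtering: score pairs of items by how often they are bought together. Here is a minimal sketch in Python; the item names and purchase baskets are invented purely for illustration. Nothing in the computation can tell a benign association from an offensive one.

    from collections import defaultdict
    from itertools import combinations

    # Hypothetical purchase histories; the items and baskets are
    # invented purely for illustration.
    baskets = [
        {"planet_of_the_apes", "king_kong"},
        {"planet_of_the_apes", "mlk_documentary"},
        {"mlk_documentary", "civil_rights_history"},
        {"planet_of_the_apes", "mlk_documentary", "king_kong"},
    ]

    # Count how often each pair of items appears in the same basket.
    cooccurrence = defaultdict(int)
    for basket in baskets:
        for a, b in combinations(sorted(basket), 2):
            cooccurrence[(a, b)] += 1

    def recommend(item, top_n=3):
        """Return the items most often co-purchased with `item`.

        The score is raw co-occurrence: the engine sees only that two
        items show up in the same baskets, not what the items mean.
        """
        scores = defaultdict(int)
        for (a, b), count in cooccurrence.items():
            if a == item:
                scores[b] += count
            elif b == item:
                scores[a] += count
        return sorted(scores, key=scores.get, reverse=True)[:top_n]

    print(recommend("planet_of_the_apes"))

The scores come entirely from co-purchase statistics, so any notion of which associations are offensive has to be supplied from outside the data.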

This is a hard and important problem.
