I often rant about the bias we find in modern science toward positive results. That is, the typical research paper in Computer Science is about some new technique that improves over previous techniques. Not everyone focuses on this sort of paper, but they are the easiest to get accepted and they are often not very difficult to write. It is often easy to pick a problem and find some way to improve some existing technique. Is this worth your time, though? And more importantly, why would a reader care?
(I am guilty of writing such papers myself too!)
Well, some people seem to agree with me: I just found out about the Journal of Interesting Negative Results in Natural Language Processing and Machine Learning. What a cool title! It is also a pretty serious venture, since I recognize a few names such as Guy Lapalme and Stan Matwin.