Can Stack Exchange save scientific peer review? [Update]

One of the few things everybody seems to agree on is that the scientific review process, especially for computer science, is broken. I won't go into details here, as there are many sources on the net.

But personally I found Yann LeCun’s pamphlet “A New Publishing Model in Computer Science” inspiring. He proposes an open, karma-based online repository, which I will summarize as follows:

  • In this system authors post their papers as soon as they feel they are finished. The publication is put under version control and is immediately citable.
  • “Reviewing Entities” (RE), individuals or groups like editorial boards, then choose papers they want to review or accept review requests from authors.
  • REs do not “own” papers exclusively, so any RE can choose to review any paper at any time. Papers can be reviewed by multiple REs.
  • The reviews are published with the paper and are themselves citable documents like regular publications.
  • Reviews are furthermore rated by readers. Good reviews will generate “karma” points for the RE, to show the usefulness of their review.
  • Additionally RE’s “karma” will increase if they are the first to positively review a paper which is than later rated as high quality by other REs as well. As a result RE will have an incentive to be the first to review good papers.
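To make the mechanics concrete, here is a toy Python sketch of the karma bookkeeping described above. All class names, scoring rules, and numbers are my own illustration, not part of LeCun’s proposal:

```python
from collections import defaultdict

class Repository:
    """Toy sketch of the karma model: any RE may review any paper,
    readers rate reviews, and early positive reviews of papers that
    later prove good earn a bonus. Rules/numbers are illustrative."""

    def __init__(self):
        self.reviews = []              # (re_name, paper_id, positive)
        self.karma = defaultdict(int)  # RE name -> karma points

    def post_review(self, re_name, paper_id, positive):
        # Papers are not "owned"; every RE may review every paper.
        self.reviews.append((re_name, paper_id, positive))

    def rate_review(self, re_name, points):
        # Readers rate reviews; useful reviews earn karma for the RE.
        self.karma[re_name] += points

    def reward_early_reviewers(self, paper_id, bonus=5, threshold=2):
        # If a paper collects enough positive reviews, the RE that
        # reviewed it positively *first* gets a bonus -- the incentive
        # to be first on good papers.
        positives = [r for r, p, pos in self.reviews if p == paper_id and pos]
        if len(positives) >= threshold:
            self.karma[positives[0]] += bonus

repo = Repository()
repo.post_review("BoardA", "paper-1", True)   # BoardA reviews first
repo.post_review("BoardB", "paper-1", True)   # BoardB confirms later
repo.reward_early_reviewers("paper-1")        # BoardA earns the bonus
```

Of course, the real design questions (how ratings are weighted, how to prevent karma gaming) are exactly what a concrete implementation would have to settle.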

I will not repeat LeCun’s explanations on how it works in detail and why it would be superior to the existing system. Instead I want to point out how very similar this approach is to the Stack Exchange (SE) Q&A websites. Stack Exchange is a network of over 70 Q&A websites, with stackoverflow.com, a Q&A site on programming, being the first and largest one. On Stack Exchange websites everyone can ask questions, which can be answered by all the members of the community. Both questions and answers are rated by the community, so users are incentivized to write useful answers to questions which are relevant to many other users in order to gain reputation.

Especially if you have used an SE website, it is hard to ignore the similarities. Even though the SE framework was built to solve a different problem, I can see it being adapted to act as a repository for LeCun’s publishing model. Publications would be questions and reviews would be answers. I can make out only the following necessary changes:

  • there needs to be support for groups (REs),
  • high level users should not be permitted to change other people’s posts anymore and
  • the ‘answer’ functionality has to be removed.

Everyone who follows the two founders of Stack Exchange, Jeff Atwood and Joel Spolsky, knows how determined both are to keep any divergence from their vision out of Stack Exchange, so becoming an official part of the SE network would probably not be possible. But there is also OSQA, an open-source clone of SE. Building on it, implementing the necessary features seems feasible.

So, what do you think? Can Stack Exchange save scientific peer review?

[UPDATE]

LeCun was generous enough to comment on my article via e-mail. He confirmed that his views on the peer-review process and his model haven’t changed, and he agrees that creating the technical infrastructure shouldn’t be too hard. He has already received several offers from potential volunteers, but the project is still missing a highly competent developer (or team) to “own” it.

Disclaimer: I am not the first one to bring Stack Exchange to the table, but I found the other approach far less concrete.

Computer Vision News

I created Computer Vision News (CVN), an aggregator for all the events and academic vacancies within the fields of Computer Vision, Image Analysis, and Medical Image Analysis. You can also follow it on Twitter: @compvisionnews!

At the moment I use the following sources:

Please write me if you have sources I should add; I am happy to extend CVN.
I prefer to have just the headlines in my Twitter timeline, where they don’t clutter my mail client or my feed reader. But use it as you like. Yay for more choices!

Who is collaborating?

Collaboration graph of my diploma thesis, created with Collabgraph

In my scarce spare time, I have written Collabgraph to visualize connections between authors of scientific publications.

This Python script reads a (your) BibTeX file and draws a graph in which the nodes are authors and an edge indicates that the two authors have collaborated (or at least wrote a paper together).

On the right is the graph created from the references used in my diploma thesis. You can immediately see the central role Eakins, Meier, and Flickner played.
Collabgraph requires only the pygraphviz library, which can be installed with “easy_install pygraphviz”.
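For the curious, the core idea can be sketched in a few lines. This is a hypothetical re-implementation, not the actual Collabgraph code, and it assumes standard BibTeX `author = {A and B}` fields:

```python
import itertools
import re

def coauthor_edges(bibtex_text):
    """Extract co-authorship edges from a BibTeX string.
    Sketch only -- assumes 'author = {A and B and C}' style fields."""
    edges = set()
    for match in re.finditer(r'author\s*=\s*[{"]([^}"]+)[}"]', bibtex_text):
        authors = [a.strip() for a in match.group(1).split(" and ")]
        # every pair of authors on the same paper gets an edge
        for a, b in itertools.combinations(sorted(authors), 2):
            edges.add((a, b))
    return edges

def draw(edges, outfile="collab.png"):
    """Render the co-authorship graph with pygraphviz."""
    import pygraphviz as pgv  # the only non-stdlib dependency
    g = pgv.AGraph(strict=True, directed=False)
    for a, b in edges:
        g.add_edge(a, b)
    g.layout(prog="neato")
    g.draw(outfile)
```

The real script handles more BibTeX quirks (nested braces, “Last, First” names), but the graph-building step is essentially this.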

You can find the source code and the example at bitbucket.org.

I am looking forward to your feedback!

How to create good and fast Matlab code

As most readers of this blog land on one of the pages with the Matlab applications, I thought I would collect some of the resources I use to write Matlab code.

First, start with the official MathWorks help on how to write good code,

then we have this 33-page tutorial “Writing Fast MATLAB Code” (PDF),

followed by the recorded webinar: Handling Large Data Sets Efficiently in MATLAB.

For asking questions, I enjoy the Stack Overflow community. Here are two examples of the answers you get for general Matlab questions.

So, I hope you find these links more helpful than overwhelming.

Please leave a comment if you have anything to add!

On the importance of context

For people who are just starting to think about computer vision, it is often hard to understand why computers have such a difficult time finding faces in images, even though it is so easy for us.

Adding to one of my earlier articles about why vision is hard, which points out that computers are missing concepts, there is another reason: they are also missing context. It is so easy for us to spot faces because we know where to look for them, where to expect them. When we see a person, we know where to look for the face, and when we see a house, we know that we won’t find a face. But computers don’t. And to show you that even we are lost without context, I present this nice picture with the coffee beans. Your job is to find the face.

Where is the face in the coffee beans?

Did you find the face? You probably didn’t find it at once, but scanned the picture until you found it. It took me nearly 30 seconds, which is much slower than any recent face-detection software.

So how can we improve our algorithm with context?
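One common answer, as a toy sketch: use a context prior to restrict where the detector looks at all, instead of scanning the whole image exhaustively. The window sizes and the “top third” prior below are made up for illustration; a real system would learn such priors from data:

```python
def candidate_windows(width, height, win=24, stride=8):
    """All sliding-window positions an exhaustive detector would scan."""
    return [(x, y)
            for x in range(0, width - win + 1, stride)
            for y in range(0, height - win + 1, stride)]

def with_context_prior(windows, prior):
    """Keep only the windows the context deems plausible for a face.
    `prior` is any predicate, e.g. 'upper part of a detected person'."""
    return [w for w in windows if prior(w)]

all_wins = candidate_windows(640, 480)
# toy prior: a person was detected, so we expect the face in the top third
likely = with_context_prior(all_wins, lambda w: w[1] < 480 // 3)
```

With the toy prior, the detector runs the (expensive) face classifier on roughly a third of the windows; better priors (scene type, person geometry) shrink the search space further, which is exactly the advantage our brains exploit.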

p.s.

About the picture: I got it from a very interesting talk on classification held at TID by Aleix M. Martinez from Ohio State University. His main topics were PCA and LDA. For starters, check out his paper “PCA versus LDA” (2001).

Twitter

Hi,

this post is for all the readers who use RSS to read this blog.

I have started writing tweets in the last few weeks. I mainly post links to interesting pages, but with a slightly broader focus than the blog. Yesterday, for instance, I posted this comic: http://dilbert.com/strips/comic/2009-12-09/

There are three ways to read my tweets:

Also feel free to look through the people I follow on Twitter; maybe you will find something interesting.

Jobs in computer vision

As I am currently looking for a new job for the time after my internship, I have come across several websites dedicated to jobs in the fields of computer vision and image retrieval. And being the nice guy I am, I want to share what I found.

The links are sorted by descending frequency of new offers:

Furthermore some conferences, like CVPR, have a jobs section:

A lot of positions are also offered via the Imageworld mailing list:

So, now good luck! 🙂