The end of Everpix, a sad week for photographers and machine learning researchers.

This week the photo storage service Everpix announced that it will shut down. It did not have enough paying customers and could not find new investors.

That is sad. Not only because it was the world’s best photo startup according to The Verge, but also because it was the only company besides Google that used new machine learning techniques to help people manage their photo mess.

Everpix home screen

Their closure can be seen as an indicator that end users and investors are not yet ready to spend additional money on machine learning algorithms.

Flashback mail

Having read some articles and the associated comments [1, 2], it is clear to me that the more popular feature was not their use of sophisticated machine learning algorithms but the daily ‘flashback’ email with pictures taken on the same day in previous years. In fact, I did not see a single comment about the algorithms that analysed the pictures.

But maybe their algorithms were just not good enough.

Unfortunately I could not try out their algorithms myself; my pictures finished processing only a few days before the shutdown was announced. But I found a comment from one of the founders on Hacker News saying that they used a deep convolutional neural network with 3 layers for the semantic image analysis. This is the same technology Google now uses for its photo search.

But they were unhappy with the results of the algorithm, so in January this year they changed their approach, as their CTO, Kevin Quennesson, explains in ‘To Reclaim Your Photos, Kill the Algorithm’. He writes: “If a user is a food enthusiast and takes a lot of food close-ups, are we going to tell him that this photo is not the photo of a dish because an algorithm only learned to model some other kind of dishes?” They found that the algorithm’s errors were not comprehensible to the end user.

So they planned to change their system. As I understand it, their old system learned and applied concepts independently of the individual user. The new system also uses pictures of the same user to infer the content of a new picture. Quennesson calls this “feature-based image backlinks”.
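To make the idea concrete, here is a minimal sketch of my own (a toy illustration, not Everpix’s actual implementation): label a new photo by looking at the user’s own previously recognised photos that are nearest to it in feature space, and taking a majority vote.

```python
import numpy as np

def backlink_label(new_feature, user_features, user_labels, k=3):
    """Infer the label of a new photo from the same user's
    previously labelled photos: find the k nearest neighbours
    in feature space and return their majority label."""
    dists = np.linalg.norm(user_features - new_feature, axis=1)
    nearest = np.argsort(dists)[:k]
    labels = [user_labels[i] for i in nearest]
    return max(set(labels), key=labels.count)

# Toy 2-d features: this user's food close-ups cluster together,
# so a new close-up lands near them even if a generic model
# would not recognise it as a dish.
user_features = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
user_labels = ["food", "food", "landscape", "landscape"]

print(backlink_label(np.array([0.85, 0.15]), user_features, user_labels))
# food
```

The point of the vote over the user’s own photos is exactly the dish example above: the system can recognise this user’s kind of dish even when a globally trained model cannot.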

Feature-based image backlinks

The graph shows how a picture of a dish can be correctly identified because its content is inferred from similar pictures by the same user that the system has identified correctly before. – from Quennesson’s blog post

Regardless of the success of Everpix, I think making more use of an image’s context is a helpful and necessary approach for building systems that will reliably predict the content of an image in the future.

In any case, I wish we would hear more about the underlying algorithms: what they tried, what worked and what did not.

On the importance of context

For people who are just starting to think about computer vision, it is hard to understand why computers have such a difficult time finding faces in images, even though it is so easy for us.

Adding to one of my earlier articles about why vision is hard, which points out that computers are missing concepts, there is another reason: they are also missing context. It is so easy for us to spot faces because we know where to look for them, where to expect them. When we see a person, we know where to look for the face, and when we see a house, we know that we won’t find one. Computers don’t. And to show you that even we are lost without context, I present you this nice picture with the coffee beans. Your job is to find the face.

Where is the face in the coffee beans?

Did you find the face? You probably did not find it at once, but scanned the picture until you did. It took me nearly 30 seconds, which is much slower than any recent software.

So how can we improve our algorithm with context?
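One simple answer, as a toy sketch of my own (the region names and numbers are made up for illustration): weight a detector’s raw face scores by a prior over where faces usually appear. A weak detector that fires slightly more on a textured floor than on a torso gets corrected by the context prior.

```python
# Raw face-detector scores for three image regions (made-up numbers):
# the detector is weak and fires slightly strongest on the floor.
detector_score = {"sky": 0.60, "torso_top": 0.55, "floor": 0.62}

# Context prior: faces appear above torsos, almost never in sky or floor.
context_prior = {"sky": 0.02, "torso_top": 0.90, "floor": 0.01}

def with_context(region):
    # Combine evidence and prior multiplicatively (a crude Bayes-style fusion).
    return detector_score[region] * context_prior[region]

print(max(detector_score, key=detector_score.get))  # floor  (detector alone is wrong)
print(max(detector_score, key=with_context))        # torso_top  (context fixes it)
```

This is, in spirit, what we do with the coffee beans: once we know where faces tend to be, we only have to search a tiny part of the image.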


About the picture: I got it from a very interesting talk on classification held at TID by Aleix M. Martinez from Ohio State University. His main points were PCA and LDA. For starters, check out his paper ‘PCA versus LDA’ (2001).
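The core contrast in that paper can be shown in a few lines. A minimal sketch (my own toy data, plain NumPy): PCA picks the direction of largest variance and ignores class labels, while Fisher’s LDA uses the labels and picks the direction that best separates the classes.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two classes, both elongated along the x-axis, separated along the y-axis:
# the direction of largest variance is NOT the direction that separates them.
cov = [[4.0, 0.0], [0.0, 0.1]]
X1 = rng.multivariate_normal([0.0, 0.0], cov, 100)
X2 = rng.multivariate_normal([0.0, 1.0], cov, 100)

# PCA: top eigenvector of the pooled covariance (labels ignored).
X = np.vstack([X1, X2])
eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
pca_dir = eigvecs[:, np.argmax(eigvals)]  # close to the x-axis

# Fisher LDA: w proportional to Sw^-1 (m1 - m2), using the labels.
m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
Sw = np.cov(X1.T) * (len(X1) - 1) + np.cov(X2.T) * (len(X2) - 1)
lda_dir = np.linalg.solve(Sw, m1 - m2)
lda_dir /= np.linalg.norm(lda_dir)        # close to the y-axis

print("PCA direction:", pca_dir)
print("LDA direction:", lda_dir)
```

On this data PCA projects the classes on top of each other, while LDA projects them apart, which is exactly the distinction the 2001 paper studies.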