Intuition behind Average Precision and MAP

The Technical Experience Page

Average Precision (AP), more commonly averaged over all queries and reported as a single score called Mean Average Precision (MAP), is a very popular performance measure in information retrieval. However, the scores it produces are quite tricky to interpret and compare. From my observations, most hard challenges (such as TRECVID Semantic Indexing and Genre Classification) see very low MAP scores, typically 0.05 to 0.3 out of a maximum of 1.
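For reference, the usual definitions can be written as follows (the notation here is mine, not from the original post): AP averages the precision at each relevant rank within one result list, and MAP averages AP over the whole query set.

```latex
\mathrm{AP}(q) = \frac{1}{R_q} \sum_{k=1}^{n} P(k)\,\mathrm{rel}(k),
\qquad
\mathrm{MAP} = \frac{1}{|Q|} \sum_{q \in Q} \mathrm{AP}(q)
```

Here $R_q$ is the number of relevant items for query $q$, $P(k)$ is the precision of the top $k$ results, and $\mathrm{rel}(k)$ is 1 if the item at rank $k$ is relevant and 0 otherwise.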

This post is my view on what the score conveys. I will also try to interpret what a “good” MAP score is and what a “bad” one is. While this depends on the application, it is still useful to have an idea of what to expect.

So first, what is MAP, or AP? Suppose we are searching for images of a flower and we provide our image retrieval system with a sample picture of a rose as the query; we then get back a set of ranked images…
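Below is a minimal sketch of how AP and MAP can be computed from binary relevance labels on such a ranked list. The function and variable names are my own, not from the original post, and this simplified AP divides by the number of relevant items actually retrieved, whereas the full definition divides by all relevant items that exist for the query.

```python
# A minimal sketch of AP and MAP, assuming binary relevance labels
# (names are hypothetical, not from the original post).
from typing import List

def average_precision(relevance: List[int]) -> float:
    """AP of one ranked list: mean of precision@k taken at each relevant rank k."""
    hits = 0
    precision_sum = 0.0
    for k, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            precision_sum += hits / k
    return precision_sum / hits if hits else 0.0

def mean_average_precision(ranked_lists: List[List[int]]) -> float:
    """MAP: the per-query AP scores averaged over all queries."""
    return sum(average_precision(r) for r in ranked_lists) / len(ranked_lists)

# Two hypothetical queries: relevant results at ranks 1 and 3, and at rank 2.
print(average_precision([1, 0, 1, 0, 0]))           # (1/1 + 2/3) / 2 ≈ 0.833
print(mean_average_precision([[1, 0, 1], [0, 1]]))  # (0.833 + 0.5) / 2 ≈ 0.667
```

Placing every relevant image near the top of the ranking pushes AP toward 1, while relevant images buried deep in the list drag it toward 0.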



Experience @ MakerFest 17

https://twitter.com/MakerFestMJFF/status/817716590531792896 MakerFest is an amazing fest that celebrates the spirit of makers and asks how we could expand this spirit to domains that are not yet explored. It is hosted by the Motwani Jadeja Family Foundation and was founded by Asha Jadeja. I could not chat with her much, but I found an opportunity to invite her to our Mozilla …