Intuition behind Average Precision and MAP

Reblogged from The Technical Experience Page.

Average Precision (AP), more commonly reported after further averaging over all queries as a single score called Mean Average Precision (MAP), is a very popular performance measure in information retrieval. However, the scores it produces are quite tricky to interpret and compare. From my observations, most hard challenges (e.g., TRECVID Semantic Indexing / Genre Classification) report very low MAP scores, around 0.05 to 0.3 on a scale whose maximum is 1.

This post is my view on what the score conveys. I will also try to interpret what makes a MAP score “good” or “bad”. While this depends on the application, it is still useful to have an idea of what to expect.

So first, what is MAP, or AP? Suppose we are searching for images of a flower and provide our image retrieval system a sample picture of a rose (the query). We get back a ranked list of images…
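To make the ranking intuition concrete, here is a minimal sketch (my own illustration, not from the original post) of how AP and MAP are typically computed from binary relevance judgments. The function names and the toy relevance lists are assumptions for illustration only.

```python
def average_precision(relevances, total_relevant=None):
    """AP for one ranked result list.

    relevances[k] is 1 if the item at rank k+1 is relevant, else 0.
    total_relevant is the number of relevant items for the query in the
    whole collection; it defaults to the number found in the list.
    """
    hits = 0
    precision_sum = 0.0
    for k, rel in enumerate(relevances, start=1):
        if rel:
            hits += 1
            precision_sum += hits / k  # precision at the rank of each hit
    denom = total_relevant if total_relevant is not None else hits
    return precision_sum / denom if denom else 0.0


def mean_average_precision(runs):
    """MAP: AP averaged over the result lists of all queries."""
    return sum(average_precision(r) for r in runs) / len(runs)


# Toy rose-query example: relevant images come back at ranks 1, 3 and 5.
print(average_precision([1, 0, 1, 0, 1]))          # (1/1 + 2/3 + 3/5) / 3 ≈ 0.76
print(mean_average_precision([[1, 0, 1, 0, 1],
                              [0, 1, 0, 1]]))      # average of the per-query APs
```

Note that under the standard TREC definition the denominator is the total number of relevant documents for the query, so a system that fails to retrieve some relevant items is penalised even if everything it does return is relevant.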



MozillaTN Community Redefined

Mozilla Tamil Nadu Community Meetup 2017: The Mozilla Tamil Nadu community held its annual meetup on the 21st and 22nd of January 2017 at the KGISL campus, Coimbatore. The event focused on planning the community's roadmap for the upcoming year, training its enthusiastic contributors in Mozilla's focus areas, and helping them become evangelists … Continue reading MozillaTN Community Redefined

Experience @ MakerFest 17

MakerFest is an amazing fest that celebrates the spirit of makers and asks how that spirit could be expanded into domains not yet explored. It is hosted by the Motwani Jadeja Family Foundation and was founded by Asha Jadeja. I could not chat with her much, but found an opportunity to invite her to our Mozilla … Continue reading Experience @ MakerFest 17