Intuition behind Average Precision and MAP

The Technical Experience Page

Average Precision (AP), more commonly averaged over all queries and reported as a single score, Mean Average Precision (MAP), is a very popular performance measure in information retrieval. However, the scores it produces are quite tricky to interpret and compare. From my observations, most hard challenges (TRECVID Semantic Indexing / Genre Classification) have very low MAP scores, typically 0.05 to 0.3 out of a maximum of 1.
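
For reference, here are the standard definitions (my summary, not part of the original excerpt). For a single query with R relevant items and n retrieved items:

```latex
\mathrm{AP} = \frac{1}{R} \sum_{k=1}^{n} P(k)\,\mathrm{rel}(k),
\qquad
\mathrm{MAP} = \frac{1}{|Q|} \sum_{q \in Q} \mathrm{AP}_q
```

Here P(k) is the precision of the top k results, rel(k) is 1 if the item at rank k is relevant and 0 otherwise, and Q is the set of queries. Both scores lie in [0, 1], which is why the maximum above is 1.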

This post is my view on what the score conveys. I will also try to interpret what counts as a “good” MAP score and what counts as a “bad” one. While this depends on the application, it is still useful to have an idea of what to expect.

So first, what is MAP, or AP? Suppose we are searching for images of a flower and provide our image retrieval system a sample picture of a rose (the query); we get back a bunch of ranked images…
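
To make the computation concrete, here is a minimal Python sketch of AP and MAP for a rose query like this one. The function names and the example relevance judgments are mine, not from the post, and it assumes binary relevance; if some relevant images are never retrieved, pass the true count via num_relevant.

```python
def average_precision(relevances, num_relevant=None):
    """AP for one ranked result list of 0/1 relevance judgments.

    If num_relevant is given, divide by the total number of relevant
    items in the collection; otherwise assume they all appear in the list.
    """
    hits = 0
    precision_sum = 0.0
    for k, rel in enumerate(relevances, start=1):
        if rel:
            hits += 1
            precision_sum += hits / k  # precision at each relevant rank
    total = num_relevant if num_relevant is not None else hits
    return precision_sum / total if total else 0.0

def mean_average_precision(per_query_relevances):
    """MAP: the mean of per-query AP scores."""
    return sum(average_precision(r) for r in per_query_relevances) / len(per_query_relevances)

# Ranked results for the rose query: 1 = a relevant flower image, 0 = not.
print(average_precision([1, 0, 1, 1, 0]))        # ~0.806
print(mean_average_precision([[1, 0, 1, 1, 0],
                              [0, 1, 0, 0, 0]])) # ~0.653
```

Note how heavily early ranks weigh in: a single relevant image at rank 1 contributes a full 1.0 to the sum, which is why hard tasks with few relevant hits near the top end up in the 0.05 to 0.3 range.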


Unraveling Rust Design

The following topics are covered in the blog:

- Difference between system programming and scripting languages
- Why we need a new programming language
- Understanding the different terminology of system programming languages
- Deep dive into the value propositions of the Rust language

Introduction

This blog is focused on bringing about a behavioral change in readers in understanding the problems …