diff --git a/mypaper-final.tex b/mypaper-final.tex
index bc39a3878c614c2a2aacc467805ff99ec0a732de..c7d3466dddb7165f7b546158a9e744ff872c4ec5 100644
--- a/mypaper-final.tex
+++ b/mypaper-final.tex
@@ -150,12 +150,7 @@ In 2012, the Text REtrieval Conferences (TREC) introduced the Knowledge Base Acc
 Filtering is a crucial step in CCR for selecting a potentially
 relevant set of working documents for subsequent steps of the
- pipeline out of a big collection of stream documents. The TREC
- Filtering track defines filtering as a ``system that sifts through
- stream of incoming information to find documents that are relevant to
- a set of user needs represented by profiles''
- \cite{robertson2002trec}.
-In the specific setting of CCR, these profiles are
+ pipeline out of a big collection of stream documents. Filtering sifts through a stream of incoming information for documents relevant to a set of user profiles \cite{robertson2002trec}. In the specific setting of CCR, these profiles are
 represented by persistent KB entities (Wikipedia pages or Twitter
 users, in the TREC scenario).
@@ -180,7 +175,7 @@ Also, different approaches use different entity profiles for filtering,
 varying from using just the KB entities' canonical names to looking
 up DBpedia name variants, and from using the bold words in the first
 paragraph of the Wikipedia
-entities’ page to using anchor texts from other Wikipedia pages, and from
+entities' page to using anchor texts from other Wikipedia pages, and from
 using the exact name as given to WordNet derived synonyms. The type
 of entities (Wikipedia or Twitter) and the category of documents in
 which they occur (news, blogs, or tweets) cause further variations.
@@ -210,7 +205,7 @@ compromising precision, describing and classifying relevant
 documents that are not amenable to filtering, and estimating the
 upper-bound of recall on entity-based filtering.
 
-The rest of the paper is organized as follows. Section \ref{sec:desc} describes the dataset and section \ref{sec:fil} defines the task. In section \ref{sec:lit}, we discuss related litrature folowed by a discussion of our method in \ref{mthd}. Following that, we present the experimental resulsy in \ref{sec:expr}, and discuss and analyze them in \ref{sec:analysis}. Towards the end, we discuss the impact of filtering choices on classification in section \ref{sec:impact}, examine and categorize unfilterable docuemnts in section \ref{sec:unfil}. Finally, we present our conclusions in \section{sec:conc}.
+The rest of the paper is organized as follows. Section \ref{sec:desc} describes the dataset and section \ref{sec:fil} defines the task. In section \ref{sec:lit}, we discuss related literature, followed by a discussion of our method in section \ref{sec:mthd}. Following that, we present the experimental results in section \ref{sec:expr}, and discuss and analyze them in section \ref{sec:analysis}. Towards the end, we discuss the impact of filtering choices on classification in section \ref{sec:impact} and examine and categorize unfilterable documents in section \ref{sec:unfil}. Finally, we present our conclusions in section \ref{sec:conc}.
 
 \section{Data Description}\label{sec:desc}
@@ -367,7 +362,7 @@ pipeline performance we use the official TREC KBA evaluation metric
 and scripts \cite{frank2013stream} to report max-F, the maximum
 F-score obtained over all relevance cut-offs.
 
-\section{Literature Review} \label{sec:rev}
+\section{Literature Review} \label{sec:lit}
 
 There has been a great deal of interest as of late in entity-based filtering and ranking.
 One manifestation of that is the introduction of TREC KBA in 2012. Following that, a number of research works have been done on the topic \cite{frank2012building, ceccarelli2013learning, taneva2013gem, wang2013bit, balog2013multi}. These works are based on the KBA 2012 task and dataset, and they address the whole problem of entity filtering and ranking. TREC KBA continued in 2013, but the task underwent some changes. The main changes between 2012 and 2013 are in the number of entities, the type of entities, the corpus, and the relevance ratings. The number of entities increased from 29 to 141, and now includes 20 Twitter entities. The TREC KBA 2012 corpus is 1.9TB after xz-compression and has 400M documents. By contrast, the KBA 2013 corpus is 6.45 TB after XZ-compression and GPG encryption. A version with all non-English documents removed is 4.5 TB and consists of 1 billion documents. The 2013 corpus subsumed the 2012 corpus and added other sources from spinn3r, namely main-stream news, forum, arxiv, classified, reviews and meme-tracker. A more important difference, however, is a change in the definitions of the relevance ratings vital and relevant. While in KBA 2012 a document was judged vital if it had citation-worthy content for a given entity, in 2013 it must also show freshness, that is, the content must trigger an edit of the given entity's KB entry.
@@ -378,7 +373,7 @@ All of the studies used filtering as their first step to generate a smaller set
 Moreover, there has been no study at this scale, nor a study into what types of documents defy filtering and why. In this paper, we conduct a manual examination of the documents that are missing and classify them into different categories. We also estimate the general upper bound of recall using the different entity profiles and choose the best profile that results in an increased overall performance as measured by F-measure.
 
-\section{Method}\label{sec:mth}
+\section{Method}\label{sec:mthd}
 
 All analyses in this paper are carried out on the documents that have
 relevance assessments associated with them. For this purpose, we
 extracted those documents from the big corpus. We experiment with all
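
Editor's note: to make the entity-profile filtering discussed in the patched text concrete, below is a minimal, illustrative Python sketch. It is not the authors' system; the profile construction, helper names, and example data are assumptions. It treats a profile as a set of surface forms (canonical name plus name variants) and accepts a document for an entity if any surface form occurs in the text.

    # Illustrative sketch only -- not the implementation evaluated in the paper.
    # A profile is a set of lowercased surface forms; a document passes the
    # filter for an entity if any surface form appears in its body (naive
    # substring matching, for simplicity).

    def build_profile(canonical_name, name_variants=()):
        """Collect lowercased surface forms for one KB entity."""
        return {canonical_name.lower(), *(v.lower() for v in name_variants)}

    def filter_stream(documents, profiles):
        """Yield (doc_id, entity) pairs whenever a surface form matches."""
        for doc_id, body in documents:
            text = body.lower()
            for entity, surface_forms in profiles.items():
                if any(form in text for form in surface_forms):
                    yield doc_id, entity

    # Hypothetical usage with made-up data:
    profiles = {"Barack Obama": build_profile("Barack Obama", ["President Obama"])}
    stream = [("doc-1", "President Obama met reporters."),
              ("doc-2", "Nothing relevant here.")]
    print(list(filter_stream(stream, profiles)))  # [('doc-1', 'Barack Obama')]

Whether a profile holds only the canonical name or is expanded with DBpedia variants, Wikipedia anchor texts, or WordNet synonyms is precisely the design choice whose effect on recall the paper sets out to measure.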