Changeset - 2d75f50f60d9
[Not reviewed]
Arjen de Vries (arjen) - 11 years ago 2014-06-12 02:56:00
arjen.de.vries@cwi.nl
"done" with section 2 up to literature review
1 file changed with 22 insertions and 23 deletions:
0 comments (0 inline, 0 general)
mypaper-final.tex
 
@@ -214,9 +214,9 @@ The rest of the paper is organized as follows:
 
\textbf{TODO!!}
 

	
 
 \section{Data Description}
 
 
We base this analysis on the TREC-KBA 2013 dataset%
 
\footnote{http://trec-kba.org/trec-kba-2013.shtml}
 
that consists of three main parts: a time-stamped stream corpus, a set of
 
KB entities to be curated, and a set of relevance judgments. A CCR
system then has to identify, for each KB entity, which documents in the
stream corpus should be considered by the human curator.
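To make this setup concrete, the sketch below models the three parts of
the dataset as plain Python records. The field names are illustrative
assumptions for the discussion in this paper, not the official TREC-KBA
schema:

\begin{verbatim}
from dataclasses import dataclass

@dataclass
class StreamDocument:      # item from the time-stamped stream corpus
    stream_id: str
    timestamp: int         # epoch seconds; the stream is time-ordered
    source: str            # e.g. "news", "weblog", "social"
    text: str

@dataclass
class Judgment:            # relevance judgment for an entity-document pair
    entity: str            # KB entity (Wikipedia or Twitter identifier)
    stream_id: str
    rating: str            # "vital" or "relevant"
\end{verbatim}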
 
@@ -286,14 +286,14 @@ that can be useful for initial KB-dossier are annotated as
 
%The results of the different entity profiles on the raw corpus are
 
%broken down by source categories and relevance rank% (vital, or
 
%relevant).  
 
 
 
In total, the dataset contains 24162 unique entity-document
pairs judged vital or relevant; 9521 of these have been labelled as
vital, and 17424 as relevant.
All documents are categorized into 8 source categories: 0.98\%
arxiv(a), 0.034\% classified(c), 0.34\% forum(f), 5.65\% linking(l),
11.53\% mainstream-news(m-n), 18.40\% news(n), 12.93\% social(s) and
50.2\% weblog(w). We have regrouped these source categories into three
groups, ``news'', ``social'', and ``other'', for two reasons: 1) some
categories are very similar to each other. Mainstream-news and news,
for instance, exist as separate categories in the first place only
because they were collected from two different sources, by different
 
@@ -306,14 +306,13 @@ relevant annotations are social (social and weblog) (63.13\%). News
 
93\% of all annotations. The rest make up about 7\% and are all
grouped as ``other''.
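The regrouping itself is a one-line mapping; a minimal sketch, assuming
each annotated document carries its source category as a string (as in
the records above):

\begin{verbatim}
from collections import Counter

# Map the 8 TREC-KBA source categories onto the 3 groups used here.
GROUP = {
    "mainstream-news": "news", "news": "news",
    "social": "social", "weblog": "social",
    "arxiv": "other", "classified": "other",
    "forum": "other", "linking": "other",
}

def group_shares(docs):
    """Percentage of annotated documents per regrouped category."""
    counts = Counter(GROUP[d.source] for d in docs)
    total = sum(counts.values())
    return {g: 100.0 * n / total for g, n in counts.items()}
\end{verbatim}

Applied to the per-category percentages above, this yields roughly
30\% news, 63\% social, and 7\% other.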
 

	
 
 
 \section{Stream Filtering}
 
 
 
 The TREC Filtering track defines filtering as a ``system that sifts
 
 through stream of incoming information to find documents that are
 
 relevant to a set of user needs represented by profiles''
 
 \cite{robertson2002trec}. Its information needs are long-term and are
 
 
 represented by persistent profiles, unlike the traditional search system
 
whose ad hoc information need is represented by a search
query. Adaptive Filtering, one task of the filtering track, starts
with a persistent user profile and a very small number of positive
 
@@ -325,25 +324,25 @@ grouped as others.
 
entities, and the relevance judgments are the small number of positive
 
 examples.
 

	
 
 
 
 
Stream filtering is then the following task: given a stream of
documents (news items, blogs and social media) on one hand and a set
of KB entities on the other, filter the stream for potentially
relevant documents such that the relevance classifier (ranker)
achieves the highest possible performance. Specifically, we conduct an
in-depth analysis of the choices and factors affecting the cleansing
step, the entity-profile construction, the document category of the
stream items, and the type of entities (Wikipedia or Twitter), and of
their impact on the overall performance of the pipeline (a naive
sketch of the filtering step follows the list below). Finally, we
conduct a manual examination of the vital documents that defy
filtering. We strive to answer the following research questions:
 
\begin{enumerate}
 
  \item Does cleansing affect filtering and subsequent performance?
  \item What is the most effective way of representing entity profiles?
  \item Is filtering different for Wikipedia and Twitter entities?
  \item Are some types of documents easily filterable and others not?
  \item Does a gain in recall at the filtering step translate to a gain in F-measure at the end of the pipeline?
 
  \item What characterizes the vital (and relevant) documents that are
 
    missed in the filtering step?
 
\end{enumerate}
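As a concrete (and deliberately naive) illustration of the filtering
step referenced above, the sketch below forwards a stream document to
the downstream ranker whenever any surface form from an entity's
profile occurs in the document text. The \texttt{profile\_names}
argument is a placeholder for whichever entity-profile representation
is chosen (research question 2); this is an assumption for exposition,
not the official TREC-KBA tooling:

\begin{verbatim}
def filter_stream(stream, kb_entities, profile_names):
    """Naive filter: keep a document for an entity if any name variant
    from the entity's profile occurs in the document text.  Recall lost
    here is unrecoverable by the downstream classifier (ranker)."""
    for doc in stream:
        text = doc.text.lower()
        for entity in kb_entities:
            if any(n.lower() in text for n in profile_names(entity)):
                yield entity, doc   # candidate entity-document pair
\end{verbatim}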
 

	
 
The TREC filtering and the filtering as part of the entity-centric