Changeset - 1c02d4192bbe
[Not reviewed]
Merge
Gebrekirstos Gebremeskel - 11 years ago 2014-06-12 03:00:13
destinycome@gmail.com
Merge branch 'master' of https://scm.cwi.nl/IA/cikm-paper
merge auto
1 file changed with 46 insertions and 47 deletions:
mypaper-final.tex
 
@@ -219,9 +219,9 @@ The rest of the paper is organized as follows:
 
\textbf{TODO!!}
 

	
 
 \section{Data Description}
 
 
We base this analysis on the TREC-KBA 2013 dataset%
 
\footnote{http://trec-kba.org/trec-kba-2013.shtml}
 
that consists of three main parts: a time-stamped stream corpus, a set of
 
KB entities to be curated, and a set of relevance judgments. A CCR
 
system now has to identify for each KB entity which documents in the
 
stream corpus are to be considered by the human curator.
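
To make the task input concrete, here is a minimal sketch of the three
parts and of what a CCR system must produce. The field and function
names are our own illustration, not the official TREC-KBA schema:

\begin{verbatim}
from dataclasses import dataclass

@dataclass
class StreamDoc:           # one item of the time-stamped stream corpus
    doc_id: str
    timestamp: int         # epoch seconds; the corpus is time-ordered
    source: str            # e.g. "news", "weblog", "social"
    text: str

@dataclass
class Judgment:            # one relevance judgment
    entity: str            # KB entity (Wikipedia page or Twitter handle)
    doc_id: str
    rating: str            # e.g. "vital" or "relevant"

def ccr_output(stream, entities, matches):
    # For each KB entity, collect the documents that should be
    # forwarded to the human curator.
    return {e: [d.doc_id for d in stream if matches(e, d)]
            for e in entities}
\end{verbatim}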
 
@@ -291,34 +291,33 @@ that can be useful for initial KB-dossier are annotated as
 
%The results of the different entity profiles on the raw corpus are
 
%broken down by source categories and relevance rank% (vital, or
 
%relevant).  
 
 
 
In total, the dataset contains 24162 unique entity-document pairs
judged vital or relevant; 9521 of these pairs are labelled vital and
17424 relevant.
All documents are categorized into 8 source categories: 0.98\%
arxiv(a), 0.034\% classified(c), 0.34\% forum(f), 5.65\% linking(l),
11.53\% mainstream-news(m-n), 18.40\% news(n), 12.93\% social(s) and
50.2\% weblog(w). We have regrouped these source categories into three
groups, ``news'', ``social'', and ``other'', for two reasons. 1) Some
categories are very similar to each other: mainstream-news and news
exist as separate categories only because they were collected from
different sources, by different groups and at different times, so we
refer to both as news from now on; the same holds for weblog and
social, which we refer to as social. 2) Some categories contain so few
annotations that treating them independently makes little sense. The
majority of vital or relevant annotations are social (social and
weblog, 63.13\%), and news (mainstream-news and news) makes up about
30\%. Thus, news and social together account for about 93\% of all
annotations; the remaining categories account for about 7\% and are
grouped as other.
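
As a sanity check on these percentages, a short script (shares copied
from the text above) reproduces the regrouped totals:

\begin{verbatim}
# Regroup the 8 source categories into news / social / other and
# sum their shares of the vital-or-relevant annotations.
share = {"arxiv": 0.98, "classified": 0.034, "forum": 0.34,
         "linking": 5.65, "mainstream-news": 11.53, "news": 18.40,
         "social": 12.93, "weblog": 50.2}
group_of = {"mainstream-news": "news", "news": "news",
            "social": "social", "weblog": "social"}  # rest -> "other"

totals = {}
for cat, pct in share.items():
    g = group_of.get(cat, "other")
    totals[g] = totals.get(g, 0.0) + pct

print(totals)  # news ~29.9, social ~63.1, other ~7.0 (percent)
\end{verbatim}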
 

	
 
 \section{Stream Filtering}
 
 
 
 The TREC Filtering track defines filtering as a ``system that sifts
 
through a stream of incoming information to find documents that are
 
 relevant to a set of user needs represented by profiles''
 
 \cite{robertson2002trec}. Its information needs are long-term and are
 
represented by persistent profiles, unlike the traditional search system

whose ad hoc information need is represented by a search
 
 query. Adaptive Filtering, one task of the filtering track,  starts
 
 with  a persistent user profile and a very small number of positive
 
@@ -329,27 +328,27 @@ grouped as others.
 
 of the filtering track. The persistent information needs are the KB
 
entities, and the relevance judgments are the small number of positive
 
 examples.
 
 
 
 
 
 

	
 
Stream filtering is then the task of, given a stream of news items,
blogs and social media documents on the one hand and a set of KB
entities on the other, filtering the stream for potentially relevant
documents such that the relevance classifier (ranker) achieves the
highest possible performance. Specifically, we conduct an in-depth
analysis of the choices and factors affecting the cleansing step, the
entity-profile construction, the document category of the stream items, and the type
 
 
of entities (Wikipedia or Twitter), and finally their impact on the
overall performance of the pipeline. Finally, we conduct a manual
examination of the vital documents that defy filtering. We strive to
answer the following research questions (a sketch of the filtering
setup follows the list):
 
\begin{enumerate}
 
 \item Does cleansing affect filtering and subsequent performance?
 \item What is the most effective way of representing an entity profile?
 \item Is filtering different for Wikipedia and Twitter entities?
 \item Are some types of documents easily filterable and others not?
 \item Does a gain in recall at the filtering step translate to a gain in F-measure at the end of the pipeline?
 \item What characterizes the vital (and relevant) documents that are
   missed in the filtering step?
 
\end{enumerate}
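
The filtering stage itself can be stated in a few lines. The sketch
below assumes a naive case-insensitive substring matcher; the entity
profiles we actually compare (canonical names, name variants, etc.)
are exactly what this paper varies:

\begin{verbatim}
def filter_stream(stream, entity_profiles):
    # stream: iterable of documents with a .text attribute
    # entity_profiles: dict mapping each KB entity to a set of
    #   surface strings -- the entity-profile choice under study
    for doc in stream:
        text = doc.text.lower()
        hits = {e for e, names in entity_profiles.items()
                if any(n.lower() in text for n in names)}
        if hits:
            yield doc, hits  # candidate pairs for the classifier/ranker
\end{verbatim}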
 

	
 
The TREC filtering and the filtering as part of the entity-centric
 
stream filtering and ranking pipeline have different purposes. The
 
@@ -366,14 +365,14 @@ filtering in this case can only be studied by binding it to the later
 
stages of the entity-centric pipeline. This bond influences how we do
 
evaluation.
 

	
 
 
To achieve this, we use recall percentages in the filtering stage for
 
the different choices of entity profiles. However, we use the overall
 
performance to select the best entity profiles. To generate the overall
 
pipeline performance we use the official TREC KBA evaluation metric
 
and scripts \cite{frank2013stream} to report max-F, the maximum
 
F-score obtained over all relevance cut-offs.
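
In symbols, with $P(\tau)$ and $R(\tau)$ denoting the precision and
recall of the documents whose relevance score is at least a cut-off
$\tau$:
\[
\text{max-F} = \max_{\tau} \frac{2\,P(\tau)\,R(\tau)}{P(\tau)+R(\tau)}.
\]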
 

	
 
 
\section{Literature Review}
 
There has been a great deal of interest of late in entity-based filtering and ranking. One manifestation of this is the introduction of TREC KBA in 2012. Following that, a number of studies have been conducted on the topic \cite{frank2012building, ceccarelli2013learning, taneva2013gem, wang2013bit, balog2013multi}. These works are based on the KBA 2012 task and dataset, and they address the whole problem of entity filtering and ranking. TREC KBA continued in 2013, but the task underwent some changes. The main changes between 2012 and 2013 concern the number of entities, the type of entities, the corpus, and the relevance ratings.
 
 
The number of entities increased from 29 to 141, and now includes 20 Twitter entities. The TREC KBA 2012 corpus is 1.9TB after xz-compression and contains 400M documents. By contrast, the KBA 2013 corpus is 6.45TB after XZ-compression and GPG encryption. A version with all non-English documents removed is 4.5TB and consists of 1 billion documents. The 2013 corpus subsumes the 2012 corpus and adds other sources from spinn3r, namely mainstream news, forum, arxiv, classified, reviews and meme-tracker. A more important difference, however, is a change in the definitions of the relevance ratings vital and relevant. While in KBA 2012 a document was judged vital if it had citation-worthy content for a given entity, in 2013 it must also be fresh: the content must trigger an edit of the given entity's KB entry.