Changeset - eacd7e88deb3
Merge
Gebrekirstos Gebremeskel - 11 years ago 2014-06-12 03:00:02
destinycome@gmail.com
updated
1 file changed with 158 insertions and 34 deletions:
mypaper-final.tex
 
@@ -128,8 +128,13 @@ relevance of the document-entity pair under consideration.
 
We analyze how these factors (and the design choices made in their
 
corresponding system components) affect filtering performance.
 
We identify and characterize the relevant documents that do not pass
 
the filtering stage by examining their contents. This way, we

estimate a practical upper-bound of recall for entity-centric stream
 
filtering.  
 
 
\end{abstract}
 
@@ -144,37 +149,106 @@ filtering.
 
\keywords{Information Filtering; Cumulative Citation Recommendation; knowledge maintenance; Stream Filtering;  emerging entities} % NOT required for Proceedings
 
 
\section{Introduction}
 
In 2012, the Text REtrieval Conference (TREC) introduced the Knowledge Base Acceleration (KBA) track to help Knowledge Base (KB) curators. The track addresses a critical need of KB curators: given KB (Wikipedia or Twitter) entities, filter a stream for relevant documents, rank the retrieved documents, and recommend them to the KB curators. The track is crucial and timely because the number of entities in a KB on one hand, and the huge amount of new content on the Web on the other, make manual KB maintenance challenging. TREC KBA's main task, Cumulative Citation Recommendation (CCR), aims at filtering a stream to identify citation-worthy documents, rank them, and recommend them to KB curators.
 
  
 
   
 
 
 Filtering is a crucial step in CCR for selecting a potentially
 
 relevant set of working documents for subsequent steps of the
 
 pipeline out of a big collection of stream documents. The TREC
 
 Filtering track defines filtering as a ``system that sifts through
 
 stream of incoming information to find documents that are relevant to
 
 a set of user needs represented by profiles''
 
 \cite{robertson2002trec}. 
 
In the specific setting of CCR, these profiles are
 
represented by persistent KB entities (Wikipedia pages or Twitter
 
users, in the TREC scenario).
 
 
 
 
 TREC-KBA 2013's participants applied Filtering as a first step  to
 
 produce a smaller working set for subsequent experiments. As the
 
 subsequent steps of the pipeline use the output of the filter, the
 
 final performance of the system is dependent on this step.  The
 
 filtering step particularly determines the recall of the overall
 
 system. However, all 141 runs submitted by 13 teams suffered from
 
 poor recall, as pointed out in the track's overview paper 
 
 \cite{frank2013stream}. 
 

	
 
The most important components of the filtering step are cleansing
 
(referring to pre-processing noisy web text into a canonical ``clean''
 
text format), and
 
entity profiling (creating a representation of the entity that can be
 
matched against the stream documents). For each component, different

choices can be made. In the specific case of TREC KBA, the organisers

provide two different versions of the corpus: one that is already cleansed,

and one that contains the raw data as originally collected. 

Also, different

approaches use different entity profiles for filtering, varying from

using just the KB entities' canonical names to looking up DBpedia name

variants, from using the bold words in the first paragraph of the Wikipedia

entities' page to using anchor texts from other Wikipedia pages, and from

using the exact name as given to using WordNet-derived synonyms. The type of entities
 
(Wikipedia or Twitter) and the category of documents in which they
 
occur (news, blogs, or tweets) cause further variations.
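
As an illustration of these profile choices, the sketch below assembles an entity profile as a set of surface strings; the argument names and example values are ours for illustration only and are not part of the TREC KBA distribution.

\begin{verbatim}
# Minimal sketch (illustration only): an entity profile as a set of
# lowercased surface strings drawn from the sources discussed above.
def build_profile(canonical_name,
                  dbpedia_variants=(),
                  bold_terms=(),
                  anchor_texts=(),
                  wordnet_synonyms=()):
    profile = {canonical_name.lower()}
    for source in (dbpedia_variants, bold_terms,
                   anchor_texts, wordnet_synonyms):
        profile.update(s.lower() for s in source)
    return profile

# Illustrative values only:
profile = build_profile("Jane Q. Public",
                        dbpedia_variants=["Jane Public"])
\end{verbatim}

The simplest profile contains only the canonical name; each further source widens the set of strings the filter can match on, which is exactly the trade-off studied in this paper.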
 
% A variety of approaches are employed  to solve the CCR
 
% challenge. Each participant reports the steps of the pipeline and the
 
% final results in comparison to other systems.  A typical TREC KBA
 
% poster presentation or talk explains the system pipeline and reports
 
% the final results. The systems may employ similar (even the same)
 
% steps  but the choices they make at every step are usually
 
% different. 
 
In such a situation, it becomes hard to identify the factors that
 
result in improved performance. There is a lack of insight across

different approaches, which makes it hard to tell whether the

improvement in performance of a particular approach is due to

preprocessing, filtering, classification, scoring, or any of the

other sub-components of the pipeline.
 
 
 
 
 
 
 
  
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
In this paper, we therefore fix the subsequent steps of the pipeline,
 
and zoom in on \emph{only} the filtering step, conducting an in-depth analysis of its
 
main components.  In particular, we study the effect of cleansing,
 
entity profiling, type of entity filtered for (Wikipedia or Twitter), and
 
document category (social, news, etc) on the filtering components'
 
performance. The main contributions of the

paper are an in-depth analysis of the factors that affect entity-based

stream filtering, the identification of optimal entity profiles that do not

compromise precision, a description and classification of relevant documents

that are not amenable to filtering, and an estimate of the upper-bound

of recall for entity-based filtering.
 

	
 
The rest of the paper is organized as follows: 
 

	
 
\textbf{TODO!!}
 

	
 
 \section{Data Description}
 
 
 
We base this analysis on the TREC-KBA 2013 dataset
 
\footnote{http://trec-kba.org/trec-kba-2013.shtml}. This dataset
 
consists of three main parts: a time-stamped stream corpus, a set of
 
KB entities to be curated, and a set of relevance judgments. A CCR
 
system now has to identify for each KB entity which documents in the
 
stream corpus are to be considered by the human curator.
 

	
 
\subsection{Stream corpus} The stream corpus comes in two versions:
 
raw and cleansed. The raw and cleansed versions are 6.45TB and 4.5TB
 
respectively,  after xz-compression and GPG encryption. The raw data
 
is a  dump of  raw HTML pages. The cleansed version is the raw data
 
after its HTML tags are stripped off and only English documents
 
identified with Chromium Compact Language Detector
 
\footnote{https://code.google.com/p/chromium-compact-language-detector/}
 
are included.  The stream corpus is organized in hourly folders each
 
of which contains many chunk files. Each chunk file contains between

a few hundred and a few hundred thousand serialized thrift objects. One

thrift object is one document. A document could be a blog article, a

news article, or a social media post (including tweets).  The stream
 
corpus comes from three sources: TREC KBA 2012 (social, news and
 
linking) \footnote{http://trec-kba.org/kba-stream-corpus-2012.shtml},
 
arxiv\footnote{http://arxiv.org/}, and
 
spinn3r\footnote{http://spinn3r.com/}.
 
Table \ref{tab:streams} shows the sources, the number of hourly
 
directories, and the number of chunk files.
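
The corpus layout can be illustrated with a short sketch that walks the hourly folders and counts chunk files; it assumes only the directory structure described above (hourly folders of xz-compressed, GPG-encrypted chunk files) and leaves decryption and thrift deserialization to the standard streamcorpus tooling.

\begin{verbatim}
import os
from collections import Counter

def count_chunks(corpus_root):
    # Sketch: count chunk files per hourly folder.  Assumes the layout
    # <corpus_root>/<hour>/<chunk files>; decrypting and decoding the
    # serialized StreamItems inside each chunk is out of scope here.
    chunks_per_hour = Counter()
    for hour_dir in sorted(os.listdir(corpus_root)):
        hour_path = os.path.join(corpus_root, hour_dir)
        if os.path.isdir(hour_path):
            chunks_per_hour[hour_dir] = len(os.listdir(hour_path))
    return chunks_per_hour
\end{verbatim}
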
 
\begin{table}
 
\caption{Documents and chunk files per sub-stream}
 
\begin{center}
 
 
 
 \begin{tabular}{l*{4}{l}l}
 
 documents     &   chunk files    &    Sub-stream \\
 
\hline
 
 
@@ -200,8 +274,18 @@ Table \ref{tab:streams}   shows the sources, the number of hourly directories, a
 
 
\subsection{Relevance judgments}
 
 
 
 
TREC-KBA provided relevance judgments for training and
 
testing. Relevance judgments are given to document-entity

pairs. Documents with citation-worthy content for a given entity are

annotated as \emph{vital}, while documents with tangentially

relevant content, documents that lack freshness, or documents with content

that can be useful for an initial KB dossier are annotated as
 
\emph{relevant}. Documents with no relevant content are labeled
 
\emph{neutral} and spam is labeled as \emph{garbage}. 
 
%The inter-annotator agreement on vital in 2012 was 70\% while in 2013 it
 
%is 76\%. This is due to the more refined definition of vital and the
 
%distinction made between vital and relevant.
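
For the analysis in this paper the judgments need to be machine-readable; a minimal sketch of loading them as (entity, document) pairs is given below. The column layout and the numeric coding of the four levels are assumptions made for illustration, not the official format specification.

\begin{verbatim}
# Sketch only: assumed tab-separated layout and an assumed numeric
# coding of the four annotation levels.
RATING = {"garbage": -1, "neutral": 0, "relevant": 1, "vital": 2}

def load_judgments(path):
    judgments = {}   # (entity, doc_id) -> rating
    with open(path) as f:
        for line in f:
            if line.startswith("#") or not line.strip():
                continue
            doc_id, entity, label = line.split("\t")[:3]
            judgments[(entity, doc_id)] = RATING[label.strip().lower()]
    return judgments
\end{verbatim}
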
 

	
 
\subsection{Breakdown of results by document source category}
 
 
%The results of the different entity profiles on the raw corpus are
 
@@ -230,12 +314,35 @@ grouped as others.
 
 
 \section{Stream Filtering}
 
 
 
 
 
 
 The TREC Filtering track defines filtering as a ``system that sifts
 
 through stream of incoming information to find documents that are
 
 relevant to a set of user needs represented by profiles''
 
 \cite{robertson2002trec}. Its information needs are long-term and are

 represented by persistent profiles, unlike the traditional search system

 whose ad hoc information need is represented by a search

 query. Adaptive Filtering, one task of the filtering track, starts

 with a persistent user profile and a very small number of positive

 examples. A filtering system can improve its user profiles with

 feedback obtained from interaction with users, and thereby improve

 its performance. The filtering stage of entity-based stream

 filtering and ranking can be likened to the adaptive filtering task

 of the filtering track. The persistent information needs are the KB

 entities, and the relevance judgments are the small number of positive

 examples.
 
 
 
 
 Stream filtering: given a stream of documents consisting of news items, blogs

 and social media posts on one hand and KB entities on the other, filter

 the stream for potentially relevant documents such that the

 relevance classifier (ranker) achieves maximum performance.

 Specifically, we conduct an in-depth analysis of the choices

 and factors affecting the cleansing step, the entity-profile

 construction, the document category of the stream items, and the type

 of entities (Wikipedia or Twitter), and their impact on the

 overall performance of the pipeline. Finally, we conduct a manual

 examination of the vital documents that defy filtering. We strive to

 answer the following research questions:
 
 \begin{enumerate}
 
 \item Does cleansing affect filtering and subsequent performance?
 
 \item What is the most effective way of representing an entity profile?
 
  \item Is filtering different for Wikipedia and Twitter entities?
 
@@ -243,12 +350,29 @@ grouped as others.
 
 \item Does a gain in recall at the filtering step translate to a gain in F-measure at the end of the pipeline?
 
 \item What are the vital (or relevant) documents that are not retrievable by a system?
 
\end{enumerate}
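
A sketch of the filtering step under the simplest profile representation (case-insensitive substring match of any profile string against the document text) is given below; the profiles could be built as in the earlier profile sketch, and the attribute names on the document objects are assumptions for illustration.

\begin{verbatim}
def filter_stream(documents, profiles):
    # documents: iterable of objects with .doc_id and .text (assumed);
    # profiles: dict mapping entity -> set of lowercased surface strings.
    # Yields (entity, document) pairs that pass the filter.
    for doc in documents:
        text = doc.text.lower()
        for entity, surface_forms in profiles.items():
            if any(form in text for form in surface_forms):
                yield entity, doc
\end{verbatim}
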
 
 
 
 
 

	
 
The TREC filtering and the filtering as part of the entity-centric

stream filtering and ranking pipeline have different purposes. The

TREC filtering track's goal is the binary classification of documents:

for each incoming document, it decides whether the document is

relevant to a given profile or not. In our case, the documents have

graded relevance and the goal of the filtering stage is to pass on as

many potentially relevant documents as possible, but as few

irrelevant documents as possible, so as not to obfuscate the later

stages of the pipeline. Filtering as part of the pipeline needs a

delicate balance between retrieving relevant documents and excluding

irrelevant documents. Because of this, filtering in this case can only

be studied by binding it to the later stages of the entity-centric

pipeline. This bond influences how we do evaluation.
 

	
 
To achieve this,  we use recall percentages in the filtering stage for
 
the different choices of entity profiles. However, we use the overall
 
performance to select the best entity profiles. To generate the overall
 
pipeline performance we use the official TREC KBA evaluation metric
 
and scripts \cite{frank2013stream} to report max-F, the maximum
 
F-score obtained over all relevance cut-offs.
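
For concreteness, a simplified sketch of the max-F computation over confidence cut-offs is shown below; it mirrors the measure reported by the official scripts but is not a substitute for them.

\begin{verbatim}
def max_f(scores, relevant):
    # scores: dict (entity, doc_id) -> system confidence;
    # relevant: set of (entity, doc_id) pairs judged relevant/vital.
    # Returns the maximum F1 over all confidence cut-offs.
    best = 0.0
    for cutoff in sorted(set(scores.values())):
        kept = {pair for pair, s in scores.items() if s >= cutoff}
        tp = len(kept & relevant)
        if not kept or not relevant or tp == 0:
            continue
        p, r = tp / len(kept), tp / len(relevant)
        best = max(best, 2 * p * r / (p + r))
    return best
\end{verbatim}
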
 

	
 
\section{Literature Review}
 
There has been a great deal of interest of late in entity-based filtering and ranking. One manifestation of that is the introduction of TREC KBA in 2012. Following that, a number of research works have addressed the topic \cite{frank2012building, ceccarelli2013learning, taneva2013gem, wang2013bit, balog2013multi}. These works are based on the KBA 2012 task and dataset and address the whole problem of entity filtering and ranking. TREC KBA continued in 2013, but the task underwent some changes. The main changes between 2012 and 2013 are in the number of entities, the type of entities, the corpus, and the relevance rankings.
 