Changeset - fca12371b032
[Not reviewed]
0 1 0
Arjen de Vries (arjen) - 11 years ago 2014-06-11 12:00:52
arjen.de.vries@cwi.nl
abstract - one pass
1 file changed with 23 insertions and 3 deletions:
0 comments (0 inline, 0 general)
mypaper-final.tex
 
@@ -9,195 +9,215 @@
 
%       1) The Permission Statement
 
%       2) The Conference (location) information
 
%       3) The Copyright Line with ACM data
 
%       4) Page numbering
 
% ---------------------------------------------------------------------------------------------------------------
 
% It is an example which *does* use the .bib file (from which the .bbl file
 
% is produced).
 
% REMEMBER HOWEVER: After having produced the .bbl file,
 
% and prior to final submission,
 
% you need to 'insert'  your .bbl file into your source .tex file so as to provide
 
% ONE 'self-contained' source file.
 
%
 
% Questions regarding SIGS should be sent to
 
% Adrienne Griscti ---> griscti@acm.org
 
%
 
% Questions/suggestions regarding the guidelines, .tex and .cls files, etc. to
 
% Gerald Murray ---> murray@hq.acm.org
 
%
 
% For tracking purposes - this is V3.1SP - APRIL 2009
 
 
\documentclass{acm_proc_article-sp}
 
\usepackage{booktabs}
 
\usepackage{multirow}
 
\usepackage{todonotes}
 
 
\begin{document}
 
 
\title{Entity-Centric Stream Filtering and Ranking: Filtering and Unfilterable Documents}
 
%
 
% You need the command \numberofauthors to handle the 'placement
 
% and alignment' of the authors beneath the title.
 
%
 
% For aesthetic reasons, we recommend 'three authors at a time'
 
% i.e. three 'name/affiliation blocks' be placed beneath the title.
 
%
 
% NOTE: You are NOT restricted in how many 'rows' of
 
% "name/affiliations" may appear. We just ask that you restrict
 
% the number of 'columns' to three.
 
%
 
% Because of the available 'opening page real-estate'
 
% we ask you to refrain from putting more than six authors
 
% (two rows with three columns) beneath the article title.
 
% More than six makes the first-page appear very cluttered indeed.
 
%
 
% Use the \alignauthor commands to handle the names
 
% and affiliations for an 'aesthetic maximum' of six authors.
 
% Add names, affiliations, addresses for
 
% the seventh etc. author(s) as the argument for the
 
% \additionalauthors command.
 
% These 'additional authors' will be output/set for you
 
% without further effort on your part as the last section in
 
% the body of your article BEFORE References or any Appendices.
 
 
\numberofauthors{8} %  in this sample file, there are a *total*
 
% of EIGHT authors. SIX appear on the 'first-page' (for formatting
 
% reasons) and the remaining two appear in the \additionalauthors section.
 
%
 
% \author{
 
% % You can go ahead and credit any number of authors here,
 
% % e.g. one 'row of three' or two rows (consisting of one row of three
 
% % and a second row of one, two or three).
 
% %
 
% % The command \alignauthor (no curly braces needed) should
 
% % precede each author name, affiliation/snail-mail address and
 
% % e-mail address. Additionally, tag each line of
 
% % affiliation/address with \affaddr, and tag the
 
% % e-mail address with \email.
 
% %
 
% % 1st. author
 
% \alignauthor
 
% Ben Trovato\titlenote{Dr.~Trovato insisted his name be first.}\\
 
%        \affaddr{Institute for Clarity in Documentation}\\
 
%        \affaddr{1932 Wallamaloo Lane}\\
 
%        \affaddr{Wallamaloo, New Zealand}\\
 
%        \email{trovato@corporation.com}
 
% % 2nd. author
 
% \alignauthor
 
% G.K.M. Tobin\titlenote{The secretary disavows
 
% any knowledge of this author's actions.}\\
 
%        \affaddr{Institute for Clarity in Documentation}\\
 
%        \affaddr{P.O. Box 1212}\\
 
%        \affaddr{Dublin, Ohio 43017-6221}\\
 
%        \email{webmaster@marysville-ohio.com}
 
% }
 
% There's nothing stopping you putting the seventh, eighth, etc.
 
% author on the opening page (as the 'third row') but we ask,
 
% for aesthetic reasons that you place these 'additional authors'
 
% in the \additional authors block, viz.
 
% Just remember to make sure that the TOTAL number of authors
 
% is the number that will appear on the first page PLUS the
 
% number that will appear in the \additionalauthors section.
 
 
\maketitle
 
\begin{abstract}
 
 
 
 
Entity-centric information processing requires complex pipelines
 
involving both natural language processing and information retrieval
 
components. In entity-centric stream filtering and ranking, the
 
pipeline involves four stages: filtering, classification,
 
ranking (scoring), and evaluation. Filtering is an initial step that
extracts a working set of documents from the web-scale corpus, aiming
for a smaller collection that is more manageable in the
 
subsequent stages of the pipeline. This filtering step therefore
 
determines the maximally attainable performance of the overall system.
 
 
This paper investigates the filtering stage in isolation, in the context
 
of a cumulative citation recommendation problem. We conduct an
 
in-depth analysis of the main factors that determine filtering
 
effectiveness: cleansing noisy web data, methods to create entity
 
profiles, the types of entities of interest, the document category, and the
 
relevance level of the entity-document pair under consideration.
 
We analyze how these factors (and the design choices made in their
 
corresponding system components) affect filtering performance.
 
We identify and characterize the relevant documents that do not pass the
filtering stage, and conduct a manual examination of their
contents. The paper classifies the ways in which unfilterable documents
mention their target entities and estimates the practical upper bound of
recall in entity-based filtering.
 
 
\end{abstract}
 
% A category with the (minimum) three required fields
 
\category{H.4}{Information Filtering}{Miscellaneous}
 
 
%A category including the fourth, optional field follows...
 
%\category{D.2.8}{Software Engineering}{Metrics}[complexity measures, performance measures]
 
 
\terms{Theory}
 
 
\keywords{Information Filtering; Cumulative Citation Recommendation; Knowledge Maintenance; Stream Filtering; Emerging Entities} % NOT required for Proceedings
 
 
\section{Introduction}
 
 In 2012, the Text REtrieval Conference (TREC) introduced the Knowledge Base Acceleration (KBA) track to help Knowledge Base (KB) curators. The track addresses a critical need of KB curators: given KB (Wikipedia or Twitter) entities, filter a stream for relevant documents, rank the retrieved documents, and recommend them to the curators. The track is timely because the number of entities in a KB on the one hand, and the huge amount of new content on the Web on the other, make manual KB maintenance challenging. TREC KBA's main task, Cumulative Citation Recommendation (CCR), aims at filtering a stream to identify citation-worthy documents, ranking them, and recommending them to KB curators.
 
  
 
   
 
 Filtering is a crucial step in CCR: out of a large collection of stream documents, it selects a potentially relevant working set for the subsequent steps of the pipeline. The TREC Filtering track defines filtering as a ``system that sifts through stream of incoming information to find documents that are relevant to a set of user needs represented by profiles'' \cite{robertson2002trec}. Adaptive filtering, one task of the filtering track, starts with a persistent user profile and a very small number of positive examples. The filtering step used in CCR systems fits under adaptive filtering: the profiles are represented by persistent KB (Wikipedia or Twitter) entities and there is a small set of relevance judgments representing positive examples.
 
 
 
 TREC-KBA 2013's participants applied filtering as a first step to produce a smaller working set for subsequent experiments. As the subsequent steps of the pipeline use the output of the filter, the final performance of the system depends on this important step. The filtering step particularly determines the recall of the overall system. However, all submitted systems suffered from poor recall \cite{frank2013stream}. The most important components of the filtering step are cleansing and entity profiling, and each component involves choices. For example, there are two versions of the corpus: cleansed and raw. Different approaches used different entity profiles for filtering, varying from the KB entities' canonical names, to DBpedia name variants, to bold words in the first paragraph of the Wikipedia entities' profiles and anchor texts from other Wikipedia pages, to exact names and WordNet synonyms. Moreover, the type of entities (Wikipedia or Twitter) and the category of documents (news, blogs, tweets) can influence filtering.
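As a minimal illustration of what such an entity profile amounts to in practice, the Python sketch below builds a set of surface forms for an entity and accepts a document if any surface form occurs in its text. This is a simplification for exposition only, not the exact profile construction of any participating system; the entity name and variants are hypothetical.

\begin{verbatim}
# Minimal sketch: an entity profile as a set of lower-cased surface
# forms, and a filter that accepts a document if any form occurs in
# its text. The entity and its variants are hypothetical examples.

def build_profile(canonical_name, variants=()):
    """Collect surface forms: the canonical name plus name variants."""
    forms = {canonical_name.lower()}
    forms.update(v.lower() for v in variants)
    return forms

def passes_filter(document_text, profile):
    """True if any surface form of the entity occurs in the text."""
    text = document_text.lower()
    return any(form in text for form in profile)

profile = build_profile("Jane Q. Example",
                        variants=["J. Q. Example", "Jane Example"])
print(passes_filter("... Jane Example announced today ...", profile))
\end{verbatim}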
 
 
 
 
 A variety of approaches have been employed to solve the CCR challenge. Each participant reports the steps of the pipeline and the final results in comparison to other systems; a typical TREC KBA poster presentation or talk explains the system pipeline and reports the final results. The systems may employ similar (even the same) steps, but the choices they make at every step are usually different. In such a situation it becomes hard to identify the factors that result in improved performance, and there is a lack of insight across the different approaches. This makes it hard to know whether the improvement in performance of a particular approach is due to preprocessing, filtering, classification, scoring, or any of the other sub-components of the pipeline.
 
 
 
  
 
 
 
 
 
 In this paper, we hold the subsequent steps of the pipeline fixed, zoom in on the filtering step, and conduct an in-depth analysis of its main components. In particular, we study cleansing, different entity profiles, the type of entities (Wikipedia or Twitter), and document categories (social, news, etc.). The main contributions of the paper are an in-depth analysis of the factors that affect entity-based stream filtering, the identification of optimal entity profiles that do not compromise precision, a description and classification of relevant documents that are not amenable to filtering, and an estimate of the upper bound of recall of entity-based filtering.
 
 
 
 The rest of the paper is organized as follows:
 
 
 
 
 
 
 
 
 \section{Data Description}
 
We use the TREC-KBA 2013 dataset\footnote{http://trec-kba.org/trec-kba-2013.shtml}. The dataset consists of a time-stamped stream corpus, a set of KB entities, and a set of relevance judgments.
 
\subsection{Stream corpus} The stream corpus comes in two versions: raw and cleansed. The raw and cleansed versions are 6.45TB and 4.5TB respectively, after xz-compression and GPG encryption. The raw data is a dump of raw HTML pages. The cleansed version is the raw data after HTML tags are stripped and non-English documents removed. The stream corpus is organized in hourly folders, each of which contains many chunk files. Each chunk file contains from hundreds to hundreds of thousands of serialized thrift objects; one thrift object is one document. A document can be a blog article, a news article, or a social media post (including tweets). The stream corpus comes from three sources: TREC KBA 2012 (social, news and linking)\footnote{http://trec-kba.org/kba-stream-corpus-2013.shtml}, arxiv\footnote{http://arxiv.org/}, and spinn3r\footnote{http://spinn3r.com/}. Table \ref{tab:streams} shows, for each sub-stream, the number of documents and the number of chunk files.
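To give an impression of the corpus layout, the following Python sketch iterates over the hourly directories and their chunk files. It assumes the chunks have already been GPG-decrypted and xz-decompressed, and that the \texttt{streamcorpus} Python package distributed with TREC KBA is used to deserialize the thrift objects; the exact reader API is an assumption and may differ.

\begin{verbatim}
# Sketch of one pass over the corpus, assuming decrypted/decompressed
# chunk files and the streamcorpus Python package (API assumed here).
import os
import streamcorpus

def iter_stream_items(corpus_root):
    """Yield one thrift StreamItem (i.e. one document) at a time."""
    for hour_dir in sorted(os.listdir(corpus_root)):
        hour_path = os.path.join(corpus_root, hour_dir)
        if not os.path.isdir(hour_path):
            continue
        for chunk_name in sorted(os.listdir(hour_path)):
            chunk_path = os.path.join(hour_path, chunk_name)
            # Each chunk file holds many serialized StreamItems.
            for item in streamcorpus.Chunk(path=chunk_path):
                yield item
\end{verbatim}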
 
 
\begin{table*}
 
\caption{Number of documents and chunk files per sub-stream}
 
\begin{center}
 
 
 \begin{tabular}{rrl}
 Documents     &   Chunk files    &    Sub-stream \\
\hline
 
 
126,952         &11,851         &arxiv \\
 
394,381,405      &   688,974        & social \\
 
134,933,117       &  280,658       &  news \\
 
5,448,875         &12,946         &linking \\
 
57,391,714         &164,160      &   MAINSTREAM\_NEWS (spinn3r)\\
 
36,559,578         &85,769      &   FORUM (spinn3r)\\
 
14,755,278         &36,272     &    CLASSIFIED (spinn3r)\\
 
52,412         &9,499         &REVIEW (spinn3r)\\
 
7,637         &5,168         &MEMETRACKER (spinn3r)\\
 
\hline
1,040,520,595   &      2,222,554 &        Total\\
 
 
\end{tabular}
 
\end{center}
 
\label{tab:streams}
 
\end{table*}
 
 
\subsection{KB entities}
 
 
 The KB entities consist of 20 Twitter entities and 121 Wikipedia entities. The entities were deliberately selected to be sparse. They include 71 people, 1 organization, and 24 facilities.
 
\subsection{Relevance judgments}
 
 
TREC-KBA provided relevance judgments for training and testing. Relevance judgments are given to document-entity pairs. Documents with citation-worthy content for a given entity are annotated as \emph{vital}, while documents with tangentially relevant content, or documents that lack freshness but whose content can be useful for an initial KB dossier, are annotated as \emph{relevant}. Documents with no relevant content are labeled \emph{neutral} and spam is labeled \emph{garbage}. The inter-annotator agreement on vital was 70\% in 2012 and 76\% in 2013. This increase is due to the more refined definition of vital and the distinction made between vital and relevant.
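When we speak of filtering recall in the remainder of the paper, the four annotation levels are collapsed into a binary target. One plausible operationalization, consistent with the research questions below (vital and relevant as positives, neutral and garbage as negatives), is sketched in Python; the label names follow the description above, while the official judgment files encode these levels differently.

\begin{verbatim}
# Collapse the four annotation levels into a binary target for
# filtering recall: vital/relevant are positive, neutral/garbage
# are negative. Level names follow the description in the text.
POSITIVE_LEVELS = {"vital", "relevant"}

def is_positive(level):
    """True for document-entity pairs judged vital or relevant."""
    return level.lower() in POSITIVE_LEVELS
\end{verbatim}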
 
 
 
 
 \subsection{Stream Filtering}
 
 Given a stream of news items, blogs, and social media posts on the one hand, and KB entities (Wikipedia, Twitter) on the other, we study the factors and choices that affect filtering performance. Specifically, we conduct an in-depth analysis of the cleansing step, the entity-profile construction, the document category of the stream items, and the type of entities (Wikipedia or Twitter). We also study the impact of these choices on classification performance. Finally, we conduct a manual examination of the relevant documents that defy filtering. We strive to answer the following research questions:
 
 
 
 \begin{enumerate}
 
  \item Does cleansing affect filtering and subsequent performance?
 
  \item What is the most effective way of representing entity profiles?
 
  \item Is filtering different for Wikipedia and Twitter entities?
 
  \item Are some types of documents easily filterable and others not?
 
  \item Does a gain in recall at the filtering step translate to a gain in F-measure at the end of the pipeline?

  \item What are the vital (relevant) documents that are not retrievable by a system?

  \item Are there vital (relevant) documents that are not filterable by a reasonable system? (See the recall bound sketched right after this list.)
 
\end{enumerate}
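Questions 5--7 hinge on the observation that a vital or relevant document that does not pass the filter can never be recovered by the later stages of the pipeline. Writing $D_{rel}$ for the set of vital or relevant documents of an entity and $D_{filt}$ for the documents that pass its filter (notation introduced here purely for exposition), the recall attainable by the full pipeline is bounded by
\begin{displaymath}
R_{\max} = \frac{|D_{rel} \cap D_{filt}|}{|D_{rel}|},
\end{displaymath}
which is the practical upper bound of recall in entity-based filtering referred to above.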
 
 
\subsection{Evaluation}
 
The TREC filtering track 
 
 
\subsection{Literature Review}
 
There has been a great deal of interest of late in entity-based filtering and ranking. One manifestation of that is the introduction of TREC KBA in 2012. Following that, there have been a number of studies on the topic \cite{frank2012building, ceccarelli2013learning, taneva2013gem, wang2013bit, balog2013multi}. These works are based on the KBA 2012 task and dataset and address the whole problem of entity filtering and ranking. TREC KBA continued in 2013, but the task underwent some changes. The main changes between 2012 and 2013 are in the number of entities, the type of entities, the corpus, and the relevance ratings.
 
 
The number of entities increased from 29 to 141, and now includes 20 Twitter entities. The TREC KBA 2012 corpus is 1.9TB after xz-compression and has 400M documents. By contrast, the KBA 2013 corpus is 6.45TB after xz-compression and GPG encryption. A version with all non-English documents removed is 4.5TB and consists of 1 billion documents. The 2013 corpus subsumed the 2012 corpus and added arxiv and new spinn3r sub-streams, namely mainstream news, forum, classified, reviews, and memetracker. A more important difference is, however, a change in the definitions of the relevance ratings vital and relevant. While in KBA 2012 a document was judged vital if it had citation-worthy content for a given entity, in 2013 it must also be fresh, that is, its content must trigger an edit of the given entity's KB entry.
 
 
While the tasks of 2012 and 2013 are fundamentally the same, the approaches varied due to the size of the corpus. In 2013, all participants used filtering to reduce the size of the large corpus. They used different ways of filtering: many of them used two or more different name variants from DBpedia, such as labels, names, redirects, birth names, aliases, nicknames, same-as and alternative names \cite{wang2013bit,dietzumass,liu2013related, zhangpris}. Although most of the participants used DBpedia name variants, none of them used all the name variants. A few other participants used bold words in the first paragraph of the Wikipedia entity's profile and anchor texts from other Wikipedia pages \cite{bouvierfiltering, niauniversity}. One participant used a Boolean \emph{and} query built from the tokens of the canonical names \cite{illiotrec2013}.
 
 
All of the studies used filtering as their first step to generate a smaller set of documents, and many systems suffered from poor recall, which strongly affected their overall performance \cite{frank2012building}. Although the systems used different entity profiles to filter the stream and achieved different performance levels, there is no study of the factors and choices that affect the filtering step itself. Of course, filtering has been extensively examined in the TREC Filtering track \cite{robertson2002trec}. However, those studies were isolated in the sense that they were intended to optimize recall. What we have here is a different scenario: documents have relevance ratings, so we want to study filtering in connection to the relevance of documents to the entities, which can be done by coupling filtering to the later stages of the pipeline. To the best of our knowledge this is new, and the TREC KBA problem setting and datasets offer a good opportunity to examine this aspect of filtering.
 