Changeset - ef88c0e1e6e7
Arjen de Vries (arjen) - 11 years ago 2014-06-12 03:54:57
arjen.de.vries@cwi.nl
some latex glitches mainly
1 file changed with 45 insertions and 27 deletions:
mypaper-final.tex
 
@@ -21,24 +21,25 @@
 
% Questions regarding SIGS should be sent to
 
% Adrienne Griscti ---> griscti@acm.org
 
%
 
% Questions/suggestions regarding the guidelines, .tex and .cls files, etc. to
 
% Gerald Murray ---> murray@hq.acm.org
 
%
 
% For tracking purposes - this is V3.1SP - APRIL 2009
 
 
\documentclass{acm_proc_article-sp}
 
\usepackage{booktabs}
 
\usepackage{multirow}
 
\usepackage{todonotes}
 
\usepackage{url}
 
 
\begin{document}
 
 
\title{Entity-Centric Stream Filtering and Ranking: Filtering and Unfilterable Documents}
 
%SUGGESTION:
 
%\title{The Impact of Entity-Centric Stream Filtering on Recall and
 
%  Missed Documents}
 
 
%
 
% You need the command \numberofauthors to handle the 'placement
 
% and alignment' of the authors beneath the title.
 
@@ -206,46 +207,46 @@ performance. The main contribution of the
 
paper is an in-depth analysis of the factors that affect entity-based
stream filtering: we identify optimal entity profiles that do not
compromise precision, describe and classify relevant documents
that are not amenable to filtering, and estimate the upper bound
of recall for entity-based filtering.
 

	
 
The rest of the paper is organized as follows: 
 

	
 
\textbf{TODO!!}
 

	
 
 \section{Data Description}
 
We base this analysis on the TREC-KBA 2013 dataset%
 
\footnote{http://http://trec-kba.org/trec-kba-2013.shtml}
 
\footnote{\url{http://trec-kba.org/trec-kba-2013.shtml}}
 
that consists of three main parts: a time-stamped stream corpus, a set of
 
KB entities to be curated, and a set of relevance judgments. A CCR
 
system then has to identify, for each KB entity, which documents in the
stream corpus are to be considered by the human curator.
 

	
 
\subsection{Stream corpus} The stream corpus comes in two versions:
 
raw and cleansed. The raw and cleansed versions are 6.45TB and 4.5TB,
respectively, after xz-compression and GPG encryption. The raw data
is a dump of raw HTML pages. The cleansed version is the raw data
after its HTML tags are stripped off, and only English documents
 
identified with Chromium Compact Language Detector
 
\footnote{https://code.google.com/p/chromium-compact-language-detector/}
 
\footnote{\url{https://code.google.com/p/chromium-compact-language-detector/}}
 
are included. The stream corpus is organized in hourly folders, each
of which contains many chunk files. Each chunk file contains from
hundreds to hundreds of thousands of serialized thrift objects. One
thrift object is one document. A document can be a blog article, a
news article, or a social media post (including tweets). The stream
 
corpus comes from three sources: TREC KBA 2012 (social, news and
 
linking) \footnote{http://trec-kba.org/kba-stream-corpus-2012.shtml},
 
arxiv\footnote{http://arxiv.org/}, and
 
spinn3r\footnote{http://spinn3r.com/}.
 
linking) \footnote{\url{http://trec-kba.org/kba-stream-corpus-2012.shtml}},
 
arxiv\footnote{\url{http://arxiv.org/}}, and
 
spinn3r\footnote{\url{http://spinn3r.com/}}.
 
Table \ref{tab:streams} shows the sources, the number of hourly
 
directories, and the number of chunk files.
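To make the corpus organization concrete, the following is a minimal sketch of iterating over one hourly folder of chunk files. It assumes the chunk files have already been GPG-decrypted and that the \texttt{streamcorpus} Python package used for TREC KBA data is installed; the folder name and the exact field names are illustrative assumptions rather than verified API details.
\begin{verbatim}
import os
from streamcorpus import Chunk   # TREC KBA chunk reader (assumed installed)

hour_dir = '2012-10-05-14'       # one hourly folder (illustrative path)

for chunk_name in sorted(os.listdir(hour_dir)):
    path = os.path.join(hour_dir, chunk_name)
    # each chunk file holds many serialized thrift objects, one per document
    for item in Chunk(path=path):
        text = item.body.clean_visible   # assumed empty unless cleansed English text exists
        if text:
            print(item.stream_id, item.source, len(text))
\end{verbatim}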
 
\begin{table}
 
\caption{Retrieved documents from different sources}
 
\begin{center}
 
 
 \begin{tabular}{rrl}
 Documents & Chunk files & Sub-stream \\
 
\hline
 
 
126,952         &11,851         &arxiv \\
 
394,381,405      &   688,974        & social \\
 
@@ -655,92 +656,109 @@ Twitter entities.  Wikipedia entities' canonical representation
 
achieves a recall of 70\%, while canonical partial achieves a recall of 86.1\%. This is an
 
increase in recall of 16.1\%. By contrast, the increase in recall of
 
name-variant partial over name-variant is 8.3\%.
 
%This high increase in recall when moving from canonical names to their
 
%partial names, in comparison to the lower increase when moving from
 
%all name variants to their partial names can be explained by
 
%saturation: documents have already been extracted by the different
 
%name variants and thus using their partial names do not bring in many
 
%new relevant documents.
 
For Wikipedia entities, canonical
 
partial achieves better recall than name-variant in both the cleansed and
 
the raw corpus.  %In the raw extraction, the difference is about 3.7.
 
In Twitter entities, however, it is different. Both canonical and
 
their partials perform the same and the recall is very low. Canonical
 
In Twitter entities, recall of canonical matching is very low.%
 
\footnote{Canonical
 
and canonical partial are the same for Twitter entities because they
are one-word strings. For example, in \url{https://twitter.com/roryscovel},
 
``roryscovel`` is the canonical name and its partial is also the same.
 
``roryscovel'' is the canonical name and its partial is identical.}
 
%The low recall is because the canonical names of Twitter entities are
 
%not really names; they are usually arbitrarily created user names. It
 
%shows that  documents  refer to them by their display names, rarely
 
%by their user name, which is reflected in the name-variant recall
 
%(67.9\%). The use of name-variant partial increases the recall to
 
%88.2\%.
 
 
 
 
The tables in \ref{tab:name} and \ref{tab:source-delta} show recall for Wikipedia entities are higher than for Twitter. Generally, at both aggregate and document category levels, we observe that recall increases as we move from canonicals to canonical partial, to name-variant, and to name-variant partial. The only case where this does not hold is in the transition from Wikipedia's canonical partial to name-variant. At the aggregate level(as can be inferred from Table \ref{tab:name}), the difference in performance between  canonical  and name-variant partial is 31.9\% on all entities, 20.7\% on Wikipedia entities, and 79.5\% on Twitter entities. This is a significant performance difference. 
 
 
Tables \ref{tab:name} and \ref{tab:source-delta} show a higher recall
 
for Wikipedia than for Twitter entities. Generally, at both
 
aggregate and document category levels, we observe that recall
 
increases as we move from canonicals to canonical partial, to
 
name-variant, and to name-variant partial. The only case where this
 
does not hold is in the transition from Wikipedia's canonical partial
 
to name-variant. At the aggregate level (as can be inferred from Table
 
\ref{tab:name}), the difference in performance between  canonical  and
 
name-variant partial is 31.9\% on all entities, 20.7\% on Wikipedia
 
entities, and 79.5\% on Twitter entities. 
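To make the four profile types concrete, the sketch below shows one way such profiles could be constructed and matched against document text. The lower-cased substring matching and the token-based notion of ``partial'' names are our own simplifying assumptions; the example reuses the Twitter entity mentioned above.
\begin{verbatim}
def build_profile(canonical, variants, use_variants=False, partial=False):
    names = {canonical} | (set(variants) if use_variants else set())
    if partial:
        # "partial" profiles add every individual token of every name
        names |= {tok for name in names for tok in name.split()}
    return {n.lower() for n in names}

def matches(profile, doc_text):
    text = doc_text.lower()
    return any(name in text for name in profile)

canonical = 'roryscovel'            # Twitter user name (canonical)
variants  = ['Rory Scovel']         # display name as a name variant

profiles = {
  'canonical':            build_profile(canonical, variants),
  'canonical partial':    build_profile(canonical, variants, partial=True),
  'name-variant':         build_profile(canonical, variants, use_variants=True),
  'name-variant partial': build_profile(canonical, variants,
                                        use_variants=True, partial=True),
}
doc = 'Comedian Rory Scovel appeared on the show last night.'
hits = {p: matches(prof, doc) for p, prof in profiles.items()}
# only the two name-variant profiles match this document
\end{verbatim}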
 

	
 
Section \ref{sec:analysis} discusses the most plausible explanations for these findings.
 
%% TODO: PERHAPS SUMMARY OF DISCUSSION HERE
 

	
 
 
\section{Impact on Classification}
 
  In the overall experimental setup, classification, ranking,  and evaluationn are kept constant. Following \cite{balog2013multi} settings, we use WEKA's\footnote{http://www.cs.waikato.ac.nz/∼ml/weka/} Classification Random Forest. However, we use fewer numbers of features which we found to be more effective. We determined the effectiveness of the features by running the classification algorithm using the fewer features we implemented and their features. Our feature implementations achieved better results.  The total numbers of features we used are 13 and are listed below. 
 
  
 
\paragraph{Google's Cross Lingual Dictionary (GCLD)}
 
In the overall experimental setup, classification, ranking, and
 
evaluation are kept constant. Following \cite{balog2013multi}
 
settings, we use
 
WEKA's\footnote{\url{http://www.cs.waikato.ac.nz/~ml/weka/}} Classification
 
Random Forest. However, we use a smaller number of features, which we
found to be more effective. We determined the effectiveness of the
features by running the classification algorithm with our smaller
feature set and with their original features; our feature
implementations achieved better results. In total, we use 13
features, listed below.
 
  
 
\paragraph*{Google's Cross Lingual Dictionary (GCLD)}
 
 
This is a mapping of strings to Wikipedia concepts and vice versa
 
\cite{spitkovsky2012cross}. 
 
We use the probability with which a string is used as anchor text
for a Wikipedia entity.
 
 
\paragraph{jac} 
 
\paragraph*{jac} 
 
  Jaccard similarity between the document and the entity's Wikipedia page
 
\paragraph{cos} 
 
\paragraph*{cos} 
 
  Cosine similarity between the document and the entity's Wikipedia page
 
\paragraph{kl} 
 
\paragraph*{kl} 
 
  KL-divergence between the document and the entity's Wikipedia page
 
  
 
  \paragraph{PPR}
 
  \paragraph*{PPR}
 
For each entity, we computed a personalized PageRank (PPR) score from
a Wikipedia snapshot and kept the top 100 entities along
with the corresponding scores.
 
 
 
\paragraph{Surface Form (sForm)}
 
\paragraph*{Surface Form (sForm)}
 
For each Wikipedia entity, we gathered DBpedia name variants. These
 
are redirects, labels and names.
 
 
 
\paragraph{Context (contxL, contxR)}
 
\paragraph*{Context (contxL, contxR)}
 
From the WikiLink corpus \cite{singh12:wiki-links}, we collected
 
all left and right contexts (2 sentences left and 2 sentences
 
right) and generated n-grams from unigrams up to four-grams
for each left and right context. 
Finally, we selected the 5 most frequent n-grams for each context.
 
 
\paragraph{FirstPos}
 
\paragraph*{FirstPos}
 
  Term position of the first occurrence of the target entity in the document 
 
  body 
 
\paragraph{LastPos }
 
\paragraph*{LastPos }
 
  Term position of the last occurrence of the target entity in the document body
 
 
\paragraph{LengthBody} Term count of document body
 
\paragraph{LengthAnchor} Term count  of document anchor
 
\paragraph*{LengthBody} Term count of document body
 
\paragraph*{LengthAnchor} Term count of document anchor
 
  
 
\paragraph{FirstPosNorm} 
 
\paragraph*{FirstPosNorm} 
 
  Term position of the first occurrence of the target entity in the document 
 
  body normalised by the document length 
 
\paragraph{MentionsBody }
 
\paragraph*{MentionsBody }
 
  No. of occurrences of the target entity in the  document body
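As an illustration of how several of the features above can be computed for a document-entity pair, the sketch below implements jac, cos, FirstPos, LastPos, LengthBody, MentionsBody and FirstPosNorm. The whitespace tokenisation, the matching on the first entity token, and the use of scikit-learn's RandomForestClassifier as a stand-in for WEKA's Random Forest are illustrative assumptions only, not our exact implementation.
\begin{verbatim}
from collections import Counter
from math import sqrt

def tokens(text):
    return text.lower().split()

def jac(doc, page):                       # Jaccard similarity
    a, b = set(tokens(doc)), set(tokens(page))
    return len(a & b) / len(a | b) if a | b else 0.0

def cos(doc, page):                       # cosine similarity over term counts
    a, b = Counter(tokens(doc)), Counter(tokens(page))
    dot  = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * \
           sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def position_features(doc, entity):
    toks = tokens(doc)
    ent  = tokens(entity)[0]              # match on first entity token (simplification)
    pos  = [i for i, t in enumerate(toks) if t == ent]
    return {
        'FirstPos':     pos[0]  if pos else -1,
        'LastPos':      pos[-1] if pos else -1,
        'LengthBody':   len(toks),
        'MentionsBody': len(pos),
        'FirstPosNorm': pos[0] / len(toks) if pos else -1,
    }

# Classification sketch: X holds the 13 feature values per document-entity
# pair, y the labels derived from the relevance judgments.
# from sklearn.ensemble import RandomForestClassifier
# clf = RandomForestClassifier(n_estimators=100).fit(X, y)
\end{verbatim}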
 
 
 
 
  
 
  The features we use include similarity features such as cosine and Jaccard, document-entity features such as whether the document mentions the entity in its title or body and the frequency of mentions, and related-entity features such as PageRank scores.
 
  Here, we present results showing how the choices of corpus, entity type, and entity profile impact these later stages of the pipeline. In Tables \ref{tab:class-vital} and \ref{tab:class-vital-relevant}, we show the performance in max-F. 
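Max-F here denotes the maximum F-score obtained when sweeping the confidence cutoff over the ranked output, as in the TREC KBA evaluation. The sketch below computes it from (confidence, is-relevant) pairs for a single entity; it is an illustrative simplification of the official scorer, which averages precision and recall over entities before taking the maximum.
\begin{verbatim}
def max_f(scored):
    # scored: list of (confidence, is_relevant) pairs for one entity
    scored = sorted(scored, reverse=True)          # highest confidence first
    total_relevant = sum(rel for _, rel in scored)
    best, tp = 0.0, 0
    for rank, (_, rel) in enumerate(scored, start=1):
        tp += rel
        precision = tp / rank
        recall    = tp / total_relevant if total_relevant else 0.0
        if precision + recall:
            best = max(best, 2 * precision * recall / (precision + recall))
    return best

print(max_f([(0.9, 1), (0.8, 0), (0.7, 1), (0.4, 0)]))   # 0.8
\end{verbatim}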
 
\begin{table*}
 
\caption{Vital performance under different name variants (upper part from cleansed, lower part from raw corpus)}
 
\begin{center}
 
\begin{tabular}{ll@{\quad}lllllll}
 
\hline
 
@@ -889,25 +907,25 @@ In vital-relevant category (Table \ref{tab:class-vital-relevant}), the performan
 
% Jasper Schneider &&no mention, talks about rural development of which he is a director \\
 
% urbren00 && No content\\
 
% \hline
 
% \end{tabular}
 
% \end{center}
 
% \label{tab:miss from both}
 
% \end{table*}
 
 
 
 
 
   
 
  
 
\section{Analysis and Discussion}
 
\section{Analysis and Discussion}\label{sec:analysis}
 
 
 
We conducted experiments to study the impact on recall of the
different components of the filtering stage of an entity-based filtering and ranking pipeline. Specifically,
we studied the impact of cleansing,
entity profiles, relevance ratings, and categories of documents. We also measured the impact of these factors and choices on later stages of the pipeline. 
 
 
Experimental results show that cleansing can remove all or part of the content of documents, making them difficult to retrieve. These documents can otherwise be retrieved from the raw version. The use of the raw corpus brings in documents that cannot be retrieved from the cleansed corpus. This is true for all entity profiles and for all entity types. The recall difference between the cleansed and the raw corpus ranges from 6.8\% to 26.2\%. These increases correspond to thousands of additional document-entity pairs, which we consider a substantial gain. However, the recall increases do not always translate into an improved overall F-score. In the vital relevance ranking for both Wikipedia and aggregate entities, the cleansed version performs better than the raw version. For Twitter entities, the raw corpus achieves better results except in the case of all name-variants, though the difference is negligible. For vital-relevant, however, the raw corpus performs better across all entity profiles and entity types,
except for partial canonical names of Wikipedia entities. 
 
 
The use of different profiles also shows a large difference in recall. Except for Wikipedia entities, where canonical partial achieves better recall than name-variant, there is a steady increase in recall from canonical to canonical partial, to name-variant, and to name-variant partial. This pattern is also observed across the document categories. However, here too, the relationship between the gain in recall as we move from a less rich profile to a richer one and the overall performance as measured by F-score is not linear. 
 