Changeset - 94186893ebc1
Arjen de Vries (arjen) - 11 years ago 2014-06-12 06:37:05
arjen.de.vries@cwi.nl
ref error
1 file changed with 10 insertions and 3 deletions:
mypaper-final.tex
 
% THIS IS SIGPROC-SP.TEX - VERSION 3.1
 
% WORKS WITH V3.2SP OF ACM_PROC_ARTICLE-SP.CLS
 
% APRIL 2009
 
%
 
% It is an example file showing how to use the 'acm_proc_article-sp.cls' V3.2SP
 
% LaTeX2e document class file for Conference Proceedings submissions.
 
% ----------------------------------------------------------------------------------------------------------------
 
% This .tex file (and associated .cls V3.2SP) *DOES NOT* produce:
 
%       1) The Permission Statement
 
%       2) The Conference (location) Info information
 
%       3) The Copyright Line with ACM data
 
%       4) Page numbering
 
% ---------------------------------------------------------------------------------------------------------------
 
% It is an example which *does* use the .bib file (from which the .bbl file
 
% is produced).
 
% REMEMBER HOWEVER: After having produced the .bbl file,
 
% and prior to final submission,
 
% you need to 'insert'  your .bbl file into your source .tex file so as to provide
 
% ONE 'self-contained' source file.
 
%
 
% Questions regarding SIGS should be sent to
 
% Adrienne Griscti ---> griscti@acm.org
 
%
 
% Questions/suggestions regarding the guidelines, .tex and .cls files, etc. to
 
% Gerald Murray ---> murray@hq.acm.org
 
%
 
% For tracking purposes - this is V3.1SP - APRIL 2009
 
 
\documentclass{acm_proc_article-sp}
 
\usepackage{booktabs}
 
\usepackage{multirow}
 
\usepackage{todonotes}
 
\usepackage{url}
 
 
\begin{document}
 
 
\title{Entity-Centric Stream Filtering and Ranking: Filtering and Unfilterable Documents}
 
%SUGGESTION:
 
%\title{The Impact of Entity-Centric Stream Filtering on Recall and
 
%  Missed Documents}
 
 
%
 
% You need the command \numberofauthors to handle the 'placement
 
% and alignment' of the authors beneath the title.
 
%
 
% For aesthetic reasons, we recommend 'three authors at a time'
 
% i.e. three 'name/affiliation blocks' be placed beneath the title.
 
%
 
% NOTE: You are NOT restricted in how many 'rows' of
 
% "name/affiliations" may appear. We just ask that you restrict
 
% the number of 'columns' to three.
 
%
 
% Because of the available 'opening page real-estate'
 
% we ask you to refrain from putting more than six authors
 
% (two rows with three columns) beneath the article title.
 
% More than six makes the first-page appear very cluttered indeed.
 
%
 
% Use the \alignauthor commands to handle the names
 
% and affiliations for an 'aesthetic maximum' of six authors.
 
% Add names, affiliations, addresses for
 
% the seventh etc. author(s) as the argument for the
 
% \additionalauthors command.
 
% These 'additional authors' will be output/set for you
 
% without further effort on your part as the last section in
 
% the body of your article BEFORE References or any Appendices.
 
 
\numberofauthors{8} %  in this sample file, there are a *total*
 
% of EIGHT authors. SIX appear on the 'first-page' (for formatting
 
% reasons) and the remaining two appear in the \additionalauthors section.
 
%
 
% \author{
 
% % You can go ahead and credit any number of authors here,
 
% % e.g. one 'row of three' or two rows (consisting of one row of three
 
% % and a second row of one, two or three).
 
% %
 
% % The command \alignauthor (no curly braces needed) should
 
% % precede each author name, affiliation/snail-mail address and
 
% % e-mail address. Additionally, tag each line of
 
% % affiliation/address with \affaddr, and tag the
 
% % e-mail address with \email.
 
% %
 
% % 1st. author
 
% \alignauthor
 
% Ben Trovato\titlenote{Dr.~Trovato insisted his name be first.}\\
 
%        \affaddr{Institute for Clarity in Documentation}\\
 
%        \affaddr{1932 Wallamaloo Lane}\\
 
%        \affaddr{Wallamaloo, New Zealand}\\
 
%        \email{trovato@corporation.com}
 
% % 2nd. author
 
% \alignauthor
 
% G.K.M. Tobin\titlenote{The secretary disavows
 
% any knowledge of this author's actions.}\\
 
%        \affaddr{Institute for Clarity in Documentation}\\
 
%        \affaddr{P.O. Box 1212}\\
 
%        \affaddr{Dublin, Ohio 43017-6221}\\
 
%        \email{webmaster@marysville-ohio.com}
 
% }
 
% There's nothing stopping you putting the seventh, eighth, etc.
 
% author on the opening page (as the 'third row') but we ask,
 
% for aesthetic reasons that you place these 'additional authors'
 
% in the \additional authors block, viz.
 
% Just remember to make sure that the TOTAL number of authors
 
% is the number that will appear on the first page PLUS the
 
% number that will appear in the \additionalauthors section.
 
 
\maketitle
 
\begin{abstract}
 
 
Cumulative citation recommendation refers to the problem faced by
knowledge base curators, who need to continuously screen the media for
updates regarding the knowledge base entries they manage. Automatic
system support for this entity-centric information processing problem
requires complex pipe\-lines involving both natural language
processing and information retrieval components. The pipeline
encountered in a variety of systems that approach this problem
involves four stages: filtering, classification, ranking (or scoring),
and evaluation. Filtering is only an initial step, which reduces the
web-scale corpus of news and other relevant information sources that
may contain entity mentions into a working set of documents that should
be more manageable for the subsequent stages.
Nevertheless, this step has a large impact on the recall that can
maximally be attained. Therefore, in this study, we focus on just
this filtering stage and conduct an in-depth analysis of its main design
decisions: how to cleanse the noisy text obtained online,
the methods to create entity profiles, the
types of entities of interest, the document type, and the grade of
relevance of the document-entity pair under consideration.
We analyze how these factors (and the design choices made in their
corresponding system components) affect filtering performance.
We identify and characterize the relevant documents that do not pass
the filtering stage by examining their contents. This way, we
estimate a practical upper bound of recall for entity-centric stream
filtering.
 
 
\end{abstract}
 
% A category with the (minimum) three required fields
 
\category{H.4}{Information Filtering}{Miscellaneous}
 
 
%A category including the fourth, optional field follows...
 
%\category{D.2.8}{Software Engineering}{Metrics}[complexity measures, performance measures]
 
 
\terms{Theory}
 
 
\keywords{Information Filtering; Cumulative Citation Recommendation; knowledge maintenance; Stream Filtering;  emerging entities} % NOT required for Proceedings
 
 
\section{Introduction}
 
In 2012, the Text REtrieval Conference (TREC) introduced the Knowledge Base Acceleration (KBA) track to support Knowledge Base (KB) curators. The track addresses a critical need of KB curators: given KB entities (from Wikipedia or Twitter), filter a stream for relevant documents, rank the retrieved documents, and recommend them to the KB curators. The track is crucial and timely because the number of entities in a KB on the one hand, and the huge amount of new information content on the Web on the other, make manual KB maintenance challenging. TREC KBA's main task, Cumulative Citation Recommendation (CCR), aims at filtering a stream to identify citation-worthy documents, ranking them, and recommending them to KB curators.
 
  
 
   
 
 Filtering is a crucial step in CCR: it selects a potentially
 relevant set of working documents for the subsequent steps of the
 pipeline out of a big collection of stream documents. Filtering sifts an incoming stream of information for documents relevant to user profiles \cite{robertson2002trec}. In the specific setting of CCR, these profiles are
represented by persistent KB entities (Wikipedia pages or Twitter
users, in the TREC scenario).
 
 
 
 TREC-KBA 2013's participants applied filtering as a first step to
 produce a smaller working set for subsequent experiments. As the
 subsequent steps of the pipeline use the output of the filter, the
 final performance of the system depends on this step. The
 filtering step particularly determines the recall of the overall
 system. However, all 141 runs submitted by 13 teams suffered from
 poor recall, as pointed out in the track's overview paper
 \cite{frank2013stream}.
 
 
The most important components of the filtering step are cleansing
 
(referring to pre-processing noisy web text into a canonical ``clean''
 
text format), and
 
entity profiling (creating a representation of the entity that can be
 
used to match the stream documents to). For each component, different
 
choices can be made. In the specific case of TREC KBA, organisers have
 
provided two different versions of the corpus: one that is already cleansed,
 
and one that is the raw data as originally collected by the organisers. 
 
Also, different
 
approaches use different entity profiles for filtering, varying from
 
using just the KB entities' canonical names to looking up DBpedia name
 
variants, and from using the bold words in the first paragraph of the Wikipedia
 
entities' page to using anchor texts from other Wikipedia pages, and from
 
using the exact name as given to using WordNet-derived synonyms. The types of entities
 
(Wikipedia or Twitter) and the category of documents in which they
 
occur (news, blogs, or tweets) cause further variations.
 
% A variety of approaches are employed  to solve the CCR
 
% challenge. Each participant reports the steps of the pipeline and the
 
% final results in comparison to other systems.  A typical TREC KBA
 
% poster presentation or talk explains the system pipeline and reports
 
% the final results. The systems may employ similar (even the same)
 
% steps  but the choices they make at every step are usually
 
% different. 
 
In such a situation, it becomes hard to identify the factors that
result in improved performance; insight across the different
approaches is lacking. This makes it hard to know whether the
improvement in performance of a particular approach is due to
preprocessing, filtering, classification, scoring, or any of the
sub-components of the pipeline.
 
 
 
In this paper, we therefore fix the subsequent steps of the pipeline,
and zoom in on \emph{only} the filtering step, conducting an in-depth analysis of its
main components. In particular, we study the effect of cleansing,
entity profiling, the type of entity filtered for (Wikipedia or Twitter), and
the document category (social, news, etc.) on the filtering components'
performance. The main contributions of the
paper are an in-depth analysis of the factors that affect entity-based
stream filtering, identifying optimal entity profiles without
compromising precision, describing and classifying relevant documents
that are not amenable to filtering, and estimating the upper bound
of recall for entity-based filtering.
 
 
 
The rest of the paper is organized as follows. Section \ref{sec:desc}
describes the dataset and Section \ref{sec:fil} defines the task. In
Section \ref{sec:lit}, we discuss related literature, followed by a
discussion of our method in Section \ref{sec:mthd}. Following that, we
present the experimental results in Section \ref{sec:expr}, and discuss and
analyze them in Section \ref{sec:analysis}. Towards the end, we discuss the
impact of filtering choices on classification in Section
\ref{sec:impact}, and examine and categorize unfilterable documents in
Section \ref{sec:unfil}. Finally, we present our conclusions in
Section \ref{sec:conc}.
 
 
 
 
 \section{Data Description}\label{sec:desc}
 
We base this analysis on the TREC-KBA 2013 dataset%
 
\footnote{\url{http://trec-kba.org/trec-kba-2013.shtml}}
 
that consists of three main parts: a time-stamped stream corpus, a set of
 
KB entities to be curated, and a set of relevance judgments. A CCR
 
system now has to identify for each KB entity which documents in the
 
stream corpus are to be considered by the human curator.
 
 
\subsection{Stream corpus} The stream corpus comes in two versions:
raw and cleansed. The raw and cleansed versions are 6.45TB and 4.5TB
respectively, after xz-compression and GPG encryption. The raw data
is a dump of raw HTML pages. The cleansed version is the raw data
after its HTML tags are stripped off; only English documents,
identified with the Chromium Compact Language
Detector\footnote{\url{https://code.google.com/p/chromium-compact-language-detector/}},
are included. The stream corpus is organized in hourly folders, each
of which contains many chunk files. Each chunk file contains between
hundreds and hundreds of thousands of serialized thrift objects. One
thrift object is one document. A document can be a blog article, a
news article, or a social media post (including tweets). The stream
corpus comes from three sources: TREC KBA 2012 (social, news and
linking)\footnote{\url{http://trec-kba.org/kba-stream-corpus-2012.shtml}},
arxiv\footnote{\url{http://arxiv.org/}}, and
spinn3r\footnote{\url{http://spinn3r.com/}}.
Table \ref{tab:streams} shows the sources, the number of documents,
and the number of chunk files.
 
\begin{table}
\caption{Number of documents and chunk files per sub-stream source}
\begin{center}

 \begin{tabular}{rrl}
 Documents     &   Chunk files    &    Sub-stream \\
\hline
 
 
126,952         &11,851         &arxiv \\
 
394,381,405      &   688,974        & social \\
 
134,933,117       &  280,658       &  news \\
 
5,448,875         &12,946         &linking \\
 
57,391,714         &164,160      &   MAINSTREAM\_NEWS (spinn3r)\\
 
36,559,578         &85,769      &   FORUM (spinn3r)\\
 
14,755,278         &36,272     &    CLASSIFIED (spinn3r)\\
 
52,412         &9,499         &REVIEW (spinn3r)\\
 
7,637         &5,168         &MEMETRACKER (spinn3r)\\
 
1,040,520,595   &      2,222,554 &        Total\\
 
 
\end{tabular}
 
\end{center}
 
\label{tab:streams}
 
\end{table}
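
For concreteness, the sketch below (Python) shows how such a corpus can be traversed. It assumes the \texttt{streamcorpus} Python package distributed for TREC KBA and chunk files that have already been decrypted and decompressed; the placeholder path and the per-source tallying are our own additions, not part of the official tooling.

\begin{verbatim}
import os
from streamcorpus import Chunk  # thrift chunk reader (assumed)

CORPUS_ROOT = '/path/to/kba-streamcorpus-2013'  # placeholder

def iter_stream_items(corpus_root):
    """Yield one thrift object (one document) at a time,
    walking the hourly directories and their chunk files."""
    for hour_dir in sorted(os.listdir(corpus_root)):
        hour_path = os.path.join(corpus_root, hour_dir)
        if not os.path.isdir(hour_path):
            continue
        for chunk_name in sorted(os.listdir(hour_path)):
            chunk_path = os.path.join(hour_path, chunk_name)
            for stream_item in Chunk(chunk_path):
                yield stream_item

# Example: tally documents per sub-stream source,
# assuming each stream item carries a 'source' field.
counts = {}
for si in iter_stream_items(CORPUS_ROOT):
    counts[si.source] = counts.get(si.source, 0) + 1
\end{verbatim}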
 
 
\subsection{KB entities}
 
 
 The KB entities consist of 20 Twitter entities and 121 Wikipedia entities; they were selected, on purpose, to be sparse in the stream. The entities include 71 people, 1 organization, and 24 facilities.
 
 
\subsection{Relevance judgments}
 
 
TREC-KBA provided relevance judgments for training and
testing. Relevance judgments are given as document-entity
pairs. Documents with citation-worthy content for a given entity are
annotated as \emph{vital}, while documents with tangentially
relevant content, or documents that lack freshness but whose content
can be useful for an initial KB dossier, are annotated as
\emph{relevant}. Documents with no relevant content are labeled
\emph{neutral} and spam is labeled as \emph{garbage}.
 
%The inter-annotator agreement on vital in 2012 was 70\% while in 2013 it
 
%is 76\%. This is due to the more refined definition of vital and the
 
%distinction made between vital and relevant.
 
 
\subsection{Breakdown of results by document source category}
 
 
%The results of the different entity profiles on the raw corpus are
 
%broken down by source categories and relevance rank% (vital, or
 
%relevant).  
 
In total, the dataset contains 24162 unique entity-document
pairs that are vital or relevant; 9521 of these have been labelled as vital,
and 17424 as relevant.
All documents are categorized into 8 source categories: 0.98\%
arxiv (a), 0.034\% classified (c), 0.34\% forum (f), 5.65\% linking (l),
11.53\% mainstream-news (m-n), 18.40\% news (n), 12.93\% social (s) and
50.2\% weblog (w). We have regrouped these source categories into three
groups, ``news'', ``social'', and ``other'', for two reasons: 1) some categories
are very similar to each other. Mainstream-news and news are
similar; they exist separately, in the first place, only
because they were collected from two different sources, by different
groups and at different times. We call them news from now on. The
same holds for weblog and social, which we call social from now
on. 2) Some categories have so few annotations that treating
them independently does not make much sense. The majority of vital or
relevant annotations are social (social and weblog, 63.13\%). News
(mainstream-news and news) makes up 30\%. Thus, news and social together
account for about 93\% of all annotations. The rest make up about 7\% and are all
grouped as others.
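
For reference, this regrouping amounts to a simple mapping from the eight source labels to the three groups used in the remainder of the paper. The sketch below is illustrative (Python); the label spellings are ours and may differ from the exact strings used in the corpus.

\begin{verbatim}
# Map the 8 corpus source categories to the 3 analysis groups.
SOURCE_GROUP = {
    'arxiv':           'others',
    'classified':      'others',
    'forum':           'others',
    'linking':         'others',
    'mainstream_news': 'news',
    'news':            'news',
    'social':          'social',
    'weblog':          'social',
}

def group_of(source):
    # Unrecognized labels fall back to 'others'.
    return SOURCE_GROUP.get(source.lower(), 'others')
\end{verbatim}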
 
 
 \section{Stream Filtering}\label{sec:fil}
 
 
 
 The TREC Filtering track defines filtering as a ``system that sifts
 through stream of incoming information to find documents that are
 relevant to a set of user needs represented by profiles''
 \cite{robertson2002trec}. The information needs are long-term and are
 represented by persistent profiles, unlike in a traditional search system,
 whose ad-hoc information need is represented by a search
 query. Adaptive filtering, one task of the Filtering track, starts
 with a persistent user profile and a very small number of positive
 examples. A filtering system can improve its user profiles with
 feedback obtained from interaction with users, and thereby improve
 its performance. The filtering stage of entity-based stream
 filtering and ranking can be likened to the adaptive filtering task
 of the Filtering track: the persistent information needs are the KB
 entities, and the relevance judgments are the small number of positive
 examples.
 
 
Stream filtering is then the task of, given a stream of news items, blogs
 and social media posts on the one hand and a set of KB entities on the other,
 filtering the stream for potentially relevant documents such that
 the relevance classifier (ranker) achieves as high a performance as
 possible. Specifically, we conduct an in-depth analysis of the choices
 and factors affecting the cleansing step, the entity-profile
 construction, the document category of the stream items, and the type
 of entities (Wikipedia or Twitter), and finally their impact on the overall
 performance of the pipeline. Finally, we conduct a manual examination
 of the vital documents that defy filtering. We strive to answer the
 following research questions:
 
\begin{enumerate}
  \item Does cleansing affect filtering and subsequent performance?
  \item What is the most effective entity profile representation?
  \item Is filtering different for Wikipedia and Twitter entities?
  \item Are some types of documents easily filterable and others not?
  \item Does a gain in recall at the filtering step translate to a gain in F-measure at the end of the pipeline?
  \item What characterizes the vital (and relevant) documents that are
    missed in the filtering step?
\end{enumerate}
 
 
The TREC Filtering track and the filtering step of the entity-centric
stream filtering and ranking pipeline have different purposes. The
TREC Filtering track's goal is the binary classification of documents:
for each incoming document, decide whether it is relevant or not for a
given profile. In our case, the documents have graded relevance, and
the goal of the filtering stage is to pass on as many potentially
relevant documents as possible, while passing as few irrelevant documents as
possible so as not to obfuscate the later stages of the pipeline. Filtering
as part of the pipeline requires a delicate balance between retrieving
relevant documents and excluding irrelevant ones. Because of this,
filtering in this case can only be studied by binding it to the later
stages of the entity-centric pipeline. This bond influences how we do
evaluation.
 
 
To achieve this, we use recall percentages in the filtering stage for
the different choices of entity profiles. However, we use the overall
performance to select the best entity profiles. To generate the overall
pipeline performance we use the official TREC KBA evaluation metric
and scripts \cite{frank2013stream} to report max-F, the maximum
F-score obtained over all relevance cut-offs.
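
To make this metric concrete, the following sketch (Python) sweeps a confidence cut-off over the pipeline output and keeps the best F-score. It is a simplification for illustration only, not a substitute for the official TREC KBA scoring scripts.

\begin{verbatim}
def max_f(scored_pairs, relevant_pairs):
    """scored_pairs: list of ((doc_id, entity_id), confidence)
    produced by the pipeline; relevant_pairs: set of
    (doc_id, entity_id) pairs judged vital/relevant."""
    best_f = 0.0
    for cutoff in sorted({conf for _, conf in scored_pairs}):
        retrieved = {pair for pair, conf in scored_pairs
                     if conf >= cutoff}
        if not retrieved or not relevant_pairs:
            continue
        tp = len(retrieved & relevant_pairs)
        precision = tp / len(retrieved)
        recall = tp / len(relevant_pairs)
        if precision + recall > 0:
            f1 = 2 * precision * recall / (precision + recall)
            best_f = max(best_f, f1)
    return best_f
\end{verbatim}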
 
 
\section{Literature Review} \label{sec:lit}
 
 
There has been a great deal of interest of late in entity-based filtering and ranking. The Text Analysis Conference started Knowledge Base Population (KBP) with the goal of developing methods and technologies to facilitate the creation and population of KBs \cite{ji2011knowledge}. The most relevant track in KBP is entity linking: given an entity and
a document containing a mention of the entity, identify the mention in the document and link it to its profile in a KB. Many studies have attempted to address this task \cite{dalton2013neighborhood, dredze2010entity, davis2012named}.
 
 
 A more recent manifestation of this interest is the introduction of TREC KBA in 2012. Following that, a number of research works have addressed the topic \cite{frank2012building, ceccarelli2013learning, taneva2013gem, wang2013bit, balog2013multi}. These works are based on the KBA 2012 task and dataset and address the whole problem of entity filtering and ranking. TREC KBA continued in 2013, but the task underwent some changes. The main changes between 2012 and 2013 are in the number of entities, the type of entities, the corpus, and the relevance rankings.
 
 
The number of entities increased from 29 to 141, and now includes 20 Twitter entities. The TREC KBA 2012 corpus is 1.9TB after xz-compression and has 400M documents. By contrast, the KBA 2013 corpus is 6.45TB after xz-compression and GPG encryption. A version with all non-English documents removed is 4.5TB and consists of 1 billion documents. The 2013 corpus subsumes the 2012 corpus and adds further sources from spinn3r, namely mainstream news, forum, arxiv, classified, reviews and memetracker. A more important difference is, however, a change in the definitions of the relevance ratings vital and relevant. While in KBA 2012 a document was judged vital if it had citation-worthy content for a given entity, in 2013 it must also have freshness, that is, the content must trigger an edit of the given entity's KB entry.
 
 
While the tasks of 2012 and 2013 are fundamentally the same, the approaches varied due to the size of the corpus. In 2013, all participants used filtering to reduce the size of the big corpus. They used different ways of filtering: many of them used two or more types of name variants from DBpedia such as labels, names, redirects, birth names, aliases, nicknames, same-as and alternative names \cite{wang2013bit,dietzumass,liu2013related, zhangpris}. Although most of the participants used DBpedia name variants, none of them used all the name variants. A few other participants used the bold words in the first paragraph of the Wikipedia entity's profile and anchor texts from other Wikipedia pages \cite{bouvierfiltering, niauniversity}. One participant used a Boolean \emph{and} query built from the tokens of the canonical names \cite{illiotrec2013}.
 
 
All of the studies used filtering as their first step to generate a smaller set of documents, and many systems suffered from poor recall, which strongly affected their overall performance \cite{frank2012building}. Although systems used different entity profiles to filter the stream and achieved different performance levels, there is no study of the factors and choices that affect the filtering step itself. Of course, filtering has been extensively examined in the TREC Filtering track \cite{robertson2002trec}. However, those studies were isolated in the sense that they were intended to optimize recall. What we have here is a different scenario: documents have relevance ratings, so we want to study filtering in connection to relevance to the entities, which can only be done by coupling filtering to the later stages of the pipeline. This is new to the best of our knowledge, and the TREC KBA problem setting and datasets offer a good opportunity to examine this aspect of filtering.
 
 
Moreover, there has not been a study at this scale into what types of documents defy filtering and why. In this paper, we conduct a manual examination of the documents that are missed and classify them into different categories. We also estimate the general upper bound of recall using the different entity profiles and choose the best profile, that is, the one that results in increased overall performance as measured by F-measure.
 
 
\section{Method}\label{sec:mthd}
 
All analyses in this paper are carried out on the documents that have
 
relevance assessments associated to them. For this purpose, we
 
extracted those documents from the big corpus. We experiment with all
 
KB entities. For each KB entity, we extract different name variants
 
from DBpedia and Twitter.
 
 
 
\subsection{Entity Profiling}
 
We build entity profiles for the KB entities of interest. We have two
 
types: Twitter and Wikipedia. Both types of entities have been selected, on
purpose by the track organisers, to occur only sparsely and to be less-documented.
 
For the Wikipedia entities, we fetch different name variants
 
from DBpedia: name, label, birth name, alternative names,
 
redirects, nickname, or alias. 
 
These extraction results are summarized in Table
 
\ref{tab:sources}.
 
For the Twitter entities, we visit
 
their respective Twitter pages and fetch their display names. 
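
As an illustration of the DBpedia extraction step, the sketch below (Python, using the SPARQLWrapper package) queries the public DBpedia SPARQL endpoint for several of these name-variant properties. The property names shown are the commonly used DBpedia ones; the sketch is illustrative and not a verbatim record of our extraction scripts.

\begin{verbatim}
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = 'http://dbpedia.org/sparql'

QUERY = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX dbo:  <http://dbpedia.org/ontology/>
SELECT DISTINCT ?v WHERE {
  { <%(uri)s> rdfs:label ?v }
  UNION { <%(uri)s> foaf:name ?v }
  UNION { <%(uri)s> dbo:birthName ?v }
  UNION { <%(uri)s> dbo:alias ?v }
  UNION { ?r dbo:wikiPageRedirects <%(uri)s> ;
             rdfs:label ?v }
  FILTER (lang(?v) = 'en')
}
"""

def name_variants(dbpedia_uri):
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery(QUERY % {'uri': dbpedia_uri})
    sparql.setReturnFormat(JSON)
    bindings = sparql.query().convert()['results']['bindings']
    return {b['v']['value'] for b in bindings}

# e.g. name_variants('http://dbpedia.org/resource/Benjamin_Bronfman')
\end{verbatim}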
 
\begin{table}
 
\caption{Number of different DBpedia name variants}
 
\begin{center}
 
 
 \begin{tabular}{lr}
 
 Name variant& No. of strings  \\
 
\hline
 
 Name  &82\\
 
 Label   &121\\
 
Redirect  &49 \\
 
 Birth Name &6\\
 
 Nickname & 1\\
 
 Alias &1 \\
 
 Alternative Names &4\\
 
 
\hline
 
\end{tabular}
 
\end{center}
 
\label{tab:sources}
 
\end{table}
 
 
 
The collection contains 121 Wikipedia entities in total.
Every entity has a corresponding DBpedia label. Only 82 entities have
a name string and only 49 entities have redirect strings. (Most of these
entities have only one redirect string, except for a few cases with
multiple redirect strings; Buddy\_MacKay has the highest number (12) of
redirect strings.)
 
 
We combine the different name variants we extracted to form a set of
strings for each KB entity. For Twitter entities, we used the display
names that we collected. We consider the name of an entity that
is part of its URL as canonical. For example, for the entity\\
\url{http://en.wikipedia.org/wiki/Benjamin_Bronfman}\\
Benjamin Bronfman is the canonical name.
 
An example is given in Table \ref{tab:profile}.
 
 
From the combined name variants and
the canonical names, we created four sets of profiles for each
entity: canonical (cano), canonical partial (cano-part), all name
variants combined (all), and partial names of all name
variants (all-part). We refer to the last two profiles as name-variant
and name-variant partial. The names in parentheses are used in table
captions.
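
A minimal sketch of this profile construction, and of how a profile can be matched against a stream document, is given below (Python). The function names and the plain case-insensitive substring matching are our own simplifications, not a verbatim account of the implementation.

\begin{verbatim}
def build_profiles(canonical, variants):
    """canonical: name taken from the entity URL;
    variants: combined DBpedia/Twitter name variants."""
    return {
        'cano':      {canonical},
        'cano-part': set(canonical.split()),
        'all':       set(variants),
        'all-part':  {t for v in variants for t in v.split()},
    }

def passes_filter(profile, document_text):
    """A document passes if any profile string occurs in it
    (case-insensitive substring match)."""
    text = document_text.lower()
    return any(name.lower() in text for name in profile)

# Example for the Wikipedia entity Benjamin_Bronfman:
profiles = build_profiles('Benjamin Bronfman',
                          ['Ben Brewer',
                           'Benjamin Zachary Bronfman'])
\end{verbatim}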
 
 
 
\begin{table*}
 
\caption{Example entity profiles (upper part Wikipedia, lower part Twitter)}
 
\begin{center}
 
\begin{tabular}{l*{3}{c}}
 
 &Wikipedia&Twitter \\
 
\hline
 
 
 &Benjamin\_Bronfman& roryscovel\\
 
  cano&[Benjamin Bronfman] &[roryscovel]\\
 
  cano-part &[Benjamin, Bronfman]&[roryscovel]\\
 
  all&[Ben Brewer, Benjamin Zachary Bronfman] &[Rory Scovel] \\
 
  all-part& [Ben, Brewer, Benjamin, Zachary, Bronfman]&[Rory, Scovel]\\
 
			   
 
                  
 
   \hline                      
 
\end{tabular}
 
\end{center}
 
\label{tab:profile}
 
\end{table*}
 
\subsection{Annotation Corpus}
 
 
The annotation set is a combination of the annotations from the Training Time Range (TTR) and the Evaluation Time Range (ETR) and consists of 68405 annotations. Its breakdown into training and test sets is shown in Table \ref{tab:breakdown}.
 
 
 
\begin{table}
 
\caption{Number of annotation documents with respect to different categories (relevance rating, training and testing)}
 
\begin{center}
 
\begin{tabular}{l*{3}{c}r}
 
 &&Vital&Relevant  &Total \\
 
\hline
 
 
\multirow{2}{*}{Training}  &Wikipedia & 1932  &2051& 3672\\
 
			  &Twitter&189   &314&488 \\
 
			   &All Entities&2121&2365&4160\\
 
                        
 
\hline 
 
\multirow{2}{*}{Testing}&Wikipedia &6139   &12375 &16160 \\
 
                         &Twitter&1261   &2684&3842  \\
 
                         &All Entities&7400   &12059&20002 \\
 
                         
 
             \hline 
 
\multirow{2}{*}{Total} & Wikipedia       &8071   &14426&19832  \\
 
                       &Twitter  &1450  &2998&4330  \\
 
                       &All Entities&9521   &17424&24162 \\
 
	                 
 
\hline
 
\end{tabular}
 
\end{center}
 
\label{tab:breakdown}
 
\end{table}
 
 
 
 
 
 
 
%Most (more than 80\%) of the annotation documents are in the test set.
 
The 2013 training and test data contain 68405
 
annotations, of which 50688 are unique document-entity pairs.   Out of
 
these, 24162 unique document-entity pairs are vital (9521) or relevant
 
(17424).
 
 
 
 
 
\section{Experiments and Results}\label{sec:expr}
 
 We conducted experiments to study the effect of cleansing, different entity profiles, types of entities, categories of documents, relevance ranks (vital or relevant), and the impact on classification. In the following subsections, we present the results for these different categories and describe them.
 
 
 
 \subsection{Cleansing: raw or cleansed}
 
\begin{table}
 
\caption{Percentage of vital or relevant documents retrieved under different name variants (upper part from cleansed, lower part from raw)}
 
\begin{center}
 
\begin{tabular}{l@{\quad}rrrrrrr}
 
\hline
 
&cano&cano-part  &all &all-part  \\
 
\hline
 
 
 
 
   Wikipedia      &61.8  &74.8  &71.5  &77.9\\
 
   Twitter        &1.9   &1.9   &41.7  &80.4\\
 
   All Entities   &51.0  &61.7  &66.2  &78.4 \\	
 
  
 
 
 
\hline
 
\hline
 
   Wikipedia      &70.0  &86.1  &82.4  &90.7\\
 
   Twitter        & 8.7  &8.7   &67.9  &88.2\\
 
  All Entities    &59.0  &72.2  &79.8  &90.2\\
 
\hline
 
 
\end{tabular}
 
\end{center}
 
\label{tab:name}
 
\end{table}
 
 
 
The upper part of Table \ref{tab:name} shows the recall performance on the cleansed version and the lower part on the raw version. Recall for all entity types increases substantially on the raw version. The recall increases on Wikipedia entities vary from 8.2 to 12.8 percentage points, and on Twitter entities from 6.8 to 26.2. Over all entities, the increase ranges from 8.0 to 13.6. These recall increases are substantial: to put them into perspective, an 11.8 point increase in recall on all entities corresponds to retrieving 2864 more unique document-entity pairs. %This suggests that cleansing has removed some documents that we could otherwise retrieve.
 
 
\subsection{Entity Profiles}
 
If we look at the recall performance on the raw corpus, filtering documents by canonical names achieves a recall of 59\%. Adding the other name variants improves the recall to 79.8\%, an increase of 20.8 points; that is, 20.8\% of documents mention the entities by names other than their canonical names. Canonical partial achieves a recall of 72\% and name-variant partial achieves 90.2\%, which says that 18.2\% of documents mention the entities by partial names of non-canonical name variants.
 
 
 
%\begin{table*}
 
%\caption{Breakdown of recall percentage increases by document categories }
 
%\begin{center}\begin{tabular}{l*{9}{c}r}
 
% && \multicolumn{3}{ c| }{All entities}  & \multicolumn{3}{ c| }{Wikipedia} &\multicolumn{3}{ c| }{Twitter} \\ 
 
% & &others&news&social & others&news&social &  others&news&social \\
 
%\hline
 
% 
 
%\multirow{4}{*}{Vital}	 &cano-part $-$ cano  	&8.2  &14.9    &12.3           &9.1  &18.6   &14.1             &0      %&0       &0  \\
 
%                         &all$-$ cano         	&12.6  &19.7    &12.3          &5.5  &15.8   &8.4             &73   &35%.9    &38.3  \\
 
%	                 &all-part $-$ cano\_part&9.7    &18.7  &12.7       &0    &0.5  &5.1        &93.2 & 93 &64.4 \\%
 
%	                 &all-part $-$ all     	&5.4  &13.9     &12.7           &3.6  &3.3    &10.8              &20.3 %  &57.1    &26.1 \\
 
%	                 \hline
 
%	                 
 
%\multirow{4}{*}{Relevant}  &cano-part $-$ cano  	&10.5  &15.1    &12.2          &11.1  &21.7   &14.1            % &0   &0    &0  \\
 
%                         &all $-$ cano         	&11.7  &36.6    &17.3          &9.2  &19.5   &9.9             &%54.5   &76.3   &66  \\
 
%	                 &all-part $-$ cano-part &4.2  &26.9   &15.8          &0.2    &0.7    &6.7           &72.2   &8%7.6 &75 \\
 
%	                 &all-part $-$ all     	&3    &5.4     &10.7           &2.1  &2.9    &11              &18.2   &%11.3    &9 \\
 
%	                 
 
%	                 \hline
 
%\multirow{4}{*}{total} 	&cano-part $-$ cano   	&10.9   &15.5   &12.4         &11.9  &21.3   &14.4          &0 %    &0       &0\\
 
%			&all $-$ cano         	&13.8   &30.6   &16.9         &9.1  &18.9   &10.2          &63.6  &61.8%    &57.5 \\
 
%                        &all-part $-$ cano-part	&7.2   &24.8   &15.9          &0.1    &0.7    &6.8           &8%2.2  &89.1    &71.3\\
 
%                        &all-part $-$ all     	&4.3   &9.7    &11.4           &3.0  &3.1   &11.0          &18.9  &27.3%    &13.8\\	                 
 
%	                 
 
%                                  	                 
 
%\hline
 
%\end{tabular}
 
%\end{center}
 
%\label{tab:source-delta2}
 
%\end{table*}
 
 
 
 \begin{table*}
 
\caption{Breakdown of recall performances by document source category}
 
\begin{center}\begin{tabular}{l*{9}{c}r}
 
 && \multicolumn{3}{ c| }{All entities}  & \multicolumn{3}{ c| }{Wikipedia} &\multicolumn{3}{ c| }{Twitter} \\ 
 
 & &others&news&social & others&news&social &  others&news&social \\
 
\hline
 
 
 
\multirow{4}{*}{Vital} &cano                 &82.2& 65.6& 70.9& 90.9&  80.1& 76.8&   8.1&  6.3&  30.5\\
 
&cano part & 90.4& 80.6& 83.1& 100.0& 98.7& 90.9&   8.1&  6.3&  30.5\\
 
&all  & 94.8& 85.4& 83.1& 96.4&  95.9& 85.2&   81.1& 42.2& 68.8\\
 
&all part &100& 99.2& 95.9& 100.0&  99.2& 96.0&   100&  99.3& 94.9\\
 
\hline
 
	                 