Changeset - 2594949a76bb
Arjen de Vries (arjen) - 11 years ago 2014-06-12 03:02:11
arjen.de.vries@cwi.nl
trying...
1 file changed with 5 insertions and 1 deletion:
mypaper-final.tex
 
@@ -334,97 +334,101 @@ Stream filtering is then the task to, given a stream of documents of news items,
 
of entities (Wikipedia or Twitter), and finally their impact on the overall
performance of the pipeline. Lastly, we conduct a manual examination of the
vital documents that defy filtering. We strive to answer the following
research questions:
 
\begin{enumerate}
  \item Does cleansing affect filtering and subsequent performance?
  \item What is the most effective way of representing entity profiles?
  \item Is filtering different for Wikipedia and Twitter entities?
  \item Are some types of documents easier to filter than others?
  \item Does a gain in recall at the filtering step translate to a gain in F-measure at the end of the pipeline?
  \item What characterizes the vital (and relevant) documents that are missed in the filtering step?
\end{enumerate}
 

	
 
The TREC filtering task and filtering as part of the entity-centric
stream filtering and ranking pipeline have different purposes. The goal
of the TREC filtering track is the binary classification of documents:
for each incoming document, the system decides whether it is relevant
for a given profile or not. In our case, the documents have relevance
rankings, and the goal of the filtering stage is to pass on as many
potentially relevant documents as possible while letting through as few
irrelevant documents as possible, so as not to obfuscate the later
stages of the pipeline. Filtering as part of the pipeline requires a
delicate balance between retrieving relevant documents and excluding
irrelevant ones. Because of this, filtering in this setting can only be
studied by binding it to the later stages of the entity-centric
pipeline. This bond influences how we do evaluation.
 

	
 
To achieve this, we use recall percentages in the filtering stage for
the different choices of entity profiles. However, we use the overall
performance to select the best entity profiles. To generate the overall
pipeline performance, we use the official TREC KBA evaluation metric
and scripts \cite{frank2013stream} to report max-F, the maximum
F-score obtained over all relevance cut-offs.
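
For reference, max-F can be written out as follows (this merely restates the
metric just described; the notation $P(\tau)$ and $R(\tau)$ for the precision
and recall of the end-to-end pipeline at relevance cut-off $\tau$ is ours):
\[
\mbox{max-F} \;=\; \max_{\tau}\; \frac{2\, P(\tau)\, R(\tau)}{P(\tau) + R(\tau)},
\]
where the maximum is taken over all cut-offs $\tau$ reported by the official scripts.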
 

	
 
\section{Literature Review}
 
There has been a great deal of interest of late in entity-based filtering and ranking. One manifestation of this is the introduction of TREC KBA in 2012. Following that, a number of studies have been carried out on the topic \cite{frank2012building, ceccarelli2013learning, taneva2013gem, wang2013bit, balog2013multi}. These works are based on the KBA 2012 task and dataset, and they address the whole problem of entity filtering and ranking. TREC KBA continued in 2013, but the task underwent some changes. The main changes between 2012 and 2013 are in the number of entities, the type of entities, the corpus, and the relevance rankings.
 
 
The number of entities increased from 29 to 141, and now includes 20 Twitter entities. The TREC KBA 2012 corpus is 1.9 TB after XZ compression and has 400M documents. By contrast, the KBA 2013 corpus is 6.45 TB after XZ compression and GPG encryption. A version with all non-English documents removed is 4.5 TB and consists of 1 billion documents. The 2013 corpus subsumes the 2012 corpus and adds other sources from spinn3r, namely mainstream news, forum, arxiv, classified, reviews, and memetracker. A more important difference, however, is a change in the definitions of the relevance ratings vital and relevant. While in KBA 2012 a document was judged vital if it had citation-worthy content for a given entity, in 2013 it must have freshness; that is, the content must trigger an edit of the given entity's KB entry.
 
 
While the tasks of 2012 and 2013 are fundamentally the same, the approaches varied due to the size of the corpus. In 2013, all participants used filtering to reduce the size of the big corpus. They used different ways of filtering: many of them used two or more of the different name variants available from DBpedia, such as labels, names, redirects, birth names, aliases, nicknames, same-as links, and alternative names \cite{wang2013bit,dietzumass,liu2013related, zhangpris}. Although most of the participants used DBpedia name variants, none of them used all the name variants. A few other participants used the bold words in the first paragraph of the Wikipedia entity's page and anchor texts from other Wikipedia pages \cite{bouvierfiltering, niauniversity}. One participant used a Boolean \emph{and} query built from the tokens of the canonical names \cite{illiotrec2013}.
 
 
All of the studies used filtering as their first step to generate a smaller set of documents, and many systems suffered from poor recall, which strongly affected their overall performance \cite{frank2012building}. Although systems used different entity profiles to filter the stream and achieved different performance levels, there is no study of the factors and choices that affect the filtering step itself. Of course, filtering has been extensively examined in the TREC Filtering track \cite{robertson2002trec}. However, those studies were isolated in the sense that they were intended to optimize recall. What we have here is a different scenario: documents have relevance ratings. We therefore want to study filtering in connection with relevance to the entities, which can only be done by coupling filtering to the later stages of the pipeline. This is new, to the best of our knowledge, and the TREC KBA problem setting and data sets offer a good opportunity to examine this aspect of filtering.
 
 
Moreover, there has not been a chance to study filtering at this scale, nor a study of what types of documents defy filtering and why. In this paper, we conduct a manual examination of the documents that are missed and classify them into different categories. We also estimate the general upper bound of recall using the different entity profiles and choose the best profile, that is, the one that results in increased overall performance as measured by F-measure.
 
 
\section{Method}
 
 
All analyses in this paper are carried out on the documents that have
relevance assessments associated with them. For this purpose, we
extracted those documents from the big corpus. We experiment with all
KB entities. For each KB entity, we extract different name variants
from DBpedia and Twitter. A sketch of the resulting filtering step is
shown below.
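
As an illustration of this filtering step, the following minimal sketch keeps
an annotated document for an entity if the document text contains any of the
entity's name-variant strings. The function names and data structures are
hypothetical stand-ins for the real corpus readers, not the actual pipeline
code.
\begin{verbatim}
def matches_profile(doc_text, name_variants):
    # Case-insensitive substring match against any name-variant string.
    text = doc_text.lower()
    return any(variant.lower() in text for variant in name_variants)

def filter_stream(docs, profiles):
    # docs     : iterable of (doc_id, doc_text) pairs
    # profiles : dict mapping entity -> set of name-variant strings
    # Yields (entity, doc_id) for every document that passes the filter.
    for doc_id, doc_text in docs:
        for entity, name_variants in profiles.items():
            if matches_profile(doc_text, name_variants):
                yield entity, doc_id
\end{verbatim}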
 
 
 
\subsection{Entity Profiling}
 
We build profiles for the KB entities of interest. There are two types of entities: Twitter and Wikipedia. Both types were selected, on purpose, to be sparse and less documented. For the Twitter entities, we visit their respective Twitter pages and manually fetch their display names. For the Wikipedia entities, we fetch different name variants from DBpedia, namely name, label, birth name, alternative names, redirects, nickname, and alias. The extraction results are shown in Table \ref{tab:sources}.
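
The sketch below shows one way such name variants could be retrieved from the
public DBpedia SPARQL endpoint. The choice of properties (label, name, birth
name, redirects) and the example entity are illustrative assumptions; the
paper's actual extraction may differ in detail.
\begin{verbatim}
from SPARQLWrapper import SPARQLWrapper, JSON

QUERY = """
PREFIX dbr:  <http://dbpedia.org/resource/>
PREFIX dbo:  <http://dbpedia.org/ontology/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT DISTINCT ?name WHERE {
  { dbr:Buddy_MacKay rdfs:label ?name }
  UNION { dbr:Buddy_MacKay foaf:name ?name }
  UNION { dbr:Buddy_MacKay dbo:birthName ?name }
  UNION { ?r dbo:wikiPageRedirects dbr:Buddy_MacKay . ?r rdfs:label ?name }
  FILTER (lang(?name) = "en")
}
"""

sparql = SPARQLWrapper("http://dbpedia.org/sparql")
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
# Collect the literal values into a set of name-variant strings.
variants = {b["name"]["value"] for b in results["results"]["bindings"]}
\end{verbatim}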
 
\begin{table}
\caption{Number of different DBpedia name variants}
\begin{center}
\begin{tabular}{lc}
Name variant & No. of strings \\
\hline
Name & 82 \\
Label & 121 \\
Redirect & 49 \\
Birth Name & 6 \\
Nickname & 1 \\
Alias & 1 \\
Alternative Names & 4 \\
\hline
\end{tabular}
\end{center}
\label{tab:sources}
\end{table}
 
 
 
We have a total of 121 Wikipedia entities. Every entity has a DBpedia label. Only 82 entities have a name string and only 49 entities have redirect strings. Most entities have only one redirect string, but some have several; one entity, Buddy\_MacKay, has the highest number of redirect strings (12). 6 entities have birth names, 1 entity has a nickname, 1 entity has an alias, and 4 entities have alternative names.
 
 
We combined the different name variants we extracted to form a set of strings for each KB entity. For Twitter entities, we used the display names that we collected. We consider the name of an entity as it appears in its URL to be canonical. For example, in http://en.wikipedia.org/wiki/Benjamin\_Bronfman, Benjamin Bronfman is the canonical name of the entity. From the combined name variants and the canonical names, we created four sets of profiles for each entity: canonical (cano), canonical partial (cano-part), all name variants combined (all), and partial names of all name variants (all-part). We refer to the last two profiles as name-variant and name-variant partial. The names in parentheses are used in table captions. A sketch of how these four profiles can be derived is given below.
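
The following sketch shows one way the four profiles could be constructed. It
assumes that ``partial'' names are simply the whitespace-separated tokens of
each name string, and the helper names and example values are ours rather
than part of the official pipeline.
\begin{verbatim}
def tokens(names):
    # Split every name string into its whitespace-separated tokens.
    return {tok for name in names for tok in name.split()}

def build_profiles(canonical, name_variants):
    # canonical     : canonical name string (taken from the Wikipedia URL)
    # name_variants : set of name-variant strings (DBpedia or Twitter
    #                 display names)
    all_names = {canonical} | set(name_variants)
    return {
        "cano":      {canonical},
        "cano-part": tokens({canonical}),
        "all":       all_names,
        "all-part":  tokens(all_names),
    }

# Hypothetical example; the variant strings are placeholders, not real data.
profiles = build_profiles("Benjamin Bronfman", {"variant one", "variant two"})
\end{verbatim}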
 
\subsection{Annotation Corpus}
 
 
The annotation set combines the annotations from both the Training Time Range (TTR) and the Evaluation Time Range (ETR) and consists of 68405 annotations. Its breakdown into training and test sets is shown in Table \ref{tab:breakdown}.
 
 
 
\begin{table}
\caption{Number of annotation documents with respect to different categories (relevance rating, training and testing)}
\begin{center}
\begin{tabular}{l*{3}{c}r}
 & & Vital & Relevant & Total \\
\hline
\multirow{3}{*}{Training} & Wikipedia & 1932 & 2051 & 3672 \\
 & Twitter & 189 & 314 & 488 \\
 & All Entities & 2121 & 2365 & 4160 \\
\hline
\multirow{2}{*}{Testing} & Wikipedia & 6139 & 12375 & 16160 \\
 & Twitter & 1261 & 2684 & 3842 \\