Changeset - fb8b29e39fa2
Arjen de Vries (arjen) - 11 years ago 2014-06-12 03:37:47
arjen.de.vries@cwi.nl
up into analysis almost done
1 file changed with 86 insertions and 26 deletions:
mypaper-final.tex
 
@@ -347,319 +347,384 @@ Stream filtering is then the task to, given a stream of documents of news items,
 
  \item Are some type of documents easily filterable and others not?
 
  \item Does a gain in recall at filtering step translate to a gain in F-measure at the end of the pipeline?
 
  \item What characterizes the vital (and relevant) documents that are
 
    missed in the filtering step?
 
\end{enumerate}
 

	
 
The TREC filtering task and filtering as part of an entity-centric stream
filtering and ranking pipeline have different purposes. The goal of the TREC
filtering track is binary classification: for each incoming document, it
decides whether the document is relevant or not for a given profile. In our
case, documents have graded relevance, and the goal of the filtering stage is
to pass on as many potentially relevant documents as possible while letting
through as few irrelevant documents as possible, so as not to obfuscate the
later stages of the pipeline. Filtering as part of the pipeline requires a
delicate balance between retrieving relevant and irrelevant documents. Because
of this, filtering in this setting can only be studied by binding it to the
later stages of the entity-centric pipeline. This bond influences how we do
evaluation.
 

	
 
To achieve this, we use recall percentages in the filtering stage for the
different choices of entity profiles. However, we use the overall performance
to select the best entity profiles. To generate the overall pipeline
performance we use the official TREC KBA evaluation metric and scripts
\cite{frank2013stream} to report max-F, the maximum F-score obtained over all
relevance cut-offs.
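For concreteness, the following is a minimal sketch (not the official TREC KBA
scoring code) of how max-F can be computed by sweeping the relevance cut-off
over a ranked list of returned documents; the function and variable names are
illustrative only.

\begin{verbatim}
# Minimal illustration of max-F: sweep confidence cut-offs and keep the
# best F-score. The official TREC KBA evaluation scripts remain the
# authoritative implementation; this sketch only mirrors the idea.

def max_f(scored, total_positives):
    """scored: list of (confidence, is_positive) pairs for returned docs.
    total_positives: number of positive (e.g. vital) document-entity pairs."""
    best_f = 0.0
    scored = sorted(scored, key=lambda x: x[0], reverse=True)
    tp = 0
    for rank, (conf, is_pos) in enumerate(scored, start=1):
        if is_pos:
            tp += 1
        precision = tp / rank
        recall = tp / total_positives if total_positives else 0.0
        if precision + recall > 0:
            f = 2 * precision * recall / (precision + recall)
            best_f = max(best_f, f)
    return best_f

# Example: three returned documents, two of which are positive.
print(max_f([(900, True), (700, False), (400, True)], total_positives=2))
\end{verbatim}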
 

	
 
\section{Literature Review}
 
There has been a great deal of recent interest in entity-based filtering and ranking. One manifestation of that is the introduction of TREC KBA in 2012. Following that, a number of studies have addressed the topic \cite{frank2012building, ceccarelli2013learning, taneva2013gem, wang2013bit, balog2013multi}. These works are based on the KBA 2012 task and dataset and address the whole problem of entity filtering and ranking. TREC KBA continued in 2013, but the task underwent some changes. The main changes between 2012 and 2013 are in the number of entities, the type of entities, the corpus, and the relevance ratings.
 
 
The number of entities increased from 29 to 141, and now includes 20 Twitter entities. The TREC KBA 2012 corpus is 1.9TB after xz-compression and has 400M documents. By contrast, the KBA 2013 corpus is 6.45TB after xz-compression and GPG encryption. A version with all non-English documents removed is 4.5TB and consists of 1 billion documents. The 2013 corpus subsumes the 2012 corpus and adds other sources from spinn3r, namely mainstream news, forum, arxiv, classified, reviews and meme-tracker. A more important difference, however, is a change in the definitions of the relevance ratings vital and relevant. While in KBA 2012 a document was judged vital if it had citation-worthy content for a given entity, in 2013 it must also be fresh, that is, the content must trigger an editing of the given entity's KB entry.
 
 
While the tasks of 2012 and 2013 are fundamentally the same, the approaches varied due to the size of the corpus. In 2013, all participants used filtering to reduce the size of the big corpus. They used different ways of filtering: many used two or more different name variants from DBpedia, such as labels, names, redirects, birth names, aliases, nicknames, same-as and alternative names \cite{wang2013bit,dietzumass,liu2013related, zhangpris}. Although most participants used DBpedia name variants, none of them used all the name variants. A few other participants used bold words in the first paragraph of the Wikipedia entity's profile and anchor texts from other Wikipedia pages \cite{bouvierfiltering, niauniversity}. One participant used a Boolean \emph{and} query built from the tokens of the canonical names \cite{illiotrec2013}.
 
 
All of the studies used filtering as their first step to generate a smaller set of documents, and many systems suffered from poor recall, which strongly affected their overall performance \cite{frank2012building}. Although systems used different entity profiles to filter the stream and achieved different performance levels, there is no study of the factors and choices that affect the filtering step itself. Of course, filtering has been extensively examined in the TREC Filtering track \cite{robertson2002trec}. However, those studies were isolated in the sense that they were intended to optimize recall. What we have here is a different scenario: documents have relevance ratings, so we want to study filtering in relation to relevance to the entities, which can only be done by coupling filtering to the later stages of the pipeline. To the best of our knowledge this is new, and the TREC KBA problem setting and datasets offer a good opportunity to examine this aspect of filtering.
 
 
Moreover, there has been no study at this scale, nor a study into what types of documents defy filtering and why. In this paper, we conduct a manual examination of the documents that are missed and classify them into different categories. We also estimate the general upper bound of recall using the different entity profiles and choose the best profile, that is, the one that results in an increased overall performance as measured by F-measure.
 
 
\section{Method}
 
All analyses in this paper are carried out on the documents that have
relevance assessments associated with them. For this purpose, we extracted
those documents from the big corpus. We experiment with all KB entities. For
each KB entity, we extract different name variants from DBpedia and Twitter.
 
 
 
\subsection{Entity Profiling}
 
 
We build entity profiles for the KB entities of interest. We have two types:
Twitter and Wikipedia. Both types of entities have been selected, on purpose
by the track organisers, to occur only sparsely and to be less documented.
For the Wikipedia entities, we fetch different name variants from DBpedia:
name, label, birth name, alternative names, redirects, nickname, and alias.
These extraction results are summarized in Table \ref{tab:sources}.
For the Twitter entities, we visit their respective Twitter pages and fetch
their display names.
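As an illustration only, name variants could be collected from the public
DBpedia SPARQL endpoint along the lines of the following sketch; the endpoint,
the property list, and the example entity URI are assumptions and not
necessarily the exact extraction procedure we used.

\begin{verbatim}
# Sketch: fetch DBpedia name variants for one entity (assumed properties;
# the actual extraction may differ per DBpedia release).
from SPARQLWrapper import SPARQLWrapper, JSON

def dbpedia_name_variants(resource_uri):
    sparql = SPARQLWrapper("http://dbpedia.org/sparql")  # public endpoint
    query = """
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        PREFIX foaf: <http://xmlns.com/foaf/0.1/>
        PREFIX dbo:  <http://dbpedia.org/ontology/>
        SELECT DISTINCT ?name WHERE {
          { <%s> rdfs:label ?name }
          UNION { <%s> foaf:name ?name }
          UNION { <%s> dbo:birthName ?name }
          UNION { <%s> dbo:alias ?name }
          UNION { ?r dbo:wikiPageRedirects <%s> ; rdfs:label ?name }
          FILTER (lang(?name) = "en")
        }""" % ((resource_uri,) * 5)
    sparql.setQuery(query)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return {b["name"]["value"] for b in results["results"]["bindings"]}

print(dbpedia_name_variants("http://dbpedia.org/resource/Buddy_MacKay"))
\end{verbatim}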
 
\begin{table}
 
\caption{Number of different DBpedia name variants}
 
\begin{center}
 
 
 \begin{tabular}{lc}
 Name variant& No. of strings  \\
\hline
 Name  &82\\
 Label   &121\\
 Redirect  &49 \\
 Birth Name &6\\
 Nickname & 1\\
 Alias &1 \\
 Alternative Names &4\\
\hline
\end{tabular}
 
\end{center}
 
\label{tab:sources}
 
\end{table}
 
 
 
 
The collection contains a total of 121 Wikipedia entities. Every entity has a
corresponding DBpedia label. Only 82 entities have a name string and only 49
entities have redirect strings. (Most of these entities have only one redirect
string, except for a few cases with multiple redirect strings; Buddy\_MacKay
has the highest number of redirect strings, 12.) Furthermore, 6 entities have
birth names, 1 entity has a nickname, 1 entity has an alias, and 4 entities
have alternative names.
 

	
 
We combine the different name variants we extracted to form a set of strings
for each KB entity. For Twitter entities, we use the display names that we
collected.

We consider the name of an entity that is part of its URL as canonical. For
example, in http://en.wikipedia.org/wiki/Benjamin\_Bronfman, Benjamin Bronfman
is the canonical name of the entity. From the combined name variants and the
canonical names, we created four sets of profiles for each entity: canonical
(cano), canonical partial (cano-part), all name variants combined (all), and
partial names of all name variants (all-part). We refer to the last two
profiles as name-variant and name-variant partial. The names in parentheses
are used in table captions.
 

	
 
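The following minimal sketch illustrates how the four profiles could be
constructed and matched against a document; the tokenisation and substring
matching shown here are simplifying assumptions, not necessarily our exact
implementation.

\begin{verbatim}
# Sketch: build the four entity profiles (cano, cano-part, all, all-part)
# and match them against a document by simple substring search.
import re
from urllib.parse import unquote

def canonical_name(wiki_url):
    # e.g. http://en.wikipedia.org/wiki/Benjamin_Bronfman -> "benjamin bronfman"
    return unquote(wiki_url.rsplit("/", 1)[-1]).replace("_", " ").lower()

def partials(names):
    # split every name into its individual tokens
    return {tok for name in names for tok in re.findall(r"\w+", name.lower())}

def build_profiles(wiki_url, name_variants):
    cano = {canonical_name(wiki_url)}
    all_names = {n.lower() for n in name_variants} | cano
    return {
        "cano": cano,
        "cano-part": partials(cano),
        "all": all_names,
        "all-part": partials(all_names),
    }

def matches(profile, document_text):
    text = document_text.lower()
    return any(name in text for name in profile)

profiles = build_profiles("http://en.wikipedia.org/wiki/Benjamin_Bronfman",
                          ["Benjamin Zachary Bronfman"])
print(matches(profiles["cano-part"], "Bronfman attended the meeting."))
\end{verbatim}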
 
\subsection{Annotation Corpus}
 
 
The annotation set is a combination of the annotations from the Training Time Range (TTR) and the Evaluation Time Range (ETR), and consists of 68405 annotations. Its breakdown into training and test sets is shown in Table \ref{tab:breakdown}.
 
 
 
\begin{table}
 
\caption{Number of annotation documents with respect to different categories (relevance rating, training and testing)}
 
\begin{center}
 
\begin{tabular}{l*{3}{c}r}
 &&Vital&Relevant  &Total \\
\hline
\multirow{3}{*}{Training}  &Wikipedia & 1932  &2051& 3672\\
			  &Twitter&189   &314&488 \\
			   &All Entities&2121&2365&4160\\
\hline 
\multirow{3}{*}{Testing}&Wikipedia &6139   &12375 &16160 \\
                         &Twitter&1261   &2684&3842  \\
                         &All Entities&7400   &12059&20002 \\
\hline 
\multirow{3}{*}{Total} & Wikipedia       &8071   &14426&19832  \\
                       &Twitter  &1450  &2998&4330  \\
                       &All Entities&9521   &17424&24162 \\
\hline
\end{tabular}
 
\end{center}
 
\label{tab:breakdown}
 
\end{table}
 
 
 
 
 
 
 
 
%Most (more than 80\%) of the annotation documents are in the test set.
 
The 2013 training and test data contain 68405
 
annotations, of which 50688 are unique document-entity pairs.   Out of
 
these, 24162 unique document-entity pairs are vital (9521) or relevant
 
(17424).
 
 
 
 
 
\section{Experiments and Results}
 
 We conducted experiments to study the effect of cleansing, different entity profiles, types of entities, categories of documents, relevance ratings (vital or relevant), and the impact on classification. In the following subsections, we present and describe the results for each of these categories.
 
 
 
 \subsection{Cleansing: raw or cleansed}
 
\begin{table}
 
\caption{Percentage of vital or relevant documents retrieved under different name variants (upper part from cleansed, lower part from raw)}
\begin{center}
\begin{tabular}{l@{\quad}rrrrrrr}
\hline
&cano&cano-part  &all &all-part  \\
\hline
   Wikipedia      &61.8  &74.8  &71.5  &77.9\\
   Twitter        &1.9   &1.9   &41.7  &80.4\\
   All Entities   &51.0  &61.7  &66.2  &78.4 \\
\hline
\hline
   Wikipedia      &70.0  &86.1  &82.4  &90.7\\
   Twitter        & 8.7  &8.7   &67.9  &88.2\\
   All Entities   &59.0  &72.2  &79.8  &90.2\\
\hline
 
 
\end{tabular}
 
\end{center}
 
\label{tab:name}
 
\end{table}
 
 
 
The upper part of Table \ref{tab:name} shows the recall performances on the cleansed version and the lower part on the raw version. The recall performances for all entity types increase substantially in the raw version. Recall increases on Wikipedia entities vary from 8.2 to 12.8 percentage points, on Twitter entities from 6.8 to 26.2, and on all entities from 8.0 to 13.6. These increases are substantial: to put it into perspective, an 11.8-point increase in recall on all entities amounts to the retrieval of 2864 more unique document-entity pairs. %This suggests that cleansing has removed some documents that we could otherwise retrieve. 
 
 
\subsection{Entity Profiles}
 
If we look at the recall performances for the raw corpus, filtering documents by canonical names achieves a recall of 59\%. Adding the other name variants improves the recall to 79.8\%, an increase of 20.8\%. This means that 20.8\% of documents mention the entities by names other than their canonical names. Canonical partial achieves a recall of 72\% and name-variant partial achieves 90.2\%. This says that 18.2\% of documents mention the entities by partial names of other, non-canonical name variants. 
 
 
 
%\begin{table*}
 
%\caption{Breakdown of recall percentage increases by document categories }
 
%\begin{center}\begin{tabular}{l*{9}{c}r}
 
% && \multicolumn{3}{ c| }{All entities}  & \multicolumn{3}{ c| }{Wikipedia} &\multicolumn{3}{ c| }{Twitter} \\ 
 
% & &others&news&social & others&news&social &  others&news&social \\
 
%\hline
 
% 
 
%\multirow{4}{*}{Vital}	 &cano-part $-$ cano  	&8.2  &14.9    &12.3           &9.1  &18.6   &14.1             &0      %&0       &0  \\
 
%                         &all$-$ cano         	&12.6  &19.7    &12.3          &5.5  &15.8   &8.4             &73   &35%.9    &38.3  \\
 
%	                 &all-part $-$ cano\_part&9.7    &18.7  &12.7       &0    &0.5  &5.1        &93.2 & 93 &64.4 \\%
 
%	                 &all-part $-$ all     	&5.4  &13.9     &12.7           &3.6  &3.3    &10.8              &20.3 %  &57.1    &26.1 \\
 
%	                 \hline
 
%	                 
 
%\multirow{4}{*}{Relevant}  &cano-part $-$ cano  	&10.5  &15.1    &12.2          &11.1  &21.7   &14.1            % &0   &0    &0  \\
 
%                         &all $-$ cano         	&11.7  &36.6    &17.3          &9.2  &19.5   &9.9             &%54.5   &76.3   &66  \\
 
%	                 &all-part $-$ cano-part &4.2  &26.9   &15.8          &0.2    &0.7    &6.7           &72.2   &8%7.6 &75 \\
 
%	                 &all-part $-$ all     	&3    &5.4     &10.7           &2.1  &2.9    &11              &18.2   &%11.3    &9 \\
 
%	                 
 
%	                 \hline
 
%\multirow{4}{*}{total} 	&cano-part $-$ cano   	&10.9   &15.5   &12.4         &11.9  &21.3   &14.4          &0 %    &0       &0\\
 
%			&all $-$ cano         	&13.8   &30.6   &16.9         &9.1  &18.9   &10.2          &63.6  &61.8%    &57.5 \\
 
%                        &all-part $-$ cano-part	&7.2   &24.8   &15.9          &0.1    &0.7    &6.8           &8%2.2  &89.1    &71.3\\
 
%                        &all-part $-$ all     	&4.3   &9.7    &11.4           &3.0  &3.1   &11.0          &18.9  &27.3%    &13.8\\	                 
 
%	                 
 
%                                  	                 
 
%\hline
 
%\end{tabular}
 
%\end{center}
 
%\label{tab:source-delta2}
 
%\end{table*}
 
 
 
 \begin{table*}
 
\caption{Breakdown of recall performances by document source category}
 
\begin{center}\begin{tabular}{l*{9}{c}r}
 
 && \multicolumn{3}{ c| }{All entities}  & \multicolumn{3}{ c| }{Wikipedia} &\multicolumn{3}{ c| }{Twitter} \\ 
 
 & &others&news&social & others&news&social &  others&news&social \\
 
\hline
 
 
 
\multirow{4}{*}{Vital} &cano                 &82.2& 65.6& 70.9& 90.9&  80.1& 76.8&   8.1&  6.3&  30.5\\
 
&cano part & 90.4& 80.6& 83.1& 100.0& 98.7& 90.9&   8.1&  6.3&  30.5\\
 
&all  & 94.8& 85.4& 83.1& 96.4&  95.9& 85.2&   81.1& 42.2& 68.8\\
 
&all part &100& 99.2& 95.9& 100.0&  99.2& 96.0&   100&  99.3& 94.9\\
 
\hline
 
	                 
 
\multirow{4}{*}{Relevant} &cano & 84.2& 53.4& 55.6& 88.4& 75.6& 63.2& 10.6& 2.2& 6.0\\
 
&cano part &94.7& 68.5& 67.8& 99.6& 97.3& 77.3& 10.6& 2.2& 6.0\\
 
&all & 95.8& 90.1& 72.9& 97.6& 95.1& 73.1& 65.2& 78.4& 72.0\\
 
&all part &98.8& 95.5& 83.7& 99.7& 98.0& 84.1& 83.3& 89.7& 81.0\\
 
	                 
 
	                 \hline
 
\multirow{4}{*}{total} 	&cano    &   81.1& 56.5& 58.2& 87.7& 76.4& 65.7& 9.8& 3.6& 13.5\\
 
&cano part &92.0& 72.0& 70.6& 99.6& 97.7& 80.1& 9.8& 3.6& 13.5\\
 
&all & 94.8& 87.1& 75.2& 96.8& 95.3& 75.8& 73.5& 65.4& 71.1\\
 
&all part & 99.2& 96.8& 86.6& 99.8& 98.4& 86.8& 92.4& 92.7& 84.9\\
 
	                 
 
\hline
 
\end{tabular}
 
\end{center}
 
\label{tab:source-delta}
 
\end{table*}
 
    
 
 
The breakdown of the raw corpus by document source category is presented in Table
\ref{tab:source-delta}.
 
 
 
 
 
 
 
 
 
 \subsection{Relevance Rating: vital and relevant}
 
 
 
 
 
 
When comparing recall for vital and relevant documents, we observe that
canonical names are more effective for vital than for relevant documents, in
particular for the Wikipedia entities.
%For example, the recall for news is 80.1 and for social is 76, while the corresponding recall in relevant is 75.6 and 63.2 respectively.
We conclude that the most relevant documents mention the entities by their
common name variants.
 
%  \subsection{Difference by document categories}
 
%  
 
 
 
%  Generally, there is greater variation in relevant rank than in vital. This is specially true in most of the Delta's for Wikipedia. This  maybe be explained by news items referring to  vital documents by a some standard name than documents that are relevant. Twitter entities show greater deltas than Wikipedia entities in both vital and relevant. The greater variation can be explained by the fact that the canonical name of Twitter entities retrieves very few documents. The deltas that involve canonical names of Twitter entities, thus, show greater deltas.  
 
%  
 
 
% If we look in recall performances, In Wikipedia entities, the order seems to be others, news and social. This means that others achieve a higher recall than news than social.  However, in Twitter entities, it does not show such a strict pattern. In all, entities also, we also see almost the same pattern of other, news and social. 
 
 
 
 
  
 
\subsection{Recall across document categories: others, news and social}
 
 
The recall for Wikipedia entities in Table \ref{tab:name} ranged from 61.8\%
(canonicals) to 77.9\% (name-variants). Table \ref{tab:source-delta} shows how
recall is distributed across document categories. For Wikipedia entities,
across all entity profiles, others have the highest recall, followed by news,
and then social. While the recall for news ranges from 76.4\% to 98.4\%, the
recall for social documents ranges from 65.7\% to 86.8\%. For Twitter
entities, however, the pattern is different: for canonicals (and their
partials), social documents achieve higher recall than news.
 
%This indicates that social documents refer to Twitter entities by their canonical names (user names) more than news do. In name- variant partial, news achieve better results than social. The difference in recall between canonicals and name-variants show that news do not refer to Twitter entities by their user names, they refer to them by their display names.
 
 
Overall, across all entity types and all entity profiles, documents in the
others category achieve a higher recall than news documents, and news documents, in turn, achieve a higher recall than social documents. 
 
 
% This suggests that social documents are the hardest  to retrieve.  This  makes sense since social posts such as tweets and blogs are short and are more likely to point to other resources, or use short informal names.
 
 
 
%%NOTE TABLE REMOVED:\\\\
 
%
 
%We computed four percentage increases in recall (deltas)  between the
 
%different entity profiles (Table \ref{tab:source-delta2}). The first
 
%delta is the recall percentage between canonical partial  and
 
%canonical. The second  is  between name= variant and canonical. The
 
%third is the difference between name-variant partial  and canonical
 
%partial and the fourth between name-variant partial and
 
%name-variant. we believe these four deltas offer a clear meaning. The
 
 
%delta between name-variant and canonical means the percentage of
 
%documents that the new name variants retrieve, but the canonical name
 
%does not. Similarly, the delta between  name-variant partial and
 
%partial canonical-partial means the percentage of document-entity
 
%pairs that can be gained by the partial names of the name variants. 
 
% The  biggest delta  observed is in Twitter entities between partials
 
% of all name variants and partials of canonicals (93\%). delta. Both
 
% of them are for news category.  For Wikipedia entities, the highest
 
% delta observed is 19.5\% in cano\_part - cano followed by 17.5\% in
 
% all\_part in relevant. 
 
  
 
  \subsection{Entity Types: Wikipedia and Twitter}
 
 
Table \ref{tab:name} summarizes the differences between Wikipedia and Twitter
entities. For Wikipedia entities, the canonical profile achieves a recall of
70\%, while canonical partial achieves a recall of 86.1\%. This is an increase
in recall of 16.1\%. By contrast, the increase in recall of name-variant
partial over name-variant is 8.3\%.
%This high increase in recall when moving from canonical names to their
%partial names, in comparison to the lower increase when moving from
%all name variants to their partial names can be explained by
%saturation: documents have already been extracted by the different
%name variants and thus using their partial names do not bring in many
%new relevant documents.
For Wikipedia entities, canonical partial achieves better recall than
name-variant in both the cleansed and the raw corpus.  %In the raw extraction, the difference is about 3.7.
For Twitter entities, however, the situation is different: canonical and
canonical partial perform the same and their recall is very low. Canonical and
canonical partial are identical for Twitter entities because they are one-word
strings. For example, in https://twitter.com/roryscovel, ``roryscovel'' is the
canonical name and its partial is the same.
 
%The low recall is because the canonical names of Twitter entities are
 
%not really names; they are usually arbitrarily created user names. It
 
%shows that  documents  refer to them by their display names, rarely
 
%by their user name, which is reflected in the name-variant recall
 
%(67.9\%). The use of name-variant partial increases the recall to
 
%88.2\%.
 
 
 
 
Tables \ref{tab:name} and \ref{tab:source-delta} show that recall for Wikipedia entities is higher than for Twitter entities. Generally, at both the aggregate and the document category level, we observe that recall increases as we move from canonical to canonical partial, to name-variant, and to name-variant partial. The only case where this does not hold is the transition from Wikipedia's canonical partial to name-variant. At the aggregate level (as can be inferred from Table \ref{tab:name}), the difference in performance between canonical and name-variant partial is 31.9\% on all entities, 20.7\% on Wikipedia entities, and 79.5\% on Twitter entities. This is a substantial performance difference. 
 
 
 
%% TODO: PERHAPS SUMMARY OF DISCUSSION HERE
 
 
 
\section{Impact on classification}
 
  In the overall experimental setup, classification, ranking, and evaluation are kept constant. Following the settings of \cite{balog2013multi}, we use WEKA's\footnote{http://www.cs.waikato.ac.nz/$\sim$ml/weka/} Random Forest classifier. However, we use a smaller set of features, which we found to be more effective: running the classification algorithm with our feature implementations outperformed the original feature set. The 13 features we used are listed below, after a brief illustrative sketch of the classification step.
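The sketch below illustrates this classification step with scikit-learn's
RandomForestClassifier on a hypothetical precomputed feature matrix; our
actual experiments use WEKA, so this is only an analogue for clarity.

\begin{verbatim}
# Illustrative only: the paper's setup uses WEKA's Random Forest; this
# sketch shows the equivalent step with scikit-learn on a hypothetical
# feature matrix X (one row per document-entity pair, 13 columns).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.random((100, 13))     # 13 features per document-entity pair
y_train = rng.integers(0, 2, 100)   # 1 = vital, 0 = not vital (hypothetical)
X_test = rng.random((20, 13))

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
# Confidence scores are later used for ranking and the max-F cut-off sweep
scores = clf.predict_proba(X_test)[:, 1]
print(scores[:5])
\end{verbatim}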
 
  
 
\paragraph{Google's Cross Lingual Dictionary (GCLD)}
 
 
This is a mapping of strings to Wikipedia concepts and vice versa
\cite{spitkovsky2012cross}. We use the probability with which a string is used
as anchor text for a Wikipedia entity.
 
 
\paragraph{jac} 
 
  Jaccard similarity between the document and the entity's Wikipedia page
 
\paragraph{cos} 
 
  Cosine similarity between the document and the entity's Wikipedia page
 
\paragraph{kl} 
 
  KL-divergence between the document and the entity's Wikipedia page
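A minimal sketch of how these three document--entity similarity features (jac,
cos, kl) could be computed from bag-of-words counts follows; the preprocessing
and the add-one smoothing used for the KL-divergence are assumptions rather
than our exact settings.

\begin{verbatim}
# Sketch: Jaccard, cosine and KL-divergence between a document and an
# entity's Wikipedia page, both given as plain text.
import math
from collections import Counter

def bow(text):
    return Counter(text.lower().split())

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def kl_divergence(a, b):
    # add-one smoothing over the joint vocabulary (an assumption)
    vocab = set(a) | set(b)
    total_a = sum(a.values()) + len(vocab)
    total_b = sum(b.values()) + len(vocab)
    return sum(((a[t] + 1) / total_a) *
               math.log(((a[t] + 1) / total_a) / ((b[t] + 1) / total_b))
               for t in vocab)

doc = bow("Bronfman launched a new venture")
page = bow("Benjamin Bronfman is a musician and environmentalist")
print(jaccard(doc, page), cosine(doc, page), kl_divergence(doc, page))
\end{verbatim}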
 
  
 
  \paragraph{PPR}
 
For each entity, we computed a personalized PageRank (PPR) score from a
Wikipedia snapshot and kept the top 100 entities along with the corresponding
scores.
 
 
 
\paragraph{Surface Form (sForm)}
 
For each Wikipedia entity, we gathered DBpedia name variants. These
 
are redirects, labels and names.
 
 
 
\paragraph{Context (contxL, contxR)}
 
From the WikiLink corpus \cite{singh12:wiki-links}, we collected all left and
right contexts (2 sentences to the left and 2 sentences to the right) and
generated n-grams (from unigrams up to 4-grams) for each left and right
context. Finally, we select the 5 most frequent n-grams for each context.
 
 
\paragraph{FirstPos}
 
  Term position of the first occurrence of the target entity in the document 
 
  body 
 
\paragraph{LastPos }
 
@@ -848,103 +913,98 @@ entity profiles, relevance ratings, categories of documents, entity profiles. We
 
 
Experimental results show that cleansing can remove all or part of the content of documents, making them difficult to retrieve; these documents can otherwise be retrieved from the raw version. The use of the raw corpus brings in documents that cannot be retrieved from the cleansed corpus. This is true for all entity profiles and all entity types. The recall difference between the cleansed and raw corpus ranges from 6.8\% to 26.2\%. These increases, in actual document-entity pairs, are in the thousands, which we believe is substantial. However, the recall increases do not always translate to an improved F-score in overall performance. In the vital relevance ranking, for both Wikipedia and aggregate entities, the cleansed version performs better than the raw version. For Twitter entities, the raw corpus performs better except in the case of the name-variant (all) profile, though the difference is negligible. For vital-relevant, however, the raw corpus performs better across all entity profiles and entity types, except for partial canonical names of Wikipedia entities.
 
 
The use of different profiles also shows a big difference in recall. Except for Wikipedia entities, where canonical partial achieves better recall than name-variant, there is a steady increase in recall from canonical to canonical partial, to name-variant, and to name-variant partial. This pattern is also observed across the document categories. However, here too, the relationship between the gain in recall as we move from a less rich profile to a richer profile and the overall performance as measured by F-score is not linear.
 
 
 
%%%%% MOVED FROM LATER ON - CHECK FLOW
 
 
There is a trade-off between using a richer entity profile and retrieving irrelevant documents: the richer the profile, the more relevant documents it retrieves, but also the more irrelevant documents. To put this into perspective, let us compare the number of documents that are retrieved with canonical partial and with name-variant partial. Using the raw corpus, the former retrieves a total of 2547487 documents and achieves a recall of 72.2\%. By contrast, the latter retrieves a total of 4735318 documents and achieves a recall of 90.2\%. The total number of documents extracted increases by 85.9\% for a recall gain of 18\%; the rest, that is 67.9\%, are newly introduced irrelevant documents.
 
 
%%%%%%%%%%%%
 
 
 
In the vital ranking, across all entity profiles and both types of corpus, Wikipedia's canonical partial achieves better performance than any other Wikipedia entity profile. For vital-relevant documents too, Wikipedia's canonical partial achieves the best result; on the raw corpus, it achieves a little less than name-variant partial. For Twitter entities, the name-variant partial profile achieves the highest F-score across all entity profiles and both types of corpus.
 
 
 
There are 3 interesting observations: 
 
 
1) Cleansing impacts Twitter entities and relevant documents. This is
validated by the observation that recall gains for Twitter entities and for
the relevant category in the raw corpus also translate into overall
performance gains. This observation implies that cleansing removes relevant
and social documents more than it removes vital and news documents. That it
removes relevant documents more than vital ones can be explained by the fact
that cleansing removes related links and adverts, which may contain a mention
of the entities; one example we saw was an image whose accompanying text
mentioned an entity name and was actually relevant, but was removed by
cleansing. That it removes social documents can be explained by the fact that
most of the documents missing from the cleansed corpus are social, and all of
the documents missing from the raw corpus are social. In both cases, social
documents seem to suffer from the text transformation and cleansing processes.
 
 
%%%% NEEDS WORK:
 
 
2) Taking both performances (recall at filtering and overall F-score at
evaluation) into account, there is a clear trade-off between using a richer
entity profile and retrieving irrelevant documents, as discussed above: the
richer the profile, the more relevant documents it retrieves, but also the
more irrelevant documents.
 
 
Wikipedia's canonical partial is the best entity profile for Wikipedia entities. It is interesting to see that the retrieval of thousands of additional vital-relevant document-entity pairs by name-variant partial does not translate into an increase in overall performance. It is even more interesting since canonical partial was not, to the best of our knowledge, considered as a contending profile for stream filtering by any participant. With this understanding, there is actually no need to fetch the different name variants from DBpedia, a saving of time and computational resources.
 
 
 
%%%%%%%%%%%%
 
 
 
 
 
The deltas between entity profiles, relevance ratings, and document categories reveal four differences between Wikipedia and Twitter entities. 1) For Wikipedia entities, the difference between canonical partial and canonical (16.1\%) is higher than between name-variant partial and name-variant (8.3\%). This can be explained by saturation: documents have already been extracted by the name-variants, and thus using their partials does not bring in many new relevant documents. 2) Twitter entities are mentioned by name-variant or name-variant partial, as seen in the high recall these profiles achieve compared to the low recall achieved by canonical (or its partial). This indicates that documents (especially news and others) almost never use user names to refer to Twitter entities. Name-variant partials are the best entity profiles for Twitter entities. 3) However, comparatively speaking, social documents refer to Twitter entities by their user names more than news and others do, suggesting a difference in adherence to standards in names and naming. 4) Wikipedia entities achieve higher recall and higher overall performance.
 
 
The high recall and subsequently higher overall performance of Wikipedia entities can be due to two reasons. 1) Wikipedia entities are better described than Twitter entities; the fact that we can retrieve different name variants from DBpedia is a measure of this relatively rich description. Rich description plays a role both in filtering and in the computation of features such as similarity measures in later stages of the pipeline. By contrast, we have only two names for Twitter entities: their user names and the display names that we collect from their Twitter pages. 2) There is no DBpedia-like resource for Twitter entities from which alternative names can be collected.
 
 
 
In the experimental results, we also observed that recall scores in the vital category are higher than in the relevant category. This observation confirms a commonly held assumption: mention (frequency) is related to relevance. This is the assumption behind using term frequency as an indicator of document relevance in many information retrieval systems. The more a document explicitly mentions an entity by name, the more likely the document is vital for the entity.
 
 
Across document categories, we observe a recall pattern of others, followed by news, and then by social. Social documents are the hardest to retrieve. This can be explained by the fact that social documents (tweets and blogs) are more likely to point to a resource where the entity is mentioned, to mention the entity with some short abbreviation, or to talk about the entity without mentioning it, relying on shared context. By contrast, news documents mention the entities they talk about using the common name variants more than social documents do. However, the greater difference in percentage recall between the different entity profiles in the news category indicates that news refer to a given entity by different names, rather than by one standard name; the others category shows the least variation, and social documents fall in between. For Wikipedia entities, the deltas between canonical partial and canonical, and between name-variant and canonical, are high, an indication that canonical partials and name-variants bring in new relevant documents that cannot be retrieved by canonicals. The other two deltas are very small, suggesting that the partial names of the name variants do not bring in new relevant documents.
 
 
 
\section{Unfilterable documents}
 
 
\subsection{Missing vital-relevant documents \label{miss}}
 
 
% 
 
 
 The use of name-variant partial for filtering is an aggressive attempt to retrieve as many relevant documents as possible at the cost of also retrieving irrelevant documents. However, we still miss about 2363 (10\%) of the vital-relevant documents. Why are these documents missed? If they are not mentioned by partial names of name variants, what are they mentioned by? Table \ref{tab:miss} shows the documents that we miss with respect to the cleansed and raw corpus. The upper part shows the number of documents missing from the cleansed and raw versions of the corpus; the lower part shows the intersections and exclusions of the two.
 
 
\begin{table}
 
\caption{The number of documents missing  from raw and cleansed extractions. }
 
\begin{center}
 
\begin{tabular}{l@{\quad}llllll}
 
\hline
 
\multicolumn{1}{l}{\rule{0pt}{12pt}category}&\multicolumn{1}{l}{\rule{0pt}{12pt}Vital }&\multicolumn{1}{l}{\rule{0pt}{12pt}Relevant }&\multicolumn{1}{l}{\rule{0pt}{12pt}Total }\\[5pt]
 
\hline
 
 
Cleansed &1284 & 1079 & 2363 \\
 
Raw & 276 & 4951 & 5227 \\
 
\hline
 
 missing only from cleansed &1065&2016&3081\\
 
  missing only from raw  &57 &160 &217 \\
 
  Missing from both &219 &1927&2146\\
 
\hline
 
 
 
 
\end{tabular}
 
\end{center}
 
\label{tab:miss}
 
\end{table}
 
 
One would assume that the set of document-entity pairs extracted from the cleansed corpus is a subset of those extracted from the raw corpus. We find that this is not the case. There are 217 unique document-entity pairs that are retrieved from the cleansed corpus but not from the raw one; 57 of them are vital. Similarly, there are 3081 document-entity pairs that are missing from the cleansed corpus but are present in the raw one; 1065 of them are vital. Examining the content of the documents reveals that this is due to missing parts of text in the corresponding documents. All the documents that we miss from the raw corpus are social: tweets, blogs, and posts from other social media. To meet the format of the raw data (binary byte array), some of them must have been converted after collection and, on the way, lost part or all of their content. It is similar for the documents that we miss from the cleansed corpus: part or all of the content is lost during the cleansing process (the removal of
HTML tags and non-English documents). In both cases, the mention of the entity happened to be in the part of the text that was cut out during transformation.
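The exclusive and shared misses in Table \ref{tab:miss} follow from simple set
operations over the two sets of missed document-entity pairs, as in the
following sketch (the pairs shown are placeholders, not actual data).

\begin{verbatim}
# Sketch: given the vital-relevant pairs missed by the cleansed and raw
# extractions, derive the exclusive and shared misses of Table 'tab:miss'.
missed_cleansed = {("doc1", "ent_a"), ("doc2", "ent_b"), ("doc3", "ent_c")}
missed_raw = {("doc2", "ent_b"), ("doc4", "ent_d")}

only_cleansed = missed_cleansed - missed_raw   # missing only from cleansed
only_raw = missed_raw - missed_cleansed        # missing only from raw
both = missed_cleansed & missed_raw            # missing from both

print(len(only_cleansed), len(only_raw), len(both))
\end{verbatim}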
 
 
 
 
 The most interesting set of relevance judgments are those that we miss from both the raw and the cleansed extractions. These are 2146 unique document-entity pairs, 219 of which have vital relevance judgments. The missed vital annotations involve 28 Wikipedia and 7 Twitter entities, a total of 35. The great majority (86.7\%) of these documents are social. This suggests that social documents (tweets and blogs) can talk about the entities without mentioning them by name more than news and others do. This is, of course, in line with intuition.
 
   
 