Changeset - fe74b611603f
[Not reviewed]
Gebrekirstos Gebremeskel - 11 years ago 2014-06-12 02:54:53
destinycome@gmail.com
updated
1 file changed with 53 insertions and 1 deletions:
mypaper-final.tex
 
@@ -309,385 +309,437 @@ The annotation set is a combination of the annotations from before the Training
 
                        
 
\hline 
 
\multirow{2}{*}{Testing}&Wikipedia &6139   &12375 &16160 \\
 
                         &Twitter&1261   &2684&3842  \\
 
                         &All entities&7400   &12059&20002 \\
 
                         
 
             \hline 
 
\multirow{2}{*}{Total} & Wikipedia       &8071   &14426&19832  \\
 
                       &Twitter  &1450  &2998&4330  \\
 
                       &All entities&9521   &17424&24162 \\
 
	                 
 
\hline
 
\end{tabular}
 
\end{center}
 
\label{tab:breakdown}
 
\end{table}
 
 
 
 
 
 
 
Most (more than 80\%) of the annotated documents are in the test set. The combined training and test data for 2013 contain 68405 annotations, of which 50688 are unique document-entity pairs. Of these 50688 pairs, 24162 are vital-relevant: 9521 are vital and 17424 are relevant.
 
 
 
 
 
\section{Experiments and Results}
 
 We conducted experiments to study the effect of cleansing, entity profiles, entity types, document categories, and relevance ranks (vital or relevant), as well as their impact on classification. In the following subsections, we present and describe the results for each of these factors.
 
 
 
 \subsection{Cleansing: raw or cleansed}
 
\begin{table}
 
\caption{Recall (\%) of vital-relevant documents retrieved under different name variants (upper part from cleansed, lower part from raw)}
 
\begin{center}
 
\begin{tabular}{l@{\quad}lllllll}
 
\hline
 
&cano&cano-part  &all &all-part  \\
 
\hline
 
 
 
 
   all-entities   &51.0  &61.7  &66.2  &78.4 \\	
 
   Wikipedia      &61.8  &74.8  &71.5  &77.9\\
 
   twitter        &1.9   &1.9   &41.7  &80.4\\
 
  
 
 
 
\hline
 
\hline
 
  all-entities    &59.0  &72.2  &79.8  &90.2\\
 
   Wikipedia      &70.0  &86.1  &82.4  &90.7\\
 
   twitter        & 8.7  &8.7   &67.9  &88.2\\
 
\hline
 
 
\end{tabular}
 
\end{center}
 
\label{tab:name}
 
\end{table}
 
 
 
The upper part of Table \ref{tab:name} shows recall on the cleansed version of the corpus and the lower part on the raw version. Recall increases substantially for all entity types when the raw version is used: by 8.2 to 12.8 percentage points for Wikipedia entities, by 6.8 to 26.2 points for Twitter entities, and by 8.0 to 13.6 points for all entities combined. To put this into perspective, an 11.8-point increase in recall on all entities corresponds to the retrieval of 2864 more unique document-entity pairs. %This suggests that cleansing has removed some documents that we could otherwise retrieve. 
 
 
\subsection{Entity Profiles}
 
Looking at the recall performances on the raw corpus, filtering documents by canonical names achieves a recall of 59\%. Adding the other name variants improves recall to 79.8\%, an increase of 20.8 percentage points, which means that 20.8\% of the documents mention the entities by names other than their canonical names. Canonical partial achieves a recall of 72.2\% and name-variant partial achieves 90.2\%, indicating that a further 18\% of the documents mention the entities only by partial names of non-canonical name variants. 
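To make the profiles concrete, the sketch below (Python; an illustration under our own assumptions about data structures and case-insensitive substring matching, not the actual implementation) shows how the four profiles can be derived from a canonical name and its variants, and how document-entity recall is computed against the annotated pairs.

\begin{verbatim}
# Illustrative sketch only: build the four entity profiles and compute
# document-entity recall by substring matching (our assumptions, not the
# actual implementation used in the experiments).

def build_profiles(canonical, name_variants):
    """Return the four profiles: cano, cano-part, all, all-part."""
    cano = {canonical}
    cano_part = cano | set(canonical.split())      # partial canonical names
    all_names = cano | set(name_variants)          # + DBpedia/Twitter variants
    all_part = all_names | {p for n in all_names for p in n.split()}
    return {"cano": cano, "cano-part": cano_part,
            "all": all_names, "all-part": all_part}

def recall(names_by_entity, documents, annotated_pairs):
    """names_by_entity: entity_id -> set of names for one profile type.
    annotated_pairs: set of (doc_id, entity_id) judged vital or relevant."""
    matched = {(d, e) for (d, e) in annotated_pairs
               if any(n.lower() in documents[d].lower()
                      for n in names_by_entity[e])}
    return len(matched) / len(annotated_pairs)
\end{verbatim}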
 
 
 
%\begin{table*}
 
%\caption{Breakdown of recall percentage increases by document categories }
 
%\begin{center}\begin{tabular}{l*{9}{c}r}
 
% && \multicolumn{3}{ c| }{All entities}  & \multicolumn{3}{ c| }{Wikipedia} &\multicolumn{3}{ c| }{Twitter} \\ 
 
% & &others&news&social & others&news&social &  others&news&social \\
 
%\hline
 
% 
 
%\multirow{4}{*}{Vital}	 &cano-part $-$ cano  	&8.2  &14.9    &12.3           &9.1  &18.6   &14.1             &0      %&0       &0  \\
 
%                         &all$-$ cano         	&12.6  &19.7    &12.3          &5.5  &15.8   &8.4             &73   &35%.9    &38.3  \\
 
%	                 &all-part $-$ cano\_part&9.7    &18.7  &12.7       &0    &0.5  &5.1        &93.2 & 93 &64.4 \\%
 
%	                 &all-part $-$ all     	&5.4  &13.9     &12.7           &3.6  &3.3    &10.8              &20.3 %  &57.1    &26.1 \\
 
%	                 \hline
 
%	                 
 
%\multirow{4}{*}{Relevant}  &cano-part $-$ cano  	&10.5  &15.1    &12.2          &11.1  &21.7   &14.1            % &0   &0    &0  \\
 
%                         &all $-$ cano         	&11.7  &36.6    &17.3          &9.2  &19.5   &9.9             &%54.5   &76.3   &66  \\
 
%	                 &all-part $-$ cano-part &4.2  &26.9   &15.8          &0.2    &0.7    &6.7           &72.2   &8%7.6 &75 \\
 
%	                 &all-part $-$ all     	&3    &5.4     &10.7           &2.1  &2.9    &11              &18.2   &%11.3    &9 \\
 
%	                 
 
%	                 \hline
 
%\multirow{4}{*}{total} 	&cano-part $-$ cano   	&10.9   &15.5   &12.4         &11.9  &21.3   &14.4          &0 %    &0       &0\\
 
%			&all $-$ cano         	&13.8   &30.6   &16.9         &9.1  &18.9   &10.2          &63.6  &61.8%    &57.5 \\
 
%                        &all-part $-$ cano-part	&7.2   &24.8   &15.9          &0.1    &0.7    &6.8           &8%2.2  &89.1    &71.3\\
 
%                        &all-part $-$ all     	&4.3   &9.7    &11.4           &3.0  &3.1   &11.0          &18.9  &27.3%    &13.8\\	                 
 
%	                 
 
%                                  	                 
 
%\hline
 
%\end{tabular}
 
%\end{center}
 
%\label{tab:source-delta2}
 
%\end{table*}
 
 
 
 \begin{table*}
 
\caption{Breakdown of recall (\%) by document source category}
 
\begin{center}\begin{tabular}{l*{9}{c}r}
 
 && \multicolumn{3}{ c| }{All entities}  & \multicolumn{3}{ c| }{Wikipedia} &\multicolumn{3}{ c| }{Twitter} \\ 
 
 & &others&news&social & others&news&social &  others&news&social \\
 
\hline
 
 
 
\multirow{4}{*}{Vital} &cano                 &82.2& 65.6& 70.9& 90.9&  80.1& 76.8&   8.1&  6.3&  30.5\\
 
&cano part & 90.4& 80.6& 83.1& 100.0& 98.7& 90.9&   8.1&  6.3&  30.5\\
 
&all  & 94.8& 85.4& 83.1& 96.4&  95.9& 85.2&   81.1& 42.2& 68.8\\
 
&all part &100& 99.2& 95.9& 100.0&  99.2& 96.0&   100&  99.3& 94.9\\
 
\hline
 
	                 
 
\multirow{4}{*}{Relevant} &cano & 84.2& 53.4& 55.6& 88.4& 75.6& 63.2& 10.6& 2.2& 6.0\\
 
&cano part &94.7& 68.5& 67.8& 99.6& 97.3& 77.3& 10.6& 2.2& 6.0\\
 
&all & 95.8& 90.1& 72.9& 97.6& 95.1& 73.1& 65.2& 78.4& 72.0\\
 
&all part &98.8& 95.5& 83.7& 99.7& 98.0& 84.1& 83.3& 89.7& 81.0\\
 
	                 
 
	                 \hline
 
\multirow{4}{*}{total} 	&cano    &   81.1& 56.5& 58.2& 87.7& 76.4& 65.7& 9.8& 3.6& 13.5\\
 
&cano part &92.0& 72.0& 70.6& 99.6& 97.7& 80.1& 9.8& 3.6& 13.5\\
 
&all & 94.8& 87.1& 75.2& 96.8& 95.3& 75.8& 73.5& 65.4& 71.1\\
 
&all part & 99.2& 96.8& 86.6& 99.8& 98.4& 86.8& 92.4& 92.7& 84.9\\
 
	                 
 
\hline
 
\end{tabular}
 
\end{center}
 
\label{tab:source-delta}
 
\end{table*}
 
    
 
 
The breakdown of recall on the raw corpus by document source category is presented in Table \ref{tab:source-delta}.
 
 
 
 
 
 
 
 
 
 \subsection{Relevance Rating: vital and relevant}
 
 
 
 When comparing recall in vital and relevant, we observe that canonical names perform better on vital than on relevant documents. This is especially true for Wikipedia entities: for example, the vital recall for news is 80.1 and for social is 76.8, while the corresponding recall in relevant is 75.6 and 63.2, respectively. In general, recall on vital documents is higher than on relevant documents, suggesting that vital documents are more likely to mention the entities, and to do so by their common name variants. 
 
 
 
%  \subsection{Difference by document categories}
 
%  
 
 
 
%  Generally, there is greater variation in relevant rank than in vital. This is specially true in most of the Delta's for Wikipedia. This  maybe be explained by news items referring to  vital documents by a some standard name than documents that are relevant. Twitter entities show greater deltas than Wikipedia entities in both vital and relevant. The greater variation can be explained by the fact that the canonical name of Twitter entities retrieves very few documents. The deltas that involve canonical names of Twitter entities, thus, show greater deltas.  
 
%  
 
 
% If we look in recall performances, In Wikipedia entities, the order seems to be others, news and social. This means that others achieve a higher recall than news than social.  However, in Twitter entities, it does not show such a strict pattern. In all, entities also, we also see almost the same pattern of other, news and social. 
 
 
 
 
  
 
\subsection{Recall across document categories: others, news and social}
 
The recall for Wikipedia entities in Table \ref{tab:name} ranges from 61.8\% (canonical) to 77.9\% (name-variant partial). Table \ref{tab:source-delta} shows how recall is distributed across document categories. For Wikipedia entities, across all entity profiles, other documents have the highest recall, followed by news and then social documents. While the recall for news ranges from 76.4\% to 98.4\%, the recall for social documents ranges from 65.7\% to 86.8\%. For Twitter entities, however, the pattern is different: with canonicals (and their partials), social documents achieve higher recall than news. 
 
%This indicates that social documents refer to Twitter entities by their canonical names (user names) more than news do. In name- variant partial, news achieve better results than social. The difference in recall between canonicals and name-variants show that news do not refer to Twitter entities by their user names, they refer to them by their display names.
 
Overall, across all entity types and all entity profiles, other documents achieve higher recall than news documents, and news documents, in turn, achieve higher recall than social documents. 
 
 
% This suggests that social documents are the hardest  to retrieve.  This  makes sense since social posts such as tweets and blogs are short and are more likely to point to other resources, or use short informal names.
 
 
 
%%NOTE TABLE REMOVED:\\\\
 
%
 
%We computed four percentage increases in recall (deltas)  between the
 
%different entity profiles (Table \ref{tab:source-delta2}). The first
 
%delta is the recall percentage between canonical partial  and
 
%canonical. The second  is  between name= variant and canonical. The
 
%third is the difference between name-variant partial  and canonical
 
%partial and the fourth between name-variant partial and
 
%name-variant. we believe these four deltas offer a clear meaning. The
 
%delta between name-variant and canonical measn the percentage of
 
%documents that the new name variants retrieve, but the canonical name
 
%does not. Similarly, the delta between  name-variant partial and
 
%partial canonical-partial means the percentage of document-entity
 
%pairs that can be gained by the partial names of the name variants. 
 
% The  biggest delta  observed is in Twitter entities between partials
 
% of all name variants and partials of canonicals (93\%). delta. Both
 
% of them are for news category.  For Wikipedia entities, the highest
 
% delta observed is 19.5\% in cano\_part - cano followed by 17.5\% in
 
% all\_part in relevant. 
 
  
 
  \subsection{Entity Types: Wikipedia and Twitter}
 
Table \ref{tab:name} also shows the difference between Wikipedia and Twitter entities. For Wikipedia entities, the canonical profile achieves a recall of 70\% and canonical partial achieves 86.1\%, an increase of 16.1 percentage points. By contrast, the increase of name-variant partial over name-variant is only 8.3 points.
 
%The high increase in recall when moving from canonical names  to their partial names, in comparison to the lower increase when moving from all name variants to their partial names can be explained by saturation. This is to mean that documents have already been extracted by the different name variants and thus using their partial names does not bring in many new relevant documents. 
 
One interesting observation is that, for Wikipedia entities, canonical partial achieves better recall than name-variant in both the cleansed and the raw corpus.  %In the raw extraction, the difference is about 3.7. 
 
For Twitter entities, however, the situation is different: canonical and canonical partial perform identically, and their recall is very low. They are identical because the canonical names of Twitter entities are single-word strings; for example, for https://twitter.com/roryscovel, ``roryscovel'' is the canonical name and its partial name is the same string.  
 
%The low recall is because the canonical names of Twitter entities are not really names; they are usually arbitrarily created user names. It shows that  documents  refer to them by their display names, rarely by their user name, which is reflected in the name-variant recall (67.9\%). The use of name-variant partial increases the recall to 88.2\%.
 
 
 
 
Tables \ref{tab:name} and \ref{tab:source-delta} show that recall for Wikipedia entities is higher than for Twitter entities. Generally, at both the aggregate and the document-category level, we observe that recall increases as we move from canonical to canonical partial, to name-variant, and to name-variant partial. The only case where this does not hold is the transition from canonical partial to name-variant for Wikipedia entities. At the aggregate level (as can be inferred from Table \ref{tab:name}), the difference in recall between canonical and name-variant partial is 31.2\% on all entities, 20.7\% on Wikipedia entities, and 79.5\% on Twitter entities. This is a substantial difference in performance. 
 
 
 
%% TODO: PERHAPS SUMMARY OF DISCUSSION HERE
 
 
 
\section{Impact on Classification}
 
  In the overall experimental setup, classification, ranking, and evaluation are kept constant. Following the settings of \cite{balog2013multi}, we use WEKA's\footnote{http://www.cs.waikato.ac.nz/$\sim$ml/weka/} Random Forest classifier. However, we use a smaller number of features, which we found to be more effective; we determined this by running the classification algorithm with our features and with theirs, and our feature implementations achieved better results. The features consist of similarity measures between the document and the KB entity's profile text (such as cosine and Jaccard similarity), document-entity features (such as whether the document mentions the entity in its title or body, and the frequency of mention), and related-entity features (such as PageRank scores). In total, we use 13 features, which are listed below. 
 
  
 
\paragraph{Google's Cross Lingual Dictionary (GCLD)}
 
 
This is a mapping of strings to Wikipedia concepts and vice versa
\cite{spitkovsky2012cross}. We use the probability with which a string is used as anchor text for a Wikipedia entity.
 
 
\paragraph{jac} 
 
  Jaccard similarity between the document and the entity's Wikipedia page
 
\paragraph{cos} 
 
  Cosine similarity between the document and the entity's Wikipedia page
 
\paragraph{kl} 
 
  KL-divergence between the document and the entity's Wikipedia page
 
  
 
  \paragraph{PPR}
 
For each entity, we computed PPR scores from a Wikipedia snapshot and kept the top 100 entities along with the corresponding scores.
 
 
 
\paragraph{Surface Form (sForm)}
 
For each Wikipedia entity, we gathered DBpedia name variants. These
 
are redirects, labels and names.
 
 
 
\paragraph{Context (contxL, contxR)}
 
From the WikiLink corpus \cite{singh12:wiki-links}, we collected all left and right contexts (two sentences to the left and two to the right of each mention) and generated n-grams, from unigrams up to four-grams, for each left and right context. Finally, we selected the five most frequent n-grams for each context.
 
 
\paragraph{FirstPos}
 
  Term position of the first occurrence of the target entity in the document 
 
  body 
 
\paragraph{LastPos }
 
  Term position of the last occurrence of the target entity in the document body
 
 
\paragraph{LengthBody} Term count of document body
 
\paragraph{LengthAnchor} Term count  of document anchor
 
  
 
\paragraph{FirstPosNorm} 
 
  Term position of the first occurrence of the target entity in the document 
 
  body normalised by the document length 
 
\paragraph{MentionsBody }
 
  No. of occurrences of the target entity in the  document body
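As an illustration, the sketch below gives simplified versions of some of these features for a single document-entity pair (the bag-of-words tokenisation, the smoothing in the KL term, and the n-gram selection are our own assumptions; the actual implementations may differ).

\begin{verbatim}
# Sketch only: simplified versions of a few of the features above.
import math
from collections import Counter

def jac(doc_tokens, page_tokens):
    a, b = set(doc_tokens), set(page_tokens)
    return len(a & b) / len(a | b) if a | b else 0.0

def cos(doc_tokens, page_tokens):
    d, p = Counter(doc_tokens), Counter(page_tokens)
    dot = sum(d[t] * p[t] for t in d)
    norm = math.sqrt(sum(v * v for v in d.values())) * \
           math.sqrt(sum(v * v for v in p.values()))
    return dot / norm if norm else 0.0

def kl(doc_tokens, page_tokens, eps=1e-9):
    # KL(doc || page), with eps smoothing for terms unseen in the page.
    d, p = Counter(doc_tokens), Counter(page_tokens)
    nd, np_ = sum(d.values()) or 1, sum(p.values()) or 1
    return sum((c / nd) * math.log((c / nd) / (p[t] / np_ + eps))
               for t, c in d.items())

def mention_positions(body_tokens, entity_tokens):
    # Start positions of the entity name in the body (both pre-lowercased).
    n = len(entity_tokens)
    return [i for i in range(len(body_tokens) - n + 1)
            if body_tokens[i:i + n] == entity_tokens]

def top_ngrams(contexts, max_n=4, k=5):
    # contxL / contxR: the k most frequent 1..max_n-grams over all contexts.
    counts = Counter()
    for text in contexts:
        toks = text.lower().split()
        for n in range(1, max_n + 1):
            counts.update(" ".join(toks[i:i + n])
                          for i in range(len(toks) - n + 1))
    return [g for g, _ in counts.most_common(k)]

# FirstPos, LastPos, MentionsBody, FirstPosNorm follow directly, e.g.:
#   pos = mention_positions(body, entity_name_tokens)
#   FirstPos, LastPos, MentionsBody = pos[0], pos[-1], len(pos)
#   FirstPosNorm = pos[0] / len(body)
\end{verbatim}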
 
 
 
 
  
 
 
  Here, we present results showing how the choices of corpus, entity type, and entity profile impact these later stages of the pipeline. Tables \ref{tab:class-vital} and \ref{tab:class-vital-relevant} show the performance in max-F. 
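For reference, a minimal sketch of this classification and evaluation step is given below (the paper uses WEKA's Random Forest; the scikit-learn analogue, the file names, and the threshold sweep for max-F are our own assumptions).

\begin{verbatim}
# Sketch only: a scikit-learn analogue of the WEKA Random Forest setup.
# File names, parameters and the max-F sweep are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def max_f(scores, labels):
    """Sweep confidence cutoffs and return the best F1 (max-F)."""
    best = 0.0
    for t in np.unique(scores):
        pred = scores >= t
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        fn = np.sum(~pred & (labels == 1))
        if tp:
            p, r = tp / (tp + fp), tp / (tp + fn)
            best = max(best, 2 * p * r / (p + r))
    return best

# X_*: one row per document-entity pair, one column per feature listed above;
# y_*: 1 if the pair is judged vital (or vital-relevant), 0 otherwise.
X_train, y_train = np.load("train_X.npy"), np.load("train_y.npy")
X_test, y_test = np.load("test_X.npy"), np.load("test_y.npy")

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]
print("max-F:", max_f(scores, y_test))
\end{verbatim}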
 
\begin{table*}
 
\caption{Vital performance (max-F) under different name variants (upper part from cleansed, lower part from raw)}
 
\begin{center}
 
\begin{tabular}{ll@{\quad}lllllll}
 
\hline
 
%&\multicolumn{1}{l}{\rule{0pt}{12pt}}&\multicolumn{1}{l}{\rule{0pt}{12pt}cano}&\multicolumn{1}{l}{\rule{0pt}{12pt}canonical partial }&\multicolumn{1}{l}{\rule{0pt}{12pt}name-variant }&\multicolumn{1}{l}{\rule{0pt}{50pt}name-variant partial}\\[5pt]
 
  &&cano&cano-part&all  &all-part \\
 
 
 
   all-entities &max-F& 0.241&0.261&0.259&0.265\\
 
%	      &SU&0.259  &0.258 &0.263 &0.262 \\	
 
   Wikipedia &max-F&0.252&0.274& 0.265&0.271\\
 
%	      &SU& 0.261& 0.259&  0.265&0.264 \\
 
   
 
   
 
   twitter &max-F&0.105&0.105&0.218&0.228\\
 
%     &SU &0.105&0.250& 0.254&0.253\\
 
  
 
 
 
\hline
 
\hline
 
  all-entities &max-F & 0.240 &0.272 &0.250&0.251\\
 
%	  &SU& 0.258   &0.151  &0.264  &0.258\\
 
   Wikipedia&max-F &0.257&0.257&0.257&0.255\\
 
%   &SU	     & 0.265&0.265 &0.266 & 0.259\\
 
   twitter&max-F &0.188&0.188&0.208&0.231\\
 
%	&SU&    0.269 &0.250 &0.250&0.253\\
 
\hline
 
 
\end{tabular}
 
\end{center}
 
\label{tab:class-vital}
 
\end{table*}
 
  
 
  
 
  \begin{table*}
 
\caption{Vital-relevant performance (max-F) under different name variants (upper part from cleansed, lower part from raw)}
 
\begin{center}
 
\begin{tabular}{ll@{\quad}lllllll}
 
\hline
 
%&\multicolumn{1}{l}{\rule{0pt}{12pt}}&\multicolumn{1}{l}{\rule{0pt}{12pt}canonical}&\multicolumn{1}{l}{\rule{0pt}{12pt}canonical partial }&\multicolumn{1}{l}{\rule{0pt}{12pt}name-variant }&\multicolumn{1}{l}{\rule{0pt}{50pt}name-variant partial}\\[5pt]
 
 
 &&cano&cano-part&all  &all-part \\
 
 
   all-entities &max-F& 0.497&0.560&0.579&0.607\\
 
%	      &SU&0.468  &0.484 &0.483 &0.492 \\	
 
   Wikipedia &max-F&0.546&0.618&0.599&0.617\\
 
%   &SU&0.494  &0.513 &0.498 &0.508 \\
 
   
 
   twitter &max-F&0.142&0.142& 0.458&0.542\\
 
%    &SU &0.317&0.328&0.392&0.392\\
 
  
 
 
 
\hline
 
\hline
 
  all-entities &max-F& 0.509 &0.594 &0.590&0.612\\
 
%    &SU       &0.459   &0.502  &0.478  &0.488\\
 
   Wikipedia &max-F&0.550&0.617&0.605&0.618\\
 
%   &SU	     & 0.483&0.498 &0.487 & 0.495\\
 
   twitter &max-F&0.210&0.210&0.499&0.580\\
 
%	&SU&    0.319  &0.317 &0.421&0.446\\
 
\hline
 
 
\end{tabular}
 
\end{center}
 
\label{tab:class-vital-relevant}
 
\end{table*}
 
 
 
 
 
Table \ref{tab:class-vital} shows the classification performance (max-F) for vital documents. On Wikipedia entities, except for the canonical profile, the cleansed version achieves better results than the raw version. On Twitter entities, by contrast, the raw corpus performs better for all entity profiles except name-variant. At the aggregate level (Wikipedia and Twitter combined), cleansed performs better for three profiles; only for canonical partial does raw perform better. Overall, cleansed achieves better results than raw. This result is interesting because we saw in the previous sections that the raw corpus achieves higher recall than the cleansed one; with name-variant partial, for example, about 10\% more relevant documents are retrieved from the raw corpus. The gain in recall on the raw corpus does not translate into a gain in F-measure; in fact, in most cases the F-measure decreased. % One explanation for this is that it brings in many false positives from, among related links, adverts, etc.  
 
For Wikipedia entities, canonical partial achieves the highest performance; for Twitter entities, name-variant partial performs best.
 
 
For the vital-relevant category (Table \ref{tab:class-vital-relevant}), the picture is different. The raw corpus achieves better results in all cases except Wikipedia's canonical partial; for Twitter entities in particular, raw is better for every profile. In terms of entity profiles, Wikipedia's canonical partial achieves the best F-score; for Twitter, as before, name-variant partial does. The raw corpus thus has a greater effect on relevant documents and on Twitter entities.  
 
 
%The fact that canonical partial names achieve better results is interesting.  We know that partial names were used as a baseline in TREC KBA 2012, but no one of the KBA participants actually used partial names for filtering.
 
 
 
   
 
%    
 
   
 
   
 
%    
 
%    \begin{table*}
 
% \caption{Breakdown of missing documents by sources for cleansed, raw and cleansed-and-raw}
 
% \begin{center}\begin{tabular}{l*{9}r}
 
%   &others&news&social \\
 
% \hline
 
% 
 
% 			&missing from raw only &   0 &0   &217 \\
 
% 			&missing from cleansed only   &430   &1321     &1341 \\
 
% 
 
%                          &missing from both    &19 &317     &2196 \\
 
%                         
 
%                          
 
% 
 
% \hline
 
% \end{tabular}
 
% \end{center}
 
% \label{tab:miss-category}
 
% \end{table*}
 
 
 
 
%    To gain more insight, I sampled for each 35 entities, one document-entity pair and looked into the contents. The results are in \ref{tab:miss from both}
 
%    
 
%    \begin{table*}
 
% \caption{Missing documents and their mentions }
 
% \begin{center}
 
% 
 
%  \begin{tabular}{l*{4}{l}l}
 
%  &entity&mentioned by &remark \\
 
% \hline
 
%  Jeremy McKinnon  & Jeremy McKinnon& social, mentioned in read more link\\
 
% Blair Thoreson   & & social, There is no mention by name, the article talks about a subject that is political (credit rating), not apparent to me\\
 
%   Lewis and Clark Landing&&Normally, maha music festival does not mention ,but it was held there \\
 
% Cementos Lima &&It appears a mistake to label it vital. the article talks about insurance and centos lima is a cement company.entity-deleted from wiki\\
 
% Corn Belt Power Cooperative & &No content at all\\
 
% Marion Technical Institute&&the text could be of any place. talks about a place whose name is not mentioned. 
 
%  roryscovel & &Talks about a video hinting that he might have seen in the venue\\
 
% Jim Poolman && talks of party convention, of which he is member  politician\\
 
% Atacocha && No mention by name The article talks about waste from mining and Anacocha is a mining company.\\
 
% Joey Mantia & & a mention of a another speeedskater\\
 
% Derrick Alston&&Text swedish, no mention.\\
 
% Paul Johnsgard&& not immediately clear why \\
 
% GandBcoffee&& not immediately visible why\\
 
% Bob Bert && talks about a related media and entertainment\\
 
% FrankandOak&& an article that talks about a the realease of the most innovative companies of which FrankandOak is one. \\
 
% KentGuinn4Mayor && a theft in a constituency where KentGuinn4Mayor is vying.\\
 
% Hjemkomst Center && event announcement without mentioning where. it takes a a knowledge of \\
 
% BlossomCoffee && No content\\
 
% Scotiabank Per\%25C3\%25BA && no content\\
 
% Drew Wrigley && politics and talk of oilof his state\\
 
% Joshua Zetumer && mentioned by his film\\
 
% Théo Mercier && No content\\
 
% Fargo Air Museum && No idea why\\
 
% Stevens Cooperative School && no content\\
 
% Joshua Boschee && No content\\
 
% Paul Marquart &&  No idea why\\
 
% Haven Denney && article on skating competition\\
 
% Red River Zoo && animal show in the zoo, not indicated by name\\
 
% RonFunches && talsk about commedy, but not clear whyit is central\\
 
% DeAnne Smith && No mention, talks related and there are links\\
 
% Richard Edlund && talks an ward ceemony in his field \\
 
% Jennifer Baumgardner && no idea why\\
 
% Jeff Tamarkin && not clear why\\
 
% Jasper Schneider &&no mention, talks about rural development of which he is a director \\
 
% urbren00 && No content\\
 
% \hline
 
% \end{tabular}
 
% \end{center}
 
% \label{tab:miss from both}
 
% \end{table*}
 
 
 
 
 
   
 
  
 
\section{Analysis and Discussion}
 
 
 
We conducted experiments to study the impact on recall of the different components of the filtering stage of an entity-based filtering and ranking pipeline: cleansing, entity profiles, relevance ratings, and document categories. We also measured the impact of these factors and choices on the later stages of the pipeline. 
 
 
Experimental results show that cleansing can remove all or part of the content of documents, making them difficult to retrieve; these documents can otherwise be retrieved from the raw version. The use of the raw corpus therefore brings in documents that cannot be retrieved from the cleansed corpus, and this holds for all entity profiles and all entity types. The recall difference between the cleansed and raw corpus ranges from 6.8 to 26.2 percentage points, which in actual document-entity pairs amounts to thousands of documents. We believe this is a substantial increase. However, the recall increases do not always translate into an improved overall F-score. In the vital ranking, for both Wikipedia and aggregate entities, the cleansed version performs better than the raw version; for Twitter entities, the raw corpus performs better except for name-variant, though the difference is negligible. For vital-relevant, however, the raw corpus performs better across all entity profiles and entity types except for the partial canonical names of Wikipedia entities. 
 
 
The use of different entity profiles also makes a large difference in recall. Except for Wikipedia entities, where canonical partial achieves better recall than name-variant, there is a steady increase in recall from canonical to canonical partial, to name-variant, and to name-variant partial. This pattern is also observed across the document categories. Here too, however, the relationship between the recall gained by moving from a less rich profile to a richer one and the overall performance as measured by F-score is not linear. 
 
 
 
%%%%% MOVED FROM LATER ON - CHECK FLOW
 
 
There is a trade-off between using a richer entity profile and retrieving irrelevant documents: the richer the profile, the more relevant documents it retrieves, but also the more irrelevant ones. To put this into perspective, let us compare the number of documents retrieved with canonical partial and with name-variant partial. On the raw corpus, the former retrieves a total of 2547487 documents and achieves a recall of 72.2\%, while the latter retrieves a total of 4735318 documents and achieves a recall of 90.2\%. The total number of documents extracted increases by 85.9\% for a recall gain of 18 percentage points; the remainder, 67.9\%, consists of newly introduced irrelevant documents. 
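In figures, the quoted increase in extracted documents and the recall gain follow as
\[
\frac{4735318 - 2547487}{2547487} \approx 0.859,
\qquad
90.2\% - 72.2\% = 18.0\% .
\]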
 
 
%%%%%%%%%%%%
 
 
 
In the vital ranking, across both versions of the corpus, Wikipedia's canonical partial achieves better performance than any other Wikipedia entity profile. For vital-relevant documents too, Wikipedia's canonical partial achieves the best result, although on the raw corpus it scores slightly below name-variant partial. For Twitter entities, the name-variant partial profile achieves the highest F-score across all entity profiles and both versions of the corpus.  
 
 
 
There are three interesting observations: 

1) Cleansing impacts Twitter entities and relevant documents. This is validated by the observation that the recall gains for Twitter entities and for the relevant category in the raw corpus also translate into overall performance gains. This implies that cleansing removes more relevant and social documents than it does vital and news documents. That it removes relevant