Changeset - fe74b611603f
Gebrekirstos Gebremeskel <destinycome@gmail.com> - 2014-06-12 02:54:53
updated
1 file changed with 53 insertions and 1 deletion:
mypaper-final.tex
 
@@ -489,25 +489,77 @@ One interesting observation is that, For Wikipedia entities, canonical partial a
 
For Twitter entities, however, the picture is different: canonical and canonical partial perform the same, and recall is very low. The two profiles coincide for Twitter entities because their canonical names are single-word strings. For example, for https://twitter.com/roryscovel, ``roryscovel'' is the canonical name and its partial is identical.
 
%The low recall is because the canonical names of Twitter entities are not really names; they are usually arbitrarily created user names. It shows that  documents  refer to them by their display names, rarely by their user name, which is reflected in the name-variant recall (67.9\%). The use of name-variant partial increases the recall to 88.2\%.
 
 
 
 
Tables \ref{tab:name} and \ref{tab:source-delta} show that recall for Wikipedia entities is higher than for Twitter entities. Generally, at both the aggregate and document-category levels, we observe that recall increases as we move from canonical to canonical partial, to name-variant, and to name-variant partial. The only case where this does not hold is the transition from Wikipedia's canonical partial to name-variant. At the aggregate level (as can be inferred from Table \ref{tab:name}), the difference in performance between canonical and name-variant partial is 31.9\% on all entities, 20.7\% on Wikipedia entities, and 79.5\% on Twitter entities. This is a substantial performance difference.
 
 
 
%% TODO: PERHAPS SUMMARY OF DISCUSSION HERE
 
 
 
\section{Impact on classification}
 
 
  In the overall experimental setup, classification, ranking, and evaluation are kept constant. Following the settings of \cite{balog2013multi}, we use WEKA's\footnote{http://www.cs.waikato.ac.nz/$\sim$ml/weka/} Random Forest classifier. However, we use fewer features, which we found to be more effective: we determined this by running the classification algorithm once with our feature implementations and once with theirs, and our implementations achieved better results. In total we use 13 features, listed below.
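As a concrete illustration of this setup, the sketch below trains an analogous random forest in scikit-learn rather than WEKA (the substitution is ours, made only for brevity); the file names, feature-matrix layout, and column ordering are hypothetical, not our exact pipeline.

\begin{verbatim}
# Illustrative sketch only: the experiments use WEKA's Random Forest;
# this analogous scikit-learn version assumes a matrix X with one row
# per document-entity pair and 13 columns (the features listed below),
# plus a binary label vector y. File names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X = np.loadtxt("features.csv", delimiter=",")  # shape: (n_pairs, 13)
y = np.loadtxt("labels.csv", dtype=int)        # 1 = vital, 0 = other

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
# Per-pair ranking scores (in practice computed on held-out pairs).
scores = clf.predict_proba(X)[:, 1]
\end{verbatim}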
 
  
 
\paragraph{Google's Cross Lingual Dictionary (GCLD)}
 
 
This is a mapping of strings to Wikipedia concepts and vice versa \cite{spitkovsky2012cross}. From it we use the probability with which a string is used as anchor text for a Wikipedia entity.
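Concretely, this quantity can be written as a maximum-likelihood estimate over anchor-text counts (a sketch of the intended probability, not necessarily the exact estimator used to build the dictionary):
\[
P(e \mid s) \;=\; \frac{\mathrm{count}(s \rightarrow e)}{\sum_{e'} \mathrm{count}(s \rightarrow e')},
\]
where $\mathrm{count}(s \rightarrow e)$ is the number of times string $s$ appears as anchor text linking to Wikipedia entity $e$.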
 
 
\paragraph{jac} 
 
  Jaccard similarity between the document and the entity's Wikipedia page
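Treating both texts as term sets (a simplifying assumption here; shingles would also be possible), the score is
\[
\mathrm{jac}(D,E) \;=\; \frac{|T_D \cap T_E|}{|T_D \cup T_E|},
\]
where $T_D$ and $T_E$ are the term sets of the document and of the entity's Wikipedia page.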
 
\paragraph{cos} 
 
  Cosine similarity between the document and the entity's Wikipedia page
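With $\vec{d}$ and $\vec{e}$ denoting term-weight vectors for the document and the entity's Wikipedia page (e.g., tf-idf; the exact weighting scheme is an implementation choice),
\[
\mathrm{cos}(D,E) \;=\; \frac{\vec{d} \cdot \vec{e}}{\lVert \vec{d} \rVert \, \lVert \vec{e} \rVert}.
\]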
 
\paragraph{kl} 
 
  KL-divergence between the document and the entity's Wikipedia page
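Assuming smoothed unigram language models $\theta_D$ and $\theta_E$ estimated from the document and from the entity's Wikipedia page (smoothing keeps the divergence finite),
\[
\mathrm{KL}(\theta_D \,\|\, \theta_E) \;=\; \sum_{w} P(w \mid \theta_D)\, \log \frac{P(w \mid \theta_D)}{P(w \mid \theta_E)}.
\]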
 
  
 
  \paragraph{PPR}
 
For each entity, we computed Personalised PageRank (PPR) scores over a Wikipedia snapshot and kept the 100 highest-scoring entities along with their scores.
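For reference, the PPR vector $\vec{\pi}_e$ of entity $e$ is the stationary solution of
\[
\vec{\pi}_e \;=\; (1-\alpha)\, \vec{\pi}_e P \;+\; \alpha\, \vec{r}_e,
\]
where $P$ is the transition matrix of the Wikipedia link graph, $\vec{r}_e$ is the restart distribution concentrated on $e$, and $\alpha$ is the restart probability (its value is an implementation detail we do not fix here).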
 
 
 
\paragraph{Surface Form (sForm)}
 
For each Wikipedia entity, we gathered DBpedia name variants: redirects, labels, and names.
 
 
 
\paragraph{Context (contxL, contxR)}
 
From the WikiLink corpus \cite{singh12:wiki-links}, we collected all left and right contexts (two sentences to the left and two to the right) and generated n-grams from unigrams up to 4-grams for each left and right context. Finally, we selected the five most frequent n-grams for each context.
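The collection step can be sketched as follows; the tokenisation and the helper name are hypothetical, but the logic (1- to 4-grams per context, keep the five most frequent) follows the description above.

\begin{verbatim}
# Hypothetical sketch of the context-feature construction: from each
# left (or right) context string of an entity, generate all 1- to
# 4-grams and keep the five most frequent.
from collections import Counter

def top_context_ngrams(contexts, n_max=4, k=5):
    counts = Counter()
    for text in contexts:      # e.g., all left contexts of one entity
        tokens = text.split()  # assumed whitespace tokenisation
        for n in range(1, n_max + 1):
            for i in range(len(tokens) - n + 1):
                counts[tuple(tokens[i:i + n])] += 1
    return [ngram for ngram, _ in counts.most_common(k)]
\end{verbatim}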
 
 
\paragraph{FirstPos}
 
  Term position of the first occurrence of the target entity in the document 
 
  body 
 
\paragraph{LastPos}
 
  Term position of the last occurrence of the target entity in the document body
 
 
\paragraph{LengthBody} Term count of document body
 
\paragraph{LengthAnchor} Term count of document anchor
 
  
 
\paragraph{FirstPosNorm} 
 
  Term position of the first occurrence of the target entity in the document 
 
  body normalised by the document length 
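Explicitly, and assuming the body term count defined above is the normaliser,
\[
\mathrm{FirstPosNorm} \;=\; \frac{\mathrm{FirstPos}}{\mathrm{LengthBody}}.
\]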
 
\paragraph{MentionsBody}

  Number of occurrences of the target entity in the document body
 
 
 
 
  
 
 
  Here, we present results showing how the choices of corpus, entity type, and entity profile impact these later stages of the pipeline. Tables \ref{tab:class-vital} and \ref{tab:class-vital-relevant} report performance in max-F.
 
\begin{table*}
 
\caption{Vital performance under different name variants (upper part from cleansed, lower part from raw)}
 
\begin{center}
 
\begin{tabular}{ll@{\quad}lllllll}
 
\hline
 
%&\multicolumn{1}{l}{\rule{0pt}{12pt}}&\multicolumn{1}{l}{\rule{0pt}{12pt}cano}&\multicolumn{1}{l}{\rule{0pt}{12pt}canonical partial }&\multicolumn{1}{l}{\rule{0pt}{12pt}name-variant }&\multicolumn{1}{l}{\rule{0pt}{50pt}name-variant partial}\\[5pt]
 
  &&cano&cano-part&all  &all-part \\
 
 
 
   all-entities &max-F& 0.241&0.261&0.259&0.265\\
 
%	      &SU&0.259  &0.258 &0.263 &0.262 \\	