diff --git a/mypaper-final.tex b/mypaper-final.tex
index 6e98c162e5b0d1414ce991553cd11dd16baef5e8..e8299c85696d318c48dcf1411c1acec5cc1b979e 100644
--- a/mypaper-final.tex
+++ b/mypaper-final.tex
@@ -30,6 +30,7 @@
 \usepackage{booktabs}
 \usepackage{multirow}
 \usepackage{todonotes}
+\usepackage{url}
 
 \begin{document}
 
@@ -215,7 +216,7 @@ The rest of the paper is organized as follows:
 \section{Data Description}
 
 We base this analysis on the TREC-KBA 2013 dataset%
-\footnote{http://http://trec-kba.org/trec-kba-2013.shtml}
+\footnote{\url{http://trec-kba.org/trec-kba-2013.shtml}}
 that consists of three main parts: a time-stamped stream corpus, a
 set of KB entities to be curated, and a set of relevance judgments. A
 CCR system now has to identify for each KB entity which documents in the
@@ -227,16 +228,16 @@
 respectively, after xz-compression and GPG encryption. The raw data
 is a dump of raw HTML pages. The cleansed version is the raw data
 after its HTML tags are stripped off and only English documents
 identified with Chromium Compact Language Detector
-\footnote{https://code.google.com/p/chromium-compact-language-detector/}
+\footnote{\url{https://code.google.com/p/chromium-compact-language-detector/}}
 are included. The stream corpus is organized in hourly folders each
 of which contains many chunk files. Each chunk file contains between
 hundreds and hundreds of thousands of serialized thrift objects. One
 thrift object is one document. A document could be a blog article, a
 news article, or a social media post (including tweet). The stream
 corpus comes from three sources: TREC KBA 2012 (social, news and
-linking) \footnote{http://trec-kba.org/kba-stream-corpus-2012.shtml},
-arxiv\footnote{http://arxiv.org/}, and
-spinn3r\footnote{http://spinn3r.com/}.
+linking)\footnote{\url{http://trec-kba.org/kba-stream-corpus-2012.shtml}},
+arxiv\footnote{\url{http://arxiv.org/}}, and
+spinn3r\footnote{\url{http://spinn3r.com/}}.
 Table \ref{tab:streams} shows the sources, the number of hourly
 directories, and the number of chunk files.
 \begin{table}
@@ -664,11 +665,11 @@
 name-variant partial over name-variant is 8.3\%. For Wikipedia
 entities, canonical partial achieves better recall than name-variant
 in both the cleansed and the raw corpus.
 %In the raw extraction, the difference is about 3.7.
-In Twitter entities, however, it is different. Both canonical and
-their partials perform the same and the recall is very low. Canonical
+For Twitter entities, recall of canonical matching is very low.%
+\footnote{Canonical and canonical partial are the same for Twitter
+entities because they are one-word strings. For example, in
+\url{https://twitter.com/roryscovel},
-``roryscovel`` is the canonical name and its partial is also the same.
+``roryscovel'' is the canonical name and its partial is identical.}
 %The low recall is because the canonical names of Twitter entities are
 %not really names; they are usually arbitrarily created user names. It
 %shows that documents refer to them by their display names, rarely
@@ -678,60 +679,77 @@
 
-The tables in \ref{tab:name} and \ref{tab:source-delta} show recall for Wikipedia entities are higher than for Twitter. Generally, at both aggregate and document category levels, we observe that recall increases as we move from canonicals to canonical partial, to name-variant, and to name-variant partial. The only case where this does not hold is in the transition from Wikipedia's canonical partial to name-variant. At the aggregate level(as can be inferred from Table \ref{tab:name}), the difference in performance between canonical and name-variant partial is 31.9\% on all entities, 20.7\% on Wikipedia entities, and 79.5\% on Twitter entities. This is a significant performance difference.
-
-
+Tables \ref{tab:name} and \ref{tab:source-delta} show higher recall
+for Wikipedia entities than for Twitter entities. Generally, at both
+the aggregate and document category levels, we observe that recall
+increases as we move from canonical to canonical partial, to
+name-variant, and to name-variant partial. The only case where this
+does not hold is the transition from Wikipedia's canonical partial
+to name-variant. At the aggregate level (as can be inferred from
+Table \ref{tab:name}), the difference in recall between canonical and
+name-variant partial is 31.9\% on all entities, 20.7\% on Wikipedia
+entities, and 79.5\% on Twitter entities.
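+
+To make the four matching profiles concrete, consider a hypothetical
+Wikipedia entity with canonical name ``Barack Obama'' (an
+illustrative example, not drawn from the dataset):
+\begin{itemize}
+\item \emph{canonical}: match only the full string ``Barack Obama'';
+\item \emph{canonical partial}: also match the individual tokens
+  ``Barack'' and ``Obama'';
+\item \emph{name-variant}: match each known name variant, e.g.
+  ``Barack Hussein Obama'';
+\item \emph{name-variant partial}: also match the tokens of each
+  variant, e.g. ``Hussein''.
+\end{itemize}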
+
+Section \ref{sec:analysis} discusses the most plausible explanations
+for these findings.
 %% TODO: PERHAPS SUMMARY OF DISCUSSION HERE
-
-
+
 \section{Impact on classification}
-
-In the overall experimental setup, classification, ranking, and evaluationn are kept constant. Following \cite{balog2013multi} settings, we use WEKA's\footnote{http://www.cs.waikato.ac.nz/∼ml/weka/} Classification Random Forest. However, we use fewer numbers of features which we found to be more effective. We determined the effectiveness of the features by running the classification algorithm using the fewer features we implemented and their features. Our feature implementations achieved better results. The total numbers of features we used are 13 and are listed below.
+In the overall experimental setup, classification, ranking, and
+evaluation are kept constant. Following the settings of
+\cite{balog2013multi}, we use
+WEKA's\footnote{\url{http://www.cs.waikato.ac.nz/~ml/weka/}} Random
+Forest classifier. However, we use fewer features, which we found to
+be more effective. We determined the effectiveness of the features by
+running the classification algorithm once with our smaller feature
+set and once with the original features; our feature implementations
+achieved better results. In total, we use the 13 features listed
+below; the similarity features (jac, cos, kl) are defined formally
+after the list.
 
-\paragraph{Google's Cross Lingual Dictionary (GCLD)}
+\paragraph*{Google's Cross Lingual Dictionary (GCLD)}
 
 This is a mapping of strings to Wikipedia concepts and vice
 versa \cite{spitkovsky2012cross}. (1) the probability with which a
 string is used as anchor text to a Wikipedia entity
 
-\paragraph{jac}
+\paragraph*{jac}
 Jaccard similarity between the document and the entity's Wikipedia page
-\paragraph{cos}
+\paragraph*{cos}
 Cosine similarity between the document and the entity's Wikipedia page
-\paragraph{kl}
+\paragraph*{kl}
 KL-divergence between the document and the entity's Wikipedia page
- \paragraph{PPR}
+\paragraph*{PPR}
 For each entity, we computed a PPR score from
 a Wikipedia snapshot and we kept the top 100 entities along with the
 corresponding scores.
 
-\paragraph{Surface Form (sForm)}
+\paragraph*{Surface Form (sForm)}
 For each Wikipedia entity, we gathered DBpedia name variants. These are
 redirects, labels and names.
 
-\paragraph{Context (contxL, contxR)}
+\paragraph*{Context (contxL, contxR)}
 From the WikiLink corpus \cite{singh12:wiki-links}, we collected
 all left and right contexts (2 sentences left and 2 sentences
 right) and generated n-grams between uni-grams and quadro-grams
 for each left and right context.
 Finally, we select the 5 most frequent n-grams for each context.
-\paragraph{FirstPos}
+\paragraph*{FirstPos}
 Term position of the first occurrence of the target entity in the
 document body
-\paragraph{LastPos }
+\paragraph*{LastPos}
 Term position of the last occurrence of the target entity in the
 document body
-\paragraph{LengthBody} Term count of document body
-\paragraph{LengthAnchor} Term count of document anchor
+\paragraph*{LengthBody} Term count of document body
+\paragraph*{LengthAnchor} Term count of document anchor
 
-\paragraph{FirstPosNorm}
+\paragraph*{FirstPosNorm}
 Term position of the first occurrence of the target entity in the
 document body normalised by the document length
-\paragraph{MentionsBody }
+\paragraph*{MentionsBody}
 No. of occurrences of the target entity in the document body
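+
+As a sketch, assuming bag-of-words representations (the exact term
+weighting and smoothing are implementation details), the three
+similarity features follow the standard definitions, where $D$ and
+$E$ denote the term sets, $d$ and $e$ term-weight vectors, and $p$
+and $q$ smoothed term distributions of the document and the entity's
+Wikipedia page, respectively:
+\[
+\mathrm{jac}(D,E) = \frac{|D \cap E|}{|D \cup E|}, \qquad
+\mathrm{cos}(d,e) = \frac{d \cdot e}{\|d\|\,\|e\|}, \qquad
+\mathrm{kl}(p \,\|\, q) = \sum_{w} p(w) \log \frac{p(w)}{q(w)}
+\]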
@@ -898,7 +916,7 @@ In vital-relevant category (Table \ref{tab:class-vital-relevant}), the performan
 
-\section{Analysis and Discussion}
+\section{Analysis and Discussion}\label{sec:analysis}
 
 We conducted experiments to study the impacts on recall of