From 6b4b6cc8af6c1d9c8ae19f8a7555cda76bc5cd69 2014-06-12 04:00:23
From: Gebrekirstos Gebremeskel
Date: 2014-06-12 04:00:23
Subject: [PATCH] updated

---
diff --git a/mypaper-final.tex b/mypaper-final.tex
index bfd84b2c2d5f675d7f220906d73467d33e265297..b83820c61771e3df1d6b5bdb05b04c613434c515 100644
--- a/mypaper-final.tex
+++ b/mypaper-final.tex
@@ -30,6 +30,7 @@
 \usepackage{booktabs}
 \usepackage{multirow}
 \usepackage{todonotes}
+\usepackage{url}

 \begin{document}

@@ -128,6 +129,7 @@
relevance of the document-entity pair under consideration. We analyze
how these factors (and the design choices made in their corresponding
system components) affect filtering performance. We identify and
characterize the relevant documents that do not pass
+the filtering stage by examining their contents. This way, we
+estimate a practical upper-bound of recall for entity-centric stream
filtering.
\end{abstract}

@@ -219,6 +225,7 @@
The rest of the paper is organized as follows: \textbf{TODO!!}

\section{Data Description}
+We base this analysis on the TREC-KBA 2013 dataset%
+\footnote{\url{http://trec-kba.org/trec-kba-2013.shtml}}
+that consists of three main parts: a time-stamped stream corpus, a set of
+KB entities to be curated, and a set of relevance judgments. A CCR
+system then has to identify, for each KB entity, which documents in the
+stream corpus should be considered by the human curator.
+
+\subsection{Stream corpus} The stream corpus comes in two versions:
+raw and cleansed. The raw and cleansed versions are 6.45TB and 4.5TB,
+respectively, after xz-compression and GPG encryption. The raw data
+is a dump of raw HTML pages. The cleansed version is the raw data
+with HTML tags stripped and with only English documents, identified
+with the Chromium Compact Language Detector%
+\footnote{\url{https://code.google.com/p/chromium-compact-language-detector/}},
+retained. The stream corpus is organized in hourly folders, each
+of which contains many chunk files. Each chunk file contains between
+a few hundred and hundreds of thousands of serialized thrift objects.
+One thrift object is one document. A document can be a blog article, a
+news article, or a social media post (including tweets). The stream
+corpus comes from three sources: TREC KBA 2012 (social, news and
+linking)\footnote{\url{http://trec-kba.org/kba-stream-corpus-2012.shtml}},
+arxiv\footnote{\url{http://arxiv.org/}}, and
+spinn3r\footnote{\url{http://spinn3r.com/}}.
+Table \ref{tab:streams} shows the sources, the number of hourly
+directories, and the number of chunk files.
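To give a concrete impression of how the corpus is traversed, the
following minimal Python sketch walks the hourly directories and
counts chunk files per source. The directory and file naming
conventions assumed here (hourly folders containing xz-compressed,
GPG-encrypted chunk files whose names begin with a source label) are
illustrative assumptions, not the official corpus tooling.

\begin{verbatim}
import os
from collections import defaultdict

def count_chunks(corpus_root):
    """Count hourly directories and chunk files per source.

    Assumes hourly directories named like '2012-01-05-14' whose chunk
    files have basenames starting with a source label such as 'news-'
    or 'social-' (an illustrative layout, not the official spec).
    """
    hours = 0
    chunks_per_source = defaultdict(int)
    for hour_dir in sorted(os.listdir(corpus_root)):
        hour_path = os.path.join(corpus_root, hour_dir)
        if not os.path.isdir(hour_path):
            continue
        hours += 1
        for chunk in os.listdir(hour_path):
            if chunk.endswith('.sc.xz.gpg'):      # encrypted, xz-compressed chunk
                source = chunk.split('-', 1)[0]   # e.g. 'news', 'social', 'arxiv'
                chunks_per_source[source] += 1
    return hours, dict(chunks_per_source)

# Example: hours, per_source = count_chunks('/data/kba-streamcorpus-2013/')
\end{verbatim}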
\begin{table}
\caption{Stream corpus sources with the number of hourly directories and chunk files}
\begin{center}
@@ -392,7 +426,16 @@ from DBpedia and Twitter.

\subsection{Entity Profiling}
-We build profiles for the KB entities of interest. We have two types: Twitter and Wikipedia. Both Entities are selected, on purpose, to be sparse, less-documented. For the Twitter entities, we visit their respective Twitter pages and manually fetch their display names. For the Wikipedia entities, we fetch different name variants from DBpedia, namely name, label, birth name, alternative names, redirects, nickname, or alias. The extraction results are in Table \ref{tab:sources}.
+We build entity profiles for the KB entities of interest. We have two
+types: Twitter and Wikipedia. Both entity types have been selected, on
+purpose by the track organisers, to occur only sparsely and to be less
+well documented.
+For the Wikipedia entities, we fetch different name variants
+from DBpedia: name, label, birth name, alternative names,
+redirects, nickname, or alias.
+These extraction results are summarized in Table
+\ref{tab:sources}.
+For the Twitter entities, we visit
+their respective Twitter pages and fetch their display names.

\begin{table}
\caption{Number of different DBpedia name variants}
\begin{center}
@@ -415,6 +458,7 @@ Redirect &49 \\

\end{table}

+The collection contains a total of 121 Wikipedia entities.
+Every entity has a corresponding DBpedia label. Only 82 entities have
+a name string and only 49 entities have redirect strings. (Most of the
+entities have only one string, except for a few cases with multiple
+redirect strings; Buddy\_MacKay has the highest number (12) of
+redirect strings.) Six entities have birth names, one entity has a
+nickname, one has an alias, and four have alternative names.
+
+We combine the different name variants we extracted to form a set of
+strings for each KB entity. For Twitter entities, we used the display
+names that we collected.
+
+We consider the names of the entities that
+are part of the URL as canonical. For example, in the entity URL\\
+\url{http://en.wikipedia.org/wiki/Benjamin_Bronfman}\\
+Benjamin Bronfman is the canonical name of the entity.
+From the combined name variants and
+the canonical names, we create four sets of profiles for each
+entity: canonical (cano), canonical partial (cano-part), all name
+variants combined (all), and partial names of all name
+variants (all-part). We refer to the last two profiles as name-variant
+and name-variant partial. The names in parentheses are used in table
+captions.
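To make the four profiles concrete, the following Python sketch shows
one possible way to construct them; the helper names and the
whitespace-based splitting into partial names are our own illustrative
assumptions, not the exact implementation used in the experiments.

\begin{verbatim}
def partial_names(names):
    """Split each name on whitespace and collect the individual tokens."""
    return {token for name in names for token in name.split()}

def build_profiles(entity_url, name_variants):
    """Build the four entity profiles (cano, cano-part, all, all-part).

    entity_url    -- e.g. 'http://en.wikipedia.org/wiki/Benjamin_Bronfman'
                     or 'https://twitter.com/roryscovel'
    name_variants -- DBpedia name variants (Wikipedia entities) or the
                     collected display name (Twitter entities)
    """
    canonical = entity_url.rstrip('/').split('/')[-1].replace('_', ' ')
    all_names = set(name_variants) | {canonical}
    return {
        'cano':      {canonical},
        'cano-part': partial_names({canonical}),
        'all':       all_names,
        'all-part':  partial_names(all_names),
    }
\end{verbatim}

Filtering a document against a profile then amounts, in this analysis,
to checking whether any string in the chosen set occurs in the
document text.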
\subsection{Annotation Corpus}
The annotation set is a combination of the annotations from before the Training Time Range (TTR) and from the Evaluation Time Range (ETR), and consists of 68405 annotations. Its breakdown into training and test sets is shown in Table \ref{tab:breakdown}.

@@ -466,7 +534,7 @@ The annotation set is a combination of the annotations from before the Training
 \hline
 \multirow{2}{*}{Total} & Wikipedia &8071 &14426&19832 \\
 &Twitter &1450 &2998&4330 \\
- &All entities&9521 &17424&24162 \\
+ &All Entities&9521 &17424&24162 \\
 \hline
 \end{tabular}

@@ -479,7 +547,11 @@ The annotation set is a combination of the annotations from before the Training
-Most (more than 80\%) of the annotation documents are in the test set. In both the training and test data for 2013, there are 68405 annotations, of which 50688 are unique document-entity pairs. Out of 50688, 24162 unique document-entity pairs are vital-relevant, of which 9521 are vital and 17424 are relevant.
+%Most (more than 80\%) of the annotation documents are in the test set.
+The 2013 training and test data contain 68405
+annotations, of which 50688 are unique document-entity pairs. Out of
+these, 24162 unique document-entity pairs are vital (9521) or relevant
+(17424).

@@ -488,25 +560,25 @@ Most (more than 80\%) of the annotation documents are in the test set. In both
\subsection{Cleansing: raw or cleansed}

\begin{table}
-\caption{vital-relevant documents that are retrieved under different name variants (upper part from cleansed, lower part from raw)}
+\caption{Percentage of vital or relevant documents retrieved under different name variants (upper part from cleansed, lower part from raw)}
\begin{center}
-\begin{tabular}{l@{\quad}lllllll}
+\begin{tabular}{l@{\quad}rrrrrrr}
 \hline
 &cano&cano-part &all &all-part \\
 \hline
- all-entities &51.0 &61.7 &66.2 &78.4 \\
 Wikipedia &61.8 &74.8 &71.5 &77.9\\
- twitter &1.9 &1.9 &41.7 &80.4\\
+ Twitter &1.9 &1.9 &41.7 &80.4\\
+ All Entities &51.0 &61.7 &66.2 &78.4 \\
 \hline
 \hline
- all-entities &59.0 &72.2 &79.8 &90.2\\
 Wikipedia &70.0 &86.1 &82.4 &90.7\\
- twitter & 8.7 &8.7 &67.9 &88.2\\
+ Twitter & 8.7 &8.7 &67.9 &88.2\\
+ All Entities &59.0 &72.2 &79.8 &90.2\\
 \hline
 \end{tabular}

@@ -592,8 +664,12 @@ The break down of the raw corpus by document source category is presented in Tab
\subsection{Relevance Rating: vital and relevant}
- When comparing the recall performances in vital and relevant, we observe that canonical names achieve better in vital than in relevant. This is specially true with Wikipedia entities. For example, the recall for news is 80.1 and for social is 76, while the corresponding recall in relevant is 75.6 and 63.2 respectively. We can generally see that the recall in vital are better than the recall in relevant suggesting that relevant documents are more probable to mention the entities and when they do, using some of their common name variants.
+When comparing recall for vital and relevant, we observe that
+canonical names are more effective for vital documents than for
+relevant ones, in particular for the Wikipedia entities.
+%For example, the recall for news is 80.1 and for social is 76, while the corresponding recall in relevant is 75.6 and 63.2 respectively.
+We conclude that the most relevant documents mention the
+entities by their common name variants.

% \subsection{Difference by document categories}
%
@@ -606,9 +682,18 @@ The break down of the raw corpus by document source category is presented in Tab
\subsection{Recall across document categories: others, news and social}
-The recall for Wikipedia entities in Table \ref{tab:name} ranged from 61.8\% (canonicals) to 77.9\% (name-variants). Table \ref{tab:source-delta} shows how recall is distributed across document categories. For Wikipedia entities, across all entity profiles, others have a higher recall followed by news, and then by social. While the recall for news ranged from 76.4\% to 98.4\%, the recall for social documents ranged from 65.7\% to 86.8\%. In Twitter entities, however, the pattern is different. In canonicals (and their partials), social documents achieve higher recall than news.
+The recall for Wikipedia entities in Table \ref{tab:name} ranges from
+61.8\% (canonicals) to 77.9\% (name-variants). Table
+\ref{tab:source-delta} shows how recall is distributed across document
+categories. For Wikipedia entities, across all entity profiles, the
+others category has the highest recall, followed by news and then
+social. While the recall for news ranges from 76.4\% to 98.4\%, the
+recall for social documents ranges from 65.7\% to 86.8\%. For Twitter
+entities, however, the pattern is different. With canonicals (and
+their partials), social documents achieve higher recall than news.
%This indicates that social documents refer to Twitter entities by their canonical names (user names) more than news do. In name-variant partial, news achieve better results than social. The difference in recall between canonicals and name-variants show that news do not refer to Twitter entities by their user names, they refer to them by their display names.
-Overall, across all entities types and all entity profiles, others achieve higher recall than news, and news, in turn, achieve higher recall than social documents.
+Overall, across all entity types and all entity profiles, documents
+in the others category achieve a higher recall than news documents,
+and news documents, in turn, achieve a higher recall than social
+documents.
% This suggests that social documents are the hardest to retrieve. This makes sense since social posts such as tweets and blogs are short and are more likely to point to other resources, or use short informal names.

@@ -622,7 +707,7 @@ Overall, across all entities types and all entity profiles, others achieve highe
%third is the difference between name-variant partial and canonical
%partial and the fourth between name-variant partial and
%name-variant. we believe these four deltas offer a clear meaning. The
-%delta between name-variant and canonical measn the percentage of
+%delta between name-variant and canonical means the percentage of
%documents that the new name variants retrieve, but the canonical name
%does not. Similarly, the delta between name-variant partial and
%partial canonical-partial means the percentage of document-entity

@@ -634,68 +719,105 @@ Overall, across all entities types and all entity profiles, others achieve highe
% all\_part in relevant.

\subsection{Entity Types: Wikipedia and Twitter}
-Table \ref{tab:name} shows the difference between Wikipedia and Twitter entities.
-Wikipedia entities' canonical achieves a recall of 70\%, and canonical partial achieves a recall of 86.1\%. This is an increase in recall of 16.1\%. By contrast, the increase in recall of name-variant partial over name-variant is 8.3.
-%The high increase in recall when moving from canonical names to their partial names, in comparison to the lower increase when moving from all name variants to their partial names can be explained by saturation. This is to mean that documents have already been extracted by the different name variants and thus using their partial names does not bring in many new relevant documents.
-One interesting observation is that, For Wikipedia entities, canonical partial achieves better recall than name-variant in both cleansed and raw corpus. %In the raw extraction, the difference is about 3.7.
-In Twitter entities, however, it is different. Both canonical and their partials perform the same and the recall is very low. Canonical and canonical partial are the same for Twitter entities because they are one word strings. For example in https://twitter.com/roryscovel, ``roryscovel`` is the canonical name and its partial is also the same.
-%The low recall is because the canonical names of Twitter entities are not really names; they are usually arbitrarily created user names. It shows that documents refer to them by their display names, rarely by their user name, which is reflected in the name-variant recall (67.9\%). The use of name-variant partial increases the recall to 88.2\%.
-
-The tables in \ref{tab:name} and \ref{tab:source-delta} show recall for Wikipedia entities are higher than for Twitter. Generally, at both aggregate and document category levels, we observe that recall increases as we move from canonicals to canonical partial, to name-variant, and to name-variant partial. The only case where this does not hold is in the transition from Wikipedia's canonical partial to name-variant. At the aggregate level(as can be inferred from Table \ref{tab:name}), the difference in performance between canonical and name-variant partial is 31.9\% on all entities, 20.7\% on Wikipedia entities, and 79.5\% on Twitter entities. This is a significant performance difference.
-
+Table \ref{tab:name} summarizes the differences between Wikipedia and
+Twitter entities. Wikipedia entities' canonical representation
+achieves a recall of 70\%, while canonical partial achieves a recall
+of 86.1\%, an increase in recall of 16.1\%. By contrast, the increase
+in recall of name-variant partial over name-variant is 8.3\%.
+%This high increase in recall when moving from canonical names to their
+%partial names, in comparison to the lower increase when moving from
+%all name variants to their partial names can be explained by
+%saturation: documents have already been extracted by the different
+%name variants and thus using their partial names do not bring in many
+%new relevant documents.
+For Wikipedia entities, canonical
+partial achieves better recall than name-variant in both the cleansed
+and the raw corpus. %In the raw extraction, the difference is about 3.7.
+For Twitter entities, recall of canonical matching is very low.%
+\footnote{Canonical
+and canonical partial are the same for Twitter entities because they
+are one-word strings.
+For example, for \url{https://twitter.com/roryscovel},
+``roryscovel'' is the canonical name and its partial is identical.}
+%The low recall is because the canonical names of Twitter entities are
+%not really names; they are usually arbitrarily created user names. It
+%shows that documents refer to them by their display names, rarely
+%by their user name, which is reflected in the name-variant recall
+%(67.9\%). The use of name-variant partial increases the recall to
+%88.2\%.

+Tables \ref{tab:name} and \ref{tab:source-delta} show a higher recall
+for Wikipedia than for Twitter entities. Generally, at both the
+aggregate and the document category level, we observe that recall
+increases as we move from canonicals to canonical partial, to
+name-variant, and to name-variant partial. The only case where this
+does not hold is the transition from Wikipedia's canonical partial
+to name-variant. At the aggregate level (as can be inferred from Table
+\ref{tab:name}), the difference in performance between canonical and
+name-variant partial is 31.9\% on all entities, 20.7\% on Wikipedia
+entities, and 79.5\% on Twitter entities.
+
+Section \ref{sec:analysis} discusses the most plausible explanations for these findings.
%% TODO: PERHAPS SUMMARY OF DISCUSSION HERE

\section{Impact on classification}
-In the overall experimental setup, classification, ranking, and evaluationn are kept constant. Following \cite{balog2013multi} settings, we use WEKA's\footnote{http://www.cs.waikato.ac.nz/~ml/weka/} Classification Random Forest. However, we use fewer numbers of features which we found to be more effective. We determined the effectiveness of the features by running the classification algorithm using the fewer features we implemented and their features. Our feature implementations achieved better results. The total numbers of features we used are 13 and are listed below.
+In the overall experimental setup, classification, ranking, and
+evaluation are kept constant. Following the settings of
+\cite{balog2013multi}, we use
+WEKA's\footnote{\url{http://www.cs.waikato.ac.nz/~ml/weka/}} Random
+Forest classifier. However, we use fewer features, which we
+found to be more effective. We determined the effectiveness of the
+features by running the classification algorithm with our smaller
+feature set and with their features; our feature implementations
+achieved better results. The 13 features we use are listed below.

-\paragraph{Google's Cross Lingual Dictionary (GCLD)}
+\paragraph*{Google's Cross Lingual Dictionary (GCLD)}
This is a mapping of strings to Wikipedia concepts and vice versa
\cite{spitkovsky2012cross}. We use the probability with which a string
is used as anchor text for a Wikipedia entity.

-\paragraph{jac}
+\paragraph*{jac}
Jaccard similarity between the document and the entity's Wikipedia page

-\paragraph{cos}
+\paragraph*{cos}
Cosine similarity between the document and the entity's Wikipedia page

-\paragraph{kl}
+\paragraph*{kl}
KL-divergence between the document and the entity's Wikipedia page
(these three similarity features are spelled out in the equations
following this list)

- \paragraph{PPR}
+ \paragraph*{PPR}
For each entity, we computed a PPR (Personalized PageRank) score from
a Wikipedia snapshot and we kept the top 100 entities along with the
corresponding scores.

-\paragraph{Surface Form (sForm)}
+\paragraph*{Surface Form (sForm)}
For each Wikipedia entity, we gathered DBpedia name variants. These
are redirects, labels and names.

-\paragraph{Context (contxL, contxR)}
+\paragraph*{Context (contxL, contxR)}
From the WikiLink corpus \cite{singh12:wiki-links}, we collected all
left and right contexts (two sentences to the left and two sentences
to the right) and generated n-grams (unigrams up to 4-grams) for each
left and right context. Finally, we select the 5 most frequent n-grams
for each context.

-\paragraph{FirstPos}
+\paragraph*{FirstPos}
Term position of the first occurrence of the target entity in the document body

-\paragraph{LastPos }
+\paragraph*{LastPos}
Term position of the last occurrence of the target entity in the document body

-\paragraph{LengthBody} Term count of document body
-\paragraph{LengthAnchor} Term count of document anchor
+\paragraph*{LengthBody} Term count of the document body
+\paragraph*{LengthAnchor} Term count of the document anchor text

-\paragraph{FirstPosNorm}
+\paragraph*{FirstPosNorm}
Term position of the first occurrence of the target entity in the
document body normalised by the document length

-\paragraph{MentionsBody }
+\paragraph*{MentionsBody}
Number of occurrences of the target entity in the document body
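For clarity, the three textual similarity features above follow their
standard definitions; the exact term weighting and smoothing used in
the experiments are implementation details not spelled out here.
\[
\mathrm{jac}(d,e)=\frac{|T_d\cap T_e|}{|T_d\cup T_e|},\qquad
\mathrm{cos}(d,e)=\frac{\vec{v}_d\cdot\vec{v}_e}{\|\vec{v}_d\|\,\|\vec{v}_e\|},\qquad
\mathrm{kl}(d\,\|\,e)=\sum_{t}P_d(t)\log\frac{P_d(t)}{P_e(t)},
\]
where $T_d$ and $T_e$ are the term sets, $\vec{v}_d$ and $\vec{v}_e$
the term vectors, and $P_d$ and $P_e$ (smoothed) term distributions of
the document and of the entity's Wikipedia page, respectively.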
@@ -862,7 +984,7 @@ In vital-relevant category (Table \ref{tab:class-vital-relevant}), the performan
-\section{Analysis and Discussion}
+\section{Analysis and Discussion}\label{sec:analysis}

We conducted experiments to study the impacts on recall of

@@ -917,13 +1039,8 @@ Wikipedia's canonical partial is the best entity profile for Wikipedia entities.
-<<<<<<< HEAD
-The deltas between entity profiles, relevance ratings, and document categories reveal four differences between Wikipedia and Twitter entities. 1) For Wikipedia entities, the difference between canonical partial and canonical is higher(16.1\%) than between name-variant partial and name-variant(18.3\%). This can be explained by saturation. This is to mean that documents have already been extracted by name-variants and thus using their partials does not bring in many new relevant documents. 2) Twitter entities are mentioned by name-variant or name-variant partial and that is seen in the high recall achieved compared to the low recall achieved by canonical(or their partial). This indicates that documents (specially news and others) almost never use user names to refer to Twitter entities. Name-variant partials are the best entity profiles for Twitter entities. 3) However, comparatively speaking, social documents refer to Twitter entities by their user names than news and others suggesting a difference in adherence to standard in names and naming. 4) Wikipedia entities achieve higher recall and higher overall performance.
-=======
The deltas between entity profiles, relevance ratings, and document
categories reveal four differences between Wikipedia and Twitter
entities. 1) For Wikipedia entities, the difference between canonical
partial and canonical is higher (16.1\%) than between name-variant
partial and name-variant (8.3\%). This can be explained by saturation:
documents have already been retrieved by the name-variants, so using
their partials does not bring in many new relevant documents.
2) Twitter entities are mentioned by their name-variants or
name-variant partials, as seen in the high recall these profiles
achieve compared to the low recall of canonicals (or their partials).
This indicates that documents (especially news and others) almost
never use user names to refer to Twitter entities. Name-variant
partials are the best entity profiles for Twitter entities.
3) However, comparatively speaking, social documents refer to Twitter
entities by their user names more than news and others documents do,
suggesting a difference in adherence to naming conventions.
4) Wikipedia entities achieve higher recall and higher overall
performance.
->>>>>>> 60fbfbab0287ab72519987bdcba3adb5a0aa93c8

The high recall and consequently higher overall performance of
Wikipedia entities can be attributed to two reasons. 1) Wikipedia
entities are better described than Twitter entities. The fact that we
can retrieve different name variants from DBpedia is an indication of
this richer description. Rich descriptions play a role both in
filtering and in the computation of features such as similarity
measures in later stages of the pipeline. By contrast, we have only
two names for Twitter entities: their user names and the display
names we collect from their Twitter pages. 2) There is no
DBpedia-like resource for Twitter entities from which alternative
names can be collected.