From beb845cb846b64108e4f9ccbe226f23e304b3138 2014-06-12 03:02:24
From: Arjen P. de Vries
Date: 2014-06-12 03:02:24
Subject: [PATCH] Merge branch 'master' of https://scm.cwi.nl/IA/cikm-paper

---

diff --git a/mypaper-final.tex b/mypaper-final.tex
index b71c3162484ab5534b5d9c4fcda4fe9d136fecae..e3a846f5486ea520725dd03875302da3ed6a295b 100644
--- a/mypaper-final.tex
+++ b/mypaper-final.tex
@@ -128,8 +128,8 @@ relevance of the document-entity pair under consideration.
 We analyze how these factors (and the design choices made in their
 corresponding system components) affect filtering performance. We
 identify and characterize the relevant documents that do not pass
-the filtering stage by examing their contents. This way, we
+the filtering stage by examining their contents. This way, we
 estimate a practical upper-bound of recall for entity-centric stream
 filtering.
 \end{abstract}
@@ -620,7 +620,59 @@ The tables in \ref{tab:name} and \ref{tab:source-delta} show recall for Wikipedia
 \section{Impact on classification}
-% In the overall experimental setup, classification, ranking, and evaluationn are kept constant.
+In the overall experimental setup, classification, ranking, and
+evaluation are kept constant. Following the settings of
+\cite{balog2013multi}, we use WEKA's\footnote{http://www.cs.waikato.ac.nz/\textasciitilde ml/weka/}
+Random Forest classifier. However, we use fewer features, which we
+found to be more effective: we determined this by running the
+classification algorithm with our reduced feature set and with their
+original features, and our feature implementations achieved better
+results. In total we use 13 features, listed below.
+
+\paragraph{Google's Cross-Lingual Dictionary (GCLD)}
+This is a mapping of strings to Wikipedia concepts and vice versa
+\cite{spitkovsky2012cross}. We use the probability with which a string
+is used as anchor text for a Wikipedia entity.
+
+\paragraph{jac}
+Jaccard similarity between the document and the entity's Wikipedia page.
+
+\paragraph{cos}
+Cosine similarity between the document and the entity's Wikipedia page.
+
+\paragraph{kl}
+KL-divergence between the document and the entity's Wikipedia page.
+
+\paragraph{PPR}
+For each entity, we computed a PPR (personalized PageRank) score from
+a Wikipedia snapshot and kept the top 100 entities along with the
+corresponding scores.
+
+\paragraph{Surface Form (sForm)}
+For each Wikipedia entity, we gathered DBpedia name variants: redirects,
+labels, and names.
+
+\paragraph{Context (contxL, contxR)}
+From the WikiLinks corpus \cite{singh12:wiki-links}, we collected all
+left and right contexts (two sentences to the left and two sentences to
+the right) and generated n-grams (unigrams up to 4-grams) for each left
+and right context. Finally, we selected the 5 most frequent n-grams for
+each context.
+
+\paragraph{FirstPos}
+Term position of the first occurrence of the target entity in the
+document body.
+
+\paragraph{LastPos}
+Term position of the last occurrence of the target entity in the
+document body.
+
+\paragraph{LengthBody} Term count of the document body.
+\paragraph{LengthAnchor} Term count of the document anchor.
+
+\paragraph{FirstPosNorm}
+Term position of the first occurrence of the target entity in the
+document body, normalised by the document length.
+
+\paragraph{MentionsBody}
+Number of occurrences of the target entity in the document body.
+
+In summary, the features include similarity features such as cosine and
+Jaccard, document-entity features such as whether the document mentions
+the entity in its title or body and the frequency of mention, and
+related-entity features such as PageRank scores.
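The similarity and positional features above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: it assumes whitespace-tokenized text, a single-token entity mention, and additive smoothing for the KL term; all function names are ours.

```python
from collections import Counter
import math

def jaccard(doc_tokens, ent_tokens):
    """Jaccard similarity over token sets (feature: jac)."""
    a, b = set(doc_tokens), set(ent_tokens)
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(doc_tokens, ent_tokens):
    """Cosine similarity over raw term-frequency vectors (feature: cos)."""
    da, db = Counter(doc_tokens), Counter(ent_tokens)
    dot = sum(da[t] * db[t] for t in da)
    na = math.sqrt(sum(v * v for v in da.values()))
    nb = math.sqrt(sum(v * v for v in db.values()))
    return dot / (na * nb) if na and nb else 0.0

def kl_divergence(doc_tokens, ent_tokens, alpha=0.5):
    """KL(doc || entity) between smoothed unigram distributions (feature: kl)."""
    da, db = Counter(doc_tokens), Counter(ent_tokens)
    vocab = set(da) | set(db)
    n_a = sum(da.values()) + alpha * len(vocab)
    n_b = sum(db.values()) + alpha * len(vocab)
    kl = 0.0
    for t in vocab:
        p = (da[t] + alpha) / n_a
        q = (db[t] + alpha) / n_b
        kl += p * math.log(p / q)
    return kl

def position_features(doc_tokens, mention):
    """FirstPos, LastPos, FirstPosNorm, and MentionsBody for one mention token."""
    positions = [i for i, t in enumerate(doc_tokens) if t == mention]
    if not positions:
        return {"FirstPos": -1, "LastPos": -1, "FirstPosNorm": -1.0, "MentionsBody": 0}
    return {
        "FirstPos": positions[0],
        "LastPos": positions[-1],
        "FirstPosNorm": positions[0] / len(doc_tokens),
        "MentionsBody": len(positions),
    }
```

In a pipeline like the one described, these values would be computed per document-entity pair and concatenated with the GCLD, PPR, and context features into the classifier's input vector.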
+Here, we present results showing how the choices in corpus, entity
+types, and entity profiles impact these later stages of the pipeline.
+In Tables \ref{tab:class-vital} and \ref{tab:class-vital-relevant}, we
+show performance in max-F.
 \begin{table*}
 \caption{Vital performance under different name variants (upper part from cleansed, lower part from raw)}
@@ -836,12 +888,8 @@ Wikipedia's canonical partial is the best entity profile for Wikipedia entities.
-<<<<<<< HEAD
-=======
->>>>>>> 60fbfbab0287ab72519987bdcba3adb5a0aa93c8
+The deltas between entity profiles, relevance ratings, and document categories reveal four differences between Wikipedia and Twitter entities.
+1) For Wikipedia entities, the difference between canonical partial and canonical (16.1\%) is larger than that between name-variant partial and name-variant (8.3\%). This can be explained by saturation: documents have already been retrieved by the name-variants, so adding their partials does not bring in many new relevant documents.
+2) Twitter entities are mentioned by name-variant or name-variant partial, as seen in the high recall those profiles achieve compared to the low recall of canonical (or canonical partial). This indicates that documents (especially news and others) almost never use user names to refer to Twitter entities. Name-variant partials are the best entity profiles for Twitter entities.
+3) Comparatively, however, social documents refer to Twitter entities by their user names more often than news and others do, suggesting a difference in adherence to naming conventions.
+4) Wikipedia entities achieve higher recall and higher overall performance.
 The high recall and subsequently higher overall performance of Wikipedia entities can be due to two reasons. 1) Wikipedia entities are relatively better described than Twitter entities. The fact that we can retrieve different name variants from DBpedia reflects this richer description, which plays a role both in filtering and in the computation of features such as similarity measures in later stages of the pipeline. By contrast, we have only two names for each Twitter entity: the user name and the display name, which we collect from its Twitter page. 2) There is no DBpedia-like resource for Twitter entities from which alternative names can be collected.
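The tables above report max-F, i.e. the maximum F1 obtainable over all confidence cutoffs of the ranked system output. A minimal sketch of that computation, assuming each candidate document carries a classifier confidence and a binary relevance label (names are illustrative):

```python
def max_f1(scored):
    """scored: list of (confidence, is_relevant) pairs, is_relevant in {0, 1}.
    Sweeps the confidence cutoff down the ranking and returns the best F1."""
    scored = sorted(scored, key=lambda x: -x[0])  # highest confidence first
    total_rel = sum(rel for _, rel in scored)
    best, tp = 0.0, 0
    for i, (_, rel) in enumerate(scored, start=1):
        tp += rel  # everything ranked at or above position i is "accepted"
        precision = tp / i
        recall = tp / total_rel if total_rel else 0.0
        if precision + recall:
            best = max(best, 2 * precision * recall / (precision + recall))
    return best
```

Because the cutoff is chosen per run, max-F rewards systems whose confidence scores separate relevant from non-relevant documents well, which is why recall losses in the filtering stage cap the achievable max-F downstream.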