From c63611b82d0595c54759bbb0301662a9914a3380 2014-06-12 05:02:12 From: Arjen P. de Vries Date: 2014-06-12 05:02:12 Subject: [PATCH] hopefully fixed the conflicts Merge branch 'master' of https://scm.cwi.nl/IA/cikm-paper Conflicts: mypaper-final.tex --- diff --git a/mypaper-final.tex b/mypaper-final.tex index 50ec63fba60cc61b284fdac6e6fdaf6e0166d24e..1c0cc9aae49531d9f60ffe2db427074a21b696ea 100644 --- a/mypaper-final.tex +++ b/mypaper-final.tex @@ -30,7 +30,7 @@ \usepackage{booktabs} \usepackage{multirow} \usepackage{todonotes} -\usepackage{url} +\usepackage{url} \begin{document} @@ -65,7 +65,7 @@ % without further effort on your part as the last section in % the body of your article BEFORE References or any Appendices. -\numberofauthors{2} % in this sample file, there are a *total* +\numberofauthors{8} % in this sample file, there are a *total* % of EIGHT authors. SIX appear on the 'first-page' (for formatting % reasons) and the remaining two appear in the \additionalauthors section. % @@ -131,7 +131,7 @@ corresponding system components) affect filtering performance. We identify and characterize the relevant documents that do not pass the filtering stage by examining their contents. This way, we estimate a practical upper-bound of recall for entity-centric stream -filtering. +filtering. \end{abstract} % A category with the (minimum) three required fields @@ -150,12 +150,7 @@ In 2012, the Text REtrieval Conferences (TREC) introduced the Knowledge Base Acc Filtering is a crucial step in CCR for selecting a potentially relevant set of working documents for subsequent steps of the - pipeline out of a big collection of stream documents. The TREC - Filtering track defines filtering as a ``system that sifts through - stream of incoming information to find documents that are relevant to - a set of user needs represented by profiles'' - \cite{robertson2002trec}. -In the specific setting of CCR, these profiles are + pipeline out of a big collection of stream documents. Filtering sifts through the incoming stream for information relevant to user profiles \cite{robertson2002trec}. In the specific setting of CCR, these profiles are represented by persistent KB entities (Wikipedia pages or Twitter users, in the TREC scenario). @@ -180,7 +175,7 @@ Also, different approaches use different entity profiles for filtering, varying from using just the KB entities' canonical names to looking up DBpedia name variants, and from using the bold words in the first paragraph of the Wikipedia -entities’ page to using anchor texts from other Wikipedia pages, and from +entities' page to using anchor texts from other Wikipedia pages, and from using the exact name as given to WordNet derived synonyms. The type of entities (Wikipedia or Twitter) and the category of documents in which they occur (news, blogs, or tweets) cause further variations. @@ -210,36 +205,35 @@ compromising precision, describing and classifying relevant documents that are not amenable to filtering, and estimating the upper-bound of recall on entity-based filtering. -The rest of the paper is is organized as follows: - -\textbf{TODO!!} - - \section{Data Description} -We base this analysis on the TREC-KBA 2013 dataset% -\footnote{\url{http://trec-kba.org/trec-kba-2013.shtml}} -that consists of three main parts: a time-stamped stream corpus, a set of -KB entities to be curated, and a set of relevance judgments. A CCR -system now has to identify for each KB entity which documents in the -stream corpus are to be considered by the human curator.
- -\subsection{Stream corpus} The stream corpus comes in two versions: -raw and cleaned. The raw and cleansed versions are 6.45TB and 4.5TB -respectively, after xz-compression and GPG encryption. The raw data -is a dump of raw HTML pages. The cleansed version is the raw data -after its HTML tags are stripped off and only English documents -identified with Chromium Compact Language Detector -\footnote{\url{https://code.google.com/p/chromium-compact-language-detector/}} -are included. The stream corpus is organized in hourly folders each -of which contains many chunk files. Each chunk file contains between -hundreds and hundreds of thousands of serialized thrift objects. One -thrift object is one document. A document could be a blog article, a -news article, or a social media post (including tweet). The stream -corpus comes from three sources: TREC KBA 2012 (social, news and -linking) \footnote{\url{http://trec-kba.org/kba-stream-corpus-2012.shtml}}, -arxiv\footnote{\url{http://arxiv.org/}}, and -spinn3r\footnote{\url{http://spinn3r.com/}}. -Table \ref{tab:streams} shows the sources, the number of hourly -directories, and the number of chunk files. +The rest of the paper is organized as follows. Section \ref{sec:desc} describes the dataset and Section \ref{sec:fil} defines the task. In Section \ref{sec:lit}, we discuss related literature, followed by a description of our method in Section \ref{sec:mthd}. We then present the experimental results in Section \ref{sec:expr} and discuss and analyze them in Section \ref{sec:analysis}. Towards the end, we discuss the impact of filtering choices on classification in Section \ref{sec:impact} and examine and categorize unfilterable documents in Section \ref{sec:unfil}. Finally, we present our conclusions in Section \ref{sec:conc}. + + + \section{Data Description}\label{sec:desc} +We base this analysis on the TREC-KBA 2013 dataset% +\footnote{\url{http://trec-kba.org/trec-kba-2013.shtml}} +that consists of three main parts: a time-stamped stream corpus, a set of +KB entities to be curated, and a set of relevance judgments. A CCR +system now has to identify for each KB entity which documents in the +stream corpus are to be considered by the human curator. + +\subsection{Stream corpus} The stream corpus comes in two versions: +raw and cleansed. The raw and cleansed versions are 6.45TB and 4.5TB +respectively, after xz-compression and GPG encryption. The raw data +is a dump of raw HTML pages. The cleansed version is the raw data +after its HTML tags are stripped off and only English documents +identified with the Chromium Compact Language Detector +\footnote{\url{https://code.google.com/p/chromium-compact-language-detector/}} +are included. The stream corpus is organized in hourly folders each +of which contains many chunk files. Each chunk file contains between +hundreds and hundreds of thousands of serialized thrift objects. One +thrift object is one document. A document can be a blog article, a +news article, or a social media post (including tweets). The stream +corpus comes from three sources: TREC KBA 2012 (social, news and +linking)\footnote{\url{http://trec-kba.org/kba-stream-corpus-2012.shtml}}, +arxiv\footnote{\url{http://arxiv.org/}}, and +spinn3r\footnote{\url{http://spinn3r.com/}}. +Table \ref{tab:streams} shows the sources, the number of hourly +directories, and the number of chunk files. \begin{table} \caption{Stream corpus sources, with the number of hourly directories and chunk files} \begin{center} @@ -307,7 +301,7 @@ relevant annotations are social (social and weblog) (63.13\%).
Social and news documents together make up 93\% of all annotations. The rest make up about 7\% and are all grouped as others. - \section{Stream Filtering} + \section{Stream Filtering}\label{sec:fil} The TREC Filtering track defines filtering as a ``system that sifts through stream of incoming information to find documents that are @@ -368,7 +362,7 @@ pipeline performance we use the official TREC KBA evaluation metric and scripts \cite{frank2013stream} to report max-F, the maximum F-score obtained over all relevance cut-offs. -\section{Literature Review} +\section{Literature Review} \label{sec:lit} There has been a great deal of interest of late in entity-based filtering and ranking. One manifestation of that is the introduction of TREC KBA in 2012. Following that, a number of research works have been done on the topic \cite{frank2012building, ceccarelli2013learning, taneva2013gem, wang2013bit, balog2013multi}. These works are based on the KBA 2012 task and dataset and address the whole problem of entity filtering and ranking. TREC KBA continued in 2013, but the task underwent some changes. The main changes between 2012 and 2013 are in the number of entities, the type of entities, the corpus, and the relevance ratings. The number of entities increased from 29 to 141, and now includes 20 Twitter entities. The TREC KBA 2012 corpus is 1.9TB after xz-compression and has 400M documents. By contrast, the KBA 2013 corpus is 6.45TB after xz-compression and GPG encryption. A version with all non-English documents removed is 4.5TB and consists of 1 billion documents. The 2013 corpus subsumed the 2012 corpus and added other sources from spinn3r, namely mainstream news, forum, arxiv, classified, reviews and meme-tracker. A more important difference, however, is a change in the definitions of the relevance ratings vital and relevant. While in KBA 2012 a document was judged vital if it had citation-worthy content for a given entity, in 2013 it must also be fresh, that is, the content must trigger an edit of the given entity's KB entry. @@ -379,7 +373,7 @@ All of the studies used filtering as their first step to generate a smaller set Moreover, there has been no study at this scale into what types of documents defy filtering and why. In this paper, we conduct a manual examination of the documents that are missing and classify them into different categories. We also estimate the general upper bound of recall using the different entity profiles and choose the best profile that results in an increased overall performance as measured by F-measure. -\section{Method} +\section{Method}\label{sec:mthd} All analyses in this paper are carried out on the documents that have relevance assessments associated to them. For this purpose, we extracted those documents from the big corpus. We experiment with all @@ -388,16 +382,16 @@ from DBpedia and Twitter. \ \subsection{Entity Profiling} -We build entity profiles for the KB entities of interest. We have two -types: Twitter and Wikipedia. Both entities have been selected, on -purpose by the track organisers, to occur only sparsely and be less-documented. -For the Wikipedia entities, we fetch different name variants -from DBpedia: name, label, birth name, alternative names, -redirects, nickname, or alias. -These extraction results are summarized in Table -\ref{tab:sources}. -For the Twitter entities, we visit -their respective Twitter pages and fetch their display names. +We build entity profiles for the KB entities of interest.
We have two +types: Twitter and Wikipedia. Both entities have been selected, on +purpose by the track organisers, to occur only sparsely and be less-documented. +For the Wikipedia entities, we fetch different name variants +from DBpedia: name, label, birth name, alternative names, +redirects, nickname, or alias. +These extraction results are summarized in Table +\ref{tab:sources}. +For the Twitter entities, we visit +their respective Twitter pages and fetch their display names. \begin{table} \caption{Number of different DBpedia name variants} \begin{center} @@ -420,29 +414,29 @@ Redirect &49 \\ \end{table} -The collection contains a total number of 121 Wikipedia entities. -Every entity has a corresponding DBpedia label. Only 82 entities have -a name string and only 49 entities have redirect strings. (Most of the -entities have only one string, except for a few cases with multiple -redirect strings; Buddy\_MacKay, has the highest (12) number of -redirect strings.) - -We combine the different name variants we extracted to form a set of -strings for each KB entity. For Twitter entities, we used the display -names that we collected. We consider the names of the entities that -are part of the URL as canonical. For example in entity\\ -\url{http://en.wikipedia.org/wiki/Benjamin_Bronfman}\\ -Benjamin Bronfman is a canonical name of the entity. -An example is given in Table \ref{tab:profile}. - -From the combined name variants and -the canonical names, we created four sets of profiles for each -entity: canonical(cano) canonical partial (cano-part), all name -variants combined (all) and partial names of all name -variants(all-part). We refer to the last two profiles as name-variant -and name-variant partial. The names in parentheses are used in table -captions. - +The collection contains a total number of 121 Wikipedia entities. +Every entity has a corresponding DBpedia label. Only 82 entities have +a name string and only 49 entities have redirect strings. (Most of the +entities have only one string, except for a few cases with multiple +redirect strings; Buddy\_MacKay, has the highest (12) number of +redirect strings.) + +We combine the different name variants we extracted to form a set of +strings for each KB entity. For Twitter entities, we used the display +names that we collected. We consider the names of the entities that +are part of the URL as canonical. For example in entity\\ +\url{http://en.wikipedia.org/wiki/Benjamin_Bronfman}\\ +Benjamin Bronfman is a canonical name of the entity. +An example is given in Table \ref{tab:profile}. + +From the combined name variants and +the canonical names, we created four sets of profiles for each +entity: canonical(cano) canonical partial (cano-part), all name +variants combined (all) and partial names of all name +variants(all-part). We refer to the last two profiles as name-variant +and name-variant partial. The names in parentheses are used in table +captions. + \begin{table*} \caption{Example entity profiles (upper part Wikipedia, lower part Twitter)} @@ -500,15 +494,15 @@ The annotation set is a combination of the annotations from before the Training -%Most (more than 80\%) of the annotation documents are in the test set. -The 2013 training and test data contain 68405 -annotations, of which 50688 are unique document-entity pairs. Out of -these, 24162 unique document-entity pairs are vital (9521) or relevant -(17424). +%Most (more than 80\%) of the annotation documents are in the test set. 
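+Before turning to the annotation statistics, the following sketch makes the four
+entity profiles defined in the entity-profiling subsection above concrete. It is
+our own illustration, not the official TREC KBA code; in particular, whether the
+full names are kept alongside their tokens in the partial profiles, and the
+example variant list, are assumptions made only for illustration.
+\begin{verbatim}
+def canonical_name(entity_url):
+    # "http://en.wikipedia.org/wiki/Benjamin_Bronfman" -> "Benjamin Bronfman"
+    return entity_url.rstrip("/").rsplit("/", 1)[-1].replace("_", " ")
+
+def partials(names):
+    # Partial names: the individual tokens of every name in the set.
+    return {tok for name in names for tok in name.split()}
+
+def build_profiles(entity_url, name_variants):
+    cano = {canonical_name(entity_url)}
+    allv = cano | set(name_variants)  # DBpedia variants (or Twitter display name)
+    return {"cano": cano,
+            "cano-part": cano | partials(cano),
+            "all": allv,
+            "all-part": allv | partials(allv)}
+
+# Hypothetical variant list, for illustration only:
+profiles = build_profiles("http://en.wikipedia.org/wiki/Benjamin_Bronfman",
+                          ["Benjamin Z. Bronfman"])
+\end{verbatim}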
+The 2013 training and test data contain 68405 +annotations, of which 50688 are unique document-entity pairs. Out of +these, 24162 unique document-entity pairs are vital (9521) or relevant +(17424). -\section{Experiments and Results} +\section{Experiments and Results}\label{sec:expr} We conducted experiments to study the effect of cleansing, different entity profiles, types of entities, category of documents, relevance ranks (vital or relevant), and the impact on classification. In the following subsections, we present the results in different categories, and describe them. \subsection{Cleansing: raw or cleansed} @@ -617,12 +611,12 @@ If we look at the recall performances for the raw corpus, filtering documents \subsection{ Relevance Rating: vital and relevant} -When comparing recall for vital and relevant, we observe that -canonical names are more effective for vital than for relevant -entities, in particular for the Wikipedia entities. -%For example, the recall for news is 80.1 and for social is 76, while the corresponding recall in relevant is 75.6 and 63.2 respectively. -We conclude that the most relevant documents mention the -entities by their common name variants. +When comparing recall for vital and relevant, we observe that +canonical names are more effective for vital than for relevant +entities, in particular for the Wikipedia entities. +%For example, the recall for news is 80.1 and for social is 76, while the corresponding recall in relevant is 75.6 and 63.2 respectively. +We conclude that the most relevant documents mention the +entities by their common name variants. % \subsection{Difference by document categories} % @@ -635,17 +629,17 @@ entities by their common name variants. \subsection{Recall across document categories: others, news and social} -The recall for Wikipedia entities in Table \ref{tab:name} ranged from -61.8\% (canonicals) to 77.9\% (name-variants). Table -\ref{tab:source-delta} shows how recall is distributed across document -categories. For Wikipedia entities, across all entity profiles, others -have a higher recall followed by news, and then by social. While the -recall for news ranges from 76.4\% to 98.4\%, the recall for social -documents ranges from 65.7\% to 86.8\%. In Twitter entities, however, -the pattern is different. In canonicals (and their partials), social -documents achieve higher recall than news. +The recall for Wikipedia entities in Table \ref{tab:name} ranged from +61.8\% (canonicals) to 77.9\% (name-variants). Table +\ref{tab:source-delta} shows how recall is distributed across document +categories. For Wikipedia entities, across all entity profiles, others +have a higher recall followed by news, and then by social. While the +recall for news ranges from 76.4\% to 98.4\%, the recall for social +documents ranges from 65.7\% to 86.8\%. In Twitter entities, however, +the pattern is different. In canonicals (and their partials), social +documents achieve higher recall than news. %This indicates that social documents refer to Twitter entities by their canonical names (user names) more than news do. In name- variant partial, news achieve better results than social. The difference in recall between canonicals and name-variants show that news do not refer to Twitter entities by their user names, they refer to them by their display names. 
-Overall, across all entities types and all entity profiles, documents +Overall, across all entities types and all entity profiles, documents in the others category achieve a higher recall than news, and news documents, in turn, achieve higher recall than social documents. % This suggests that social documents are the hardest to retrieve. This makes sense since social posts such as tweets and blogs are short and are more likely to point to other resources, or use short informal names. @@ -672,59 +666,59 @@ in the others category achieve a higher recall than news, and news documents, in % all\_part in relevant. \subsection{Entity Types: Wikipedia and Twitter} -Table \ref{tab:name} summarizes the differences between Wikipedia and -Twitter entities. Wikipedia entities' canonical representation -achieves a recall of 70\%, while canonical partial achieves a recall of 86.1\%. This is an -increase in recall of 16.1\%. By contrast, the increase in recall of -name-variant partial over name-variant is 8.3\%. -%This high increase in recall when moving from canonical names to their -%partial names, in comparison to the lower increase when moving from -%all name variants to their partial names can be explained by -%saturation: documents have already been extracted by the different -%name variants and thus using their partial names do not bring in many -%new relevant documents. -For Wikipedia entities, canonical -partial achieves better recall than name-variant in both the cleansed and -the raw corpus. %In the raw extraction, the difference is about 3.7. -In Twitter entities, recall of canonical matching is very low.% -\footnote{Canonical -and canonical partial are the same for Twitter entities because they -are one word strings. For example in https://twitter.com/roryscovel, -``roryscovel`` is the canonical name and its partial is identical.} -%The low recall is because the canonical names of Twitter entities are -%not really names; they are usually arbitrarily created user names. It -%shows that documents refer to them by their display names, rarely -%by their user name, which is reflected in the name-variant recall -%(67.9\%). The use of name-variant partial increases the recall to -%88.2\%. - - - -The tables in \ref{tab:name} and \ref{tab:source-delta} show a higher recall -for Wikipedia than for Twitter entities. Generally, at both -aggregate and document category levels, we observe that recall -increases as we move from canonicals to canonical partial, to -name-variant, and to name-variant partial. The only case where this -does not hold is in the transition from Wikipedia's canonical partial -to name-variant. At the aggregate level (as can be inferred from Table -\ref{tab:name}), the difference in performance between canonical and -name-variant partial is 31.9\% on all entities, 20.7\% on Wikipedia -entities, and 79.5\% on Twitter entities. - -Section \ref{sec:analysis} discusses the most plausible explanations for these findings. +Table \ref{tab:name} summarizes the differences between Wikipedia and +Twitter entities. Wikipedia entities' canonical representation +achieves a recall of 70\%, while canonical partial achieves a recall of 86.1\%. This is an +increase in recall of 16.1\%. By contrast, the increase in recall of +name-variant partial over name-variant is 8.3\%. 
+%This high increase in recall when moving from canonical names to their +%partial names, in comparison to the lower increase when moving from +%all name variants to their partial names can be explained by +%saturation: documents have already been extracted by the different +%name variants and thus using their partial names do not bring in many +%new relevant documents. +For Wikipedia entities, canonical +partial achieves better recall than name-variant in both the cleansed and +the raw corpus. %In the raw extraction, the difference is about 3.7. +For Twitter entities, recall of canonical matching is very low.% +\footnote{Canonical +and canonical partial are the same for Twitter entities because they +are one-word strings. For example, in \url{https://twitter.com/roryscovel}, +``roryscovel'' is the canonical name and its partial is identical.} +%The low recall is because the canonical names of Twitter entities are +%not really names; they are usually arbitrarily created user names. It +%shows that documents refer to them by their display names, rarely +%by their user name, which is reflected in the name-variant recall +%(67.9\%). The use of name-variant partial increases the recall to +%88.2\%. + + + +Tables \ref{tab:name} and \ref{tab:source-delta} show a higher recall +for Wikipedia than for Twitter entities. Generally, at both +aggregate and document category levels, we observe that recall +increases as we move from canonicals to canonical partial, to +name-variant, and to name-variant partial. The only case where this +does not hold is in the transition from Wikipedia's canonical partial +to name-variant. At the aggregate level (as can be inferred from Table +\ref{tab:name}), the difference in performance between canonical and +name-variant partial is 31.9\% on all entities, 20.7\% on Wikipedia +entities, and 79.5\% on Twitter entities. + +Section \ref{sec:analysis} discusses the most plausible explanations for these findings. %% TODO: PERHAPS SUMMARY OF DISCUSSION HERE - -\section{Impact on classification} -In the overall experimental setup, classification, ranking, and -evaluation are kept constant. Following \cite{balog2013multi} -settings, we use -WEKA's\footnote{\url{http://www.cs.waikato.ac.nz/~ml/weka/}} Classification -Random Forest. However, we use fewer numbers of features which we -found to be more effective. We determined the effectiveness of the -features by running the classification algorithm using the fewer -features we implemented and their features. Our feature -implementations achieved better results. The total numbers of -features we used are 13 and are listed below. + +\section{Impact on classification}\label{sec:impact} +In the overall experimental setup, classification, ranking, and +evaluation are kept constant. Following the settings of \cite{balog2013multi}, +we use WEKA's\footnote{\url{http://www.cs.waikato.ac.nz/~ml/weka/}} Random Forest +classifier. However, we use a smaller set of features, which we +found to be more effective: we compared classification runs using our own +feature implementations against runs using their features, and our +implementations achieved better results. In total, we use the 13 +features listed below. \paragraph*{Google's Cross Lingual Dictionary (GCLD)} @@ -988,7 +982,9 @@ Twitter entities, the name-variant partial profile achieves the highest F-score across all entity profiles and types of corpus.
-Cleansing impacts Twitter +There are 3 interesting observations: + +1) cleansing impacts Twitter entities and relevant documents. This is validated by the observation that recall gains in Twitter entities and the relevant categories in the raw corpus also translate into overall performance @@ -1040,80 +1036,26 @@ step. -The deltas between entity profiles, relevance ratings, and document -categories reveal four differences between Wikipedia and Twitter -entities. 1) For Wikipedia entities, the difference between canonical -partial and canonical is higher(16.1\%) than between name-variant -partial and name-variant(8.3\%). This can be explained by -saturation. This is to mean that documents have already been extracted -by name-variants and thus using their partials does not bring in many -new relevant documents. 2) Twitter entities are mentioned by -name-variant or name-variant partial and that is seen in the high -recall achieved compared to the low recall achieved by canonical(or -their partial). This indicates that documents (specially news and -others) almost never use user names to refer to Twitter -entities. Name-variant partials are the best entity profiles for -Twitter entities. 3) However, comparatively speaking, social documents -refer to Twitter entities by their user names than news and others -suggesting a difference in adherence to standard in names and naming. -4) Wikipedia entities achieve higher recall and higher overall performance. - -The high recall and subsequent higher overall performance of Wikipedia -entities can be due to two reasons. 1) Wikipedia entities are -relatively well described than Twitter entities. The fact that we can -retrieve different name variants from DBpedia is a measure of -relatively rich description. Rich description plays a role in both -filtering and computation of features such as similarity measures in -later stages of the pipeline. By contrast, we have only two names -for Twitter entities: their user names and their display names which -we collect from their Twitter pages. 2) There is not DBpedia-like -resource for Twitter entities from which alternative names cane be -collected. - -In the experimental results, we also observed that recall scores in -the vital category are higher than in the relevant category. This -observation confirms one commonly held assumption:(frequency) mention -is related to relevance. this is the assumption why term frequency is -used an indicator of document relevance in many information retrieval -systems. The more a document mentions an entity explicitly by name, -the more likely the document is vital to the entity. - -Across document categories, we observe a pattern in recall of -documents from the ``others'' category, followed by ``news'', and then -by ``social''. The social documents relevant to an entity are the -hardest to retrieve. This can be explained by the fact that social -documents (tweets and blogs) are more likely to point to a resource -where the entity is mentioned, mention the entities with some short -abbreviation, or talk without mentioning the entities, but with some -context in mind. By contrast news documents mention the entities they -talk about using the common name variants more than social documents -do. However, the greater difference in percentage recall between the -different entity profiles in the news category indicates news refer to -a given entity with different names, rather than by one standard -name. By contrast others show least variation in referring to -news. 
Social documents falls in between the two. The deltas, for -Wikipedia entities, between canonical partials and canonicals, and -name-variants and canonicals are high, an indication that canonical -partials -and name-variants bring in new relevant documents that can not be -retrieved by canonicals. The rest of the two deltas are very small, -suggesting that partial names of name variants do not bring in new -relevant documents. - -% Was: \section{Unfilterable documents} -\section{Missing vital-relevant documents \label{miss}} - - The use of name-variant partial for filtering is an aggressive - attempt to retrieve as many relevant documents as possible at the - cost of retrieving irrelevant documents. However, we still miss about - 2363(10\%) of the vital-relevant documents. Why are these documents - never retrieved? If they are not mentioned by partial names of name - variants, what are they mentioned by? Table \ref{tab:miss} summarizes - the number of documents that we miss with respect to cleansed and raw - corpus. The upper part shows the number of documents missing from - cleansed and raw versions of the corpus. The lower part of the table - shows the intersections and exclusions in each corpus. - +The deltas between entity profiles, relevance ratings, and document categories reveal four differences between Wikipedia and Twitter entities. 1) For Wikipedia entities, the difference between canonical partial and canonical is higher (16.1\%) than between name-variant partial and name-variant (8.3\%). This can be explained by saturation: documents have already been extracted by name-variants, and thus using their partials does not bring in many new relevant documents. 2) Twitter entities are mentioned by name-variant or name-variant partial, as seen in the high recall these profiles achieve compared to the low recall achieved by canonical (or its partial). This indicates that documents (especially news and others) almost never use user names to refer to Twitter entities. Name-variant partials are the best entity profiles for Twitter entities. 3) However, comparatively speaking, social documents refer to Twitter entities by their user names more than news and others do, suggesting a difference in +adherence to naming standards. 4) Wikipedia entities achieve higher recall and higher overall performance. + +The high recall and subsequent higher overall performance of Wikipedia entities can be due to two reasons. 1) Wikipedia entities are better described than Twitter entities. The fact that we can retrieve different name variants from DBpedia is a measure of this relatively rich description. Rich description plays a role in both filtering and the computation of features such as similarity measures in later stages of the pipeline. By contrast, we have only two names for Twitter entities: their user names and their display names, which we collect from their Twitter pages. 2) There is no DBpedia-like resource for Twitter entities from which alternative names can be collected. + + +In the experimental results, we also observed that recall scores in the vital category are higher than in the relevant category. This observation confirms one commonly held assumption: frequency of mention is related to relevance. This is the assumption behind using term frequency as an indicator of document relevance in many information retrieval systems. The more a document mentions an entity explicitly by name, the more likely the document is vital to the entity.
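+For clarity, the recall figures and deltas discussed in this section can be read
+as the standard set-based quantities below. This is our reading of how the
+reported percentages are computed, not a formula taken from the official TREC KBA
+evaluation scripts:
+\[
+\mathrm{recall}(p) = \frac{|D_{\mathrm{filtered}}(p) \cap D_{\mathrm{rel}}|}{|D_{\mathrm{rel}}|},
+\qquad
+\Delta(p_1,p_2) = \mathrm{recall}(p_1) - \mathrm{recall}(p_2),
+\]
+where $D_{\mathrm{rel}}$ is the set of vital-relevant document-entity pairs in the
+relevance judgments and $D_{\mathrm{filtered}}(p)$ is the set of pairs whose
+document matches entity profile $p$. For example, for Wikipedia entities,
+$\Delta(\mbox{cano-part},\mbox{cano}) = 86.1\% - 70\% = 16.1\%$.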
+Across document categories, we observe a pattern in recall: others first, followed by news, and then by social. Social documents are the hardest to retrieve. This can be explained by the fact that social documents (tweets and blogs) are more likely to point to a resource where the entity is mentioned, mention the entities by some short abbreviation, or talk about the entities without mentioning them, but with some context in mind. By contrast, news documents mention the entities they talk about using the common name variants more than social documents do. However, the greater difference in percentage recall between the different entity profiles in the news category indicates that news documents refer to a given entity by different names, rather than by one standard name. By contrast, others show the least variation in referring to entities. Social documents fall in between the two. The deltas, for Wikipedia entities, between canonical partials and canonicals, and name-variants and canonicals, are high, an indication that canonical partials +and name-variants bring in new relevant documents that cannot be retrieved by canonicals. The other two deltas are very small, suggesting that partial names of name variants do not bring in new relevant documents. + + +\section{Unfilterable documents}\label{sec:unfil} + +\subsection{Missing vital-relevant documents \label{miss}} + +% + + The use of name-variant partial for filtering is an aggressive attempt to retrieve as many relevant documents as possible at the cost of retrieving irrelevant documents. However, we still miss about 2363 (10\%) of the vital-relevant documents. Why are these documents missed? If they are not mentioned by partial names of name variants, what are they mentioned by? Table \ref{tab:miss} shows the documents that we miss with respect to the cleansed and raw corpora. The upper part shows the number of documents missing from the cleansed and raw versions of the corpus. The lower part of the table shows the intersections and exclusions in each corpus. + \begin{table} \caption{The number of documents missing from raw and cleansed extractions.} \begin{center} @@ -1135,60 +1077,26 @@ Raw & 276 & 4951 & 5227 \\ \end{tabular} \end{center} \label{tab:miss} -\end{table} - -One would usually have assumed that the set of document-entity pairs extracted -from the cleansed part of the corpus would form a sub-set of those -extracted from the raw corpus. Suprisingly, we found this not to be -the case: 217 unique entity-document pairs are retrieved from the -cleansed corpus, but not from the raw one, out of which 57 have been -judged as vital. Similarly, 3081 document-entity pairs only occur in -the raw corpus, with 1065 vital documents among these. Examining the content of these -documents reveals that these ommissions are easily explained by -missing text in the corresponding documents. All the documents that we miss from the raw -corpus are social, like tweets, blogs and posts -from other social media. To meet the format of the raw data (binary -byte array), some of these must have been converted later, after -collection (as a cleansed version has been produced), but affected by -some processing error. For the documents that we miss from the -cleansed corpus, a part of their (or even the -entire) content is lost during the cleansing process (the removal of -HTML tags and non-English documents). In both cases the mention of -the entity happened to be on the part of the text that is cut out -during transformation.
- -The more intriguing set of relevance judgments are those that we miss -from both raw and cleansed extractions, concerning 2146 unique -document-entity pairs, 219 of them assessed as vital to the entity. -The number of entities in the missed vital annotations is 28 -Wikipedia and 7 Twitter, making a total of 35. The great majority -(86.7\%) of these documents are social. This suggests that social -media sources -(tweets and blogs) may discuss these entities without mentioning -them explicitly by name, more than in news and other types of -documents. (This is, of course, in line with intuition.) +\end{table} + +One would assume that the set of document-entity pairs extracted from the cleansed corpus is a subset of those extracted from the raw corpus. We find that this is not the case. There are 217 unique entity-document pairs that are retrieved from the cleansed corpus but not from the raw one; 57 of them are vital. Similarly, 3081 document-entity pairs are missing from the cleansed corpus but present in the raw one; 1065 of them are vital. Examining the content of these documents reveals that the omissions are due to missing text in the corresponding documents. All the documents that we miss from the raw corpus are social: tweets, blogs, and posts from other social media. To meet the format of the raw data (binary byte array), some of them must have been converted after collection, and part or all of their content was lost on the way. The documents that we miss from the cleansed corpus are similar: part or all of their content is lost during the cleansing process (the removal of +HTML tags and non-English documents). In both cases, the mention of the entity happened to be in the part of the text that was cut out during transformation. + + + The more interesting set of relevance judgments are those that we miss from both the raw and cleansed extractions. These are 2146 unique document-entity pairs, 219 of them with vital relevance judgments. The missed vital annotations concern 28 Wikipedia and 7 Twitter entities, 35 in total. The great majority (86.7\%) of the documents are social. This suggests that social documents (tweets and blogs) discuss the entities without mentioning them by name more often than news and others do. This is, of course, in line with intuition. + %%%%%%%%%%%%%%%%%%%%%% -We observed that among the missing documents, different document -ids can have the same content, and be judged multiple times for a -given entity. -%In the vital annotation, there are 88 news, and 409 -%weblog. -Avoiding duplicates, we randomly selected 35 documents, one for each -entity. The documents are 13 news and 22 social. Here below we -have classified the situation under which a document can be vital for -an entity without mentioning the entities with the different entity -profiles we used for filtering. +We observed that there are vital-relevant documents that we miss from the raw corpus only, and similarly from the cleansed corpus only; the reason is the transformation from one format to another. The most interesting documents are those that we miss from both the raw and the cleansed corpus. We identified the KB entities that have a vital relevance judgment whose documents cannot be retrieved (35 in total) and manually examined the documents' content to find out why they are missing. + + + We observed that among the missing documents, different document ids can have the same content, and be judged multiple times for a given entity; one simple way to detect such duplicates is sketched below.
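+A minimal sketch of such a content-hash check (ours, not the actual
+deduplication code used for the study; the tuple layout is hypothetical):
+\begin{verbatim}
+import hashlib
+
+def dedup_judgments(judgments):
+    """judgments: iterable of (entity, doc_id, text) tuples."""
+    seen, unique = set(), []
+    for entity, doc_id, text in judgments:
+        # Two doc ids with identical text for the same entity count once.
+        key = (entity, hashlib.sha1(text.strip().encode("utf-8")).hexdigest())
+        if key not in seen:
+            seen.add(key)
+            unique.append((entity, doc_id, text))
+    return unique
+\end{verbatim}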
%In the vital annotation, there are 88 news, and 409 weblog. + Avoiding duplicates, we randomly selected 35 documents, one for each entity. The documents are 13 news and 22 social. Below, we classify the situations under which a document can be vital for an entity without mentioning the entity by any of the entity profiles we used for filtering. \paragraph*{Outgoing link mentions} A post (tweet) with an outgoing link that mentions the entity. \paragraph*{Event place - Event} A document that talks about an event is vital to the location entity where it takes place. For example, the Maha Music Festival takes place in Lewis and Clark\_Landing, and a document talking about the festival is vital for the park. There are also cases where an event's address places the event in a park, and due to that the document becomes vital to the park. This is basically being mentioned by an address which belongs to a larger space. \paragraph*{Entity - related entity} A document about an important figure such as an artist or athlete can be vital to another. This is especially true if the two are contending for the same title, or if one has snatched a title or award from the other. \paragraph*{Organization - main activity} A document that talks about an area in which the company is active is vital for the organization. For example, Atacocha is a mining company, and a news item on mining waste was annotated vital. \paragraph*{Entity - group} If an entity belongs to a certain group (class), a news item about the group can be vital for the individual members. FrankandOak is named an innovative company, and a news item that talks about the group of innovative companies is relevant for it. Other examples are big events to which an entity is related, such as film awards for actors. @@ -1198,11 +1106,11 @@ to that the document becomes vital to the park. \paragraph*{Head - organization} A document that talks about an organization of which the entity is the head can be vital for the entity. Jasper\_Schneider is USDA Rural Development state director for North Dakota, and an article about problems of primary health centers in North Dakota is judged vital for him. \paragraph*{World Knowledge} Some things are impossible to know without world knowledge. For example, ``refreshments, treats, gift shop specials, "bountiful, fresh and fabulous holiday decor," a demonstration of simple ways to create unique holiday arrangements for any home; free and open to the public'' is judged relevant to Hjemkomst\_Center. This is a social media post, and unless one knows the person posting it, there is no way to tell from the text alone that it concerns the entity. Similarly, ``learn about the gray wolf's hunting and feeding behaviors and watch the wolves have their evening meal of a full deer carcass; \$15 for members, \$20 for nonmembers'' is judged vital to Red\_River\_Zoo. \paragraph*{No document content} A small number of documents were found to have no content. -\paragraph*{Disagreement} For a few remaining documents, the authors disagree with the assessors as to why these are vital to the entity.
- +\paragraph*{Disagreement} For a few remaining documents, the authors disagree with the assessors as to why these are vital to the entity. + -\section{Conclusions} +\section{Conclusions} \label{sec:conc} In this paper, we examined the filtering stage of the entity-centric stream filtering and ranking pipeline, holding the later stages fixed. In particular, we studied the cleansing step, different entity profiles, types of entities (Wikipedia or Twitter), categories of documents (news, social, or others), and the relevance ratings. We attempted to address the following research questions: 1) does cleansing affect filtering and subsequent performance? 2) what is the most effective way of entity profiling? 3) is filtering different for Wikipedia and Twitter entities? 4) are some types of documents easily filterable and others not? 5) does a gain in recall at the filtering step translate to a gain in F-measure at the end of the pipeline? and 6) what are the circumstances under which vital documents cannot be retrieved? Cleansing does remove parts of documents, or even their entire content, making them irretrievable. However, because of the introduction of false positives, recall gains from the raw corpus and from some richer entity profiles do not necessarily translate to an overall performance gain. The conclusion on cleansing is therefore mixed: it helps improve the recall on vital documents and Wikipedia entities, but reduces the recall on Twitter entities and on the relevant category of the relevance ratings. Vital and relevant documents also show a difference in retrieval performance: vital documents are easier to filter than relevant ones.