From 40375de630f55bb24fa6d5ba368fe0b14d2a5e55 2014-06-12 04:44:30
From: Arjen P. de Vries
Date: 2014-06-12 04:44:30
Subject: [PATCH] changes in missing documents section

---
diff --git a/mypaper-final.tex b/mypaper-final.tex
index 9bc262a01824b417a0ccd77513f15dfcace2a875..46d4f173a323cba3dd16cef14498faef72475eef 100644
--- a/mypaper-final.tex
+++ b/mypaper-final.tex
@@ -982,7 +982,15 @@
 transformation and cleansing processes. Taking both performance (recall at filtering and overall F-score during evaluation) into account, there is a clear trade-off between using a richer entity profile and retrieving irrelevant documents. The richer the profile, the more relevant documents it retrieves, but also the more irrelevant ones. To put this into perspective, let us compare the number of documents retrieved with canonical partial and with name-variant partial. Using the raw corpus, the former retrieves a total of 2,547,487 documents and achieves a recall of 72.2\%. By contrast, the latter retrieves a total of 4,735,318 documents and achieves a recall of 90.2\%. The total number of documents extracted thus increases by 85.9\% for a recall gain of 18 percentage points; the remaining 67.9\% of the increase consists of newly introduced irrelevant documents.
-Wikipedia's canonical partial is the best entity profile for Wikipedia entities. This is interesting to see that the retrieval of of thousands vital-relevant document-entity pairs by name-variant partial does not translate to an increase in over all performance. It is even more interesting since canonical partial was not considered as contending profile for stream filtering by any of participant to the best of our knowledge. With this understanding, there is actually no need to go and fetch different names variants from DBpedia, a saving of time and computational resources.
+Wikipedia's canonical partial is the best entity profile for Wikipedia
+entities. It is interesting to see that the retrieval of thousands of
+vital-relevant document-entity pairs by name-variant partial does not
+translate into an increase in overall performance. It is even more
+interesting given that, to the best of our knowledge, canonical
+partial was not considered as a contending profile for stream
+filtering by any participant. With this understanding, there is
+actually no need to fetch different name variants from DBpedia,
+saving both time and computational resources.
 %%%%%%%%%%%%
@@ -990,26 +998,80 @@ Wikipedia's canonical partial is the best entity profile for Wikipedia entities.
-The deltas between entity profiles, relevance ratings, and document categories reveal four differences between Wikipedia and Twitter entities. 1) For Wikipedia entities, the difference between canonical partial and canonical is higher(16.1\%) than between name-variant partial and name-variant(8.3\%). This can be explained by saturation. This is to mean that documents have already been extracted by name-variants and thus using their partials does not bring in many new relevant documents. 2) Twitter entities are mentioned by name-variant or name-variant partial and that is seen in the high recall achieved compared to the low recall achieved by canonical(or their partial). This indicates that documents (specially news and others) almost never use user names to refer to Twitter entities. Name-variant partials are the best entity profiles for Twitter entities. 3) However, comparatively speaking, social documents refer to Twitter entities by their user names than news and others suggesting a difference in
-adherence to standard in names and naming. 4) Wikipedia entities achieve higher recall and higher overall performance.
-
-The high recall and subsequent higher overall performance of Wikipedia entities can be due to two reasons. 1) Wikipedia entities are relatively well described than Twitter entities. The fact that we can retrieve different name variants from DBpedia is a measure of relatively rich description. Rich description plays a role in both filtering and computation of features such as similarity measures in later stages of the pipeline. By contrast, we have only two names for Twitter entities: their user names and their display names which we collect from their Twitter pages. 2) There is not DBpedia-like resource for Twitter entities from which alternative names cane be collected.
-
-
-In the experimental results, we also observed that recall scores in the vital category are higher than in the relevant category. This observation confirms one commonly held assumption:(frequency) mention is related to relevance. this is the assumption why term frequency is used an indicator of document relevance in many information retrieval systems. The more a document mentions an entity explicitly by name, the more likely the document is vital to the entity.
-
-Across document categories, we observe a pattern in recall of others, followed by news, and then by social. Social documents are the hardest to retrieve. This can be explained by the fact that social documents (tweets and blogs) are more likely to point to a resource where the entity is mentioned, mention the entities with some short abbreviation, or talk without mentioning the entities, but with some context in mind. By contrast news documents mention the entities they talk about using the common name variants more than social documents do. However, the greater difference in percentage recall between the different entity profiles in the news category indicates news refer to a given entity with different names, rather than by one standard name. By contrast others show least variation in referring to news. Social documents falls in between the two. The deltas, for Wikipedia entities, between canonical partials and canonicals, and name-variants and canonicals are high, an indication that canonical partials
-and name-variants bring in new relevant documents that can not be retrieved by canonicals. The rest of the two deltas are very small, suggesting that partial names of name variants do not bring in new relevant documents.
-
-
-\section{Unfilterable documents}
-
-\subsection{Missing vital-relevant documents \label{miss}}
-
-%
-
- The use of name-variant partial for filtering is an aggressive attempt to retrieve as many relevant documents as possible at the cost of retrieving irrelevant documents. However, we still miss about 2363(10\%) of the vital-relevant documents. Why are these documents missed? If they are not mentioned by partial names of name variants, what are they mentioned by? Table \ref{tab:miss} shows the documents that we miss with respect to cleansed and raw corpus. The upper part shows the number of documents missing from cleansed and raw versions of the corpus. The lower part of the table shows the intersections and exclusions in each corpus.
-
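+To make the arithmetic of this trade-off explicit:
+% Editor's note: worked restatement of the figures above, not a new
+% measurement; assumes the amsmath package is already loaded.
+\begin{align*}
+(4{,}735{,}318 - 2{,}547{,}487)/2{,}547{,}487 &\approx 85.9\%
+  && \text{(more documents retrieved),}\\
+90.2\% - 72.2\% &= 18.0\% && \text{(recall gained),}\\
+85.9\% - 18.0\% &= 67.9\% && \text{(share of the increase that is irrelevant).}
+\end{align*}
+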
+The deltas between entity profiles, relevance ratings, and document
+categories reveal four differences between Wikipedia and Twitter
+entities. 1) For Wikipedia entities, the difference between canonical
+partial and canonical is higher (16.1\%) than that between
+name-variant partial and name-variant (8.3\%). This can be explained
+by saturation: most relevant documents have already been extracted by
+name-variants, and thus using their partials does not bring in many
+new relevant documents. 2) Twitter entities are mentioned by
+name-variant or name-variant partial, as seen in the high recall
+these profiles achieve compared to the low recall achieved by
+canonical (or canonical partial). This indicates that documents
+(especially news and others) almost never use user names to refer to
+Twitter entities. Name-variant partials are the best entity profiles
+for Twitter entities. 3) However, comparatively speaking, social
+documents refer to Twitter entities by their user names more than
+news and others do, suggesting a difference in adherence to naming
+conventions. 4) Wikipedia entities achieve higher recall and higher
+overall performance.
+
+The high recall and resulting higher overall performance of Wikipedia
+entities can be attributed to two factors. 1) Wikipedia entities are
+better described than Twitter entities. The fact that we can retrieve
+different name variants from DBpedia is itself a sign of this richer
+description. Rich description plays a role both in filtering and in
+the computation of features such as similarity measures in later
+stages of the pipeline. By contrast, we have only two names for
+Twitter entities: their user names and their display names, which we
+collect from their Twitter pages. 2) There is no DBpedia-like
+resource for Twitter entities from which alternative names can be
+collected.
+
+In the experimental results, we also observed that recall scores in
+the vital category are higher than in the relevant category. This
+observation confirms a commonly held assumption: mention frequency is
+related to relevance. This is the assumption underlying the use of
+term frequency as an indicator of document relevance in many
+information retrieval systems. The more often a document mentions an
+entity explicitly by name, the more likely the document is vital to
+the entity.
+
+Across document categories, we observe a consistent ordering in
+recall: documents from the ``others'' category are retrieved with the
+highest recall, followed by ``news'', and then by ``social''. The
+social documents relevant to an entity are the hardest to retrieve.
+This can be explained by the fact that social documents (tweets and
+blogs) are more likely to point to a resource where the entity is
+mentioned, to refer to the entity by some short abbreviation, or to
+discuss the entity without mentioning it at all, relying on shared
+context. By contrast, news documents mention the entities they talk
+about using the common name variants more than social documents do.
+However, the greater difference in percentage recall between the
+different entity profiles in the news category indicates that news
+documents refer to a given entity by several different names, rather
+than by one standard name. By contrast, ``others'' show the least
+variation in how they refer to entities, and social documents fall in
+between the two. The deltas, for Wikipedia entities, between
+canonical partials and canonicals, and between name-variants and
+canonicals, are high, an indication that canonical partials and
+name-variants bring in new relevant documents that cannot be
+retrieved by canonicals. The remaining two deltas are very small,
+suggesting that partial names of name variants do not bring in new
+relevant documents.
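+
+% Editor's note: added for clarity; the notation below is new here.
+(Throughout, the ``delta'' between two entity profiles $p_1$ and
+$p_2$ is their difference in recall,
+$\Delta(p_1,p_2)=\mathrm{recall}(p_1)-\mathrm{recall}(p_2)$; for
+example, $\Delta=16.1\%$ between canonical partial and canonical for
+Wikipedia entities.)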
+
+% Was: \section{Unfilterable documents}
+\section{Missing vital-relevant documents \label{miss}}
+
+ The use of name-variant partial for filtering is an aggressive
+ attempt to retrieve as many relevant documents as possible, at the
+ cost of retrieving irrelevant documents. However, we still miss
+ about 2363 (10\%) of the vital-relevant documents. Why are these
+ documents never retrieved? If they are not mentioned by partial
+ names of name variants, what are they mentioned by? Table
+ \ref{tab:miss} summarizes the number of documents that we miss with
+ respect to the cleansed and raw corpora. The upper part shows the
+ number of documents missing from the cleansed and raw versions of
+ the corpus; the lower part shows the intersections and exclusions
+ in each corpus.
+
 \begin{table}
 \caption{The number of documents missing from raw and cleansed extractions.}
 \begin{center}
@@ -1031,26 +1093,60 @@ Raw & 276 & 4951 & 5227 \\
 \end{tabular}
 \end{center}
 \label{tab:miss}
-\end{table}
-
-One would assume that the set of document-entity pairs extracted from cleansed are a sub-set of those that are extracted from the raw corpus. We find that that is not the case. There are 217 unique entity-document pairs that are retrieved from the cleansed corpus, but not from the raw. 57 of them are vital. Similarly, there are 3081 document-entity pairs that are missing from cleansed, but are present in raw. 1065 of them are vital. Examining the content of the documents reveals that it is due to a missing part of text from a corresponding document. All the documents that we miss from the raw corpus are social. These are documents such as tweets and blogs, posts from other social media. To meet the format of the raw data (binary byte array), some of them must have been converted later, after collection and on the way lost a part or the entire content. It is similar for the documents that we miss from cleansed: a part or the entire content is lost in during the cleansing process (the removal of
-HTML tags and non-English documents). In both cases the mention of the entity happened to be on the part of the text that is cut out during transformation.
-
-
- The interesting set of relevance judgments are those that we miss from both raw and cleansed extractions. These are 2146 unique document-entity pairs, 219 of them are with vital relevance judgments. The total number of entities in the missed vital annotations is 28 Wikipedia and 7 Twitter, making a total of 35. The great majority (86.7\%) of the documents are social. This suggests that social (tweets and blogs) can talk about the entities without mentioning them by name more than news and others do. This is, of course, inline with intuition.
-
+\end{table}
+
+One would expect the set of document-entity pairs extracted from the
+cleansed part of the corpus to form a subset of those extracted from
+the raw corpus. Surprisingly, we found this not to be the case: 217
+unique entity-document pairs are retrieved from the cleansed corpus
+but not from the raw one, out of which 57 have been judged as vital.
+Similarly, 3081 document-entity pairs occur only in the raw corpus,
+with 1065 vital pairs among these. Examining the content of these
+documents reveals that the omissions are easily explained by missing
+text in the corresponding documents.
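+
+% Editor's note: set-notation summary of the counts above; the $M$
+% notation is introduced here for clarity.
+In set terms, writing $M_{\text{raw}}$ and $M_{\text{cleansed}}$ for
+the sets of vital-relevant pairs missed from the raw and cleansed
+extractions respectively, these counts decompose as
+\begin{equation*}
+|M_{\text{cleansed}} \setminus M_{\text{raw}}| = 3081, \quad
+|M_{\text{raw}} \setminus M_{\text{cleansed}}| = 217, \quad
+|M_{\text{raw}} \cap M_{\text{cleansed}}| = 2146,
+\end{equation*}
+so that $|M_{\text{raw}}| = 2146 + 217 = 2363$, the 10\% of
+vital-relevant documents reported missing above.
+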
+All the documents that we miss from the raw corpus are social:
+tweets, blog posts, and posts from other social media. To meet the
+format of the raw data (a binary byte array), some of these must have
+been converted later, after collection (as a cleansed version has
+been produced), but were affected by some processing error. For the
+documents that we miss from the cleansed corpus, part (or even all)
+of their content is lost during the cleansing process (the removal of
+HTML tags and non-English documents). In both cases, the mention of
+the entity happened to be in the part of the text that was cut out
+during transformation.
+
+The more intriguing relevance judgments are those that we miss from
+both the raw and cleansed extractions: 2146 unique document-entity
+pairs, 219 of them assessed as vital to the entity. These missed
+vital annotations concern 28 Wikipedia and 7 Twitter entities, 35 in
+total. The great majority (86.7\%) of these documents are social.
+This suggests that social media sources (tweets and blogs) discuss
+these entities without mentioning them explicitly by name more often
+than news and other types of documents do. (This is, of course, in
+line with intuition.)
 %%%%%%%%%%%%%%%%%%%%%%
-We observed that there are vital-relevant documents that we miss from raw only, and similarly from cleansed only. The reason for this is transformation from one format to another. The most interesting documents are those that we miss from both raw and cleansed corpus. We first identified the number of KB entities who have a vital relevance judgment and whose documents can not be retrieved (they were 35 in total) and conducted a manual examination into their content to find out why they are missing.
-
-
- We observed that among the missing documents, different document ids can have the same content, and be judged multiple times for a given entity. %In the vital annotation, there are 88 news, and 409 weblog.
- Avoiding duplicates, we randomly selected 35 documents, one for each entity. The documents are 13 news and 22 social. Here below we have classified the situation under which a document can be vital for an entity without mentioning the entities with the different entity profiles we used for filtering.
+We observed that among the missing documents, different document ids
+can have the same content and be judged multiple times for a given
+entity.
+%In the vital annotation, there are 88 news, and 409
+%weblog.
+Avoiding duplicates, we randomly selected 35 documents, one for each
+entity: 13 news and 22 social. Below, we classify the situations in
+which a document can be vital for an entity without matching any of
+the entity profiles we used for filtering.
 \paragraph*{Outgoing link mentions} A post (tweet) with an outgoing link that mentions the entity.
-\paragraph*{Event place - Event} A document that talks about an event is vital to the location entity where it takes place. For example Maha Music Festival takes place in Lewis and Clark\_Landing, and a document talking about the festival is vital for the park. There are also cases where an event's address places the event in a park and due to that the document becomes vital to the park. This is basically being mentioned by address which belongs to alarger space.
+\paragraph*{Event place - Event} A document that talks about an event
+is vital to the location entity where it takes place.
+For example, the Maha Music Festival takes place in
+Lewis and Clark\_Landing, and a document talking about the festival
+is vital for the park. There are also cases where an event's address
+places the event in a park, and due to that the document becomes
+vital to the park.
 \paragraph*{Entity - related entity} A document about an important figure such as an artist or athlete can be vital to another entity. This is especially true if the two are contending for the same title, or if one has snatched a title or award from the other.
 \paragraph*{Organization - main activity} A document that talks about an area in which a company is active is vital for the organization. For example, Atacocha is a mining company, and a news item on mining waste was annotated as vital.
 \paragraph*{Entity - group} If an entity belongs to a certain group (class), a news item about the group can be vital for its individual members. For example, FrankandOak is named an innovative company, and a news item that talks about the group of innovative companies is relevant for it. Another example is a big event to which an entity is related, such as film awards for actors.
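+
+These situations share one property: the document itself contains no
+surface form from the entity's profile, so no string-matching filter,
+however rich the profile, can retrieve it. The sketch below
+illustrates the kind of matching the filtering step performs. It is
+an illustrative reconstruction, not the system's actual code; the
+function names, profile contents, and example post are invented.
+\begin{verbatim}
+# Illustrative sketch of profile-based stream filtering.
+def build_profile(canonical, name_variants, partial=True):
+    """Collect the surface forms that make up an entity profile."""
+    names = {canonical} | set(name_variants)
+    if partial:
+        # "Partial" profiles additionally match single name tokens.
+        names |= {tok for name in names for tok in name.split()}
+    return {name.lower() for name in names}
+
+def matches(text, profile):
+    """Naive substring match of any surface form in the document."""
+    text = text.lower()
+    return any(name in text for name in profile)
+
+# Even the richest profile misses a vital post that relies on
+# context instead of naming the entity:
+profile = build_profile("Maha Music Festival", [])
+print(matches("Great line-up at the landing this summer!", profile))
+# -> False, although the post may be vital to the festival entity.
+\end{verbatim}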