Changeset - 88cb4827648d
Gebrekirstos Gebremeskel - 11 years ago 2014-06-09 00:05:08
destinycome@gmail.com
second commit
1 file changed with 28 insertions and 28 deletions:
mypaper-final.tex
 
@@ -124,18 +124,18 @@ are mentioned in text and estimates the practical upper-bound of recall in  enti
 
 Filtering is a crucial step in CCR for selecting a potentially relevant set of working documents for subsequent steps of the pipeline out of a big collection of stream documents. The TREC Filtering track defines filtering as a ``system that sifts through stream of incoming information to find documents that are relevant to a set of user needs represented by profiles'' \cite{robertson2002trec}. Adaptive Filtering, one task of the filtering track,  starts with   a persistent user profile and a very small number of positive examples. The  filtering step used in CCR systems fits under adaptive filtering: the profiles are represented by persistent KB (Wikipedia or Twitter) entities and there is a small set of relevance judgments representing positive examples. 
 
 
 
 TREC-KBA 2013's participants applied filtering as a first step to produce a smaller working set for subsequent experiments. As the subsequent steps of the pipeline use the output of the filter, the final performance of the system depends on this important step. The filtering step particularly determines the recall of the overall system. However, all submitted systems suffered from poor recall \cite{frank2013stream}. The most important components of the filtering step are cleansing and entity profiling. Each component has choices to make. For example, there are two versions of the corpus: cleansed and raw. Different approaches used different entity profiles for filtering. These entity profiles varied from KB entities' canonical names, to DBpedia name variants, to bold words in the first paragraph of the Wikipedia entities' profiles and anchor texts from other Wikipedia pages, to exact names and WordNet synonyms. Moreover, the type of entities (Wikipedia or Twitter) and the category of
 
docuemnst (news, blog, tweets) can influence filtering.
 
documents (news, blog, tweets) can influence filtering.
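To make the shared core of these filtering approaches concrete, the sketch below shows a minimal entity-profile filter, assuming a profile is simply a set of name-variant strings and a document is kept for an entity as soon as any variant occurs in its text; the function names and example profiles are illustrative assumptions, not any participant's actual implementation.

\begin{verbatim}
# Minimal sketch of entity-profile stream filtering (illustrative only).
# A profile is a set of surface forms for one KB entity; a document is
# kept for an entity if any of the entity's name variants occurs in it.

def matches(profile, text):
    """Return True if any name variant in the profile occurs in the text."""
    lowered = text.lower()
    return any(variant.lower() in lowered for variant in profile)

def filter_stream(documents, profiles):
    """Yield (doc_id, entity) pairs for documents that pass the filter."""
    for doc_id, text in documents:
        for entity, profile in profiles.items():
            if matches(profile, text):
                yield doc_id, entity

# Hypothetical profiles built from canonical names plus a few variants.
profiles = {
    "Joshua_Zetumer": {"Joshua Zetumer", "Zetumer"},
    "Red_River_Zoo": {"Red River Zoo"},
}
stream = [("doc-1", "Screenwriter Joshua Zetumer discussed the script."),
          ("doc-2", "A post about something unrelated.")]
print(list(filter_stream(stream, profiles)))  # [('doc-1', 'Joshua_Zetumer')]
\end{verbatim}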
 
 
 
 
 A variety of approaches are employed  to solve the CCR challenge. Each participant reports the steps of the pipeline and the final results in comparison to other systems.  A typical TREC KBA poster presentation or talk explains the system pipeline and reports the final results. The systems may employ similar (even thesame) steps  but the choices they make at every step are usually different. In such a situation, it becomes hard to identify the factors that result in improved performance. There is  a lack of insight across different approaches. This makes  it hard to know  whether the improvement in performance of a particular approach is due to preprocessing, filtering, classification, scoring  or any of the sub-componets of the pipeline. 
 
 A variety of approaches are employed  to solve the CCR challenge. Each participant reports the steps of the pipeline and the final results in comparison to other systems.  A typical TREC KBA poster presentation or talk explains the system pipeline and reports the final results. The systems may employ similar (even the same) steps  but the choices they make at every step are usually different. In such a situation, it becomes hard to identify the factors that result in improved performance. There is  a lack of insight across different approaches. This makes  it hard to know  whether the improvement in performance of a particular approach is due to preprocessing, filtering, classification, scoring  or any of the sub-components of the pipeline. 
 
 
 
  
 
 
 
 
 
 In this paper, we hold the subsequent steps of the pipeline fixed, zoom in on the filtering step and conduct an in-depth analysis of its main components. In particular, we study cleansing, different entity profiles, the type of entities (Wikipedia or Twitter), and the type of documents (social, news, etc.). The main contributions of the paper are: 
 
 An in-depth analysis of the factors that affect entity-based stream filtering
 
 Identifying optimal entity profiles vis-avis not compromisingprecision
 
 Describing relevant documents that are not amenable to filtering and thereby estimating the upper-bound on entity-based fiiltering
 
 Identifying optimal entity profiles that do not compromise precision
 
 Describing relevant documents that are not amenable to filtering and thereby estimating the upper-bound on entity-based filtering
 
 
 
 The rest of the paper is organized as follows: 
 
 
 
@@ -189,23 +189,23 @@ TREC-KBA provided relevance judgments for training and testing. Relevance judgme
 
 \item Are some types of documents easily filterable and others not?
 
 \item Does a gain in recall at the filtering step translate to a gain in F-measure at the end of the pipeline?
 
 \item What are the vital (relevant) documents that are not retrievable by a system?
 
  \item Are tehre vital (relevant) docuemnts that are not filterable by a reasonable system?
 
  \item Are there vital (relevant) documents that are not filterable by a reasonable system?
 
\end{enumerate}
 
 
\subsection{Literature Review}
 
There has been a great deal of interest of late in entity-based filtering and ranking. One manifestation of that is the introduction of TREC KBA in 2012. Following that, a number of research works have been done on the topic \cite{frank2012building, ceccarelli2013learning, taneva2013gem, wang2013bit, balog2013multi}. These works are based on the KBA 2012 task and dataset and they address the whole problem of entity filtering and ranking. TREC KBA continued in 2013, but the task underwent some changes.
 
 
The main change between the the 2012 and 2013 are in the number of enties, the type of enttiies, the corpus and the relevance rankings.The number of entites increased from 29 to 141, and it included 20 Twitter entites. The TREC KBA 2012 corpus was 1.9TB after xz-compression and had  400M docuemnts. By contrast, the KBA 2013 corpus was 6.45 after XZ-compression and GPG encryption. A version with all-non English docuemnted removed  is 4.5 TB and consists of 1 Billion documnets. The 2013 corpus subsumed the 2012 corpus and added others from spinn3r, namely main-stream news, forum, arxiv,classified, reviews and memetracker.  A more important difference is, however, that the definition of rthe relevance ranking changed. 
 
The main changes between 2012 and 2013 are in the number of entities, the type of entities, the corpus and the relevance rankings. The number of entities increased from 29 to 141, and it included 20 Twitter entities. The TREC KBA 2012 corpus was 1.9TB after xz-compression and had 400M documents. By contrast, the KBA 2013 corpus was 6.45TB after XZ-compression and GPG encryption. A version with all non-English documents removed is 4.5TB and consists of 1 billion documents. The 2013 corpus subsumed the 2012 corpus and added others from spinn3r, namely main-stream news, forum, arxiv, classified, reviews and memetracker. A more important difference is, however, that the definition of the relevance ranking changed. 
 
 
The change in between KBA 2012 and 2013 is that in the definitions of vital and relavant. While in KBA 2012, a document was judged vital if it has citation-worthy content, In 2013 it must have the freshliness, that is the cintent must trigger an editing of the KB entry. 
 
The change between KBA 2012 and 2013 is in the definitions of vital and relevant. While in KBA 2012 a document was judged vital if it had citation-worthy content, in 2013 it must also have freshness, that is, the content must trigger an edit of the KB entry. 
 
 
While the task of 2012 and 2013 are fundamentally the same, the approaches for the tasks varied due  to the size of the corpus. In the 2013, all participants used filtering to reduce the size of the big corpus.   They used different ways of filtering: Many of them used two or more of different name variants from DBpedia such as labels, names, redirects, birth names, alias, nickknames, sameas and alternative names \cite{wang2013bit, dietzumass ,liu2013related, zhangpris}.  Although all of the participants used DBpedia name variants such as name, label, birth name, alternative names, redirects, nickname, or alias, none of them used all them.  A few other participants used bold words in the first paragraph of the Wikipedia entity's profiles and anchor texts from other Wikipedia pages  \cite{bouvierfiltering, niauniversity}.  Very few participants used Boolean And built from the tokens of the canonical names \cite{illiotrec2013}.  
 
While the tasks of 2012 and 2013 are fundamentally the same, the approaches varied due to the size of the corpus. In 2013, all participants used filtering to reduce the size of the big corpus. They used different ways of filtering: many of them used two or more different name variants from DBpedia such as labels, names, redirects, birth names, aliases, nicknames, same-as links and alternative names \cite{wang2013bit, dietzumass, liu2013related, zhangpris}. Although all of the participants used DBpedia name variants such as name, label, birth name, alternative names, redirects, nickname, or alias, none of them used all of them. A few other participants used bold words in the first paragraph of the Wikipedia entity's profile and anchor texts from other Wikipedia pages \cite{bouvierfiltering, niauniversity}. Very few participants used a Boolean AND built from the tokens of the canonical names \cite{illiotrec2013}.  
 
 
All of the studies mentions used filtering as their first step to generate a smaller set of docuemnts. However, none of them conducted a study on it.   Of all the systems submitted for TREC conference, the highest recall achieved is 83\% \cite{frank2012building}. However, many systems suffered from poor recall and their system perfromances were highly affected \cite{frank2012building}.
 
All of the studies mentioned used filtering as their first step to generate a smaller set of documents. However, none of them conducted a study on it. Of all the systems submitted to the TREC conference, the highest recall achieved is 83\% \cite{frank2012building}. However, many systems suffered from poor recall and their performance was highly affected \cite{frank2012building}.
 
 
Although as mentioned before, systems have used difefrent entitiy profiles to filter the stream, and achieved difefrent performance levels, there is no study on and the factors and choices that affect the filtering step itself. Of course filtering has been extensively examined in TREC Filtering \cite{robertson2002trec}. However, those studies were isolated in the sense that they were intended to optimize recall. What we have here is a differet senario. Docuemnts have relevance rating. Thus we want to study filtering in connection to the relevance to the entites and thus can be done by coupling filtering to the later stages of the pipeline. This is new to the best of our knowledge and the TREC KBA problme setting and datasets offer a good opportunity to examine this aspect of filtering. 
 
Although, as mentioned before, systems have used different entity profiles to filter the stream and achieved different performance levels, there is no study of the factors and choices that affect the filtering step itself. Of course, filtering has been extensively examined in the TREC Filtering track \cite{robertson2002trec}. However, those studies were isolated in the sense that they were intended to optimize recall. What we have here is a different scenario: documents have relevance ratings, so we want to study filtering in connection with relevance to the entities, which can be done by coupling filtering to the later stages of the pipeline. To the best of our knowledge this is new, and the TREC KBA problem setting and datasets offer a good opportunity to examine this aspect of filtering. 
 
 
Moreover, there has not been a chance to studt at this scale and/or a study into what type of documents defy filtering and why? In this paper, we conduct a manual examination of the documents that are missing and classify them into different categories. We also estimate the general upper boound of recall for different entities profiles and choose the best profuile for maximum precision.
 
Moreover, there has not been a study at this scale, nor a study into what types of documents defy filtering and why. In this paper, we conduct a manual examination of the documents that are missed and classify them into different categories. We also estimate the general upper bound of recall for different entity profiles and choose the best profile for maximum precision.
 
 
\section{Method}
 
We work with the subset of stream corpus documents for which annotations exist. For this purpose, we extracted the documents that have annotations from the big corpus. All our experiments are based on this smaller subset. We experiment with all KB entities. For each KB entity, we extract different name variants from DBpedia and Twitter. 
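As an illustration of this step, the sketch below shows one way the four entity profiles compared in this paper (canonical names, their partial names, all name variants, and the partial names of all variants) could be assembled from pre-fetched DBpedia name variants; the field names, helper functions and example record are our own assumptions, not the actual extraction code.

\begin{verbatim}
# Illustrative sketch: building the entity profiles used for filtering
# from pre-fetched DBpedia name variants (field names are assumptions).

def build_profiles(dbpedia_record, canonical):
    """Return the four profiles compared in this paper for one entity."""
    variants = {canonical}
    for field in ("label", "name", "birth_name", "alias",
                  "nickname", "redirect", "alternative_names"):
        variants.update(dbpedia_record.get(field, []))

    def partials(names):
        # Partial names: the individual tokens of each name string.
        return {token for name in names for token in name.split()}

    return {
        "cano": {canonical},                 # canonical name only
        "cano_part": partials({canonical}),  # partial names of canonical name
        "all": set(variants),                # all name variants
        "all_part": partials(variants),      # partial names of all variants
    }

# Hypothetical record for a Wikipedia entity.
record = {"label": ["Jasper Schneider"], "redirect": ["Jasper R. Schneider"]}
print(build_profiles(record, "Jasper Schneider")["all_part"])
\end{verbatim}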
 
@@ -399,21 +399,21 @@ The results  of the different entity profiles on the raw corpus are broken down
 
 
The 8 document source categories are regrouped into three for two reasons: 1) some groups are very similar to each other. Mainstream-news and news are similar; the reason they exist separately in the first place is that they were collected from two different sources, by different groups and at different times. We call them news from now on. The same is true with weblog and social, and we call them social from now on. 2) some groups have so few annotations that treating them independently does not make much sense. The majority of vital or relevant annotations are social (social and weblog) (63.13\%). News (mainstream + news) makes up 30\%. Thus, news and social make up about 93\% of all annotations. The rest make up about 7\% and are all grouped as others. 
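The regrouping itself amounts to a simple mapping from the eight source labels to the three groups, sketched below; the exact label strings are assumptions about the corpus metadata.

\begin{verbatim}
# Illustrative regrouping of the eight source categories into three
# (label strings are assumptions about the corpus metadata).
REGROUP = {
    "mainstream_news": "news", "news": "news",
    "weblog": "social",        "social": "social",
    "arxiv": "others", "classified": "others",
    "forum": "others", "linking": "others",
}

def group(source_category):
    # Anything outside the eight known labels also falls into "others".
    return REGROUP.get(source_category, "others")
\end{verbatim}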
 
 
The results of the breakdown by document categopries is presented in a multi-dementional tabele shown in \ref{tab:source-delta}. There are three outer columns for  all entities, Wikipedia and Twitter. Each of the outer columns consist of the document categories of other,news and social. The rows consist of Vital, relevant and total each of which have the four percentage deltas.   The first delta is the recall percentage between partial names of canonical names and canonical names. The second  is the delta between name variants and canonical names. The third is the difference between partial names of name variants  and partial names of canonical names and the fourth between partial names of name variants and name variants.  
 
The results of the breakdown by document categories are presented in the multi-dimensional table shown in Table~\ref{tab:source-delta}. There are three outer columns for all entities, Wikipedia and Twitter. Each of the outer columns consists of the document categories others, news and social. The rows consist of vital, relevant and total, each of which has the four percentage deltas. The first delta is the recall difference between partial names of canonical names and canonical names. The second is the delta between name variants and canonical names. The third is the difference between partial names of name variants and partial names of canonical names, and the fourth between partial names of name variants and name variants.  
 
 
While one can compute many deltas by pairing the different entity profiles, we believe these four deltas offer a clear meaning. The delta between all name variants and canonical names shows the percentage of documents that the new name variants retrieve, but the canonical name does not. Similarly, the delta between partial names of name variants and partial names of canonical names shows the percentage of document-entity pairs that can be gained by the partial names of the name variants. 
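Writing $R(p)$ for the recall achieved by entity profile $p$ (notation introduced here for clarity; it is not part of the track definitions), the four deltas are
\begin{align*}
\Delta_1 &= R(\mathrm{cano\_part}) - R(\mathrm{cano}),\\
\Delta_2 &= R(\mathrm{all}) - R(\mathrm{cano}),\\
\Delta_3 &= R(\mathrm{all\_part}) - R(\mathrm{cano\_part}),\\
\Delta_4 &= R(\mathrm{all\_part}) - R(\mathrm{all}),
\end{align*}
where cano is the canonical name, cano\_part its partial names, all the full set of name variants, and all\_part the partial names of all name variants.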
 
 
 
 \subsection{Vital vs. relevant}
 
 
 
 When comparing the recall performances in vital and relevant, we observe that canonical names achieve better in vital than in relavnt. This is specially truewith Wikipedia entities. For example, the recall for news is 80.1 and for social is 76, while the corresponding recall in relvant is 75.6 and 63.2 respectively. We can generally see that the recall in vital are better than the recall in relevant suggesting that relevant documents are more probable to mention the enties and if they do, using some of their common name variants. 
 
 When comparing the recall performance on vital and relevant, we observe that canonical names achieve better recall in vital than in relevant. This is especially true with Wikipedia entities. For example, the recall for news is 80.1 and for social is 76, while the corresponding recall in relevant is 75.6 and 63.2 respectively. We can generally see that recall in vital is better than recall in relevant, suggesting that vital documents are more likely to mention the entities and, when they do, to use some of their common name variants. 
 
 
 
 \subsection{Difference by docuemnt categories}
 
 There are is some consistent difefrence between others, news and social. In the Wikipedia enties category, one can see that the recall of others is higher than news. The recall of news is higher than social. This holds across all name variants suggesting that social docuemnts are more difficult to retrieve that othersand news. Notice that the others category stands for arxive (scientific docuemnts), classifieds, forums and linking. However, there is no clear pattern when it comes to Twitter enties. On all entites, we see the effect of the Wikipedia entites overiding the Twitter entities and hence the order of others, news and social. 
 
 \subsection{Difference by document categories}
 
 There is some consistent difference between others, news and social. In the Wikipedia entities category, one can see that the recall of others is higher than that of news, and the recall of news is higher than that of social. This holds across all name variants, suggesting that social documents are more difficult to retrieve than others and news. Notice that the others category stands for arxiv (scientific documents), classifieds, forums and linking. However, there is no clear pattern when it comes to Twitter entities. On all entities, we see the effect of the Wikipedia entities overriding the Twitter entities and hence the order of others, news and social. 
 
 
 
%  Generally, there is greater variation in relevant rank than in vital. This is specially true in most of the Delta's for Wikipedia. This  maybe be explained by news items referring to  vital documents by a some standard name than documents that are relevant. Twitter entities show greater deltas than Wikipedia entities in both vital and relevant. The greater variation can be explained by the fact that the canonical name of Twitter entities retrieves very few documents. The deltas that involve canonical names of Twitter entities, thus, show greater deltas.  
 
%  
 
 
If we look in recall performances, In Wikipedia entities, the order seems to be others, news and social. This means that others achieve a higher recall than news than social.  However, in Twitter entities, it does not show such a strict pattern. In all, entites also, we also see almost the same pattern of other, news and social. It seems that news docuemnts are the hardest to retrieve. This is of course makes sense since social posts are short and are more likely to point to other resources, or use short informal names.
 
If we look at recall performance for Wikipedia entities, the order seems to be others, news and social. This means that others achieve a higher recall than news, and news than social. However, Twitter entities do not show such a strict pattern. On all entities, we also see almost the same pattern of others, news and social. It seems that social documents are the hardest to retrieve. This of course makes sense, since social posts are short and are more likely to point to other resources, or use short informal names.
 
 
 
 In most cases, news shows the greatest difference between the deltas, followed by social and then others. The substantial difference in deltas means that news refers to entities by different names, rather than by one standard name. This is counter-intuitive, since one would expect news to mention entities by some consistent name(s), thereby reducing the difference. In the rest of the deltas for Wikipedia, the order is social, news and others. In Twitter entities, we see that others shows the greatest difference, followed by social and then news. That is a complete reversal of the order in Wikipedia. In total, we find the influence of the Wikipedia entities completely overriding the Twitter entities, and as a result the order is the same as the Wikipedia order. The deltas for the relevant rank for Wikipedia entities have similar patterns with some changes: in the case of all\_part, others shows a bigger delta than social. For Twitter, there is no 
 
@@ -421,7 +421,7 @@ difference between canonical and canonical-partial for they are one word. In the
 
 
 
 The biggest delta observed is in Twitter entities, in all\_part-cano\_part in relevant annotations. The delta is 80.5\%, and in all\_part, 70.1\%. Both are for the news category. For Wikipedia entities, the highest delta observed is 19.5\% in cano\_part - cano, followed by 17.5\% in all\_part in relevant.  
 
  
 
\subsection{Document category:news, social, others}
 
\subsection{Document category: others, news and social}
 
In all entities (vital or relevant) annotation ranks, news, followed by social and then others, shows the greatest difference. In Wikipedia entities, the order still holds except in the difference between the partials, in which case it is others, social and news. In Twitter, there is no difference between canonical names and partials of canonical names, as before. However, in the rest, the order is always news, social, and others. 
 
  
 
 
 
@@ -514,12 +514,12 @@ It seems, the raw corpus has more effect on Twitter entities performances. An in
 
 
 
\subsection{Missing relevant documents \label{miss}}
 
There is a trade-off between using a richer entity-profile and retrieval of irrelevant docuemnts. The richer the profile, the more relevant documents it retirves, but also the more irrelevant docuemnts. To put into perspective, lets compare the number of documents that are retrieved with partial names of name variants and partial names of canonical names. Using the raw corpus, the partial names of canonical names extracts a total of 2547487 and achieves a recall of 72.2\%. By contrast, the partial names of name variants extracts a total 4735318 docuemnts and achieves a recall of 90.2\%. The total number of documents extracted increases by 85.9\% for a recall gain of 18\%. The rest of the documents, that is 67.9\%, are newly introduced irrelevant docuemnts. There is an advantage in excluding irrelevant docuemnts from filtering because they confuse the later stages of the pipeline. 
 
There is a trade-off between using a richer entity profile and the retrieval of irrelevant documents. The richer the profile, the more relevant documents it retrieves, but also the more irrelevant documents. To put this into perspective, let us compare the number of documents that are retrieved with partial names of name variants and with partial names of canonical names. Using the raw corpus, the partial names of canonical names extract a total of 2547487 documents and achieve a recall of 72.2\%. By contrast, the partial names of name variants extract a total of 4735318 documents and achieve a recall of 90.2\%. The total number of documents extracted increases by 85.9\% for a recall gain of 18\%. The rest of the documents, that is 67.9\%, are newly introduced irrelevant documents. There is an advantage in excluding irrelevant documents from filtering because they confuse the later stages of the pipeline. 
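For reference, the 85.9\% figure follows directly from the two document counts, and the recall gain from the two recall values (our arithmetic on the reported figures):
\[
\frac{4735318 - 2547487}{2547487} \approx 0.859 \; (85.9\%),
\qquad 90.2\% - 72.2\% = 18\%.
\]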
 
 
 The use of the partial names of name variants for filtering is, therefore, an aggressive attempt to retrieve as manyrelevant documents as possible at the cost retrieving irrelevant docuemnts. However, we still miss about  2363(10\%) documents.  Why are these documents missed? If they are not mentioned by partial names of name variants, what are they mentioned by? Table \ref{tab:miss} shows the documents that miss with respect to cleansed and raw corpus.  The upper part shows the number of documents missing from cleansed and raw versions of the corpus. The lower part of the table shows the intersections and exclusions in each corpus.  
 
 The use of the partial names of name variants for filtering is, therefore, an aggressive attempt to retrieve as many relevant documents as possible at the cost of retrieving irrelevant documents. However, we still miss about 2363 (10\%) documents. Why are these documents missed? If they are not mentioned by partial names of name variants, what are they mentioned by? Table \ref{tab:miss} shows the documents that are missed with respect to the cleansed and raw corpus. The upper part shows the number of documents missing from the cleansed and raw versions of the corpus. The lower part of the table shows the intersections and exclusions in each corpus.  
 
 
\begin{table}
 
\caption{The number of docuemnts that are missing from raw and cleansed extractions. }
 
\caption{The number of documents that are missing from raw and cleansed extractions. }
 
\begin{center}
 
\begin{tabular}{l@{\quad}llllll}
 
\hline
 
@@ -545,7 +545,7 @@ Raw & 276 & 4951 & 5227 \\
 
converting. In both cases the mention of the entity happened to be in the part of the text that is cut out during conversion. 
 
 
 
 
 The interesting set of of relevance judgements are those that  we miss from both raw and cleansed extractions. These are 2146 unique document-entity pairs, 219 of them are with vital relevance judgements.   The total number of entities in the missed vital annotations is  28 Wikipedia and 7  Twitter, making a total of 35. Looking into document categories shows that the  greate majority (86.7\%) of the documents are social. This suggests that social (tweets and blogs) can talk about the entites without mentioning  them by name. This is, of course, inline with intuition. 
 
 The interesting set of relevance judgments are those that we miss from both the raw and cleansed extractions. These are 2146 unique document-entity pairs, 219 of which have vital relevance judgments. The total number of entities in the missed vital annotations is 28 Wikipedia and 7 Twitter, making a total of 35. Looking into the document categories shows that the great majority (86.7\%) of the documents are social. This suggests that social documents (tweets and blogs) can talk about the entities without mentioning them by name. This is, of course, in line with intuition. 
 
   
 
   
 
   
 
@@ -570,23 +570,23 @@ converting.  In both cases the mention of the entity happened to be on the part
 
% \label{tab:miss-category}
 
% \end{table*}
 
 
However, it is interesting to look into the actual content of the documents to gaing an insight into the ways a document can talk about an entity without mentioning the entitity . We collected 35 documents, one for each entity, for manual examination. Here below we present the reasons.
 
However, it is interesting to look into the actual content of the documents to gain an insight into the ways a document can talk about an entity without mentioning the entity. We collected 35 documents, one for each entity, for manual examination. Below we present the reasons.
 
\paragraph{Outgoing link mentions} A post (tweet) with an outgoing link which mentions the entity.
 
\paragraph{Event place - Event} a docuemnt that talks about an event is vital to the location entity where it takes place.  For example Maha Music Festival takes place in Lewis and Clark\_Landing, and a docuemnt talking about the festival is vital for the park. There are also cases where an event's address places the event in a park and due to the the docuemnt becomes vital to the park. This basically being mentioned by address
 
\paragraph{Entity -related entity} A docuemnt about an inportant figure such as artist, athlet  can be vital to another. This is specially true if the two are contending for the same title, one has snatched a title, and award from the other. 
 
\paragraph{Organization - mian activity} A document that talks about about an area on which the company is active is vital for the organization. For example, Atacocha is a mining company and and an news item on mining waste was annotated vital. 
 
\paragraph{Event place - Event} A document that talks about an event is vital to the location entity where it takes place. For example, the Maha Music Festival takes place in Lewis and Clark\_Landing, and a document talking about the festival is vital for the park. There are also cases where an event's address places the event in a park, and because of that the document becomes vital to the park. This is basically being mentioned by address.
 
\paragraph{Entity - related entity} A document about an important figure such as an artist or athlete can be vital to another. This is especially true if the two are contending for the same title, or one has snatched a title or award from the other. 
 
\paragraph{Organization - main activity} A document that talks about an area in which the company is active is vital for the organization. For example, Atacocha is a mining company and a news item on mining waste was annotated vital. 
 
\paragraph{Entity - class} If an entity belongs to a certain class (group), a news item about the class can be vital for the individual members. FrankandOak is named an innovative company, and a news item that talks about a class of innovative companies is relevant for it. Other examples are a big event to which an entity is related, such as film awards for actors. 
 
\paragraph{Artist - work} docuemnts that discuss the work of artists can be relevant to the artists. Such cases include  books or films being vital for the book author or the director (actor) of the film. robocop is film whose screenplay is by Joshua Zetumer. An blog that talks about the film was judged vital for Joshua Zetumer. 
 
\paragraph{Artist - work} Documents that discuss the work of artists can be relevant to the artists. Such cases include books or films being vital for the book's author or the film's director (or actors). RoboCop is a film whose screenplay is by Joshua Zetumer. A blog that talks about the film was judged vital for Joshua Zetumer. 
 
\paragraph{Politician - constituency} A major political event in a certain constituency is vital for the politician from that constituency. 
 
 A good example is a weblog that talks about two North Dakota counties being drought disasters. The news is vital for Joshua Boschee, a politician and a member of the North Dakota Democratic Party.  
 
\paragraph{Head - organization} A document that talks about an organization of which the entity is the head can be vital for the entity. Jasper\_Schneider is the USDA Rural Development state director for North Dakota, and an article about problems of primary health centers in North Dakota is judged vital for him. 
 
\paragraph{World Knowledge} Some things are impossible to know without world knowledge. For example, ``refreshments, treats, gift shop specials, `bountiful, fresh and fabulous holiday decor,' a demonstration of simple ways to create unique holiday arrangements for any home; free and open to the public'' is judged relevant to Hjemkomst\_Center. This is a social media post, and unless one knows the person posting it, there is no way this text shows the connection. Similarly, ``learn about the gray wolf's hunting and feeding behaviors and watch the wolves have their evening meal of a full deer carcass; \$15 for members, \$20 for nonmembers'' is judged vital to Red\_River\_Zoo.  
 
\paragraph{No docuemnt content} Some documents were found to have no content
 
\paragraph{No document content} Some documents were found to have no content.
 
\paragraph{Not clear why} It is not clear why some documents are annotated vital for some entities.
 
 
 
 
 
Although they have different document ids, many of the documents have the same content. In the vital annotation, there are only three (88 mainstream, 6scoial, 401 weblog). In the 35 docuemnt vital docuemnt-entity pairs we examined, 22 are social, and 13 are news. 
 
Although they have different document ids, many of the documents have the same content. In the vital annotations, there are only three source categories (88 mainstream, 6 social, 401 weblog). Of the 35 vital document-entity pairs we examined, 22 are social and 13 are news. 
 
%    To gain more insight, I sampled for each 35 entities, one document-entity pair and looked into the contents. The results are in \ref{tab:miss from both}
 
%    
 
%    \begin{table*}