Changeset - 51b8586f2e1d
Arjen de Vries (arjen) - 11 years ago 2014-06-12 05:12:35
arjen.de.vries@cwi.nl
missing sec label fixed
1 file changed with 3 insertions and 3 deletions:
mypaper-final.tex
 
@@ -112,193 +112,193 @@ knowledge base curators, who need to continuously screen the media for
 
updates regarding the knowledge base entries they manage. Automatic
 
system support for this entity-centric information processing problem
 
requires complex pipe\-lines involving both natural language
 
processing and information retrieval components. The pipeline
 
encountered in a variety of systems that approach this problem
 
involves four stages: filtering, classification, ranking (or scoring),
 
and evaluation. Filtering is only an initial step that reduces the
 
web-scale corpus of news and other relevant information sources that
 
may contain entity mentions into a working set of documents that should
 
be more manageable for the subsequent stages.
 
Nevertheless, this step has a large impact on the recall that can be
 
maximally attained. Therefore, in this study, we focus on just
 
this filtering stage and conduct an in-depth analysis of the main design
 
decisions here: how to cleanse the noisy text obtained online, 
 
the methods to create entity profiles, the
 
types of entities of interest, document type, and the grade of
 
relevance of the document-entity pair under consideration.
 
We analyze how these factors (and the design choices made in their
 
corresponding system components) affect filtering performance.
 
We identify and characterize the relevant documents that do not pass
 
the filtering stage by examining their contents. This way, we
 
estimate a practical upper-bound of recall for entity-centric stream
 
filtering.
 
 
\end{abstract}
 
% A category with the (minimum) three required fields
 
\category{H.4}{Information Filtering}{Miscellaneous}
 
 
%A category including the fourth, optional field follows...
 
%\category{D.2.8}{Software Engineering}{Metrics}[complexity measures, performance measures]
 
 
\terms{Theory}
 
 
\keywords{Information Filtering; Cumulative Citation Recommendation; knowledge maintenance; Stream Filtering;  emerging entities} % NOT required for Proceedings
 
 
\section{Introduction}
 
In 2012, the Text REtrieval Conference (TREC) introduced the Knowledge Base Acceleration (KBA) track to help Knowledge Base (KB) curators. The track addresses a critical need of KB curators: given KB (Wikipedia or Twitter) entities, filter a stream for relevant documents, rank the retrieved documents, and recommend them to the KB curators. The track is crucial and timely because the number of entities in a KB on the one hand, and the huge amount of new content on the Web on the other, make manual KB maintenance challenging. TREC KBA's main task, Cumulative Citation Recommendation (CCR), aims at filtering a stream to identify citation-worthy documents, rank them, and recommend them to KB curators.
 
  
 
   
 
 Filtering is a crucial step in CCR for selecting a potentially
 
 relevant set of working documents for subsequent steps of the
 
 pipeline out of a large collection of stream documents. Filtering sifts an incoming stream for information relevant to user profiles \cite{robertson2002trec}. In the specific setting of CCR, these profiles are
 
represented by persistent KB entities (Wikipedia pages or Twitter
 
users, in the TREC scenario).
 
 
 
 TREC-KBA 2013's participants applied filtering as a first step to
 
 produce a smaller working set for subsequent experiments. As the
 
 subsequent steps of the pipeline use the output of the filter, the
 
 final performance of the system is dependent on this step.  The
 
 filtering step particularly determines the recall of the overall
 
 system. However, all 141 runs submitted by 13 teams suffered from
 
 poor recall, as pointed out in the track's overview paper 
 
 \cite{frank2013stream}. 
 
 
The most important components of the filtering step are cleansing
 
(referring to pre-processing noisy web text into a canonical ``clean''
 
text format), and
 
entity profiling (creating a representation of the entity that can be
 
used to match against the stream documents). For each component, different
 
choices can be made. In the specific case of TREC KBA, organisers have
 
provided two different versions of the corpus: one that is already cleansed,
 
and one that is the raw data as originally collected by the organisers. 
 
Also, different
 
approaches use different entity profiles for filtering, varying from
 
using just the KB entities' canonical names to looking up DBpedia name
 
variants, and from using the bold words in the first paragraph of the Wikipedia
 
entities' page to using anchor texts from other Wikipedia pages, and from
 
using the exact name as given to WordNet-derived synonyms. The type of entities
 
(Wikipedia or Twitter) and the category of documents in which they
 
occur (news, blogs, or tweets) cause further variations.
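
To make these profile variants concrete, the following sketch shows one way an entity profile could be built and matched against a document. It is a minimal illustration only: the helper names and the simple string-containment matching are our own assumptions, not the method used by any particular TREC KBA system.

\begin{verbatim}
# Minimal sketch of entity-profile construction and matching.
# Assumptions: profiles are plain lowercase string sets; the DBpedia
# name variants are supplied by the caller (no lookup code shown).

def build_profile(canonical_name, name_variants=(), use_partials=False):
    """Return a set of surface strings representing the entity."""
    profile = {canonical_name.lower()}
    profile.update(v.lower() for v in name_variants)
    if use_partials:
        # Add partial names, e.g. "john doe" -> "john", "doe".
        for name in list(profile):
            profile.update(name.split())
    return profile

def document_matches(document_text, profile):
    """True if any profile string occurs in the document text."""
    text = document_text.lower()
    return any(name in text for name in profile)
\end{verbatim}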
 
% A variety of approaches are employed  to solve the CCR
 
% challenge. Each participant reports the steps of the pipeline and the
 
% final results in comparison to other systems.  A typical TREC KBA
 
% poster presentation or talk explains the system pipeline and reports
 
% the final results. The systems may employ similar (even the same)
 
% steps  but the choices they make at every step are usually
 
% different. 
 
In such a situation, it becomes hard to identify the factors that
 
result in improved performance. There is  a lack of insight across
 
different approaches. This makes  it hard to know whether the
 
improvement in performance of a particular approach is due to
 
preprocessing, filtering, classification, scoring, or any of the
 
sub-components of the pipeline.
 
 
 
In this paper, we therefore fix the subsequent steps of the pipeline,
 
and zoom in on \emph{only} the filtering step, and conduct an in-depth analysis of its
 
main components.  In particular, we study the effect of cleansing,
 
entity profiling, type of entity filtered for (Wikipedia or Twitter), and
 
document category (social, news, etc) on the filtering components'
 
performance. The main contributions of the
 
paper are an in-depth analysis of the factors that affect entity-based
 
stream filtering, identifying optimal entity profiles without
 
compromising precision, describing and classifying relevant documents
 
that are not amenable to filtering, and estimating the upper-bound
 
of recall on entity-based filtering.
 
 
The rest of the paper is organized as follows. Section \ref{sec:desc} describes the dataset and Section \ref{sec:fil} defines the task. In Section \ref{sec:lit}, we discuss related literature, followed by a discussion of our method in Section \ref{sec:mthd}. Following that, we present the experimental results in Section \ref{sec:expr}, and discuss and analyze them in Section \ref{sec:analysis}. Towards the end, we discuss the impact of filtering choices on classification in Section \ref{sec:impact}, and examine and categorize unfilterable documents in Section \ref{sec:unfil}. Finally, we present our conclusions in Section \ref{sec:conc}.
 
 
 
 \section{Data Description}\label{sec:desc}
 
We base this analysis on the TREC-KBA 2013 dataset%
 
\footnote{\url{http://trec-kba.org/trec-kba-2013.shtml}}
 
that consists of three main parts: a time-stamped stream corpus, a set of
 
KB entities to be curated, and a set of relevance judgments. A CCR
 
system now has to identify for each KB entity which documents in the
 
stream corpus are to be considered by the human curator.
 
 
\subsection{Stream corpus} The stream corpus comes in two versions:
 
raw and cleansed. The raw and cleansed versions are 6.45TB and 4.5TB
 
respectively,  after xz-compression and GPG encryption. The raw data
 
is a  dump of  raw HTML pages. The cleansed version is the raw data
 
after its HTML tags are stripped off and only English documents
 
identified with Chromium Compact Language Detector
 
\footnote{\url{https://code.google.com/p/chromium-compact-language-detector/}}
 
are included.  The stream corpus is organized in hourly folders each
 
of which contains many  chunk files. Each chunk file contains between
 
hundreds and hundreds of thousands of serialized  thrift objects. One
 
thrift object is one document. A document could be a blog article, a
 
news article, or a social media post (including tweets). The stream
 
corpus comes from three sources: TREC KBA 2012 (social, news and
 
linking) \footnote{\url{http://trec-kba.org/kba-stream-corpus-2012.shtml}},
 
arxiv\footnote{\url{http://arxiv.org/}}, and
 
spinn3r\footnote{\url{http://spinn3r.com/}}.
 
Table \ref{tab:streams} shows the sources, the number of hourly
 
directories, and the number of chunk files.
 
\begin{table}
 
\caption{Number of documents and chunk files per source sub-stream}
 
\begin{center}
 
 
 \begin{tabular}{rrl}
 
 Documents     &   Chunk files    &    Sub-stream \\
 
\hline
 
 
126,952         &11,851         &arxiv \\
 
394,381,405      &   688,974        & social \\
 
134,933,117       &  280,658       &  news \\
 
5,448,875         &12,946         &linking \\
 
57,391,714         &164,160      &   MAINSTREAM\_NEWS (spinn3r)\\
 
36,559,578         &85,769      &   FORUM (spinn3r)\\
 
14,755,278         &36,272     &    CLASSIFIED (spinn3r)\\
 
52,412         &9,499         &REVIEW (spinn3r)\\
 
7,637         &5,168         &MEMETRACKER (spinn3r)\\
 
1,040,520,595   &      2,222,554 &        Total\\
 
 
\end{tabular}
 
\end{center}
 
\label{tab:streams}
 
\end{table}
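
As an illustration of how the hourly layout above might be traversed, the sketch below walks a local copy of the corpus and tallies chunk files per sub-stream; the root path and the assumption that the sub-stream name is encoded in each chunk file name are ours, not part of the official corpus documentation.

\begin{verbatim}
import os
from collections import Counter

# Walk the hourly directories and count chunk files per sub-stream.
# The directory root and the file-naming assumption are hypothetical.
def count_chunks(corpus_root):
    counts = Counter()
    for hourly_dir, _, chunk_files in os.walk(corpus_root):
        for chunk in chunk_files:
            source = chunk.split("-")[0]  # assumed naming convention
            counts[source] += 1
    return counts

if __name__ == "__main__":
    print(count_chunks("./kba-streamcorpus-2013"))
\end{verbatim}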
 
 
\subsection{KB entities}
 
 
 The KB entities consist of 20 Twitter entities and 121 Wikipedia entities, selected on purpose to be sparse. The entities comprise 71 people, 1 organization, and 24 facilities.  
 
 
\subsection{Relevance judgments}
 
 
TREC-KBA provided relevance judgments for training and
 
testing. Relevance judgments are given as document-entity
 
pairs. Documents with citation-worthy content to a given entity are
 
annotated  as \emph{vital},  while documents with tangentially
 
relevant content, or documents that lack freshness or with content

that can be useful for an initial KB dossier are annotated as
 
\emph{relevant}. Documents with no relevant content are labeled
 
\emph{neutral} and spam is labeled as \emph{garbage}. 
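
To illustrate how these labels are typically consumed downstream, the sketch below keeps only the vital and relevant document-entity pairs; the tuple layout of the judgment records is assumed for illustration and is not the official TREC KBA annotation format.

\begin{verbatim}
# Keep only vital/relevant judgments. The (doc_id, entity, label)
# tuple layout is an assumption made for this illustration.
POSITIVE_LABELS = {"vital", "relevant"}

def positive_pairs(judgments):
    """Yield (doc_id, entity) for pairs labeled vital or relevant."""
    for doc_id, entity, label in judgments:
        if label in POSITIVE_LABELS:
            yield doc_id, entity

example = [("doc1", "entityA", "vital"),
           ("doc2", "entityA", "garbage"),
           ("doc3", "entityB", "relevant")]
print(list(positive_pairs(example)))
\end{verbatim}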
 
%The inter-annotator agreement on vital in 2012 was 70\% while in 2013 it
 
%is 76\%. This is due to the more refined definition of vital and the
 
%distinction made between vital and relevant.
 
 
\subsection{Breakdown of results by document source category}
 
 
%The results of the different entity profiles on the raw corpus are
 
%broken down by source categories and relevance rank% (vital, or
 
%relevant).  
 
In total, the dataset contains 24162 unique entity-document

pairs labeled vital or relevant; 9521 of these have been labelled as vital,

and 17424 as relevant.
 
All documents are categorized into 8 source categories: 0.98\%
 
arxiv(a), 0.034\% classified(c), 0.34\% forum(f), 5.65\% linking(l),
 
11.53\% mainstream-news(m-n), 18.40\% news(n), 12.93\% social(s) and
 
50.2\% weblog(w). We have regrouped these source categories into three
 
groups, ``news'', ``social'', and ``other'', for two reasons: 1) some categories

are very similar to each other. Mainstream-news and news are

similar; they exist separately only because they were collected

from two different sources, by different

groups and at different times. We refer to them collectively as news from now on. The

same is true of weblog and social, which we call social from now

on. 2) Some categories have so few annotations that treating

them independently does not make much sense. The majority of vital or

relevant annotations are social (social and weblog) (63.13\%). News

(mainstream-news and news) makes up 30\%. Thus, news and social together account for about

93\% of all annotations. The rest make up about 7\% and are all

grouped as others.
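
The regrouping described above amounts to a simple mapping from source categories to groups, sketched below; the literal category strings are assumed to match the labels used in the corpus.

\begin{verbatim}
# Map the original source categories to the three groups used in
# our analysis. Category spellings are assumed; unknown categories
# fall back to "other".
GROUP_OF = {
    "mainstream_news": "news", "news": "news",
    "weblog": "social",        "social": "social",
    "arxiv": "other",          "classified": "other",
    "forum": "other",          "linking": "other",
    "review": "other",         "memetracker": "other",
}

def regroup(source_category):
    return GROUP_OF.get(source_category.lower(), "other")
\end{verbatim}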
 
 
 \section{Stream Filtering}\label{sec:fil}
 
@@ -955,189 +955,189 @@ relevance ranking for both Wikipedia and aggregate entities, the
 
cleansed version performs better than the raw version. For Twitter

entities, the raw corpus performs better except in the case of all

name-variants, though the difference is negligible. However, for
 
vital-relevant, the raw corpus performs  better across all entity
 
profiles and entity types except in partial canonical names of
 
Wikipedia entities.
 

	
 
The use of different profiles also shows a big difference in
 
recall. While for Wikipedia entities canonical

partial performs better than name-variant, there is a steady increase

in recall from canonical, to canonical partial, to name-variant, and

to name-variant partial. This pattern is also observed across the

document categories. However, here too, the relationship between

the gain in recall as we move from a less rich profile to a

richer profile and overall performance as measured by F-score is not
 
linear.
 

	
 
 
%%%%%%%%%%%%
 
 
 
In vital ranking, across all entity profiles and types of corpus,
 
Wikipedia's canonical partial  achieves better performance than any
 
other Wikipedia entity profile. In vital-relevant documents too,
 
Wikipedia's canonical partial achieves the best result. In the raw
 
corpus, it achieves a little less than name-variant partial. For
 
Twitter entities, the name-variant partial profile achieves the
 
highest F-score across all entity profiles and types of corpus.
 
 
 
There are 3 interesting observations: 
 
 
1) Cleansing impacts Twitter

entities and relevant documents. This is validated by the

observation that recall gains for Twitter entities and the relevant

category in the raw corpus also translate into overall performance

gains. This observation implies that cleansing removes more relevant and

social documents than it does vital and news documents. That it removes relevant

documents more than vital can be explained by the fact that cleansing

removes the related links and adverts which may contain a mention of

the entities. One example we saw was that cleansing removed an

image with a text of an entity name which was actually relevant. That

it removes social documents can be explained by the fact that

most of the documents missing from the cleansed corpus are

social, and all the documents missing from the raw corpus are

social. So in both cases, social documents seem to suffer from text

transformation and cleansing processes. 
 
 
%%%% NEEDS WORK:
 
 
Taking both aspects of performance (recall at filtering and overall F-score

during evaluation) into account, there is a clear trade-off between

using a richer entity profile and the retrieval of irrelevant

documents. The richer the profile, the more relevant documents it

retrieves, but also the more irrelevant documents. To put this into

perspective, let us compare the number of documents that are retrieved

with canonical partial and with name-variant partial. Using the raw

corpus, the former retrieves a total of 2,547,487 documents and achieves

a recall of 72.2\%. By contrast, the latter retrieves a total of

4,735,318 documents and achieves a recall of 90.2\%. The total number of

documents extracted increases by 85.9\% for a recall gain of 18\%. The

rest of the documents, that is 67.9\%, are newly introduced irrelevant

documents.
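
The relative increase in retrieved documents and the recall gain quoted above follow directly from these counts:
\[
\frac{4{,}735{,}318 - 2{,}547{,}487}{2{,}547{,}487} \approx 0.859,
\qquad
90.2\% - 72.2\% = 18\%.
\]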
 

	
 
Perhaps surprisingly, Wikipedia's canonical partial is the best entity profile for Wikipedia

entities. Here, the retrieval of

thousands of additional vital-relevant document-entity pairs by name-variant partial

does not materialize into an increase in overall performance. Notice

that none of the participants in TREC KBA considered canonical partial

as a viable strategy, though. We conclude that, at least for our

system, the remainder of the pipeline needs a different approach to

correctly score the additional documents that are

necessary if we do not want to accept the low recall of the filtering

step.
 
%With this understanding, there  is actually no
 
%need to go and fetch different names variants from DBpedia, a saving
 
%of time and computational resources.
 
 
 
%%%%%%%%%%%%
 
 
 
 
 
The deltas between entity profiles, relevance ratings, and document categories reveal four differences between Wikipedia and Twitter entities. 1) For Wikipedia entities, the difference between canonical partial and canonical is higher (16.1\%) than between name-variant partial and name-variant (8.3\%). This can be explained by saturation: documents have already been extracted by the name-variants, so using their partials does not bring in many new relevant documents. 2) Twitter entities are mentioned by name-variant or name-variant partial, as seen in the high recall these profiles achieve compared to the low recall achieved by canonical (or canonical partial). This indicates that documents (especially news and others) almost never use user names to refer to Twitter entities. Name-variant partials are the best entity profiles for Twitter entities. 3) Comparatively speaking, however, social documents refer to Twitter entities by their user names more than news and others do, suggesting a difference in 

adherence to naming standards. 4) Wikipedia entities achieve higher recall and higher overall performance. 
 
 
The high recall and subsequently higher overall performance of Wikipedia entities can be due to two reasons. 1) Wikipedia entities are better described than Twitter entities. The fact that we can retrieve different name variants from DBpedia is a measure of this relatively rich description. Rich description plays a role both in filtering and in the computation of features such as similarity measures in later stages of the pipeline. By contrast, we have only two names for Twitter entities: their user names and their display names, which we collect from their Twitter pages. 2) There is no DBpedia-like resource for Twitter entities from which alternative names can be collected.   
 
 
 
In the experimental results, we also observed that recall scores in the vital category are higher than in the relevant category. This observation confirms a commonly held assumption: mention (frequency) is related to relevance. This is the assumption behind using term frequency as an indicator of document relevance in many information retrieval systems. The more a document mentions an entity explicitly by name, the more likely the document is vital to the entity.
 
 
Across document categories, we observe a recall pattern of others first, followed by news, and then by social. Social documents are the hardest to retrieve. This can be explained by the fact that social documents (tweets and blogs) are more likely to point to a resource where the entity is mentioned, to mention the entities with some short abbreviation, or to talk about the entities without mentioning them, with some context in mind. By contrast, news documents mention the entities they talk about using the common name variants more than social documents do. However, the greater difference in percentage recall between the different entity profiles in the news category indicates that news refers to a given entity with different names, rather than by one standard name. By contrast, others show the least variation in how they refer to entities; social documents fall in between the two. For Wikipedia entities, the deltas between canonical partials and canonicals, and between name-variants and canonicals, are high, an indication that canonical partials 

and name-variants bring in new relevant documents that cannot be retrieved by canonicals. The remaining two deltas are very small, suggesting that the partial names of name variants do not bring in new relevant documents. 
 
 
 
\section{Unfilterable documents}\label{sec:unfil}

\subsection{Missing vital-relevant documents}\label{miss}
 
 
% 
 
 
 The use of name-variant partial for filtering is an aggressive attempt to retrieve as many relevant documents as possible at the cost of retrieving irrelevant documents. However, we still miss about 2363 (about 10\%) of the vital-relevant documents. Why are these documents missed? If they are not mentioned by partial names of name variants, what are they mentioned by? Table \ref{tab:miss} shows the documents that we miss with respect to the cleansed and raw corpus. The upper part shows the number of documents missing from the cleansed and raw versions of the corpus. The lower part of the table shows the intersections and exclusions in each corpus.  
 
 
\begin{table}
 
\caption{The number of documents missing  from raw and cleansed extractions. }
 
\begin{center}
 
\begin{tabular}{l@{\quad}lll}

\hline

\multicolumn{1}{l}{\rule{0pt}{12pt}Category}&\multicolumn{1}{l}{\rule{0pt}{12pt}Vital}&\multicolumn{1}{l}{\rule{0pt}{12pt}Relevant}&\multicolumn{1}{l}{\rule{0pt}{12pt}Total}\\[5pt]
 
\hline
 
 
Cleansed &1284 & 1079 & 2363 \\
 
Raw & 276 & 4951 & 5227 \\
 
\hline
 
 Missing only from cleansed &1065 &2016 &3081\\

 Missing only from raw &57 &160 &217 \\

 Missing from both &219 &1927 &2146\\
 
\hline
 
 
 
 
\end{tabular}
 
\end{center}
 
\label{tab:miss}
 
\end{table}
 
 
One would assume that the set of document-entity pairs extracted from the cleansed corpus is a subset of those extracted from the raw corpus. We find that this is not the case. There are 217 unique entity-document pairs that are retrieved from the cleansed corpus, but not from the raw one; 57 of them are vital. Similarly, there are 3081 document-entity pairs that are missing from the cleansed corpus, but are present in the raw one; 1065 of them are vital. Examining the content of the documents reveals that this is due to a missing part of the text in the corresponding document. All the documents that we miss from the raw corpus are social: documents such as tweets, blogs, and posts from other social media. To meet the format of the raw data (binary byte array), some of them must have been converted later, after collection, and lost part or all of their content along the way. It is similar for the documents that we miss from the cleansed corpus: part or all of the content is lost during the cleansing process (the removal of 

HTML tags and non-English documents). In both cases, the mention of the entity happened to be in the part of the text that was cut out during transformation. 
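
The lower half of Table \ref{tab:miss} can be reproduced with simple set operations over the two extraction runs, as sketched below; the two input sets of missed document-entity pairs are placeholders.

\begin{verbatim}
# Break down missed (doc_id, entity) pairs by extraction run.
# missed_cleansed and missed_raw are placeholder sets of pairs that
# the cleansed-corpus and raw-corpus runs failed to retrieve.
def breakdown(missed_cleansed, missed_raw):
    return {
        "missing only from cleansed": len(missed_cleansed - missed_raw),
        "missing only from raw": len(missed_raw - missed_cleansed),
        "missing from both": len(missed_cleansed & missed_raw),
    }
\end{verbatim}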
 
 
 
 
 The most interesting set of relevance judgments are those that we miss from both the raw and cleansed extractions. These are 2146 unique document-entity pairs, 219 of which have vital relevance judgments. The missed vital annotations involve 28 Wikipedia and 7 Twitter entities, 35 in total. The great majority (86.7\%) of the documents are social. This suggests that social documents (tweets and blogs) talk about entities without mentioning them by name more than news and others do. This is, of course, in line with intuition. 
 
   
 
 
 
%%%%%%%%%%%%%%%%%%%%%%
 
 
We observed that there are vital-relevant documents that we miss from the raw corpus only, and similarly from the cleansed corpus only. The reason for this is the transformation from one format to another. The most interesting documents are those that we miss from both the raw and cleansed corpus. We first identified the KB entities that have a vital relevance judgment and whose documents cannot be retrieved (35 in total) and conducted a manual examination of their content to find out why they are missing. 
 
 
 
 
 
 We  observed  that among the missing documents, different document ids can have the same content, and be judged multiple times for a given entity.  %In the vital annotation, there are 88 news, and 409 weblog. 
 
 Avoiding duplicates, we randomly selected 35 documents, one for each entity. The documents are 13 news and 22 social. Below, we classify the situations under which a document can be vital for an entity without mentioning the entity by any of the entity profiles we used for filtering. 
 
 
\paragraph*{Outgoing link mentions} A post (tweet) with an outgoing link which mentions the entity.
 
\paragraph*{Event place - event} A document that talks about an event is vital to the location entity where it takes place. For example, the Maha Music Festival takes place in Lewis and Clark\_Landing, and a document talking about the festival is vital for the park. There are also cases where an event's address places the event in a park and, due to that, the document becomes vital to the park. This is essentially being mentioned by an address that belongs to a larger space. 
 
\paragraph*{Entity - related entity} A document about an important figure such as an artist or athlete can be vital to another. This is especially true if the two are contending for the same title, or one has snatched a title or award from the other. 
 
\paragraph*{Organization - main activity} A document that talks about an area in which a company is active is vital for the organization. For example, Atacocha is a mining company, and a news item on mining waste was annotated as vital. 
 
\paragraph*{Entity - group} If an entity belongs to a certain group (class), a news item about the group can be vital for the individual members. FrankandOak is named an innovative company, and a news item that talks about the group of innovative companies is relevant for it. Another example is a big event to which an entity is related, such as film awards for actors. 
 
\paragraph*{Artist - work} Documents that discuss the work of artists can be relevant to the artists. Such cases include books or films being vital for the book's author or the film's director (or actors). Robocop is a film whose screenplay is by Joshua Zetumer; a blog that talks about the film was judged vital for Joshua Zetumer. 
 
\paragraph*{Politician - constituency} A major political event in a certain constituency is vital for the politician from that constituency. 
 
 A good example is a weblog that talks about two North Dakota counties being declared drought disasters. The news is vital for Joshua Boschee, a politician and member of the North Dakota Democratic party.  
 
\paragraph*{Head - organization} A document that talks about an organization of which the entity is the head can be vital for the entity. Jasper\_Schneider is the USDA Rural Development state director for North Dakota, and an article about problems of primary health centers in North Dakota is judged vital for him. 
 
\paragraph*{World knowledge} Some things are impossible to know without world knowledge. For example, ``refreshments, treats, gift shop specials, `bountiful, fresh and fabulous holiday decor,' a demonstration of simple ways to create unique holiday arrangements for any home; free and open to the public'' is judged relevant to Hjemkomst\_Center. This is a social media post, and unless one knows the person posting it, there is no way to tell that from the text. Similarly, ``learn about the gray wolf's hunting and feeding behaviors and watch the wolves have their evening meal of a full deer carcass; \$15 for members, \$20 for nonmembers'' is judged vital to Red\_River\_Zoo.  
 
\paragraph*{No document content} A small number of documents were found to have no content.
 
\paragraph*{Disagreement} For a few remaining documents, the authors disagree with the assessors as to why these are vital to the entity.
 
 
 
 
\section{Conclusions} \label{sec:conc}
 
In this paper, we examined the filtering stage of entity-centric stream filtering and ranking by holding the later stages of the pipeline fixed. In particular, we studied the cleansing step, different entity profiles, the type of entities (Wikipedia or Twitter), the categories of documents (news, social, or others), and the relevance ratings. We attempted to address the following research questions: 1) does cleansing affect filtering and subsequent performance? 2) what is the most effective way of entity profiling? 3) is filtering different for Wikipedia and Twitter entities? 4) are some types of documents easily filterable and others not? 5) does a gain in recall at the filtering step translate to a gain in F-measure at the end of the pipeline? and 6) under what circumstances can vital documents not be retrieved? 
 
 
Cleansing does remove parts of documents or their entire content, making them irretrievable. However, because of the introduction of false positives, recall gains from the raw corpus and from some richer entity profiles do not necessarily translate into an overall performance gain. The conclusion on this is mixed, in the sense that cleansing helps improve the recall on vital documents and Wikipedia entities, but reduces the recall on Twitter entities and the relevant category of the relevance ranking. Vital and relevant documents show a difference in retrieval performance: vital documents are easier to filter than relevant ones.  
 
 
 
Despite an aggressive attempt to filter as many vital-relevant documents as possible, we observe that there are still documents that we miss. While some of them could be retrieved with some modifications, others cannot: some documents will not be caught by an information filtering system no matter how rich a representation of the entities it uses. The circumstances under which this happens are many. We found that some documents have no content at all, and for some the judgment is subjective (it is not clear why they were judged vital). However, the main circumstances under which vital documents can defy filtering are: outgoing link mentions, 

venue - event, entity - related entity, organization - main area of operation, entity - group, artist - artist's work, politician - constituency, and world knowledge.  
 
 
 
%ACKNOWLEDGMENTS are optional
 
%\section{Acknowledgments}
 
 
%
 
% The following two commands are all you need in the
 
% initial runs of your .tex file to
 
% produce the bibliography for the citations in your paper.
 
\bibliographystyle{abbrv}
 
\bibliography{sigproc}  % sigproc.bib is the name of the Bibliography in this case
 
% You must have a proper ".bib" file
 
%  and remember to run:
 
% latex bibtex latex latex
 
% to resolve all references
 
%
 
% ACM needs 'a single self-contained file'!
 
%
 
%APPENDICES are optional
 
%\balancecolumns
 
 
 
\end{document}