Changeset - 7babd96c03f9
Gebrekirstos Gebremeskel - 11 years ago 2014-06-12 05:21:03
destinycome@gmail.com
updated
1 file changed with 70 insertions and 68 deletions:
mypaper-final.tex
 
@@ -160,101 +160,101 @@ users, in the TREC scenario).
 
 final performance of the system is dependent on this step.  The
 
 filtering step particularly determines the recall of the overall
 
system. However, all 141 runs submitted by 13 teams suffered from
 
 poor recall, as pointed out in the track's overview paper 
 
 \cite{frank2013stream}. 
 
 
The most important components of the filtering step are cleansing
(pre-processing noisy web text into a canonical ``clean'' text format)
and entity profiling (creating a representation of the entity against
which the stream documents can be matched). For each component,
different choices can be made. In the specific case of TREC KBA, the
organisers have provided two versions of the corpus: one that is
already cleansed, and one that contains the raw data as originally
collected. Also, different approaches use different entity profiles
for filtering, varying from using just the KB entities' canonical
names to looking up DBpedia name variants, from using the bold words
in the first paragraph of the Wikipedia entity's page to using anchor
texts from other Wikipedia pages, and from using the exact name as
given to using WordNet-derived synonyms. The type of entities
(Wikipedia or Twitter) and the category of documents in which they
occur (news, blogs, or tweets) cause further variation.
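
To make these profile variants concrete, the following sketch (a
minimal illustration in Python; the \texttt{dbpedia\_name\_variants}
argument is a hypothetical stand-in for whatever source of name
variants is used) shows how the four profile types compared later in
this paper could be constructed and matched against a document:

\begin{verbatim}
# Sketch: constructing the four entity profiles and matching a document.
# "dbpedia_name_variants" is a hypothetical stand-in for the DBpedia
# redirects, labels and names used as name variants.

def build_profiles(canonical_name, dbpedia_name_variants):
    """Return the four profile types as sets of strings to match on."""
    canonical = {canonical_name}
    # canonical partial: also the individual tokens of the canonical name
    canonical_partial = canonical | set(canonical_name.split())
    # name-variant: the canonical name plus all known name variants
    name_variant = canonical | set(dbpedia_name_variants)
    # name-variant partial: also the tokens of every name variant
    name_variant_partial = name_variant | {
        tok for name in name_variant for tok in name.split()
    }
    return {
        "canonical": canonical,
        "canonical_partial": canonical_partial,
        "name_variant": name_variant,
        "name_variant_partial": name_variant_partial,
    }

def matches(document_text, profile):
    """A document passes the filter if any profile string occurs in it."""
    text = document_text.lower()
    return any(name.lower() in text for name in profile)
\end{verbatim}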
 
% A variety of approaches are employed  to solve the CCR
 
% challenge. Each participant reports the steps of the pipeline and the
 
% final results in comparison to other systems.  A typical TREC KBA
 
% poster presentation or talk explains the system pipeline and reports
 
% the final results. The systems may employ similar (even the same)
 
% steps  but the choices they make at every step are usually
 
% different. 
 
In such a situation, it becomes hard to identify the factors that
result in improved performance. There is a lack of insight across
different approaches: it is hard to know whether the improvement in
performance of a particular approach is due to preprocessing,
filtering, classification, scoring, or any of the sub-components of
the pipeline.
 
 
 
In this paper, we therefore fix the subsequent steps of the pipeline
and zoom in on \emph{only} the filtering step, conducting an in-depth
analysis of its main components. In particular, we study the effect of
cleansing, entity profiling, the type of entity filtered for
(Wikipedia or Twitter), and the document category (social, news, etc.)
on the filtering components' performance. The main contributions of
the paper are an in-depth analysis of the factors that affect
entity-based stream filtering, the identification of optimal entity
profiles that do not compromise precision, a description and
classification of relevant documents that are not amenable to
filtering, and an estimate of the upper bound on recall for
entity-based filtering.
 
 
The rest of the paper is organized as follows. Section \ref{sec:desc}
describes the dataset and Section \ref{sec:fil} defines the task. In
Section \ref{sec:lit}, we discuss related literature, followed by a
description of our method in Section \ref{sec:mthd}. We then present
the experimental results in Section \ref{sec:expr}, and discuss and
analyze them in Section \ref{sec:analysis}. Towards the end, we
discuss the impact of filtering choices on classification in Section
\ref{sec:impact} and examine and categorize unfilterable documents in
Section \ref{sec:unfil}. Finally, we present our conclusions in
Section \ref{sec:conc}.
 
 
 
 \section{Data Description}\label{sec:desc}
 
We base this analysis on the TREC-KBA 2013 dataset%
 
\footnote{\url{http://trec-kba.org/trec-kba-2013.shtml}}
 
that consists of three main parts: a time-stamped stream corpus, a set of
 
KB entities to be curated, and a set of relevance judgments. A CCR
 
system now has to identify for each KB entity which documents in the
 
stream corpus are to be considered by the human curator.
 
 
\subsection{Stream corpus} The stream corpus comes in two versions:
raw and cleansed. The raw and cleansed versions are 6.45TB and 4.5TB,
respectively, after xz-compression and GPG encryption. The raw data
is a dump of raw HTML pages. The cleansed version is the raw data
after its HTML tags have been stripped off; only English documents, as
identified with the Chromium Compact Language
Detector\footnote{\url{https://code.google.com/p/chromium-compact-language-detector/}},
are included. The stream corpus is organized in hourly folders, each
of which contains many chunk files. Each chunk file contains between
hundreds and hundreds of thousands of serialized thrift objects, where
one thrift object is one document. A document can be a blog article, a
news article, or a social media post (including tweets). The stream
 
corpus comes from three sources: TREC KBA 2012 (social, news and
 
linking) \footnote{\url{http://trec-kba.org/kba-stream-corpus-2012.shtml}},
 
arxiv\footnote{\url{http://arxiv.org/}}, and
 
spinn3r\footnote{\url{http://spinn3r.com/}}.
 
Table \ref{tab:streams} shows the sub-stream sources, the number of
documents, and the number of chunk files.
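
As a concrete illustration of this layout, the sketch below walks the
hourly folders and counts chunk files per folder; the \texttt{.xz}
suffix is an assumption about the chunk-file naming, and reading the
serialized thrift objects inside each chunk is not shown.

\begin{verbatim}
# Sketch: walking the hourly folders of the stream corpus and counting
# chunk files per folder.  The ".xz" suffix is an assumption; decoding
# the serialized thrift objects inside each chunk is not shown.
import os

def count_chunk_files(corpus_root):
    counts = {}
    for hour_dir in sorted(os.listdir(corpus_root)):
        hour_path = os.path.join(corpus_root, hour_dir)
        if not os.path.isdir(hour_path):
            continue
        chunks = [f for f in os.listdir(hour_path) if f.endswith(".xz")]
        counts[hour_dir] = len(chunks)
    return counts
\end{verbatim}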
 
\begin{table}
\caption{Number of documents and chunk files per sub-stream source}
\begin{center}
\begin{tabular}{rrl}
Documents & Chunk files & Sub-stream \\
\hline
126,952 & 11,851 & arxiv \\
394,381,405 & 688,974 & social \\
134,933,117 & 280,658 & news \\
5,448,875 & 12,946 & linking \\
57,391,714 & 164,160 & MAINSTREAM\_NEWS (spinn3r) \\
36,559,578 & 85,769 & FORUM (spinn3r) \\
14,755,278 & 36,272 & CLASSIFIED (spinn3r) \\
52,412 & 9,499 & REVIEW (spinn3r) \\
7,637 & 5,168 & MEMETRACKER (spinn3r) \\
\hline
1,040,520,595 & 2,222,554 & Total \\
\end{tabular}
 
@@ -681,100 +681,102 @@ name-variant partial over name-variant is 8.3\%.
 
%saturation: documents have already been extracted by the different
 
%name variants and thus using their partial names do not bring in many
 
%new relevant documents.
 
For Wikipedia entities, canonical
 
partial achieves better recall than name-variant in both the cleansed and
 
the raw corpus.  %In the raw extraction, the difference is about 3.7.
 
For Twitter entities, the recall of canonical matching is very low.%
\footnote{Canonical
and canonical partial are the same for Twitter entities because they
are one-word strings. For example, in https://twitter.com/roryscovel,
``roryscovel'' is the canonical name and its partial is identical.}
 
%The low recall is because the canonical names of Twitter entities are
 
%not really names; they are usually arbitrarily created user names. It
 
%shows that  documents  refer to them by their display names, rarely
 
%by their user name, which is reflected in the name-variant recall
 
%(67.9\%). The use of name-variant partial increases the recall to
 
%88.2\%.
 
 
 
 
Tables \ref{tab:name} and \ref{tab:source-delta} show a higher recall
 
for Wikipedia than for Twitter entities. Generally, at both
 
aggregate and document category levels, we observe that recall
 
increases as we move from canonicals to canonical partial, to
 
name-variant, and to name-variant partial. The only case where this
 
does not hold is in the transition from Wikipedia's canonical partial
 
to name-variant. At the aggregate level (as can be inferred from Table
 
\ref{tab:name}), the difference in performance between  canonical  and
 
name-variant partial is 31.9\% on all entities, 20.7\% on Wikipedia
 
entities, and 79.5\% on Twitter entities. 
 
 
Section \ref{sec:analysis} discusses the most plausible explanations for these findings.
 
%% TODO: PERHAPS SUMMARY OF DISCUSSION HERE
 
 
\section{Impact on classification}\label{sec:impact}
 
In the overall experimental setup, classification, ranking, and
evaluation are kept constant. Following the settings of
\cite{balog2013multi}, we use
WEKA's\footnote{\url{http://www.cs.waikato.ac.nz/~ml/weka/}} Random
Forest classifier. However, we use a smaller set of features, which we
found to be more effective: we compared classification runs using our
own feature implementations against the original feature set, and our
implementations achieved better results. In total we use 13 features,
listed below.
 
  
 
\paragraph*{Google's Cross Lingual Dictionary (GCLD)}
 
 
This is a mapping of strings to Wikipedia concepts and vice versa
 
\cite{spitkovsky2012cross}. 
 
The GCLD corpus estimates two probabilities:
 
(1) the probability with which a string is used as anchor text to
 
a Wikipedia entity 
 
%thus distributing the probability mass over the different entities that it is used as anchor text;
 
and (2) the probability indicating the strength of co-reference of an
anchor with respect to the other anchors of a given Wikipedia entity.
We use the product of the two probabilities for each string.
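
One way to write this feature down (our notation; not necessarily the
exact estimator used in \cite{spitkovsky2012cross}) is
\[
f_{\mathrm{GCLD}}(s, e) \;=\; P(e \mid s) \cdot P(s \mid e),
\]
where $s$ is the anchor string, $e$ the Wikipedia entity,
$P(e \mid s)$ the probability that $s$ links to $e$, and
$P(s \mid e)$ the probability that $e$ is referred to by $s$.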
 
 
\paragraph*{jac} 
 
  Jaccard similarity between the document and the entity's Wikipedia page
 
\paragraph*{cos} 
 
  Cosine similarity between the document and the entity's Wikipedia page
 
\paragraph*{kl} 
 
  KL-divergence between the document and the entity's Wikipedia page
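
A minimal sketch of these three similarity features over bag-of-words
representations of the document and the entity's Wikipedia page (the
smoothing constant in the KL-divergence is an arbitrary choice for
illustration):

\begin{verbatim}
# Sketch: Jaccard, cosine and KL-divergence between a document and the
# entity's Wikipedia page, both given as lists of tokens.
import math
from collections import Counter

def jaccard(doc_tokens, page_tokens):
    a, b = set(doc_tokens), set(page_tokens)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def cosine(doc_tokens, page_tokens):
    a, b = Counter(doc_tokens), Counter(page_tokens)
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def kl_divergence(doc_tokens, page_tokens, eps=1e-9):
    # KL(P_doc || P_page), smoothed so no probability is exactly zero.
    a, b = Counter(doc_tokens), Counter(page_tokens)
    vocab = set(a) | set(b)
    na, nb = sum(a.values()), sum(b.values())
    kl = 0.0
    for t in vocab:
        p = (a[t] + eps) / (na + eps * len(vocab))
        q = (b[t] + eps) / (nb + eps * len(vocab))
        kl += p * math.log(p / q)
    return kl
\end{verbatim}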
 
  
 
  \paragraph*{PPR}
 
For each entity, we computed personalized PageRank (PPR) scores over
a Wikipedia snapshot and kept the top 100 entities along with the
corresponding scores.
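
A hedged sketch of how such personalized PageRank scores could be
computed, here using \texttt{networkx} over an assumed Wikipedia link
graph:

\begin{verbatim}
# Sketch: personalized PageRank over a Wikipedia link graph, keeping the
# top-100 entities for a given seed entity.  Building the link graph
# from a Wikipedia snapshot is assumed to have happened elsewhere.
import networkx as nx

def top_ppr_entities(link_graph, seed_entity, k=100):
    # Restart distribution concentrated on the seed entity.
    personalization = {n: 1.0 if n == seed_entity else 0.0
                       for n in link_graph}
    scores = nx.pagerank(link_graph, alpha=0.85,
                         personalization=personalization)
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:k]
\end{verbatim}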
 
 
 
\paragraph*{Surface Form (sForm)}
 
For each Wikipedia entity, we gathered DBpedia name variants. These
 
are redirects, labels and names.
 
 
 
\paragraph*{Context (contxL, contxR)}
 
From the WikiLink corpus \cite{singh12:wiki-links}, we collected
all left and right contexts (two sentences to the left and two
sentences to the right of each mention) and generated n-grams from
uni-grams up to 4-grams for each left and right context.
Finally, we select the 5 most frequent n-grams for each context.
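
A minimal sketch of this n-gram extraction (uni-grams up to 4-grams
per context, keeping the five most frequent):

\begin{verbatim}
# Sketch: collect 1- to 4-grams from the mention contexts of an entity
# and keep the 5 most frequent n-grams.
from collections import Counter

def ngrams(tokens, n_min=1, n_max=4):
    for n in range(n_min, n_max + 1):
        for i in range(len(tokens) - n + 1):
            yield " ".join(tokens[i:i + n])

def top_context_ngrams(context_token_lists, k=5):
    """context_token_lists: one token list per mention context."""
    counts = Counter()
    for tokens in context_token_lists:
        counts.update(ngrams(tokens))
    return [gram for gram, _ in counts.most_common(k)]
\end{verbatim}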
 
 
\paragraph*{FirstPos}
 
  Term position of the first occurrence of the target entity in the document 
 
  body 
 
\paragraph*{LastPos }
 
  Term position of the last occurrence of the target entity in the document body
 
 
\paragraph*{LengthBody} Term count of document body
 
\paragraph*{LengthAnchor} Term count  of document anchor
 
  
 
\paragraph*{FirstPosNorm} 
 
  Term position of the first occurrence of the target entity in the document 
 
  body normalised by the document length 
 
\paragraph*{MentionsBody }
 
  No. of occurrences of the target entity in the  document body
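
The position and length features above reduce to simple token
arithmetic; a sketch, assuming for simplicity that entity mentions
have already been normalised to a single token:

\begin{verbatim}
# Sketch: position and count features over a tokenised document body.
# Assumes entity mentions are normalised to a single token beforehand.
def position_features(body_tokens, entity_token):
    positions = [i for i, tok in enumerate(body_tokens)
                 if tok == entity_token]
    length = len(body_tokens)
    first = positions[0] if positions else -1
    last = positions[-1] if positions else -1
    return {
        "FirstPos": first,
        "LastPos": last,
        "LengthBody": length,
        "FirstPosNorm": first / length if positions and length else 0.0,
        "MentionsBody": len(positions),
    }
\end{verbatim}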
 
 
 
 
  
 
In summary, the features comprise similarity features (such as cosine
and Jaccard similarity between the document and the entity's profile
text), document-entity features (such as whether the document mentions
the entity in the title or body, the frequency of mentions, and
mention positions), and related-entity features (such as the PPR
scores).
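
As an illustration of how these 13 features feed into classification,
the sketch below uses scikit-learn's \texttt{RandomForestClassifier}
purely as a stand-in for the WEKA Random Forest we actually used;
feature extraction is assumed to yield one 13-dimensional vector per
document-entity pair.

\begin{verbatim}
# Sketch: training a random forest on the 13 document-entity features.
# scikit-learn is used here only as a stand-in for WEKA's Random Forest.
from sklearn.ensemble import RandomForestClassifier

def train_classifier(feature_vectors, labels):
    """feature_vectors: one 13-dimensional list per document-entity pair;
    labels: the corresponding relevance ratings."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(feature_vectors, labels)
    return clf
\end{verbatim}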
 
Here, we present results showing how the choices of corpus, entity type, and entity profile impact these later stages of the pipeline. Tables \ref{tab:class-vital} and \ref{tab:class-vital-relevant} show the performance in max-F.
 
\begin{table*}
 
\caption{Vital performance under different name variants (upper part: cleansed corpus, lower part: raw corpus)}
 
@@ -896,188 +898,188 @@ In vital-relevant category (Table \ref{tab:class-vital-relevant}), the performan
 
% Cementos Lima &&It appears a mistake to label it vital. the article talks about insurance and centos lima is a cement company.entity-deleted from wiki\\
 
% Corn Belt Power Cooperative & &No content at all\\
 
% Marion Technical Institute&&the text could be of any place. talks about a place whose name is not mentioned. 
 
%  roryscovel & &Talks about a video hinting that he might have seen in the venue\\
 
% Jim Poolman && talks of party convention, of which he is member  politician\\
 
% Atacocha && No mention by name The article talks about waste from mining and Anacocha is a mining company.\\
 
% Joey Mantia & & a mention of a another speeedskater\\
 
% Derrick Alston&&Text swedish, no mention.\\
 
% Paul Johnsgard&& not immediately clear why \\
 
% GandBcoffee&& not immediately visible why\\
 
% Bob Bert && talks about a related media and entertainment\\
 
% FrankandOak&& an article that talks about a the realease of the most innovative companies of which FrankandOak is one. \\
 
% KentGuinn4Mayor && a theft in a constituency where KentGuinn4Mayor is vying.\\
 
% Hjemkomst Center && event announcement without mentioning where. it takes a a knowledge of \\
 
% BlossomCoffee && No content\\
 
% Scotiabank Per\%25C3\%25BA && no content\\
 
% Drew Wrigley && politics and talk of oilof his state\\
 
% Joshua Zetumer && mentioned by his film\\
 
% Théo Mercier && No content\\
 
% Fargo Air Museum && No idea why\\
 
% Stevens Cooperative School && no content\\
 
% Joshua Boschee && No content\\
 
% Paul Marquart &&  No idea why\\
 
% Haven Denney && article on skating competition\\
 
% Red River Zoo && animal show in the zoo, not indicated by name\\
 
% RonFunches && talsk about commedy, but not clear whyit is central\\
 
% DeAnne Smith && No mention, talks related and there are links\\
 
% Richard Edlund && talks an ward ceemony in his field \\
 
% Jennifer Baumgardner && no idea why\\
 
% Jeff Tamarkin && not clear why\\
 
% Jasper Schneider &&no mention, talks about rural development of which he is a director \\
 
% urbren00 && No content\\
 
% \hline
 
% \end{tabular}
 
% \end{center}
 
% \label{tab:miss from both}
 
% \end{table*}
 
 
 
 
 
   
 
  
 
\section{Analysis and Discussion}\label{sec:analysis}
 
 
 
We conducted experiments to study the impact of the different
components of the filtering stage of an entity-based filtering and
ranking pipeline on recall. Specifically, we studied the impact of
cleansing, entity profiles, relevance ratings, and document
categories. We also measured the impact of these factors and choices
on the later stages of our own system's pipeline.
 
 
Experimental results show that cleansing can remove all or parts of
the content of documents, making them difficult to retrieve. These
documents can, however, be retrieved from the raw version. The use
of the raw corpus brings in documents that cannot be retrieved from
the cleansed corpus. This is true for all entity profiles and for all
entity types. The recall difference between the cleansed and raw
corpus ranges from 6.8\% to 26.2\%. In actual document-entity pairs,
these increases amount to thousands, which we consider a substantial
increase. However, the recall increases do not always translate into
an improved overall F-score. In the vital relevance ranking, for both
Wikipedia and aggregate entities, the cleansed version performs better
than the raw version. For Twitter entities, the raw corpus performs
better except in the case of all name-variants, though the difference
is negligible. For vital-relevant, however, the raw corpus performs
better across all entity profiles and entity types except for partial
canonical names of Wikipedia entities.
 
The use of different profiles also results in large differences in
recall. Although for Wikipedia entities canonical partial achieves
better recall than name-variant, there is otherwise a steady increase
in recall from canonical to canonical partial, to name-variant, and
to name-variant partial. This pattern is also observed across the
document categories. Here too, however, the relationship between
the recall gained by moving from a less rich to a richer profile and
the overall performance as measured by F-score is not linear.
 
 
 
 
%%%%%%%%%%%%
 
 
 
In vital ranking, across all entity profiles and both versions of the
corpus, Wikipedia's canonical partial achieves better performance than
any other Wikipedia entity profile. For vital-relevant documents too,
Wikipedia's canonical partial achieves the best result, although in
the raw corpus it achieves a little less than name-variant partial.
For Twitter entities, the name-variant partial profile achieves the
highest F-score across all entity profiles and both versions of the
corpus.
 
 
 
 
There are three interesting observations:

1) Cleansing impacts Twitter entities and relevant documents. This is
validated by the observation that recall gains for Twitter entities
and for the relevant category in the raw corpus also translate into
overall performance gains. This implies that cleansing removes more
relevant and social documents than it does vital and news documents.
That it removes relevant documents more than vital ones can be
explained by the fact that cleansing removes related links and
adverts, which may contain a mention of the entities. In one example
we saw, cleansing removed an image whose accompanying text contained
an entity name and was actually relevant. That it removes social
documents can be explained by the fact that most of the documents
missing from the cleansed corpus are social, and all of the documents
missing from the raw corpus are social. So in both cases, social
documents seem to suffer from the text transformation and cleansing
processes.
 
 
%%%% NEEDS WORK:
 
 
Taking both kinds of performance (recall at filtering and overall
F-score during evaluation) into account, there is a clear trade-off
between using a richer entity profile and retrieving irrelevant
documents. The richer the profile, the more relevant documents it
retrieves, but also the more irrelevant ones. To put this into
perspective, let us compare the number of documents retrieved with
canonical partial and with name-variant partial. Using the raw corpus,
the former retrieves a total of 2,547,487 documents and achieves a
recall of 72.2\%. By contrast, the latter retrieves a total of
4,735,318 documents and achieves a recall of 90.2\%. The total number
of documents extracted increases by 85.9\% for a recall gain of 18\%;
the rest of the increase, that is 67.9\%, consists of newly introduced
irrelevant documents.
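
Spelling out the arithmetic behind these figures:
\[
\frac{4{,}735{,}318 - 2{,}547{,}487}{2{,}547{,}487} \approx 85.9\%,
\qquad
90.2\% - 72.2\% = 18\%,
\qquad
85.9\% - 18\% = 67.9\%.
\]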
 
 
Perhaps surprisingly, canonical partial is the best entity profile for
Wikipedia entities. Here, the retrieval of thousands of additional
vital-relevant document-entity pairs by name-variant partial does not
materialize into an increase in overall performance. Notice that none
of the participants in TREC KBA considered canonical partial a viable
strategy, though. We conclude that, at least for our system, the
remainder of the pipeline needs a different approach to handle the
correct scoring of the additional documents that are necessary if we
do not want to accept a low recall in the filtering step.
 
%With this understanding, there  is actually no
 
%need to go and fetch different names variants from DBpedia, a saving
 
%of time and computational resources.
 
 
 
 
%%%%%%%%%%%%
 
 
 
 
 
The deltas between entity profiles, relevance ratings, and document
categories reveal four differences between Wikipedia and Twitter
entities. 1) For Wikipedia entities, the difference between canonical
partial and canonical is higher (16.1\%) than between name-variant
partial and name-variant (8.3\%). This can be explained by saturation:
documents have already been extracted by the name variants, and thus
using their partials does not bring in many new relevant documents.
2) Twitter entities are mentioned by name-variant or name-variant
partial, which is reflected in the high recall these profiles achieve
compared to the low recall achieved by canonicals (or their partials).
This indicates that documents (especially news and others) almost
never use user names to refer to Twitter entities; name-variant
partials are the best entity profiles for Twitter entities.
3) Comparatively speaking, however, social documents refer to Twitter
entities by their user names more than news and others do, suggesting
a difference in adherence to naming standards. 4) Wikipedia entities
achieve higher recall and higher overall performance.
 
 
The high recall and subsequently higher overall performance of
Wikipedia entities can be attributed to two reasons. 1) Wikipedia
entities are better described than Twitter entities. The fact that we
can retrieve different name variants from DBpedia is a measure of this
relatively rich description. Rich description plays a role both in
filtering and in the computation of features such as similarity
measures in the later stages of the pipeline. By contrast, we have
only two names for Twitter entities: their user names and their
display names, which we collect from their Twitter pages. 2) There is
no DBpedia-like resource for Twitter entities from which alternative
names can be collected.
 
 
 
In the experimental results, we also observed that recall scores in
the vital category are higher than in the relevant category. This
observation confirms one commonly held assumption: frequency of
mention is related to relevance. This is the assumption behind using
term frequency as an indicator of document relevance in many
information retrieval systems. The more a document explicitly mentions
an entity by name, the more likely the document is vital to the
entity.
 
 
Across document categories, we observe a recall pattern of others,
followed by news, and then by social: social documents are the hardest
to retrieve. This can be explained by the fact that social documents
(tweets and blogs) are more likely to point to a resource where the
entity is mentioned, to mention the entity with some short
abbreviation, or to talk about the entity without mentioning it, with
some context in mind. By contrast, news documents mention the entities
they talk about using the common name variants more often than social
documents do. However, the greater difference in percentage recall
between the different entity profiles in the news category indicates
that news refers to a given entity with different names, rather than
by one standard name; others show the least variation in referring to
entities, and social documents fall in between the two. The deltas for
Wikipedia entities between canonical partials and canonicals, and
between name-variants and canonicals, are high, an indication that
canonical partials and name-variants bring in new relevant documents
that cannot be retrieved by canonicals. The other two deltas are very
small, suggesting that the partial names of name variants do not bring
in many new relevant documents.
 
 
 
%\section{Unfilterable documents}\label{sec:unfil}
 
 
\section{Missing vital-relevant documents}\label{sec:unfil}
 
 
% 
 
 
The use of name-variant partial for filtering is an aggressive attempt to retrieve as many relevant documents as possible, at the cost of also retrieving irrelevant documents. However, we still miss about 2363 (10\%) of the vital-relevant documents. Why are these documents missed? If they are not mentioned by partial names of name variants, what are they mentioned by? Table \ref{tab:miss} shows the documents that we miss with respect to the cleansed and raw corpus. The upper part shows the number of documents missing from the cleansed and raw versions of the corpus; the lower part shows the intersections and exclusions in each corpus.
 
 
\begin{table}
 
\caption{The number of documents missing  from raw and cleansed extractions. }
 
\begin{center}
 
\begin{tabular}{l@{\quad}lll}
\hline
\rule{0pt}{12pt}Category & Vital & Relevant & Total \\[5pt]
\hline
Cleansed & 1284 & 1079 & 2363 \\
Raw & 276 & 4951 & 5227 \\
\hline
Missing only from cleansed & 1065 & 2016 & 3081 \\
Missing only from raw & 57 & 160 & 217 \\
Missing from both & 219 & 1927 & 2146 \\
\hline
\end{tabular}
 
\end{center}
 
\label{tab:miss}