From 2594949a76bb1f4fc51197ff242fe4603587102d 2014-06-12 03:02:11
From: Arjen P. de Vries
Date: 2014-06-12 03:02:11
Subject: [PATCH] trying...

---

diff --git a/mypaper-final.tex b/mypaper-final.tex
index 8c1aaf06e815fd4f97c199ab2b5610a60842aca1..b71c3162484ab5534b5d9c4fcda4fe9d136fecae 100644
--- a/mypaper-final.tex
+++ b/mypaper-final.tex
@@ -379,7 +379,11 @@
 All of the studies used filtering as their first step to generate a smaller set
 Moreover, there has not been a chance to study at this scale and/or a study into what type of documents defy filtering and why?
 In this paper, we conduct a manual examination of the documents that are missing and classify them into different categories. We also estimate the general upper bound of recall using the different entities profiles and choose the best profile that results in an increased over all performance as measured by F-measure.
 \section{Method}
-We work with the docuemnts have relavance assessments. For this purpose, we extracted those docuemnts from the big corpus. We experiment with all KB entities. For each KB entity, we extract different name variants from DBpedia and Twitter.
+All analyses in this paper are carried out on the documents that have
+relevance assessments associated to them. For this purpose, we
+extracted those documents from the big corpus. We experiment with all
+KB entities. For each KB entity, we extract different name variants
+from DBpedia and Twitter.
 \
 \subsection{Entity Profiling}
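The added paragraph says that, for each KB entity, name variants are extracted from DBpedia and Twitter, but the hunk itself does not show how. Below is a minimal sketch (not part of the patch) of one way the DBpedia side of that step could be done. It assumes the public DBpedia SPARQL endpoint and the SPARQLWrapper library; the function name `name_variants`, the choice of rdfs:label plus dbo:wikiPageRedirects as variant sources, and the example entity URI are illustrative assumptions, not the paper's actual procedure.

    # Sketch: collect candidate name variants for one KB entity from DBpedia.
    # Assumptions: public endpoint, labels + redirect labels as variants.
    from SPARQLWrapper import SPARQLWrapper, JSON

    def name_variants(entity_uri):
        """Return the English rdfs:label of the entity plus the labels
        of all DBpedia pages that redirect to it."""
        sparql = SPARQLWrapper("https://dbpedia.org/sparql")
        sparql.setReturnFormat(JSON)
        sparql.setQuery(f"""
            PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
            PREFIX dbo:  <http://dbpedia.org/ontology/>
            SELECT DISTINCT ?name WHERE {{
              {{ <{entity_uri}> rdfs:label ?name . }}
              UNION
              {{ ?redirect dbo:wikiPageRedirects <{entity_uri}> ;
                           rdfs:label ?name . }}
              FILTER (lang(?name) = "en")
            }}
        """)
        results = sparql.query().convert()
        return {b["name"]["value"] for b in results["results"]["bindings"]}

    # Example (hypothetical entity):
    # print(name_variants("http://dbpedia.org/resource/Barack_Obama"))

Redirect labels are one common proxy for surface-form variants (nicknames, abbreviations, misspellings); which variant sources the paper actually uses, and how the Twitter variants are obtained, is not specified in this hunk.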