From 86ac41c213d16547cfd603f71044b3d61bfa3e64 2014-06-12 04:01:21
From: Arjen P. de Vries
Date: 2014-06-12 04:01:21
Subject: [PATCH] Merge branch 'master' of https://scm.cwi.nl/IA/cikm-paper

---
diff --git a/mypaper-final.tex b/mypaper-final.tex
index e7cc6b45e4981e3be4d5db3a62b46a37921fc2e8..bcfb7c9d7314942f4f533165204f4a50979697e9 100644
--- a/mypaper-final.tex
+++ b/mypaper-final.tex
@@ -129,8 +129,18 @@ relevance of the document-entity pair under consideration.
We analyze how these factors (and the design choices made in their corresponding system components) affect filtering performance. We identify and characterize the relevant documents that do not pass the filtering stage by examining their contents. This way, we estimate a practical upper-bound of recall for entity-centric stream filtering.
\end{abstract}

@@ -148,73 +158,100 @@ filtering.
In 2012, the Text REtrieval Conference (TREC) introduced the Knowledge Base Acceleration (KBA) track to help Knowledge Base (KB) curators. The track addresses a critical need of KB curators: given KB (Wikipedia or Twitter) entities, filter a stream for relevant documents, rank the retrieved documents, and recommend them to the curators. The track is crucial and timely because the number of entities in a KB on the one hand, and the huge amount of new content appearing on the Web on the other, make manual KB maintenance challenging. TREC KBA's main task, Cumulative Citation Recommendation (CCR), aims at filtering a stream to identify citation-worthy documents, rank them, and recommend them to KB curators.

Filtering is a crucial step in CCR: it selects a potentially relevant set of working documents, out of a big collection of stream documents, for the subsequent steps of the pipeline. The TREC Filtering track defines filtering as a ``system that sifts through stream of incoming information to find documents that are relevant to a set of user needs represented by profiles'' \cite{robertson2002trec}. In the specific setting of CCR, these profiles are represented by persistent KB entities (Wikipedia pages or Twitter users, in the TREC scenario).

TREC KBA 2013's participants applied filtering as a first step to produce a smaller working set for subsequent experiments. As the subsequent steps of the pipeline use the output of the filter, the final performance of the system depends on this step. The filtering step particularly determines the recall of the overall system. However, all 141 runs submitted by 13 teams suffered from poor recall, as pointed out in the track's overview paper \cite{frank2013stream}.
The most important components of the filtering step are cleansing (pre-processing noisy web text into a canonical ``clean'' text format) and entity profiling (creating a representation of the entity that stream documents can be matched against). For each component, different choices can be made. In the specific case of TREC KBA, the organisers have provided two different versions of the corpus: one that is already cleansed, and one that is the raw data as originally collected. Different approaches also use different entity profiles for filtering, varying from using just the KB entities' canonical names to looking up DBpedia name variants, from using the bold words in the first paragraph of the Wikipedia entity's page to using anchor texts from other Wikipedia pages, and from using the exact name as given to WordNet-derived synonyms. The type of entities (Wikipedia or Twitter) and the category of documents in which they occur (news, blogs, or tweets) cause further variations.
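For concreteness, the sketch below illustrates the kind of cleansing step referred to above: it strips HTML tags from a raw page using only the Python standard library. This is a simplified illustration rather than the tooling used to produce the official cleansed corpus, which additionally applies the Chromium Compact Language Detector to retain only English documents.

\begin{verbatim}
# Simplified cleansing sketch: extract visible text from raw HTML.
# Illustration only; the official cleansed corpus was produced with
# different tooling and also filters out non-English documents.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._skip = False      # inside <script> or <style>
        self._parts = []
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True
    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False
    def handle_data(self, data):
        if not self._skip:
            self._parts.append(data)
    def text(self):
        return " ".join(" ".join(self._parts).split())

def cleanse(raw_html):
    extractor = TextExtractor()
    extractor.feed(raw_html)
    return extractor.text()

print(cleanse("<html><body><p>Benjamin Bronfman in the news.</p></body></html>"))
\end{verbatim}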
% A variety of approaches are employed to solve the CCR
% challenge. Each participant reports the steps of the pipeline and the
% final results in comparison to other systems. A typical TREC KBA
% poster presentation or talk explains the system pipeline and reports
% the final results. The systems may employ similar (even the same)
% steps, but the choices they make at every step are usually
% different.
In such a situation, it becomes hard to identify the factors that result in improved performance. There is a lack of insight across the different approaches, which makes it hard to know whether the improvement in performance of a particular approach is due to preprocessing, filtering, classification, scoring, or any other sub-component of the pipeline.

In this paper, we therefore fix the subsequent steps of the pipeline and zoom in on \emph{only} the filtering step, conducting an in-depth analysis of its main components. In particular, we study the effect of cleansing, entity profiling, the type of entity filtered for (Wikipedia or Twitter), and the document category (social, news, etc.) on the filtering component's performance. The main contributions of the paper are an in-depth analysis of the factors that affect entity-based stream filtering, the identification of optimal entity profiles that do not compromise precision, a description and classification of the relevant documents that are not amenable to filtering, and an estimate of the upper-bound of recall for entity-based filtering.

The rest of the paper is organized as follows:

\textbf{TODO!!}

\section{Data Description}
We base this analysis on the TREC-KBA 2013 dataset%
\footnote{\url{http://trec-kba.org/trec-kba-2013.shtml}}
that consists of three main parts: a time-stamped stream corpus, a set of KB entities to be curated, and a set of relevance judgments. A CCR system now has to identify, for each KB entity, which documents in the stream corpus are to be considered by the human curator.

\subsection{Stream corpus} The stream corpus comes in two versions: raw and cleansed. The raw and cleansed versions are 6.45TB and 4.5TB respectively, after xz-compression and GPG encryption. The raw data is a dump of raw HTML pages. The cleansed version is the raw data after its HTML tags are stripped off; only English documents, identified with the Chromium Compact Language Detector\footnote{\url{https://code.google.com/p/chromium-compact-language-detector/}}, are included. The stream corpus is organized in hourly folders, each of which contains many chunk files. Each chunk file contains between hundreds and hundreds of thousands of serialized thrift objects. One thrift object is one document. A document can be a blog article, a news article, or a social media post (including tweets).
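For illustration, a single chunk file can be read with the \texttt{streamcorpus} Python package that accompanies the track. The sketch below is our own minimal example and makes two assumptions: the chunk has already been GPG-decrypted and xz-decompressed, and the field names (\texttt{stream\_id}, \texttt{source}, \texttt{body.clean\_visible}, \texttt{body.raw}) follow the streamcorpus thrift schema.

\begin{verbatim}
# Minimal sketch: iterate over the thrift objects (documents) in one chunk.
# Assumes the `streamcorpus` package and an already decrypted and
# decompressed chunk file; field names follow its thrift schema.
import streamcorpus

def stream_items(chunk_path):
    for si in streamcorpus.Chunk(chunk_path):
        # cleansed text if available, otherwise fall back to the raw HTML
        text = si.body.clean_visible or si.body.raw
        yield si.stream_id, si.source, text

for stream_id, source, text in stream_items("news-chunk.sc"):
    print(stream_id, source, len(text or ""))
\end{verbatim}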
The stream corpus comes from three sources: TREC KBA 2012 (social, news and linking)\footnote{\url{http://trec-kba.org/kba-stream-corpus-2012.shtml}}, arxiv\footnote{\url{http://arxiv.org/}}, and spinn3r\footnote{\url{http://spinn3r.com/}}. Table \ref{tab:streams} shows the sources, the number of hourly directories, and the number of chunk files.

\begin{table}
\caption{Documents retrieved from the different sources}
\begin{center}
@@ -270,105 +308,105 @@ directories, and the number of chunk files.

\subsection{Relevance judgments}

TREC-KBA provided relevance judgments for training and testing. Relevance judgments are given as document-entity pairs. Documents with citation-worthy content for a given entity are annotated as \emph{vital}, while documents with tangentially relevant content, or documents that lack freshness or whose content can be useful for an initial KB dossier, are annotated as \emph{relevant}. Documents with no relevant content are labeled \emph{neutral} and spam is labeled as \emph{garbage}.
%The inter-annotator agreement on vital in 2012 was 70\% while in 2013 it
%is 76\%. This is due to the more refined definition of vital and the
%distinction made between vital and relevant.

\subsection{Breakdown of results by document source category}
%The results of the different entity profiles on the raw corpus are
%broken down by source categories and relevance rank (vital or relevant).
In total, the dataset contains 24162 unique entity-document pairs that are vital or relevant; 9521 of these have been labelled as vital, and 17424 as relevant. All documents are categorized into 8 source categories: 0.98\% arxiv (a), 0.034\% classified (c), 0.34\% forum (f), 5.65\% linking (l), 11.53\% mainstream-news (m-n), 18.40\% news (n), 12.93\% social (s) and 50.2\% weblog (w). We have regrouped these source categories into three groups, ``news'', ``social'', and ``other'', for two reasons: 1) some categories are very similar to each other. Mainstream-news and news are similar; they exist separately only because they were collected from different sources, by different groups and at different times. We call them news from now on. The same holds for weblog and social, which we call social from now on. 2) Some categories have so few annotations that treating them independently does not make much sense. The majority of the vital or relevant annotations are social (social and weblog, 63.13\%). News (mainstream-news and news) makes up 30\%. Thus, news and social together make up about 93\% of all annotations. The rest account for about 7\% and are all grouped as other.
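The regrouping can be expressed as a simple mapping from the eight source categories to the three groups. The snippet below is only an illustration of that bookkeeping, using the annotation percentages reported above.

\begin{verbatim}
# Illustrative regrouping of the eight source categories into three groups,
# using the percentages of vital/relevant annotations reported above.
GROUP = {"mainstream-news": "news", "news": "news",
         "social": "social", "weblog": "social",
         "arxiv": "other", "classified": "other",
         "forum": "other", "linking": "other"}

share = {"arxiv": 0.98, "classified": 0.034, "forum": 0.34, "linking": 5.65,
         "mainstream-news": 11.53, "news": 18.40, "social": 12.93,
         "weblog": 50.2}

totals = {}
for category, pct in share.items():
    totals[GROUP[category]] = totals.get(GROUP[category], 0.0) + pct

print(totals)  # news ~30%, social ~63%, other ~7%
\end{verbatim}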
\section{Stream Filtering}

The TREC Filtering track defines filtering as a ``system that sifts through stream of incoming information to find documents that are relevant to a set of user needs represented by profiles'' \cite{robertson2002trec}. Its information needs are long-term and are represented by persistent profiles, unlike the traditional search system whose ad-hoc information need is represented by a search query. Adaptive filtering, one task of the Filtering track, starts with a persistent user profile and a very small number of positive examples. A filtering system can improve its user profiles with feedback obtained from interaction with users, and thereby improve its performance. The filtering stage of entity-based stream filtering and ranking can be likened to the adaptive filtering task of the Filtering track: the persistent information needs are the KB entities, and the relevance judgments are the small number of positive examples.

Stream filtering is then the task of, given a stream of news items, blogs and social media posts on the one hand and a set of KB entities on the other, filtering the stream for potentially relevant documents such that the relevance classifier (ranker) achieves the maximum performance possible. Specifically, we conduct an in-depth analysis of the choices and factors affecting the cleansing step, the entity-profile construction, the document category of the stream items, and the type of entities (Wikipedia or Twitter), and of their impact on the overall performance of the pipeline. We also conduct a manual examination of the vital documents that defy filtering. We strive to answer the following research questions:
\begin{enumerate}
  \item Does cleansing affect filtering and subsequent performance?
  \item What is the most effective entity profile representation?
  \item Is filtering different for Wikipedia and Twitter entities?
  \item Are some types of documents more easily filterable than others?
  \item Does a gain in recall at the filtering step translate to a gain in F-measure at the end of the pipeline?
  \item What characterizes the vital (and relevant) documents that are missed in the filtering step?
\end{enumerate}
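As a concrete, deliberately simplified illustration of the filtering step itself, the sketch below keeps a stream document for an entity if any string in the entity's profile occurs in the document text; the systems submitted to the track differ in how they perform this matching.

\begin{verbatim}
# Deliberately simplified filtering step: keep a document for an entity if
# any string of the entity profile occurs in the document text.
def matches(profile_names, doc_text):
    text = doc_text.lower()
    return any(name.lower() in text for name in profile_names)

# "all" profile of the example entity used later in the paper
profile = ["Benjamin Bronfman", "Ben Brewer", "Benjamin Zachary Bronfman"]
docs = {"doc-1": "Ben Brewer announced a new venture this week.",
        "doc-2": "Commodity markets were quiet on Monday."}

kept = [doc_id for doc_id, text in docs.items() if matches(profile, text)]
print(kept)  # ['doc-1']
\end{verbatim}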
TREC filtering and the filtering step of the entity-centric stream filtering and ranking pipeline have different purposes. The TREC Filtering track's goal is the binary classification of documents: for each incoming document, the system decides whether it is relevant for a given profile or not. In our case, the documents have graded relevance, and the goal of the filtering stage is to pass on as many potentially relevant documents as possible, while letting through as few irrelevant documents as possible so as not to obfuscate the later stages of the pipeline. Filtering as part of the pipeline requires a delicate balance between retrieving relevant documents and rejecting irrelevant ones. Because of this, filtering can only be studied by binding it to the later stages of the entity-centric pipeline, and this bond influences how we evaluate it.

To achieve this, we use recall percentages in the filtering stage for the different choices of entity profiles. However, we use the overall performance to select the best entity profiles. To generate the overall pipeline performance we use the official TREC KBA evaluation metric and scripts \cite{frank2013stream} to report max-F, the maximum F-score obtained over all relevance cut-offs.
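The idea behind max-F can be sketched as follows: treat every confidence score in a run as a cut-off, compute precision, recall and F at that cut-off, and report the maximum F. The snippet below is a simplified re-implementation of this idea for a single entity, not the official scoring script.

\begin{verbatim}
# Simplified sketch of max-F: the maximum F1 over all confidence cut-offs.
# The official TREC KBA scorer operates on run files and differs in details.
def max_f(scored):
    """scored: list of (confidence, is_relevant) pairs for one entity."""
    total_relevant = sum(rel for _, rel in scored)
    best_f = 0.0
    for threshold in sorted({c for c, _ in scored}):
        retrieved = [(c, rel) for c, rel in scored if c >= threshold]
        tp = sum(rel for _, rel in retrieved)
        if tp == 0:
            continue
        precision = tp / len(retrieved)
        recall = tp / total_relevant
        best_f = max(best_f, 2 * precision * recall / (precision + recall))
    return best_f

print(max_f([(900, 1), (800, 0), (700, 1), (400, 0), (300, 1)]))  # 0.75
\end{verbatim}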
\section{Literature Review}
There has been a great deal of interest of late in entity-based filtering and ranking. One manifestation of this is the introduction of TREC KBA in 2012. Following that, a number of research works have addressed the topic \cite{frank2012building, ceccarelli2013learning, taneva2013gem, wang2013bit, balog2013multi}. These works are based on the KBA 2012 task and dataset and address the whole problem of entity filtering and ranking. TREC KBA continued in 2013, but the task underwent some changes. The main changes between 2012 and 2013 concern the number of entities, the type of entities, the corpus, and the relevance ratings. The number of entities increased from 29 to 141, and now includes 20 Twitter entities. The TREC KBA 2012 corpus is 1.9TB after xz-compression and has 400M documents. By contrast, the KBA 2013 corpus is 6.45TB after xz-compression and GPG encryption. A version with all non-English documents removed is 4.5TB and consists of 1 billion documents. The 2013 corpus subsumes the 2012 corpus and adds content from spinn3r, namely mainstream news, forum, arxiv, classified, reviews and meme-tracker. A more important difference is, however, a change in the definitions of the relevance ratings vital and relevant. While in KBA 2012 a document was judged vital if it had citation-worthy content for a given entity, in 2013 it must also be fresh, that is, its content must trigger an edit of the given entity's KB entry.

@@ -380,11 +418,11 @@ All of the studies used filtering as their first step to generate a smaller set
Moreover, there has been no study at this scale, nor a study of what types of documents defy filtering and why. In this paper, we conduct a manual examination of the documents that are missed and classify them into different categories.
We also estimate the general upper bound of recall using the different entity profiles, and we choose the best profile as the one that results in increased overall performance as measured by F-measure.

\section{Method}
All analyses in this paper are carried out on the documents that have relevance assessments associated with them. For this purpose, we extracted those documents from the big corpus. We experiment with all KB entities. For each KB entity, we extract different name variants from DBpedia and Twitter.

\subsection{Entity Profiling}
@@ -420,6 +458,35 @@ Redirect &49 \\
\end{table}

The collection contains a total of 121 Wikipedia entities. Every entity has a corresponding DBpedia label. Only 82 entities have a name string and only 49 entities have redirect strings. Most of the entities have only one string, but some have several redirect strings; one entity, Buddy\_MacKay, has the highest number of redirect strings (12). 6 entities have birth names, 1 entity has a nickname, 1 entity has an alias, and 4 entities have alternative names.

We combined the different name variants we extracted to form a set of strings for each KB entity. For Twitter entities, we used the display names that we collected. We consider the name of an entity that is part of its URL as canonical. For example, in http://en.wikipedia.org/wiki/Benjamin\_Bronfman, Benjamin Bronfman is the canonical name of the entity. From the combined name variants and the canonical names, we created four profiles for each entity: canonical (cano), canonical partial (cano-part), all name variants combined (all), and partial names of all name variants (all-part). We refer to the last two profiles as name-variant and name-variant partial. The names in parentheses are used in table captions.

\begin{table*}
\caption{Example entity profiles for a Wikipedia and a Twitter entity}
\begin{center}
\begin{tabular}{l*{3}{c}}
 &Wikipedia&Twitter \\
\hline
 &Benjamin\_Bronfman& roryscovel\\
 cano&[Benjamin Bronfman] &[roryscovel]\\
 cano-part &[Benjamin, Bronfman]&[roryscovel]\\
 all&[Ben Brewer, Benjamin Zachary Bronfman] &[Rory Scovel] \\
 all-part& [Ben, Brewer, Benjamin, Zachary, Bronfman]&[Rory, Scovel]\\
\hline
\end{tabular}
\end{center}
\label{tab:profiles}
\end{table*}

\subsection{Annotation Corpus}
The annotation set is a combination of the annotations from before the Training Time Range (TTR) and the Evaluation Time Range (ETR), and consists of 68405 annotations. Its breakdown into training and test sets is shown in Table \ref{tab:breakdown}.
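Constructing the four profiles is mechanical once the name variants are available. The sketch below illustrates this for the example Wikipedia entity of Table \ref{tab:profiles}; it assumes the DBpedia name variants (labels, names, redirects, birth names, nicknames, aliases, alternative names) have already been collected, and is an illustration rather than the exact extraction code.

\begin{verbatim}
# Illustrative construction of the four entity profiles; assumes the name
# variants have already been collected from DBpedia (or Twitter).
def build_profiles(canonical, name_variants):
    cano = [canonical]                      # canonical
    cano_part = canonical.split()           # canonical partial
    all_names = list(name_variants)         # all name variants combined
    all_part = sorted({tok for name in name_variants   # partial names of
                       for tok in name.split()})       # all name variants
    return {"cano": cano, "cano-part": cano_part,
            "all": all_names, "all-part": all_part}

# Canonical name taken from the entity URL, e.g.
# http://en.wikipedia.org/wiki/Benjamin_Bronfman -> "Benjamin Bronfman"
canonical = "Benjamin_Bronfman".replace("_", " ")
variants = ["Ben Brewer", "Benjamin Zachary Bronfman"]
for profile, strings in build_profiles(canonical, variants).items():
    print(profile, strings)
\end{verbatim}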