Changeset - 4c5df7fa8a15
Gebrekirstos Gebremeskel - 11 years ago 2014-06-09 22:44:37
destinycome@gmail.com
second commit
1 file changed with 69 insertions and 61 deletions:
0 comments (0 inline, 0 general)
mypaper-final.tex
 
@@ -193,19 +193,15 @@ TREC-KBA provided relevance judgments for training and testing. Relevance judgme
 
\end{enumerate}
 
 
\subsection{Literature Review}
 
There has been a great deal of interest of late in entity-based filtering and ranking. One manifestation of this is the introduction of TREC KBA in 2012. Following that, a number of studies have been carried out on the topic \cite{frank2012building, ceccarelli2013learning, taneva2013gem, wang2013bit, balog2013multi}. These works are based on the KBA 2012 task and dataset, and they address the whole problem of entity filtering and ranking. TREC KBA continued in 2013, but the task underwent some changes. The main changes between 2012 and 2013 are in the number of entities, the type of entities, the corpus, and the relevance rankings.
 
 
The number of entities increased from 29 to 141 and came to include 20 Twitter entities. The TREC KBA 2012 corpus was 1.9 TB after xz-compression and contained about 400M documents. By contrast, the KBA 2013 corpus was 6.45 TB after xz-compression and GPG encryption; a version with all non-English documents removed is 4.5 TB and consists of 1 billion documents. The 2013 corpus subsumed the 2012 corpus and added further sources from spinn3r, namely mainstream news, forum, arxiv, classified, reviews and memetracker.

A more important difference, however, is that the definition of the relevance ranking changed, specifically the definitions of vital and relevant. While in KBA 2012 a document was judged vital if it had citation-worthy content, in 2013 it must also be fresh, that is, its content must trigger an edit of the KB entry.
 
While the tasks of 2012 and 2013 are fundamentally the same, the approaches varied due to the size of the corpus. In 2013, all participants used filtering to reduce the size of the large corpus, but they filtered in different ways. Many used two or more name variants from DBpedia, such as labels, names, redirects, birth names, aliases, nicknames, same-as links and alternative names \cite{wang2013bit, dietzumass, liu2013related, zhangpris}; although all of the participants used DBpedia name variants, none of them used all of them. A few participants used the bold words in the first paragraph of the Wikipedia entity's profile and anchor texts from other Wikipedia pages \cite{bouvierfiltering, niauniversity}. Very few participants used a Boolean AND query built from the tokens of the canonical names \cite{illiotrec2013}.
 
All of the studies mentioned used filtering as a first step to generate a smaller set of documents, but none of them studied the filtering step itself. Of all the systems submitted to the TREC conference, the highest recall achieved was 83\%, and many systems suffered from poor recall, which strongly affected their overall performance \cite{frank2012building}.

Although systems used different entity profiles to filter the stream and achieved different performance levels, there is no study of the factors and choices that affect the filtering step itself. Filtering has, of course, been extensively examined in the TREC Filtering track \cite{robertson2002trec}. However, those studies were isolated in the sense that they were intended to optimize recall. What we have here is a different scenario: documents have relevance ratings, so we want to study filtering in connection with relevance to the entities, which can be done by coupling filtering to the later stages of the pipeline. To the best of our knowledge this has not been done before, and the TREC KBA problem setting and datasets offer a good opportunity to examine this aspect of filtering.
 
 
Moreover, there has not previously been an opportunity to study filtering at this scale, nor to examine what types of documents defy filtering and why. In this paper, we conduct a manual examination of the documents that are missed and classify them into different categories. We also estimate the general upper bound of recall for the different entity profiles and choose the profile that results in the best overall performance as measured by F-measure.
 
 
\section{Method}
 
We work with the subset of the stream corpus documents for which annotations exist. For this purpose, we extracted the annotated documents from the big corpus; all our experiments are based on this smaller subset. We experiment with all KB entities. For each KB entity, we extract different name variants from DBpedia and Twitter.
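As an illustration of how such an entity profile can be matched against documents, the sketch below shows one minimal way this could be done; the function and field names (\texttt{build\_profile}, \texttt{filter\_docs}, the \texttt{body} field) are hypothetical placeholders and not the actual TREC KBA tooling.

\begin{verbatim}
# Minimal illustrative sketch (assumed names, not the official KBA code):
# build an entity profile from its name variants and keep only the
# annotated stream documents that mention at least one variant.
def build_profile(canonical, variants):
    """Lower-cased set of name variants for one KB entity."""
    return {v.lower() for v in [canonical, *variants] if v}

def filter_docs(docs, profile):
    """docs: iterable of dicts with a hypothetical 'body' text field."""
    return [d for d in docs if any(v in d["body"].lower() for v in profile)]
\end{verbatim}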
 
@@ -327,41 +323,41 @@ When we talk at an aggregate-level (both Twitter and Wikipedia entities), we obs
 
 
  
 
  
 
 
  \begin{table*}
 
\caption{Recall of canonical names and deltas between entity profiles, broken down by document source category}
 
\begin{center}\begin{tabular}{l*{9}{c}r}
 
 && \multicolumn{3}{ c| }{All entities}  & \multicolumn{3}{ c| }{Wikipedia} &\multicolumn{3}{ c| }{Twitter} \\ 
 
 & &Others&news&social & Others&news&social &  Others&news&social \\
 
\hline
 
 
 
\multirow{5}{*}{Vital} &cano                 &82.2 &65.6    &70.9          &90.9 &80.1   &76.8             &8.1   &6.3   &   30.5\\
 
			 &cano\_part - cano  	&8.2  &14.9    &12.3           &9.1  &18.6   &14.1             &0      &0       &0  \\
 
                         &all - cano         	&12.6  &19.7    &12.3          &5.5  &15.8   &8.4             &73   &35.9    &38.3  \\
 
	                 &all\_part - cano\_part&9.7    &18.7  &12.7       &0    &0.5  &5.1        &93.2 & 93 &64.4 \\
 
	                 &all\_part - all     	&5.4  &13.9     &12.7           &3.6  &3.3    &10.8              &20.3   &57.1    &26.1 \\
 
	                 \hline
 
	                 
 
\multirow{5}{*}{Relevant} &cano                 &84.2 &53.4    &55.6          &88.4 &75.6   &63.2             &10.6   &2.2    &  6\\
 
			 &cano\_part - cano  	&10.5  &15.1    &12.2          &11.1  &21.7   &14.1             &0   &0    &0  \\
 
                         &all - cano         	&11.7  &36.6    &17.3          &9.2  &19.5   &9.9             &54.5   &76.3   &66  \\
 
	                 &all\_part - cano\_part &4.2  &26.9   &15.8          &0.2    &0.7    &6.7           &72.2   &87.6 &75 \\
 
	                 &all\_part - all     	&3    &5.4     &10.7           &2.1  &2.9    &11              &18.2   &11.3    &9 \\
 
	                 
 
	                 \hline
 
\multirow{5}{*}{Total} 	&cano                &    81.1   &56.5   &58.2         &87.7 &76.3   &65.6          &9.8  &1.4    &13.5\\
 
			&cano\_part - cano   	&10.9   &15.5   &12.4         &11.9  &21.3   &14.4          &0     &0       &0\\
 
			&all - cano         	&13.8   &30.6   &16.9         &9.1  &18.9   &10.2          &63.6  &61.8    &57.5 \\
 
                        &all\_part - cano\_part	&7.2   &24.8   &15.9          &0.1    &0.7    &6.8           &82.2  &89.1    &71.3\\
 
                        &all\_part - all     	&4.3   &9.7    &11.4           &3.0  &3.1   &11.0          &18.9  &27.3    &13.8\\	                 
 
	                 
 
                                  	                 
 
	                
 
	                 
 
\hline
 
\end{tabular}
 
\end{center}
 
\label{tab:source-delta2}
 
\end{table*}
 
 
 
 \begin{table*}
 
@@ -383,10 +379,10 @@ When we talk at an aggregate-level (both Twitter and Wikipedia entities), we obs
 
&all part &98.8& 95.5& 83.7& 99.7& 98.0& 84.1& 83.3& 89.7& 81.0\\
 
	                 
 
	                 \hline
 
\multirow{4}{*}{total} 	&cano    &   81.1& 56.5& 58.2& 87.7& 76.4& 65.7& 9.8& 3.6& 13.5\\
 
&cano part &92.0& 72.0& 70.6& 99.6& 97.7& 80.1& 9.8& 3.6& 13.5\\
 
&all & 94.8& 87.1& 75.2& 96.8& 95.3& 75.8& 73.5& 65.4& 71.1\\
 
&all part & 99.2& 96.8& 86.6& 99.8& 98.4& 86.8& 92.4& 92.7& 84.9\\
 
	                 
 
\hline
 
\end{tabular}
 
@@ -399,32 +395,44 @@ The results  of the different entity profiles on the raw corpus are broken down
 
 
The 8 document source categories are regrouped into three for two reasons: 1) some groups are very similar to each other. Mainstream-news and news are similar; they exist separately in the first place only because they were collected from two different sources, by different groups, and at different times. We call them news from now on. The same is true of weblog and social, which we call social from now on. 2) Some groups have so few annotations that treating them independently does not make much sense. The majority of vital or relevant annotations are social (social and weblog, 63.13\%), and news (mainstream-news and news) makes up 30\%. Thus, news and social together make up about 93\% of all annotations; the rest make up about 7\% and are grouped as others.
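A minimal sketch of this regrouping is shown below, assuming hypothetical raw category strings; only the news and social mappings are spelled out, and everything else falls through to others.

\begin{verbatim}
# Hypothetical regrouping of raw source categories into three groups.
# The raw category strings are assumptions, not the exact corpus names.
REGROUP = {
    "mainstream_news": "news", "news": "news",
    "weblog": "social", "social": "social",
    # arxiv, classified, forum, review, memetracker, ... fall through
}

def regroup(category):
    return REGROUP.get(category, "others")
\end{verbatim}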
 
 
The breakdown by document category is presented in the multi-dimensional table in \ref{tab:source-delta}. There are three outer columns: all entities, Wikipedia and Twitter. Each outer column consists of the document categories others, news and social. The rows consist of vital, relevant and total, each of which contains the four entity profiles.
 
 
 
 
 
 
 
 \subsection{Vital vs. relevant}
 
 
 
When comparing the recall performance on vital and relevant documents, we observe that canonical names achieve better recall on vital than on relevant. This is especially true for Wikipedia entities: for example, the recall on vital is 80.1 for news and 76.8 for social, while the corresponding recall on relevant is 75.6 and 63.2 respectively. We can generally see that recall on vital is better than recall on relevant, suggesting that vital documents are more likely to mention the entities and, when they do, to use some of their common name variants.
 
 
 
 \subsection{Difference by document categories}
 
There is a consistent difference between others, news and social. For Wikipedia entities, the recall of others is higher than that of news, and the recall of news is higher than that of social. This holds across all name variants, suggesting that social documents are more difficult to retrieve than others and news. Notice that the others category stands for arxiv (scientific documents), classifieds, forums and linking. There is, however, no clear pattern for Twitter entities. Over all entities, the effect of the Wikipedia entities overrides the Twitter entities, and hence the order of others, news and social holds.
 
 
If we look at recall performance for Wikipedia entities, the order seems to be others, news and social; that is, others achieve a higher recall than news, and news a higher recall than social. Twitter entities, however, do not show such a strict pattern. Over all entities, we again see almost the same pattern of others, news and social. It seems that social documents are the hardest to retrieve, which makes sense since social posts are short and are more likely to point to other resources or to use short, informal names.
 
 
 
 
 
In most cases, news shows the greatest difference between the deltas, followed by social and then others. The substantial difference in the deltas means that news refers to entities by different names rather than by one standard name. This is counter-intuitive, since one would expect news to mention entities by some consistent name(s), thereby reducing the difference. In the rest of the deltas for Wikipedia, the order is social, news and others. For Twitter entities, others show the greatest difference, followed by social and then news, a complete reversal of the Wikipedia order. In total, we find the influence of the Wikipedia entities completely overriding the Twitter entities, making the order the same as for Wikipedia. The deltas for the relevant rank of Wikipedia entities show similar patterns with some changes: in the case of all\_part, others show a bigger delta than social. For Twitter, there is no difference between canonical names and their partials because they are one word; in the rest, the order is consistently news, social and others. In total, the order is news, social and others.
 
 
 
The biggest delta observed is for Twitter entities in all\_part - cano\_part on relevant annotations, a difference of 80.5\%, followed by a 70.1\% delta for all\_part; both are in the news category. For Wikipedia entities, the highest delta observed is 19.5\% in cano\_part - cano, followed by 17.5\% in all\_part, on relevant annotations.
 
  
 
\subsection{Document categories: others, news and social}
 
Across all entities (vital or relevant annotation ranks), news, followed by social and then others, shows the greatest differences. For Wikipedia entities the order still holds, except for the difference between the partials, where the order is others, social and news. For Twitter, there is no difference between canonical names and partials of canonical names, as before; in the rest, the order is always news, social and others.
 
The recall for Wikipedia entities in \ref{tab:name} ranged from 61.8\% (canonical names) to 77.9\% (partial names of name variants). We looked at how this recall is distributed across the three document categories. In the Wikipedia column of Table \ref{tab:source-delta}, we see, across all entity profiles, that others achieve the highest recall, followed by news; social documents achieve the lowest recall. While the recall for news ranged from 76.4\% to 98.4\%, the recall for social documents ranged from 65.7\% to 86.8\%. Others achieve higher recall than news, and news higher than social; this pattern holds across all name variants for Wikipedia entities. Notice that the others category stands for arxiv (scientific documents), classifieds, forums and linking.
 
 
For Twitter entities, however, the pattern is different. With canonical names (and their partials), social documents achieve higher recall than news, which suggests that social documents refer to Twitter entities by their canonical names (user names) more than news does. With partial names of all name variants, news achieves better results than social. The difference in recall between canonical names and partial names of all name variants shows that news does not refer to Twitter entities by their user names, but rather by their display names.
 
 
Overall, across all entity types and all entity profiles, others achieve better recall than news, and news, in turn, achieves higher recall than social documents. This suggests that social documents are the hardest to retrieve, which makes sense since social posts are short and are more likely to point to other resources or to use short, informal names.
 
 
 
 
 
 
We computed four percentage increases in recall (deltas) between the different entity profiles (see \ref{tab:source-delta2}). The first delta is the recall difference between partial names of canonical names and canonical names. The second is the delta between all name variants and canonical names. The third is the difference between partial names of all name variants and partial names of canonical names, and the fourth between partial names of all name variants and all name variants. We believe these four deltas have a clear meaning: the delta between all name variants and canonical names shows the percentage of documents that the additional name variants retrieve but the canonical name does not. Similarly, the delta between partial names of all name variants and partial names of canonical names shows the percentage of document-entity pairs that can be gained by the partial names of the name variants.
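Stated compactly, each of these deltas is simply a difference of recall percentages between two entity profiles. Writing $R(p)$ for the recall (in \%) obtained with profile $p$ (notation introduced here only for exposition; it does not appear in the tables), the four reported deltas are
\[
\Delta(p_1, p_2) = R(p_1) - R(p_2)
\]
for the profile pairs (cano\_part, cano), (all, cano), (all\_part, cano\_part) and (all\_part, all).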
 
 
In most of the deltas, news, followed by social and then others, shows the greatest difference. This suggests that news refers to entities by several different names rather than by one standard name. This is counter-intuitive, since one would expect news to mention entities by some consistent name(s), thereby reducing the difference. For Wikipedia entities, the deltas between canonical partials and canonicals, and between all name variants and canonicals, are high, suggesting that partial names and the other name variants bring in new documents that cannot be retrieved by the canonical names alone. The remaining two deltas are very small, suggesting that partial names of all name variants do not bring in new relevant documents. For Twitter entities, the name variants do bring in new documents.
 
 
 
  
 
 
 
\subsection{Wikipedia versus Twitter}
 
The tables in \ref{tab:name} and \ref{tab:source-delta} show that recall for Wikipedia entities is higher than for Twitter entities. This indicates that Wikipedia entities are easier to match in documents than Twitter entities, which can be due to two reasons: 1) Wikipedia entities are relatively well described compared to Twitter entities; the fact that we can retrieve different name variants from DBpedia is itself a measure of this richer description. By contrast, we have only two names for Twitter entities: their user names and their display names, which we collect from their Twitter pages. 2) Twitter entities are more obscure, which is presumably why they are not in Wikipedia in the first place. Another point is that Twitter entities are mentioned by their display names more than by their user names. We also observed that social documents mention Twitter entities by their user names more often than news does, suggesting a distinction between the naming conventions of news and social documents.
 
 
 