Changeset - c2b3d260b547
[Not reviewed]
3 4 0
Gebrekirstos Gebremeskel - 10 years ago 2016-02-10 03:06:38
destinycome@gmail.com
update
7 files changed with 194 insertions and 122 deletions:
0 comments (0 inline, 0 general)
img/click100.pdf
 
deleted file
 
binary diff not shown
img/tage_click100.pdf
 
binary diff not shown
img/tage_view100.pdf
 
binary diff not shown
img/tage_view_click.pdf
 
binary diff not shown
img/view100.pdf
 
deleted file
 
binary diff not shown
img/view_click.pdf
 
deleted file
 
binary diff not shown
main.tex
 
@@ -38,7 +38,7 @@
 

	
 
\begin{document}
 

	
 
\title{Items that Trigger Clicks upon Recommendation}
 
\title{Factors That Trigger Clicks on Recommendations}
 

	
 
% You need the command \numberofauthors to handle the 'placement
 
% and alignment' of the authors beneath the title.
 
@@ -150,12 +150,12 @@
 
\maketitle
 

	
 
\begin{abstract}
 
In a setting where  recommendations are provided to users when they are viewing a particular item, what factors contribute to the user clicking on some items and not on others?  We examine what triggers users to click on those recommended items in relation to the items the user is currently viewing. More specifically, we examine the items from which clicks happen and what type of items get clicked. Are some items more likely to cause the user to click  on recommendations and are some recommendations more likely to be clicked?  We attempt to explain the factors that trigger clicks on recommendations from different angles. 
 
In a setting where recommendations are provided to users while they are viewing particular items, what are the factors that contribute to the user clicking on some items and not on others? We examine what triggers users to click on recommended items in relation to the items the user is currently viewing. More specifically, we examine the items from which clicks happen and what type of items get clicked. Are some items more likely to cause the user to click on recommendations, and are some recommendations more likely to be clicked? In short, are clicks on recommendations a function of the base item, or a function of the recommended items? We attempt to explain the factors that trigger clicks on recommendations from different angles. 
 

	
 
\end{abstract}
 

	
 
\section{Introduction}
 
In a recommendation setting where recommendations are provided to a user on the item that the user is reading,   one might wonder whether some items trigger  clicks  more than others, and if they do, what could  possibly explain that?  In a study on the similar Plista dataset \cite{said2013month}, it was found that traditional news portals providing news and opinions on politics and current events are more likely to generate clicks on recommendation than special interest portals such as sports,  gardening, and automechanic forums. In this study, we focus on one traditional news portal, tagespiegel and examine it to find out factors that trigger recommendations on clicks or lack thereof. %wether some categories are more likely to recieve clicks on recommendations. We also even go further and look at what type of items are more likely to trigger more clicks than others.
 
In a recommendation setting where recommendations are provided to users on the items that they are currently viewing, one might wonder whether some items are more likely to trigger clicks than others, and if they are, what could possibly explain that? From here on, we refer to the item that the user is currently viewing as the base item. In a study on a similar dataset \cite{said2013month}, it was found that traditional news portals providing news and opinions on politics and current events are more likely to generate clicks on recommendations than special interest portals such as sports, gardening, and auto mechanic forums. In this study, we focus on one traditional news portal, Tagesspiegel, and examine it to find out the factors that trigger clicks on recommendations, or the lack thereof. %wether some categories are more likely to recieve clicks on recommendations. We also even go further and look at what type of items are more likely to trigger more clicks than others.
 

	
 

	
 
In this study we examine the factors that might trigger clicks on recommendations from several angles. One angle is the categories of the items the user is currently reading. More specifically, are some categories of items more likely to cause the user to click on recommendations? How are the categories of the base item and the categories of the recommended items related? Are some categories more likely to trigger clicks on certain categories? For example, is the politics category more likely to trigger clicks on the politics category, or on another category, such as local news? 
 
@@ -172,10 +172,10 @@ The insights from examining which categories and items generate trigger clicks i
 

	
 
\section{Dataset, Results and Analysis}
 

	
 
We ussed Plista dataset of of one month from Plsia. Plista is a company that provides a recommendation platform where recommendation providers are inked with online publishers in need of recommendation sertvice. From a datset collected over a month,  we extracted the number of times items have been shown and the number of times that items have been clicked from them. It is not easy to get the exact number of times recommendations have been shown on an item. We assume that the number of times an item has been viewed as the number of times recommendations were shown. Although each time an item is viewed, more than one item (usually 5 items) are shown to the user as recommendations, we just count the number of clicks that have happened from those items regardless of which items are clicked. 
 
We used the Plista\footnote{http://orp.plista.com/documentation} dataset collected over two months. The recommendation algorithms that we used are recency and recencyRandom. Plista is a company that provides a recommendation platform where recommendation providers are linked with online publishers in need of a recommendation service. From this dataset, we extracted the number of times items have been viewed and the number of times that the items recommended on them have been clicked. It is not easy to get the exact number of times recommendations have been shown on a certain base item. We take the number of times a base item has been viewed as the number of times recommendations were shown. We consider this a fair assumption, as recommendations were requested each time an item was viewed by a user. % Although each time an item is viewed, more than one item (usually 5 items) are shown to the user as recommendations, we just count the number of clicks that have happened from those items regardless of which items are clicked. 
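As a minimal sketch of this extraction (assuming hypothetical event lists of base item ids, one entry per view or per recommendation click; the real Plista log format differs), the per-item counts and CTR could be computed as:

```python
from collections import Counter

def item_ctr(view_events, click_events):
    """Aggregate views and recommendation clicks per base item.

    Per the assumption in the text, each view of a base item is
    treated as one recommendation impression.
    """
    views = Counter(view_events)    # base item id, one entry per view
    clicks = Counter(click_events)  # base item id, one entry per click
    return {
        item: {
            "views": v,
            "clicks": clicks.get(item, 0),
            "ctr": 100.0 * clicks.get(item, 0) / v,
        }
        for item, v in views.items()
    }

# hypothetical toy events: item "a" viewed 3 times, one click from it
stats = item_ctr(["a", "a", "a", "b"], ["a"])
```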
 

	
 

	
 
Figure \ref{fig:view_click} shows the plot of views and clicks. Because of the big difference between views and clicks, the view and click plots appear to be the same, except at the beginning. However, when we focus on the first 100 items that have been viewed the most, we obtaine the view plot in Figure \ref{fig:view100} and the corresponding clicks that the views triggered produce the click plot in Figure \ref{fig:click100}. The plots were generated by first sorting the scores accroding to views. The rough click plot shows that some items are more likely to trigger clicks on recommendations than others.  
 
Figure \ref{fig:view_click} shows the plot of views and clicks. Because of the big difference between views and clicks, the view and click plots appear to be the same, except at the beginning. However, when we focus on the 100 most viewed items, we obtain the view plot in Figure \ref{fig:view100}, and the corresponding clicks that these views triggered produce the click plot in Figure \ref{fig:click100}. The plots were generated by first sorting the scores, in decreasing order, according to views. The ragged click plot shows that some items are more likely to trigger clicks on recommendations than others.  
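The ranking behind these plots can be sketched as follows (assuming a hypothetical dict of per-item view and click counts; the actual plotting is omitted):

```python
def top_n_views_with_clicks(stats, n=100):
    """Rank items by views in decreasing order and keep the top n.

    The click curve keeps the view ordering, which is what makes its
    raggedness informative: items with similar views differ in clicks.
    """
    ranked = sorted(stats.items(), key=lambda kv: kv[1]["views"], reverse=True)[:n]
    view_curve = [s["views"] for _, s in ranked]
    click_curve = [s["clicks"] for _, s in ranked]
    return view_curve, click_curve

# hypothetical per-item statistics
demo = {
    "x": {"views": 500, "clicks": 5},
    "y": {"views": 900, "clicks": 2},
    "z": {"views": 100, "clicks": 9},
}
views, clicks = top_n_views_with_clicks(demo, n=2)
```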
 
 \begin{figure} [t]
 
\centering
 
\includegraphics[scale=0.5]{img/tage_view_click.pdf}
 
@@ -203,43 +203,72 @@ Figure \ref{fig:view_click} shows the plot of views and clicks. Because of the b
 
\caption{Plot of the clicks triggered from the 100 most viewed items}
 
\end{figure}
 

	
 

	
 

	
 

	
 

	
 
To begin to explain the difference between the view plot and the click plot, we aggregated both views and clicks by the 12 categories under which the base items are placed on the Tagesspiegel website. The views are thus the number of times the base items of a category have been viewed, and the clicks are the number of times that recommendations have been clicked from those base items. Table \ref{base} presents the views, clicks, and CTR scores for the categories. The table is sorted by CTR.
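The aggregation could be sketched as follows (assuming hypothetical per-item statistics and an item-to-category mapping, both illustrative):

```python
def category_ctr(item_stats, item_category):
    """Sum item-level views and clicks per category and compute CTRs."""
    agg = {}
    for item, s in item_stats.items():
        cat = item_category[item]
        a = agg.setdefault(cat, {"views": 0, "clicks": 0})
        a["views"] += s["views"]
        a["clicks"] += s["clicks"]
    for a in agg.values():
        a["ctr"] = 100.0 * a["clicks"] / a["views"] if a["views"] else 0.0
    # sorted by CTR in decreasing order, as in the tables
    return sorted(agg.items(), key=lambda kv: kv[1]["ctr"], reverse=True)

# hypothetical items in two categories
table = category_ctr(
    {"i1": {"views": 100, "clicks": 3},
     "i2": {"views": 100, "clicks": 1},
     "i3": {"views": 200, "clicks": 1}},
    {"i1": "politik", "i2": "politik", "i3": "kultur"},
)
```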
 

	
 
We observe that political items trigger clicks more than any other category. After political items, the opinion and the local Berlin categories trigger the most clicks on recommendations. Special categories such as culture and auto trigger the fewest clicks on recommendations. This is consistent with previous findings that special interest portals received fewer clicks than traditional and mainstream news and opinion portals. 
 

	
 

	
 
\begin{table}
 
\caption{A table showing the views, clicks, and ctr of the 12 categories of Tagesspiegel that we considred. }
 
\caption{A table showing the views, clicks, and CTR of the 12 categories of Tagesspiegel, on the basis of the base items. A click for a base item happens when an item recommended on it is clicked. }
 
  \begin{tabular}{|l|l|l|l|}
 
\hline
 
           category  &  Views & Clicks & CTR (\%)\\
 
           \hline
 
      auto&2875&3246&14&0.43\\
 
berlin&2875&88473&2031&2.3\\
 
kultur&2875&12218&294&2.41\\
 
medien&2875&19565&530&2.71\\
 
meinung&2875&12290&144&1.17\\
 
politik&2875&52658&1906&3.62\\
 
sport&2875&22489&479&2.13\\
 
weltspiegel&2875&16395&435&2.65\\
 
wirtschaft&2875&17446&476&2.73\\
 
wissen&2875&5861&71&1.21\\
 

	
 

	
 
politik&73197&178&0.24\\
 
medien&22426&50&0.22\\
 
weltspiegel&37413&77&0.21\\
 
wirtschaft&30045&61&0.2\\
 
sport&29812&58&0.19\\
 
berlin&123595&129&0.1\\
 
meinung&4611&3&0.07\\
 
kultur&21840&11&0.05\\
 
wissen&13500&4&0.03\\
 

	
 

	
 

	
 
   \hline
 
  \end{tabular}
 
  \label{ctr}
 
  \label{base}
 
\end{table}
 

	
 

	
 

	
 
\begin{table}
 
\caption{A table showing the views, clicks, and CTR of the 12 categories of Tagesspiegel, on the basis of the recommended items.}
 
  \begin{tabular}{|l|l|l|l|}
 
\hline
 
           category  &  Views & Clicks & CTR (\%)\\
 
           \hline
 

	
 
medien&22147&68&0.31\\
 
politik&68230&170&0.25\\
 
berlin&123559&188&0.15\\
 
weltspiegel&37535&58&0.15\\
 
sport&28160&36&0.13\\
 
meinung&4925&5&0.1\\
 
kultur&23278&21&0.09\\
 
wissen&15650&10&0.06\\
 
wirtschaft&32955&15&0.05\\
 

	
 
   \hline
 
  \end{tabular}
 
  \label{reco}
 
\end{table}
 

	
 
To start to explain the observation that some items trigger more clicks that others, we aggeragated the items (both views and clicks) by 12 categories. These are the main categories that are shown in the tagespiegel website. 
 
Table \ref{ctr} presents the views. clicks and CTR scores for 12 ctegories of items we considred. The table is sorted by CTR. We observe that political items trigger clicks more than any other category. After political items, the categories of opinion and the the Berlin local categories trigger more clicks on recommendations. Special categories such as culture and and automechanic trigger the least clicks on recommendations. This is consistent with previous findings that reported special interest portals enjoyed less clicks than tradiional and mainstream news and opinion portals. 
 

	
 
\subsection{Clicked and Rejected Items}
 
We plan to extract a sample of base items with their recommended and clicked items and separate the recommendations into clicked and rejected ones. We then compare the content of the clicked items with the content of the base item. We do the same with the rejected items, and see whether there are any similarities or differences between these two groups. The separation of clicked and rejected items, and their comparison to the base item, is similar to the separation of recommended movies into viewed and ignored in \cite{nguyen2014exploring}. 
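One simple way to carry out this comparison could be sketched as follows (Jaccard similarity over token sets is an illustrative choice of content similarity, not a method fixed by the text):

```python
def clicked_vs_rejected_similarity(base_tokens, recommendations):
    """recommendations: list of (tokens, was_clicked) pairs.

    Returns the mean Jaccard similarity to the base item's tokens for
    the clicked and for the rejected recommendations, separately.
    """
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    clicked = [jaccard(base_tokens, t) for t, c in recommendations if c]
    rejected = [jaccard(base_tokens, t) for t, c in recommendations if not c]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean(clicked), mean(rejected)

# hypothetical tokenized articles
clicked_sim, rejected_sim = clicked_vs_rejected_similarity(
    ["wahl", "berlin", "senat"],
    [(["wahl", "berlin", "umfrage"], True), (["fussball", "tore"], False)],
)
```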
 

	
 
On the same dataset, there has been a study on the transition probabilities of users between categories \cite{esiyok2014users}. The finding was that there is a relationship between what a user is currently reading and what the user reads next. They report that the local and sports categories enjoyed the most loyal readers; that is, a user reading local items will more likely keep reading items of the same category. That study was on general reading behavior. In this study, 1) we repeat the same study on a dataset from a different time, and 2) we analyze the results in terms of content similarity with the base items. 
 

	
 

	
 
Question for myself: Is it maybe possible to compute the category CTRs? Like a heatmap of the CTRs where the recommendations are subdivided into their categories and a CTR is computed? I think so. We can also go further and look at the content similarities. Further, we can look at what type of items trigger more clicks by selecting some items that generated more clicks and analyzing them. 
 

	
 
We have looked into the categories of the base items. We can also look into the categories of the recommendations to see which categories are more likely to be clicked in general. These results are shown in Table \ref{reco}. 
 

	
 
Now, we would like to see the relationship between the categories of base items and the categories of clicked items. More specifically, is the politics category more likely to trigger clicks on the politics category of recommendations? We plotted the transition CTRs, and the results are presented in Table \ref{heatmap}.
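The transition CTRs could be computed as sketched below (assuming hypothetical per-impression records of base category, recommendation category, and click outcome):

```python
from collections import defaultdict

def transition_ctr(impressions):
    """impressions: iterable of (base_cat, reco_cat, clicked) tuples,
    one per shown recommendation. Returns CTR (%) per category pair."""
    shown = defaultdict(int)
    clicked = defaultdict(int)
    for base_cat, reco_cat, was_clicked in impressions:
        shown[(base_cat, reco_cat)] += 1
        clicked[(base_cat, reco_cat)] += int(was_clicked)
    return {pair: 100.0 * clicked[pair] / n for pair, n in shown.items()}

# hypothetical impression log
heat = transition_ctr([
    ("politik", "politik", True),
    ("politik", "politik", False),
    ("politik", "sport", False),
])
```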
 

	
 

	
 
\begin{table*}
 
@@ -247,7 +276,10 @@ Question for myself: Is it maybe possible to compute the category CTR's? Like a
 
  \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|}
 
\hline
 

	
 
berlin&politik&wirtschaft&sport&kultur&weltspiegel&meinung&medien&wissen&auto\\
 

	
 

	
 
&berlin&politik&wirtschaft&sport&kultur&weltspiegel&meinung&medien&wissen&auto\\
 
\hline
 
berlin&2.62&2.08&1.14&2.08&1.69&2.27&1.61&4.32&1.22&0.4\\
 
politik&3.89&3.16&0.94&3.2&3.75&4.59&3.04&5.88&2.99&0.21\\
 
wirtschaft&2.53&2.89&0.9&1.74&1.52&1.95&3.48&8.43&2.83&0\\
 
@@ -266,133 +298,173 @@ auto&0.22&0.92&0&0.52&0&0&0&1.22&2&0\\
 
\end{table*}
 

	
 

	
 
The heatmap in Table \ref{heatmap} shows the CTR heatmap of recommendation clicks. For example, if we look at the politik row, we see that the CTR from politics to politics is higher than from politics to any other category. We also observe that the CTR from the local category Berlin to politics is higher than from Berlin to any other category, including to Berlin itself. A somewhat surprising result is the high CTR from media to politics. 
 

	
 
The way we extracted our recommendations and clicks is somewhat uncertain. In the Plista setting, when clicks are reported, it is not known whose recommendations were clicked. So while we know our own recommendations, we do not know for sure how many of the click notifications that we receive belong to our recommendations. To extract our clicks, we introduced a time frame of 5 minutes: if a click notification arrives within this range of time after our recommendation, we consider the click to be on our recommendations. We consider the click counts to be somewhat inflated, since users might not stay on an item for 5 minutes. While the actual CTR might be a bit inflated as a result of the inflated number of clicks, we consider the relative scores indicative of the true differences.
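The 5-minute attribution rule can be sketched as follows (keying recommendations by user and item is an assumption for illustration; the real notification format differs):

```python
def attribute_clicks(our_recs, click_notifications, window=300):
    """our_recs: {(user, item): recommendation_timestamp}.
    click_notifications: list of (user, item, timestamp).

    A click is credited to our recommender only if it arrives within
    `window` seconds (5 minutes by default) of our recommendation of
    that item to that user."""
    ours = []
    for user, item, t in click_notifications:
        t0 = our_recs.get((user, item))
        if t0 is not None and 0 <= t - t0 <= window:
            ours.append((user, item, t))
    return ours

# hypothetical log: one click inside the window, one outside,
# one for a recommendation that was not ours
credited = attribute_clicks(
    {("u1", "i9"): 1000.0},
    [("u1", "i9", 1120.0), ("u1", "i9", 1400.0), ("u2", "i9", 1050.0)],
)
```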
 
Here, we also have to include the recommendation CTRs to show that a proportional number of recommendations went to each group. 
 

	
 
To find the relationship in base item--recommendation pairs that resulted in high CTR scores, we selected some item-recommendation pairs. To avoid selecting pairs with very few views and clicks, which is usually the type of combination that results in high CTR scores, we first sort our data according to views, and separately according to clicks. Using cut-off values, we repeat the intersection until we find the items that have both the highest views and the highest clicks. Using this approach we selected 12 item-recommendation pairs, and out of them we selected the 5 pairs that have the highest scores. These pairs are presented in Table \ref{}
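The iterative cut-off intersection could be sketched as follows (tightening the rank cutoff one step at a time is one possible reading of the procedure described in the text):

```python
def high_view_high_click(pairs, target=12):
    """pairs: {pair_id: (views, clicks)}.

    Tighten a rank cutoff until the intersection of the most-viewed
    and most-clicked pairs contains at most `target` items."""
    by_views = sorted(pairs, key=lambda p: pairs[p][0], reverse=True)
    by_clicks = sorted(pairs, key=lambda p: pairs[p][1], reverse=True)
    cutoff = len(pairs)
    while cutoff > 1:
        chosen = set(by_views[:cutoff]) & set(by_clicks[:cutoff])
        if len(chosen) <= target:
            return chosen
        cutoff -= 1
    return set(by_views[:1]) & set(by_clicks[:1])

# hypothetical pairs: p3 has many views but few clicks, p4 the reverse;
# p1 and p2 rank high on both counts
demo = {"p1": (400, 50), "p2": (350, 40), "p3": (900, 1), "p4": (10, 60)}
selected = high_view_high_click(demo, target=2)
```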
 

	
 

	
 
\begin{table}
 
\caption{Items and their categories that triggered the highest clicks. }
 
  \begin{tabular}{|l|l|l|l|l|}
 
\hline
           item & category & Views & Clicks & CTR (\%)\\
           \hline
 

	
 
 138084944&berlin&4528&15&0.33\\
 
138260114&berlin&364&67&18.41\\
 
138276052&berlin&352&128&36.36\\
 
138288428&berlin&295&21&7.12\\
 
138331188&politik&516&25&4.84\\
 
138353486&politik&314&15&4.78\\
 
138657855&berlin&295&27&9.15\\
 
139760872&politik&387&30&7.75\\
 
140069310&berlin&307&55&17.92\\
 
140069310&politik&552&37&6.7\\
 
140290935&berlin&306&112&36.6\\
 
140451940&berlin&435&35&8.05\\
 

	
 

	
 

	
 
 \hline
 
  \end{tabular}
 
  \label{top-base}
 
\end{table}
 

	
 

	
 

	
 

	
 

	
 
\begin{table}
 
\caption{Items and their categories that triggered the lowest clicks. }
 
   \begin{tabular}{|l|l|l|l|l|}
 
\hline
           item & category & Views & Clicks & CTR (\%)\\
           \hline
 

	
 
110758362&politik&30&0&0\\
 
113192171&politik&18&0&0\\
 
115276998&politik&70&0&0\\
 
118343158&politik&15&0&0\\
 
121749581&politik&34&0&0\\
 
45287388&politik&16&0&0\\
 
45322502&politik&16&0&0\\
 
62615560&politik&22&0&0\\
 
63451502&politik&17&0&0\\
 
68227587&politik&21&0&0\\
 
88961035&politik&18&0&0\\
 
95418946&politik&18&0&0\\
 
96260589&politik&44&0&0\\
 

	
 
 

	
 
 

	
 
 \hline
 
  \end{tabular}
 
  \label{bot-base}
 
\end{table}
 
 

	
 

	
 

	
 
\subsection{Item-level Base and Recommendation CTRs}

We look at two types of item-level CTRs: the base item CTRs and the recommendation CTRs. The base item CTR measures how likely a base item is to trigger clicks on recommendations. We assume that clicking on recommendations is partly a function of the item the user is reading; this is corroborated by the category-level CTRs above, in the sense that some categories do not generate clicks even if their items are from clickable categories. The recommendation CTR measures how likely an item is to receive a click when recommended to a user, regardless of the category of the base item. But should we not be concerned about the base item? 

\begin{table}
\caption{Item-recommendation pairs and their categories that triggered the highest clicks. }
  \begin{tabular}{|l|l|l|l|l|l|l|}
\hline
 

	
 
base & reco & Views & Clicks & base category & reco category & CTR (\%)\\
\hline
 
138614685&138507870&70&23&berlin&berlin&32.86\\
 
138614685&138657855&69&22&politik&berlin&31.88\\
 
138622180&138657855&86&19&politik&berlin&22.09\\
 
138657855&138507870&84&6&berlin&politik&7.14\\
 
139322452&139370303&62&53&politik&weltspiegel&85.48\\
 
139385769&139370303&89&14&politik&berlin&15.73\\
 
139881545&139883694&80&28&politik&medien&35\\
 
140032342&140069310&91&6&politik&berlin&6.59\\
 
140141990&140069310&91&7&politik&weltspiegel&7.69\\
 
140290935&140451940&144&43&medien&politik&29.86\\
 
140410389&140451940&88&8&medien&berlin&9.09\\
 
140454828&140451940&69&9&medien&kultur&13.04\\
 
140462049&140451940&126&11&medien&medien&8.73\\
 

	
 
 

	
 
 \hline
 
  \end{tabular}
 
  \label{top-base-reco}
 
\end{table}
 
 

	
 

	
 
 

	
 

	
 

	
 
\begin{table}
\caption{Item-recommendation pairs and their categories that triggered the lowest clicks. }
   \begin{tabular}{|l|l|l|l|l|l|l|}
\hline
base & reco & Views & Clicks & base category & reco category & CTR (\%)\\
\hline
107201359&138507870&9&0&berlin&berlin&0\\
45276650&140069310&5&0&politik&berlin&0\\
62615560&139622331&9&0&politik&kultur&0\\
62615560&139648400&8&0&kultur&kultur&0\\
62615560&139667911&5&0&berlin&kultur&0\\
63103846&139648400&8&0&kultur&kultur&0\\
63104505&139648400&5&0&kultur&kultur&0\\
65982081&140451940&10&0&medien&politik&0\\
89607701&140069310&6&0&politik&sport&0\\
96260589&138288428&6&0&berlin&kultur&0\\
 \hline
  \end{tabular}
  \label{bot-base_reco}
\end{table}

 \begin{figure} [t]
\centering
\includegraphics[scale=0.5]{img/base_reco_ctr_sorted_by_base.pdf}
\caption{Plots of CTRs on base items and recommended items, generated by first sorting results according to base CTRs.}
\label{fig:ctr_sorted_by_base}
\end{figure}

 \begin{figure} [t]
\centering
\includegraphics[scale=0.5]{img/base_reco_ctr_sorted_by_reco.pdf}
\caption{Plots of CTRs on base items and recommended items, generated by first sorting results according to recommendation CTRs.}
\label{fig:ctr_sorted_by_reco}
\end{figure}
 

	
 

	
 

	
 
 

	
 

	
 

	
 
\section{Discussion and Conclusion}
 

	
 
An idea, maybe show the variance of the categories in terms of their CTR? 
 
An idea: maybe show the variance of the categories in terms of their CTR? Another thing we can do is to explore the high-achieving base items and the high-achieving recommended items and see whether they are to some extent the same items. We can do the same with the low-achieving base items and recommended items. If this holds, then it clearly indicates that a big factor is not the current context but the nature of the items themselves, both on the base side and on the recommendation side. This already shows in the category groups, but we can also zoom in on the politics items and see whether it holds there too. Another thing we can consider is to find base items and recommended items with a large variance and study them, with a view to finding the causes in terms of categories and also in terms of content. A large variance for a recommendation item tells us that when it is recommended on some base items it makes sense, while on others it does not. This can also be studied within a particular group's items, for example politics. 
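The variance idea could be sketched as follows (assuming hypothetical item-level CTRs and an item-to-category mapping, both illustrative):

```python
from statistics import pvariance

def ctr_variance_by_category(item_ctrs, item_category):
    """item_ctrs: {item: ctr}. Population variance of item-level CTRs
    within each category; a high variance suggests the individual item,
    rather than its category, drives the clicks."""
    groups = {}
    for item, ctr in item_ctrs.items():
        groups.setdefault(item_category[item], []).append(ctr)
    return {cat: pvariance(ctrs) for cat, ctrs in groups.items()}

# hypothetical item CTRs in two categories
var = ctr_variance_by_category(
    {"a": 2.0, "b": 4.0, "c": 1.0},
    {"a": "politik", "b": "politik", "c": "kultur"},
)
```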
 

	
 
\bibliographystyle{abbrv}
 
\bibliography{ref} 