diff --git a/main.tex b/main.tex
index d381f4ea4d60d311e422f709277ad558326be755..d5f1ab670cfb180ffd2711ed0bd7b5f6bc1e7c9c 100644
--- a/main.tex
+++ b/main.tex
@@ -14,7 +14,7 @@
 % It is an example which *does* use the .bib file (from which the .bbl file
 % is produced).
 % REMEMBER HOWEVER: After having produced the .bbl file,
-% and prior to final submission,
+% % and prior to final submission,
 % you need to 'insert' your .bbl file into your source .tex file so as to provide
 % ONE 'self-contained' source file.
 %
@@ -150,19 +150,25 @@
 \maketitle
 \begin{abstract}
-In a setting where recommendations are provided to users when they are viewing particular items, what are the factors that contribute to clicks on recommendations? We examine factors that trigger clicks on recommended items in relation to the items the user viewing and to which the recommendations are provided. More specifically, we examine the items from which clicks happen and what type of items get clicked. Are some items more likely to cause the user to click on recommendations, and are some other recommendations more likely to be clicked? In short, are clicks on recommendations a function of the base item, or are they a function of the recommended items? We attempt to explain the factors that trigger clicks on recommendations from different angles.
+In a setting where recommendations are provided to users when they are viewing particular items (base items), what are the factors that contribute to clicks on recommendations? We examine whether a click on a recommendation is a function of the base item, the recommended item, or of both. More specifically, we examine the items from which clicks happen and what type of items get clicked. Are some base items more likely to cause the user to click on recommendations, and are some recommendations more likely to be clicked? We attempt to explain the factors that trigger clicks on recommendations at the level of categories of items and of transitions between the categories.
 \end{abstract}
 \section{Introduction}
- In a study that investigated the relationship between the number of times items are viewed and the the number of times clicks happened from those items in several online publishers \cite{said2013month}, it was reported that traditional news portals providing news and opinions on politics and current events are more likely to generate clicks on recommendation than special interest portals such as sports, gardening, and auto mechanic forums. Another study \cite{esiyok2014users}, using a similar dataset, investigated the impressions and clicks at level of the category of items of one of the traditional news portals - Tagesspiegel (a popular German national news portal). The finding was that there is a relationship between what the user is currently reading and what they read next. They reported that the category local and sports enjoyed the most loyal readers, that is that a user reading on local items will more likely keep reading items of the same category. % recommendations that were made to the different websites raising a queation as to whether the clicks on recommendations were because of nature of the online publishers or the recommendation items.this study, we focus on one traditional news portal, tagespiegel and examine it to find out factors that trigger recommendations on clicks or lack thereof. %wether some categories are more likely to recieve clicks on recommendations. We also even go further and look at what type of items are more likely to trigger more clicks than others.
+ In a study that investigated the relationship between the number of times items are viewed and the number of times clicks happened from those items in several online publishers \cite{said2013month}, it was reported that traditional news portals providing news and opinions on politics and current events are more likely to generate clicks on recommendations than special interest portals such as sports, gardening, and auto mechanic forums. Another study \cite{esiyok2014users}, using a similar dataset, investigated the impressions and clicks at the level of the category of items of one of the traditional news portals, Tagesspiegel (a popular German national news portal). The finding was that there is a relationship between what the user is currently reading and what they read next. They reported that the categories local and sports enjoyed the most loyal readers, that is, that a user reading local items is more likely to keep reading items of the same category. % recommendations that were made to the different websites raising a queation as to whether the clicks on recommendations were because of nature of the online publishers or the recommendation items.this study, we focus on one traditional news portal, tagespiegel and examine it to find out factors that trigger recommendations on clicks or lack thereof. %wether some categories are more likely to recieve clicks on recommendations. We also even go further and look at what type of items are more likely to trigger more clicks than others.
- While both these studies are very related and relevant, they did not investigate the relationship between the base items, the recommended items and the resulting clicks or lack thereof. In a recommendation setting where recommendation items are provided to users on the items that the user is currently viewing (henceforth referred to as base items), what are the factors that trigger user to click on recommendations? Are the clicks a function of the base items or of the recommended items? Do some base items and some recommended items cause users to click on recommendations more than others, and if they do what explains this difference?
+ While both studies are very related and relevant to our interest in factors that contribute to clicks, they did not investigate the relationship between the base items, the recommended items and the resulting clicks or lack thereof. In a recommendation setting where recommendation items are provided to users on the items that the user is currently viewing (henceforth referred to as base items), what are the factors that trigger users to click on recommendations? Are the clicks a function of the base items or of the recommended items? Do some base items and some recommended items cause users to click on recommendations more than others, and if they do, what explains this difference?
-In this study we examine the factors that might trigger clicks on recommendations from several angles. One angle is from the categories of the base items the user is currently reading. More specifically, are some categories of items the user is currently on more likely to cause the user to click on recommendations? Similarly, we examine the categories of the recommended items and investigate whether some are more likely to trigger clicks on themselves upon recommendation. We also investigate how the categories of the base items and the categories of the recommendation items are related in the way they trigger clicks. %Are some categories more likely to trigger clicks on some categories?
For example, is political category more likely to trigger clicks on political categories, or another category such as local category?
+In this study we examine these factors using the categories of the base items and the recommended items.
+%that might trigger clicks on recommendations from several angles. One angle is from the categories of the base items the user is currently reading. More specifically,
+Are some categories more likely to cause the user to click on recommendations?
+%Similarly, we examine the categories of the recommended items and investigate whether some are more likely to trigger clicks on themselves upon recommendation.
+We also investigate how the categories of the base items and the categories of the recommendation items are related in the way they trigger clicks. %Are some categories more likely to trigger clicks on some categories? For example, is political category more likely to trigger clicks on political categories, or another category such as local category?
+%We also go down to the item level and look at the relationships of the base items and the recommendation items with respect to how likely they are to trigger clicks. More specifically,
+We examine whether those base items that are more likely to trigger clicks on recommendations are the same as the recommended items that are more likely to receive clicks.
-We also go down to the item level and look at the relationships of the base items and the recommendation items with respect to how likely they are to trigger clicks. More specifically, we examine whether those base items that are more likely to trigger clicks on recommendations are the same with those recommendation items that are more likely to receive clicks. The study contributes to the understanding of factors that influence recommendation systems. The insights from investigating from different angles help 1) to understand what aspects of the base item the user is viewing makes user click on a recommendations 2) to understand what aspects of the recommended items make the user click on those recommendations 3) to target those items that generate clicks and to ignore those that do not trigger recommendations.
+The study contributes to the understanding of factors that influence recommendation systems. The insights from investigating from different angles help 1) to understand what aspects of the base item cause the user to click on a recommendation, 2) to understand what aspects of the recommended items make the user click on those recommendations, and 3) to target those items that generate clicks and to ignore those that do not.
@@ -180,20 +186,22 @@ We also go down to the item level and look at the relationships of the base item
 \section{Dataset}
-We used Plista\footnote{http://orp.plista.com/documentation} dataset collected from user-item interaction with the tagesspiegel.com news portal, German online news and opinions portal, over more than two months, from 15-04-2015 to 04-07-2015. Items in tagesspiegel are manually placed under $\mathit{10}$ categories, $\mathit{9}$ of which we investigate in this study. The dataset is aggregated from the logs of the recommender systems that we used during our participation in the CLEF NewsREEL 2015 challenge \cite{kille2015overview}. The challenge offered participants the opportunity to plug their recommendation algorithms to Plista\footnote{http://orp.plista.com/documentation} and provide recommendations to real users visiting online publishers.
Plista is a framework that connects recommendation providers such as ourselves and recommendation service requester such as online news portals. Participation in the challenge enabled us to collect information of user-item interaction such as impression (a user viewing an item), update (appearance of new item, or change of content of existing item) and click (a user clicking on recommendation item).
+We used a dataset of user-item interactions on Tagesspiegel, a real online German news portal. The dataset was collected from 15-04-2015 to 04-07-2015. Items in Tagesspiegel are manually placed by the journalists under categories. For our study, we investigated $\mathit{9}$ categories: \textbf{politics (politik)}, \textbf{business (wirtschaft)}, \textbf{sports (sport)}, \textbf{culture (kultur)}, \textbf{world (weltspiegel)}, \textbf{opinion (meinung)}, \textbf{media (medien)}, \textbf{education (wissen)} and the local category \textbf{berlin}.
-The three recommendation algorithms that we used are two instances of \textbf{Recency}, and \textbf{RecencyRandom}. The Recency algorithm keeps the most recently viewed or updated items and recommends the top most $mathit{k}$ recent items every time a recommendation request is made. The RecencyRandom recommender keeps the most recent $\mathit{100}$ items at any time and recommends, randomly, the requested number of items every time a recommendation request is made.
+The dataset is aggregated from the logs of the recommender systems that we used during our participation in the CLEF NewsREEL 2015 challenge \cite{kille2015overview}. This challenge offered participants the opportunity to plug their recommendation algorithms into Plista\footnote{http://orp.plista.com/documentation} and provide recommendations to real users visiting online publishers. Plista is a recommendation framework that connects recommendation providers such as ourselves and recommendation service requesters such as online news portals. Participation in the challenge enabled us to collect information on user-item interactions such as impressions (a user viewing an item), updates (the appearance of a new item, or a change to the content of an existing item) and clicks (a user clicking on a recommended item).
- Unfortunately, the click information did not include whether the click on recommendation was on our recommendations or on someone else's recommendations. Since we know the user and the base item for which we recommended and the recommended items, we considered a click notification on one of our recommended items as a click on our recommendation if that click happened with in $\mathit{5}$ minutes from the time of our recommendation. From the combined collected dataset, we extracted the base item, the category of the base item, the recommended item and the category of the recommended item, the number of times a recommendation item has been recommended to a base item (view) and the number of times that the recommended item has been clicked from the base item. A sample dataset is presented in Table \ref{tab:sample}.
+The three recommendation algorithms that we used are two instances of \textbf{Recency} and one instance of \textbf{RecencyRandom}. The Recency algorithm keeps the most recently viewed or updated items and recommends the top $\mathit{k}$ most recent items every time a recommendation request is made. The RecencyRandom recommender keeps the most recent $\mathit{100}$ items at any time and recommends, randomly, the requested number of items every time a recommendation request is made.
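+The following is a minimal Python sketch of the two algorithms as described above; the class names, the buffer handling, and the method signatures are our own illustration, not the code we actually deployed:
+\begin{verbatim}
+import random
+from collections import OrderedDict
+
+class Recency:
+    """Keep recently viewed/updated items, newest last."""
+    def __init__(self, buffer_size=100):
+        self.items = OrderedDict()
+        self.buffer_size = buffer_size
+
+    def update(self, item_id):
+        # A view or a content update moves the item to the
+        # most recent position.
+        self.items.pop(item_id, None)
+        self.items[item_id] = None
+        if len(self.items) > self.buffer_size:
+            self.items.popitem(last=False)  # evict the oldest
+
+    def recommend(self, k):
+        # The k most recent items, newest first.
+        return list(self.items)[::-1][:k]
+
+class RecencyRandom(Recency):
+    """Draw k items at random from the most recent items."""
+    def recommend(self, k):
+        pool = list(self.items)
+        return random.sample(pool, min(k, len(pool)))
+\end{verbatim}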
+ Unfortunately, click information provided by the Plista platform does not include whether a received click notification is in response to our recommendations or to some other participant's recommendations. Since we know the user and the base item for which we recommended and the recommended items, we considered a click notification on one of our recommended items as a click on our recommendation if that click happened within $\mathit{5}$ minutes from the time of our recommendation. From the combined dataset, we extracted the base item, the category of the base item, the recommended item, the category of the recommended item, the number of times a recommendation item has been recommended to a base item (view) and the number of times that the recommended item has been clicked from the base item. From the views and clicks, we compute the click-through rate (CTR) as the percentage of views that are clicked. A sample of the dataset is presented in Table \ref{tab:sample}.
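+The attribution and extraction steps can be sketched as follows in Python with pandas; the column names and the epoch-second timestamps are our own assumptions about the log layout, not the actual Plista format:
+\begin{verbatim}
+import pandas as pd
+
+recs = pd.read_csv("recs.csv")      # user, base_item, rec_item, rec_time
+clicks = pd.read_csv("clicks.csv")  # user, item, click_time
+
+# A click counts as ours if the same user clicked the recommended
+# item within 5 minutes (300 s) of our recommendation.
+m = recs.merge(clicks, left_on=["user", "rec_item"],
+               right_on=["user", "item"])
+m = m[(m.click_time >= m.rec_time) &
+      (m.click_time - m.rec_time <= 300)]
+
+# Views and attributed clicks per (base item, recommended item) pair,
+# and the CTR as the percentage of views that are clicked.
+views = recs.groupby(["base_item", "rec_item"]).size()
+hits = m.groupby(["base_item", "rec_item"]).size()
+ctr = 100 * hits.reindex(views.index, fill_value=0) / views
+\end{verbatim}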
-\begin{table}
-\caption{A sample dataset. B is the base item id, R is the recommendation item id, and B-Cat and R-cat are the categories of the base item and the recommendation item, respectively. \label{tab:sample}}
+\begin{table*}
+
+\caption{A sample of the dataset. \label{tab:sample}}
 \centering
 \begin{tabular}{|l|l|l|l|l|l|l|}
 \hline
- B & B-Cat & R& R-Cat &View&Click & CTR \\
+ Base Item & Recommendation & Base Item Category & Recommendation Category & View & Click & CTR (\%) \\
 \hline
 229397219 &229495114 & Berlin & Berlin & 17 & 1 & 5.88\\
@@ -203,7 +211,7 @@ The three recommendation algorithms that we used are two instances of \textbf{R
 \end{tabular}
-\end{table}
+\end{table*}
 %Plista is a company that provides a recommendation platform where recommendation providers are linked with online publishers in need of recommendation sertvice.
@@ -213,15 +221,15 @@ The three recommendation algorithms that we used are two instances of \textbf{R
 \section{Results and Analysis}
-Our dataset consists of a total of $\mathit{288979}$ base-item recommendation-item pairs. To see the relationship between views and clicks, we first sorted the dataset according to views and then normalized the \textbf{view} and \textbf{click} counts by the total number of views and total number of click, respectively. We then slected the top $\mathit{1000}$ pairs and plotted the views and the clicks. The reason for normalization is to be able to plot them together for easy comparision. %The selection of only $\mathit{1000}$ pairs is because the more items we use, the more difficult is to see .
+Our dataset consists of a total of $\mathit{288979}$ base-item/recommendation-item pairs. To see the relationship between views and clicks, we first sorted the dataset according to views and then normalized the \textbf{view} and \textbf{click} counts by the total number of views and the total number of clicks, respectively. We then selected the top $\mathit{1000}$ pairs and plotted the views and the clicks. The reason for normalization is to be able to plot them together for easy comparison. %The selection of only $\mathit{1000}$ pairs is because the more items we use, the more difficult is to see .
-Figure \ref{fig:view_click} shows the plot of views and clicks for the $\mathit{1000}$ pairs. The blue plot is for views and is smooth since the data was sorted by views. The red plot is for the corresponding clicks on recommendations. We observe that the clicks do not follow the views, an indication that t clicks do not correspond with the number of times that a recommendation items is recommended to a base item. This is the reason we set out to investigate, to begin with. The ragged click plot shows that some items are more likely to trigger clicks on recommendations than others. What can possibly explain this observation? What causes these difference between the number of views and the number of clicks?
+Figure \ref{fig:view_click} shows the plot of views and clicks for the $\mathit{1000}$ pairs. The blue plot is for views and is smooth since the data was sorted by views. The red plot is for the corresponding clicks on recommendations. We observe that the clicks do not follow the views, an indication that the clicks do not correspond to the number of times that a recommendation item is recommended to a base item. This observation is the primary reason we set out to investigate the relation between base items and recommended items and their attractiveness to the user. The ragged click plot shows that some items are more likely to trigger clicks on recommendations than others. What can possibly explain this observation? What causes this difference in CTR scores across the various items?
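+A short sketch of the normalization and plotting step described above, reusing the views and hits counts from the sketch in the previous section (the output file name is illustrative):
+\begin{verbatim}
+import matplotlib.pyplot as plt
+
+# Top 1000 pairs by views; normalize each series by its own total
+# so that views and clicks fit on one scale.
+top_views = views.sort_values(ascending=False).head(1000)
+top_clicks = hits.reindex(top_views.index, fill_value=0)
+plt.plot((top_views / views.sum()).to_numpy(), label="views")
+plt.plot((top_clicks / hits.sum()).to_numpy(), label="clicks")
+plt.legend()
+plt.savefig("view_click_1000.pdf")
+\end{verbatim}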
 \begin{figure} [t]
 \centering
-\includegraphics[scale=0.5]{img/tage_view1_click000.pdf}
+\includegraphics[scale=0.5]{img/tage_view_click1000-crop.pdf}
-\caption{Plots of views and clicks on Tagesspiegel. The Plots are normalzized by the total views and total clicks. The Blue plot (bottom) ais the sorted view plot and the red plot is the corresponding click plot.\label{fig:view_click}}
+\caption{Plots of views (blue) and clicks (red). The plots are generated by first sorting by views. The difference between the view and click plots suggests that some items are more likely to trigger clicks on recommendations than others. \label{fig:view_click}}
 \end{figure}
@@ -247,14 +255,14 @@ Figure \ref{fig:view_click} shows the plot of views and clicks for the $\mathit{
 \subsection{Categories of Base and Recommendation Items}
-To start to explain the difference between the view plot and the click plot observed in \ref{fig:view_click}, we aggregated views and clicks by the $\mathit{9}$ categories of items that the items are placed under in the Tagesspiegel website. The aggregation gives us two results: view and click counts the categories in base and in recommendation. With the categories, we attemopt to answer two questions: 1) is there a relationship between the category of the base item and the likelihood of triggering a click on recommendation, and 2) is there a relationship between the category of the recommended item and the likelihood of triggering a click upon its recommendation? Tables \ref{tab:base} and \ref{tab:reco} present the views, clicks and CTR scores. The results are sorted by CTR scores.
+To start to explain the difference between the view plot and the click plot observed in Figure \ref{fig:view_click}, we aggregated views and clicks by the $\mathit{9}$ categories that items are placed under on the Tagesspiegel website. The aggregation gives us two results: view and click counts of the categories as base and as recommendation. With the categories, we attempt to answer two questions: 1) is there a relationship between the category of the base item and the likelihood of triggering a click on a recommendation, and 2) is there a relationship between the category of the recommended item and the likelihood of triggering a click upon its recommendation? Tables \ref{tab:base} and \ref{tab:reco} present the views, clicks and CTR scores. The results are sorted by CTR scores.
-We observe that there is a difference between the base categories and the recommendation categories with respect to the likelihood of triggering clicks. In the base categories, the \textbf{politics} is more likely to triggers clicks than any other category \textbf{opinion} and \textbf{world}. Special categories such as \textbf{culture} and and \textbf{knowledge} are the least likely to trigger clicks on recommendations. This is consistent with previous findings that reported special interest portals generated less clicks on recommendations than traditional and mainstream news and opinion portals.
+We observe a difference between the base categories and the recommendation categories with respect to the likelihood of triggering clicks. In the base categories, items of \textbf{politics} are more likely to trigger clicks than other categories, followed by \textbf{opinion} and \textbf{world}. Special categories such as \textbf{culture} and \textbf{education} are the least likely to trigger clicks on recommendations. This is consistent with the previous findings that reported special interest portals generate fewer clicks on recommendations than traditional portals providing news and opinions on current events.
 \begin{table*}
-\caption{A table showing the views, clicks, and ctr of the 12 categories of Tagesspiegel on the basis of the base items. This table shows the views, clicks and CTRs of the base item. A click for base item happens when an item recommended to it is clicked. }
+\caption{The views, clicks, and CTR of the categories. Table \ref{tab:base} is for the categories in base and Table \ref{tab:reco} is for the categories in recommendation. The CTR scores are generally higher in recommendation, and the ranking of the categories in terms of the CTR scores is different in base and in recommendation. }
 \parbox{.45\linewidth}{
 \centering
 \begin{tabular}{|l|l|l|l|l|}
 \hline
@@ -278,7 +286,7 @@ wissen&13500&4&0.03\\
 \hline
 \end{tabular}
-\caption{Base Category \label{tab:base}}
+\subcaption{Base Category \label{tab:base}}
 }
 \hfill
 \parbox{.45\linewidth}{
@@ -302,24 +310,24 @@ wirtschaft&32955&15&0.05\\
 \end{tabular}
-\caption{Recommendation Category \label{tab:reco}}
+\subcaption{Recommendation Category \label{tab:reco}}
 }
 \end{table*}
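+The aggregation behind Tables \ref{tab:base} and \ref{tab:reco} can be sketched as follows, assuming a pandas DataFrame pairs with one row per base-item/recommendation-item pair and columns laid out as in Table \ref{tab:sample} (the column names base_cat, rec_cat, views, and clicks are ours):
+\begin{verbatim}
+# CTR per category, once in the base role and once in the
+# recommendation role, sorted by CTR as in the tables above.
+for key in ["base_cat", "rec_cat"]:
+    g = pairs.groupby(key)[["views", "clicks"]].sum()
+    print((100 * g["clicks"] / g["views"])
+          .sort_values(ascending=False))
+\end{verbatim}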
-On the recommendation side, however, it is \textbf{media} that is the more likely to triggers clicks upon recommendation, followed by \textbf{politics} and the local category (\textbf{Berlin}. The two least performing categories are \textbf{business} and \textbf{knowledge}, similar to the least performing categories in base. So, overall, it seems that the likelihood of triggering clicks by the categories shows a difference when they are in base and recommendation. In general, the categories have higher CTRs in recommendation that in base. To gain further insight, we looked at the CTRs of transitions from base category to recommendation category. The aim of this is to find out whether some base categories are more likely to trigger clicks on some recommendation categories. The results are presented in Table \ref{heatmap}.
-
-
+On the recommendation side, however, it is \textbf{media} that is the most likely to incur clicks upon recommendation, followed by \textbf{politics} and the local category (\textbf{berlin}). The two least performing categories are \textbf{business} and \textbf{education}, similar to the least performing categories in base. So, overall, it seems that the likelihood of triggering clicks by the categories shows a difference when they are in base or in recommendation. In general, the categories have higher CTR scores in recommendation than in base.
+To gain further insight, we looked at the CTRs of transitions from base category to recommendation category. The aim of this is to find out whether some base categories are more likely to trigger clicks on some recommendation categories. The results are presented in Table \ref{heatmap}.
+Some interesting observations can be seen in the category-to-category transitions. While the highest transition CTRs for the base categories of \textbf{berlin} and \textbf{politics} are to \textbf{media}, for \textbf{business} it is to \textbf{opinion}, and for \textbf{sport} it is to \textbf{sport}. The highest transition CTR for \textbf{culture} is to the local category, \textbf{berlin}, and for \textbf{world} it is to \textbf{politics} followed by \textbf{berlin}. \textbf{Media} is the category most likely to trigger clicks upon recommendation. The local category \textbf{berlin} is the one most likely to trigger clicks on a diverse set of recommendation categories.
 \begin{table*}
+\centering
 \caption{Transition CTR scores from base categories to recommendation categories. The row categories represent the categories of base items and the column categories represent the recommendation categories. \label{heatmap}}
 \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|}
 \hline
-&Berlin&politics&wirtschaft&sport&kultur&weltspiegel&meinung&medien&wissen\\
+&berlin&politik&wirtschaft&sport&kultur&weltspiegel&meinung&medien&wissen\\
 \hline
 berlin&0.14&0.08&0.06&0.05&0.06&0.12&0.12&0.16&0.06\\
 politik&0.2&0.39&0.06&0.12&0.04&0.3&0&0.73&0.1\\
@@ -364,28 +372,28 @@ wissen&0.02&0&0&0&0.11&0.15&0&0&0\\
 %
 % Question for myself: Is it maybe possible to compute the category CTR's? Like a hitmap of the CTRs where the recommendations are subsidvided to their categories and a CTR is computed? I think so. We can also go durther and look at the contenet similarities. Further, we can look at what type of items trigger more clicks by selecting some items which generated more clicks and analyzing them.
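+The transition matrix in Table \ref{heatmap} is a pivot over the same pairs DataFrame (again with our own illustrative column names):
+\begin{verbatim}
+# Transition CTR from base category (rows) to recommendation
+# category (columns), in percent.
+g = pairs.groupby(["base_cat", "rec_cat"])[["views", "clicks"]].sum()
+transition_ctr = (100 * g["clicks"] / g["views"]).unstack("rec_cat")
+\end{verbatim}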
-At the item level, we investigated whether the %re is a relationship, in triggering clicks on recommendations, between the base items and the recommended items. More specifically, are
-the base items that are more likely to trigger recommendation are also the ones that are more likely to be clicked upon recommendations. To accomplish this, we first computed the CTRs, separately, for base items and recommendation items and then intersected them to find the items that are in both. It is important to state here that we have more items in our recommendations than in our base items. This is because we are only requested to provide recommendation to some items, while we have all items to use for recommendation. We had a total of $\mathit{55708}$ items in our recommendation items and $\mathit{18967}$ on our base items. The intersection resulted in $\mathit{15221}$ items for which we looked at the CTRs they score when they are used as base items and as recommendation items.
+At the item level, we investigated whether %re is a relationship, in triggering clicks on recommendations, between the base items and the recommended items. More specifically, are
+the base items that are more likely to trigger clicks on recommendations are also the ones that are more likely to be clicked upon recommendation. To accomplish this, we first computed the CTRs separately for base items and recommendation items and then intersected these to find the items that are in both. It is important to state here that we have more items in our recommendations than in our base items. This is because we are only requested to provide recommendations to some items via the Plista platform, while we could choose from all items for recommendation. Our dataset collected over two months comprises $\mathit{55708}$ recommended items and $\mathit{18967}$ base items. The intersection resulted in $\mathit{15221}$ items for which we computed CTRs in base and in recommendation.
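+A sketch of this item-level comparison, with the same assumed pairs DataFrame and a small helper of our own:
+\begin{verbatim}
+import pandas as pd
+
+def ctr_per_item(df, key):
+    g = df.groupby(key)[["views", "clicks"]].sum()
+    return 100 * g["clicks"] / g["views"]
+
+base_ctr = ctr_per_item(pairs, "base_item")
+reco_ctr = ctr_per_item(pairs, "rec_item")
+shared = base_ctr.index.intersection(reco_ctr.index)
+compare = pd.DataFrame({"as_base": base_ctr[shared],
+                        "as_reco": reco_ctr[shared]})
+\end{verbatim}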
-To better visualize the results, we present two plots. In Figure \ref{fig:view_click_base}, we present plots generated by sorting the results by base CTR scores. The blue plot is for base CTR and red plot is for recommendation CTR. What we observe here is that although the base items that are more likely to trigger clicks on recommendations are also the items that are more likely to trigger clicks upon their recommendations, there are many other items that are more likely to trigger clicks upon their recommendation, but they do not do so as base items. To visualize this better, we also sorted the results by recommendation CTRs, and we obtained the plots in Figure \ref{fig:view_click_reco}. We observe here the base items (the blue line) that are more likely to trigger clicks on recommendation are a subset of the recommendation items that are more likely to trigger clicks upon their recommendation. %The discrepancy we observe might have to do with the fact that we had a limited access to base items while we have a full access to the items for recommendation.
+To better visualize the results, we present two plots. In Figure \ref{fig:view_click_base}, we present plots generated by sorting the results by base CTR. The blue plot is for base CTR and the red plot is for recommendation CTR. What we observe here is that the base items that are more likely to trigger clicks on recommendations are mostly also the items that are more likely to trigger clicks upon their recommendations. There are, however, many items that are more likely to trigger clicks upon their recommendation, but they do not do so as base items. To visualize this better, we also sorted the results by recommendation CTR, and we obtained the plots in Figure \ref{fig:view_click_reco}. We observe here that the base items (the blue line) that are more likely to trigger clicks on recommendation are a subset of the recommendation items that are more likely to trigger clicks upon their recommendation. So from the overlap in the plots, we can conclude that for most of the items, their ability to trigger clicks on recommendation as base items is indicative of their attractiveness as recommendation items. The converse, however, does not seem to hold: the ability to incur clicks upon recommendation is not indicative of the ability to trigger clicks as a base item. %The discrepancy we observe might have to do with the fact that we had a limited access to base items while we have a full access to the items for recommendation.
 \begin{figure} [t]
 \centering
-\includegraphics[scale=0.45]{img/base_reco_ctr_sorted_by_base.pdf}
+\includegraphics[scale=0.45]{img/base_reco_ctr_sorted_by_base-crop.pdf}
-\caption{Plots of CTRs on base items and recommended items. Plots are generated by first sorting results according to base CTRs. Blue plot is base CTR and red plot is recommendation CTR. \label{fig:view_click_base}}
+\caption{CTRs of base items (blue) and of recommended items (red), generated by first sorting by base CTR. The high-scoring recommendation items do not follow the high-scoring base items. \label{fig:view_click_base}}
 \end{figure}
 \begin{figure} [t]
 \centering
-\includegraphics[scale=0.45]{img/base_reco_ctr_sorted_by_reco.pdf}
+\includegraphics[scale=0.45]{img/base_reco_ctr_sorted_by_reco-crop.pdf}
-\caption{Plots of CTRs on base items and recommended items. Plots are generated by first sorting results according to recommendation CTRs. Blue plot is base CTR and red plot is recommendation CTR. \label{fig:view_click_reco}}
+\caption{CTRs of base items (blue) and of recommended items (red), generated by first sorting by recommendation CTR. The high-scoring base items are mostly a subset of the high-scoring recommendation items. \label{fig:view_click_reco}}
 \end{figure}
@@ -394,20 +402,24 @@ To better visualize the results, we present two plots. In Figure \ref{fig:view_
 \section{Discussion and Conclusion}
-In this study, we attempted to explain the factors that cause clicks on recommendations. We specifically investigated whether clicks on recommendations are a function of the base items, or a function of the recommended items. We attempted to explain that by looking at the categories of items and the transitions between the categories. We found that indeed the category of the items explains some of the discrepancy between the likelihoods of triggering clicks both as base items and recommendation items in the sense that some base categories and some recommendation categories are more likely to trigger clicks than others.
+In this study, we attempted to explain the factors that trigger clicks on recommendations. We specifically investigated whether clicks on recommendations are a function of the base items or a function of the recommended items. We attempted to explain that by looking at the categories of items and the transitions between the categories. We found that indeed the category of the items explains some of the discrepancy between the likelihoods of triggering clicks both as base items and recommendation items, in the sense that some base categories and some recommendation categories are more likely to trigger clicks than others.
-There is, however, a difference between the categories in their likelihood to trigger clicks as base category and in recommendation category.
As base category, the politics category is the most likely to trigger clicks on recommendations followed by media. In recommendations, however, it is the media followed by politics that trigger clicks upon their recommendation. The results suggest that click on recommendations is a function of both the base items and the recommended items. This is indicated by the fact that some categories are less or more likely to generate clicks on recommendation whether as base, which means we should not recommend to those items, nor as recommendations, that means we should not recommend them. The results also show that the performance of the categories as base and recommendations are not similar.
+There is, however, a difference between the categories in their likelihood to trigger clicks as base category and as recommendation category. As base category, the politics category is the most likely to trigger clicks on recommendations, followed by opinion and world. In recommendation, however, it is media followed by politics that trigger clicks upon their recommendation. The results suggest that clicks on recommendations are a function of both the base items and the recommended items. This is indicated by the fact that some categories are more or less likely to generate clicks on recommendation whether as base or as recommendation. This suggests that leveraging category information holds a potential for improving the performance of a recommender system. The results also show that the performance of the categories as base and as recommendation is not exactly aligned. This non-alignment was also observed at the item level in that there were many items that were more likely to trigger clicks as recommendation, but not as base.
-The investigation of the transition CRTs shows which categories are more likely to trigger clicks on which categories. This result suggests that recommendation can be improved by recommending some categories to those categories where they are more likely to get clicked. For example, we observe that it is more likely to receive clicks if we recommend media items to the politics category and to the local category (Berlin). Similarly, we observe that recommending sports items to sports items is much more likely to trigger clicks. These results suggest that there is a way to improve recommender system by leveraging category information of items.
+The investigation of the transitions between categories suggests that recommendations can be improved by recommending items of a category to those base categories where they are more likely to get clicked. For example, we observe that we are more likely to receive clicks if we recommend media items to the politics category and to the local category (berlin). Similarly, we observe that recommending sports items to sports items is much more likely to trigger clicks. %These results suggest that there is a way to improve recommender system by leveraging category information of items.
-We hope that this work can contribute to the understanding of factors affecting recommender systems. We have shown that category-level information can take us a long way in explaining why clicks on recommendations happen. Item level information also showed that there is a relationship between base items that are more likely to trigger clicks and those recommendation items that are more likely to trigger clicks. This all suggests that leveraging information at both the category and item levels is important to improve recommender systems.
As a future work, we would like to investigate the factors that lead to clicks on recommendation using a larger dataset and at the content level of the items.
+We hope that this work can contribute to the understanding of factors affecting recommender systems. We have shown that category-level information can take us a long way in explaining clicks on recommendations. Item-level information also showed that there is a relationship between base items that are more likely to trigger clicks and those recommendation items that are more likely to trigger clicks upon their recommendation. This all suggests that leveraging information at both the category and item levels might hold a potential for improving recommender systems. As future work, we would like to investigate the factors that lead to clicks on recommendations using a larger dataset and at the content level of the items.
% An idea, maybe show the variance of the categories in terms of their CTR? Another thing we can do is to explore the high achieving base itemsand the high achiving recommended itesm and see if they are some how the same items. We also do similar thing with lowe achving base item and recommended items. Is this holds, then clearly it indicates that a big factor is not about the current context, but just the nature of the items themselves, both ion the base items, and in the recommended items. This is going to gold , as it already shows in the groups. But, we can also zoom in on the politics items and see if that holds too. Another thing we can consider is find base items and recommended items with big variance and study them with the view to finding the causes in terms of categories and also in terms of contenet. The variance of a recommendation item tells us information that is it is recommended to some values it makes sense, but if to others, it does not. This can also be studied at a particlat group's
+
+
+
+
 \bibliographystyle{abbrv}
 \bibliography{ref}