From dec17e349fb321cb197d8cda92901c271536af2f 2016-02-11 16:13:20
From: Gebrekirstos Gebremeskel
Date: 2016-02-11 16:13:20
Subject: [PATCH] update

---

diff --git a/main.tex b/main.tex
index 7a17cde39eed8555a43c471e8c42e798b5d9e5fe..16b5ecb1b62ae230484426e8e4f1dbfb8dc115b1 100644
--- a/main.tex
+++ b/main.tex
@@ -180,16 +180,16 @@ We also go down to the item level and look at the relationships of the base item
-We used Plista\footnote{http://orp.plista.com/documentation} dataset collected from user-item interaction with the tagesspiegel.com news portal, German online news and opinions portal, over more than two months, from 15-04-2015 to 04-07-2015. Items in tagesspiegel are manually placed under $\mathit{10}$ categories, $\mathit{9}$ of which we investigate in this study. The dataset is aggregated from the logs of the recommender systems that we used during our participation in the CLEF NewsREEL 2015 challenge \cite{kille2015overview}. The challenge offered participants the opportunity to plug their recommendation algorithms to PLISTA\footnote{http://orp.plista.com/documentation} and provide recommendations to real users visiting online publishers. PLISTA is a framework that connects recommendation providers such as ourselves and recommendation service requester such as online news portals. Participation in the challenge enabled us to collect information of user -item interaction such as impressions (a user viewing an item), update (appearance of news item, or change of content of existing item) and clicks[user clicking on recommendation.
+We used the Plista\footnote{http://orp.plista.com/documentation} dataset, collected from user-item interactions with the tagesspiegel.com news portal, a German online news and opinion portal, over more than two months, from 15-04-2015 to 04-07-2015. Items on tagesspiegel are manually placed under $\mathit{10}$ categories, $\mathit{9}$ of which we investigate in this study. The dataset is aggregated from the logs of the recommender systems that we used during our participation in the CLEF NewsREEL 2015 challenge \cite{kille2015overview}. The challenge offered participants the opportunity to plug their recommendation algorithms into Plista\footnote{http://orp.plista.com/documentation} and provide recommendations to real users visiting online publishers. Plista is a framework that connects recommendation providers, such as ourselves, with recommendation service requesters, such as online news portals. Participation in the challenge enabled us to collect user-item interaction information such as impressions (a user viewing an item), updates (the appearance of a new item, or a change to the content of an existing item) and clicks (a user clicking on a recommended item).
-The three recommendation that we used are two instances of Recency, and RecencyRandom. The recency algorithm keeps the most recently viewed or updated items and recommends the top most $mathit{k}$ recent items every time a recommendation request is made. The RecencyRandom recommender keeps the most recent $\mathit{100}$ items at any time and recommends, randomly, the requested number of recommendations every time recommendation request is made
+The three recommenders that we used are two instances of \textbf{Recency} and one instance of \textbf{RecencyRandom}. The Recency algorithm keeps the most recently viewed or updated items and recommends the $\mathit{k}$ most recent items every time a recommendation request is made. The RecencyRandom recommender keeps the $\mathit{100}$ most recent items at any time and, every time a recommendation request is made, randomly selects the requested number of items from this pool.
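To make the two recommenders concrete, a minimal Python sketch follows. It is an illustrative reconstruction from the description above, not the code used in the challenge: the class names, the default value of $k$ and the request interface are hypothetical, while the 100-item pool for RecencyRandom comes from the text.

\begin{verbatim}
import random
from collections import OrderedDict

class RecencyRecommender:
    """Keep items ordered by the time they were last viewed or updated
    and return the k most recent ones on each request."""
    def __init__(self, k=6):           # default k is illustrative
        self.k = k
        self.items = OrderedDict()     # item_id, kept in insertion order

    def update(self, item_id):
        # Re-inserting an item moves it to the most-recent position.
        self.items.pop(item_id, None)
        self.items[item_id] = True

    def recommend(self, n=None):
        n = n or self.k
        return list(self.items)[-n:][::-1]   # most recent first

class RecencyRandomRecommender(RecencyRecommender):
    """Keep only the 100 most recent items and sample the requested
    number of recommendations uniformly at random from that pool."""
    def __init__(self, pool_size=100):
        super().__init__()
        self.pool_size = pool_size

    def update(self, item_id):
        super().update(item_id)
        while len(self.items) > self.pool_size:
            self.items.popitem(last=False)    # drop the oldest item

    def recommend(self, n=6):
        pool = list(self.items)
        return random.sample(pool, min(n, len(pool)))
\end{verbatim}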
- Unfortunately, the click information did not include whether the click on recommendation was on our recommendations or on someone else's recommendations. Since we know the user and the base item for whom we recommended and the recommended items, we considered a click notification on one of our recommended items as a click on our recommendation if that click happened with in $\mathit{5}$ minutes from the time of recommendation. From the combined collected dataset, we extracted the base item, the category of the base item, the recommended item and the category of the recommended item, the number of times a recommendation item has been recommended to a base item and the number of times that the recommended item has been clicked from the base item. A sample dataset is presented in Table \ref{tab:sample}.
+ Unfortunately, the click information did not indicate whether a click on a recommendation was on our recommendations or on someone else's recommendations. Since we know the user, the base item for which we recommended, and the recommended items, we considered a click notification on one of our recommended items to be a click on our recommendation if that click happened within $\mathit{5}$ minutes of our recommendation. From the combined collected dataset, we extracted the base item, the category of the base item, the recommended item, the category of the recommended item, the number of times the recommendation item has been recommended to the base item (views) and the number of times that the recommended item has been clicked from the base item (clicks). A sample dataset is presented in Table \ref{tab:sample}.
 \begin{table}
-\caption{A sample dataset. B is the base item id, R is the recommendation item id, and B-Cat and R-cat are the categories of the base item and the recommendation item respectively. }
+\caption{A sample dataset. B is the base item id, R is the recommendation item id, and B-Cat and R-Cat are the categories of the base item and the recommendation item, respectively. \label{tab:sample}}
 \centering
 \begin{tabular}{|l|l|l|l|l|l|l|}
 \hline
@@ -202,7 +202,7 @@ The three recommendation that we used are two instances of Recency, and Recency
 \hline
 \end{tabular}
- \label{tab:sample}
+
 \end{table}
@@ -213,9 +213,9 @@ The three recommendation that we used are two instances of Recency, and Recency
 \section{Results and Analysis}
-Our dataset consists of a a total of $\mathit{288979}$ base- item recommendation-item pairs. To see the relationship between views and recommendations, we first sorted dataset according to views and then normalized the view and click counts by the total number of views and total number of click, respectively. We then sleeted the top $\mathit{1000}$ pairs and plotted the views and the clicks. the reason for normalization is to be able to plot them together and compare them. The selection of only 100 items is because the more items we use, the more difficult it is to compare them.
+Our dataset consists of a total of $\mathit{288979}$ base-item recommendation-item pairs. To see the relationship between views and clicks, we first sorted the dataset according to views and then normalized the \textbf{view} and \textbf{click} counts by the total number of views and the total number of clicks, respectively. We then selected the top $\mathit{1000}$ pairs and plotted the views and the clicks. The reason for normalization is to be able to plot them together for easy comparison. %The selection of only $\mathit{1000}$ pairs is because the more items we use, the more difficult is to see .
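As an illustration of this preprocessing step, a small pandas/matplotlib sketch is given below. The file name pairs.csv and the column names (B, B-Cat, R, R-Cat, views, clicks) are hypothetical stand-ins for the extracted dataset described above; the normalization by column totals and the cut-off at the 1000 most-viewed pairs follow the text.

\begin{verbatim}
import pandas as pd
import matplotlib.pyplot as plt

# One row per (base item, recommended item) pair with raw counts.
pairs = pd.read_csv("pairs.csv")   # columns: B, B-Cat, R, R-Cat, views, clicks

# Normalize views and clicks by their totals so the two curves share a scale.
pairs["view_share"] = pairs["views"] / pairs["views"].sum()
pairs["click_share"] = pairs["clicks"] / pairs["clicks"].sum()

# Keep the 1000 most-viewed pairs; sorting by views makes the view curve smooth.
top = pairs.sort_values("views", ascending=False).head(1000).reset_index(drop=True)

plt.plot(top.index, top["view_share"], color="blue", label="views")
plt.plot(top.index, top["click_share"], color="red", label="clicks")
plt.xlabel("pair rank (by views)")
plt.ylabel("normalized count")
plt.legend()
plt.show()
\end{verbatim}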
-Figure \ref{fig:view_click} shows the plot of views and clicks for the $\mathit{1000}$ pairs. The blue plot is for views and is smooth since the data was sorted by views. The red plot is for the corresponding clicks on recommendations. We observe that the clicks do not follow the views, an indication that t clicks do not correspond with the number of of times that a recommendation items is recommended to a base item. This is the reason we set out to investigate, to begin with. The ragged click plot shows that some items are more likely to trigger clicks on recommendations than others. What can possibly explain this observation? What causes these difference between the number of views and the number of clicks?
+Figure \ref{fig:view_click} shows the plot of views and clicks for the $\mathit{1000}$ pairs. The blue plot is for views and is smooth since the data was sorted by views. The red plot is for the corresponding clicks on recommendations. We observe that the clicks do not follow the views, an indication that clicks do not correspond to the number of times that a recommendation item is recommended to a base item. This is the observation we set out to investigate in the first place. The ragged click plot shows that some items are more likely to trigger clicks on recommendations than others. What can possibly explain this observation? What causes this difference between the number of views and the number of clicks?
 \begin{figure} [t]
 \centering
@@ -247,9 +247,9 @@ Figure \ref{fig:view_click} shows the plot of views and clicks for the $\mathit{
 \subsection{Categories of Base and Recommendation Items}
-To start to explain the difference between the view plot and the click plot, we aggregated views and clicks by the $\mathit{9}$ categories of items that the items are placed under in the Tagesspiegel website. The aggregation gives us two results: view and click counts aggregated by base categories of the base items and by the categories of recommended items. In other words, we are attempting to answer two questions: 1) is there a relationship between the category of the base item and the likelihood that it triggers a click on recommendation, regardless of the type of recommendation and 2) is there a relationship between the category of the recommended item and the likelihood that it triggers a click upon its recommendation? Tables \ref{tab:base} and \ref{tab:reco} present the views, clicks and CTR scores. The results are sorted by CTR scores.
+To start to explain the difference between the view plot and the click plot observed in Figure \ref{fig:view_click}, we aggregated views and clicks by the $\mathit{9}$ categories that items are placed under on the Tagesspiegel website. The aggregation gives us two results: view and click counts aggregated by the category of the base item and by the category of the recommended item. With these categories, we attempt to answer two questions: 1) is there a relationship between the category of the base item and the likelihood of triggering a click on a recommendation, and 2) is there a relationship between the category of the recommended item and the likelihood of triggering a click upon its recommendation? Tables \ref{tab:base} and \ref{tab:reco} present the views, clicks and CTR scores. The results are sorted by CTR score.
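A sketch of this per-category aggregation, reusing the hypothetical pairs frame and column names from the sketch above, is given below. The factor of 100 assumes the CTR is reported in percent, which is consistent with the only table row visible here (wirtschaft: 32955 views, 15 clicks, CTR 0.05).

\begin{verbatim}
# Aggregate raw counts per category and compute CTR = clicks / views,
# once for the category of the base item and once for the category
# of the recommended item.
def ctr_by(df, cat_col):
    agg = df.groupby(cat_col)[["views", "clicks"]].sum()
    agg["CTR"] = 100 * agg["clicks"] / agg["views"]   # percent (assumed)
    return agg.sort_values("CTR", ascending=False)

base_ctr = ctr_by(pairs, "B-Cat")   # basis for Table \ref{tab:base}
reco_ctr = ctr_by(pairs, "R-Cat")   # basis for Table \ref{tab:reco}
\end{verbatim}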
-We observe that there is a difference between the base categories and the recommendation categories with respect to generating clicks. In the base categories, the category of politics triggers clicks more than any other category. After the category of politics, the categories of opinion and the world trigger more clicks on recommendations. Special categories such as culture and and knowledge trigger the least clicks on recommendations. This is consistent with previous findings that reported special interest portals enjoyed less clicks than traditional and mainstream news and opinion portals.
+We observe that there is a difference between the base categories and the recommendation categories with respect to the likelihood of triggering clicks. In the base categories, \textbf{politics} is more likely to trigger clicks than any other category, followed by \textbf{opinion} and \textbf{world}. Special categories such as \textbf{culture} and \textbf{knowledge} are the least likely to trigger clicks on recommendations. This is consistent with previous findings that reported special-interest portals generated fewer clicks on recommendations than traditional and mainstream news and opinion portals.
 \begin{table*}
@@ -307,7 +307,7 @@ wirtschaft&32955&15&0.05\\
 \end{table*}
-On the recommendation categories, however, it is the media category that triggers more clicks upon recommendation, followed by politics and the local categories. The two least performing recommendation categories are economy and knowledge, similar to the least performing base categories. So, overall, it seems that the likelihood of triggering clicks by the categories shows a difference when they are in base and recommendation. Overall, it seems the categories have higher CTRs in recommendation that in base. To gain further insight, we looked at the CTRs of transitions from base category to recommendation category. The aim of this is to find out whether some base categories are more likely to trigger clicks on some recommendation categories. The results are presented in Table \ref{heatmap}.
+On the recommendation side, however, it is \textbf{media} that is the most likely to trigger clicks upon recommendation, followed by \textbf{politics} and the local category (\textbf{Berlin}). The two least performing categories are \textbf{business} and \textbf{knowledge}, similar to the least performing categories in base. So, overall, the likelihood that a category triggers clicks differs depending on whether it is the category of the base item or of the recommended item. In general, the categories have higher CTRs in recommendation than in base. To gain further insight, we looked at the CTRs of transitions from base category to recommendation category. The aim is to find out whether some base categories are more likely to trigger clicks on some recommendation categories. The results are presented in Table \ref{heatmap}.
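The transition CTRs behind Table \ref{heatmap} could be computed along the following lines, again using the hypothetical pairs frame and column names from the earlier sketches.

\begin{verbatim}
# CTR of transitions from base category to recommendation category,
# laid out as a base-category x recommendation-category matrix.
views = pairs.pivot_table(index="B-Cat", columns="R-Cat",
                          values="views", aggfunc="sum")
clicks = pairs.pivot_table(index="B-Cat", columns="R-Cat",
                           values="clicks", aggfunc="sum")
transition_ctr = 100 * clicks / views   # percent, matching the tables above
\end{verbatim}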