\documentclass{acm_proc_article-sp}

\usepackage{graphicx}
\usepackage{subcaption}
\usepackage{booktabs}
\usepackage{color, colortbl}
\usepackage[utf8]{inputenc}
\usepackage{multirow}
\usepackage[usenames,dvipsnames]{xcolor}

\begin{document}

\title{Towards Explaining Clicks on Recommendations}

\numberofauthors{1}
\author{}

\maketitle
\begin{abstract}
In a setting where recommendations are provided to users while they are viewing particular items, what are the factors that contribute to clicks on recommendations? We examine the factors that trigger clicks on recommended items in relation to the items the user is viewing, for which the recommendations are provided. More specifically, we examine the items from which clicks happen and the types of items that get clicked. Are some items more likely to cause the user to click on recommendations, and are some recommendations more likely to be clicked than others? In short, are clicks on recommendations a function of the base item, or are they a function of the recommended items? We attempt to explain the factors that trigger clicks on recommendations from several angles.
\end{abstract}

\section{Introduction}
A study that investigated the relationship between the number of times items are viewed and the number of times clicks happened from those items across several online publishers \cite{said2013month} reported that traditional news portals, providing news and opinions on politics and current events, are more likely to generate clicks on recommendations than special interest portals such as sports, gardening, and auto mechanics forums. Another study \cite{esiyok2014users}, using the same datasets, investigated impressions and clicks at the category level for one of the traditional news portals, Tagesspiegel (a popular national news portal in Germany). The finding was that there is a relationship between what the user is currently reading and what the user reads next.
They reported that the local and sports categories enjoyed the most loyal readers; that is, a user reading local items is more likely to keep reading items of the same category.

While both these studies are closely related and relevant, neither investigated the relationship between the base items, the recommended items, and the resulting clicks or lack thereof. In a recommendation setting where recommendation items are provided to users alongside the items they are currently viewing (henceforth referred to as base items), what are the factors that trigger users to click on recommendations? Are the clicks a function of the base items or of the recommended items? Do some base items and some recommended items cause users to click on recommendations more than others, and if so, what explains this difference?

In this study we examine the factors that might trigger clicks on recommendations from several angles. One angle is the categories of the base items the user is currently reading. More specifically, are some categories of base items more likely to cause the user to click on recommendations? Similarly, we examine the categories of the recommended items and investigate whether some are more likely to attract clicks upon recommendation. We also investigate how the categories of the base items and the categories of the recommended items are related in the way they trigger clicks.

We also go down to the item level and look at the relationship between the base items and the recommended items with respect to how likely they are to trigger clicks. More specifically, we examine whether the base items that are more likely to trigger clicks on recommendations are the same items that are more likely to receive clicks when recommended.

The study contributes to the understanding of factors that influence recommender systems. The insights from investigating these different angles help 1) to understand what aspects of the base item the user is viewing make the user click on recommendations, 2) to understand what aspects of the recommended items make the user click on them, and 3) to target those items that generate clicks and to deprioritize those that do not.

To accomplish this task, we focus on the categories of items that trigger more clicks. We identify items that triggered more clicks and items that triggered fewer clicks, and we examine the relationship between the items that triggered clicks and the recommendations that were clicked and not clicked. We also examine the relationship between the content of the base items and the items that are clicked from them. For this, we employ content similarity measures between the base item and the items that are clicked from it.
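As an illustration, the following minimal sketch computes one such measure, TF-IDF cosine similarity, between a base item's text and the texts of the items clicked from it. The choice of TF-IDF with cosine similarity, as well as the placeholder texts and variable names, are illustrative assumptions rather than the exact measure used in this study.

\begin{verbatim}
# A minimal sketch of a content similarity measure between a base
# item and the items clicked from it. TF-IDF plus cosine similarity
# is an assumed instantiation; the texts are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

base_text = "Berlin senate debates new housing policy ..."
clicked_texts = [
    "Rents in Berlin continue to rise ...",
    "Bundesliga: weekend match report ...",
]

vectorizer = TfidfVectorizer()  # German stop words could be supplied
matrix = vectorizer.fit_transform([base_text] + clicked_texts)

# Similarity of each clicked item to the base item (first row).
similarities = cosine_similarity(matrix[0], matrix[1:])[0]
for text, sim in zip(clicked_texts, similarities):
    print(f"{sim:.3f}  {text[:40]}")
\end{verbatim}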
\section{Dataset}
We used a Plista\footnote{http://orp.plista.com/documentation} dataset collected from user-item interactions with tagesspiegel.de, a German online news and opinion portal, over more than two months, from 15-04-2015 to 04-07-2015. Items in Tagesspiegel are manually placed under editorial categories; the $\mathit{9}$ categories that appear in our dataset are listed in Tables \ref{tab:base} and \ref{tab:reco}. The dataset is aggregated from the logs of the recommender systems we operated during our participation in the CLEF NewsREEL 2015 challenge \cite{kille2015overview}. The challenge offered participants the opportunity to plug their recommendation algorithms into Plista and provide recommendations to real users visiting online publishers. Plista is a framework that connects recommendation providers, such as ourselves, with recommendation service requesters, such as online news portals. Participation in the challenge enabled us to collect user-item interaction information such as impressions (a user viewing an item), updates (the appearance of a news item, or a change in the content of an existing item), and clicks (a user clicking on a recommendation).

The three recommenders that we used are two instances of Recency and one instance of RecencyRandom. The Recency algorithm keeps the most recently viewed or updated items and recommends the top $\mathit{k}$ most recent items every time a recommendation request is made. The RecencyRandom recommender keeps the most recent $\mathit{100}$ items at any time and, every time a recommendation request is made, randomly selects the requested number of recommendations from them.
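The two algorithms can be sketched as follows. The pool size of $\mathit{100}$ and the top-$\mathit{k}$ behaviour are as described above, while the class structure and names are illustrative assumptions rather than the code deployed in the challenge.

\begin{verbatim}
# Minimal sketches of the Recency and RecencyRandom recommenders.
# Class and method names are illustrative, not the deployed code.
import random
from collections import OrderedDict

class Recency:
    """Recommend the k most recently viewed or updated items."""
    def __init__(self):
        self.items = OrderedDict()  # item ids, ordered by recency

    def observe(self, item_id):
        # Move (or insert) the item to the most-recent position.
        self.items.pop(item_id, None)
        self.items[item_id] = None

    def recommend(self, k):
        return list(self.items)[-k:][::-1]  # most recent first

class RecencyRandom(Recency):
    """Keep the 100 most recent items; recommend k at random."""
    POOL_SIZE = 100

    def observe(self, item_id):
        super().observe(item_id)
        while len(self.items) > self.POOL_SIZE:
            self.items.popitem(last=False)  # drop the oldest

    def recommend(self, k):
        pool = list(self.items)
        return random.sample(pool, min(k, len(pool)))
\end{verbatim}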
Unfortunately, the click notifications did not indicate whether a click on a recommendation was on our recommendations or on someone else's. Since we know the user, the base item for which we recommended, and the recommended items, we considered a click notification on one of our recommended items to be a click on our recommendation if it happened within $\mathit{5}$ minutes of the time of recommendation. The click counts may be somewhat inflated as a result, since users might not stay on a page for five minutes, but we consider the relative scores indicative of the true differences.

From the combined dataset, we extracted the base item, the category of the base item, the recommended item, the category of the recommended item, the number of times the recommended item was recommended alongside the base item (views), and the number of times the recommended item was clicked from the base item (clicks). We take the number of times a base item was viewed as the number of times recommendations were shown, which we consider a fair assumption since recommendations were requested each time an item was viewed. The CTR column reports clicks as a percentage of views. A sample is presented in Table \ref{tab:sample}.

\begin{table}
\caption{A sample of the dataset. B is the base item id, R is the recommended item id, and B-Cat and R-Cat are the categories of the base item and the recommended item, respectively.}
\centering
\begin{tabular}{|l|l|l|l|l|l|l|}
\hline
B & R & B-Cat & R-Cat & Views & Clicks & CTR (\%) \\
\hline
229397219 & 229495114 & berlin & berlin & 17 & 1 & 5.88\\
230306628 & 230291175 & politik & wissen & 14 & 1 & 7.14\\
40485126 & 225589114 & berlin & politik & 2 & 0 & 0.00\\
\hline
\end{tabular}
\label{tab:sample}
\end{table}
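The five-minute attribution rule described above can be sketched as follows, assuming streams of (user, item, time) recommendation and click events. The window is as described; the data layout and function names are illustrative assumptions.

\begin{verbatim}
# Sketch of the 5-minute click-attribution rule: a click on an item
# we recommended counts as a click on our recommendation if it
# arrives within 5 minutes of the recommendation time.
from datetime import timedelta

WINDOW = timedelta(minutes=5)

def attribute_clicks(recommendations, clicks):
    """Both arguments are lists of (user, item, time) tuples."""
    recs = {}
    for user, item, t in recommendations:
        recs.setdefault((user, item), []).append(t)

    ours = []
    for user, item, t in clicks:
        if any(0 <= (t - rt).total_seconds()
                 <= WINDOW.total_seconds()
               for rt in recs.get((user, item), [])):
            ours.append((user, item, t))
    return ours
\end{verbatim}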
\section{Results and Analysis}
Our dataset consists of a total of $\mathit{288979}$ (base item, recommendation item) pairs. To see the relationship between views and clicks, we first sorted the dataset by views and then normalized the view and click counts by the total number of views and the total number of clicks, respectively. We then selected the top $\mathit{1000}$ pairs and plotted their views and clicks. The reason for normalizing is to be able to plot the two quantities together and compare them; the restriction to $\mathit{1000}$ pairs is because the more pairs we plot, the harder they are to compare visually.

Figure \ref{fig:view_click} shows the plot of views and clicks for these $\mathit{1000}$ pairs. The blue plot is for views and is smooth since the data was sorted by views. The red plot is for the corresponding clicks on recommendations. We observe that the clicks do not follow the views, an indication that clicks do not simply correspond to the number of times a recommendation item is shown alongside a base item; this is precisely the discrepancy we set out to investigate. The ragged click plot shows that some items are more likely to trigger clicks on recommendations than others. What can possibly explain this observation? What causes this difference between the number of views and the number of clicks?

\begin{figure}[t]
\centering
\includegraphics[scale=0.5]{img/tage_view1_click000.pdf}
\caption{Normalized views and clicks on recommendations for the top $\mathit{1000}$ (base item, recommendation item) pairs on Tagesspiegel.}
\label{fig:view_click}
\end{figure}
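The normalization and selection behind Figure \ref{fig:view_click} can be sketched as follows, assuming the pair counts are available in a flat file; the file and column names are illustrative assumptions.

\begin{verbatim}
# Sketch of the plot construction: sort pairs by views, normalize
# views and clicks by their totals, keep the top 1000 pairs, and
# plot both series together.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("pairs.csv")  # columns: base, reco, views, clicks

df = df.sort_values("views", ascending=False)
df["views_norm"] = df["views"] / df["views"].sum()
df["clicks_norm"] = df["clicks"] / df["clicks"].sum()

top = df.head(1000).reset_index(drop=True)
plt.plot(top["views_norm"], color="blue", label="views")
plt.plot(top["clicks_norm"], color="red", label="clicks")
plt.legend()
plt.savefig("tage_view_click.pdf")
\end{verbatim}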
\subsection{Categories of Base and Recommendation Items}
To start to explain the difference between the view plot and the click plot, we aggregated views and clicks by the $\mathit{9}$ categories under which items are placed on the Tagesspiegel website. The aggregation gives us two results: view and click counts aggregated by the categories of the base items, and by the categories of the recommended items. In other words, we are attempting to answer two questions: 1) is there a relationship between the category of the base item and the likelihood that it triggers a click on a recommendation, regardless of the type of recommendation, and 2) is there a relationship between the category of the recommended item and the likelihood that it is clicked upon recommendation? A sketch of this aggregation is given below.
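Assuming the same illustrative data layout as in the earlier sketches, the per-category views, clicks, and CTRs reported in Tables \ref{tab:base} and \ref{tab:reco} can be computed as follows.

\begin{verbatim}
# Sketch of the category-level aggregation: CTR by base category
# and by recommendation category. Column names are illustrative.
import pandas as pd

df = pd.read_csv("pairs.csv")
# columns: base_cat, reco_cat, views, clicks

def ctr_by(df, column):
    g = df.groupby(column)[["views", "clicks"]].sum()
    g["ctr_pct"] = 100 * g["clicks"] / g["views"]
    return g.sort_values("ctr_pct", ascending=False)

base_ctr = ctr_by(df, "base_cat")  # base category table
reco_ctr = ctr_by(df, "reco_cat")  # recommendation category table
print(base_ctr)
print(reco_ctr)
\end{verbatim}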
Tables \ref{tab:base} and \ref{tab:reco} present the views, clicks, and CTR scores, sorted by CTR. We observe that the base categories and the recommendation categories differ with respect to generating clicks. Among the base categories, politics (politik) triggers clicks on recommendations more than any other category, followed by media (medien) and world news (weltspiegel). Special interest categories such as culture (kultur) and knowledge (wissen) trigger the fewest clicks on recommendations. This is consistent with previous findings that special interest portals enjoy fewer clicks than traditional, mainstream news and opinion portals.

\begin{table*}
\parbox{.45\linewidth}{
\centering
\begin{tabular}{|l|l|l|l|}
\hline
Category & Views & Clicks & CTR (\%)\\
\hline
politik&73197&178&0.24\\
medien&22426&50&0.22\\
weltspiegel&37413&77&0.21\\
wirtschaft&30045&61&0.20\\
sport&29812&58&0.19\\
berlin&123595&129&0.10\\
meinung&4611&3&0.07\\
kultur&21840&11&0.05\\
wissen&13500&4&0.03\\
\hline
\end{tabular}
\caption{Views, clicks, and CTRs of the $\mathit{9}$ categories on the basis of the base items. A click for a base item happens when an item recommended alongside it is clicked.}
\label{tab:base}
}
\hfill
\parbox{.45\linewidth}{
\centering
\begin{tabular}{|l|l|l|l|}
\hline
Category & Views & Clicks & CTR (\%)\\
\hline
medien&22147&68&0.31\\
politik&68230&170&0.25\\
berlin&123559&188&0.15\\
weltspiegel&37535&58&0.15\\
sport&28160&36&0.13\\
meinung&4925&5&0.10\\
kultur&23278&21&0.09\\
wissen&15650&10&0.06\\
wirtschaft&32955&15&0.05\\
\hline
\end{tabular}
\caption{Views, clicks, and CTRs of the $\mathit{9}$ categories on the basis of the recommended items. A click for a recommended item happens when it is clicked upon recommendation.}
\label{tab:reco}
}
\end{table*}

Among the recommendation categories, however, it is the media category (medien) that attracts the most clicks upon recommendation, followed by politics (politik) and the local category (berlin). The two weakest recommendation categories are economy (wirtschaft) and knowledge (wissen); knowledge is also among the weakest base categories. Overall, then, the likelihood of a category triggering clicks differs depending on whether it appears as a base category or as a recommendation category, and most categories have higher CTRs as recommendation categories than as base categories.

To gain further insight, we looked at the CTRs of transitions from base category to recommendation category. The aim is to find out whether some base categories are more likely to trigger clicks on particular recommendation categories. The results are presented in Table \ref{heatmap}; a sketch of the computation is given below.

There are some interesting observations in the category-to-category transitions. While the highest transition CTRs from the base categories berlin and politik are to medien, from wirtschaft it is to meinung, and from sport it is to sport. The highest transition CTR from kultur is to the local category berlin, and from weltspiegel it is to medien, followed by politik and berlin. A somewhat surprising result is the high transition CTR from medien to politik. The medien category receives clicks from more base categories than almost any other, while the local category berlin triggers clicks on recommendations of every category.

\begin{table*}
\caption{Transition CTR scores from base categories to recommendation categories. Rows are the categories of the base items; columns are the categories of the recommended items.}
\centering
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|}
\hline
&berlin&politik&wirtschaft&sport&kultur&weltspiegel&meinung&medien&wissen\\
\hline
berlin&0.14&0.08&0.06&0.05&0.06&0.12&0.12&0.16&0.06\\
politik&0.20&0.39&0.06&0.12&0.04&0.30&0&0.73&0.10\\
wirtschaft&0.15&0.40&0.07&0.13&0.36&0.13&0.46&0.21&0\\
sport&0.14&0.27&0&0.68&0.05&0.18&0&0.27&0.07\\
kultur&0.11&0&0&0.06&0.07&0&0&0.07&0\\
weltspiegel&0.24&0.27&0.06&0.13&0.17&0.13&0&0.40&0.18\\
meinung&0.06&0&0&0&0&0&1.85&0.32&0\\
medien&0.10&0.85&0&0.06&0&0.08&0&0.16&0\\
wissen&0.02&0&0&0&0.11&0.15&0&0&0\\
\hline
\end{tabular}
\label{heatmap}
\end{table*}
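The transition matrix in Table \ref{heatmap} can be computed with a straightforward pivot over the same illustrative data layout as above.

\begin{verbatim}
# Sketch of the transition CTR matrix: aggregate views and clicks
# per (base category, recommendation category) pair, then pivot.
import pandas as pd

df = pd.read_csv("pairs.csv")
# columns: base_cat, reco_cat, views, clicks

g = df.groupby(["base_cat", "reco_cat"])[["views", "clicks"]].sum()
g["ctr_pct"] = 100 * g["clicks"] / g["views"]

heatmap = g["ctr_pct"].unstack("reco_cat").fillna(0).round(2)
print(heatmap)  # rows: base categories; columns: reco categories
\end{verbatim}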
\subsection{Item-level Base and Recommendation CTRs}
We now look at two types of item-level CTRs: base item CTRs and recommendation item CTRs. The base item CTR measures how likely a base item is to trigger clicks on the recommendations shown alongside it; we assume that part of clicking on recommendations is a function of the item the user is reading, which is corroborated by the category-level CTRs above, where some categories barely generate clicks at all. The recommendation CTR measures how likely an item is to receive a click when recommended, regardless of the base item.

At the item level, we investigated whether there is a relationship, in triggering clicks on recommendations, between the base items and the recommended items. More specifically, are the base items that are more likely to trigger clicks on recommendations also the items that are more likely to be clicked when they are themselves recommended? To answer this, we computed the CTRs separately for base items and for recommendation items and then intersected the two sets to find the items that appear in both (a sketch of this computation is given at the end of this subsection). It is important to note that we have more distinct items among our recommendations than among our base items: we were asked to provide recommendations only for some items, while all items were available for us to recommend. We had a total of $\mathit{55708}$ distinct recommendation items and $\mathit{18967}$ distinct base items. The intersection resulted in $\mathit{15221}$ items, for which we examined the CTRs they score when used as base items and as recommendation items.

To better visualize the results, we present two plots. In Figure \ref{fig:view_click_base}, we present the plots generated by sorting the results by base CTR scores. We observe that although the base items that are more likely to trigger clicks on recommendations are also items that are likely to be clicked upon recommendation, there are many other items that are likely to be clicked upon recommendation but not when they are base items. To visualize this better, we also sorted the results by recommendation CTRs, obtaining the plots in Figure \ref{fig:view_click_reco}. Here we observe that the base items (the blue lines) that are more likely to trigger clicks on recommendations are a subset of the recommendation items that are more likely to be clicked upon recommendation. The discrepancy we observed might have to do with the fact that we had limited access to base items while having full access to items for recommendation.
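A minimal sketch of the item-level computation follows; the column names and the pandas-based approach are illustrative assumptions about the data layout.

\begin{verbatim}
# Sketch of the item-level comparison: per-item CTRs in the base
# role and the recommendation role, intersected and plotted
# sorted by base CTR.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("pairs.csv")  # columns: base, reco, views, clicks

def item_ctr(df, role):
    g = df.groupby(role)[["views", "clicks"]].sum()
    return 100 * g["clicks"] / g["views"]

base_ctr = item_ctr(df, "base")  # CTR when the item is the base
reco_ctr = item_ctr(df, "reco")  # CTR when the item is recommended

both = base_ctr.index.intersection(reco_ctr.index)
merged = pd.DataFrame({"base": base_ctr.loc[both],
                       "reco": reco_ctr.loc[both]})

merged = merged.sort_values("base").reset_index(drop=True)
plt.plot(merged["base"], color="blue", label="base CTR")
plt.plot(merged["reco"], color="red", label="recommendation CTR")
plt.legend()
plt.savefig("base_reco_ctr_sorted_by_base.pdf")
\end{verbatim}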
\begin{figure}[t]
\centering
\includegraphics[scale=0.45]{img/base_reco_ctr_sorted_by_base.pdf}
\caption{CTRs of base items and recommended items, sorted by base CTR. The blue plot is the base CTR and the red plot is the recommendation CTR.}
\label{fig:view_click_base}
\end{figure}

\begin{figure}[t]
\centering
\includegraphics[scale=0.45]{img/base_reco_ctr_sorted_by_reco.pdf}
\caption{CTRs of base items and recommended items, sorted by recommendation CTR. The blue plot is the base CTR and the red plot is the recommendation CTR.}
\label{fig:view_click_reco}
\end{figure}

\section{Discussion and Conclusion}
In this study, we attempted to explain the factors that cause clicks on recommendations. We specifically investigated whether clicks on recommendations are a function of the base items or of the recommended items, by looking at the categories of items and the transitions between categories. We found that the category of an item does explain some of the differences in the likelihood of triggering clicks, both for base items and for recommended items, in the sense that some base categories and some recommendation categories are more likely to trigger clicks than others. There is, however, a difference between how likely a category is to trigger clicks as a base category and as a recommendation category. As a base category, politics is the most likely to trigger clicks on recommendations, followed by media; as a recommendation category, it is media, followed by politics, that attracts the most clicks.

The results suggest that clicking on recommendations is a function of both the base items and the recommended items: some categories are unlikely to generate clicks as base categories, which suggests that recommendations shown alongside their items will rarely be clicked, and some are unlikely to generate clicks as recommendation categories, which suggests that their items should rarely be recommended. The results also show that the performance of a category as base and as recommendation can differ considerably.

The investigation of the transition CTRs shows which base categories are more likely to trigger clicks on which recommendation categories. This suggests that recommendation can be improved by recommending items of a category alongside those base categories where they are more likely to be clicked. For example, we are more likely to receive clicks if we recommend media items alongside politics and local (berlin) items; similarly, recommending sports items alongside sports items is much more likely to trigger clicks. These results suggest that recommender systems can be improved by leveraging the category information of items.

We hope that this work contributes to the understanding of factors affecting recommender systems. We have shown that category-level information can go a long way in explaining why clicks on recommendations happen, and the item-level analysis showed a relationship between the base items that are more likely to trigger clicks and the recommendation items that are more likely to be clicked. Together, these findings suggest that leveraging information at both the category and the item level is important for improving recommender systems.

As future work, we would like to investigate the factors that lead to clicks on recommendations using a larger dataset and at the content level of the items, comparing the content of clicked and rejected recommendations with the content of the base item, similar to the separation of recommended movies into viewed and ignored in \cite{nguyen2014exploring}. We would also like to examine whether the highest- and lowest-performing base items coincide with the highest- and lowest-performing recommendation items, and to study items whose CTRs vary widely across the contexts in which they are recommended, with a view to finding the causes of this variance in terms of categories and content.

\bibliographystyle{abbrv}
\bibliography{ref}

\end{document}