RecSys ’17: Translation-based Recommendation

Arthur Lee
May 16, 2020


Recommendation system paper challenge (12/50)

paper link

What problem do they solve?

  1. Recommending the next item for a user, given the user's historical click sequence
  2. Item-to-item recommendation

What model do they propose?

TransRec: Translation-based Recommendation

From this figure, we can clearly see that the next item for a given user depends on both the user's translation vector and the previous item's vector.

They use a trick to resolve the cold-user problem: a global translation vector.

For a cold user, we don't have enough information to learn a user-specific translation vector, so that user's translation vector is dominated by the global translation vector.
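The translation idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the dimensions, random initialization, and zero-initialized user offsets are assumptions for demonstration; the names (gamma for item embeddings, beta for item biases, t for translation vectors) follow the paper's notation.

```python
import numpy as np

K = 10           # embedding dimension (illustrative)
n_items = 100
n_users = 50

rng = np.random.default_rng(0)
gamma = rng.normal(scale=0.1, size=(n_items, K))   # item embeddings
beta = np.zeros(n_items)                           # item bias terms
t_global = rng.normal(scale=0.1, size=K)           # global translation vector
t_user = np.zeros((n_users, K))                    # user-specific offsets
                                                   # (still ~zero for cold users)

def score(u, prev_item, next_item):
    """Higher score = next_item is more likely to follow prev_item for user u.

    The translation is t_global + t_user[u]; for a cold user whose offset
    has barely been updated, it is dominated by the global vector.
    """
    t_u = t_global + t_user[u]
    dist = np.linalg.norm(gamma[prev_item] + t_u - gamma[next_item])
    return beta[next_item] - dist
```

At recommendation time, we rank all candidate items j by score(u, i, j), where i is the user's most recent item.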

Inferring the Parameters

They apply S-BPR (Sequential Bayesian Personalized Ranking) to optimize formula (1).


They apply SGA (stochastic gradient ascent) to update the parameters.
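A rough sketch of one such update, assuming a squared-L2 distance so the gradients stay simple (the paper allows other metrics; the learning rate, regularizer, and toy sizes here are all illustrative): S-BPR samples a (previous item, observed next item, negative item) triple and takes a stochastic gradient ascent step that pushes the observed item's score above the negative's.

```python
import numpy as np

rng = np.random.default_rng(1)
K, n_items = 8, 20
gamma = rng.normal(scale=0.1, size=(n_items, K))   # item embeddings
beta = np.zeros(n_items)                           # item biases
t_u = rng.normal(scale=0.1, size=K)                # (global + user) translation, precombined

def score(i, j):
    """beta_j minus squared distance from translated item i to item j."""
    v = gamma[i] + t_u
    return beta[j] - np.sum((v - gamma[j]) ** 2)

def sbpr_step(i, j_pos, j_neg, lr=0.05, reg=1e-3):
    """One SGA step: increase score(i, j_pos) - score(i, j_neg)."""
    global t_u
    # sigmoid(-(s_pos - s_neg)): how "wrong" the current ranking is
    delta = 1.0 / (1.0 + np.exp(score(i, j_pos) - score(i, j_neg)))
    v = gamma[i] + t_u
    g_pos = 2 * (v - gamma[j_pos])
    g_neg = 2 * (v - gamma[j_neg])
    # ascend the gradient of (s_pos - s_neg), with L2 regularization
    beta[j_pos]  += lr * (delta - reg * beta[j_pos])
    beta[j_neg]  += lr * (-delta - reg * beta[j_neg])
    gamma[j_pos] += lr * (delta * g_pos - reg * gamma[j_pos])
    gamma[j_neg] += lr * (-delta * g_neg - reg * gamma[j_neg])
    gamma[i]     += lr * (delta * (g_neg - g_pos) - reg * gamma[i])
    t_u          += lr * (delta * (g_neg - g_pos) - reg * t_u)
```

Repeating sbpr_step over sampled triples steadily widens the margin between observed and negative items.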

Data — personalized recommendation

baseline models — personalized recommendation

PopRec: Ranking items according to their popularity

Bayesian Personalized Ranking (BPR-MF): an item recommendation model that uses Matrix Factorization as the underlying predictor, without sequential signals.

Factorized Markov Chain (FMC): a non-personalized model that factorizes the item-to-item transition matrix.

Factorized Personalized Markov Chain (FPMC): combining Matrix Factorization (M, N) and factorized Markov Chains (P, Q)

Hierarchical Representation Model (HRM): HRM extends FPMC by applying aggregation operations like max pooling to capture non-linear interactions.

Personalized Ranking Metric Embedding (PRME): PRME models personalized Markov behavior by the summation of two Euclidean distances

Evaluation Metric — personalized recommendation

Result — personalized recommendation

FPMC and PRME perform better than FMC and BPR-MF on denser data but worse on sparse data. In other words, non-personalized models perform better on sparse data, while personalized models have more capacity and therefore overfit easily when data is sparse.

TransRec outperforms other methods in nearly all cases.

Data — item-item recommendation

models — item-item recommendation

They utilize content-based item features and add one additional embedding E(·) to map those features into the relation space.

TransRec:

Weighted Nearest Neighbor (WNN):

WNN measures the dissimilarity between pairs of items with a weighted Euclidean distance.
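Concretely, this can be sketched as follows; the feature vectors and weight values here are made-up examples (in the model, the weight vector is learned):

```python
import numpy as np

def wnn_distance(f_i, f_j, w):
    """Weighted Euclidean distance between item feature vectors f_i, f_j."""
    return np.sqrt(np.sum(w * (f_i - f_j) ** 2))

f_i = np.array([1.0, 0.0, 2.0])
f_j = np.array([0.0, 0.0, 0.0])
w = np.array([1.0, 1.0, 0.25])   # down-weight the third feature
# wnn_distance(f_i, f_j, w) -> sqrt(1 + 0 + 1) = sqrt(2)
```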

Low-rank Mahalanobis Transform (LMT):

LMT learns a single low-rank Mahalanobis transform matrix W to embed all items into the relation space.

More Detail: Mahalanobis Distance — Understanding the math with examples
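The key trick, sketched below with an assumed random W (learned in the actual model; sizes are illustrative): a Mahalanobis distance with M = WᵀW is just a Euclidean distance between the low-rank embeddings W·f.

```python
import numpy as np

rng = np.random.default_rng(2)
d, r = 100, 10                     # raw feature dim, low rank (r << d)
W = rng.normal(scale=0.1, size=(r, d))   # single low-rank transform

def lmt_distance(f_i, f_j):
    """Mahalanobis distance with M = W^T W, computed as the Euclidean
    distance between the r-dimensional embeddings W @ f_i and W @ f_j."""
    return np.linalg.norm(W @ f_i - W @ f_j)
```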

Mixtures of Non-metric Embeddings (Monomer):

Monomer extends LMT by learning mixtures of low-rank embeddings to uncover more complex reasons to explain the relationships between items.
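A heavily simplified sketch of the mixture idea: several low-rank transforms, each capturing a different "reason" items relate, with the overall dissimilarity mixing the per-transform distances. The softmax mixing weights and all shapes here are assumptions for illustration, not Monomer's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(3)
d, r, n_mix = 100, 10, 3
Ws = rng.normal(scale=0.1, size=(n_mix, r, d))   # one transform per mixture component
theta = rng.normal(size=n_mix)                   # mixing logits (learned in the model)

def monomer_distance(f_i, f_j):
    """Mixture of per-embedding distances, weighted by softmax(theta)."""
    dists = np.array([np.linalg.norm(W @ f_i - W @ f_j) for W in Ws])
    probs = np.exp(theta) / np.exp(theta).sum()
    return float(probs @ dists)
```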

Result — item-item recommendation

Other related blogs:

Beyond Clicks: Dwell Time for Personalization

RecSys’15: Context-Aware Event Recommendation in Event-based Social Networks

RecSys’11: Utilizing related products for post-purchase recommendation in e-commerce

RecSys16: Adaptive, Personalized Diversity for Visual Discovery

RecSys ’16: Local Item-Item Models for Top-N Recommendation

COLING’14: Deep Convolutional Neural Networks for Sentiment Analysis of Short Texts

NAACL’19: Utilizing BERT for Aspect-Based Sentiment Analysis via Constructing Auxiliary Sentence

Best paper in RecSys:

https://recsys.acm.org/best-papers/

My Website:

https://light0617.github.io/#/


Written by Arthur Lee

A machine learning engineer in the Bay Area in the United States
