Recommendation system paper challenge (12/50)
What problem do they solve?
- Recommend items to a user, given the user's historical click sequence
- Item-to-item recommendation
What model do they propose?
TransRec: Translation-based Recommendation
From the paper's figure, we can see that the next item for a given user depends on the sum of the user's translation vector and the previous item's vector: items are points in a latent space, and each user is a translation that moves from the previous item toward the next one.
A global translation vector is the trick that resolves the cold-user problem: each user's translation is the sum of a global vector and a user-specific vector.
For a cold user, we don't have enough information to learn a reliable user-specific translation vector, so that user's translation is dominated by the global vector.
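The translation idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the names, dimensions, random parameters, and the plain Euclidean distance are all assumptions.

```python
import numpy as np

# Minimal TransRec-style scoring sketch (illustrative names and shapes).
# Each item j has an embedding gamma[j] and a bias beta[j]; each user u has a
# personal translation vector t_user[u] on top of a global translation t_global.
rng = np.random.default_rng(0)
n_users, n_items, k = 5, 10, 8
gamma = rng.normal(size=(n_items, k))
beta = rng.normal(size=n_items)
t_global = rng.normal(size=k)
t_user = rng.normal(scale=0.1, size=(n_users, k))  # near zero for cold users

def score(u, prev_item, j):
    """Higher score = item j is a more likely successor of prev_item for user u."""
    translation = t_global + t_user[u]   # cold users fall back to t_global
    d = np.linalg.norm(gamma[prev_item] + translation - gamma[j])
    return beta[j] - d                   # item bias minus translated distance

# Rank candidate next items for user 0 after a click on item 3:
ranking = np.argsort([-score(0, 3, j) for j in range(n_items)])
```

Note how a cold user with `t_user[u] ≈ 0` is scored almost entirely by the global translation, which is exactly the fallback behavior described above.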
Inferring the Parameters
They apply S-BPR (Sequential Bayesian Personalized Ranking) to optimize formula (1), updating the parameters with stochastic gradient ascent (SGA).
Data — personalized recommendation
baseline models — personalized recommendation
PopRec: Ranking items according to their popularity
Bayesian Personalized Ranking (BPR-MF): an item recommendation model that uses Matrix Factorization as the underlying predictor, without sequential signals.
Factorized Markov Chain (FMC): a non-personalized model that factorizes the item-to-item transition matrix, capturing sequential signals only.
Factorized Personalized Markov Chain (FPMC): combines Matrix Factorization (M, N) and factorized Markov Chains (P, Q).
Hierarchical Representation Model (HRM): HRM extends FPMC by applying aggregation operations like max pooling to capture non-linear interactions.
Personalized Ranking Metric Embedding (PRME): PRME models personalized Markov behavior as the weighted sum of two Euclidean distances (a user-item distance and a sequential item-item distance).
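To make the contrast between these baselines concrete, here is a toy sketch of their scoring functions. The factor names follow the FPMC description above (M, N for the user-item matrix; P, Q for the item-item transition matrix); all shapes and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_users, n_items, k = 4, 6, 3
M, N = rng.normal(size=(n_users, k)), rng.normal(size=(n_items, k))  # user-item factors
P, Q = rng.normal(size=(n_items, k)), rng.normal(size=(n_items, k))  # transition factors

def bpr_mf(u, j):
    """Personalization only: no sequential signal."""
    return M[u] @ N[j]

def fmc(i, j):
    """Sequential signal only: no personalization."""
    return P[i] @ Q[j]

def fpmc(u, i, j):
    """FPMC combines both components additively."""
    return bpr_mf(u, j) + fmc(i, j)
```

This additive structure is why FPMC inherits both the strengths and the extra capacity (and overfitting risk) of its two components.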
Evaluation Metric — personalized recommendation
Result — personalized recommendation
FPMC and PRME outperform FMC and BPR-MF on denser data but perform worse on sparse data. This suggests that non-personalized models do better when data is sparse, while personalized models have more capacity and therefore overfit easily on sparse data.
TransRec outperforms other methods in nearly all cases.
Data — item-item recommendation
models — item-item recommendation
They use content-based methods to extract item features and add one additional embedding E(.).
Weighted Nearest Neighbor (WNN):
WNN measures the dissimilarity between pairs of items with a weighted Euclidean distance.
Low-rank Mahalanobis Transform (LMT):
LMT learns a single low-rank Mahalanobis transform matrix W to embed all items into a relation space.
Mixtures of Non-metric Embeddings (Monomer):
Monomer extends LMT by learning mixtures of low-rank embeddings to uncover more complex reasons to explain the relationships between items.
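A toy sketch of the two simpler item-to-item distance functions described above (WNN and LMT), operating on content-based item feature vectors. The feature vectors, the per-dimension weights, and the low-rank transform are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
feat_dim, rank = 5, 2
w = rng.uniform(size=feat_dim)          # per-dimension weights (WNN)
W = rng.normal(size=(rank, feat_dim))   # low-rank Mahalanobis transform (LMT)

def wnn_distance(f_i, f_j):
    """Weighted Euclidean distance between two item feature vectors."""
    return np.sqrt(np.sum(w * (f_i - f_j) ** 2))

def lmt_distance(f_i, f_j):
    """Distance after embedding both items with the low-rank transform W."""
    return np.linalg.norm(W @ f_i - W @ f_j)

f_a, f_b = rng.normal(size=feat_dim), rng.normal(size=feat_dim)
```

Monomer generalizes the single transform W into a mixture of several low-rank embeddings, which is what lets it capture more complex relationships between items.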
Result — item-item recommendation