

neural-collaborative-filtering

Neural collaborative filtering (NCF) is a deep-learning-based framework for making recommendations. The key idea is to learn user-item interactions using neural networks. Check the following paper for details about NCF.

He, Xiangnan, et al. "Neural collaborative filtering." Proceedings of the 26th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee, 2017.

The authors of NCF published a nice implementation written in TensorFlow (Keras). This repo instead provides my implementation written in PyTorch. I hope it is helpful to PyTorch fans. Have fun playing with it!

Run!

python train.py

Modify the config in train.py to change the hyper-parameters.
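For reference, the configs in train.py are plain Python dicts; a minimal sketch (the key names below follow the neumf_config excerpt quoted in the issues section of this page, and the subset shown here is illustrative, not the full config):

```python
# Sketch of a train.py-style config dict; edit values here to tune training.
gmf_config = {
    'num_epoch': 200,           # number of training epochs
    'batch_size': 1024,         # mini-batch size
    'optimizer': 'adam',
    'adam_lr': 1e-3,            # learning rate for Adam
    'num_negative': 4,          # negative samples per positive interaction
    'l2_regularization': 0.0,   # weight decay strength
    'use_cuda': False,          # set True to train on GPU
}
```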

Dataset

The MovieLens 1M dataset is used to test the repo.

Files

data.py: prepare train/test dataset

utils.py: some handy functions for model training, etc.

metrics.py: evaluation metrics including hit ratio(HR) and NDCG

gmf.py: generalized matrix factorization model

mlp.py: multi-layer perceptron model

neumf.py: fusion of gmf and mlp

engine.py: training engine

train.py: entry point for training an NCF model

Performance

The hyper-parameters are not tuned. Better performance can be achieved with careful tuning, especially for the MLP model. Pretraining the user and item embeddings might help improve the MLP model's performance.

Experimental results with num_negative_samples=4 and dim_latent_factor=8 are shown below.

GMF vs. MLP

Note that the MLP model was trained from scratch, but the authors suggest that performance might be boosted by pretraining the embedding layers with the GMF model.
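The pretraining idea can be sketched as follows (a minimal illustration of the concept, not the repo's actual load_pretrain_weights code): copy trained GMF embedding weights into a fresh MLP model's embedding layers before training it.

```python
import torch

# Two embedding tables of the same shape: one from a trained GMF model,
# one belonging to a freshly initialized MLP model (MovieLens 1M sizes).
gmf_user_embedding = torch.nn.Embedding(6040, 8)  # stands in for the trained GMF table
mlp_user_embedding = torch.nn.Embedding(6040, 8)  # the MLP's table to be warm-started

# Warm-start the MLP embedding with the GMF weights (same for item embeddings).
mlp_user_embedding.weight.data.copy_(gmf_user_embedding.weight.data)
```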

NeuMF: pretrain vs. no pretrain

The pretrained version converges much faster.

L2 regularization for GMF model

Large L2 regularization can collapse training, producing HR=0.0 and NDCG=0.0.

L2 regularization for MLP model

A small amount of L2 regularization seems to improve the performance of the MLP model.

L2 for MLP

MLP with pretrained user/item embedding

Pre-training the MLP model with user/item embeddings from the trained GMF gives better results.

MLP network size = [16, 64, 32, 16, 8]

Pretrain for MLP

Implicit feedback without pretrain

Ratings are binarized to 1 (interacted) or 0 (no interaction), and the model is trained from scratch.
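The binarization step described above can be sketched as follows (the helper name and tuple layout are assumptions for illustration, not the repo's data.py API):

```python
# Implicit-feedback preprocessing: any observed rating becomes 1.0,
# an explicit zero (no interaction) stays 0.0.
def binarize(ratings):
    """ratings: list of (user, item, raw_rating) tuples."""
    return [(u, i, 1.0 if r > 0 else 0.0) for u, i, r in ratings]
```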

CPU training

The code can also run on CPUs and is actually pretty fast for small datasets.

Requirements

The repo works under torch 1.0 (GPU & CPU) and torch 2.3.1 (CPU; GPU yet to be tested). You can find the old versions in tags.

Contributors

ruihongqiu, yihong-chen


Issues

NeuMF algorithm

Dear Yihong Chen,

I am very interested in this project. Thank you for sharing it.

I tried running the NeuMF algorithm using the default config that you provide in your code (see below), but I am getting bad results.

Do I need to change the parameters?
Do you maybe have a model that you already trained, and you could share with me?

Thanks,
Nir Ailon

neumf_config = {'alias': 'pretrain_neumf_factor8neg4',
                'num_epoch': 200,
                'batch_size': 1024,
                'optimizer': 'adam',
                'adam_lr': 1e-3,
                'num_users': 6040,
                'num_items': 3706,
                'latent_dim_mf': 8,
                'latent_dim_mlp': 8,
                'num_negative': 4,
                'layers': [16, 32, 16, 8],  # layers[0] is the concat of latent user vector & latent item vector
                'l2_regularization': 0.01,
                'use_cuda': True,
                'device_id': 7,
                'pretrain': True,
                'pretrain_mf': 'checkpoints/{}'.format('gmf_factor8neg4_Epoch100_HR0.6391_NDCG0.2852.model'),
                'pretrain_mlp': 'checkpoints/{}'.format('mlp_factor8neg4_Epoch100_HR0.5606_NDCG0.2463.model'),
                'model_dir': 'checkpoints/{}_Epoch{}_HR{:.4f}_NDCG{:.4f}.model'}

Test DataLoader for large datasets

Maybe a test DataLoader that iterates over the test set would be more user-friendly for large datasets? Simply treating all test data as one huge batch can cause an OOM error.
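The suggestion can be sketched as follows (variable names and the model call are hypothetical, not the repo's evaluation code): wrap the evaluation pairs in a DataLoader so scoring happens in bounded chunks instead of one giant batch.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# MovieLens 1M leave-one-out protocol: each user is scored against
# 100 candidates (1 held-out positive + 99 sampled negatives).
num_users, num_candidates = 6040, 100
users = torch.arange(num_users).repeat_interleave(num_candidates)
items = torch.randint(0, 3706, (users.numel(),))  # placeholder candidate items

# Iterate the test set in bounded mini-batches rather than one huge tensor.
eval_loader = DataLoader(TensorDataset(users, items), batch_size=4096)

def score_in_batches(model, loader):
    """Score (user, item) pairs chunk by chunk; `model` is hypothetical."""
    with torch.no_grad():
        return torch.cat([model(u, i) for u, i in loader])
```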

(screenshot omitted: it showed the test batch being extremely large.)

L2 regularization

Hi,
I notice that in your config code for MLP and NeuMF you set l2_regularization, but I can't find L2 regularization in your model or loss implementation. Could you help me understand how the L2 regularization is implemented? Thanks.
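In PyTorch, L2 regularization is commonly applied through the optimizer's weight_decay argument rather than as an explicit loss term, which may be what this repo does; a minimal sketch:

```python
import torch

# Toy model standing in for GMF/MLP/NeuMF.
model = torch.nn.Linear(8, 1)

# weight_decay adds an L2 penalty on the parameters inside the optimizer's
# update step, so no l2 term appears in the loss function itself.
optimizer = torch.optim.Adam(model.parameters(),
                             lr=1e-3,
                             weight_decay=0.01)  # l2_regularization value
```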

size mismatch for fc_layers

I want to calculate the cosine similarity of items, so I need their feature vectors. However, NeuMF has a complicated structure; could you tell me which tensor in the code is the item feature vector?
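One hedged possibility: the rows of the item embedding table (embedding_item in this repo's models) can serve as item feature vectors for cosine similarity. A sketch using a freshly initialized table (in practice one would take the trained model's table):

```python
import torch
import torch.nn.functional as F

# Item embedding table with MovieLens 1M sizes (3706 items, dim 8).
embedding_item = torch.nn.Embedding(3706, 8)

# Unit-normalize each row so that a plain dot product is cosine similarity.
item_vecs = F.normalize(embedding_item.weight.detach(), dim=1)
similarity = item_vecs @ item_vecs.t()  # (num_items, num_items) cosine matrix
```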

The GPU utilization is low

When I run the code on an RTX 4090, the GPU utilization is always 0%, and I haven't changed the code. Is that normal?

question about the version of pytorch

Hi,

Thanks for your kind sharing.
I want to reproduce your results on my device, but my training results are very bad: the evaluation HR and NDCG of some epochs are 0.0000. Do you have any idea what could cause this? And could you tell me which PyTorch version you use on your side?

Thanks.

The NDCG metric

The NDCG metric here is defined as 1/log2(rank_i + 1) according to Xiangnan He's paper. Therefore, the formula in metrics.py should be log(2)/log(x+1) instead of log(2)/log(x+2), because the rank here starts from 1. Please update your code, thanks!
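For reference, the metric the issue asks for can be sketched as follows (the function name is illustrative, not the metrics.py API): with a single relevant item at 1-based position rank, NDCG reduces to 1/log2(rank + 1), so a hit at rank 1 scores exactly 1.0.

```python
import math

def ndcg_single_hit(rank):
    """NDCG for one relevant item at 1-based position `rank`.

    log(2)/log(rank + 1) is the same quantity as 1/log2(rank + 1).
    """
    return math.log(2) / math.log(rank + 1)
```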

The code implementation does not match the original paper

The function _sample_negative in data.py appears to be incorrect. It currently uses all ratings to generate negative samples, which excludes each user's test items from the negative pool. But the test items should be treated as negative samples in the training set.
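The fix the issue suggests can be sketched like this (the function and argument names are hypothetical, not the repo's data.py API): draw negatives from all items a user has not interacted with in the training set only, so held-out test items remain eligible as negatives.

```python
import random

def sample_negatives(train_items_by_user, num_items, num_negatives=4, seed=0):
    """Sample negatives against *training* interactions only.

    train_items_by_user: dict mapping user id -> iterable of train item ids.
    """
    rng = random.Random(seed)
    all_items = set(range(num_items))
    negatives = {}
    for user, pos_items in train_items_by_user.items():
        # Held-out test items are NOT excluded here, matching the paper's protocol.
        candidates = sorted(all_items - set(pos_items))
        negatives[user] = rng.sample(candidates, num_negatives)
    return negatives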

Missing layer and training workflow

Hi,

Thank you for the work you have done. It is extremely useful and the code is very clean. I wish every paper implementation could be like this :) I have noticed three major discrepancies from what is written in the original paper. Could you explain to me if they are on purpose and if yes, what is the reasoning behind them?

  1. I think that in the NeuMF architecture you are missing one linear layer between the embeddings and the collective layer for GMF and MLP; according to the paper's schema there should be one.

  2. I am a bit lost with loading pretrained weights in MLP. I see that you offer the possibility to load pretrained GMF embeddings into the MLP model. I believe that, according to the paper, they are separate embeddings and are not mixed in the original work. Does this change provide a noticeable improvement?

Add LICENSE.txt

Could you please add a license here?

Thanks for your work on this project!

'checkpoints/gmf_factor8neg4_Epoch100_HR0.6391_NDCG0.2852.model'

mldl@ub1604:/ub16_prj/neural-collaborative-filtering/src$ python3 train.py
Range of userId is [0, 6039]
Range of itemId is [0, 3705]
MLP(
(embedding_user): Embedding(6040, 8)
(embedding_item): Embedding(3706, 8)
(fc_layers): ModuleList(
(0): Linear(in_features=16, out_features=64, bias=True)
(1): Linear(in_features=64, out_features=32, bias=True)
(2): Linear(in_features=32, out_features=16, bias=True)
(3): Linear(in_features=16, out_features=8, bias=True)
)
(affine_output): Linear(in_features=8, out_features=1, bias=True)
(logistic): Sigmoid()
)
Traceback (most recent call last):
File "train.py", line 79, in
engine = MLPEngine(config)
File "/home/mldl/ub16_prj/neural-collaborative-filtering/src/mlp.py", line 63, in init
self.model.load_pretrain_weights()
File "/home/mldl/ub16_prj/neural-collaborative-filtering/src/mlp.py", line 47, in load_pretrain_weights
resume_checkpoint(gmf_model, model_dir=config['pretrain_mf'], device_id=config['device_id'])
File "/home/mldl/ub16_prj/neural-collaborative-filtering/src/utils.py", line 14, in resume_checkpoint
map_location=lambda storage, loc: storage.cuda(device=device_id)) # ensure all storage are on gpu
File "/usr/local/lib/python3.5/dist-packages/torch/serialization.py", line 301, in load
f = open(f, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'checkpoints/gmf_factor8neg4_Epoch100_HR0.6391_NDCG0.2852.model'
mldl@ub1604:/ub16_prj/neural-collaborative-filtering/src$

How to apply NCF to datasets that only have the number of interactions?

As the question states, how could this be applied to a dataset that only has the number of interactions between a user and an item? MovieLens has ratings, which are explicit feedback, but how could this model be applied to a dataset like the Audioscrobbler dataset, whose implicit feedback is the number of times a user listened to an artist? Here is an example of recommendations implementing ALS on that dataset: http://www.gousios.gr/courses/bigdata/audioscrobbler.html
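One hedged answer: count data can be handled the same way this repo binarizes MovieLens ratings for the implicit-feedback setting, by treating any positive count as an interaction (the helper name and tuple layout below are assumptions for illustration):

```python
def counts_to_implicit(plays):
    """plays: list of (user, artist, play_count) tuples.

    Any positive play count becomes an interaction label of 1.0;
    zero-count pairs are dropped (negatives are sampled separately).
    """
    return [(u, a, 1.0) for u, a, c in plays if c > 0]
```

The raw count could additionally be used as a per-sample confidence weight on the loss, as ALS-style models do, but plain binarization is the closest match to NCF's training setup.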

AssertionError

AssertionError Traceback (most recent call last)
in
82 # Specify the exact model
83 config = gmf_config
---> 84 engine = GMFEngine(config)
85 # config = mlp_config
86 # engine = MLPEngine(config)

1 frames
/content/utils.py in use_cuda(enabled, device_id)
19 def use_cuda(enabled, device_id=0):
20 if enabled:
---> 21 assert torch.cuda.is_available(), 'CUDA is not available'
22 torch.cuda.set_device(device_id)
23

AssertionError: CUDA is not available
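A defensive pattern that avoids this assertion (an assumption for illustration, not the repo's exact code) is to enable CUDA only when it is actually available and fall back to CPU otherwise:

```python
import torch

# Probe for CUDA instead of asserting on it; the config's use_cuda flag
# can then be derived from the environment rather than hard-coded to True.
use_cuda = torch.cuda.is_available()
device = torch.device('cuda:0' if use_cuda else 'cpu')
```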
