
rechorus's People

Contributors

dependabot[bot], sasrec, thuwangcy


rechorus's Issues

Add type hint by typing

It would be better if type hints (from the typing module) were added throughout the code. For example, when we implement a subclass of the base class BaseModel, we have no idea about the content and structure of the input variable(s).
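For illustration, a minimal sketch of the kind of annotation this would enable (the feed_dict keys follow the Linear example further down this page; the exact ReChorus signatures are an assumption, not the actual API):

from typing import Dict
import torch
import torch.nn as nn

class MyModel(nn.Module):
    # Hypothetical subclass showing the requested type hints.
    def forward(self, feed_dict: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
        i_ids = feed_dict['item_id']  # [batch_size, n_candidates]
        prediction = torch.zeros(i_ids.shape, dtype=torch.float)  # placeholder scores
        return {'prediction': prediction}

model = MyModel()
out = model({'item_id': torch.zeros(2, 3, dtype=torch.long)})  # out['prediction']: [2, 3]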

CFKG model

I would like to ask about the CFKG model: which file contains the constructed KG dataset? Is it also in the data directory?

ContraRec training error

Hello author, I tested ContraRec training with PyTorch 1.8 and encountered the error shown below.
[error screenshot]
Is this caused by torch not being version 1.1?

Fix bug in NARM

Issue: running NARM raises a runtime error in the pack_padded_sequence call (the lengths tensor must be a CPU int64 tensor).

location: ReChorus/src/models/sequential/NARM.py line 61

Reason: #18

Fix Solution: add .cpu() to sort_his_lengths.
history_packed = torch.nn.utils.rnn.pack_padded_sequence(sort_his_vectors, sort_his_lengths.cpu(), batch_first=True)

great job

Hi, dear,
well done. I will try to reproduce the results.
By the way, could it be used in the recall stage, and what are the metrics?

Thanks

Questions about the paper

Nice paper, which inspires me a lot! But I still have two questions:

  1. Why does ReChorus need the category information?
  2. How do you get relations like “complement” or “substitute” between the items? The original Amazon review data just lists relations like “buy after viewing” or “also bought_together”.

Thank you for replying.

About the model KDA

In the KDA model, following your experimental parameter settings, I find that for now I can only reach the results in your GitHub table (HR@5: 0.5174, NDCG@5: 0.3876), but the result in the paper is 0.4201, and even with the more fine-grained parameter settings in run.sh the result does not reach 0.47 for the time being. Could you please give me some guidance? Looking forward to your reply.

About the baseline TiSASRec

When computing user_min_interval, you use interval_matrix = np.abs(time_seqs[:, None] - time_seqs[None, :]). As the diagonal of the matrix is always all-zero, min_interval = np.min(interval + (interval <= 0) * 0xFFFF) will be 0xFFFF for all users. By the way, I corrected the computation of user_min_interval, but the performance of TiSASRec did not improve, so this does not seem to be important.
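For reference, a minimal sketch of one possible correction, masking the diagonal explicitly (toy data; this is not the exact TiSASRec helper):

import numpy as np

time_seqs = np.array([3, 10, 10, 42])                        # one user's timestamps (toy)
interval = np.abs(time_seqs[:, None] - time_seqs[None, :])   # pairwise time gaps
np.fill_diagonal(interval, 0xFFFF)                           # self-gaps must never win the minimum
interval[interval == 0] = 0xFFFF                             # ignore repeated timestamps as well
min_interval = int(interval.min())                           # smallest genuine gap: 7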

Question about the evaluate method

Hi Wang,

Recently, I was reading the source code of ReChorus. There is something that confused me in BaseRunner.py.

It's about evaluate_method in BaseRunner.py on line 48.

def evaluate_method(predictions: np.ndarray, topk: list, metrics: list) -> Dict[str, float]:
    """
    :param predictions: (-1, n_candidates) shape, the first column is the score for ground-truth item
    :param topk: top-K value list
    :param metrics: metric string list
    :return: a result dict, the keys are metric@topk
    """
    evaluations = dict()
    sort_idx = (-predictions).argsort(axis=1)  
    gt_rank = np.argwhere(sort_idx == 0)[:, 1] + 1 
    for k in topk:
        hit = (gt_rank <= k)
        for metric in metrics:
            key = '{}@{}'.format(metric, k)
            if metric == 'HR':
                evaluations[key] = hit.mean()
            elif metric == 'NDCG':
                evaluations[key] = (hit / np.log2(gt_rank + 1)).mean()
            else:
                raise ValueError('Undefined evaluation metric: {}.'.format(metric))
    return evaluations

As the comment says, the first column indicates the score of the ground truth. sort_idx contains the index values of the array in descending order of score, and sort_idx == 0 represents the highest record.

My confusion is about the code hit = (gt_rank <= k). In my understanding, gt_rank refers to the item with the highest score, which is not necessarily the ground truth but may also be a randomly sampled item. Could you please explain this to me?
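For reference, a tiny worked example (made-up scores) of what sort_idx and gt_rank hold when the ground-truth score sits in column 0, as the docstring states:

import numpy as np

predictions = np.array([[0.9, 0.3, 0.7],   # ground truth (column 0) ranks 1st
                        [0.2, 0.8, 0.5]])  # ground truth ranks 3rd
sort_idx = (-predictions).argsort(axis=1)       # [[0, 2, 1], [1, 2, 0]]
gt_rank = np.argwhere(sort_idx == 0)[:, 1] + 1  # rank of the ground-truth item: [1, 3]
hit = (gt_rank <= 2)                            # [True, False] -> HR@2 = 0.5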

Questions about the evaluate procedure

Thanks for providing such an excellent framework; it is very easy to understand and use!
But I have a few questions:

  1. The data split can usually also be, e.g., 8:1:1, but the ReChorus framework seems to be suited only to the sequential style?
  2. Because of problem 1, the metric computation in evaluate only applies when there is a single ground-truth item, which is clearly insufficient.
  3. There are many more evaluation metrics; would it be better to write them as separate modules instead of mixing them together?

Compared with RecBole, ReChorus's approach of preprocessing the dataset separately is indeed better.

Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.IntTensor instead (while checking arguments for embedding)

Getting this error while running the sample on Windows 10, torch 1.7.1:

Traceback (most recent call last):
  File "main.py", line 116, in <module>
    main()
  File "main.py", line 74, in main
    logging.info('Test Before Training: ' + runner.print_res(model, data_dict['test']))
  File "xxx\ReChorus-master\src\helpers\BaseRunner.py", line 221, in print_res
    result_dict = self.evaluate(model, data, self.topk, self.metrics)
  File "xxx\ReChorus-master\src\helpers\BaseRunner.py", line 197, in evaluate
    predictions = self.predict(model, data)
  File "xxx\ReChorus-master\src\helpers\BaseRunner.py", line 212, in predict
    prediction = model(utils.batch_to_gpu(batch, model.device))['prediction']
  File "xxx\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "xxx\ReChorus-master\src\models\general\BPR.py", line 30, in forward
    cf_u_vectors = self.u_embeddings(u_ids)
  File "xxx\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "xxx\lib\site-packages\torch\nn\modules\sparse.py", line 126, in forward
    self.norm_type, self.scale_grad_by_freq, self.sparse)
  File "xxx\lib\site-packages\torch\nn\functional.py", line 1852, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.IntTensor instead (while checking arguments for embedding)
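A likely cause is that on Windows numpy's default integer type is int32, so id arrays end up as torch.IntTensor, while nn.Embedding expects int64 indices. A minimal reproduction and the .long() cast that avoids it (a sketch of a workaround, not the official fix):

import torch
import torch.nn as nn

emb = nn.Embedding(10, 4)
ids = torch.tensor([1, 2, 3], dtype=torch.int32)
# emb(ids)              # raises: expected scalar type Long but got torch.IntTensor
out = emb(ids.long())   # casting the indices to int64 works
print(out.shape)        # torch.Size([3, 4])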

about the dataset rechorus

Hello, can you tell me how the "relational ratio in test set" in the ReChorus paper is calculated? Thank you very much.

GRU4Rec Runtime error for pytorch v1.8

Issue: GRU4Rec model run will encounter runtime error "RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor"

Error code location: ReChorus/src/models/sequential/GRU4Rec.py line 52

Reason: in recent PyTorch versions (maybe after PyTorch v1.4), nn.utils.rnn.pack_padded_sequence requires the lengths tensor to be a CPU int64 tensor instead of a GPU long tensor.

Fix Solution: add .cpu() to sort_his_lengths. e.g.
history_packed = torch.nn.utils.rnn.pack_padded_sequence(sort_his_vectors, sort_his_lengths.cpu(), batch_first=True)

About The Model ComiRec

Hi, I was recently looking for a PyTorch implementation of this paper. After browsing GitHub, I found that there is a file called ComiRec in your project. I'm not sure whether it is a complete implementation of that paper. Looking forward to your reply. :)

Unusually high HR when applying Linear on i_vectors

Hi,

I am relatively new to the field. I'm using your package to write some code. Thanks for the contribution to the community by the way!

So when I add a simple linear layer to i_vectors (after passing i_ids to an embedding), I get a strangely high HR (almost 100%). Did I do something wrong? Is it not allowed to use item embedding to make predictions? Thank you in advance!

I run my code using the line python main.py --gpu 0 --num_neg 99 --model_name Linear --emb_size 64 --hidden_size 128 --lr 1e-3 --l2 1e-4 --history_max 20 --dataset 'Grocery_and_Gourmet_Food'

Please see my module below:

import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np

from models.BaseModel import SequentialModel
from utils import layers


class Linear(SequentialModel):
    reader = 'SeqReader'
    runner = 'BaseRunner'
    extra_log_args = ['emb_size', 'num_layers', 'num_heads']

    @staticmethod
    def parse_model_args(parser):
        parser.add_argument('--emb_size', type=int, default=64,
                            help='Size of embedding vectors.')
        parser.add_argument('--num_layers', type=int, default=1,
                            help='Number of self-attention layers.')
        parser.add_argument('--num_heads', type=int, default=4,
                            help='Number of attention heads.')
        return SequentialModel.parse_model_args(parser)

    def __init__(self, args, corpus):
        super().__init__(args, corpus)
        self.emb_size = args.emb_size
        self.max_his = args.history_max
        self.num_layers = args.num_layers

        self.len_range = torch.from_numpy(np.arange(self.max_his)).to(self.device)
        self._define_params()

    def _define_params(self):
        self.i_embeddings = nn.Embedding(self.item_num, self.emb_size)
        self.p_embeddings = nn.Embedding(self.max_his + 1, self.emb_size)

        self.linear = nn.Linear(self.emb_size, 1 + self.num_neg)

    def forward(self, feed_dict):
        self.check_list = []
        i_ids = feed_dict['item_id']  # [batch_size, -1]
        history = feed_dict['history_items']  # [batch_size, history_max]
        lengths = feed_dict['lengths']  # [batch_size]
        batch_size, seq_len = history.shape

        i_vectors = self.i_embeddings(i_ids)
        prediction = self.linear(i_vectors.mean(axis=1))
        # prediction = self.sedot(stacked_X)

        return {'prediction': prediction.view(batch_size, -1)}

Prediction on real time

Hi,

Great job on the implementation.
Is there a way for the predict method of SASRec to work in the following way (this could be used for real-time cases)?

Given a new sequence of items that a user interacted with in real time, and the model trained on the training data, return a set of recommended items with their scores.

Thanks,
Sara

Baselines for S3Rec and CLRec?

Dear authors,

Thanks for your implementations of this framework with several baselines. I wonder whether the implementations of S3Rec and CLRec are ready in the developing folder? I tried to run them directly but faced some errors.

Datasets

Hello, how should the parameters be set for running each dataset? Thanks.

How to get top K recommendations for any interaction?

Hello,

Thanks for implementing this beautiful library. I have a couple of queries:

  1. Is there any code for generating the top 10 recommendations given a particular set of user interactions? (see the sketch after this list)
  2. When we generate results on the test dataset, aren't we supposed to calculate the scores for every item present in the dataset instead of just the ground truth and the negatively sampled items? I believe the code scores only the negatively sampled items plus the ground truth (a total of 100), when in reality it should be done for all items (3707 in the case of ML-1M)?
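Regarding question 1, a minimal numpy sketch of turning a row of prediction scores back into a top-k item list; the candidate_items array pairing each score column with an item id is an assumption about how the scored candidates are tracked, not an existing helper:

import numpy as np

scores = np.array([[0.12, 0.93, 0.40, 0.71]])      # one row per test instance, one column per candidate
candidate_items = np.array([[101, 205, 33, 87]])   # item ids behind each column (assumed bookkeeping)

top_k = 2
order = np.argsort(-scores, axis=1)[:, :top_k]                   # column indices of the best scores
top_items = np.take_along_axis(candidate_items, order, axis=1)
print(top_items)                                                 # [[205  87]]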

Framework for hyperparameter Tuning

Hi,

Firstly, the package looks great and informative. Thanks.

I was working with the models and was interested in optimizing their hyperparameters. I tried to work it out, but found it challenging to tell whether a package such as Optuna can be used for optimizing the hyperparameters.

Any help or comments on this topic would be helpful !

Thanks in advance!
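For what it's worth, a rough sketch of how Optuna could wrap a training run; train_and_eval is a hypothetical helper that would launch one ReChorus run and return a dev metric, it is not part of the library:

import optuna

def train_and_eval(lr: float, l2: float) -> float:
    # Hypothetical helper: launch one ReChorus run with these hyperparameters
    # and return its dev NDCG@5. Replace the placeholder with a real training call.
    return 0.0  # placeholder

def objective(trial: optuna.Trial) -> float:
    lr = trial.suggest_float('lr', 1e-4, 1e-2, log=True)
    l2 = trial.suggest_float('l2', 1e-6, 1e-3, log=True)
    return train_and_eval(lr, l2)

study = optuna.create_study(direction='maximize')
study.optimize(objective, n_trials=20)
print(study.best_params)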

Chorus input error

raise ValueError('The value argument must be within the support')
ValueError: The value argument must be within the support

My input: [screenshot]

The code's log: [screenshot]

Please help me.

About the baseline "RCF".

Hello, I am very interested in your work.

I see that "RCF" is used as one of your baselines, and I want to know which item relations are used in your implementation of RCF on the Amazon datasets. If it is convenient for you, could you send me the code of the "RCF" implementation? Looking forward to your reply.

Metric computation goes wrong when using the GPU

Running "python main.py --model_name BPRMF --emb_size 64 --lr 1e-3 --l2 1e-6 --dataset Grocery_and_Gourmet_Food --gpu 0"
gives, in part:
Device: cuda
Load corpus from ../data/Grocery_and_Gourmet_Food\BaseReader.pkl
#params: 1497344
BPRMF(
(u_embeddings): Embedding(14682, 64)
(i_embeddings): Embedding(8714, 64)
)
Optimizer: Adam
Epoch 1 loss=0.6931 [7.5 s] dev=(HR@5:1.0000,NDCG@5:1.0000) [6.3 s] *
Epoch 2 loss=0.6931 [6.9 s] dev=(HR@5:1.0000,NDCG@5:1.0000) [6.4 s] *
Epoch 3 loss=0.6931 [7.5 s] dev=(HR@5:1.0000,NDCG@5:1.0000) [6.5 s] *
Epoch 4 loss=0.6931 [7.6 s] dev=(HR@5:1.0000,NDCG@5:1.0000) [6.5 s] *
Epoch 5 loss=0.6931 [7.6 s] dev=(HR@5:1.0000,NDCG@5:1.0000) [6.4 s] *
Epoch 6 loss=0.6931 [7.0 s] dev=(HR@5:1.0000,NDCG@5:1.0000) [6.3 s] *

The computation is normal when using the CPU.

About the CMCC dataset

Hello,
Could the CMCC dataset used in CIKM 2022 (TiMiRec) be provided? 😃
Thanks 👍

Dataset Preprocessing: Generating leave_df in amazon.ipynb

Hi, thanks for the great code!
I would like to ask about the preprocessing code in Amazon.ipynb, where leave_df is generated in a way I think it shouldn't be.

leave_df = out_df.groupby('user_id').head(1)

data_df = out_df.drop(leave_df.index)

If it is just to leave the last two items out for the dev/test data, why is this included here? I think it harms negative sampling, since the items in leave_df are excluded when generating test_df and dev_df.

Again, thanks for the great code!

Problems running SLRCPlus

Hi Wang,
I met some problems when I tried to run SLRCPlus on 'ml-1m'.

The CMD command I input is:
python main.py --model_name SLRCPlus --emb_size 64 --lr 5e-4 --l2 1e-5 --dataset 'ml-1m'

The error is shown as:
[error screenshot]

Could you please give me some guidance? Thanks in advance.

About the training speed

Hi THUwangcy.

I use your lib to run the SASRec model with command as below (cuda environment):

python main.py --model_name SASRec --emb_size 50 --lr 0.001 --l2 0.0 --dataset ml-1m --test_all 1 --history_max 200 --num_layers 2 --num_heads 1 --batch_size 128 --topk 10 --num_workers 2

The parameters are the same as in the original SASRec paper. But I find that the training speed for one epoch is much slower than the original code, even after I modify the code to avoid running inference at every epoch.

# Record dev results
dev_result = self.evaluate(data_dict['dev'], self.topk[:1], self.metrics)
dev_results.append(dev_result)
main_metric_results.append(dev_result[self.main_metric])
logging_str = 'Epoch {:<5} loss={:<.4f} [{:<3.1f} s] dev=({})'.format(
    epoch + 1, loss, training_time, utils.format_metric(dev_result))

Can you provide some solutions to this problem?

How to run the code

I ran your implementation of Chorus.

My commands are:

python main.py --model_name Chorus --emb_size 64 --margin 1 --lr 5e-4 --l2 1e-5 --epoch 50 --early_stop 0 --batch_size 512 --dataset 'Grocery_and_Gourmet_Food' --stage 1
python main.py --model_name Chorus --emb_size 64 --margin 1 --lr_scale 0.1 --lr 1e-3 --l2 0 --dataset 'Grocery_and_Gourmet_Food' --base_method 'BPR' --stage 2

The error message is as follows:
Load corpus from ../data/Grocery_and_Gourmet_Food/KGReader.pkl
#params: 1521509
Chorus(
(u_embeddings): Embedding(14682, 64)
(i_embeddings): Embedding(8714, 64)
(r_embeddings): Embedding(3, 64)
(betas): Embedding(57, 3)
(mus): Embedding(57, 3)
(sigmas): Embedding(57, 3)
(prediction): Linear(in_features=64, out_features=1, bias=False)
(user_bias): Embedding(14682, 1)
(item_bias): Embedding(8714, 1)
(kg_loss): MarginRankingLoss()
)
Traceback (most recent call last):
  File "main.py", line 116, in <module>
    main()
  File "main.py", line 66, in main
    model.actions_before_train()
  File "/home/zlc/zlc/lhz/DIDN/ReChorus/src/models/sequential/Chorus.py", line 76, in actions_before_train
    raise ValueError('Pre-trained KG model does not exist, please run with "--stage 1"')
ValueError: Pre-trained KG model does not exist, please run with "--stage 1"

I was wondering whether this error was caused by some negligence on my part. Looking forward to your reply! Thanks!

An issue on implementation of ComiRec

[screenshot of the relevant ComiRec code]

It seems that 'his_vectors' in this line should be 'his_pos_vectors'.
When 'self.add_pos' is set to True, we should use 'his_pos_vectors' instead of 'his_vectors'.
In my experiments, changing 'his_vectors' to 'his_pos_vectors' leads to a significant performance improvement when 'self.add_pos' is set to True.

Reference for CFKG

FYI, the current reference for CFKG links to the short paper. A more comprehensive description of CFKG can be found in the journal paper "Learning heterogeneous knowledge base embeddings for explainable recommendation" (https://www.mdpi.com/1999-4893/11/9/137/pdf). It might be worth linking to that as well :)

How to convert predictions to real items?

Following the documentation, I currently run the TiSASRec model; the output includes [[score of ground truth, k item scores], [...], [...]], and I don't know how to convert the scores to items. Please help me! Thanks for your support.

The number of negative items is changed

Hi again 👍

I was writing some code and found that the default parameter 'neg_items' in BaseModel.py has been changed from 99 to 1, while amazon.ipynb still has the original 99. Any reason for that? The paper says it's 99, so it is a bit confusing.

Accuracy of evaluation results using negative sampling

Hi,

thank you for your work!
I read through your code and trained a few models on my dataset (#users 45'000, #items 260'000) using your library and have a question for which I would greatly appreciate your thoughts.

I have the feeling that evaluation results obtained with negative sampling are overly confident and do not reflect the actual performance of a model on a dataset. Let's say we have 99 negative samples and add the ground-truth label; then the model only has to differentiate between 100 candidates instead of, e.g., 260'000, and can even be sure that the actual item is among these 100. This approach, IMO, greatly simplifies the task at hand. If that assumption is correct, what is even the point of this strategy (apart from the lower computational cost)?

Also, would you mind explaining what the code below is supposed to do? From my understanding, this piece of code prevents proper evaluation over all items, because it adjusts what the model has actually predicted.

if dataset.model.test_all:
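    # For every test instance i, collect the items user u has already clicked
    # (train_clicked_set | residual_clicked_set); their scores are then set to
    # -inf so previously consumed items are excluded from the ranking when test_all is on.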
    rows, cols = list(), list()
    for i, u in enumerate(dataset.data['user_id']):
        clicked_items = list(dataset.corpus.train_clicked_set[u] | dataset.corpus.residual_clicked_set[u])
        idx = list(np.ones_like(clicked_items) * i)
        rows.extend(idx)
        cols.extend(clicked_items)
    predictions[rows, cols] = -np.inf

Thank you!

Encoder issues related to ContraRec

When I use the BERT encoder in the ContraRec framework the results are normal, but when I use GRU4Rec and Caser as the encoder, HR@5 is 0.2741 without contrastive learning and only 0.0759 with contrastive learning. I modified the encoder parameters and the loss directly in the ContraRec framework. Can you give me some help? Thank you!

About the model "GRU4Rec"

Hi Wang,
Today I ran "GRU4Rec" with the command python main.py --model_name GRU4Rec --emb_size 64 --hidden_size 128 --lr 1e-3 --l2 1e-4 --history_max 20, as shown in the code.
However, the results of the run are far from the results mentioned in your paper. Can you give me some guidance?
My result:
(HR@5:0.3203,NDCG@5:0.2223,HR@10:0.4232,NDCG@10:0.2555,HR@20:0.5417,NDCG@20:0.2854,HR@50:0.7626,NDCG@50:0.3289)
In your paper:
(HR@5:0.3704,NDCG@5:0.2643,HR@10:0.4721,NDCG@10:0.2972,

Looking forward to your reply.

How do we leverage the predict endpoints

I was able to train on one of the cellphone datasets.
[training log screenshots]
But how do we call the inference or predict endpoints? I believe there is a BaseRunner class, but that expects args, right?

Is there some sample code to do that?

About the experimental result on the CLRec model

Hello, I am very interested in your paper "Contrastive Learning for Sequential Recommendation". However, I have some questions when reproducing the experiments. The baseline models I reproduced are consistent with the results in your readme table, but the experimental results of the CLRec model are not ideal: the HR@5 metric is only 0.4808. Could you provide the parameters of this model? Or could you point out any possible operational errors on my part?

TypeError: __cinit__() takes at least 2 positional arguments (0 given)

========================================
cuda available: True

cuda devices: 1

Load corpus from ../data/Grocery_and_Gourmet_Food/BaseReader.pkl
Traceback (most recent call last):
  File "main.py", line 116, in <module>
    main()
  File "main.py", line 56, in main
    corpus = pickle.load(open(corpus_path, 'rb'))
  File "pandas/_libs/internals.pyx", line 572, in pandas._libs.internals.BlockManager.__cinit__
TypeError: __cinit__() takes at least 2 positional arguments (0 given)

This error appears when I run the code. Is the problem with my input? Thanks for your reply.

Fix warning for numpy ndarray creation in BaseModel.py

Warning: ReChorus/src/models/BaseModel.py:147: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.

Location: BaseModel.py line 147

Reason: in the latest NumPy, np.array() requires dtype=np.object to be specified when the sub-arrays have different lengths.

Possible Solution: replace line 147 with

if isinstance(feed_dicts[0][key], np.ndarray):
    tmp_list = [len(d[key]) for d in feed_dicts]
    if any([tmp_list[0] != l for l in tmp_list]):
        stack_val = np.array([d[key] for d in feed_dicts], dtype=np.object)
    else:
        stack_val = np.array([d[key] for d in feed_dicts])
else:
    stack_val = np.array([d[key] for d in feed_dicts])
