stid's People

Contributors

dependabot[bot], zezhishao


stid's Issues

Something about my own dataset

Hi, dear author! Thanks for your great work on STID! I'm a rookie in the spatio-temporal field and I want to run my own dataset with the STID model. Could you please tell me how to configure my own dataset?
My dataset spans 2019-2020, with one record per hour, for 23,347 nodes.
Thanks!

A question about the Spatial and Temporal Identities

Hello! Reading your paper, I was struck by how simple the idea is, but I am not very familiar with spatio-temporal forecasting, so I would like to ask:

Why are the temporal and spatial information learned as embeddings rather than fed in directly as input features? Is it because the datasets in this field carry no temporal or spatial information? If so, may I ask what the input features of these datasets actually are?
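As context for this question: the datasets used by STID do carry a normalized time-of-day channel, but instead of feeding that scalar in directly, STID looks the discretized index up in a learnable table. A minimal sketch of the difference, with illustrative sizes (288 slots per day, 32-dim embeddings):

```python
import torch
import torch.nn as nn

# A learnable table: each of the 288 five-minute slots of a day
# gets its own 32-dim vector, trained end-to-end with the model.
time_of_day_emb = nn.Embedding(num_embeddings=288, embedding_dim=32)

# Raw feature alternative: the slot index is just one scalar per sample.
slot_index = torch.tensor([0, 143, 287])  # three samples

# Learned identity: a 32-dim vector per slot instead of a single scalar.
emb = time_of_day_emb(slot_index)
assert emb.shape == (3, 32)
```

The learnable vector gives the model far more capacity per time slot than a single scalar feature would.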

On handling the model's dimensions

Hello, the model starts with the following code; the shape comments are my own:

batch_size, _, num_nodes, _ = input_data.shape  # B, L, N, C
input_data = input_data.transpose(1, 2).contiguous()  # B, N, L, C
input_data = input_data.view(batch_size, num_nodes, -1).transpose(1, 2).unsqueeze(-1)  # B, N, (L*C) -> B, (L*C), N -> B, (L*C), N, 1
time_series_emb = self.time_series_emb_layer(input_data)

If I understand correctly, L is the window length and C is the per-node feature dim. Why are L and C merged here rather than N and C? Thanks!
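The reshaping in question can be traced with toy shapes (B=2, L=12, N=3, C=1 here are illustrative values):

```python
import torch

# Toy shapes: B samples, L input steps, N nodes, C channels per node
B, L, N, C = 2, 12, 3, 1
input_data = torch.randn(B, L, N, C)

# Same sequence of ops as in stid_arch.py:
x = input_data.transpose(1, 2).contiguous()          # (B, N, L, C)
x = x.view(B, N, -1).transpose(1, 2).unsqueeze(-1)   # (B, L*C, N, 1)

# L and C are merged because the embedding layer mixes information
# across the whole input window of each node, while N is kept as a
# separate axis so every node gets its own time-series embedding.
assert x.shape == (B, L * C, N, 1)
```

Merging N and C instead would entangle different nodes in one embedding, which is exactly what the per-node layout avoids.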

Questions about the time of day embedding

I noticed that only the last time step of each sample is kept by the following code:

if self.if_time_in_day:
    t_i_d_data = history_data[..., 1]
    # In the datasets used in STID, the time_of_day feature is normalized to [0, 1].
    # We multiply it by 288 to get the index.
    # If you use other datasets, you may need to change this line.
    time_in_day_emb = self.time_in_day_emb[(t_i_d_data[:, -1, :] * self.time_of_day_size).type(torch.LongTensor)]
else:
    time_in_day_emb = None
if self.if_day_in_week:
    d_i_w_data = history_data[..., 2]
    day_in_week_emb = self.day_in_week_emb[(d_i_w_data[:, -1, :]).type(torch.LongTensor)]
else:
    day_in_week_emb = None

More specifically, the operation "t_i_d_data[:, -1, :]" is used to get the last time step from input data.

Could you please provide more interpretations?
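The indexing the question refers to can be seen with toy shapes (B=2, L=12, N=3 are illustrative):

```python
import torch

B, L, N = 2, 12, 3
# Normalized time-of-day channel of the input window, shape (B, L, N)
t_i_d_data = torch.rand(B, L, N)

# Index -1 on axis 1 selects the final input time step,
# leaving one time-of-day value per sample and node.
last_step = t_i_d_data[:, -1, :]
assert last_step.shape == (B, N)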

Where is the scaler pkl file?

The author's code uses this file:

self.scaler = load_pkl("{0}/scaler_in{1}_out{2}.pkl".format(cfg["TRAIN"]["DATA"]["DIR"], cfg["DATASET_INPUT_LEN"], cfg["DATASET_OUTPUT_LEN"]))

Where can it be downloaded?

A code issue in stid_arch.py

self.hidden_dim = self.embed_dim + self.node_dim * \
    int(self.if_spatial) + self.temp_dim_tid * int(self.if_day_in_week) + \
    self.temp_dim_diw * int(self.if_time_in_day)

In this dimension-summing code, should if_day_in_week really multiply temp_dim_tid? The two flags appear to be swapped, which would affect the ablation experiments. I hope the author can take a look.
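A sketch of the fix the question implies, with each flag gating its own embedding's width (the sizes 32 are illustrative defaults):

```python
# Illustrative widths for the four components of the hidden vector
embed_dim, node_dim = 32, 32
temp_dim_tid, temp_dim_diw = 32, 32
if_spatial, if_time_in_day, if_day_in_week = True, True, False

# Each flag should gate the width of its own embedding
hidden_dim = (embed_dim
              + node_dim * int(if_spatial)
              + temp_dim_tid * int(if_time_in_day)   # tid gated by its own flag
              + temp_dim_diw * int(if_day_in_week))  # diw gated by its own flag
assert hidden_dim == 96  # diw disabled, so its 32 channels are excluded
```

With the flags swapped, disabling one temporal embedding would subtract the other embedding's width, which matters only when the two widths differ or only one flag is set.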

After preprocessing the data, running run.py still never reaches the main function

from basicts import launch_training
First, the line above raises an error; basicts does not seem to contain launch_training.
Even after changing it to from basicts.launcher import launch_training, it still does not run.
More importantly, it is unclear how training and prediction are actually carried out. Part of the pipeline seems to be missing; I only see the data preprocessing, the model architecture, and the parameter settings. Perhaps I have simply not found it; I would appreciate some guidance.

Code

Hi:
I tried to run your code, but it did not run through:
TypeError: forward() missing 4 required positional arguments: 'future_data', 'batch_seen', 'epoch', and 'train'
Thanks so much for your reply.

Question about T^TiD and T^DiW

I noticed you take the last slice along the L dimension. In my understanding, T^TiD changes along the L dimension. Why not use t_i_d_data[:, :, -1]? Temporal embeddings should have nothing to do with num_nodes.

    time_in_day_emb = self.time_in_day_emb[(t_i_d_data[:, -1, :] * self.time_of_day_size).type(torch.LongTensor)]
else:
    time_in_day_emb = None
if self.if_day_in_week:
    d_i_w_data = history_data[..., 2]
    day_in_week_emb = self.day_in_week_emb[(d_i_w_data[:, -1, :]).type(torch.LongTensor)]

About the spatial identity

Hello. Regarding the spatial identity, I only see a randomly initialized matrix of spatial features. For a dataset such as PEMS04, is its adj_mx (the distance matrix) simply not used? Or have I missed something? I would appreciate a clarification, thanks.

The time series embedding in file "stid_arch.py"

# prepare data
input_data = history_data[..., range(self.input_dim)]
...
# time series embedding
batch_size, _, num_nodes, _ = input_data.shape
input_data = input_data.transpose(1, 2).contiguous()
input_data = input_data.view(batch_size, num_nodes, -1).transpose(1, 2).unsqueeze(-1)
time_series_emb = self.time_series_emb_layer(input_data)

I want to know whether the shape of "input_data" in input_data = input_data.transpose(1, 2).contiguous() should be (B, N, L, 1). As it stands it is (B, N, L, 3). I also think the code input_data = history_data[..., range(self.input_dim)] should be replaced with input_data = history_data[..., 0].
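One detail worth noting about the proposed replacement: the `range(...)` indexing slices the first `input_dim` channels while preserving the trailing channel axis, which the later `view` relies on, whereas `[..., 0]` drops that axis. A small demonstration with toy shapes (B=2, L=12, N=3 are illustrative):

```python
import torch

B, L, N = 2, 12, 3
# 3 channels: value, time-of-day, day-of-week
history_data = torch.randn(B, L, N, 3)
input_dim = 1

# Indexing with range(input_dim) keeps the channel axis...
kept = history_data[..., range(input_dim)]
assert kept.shape == (B, L, N, 1)

# ...while integer indexing with 0 removes it entirely.
dropped = history_data[..., 0]
assert dropped.shape == (B, L, N)
```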

Some questions about the Visualization Section in the paper.

Dear Shao, in your paper:

For T^DiW ∈ R^{N_w × D}, where N_w = 7 ≪ D = 32, we train STID by setting the embedding size of T^DiW to 2 to get a more accurate visualization.

And in your code: # concatenate all embeddings: hidden = torch.cat([time_series_emb] + node_emb + tem_emb, dim=1)

My confusion is how the three embeddings can be concatenated along the given dimension during training if the embedding size of T^DiW is changed to 2 while the size of the other embeddings is still 32.
Are the other embedding sizes also changed to 2, or is it something else?

Why doesn't node_emb interact with his_data?

node_emb only involves a randomly initialized matrix and never embeds his_data directly. I have read on Zhihu that this purely random embedding approach is intentional, but intuitively it is still hard to accept: the spatial effect in his_data is never directly encoded. What would happen if nn.Embedding were also used to embed the spatial information; has this been compared? Or is there some programming difficulty in embedding the spatial effect directly?

How to get prediction with shape [B, L, N, C]

Great project!
However, when I try to get a prediction with shape [B, L, N, C] through STID, I can only get shape [B, L, N, 1].
During the time-series embedding, the input dim C is flattened away and the output is unsqueezed to 1. How can the output dim be kept the same as C?
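One possible direction, sketched as a hypothetical modification (not the repository's code): widen the regression head so it emits L_out * C values per node, then unflatten back to (B, L_out, N, C). All sizes below are illustrative.

```python
import torch
import torch.nn as nn

B, N, C, L_out, hidden_dim = 2, 3, 2, 12, 64

# Hypothetical head: predict L_out * C values per node instead of L_out
regression_layer = nn.Conv2d(hidden_dim, L_out * C, kernel_size=(1, 1))

hidden = torch.randn(B, hidden_dim, N, 1)          # fused embedding, (B, D, N, 1)
prediction = regression_layer(hidden)               # (B, L_out*C, N, 1)
prediction = prediction.squeeze(-1)                 # (B, L_out*C, N)
prediction = prediction.view(B, L_out, C, N).permute(0, 1, 3, 2)  # (B, L_out, N, C)
assert prediction.shape == (B, L_out, N, C)
```

Whether predicting the auxiliary channels (time-of-day, day-of-week) is meaningful is a separate question; they are deterministic, which may be why the model outputs only the value channel.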
