Code for our CIKM'22 paper Spatial-Temporal Identity: A Simple yet Effective Baseline for Multivariate Time Series Forecasting.
Hi, dear author! Thanks for your great work on STID! I'm a rookie in the spatial-temporal field and I want to train STID on my own dataset. Could you please tell me how to configure a custom dataset?
My dataset spans 2019-2020, with one record per hour and 23,347 nodes.
Thanks!
Hello! After reading your paper, I was struck by its elegant simplicity. However, I'm not very familiar with spatial-temporal forecasting, so I'd like to ask:
Why learn the temporal and spatial information as embeddings rather than feeding them directly as input features? Is it because the datasets in this field lack temporal and spatial information? If so, may I ask what the input features of these datasets actually are?
Hello, the code at the start of the model is as follows (the dimension comments are my own):
batch_size, _, num_nodes, _ = input_data.shape # B,L,N,C
input_data = input_data.transpose(1, 2).contiguous() # B,N,L,C
input_data = input_data.view(batch_size, num_nodes, -1).transpose(1, 2).unsqueeze(-1) # B,N,(LC)->B,(LC),N->B,(LC),N,1
time_series_emb = self.time_series_emb_layer(input_data)
If I understand correctly, L is the window length and C is the feature dim of each node. Why are L and C merged here, rather than N and C? Thanks!
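The merge can be traced step by step with made-up sizes (a sketch with hypothetical values, not the project's actual configuration):

```python
import torch

# Example sizes (hypothetical): batch, window length, nodes, channels
B, L, N, C = 2, 12, 5, 3
x = torch.randn(B, L, N, C)            # [B, L, N, C]

x = x.transpose(1, 2).contiguous()     # [B, N, L, C]
x = x.view(B, N, -1)                   # [B, N, L*C]: each node's whole window flattens into one vector
x = x.transpose(1, 2).unsqueeze(-1)    # [B, L*C, N, 1]

print(tuple(x.shape))                  # (2, 36, 5, 1)
```

Merging L and C (rather than N and C) keeps one flattened time series per node, so a single linear layer can embed every node's window independently; merging N and C would mix different nodes' values into one vector.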
The PEMS family of datasets includes PEMS03, PEMS04, PEMS07, and PEMS08. Why wasn't PEMS03 used? Thanks.
I noticed that only the last time step of each sample is kept by the following code:
STID/stid/stid_arch/stid_arch.py
Lines 76 to 88 in f9801a5
More specifically, the operation "t_i_d_data[:, -1, :]" is used to get the last time step from input data.
Could you please provide more interpretations?
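For reference, a tiny sketch (with example shapes) of what that indexing keeps:

```python
import torch

B, L, N = 2, 12, 5                 # hypothetical sizes
t_i_d_data = torch.rand(B, L, N)   # time-in-day index per step and node (illustrative values)

last_step = t_i_d_data[:, -1, :]   # [B, N]: only the final step of the input window survives
print(tuple(last_step.shape))      # (2, 5)
```

Since the model predicts from the end of the input window, the time-of-day at the last observed step is arguably the most informative single index; the earlier steps' indices are implied by the fixed sampling interval.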
A question about this line in the author's code:
self.scaler = load_pkl("{0}/scaler_in{1}_out{2}.pkl".format(cfg["TRAIN"]["DATA"]["DIR"], cfg["DATASET_INPUT_LEN"], cfg["DATASET_OUTPUT_LEN"]))
Where can the file used here be downloaded?
self.hidden_dim = self.embed_dim+self.node_dim * \
int(self.if_spatial)+self.temp_dim_tid*int(self.if_day_in_week) + \
self.temp_dim_diw*int(self.if_time_in_day)
In this dimension-summing code, should if_day_in_week really be multiplied with temp_dim_tid? The flags appear to be swapped, which could affect the ablation experiments. I hope the author can take a look.
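To see why the swap matters, here is a sketch with assumed (deliberately unequal) temporal embedding sizes; with the default of equal sizes the sum happens to come out the same either way, which may be why it went unnoticed:

```python
# Hypothetical configuration; only the flag/dim pairing is at issue.
embed_dim, node_dim = 32, 32
temp_dim_tid, temp_dim_diw = 32, 16        # deliberately unequal for illustration
if_spatial, if_time_in_day, if_day_in_week = True, True, False

# As written in the snippet above (flags swapped):
hidden_swapped = embed_dim + node_dim * int(if_spatial) \
    + temp_dim_tid * int(if_day_in_week) + temp_dim_diw * int(if_time_in_day)

# With each dim gated by its matching flag:
hidden_matched = embed_dim + node_dim * int(if_spatial) \
    + temp_dim_tid * int(if_time_in_day) + temp_dim_diw * int(if_day_in_week)

print(hidden_swapped, hidden_matched)      # 80 96
```

In an ablation that disables only one temporal embedding, the swapped version would allocate the wrong number of hidden channels.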
from basicts import launch_training
First, the line above raises an error; basicts does not seem to export launch_training.
Even after changing it to from basicts.launcher import launch_training, it still fails to run.
More importantly, I can't see how training and prediction are actually carried out. It feels like part of the code is missing: only the data preprocessing, the model architecture, and the parameter settings are present. Perhaps I simply haven't found it; any pointers would be appreciated.
Hi:
I tried to run your code, but it failed with the following error:
TypeError: forward() missing 4 required positional arguments: 'future_data', 'batch_seen', 'epoch', and 'train'
Thanks so much for your reply.
I noticed you take the last slice along the L dimension. In my understanding, T^TiD varies along the L dimension, so why not use t_i_d_data[:, :, -1]? Temporal embeddings should have nothing to do with num_nodes.
STID/stid/stid_arch/stid_arch.py
Lines 80 to 86 in f9801a5
Hello, regarding the spatial identity: I only see a randomly initialized matrix of spatial features. For a dataset like PEMS04, doesn't that mean its adj_mx (the distance matrix) goes unused? Or have I missed something? Thanks for clarifying.
# prepare data
input_data = history_data[..., range(self.input_dim)]
...
# time series embedding
batch_size, _, num_nodes, _ = input_data.shape
input_data = input_data.transpose(1, 2).contiguous()
input_data = input_data.view(batch_size, num_nodes, -1).transpose(1, 2).unsqueeze(-1)
time_series_emb = self.time_series_emb_layer(input_data)
I want to know whether the shape of input_data in input_data = input_data.transpose(1, 2).contiguous() should be (B, N, L, 1); right now it is (B, N, L, 3). And I think input_data = history_data[..., range(self.input_dim)] should be replaced with input_data = history_data[..., 0].
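The two indexing styles differ in whether the channel axis survives; a sketch with example shapes (the channel layout shown is an assumption):

```python
import torch

B, L, N, C = 2, 12, 5, 3
history_data = torch.randn(B, L, N, C)   # e.g. channel 0 = value, 1 = time-in-day, 2 = day-in-week

input_dim = 1                            # assumed setting: keep only the raw signal
a = history_data[..., range(input_dim)]  # [B, L, N, 1]: keeps a size-1 channel axis
b = history_data[..., 0]                 # [B, L, N]:    drops the channel axis entirely

print(tuple(a.shape), tuple(b.shape))    # (2, 12, 5, 1) (2, 12, 5)
```

So with input_dim = 1 the range-based slice already selects only the first channel; replacing it with [..., 0] would drop the axis that the later transpose/view steps expect.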
Dear shao, in your paper,
For T^DiW ∈ R^{N_w × D}, where N_w = 7 ≪ D = 32, we train STID by setting the embedding size of T^DiW to 2 to get a more accurate visualization.
And in your code, # concate all embeddings hidden = torch.cat([time_series_emb] + node_emb + tem_emb, dim=1)
My confusion is how the three embeddings can be concatenated along the given dimension during training if you change the embedding size of T^DiW to 2 whereas the size of the other embeddings is still 32.
Do the other embedding sizes also change to 2, or is it something else?
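torch.cat only requires the non-concatenated dims to match, so shrinking T^DiW to 2 simply lowers the concatenated width; a sketch with assumed shapes:

```python
import torch

B, N = 2, 5                                 # hypothetical batch and node count
time_series_emb = torch.randn(B, 32, N, 1)
node_emb        = torch.randn(B, 32, N, 1)
tid_emb         = torch.randn(B, 32, N, 1)
diw_emb         = torch.randn(B, 2,  N, 1)  # T^DiW reduced to 2 for visualization

# Concatenating along dim=1 works even though the channel sizes differ.
hidden = torch.cat([time_series_emb, node_emb, tid_emb, diw_emb], dim=1)
print(tuple(hidden.shape))                  # (2, 98, 5, 1)
```

The other embeddings can stay at 32; the only thing that must change accordingly is the input width of the MLP that consumes hidden (98 instead of 128 with these example sizes).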
node_emb only interacts with a randomly initialized matrix and never embeds his_data directly. I've seen on Zhihu that this random-embedding approach is standard, but intuitively it is still hard to accept that the spatial effects in his_data are never directly encoded. Have you compared against using nn.Embedding to embed the spatial information jointly with the data? Or is there some engineering difficulty in directly embedding the spatial effects?
Great project!
However, when I try to get a prediction with shape [B, L, N, C] through STID, I can only get shape [B, L, N, 1]. During the time series embedding process, the input dim C is reduced and unsqueezed to 1. How can I keep the output dim the same as C?