
paddlepaddle / paddlets

Awesome, easy-to-use deep time series modeling based on PaddlePaddle, including comprehensive functionality modules such as TSDataset, Analysis, Transform, Models, AutoTS, and Ensemble; supporting versatile tasks such as time series forecasting, representation learning, and anomaly detection; and featuring quick tracking of SOTA deep models.

License: Apache License 2.0

Python 98.98% Shell 1.02%
time-series-forecasting time-series-models time-series-analysis nbeats nhits paddlets paddlepaddle deepar ts2vec informer

paddlets's Introduction

简体中文 | English



PaddleTS is an easy-to-use Python library for deep time series modeling, built on the PaddlePaddle deep learning framework. It focuses on industry-leading deep models and aims to provide domain experts and industry users with scalable time series modeling capabilities and a convenient user experience. The main features of PaddleTS include:

  • A unified data structure for representing diverse time series data, supporting single- and multi-target variables as well as multiple types of covariates (see the sketch after this list)
  • Encapsulated base model functionality, such as data loading, callback setup, loss functions, and training-loop control, so that developers can focus on the network architecture itself when building new models
  • Built-in industry-leading deep learning models, including forecasting models such as NBEATS, NHiTS, LSTNet, TCN, Transformer, DeepAR, and Informer; representation models such as TS2Vec and CoST; and anomaly detection models such as Autoencoder, VAE, and AnomalyTransformer
  • Built-in data transformation operators for data processing and conversion, including missing-value imputation, outlier handling, normalization, and extraction of time-related covariates
  • Built-in classic data analysis operators to help developers explore data conveniently, including data statistics and data summaries
  • Automatic model tuning with AutoTS, supporting multiple HPO (Hyper Parameter Optimization) algorithms and showing significant tuning gains across multiple models and datasets
  • Automatic integration of third-party machine learning models and data transformation modules, supporting time series applications of third-party libraries such as sklearn and pyod
  • Support for running PaddlePaddle-based time series models on GPU devices
  • Ensemble learning for time series models
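
As a small illustration of the unified data structure described in the first bullet above, the sketch below builds a TSDataset from a pandas DataFrame containing one target column and two covariates. The construction mirrors the official load_from_dataframe example quoted in the issues further down; the column names and frequency here are placeholders.

import numpy as np
import pandas as pd
from paddlets import TSDataset

# Toy hourly series: one target plus a known and an observed covariate.
x = np.linspace(-np.pi, np.pi, 200)
df = pd.DataFrame({
    'time_col': pd.date_range('2022-01-01', periods=200, freq='1h'),
    'value': np.sin(x),                # target variable
    'known_cov': np.cos(x),            # covariate known into the future
    'observed_cov': np.sin(x) + 1.0,   # covariate only observed in the past
})

# Wrap the DataFrame into the unified TSDataset structure.
dataset = TSDataset.load_from_dataframe(
    df,
    time_col='time_col',
    target_cols='value',
    known_cov_cols='known_cov',
    observed_cov_cols='observed_cov',
    freq='1h',
)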

📣 Recent Updates

  • 📚 "High-accuracy time series analysis zero-code pipelines on Xinghe, newly launched": bringing together 3 major time series analysis scenario tasks and covering 11 state-of-the-art time series models. The high-accuracy, multi-model-fusion pipeline adaptively searches for the optimal model combination per scenario, improving time series forecasting accuracy by about 20% and time series anomaly detection accuracy by about 5% in real industrial applications. It supports cloud and local service deployment as well as fully offline use. Live session: August 1 (Thursday), 19:00. Registration link: https://www.wjx.top/vm/YLz6DY6.aspx?udsid=146765
  • [2024-06-27] 💥 Major update of PaddleX 3.0, the PaddlePaddle low-code development tool!
    • Rich model pipelines: 68 curated high-quality PaddlePaddle models, covering task scenarios such as image classification, object detection, image segmentation, OCR, document layout analysis, and time series analysis;
    • Low-code development paradigm: full-workflow low-code development for both single models and model pipelines, with a Python API and support for user-defined model composition;
    • Multi-hardware training and inference: supports model training and inference on NVIDIA GPUs, Kunlunxin, Ascend, Cambricon, and other hardware. See the model list for the models supported by PaddleTS.
  • Added time series classification capability
  • Released 6 new deep time series models: the USAD (UnSupervised Anomaly Detection) and MTAD_GAT (Multivariate Time-series Anomaly Detection via Graph Attention Network) anomaly detection models, the CNN and InceptionTime time series classification models, and the SCINet (Sample Convolution and Interaction Network) and TFT (Temporal Fusion Transformer) forecasting models
  • Added Paddle Inference support, currently adapted for time series forecasting and anomaly detection
  • Added model interpretability, covering both model-agnostic and model-specific interpretability
  • Added support for representation-based clustering and classification

You can also refer to the release notes for a more detailed list of updates.

In the future, more advanced features will be released, including but not limited to:

  • More time series models
  • Scenario-oriented pipelines supporting end-to-end solutions for real-world scenarios

About PaddleTS

Specifically, the PaddleTS library consists of the following submodules:

Module Description
paddlets.datasets Time series data module, providing a unified time series data structure and predefined data processing methods
paddlets.autots Automatic hyperparameter tuning
paddlets.transform Data transformation module, providing data preprocessing and feature engineering capabilities
paddlets.models.forecasting Time series model module: forecasting models based on the PaddlePaddle deep learning framework
paddlets.models.representation Time series model module: representation models based on the PaddlePaddle deep learning framework
paddlets.models.anomaly Time series model module: anomaly detection models based on the PaddlePaddle deep learning framework
paddlets.models.classify Time series model module: classification models based on the PaddlePaddle deep learning framework
paddlets.pipeline Modeling pipeline module, supporting workflows that chain feature engineering, model training, and model evaluation (see the sketch after this table)
paddlets.metrics Metrics module, providing multi-dimensional model evaluation capabilities
paddlets.analysis Data analysis module, providing efficient time-series-oriented data analysis capabilities
paddlets.ensemble Time series ensemble learning module, providing forecasting via model ensembles
paddlets.xai Time series model interpretability module
paddlets.utils Utility module, providing basic functionality such as backtesting
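
As a rough sketch of how these modules compose, the example below chains a transform with a forecasting model through paddlets.pipeline. It assumes the Pipeline constructor takes a list of (class, init_params) pairs, as in the official examples, and that StandardScaler is available in paddlets.transform; the chunk lengths are placeholders.

from paddlets.pipeline import Pipeline
from paddlets.transform import StandardScaler      # assumed transform; any paddlets.transform operator works the same way
from paddlets.models.forecasting import MLPRegressor

# Each step is a (class, init_params) pair; the final step is the model.
pipeline = Pipeline([
    (StandardScaler, {}),
    (MLPRegressor, {"in_chunk_len": 24, "out_chunk_len": 6}),
])
pipeline.fit(train_dataset)                 # train_dataset / test_dataset are TSDataset objects
prediction = pipeline.predict(test_dataset)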

Installation

Prerequisites

  • python >= 3.7
  • paddlepaddle >= 2.3

Install paddlets with pip:

pip install paddlets

For more installation options, please refer to the environment installation guide.
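
After installation, a minimal forecasting run looks like the sketch below. It reuses the dataset built in the TSDataset example above; MLPRegressor and its in_chunk_len / out_chunk_len parameters also appear in the AutoTS logs quoted in the issues section, but the concrete values here are placeholders.

from paddlets.models.forecasting import MLPRegressor

# Split the TSDataset along the time axis into train/test.
train_dataset, test_dataset = dataset.split(0.8)

# in_chunk_len: input window length; out_chunk_len: forecast horizon.
model = MLPRegressor(in_chunk_len=24, out_chunk_len=6)
model.fit(train_dataset)
prediction = model.predict(train_dataset)   # forecasts out_chunk_len points after the end of train_dataset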

Documentation

Community

You are welcome to join the PaddleTS open source community by scanning the WeChat QR code below and discuss technical topics with the PaddleTS maintainers and community members at any time:

Releases and Contributions

We greatly appreciate every contributor. If you find any bug, please feel free to let us know by filing an issue.

If you plan to contribute code involving new features, utility functions, or extensions to PaddleTS core components, please open an issue first and discuss the proposed feature with us before submitting the code.

A PR submitted directly without prior discussion may be rejected, because we may intend to take the module it touches in a different direction.

License

PaddleTS uses an Apache-style license; see the LICENSE file.

paddlets's People

Contributors

2bear, a10210532, annnnnnnnnnnnn, bianchuanxin, changdazhou, cuicheng01, eltociear, kehuo, lfan-ke, linwencong, luckyyangrun, lunlu, nepeplwu, ouyangliping, qgn123, shiyutang, sunting78, wangdong2222, wenzhaojie, willionzs, yanghaihua, yangs16, zeyuchen, zhangyubo0722, zy07604


paddlets's Issues

Does TSDataset support slicing like dataframe?

Does TSDataset support slicing like a DataFrame? What I want to do is get a sub-series of a TSDataset object. The only way I have found is to convert the TSDataset object to a DataFrame, use .loc[] to get a sub-DataFrame, and then convert the DataFrame back to a TSDataset. That seems a little tedious.
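
For reference, the round-trip workaround described above looks roughly like the sketch below. It assumes TSDataset exposes to_dataframe() and that the time index becomes a regular column after reset_index(); the column names, date range, and frequency are placeholders.

# TSDataset -> DataFrame -> slice with pandas -> TSDataset again
df = ts_dataset.to_dataframe().reset_index()          # time index becomes a column
df = df.rename(columns={df.columns[0]: 'time_col'})   # name of the restored time column is an assumption
sub_df = df[(df['time_col'] >= '2022-01-03') & (df['time_col'] < '2022-01-05')]
sub_ts = TSDataset.load_from_dataframe(
    sub_df,
    time_col='time_col',
    target_cols='value',
    freq='1h',
)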

LSTNet fit() raises an error

I am using PaddleTS's LSTNet for forecasting. When calling the model's fit, PyCharm reports [paddlets] [ERROR] ValueError: attr: shape doesn't exist!, and the Jupyter Notebook kernel dies outright. However, the same code runs fine on the PaddlePaddle AI Studio platform. What could be the reason? Fitting with MLP and RNN works without any problem.

Error when defining and loading a TSDataset

Running the example code from the official docs in Spyder:

import numpy as np
import pandas as pd
from paddlets import TSDataset

x = np.linspace(-np.pi, np.pi, 200)
sinx = np.sin(x) * 4 + np.random.randn(200)
df = pd.DataFrame(
    {
        'time_col': pd.date_range('2022-01-01', periods=200, freq='1h'),
        'value': sinx,
        'known_cov_1': sinx + 4,
        'known_cov_2': sinx + 5,
        'observed_cov': sinx + 8,
        'static_cov': [1 for i in range(200)],
    }
)

target_dataset = TSDataset.load_from_dataframe(
    df,  # Also can be path to the CSV file
    time_col='time_col',
    target_cols='value',
    freq='1h'
)

Output:
[2022-12-16 18:45:10,564] [paddlets] [ERROR] ValueError: attr: shape doesn't exist!

paddlets.datasets.tsdataset.TSDataset.concat seems to have a bug

I have a predict_result_list in which every element is valid. However, after combining them with paddlets.datasets.tsdataset.TSDataset.concat, only the first piece of data in the result is correct; all the remaining data becomes NaN, and no error is raised.
Here is my code:

# Predict with test_data
predict_result_list = []
for i in paddletsData_list_X:
    predicted_dataset = model.predict(i)  # feed the normalized dataset into predict
    predict_result_list.append(predicted_dataset)

print(predict_result_list[1])

# Combine the list into a single paddlets dataset
total_minmax_result = paddlets.datasets.tsdataset.TSDataset.concat(predict_result_list, axis=1)
total_minmax_result

All the data inside predict_result_list is valid, but after applying paddlets.datasets.tsdataset.TSDataset.concat, only the first piece of data in the result is normal and the rest is all NaN, while the time index is still normal and continuous.

When training monthly data with AutoTS, pd.to_timedelta(train_index.freq) called at line 119 of utils.py raises an error

Problem: the pd.to_timedelta(train_index.freq) call at line 119 of utils.py does not seem to support monthly frequencies. When I try to train on data with monthly granularity, the freq in TSDataset is automatically converted to a month-based offset, which leads to the error message below.

ts = TSDataset.load_from_dataframe(
    data,  # Also can be path to the CSV file
    time_col='time_col',
    target_cols=target_field,
    known_cov_cols=data.columns.drop(['time_col', target_field]).tolist(),
)
'''
**沿海煤炭运价指数 国际波罗的海干散货海运总和指数 PMI 矿石 焦炭
time_col
2018-01-01 1.49331 1242.000000 51.0 72.227273 2169.090909
2018-02-01 1.09641 1126.157895 50.0 72.769231 1960.000000
2018-03-01 1.05876 1154.238095 52.0 68.136364 1930.454545
2018-04-01 1.08231 1128.900000 51.0 64.764706 1725.500000
2018-05-01 1.26693 1293.095238 52.0 65.523810 1842.045455
2018-06-01 1.21853 1351.619048 52.0 64.500000 2220.000000
......
'''

from paddlets.automl.autots import AutoTS
from paddlets.models.forecasting import MLPRegressor
from ray.tune import uniform, qrandint, choice
from paddlets.transform import Fill

ts_train_val, ts_test = ts.split(0.8)
ts_train, ts_val = ts_train_val.split(0.8)
autots_model = AutoTS(NBEATSModel, 3, 1)
autots_model.fit(train_tsdataset=ts_train_scaled, valid_tsdataset=ts_val_scaled, n_trials=1)
sp = autots_model.search_space()
predicted = autots_model.predict(ts_test_scaled)

The error message is as follows:
Failure # 1 (occurred at 2022-11-21_14-41-47)
ray::ImplicitFunc.train() (pid=9724, ip=127.0.0.1, repr=run_trial)
File "python\ray_raylet.pyx", line 859, in ray._raylet.execute_task
File "python\ray_raylet.pyx", line 863, in ray._raylet.execute_task
File "python\ray_raylet.pyx", line 810, in ray._raylet.execute_task.function_executor
File "D:\Anaconda\envs\time_series_py38\lib\site-packages\ray_private\function_manager.py", line 674, in actor_method_executor
return method(__ray_actor, *args, **kwargs)
File "D:\Anaconda\envs\time_series_py38\lib\site-packages\ray\util\tracing\tracing_helper.py", line 466, in _resume_span
return method(self, *_args, **_kwargs)
File "D:\Anaconda\envs\time_series_py38\lib\site-packages\ray\tune\trainable\trainable.py", line 355, in train
raise skipped from exception_cause(skipped)
File "D:\Anaconda\envs\time_series_py38\lib\site-packages\ray\tune\trainable\function_trainable.py", line 325, in entrypoint
return self._trainable_func(
File "D:\Anaconda\envs\time_series_py38\lib\site-packages\ray\util\tracing\tracing_helper.py", line 466, in _resume_span
return method(self, *_args, **_kwargs)
File "D:\Anaconda\envs\time_series_py38\lib\site-packages\ray\tune\trainable\function_trainable.py", line 651, in _trainable_func
output = fn()
File "C:\Users\ouyeel\AppData\Roaming\Python\Python38\site-packages\paddlets\automl\optimize_runner.py", line 174, in run_trial
score = fit_and_score(train_data=train_tsdataset,
File "C:\Users\ouyeel\AppData\Roaming\Python\Python38\site-packages\paddlets\utils\validation.py", line 135, in fit_and_score
and check_train_valid_continuity(train_data, valid_data):
File "C:\Users\ouyeel\AppData\Roaming\Python\Python38\site-packages\paddlets\utils\utils.py", line 119, in check_train_valid_continuity
continuious = (valid_index[0] - train_index[-1] == pd.to_timedelta(train_index.freq))
File "C:\Users\ouyeel\AppData\Roaming\Python\Python38\site-packages\pandas\core\tools\timedeltas.py", line 142, in to_timedelta
return _coerce_scalar_to_timedelta_type(arg, unit=unit, errors=errors)
File "C:\Users\ouyeel\AppData\Roaming\Python\Python38\site-packages\pandas\core\tools\timedeltas.py", line 150, in _coerce_scalar_to_timedelta_type
result = Timedelta(r, unit)
File "pandas_libs\tslibs\timedeltas.pyx", line 1315, in pandas._libs.tslibs.timedeltas.Timedelta.new
ValueError: Value must be Timedelta, string, integer, float, timedelta or convertible, not MonthEnd
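
The root cause can be reproduced with pandas alone, independently of PaddleTS: month-based offsets (MonthEnd, MonthBegin, ...) are calendar offsets without a fixed duration, so they cannot be coerced to a Timedelta, which is exactly what check_train_valid_continuity attempts. A minimal reproduction, assuming any recent pandas version:

import pandas as pd

# A monthly DatetimeIndex carries a <MonthEnd> offset, which has no fixed length.
idx = pd.date_range('2018-01-01', periods=6, freq='M')
pd.to_timedelta(idx.freq)
# ValueError: Value must be Timedelta, string, integer, float, timedelta or convertible, not MonthEnd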

Error when using backtest

The code is as follows:

from paddlets.utils import backtest

q_loss, quantiles = backtest(data=ts_test_scaled,
                             model=deepar,
                             start="2021/9/24 19:00",
                             metric=QuantileLoss([0.1, 0.5, 0.9]),
                             predict_window=306,
                             stride=306,
                             return_predicts=True
                             )

quantiles.plot(
    add_data=ts_test_scaled,
    low_quantile=0.05,
    high_quantile=0.95
)
plt.show()

The error is as follows:
[2022-11-07 16:10:23,106] [paddlets.models.common.callbacks.callbacks] [INFO] Best weights from best epoch are automatically used!
[2022-11-07 16:10:23,119] [paddlets] [ERROR] ValueError: attr: shape doesn't exist!
[2022-11-07 16:10:23,120] [paddlets] [ERROR] ValueError: attr: shape doesn't exist!
... (the same "[paddlets] [ERROR] ValueError: attr: shape doesn't exist!" line is repeated many more times, with timestamps running from 16:10:23 to 16:10:39) ...
C:\ProgramData\Anaconda3\envs\paddlets\lib\site-packages\paddlets\automl\searcher.py:4: DeprecationWarning: The module ray.tune.suggest has been moved to ray.tune.search and the old location will be deprecated soon. Please adjust your imports to point to the new location. Example: Do a global search and replace ray.tune.suggest with ray.tune.search.
from ray.tune.suggest import BasicVariantGenerator
C:\ProgramData\Anaconda3\envs\paddlets\lib\site-packages\paddlets\automl\searcher.py:5: DeprecationWarning: The module ray.tune.suggest.optuna has been moved to ray.tune.search.optuna and the old location will be deprecated soon. Please adjust your imports to point to the new location. Example: Do a global search and replace ray.tune.suggest.optuna with ray.tune.search.optuna.
from ray.tune.suggest.optuna import OptunaSearch
C:\ProgramData\Anaconda3\envs\paddlets\lib\site-packages\flaml\tune\__init__.py:5: DeprecationWarning: The module ray.tune.sample has been moved to ray.tune.search.sample and the old location will be deprecated soon. Please adjust your imports to point to the new location. Example: Do a global search and replace ray.tune.sample with ray.tune.search.sample.
from ray.tune import (
C:\ProgramData\Anaconda3\envs\paddlets\lib\site-packages\flaml\tune\space.py:6: DeprecationWarning: The module ray.tune.suggest.variant_generator has been moved to ray.tune.search.variant_generator and the old location will be deprecated soon. Please adjust your imports to point to the new location. Example: Do a global search and replace ray.tune.suggest.variant_generator with ray.tune.search.variant_generator.
from ray.tune.suggest.variant_generator import generate_variants
C:\ProgramData\Anaconda3\envs\paddlets\lib\site-packages\paddlets\automl\searcher.py:6: DeprecationWarning: The module ray.tune.suggest.flaml has been moved to ray.tune.search.flaml and the old location will be deprecated soon. Please adjust your imports to point to the new location. Example: Do a global search and replace ray.tune.suggest.flaml with ray.tune.search.flaml.
from ray.tune.suggest.flaml import CFO
C:\ProgramData\Anaconda3\envs\paddlets\lib\site-packages\paddlets\automl\searcher.py:8: DeprecationWarning: The module ray.tune.suggest.bohb has been moved to ray.tune.search.bohb and the old location will be deprecated soon. Please adjust your imports to point to the new location. Example: Do a global search and replace ray.tune.suggest.bohb with ray.tune.search.bohb.
from ray.tune.suggest.bohb import TuneBOHB
Backtest Progress: 0%| | 0/9 [00:00<?, ?it/s]C:\ProgramData\Anaconda3\envs\paddlets\lib\site-packages\paddle\fluid\layers\tensor.py:657: UserWarning: paddle.assign doesn't support float64 input now due to current platform protobuf data limitation, we convert it to float32
warnings.warn(
Backtest Progress: 0%| | 0/9 [00:01<?, ?it/s]
[2022-11-07 16:10:57,802] [paddlets] [ERROR] ValueError: attr: shape doesn't exist!
... (the same error line is repeated several more times, with timestamps up to 16:10:57,983) ...
Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm 2021.3.2\plugins\python\helpers\pydev\pydevd.py", line 1483, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\Program Files\JetBrains\PyCharm 2021.3.2\plugins\python\helpers\pydev_pydev_imps_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "E:/Paddle-release-2.2/PaddleTS/demos/fit_deepar_model.py", line 46, in
q_loss, quantiles = backtest(data=ts_test_scaled,
File "C:\ProgramData\Anaconda3\envs\paddlets\lib\site-packages\paddlets\utils\backtest.py", line 150, in backtest
score_dict = metric(real, predict)
File "C:\ProgramData\Anaconda3\envs\paddlets\lib\site-packages\paddlets\metrics\base.py", line 262, in call
res_array = self._build_prob_metrics_data(tsdataset_true, tsdataset_pred, self._TYPE)
File "C:\ProgramData\Anaconda3\envs\paddlets\lib\site-packages\paddlets\metrics\base.py", line 104, in _build_prob_metrics_data
target_true, target_pred = self._reindex_data(target_true, target_pred)
File "C:\ProgramData\Anaconda3\envs\paddlets\lib\site-packages\paddlets\metrics\base.py", line 197, in _reindex_data
merge_index = pd.merge(target_true.time_index.to_frame(index=False), target_pred.time_index.to_frame(index=False))
File "C:\ProgramData\Anaconda3\envs\paddlets\lib\site-packages\pandas\core\reshape\merge.py", line 106, in merge
op = _MergeOperation(
File "C:\ProgramData\Anaconda3\envs\paddlets\lib\site-packages\pandas\core\reshape\merge.py", line 681, in init
self._validate_specification()
File "C:\ProgramData\Anaconda3\envs\paddlets\lib\site-packages\pandas\core\reshape\merge.py", line 1346, in _validate_specification
raise MergeError(
pandas.errors.MergeError: No common columns to perform merge on. Merge options: left_on=None, right_on=None, left_index=False, right_index=False

AutoTS fit() raises errors: KeyError: 'C'; FileNotFoundError: [Errno 2] No such file or directory

When running the example notebook https://github.com/PaddlePaddle/PaddleTS/blob/main/examples/autots_example.ipynb, the following error occurs at the autots_model.fit(tsdataset) step:
(pid=30292) C:\Users\Administrator\AppData\Roaming\Python\Python38\site-packages\paddlets\automl\searcher.py:4: DeprecationWarning: The module ray.tune.suggest has been moved to ray.tune.search and the old location will be deprecated soon. Please adjust your imports to point to the new location. Example: Do a global search and replace ray.tune.suggest with ray.tune.search.
(pid=30292) from ray.tune.suggest import BasicVariantGenerator
(pid=30292) C:\Users\Administrator\AppData\Roaming\Python\Python38\site-packages\paddlets\automl\searcher.py:5: DeprecationWarning: The module ray.tune.suggest.optuna has been moved to ray.tune.search.optuna and the old location will be deprecated soon. Please adjust your imports to point to the new location. Example: Do a global search and replace ray.tune.suggest.optuna with ray.tune.search.optuna.
(pid=30292) from ray.tune.suggest.optuna import OptunaSearch
(pid=30292) C:\Users\Administrator\AppData\Roaming\Python\Python38\site-packages\paddlets\automl\searcher.py:6: DeprecationWarning: The module ray.tune.suggest.flaml has been moved to ray.tune.search.flaml and the old location will be deprecated soon. Please adjust your imports to point to the new location. Example: Do a global search and replace ray.tune.suggest.flaml with ray.tune.search.flaml.
(pid=30292) from ray.tune.suggest.flaml import CFO
(pid=30292) C:\Users\Administrator\AppData\Roaming\Python\Python38\site-packages\paddlets\automl\searcher.py:8: DeprecationWarning: The module ray.tune.suggest.bohb has been moved to ray.tune.search.bohb and the old location will be deprecated soon. Please adjust your imports to point to the new location. Example: Do a global search and replace ray.tune.suggest.bohb with ray.tune.search.bohb.
(pid=30292) from ray.tune.suggest.bohb import TuneBOHB
(pid=30292) C:\Users\Administrator\AppData\Roaming\Python\Python38\site-packages\paddlets\automl\search_space_configer.py:8: DeprecationWarning: The module ray.tune.sample has been moved to ray.tune.search.sample and the old location will be deprecated soon. Please adjust your imports to point to the new location. Example: Do a global search and replace ray.tune.sample with ray.tune.search.sample.
(pid=30292) from ray.tune.sample import Float, Integer, Categorical
(run_trial pid=30292) [2022-11-14 01:01:15,022] [paddlets.automl.optimize_runner] [INFO] trial config: {'hidden_config': 'Choice_0: [64]', 'use_bn': True, 'batch_size': 112, 'max_epochs': 330, 'optimizer_params': {'learning_rate': 0.009473699093873009}, 'patience': 15}
(run_trial pid=30292) [2022-11-14 01:01:15,022] [paddlets.automl.optimize_runner] [INFO] setup_estimator: init model. Params: {'hidden_config': [64], 'use_bn': True, 'batch_size': 112, 'max_epochs': 330, 'optimizer_params': {'learning_rate': 0.009473699093873009}, 'patience': 15, 'in_chunk_len': 96, 'out_chunk_len': 2, 'skip_chunk_len': 0, 'sampling_stride': 1, 'seed': 2022}
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(

KeyError Traceback (most recent call last)
~\anaconda3\lib\site-packages\tensorboardX\record_writer.py in open_file(path)
57 prefix = path.split(':')[0]
---> 58 factory = REGISTERED_FACTORIES[prefix]
59 return factory.open(path)

KeyError: 'C'

During handling of the above exception, another exception occurred:

FileNotFoundError Traceback (most recent call last)
~\anaconda3\lib\site-packages\ray\tune\execution\trial_runner.py in _wait_and_handle_event(self, next_trial)
832 if event.type == _ExecutorEventType.PG_READY:
--> 833 self._on_pg_ready(next_trial)
834 elif event.type == _ExecutorEventType.NO_RUNNING_TRIAL_TIMEOUT:

~\anaconda3\lib\site-packages\ray\tune\execution\trial_runner.py in _on_pg_ready(self, next_trial)
922 logger.debug(f"Trying to start trial: {next_trial}")
--> 923 if not _start_trial(next_trial) and next_trial.status != Trial.ERROR:
924 # Only try to start another trial if previous trial startup

~\anaconda3\lib\site-packages\ray\tune\execution\trial_runner.py in _start_trial(trial)
914 if self.trial_executor.start_trial(trial):
--> 915 self._callbacks.on_trial_start(
916 iteration=self._iteration, trials=self._trials, trial=trial

~\anaconda3\lib\site-packages\ray\tune\callback.py in on_trial_start(self, **info)
316 for callback in self._callbacks:
--> 317 callback.on_trial_start(**info)
318

~\anaconda3\lib\site-packages\ray\tune\logger\logger.py in on_trial_start(self, iteration, trials, trial, **info)
134 ):
--> 135 self.log_trial_start(trial)
136

~\anaconda3\lib\site-packages\ray\tune\logger\tensorboardx.py in log_trial_start(self, trial)
178 trial.init_logdir()
--> 179 self._trial_writer[trial] = self._summary_writer_cls(
180 trial.logdir, flush_secs=30

~\anaconda3\lib\site-packages\tensorboardX\writer.py in __init__(self, logdir, comment, purge_step, max_queue, flush_secs, filename_suffix, write_to_disk, log_dir, comet_config, **kwargs)
300 self.file_writer = self.all_writers = None
--> 301 self._get_file_writer()
302

~\anaconda3\lib\site-packages\tensorboardX\writer.py in _get_file_writer(self)
348 if self.all_writers is None or self.file_writer is None:
--> 349 self.file_writer = FileWriter(logdir=self.logdir,
350 max_queue=self._max_queue,

~\anaconda3\lib\site-packages\tensorboardX\writer.py in __init__(self, logdir, max_queue, flush_secs, filename_suffix)
104 logdir = str(logdir)
--> 105 self.event_writer = EventFileWriter(
106 logdir, max_queue, flush_secs, filename_suffix)

~\anaconda3\lib\site-packages\tensorboardX\event_file_writer.py in __init__(self, logdir, max_queue_size, flush_secs, filename_suffix)
105 self._event_queue = multiprocessing.Queue(max_queue_size)
--> 106 self._ev_writer = EventsWriter(os.path.join(
107 self._logdir, "events"), filename_suffix)

~\anaconda3\lib\site-packages\tensorboardX\event_file_writer.py in __init__(self, file_prefix, filename_suffix)
42 self._num_outstanding_events = 0
---> 43 self._py_recordio_writer = RecordWriter(self._file_name)
44 # Initialize an event instance.

~\anaconda3\lib\site-packages\tensorboardX\record_writer.py in __init__(self, path)
178 self._writer = None
--> 179 self._writer = open_file(path)
180

~\anaconda3\lib\site-packages\tensorboardX\record_writer.py in open_file(path)
60 except KeyError:
---> 61 return open(path, 'wb')
62

FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\Administrator\ray_results\run_trial_2022-11-14_01-01-08\run_trial_c88b9bca_2_batch_size=120,hidden_config=Choice_5_128_128_128,max_epochs=180,learning_rate=0.0042,patience=45,use_bn=Fals_2022-11-14_01-01-15\events.out.tfevents.1668358875.SK-20200101ELSU'

During handling of the above exception, another exception occurred:

TuneError Traceback (most recent call last)
in
----> 1 autots_model.fit(tsdataset)

~\AppData\Roaming\Python\Python38\site-packages\paddlets\logger\logger.py in wrapper(*args, **kwargs)
25 logger = Logger(module_name)
26 logger.debug("function:%s" % func_name)
---> 27 result = f(*args, **kwargs)
28 return result
29

~\AppData\Roaming\Python\Python38\site-packages\paddlets\automl\autots.py in fit(self, train_tsdataset, valid_tsdataset, n_trails, cpu_resource, gpu_resource)
205 if valid_tsdataset is None:
206 raise NotImplementedError("When the train_tsdataset is a list, valid_tsdataset is required!")
--> 207 analysis = self._optimize_runner.optimize(self._estimator,
208 self._in_chunk_len,
209 self._out_chunk_len,

~\AppData\Roaming\Python\Python38\site-packages\paddlets\automl\optimize_runner.py in optimize(self, paddlets_estimator, in_chunk_len, out_chunk_len, train_tsdataset, valid_tsdataset, sampling_stride, skip_chunk_len, search_space, metric, mode, resampling_strategy, split_ratio, k_fold, n_trials, cpu_resource, gpu_resource, local_dir)
202 self._track_choice_mapping = self._preprocess_search_space(running_search_space)
203
--> 204 return tune.run(run_trial, num_samples=n_trials, config=running_search_space,
205 metric=self.report_metric,
206 mode=mode,

~\anaconda3\lib\site-packages\ray\tune\tune.py in run(run_or_experiment, name, metric, mode, stop, time_budget_s, config, resources_per_trial, num_samples, local_dir, search_alg, scheduler, keep_checkpoints_num, checkpoint_score_attr, checkpoint_freq, checkpoint_at_end, verbose, progress_reporter, log_to_file, trial_name_creator, trial_dirname_creator, sync_config, export_formats, max_failures, fail_fast, restore, server_port, resume, reuse_actors, trial_executor, raise_on_failed_trial, callbacks, max_concurrent_trials, _experiment_checkpoint_dir, _remote)
739 )
740 while not runner.is_finished() and not state["signal"]:
--> 741 runner.step()
742 if has_verbosity(Verbosity.V1_EXPERIMENT):
743 _report_progress(runner, progress_reporter)

~\anaconda3\lib\site-packages\ray\tune\execution\trial_runner.py in step(self)
884 logger.debug(f"Got new trial to run: {next_trial}")
885
--> 886 self._wait_and_handle_event(next_trial)
887
888 self._stop_experiment_if_needed()

~\anaconda3\lib\site-packages\ray\tune\execution\trial_runner.py in _wait_and_handle_event(self, next_trial)
863 raise e
864 else:
--> 865 raise TuneError(traceback.format_exc())
866
867 def step(self):

TuneError: Traceback (most recent call last):
File "C:\Users\Administrator\anaconda3\lib\site-packages\tensorboardX\record_writer.py", line 58, in open_file
factory = REGISTERED_FACTORIES[prefix]
KeyError: 'C'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\Administrator\anaconda3\lib\site-packages\ray\tune\execution\trial_runner.py", line 833, in _wait_and_handle_event
self._on_pg_ready(next_trial)
File "C:\Users\Administrator\anaconda3\lib\site-packages\ray\tune\execution\trial_runner.py", line 923, in _on_pg_ready
if not _start_trial(next_trial) and next_trial.status != Trial.ERROR:
File "C:\Users\Administrator\anaconda3\lib\site-packages\ray\tune\execution\trial_runner.py", line 915, in _start_trial
self._callbacks.on_trial_start(
File "C:\Users\Administrator\anaconda3\lib\site-packages\ray\tune\callback.py", line 317, in on_trial_start
callback.on_trial_start(**info)
File "C:\Users\Administrator\anaconda3\lib\site-packages\ray\tune\logger\logger.py", line 135, in on_trial_start
self.log_trial_start(trial)
File "C:\Users\Administrator\anaconda3\lib\site-packages\ray\tune\logger\tensorboardx.py", line 179, in log_trial_start
self._trial_writer[trial] = self._summary_writer_cls(
File "C:\Users\Administrator\anaconda3\lib\site-packages\tensorboardX\writer.py", line 301, in init
self._get_file_writer()
File "C:\Users\Administrator\anaconda3\lib\site-packages\tensorboardX\writer.py", line 349, in _get_file_writer
self.file_writer = FileWriter(logdir=self.logdir,
File "C:\Users\Administrator\anaconda3\lib\site-packages\tensorboardX\writer.py", line 105, in init
self.event_writer = EventFileWriter(
File "C:\Users\Administrator\anaconda3\lib\site-packages\tensorboardX\event_file_writer.py", line 106, in init
self._ev_writer = EventsWriter(os.path.join(
File "C:\Users\Administrator\anaconda3\lib\site-packages\tensorboardX\event_file_writer.py", line 43, in init
self._py_recordio_writer = RecordWriter(self._file_name)
File "C:\Users\Administrator\anaconda3\lib\site-packages\tensorboardX\record_writer.py", line 179, in init
self._writer = open_file(path)
File "C:\Users\Administrator\anaconda3\lib\site-packages\tensorboardX\record_writer.py", line 61, in open_file
return open(path, 'wb')
FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\Administrator\ray_results\run_trial_2022-11-14_01-01-08\run_trial_c88b9bca_2_batch_size=120,hidden_config=Choice_5_128_128_128,max_epochs=180,learning_rate=0.0042,patience=45,use_bn=Fals_2022-11-14_01-01-15\events.out.tfevents.1668358875.SK-20200101ELSU'
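
A hedged reading of the traceback above: tensorboardX's open_file splits the log path on ':' to detect a storage prefix (such as 's3'), so the Windows drive letter 'C:' is mistaken for a prefix and looked up in REGISTERED_FACTORIES, raising KeyError: 'C'; the fallback open(path, 'wb') then fails, most likely because the very long auto-generated Ray Tune trial directory does not exist or runs into the Windows path-length limit. A minimal illustration of the prefix logic, with the path from the traceback shortened:

# What tensorboardX/record_writer.py does with the event-file path (see lines 57-61 quoted above)
path = r'C:\Users\Administrator\ray_results\run_trial_2022-11-14_01-01-08\...\events.out.tfevents'
prefix = path.split(':')[0]   # -> 'C' on Windows, mistaken for a storage-backend prefix
# REGISTERED_FACTORIES['C'] raises KeyError: 'C'; the code then falls back to open(path, 'wb'),
# which raises FileNotFoundError when the directory cannot be found or created.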

(run_trial pid=30292) [2022-11-14 01:01:15,572] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 000| loss: 3.339083| val_0_mae: 0.883445| 0:00:00s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=30292) [2022-11-14 01:01:15,993] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 001| loss: 1.339651| val_0_mae: 0.780203| 0:00:00s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=30292) [2022-11-14 01:01:16,418] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 002| loss: 1.073304| val_0_mae: 0.742995| 0:00:01s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=30292) [2022-11-14 01:01:16,830] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 003| loss: 1.000667| val_0_mae: 0.761179| 0:00:01s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=30292) [2022-11-14 01:01:17,249] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 004| loss: 0.964694| val_0_mae: 0.707594| 0:00:02s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=30292) [2022-11-14 01:01:17,678] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 005| loss: 0.972320| val_0_mae: 0.850871| 0:00:02s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=30292) [2022-11-14 01:01:18,108] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 006| loss: 0.966438| val_0_mae: 0.885582| 0:00:02s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=30292) [2022-11-14 01:01:18,520] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 007| loss: 0.896922| val_0_mae: 0.909875| 0:00:03s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=30292) [2022-11-14 01:01:18,943] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 008| loss: 0.932748| val_0_mae: 0.702189| 0:00:03s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=30292) [2022-11-14 01:01:19,355] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 009| loss: 0.883199| val_0_mae: 0.681072| 0:00:04s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(pid=42476) C:\Users\Administrator\AppData\Roaming\Python\Python38\site-packages\paddlets\automl\searcher.py:4: DeprecationWarning: The module ray.tune.suggest has been moved to ray.tune.search and the old location will be deprecated soon. Please adjust your imports to point to the new location. Example: Do a global search and replace ray.tune.suggest with ray.tune.search.
(pid=42476) from ray.tune.suggest import BasicVariantGenerator
(run_trial pid=30292) [2022-11-14 01:01:19,774] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 010| loss: 0.942490| val_0_mae: 0.726340| 0:00:04s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(pid=42476) C:\Users\Administrator\AppData\Roaming\Python\Python38\site-packages\paddlets\automl\searcher.py:5: DeprecationWarning: The module ray.tune.suggest.optuna has been moved to ray.tune.search.optuna and the old location will be deprecated soon. Please adjust your imports to point to the new location. Example: Do a global search and replace ray.tune.suggest.optuna with ray.tune.search.optuna.
(pid=42476) from ray.tune.suggest.optuna import OptunaSearch
(run_trial pid=30292) [2022-11-14 01:01:20,197] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 011| loss: 0.863567| val_0_mae: 0.679934| 0:00:05s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(pid=42476) C:\Users\Administrator\AppData\Roaming\Python\Python38\site-packages\paddlets\automl\searcher.py:6: DeprecationWarning: The module ray.tune.suggest.flaml has been moved to ray.tune.search.flaml and the old location will be deprecated soon. Please adjust your imports to point to the new location. Example: Do a global search and replace ray.tune.suggest.flaml with ray.tune.search.flaml.
(pid=42476) from ray.tune.suggest.flaml import CFO
(run_trial pid=30292) [2022-11-14 01:01:20,627] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 012| loss: 0.920713| val_0_mae: 0.684235| 0:00:05s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(pid=42476) C:\Users\Administrator\AppData\Roaming\Python\Python38\site-packages\paddlets\automl\searcher.py:8: DeprecationWarning: The module ray.tune.suggest.bohb has been moved to ray.tune.search.bohb and the old location will be deprecated soon. Please adjust your imports to point to the new location. Example: Do a global search and replace ray.tune.suggest.bohb with ray.tune.search.bohb.
(pid=42476) from ray.tune.suggest.bohb import TuneBOHB
(pid=42476) C:\Users\Administrator\AppData\Roaming\Python\Python38\site-packages\paddlets\automl\search_space_configer.py:8: DeprecationWarning: The module ray.tune.sample has been moved to ray.tune.search.sample and the old location will be deprecated soon. Please adjust your imports to point to the new location. Example: Do a global search and replace ray.tune.sample with ray.tune.search.sample.
(pid=42476) from ray.tune.sample import Float, Integer, Categorical
(run_trial pid=30292) [2022-11-14 01:01:21,056] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 013| loss: 0.847748| val_0_mae: 0.739015| 0:00:05s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=42476) [2022-11-14 01:01:21,147] [paddlets.automl.optimize_runner] [INFO] trial config: {'hidden_config': 'Choice_5: [128, 128, 128]', 'use_bn': False, 'batch_size': 120, 'max_epochs': 180, 'optimizer_params': {'learning_rate': 0.004155371012314867}, 'patience': 45}
(run_trial pid=42476) [2022-11-14 01:01:21,147] [paddlets.automl.optimize_runner] [INFO] setup_estimator: init model. Params: {'hidden_config': [128, 128, 128], 'use_bn': False, 'batch_size': 120, 'max_epochs': 180, 'optimizer_params': {'learning_rate': 0.004155371012314867}, 'patience': 45, 'in_chunk_len': 96, 'out_chunk_len': 2, 'skip_chunk_len': 0, 'sampling_stride': 1, 'seed': 2022}
(run_trial pid=30292) [2022-11-14 01:01:21,477] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 014| loss: 0.877982| val_0_mae: 0.915531| 0:00:06s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=42476) [2022-11-14 01:01:21,839] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 000| loss: 2.250421| val_0_mae: 0.866377| 0:00:00s
(run_trial pid=30292) [2022-11-14 01:01:21,898] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 015| loss: 0.865720| val_0_mae: 1.103737| 0:00:06s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=30292) [2022-11-14 01:01:22,317] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 016| loss: 0.882259| val_0_mae: 0.690138| 0:00:07s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=42476) [2022-11-14 01:01:22,436] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 001| loss: 1.000779| val_0_mae: 0.903012| 0:00:01s
(run_trial pid=30292) [2022-11-14 01:01:22,734] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 017| loss: 0.853780| val_0_mae: 0.726888| 0:00:07s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=42476) [2022-11-14 01:01:23,041] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 002| loss: 0.970277| val_0_mae: 0.745609| 0:00:01s
(run_trial pid=30292) [2022-11-14 01:01:23,157] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 018| loss: 0.864675| val_0_mae: 0.746551| 0:00:08s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=30292) [2022-11-14 01:01:23,573] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 019| loss: 0.852602| val_0_mae: 0.806146| 0:00:08s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=42476) [2022-11-14 01:01:23,761] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 003| loss: 0.924974| val_0_mae: 0.824449| 0:00:02s
(run_trial pid=30292) [2022-11-14 01:01:23,990] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 020| loss: 0.865885| val_0_mae: 0.668478| 0:00:08s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=30292) [2022-11-14 01:01:24,421] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 021| loss: 0.822864| val_0_mae: 0.782634| 0:00:09s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=42476) [2022-11-14 01:01:24,499] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 004| loss: 0.923414| val_0_mae: 0.789935| 0:00:03s
(run_trial pid=30292) [2022-11-14 01:01:24,829] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 022| loss: 0.854838| val_0_mae: 0.668849| 0:00:09s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=30292) [2022-11-14 01:01:25,238] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 023| loss: 0.832975| val_0_mae: 0.848204| 0:00:10s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=42476) [2022-11-14 01:01:25,267] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 005| loss: 0.884985| val_0_mae: 0.797978| 0:00:04s
(run_trial pid=30292) [2022-11-14 01:01:25,646] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 024| loss: 0.803786| val_0_mae: 0.771176| 0:00:10s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=30292) [2022-11-14 01:01:26,063] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 025| loss: 0.831367| val_0_mae: 0.755703| 0:00:10s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=42476) [2022-11-14 01:01:26,054] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 006| loss: 0.846988| val_0_mae: 0.720191| 0:00:04s
(run_trial pid=30292) [2022-11-14 01:01:26,473] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 026| loss: 0.809912| val_0_mae: 0.741790| 0:00:11s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=30292) [2022-11-14 01:01:26,881] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 027| loss: 0.813126| val_0_mae: 0.743538| 0:00:11s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=42476) [2022-11-14 01:01:26,853] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 007| loss: 0.790365| val_0_mae: 0.783437| 0:00:05s
(run_trial pid=30292) [2022-11-14 01:01:27,289] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 028| loss: 0.798136| val_0_mae: 0.682494| 0:00:12s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=42476) [2022-11-14 01:01:27,680] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 008| loss: 0.816146| val_0_mae: 0.837121| 0:00:06s
(run_trial pid=30292) [2022-11-14 01:01:27,698] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 029| loss: 0.816031| val_0_mae: 0.689467| 0:00:12s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=30292) [2022-11-14 01:01:28,115] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 030| loss: 0.798248| val_0_mae: 0.810028| 0:00:13s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=42476) [2022-11-14 01:01:28,521] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 009| loss: 0.844155| val_0_mae: 0.773559| 0:00:07s
(run_trial pid=30292) [2022-11-14 01:01:28,536] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 031| loss: 0.782819| val_0_mae: 0.675081| 0:00:13s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=30292) [2022-11-14 01:01:28,941] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 032| loss: 0.830348| val_0_mae: 0.703546| 0:00:13s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=30292) [2022-11-14 01:01:29,344] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 033| loss: 0.787820| val_0_mae: 0.671389| 0:00:14s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=42476) [2022-11-14 01:01:29,361] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 010| loss: 0.800365| val_0_mae: 0.769280| 0:00:08s
(run_trial pid=30292) [2022-11-14 01:01:29,764] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 034| loss: 0.812075| val_0_mae: 0.731617| 0:00:14s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\paddle\nn\layer\norm.py:653: UserWarning: When training, we now always track global mean and variance.
(run_trial pid=30292) warnings.warn(
(run_trial pid=30292) [2022-11-14 01:01:30,184] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 035| loss: 0.808069| val_0_mae: 0.674505| 0:00:15s
(run_trial pid=30292) [2022-11-14 01:01:30,185] [paddlets.models.common.callbacks.callbacks] [INFO]
(run_trial pid=30292) Early stopping occurred at epoch 35 with best_epoch = 20 and best_val_0_mae = 0.668478
(run_trial pid=30292) [2022-11-14 01:01:30,185] [paddlets.models.common.callbacks.callbacks] [INFO] Best weights from best epoch are automatically used!
(run_trial pid=42476) [2022-11-14 01:01:30,183] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 011| loss: 0.814131| val_0_mae: 0.712737| 0:00:08s
(run_trial pid=42476) [2022-11-14 01:01:31,032] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 012| loss: 0.808135| val_0_mae: 0.739639| 0:00:09s
(run_trial pid=42476) [2022-11-14 01:01:31,897] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 013| loss: 0.814179| val_0_mae: 0.755509| 0:00:10s
(run_trial pid=42476) [2022-11-14 01:01:32,795] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 014| loss: 0.815168| val_0_mae: 0.766864| 0:00:11s
(run_trial pid=42476) [2022-11-14 01:01:33,732] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 015| loss: 0.753962| val_0_mae: 0.690585| 0:00:12s
(run_trial pid=42476) [2022-11-14 01:01:34,675] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 016| loss: 0.801936| val_0_mae: 0.709537| 0:00:13s
(run_trial pid=42476) [2022-11-14 01:01:35,645] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 017| loss: 0.794926| val_0_mae: 0.749197| 0:00:14s
(run_trial pid=42476) [2022-11-14 01:01:36,600] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 018| loss: 0.762111| val_0_mae: 0.678700| 0:00:15s
(run_trial pid=42476) [2022-11-14 01:01:37,540] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 019| loss: 0.745253| val_0_mae: 0.922469| 0:00:16s
(run_trial pid=42476) [2022-11-14 01:01:38,506] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 020| loss: 0.787604| val_0_mae: 0.700718| 0:00:17s
(run_trial pid=42476) [2022-11-14 01:01:39,488] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 021| loss: 0.732736| val_0_mae: 0.741522| 0:00:18s
(run_trial pid=42476) [2022-11-14 01:01:40,478] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 022| loss: 0.756187| val_0_mae: 0.687385| 0:00:19s
(run_trial pid=42476) [2022-11-14 01:01:41,501] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 023| loss: 0.751891| val_0_mae: 0.690699| 0:00:20s
(run_trial pid=42476) [2022-11-14 01:01:42,519] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 024| loss: 0.727448| val_0_mae: 0.682869| 0:00:21s
(run_trial pid=42476) [2022-11-14 01:01:43,555] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 025| loss: 0.755604| val_0_mae: 0.895392| 0:00:22s
(run_trial pid=42476) [2022-11-14 01:01:44,597] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 026| loss: 0.792194| val_0_mae: 0.706429| 0:00:23s
(run_trial pid=42476) [2022-11-14 01:01:45,655] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 027| loss: 0.744930| val_0_mae: 0.791593| 0:00:24s
(run_trial pid=30292) C:\Users\Administrator\anaconda3\lib\site-packages\ray\tune\trainable\session.py:229: DeprecationWarning: tune.report and tune.checkpoint_dir APIs are deprecated in Ray 2.0, and is replaced by ray.air.session. This will provide an easy-to-use API across Tune session and Data parallel worker sessions.The old APIs will be removed in the future.
(run_trial pid=30292) warnings.warn(
(run_trial pid=42476) [2022-11-14 01:01:46,718] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 028| loss: 0.748070| val_0_mae: 0.791396| 0:00:25s
(run_trial pid=42476) [2022-11-14 01:01:47,765] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 029| loss: 0.728586| val_0_mae: 0.787615| 0:00:26s
(run_trial pid=42476) [2022-11-14 01:01:48,822] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 030| loss: 0.729428| val_0_mae: 0.750572| 0:00:27s
(run_trial pid=42476) [2022-11-14 01:01:49,869] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 031| loss: 0.711299| val_0_mae: 0.708207| 0:00:28s
(run_trial pid=42476) [2022-11-14 01:01:50,932] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 032| loss: 0.735087| val_0_mae: 0.711936| 0:00:29s
(run_trial pid=42476) [2022-11-14 01:01:52,010] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 033| loss: 0.730024| val_0_mae: 0.741676| 0:00:30s
(run_trial pid=42476) [2022-11-14 01:01:53,085] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 034| loss: 0.703183| val_0_mae: 0.755382| 0:00:31s
(run_trial pid=42476) [2022-11-14 01:01:54,170] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 035| loss: 0.704181| val_0_mae: 0.686618| 0:00:32s
(run_trial pid=42476) [2022-11-14 01:01:55,267] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 036| loss: 0.704718| val_0_mae: 0.724146| 0:00:34s
(run_trial pid=42476) [2022-11-14 01:01:56,361] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 037| loss: 0.693106| val_0_mae: 0.697631| 0:00:35s
(run_trial pid=42476) [2022-11-14 01:01:57,466] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 038| loss: 0.727850| val_0_mae: 0.662641| 0:00:36s
(run_trial pid=42476) [2022-11-14 01:01:58,583] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 039| loss: 0.688710| val_0_mae: 0.698511| 0:00:37s
(run_trial pid=42476) [2022-11-14 01:01:59,692] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 040| loss: 0.685247| val_0_mae: 0.702356| 0:00:38s
(run_trial pid=42476) [2022-11-14 01:02:00,810] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 041| loss: 0.702037| val_0_mae: 0.677072| 0:00:39s
(run_trial pid=42476) [2022-11-14 01:02:01,935] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 042| loss: 0.675219| val_0_mae: 0.693985| 0:00:40s
(run_trial pid=42476) [2022-11-14 01:02:03,079] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 043| loss: 0.665016| val_0_mae: 0.688647| 0:00:41s
(run_trial pid=42476) [2022-11-14 01:02:04,227] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 044| loss: 0.714889| val_0_mae: 0.741173| 0:00:42s
(run_trial pid=42476) [2022-11-14 01:02:05,384] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 045| loss: 0.692830| val_0_mae: 0.690192| 0:00:44s
(run_trial pid=42476) [2022-11-14 01:02:06,557] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 046| loss: 0.674419| val_0_mae: 0.760584| 0:00:45s
(run_trial pid=42476) [2022-11-14 01:02:07,734] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 047| loss: 0.694529| val_0_mae: 0.695555| 0:00:46s
(run_trial pid=42476) [2022-11-14 01:02:08,914] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 048| loss: 0.670605| val_0_mae: 0.821415| 0:00:47s
(run_trial pid=42476) [2022-11-14 01:02:10,092] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 049| loss: 0.666438| val_0_mae: 0.755908| 0:00:48s
(run_trial pid=42476) [2022-11-14 01:02:11,287] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 050| loss: 0.699248| val_0_mae: 0.679514| 0:00:50s
(run_trial pid=42476) [2022-11-14 01:02:12,483] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 051| loss: 0.636179| val_0_mae: 0.672630| 0:00:51s
(run_trial pid=42476) [2022-11-14 01:02:13,704] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 052| loss: 0.654464| val_0_mae: 0.702088| 0:00:52s
(run_trial pid=42476) [2022-11-14 01:02:14,950] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 053| loss: 0.701259| val_0_mae: 0.688155| 0:00:53s
(run_trial pid=42476) [2022-11-14 01:02:16,178] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 054| loss: 0.666418| val_0_mae: 0.683142| 0:00:54s
(run_trial pid=42476) [2022-11-14 01:02:17,413] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 055| loss: 0.680250| val_0_mae: 0.683969| 0:00:56s
(run_trial pid=42476) [2022-11-14 01:02:18,641] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 056| loss: 0.668339| val_0_mae: 0.695880| 0:00:57s
(run_trial pid=42476) [2022-11-14 01:02:19,877] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 057| loss: 0.692120| val_0_mae: 0.701301| 0:00:58s
(run_trial pid=42476) [2022-11-14 01:02:21,113] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 058| loss: 0.665579| val_0_mae: 0.690541| 0:00:59s
(run_trial pid=42476) [2022-11-14 01:02:22,363] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 059| loss: 0.656419| val_0_mae: 0.765069| 0:01:01s
(run_trial pid=42476) [2022-11-14 01:02:23,606] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 060| loss: 0.682641| val_0_mae: 0.802213| 0:01:02s
(run_trial pid=42476) [2022-11-14 01:02:24,863] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 061| loss: 0.643246| val_0_mae: 0.765911| 0:01:03s
(run_trial pid=42476) [2022-11-14 01:02:26,120] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 062| loss: 0.657356| val_0_mae: 0.743121| 0:01:04s
(run_trial pid=42476) [2022-11-14 01:02:27,378] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 063| loss: 0.704962| val_0_mae: 0.681874| 0:01:06s
(run_trial pid=42476) [2022-11-14 01:02:28,625] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 064| loss: 0.627800| val_0_mae: 0.684119| 0:01:07s
(run_trial pid=42476) [2022-11-14 01:02:29,886] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 065| loss: 0.635989| val_0_mae: 0.716252| 0:01:08s
(run_trial pid=42476) [2022-11-14 01:02:31,157] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 066| loss: 0.647488| val_0_mae: 0.694206| 0:01:09s
(run_trial pid=42476) [2022-11-14 01:02:32,441] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 067| loss: 0.667528| val_0_mae: 0.684332| 0:01:11s
(run_trial pid=42476) [2022-11-14 01:02:33,729] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 068| loss: 0.626634| val_0_mae: 0.667108| 0:01:12s
(run_trial pid=42476) [2022-11-14 01:02:35,006] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 069| loss: 0.627659| val_0_mae: 0.675814| 0:01:13s
(run_trial pid=42476) [2022-11-14 01:02:36,276] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 070| loss: 0.646840| val_0_mae: 0.710005| 0:01:15s
(run_trial pid=42476) [2022-11-14 01:02:37,545] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 071| loss: 0.630839| val_0_mae: 0.758983| 0:01:16s
(run_trial pid=42476) [2022-11-14 01:02:38,811] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 072| loss: 0.648888| val_0_mae: 0.736240| 0:01:17s
(run_trial pid=42476) [2022-11-14 01:02:40,079] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 073| loss: 0.635487| val_0_mae: 0.685271| 0:01:18s
(run_trial pid=42476) [2022-11-14 01:02:41,345] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 074| loss: 0.632486| val_0_mae: 0.720887| 0:01:20s
(run_trial pid=42476) [2022-11-14 01:02:42,615] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 075| loss: 0.646977| val_0_mae: 0.705984| 0:01:21s
(run_trial pid=42476) [2022-11-14 01:02:43,889] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 076| loss: 0.667855| val_0_mae: 0.678300| 0:01:22s
(run_trial pid=42476) [2022-11-14 01:02:45,162] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 077| loss: 0.642378| val_0_mae: 0.680050| 0:01:23s
(run_trial pid=42476) [2022-11-14 01:02:46,444] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 078| loss: 0.606339| val_0_mae: 0.718853| 0:01:25s
(run_trial pid=42476) [2022-11-14 01:02:47,736] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 079| loss: 0.610563| val_0_mae: 0.696663| 0:01:26s
(run_trial pid=42476) [2022-11-14 01:02:49,010] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 080| loss: 0.624385| val_0_mae: 0.738892| 0:01:27s
(run_trial pid=42476) [2022-11-14 01:02:50,293] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 081| loss: 0.650479| val_0_mae: 0.785787| 0:01:29s
(run_trial pid=42476) [2022-11-14 01:02:51,577] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 082| loss: 0.630244| val_0_mae: 0.725113| 0:01:30s
(run_trial pid=42476) [2022-11-14 01:02:52,875] [paddlets.models.common.callbacks.callbacks] [INFO] epoch 083| loss: 0.596762| val_0_mae: 0.722377| 0:01:31s
(run_trial pid=42476) [2022-11-14 01:02:52,876] [paddlets.models.common.callbacks.callbacks] [INFO]
(run_trial pid=42476) Early stopping occurred at epoch 83 with best_epoch = 38 and best_val_0_mae = 0.662641
(run_trial pid=42476) [2022-11-14 01:02:52,876] [paddlets.models.common.callbacks.callbacks] [INFO] Best weights from best epoch are automatically used!
(run_trial pid=42476) C:\Users\Administrator\anaconda3\lib\site-packages\ray\tune\trainable\session.py:229: DeprecationWarning: tune.report and tune.checkpoint_dir APIs are deprecated in Ray 2.0, and is replaced by ray.air.session. This will provide an easy-to-use API across Tune session and Data parallel worker sessions.The old APIs will be removed in the future.
(run_trial pid=42476) warnings.warn(

How should I handle forecasting for many relatively short series?

The problem I am working on involves many relatively short series, and I need to forecast the future values of each series separately. After reading the PaddleTS documentation I could not find a relevant API. What approach could I use to solve this kind of problem?
My data can be thought of as roughly 1000 stocks, each with 30 weeks of history, and I need to predict the next 10 weeks for every stock. Does PaddleTS provide a corresponding API, or is there another way to handle this?
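One workaround (not an official grouped-series API) is to build one TSDataset per series and train or predict per series; some later PaddleTS versions also accept a list of TSDataset in fit for a single global model. A minimal per-series sketch follows; the column names, frequency and chunk lengths are illustrative assumptions, not part of the original question:

import pandas as pd
from paddlets import TSDataset
from paddlets.models.forecasting import MLPRegressor

# df is assumed to have columns: stock_id, week, price (30 weekly rows per stock)
def forecast_each_series(df: pd.DataFrame) -> dict:
    predictions = {}
    for stock_id, sub_df in df.groupby("stock_id"):
        ts = TSDataset.load_from_dataframe(
            sub_df, time_col="week", target_cols="price", freq="W"
        )
        # in_chunk_len must be small enough to leave training windows in 30 points
        model = MLPRegressor(in_chunk_len=15, out_chunk_len=10, max_epochs=100)
        model.fit(ts)
        predictions[stock_id] = model.predict(ts)
    return predictions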

Error when importing paddlets

OS: Ubuntu 22.04
Machine: i7-12700 + Nvidia RTX 3070
Installation in the conda environment works fine, but the import fails.

(pdts) ncg@ncg-pc:~$ pip list | grep paddlets
paddlets           1.0.1
(pdts) ncg@ncg-pc:~$ pip list | grep paddlepaddle
paddlepaddle-gpu   2.3.2.post111
(pdts) ncg@ncg-pc:~$ python
Python 3.8.13 (default, Oct 21 2022, 23:50:54) 
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from paddlets import TSDataset
Error: Can not import avx core while this file exists: /home/ncg/anaconda3/envs/pdts/lib/python3.8/site-packages/paddle/fluid/core_avx.so
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ncg/anaconda3/envs/pdts/lib/python3.8/site-packages/paddlets/__init__.py", line 5, in <module>
    from paddlets.pipeline import Pipeline
  File "/home/ncg/anaconda3/envs/pdts/lib/python3.8/site-packages/paddlets/pipeline/__init__.py", line 4, in <module>
    from paddlets.pipeline.pipeline import Pipeline
  File "/home/ncg/anaconda3/envs/pdts/lib/python3.8/site-packages/paddlets/pipeline/pipeline.py", line 16, in <module>
    from paddlets.utils.utils import get_tsdataset_max_len, split_dataset
  File "/home/ncg/anaconda3/envs/pdts/lib/python3.8/site-packages/paddlets/utils/__init__.py", line 8, in <module>
    from paddlets.utils.backtest import backtest
  File "/home/ncg/anaconda3/envs/pdts/lib/python3.8/site-packages/paddlets/utils/backtest.py", line 12, in <module>
    from paddlets.metrics import Metric, MSE
  File "/home/ncg/anaconda3/envs/pdts/lib/python3.8/site-packages/paddlets/metrics/__init__.py", line 7, in <module>
    from paddlets.metrics.metrics import (
  File "/home/ncg/anaconda3/envs/pdts/lib/python3.8/site-packages/paddlets/metrics/metrics.py", line 8, in <module>
    import paddle
  File "/home/ncg/anaconda3/envs/pdts/lib/python3.8/site-packages/paddle/__init__.py", line 25, in <module>
    from .framework import monkey_patch_variable
  File "/home/ncg/anaconda3/envs/pdts/lib/python3.8/site-packages/paddle/framework/__init__.py", line 17, in <module>
    from . import random  # noqa: F401
  File "/home/ncg/anaconda3/envs/pdts/lib/python3.8/site-packages/paddle/framework/random.py", line 16, in <module>
    import paddle.fluid as fluid
  File "/home/ncg/anaconda3/envs/pdts/lib/python3.8/site-packages/paddle/fluid/__init__.py", line 36, in <module>
    from . import framework
  File "/home/ncg/anaconda3/envs/pdts/lib/python3.8/site-packages/paddle/fluid/framework.py", line 37, in <module>
    from . import core
  File "/home/ncg/anaconda3/envs/pdts/lib/python3.8/site-packages/paddle/fluid/core.py", line 298, in <module>
    raise e
  File "/home/ncg/anaconda3/envs/pdts/lib/python3.8/site-packages/paddle/fluid/core.py", line 256, in <module>
    from . import core_avx
ImportError: /home/ncg/anaconda3/envs/pdts/lib/python3.8/site-packages/paddle/fluid/core_avx.so: undefined symbol: _dl_sym, version GLIBC_PRIVATE

NBEATSModel accepts covariate names at inference that differ from training, and swapping the order of multiple covariates does not change the prediction

When running inference with NBEATSModel, the covariate names I specify differ from those used during training, yet the model still completes inference. That is somewhat understandable if we assume the observed_cov_cols list passed when building the TSDataset for predict is treated as order-dependent.
However, when I deliberately change the order of the names in observed_cov_cols, the prediction result does not change either, which does not seem reasonable. See the code below.

import numpy as np
import pandas as pd
import paddlets


if __name__ == "__main__":
    x = np.linspace(-np.pi, np.pi, 200)
    sinx = np.sin(x) * 4 + np.random.randn(200)

    df1 = pd.DataFrame(
        {
            "time_col1": pd.date_range("2022-01-01", periods=200, freq="1h"),
            "value_col1": sinx,
            "observed_col11": sinx + 1,
            "observed_col12": sinx + 2,
        }
    )
    ts1 = paddlets.TSDataset.load_from_dataframe(
        df1,
        time_col="time_col1",
        target_cols="value_col1",
        observed_cov_cols=["observed_col11", "observed_col12"],
    )

    # Train the model
    train_ts, val_ts = ts1.split(0.8)
    from paddlets.models.forecasting import NBEATSModel

    nbeast = NBEATSModel(in_chunk_len=20, out_chunk_len=10, max_epochs=500)
    nbeast.fit(train_ts, val_ts)

    # Prediction 1: the covariate names differ from those used in training
    np.random.seed(1)
    df21 = pd.DataFrame(
        {
            "time_col2": pd.date_range("2022-01-01", periods=200, freq="1h"),
            "value_col2": sinx,
            "observed_col21": sinx - 20 * np.random.randn(200),
            "observed_col22": sinx + 30 * np.random.randn(200),
        }
    )
    ts21 = paddlets.TSDataset.load_from_dataframe(
        df21,
        time_col="time_col2",
        target_cols="value_col2",
        observed_cov_cols=["observed_col21", "observed_col22"],
        freq="1h",
    )
    predict_result_21 = nbeast.predict(ts21)

    # Prediction 2: rebuild the TSDataset but swap the names in observed_cov_cols
    ts22 = paddlets.TSDataset.load_from_dataframe(
        df21,
        time_col="time_col2",
        target_cols="value_col2",
        observed_cov_cols=["observed_col22", "observed_col21"],
        freq="1h",
    )
    predict_result_22 = nbeast.predict(ts22)

    # Print both prediction results
    print("predict 1:")
    print(predict_result_21)
    print("predict 2:")
    print(predict_result_22)
    print("预测用的ts的observed_cov_cols列名称和训练对不上,而且名称的顺序不影响预测结果,有点奇怪")

The results I get are as follows:

predict 1:
                     value_col2
2022-01-09 08:00:00   29.420271
2022-01-09 09:00:00   30.838749
2022-01-09 10:00:00    4.161640
2022-01-09 11:00:00   24.282003
2022-01-09 12:00:00   30.760273
2022-01-09 13:00:00   -1.353035
2022-01-09 14:00:00   57.743870
2022-01-09 15:00:00   44.322231
2022-01-09 16:00:00   -2.120617
2022-01-09 17:00:00   16.197647
predict 2:
                     value_col2
2022-01-09 08:00:00   29.420271
2022-01-09 09:00:00   30.838749
2022-01-09 10:00:00    4.161640
2022-01-09 11:00:00   24.282003
2022-01-09 12:00:00   30.760273
2022-01-09 13:00:00   -1.353035
2022-01-09 14:00:00   57.743870
2022-01-09 15:00:00   44.322231
2022-01-09 16:00:00   -2.120617
2022-01-09 17:00:00   16.197647

Questions about DeepAR in AutoTS

In the current version the DeepAR model is not included in paddlets_default_search_space, so I defined my own search_space:

                            "rnn_type_or_module": choice(["GRU","LSTM"]),
                            "hidden_size": qrandint(32, 512, q=32),
                            "num_layers_recurrent": randint(1, 4),
                            "dropout": quniform(0, 1, 0.05),
                            "skip_chunk_len":0,
                            "regression_mode":choice(["mean", "sampling"]),
                            "batch_size":qrandint(8,128,q=8),
                            "max_epochs":qrandint(30,600,q=30),
                            "patience":qrandint(5,50,q=5),
                            "optimizer_params": {
                                "learning_rate": uniform(1e-4, 1e-2)
                            },
                            "seed":42
                        }

But when actually running it, the following error is raised:

ray.exceptions.RayTaskError(ValueError): ray::ImplicitFunc.train() (pid=38826, ip=127.0.0.1, repr=run_trial)
  File "/Users/susheng/opt/anaconda3/envs/paddle/lib/python3.8/site-packages/ray/tune/trainable/trainable.py", line 355, in train
    raise skipped from exception_cause(skipped)
  File "/Users/susheng/opt/anaconda3/envs/paddle/lib/python3.8/site-packages/ray/tune/trainable/function_trainable.py", line 325, in entrypoint
    return self._trainable_func(
  File "/Users/susheng/opt/anaconda3/envs/paddle/lib/python3.8/site-packages/ray/tune/trainable/function_trainable.py", line 651, in _trainable_func
    output = fn()
  File "/Users/susheng/opt/anaconda3/envs/paddle/lib/python3.8/site-packages/paddlets/automl/optimize_runner.py", line 174, in run_trial
    score = fit_and_score(train_data=train_tsdataset,
  File "/Users/susheng/opt/anaconda3/envs/paddle/lib/python3.8/site-packages/paddlets/utils/validation.py", line 144, in fit_and_score
    score_dict, predicts = backtest(data,
  File "/Users/susheng/opt/anaconda3/envs/paddle/lib/python3.8/site-packages/paddlets/utils/backtest.py", line 150, in backtest
    score_dict = metric(real, predict)
  File "/Users/susheng/opt/anaconda3/envs/paddle/lib/python3.8/site-packages/paddlets/metrics/base.py", line 261, in __call__
    res_array = self._build_metrics_data(tsdataset_true, tsdataset_pred)
  File "/Users/susheng/opt/anaconda3/envs/paddle/lib/python3.8/site-packages/paddlets/metrics/base.py", line 54, in _build_metrics_data
    raise_if_not(
  File "/Users/susheng/opt/anaconda3/envs/paddle/lib/python3.8/site-packages/paddlets/logger/logger.py", line 135, in raise_if_not
    raise ValueError(message)
ValueError: In `normal` mode, only point forecasting data is supported!

How should I modify this?

Problems encountered with AutoTS.fit()

When running it, it reports that these trials did not complete:
Traceback (most recent call last):
File "/Users/susheng/PycharmProjects/paddleTest/AutoTS.py", line 124, in
autots_tcn.fit(ts_train, ts_val)
File "/Users/susheng/opt/anaconda3/envs/paddle/lib/python3.8/site-packages/paddlets/logger/logger.py", line 27, in wrapper
result = f(*args, **kwargs)
File "/Users/susheng/opt/anaconda3/envs/paddle/lib/python3.8/site-packages/paddlets/automl/autots.py", line 207, in fit
analysis = self._optimize_runner.optimize(self._estimator,
File "/Users/susheng/opt/anaconda3/envs/paddle/lib/python3.8/site-packages/paddlets/automl/optimize_runner.py", line 204, in optimize
return tune.run(run_trial, num_samples=n_trials, config=running_search_space,
File "/Users/susheng/opt/anaconda3/envs/paddle/lib/python3.8/site-packages/ray/tune/tune.py", line 771, in run
raise TuneError("Trials did not complete", incomplete_trials)
ray.tune.error.TuneError: ('Trials did not complete', [run_trial_732b269c, run_trial_88119474, run_trial_7c009fba, run_trial_94cc868a, run_trial_ab0a0170, run_trial_a7e53298, run_trial_afbec2c2, run_trial_be90113e])

Then I looked at the error files for the corresponding trials under ray_results, and they basically all show:
Failure # 1 (occurred at 2022-11-17_19-39-14)
ray::ImplicitFunc.train() (pid=30278, ip=127.0.0.1, repr=run_trial)
File "/Users/susheng/opt/anaconda3/envs/paddle/lib/python3.8/site-packages/ray/tune/trainable/trainable.py", line 355, in train
raise skipped from exception_cause(skipped)
File "/Users/susheng/opt/anaconda3/envs/paddle/lib/python3.8/site-packages/ray/tune/trainable/function_trainable.py", line 325, in entrypoint
return self._trainable_func(
File "/Users/susheng/opt/anaconda3/envs/paddle/lib/python3.8/site-packages/ray/tune/trainable/function_trainable.py", line 651, in _trainable_func
output = fn()
File "/Users/susheng/opt/anaconda3/envs/paddle/lib/python3.8/site-packages/paddlets/automl/optimize_runner.py", line 174, in run_trial
score = fit_and_score(train_data=train_tsdataset,
File "/Users/susheng/opt/anaconda3/envs/paddle/lib/python3.8/site-packages/paddlets/utils/validation.py", line 127, in fit_and_score
estimator.fit(train_data, valid_data)
File "/Users/susheng/opt/anaconda3/envs/paddle/lib/python3.8/site-packages/paddlets/models/forecasting/dl/paddle_base_impl.py", line 346, in fit
self._fit(train_dataloader, valid_dataloaders)
File "/Users/susheng/opt/anaconda3/envs/paddle/lib/python3.8/site-packages/paddlets/models/forecasting/dl/paddle_base_impl.py", line 363, in _fit
self._network = self._init_network()
File "/Users/susheng/opt/anaconda3/envs/paddle/lib/python3.8/site-packages/paddlets/models/forecasting/dl/tcn.py", line 341, in _init_network
return _TCNModule(
File "/Users/susheng/opt/anaconda3/envs/paddle/lib/python3.8/site-packages/paddlets/models/forecasting/dl/tcn.py", line 126, in init
raise_if_not(
File "/Users/susheng/opt/anaconda3/envs/paddle/lib/python3.8/site-packages/paddlets/logger/logger.py", line 135, in raise_if_not
raise ValueError(message)
ValueError: The valid range of kernel_size is (1, in_chunk_len], got kernel_size:1 <= 1 or kernel_size:1 > in_chunk_len:24.

It looks like this is caused by an invalid kernel_size. How should I fix it?
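Not an official fix, but one way to avoid these failed trials is to supply a custom search space that keeps kernel_size inside (1, in_chunk_len], instead of relying on the default one. A rough sketch, assuming AutoTS accepts a search_space argument (as with the custom search space in the DeepAR question above) and in_chunk_len is 24 as in this report:

from ray.tune import qrandint, uniform
from paddlets.automl.autots import AutoTS
from paddlets.models.forecasting import TCNRegressor

in_chunk_len = 24
tcn_search_space = {
    # keep kernel_size strictly greater than 1 and no larger than in_chunk_len
    "kernel_size": qrandint(2, in_chunk_len, q=1),
    "batch_size": qrandint(8, 128, q=8),
    "max_epochs": qrandint(30, 300, q=30),
    "patience": qrandint(5, 50, q=5),
    "optimizer_params": {"learning_rate": uniform(1e-4, 1e-2)},
}
autots_tcn = AutoTS(TCNRegressor, in_chunk_len, 1, search_space=tcn_search_space)
autots_tcn.fit(ts_train, ts_val)  # ts_train / ts_val as in the original script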

Unit test test_MinMaxScaler cannot pass

paddlets\tests\transform\test_normalization.py

Line 50:

self.assertTrue(inverse1.get_known_cov().data.astype('int').equals(input1.get_known_cov().data))

inverse1.get_known_cov().data.astype('int') and input1.get_known_cov().data are never equal,

because the former's dtype is int32 while the latter's is int64. If the assertion is changed to:

self.assertTrue(inverse1.get_known_cov().data.astype('int').equals(input1.get_known_cov().data.astype('int')))

then the test passes.

In this unit test file, the same line appears 8 times across test_MinMaxScaler and test_StandardScaler, and all of them have this problem.

Python3.8 64bit


I also noticed that test_with_sklearn in the same file has a similar equality check, but there both sides already have .astype('int'), so it passes.
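For context, a tiny standalone demonstration of why the original assertion fails: pandas equals() also compares dtypes, so int32 versus int64 never match even when the values are identical.

import pandas as pd

a = pd.Series([1, 2, 3], dtype="int32")
b = pd.Series([1, 2, 3], dtype="int64")
print(a.equals(b))                  # False: same values, different dtypes
print(a.astype("int64").equals(b))  # True once both sides share a dtype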

Loading a model fails

ubuntu 20.04
paddlets 1.0.1
##########################################################################################

ts2vec_params = {"segment_size": 200,
                 "repr_dims": 320,
                 "batch_size": 32,
                 "sampling_stride": 200,
                 "max_epochs": 20}
model = ReprForecasting(in_chunk_len=200,
                        out_chunk_len=24,
                        sampling_stride=1,
                        repr_model=TS2Vec,
                        repr_model_params=ts2vec_params)
model.fit(train_data)
model.save("/weights/models1")
model = load("/weights/models1")

Exception raised: ValueError
path must be a file path, not a directory: /home/kylin/pythondevelop/paddle_tutorial/ts/trade/weights/models1
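The error indicates that save()/load() expect a file path rather than a directory. A hedged sketch of the fix (the file name is made up, and it assumes the load used above is paddlets.models.model_loader.load):

from paddlets.models.model_loader import load

model.save("/weights/models1/repr_model")        # a file path inside the directory, not the directory itself
loaded_model = load("/weights/models1/repr_model")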

Which PaddleTS models specifically support covariates?

I'd like to know which PaddleTS models support covariates; I couldn't find a clear explanation in the documentation.
I then tested with the MLPRegressor model: the TSDataset used for training contains observed_cov_cols, but the TSDataset passed at prediction time does not, yet inference still works, for example with the code below:

import numpy as np
import pandas as pd
import paddlets

if __name__ == "__main__":
    x = np.linspace(-np.pi, np.pi, 200)
    sinx = np.sin(x) * 4 + np.random.randn(200)

    df1 = pd.DataFrame(
        {
            "time_col1": pd.date_range("2022-01-01", periods=200, freq="1h"),
            "value_col1": sinx,
            "observed_col1": sinx + 1,
        }
    )
    ts1 = paddlets.TSDataset.load_from_dataframe(
        df1,
        time_col="time_col1",
        target_cols="value_col1",
        observed_cov_cols=["observed_col1"],
    )

    # Train the model
    train_ts, val_ts = ts1.split(0.8)
    from paddlets.models.forecasting import MLPRegressor

    mlp = MLPRegressor(in_chunk_len=20, out_chunk_len=10, max_epochs=500)
    mlp.fit(train_ts, val_ts)

    # Predict with the model
    df2 = pd.DataFrame(
        {
            "time_col2": pd.date_range("2022-01-01", periods=200, freq="1h"),
            "value_col2": sinx,
        }
    )
    ts2 = paddlets.TSDataset.load_from_dataframe(
        df2, time_col="time_col2", target_cols="value_col2", freq="1h"
    )
    predict_result_2 = mlp.predict(ts2)
    print("predict results: ", predict_result_2)

A small issue regarding the anomaly detection module

First of all, thanks for sharing! I am currently focused on anomaly detection.
At the following link:
https://paddlets.readthedocs.io/zh_CN/latest/source/modules/models/anomaly.html
in section "5. Model prediction and evaluation", the metric computation never shows where res comes from. It should probably be pred_label, right?

pred_label = model.predict(test_data_scaled)
lable_name = pred_label.target.data.columns[0]
f1 = F1()(test_tsdata, res)
precision = Precision()(test_tsdata, res)
recall = Recall()(test_tsdata, res)
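For clarity, the corrected calls the reporter is suggesting (replacing res with the prediction result) would look roughly like this:

pred_label = model.predict(test_data_scaled)
label_name = pred_label.target.data.columns[0]
f1 = F1()(test_tsdata, pred_label)
precision = Precision()(test_tsdata, pred_label)
recall = Recall()(test_tsdata, pred_label)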

TCNRegressor backtest shows an offset

The code is as follows:

ix_dataset2 = TSDataset.load_from_dataframe(
    data3,
    time_col='time',
    target_cols='ix',
    freq='1d',
    known_cov_cols=['work_day','holiday']
)

from paddlets.models.forecasting import MLPRegressor,LSTNetRegressor,TCNRegressor
md = TCNRegressor(
    in_chunk_len = 7,
    out_chunk_len = 1,
    max_epochs=100,
)

train_dataset, val_test_dataset = ix_dataset2.split(0.7)
val_dataset, test_dataset = val_test_dataset.split(0.5)
# train_dataset.plot(add_data=[val_dataset,test_dataset], labels=['Val', 'Test'])

md.fit(train_dataset, val_dataset)

from paddlets.utils import backtest
score, preds_data= backtest(
    data=test_dataset,
    model=md,
    return_predicts = True)
test_dataset.plot(add_data=preds_data,labels="backtest")

The result is as follows:
[backtest plot screenshot omitted]
The predicted trend is quite good, but there is an obvious offset. What is the cause of this? Any help appreciated.

The first person to open an issue on PaddleTS, hahaha, what an honor

1. Long awaited: the PaddlePaddle framework finally has a time series forecasting library.
2. I got into time series forecasting because of the paper that won the AAAI 2021 Best Paper award, "long sequence time-series forecasting that is more effective than Transformer". Seeing a title claiming to beat Transformer was quite surprising. May I ask when PaddleTS plans to reproduce that paper, or to reproduce the strongest time series forecasting papers of 2022? If I remember correctly, the AAAI 2021 best paper has a PyTorch implementation on GitHub; most papers do, since well over 80% of researchers use PyTorch, so a PaddlePaddle reproduction could use it as a reference.
3. Looking at the summary of future work, what interests me most is probabilistic models; I hope the strongest papers in that area get reproduced soon. I heard someone made a fortune trading stocks with probabilistic models, not sure whether it's true, haha.
4. Image classification has the huge ImageNet dataset; is there any comparably large dataset for time series forecasting? PaddleTS currently offers many models: which one do you think works best? Which are suited to short-horizon versus long-horizon forecasting, and which to small versus large datasets?
5. I learned about PaddleTS from a WeChat Moments post by the head of PaddlePaddle's vision team, which mentioned that time series forecasting can be used for stock price prediction. Do you agree with that view? Wishing you big profits in the stock market.

Transformer model error

W0915 12:02:58.731670 1250 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 8.6, Driver API Version: 11.7, Runtime API Version: 11.7
W0915 12:02:58.744315 1250 gpu_resources.cc:91] device: 0, cuDNN Version: 8.5.
Traceback (most recent call last):
File "test.py", line 34, in
model.fit(train_dataset, val_dataset)
File "/usr/local/lib/python3.8/dist-packages/paddlets/models/dl/paddlepaddle/paddle_base_impl.py", line 321, in fit
self._fit(train_dataloader, valid_dataloaders)
File "/usr/local/lib/python3.8/dist-packages/paddlets/models/dl/paddlepaddle/paddle_base_impl.py", line 347, in _fit
self._train_epoch(train_dataloader)
File "/usr/local/lib/python3.8/dist-packages/paddlets/models/dl/paddlepaddle/paddle_base_impl.py", line 415, in _train_epoch
batch_logs = self._train_batch(X, y)
File "/usr/local/lib/python3.8/dist-packages/paddlets/models/dl/paddlepaddle/paddle_base_impl.py", line 434, in _train_batch
output = self._network(X)
File "/usr/local/lib/python3.8/dist-packages/paddle/fluid/dygraph/layers.py", line 930, in call
return self._dygraph_call_func(*inputs, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/paddle/fluid/dygraph/layers.py", line 915, in _dygraph_call_func
outputs = self.forward(*inputs, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/paddlets/models/dl/paddlepaddle/transformer.py", line 196, in forward
out = self._transformer(src, tgt)
File "/usr/local/lib/python3.8/dist-packages/paddle/fluid/dygraph/layers.py", line 930, in call
return self._dygraph_call_func(*inputs, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/paddle/fluid/dygraph/layers.py", line 915, in _dygraph_call_func
outputs = self.forward(*inputs, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/paddle/nn/layer/transformer.py", line 1628, in forward
memory = self.encoder(src, src_mask=src_mask)
File "/usr/local/lib/python3.8/dist-packages/paddle/fluid/dygraph/layers.py", line 930, in call
return self._dygraph_call_func(*inputs, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/paddle/fluid/dygraph/layers.py", line 915, in _dygraph_call_func
outputs = self.forward(*inputs, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/paddle/nn/layer/transformer.py", line 971, in forward
src_mask = _convert_attention_mask(src_mask, src.dtype,
File "/usr/local/lib/python3.8/dist-packages/paddle/nn/layer/transformer.py", line 108, in _convert_attention_mask
mha_meta = _prepare_mha_meta(attn_mask, enable_cudnn)
File "/usr/local/lib/python3.8/dist-packages/paddle/nn/layer/transformer.py", line 134, in _prepare_mha_meta
assert attn_mask is not None,
AssertionError: The attention mask should be given for MultiHeadAttention when enable_cudnn=True. But received attn_mask = None

Error during the autots model testing stage

ImportError: cannot import name 'hparams' from 'tensorboardX.summary' (/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/tensorboardX/summary.py)

import pandas as pd
import os

import shutil
from paddlets.models.forecasting import MLPRegressor
from paddlets.automl.autots import AutoTS

# Read the data with pandas and display the first three rows (matching the screenshot above)

csv_path = "tu_share_data_day/688388.SH.csv"

tu_share_data_day = os.listdir("tu_share_data_day")
for one in tu_share_data_day:
    csv_path = os.path.join("tu_share_data_day", one)

    dam_data = pd.read_csv(csv_path).values.tolist()
    if len(dam_data) < 100:
        continue
    dam_data.reverse()
    for i in range(len(dam_data) - 1):
        dam_data[i].append(dam_data[i + 1][4])
        dam_data[i].append(dam_data[i + 1][5])

        dam_data[i].append(dam_data[i + 1][6])
        dam_data[i].append(int(i + 1))
    # print(dam_data)
    dam_data = pd.DataFrame(dam_data[:-1],
                            columns=['ts_code', 'trade_date', 'open', 'high', 'low', 'close',
                                     'pre_close', 'change', 'pct_chg', 'vol', 'amount', 'next_high',
                                     'next_low',
                                     'next_close', 'index_new'])
    from paddlets.datasets import TSDataset

    # Build the dataset
    dataset = TSDataset.load_from_dataframe(
        dam_data,  # pd.DataFrame
        time_col="index_new",  # index column used as time
        target_cols=['next_high', 'next_low',
                     'next_close'],  # columns to predict
        observed_cov_cols=['open', 'high', 'low', 'close',
                           'pre_close', 'change', 'pct_chg', 'vol', 'amount']  # observed covariates
    )
    # Split into training and validation sets
    train_dataset, val_dataset = dataset.split(0.8)  # 8:2

    autots_model = AutoTS(MLPRegressor, 96, 2)
    autots_model.fit(train_dataset, val_dataset)

[paddlets] [ERROR] ValueError: attr: shape doesn't exist!

import paddlets
from paddlets.datasets.repository import get_dataset, dataset_list

print(paddlets.__version__)
0.1.0

print(f"built-in datasets:{dataset_list()}")
built-in datasets:['UNI_WTH', 'ETTh1', 'ETTm1', 'ECL', 'WTH']

dataset = get_dataset('WTH')
When execution reaches the line dataset = get_dataset('WTH'), it reports:
[2022-08-11 14:07:44,459] [paddlets] [ERROR] ValueError: attr: shape doesn't exist!
[2022-08-11 14:07:44,487] [paddlets] [ERROR] ValueError: attr: shape doesn't exist!

After creating the dataset for anomaly detection, using plot_anoms to visualize the data raises an error.

Load the dataset

ts_data = load_datasets.load_datasets_anomaly(time_col='Date',
feature_cols=["L#SVR4_VAR830_A","L#SVR4_VAR833_A","L#SVR4_VAR476_A","L#SVR4_VAR350_A","L#SVR4_VAR340_A","L#SVR4_VAR342_A","L#SVR4_VAR344_A"]
)

Visualize the dataset

from paddlets.utils.utils import plot_anoms
plot_anoms(origin_data=ts_data, feature_name="L#SVR4_VAR830_A")

The error is as follows:


AttributeError Traceback (most recent call last)
Cell In [16], line 4
2 matplotlib.use('TkAgg') # added, otherwise the chart does not display properly
3 import matplotlib.pyplot as plt
----> 4 ts_data.plot(add_data=["L#SVR4_VAR830_A","L#SVR4_VAR833_A","L#SVR4_VAR476_A","L#SVR4_VAR350_A","L#SVR4_VAR340_A","L#SVR4_VAR342_A","L#SVR4_VAR344_A"])
5 plt.show()
6 # from paddlets.utils.utils import plot_anoms
7 #
8 # plot_anoms(origin_data=ts_data, feature_name="L#SVR4_VAR830_A")

File E:\Paddle-release-2.2\PaddleTS\paddlets\datasets\tsdataset.py:1689, in TSDataset.plot(self, columns, add_data, labels, low_quantile, high_quantile, central_quantile, **kwargs)
1656 def plot(self,
1657 columns:Union[List[str], str] = None,
1658 add_data:Union[List["TSDataset"], "TSDataset"] = None,
(...)
1662 central_quantile:float = 0.5,
1663 **kwargs) -> "pyplot":
1664 """
1665 plot function, a wrapper for Dataframe.plot()
1666
(...)
1687
1688 """
-> 1689 quantile_cols = self._get_quantile_cols_origin_names()
1690 if not columns:
1691 if len(quantile_cols) == 0:

File E:\Paddle-release-2.2\PaddleTS\paddlets\datasets\tsdataset.py:1849, in TSDataset._get_quantile_cols_origin_names(self)
1841 """
1842 Get quantile cols origin names
1843
(...)
1846
1847 """
1848 origin_columns = []
-> 1849 for name in self.target.columns:
1850 tmp = name.split("@quantile")
1851 if tmp[0] not in origin_columns and len(tmp) > 1:

AttributeError: 'NoneType' object has no attribute 'columns'
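The last frame shows that self.target is None, i.e. the dataset was built from feature columns only. plot_anoms and TSDataset.plot need a label (target) column; below is a hedged sketch of loading the data with a label column, assuming label_col is accepted by load_from_dataframe as in the anomaly examples (the DataFrame df and the label column name are hypothetical):

from paddlets import TSDataset
from paddlets.utils.utils import plot_anoms

ts_data = TSDataset.load_from_dataframe(
    df,                                 # original DataFrame, assumed available
    time_col="Date",
    label_col="label",                  # hypothetical 0/1 anomaly label column
    feature_cols=["L#SVR4_VAR830_A", "L#SVR4_VAR833_A", "L#SVR4_VAR476_A"],
)
plot_anoms(origin_data=ts_data, feature_name="L#SVR4_VAR830_A")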

Do the representation models support multiple variables?

I'd like to ask whether the representation models support multiple variables. I set known_cov_cols in the dataset and then used the CoST representation model, but it seems to have no effect. Do the representation models support multivariate training? If not, how can I use multiple variables with a representation model?

Setting the AutoTS metric parameter

I set the metric parameter to mse, but the actual training evaluation still uses mae.
metric (str) – the name of the metric. The metric is used to compute the loss on the validation dataset and is fed back to the hyper-parameter optimization algorithm.
[screenshot omitted]

Deploying with Paddle Inference: the build_ts_infer_input method fails!

Load the saved model

def load_model(self,
               load_pdmodel: str = '../model/save_model.pdmodel',
               load_pdiparams: str = '../model/save_model.pdiparams'
               ):
    config = paddle_infer.Config(load_pdmodel, load_pdiparams)
    self.predictor = paddle_infer.create_predictor(config)
    input_names = self.predictor.get_input_names()
    print(f"input_name: f{input_names}")
    import json
    with open("../model/save_model_model_meta") as f:
        self.json_data = json.load(f)
    print(self.json_data)

Prediction / inference

def predict_ts(self,
               pre_tsdataset: TSDataset,
               cols,
               recursive: bool = False):
    val_dataset = self.data_transform(tsdataset=pre_tsdataset,
                                      cols=cols)
    from paddlets.utils.utils import build_ts_infer_input
    input_data = build_ts_infer_input(val_dataset, "../model/save_model_model_meta")

    for key, value in self.json_data['input_data'].items():
        input_handle1 = self.predictor.get_input_handle(key)
        # set batch_size=1
        value[0] = 1
        input_handle1.reshape(value)
        input_handle1.copy_from_cpu(input_data[key])

    self.predictor.run()
    output_names = self.predictor.get_output_names()
    output_handle = self.predictor.get_output_handle(output_names[0])
    output_data = output_handle.copy_to_cpu()
    print(output_data)

##############################################################################################

Invocation:

# Load the model

fit_lstnet.load_model(load_pdmodel='../model/LSTNet.pdmodel',
                      load_pdiparams='../model/LSTNet.pdiparams')

# Run prediction

fit_lstnet.predict_ts(pre_tsdataset=fit_lstnet.val_dataset, cols=target_cols)

Running load_model to load the saved model succeeds.
When running the predict_ts function, build_ts_infer_input raises an error; the error message is as follows:
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [layer_norm_fuse_pass]
--- Fused 0 subgraphs into layer_norm op.
--- Running IR pass [attention_lstm_fuse_pass]
--- Running IR pass [seqconv_eltadd_relu_fuse_pass]
--- Running IR pass [seqpool_cvm_concat_fuse_pass]
--- Running IR pass [mul_lstm_fuse_pass]
--- Running IR pass [fc_gru_fuse_pass]
--- fused 0 pairs of fc gru patterns
--- Running IR pass [mul_gru_fuse_pass]
--- Running IR pass [seq_concat_fc_fuse_pass]
--- Running IR pass [gpu_cpu_squeeze2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_reshape2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_flatten2_matmul_fuse_pass]
--- Running IR pass [matmul_v2_scale_fuse_pass]
--- Running IR pass [gpu_cpu_map_matmul_v2_to_mul_pass]
I1230 08:59:19.689038 23564 fuse_pass_base.cc:57] --- detected 2 subgraphs
--- Running IR pass [gpu_cpu_map_matmul_v2_to_matmul_pass]
--- Running IR pass [matmul_scale_fuse_pass]
--- Running IR pass [gpu_cpu_map_matmul_to_mul_pass]
--- Running IR pass [fc_fuse_pass]
I1230 08:59:19.691967 23564 fuse_pass_base.cc:57] --- detected 2 subgraphs
--- Running IR pass [repeated_fc_relu_fuse_pass]
--- Running IR pass [squared_mat_sub_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [conv_transpose_bn_fuse_pass]
--- Running IR pass [conv_transpose_eltwiseadd_bn_fuse_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I1230 08:59:19.704661 23564 analysis_predictor.cc:1035] ======= optimize end =======
I1230 08:59:19.704661 23564 naive_executor.cc:102] --- skip [feed], feed -> past_target
I1230 08:59:19.705637 23564 naive_executor.cc:102] --- skip [tmp_0], fetch -> fetch
input_name: f['past_target']
{'model_type': 'forecasting', 'ancestor_classname_set': ['LSTNetRegressor', 'PaddleBaseModelImpl', 'PaddleBaseModel', 'BaseModel', 'Trainable', 'ABC', 'object'], 'modulename': 'paddlets.models.forecasting.dl.lstnet', 'size': {'in_chunk_len': 3600, 'out_chunk_len': 3600, 'skip_chunk_len': 0}, 'input_data': {'past_target': [None, 3600, 1]}}
Traceback (most recent call last):
File "E:/Paddle-release-2.2/PaddleTS/examples_self/fit_forecasting_model.py", line 410, in
fit_lstnet.predict_ts(pre_tsdataset=fit_lstnet.val_dataset, cols=target_cols)
File "E:/Paddle-release-2.2/PaddleTS/examples_self/fit_forecasting_model.py", line 284, in predict_ts
input_data = build_ts_infer_input(val_dataset, "../model/LSTNet_model_meta")
File "E:\Paddle-release-2.2\PaddleTS\paddlets\utils\utils.py", line 439, in build_ts_infer_input
sample = next(iter(dataloader))
File "F:\CondaData\envs\paddlets\lib\site-packages\paddle\fluid\dataloader\dataloader_iter.py", line 298, in next
six.reraise(*sys.exc_info())
File "F:\CondaData\envs\paddlets\lib\site-packages\six.py", line 719, in reraise
raise value
File "F:\CondaData\envs\paddlets\lib\site-packages\paddle\fluid\dataloader\dataloader_iter.py", line 272, in next
data = self._reader.read_next_var_list()
StopIteration

paddle version 2.3.1 is not supported

I've used PaddlePaddle's CV and NLP libraries before, and happened to see the newly released PaddleTS for time series forecasting. Since I'll need this soon, I came to try it right away!
Running mlp.fit(train_dataset, val_dataset) raises an error:
ValueError: only the following paddle versions are supported: {'2.2.0', '2.3.0'}, current paddle version: 2.3.1

Proposal: add the NLinear time series forecasting model to PaddleTS

Overview

This proposal suggests adding the NLinear time series forecasting model to the PaddleTS model library.

👉 Original NLinear paper
👉 NLinear implementation based on PaddleTS

This implementation follows the original design and also makes the following improvements:

⭐ Support for known covariates and observed covariates.
⭐ The linear structure can add hidden layers as needed.

This proposal compares NLinear with the structurally similar MLP and the mainstream NBEATS (see the Effectiveness validation section).
The experiments show that NLinear achieves strong results with very low model complexity, and the improvements made in this implementation further boost NLinear's accuracy and versatility.

Introduction to the NLinear model

NLinear is an extremely simple linear model: the historical input is passed through a single linear layer to produce the forecast output.

[figure "linear" omitted]

To deal with distribution shift in time series data, NLinear additionally applies a very simple normalization to the input and output: the last value of the historical input is taken as an anchor. The historical input first has this anchor subtracted, is then passed through the linear layer, and the anchor is added back to the output, which becomes the model's forecast.
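As a side note, a minimal standalone sketch of this last-value normalization idea (not the proposal's actual implementation; the linear layer here is assumed to map in_chunk_len to out_chunk_len along the time axis):

import paddle

class TinyNLinear(paddle.nn.Layer):
    def __init__(self, in_chunk_len: int, out_chunk_len: int):
        super().__init__()
        self._linear = paddle.nn.Linear(in_chunk_len, out_chunk_len)

    def forward(self, past_target):
        # past_target: [batch_size, in_chunk_len, target_dim]
        last = past_target[:, -1:, :]                   # last observed value per series
        x = (past_target - last).transpose([0, 2, 1])   # linear layer acts on the time axis
        out = self._linear(x).transpose([0, 2, 1])      # [batch_size, out_chunk_len, target_dim]
        return out + last                               # add the anchor value back

Here out + last broadcasts the [batch_size, 1, target_dim] anchor over all out_chunk_len steps, which is exactly the "subtract then add back" trick described above.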

Improvements made in this implementation

Support for known covariates and observed covariates

Although the original design is simple and efficient, it cannot use the covariate information contained in a TSDataset, so it is not suitable for tasks where covariates are essential. To address this, this implementation makes a corresponding change: when building the input tensor, PAST_TARGET, KNOWN_COV and OBSERVED_COV are each flattened to one dimension and concatenated horizontally as the input; after obtaining the one-dimensional output, it is reshaped back into the required tensor shape.

# concat backcast, known_cov, observed_cov if any
feature = [backcast.reshape((batch_size, 1, -1))]
if known_cov is not None:
    feature.append(known_cov.reshape((batch_size, 1, -1)))
if observed_cov is not None:
    feature.append(observed_cov.reshape((batch_size, 1, -1)))
out = paddle.concat(x=feature, axis=2)

# forward
out = self._nn(out)
out = out.reshape([batch_size, -1, self._target_dim])

The linear structure can add hidden layers as needed

Covariates increase the complexity of the input tensor, and a single linear layer may not be able to learn the features sufficiently, leading to underfitting. To address this, this implementation adds a hidden_config attribute similar to MLP's, allowing developers to add hidden layers as needed.

# Unlike MLP, `hidden_config` is empty by default, so there can be no hidden layer.
layers = []
dims = [in_chunk_len_multi] + hidden_config + [out_chunk_len_multi]
for i in range(len(dims) - 2):
    layers.append(paddle.nn.Linear(dims[i], dims[i + 1]))
    if use_bn:
        layers.append(paddle.nn.BatchNorm1D(1))
    layers.append(paddle.nn.ReLU())
layers.append(paddle.nn.Linear(dims[-2], dims[-1]))
self._nn = paddle.nn.Sequential(*layers)

Effectiveness validation

Below are the experimental results of NLinear and other models on mainstream datasets. Since MLP does not support covariates, and the covariates in a dataset do not necessarily help forecasting, each dataset was tested twice: once with covariates and once without.

[figure "result" omitted]

Effectiveness of hidden layers

The results show that although the single-linear-layer NLinear performs well, NLinear with hidden layers is better, especially when many covariates make the input complex. This suggests that the optional hidden layers added in this implementation are effective.

Effectiveness of the normalization

NLinear with hidden layers can, from another angle, be viewed as an MLP with an added normalization. The results show that with identical inputs (both without covariates) and identical network structures (both with one identical hidden layer), NLinear consistently outperforms MLP. This suggests that the normalization from the original design is an effective feature.

Effectiveness of covariates

Not every dataset's covariates help forecasting. Comparing against NBEATS, which also supports covariates, shows that when covariates are helpful, the NLinear model can learn useful features from them and achieve better results than without covariates. This suggests that the covariate support added in this implementation is effective. (MLP does not support covariates at all and is therefore not applicable to such tasks.)

Conclusion

Overall, NLinear is simple and practical, and the NLinear implemented here further improves accuracy and versatility on top of the original design. NLinear is no more complex than MLP, yet can outperform some far more complex models. It is easy to tune and fast to train, so conclusions can be reached quickly, making it a good starting model for many tasks.

From the perspective of PaddleTS's own development, NLinear is clearly a useful addition to the existing model library. Given that a leading open-source time series library (5K+ ⭐) already supports this model, adopting NLinear would also help PaddleTS stay competitive.

Therefore, I suggest adding the NLinear time series forecasting model to PaddleTS.

TFT model can't be saved

Traceback (most recent call last):
File "D:\ProgramData\Anaconda3\lib\site-packages\paddlets\models\forecasting\dl\paddle_base.py", line 233, in save
pickle.dump(self, f)
TypeError: cannot pickle 'Tensor' object

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\work\py\dossen-alg-api\paddle_tft.py", line 220, in
main()
File "D:\work\py\dossen-alg-api\paddle_tft.py", line 198, in main
model = train(train_dataset, valid_dataset)
File "D:\work\py\dossen-alg-api\paddle_tft.py", line 162, in train
model.save(RNN_MODEL_PATH)
File "D:\ProgramData\Anaconda3\lib\site-packages\paddlets\models\forecasting\dl\paddle_base.py", line 235, in save
raise_log(ValueError("error occurred while saving %s, err: %s" % (abs_model_path, str(e))))
File "D:\ProgramData\Anaconda3\lib\site-packages\paddlets\logger\logger.py", line 112, in raise_log
raise exception
ValueError: error occurred while saving D:\work\py\dossen-alg-api\model\tft\tft_model, err: cannot pickle 'Tensor' object

Errors when running the model training/prediction code on a GPU device

Following the official docs at https://paddlets.readthedocs.io/zh_CN/latest/source/get_started/run_on_gpu.html, I ran the GPU example code:

import numpy as np

from paddlets.datasets.repository import get_dataset
from paddlets.transform.normalization import StandardScaler
from paddlets.models.forecasting import MLPRegressor

np.random.seed(2022)

# prepare data
tsdataset = get_dataset("WTH")
ts_train, ts_val_test = ts.split("2012-03-31 23:00:00")
ts_val, ts_test = ts_val_test.split("2013-02-28 23:00:00")

# transform
scaler = StandardScaler()
scaler.fit(ts_train)
ts_train_scaled = scaler.transform(ts_train)
ts_val_scaled = scaler.transform(ts_val)
ts_test_scaled = scaler.transform(ts_test)
ts_val_test_scaled = scaler.transform(ts_val_test)

# model
model = MLPRegressor(
     in_chunk_len=7 * 24,
     out_chunk_len=24,
     skip_chunk_len=0,
     sampling_stride=24,
     eval_metrics=["mse", "mae"],
     batch_size=32,
     max_epochs=1000,
     patience=100,
     use_bn=True,
     seed=2022
)

model.fit(ts_train_scaled, ts_val_scaled)

predicted_tsdataset = model.predict(ts_val_test_scaled)

print(predicted_tsdataset)

#                      WetBulbCelsius
# 2014-01-01 00:00:00       -0.124221
# 2014-01-01 01:00:00       -0.184970
# 2014-01-01 02:00:00       -0.398122
# 2014-01-01 03:00:00       -0.500016
# 2014-01-01 04:00:00       -0.350443
# 2014-01-01 05:00:00       -0.580986
# 2014-01-01 06:00:00       -0.482264
# 2014-01-01 07:00:00       -0.413248
# 2014-01-01 08:00:00       -0.451982
# 2014-01-01 09:00:00       -0.471430
# 2014-01-01 10:00:00       -0.427212
# 2014-01-01 11:00:00       -0.264509
# 2014-01-01 12:00:00       -0.308266
# 2014-01-01 13:00:00       -0.386270
# 2014-01-01 14:00:00       -0.261341
# 2014-01-01 15:00:00       -0.492441
# 2014-01-01 16:00:00       -0.497322
# 2014-01-01 17:00:00       -0.628926
# 2014-01-01 18:00:00       -0.528971
# 2014-01-01 19:00:00       -0.588881
# 2014-01-01 20:00:00       -0.860580
# 2014-01-01 21:00:00       -0.742121
# 2014-01-01 22:00:00       -0.819053
# 2014-01-01 23:00:00       -0.875322

I ran into some bugs:

Traceback (most recent call last):
  File "old.py", line 4, in <module>
    from paddlets.transform.normalization import StandardScaler
ModuleNotFoundError: No module named 'paddlets.transform.normalization'
Traceback (most recent call last):
  File "old.py", line 11, in <module>
    ts_train, ts_val_test = ts.split("2012-03-31 23:00:00")
NameError: name 'ts' is not defined

I have already fixed the above bugs in a PR: https://github.com/PaddlePaddle/PaddleTS/pull/230
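For reference, the two fixes amount to importing StandardScaler from the current module path and splitting the dataset variable that was actually loaded. A hedged sketch of the patched lines, assuming StandardScaler is re-exported from paddlets.transform in recent versions:

from paddlets.transform import StandardScaler   # instead of paddlets.transform.normalization

tsdataset = get_dataset("WTH")
ts_train, ts_val_test = tsdataset.split("2012-03-31 23:00:00")  # split the loaded tsdataset, not an undefined ts
ts_val, ts_test = ts_val_test.split("2013-02-28 23:00:00")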

AutoTS error and abnormal run

Code executed

Official example

from paddlets.datasets.repository import get_dataset
tsdataset = get_dataset("UNI_WTH")
from paddlets.models.forecasting import MLPRegressor
from paddlets.automl.autots import AutoTS
autots_model = AutoTS(MLPRegressor, 96, 2)
autots_model.fit(tsdataset)

Running it produces the following output.

The bolded part is the error. After that the program does not terminate; it keeps printing the content after == Status == over and over, and Number of trials never changes. What is the cause of this?

C:\ProgramData\Anaconda3\lib\site-packages\paddlets\automl\searcher.py:4: DeprecationWarning: The module ray.tune.suggest has been moved to ray.tune.search and the old location will be deprecated soon. Please adjust your imports to point to the new location. Example: Do a global search and replace ray.tune.suggest with ray.tune.search.
from ray.tune.suggest import BasicVariantGenerator
C:\ProgramData\Anaconda3\lib\site-packages\paddlets\automl\searcher.py:5: DeprecationWarning: The module ray.tune.suggest.optuna has been moved to ray.tune.search.optuna and the old location will be deprecated soon. Please adjust your imports to point to the new location. Example: Do a global search and replace ray.tune.suggest.optuna with ray.tune.search.optuna.
from ray.tune.suggest.optuna import OptunaSearch
C:\ProgramData\Anaconda3\lib\site-packages\paddlets\automl\searcher.py:6: DeprecationWarning: The module ray.tune.suggest.flaml has been moved to ray.tune.search.flaml and the old location will be deprecated soon. Please adjust your imports to point to the new location. Example: Do a global search and replace ray.tune.suggest.flaml with ray.tune.search.flaml.
from ray.tune.suggest.flaml import CFO
C:\ProgramData\Anaconda3\lib\site-packages\paddlets\automl\searcher.py:8: DeprecationWarning: The module ray.tune.suggest.bohb has been moved to ray.tune.search.bohb and the old location will be deprecated soon. Please adjust your imports to point to the new location. Example: Do a global search and replace ray.tune.suggest.bohb with ray.tune.search.bohb.
from ray.tune.suggest.bohb import TuneBOHB
C:\ProgramData\Anaconda3\lib\site-packages\paddlets\automl\search_space_configer.py:8: DeprecationWarning: The module ray.tune.sample has been moved to ray.tune.search.sample and the old location will be deprecated soon. Please adjust your imports to point to the new location. Example: Do a global search and replace ray.tune.sample with ray.tune.search.sample.
from ray.tune.sample import Float, Integer, Categorical
2022-12-27 14:21:00,844 INFO worker.py:1538 -- Started a local Ray instance.
C:\ProgramData\Anaconda3\lib\site-packages\ray\tune\search\optuna\optuna_search.py:694: FutureWarning: IntUniformDistribution has been deprecated in v3.0.0. This feature will be removed in v6.0.0. See https://github.com/optuna/optuna/releases/tag/v3.0.0. Use :class:~optuna.distributions.IntDistribution instead.
return ot.distributions.IntUniformDistribution(
C:\ProgramData\Anaconda3\lib\site-packages\ray\tune\search\optuna\optuna_search.py:682: FutureWarning: UniformDistribution has been deprecated in v3.0.0. This feature will be removed in v6.0.0. See https://github.com/optuna/optuna/releases/tag/v3.0.0. Use :class:~optuna.distributions.FloatDistribution instead.
return ot.distributions.UniformDistribution(
[I 2022-12-27 14:21:02,351] A new study created in memory with name: optuna
C:\ProgramData\Anaconda3\lib\site-packages\optuna\distributions.py:766: FutureWarning: IntUniformDistribution(high=128, low=8, step=8) is deprecated and internally converted to IntDistribution(high=128, log=False, low=8, step=8). See optuna/optuna#2941.
warnings.warn(message, FutureWarning)
C:\ProgramData\Anaconda3\lib\site-packages\optuna\distributions.py:766: FutureWarning: IntUniformDistribution(high=600, low=30, step=30) is deprecated and internally converted to IntDistribution(high=600, log=False, low=30, step=30). See optuna/optuna#2941.
warnings.warn(message, FutureWarning)
C:\ProgramData\Anaconda3\lib\site-packages\optuna\distributions.py:766: FutureWarning: IntUniformDistribution(high=50, low=5, step=5) is deprecated and internally converted to IntDistribution(high=50, log=False, low=5, step=5). See optuna/optuna#2941.
warnings.warn(message, FutureWarning)
C:\ProgramData\Anaconda3\lib\site-packages\optuna\distributions.py:766: FutureWarning: UniformDistribution(high=0.01, low=0.0001) is deprecated and internally converted to FloatDistribution(high=0.01, log=False, low=0.0001, step=None). See optuna/optuna#2941.
warnings.warn(message, FutureWarning)
== Status ==
Current time: 2022-12-27 14:21:07 (running for 00:00:05.18)
Memory usage on this node: 12.3/31.7 GiB
Using FIFO scheduling algorithm.
Resources requested: 0/20 CPUs, 0/1 GPUs, 0.0/12.44 GiB heap, 0.0/6.22 GiB objects (0.0/1.0 accelerator_type:G)
Result logdir: C:\Users\Steven\ray_results\run_trial_2022-12-27_14-21-02
Number of trials: 1/20 (1 PENDING)
+--------------------+----------+-------+--------------+----------------------+--------------+----------------------------------+------------+----------+
| Trial name         | status   | loc   | batch_size   | hidden_config        | max_epochs   | optimizer_params/learning_rate   | patience   | use_bn   |
|--------------------+----------+-------+--------------+----------------------+--------------+----------------------------------+------------+----------|
| run_trial_41b3f5c7 | PENDING  |       | 104          | Choice_2: [64, _66c0 | 300          | 0.00969596                       | 45         | False    |
+--------------------+----------+-------+--------------+----------------------+--------------+----------------------------------+------------+----------+

2022-12-27 14:21:12,864 WARNING worker.py:1851 -- This worker was asked to execute a function that has not been registered ({type=PythonFunctionDescriptor, module_name=ray.util.placement_group, class_name=, function_name=_export_bundle_reservation_check_method_if_needed..bundle_reservation_check_func, function_hash=c0f5d1653dc84f95bbeed36cb66471cd}, node=127.0.0.1, worker_id=e226e520eafc567ab1ccf4e297e0559f5116718dbeda073aa17dee4a, pid=9048). You may have to restart Ray.
(pid=9048) 2022-12-27 14:21:12,863 ERROR function_manager.py:415 -- This worker was asked to execute a function that has not been registered ({type=PythonFunctionDescriptor, module_name=ray.util.placement_group, class_name=, function_name=_export_bundle_reservation_check_method_if_needed..bundle_reservation_check_func, function_hash=c0f5d1653dc84f95bbeed36cb66471cd}, node=127.0.0.1, worker_id=e226e520eafc567ab1ccf4e297e0559f5116718dbeda073aa17dee4a, pid=9048). You may have to restart Ray.

== Status ==
Current time: 2022-12-27 14:21:12 (running for 00:00:10.23)
Memory usage on this node: 12.3/31.7 GiB
Using FIFO scheduling algorithm.
Resources requested: 0/20 CPUs, 0/1 GPUs, 0.0/12.44 GiB heap, 0.0/6.22 GiB objects (0.0/1.0 accelerator_type:G)
Result logdir: C:\Users\Steven\ray_results\run_trial_2022-12-27_14-21-02
Number of trials: 1/20 (1 PENDING)
+--------------------+----------+-------+--------------+----------------------+--------------+----------------------------------+------------+----------+
| Trial name         | status   | loc   | batch_size   | hidden_config        | max_epochs   | optimizer_params/learning_rate   | patience   | use_bn   |
|--------------------+----------+-------+--------------+----------------------+--------------+----------------------------------+------------+----------|
| run_trial_41b3f5c7 | PENDING  |       | 104          | Choice_2: [64, _66c0 | 300          | 0.00969596                       | 45         | False    |
+--------------------+----------+-------+--------------+----------------------+--------------+----------------------------------+------------+----------+

== Status ==
Current time: 2022-12-27 14:21:18 (running for 00:00:15.30)
Memory usage on this node: 12.3/31.7 GiB
Using FIFO scheduling algorithm.
Resources requested: 0/20 CPUs, 0/1 GPUs, 0.0/12.44 GiB heap, 0.0/6.22 GiB objects (0.0/1.0 accelerator_type:G)
Result logdir: C:\Users\Steven\ray_results\run_trial_2022-12-27_14-21-02
Number of trials: 1/20 (1 PENDING)
+--------------------+----------+-------+--------------+----------------------+--------------+----------------------------------+------------+----------+
| Trial name         | status   | loc   | batch_size   | hidden_config        | max_epochs   | optimizer_params/learning_rate   | patience   | use_bn   |
|--------------------+----------+-------+--------------+----------------------+--------------+----------------------------------+------------+----------|
| run_trial_41b3f5c7 | PENDING  |       | 104          | Choice_2: [64, _66c0 | 300          | 0.00969596                       | 45         | False    |
+--------------------+----------+-------+--------------+----------------------+--------------+----------------------------------+------------+----------+

Runtime environment:

Windows 11, VS Code, PowerShell
paddlepaddle-gpu 2.4.1
paddlets 1.0.2
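
For context, a minimal AutoTS call of the kind that produces logs like the one above might look like the sketch below; the estimator, chunk lengths, and the dataset name train_ts are assumptions, not details taken from this report.

from paddlets.automl.autots import AutoTS
from paddlets.models.forecasting import MLPRegressor

# Hypothetical reproduction: tune an MLPRegressor with a 96-step input window
# and a 2-step forecast horizon on a previously built TSDataset `train_ts`.
autots = AutoTS(MLPRegressor, 96, 2)
autots.fit(train_ts)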

Does PaddleTS support clustering of time series data?

Hello, I have a question: I have a collection of unlabeled time series and do not know how many categories they contain. Does PaddleTS provide a model that can cluster such data, and is there any example code?
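
One possible approach, offered as a sketch under assumptions rather than an official PaddleTS recipe: learn unsupervised representations with a representation model such as TS2Vec, pool each series into a single vector, and cluster the pooled vectors with scikit-learn. The constructor arguments, the list-of-datasets usage, and the number of clusters below are all illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans
from paddlets.models.representation import TS2Vec

# `ts_list` is assumed to be a list of unlabeled TSDataset objects
# (whether fit() accepts a list directly is an assumption here).
model = TS2Vec(segment_size=200, max_epochs=20)
model.fit(ts_list)

# Encode each series, average the per-timestamp representations into one
# vector per series, then cluster those pooled vectors.
series_vectors = np.stack([model.encode(ts).mean(axis=(0, 1)) for ts in ts_list])
labels = KMeans(n_clusters=3, n_init=10).fit_predict(series_vectors)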

When multiple series form the training data, does group_id support out-of-order (interleaved) input?

In the example below, if the ids are interleaved (alternating 0 and 1) rather than contiguous, an error is raised. Is this not supported?

import numpy as np
import pandas as pd

from paddlets import TSDataset

sample = pd.DataFrame(np.random.randn(200, 3), columns=['a', 'c', 'd'])
sample['id'] = pd.Series([0] * 80 + [1] * 120, name='id')

# Load a list of TSDatasets grouped by `group_id`
tsdatasets = TSDataset.load_from_dataframe(
    df=sample,
    group_id='id',
    target_cols='a',
    observed_cov_cols=['c', 'd'],
    # static_cov_cols='id'
)
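
One possible workaround, offered as an assumption rather than a confirmed answer from the maintainers: make each group's rows contiguous before loading, using a stable sort so the time order within each group is preserved.

# mergesort is stable, so the original row order within each id is kept.
sample_sorted = sample.sort_values('id', kind='mergesort').reset_index(drop=True)
tsdatasets = TSDataset.load_from_dataframe(
    df=sample_sorted,
    group_id='id',
    target_cols='a',
    observed_cov_cols=['c', 'd'],
)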

The covariates are set via observed_cov_cols, yet the error complains that known_cov is too short.

Exact error message: ValueError: known_cov length is too short to build known_cov chunk feature.
It needs at least 1 extra Timestamps after known_timeseries.time_index[6:]
Dataset construction:
from paddlets import TSDataset

custom_dataset = TSDataset.load_from_dataframe(
    result,                      # can also be a path to the CSV file
    time_col='time',
    target_cols=['A', 'B', 'C'],
    freq='1d',
    observed_cov_cols='D',
    fill_missing_dates=True,
    fillna_method='zero'         # max, min, avg, median, pre, back, zero
)
custom_dataset.plot()
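
For reference, the usual meaning of this error message (stated here as an assumption about the setup, not a confirmed diagnosis) is that known covariates must extend at least out_chunk_len timestamps past the last target timestamp, so the model can build the future chunk at prediction time. A minimal sketch of building such a dataset, assuming the TimeSeries/TSDataset constructor API and a hypothetical future_df that also covers the forecast horizon:

from paddlets import TSDataset, TimeSeries

# `result` holds the historical data; `future_df` is a hypothetical frame whose
# 'D' column continues at least out_chunk_len timestamps beyond the last target row.
target = TimeSeries.load_from_dataframe(
    result, time_col='time', value_cols=['A', 'B', 'C'], freq='1d')
known = TimeSeries.load_from_dataframe(
    future_df, time_col='time', value_cols='D', freq='1d')
dataset = TSDataset(target=target, known_cov=known)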

Some questions about the parameters of TSDataset.load_from_dataframe

I am new to time series. Following the official docs I loaded my own data (whose time intervals are not fixed), and the output is shown below. After reading the source code I realized I had misunderstood the freq parameter. I think the other parameters of pandas' asfreq() (for example method and fill_value) should also be exposed to users, so that beginners like me can avoid ending up with a dataset that is entirely NaN.
[screenshot: the loaded dataset is all NaN]
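
Until such options are exposed, one workaround (a sketch; the file name, column names, frequency, and fill strategy are illustrative assumptions) is to regularize the index with pandas first and only then hand the frame to load_from_dataframe:

import pandas as pd
from paddlets import TSDataset

df = pd.read_csv('my_data.csv', parse_dates=['time'])   # hypothetical file and columns
df = (df.set_index('time')
        .asfreq('1H', method='ffill')                    # or fill_value=0.0 instead of method
        .reset_index())
dataset = TSDataset.load_from_dataframe(
    df, time_col='time', target_cols='value', freq='1H')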

The first person to open an issue on PaddleTS, hahaha, what an honor

1. Long awaited: PaddlePaddle finally has a time series forecasting library.
2. I got into time series forecasting because of the AAAI 2021 Best Paper, the long-sequence forecasting model claimed to be more effective than Transformer. Seeing a title claiming to beat Transformer amazed me. When will PaddleTS reproduce that paper, or reproduce the strongest 2022 time series forecasting papers? If I remember correctly, the AAAI 2021 Best Paper already has a PyTorch implementation on GitHub; most papers do, since well over 80% of research code is written in PyTorch, and a Paddle reproduction could use it as a reference.
3. Among the future work you outline, what interests me most is probabilistic models; I hope the best papers in that area get reproduced soon. I have heard of people using probabilistic models to trade stocks and make a fortune, no idea whether it is true, hahaha.
4. Image classification has the huge ImageNet dataset; is there a comparably large dataset for time series forecasting? PaddleTS already provides many models. Which one do you think performs best? Which models suit short-horizon versus long-horizon forecasting, and small versus large datasets?
5. I learned about PaddleTS from a WeChat Moments post by the head of Paddle's vision team, which said that time series forecasting can be used for stock price prediction. Do you agree with that view? Wishing you big profits in the market.

A few suggestions

1. Support classification of target variables: first implement single-target classification for all models and algorithms, then multi-target classification where every target is an integer, and finally the mixed case where some targets are integers and some are floats.
2. Support single-machine multi-GPU training; distributed training would be even better, but multi-GPU on one machine matters most. Many people cannot afford machines and rent 8x 3090 Ti boxes for training. Once a model and its hyperparameters are fixed, paddlets may not need much GPU, but at the start of a project, when every model and many hyperparameter settings have to be tried, multi-GPU training is essential: if one model takes a day to train, sweeping all models plus hyperparameters takes a month.
3. Support C++ inference, meaning code written in VS2022 against the C++20 standard that runs correctly. CUDA and OpenCL support would also help and should significantly speed up training and inference. AMD GPUs are hardly in demand, so skip them.
4. Add more projects or cases on AI Studio.
5. Provide a large dataset.
6. I have read all of the paddlets documentation (not the code). My impression is that forecasts of the target's future rely mainly on the target's own history, with covariates playing only a supporting role. If the target's future value is determined purely by the covariates and has nothing to do with its past, for example y = 2*x where y's future value depends only on the covariate x, can paddlets still be used?
7. I have not read the model code, but I suspect the models in this field are not large, so I am not sure how useful PaddleSlim would be. In my experience, though, applying PaddleSlim to paddlets could still lower inference cost, for example moving inference from a 3090 Ti to a 2070. In image classification, many easy tasks reach 99% accuracy with a tiny network; training the smallest PaddleClas model and then pruning and quantizing it with PaddleSlim yields much faster inference on the same machine, or the same speed at far lower GPU/CPU cost. In short, PaddleSlim support for paddlets would still be useful, hahaha.

Installation fails on Apple macOS M1 (arm64)

PaddlePaddle itself officially supports Apple M1 chips, but PaddleTS does not install.
I see that PaddleTS pins its numpy dependency to 1.19.5, and numpy 1.19.5 has no macOS arm64 build.

Follow-up question about DeepAR with AutoTS

Following the changes discussed in issue #280 it now runs successfully, but when I plot the results, the truth values deviate from the original ground truth.
This is the plot produced by the default DeepAR model:
[screenshot: forecast plot from the default DeepAR model]
I then ran automatic hyperparameter tuning, took the best parameters, and plotted the result:
[screenshot: forecast plot from the AutoTS-tuned model]
Here you can see that the truth values, i.e. the ground truth, have changed and no longer match the original. What could be causing this?
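
If it helps to narrow this down, one way to compare the two runs against the same ground truth (a sketch; the variable names and the add_data/labels arguments follow the pattern in the paddlets plotting docs but are assumptions here) is to overlay both predictions on the original, unscaled dataset:

# `val_dataset`, `default_model` and `autots_best_model` are hypothetical names.
pred_default = default_model.predict(val_dataset)
pred_autots = autots_best_model.predict(val_dataset)
val_dataset.plot(add_data=[pred_default, pred_autots], labels=['default', 'autots'])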

Model training in anomaly_example fails with ModuleNotFoundError: No module named 'more_itertools'

Environment:

Name Version Build Channel

aiosignal 1.2.0 pypi_0 pypi
alembic 1.8.1 pypi_0 pypi
amqp 5.1.1 pypi_0 pypi
anyio 3.6.2 pypi_0 pypi
argon2-cffi 21.3.0 pypi_0 pypi
argon2-cffi-bindings 21.2.0 pypi_0 pypi
astor 0.8.1 pypi_0 pypi
asttokens 2.1.0 pypi_0 pypi
attrs 22.1.0 pypi_0 pypi
autopage 0.5.1 pypi_0 pypi
backcall 0.2.0 pypi_0 pypi
beautifulsoup4 4.11.1 pypi_0 pypi
billiard 3.6.4.0 pypi_0 pypi
bleach 5.0.1 pypi_0 pypi
ca-certificates 2022.07.19 haa95532_0 https://mirrors.aliyun.com/anaconda/pkgs/main
celery 5.2.7 pypi_0 pypi
certifi 2022.9.14 py38haa95532_0 https://mirrors.aliyun.com/anaconda/pkgs/main
cffi 1.15.1 pypi_0 pypi
charset-normalizer 2.1.1 pypi_0 pypi
chinese-calendar 1.8.0 pypi_0 pypi
click 8.0.4 pypi_0 pypi
click-didyoumean 0.3.0 pypi_0 pypi
click-plugins 1.1.1 pypi_0 pypi
click-repl 0.2.0 pypi_0 pypi
cliff 4.0.0 pypi_0 pypi
cloudpickle 2.2.0 pypi_0 pypi
cmaes 0.8.2 pypi_0 pypi
cmd2 2.4.2 pypi_0 pypi
colorama 0.4.5 pypi_0 pypi
colorlog 6.7.0 pypi_0 pypi
configspace 0.6.0 pypi_0 pypi
contourpy 1.0.5 pypi_0 pypi
cycler 0.11.0 pypi_0 pypi
cython 0.29.32 pypi_0 pypi
debugpy 1.6.3 pypi_0 pypi
decorator 5.1.1 pypi_0 pypi
defusedxml 0.7.1 pypi_0 pypi
distlib 0.3.6 pypi_0 pypi
entrypoints 0.4 pypi_0 pypi
executing 1.2.0 pypi_0 pypi
fastjsonschema 2.16.2 pypi_0 pypi
filelock 3.8.0 pypi_0 pypi
flaml 1.0.12 pypi_0 pypi
flask 2.2.2 pypi_0 pypi
flask-cors 3.0.10 pypi_0 pypi
fonttools 4.37.3 pypi_0 pypi
frozenlist 1.3.1 pypi_0 pypi
greenlet 1.1.3 pypi_0 pypi
grpcio 1.43.0 pypi_0 pypi
hpbandster 0.7.4 pypi_0 pypi
idna 3.4 pypi_0 pypi
importlib-metadata 4.12.0 pypi_0 pypi
importlib-resources 5.9.0 pypi_0 pypi
ipykernel 6.17.0 pypi_0 pypi
ipython 8.6.0 pypi_0 pypi
ipython-genutils 0.2.0 pypi_0 pypi
ipywidgets 8.0.2 pypi_0 pypi
itsdangerous 2.1.2 pypi_0 pypi
jedi 0.18.1 pypi_0 pypi
jinja2 3.1.2 pypi_0 pypi
joblib 1.2.0 pypi_0 pypi
jsonschema 4.16.0 pypi_0 pypi
jupyter 1.0.0 pypi_0 pypi
jupyter-client 7.4.4 pypi_0 pypi
jupyter-console 6.4.4 pypi_0 pypi
jupyter-core 4.11.2 pypi_0 pypi
jupyter-server 1.21.0 pypi_0 pypi
jupyterlab-pygments 0.2.2 pypi_0 pypi
jupyterlab-widgets 3.0.3 pypi_0 pypi
kiwisolver 1.4.4 pypi_0 pypi
kombu 5.2.4 pypi_0 pypi
lightgbm 3.3.2 pypi_0 pypi
llvmlite 0.39.1 pypi_0 pypi
lxml 4.9.1 pypi_0 pypi
mako 1.2.2 pypi_0 pypi
markupsafe 2.1.1 pypi_0 pypi
matplotlib 3.6.0 pypi_0 pypi
matplotlib-inline 0.1.6 pypi_0 pypi
mistune 2.0.4 pypi_0 pypi
more-itertools 8.14.0 pypi_0 pypi
mpmath 1.2.1 pypi_0 pypi
msgpack 1.0.4 pypi_0 pypi
nbclassic 0.4.8 pypi_0 pypi
nbclient 0.7.0 pypi_0 pypi
nbconvert 7.2.3 pypi_0 pypi
nbformat 5.7.0 pypi_0 pypi
nest-asyncio 1.5.6 pypi_0 pypi
netifaces 0.11.0 pypi_0 pypi
notebook 6.5.2 pypi_0 pypi
notebook-shim 0.2.2 pypi_0 pypi
numba 0.56.4 pypi_0 pypi
numpy 1.19.5 pypi_0 pypi
openssl 1.1.1q h2bbff1b_0 https://mirrors.aliyun.com/anaconda/pkgs/main
opt-einsum 3.3.0 pypi_0 pypi
optuna 3.0.3 pypi_0 pypi
packaging 21.3 pypi_0 pypi
paddle-bfloat 0.1.7 pypi_0 pypi
paddlepaddle-gpu 2.3.2 pypi_0 pypi
paddlets 1.1.0 pypi_0 pypi
pandas 1.3.5 pypi_0 pypi
pandocfilters 1.5.0 pypi_0 pypi
parso 0.8.3 pypi_0 pypi
patsy 0.5.2 pypi_0 pypi
pbr 5.10.0 pypi_0 pypi
pickleshare 0.7.5 pypi_0 pypi
pillow 9.2.0 pypi_0 pypi
pip 22.1.2 py38haa95532_0 https://mirrors.aliyun.com/anaconda/pkgs/main
pkgutil-resolve-name 1.3.10 pypi_0 pypi
platformdirs 2.5.2 pypi_0 pypi
prettytable 3.4.1 pypi_0 pypi
prometheus-client 0.15.0 pypi_0 pypi
prompt-toolkit 3.0.32 pypi_0 pypi
protobuf 3.20.0 pypi_0 pypi
psutil 5.9.3 pypi_0 pypi
pure-eval 0.2.2 pypi_0 pypi
pycparser 2.21 pypi_0 pypi
pygments 2.13.0 pypi_0 pypi
pyod 1.0.6 pypi_0 pypi
pyparsing 3.0.9 pypi_0 pypi
pyperclip 1.8.2 pypi_0 pypi
pyreadline3 3.4.1 pypi_0 pypi
pyro4 4.82 pypi_0 pypi
pyrsistent 0.18.1 pypi_0 pypi
python 3.8.13 h6244533_0 https://mirrors.aliyun.com/anaconda/pkgs/main
python-dateutil 2.8.2 pypi_0 pypi
python-docx 0.8.11 pypi_0 pypi
pytz 2022.2.1 pypi_0 pypi
pywavelets 1.3.0 pypi_0 pypi
pywin32 304 pypi_0 pypi
pywinpty 2.0.9 pypi_0 pypi
pyyaml 6.0 pypi_0 pypi
pyzmq 24.0.1 pypi_0 pypi
qtconsole 5.4.0 pypi_0 pypi
qtpy 2.2.1 pypi_0 pypi
ray 2.1.0 pypi_0 pypi
requests 2.28.1 pypi_0 pypi
scikit-learn 1.1.2 pypi_0 pypi
scipy 1.7.3 pypi_0 pypi
seaborn 0.12.1 pypi_0 pypi
send2trash 1.8.0 pypi_0 pypi
serpent 1.41 pypi_0 pypi
setuptools 63.4.1 py38haa95532_0 https://mirrors.aliyun.com/anaconda/pkgs/main
shap 0.41.0 pypi_0 pypi
six 1.16.0 pypi_0 pypi
slicer 0.0.7 pypi_0 pypi
sniffio 1.3.0 pypi_0 pypi
soupsieve 2.3.2.post1 pypi_0 pypi
sqlalchemy 1.4.41 pypi_0 pypi
sqlite 3.39.2 h2bbff1b_0 https://mirrors.aliyun.com/anaconda/pkgs/main
stack-data 0.6.0 pypi_0 pypi
statsmodels 0.12.2 pypi_0 pypi
stevedore 4.0.0 pypi_0 pypi
sympy 1.11.1 pypi_0 pypi
tabulate 0.8.10 pypi_0 pypi
tensorboardx 2.5.1 pypi_0 pypi
terminado 0.17.0 pypi_0 pypi
threadpoolctl 3.1.0 pypi_0 pypi
tinycss2 1.2.1 pypi_0 pypi
tornado 6.2 pypi_0 pypi
tqdm 4.64.1 pypi_0 pypi
traitlets 5.5.0 pypi_0 pypi
typing-extensions 4.3.0 pypi_0 pypi
urllib3 1.26.12 pypi_0 pypi
vc 14.2 h21ff451_1 https://mirrors.aliyun.com/anaconda/pkgs/main
vine 5.0.0 pypi_0 pypi
virtualenv 20.16.5 pypi_0 pypi
vs2015_runtime 14.27.29016 h5e58377_2 https://mirrors.aliyun.com/anaconda/pkgs/main
wcwidth 0.2.5 pypi_0 pypi
webencodings 0.5.1 pypi_0 pypi
websocket-client 1.4.2 pypi_0 pypi
werkzeug 2.2.2 pypi_0 pypi
wheel 0.37.1 pyhd3eb1b0_0 https://mirrors.aliyun.com/anaconda/pkgs/main
widgetsnbextension 4.0.3 pypi_0 pypi
wincertstore 0.2 py38haa95532_2 https://mirrors.aliyun.com/anaconda/pkgs/main
xgboost 1.6.2 pypi_0 pypi
zipp 3.8.1 pypi_0 pypi

Code that triggers the error:

import paddle
import numpy as np

# Fix the random seeds so the training results are reproducible
seed = 2022
paddle.seed(seed)
np.random.seed(seed)

# Build and train the model
from paddlets.models.anomaly import AutoEncoder

model = AutoEncoder(
    in_chunk_len=2,    # sliding-window size of the samples
    max_epochs=100     # cap training at 100 epochs
)
model.fit(train_data_scaled)

The error output is as follows:

ModuleNotFoundError                       Traceback (most recent call last)
Cell In [6], line 10
      7 np.random.seed(seed)
      9 # Build and train the model
---> 10 from paddlets.models.anomaly import AutoEncoder
     11 model = AutoEncoder(
     12     in_chunk_len=2,    # sliding-window size of the samples
     13     max_epochs=100     # cap training at 100 epochs
     14 )
     15 model.fit(train_data_scaled)

File E:\Paddle-release-2.2\PaddleTS\paddlets\models\anomaly\__init__.py:8
      1 # !/usr/bin/env python3
      2 # -*- coding:utf-8 -*-
      4 """
      5 paddlets anomaly.
      6 """
----> 8 from paddlets.models.anomaly.dl.autoencoder import AutoEncoder
      9 from paddlets.models.anomaly.dl.anomaly_transformer import AnomalyTransformer
     10 from paddlets.models.anomaly.dl.vae import VAE

File E:\Paddle-release-2.2\PaddleTS\paddlets\models\anomaly\dl\autoencoder.py:15
     13 from paddlets.models.anomaly.dl._ed.ed import MLP, CNN
     14 from paddlets.models.common.callbacks import Callback
---> 15 from paddlets.models.anomaly.dl import utils as U
     16 from paddlets.datasets import TSDataset
     17 from paddlets.logger import raise_if, raise_if_not

File E:\Paddle-release-2.2\PaddleTS\paddlets\models\anomaly\dl\utils.py:11
      9 import paddle
     10 import paddle.nn.functional as F
---> 11 import more_itertools as mit
     13 from paddlets.logger import raise_if_not, raise_if, raise_log, Logger
     14 from paddlets.datasets import TSDataset

ModuleNotFoundError: No module named 'more_itertools'
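
Since the traceback stops at `import more_itertools`, a likely fix (offered as an assumption, not a confirmed resolution) is to install the missing dependency into the environment the notebook kernel actually uses, e.g. `pip install more-itertools`, and re-run the example. Note that the package already appears in the conda listing above, so the kernel may be running in a different Python environment than the one the package was installed into.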
