Comments (1)
Hi, thanks for your question.
Recall that SMAP and MSL actually consist of multiple individual time-series (A-1, C-2, etc.), each of which has 24 (SMAP) or 54 (MSL) one-hot encoded features and 1 continuous feature. Each of these time-series is very short (typically 1-3k timesteps), so most implementations (including ours) concatenate all of them along the time axis, creating one large time-series.
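As a rough sketch of this concatenation step (the channel names, lengths, and random data here are purely illustrative, not the real SMAP values), in NumPy it might look like:

```python
import numpy as np

# Hypothetical example: three short SMAP-style channels, each with
# 25 features (1 continuous + 24 one-hot encoded features).
rng = np.random.default_rng(0)
channels = {"A-1": rng.normal(size=(1200, 25)),
            "A-2": rng.normal(size=(2500, 25)),
            "C-2": rng.normal(size=(1800, 25))}

# Concatenate along the time axis into one long series, remembering
# where each new channel starts so the boundaries can be handled later.
data = np.concatenate(list(channels.values()), axis=0)
lengths = [len(c) for c in channels.values()]
boundaries = np.cumsum(lengths)[:-1]  # timestamps where a new channel begins

print(data.shape)          # (5500, 25)
print(boundaries.tolist())  # [1200, 3700]
```

The recorded `boundaries` are exactly the timestamps where the concatenated series "jumps" between channels.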
As a result, the concatenated data will "jump" up and down whenever it transitions to a new channel. This has two effects on the model's forecasts and reconstructions:
- It is effectively impossible for the model to predict the correct values where the data transitions to a new channel, so the errors at these timestamps will be high. Because the errors are used to fit the threshold, we set the errors at these timestamps to zero so that they do not affect the thresholding.
- Different channels have different value ranges (min-max), and will therefore typically yield errors with different ranges. We therefore normalize the errors per channel before they are used to fit the threshold.
Both steps are performed in `adjust_anomaly_scores` and are only applied when the dataset is MSL or SMAP.
`labeled_anomalies.csv` is used to get the length of each channel of the test set, while `smap/msl_train_md.csv` serves the same purpose for the train set.
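The two adjustments described above can be sketched roughly as follows. This is an illustrative reimplementation under assumed inputs (a 1-D error array plus per-channel lengths); the function name, signature, and `window` parameter are hypothetical and not the repo's exact API:

```python
import numpy as np

def adjust_scores(errors, channel_lengths, window=1):
    """Hypothetical sketch of the two MSL/SMAP-specific adjustments:
    zero the errors at channel transitions, then min-max normalize the
    errors within each channel."""
    errors = errors.astype(float).copy()
    starts = np.cumsum([0] + list(channel_lengths))[:-1]

    # 1) Zero the first `window` error(s) of every channel after the
    #    first: the model cannot predict the jump to a new channel.
    for s in starts[1:]:
        errors[s:s + window] = 0.0

    # 2) Min-max normalize errors within each channel so channels with
    #    large value ranges do not dominate the threshold fitting.
    for s, n in zip(starts, channel_lengths):
        seg = errors[s:s + n]
        span = seg.max() - seg.min()
        if span > 0:
            errors[s:s + n] = (seg - seg.min()) / span
    return errors

# Toy demo: two channels of length 3 with very different error ranges.
scores = adjust_scores(np.array([1., 2., 3., 10., 11., 12.]), [3, 3])
print(scores)  # transition error zeroed, each channel scaled to [0, 1]
```

After the call, the error at the first timestamp of the second channel is zero, and each channel's remaining errors lie in [0, 1], so the threshold is fitted on comparable scores.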
from mtad-gat-pytorch.
Related Issues (20)
- Reason for using `find_epsilon` in feature-level
- Running repo on custom data
- Exception: Dataset ".\DATASETS\DATA\SMAP_TRAIN_MD.CSV" not available.
- ValueError: time data '<built-in function id>' does not match format '%d%m%Y_%H%M%S'
- FC layer out_dim not matching RECOn layer in_dim
- Loss and some slice of the output tensors become NAN
- About data cleaning
- Problems encountered in calculating anomaly scores on the SMAP dataset
- FileNotFoundError: [Errno 2] No such file or directory: 'datasets/ServerMachineDataset/processed\\machine-1-1_train.pkl'
- Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
- The reason why use shuffle in time-series data
- about gat_layer
- My understanding of mtad _ gat.py does not reflect the ' Multivariate Time-series ' in the title of the paper.
- Computational resource
- The issue with the dataset
- msl and smap dataset preprocess gives error
- Pot results on the SMD dataset
- Multiple inconsistent training results
- about the example output,please!!!
- Embedding vector dimension issue in the paper